
Confirmation bias
Confirmation bias (also confirmatory bias, myside bias,[a] or congeniality bias[2]) is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values.[3] People display this bias when they select information that supports their views while ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Biased search for information, biased interpretation of this information, and biased memory recall have been invoked to explain four specific effects: attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series), and illusory correlation (when people falsely perceive an association between two events or situations). A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Flawed decisions due to confirmation bias have been found in a wide range of political, organizational, financial and scientific contexts.

Selection bias
Selection bias is a statistical bias in which there is an error in choosing the individuals or groups to take part in a scientific study.[1] It is sometimes referred to as the selection effect. The phrase "selection bias" most often refers to the distortion of a statistical analysis resulting from the method of collecting samples. If the selection bias is not taken into account, some conclusions of the study may not be accurate. There are many types of possible selection bias, including sampling bias, time-interval bias, exposure bias, data bias, attrition, and observer selection. A distinction of sampling bias (albeit not a universally accepted one) is that it undermines the external validity of a test (the ability of its results to be generalized to the rest of the population), while selection bias mainly addresses internal validity for differences or similarities found in the sample at hand.
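To make the distortion concrete, here is a minimal simulation sketch; the scenario and numbers are invented for illustration, not taken from the article. We want the population's average daily screen time, but because the inclusion probability depends on the very quantity being measured, the sample mean comes out inflated.

```python
import random

# A sketch of selection bias (scenario and numbers invented for
# illustration): we want the population's average daily screen time,
# but recruit via an online ad, so heavy users enroll more often.
random.seed(42)

# True population: screen time ~ Normal(mean=3.0 h, sd=1.0 h).
population = [random.gauss(3.0, 1.0) for _ in range(100_000)]

# Biased selection: the chance of joining the study grows with
# screen time (clipped to [0, 1]).
sample = [h for h in population
          if random.random() < min(1.0, max(0.0, h / 6.0))]

true_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(f"population mean: {true_mean:.2f} h")       # ~3.00
print(f"biased sample mean: {sample_mean:.2f} h")  # ~3.33, inflated
```

Note that collecting more respondents does not help: the bias sits in the collection method itself, so only a selection-aware design removes it.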

Synchronicity
Synchronicity is the occurrence of two or more events that appear to be meaningfully related but not causally related. Synchronicity holds that such events are "meaningful coincidences". The concept was first defined by Carl Jung, a Swiss psychiatrist, who introduced it as early as the 1920s but gave a full statement of it only in a 1951 Eranos lecture.[1][3] During his career, Jung furnished several slightly different definitions,[2] variously describing synchronicity as an "acausal connecting (togetherness) principle", "meaningful coincidence", and "acausal parallelism". In 1952, he published the paper "Synchronizität als ein Prinzip akausaler Zusammenhänge" (Synchronicity – An Acausal Connecting Principle)[4] in a volume that also contained a related study by the physicist and Nobel laureate Wolfgang Pauli,[5] and he later elaborated the concept in his book Synchronicity: An Acausal Connecting Principle.[6]

Argumentum ad populum
In argumentation theory, an argumentum ad populum (Latin for "argument to the people") is a fallacious argument that concludes that a proposition must be true because many or most people believe it, often concisely encapsulated as: "If many believe so, it is so." This type of argument is known by several names,[1] including appeal to the masses, appeal to belief, appeal to the majority, appeal to democracy, appeal to popularity, argument by consensus, consensus fallacy, authority of the many, bandwagon fallacy, vox populi,[2] and in Latin as argumentum ad numerum ("appeal to the number"), fickle crowd syndrome, and consensus gentium ("agreement of the clans"). It is also the basis of a number of social phenomena, including communal reinforcement and the bandwagon effect. The Chinese proverb "three men make a tiger" concerns the same idea. For example, one could claim that smoking is a healthy pastime, since millions of people do it; the number of smokers, of course, says nothing about whether smoking is healthy.

Dan Ariely on How and Why We Cheat - Farnam Street
We all like to think of ourselves as honest, but there are inevitably certain situations in which we're more likely to cheat. Many things make us less honest, such as feeling disconnected from the consequences or having our willpower depleted. Learning why we cheat can help us avoid incentivizing it. Three years ago, Dan Ariely, a psychology and behavioral economics professor at Duke, put out a book called The (Honest) Truth About Dishonesty: How We Lie to Everyone, Especially Ourselves. I first read the book around the time it was released, and I recently revisited it to see how it held up to my initial impressions. It was even better.
We're Cheaters All
Dan is both an astute researcher and a good writer; he knows how to get to the point, and his points matter. In The Honest Truth, Ariely doesn't just explore where cheating comes from; he digs into which situations make us more likely to cheat than others. Cheating, it turns out, was standard, but only by a little.

Illusory correlation
The term "illusory correlation" was originally coined by Chapman and Chapman (1967) to describe people's tendency to overestimate relationships between two groups when distinctive and unusual information is presented.[5][6] The concept was used to question claims about objective knowledge in clinical psychology through the Chapmans' refutation of many clinicians' widely used Wheeler signs for homosexuality in Rorschach tests.[7] David Hamilton and Robert Gifford (1976) conducted a series of experiments demonstrating how stereotypic beliefs about minorities could derive from illusory correlation processes.[8] To test their hypothesis, Hamilton and Gifford had research participants read a series of sentences describing either desirable or undesirable behaviors, attributed to either Group A or Group B.[5] Abstract groups were used so that no previously established stereotypes would influence the results. A sketch of the design appears below.
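A minimal sketch of the design in Python; the statement counts used here are the ones commonly cited for this study, but treat them as illustrative rather than canonical. The point is that Group A contributes twice as many statements as Group B while the desirable-to-undesirable ratio is identical, so group membership and behavior are statistically unrelated.

```python
# Sketch of the Hamilton-Gifford design (statement counts are the
# commonly cited ones; treat them as illustrative). Group A appears
# twice as often as Group B, but the desirable:undesirable ratio is
# identical, so group and behavior are statistically unrelated.
counts = {
    ("A", "desirable"): 18, ("A", "undesirable"): 8,
    ("B", "desirable"): 9,  ("B", "undesirable"): 4,
}

for group in ("A", "B"):
    good = counts[(group, "desirable")]
    bad = counts[(group, "undesirable")]
    print(f"Group {group}: {good / (good + bad):.0%} desirable")
# Both lines print 69%: there is no real correlation. Observers
# nevertheless rated the smaller Group B more negatively, because the
# rare pairing (minority group + undesirable act) is distinctive and
# disproportionately remembered.
```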

Juggling by numbers: How notation revealed new tricks
19 December 2012, BBC News, by Laura Gray
Juggling is usually associated with brightly coloured balls and clowning around, but it has more connections than you might think with the world of numbers. Colin Wright is a mathematician who in the 1980s helped develop a notation system for juggling while at Cambridge University. He was frustrated that there was no way to write down juggling moves: "There was a juggling move called Mills Mess and when I tried to write it down I couldn't." The system he helped devise became known as Siteswap. Sequences of numbers encode the number of beats of each throw, which is related to its height and the hand to which the throw is made. The higher the ball is thrown, the bigger the number, so throwing a four means you are throwing the ball higher than a two. The numbers are then written into sequences that denote particular juggling moves.
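The beat-based encoding supports a simple validity check; the rule below is the standard Siteswap property, stated here as an assumption rather than something spelled out in the article. A throw of height t made on beat i lands on beat (i + t) mod n, a pattern is juggleable only if no two throws land on the same beat, and the number of balls needed equals the average throw height.

```python
# Sketch of the standard Siteswap validity rule (an assumption here,
# not spelled out in the article): throw i of height t lands on beat
# (i + t) mod n; a pattern is juggleable only if no two throws land
# on the same beat, and the ball count is the average throw height.
def is_valid_siteswap(throws: list[int]) -> bool:
    n = len(throws)
    landings = {(i + t) % n for i, t in enumerate(throws)}
    return len(landings) == n  # all landing beats distinct

def ball_count(throws: list[int]) -> int:
    return sum(throws) // len(throws)

for pattern in ([3, 3, 3], [5, 3, 1], [4, 3, 2]):
    if is_valid_siteswap(pattern):
        print(pattern, "valid,", ball_count(pattern), "balls")
    else:
        print(pattern, "invalid: two balls land on the same beat")
# [3,3,3] is the basic 3-ball cascade and [5,3,1] a valid 3-ball
# trick, while in [4,3,2] the first two throws collide on one beat.
```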

Anchoring
Anchoring or focalism is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor. For example, the initial price offered for a used car sets the standard for the rest of the negotiations, so that prices lower than the initial price seem more reasonable even if they are still higher than what the car is really worth. The focusing effect (or focusing illusion) is a related cognitive bias that occurs when people place too much importance on one aspect of an event, causing an error in accurately predicting the utility of a future outcome.[1] The anchoring-and-adjustment heuristic was demonstrated by Tversky and Kahneman: participants given five seconds to estimate the product 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1, or reversed as 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8, anchored on the first few factors; median estimates were about 2,250 for the descending sequence but only about 512 for the ascending one, both far below the true product of 40,320.

Filter bubble
The term filter bubble was coined by internet activist Eli Pariser circa 2010. A filter bubble or ideological frame is a state of intellectual isolation[1] that can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior, and search history.[2] As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.[3] The choices made by these algorithms are only sometimes transparent.[4] Prime examples include Google Personalized Search results and Facebook's personalized news stream. Technology such as social media "lets you go off with like-minded people, so you're not mixing and sharing and understanding other points of view ..." Many people are unaware that filter bubbles even exist. A toy sketch of the underlying feedback loop follows.
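Here is a toy sketch of the personalization loop described above; all titles, topics, and function names are hypothetical, and real systems are vastly more sophisticated. Items are ranked by overlap with the user's click history, so content that disagrees with past clicks steadily sinks.

```python
# Toy sketch of the personalization loop behind a filter bubble (all
# titles, topics, and names are hypothetical): rank items by overlap
# with the user's click history, so disagreeing content sinks.
from collections import Counter

def personalized_ranking(items, click_history):
    """Score each item by how often its topics appear in past clicks."""
    interest = Counter(t for clicked in click_history for t in clicked["topics"])
    return sorted(items, key=lambda item: -sum(interest[t] for t in item["topics"]))

articles = [
    {"title": "Tax cuts work", "topics": ["politics-right"]},
    {"title": "Raise the minimum wage", "topics": ["politics-left"]},
    {"title": "Weekend weather outlook", "topics": ["weather"]},
]
history = [{"topics": ["politics-right"]}, {"topics": ["politics-right"]}]

for article in personalized_ranking(articles, history):
    print(article["title"])
# "Tax cuts work" ranks first; each click on it strengthens the
# pattern, and the opposing article drifts further out of view.
```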
