
Pareidolia is the tendency to perceive meaningful patterns, such as faces, in random or ambiguous stimuli. A well-known example is a satellite photo of a mesa in Cydonia often called the Face on Mars; later imagery from other angles did not contain the illusion. The Rorschach inkblot test uses pareidolia in an attempt to gain insight into a person's mental state. In his notebooks, Leonardo da Vinci wrote of pareidolia as a device for painters: "if you look at any walls spotted with various stains or with a mixture of different kinds of stones, if you are about to invent some scene you will be able to see in it a resemblance to various different landscapes adorned with mountains, rivers, rocks, trees, plains, wide valleys, and various groups of hills". Publicity surrounding sightings of religious figures and other surprising images in ordinary objects has spawned a market for such items on online auctions like eBay. Various ancient European divination practices involve the interpretation of shadows cast by objects.

Semantic satiation is the psychological phenomenon in which the repetition of a word or phrase causes it to temporarily lose meaning for the listener. The phrase "semantic satiation" was coined by Leon Jakobovits James in his doctoral dissertation at McGill University, Montreal, Canada, awarded in 1962.[1] Prior to that, the expression "verbal satiation" had been used along with terms that express the idea of mental fatigue. The dissertation listed many of the names others had used for the phenomenon: "Many other names have been used for what appears to be essentially the same process: inhibition (Herbert, 1824, in Boring, 1950), refractory phase and mental fatigue (Dodge, 1917; 1926a), lapse of meaning (Bassett and Warne, 1919), work decrement (Robinson and Bills, 1926), cortical inhibition (Pavlov, 1927)". The explanation offered for the phenomenon was that verbal repetition repeatedly aroused a specific neural pattern in the cortex corresponding to the meaning of the word.

Political bias Bias towards a political side in supposedly objective information Political bias is a bias or perceived bias involving the slanting or altering of information to make a political position or political candidate seem more attractive. With a distinct association with media bias, it commonly refers to how a reporter, news organisation, or TV show covers a political candidate or a policy issue.[1] Bias emerges in a political context when individuals are unable or unwilling to understand a politically opposing point of view. Such bias in individuals may have its roots in their traits and thinking styles; it is unclear whether individuals at particular positions along the political spectrum are more biased than any other individuals.[2] Understanding political bias requires acknowledging that it violates expected political neutrality:[4] a lack of political neutrality is the result of political bias.[4]

Milgram experiment The Milgram experiment was a series of social psychology experiments on obedience to authority figures. The experimenter (E) orders the teacher (T), the subject of the experiment, to give what the latter believes are painful electric shocks to a learner (L), who is actually an actor and confederate. The subject believes that for each wrong answer the learner receives actual electric shocks, though in reality there are no such punishments; hidden from the subject, the confederate plays pre-recorded sounds for each shock level on a tape recorder integrated with the electro-shock generator.[1] The experiments began in July 1961, three months after the start of the trial of German Nazi war criminal Adolf Eichmann in Jerusalem. Milgram devised his psychological study to answer the popular question at that particular time: "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?"

Overconfidence effect Personal cognitive bias The overconfidence effect is a well-established bias in which a person's subjective confidence in their judgments is reliably greater than the objective accuracy of those judgments, especially when confidence is relatively high.[1][2] Overconfidence is one example of a miscalibration of subjective probabilities. Throughout the research literature, overconfidence has been defined in three distinct ways: (1) overestimation of one's actual performance; (2) overplacement of one's performance relative to others; and (3) overprecision in expressing unwarranted certainty in the accuracy of one's beliefs.[3][4] The most common way in which overconfidence has been studied is by asking people how confident they are of specific beliefs they hold or answers they provide. The data show that confidence systematically exceeds accuracy, implying people are more sure that they are correct than they deserve to be.
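
The calibration idea above lends itself to a small worked example: if overconfidence is the gap between how sure people say they are and how often they are right, it can be computed directly from (confidence, correctness) pairs. A minimal sketch in Python; the sample data is invented for illustration:

```python
# Minimal sketch of overconfidence as miscalibration: compare mean
# stated confidence against mean actual accuracy. Sample data invented.

answers = [
    # (stated confidence in [0, 1], was the answer correct?)
    (0.90, True),
    (0.95, False),
    (0.80, True),
    (0.99, False),
    (0.85, True),
]

mean_confidence = sum(conf for conf, _ in answers) / len(answers)
mean_accuracy = sum(correct for _, correct in answers) / len(answers)

# Positive values indicate overconfidence; zero is perfect calibration.
gap = mean_confidence - mean_accuracy
print(f"confidence {mean_confidence:.2f} vs accuracy {mean_accuracy:.2f}: "
      f"gap {gap:+.2f}")
```

Here the respondent reports roughly 90% confidence while getting only 60% of answers right, the pattern the research above describes.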

Stanford prison experiment The Stanford prison experiment (SPE) was a study of the psychological effects of becoming a prisoner or prison guard. The experiment was conducted at Stanford University from August 14–20, 1971, by a team of researchers led by psychology professor Philip Zimbardo.[1] It was funded by the US Office of Naval Research[2] and was of interest to both the US Navy and Marine Corps as an investigation into the causes of conflict between military guards and prisoners. Zimbardo and his team aimed to test the hypothesis that the inherent personality traits of prisoners and guards are the chief cause of abusive behavior in prison. The experiment was conducted in the basement of Jordan Hall (Stanford's psychology building). The researchers held an orientation session for guards the day before the experiment, during which they instructed them not to physically harm the prisoners. The prisoners were "arrested" at their homes and "charged" with armed robbery.

Filter bubble Intellectual isolation involving search engines The term filter bubble was coined by internet activist Eli Pariser circa 2010. A filter bubble or ideological frame is a state of intellectual isolation[1] that can result from personalized searches when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click-behavior, and search history.[2] As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles.[3] The choices made by these algorithms are only sometimes transparent.[4] Prime examples include Google Personalized Search results and Facebook's personalized news-stream. Technology such as social media “lets you go off with like-minded people, so you're not mixing and sharing and understanding other points of view ...”. Many people are unaware that filter bubbles even exist.
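
The selective-guessing mechanism described above can be illustrated with a toy ranking function. This is a hypothetical sketch, not the actual algorithm of any platform named above: it merely boosts items whose topic matches the user's click history, which is enough to show how content that disagrees with a user's viewpoint drops out of view.

```python
# Toy sketch of personalized filtering (hypothetical; not any real
# platform's algorithm). Items matching the user's click history are
# ranked higher, so disagreeable items gradually disappear from the feed.

from collections import Counter

def personalized_feed(items, click_history, top_n=3):
    """Rank items by how often the user previously clicked their topic."""
    topic_counts = Counter(click_history)
    return sorted(items, key=lambda item: topic_counts[item["topic"]],
                  reverse=True)[:top_n]

items = [
    {"title": "Article A", "topic": "politics-left"},
    {"title": "Article B", "topic": "politics-right"},
    {"title": "Article C", "topic": "politics-left"},
    {"title": "Article D", "topic": "sports"},
]

# A user who only ever clicked one political side never sees the other.
feed = personalized_feed(items, ["politics-left", "politics-left", "sports"])
print([item["title"] for item in feed])  # ['Article A', 'Article C', 'Article D']
```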

Sense of agency The "sense of agency" (SA) refers to the subjective awareness that one is initiating, executing, and controlling one's own volitional actions in the world.[1] It is the pre-reflective awareness or implicit sense that it is I who is executing bodily movement(s) or thinking thoughts. In normal, non-pathological experience, the SA is tightly integrated with one's "sense of ownership" (SO), which is the pre-reflective awareness or implicit sense that one is the owner of an action, movement or thought. If someone else were to move your arm (while you remained passive), you would certainly have sensed that it was your arm that moved, and thus a sense of ownership (SO) for that movement. Normally SA and SO are tightly integrated, such that while typing one has an enduring, embodied, and tacit sense that "my own fingers are doing the moving" (SO) and that "the typing movements are controlled (or volitionally directed) by me" (SA).

Dan Ariely on How and Why We Cheat - Farnam Street We all like to think of ourselves as honest, but there are inevitably certain situations in which we’re more likely to cheat. Many things make us less honest, like feeling disconnected from the consequences of our actions or having depleted willpower. Learning why we cheat can help us avoid incentivizing it. Three years ago, Dan Ariely, a psychology and behavioral economics professor at Duke, put out a book called The (Honest) Truth About Dishonesty: How We Lie to Everyone–Especially Ourselves. I read the book around the time it was released, and I recently revisited it to see how it held up to my initial impressions. It was even better. We’re Cheaters All Dan is both an astute researcher and a good writer; he knows how to get to the point, and his points matter. In The Honest Truth, Ariely doesn’t just explore where cheating comes from; he digs into which situations make us more likely to cheat than others. Cheating was standard, but only a little.

Proprioception The cerebellum is largely responsible for coordinating the unconscious aspects of proprioception. Proprioception (/ˌproʊpri.ɵˈsɛpʃən/ PRO-pree-o-SEP-shən), from Latin proprius, meaning "one's own", "individual" and perception, is the sense of the relative position of neighbouring parts of the body and strength of effort being employed in movement.[1] It is provided by proprioceptors in skeletal striated muscles and in joints. It is distinguished from exteroception, by which one perceives the outside world, and interoception, by which one perceives pain, hunger, etc., and the movement of internal organs. The brain integrates information from proprioception and from the vestibular system into its overall sense of body position, movement, and acceleration. The word kinesthesia or kinæsthesia (kinesthetic sense) has been used inconsistently to refer either to proprioception alone or to the brain's integration of proprioceptive and vestibular inputs.

Confirmation bias Tendency of people to favor information that confirms their beliefs or values Confirmation bias, also known as myside bias or confirmatory bias, is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's prior beliefs or values.[1] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias is a broad construct covering a number of explanations. A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Flawed decisions due to confirmation bias have been found in political, organizational, financial and scientific contexts.

Operant conditioning Operant conditioning, also known as instrumental learning, separates itself from classical conditioning because it is highly complex, integrating both positive and negative conditioning into its practices, whereas classical conditioning focuses on either positive or negative conditioning but not both together. Instrumental conditioning was first discovered and published by Jerzy Konorski, who also referred to it as Type II reflexes. Operant behavior operates on the environment and is maintained by its antecedents and consequences, while classical conditioning is maintained by the conditioning of reflexive (reflex) behaviors, which are elicited by antecedent conditions. Operant conditioning was first extensively studied by Edward L. Thorndike, whose law of effect it builds upon, and later most prominently by B.F. Skinner.
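
The operant loop, behavior strengthened or weakened by its consequences, maps naturally onto a simple value-learning update. The sketch below is hypothetical (the action names, reward values, and learning rate are invented for illustration, and it is not a model from the conditioning literature): an action followed by reinforcement becomes more likely to be selected again.

```python
# Hypothetical sketch of operant conditioning as value learning:
# an action followed by reinforcement becomes more likely to recur.
# Action names, rewards, and the learning rate are invented.

import random

values = {"press_lever": 0.0, "groom": 0.0}  # learned value of each action
LEARNING_RATE = 0.5

def consequence(action):
    # Supplied by the environment: pressing the lever delivers food
    # (positive reinforcement); grooming delivers nothing.
    return 1.0 if action == "press_lever" else 0.0

for _ in range(20):
    # Pick the action with the highest learned value (random tie-break).
    action = max(values, key=lambda a: (values[a], random.random()))
    # Strengthen or weaken the action according to its consequence.
    values[action] += LEARNING_RATE * (consequence(action) - values[action])

print(values)  # "press_lever" dominates once it has been reinforced
```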

Fear of missing out Feeling of worry about lost opportunities Fear of missing out (FOMO) is the feeling of apprehension that one is either not in the know about or missing out on information, events, experiences, or life decisions that could make one's life better.[2] FOMO is also associated with a fear of regret,[3] which may lead to concerns that one might miss an opportunity for social interaction, a novel experience, a memorable event, a profitable investment, or the comfort of those you love and who love you back.[4] It is characterized by a desire to stay continually connected with what others are doing,[2] and can be described as the fear that deciding not to participate is the wrong choice.[3][5] FOMO could result from not knowing about a conversation,[6] missing a TV show, not attending a wedding or party,[7] or hearing that others have discovered a new restaurant.[8] In recent years, FOMO has been attributed to a number of negative psychological and behavioral symptoms.[3][9][10]

Macdonald triad The Macdonald triad (also known as the triad of sociopathy or the homicidal triad) is a set of three behavioral characteristics whose presence together, whether all three or any combination of two, has been suggested to be predictive of or associated with later violent tendencies, particularly with relation to serial offenses. The triad was first proposed by psychiatrist J.M. Macdonald in "The Threat to Kill", a 1963 paper in the American Journal of Psychiatry,[1] and was followed by small-scale studies conducted by psychiatrists Daniel Hellman and Nathan Blackman and then by FBI agents John E. Douglas and Robert K. Ressler. The triad links cruelty to animals, obsession with fire setting, and persistent bedwetting past a certain age to violent behaviors, particularly homicidal behavior and sexually predatory behavior.[5] Other studies claim to have found no statistically significant links between the triad and violent offenders.

The algorithms that detect hate speech online are biased against black people Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms using natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online. But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias, in large part because what is considered offensive depends on social context. Both papers, presented at a recent prestigious annual conference for computational linguistics, show how natural language processing AI, which is often proposed as a tool to objectively identify offensive language, can amplify the same biases that human beings have.
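
The amplification mechanism the studies describe can be sketched with a toy keyword classifier. Everything below is hypothetical (the tokens, labels, and threshold are invented, and neither paper's model is reproduced): the point is only that when annotators over-flag a harmless dialect marker, a model trained on those labels learns to flag the dialect itself.

```python
# Toy sketch of annotation bias propagating into a trained filter
# (hypothetical tokens and labels; not the models from the studies).

from collections import defaultdict

# Training data in which a harmless dialect marker ("finna") was
# disproportionately labeled offensive by human annotators.
training = [
    ("finna go out", 1),      # mislabeled as offensive
    ("finna watch tv", 1),    # mislabeled as offensive
    ("i hate you idiot", 1),  # genuinely offensive
    ("going out now", 0),
    ("watching tv now", 0),
]

# "Train": count how often each word co-occurs with the offensive label.
counts = defaultdict(lambda: [0, 0])  # word -> [total, offensive]
for text, label in training:
    for word in text.split():
        counts[word][0] += 1
        counts[word][1] += label

def flag(text, threshold=0.7):
    """Flag text if any known word mostly appeared under offensive labels."""
    return any(counts[w][1] / counts[w][0] >= threshold
               for w in text.split() if w in counts)

print(flag("finna eat"))   # True: harmless dialect speech gets flagged
print(flag("going home"))  # False
```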
