What Statistics Can and Can’t Tell Us About Ourselves

Harold Eddleston, a seventy-seven-year-old from Greater Manchester, was still reeling from a cancer diagnosis he had been given that week when, on a Saturday morning in February, 1998, he received the worst possible news. He would have to face the future alone: his beloved wife had died unexpectedly, from a heart attack. Eddleston’s daughter, concerned for his health, called their family doctor, a well-respected local man named Harold Shipman. He came to the house, sat with her father, held his hand, and spoke to him tenderly.

A Revolution Is Happening in Psychology. Here’s How It’s Playing Out.

At the turn of the millennium, behavioral scientists sketched a picture of the mind as capricious—and often quite malleable.
Exposure to words associated with the elderly could make people walk more slowly, NYU researchers reported in the 1990s. More recently, a Harvard psychologist rose to prominence by arguing that so-called power poses could elevate a person’s propensity to take risks. Washing one’s hands, a U.K. lab posited, could affect moral judgment by reducing feelings of disgust.

Rainbow of open science practices.

In defense of the replication movement.

Basic concepts: the norms of science. – Adventures in Ethics and Science.

Since much of what I write about the responsible conduct of research takes them for granted, it’s time that I wrote a basic concepts post explaining the norms of science famously described by sociologist Robert K. Merton in 1942. Before diving in, here’s Merton’s description: “The ethos of science is that affectively toned complex of values and norms which is held to be binding on the man of science. The norms are expressed in the form of prescriptions, proscriptions, preferences, and permissions.”

Why the Journal of Personality and Social Psychology Should Retract Article DOI: 10.1037/a0021524, “Feeling the Future: Experimental evidence for anomalous retroactive influences on cognition and affect” by Daryl J. Bem.

Added January 30, 2018: A formal letter to the editor of JPSP, calling for a retraction of the article (Letter).
“I’m all for rigor, but I prefer other people do it. I see its importance—it’s fun for some people—but I don’t have the patience for it. If you looked at all my past experiments, they were always rhetorical devices. I gathered data to show how my point would be made.”

The backfire effect is elusive.

The backfire effect is when correcting misinformation hardens, rather than corrects, someone’s mistaken belief.
New review prompts a re-think on what low sugar levels do to our thinking – Research Digest.

Glucose. Fuel for our cells, vital for life. But how fundamental is it to how we think? According to dual-systems theory (best known from Nobel laureate Daniel Kahneman’s work), low blood glucose favours the use of fast and dirty System One thinking over the deliberative, effortful System Two.

Why We Need a Statistical Revolution.

My father told me the most important thing about solving a problem is to formulate it accurately, and one would think that, as statisticians, most of us would agree with that advice.
Suppose we were to build a spaceship that can fly to Mars and return safely to Earth. It would be folly indeed to make simplifying assumptions in its construction that science tells us are false, such as assuming that during take-off the temperature of materials would not exceed a certain critical level, or that friction is constant in the atmosphere. Such assumptions could spell death for the astronauts and failure for their mission. And yet, that is what many statisticians often do, sometimes referring to the great 20th-century English statistician George E.P. Box’s belief that “essentially, all models are wrong, but some are useful” (Empirical Model-Building and Response Surfaces, 1987).

Do We Need An Adoption Service for Orphan Data? - Neuroskeptic.

Having recently left an academic post, I’ve been thinking about what will happen to the data that I collected during my previous role that remains unpublished.
Will it, like so much data, end up stuck in the limbo of the proverbial ‘file drawer’? The ‘file drawer problem’ is generally understood to mean “the bias introduced into the scientific literature by selective publication – chiefly by a tendency to publish positive results but not to publish negative or nonconfirmatory results.” However, while selective publication based on results is a big problem, even positive data can end up unpublished.
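The selective-publication bias described above is easy to demonstrate numerically. The following is a minimal, illustrative simulation (not drawn from any of the linked pieces; the true effect of 0.2, the sample size of 20 per group, and the crude z-test are arbitrary assumptions) showing how a literature that only "publishes" statistically significant results will overestimate a small true effect:

```python
import random
import statistics

random.seed(1)

def simulate_study(true_effect=0.2, n=20):
    """Simulate one two-group study; return (effect estimate, significant?)."""
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0.0, 1) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.stdev(treatment) ** 2 / n
          + statistics.stdev(control) ** 2 / n) ** 0.5
    significant = abs(diff / se) > 1.96  # rough two-sided test at alpha = .05
    return diff, significant

studies = [simulate_study() for _ in range(10_000)]
all_effects = [d for d, _ in studies]
published = [d for d, sig in studies if sig]  # file drawer swallows the rest

print(f"mean effect across all studies:      {statistics.mean(all_effects):+.2f}")
print(f"mean effect in 'published' studies:  {statistics.mean(published):+.2f}")
```

Because only estimates large enough to clear the significance threshold survive, the "published" mean lands well above the true effect of 0.2, which is the mechanism behind the file-drawer bias.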
Stupid solutions to real problems in science – The 100% CI.

This post is a joint effort from all of us at the 100% CI. There is a subreddit for crazy ideas. The person running it either isn’t particularly crazy or maybe was out of crazy ideas when they named it /r/CrazyIdeas. Some of the ideas posted there are pretty innovative, most are mainly silly, and others sound like excellent ideas.

Stereothreat.

Rewarding negative results keeps science on track.

Two research prizes signal a shifting culture.
One, announced earlier this month by the European College of Neuropsychopharmacology, offers a €10,000 (US$11,800) award for negative results in preclinical neuroscience: careful experiments that do not confirm an accepted hypothesis or previous result. The other, from the international Organization for Human Brain Mapping, is entering its second year.

Preliminary 2017 Replicability Rankings of 104 Psychology Journals.

The table shows the preliminary 2017 rankings of 104 psychology journals. A description of the methodology and analyses by discipline and time are reported below the table.

‘Before you know it’ by John A. Bargh: A quantitative book review.

November 28, Open Draft/Preprint (Version 1.0) [Please provide comments and suggestions]. In this blog post I present a quantitative review of John A. Bargh’s book “Before you know it: The unconscious reasons we do what we do.” A quantitative book review is different from a traditional book review.
The goal of a quantitative review is to examine the strength of the scientific evidence that is provided to support ideas in the book. Readers of a popular science book written by an eminent scientist expect that these ideas are based on solid scientific evidence.

Introductory psychology textbooks accused of spreading myths and liberal-leaning bias.

By Christian Jarrett. Is the job of introductory psychology textbooks to present students with a favourable and neat impression of psychology, or to give them a warts-and-all account of the field? This is a key question raised by a new analysis of the treatment of controversial theories and recognised myths by 24 best-selling US introductory psychology texts. Writing in Current Psychology, Christopher Ferguson at Stetson University and his colleagues at Texas A&M International University conclude that intro textbooks often have difficulty covering controversial topics with care, and that, whether intentionally or not, they are frequently presenting students with a liberal-leaning, over-simplified perspective, as well as propagating or failing to challenge myths and urban legends.
While acknowledging the daunting task that confronts textbook authors, Ferguson and his colleagues call on them to aspire to tell the full story.

Here’s How A Controversial Study About Kids And Cookies Turned Out To Be Wrong — And Wrong Again.

Retract, replace, retract: Beleaguered food researcher pulls article from JAMA journal (again).
A high-profile food researcher who’s faced heavy criticism about his work has retracted the revised version of an article he’d already retracted last month. Yes, you read that right: Brian Wansink at Cornell University retracted the original article from JAMA Pediatrics in September, replacing it with a revised version. Now he’s retracting the revised version, citing a major error: the study, which reported children were more likely to choose an apple over a cookie if the apple included an Elmo sticker, was conducted in children 3-5 years old, not 8-11, as the study reported.
Although Wansink told BuzzFeed he asked the journal to retract the paper, Annette Flanagin, Executive Managing Editor for The JAMA Network, told us the editors requested the retraction.

Is the staggeringly profitable business of scientific publishing bad for science?

In 2011, Claudio Aspesi, a senior investment analyst at Bernstein Research in London, made a bet that the dominant firm in one of the most lucrative industries in the world was headed for a crash. Reed-Elsevier, a multinational publishing giant with annual revenues exceeding £6bn, was an investor’s darling.
It was one of the few publishers that had successfully managed the transition to the internet, and a recent company report was predicting yet another year of growth. Aspesi, though, had reason to believe that that prediction – along with those of every other major financial analyst – was wrong. The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results.
Despite the narrow audience, scientific publishing is a remarkably big business.

Criticizing a scientist’s work isn’t bullying.

When teaching research methods to first-year college students, I used to tell them that scientists try to prove themselves wrong.

Did power-posing guru Amy Cuddy deserve her public shaming?

The first big reveal in this week’s New York Times Magazine feature, “When the Revolution Came for Amy Cuddy,” is that the superstar of social psychology and inventor of the “power pose” has been reduced to hunching.

1,500 scientists lift the lid on reproducibility.

More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.
Emails Show How An Ivy League Prof Tried To Do Damage Control For His Bogus Food Science.

The Voodoo Curse of Circular fMRI Analysis - Neuroskeptic.

Pay up or retract? Survey creator’s demands for money rile some health researchers.

Last June, a health care researcher in the United States was clearing out her email when she came across a message that looked like spam.
You can’t play 20 questions with nature and win.

“You can’t play 20 questions with nature and win” is the title of Allen Newell‘s 1973 paper, a classic in cognitive science. In the paper he confesses that although he sees many excellent psychology experiments, all making undeniable scientific contributions, he can’t imagine them cohering into progress for the field as a whole. He describes the state of psychology as focussed on individual phenomena – mental rotation, chunking in memory, subitizing, etc. – studied in a way to resolve binary questions: issues such as nature vs. nurture, conscious vs. unconscious, serial vs. parallel processing.

Firm Foundations.

What Are the Most-Replicated Studies in Psychological Science?

Amid ongoing efforts to improve the reproducibility of psychological science, it’s easy to lose sight of the findings that remain sturdy decades after they were first reported.
NeuroChambers: Open-ended, Open Science.

Chris has kindly allowed me to crash his blog, to publicise and to gather ideas and opinions for a new article type at Cortex.

The Rules of Replication: Part II.

Do replication studies need special rules? In my previous post I focused on the question of whether replicators need to work with original authors when conducting their replication studies.

Rolf Zwaan: Concurrent Replication.

I’m working on a paper with Alex Etz, Rich Lucas, and Brent Donnellan. We had to cut 2,000 words and the text below is one of the darlings we killed.

“Mindless Eating,” or how to send an entire life of research into question.
Vazire Syllabus replicability syllabus.docx.
Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency.
Crystal Prison Zone: Curiously Strong effects.
FlexibleMeasures.com - Flexibility in Methods & Measures of Social Science.
Research Statement April 2017.
NC3Rs EDA.
StudySwap: A platform for interlab replication, collaboration, and research resource exchange.
Persuasive mark: Openness initiative for reviewers - an editorial discussion.
Nick Brown’s blog: A different set of problems in an article from the Cornell Food and Brand Lab.
“I placed too much faith in underpowered studies”: Nobel Prize winner admits mistakes.
Rolf Zwaan: Replicating Effects by Duplicating Data.
Inside the debate about power posing: a Q&A with Amy Cuddy.
How trustworthy is the data that psychologists collect online? – Research Digest.
Two Manifestos for Better Science - Neuroskeptic.
So apparently this is why we have positive psychology but not evidence-based psychological treatment.
Still Not Significant (with images, tweets) · anniebruton.
As A Major Retraction Shows, We’re All Vulnerable To Faked Data.
Doubts About Study of Gay Canvassers Rattle the Field.
LaCour Made Up His Biggest Funding Source.
Study Using Gay Canvassers Erred in Methods, Not Results, Author Says.
Psychology’s Racism-Measuring Tool Isn’t Up to the Job.
Real Data Are Messy - Neuroskeptic.
Can Psychology Be an Empirical Science? - Neuroskeptic.
Social Priming: Money for Nothing? - Neuroskeptic.
Don’t call it a comeback.
Three Popular Psychology Studies That Didn’t Hold Up.
Philip Zimbardo has a theory.
Andrew Przybylski/Phil Zimbardo debate video game effects.
Terry Burnham: A Trick For Higher SAT scores? Unfortunately no.
A puzzle about the latest test ban (or ‘don’t ask, don’t tell’).
Surrogate Science.
Rolf Zwaan: When Replicating Stapel is not an Exercise in Futility.
Why We May Never Beat Stigma.
No-way Interactions.
Sometimes i’m wrong: this is what p-hacking looks like.
Words and sorcery.
Fake Data: Mendel vs. Stapel.
Linguistic Traces of a Scientific Fraud: The Case of Diederik Stapel.
Did a five-day camp without digital devices really boost children’s interpersonal skills?
Curtain up on second act for Dutch fraudster Stapel: College teacher.
Misjudgements will drive social trials underground.
Trendspotter: The brain-scan job interview.
Fake-Data Colada.
Scientific method: Statistical errors.
Psychology Today apparently retracts Kanazawa piece on why black women are “rated less physically attractive”.
The British amateur who debunked the mathematics of happiness.
Is social psychology really in crisis?
Journal Impact Factors.
Research finds ‘US effect’ exaggerates results in human behaviour studies.
Three Cheers for Failure!
Most brain imaging papers fail to provide enough methodological detail to allow replication.
Nobel laureate challenges psychologists to clean up their act.
Crowdsourcing the psych lab.
Sort of Significant: Are Psychology Papers Just Nipping Past the p Value?
Neuroleadership – lots of old-fashioned psychology, very little neuroscience.
Report: Dutch ‘Lord of the Data’ Forged Dozens of Studies.
University withdraws articles – News – Erasmus Universiteit Rotterdam.
Fraud Detection Method Called Credible But Used Like an ‘Instrument of Medieval Torture’.
The data detective.
Uncertainty shrouds psychologist’s resignation.
Psychological Statistics: What’s up with social psychology?
Simonsohn’s Fraud Detection Technique Revealed » Random Assignment.
Psychology is Science » Random Assignment.
Replication studies: Bad copy.
An editorial board discusses fMRI analysis and “false-positive psychology” « The Hardest Science.
Despite Occasional Scandals, Science Can Police Itself - Commentary.
As Dutch Research Scandal Unfolds, Social Psychologists Question Themselves - Research.