
Statistics fun, statistical literacy & science literacy


Saturday Morning Breakfast Cereal. Tax credit claimants, nationalities and 'non-UK families' – the data. More than 7% of all couples in the UK comprise one UK national and one non-UK national, according to analysis compiled by the Office for National Statistics (ONS) for the Guardian.

Tax credit claimants, nationalities and 'non-UK families' – the data

But when any such couples claim tax credits, they could be considered migrant families by the British government. HMRC, which collects and supplies the government with data on tax credits, defines non-UK families as ones “where at least one adult is a migrant in the family”. There are more than 1.1 million couples in the UK where one partner is a British national and the other a foreign national. ‘Nobody’s ever asked this before’ and other research question misconceptions.

[41] Falsely Reassuring: Analyses of ALL p-values. It is a neat idea.

[41] Falsely Reassuring: Analyses of ALL p-values

Get a ton of papers. Extract all p-values. Examine the prevalence of p-hacking by assessing whether there are too many p-values near p=.05. Economists have done it [SSRN], as have psychologists [.html], and biologists [.html]. [39] Power Naps: When do within-subject comparisons help vs hurt (yes, hurt) power? A recent Science paper (.pdf) used a total sample size of N=40 to arrive at the conclusion that implicit racial and gender stereotypes can be reduced while napping.
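A minimal sketch of what such an analysis can look like, assuming a simple caliper test: count the p-values just below .05 and just above it, and ask whether the split is more lopsided than chance. The p-values and the .045–.055 window are made up for illustration; this is not the exact procedure of any of the papers linked above.

# Caliper test sketch: are there "too many" p-values just below .05?
# The p-values below are invented; the cited papers extract them from published articles.
from scipy.stats import binomtest

p_values = [0.003, 0.012, 0.031, 0.046, 0.048, 0.049, 0.051, 0.062, 0.21, 0.44]

just_below = sum(1 for p in p_values if 0.045 <= p < 0.050)
just_above = sum(1 for p in p_values if 0.050 <= p < 0.055)

# Absent p-hacking, p-values in this narrow window should fall roughly evenly
# on either side of .05; a marked excess just below .05 is the telltale sign.
result = binomtest(just_below, just_below + just_above, p=0.5, alternative="greater")
print(f"{just_below} just below .05 vs {just_above} just above; one-sided p = {result.pvalue:.3f}")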

[39] Power Naps: When do within-subject comparisons help vs hurt (yes, hurt) power?

N=40 is a small sample for a between-subject experiment. One needs N=92 to reliably detect that men are heavier than women (SSRN). The study, however, was within-subject: for instance, its dependent variable, the Implicit Association Test (IAT), was contrasted within-participant before and after napping. Top 10 ways to save science from its statistical self. Second of two parts (read part 1) Statistics is to science as steroids are to baseball.
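To make the arithmetic concrete, here is a hedged sketch of the power calculations involved, using statsmodels. The effect size d ≈ 0.59 for the male/female weight difference is an assumption, chosen only because it roughly reproduces the N=92 figure above; the correlation values are likewise illustrative.

# Power sketch: between-subjects sample needed for d ≈ 0.59, and how a paired
# (within-subject) design's power depends on the correlation r between the
# two measurements. d = 0.59 and the r values are assumptions for illustration.
import numpy as np
from statsmodels.stats.power import TTestIndPower, TTestPower

d = 0.59
n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.80)
print(f"Between subjects: {n_per_group:.1f} per group (~{2 * n_per_group:.0f} total) for 80% power")

# For a paired test the relevant effect size is d_z = d / sqrt(2 * (1 - r)),
# so r > .5 boosts power while r < .5 shrinks the usual within-subject advantage.
for r in (0.2, 0.5, 0.8):
    d_z = d / np.sqrt(2 * (1 - r))
    power = TTestPower().power(effect_size=d_z, nobs=40, alpha=0.05)
    print(f"Paired design, N = 40, r = {r}: power = {power:.2f}")

(This only captures the correlation side of the story; it is a sketch, not the post's full argument about when within-subject comparisons hurt.)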

Top 10 ways to save science from its statistical self

Daniel Lakens: Which statistics should you report? Throughout the history of psychological science, there has been a continuing debate about which statistics are used and how these statistics are reported.

Daniel Lakens: Which statistics should you report?

I distinguish between reporting statistics and interpreting statistics. This is important, because a lot of the criticism of the statistics researchers use comes from how statistics are interpreted, not how they are reported. When it comes to reporting statistics, my approach is simple: The more, the merrier. At the very minimum, descriptive statistics (e.g., means and standard deviations) are required to understand the reported data, preferably complemented with visualizations of the data (for example in online supplementary material).

This should include the sample sizes (per condition for between-subjects designs) and the correlations between dependent variables in within-subject designs. Is this the worst government statistic ever created? I forgot to post this column up last year.
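As a hedged illustration of that reporting minimum, here is a short pandas sketch; the condition and variable names ('condition', 'dv1', 'dv2') and the numbers are invented placeholders, not anything from Lakens' post.

# Toy data standing in for a two-condition experiment with two dependent variables.
import pandas as pd

df = pd.DataFrame({
    "condition": ["control"] * 4 + ["treatment"] * 4,
    "dv1": [3.1, 2.8, 3.4, 3.0, 3.9, 4.1, 3.7, 4.3],
    "dv2": [1.2, 1.0, 1.5, 1.1, 1.6, 1.8, 1.4, 1.9],
})

# Sample size, mean, and standard deviation per between-subjects condition
print(df.groupby("condition")["dv1"].agg(["count", "mean", "std"]))

# Correlation between the dependent variables (the extra figure needed for
# within-subject designs)
print(df[["dv1", "dv2"]].corr())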

Is this the worst government statistic ever created?

It’s a fun one: the Department for Communities and Local Government have produced a truly farcical piece of evidence, and promoted it very hard, claiming it as good stats. I noticed the column was missing today because Private Eye have published on the same report in their current issue, finding, through FOI applications, emails that have gone missing, and other nonsense. That part is all neatly summarised online in the Local Government Chronicle here. Is this the worst government statistic ever created? Ben Goldacre, The Guardian, 24 June 2011. How can you tell if scientific evidence is strong or weak? - 8 ways to be a more savvy science reader. The most reliable type of study — especially for clinical trials — is the randomized, placebo-controlled, double-blind study.

How can you tell if scientific evidence is strong or weak? - 8 ways to be a more savvy science reader

[33] “The” Effect Size Does Not Exist. Consider the robust phenomenon of anchoring, where people’s numerical estimates are biased towards arbitrary starting points.

[33] “The” Effect Size Does Not Exist

What does it mean to say “the” effect size of anchoring? It surely depends on moderators like the domain of the estimate, expertise, and the perceived informativeness of the anchor. [30] Trim-and-Fill is Full of It (bias). Statistically significant findings are much more likely to be published than non-significant ones (no citation necessary).

[30] Trim-and-Fill is Full of It (bias)

Because overestimated effects are more likely to be statistically significant than underestimated ones, most published effects are overestimates. Effects are smaller – often much smaller – than the published record suggests. [27] Thirty-somethings are Shrinking and Other U-Shaped Challenges. A recent Psych Science (.pdf) paper found that sports teams can perform worse when they have too much talent.
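A toy simulation makes the point concrete: if only significant results get published, the published estimates overshoot the true effect. The true effect (d = 0.3) and cell size (n = 20) are arbitrary assumptions for illustration.

# Simulate many small studies of a true effect d = 0.3, "publish" only the
# significant ones, and compare the published average to the truth.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_d, n = 0.3, 20
published = []

for _ in range(5000):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    if ttest_ind(treatment, control).pvalue < 0.05:  # the publication filter
        pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
        published.append((treatment.mean() - control.mean()) / pooled_sd)

print(f"True d = {true_d}; mean of 'published' estimates = {np.mean(published):.2f}")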

[27] Thirty-somethings are Shrinking and Other U-Shaped Challenges

For example, in Study 3 they found that NBA teams with a higher percentage of talented players win more games, but that the teams with the very highest levels of talent win fewer games. The hypothesis is easy enough to articulate, but pause for a moment and ask yourself, “How would you test it?” This post shows the most commonly used test is incorrect, and suggests a simple alternative. What test would you run? UK finalist in $1m global teacher prize. 13 February 2015, by Sean Coughlan, education correspondent. A science teacher from north-east England who sings and dances his way around the classroom is among 10 finalists in a world teaching contest.
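Back to the u-shape question from the Psych Science item above, here is a sketch of one simple alternative in that spirit: rather than trusting a significant quadratic term, split the predictor at a breakpoint and test whether the slope actually reverses at high values. The fake data and the median split are illustrative assumptions, not necessarily the post's exact procedure.

# Fake inverted-U data: 'wins' rise with 'talent' and then fall. Fit separate
# slopes below and above a breakpoint and check that they have opposite signs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
talent = rng.uniform(0, 1, 200)
wins = 2 * talent - 1.6 * talent**2 + rng.normal(0, 0.2, 200)

cutoff = np.median(talent)
for label, mask in (("low talent", talent <= cutoff), ("high talent", talent > cutoff)):
    fit = sm.OLS(wins[mask], sm.add_constant(talent[mask])).fit()
    print(f"{label}: slope = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.3f}")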

Biology teacher Richard Spencer gets his Middlesbrough College students up and moving to aid their understanding of complicated scientific terms. He says his pupils enjoy his classes and "learn a lot from joining in". Dr Spencer is the only UK representative left in the $1m (£650,000) Global Teacher Prize. He is up against teachers from countries including the US, India, Kenya and Afghanistan in the competition designed to raise the status of teaching. The NHS Is Calling Out Journalists On Twitter For Getting Their Facts Wrong - BuzzFeed News. The Cognitive Science Song. The one chart you need to understand any health study. Today, the prestigious academic journal JAMA Internal Medicine published an article on the association between eating whole grains and having a lower risk of death from cardiovascular disease. Many news sources are going to have headlines like "Whole grains lead to heart-healthy benefits" and "Whole Grain Consumption Lowers Death Risk."

" On the accuracy of statistical procedures in Microsoft Excel 2007. A Department of Decision Sciences, LeBow College of Business, Drexel University, Philadelphia, PA 19104, United Statesb Carmichael, CA, United States Available online 12 March 2008 Choose an option to locate/access this article: Check if you have access through your login credentials or your institution Check access doi:10.1016/j.csda.2008.03.004 Get rights and content Excel 2007, like its predecessors, fails a standard set of intermediate-level accuracy tests in three areas: statistical distributions, random number generation, and estimation. Copyright © 2008 Elsevier B.V. Daniel Gilbert on Twitter: "@Neuro_Skeptic effect sizes in psych experiments are typically meaningless. They tell you about the strength of arbitrary manipulations." That Catcalling Video and Why “Research Methods” is such an Exciting Topic (Really!) — The Message.

A few days ago, a video produced by a third-party “viral video creative agency” for the anti-harassment NGO “Hollaback” went viral. The two-minute clip showed a conventionally attractive white actress walking the streets of New York for “10 hours” and being repeatedly catcalled. But you couldn't help noticing that all the men catcalling the white actress were people of color, in fact almost all black men. [29] Help! Someone Thinks I p-hacked. [26] What If Games Were Shorter? Stirling Behavioural Science Blog: This one goes up to eleven.

Spurious Correlations. Liz_buckley: Um. Guys. It's 50% longer,... Power (Statistics) Dan Meyer: Math class needs a makeover. A formula for justice. Do we need a statistics campaign? Using Mammography to Screen Women for Breast Cancer May Be Less Effective In Reducing Death Rates Than Previously Estimated - September 22, 2010. For immediate release: Wednesday, September 22, 2010 Boston, MA — A new study led by Harvard School of Public Health (HSPH) researchers has found that a breast cancer screening program in Norway, which made mammographic screening available to women between the ages of 50 and 69, resulted in a 10% decrease in breast cancer deaths in that age group. “The observed reduction in death from breast cancer after introduction of the mammography screening program was far less than we expected,” said lead author Mette Kalager, a visiting scientist at HSPH and a surgeon at Oslo University Hospital in Norway.

“The results showed that other factors, such as enhanced breast cancer awareness and improved treatment, actually had a greater effect on reducing mortality from breast cancer.” The study appears in the September 23, 2010 issue of The New England Journal of Medicine. Chances are, we'd all benefit from a statistics lesson. The simple truth about statistics. Statistical literacy guide. Hakeem Al-Kazak - Types and Errors (Statistics Song)