Disinformation. Astroturfing works, and it’s a major challenge to climate change. If you’re a regular Ars reader, the concept of “astroturf organizations” — fake grassroots movements backed by large corporations — won’t be new.
In the past, we’ve covered astroturfing by AT&T, cable companies, and even the Chinese government. But we haven’t really addressed a key question: does astroturfing actually work?

Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison.

Problems and challenges of predatory journals.

Predatory journals: The rise of worthless biomedical science.

Valuable Research in Fake Journals and Self-boasting with Fake Metrics.

Watch out for faux journals and fake conferences. About a year ago, Robert Calin-Jageman, PhD, noticed an uptick in email solicitations requesting manuscripts for journals he wasn’t familiar with or inviting him to academic conferences he hadn’t heard of before.
Upon further investigation, the Dominican University psychology professor traced the journals and conferences — all of which had seemingly legitimate websites — back to OMICS Publishing Group, an India-based firm that has been accused of unethical publishing practices, such as inviting potential authors to submit manuscripts without informing them of pricey author fees, or fraudulently listing academic editors who have nothing to do with the journal. Calin-Jageman also found that the group was one of more than 300 publishers on librarian Jeffrey Beall’s blacklist of “potential, possible or probable predatory scholarly open access publishers.” Beall says it’s usually pretty easy to spot a potentially predatory journal solicitation. — Amy Novotney.

Predatory publishers: the journals that churn out fake science.
A vast ecosystem of predatory publishers is churning out “fake science” for profit, an investigation by the Guardian in collaboration with German publishers NDR, WDR and Süddeutsche Zeitung Magazin has found.
More than 175,000 scientific articles have been produced by five of the largest “predatory open-access publishers”, including India-based OMICS publishing group and the Turkish World Academy of Science, Engineering and Technology, or Waset. But the vast majority of those articles skip almost all of the traditional checks and balances of scientific publishing, from peer review to an editorial board. Instead, most journals run by those companies will publish anything submitted to them – provided the required fee is paid. To demonstrate the lack of peer review, Svea Eckert, a researcher who worked with NDR on the investigation, successfully submitted an article created by the joke site SCIgen, which automatically generates gibberish computer science papers.

List of Suspicious Journals and Publishers - Choosing a Journal for Publication of an Article - Yale University Library Research Guides at Yale University.
Stop Predatory Journals. Journals that publish work without proper peer review, and which charge scholars sometimes-huge fees to submit, should not be allowed to share space with legitimate journals and publishers, whether open access or not.
These journals and publishers cheapen intellectual work by misleading scholars, preying particularly on early-career researchers trying to gain an edge. The credibility of scholars duped into publishing in these journals can be seriously damaged by doing so. It is important that, as a scholarly community, we help protect each other from being taken advantage of in this way. Some Basic Criteria.

Most researchers don’t understand error bars. [This post was originally published in March 2007] Earlier today I posted a poll [and I republished that poll yesterday] challenging Cognitive Daily readers to show me that they understand error bars -- those little I-shaped indicators of statistical uncertainty you sometimes see on graphs.
I was quite confident that they wouldn’t succeed. Why was I so sure? Because in 2005, a team led by Sarah Belia conducted a study of hundreds of researchers who had published articles in top psychology, neuroscience, and medical journals. Only a small portion of them could demonstrate accurate knowledge of how error bars relate to significance.

Confidence Intervals. First off, we need to know the correct answer to the problem, which requires a bit of explanation. Suppose we want to know if men’s reaction times are different from women’s reaction times.

Standard Errors. But perhaps the study participants were simply confusing the concept of a confidence interval with the standard error.
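The point underlying the poll can be sketched in Python (the reaction-time numbers below are invented for illustration, not taken from any study): the question “are two groups different?” is answered by a confidence interval on the *difference* of means, and two 95% CI error bars can overlap even when that interval excludes zero, so eyeballing bar overlap is not a significance test.

```python
import statistics as st

def mean_se_ci(sample, z=1.96):
    """Return (mean, standard error, 95% CI half-width), normal approximation."""
    m = st.mean(sample)
    se = st.stdev(sample) / len(sample) ** 0.5
    return m, se, z * se

# Hypothetical reaction times in ms (values invented for illustration).
men = [312, 298, 305, 321, 290, 308, 315, 300, 295, 310] * 3
women = [304, 295, 310, 290, 307, 299, 303, 294, 298, 305] * 3

m1, se1, ci1 = mean_se_ci(men)    # mean 305.4
m2, se2, ci2 = mean_se_ci(women)  # mean 300.5

# The individual 95% CI bars overlap...
bars_overlap = (m1 - ci1) < (m2 + ci2) and (m2 - ci2) < (m1 + ci1)

# ...yet the 95% CI for the difference in means excludes zero,
# so the difference is significant at the .05 level.
se_diff = (se1 ** 2 + se2 ** 2) ** 0.5
diff_lo = (m1 - m2) - 1.96 * se_diff
diff_hi = (m1 - m2) + 1.96 * se_diff
significant = diff_lo > 0 or diff_hi < 0

print(bars_overlap, significant)  # True True
```

Note the asymmetry the sketch exposes: overlapping 95% CI bars do not imply a non-significant difference, which is exactly the kind of misreading the Belia study documented.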
Error Bars Considered Harmful: Exploring Alternate Encodings for Mean and Error.

Perceptions of scientific research literature and strategies for reading papers depend on academic career stage.

Data Can Lie – Here’s A Guide To Calling Out B.S. According to the University of Washington professors Carl T.
Bergstrom and Jevin West, it’s time someone did something about it. Their answer? The Bullshit Syllabus.

Calling Bullshit — Syllabus. Logistics. Course: INFO 198 / BIOL 106B.
University of Washington. To be offered: Autumn Quarter 2017. Credit: 3 credits, graded. Enrollment: 180 students. Instructors: Carl T. Bergstrom and Jevin West. Synopsis: Our world is saturated with bullshit. Learn to detect and defuse it. Learning Objectives: Our learning objectives are straightforward. Remain vigilant for bullshit contaminating your information diet.

The Increasing Problem With the Misinformed (by @baekdal) #analysis. When discussing the future of newspapers, we have a tendency to focus only on the publishing side.
We talk about the changes in formats, the new reader behaviors, the platforms, the devices, and the strange new world of distributed digital distribution, which are not just forcing us to do things in new ways but also atomizing the very core of the newspaper.

New paper: “Why most of psychology is statistically unfalsifiable” – Medium. Daniël Lakens and I have been working on a new paper that serves as a comment on the Reproducibility Project: Psychology and on Patil, Peng, and Leek’s (2016) use of prediction intervals to analyze the results.
Our point of view on the RP:P echoes Etz and Vandekerckhove (2016): neither the original studies nor the replications were, on the whole, particularly informative. We differ from Etz and Vandekerckhove in that we use a straightforward classical statistical analysis of the differences between the studies, and we find that for most of the study pairs, even very large differences between the two results (1) could not be detected by the design, and (2) cannot be rejected in light of the data. The reason is, essentially, that the resolution of the findings is simply lacking. In light of this fact, all the discussion of moderators as a possible explanation for failures to replicate is over-interpreting noise. There might be differences between the studies; the data are simply too coarse to show them. Other things to note:
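One classical way to compare an original study with its replication, sketched here with hypothetical numbers (this is an illustration of the general technique, not the paper’s own analysis), is a z-test on the difference between two Fisher-transformed correlations. With typical psychology sample sizes, even a dramatic drop in effect size can land inside sampling noise, which is what “the resolution of the findings is simply lacking” means in practice.

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def diff_z(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Hypothetical original study (r = .40, n = 30) vs replication (r = .05, n = 80).
z = diff_z(0.40, 30, 0.05, 80)

print(round(z, 2))    # 1.67
print(abs(z) < 1.96)  # True: the large drop in effect size cannot be
                      # distinguished from sampling noise at alpha = .05
```

Here an effect that shrinks from r = .40 to r = .05 still yields |z| < 1.96, so a moderator-based story about why the replication “failed” would be built on a difference the design cannot even detect.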