
Scientists misusing Statistics

[This article was first published on Revolutions, and kindly contributed to R-bloggers.]

In ScienceNews this month, there’s a controversial article exposing the fact that results claimed to be "statistically significant" in scientific articles aren’t always what they’re cracked up to be. The article, titled "Odds Are, It’s Wrong", is interesting, but I take issue with the sub-headline, "Science fails to face the shortcomings of Statistics". As it happens, the examples in the article are mostly cases of scientists behaving badly and abusing statistical techniques and results.

Statisticians, in general, are aware of these problems and have offered solutions: there’s a vast literature on multiple-comparison tests, reporting bias, and alternatives to P-value testing (such as Bayesian methods). But more often than not, these "arcane" issues (which are in fact part of any statistical training) go ignored in scientific journals. You don’t need to be a cynic to understand the authors’ motives for doing so (hey, a publication is a publication, right?), but the cooperation of the peer reviewers and editorial boards is disturbing.
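To see why the multiple-comparisons issue matters, here’s a quick simulation (sketched in Python for self-containedness; in R the correction step is a one-liner with `p.adjust`). It runs 200 "experiments" in which there is, by construction, no real effect, and counts how many come out "statistically significant" at the usual 0.05 threshold — with and without a Bonferroni correction. The sample sizes and seed are arbitrary choices for illustration.

```python
import math
import random
import statistics

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (Welch z-test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

random.seed(1)
n_tests = 200  # 200 independent "experiments", all with NO real effect

p_values = [
    two_sample_p(
        [random.gauss(0, 1) for _ in range(50)],  # both groups drawn from
        [random.gauss(0, 1) for _ in range(50)],  # the SAME distribution
    )
    for _ in range(n_tests)
]

# Naively, about 5% of null experiments clear the p < 0.05 bar by chance alone.
naive_hits = sum(p < 0.05 for p in p_values)

# Bonferroni: shrink the per-test threshold to keep the overall error rate at 5%.
bonferroni_hits = sum(p < 0.05 / n_tests for p in p_values)

print(f"'significant' at p < 0.05: {naive_hits} of {n_tests}")
print(f"after Bonferroni correction: {bonferroni_hits} of {n_tests}")
```

Run it and you’ll typically see around ten naive "discoveries" out of 200 — every one of them a false positive — while the corrected count is almost always zero. That, in miniature, is the problem the article describes.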

ScienceNews: Odds Are, It’s Wrong

