The scientific process has been going through a welcome period of introspection recently, with a focus on understanding just how reliable the results of scientific studies are. We're not talking here about scientific fraud, but about how the scientific process itself, and its focus on p-values (which not even statisticians can easily explain) as the criterion for a positive result, leads to a surprisingly large number of false positives being published. On top of that, there's the issue of publication bias (especially in the pharmaceutical industry), an area where Ben Goldacre has taken a lead. The whole issue is wrapped up in the concept of reproducibility — the idea that independent researchers should be able to replicate the results of published studies — for which David Spiegelhalter gives a great primer in the video below.
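To see why relying on p < 0.05 alone can let so many false positives through, here is a toy calculation in R. The inputs (the share of tested hypotheses that are actually true, and the statistical power of a typical study) are purely illustrative assumptions, not figures from the replication project:

# Toy illustration (assumed numbers): even when every study uses p < 0.05,
# a large share of "positive" findings can be false if true effects are rare.
alpha     <- 0.05  # false positive rate per study (the significance threshold)
power     <- 0.80  # chance a study detects a real effect (assumed)
prop_true <- 0.10  # assumed share of tested hypotheses that are actually true

true_pos  <- prop_true * power        # 0.08
false_pos <- (1 - prop_true) * alpha  # 0.045
false_pos / (true_pos + false_pos)    # ~0.36: roughly a third of the
                                      # significant results are false positives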
So it's welcome news that one of the top science breakthroughs of 2015 according to Science and Nature is Brian Nosek's project to reproduce the results of 100 scientific studies published in psychology journals. The detailed methodology is described in this paper, but in short, Nosek recruited replication teams to recreate the studies described in the carefully selected papers and analyze the data they collected:
Moreover, to maximize reproducibility and accuracy, the analyses for every replication study were reproduced by another analyst independent of the replication team using the R statistical programming language and a standardized analytic format. A controller R script was created to regenerate the entire analysis of every study and recreate the master data file.
R is a natural fit for a reproducibility project like this: because R is a scripting language, the analysis script itself serves as reproducible documentation of every step of the process. (Revolution R Open, Microsoft's enhanced R distribution, additionally includes features to facilitate reproducibility when using R packages.) The R script used for the psychology replication project describes and executes the process for checking the results of the papers.
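For readers unfamiliar with the "controller script" idea quoted above, here is a minimal sketch of that pattern in R. This is not the project's actual script: the analyses/ directory, the convention that each per-study script defines a data frame called result, and the snapshot date passed to the checkpoint package are all illustrative assumptions.

# Minimal sketch of a controller-script pattern (illustrative only).
library(checkpoint)
checkpoint("2015-11-30")  # pin package versions to an MRAN snapshot (assumed date)

# Assumed layout: one analysis script per study in an analyses/ directory,
# each defining a standardized data frame named `result`.
study_scripts <- list.files("analyses", pattern = "\\.R$", full.names = TRUE)

results <- lapply(study_scripts, function(script) {
  env <- new.env()
  source(script, local = env)  # run one study's analysis in its own environment
  env$result                   # collect the standardized result it produced
})

# Recreate a master data file from the individual study results
master <- do.call(rbind, results)
write.csv(master, "master_data.csv", row.names = FALSE)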
Of the 100 studies, 97 reported statistically significant effects. (This is itself a reflection of publication bias: studies that find no effect rarely get published.) Yet for 61 of those 97 studies, the reported significant results could not be replicated when the study was repeated. Their conclusion:
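A quick back-of-the-envelope calculation, using only the figures quoted above, shows just how low the replication rate is:

# Replication rate implied by the figures above
significant_original <- 97
failed_to_replicate  <- 61
replicated <- significant_original - failed_to_replicate  # 36 studies
replicated / significant_original                         # ~0.37: roughly a third replicated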
A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, review in advance for methodological fidelity, and high statistical power to detect the original effect sizes.
Scrutiny like this of the scientific method itself can only improve the scientific process, and the project deserves its accolade as a breakthrough. Read more about the project and the replicated studies at the link below.
Open Science Framework: Estimating the Reproducibility of Psychological Science (via Solomon Messing)