
rOpenSci Statistical Software Testing and Peer Review – Community Call Summary


A week ago we held a Community Call on rOpenSci Statistical Software Testing and Peer Review, featuring speakers Noam Ross, Mark Padgham, Anna Krystalli, Alex Hayes, and John Sakaluk.

This post provides a ready reference for the call, which introduced the system being developed for peer review of explicitly statistical software, along with two of the automated software tools intended for use by developers and reviewers of statistical software.

After a welcome from Stefanie Butland, Anna Krystalli gave an overview of the context and importance of our new tools from an editorial perspective. Noam Ross then introduced the statistical software review project, members of its advisory board, and the standards-based system which will be used to assess and review statistical software. Mark Padgham then briefly described the two main tools intended for use by developers and reviewers: the autotest package for automated testing of software to ensure robust responses to unexpected inputs throughout development, and the srr (software review roclets) package for documenting within code itself how and where it complies with both general and category-specific standards for statistical software.
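To make the autotest workflow concrete, here is a minimal sketch of running it against a package under development. This example is illustrative rather than taken from the call itself: the package path is a placeholder, and the call reflects autotest's documented `autotest_package()` entry point at the time of writing.

```r
# autotest is not on CRAN at the time of writing; one install route:
# remotes::install_github("ropensci-review-tools/autotest")
library(autotest)

# Mutate the example inputs of each exported function and test the
# package's responses to unexpected values. The path is a placeholder
# for your own package source directory.
res <- autotest_package("/path/to/mypackage", test = TRUE)

# The result is a tabular object with one row per flagged behaviour;
# inspect it like any data frame.
res
```

Running with `test = FALSE` instead lists the checks autotest would apply without executing them, which can be useful early in development.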

The call then moved on to a “hands-on” demonstration of how these packages can be used in practice. John Sakaluk showed autotest’s capabilities on his dySEM package. John developed dySEM for his own use, and would now like to refine and extend the package for more general use, ideally working towards submission to our peer-review system. John described the usefulness of autotest in explicitly revealing aspects of his code that could be improved for more general usage, and in particular that,

one of the things that’s really useful for me here, as a self-taught and newbie developer, is I find myself adding to my package development list almost every time that I open it up in terms of wish-listing new functionality. And what’s really nice about this [autotest tool] is this can help me set some targets for priority items just for tightening up the programming of the existing functions. – John Sakaluk

Alex Hayes then described his experience with the initial review of his fastadi package, and the role standards can play in software improvement and assessment, noting in particular the usefulness of standards as contextual “touchpoints” for review, and how the srr package tracks these standards throughout the development process.
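For readers curious what documenting standards compliance looks like in code, here is a hedged sketch of srr's roxygen2 tags. The function, the standard numbers, and the explanatory text are illustrative only; the tags are processed by the `srr::srr_stats_roclet` roclet, enabled via the `Roxygen` field of a package's DESCRIPTION file.

```r
#' my_model
#'
#' Fit a simple model (a hypothetical function, for illustration only).
#'
#' @srrstats {G2.1} Input types are asserted at the top of the function.
#' @srrstatsTODO {G2.3} Checks on character-valued inputs not yet added.
#'
#' @param x A numeric vector of observations.
#' @export
my_model <- function(x) {
  stopifnot(is.numeric(x)) # the assertion documented under G2.1 above
  mean(x)
}
```

With tags like these in place, a call such as `srr::srr_report()` can then summarise which standards a package addresses, which remain to do, and where in the code each one is documented.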

Summarizing the community call

Here we’ve organized the video content by speakers and questions, including links to the specific time points in the video as well as to questions and answers in the collaborative notes document. We hope that by preparing this summary, more people will be able to benefit from this information.

Speakers

Questions

Want to get involved?

Not sure how you might contribute? Contact us (mark@ropensci.org) and tell us what you’re thinking. We are particularly keen to help people from underrepresented groups find ways to get involved.
