rOpenSci Statistical Software Testing and Peer Review – Community Call Summary


Last week we held a Community Call on rOpenSci Statistical Software Testing and Peer Review, featuring speakers Noam Ross, Mark Padgham, Anna Krystalli, Alex Hayes, and John Sakaluk.

[Image: headshots of the moderator and four panelists]

This post provides a ready reference to the community call, which introduced the system being developed for peer review of explicitly statistical software, along with two of the automated tools intended for developers and reviewers of statistical software.

After a welcome from Stefanie Butland, Anna Krystalli gave an overview of the context and importance of our new tools from an editorial perspective. Noam Ross then introduced the statistical software review project, members of its advisory board, and the standards-based system which will be used to assess and review statistical software. Mark Padgham then briefly described the two main tools intended for use by developers and reviewers: the autotest package for automated testing of software to ensure robust responses to unexpected inputs throughout development, and the srr (software review roclets) package for documenting within code itself how and where it complies with both general and category-specific standards for statistical software.
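As a rough sketch of the intended workflow (based on the packages’ documentation rather than on code shown during the call), autotest can be run against an entire package source tree; the path below is a placeholder:

```r
# Minimal sketch: run autotest's battery of input mutations against all
# exported functions of a package. autotest_package() is the package's
# main entry point; the path here is a placeholder.
library(autotest)

res <- autotest_package(package = "/path/to/your/package")

# One row per flagged behaviour, identifying the function tested, the
# mutation applied, and the unexpected response.
print(res)

# autotest_types() lists the kinds of input mutations applied, such as
# substituting vectors of different lengths or converting input types.
autotest_types()
```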

The call then moved on to a “hands-on” demonstration of how these packages can be used in practice. John Sakaluk showed autotest’s capabilities on his dySEM package. John developed dySEM for his own use, and would now like to refine and extend the package for more general use, ideally working towards submission to our peer-review system. John described the usefulness of autotest in explicitly revealing aspects of his code which could be improved for more general usage, and in particular that,

one of the things that’s really useful for me here, as a self-taught and newbie developer, is I find myself adding to my package development list almost every time that I open it up in terms of wish-listing new functionality. And what’s really nice about this [autotest tool] is this can help me set some targets for priority items just for tightening up the programming of the existing functions. – John Sakaluk

Alex Hayes then described his experiences from initial review of his fastadi package, and of the role standards can play in software improvement and assessment, noting in particular the usefulness of standards as contextual “touchpoints” for review, and how the srr package tracks these standards through the development process.
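As an illustrative sketch of that tracking (the @srrstats tags and helper functions come from srr’s documentation, while the standard numbers and compliance notes here are invented for the example), standards are documented in roxygen2 blocks directly alongside the code they describe:

```r
#' Fit an example model (function invented for illustration).
#'
#' @srrstats {G1.0} Primary reference for the implemented method is
#'   given in the package README.
#' @srrstats {G2.1} Assertions on the type and length of `x` are
#'   implemented below.
#' @srrstatsTODO {G2.3} Ranges of input values are not yet validated.
#' @export
fit_example <- function(x) {
    stopifnot(is.numeric(x), length(x) > 0L)
    # ... actual fitting code would go here ...
}
```

srr also provides helpers such as srr_stats_roxygen(), which inserts a skeleton of category-specific standards into a package, and srr_report(), which summarises where each standard is addressed, keeping this documentation in step with development.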

Summarizing the community call

Here we’ve organized the video content by speakers and questions, including links to the specific time points in the video as well as to questions and answers in the collaborative notes document. We hope that by preparing this summary, more people will be able to benefit from this information.

Speakers

  • Anna Krystalli – The editorial perspective (video)

  • Noam Ross – Project introduction (video)

  • Mark Padgham – Introducing autotest and srr packages (video)

  • John Sakaluk – Using autotest on a package-in-development (video)

  • Alex Hayes – Using srr while preparing a package for review (video)

  • Anna Krystalli – Moderating questions (video)

Questions

  • (Anna Krystalli) Suggestion of RStudio addin for srr (video)

  • (Anna Krystalli) Could we get more detail on what autotest looks for? (video | document)

  • (Steffi LaZerte) Is autotest applicable to all packages or just statistical packages? (video | document)

  • (Kieran Martin) What is missing from these packages for assessing statistical packages? (video | document)

  • (Joss Langford) We are just beginning to re-code some existing packages – so have the advantage of starting with a blank sheet AND a good specification AND well-tested code snippets. We’re new to this community – what advice would you give on both engagement and coding? (video | document)

  • (John Sakaluk) Regarding representation in the program, do we have any sense where groups of folks are over/under-represented within specific areas? (video | document)

  • (Rafael Pilliard Hellwig) What is the vision for applying these srr standards to existing (previously published) rOpenSci packages? (video | document)

  • (Charles Sweetland) Does autotest take into account dependencies and dependency changes? (document)

Want to get involved?

Not sure how you might contribute? Contact us ([email protected]) and tell us what you’re thinking. We are particularly keen to help people from underrepresented groups find ways to get involved.

  • Contact us about submitting packages for peer-review, even if the package is only a concept right now
  • Contact us if you would be interested in reviewing statistical software packages
  • We’re aiming for an official launch in around one month’s time
  • Use autotest on your own package right now, and give feedback to help improve it (see the sketch after this list)
  • Help us to improve our standards by giving any kind of feedback
  • Informal comments, questions, suggestions
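For those keen to try autotest right away, the following sketch shows one way to install and run the development version; the GitHub location is assumed here, so check the autotest repository for current installation instructions:

```r
# Install the development version from GitHub (repository location
# assumed; see the project page for current instructions).
# install.packages("remotes")
remotes::install_github("ropensci-review-tools/autotest")

# Run the full test battery from your package's root directory.
library(autotest)
autotest_package(package = ".")
```

Feedback on the results, whether as GitHub issues or informal comments, will directly help us improve the tool.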
