[This article was first published on mikeksmith's posterous, and kindly contributed to R-bloggers].
If I come across code like the snippet below when I’m checking (QCing) someone’s work, it makes me want to punch the programmer’s face. Because the object is overwritten at every step, I find it impossible to step through and compare each dataset with its previous incarnation, which is how I check what has happened between lines. (In this case the changes are trivial, but the principle applies.) However, when I discussed it with a colleague recently, he said: “No, I think this is cool. Your final dataset is still called ‘data’, and you can easily see what changes have been made between the input and the final version ready for analysis.”
data <- read.csv("foo.csv")           # read the raw data
data <- data[data$STUDY == 1234, ]    # keep records from study 1234
data <- data[!is.na(data$VAR), ]      # drop rows with missing VAR
data <- data[data$VAR > 0, ]          # keep positive VAR (needed for log)
data$VAR <- log(data$VAR)             # log-transform VAR
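
For contrast, here is a minimal sketch of the style I find easier to QC, with each step assigned to its own name so that every intermediate dataset can be inspected against the previous one (the intermediate names here are my own invention):

raw <- read.csv("foo.csv")                         # read the raw data
study1234 <- raw[raw$STUDY == 1234, ]              # keep records from study 1234
nonMissing <- study1234[!is.na(study1234$VAR), ]   # drop rows with missing VAR
positiveVar <- nonMissing[nonMissing$VAR > 0, ]    # keep positive VAR (needed for log)
analysisData <- positiveVar                        # final analysis-ready copy
analysisData$VAR <- log(analysisData$VAR)          # log-transform VAR

The trade-off, of course, is a workspace littered with near-duplicate objects, which is exactly the clutter my colleague’s approach avoids.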
So. Is this cool? Or really, really bad?