[This article was first published on Yet Another Blog in Statistical Computing » S+/R, and kindly contributed to R-bloggers].
Similar to the row search, by-group aggregation is another good use case for demonstrating the power of split-and-conquer with parallelism.
The example below shows that a homebrew by-group aggregation with the foreach package, albeit inefficiently coded, is still considerably faster than the summarize() function in the Hmisc package.
load('2008.Rdata')
pkgs <- c('rbenchmark', 'doParallel', 'foreach', 'Hmisc')
lapply(pkgs, require, character.only = T)
registerDoParallel(cores = 8)
benchmark(replications = 10,
  summarize = {
    summarize(data[c("Distance", "Month")], data["Month"], colMeans, stat.name = NULL)
  },
  foreach = {
    data2 <- split(data, data$Month)
    test2 <- foreach(i = data2, .combine = rbind) %dopar% (data.frame(Month = unique(i$Month), Distance = mean(i$Distance)))
  }
)
# test replications elapsed relative user.self sys.self user.child
# 2 foreach 10 19.644 1.00 17.411 1.965 1.528
# 1 summarize 10 30.244 1.54 29.822 0.441 0.000
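For readers who prefer base R, the same split-and-conquer idea can be expressed with parallel::mclapply instead of foreach. The sketch below is not from the original benchmark; it assumes the same data frame data loaded above, with Month and Distance columns, and forks 8 workers to mirror registerDoParallel(cores = 8). Note that mclapply relies on forking and therefore runs serially on Windows.

library(parallel)

# Split the data by Month, compute the group mean on each piece in
# parallel, and stack the per-group results back into one data frame.
pieces <- split(data, data$Month)
out <- mclapply(pieces, function(d) {
  data.frame(Month = unique(d$Month), Distance = mean(d$Distance))
}, mc.cores = 8)
test3 <- do.call(rbind, out)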
