Parallel Multicore Processing with R (on Windows)
This post offers a simple example and installation tips for "doSMP", the new parallel-processing backend package for R on Windows.
* * *
Update:
The required packages are not yet available on CRAN; until they are, you can download them from here:
REvolution foreach windows bundle
(Simply unzip the folders inside your R library folder)
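If you are not sure where your R library folder is, R itself can tell you: the base function .libPaths() lists the folders R searches for installed packages. Unzipping the bundle into the first of these is what the step above amounts to:

```r
# .libPaths() returns the folders R searches for installed packages;
# the first entry is usually your default library
.libPaths()
lib <- .libPaths()[1]  # unzip the downloaded package folders into this directory
```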
* * *
Recently, the REvolution blog announced the release of "doSMP", an R package that offers support for symmetric multicore processing (SMP) on Windows.
This means you can now speed up loops in R code by running iterations in parallel on a multi-core or multi-processor machine, giving Windows users what was until recently available only to Linux/Mac users through the doMC package.
Installation
For now, doSMP is not available on CRAN, so to get it you will need to download the REvolution R distribution "R Community 3.2" (they will ask you to supply your e-mail, but I trust REvolution won't do anything too bad with it…)
If you already have R installed and want to keep using it (rather than the REvolution distribution, as was the case for me), you can navigate to the library folder inside the REvolution distribution and copy all the package folders from there into the library folder of your own R installation.
If you are using R 2.11.0, you will also need to download (and install) the revoIPC package from here:
revoIPC package – download link (required for running doSMP on windows)
(Thanks to Tao Shi for making this available!)
Usage
Once you have the folders in place, you can load the packages and do something like this:
require(doSMP)
workers <- startWorkers(2) # my computer has 2 cores
registerDoSMP(workers)

# create a function to run in each iteration of the loop
check <- function(n) {
  for (i in 1:1000) {
    sme <- matrix(rnorm(100), 10, 10)
    solve(sme)
  }
}

times <- 10 # times to run the loop

# comparing the running time for each loop
system.time(x <- foreach(j = 1:times) %dopar% check(j)) # 2.56 seconds
# (notice that the first run will be slower, because of R's lazy loading)
system.time(for (j in 1:times) x <- check(j)) # 4.82 seconds

# stop the workers
stopWorkers(workers)
Points to notice:
- You will only benefit from the parallelism if the body of the loop performs time-consuming operations; otherwise, R's serial loops will be faster.
- On the first run, the foreach loop may be slow because of R's lazy loading of functions.
- I am using startWorkers(2) because my computer has two cores; if your computer has more (for example, four), use more workers.
- Lastly, if you want more usage examples, see the "ParallelR Lite User's Guide", included with the REvolution R Community 3.2 installation in the "doc" folder.
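As a side note on the loop body used in the example: each iteration inverts an independent random matrix, which is exactly the kind of self-contained work that parallelizes well. A quick base-R sanity check of that body (no doSMP required):

```r
set.seed(1)
sme <- matrix(rnorm(100), 10, 10)  # a 10x10 random matrix, as in the loop body
inv <- solve(sme)                  # invert it
# the inverse times the original should be (numerically) the identity matrix
inv %*% sme
```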
Updates (15.5.10)
The new R version (2.11.0) doesn't work with doSMP and returns the following error:
Loading required package: revoIPC
Error: package ‘revoIPC’ was built for i386-pc-intel32
So far, no solution has been found other than using the REvolution R distribution or R 2.10.
A thread on the subject was started recently to report the problem. Updates will be posted here if someone comes up with a better solution.
Thanks to Tao Shi, there is now a solution to the problem. You’ll need to download the revoIPC package from here:
revoIPC package – download link (required for running doSMP on windows)
Install the package in your R distribution and follow the other steps detailed earlier in this post; doSMP will then work fine on R 2.11.0.
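For completeness, a downloaded package zip can be installed from within R itself using install.packages() with repos = NULL (the file name below is a placeholder for whatever name your download was saved under):

```r
pkg_zip <- "revoIPC_1.0.zip"  # placeholder: the local zip file you downloaded
if (file.exists(pkg_zip)) {
  # repos = NULL tells R to install from the local file rather than CRAN
  install.packages(pkg_zip, repos = NULL)
}
```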
Update 2: Notice that I added, at the beginning of the post, a download link to all the packages required for running a parallel foreach with R 2.11.0 on Windows (until they are uploaded to CRAN).