With the byte compiler, R 2.14 will be even faster
In a presentation at the JSM 2011 conference in Miami yesterday, R core member Luke Tierney revealed that the next major update to R, R 2.14, will feature improved speed when processing interpreted R code, thanks to standard use of the new byte compiler feature.
The byte compiler was introduced with R 2.13, but while R developers could use it for their own functions, the standard base and recommended packages were unaffected by it. Starting with R 2.14, all of the standard functions and packages in R will be pre-compiled into byte-code, which in some cases can speed up performance by a factor of five or more. (An experimental version of the compiler, which may make it into 2.14, promises even greater speedups.) The benefits accrue mainly to pure R functions that deal with scalars and very short vectors; R functions that call out to C code, and operations on large vectors, won't be affected much. With the new compiler, Tierney says there should be fewer occasions where R programmers need to turn to C or other external languages to speed up R code.
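To see the kind of code that benefits, here is a minimal sketch using `cmpfun()` from the standard `compiler` package (available since R 2.13). The example function `cumsum_loop` is hypothetical, chosen because a scalar loop in pure R is exactly the case the compiler targets; actual speedups will vary by machine and R version.

```r
library(compiler)

# A pure-R loop over scalars -- the kind of code the byte compiler helps most.
cumsum_loop <- function(x) {
  s <- numeric(length(x))
  total <- 0
  for (i in seq_along(x)) {
    total <- total + x[i]
    s[i] <- total
  }
  s
}

# Byte-compile the function; it computes the same result, only faster.
cumsum_loop_c <- cmpfun(cumsum_loop)

x <- runif(1e5)
stopifnot(all.equal(cumsum_loop_c(x), cumsum(x)))

# Compare timings of the interpreted and compiled versions:
system.time(cumsum_loop(x))
system.time(cumsum_loop_c(x))
```

In R 2.14, base and recommended packages ship already byte-compiled, so calls like this happen without any action on the programmer's part.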
In other news for R 2.14, Tierney says that the new version may also make transparent use of parallel processing for some operations on multi-core machines. The colSums and dist functions already include hidden support for parallel processing, and if tests go well, features like these may become the default in future versions of R. Features from Tierney's experimental pnmath package, which parallelizes some basic math routines in R, may also make it into the next release.
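While the parallelism Tierney describes is transparent to the user, R programmers can already parallelize work explicitly on multi-core machines. A minimal sketch using `mclapply()` from the `parallel` package (the `slow_square` function is a made-up stand-in for an expensive computation; forked workers require a Unix-like OS, so on Windows this falls back to a single core):

```r
library(parallel)

# A deliberately slow function standing in for a real per-element computation.
slow_square <- function(i) {
  Sys.sleep(0.01)
  i^2
}

# Run the computation across 2 cores (Unix-like systems only; on Windows
# use mc.cores = 1 or a socket cluster via parLapply instead).
res <- mclapply(1:8, slow_square, mc.cores = 2)
stopifnot(identical(unlist(res), (1:8)^2))
```

The transparent parallelism planned for R 2.14 would apply this kind of multi-core dispatch inside functions like colSums automatically, with no change to user code.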
R 2.14 is expected to be released later this year.