[This article was first published on Revolutions, and kindly contributed to R-bloggers].
Last month's release of Revolution R Enterprise 6.1 added the capability to fit decision and regression trees on large data sets, using a new parallel external-memory algorithm included in the RevoScaleR package. It also introduced the ability to apply this and RevoScaleR's other big-data statistical methods to data files distributed in Hadoop's HDFS file system*, using the Hadoop nodes themselves (with Revolution R Enterprise installed) as the compute engine. Revolution Analytics' VP of Development Sue Ranney explained how this works in a recent webinar. I've embedded the slides below, and you can also watch the webinar recording on YouTube.
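To give a flavor of what this looks like in practice, here is a minimal sketch of fitting a big-data decision tree with RevoScaleR against a file in HDFS. It assumes Revolution R Enterprise (with the RevoScaleR package) is installed on the cluster; the HDFS host and port, the file path, and the variable names are all hypothetical placeholders, not values from the webinar.

```r
# Sketch only: requires Revolution R Enterprise with the RevoScaleR package.
# The HDFS host/port, file path, and variable names are hypothetical.
library(RevoScaleR)

# Point RevoScaleR at HDFS instead of the local file system
rxSetFileSystem(RxHdfsFileSystem(hostName = "namenode", port = 8020))

# Expose a CSV file distributed in HDFS as a RevoScaleR data source
airData <- RxTextData("/user/example/airline.csv")

# Fit a regression tree with the parallel external-memory rxDTree algorithm
treeModel <- rxDTree(ArrDelay ~ DayOfWeek + CRSDepTime, data = airData)
treeModel
```

Because rxDTree streams over the data in chunks rather than loading it into memory, the same call works whether the file lives on a local disk or, as here, in HDFS.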
Revolution Analytics webinars: New Advances in High Performance Analytics with R: ‘Big Data’ Decision Trees and Analysis of Hadoop Data
[*] Or to use the department of redundancy department-approved acronym, HHDFSFS