If you'd like to get started using R with Spark, you'll need to set up a Spark cluster and install R and all the other necessary software on the nodes. An easy way to do that is to launch an HDInsight cluster on Azure, which is a managed Spark cluster with some useful extra components. You just need to configure the components you need: in this case, R, Microsoft R Server, and RStudio Server.
This tutorial explains how to launch an HDInsight cluster for use with R. It explains how to size and launch the cluster, connect to it via SSH, install Microsoft R Server (with R) on each of the nodes, and install RStudio Server community edition to use as an IDE on the edge node. (If you find you need a larger or smaller cluster after you've set it up, it's easy to resize the cluster dynamically.) Once you have the cluster up and running, here are some things you can try:
- Import and explore data within the Spark cluster
- Use the sparklyr package to interact with Spark DataFrames from RStudio (see the first sketch after this list)
- Operationalize R functions on the Spark cluster for remote calling
- Train statistical models on very large data sets with the RevoScaleR package (see the second sketch below)
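As a taste of what the sparklyr workflow looks like, here's a minimal sketch you could run from RStudio Server on the cluster's edge node. It assumes the sparklyr and dplyr packages are installed; the `yarn-client` master setting is typical for HDInsight, and the `mtcars_spark` table name is just an illustration.

```r
library(sparklyr)
library(dplyr)

# Connect to Spark via YARN (typical for an HDInsight cluster)
sc <- spark_connect(master = "yarn-client")

# Copy a small local data frame into Spark as a Spark DataFrame
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_spark", overwrite = TRUE)

# Explore it with familiar dplyr verbs; the computation runs in Spark
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(avg_mpg = mean(mpg)) %>%
  collect()

spark_disconnect(sc)
```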
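And here's a hedged sketch of training a model with RevoScaleR in a Spark compute context. It assumes Microsoft R Server 9.x (which provides `rxSparkConnect()`); the HDFS path and the model formula are hypothetical placeholders, not data that ships with the cluster.

```r
library(RevoScaleR)

# Create a Spark compute context so RevoScaleR computations run in the cluster
cc <- rxSparkConnect(reset = TRUE)

# Point at a (hypothetical) CSV file stored in HDFS
airData <- RxTextData("/example/data/airline.csv",
                      fileSystem = RxHdfsFileSystem())

# Fit a linear model; the work is distributed across the Spark cluster
model <- rxLinMod(ArrDelay ~ DepDelay + DayOfWeek, data = airData)
summary(model)

rxSparkDisconnect(cc)
```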
To get started with R and Spark, follow the instructions for setting up an HDInsight cluster at the link below.
Microsoft Azure Documentation: Get started using R Server on HDInsight