- sparklyr basics
- dplyr and DBI interface on Spark
- Running Spark ML Machine Learning K-means Algorithm from R
If you don’t know much about Spark yet, you can read my April post Answers to FAQ about SparkR for R users, where I explained how we could use the SparkR
package that is distributed with Spark. Many things (including code) may have changed since then, due to the rapid development driven by Spark's great popularity. We can now use Spark version 2.0.0. If you are migrating from previous versions, I suggest you look at the Migration Guide – Upgrading From SparkR 1.6.x to 2.0.
sparklyr basics
This package is based on the sparkapi package, which enables running Spark applications locally or on a YARN cluster directly from R. It translates R code to a bash invocation of spark-shell. Its biggest advantages are the dplyr interface for working with Spark DataFrames (which may be Hive tables) and the possibility of invoking algorithms from the Spark ML library.
Installation of sparklyr, then of Spark itself, and a simple application initiation are shown in the code below.
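A minimal sketch, assuming a local Spark instance for testing; the master URL and Spark version are illustrative:

install.packages("sparklyr")          # sparklyr from CRAN
library(sparklyr)
spark_install(version = "2.0.0")      # downloads and installs a local Spark distribution
sc <- spark_connect(master = "local") # starts a Spark application and returns a connection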
You don't have to specify the config yourself, but if you want to, remember that you can also specify parameters for the Spark application in config.yml files, so that you can benefit from multiple profiles (development, production); a sketch of such a connection follows below. In version 2.0.0 the master should be named yarn instead of yarn-client, with the deploy mode passed as a separate parameter, which differs from version 1.6.x. All available parameters can be found on the Running Spark on YARN documentation page.
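A hedged sketch of connecting to YARN in the 2.0.0 style; the property values are placeholders, and spark_config() could equally be populated from a config.yml file via the config package:

library(sparklyr)
conf <- spark_config()
conf$spark.executor.memory   <- "4G"      # example resource setting
conf$spark.submit.deployMode <- "client"  # deploy mode passed as a property, not in the master URL
sc <- spark_connect(master = "yarn", config = conf)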
dplyr and DBI interface on Spark
When connecting to YARN, you will most likely want to use data tables that are stored in Hive. Remember that
Configuration of Hive is done by placing your hive-site.xml, core-site.xml (for security configuration), and hdfs-site.xml (for HDFS configuration) files in conf/.
where conf/ is the directory set as HADOOP_CONF_DIR. Read more about using Hive tables from Spark.
If everything is set up and the application runs properly, you can use the dplyr interface, which provides lazy evaluation for data manipulations. The data are stored in Hive, the Spark application runs on a YARN cluster, and the code is invoked from R in the simple language of data transformations (dplyr) – all thanks to the sparklyr team's great job! An easy example is below.
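A sketch under the assumption that a Hive table called flights (a hypothetical name) already exists; tbl() only builds a lazy reference, and the computation runs on Spark when the result is printed or collected:

library(dplyr)
flights_tbl <- tbl(sc, "flights")              # lazy reference to the Hive table
flights_tbl %>%
  group_by(origin) %>%
  summarise(mean_delay = mean(dep_delay)) %>%
  arrange(desc(mean_delay))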
You can also perform any such operation on datasets used by Spark, for example on a local dataset copied to Spark, as in the sketch below.
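A minimal sketch using the built-in iris data set (copy_to() ships the local data frame to Spark and registers it as a table):

iris_tbl <- copy_to(sc, iris, overwrite = TRUE)    # register iris as a Spark table
iris_tbl %>%
  group_by(Species) %>%
  summarise(mean_petal_length = mean(Petal_Length))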
Note that the original dots in the iris column names have been translated to _.
This package also provides an interface to functions defined in the DBI package.
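A sketch assuming the iris table registered above; the sparklyr connection object can be used directly with DBI functions:

library(DBI)
dbListTables(sc)   # tables visible to Spark SQL / Hive
dbGetQuery(sc, "SELECT Species, COUNT(*) AS n FROM iris GROUP BY Species")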
Running Spark ML Machine Learning K-means Algorithm from R
A basic example of how sparklyr invokes Scala code from Spark ML will be presented using the K-means algorithm.
If you check the code of the sparklyr::ml_kmeans function, you will see that for an input tbl_spark object named x and a character vector containing the features' names (features), sparklyr ensures that you have a proper connection to a Spark data frame and prepares the features in a convenient form and naming convention. At the end it prepares a Spark DataFrame for the Spark ML routines.
This is done in a new environment, so that the arguments for the future ML algorithm and the model itself can be stored in their own environment. This is a safe and clean solution. You can construct a simple model by calling a Spark ML class like this.
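A minimal sketch (not sparklyr's exact internals) using sparklyr's low-level invoke API; sc is the connection from above:

kmeans_obj <- invoke_new(sc, "org.apache.spark.ml.clustering.KMeans")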
which creates a new object of class KMeans, on which we can invoke parameter setters to change the default parameters, like this
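A hedged sketch; setK(), setMaxIter() and setFeaturesCol() are setters of the Spark ML KMeans class, and the chosen values are illustrative:

kmeans_obj <- kmeans_obj %>%
  invoke("setK", 3L) %>%
  invoke("setMaxIter", 20L) %>%
  invoke("setFeaturesCol", "features")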
For an existing object of the KMeans class we can invoke its method called fit, which is responsible for starting the K-means clustering algorithm
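A sketch assuming iris_tbl from before; Spark ML expects a single vector column (here named "features", matching setFeaturesCol() above), so the numeric columns are first assembled with a VectorAssembler:

assembler <- invoke_new(sc, "org.apache.spark.ml.feature.VectorAssembler") %>%
  invoke("setInputCols", as.list(c("Sepal_Length", "Sepal_Width",
                                   "Petal_Length", "Petal_Width"))) %>%
  invoke("setOutputCol", "features")
iris_df      <- invoke(assembler, "transform", spark_dataframe(iris_tbl))
kmeans_model <- invoke(kmeans_obj, "fit", iris_df)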
which returns a new object (a KMeansModel) on which we can compute, e.g., the centers of the resulting clustering
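For example (a sketch; clusterCenters is a method of KMeansModel):

invoke(kmeans_model, "clusterCenters")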
or the Within Set Sum of Squared Errors (called cost), which is my small contribution (#173).
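A sketch using computeCost(), the KMeansModel method behind that cost, evaluated on the same assembled DataFrame:

invoke(kmeans_model, "computeCost", iris_df)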
This sometimes helps to decide how many clusters we should specify for the clustering problem (see the sketch below), and it is presented in the print method for the ml_model_kmeans object.
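A hedged sketch of such an "elbow" check, refitting the model for several values of k and comparing the WSSSE; it reuses kmeans_obj and iris_df from above:

wssse <- sapply(2:8, function(k) {
  model <- kmeans_obj %>%
    invoke("setK", as.integer(k)) %>%
    invoke("fit", iris_df)
  invoke(model, "computeCost", iris_df)
})
wssse   # look for the k after which the decrease flattens out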
All of that can be better understood if we have a look at the Spark ML documentation for KMeans (be careful not to confuse it with Spark MLlib, where methods and parameters have different names than those in Spark ML). This enabled me to provide a simple update for ml_kmeans() (#179), so that we can specify the tol (tolerance) parameter in ml_kmeans() to control the tolerance of convergence.
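A sketch of the resulting high-level call; the parameter names (centers, features, tol) reflect the sparklyr version discussed here and may differ in newer releases:

model <- iris_tbl %>%
  ml_kmeans(centers = 3,
            features = c("Petal_Length", "Petal_Width"),
            tol = 1e-5)
print(model)   # shows the centers and the cost (WSSSE)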