Introduction
Market Basket Analysis (MBA), or association rules mining, can be a very useful technique to gain insights into transactional data sets, and it can be useful for product recommendation. The classic example is supermarket data: for each customer we know which individual products (items) they have bought. With association rules mining we can identify items that are frequently bought together. Other use cases for MBA include web click data, log files, and even questionnaires.
In R the arules package calculates association rules using the so-called Apriori algorithm. For data sets that are not too big, calculating rules with arules in R (on a laptop) is not a problem. But when you have very large data sets, you need to do something else; you can:
- use more computing power (or cluster of computing nodes).
- use another algorithm, for example FP Growth, which is more scalable. See this blog for some details on Apriori vs. FP Growth.
Or do both of the above by using FPGrowth in Spark MLlib on a cluster. And the nice thing is: you can stay in your familiar RStudio environment!
Spark MLlib and sparklyr
Example Data set
We use the example groceries transactions data in the arules package. It is not a big data set and you would definitely not need more than a laptop, but it is much more realistic than the example given in the Spark MLlib documentation :-).
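The sparklyr code further below reads the transactions from an RDS file with one row per customer/item combination. As a minimal sketch, such a long-format data frame could be prepared from the arules Groceries data roughly as follows; the column names id and item are assumptions, chosen to match the aggregation step later on:

library(arules)

# load the built-in Groceries transactions data from arules
data("Groceries")

# convert the transactions object to a long data frame:
# one row per (customer id, purchased item) combination
trans_list   = as(Groceries, "list")
transactions = data.frame(
  id   = rep(seq_along(trans_list), lengths(trans_list)),
  item = unlist(trans_list),
  stringsAsFactors = FALSE
)

# save it so it can be read in the sparklyr script below
saveRDS(transactions, "transactions.RDs")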
Preparing the data
I am a fan of sparklyr, so we can do all of this from RStudio. First connect to Spark, read in the groceries transactions data and upload it to Spark. I am just using a local Spark install on my Ubuntu laptop.
###### sparklyr code to perform the FPGrowth algorithm ######
library(sparklyr)
library(dplyr)

#### connect to Spark ########################################
sc <- spark_connect(master = "local")

#### read the transactions data ##############################
transactions = readRDS("transactions.RDs")

#### upload to Spark #########################################
trx_tbl = copy_to(sc, transactions, overwrite = TRUE)
For demonstration purposes, data is copied in this example from the local R session to Spark. For large data sets this is not feasible anymore; in that case the data can come from Hive tables (on the cluster).
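For example, pointing sparklyr at an existing Hive table instead of copying data over could look roughly like this (the table name is a hypothetical placeholder):

# reference an existing Hive table lazily as a Spark table
# ("my_transactions" is a hypothetical table name)
trx_tbl = dplyr::tbl(sc, "my_transactions")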
The figure above shows the products purchased by the first four customers, viewed in the RStudio data grid. Although transactional systems will often output the data in this structure, it is not what the FPGrowth model in MLlib expects: it expects the data to be aggregated by id (customer), with the products inside an array. So there is one more preparation step.
# data needs to be aggregated by id, the items need to be in a list
trx_agg = trx_tbl %>%
  group_by(id) %>%
  summarise(items = collect_list(item))
The figure above shows the aggregated data; customer 12, for example, has a list of 9 items that he has purchased.
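If you want to check this yourself from RStudio, ordinary dplyr verbs work directly on the aggregated Spark table; a small sketch (the id value 12 is just an example):

# peek at the first few aggregated rows in Spark
trx_agg %>% head(5)

# or look at a single customer (id 12 is just an example value)
trx_agg %>% filter(id == 12)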
Running the FPGrowth algorithm
We can now run the FPGrowth algorithm, but there is one more thing: sparklyr does not expose the FPGrowth algorithm (yet), so there is no R interface to it. Luckily, sparklyr allows the user to invoke the underlying Scala methods in Spark. We can define a new object with invoke_new:
uid  = sparklyr:::random_string("fpgrowth_")
jobj = invoke_new(sc, "org.apache.spark.ml.fpm.FPGrowth", uid)
Now jobj is an object of class FPGrowth in Spark.
jobj
<jobj[457]>
  class org.apache.spark.ml.fpm.FPGrowth
  fpgrowth_d4d41f71f3e0
By looking at the Scala documentation of FPGrowth we see that there are more methods we can use. With the function invoke we specify which column contains the list of items, the minimum confidence and the minimum support.
FPGmodel = jobj %>%
  invoke("setItemsCol", "items") %>%
  invoke("setMinConfidence", 0.03) %>%
  invoke("setMinSupport", 0.01) %>%
  invoke("fit", spark_dataframe(trx_agg))
By invoking fit, the FPGrowth algorithm is fitted and an FPGrowthModel object is returned, on which we can invoke associationRules to get the calculated rules in a Spark data frame:
rules = FPGmodel %>% invoke("associationRules")
The rules in the Spark data frame consist of an antecedent column (the left-hand side of the rule), a consequent column (the right-hand side of the rule) and a column with the confidence of the rule. Note that the antecedent and consequent are arrays of items! If needed we can split these arrays and collect the results to R for plotting or further analysis.
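One way to do that, sketched below under the assumption that sdf_register and Spark's concat_ws are used, is to register the returned Spark DataFrame as a dplyr table, flatten the antecedent and consequent arrays into comma-separated strings, and collect the result to R:

# register the Spark DataFrame returned by associationRules as a dplyr table
rules_tbl = sdf_register(rules, "rules")

# flatten the antecedent/consequent arrays into comma-separated strings
# (concat_ws is evaluated by Spark SQL) and bring the rules back to R
rules_r = rules_tbl %>%
  mutate(antecedent = concat_ws(", ", antecedent),
         consequent = concat_ws(", ", consequent)) %>%
  collect()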
The invoke statements and rule-extraction statements can of course be wrapped inside functions to make them more reusable. So given the aggregated transactions in a Spark table trx_agg, you can get something like:
GroceryRules = ml_fpgrowth(trx_agg) %>%
  ml_fpgrowth_extract_rules()

plot_rules(GroceryRules)
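A possible sketch of such wrapper functions, based on the invoke calls shown above (the actual helpers, including plot_rules, are in the GitHub script; the argument names and defaults here are my own assumptions):

# wrap the FPGrowth invoke calls in a reusable function;
# min_confidence and min_support defaults mirror the values used above
ml_fpgrowth = function(trx_tbl, items_col = "items",
                       min_confidence = 0.03, min_support = 0.01) {
  sc  = spark_connection(trx_tbl)
  uid = sparklyr:::random_string("fpgrowth_")
  invoke_new(sc, "org.apache.spark.ml.fpm.FPGrowth", uid) %>%
    invoke("setItemsCol", items_col) %>%
    invoke("setMinConfidence", min_confidence) %>%
    invoke("setMinSupport", min_support) %>%
    invoke("fit", spark_dataframe(trx_tbl))
}

# extract the association rules from a fitted FPGrowthModel
# and register them as a Spark table so dplyr verbs can be used
ml_fpgrowth_extract_rules = function(FPGmodel) {
  FPGmodel %>%
    invoke("associationRules") %>%
    sdf_register("rules")
}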
Conclusion
The complete R script can be found on my GitHub. If arules in R on your laptop is not workable anymore because of the size of your data, consider FPGrowth in Spark through sparklyr.
cheers, Longhow