Responding to the birth-rates analysis in this week's post on big-data analysis with Revolution R Enterprise, Luis Apiolaza asks at the Quantum Forest blog: do we really need to deal with big data in R?
> My basic question is why would I want to deal with all those 100 million records directly in R? Wouldn’t it make much more sense to reduce the data to a meaningful size using the original database, up there in the cloud, and download the reduced version to continue an in-depth analysis?
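Luis's suggested workflow is easy to prototype from R itself. Here's a minimal sketch using the DBI package, with an SQLite file standing in for the cloud database; the `births` table and its columns are hypothetical, purely for illustration:

```r
# A sketch of "reduce in the database, analyze in R". The table and
# column names below are invented for illustration.
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "births.sqlite")  # or any DBI backend

# Let the database do the heavy lifting: aggregate 100M rows server-side
# and pull down only the small result.
monthly <- dbGetQuery(con, "
  SELECT state, month, AVG(birth_weight) AS mean_weight, COUNT(*) AS n
  FROM births
  GROUP BY state, month
")

dbDisconnect(con)

# Continue the in-depth analysis on the reduced data in R
summary(monthly)
```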
As Luis points out (and as most of us know from experience), 90% of statistical data analysis is data preparation. Many "big data" problems are in fact analyses of small data sets that have been carefully (and often painfully) extracted from a data store we'd refer to today as "Big Data". And while we could use another tool to do that extraction, personally I'd prefer to do it in R myself. Not just because needing access to another tool usually means delays, authorizations, and having to ask a DBA nicely, but also because the extraction process itself (in my opinion) requires a certain level of statistical expertise. For me, at least, it's often an iterative process of identifying the variables I need, choosing the right way to do the aggregation/smoothing/dimension reduction, deciding how to handle missing values and data quality issues … the list goes on and on. Being able to do that extraction in the R language alone is a great boon, especially when the source data set is very large. That's why we created the rxDataStep function in RevoScaleR. (You can read more about rxDataStep in our new white paper, The RevoScaleR Data Step White Paper.)
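To give a flavor of that extraction step, here's a hedged sketch with rxDataStep (which requires Revolution R Enterprise); the .xdf file names, variables, and filter below are assumptions made up for this example, and the full details are in the white paper:

```r
# A sketch only: file paths, variables, and the filter are hypothetical.
# rxDataStep processes a large .xdf file in chunks, so the full data set
# never has to fit in memory.
library(RevoScaleR)

rxDataStep(
  inData       = "births_full.xdf",    # hypothetical 100M-row source file
  outFile      = "births_subset.xdf",  # reduced working set for analysis
  varsToKeep   = c("year", "state", "birth_weight"),
  rowSelection = year >= 2000 & !is.na(birth_weight),  # drop old/missing rows
  transforms   = list(weight_kg = birth_weight / 1000),
  overwrite    = TRUE
)
```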
Then again, some statistical problems simply do require analysis of very large data sets, wholesale. Some of the commenters on Luis's post provide their own examples, and Revolution Analytics' CEO Norman Nie has written a white paper identifying five situations where analysis of large data sets in R is useful:
- Use Data Mining to Make Predictions
- Make Predictive Models More Powerful
- Find and Understand Rare Events
- Extract and Analyze ‘Low Incidence Populations’
- Avoid Dependence on ‘Statistical Significance’
You can read Norman's explanations of these uses of Big Data in the white paper, The Rise of Big Data Spurs a Revolution in Big Analytics, available for download at the link below.
Revolution Analytics White Papers: The Rise of Big Data Spurs a Revolution in Big Analytics