Open Source is Opening Data to Predictive Analytics
This article by REvolution Computing CEO Norman Nie is crossposted from the Future of Open Source Forum.
The R Project: despite there being over 2 million users of this open-source language for statistical data analysis, you might not have heard of it … yet. You might have seen this feature in the New York Times last year, and you might have heard how REvolution Computing is enhancing and supporting R for commercial use. What was once a secret of drug-development statisticians at pharmaceutical companies, quants on Wall Street, and PhD-level statistical researchers around the globe (not to mention pioneers at Web 2.0 companies like Google and Facebook) is suddenly becoming mainstream. The reason? A perfect storm of the data deluge, open-source technology, and the rise of predictive analytics.
Predictive analytics, the practice of inferring meaningful relationships from vast quantities of data and using them to make predictions, is disrupting industries in every sector. You’ve probably seen its impact yourself: ever been surprised by Amazon apparently “reading your mind” with a suggested purchase, or by LinkedIn figuring out whom you know but aren’t yet connected with? That’s predictive analytics in action. By applying advanced statistical models to data, product designers, marketers, and sales organizations (basically, anyone who needs to understand the present or predict the future) can draw value from the data they’ve collected like never before.
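To make that concrete, here is a minimal sketch of the idea in R. It uses the built-in mtcars data set rather than any retailer's actual recommendation engine: fit a statistical model on the data you already have, then ask it to predict outcomes for cases it has never seen.

# A minimal sketch, assuming nothing beyond base R: fit a model on
# known data, then predict for new, unseen cases.
model <- lm(mpg ~ wt + hp, data = mtcars)   # learn how weight and power relate to mpg

# Two hypothetical new cars the model has never seen
new_cars <- data.frame(wt = c(2.5, 3.5), hp = c(100, 200))

predict(model, newdata = new_cars)          # predicted fuel efficiency for each

Real systems swap lm() for far richer models and mtcars for billions of rows, but the fit-then-predict pattern is the same.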
Predictive analytics is only possible with data, and lots of it. Just last week, the Economist published a nine-part special report on the Data Deluge. Companies like Nestlé and Walmart are collecting reams of data on individual products and consumers. And given that Nestlé (to take just one example) has more than 100,000 products in 200 countries, we’re talking about huge amounts of data being collected.
The world has largely solved the problem of collecting and storing these vast quantities of data (see David McFarlane’s post for a great review of the impact of FOSS here). But the real impact of routinely analyzing these data sets is only now being felt. It truly is a revolution: the information that can be teased out of these data is shaking many industries to their core. This quote from the Economist special report sums it up well:
“Revolutions in science have often been preceded by revolutions in measurement,” says Sinan Aral, a business professor at New York University. Just as the microscope transformed biology by exposing germs, and the electron microscope changed physics, all these data are turning the social sciences upside down.
Open-source software is playing a key role in this revolution. A noted analyst recently wrote that the most important factor influencing the spread of predictive analytics is the growing popularity of R. And in the Economist’s special report, the combination of R and Hadoop received special attention:
A free programming language called R lets companies examine and present big data sets, and free software called Hadoop now allows ordinary PCs to analyse huge quantities of data that previously required a supercomputer. It does this by parcelling out the tasks to numerous computers at once. This saves time and money. For example, the New York Times a few years ago used cloud computing and Hadoop to convert over 400,000 scanned images from its archives, from 1851 to 1922. By harnessing the power of hundreds of computers, it was able to do the job in 36 hours.
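The “parcelling out” idea isn’t unique to Hadoop; R can demonstrate it in miniature. Here is a minimal sketch using the parallel package that ships with R, with local worker processes standing in for Hadoop’s cluster of machines (the data and chunk counts are purely illustrative):

library(parallel)

x <- runif(1e6)                                       # a million data points
chunks <- split(x, rep(1:4, length.out = length(x)))  # parcel the work into 4 pieces

cl <- makeCluster(4)                         # four local workers stand in for machines
partial_sums <- parLapply(cl, chunks, sum)   # each worker processes its own chunk ("map")
stopCluster(cl)

total <- Reduce(`+`, partial_sums)           # combine the partial results ("reduce")
total                                        # matches sum(x), computed piecewise

Hadoop applies the same split-then-combine pattern, but across hundreds of machines and terabytes of data rather than a single computer.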
This revolution fills me with some pride: I started pushing for broad adoption of data analytics as a crucial element of science and business decision-making some 40 years ago, when I co-created SPSS (now part of IBM). The revolution began in scientific practice, and open-source R (co-created by REvolution board member Robert Gentleman) now represents its future. Today, companies across the Fortune 500 use R for their data analyses. It’s used in life sciences, financial services, defense technology, and other large industries requiring high-performance analytical computation.
In the coming months and years, I predict that open-source software will continue to be the driving force in analytical innovation. Open-source platforms like Hadoop, coupled with innovations in open-source file systems, can adapt to rapidly evolving data storage and processing requirements. And it’s open-source environments like R, with their worldwide community of researchers collaborating to push the boundaries of statistical analytics, that are most likely to provide the novel techniques required to tease ever more accurate predictions from these huge information-age datasets. Couple that with the backing of a commercial company to provide the scalability, usability, and integration with Web-based systems that businesses require to deploy predictive analytics, and you’ve truly got a REvolution in the making.