Solving Big Problems with Oracle R Enterprise, Part II
In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger because the amount of data is much bigger: so much bigger that the approach in the previous post won't scale to meet real-world needs.
From our previous post, here are the challenges we need to conquer:
- The actual data that needs to be used lives in a database, not in a spreadsheet
- The actual data is much, much bigger: too big to fit into normal R memory and too big to want to move across the network
- The overall process needs to run fast, much faster than a single processor allows
- The actual data needs to be kept secure, which is another reason not to move it out of the database and across the network
- And the process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes
In this post, we will show how we moved from the sample-data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof of concept.
Getting started with the Database
At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data, but we did not yet have the full customer data, so our database was empty. This was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA.
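(The snippet below is a minimal sketch of that step; the connection details are placeholders, and SimpleMWRRData is the sample data.frame from Part I.)

```r
library(ORE)

# Connect to the schema that will hold the data (connection details are placeholders)
ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
            password = "welcome1", all = TRUE)

# Turn the sample data.frame from Part I into a database table named IRR_DATA
ore.create(SimpleMWRRData, table = "IRR_DATA")

# Refresh the list of tables visible to this session
ore.sync()
ore.attach()

# IRR_DATA is now an ore.frame proxy that behaves like a normal data.frame
class(IRR_DATA)
head(IRR_DATA)
nrow(IRR_DATA)
```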
If we go to SQL*Plus, we can also check out our new IRR_DATA table:
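(For example, with a couple of ordinary queries; the column layout depends on the sample data, so this is just a sketch.)

```sql
-- Confirm the table exists and see how it is shaped
DESCRIBE IRR_DATA

SELECT COUNT(*) FROM IRR_DATA;

SELECT * FROM IRR_DATA WHERE ROWNUM <= 5;
```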
At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, and then called our IRR function.
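(A sketch of that test, assuming the table has an ACCOUNT column and that SimpleMWRR() from Part I takes a data.frame of one account's cash flows; the account id here is made up.)

```r
# Filter the ore.frame to a single account; the filtering happens in the database
one_acct <- ore.pull(IRR_DATA[IRR_DATA$ACCOUNT == "ACCT0001", ])

# one_acct is now an ordinary local data.frame
class(one_acct)

# Run the Part I rate-of-return calculation on it
SimpleMWRR(one_acct)
```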
This worked. No SQL coding required!
Going from Crawling to Walking
Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1
Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via “R CMD INSTALL myIRR”).
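(A sketch of that call; the wrapper function assumes SimpleMWRR() is provided by the myIRR package mentioned above.)

```r
# Split IRR_DATA by ACCOUNT inside the database and apply our function to each
# group; each embedded R engine sees only one account's rows at a time
results <- ore.groupApply(
  IRR_DATA,            # ore.frame to split
  IRR_DATA$ACCOUNT,    # grouping column
  function(dat) {
    library(myIRR)     # our package, installed on the database server
    SimpleMWRR(dat)
  })

# results is a list-like object of per-account values; bring them back locally
irr_by_account <- ore.pull(results)
head(irr_by_account)
```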
The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database splits the IRR_DATA table by ACCOUNT, sends the data for each account to an embedded R engine running on the database server to apply our R function, and then combines all the individual results from the calls to the R function.
This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).
Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop, the actual SimpleMWRR calculation is run by the embedded R engine on the database server, so the IRR_DATA never needs to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet.
Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.
From Walking to Running
ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is how we actually did these calculations in the customer proof of concept.
Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we define using the database's pipelined table function mechanisms. Here is the setup script:
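(The version below is a sketch following the general pattern in the ORE documentation: a REF CURSOR type over the input table plus a pipelined table function implemented by the ORE-supplied rqGroupEvalImpl. Names such as irrPkg and irr_dataGroupEval, and the exact clauses, are illustrative.)

```sql
-- A REF CURSOR type describing rows of the IRR_DATA table
CREATE OR REPLACE PACKAGE irrPkg AS
  TYPE cur IS REF CURSOR RETURN IRR_DATA%ROWTYPE;
END irrPkg;
/

-- A pipelined table function that partitions the input by ACCOUNT and hands
-- each group of rows to an embedded R engine on the database server
CREATE OR REPLACE FUNCTION irr_dataGroupEval(
    inp_cur  irrPkg.cur,      -- input rows
    par_cur  SYS_REFCURSOR,   -- optional extra parameters
    out_qry  VARCHAR2,        -- dummy select describing the output shape
    grp_col  VARCHAR2,        -- grouping column name
    exp_txt  CLOB)            -- name of the registered R script
  RETURN SYS.AnyDataSet
  PIPELINED PARALLEL_ENABLE (PARTITION inp_cur BY HASH (ACCOUNT))
  CLUSTER inp_cur BY (ACCOUNT)
  USING rqGroupEvalImpl;
/
```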
At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE’s rqScriptCreate.
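(A sketch of that registration; the script name SimpleMWRR_by_account and the returned columns are illustrative, and the body reuses SimpleMWRR() from the myIRR package installed on the database server.)

```sql
BEGIN
  sys.rqScriptCreate('SimpleMWRR_by_account',
    'function(dat) {
       library(myIRR)
       # dat holds the rows for one account; return one result row per account
       data.frame(ACCOUNT = dat$ACCOUNT[1],
                  IRR     = SimpleMWRR(dat))
     }');
END;
/
```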
Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes it should expect the R function to return. The fourth argument is the column of the input table to split/group by. The final argument is the name of the R function as you defined it when you called rqScriptCreate().
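(Putting that together, the test query looks roughly like this; the dummy select and the script name follow the setup sketched above.)

```sql
SELECT *
  FROM TABLE(irr_dataGroupEval(
         CURSOR(SELECT * FROM IRR_DATA),  -- 1: input cursor (add WHERE clauses here as needed)
         NULL,                            -- 2: cursor of additional inputs (none needed here)
         'SELECT CAST(''x'' AS VARCHAR2(30)) ACCOUNT, 1 IRR FROM dual',  -- 3: dummy select describing the output
         'ACCOUNT',                       -- 4: column to split/group by
         'SimpleMWRR_by_account'));       -- 5: R script registered via rqScriptCreate
```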
The Real-World Results
In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.
For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database Parallel Query features.
Let’s look at the SQL used in the customer proof:
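(A representative version of that query; the degree of parallelism and object names are illustrative, since the actual proof ran against the customer's staging tables.)

```sql
SELECT *
  FROM TABLE(irr_dataGroupEval(
         CURSOR(SELECT /*+ PARALLEL(t, 72) */ * FROM IRR_DATA t),  -- parallel hint on the input cursor
         NULL,
         'SELECT CAST(''x'' AS VARCHAR2(30)) ACCOUNT, 1 IRR FROM dual',
         'ACCOUNT',
         'SimpleMWRR_by_account'));
```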
Notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.
Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept:
From the screenshots, you can notice a few things (numbers 1 through 5 below correspond to the highlighted numbers on the images):
- The SQL completed in 110 seconds (1.8 minutes)
- We calculated rate of returns for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
- We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
- We ran with 72 degrees of parallelism spread across 4 database servers
- Most of our 110 seconds was spent in the "External Procedure call" event
- On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
- On average, each execution was passed 110 rows of data (103m detail rows/911k accounts)
- On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
- On average, we processed over 900,000 rows of database data in R per second (103m detail rows/110s)
R + Oracle R Enterprise: Best of R + Best of Oracle Database
This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.
This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operational and security issues). It also showed that we could calculate rates of return for 5 time periods for almost a million individual accounts in less than 2 minutes.
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to easily move data between Hadoop, R, and the Oracle Database.