We’re into the third day of Business Science Demo Week. Hopefully by now you’re getting a taste of some interesting and useful packages. For those that may have missed it, every day this week we are demo-ing an R package: tidyquant (Monday), timetk (Tuesday), sweep (Wednesday), tibbletime (Thursday) and h2o (Friday)! That’s five packages in five days! We’ll give you intel on what you need to know about these packages to go from zero to hero. Today is sweep, which has broom-style tidiers for forecasting. Let’s get going!
Previous Demo Week Demos
sweep: What’s It Used For?
sweep is used for tidying the forecast package workflow. Like broom is to the stats library, sweep is to the forecast package. It has useful functions including sw_tidy(), sw_glance(), sw_augment(), and sw_sweep(). We’ll check out each in this demo.
An added benefit of sweep and timetk is that if the ts-objects are created from time-based tibbles (tibbles with a date or datetime index), the date or datetime information is carried through the forecasting process as a timetk index attribute. Bottom Line: This means we can finally use dates when forecasting as opposed to the regularly spaced numeric dates that the ts-system uses!
Load Libraries
We’ll need four libraries today:
- sweep: For tidying the forecast package (like broom is to stats, sweep is to forecast)
- forecast: Package that includes ARIMA, ETS, and other popular forecasting algorithms
- tidyquant: For getting data and loading the tidyverse behind the scenes
- timetk: Toolkit for working with time series in R. We’ll use it to coerce from tbl to ts.
If you don’t already have them installed, you can install them with install.packages(). Then load the libraries as follows.
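A minimal sketch of the setup, loading the four packages listed above:

```r
# Load the four libraries used in this demo
library(sweep)      # broom-style tidiers for the forecast package
library(forecast)   # ARIMA, ETS, and other forecasting algorithms
library(tidyquant)  # data retrieval; loads the tidyverse behind the scenes
library(timetk)     # coercion between tbl and ts
```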
Data
We’ll use the same data as in the previous post, where we used timetk to forecast with time series machine learning. We get the data using the tq_get() function from tidyquant. The data comes from FRED: Beer, Wine, and Distilled Alcoholic Beverages Sales.
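A sketch of the data step, assuming the FRED series ID "S4248SM144NCEN" and a training window through the end of 2016 so that 2017 actuals remain available for comparison later (adjust the symbol and dates if yours differ):

```r
# Beer, Wine, and Distilled Alcoholic Beverages Sales from FRED
beer_sales_tbl <- tq_get(
    "S4248SM144NCEN",      # assumed FRED series ID for this data set
    get  = "economic.data",
    from = "2010-01-01",
    to   = "2016-12-31"
)

beer_sales_tbl
```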
It’s a good idea to visualize the data so we know what we’re working with. Visualization is particularly important for time series analysis and forecasting (as we see during time series machine learning). We’ll use tidyquant charting tools: mainly geom_ma(ma_fun = SMA, n = 12) to add a 12-period simple moving average to get an idea of the trend. We can also see there appears to be both trend (the moving average is increasing in a relatively linear pattern) and some seasonality (peaks and troughs tend to occur at specific months).
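A sketch of the chart, assuming the beer_sales_tbl object from the data step (tq_get() returns economic data with date and price columns):

```r
# Plot the series with a 12-month simple moving average
beer_sales_tbl %>%
    ggplot(aes(x = date, y = price)) +
    geom_line(color = palette_light()[[1]]) +
    geom_ma(ma_fun = SMA, n = 12) +
    labs(
        title    = "Beer, Wine, and Distilled Alcoholic Beverage Sales",
        subtitle = "With a 12-month simple moving average",
        x = "", y = "Sales"
    ) +
    theme_tq()
```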
Now that you have a feel for the time series we’ll be working with today, let’s move onto the demo!
DEMO: Tidy forecasting with forecast + sweep
We’ll use the combination of forecast and sweep to perform tidy forecasting.
Key Insight:
Forecasting using the forecast package is a non-tidy process that involves ts class objects. We have seen this situation before, where we can “tidy” these objects. For the stats library, we have broom, which tidies models and predictions. For the forecast package we now have sweep, which tidies models and forecasts.
Objective: We’ll work through an ARIMA analysis to forecast the next 12 months of time series data.
Step 1: Create ts object
Use timetk::tk_ts() to convert from tbl to ts. From the previous post, we learned that this has two benefits:
- It’s a consistent method to convert to and from ts.
- The ts-object contains a timetk_idx (timetk index) as an attribute, which is the original time-based index.
Here’s how to convert. Remember that ts-objects are regular time series, so we need to specify a start and a freq.
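A sketch of the coercion, assuming monthly data starting in January 2010 (the silent argument simply suppresses the message about the non-numeric date column being dropped):

```r
# Coerce from tbl to ts; the date column is dropped but kept as a timetk index
beer_sales_ts <- tk_ts(beer_sales_tbl, start = 2010, freq = 12, silent = TRUE)

beer_sales_ts
```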
We can check that the ts-object has a timetk_idx.
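Something like this, using timetk’s has_timetk_idx() helper:

```r
# Verify the timetk index was carried along
has_timetk_idx(beer_sales_ts)
#> [1] TRUE
```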
Great. This will be important when we use sw_sweep() later. Next, we’ll model using ARIMA.
Step 2A: Model using ARIMA
We can use the auto.arima() function from the forecast package to model the time series.
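A sketch of the modeling step, using the beer_sales_ts object from Step 1:

```r
# Fit an ARIMA model, letting auto.arima() select the order
fit_arima <- auto.arima(beer_sales_ts)

fit_arima
```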
Step 2B: Tidy the Model
Like broom tidies the stats package, we can use sweep functions to tidy the ARIMA model. Let’s examine three tidiers, which enable tidy model evaluation:
- sw_tidy(): Used to retrieve the model coefficients
- sw_glance(): Used to retrieve the model description and training set accuracy metrics
- sw_augment(): Used to get the model residuals
sw_tidy
The sw_tidy() function returns the model coefficients in a tibble (tidy data frame).
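Using the fit_arima object from above:

```r
# Model coefficients as a tidy data frame
sw_tidy(fit_arima)
```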
sw_glance
The sw_glance() function returns the training set accuracy measures in a tibble (tidy data frame). We use glimpse() to aid in quickly reviewing the model metrics.
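Again using the fit_arima object from above:

```r
# One-row tibble of the model description and training accuracy metrics
sw_glance(fit_arima) %>%
    glimpse()
```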
sw_augment
The sw_augment() function helps with model evaluation. We get the “.actual”, “.fitted” and “.resid” columns, which are useful in evaluating the model against the training data. Note that we can pass timetk_idx = TRUE to return the original date index.
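A sketch, again using the fit_arima object from above:

```r
# Actuals, fitted values, and residuals with the original date index
augment_fit_arima <- sw_augment(fit_arima, timetk_idx = TRUE)

augment_fit_arima
```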
We can visualize the residual diagnostics for the training data to make sure there is no leftover pattern.
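A sketch of the residual plot, assuming the augment_fit_arima tibble from above and the index and .resid column names returned by sw_augment():

```r
# Residuals over time: look for any leftover pattern
augment_fit_arima %>%
    ggplot(aes(x = index, y = .resid)) +
    geom_hline(yintercept = 0, color = "grey40") +
    geom_point(color = palette_light()[[1]], alpha = 0.5) +
    geom_smooth(method = "loess") +
    labs(title = "Residual Diagnostics", x = "", y = "Residual") +
    theme_tq()
```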
Step 3: Make a Forecast
Make a forecast using the forecast() function.
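For example, a 12-month horizon:

```r
# Forecast the next 12 months
fcast_arima <- forecast(fit_arima, h = 12)

class(fcast_arima)
#> [1] "forecast"
```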
One problem is that the forecast output is not “tidy”. We need it in a data frame if we want to work with it using the tidyverse functionality. The class is “forecast”, which is a ts-based object (its contents are ts-objects).
Step 4: Tidy the Forecast with sweep
We can use sw_sweep() to tidy the forecast output. As an added benefit, if the forecast-object has a timetk index, we can use it to return a date/datetime index as opposed to the regular numeric index from the ts-based object.
First, let’s check that the forecast-object has a timetk index (see the check below). If it does, we can use the timetk_idx argument when we apply sw_sweep().
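A sketch of the check, assuming has_timetk_idx() behaves for forecast objects as it does for ts-objects:

```r
# Check that the forecast object carries the timetk index
has_timetk_idx(fcast_arima)
```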
Now, use sw_sweep() to tidy the forecast output. Internally it projects a future time series index based on the “timetk_idx” attribute (this all happens because we created the ts-object with tk_ts() in Step 1). Bottom Line: This means we can finally use dates with the forecast package (as opposed to the regularly spaced numeric index that the ts-system uses)!!!
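A sketch of the sweep step, using the fcast_arima object from above:

```r
# Tidy the forecast output, returning dates instead of the numeric ts index
fcast_tbl <- sw_sweep(fcast_arima, timetk_idx = TRUE)

fcast_tbl
```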
Step 5: Compare Actuals vs Predictions
We can use tq_get() to retrieve the actual data. Note that we don’t have all of the data for comparison, but we can at least compare the first several months of actual values.
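A sketch, assuming the same FRED series ID as before and a window covering the months of 2017 that have been published:

```r
# Pull the actuals published since the training window ended
actuals_tbl <- tq_get(
    "S4248SM144NCEN",      # assumed FRED series ID
    get  = "economic.data",
    from = "2017-01-01",
    to   = "2017-10-27"
)
```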
Notice that we have the entire forecast in a tibble. We can now more easily visualize the forecast.
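A sketch of the chart, assuming the fcast_tbl and actuals_tbl objects from above. The value column carried through sw_sweep() is assumed to be price here (check names(fcast_tbl) and swap in yours if it differs); key, lo.80/hi.80, and lo.95/hi.95 are the label and prediction-interval columns sw_sweep() adds:

```r
# Visualize the forecast with prediction intervals and overlay the actuals
fcast_tbl %>%
    ggplot(aes(x = index, y = price, color = key)) +
    # 95% and 80% prediction intervals
    geom_ribbon(aes(ymin = lo.95, ymax = hi.95), fill = "#D5DBFF", color = NA) +
    geom_ribbon(aes(ymin = lo.80, ymax = hi.80), fill = "#596DD5", color = NA, alpha = 0.8) +
    # Historical values and forecast
    geom_line() +
    geom_point() +
    # Actuals for the forecast horizon
    geom_line(data = actuals_tbl, aes(x = date, y = price), color = palette_light()[[1]]) +
    geom_point(data = actuals_tbl, aes(x = date, y = price), color = palette_light()[[1]]) +
    labs(title = "Beer Sales Forecast: ARIMA", x = "", y = "Sales",
         subtitle = "sw_sweep() tidies the forecast output") +
    scale_color_tq() +
    theme_tq()
```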
We can investigate the error on our test set (actuals vs predictions).
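A sketch of the comparison, built on the assumed price column names above (after the join, the actual and predicted columns become price.x and price.y):

```r
# Join actuals to the forecasted values and compute errors
error_tbl <- left_join(
        actuals_tbl,
        fcast_tbl %>% filter(key == "forecast"),
        by = c("date" = "index")
    ) %>%
    rename(actual = price.x, pred = price.y) %>%
    select(date, actual, pred) %>%
    mutate(
        error     = actual - pred,
        error_pct = error / actual
    )

error_tbl
```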
And we can calculate a few residual metrics, sketched below. The MAPE error is approximately 4.3% from the actual value, which is slightly better than the simple linear regression from the timetk demo. Note that the RMSE is slightly worse.
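One way to compute them, using the hypothetical error_tbl from the previous step:

```r
# Summary error metrics on the test set
error_tbl %>%
    summarise(
        me   = mean(error),
        rmse = sqrt(mean(error^2)),
        mae  = mean(abs(error)),
        mape = mean(abs(error_pct)) * 100,
        mpe  = mean(error_pct) * 100
    )
```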
Next Steps
The sweep package is very useful for tidying the forecast package output. This demo showed some of the basics. Interested readers should check out the documentation, which goes into expanded detail on scaling analysis by groups and using multiple forecast models.
- Business Science Software Website
- sweep documentation
- sweep GitHub Page
- Business Science Insights Blog
Announcements
We have a busy couple of weeks. In addition to Demo Week, we have:
DataTalk
On Thursday, October 26 at 7PM EST, Matt will be giving a FREE LIVE #DataTalk on Machine Learning for Recruitment and Reducing Employee Attrition. You can sign up for a reminder at the Experian Data Lab website.
EARL
On Friday, November 3rd, Matt will be presenting at the EARL Conference on HR Analytics: Using Machine Learning to Predict Employee Turnover.
Courses
Based on recent demand, we are considering offering application-specific machine learning courses for Data Scientists. The content will be business problems similar to our popular articles:
- HR Analytics: Using Machine Learning to Predict Employee Turnover
- Sales Analytics: How To Use Machine Learning to Predict and Optimize Product Backorders
The student will learn from Business Science how to implement cutting edge data science to solve business problems. Please let us know if you are interested. You can leave comments as to what you would like to see at the bottom of the post in Disqus.
About Business Science
Business Science specializes in “ROI-driven data science”. Our focus is machine learning and data science in business applications. We help businesses that seek to add this competitive advantage but may not have the resources currently to implement predictive analytics. Business Science works with clients primarily in small to medium size businesses, guiding these organizations in expanding predictive analytics while executing on ROI generating projects. Visit the Business Science website or contact us to learn more!
Follow Business Science on Social Media
- @bizScienc is on twitter!
- Check out our Facebook page!
- Check us out on LinkedIn!
- Sign up for our insights blog to stay updated!
- If you like our software, star our GitHub packages!