Hyperparameter Tuning Forecasts in Parallel with Modeltime
Speed up forecasting with modeltime's new built-in parallel processing.
Fitting many time series models can be an expensive process. To help speed up computation, modeltime now includes parallel processing: support for high-performance computing that spreads the model fitting steps across multiple CPU cores or clusters.
Highlights
- We now have a new workflow for forecast model fitting with parallel processing that is much faster when creating many forecast models.
- With 2 cores we got an immediate 30%-40% boost in performance. With more expensive processes and more CPU cores, the performance gains grow even larger.
- It's perfect for hyperparameter tuning. See create_model_grid() for filling model specs with hyperparameters.
- The workflow is simple. Just use parallel_start(6) to fire up 6 cores, and control_fit_workflowset(allow_par = TRUE) to tell modeltime_fit_workflowset() to run in parallel.
Forecast Hyperparameter Tuning Tutorial
Speed up forecasting using multiple processors
In this tutorial, we go through a common Hyperparameter Tuning workflow that shows off the modeltime
parallel processing integration and support for workflowsets
from the tidymodels ecosystem. Hyperparameter tuning is an expensive process that can benefit from parallelization.
If you like what you see, I have an Advanced Time Series Course where you will learn the foundations of the growing Modeltime Ecosystem.
Time Series Forecasting Article Guide:
This article is part of a series of software announcements on the Modeltime Forecasting Ecosystem.
Like these articles? Register to stay in the know on new cutting-edge R software like modeltime.
What is Modeltime?
A growing ecosystem for tidymodels forecasting
Modeltime is a growing ecosystem of forecasting packages used to develop scalable forecasting systems for your business.
The Modeltime Ecosystem extends tidymodels
, which means any machine learning algorithm can now become a forecasting algorithm.
The Modeltime Ecosystem includes packages such as modeltime, modeltime.ensemble, modeltime.resample, modeltime.gluonts, and timetk.
Out-of-the-Box
Parallel Processing Functionality Included
The newest feature of the modeltime
package is parallel processing functionality. Modeltime comes with:
- Use of parallel_start() and parallel_stop() to simplify the parallel processing setup.
- Use of create_model_grid() to help generate parsnip model specs from dials parameter grids.
- Use of modeltime_fit_workflowset() for initially fitting many models in parallel using workflowsets from the tidymodels ecosystem.
- Use of modeltime_refit() to refit models in parallel.
- Use of control_fit_workflowset() and control_refit() for controlling the fitting and refitting of many models.
Download the Cheat Sheet
As you go through this tutorial, it may help to use the Ultimate R Cheat Sheet. Page 3 covers the Modeltime Forecasting Ecosystem with links to key documentation.
Forecasting Ecosystem Links (Ultimate R Cheat Sheet)
How to Use Parallel Processing
Let’s go through a common Hyperparameter Tuning workflow that shows off the modeltime
parallel processing integration and support for workflowsets
from the tidymodels ecosystem.
Libraries
Load the following libraries. Note that the new parallel processing functionality is available in Modeltime 0.6.1 (or greater).
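The original post's code chunk is not reproduced here, so below is a minimal sketch of the library setup, assuming the standard Modeltime tutorial stack:

```r
# Parallel processing support requires modeltime >= 0.6.1
library(modeltime)

# Machine learning and workflowsets come from the tidymodels ecosystem
library(tidymodels)
library(workflowsets)

# Time series helpers and data manipulation
library(timetk)
library(tidyverse)
```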
Setup Parallel Backend
I’ll set up this tutorial to use two (2) cores.
- To simplify creating clusters, modeltime includes parallel_start(). We can simply supply the number of cores we'd like to use.
- To detect how many physical cores you have, you can run parallel::detectCores(logical = FALSE).
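As a sketch, starting a 2-core backend looks like this:

```r
# Detect the number of physical cores on this machine
parallel::detectCores(logical = FALSE)

# Fire up a 2-core parallel backend for model fitting
parallel_start(2)
```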
Load Data
We'll use the walmart_sales_weekly dataset from timetk. It has seven (7) time series that represent weekly sales demand by department.
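A minimal sketch of the data step; the column selection is an assumption based on the dataset's structure:

```r
# Weekly sales demand for 7 departments, from the timetk package
data_tbl <- walmart_sales_weekly %>%
    select(id, Date, Weekly_Sales)

data_tbl
```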
Train / Test Splits
Use time_series_split()
to make a temporal split for all seven time series.
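A sketch of the split, assuming the data is stored in data_tbl; the 6-month assessment window is an assumption:

```r
# One temporal split applied across all 7 series:
# everything before the assess window trains, the window tests
splits <- data_tbl %>%
    time_series_split(
        assess     = "6 months",
        cumulative = TRUE
    )
```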
Recipe
Make a preprocessing recipe that generates time series features.
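A sketch of a typical Modeltime feature-engineering recipe, assuming the splits object from the previous step; the exact steps in the original post may differ:

```r
# Generate calendar features from the date column, then
# prepare the predictors for the xgboost engine
recipe_spec <- recipe(Weekly_Sales ~ ., data = training(splits)) %>%
    step_timeseries_signature(Date) %>%   # adds year, month, week, etc.
    step_rm(Date) %>%                     # xgboost can't use raw dates
    step_zv(all_predictors()) %>%         # drop zero-variance features
    step_dummy(all_nominal_predictors(), one_hot = TRUE)
```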
Model Specifications
We’ll make 6 xgboost
model specifications using boost_tree()
and the “xgboost” engine. These will be combined with the recipe
from the previous step using a workflow_set()
in the next section.
The general idea
We can vary the learn_rate parameter to see its effect on forecast error.
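A sketch of the manual approach, where each spec is identical except for learn_rate:

```r
# Spec 1 of 6: only learn_rate varies between the specs
model_spec_1 <- boost_tree(learn_rate = 0.001) %>%
    set_engine("xgboost") %>%
    set_mode("regression")

model_spec_2 <- boost_tree(learn_rate = 0.010) %>%
    set_engine("xgboost") %>%
    set_mode("regression")

# ...and so on for the remaining four learn_rate values
```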
A faster way
You may notice that this is a lot of repeated code to adjust the learn_rate
. To simplify this process, we can use create_model_grid()
.
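A sketch of the grid approach; the learn_rate values shown are illustrative:

```r
# A dials-style grid of learn_rate values becomes a column of
# parsnip model specs via create_model_grid()
model_tbl <- tibble(
    learn_rate = c(0.001, 0.010, 0.100, 0.350, 0.500, 0.650)
) %>%
    create_model_grid(
        f_model_spec = boost_tree,
        engine_name  = "xgboost",
        mode         = "regression"
    )
```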
Extracting the model list
We can extract the model list for use with our workflowset next. This is the same result as if we had placed the six manually generated model specs into a list().
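Assuming the model_tbl returned by create_model_grid(), the specs live in its .models list-column:

```r
# Pull the parsnip model specs out of the grid tibble
model_list <- model_tbl$.models
```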
Workflowsets
With the workflow_set()
function, we can combine the 6 xgboost models with the 1 recipe to return six (6) combinations of recipe and model specifications. These are currently untrained (unfitted).
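A sketch, assuming the recipe_spec and model_list objects from the previous steps:

```r
# Cross 1 recipe with 6 model specs: 6 untrained workflows
model_wfset <- workflow_set(
    preproc = list(recipe_spec),
    models  = model_list,
    cross   = TRUE
)
```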
Parallel Training (Fitting)
We can train each of the combinations in parallel.
Controlling the Fitting Process
Each fitting function in modeltime
has a “control” function:
- control_fit_workflowset() for modeltime_fit_workflowset()
- control_refit() for modeltime_refit()
The control functions help the user control the verbosity (adding remarks while training) and set up parallel processing. We can see the output when verbose = TRUE
and allow_par = TRUE
.
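For reference, a control object with parallelism and verbosity switched on is created like this:

```r
# Verbose, parallel-enabled control for workflowset fitting
control_fit_workflowset(
    verbose   = TRUE,
    allow_par = TRUE
)
```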
- allow_par: Whether or not the user has indicated that parallel processing should be used.
  - If the user has set up parallel processing externally, the clusters will be reused.
  - If the user has not set up parallel processing, the fitting (training) process will set up parallel processing internally and shut it down afterward. Note that this is more expensive and usually adds around 10-15 seconds of setup time.
- verbose: Will return important messages showing the progress of the fitting operation.
- cores: The number of cores that the user has set up. Since we've already set up doParallel to use 2 cores, the control recognizes this.
- packages: The packages that will be sent to each of the workers.
Fitting Using Parallel Backend
We use the modeltime_fit_workflowset()
and control_fit_workflowset()
together to train the unfitted workflowset in parallel.
This returns a modeltime table.
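A sketch, assuming the model_wfset and splits objects defined earlier:

```r
# Train all 6 workflows in parallel on the training split
model_parallel_tbl <- model_wfset %>%
    modeltime_fit_workflowset(
        data    = training(splits),
        control = control_fit_workflowset(
            verbose   = TRUE,
            allow_par = TRUE
        )
    )
```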
Comparison to Sequential Backend
We can compare to a sequential backend. We get a slight performance boost. Note that this performance benefit increases with the size of the training task.
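The sequential run differs only in the control settings (allow_par = FALSE is the default):

```r
# Same fit, but executed sequentially for timing comparison
model_sequential_tbl <- model_wfset %>%
    modeltime_fit_workflowset(
        data    = training(splits),
        control = control_fit_workflowset(allow_par = FALSE)
    )
```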
Accuracy Assessment
We can review the forecast accuracy. We can see that Model 5 has the lowest MAE.
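A sketch of the accuracy step, assuming the fitted model_parallel_tbl from above:

```r
# Calibrate on the test set, then compute accuracy metrics
model_parallel_tbl %>%
    modeltime_calibrate(new_data = testing(splits)) %>%
    modeltime_accuracy() %>%
    table_modeltime_accuracy(.interactive = FALSE)
```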
Accuracy Table

| .model_id | .model_desc | .type | mae | mape | mase | smape | rmse | rsq |
|---|---|---|---|---|---|---|---|---|
| 1 | XGBOOST | Test | 55572.50 | 98.52 | 1.63 | 194.17 | 66953.92 | 0.96 |
| 2 | XGBOOST | Test | 48819.23 | 86.15 | 1.43 | 151.49 | 58992.30 | 0.96 |
| 3 | XGBOOST | Test | 13426.89 | 21.69 | 0.39 | 25.06 | 17376.53 | 0.98 |
| 4 | XGBOOST | Test | 3699.94 | 8.94 | 0.11 | 8.68 | 5163.37 | 0.98 |
| 5 | XGBOOST | Test | 3296.74 | 7.30 | 0.10 | 7.37 | 5166.48 | 0.98 |
| 6 | XGBOOST | Test | 3612.70 | 8.15 | 0.11 | 8.24 | 5308.19 | 0.98 |
Forecast Assessment
We can visualize the forecast.
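A sketch of the forecast plot, assuming the data_tbl, splits, and model_parallel_tbl objects from the previous steps:

```r
# Forecast the test window and plot against the actual data
model_parallel_tbl %>%
    modeltime_calibrate(new_data = testing(splits)) %>%
    modeltime_forecast(
        new_data    = testing(splits),
        actual_data = data_tbl,
        keep_data   = TRUE
    ) %>%
    group_by(id) %>%                           # one facet per series
    plot_modeltime_forecast(.interactive = FALSE)
```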
Closing Clusters
We can close the parallel clusters using parallel_stop().
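One call shuts down the cluster we started earlier:

```r
# Release the workers started with parallel_start(2)
parallel_stop()
```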
It gets better
You've just scratched the surface. Here's what's coming…
The Modeltime Ecosystem functionality is much more feature-rich than what we've covered here (I couldn't possibly cover everything in this post).
Here’s what I didn’t cover:
-
Feature Engineering: We can make this forecast much more accurate by including features from competition-winning strategies
-
Ensemble Modeling: We can stack H2O Models with other models not included in H2O like GluonTS Deep Learning.
-
Deep Learning: We can use GluonTS Deep Learning for developing high-performance, scalable forecasts.
So how are you ever going to learn time series analysis and forecasting?
You’re probably thinking:
- There’s so much to learn
- My time is precious
- I’ll never learn time series
I have good news that will put those doubts behind you.
You can learn time series analysis and forecasting in hours with my state-of-the-art time series forecasting course.
Advanced Time Series Course
Become the time series expert in your organization.
My Advanced Time Series Forecasting in R course is available now. You'll learn timetk and modeltime plus the most powerful time series forecasting techniques available, like GluonTS Deep Learning. Become the time series domain expert in your organization.
Advanced Time Series Course.
You will learn:
- Time Series Foundations - Visualization, Preprocessing, Noise Reduction, & Anomaly Detection
- Feature Engineering using lagged variables & external regressors
- Hyperparameter Tuning - For both sequential and non-sequential models
- Time Series Cross-Validation (TSCV)
- Ensembling Multiple Machine Learning & Univariate Modeling Techniques (Competition Winner)
- Deep Learning with GluonTS (Competition Winner)
- and more.
Unlock the High-Performance Time Series Course
Project Roadmap, Future Work, and Contributing to Modeltime
Modeltime is a growing ecosystem of packages that work together for forecasting and time series analysis. Here are several useful links:
-
Modeltime Ecosystem Roadmap on GitHub - See the past development and future trajectory. Did we miss something? Make a suggestion.
-
Business Science data science blog - I announce all Modeltime Software happenings
Acknowledgements
I'd like to acknowledge a Business Science University student who is part of the BSU Modeltime Dev Team. Alberto González Almuiña has helped BIG TIME with the development of modeltime's parallel processing functionality, contributing the initial software design. His effort is truly appreciated.
Have questions about Modeltime?
Make a comment in the chat below.
And, if you plan on using modeltime
for your business, it’s a no-brainer: Join my Time Series Course.