
Time Series Forecasting Lab (Part 3) – Machine Learning with Workflows

[This article was first published on Foundations, and kindly contributed to R-bloggers.]

Cover photo by Solen Feyissa on Unsplash


Introduction

This is the third of a series of 6 articles about time series forecasting with panel data and ensemble stacking with R.

Through these articles I will be putting into practice what I have learned from the Business Science University course “DS4B 203-R: High-Performance Time Series Forecasting” [1], delivered by Matt Dancho. If you are looking to gain a high level of expertise in time series with R, I strongly recommend this course.

The objective of this article is to show how to fit machine learning models for time series with modeltime. Modeltime integrates time series models into the tidymodels ecosystem.

You will understand the notion of forecasting workflows, e.g. how to fit a model by adding its specification and corresponding preprocessing recipe (see Part 2) to a workflow. The notions of modeltime table and calibration table will also be very useful, since they allow you to evaluate and forecast all models at the same time for all time series (panel data). Finally, you will perform and plot forecasts on the test dataset. Hyperparameter tuning will be covered in the next article (Part 4).

Prerequisites

I assume you are already familiar with the topics, packages, and terms covered in the previous parts of this series.

Packages

The following packages must be loaded:

# 01 FEATURE ENGINEERING
library(tidyverse)  # loading dplyr, tibble, ggplot2, .. dependencies
library(timetk)  # using timetk plotting, diagnostics and augment operations
library(tsibble)  # for month to Date conversion
library(tsibbledata)  # for aus_retail dataset
library(fastDummies)  # for creating dummy variables from categorical variables
library(skimr) # for quick statistics

# 02 FEATURE ENGINEERING WITH RECIPES
library(tidymodels) # with workflow dependency

# 03 MACHINE LEARNING
library(modeltime) # ML models specifications and engines
library(tictoc) # measure training elapsed time

Fitting ML models

As depicted in the figure below, the end-to-end process is the following:

  1. Define 4 workflows, each comprising the model specification and engine, the recipe (which can be updated), and the fit function applied to the training dataset extracted from the splits object.
  2. Add all workflows into a modeltime table and calibrate them.
  3. Evaluate the models with default performance metrics.
  4. Perform and plot forecasts on the test dataset.

Load artifacts

Let us load our work from Part 2.

artifacts <- read_rds("feature_engineering_artifacts_list.rds")

splits            <- artifacts$splits
recipe_spec       <- artifacts$recipes$recipe_spec
Industries        <- artifacts$data$industries

Check training data

Remember, the training data is not simply training(splits): it is the output of the recipe specification recipe_spec once you prep it and juice it, as shown in the code snippet below:

> recipe_spec %>%
   prep() %>%
   juice() %>%
   glimpse()
Rows: 7,560
Columns: 52
$ rowid                                                                      <int> 14, 467, 920, 1373, 1826, 2279, 2732, ~
$ Month                                                                      <date> 1983-05-01, 1983-05-01, 1983-05-01, 1~
$ Month_sin12_K1                                                             <dbl> 0.5145554, 0.5145554, 0.5145554, 0.514~
$ Month_cos12_K1                                                             <dbl> 8.574571e-01, 8.574571e-01, 8.574571e-~
$ Turnover_lag12                                                             <dbl> -2.355895, -2.397538, -1.674143, -1.89~
$ Turnover_lag13                                                             <dbl> -2.011144, -2.195619, -1.713772, -1.89~
$ Turnover_lag12_roll_3                                                      <dbl> -2.216035, -2.299396, -1.771137, -1.97~
$ Turnover_lag13_roll_3                                                      <dbl> -2.183520, -2.296579, -1.693957, -1.89~
$ Turnover_lag12_roll_6                                                      <dbl> -2.213974, -2.275130, -1.824439, -2.02~
$ Turnover_lag13_roll_6                                                      <dbl> -2.197201, -2.278793, -1.787835, -1.98~
$ Turnover_lag12_roll_9                                                      <dbl> -2.190758, -2.244673, -1.826687, -2.01~
$ Turnover_lag13_roll_9                                                      <dbl> -2.213974, -2.275130, -1.824439, -2.02~
$ Turnover_lag12_roll_12                                                     <dbl> -2.095067, -2.152525, -1.819038, -1.97~
$ Turnover_lag13_roll_12                                                     <dbl> -2.147914, -2.200931, -1.828293, -2.00~
$ Turnover                                                                   <dbl> -2.042611, -1.866127, -1.383977, -1.53~
$ Month_index.num                                                            <dbl> -1.727712, -1.727712, -1.727712, -1.72~
$ Month_year                                                                 <dbl> -1.712475, -1.712475, -1.712475, -1.71~
$ Month_half                                                                 <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,~
$ Month_quarter                                                              <int> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,~
$ Month_month                                                                <int> 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,~
$ Industry_Cafes..restaurants.and.catering.services                          <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Cafes..restaurants.and.takeaway.food.services                     <dbl> 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Clothing.retailing                                                <dbl> 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Clothing..footwear.and.personal.accessory.retailing               <dbl> 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Department.stores                                                 <dbl> 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Electrical.and.electronic.goods.retailing                         <dbl> 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Food.retailing                                                    <dbl> 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,~
$ Industry_Footwear.and.other.personal.accessory.retailing                   <dbl> 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,~
$ Industry_Furniture..floor.coverings..houseware.and.textile.goods.retailing <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0,~
$ Industry_Hardware..building.and.garden.supplies.retailing                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0,~
$ Industry_Household.goods.retailing                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,~
$ Industry_Liquor.retailing                                                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,~
$ Industry_Newspaper.and.book.retailing                                      <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,~
$ Industry_Other.recreational.goods.retailing                                <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Other.retailing                                                   <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Other.retailing.n.e.c.                                            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Other.specialised.food.retailing                                  <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Pharmaceutical..cosmetic.and.toiletry.goods.retailing             <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Supermarket.and.grocery.stores                                    <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Industry_Takeaway.food.services                                            <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_01                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_02                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_03                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_04                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_05                                                         <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,~
$ Month_month.lbl_06                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_07                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_08                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_09                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_10                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_11                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
$ Month_month.lbl_12                                                         <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~

Notion of workflow

As per workflows, “A workflow is an object that can bundle together your pre-processing, modeling, and post-processing requests. The recipe prepping and model fitting can be executed using a single call to fit()”.

The following code snippet is a template of a workflow definition:

wflw_fit_<model> <- workflow() %>%
  add_model(
    spec = <modeltime model>(
      <model specification>
    ) %>% 
      set_engine(<original package>)
  ) %>%
  add_recipe( <recipe and its updates>) %>%
  fit(training(splits))

ML model specification and engine

You will find in the figure below the specification and engine(s) for each of the 4 models used in this article.

In this article we will keep default specification values, except for prophet_reg() and prophet_boost(), where we will make some modifications.

We will also measure the elapsed training time with the tictoc package. I’m using a Lenovo Windows 10 laptop, 16GB RAM, i5 CPU.

Random Forest

You only need to indicate the mode in the specification. Since we are predicting a numeric measure (here Turnover), the mode is regression, not classification.

For random forest, you need to update the recipe, since rand_forest() would otherwise also consider Month (the timestamp) as a feature; its role therefore needs to be updated to “indicator” (see Part 2 for explanations on roles).

# * RANDOM FOREST ----
tic()
wflw_fit_rf <- workflow() %>%
  add_model(
    spec = rand_forest(
      mode = "regression"
    ) %>% 
      set_engine("ranger")
  ) %>%
  add_recipe(recipe_spec %>% 
               update_role(Month, new_role = "indicator")) %>%
  fit(training(splits))
toc()

11.54 sec elapsed
> wflw_fit_rf
== Workflow [trained] =====================================================================================================
Preprocessor: Recipe
Model: rand_forest()

-- Preprocessor -----------------------------------------------------------------------------------------------------------
5 Recipe Steps

* step_other()
* step_timeseries_signature()
* step_rm()
* step_dummy()
* step_normalize()

-- Model ------------------------------------------------------------------------------------------------------------------
Ranger result

Call:
 ranger::ranger(x = maybe_data_frame(x), y = y, num.threads = 1,      verbose = FALSE, seed = sample.int(10^5, 1)) 

Type:                             Regression 
Number of trees:                  500 
Sample size:                      7560 
Number of independent variables:  49 
Mtry:                             7 
Target node size:                 5 
Variable importance mode:         none 
Splitrule:                        variance 
OOB prediction error (MSE):       0.0369327 
R squared (OOB):                  0.9547372

XGBoost

You only need to indicate the mode in the specification. Since we are forecasting a numeric measure (here Turnover), the mode is regression, not classification.

You also need to update the recipe, since boost_tree() would otherwise consider Month (the timestamp) as a feature; its role needs to be updated to “indicator” (see Part 2 for explanations on roles).

# * XGBOOST ----
tic()
wflw_fit_xgboost <- workflow() %>%
  add_model(
    spec = boost_tree(
      mode = "regression"
    ) %>%
      set_engine("xgboost")
  ) %>%
  add_recipe(recipe_spec %>% 
               update_role(Month, new_role = "indicator")) %>%
  fit(training(splits))
toc()

1.3 sec elapsed
> wflw_fit_xgboost
== Workflow [trained] =====================================================================================================
Preprocessor: Recipe
Model: boost_tree()

-- Preprocessor -----------------------------------------------------------------------------------------------------------
5 Recipe Steps

* step_other()
* step_timeseries_signature()
* step_rm()
* step_dummy()
* step_normalize()

-- Model ------------------------------------------------------------------------------------------------------------------
##### xgb.Booster
raw: 103.6 Kb 
call:
  xgboost::xgb.train(params = list(eta = 0.3, max_depth = 6, gamma = 0, 
    colsample_bytree = 1, colsample_bynode = 1, min_child_weight = 1, 
    subsample = 1, objective = "reg:squarederror"), data = x$data, 
    nrounds = 15, watchlist = x$watchlist, verbose = 0, nthread = 1)
params (as set within xgb.train):
  eta = "0.3", max_depth = "6", gamma = "0", colsample_bytree = "1", colsample_bynode = "1", min_child_weight = "1", subsample = "1", objective = "reg:squarederror", nthread = "1", validate_parameters = "TRUE"
xgb.attributes:
  niter
callbacks:
  cb.evaluation.log()
# of features: 49 
niter: 15
nfeatures : 49 
evaluation_log:
    iter training_rmse
       1      0.773716
       2      0.563249
---                   
      14      0.159565
      15      0.157188

Prophet

Since we are working with monthly data, there is no need to calculate either daily or weekly seasonality. Thus, we set the corresponding parameters to FALSE in the model specification. Although we know that prophet uses Fourier terms for the annual seasonality component, we will keep the Fourier features in the training data, so we will not update the recipe. Notice the number of external regressors displayed at the end.

# * PROPHET ----
tic()
wflw_fit_prophet <- workflow() %>%
  add_model(
    spec = prophet_reg(
      seasonality_daily  = FALSE, 
      seasonality_weekly = FALSE, 
      seasonality_yearly = TRUE
    ) %>% 
      set_engine("prophet")
  ) %>%
  add_recipe(recipe_spec) %>%
  fit(training(splits))
toc()

10.36 sec elapsed
> wflw_fit_prophet
== Workflow [trained] =====================================================================================================
Preprocessor: Recipe
Model: prophet_reg()

-- Preprocessor -----------------------------------------------------------------------------------------------------------
5 Recipe Steps

* step_other()
* step_timeseries_signature()
* step_rm()
* step_dummy()
* step_normalize()

-- Model ------------------------------------------------------------------------------------------------------------------
PROPHET w/ Regressors Model
- growth: 'linear'
- n.changepoints: 25
- changepoint.range: 0.8
- yearly.seasonality: 'TRUE'
- weekly.seasonality: 'FALSE'
- daily.seasonality: 'FALSE'
- seasonality.mode: 'additive'
- changepoint.prior.scale: 0.05
- seasonality.prior.scale: 10
- holidays.prior.scale: 10
- logistic_cap: NULL
- logistic_floor: NULL
- extra_regressors: 49

Prophet Boost

It combines two models: Prophet, which models the trend, and XGBoost, which models the Prophet residuals (errors) using the calendar features and external regressors.

Since XGBoost will take care of the seasonality, we must set the corresponding parameters to FALSE within the specification.

# * PROPHET BOOST ----
tic()
wflw_fit_prophet_boost <- workflow() %>%
  add_model(
    spec = prophet_boost(
      seasonality_daily  = FALSE, 
      seasonality_weekly = FALSE, 
      seasonality_yearly = FALSE
    ) %>% 
      set_engine("prophet_xgboost")
  ) %>%
  add_recipe(recipe_spec) %>%
  fit(training(splits))
toc()

3.16 sec elapsed
> wflw_fit_prophet_boost
== Workflow [trained] =====================================================================================================
Preprocessor: Recipe
Model: prophet_boost()

-- Preprocessor -----------------------------------------------------------------------------------------------------------
5 Recipe Steps

* step_other()
* step_timeseries_signature()
* step_rm()
* step_dummy()
* step_normalize()

-- Model ------------------------------------------------------------------------------------------------------------------
PROPHET w/ XGBoost Errors
---
Model 1: PROPHET
 - growth: 'linear'
 - n.changepoints: 25
 - changepoint.range: 0.8
 - yearly.seasonality: 'FALSE'
 - weekly.seasonality: 'FALSE'
 - daily.seasonality: 'FALSE'
 - seasonality.mode: 'additive'
 - changepoint.prior.scale: 0.05
 - seasonality.prior.scale: 10
 - holidays.prior.scale: 10
 - logistic_cap: NULL
 - logistic_floor: NULL

---
Model 2: XGBoost Errors

xgboost::xgb.train(params = list(eta = 0.3, max_depth = 6, gamma = 0, 
    colsample_bytree = 1, colsample_bynode = 1, min_child_weight = 1, 
    subsample = 1, objective = "reg:squarederror"), data = x$data, 
    nrounds = 15, watchlist = x$watchlist, verbose = 0, nthread = 1)

Models evaluation

Modeltime table

You can add all workflows to a single modeltime table with modeltime_table(), which:

  1. creates a table of models

  2. validates that all objects are models and that they have been fitted (trained)

  3. provides an ID and a description for each model

Model descriptions can be modified afterwards.

> submodels_tbl <- modeltime_table(
  wflw_fit_rf,
  wflw_fit_xgboost,
  wflw_fit_prophet,
  wflw_fit_prophet_boost
)

> submodels_tbl
# Modeltime Table
# A tibble: 4 x 3
  .model_id .model     .model_desc              
      <int> <list>     <chr>                    
1         1 <workflow> RANGER                   
2         2 <workflow> XGBOOST                  
3         3 <workflow> PROPHET W/ REGRESSORS    
4         4 <workflow> PROPHET W/ XGBOOST ERRORS
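If the auto-generated descriptions are not to your liking, they can be renamed. A minimal sketch using modeltime's update_model_description(); the new labels here are my own illustrative choices:

```r
# Rename auto-generated model descriptions by .model_id
# (the labels below are illustrative, not part of the original article)
submodels_tbl <- submodels_tbl %>%
  update_model_description(1, "RANDOM FOREST (ranger)") %>%
  update_model_description(2, "XGBOOST (default spec)")
```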

Calibration table

Calibration sets the stage for accuracy and forecast confidence calculations by computing predictions and residuals from out-of-sample data.

Two columns are added to the previous modeltime table: .type (the data split used for calibration, here Test) and .calibration_data (a nested tibble containing the out-of-sample predictions and residuals).

> calibrated_wflws_tbl <- submodels_tbl %>%
  modeltime_calibrate(new_data = testing(splits))

> calibrated_wflws_tbl
# Modeltime Table
# A tibble: 4 x 5
  .model_id .model     .model_desc               .type .calibration_data   
      <int> <list>     <chr>                     <chr> <list>              
1         1 <workflow> RANGER                    Test  <tibble [1,000 x 4]>
2         2 <workflow> XGBOOST                   Test  <tibble [1,000 x 4]>
3         3 <workflow> PROPHET W/ REGRESSORS     Test  <tibble [1,000 x 4]>
4         4 <workflow> PROPHET W/ XGBOOST ERRORS Test  <tibble [1,000 x 4]>

Notice that the number of rows of the test dataset (testing(splits)) is indeed 1,000 (20 industries × 50 months).
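To see what calibration actually stored, you can unnest the .calibration_data column. A sketch using tidyr's unnest() (loaded with tidyverse); the nested tibble holds the date column alongside the actual, predicted, and residual values:

```r
# Peek at the out-of-sample predictions and residuals of one workflow
calibrated_wflws_tbl %>%
  filter(.model_id == 3) %>%                  # the Prophet workflow
  select(.model_desc, .calibration_data) %>%
  unnest(.calibration_data)
```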

Model Evaluation

The results of calibration are used for accuracy calculations: the out-of-sample actual and predicted values are used to compute performance metrics. Refer to modeltime_accuracy() in the code snippet below:

> calibrated_wflws_tbl %>%
  modeltime_accuracy(testing(splits)) %>%
  arrange(rmse)

# A tibble: 4 x 9
  .model_id .model_desc               .type   mae  mape  mase smape  rmse   rsq
      <int> <chr>                     <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1         3 PROPHET W/ REGRESSORS     Test  0.189  21.6 0.417  21.5 0.240 0.856
2         4 PROPHET W/ XGBOOST ERRORS Test  0.204  24.8 0.451  20.8 0.287 0.753
3         2 XGBOOST                   Test  0.216  25.0 0.476  22.4 0.299 0.747
4         1 RANGER                    Test  0.235  25.4 0.519  25.0 0.312 0.765

The accuracy results table is sorted by RMSE (ascending order): Prophet has the lowest RMSE value (0.240) and also the highest R-squared (0.856).
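For a nicer presentation of these metrics, modeltime also provides table_modeltime_accuracy(), which renders the accuracy tibble as a formatted (optionally interactive) table. A sketch:

```r
# Display the accuracy metrics as an interactive, sortable table
calibrated_wflws_tbl %>%
  modeltime_accuracy() %>%
  arrange(rmse) %>%
  table_modeltime_accuracy(.interactive = TRUE)
```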

Forecast Plot

We visually check the models’ performance on the forecast plot.

The results of calibration can be used to perform forecasts with modeltime_forecast() and to plot the predictions of all four models against the actual data with plot_modeltime_forecast(); all 20 industries’ predictions are displayed at once. For the sake of clarity, the confidence interval display is disabled in the code snippet below.

calibrated_wflws_tbl %>%
  modeltime_forecast(
    new_data    = testing(splits),
    actual_data = artifacts$data$data_prepared_tbl,
    keep_data   = TRUE 
  ) %>%
  group_by(Industry) %>%
  plot_modeltime_forecast(
    .facet_ncol         = 4, 
    .conf_interval_show = FALSE,
    .interactive        = TRUE
  )

It is hard to see the coloured predictions of all four models, even though you can zoom in on each subplot (as I did on the left of the plot).

Since the models’ hyperparameters have not been tuned yet, we will not dig too deep into the analysis of the results. For now, you may notice in the first column (zoomed subplots on the left) how well the models match the actual data in the second, third, and fourth subplots, while in the first subplot they still have trouble catching up with the trend.
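When the 20-facet plot gets too crowded, one option is to filter the forecast down to a single industry before plotting. A sketch, reusing the Industries vector loaded from the artifacts (any level of the Industry column works):

```r
# Forecast plot for a single industry (clearer than 20 facets at once)
calibrated_wflws_tbl %>%
  modeltime_forecast(
    new_data    = testing(splits),
    actual_data = artifacts$data$data_prepared_tbl,
    keep_data   = TRUE
  ) %>%
  filter(Industry == Industries[1]) %>%
  plot_modeltime_forecast(
    .conf_interval_show = FALSE,
    .interactive        = TRUE
  )
```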

Save your work

workflow_artifacts <- list(

  workflows = list(

    wflw_random_forest = wflw_fit_rf,
    wflw_xgboost = wflw_fit_xgboost,
    wflw_prophet = wflw_fit_prophet,
    wflw_prophet_boost = wflw_fit_prophet_boost

  ),

  calibration = list(calibration_tbl = calibrated_wflws_tbl)

)

workflow_artifacts %>%
  write_rds("workflows_artifacts_list.rds")

Conclusion

In this article you have learned how to fit 4 machine learning models (Random Forest, XGBoost, Prophet, and Prophet Boost) with forecasting workflows using the modeltime package.

A workflow comprises a model specification and a preprocessing recipe which can be modified on the fly.

The modeltime and calibration tables help to evaluate and perform predictions (here on the test dataset) for all 4 models on all 20 time series (panel data).

We haven’t performed hyperparameter tuning yet; this will be covered in Part 4.

References

[1] Dancho, Matt, “DS4B 203-R: High-Performance Time Series Forecasting”, Business Science University.
