Ensemble learning for time series forecasting in R
Ensemble learning methods are widely used nowadays because of the improvement in predictive performance they offer. Ensemble learning combines multiple predictions (forecasts) from one or several methods to surpass the accuracy of a single prediction and to avoid possible overfitting. In the domain of time series forecasting, the situation is somewhat complicated by dynamic changes in incoming data. However, when a single regression model is used for forecasting, time dependency is not an obstacle; we can tune the model at the current position of a sliding window. For this reason, in this post I will describe two simple ensemble learning methods: Bagging and Random Forest. Bagging will be used in combination with the two simple regression tree methods used in the previous post (RPART and CTREE). I will not repeat most of the things mentioned there, so read it first if you haven't already: Using regression trees for forecasting double-seasonal time series with trend.
You will learn in this post how to:
- tune simple regression tree methods by Bagging
- set important hyperparameters of decision tree methods and randomize them in Bagging
- use the Random Forest method for forecasting time series
- set hyperparameters of Random Forest and find their optimal values by the grid search method
Time series data of electricity consumption
As in previous posts, I will use smart meter data of electricity consumption to demonstrate forecasting of a seasonal time series. The dataset consists of the aggregated electricity load of consumers from an anonymous area. The time series is 17 weeks long.
Firstly, let's load all of the packages needed for data analysis, modeling and visualization.
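A minimal set of packages for this workflow could look like the following sketch (the exact list in the original script may differ slightly):

```r
# Packages assumed for this workflow; the original script may load others
library(feather)       # read_feather
library(data.table)    # data manipulation, melt
library(rpart)         # RPART (CART) regression trees
library(party)         # CTREE regression trees
library(randomForest)  # Random Forest
library(forecast)      # msts, mstl, auto.arima, fourier
library(ggplot2)       # visualization
```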
Now read the mentioned time series data with `read_feather` into a `data.table`. The dataset can be found in my GitHub repository; the file name is DT_load_17weeks. Also store the dates and the period of the time series, which is 48 (half-hourly measurements). For data visualization, I store my favorite ggplot theme settings with the `theme` function.
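Reading the file and storing the metadata could look like this sketch; the column name `date` and the theme object `theme_ts` are my assumptions, not necessarily the original script's names:

```r
# Read the feather file (from the GitHub repo) into a data.table
DT <- as.data.table(read_feather("DT_load_17weeks"))

# Dates covered by the series and the number of measurements per day
n_date <- unique(DT[, date])  # assumes a 'date' column
period <- 48                  # half-hourly data -> 48 values per day

# A simple stored ggplot theme (stand-in for the settings used in the post)
theme_ts <- theme_bw() +
  theme(panel.border = element_blank(),
        panel.grid.minor = element_blank())
```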
I will use three weeks of electricity consumption data for training the regression tree methods. Forecasts will be performed one day ahead. Let's extract the train and test sets from the dataset.
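A sketch of the split, under the same column-name assumptions (21 training days, day 22 as the test day):

```r
# Three weeks (21 days) for training, the following day for testing
data_train <- DT[date %in% n_date[1:21]]
data_test  <- DT[date %in% n_date[22]]
```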
And visualize the train part:
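For example (assuming columns named `date_time` and `value` hold the timestamps and the load):

```r
# Plot the three-week training window
ggplot(data_train, aes(date_time, value)) +
  geom_line() +
  labs(x = "Time", y = "Load (kW)") +
  theme_ts
```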
Bagging
Bagging, or bootstrap aggregating, is an ensemble learning meta-algorithm used to improve prediction accuracy and to avoid overfitting. The algorithm is very simple. The first step is sampling the training dataset with replacement with some defined sample ratio (e.g. 0.7). Then a model is trained on the new training set. This procedure is repeated N_boot times (e.g. 100). The final ensemble prediction is simply the average of the N_boot predictions. The median can also be used for aggregating the predictions, and it will be used in this post.
Bagging + RPART
The first "bagged" method is the RPART (CART) tree. The training set consists of the electricity load lagged by one day and double-seasonal Fourier terms (daily and weekly seasonality). The electricity consumption is first detrended by STL decomposition, and the trend part is forecasted (modeled) by ARIMA (the `auto.arima` function). The seasonal and remainder parts are then forecasted by the regression tree model. A more detailed description and explanations are in my previous post about regression tree methods. Let's define the train and test data (`matrix_train` and `matrix_test`).
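A sketch of the feature construction, following the approach from the previous post (the number of Fourier term pairs, K, is an illustrative choice; with K = 2 we get 8 Fourier features plus the lag, i.e. 9 features in total):

```r
# Double-seasonal time series: 48 half-hours per day, 7 days per week
data_ts <- msts(data_train$value, seasonal.periods = c(period, period * 7))

# Detrend by STL decomposition; model and forecast the trend with ARIMA
decomp <- mstl(data_ts)
trend_for <- as.vector(forecast(auto.arima(ts(decomp[, "Trend"])),
                                h = period)$mean)

# Detrended series = original minus trend (seasonal + remainder parts)
new_load <- as.numeric(data_ts - decomp[, "Trend"])

# Daily and weekly Fourier terms as features (syntactic names for formulas)
K <- 2
fuur <- fourier(data_ts, K = c(K, K))
colnames(fuur) <- make.names(colnames(fuur))

# Response and features: Fourier terms plus the load lagged by one day
N <- length(new_load)
window <- (N / period) - 1
matrix_train <- data.table(Load = tail(new_load, window * period),
                           fuur[(period + 1):N, ],
                           Lag = new_load[1:(window * period)])

# Test features: future Fourier terms and the last day's load as the lag
fuur_test <- fourier(data_ts, K = c(K, K), h = period)
colnames(fuur_test) <- make.names(colnames(fuur_test))
matrix_test <- data.table(fuur_test,
                          Lag = new_load[(window * period + 1):N])
```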
I will perform 100 bootstrapped forecasts by RPART, which will be stored in a matrix of size \( 100\times 48 \) (`pred_mat`). Four additional parameters will also be sampled (randomized) to avoid overfitting. The sample ratio for sampling the training set is drawn from the range 0.7 to 0.9. Three hyperparameters of RPART are sampled as well: `minsplit`, `maxdepth` and the complexity parameter `cp`. The hyperparameters are sampled around the values set in my previous post's modeling.
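The loop could look like the following sketch; the sampling ranges for `minsplit`, `maxdepth` and `cp` are illustrative values close to the settings from the previous post:

```r
# 100 bootstrapped RPART forecasts with randomized hyperparameters
N_boot <- 100
pred_mat <- matrix(0, nrow = N_boot, ncol = period)

for (i in 1:N_boot) {
  # sample rows with replacement, with a ratio drawn from [0.7, 0.9]
  ratio <- runif(1, 0.7, 0.9)
  sam <- sample(nrow(matrix_train), floor(ratio * nrow(matrix_train)),
                replace = TRUE)

  # randomized RPART hyperparameters (illustrative ranges)
  tree <- rpart(Load ~ ., data = matrix_train[sam],
                control = rpart.control(minsplit = sample(2:4, 1),
                                        maxdepth = sample(26:30, 1),
                                        cp = runif(1, 0.0001, 0.001)))

  # forecast the detrended part and add the ARIMA trend forecast back
  pred_mat[i, ] <- predict(tree, matrix_test) + trend_for
}
```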
Let's compute the median of the forecasts and visualize all created forecasts. For the `ggplot` visualization, we must apply the `melt` function to the matrix of forecasts `pred_mat`.
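A sketch of the aggregation and the plot:

```r
# Median ensemble forecast
pred_ens_rpart <- apply(pred_mat, 2, median)

# Reshape the forecast matrix to long format for ggplot
pred_melt <- melt(data.table(ID = 1:N_boot, pred_mat), id.vars = "ID",
                  variable.name = "Time", value.name = "Load")

ggplot(pred_melt, aes(as.integer(Time), Load, group = ID)) +
  geom_line(colour = "grey70", alpha = 0.6) +
  geom_line(data = data.table(ID = 0, Time = 1:period,
                              Load = pred_ens_rpart),
            colour = "red", size = 1) +
  labs(x = "Time (half-hours of the day)", y = "Load (kW)") +
  theme_ts
```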
The red line is the median of the forecasts. We can see that the forecasts created by RPART have a typical rectangular shape, but the final ensemble forecast has nice smooth behaviour. For comparison, compute the forecast by RPART on the whole training set and compare it with the previous results. I am using the function `RpartTrend` from my previous post.
The blue line is the simple RPART forecast. The difference between the ensemble forecast and the simple one is not very evident in this scenario.
Bagging with CTREE
The second "bagged" regression tree method is CTREE. I will randomize only the `mincriterion` hyperparameter of the CTREE method, which decides about splitting a node. The sample ratio is, of course, sampled (randomized) as well, as in the previous case with RPART.
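A sketch of the CTREE bagging loop (using the party package; the `mincriterion` sampling range is an illustrative choice):

```r
# 100 bootstrapped CTREE forecasts with a randomized mincriterion
pred_mat_ctree <- matrix(0, nrow = N_boot, ncol = period)

for (i in 1:N_boot) {
  ratio <- runif(1, 0.7, 0.9)
  sam <- sample(nrow(matrix_train), floor(ratio * nrow(matrix_train)),
                replace = TRUE)

  # mincriterion = 1 - significance level required to split a node
  tree <- ctree(Load ~ ., data = data.frame(matrix_train[sam]),
                controls = ctree_control(mincriterion = runif(1, 0.88, 0.97)))

  pred_mat_ctree[i, ] <- as.vector(predict(tree, data.frame(matrix_test))) +
    trend_for
}
```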
Let’s visualize results:
Again, the behaviour of the bootstrapped forecasts is very rectangular, and the final ensemble forecast is much smoother. Compare it also with the forecast performed on the whole training set. I am using the function `CtreeTrend` from my previous post.
The simple forecast is a little bit more rectangular than the ensemble one.
Random Forest
Random Forest is an improvement of the Bagging ensemble learning method. It uses a modified tree learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. This process is sometimes called "feature bagging". Classical Bagging is, of course, also used in the method. The hyperparameters of Random Forest are similar to those of the RPART method. Additional ones are the number of trees (`ntree`) and the number of variables sampled at each candidate split (`mtry`). Variable importance can also be incorporated into the learning process in order to enhance performance. Random Forest is implemented in R by the `randomForest` package and the same-named function. The default settings for the hyperparameters are `mtry` = p/3 (where p is the number of features), so 9/3 = 3 in our case, and `nodesize` = 5. I will set the number of trees to 1000.
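A sketch of the fit on the detrended training data from above, with the defaults for `mtry` and `nodesize` made explicit:

```r
# Random Forest on the detrended series; importance = TRUE enables
# both importance measures for varImpPlot
rf_model <- randomForest(Load ~ ., data = data.frame(matrix_train),
                         ntree = 1000, mtry = 3, nodesize = 5,
                         importance = TRUE)
```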
Since Random Forest is an ensemble of CART trees, we can compute variable importance. Checking variable importance is useful for extracting only the most valuable features and for analyzing dependencies and correlations in the data. Let's check the variable importance with `varImpPlot`.
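For example:

```r
# Plot both importance measures of the fitted model
varImpPlot(rf_model, main = "Variable importance of the Random Forest model")
```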
We can see that the Lag and S1.336 (weekly seasonal feature in sine form) features have the best scores in both measures. This implies that weekly seasonality and lagged load values are most important for our training data.
Let's also compute the forecast.
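As before, the trend forecast is added back to the prediction of the detrended part:

```r
# One-day-ahead Random Forest forecast
pred_rf <- predict(rf_model, data.frame(matrix_test)) + trend_for
```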
And compare the forecasts from all ensemble learning approaches so far with the real electricity load.
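Gathering the forecasts computed above into one plot could look like this:

```r
# Three ensemble forecasts and the real load in long format
compare_dt <- data.table(
  Time = rep(1:period, 4),
  Load = c(data_test$value, pred_ens_rpart,
           apply(pred_mat_ctree, 2, median), pred_rf),
  Method = rep(c("Real", "Bagging+RPART", "Bagging+CTREE", "Random Forest"),
               each = period))

ggplot(compare_dt, aes(Time, Load, colour = Method)) +
  geom_line() +
  theme_ts
```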
There are just small differences.
Hyperparameter tuning - grid search
It is always highly recommended to tune the hyperparameters of the method we use. By tuning hyperparameters, we can significantly improve predictive performance. I will try to tune two hyperparameters of Random Forest, `mtry` and `nodesize`, by the grid search method.
Because of the specifics of time series forecasting, I propose my own grid search function (`gridSearch`). The evaluation is based on sliding window forecasting, so the forecasting error is an average of errors over some range of time. I also created my own Random Forest function (`RFgrid`) that is compatible with the grid search function. The output of the `gridSearch` function is a matrix of average MAPE (Mean Absolute Percentage Error) values corresponding to the hyperparameters used.
For our purpose, I will make 20 forecasts for each hyperparameter combination. I will test the `nodesize` and `mtry` parameters for the values \( 2, 3, 4, 5, 6 \). This takes some time to compute, so be aware.
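As a much simplified stand-in for `gridSearch` and `RFgrid` (which average errors over 20 sliding windows), a single-window grid evaluation could look like this:

```r
# MAPE in percent
mape <- function(real, pred) 100 * mean(abs((real - pred) / real))

grid_vals <- 2:6
gs_res <- matrix(0, nrow = length(grid_vals), ncol = length(grid_vals),
                 dimnames = list(nodesize = grid_vals, mtry = grid_vals))

# Evaluate every (nodesize, mtry) combination on one train/test split;
# the post's gridSearch averages this over 20 sliding windows instead
for (i in seq_along(grid_vals)) {
  for (j in seq_along(grid_vals)) {
    fit <- randomForest(Load ~ ., data = data.frame(matrix_train),
                        ntree = 1000,
                        nodesize = grid_vals[i], mtry = grid_vals[j])
    pred <- predict(fit, data.frame(matrix_test)) + trend_for
    gs_res[i, j] <- mape(data_test$value, pred)
  }
}
```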
Let’s check the results.
And find the minimum.
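For example:

```r
# Row/column indices (nodesize, mtry) of the lowest average MAPE
which(gs_res == min(gs_res), arr.ind = TRUE)
```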
So the best (optimal) setting is `mtry` = 6 and `nodesize` = 5.
For better intuition and analysis of the results, let's visualize the computed grid of MAPE values.
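A heatmap of the grid could be drawn like this:

```r
# Long format: one row per (nodesize, mtry) combination
grid_df <- expand.grid(nodesize = grid_vals, mtry = grid_vals)
grid_df$MAPE <- as.vector(gs_res)  # column-major order matches expand.grid

ggplot(grid_df, aes(factor(mtry), factor(nodesize), fill = MAPE)) +
  geom_tile() +
  geom_text(aes(label = round(MAPE, 2))) +
  labs(x = "mtry", y = "nodesize") +
  theme_ts
```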
We can see that the lowest average MAPEs are for `mtry` = 6 and `nodesize` = 2 or 5.
Comparison
For a more exact evaluation, I compared five methods - simple RPART, simple CTREE, Bagging + RPART, Bagging + CTREE and Random Forest - on the whole available dataset (so 98 forecasts were performed).
I created a `plotly` boxplot graph of the MAPE values from these 5 methods. The whole evaluation can be seen in the script stored in my GitHub repository.
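Assuming the evaluation produced a data frame `err_all` with one MAPE value per forecast day and method (`err_all` and its columns are hypothetical names here; see the GitHub script for the real evaluation), the boxplots could be drawn as:

```r
library(plotly)

# 'err_all' is assumed to have columns 'Method' and 'MAPE',
# with 98 MAPE values per method (one per forecasted day)
plot_ly(err_all, x = ~Method, y = ~MAPE, type = "box")
```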
We can see that Bagging really helps decrease the forecasting error for RPART and CTREE. Random Forest has the best results among all tested methods, so I have proposed a suitable approach for seasonal time series forecasting.
Conclusion
In this post, I showed you how to use basic ensemble learning methods to improve forecasting accuracy. I used classical Bagging in combination with the regression tree methods RPART and CTREE. The extension of Bagging, Random Forest, was also used and evaluated. I showed you how to set important hyperparameters and how to tune them by the grid search method. The Random Forest method came out most accurate, and I highly recommend it for time series forecasting. However, it must be said that feature engineering is also a very important part of regression modeling for time series, so I don't generalize these results to every possible time series forecasting task.
In a future post, I will write about other bootstrapping techniques for time series, or about Boosting.