In the previous post (https://statcompute.wordpress.com/2016/01/01/the-power-of-decision-stumps), it was shown that the boosting algorithm performs extremely well even with a simple 1-level decision stump as the base learner and delivers a better performance lift than the bagging algorithm. However, this observation should not be generalized, as the following example demonstrates.
First of all, we developed a rule-based PART model as below. Albeit pruned, this model still tends to over-fit the data, as indicated by the gap between the in-sample AUC (0.6839) and the out-of-sample AUC (0.6082) in the output.
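For completeness, here is a minimal sketch of the assumed setup. The names df, train, test, and fml are carried over from the previous post and are assumptions here, as are the data source and the 70/30 split.

library(RWeka)  # PART, AdaBoostM1, Bagging, Weka_control
library(pROC)   # roc

# assuming a data frame df with a binary DEFAULT column (hypothetical setup)
fml <- DEFAULT ~ .
set.seed(2016)
idx <- sample(seq_len(nrow(df)), size = floor(0.7 * nrow(df)))
train <- df[idx, ]
test <- df[-idx, ]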
# R = TRUE AND N = 10 FOR 10-FOLD CV PRUNING
# M = 5 SPECIFYING MINIMUM NUMBER OF CASES PER LEAF
part_control <- Weka_control(R = TRUE, N = 10, M = 5, Q = 2016)
part <- PART(fml, data = train, control = part_control)
roc(as.factor(train$DEFAULT), predict(part, newdata = train, type = "probability")[, 2])
# Area under the curve: 0.6839
roc(as.factor(test$DEFAULT), predict(part, newdata = test, type = "probability")[, 2])
# Area under the curve: 0.6082
Next, we applied boosting to the PART model. As shown in the result below, the AUC of the boosted model on the testing data is even lower than the AUC of the base model.
wlist <- list(PART, R = TRUE, N = 10, M = 5, Q = 2016)
# I = 100 SPECIFYING NUMBER OF ITERATIONS
# Q = TRUE SPECIFYING RESAMPLING USED IN THE BOOSTING
boost_control <- Weka_control(I = 100, S = 2016, Q = TRUE, P = 100, W = wlist)
boosting <- AdaBoostM1(fml, data = train, control = boost_control)
roc(as.factor(test$DEFAULT), predict(boosting, newdata = test, type = "probability")[, 2])
# Area under the curve: 0.592
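If the control flags above look opaque, RWeka's option wizard prints Weka's own documentation for each of them (I, S, Q, P, and W), which is an easy way to double-check their meaning:

WOW("AdaBoostM1")  # list all Weka command-line options accepted by AdaBoostM1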
However, with bagging, we are able to achieve a performance lift of more than 11% in AUC relative to the base model.
# NUM-SLOTS = 0 AND I = 100 FOR PARALLELISM
# P = 50 SPECIFYING THE SIZE OF EACH BAG
bag_control <- Weka_control("num-slots" = 0, I = 100, S = 2016, P = 50, W = wlist)
bagging <- Bagging(fml, data = train, control = bag_control)
roc(as.factor(test$DEFAULT), predict(bagging, newdata = test, type = "probability")[, 2])
# Area under the curve: 0.6778
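The lift quoted above follows directly from the two testing AUCs reported in this post:

(0.6778 - 0.6082) / 0.6082
# roughly 0.114, i.e. an 11.4% relative improvement over the base PART model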
From the examples demonstrated today and yesterday, an important lesson to learn is that ensemble methods are powerful machine learning tools only when they are used appropriately. Empirically speaking, while boosting works well to improve the performance of an under-fitted base model such as the decision stump, bagging might perform better in the case of an over-fitted base model with high variance and low bias.
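One way to turn this lesson into a quick diagnostic, purely as a sketch and not something from the original post, is to compare the base learner's in-sample and out-of-sample AUCs before picking an ensemble method; the 0.05 gap threshold below is an arbitrary assumption.

auc_train <- roc(as.factor(train$DEFAULT), predict(part, newdata = train, type = "probability")[, 2])$auc
auc_test <- roc(as.factor(test$DEFAULT), predict(part, newdata = test, type = "probability")[, 2])$auc
# a large gap suggests an over-fitted base learner (bagging is more promising);
# a small gap suggests an under-fitted one (boosting is more promising)
if (auc_train - auc_test > 0.05) {
  message("base learner over-fits: consider bagging")
} else {
  message("base learner under-fits or fits well: consider boosting")
}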