The tree-based Cubist model can easily be used to develop an ensemble classifier with a scheme called "committees". The concept of "committees" is similar to that of "boosting" in that a series of trees is developed sequentially with adjusted weights. However, the final prediction is the simple average of the predictions from all "committee" members, an idea closer to "bagging".
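To make the averaging step concrete, below is a minimal, hypothetical sketch in R. It is illustrative only: Cubist performs this averaging internally, and committee_average and its inputs are made-up names for the sake of the example.

# HYPOTHETICAL SKETCH OF THE COMMITTEE AVERAGING STEP (ILLUSTRATIVE ONLY)
# member_predictions: an assumed list of numeric vectors, one vector of
# predictions per committee member; the ensemble output is their plain average
committee_average <- function(member_predictions) {
  Reduce(`+`, member_predictions) / length(member_predictions)
}

# E.G., THREE HYPOTHETICAL MEMBERS PREDICTING FOR TWO OBSERVATIONS
committee_average(list(c(3.0, 2.9), c(3.2, 3.0), c(3.1, 2.8)))
# [1] 3.1 2.9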
Below is a demonstration of how to use the train() function in the caret package to select the optimal number of "committees" in the ensemble Cubist model, which turns out to be 100 in this example. As shown, the ensemble model outperforms the standalone model by roughly 4% on a separate testing dataset.
data(Boston, package = "MASS")
X <- Boston[, 1:13]
Y <- log(Boston[, 14])

# SAMPLE THE DATA INTO TRAINING (X1 / Y1) AND TESTING (X2 / Y2) SETS
set.seed(2015)
rows <- sample(1:nrow(Boston), nrow(Boston) - 100)
X1 <- X[rows, ]
X2 <- X[-rows, ]
Y1 <- Y[rows]
Y2 <- Y[-rows]

# LOAD PACKAGES AND REGISTER A PARALLEL BACKEND FOR caret
pkgs <- c('doMC', 'Cubist', 'caret')
lapply(pkgs, require, character.only = TRUE)
registerDoMC(cores = 7)

# TRAIN A STANDALONE MODEL FOR COMPARISON
mdl1 <- cubist(x = X1, y = Y1, control = cubistControl(unbiased = TRUE, label = "log_medv", seed = 2015))
# HOLDOUT R^2 OF THE STANDALONE MODEL
print(cor(Y2, predict(mdl1, newdata = X2)) ^ 2)
# [1] 0.923393

# SEARCH FOR THE OPTIMAL NUMBER OF COMMITTEES WITH CROSS-VALIDATION
test <- train(x = X1, y = Y1, "cubist",
              tuneGrid = expand.grid(.committees = seq(10, 100, 10), .neighbors = 0),
              trControl = trainControl(method = 'cv'))
print(test)
# OUTPUT SHOWING THE HIGHEST R^2 WHEN # OF COMMITTEES = 100
#   committees  RMSE       Rsquared   RMSE SD     Rsquared SD
#    10         0.1607422  0.8548458  0.04166821  0.07783100
#    20         0.1564213  0.8617020  0.04223616  0.07858360
#    30         0.1560715  0.8619450  0.04015586  0.07534421
#    40         0.1562329  0.8621699  0.03904749  0.07301656
#    50         0.1563900  0.8612108  0.03904703  0.07342892
#    60         0.1558986  0.8620672  0.03819357  0.07138955
#    70         0.1553652  0.8631393  0.03849417  0.07173025
#    80         0.1552432  0.8629853  0.03887986  0.07254633
#    90         0.1548292  0.8637903  0.03880407  0.07182265
#   100         0.1547612  0.8638320  0.03953242  0.07354575

# TRAIN THE ENSEMBLE MODEL WITH THE SELECTED 100 COMMITTEES
mdl2 <- cubist(x = X1, y = Y1, committees = 100, control = cubistControl(unbiased = TRUE, label = "log_medv", seed = 2015))
# HOLDOUT R^2 OF THE ENSEMBLE MODEL
print(cor(Y2, predict(mdl2, newdata = X2)) ^ 2)
# [1] 0.9589031
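As a side note, the tuned train() object itself can also be used for holdout prediction instead of refitting by hand; a minimal sketch, assuming the same X2 / Y2 testing split as above:

# OPTIONAL: PREDICT FROM THE TUNED train() OBJECT DIRECTLY, WHICH APPLIES
# THE FINAL MODEL REFIT ON X1 WITH THE BEST NUMBER OF COMMITTEES
pred2 <- predict(test, newdata = X2)
print(cor(Y2, pred2) ^ 2)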