[This article was first published on Blog - Applied Predictive Modeling, and kindly contributed to R-bloggers].
A new version of caret (6.0-52) is on CRAN. Here is the news file, but the Cliffs Notes are:
- Sub-sampling for class imbalances is now integrated with `train` and is used inside of standard resampling. There are four methods available right now: up-sampling, down-sampling, SMOTE, and ROSE. The help page has detailed information.
- Nine additional models were added, bringing the total up to 192.
- More error traps were added for common mistakes (e.g. bad factor levels in classification).
- Various bug fixes and small enhancements.
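In caret, the sub-sampling is requested through `trainControl` (e.g. `trainControl(sampling = "down")`) and happens inside each resample. As a standalone sketch of what down-sampling itself does, here is a base-R version that simply equalizes the class counts by randomly dropping majority-class rows (the data and variable names below are made up for illustration):

```r
set.seed(101)
# An imbalanced two-class outcome: 90 "neg" rows, 10 "pos" rows
y <- factor(rep(c("neg", "pos"), times = c(90, 10)))
x <- data.frame(x1 = rnorm(100), x2 = rnorm(100))

# Down-sample: for each class, keep a random sample the size of
# the smallest class
n_min <- min(table(y))
keep <- unlist(lapply(levels(y), function(lv) {
  sample(which(y == lv), n_min)
}))

y_down <- y[keep]
x_down <- x[keep, ]
table(y_down)  # both classes now have 10 rows
```

Doing this inside resampling (as `train` now does), rather than once up front, keeps the held-out data at the natural class ratio so the resampled performance estimates are not biased by the sampling.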
On-deck for upcoming versions:
- An expanded interface for preprocessing. You might want to process some predictors one way and others differently (or not at all). A new interface will allow for this while maintaining backwards compatibility (I hope).
- Censored data models. Right now we are spec'ing out how this will work, but the plan is to have models that predict the outcome directly as well as models that predict survivor function probabilities. Email me (max.kuhn@pfizer.com) if you have an interest in this.
- Enabling prediction intervals (for models that support them) using `predict.train`. To be clear, `caret` isn't generating these, but if you fit an `lm` model, you will be able to generate intervals from `predict.lm` and pass them through `predict.train`.
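Since that last item leans on `predict.lm`, here is what the underlying intervals look like in base R today (the `predict.train` pass-through described above is not released yet; this only uses `stats::predict.lm`, and the model formula is just an example):

```r
# Fit a linear model on a built-in data set
fit <- lm(mpg ~ wt + hp, data = mtcars)

# Point predictions plus 95% prediction intervals for two new cars
new_cars <- data.frame(wt = c(2.5, 3.5), hp = c(100, 200))
pred <- predict(fit, newdata = new_cars,
                interval = "prediction", level = 0.95)
pred  # a matrix with columns: fit (prediction), lwr, upr
```

`interval = "confidence"` would instead give intervals for the mean response; prediction intervals are wider because they also account for the residual error of a single new observation.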
Max