vtreat up on CRAN!
Nina Zumel and I are proud to announce that our R vtreat variable treatment library has just been accepted by CRAN! It will take some time for the vtreat package to propagate to the various CRAN mirrors, but as of now you can install vtreat with the command:
install.packages('vtreat',
repos='http://cran.r-project.org/')
This replaces the previous method of installing the development version from GitHub with devtools:
devtools::install_github('WinVector/vtreat')
The purpose of the vtreat library is to reliably prepare data for supervised machine learning. We try to leave as much as possible to the machine learning algorithms themselves, but cover the truly necessary (and typically ignored) precautions. The library is designed to produce a data.frame that is entirely numeric, and it takes common precautions to guard against the following real-world data issues:
- Categorical variables with very many levels. We re-encode such variables as a family of indicator or dummy variables for common levels, plus an additional impact code (also called "effects coded" in Jacob Cohen, Patricia Cohen, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 2nd edition, 1983). This allows principled use (including smoothing) of huge categorical variables (like zip codes) when building models, and is critical for some libraries (such as randomForest, which has hard limits on the number of allowed levels).
- Novel categorical levels. A common problem when deploying a classifier to production is new levels (levels not seen during training) encountered during model application. We deal with this by encoding categorical variables in a possibly redundant manner: reserving a dummy variable for every level (not the more common "all but a reference level" scheme). This is in fact the correct representation for regularized modeling techniques, and it lets us code novel levels as all dummies simultaneously zero (which is a reasonable thing to try). This encoding, while limited, is cheaper than the fully Bayesian solution of computing a weighted sum over previously seen levels during model application.
- Missing/invalid values (NA, NaN, +/-Inf). Variables with these issues are re-coded as two columns. The first column is a clean copy of the variable (with missing/invalid values replaced with either zero or the grand mean, depending on the user's choice of the scale parameter). The second column is a dummy or indicator variable that marks whether the replacement was performed. This is simpler than imputation of missing values, and it allows the downstream model to attempt to use missingness as a useful signal (which it often is in industrial data).
- Extreme values. Variables can be restricted to stay in ranges seen during training. This can defend against some run-away classifier issues during model application.
- Constant and near-constant variables. Variables that “don’t vary” or “nearly don’t vary” are suppressed.
- Need for estimated single-variable model effect sizes and significances. It is a dirty secret that even popular machine learning techniques need some variable pruning when exposed to very wide data frames (see here and here). We make the necessary effect size estimates and significances easily available and supply initial variable pruning (a short sketch of the overall workflow follows this list).
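To make the above concrete, here is a minimal sketch of the intended workflow using designTreatmentsC() and prepare(); the tiny data.frame and its column names are invented purely for illustration, so treat this as a rough outline rather than a definitive recipe:
library('vtreat')
# invented example data: a categorical variable, a numeric column with NAs,
# and a binary outcome
dTrain <- data.frame(
  zip = c('z1', 'z1', 'z2', 'z3', 'z4', 'z5'),
  x   = c(1, 2, NA, 4, 5, NA),
  y   = c(TRUE, TRUE, FALSE, TRUE, FALSE, FALSE)
)
# design a treatment plan for a categorical (binary) outcome
treatments <- designTreatmentsC(dTrain, c('zip', 'x'), 'y', TRUE)
# per-variable effect sizes and significances, useful for initial pruning
print(treatments$scoreFrame)
# application data may contain novel levels and missing values
dTest <- data.frame(
  zip = c('z1', 'zNew'),   # 'zNew' was never seen during training
  x   = c(NA, 3),
  y   = c(TRUE, FALSE)
)
dTestTreated <- prepare(treatments, dTest, pruneSig = 0.99)
print(dTestTreated)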
The issues listed above are all awful things that often lurk in real-world data. Automating these steps makes them easy enough that you actually perform them, and it leaves the analyst time to look for additional data issues. For example, this allowed us to essentially automate a number of the steps taught in chapters 4 and 6 of Practical Data Science with R (Zumel, Mount; Manning 2014) into a very short worksheet (though we think that for understanding it is essential to work all the steps by hand, as we did in the book).
The idea is: data.frames prepared with the vtreat library are somewhat safe to train on, as precautions have been taken against all of the above issues. Also of interest are the vtreat variable significances (which help with initial variable pruning, a necessity when there are a large number of columns) and vtreat::prepare(scale=TRUE), which re-encodes all variables into effect units, making them suitable for y-aware dimension reduction (variable clustering, or principal component analysis) and for geometry-sensitive machine learning techniques (k-means, knn, linear SVM, and more). You may want to do more than the vtreat library does (such as Bayesian imputation, variable clustering, and more), but you certainly do not want to do less.
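Continuing the invented example from the earlier sketch, the y-aware scaling step might look roughly like this (again an illustrative outline, not the exact procedure from the package vignette):
# re-encode the treated variables into outcome-effect units
dTrainScaled <- prepare(treatments, dTrain, pruneSig = 0.99, scale = TRUE)
# y-aware dimension reduction: ordinary PCA on the effect-scaled columns,
# with no further centering or rescaling (the scaling is already y-aware)
vars <- setdiff(colnames(dTrainScaled), 'y')
pc <- prcomp(dTrainScaled[, vars, drop = FALSE], center = FALSE, scale. = FALSE)
print(summary(pc))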
The original announcement is getting a bit out of date, so we hope to write a new article on vtreat soon. Until then we suggest running vignette('vtreat') in R to produce a rendered version of the package vignette. You can also check out the package manual, now available online.
There have been a number of recent substantial improvements to the library, including:
- Out of sample scoring.
- Ability to use the parallel package (see the sketch after this list).
- More general calculation of effect sizes and significances.
- Addition of collaring or Winsorising to defend against outliers.
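For the parallel item above, a minimal sketch of handing a cluster to the treatment-design step, assuming the parallelCluster argument and reusing the invented dTrain from the earlier sketch (the cluster size is arbitrary):
library('parallel')
# build a small socket cluster and pass it to treatment design
cl <- makeCluster(2)
treatmentsP <- designTreatmentsC(dTrain, c('zip', 'x'), 'y', TRUE,
                                 parallelCluster = cl)
stopCluster(cl)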
Some of our related articles (which should make clear some of our motivations, and design decisions):
- Modeling trick: impact coding of categorical variables with many levels
- A bit more on impact coding
- vtreat: designing a package for variable treatment
- A comment on preparing data for classifiers
- Nina Zumel presenting on vtreat
- What is new in the vtreat library?
A short example of current best practice using vtreat (variable coding, train, test split) is here.