Datasets sometimes come with predictors that take a single unique value across all samples. Such uninformative predictors are more common than you might think. This kind of predictor is not only non-informative, it can also break some models you may want to fit to your data (see the example below). Even more common is the presence of predictors that are almost constant across samples. One quick and dirty solution is to remove all predictors that satisfy some threshold criterion related to their variance.
Here I discuss this quick solution, but point out that it might not be the best approach depending on your problem. That is, throwing data away should be avoided if possible.
It would be nice to know how you deal with this problem.
Zero and near-zero predictors
Constant and almost-constant predictors across samples (called zero and near-zero variance predictors in [1], respectively) happen quite often. One reason is that we usually break a categorical variable with many categories into several dummy variables. Hence, when one of the categories has zero observations, it becomes a dummy variable full of zeroes.
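A toy illustration of this (made up for this post): when a factor level is never observed in the data, its dummy column after encoding is constant.

f = factor(c("a", "b", "a"), levels = c("a", "b", "c"))  # level "c" never observed
model.matrix(~ f - 1)
#   fa fb fc
# 1  1  0  0
# 2  0  1  0
# 3  1  0  0   <- the column for level "c" is all zeroes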
To illustrate this, take a look at what happens when we want to apply Linear Discriminant Analysis (LDA) to the German Credit Data.
require(caret)
data(GermanCredit)
require(MASS)

r = lda(formula = Class ~ ., data = GermanCredit)
Error in lda.default(x, grouping, ...) :
  variables 26 44 appear to be constant within groups
If we take a closer look at the predictors indicated as problematic by lda, we see what the problem is. Note that I have added +1 to the index, since lda does not count the target variable when reporting where the problem is.
colnames(GermanCredit)[26 + 1]
[1] "Purpose.Vacation"

table(GermanCredit[, 26 + 1])

   0
1000

colnames(GermanCredit)[44 + 1]
[1] "Personal.Female.Single"

table(GermanCredit[, 44 + 1])

   0
1000
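As a quick sanity check, dropping those two columns by hand is enough to make lda run (it may still warn about collinear dummy variables, but the hard error is gone):

r = lda(formula = Class ~ ., data = GermanCredit[, -c(26 + 1, 44 + 1)])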
Quick and dirty solution: throw data away
As we can see above, no loan was taken to pay for a vacation and there are no single females in our dataset. A natural first choice is to remove predictors like those. And this is exactly what the nearZeroVar function from the caret package does. It identifies not only predictors that have a single unique value across samples (zero variance predictors), but also predictors that have both 1) few unique values relative to the number of samples and 2) a large ratio of the frequency of the most common value to the frequency of the second most common value (near-zero variance predictors).
x = nearZeroVar(GermanCredit, saveMetrics = TRUE)
str(x, vec.len = 2)
'data.frame':   62 obs. of  4 variables:
 $ freqRatio    : num  1.03 1 ...
 $ percentUnique: num  3.3 92.1 0.4 0.4 5.3 ...
 $ zeroVar      : logi  FALSE FALSE FALSE ...
 $ nzv          : logi  FALSE FALSE FALSE ...
We can see above that if we call the nearZeroVar function with the argument saveMetrics = TRUE, we have access to the frequency ratio and the percentage of unique values for each predictor, as well as flags that indicate whether each variable is considered a zero variance or a near-zero variance predictor. By default, a predictor is classified as near-zero variance if the percentage of unique values in the samples is less than uniqueCut (10 by default) and the frequency ratio is greater than freqCut (95/5 = 19 by default).
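These thresholds are controlled by the freqCut and uniqueCut arguments of nearZeroVar; the call below is the same as before, just with the defaults written out explicitly:

x = nearZeroVar(GermanCredit, freqCut = 95/5, uniqueCut = 10, saveMetrics = TRUE)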
We can explore which ones are the zero variance predictors
x[x[,"zeroVar"] > 0, ] freqRatio percentUnique zeroVar nzv Purpose.Vacation 0 0.1 TRUE TRUE Personal.Female.Single 0 0.1 TRUE TRUE
and which ones are the near-zero variance predictors
x[x[,"zeroVar"] + x[,"nzv"] > 0, ] freqRatio percentUnique zeroVar nzv ForeignWorker 26.02703 0.2 FALSE TRUE CreditHistory.NoCredit.AllPaid 24.00000 0.2 FALSE TRUE CreditHistory.ThisBank.AllPaid 19.40816 0.2 FALSE TRUE Purpose.DomesticAppliance 82.33333 0.2 FALSE TRUE Purpose.Repairs 44.45455 0.2 FALSE TRUE Purpose.Vacation 0.00000 0.1 TRUE TRUE Purpose.Retraining 110.11111 0.2 FALSE TRUE Purpose.Other 82.33333 0.2 FALSE TRUE SavingsAccountBonds.gt.1000 19.83333 0.2 FALSE TRUE Personal.Female.Single 0.00000 0.1 TRUE TRUE OtherDebtorsGuarantors.CoApplicant 23.39024 0.2 FALSE TRUE OtherInstallmentPlans.Stores 20.27660 0.2 FALSE TRUE Job.UnemployedUnskilled 44.45455 0.2 FALSE TRUE
Now, should we always remove our near-zero variance predictors? Well, I am not that comfortable with that.
Try not to throw your data away
Think for a moment: the solution above is easy and “solves the problem”, but we are assuming that all those predictors are non-informative, which is not necessarily true, especially for the near-zero variance ones. Those near-zero variance predictors can in fact turn out to be very informative.
For example, assume that a binary predictor in a classification problem has lots of zeroes and few ones (a near-zero variance predictor). Every time this predictor is equal to one, we know exactly what the class of the target variable is, while a value of zero for this predictor can be associated with either one of the classes. This is a valuable predictor that would be thrown away by the method above.
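A small simulated illustration of this point (the data here is made up for the example): the predictor x below is flagged as near-zero variance, yet it predicts one class with certainty whenever it equals one.

set.seed(42)
n = 1000
x = c(rep(1, 10), rep(0, n - 10))  # only 1% ones: freqRatio = 99, percentUnique = 0.2
y = ifelse(x == 1, "bad", sample(c("good", "bad"), n, replace = TRUE))

nearZeroVar(data.frame(x = x), saveMetrics = TRUE)  # flags x as near-zero variance
table(x, y)  # but x == 1 always corresponds to class "bad"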
This is somewhat related to the separation problem that can happen in logistic regression, where a predictor (or combination of predictors) can perfectly predict (separate) the data. The common approach not long ago was to exclude those predictors from the analysis, but better solutions were discussed by [2], who proposed a penalized likelihood solution, and [3], who suggested the use of weakly informative priors for the regression coefficients of the logistic model.
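For instance, the approach of [3] is implemented by the bayesglm function in the arm package, which replaces the flat prior implicit in glm with a weakly informative Cauchy prior. A sketch on made-up, perfectly separated data:

library(arm)  # provides bayesglm

x = c(rep(0, 10), rep(1, 10))
y = c(rep(0, 10), rep(1, 10))  # x separates y perfectly

fit_mle   = glm(y ~ x, family = binomial)       # warns: fitted probabilities of 0 or 1; estimates diverge
fit_bayes = bayesglm(y ~ x, family = binomial)  # default Cauchy(0, 2.5) prior keeps the estimates finite
coef(fit_bayes)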
Personally, I prefer to use a well-designed Bayesian model whenever possible, more like the solution provided by [3] for the separation problem mentioned above. One solution for the near-zero variance predictor is to collect more data, and although this is not always possible, there are many applications where you know you will receive more data from time to time. It is then important to keep in mind that such a well-designed model would still give you sensible answers while you don't yet have enough data, but would naturally adapt as more data arrives for your application.
References:
[1] Kuhn, M., and Johnson, K. (2013). Applied Predictive Modeling. Springer.
[2] Zorn, C. (2005). A solution to separation in binary response models. Political Analysis, 13(2), 157-170.
[3] Gelman, A., Jakulin, A., Pittau, M.G. and Su, Y.S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383.