
Regression regularization example

[This article was first published on R snippets, and kindly contributed to R-bloggers.]
Recently I needed a simple example showing when applying regularization in regression is worthwhile. Here is the code I came up with (along with a basic example of parallelizing the computation).

Assume you have 60 observations and 50 explanatory variables x1 to x50. All these variables are IID draws from the uniform distribution on the interval [0, 1). The predicted variable y is generated as the sum of variables x1 to x50 plus independent random noise N(0, 1).
Our objective is to compare, for such data: (a) linear regression on all 50 variables, regressions obtained by variable selection using the (b) AIC and (c) BIC criteria, and (d) Lasso regularization.
We generate the training data set 100 times and compare the four predictions against the known expected value of y for 10 000 randomly selected values of the explanatory variables. As the error measure we use the mean squared deviation of the prediction from this expected value (so for an ideal model it equals 0).

Here is the code that runs the simulation. Because each step of the procedure is lengthy, I parallelize the computations.

library(parallel)

run <- function(job) {
    require(lasso2)

    gen.data <- function(v, n) {
        data.set <- data.frame(replicate(v, runif(n)))
        # true y is equal to sum of x
        data.set$y <- rowSums(data.set)
        names(data.set) <- c(paste("x", 1:v, sep = ""), "y")
        return(data.set)
    }

    v <- 50
    n <- 60

    data.set <- gen.data(v, n)
    # add noise to y in training set
    data.set$y <- data.set$y + rnorm(n)
    new.set <- gen.data(v, 10000)
    model.lm <- lm(y ~ ., data.set)
    model.aic <- step(model.lm, trace = 0)               # AIC-based selection
    model.bic <- step(model.lm, trace = 0, k = log(n))   # BIC-based selection
    model.lasso <- l1ce(y ~ ., data.set,
                        sweep.out = NULL, standardize = FALSE)
    models <- list(model.lm, model.aic, model.bic, model.lasso)
    results <- numeric(length(models))
    for (j in seq_along(models)) {
        pred <- predict(models[[j]], newdata = new.set)
        # mean squared deviation from the noiseless expected value of y
        results[j] <- mean((pred - new.set$y) ^ 2)
    }
    return(results)
}
cl <- makeCluster(4)
system.time(msd <- t(parSapply(cl, 1:100, run))) # 58.07 seconds
stopCluster(cl)

colnames(msd) <- c("lm", "aic", "bic", "lasso")
par(mar = c(2, 2, 1, 1))
boxplot(msd)
# mark the mean of each column with a horizontal red line
for (i in 1:ncol(msd)) {
    lines(c(i - 0.4, i + 0.4), rep(mean(msd[, i]), 2),
          col = "red", lwd = 2)
}
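
A side note on reproducibility: each worker draws its own random numbers, so by default repeated runs of the simulation will not give identical results. If that matters, the parallel package's clusterSetRNGStream() can seed one independent "L'Ecuyer-CMRG" stream per worker. A minimal sketch of a reproducible variant of the parallel call (assuming the run() function defined above):

library(parallel)
cl <- makeCluster(4)
# give each worker its own reproducible RNG stream
clusterSetRNGStream(cl, iseed = 42)
msd <- t(parSapply(cl, 1:100, run))
stopCluster(cl)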

The last part of the code produces boxplots of the distribution of the mean squared deviation from the theoretical mean, and additionally draws a red line at the mean level of the mean squared deviation for each method. Here is the result:


Notice that in this example neither AIC nor BIC improves over linear regression with all variables. However, the Lasso consistently produces significantly better models.
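
To put a number on "consistently better", one could summarize the simulation output directly, for example by comparing the average mean squared deviations and running a paired test of the Lasso against the full linear model. A minimal sketch (assuming the msd matrix produced above):

# average mean squared deviation per method
round(colMeans(msd), 3)
# paired one-sided test: is the Lasso's deviation smaller than lm's?
wilcox.test(msd[, "lasso"], msd[, "lm"],
            paired = TRUE, alternative = "less")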
