Regression regularization example


Recently I needed a simple example showing when regularization in regression is worthwhile. Here is the code I came up with (along with a basic application of parallel code execution).

Assume you have 60 observations and 50 explanatory variables x1 to x50. All the explanatory variables are i.i.d. draws from the uniform distribution on [0, 1). The predicted variable y is generated as the sum of x1 to x50 plus independent N(0, 1) noise.
Our objective is to compare, for such data: (a) linear regression on all 50 variables, regressions obtained by variable selection using the (b) AIC and (c) BIC criteria, and (d) Lasso regularization.
We generate the training data set 100 times and compare the four predictions against the known expected value of y for 10 000 randomly selected values of the explanatory variables. The quality measure is the mean squared deviation of the prediction from this expected value (so for an ideal model it equals 0).
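To make the quality measure concrete, here is a tiny illustration (my addition, not from the original post); true.mean and pred are made-up placeholders for the known expected values of y and a model's predictions on the evaluation set:

# toy illustration of the quality measure
true.mean <- c(25.1, 24.3, 26.0)   # hypothetical E[y] on the evaluation set
pred      <- c(24.8, 25.0, 25.7)   # hypothetical model predictions
mean((pred - true.mean) ^ 2)       # mean squared deviation; 0 for a perfect model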

Here is the code that runs the simulation. Because each step of the procedure is lengthy, I parallelize the computations.

library(parallel)

run <- function(job) {
    require(lasso2)

    gen.data <- function(v, n) {
        data.set <- data.frame(replicate(v, runif(n)))
        # true y is equal to sum of x
        data.set$y <- rowSums(data.set)
        names(data.set) <- c(paste("x", 1:v, sep = ""), "y")
        return(data.set)
    }

    v <- 50
    n <- 60

    data.set <- gen.data(v, n)
    # add noise to y in training set
    data.set$y <- data.set$y + rnorm(n)
    new.set <- gen.data(v, 10000)
    model.lm <- lm(y ~ ., data.set)                      # full model, all 50 variables
    model.aic <- step(model.lm, trace = 0)               # stepwise selection by AIC
    model.bic <- step(model.lm, trace = 0, k = log(n))   # stepwise selection by BIC
    model.lasso <- l1ce(y ~ ., data.set,
                        sweep.out = NULL, standardize = FALSE)
    models <- list(model.lm, model.aic, model.bic, model.lasso)
    results <- numeric(length(models))
    for (j in seq_along(models)) {
        pred <- predict(models[[j]], newdata = new.set)
        results[j] <- mean((pred - new.set$y) ^ 2)
    }
    return(results)
}
cl <- makeCluster(4)
system.time(msd <- t(parSapply(cl, 1:100, run))) # 58.07 seconds
stopCluster(cl)
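
As a side note (my addition, not part of the original post): if you prefer to avoid setting up a cluster, for example while debugging, a plain sapply call gives an equivalent, though slower, result:

# sequential equivalent of the parSapply call above
msd <- t(sapply(1:100, run))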

colnames(msd) <- c("lm", "aic", "bic", "lasso")
par(mar = c(2, 2, 1, 1))
boxplot(msd)
for (i in 1:ncol(msd)) {
    lines(c(i - 0.4, i + 0.4), rep(mean(msd[, i]), 2),
          col = "red", lwd = 2)
}

The code produces boxplots of the distribution of the mean squared deviation from the theoretical mean and additionally puts a red line at the mean level of the mean squared deviation for each method. Here is the result:

[boxplot of mean squared deviation for the lm, aic, bic and lasso models]

Notice that in this example neither AIC nor BIC improves over linear regression with all the variables. However, the Lasso consistently produces significantly better models.
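
One way to back up that last statement (again my addition, not in the original post) is to compare the per-run results directly, for example by looking at the average quality of each method, the fraction of runs in which the Lasso beats the full linear model, and a paired test of the difference:

colMeans(msd)                      # average mean squared deviation per method
mean(msd[, "lasso"] < msd[, "lm"]) # fraction of runs in which the Lasso wins
wilcox.test(msd[, "lm"], msd[, "lasso"], paired = TRUE)

This is only a quick sanity check on the 100 simulated runs, not a formal analysis.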
