Common Ensemble Models can be Biased

[This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers.]

In our previous article, we showed that generalized linear models are unbiased, or calibrated: they preserve the conditional expectations and rollups of the training data. A calibrated model is important in many applications, particularly when financial data is involved.

However, when making predictions on individuals, a biased model may be preferable; biased models may be more accurate, or make predictions with lower relative error, than an unbiased model. For example, tree-based ensemble models tend to be highly accurate, and are often the modeling approach of choice for many machine learning applications. In this note, we will show that tree-based ensemble models are biased, or uncalibrated: they do not preserve the conditional expectations and rollups of the training data, so they may not be the best choice when aggregates of the predictions matter.
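As a quick reminder of what "calibrated" means here, consider the following toy example (synthetic data, separate from the Census example below). An ordinary linear model fit with an intercept has residuals that sum to zero on the training data, so its predictions reproduce the outcome's grand mean; with a single factor predictor, it reproduces each group's conditional mean as well.

```r
# Toy illustration of calibration (synthetic data, not the ACS data below):
# a linear model fit with an intercept has residuals that sum to zero,
# so its training predictions reproduce the outcome's grand mean; with a
# single factor predictor it reproduces each group's mean as well.
set.seed(2019)
d <- data.frame(g = factor(rep(c("a", "b", "c"), each = 50)))
d$y <- c(a = 1, b = 3, c = 7)[as.character(d$g)] + rnorm(nrow(d))

m <- lm(y ~ g, data = d)
d$pred <- predict(m, newdata = d)

# grand mean and per-group rollups match (to floating-point tolerance)
isTRUE(all.equal(mean(d$y), mean(d$pred)))
isTRUE(all.equal(as.numeric(tapply(d$y, d$g, mean)),
                 as.numeric(tapply(d$pred, d$g, mean))))
```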

Example: Predicting income

We’ll continue the example from the previous post: predicting income from demographic variables (sex, age, employment, education). The data is from the 2016 US Census American Community Survey (ACS) Public Use Microdata Sample (PUMS). More information about the data can be found here. First, we’ll get the training and test data, and show how the expected income varies along different groupings (by sex, by employment, and by education):

library(zeallot)
library(wrapr)
location <- "https://github.com/WinVector/PDSwR2/raw/master/PUMS/incomedata.rds"
incomedata <- readRDS(url(location))

c(test, train) %<-% split(incomedata, incomedata$gp)

# A convenience function to calculate and display
# the conditional expected incomes
show_conditional_means <- function(d, outcome = "income") {
  cols <- qc(sex, employment, education)
  lapply(
    cols := cols, 
    function(colname) {
      aggregate(d[, outcome, drop = FALSE], 
                d[, colname, drop = FALSE], 
                FUN = mean)
    })
}

display_tables <- function(tlist) {
  for(vi in tlist) {
    print(knitr::kable(vi))
  }
}
display_tables(
  show_conditional_means(train))
|sex    |   income|
|:------|--------:|
|Male   | 55755.51|
|Female | 47718.52|

|employment                       |   income|
|:--------------------------------|--------:|
|Employee of a private for profit | 51620.39|
|Federal government employee      | 64250.09|
|Local government employee        | 54740.93|
|Private not-for-profit employee  | 53106.41|
|Self employed incorporated       | 66100.07|
|Self employed not incorporated   | 41346.47|
|State government employee        | 53977.20|

|education                      |   income|
|:------------------------------|--------:|
|no high school diploma         | 31883.18|
|Regular high school diploma    | 38052.13|
|GED or alternative credential  | 37273.30|
|some college credit, no degree | 42991.09|
|Associate’s degree             | 47759.61|
|Bachelor’s degree              | 65668.51|
|Master’s degree                | 79225.87|
|Professional degree            | 97772.60|
|Doctorate degree               | 91214.55|

Three models

We’ll fit three models to the data: two tree ensemble models (random forest and gradient boosting) and, for comparison, one (quasi)Poisson model, which is calibrated.

library(randomForest)
library(xgboost)
# Quasipoisson model 
model_pincome <- glm(income ~ age+sex+employment+education,
                    data=train,
                    family=quasipoisson)
# random forest model 
model_rf_1stage <- randomForest(income ~ age+sex+employment+education,
                  data=train)

# gradient boosting model 

# build the model.matrix for the training data
train_mm <- model.matrix(income ~ age+sex+employment+education,
                        train)

cvobj <- xgb.cv(params = list(objective="reg:linear"),  # renamed "reg:squarederror" in later xgboost versions
               train_mm, 
               label= train$income, 
               verbose=0, 
               nfold=5,
               nrounds=50)

evallog <- cvobj$evaluation_log
ntrees <- which.min(evallog$test_rmse_mean)

model_xgb <- xgboost(train_mm, 
                    label= train$income, 
                    verbose=0, nrounds=ntrees)

#
# make the predictions on training data
#

train <- transform(train,
                   pred_pois = predict(model_pincome, 
                                       train, type="response"),
                   pred_rf_raw = predict(model_rf_1stage, 
                                         newdata=train, type="response"),
                   pred_xgb = predict(model_xgb, train_mm))

First, we’ll compare the rollups of the predictions to the actual rollups.

outcomecols <- qc(income, pred_pois, pred_rf_raw, pred_xgb)
rollups <- show_conditional_means(train, outcomecols)
display_tables(rollups)
|sex    |   income| pred_pois| pred_rf_raw| pred_xgb|
|:------|--------:|---------:|-----------:|--------:|
|Male   | 55755.51|  55755.51|    55261.04| 54203.70|
|Female | 47718.52|  47718.52|    48405.71| 47326.59|

|employment                       |   income| pred_pois| pred_rf_raw| pred_xgb|
|:--------------------------------|--------:|---------:|-----------:|--------:|
|Employee of a private for profit | 51620.39|  51620.39|    51276.95| 50294.95|
|Federal government employee      | 64250.09|  64250.09|    61623.87| 60596.12|
|Local government employee        | 54740.93|  54740.93|    55464.36| 54121.91|
|Private not-for-profit employee  | 53106.41|  53106.41|    54135.75| 53417.86|
|Self employed incorporated      | 66100.07|  66100.07|    63840.91| 63391.52|
|Self employed not incorporated  | 41346.47|  41346.47|    46257.98| 42578.69|
|State government employee       | 53977.20|  53977.20|    55530.86| 54752.98|

|education                      |   income| pred_pois| pred_rf_raw| pred_xgb|
|:------------------------------|--------:|---------:|-----------:|--------:|
|no high school diploma         | 31883.18|  31883.18|    40599.88| 40287.54|
|Regular high school diploma    | 38052.13|  38052.13|    41864.18| 36245.78|
|GED or alternative credential  | 37273.30|  37273.30|    42316.76| 37654.63|
|some college credit, no degree | 42991.09|  42991.09|    44303.14| 41259.59|
|Associate’s degree             | 47759.61|  47759.61|    46831.13| 44995.83|
|Bachelor’s degree              | 65668.51|  65668.51|    64131.61| 64043.09|
|Master’s degree                | 79225.87|  79225.87|    70762.24| 77177.23|
|Professional degree            | 97772.60|  97772.60|    77940.16| 93507.90|
|Doctorate degree               | 91214.55|  91214.55|    76972.02| 86496.11|

Note that the rollups of the predictions from the two ensemble models don’t match the true rollups, even on the training data; unlike the Poisson model, the random forest and gradient boosting models are uncalibrated.
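To see one way this can happen, here is a toy sketch with synthetic data (separate from the income example): a tree only preserves rollups along the splits it actually makes. If a secondary variable never gets split on, the predicted rollups along that variable collapse toward the grand mean instead of matching the group means.

```r
# Toy sketch (synthetic data, not the income example): a regression
# stump splits once, on the strongest variable (x1), so its predictions
# ignore the weaker grouping variable g entirely; the predicted rollups
# by g then sit near the grand mean rather than the true group means.
library(rpart)  # recommended package, ships with R
set.seed(5)
n <- 300
d <- data.frame(x1 = rnorm(n),
                g  = factor(sample(c("lo", "hi"), n, replace = TRUE)))
d$y <- 2 * d$x1 + ifelse(d$g == "hi", 1, 0) + rnorm(n)

stump <- rpart(y ~ x1 + g, data = d, maxdepth = 1)
d$pred <- predict(stump, d)

# actual group means differ by about 1; predicted rollups barely differ
aggregate(cbind(y, pred) ~ g, data = d, FUN = mean)
```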

Model performance on holdout data

Let’s see the performance of the models on test data.

# build the model.matrix for the test data
test_mm <- model.matrix(income ~ age+sex+employment+education,
                        test)

test <- transform(test,
                   pred_pois = predict(model_pincome, 
                                       test, type="response"),
                   pred_rf_raw = predict(model_rf_1stage, 
                                         newdata=test, type="response"),
                   pred_xgb = predict(model_xgb, test_mm))
outcomecols <- qc(income, pred_pois, pred_rf_raw, pred_xgb)
rollups <- show_conditional_means(test, outcomecols)
display_tables(rollups)
|sex    |   income| pred_pois| pred_rf_raw| pred_xgb|
|:------|--------:|---------:|-----------:|--------:|
|Male   | 55408.95|  55899.83|    55210.82| 54236.94|
|Female | 46261.99|  47111.01|    47950.01| 46705.38|

|employment                       |   income| pred_pois| pred_rf_raw| pred_xgb|
|:--------------------------------|--------:|---------:|-----------:|--------:|
|Employee of a private for profit | 50717.96|  51362.44|    51040.56| 49995.11|
|Federal government employee      | 66268.05|  64881.32|    61974.36| 61574.06|
|Local government employee        | 52565.89|  54119.83|    54901.92| 53703.97|
|Private not-for-profit employee  | 52887.52|  53259.07|    53987.90| 53441.93|
|Self employed incorporated      | 67744.61|  66096.20|    63790.77| 63100.00|
|Self employed not incorporated  | 41417.25|  41507.17|    46086.63| 42296.44|
|State government employee       | 51314.92|  53973.39|    55262.11| 54374.54|

|education                      |   income| pred_pois| pred_rf_raw| pred_xgb|
|:------------------------------|--------:|---------:|-----------:|--------:|
|no high school diploma         | 29903.70|  31783.60|    40469.94| 40169.21|
|Regular high school diploma    | 36979.33|  37746.81|    41648.80| 35989.04|
|GED or alternative credential  | 39636.86|  37177.50|    42620.37| 38180.68|
|some college credit, no degree | 43490.42|  43270.86|    44449.98| 41538.77|
|Associate’s degree             | 48384.19|  47234.56|    46309.68| 44383.58|
|Bachelor’s degree              | 65268.96|  66141.27|    64387.44| 64320.68|
|Master’s degree                | 77180.40|  79594.17|    70804.81| 77491.04|
|Professional degree            | 94976.75|  99009.56|    78713.55| 94974.29|
|Doctorate degree               | 87535.83|  91742.54|    76517.41| 86141.53|
# see how close the rollups get to ground truth for employment
err_mag <- function(x, y) {
  delta = y-x
  sqrt(sum(delta^2))
}

employment <- rollups$employment
lapply(employment[, qc(pred_pois, pred_rf_raw, pred_xgb)],
       function(p) err_mag(p, employment$income)) %.>%
  as.data.frame(.)  %.>% 
  knitr::kable(.)
| pred_pois| pred_rf_raw| pred_xgb|
|---------:|-----------:|--------:|
|  3831.967|    8844.436| 7474.311|

The calibrated Poisson model gives better estimates of the income rollups with respect to employment than either of the ensemble models, despite the fact that all the models have similar root mean squared error when making individual predictions.

# predictions on individuals 
rmse = function(x, y) {
  sqrt(mean((y-x)^2))
}

lapply(test[, qc(pred_pois, pred_rf_raw, pred_xgb)],
       function(p) rmse(p, test$income)) %.>%
  as.data.frame(.)  %.>% 
  knitr::kable(.)
| pred_pois| pred_rf_raw| pred_xgb|
|---------:|-----------:|--------:|
|  31341.14|    31688.77| 31299.37|

Conclusion

In this example, the input variables were simply not informative enough, so the tree ensemble models performed equivalently to the Poisson model for predicting income. With more informative (and nonlinear) input variables, one can expect that ensemble models will outperform linear or generalized linear models, in terms of predictions on individuals. However, even these more accurate ensemble models can be biased, so they are not guaranteed to estimate important aggregates (grouped sums or conditional means) correctly.

In the next note, we’ll propose a polishing step on uncalibrated models that mitigates this bias, potentially enabling models that are both highly accurate on individuals and correct on certain aggregates.
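As a hypothetical sketch of what such a polishing step could look like (our own illustration on synthetic data; the next note may use a different construction): regress the outcome on the uncalibrated model’s score with a one-variable quasi-Poisson GLM. Because a GLM fit with an intercept is calibrated, the polished predictions reproduce the training-data total even when the raw score does not.

```r
# Hypothetical polishing sketch (synthetic data; illustrative only, not
# necessarily the construction of the follow-up note): regress the
# outcome on a biased score with a quasi-Poisson GLM. Since the GLM is
# fit with an intercept, sum(prediction) matches sum(outcome) on the
# training data, repairing the overall bias of the raw score.
set.seed(7)
d <- data.frame(x = runif(200, 1, 5))
d$y <- exp(0.5 * d$x) * exp(rnorm(200, sd = 0.2))  # positive outcome

# stand-in for an uncalibrated model score: a deliberately shrunken estimate
d$score <- 0.8 * exp(0.45 * d$x)

polish <- glm(y ~ log(score), data = d, family = quasipoisson)
d$pred_polished <- predict(polish, newdata = d, type = "response")

# the polished predictions reproduce the training total (to tolerance)
abs(sum(d$y) - sum(d$pred_polished)) / sum(d$y) < 1e-6
```

Note that this only repairs the grand total; preserving rollups along specific grouping variables would additionally require including those variables in the polishing model.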
