Explain that tidymodels blackbox!


Let’s explain a {tidymodels} random forest with classic explainability methods (permutation importance, partial dependence plots (PDPs), Friedman’s H statistics), and also with fancy SHAP.

Disclaimer: {hstats}, {kernelshap} and {shapviz} are three of my own packages.

Diabetes data

We will use the diabetes prediction dataset from Kaggle to model diabetes (yes/no) as a function of six demographic features (age, gender, BMI, hypertension, heart disease, and smoking history). It has 100k rows.

Note: The data additionally contains the typical diabetes indicators HbA1c level and blood glucose level, but we won’t use them, both to avoid potential causality issues and to gain insights that also apply to people who do not know these values.

# https://www.kaggle.com/datasets/iammustafatz/diabetes-prediction-dataset

library(tidyverse)
library(tidymodels)
library(hstats)
library(kernelshap)
library(shapviz)
library(patchwork)

df0 <- read.csv("diabetes_prediction_dataset.csv")  # from above Kaggle link
dim(df0)  # 100000 9
head(df0)
# gender age hypertension heart_disease smoking_history   bmi HbA1c_level blood_glucose_level diabetes
# Female  80            0             1           never 25.19         6.6                 140        0
# Female  54            0             0         No Info 27.32         6.6                  80        0
#   Male  28            0             0           never 27.32         5.7                 158        0
# Female  36            0             0         current 23.45         5.0                 155        0
#   Male  76            1             1         current 20.14         4.8                 155        0
# Female  20            0             0           never 27.32         6.6                  85        0

summary(df0)
anyNA(df0)  # FALSE
table(df0$smoking_history, useNA = "ifany")

# DATA PREPARATION

# Note: tidymodels needs a factor response for classification
df1 <- df0 |>
  transform(
    y = factor(diabetes, levels = 0:1, labels = c("No", "Yes")),
    female = (gender == "Female") * 1,
    smoking_history = factor(
      smoking_history, 
      levels = c("No Info", "never", "former", "not current", "current", "ever")
    ),
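    # Cap BMI at 50 to tame extreme outliers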
    bmi = pmin(bmi, 50)
  )

# UNIVARIATE ANALYSIS

ggplot(df1, aes(diabetes)) +
  geom_bar(fill = "chartreuse4")

df1  |>  
  select(age, bmi, HbA1c_level, blood_glucose_level) |> 
  pivot_longer(everything()) |> 
  ggplot(aes(value)) +
  geom_histogram(fill = "chartreuse4", bins = 19) +
  facet_wrap(~ name, scales = "free_x")

ggplot(df1, aes(smoking_history)) +
  geom_bar(fill = "chartreuse4")

df1 |> 
  select(heart_disease, hypertension, female) |>
  pivot_longer(everything()) |> 
  ggplot(aes(name, value)) +
  stat_summary(fun = mean, geom = "bar", fill = "chartreuse4") +
  xlab(NULL)
[Figures: “yes” proportion of binary variables (including the response); distribution of numeric variables; distribution of smoking_history]

Modeling

Let’s fit a random forest via tidymodels with {ranger} backend.

We add a predict function pf() that outputs only the probability of the “Yes” class.

set.seed(1)
ix <- initial_split(df1, strata = diabetes, prop = 0.8)
train <- training(ix)
test <- testing(ix)

xvars <- c("age", "bmi", "smoking_history", "heart_disease", "hypertension", "female")

rf_spec <- rand_forest(trees = 500) |> 
  set_mode("classification") |> 
  set_engine("ranger", num.threads = NULL, seed = 49)

rf_wf <- workflow() |> 
  add_model(rf_spec) |>
  add_formula(reformulate(xvars, "y"))

model <- rf_wf |> 
  fit(train)

# predict() gives No/Yes columns
predict(model, head(test), type = "prob")
# .pred_No .pred_Yes
#    0.981    0.0185

# We need to extract only the "Yes" probabilities
pf <- function(m, X) {
  predict(m, X, type = "prob")$.pred_Yes
}
pf(model, head(test))  # 0.01854290 ...

Classic explanation methods

# 4 times repeated permutation importance wrt test logloss
imp <- perm_importance(
  model, X = test, y = "diabetes", v = xvars, pred_fun = pf, loss = "logloss"
)
plot(imp) +
  xlab("Increase in test logloss")

# Partial dependence of age
partial_dep(model, v = "age", X = train, pred_fun = pf) |> 
  plot()

# All PDP in one patchwork
p <- lapply(xvars, function(x) plot(partial_dep(model, v = x, X = train, pred_fun = pf)))
wrap_plots(p) &
  ylim(0, 0.23) &
  ylab("Probability")

# Friedman's H stats
system.time( # 20 s
  H <- hstats(model, train[xvars], approx = TRUE, pred_fun = pf)
)
H  # 15% of prediction variability comes from interactions
plot(H)

# Stratified PDP of strongest interaction
partial_dep(model, "age", BY = "bmi", X = train, pred_fun = pf) |> 
  plot(show_points = FALSE)

Feature importance

Permutation importance measures how much the average test loss (log loss, in our case) increases when a feature is shuffled before the losses are calculated. We repeat the process four times and also show standard errors.

Permutation importance: Age and BMI are the two main risk factors.
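
For intuition, here is a minimal hand-rolled sketch of the same idea for a single feature. It reuses model, test, and pf() from above; logloss() and perm_imp_one() are our own illustrative helpers, not part of {hstats}.

# Hand-rolled permutation importance for one feature (illustrative only)
logloss <- function(y, p) {
  p <- pmin(pmax(p, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
  -mean(y * log(p) + (1 - y) * log(1 - p))
}

perm_imp_one <- function(v, nrep = 4) {
  base <- logloss(test$diabetes, pf(model, test))
  replicate(nrep, {
    shuffled <- test
    shuffled[[v]] <- sample(shuffled[[v]])  # break feature-response association
    logloss(shuffled$diabetes, pf(model, shuffled)) - base
  })
}

mean(perm_imp_one("age"))  # should be close to the {hstats} bar for age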

Main effects

Main effects are estimated by PDPs. They show how the average prediction changes with a feature, keeping all other features fixed. Using a fixed vertical axis helps to grasp the strength of the effects.

PDPs: The diabetes risk tends to increase with age, with high (and very low) BMI, and with the presence of heart disease/hypertension; it is a bit lower for females and non-smokers.
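
Conceptually, a PDP just averages the predictions over the data while the feature of interest is pinned to a grid of values. A minimal sketch for age (we subsample rows for speed; partial_dep() handles grids and weighting more carefully):

# Hand-rolled PDP for age (illustrative only)
X_small <- train[sample(nrow(train), 2000), ]
grid <- seq(min(train$age), max(train$age), length.out = 20)
pd <- sapply(grid, function(g) mean(pf(model, transform(X_small, age = g))))
plot(grid, pd, type = "l", xlab = "age", ylab = "Partial dependence")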

Interaction strength

Interaction strength can be measured by Friedman’s H statistics, see the earlier blog post. A specific interaction can then be visualized by a stratified PDP.

Friedman’s H statistics: Left: BMI and age are clearly the two features with the strongest interactions. Right: Their pairwise interaction explains about 10% of their joint effect variability.
Stratified PDP: The strong interaction between age and BMI is clearly visible. A high BMI makes the age effect on diabetes stronger.
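
The numbers behind these plots can also be pulled from the H object directly via the {hstats} accessor functions:

h2(H)           # overall share of prediction variability from interactions (~15%)
h2_overall(H)   # per-feature interaction strength
h2_pairwise(H)  # pairwise strength, e.g., of age:bmi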

SHAP

What insights does a SHAP analysis bring?

We will crunch slow exact permutation SHAP values via kernelshap::permshap(). If we had more features, we could switch to kernelshap::kernelshap().

set.seed(1)
X_explain <- train[sample(1:nrow(train), 1000), xvars]
X_background <- train[sample(1:nrow(train), 200), ]

system.time(  # 10 minutes
  shap_values <- permshap(model, X = X_explain, bg_X = X_background, pred_fun = pf)
)
shap_values <- shapviz(shap_values)
shap_values  # 'shapviz' object representing 1000 x 6 SHAP matrix
saveRDS(shap_values, file = "shap_values.rds")
# shap_values <- readRDS("shap_values.rds")

sv_importance(shap_values, show_numbers = TRUE)
sv_importance(shap_values, kind = "bee")
sv_dependence(shap_values, v = xvars) &
  ylim(-0.14, 0.24) &
  ylab("Probability")

SHAP importance

SHAP importance: On average (in absolute value), age changes the predicted diabetes probability by 4.7 percentage points, etc. The top three features are the same as with permutation importance.
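
These bars are nothing but mean absolute SHAP values, which we can verify via the {shapviz} accessor get_shap_values():

S <- get_shap_values(shap_values)          # 1000 x 6 matrix of SHAP values
sort(colMeans(abs(S)), decreasing = TRUE)  # age should come out near 0.047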

SHAP “summary” plot

SHAP “summary” plot: In addition to the information in the bar plot, we see that higher age, higher BMI, hypertension, smoking, being male, and having heart disease are associated with a higher diabetes risk.

SHAP dependence plots

SHAP dependence plots: We see similar shapes as in the PDPs. Thanks to the vertical scatter, we can, e.g., spot that the BMI effect strongly depends on age. As in the PDPs, we use a common vertical scale to also convey the effect strength.
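
By default, sv_dependence() picks the color feature heuristically; to study a specific interaction, it can be set explicitly:

sv_dependence(shap_values, v = "bmi", color_var = "age")  # age/BMI interplay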

Final words

Classic explainability methods and SHAP paint a consistent picture: age and BMI are the dominant risk factors in this model, and the two interact strongly.

The full R script
