Machine learning models routinely outperform interpretable, parametric models like the linear regression model.
The gains in performance have a price: the models operate as black boxes that are not interpretable.
Fortunately, there are many methods that can make machine learning models interpretable.
The R package iml provides tools for analysing any black box machine learning model:

Feature importance: Which were the most important features?

Feature effects: How does a feature influence the prediction? (Partial dependence plots and individual conditional expectation curves)

Explanations for single predictions: How did the feature values of a single data point affect its prediction? (LIME and Shapley value)

Surrogate trees: Can we approximate the underlying black box model with a short decision tree?

The iml package works for any classification and regression machine learning model: random forests, linear models, neural networks, xgboost, etc.
This blog post shows you how to use the iml package to analyse machine learning models.
While the mlr package makes it super easy to train machine learning models, the iml package makes it easy to extract insights about the learned black box models.
If you want to learn more about the technical details of all the methods, read the Interpretable Machine Learning book.
Let’s explore the iml toolbox for interpreting an mlr machine learning model with concrete examples!
Data: Boston Housing
We’ll use the MASS::Boston dataset to demonstrate the abilities of the iml package. This dataset contains median house values from Boston neighbourhoods.
Fitting the machine learning model
First we train a randomForest to predict the Boston median housing value:
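A minimal sketch of this step with the mlr package, assuming mlr and randomForest are installed (the choice of ntree = 100 is illustrative):

```r
# Load the Boston data and train a random forest with mlr
data("Boston", package = "MASS")
library("mlr")

# Define the regression task: predict the median house value 'medv'
tsk <- makeRegrTask(data = Boston, target = "medv")

# Train a random forest learner on the task
lrn <- makeLearner("regr.randomForest", ntree = 100)
rf <- train(lrn, tsk)
```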
Using the iml Predictor container
We create a Predictor object that holds the model and the data. The iml package uses R6 classes: new objects can be created by calling Predictor$new(). Predictor works best with mlr models (WrappedModel class), but it is also possible to use models from other packages.
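A sketch of the setup, assuming the rf model and the Boston data from the previous step:

```r
library("iml")

# Extract the features without the target column 'medv'
X <- Boston[which(names(Boston) != "medv")]

# The Predictor container holds the model and the data
predictor <- Predictor$new(rf, data = X, y = Boston$medv)
```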
Feature importance
We can measure how important each feature was for the predictions with FeatureImp. The feature importance measure works by shuffling each feature and measuring how much the performance drops. For this regression task we choose to measure the loss in performance with the mean absolute error (‘mae’); another choice would be the mean squared error (‘mse’).
Once we have created a new FeatureImp object, the importance is automatically computed.
We can call the plot() function of the object or look at the results in a data.frame.
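Assuming the predictor container from above, the computation could look like this:

```r
# Permutation feature importance, measured with the mean absolute error
imp <- FeatureImp$new(predictor, loss = "mae")

# Plot the importances ...
plot(imp)

# ... or look at the results as a data.frame
imp$results
```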
Partial dependence
Besides learning which features were important, we are interested in how the features influence the predicted outcome. The Partial class implements partial dependence plots and individual conditional expectation curves. Each individual line represents the predictions (y-axis) for one data point when we change one of the features (e.g. ‘lstat’ on the x-axis). The highlighted line is the pointwise average of the individual lines and equals the partial dependence plot. The marks on the x-axis indicate the distribution of the ‘lstat’ feature, showing how relevant a region is for interpretation (few or no points mean that we should not over-interpret this region).
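A sketch for the ‘lstat’ feature, using the Partial class named above (note that in more recent versions of iml this functionality has moved to the FeatureEffect class):

```r
# Partial dependence plot and ICE curves for 'lstat'
pdp.obj <- Partial$new(predictor, feature = "lstat")
plot(pdp.obj)
```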
If we want to compute the partial dependence curves for another feature, we can simply reset the feature.
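For example, switching the object to the ‘rm’ feature (set.feature is the method the Partial class provides for this):

```r
# Compute the curves for the number of rooms 'rm' instead
pdp.obj$set.feature("rm")
plot(pdp.obj)
```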
Also, we can center the curves at a feature value of our choice, which makes it easier to see the trend of the curves:
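For example, centering at the smallest observed value of ‘rm’:

```r
# Anchor all ICE curves at the minimum of 'rm'
pdp.obj$center(min(Boston$rm))
plot(pdp.obj)
```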
Surrogate model
Another way to make the models more interpretable is to replace the black box with a simpler model – a decision tree. We take the predictions of the black box model (in our case the random forest) and train a decision tree on the original features and the predicted outcome.
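A sketch of the surrogate fit, where maxdepth = 2 is an illustrative choice:

```r
# Train a shallow decision tree on the random forest's predictions
tree <- TreeSurrogate$new(predictor, maxdepth = 2)
plot(tree)
```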
The plot shows the terminal nodes of the fitted tree.
The maxdepth parameter controls how deep the tree can grow and therefore how interpretable it is.
We can use the tree to make predictions:
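For example, reusing the tree object from above:

```r
# Predict with the surrogate tree instead of the random forest
head(tree$predict(Boston))
```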
Explain single predictions with a local model
A global surrogate model can improve our understanding of the global model behaviour.
We can also fit a model locally to understand an individual prediction better. The local model fitted by LocalModel is a linear regression model, and the data points are weighted by how close they are to the data point for which we want to explain the prediction.
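A sketch for the first data point, assuming the feature data X from above:

```r
# Fit a weighted linear model around the first data point
lime.explain <- LocalModel$new(predictor, x.interest = X[1, ])

# Inspect the local effects numerically and visually
lime.explain$results
plot(lime.explain)
```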
Explain single predictions with game theory
An alternative for explaining individual predictions is the Shapley value, a method from coalitional game theory.
Assume that for one data point, the feature values play a game together, in which they get the prediction as a payout. The Shapley value tells us how to fairly distribute the payout among the feature values.
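A sketch, again for the first data point:

```r
# Estimate Shapley values for the prediction of the first data point
shapley <- Shapley$new(predictor, x.interest = X[1, ])
plot(shapley)
```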
We can reuse the object to explain other data points:
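For example, with the second data point (explain() recomputes the Shapley values in place):

```r
shapley$explain(x.interest = X[2, ])
plot(shapley)
```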
The results in data.frame form can be extracted like this:
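For example:

```r
# The Shapley values as a data.frame
results <- shapley$results
head(results)
```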
The iml package is available on CRAN and on GitHub.