eXplainable AI + Shiny = xai2shiny

[This article was first published on R in ResponsibleML on Medium.]

xai2shiny is a new tool for lightning-quick deployment of machine learning models and their explorations using Shiny.

By Anna Kozak

The explainability of machine learning models has already proven to be an essential part of building a successful model. A crucial reason to understand a model's choices, instead of just using a well-performing black box, is to be able to share those insights with others. And what better way to do that than creating an application straight from the model with a single function?

To catch a glimpse of the package's possibilities and features, we will walk through a step-by-step example based on the Titanic dataset.

First, we need the data and a model, or even better, two models to compare. We will use one linear model and one random forest.

library("DALEX")   # provides the titanic_imputed dataset
library("ranger")
model_rf <- ranger(survived ~ ., data = titanic_imputed)
model_glm <- glm(survived ~ ., data = titanic_imputed,
                 family = "binomial")

With the models created, let's use the DALEX package to build corresponding explainers; these objects will be used to generate every plot in the application.

explainer_rf <- explain(model_rf, data = titanic_imputed[,-8],
                        y = titanic_imputed$survived)
explainer_glm <- explain(model_glm, data = titanic_imputed[,-8], 
                         y = titanic_imputed$survived)

With all the prerequisites met, let's install xai2shiny and finally see what it's all about.

devtools::install_github("ModelOriented/xai2shiny")
xai2shiny(explainer_rf, explainer_glm)

As simple as that, the Shiny application is now running and ready to be shared (for example, via shinyapps.io). Try it yourself here.
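One way to share the app is to publish it to shinyapps.io with the rsconnect package. A minimal sketch, assuming xai2shiny has written the app files to a local folder (the "./xai2shiny" path below is a guess; check where the package saved your app):

```r
library("rsconnect")
# authenticate once with the token from your shinyapps.io account:
# setAccountInfo(name = "...", token = "...", secret = "...")

# deploy the generated app directory (path is an assumption)
deployApp(appDir = "./xai2shiny")
```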

The Shiny application consists of three main elements: model performance, local explanations, and global explanations. All of them are highly customizable, so you can retrieve just the right information.

The first component consists of the models' predictions and their performance. For classification tasks, like the one in our example, we can see the ROC curve along with performance measures such as recall, precision, F1, and accuracy. We can select any subset of variables, which is helpful for huge datasets containing hundreds of columns. We can also enable the text description option, which adds textual descriptions to every component in the application.
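Outside the app, the same kind of performance summary can be computed directly with DALEX, using the explainer built earlier. A sketch:

```r
# classification measures (recall, precision, F1, accuracy, AUC)
perf_rf <- model_performance(explainer_rf)
perf_rf

# ROC curve for the random forest explainer
plot(perf_rf, geom = "roc")
```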

The second component, local explanations, consists of two plots. In the section on the left, we can examine variable attributions for a selected prediction in two ways: a Break Down plot and a SHAP values plot. The second section, on the other hand, lets us hypothesize about how the model's result would change if a chosen variable changed, by utilizing Ceteris Paribus profiles. If you want to learn more about the distinction between different explanation types and their meaning, make sure to check out the BASIC XAI with DALEX blog series.
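These plots correspond to DALEX's local-explanation functions, which the app calls under the hood. A sketch for a single passenger (the choice of the first row and the "age" variable is ours, for illustration):

```r
passenger <- titanic_imputed[1, ]

# variable attributions for this prediction: Break Down and SHAP
plot(predict_parts(explainer_rf, passenger, type = "break_down"))
plot(predict_parts(explainer_rf, passenger, type = "shap"))

# Ceteris Paribus profile: vary one variable, keep the rest fixed
plot(predict_profile(explainer_rf, passenger, variables = "age"))
```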

The application also offers an explainer switching option, which allows us to compare different models' behavior in one application without long loading times.

Global explanations — feature importance plot showcase

The final component of the application is global explanations. They aren't shown during application start-up to save resources, but don't worry: just tick the global explanations option in the sidebar and have fun exploring the feature importance and partial dependence plots. At first glance, these charts may look like the ones in the previous section, and they should! The main difference is that the global explanations describe the model's behavior at the dataset level.
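The same global explanations can be reproduced with DALEX outside the app. A minimal sketch (again, "age" is just an example variable):

```r
# permutation-based feature importance
plot(model_parts(explainer_rf))

# partial dependence profile for a single variable
plot(model_profile(explainer_rf, variables = "age"))
```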

Thanks to the components described above, we can explore any model and compare models to one another in one quick, easy-to-use application. If you found the example interesting, feel free to try it out on your own models and get to know them better.

In case you want to learn more about the package, make sure to visit https://github.com/ModelOriented/xai2shiny.

If you are interested in other posts about explainable, fair, and responsible ML, follow #ResponsibleML on Medium.
