Recently, I faced a tough challenge. We were doing a proof of concept for a champion-challenger analysis combined with XAI exploration. It was done with a business partner whose models are written as if-else rules in Salesforce. Part of our team was eager to build ML challengers in mlr (R), while others preferred scikit-learn (python). As if that were not enough, I was going to try h2o automl (java + wrappers in R/python) as a benchmark. The data was already cleaned and preprocessed, so the training phase was relatively easy in each framework. But how do you cross-compare models created in four different frameworks?
We need more adapters, not more standards
DALEX is an R package that creates a standardized, uniform interface for model exploration. Its main function, explain(), takes a model and additional metadata and prepares a uniform interface that exposes functions for calculating predictions, calculating residuals, and operating on the data and the target variable (operations like feature permutations). All of these elements are needed for model exploration. Once the wrapper is created, one can use the XAI tools available in various useful R packages (e.g., ingredients, iBreakDown, auditor, modelStudio, shapper and vivo) without worrying about which framework was used to develop the model. It doesn't matter if it's R, python, java or any future framework.
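A minimal sketch of this uniform interface, using the titanic_imputed data that ships with DALEX and a plain glm as a stand-in for any model:

```r
library("DALEX")

# Any model goes in; here a simple logistic regression on built-in data
model_glm <- glm(survived ~ age + fare + gender,
                 data = titanic_imputed, family = "binomial")

# explain() bundles the model, data, target and label into one wrapper
explainer_glm <- explain(model_glm,
                         data  = titanic_imputed,
                         y     = titanic_imputed$survived,
                         label = "glm")

# Every explainer exposes the same operations, whatever the framework:
predict(explainer_glm, titanic_imputed[1:5, ])  # uniform predictions
model_performance(explainer_glm)                # uniform residual diagnostics
```

Any XAI package built on this abstraction only ever talks to the explainer, never to the model object itself.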
The figure below shows an example model exploration created in modelStudio. The internal structure of the explored model is separated from the model interface. In the same way we can explore a model created in R, python or h2o.
Smart model wrappers
Preparation of a wrapper is automated for the most common ML frameworks. You can specify your own functions for calculating predictions or residuals, but you do not have to: for most frameworks this information can be extracted automatically. Below we present an example for a randomForest model from the randomForest package. The explain() function knows how to wrap objects of the randomForest class, so the wrapper definition is reduced to specifying the model, the validation data and a unique label that will be used in explanations. DALEXtra is an extension pack with predefined wrappers for scikit-learn, keras, mljar, h2o and mlr models.
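The randomForest wrapper might look like this (a sketch: the titanic_imputed data and formula are illustrative, not the original setup):

```r
library("DALEX")
library("randomForest")

# explain() recognises the randomForest class, so no custom
# predict or residual function has to be supplied
model_rf <- randomForest(survived ~ ., data = titanic_imputed)

explainer_rf <- explain(model_rf,
                        data  = titanic_imputed,   # validation data
                        y     = titanic_imputed$survived,
                        label = "randomForest")    # unique label for plots

# For non-R frameworks, DALEXtra provides predefined wrappers,
# e.g. explain_mlr(), explain_h2o() or explain_scikitlearn()
```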
Just to show an example: in the figure below we present partial dependency profiles for four models overlaid in a single figure. One can see that the average model behaviour is quite similar, except for very low values of the age variable, for which catboost gives higher predictions than gbm. This plot was generated with a single plot() function that takes four partial dependency explanations as arguments. Each explanation knows how to access its model (created with python, mlr or java) through an explainer.
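The overlay pattern can be sketched as follows (two models instead of four, on illustrative data; the figure itself used catboost and gbm among others):

```r
library("DALEX")
library("randomForest")
library("ingredients")

model_glm <- glm(survived ~ age + fare + gender,
                 data = titanic_imputed, family = "binomial")
model_rf  <- randomForest(survived ~ age + fare + gender,
                          data = titanic_imputed)

exp_glm <- explain(model_glm, data = titanic_imputed,
                   y = titanic_imputed$survived, label = "glm")
exp_rf  <- explain(model_rf, data = titanic_imputed,
                   y = titanic_imputed$survived, label = "randomForest")

# Partial dependency profiles are computed per explainer...
pd_glm <- partial_dependency(exp_glm, variables = "age")
pd_rf  <- partial_dependency(exp_rf,  variables = "age")

# ...and a single plot() call overlays profiles from different frameworks
plot(pd_glm, pd_rf)
```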
In the DALEXtra vignette you will find more examples on how to create and use wrappers for different frameworks.
For a champion-challenger analysis it makes sense to compare models created in different frameworks. After all, it's an exploration in which you want to try different tools and compare their strengths and weaknesses; maybe your problem would gain a lot from framework X. To facilitate such comparisons you need wrappers that are easy to create and that can be used for cross-model comparisons. DALEX and DALEXtra create such wrappers. They provide an abstraction over the internal structure of predictive models, and you can build your own package for model exploration on top of this abstraction. Take as examples ingredients, iBreakDown, auditor or modelStudio.
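A hedged sketch of such a champion-challenger comparison, again with illustrative models in place of the Salesforce rules and their ML challengers:

```r
library("DALEX")
library("randomForest")

# Champion: a simple logistic regression; challenger: a random forest
champion   <- glm(survived ~ ., data = titanic_imputed, family = "binomial")
challenger <- randomForest(survived ~ ., data = titanic_imputed)

exp_champion   <- explain(champion, data = titanic_imputed,
                          y = titanic_imputed$survived,
                          label = "champion (glm)")
exp_challenger <- explain(challenger, data = titanic_imputed,
                          y = titanic_imputed$survived,
                          label = "challenger (rf)")

# The same performance summary and plot work for both wrappers,
# even if one of them came from python or java via DALEXtra
plot(model_performance(exp_champion),
     model_performance(exp_challenger))
```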
DALEXtra is maintained by Szymon Maksymiuk, who is also a contributor to DALEX. Both tools are developed as part of the DrWhy.AI framework. This description was greatly improved based on comments from Hubert Baniecki, Wojciech Kretowicz, Anna Kozak and Alicja Gosiewska.