Most predictive ML models are based on a simple assumption: the future will be similar to the past. We can learn relationships from historical data and use them to predict the future.
The COVID-19 pandemic has shown us how fragile this assumption is.
Explainability is now more important than ever, because without understanding how black-box ML models work, we risk meaningless predictions caused by data drift, out-of-distribution errors, or other issues.
As part of the DrWhy initiative, we are developing a new tool for interpretable interactive comparisons of multiple predictive models.
Code name: Arena
Various XAI techniques are implemented, so one can juxtapose explanations for different models or explanations for different instances in an interactive dashboard.
- Play with an example for the FIFA 20 data comparing gbm and lm models
- Play with an example for the apartments data comparing three predictive models
Try it yourself
You can use Arena in three steps.
- Train models in any ML framework.
- Wrap them with the DALEX::explain() function.
- Use ArenaR to automatically generate a dashboard for exploring them.
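The three steps above can be sketched as follows. This is a minimal example, not the only workflow: it assumes the `arenar` functions `arena_new`, `arena_push_model`, `arena_push_observations`, and `arena_run_server` as described in the ArenaR vignettes, and uses the `titanic_imputed` dataset shipped with DALEX.

```r
library(DALEX)
library(arenar)

# 1. Train a model in any ML framework (here, a plain logistic regression).
model <- glm(survived ~ ., data = titanic_imputed, family = "binomial")

# 2. Wrap it with DALEX::explain() to get a uniform explainer object.
explainer <- DALEX::explain(model,
                            data  = titanic_imputed,
                            y     = titanic_imputed$survived,
                            label = "glm")

# 3. Build an Arena dashboard and serve it locally in live mode.
arena <- arena_new(live = TRUE)
arena <- arena_push_model(arena, explainer)
arena <- arena_push_observations(arena, head(titanic_imputed))
arena_run_server(arena)
```

To compare several models, wrap each one with its own `DALEX::explain()` call and push each explainer into the same arena object before starting the server.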
You can install the development version of ArenaR from GitHub.
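For example, using the `remotes` package (a common way to install R packages from GitHub):

```r
# install.packages("remotes")  # if not already installed
remotes::install_github("ModelOriented/ArenaR")
```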
Find the GitHub repository at https://github.com/ModelOriented/ArenaR.
Find vignettes and documentation at https://arenar.drwhy.ai/
Find a step-by-step introduction at https://arenar.drwhy.ai/articles/arena_intro_titanic.html
And feel free to star the repo for future updates.