The new version of modelStudio has recently been released on CRAN.
modelStudio is an R package that automates the exploration of ML models and allows for their interactive examination. It works in a model-agnostic fashion and is therefore compatible with most popular ML frameworks (e.g. mlr/mlr3, xgboost, caret, h2o, scikit-learn, lightGBM, keras/tensorflow).
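As an illustration, here is a minimal sketch of the typical workflow: fit a model, wrap it in a DALEX explainer, and pass it to modelStudio(). The glm on the titanic_imputed dataset (shipped with DALEX) serves only as a stand-in for any model you might use.

```r
library("DALEX")
library("modelStudio")

# fit any predictive model, e.g. logistic regression
model <- glm(survived ~ ., data = titanic_imputed, family = "binomial")

# wrap the model in a model-agnostic explainer
explainer <- explain(model,
                     data = titanic_imputed,
                     y = titanic_imputed$survived,
                     label = "glm")

# compute the explanations and open the interactive dashboard
modelStudio(explainer)
```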
Recently, we uploaded to arXiv an article presenting the main principles behind this tool: The Grammar of Interactive Explanatory Model Analysis. Here are the highlights.
Local and global level model explanations complement each other. A growing number of voices argue that a single method of model exploration cannot fit all the needs of different stakeholders (see e.g. Arya et al. 2019 or Sokol et al. 2020). In this article we show how common XAI methods can be combined into larger blocks that complement each other. Such constructs address a wider range of user needs. In the picture below we show how such a juxtaposition of aspects presents the model from different perspectives and helps to better understand how it behaves.
As in the parable of the blind men and the elephant, we cannot sufficiently explain a complex model using a single method that gives only one perspective.
Isolated explanations are prone to misunderstanding, which inevitably leads to wrong reasoning. Without multi-faceted, interactive explanations, there will be neither understanding of nor trust in models.
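As a concrete illustration of this complementarity, the sketch below pairs a local with a global explanation using standard DALEX functions (it assumes the explainer object from the earlier snippet):

```r
# local perspective: break-down of a single prediction
local_bd <- predict_parts(explainer,
                          new_observation = titanic_imputed[1, ])
plot(local_bd)

# global perspective: permutation-based variable importance
global_vi <- model_parts(explainer)
plot(global_vi)
```

Neither view alone tells the whole story: the break-down explains one passenger's prediction, while the importance plot summarizes the model's behaviour over the whole dataset.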
Explanation of predictive models is a process, not a chart. We also argue that each explanation raises new questions. A good XAI system should therefore allow for interactive exploration of different aspects of the model. To make this possible, we introduce a taxonomy of explanations and propose a grammar that generates the process of exploring a complex model.
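For example, a global importance plot may raise the follow-up question of how a particular variable influences predictions. Below is a hedged sketch of such a follow-up step, again assuming the explainer defined earlier:

```r
# follow-up question: how does age affect the model's predictions?

# global answer: partial-dependence profile for age
pdp <- model_profile(explainer, variables = "age")
plot(pdp)

# local answer: ceteris-paribus profile for a single passenger
cp <- predict_profile(explainer,
                      new_observation = titanic_imputed[1, ],
                      variables = "age")
plot(cp)
```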
modelStudio implements the principles of IEMA. The modelStudio framework was created to allow such iterative exploration with a quick feedback loop, as model debugging is often demanding and laborious.
The topic of eXplainable Artificial Intelligence has attracted much attention recently. However, the literature is dominated by works that either focus on a list of requirements for its better adoption or make very technical contributions explaining only a single aspect of the model. In the paper, we propose a third way. First, we argue that explaining a single aspect of the model is incomplete. Second, we propose a taxonomy of explanation methods that focuses on the needs of the different stakeholders apparent in the lifecycle of Machine Learning models. Third, we describe Interactive XAI as a process in which explanations form a sequence of analyses of complementary model aspects.
This is certainly only a single step towards a better understanding of the model exploration process. If you have any suggestions or comments about this process, we will be happy to hear them.