
Sneak peek into ‘sauron’ package – XAI for Convolutional Neural Networks.

[This article was first published on R on Data Science Guts, and kindly contributed to R-bloggers.]

Explainable Artificial Intelligence, or XAI for short, is a set of tools that helps us understand and interpret complicated “black box” machine and deep learning models and their predictions. Today I would like to show you a sneak peek of my newest package, called sauron, which allows you to explain the decisions of Convolutional Neural Networks.

What exactly does a CNN see?

Let’s start with the basics. We’re gonna need a model, test images for which we want to generate explanations, and an image preprocessing function (if needed).

library(sauron)
library(keras) # application_xception() and xception_preprocess_input come from keras

# Example images shipped with the package
input_imgs_paths <- list.files(system.file("extdata", "images", package = "sauron"), full.names = TRUE)
# Pre-trained Xception model and its matching preprocessing function
model <- application_xception()
preprocessing_function <- xception_preprocess_input

There’s a ton of different methods to explain CNNs, but for now sauron gives you access to six gradient-based ones. You can check the full list with sauron_available_methods:

sauron_available_methods
# # A tibble: 6 x 2
#   method name                  
#   <chr>  <chr>                 
# 1 V      Vanilla gradient      
# 2 GI     Gradient x Input      
# 3 SG     SmoothGrad            
# 4 SGI    SmoothGrad x Input    
# 5 IG     Integrated Gradients  
# 6 GB     Guided Backpropagation

The package is still in development, so I won’t talk about the theory behind those methods today. I’ll leave that for another post (or, more likely, multiple posts 🙂).
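Still, to give a rough feel for what “gradient based” means here, below is a minimal, hypothetical sketch of the simplest of those methods, Vanilla gradient: the saliency map is just the gradient of the chosen class score with respect to the input pixels. This uses keras/tensorflow directly and is not sauron’s internal code; vanilla_gradient() and its arguments are names made up for illustration.

library(keras)
library(tensorflow)

# Hypothetical helper (not part of sauron): Vanilla gradient saliency.
# img_tensor: a preprocessed image as a 1 x H x W x C array
# class_index: 1-based index of the class to explain
vanilla_gradient <- function(model, img_tensor, class_index) {
  img <- tf$convert_to_tensor(img_tensor, dtype = "float32")
  with(tf$GradientTape() %as% tape, {
    tape$watch(img)
    preds <- model(img)
    class_score <- preds[, class_index]
  })
  # Gradient of the class score w.r.t. every input pixel is the saliency map
  grads <- tape$gradient(class_score, img)
  as.array(grads)
}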

To generate a set of explanations, simply use the generate_explanations function. Besides the image paths, the model and an optional preprocessing function, you have to pass the class indexes for which explanations should be made (NULL means selecting the class with the highest probability for each image), some method-specific arguments, and whether you want to generate grayscale or RGB explanation maps.

explanations <- generate_explanations(
  model,
  input_imgs_paths,
  preprocessing_function,
  class_index = NULL,
  methods = sauron_available_methods$method,
  num_samples = 5, # SmoothGrad samples
  noise_sd = 0.1, # SmoothGrad noise standard deviation
  steps = 10, # Integrated Gradients steps
  grayscale = FALSE)
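
To make those method-specific arguments a bit less abstract: num_samples and noise_sd control SmoothGrad, which averages vanilla gradients over several noisy copies of the input, while steps controls how many interpolation points Integrated Gradients uses between a baseline image and the input. Here is a rough, hypothetical sketch of the SmoothGrad idea, reusing the vanilla_gradient() helper sketched above; again, this is not sauron’s implementation.

# Hypothetical SmoothGrad sketch: average the gradients over noisy copies of the image
smooth_grad <- function(model, img_tensor, class_index, num_samples = 5, noise_sd = 0.1) {
  maps <- lapply(seq_len(num_samples), function(i) {
    noise <- array(rnorm(length(img_tensor), sd = noise_sd), dim = dim(img_tensor))
    vanilla_gradient(model, img_tensor + noise, class_index)
  })
  Reduce(`+`, maps) / num_samples
}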

Now we can plot our results:

plot_explanations(explanations, FALSE)
# For each input image, the output contains the original image ($Input) followed by
# the explanation maps for each method: $V, $GI, $SG, $SGI, $IG and $GB.
What next?

First of all, as I said at the beginning, sauron is still in development and should be available at the end of 2020. So if this topic is interesting to you, be sure to visit my GitHub from time to time.

Second of all, I’m planning to expand sauron’s capabilities. The first step will be to add methods like Grad-CAM, Guided Grad-CAM, Occlusion and Layer-wise Relevance Propagation (LRP).
