Statistical/Machine Learning explainability using Kernel Ridge Regression surrogates


As announced last week, this week’s topic is Statistical/Machine Learning (ML) explainability using Kernel Ridge Regression (KRR) surrogates. The core idea behind this type of ML explainability method is to fit a second, interpretable model to the predictions of the first, so-called black-box model.
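To make that idea concrete, here is a minimal sketch in R of the general surrogate approach, not the exact procedure from the paper: a random forest (via the randomForest package) stands in for the black box, and a hand-rolled Gaussian-kernel KRR is fitted to its predictions. The dataset (mtcars), the lengthscale l and the ridge penalty lambda are illustrative choices.

```r
# Minimal sketch: fit a black box, then fit a KRR surrogate to its predictions
library(randomForest)

set.seed(123)
X <- as.matrix(mtcars[, c("cyl", "disp", "hp", "wt")])
y <- mtcars$mpg

# 1) "black-box" model (a random forest here, purely as an example)
black_box <- randomForest(x = X, y = y)
y_hat <- predict(black_box, X)          # black-box predictions to be explained

# 2) Gaussian-kernel KRR surrogate fitted to the black-box predictions
gauss_kernel <- function(A, B, l = 1) {
  D2 <- outer(rowSums(A^2), rowSums(B^2), "+") - 2 * tcrossprod(A, B)
  exp(-pmax(D2, 0) / (2 * l^2))
}
Xs <- scale(X)                          # scale covariates before kernelizing
l <- 1; lambda <- 0.1                   # illustrative hyperparameters
K <- gauss_kernel(Xs, Xs, l)
alpha <- solve(K + lambda * diag(nrow(K)), y_hat)   # dual coefficients

# surrogate predictions at the training points (should track y_hat closely)
y_surrogate <- drop(K %*% alpha)
```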

How am I envisaging it? Not by using KRR as a learning model in its own right, but as a flexible (continuous and differentiable) function of the model covariates and predictions. For more details, you can read this pdf document on ResearchGate:

Statistical/Machine learning model explainability using Kernel Ridge Regression surrogates
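As a rough illustration of that continuity/differentiability point, the sketch below continues the example above: because the Gaussian-kernel surrogate is smooth in the covariates, its partial derivatives are available in closed form and can be read as local sensitivities of the black-box predictions. Again, this is only a sketch under the assumptions stated earlier (scaled covariates, illustrative l and lambda), not the procedure from the paper.

```r
# Gradient of the KRR surrogate sum_i alpha_i * exp(-||x - x_i||^2 / (2 l^2))
# at a single point x: one partial derivative per covariate
krr_grad <- function(x, X, alpha, l = 1) {
  k <- drop(gauss_kernel(matrix(x, nrow = 1), X, l))  # kernel values k(x, x_i)
  diffs <- sweep(X, 2, x)                             # rows: x_i - x
  drop(crossprod(diffs, alpha * k)) / l^2
}

# local sensitivities at every training point, then averaged per covariate
sens <- t(apply(Xs, 1, krr_grad, X = Xs, alpha = alpha, l = l))
colnames(sens) <- colnames(X)
colMeans(sens)   # average effect of each covariate on the black-box predictions
```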

Comments and suggestions are welcome, as usual.

