For reasons I couldn't foresee, there were no blog posts here on November 13 and November 20. So, here is the post about LSBoost announced here a few weeks ago.
First things first, what is LSBoost? Gradient boosted nonlinear penalized least squares. More precisely, in LSBoost the ensemble's base learners are penalized, randomized neural networks.
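To make that description concrete, here is a minimal, self-contained sketch of the idea in R: at each boosting iteration, the current residuals are fitted with a randomized single-hidden-layer network whose output weights are obtained by ridge (penalized least squares) regression. This is only an illustration under simplifying assumptions (squared-error loss, tanh activation, fixed hidden-layer size), not the mlsauce implementation itself; all names below are made up for the example.

```r
# Conceptual sketch of LSBoost-style boosting (not the mlsauce code).
set.seed(123)
n <- 200; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.1)

n_estimators  <- 100   # number of boosting iterations
learning_rate <- 0.1   # shrinkage applied to each base learner
n_hidden      <- 25    # size of the random hidden layer
lambda        <- 0.1   # ridge penalty on the output weights

pred <- rep(mean(y), n)                 # initial prediction
learners <- vector("list", n_estimators)

for (m in seq_len(n_estimators)) {
  r <- y - pred                                   # current residuals
  W <- matrix(rnorm(p * n_hidden), p, n_hidden)   # random hidden weights
  H <- tanh(X %*% W)                              # randomized features
  # penalized (ridge) least squares fit of the residuals
  beta <- solve(crossprod(H) + lambda * diag(n_hidden), crossprod(H, r))
  pred <- pred + learning_rate * (H %*% beta)
  learners[[m]] <- list(W = W, beta = beta)
}

cat("training RMSE:", sqrt(mean((y - pred)^2)), "\n")
```

For the actual estimators (and their Python and R interfaces), see the mlsauce posts and the paper linked below.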
These previous posts, with several Python and R examples, constitute a good introduction to LSBoost:
- https://thierrymoudiki.github.io/blog/2020/07/24/python/r/lsboost/explainableml/mlsauce/xai-boosting
More recently, I've also written a short, more formal introduction to LSBoost:
The paper's code, along with more insights on LSBoost, can be found in the following Jupyter notebook:
Comments and suggestions are welcome, as usual.