Boosting nonlinear penalized least squares


For reasons I couldn’t foresee, there were no blog posts here on November 13 and November 20. So, here is the post about LSBoost announced here a few weeks ago.

First things first: what is LSBoost? Gradient boosting of nonlinear penalized least squares. More precisely, in LSBoost, the ensemble’s base learners are penalized, randomized neural networks.
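To make this more concrete, here is a minimal, self-contained R sketch of the idea. It is a simplified illustration under my own assumptions, not the author’s implementation: each boosting iteration fits a randomized single-hidden-layer network to the current residuals via ridge-penalized least squares, and the hyperparameter names below (`n_estimators`, `learning_rate`, `n_hidden`, `lambda`) are chosen for illustration only.

```r
# Illustrative sketch of the LSBoost idea (not the reference implementation):
# gradient boosting whose base learners are randomized neural networks
# fitted by penalized (ridge) least squares.
set.seed(123)

n <- 200; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.1)

n_estimators <- 100   # number of boosting iterations (assumed name)
learning_rate <- 0.1  # shrinkage applied to each base learner
n_hidden <- 25        # hidden units in each randomized network
lambda <- 0.1         # ridge penalty

fit_base_learner <- function(X, r, n_hidden, lambda) {
  # random hidden-layer weights, kept fixed (never trained)
  W <- matrix(rnorm(ncol(X) * n_hidden), ncol(X), n_hidden)
  H <- tanh(X %*% W)  # nonlinear random features
  # penalized least squares (ridge) on the current residuals r
  beta <- solve(crossprod(H) + lambda * diag(n_hidden), crossprod(H, r))
  list(W = W, beta = beta)
}

predict_base_learner <- function(fit, X) {
  drop(tanh(X %*% fit$W) %*% fit$beta)
}

# boosting loop: each learner fits the residuals of the current ensemble
pred <- rep(mean(y), n)
learners <- vector("list", n_estimators)
for (m in seq_len(n_estimators)) {
  r <- y - pred
  learners[[m]] <- fit_base_learner(X, r, n_hidden, lambda)
  pred <- pred + learning_rate * predict_base_learner(learners[[m]], X)
}

cat("in-sample RMSE:", sqrt(mean((y - pred)^2)), "\n")
```

The randomization (fixed random hidden weights) keeps each base fit cheap and decorrelates the learners, while the ridge penalty controls the variance of each fit; the shrinkage factor plays the usual learning-rate role in gradient boosting.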

These previous posts, with several Python and R examples, constitute a good introduction to LSBoost:

More recently, I’ve also written a more formal, short introduction to LSBoost:

The paper’s code – and more insights on LSBoost – can be found in the following Jupyter notebook:

Comments and suggestions are welcome, as usual.

