Ensemble Learning in R

March 7, 2017
(This article was first published on R Blog, and kindly contributed to R-bloggers)

Previous research in data mining has devised numerous algorithms for learning tasks. While an individual algorithm might already work decently, one can usually obtain better predictive performance by combining several of them. This approach is referred to as ensemble learning.
Common examples include random forests, boosting, and AdaBoost in particular.
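
For readers who want a quick hands-on impression before opening the slides, here is a minimal R sketch (not taken from the slide deck) that fits two of the ensembles mentioned above, a random forest and an AdaBoost model, on the built-in iris data. It assumes the randomForest and adabag packages are installed.

# Minimal sketch: two ensemble learners on a simple hold-out split
library(randomForest)
library(adabag)

set.seed(123)
train_idx <- sample(nrow(iris), 0.7 * nrow(iris))
train <- iris[train_idx, ]
test  <- iris[-train_idx, ]

# Random forest: an ensemble of decision trees grown on bootstrap samples
rf <- randomForest(Species ~ ., data = train, ntree = 500)
mean(predict(rf, test) == test$Species)          # hold-out accuracy

# AdaBoost: sequentially reweighted trees combined by a weighted vote
ada <- boosting(Species ~ ., data = train, mfinal = 50)
mean(predict(ada, test)$class == test$Species)   # hold-out accuracy

Both models aggregate many weak decision trees, which is exactly the idea the slides develop in more detail.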

Our slide deck sits at the intersection of teaching the basic idea behind ensemble learning and providing practical guidance for working in R.
Accordingly, each algorithm comes with an easy-to-understand explanation of how to use it in R.

We hope that the slide deck enables practitioners to quickly adopt ensemble learning for their applications in R. Moreover, the materials might lay the groundwork for courses on data mining and machine learning.

Download the slides here
Download the exercise sheet here
The content was republished on r-bloggers.com with permission.
