Machine Learning and Data Mining with R

[This article was first published on Revolutions, and kindly contributed to R-bloggers].

The San Francisco Bay Area ACM runs several courses on data mining and machine learning with R. Machine Learning 101 deals primarily with supervised learning problems, and Machine Learning 102 covers unsupervised learning and fault detection. Machine Learning 101 & 102 were most recently presented by Mike Bowles & Tricia Hoffman in September, and the lecture notes and class exercises are available for download.

If you'd like to attend these classes in person, they'll run again at the Hacker Dojo near San Francisco starting on January 22. The cost is $150 per person, and you can register here for Machine Learning 101/102.


Both 101 and 102 begin at the level of elementary probability and statistics and from that background survey a broad array of machine learning techniques. The classes will give participants a working knowledge of these techniques and will leave them prepared to apply those techniques to real problems. To get the most out of the class, participants will need to work through the homework assignments.

You can also register for Machine Learning 201/202 (starting on January 12) for a more in-depth exploration of these topics, as described below:

Machine Learning 201 and 202 cover topics in greater depth than 101 and 102. Participants in the class should come away able to read the current literature and apply what they read to their own work. Machine Learning 201 and 202 can be taken in any order.

Machine Learning 201 begins with ordinary least squares regression and extends this basic tool in a number of directions. We'll consider various regularization approaches. We'll introduce logistic regression and we'll learn how to code categorical inputs and outputs. We'll look at feature space expansions. These will lead naturally to generalizations of linear regression, known as the “generalized linear model” and the “generalized additive model”.
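As a rough sketch of the first topics covered in 201 (not the course's actual exercises), here is how ordinary least squares and logistic regression with a categorical input look in base R; the simulated data and variable names below are invented for illustration:

```r
# Hypothetical example: OLS and logistic regression in base R.
# The data here is simulated purely for illustration.
set.seed(42)
n <- 200
x <- rnorm(n)
grp <- factor(sample(c("a", "b", "c"), n, replace = TRUE))  # a categorical input
y_num <- 1 + 2 * x + rnorm(n)                   # numeric response for OLS
y_bin <- rbinom(n, 1, plogis(0.5 + 1.5 * x))    # binary response for logistic regression

ols <- lm(y_num ~ x)                              # ordinary least squares
logit <- glm(y_bin ~ x + grp, family = binomial)  # logistic regression via glm();
                                                  # factor() handles dummy coding of grp

coef(ols)                       # estimated intercept and slope
summary(logit)$coefficients     # logistic coefficients, incl. grp dummy terms
```

Note that `glm(..., family = binomial)` is already an instance of the generalized linear model the paragraph above mentions: logistic regression is a GLM with a binomial family and logit link.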

Hacker Dojo Machine Learning 101 & 102: Fall 2010 Course Materials
