# Ryan Rosario on Parallel programming in R

[This article was first published on **Revolutions**, and kindly contributed to R-bloggers.]

Earlier this year data scientist Ryan Rosario gave a talk on parallel computing with R to the Los Angeles R User Group, and he recently made the slides from the talk available online. They're a great resource for anyone looking to use multi-processor systems or a Hadoop-based architecture to speed up computations with big data. Ryan's talk was divided into three parts:

- **Explicit parallelism** makes the R programmer responsible for dividing the problem into independent chunks (to be run in parallel), and also for aggregating the results from each chunk. It's especially suited to "embarrassingly parallel" problems like large-scale simulations and by-group analyses. Ryan explains how to use the parallel package in R to perform explicit parallelism, using random cross-subset validation (to train a spam-detection algorithm) as an example.
- **Implicit parallelism** is easier for programmers than explicit parallelism because, as Ryan writes, "most of the messy legwork in setting up the system and distributing data is abstracted away." In this section, Ryan shows how to use the mclapply function from the multicore package. It works just like the regular lapply function to iterate across the elements of a list, but the iterations automatically run in parallel to speed up the computations. In the Appendix at the end of the slides, Ryan also shows how to use Revolution Analytics' foreach package with doMC for parallel programming, with some neat examples of bootstrapping and a parallel implementation of the quicksort algorithm.
- **Map-Reduce** is a somewhat complex but very powerful paradigm for processing large data stores in parallel. It's best known as the programming framework for Hadoop-based systems, and Ryan shows how to use Revolution Analytics' RHadoop project to implement map-reduce for Hadoop using R. An implementation of K-means clustering with Hadoop is given as an example.
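To give a flavor of the first two approaches, here is a minimal sketch (not from Ryan's slides) contrasting sequential `lapply` with its parallel counterparts. It uses the `parallel` package bundled with modern R; the task and the `slow_square` function are made-up stand-ins for real work:

```r
library(parallel)

# An "embarrassingly parallel" task: each input is processed
# independently, so the iterations can run on separate workers.
slow_square <- function(x) {
  x^2  # stand-in for a genuinely slow computation
}

inputs <- 1:8

# Sequential baseline.
seq_result <- lapply(inputs, slow_square)

# Explicit parallelism: start a cluster of 2 worker processes,
# distribute the work, then shut the workers down.
cl <- makeCluster(2)
par_result <- parLapply(cl, inputs, slow_square)
stopCluster(cl)

# On Unix-alikes, mclapply() is an implicit, drop-in alternative:
#   par_result <- mclapply(inputs, slow_square, mc.cores = 2)
# (forking is unavailable on Windows, where mc.cores must be 1).

identical(seq_result, par_result)  # TRUE: same answer, computed in parallel
```

The results are identical to the sequential version; the payoff comes when each iteration is expensive enough to outweigh the overhead of distributing work to the workers.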

Many thanks to Ryan for sharing these slides, which you can find at the link below.

ByteMining: Parallelization in R, Revisited

To **leave a comment** for the author, please follow the link and comment on their blog: **Revolutions**.

R-bloggers.com offers **daily e-mail updates** about R news and tutorials about learning R and many other topics. Click here if you're looking to post or find an R/data-science job.
