AMPLab Announces Developer Preview of SparkR

[This article was first published on Revolutions, and kindly contributed to R-bloggers.]

The team at AMPLab has announced a developer preview of SparkR, an R package that lets R users run jobs on an Apache Spark cluster. Spark is an open source project for distributed in-memory computing aimed at advanced analytics, including fast interactive queries, machine learning, streaming analytics and graph processing. Spark works with any data format supported by Hadoop, and supports Hadoop YARN 2.2.

SparkR exposes the Spark API as distributed lists in R and automatically serializes the necessary variables to execute a function on the cluster. 
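To make the distributed-list model concrete, a session might look roughly like the sketch below. The function names (sparkR.init, parallelize, lapply, collect) reflect the developer-preview API as described in the announcement and project documentation, so treat them as assumptions rather than a definitive reference.

```r
library(SparkR)

# Connect to a local Spark instance (use a spark:// master URL for a real cluster).
sc <- sparkR.init(master = "local")

# Distribute an ordinary R vector across the cluster as a distributed list.
nums <- parallelize(sc, 1:1000, 4)

# Functions passed to lapply run on the workers; variables they reference
# (here, `exponent`) are serialized from the driver session automatically.
exponent <- 2
powers <- lapply(nums, function(x) x ^ exponent)

# Bring the results back to the local R session as a regular list.
collect(powers)
```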

SparkR is available now on GitHub. It requires Scala 2.10 and Spark 0.9.0 or higher, and depends on the rJava and testthat R packages.
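As a quick start, the R-side dependencies can be installed from CRAN; the snippet below is a minimal sketch and assumes SparkR itself is then built from the GitHub sources following the project's own instructions.

```r
# Install the R packages SparkR depends on (names taken from the announcement).
install.packages(c("rJava", "testthat"))

# SparkR itself is built from the GitHub sources against an existing
# Spark 0.9.0+ / Scala 2.10 installation; see the project README for the
# exact build steps (not reproduced here).
```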
