The team at AMPLab has announced a developer preview of SparkR, an R package that lets R users run jobs on an Apache Spark cluster. Spark is an open source project that supports distributed in-memory computing for advanced analytics, including fast interactive queries, machine learning, streaming analytics, and graph processing. Spark works with every data format supported in Hadoop, and runs on YARN 2.2.
SparkR exposes the Spark API as distributed lists in R and automatically serializes the necessary variables to execute a function on the cluster.
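As a rough sketch of that model (function names such as `sparkR.init`, `parallelize`, `lapply`, and `collect` reflect the preview's API as announced, but the `numSlices` argument and exact signatures should be checked against the project's README):

```r
library(SparkR)

# Connect to a Spark cluster; "local" runs Spark in-process for testing
sc <- sparkR.init(master = "local")

# Distribute an R vector across the cluster as a distributed list (RDD),
# here split into 4 partitions (numSlices is assumed from the preview API)
rdd <- parallelize(sc, 1:1000, numSlices = 4)

# The function passed to lapply(), along with any variables it references,
# is serialized and shipped to the workers for execution
squares <- lapply(rdd, function(x) x * x)

# Bring the results back into the local R session as a list
collect(squares)
```

The appeal is that the closure and its environment are captured automatically, so existing R functions can be applied to partitioned data with little ceremony.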
SparkR is available now on GitHub. It requires Scala 2.10 and Spark version 0.9.0 or higher, and depends on the rJava and testthat R packages.
To leave a comment for the author, please follow the link and comment on his blog: Revolutions.