New series: R and big data (concentrating on Spark and sparklyr)
Win-Vector LLC has recently been teaching how to use R with big data through Spark and sparklyr. We have also been helping clients become productive on R/Spark infrastructure through direct consulting and bespoke training. I thought this would be a good time to talk about the power of working with big data using R, share some hints, and even admit to some of the warts found in this combination of systems.
The ability to perform sophisticated analyses and modeling on “big data” with R is rapidly improving, and this is the time for businesses to invest in the technology. Win-Vector can be your key partner in methodology development and training (through our consulting and training practices).
[Image: J. Howard Miller, 1943.]
The field is exciting, rapidly evolving, and even a touch dangerous. We invite you to start using Spark through R, and we are starting a new series of articles tagged “R and big data” to help you produce production-quality solutions quickly.
Please read on for a brief description of our new article series: “R and big data.”
Background
R is a best-of-breed in-memory analytics platform. R allows the analyst to write programs that operate over their data and bring in a huge suite of powerful statistical techniques and machine learning procedures. Spark is an analytics platform designed to operate over big data that exposes some of its own statistical and machine learning capabilities. R can now be operated “over Spark”. That is: R programs can delegate tasks to Spark clusters and issue commands to Spark clusters. In some cases the syntax for operating over Spark is deliberately identical to working over data stored in R.
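For concreteness, here is a minimal sketch of that workflow using sparklyr and dplyr. It assumes a local Spark installation (sparklyr::spark_install() can set one up), and the table name "mtcars_spark" is just for illustration.

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance; a cluster URL works the same way.
sc <- spark_connect(master = "local")

# Copy a small example data set into Spark. The result is a handle
# (a reference to a remote Spark table), not an in-memory R data.frame.
mtcars_tbl <- copy_to(sc, mtcars, "mtcars_spark")

# These dplyr verbs are translated to Spark SQL and executed by
# Spark; the syntax is the same as for local R data.
mtcars_tbl %>%
  group_by(cyl) %>%
  summarise(mean_mpg = mean(mpg, na.rm = TRUE)) %>%
  arrange(cyl)
```

Until you explicitly collect() a result back, the data stays in Spark; only the commands (and small summaries) travel.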
Why R and Spark
The advantages are:

- Spark can work at a scale and speed far larger than native R. The ability to send work to Spark increases R‘s capabilities.
- R has machine learning and statistical capabilities that go far beyond what is available on Spark or any other “big data” system (many of which are descended from report generation or basic analytics). The ability to use specialized R methods on data samples yields additional capabilities (see the sketch after this list).
- R and Spark can share code and data.
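As a concrete illustration of the second point above, here is a hedged sketch of the "sample down, then model locally" pattern. It continues from the mtcars_tbl handle in the Background sketch; on a real cluster the source table would be far larger, which is what makes the sampling step essential.

```r
# Draw a (roughly) 50% sample on the Spark side, then collect()
# it into local R memory as an ordinary data.frame.
local_sample <- mtcars_tbl %>%
  sdf_sample(fraction = 0.5, replacement = FALSE, seed = 2017) %>%
  collect()

# Any in-memory R method now applies; a simple linear model stands
# in here for the much broader set of specialized R techniques.
model <- lm(mpg ~ wt + cyl, data = local_sample)
summary(model)
```

The division of labor: Spark performs the heavy reduction (filtering, aggregation, sampling) at scale, and R performs the sophisticated modeling on the portion that fits in memory.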
The R/Spark combination is not the only show in town, but it is a powerful capability that may not be safe to ignore. We will also talk about additional tools that can be brought into the mix, such as the powerful large-scale machine learning capabilities from h2o.
The warts
Frankly, a lot of this is very new and still on the “bleeding edge.” Spark 2.x has only been available in stable form since July 26, 2016 (or just under a year). Spark 2.x is much more capable than the Spark 1.x series in terms of both data manipulation and machine learning, so we strongly suggest clients insist on Spark 2.x clusters from their infrastructure vendors (such as Cloudera, Hortonworks, MapR, and others), despite these having only become available in packaged solutions recently. The sparklyr adapter itself was first available on CRAN only as of September 24th, 2016. And SparkR only started shipping with Spark 1.4 as of June 2015.
While R/Spark is indeed a powerful combination, nobody seems to be sharing a lot of production experiences and best practices with it yet.
Some of the problems are sins of optimism. A lot of people still confuse successfully standing up a cluster with effectively using it. Other people confuse the statistical and machine learning procedures available in in-memory R (which are very broad and often quite mature) with those available in Spark (which are less numerous and less mature).
Our goal
What we want to do with the “R and big data” series is:

- Give a taste of some of the power of the R/Spark combination.
- Share a “capabilities and readiness” checklist you should apply when evaluating infrastructure.
- Start to publicly document R/Spark best practices.
- Describe some of the warts and how to work around them.
- Share fun tricks and techniques that make working with R/Spark much easier and more effective.
The start
Our next article in this series will be up soon and will discuss the nature of data handles in sparklyr (one of the R/Spark interfaces) and how to manage your data inventory neatly.