Posts Tagged 'high-performance computing'

Using R for Map-Reduce applications in Hadoop

May 4, 2011

Data Scientist Antonio Piccolboni recently published this comparison of the various languages and interfaces available for programming Big Data analysis tasks in the map-reduce framework. The interfaces he reviewed included: Java Hadoop (mature and efficient, but verbose and difficult to program); Cascading (brings an SQL-like flavor to Java programming with Hadoop); Pipes/C++ (a C++ interface to programming on Hadoop)...
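The map step itself is easy to sketch in plain R. A toy word-count mapper, emitting the tab-separated key/value records that Hadoop Streaming consumes (the function name is ours for illustration, not from any of the reviewed interfaces):

```r
# Map step of a word count: for one input line, emit one
# "word<TAB>1" record per token, the format Hadoop Streaming expects.
map_line <- function(line) {
  words <- strsplit(line, "[[:space:]]+")[[1]]
  paste0(words[nzchar(words)], "\t1")
}

map_line("to be or not to be")
```

In a real streaming job this function would run inside an Rscript that reads lines from stdin and writes the records to stdout.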


Parallel processing in R for Windows

March 4, 2011

The doSMP package (and its companion package, revoIPC), previously bundled only with Revolution R, is now available on CRAN for use with open-source R under the GPL2 license. In short, doSMP makes it easy to do SMP parallel processing on a Windows box with multiple processors. (It works on Mac and Linux too, but it's been relatively easy to...
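Assuming the API described in the package's documentation, a minimal foreach loop parallelized with doSMP might look like this (the two-worker count is arbitrary):

```r
library(foreach)
library(doSMP)

w <- startWorkers(workerCount = 2)   # launch two local worker processes
registerDoSMP(w)                     # make %dopar% dispatch work to them

# squares of 1..4, computed across the workers
res <- foreach(i = 1:4, .combine = c) %dopar% i^2

stopWorkers(w)                       # shut the workers down when done
```

The same loop runs sequentially (with a warning) if no parallel backend is registered, which makes foreach code easy to develop on one core and deploy on many.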


Setting up a parallel computing cluster for R with OpenSSH and doSNOW

February 25, 2011

Responding to yesterday's post which included an aside on using parallel processing for by-group computations in R, reader Christian Gunning mused about the possibility of using doSNOW on his network, with OpenSSH to manage the authentication: I sit on a fast campus network and have at least 10 remote cores available that I could farm out for big jobs....
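Under the setup Christian describes, once passwordless OpenSSH logins are in place the R side is only a few lines (the hostnames here are placeholders):

```r
library(doSNOW)   # loads foreach and snow

# each host must accept a passwordless OpenSSH login; names are hypothetical
cl <- makeCluster(c("node1", "node2", "node3"), type = "SOCK")
registerDoSNOW(cl)

# a sum farmed out across the remote cores
total <- foreach(x = 1:30, .combine = `+`) %dopar% x

stopCluster(cl)
```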


Packages for By-Group Processing in R

February 24, 2011

Analyst and BI expert Steve Miller takes a look at the facilities in R for doing "by-group" processing of data. The task consisted of: ... read several text files, merge the results, reshape the intermediate data, calculate some new variables, take care of missing values, attend to meta data, execute a few predictive models and graph the results. Then...
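Base R alone handles the simplest by-group case; for instance, a grouped mean over the built-in mtcars data:

```r
# average miles-per-gallon by cylinder count: one group per cyl value
by_cyl <- with(mtcars, tapply(mpg, cyl, mean))
round(by_cyl, 1)   # named vector, one mean per group
```

Packages like plyr generalize this pattern to arbitrary split-apply-combine workflows over data frames.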


Run R in parallel on a Hadoop cluster with AWS in 15 minutes

January 10, 2011

If you're looking to apply massively parallel resources to an R problem, one of the most time-consuming aspects of the problem might not be the computations themselves, but the task of setting up the cluster in the first place. You can use Amazon Web Services to set up the cluster in the cloud, but even that takes some time,...


Using R and Hadoop to analyze VOIP data

November 8, 2010

Last month, the newest member of Revolution's engineering team, Saptarshi Guha, gave a presentation at Hadoop World 2010 on using R and Hadoop to analyze 1.3 billion voice-over-IP packets to identify calls and measure call quality. Saptarshi, of course, is the author of RHIPE, which lets R programmers write map-reduce algorithms in the Hadoop framework without needing to learn...


Making sense of MapReduce

September 24, 2010

From guest blogger Joseph Rickert. Last night I went to hear Ken Krugler of Bixolabs talk about Hadoop at the monthly meeting of the Software Developers Forum. Maybe because Ken is an unusually lucid speaker, or maybe because I just reached some sort of cumulative tipping point through the prep work of all those patient people who have tried...
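The model boils down to three phases, which can be mimicked in a few lines of plain R (a toy word count, not actual Hadoop code):

```r
lines <- c("the quick brown fox", "the lazy dog")

# map: emit one (word, 1) pair per token
words <- unlist(strsplit(lines, " "))
values <- rep(1, length(words))

# shuffle: group the values by key
shuffled <- split(values, words)

# reduce: collapse each group to a single value
counts <- sapply(shuffled, sum)
```

Hadoop's contribution is running the map and reduce phases in parallel across a cluster and handling the shuffle, fault tolerance, and data locality for you.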


Guidelines for efficient R programming

September 22, 2010

R is designed to make it easy to clearly express statistical ideas in code, but when it comes to writing code that runs as fast as possible, there are a few tips, tricks and caveats to be aware of. As part of the BioConductor conference this past summer, Martin Morgan prepared a tutorial on efficient R programming. (Patrick Aboyoun...
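One recurring tip in such tutorials is to prefer vectorized operations over element-by-element loops. Both versions below compute the same result, but the vectorized form is typically orders of magnitude faster:

```r
n <- 1e5

# loop version: preallocate the result, then fill it element by element
slow <- numeric(n)
for (i in seq_len(n)) slow[i] <- sqrt(i)

# vectorized version: one call over the whole vector
fast <- sqrt(seq_len(n))
```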


Saptarshi Guha on Hadoop, R

September 20, 2010

Saptarshi Guha (author of the Rhipe package) joins the likes of eBay, Yahoo, Twitter and Facebook as one of just 37 presenters at the Hadoop World conference. (Revolution Analytics is proud to sponsor Saptarshi's presence at this event, which takes place in New York on October 12.) He'll be talking about using R and Hadoop to analyze Voice-over-IP...


plyr and reshape: better, faster, more productive

September 10, 2010

Hadley Wickham has just released updates to his data-manipulation packages for R, plyr and reshape (now called reshape2), that are much faster and more memory-efficient than the previous incarnations. The reshape2 package lets you flexibly restructure and aggregate data using just three functions (melt, acast and dcast), whereas the plyr package is kind of like a supercharged SQL "GROUP...
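A minimal round trip with the two core reshape2 verbs (the data frame and its column names are made up for illustration):

```r
library(reshape2)

df <- data.frame(id = c("a", "b"), x = c(1, 2), y = c(3, 4))

long <- melt(df, id.vars = "id")     # wide -> long: one row per (id, variable)
wide <- dcast(long, id ~ variable)   # long -> wide: back to the original shape
```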
