Comparing performance in R, foreach/doSNOW, SAS, and NumPy (MKL)

June 17, 2012
This is a follow-up to my previous post.  There is a quicker way to compute the function I created (a basic cumulative sum) in R.  Recall the original:

f = function(x) {
  sum = 0
  for (i in seq(1, x)) sum = sum + i
  return(sum)
}

Use this:

f2 = function(x) {
  return(sum(seq(x)))
}
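As an aside (this sketch is mine, not from the original post): sum(seq(x)) still materializes the whole sequence, while the closed form x(x+1)/2 gives the same answer with no allocation at all. A quick check in Python:

```python
# The loop-based cumulative sum, mirroring the original R function f
def csum(x):
    total = 0
    for i in range(1, x + 1):
        total += i
    return total

# Closed form: 1 + 2 + ... + x == x * (x + 1) / 2
def closed_form(x):
    return x * (x + 1) // 2

# Verify they agree across a few inputs
for x in (1, 10, 10000):
    assert csum(x) == closed_form(x)
print(closed_form(10000))  # 50005000
```

The same one-liner works in R as x*(x+1)/2.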

If I time it, we see:

system.time( (out = apply(as.array(seq(10000)),1,f2)))
user  system elapsed
0.35    0.05    0.39

Nice!  Spread that across 3 CPUs and we can bring it down a bit:

library(doSNOW)
cl = makeCluster(3)
registerDoSNOW(cl)
system.time( (out2 = foreach(i=seq(0,9), .combine='c') %dopar% {
  apply(as.array(seq(i*1000+1, (i+1)*1000)), 1, f2)
}))
user  system elapsed
0.02    0.00    0.26
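For comparison outside R (my sketch, not part of the original benchmark; the function names are illustrative), the same split-into-blocks-of-1000 pattern looks like this with Python's multiprocessing:

```python
from multiprocessing import Pool

def f2(x):
    # Same as the R f2: sum of 1..x
    return sum(range(1, x + 1))

def block(i):
    # One block of 1000 inputs, mirroring seq(i*1000+1, (i+1)*1000)
    return [f2(x) for x in range(i * 1000 + 1, (i + 1) * 1000 + 1)]

def run():
    # Three workers, ten blocks, then flatten -- the .combine='c' step
    with Pool(3) as pool:
        chunks = pool.map(block, range(10))
    return [v for chunk in chunks for v in chunk]

if __name__ == "__main__":
    out = run()
    print(len(out))  # 10000
```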

Not too shabby.  How fast can we do this in SAS?

options cmplib=work.fns;
proc fcmp outlib=work.fns.fns;
function csum(x);
  sum = 0;
  do i = 1 to x;
    sum = sum + i;
  end;
  return (sum);
endsub;
run;

data _null_;
  do i = 1 to 10000;
    x = csum(i);
  end;
run;
NOTE: DATA statement used (Total process time):
real time           0.24 seconds
cpu time            0.25 seconds

SAS on a single CPU is just as fast as R on three.  It’s not worth attempting to multi-thread this in SAS: the overhead would be too high, as SAS/CONNECT is built for bigger problems.

So what about NumPy in Python?  If we use the version compiled against MKL, we ought to be able to do the reduction in blazing fast time, since MKL can use the SSE registers on the processor.  Further, we’ll use the “fromfunction” method, which lets us pass a function to the array creation call.

import numpy as np
import time

def f(x, y):
    x = x + 1
    return np.cumsum(x)

s = time.time()
y = np.fromfunction(f, (10000, 1))
el = time.time() - s
print("%0.6f" % el)

0.002000

FAST.
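Part of the trick here (my note, not from the original post) is that np.cumsum computes all 10,000 running totals in a single vectorized pass, whereas the R and SAS loops re-sum from 1 for every i. A small check that the vectorized result matches the scalar function:

```python
import numpy as np

n = 10000
# One vectorized pass: the k-th entry is 1 + 2 + ... + k
vec = np.cumsum(np.arange(1, n + 1))

# Spot-check every entry against the closed form k*(k+1)/2
k = np.arange(1, n + 1)
assert np.array_equal(vec, k * (k + 1) // 2)
print(vec[-1])  # 50005000
```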
