Row Search in Parallel


I’ve always wondered whether the efficiency of row search can be improved if the whole data.frame is split into chunks and the row search is then conducted within each chunk in parallel.

In the R code below, the standard row search is compared with a parallel row search using the foreach package. The result is very encouraging: for 10 replications, the elapsed time of the parallel search is only a fraction of the elapsed time of the standard search.

# load the 2008 airline on-time data, which contains a data.frame named `data`
load('2008.Rdata')
# split the data.frame into 8 chunks, assigning rows round-robin to chunks 1..8
data2 <- split(data, 1:8)

library(rbenchmark)
library(foreach)
library(doParallel)
registerDoParallel(cores = 8)

benchmark(replications = 10, order = "elapsed",
  # standard search: subset the whole data.frame in a single call
  non_parallel = {
    test1 <- data[which(data$ArrTime == 1500 & data$Origin == 'ABE'), ]
  },
  # parallel search: subset each chunk on its own worker and rbind the pieces
  parallel = {
    test2 <- foreach(i = data2, .combine = rbind) %dopar%
      i[which(i$ArrTime == 1500 & i$Origin == 'ABE'), ]
  }
)
#           test replications elapsed relative user.self sys.self user.child
# 2     parallel           10   2.680    1.000     0.319    0.762     12.078
# 1 non_parallel           10   7.474    2.789     7.339    0.139      0.000
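
As a quick sanity check, not part of the original benchmark, the two searches should return the same rows even though the parallel version reassembles the chunks in split order rather than the original row order. A minimal sketch, assuming row names are preserved through split() and rbind():

# sort both results by row name before comparing, since the parallel result
# may return the matching rows in a different order
identical(test1[order(as.numeric(rownames(test1))), ],
          test2[order(as.numeric(rownames(test2))), ])
# should return TRUE if both searches retrieved the same rows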
