Unless you count DSC 2003 in Vienna, last week's useR conference at Stanford was my very first useR. It was a great event. It was awesome to meet our lovely and vibrant R community in real life, which we otherwise only get to know from online interactions, and of course it was very nice to meet old friends and make new ones.
*The future is promising.*
At the end of the second day, I presented A Future for R (18 min talk; slides below) on how you can use the future package for asynchronous (parallel and distributed) processing using a single unified API, regardless of what backend you have available, e.g. multicore, multisession, ad hoc cluster, or job schedulers. I ended with a teaser on how futures can be used for much more than speeding up your code, e.g. generating graphics remotely and displaying them locally.
Here's an example using two futures that process data in parallel:
```r
> library("future")
> plan(multiprocess)      ## Parallel processing
> a %<-% slow_sum(1:50)   ## These two assignments are
> b %<-% slow_sum(51:100) ## non-blocking and in parallel
> y <- a + b              ## Waits for a and b to be resolved
> y
[1] 5050
```
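To make the "single unified API" point concrete, here is a minimal sketch of switching backends. Note that `slow_sum()` is a stand-in helper of my own (it is not part of the package), and the commented-out host names are hypothetical; the key observation is that only the `plan()` call changes, while the future assignments stay the same.

```r
library("future")

## Stand-in for a slow computation (not part of the package)
slow_sum <- function(x) { Sys.sleep(1); sum(x) }

## Pick a backend; the code below stays the same either way
plan(multisession)  ## background R sessions on this machine
## plan(cluster, workers = c("n1", "n2"))  ## ad hoc cluster (hypothetical hosts)

a %<-% slow_sum(1:50)    ## evaluated in parallel
b %<-% slow_sum(51:100)  ## evaluated in parallel
a + b                    ## blocks until both are resolved; 5050
```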
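The remote-graphics teaser can be sketched along these lines (my illustration, not code from the talk, and it assumes ggplot2 is installed): the plot object is built in a background R session, only the object travels back, and rendering happens on the local graphics device.

```r
library("future")
plan(multisession)

## Build a ggplot object in a background R session ...
g %<-% {
  library("ggplot2")
  ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()
}

## ... and render it locally
print(g)
```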
Below are different formats of my talk (18 slides + 9 appendix slides), presented on 2016-06-28:
- HTML (incremental slides; requires online access)
- HTML (non-incremental slides; requires online access)
- PDF (incremental slides)
- PDF (non-incremental slides)
- Markdown (screen reader friendly)
- Channel 9 or YouTube (video recording)
May the future be with you!
- useR 2016:
  - Conference site: http://user2016.org/
  - Talk abstract: https://user2016.sched.org/event/7BZK/a-future-for-r
- future package:
- future.BatchJobs package:
- doFuture package: