Fearsome Engines Part 2: Innovations and new features
There are lots of R engines emerging! I’ve interviewed members of each of the teams involved in these projects. In part 1 of this series, we covered the motivation of each project. This part looks at the technical achievements and new features.
Many of the innovations are performance improvements, reflecting the primary goal of several of the teams. pqR (the only engine that, like GNU R, is written in C) gets a lot of benefit from simple fixes to R’s C code.
Radford Neal:
A big part of the speed-up in pqR comes just from local rewrites of C code, or relatively limited redesigns of how operations are done.
In general however, some bigger changes have been required to make R go faster.
Garbage collection and reference counting
One performance bottleneck seems to be R’s garbage collector (the thing that frees up memory when you remove variables), with almost all the projects using a different collection strategy to try and make memory management more efficient.
Renjin gets a lot of benefit for free by using the Java Virtual Machine (JVM):
Alex Bertram:
The JVM has 20 years and millions of man hours invested in the garbage collector. As revolutionary as R is, you just don’t have a group of people that are focused on garbage collection. So out of the box we see that if you have a memory-intensive benchmark then Renjin sails through it [compared to GNU R].
One of the things that we get for free is parallel garbage collection. That’s a nice extra.
In theory, if you copy a variable then R will create a completely new copy of it. For example, if you do y <- x <- 1:10, then x and y both store completely separate copies of 1:10. In practice, this would be really wasteful of memory, so y will just point to the same memory location as x until you modify one of the variables to make them different. This means that you need to keep a count of how many variables point to a particular location in memory; a process known as reference counting.
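You can actually watch this copy-on-write behaviour happen in GNU R using the built-in tracemem function, which reports whenever an object gets duplicated. A minimal sketch (the printed addresses will differ on your machine):

x <- c(1, 2, 3)
y <- x            # no copy yet: y shares x's memory
tracemem(x)       # start watching the shared object
y[1] <- 0         # only now is a real copy made ("copy-on-write")
untracemem(x)     # stop watching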
GNU R uses a simple system where variables are counted as referenced zero, one, or two-or-more times. Both pqR and TERR have tried to improve on this. pqR stores the number of references in a three-bit field, so it can count from zero to seven (where seven really means “seven-or-more”). TERR goes even further, storing the count as a 16-bit integer, allowing counts up to 65535. The more accurate reference count allows some memory to be reclaimed as soon as a count decreases to zero, which is cheaper than reclaiming it via the garbage collector. (GNU R and pqR always use the garbage collector for reclaiming memory.)
While the pros and cons of different methods of reference counting and garbage collection can get quite technical, the good news is that it seems to be an area where substantial performance improvement is possible. Radford Neal has written in depth about the pqR implementation, and Michael Sannella’s talk from useR2013 is available on SlideShare.
Smart execution
GNU R uses what is known as “lazy evaluation” on variables. That is, when you pass variables into a function they aren’t copied or evaluated until absolutely necessary. Several of the engines have taken this concept further, with smarter ways of evaluating (or not evaluating) code in order to improve performance.
Riposte uses “fusion” operations on vectors to reduce the creation of intermediate variables.
Justin Talbot:
While R uses C code for vector operations, it only does one vector operation at a time. This leads to lots of memory traffic as R must read in each vector operand and write out each vector result. Riposte tackles this problem by performing “fusion”, an optimization where multiple vector operations are performed simultaneously in the machine’s registers. This eliminates a lot of memory traffic and can speed up long vector code. In some of our benchmarks I’ve seen 10-50x performance improvements for long vectors (100K to hundreds of millions of elements).
As an example, if you wanted to find which elements of a vector corresponded to males over 40, you could write something like age > 40 & gender == "male". GNU R would create separate intermediate vectors for age > 40 and gender == "male" before calculating the result. An alternative that uses less memory is to loop over the elements (in C++, where loops are fast), calculating the result of the whole expression for each element in turn.
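To make that concrete, here are both evaluation strategies written out in plain R. This is purely illustrative: a fusing engine performs the element-wise version in compiled code, where the loop is fast.

age    <- sample(18:80, 1e6, replace = TRUE)
gender <- sample(c("male", "female"), 1e6, replace = TRUE)

# Step-by-step evaluation, as GNU R does it: two intermediate
# logical vectors are allocated before the final result
tmp1   <- age > 40
tmp2   <- gender == "male"
result <- tmp1 & tmp2

# What fusion does conceptually: one pass, no intermediates
# (slow as an interpreted R loop, fast when generated as C++)
result2 <- logical(length(age))
for (i in seq_along(age)) {
  result2[i] <- age[i] > 40 && gender[i] == "male"
}
stopifnot(identical(result, result2))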
Of course, R’s flexibility causes problems.
Justin Talbot:
R allows you to replace the default operators within almost any scope, by e.g.:
`+` <- function(x,y) x*y
Because of this, if you have a simple for loop:
j <- 0
for(i in 1:10000) {
  j <- j+1
}
an interpreter can’t simply do an addition. Instead, it has to check each time through the loop that `+` is still really plus and not something else. If j were some long vector, this wouldn’t matter: the time to do the check would be dominated by the cost of actually doing the addition. But if j is short, as it is here, then the time to check dominates.
Riposte solves this problem by assuming that a small set of very common operators cannot be replaced in very limited circumstances. Basically, if you write:
j+1 and j is a basic type (not an S3/S4 class), Riposte assumes that + must be normal addition. If you really want + to use your redefined version you can either (1) make j an object or (2) call it using names, e.g.:
`+`(x=j, y=1).
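Justin’s point is easy to demonstrate in a GNU R session. In this contrived example of my own, redefining `+` silently changes what the loop computes, which is exactly why a naive interpreter has to guard every use:

`+` <- function(x, y) x * y   # redefine plus as multiplication
j <- 1
for(i in 1:5) {
  j <- j + 2                  # actually j * 2 on every iteration
}
j                             # 32, not the 11 you'd get from addition
rm(`+`)                       # remove the override to restore addition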
Some of the other engines have similar concepts to Riposte’s fusion. Renjin calls it a “vector pipeline” and FastR calls it a “view”. All refer to executing multiple steps as a group.
Jan Vitek:
We have two implementations of a vector. One is the usual one, and one is an interface. This is a lazy proxy to the computation. So if I do binary plus on two vectors I could allocate an array and do the plus on every element of the inputs as GNU R would do it, but in our implementation you could also create a view. This is an object that just remembers that there are these two vectors that it needs to add. It only starts adding the numbers when asked for them.
The last sentence is particularly important. Somewhat non-intuitively, executing code straight away doesn’t always result in the best performance – sometimes it can be faster to wait. This “deferred execution” becomes very important when trying to parallelise code, as we’ll see in a moment.
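You can mimic the idea of a view in plain R with a closure. This is my own sketch, not FastR’s actual implementation (which is Java):

# A toy "view": remember the operands, compute only on demand
lazy_add <- function(x, y) {
  function(i) x[i] + y[i]   # element i is computed when requested
}

a <- runif(10)
b <- runif(10)
v <- lazy_add(a, b)   # no additions have happened yet
v(3)                  # only now is element 3 computed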
Parallelism
Much of GNU R is single threaded by default, so with four cores now standard on desktop machines, exploiting multicore capabilities is an obvious target for performance boosting. In fairness, since R 2.14.0 the parallel package has allowed multicore computation with a bit of code rewriting, and there are many packages for various types of parallelism. The holy grail of parallelism, however, is for it to happen automatically (“implicit parallelism”) without code rewrites.
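For contrast, here is the sort of explicit restructuring that the parallel package requires today; the engines discussed below aim to make this unnecessary. (A minimal sketch.)

library(parallel)

# The programmer must restructure the code around *apply-style
# calls themselves; nothing is parallelised automatically
cl <- makeCluster(4)                 # start four worker processes
res <- parLapply(cl, 1:8, function(i) sum(rnorm(1e6)))
stopCluster(cl)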
The FastR project has been pushing parallel computing in several forms.
Tomas Kalibera:
We are pushing parallelism a lot. There are different parallelisms that you may look at: multi-threading, GPUs and possibly distributed settings.
One of the good things about an implementation from scratch is that you can build a mostly thread-safe interpreter. At least we know the points where we aren’t thread-safe. That means that we can evaluate the views in parallel. We know that they are simple operations on vectors. They don’t have side-effects. We can just put them in a thread-pool and almost magically you get parallelism.
In the long run the goal is to integrate parallelisation into the compiler. To use the compiler to analyse the code and understand where parallelism makes sense and to automatically synthesise parallelism in the programs. It should work!
Renjin’s vector pipeline provides implicit parallelism.
Alex Bertram:
With our vector pipeline, we have support for implicit parallelisation. Take the sum of two large vectors: it will recognise that the calculation has two independent sums and calculate those on different threads. You don’t have to deal with that; it just parallelises it automatically.
Renjin will try and defer work as long as possible so that it has an opportunity to parallelise more things. You can get quite far before you have to do any work, and then Renjin will say “I’ve got all these different calculations that I have to do, a lot of them are independent. Let me distribute them across the available cores”.
pqR uses a related concept called helper threads. While individual calculations (“tasks” in pqR terminology) are single threaded, several calculations can be done in parallel, one on each thread.
Riposte has several parallelisation tricks up its sleeve. As well as vector fusion and implicit multicore behaviour, it uses SSE and AVX processor-level instructions. (AFAIK, these were originally designed for parallel processing in media streaming, so this is quite a novel use.) Surprisingly, fusion is much more important than multicore parallelism.
Justin Talbot:
For big vector operations on which Riposte performs fusion, it will attempt to leverage the SSE or AVX instructions to get a 2-4x performance improvement on a single core. It will also distribute the work across multiple threads. This leads to nearly linear performance increases for large enough vectors. Note the orders of magnitude though: fusion itself can lead to 10-50x performance improvement. Distributing the work across 4-6 cores only leads to at most a 4-6x improvement. So there’s more to be gained by doing fusion well.
Bytecode compilation
Interpreted languages are, in general, slower than compiled languages. This means that there is scope for improving R’s performance by compiling the code before it is executed. Recent versions of GNU R include the compiler package, which converts functions to bytecode that can be run more quickly than purely interpreted code. Riposte and Renjin both try to improve on this.
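You can try GNU R’s bytecode compiler yourself: cmpfun compiles a function, and disassemble shows what it produced. A quick sketch:

library(compiler)

f <- function(n) {
  s <- 0
  for (i in 1:n) s <- s + i
  s
}

fc <- cmpfun(f)        # compile f to bytecode
disassemble(fc)        # inspect the generated bytecode
system.time(f(1e6))    # interpreted
system.time(fc(1e6))   # bytecode: noticeably faster on loop-heavy code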
Justin Talbot:
When I started working on R it only used the older AST-walking interpreter originally developed in the early 1990s. Interpreter technology has advanced considerably since then. Riposte uses a much more modern register-based bytecode interpreter. This made it about 5x faster than R on scalar code (e.g. your typical for loop). Luke Tierney’s new stack-based bytecode interpreter speeds up R somewhat, but I think Riposte’s interpreter is still 2-3x faster in general.
Compiling R code is, in general, a very hard problem since R code is very flexible. If you include calls to the eval function, or start redefining operators, or messing about with environments, then optimising code is almost impossible.
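Here’s a tiny, contrived example of why eval defeats static analysis: the assignment is hidden inside a string, so no ahead-of-time compiler can see it without actually running the code.

f <- function() {
  x <- 1
  eval(parse(text = "x <- 2"))  # invisible until the code is run
  x
}
f()   # returns 2, not 1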
A similar situation occurs with JavaScript, and the browser makers (who have considerably more resources than the R community) have put in a lot of effort trying to make JavaScript run faster. There, the solution seems to be “compile the bits that you can, optimise other bits where possible, and just run the hard stuff”.
Renjin uses a similar approach for R code.
Alex Bertram:
We have different interpretation modes. The baseline is this Abstract Syntax Tree (AST) interpreter that’s modelled almost exactly like GNU R. If you want to figure out how an eval statement works or system calls, we just went in and looked at how it was implemented in GNU R. The aim of that is that we have at least one AST interpreter that is capable of handling everything.
If we see that you’re dealing with large data or there’s a function that is getting called a lot, or you’ve hit a for loop of one to one million, then it breaks out of that baseline interpreter and tries to use more advanced optimisations. That will only work for a subset of the R language, so the worst case is that we get performance equal to GNU R.
If you’ve got a for loop and you’re calling eval, then Renjin will throw up its hands and say ‘alright, if that’s what you want to do then you’re not going to get any performance speedup’. But if it sees that in the for loop, all you’re doing are some computations, then it will break out [of the AST interpreter] and compile down to machine code.
Data that isn’t in memory
By default, all R variables are stored in RAM. This contrasts with, for example, SAS, where variables are stored on disk. There is a tradeoff in this – RAM can be accessed more quickly than disk, but it limits the size of the data that you can store. Some R packages allow disk-based variables, most notably the ff and bigmemory packages. The MonetDB.R package allows variables to be stored in a MonetDB database.
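As a quick taste of disk-backed data, here is a sketch using the ff package (assuming ff’s standard interface): the vector lives in a memory-mapped file rather than in RAM.

library(ff)

x <- ff(vmode = "double", length = 1e6)  # lives in a file on disk
x[1:5] <- 1:5      # chunks are paged in and out as needed
sum(x[1:10])       # 15; only the requested slice is read into RAM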
In an ideal world, you should be allowed to store your variables anywhere: in RAM, on disk, or in a database, and R would be able to work with them in the same way. GNU R isn’t well designed for this. As an example, numeric vectors, in the underlying C code, are always assumed to be a pointer to a double array. To easily allow different data types, a level of abstraction is needed.
TERR leverages the object oriented nature of C++ to achieve this. In TERR, each data type is an abstract C++ class that can have one or more representations.
I don’t know any details of Tibco’s implementation, but it seems that from an abstract class of NumericVector, you could have concrete classes of NumericVectorInRam and NumericVectorOnDisk to account for different data sources (not to mention specialisation for vectors with or without names and other attributes).
Data from disk and database sources isn’t yet supported in TERR, but it is planned for the future.
Renjin has also made some progress towards non-RAM variables.
Alex Bertram:
We have been able to introduce a layer of abstraction between the algorithms that operate on data and where that data is stored. In GNU R there is an expectation that all the data is in arrays, and that’s very difficult to overcome.
In Renjin, it’s very basic but we have an interface, so if you want the sum of something, say a vector or other object, we hand some algorithm a pointer to an interface that says ‘if you want the tenth element, then call this method; if you want the length, then call that, etc.’. So it could be in memory, backed by an array, but it could also be on disk. It could be in a database or in a buffer.
That’s the basis of everything else: there’s a layer of abstraction between your algorithms and your storage.
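In R terms, the abstraction that Alex describes might look something like the following. This is my own sketch in plain R; Renjin’s real implementation is a Java interface.

# A storage-agnostic "vector source": anything that can report its
# length and return element i can be used by the same algorithm
sum_source <- function(src) {
  total <- 0
  for (i in seq_len(src$length())) {
    total <- total + src$element(i)
  }
  total
}

# Backing store 1: an ordinary in-memory vector
v <- c(1, 2, 3)
mem_src <- list(element = function(i) v[i],
                length  = function() length(v))

# Backing store 2: doubles in a binary file on disk
path <- tempfile()
writeBin(as.double(4:6), path)
disk_src <- list(
  element = function(i) {
    con <- file(path, "rb")
    on.exit(close(con))
    seek(con, where = (i - 1) * 8)   # 8 bytes per double
    readBin(con, "double", n = 1)
  },
  length = function() file.info(path)$size / 8
)

sum_source(mem_src)    # 6
sum_source(disk_src)   # 15: same algorithm, different storage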
Code specialisation
CXXR and FastR both make some performance gains by providing specialised routines for particular cases.
Tomas Kalibera:
Imagine that you are indexing a vector. Every user of R knows that there are very rich semantics. You can have logical or numeric indices, you can have NAs, you can have attributes on the base or on the index. There are many different semantics. One possibility is to implement the code using a single subroutine that covers all the general cases, but this is slow.
We implement many different selection methods. One is still a general one that can do everything, but we also have specialised ones that make some assumptions. [For example], in one we assume that the base numeric vector is a double vector and the index is a scalar integer or a scalar double within the bounds of the vector. Of course you have to have some guards – some checks that these assumptions are valid – but once you check these guards the code is much simpler, and the just-in-time Java compiler is then able to optimise the code much better because it is so much smaller than the general case. We get big speedups from this.
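Here’s a rough sketch of the guard idea, written in R purely for illustration (FastR’s specialisations are, of course, generated Java, not R):

# A guarded fast path for x[i] (illustrative only)
subset_specialised <- function(x, i) {
  # Guards: check the cheap assumptions before taking the fast path
  if (is.double(x) && is.null(attributes(x)) &&
      length(i) == 1L && !is.na(i) && i >= 1 && i <= length(x)) {
    return(x[[i]])   # fast path: simple scalar access
  }
  x[i]               # general path: full R indexing semantics
}

subset_specialised(c(1.5, 2.5), 2)        # fast path: 2.5
subset_specialised(c(a = 1, b = 2), "b")  # general path: named lookup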
Other features
TERR provides a C++ API to allow easy integration with C++ code. (Using Rcpp, it isn’t hard, but alternatives are always welcome.)
FastR is experimenting with several prototype Oracle technologies. Truffle allows code to self-modify as it runs, providing on-the-fly optimisation, and Delight autogenerates GPU code, providing 20x speedups on some problems.
CXXR has been conspicuously absent from this discussion so far. Its single user-level new feature is provenance tracking, based on the S AUDIT feature.
Andrew Runnalls:
R itself stores (in the .Rhistory file) a record of all R commands that have been issued, so to that extent you can “recreate your current workspace”. The S AUDIT feature, however, enabled you to identify the specific sequence of commands that contributed to the current value of a particular S object: it was this selectivity that I particularly valued.
Rather than adding new features, a substantial amount of development effort has gone into refactoring R to make it more maintainable and better documented. Are these features? Hell yes! Rather than being user-level features, they are developer-level features. In this respect, I think CXXR might be useful for the developers of other engines: rather than trying to replicate features from GNU R’s source code, it might be easier to replicate them from CXXR.
This post is far from exhaustive on the new features in the engines. But as you can tell from what I’ve included, there really are a ton of technical innovations that have been made, with more on the way.
I put it to Radford Neal that it was difficult to tell which ones were the important ones.
Radford Neal:
What is most important is very hard to say, since it depends so much on the user’s program and environment. Helper threads are of no use on a single-core system! And speeding up matrix multiplication doesn’t help programs that don’t use it. I’d encourage people who’ve tried pqR to let me know what programs have and have not been sped up a lot – so far I haven’t gotten much of that. This is most helpful, of course, if you can send me your program and data so that I can try it myself, and maybe change pqR to make your program faster.
Let me know in the comments which features seem most exciting to you, or if there is anything that you want to see in R that isn’t possible.
In the next part, I’ll have a go at figuring out which engines you might want to use for which purpose.