
Last night I was working on a difficult optimization problem, using the wonderful DEoptim package for R. Unfortunately, the optimization was taking a long time, so I thought I’d speed it up using a foreach loop, which resulted in the following function:
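The post’s original function is not reproduced in this copy. A minimal sketch of such a wrapper, assuming the DEoptim and foreach packages and splitting the bounds along the first parameter only (the name foreachDEoptim is illustrative, and the post’s “parDEoptim” class and custom combine are omitted for brevity):

```r
library(DEoptim)
library(foreach)

# Illustrative sketch (not the post's original code): split the bounds
# for the first parameter into n segments, run DEoptim on each segment
# via foreach, and keep the run with the lowest objective value.
foreachDEoptim <- function(fn, lower, upper, n = 2,
                           control = DEoptim.control()) {
  breaks <- seq(lower[1], upper[1], length.out = n + 1)
  runs <- foreach(i = seq_len(n)) %dopar% {
    seg.lower <- lower
    seg.upper <- upper
    seg.lower[1] <- breaks[i]
    seg.upper[1] <- breaks[i + 1]
    DEoptim(fn, lower = seg.lower, upper = seg.upper, control = control)
  }
  # Return the optimization results for the best segment
  best <- which.min(vapply(runs, function(r) r$optim$bestval, numeric(1)))
  runs[[best]]
}
```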

Here’s what’s going on: I divide the bounds for each parameter into n segments, and use a foreach loop to run DEoptim on each segment, collect the results of the loop, and then return the optimization results for the segment with the lowest value of the objective function. Additionally, I defined a “parDEoptim” class to make it easier to combine the results during the foreach loop. All of the work is still being done by the DEoptim algorithm. All I’ve done is split up the problem into several chunks.

Here is an example, straight out of the DEoptim documentation:
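The example code is missing from this copy; the running example in the DEoptim help page is the Rosenbrock banana function, which matches the c(-10,-10) to c(10,10) bounds and the c(1,1) minimum discussed below (the control settings here are illustrative):

```r
library(DEoptim)

# Rosenbrock "banana" function; its global minimum is 0 at c(1, 1)
Rosenbrock <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

# Serial run over the box from c(-10, -10) to c(10, 10)
DEoptim(Rosenbrock, lower = c(-10, -10), upper = c(10, 10),
        control = DEoptim.control(NP = 80, itermax = 400, trace = FALSE))
```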

In theory, on a 20-core machine, this should run a bit faster than the serial example. Note that you may need to set itermax for the parallel run higher than (itermax for the serial run)/(number of segments), to make sure the algorithm can converge within each segment. Also note that, in this example, there are 20 segments on the interval from c(-10,-10) to c(10,10), which means that two of the segments share a boundary at c(1,1), the global minimum of the function. The DEoptim algorithm has no trouble finding a solution at the boundary of the parameter space, which is why it’s so easy to parallelize.
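One prerequisite left implicit above: %dopar% only runs in parallel once a backend has been registered; otherwise foreach falls back to sequential execution with a warning. A minimal sketch assuming the doParallel package:

```r
library(doParallel)

# Register a parallel backend before calling any %dopar% loop;
# matching the worker count to the number of segments uses all 20 cores.
registerDoParallel(cores = 20)
```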