In an excellent four-part series of posts in March, Dave Giles introduces Markov Chain Monte Carlo (MCMC) and Gibbs samplers. In these posts he gives a cogent explanation of the reasoning and mechanics involved in this branch of econometrics/statistics, as well as clear simulated examples in R.
If you have not checked it out yet, now is definitely a good time.
Find Dave’s posts:
Post 1: Introduction
Post 2: Showing that MCMC “Actually Works”
Post 3: Shows an additional example as well as how to extract marginal posterior distributions
Post 4: Shows how simple it is to use R to implement MCMC
In order to experiment with the topic of MCMC, I have made some modifications to Dave's code in R. He makes no claim that his code is written for efficiency.
gibs, defined below, is the same as his code except that it is now wrapped in a function. gibs2 modifies the code so that, as far as I could manage, I work with vectors rather than manipulating items one at a time. I used Noam Ross's excellent post to inform my understanding of how to improve processing speeds in R.
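To make the loop structure concrete, here is a minimal Python sketch of an item-by-item Gibbs sampler. It uses a bivariate normal with correlation rho as the target, which is an illustrative stand-in rather than Dave's actual model or R code; the function name, parameters, and burn-in handling are all my own assumptions.

```python
import math
import random

def gibbs(nrep=10_000, rho=0.5, burn_in=1_000, seed=42):
    """Loop-style Gibbs sampler for a bivariate normal with correlation rho.

    Each full conditional is univariate normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    Draws are made one at a time inside the loop, mirroring the
    item-by-item style of the original R code.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0                  # arbitrary starting values
    xs, ys = [], []
    for i in range(nrep):
        x = rng.gauss(rho * y, sd)   # draw x given the current y
        y = rng.gauss(rho * x, sd)   # draw y given the new x
        if i >= burn_in:             # discard burn-in draws
            xs.append(x)
            ys.append(y)
    return xs, ys
```

After burn-in, the sample correlation of the retained (x, y) pairs should settle near rho, which is a quick sanity check that the chain is exploring the right target.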
By vectorizing the random draws, gibs2 runs two to three times faster than gibs. The full code can be found on GitHub:
system.time(gibs())
#    user  system elapsed
#    0.97    0.06    1.17
system.time(gibs(nrep=10^6))
#    user  system elapsed
#    9.31    0.02    9.64
system.time(gibs2())
#    user  system elapsed
#    0.35    0.00    0.36
system.time(gibs2(nrep=10^6))
#    user  system elapsed
#    3.66    0.01    3.80
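The kind of change involved can be sketched in Python (again a hypothetical analogue of the idea, not the R code itself). Because each conditional draw here is rho times the previous value plus Gaussian noise, the standard-normal innovations can all be generated in one batch before the loop, leaving only the cheap recursion inside it; this mirrors replacing per-iteration rnorm() calls with one vectorized rnorm(n) call in R.

```python
import math
import random

def gibbs2(nrep=10_000, rho=0.5, burn_in=1_000, seed=42):
    """Gibbs sampler with the random draws generated up front.

    Each conditional is N(rho * previous, 1 - rho^2), so a draw can be
    written as rho * previous + sd * z with z ~ N(0, 1). Generating all
    the z's in one batch moves the random-number generation out of the
    sampling loop, which is the vectorization trick described above.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    # One batch of standard-normal innovations for the x and y updates.
    zx = [rng.gauss(0.0, 1.0) for _ in range(nrep)]
    zy = [rng.gauss(0.0, 1.0) for _ in range(nrep)]
    x, y = 0.0, 0.0
    xs, ys = [], []
    for i in range(nrep):
        x = rho * y + sd * zx[i]   # draw x given the current y
        y = rho * x + sd * zy[i]   # draw y given the new x
        if i >= burn_in:
            xs.append(x)
            ys.append(y)
    return xs, ys
```

In pure Python the payoff from this restructuring is modest; in R the win is much larger because a single rnorm(n) call runs in compiled C rather than in the interpreted loop.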
As an exercise I also coded the same gibs functions in Julia. This code can be found on GitHub as well.
@elapsed yy = gibs()
# 0.063151467
@elapsed gibs(10^6)
# 0.479542057
@elapsed yy = gibs2()
# 0.010729382
@elapsed yy = gibs2(10^6)
# 0.065821774
The first thing to notice is that even the initial form of gibs runs considerably faster in Julia (10-15x). The gains from improving the code are also larger in Julia, with gibs2 running much faster than the R version (30-55x).