Handbook of Markov chain Monte Carlo


At JSM, John Kimmel gave me a copy of the Handbook of Markov chain Monte Carlo, as I had not (yet?!) received it. This handbook is edited by Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng, all first-class jedis of the MCMC galaxy. I had not had a chance to look at the book until now, as Jean-Michel Marin took it home for me from Miami, but, as he remarked when giving it back to me last week, the outcome truly is excellent! Of course, the authors and editors being friends of mine, the reader may worry about the objectivity of this assessment; however, the quality of the contents is clearly there, and the book appears to be a worthy successor to the tremendous Markov chain Monte Carlo in Practice by Wally Gilks, Sylvia Richardson, and David Spiegelhalter. (I can attest to the involvement of the editors from the many rounds of reviews we exchanged about our MCMC history chapter!) The style of the chapters is rather homogeneous, and there are a few R code snippets here and there. So, while I will still stick to our Monte Carlo Statistical Methods book for teaching MCMC to my graduate students next month, I think the book can well be used at a teaching level, as well as a reference on state-of-the-art MCMC technology.

I will not go into detail over all the chapters. The first half of the book covers MCMC methodology, with a beautiful and lively first chapter by Charlie Geyer that manages to cover the essentials of MCMC in a very coherent way, while also explaining very clearly the four fundamental advances contained in Peter Green's 1995 reversible jump Biometrika paper. (I figure reversible jump would sound like base-jumping to someone who had never heard of MCMC, in the literal sense of jumping off a cliff with five seconds to reach the ground!) Terrific chapter! While it would also have been terrific to read the expected chapter on reversible jump by Peter Green and David Hastie, Yanan Fan and Scott Sisson survey reversible jump in proper detail in Chapter 3, especially convergence assessment for RJMCMC. The next chapter, about optimal proposal distributions and adaptive MCMC, is by Jeff Rosenthal, with his usual pedagogical qualities (including great FAQ sections!). The chapter on MCMC using Hamiltonian dynamics also comes from Toronto, written by Radford Neal; it is a huge chapter, full of details and ideas about Hamiltonian MCMC, that should prove very profitable to all readers. (And a good prequel to Girolami and Calderhead's discussion paper in JRSS B.) The two following chapters are about convergence assessment, by Andrew Gelman and Kenneth Shirley, and by James Flegal and Galin Jones. (As stressed in Introducing Monte Carlo Methods with R, I particularly like the idea of Flegal and Jones to validate a bootstrap approach to confidence evaluation! A toy R sketch of a random-walk sampler with a Monte Carlo error estimate follows this paragraph.) The next two chapters cover perfect sampling, by Radu Craiu and Xiao-Li Meng, and by Mark Huber. (Perfect stuff, even though I got disillusioned over the years about the range of this fascinating use of MCMC outputs. Mark's spatial processes are certainly the most convincing domain of application.) Jim Hobert wrote a chapter on the data augmentation algorithm, full of fine details about the convergence of this special case of Gibbs sampling, which illustrates very well the current thoughts on convergence assessment. Charlie Geyer has a short chapter on importance sampling, simulated tempering, and umbrella sampling, with an application to the approximation of Bayes factors, while Scott Sisson and Yanan Fan wrote the chapter on ABC. (Two interesting sentences from this chapter are that "model comparison through likelihood-free posteriors with a fixed vector of summary statistics will ultimately compare distortions of those models which are overly simplified wrt the true data-generating process. This remains true even when using sufficient statistics and for ε→0." (p.329) and "While [using likelihood-free inference for model selection purposes] is a natural extension of inference for individual models, the analysis in Section 12.4.4 urges caution and suggests that further research is needed into the effect of the likelihood-free approximation (…) on the marginal likelihoods upon which model comparison is based" (p.333), as our PNAS paper sheds some light on both questions. A second toy sketch below plays with ε and the choice of summary statistic.)

The second half of the book is more topical, with applications of MCMC in genetics, physics, ecology, MRI data, and astronomy; however, it also contains methodological directions, like the chapter written by Paul Fearnhead on MCMC for state space models.
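To make the methodological first half a bit more concrete, here is a minimal R sketch combining two themes mentioned above: a random-walk Metropolis sampler (the proposal scale echoing the optimal-scaling results Rosenthal surveys) and a batch-means estimate of the Monte Carlo standard error, in the spirit of the Flegal–Jones chapter on output analysis. The standard normal target, the Gaussian proposal, and all tunings are my own toy choices, not taken from the handbook.

# Random-walk Metropolis for a standard normal target (toy illustration)
rw_metropolis <- function(n_iter, sigma) {
  x <- numeric(n_iter)
  for (i in 2:n_iter) {
    prop <- x[i - 1] + rnorm(1, sd = sigma)
    # accept with probability min(1, pi(prop)/pi(x)), computed on the log scale
    if (log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(x[i - 1], log = TRUE))
      x[i] <- prop
    else
      x[i] <- x[i - 1]
  }
  x
}

# Batch-means estimate of the Monte Carlo standard error of the chain mean
batch_means_se <- function(chain, n_batch = 30) {
  batch_size <- floor(length(chain) / n_batch)
  means <- sapply(seq_len(n_batch), function(b)
    mean(chain[((b - 1) * batch_size + 1):(b * batch_size)]))
  sd(means) / sqrt(n_batch)
}

set.seed(1)
chain <- rw_metropolis(1e5, sigma = 2.38)  # 2.38 echoes optimal scaling in dimension one
c(estimate = mean(chain), mcse = batch_means_se(chain))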
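The quoted caveats from the Sisson–Fan chapter are easier to appreciate on a toy example. The following ABC rejection sketch is again my own illustration, not from the book: the normal model, the N(0,10) prior, the sample-mean summary, and the tolerance ε are all arbitrary choices, made only to show how shrinking ε tightens the likelihood-free approximation at the cost of the acceptance rate.

# Toy ABC rejection sampler: data assumed N(theta, 1), prior theta ~ N(0, 10)
set.seed(42)
y_obs <- rnorm(50, mean = 2)   # stand-in for the observed data
s_obs <- mean(y_obs)           # chosen summary statistic

abc_reject <- function(n_sim, eps) {
  theta <- rnorm(n_sim, mean = 0, sd = sqrt(10))  # draws from the prior
  s_sim <- vapply(theta, function(t) mean(rnorm(50, mean = t)), numeric(1))
  theta[abs(s_sim - s_obs) < eps]                 # keep near-matching summaries
}

draws <- abc_reject(1e5, eps = 0.1)
c(posterior_mean = mean(draws), acceptance_rate = length(draws) / 1e5)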


Filed under: Books, R, Statistics, University life Tagged: ABC, adaptive MCMC methods, base-jumping, Biometrika, book review, edited book, Gaussian state spaces, history of statistics, Markov chains, MCMC, Monte Carlo Statistical Methods, perfect sampling, R, reversible jump, simulation
