Monte Carlo Statistical Methods third edition

[This article was first published on Xi'an's Og » R, and kindly contributed to R-bloggers.]

Last week, George Casella and I worked around the clock on starting the third edition of Monte Carlo Statistical Methods by detailing the changes to make and designing the new table of contents. The new edition will not see a revolution in the presentation of the material but rather a more mature perspective on what matters most in statistical simulation:

Chapter 1 (Introduction) will include real datasets with more complex motivating models (e.g., Bayesian lasso).

Chapter 2 (Random generation) will update the section on uniform generators and include a section on ABC. (I will also include a note on the irrelevance of hardware random generators.)
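To fix ideas on what ABC does, here is a minimal rejection-ABC sketch in R — our illustration, not the book's code, with a toy normal-mean model, prior, and tolerance all chosen for the example: draw parameters from the prior, simulate data, and keep the draws whose simulated summary statistic falls within a tolerance of the observed one.

```r
# Minimal ABC rejection sketch (toy example): infer a normal mean theta
# with prior N(0, 10), data y ~ N(theta, 1), summary = sample mean.
set.seed(42)
y <- rnorm(50, mean = 2, sd = 1)            # "observed" data, known sd = 1
eps <- 0.05                                  # acceptance tolerance
N <- 1e5                                     # number of prior draws
theta <- rnorm(N, mean = 0, sd = sqrt(10))   # draws from the prior
# simulate each dataset's summary directly: mean of 50 N(theta, 1) draws
sim_means <- rnorm(N, mean = theta, sd = 1 / sqrt(50))
accepted <- theta[abs(sim_means - mean(y)) < eps]
mean(accepted)   # approximate posterior mean, close to mean(y)
```

As eps shrinks (and N grows), the accepted draws approach the exact posterior; the cost is the acceptance rate, which is the usual ABC trade-off.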

Chapter 3 (Monte Carlo integration) will include a reference to INLA, the integrated nested Laplace approximation of Rue, Martino and Chopin, as well as to our recent vanilla Rao-Blackwellisation paper.

Chapter 4 (Control of convergence) will remove the multivariate normal approximation of the beginning, replace it with the Brownian bound solution already presented in Introducing Monte Carlo Methods with R, and include connections with the Read Paper of Kong et al. and the multiple mixture paper of Owen and Zhou.
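For readers unfamiliar with convergence monitoring, the basic object is the running mean of the simulations with an error band around it; the sketch below uses the standard CLT band (not the Brownian bound itself, which widens the band to account for monitoring the whole path), on a toy estimation of E[X²] = 1 for X ~ N(0, 1):

```r
# Running-mean convergence monitor (CLT band, toy example)
set.seed(2)
h <- rnorm(1e4)^2                    # h(X) = X^2, so E[h(X)] = 1
n <- seq_along(h)
run_mean <- cumsum(h) / n            # running Monte Carlo estimate
run_se <- sd(h) / sqrt(n)            # fixed-variance approximation of the s.e.
# run_mean +/- 2 * run_se gives a (pointwise) 95% band around the target
inside <- abs(run_mean - 1) < 2 * run_se
mean(inside[-(1:100)])               # fraction of the later path in the band
```

The pointwise band is anti-conservative when scanned over the whole path, which is precisely what the Brownian-motion correction addresses.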

Chapter 5 (Stochastic optimisation) will include some examples from Introducing Monte Carlo Methods with R and add recent results on EM standard errors by Cappé and Moulines.
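As a reminder of the EM mechanics that the chapter builds on, here is a short sketch — ours, with toy data and a deliberately simple model — fitting the two means of a balanced mixture of N(mu1, 1) and N(mu2, 1) by alternating expectation and maximisation steps:

```r
# EM sketch for a two-component normal mixture with known weights (0.5 each)
# and known unit variances; only the two means are estimated.
set.seed(5)
y <- c(rnorm(200, -2), rnorm(200, 2))   # simulated mixture data
mu <- c(-1, 1)                          # starting values
for (iter in 1:50) {
  # E-step: posterior probability that each y_i comes from component 2
  d1 <- dnorm(y, mu[1]); d2 <- dnorm(y, mu[2])
  w <- d2 / (d1 + d2)
  # M-step: weighted means of the allocations
  mu <- c(sum((1 - w) * y) / sum(1 - w), sum(w * y) / sum(w))
}
mu   # close to the true means c(-2, 2)
```

Each iteration increases the observed likelihood, which is the monotonicity property that makes EM a reliable (if local) optimiser.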

Chapter 6 (Markov chains) will now be split into two chapters. The first chapter will deal with the basics of Markov chains, independent of MCMC algorithms. The second Markov chain chapter will be devoted to the theory specific to MCMC use, including regeneration, Peskun ordering, batch means, spectral analysis…

Chapter 7 (Metropolis-Hastings algorithms) will include a very basic example at the beginning and cover algorithms beyond the random walk. Adaptive MCMC will also be covered in this chapter.
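In the spirit of the very basic example the chapter will open with, here is a bare random walk Metropolis-Hastings sketch in R (our illustration, with an arbitrary N(0, 1) target and proposal scale):

```r
# Random walk Metropolis-Hastings, target N(0, 1)
set.seed(1)
n_iter <- 1e4
x <- numeric(n_iter)           # the chain
for (t in 2:n_iter) {
  prop <- x[t - 1] + rnorm(1, sd = 1.5)   # random walk proposal
  # log acceptance ratio (symmetric proposal, so only target densities)
  log_ratio <- dnorm(prop, log = TRUE) - dnorm(x[t - 1], log = TRUE)
  x[t] <- if (log(runif(1)) < log_ratio) prop else x[t - 1]
}
c(mean = mean(x), sd = sd(x))  # should be close to 0 and 1
```

The proposal scale of 1.5 is a reasonable but arbitrary choice here; tuning it automatically is exactly what the adaptive MCMC material addresses.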

Chapter 8 (Slice sampling) and Chapter 9 (2-stage Gibbs sampling) will be reunited, as in the first edition of the book (!). The new chapter will also compare mixture models with product partition models. And hopefully do a better job of covering Liu, Wong and Kong (1994).
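The 2-stage Gibbs sampler can be sketched on the textbook bivariate normal target (our toy choice of correlation), alternating the two Gaussian full conditionals:

```r
# Two-stage Gibbs sampler for a bivariate normal with correlation rho:
# X | Y = y ~ N(rho * y, 1 - rho^2), and symmetrically for Y | X.
set.seed(7)
rho <- 0.8
n_iter <- 5e3
xy <- matrix(0, n_iter, 2)
for (t in 2:n_iter) {
  xy[t, 1] <- rnorm(1, rho * xy[t - 1, 2], sqrt(1 - rho^2))
  xy[t, 2] <- rnorm(1, rho * xy[t, 1],     sqrt(1 - rho^2))
}
cor(xy[, 1], xy[, 2])   # close to rho = 0.8
```

The higher rho gets, the slower the alternation mixes, which is the interleaving phenomenon studied in Liu, Wong and Kong (1994).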

Chapter 10 (general Gibbs sampling) will cover mixed linear models and hierarchical models in more detail. It will include entries on parameter expansion, Dirichlet processes, JAGS, and the Bayesian lasso.

Chapter 11 (Reversible jump algorithms) will face an in-depth change to become a chapter on computational techniques for model choice. This means covering intra-model as well as inter-model computational tools like bridge, path, and umbrella sampling, nested sampling, harmonic means, Chib's representation, &tc. We will considerably reduce the entry on reversible jump and cover other stochastic search methods, like the shotgun stochastic search of Hans, Dobra and West.

Chapter 12 (Diagnosing convergence) will focus more precisely on the methods that survived the test of time, removing some parts and illustrating remaining methods with coda output. Batch means and effective sample size will be part of the diagnostics.
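To illustrate the batch means diagnostic in base R (our sketch, on a toy AR(1) chain standing in for MCMC output), one cuts the chain into batches, uses the batch means to estimate the Monte Carlo standard error, and deduces an effective sample size:

```r
# Batch means and effective sample size on a toy correlated chain
set.seed(3)
n <- 1e4
chain <- as.numeric(arima.sim(list(ar = 0.5), n))  # AR(1), IACT = 3
B <- 100                                   # number of batches of size n / B
bm <- colMeans(matrix(chain, nrow = n / B))        # B batch means
mcse <- sd(bm) / sqrt(B)                   # batch-means s.e. of the chain mean
ess <- var(chain) / mcse^2                 # implied effective sample size
ess                                        # around n / 3 for this chain
```

The coda package returns the same kind of quantity via effectiveSize(), which is the output the revised chapter will rely on.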

Chapter 13 (Perfect sampling) will disappear into a section of Chapter 6 (Markov chains), as we fear perfect sampling has remained more of an elegant theoretical construct than a genuinely implementable technique, despite the fascination it inspired in the community, us included.

Chapter 14 (Iterated and Sequential Importance Sampling) will become a chapter on Iterated and Sequential Monte Carlo, with an extensive rewriting in order to include some of the most recent advances in particle systems, including the 2009 Read Paper of Andrieu, Doucet and Holenstein.
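As a hint of what a particle system looks like in practice, here is a minimal bootstrap particle filter sketch — ours, on a toy linear Gaussian state-space model whose parameters are arbitrary — alternating propagation, weighting by the likelihood, and multinomial resampling:

```r
# Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0,1), y_t = x_t + N(0,1)
set.seed(11)
T_len <- 100; N <- 1000
x_true <- numeric(T_len)
for (t in 2:T_len) x_true[t] <- 0.9 * x_true[t - 1] + rnorm(1)
y <- x_true + rnorm(T_len)                 # simulated observations
part <- rnorm(N)                           # initial particle cloud
filt_mean <- numeric(T_len)
for (t in 1:T_len) {
  if (t > 1) part <- 0.9 * part + rnorm(N)          # propagate particles
  w <- dnorm(y[t], mean = part)                      # weight by likelihood
  part <- sample(part, N, replace = TRUE, prob = w)  # multinomial resampling
  filt_mean[t] <- mean(part)                         # filtering mean estimate
}
mean(abs(filt_mean - x_true))              # well below the observation noise
```

Embedding such a filter inside an MCMC algorithm is the idea behind the particle MCMC framework of the Andrieu, Doucet and Holenstein Read Paper.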

Overall, the goal is to make the book more focussed on well-established techniques, reinforcing the theoretical backing whenever possible, as well as to cover recent developments of importance in the field. Given the availability of the companion Introducing Monte Carlo Methods with R, we will not cover the practicals of R implementation, even though we will make all the R code available once the revision is completed. We hope to be done by next summer, even though the simultaneous handling of three other books will certainly be a liability for me…


Filed under: Books, R, Statistics, University life Tagged: Bayesian lasso, hierarchical Bayesian modelling, Introducing Monte Carlo Methods with R, Markov chains, MCMC, mixture estimation, Monte Carlo Statistical Methods, nested sampling, perfect sampling, Peskun ordering, R, Rao-Blackwellisation, regeneration, slice sampling
