I thumbed through the book at the Joint Statistical Meetings and decided to buy it along with Bayesian Core. And I'm glad I did. Albert clearly positioned the book as a companion to an introductory, and perhaps even intermediate, course in Bayesian statistics. I've found it very useful for learning about Bayesian computation and for deepening my understanding of Bayesian statistics.
I'll start with the bad, because there are few bad things to report.
- I thought the functions laplace (which computes the normal approximation to a posterior via the Laplace method) and the linear regression functions were a bit black-boxish. The text described these functions generally, but not nearly in the detail it devoted to other important functions such as rwmetrop and indepmetrop (which run random-walk and independence Metropolis chains). Since laplace is such a useful function, a little more detail would have been welcome. To his credit, Albert did show the function in action in many different situations, including the computation of Bayes factors.
- The choice of starting points for laplace seemed black-boxish as well. They were clearly chosen to be close to the mode (one of the function's jobs is to find the mode of the log posterior), but Albert doesn't really explain how to choose "intelligent" starting points. I recommend a grid search with the R function expand.grid (and patience).
- I wish the chapter on MCMC included a problem on Gibbs sampling, though there is a chapter on Gibbs sampling near the end of the book.
- I wish it included a little more detail about accounting for the Jacobian when parameters are transformed. (Most parameters are transformed to the real line.)
- I wish the book included more about adaptive rejection sampling.
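On the starting-points complaint above: the idea behind laplace, and behind seeding it with a grid search, is simple enough to sketch. Below is a minimal Python illustration (the book's code is in R, and none of these names are LearnBayes functions) for a beta-shaped posterior: a crude grid search stands in for expand.grid, Newton's method refines the starting value to the posterior mode, and the curvature there gives the variance of the normal approximation.

```python
import math

def log_post(p, s=7, f=3):
    # log posterior for a proportion: binomial likelihood, uniform prior
    return s * math.log(p) + f * math.log(1 - p)

# 1. crude grid search for a starting value (the expand.grid idea)
grid = [i / 100 for i in range(1, 100)]
start = max(grid, key=log_post)

def d1(p, s=7, f=3):   # first derivative of the log posterior
    return s / p - f / (1 - p)

def d2(p, s=7, f=3):   # second derivative (curvature)
    return -s / p**2 - f / (1 - p)**2

# 2. Newton's method refines the starting value to the mode
mode = start
for _ in range(50):
    mode -= d1(mode) / d2(mode)

# 3. normal approximation: mean = mode, variance = -1 / curvature
var = -1 / d2(mode)
print(mode, var)   # mode is s/(s+f) = 0.7
```

With a decent starting value the Newton step converges almost immediately; start it far from the mode and it can wander off, which is exactly why the choice of starting points deserves more attention than the book gives it.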
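And on the Jacobian complaint: the adjustment itself is a one-liner, which is why a little more detail in the book would have gone a long way. A toy Python example (my own, not from the book): for a positive rate parameter transformed by eta = log(lambda), the log-Jacobian log|d lambda / d eta| = eta is added to the log posterior, and it genuinely moves the mode.

```python
import math

def log_post_lam(lam):
    # toy log posterior for a rate lam > 0: Gamma(2, 1), up to a constant
    return math.log(lam) - lam

def log_post_eta(eta):
    # same posterior after the transform eta = log(lam); the log-Jacobian
    # log|d lam / d eta| = eta must be added, or the density is wrong
    return log_post_lam(math.exp(eta)) + eta

# the original posterior peaks at lam = 1, but the transformed one peaks
# at eta = log(2), not log(1) = 0, precisely because of the Jacobian term
grid = [i / 1000 for i in range(-2000, 2000)]
eta_mode = max(grid, key=log_post_eta)
print(eta_mode)   # close to log(2), about 0.693
```

Forgetting that one added term is probably the most common transformation bug, so an explicit treatment would have been a nice addition.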
Now the good, in no particular order:
- Albert includes detailed examples from a wide variety of fields. The examples vary in difficulty from the run-of-the-mill (such as estimating a single proportion) to the sophisticated (such as Weibull survival regression with censored data). Regression and generalized linear models are covered.
- The exercises really deepen understanding of the material. You need a computer with the R statistical package at hand to get the most out of this book, so take the time to work through the examples. Because I did, I now have a much better understanding of the Metropolis algorithms and of the importance of choosing the right algorithm (and the right tuning parameters) for an MCMC run. Do it incorrectly and the results are compromised by high (sometimes very high) autocorrelation and poor mixing.
- The book is accompanied by a package, LearnBayes, that contains a lot of good datasets and some very useful functions for learning and for general use. The laplace, metropolis, and gibbs functions (the last actually implements Metropolis-within-Gibbs sampling) can all be used outside the context of the book.
- The book covers several different sampling algorithms, including importance sampling, rejection sampling (not adaptive), and sampling importance resampling (SIR). Along with this material are examples and exercises that show the importance of a good proposal density and what can happen with a bad one.
- A lot of the exercises extend exercises in previous chapters, so that the active reader gets to compare different approaches to the same problem.
- The book refers heavily to other books on Bayesian statistics, such as Berry and Stangl's Bayesian Biostatistics, Carlin and Louis's Bayes and Empirical Bayes Methods for Data Analysis, and Gelman et al.'s Bayesian Data Analysis. In doing so, it increases the instructive value of the other Bayesian books on the market.
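To make the point about tuning concrete: a generic random-walk Metropolis sampler (a hand-rolled Python sketch of the algorithm, not LearnBayes's rwmetrop) run on a standard normal target shows how badly a tiny proposal scale inflates autocorrelation compared with a sensibly sized one.

```python
import math
import random

def rw_metropolis(logpost, start, scale, n, seed=1):
    # generic random-walk Metropolis sampler (a sketch, not rwmetrop)
    rng = random.Random(seed)
    x, lp, chain = start, logpost(start), []
    for _ in range(n):
        cand = x + rng.gauss(0, scale)          # random-walk proposal
        lp_cand = logpost(cand)
        # Metropolis accept/reject on the log scale
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

def lag1_autocorr(chain):
    # empirical lag-1 autocorrelation, a crude mixing diagnostic
    m = sum(chain) / len(chain)
    num = sum((a - m) * (b - m) for a, b in zip(chain, chain[1:]))
    den = sum((a - m) ** 2 for a in chain)
    return num / den

target = lambda x: -0.5 * x * x                 # standard normal log density
ac_small = lag1_autocorr(rw_metropolis(target, 0.0, 0.05, 5000))  # tiny steps
ac_tuned = lag1_autocorr(rw_metropolis(target, 0.0, 2.5, 5000))   # larger steps
print(ac_small, ac_tuned)
```

The tiny-scale chain accepts nearly every move but crawls through the posterior, so its autocorrelation sits near 1; the larger scale rejects more often yet mixes far better, which is exactly the trade-off the book's exercises drive home.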
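The proposal-density point can likewise be demonstrated in a few lines. Here is a small Python sketch of importance sampling (my own code, not the book's): with a proposal that covers the target, the effective sample size stays close to the number of draws; with a narrow, misplaced proposal, a handful of draws hog nearly all the weight.

```python
import math
import random

def importance_weights(logtarget, sample_proposal, log_proposal, n, seed=2):
    # draw from the proposal and compute normalized importance weights
    rng = random.Random(seed)
    xs = [sample_proposal(rng) for _ in range(n)]
    lw = [logtarget(x) - log_proposal(x) for x in xs]
    m = max(lw)                                  # stabilize before exp
    w = [math.exp(v - m) for v in lw]
    tot = sum(w)
    return xs, [v / tot for v in w]

def ess(weights):
    # effective sample size: near n for even weights, near 1 when degenerate
    return 1.0 / sum(w * w for w in weights)

logtarget = lambda x: -0.5 * x * x               # N(0, 1) target, up to a constant

# good proposal: N(0, 2) comfortably covers the target
_, w = importance_weights(logtarget,
                          lambda r: r.gauss(0, 2),
                          lambda x: -0.5 * (x / 2) ** 2, 2000)
good_ess = ess(w)

# bad proposal: N(3, 0.3) misses most of the target's mass
_, w = importance_weights(logtarget,
                          lambda r: r.gauss(3, 0.3),
                          lambda x: -0.5 * ((x - 3) / 0.3) ** 2, 2000)
bad_ess = ess(w)
print(good_ess, bad_ess)
```

The same weight-degeneracy problem carries over to SIR, since resampling from near-zero weights just replicates a few points, which is what the book's bad-proposal exercises are designed to expose.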