Review of Jim Albert’s Bayesian Computation with R

When I first read Andrew Gelman’s quick off-the-cuff review of the book Bayesian Computation with R, I thought it was a bit harsh. So did Gelman.

I thumbed through the book at the Joint Statistical Meetings and decided to buy it along with Bayesian Core. And I’m glad I did. Albert clearly positioned the book as a companion to an introductory, and perhaps even intermediate, course in Bayesian statistics. I’ve found the book very useful for learning about Bayesian computation and for deepening my understanding of Bayesian statistics.

The Bad

I include the bad first because there are few bad things.

  • I thought the functions laplace (which computes the normal approximation to a posterior via Laplace’s method) and the linear regression functions were a bit black-boxish. The text describes these functions in general terms, but not nearly in the detail it gives to other important functions such as rwmetrop and indepmetrop (which run random-walk and independence Metropolis chains). Since laplace is such a useful function, a little more detail would have been better. However, Albert does show the function in action in many different situations, including the computation of Bayes factors.
  • The choice of starting points for laplace seemed black-boxish as well. They were clearly chosen to be close to the mode (one of laplace’s jobs is to find the mode of the log posterior), but Albert doesn’t really go into how to choose “intelligent” starting points. I recommend a grid search using the R function expand.grid (and patience); see the sketch after this list.
  • I wish the chapter on MCMC included a problem on Gibbs sampling, though there is a chapter on Gibbs sampling at the end of the book.
  • I wish it included a little more detail about accounting for the Jacobian when parameters are transformed. (Most parameters are transformed to the real line.)
  • I wish the book included more about adaptive rejection sampling.
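
To make the grid-search suggestion concrete, here is a minimal sketch. The data, log posterior, and grid ranges are my own hypothetical example, not one from the book; the laplace call follows the LearnBayes interface of a log-posterior function taking a parameter vector and the data. The example also touches the Jacobian point above: a flat prior on lambda = log(sigma) corresponds, via the Jacobian, to the usual 1/sigma prior on sigma.

```r
library(LearnBayes)  # provides laplace()

# Hypothetical example: normal data with unknown mean mu and log
# standard deviation lambda = log(sigma). A flat prior on lambda is,
# via the Jacobian, the usual 1/sigma prior on sigma.
y <- c(9.8, 10.4, 10.1, 9.6, 10.3)
logpost <- function(theta, data) {
  mu <- theta[1]
  lambda <- theta[2]
  sum(dnorm(data, mean = mu, sd = exp(lambda), log = TRUE))
}

# Coarse grid search for an "intelligent" starting point: evaluate the
# log posterior over the grid and keep the best cell.
grid <- expand.grid(mu = seq(8, 12, by = 0.25),
                    lambda = seq(-2, 1, by = 0.25))
lp <- apply(grid, 1, logpost, data = y)
start <- as.numeric(grid[which.max(lp), ])

# Hand the grid winner to laplace(), which climbs to the posterior mode
# and returns the normal approximation (mode and variance matrix).
fit <- laplace(logpost, start, y)
fit$mode
fit$var
```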
The Good

In no particular order:
  • Albert includes detailed examples from a wide variety of fields. The examples vary in difficulty from the run-of-the-mill (such as estimating a single proportion) to the sophisticated (such as Weibull survival regression with censored data). Regression and generalized linear models are covered.
  • The exercises really deepen the understanding of the material. You really need a computer with the R statistical package to get the most out of this book. Take the time to work through the examples. Because I did, I much better understand the Metropolis algorithms and the importance of choosing the right algorithm (and the right tuning parameters) when running an MCMC. Do it incorrectly and the results are compromised by high (sometimes very high) autocorrelation and poor mixing; the first sketch after this list shows a quick check.
  • The book is accompanied by a package, LearnBayes, that contains a lot of good datasets and some very useful functions for learning and general use. The laplace, Metropolis (rwmetrop and indepmetrop), and gibbs (which actually implements Metropolis-within-Gibbs sampling) functions can all be used outside the context of the book.
  • The book covers several different sampling algorithms, including importance sampling, rejection sampling (not adaptive), and sampling importance resampling (SIR). Along with this material are examples and exercises that show the importance of good proposal densities and what can happen with bad ones; see the second sketch after this list.
  • A lot of the exercises extend exercises in previous chapters, so that the active reader gets to compare different approaches to the same problem.
  • The book refers heavily to other books on Bayesian statistics, such as Berry and Stangl’s Bayesian Biostatistics, Carlin and Louis’s Bayes and Empirical Bayes Methods for Data Analysis, and Gelman et al.’s Bayesian Data Analysis. In doing so, this book increases the instructive value of the other Bayesian books on the market.
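
To make the mixing point concrete, here is a minimal sketch continuing the hypothetical logpost and laplace fit from the earlier example. The proposal scale is my own guess to be tuned, not a value from the book; rwmetrop returns the draws in $par and the acceptance rate in $accept.

```r
# Random-walk Metropolis using the normal approximation from laplace()
# as the proposal; scale = 2 is an assumed starting value to tune.
proposal <- list(var = fit$var, scale = 2)
mcmc <- rwmetrop(logpost, proposal, fit$mode, 10000, y)

mcmc$accept                      # acceptance rate of the chain
acf(mcmc$par[, 1])               # high autocorrelation signals poor mixing
plot(mcmc$par[, 1], type = "l")  # trace plot of the mu draws
```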
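And here is a similar sketch of sampling importance resampling with LearnBayes’s sir function, again on the hypothetical posterior; the multivariate t proposal settings (df = 4) are illustrative assumptions, not recommendations from the book.

```r
# SIR: draw from a multivariate t proposal centered at the laplace()
# fit, weight by posterior/proposal, and resample. A proposal that is
# too tight relative to the posterior concentrates the weights on a
# few draws and degrades the sample.
tpar <- list(m = fit$mode, var = fit$var, df = 4)
draws <- sir(logpost, tpar, 5000, y)
hist(draws[, 1], main = "SIR draws of mu")
```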
Overall, this book is a great companion to any effort to learn about Bayesian statistics (estimation and inference) and Bayesian computation. Like any book, its rewards are commensurate with the effort. I highly recommend working the exercises and going beyond their scope (such as investigating diagnostics even when not explicitly directed to do so). Read and work through this book in conjunction with other heavy-hitter books such as Bayes and Empirical Bayes Methods or Bayesian Data Analysis.
