Bayesian modeling using WinBUGS

Yes, yet another Bayesian textbook: Ioannis Ntzoufras’ Bayesian modeling using WinBUGS was published in 2009 and it got an honourable mention at the 2009 PROSE Award. (Nice acronym for a book award! All the mathematics books awarded that year were actually statistics books.) Bayesian modeling using WinBUGS is rather similar to the more recent Bayesian ideas and data analysis that I reviewed last week, and hence I am afraid this review will draw a comparison between the two books. (Which is a bit unfair to Bayesian modeling using WinBUGS since I reviewed Bayesian ideas and data analysis on its own! However, I will presumably write my CHANCE column as a joint review.)

“As history has proved, the main reason why Bayesian theory was unable to establish a foothold as a well accepted quantitative approach for data analysis was the intractability involved in the calculation of the posterior distribution.” Chap. 1, p.1

The book launches into a very quick introduction to Bayesian analysis, since, by page 15, we are “done” with linear regression and conjugate priors. This is somewhat softened by the inclusion at the end of the chapter of a few examples, including one on the Greek football team in Euro 2004, but nothing comparable with Christensen et al.’s initial chapter of motivating examples. Chapter 2 on MCMC methods follows the same pattern: a quick and dense introduction in about ten pages, followed by 40 pages of illuminating examples, worked out in full detail. CODA is described in an Appendix. Compared with Bayesian ideas and data analysis, Bayesian modeling using WinBUGS spends time introducing WinBUGS: Chapter 3 acts like a 20-page user manual, while Chapter 4 corresponds to the WinBUGS example manual. Chapter 5 gets back to a more statistical topic, the processing of regression models (including Zellner’s g-prior), up to ANOVA. Chapter 6 extends the previous chapter to categorical variables and the ANCOVA model, as well as to the 2006-2007 English Premier League. Chapter 7 moves to the standard generalised linear models, with an extension in Chapter 8 to count data, zero-inflated models, and survival data. Chapter 9 covers hierarchical models, with mixed models, longitudinal data, and the water polo World Cup 2000.
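For readers who have never seen the BUGS language, here is a minimal sketch (mine, not taken from the book) of the kind of model specification the WinBUGS chapters walk through: a simple linear regression written in BUGS syntax and driven from R through the R2WinBUGS package. The data, priors, and tuning values are illustrative assumptions, and the actual bugs() call is left commented out since it requires a local WinBUGS installation.

## toy linear regression in BUGS syntax (illustrative, not from the book)
model_string <- "
model {
  for (i in 1:n) {
    y[i]  ~ dnorm(mu[i], tau)
    mu[i] <- beta0 + beta1 * x[i]
  }
  # vague priors; BUGS parameterises the normal by its precision
  beta0 ~ dnorm(0, 1.0E-4)
  beta1 ~ dnorm(0, 1.0E-4)
  tau   ~ dgamma(0.001, 0.001)
  sigma <- 1 / sqrt(tau)
}"
writeLines(model_string, "lm.bug")

# simulated data and the (commented-out) call to WinBUGS
set.seed(1)
n <- 50; x <- runif(n); y <- 1 + 2 * x + rnorm(n, sd = 0.5)
# library(R2WinBUGS)
# fit <- bugs(data = list(n = n, x = x, y = y),
#             inits = function() list(beta0 = 0, beta1 = 0, tau = 1),
#             parameters.to.save = c("beta0", "beta1", "sigma"),
#             model.file = "lm.bug", n.chains = 3, n.iter = 5000)
# print(fit)  # posterior summaries, plus the DIC used throughout the book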

“Although this [the harmonic mean] estimator is simple, it is quite unstable and sensitive to small likelihood values and hence is not recommended.” Chap. 11, p. 393

While most chapters rely on DIC for model comparison, the last two chapters of Bayesian modeling using WinBUGS open on other model comparison approaches like the posterior predictive p-value, residual values, and cross-validation (with, once again, the dreaded harmonic mean estimator! and, once again, Geisser and Eddy’s conditional predictive ordinates), leaving the introduction of Bayes factors to Chapter 11, with an immediate criticism through the Jeffreys-Lindley-Bartlett paradox, maybe because “Bayes factors cannot be generally calculated within WinBUGS unless sophisticated approaches are used” (p.390). Surprisingly, and as clearly stated in the above quote, the computational section warns about the poor performance of the harmonic mean estimator without making the connection with the earlier proposal of the very same estimator (p.375). After reviewing the most standard approaches to approximating the marginal likelihood, Ioannis Ntzoufras falls back on a Laplace approximation to the likelihood function. This chapter also covers variable selection by Gibbs sampling, stochastic search, the Carlin and Chib (1995) method, and reversible jump MCMC, the latter being expedited in half a page! It concludes with the (non-Bayesian) information criteria, AIC and BIC.
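To make the warning about the harmonic mean estimator concrete, here is a small R sketch (mine, not the book’s) on a toy normal model with known variance and a conjugate normal prior, where the exact marginal likelihood is available in closed form and can be compared with the harmonic mean estimate built from posterior draws; all numerical values are illustrative assumptions.

## harmonic mean estimator of the marginal likelihood on a toy normal model
set.seed(1)
n <- 30; sigma <- 1                 # y_i ~ N(theta, sigma^2), sigma known
mu0 <- 0; tau0 <- 10                # prior: theta ~ N(mu0, tau0^2)
y <- rnorm(n, mean = 2, sd = sigma)

# conjugate posterior: theta | y ~ N(mun, taun2)
taun2 <- 1 / (n / sigma^2 + 1 / tau0^2)
mun   <- taun2 * (sum(y) / sigma^2 + mu0 / tau0^2)

# exact log marginal via m(y) = f(y|theta) pi(theta) / pi(theta|y), at theta = mun
loglik <- function(theta) sum(dnorm(y, theta, sigma, log = TRUE))
exact  <- loglik(mun) + dnorm(mun, mu0, tau0, log = TRUE) -
          dnorm(mun, mun, sqrt(taun2), log = TRUE)

# harmonic mean estimate from M posterior draws, computed on the log scale
M   <- 1e4
ll  <- sapply(rnorm(M, mun, sqrt(taun2)), loglik)
lse <- function(v) max(v) + log(sum(exp(v - max(v))))  # log-sum-exp
harmonic <- log(M) - lse(-ll)                          # log of [mean(1/lik)]^(-1)
c(exact = exact, harmonic = harmonic)

Re-running the last block shows the harmonic mean value drifting from one batch of draws to the next, since it is dominated by the rare draws with small likelihood values, which is exactly the instability the quote above points to.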

“Bayesian statistics suddenly became fashionable, opening new highways for statistical research.” Chap. 1, p.2

On the material (!) side, while the presentation is overall very nice, I dislike the fonts (which are imposed by J. Wiley, as I remember from our mixture book) and the fact that the text within a page seems to have slid down to the bottom: I mean, each page of text ends up (or down) very close to the physical bottom of the page. Nothing important, obviously, but a slight impression of cramming… (See, e.g., pages 3 or 38, where a subscript would have been sticking out of the book!) A further nitpicking remark is that the examples start as indented and then lose their indentation after a paragraph or two, which does not help in identifying examples as a whole within the text. I like the idea of highlighting R/WinBUGS code with a grey background (as we did in Bayesian Core); however, the rendering of the two-column opposition of algorithm and R code is unfortunately difficult to read. Some graphs are given as screen copies, which reduces their readability for no proper reason. Also, the price of the book ($130, $102 on amazon) is a wee bit higher than similar books in the area (Bayesian ideas and data analysis is only $62, Bayesian data analysis is only $62, the Bayesian choice a mere $37…, but this seems to be a publisher’s policy, witness Bolstad’s Introduction to Bayesian Statistics at the same $102.)

“All predictive diagnostics presented above have the disadvantage of double usage of the data.” Chap. 10, p.375

In conclusion, and as reflected in their respective titles, Bayesian modeling using WinBUGS feels more technical than Bayesian ideas and data analysis, even though their coverage is, in the end, very similar. Not only do they both insist on methodology much more than on theory, but they also similarly emphasize the applied side through numerous examples based on real data. The latter is slightly more philosophical and for this reason (as well as for typographical comfort) more to my own personal subjective taste. I figure the choice of one versus the other as a textbook will very much depend on the intended audience. More mature statistics students may favour Bayesian ideas and data analysis, while more applied students could benefit more from Bayesian modeling using WinBUGS.

(Note: this book is not to be confused with the very recent Bayesian population modeling using WinBUGS, by Marc Kéry and Michael Schaub, which is about Bayesian analysis for ecology and which I am looking forward to reading…)


