Bayesian ideas and data analysis


Here is [yet!] another Bayesian textbook that appeared recently. I read it in the past few days and, despite my obvious biases and prejudices, I liked it very much! It has a lot in common (at least in spirit) with our Bayesian Core, which may explain why I feel so benevolent towards Bayesian ideas and data analysis. Just like ours, the book by Ron Christensen, Wes Johnson, Adam Branscum, and Timothy Hanson focuses on explaining Bayesian ideas through (real) examples, and it covers a lot of regression models, all the way to non-parametrics. It contains a good proportion of WinBUGS and R code. It intermingles methodology and computational chapters in the first part, before moving to the serious business of analysing more and more complex regression models. Exercises appear throughout the text rather than at the end of the chapters. As their book is longer (over 500 pages), the authors spend more time analysing various datasets in each chapter and, more importantly, provide a rather unique treatment of prior assessment and construction, especially in the regression chapters. The author index is rather original in that it links authors with more than one entry to the topics they are connected with (Ron Christensen winning the game with the highest number of entries).

“Although the prior may work well in the sense that it is easily overwhelmed by the data, one should never forget that in itself it is saying very stupid things, namely that θ is likely to be either huge or essentially 0.” (Chap. 4, p. 71)

The book is pleasant to read, with humorous comments here and there. (I could have done without the dedication to Wes’ dog, though! Although I loved Ron’s dedication to the S.I. and to Kaikoura… And I missed the line on Monte Crisco, even though I got the one on “my niece found a nice niche in Nice”.) The presentation is dense but uses enough code and graphs to make the going smooth. The sections on testing present a wide range of options, which is not the way I would do it, but that is fine nonetheless. The authors even present Neyman-Pearson testing to highlight the contrast with the Bayesian approach. The going gets a bit rough (in terms of measure theory, see page 56) for point null hypotheses, but the authors manage to clarify the idea through examples. Model checking is proposed via Bayesian p-values

\mathbb{P}(m(X)\le m(x_\text{obs}))

[which has the drawback of not being invariant under reparameterisation], predictive p-values, Bayes factors, BIC [not a Bayesian criterion!], DIC, and the authors’ favourite, the pseudo-marginal likelihood (page 81)

\hat m(x) = \prod_{i=1}^n f_i(x_i|x_{-i})

where the components of the product are the cross-validation predictive densities. This pseudo-marginal likelihood allows for improper priors and, like Aitkin’s integrated likelihood, it is not a Bayesian procedure in that the data are used several times in its construction… In addition, the authors recommend the worst version of Gelfand and Dey’s (Series B, 1994) estimate to approximate these cross-validation predictive densities, which indeed amounts to using the dreadful harmonic mean estimate!!! They apparently missed Radford Neal’s blog post on the issue. (Chapter 4 is actually the central chapter of the book in my opinion and I could make many more comments on how I would have presented things: exchangeability, sufficiency [missing the point on model checking!], improper priors [analysed à la Jeffreys as if they were true priors], a very artificial example of inconsistent Bayes estimators [already discussed there], identifiability [imposed rather than ignored: “Our problem with Bayesian analysis is that it is easy to overlook identifiability issues”, p. 96].)
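To make the contrast concrete, here is a minimal R sketch of my own (a toy conjugate normal-mean model, not an example from the book) where the leave-one-out predictive densities, the exact marginal likelihood, and the harmonic mean estimator can all be computed and compared directly; the instability of the last one is precisely what Radford Neal’s post warns against.

## Toy sketch, not from the book: pseudo-marginal likelihood (product of
## leave-one-out predictive densities) versus the exact marginal likelihood
## and the harmonic mean estimator, in the conjugate model x_i ~ N(theta, 1),
## theta ~ N(0, tau2), where everything is available in closed form.
set.seed(1)
n    <- 20
tau2 <- 10
x    <- rnorm(n, mean = 2, sd = 1)

## cross-validation predictive densities f(x_i | x_{-i})
cpo <- sapply(seq_len(n), function(i) {
  pv <- 1 / ((n - 1) + 1 / tau2)        # posterior variance of theta given x_{-i}
  pm <- pv * sum(x[-i])                 # posterior mean of theta given x_{-i}
  dnorm(x[i], mean = pm, sd = sqrt(1 + pv))
})
log_pseudo <- sum(log(cpo))             # log pseudo-marginal likelihood

## exact log marginal likelihood: x ~ N_n(0, I_n + tau2 J_n)
Sigma <- diag(n) + tau2
log_m <- -0.5 * (n * log(2 * pi) + c(determinant(Sigma)$modulus) +
                 drop(x %*% solve(Sigma, x)))

## harmonic mean estimator based on posterior draws: notoriously unstable
pv <- 1 / (n + 1 / tau2)
pm <- pv * sum(x)
th <- rnorm(1e4, pm, sqrt(pv))
ll <- -0.5 * (n * log(2 * pi) + colSums(outer(x, th, "-")^2))
log_harmonic <- max(ll) - log(mean(exp(-(ll - max(ll)))))

c(pseudo = log_pseudo, exact = log_m, harmonic = log_harmonic)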

“Even though the likelihood function has a place of honor in both frequentist and Bayesian statistics, it is a rather artificial construction. If you accept that parameters are artificial constructs, then likelihoods must also be artificial constructs.” (Chap. 4, p. 93)

Once again, I very much like the second part on regression models. Even though I missed Zellner’s g-prior. (Some of the graphs are plain ugly: Fig. 5.4 and Fig. 15.7, for instance.) I do prefer their coverage of MCMC (Chap. 6) to Bill Bolstad‘s, especially when the authors argue that they “don’t believe that thinning is worthwhile” (p.146). (Gibbs sampling is however missing the positivity constraint of Hammersley and Clifford [and Besag‘s].) And, again, having whole sections on prior construction is a very neat thing.
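On the thinning point, here is a quick illustration of my own (an AR(1) series standing in for autocorrelated MCMC output, and the coda package assumed for the effective sample size computation) of why discarding draws rarely helps: the thinned chain gives no better an estimate, while its effective sample size is smaller.

## Toy sketch, again mine rather than the book's: thinning mostly throws
## information away. An AR(1) series stands in for MCMC output.
library(coda)

set.seed(42)
N     <- 1e5
chain <- as.numeric(arima.sim(list(ar = 0.9), n = N))

full    <- mcmc(chain)                      # keep every draw
thinned <- mcmc(chain[seq(1, N, by = 10)])  # keep one draw in ten

## both give essentially the same estimate of the mean...
c(mean_full = mean(full), mean_thinned = mean(thinned))
## ...but the full chain retains the larger effective sample size
c(ess_full = effectiveSize(full), ess_thinned = effectiveSize(thinned))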

Last but not least, the picture on the back cover is one of Pierre Simon Laplace himself! First, Laplace did much more [than Bayes] for the birth of Bayesian statistics. Second, it spares us yet another reproduction of the likely apocryphal picture of Thomas Bayes.


