Hyper-g priors


Earlier this month, Daniel Sabanés Bové and Leo Held posted a paper about g-priors on arXiv. While I glanced at it for a few minutes, I did not get the chance to take a proper look at it until last Sunday. The g-prior was first introduced by the late Arnold Zellner for (standard) linear models, but it can be extended to generalised linear models (formalised by the late John Nelder) at little cost. In Bayesian Core, Jean-Michel Marin and I do centre the prior modelling in both linear and generalised linear models around g-priors, using the naïve extension for generalised linear models,

$$\beta \sim \mathcal{N}\left(0,\ g\,\sigma^2\,(\mathbf{X}^\text{T}\mathbf{X})^{-1}\right)$$

as in the linear case. Indeed, the reasonable alternative would be to include the true information matrix, but since it depends on the parameter $\beta$ outside the normal case, this is not truly an alternative. Bové and Held propose a slightly different version,

$$\beta \sim \mathcal{N}\left(0,\ g\,\sigma^2\,c\,(\mathbf{X}^\text{T}\mathbf{W}\mathbf{X})^{-1}\right)$$

where $\mathbf{W}$ is a diagonal weight matrix and c is a family-dependent scale factor evaluated at the mode 0. As in Liang et al. (2008, JASA) and most of the current literature, they also separate the intercept $\beta_0$ from the other regression coefficients. They also burn their “improperness joker” by choosing a flat prior on $\beta_0$, which means they need to use a proper prior on g, again as in Liang et al. (2008, JASA), for the corresponding Bayesian model comparison to be valid. In Bayesian Core, we do not separate $\beta_0$ from the other regression coefficients and hence are left with one degree of freedom that we spend in choosing an improper prior on g instead. (Hence I do not get the remark of Bové and Held that our choice “prohibits Bayes factor comparisons with the null model”. As argued in Bayesian Core, the factor g being a hyperparameter shared by all models, we can use the same improper prior on g in all models and hence use standard Bayes factors.) In order to achieve closed-form expressions, the authors use Cui and George’s (2008) prior

$$\pi(g) \propto (1+g)^{-(1+a)}\exp\{-b/(1+g)\}$$

which requires the two hyper-hyper-parameters a and b to be specified.
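As an aside, here is a minimal R sketch of this prior specification, with illustrative values for a, b, σ² and the design matrix (none of them taken from the paper). It draws g by exploiting the fact that, under a density of the above incomplete inverse-gamma form, u = 1/(1+g) follows a Gamma(a, b) distribution truncated to (0, 1), and then draws β from the naïve g-prior.

```r
## Minimal sketch of the prior specification (illustrative values only).
set.seed(1)

n <- 50; p <- 3
X <- cbind(1, matrix(rnorm(n * (p - 1)), n))  # toy design matrix (placeholder)
sigma2 <- 1                                   # placeholder scale
a <- 2; b <- 1                                # hyper-hyper-parameters (placeholders)

## draw g: u = 1/(1+g) ~ Gamma(a, b) truncated to (0, 1), sampled by inverse cdf
r_hyper_g <- function(m, a, b) {
  u <- qgamma(runif(m) * pgamma(1, a, b), a, b)
  1 / u - 1
}
g <- r_hyper_g(1, a, b)

## draw beta ~ N(0, g * sigma2 * (X'X)^{-1}), the naive g-prior
Sigma <- g * sigma2 * solve(crossprod(X))
beta <- drop(t(chol(Sigma)) %*% rnorm(ncol(X)))
```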

The second part of the paper considers computational issues. It compares the ILA solution of Rue, Martino and Chopin (2009, Series B) with an MCMC solution based on an independence proposal on g resulting from linear interpolations (?). The marginal likelihoods are approximated via Chib and Jeliazkov's (2001, JASA) method for the MCMC part. Unsurprisingly, ILA does much better, even though the MCMC algorithm achieves a 97% acceptance rate.
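For illustration only (this is not the authors' implementation), an independence Metropolis–Hastings update for g with a fixed proposal density, as alluded to above, could look like the following sketch; log_post and the exponential proposal are generic placeholders.

```r
## Rough sketch of an independence Metropolis-Hastings sampler for g;
## log_post() stands for the (unnormalised) log posterior of g, to be supplied.
independence_mh <- function(log_post, n_iter, g0,
                            r_prop = function() rexp(1),                 # proposal draw (placeholder)
                            d_prop = function(g) dexp(g, log = TRUE)) {  # proposal log density
  draws <- numeric(n_iter)
  cur <- g0
  cur_lp <- log_post(cur)
  acc <- 0
  for (t in seq_len(n_iter)) {
    cand <- r_prop()
    cand_lp <- log_post(cand)
    ## independence MH ratio: posterior ratio times reversed proposal ratio
    log_alpha <- (cand_lp - cur_lp) + (d_prop(cur) - d_prop(cand))
    if (log(runif(1)) < log_alpha) {
      cur <- cand; cur_lp <- cand_lp; acc <- acc + 1
    }
    draws[t] <- cur
  }
  list(g = draws, acceptance = acc / n_iter)
}
```

A high acceptance rate, such as the 97% mentioned above, simply indicates that the independence proposal sits very close to the target.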

The paper is very well-written and quite informative about the existing literature. It also uses the Pima Indian dataset. (The authors even dug out a 1991 paper of mine I had completely forgotten!) I am actually thinking of using the review in our revision of Bayesian Core, even though I think we should stick to our choice of including $\beta_0$ within the set of parameters…


Filed under: Books, R, Statistics Tagged: Arnold Zellner, Bayesian Core, Bayesian model choice, g-prior, GLMs, ILA, MCMC
