
**Y**et another paper on ABC model choice was posted on arXiv a few days ago, just prior to the ABC in London meeting that ended in the pub above (most conveniently located next to my B&B!). It is written by Olivier Francois and Guillaume Laval and the approach relies on DIC for running model selection. Although I disagree with the reasons given for abandoning Bayes factors in favour of this more rudimentary indicator, I consider the paper (and the trend) an interesting and positive contribution to the idea, already stressed by Oliver Ratmann and coauthors, that model selection with ABC should be more exploratory than decisional…

**H**ere are a few specific comments on the paper that may sound overly negative. However, these are more about the motivation of the switch from Bayes factor to DIC than about the idea itself. Again, adding new exploratory tools to the toolbox is (for me) the way to proceed.

**A** first criticism is the distinction made therein (page 2) between rejection algorithms on the one hand and MCMC and SMC algorithms on the other hand. Indeed, the authors give the impression that the regularisation mechanisms of Beaumont et al. (2002) and followers only apply to the first type of algorithms. And then again in the description of model choice tools, the estimation of the posterior probabilities sounds different when using sequential algorithms… To me, there is no reason for such a distinction. Whatever the type of simulation method one uses, the outcome can always be exploited in the same way, aiming at unbiased or at least converging estimators of quantities of interest, including posterior probabilities.
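To make the point concrete, here is a minimal R sketch (the two toy models, the summary statistic, and the tolerance are all invented for illustration) of how accepted simulations, whatever sampler produced them, can be exploited in the same way to estimate posterior model probabilities:

```r
# Minimal sketch: ABC rejection estimate of posterior model probabilities.
# Toy setup: uniform prior over two models with a common location parameter.
set.seed(1)
s_obs <- 0.5                 # observed summary statistic (toy value)
n     <- 1e5                 # total number of simulations
eps   <- 0.05                # tolerance on the summary statistic

m     <- sample(1:2, n, replace = TRUE)   # model index, uniform prior
theta <- rnorm(n)                         # common prior on the location
# one simulated summary statistic per draw under the selected model
s <- ifelse(m == 1,
            rnorm(n, mean = theta, sd = 1),       # M1: Gaussian noise
            theta + rexp(n) - rexp(n))            # M2: Laplace-type noise

keep <- abs(s - s_obs) < eps
# converging estimator of the posterior model probabilities:
# the proportion of accepted simulations coming from each model
p_hat <- table(factor(m[keep], levels = 1:2)) / sum(keep)
p_hat
```

The same ratio-of-accepted-draws estimator applies whether the `(m, theta, s)` triples come from plain rejection, MCMC, or SMC output, which is the point made above.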

**T**he second criticism is that the authors seem to lay the blame for the poor performance of ABC model selection on the lack of regression adjustment in the approximation of posterior probabilities (page 3). This ignores the logistic estimates of Beaumont (2008), used for instance in DIYABC (and in the population genetic experiment described in the slides I used in Zurich). The authors mention “a serious concern” and that “model choice based on those probabilities does not apply to the models in which we eventually make inference”, the second point being rather obscure. It may mean there is a confusion between the adjustment brought by the regression (which modifies the ABC parameter sample, hence produces a different “model”) and the model choice procedure (which does not depend on this modification). But this is somewhat minor compared with the discrepancy due to the use of summary statistics stressed in our paper.
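As an illustration of the kind of regression estimate at stake, here is a minimal R sketch in the spirit of Beaumont (2008): regress the model indicator on the summary statistic among accepted draws and evaluate the fitted logistic curve at the observed value. The toy data-generating process and the acceptance window are assumptions for illustration, not the DIYABC implementation:

```r
# Minimal sketch of a logistic estimate of a posterior model probability,
# in the spirit of Beaumont (2008). Toy setup, not the DIYABC code.
set.seed(2)
n     <- 1e4
m     <- sample(0:1, n, replace = TRUE)   # model indicator (0/1)
theta <- rnorm(n)
# the two toy models differ only through the noise scale
s     <- ifelse(m == 1, rnorm(n, theta, 1), rnorm(n, theta, 2))
s_obs <- 0.3
keep  <- abs(s - s_obs) < 0.5             # loose ABC acceptance window

d   <- data.frame(m = m[keep], s = s[keep])
fit <- glm(m ~ s, family = binomial, data = d)
# local logistic estimate of P(M = 1 | s_obs)
p1  <- predict(fit, newdata = data.frame(s = s_obs), type = "response")
```

The fitted probability at `s_obs` replaces the raw acceptance frequency, smoothing the estimate across nearby summary values.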

**T**he solution adopted by the authors is to rely on Spiegelhalter et al.’s (2002) DIC to compare models. As discussed in our Bayesian Analysis (2006) paper, the DIC criterion is rather ambiguous, especially in missing variable models, which include ABC when the simulated data is processed as an additional variable. The additional difficulty in ABC settings is to find an acceptable proxy for the log-likelihood. One solution considered by the authors is to use the estimated expectation of the marginal *p(s₀|s)* in the DIC criterion, integrating out *θ*. Another one does the opposite, using *p(s₀|θ)* by integrating out *s*. (Because they are based on *θ*’s, those quantities can be subjected to regression adjustments.) In a Gaussian/Laplace toy problem, the authors found a complete opposition between the results derived from the ABC Bayes factor and the ABC DIC.
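For concreteness, a minimal R sketch of a DIC-type computation in an ABC setting follows; the Gaussian proxy for the intractable log-likelihood and the stand-in posterior sample are assumptions made here for illustration, not the authors' exact construction:

```r
# Minimal sketch of a DIC-type criterion with a proxy log-likelihood,
# log p(s0 | theta), as would be needed in an ABC setting. Toy values.
set.seed(3)
s0         <- 0.5
theta_post <- rnorm(500, mean = 0.4, sd = 0.2)  # stand-in ABC posterior sample

# proxy log-likelihood: s | theta ~ N(theta, 1)
loglik <- function(theta) dnorm(s0, mean = theta, sd = 1, log = TRUE)

D    <- -2 * loglik(theta_post)        # deviance at each posterior draw
Dbar <- mean(D)                        # posterior mean deviance
Dhat <- -2 * loglik(mean(theta_post))  # deviance at the posterior mean
pD   <- Dbar - Dhat                    # effective number of parameters
DIC  <- Dbar + pD
```

The ambiguity discussed above shows up precisely in the choice of `loglik`: replacing the plug-in proxy by a marginal version changes `pD`, and hence the model ranking.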

Filed under: R, Statistics Tagged: ABC, ABC in London, Bayes factors, DIC, The Queens Arm
