IMIS & AMIS


A most interesting paper by Adrian Raftery and Le Bao appeared in the Early View section of Biometrics. It aims at better predictions for HIV prevalence (in the original UNAIDS implementation, a naïve SIR procedure based on the prior as importance function was used, which sometimes resulted in terrible degeneracy), but its methodological input is about incremental mixture importance sampling (IMIS) and thus relates to the general topic of adaptive Monte Carlo methods I am interested in, and to some extent to our recent AMIS paper. Actually, a less elaborate (and less related) version of the IMIS algorithm first appeared in a 2006 JCGS paper by Steele, Raftery and Emond in the setting of finite mixture likelihoods, and I somehow managed to miss it…

Raftery and Bao propose to replace SIR with an iterative importance sampling technique developed in 2003 by Steele et al. that has some similarities with population Monte Carlo (PMC). (A negligible misrepresentation of PMC in the current paper: our method does not, in fact, use “the prior as importance function”.) In its current format, the IMIS algorithm starts from a first guess (e.g., the prior distribution) and builds a sequence of Gaussian (or Gaussian mixture) approximations whose parameters are estimated from the current population, while all simulations are merged together at each step, each draw receiving the mixture-stabilised weight

\dfrac{\pi(\theta_i^s \mid x)}{\omega_0\, p_0(\theta_i^s) + \sum_r \omega_r\, \widehat{q}_r(\theta_i^s)}

where the weights omega_r depend on the number of simulations at step r. This pattern also appears in our adaptive multiple importance sampling (AMIS) algorithm, developed in this arXiv paper with Jean-Marie Cornuet, Jean-Michel Marin and Antonietta Mira, and in the original paper by Owen and Zhou (2000, JASA) that inspired us. Raftery and Bao extend the methodology to an IMIS with optimisation at the initial stage, while AMIS incorporates the natural population Monte Carlo stepwise optimisation developed in Douc et al. (2008, Annals of Statistics), which brings the proposal kernel closer to the target after each iteration. The use of the simulations for model choice, found in the current paper and in Steele et al., can also be paralleled with our population Monte Carlo analysis of cosmological data in MNRAS.
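
To make the weighting step concrete, here is a minimal R sketch of the incremental-mixture idea on a toy univariate target. Everything in it (the names target, p0, n0, B, niter, and in particular the crude way the new Gaussian component is built around the current highest-weight draw) is illustrative rather than Raftery and Bao's actual implementation, which, among other refinements, includes an optimisation stage.

```r
## Toy univariate target (unnormalised posterior pi(theta|x)); illustrative only
target <- function(theta) dnorm(theta, 2, 0.5)
## Initial importance function p_0, e.g. the prior
p0 <- function(theta) dnorm(theta, 0, 3)

n0 <- 1000    # size of the initial sample
B  <- 500     # simulations added at each step
niter <- 5    # number of incremental steps
theta <- rnorm(n0, 0, 3)   # initial draws from p_0
comp  <- list()            # Gaussian components hat q_r built so far

## Mixture-stabilised weights: the denominator is the whole mixture
## omega_0 p_0 + sum_r omega_r hat q_r, with the omega's proportional to
## the number of simulations produced under each component
mix_weight <- function(theta) {
  sizes <- c(n0, rep(B, length(comp)))
  omega <- sizes / sum(sizes)
  denom <- omega[1] * p0(theta)
  for (j in seq_along(comp))
    denom <- denom + omega[j + 1] * dnorm(theta, comp[[j]]$m, comp[[j]]$s)
  target(theta) / denom
}

for (r in seq_len(niter)) {
  w <- mix_weight(theta)
  i <- which.max(w)   # centre the next component at the highest-weight draw
  comp[[r]] <- list(m = theta[i],
                    s = sqrt(weighted.mean((theta - theta[i])^2, w)))
  theta <- c(theta, rnorm(B, comp[[r]]$m, comp[[r]]$s))   # merge new draws
}

w <- mix_weight(theta)      # final weights over all merged simulations
weighted.mean(theta, w)     # self-normalised estimate of E[theta | x]
```

The point shared with AMIS is that the denominator is the full mixture of every proposal used so far, weighted by the number of simulations each produced, which is what stabilises the importance weights across iterations.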

Interestingly, Raftery and Bao (and also Steele et al.) refer to the defensive mixture paper of Hesterberg (1995, Technometrics), which has been very influential in my research on importance sampling, and (less directly) to Owen and Zhou (2000, JASA), who proposed the deterministic mixture scheme that inspired AMIS. Besides the foundational papers of Oh and Berger (1991, JASA) and West (1993, J. Royal Statistical Society Series B), they also mention a paper by Raghavan and Cox (1998, J. Statistical Computation & Simulation) I was not aware of, which also introduces a mixture of importance proposals as a variance stabilising technique.


Filed under: R, Statistics, University life Tagged: adaptive importance sampling, adaptivity, AMIS, defensive mixture, IMIS, incremental mixture, mixture estimation, population Monte Carlo, SIR

