Another Bernoulli factory


The paper “Exact sampling for intractable probability distributions via a Bernoulli factory” by James Flegal and Radu Herbei got posted on arXiv without my noticing, presumably because it came out just between Larry Brown’s conference in Philadelphia and my skiing vacation! I became aware of it only yesterday and find it quite interesting in that it links the Bernoulli factory method I discussed a while ago with my ultimate perfect sampling paper with Jim Hobert. In that 2004 paper in the Annals of Applied Probability, we derived a representation of the stationary distribution of a Markov chain as

\sum_{n=1}^{\infty} p_n Q_n(dx)

where

p_n = \dfrac{\mathbb{P}(\tau\ge n)}{\mathbb{E}[\tau]}\qquad\text{and}\qquad Q_n(A)=\mathbb{P}(X_n\in A|\tau\ge n),

the stopping time τ being the first occurrence of a renewal event in the split chain. While Q_n is reasonably easy to simulate by rejection (even though it may prove lengthy when n is large), simulating from the tail distribution of the stopping time is much harder.
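
To make the mixture representation and the rejection step concrete, here is a minimal R sketch for a toy uniformly ergodic chain on three states. It assumes a whole-space minorisation P(x,·) ≥ ε ν(·), so that the split-chain renewal probability is a constant ε, τ is exactly Geometric(ε), and the weights p_n are available in closed form; in the general setting of the paper the tail of τ is intractable, which is precisely the difficulty. The toy kernel, the names, and the rQn/rpi functions are mine, not the authors’.

set.seed(1)

P <- matrix(c(0.6, 0.3, 0.1,
              0.2, 0.5, 0.3,
              0.1, 0.4, 0.5), nrow = 3, byrow = TRUE)

nu   <- apply(P, 2, min)      # unnormalised minorising measure
eps  <- sum(nu)               # minorisation constant
nu   <- nu / eps              # renewal measure nu
Rker <- (P - eps * matrix(nu, 3, 3, byrow = TRUE)) / (1 - eps)  # residual kernel

## one draw from Q_n(.) = P(X_n in . | tau >= n): start from nu and reject
## any split-chain path that renews before time n
rQn <- function(n) {
  repeat {
    x <- sample(3, 1, prob = nu)                       # X_1 ~ nu, just after a renewal
    renewed <- FALSE
    if (n > 1) for (t in 2:n) {
      if (runif(1) < eps) { renewed <- TRUE; break }   # renewal before time n: reject the path
      x <- sample(3, 1, prob = Rker[x, ])              # otherwise move with the residual kernel
    }
    if (!renewed) return(x)
  }
}

## in this toy, tau ~ Geometric(eps), so p_n = P(tau >= n)/E[tau] is a geometric weight
## and pi = sum_n p_n Q_n can be sampled exactly
rpi <- function() rQn(rgeom(1, eps) + 1)               # n ~ p_n, then X_n ~ Q_n
table(replicate(1e4, rpi())) / 1e4
Re(eigen(t(P))$vectors[, 1]) / sum(Re(eigen(t(P))$vectors[, 1]))  # true stationary law, for comparison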

“Obtaining a single draw from π would require an obscene number of draws.” J. Flegal and R. Herbei

The solution of Flegal and Herbei is to use (a) accept-reject, (b) a bound obtained in our paper, and (c) the Bernoulli factory. That is,

\mathbb{P}(\tau\ge n) \le\dfrac{d(n) M}{\mathbb{E}[\tau]} \quad\text{with}\quad d(n)=(\beta-1)^{-1}\beta^{-n}

where β and M depend on the minorisation conditions for the Markov chain. The Bernoulli factory is used to generate Bernoulli variables with probability ap where

a=1/[Md(n)]\qquad\text{and}\qquad p=\mathbb{P}(\tau\ge n)
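
As a rough illustration of how the three ingredients fit together (and not the authors’ implementation), the following R sketch proposes n with mass proportional to β^{-n} and accepts with probability ap = P(τ ≥ n)/[M d(n)]. In Flegal and Herbei’s algorithm this accept coin is exactly what the Bernoulli factory delivers from independent {τ ≥ n} indicator coins, p being unknown; in the finite toy above p = (1-ε)^{n-1} is known, so a plain Bernoulli draw stands in for the factory, and the values of β and M below are toy choices, not the paper’s.

## accept-reject over n with envelope M d(n): just the skeleton, the accept
## coin being the place where the Bernoulli factory enters
sample_n <- function(beta, M, d, ap_coin) {
  repeat {
    n <- 1 + rgeom(1, 1 - 1 / beta)   # proposal with mass proportional to beta^{-n}
    if (ap_coin(n)) return(n)         # Bernoulli(a p), a = 1/[M d(n)], p = P(tau >= n)
  }
}

## toy stand-in for the factory coin, reusing eps from the three-state example
beta <- 1.5                                   # any rate such that beta^{-n} dominates the tail
d    <- function(n) beta^(-n) / (beta - 1)
M    <- 1                                     # ensures P(tau >= n) <= M d(n) for all n >= 1
ap_coin <- function(n) runif(1) < (1 - eps)^(n - 1) / (M * d(n))
sample_n(beta, M, d, ap_coin)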

The exact contribution of the paper consists in improving upon Nacu and Peres’ original algorithm by replacing the kinked function min(2p, 1-2ε) with a twice differentiable function. This improves the bound in the sandwiching algorithm of Latuszyński et al. from n^{-1/2} to n^{-1}. As Krzysztof Latuszyński pointed out to me, it is even possible to reach an exponential convergence rate for the Bernoulli factory, but this does not improve the [our] perfect sampling algorithm that motivates the study, which retains an infinite expected running time (Blanchet and Meng). This difficulty is reflected in the experiments, where an implementation for the normal-gamma posterior required 35 hours of computation, a single call to the Bernoulli factory accounting for 70% of the computing time! The realistic random effect model is rather anticlimactic in that the final outcome of pages 19-22 is a single realisation from the exact sampler. Noticeably, no output from the Bernoulli factory was used in any of the examples implemented in the paper. Nonetheless, I find the paper quite interesting from the Bernoulli factory point of view and likely to open new research directions on this “cute” problem.
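
For intuition only, and with a surrogate of my own choosing rather than the function actually constructed in the paper, the snippet below contrasts the kinked Nacu-Peres target min(2p, 1-2ε) with a smooth soft-minimum; the point is merely to visualise the kind of regularisation that buys the faster rate.

## not the paper's construction: a generic smooth surrogate for the kinked target
epsilon <- 0.1
delta   <- 0.02
f_np <- function(p) pmin(2 * p, 1 - 2 * epsilon)   # Nacu and Peres' original target
f_smooth <- function(p) {                          # infinitely differentiable soft-minimum
  -delta * log(exp(-2 * p / delta) + exp(-(1 - 2 * epsilon) / delta))
}

curve(f_np, 0, 1, lwd = 2, xlab = "p", ylab = "f(p)")
curve(f_smooth, 0, 1, add = TRUE, lty = 2, col = "red")
legend("bottomright", legend = c("min(2p, 1-2eps)", "smooth surrogate"),
       lty = 1:2, col = c("black", "red"))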


