where did the normalising constants go?! [part 1]


[Photo: Banff Centre cafeteria, Banff, March 21, 2012]

While listening this week to several talks in Banff on handling large datasets or complex likelihoods by parallelisation, splitting the posterior as

\prod_{i=1}^k p_i(\theta)

and handling each term of this product on a separate processor or thread as proportional to a probability density,

p_i(\theta)\propto m_i(\theta)=\omega_i p_i(\theta),

then producing simulations from the m_i's and attempting to derive simulations from the original product, I started to wonder where all those normalising constants went. What vaguely bothered me for a while, even prior to the meeting, and then clicked thanks to Sylvia's talk yesterday, was the handling of the normalising constants ω_i by those different approaches… Indeed, it seemed to me that the samples from the m_i's should be weighted by

\omega_i^{-1}\prod_{j\ne i}^k p_j(\theta)

rather than just

\prod_{j\ne i}^k p_j(\theta)

or than the product of the other posteriors

\prod_{j\ne i}^k m_j(\theta)

which makes, or should make, a significant difference. For instance, a sheer importance sampling argument for the aggregated sample exhibits those weights:

\mathbb{E}_{m_i}\left[h(\theta_i)\prod_{j=1}^k p_j(\theta_i)\big/m_i(\theta_i)\right]=\omega_i^{-1}\int h(\theta_i)\prod_{j\ne i}^k p_j(\theta_i)\,m_i(\theta_i)\,\text{d}\theta_i

Hence processing the samples on an equal footing, or as if the proper weight were the product of the other posteriors m_j, should have produced a bias in the resulting sample. This was however the approach in both Scott et al.'s and Neiswanger et al.'s perspectives. As well as Wang and Dunson's, who also started from the product of posteriors. (Normalising constants are considered in, e.g., Theorem 1, but only for the product density and its Weierstrass convolution version.) And in Sylvia's talk. Such a consensus of high-calibre researchers cannot get it wrong! So I must have missed something: what happened is that the constants eventually did not matter, as expanded in the next post.
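To make the issue concrete, here is a small R sketch of my own (a toy Gaussian example, not the construction used in any of the papers above): the posterior under a flat prior is split over k data shards, each normalised subposterior m_i is sampled separately, and each stream is then reweighted by the product of the other unnormalised terms p_j. The unknown constant ω_i^{-1} only shifts the log-weights of stream i by the same amount, so it cancels in the self-normalised estimate.

## toy illustration (my own sketch, not the methods discussed above):
## Gaussian data, flat prior, posterior split over k shards, each
## subposterior sampled separately, then each stream reweighted by the
## product of the other unnormalised terms
set.seed(42)
n <- 1000; k <- 4
x <- rnorm(n, mean = 2, sd = 1)
shards <- split(x, rep(1:k, length.out = n))

## with a flat prior and unit variance, each normalised subposterior m_i
## is N(mean(shard_i), 1/n_i), so it can be sampled directly
sub_mean <- sapply(shards, mean)
sub_sd   <- sqrt(1 / sapply(shards, length))
N <- 2000
sub_samples <- lapply(1:k, function(i) rnorm(N, sub_mean[i], sub_sd[i]))

## log of the unnormalised term p_j(theta) for shard j
log_p <- function(theta, j) {
  sapply(theta, function(t) sum(dnorm(shards[[j]], t, 1, log = TRUE)))
}

## reweight stream i by sum_{j != i} log p_j(theta); the missing constant
## -log omega_i would shift every log-weight equally and hence cancels
## once the weights are self-normalised
est <- sapply(1:k, function(i) {
  th <- sub_samples[[i]]
  lw <- rowSums(sapply((1:k)[-i], function(j) log_p(th, j)))
  w  <- exp(lw - max(lw))
  sum(w * th) / sum(w)        # per-stream estimate of E[theta | x]
})
c(weighted_streams = mean(est), full_posterior_mean = mean(x))

On this toy example both numbers should coincide up to Monte Carlo error, since each reweighted stream targets the full posterior whatever the value of ω_i.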


Filed under: R, Statistics, Travel Tagged: big data, consensus, embarrassingly parallel, normalising constant, parallel processing
