where did the normalising constants go?! [part 1]

March 10, 2014

[Photo from the Banff Centre cafeteria, Banff, March 21, 2012]

When listening this week to several talks in Banff on handling large datasets or complex likelihoods by parallelisation, splitting the posterior as

\prod_{i=1}^k p_i(\theta)

and handling each term of this product on a separate processor or thread as proportional to a probability density,

p_i(\theta)\propto m_i(\theta)=\omega_i p_i(\theta),

then producing simulations from the m_i's and attempting to derive simulations from the original product, I started to wonder where all those normalising constants went. What vaguely bothered me for a while, even prior to the meeting, and then crystallised thanks to Sylvia's talk yesterday, was the handling of the normalising constants ω_i by those different approaches… Indeed, it seemed to me that the samples from the m_i's should be weighted by

\omega_i^{-1}\prod_{j\ne i}^k p_j(\theta)

rather than just

\prod_{j\ne i}^k p_j(\theta)

or than the product of the other posteriors

\prod_{j\ne i}^k m_j(\theta)

which makes, or should make, a significant difference. For instance, a sheer importance sampling argument for the aggregated sample exhibits those weights

\mathbb{E}_{m_i}[h(\theta_i)\prod_{j=1}^k p_j(\theta_i)\big/m_i(\theta_i)]=\omega_i^{-1}\int h(\theta_i)\,m_i(\theta_i)\prod_{j\ne i}^k p_j(\theta_i)\,\text{d}\theta_i
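
To make the comparison concrete, here is a minimal R sketch (a made-up toy of my own, not from any of the papers discussed in this post): three Gaussian sub-posterior densities m_i, arbitrary constants ω_i attached to the factors p_i = m_i/ω_i, and self-normalised importance-sampling estimates of the posterior mean from the aggregated sample under each of the three candidate weightings above, next to the exact value. Whether and why the estimates end up (dis)agreeing is exactly the question pursued in the next post.

## Toy illustration (all values made up): k Gaussian sub-posterior densities m_i,
## unnormalised factors p_i = m_i / omega_i, draws from each m_i pooled into one
## aggregated sample, and self-normalised importance-sampling estimates of the
## posterior mean under the three candidate weightings.
set.seed(1)
k     <- 3
mu    <- c(-1, 0, 2)          # sub-posterior means
sig   <- c(1.0, 0.5, 1.5)     # sub-posterior standard deviations
omega <- c(0.1, 3.0, 25.0)    # arbitrary normalising constants, m_i = omega_i * p_i
N     <- 1e5                  # draws per machine

m <- function(theta, i) dnorm(theta, mu[i], sig[i])   # normalised densities m_i
p <- function(theta, i) m(theta, i) / omega[i]        # unnormalised factors p_i

## exact mean of the product posterior (a product of Gaussians is Gaussian)
exact <- sum(mu / sig^2) / sum(1 / sig^2)

## draws from each m_i
theta <- lapply(1:k, function(i) rnorm(N, mu[i], sig[i]))

## weights for a draw coming from machine i
w_full <- function(th, i)                              # full weight prod_j p_j / m_i,
  sapply(th, function(t) prod(p(t, 1:k))) / m(th, i)   #   i.e. omega_i^{-1} prod_{j!=i} p_j
w_drop <- function(th, i)                              # product of the other p_j's only
  sapply(th, function(t) prod(p(t, setdiff(1:k, i))))
w_dens <- function(th, i)                              # product of the other densities m_j
  sapply(th, function(t) prod(m(t, setdiff(1:k, i))))

## self-normalised estimate of E[theta] over the pooled (aggregated) sample
snis <- function(wfun) {
  w <- unlist(lapply(1:k, function(i) wfun(theta[[i]], i)))
  sum(w * unlist(theta)) / sum(w)
}

c(exact = exact, full = snis(w_full), drop = snis(w_drop), dens = snis(w_dens))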

Hence processing the samples on an equal footing, or as if the proper weight were the product of the other posteriors m_j, should have produced a bias in the resulting sample. This was however the approach in both Scott et al.'s and Neiswanger et al.'s perspectives. As well as Wang and Dunson's, who also started from the product of posteriors. (Normalising constants are considered in, e.g., their Theorem 1, but only for the product density and its Weierstrass convolution version.) And in Sylvia's talk. Such a consensus of high calibre researchers cannot get it wrong! So I must have missed something: what happened is that the constants eventually did not matter, as expanded in the next post.
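
For contrast, here is an equally small sketch of the consensus-type combination, with inverse-variance weights as in the Gaussian-motivated averaging rule of Scott et al. (the toy numbers are again my own): each combined draw is a weighted average of one draw per machine, and no normalising constant ω_i enters at any point.

## Consensus-style combination on a made-up Gaussian toy: one draw per machine
## per row, averaged with inverse-variance weights; the omega_i's never appear.
set.seed(2)
k   <- 3
mu  <- c(-1, 0, 2)            # sub-posterior means
sig <- c(1.0, 0.5, 1.5)       # sub-posterior standard deviations
N   <- 1e5

draws <- sapply(1:k, function(i) rnorm(N, mu[i], sig[i]))   # column i: draws from m_i
W     <- 1 / apply(draws, 2, var)                           # inverse-variance weights
consensus <- as.vector(draws %*% W) / sum(W)                # weighted average, row by row

exact <- sum(mu / sig^2) / sum(1 / sig^2)                   # exact product-posterior mean
c(exact = exact, consensus = mean(consensus))

In this Gaussian toy the averaged draws match the product posterior exactly, so the ω_i's are never needed; whether the same holds away from the Gaussian case is a separate question.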


Filed under: R, Statistics, Travel. Tagged: big data, consensus, embarrassingly parallel, normalising constant, parallel processing
