high dimension Metropolis-Hastings algorithms
When discussing high dimension models with Ingmar Schüster [blame my fascination for accented characters!] the other day, we came across the following paradox with Metropolis-Hastings algorithms. When attempting to simulate from a multivariate standard normal distribution in a large dimension p, starting from the mode of the target, i.e., its mean μ, leaving the mode μ is extremely unlikely, given the huge drop between the value of the density at the mode μ and at likely realisations (corresponding to the blue sequence). This holds even when relying on a proposal scale Σ that makes the proposal identical to the target! Resorting to a much smaller scale like Σ/p manages to escape the unhealthy neighbourhood of the highly unlikely mode (as shown by the brown sequence).
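To get a feel for the size of that drop, here is a quick back-of-the-envelope check (a minimal sketch, assuming a standard normal target with mean zero and identity covariance in dimension p=100): the log density at the mode is -p/2 log(2π) ≈ -92, while a typical realisation has squared norm close to p and hence a log density about p/2 ≈ 50 units lower, so a proposal drawn at the scale of the target is accepted with probability of order exp(-50).

# quick check of the log-density drop, assuming mu=0 and sigma=I in dimension p
p=100
x=rnorm(p)              # a typical realisation from the target
p*dnorm(0,log=TRUE)     # log density at the mode: -p/2*log(2*pi), about -92
sum(dnorm(x,log=TRUE))  # log density at x, about p/2 lower, around -142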
Here is the corresponding R code:
library(MASS)    # for mvrnorm
library(mvtnorm) # for dmvnorm

# log density of the multivariate normal target
logdmvnorm=function(x,mu,sigma) dmvnorm(x,mu,sigma,log=TRUE)

p=100            # dimension of the target
T=1e3            # number of MCMC iterations
mu=rep(0,p)      # target mean (assumed, not shown in the post)
sigma=diag(p)    # target covariance (assumed, not shown in the post)
mh=mu            # mode as starting value
vale=rep(0,T)    # log target density along the chain
for (t in 1:T){
  prop=mvrnorm(1,mh,sigma/p)  # random walk proposal with the small scale sigma/p
  if (log(runif(1))<logdmvnorm(prop,mu,sigma)-logdmvnorm(mh,mu,sigma)) mh=prop
  vale[t]=logdmvnorm(mh,mu,sigma)}
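To reproduce the two sequences mentioned above, one can plot the stored log densities and rerun the loop with the proposal scale sigma in place of sigma/p (a minimal sketch; the colours are of course arbitrary):

# plot the log target density along the chain; replacing mvrnorm(1,mh,sigma/p)
# with mvrnorm(1,mh,sigma) in the loop gives the run that stays stuck at the mode
plot(vale,type="l",col="chocolate",xlab="iteration",ylab="log target density")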