How to sample from multidimensional distributions using Gibbs sampling?


We will show how to perform multivariate random sampling using one of the Markov Chain Monte Carlo (MCMC) algorithms, called the Gibbs sampler.


Random sampling with a rabbit on the bed plane (animated GIF via GIPHY)


To start, what are MCMC algorithms and what are they based on?

Suppose we are interested in generating a random variable $X$ with distribution $\pi$ over a space $\mathcal{X}$.
If we are not able to do this directly, we will be satisfied with generating a sequence of random variables $X_1, X_2, \ldots$ which, in some sense, tends to the distribution $\pi$. The MCMC approach explains how to do so:

Build a Markov chain $(X_t)_{t \geq 0}$ whose stationary distribution is $\pi$. If the chain is constructed correctly, we should expect the random variables $X_t$ to converge in distribution to $\pi$.

In addition, we can expect that for a function $h$ (integrable with respect to $\pi$) the ergodic average

$$\frac{1}{T} \sum_{t=1}^{T} h(X_t) \xrightarrow{T \to \infty} \mathbb{E}_{\pi}\left[h(X)\right]$$

holds with probability equal to 1.

In essence, we want the above convergence to hold for an arbitrary (integrable) function $h$ of the random variable $X$.


One of the MCMC algorithms that guarantee the above properties is the so-called Gibbs sampler.

Gibbs Sampler – description of the algorithm.

Assumptions:

  • $\pi$ is defined on the product space $\mathcal{X}_1 \times \mathcal{X}_2 \times \ldots \times \mathcal{X}_d$
  • We are able to draw from the conditional distributions $\pi(x_i \mid x_{-i})$, where $x_{-i} = (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d)$

Algorithm steps:

  1. Select the initial values $x^{(0)} = \left(x_1^{(0)}, \ldots, x_d^{(0)}\right)$.
  2. For $t = 1, 2, \ldots$ repeat:

    For $i = 1, \ldots, d$ sample $x_i^{(t)}$ from the conditional distribution $\pi\left(x_i \mid x_1^{(t)}, \ldots, x_{i-1}^{(t)}, x_{i+1}^{(t-1)}, \ldots, x_d^{(t-1)}\right)$.

  3. Repeat step 2 until the distribution of the vector $x^{(t)} = \left(x_1^{(t)}, \ldots, x_d^{(t)}\right)$ stabilizes.
  4. The subsequent iterations of step 2 then provide (approximate) draws from $\pi$.

How do we understand point 3?
In practice, we consider the chain to have stabilized when the running means of its coordinates settle down to some chosen accuracy.
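
For example, one (hypothetical, not from the original post) way to check such stabilization in R is to track the running mean of a coordinate and stop once it changes by less than a chosen tolerance:

# Sketch: declare a chain "stable" once its running mean has changed by less
# than `tol` over the last `window` iterations (both thresholds are arbitrary,
# illustrative choices).
has_stabilized <- function(chain, tol = 0.001, window = 100) {
  running_mean <- cumsum(chain) / seq_along(chain)
  n <- length(running_mean)
  if (n <= window) return(FALSE)
  abs(running_mean[n] - running_mean[n - window]) < tol
}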


Intuition in the two-dimensional case:
Gibbs sampling on the plane.
Source: [3]


Gibbs sampling from a two-dimensional normal distribution.

We will sample from the distribution of $(\theta_1, \theta_2) \sim N(\mathbf{0}, \Sigma)$, where $\mathbf{0} = (0, 0)$ and $\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$ with $|\rho| < 1$.

Knowing the joint density of $(\theta_1, \theta_2)$, it is easy to show that the conditional distributions are:

$$\theta_1 \mid \theta_2 \sim N\left(\rho\,\theta_2,\ 1 - \rho^2\right), \qquad \theta_2 \mid \theta_1 \sim N\left(\rho\,\theta_1,\ 1 - \rho^2\right).$$

R implementation:

gibbs_normal_sampling <- function(n_iteration, init_point, ro) {
  # init_point is a numeric vector of length 2 (the starting point),
  # ro is the correlation parameter of the bivariate normal distribution
  theta_1 <- c(init_point[1], numeric(n_iteration))
  theta_2 <- c(init_point[2], numeric(n_iteration))
  for (i in 2:(n_iteration + 1)) {
    # draw theta_1 from its conditional given the previous theta_2
    theta_1[i] <- rnorm(1, ro * theta_2[i - 1], sqrt(1 - ro^2))
    # draw theta_2 from its conditional given the freshly drawn theta_1
    theta_2[i] <- rnorm(1, ro * theta_1[i], sqrt(1 - ro^2))
  }
  list(theta_1, theta_2)
}
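
For example, a call with illustrative argument values (these particular numbers are our assumption, not taken from the original post) could look like this:

# 10,000 iterations starting from (0, 0) with correlation 0.7 (illustrative values)
set.seed(1)
samples <- gibbs_normal_sampling(n_iteration = 10000, init_point = c(0, 0), ro = 0.7)
plot(samples[[1]], samples[[2]], pch = 20, cex = 0.3,
     xlab = expression(theta[1]), ylab = expression(theta[2]))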


Using the above function, let’s see how the first 10 points were sampled for a chosen value of $\rho$:
Sampling ten points via the Gibbs sampler – animation.

And for 10000 iterations:
Sampling from the bivariate normal distribution.

We leave it to you to compare how quickly the sampler stabilizes depending on the chosen value of $\rho$; a possible starting point is sketched below.
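
For instance, one could plot the running mean of $\theta_1$ for a few values of $\rho$ (the values below are only illustrative):

# running mean of theta_1 for several illustrative values of ro
set.seed(123)
par(mfrow = c(1, 3))
for (ro in c(0.1, 0.5, 0.99)) {
  chain <- gibbs_normal_sampling(5000, c(0, 0), ro)[[1]]
  plot(cumsum(chain) / seq_along(chain), type = "l",
       main = paste("ro =", ro), xlab = "iteration", ylab = "running mean of theta_1")
}
par(mfrow = c(1, 1))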


Let’s move on to using the Gibbs sampler to estimate density parameters.

We will show how the Gibbs sampler and Bayesian statistics can be used to estimate the mean parameters in a mixture of normal distributions.

Assumptions (simplified case):

  • The i.i.d. sample $Y = (Y_1, \ldots, Y_n)$ comes from the mixture of normal distributions $(1 - \pi)\,N(\mu_1, \sigma_1^2) + \pi\,N(\mu_2, \sigma_2^2)$, where $\pi$, $\sigma_1$ and $\sigma_2$ are known.
  • For $i = 1, 2$ the (a priori) distributions are $\mu_i \sim N(0, 1)$, and $\mu_1$ and $\mu_2$ are independent.
  • $\Delta = (\Delta_1, \ldots, \Delta_n)$ is the (unobserved) classification vector from which $Y$ is derived: when $\Delta_j = 0$, $Y_j$ is drawn from $N(\mu_1, \sigma_1^2)$; when $\Delta_j = 1$, it is drawn from $N(\mu_2, \sigma_2^2)$.

With the above assumptions it can be shown that the full conditional distributions are:

$$P\left(\Delta_j = 1 \mid \mu_1, \mu_2, y_j\right) = \frac{\pi\,\varphi\!\left(y_j;\, \mu_2, \sigma_2\right)}{(1 - \pi)\,\varphi\!\left(y_j;\, \mu_1, \sigma_1\right) + \pi\,\varphi\!\left(y_j;\, \mu_2, \sigma_2\right)},$$

$$\mu_1 \mid \Delta, y \sim N\!\left(\frac{\sum_{j} (1 - \Delta_j)\, y_j}{1 + \sum_{j} (1 - \Delta_j)},\ \frac{1}{1 + \sum_{j} (1 - \Delta_j)}\right), \qquad
\mu_2 \mid \Delta, y \sim N\!\left(\frac{\sum_{j} \Delta_j\, y_j}{1 + \sum_{j} \Delta_j},\ \frac{1}{1 + \sum_{j} \Delta_j}\right),$$

where $\varphi(\,\cdot\,;\, \mu, \sigma)$ denotes the normal density with mean $\mu$ and standard deviation $\sigma$.

The form of the algorithm:

  1. Choose a starting point for the means, $\left(\mu_1^{(0)}, \mu_2^{(0)}\right)$.
  2. In the $t$-th iteration do:
    • With the probability:
      $$P\left(\Delta_j^{(t)} = 1 \,\middle|\, \mu_1^{(t-1)}, \mu_2^{(t-1)}, y_j\right) = \frac{\pi\,\varphi\!\left(y_j;\, \mu_2^{(t-1)}, \sigma_2\right)}{(1 - \pi)\,\varphi\!\left(y_j;\, \mu_1^{(t-1)}, \sigma_1\right) + \pi\,\varphi\!\left(y_j;\, \mu_2^{(t-1)}, \sigma_2\right)}$$
      draw $\Delta_j^{(t)}$ for $j = 1, \ldots, n$.
    • Calculate:

      $$\hat{\mu}_1^{(t)} = \frac{\sum_j \left(1 - \Delta_j^{(t)}\right) y_j}{1 + \sum_j \left(1 - \Delta_j^{(t)}\right)}, \qquad \hat{\mu}_2^{(t)} = \frac{\sum_j \Delta_j^{(t)}\, y_j}{1 + \sum_j \Delta_j^{(t)}}$$

    • Generate:

      $$\mu_1^{(t)} \sim N\!\left(\hat{\mu}_1^{(t)},\ \frac{1}{1 + \sum_j \left(1 - \Delta_j^{(t)}\right)}\right), \qquad \mu_2^{(t)} \sim N\!\left(\hat{\mu}_2^{(t)},\ \frac{1}{1 + \sum_j \Delta_j^{(t)}}\right)$$

  3. Perform step 2 until the distribution of the vector $\left(\mu_1^{(t)}, \mu_2^{(t)}\right)$ stabilizes.

How to do this in R?

At the start, let’s generate a random sample from the mixture of normals with parameters $\mu_1 = 10$, $\mu_2 = 2$, $\sigma_1 = 1$, $\sigma_2 = 2$ and $\pi = 0.7$.

set.seed(12345)
mu_1 <- 10
mu_2 <- 2
sigma_1 <- 1
sigma_2 <- 2
pi_known <- 0.7
n <- 2000
# latent classification vector: 1 means the observation comes from N(mu_2, sigma_2^2)
pi_sampled <- rbinom(n, 1, pi_known)
y_1 <- rnorm(n, mu_1, sigma_1)
y_2 <- rnorm(n, mu_2, sigma_2)
# pick y_1 or y_2 for each observation according to the latent indicator
y <- (1 - pi_sampled) * y_1 + pi_sampled * y_2

Take a quick look at a histogram of our data:
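
A minimal way to draw such a histogram is, for example (the plotting choices here are ours):

hist(y, breaks = 50, freq = FALSE,
     main = "Sample from the mixture of normals", xlab = "y")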

The task now is to estimate $\mu_1$ and $\mu_2$ in the above model.

library(magrittr)  # provides the %>% pipe used below
library(purrr)     # provides map_int()

mu_init <- c(12, 0)
n_iterations <- 300
delta <- vector("list", n_iterations)
mu_1_vec <- c(mu_init[1], numeric(n_iterations))
mu_2_vec <- c(mu_init[2], numeric(n_iterations))

# P(Delta_j = 1 | mu_1, mu_2, y_j): probability that observation y belongs to the second component
delta_probability <- function(i, y, mu_1_vec, mu_2_vec, sigma_1, sigma_2, pi_known) {
  pi_known * dnorm(y, mu_2_vec[i - 1], sigma_2) /
      ((1 - pi_known) * dnorm(y, mu_1_vec[i - 1], sigma_1) + pi_known * dnorm(y, mu_2_vec[i - 1], sigma_2))
}

# posterior mean of mu_1 given the current classification vector
mu_1_mean <- function(delta, i, y) {
  sum((1 - delta[[i - 1]]) * y) / (1 + sum(1 - delta[[i - 1]]))
}

# posterior mean of mu_2 given the current classification vector
mu_2_mean <- function(delta, i, y) {
  sum(delta[[i - 1]] * y) / (1 + sum(delta[[i - 1]]))
}

for (j in 2:(n_iterations + 1)) {
  # draw the classification vector given the current means
  delta[[j - 1]] <- y %>% map_int(
    ~ rbinom(1, 1, delta_probability(j, ., mu_1_vec, mu_2_vec, sigma_1, sigma_2, pi_known)))
  # draw the means from their full conditional (normal) distributions
  mu_1_vec[j] <- rnorm(1, mu_1_mean(delta, j, y), sqrt(1 / (1 + sum(1 - delta[[j - 1]]))))
  mu_2_vec[j] <- rnorm(1, mu_2_mean(delta, j, y), sqrt(1 / (1 + sum(delta[[j - 1]]))))
}


Let’s see how the sampled values relate to the known ones:
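
A minimal sketch of how such a comparison could be drawn (the burn-in of 50 iterations is an arbitrary choice of ours, not taken from the original post):

# trace plots of the two chains together with the known means
plot(mu_1_vec, type = "l", ylim = range(c(mu_1_vec, mu_2_vec)),
     xlab = "iteration", ylab = "sampled mean")
lines(mu_2_vec, col = "red")
abline(h = c(mu_1, mu_2), lty = 2)
# rough posterior means after discarding a burn-in period
burn_in <- 50
mean(mu_1_vec[-(1:burn_in)])  # should be close to mu_1 = 10
mean(mu_2_vec[-(1:burn_in)])  # should be close to mu_2 = 2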

The following plot presents the mean of the classification vector $\Delta$ at each iteration:



Let’s check how the means of the parameters $\mu_1$ and $\mu_2$ stabilize over the course of the algorithm:



Note how few iterations were enough for the parameters to stabilize.


Finally, let’s compare the estimated density with our initial sample:
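
One way to produce such a comparison (again only a sketch, with an arbitrarily chosen burn-in) is to overlay the mixture density evaluated at the estimated means on a histogram of the data:

# mixture density at the estimated means, overlaid on the data histogram
burn_in <- 50
mu_1_hat <- mean(mu_1_vec[-(1:burn_in)])
mu_2_hat <- mean(mu_2_vec[-(1:burn_in)])
hist(y, breaks = 50, freq = FALSE, main = "Data vs estimated mixture density", xlab = "y")
curve((1 - pi_known) * dnorm(x, mu_1_hat, sigma_1) + pi_known * dnorm(x, mu_2_hat, sigma_2),
      add = TRUE, col = "blue", lwd = 2)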


Those interested in the topic can refer to [1], where you will find a generalization of the normal mixture model that extends the prior distributions to the remaining parameters.

It is also worth comparing the above algorithm with its deterministic counterpart, the Expectation-Maximization (EM) algorithm; see [2].


Write your questions and comments below. We’d love to hear what you think.


Resources:

[1] http://gauss.stat.su.se/rr/RR2006_1.pdf

[2] http://web.stanford.edu/~hastie/ElemStatLearn/

[3] http://vcla.stat.ucla.edu/old/MCMC/MCMC_tutorial.htm
