
MCMC for Econometrics Students – II

This is the second in a set of posts about Markov Chain Monte Carlo (MCMC, or MC²) methods in Bayesian econometrics. The background was provided in the first post, where the Gibbs sampler was introduced.

The main objective of the present post is to convince you that this MCMC stuff actually works!

To achieve this, what we’re going to do is work through a simple example – one for which we actually know the answer in advance. That way, we’ll be able to check our results from applying the Gibbs sampler against the facts. Hopefully, we’ll then be able to see that this technique works – at least for this example!

I’ll be using some R script that I’ve written to take students through this, and it’s available on the code page for this blog. I should mention in advance that this code is not especially elegant. It’s been written, quite deliberately, in a step-by-step manner to make it relatively transparent to non-users of R. Hopefully, the comments that are embedded in the code will also help.

It’s also important to note that this first illustration of the Gibbs sampler in action does not involve the posterior distribution for the parameters in a Bayesian analysis of some model.  Instead, we’re going to look at the problem of obtaining the marginals of a bivariate normal distribution, when we know the form of the conditional distributions.

In other words – let’s proceed one step at a time. The subsequent posts on this topic will be dealing with Bayesian posterior analysis.

Let’s take a look at the set-up, and the analysis that we’re going to undertake.

Suppose that we have two random variables, Y1 and Y2, which follow a bivariate normal distribution with a mean vector, (μ1 , μ2)′, and a correlation of ρ. The variances of Y1 and Y2 will be σ1² and σ2² respectively, and so the covariance between Y1 and Y2 will be (ρσ1σ2).

Now, it’s well known that the conditional distribution of Y1 given Y2 is

                 p(y1 | y2) ~ N[μ1 + ρσ1(y2 – μ2)/σ2 , σ1²(1 – ρ²)] ,

and the conditional distribution of Y2 given Y1 is

                 p(y2 | y1) ~ N[μ2 + ρσ2(y1 – μ1)/σ1 , σ2²(1 – ρ²)] .

We’ll take this information as given, and we’ll use it (via the Gibbs sampler) to obtain the marginal distributions of Y1 and Y2.

Now, in fact these marginal distributions are known to be

               p(y1) ~ N[μ1 , σ1²]
and
              p(y2) ~ N[μ2 , σ2²] .

However, we’re going to pretend that we don’t know this. It means, though, that we can check our MCMC results for accuracy at the end of the exercise.

Without any loss of generality, I’m going to set μ1 = μ2 = 0; and σ1 = σ2 = 1. Regardless of the value of ρ, this means that the marginal distributions for Y1 and Y2 should each be standard normal.

Now, let’s recall the steps involved in using the Gibbs sampler in this context. What we’ll do is:
  1. Choose an initial value for Y1. Let’s call this value y1⁽⁰⁾.
  2. Next, generate a random Y2 value (say, y2⁽¹⁾) from p(y2 | y1⁽⁰⁾).
  3. Then, generate a new random Y1 value (say, y1⁽¹⁾) from p(y1 | y2⁽¹⁾).
  4. Repeat steps 2 and 3 many, many times, saving the strings of Y1 and Y2 values.
  5. Throw away the first thousand or so values from these strings of Y1 and Y2 values. These early draws depend on our arbitrary starting value, and the chain won’t yet have converged to the distributions that we’re after.
  6. After these “burn in” values have been discarded, the subsequent strings should contain thousands of values from the marginal distributions of Y1 and Y2. These are the values that we’re interested in.
Of course, we have to allow for a sufficiently long “burn in” period for this to work effectively. This is something that we can return to in a later post.

So, let’s take a look at my R code. It starts off as follows, by initializing some quantities that we’ll need:
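Something along the following lines will do the job. (This is just a sketch: the variable names, the seed, and the choice of ρ = 0.5 are my own, and may differ from the script on the code page.)

    set.seed(123)          # for reproducibility (my choice of seed)
    mu1 <- 0; mu2 <- 0     # marginal means
    sig1 <- 1; sig2 <- 1   # marginal standard deviations
    rho <- 0.5             # correlation between Y1 and Y2 (assumed value)
    burn <- 1000           # number of "burn in" draws to discard
    keep <- 100000         # number of draws to retain
    nrep <- burn + keep    # total number of Gibbs iterations
    y1 <- numeric(nrep)    # storage for the Y1 draws
    y2 <- numeric(nrep)    # storage for the Y2 draws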


After assigning an initial value for Y1 (i.e., step 1 above), we run through the Gibbs sampler loop, and store all of the values drawn from the conditional distributions for Y1 and Y2:
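Continuing the sketch, with an (arbitrarily chosen) initial value of zero for Y1:

    # The conditional standard deviations are constant, so compute them once
    sd1 <- sig1 * sqrt(1 - rho^2)
    sd2 <- sig2 * sqrt(1 - rho^2)
    y1curr <- 0            # step 1: initial value for Y1 (any value will do)
    for (i in 1:nrep) {
      # step 2: draw Y2 from p(y2 | y1)
      y2[i] <- rnorm(1, mu2 + rho * sig2 * (y1curr - mu1) / sig1, sd2)
      # step 3: draw Y1 from p(y1 | y2)
      y1[i] <- rnorm(1, mu1 + rho * sig1 * (y2[i] - mu2) / sig2, sd1)
      y1curr <- y1[i]
    }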


Next, we throw away the “burn in” observations, and just focus on the next 100,000 values of each of Y1 and Y2:
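For example (the “keep” names are mine):

    y1keep <- y1[(burn + 1):nrep]   # the 100,000 retained draws of Y1
    y2keep <- y2[(burn + 1):nrep]   # the 100,000 retained draws of Y2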


Let’s plot these values for Y1:
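For example, using base R graphics:

    plot(y1keep, type = "l", xlab = "Iteration", ylab = "Y1")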


This is what we get:


Notice that the values are centered around a level of zero, and very few values fall above 3, or below -3. This is what we’d expect if the distribution is standard normal, which is what the marginal distribution for Y1 is supposed to be.

Now repeat this for Y2:
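For example:

    plot(y2keep, type = "l", xlab = "Iteration", ylab = "Y2")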


Once again, the results are what you’d expect from a standard normal distribution. 
This is a good start, but we need to explore these values in more detail. Let’s look at some summary statistics.
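Commands along these lines will do it (again, just a sketch):

    summary(y1keep); var(y1keep)
    summary(y2keep); var(y2keep)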

The means of the Y1 and Y2 values are essentially zero, and the variances are essentially unity. That’s what they should be if these draws are from the marginal distributions, and not from the conditional distributions. (With σ1 = σ2 = 1, draws from the conditionals would have a variance of only (1 – ρ²).)
Next, let’s look at histograms of the Y1 and Y2 data:
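For example (the number of bins is my choice):

    hist(y1keep, breaks = 50, xlab = "Y1", main = "Gibbs draws for Y1")
    hist(y2keep, breaks = 50, xlab = "Y2", main = "Gibbs draws for Y2")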

Here are the results:

I’m almost convinced that the 100,000 values for Y1, and those for Y2, come from standard normal distributions.
Let’s nail this with a couple of Q-Q plots and Jarque-Bera tests. (The latter require that we have installed, and call up, the “tseries” package in R.)
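A sketch of the commands involved:

    library(tseries)                 # needed for jarque.bera.test()
    qqnorm(y1keep); qqline(y1keep)   # Q-Q plot of the Y1 draws against normal quantiles
    qqnorm(y2keep); qqline(y2keep)   # Q-Q plot of the Y2 draws against normal quantiles
    jarque.bera.test(y1keep)
    jarque.bera.test(y2keep)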





The Q-Q plots look great, and the large p-values associated with the Jarque-Bera tests tell us that we can’t reject normality.

I’m convinced, and I hope that you are too! Our Gibbs sampler seems to have successfully generated 100,000 draws from the marginal distributions for Y1 and Y2.

Once again, the full set of R commands is on the code page for this blog. Some subsequent posts will provide examples of applying the Gibbs sampler to determine the marginal posterior distributions for the parameters in some Bayesian problems.


© 2014, David E. Giles
