Visualizing a Markov Chain


A Markov Chain describes a sequence of states where the probability of transitioning to the next state depends only on the current state. Markov chains are useful in a variety of computer science, mathematics, and probability contexts, and they feature prominently in Bayesian computation as Markov Chain Monte Carlo. Here, we're going to look at a relatively simple breed of Markov chain and build up some intuition using simulations and animations (two of my favorite things).

The R code for simulating the Markov Chain and creating the visualizations is at the bottom of the post.

Example

Imagine a school yard with 100 children. During recess, children can either be on the soccer field or the playground. So there are two possible states for a child to be in at any given time: (1) soccer field or (2) playground. At the beginning of recess, approximately half of the children will start on the soccer field and the other half will start on the playground. But as time goes on some children will switch from the soccer field to the playground and vice versa. Let’s say we know from previous experience that at any given moment, if a child is on the soccer field they have a .10 probability (10%) of going to the playground. In contrast, if a child is on the playground they have a .05 probability (5%) of going to the soccer field. Thus, the child has a .90 and .95 probability of staying in their respective state.

This can be represented pictorially. The circles (nodes) represent the two states and the arrows (edges) represent the probabilities of moving to the other state.

In this fairly simple setup, we’ll see ways of exploring how this system evolves over time.

Formal Definitions

Some terminology here: We're describing a finite-state Markov chain. This means that there is a discrete (countable) set of possible states to be in. For example, a child can be on the soccer field OR the playground and we ignore in-between states. We also say that the Markov chain is memoryless, which means that the process depends only on the current state of the chain, not on the states that came before it. Each child's probability of being, say, on the playground depends only on where they currently are. Let's express these statements formally:

\[\text{Pr}(X_{n+1}=x \mid X_n=x_n, X_{n-1}=x_{n-1}, \dots, X_0=x_0) = \text{Pr}(X_{n+1}=x \mid X_n=x_n)\]

We read this as "the probability that the next state (\(n+1\)) of the random variable \(X\) is a particular value, \(x\), depends only on the current state, \(X_n=x_n\), and not on the full history of states that came before it".

We can also express this process as a series of matrix operations. Don’t worry if matrices aren’t your cup of tea; in this example the matrices are pretty small and the math isn’t very complicated. We start by defining a transition matrix, \(T\):

\[T=\begin{bmatrix} 0.90&0.05 \\ 0.10&0.95 \\ \end{bmatrix}\]

The elements in \(T\) correspond to the probabilities we defined earlier. For example, \(T_{1,1} = .90\) is the probability of staying on the soccer field and \(T_{2,1} = .10\) is the probability of moving from the soccer field to the playground (note that these sum to 1). Column 2 of \(T\) likewise corresponds to the probabilities of moving vs. staying when the child is on the playground.
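Here's a minimal way to store these probabilities in R (I call the object T_mat because plain T is best avoided in R; it's a built-in alias for TRUE):

```r
# Transition matrix: columns index the current state
# (1 = soccer field, 2 = playground), rows index the next state
T_mat <- matrix(c(0.90, 0.05,
                  0.10, 0.95),
                nrow = 2, byrow = TRUE)

colSums(T_mat)  # each column sums to 1
#> [1] 1 1
```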

Computing the Chain

The question we want to answer is: at the end of recess, how many children will be on the soccer field vs. the playground? Let's start by looking at just one moment – what happens after a single iteration of the process? We don't need to specify a time interval for each iteration (in this example we can imagine that one iteration is ten seconds or so); we just need to be clear that the transition probabilities apply at each iteration. Here's our equation for the first iteration:

\[X_{n=1}=\begin{bmatrix} 0.90&0.05 \\ 0.10&0.95 \\ \end{bmatrix}\begin{bmatrix} 0.50\\ 0.50\\ \end{bmatrix}\]

We have our transition matrix, \(T\), from before and we’re going to matrix-multiply it with a vector containing the proportion of children present in each state (remember we said that at the beginning of recess half of the children would be on the soccer field and half on the playground). Multiplying this out we get:

\[X_{n=1}=\begin{bmatrix} 0.90&0.05 \\ 0.10&0.95 \\ \end{bmatrix}\begin{bmatrix} 0.50\\ 0.50\\ \end{bmatrix} = \begin{bmatrix} 0.90\cdot0.50+0.05\cdot0.50 \\ 0.10\cdot0.50+0.95\cdot0.50 \\ \end{bmatrix} =\begin{bmatrix} 0.475\\ 0.525\\ \end{bmatrix}\]

So after one iteration we expect roughly 48 children (rounding up from \(.475\cdot100\)) to be on the soccer field and 52 children to be on the playground. If we want to see how the process unfolds over another iteration, we simply plug in the previous result and do the operation again:

\[X_{n=2}=\begin{bmatrix} 0.90&0.05 \\ 0.10&0.95 \\ \end{bmatrix}\begin{bmatrix} 0.475\\ 0.525\\ \end{bmatrix} = \begin{bmatrix} 0.90\cdot0.475+0.05\cdot0.525 \\ 0.10\cdot0.475+0.95\cdot0.525 \\ \end{bmatrix} \approx\begin{bmatrix} 0.45\\ 0.55\\ \end{bmatrix}\]
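We can check both iterations in R with the matrix-multiplication operator %*%, reusing T_mat from above (the exact second-iteration values, .45375 and .54625, are rounded in the equation):

```r
x0 <- c(0.50, 0.50)  # half the children start in each state
x1 <- T_mat %*% x0   # first iteration
x2 <- T_mat %*% x1   # second iteration

x1
#>       [,1]
#> [1,] 0.475
#> [2,] 0.525
x2
#>         [,1]
#> [1,] 0.45375
#> [2,] 0.54625
```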

Running the Simulation

So, analytically, this is our expected result over a couple of iterations. But we want to know how the Markov process unfolds over many iterations. We could continue doing this analytically using linear algebra, but instead we're going to use Markov Chain Monte Carlo to obtain an empirical estimate. Here, that just means randomly simulating each child's transitions, step by step, for many more iterations.
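Here's a quick sketch of the idea, reusing T_mat from above (the full simulation code is at the bottom of the post):

```r
set.seed(2020)  # arbitrary seed, for reproducibility

n_children <- 100
n_iter <- 100

# Start half the children in each state (1 = soccer field, 2 = playground)
state <- rep(c(1, 2), each = n_children / 2)

prop_soccer <- numeric(n_iter)

for (i in 1:n_iter) {
  # Each child stays or moves according to the column of T_mat
  # that matches their current state
  state <- sapply(state, function(s) sample(1:2, size = 1, prob = T_mat[, s]))
  prop_soccer[i] <- mean(state == 1)
}

plot(prop_soccer, type = "l", ylim = c(0, 1),
     xlab = "Iteration", ylab = "Proportion on the soccer field")
```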

Running the chain for 100 iterations, here’s our result:

After 100 iterations of the chain, about two thirds of the children end up on the playground and one third end up on the soccer field. Note how the proportions change most over the first several iterations, then stabilize as the chain progresses. We would say that the chain has converged, which means that running it for more iterations probably won't change our conclusions.
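We can verify this limiting split analytically: the stationary distribution is the eigenvector of \(T\) with eigenvalue 1, rescaled so its elements sum to 1. A quick check in R (again assuming T_mat from above):

```r
# eigen() sorts eigenvalues by decreasing modulus, so for a transition
# matrix the eigenvalue-1 vector comes first
stationary <- function(trans_mat) {
  ev <- eigen(trans_mat)$vectors[, 1]
  ev / sum(ev)
}

stationary(T_mat)
#> [1] 0.3333333 0.6666667
```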

It’s worth noting that a small gap between the two transition probabilities (.10 vs. .05) translates into a wide gap in the proportions after the chain has converged (two thirds vs. one third). It’s a pattern loosely similar to phenomena described by Chaos Theory, where minor differences in a system’s conditions can have dramatic downstream effects.
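To see this sensitivity, compare the split above with what happens when the two switching probabilities are made equal (a quick sketch reusing the stationary() helper defined earlier):

```r
# With symmetric switching probabilities, the chain settles at 50/50
T_sym <- matrix(c(0.90, 0.10,
                  0.10, 0.90),
                nrow = 2, byrow = TRUE)

stationary(T_sym)
#> [1] 0.5 0.5
```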

To see this process play out in real time, here’s an animated simulation. Each dot is a child and the color corresponds to their starting location. The cluster on the left is the soccer field and the cluster on the right is the playground. I’ve also given the points a bit of random movement to simulate kids running around haphazardly on the school yard.
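Here's a rough sketch of one way to build such an animation with ggplot2 and gganimate; the cluster positions and jitter amounts below are arbitrary choices (the full animation code is at the bottom of the post):

```r
library(ggplot2)
library(gganimate)

set.seed(2020)
n_children <- 100
n_iter <- 50

state <- rep(c(1, 2), each = n_children / 2)
start <- factor(state, labels = c("soccer field", "playground"))

# Record each child's (jittered) position at every iteration
frames <- vector("list", n_iter)
for (i in 1:n_iter) {
  state <- sapply(state, function(s) sample(1:2, size = 1, prob = T_mat[, s]))
  frames[[i]] <- data.frame(
    iter = i,
    start = start,
    x = state + rnorm(n_children, sd = 0.1),  # clusters at x = 1 and x = 2
    y = rnorm(n_children, sd = 0.1)
  )
}
sim <- do.call(rbind, frames)

anim <- ggplot(sim, aes(x, y, colour = start)) +
  geom_point(size = 2, alpha = 0.8) +
  labs(title = "Iteration: {frame_time}", x = NULL, y = NULL) +
  transition_time(iter)

animate(anim)
```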
