Variable selection using Gibbs sampling


“Which variables are important?” is a key question in science and statistics. In this blog post, I focus on linear models and discuss a Bayesian solution to this problem using spike-and-slab priors and the Gibbs sampler, a computational method to sample from a joint distribution using only conditional distributions.

Variable selection is a beast. To slay it, we must draw on ideas from different fields. We have to discuss the basics of Bayesian inference which motivates our principal weapon, the Gibbs sampler. As an instruction manual, we apply it to a simple example: drawing samples from a bivariate Gaussian distribution (for pre-combat exercises, see here). The Gibbs sampler feeds on conditional distributions. To be able to derive those easily, we need to equip ourselves with $d$-separation and directed acyclic graphs (DAGs). Having trained and become stronger, we attack variable selection in the linear regression case using Gibbs sampling with spike-and-slab priors. These priors are special in that they are a discrete mixture of a Dirac delta function — which can shrink regression coefficients exactly to zero — and a Gaussian distribution. We tackle the single predictor case first, and then generalize it to $p > 1$ predictors. For $p$ predictors, the Gibbs sampler with spike-and-slab priors yields a posterior distribution over all possible $2^p$ regression models, an enormous feat. From this, posterior inclusion probabilities and model-averaged parameter estimates follow straightforwardly. To wield this weapon in practice, we implement the method in R and engage in variable selection on simulated and real data. Seems like we have a lot to cover, so let’s get started!

Quantifying uncertainty

Bayesian inference is an excellent tool for uncertainty quantification. Assume you have assigned a prior distribution to some parameter $\beta$ of a model $\mathcal{M}$, call it $p(\beta \mid \mathcal{M})$. After you have observed data $\mathbf{y}$, how should you update your belief to arrive at the posterior, $p(\beta \mid \mathbf{y}, \mathcal{M})$? The rules of probability dictate:

$$p(\beta \mid \mathbf{y}, \mathcal{M}) = \frac{p(\mathbf{y} \mid \beta, \mathcal{M}) \, p(\beta \mid \mathcal{M})}{p(\mathbf{y} \mid \mathcal{M})} \enspace .$$
The computationally easy parts of the right-hand side are the specification of the prior and, unless you do crazy things, also of the likelihood. The tough bit is the marginal likelihood or normalizing constant which, as the name implies, makes the posterior distribution integrate to one, as all proper probability distributions must. In contrast to differentiation, which is a local operation, integration is a global operation and is thus much harder. It becomes even harder with many parameters.

Usually, Bayes’ rule is given without conditioning on the model, $\mathcal{M}$. However, this assumes that we know one model to be true with certainty, thus ignoring the uncertainty we have about the models. We can apply Bayes’ rule not only on parameters, but also on models:

where $m$ is the number of all models and

is in fact the marginal likelihood of our first equation. To illustrate how one could do variable selection, assume we have two models, $\mathcal{M}_1$ and $\mathcal{M}_2$, which differ in their number of predictors:

If these two are the only models we consider, then we can quantify their respective merits using posterior odds:

where we can interpret the Bayes factor as an indicator of how much more likely the data are under $\mathcal{M}_1$, which includes $\beta_2$, compared to $\mathcal{M}_2$, which does not include $\beta_2$. However, two additional regression models are possible:

In general, if $p$ is the number of predictors, then there are $2^p$ possible regression models in total. If we ignore some of those a priori, we will have violated Cromwell’s rule, which states that we should never assign prior probabilities of zero to things that could possibly happen. Otherwise, regardless of the evidence, we would never change our mind. As Dennis Lindley put it, we should

“[…] leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.” (Lindley, 1991, p. 101)

One elegant aspect about the Bayes factor is that we do not need to compute the normalizing constant of all models (it cancels in the ratio), which would require us to enumerate and assign priors to all possible models. If we are willing to do this, however, then we can model-average to get a posterior distribution of $\beta_j$ that takes into account the uncertainty about all $m$ models:

which requires computing the posterior distribution over the parameter of interest $\beta_j$ in each model $\mathcal{M}_i$, as well as the posterior distribution over all such models. Needless to say, this is a difficult problem; the bulk of this blog post is devoted to finding an efficient way to do this in the context of linear regression models. For variable selection, we might be interested in another quantity: the posterior probability that $\beta_j \neq 0$, averaged over all models. We can arrive at this by similar means:

Note that conditional on a model $\mathcal{M}_i$, $\beta_j$ is either zero or not zero. Therefore, all the terms in which $\beta_j$ is zero drop out of the sum, and we are left with summing the posterior model probabilities for the models in which $\beta_j \neq 0$. This model-averaging perspective strikes me as a very elegant approach to variable selection.1 In the remainder of this blog post, we will solve this variable selection problem for linear regression using the Gibbs sampler with spike-and-slab priors.
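
To make this model-averaging idea concrete, here is a small numerical sketch in R. The model probabilities and within-model estimates below are made up for illustration, not computed from any data:

# hypothetical posterior model probabilities for m = 4 models
post_prob_model <- c(M1 = 0.50, M2 = 0.30, M3 = 0.15, M4 = 0.05)

# hypothetical posterior means of beta_j within each model
# (beta_j is excluded from M1 and M3, so it is exactly zero there)
post_mean_beta <- c(M1 = 0, M2 = 0.40, M3 = 0, M4 = 0.35)

# model-averaged posterior mean of beta_j
sum(post_prob_model * post_mean_beta)
## [1] 0.1375

# posterior probability that beta_j is not zero: sum the posterior
# probabilities of the models in which beta_j is included (M2 and M4)
sum(post_prob_model[c('M2', 'M4')])
## [1] 0.35

The hard part, of course, is obtaining the posterior model probabilities and the within-model posteriors in the first place; that is exactly what the Gibbs sampler developed below will give us.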

Gibbs sampling

Much of the progress in Bayesian inference over the last few decades is due to methods that arrive at the posterior distribution without calculating the marginal likelihood. One such method is the Gibbs sampler, which breaks down a high-dimensional problem into a number of smaller low-dimensional problems. It’s really one of the coolest things in statistics: it samples from the joint posterior distribution and its marginals by sampling from the conditional posterior distributions. Proving mathematically that it works is not trivial, and beyond the scope of this already lengthy introductory blog post.2 Thus, instead of getting bogged down in the technical details, let’s take a look at a motivating example.

Sampling from a bivariate Gaussian

To illustrate, let $X_1$ and $X_2$ be bivariate normally distributed random variables with population mean zero ($\mu_1 = \mu_2 = 0$), unit variance ($\sigma_1^2 = \sigma_2^2 = 1$), and correlation $\rho$. As you may recall from a previous blog post, the conditional Gaussian distributions of $X_1$ given $X_2 = x_2$, and of $X_2$ given $X_1 = x_1$, respectively, are:

$$X_1 \mid X_2 = x_2 \sim \mathcal{N}\left(\rho x_2, \, 1 - \rho^2\right) \, , \quad X_2 \mid X_1 = x_1 \sim \mathcal{N}\left(\rho x_1, \, 1 - \rho^2\right) \enspace .$$
The Gibbs sampler makes it so that if we sample repeatedly from these two conditional distributions,

$$x_1^{(t)} \sim p\left(X_1 \mid X_2 = x_2^{(t-1)}\right) \, , \quad x_2^{(t)} \sim p\left(X_2 \mid X_1 = x_1^{(t)}\right) \, ,$$
then these will be samples from the joint distribution $p(X_1, X_2)$ and its marginals.

To illustrate, we implement this Gibbs sampler in R.

sample_bivariate_normal <- function(rho, nr_samples) {
  x <- matrix(0, nrow = nr_samples, ncol = 2)
  
  for (i in seq(2, nr_samples)) {
    x[i, 1] <- rnorm(1, rho*x[i-1, 2], sqrt(1 - rho^2))  # sample from p(X1 | X2 = x2)
    x[i, 2] <- rnorm(1, rho*x[i, 1]  , sqrt(1 - rho^2))  # sample from p(X2 | X1 = x1)
  }
  
  x
}

Let’s see it in action:

set.seed(1)
samples <- sample_bivariate_normal(rho = 0.5, nr_samples = 10000)
cov(samples)
##           [,1]      [,2]
## [1,] 1.0178545 0.5091747
## [2,] 0.5091747 0.9949518

[Figure: samples from the bivariate Gaussian drawn with the Gibbs sampler]

Wait a minute, you might say. In this toy example, what was the prior distribution, and which posterior did we compute? The answer is: there were none! We have used the Gibbs sampler not to learn about a parameter, but rather to illustrate that sampling from conditional distributions in this way results in samples from the joint distribution. In the next section, we look at how graphs can help us in finding conditional independencies.3
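
Before moving on, a quick sanity check: we can compare these Gibbs samples to draws obtained directly from the joint distribution with mvtnorm::rmvnorm (the mvtnorm package is also used later in this post). A minimal sketch:

library('mvtnorm')

set.seed(1)
Sigma <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
samples_direct <- rmvnorm(10000, mean = c(0, 0), sigma = Sigma)

# the difference between the two covariance estimates is Monte Carlo noise
round(cov(samples) - cov(samples_direct), 2)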

Conditional independence and graphs

Before we look into variable selection using spike-and-slab priors in the linear regression case, we need to get some preliminaries about conditional independence out of the way. We write:

to denote that $X$ and $Y$ are conditionally independent given $Z$.4 We can visualize conditional independencies between random variables using directed acyclic graphs (DAGs). The figure below distinguishes between three different DAG structures.

DAG (a) above is a common cause structure. A good example is the positive correlation between the number of storks and the number of human babies delivered; these two variables become independent once one conditions on the common cause economic development (Matthews, 2001). DAG (b) is an example where the effect of $X$ on $Y$ is fully mediated by $Z$: conditional on $Z$, $X$ does not have an effect on $Y$. Thus, both in DAGs (a) and (b), conditioning on $Z$ renders $X$ and $Y$ independent.

Two variables can also be marginally independent, for which we write:

which holds in DAG (c). Note, however, that if we were to condition on $Z$ in DAG (c), then $X$ and $Y$ would become dependent. $Z$ is a collider, and conditioning on it induces a dependency between $X$ and $Y$. Although not visible in the DAG, a dependency would also have been induced between $X$ and $Y$ if we had conditioned on any children of $Z$.

Note that although we visualize the conditional independencies in a DAG, we do not interpret it causally. We are merely interested in seeing, not doing, and view the arrows as “incidental construction features supporting the $d$-separation semantics” (Dawid, 2010 p. 90).

As we will see in the next section, being able to read conditional independencies from a graph greatly aids in finding conditional distributions feeding the Gibbs sampler.

Spike-and-Slab Regression

Model specification

In a previous blog post, we discussed the (history of the) methods of least squares and linear regression. However, we did not assess whether a particular variable $X$ is actually associated with an outcome $Y$. We can think of this problem as hypothesis testing, variable selection, or structure learning. In particular, we may write the regression model with a single predictor variable (and, for simplicity, no intercept) as:

$$y_i = \beta x_i + \epsilon_i \, , \quad \epsilon_i \overset{\text{iid}}{\sim} \mathcal{N}\left(0, \sigma_e^2\right) \enspace .$$
We put the following prior on $\beta$:

$$\beta \mid \pi, \tau^2 \sim (1 - \pi) \, \delta_0 + \pi \, \mathcal{N}\left(0, \, \sigma_y^2 \tau^2\right) \, ,$$
where $\pi \in [0, 1]$ is a mixture weight, $\sigma_y^2$ is the variance of $\mathbf{y}$, $\delta_0$ is the Dirac delta function (the spike), and $\tau^2$ is the variance of the slab. We multiply $\tau^2$ by $\sigma_y^2$ so that the prior naturally scales with the scale of the outcome. If we did not do this, our results would depend on the measurement units of $\mathbf{y}$. Instead of fixing $\tau^2$ to a constant, we learn it by specifying

$$\tau^2 \sim \text{Inverse-Gamma}\left(\frac{1}{2}, \, \frac{s^2}{2}\right) \, ,$$
which results in a scale-mixture of Gaussians, that is, a Cauchy distribution with scale $s$. The figure below visualizes the marginal prior on $\beta$ as a discrete mixture ($\pi = 0.5$) of a Dirac delta and a Cauchy with scale $s = 1/2$, with $\sigma_y^2 = 1$.

[Figure: marginal spike-and-slab prior on $\beta$, a mixture ($\pi = 0.5$) of a Dirac delta at zero and a Cauchy slab with scale $s = 1/2$]
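
The figure can be reproduced approximately by simulation. A minimal sketch of drawing from this marginal prior (with $\pi = 0.5$ and $\sigma_y^2 = 1$): for each draw, we flip a coin to decide between spike and slab and, for the slab, first draw $\tau^2$ from its Inverse-Gamma prior and then $\beta \mid \tau^2 \sim \mathcal{N}(0, \tau^2)$; the non-zero draws should then follow a Cauchy distribution with scale $s = 1/2$:

set.seed(1)
nr_draws <- 100000
s <- 1/2

# spike or slab? (pi = 0.5, and sigma2_y = 1 as in the figure)
is_slab <- rbinom(nr_draws, 1, 0.5)

# scale-mixture representation of the Cauchy slab:
# tau2 ~ Inverse-Gamma(1/2, s^2/2) and beta | tau2 ~ N(0, tau2)
tau2 <- 1 / rgamma(nr_draws, shape = 1/2, rate = s^2 / 2)
beta <- is_slab * rnorm(nr_draws, 0, sqrt(tau2))

# the non-zero draws are (approximately) Cauchy with scale s
quantile(beta[is_slab == 1], c(0.25, 0.75))  # roughly -0.50 and 0.50
qcauchy(c(0.25, 0.75), scale = s)            # exactly -0.50 and 0.50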

The idea behind this specification is to allow the regression weight $\beta$ to be exactly zero. Using Gibbs sampling, we will arrive at $p(\pi \mid \mathbf{y})$, which gives the posterior probability of the parameter $\beta$ being zero ($\pi = 0$) or non-zero ($\pi = 1$). We continue the prior specification with

$$\pi \mid \theta \sim \text{Bernoulli}(\theta) \, , \quad \theta \sim \text{Beta}(a, b) \, , \quad \sigma_e^2 \sim \text{Inverse-Gamma}\left(\alpha_1, \alpha_2\right) \, ,$$
where we set $a = b = 1$ and $\alpha_1 = \alpha_2 = 0.01$. We can visualize the relations between all random variables in a DAG; see the figure below. Nodes with a grey shadow are observed or set by us, while white nodes denote random variables.

Using $d$-separation as introduced in the previous section, we note that this larger graph is basically a collection of DAGs (b) and (c). This helps us see that the joint probability distribution factors:

$$p(\mathbf{y}, \beta, \pi, \theta, \tau^2, \sigma_e^2) = p(\mathbf{y} \mid \beta, \sigma_e^2) \, p(\beta \mid \pi, \tau^2) \, p(\pi \mid \theta) \, p(\theta) \, p(\tau^2) \, p(\sigma_e^2) \, ,$$
where we have suppressed conditioning on the hyperparameters $a = b = 1$, $\alpha_1 = \alpha_2 = 0.01$, $s = 1/2,$ the predictor variables $X$, and the variance of the outcome $\sigma_y^2$.

For the Gibbs sampler, we need the conditional posterior distribution of each parameter given the data and all other parameters. Using the conditional independence structure of the graph, this results in the following conditional distributions:

$$p(\theta \mid \pi) \, , \quad p(\tau^2 \mid \beta, \pi) \, , \quad p(\sigma_e^2 \mid \mathbf{y}, \beta) \, , \quad p(\beta \mid \mathbf{y}, \pi, \tau^2, \sigma_e^2) \, , \quad p(\pi \mid \beta, \theta, \tau^2) \enspace .$$
In the next sections, we derive these conditional posterior distributions in turn. Since the single predictor case is slightly simpler to follow, we focus on it. However, the generalization to the multiple predictor setting is relatively straightforward, and I will sketch it afterwards.

Conditional posterior $p(\theta \mid \pi)$

We expand:

where $B$ is the beta function, and where we realize that the numerator is the kernel of a Beta distribution and the denominator is its normalizing constant. Thus, the posterior is again a Beta distribution:

$$\theta \mid \pi \sim \text{Beta}\left(a + \pi, \, b + 1 - \pi\right) \enspace .$$
As we can see, the conditional posterior of $\theta$ only depends on $\pi$. That means, however, that we can never get much information about this parameter, as $\pi$ can only be 0 or 1, and so the Beta distribution can only become $\text{Beta}(2, 1)$ or $\text{Beta}(1, 2)$ with a uniform prior $a = b = 1$. The posterior mean of $\theta$ can thus never become larger than $2/3$ or smaller than $1/3$.
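
A quick numerical check of this bound, assuming the $\text{Beta}(a + \pi, \, b + 1 - \pi)$ form given above (which is also how the update is implemented in the code further below):

# the posterior mean of a Beta(alpha, beta) distribution is alpha / (alpha + beta)
a <- 1
b <- 1
(a + 1) / ((a + 1) + b)  # posterior mean given pi = 1: 2/3
a / (a + (b + 1))        # posterior mean given pi = 0: 1/3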

Conditional posterior $p(\tau^2 \mid \beta, \pi)$

The conditional posterior on $\tau^2$ also depends on $\pi$ because conditioning on $\beta$ means conditioning on a collider, inducing the dependency. We expand:

To make the notation less cluttered, we will call the normalizing constant in this and all following derivations $Z$. Note that terms that do not depend on the parameter of interest in the numerator cancel, as the same terms appear in the normalizing constant. Further note that $\pi$ can be either 0 or 1. We first tackle the $\pi = 1$ case and write

where $\Gamma$ is the gamma function. Absorbing everything that does not depend on $\tau^2$ into the normalizing constant, we write:

which is a new inverse Gamma distribution:

$$\tau^2 \mid \beta, \pi = 1 \sim \text{Inverse-Gamma}\left(\frac{1}{2} + \frac{1}{2}, \; \frac{s^2}{2} + \frac{\beta^2}{2\sigma_y^2}\right) \enspace .$$
On the other hand, if $\pi = 0$, then $\beta = 0$ and we simply sample from the prior:

$$\tau^2 \sim \text{Inverse-Gamma}\left(\frac{1}{2}, \, \frac{s^2}{2}\right) \enspace .$$
Because the derivation is very similar, we look at the conditional posterior $p(\sigma_e^2 \mid y, \beta)$ next.

Conditional posterior $p(\sigma_e^2 \mid y, \beta)$

Again writing the normalizing constant as $Z$, we expand:

which looks very similar to the conditional posterior on $\tau^2$. In fact, using the same tricks as above — absorbing terms that do not depend on $\sigma_e^2$ into $Z$, and putting terms together — we write:

which is again an inverse Gamma distribution:

$$\sigma_e^2 \mid \mathbf{y}, \beta \sim \text{Inverse-Gamma}\left(\alpha_1 + \frac{n}{2}, \; \alpha_2 + \frac{1}{2}\sum_{i=1}^n \left(y_i - \beta x_i\right)^2\right) \enspace .$$
Contrasting this derivation with the one above, we note something interesting. Our belief about the variance $\sigma_e^2$ gets updated using the $n$ data points $\mathbf{y}$, whereas our belief about $\tau^2$ gets updated using only $\beta$. “In the Bayesian framework, the difference between data and parameters is fuzzy”, McElreath points out (2016, p. 34); or, put even more strongly, Dawid (1979, p.1): “[…] the distinction between data and parameters is largely irrelevant”.
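
Both conditional posteriors are inverse Gamma distributions. Base R has no built-in inverse Gamma sampler, but one is easy to obtain: if $X \sim \text{Gamma}(\alpha, \beta)$ in the shape-rate parametrization, then $1/X \sim \text{Inverse-Gamma}(\alpha, \beta)$. This is the trick used in the implementation further below; a minimal sketch (the helper name is mine):

# draw from an Inverse-Gamma(shape, scale) by inverting a Gamma draw
rinvgamma <- function(n, shape, scale) 1 / rgamma(n, shape = shape, rate = scale)

set.seed(1)
draws <- rinvgamma(100000, shape = 3, scale = 2)

# for shape > 1, the mean of an Inverse-Gamma(shape, scale) is scale / (shape - 1)
mean(draws)
2 / (3 - 1)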

Because the conditional posterior of $\pi$ is quite tricky, we continue with the conditional posterior of $\beta$.

Conditional posterior $p(\beta \mid y, \pi, \tau^2, \sigma_e^2)$

The conditional posterior of $\beta$ given $\pi = 0$ is easy: it is the Dirac delta function $\delta_0$, from which samples will always have value 0. The conditional posterior for $\pi = 1$ is a little more complicated to derive, but not by much. We start by writing:

where we again write the normalizing constant as $Z$. Expanding, we get:

We can again absorb terms that do not depend on $\beta$ into $Z$. We proceed:

We can further absorb the $\sum_{i=1}^n y_i^2$ term into $Z$ and put the $\beta^2$ terms together. We write:

If you have followed my previous blog post (see here), then you might guess what comes next: completing the square! We expand:

where we have absorbed the last term into the normalizing constant $Z$ because it does not depend on $\beta$. Note that this is the kernel of a Gaussian distribution, which completes our ordeal — which we both enjoy, admit it! — resulting in:

$$\beta \mid \mathbf{y}, \pi = 1, \tau^2, \sigma_e^2 \sim \mathcal{N}\left(\frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2 + \frac{\sigma_e^2}{\tau^2 \sigma_y^2}}, \; \frac{\sigma_e^2}{\sum_{i=1}^n x_i^2 + \frac{\sigma_e^2}{\tau^2 \sigma_y^2}}\right) \enspace .$$
Note again that we take samples from $\delta_0$ to always be zero.
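
Although the full Gibbs sampler is implemented further below, it may help to see this single update step in isolation. The sketch below computes the conditional posterior mean and variance of $\beta$ for the $\pi = 1$ case from the sufficient statistics, in the same form used later in the implementation (the function name is mine):

# one draw from the conditional posterior of beta given pi = 1
# (single predictor, no intercept, sigma2_y estimated by var(y))
sample_beta <- function(y, x, sigma2e, tau2) {
  prec <- sum(x^2) + sigma2e / (tau2 * var(y))  # same role as 'var_comb' below
  rnorm(1, mean = sum(x * y) / prec, sd = sqrt(sigma2e / prec))
}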

Conditional posterior $p(\pi \mid \beta, \theta, \tau^2)$: First attempt

Applying $d$-separation, the graph tells us that $\pi$ is independent of $\mathbf{y}$ given $\beta$:

This means we can expand in the following way:

where we could again cancel terms that were common to both the numerator and denominator. From this, it may come as a surprise that this conditional posterior should be harder than the other ones. Let’s tackle the cases where $\pi = 0$ and $\pi = 1$ in turn; the normalizing constant $Z$ is simply their sum.

We start with $\pi = 1$:

which looks perfectly reasonable. If $\pi = 0$, we have:

which looks peculiar. To see how this bites us, we note that:

The issue with this is as follows. Remember that, in the Gibbs sampler, we sample from this conditional posterior using previous samples of $\beta$, $\tau^2$, and $\theta$ — call them $\beta^{\small{\star}}$, $\tau^{2\small{\star}}$, and $\theta^{\small{\star}}$, respectively. The previous value $\beta^{\small{\star}}$ depends on the previous sample for $\pi$, denoted $\pi^{\small{\star}}$, such that if $\pi^{\small{\star}} = 0$ then $\beta^{\small{\star}} = 0$. If this happens in the sampling process — and it will — then we have to evaluate $\delta_0\left(\beta^{\small{\star}}\right)$ which puts infinite mass on $\beta^{\small{\star}} = 0$. This means that the ratio above will become zero, resulting in a new draw for $\pi$ that is $\pi^{\small{\star}} = 0$. However, this in turn means that the new value for $\beta$ will be $\beta^{\small{\star}} = 0$, and the whole spiel repeats. The Gibbs sampler thus gets forever stuck in the region $\beta = 0$, which means that the Markov chain will not converge to the joint posterior distribution.

Before we go back to the drawing board, one might suggest that we could simply set $\delta_0 = 1$, and then carry out the computation needed to draw from the conditional posterior of $\pi$. It runs into the following issue, however. Let $\xi$ be the chance parameter which governs the Bernoulli from which we draw $\pi$. With $\delta_0 = 1$, we have:

Now let us assume the previous draw of $\beta$ was $\beta^{\small{\star}} = 0$. For simplicity, let $\theta = \frac{1}{2}$ and $\sigma_y^2 = 1$. This leads to:

which can never become zero, regardless of the data! If $\tau^2 = 1$, for example, then $\xi = 0.285$. Recall that $\tau^2$ is the variance of the prior assigned to $\beta$. The only way for $\xi$ to become zero, i.e., to overwhelmingly support the model in which $\beta = 0$, is for $\tau^2$ to become very, very large. This is known as the Jeffreys-Bartlett-Lindley paradox5, and it makes sense: a model which assigns all possible values for $\beta$ similar plausibility will make poor predictions. If we had set $\tau^2$ by hand, then we could (artificially) achieve strong support for the null model (not that this is desirable!). However, we have assigned $\tau^2$ a prior, learning its value from the data, and so this will practically never happen. Thus, even though $\xi$ gets closer to zero as $\tau^2$ grows, we will effectively never find strong support for the model in which $\beta = 0$.
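
To see this numerically, here is a short sketch for the situation just described ($\beta^{\small{\star}} = 0$, $\theta = \frac{1}{2}$, $\sigma_y^2 = 1$, and $\delta_0$ pretended to evaluate to 1):

# chance parameter xi as a function of tau2, with delta_0 set to 1
xi <- function(tau2) {
  slab <- dnorm(0, mean = 0, sd = sqrt(tau2))  # density of beta_star = 0 under the slab
  slab / (slab + 1)
}

xi(1)     # 0.285, as in the text
xi(100)   # about 0.04
xi(1e6)   # xi gets close to zero only for absurdly large tau2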

In sum, we have tried two things to work with the Dirac delta function: (a) take it at face value, and (b) have it return 1 instead of Inf. The first approach led to our Gibbs sampler getting stuck, only sampling values $\beta = 0$. The second approach led to a situation in which we will always find bounded support for the model in which $\beta = 0$, regardless of the data. From this, we can easily draw the conclusion that working with the Dirac delta function is a pain! One might therefore be tempted to stop being so discrete: instead of $\delta_0$, use another Gaussian with a very small variance. This in fact solves the issue, because then instead of evaluating $\delta_0\left(\beta^{\small{\star}}\right)$, which puts infinite mass on $\beta^{\small{\star}} = 0$, we compute the density of $\beta^{\star}$ under a Gaussian distribution; even though it has small variance, it certainly will not return Inf. This is actually the approach of George & McCulloch (1993), who proposed the spike-and-slab prior setup under the name of Stochastic Search Variable Selection. Two issues remain: it may be difficult to choose this small variance in practice, and if it is very small, the Gibbs sampler will still be inefficient. Thus, we have to find another way to get rid of $\delta_0$.

Conditional posterior $p(\pi \mid \beta, \theta, \tau^2)$: Second attempt

In mathematics, it sometimes helps to write things down in a more complicated manner. In our case, we can do so by conditioning on $\mathbf{y}$ and $\sigma_e^2$, even though $\pi$ is independent of them given $\beta$. This might help because we get another likelihood term to play with. We again start with $\pi = 1$, yielding:

The case where $\pi = 0$ yields:

Argh! It did not work. Observe that again $\pi$ would be drawn from a Bernoulli, but with a more complicated chance parameter $\xi$ than above:

where the $\left(2\pi\sigma_e^2\right)^{-\frac{n}{2}}$ term cancels. Still, the denominator features the unholy Dirac delta function $\delta_0$ — the bane of our existence — and we run into the same issue as above.

Exhausted, we ask: should we not try to use a continuous spike instead of the discontinuous Dirac delta? No — let us not give up just yet! I was a bit surprised, however, by how difficult it was to find literature that talked about how to handle the Dirac spike. For example, in a review of Bayesian variable selection methods, O’Hara & Sillanpää (2009) mention the continuous but not the discontinuous spike-and-slab setting. I eventually did find a useful reference (Geweke, 1996) through the paper by George & McCulloch (1997). Motivated by the fact that this problem is indeed not impossible to solve, let’s get back to the drawing board!

Conditional posterior $p(\pi \mid \beta, \theta, \tau^2)$: Third attempt

You may be surprised to hear that the thing that impedes Bayesian inference most is actually of great help here: integration! Upon reflection, this makes sense. How do we get rid of $\beta$, which itself depends on the unholy Dirac delta function? We integrate it out! Again tackling the case for which $\pi = 0$ first, we write:

where because $p(\theta)$, $p(\sigma_e^2)$, and $p(\tau^2)$ feature both in the case where $\pi = 0$ and $\pi = 1$, they can be absorbed into $Z$. For $\pi = 1$, the integration bit is a tick more involved. Using the sum and product rules of probability, we write:

This integrand very much looks like the expression we had for the conditional posterior of $\beta$, but unnormalized. So we already know that we will get out the normalizing constant of the conditional posterior of $\beta$, in addition to some other stuff. We put everything that does not depend on $\beta$ outside of the integral:

where we now only focus on the integrand, call it $A$, because the margins of these pages are too small.6 For the integrand, we do the exact same computation as in the derivation of the conditional posterior on $\beta$, except that when “completing the square”, we cannot cancel the remaining term. Instead, we put it in front of the integral. We write:

where the second term of the last line is the normalizing constant of the conditional posterior on $\beta$. Let $\xi$ again be the chance parameter of the Bernoulli distribution from which we draw $\pi$. Then:

Note that this arduous adventure got rid of our nemesis, $\delta_0$. After this third and final attempt, we may take a short rest. Here is a visual break:

In the remainder of the blog post, we will (a) implement this in R, (b) generalize it to $p > 1$ variables, and (c) apply it to some real data.

Implementation in R

The code below implements the spike-and-slab regression for $p = 1$ predictors:

#' Spike-and-Slab Regression using Gibbs Sampling for p = 1 predictors
#'
#' @param y: vector of responses
#' @param x: vector of predictor values
#' @param nr_samples: indicates number of samples drawn
#' @param a1: parameter a1 of the Inverse-Gamma prior on the error variance sigma2e
#' @param a2: parameter a2 of the Inverse-Gamma prior on the error variance sigma2e
#' @param theta: parameter of prior over mixture weight
#' @param s: scale of the Cauchy slab (the Inverse-Gamma prior on tau2 has scale s^2/2)
#' @param a: parameter a of the Beta prior on theta
#' @param b: parameter b of the Beta prior on theta
#' @param nr_burnin: number of samples we discard ('burn-in' samples)
#'
#' @returns matrix of posterior samples from parameters pi, beta, tau2, sigma2e, theta
ss_regress_univ <- function(
  y, x, nr_samples = 4000, a1 = .01, a2 = .01,
  theta = 0.5, s = 1/2, a = 1, b = 1, nr_burnin = round(nr_samples / 4, 2)
  ) {
  
  # res is where we store the posterior samples
  res <- matrix(NA, nrow = nr_samples, ncol = 5)
  colnames(res) <- c('pi', 'beta', 'sigma2', 'tau2', 'theta')
  
  # take the MLE estimate as the values for the first sample
  m <- lm(y ~ x - 1)
  res[1, ] <- c(0, coef(m), var(predict(m) - y), 1, .5)
  
  # compute these quantities only once
  n <- length(y)
  var_y <- var(y)
  sum_xy <- sum(x*y)
  sum_x2 <- sum(x^2)
  
  # we start running the Gibbs sampler
  for (i in seq(2, nr_samples)) {
    
    # first, get all the values of the previous time point
    pi_prev <- res[i-1, 1]
    beta_prev <- res[i-1, 2]
    sigma2_prev <- res[i-1, 3]
    tau2_prev <- res[i-1, 4]
    theta_prev <- res[i-1, 5]
    
    ## Start sampling from the conditional posterior distributions
    ##############################################################
    
    # sample theta from a Beta
    theta_new <- rbeta(1, a + pi_prev, b + 1 - pi_prev)
    
    # sample sigma2e from an Inverse Gamma
    sigma2_new <- 1 / rgamma(1, a1 + n/2, a2 + sum((y - x*beta_prev)^2) / 2)
    
    # sample tau2 from an Inverse Gamma
    tau2_new <- 1 / rgamma(1, 1/2 + 1/2 * pi_prev, s^2/2 + beta_prev^2 / (2*var_y))
    
    # store this as a variable since it gets computed very often
    var_comb <- sum_x2 + sigma2_new/(tau2_new*var_y)
    
    # sample beta from a Gaussian
    beta_mu <- sum_xy / var_comb
    beta_var <- sigma2_new / var_comb
    beta_new <- rnorm(1, beta_mu, sqrt(beta_var))
    
    # compute chance parameter of the conditional posterior of pi (Bernoulli)
    l0 <- log(1 - theta_new)
    l1 <- (
      log(theta_new) - .5 * log(tau2_new*var_y) +
      sum_xy^2 / (2*sigma2_new*var_comb) + .5 * log(beta_var) 
    )
    
    # sample pi from a Bernoulli
    pi_new <- rbinom(1, 1, exp(l1) / (exp(l1) + exp(l0)))
    
    # add new samples
    res[i, ] <- c(pi_new, beta_new * pi_new, sigma2_new, tau2_new, theta_new)
  }
  
  # remove the first nr_burnin number of samples
  res[-seq(nr_burnin), ]
}

Example application I

Here, we simply simulate some data to see whether we can recover the coefficient.

gen_dat <- function(n = 100, b = 0, sigma2e = 1, seed = 1) {
  set.seed(seed)
  
  p <- length(b)
  X <- replicate(p, rnorm(n))
  y <- X %*% b + rnorm(n, 0, sqrt(sigma2e))
  
  list('y' = y, 'X' = X)
}
 
dat <- gen_dat(n = 100, b = 0.3, sigma2e = 1)
samples <- ss_regress_univ(dat$y, dat$X)
 
head(samples)
##      pi      beta    sigma2       tau2     theta
## [1,]  1 0.2906086 0.6597533 0.90971999 0.7347514
## [2,]  1 0.1211445 0.8227258 0.19379877 0.8147812
## [3,]  1 0.2482826 0.8256208 0.21308479 0.9398529
## [4,]  1 0.2698416 0.8924097 1.27511931 0.2272394
## [5,]  1 0.2569462 0.8575250 9.26546148 0.3319193
## [6,]  1 0.3302473 0.7589350 0.05923922 0.8465538

The samples for $\beta$ are from its marginal distribution, that is, from the distribution weighted by the uncertainty about each model. We can plot this model-averaged posterior:

[Figure: model-averaged posterior distribution of $\beta$]

In this case, we have two models:

where we, for simplicity, set the intercepts to 0. The dashed grey line indicates the posterior mean of $\beta$ conditional on the model $\mathcal{M}_1$. The dashed black line, on the other hand, indicates the posterior mean of $\beta$ where we have taken the uncertainty across models into account.

apply(samples, 2, mean)
##         pi       beta     sigma2       tau2      theta 
##  0.8420000  0.2292838  0.9456540 18.3779151  0.6181061
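
The posterior inclusion probability of $\beta$ is simply the posterior mean of $\pi$, and the inclusion odds discussed next follow directly from it:

pip <- mean(samples[, 'pi'])  # posterior inclusion probability, about 0.842
pip / (1 - pip)               # posterior inclusion odds, about 5.3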

From this, we can also compute the posterior inclusion odds, which are $\frac{0.842}{1 - 0.842} \approx 5.33$. This means that $\mathcal{M}_1$ is about five times more likely than $\mathcal{M}_0$. In the short primer on Bayesian inference above, we noted that computing posterior inclusion probabilities requires assigning a prior distribution to models. This brings with it some subtleties, and we will sketch the issue of assigning priors to models at the end of this blog post. In the next section, we generalize our spike-and-slab Gibbs sampling procedure to $p > 1$ variables.

Allowing $p > 1$ predictors

In the case of multiple predictors, the Gibbs sampling procedure changes slightly. We use independent priors over each predictor:

for all $i \in [1, \ldots, p]$. We again set $a = b = 1$ and $\alpha_1 = \alpha_2 = 0.01$. Note that $\tau^2$ and $\theta$ are common to all predictors. Let $\mathbf{y} \in \mathbb{R}^{n \times 1}$ be an $n$-dimensional column vector, $\mathbf{X} \in \mathbb{R}^{n \times p}$ an $n \times p$-dimensional matrix, and $\beta \in \mathbb{R}^{p \times 1}$ a $p$-dimensional column vector. With this notation, the residual sum of squares, which was $\sum_{i=1}^n (y_i - \beta x_i)^2$ previously, becomes $(\mathbf{y} - \mathbf{X}\beta)^T (\mathbf{y} - \mathbf{X}\beta)$. Similarly, where we previously had $\beta^2$ we now have $\beta^T\beta$.

In the next sections, I provide the updated conditional posterior distributions, as well as update the R code to handle $p > 1$ predictors. Compared to the univariable case, we simply have to replace the scalar by vector quantities, except for the conditional posteriors on $\pi_i$ — these again require an integration trick. We tackle the conditional posteriors in turn.

Conditional posterior $p(\theta \mid \pi)$

The conditional posterior of $\theta$ with $p$ predictors is:

$$\theta \mid \pi \sim \text{Beta}\left(a + \sum_{i=1}^p \pi_i, \; b + \sum_{i=1}^p \left(1 - \pi_i\right)\right) \enspace .$$
Note that while before the posterior mean of $\theta$ was bounded between $1/3$ and $2/3$, the posterior mean is now bounded between $\frac{1}{2 + p}$ and $\frac{1 + p}{2 + p}$.

Conditional posterior $p(\tau^2 \mid \beta, \pi)$

We again have two cases for $\tau^2$, but they are slightly different compared to the univariable case. We sample from the prior if all $\pi_i$’s are zero. Let $\pi = (\pi_1, \ldots, \pi_p)$ be the vector of mixture weights, and let $\mathbf{0}$ be a vector of zeros of length $p$, then:

Note that $\beta_i = 0$ if $\pi_i = 0$, and that we thus sample from the prior if all $\pi_i$’s are zero.

Conditional posterior $p(\sigma_e^2 \mid y, \beta)$

The conditional posterior on $\sigma_e^2$ changes only slightly:

Conditional posterior $p(\beta \mid y, \pi, \tau^2, \sigma_e^2)$

We could write the prior over all $\beta_i$’s as a multivariate Gaussian with a diagonal covariance matrix. With a Gaussian likelihood, this prior is conjugate, such that the conditional posterior on the regression weights $\beta$ is a multivariate Gaussian distribution. We sketch the derivation as it may be interesting in itself. The idea is to write:

Thus, we draw all $\beta_i$’s from:

where we then set the $\beta_i$’s to zero for which $\pi_i = 0$.

Conditional posterior $p(\pi \mid \beta, \theta, \tau^2)$

Because the individual $\pi_i$’s are conditionally independent given $\theta$, the update step is very similar to the univariable case. We compare the case where the $j^{\text{th}}$ element of $\beta$ is zero ($\pi_j = 0$) against the case where it is not zero ($\pi_j = 1$). The other indicator variables, call them $\pi_{-j}$, are whatever their current sample is. Therefore, we need to compute the probability with which $\pi_j = 1$ compared to $\pi_j = 0$, given the same values for $\pi_{-j}$. Let $\xi_j$ denote the probability that we sample $\pi_j = 1$, and let $\beta_{-j}$ denote the vector of regression weights without $\beta_j$, and for which $\beta_i = 0$ if $\pi_i = 0$. We cycle through each $\pi_j$ and compute:

We then draw $\pi_j$ from a Bernoulli with chance parameter $\xi_j$; we repeat this procedure for all $j = [1, \ldots, p]$ predictors. We start with the $\pi_j = 0$ case for which $\beta_j = 0$. We write:

where we have absorbed the terms that appear both in the posterior for $\pi_j = 0$ and $\pi_j = 1$ into $Z$. Note that in the expression above the prediction is done with only $p - 1$ predictor terms, some of which may be zero and others not, depending on the current sample. We could have written this equivalently with $\mathbf{X}\beta$ with the constraint that $\beta_j = 0$.

The expression for $\pi_j = 1$ requires integrating over $\beta_j$. We start with the expression that already has most of the terms in $Z$:

To single out $\beta_j$ from $\beta$, define

as the residuals of the regression $\mathbf{y}$ on $\mathbf{X}_{-j}$.7 Due to linearity, we can write

such that

which is a very similar integration problem as in the univariable case. The same trick holds here: we remove all terms that do not depend on $\beta_j$ from the integral, complete the square, and find the normalizing constant of a Gaussian. In fact, the steps are exactly the same as above, except that we have $z_i$ instead of $y_i$, and so we just give the solution:

The conditional posterior of $\pi_j = 0$ is therefore a Bernoulli distribution with (1 minus) chance parameter:

where $z_j$ changes depending on which $\beta_j$ we currently sample.

Implementation in R

The implementation changes only slightly:

#' Spike-and-Slab Regression using Gibbs Sampling for p > 1 predictors
#'
#' @param y: vector of responses
#' @param X: matrix of predictor values
#' @param nr_samples: indicates number of samples drawn
#' @param a1: parameter a1 of the Inverse-Gamma prior on the error variance sigma2e
#' @param a2: parameter a2 of the Inverse-Gamma prior on the error variance sigma2e
#' @param theta: parameter of prior over mixture weight
#' @param s: scale of the Cauchy slab (the Inverse-Gamma prior on tau2 has scale s^2/2)
#' @param a: parameter a of the Beta prior on theta
#' @param b: parameter b of the Beta prior on theta
#' @param nr_burnin: number of samples we discard ('burn-in' samples)
#'
#' @returns matrix of posterior samples from parameters pi, beta, tau2, sigma2e, theta
ss_regress <- function(
  y, X, a1 = .01, a2 = .01, theta = .5,
  a = 1, b = 1, s = 1/2, nr_samples = 6000, nr_burnin = round(nr_samples / 4, 2)
  ) {
  
  p <- ncol(X)
  n <- nrow(X)
  
  # res is where we store the posterior samples
  res <- matrix(NA, nrow = nr_samples, ncol = 2*p + 1 + 1 + 1)
  
  colnames(res) <- c(
    paste0('pi', seq(p)),
    paste0('beta', seq(p)),
    'sigma2e', 'tau2', 'theta'
  )
  
  # take the MLE estimate as the values for the first sample
  m <- lm(y ~ X - 1)
  res[1, ] <- c(rep(0, p), coef(m), var(predict(m) - y), 1, .5)
  
  # compute only once
  XtX <- t(X) %*% X
  Xty <- t(X) %*% y
  var_y <- as.numeric(var(y))
  
  # we start running the Gibbs sampler
  for (i in seq(2, nr_samples)) {
    
    # first, get all the values of the previous time point
    pi_prev <- res[i-1, seq(p)]
    beta_prev <- res[i-1, seq(p + 1, 2*p)]
    sigma2e_prev <- res[i-1, ncol(res) - 2]
    tau2_prev <- res[i-1, ncol(res) - 1]
    theta_prev <- res[i-1, ncol(res)]
    
    ## Start sampling from the conditional posterior distributions
    ##############################################################
    
    # sample theta from a Beta
    theta_new <- rbeta(1, a + sum(pi_prev), b + sum(1 - pi_prev))
    
    # sample sigma2e from an Inverse-Gamma
    err <- y - X %*% beta_prev
    sigma2e_new <- 1 / rgamma(1, a1 + n/2, a2 + t(err) %*% err / 2)
    
    # sample tau2 from an Inverse Gamma
    tau2_new <- 1 / rgamma(
      1, 1/2 + 1/2 * sum(pi_prev),
      s^2/2 + t(beta_prev) %*% beta_prev / (2*var_y)
    )
    
    # sample beta from multivariate Gaussian
    beta_cov <- qr.solve((1/sigma2e_new) * XtX + diag(1/(tau2_new*var_y), p))
    beta_mean <- beta_cov %*% Xty * (1/sigma2e_new)
    beta_new <- mvtnorm::rmvnorm(1, beta_mean, beta_cov)
    
    # sample each pi_j in random order
    for (j in sample(seq(p))) {
      
      # get the betas for which beta_j is zero
      pi0 <- pi_prev
      pi0[j] <- 0
      bp0 <- t(beta_new * pi0)
      
      # compute the z variables and the conditional variance
      xj <- X[, j]
      z <- y - X %*% bp0
      cond_var <- (sum(xj^2) + sigma2e_new/(tau2_new*var_y))
      
      # compute chance parameter of the conditional posterior of pi_j (Bernoulli)
      l0 <- log(1 - theta_new)
      l1 <- (
        log(theta_new) - .5 * log(tau2_new*var_y) +
        sum(xj*z)^2 / (2*sigma2e_new*cond_var) + .5 * log(sigma2e_new / cond_var)
      )
      
      # sample pi_j from a Bernoulli
      pi_prev[j] <- rbinom(1, 1, exp(l1) / (exp(l1) + exp(l0)))
    }
    
    pi_new <- pi_prev
    
    # add new samples
    res[i, ] <- c(pi_new, beta_new*pi_new, sigma2e_new, tau2_new, theta_new)
  }
  
  # remove the first nr_burnin number of samples
  res[-seq(nr_burnin), ]
}

We might want to run not only one Markov chain, as ss_regress does, but several; and we might also want to run them in parallel, which is achieved by the following wrapper:

library('doParallel')
registerDoParallel(cores = 4)
 
#' Calls the ss_regress function in parallel
#' 
#' @params same as ss_regress
#' @params nr_cores: numeric, number of cores to run ss_regress in parallel
#' @returns a list with nr_cores entries which are posterior samples
ss_regressm <- function(
  y, X, a1 = .01, a2 = .01, theta = .5,
  a = 1, b = 1, s = 1/2, nr_samples = 6000,
  nr_burnin = round(nr_samples / 4, 2), nr_cores = 4
  ) {
  
  samples <- foreach(i = seq(nr_cores), .combine = rbind) %dopar% {
    ss_regress(
      y = y, X = X, a1 = a1, a2 = a2, theta = theta,
      a = a, b = b, s = s, nr_samples = nr_samples,
      nr_burnin = nr_burnin
    )
  }
  
  samples
}

Example Application II

We use a data set on the (aggregated) attitudes of clerical employees in a large financial organization. We want to predict the overall rating based on the answers to six questions, which are our predictors:

data(attitude)
head(attitude)
##   rating complaints privileges learning raises critical advance
## 1     43         51         30       39     61       92      45
## 2     63         64         51       54     63       73      47
## 3     71         70         68       69     76       86      48
## 4     61         63         45       47     54       84      35
## 5     81         78         56       66     71       83      47
## 6     43         55         49       44     54       49      34

We $z$-standardize our variables, which forces the intercept to be zero. We do this because we have, for simplicity, neglected to include an intercept in our Gibbs sampling derivations.

std <- function(x) (x - mean(x)) / sd(x)
 
attitude_z <- apply(attitude, 2, std)
yz <- attitude_z[, 1]
Xz <- attitude_z[, -1]
 
samples <- ss_regressm(
  y = yz, X = Xz, a1 = .01, a2 = .01,
  a = 1, b = 1, s = 1/2, nr_cores = 4, nr_samples = 4000
)
 
post_means <- apply(samples, 2, mean)
 
res_table <- cbind(
  post_means[grepl('beta', names(post_means))],
  post_means[grepl('pi', names(post_means))]
)
rownames(res_table) <- colnames(Xz)
colnames(res_table) <- c('Post. Mean', 'Post. Inclusion')
 
round(res_table, 3)
##            Post. Mean Post. Inclusion
## complaints      0.601           0.998
## privileges     -0.011           0.319
## learning        0.211           0.692
## raises          0.058           0.425
## critical        0.007           0.286
## advance        -0.079           0.418

We can also visualize these results:

[Figure: posterior means and posterior inclusion probabilities of the attitude predictors]

Only for the predictor variable complaints can we be (nearly) certain that it should be included. There remains considerable uncertainty as to whether the other variables are associated, or not associated, with the outcome.

As an aside, there are also other options besides specifying independent priors over the $\beta$’s, which is what we have done in our setup. The most popular prior specification is based on Zellner’s (1986) $g$-prior:

$$\beta \mid g, \sigma_e^2 \sim \mathcal{N}\left(0, \, g \, \sigma_e^2 \left(\mathbf{X}^T\mathbf{X}\right)^{-1}\right) \, ,$$
where $g = \tau^2$ in our terminology and which does not have a diagonal covariance matrix but one that is scaled by $\left(\mathbf{X}^T\mathbf{X}\right)^{-1}$. Liang et al. (2008) propose various ways to deal with $g$. One of them, as discussed in this blog post, is to assign $g$ an inverse Gamma distribution which leads to a (multivariate) marginal Cauchy distribution on $\beta$. Som, Hans, & MacEachern (2016) point out an interesting problem that may arise when using, as we have done in this blog post, a single global $g$ or $\tau^2$ parameter. Li & Clyde (2018) unify various approaches in a general framework that extends to generalized linear models.8 In the next section, I briefly sketch some subtleties in assigning a prior to models.

Prior on Models

We have seen that the Gibbs sampler with spike-and-slab priors can yield model-averaged parameter estimates as well as posterior inclusion probabilities. However, in the first section of this blog post, I have pointed out that this is only possible once we assign priors to models. Have we done so? Yes, albeit implicitly. We have $2^p$ possible models, where a model simply indexes which of the $\pi_i$’s equal 1 and which equal 0. For example, the model with zero predictors has $\pi = \mathbf{0}$, whereas the model which includes all predictors has $\pi = \mathbf{1}$. Thus, a prior assigned to the $\pi_i$’s constitutes a prior assigned to models. The independent spike-and-slab prior specification described above yields:

$$p(\pi_1, \ldots, \pi_p \mid \theta) = \prod_{i=1}^p \theta^{\pi_i} \left(1 - \theta\right)^{1 - \pi_i} \enspace .$$
In the next two sections, we will discuss the implications of different choices for $p(\theta)$.

Uniform on Models, Non-uniform on Model Size

Let’s focus on the special case $\theta = \frac{1}{2}$ for a moment. This yields:

$$p(\pi_1, \ldots, \pi_p \mid \theta = \tfrac{1}{2}) = \prod_{i=1}^p \left(\frac{1}{2}\right)^{\pi_i} \left(\frac{1}{2}\right)^{1 - \pi_i} = \left(\frac{1}{2}\right)^p \, ,$$

the uniform prior over all models. It may be surprising to hear that this uniform prior over models induces a non-uniform prior on model size. To see this, let’s introduce the new random variable $K = \sum_{i=1}^p \pi_i$, which counts the number of active predictors and thus constitutes the size of a model. Now that we focus on $K$ instead of the individual $\pi_i$’s, we do not care which particular $\pi_i$’s are zero or not, but only how many of them are non-zero. As a result, there are ${p \choose k}$ possible ways of obtaining $K = k$ active predictors, and the prior distribution assigned to $K$ becomes:

$$p(K = k) = {p \choose k} \left(\frac{1}{2}\right)^k \left(\frac{1}{2}\right)^{p - k} = {p \choose k} \left(\frac{1}{2}\right)^p \, ,$$
which is a Binomial distribution with $\theta = \frac{1}{2}$, encoding the prior expectation that half of the predictor variables will be included. To further see that a uniform prior over models leads to a non-uniform prior over model size, assume that we have $p = 2$ predictors and thus $m = 2^2 = 4$ models. The uniform prior on models assigns a probability of $\frac{1}{4}$ to each of the models coded in terms of $\pi$ as $[(0, 0), (1, 0), (0, 1), (1, 1)]$. However, there is only ${2 \choose 0} = {2 \choose 2} = 1$ way to get a model that includes zero or both predictors, while there are ${2 \choose 1} = 2$ ways to get models that include one predictor. Thus, models that are of size one (i.e., either include $\beta_1$ or $\beta_2$) get assigned double the probability mass of models that include zero or both predictors; for a visual illustration, see the figure below.

[Figure: prior probability of models and of model size under a uniform prior on models ($p = 2$, $\theta = 1/2$)]
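
We can verify this in R: with $\theta = \frac{1}{2}$ and $p = 2$, the implied prior on model size $K$ is $\text{Binomial}(2, \frac{1}{2})$:

# prior on model size K implied by a uniform prior over the four models (p = 2)
dbinom(0:2, size = 2, prob = 0.5)
## [1] 0.25 0.50 0.25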

Uniform on Model Size, Non-uniform on Models

We may be uncomfortable with the prior expectation that half of the variables are included, i.e., that $\theta = \frac{1}{2}$. In our spike-and-slab prior specification above, we have instead assigned $\theta$ a Beta prior. This leads to:

$$p(\pi_1, \ldots, \pi_p) = \int_0^1 \prod_{i=1}^p \theta^{\pi_i} \left(1 - \theta\right)^{1 - \pi_i} \, p(\theta) \, \mathrm{d}\theta = \frac{B\left(a + \sum_{i=1}^p \pi_i, \; b + p - \sum_{i=1}^p \pi_i\right)}{B(a, b)} \, ,$$
where we have recognized the integrand as the kernel of a Beta distribution.

We can again study the implied prior on model size. Using the same intuition as above, the distribution assigned to $K$ becomes:

$$p(K = k) = {p \choose k} \frac{B\left(a + k, \; b + p - k\right)}{B(a, b)} \, ,$$
which is not a Binomial but a Beta-binomial distribution. Assuming again that we have $p = 2$ predictors and thus $m = 2^2 = 4$ models, and that $a = b = 1$ as above, this setup induces a uniform distribution over $K$:

dbetabin <- function(p, k, a = 1, b = 1) {
  choose(p, k) * beta(a + k, b + p - k) / beta(a, b)
}
 
dbetabin(2, c(0, 1, 2))
## [1] 0.3333333 0.3333333 0.3333333

Conversely, this implies a non-uniform prior over models. In particular, this prior setup assigns more mass to extremely sparse or extremely dense models. To see this, note again that there is only ${2 \choose 0} = {2 \choose 2} = 1$ way to get a model that includes zero or both predictors, while there are ${2 \choose 1} = 2$ ways to get a model that includes one predictor. Thus, models that are of size one (i.e., either include $\beta_1$ or $\beta_2$) get assigned only half as much probability mass as models that include zero or both predictors; for a visual illustration, see the figure below.

[Figure: prior probability of models and of model size under the Beta-binomial setup ($p = 2$, $a = b = 1$)]

Especially with a large number of predictors, we might be wary of the assumption that the model which includes no predictors and the model which includes all predictors are the most likely models a priori.9 We can think of priors assigned to models and model size as formalizing how sparse we think the part of the world we are modeling is. The wonderful thing about using Bayesian statistics to quantify uncertainty is that these assumptions are out in the open. This by itself does not imply, however, that variable selection ceases to be a difficult and nuanced problem.

Conclusion

If you have stayed with me until the bitter end, awesome! We have covered a lot in this blog post. In particular, we have tackled the problem of variable selection using a Bayesian approach which allowed us to quantify and incorporate uncertainty about parameters as well as models. We have focused on linear regression with spike-and-slab priors and derived a Gibbs sampler for the single and multiple predictor case. Applying this to simulated and real data, we have seen how this leads to model-averaged parameter estimates, as well as uncertainty estimates about whether or not to include a particular predictor variable. Lastly, we have discussed the nuances of assigning priors to models. If you want to read up on any of these topics, I encourage you to check out the references below. Otherwise, hope to see you next month!


I would like to thank Don van den Bergh, Max Hinne, and Maarten Marsman for discussions about the Gibbs sampler, and Sophia Crüwell for comments on this blog post.


References

  • Lindley, D. (1991). Making Decisions (2nd ed.). New Jersey, US: Wiley.
  • George, E. I. (2000). The Variable Selection Problem. Journal of the American Statistical Association, 95(452), 1304-1308.
  • Clyde, M., & George, E. I. (2004). Model uncertainty. Statistical Science, 19(1), 81-94.
  • Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E. J. (2019). A conceptual introduction to Bayesian Model Averaging. doi: 10.31234/osf.io/wgb64.
  • Robert, C., & Casella, G. (2011). A short history of Markov chain Monte Carlo: Subjective recollections from incomplete data. Statistical Science, 26(1), 102-115.
  • McElreath, R. (2015). Statistical Rethinking: A Bayesian course with examples in R and Stan. London, UK: Chapman and Hall/CRC.
  • Matthews, R. (2001). Storks deliver babies (p = 0.008). Teaching Statistics, 22(2), 36-38.
  • Dawid, A. P. (2010). Beware of the DAG! In Proceedings of the NIPS 2008 Workshop on Causality. Journal of Machine Learning Research Workshop and Conference Proceedings, (6) 59–86.
  • Dawid, A. P. (1979). Conditional independence in statistical theory. Journal of the Royal Statistical Society: Series B (Methodological), 41(1), 1-15.
  • Casella, G., & George, E. I. (1992). Explaining the Gibbs sampler. The American Statistician, 46(3), 167-174.
  • George, E. I., & McCulloch, R. E. (1993). Variable selection via Gibbs Sampling. Journal of the American Statistical Association, 88(423), 881-889.
  • O’Hara, R. B., & Sillanpää, M. J. (2009). A review of Bayesian variable selection methods: what, how and which. Bayesian Analysis, 4(1), 85-117.
  • George, E. I., & McCulloch, R. E. (1997). Approaches for Bayesian variable selection. Statistica Sinica, 7(2), 339-373.
  • Geweke, J. (1996). Variable selection and model comparison in regression. In Bayesian Statistics 5: Proceedings of the 5th Valencia International Meeting, 1-30.
  • Zellner, A. (1986). On Assessing Prior Distributions and Bayesian Regression Analysis With g-Prior Distributions. In Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, 233-243. The Netherlands, Amsterdam: Elsevier.
  • Liang, F., Paulo, R., Molina, G., Clyde, M. A., & Berger, J. O. (2008). Mixtures of g-priors for Bayesian variable selection. Journal of the American Statistical Association, 103(481), 410-423.
  • Som, A., Hans, C. M., & MacEachern, S. N. (2016). A conditional Lindley paradox in Bayesian linear models. Biometrika, 103(4), 993-999.
  • Li, Y., & Clyde, M. A. (2018). Mixtures of g-priors in generalized linear models. Journal of the American Statistical Association, 113(524), 1828-1845.

Footnotes


  1. For a very concise overview of variable selection, see George (2000). For a good overview of model uncertainty, see Clyde & George (2004). For a conceptual introduction to model-averaging, see Hinne, Gronau, van den Bergh, & Wagenmakers (2019).

  2. For mathematical details, see for example Casella & George (1992). 

  3. Although I usually try to provide some historical context, this blog post is already quite long. To keep it short, and if you are interested, I recommend you read Robert & Casella (2011). 

  4. The actual symbol for conditional independence, introduced by Dawid (1979), differs from $\perp$ in that it has two vertical lines. However, MathJax does not have the correct symbol in its library. 

  5. Dennis Lindley also has a second paradox named after him, see here — which is a little tongue in cheek. 

  6. Two things. First, you really shouldn’t read these blog posts on your phone! Second, too small margins might remind you of an expression by Fermat, who used this as a justification for not giving a proof of his famous last theorem. I recently read an absolutely captivating book about Fermat’s last theorem which might interest you; see here.

  7. It is called regressing $\mathbf{y}$ on $\mathbf{X}$ because we project the response onto the predictors.

  8. The regression implementation in the BayesFactor R package is based on the model selection approach discussed in Liang et al. (2008), while the BAS R package and JASP use the framework described in Li & Clyde (2018). You might find it insightful to compare the analysis results we have gotten here with the results when using these packages. See this gist for a comparison. 

  9. It is generally unlikely that there are many large effects; Gelman uses what he calls the Piranha argument to justify this claim: if there were many large effects, then they would interfere with each other. 
