Quantifying uncertainty around R-squared for generalized linear mixed models


People love $R^2$. As such, when Nakagawa and Schielzeth published an article in the journal Methods in Ecology and Evolution earlier this year, ecologists (amid increasing use of generalized linear mixed models (GLMMs)) rejoiced. Now there’s an R function that automates $R^2$ calculations for GLMMs fit with the lme4 package.

$R^2$ is usually reported as a point estimate of the variance explained by a model, using the maximum likelihood estimates of the model parameters and ignoring uncertainty around these estimates. Nakagawa and Schielzeth (2013) noted that it may be desirable to quantify the uncertainty around $R^2$ using MCMC sampling. So, here we are.

Background

$R^2$ quantifies the proportion of observed variance explained by a statistical model. When it is large (near 1), much of the variance in the data is explained by the model.
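
For an ordinary linear model this is the familiar quantity (standard background, stated here for reference):

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$

Extending the idea to GLMMs requires deciding which variance components count as "explained," which is where the two versions below come in.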

Nakagawa and Schielzeth (2013) present two $R^2$ statistics for generalized linear mixed models:

1) Marginal $R^2_{GLMM(m)}$, which represents the proportion of variance explained by the fixed effects:

$$R^2_{GLMM(m)} = \frac{\sigma^2_f}{\sigma^2_f + \sum_{l=1}^{u}\sigma^2_l + \sigma^2_e + \sigma^2_d}$$

where $\sigma^2_f$ represents the variance in the fitted values (on the link scale) based on the fixed effects:

$$\sigma^2_f = \mathrm{var}(\boldsymbol{X} \boldsymbol{\beta})$$

$\boldsymbol{X}$ is the design matrix of the fixed effects, and $\boldsymbol{\beta}$ is the vector of fixed effects estimates.

$\sum_{l=1}^{u}\sigma^2_l$ represents the sum of the variance components of all $u$ random effects, $\sigma^2_d$ is the distribution-specific variance (Nakagawa & Schielzeth 2010), and $\sigma^2_e$ represents the added (overdispersion) variance.

2) Conditional $R^2_{GLMM(c)}$, which represents the proportion of variance explained by the fixed and random effects combined:

$$R^2_{GLMM(c)} = \frac{\sigma^2_f + \sum_{l=1}^{u}\sigma^2_l}{\sigma^2_f + \sum_{l=1}^{u}\sigma^2_l + \sigma^2_e + \sigma^2_d}$$

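For the overdispersed Poisson (log link) case used below, Table 2 of Nakagawa & Schielzeth (2013) gives the distribution-specific variance as

$$\sigma^2_d = \ln\left(1 + \frac{1}{\lambda}\right)$$

where $\lambda$ is estimated by $\exp(\beta_0)$ from the intercept-only model; this is the log(1 + 1/exp(fixef(m0))) term that appears in the code below.
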
Point-estimation of $R^2_{GLMM}$

Here, I’ll follow the example of an overdispersed Poisson GLMM provided in the supplement to Nakagawa & Schielzeth, available here. This is their most complicated example; the simpler ones ought to be relatively straightforward for those who are interested in normal or binomial GLMMs.

# First, simulate data (code from Nakagawa & Schielzeth 2013): 
# 12 different populations n = 960
Population <- gl(12, 80, 960)

# 120 containers (8 individuals in each container)
Container <- gl(120, 8, 960)

# Sex of the individuals. Uni-sex within each container (individuals are
# sorted at the pupa stage)
Sex <- factor(rep(rep(c("Female", "Male"), each = 8), 60))

# Habitat at the collection site: dry or wet soil (four individuals from
# each Habitat in each container)
Habitat <- factor(rep(rep(c("dry", "wet"), each = 4), 120))

# Food treatment at the larval stage: special food ('Exp') or standard
# food ('Cont')
Treatment <- factor(rep(c("Cont", "Exp"), 480))

# Data combined in a dataframe
Data <- data.frame(Population = Population,
    Container = Container, Sex = Sex,
    Habitat = Habitat, Treatment = Treatment)

# Subset the design matrix (only females express colour morphs)
DataF <- Data[Data$Sex == "Female", ]

# random effects
PopulationE <- rnorm(12, 0, sqrt(0.4))
ContainerE <- rnorm(120, 0, sqrt(0.05))

# generation of response values on link scale (!) based on fixed effects,
# random effects and residual errors
EggLink <- with(DataF,
                  1.1 +
                  0.5 * (as.numeric(Treatment) - 1) +
                  0.1 * (as.numeric(Habitat) - 1) +
                  PopulationE[Population] +
                  ContainerE[Container] +
                  rnorm(480, 0, sqrt(0.1)))  # adds overdispersion

# data generation (on data scale!) based on Poisson distribution
DataF$Egg <- rpois(length(EggLink), exp(EggLink))

# save data (to current work directory)
write.csv(DataF, file = "BeetlesFemale.csv", row.names = F)
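
Before fitting any models, it's worth a quick look at the simulated data (a small sanity check, not part of the original supplement):

# 480 females: Population, Container, Sex, Habitat, Treatment, and Egg
str(DataF)

# counts are right-skewed, as expected for an overdispersed Poisson response
hist(DataF$Egg)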

Having simulated a dataset, we can calculate the $R^2$ point estimates, using the lme4 package to fit the models.

## Now, calculate R-squared (code from Nakagawa & Schielzeth 2013)
library(arm)
library(lme4)

# Clear memory
rm(list = ls())

# Read fecundity data (Poisson, available for females only)
Data <- read.csv("BeetlesFemale.csv")

# Create an observation-level random effect ("Unit") that allows
# estimating additive overdispersion
Unit <- factor(1:length(Data$Egg))

# Fit null model without fixed effects (but including all random effects);
# glmer() is the current lme4 interface for fitting GLMMs
m0 <- glmer(Egg ~ 1 + (1 | Population) + (1 | Container) + (1 | Unit),
    family = "poisson", data = Data)

# Fit alternative model including fixed and all random effects
mF <- glmer(Egg ~ Treatment + Habitat + (1 | Population) + (1 | Container) +
    (1 | Unit), family = "poisson", data = Data)

# View model fits for both models
summary(m0)
summary(mF)

# Extract the fitted values due to the fixed effects alone: fixef()
# returns the fixed-effect coefficients, and model.matrix() returns the
# fixed-effect design matrix
Fixed <- fixef(mF)[2] * model.matrix(mF)[, 2] +
         fixef(mF)[3] * model.matrix(mF)[, 3]

# Calculation of the variance in fitted values
VarF <- var(Fixed)

# An alternative way of getting the same result
VarF <- var(as.vector(fixef(mF) %*% t(model.matrix(mF))))

# R2GLMM(m) - marginal R2GLMM (see Equ. 29 and 30, and Table 2);
# fixef(m0) returns the estimate for the intercept of the null model
R2m <- VarF/(VarF + VarCorr(mF)$Container[1] +
               VarCorr(mF)$Population[1] + VarCorr(mF)$Unit[1] +
                log(1 + 1/exp(as.numeric(fixef(m0))))
            )

# R2GLMM(c) - conditional R2GLMM for the full model
R2c <- (VarF + VarCorr(mF)$Container[1] + VarCorr(mF)$Population[1])/
         (VarF + VarCorr(mF)$Container[1] + VarCorr(mF)$Population[1] +
           VarCorr(mF)$Unit[1] + log(1 + 1/exp(as.numeric(fixef(m0))))
         )

# Print marginal and conditional R-squared values
cbind(R2m, R2c)
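
As an aside, recent versions of lme4 can return all of the variance components at once, which may be less error-prone than indexing each one (the indexing above follows the original supplement):

# one row per variance component, with variances and standard deviations
as.data.frame(VarCorr(mF))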

Having stored our point estimates, we can now turn to Bayesian methods and generate $R^2$ posteriors.

Posterior uncertainty in $R^2_{GLMM}$

We need to fit two models to obtain the parameters needed for $R^2_{GLMM}$. First, we fit a model that includes all random effects but only an intercept as a fixed effect, to estimate the distribution-specific variance $\sigma^2_d$. Second, we fit a model that includes all of the random and fixed effects, to estimate the remaining variance components.

First I’ll clean up the data that we’ll feed to JAGS:

# Prepare the data
jags_d <- as.list(Data)[-c(2, 3)]  # redefine container, don't need sex
jags_d$nobs <- nrow(Data)
jags_d$npop <- length(unique(jags_d$Population))

# renumber containers from 1:ncontainer for ease of indexing
jags_d$Container <- rep(NA, nrow(Data))
for (i in 1:nrow(Data)) {
  jags_d$Container[i] <- which(unique(Data$Container) == Data$Container[i])
}
jags_d$ncont <- length(unique(jags_d$Container))

# Convert binary factors to 0's and 1's
jags_d$Habitat <- ifelse(jags_d$Habitat == "dry", 0, 1)
jags_d$Treatment <- ifelse(jags_d$Treatment == "Cont", 0, 1)
str(jags_d)
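
For what it's worth, the renumbering loop above can be replaced with a single vectorized call to match():

# same renumbering as the loop above, in one line
jags_d$Container <- match(Data$Container, unique(Data$Container))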

Then, fitting the intercept model:

# intercept model statement:
cat("
model{
  # priors on precisions (inverse variances)
  tau.pop ~ dgamma(0.01, 0.01)
  sd.pop <- sqrt(1/tau.pop)
  tau.cont ~ dgamma(0.01, 0.01)
  sd.cont <- sqrt(1/tau.cont)
  tau.unit ~ dgamma(0.01, 0.01)
  sd.unit <- sqrt(1/tau.unit)
  # prior on intercept
  alpha ~ dnorm(0, 0.01)

  # random effect of container
  for (i in 1:ncont){
    cont[i] ~ dnorm(0, tau.cont)
  }

  # random effect of population
  for (i in 1:npop){
    pop[i] ~ dnorm(0, tau.pop)
  }

  # likelihood
  for (i in 1:nobs){
    Egg[i] ~ dpois(mu[i])
    log(mu[i]) <- cont[Container[i]] + pop[Population[i]] + unit[i]
    unit[i] ~ dnorm(alpha, tau.unit) 
  }
}
    ", fill=T, file="pois_intercept.txt")

nstore <- 2000        # posterior draws to store per chain
nthin <- 20           # thinning interval
ni <- nstore * nthin  # total sampling iterations per chain
require(rjags)
int_mod <- jags.model("pois_intercept.txt",
                      data=jags_d[-c(2, 3)], # exclude unused data 
                      n.chains=3,
                      n.adapt=5000)

vars <- c("sd.pop", "sd.cont", "sd.unit", "alpha")
int_out <- coda.samples(int_mod, n.iter=ni, thin=nthin,
                        variable.names=vars)
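
Before using these draws, it's worth checking convergence; this isn't in the original post, but coda (loaded along with rjags) makes it quick:

# potential scale reduction factors near 1 suggest the chains have mixed
gelman.diag(int_out)

# effective sample sizes for each monitored parameter
effectiveSize(int_out)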

Then, we fit the full mixed model with all fixed and random effects:

# covariate model statement:
cat("
model{
  # priors on precisions (inverse variances)
  tau.pop ~ dgamma(0.01, 0.01)
  sd.pop <- sqrt(1/tau.pop)
  tau.cont ~ dgamma(0.01, 0.01)
  sd.cont <- sqrt(1/tau.cont)
  tau.unit ~ dgamma(0.01, 0.01)
  sd.unit <- sqrt(1/tau.unit)
  # priors on coefficients
  alpha ~ dnorm(0, 0.01)
  beta1 ~ dnorm(0, 0.01)
  beta2 ~ dnorm(0, 0.01)

  # random effect of container
  for (i in 1:ncont){
    cont[i] ~ dnorm(0, tau.cont)
  }

  # random effect of population
  for (i in 1:npop){
    pop[i] ~ dnorm(0, tau.pop)
  }

  # likelihood
  for (i in 1:nobs){
    Egg[i] ~ dpois(mu[i])
    log(mu[i]) <- cont[Container[i]] + pop[Population[i]] + unit[i]
    mu_f[i] <- alpha + beta1 * Treatment[i] + beta2 * Habitat[i]
    unit[i] ~ dnorm(mu_f[i], tau.unit) 
  }
}
    ", fill=T, file="pois_cov.txt")

cov_mod <- jags.model("pois_cov.txt",
                      data=jags_d,
                      n.chains=3,
                      n.adapt=5000)

vars2 <- c("sd.pop", "sd.cont", "sd.unit", "alpha", "beta1", "beta2")
cov_out <- coda.samples(cov_mod, n.iter=ni, thin=nthin,
                        variable.names=vars2)

For every MCMC draw, we can calculate $R^2_{GLMM}$, generating posteriors for both the marginal and conditional values.

# Step 1: variance in expected values (using fixed effects only)
require(ggmcmc)
d_int <- ggs(int_out)
d_cov <- ggs(cov_out)

alpha_cov <- subset(d_cov, Parameter == "alpha")$value
alpha_int <- subset(d_int, Parameter == "alpha")$value
b1_cov <- subset(d_cov, Parameter == "beta1")$value
b2_cov <- subset(d_cov, Parameter == "beta2")$value

Xmat <- cbind(rep(1, jags_d$nobs), jags_d$Treatment, jags_d$Habitat)
beta_mat <- cbind(alpha_cov, b1_cov, b2_cov)

# one fitted-value vector and one variance per posterior draw; note that
# ggs() stacks all three chains, so we loop over every stored draw rather
# than just nstore (otherwise varF would be shorter than the other
# variance components below, and R would silently recycle it)
n_draws <- nrow(beta_mat)
fixed_expect <- array(dim = c(n_draws, jags_d$nobs))
varF <- rep(NA, n_draws)
for (i in 1:n_draws) {
    fixed_expect[i, ] <- beta_mat[i, ] %*% t(Xmat)
    varF[i] <- var(fixed_expect[i, ])
}

# Step 2: calculate remaining variance components 
# among container variance
varCont <- subset(d_cov, Parameter == "sd.cont")$value^2
# among population variance
varPop <- subset(d_cov, Parameter == "sd.pop")$value^2
# overdispersion variance
varUnit <- subset(d_cov, Parameter == "sd.unit")$value^2
# distribution variance (Table 2, Nakagawa & Schielzeth 2013)
varDist <- log(1/exp(alpha_int) + 1)

# Finally, calculate posterior R-squared values 
# marginal
postR2m <- varF/(varF + varCont + varPop + varUnit + varDist)
# conditional
postR2c <- (varF + varCont + varPop)/
             (varF + varCont + varPop + varUnit + varDist)

# compare posterior R-squared values to point estimates
par(mfrow = c(1, 2))
hist(postR2m, main = "Marginal R-squared",
        ylab = "Posterior density",
        xlab = NULL, breaks = 20)
abline(v = R2m, col = "blue", lwd = 4)
hist(postR2c, main = "Conditional R-squared",
        ylab = "Posterior density",
        xlab = NULL, breaks = 25)
abline(v = R2c, col = "blue", lwd = 4)

This plot shows the posterior $R^2_{GLMM}$ distributions for both the marginal and conditional cases, with the point estimates generated with lme4 shown as vertical blue lines. Personally, I find it a bit more informative and intuitive to think of $R^2$ as a probability distribution that integrates uncertainty in its component parameters. That said, it is unconventional to represent $R^2$ in this way, which could compromise the ease with which this handy statistic can be explained to the uninitiated (e.g. first-year biology undergraduates). But, $R^2$ being a derived parameter, those wishing to generate a posterior can do so relatively easily.
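
If you want numbers rather than pictures, the posteriors can be summarized with credible intervals (a small addition, not in the original post):

# posterior medians and 95% credible intervals for both R-squared flavors
quantile(postR2m, probs = c(0.025, 0.5, 0.975))
quantile(postR2c, probs = c(0.025, 0.5, 0.975))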

Aside: correspondence between parameter estimates

Some may be wondering whether the parameter estimates generated with lme4 are comparable to those generated using JAGS. Having used vague priors, we would expect them to be similar. We can plot the Bayesian credible intervals (in blue), with the previous point estimates (as open black circles):

par(mfrow = c(1, 2))
require(mcmcplots)
caterplot(int_out, style = "plain")
caterpoints(c(fixef(m0),
              attr(VarCorr(m0)$Container, "stddev"),
              attr(VarCorr(m0)$Population, "stddev"),
              attr(VarCorr(m0)$Unit, "stddev")))
title("Intercept model BCIs & point estimates")

caterplot(cov_out, style = "plain")
caterpoints(c(fixef(mF),
              attr(VarCorr(mF)$Container, "stddev"),
              attr(VarCorr(mF)$Population, "stddev"),
              attr(VarCorr(mF)$Unit, "stddev")))
title("Covariate model BCIs & point estimates")
