
An adverse consequence of fitting "maximal" linear mixed models

[This article was first published on Shravan Vasishth's Slog (Statistics blog), and kindly contributed to R-bloggers.]
[Figure: Distribution of intercept-slope correlation estimates with 37 subjects, 15 items]

[Figure: Distribution of intercept-slope correlation estimates with 50 subjects, 30 items]
Should one always fit a full variance-covariance matrix (a “maximal” model) when one analyzes repeated measures data-sets using linear mixed models? Here, I present one reason why blindly fitting “maximal” models does not make much sense.

Let’s create a repeated measures data-set that has two conditions (we want to keep this example simple), with the following underlying generative process, whose parameters are estimated from the Gibson and Wu 2012 (Language and Cognitive Processes) data-set. The dependent variable is reading time (rt).

\begin{equation}\label{eq:ranslp2}
rt_{i} = \beta_0 + u_{0j} + w_{0k} + (\beta_1 + u_{1j} + w_{1k}) x_i + \epsilon_i
\end{equation}

\begin{equation}
\begin{pmatrix}
  u_{0j} \\
  u_{1j}
\end{pmatrix}
\sim
N\left(
\begin{pmatrix}
  0 \\
  0
\end{pmatrix},
\Sigma_{u}
\right)
\quad
\begin{pmatrix}
  w_{0k} \\
  w_{1k}
\end{pmatrix}
\sim
N\left(
\begin{pmatrix}
  0 \\
  0
\end{pmatrix},
\Sigma_{w}
\right)
\end{equation}


\begin{equation}\label{eq:sigmau}
\Sigma_u =
\left[ \begin{array}{cc}
\sigma_{u0}^2 & \rho_u \, \sigma_{u0} \sigma_{u1} \\
\rho_u \, \sigma_{u0} \sigma_{u1} & \sigma_{u1}^2
\end{array} \right]
\end{equation}

\begin{equation}\label{eq:sigmaw}
\Sigma_w =
\left[ \begin{array}{cc}
\sigma_{w0}^2 & \rho_w \, \sigma_{w0} \sigma_{w1} \\
\rho_w \, \sigma_{w0} \sigma_{w1} & \sigma_{w1}^2
\end{array} \right]
\end{equation}

\begin{equation}
\epsilon_i \sim N(0,\sigma^2)
\end{equation}

One difference from the Gibson and Wu data-set is that each subject is assumed to see each instance of each item (as in the old days of ERP research), but nothing hinges on this simplification; the results presented will hold regardless of whether or not we use a Latin square design (I tested this).

The parameters and sample sizes are assumed to have the following values:


* $\beta_0$ = 487
* $\beta_1$ = 61.5
* $\sigma$ = 544
* $\sigma_{u0}$ = 160
* $\sigma_{u1}$ = 195
* $\sigma_{w0}$ = 154
* $\sigma_{w1}$ = 142
* $\rho_u = \rho_w$ = 0.6
* 37 subjects
* 15 items

Next, we generate data 100 times using the above parameters and model specification, and each time estimate the parameters with lmer (see the code sketch at the end of this post). With the kind of sample size we have above, a maximal model does a terrible job of estimating the correlation parameters $\rho_u = \rho_w$ = 0.6.
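
Concretely, the intercept-slope correlation estimates can be read off each fitted model. Here is a sketch, assuming m is one maximal lmer fit with grouping factors subj and item (as in the simulation code at the end of this post):

```r
library(lme4)

## VarCorr() returns one variance-covariance matrix per grouping factor;
## the estimated correlation matrix is stored as an attribute
vc <- VarCorr(m)
attr(vc$subj, "correlation")[1, 2]  # estimate of rho_u
attr(vc$item, "correlation")[1, 2]  # estimate of rho_w
```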

However, if we generate data 100 times using 50 subjects instead of 37, and 30 items instead of 15, lmer is able to estimate the correlations reasonably well.

In both cases we fit “maximal” models; in the first case, it makes no sense to fit a “maximal” model, because the correlations tend to be over-estimated. The classical method for determining which model is appropriate, the generalized likelihood ratio test for finding the “best” model (the anova function in lme4), is discussed in the Pinheiro and Bates book, and would lead us to adopt a simpler model in the first case; a sketch of such a comparison follows.
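
As a sketch of that comparison in lme4, assuming a simulated data-set dat with the variables rt, x, subj, and item (as in the simulation code at the end of this post):

```r
library(lme4)

## maximal model: full variance-covariance matrices for subjects and items
m_max <- lmer(rt ~ x + (1 + x | subj) + (1 + x | item), data = dat)

## simpler model: same random effects, but the intercept-slope
## correlations are suppressed by the || syntax (x is numeric here)
m_simple <- lmer(rt ~ x + (1 + x || subj) + (1 + x || item), data = dat)

## generalized likelihood ratio test; anova() refits both models with ML
anova(m_simple, m_max)
```

If the test provides no evidence that the correlation parameters improve the fit, the simpler model is the one to adopt.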

Douglas Bates himself has something to say on this topic:

https://stat.ethz.ch/pipermail/r-sig-mixed-models/2014q3/022509.html

As Bates puts it:

“Estimation of variance and covariance components requires a large number of groups. It is important to realize this. It is also important to realize that in most cases you are not terribly interested in precise estimates of variance components. Sometimes you are but a substantial portion of the time you are using random effects to model subject-to-subject variability, etc. and if the data don’t provide sufficient subject-to-subject variability to support the model then drop down to a simpler model.”

Here is the code I used:
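
What follows is a minimal sketch consistent with the description above, not necessarily the original listing: it assumes MASS::mvrnorm for generating the correlated random effects and lme4::lmer for the maximal fits, and the variable names are illustrative.

```r
library(MASS)   # mvrnorm, for correlated random effects
library(lme4)   # lmer

nsim  <- 100
nsubj <- 37     # 50 in the second simulation
nitem <- 15     # 30 in the second simulation

## parameter values (estimated from Gibson and Wu 2012)
beta    <- c(487, 61.5)   # fixed intercept and slope
sigma_e <- 544            # residual standard deviation
sd_u    <- c(160, 195)    # by-subject intercept and slope sds
sd_w    <- c(154, 142)    # by-item intercept and slope sds
rho     <- 0.6            # intercept-slope correlation (rho_u = rho_w)

## variance-covariance matrices Sigma_u and Sigma_w
Sigma_u <- matrix(c(sd_u[1]^2,               rho * sd_u[1] * sd_u[2],
                    rho * sd_u[1] * sd_u[2], sd_u[2]^2), nrow = 2)
Sigma_w <- matrix(c(sd_w[1]^2,               rho * sd_w[1] * sd_w[2],
                    rho * sd_w[1] * sd_w[2], sd_w[2]^2), nrow = 2)

corrs <- matrix(NA, nsim, 2, dimnames = list(NULL, c("rho_u", "rho_w")))

for (s in 1:nsim) {
  u <- mvrnorm(nsubj, c(0, 0), Sigma_u)   # subject random effects
  w <- mvrnorm(nitem, c(0, 0), Sigma_w)   # item random effects
  ## fully crossed design: every subject sees every item in both conditions
  dat <- expand.grid(subj = 1:nsubj, item = 1:nitem, x = c(-0.5, 0.5))
  dat$rt <- beta[1] + u[dat$subj, 1] + w[dat$item, 1] +
            (beta[2] + u[dat$subj, 2] + w[dat$item, 2]) * dat$x +
            rnorm(nrow(dat), 0, sigma_e)
  dat$subj <- factor(dat$subj)
  dat$item <- factor(dat$item)
  ## maximal model; with small samples some fits may throw convergence
  ## warnings or produce degenerate (+/-1) correlation estimates
  m  <- lmer(rt ~ x + (1 + x | subj) + (1 + x | item), data = dat)
  vc <- VarCorr(m)
  corrs[s, ] <- c(attr(vc$subj, "correlation")[1, 2],
                  attr(vc$item, "correlation")[1, 2])
}

## distribution of the estimated correlations across simulations
summary(corrs)
hist(corrs[, "rho_u"], main = "Estimated rho_u", xlab = "correlation")
```

Rerunning with nsubj set to 50 and nitem set to 30 corresponds to the second simulation above.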

