More on fixed and random effects: Plotting and interpreting
[This article was first published on Minding the Brain, and kindly contributed to R-bloggers.]
In a recent post I showed how plotting model fits can help to interpret higher-order polynomial terms. The key comparison there was between models that did and did not include the higher-order fixed-effect terms. If you're going to use this strategy, remember that fixed and random effects capture some of the same variance, so if you remove some fixed effects to visualize their contribution, you also need to remove the corresponding random effects. In that previous post, those higher-order random effects were not included in the model (more on this in a minute), so I could just talk about the fixed effects. Here's how it would look if I started with a full model that included all fixed and random effects and compared it to a model with just the higher-order fixed effects removed…
Here are the models – they're the same as in the previous post, except that they include the higher-order random effects:
library(lme4)  # mixed-effects models (lmer)

# Full model: all fixed effects and all random time terms
m.full <- lmer(fixS ~ (ot1 + ot2 + ot3 + ot4) * obj * cond +
                 (1 + ot1 + ot2 + ot3 + ot4 | subj) +
                 (1 + ot1 + ot2 + ot3 + ot4 | subj:obj:cond),
               data = subset(data.ex, obj != "T"), REML = F)

# Same random effects, but the obj-by-cond interaction is limited to the
# linear and quadratic terms in the fixed effects
m.exSub <- lmer(fixS ~ (ot1 + ot2 + ot3 + ot4) * obj +
                  (ot1 + ot2 + ot3 + ot4) * cond +
                  (ot1 + ot2) * obj * cond +
                  (1 + ot1 + ot2 + ot3 + ot4 | subj) +
                  (1 + ot1 + ot2 + ot3 + ot4 | subj:obj:cond),
                data = subset(data.ex, obj != "T"), REML = F)

And here are the model fits (thinner lines represent the model without the higher-order fixed effects):
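(The figure itself isn't reproduced here, but a plot along these lines can be rebuilt from the fitted values. This is a minimal sketch assuming ggplot2 and a Time variable in data.ex; that variable name is an assumption, not from the original post.)

library(ggplot2)

# Sketch of the overlay plot; 'Time' is an assumed column name
data.sub <- subset(data.ex, obj != "T")
data.sub$fit.full  <- fitted(m.full)   # fits from the full model
data.sub$fit.exSub <- fitted(m.exSub)  # fits without higher-order fixed effects

ggplot(data.sub, aes(Time, fixS, shape = obj, linetype = cond)) +
  stat_summary(fun = mean, geom = "point") +
  stat_summary(aes(y = fit.full), fun = mean, geom = "line", linewidth = 1) +
  stat_summary(aes(y = fit.exSub), fun = mean, geom = "line", linewidth = 0.5)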
Not much of a difference, is there? If we also drop the higher-order random effects (thicker dashed lines in the graph below), then we can again see that the higher-order terms were capturing the Early-vs-Late difference:
# Both the higher-order fixed effects and the corresponding random effects removed
m.exSub2 <- lmer(fixS ~ (ot1 + ot2 + ot3 + ot4) * obj +
                   (ot1 + ot2 + ot3 + ot4) * cond +
                   (ot1 + ot2) * obj * cond +
                   (1 + ot1 + ot2 + ot3 + ot4 | subj) +
                   (1 + ot1 + ot2 | subj:obj:cond),
                 data = subset(data.ex, obj != "T"), REML = F)
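To recreate the thicker dashed lines, the reduced model's fits can be added as another layer on the plot (a sketch, with the same assumed variable names as above):

data.sub$fit.exSub2 <- fitted(m.exSub2)
last_plot() +
  stat_summary(aes(y = fit.exSub2), fun = mean, geom = "line",
               linewidth = 1.2, linetype = "dashed")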
This graphical example shows that random and fixed effects capture some of the same variance, and that point also matters for deciding which random effects to include in the model in the first place. This decision is somewhat complicated and I don't think the field has completely settled on an answer (I'll be revisiting the issue in future posts). I generally try to include as many time terms in the random effects as possible before running into convergence errors, as sketched below. An important rule of thumb is to always include the random effects that correspond to the fixed effects you plan to interpret. Since random and fixed effects capture some of the same variance, you can get spuriously significant fixed effects if you omit the corresponding random effects. Omitting them won't affect the fixed-effect parameter estimates themselves (the random effects are constrained to have a mean of 0), but it tends to shrink the standard errors of the corresponding fixed-effect estimates, which makes them look more statistically significant than they should.
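Here is a sketch of that back-off strategy (not code from the original post; isSingular() and the optinfo slot are from current versions of lme4):

# Start with the maximal random-effect structure
m.max <- lmer(fixS ~ (ot1 + ot2 + ot3 + ot4) * obj * cond +
                (1 + ot1 + ot2 + ot3 + ot4 | subj) +
                (1 + ot1 + ot2 + ot3 + ot4 | subj:obj:cond),
              data = subset(data.ex, obj != "T"), REML = F)

# Did it converge cleanly?
isSingular(m.max)                  # TRUE if the variance estimates are degenerate
m.max@optinfo$conv$lme4$messages   # NULL if there were no convergence warnings

# If not, drop the highest-order random slope and refit
m.less <- update(m.max,
                 . ~ . - (1 + ot1 + ot2 + ot3 + ot4 | subj:obj:cond)
                       + (1 + ot1 + ot2 + ot3 | subj:obj:cond))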
Here's how that looks in the context of the Early-vs-Late example:
# Coefficient table for the full model, with two-tailed p-values from the
# normal approximation to the t-statistics
coefs.full <- as.data.frame(coef(summary(m.full)))
coefs.full$p <- format.pval(2 * (1 - pnorm(abs(coefs.full[, "t value"]))))
coefs.full[grep("objU:cond", rownames(coefs.full), value = TRUE), ]

##                     Estimate Std. Error t value       p
## objU:condEarly     -0.004164    0.01709 -0.2436 0.80751
## ot1:objU:condEarly  0.065878    0.07745  0.8506 0.39497
## ot2:objU:condEarly -0.047568    0.04362 -1.0906 0.27545
## ot3:objU:condEarly -0.156184    0.05181 -3.0145 0.00257
## ot4:objU:condEarly  0.075709    0.03308  2.2888 0.02209

# Same fixed effects, but without the cubic and quartic random effects at
# the subject-by-object-by-condition level
m.ex <- lmer(fixS ~ (ot1 + ot2 + ot3 + ot4) * obj * cond +
               (1 + ot1 + ot2 + ot3 + ot4 | subj) +
               (1 + ot1 + ot2 | subj:obj:cond),
             data = subset(data.ex, obj != "T"), REML = F)
coefs.ex <- as.data.frame(coef(summary(m.ex)))
coefs.ex$p <- format.pval(2 * (1 - pnorm(abs(coefs.ex[, "t value"]))))
coefs.ex[grep("objU:cond", rownames(coefs.ex), value = TRUE), ]

##                     Estimate Std. Error t value       p
## objU:condEarly     -0.004164    0.01701 -0.2448 0.80664
## ot1:objU:condEarly  0.065878    0.07586  0.8685 0.38514
## ot2:objU:condEarly -0.047568    0.04184 -1.1370 0.25554
## ot3:objU:condEarly -0.156184    0.02327 -6.7119 1.9e-11
## ot4:objU:condEarly  0.075709    0.02327  3.2535 0.00114
As I said, the presence (m.full) vs. absence (m.ex) of the higher-order random effects does not affect the fixed-effect parameter estimates, but it does affect their standard errors. Without the cubic and quartic random effects, those fixed-effect standard errors are much smaller, which inflates the apparent statistical significance. In this example the cubic and quartic terms' p-values are below 0.05 either way, but the differences in the t-values are quite large (-3.01 vs. -6.71 for the cubic term), so it's not hard to imagine an effect that would look significant only without the corresponding random effect.
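If you want a formal test rather than eyeballing standard errors: since m.ex and m.full are nested ML fits that differ only in the higher-order random effects, a likelihood-ratio test compares them directly (a sketch using the models fit above):

# LRT: do the cubic and quartic random effects improve the fit?
anova(m.ex, m.full)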