FAQs on mixed-effects models

I am dealing with nested data, and I remember from an article by Clark (1973) that nested data should be analysed using special models. I’ve looked into mixed-effects models, and I’ve reached a structure with random intercepts by subjects and by items. Is this fine?

In the early days, researchers would aggregate the data across these repeated measures to prevent the violation of the assumption of independence of observations, which is one of the most important assumptions in statistics. With the advent of mixed-effects models, researchers began accounting for these repeated measures using random intercepts and slopes. However, convergence problems led many researchers to remove random slopes, and this practice became widespread until, over the past few years, it was realised that random slopes are necessary to prevent an inflation of the Type I error due to the violation of the assumption of independence (Brauer & Curtin, 2018; Singmann & Kellen, 2019). Please see Table 17 in Brauer and Curtin (2018). For these reasons, models limited to random intercepts, such as the one you describe, are anti-conservative. To redress this problem, please consider including random slopes by participant for all between-items variables [e.g., (stimulus_condition | participant)], and random slopes by item for all between-participants variables [e.g., (extraversion | item)], as sketched below. Interaction terms should also have the corresponding slopes, except when the variables in the interaction vary within different units, that is, one between participants and one between items (Brauer & Curtin, 2018). Each of the random intercepts and random slopes included in the model should be noted in the main text, for instance using footnotes in the results table (see example).
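
As a minimal sketch of such a structure in lme4, using hypothetical names: a between-items predictor stimulus_condition, a between-participants predictor extraversion, a response RT, and a data frame mydata.

# Hypothetical example: random intercepts plus the random slopes described above
library(lme4)

fit <- lmer(
  RT ~ stimulus_condition * extraversion +
    # by-participant slope for the between-items variable
    (stimulus_condition | participant) +
    # by-item slope for the between-participants variable
    (extraversion | item),
  data = mydata
)

No random slope is specified for the interaction because one of its variables varies between participants and the other between items (Brauer & Curtin, 2018).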

I calculated the p values by comparing minimally-different models using the anova function. Is this fine?

Luke (2017) warns that the p values calculated by model comparison, which are based on likelihood ratio tests, can be anti-conservative. Therefore, the Kenward-Roger and the Satterthwaite methods are recommended instead (both available in other packages, such as lmerTest and afex); a sketch follows below.
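
As a tentative sketch, assuming a model like the one above (hypothetical variables and data frame):

# Satterthwaite degrees of freedom via lmerTest, which extends lme4's lmer()
library(lmerTest)

fit <- lmer(RT ~ stimulus_condition * extraversion +
              (stimulus_condition | participant) + (extraversion | item),
            data = mydata)

summary(fit)                       # Satterthwaite p values by default
anova(fit, ddf = "Kenward-Roger")  # Kenward-Roger (requires the pbkrtest package)

The afex package offers similar options, for instance through afex::mixed(..., method = "KR").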

The lme4 package only runs on one thread (CPU) but the computer has 8. Do you have any advice on making the model run on more threads? It’s taking a very long time. I’ve seen these two possible solutions online from 2018 (here and here), but I would welcome any advice you have, or your experience with either of these solutions.

From the information I have seen, both in the past and just now, intentionally parallelising (g)lmer would be very involved. There is certainly interest in it, as the resources you found show (also see here). However, the current information suggests to me that it is not possible.

Interestingly, some isolated cases of unintentional parallelisation have been documented, and the developers of the lme4 package were surprised by them, as they had not implemented such a feature (see here and here).

I think the best approach may be running your model(s) on a high-performance computing (HPC) cluster. Although this would not reduce the time required for each model, it would have two advantages: first, your own computers would not be tied up for days; second, you could run several models at the same time. I still have access to the HPC at my previous university, and it would be fine for me to send your model(s) there if that would help you. Feel free to let me know. Otherwise, I can see that your university has this facility too.

We took your advice and ran the model on a supercomputer – it took roughly 2.5 days, which is about what it took on my iMac and on a gaming laptop Vivienne has.

The model, however, didn’t converge. We have read that you can use allFit() to try the fit with all available optimizers. Do you have any experience using this? If so, where would it sit in the code for the model? How and where do I add it in to check all available optimizers, please?

I have attached my code in a txt file and the data in Excel for you to see, in case it is of any use.

The multi-optimizer check is indeed a way (albeit a tentative one) to probe convergence. Convergence has long been a fuzzy subject, as there are different standpoints depending on the degree of conservativeness sought by the analysts.

On page 124 of my thesis (https://osf.io/97u5c), you can find this multi-optimizer check (also see this blog post). All the code is available on OSF. More generally, I discuss the issue of convergence throughout the thesis.
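
In outline, and assuming a model already fitted with lmer or glmer (called fit here), the check could look as follows; allFit() ships with recent versions of lme4.

# Refit the same model with every available optimizer
library(lme4)

all_fits <- allFit(fit)

# Compare log-likelihoods and fixed effects across optimizers; if they agree
# closely, the original convergence warning is likely to be a false alarm
summary(all_fits)$llik
summary(all_fits)$fixef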

I have run the model with optimizer="nloptwrap" and algorithm="NLOPT_LN_BOBYQA", and received the following warning message (once the model had run):

In optwrap(optimizer, devfun, start, rho$lower, control = control, :
convergence code 5 from nloptwrap: NLOPT_MAXEVAL_REACHED: optimization stopped because maxeval (above) was reached.

Does this mean that the model didn’t converge? I’m only asking because I wasn’t given a statement saying it didn’t converge, as I was with Nelder_Mead, where the following was stated at the end of the summary table:

Optimizer (Nelder_Mead) convergence code: 4 (failure to converge in 10000 evaluations)
failure to converge in 10000 evaluations

Yes, that warning means the optimizer stopped before converging because it reached the maximum number of function evaluations. Please try increasing the max number of iterations.
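
A tentative way of doing so, assuming the model above and lme4's nloptwrap optimizer (use glmerControl instead for generalised models; the relevant control name can differ between optimizers, e.g., maxfun for Nelder_Mead):

# Raise the maximum number of evaluations passed to the optimizer
library(lme4)

fit <- lmer(
  RT ~ stimulus_condition * extraversion +
    (stimulus_condition | participant) + (extraversion | item),
  data = mydata,
  control = lmerControl(optimizer = "nloptwrap",
                        optCtrl = list(algorithm = "NLOPT_LN_BOBYQA",
                                       maxeval = 1e6))
)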

We increased the max number of iterations to 1e6 and then 1e7, and the model still didn’t converge. However, it did converge with maxeval=1e8.

I wanted to ask, please: do you know of any issues with the max iterations being this high and affecting the interpretability of the model? Or is it completely fine?

There are no side-effects to increasing the number of iterations (see Remedy 6 in Brauer & Curtin, 2018).
