More REML exercise


Last week I tried exercise 1 of the SAS(R) proc mixed with the R libraries lme4 and MCMCglmm. This week I aimed for exercise 2, but ended up redoing exercise 1 with nlme.
Exercise 2 gave me problems with library lme4, and the later parts of the exercise have special correlation structures which do not seem feasible with lme4. So library nlme came into the picture. However, while reading up on it, I found that Pinheiro and Bates (2000) used this same data in their book. I am not about to reproduce the book in a blog, so I have to check in more detail what I can blog about. So, for now, I am redoing exercise 1 with library nlme.

Running the analysis

The first part of the analysis is actually smooth sailing. The random effects specification is quite different from the other libraries (a rough lme4 equivalent is sketched after the summary output below), but it is pretty simple.
library(nlme)
m1 <- lme(Y ~ A + B + A*B, random = ~1|Block/A, data = sp)
summary(m1)
Linear mixed-effects model fit by REML
 Data: sp 
       AIC      BIC    logLik
  137.7618 145.7752 -59.88092

Random effects:
 Formula: ~1 | Block
        (Intercept)
StdDev:    7.899104

 Formula: ~1 | A %in% Block
        (Intercept) Residual
StdDev:    3.921982 3.059593

Fixed effects: Y ~ A + B + A * B 
             Value Std.Error DF   t-value p-value
(Intercept)  37.00  4.667411  9  7.927307  0.0000
A2            1.00  3.517318  6  0.284308  0.7857
A3          -11.00  3.517318  6 -3.127383  0.0204
B2           -8.25  2.163459  9 -3.813338  0.0041
A2:B2         0.50  3.059593  9  0.163420  0.8738
A3:B2         7.75  3.059593  9  2.533016  0.0321
 Correlation: 
      (Intr) A2     A3     B2     A2:B2 
A2    -0.377                            
A3    -0.377  0.500                     
B2    -0.232  0.308  0.308              
A2:B2  0.164 -0.435 -0.217 -0.707       
A3:B2  0.164 -0.217 -0.435 -0.707  0.500

Standardized Within-Group Residuals:
         Min           Q1          Med           Q3          Max 
-1.165347655 -0.544427403 -0.007998877  0.526246640  1.473815718 

Number of Observations: 24
Number of Groups: 
       Block A %in% Block 
           4           12 
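
For comparison with last week's lme4 attempt, the same random effects structure, a random intercept per block and one per whole plot (A within block), would look roughly like this in lme4. This is only a sketch under that assumption; I have not shown its output here.
library(lme4)
# rough lme4 equivalent of random = ~1|Block/A: a random intercept per
# block plus one per whole plot (A within block)
m1.lmer <- lmer(Y ~ A + B + A*B + (1 | Block) + (1 | Block:A), data = sp)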

Covariance

VarCorr(m1)
            Variance     StdDev  
Block =     pdLogChol(1)         
(Intercept) 62.395850    7.899104
A =         pdLogChol(1)         
(Intercept) 15.381944    3.921982
Residual     9.361111    3.059593

Likelihood

-2*logLik(m1)
'log Lik.' 119.7618 (df=9)
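
As a small check, the AIC reported in the summary is just this value plus twice the number of parameters (9 here).
-2*as.numeric(logLik(m1)) + 2*attr(logLik(m1), "df")
# 119.7618 + 18 = 137.7618, the AIC shown in the summary above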

ANOVA

anova(m1)

            numDF denDF  F-value p-value
(Intercept)     1     9 55.34418  <.0001
A               2     6  4.06957  0.0764
B               1     9 19.38873  0.0017
A:B             2     9  4.01929  0.0566

Predict A level 1

This is a problem. I went through the book and browsed the web, only to conclude that getting the mean is not difficult, but getting the associated standard error is not supported. Not because of any dislike of standard errors, but because it is an intrinsically difficult problem. I seem to remember reading, years ago, that lme4 had its MCMC functionality precisely for this reason. That does not help me, though, when I stubbornly try to get the standard error anyway.
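
As an aside, and only a sketch rather than part of the exercise: a package such as emmeans advertises support for lme fits, so something along these lines should give the marginal A means with standard errors directly, assuming it accepts this model.
library(emmeans)
# marginal means of A averaged over B, with standard errors (sketch,
# assuming emmeans handles this lme fit)
emmeans(m1, ~ A)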

Not a solution

After reading http://glmm.wikidot.com/faq, https://stat.ethz.ch/pipermail/r-sig-mixed-models/2010q2/003585.html and http://stackoverflow.com/questions/14358811/extract-prediction-band-from-lme-fit I drafted the code below. It does not really solve my problem, as it does not provide the level A1 mean without specifying a level for B.
newdat <- unique(sp[sp$A==1, c('A','B','Block')])
# population-level predictions (fixed effects only)
newdat$pred <- predict(m1, newdat, level = 0)
# design matrix of the fixed effects evaluated on the new data (Block dropped)
Designmat <- model.matrix(eval(eval(m1$call$fixed)[-2]), newdat[-3])
# prediction variance from the fixed-effects covariance matrix
predvar <- diag(Designmat %*% m1$varFix %*% t(Designmat))
newdat$SE <- sqrt(predvar)                 # SE of the predicted mean
newdat$SE2 <- sqrt(predvar + m1$sigma^2)   # SE2 adds the residual variance
newdat
   A B Block  pred       SE      SE2
1  1 1     1 37.00 4.667411 5.580846
4  1 1     2 37.00 4.667411 5.580846
7  1 1     3 37.00 4.667411 5.580846
10 1 1     4 37.00 4.667411 5.580846
13 1 2     1 28.75 4.667411 5.580846
16 1 2     2 28.75 4.667411 5.580846
19 1 2     3 28.75 4.667411 5.580846
22 1 2     4 28.75 4.667411 5.580846

Combining parameters

Level A1 is just a weighted sum of known parameters, each with its own variance, so mean and standard error can be combined by hand. It is far from clear, though, which level of variance is being used. This does reproduce the 4.54 which SAS has labelled ‘A1 mean broad’.
fixef(m1)%*%c(1,0,0,.5,0,0)
       [,1]
[1,] 32.875
sqrt(vcov(m1)[1,1]+ .25 *vcov(m1)[4,4]+vcov(m1)[1,4])
[1] 4.540329
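
The value can also be traced back to the variance components from VarCorr(m1) above, which at least shows which variances go in: the A1 mean averages over 4 blocks and 2 levels of B.
# check using the variance components printed by VarCorr(m1):
# Block and A-within-Block variances divided by 4 blocks, residual
# variance divided by the 8 observations on A1
sqrt(62.395850/4 + 15.381944/4 + 9.361111/8)
# approximately 4.540329, the standard error above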

Contrasts

The same result can be obtained with a different parameterization of the model and suitable contrasts. This also provides the ‘A1 mean broad’ of SAS.
contrasts(sp$B) <- contr.sum(levels(sp$B))
m2 <- lme(Y ~ A + B + A*B - 1, random = ~1|Block/A, data = sp)
summary(m2)

Linear mixed-effects model fit by REML
 Data: sp 
       AIC      BIC    logLik
  141.9207 149.9341 -61.96036

Random effects:
 Formula: ~1 | Block
        (Intercept)
StdDev:    7.899104

 Formula: ~1 | A %in% Block
        (Intercept) Residual
StdDev:    3.921982 3.059593

Fixed effects: Y ~ A + B + A * B - 1 
       Value Std.Error DF   t-value p-value
A1    32.875  4.540329  6  7.240665  0.0004
A2    34.125  4.540329  6  7.515975  0.0003
A3    25.750  4.540329  6  5.671395  0.0013
B1     4.125  1.081730 10  3.813338  0.0034
A2:B1 -0.250  1.529797 10 -0.163420  0.8734
A3:B1 -3.875  1.529797 10 -2.533016  0.0297
 Correlation: 
      A1     A2     A3     B1     A2:B1 
A2     0.757                            
A3     0.757  0.757                     
B1     0.000  0.000  0.000              
A2:B1  0.000  0.000  0.000 -0.707       
A3:B1  0.000  0.000  0.000 -0.707  0.500

Standardized Within-Group Residuals:
         Min           Q1          Med           Q3          Max 
-1.165347654 -0.544427403 -0.007998877  0.526246640  1.473815718 

Number of Observations: 24
Number of Groups: 
       Block A %in% Block 
           4           12 
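
The reason this works is the sum-to-zero coding assigned above: B1 is coded +1 and B2 as -1, so each A coefficient is that level's mean averaged over the two B levels.
# the coding behind this parameterization: B1 is +1, B2 is -1
contrasts(sp$B)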
