advanced.procD.lm for pairwise tests and model comparisons

June 9, 2015
(This article was first published on geomorph, and kindly contributed to R-bloggers)

In geomorph 2.1.5, we decided to deprecate the functions pairwiseD.test and pairwise.slope.test.  Our reason for this was two-fold.  First, recent updates to R rendered these functions unstable.  They depended on the model.frame family of base functions, which changed with those updates.  We tried to find solutions, but the patched pairwise functions were worse than non-functioning ones, as they sometimes provided incorrect results (owing to strange sorting of rows in design matrices).  We realized we would need a complete overhaul of these functions if we wanted to maintain them.  Second, because advanced.procD.lm was already capable of pairwise tests and did not suffer from the same issues, we did not have to update the other functions; instead, we could help users understand how to use advanced.procD.lm.  Basically, this blog post is a much better use of time than trying again and again to fix broken functions.  Before reading on, if you have not already read the blog post on ANOVA in geomorph, it would probably be worth your time to read that post first.
There are three topics covered in this post: 1) advanced.procD.lm as a model-comparison test, 2) “full” randomization versus the randomized residual permutation procedure (RRPP), and 3) pairwise tests.  These topics are not independent.  Pairwise tests are nothing more than applying the same resampling experiment used for ANOVA to all pairwise test statistics, so understanding the first two topics makes the pairwise options of advanced.procD.lm straightforward to use.  (Note that the term ANOVA is used here to include both univariate and multivariate ANOVA.  The ANOVA results in geomorph functions are not obtained differently for univariate and multivariate data.)

Model comparisons: As explained in the ANOVA blog post, we use comparisons of model (sum of squared) error to calculate effects (sum of squares, SS).  An “effect” is described by the variables contained in one model and lacked in another.  For example, using the plethodon data,

> library(geomorph)
> data(plethodon)
> gpa <- gpagen(plethodon$land)
> Y <- two.d.array(gpa$coords)

> species <- plethodon$species
> site <- plethodon$site
> fit1 <- lm(Y ~ 1) # effects = intercept
> fit2 <- lm(Y ~ species) # effects = intercept + species

we have created two models, fit1 and fit2.  It is easiest to appreciate the difference between these models by looking at their model matrices.

> model.matrix(fit1)
   (Intercept)
1            1
2            1
3            1
4            1
5            1
6            1
7            1
8            1
9            1
10           1
11           1
12           1
13           1
14           1
15           1
16           1
17           1
18           1
19           1
20           1
21           1
22           1
23           1
24           1
25           1
26           1
27           1
28           1
29           1
30           1
31           1
32           1
33           1
34           1
35           1
36           1
37           1
38           1
39           1
40           1
attr(,"assign")
[1] 0
> model.matrix(fit2)
   (Intercept)                  Teyah
1            1                      0
2            1                      0
3            1                      0
4            1                      0
5            1                      0
6            1                      0
7            1                      0
8            1                      0
9            1                      0
10           1                      0
11           1                      1
12           1                      1
13           1                      1
14           1                      1
15           1                      1
16           1                      1
17           1                      1
18           1                      1
19           1                      1
20           1                      1
21           1                      0
22           1                      0
23           1                      0
24           1                      0
25           1                      0
26           1                      0
27           1                      0
28           1                      0
29           1                      0
30           1                      0
31           1                      1
32           1                      1
33           1                      1
34           1                      1
35           1                      1
36           1                      1
37           1                      1
38           1                      1
39           1                      1
40           1                      1
attr(,"assign")
[1] 0 1
attr(,"contrasts")
attr(,"contrasts")$species
[1] "contr.treatment"

The fit2 model is more “complex” but contains the elements of the fit1 model.  (It can also be seen that the species “effect” allows two means instead of one: the first species becomes the intercept [second column = 0] and the second species changes the value of the intercept [second column = 1].  More on this here.)  If one wanted to evaluate the species effect, one could compare the error produced by the two models, since “species” is contained in one but lacking in the other.  This is what is done with advanced.procD.lm, e.g.

> advanced.procD.lm(fit1, fit2)

ANOVA with RRPP

        df     SSE       SS      F      Z     P
        39 0.19694                           
species 38 0.16768 0.029258 6.6304 4.9691 0.001

This is how this ANOVA should be interpreted.  First, there are 40 salamanders in these data.  The intercept model (fit1) estimates one mean, so there are 40 – 1 = 39 degrees of freedom.  The species model (fit2) estimates two means, so there are 40 – 2 = 38 degrees of freedom.  From the “predicted” values (one multivariate mean or two multivariate means, depending on model), residuals are obtained, their Procrustes distances to predicted values are calculated (the procD part of the function name), and the squared distances are summed to find the sum of squared error (SSE) for each model.  Error can only decrease with more model effects.  By adding the species effect, the change in SSE was 0.029258, which is the effect SS.  For descriptive purposes, this can be converted to an F value by dividing the effect SS by the difference in degrees of freedom (1 df), then dividing this value by the SSE of the full model divided by its degrees of freedom (38 df).  This is a measure of effect size.  More importantly, the effect size can also be evaluated by the position of the observed SS in the distribution of random SS values.  Since we did not specify the number of permutations, the default is used (999, plus the observed case).  The Z score indicates that the observed SS is about 4.97 standard deviations from the expected value of 0 under the null hypothesis, with a probability of being exceeded of 0.001 (the P-value).  “ANOVA with RRPP” indicates that in every random permutation, the residuals of the “reduced” model are randomized.
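The arithmetic of this model comparison can be sketched in base R with simulated toy data (the numbers below are illustrative, not the plethodon results, and the code is not geomorph's implementation):

```r
# Toy illustration of effect SS and F from two models' error
set.seed(1)
Y <- matrix(rnorm(40 * 4), 40, 4)   # 40 "specimens", 4 shape variables
species <- gl(2, 20)                # two groups

fit.red  <- lm(Y ~ 1)               # reduced model: intercept only (39 df)
fit.full <- lm(Y ~ species)         # full model: adds species (38 df)

sse.red  <- sum(resid(fit.red)^2)   # SSE of reduced model
sse.full <- sum(resid(fit.full)^2)  # SSE of full model (can only be smaller)

ss.effect <- sse.red - sse.full                 # effect SS: drop in error
F.stat <- (ss.effect / 1) / (sse.full / 38)     # descriptive F value
```

The effect SS is always non-negative for nested models, because adding terms cannot increase the error.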

For comparison, here are the results from procD.lm on fit2, using RRPP

> procD.lm(fit2, RRPP=TRUE)

Type I (Sequential) Sums of Squares and Cross-products

Randomized Residual Permutation Procedure used

          df       SS        MS     Rsq      F      Z P.value
species    1 0.029258 0.0292578 0.14856 6.6304 4.8448   0.001
Residuals 38 0.167682 0.0044127                             
Total     39 0.196940    

Notice that the summary statistics are exactly the same (except Z and perhaps the P-value, as these depend on random outcomes).  This is the case whenever the reduced model is the most basic “null” model (containing only an intercept) and there is only one effect.  This time, let’s do the same thing without RRPP.

> procD.lm(fit2, RRPP=FALSE)

Type I (Sequential) Sums of Squares and Cross-products

Randomization of Raw Values used

          df       SS        MS     Rsq      F     Z P.value
species    1 0.029258 0.0292578 0.14856 6.6304 4.772   0.002
Residuals 38 0.167682 0.0044127                            
Total     39 0.196940 

Notice that the Z and P-values are so similar as to suggest that nothing different was done.  In this case, that is the truth: RRPP performed with residuals from a model containing only an intercept is tantamount to a “full” randomization of the observed values.  (This is explained better here.)
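This equivalence is easy to verify with a toy example in base R: the residuals of an intercept-only model are just the centered data, so permuting them (and adding back the mean) is the same as permuting the raw values.

```r
set.seed(2)
y <- rnorm(10)
r <- unname(resid(lm(y ~ 1)))   # residuals of intercept-only model = y - mean(y)
idx <- sample(10)               # one random permutation

# Permuted residuals plus the fitted mean reproduce a permutation of the raw data
all.equal(r[idx] + mean(y), y[idx])   # TRUE
```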

Full randomization versus RRPP: To better appreciate the difference between RRPP and full randomization, let’s make a different model

> fit3 <- lm(Y ~ species + site)

Now we have options!  First, let’s just use procD.lm and forget about model comparisons… sort of.  We will do this with both full randomization and RRPP.


> procD.lm(fit3, RRPP=FALSE)

Type I (Sequential) Sums of Squares and Cross-products

Randomization of Raw Values used

          df       SS       MS     Rsq      F       Z P.value
species    1 0.029258 0.029258 0.14856 10.479  4.7407   0.001
site       1 0.064375 0.064375 0.32688 23.056 10.1732   0.001
Residuals 37 0.103307 0.002792                              
Total     39 0.196940     
                                  
> procD.lm(fit3, RRPP=TRUE)

Type I (Sequential) Sums of Squares and Cross-products

Randomized Residual Permutation Procedure used

          df       SS       MS     Rsq      F       Z P.value
species    1 0.029258 0.029258 0.14856 10.479  4.7368   0.002
site       1 0.064375 0.064375 0.32688 23.056 11.9331   0.001
Residuals 37 0.103307 0.002792                              
Total     39 0.196940   

Notice that the test statistics are the same for both, but the effect sizes (Z scores) differ.  The species Z scores are similar to each other, and to what was observed before; the site Z scores differ slightly.  There are cases where one method might produce significant results and the other does not.  The reason is that procD.lm performs a series of model comparisons.  The first model has an intercept; the second model adds species; the third model adds site to the second model.  There are two model comparisons: first to second, and second to third.  Where the methods differ is that with RRPP, the residuals from the “reduced” model of each comparison are used; with full randomization, the residuals from the intercept model (only and always) are used.
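This sequence of model comparisons is exactly how Type I (sequential) SS arise.  A base-R sketch with toy univariate data (illustrative only; not geomorph's code):

```r
set.seed(3)
y <- rnorm(40)
A <- gl(2, 20)       # like species
B <- gl(2, 10, 40)   # like site, crossed with A

m0 <- lm(y ~ 1)      # intercept model
m1 <- lm(y ~ A)      # adds A
m2 <- lm(y ~ A + B)  # adds B

ss.A <- sum(resid(m0)^2) - sum(resid(m1)^2)   # comparison 1: intercept vs +A
ss.B <- sum(resid(m1)^2) - sum(resid(m2)^2)   # comparison 2: +A vs +A+B

# These are exactly the Type I (sequential) SS reported by anova()
anova(m2)[["Sum Sq"]][1:2]
```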

This gets even more complex with more effects added to the model; e.g., an interaction:

> fit4 <- lm(Y ~ species * site)
> procD.lm(fit4, RRPP=FALSE)

Type I (Sequential) Sums of Squares and Cross-products

Randomization of Raw Values used

             df       SS       MS     Rsq      F      Z P.value
species       1 0.029258 0.029258 0.14856 14.544 4.7214   0.001
site          1 0.064375 0.064375 0.32688 32.000 9.9451   0.001
species:site  1 0.030885 0.030885 0.15682 15.352 4.9743   0.001
Residuals    36 0.072422 0.002012                             
Total        39 0.196940    
                                  
> procD.lm(fit4, RRPP=TRUE)

Type I (Sequential) Sums of Squares and Cross-products

Randomized Residual Permutation Procedure used

             df       SS       MS     Rsq      F       Z P.value
species       1 0.029258 0.029258 0.14856 14.544  4.7512   0.001
site          1 0.064375 0.064375 0.32688 32.000 11.5108   0.001
species:site  1 0.030885 0.030885 0.15682 15.352  9.8574   0.001
Residuals    36 0.072422 0.002012                              
Total        39 0.196940   

It should be appreciated that because full randomization ignores the effects already added to the model (e.g., species and site, before the interaction is added), spurious results can occur: significant effects rendered non-significant, or non-significant effects rendered significant.  The reason advanced.procD.lm is useful is that it allows creativity, rather than the simple on/off switch for RRPP in procD.lm.
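For the curious, the core of RRPP can be sketched in a few lines of base R (a toy univariate example; geomorph's implementation differs in detail):

```r
set.seed(4)
y <- rnorm(30)
g <- gl(3, 10)

red  <- lm(y ~ 1)   # reduced model
full <- lm(y ~ g)   # full model

# Observed effect SS
ss.obs <- sum(resid(red)^2) - sum(resid(full)^2)

# RRPP: permute the reduced-model residuals, add them back to the
# reduced-model fitted values, and recompute the effect SS each time
ss.rand <- replicate(999, {
  y.star <- fitted(red) + sample(resid(red))
  sum(resid(lm(y.star ~ 1))^2) - sum(resid(lm(y.star ~ g))^2)
})

# P-value: the observed case counts as one permutation
p.value <- mean(c(ss.obs, ss.rand) >= ss.obs)
```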

Model comparisons redux: Here is a simple exercise.  Let’s do all logical model comparisons of the model fits we have thus far, using advanced.procD.lm.

> advanced.procD.lm(fit1, fit2)

ANOVA with RRPP

        df     SSE       SS      F      Z     P
        39 0.19694                            
species 38 0.16768 0.029258 6.6304 4.8135 0.001

> advanced.procD.lm(fit2, fit3)

ANOVA with RRPP

             df     SSE       SS      F      Z     P
species      38 0.16768                            
species+site 37 0.10331 0.064375 23.056 11.459 0.001

> advanced.procD.lm(fit3, fit4)

ANOVA with RRPP

                          df      SSE       SS      F      Z     P
species+site              37 0.103307                            
species+site+species:site 36 0.072422 0.030885 15.352 9.4351 0.001

The Z values in each test are approximately the same as those from procD.lm with RRPP.  The SS values are exactly the same!  (The F values are not, as the error SS changes in each case.)  So it might not yet be clear why advanced.procD.lm is useful.  Here is something that cannot be done with procD.lm.

> advanced.procD.lm(fit2, fit4)

ANOVA with RRPP

                          df      SSE      SS      F      Z     P
species                   38 0.167682                          
species+site+species:site 36 0.072422 0.09526 23.676 9.6226 0.001

The usefulness is that advanced.procD.lm can be used with any comparison of nested models.  The models do not have to differ by one effect.  Also, a “full” model evaluation can be done as

> advanced.procD.lm(fit1, fit4)

ANOVA with RRPP

                          df      SSE      SS      F      Z     P
                          39 0.196940                           
species+site+species:site 36 0.072422 0.12452 20.632 7.3273 0.001

Understanding that any two nested models can be compared, and that advanced.procD.lm uses RRPP exclusively, one can use the resampling experiment to perform pairwise comparisons for model effects that describe groups.
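A toy base-R illustration of a nested comparison spanning more than one effect (not geomorph's code): the joint SS from such a comparison equals the corresponding sequential SS combined.

```r
set.seed(5)
y <- rnorm(40)
A <- gl(2, 20)
B <- gl(2, 10, 40)

m.red  <- lm(y ~ A)       # reduced model
m.full <- lm(y ~ A * B)   # full model: adds both B and A:B at once

ss.joint <- sum(resid(m.red)^2) - sum(resid(m.full)^2)

# The joint SS equals the B and A:B rows of the sequential table combined
tab <- anova(m.full)[["Sum Sq"]]
all.equal(ss.joint, tab[2] + tab[3])   # TRUE
```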

Pairwise tests: When one or more model effects are factors (categorical), pairwise statistics can be calculated and statistically evaluated with advanced.procD.lm.  This is accomplished with the groups argument of the function.  E.g.,

> advanced.procD.lm(fit1, fit4, groups = ~species*site)
$anova.table

ANOVA with RRPP

                          df      SSE      SS      F      Z     P
                          39 0.196940                           
species+site+species:site 36 0.072422 0.12452 20.632 7.4109 0.001

$Means.dist
            Jord:Allo  Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.00000000 0.09566672 0.02432519  0.1013670
Jord:Symp  0.09566672 0.00000000 0.09193082  0.1069432
Teyah:Allo 0.02432519 0.09193082 0.00000000  0.0994980
Teyah:Symp 0.10136696 0.10694324 0.09949800  0.0000000

$Prob.Means.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.001      0.708      0.001
Jord:Symp      0.001     1.000      0.001      0.001
Teyah:Allo     0.708     0.001      1.000      0.001
Teyah:Symp     0.001     0.001      0.001      1.000

In addition to the ANOVA, the pairwise Procrustes distances between all possible means (as defined by groups) are calculated.  The P-values below them indicate the probability of finding a greater distance, by chance, in the resampling experiment.  Because fit1 contains only an intercept, the resampling experiment was a full randomization of shape values.  To account for species and site main effects, this analysis could be repeated with the model that contains the main effects but no interaction.  E.g.,

> advanced.procD.lm(fit3, fit4, groups = ~species*site)
$anova.table

ANOVA with RRPP

                          df      SSE       SS      F      Z     P
species+site              37 0.103307                            
species+site+species:site 36 0.072422 0.030885 15.352 9.9593 0.001

$Means.dist
            Jord:Allo  Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.00000000 0.09566672 0.02432519  0.1013670
Jord:Symp  0.09566672 0.00000000 0.09193082  0.1069432
Teyah:Allo 0.02432519 0.09193082 0.00000000  0.0994980
Teyah:Symp 0.10136696 0.10694324 0.09949800  0.0000000

$Prob.Means.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.014      0.999      0.589
Jord:Symp      0.014     1.000      0.597      0.001
Teyah:Allo     0.999     0.597      1.000      0.001
Teyah:Symp     0.589     0.001      0.001      1.000

This has a profound effect.  Many of the previously significant pairwise differences in means are no longer significant after accounting for general species and site effects.  One should always be careful when interpreting results to understand the null hypothesis.  The former test assumes a null hypothesis of no differences among means; the latter assumes a null hypothesis of no differences among means, given species and site effects.  These are two different things!
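For intuition, the $Means.dist entries are simply distances between group means.  A toy base-R sketch (Euclidean distances on simulated 2-D data, standing in for Procrustes distances on shape data; not geomorph's code):

```r
set.seed(6)
Y <- matrix(rnorm(40 * 2), 40, 2)              # toy 2-D "shape" data
grp <- interaction(gl(2, 20), gl(2, 10, 40))   # like species:site, 4 groups

# Group means: row sums per group divided by group sizes
means <- rowsum(Y, grp) / as.vector(table(grp))

dist(means)   # pairwise Euclidean distances between the four group means
```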

When using advanced.procD.lm, one can add covariates or other factors that might be extraneous sources of variation.  For example, if we wanted to repeat the last test but also account for body size (via log centroid size, log(CS)), the following could be done.  (Also notice from this example that making model fits beforehand is not necessary.)

> CS <- gpa$Csize # centroid size, from the gpagen fit above
> advanced.procD.lm(Y ~ log(CS) + species + site, ~ log(CS) + species*site, groups = ~species*site)
$anova.table

ANOVA with RRPP

                                  df      SSE       SS      F      Z     P
log(CS)+species+site              36 0.098490                            
log(CS)+species+site+species:site 35 0.068671 0.029819 15.198 9.6757 0.001

$Means.dist
            Jord:Allo  Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.00000000 0.09566672 0.02432519  0.1013670
Jord:Symp  0.09566672 0.00000000 0.09193082  0.1069432
Teyah:Allo 0.02432519 0.09193082 0.00000000  0.0994980
Teyah:Symp 0.10136696 0.10694324 0.09949800  0.0000000

$Prob.Means.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.013      1.000      0.591
Jord:Symp      0.013     1.000      0.584      0.001
Teyah:Allo     1.000     0.584      1.000      0.001
Teyah:Symp     0.591     0.001      0.001      1.000

The ANOVA and pairwise statistics change a bit (but the means do not), as log(CS) accounts for variation in shape in both models.  Also note that “~” is needed in all arguments that are formulas; this is essential for proper functioning.  (Technical note: this test is not quite appropriate, as the means are not appropriate.  This will be explained below, after further discussion about slopes.)

One can also compare slopes for a covariate among groups (or account for group slopes).  This involves comparing a model with a common slope to one allowing different slopes (factor-slope interactions).  E.g.,

> advanced.procD.lm(Y ~ log(CS) + species*site, ~ log(CS)*species*site, groups = ~species*site, slope = ~log(CS))
$anova.table

ANOVA with RRPP

                                                                                    df      SSE        SS      F      Z     P
log(CS)+species+site+species:site                                                   35 0.068671                             
log(CS)+species+site+log(CS):species+log(CS):site+species:site+log(CS):species:site 32 0.061718 0.0069531 1.2017 1.2817 0.099

$LS.Means.dist
            Jord:Allo  Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.00000000 0.09396152 0.02473040 0.10148562
Jord:Symp  0.09396152 0.00000000 0.08999162 0.10547891
Teyah:Allo 0.02473040 0.08999162 0.00000000 0.09949284
Teyah:Symp 0.10148562 0.10547891 0.09949284 0.00000000

$Prob.Means.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.605      0.871      0.639
Jord:Symp      0.605     1.000      0.629      0.629
Teyah:Allo     0.871     0.629      1.000      0.638
Teyah:Symp     0.639     0.629      0.638      1.000

$Slopes.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.0000000 0.2188238  0.1780151  0.1231082
Jord:Symp  0.2188238 0.0000000  0.2718850  0.2354029
Teyah:Allo 0.1780151 0.2718850  0.0000000  0.1390140
Teyah:Symp 0.1231082 0.2354029  0.1390140  0.0000000

$Prob.Slopes.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.374      0.091      0.165
Jord:Symp      0.374     1.000      0.134      0.174
Teyah:Allo     0.091     0.134      1.000      0.259
Teyah:Symp     0.165     0.174      0.259      1.000

$Slopes.correlation
              Jord:Allo   Jord:Symp   Teyah:Allo Teyah:Symp
Jord:Allo   1.000000000  0.01344439 -0.006345334  0.1577696
Jord:Symp   0.013444387  1.00000000 -0.288490065 -0.3441474
Teyah:Allo -0.006345334 -0.28849007  1.000000000  0.3397718
Teyah:Symp  0.157769562 -0.34414737  0.339771753  1.0000000

$Prob.Slopes.cor
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.257      0.108      0.090
Jord:Symp      0.257     1.000      0.058      0.007
Teyah:Allo     0.108     0.058      1.000      0.424
Teyah:Symp     0.090     0.007      0.424      1.000

A couple of things.  First, instead of means there is a comparison of least-squares (LS) means.  These are predicted values at the average value of the covariate (the slope variable).  They can differ from ordinary means, as groups can span different ranges of the covariate.  The slope distance is the difference in the amount of shape change (as a Procrustes distance) per unit of covariate change.  The slope correlation is the vector correlation of slope vectors; it indicates whether vectors point in different directions in the tangent space (or other data space).  Note that these pairwise statistics should not be considered in this case, as the ANOVA reveals a non-significant difference between models.
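Toy base-R sketches of these two quantities (illustrative data; the names here are not geomorph internals): LS means are model predictions at the covariate mean, and the slope correlation is an ordinary vector correlation.

```r
# LS means: predictions for each group at the common mean of the covariate
set.seed(7)
cs <- runif(40, 1, 3)                     # stand-in for log(CS)
g  <- gl(2, 20)
y  <- 0.5 * cs + as.numeric(g) + rnorm(40, sd = 0.1)
m  <- lm(y ~ cs + g)
newd <- data.frame(cs = mean(cs), g = levels(g))
predict(m, newd)   # one LS mean per group, evaluated at mean(cs)

# Vector correlation between two slope vectors (1 = same direction, 0 = orthogonal)
vec.cor <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
vec.cor(c(0.2, 0.1, -0.05), c(0.15, 0.12, -0.02))
```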

Returning to the incorrect pairwise test between LS means, the correct method is as follows.

> advanced.procD.lm(Y ~ log(CS) + species + site, ~ log(CS) + species*site, groups = ~species*site, slope = ~log(CS))
$anova.table

ANOVA with RRPP

                                  df      SSE       SS      F      Z     P
log(CS)+species+site              36 0.098490                            
log(CS)+species+site+species:site 35 0.068671 0.029819 15.198 9.6675 0.001

$LS.Means.dist
            Jord:Allo  Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.00000000 0.09396152 0.02473040 0.10148562
Jord:Symp  0.09396152 0.00000000 0.08999162 0.10547891
Teyah:Allo 0.02473040 0.08999162 0.00000000 0.09949284
Teyah:Symp 0.10148562 0.10547891 0.09949284 0.00000000

$Prob.Means.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo       1.00     0.020      1.000      0.580
Jord:Symp       0.02     1.000      0.578      0.001
Teyah:Allo      1.00     0.578      1.000      0.002
Teyah:Symp      0.58     0.001      0.002      1.000

$Slopes.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo  0.0000000 0.2188238  0.1780151  0.1231082
Jord:Symp  0.2188238 0.0000000  0.2718850  0.2354029
Teyah:Allo 0.1780151 0.2718850  0.0000000  0.1390140
Teyah:Symp 0.1231082 0.2354029  0.1390140  0.0000000

$Prob.Slopes.dist
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.675      0.340      0.425
Jord:Symp      0.675     1.000      0.418      0.486
Teyah:Allo     0.340     0.418      1.000      0.511
Teyah:Symp     0.425     0.486      0.511      1.000

$Slopes.correlation
              Jord:Allo   Jord:Symp   Teyah:Allo Teyah:Symp
Jord:Allo   1.000000000  0.01344439 -0.006345334  0.1577696
Jord:Symp   0.013444387  1.00000000 -0.288490065 -0.3441474
Teyah:Allo -0.006345334 -0.28849007  1.000000000  0.3397718
Teyah:Symp  0.157769562 -0.34414737  0.339771753  1.0000000

$Prob.Slopes.cor
           Jord:Allo Jord:Symp Teyah:Allo Teyah:Symp
Jord:Allo      1.000     0.317      0.204      0.193
Jord:Symp      0.317     1.000      0.116      0.045
Teyah:Allo     0.204     0.116      1.000      0.447
Teyah:Symp     0.193     0.045      0.447      1.000

In this case, the LS means comparison is meaningful and the rest can be ignored.  (Sorry, it is too daunting to make a program that anticipates every intention of the user.  Sometimes excessive output is produced, and it is up to the user to know which output to interpret.)  Likewise, if a significant slope interaction were observed (i.e., heterogeneity of slopes), then it would be silly to compare LS means.  It is imperative that the user understand the models that are used and which output to interpret.  Here is a little guide.

1. If there is a covariate involved, compare two models: covariate + groups and covariate*groups.  If the ANOVA returns a significant result, re-run with groups = ~groups and slope = ~covariate, and focus on the slope distances and slope correlations (or angles, if one of the angle options is chosen).  If the ANOVA does not return a significant result, go to 2.

2. If there is a covariate involved, compare two models: covariate and covariate + groups.  If the ANOVA returns a significant result, re-run with groups = ~groups and slope = ~covariate, and focus on the LS means.  If the ANOVA does not return a significant result, the groups are not different.

3. If groups represent a factorial interaction (e.g., species*site), one should also consider the main effects in the reduced model.  If not, an intercept alone can comprise the reduced model.

Ultimately, the advanced.procD.lm function has the capacity to compare a multitude of different reduced and full models and perform specialized pairwise tests.  For example, one could do this:

> advanced.procD.lm(Y ~ log(CS) + site, ~ log(CS)*site*species, groups = ~species, slope = ~log(CS), angle.type = "rad")

Doing this would require a lucid reason to think the residuals of the reduced model are the appropriate exchangeable units under a null hypothesis and that comparing species only, despite an interaction with site, is an appropriate thing to do.  Although advanced.procD.lm can handle it, it is the user’s responsibility to validate the output as legitimate.

We hope that this tutorial, and the Q&A that will result, will be more edifying than the previous pairwiseD.test and pairwise.slope.test functions, which, although more straightforward at first, were less flexible.  With a little patience and practice, this function will become clear.

More analytical details can be found here.
