Eigen-analysis of Linear Model Behavior in R


This post is about replicating figures from Otto and Day's A Biologist's Guide to Mathematical Modeling in Ecology and Evolution. The figures I'm interested in are Figures 9.1 and 9.2 in the chapter 'General Solutions and Transformations – Models with Multiple Variables'. These figures are really about analyzing the behavior of linear models, so even though I'm replicating figures, the techniques apply to any linear model of the form:

n1(t+1) = m11 n1(t) + m12 n2(t)
n2(t+1) = m21 n1(t) + m22 n2(t)

although there can be more than two variables and equations. These models can be simplified into matrix format:

| n1(t+1) |   | m11  m12 | | n1(t) |
| n2(t+1) | = | m21  m22 | | n2(t) |

which is equivalent to:

n(t+1) = M n(t)

The long-term behavior of these models is determined by the eigenvalues and eigenvectors:

n(t) = A D^t A^-1 n(0)

where A is a matrix of eigenvectors, D is a diagonal matrix of eigenvalues, and n(0) is a vector of starting conditions. In their example, n(0) = [2, 1] and

M = | 20/33   8/33 |
    | -2/11  46/33 |

Using R, n(t), or the output at each time step, can be calculated using the transform above:

M <- matrix( c(20/33, -2/11, 8/33, 46/33) , ncol=2 )   # model matrix (filled column-wise)
A <- eigen(M)$vectors                                   # eigenvectors of M, one per column
D <- diag(eigen(M)$values)                              # diagonal matrix of eigenvalues

N <- array( dim=c(11, 2) )                              # storage for n(0) through n(10)

n0 <- matrix( c(2,1) )                                  # starting conditions n(0)

N[1,] <- n0

for(i in 2:11){
    N[i,] <- A %*% D^(i-1) %*% solve(A) %*% n0          # n(t) = A D^t A^-1 n(0), with t = i - 1
}

N <- as.data.frame(N)
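As a quick sanity check, iterating the recursion n(t+1) = M n(t) directly should trace out the same trajectory as the eigen-decomposition. A rough sketch (N_direct is just a scratch variable):

N_direct <- array( dim=c(11, 2) )
N_direct[1,] <- n0
for(i in 2:11){
    N_direct[i,] <- M %*% N_direct[i-1,]   # step the recursion forward one time step
}
all.equal(as.matrix(N), N_direct, check.attributes=FALSE)   # should be TRUE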

The dataframe N now contains n(t) for each time step. I had to use D^(i-1) because N[1,] is actually n(0), N[2,] is n(1) (computed with D^1), and so on. Plotting these results, including the eigenvectors, gives Figure 9.1 from Otto and Day. I arbitrarily scaled the eigenvectors, but I set the axes to match the original figure as closely as possible.

a <- sqrt(eigen(M)$values) * 5   # arbitrary lengths for the eigenvector arrows
ends <- abs(A) %*% diag(a)       # arrow endpoints; abs() keeps them in the positive quadrant

par(mar=c(4,4,1,1))

library(shape)

plot(V2 ~ V1, N, bty='l',
 xlab='Species 1', ylab='Species 2',
 pch=16,
 ylim=c(0, 12),
 xlim=c(0, 5),
 yaxs='i',
 xaxs='i')

Arrows(0,0, ends[1,], ends[2,])

text(N[1,], 't = 0', pos=3)
text(N[2,], 't = 1', pos=3)
text(N[11,], 't = 10', pos=3)

text(ends[1,], ends[2,],
    c('Second Eigenvector', 'First Eigenvector'),
    pos=c(2,3))

[Figure: replication of Otto and Day's Figure 9.1, the trajectory of n(t) with the two eigenvectors drawn as arrows]

This example shows how to use R to examine the behavior of a linear model at multiple time steps. It also shows how the system comes to be dominated by the eigenvector with the largest eigenvalue (see how the points begin to converge and eventually lie entirely along the second eigenvector). NOTE: In my example, I switched the names of the eigenvectors to be consistent with the book. R orders the eigenvectors for you by decreasing eigenvalue, so the eigenvector labels in the graph above are switched from what R would output.
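To make the dominance concrete, one quick check is to look at the step-to-step growth factor and the direction of the state vector, which should approach the leading eigenvalue and eigenvector. A rough sketch (N_mat is just a scratch copy of N):

N_mat <- as.matrix(N)
N_mat[11,] / N_mat[10,]                # growth factors approach the leading eigenvalue (4/3)
N_mat[11,] / sqrt(sum(N_mat[11,]^2))   # normalized state vector at t = 10
eigen(M)$vectors[,1]                   # R's leading eigenvector (possibly flipped in sign)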

We can transform the model output n(t) to use the eigenvectors as the coordinate axes:

yt <- array( dim=c(11, 2) )
for(i in 1:nrow(N)){
    yt[i,] <- solve(A) %*% as.numeric(N[i,])   # y(t) = A^-1 n(t); as.numeric() turns the data frame row into a vector
}

par(mar=c(4,4,1,1))
plot(yt, pch=16,
    ylab='First Eigenvector', xlab='Second Eigenvector',
    yaxs='i', xaxs='i',
    ylim=c(0, 2), xlim=c(0, 12), bty='l')

text(yt[1,1], yt[1,2], 't = 0', pos=4)
text(yt[11,1], yt[11,2], 't = 10', pos=3)

which gives

[Figure: replication of Otto and Day's Figure 9.2, the trajectory of y(t) plotted on the eigenvector coordinate axes]

Again, the axes here are reversed from the book because R automatically orders the eigenvectors. Also note that transforming n(t) onto the eigenvectors uses the formula y(t) = A^-1 n(t), which is different from the way we map observations onto principal components (i.e., eigenvectors) during PCA. Actually, in a PCA, the formula above and the traditional formula (y(t) = n(t)A) are equivalent:

X1 <- rnorm(20, 0, 10)
Y1 <- X1 + rnorm(20, 0, 2)

Z <- cbind(X1, Y1)       # 20 observations of two correlated variables

cmat <- cor(Z)           # correlation matrix (symmetric)

A <- eigen(cmat)$vectors # principal component axes

Zt <- Z %*% A            # traditional PCA mapping: each observation (row) times A

yt <- array(dim=c(20, 2))
for(i in 1:nrow(yt)){
    yt[i,] <- solve(A) %*% Z[i,]   # mapping used earlier: A^-1 times each observation
}

round(Zt, 3) == round(yt, 3)       # should be all TRUE

This is not the case for linear models (try it). The difference is that PCA operates on the variance-covariance (or correlation) matrix of a set of observations. Correlation and covariance matrices are symmetric, so the inverse of A is the same as the transpose of A (again, try it). For example, t(n(t)A) is equivalent to t(A) t(n(t)), where t( ) is the transpose (I have no idea how to do superscripts in WordPress). Because in PCA the eigenvectors come from a symmetric matrix, t(A) = A^-1, and therefore A^-1 t(n(t)) = t(A) t(n(t)) = t(n(t)A). So the two formulas are identical.
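The symmetric-matrix claim is easy to verify with the correlation-matrix eigenvectors from the example above:

round(t(A) %*% A, 10)                  # identity matrix, so the transpose acts as the inverse
isTRUE( all.equal(solve(A), t(A)) )    # TRUE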

Linear model matrices aren't necessarily symmetric, and their eigenvectors aren't necessarily orthogonal (see the first figure). Thus, A^-1 t(n(t)) does NOT equal t(n(t)A). That's what I've figured out so far, anyway (this is the problem with learning statistics before linear algebra rather than the other way around: I got used to a formula that's specific to symmetric matrices and never realized it wouldn't work elsewhere). Just be sure to use the right mapping formula. When in doubt, use A^-1 n(t). That's my pretty pathetic explanation.
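For the model matrix M from the first example, which isn't symmetric, the same check fails and the two mappings really do disagree. A small sketch (A_M and n1 are just scratch names):

A_M <- eigen(M)$vectors
isTRUE( all.equal(solve(A_M), t(A_M)) )   # FALSE: the eigenvectors of M are not orthogonal

n1 <- as.numeric(N[2,])   # n(1) from the model run above
solve(A_M) %*% n1         # the mapping used in this post, A^-1 n(t)
n1 %*% A_M                # the PCA-style mapping n(t)A gives different numbers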

