Sequences defined using a Linear Recurrence

January 6, 2014


In the introduction to the time series course (MAT8181) this morning, we spent some time on the expression of (deterministic) sequences defined using a linear recurrence (we will need that later on, so I wanted to make sure that those results were familiar to everyone).

  • First order recurrence

The simplest case is the first order recurrence, $u_n = a + b\,u_{n-1}$, where $b \neq 1$ (for convenience). Observe that we can remove the constant, using a simple translation
$$\underbrace{[u_n - m]}_{v_n} = b\,\underbrace{[u_{n-1} - m]}_{v_{n-1}}$$
if $m = a/(1-b)$. So, starting from this point, we will always remove the constant in the recurrence equation. Thus, $v_n = b\,v_{n-1}$. From this equation, observe that $v_n = b^n v_0$, which is the general expression of $v_n$.
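As a quick numerical illustration (not in the original post, and with arbitrary values $b=0.8$ and $v_0=2$), iterating the recurrence in R reproduces $b^n v_0$:

> b=.8; v0=2
> w=v0
> for(n in 1:10) w=c(w, b*w[length(w)])   # iterate w_n = b*w_{n-1}
> max(abs(w - b^(0:10)*v0))               # numerically zero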

  • Second order recurrence

Consider now a second order recurrence, $v_n = a\,v_{n-1} + b\,v_{n-2}$. In order to find the general expression of $v_n$, define $\boldsymbol{V}_n = (v_n, v_{n-1})^{\mathsf{T}}$. Then
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\end{bmatrix}}_{\boldsymbol{V}_n} = \underbrace{\begin{bmatrix}a & b\\ 1 & 0\end{bmatrix}}_{B}\underbrace{\begin{bmatrix}v_{n-1}\\v_{n-2}\end{bmatrix}}_{\boldsymbol{V}_{n-1}}$$
This time, we have a vectorial linear recurrence equation. But what we've done previously still holds. For instance,
$$\boldsymbol{V}_n = B\,\boldsymbol{V}_{n-1} = \cdots = B^n\,\boldsymbol{V}_0$$
What can we say about $B^n$? If $B$ can be diagonalized, then $B = P\Delta P^{-1}$ and $B^n = P\Delta^n P^{-1}$. Thus,
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\end{bmatrix}}_{\boldsymbol{V}_n} = B^n\underbrace{\begin{bmatrix}v_0\\v_{-1}\end{bmatrix}}_{\boldsymbol{V}_0} = P\underbrace{\begin{bmatrix}\lambda_1^n & 0\\ 0 & \lambda_2^n\end{bmatrix}}_{\Delta^n}P^{-1}\underbrace{\begin{bmatrix}v_0\\v_{-1}\end{bmatrix}}_{\boldsymbol{V}_0}$$
so what we get here is something like $v_n = \alpha\,\lambda_1^n + \beta\,\lambda_2^n$ for some constants $\alpha$ and $\beta$. Recall that $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix $B$, and they are also the roots of the characteristic polynomial $x^2 - ax - b$. Since $a$ and $b$ are real-valued, there are two roots for the polynomial, possibly identical, possibly complex (but then conjugate). An interesting case is obtained when the roots are complex, $r e^{\pm i\theta}$. In that case,
$$v_n = r^n\left(\alpha\cos(n\theta) + \beta\sin(n\theta)\right)$$
To visualize this general term, consider the following code. A first strategy is to define the sequence, given the two parameters and two starting values. E.g.

> a=.5
> b=-.9
> u1=1; u0=1

Then, we iterate to generate the sequence,

> v=c(u1,u0)
> while(length(v)<100) v=c(a*v[1]+b*v[2],v)
> plot(0:99,rev(v))

It is also possible to use the generic expression we’ve just seen. Here, the roots of the characteristic polynomial are

> r=polyroot(c(-b, -a, 1))
> r
[1] 0.25+0.9151503i 0.25-0.9151503i
> plot(r,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),pch=19,col="red")
> u=seq(-1,1,by=.01)
> lines(u,sqrt(1-u^2),lty=2)
> lines(u,-sqrt(1-u^2),lty=2)

Since $v_n = \alpha\lambda_1^n + \beta\lambda_2^n$, then
$$\begin{cases} \alpha + \beta = v_0 \\ \alpha r_1 + \beta r_2 = v_1 \end{cases}$$
and it is possible to derive numerical expressions for the two parameters. If $v_n = r^n(A\cos(n\theta) + B\sin(n\theta))$, then $A = \alpha + \beta$ while $B = i(\alpha - \beta)$. Thus,

> A=sum( solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))
> B=diff(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))* complex(real=0,imaginary=1)
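
As a quick sanity check (not in the original post), the constants $\alpha$ and $\beta$ solved above can be used to reconstruct the whole sequence, $v_n = \alpha\lambda_1^n + \beta\lambda_2^n$, which should coincide with the iterated values:

> ab=solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1))   # (alpha, beta)
> w=Re(ab[1]*r[1]^(0:99) + ab[2]*r[2]^(0:99))       # alpha*lambda1^n + beta*lambda2^n
> max(abs(w - rev(v)))                              # numerically zero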

We can plot the sequence of points

> plot(0:99,rev(v))

and then we can plot the sine wave, too.
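The functions fv (the general term) and bv (its envelope) used below were presumably defined earlier in the original post and are missing from this excerpt. A minimal reconstruction, assuming the amplitude-phase form $v_t = 2|\alpha|\,r^t\cos(t\theta + \arg\alpha)$ and written so that fv(t-1) returns $v_t$ (to match the calls below), could be

> alpha=solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1))[1]
> # reconstructed general term: fv(t-1) = 2|alpha| * rho^t * cos(t*theta + arg(alpha)) = v_t
> fv=function(t) 2*Mod(alpha)*Mod(r[1])^(t+1)*cos((t+1)*Arg(r[1])+Arg(alpha))
> bv=function(t) 2*Mod(alpha)*Mod(r[1])^(t+1)   # amplitude envelope of the general term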

> t=seq(0,100,by=.1)
> lines(t,Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,-Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,Vectorize(fv)(t-1),col="blue")

We will see a lot of graphs like this in the course, when looking at autocorrelation functions.

  • Higher order recurrence

More generally, for a recurrence of order $p$, we can write
$$\underbrace{\begin{bmatrix}v_n\\v_{n-1}\\v_{n-2}\\ \vdots\\ v_{n-p+1}\end{bmatrix}}_{\boldsymbol{V}_n} = \underbrace{\begin{bmatrix}b_1 & b_2 & b_3 & \cdots & b_p\\ 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 & 0\end{bmatrix}}_{B}\underbrace{\begin{bmatrix}v_{n-1}\\v_{n-2}\\v_{n-3}\\ \vdots\\ v_{n-p}\end{bmatrix}}_{\boldsymbol{V}_{n-1}}$$
The matrix $B$ is a so-called companion matrix. And similar results can be obtained for the expression of the general term of the sequence, as sketched below. If all that is not familiar, I strongly recommend reading carefully a textbook on sequences and linear recurrences.
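As a small sketch (not from the original post), here is how one could set up the companion matrix in R for a third order recurrence $v_n = b_1 v_{n-1} + b_2 v_{n-2} + b_3 v_{n-3}$, with arbitrary coefficients and starting values, and iterate $\boldsymbol{V}_n = B\,\boldsymbol{V}_{n-1}$:

> b=c(.5,-.9,.2)                  # b1, b2, b3 (arbitrary)
> B=rbind(b, cbind(diag(2), 0))   # 3x3 companion matrix
> V=c(1,1,1)                      # V_2 = (v2, v1, v0), arbitrary starting values
> v=c(1,1,1)                      # v0, v1, v2
> for(n in 3:50){ V=B %*% V; v=c(v, V[1]) }
> plot(0:50, v)
> eigen(B)$values                 # roots of x^3 - b1*x^2 - b2*x - b3

As in the second order case, the eigenvalues of $B$ are the roots of the characteristic polynomial, and the general term is a combination of their $n$-th powers.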
