# Kalman filter example visualised with R

This post was first published on **mages' blog** and kindly contributed to R-bloggers.


At the last Cologne R user meeting Holger Zien gave a great introduction to dynamic linear models (dlm). One special case of a dlm is the Kalman filter, which I will discuss in this post in more detail. I kind of used it earlier when I measured the temperature in my room.

Over the last week I came across the wonderful quantitative economic modelling site quant-econ.net, designed and written by Thomas J. Sargent and John Stachurski. The site not only provides access to their lecture notes, including the Kalman filter, but also code in Python and Julia.

I will take their example of the Kalman filter and go through it with R. I particularly liked their visuals of the various steps of the Kalman filter.

### Motivation

Suppose I have a little robot that moves autonomously over my desk. How would the robot know where it is? Well, suppose the robot can calculate from its wheel spin how far it went, and an additional sensor (e.g. GPS-like) also provides information about its location. Of course both pieces of information will be imprecise. How can the various signals be combined?

This is a classic scenario for the Kalman filter. Its key assumptions are that the errors/noise are Gaussian and that the evolution of the state $x_t$ from one time step to the next is linear, as is the mapping to the sensor signals $y_t$. For the example I will use below it reads:

$$x_{t+1} = A x_t + w_{t+1}, \quad w_t \sim \mathcal{N}(0, Q)$$
$$y_t = G x_t + v_t, \quad v_t \sim \mathcal{N}(0, R)$$
$$x_0 \sim \mathcal{N}(\hat{x}, \Sigma)$$

with $A, Q, G, R, \Sigma$ matrices of appropriate dimensions. The Kalman filter provides recursive estimators for $x_t$:

$$\hat{x}_{t+1} = A \hat{x}_t + K_{\Sigma_t}(y_t - G \hat{x}_t)$$
$$\Sigma_{t+1} = A \Sigma_t A' - K_{\Sigma_t} G \Sigma_t A' + Q$$
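As a small illustrative sketch (not part of the original post), the state-space model above can be simulated directly: draw Gaussian noise with the Cholesky factors of $Q$ and $R$ and apply the two linear maps. The numeric values of the matrices here are placeholders of my own choosing, not yet the ones used later in the post.

```r
# Simulate one step of the linear-Gaussian state-space model:
# x_{t+1} = A x_t + w_{t+1},  y_t = G x_t + v_t
set.seed(1)
A <- matrix(c(1.2, 0, 0, -0.2), ncol=2)
G <- diag(2)
Q <- diag(2) * 0.1   # state noise covariance (illustrative)
R <- diag(2) * 0.05  # sensor noise covariance (illustrative)
x <- c(0.2, -0.2)    # current state
# Gaussian noise via (lower-triangular) Cholesky factors
w <- as.vector(t(chol(Q)) %*% rnorm(2))
v <- as.vector(t(chol(R)) %*% rnorm(2))
xnext <- as.vector(A %*% x + w)  # next state
y     <- as.vector(G %*% x + v)  # noisy sensor reading of the current state
```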

### Start with a prior

Let's say at time $t_0$ the robot has the expected position $\hat{x} = (0.2, -0.2)$. That means the robot doesn't know its location on the table with certainty. Further, I shall assume that the probability density function of the position follows a Normal distribution with covariance matrix

$$\Sigma = \begin{bmatrix} 0.4 & 0.3 \\ 0.3 & 0.45 \end{bmatrix}$$

The prior distribution of the robotâ€™s position can be visualised in R with a contour plot.


```
library(mnormt)
xhat <- c(0.2, -0.2)
Sigma <- matrix(c(0.4, 0.3,
                  0.3, 0.45), ncol=2)
x1 <- seq(-2, 4, length=151)
x2 <- seq(-4, 2, length=151)
f <- function(x1, x2, mean=xhat, varcov=Sigma)
  dmnorm(cbind(x1, x2), mean, varcov)
z <- outer(x1, x2, f)
mycols <- topo.colors(100, 0.5)
image(x1, x2, z, col=mycols, main="Prior density",
      xlab=expression('x'[1]), ylab=expression('x'[2]))
contour(x1, x2, z, add=TRUE)
points(0.2, -0.2, pch=19)
text(0.1, -0.2, labels = expression(hat(x)), adj = 1)
```

### Sensor information

The 'GPS' sensor also provides data about the location of the robot; again, the sensor signal can be regarded as the expected position with some noise. Suppose the reading of the sensor is $y = (2.3, -1.9)$. I will assume that the sensor signal has an error following a Normal distribution, with covariance matrix $R = 0.5\,\Sigma$.

Conceptually I think about it like this: given that the robot is at position $x$, what would be the likelihood of the sensor reading $y$ given $x$? The Kalman filter assumes a linear mapping from $x$ to $y$:

$$y = G x + v, \quad v \sim \mathcal{N}(0, R)$$

In my case $G$ will be the identity matrix. Let's add the sensor information to the plot.


```
R <- 0.5 * Sigma
z2 <- outer(x1, x2, f, mean=c(2.3, -1.9), varcov=R)
image(x1, x2, z2, col=mycols, main="Sensor density")
contour(x1, x2, z2, add=TRUE)
points(2.3, -1.9, pch=19)
text(2.2, -1.9, labels = "y", adj = 1)
contour(x1, x2, z, add=TRUE)
points(0.2, -0.2, pch=19)
text(0.1, -0.2, labels = expression(hat(x)), adj = 1)
```

### Filtering

I have two pieces of information about the location of the robot, both with a Normal distribution: my prior is $x \sim \mathcal{N}(\hat{x}, \Sigma)$, and for the distribution of my sensor I have $y \mid x \sim \mathcal{N}(Gx, R)$.

What I would like to know is the location of the robot given the sensor reading:

$$p(x \mid y) \propto p(y \mid x)\, p(x)$$

Now, because $x$ and $y$ are Normal and $G$ is the identity matrix, I know that the posterior distribution of $x \mid y$ is Normal again, with parameters (see conjugate prior):

$$\hat{x}_f = (\Sigma^{-1} + R^{-1})^{-1}(\Sigma^{-1}\hat{x} + R^{-1}y), \qquad \Sigma_f = (\Sigma^{-1} + R^{-1})^{-1}$$

Using the matrix inversion identity

$$(\Sigma^{-1} + R^{-1})^{-1} = \Sigma - \Sigma(\Sigma + R)^{-1}\Sigma$$

I can write the above as:

$$\hat{x}_f = \hat{x} + \Sigma(\Sigma + R)^{-1}(y - \hat{x}), \qquad \Sigma_f = \Sigma - \Sigma(\Sigma + R)^{-1}\Sigma$$

In the more general case, when $G$ is not the identity matrix, I have:

$$\hat{x}_f = \hat{x} + \Sigma G'(G\Sigma G' + R)^{-1}(y - G\hat{x}), \qquad \Sigma_f = \Sigma - \Sigma G'(G\Sigma G' + R)^{-1}G\Sigma$$
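The matrix inversion identity is easy to check numerically; a quick sketch (not from the original post) using the $\Sigma$ and $R$ of this example:

```r
# Numerically verify (Sigma^-1 + R^-1)^-1 == Sigma - Sigma (Sigma + R)^-1 Sigma
Sigma <- matrix(c(0.4, 0.3, 0.3, 0.45), ncol=2)
R <- 0.5 * Sigma
lhs <- solve(solve(Sigma) + solve(R))
rhs <- Sigma - Sigma %*% solve(Sigma + R) %*% Sigma
all.equal(lhs, rhs)  # TRUE (up to numerical tolerance)
```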

The updated posterior probability density function $p(x \mid y) = \mathcal{N}(\hat{x}_f, \Sigma_f)$ is visualised in the chart below.

I can see how the prior and likelihood distributions have influenced the posterior distribution of the robot's location. The sensor information is regarded as more credible than the prior information, and hence the posterior is closer to the sensor data.

```
G <- diag(2)
y <- c(2.3, -1.9)
xhatf <- xhat + Sigma %*% t(G) %*% solve(G %*% Sigma %*% t(G) + R) %*% (y - G %*% xhat)
Sigmaf <- Sigma - Sigma %*% t(G) %*% solve(G %*% Sigma %*% t(G) + R) %*% G %*% Sigma
z3 <- outer(x1, x2, f, mean=c(xhatf), varcov=Sigmaf)
image(x1, x2, z3, col=mycols,
      xlab=expression('x'[1]), ylab=expression('x'[2]),
      main="Filtered density")
contour(x1, x2, z3, add=TRUE)
points(xhatf[1], xhatf[2], pch=19)
text(xhatf[1]-0.1, xhatf[2],
     labels = expression(hat(x)[f]), adj = 1)
lb <- adjustcolor("black", alpha=0.5)
contour(x1, x2, z, add=TRUE, col=lb)
points(0.2, -0.2, pch=19, col=lb)
text(0.1, -0.2, labels = expression(hat(x)), adj = 1, col=lb)
contour(x1, x2, z2, add=TRUE, col=lb)
points(2.3, -1.9, pch=19, col=lb)
text(2.2, -1.9, labels = "y", adj = 1, col=lb)
```

### Forecasting

Ok, I have a better understanding of the robot's position at time $t_0$, but the robot is moving, and what I'd like to know is where it will be after one time step.

Suppose I have a linear model that explains how the state evolves over time, e.g. based on wheel spin. Again, I will assume a Gaussian process. In this toy example I follow the assumptions of Sargent and Stachurski, although this doesn't make much sense for the robot:

$$x_{t+1} = A x_t + w_{t+1}, \quad w_t \sim \mathcal{N}(0, Q)$$

with

$$A = \begin{bmatrix} 1.2 & 0 \\ 0 & -0.2 \end{bmatrix}, \qquad Q = 0.3\,\Sigma$$

As all the processes are linear and Normal the calculations are straightforward:

$$\hat{x}_{new} = A\hat{x}_f = A\hat{x} + A\Sigma G'(G\Sigma G' + R)^{-1}(y - G\hat{x})$$
$$\Sigma_{new} = A\Sigma_f A' + Q = A\Sigma A' - A\Sigma G'(G\Sigma G' + R)^{-1}G\Sigma A' + Q$$

The matrix $A\Sigma G'(G\Sigma G' + R)^{-1}$ is often written as $K_\Sigma$ and called the *Kalman gain*. Using this notation, I can summarise the results as follows:

$$\hat{x}_{new} = A\hat{x} + K_\Sigma(y - G\hat{x})$$
$$\Sigma_{new} = A\Sigma A' - K_\Sigma G\Sigma A' + Q$$

Adding the prediction to my chart gives me:


```
A <- matrix(c(1.2, 0,
              0, -0.2), ncol=2)
Q <- 0.3 * Sigma
K <- A %*% Sigma %*% t(G) %*% solve(G %*% Sigma %*% t(G) + R)
xhatnew <- A %*% xhat + K %*% (y - G %*% xhat)
Sigmanew <- A %*% Sigma %*% t(A) - K %*% G %*% Sigma %*% t(A) + Q
z4 <- outer(x1, x2, f, mean=c(xhatnew), varcov=Sigmanew)
image(x1, x2, z4, col=mycols,
      xlab=expression('x'[1]), ylab=expression('x'[2]),
      main="Predictive density")
contour(x1, x2, z4, add=TRUE)
points(xhatnew[1], xhatnew[2], pch=19)
text(xhatnew[1]-0.1, xhatnew[2],
     labels = expression(hat(x)[new]), adj = 1)
contour(x1, x2, z3, add=TRUE, col=lb)
points(xhatf[1], xhatf[2], pch=19, col=lb)
text(xhatf[1]-0.1, xhatf[2], col=lb,
     labels = expression(hat(x)[f]), adj = 1)
contour(x1, x2, z, add=TRUE, col=lb)
points(0.2, -0.2, pch=19, col=lb)
text(0.1, -0.2, labels = expression(hat(x)), adj = 1, col=lb)
contour(x1, x2, z2, add=TRUE, col=lb)
points(2.3, -1.9, pch=19, col=lb)
text(2.2, -1.9, labels = "y", adj = 1, col=lb)
```

That's it: having gone through the updating process for one time step gives me the recursive Kalman filter iteration mentioned above.
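To make the recursion explicit, the filter-and-predict step can be wrapped in a small function; this is a sketch of my own (the function name and interface are not from the original post), reusing the formulas above:

```r
# One combined Kalman filter iteration: update the estimate with
# observation y via the Kalman gain, then predict one step ahead.
kalmanStep <- function(xhat, Sigma, y, A, G, Q, R) {
  K <- A %*% Sigma %*% t(G) %*% solve(G %*% Sigma %*% t(G) + R)  # Kalman gain
  xhatnew  <- A %*% xhat + K %*% (y - G %*% xhat)
  Sigmanew <- A %*% Sigma %*% t(A) - K %*% G %*% Sigma %*% t(A) + Q
  list(xhat = as.vector(xhatnew), Sigma = Sigmanew)
}

# One iteration with the values of this post; feeding the result back in
# with the next observation repeats the recursion.
Sigma <- matrix(c(0.4, 0.3, 0.3, 0.45), ncol=2)
A <- matrix(c(1.2, 0, 0, -0.2), ncol=2)
res <- kalmanStep(c(0.2, -0.2), Sigma, y = c(2.3, -1.9),
                  A = A, G = diag(2), Q = 0.3 * Sigma, R = 0.5 * Sigma)
```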

Another way to visualise the four steps is with the `lattice` package:

```
library(lattice)
grid <- expand.grid(x=x1, y=x2)
grid$Prior <- as.vector(z)
grid$Likelihood <- as.vector(z2)
grid$Posterior <- as.vector(z3)
grid$Predictive <- as.vector(z4)
contourplot(Prior + Likelihood + Posterior + Predictive ~ x*y,
            data=grid, col.regions=mycols, region=TRUE,
            as.table=TRUE,
            xlab=expression(x[1]),
            ylab=expression(x[2]),
            main="Kalman Filter",
            panel=function(x, y, ...){
              panel.grid(h=-1, v=-1)
              panel.contourplot(x, y, ...)
            })
```

You can find the code of this post on GitHub.

### Session Info

```
R version 3.1.2 (2014-10-31)
Platform: x86_64-apple-darwin13.4.0 (64-bit)
locale:
[1] en_GB.UTF-8/en_GB.UTF-8/en_GB.UTF-8/C/en_GB.UTF-8/en_GB.UTF-8
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] lattice_0.20-29 mnormt_1.5-1
loaded via a namespace (and not attached):
[1] grid_3.1.2 tools_3.1.2
```
