
This post implements R code for the Carter-Kohn algorithm in a simple univariate state space model. To keep the backward sampling algorithm easy to follow, only the Gibbs sampling step for the state estimates is considered, with all other parameters fixed and known.

# Gibbs Sampling for State Space Models in R

In the previous posts below, we implemented a Bayesian linear regression model using Gibbs sampling and derived the Carter-Kohn algorithm for sampling from the posterior distribution of the states.

Since this post implements the Carter-Kohn algorithm in R, it is highly recommended that you read the second post first.

### Linear Regression with Time-varying Coefficients

A state space model for linear regression with time-varying coefficients is of the following form.
\begin{align} Y_t &= H\beta_t + \epsilon_t, \quad &\epsilon_t \sim N(0,R) \\ \beta_t &= \mu + F\beta_{t-1} + \nu_t, \quad &\nu_t \sim N(0,Q) \end{align}

### Kalman Filtering

The Kalman filter recursion is used to obtain state estimates from a state space model.
\begin{align} \beta_{t|t-1} &= \mu + F\beta_{t-1|t-1} \\ P_{t|t-1} &= FP_{t-1|t-1}F^{T} + Q \\ \epsilon_{t|t-1} &= Y_t - H\beta_{t|t-1} \\ f_{t|t-1} &= H P_{t|t-1} H^{T} + R \\ K &= P_{t|t-1}H^{T}f_{t|t-1}^{-1} \\ \beta_{t|t} &= \beta_{t|t-1} + K \epsilon_{t|t-1} \\ P_{t|t} &= P_{t|t-1} - K H P_{t|t-1} \end{align}
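The recursion above can be sketched as a single scalar update step in R (a minimal sketch for the univariate case considered in this post; the function name and arguments are illustrative, not taken from the code below):

```r
# One scalar Kalman update step (illustrative sketch):
# b_prev, p_prev are b[t-1|t-1] and P[t-1|t-1];
# y, h are the observation and its coefficient at time t;
# mu, f, q, r are the fixed model parameters.
kalman_step <- function(b_prev, p_prev, y, h, mu, f, q, r) {
  b10 <- mu + f * b_prev          # predicted state b[t|t-1]
  p10 <- f * p_prev * f + q       # prediction variance P[t|t-1]
  eta <- y - h * b10              # prediction error
  fet <- h * p10 * h + r          # prediction error variance
  k   <- p10 * h / fet            # Kalman gain
  list(b = b10 + k * eta,         # updated state b[t|t]
       p = p10 - k * h * p10)     # updated variance P[t|t]
}
```

Running the full forward filter then amounts to calling this step for t = 1, ..., T and storing the updated moments at each step.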

### Carter-Kohn algorithm

As shown in the previous post, the Carter and Kohn (1994) backward sampling algorithm is summarized by two normal distributions: one for $$\beta_T|\tilde{Y}_T$$ and one for $$\beta_t|\beta_{t+1},\tilde{Y}_t$$

\begin{align} \beta_T|\tilde{Y}_T &\sim N(\color{blue}{\beta_{T|T}}, \color{blue}{P_{T|T}})\\ \beta_t|\beta_{t+1},\tilde{Y}_t &\sim N(\color{red}{\beta_{t|t, \beta_{t+1}}}, \color{red}{P_{t|t, \beta_{t+1}}}) \\ \\ \color{blue}{\beta_{T|T}} &= \beta_{T|T-1} + K \epsilon_{T|T-1} \\ \color{blue}{P_{T|T}} &= P_{T|T-1} - K H P_{T|T-1} \\ \\ \color{red}{\beta_{t|t,\beta_{t+1}}} &= \beta_{t|t} + K^{*} (\beta_{t+1} - \mu - F\beta_{t|t}) \\ \color{red}{P_{t|t,\beta_{t+1}}} &= P_{t|t} - K^{*} F P_{t|t} \\ K^* &= P_{t|t}F^{T}(F P_{t|t} F^{T} + Q)^{-1} \\ \\ \tilde{Y}_k &= \{Y_1, Y_2, \ldots, Y_{k-1}, Y_k\} \end{align}
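One backward draw from the red conditional density above can be sketched in R as follows (a univariate sketch with illustrative names; btt and ptt denote the filtered mean and variance at time t from the Kalman pass, and b_next is the draw already made for time t+1):

```r
# One Carter-Kohn backward draw of beta_t | beta_{t+1}, Ytilde_t
# (scalar sketch; function and argument names are illustrative)
ck_backward_draw <- function(btt, ptt, b_next, mu, f, q) {
  kstar <- ptt * f / (f * ptt * f + q)        # K* in the formula above
  m <- btt + kstar * (b_next - mu - f * btt)  # conditional mean
  v <- ptt - kstar * f * ptt                  # conditional variance
  rnorm(1, mean = m, sd = sqrt(v))
}
```

Starting from a draw of $$\beta_T$$ from the blue density, applying this step for t = T-1, ..., 1 yields one complete sampled state path.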

### R code

The following R code draws the state variables using the Carter-Kohn algorithm in a state space model with all other parameters fixed. Its purpose is to make the backward sampling process for the state variables clear.

```r
#========================================================
# Quantitative ALM, Financial Econometrics & Derivatives
# ML/DL using R, Python, Tensorflow by Sang-Heon Lee
#
# https://kiandlee.blogspot.com
#--------------------------------------------------------
# Gibbs Sampling for State Space Model
# : Carter and Kohn (1994) algorithm
# : Forward Filtering Backward Sampling
#========================================================

graphics.off()  # clear all graphs
rm(list = ls()) # remove all objects from the workspace

#========================================================
# generate data for a state space model
#========================================================
# Y[t] = H[t]*b[t]     + e1,   var(e1) = R
# b[t] = mu + F*b[t-1] + e2,   var(e2) = Q
#--------------------------------------------------------

T <- 500 # number of observations or maximum of t

# these are fixed
Q <- 0.001; R <- 0.01; F <- 0.95; mu <- 0

# error terms
e1 <- rnorm(T)*sqrt(R)
e2 <- rnorm(T)*sqrt(Q)

# simulated states and time series b, Y
b <- Y <- rep(0,T)
H <- rnorm(T) # random covariate

for (j in 2:T) {
    b[j] <- mu + F*b[j-1] + e2[j]
    Y[j] <- H[j]*b[j] + e1[j]
}

#========================================================
# Step 1 : Set up matrices for the Kalman filter
#========================================================

b0  <- 0        # state variable b[0|0]
p00 <- 1        # variance of state variable P[0|0]
btt <- rep(0,T) # will hold the filtered state variable
ptt <- rep(0,T) # will hold its variance

# initialize the state variable
b11 <- b0; p11 <- p00

# Forward Filtering
for (i in 1:T) {
   b10  <- mu + b11*F
   p10  <- F*p11*F + Q
   feta <- H[i]*p10*H[i] + R
   K    <- p10*H[i]/feta
   b11  <- b10 + K*(Y[i] - H[i]*b10)
   p11  <- p10 - K*(H[i]*p10)

   # save state estimates and their variances
   ptt[i] <- p11; btt[i] <- b11
}

#========================================================
# Carter and Kohn backward recursion
# to calculate the mean and variance
# of the distribution of the state vector
#========================================================

# draws of the state variable : Carter-Kohn algorithm
b_ck <- rep(0,T)

# draw b_T|Ytilde_T at time T
b_ck[T] <- rnorm(1, mean = btt[T], sd = sqrt(ptt[T]))

# Backward Sampling
for (i in (T-1):1) {
   # mean and variance
   bttbt1 <- btt[i] + ptt[i]*F*solve(F*ptt[i]*F + Q)*(b_ck[i+1] - mu - btt[i]*F)
   pttbt1 <- ptt[i] - ptt[i]*F*solve(F*ptt[i]*F + Q)*F*ptt[i]

   # draw b_t|b_t+1, Ytilde_t at time t
   b_ck[i] <- rnorm(1, mean = bttbt1, sd = sqrt(pttbt1))
}

#========================================================
# Graph : true states, state estimates and sampled states
#========================================================

x11(width = 9, height = 5)
matplot(cbind(b, btt, b_ck), xlab = "time", ylab = "",
        main = "real states, state estimates, sampled states",
        type = "l", lty = 1, col = c(3,2,12), lwd = 3)
legend("bottom", c("data", "Kalman forward filtering",
                   "Carter-Kohn backward sampling"),
       inset = c(-1,0), col = c(3,2,12), lty = 1, lwd = 3, cex = 1,
       xpd = F, bty = "n", horiz = FALSE)
```

The following figure shows the true states (data), the Kalman forward filtered state estimates, and one Carter-Kohn backward sampled draw of the states.
In a full Gibbs sampler, Carter-Kohn backward draws are generated repeatedly with newly sampled parameters at each iteration. But since we assume the other parameters are fixed, we simply repeat the backward sampling with these fixed values. For example, the following R code repeats the backward sampling 500 times and shows how the sampled states are generated.

```r
#========================================================
# 500 Carter-Kohn Backward Samplings
# with other parameters fixed
#========================================================

x11(width = 9, height = 5)
matplot(b, xlab = "time", ylab = "", main = "sampled states",
        type = "l", lty = 1, col = 1, lwd = 3, ylim = c(-0.5, 0.5))
legend("bottom", c("real states",
                   "Carter-Kohn backward sampling"),
       inset = c(-1,0), col = c("black", "darkslategray1"),
       lty = 1, lwd = 3, cex = 1,
       xpd = F, bty = "n", horiz = FALSE)

# Carter-Kohn Backward Sampling repeatedly
for (j in 1:500) {
   # will hold draws of the state variable
   b_ck <- rep(0,T)

   # draw b_T|Ytilde_T at time T
   b_ck[T] <- rnorm(1, mean = btt[T], sd = sqrt(ptt[T]))

   # Backward Sampling
   for (i in (T-1):1) {
      # Kalman gain
      KG <- ptt[i]*F*solve(F*ptt[i]*F + Q)

      # mean and variance
      bttbt1 <- btt[i] + KG*(b_ck[i+1] - mu - btt[i]*F)
      pttbt1 <- ptt[i] - KG*F*ptt[i]

      # draw b_t|b_t+1, Ytilde_t at time t
      b_ck[i] <- rnorm(1, mean = bttbt1, sd = sqrt(pttbt1))
   }

   lines(b_ck, type = "l", lty = 1, lwd = 1, col = "darkslategray1")
   lines(b, type = "l", lty = 1, col = 1, lwd = 3)
}
```

The following animated figure shows the 500 sampled state paths from the Carter-Kohn algorithm. The sampled states are similar to the true states (black line) but show random variation stemming from the conditional distributions from which they are drawn.

### Concluding Remarks

In this post, we implemented R code for Gibbs sampling of the states in a state space model using the Carter-Kohn algorithm. This makes it easy to see intuitively how the backward sampled state estimates are generated in the univariate case. In the next post, we will deal with a multivariate state space model without the assumption of fixed parameters. $$\blacksquare$$