Solvency II yield curve ‘fits exactly’. Too tightly to explain?


For the QIS5 and the LTGA (Quantitative Impact Studies), the European insurance supervisor (EIOPA) suggested the use of the Smith-Wilson method for discounting future cash flows. A description of this method can be found here. The QIS5 technical specifications claimed that with this method, “the term structure is fitted exactly to all observed zero coupon bond prices”. The method was subsequently used for the LTGA.
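
For reference, here is a sketch of the method in roughly the notation of those specifications (to be checked against the description linked above). The Smith-Wilson price of a zero coupon bond of maturity $t$ is

$$P(t) = e^{-\mathrm{UFR}\, t} + \sum_{j=1}^{N} \zeta_j\, W(t, u_j),$$

where $\mathrm{UFR}$ is the ultimate forward rate (continuously compounded), $u_1, \dots, u_N$ are the observed maturities, and the $\zeta_j$ solve a linear system so that the $N$ observed prices are matched exactly. The Wilson kernel is

$$W(t, u) = e^{-\mathrm{UFR}(t+u)} \left[ \alpha \min(t,u) - \tfrac{1}{2}\, e^{-\alpha \max(t,u)} \left( e^{\alpha \min(t,u)} - e^{-\alpha \min(t,u)} \right) \right],$$

with $\alpha$ controlling the speed of convergence to the UFR. The exact fit comes from having one free parameter $\zeta_j$ per observed instrument.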

But: is fitting exactly a desirable feature for a yield curve? I won’t give a definitive answer here, only some hints, as the question could be answered in many different ways (well… this post is mainly an excuse to show you ycinterextra in action, so that you can tell me how to improve the package).

I’ll fit the Nelson-Siegel (NS), Svensson (SV) and Smith-Wilson (SW) models (implemented in ycinterextra) to the data from Diebold, F. X., & Li, C. (2006), Forecasting the term structure of government bond yields, Journal of Econometrics, 130(2), 337-364, and we’ll see how accurate the values they produce are on out-of-sample data. I used ideas from Diebold and Li (2006), mixed with Gilli, M., & Schumann, E. (2010), A Note on ‘Good Starting Values’ in Numerical Optimisation, available at SSRN: http://ssrn.com/abstract=1620083.
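
As a reminder (standard notation; Diebold and Li actually use time-varying factors $\beta_{1t}, \beta_{2t}, \beta_{3t}$), the Nelson-Siegel zero rate at maturity $\tau$ is

$$y(\tau) = \beta_1 + \beta_2\, \frac{1 - e^{-\lambda \tau}}{\lambda \tau} + \beta_3 \left( \frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau} \right),$$

and the Svensson extension adds a second hump term with its own decay parameter:

$$y(\tau) = \beta_1 + \beta_2\, \frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau} + \beta_3 \left( \frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau} - e^{-\lambda_1 \tau} \right) + \beta_4 \left( \frac{1 - e^{-\lambda_2 \tau}}{\lambda_2 \tau} - e^{-\lambda_2 \tau} \right).$$

So NS has 4 parameters and SV has 6, while SW has one $\zeta_j$ per observed point.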

The data can be found here: http://www.ssc.upenn.edu/~fdiebold/papers/paper49/FBFITTED.txt.

# I put the data in a text file.
data.diebold.li <- scan("FBFITTED.txt", skip = 14)
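
If you’d rather not save the file by hand, it can also be fetched directly from R (same URL as above):

# Alternative: download the file from within R before reading it
download.file("http://www.ssc.upenn.edu/~fdiebold/papers/paper49/FBFITTED.txt",
 destfile = "FBFITTED.txt")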

Now, loading the package and formatting the data:

library(ycinterextra)

# Number of observation dates (monthly, from January 1970 through December 2000)
nbdates <- 372

# Times to maturity (in years) for each observation date
mats <- round(c(1, 3, 6, 9, 12, 15, 18, 21, 24, 30, 36, 48, 60, 72, 84, 96, 108, 120)/12, 3)
nbmats <- length(mats)
nbcol <- nbmats + 1

# Formatting the data: one row per observation date,
# column 1 is the date, columns 2 to 19 are the zero rates
data.diebold.li.final <- matrix(data.diebold.li, nrow = nbdates, ncol = nbcol, byrow = TRUE)
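
A quick sanity check on the dimensions:

# 372 rows (dates), 19 columns (the date plus 18 rates)
dim(data.diebold.li.final)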

A 3D plot of the observed data can be obtained with the following function:

x <- mats # times to maturity
y <- data.diebold.li.final[, 1] # observation dates
z <- data.diebold.li.final[, -1] # observed zero rates

graph3D <- function(x, y, z)
{
 par(bg = "white")
 nrz <- nrow(z)
 ncz <- ncol(z)
 # Create a function interpolating colors in the range of specified colors
 jet.colors <- colorRampPalette( c("blue", "green") )
 # Generate the desired number of colors from this palette

 nbcol <- 100
 color <- jet.colors(nbcol)
 # Sum the z-values at the four corners of each facet
 # (proportional to the facet average, which is all cut() needs)
 zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]
 # Recode facet z-values into color indices
 facetcol <- cut(zfacet, nbcol)
 persp(y, x, z, col = color[facetcol], phi = 30, theta = -30, ticktype= 'detailed', nticks = 3,
 xlab = "observation date", ylab = "maturity", zlab = "zero rates in pct.")
}

graph3D(x, y, z)

[Figure: 3D surface of the observed zero rates]

Now, let’s calibrate the 3 models, NS, SV and SW, to the observed yield curves (this takes a lot of time, as there are 372 curves, each with 18 maturities):

nrz <- nrow(z)
ncz <- ncol(z)
yM.NS <- matrix(NA, nrow = nrz, ncol = ncz)
yM.SV <- matrix(NA, nrow = nrz, ncol = ncz)
yM.SW <- matrix(NA, nrow = nrz, ncol = ncz)

# Loop over the dates from January 1970 through December 2000
for (t in seq_len(nbdates))
{
 # market yields
 yM <- as.numeric(data.diebold.li.final[t, -1])

 yM.NS[t, ] <- fitted(ycinter(yM = yM, matsin = mats, matsout = mats,
 method = "NS", typeres = "rates"))

 yM.SV[t, ] <- fitted(ycinter(yM = yM, matsin = mats, matsout = mats,
 method = "SV", typeres = "rates"))

 yM.SW[t, ] <- fitted(ycinter(yM = yM, matsin = mats, matsout = mats,
 method = "SW", typeres = "rates"))
}
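
Before plotting, a quick numerical look at the in-sample fit (my own check, using the matrices just filled):

# In-sample mean absolute error over all dates and maturities, in pct. points
mean(abs(yM.NS - z))
mean(abs(yM.SV - z))
mean(abs(yM.SW - z))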

After a little time… the SW curve seems to fit very well indeed, and the plot below looks pretty similar to the one we made before:

graph3D(x, y, yM.SW)

[Figure: 3D surface of the Smith-Wilson fitted rates]

With a plot of the average yield curve (the average of the 372 observed curves, as in Diebold and Li (2006)), we can see that all three models provide fairly good fits to the data:

# Average yield curves
plot(mats, apply(z, 2, mean), pch = 3, xlab = "time to maturity",
 ylab = "yield to maturity",
 main = paste("Actual and fitted (model-based)", "average yield curves", sep = "\n"))
lines(mats, apply(yM.NS, 2, mean), col = "blue", lwd= 2)
lines(mats, apply(yM.SV, 2, mean), col = "red", lwd= 2)
lines(mats, apply(yM.SW, 2, mean), col = "green", lwd=2, lty = 2)
legend("bottomright", c("Actual", "NS", "SV", "SW"),
 col=c("black", "blue", "red", "green"), text.col = c("black", "blue", "red", "green"),
 lty=c(NA, 1, 1, 2), lwd = c(1, 2, 2, 2), pch=c(3, NA, NA, NA))

[Figure: actual and fitted (model-based) average yield curves]

Here are plots of the residuals for NS and SW (for SV, the plot is pretty similar to the one for NS):

# residuals
graph3D(x, y, yM.NS-z)
graph3D(x, y, yM.SW-z)

[Figures: 3D surfaces of the NS and SW residuals]

Now, taking a closer look at a given cross-section (here, on 3/31/1989), we see what’s actually going on:

t <- 231

par(mfrow=c(2,2))

plot(mats, data.diebold.li.final[t, -1], xlab = "time to maturity",
 ylab = "yield to maturity", col = "red", main = "Observed data")

plot(mats, yM.NS[t, ], type = 'l', xlab = "time to maturity",
 ylab = "yield to maturity", col = "blue", main = "Nelson-Siegel fit")
points(mats, data.diebold.li.final[t, -1], col="red")

plot(mats, yM.SV[t,], type='l', xlab = "time to maturity",
 ylab = "yield to maturity", col = "blue", main = "Svensson fit")
points(mats, data.diebold.li.final[t, -1], col="red")

plot(mats, yM.SW[t,], type='l', xlab = "time to maturity",
 ylab = "yield to maturity", col = "blue", main = "Smith-Wilson fit")
points(mats, data.diebold.li.final[t, -1], col="red")

[Figure: observed data and NS, SV, SW fits on 3/31/1989]

And the same plots for 7/31/1989 (with t == 235):

[Figure: observed data and NS, SV, SW fits on 7/31/1989]

NS fits well. SV fits a little bit better, due to its extended number of parameters (six, versus four for NS). SW fits perfectly through each point, due to its laaarge number of parameters (one per observed maturity).

Let’s see how accurate the values produced by the 3 methods are on out-of-sample data, using a simple machine-learning test-set approach (k-fold cross-validation could be used as well; it is computationally more expensive, but simple too; a sketch follows the list below). I divide the sample data into 2 sets:

  • A learning set, with the zero rates observed at times to maturity: c(1, 6, 9, 15, 18, 21, 24, 30, 48, 72, 84, 96, 120)/12
  • A test set, with the zero rates observed at times to maturity: c(3, 12, 36, 60, 108)/12
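
For the record, here is what a k-fold split of the maturities could look like (a sketch only; the number of folds and the seed are arbitrary choices of mine, and it is not used below):

# Sketch: partition the 18 maturity indices into k folds;
# each fold would play, in turn, the role of the test set
k <- 6
set.seed(123)
folds <- split(sample(nbmats), rep(seq_len(k), length.out = nbmats))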

The models’ parameters are estimated on the learning set, and the test set is used to assess how well the models predict values they haven’t seen:

# Maturities for test set
matsoutsample <- c(3, 12, 36, 60, 108)/12
matchoutsample <- match(matsoutsample, mats)
m <- length(matsoutsample)

# Maturities for learning set
matsinsample <- mats[-matchoutsample]

 

# Residuals matrices
res.NS <- matrix(NA, nrow = nrz, ncol = length(matchoutsample))
res.SV <- matrix(NA, nrow = nrz, ncol = length(matchoutsample))
res.SW <- matrix(NA, nrow = nrz, ncol = length(matchoutsample))

# Loop over the dates from January 1970 through December 2000
for (t in seq_len(nbdates))
{
 # Learning-set yields: drop the date column and the test maturities
 # (rate columns are shifted by one, since column 1 holds the date)
 yM.t <- as.numeric(data.diebold.li.final[t, -c(1, matchoutsample + 1)])

 # All 18 observed yields, used as the benchmark
 yM.t.bench <- as.numeric(data.diebold.li.final[t, -1])

 res.NS[t, ] <- (fitted(ycinter(yM = yM.t, matsin = matsinsample, matsout = mats,
 method = "NS", typeres = "rates")) - yM.t.bench)[matchoutsample]

 res.SV[t, ] <- (fitted(ycinter(yM = yM.t, matsin = matsinsample, matsout = mats,
 method = "SV", typeres = "rates")) - yM.t.bench)[matchoutsample]

 res.SW[t, ] <- (fitted(ycinter(yM = yM.t, matsin = matsinsample, matsout = mats,
 method = "SW", typeres = "rates")) - yM.t.bench)[matchoutsample]
}

For each model, we get the 372 mean squared errors as follows:

# One mean squared error per observation date (m = 5 held-out maturities)
mse.ns <- apply(res.NS, 1, function(x) crossprod(x))/m
mse.sv <- apply(res.SV, 1, function(x) crossprod(x))/m
mse.sw <- apply(res.SW, 1, function(x) crossprod(x))/m
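
Since crossprod(x) is just the sum of the squared components of x, an equivalent and arguably more direct formulation is:

# Same quantities: per-date mean of squared out-of-sample residuals
mse.ns <- rowMeans(res.NS^2)
mse.sv <- rowMeans(res.SV^2)
mse.sw <- rowMeans(res.SW^2)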

A summary of these mean squared errors shows that SW actually overfits, and fails to predict the out-of-sample data:

summary(mse.ns)
 Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0005895 0.0054050 0.0117700 0.0227100 0.0198400 0.5957000
summary(mse.sv)
 Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0003694 0.0075310 0.0154200 0.0329400 0.0319700 0.6079000
summary(mse.sw)
 Min. 1st Qu. Median Mean 3rd Qu. Max.
 0.057 0.556 1.478 431.200 10.820 29040.000
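
One way to visualize how far apart the three error distributions are (my own quick addition; the log scale is needed because of SW's huge outliers):

# Distributions of the 372 out-of-sample MSEs, log scale
boxplot(list(NS = mse.ns, SV = mse.sv, SW = mse.sw), log = "y",
 ylab = "out-of-sample MSE (log scale)")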
