Fast computation of cross-validation in linear models

March 17, 2014
(This article was first published on Hyndsight » R, and kindly contributed to R-bloggers)

The leave-one-out cross-validation statistic is given by

    \[\text{CV} = \frac{1}{N} \sum_{i=1}^N e_{[i]}^2,\]

where e_{[i]} = y_i - \hat{y}_{[i]}, the values y_1,\dots,y_N are the observations, and \hat{y}_{[i]} is the predicted value obtained when the model is estimated with the ith case deleted. This is also sometimes known as the PRESS (Prediction Residual Sum of Squares) statistic.
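
Computed directly from this definition, CV requires N separate model fits. A minimal brute-force sketch in R is below; the lm formula and the mtcars data are illustrative choices, not part of the original post.

    # Brute-force leave-one-out CV: refit the model N times, each time
    # dropping one observation, and average the squared prediction errors.
    loocv_brute <- function(formula, data) {
      n <- nrow(data)
      e <- numeric(n)
      for (i in seq_len(n)) {
        fit  <- lm(formula, data = data[-i, ])        # fit without case i
        yhat <- predict(fit, newdata = data[i, ])     # predict the held-out case
        y    <- model.response(model.frame(formula, data[i, ]))
        e[i] <- y - yhat
      }
      mean(e^2)
    }

    loocv_brute(mpg ~ wt + hp, data = mtcars)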

It turns out that for linear models, we do not actually have to estimate the model N times, once for each omitted case. Instead, CV can be computed after estimating the model once on the complete data set.

Suppose we have a linear regression model \bm{Y} = \bm{X}\bm{\beta} + \bm{e}. Then \hat{\bm{\beta}} = (\bm{X}'\bm{X})^{-1}\bm{X}'\bm{Y} and \bm{H} = \bm{X}(\bm{X}'\bm{X})^{-1}\bm{X}' is the “hat-matrix”. It has this name because it is used to compute \bm{\hat{Y}} = \bm{X}\hat{\bm{\beta}} = \bm{H}\bm{Y}. If the diagonal values of \bm{H} are denoted by h_{1},\dots,h_{N}, then the leave-one-out cross-validation statistic can be computed using

    \[ \text{CV} = \frac{1}{N}\sum_{i=1}^N [e_{i}/(1-h_{i})]^2,\]

where e_i = y_i - \hat{y}_i and \hat{y}_i is the predicted value obtained when the model is estimated with all data included. This is a remarkable result, and is given without proof in Section 5.5 of my forecasting textbook.
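
In R, this shortcut needs only a single fit, with residuals() and hatvalues() supplying e_i and h_i. A minimal sketch, again on an illustrative model:

    # Fast leave-one-out CV from a single fit, using ordinary residuals
    # and the diagonal of the hat matrix.
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    e <- residuals(fit)      # e_i
    h <- hatvalues(fit)      # h_i
    cv <- mean((e / (1 - h))^2)
    cv
    # Up to floating-point error, this matches the brute-force loop above.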

I am teaching my second-year forecasting class about this result tomorrow, and I thought my students might like to see the proof. What follows is the simplest proof I know (adapted from Seber and Lee, 2003).

Proof

Let \bm{X}_{[i]} and \bm{Y}_{[i]} be similar to \bm{X} and \bm{Y} but with the ith row deleted in each case. Let \bm{x}'_i be the ith row of \bm{X} and let

    \[ \hat{\bm{\beta}}_{[i]} = (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1}\bm{X}_{[i]}' \bm{Y}_{[i]} \]

be the estimate of \bm{\beta} without the ith case. Then e_{[i]} = y_i - \bm{x}_i'\hat{\bm{\beta}}_{[i]}.

Now \bm{X}_{[i]}'\bm{X}_{[i]} = (\bm{X}'\bm{X} - \bm{x}_i\bm{x}_i') and \bm{x}_i'(\bm{X}'\bm{X})^{-1}\bm{x}_i = h_i. So by the Sherman–Morrison–Woodbury formula,

    \[ (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1} = (\bm{X}'\bm{X})^{-1} + \frac{(\bm{X}'\bm{X})^{-1}\bm{x}_i\bm{x}_i'(\bm{X}'\bm{X})^{-1}}{1-h_i}. \]
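
(As a numerical aside, this rank-one update is easy to check in R on a small simulated design matrix; the dimensions and seed below are arbitrary.)

    # Numerical check of the Sherman–Morrison update for (X'X - x_i x_i')^{-1}.
    set.seed(1)
    X  <- cbind(1, matrix(rnorm(20), ncol = 2))   # 10 x 3 design with intercept
    i  <- 4
    xi <- X[i, , drop = FALSE]                    # x_i' as a 1 x p row vector
    XtX   <- crossprod(X)                         # X'X
    XtX_i <- crossprod(X[-i, , drop = FALSE])     # X_{[i]}'X_{[i]}
    h_i   <- drop(xi %*% solve(XtX) %*% t(xi))    # leverage h_i
    lhs <- solve(XtX_i)
    rhs <- solve(XtX) + solve(XtX) %*% t(xi) %*% xi %*% solve(XtX) / (1 - h_i)
    max(abs(lhs - rhs))                           # effectively zero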

Also note that \bm{X}_{[i]}' \bm{Y}_{[i]} = \bm{X}'\bm{Y} - \bm{x}_i y_i. Therefore

    \begin{align*}
    \hat{\bm{\beta}}_{[i]}
      &= (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1} (\bm{X}'\bm{Y} - \bm{x}_i y_i)\\
      &= \left[ (\bm{X}'\bm{X})^{-1} + \frac{(\bm{X}'\bm{X})^{-1}\bm{x}_i\bm{x}_i'(\bm{X}'\bm{X})^{-1}}{1-h_i} \right] (\bm{X}'\bm{Y} - \bm{x}_i y_i)\\
      &= \hat{\bm{\beta}} - \left[\frac{(\bm{X}'\bm{X})^{-1}\bm{x}_i}{1-h_i}\right] \left[y_i(1-h_i) - \bm{x}_i'\hat{\bm{\beta}} + h_i y_i\right]\\
      &= \hat{\bm{\beta}} - (\bm{X}'\bm{X})^{-1}\bm{x}_i e_i/(1-h_i).
    \end{align*}

Thus

    \begin{align*}
    e_{[i]} &= y_i - \bm{x}_i'\hat{\bm{\beta}}_{[i]}\\
            &= y_i - \bm{x}_i'\left[\hat{\bm{\beta}} - (\bm{X}'\bm{X})^{-1}\bm{x}_i e_i/(1-h_i)\right]\\
            &= e_i + h_i e_i/(1-h_i)\\
            &= e_i/(1-h_i),
    \end{align*}

and the result follows.
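
A quick numerical check of the identity e_{[i]} = e_i/(1-h_i), again with an illustrative model and an arbitrary choice of i:

    # Compare a directly refitted leave-one-out residual with the shortcut.
    fit   <- lm(mpg ~ wt + hp, data = mtcars)
    i     <- 7
    fit_i <- lm(mpg ~ wt + hp, data = mtcars[-i, ])
    e_loo  <- mtcars$mpg[i] - predict(fit_i, newdata = mtcars[i, ])
    e_fast <- residuals(fit)[i] / (1 - hatvalues(fit)[i])
    c(direct = unname(e_loo), shortcut = unname(e_fast))   # the two should agree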


This result is implemented in the CV() function from the forecast package for R.
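
A short usage sketch follows; note that the exact set of measures reported alongside CV may depend on the package version.

    library(forecast)
    fit <- lm(mpg ~ wt + hp, data = mtcars)
    CV(fit)   # reports CV along with information criteria such as AIC, AICc and BIC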
