Fast computation of cross-validation in linear models


The leave-one-out cross-validation statistic is given by

    \[\text{CV} = \frac{1}{N} \sum_{i=1}^N e_{[i]}^2,\]

where e_{[i]} = y_i - \hat{y}_{[i]}, y_1,\dots,y_N are the observations, and \hat{y}_{[i]} is the predicted value obtained when the model is estimated with the ith case deleted. This is also sometimes known as the PRESS (Prediction Residual Sum of Squares) statistic.
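
For concreteness, here is a brute-force R computation of CV that refits the model once for each omitted case; the simulated data set and the variable names below are illustrative only.

    # Brute-force leave-one-out CV: refit the model N times, once per omitted case.
    # The data are simulated purely for illustration.
    set.seed(123)
    N <- 50
    dat <- data.frame(x = rnorm(N))
    dat$y <- 1 + 2 * dat$x + rnorm(N)

    cv_naive <- mean(sapply(seq_len(N), function(i) {
      fit_i <- lm(y ~ x, data = dat[-i, ])            # model without the ith case
      e_i   <- dat$y[i] - predict(fit_i, dat[i, ])    # prediction error for case i
      e_i^2
    }))
    cv_naive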

It turns out that for linear models, we do not actually have to estimate the model N times, once for each omitted case. Instead, CV can be computed after estimating the model once on the complete data set.

Suppose we have a linear regression model \bm{Y} = \bm{X}\bm{\beta} + \bm{e}. Then \hat{\bm{\beta}} = (\bm{X}'\bm{X})^{-1}\bm{X}'\bm{Y} and \bm{H} = \bm{X}(\bm{X}'\bm{X})^{-1}\bm{X}' is the “hat-matrix”. It has this name because it is used to compute \bm{\hat{Y}} = \bm{X}\hat{\bm{\beta}} = \bm{H}\bm{Y}. If the diagonal values of \bm{H} are denoted by h_{1},\dots,h_{N}, then the leave-one-out cross-validation statistic can be computed using

    \[ \text{CV} = \frac{1}{N}\sum_{i=1}^N [e_{i}/(1-h_{i})]^2,\]

where e_i = y_i - \hat{y}_i and \hat{y}_i is the predicted value obtained when the model is estimated with all data included. This is a remarkable result, and is given without proof in Section 5.5 of my forecasting textbook.
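
Continuing the sketch above, the same statistic can be computed from a single fit, using only the ordinary residuals and the diagonal of the hat matrix (which R also provides directly via hatvalues()):

    # Fast CV from the single fit on the complete data set (same simulated data).
    fit <- lm(y ~ x, data = dat)

    # Hat values: diagonal of H = X (X'X)^{-1} X'; hatvalues() returns the same thing.
    X <- model.matrix(fit)
    h <- diag(X %*% solve(crossprod(X)) %*% t(X))
    all.equal(unname(h), unname(hatvalues(fit)))      # TRUE

    # CV from ordinary residuals and hat values only -- no refitting.
    e <- residuals(fit)
    cv_fast <- mean((e / (1 - h))^2)
    all.equal(cv_fast, cv_naive)                      # TRUE: matches the N-refit version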

I am teaching my second-year forecasting class about this result tomorrow, and I thought my students might like to see the proof. What follows is the simplest proof I know (adapted from Seber and Lee, 2003).

Proof

Let \bm{X}_{[i]} and \bm{Y}_{[i]} be similar to \bm{X} and \bm{Y} but with the ith row deleted in each case. Let \bm{x}'_i be the ith row of \bm{X} and let

    \[ \hat{\bm{\beta}}_{[i]} = (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1}\bm{X}_{[i]}' \bm{Y}_{[i]} \]

be the estimate of \bm{\beta} without the ith case. Then e_{[i]} = y_i - \bm{x}_i'\hat{\bm{\beta}}_{[i]}.

Now \bm{X}_{[i]}'\bm{X}_{[i]} = (\bm{X}'\bm{X} - \bm{x}_i\bm{x}_i') and \bm{x}_i'(\bm{X}'\bm{X})^{-1}\bm{x}_i = h_i. So by the Sherman–Morrison–Woodbury formula,

    \[ (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1} = (\bm{X}'\bm{X})^{-1} + \frac{(\bm{X}'\bm{X})^{-1}\bm{x}_i\bm{x}_i'(\bm{X}'\bm{X})^{-1}}{1-h_i}. \]
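
This rank-one identity is easy to check numerically; a small sketch reusing the design matrix from above, with the choice i = 1 being arbitrary:

    # Numerical check of the rank-one downdate for an arbitrary case (i = 1 here),
    # reusing the design matrix X from the sketch above.
    i       <- 1
    xi      <- X[i, , drop = FALSE]                   # x_i' as a 1 x p row vector
    XtX_inv <- solve(crossprod(X))                    # (X'X)^{-1}
    hi      <- drop(xi %*% XtX_inv %*% t(xi))         # h_i

    lhs <- solve(crossprod(X[-i, , drop = FALSE]))    # (X_[i]' X_[i])^{-1}
    rhs <- XtX_inv + (XtX_inv %*% t(xi) %*% xi %*% XtX_inv) / (1 - hi)
    all.equal(lhs, rhs)                               # TRUE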

Also note that \bm{X}_{[i]}' \bm{Y}_{[i]} = \bm{X}'\bm{Y} - \bm{x}_i y_i. Therefore

\begin{align*}
    \hat{\bm{\beta}}_{[i]} &= (\bm{X}_{[i]}'\bm{X}_{[i]})^{-1} (\bm{X}'\bm{Y} - \bm{x}_i y_i)\\
    &= \left[ (\bm{X}'\bm{X})^{-1} + \frac{ (\bm{X}'\bm{X})^{-1}\bm{x}_i\bm{x}_i'(\bm{X}'\bm{X})^{-1} }{1-h_i} \right] (\bm{X}'\bm{Y} - \bm{x}_i y_i)\\
    &= \hat{\bm{\beta}} - \left[ \frac{ (\bm{X}'\bm{X})^{-1}\bm{x}_i}{1-h_i}\right] \left[y_i(1-h_i) - \bm{x}_i' \hat{\bm{\beta}} + h_i y_i \right]\\
    &= \hat{\bm{\beta}} - (\bm{X}'\bm{X})^{-1}\bm{x}_i e_i / (1-h_i).
    \end{align*}

Thus

\begin{align*}
    e_{[i]} &= y_i - \bm{x}_i'\hat{\bm{\beta}}_{[i]} \\
    &= y_i - \bm{x}_i' \left[ \hat{\bm{\beta}} - (\bm{X}'\bm{X})^{-1}\bm{x}_i e_i/(1-h_i)\right] \\
    &= e_i + h_i e_i/(1-h_i) \\
    &= e_i/(1-h_i),
    \end{align*}

and the result follows.
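
Both the update for \hat{\bm{\beta}}_{[i]} and the final identity e_{[i]} = e_i/(1-h_i) can be confirmed numerically; the sketch below reuses the objects defined in the earlier snippets.

    # Check the update for the deleted coefficients and the deleted residual,
    # reusing fit, dat, i, xi, XtX_inv and hi from the snippets above.
    beta_hat <- coef(fit)
    beta_del <- coef(lm(y ~ x, data = dat[-i, ]))     # brute-force refit without case i
    beta_upd <- beta_hat - drop(XtX_inv %*% t(xi)) * residuals(fit)[i] / (1 - hi)
    all.equal(beta_del, beta_upd)                     # TRUE

    # The deleted residual equals e_i / (1 - h_i):
    all.equal(unname(dat$y[i] - drop(xi %*% beta_del)),
              unname(residuals(fit)[i] / (1 - hi)))   # TRUE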


This result is implemented in the CV() function from the forecast package for R.
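
For example, applying CV() to the fit from the sketches above (assuming the forecast package is installed) returns the leave-one-out statistic computed from the single fit, along with some information criteria:

    # CV() computes the leave-one-out statistic from the single fit.
    library(forecast)
    CV(fit)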
