In this post, I will illustrate the use of prediction intervals for the comparison of measurement methods. In the example a new spectral method for measuring whole blood hemoglobin is compared with a reference method.
First, let’s discuss the difference between a confidence interval and a prediction interval.
Prediction interval versus Confidence interval
Very often a confidence interval is misinterpreted as a prediction interval, leading to unrealistically precise predictions. Prediction intervals (PI) resemble confidence intervals (CI), but the width of a PI is by definition larger than the width of the corresponding CI.
Let’s assume that we measure the whole blood hemoglobin concentration in a random sample of 100 persons. We obtain the estimated mean (Est_mean), limits of the confidence interval (CI_Lower and CI_Upper) and limits of the prediction interval (PI_Lower and PI_Upper):
(The R-code to do this is in the next paragraph)
A Confidence interval (CI) is an interval of good estimates of the unknown true population parameter. For a 95% confidence interval for the mean, we can state that if we repeated our sampling process infinitely, 95% of the constructed confidence intervals would contain the true population mean.
In other words, there is a 95% chance of selecting a sample such that the 95% confidence interval calculated from that sample contains the true population mean.
Interpretation of the 95% confidence interval in our example:
-The values contained in the interval [138 g/L, 143 g/L] are good estimates of the unknown mean whole blood hemoglobin concentration in the population. In general, if we repeated our sampling process infinitely, 95% of the confidence intervals constructed this way would contain the true mean hemoglobin concentration.
A Prediction interval (PI) is an estimate of an interval in which a future observation will fall, with a certain confidence level, given the observations already made. For a 95% prediction interval we can state that if we repeated our sampling process infinitely, 95% of the constructed prediction intervals would contain the new observation.
Interpretation of the 95% prediction interval in the above example:
-Given the observed whole blood hemoglobin concentrations, the whole blood hemoglobin concentration of a new sample will be between 113 g/L and 167 g/L with a confidence of 95%. In general, if we repeated our sampling process infinitely, 95% of the prediction intervals constructed this way would contain the new hemoglobin measurement.
Remark: Very often we read the interpretation “The whole blood hemoglobin concentration of a new sample will be between 113 g/L and 167 g/L with a probability of 95%.” (for example on Wikipedia). This interpretation is only correct in the theoretical situation where the parameters (the true mean and standard deviation) are known.
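The distinction can be checked by simulation: draw many samples from a known normal distribution (parameters chosen to match the example above) and count how often the 95% CI covers the true mean versus how often the 95% PI covers a fresh observation. A small sketch:

```r
set.seed(2)  # assumed seed, for reproducibility
true_mean <- 139; true_sd <- 14.75; n <- 100

covers <- replicate(2000, {
  x <- rnorm(n, true_mean, true_sd)
  m <- mean(x); s <- sd(x); tq <- qt(0.975, n - 1)
  # Does the 95% CI cover the true mean?
  ci_ok <- abs(true_mean - m) <= tq * s / sqrt(n)
  # Does the 95% PI cover a brand-new observation?
  pi_ok <- abs(rnorm(1, true_mean, true_sd) - m) <= tq * s * sqrt(1 + 1/n)
  c(ci_ok, pi_ok)
})
rowMeans(covers)  # both proportions should be close to 0.95
```

Both coverage proportions come out near 95%, but the PI achieves this with a much wider interval because it must also absorb the variability of the new observation.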
Estimating a prediction interval in R
First, let’s simulate some data. The sample size in the plot above was (n=100). Now, to see the effect of the sample size on the width of the confidence interval and the prediction interval, let’s take a “sample” of 400 hemoglobin measurements using the same parameters:
hemoglobin <- rnorm(400, mean = 139, sd = 14.75)
df <- data.frame(hemoglobin)
Although we don’t need a linear regression yet, I’d like to use the lm() function, which makes it very easy to construct a confidence interval (CI) and a prediction interval (PI). We can estimate the mean by fitting a “regression model” with an intercept only (no slope). The default confidence level is 95%.
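The confidence interval can be obtained by calling predict() with interval = "confidence" on the intercept-only fit. A self-contained sketch (the seed is my own choice, for reproducibility):

```r
set.seed(1)  # assumed seed
df <- data.frame(hemoglobin = rnorm(400, mean = 139, sd = 14.75))

# Intercept-only "regression model": the fitted value is just the sample mean
fit <- lm(hemoglobin ~ 1, data = df)
CI <- predict(fit, interval = "confidence")  # default level is 95%
head(CI, 2)  # columns: fit, lwr, upr; one identical row per observation
```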
The CI object has a length of 400. But since there is no slope in our “model”, each row is exactly the same.
PI <- predict(lm(df$hemoglobin ~ 1), interval = "predict")
## Warning: predictions on current data refer to _future_ responses
##        fit      lwr      upr
## 139.2474 111.1134 167.3815
We get a “warning” that “predictions on current data refer to future responses”. That’s exactly what we want, so no worries there. As you see, the column names of the objects CI and PI are the same.
Now, let’s visualize the confidence and the prediction interval.
The code below is not very elegant but I like the result (tips are welcome :-))
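A rough base-R sketch that draws the observations, the estimated mean, and both sets of interval limits (my own reconstruction, not necessarily the original code):

```r
set.seed(1)  # assumed seed
df <- data.frame(hemoglobin = rnorm(400, mean = 139, sd = 14.75))
fit <- lm(hemoglobin ~ 1, data = df)

CI <- predict(fit, interval = "confidence")
PI <- suppressWarnings(predict(fit, interval = "predict"))

plot(df$hemoglobin, pch = 20, col = "grey",
     xlab = "Observation", ylab = "Hemoglobin (g/L)")
abline(h = CI[1, "fit"], lwd = 2)                         # estimated mean
abline(h = CI[1, c("lwr", "upr")], col = "blue", lty = 2) # confidence interval
abline(h = PI[1, c("lwr", "upr")], col = "red", lty = 3)  # prediction interval
```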
The width of the confidence interval is very small, now that we have this large sample size (n=400). This is not surprising, as the estimated mean is the only source of uncertainty. In contrast, the width of the prediction interval is still substantial. The prediction interval has two sources of uncertainty: the estimated mean (just like the confidence interval) and the random variance of new observations.
Example: comparing a new with a reference measurement method
A prediction interval can be useful in the case where a new method should replace a standard (or reference) method.
If we can predict well enough, given the new method, what the measurement by the reference method would be, then the two methods give similar information and the new method can be used.
For example, in (Tian, 2017) a new spectral method (Near-Infra-Red) to measure hemoglobin is compared with a gold standard. In contrast with the gold-standard method, the new spectral method does not require reagents. Moreover, the new method is faster. We will investigate whether we can predict well enough, based on the measured concentration of the new method, what the measurement by the gold standard would be. (Note: the measured concentrations presented below are fictive.)
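Since the concentrations are fictive anyway, a comparable dataset can be simulated; the parameter values below are my own assumptions, chosen only to produce two strongly related measurement series:

```r
set.seed(3)  # assumed seed
new_method <- runif(50, 110, 160)                        # NIR measurements, g/L
reference  <- 2 + 0.98 * new_method + rnorm(50, sd = 4)  # gold-standard measurements

fit.lm <- lm(reference ~ new_method)
plot(new_method, reference,
     xlab = "New method (g/L)", ylab = "Reference method (g/L)")
```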
Adding the fitted regression line and the identity line (dotted line):
abline(a = fit.lm$coefficients[1], b = fit.lm$coefficients[2])
abline(a = 0, b = 1, lty = 2)
If both measurement methods corresponded exactly, the intercept would be zero and the slope would be one (the dotted “identity line”).
Now let’s calculate the confidence and prediction intervals for this linear regression.
In (Bland, Altman 2003) it is proposed to calculate the average width of this prediction interval, and see whether this is acceptable.
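Following that proposal, the average width is simply the mean of upr − lwr across the observed range. A self-contained sketch (the data-generating values are my own assumptions, as above):

```r
set.seed(3)  # assumed seed
new_method <- runif(50, 110, 160)
reference  <- 2 + 0.98 * new_method + rnorm(50, sd = 4)
fit.lm <- lm(reference ~ new_method)

# Prediction interval at the observed x-values (warning about "future
# responses" is expected and harmless here)
PI <- suppressWarnings(predict(fit.lm, interval = "prediction"))
mean(PI[, "upr"] - PI[, "lwr"])  # average prediction-interval width, in g/L
```

One then judges whether an average width of this size is clinically acceptable.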
In the above example, both methods do have the same measurement scale (g/L), but the linear regression with prediction interval is particularly useful when the two methods of measurement have different units.
However, the method has some disadvantages:
Prediction intervals are very sensitive to deviations from the normal distribution.
In “standard” linear regression (Ordinary Least Squares, or OLS, regression), measurement error is allowed for the Y-variable (here, the reference method) but not for the X-variable (the new method). In other words, the absence of error on the x-axis is one of the assumptions. Since we can expect some measurement error for the new method, this assumption is violated here.
Taking into account errors on both axes
In contrast to Ordinary Least Squares (OLS) regression, Bivariate Least Square (BLS) regression takes the measurement errors of both methods (the new method and the reference method) into account. Interestingly, prediction intervals calculated with BLS are not affected when the axes are switched (del Río, 2001).
In 2017, a new R-package BivRegBLS was released. It offers several methods to assess the agreement in method comparison studies, including Bivariate Least Square (BLS) regression.
If the data are unreplicated but the variances of the measurement errors of the methods are known, the BLS() and XY.plot() functions can be used to fit a Bivariate Least Square regression line and the corresponding confidence and prediction intervals.
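A minimal sketch of what such a call might look like. The data values are hypothetical, and the argument names (xcol, ycol, var.x, var.y, accept.int) reflect my reading of the BivRegBLS documentation; check the package help pages before relying on them:

```r
# install.packages("BivRegBLS")  # if not yet installed
library(BivRegBLS)

# Hypothetical unreplicated measurements with assumed known error variances
df <- data.frame(New = c(118, 125, 131, 140, 148, 155),
                 Ref = c(117, 124, 133, 139, 150, 153))
res <- BLS(data = df, xcol = "New", ycol = "Ref",
           var.x = 4, var.y = 4)  # assumed known measurement-error variances
XY.plot(res, xname = "New method (g/L)", yname = "Reference method (g/L)")
```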
Now we would like to decide whether the new method can replace the reference method. We allow the methods to differ up to a given threshold, which is not clinically relevant. Based on this threshold an “acceptance interval” is created.
Suppose that differences up to 10 g/L (=threshold) are not clinically relevant, then the acceptance interval can be defined as Y=X±Δ, with Δ equal to 10.
If the PI is inside the acceptance interval for the measurement range of interest then the two measurement methods can be considered to be interchangeable (see Francq, 2016).
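With an OLS fit, this check can be approximated in base R by testing whether the prediction limits stay inside X ± 10 over the measurement range of interest (a rough sketch with my own assumed data; the post itself uses BLS for this):

```r
set.seed(3)  # assumed seed
new_method <- runif(50, 110, 160)
reference  <- 2 + 0.98 * new_method + rnorm(50, sd = 3)
fit.lm <- lm(reference ~ new_method)

# Prediction interval on a grid over the range of interest
grid <- data.frame(new_method = seq(120, 150, by = 1))
PI <- predict(fit.lm, newdata = grid, interval = "prediction")

# Acceptance interval Y = X +/- 10: is the whole PI inside it?
inside <- PI[, "lwr"] >= grid$new_method - 10 &
          PI[, "upr"] <= grid$new_method + 10
all(inside)
```

If all(inside) is TRUE over the clinically relevant range, the two methods can be considered interchangeable by this criterion.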
The accept.int argument of the XY.plot() function allows for a visualization of this acceptance interval.
For the measurement region 120 g/L to 150 g/L, we can conclude that the difference between both methods is acceptable. If the measurement regions below 120 g/L and above 150 g/L are important, the new method cannot replace the reference method.
Regression on replicated data
In method comparison studies, it is advised to create replicates (2 or more measurements of the same sample with the same method). An example of such a dataset:
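The structure might look like this (the values are hypothetical, for illustration only): one row per sample, with two replicates per method.

```r
# Hypothetical replicated measurements, in g/L
replicates <- data.frame(
  sample = 1:3,
  new_1 = c(131, 142, 125), new_2 = c(133, 140, 127),
  ref_1 = c(129, 144, 124), ref_2 = c(132, 143, 126)
)
replicates
```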
When replicates are available, the variance of the measurement errors is calculated for both the new and the reference method, and used to estimate the prediction interval. Again, the BLS() function and the XY.plot() function are used to estimate and plot the BLS regression line, the corresponding CI and PI.
It is clear that the prediction interval is not inside the acceptance interval here. The new method cannot replace the reference method. A possible solution is to average repeated measures. The BivRegBLS package can create prediction intervals for the mean of (2 or more) future values, too! More information in this presentation (presented at useR!2017).
In the plot above, averages of the two replicates are calculated and plotted. I’d like to see the individual measurements:
Although not appropriate in the context of method comparison studies, Pearson correlation is still frequently used. See Bland & Altman (2003) for an explanation of why correlations are not advised.
Remark: the methods presented in this blog post are not applicable to time series.
-Confidence interval and prediction interval:
Applied Linear Statistical Models, 2005, Michael Kutner, Christopher Nachtsheim, John Neter, William Li. Section 2.5
-Prediction interval for method comparison:
Bland, J. M. and Altman, D. G. (2003), Applying the right statistics: analyses of measurement studies. Ultrasound Obstet Gynecol, 22: 85-93. doi:10.1002/uog.12
section: “Appropriate use of regression”.
Francq, B. G., and Govaerts, B. (2016) How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models. Statist. Med., 35: 2328-2358. doi: 10.1002/sim.6872.
del Río, F. J., Riu, J. and Rius, F. X. (2001), Prediction intervals in linear regression taking into account errors on both axes. J. Chemometrics, 15: 773-788. doi:10.1002/cem.663
-Example of a method comparison study:
H. Tian, M. Li, Y. Wang, D. Sheng, J. Liu, and L. Zhang, “Optical wavelength selection for portable hemoglobin determination by near-infrared spectroscopy method,” Infrared Phys. Techn 86, 98-102 (2017). doi.org/10.1016/j.infrared.2017.09.004.
-the predict() and lm() functions of R:
Chambers, J. M. and Hastie, T. J. (1992) Statistical Models in S. Wadsworth & Brooks/Cole.