Quite some time ago, back in 2011, I wrote a post that used an additive model to fit a smooth trend to the then-current Hadley Centre/CRU global temperature time series data set. Since then the media and scientific papers have been full of reports of record warm temperatures in the past couple of years, of controversies (imagined) regarding data changes made to suit the hypothesis of human-induced global warming, and the brouhaha over whether global warming had stalled: the great global warming hiatus or pause. So it seemed like a good time to revisit that analysis and update it using the latest HadCRUT data.
A further motivation was my reading Cahill, Rahmstorf, and Parnell (2015), in which the authors use a Bayesian change point model for global temperatures. This model is essentially piecewise linear but with smooth transitions between the piecewise linear components. I don't immediately see where in their Bayesian model the smooth transitions come from, but that's what they show. My gut reaction was: why piecewise linear with smooth transitions? Why not smooth everywhere? And that's what the additive model I show here assumes.
First, I grab the data (Morice et al. 2012) from the Hadley Centre’s website and load it into R
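A minimal sketch of that data step is below. The exact file name is an assumption on my part: the annual ensemble-median series is versioned, so the URL will change as HadCRUT4 is updated.

```r
## Download the annual HadCRUT4 ensemble-median series and read it into R.
## NOTE: the file name below is an assumption; check the Hadley Centre
## website for the current version of the annual time series file
URL <- "https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.6.0.0.annual_ns_avg.txt"
tmpf <- tempfile()
download.file(URL, tmpf)

## first two columns are the year and the ensemble-median anomaly
gtemp <- read.table(tmpf, colClasses = rep("numeric", 12))[, 1:2]
names(gtemp) <- c("Year", "Temperature")
head(gtemp)
```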
The values in Temperature are anomalies relative to 1961–1990, in degrees C.
The model I fitted in the last post was

\[ y = \beta_0 + f(\text{Year}) + \varepsilon, \quad \varepsilon \sim N(0, \sigma^2 \Lambda) \]

where we have a smooth function of Year as the trend, and allow for possibly correlated residuals via the correlation matrix \( \Lambda \).
The data set contains a partial set of observations for 2016, but seeing as that year was (at the time of writing) incomplete, I delete those values.
The data are shown below
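A quick sketch of that plot, assuming the gtemp data frame created in the data step above:

```r
## Plot the annual global mean temperature anomalies; assumes the
## gtemp data frame (Year, Temperature) prepared earlier
library("ggplot2")
ggplot(gtemp, aes(x = Year, y = Temperature)) +
    geom_point() +
    labs(y = expression(Temperature~anomaly~(degree*C)))
```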
The model described above can be fitted using the gamm() function in the mgcv package. There are other options that allow one to use gam(), or even bam(), in the same package, which are simpler, but I want to keep this post consistent with the one from a few years ago, so gamm() it is. Recall that gamm() represents the additive model as a mixed effects model via the well-known equivalence between random effects and splines, and fits the model using lme(). This allows for correlation structures in the residuals. Previously we saw that an AR(1) process in the residuals was the best fitting of the models tried, so we start with that and then try a model with AR(2) errors.
A generalised likelihood ratio test suggests little support for the more complex AR(2) errors model
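Assuming the AR(1) and AR(2) fits are stored as m1 and m2, the test is a one-liner on the underlying mixed-model fits:

```r
## Generalised likelihood ratio test of AR(1) vs AR(2) errors; gamm()
## returns a list, and the $lme component holds the lme() fit
anova(m1$lme, m2$lme)
```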
The AR(1) has successfully modelled most of the residual correlation
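One way to check this, assuming the AR(1) fit is stored as m1, is to look at the autocorrelation function of the normalized residuals, which account for the fitted AR(1) process:

```r
## ACF of the normalized residuals from the lme part of the fit;
## little residual autocorrelation should remain if AR(1) suffices
ACF <- acf(resid(m1$lme, type = "normalized"), plot = FALSE)
plot(ACF)
```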
Before drawing the fitted trend, I want to put a simultaneous confidence interval around the estimate. mgcv makes this very easy to do via posterior simulation. To simulate from the fitted model, I have written a simulate.gamm() method for the simulate() generic that ships with R. The code below downloads the Gist containing the simulate.gamm() code and then uses it to simulate from the model at 200 locations over the time period of the observations. I’ve written about posterior simulation from GAMs before, so if the code below or the general idea isn’t clear, I suggest you check out the earlier post.
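For readers who don't want to fetch the Gist, the essence of the posterior simulation can be hand-rolled in a few lines; this is a sketch of the idea rather than the simulate.gamm() method itself, and it assumes the fitted model m1 and data frame gtemp from earlier:

```r
## Draw spline coefficients from their approximate posterior and map
## them to fitted trend values at 200 locations
library("MASS")                    # for mvrnorm()
set.seed(1)
newd <- data.frame(Year = seq(min(gtemp$Year), max(gtemp$Year),
                              length = 200))
Xp   <- predict(m1$gam, newd, type = "lpmatrix")  # coefs -> fitted values
beta <- mvrnorm(200, coef(m1$gam), vcov(m1$gam))  # 200 posterior draws
sims <- Xp %*% t(beta)             # each column is one simulated trend
```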
Having arranged the fitted values and the upper and lower simultaneous confidence limits tidily, they can easily be added to the existing plot of the data.
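A sketch of how the simultaneous interval itself can be formed from the posterior draws: assuming the simulations from the step above are stored column-wise in a matrix sims at the prediction times in newd, the trick is to find the critical value that covers the worst-case deviation across the whole function, not just pointwise:

```r
## Simultaneous 95% interval from posterior draws; assumes sims and
## newd from the simulation step above and the fitted model m1
pred <- predict(m1$gam, newd, se.fit = TRUE)
se   <- pred$se.fit

## absolute standardized deviation of each draw from the fitted trend
absDev <- abs(sweep(sims, 1, pred$fit) / se)

## 95th percentile of the per-draw maximum deviation gives the
## simultaneous critical value (larger than the pointwise 1.96)
crit  <- quantile(apply(absDev, 2, max), prob = 0.95)
upper <- pred$fit + crit * se
lower <- pred$fit - crit * se
```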
Whilst the simultaneous confidence interval shows the uncertainty in the fitted trend, it isn’t as clear about what form this uncertainty takes; for example, periods of little change or large uncertainty are often characterised by a wide range of functional forms, not just flat, smooth functions. To get a sense of the uncertainty in the shapes of the simulated trends we can plot some of the draws from the posterior distribution of the model.
If you look closely at the period 1850–1900, you’ll notice a wide range of trends through this period, each of which is consistent with the fitted model but illustrates the uncertainty in the estimates of the spline coefficients. An additional factor is that these splines have a global amount of smoothness; once the smoothness parameter(s) are estimated, the smoothness allowance this affords is spread evenly over the fitted function. Adaptive splines would solve this problem as they in effect allow you to spread the smoothness allowance unevenly, using it sparingly where there is no smooth variation in the data and applying it liberally where there is.
An instructive visualisation for the period of the purported pause or hiatus in global warming is to look at the shapes of the posterior simulations and the slopes of the trends for each year. I first look at the posterior simulations:
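The first of these can be sketched as below; the model object m1 is assumed from the fits above, and plotting 50 draws matches the figure described:

```r
## Simulate trends over the putative pause period and plot 50 draws;
## assumes the fitted model m1 from earlier
library("MASS")
set.seed(42)
pauseD <- data.frame(Year = seq(1998, 2015, length = 100))
Xp   <- predict(m1$gam, pauseD, type = "lpmatrix")
beta <- mvrnorm(50, coef(m1$gam), vcov(m1$gam))
matplot(pauseD$Year, Xp %*% t(beta), type = "l", lty = 1,
        col = rgb(0, 0, 0, 0.3),
        xlab = "Year", ylab = "Temperature anomaly")
```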
Whilst the plot only shows 50 of the 10,000 posterior draws, it’s pretty clear that, in these data at least, there is little or no support for the pause hypothesis; most of the posterior simulations are linearly increasing over the period of interest. Only one or two show a marked shallowing of the slope of the simulated trend through the period.
The first derivatives of the fitted trend can be used to determine where temperatures are increasing or decreasing. Using the standard error of the derivative or posterior simulation we can also say where the confidence interval on the derivative doesn’t include zero, suggesting statistically significant change in temperature.
The code below uses some functions I wrote to compute the first derivatives of GAM(M) model terms via posterior simulation. I’ve written about this method before, so I suggest you check out that post if any of this isn’t clear.
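The core of those functions is a finite-difference approximation; a hand-rolled sketch, assuming the fitted model m1 and data frame gtemp from earlier, looks like this:

```r
## Finite-difference first derivative of the fitted trend and its
## standard error, via the linear predictor matrix
eps  <- 1e-5
newd <- data.frame(Year = seq(min(gtemp$Year), max(gtemp$Year),
                              length = 200))
X0 <- predict(m1$gam, newd, type = "lpmatrix")
X1 <- predict(m1$gam, transform(newd, Year = Year + eps),
              type = "lpmatrix")
Xd <- (X1 - X0) / eps          # maps coefficients to derivative values

deriv    <- drop(Xd %*% coef(m1$gam))
deriv.se <- sqrt(rowSums((Xd %*% vcov(m1$gam)) * Xd))
```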
A ggplot2 version of the derivatives is produced using the code below. The two additional geom_line() calls add thick lines over sections of the derivative plot to illustrate those points where zero is not contained within the confidence interval of the first derivative.
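A sketch of such a plot, assuming vectors deriv and deriv.se of derivative estimates and standard errors at the prediction times in newd, as produced by the derivative computation described above:

```r
## Derivative of the trend with a pointwise 95% interval; thick line
## segments mark where the interval excludes zero
library("ggplot2")
dd <- data.frame(Year  = newd$Year,
                 deriv = deriv,
                 lower = deriv - 1.96 * deriv.se,
                 upper = deriv + 1.96 * deriv.se)
dd$incr <- with(dd, ifelse(lower > 0, deriv, NA))  # significant increase
dd$decr <- with(dd, ifelse(upper < 0, deriv, NA))  # significant decrease

ggplot(dd, aes(x = Year, y = deriv)) +
    geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.2) +
    geom_line() +
    geom_line(aes(y = incr), size = 1.5) +
    geom_line(aes(y = decr), size = 1.5) +
    labs(y = "First derivative of trend")
```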
Looking at this plot, despite the large (and expected) uncertainty in the derivative of the fitted trend towards the end of the observation period, the first derivatives of at least 95% of the 10,000 posterior simulations are all bounded well above zero. I’ll take a closer look at this now, plotting kernel density estimates of the posterior distribution of first derivatives evaluated at each year for the period of interest.
First I generate another 10,000 simulations from the posterior of the fitted model, this time for each year in the interval 1998–2015. Then I do a little processing to get the derivatives into a format suitable for plotting with ggplot and finally create kernel density estimate plots faceted by Year.
We can also look at the smallest derivative for each year over all of the 10,000 posterior simulations:
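This can be sketched as below, again via the finite-difference approach, assuming the fitted model m1 from earlier:

```r
## Simulate 10,000 posterior derivatives for each year 1998-2015 and
## take the smallest draw per year
library("MASS")
set.seed(123)
eps <- 1e-5
yrs <- data.frame(Year = 1998:2015)
Xd  <- (predict(m1$gam, transform(yrs, Year = Year + eps),
                type = "lpmatrix") -
        predict(m1$gam, yrs, type = "lpmatrix")) / eps
derivSims <- Xd %*% t(mvrnorm(10000, coef(m1$gam), vcov(m1$gam)))
data.frame(Year = yrs$Year, min.deriv = apply(derivSims, 1, min))
```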
Only 4 of the 18 years have a single simulation with a derivative less than 0. We can also plot all the kernel density estimates on the same plot to see if there is much variation between years (there doesn’t appear to be much going on from the previous figures).
As anticipated, there’s very little between-year shift in the slopes of the trends simulated from the posterior distribution of the model.
Returning to Cahill, Rahmstorf, and Parnell (2015) for a moment; the fitted trend from their Bayesian change point model is very similar to the fitted spline. There are some differences in the early part of the series; where their model has a single piecewise linear function through 1850–1900, the additive model suggests a small decrease in global temperatures leading up to 1900. Thereafter the models are very similar, with the exception that the smooth transitions between periods of increase are somewhat longer with the additive model than the one of Cahill, Rahmstorf, and Parnell (2015).
Cahill, Niamh, Stefan Rahmstorf, and Andrew C. Parnell. 2015. “Change Points of Global Temperature.” Environmental Research Letters 10 (8): 084002. doi:10.1088/1748-9326/10/8/084002.
Morice, Colin P, John J Kennedy, Nick A Rayner, and Phil D Jones. 2012. “Quantifying Uncertainties in Global and Regional Temperature Change Using an Ensemble of Observational Estimates: The HadCRUT4 Data Set.” J. Geophys. Res. 117 (D8): D08101.