Quite some years ago, whilst working on the EU Sixth Framework project Eurolimpacs, I organized a workshop on statistical methods for analyzing time series data. One of the sessions was on the analysis of extremes, ably given by Paul Northrop (UCL Department of Statistical Science). That intro certainly whetted my appetite, but I never quite found the time to dig into the arcane world of extreme value theory. Two recent events rekindled my interest in extremes: Simon Wood quietly introduced into his mgcv package a family function for the generalized extreme value distribution (GEV), and I was asked to review a paper on extremes in time series. Since then I've been investigating options for fitting models for extremes to environmental time series, especially those that allow for time-varying effects of covariates on the parameters of the GEV. One of the first things I did was sit down with mgcv to get a feel for the gevlss() family function that Simon had added to the package, by repeating an analysis of a classic example data set that had been performed using the VGAM package of Thomas Yee.
The analysis I wanted to recreate was reported in a 2007 paper by Thomas Yee and Alec Stephenson (Yee and Stephenson, 2007) and concerned a time series of annual maximum sea levels at Fremantle, Western Australia. This example is also used extensively in Stuart Coles' excellent book on statistical modeling of extremes (Coles, 2001). The data are available from the ismev support package for Coles' book, in the data set fremantle.
The data contain 86 observations of the annual maximum sea level (in meters) over the period 1897–1989. The aim of the analysis is to account for any change in the distribution of the annual maxima over time and to investigate any relationship with the Southern Oscillation Index (SOI), a meteorological measure which reflects the development and intensity of El Niño events, and those of its counterpart La Niña, in the south Pacific. The data are shown below using ggplot.
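A minimal sketch of loading and plotting the data, assuming the ismev and ggplot2 packages are installed (Year and SeaLevel are the variable names in the fremantle data set):

```r
library("ismev")    # provides the fremantle data set
library("ggplot2")

data(fremantle)

# annual maximum sea level at Fremantle, 1897-1989
ggplot(fremantle, aes(x = Year, y = SeaLevel)) +
    geom_point() +
    labs(x = NULL, y = "Annual maximum sea level (m)")
```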
In extreme value analysis, one of the key tasks is to assess the behaviour of the very large, or small, events or observations, and often the focus is on those that are much more extreme than anything in the observational record. This requires a considerably different approach to the usual statistical methods that focus on the mean of a distribution. Whilst we could approach the analysis of data like those in fremantle from the viewpoint of traditional methods employing the Gaussian distribution, the events of interest, the extreme high sea-level events, are way off in the tails of a distribution fitted by considering (usually) just its mean (and variance). Even small uncertainties in the estimation of the distribution can be amplified when we get out into the extreme tails of the Gaussian, complicating inference about extremes and inflating uncertainties.
Extreme value theory has developed separate models and limiting distributions that replace the central role that the Gaussian distribution plays in other areas of statistical modeling and inference. Consider again the sea-level data; the sea level would have been measured daily (or roughly daily) at Fremantle in order to produce the annual maximum series we wish to analyze. For a single year, we might denote these daily observations by \(Z_1, \ldots, Z_m\), and we'll assume that these are a random sample of sea-level values. The annual maximum is given by

\[ Y_m = \max \{ Z_1, \ldots, Z_m \} \]
\(Y_m\) are commonly known as block maxima — the maxima of a block of random variables \(Z_1, \ldots, Z_m\). Extreme value theory considers the limiting distribution of \(Y_m\) as \(m\) tends to infinity. More simply, we want to derive the distribution of the annual maximum sea-level values as the number of observations per block (here, per year) tends to infinity. The limiting distribution for \(Y_m\) is restricted to the class of generalized extreme value distributions (GEV), which have the following form
\[ G(y) = \exp \left\{ - \left[ 1 + \xi \left( \frac{y - \mu}{\sigma} \right) \right]_{+}^{-1/\xi} \right\} \]
where \(\mu\), \(\sigma > 0\), and \(\xi\) are the location, (positive) scale, and shape parameters respectively of the distribution. The distribution has support on values \(y\) for which \(1 + \xi (y - \mu) / \sigma > 0\), which is indicated by the subscript \(+\) in the equation above. \(\mu\) and \(\xi\) can take any real value in \((-\infty, +\infty)\), whereas \(\sigma\), the scale parameter, must be a positive real value.
The GEV distribution encompasses the three potential extreme value distributions for block maxima:

1. the Gumbel distribution,
2. the Fréchet distribution, and
3. the Weibull distribution.
These are also known as the Type I, II, and III extreme value distributions. Though I won't write out the equations for each of these distributions, they are all quite similar to the GEV distribution and have location and scale parameters \(\mu\) and \(\sigma\), whilst the Fréchet and Weibull distributions also have a shape parameter. The distributions differ markedly in their behaviour at the extreme positive end of \(y\): the upper endpoint of the Weibull is finite, but those of the Fréchet and Gumbel distributions are infinite, the two being distinguished by having polynomially and exponentially decaying density respectively. Each of these distributions can be reached from the GEV:
- the Gumbel is reached when \(\xi = 0\),
- the Fréchet when \(\xi\) is positive (\(\xi > 0\)), and
- the Weibull when \(\xi\) is negative (\(\xi < 0\)).
Traditionally, researchers had to decide which type of tail behaviour they expected prior to fitting one of the three extreme value distributions. The clear advantage of the GEV is that this choice is now governed by a parameter estimated as part of the model fitting process, so fewer a priori decisions need to be made ahead of the analysis.
As I mentioned above, the gevlss() family allows separate linear predictors \(\eta\), one for each of the parameters \(\mu\), \(\sigma\), and \(\xi\), to depend on one or more covariates. When setting up this model, therefore, we need to specify not one formula, but three. These are supplied in a list, with only the first having a left-hand side term for the response.
The first model considered by Yee and Stephenson (2007) allowed for a smooth trend in Year and a smooth effect of SOI in the linear predictors for \(\mu\) and \(\sigma\), whilst \(\xi\) was modeled as an intercept-only linear predictor. The reason for the simple linear predictor for \(\xi\) is that this parameter is exceedingly difficult to estimate from data; in a relatively small data set like the fremantle one, there is very little information with which to inform \(\xi\).
To specify this model in gam()
we need to create a list of three formula objects as follows:
list(SeaLevel ~ s(cYear) + s(SOI),
~ s(cYear) + s(SOI),
~ 1)
Key points to note here are:

- the ordering of the formula components is \(\mu\), \(\sigma\), and \(\xi\),
- only the first formula, for \(\mu\), has a left-hand side specifying the response variable, in this case SeaLevel,
- the second and third formulas are right-hand sided only and start with a ~, and
- intercept-only linear predictors are indicated by the formula ~ 1.
This model can be thought of as an extended GLM and as such, each linear predictor is associated with a link function. The default links for \(\mu\), \(\sigma\), and \(\xi\) in the gevlss() family are "identity", "identity", and "logit" respectively, although technically the linear predictor for \(\sigma\), \(\eta_{\sigma}\), is for the log scale parameter and hence the default identity link actually implies a log link for \(\sigma\). Additionally, the "logit" link for \(\xi\) is modified to restrict its range to \((-1, 0.5)\). To match the model fitted by Yee and Stephenson (2007), the identity link is used for all three parameters.
Finally, note that the VGAM package requires the user to specify the degrees of freedom for each smooth term, and the software searches for a smoothing parameter that achieves the required degrees of freedom. mgcv takes a different tack; the user specifies the dimension of the basis (the number of basis functions) to use for each smooth term, and mgcv then chooses the smoothness parameters via penalized likelihood, maximizing a log marginal likelihood or log restricted marginal likelihood (REML). Assuming that the dimension of the basis is sufficiently rich to include the true but unknown smooth function, the mgcv approach avoids the user having to state a priori how wiggly each smooth term should be.
Yee and Stephenson (2007) used three degrees of freedom splines for each smooth term. Here I leave the basis dimension at the (essentially arbitrary) default value of 10. It will be instructive to see what smoothness parameters are selected as optimal, how mgcv copes with estimating smoothness in a relatively complex setting, and how the estimated smooths compare with those assumed by Yee and Stephenson (2007).
One final tweak is required; the estimated intercept terms for \(\mu\) and \(\sigma\) would otherwise refer to the year 0, implying extrapolation backwards in time almost 2,000 years. It can help numerical stability when fitting if we centre Year about, say, the middle of the time series, which we do now before proceeding.
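A minimal sketch of that centring step; cYear is the name used for the centred covariate in the model formulas, and centring on the median year is just one reasonable choice of "middle":

```r
# centre Year on the middle of the series to keep the intercepts interpretable
med.year  <- with(fremantle, median(Year))
fremantle <- transform(fremantle, cYear = Year - med.year)
```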
With that out of the way, the model is fitted with relative ease as follows
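A sketch of that gam() call, with the identity links for all three parameters set explicitly to match Yee and Stephenson (2007), and REML as one reasonable smoothness selection criterion:

```r
library("mgcv")

m1 <- gam(list(SeaLevel ~ s(cYear) + s(SOI),  # linear predictor for mu
                        ~ s(cYear) + s(SOI),  # linear predictor for sigma
                        ~ 1),                 # intercept-only xi
          data   = fremantle,
          method = "REML",
          family = gevlss(link = list("identity", "identity", "identity")))

summary(m1)
```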
The summary() output is similar to that of standard GAMs, except that the convention is to append .N, where N is a positive integer, to terms belonging to (confusingly) the second and third linear predictors respectively. The parametric terms are listed first.
Of interest here for this model is the estimate of \(\xi\), which is negative, −0.25 (with standard error 0.09, yielding an approximate 95% confidence interval of −0.43 to −0.08), indicating a Weibull-type distribution for the annual sea-level maxima. The values reported by Yee and Stephenson (2007) are \(\xi\) = −0.27, with standard error 0.06.
The smooth terms are listed next, and with the exception of the smooth of Year in \(\eta_{\sigma}\), all the estimated smooths have been penalized to (effectively) linear functions. The partial effect of each smooth can be plotted using the plot() method for gam models.
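Something along these lines produces a page of partial-effect plots; the pages, scheme, and scale arguments are presentational choices, not requirements:

```r
# plot all estimated smooths on one page, each with its own y-axis scale
plot(m1, pages = 1, scheme = 1, scale = 0)
```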
As reported in Yee and Stephenson (2007), the fitted smooth of Year in \(\eta_{\sigma}\) (lower left panel) is somewhat nonlinear, with a partial effect of decreasing variance in sea levels through c. 1945 and increasing variance thereafter. Yee and Stephenson (2007) suggest that this smooth may be replaced by a piecewise linear function with a knot around 1945. The authors also simplified the model by replacing the smooths for all the other variables with linear parametric terms. We will investigate this model next.
I haven't quite worked out how to get gam() to fit a piecewise linear function yet, but the approach below is pretty close. The following model uses the new B-spline basis in mgcv, which allows a lot of control over how the basis is set up. In base R, a piecewise linear basis with an interior knot at 1945 would be created using splines::bs(Year, degree = 1, knots = 1945), but then as far as gam() is concerned, the resulting basis functions are simply two continuous covariates that are treated as linear parametric terms.
We can use the new B-spline basis to achieve something similar to (the same as?) splines::bs if we set the knot locations explicitly and use m = 1 (for linear splines) and basis dimension k = 3. If you are setting the knots manually, then for the B-spline basis in mgcv you need to specify k + m + 1 (here 5) knots, and the middle k − m + 1 (here 3) knots should span all the covariate values. I'm not sure what determines where the two exterior knots should be located; in the code below I just locate them at ±10 years from the extremes of the data. The knot locations are then specified as a list with component cYear (to match the covariate name), and as we're modeling with the centred Year, I centre the knot locations using the middle year as before.
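A sketch of how those knots might be constructed; the exterior knots at ±10 years from the data extremes and the centring by the median year are the assumptions just described:

```r
# 5 knots in total (k + m + 1): two exterior, plus min(Year), 1945, max(Year),
# all centred in the same way as cYear
med.year <- with(fremantle, median(Year))
knots <- with(fremantle,
              list(cYear = c(min(Year) - 10, min(Year), 1945,
                             max(Year), max(Year) + 10) - med.year))
```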
The GAM can then be specified as before with three formulas. The type of smooth for cYear in \(\eta_{\sigma}\) is specified via bs = "bs", and the remaining parameters of the basis are as described above. The list of knots we just created is passed to the knots argument.
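A sketch of that fit, assuming a knots list built as described above; note that the smooths in \(\eta_{\mu}\) are replaced by linear parametric terms, as in the simplified model of Yee and Stephenson (2007):

```r
m2 <- gam(list(SeaLevel ~ cYear + SOI,        # mu: linear parametric terms
                        ~ s(cYear, bs = "bs", k = 3, m = 1) + SOI,  # sigma
                        ~ 1),                 # xi: intercept only
          data   = fremantle,
          method = "REML",
          knots  = knots,
          family = gevlss(link = list("identity", "identity", "identity")))

summary(m2)
```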
The summary output indicates significant linear parametric effects of cYear and SOI in \(\eta_{\mu}\), and of SOI in \(\eta_{\sigma}\). There is now some evidence of an effect of SOI on the variance of the block maxima, although we would be right to treat this result with caution as the piecewise linear structure was only guessed at after fitting the more general smooth term, which was not statistically significant. Yee and Stephenson (2007) performed an informal deviance test between the two models, which we repeat here
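One way to sketch such an informal test is to compare twice the difference in log likelihoods of the two fitted models against a chi-squared distribution, using the difference in their degrees of freedom:

```r
# informal likelihood-ratio-style comparison of m1 (smooths) and m2 (piecewise)
ll1 <- logLik(m1)
ll2 <- logLik(m2)
stat <- as.numeric(2 * (ll1 - ll2))
dof  <- abs(attr(ll1, "df") - attr(ll2, "df"))
pchisq(stat, df = dof, lower.tail = FALSE)
```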
the results of which match those published and suggest that the simpler model with the piecewise linear smooth of Year in \(\eta_{\sigma}\) is sufficient to describe the effect on the variance of the sea-level maxima.
The fitted piecewise linear smooth can be plotted using the plot() method as before. To get the linear terms plotted too, we need to use the all.terms = TRUE option.
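For example:

```r
# plot the piecewise linear smooth plus the linear parametric terms
plot(m2, pages = 1, all.terms = TRUE, scale = 0)
```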
This plot is a little more clunky than the previous one as the linear terms are plotted via calls to termplot(), and the way this is achieved in plot.gam() doesn't allow for separate y-axis limits for the linear terms (scale = 0), and the scheme argument does not affect these plots either.
If we wanted an entirely data-driven approach to fitting the smooth of Year in \(\eta_{\sigma}\), and wanted to crack that particular nut with an industrial-sized wrecking ball, we could use the adaptive spline basis by changing the basis type for the smooth to bs = "ad" as follows (note this takes a while to fit)
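A sketch of the adaptive-basis version, differing from the previous model only in the basis used for the cYear smooth in \(\eta_{\sigma}\):

```r
m3 <- gam(list(SeaLevel ~ cYear + SOI,
                        ~ s(cYear, bs = "ad") + SOI,  # adaptive smoother
                        ~ 1),
          data   = fremantle,
          method = "REML",
          family = gevlss(link = list("identity", "identity", "identity")))

summary(m3)
```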
Again, there is some evidence of a trend in the variance of the sea-level maxima; the higher p-value here likely reflects the additional uncertainty arising from having to deduce the shape and varying wiggliness of the spline from the data directly. The resulting smooth is largely indistinguishable from the piecewise linear one in m2, except for the smooth transition around 1945.
My attempt to replicate the analysis of Yee and Stephenson (2007) was largely devoid of any troubles despite the gevlss()
family being both new and described by Simon as “somewhat experimental”. The main difficulty was in trying to get a piecewise linear spline within the mgcv framework, largely because doing it via splines::bs()
makes it much more difficult to plot the partial effect of the overall function with the easily accessible tools that mgcv provides.
One area where mgcv is lacking in relation to VGAM for fitting GEV models is in the array of support functions that go with the fitted models — VGAM has lots of plot types specific to extreme value models that help with interpreting and checking the fitted model. In a future post I may try to tackle some of this using mgcv, if I find the time.
This is hopefully the first of several posts on modeling block maxima using mgcv and GAMs, so if you have any comments, suggestions, or corrections, let me know in the comments below.
References
Coles, S. (2001). An introduction to statistical modeling of extreme values. Springer London. doi:10.1007/978-1-4471-3675-0.
Yee, T. W., and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models. Extremes 10, 1–19. doi:10.1007/s10687-007-0032-4.