(This article was first published on **Freakonometrics » R-english**, and kindly contributed to R-bloggers)

In the MAT8595 course, we’ve seen yesterday the Hill estimator of the tail index. To be more specific, we did see that if $\overline{F}(x)=C\,x^{-\alpha}$, with $\alpha>0$, then Hill estimators for $\alpha$ are given by

$$\widehat{\alpha}_k=\left(\frac{1}{k}\sum_{i=1}^{k}\log X_{n-i+1:n}-\log X_{n-k:n}\right)^{-1}$$

for $k\in\{1,\ldots,n-1\}$. Then we did say that $\widehat{\alpha}_k$ satisfies some consistency property, in the sense that $\widehat{\alpha}_k\overset{\mathbb{P}}{\to}\alpha$ if $k\to\infty$, but not too fast, i.e. $k/n\to 0$ (under additional assumptions on the rate of convergence, it is possible to prove that $\widehat{\alpha}_k\overset{a.s.}{\to}\alpha$). Further, under additional technical conditions,

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\to}\mathcal{N}(0,\alpha^2)$$
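As a sanity check, the estimator above can be transcribed directly from the order statistics (a minimal sketch; `hill_alpha` is a hypothetical helper, not part of any package):

```r
# direct transcription of the Hill estimator: average log-spacing
# of the k largest order statistics, inverted to get alpha
hill_alpha <- function(x, k) {
  xs <- sort(x, decreasing = TRUE)            # order statistics, largest first
  xi <- mean(log(xs[1:k])) - log(xs[k + 1])   # Hill estimate of xi = 1/alpha
  1 / xi                                      # tail index alpha
}
alpha <- 1.5
set.seed(1)
x <- (1 - runif(500))^(-1/alpha)  # exact Pareto(alpha) sample, S(x) = x^(-alpha)
hill_alpha(x, k = 100)            # should be close to alpha = 1.5
```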

In order to illustrate this point, consider the following code. First, let us consider a Pareto survival function, and the associated quantile function

```r
> alpha=1.5
> S=function(x){ifelse(x>1,x^(-alpha),1)}
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}
```

The code here is obviously too complicated, since this power function can easily be inverted. But later on, we will consider a more complex survival function. Here are the survival function and the quantile function,
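For the record, in this pure Pareto case the inverse is explicit, $Q(p)=(1-p)^{-1/\alpha}$, so the numerical inversion can be checked against the closed form (a quick sketch):

```r
alpha <- 1.5
S <- function(x) ifelse(x > 1, x^(-alpha), 1)
# numerical inversion, as in the code above
Q_num   <- function(p) uniroot(function(x) S(x) - (1 - p), lower = 1, upper = 1e+9)$root
Q_exact <- function(p) (1 - p)^(-1/alpha)   # closed-form inverse of S
c(Q_num(0.9), Q_exact(0.9))                 # both around 4.64
```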

```r
> u=seq(0,5,by=.01)
> plot(u,Vectorize(S)(u),type="l",col="red")
> u=seq(0,99/100,by=.01)
> plot(u,Vectorize(Q)(u),type="l",col="blue",ylim=c(0,20))
```

Here, we need the quantile function to generate a random sample from this distribution,

```r
> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))
```

The Hill plot is here

```r
> library(evir)
> hill(X)
> abline(h=alpha,col="blue")
```

We can now generate thousands of random samples, and see how those estimators behave (for some specific $k$’s).

```r
> ns=10000
> HillK=matrix(NA,ns,10)
> for(s in 1:ns){
+ X=Vectorize(Q)(runif(n))
+ H=hill(X,plot=FALSE)
+ hillk=function(k) H$y[H$x==k]
+ HillK[s,]=Vectorize(hillk)(15*(1:10))
+ }
```

and if we compute the average,

```r
> plot(15*(1:10),apply(HillK,2,mean))
```

we do get a series of estimators that can be considered as unbiased.

So far, so good. Now, recall that being in the max-domain of attraction of the Fréchet distribution does not mean that $\overline{F}(x)=C\,x^{-\alpha}$, with $\alpha>0$, but it means that

$$\overline{F}(x)=x^{-\alpha}\,\ell(x)$$

for some slowly varying function $\ell(\cdot)$, not necessarily constant! In order to understand what could happen, we have to be slightly more specific. And this can be done only by looking at the second-order regular variation property of the survival function. Assume here that there is some auxiliary function $a(\cdot)$ such that

$$\lim_{t\to\infty}\frac{\dfrac{\overline{F}(tx)}{\overline{F}(t)}-x^{-\alpha}}{a(t)}=x^{-\alpha}\,\frac{x^{-\beta}-1}{\beta}$$

for some $\beta>0$.

This (positive) constant $\beta$ is – somehow – related to the speed of convergence of the ratio of the survival functions to the power function (see e.g. Geluk *et al.* (2000) for some examples).

To be more specific, assume that

$$\overline{F}(x)=C\,x^{-\alpha}\left(1+x^{-\beta}\right)$$

then the second-order regular variation property is obtained with $a(t)\propto t^{-\beta}$, and then, if $k$ goes to infinity too fast, the estimator will be biased. More precisely (see Chapter 6 in Embrechts *et al.* (1997)), if $\sqrt{k}\,a(n/k)\to\lambda\in(0,\infty)$, then, for some $\mu\neq 0$,

$$\sqrt{k}\left(\widehat{\alpha}_k-\alpha\right)\overset{\mathcal{L}}{\to}\mathcal{N}(\mu,\alpha^2)$$

The intuitive interpretation of this result is that if $k$ is too large, and if the underlying distribution is not *exactly* a Pareto distribution (and we do have this second-order property), then Hill’s estimator is biased. This is what we mean when we say

- if $k$ is too large, $\widehat{\alpha}_k$ is a biased estimator
- if $k$ is too small, $\widehat{\alpha}_k$ is a volatile estimator

(the latter comes from properties of a sample mean: the more observations, the less volatile the mean).
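The volatility part of this trade-off can be checked empirically: on exact Pareto samples, the standard deviation of Hill’s estimator across simulations decreases with $k$ (a sketch, computing the estimator directly from the order statistics rather than with evir):

```r
# volatility of the Hill estimator of alpha, for a small and a large k
alpha <- 1.5
set.seed(1)
ns <- 200; n <- 500; ks <- c(15, 150)
est <- replicate(ns, {
  xs <- sort((1 - runif(n))^(-1/alpha), decreasing = TRUE)  # Pareto sample
  sapply(ks, function(k) 1 / (mean(log(xs[1:k])) - log(xs[k + 1])))
})
apply(est, 1, sd)  # sd for k = 15 is markedly larger than for k = 150
```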

Let us run some simulations to get a better understanding of what’s going on. Using the previous code, it is actually extremely simple to generate a random sample with survival function

$$\overline{F}(x)=\frac{1}{2}\,x^{-\alpha}\left(1+x^{-\beta}\right),\qquad x>1$$

```r
> beta=.5
> S=function(x){
+ ifelse(x>1,.5*x^(-alpha)*(1+x^(-beta)),1) }
> Q=function(p){uniroot(function(x) S(x)-(1-p),lower=1,upper=1e+9)$root}
```

If we use the same code as above, with

```r
> n=500
> set.seed(1)
> X=Vectorize(Q)(runif(n))
```

the Hill plot becomes

```r
> library(evir)
> hill(X)
> abline(h=alpha,col="blue")
```

But it is based on only one sample. Again, consider thousands of samples, and let us see how Hill’s estimator behaves,

so that we can look at the (empirical) mean of those estimators.
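This second experiment can be reproduced with a self-contained sketch (here Hill’s estimator is computed directly from the order statistics instead of with evir::hill, and ns is reduced to 100 to keep the run short):

```r
# same experiment as before, now with the second-order survival function
alpha <- 1.5; beta <- .5
S <- function(x) ifelse(x > 1, .5 * x^(-alpha) * (1 + x^(-beta)), 1)
Q <- function(p) uniroot(function(x) S(x) - (1 - p), lower = 1, upper = 1e+9)$root
hill_alpha <- function(x, k) {            # Hill estimator from order statistics
  xs <- sort(x, decreasing = TRUE)
  1 / (mean(log(xs[1:k])) - log(xs[k + 1]))
}
set.seed(1)
ns <- 100; n <- 500; ks <- 15 * (1:10)
HillK <- t(replicate(ns, {
  X <- Vectorize(Q)(runif(n))
  sapply(ks, function(k) hill_alpha(X, k))
}))
# the mean drifts upward, away from alpha = 1.5, as k grows: the bias kicks in
round(apply(HillK, 2, mean), 3)
```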
