In search of an incredible posterior
What is credibility?
For over one hundred years ^{1} actuaries have been wrestling with the idea of “credibility”. This is the process whereby one may make a quantitative assessment of the predictive power of sample data. Where necessary, the researcher augments the sample with some exogenous information – usually more data – to arrive at a final conclusion. In its simplest form, the first moment of the quantity being studied (pure premium, for example) is calculated as the weighted average of two numbers: the sample mean and the alternate estimate. The weight is what we (that is, actuaries) mean when we say credibility and it varies between zero and one.
This alternate estimate is generally referred to as the “complement of credibility”. There are thousands of pages written about how the complement may be derived. Boor ^{2} is a particularly good paper. Suffice it to say that it should reflect the experience under study. This presents a slight conundrum: the sample data should be distinct enough that it must be separated and yet it’s not stable enough to stand on its own. For practitioners in other fields, this sounds a lot like a hierarchical model. In this case, certain “house effects” have enough signal to warrant deviation from the global experience, but are noisy enough that we should reflect a bit of the total sample data. Gelman and Hill ^{3} is a fantastic treatment of the subject.
We may also view the complement of credibility in a Bayesian context. Here, the prior distribution may be regarded as the “complement” which we combine with data to arrive at a final estimate. However, here’s something interesting. We have a great deal of latitude in selecting the prior. Last week, I toyed with the idea of a prior which would be so powerful it would overwhelm whatever data entered into the analysis. Effectively, this was the opposite of an uninformative prior. I wanted to explore the idea of a pricing algorithm which would – by design – have very low credibility. Phrased differently, I wanted a posterior that reflected almost zero credibility: an incredible posterior, if you will.
The scenario I investigated was one where there were five years of claims data of very low volume, say an expected value of five claims per year. Most actuaries would regard that as a sample with fairly low credibility. Against that I used a prior with an expected value of 100 claims. Surely, the data didn’t stand a chance. To keep things simple, I used a Poisson frequency model, with its parameter drawn from a gamma distribution. ^{4}
First things first, I translated my prior assumptions about the Poisson parameter into parameters for the gamma distribution.
Parameters
In the code below, I’ll use the variable names mu and sigma to refer to the expected value and standard deviation. The variable prefixes “prior”, “posterior” and “sample” should make it clear what I’m referring to. In formulae, I’ll use \mu and \sigma for the prior expected value and standard deviation. The posterior will carry a prime superscript: \mu' and \sigma'.
Gamma
To start, we’ll derive some fairly basic relationships between the moments of our claims and the parameters of the gamma (shape \alpha and rate \beta).

(1) \mu = \frac{\alpha}{\beta}

(2) \sigma^2 = \frac{\alpha}{\beta^2}
This then allows us to define \alpha and \beta in terms of \mu and \sigma.

(3) \beta = \frac{\mu}{\sigma^2}, \qquad \alpha = \frac{\mu^2}{\sigma^2}
Because the gamma and Poisson are conjugate pairs, there are simple closed-form solutions for the posterior \alpha' and \beta', where the x_i are the N observed claim counts:

(4) \alpha' = \alpha + \sum_{i=1}^{N} x_i

(5) \beta' = \beta + N
Finally, we’ll show some convenience functions so that \mu' and \sigma' may be derived from \alpha' and \beta', exactly as \mu and \sigma are in equations (1) and (2).
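As a sketch of what such convenience functions might look like in R (the function names GammaParms and GammaMoments are my own, not the original post’s):

```r
# Equation (3): convert the mean and standard deviation of the prior
# into gamma shape (alpha) and rate (beta) parameters.
GammaParms <- function(mu, sigma) {
  list(alpha = mu^2 / sigma^2,
       beta  = mu / sigma^2)
}

# Equations (1) and (2): convert gamma parameters back into moments.
# The same relationships hold for the posterior, primes throughout.
GammaMoments <- function(alpha, beta) {
  list(mu    = alpha / beta,
       sigma = sqrt(alpha / beta^2))
}
```

A prior mean of 100 with a standard deviation of 10 translates to a shape of 100 and a rate of 1, and the conversion round-trips cleanly.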
Credibility
Z is generally used as the variable which denotes the credibility of the sample. Writing \bar{x} for the sample mean, the credibility equation is easily expressed as:

(6) \mu' = Z\bar{x} + (1 - Z)\mu
By rearranging terms, we can get an implied credibility which depends on the prior and posterior means and our sample average.
(7) Z = \frac{\mu' - \mu}{\bar{x} - \mu}
But we can re-express \mu' based on equations (1), (4) and (5) above. Then,

(8) \mu' = \frac{\alpha'}{\beta'} = \frac{\alpha + N\bar{x}}{\beta + N}
We may now use equation (8) to alter equation (7) as follows:

Z = \frac{\mu' - \mu}{\bar{x} - \mu} = \frac{1}{\bar{x} - \mu}\left(\frac{\alpha + N\bar{x}}{\beta + N} - \frac{\alpha}{\beta}\right) = \frac{1}{\bar{x} - \mu} \cdot \frac{N\beta(\bar{x} - \mu)}{\beta(\beta + N)} = \frac{N}{\beta + N}

This finally gives us a formula for the credibility based solely on the mean and variance of our prior gamma.

(9) Z = \frac{N}{N + \beta} = \frac{N}{N + \mu/\sigma^2}
It’s hard to describe just how happy I was when I worked this out. All these steps are the distillation of several pages of caveman scribbles on several sheets of paper. I’m not a mathematician, folks. When I get to this sort of result, it’s more fun than huffing paint.
Once I’m over that elation, note something very significant about this formula. It says almost nothing about my sample. Apart from the number of observations N, my sample data could come from Mars and it would have no impact on how much credibility I assign it. What matters most is the relationship between the mean and variance of my prior. Weirdly, I’m trying to work out the credibility of my credibility complement.
But does it work?
With equation (9), we can easily write a function to compute credibility for a number of prior scenarios. I’ll assume a sample of five observations and a prior \mu of 100. I’ll construct a data frame with varying sigmas. (I like to think in terms of CV, so that I can easily switch up the mean and continue to get reasonable variances.)
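A minimal R version of that function and data frame might look like this (the names PriorCredibility and dfCred are my own):

```r
# Equation (9): the credibility implied by the prior alone.
PriorCredibility <- function(N, mu, sigma) {
  beta <- mu / sigma^2  # prior rate, from equation (3)
  N / (N + beta)        # equation (9)
}

N  <- 5    # five years of observations
mu <- 100  # prior expected claim count

# Vary sigma via the coefficient of variation, so the variance
# stays sensible if the mean changes
dfCred <- data.frame(cv = seq(0.01, 1, by = 0.01))
dfCred$sigma <- dfCred$cv * mu
dfCred$Z <- PriorCredibility(N, mu, dfCred$sigma)
```

With \mu = 100 and N = 5, Z passes 50% as soon as \sigma^2 exceeds \mu / N = 20 – that is, at a sigma of about 4.5, a CV of less than 5%.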
And here’s how the credibility would look based on sigma. Notice how quickly the credibility of the sample exceeds 50%.
Now let’s apply this to some sample data. I’ve got some convenience functions to translate my sample data into posterior gamma parameters and implied credibilities. I’ll do this with two samples: one has a mean of 5 and the other has a mean of 200. Both are a good distance away from my prior assumption.
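A self-contained sketch of those convenience functions, with my own naming, inlining the prior conversion from equation (3):

```r
# Posterior gamma parameters (equations (4) and (5)) and the
# sample's implied credibility (equation (7)).
ImpliedCredibility <- function(claims, priorMu, priorSigma) {
  beta  <- priorMu / priorSigma^2    # equation (3)
  alpha <- priorMu^2 / priorSigma^2  # equation (3)
  postAlpha <- alpha + sum(claims)   # equation (4)
  postBeta  <- beta + length(claims) # equation (5)
  postMu <- postAlpha / postBeta     # equation (1), primed
  (postMu - priorMu) / (mean(claims) - priorMu)  # equation (7)
}

sigmas <- 1:50
sampleLow  <- rep(5, 5)    # five years averaging 5 claims
sampleHigh <- rep(200, 5)  # five years averaging 200 claims
zLow  <- sapply(sigmas, function(s) ImpliedCredibility(sampleLow, 100, s))
zHigh <- sapply(sigmas, function(s) ImpliedCredibility(sampleHigh, 100, s))
```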
We’ll plot this against our prior sigmas. Look familiar?
Just to hammer the point home, we’ll overlay the credibility as calculated by equation (9).
Yep, they’re the same. And what happens if I use a very different sample?
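Here is a compact standalone check of that claim, using hypothetical names and the same prior mean of 100 with N = 5:

```r
# Implied credibility (equation (7)) for two very different samples,
# compared against equation (9), which ignores the sample entirely.
zImplied <- function(claims, mu, sigma) {
  beta  <- mu / sigma^2
  alpha <- mu^2 / sigma^2
  postMu <- (alpha + sum(claims)) / (beta + length(claims))
  (postMu - mu) / (mean(claims) - mu)
}
zEq9 <- function(N, mu, sigma) N / (N + mu / sigma^2)

z1 <- zImplied(rep(5, 5), 100, 25)    # sample mean far below the prior
z2 <- zImplied(rep(200, 5), 100, 25)  # sample mean far above it
z3 <- zEq9(5, 100, 25)                # never sees the sample at all
```

All three agree: the sample enters only through N.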
And there we have it: if we want an incredible posterior, we have to have a pretty tight estimate around our prior mean. This is actually sort of good news. Within the insurance industry, non-actuaries have a vast store of experience and an innate sense of the loss characteristics of their business. However, they’re often accustomed to thinking in terms of the first moment of the distribution: “properties like this ought to cost XX”. If you haven’t had a chance to hang out with facultative underwriters, treat yourself. They’ve seen it all, they love to talk and they love to pick up a check. When you’re talking, push questions like “How often would it be above Y? What’s the 90th percentile?” You’ll start to get a quantitative sense for the range around our prior mean. And, in a Bayesian sense, that’s just about all that matters.
Big tip of the hat to my pals Avi and Dave for their very helpful comments. Obviously, anything silly or wrong is all my fault.

It’s true! The Casualty Actuarial Society in the US was founded in 1914 to, among other things, address the problem of just how much data was needed to estimate premium in workers compensation insurance. ↩

Boor, Joseph, “The Complement of Credibility,” PCAS LXXXIII, Part 1, 1996, 132 ↩

Gelman & Hill, “Data Analysis Using Regression and Multilevel/Hierarchical Models,” December 2006 ↩

In a realistic setting, we’d translate the Poisson parameter into a frequency that would be applied to exposure, like number of employees, number of vehicles, etc. To keep things simple, I presume that my prior and sample have identical exposures, so I can think in terms of claims rather than claim frequencies. ↩