In search of an incredible posterior


What is credibility?

For over one hundred years[1], actuaries have been wrestling with the idea of "credibility". This is the process whereby one may make a quantitative assessment of the predictive power of sample data. Where necessary, the researcher augments the sample with some exogenous information – usually more data – to arrive at a final conclusion. In its simplest form, the first moment of the quantity being studied (pure premium, for example) is calculated as the weighted average of two numbers: the sample mean and the alternate estimate. The weight is what we (that is, actuaries) mean when we say credibility, and it varies between zero and one.

This alternate estimate is generally referred to as the "complement of credibility". There are thousands of pages written about how the complement may be derived; Boor[2] is a particularly good paper. Suffice it to say that it should reflect the experience under study. This presents a slight conundrum: the sample data are distinct enough that they must be treated separately, yet not stable enough to stand on their own. For practitioners in other fields, this sounds a lot like a hierarchical model. In this case, certain "house effects" have enough signal to warrant deviation from the global experience, but are noisy enough that we should reflect a bit of the total sample data. Gelman and Hill[3] is a fantastic treatment of the subject.

We may also view the complement of credibility in a Bayesian context. Here, the prior distribution may be regarded as the "complement" which we combine with data to arrive at a final estimate. However, here's something interesting: we have a great deal of latitude in selecting the prior. Last week, I toyed with the idea of a prior so powerful it would overwhelm whatever data entered into the analysis. Effectively, this was the opposite of an uninformative prior. I wanted to explore the idea of a pricing algorithm which would, by design, assign very low credibility to the data. Phrased differently, I wanted a posterior that reflected almost zero credibility: an incredible posterior, if you will.

The scenario I investigated was one with five years of claims data of very low volume, say an expected value of five claims per year. Most actuaries would regard that as a sample with fairly low credibility. Against that I used a prior with an expected value of 100 claims. Surely, the data didn't stand a chance. To keep things simple, I used a Poisson frequency model, with its parameter drawn from a gamma distribution.[4]

First things first, I translated my prior assumptions about the Poisson parameter into parameters for the gamma distribution.

Parameters

In the code below, I'll use the variable names mu and sigma to refer to the expected value and standard deviation. The variable prefixes "prior", "posterior" and "sample" should make it clear what I'm referring to. In formulae, I'll use $\mu$ and $\sigma$ for the prior expected value and standard deviation. The posterior quantities will carry a prime superscript: $\mu'$ and $\sigma'$.

Gamma

To start, we’ll derive some fairly basic relationships between the moments of our claims and the parameters of the gamma.

(1) $\mu = \alpha / \beta$

(2) $\sigma^2 = \alpha / \beta^2$

This then allows us to define $\alpha$ and $\beta$ in terms of $\mu$ and $\sigma$.

(3) $\alpha = \mu^2 / \sigma^2, \quad \beta = \mu / \sigma^2$
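In R, that moment matching is a one-liner each way. Here's a minimal sketch (the function names are my own, purely illustrative):

GammaShape <- function(mu, sigma){
  mu^2 / sigma^2
}

GammaRate <- function(mu, sigma){
  mu / sigma^2
}

# A prior mean of 100 claims with a standard deviation of 10
GammaShape(100, 10) # 100
GammaRate(100, 10)  # 1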

Because the gamma and Poisson are conjugate pairs, there are simple closed-form solutions for the posterior $\alpha'$ and $\beta'$, as follows:

(4) $\alpha' = \alpha + \sum_{i=1}^{N} x_i$

(5) $\beta' = \beta + N$

Here the $x_i$ are the observed claim counts and $N$ is the number of observations.

Finally, we’ll show some convenience functions so that and may be derived by and . Starting from equations (1) and (2),

Credibility

Z is generally used as the variable which denotes the credibility of the sample. The credibility equation is easily expressed as:

(6) $\mu' = Z \bar{x} + (1 - Z) \mu$

where $\bar{x}$ is the sample average.

By rearranging terms, we can get an implied credibility which depends on the prior and posterior means and our sample average.

(7) $Z = \frac{\mu' - \mu}{\bar{x} - \mu}$

But we can re-express $\mu'$ based on equations (1), (4) and (5) above. Then,

(8) $\mu' = \frac{\alpha'}{\beta'} = \frac{\mu^2/\sigma^2 + N\bar{x}}{\mu/\sigma^2 + N} = \frac{\mu^2 + N\bar{x}\sigma^2}{\mu + N\sigma^2}$

(noting that $\sum_{i=1}^{N} x_i = N\bar{x}$)

We may now use equation (8) to substitute for $\mu'$ in equation (7) as follows:

$Z = \frac{\mu' - \mu}{\bar{x} - \mu} = \frac{1}{\bar{x} - \mu} \left( \frac{\mu^2 + N\bar{x}\sigma^2}{\mu + N\sigma^2} - \mu \right) = \frac{N\sigma^2 (\bar{x} - \mu)}{(\mu + N\sigma^2)(\bar{x} - \mu)}$

The $(\bar{x} - \mu)$ terms cancel. This finally gives us a formula for the credibility based solely on the mean and variance of our prior gamma.

(9) $Z = \frac{N\sigma^2}{\mu + N\sigma^2} = \frac{N}{N + \mu/\sigma^2}$

It’s hard to describe just how happy I was when I worked this out. These few steps are the distillation of several pages of caveman scribbles. I’m not a mathematician, folks. When I get to this sort of result, it’s more fun than huffing paint.

Once the elation subsides, note something very significant about this formula: it says almost nothing about my sample. Apart from $N$, my sample data could come from Mars and it would have no impact on how much credibility I assign it. What matters most is the relationship between the mean and variance of my prior. Weirdly, I’m trying to work out the credibility of my credibility complement.

But does it work?

With equation (9), we can easily write a function to compute credibility for a number of prior scenarios. I’ll assume a sample of five observations and a prior $\mu$ of 100. I’ll construct a data frame with varying sigmas. (I like to think in terms of the CV, or coefficient of variation, so that I can easily switch up the mean and continue to get reasonable variances.)

# Closed-form credibility of the sample, per equation (9)
PriorCredibility <- function(N, mu, sigma){
  Z <- N / (N + mu / sigma^2)
  Z
}

N <- 5

priorMu <- 100
priorCV <- seq(.005, .5, length.out = 100)
priorSigma <- priorMu * priorCV
priorZ <- PriorCredibility(N, priorMu, priorSigma)

df <- data.frame(priorZ, priorSigma, priorMu)

And here’s how the credibility looks as a function of $\sigma$. Notice how quickly the credibility of the sample exceeds 50%.

library(ggplot2)
pltPrior <- ggplot(df, aes(priorSigma, priorZ)) + geom_line() + ylim(c(0,1))
pltPrior

Now let’s apply this to some sample data. I’ve got some convenience functions to translate my sample data into posterior gamma parameters and implied credibilities. I’ll do this with two samples: one has a mean of 5 and the other has a mean of 200. Both are a good distance away from my prior assumption.

# Posterior mean implied by the prior parameters and the sample, per equation (8)
PosteriorMu <- function(sampleMean, N, priorMu, priorSigma){
  numerator <- priorMu^2 + sampleMean * N * priorSigma^2
  denominator <- priorMu + N * priorSigma^2
  postMu <- numerator / denominator
  postMu
}

# Implied credibility, by rearranging the weighted average as in equation (7)
SampleCredibility <- function(priorMu, posteriorMu, sampleMu){
  Z <- (posteriorMu - priorMu) / (sampleMu - priorMu)
  Z
}

set.seed(1234)
df$sampleMu1 <- 5
df$sampleMu2 <- 200

# Five simulated years of claims at each sample mean, followed by the
# posterior mean and the implied credibility for every prior sigma
sampleX_1 <- rpois(N, df$sampleMu1[1])
df$posteriorMu_1 <- PosteriorMu(mean(sampleX_1), N, df$priorMu, df$priorSigma)
df$postZ_1 <- SampleCredibility(df$priorMu, df$posteriorMu_1, mean(sampleX_1))

sampleX_2 <- rpois(N, df$sampleMu2[1])
df$posteriorMu_2 <- PosteriorMu(mean(sampleX_2), N, df$priorMu, df$priorSigma)
df$postZ_2 <- SampleCredibility(df$priorMu, df$posteriorMu_2, mean(sampleX_2))

We’ll plot this against our prior sigmas. Look familiar?

pltSample1 <- ggplot(df, aes(priorSigma, postZ_1)) + geom_line() + ylim(c(0,1))
pltSample1

Just to hammer the point home, we’ll overlay the credibility as calculated by equation (9).

pltSample1 <- pltSample1 + geom_line(aes(priorSigma, priorZ), color = "red")
pltSample1
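If the overlaid lines aren’t convincing enough, a quick numeric check tells the same story:

# The credibility implied by the simulated posterior matches the
# closed form of equation (9) to floating-point tolerance
all.equal(df$postZ_1, df$priorZ)
## [1] TRUE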

Yep, they’re the same. And what happens if I use a very different sample?

pltSample2 <- ggplot(df, aes(priorSigma, postZ_2)) + geom_line() + ylim(c(0,1))
pltSample2

And there we have it: if we want an incredible posterior, we have to have a pretty tight estimate around our prior mean. This is actually sort of good news. Within the insurance industry, non-actuaries have a vast store of experience and an innate sense of the loss characteristics of their business. However, they’re often accustomed to thinking in terms of the first moment of the distribution: “properties like this ought to cost XX”. If you haven’t had a chance to hang out with facultative underwriters, treat yourself. They’ve seen it all, they love to talk and they love to pick up a check. When you’re talking, push questions like “How often would it be above Y? What’s the 90th percentile?” You’ll start to get a quantitative sense for the range around the prior mean. And, in a Bayesian sense, that’s just about all that matters.
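As a toy illustration of turning that kind of answer into a prior, suppose an underwriter tells us the mean ought to be 100 claims, but claims would top 120 only one year in ten. A sketch like the one below (the helper name is mine, and the bracketing bounds are rough assumptions) backs out the implied $\sigma$:

# Given a prior mean and an expert's 90th percentile, find the gamma
# sigma consistent with both, using the moment matching in equation (3)
ElicitSigma <- function(mu, q90){
  f <- function(sigma){
    qgamma(0.9, shape = mu^2 / sigma^2, rate = mu / sigma^2) - q90
  }
  uniroot(f, lower = 1, upper = mu)$root
}

ElicitSigma(100, 120) # roughly 15, a CV of about 0.15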

Big tip of the hat to my pals Avi and Dave for their very helpful comments. Obviously, anything silly or wrong is all my fault.

## R version 3.3.0 (2016-05-03)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 14.04.4 LTS
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  base     
## 
## other attached packages:
## [1] ggplot2_2.1.0
## 
## loaded via a namespace (and not attached):
##  [1] labeling_0.3     colorspace_1.2-6 scales_0.4.0     plyr_1.8.4      
##  [5] magrittr_1.5     formatR_1.4      tools_3.3.0      gtable_0.2.0    
##  [9] Rcpp_0.12.5      lubridate_1.5.6  stringi_1.1.1    grid_3.3.0      
## [13] knitr_1.13       methods_3.3.0    stringr_1.0.0    munsell_0.4.3   
## [17] evaluate_0.9
  1. It’s true! The Casualty Actuarial Society in the US was founded in 1914 to, among other things, address the problem of just how much data was needed to estimate premium in workers’ compensation insurance.

  2. Boor, Joseph, “The Complement of Credibility,” PCAS LXXXIII, Part 1, 1996, 1-32

  3. Gelman & Hill, “Data Analysis Using Regression and Multilevel/Hierarchical Models,” December 2006

  4. In a realistic setting, we’d translate the Poisson parameter into a frequency that would be applied to exposure, like number of employees, number of vehicles, etc. To keep things simple, I presume that my prior and sample have identical exposures, so I can think in terms of claims rather than claim frequencies.
