15 GIT All Exercises
Exercises
Contents
1 Introduction
1.1 Preliminaries
1.2 Solutions
ACTL3162 General Insurance Techniques Exercises
Module 1
Introduction
1.1 Preliminaries
Exercise 1.1: [NLI1, Solution][?, Exercise 1]
(c) Assume $X \sim \mathcal{N}(0, 1)$. Prove that $E[X^{2k+1}] = 0$ for all $k \in \mathbb{N}_0$ (natural numbers with zero).
Exercise 1.2: [NLI2, Solution][?, Exercise 2] Assume that $X_k$ has a $\chi^2$-distribution with $k \in \mathbb{N}$ degrees of freedom, i.e. $X_k$ is absolutely continuous with density
\[ f(x) = \frac{1}{2^{k/2}\,\Gamma(k/2)}\, x^{k/2-1} \exp(-x/2), \quad \text{for } x \ge 0. \quad (1.1) \]
(a) Prove that $f$ is a density (hint: see Section 3.3.3 and the proof of Proposition 2.20 in ?).
(b) Prove
\[ M_{X_k}(r) = (1 - 2r)^{-k/2} \quad \text{for } r < 1/2. \quad (1.2) \]
(c) Choose $Z \sim \mathcal{N}(0, 1)$ and prove $Z^2 \stackrel{d}{=} X_1$.
(d) Choose $Z_1, \ldots, Z_k \stackrel{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1)$. Prove $\sum_{i=1}^k Z_i^2 \stackrel{d}{=} X_k$ and calculate the first two moments of the latter.
1.2 Solutions
Solution 1.1: [NLI1, Exercise]
(b) We can write down the moment generating function of $\sum_i X_i$ (using the assumption of independence),
\[ M_{\sum_i X_i}(r) = \prod_i M_{X_i}(r) = \prod_i \exp\left(r\mu_i + r^2\sigma_i^2/2\right) = \exp\left(r \sum_i \mu_i + r^2 \sum_i \sigma_i^2/2\right). \quad (1.4) \]
(c) It is tempting to use the moment generating function to solve this exercise, but obtaining an explicit formula for its $n$-th derivative is not easy, so we proceed directly. Denote $I_{2k+1} = E[X^{2k+1}]$ for $k \in \mathbb{N}_0$; then, integrating by parts,
\[ I_{2k+1} = \int_{-\infty}^{\infty} x^{2k+1}\, \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}x^2\right) dx = -\int_{-\infty}^{\infty} x^{2k}\, \frac{1}{\sqrt{2\pi}}\, \frac{d}{dx} \exp\left(-\frac{1}{2}x^2\right) dx \]
\[ = \left[-x^{2k}\, \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}x^2\right)\right]_{x=-\infty}^{\infty} + 2k \int_{-\infty}^{\infty} x^{2k-1}\, \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}x^2\right) dx = 2k\, I_{2k-1}. \quad (1.5) \]
Iterating this recursion yields $I_{2k+1} = 2k (2k-2) \cdots 2 \cdot I_1$. Since $I_1 = E[X] = 0$, we have $E[X^{2k+1}] = 0$ for all $k \in \mathbb{N}_0$.
(a) To show that $f$ is a density function, first note that $f(x) \ge 0$ for all $x \ge 0$. Secondly, recognising a gamma density with shape parameter $k/2$ for $k \in \mathbb{N}$ and scale parameter $1/2$, we have
\[ \int_0^{\infty} \frac{1}{2^{k/2}\,\Gamma(k/2)}\, x^{k/2-1} \exp(-x/2)\, dx = 1. \quad (1.6) \]
Let $z = y^{1/2}$, so $dz = \tfrac{1}{2} y^{-1/2}\, dy$. Under this change of variable, $z$ ranges from $0$ to $\sqrt{x}$ as $y$ ranges from $0$ to $x$. The cumulative distribution function $F(x)$ becomes
\[ F(x) = \int_0^x \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}y\right) y^{-1/2}\, dy = \int_0^x \frac{1}{2^{1/2}\,\Gamma(1/2)} \exp\left(-\frac{1}{2}y\right) y^{-1/2}\, dy, \quad (1.9) \]
Module 2
Exercise 2.2: [los7, Solution] Let the sum $S = X_1 + X_2 + X_3$ with $X_1$, $X_2$ and $X_3$ distributed as follows:

x   f1(x)   f2(x)   f3(x)
0   0.2     -       0.5
1   0.3     -       0.5
2   0.5     p       -
3   -       1-p     -

where $0 < p < 1$. You are also given that $F_S(4) = 0.43$. Calculate the value of $p$.
Calculate
1. The distribution of X1 + X2
2. The distribution of X1 + X2 + X3
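As a quick numeric sanity check (a Python sketch, used here only as an independent verification of the convolution by hand; the course code elsewhere is in R), one can enumerate the convolution directly. Since $F_S(4)$ is linear in $p$, two evaluations suffice to solve $F_S(4) = 0.43$:

```python
from itertools import product

# pmfs from the exercise table (value -> probability)
f1 = {0: 0.2, 1: 0.3, 2: 0.5}
f3 = {0: 0.5, 1: 0.5}

def F_S4(p):
    """F_S(4) = P(X1 + X2 + X3 <= 4) with f2(2) = p, f2(3) = 1 - p."""
    f2 = {2: p, 3: 1 - p}
    return sum(q1 * q2 * q3
               for (x1, q1), (x2, q2), (x3, q3)
               in product(f1.items(), f2.items(), f3.items())
               if x1 + x2 + x3 <= 4)

# F_S(4) is linear in p, so two evaluations determine the root of F_S(4) = 0.43
p = (0.43 - F_S4(0)) / (F_S4(1) - F_S4(0))
print(round(p, 4))  # 0.2
```

This suggests $p = 0.2$, which can be confirmed analytically from $F_S(4) = 0.35 + 0.40\,p$.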
Exercise 2.4: [los11, Solution] An insurance portfolio has the following characteristics:

Distribution of N        Distribution of X
n   P(N = n)             x   p(x)
0   0.4                  1   0.6
1   0.3                  2   0.4
2   0.3
Number of claims, N , and claim amounts, X, are mutually independent. Compute the condi-
tional probability that the average claim size exceeds the expected claim size, given that there
is at least one claim.
\[ S = X_1 + X_2 + \cdots + X_N \]
where:

n   P(N = n)
0   0.3
1   0.2
2   0.5
Exercise 2.6: [los13K, Solution] [?, Problem 2.3.1] Calculate $\Pr[S = s]$ for $s = 0, 1, \ldots, 6$ when $S = X_1 + 2X_2 + 3X_3$ and $X_j \sim \text{Poisson}(j)$.
x Solution
1 0 0.0300
2 1 0.0000
3 2 0.0100
4 3 0.0600
5 4 0.0050
6 5 0.0200
7 6 0.0900
8 7 0.0100
9 8 0.0350
10 9 0.1200
11 10 0.0200
12 11 0.0500
13 12 0.1200
14 13 0.0300
15 14 0.0500
16 15 0.0750
17 16 0.0300
18 17 0.0350
19 18 0.0750
20 19 0.0225
21 20 0.0350
22 21 0.0150
23 22 0.0225
24 23 0.0075
25 24 0.0150
26 25 0.0050
27 26 0.0075
28 27 0.0000
29 28 0.0050
Exercise 2.8: [los21R, Solution][R ] Consider Exercise 2.6 above. Can you use the R code
developed in Exercise 2.7 to get the distribution of S? Why?
in R as a function of the distributions of N and X. What are the conditions for this to be
feasible? Print the pmf and df of S, as well as the descriptive statistics of S. Try to make your
code as efficient as possible.
Check your code with ?, Example 12.2.2.
x f_S F_S
1 0 0.0024787522 0.002478752
2 1 0.0024787522 0.004957504
3 2 0.0061968804 0.011154385
4 3 0.0128068862 0.023961271
5 4 0.0149757944 0.038937065
6 5 0.0243950527 0.063332118
7 6 0.0332600344 0.096592153
[...]
37 36 0.0004111363 0.999152382
[Plot: pmf of S, results$f_S against results$x, for x = 0 to 36]
1. Poisson($\lambda$)
2. binomial($n$, $p$)
3. negative binomial($r$, $p$)
Exercise 2.12: [NLI2.7, Solution][?, Corollary 2.7] Assume $S_1, \ldots, S_n$ are independent with $S_j \sim \text{CompBinom}(v_j, p, G)$ for all $j = 1, \ldots, n$. Show that the aggregated claim has a compound binomial distribution with
\[ S = \sum_{i=1}^n S_i \sim \text{CompBinom}\left(\sum_{j=1}^n v_j,\; p,\; G\right). \quad (2.1) \]
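A sketch of the standard argument (under the assumption that the mgf $M_G$ exists in a neighbourhood of the origin):

```latex
M_{S_j}(r) = \left(1 - p + p\,M_G(r)\right)^{v_j}
\quad\Longrightarrow\quad
M_S(r) = \prod_{j=1}^n M_{S_j}(r)
       = \left(1 - p + p\,M_G(r)\right)^{\sum_{j=1}^n v_j},
```

which is the mgf of a $\text{CompBinom}(\sum_{j=1}^n v_j, p, G)$ random variable; uniqueness of moment generating functions then gives the claim.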
Exercise 2.14: [los15K, Solution] [?, Problem 3.5.8] Assume that $S_1$ is compound Poisson distributed with parameter $\lambda = 2$ and claim sizes $p(1) = p(3) = \tfrac{1}{2}$. Let $S_2 = S_1 + N$, where $N$ is Poisson(1) distributed and independent of $S_1$. Determine the mgf of $S_2$. What is the corresponding distribution? Determine $\Pr[S_2 \le 2.4]$. Leave the powers of $e$ unevaluated.
Exercise 2.15: [los16, Solution] You are given $S = S_1 + S_2$, where $S_1$ and $S_2$ are independent and have compound Poisson distributions with $\lambda_1 = 3$ and $\lambda_2 = 2$ and individual claim amount distributions:

x   p1(x)   p2(x)
1   0.25    0.10
2   0.75    0.40
3   0.00    0.40
4   0.00    0.10
Determine the mean and variance of the individual claim amount for S.
Exercise 2.16: [los17K, Solution] [?, Problem 3.4.3] Assume that $S_1$ is compound Poisson with $\lambda_1 = 4$ and claims $p_1(j) = \tfrac{1}{4}$, $j = 0, 1, 2, 3$, and $S_2$ is also compound Poisson with $\lambda_2 = 2$ and $p_2(j) = \tfrac{1}{2}$, $j = 2, 4$. If $S_1$ and $S_2$ are independent, then what is the distribution of $S_1 + S_2$?
Exercise 2.17: [los18, Solution] Suppose that $S$ has a compound Poisson distribution with parameter $\lambda$ and with discrete claims distribution
2. Prove that each Ni has a Poisson distribution and find its parameter.
Exercise 2.18: [los19, Solution] Suppose that the number of accidents incurred by an insured
driver in a single year has a Poisson distribution with parameter . If an accident happens, the
probability is p that the damage amount will exceed a deductible (or excess) amount. On the
assumption that the number of accidents is independent of the severity of the accidents, derive
the distribution of the number of accidents that result in a claim payment.
Exercise 2.19: [sur1K, Solution] [?, Problem 4.2.2] Let $\{N(t), t \ge 0\}$ be a Poisson process with parameter $\lambda$, and let $p_n(t) = \Pr[N(t) = n]$ and $p_{-1}(t) \equiv 0$. Show that
Exercise 2.20: [NLI6, Solution][?, Exercise 6] An insurance company decides to offer a no-
claims bonus to good car drivers, namely,
How does the base premium need to be adjusted so that this no-claims bonus can be financed?
For simplicity we assume that all risks have been insured for at least 6 years. Answer the
question in the following two situations:
(a) Homogeneous portfolio with i.i.d. risks having i.i.d. Poisson claim counts with frequency parameter $\lambda = 0.2$.
(b) Heterogeneous portfolio with independent risks characterised by a frequency parameter $\Lambda$ having a gamma distribution with mean $0.2$ and $\text{Vco}(\Lambda) = 1$ (Vco stands for coefficient of variation). Conditionally, given $\Lambda$, the individual years have i.i.d. Poisson claim counts with frequency parameter $\Lambda$.
where $x_j$ refers to the claim amount observed and $\delta_j$ is the (right-censoring) indicator of whether the applicable policy limit has been reached. You are to fit a (simple) exponential distribution model to the observed claims with
1. Write down the log-likelihood function for estimating the exponential parameter.
3. Describe how you can derive a standard error of your parameter estimate.
Exercise 2.22: [Fit2, Solution] [Institute of Actuaries, Subject 106, September 2000] An insur-
ance company has a portfolio of policies with a per-risk excess of loss reinsurance arrangement
with a deductible of M > 0. Claims made to the direct insurer, denoted by X, have a Pareto
distribution with cumulative distribution function
\[ F_X(x; \alpha) = 1 - \left(\frac{200}{200 + x}\right)^{\alpha}. \]
There were a total of $n$ claims from the portfolio. Of these claims, $I$ were for amounts less than the deductible. The claims less than the deductible are
\[ x_i, \quad \text{for } i = 1, 2, \ldots, I. \]
The value of the statistic $\sum_{i=1}^{I} \log(200 + x_i) = y$ is given.
Exercise 2.23: [Fit3, Solution] Observations (which are numbers of claims) $Y_1, Y_2, \ldots, Y_n$ are independent Poisson random variables with $E(Y_i) = \lambda_i$, where
\[ \log \lambda_i = \begin{cases} \alpha, & \text{for } i = 1, 2, \ldots, m \\ \alpha + \beta, & \text{for } i = m+1, m+2, \ldots, n \end{cases} \]
Exercise 2.24: [Fit4, Solution][?, Exercise 12.52] Consider the inverse Gaussian distribution with density function expressed as
\[ f_X(x) = \left(\frac{\theta}{2\pi x^3}\right)^{1/2} \exp\left[-\frac{\theta (x - \mu)^2}{2\mu^2 x}\right], \quad \text{for } x > 0. \]
1. Show that
\[ \sum_{j=1}^n \frac{(x_j - \mu)^2}{x_j} = \mu^2 \sum_{j=1}^n \left(\frac{1}{x_j} - \frac{1}{\bar{x}}\right) + \frac{n}{\bar{x}}\, (\bar{x} - \mu)^2, \]
where $\bar{x} = (1/n) \sum_{j=1}^n x_j$.
2. For a given sample $x_1, x_2, \ldots, x_n$, show that the maximum likelihood estimates of $\mu$ and $\theta$ are
\[ \hat{\mu} = \bar{x} \quad \text{and} \quad \hat{\theta} = \frac{n}{\sum_{j=1}^n \left(\dfrac{1}{x_j} - \dfrac{1}{\bar{x}}\right)}. \]
Exercise 2.25: [Fit5, Solution] The following 20 claim amounts were observed over a period
of time:
132 149 476 147 135 110 176 107 147 165
135 117 110 111 226 108 102 108 227 102
You are interested in estimating the probability that a claim will exceed 200. You are to fit the
Pareto distribution with cumulative distribution function of the form
2. Use this to estimate the probability that a claim will exceed 200.
Exercise 2.26: [NLI8, Solution][?, Exercise 8] Natural hazards in Switzerland are covered by the so-called Schweizerischer Elementarschaden-Pool (ES-Pool). This is a pool of private Swiss insurance companies which organises the diversification of natural hazards in Switzerland. For the pricing of these natural hazards one distinguishes between small events and large events, the latter having a total claim amount exceeding CHF 50 million per event. The following 15 storm and flood events were observed in the years 1986-2005 (these are the events with a total claim amount exceeding CHF 50 million).
Fit a Pareto distribution with parameters $\theta = 50$ and $\alpha > 0$ to the observed claim sizes. Estimate the parameter $\alpha$ using the unbiased version of the MLE.
We introduce a maximal claims cover of $M = 2$ billion CHF per event, i.e. the individual claims are given by $Y_i \wedge M = \min\{Y_i, M\}$ (see also Section 3.4.2 in ?). For the yearly claim amount of storm and flood events, we assume a compound Poisson distribution with Pareto claim sizes $Y_i$. What is the expected total yearly claim amount?
What is the probability that we observe a storm and flood event next year which exceeds the level of $M = 2$ billion CHF?
2.4 Solutions
Solution 2.1: [los3K, Exercise] Firstly, one should notice that $X$ is a discrete uniform random variable over $\{1, 2, 3, 4, 5, 6\}$. Therefore we have
\[ E(X) = \frac{1+2+3+4+5+6}{6} = \frac{7}{2}, \]
\[ Var(X) = E(X^2) - (E(X))^2 = \frac{1^2+2^2+3^2+4^2+5^2+6^2}{6} - \left(\frac{7}{2}\right)^2 = \frac{35}{12}. \]
We have $Y \,|\, X = x \sim \text{binomial}(x, 1/2)$. Hence,
\[ E(Y) = E[E(Y|X)] = E[X/2] = 7/4 \]
and
\[ Var(Y) = Var[E(Y|X)] + E[Var(Y|X)] = Var[X/2] + E[X/4] = 77/48. \]
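The conditional-moment computation can be verified exactly with rational arithmetic (a Python sketch, used here only because it allows exact fractions):

```python
from fractions import Fraction

faces = range(1, 7)
EX = sum(Fraction(x, 6) for x in faces)                  # 7/2
VarX = sum(Fraction(x * x, 6) for x in faces) - EX ** 2  # 35/12

# Y | X = x ~ binomial(x, 1/2): E(Y|X) = X/2 and Var(Y|X) = X/4
EY = EX / 2
VarY = VarX / 4 + EX / 4  # Var[E(Y|X)] + E[Var(Y|X)]
print(EY, VarY)  # 7/4 77/48
```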
Solution 2.4: [los11, Exercise] Let $S = X_1 + \cdots + X_N$ denote the aggregate claims. We wish to compute the probability $\Pr\left[\frac{S}{N} > E(X) \,\middle|\, N > 0\right]$ where $E(X) = 1.4$. Thus, we have
\[ \Pr\left[\frac{S}{N} > E(X) \,\middle|\, N > 0\right] = \frac{\Pr[S > 1.4N,\; N > 0]}{\Pr[N > 0]} = \frac{\Pr[S > 1.4 \,|\, N = 1]\Pr[N = 1] + \Pr[S > 2.8 \,|\, N = 2]\Pr[N = 2]}{1 - 0.4} \]
\[ = \frac{(0.4)(0.3) + \left[1 - (0.6)^2\right](0.3)}{0.6} = 0.52. \]
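A short numeric check of this value (Python sketch; the distributions are taken from the exercise table):

```python
# distributions from Exercise 2.4
pN = {0: 0.4, 1: 0.3, 2: 0.3}   # claim count N
pX = {1: 0.6, 2: 0.4}           # claim size X, so E(X) = 1.4

# numerator: P(S > 1.4 * N and N > 0)
num = pN[1] * sum(q for x, q in pX.items() if x > 1.4)
num += pN[2] * sum(q1 * q2
                   for x1, q1 in pX.items()
                   for x2, q2 in pX.items()
                   if x1 + x2 > 2.8)
prob = num / (1 - pN[0])
print(round(prob, 2))  # 0.52
```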
1. Copy this code and paste it into an R script. If you source it, you should find the same results as in Exercises 1.8.1 and 1.8.2.
#####################
# convolution 1.8.1 #
#####################
#print results
results<-data.frame(x=c(0:10),DistrX1=fX1,DistrX2=fX2,Solution=fX12_8a)
print("Exercise 1.8.1")
print(results)
#####################
# convolution 1.8.2 #
#####################
#use the same algorithm as above, but with fX12_8a and fX3...
# add 0s
fX12_8a <- c(fX12_8a,rep(0,5))
fX3 <- c(fX3,rep(0,10))
#print results
resultsb<-data.frame(x=c(0:15),DistrX3=fX3,DistrX1X2=fX12_8a,Solution=fX123_8b)
print("Exercise 1.8.2")
print(resultsb)
#############
# variables #
#############
# vectors of probabilities
fX1 <- c(.1,.2,.2,.2,.2,.05,.05)
fX2 <- c(0,.2,0,.3,.4,.1)
fX3 <- c(.3,.1,.05,.3,.15,.1)
# weights
alpha <- c(3,0,2)
################
# convolutions #
################
#just to be sure: if only one weight is non-zero, there is no convolution to do
if(sum(alpha!=0)==1) print("there is no convolution to do!")
#####
#note that we have at most only two convolutions to do
#initialising results array - 3 columns for scaled Xs, one for convolution
#of first two, last for the solution
fS <- array(rep(0,(rangeS+1)*5),c(rangeS+1,5))
for(i in 1:(rangeS+1)) {
if(alpha[2]==0){ # then only 1 and 3 need to be convoluted (see test above)
fS[i,5] <- sum(fS[1:i,1]*fS[i:1,3]) #end of the story
} else { if(alpha[1]==0){ #then only 2 and 3 need to be convoluted
fS[i,5] <- sum(fS[1:i,2]*fS[i:1,3]) #end of the story
} else { fS[i,4] <- sum(fS[1:i,1]*fS[i:1,2]) # we do 1 and 2 and see...
if(alpha[3]==0) { # if alpha 3 is 0, then it is finished
fS[i,5] <- fS[i,4] # ... and we translate results in column 5
# otherwise we do the last convolution 1*2 with 3:
} else {fS[i,5] <- sum(fS[1:i,3]*fS[i:1,4])}
} # end alpha 3 if
} #end second else
} # end for
#print results
results<-data.frame(x=c(0:rangeS),Solution=fS[,5])
print("Solution")
print(results)
plot(results)
#check we have a true distribution..:
if(min(fS[,5])<0) print("Oups some probabilities are negative") else
print("All probabilities are positive")
if(max(fS[,5])>1) print("Oups some probabilities are > 1") else
print("All probabilities are < 1")
print(c("The sum of them is: ",sum(fS[,5])))
[Plot: Solution (fS[,5]) against x = 0 to 25]
You can see the effect of the scaling as the pmf is not smooth.
Solution 2.8: [los21R, Exercise] The support of the $X_i$ is not finite, which means that it is not possible to calculate the exact distribution of $S$ without truncating the distribution of the $X_i$ at some point; otherwise the program would treat the support of $S$ as infinite and loop forever. It is nevertheless possible to achieve a decent level of accuracy (if one is patient), but that would require careful additional programming to determine where the distributions of the $X_i$'s should be truncated, as well as analysis to check that the moments and quantiles of $S$ are reasonably preserved.
Note that it is not possible to develop code for the general case
\[ S = \alpha_1 X_1 + \alpha_2 X_2 + \alpha_3 X_3, \quad \alpha_i \ge 0 \text{ (integers)}, \quad i = 1, \ldots, 3 \]
if some of the $X_i$'s have infinite support without truncating them. However, if all the $X_i$'s are Poisson, we can then use Theorem 12.4.1 and allow for any number of $\alpha_i$'s.
Solution 2.9: [los22R, Exercise] First note that this formula can only be used if X and N are
both discrete and with a finite range. One possible code is as follows:
# inputs #
##########
fX <-c(0,.5,.4,.1)
fN <-c(.1,.3,.4,.2)
# program #
###########
# range of X, N and S
rX <- length(fX)-1
rN <- length(fN)-1
rS <- rX*rN
# print results #
#################
results <- data.frame(x=0:rS,pmf=distS[1:(rS+1),rN+2],df=distS[1:(rS+1),rN+3])
print(results)
source("path to program of Exercise 11.a")
pmf_to_desc_stats(distS[1:(rS+1),rN+2],1)
Efficiency comments:
1. Consider the convolutions part. In the bounds for j and for the convolutions, we recognise that the probability masses of the convolutions will spread in the shape of a triangle in the table of successive convolutions. The upper side of the triangle has slope 0 if Pr[X = 0] > 0 and slope -1 otherwise. The lower side of the triangle will always have a slope of minus the range of X. So you will have probabilities everywhere only if the range of X is infinite (which will never happen in this program - see the note at the beginning of the solution) and if Pr[X = 0] > 0. We can thus save resources by doing the convolutions (products) only when these probabilities are different from zero, which is achieved by
We use here the fact that R is a vector-based language, so we can perform j products in a single line using 1:j and j:1 in the index of distS. This effectively does the same as the following loop
temp <- 0
for(k in 1:j) {
temp <- temp + distS[k,2]*distS[j-k+1,i+1]
} # end k loop
distS[j,i+2] <- temp
which is a more traditional way to program it (in VBA or Maple or Mathematica, you
would program the convolution with the loop).
What impact do these considerations have? Here are the approximate processing times if fX(x) = 0.01, x = 1, 2, ..., 100 and fN(x) = 0.05, x = 0, 1, ..., 19 (performed on an iMac with a 3.06 GHz Intel Core 2 Duo chip):

         no triangle       triangle
loop     2 min. 56 sec.    1 min. 5 sec.
vector   3 sec.            1.5 sec.
In this case, using the vector-based programming is much more important than being smart
about the triangle, but this is because we use R and we can take advantage of its very powerful
algorithms to handle vectors. In VBA, the triangle trick would be crucial.
Finally, note the shape of the pmf of S in this case (we omit here the mass at 0):
[Plot: distS[2:(rS + 1), rN + 2] against 1:rS]
You can see the effect of compounding: when probabilities are due to only a few claims there are irregularities (jumps) in the pmf (left-hand side of the plot). When we look at outcomes of S that involve many different possible numbers of claims, the pmf is much smoother (right-hand side of the plot).
Solution 2.10: [los23R, Exercise] Here is the code with the parameters corresponding to Ex-
ercise 1.12:
# Inputs #
##########
fX <- c(0,1/6,2/6,3/6)
l <- 6
s <- 36
# Program #
###########
#number of variables
num <-length(alpha)
#number of columns that are necessary
#(the num distributions + num-1 convolutions)
colu <- 2*num -1
#calculate the df
FS <- c()
for(i in 1:(s+1)){FS[i]<-sum(fS[1:i,colu])}
#print results
results <- data.frame(x=c(0:s),f_S=fS[,colu],F_S=FS)
plot(results$x,results$f_S)
print(results)
Binomial:
\[ E(N) = np; \quad Var(N) = npq; \quad m_N(t) = \left(q + p e^t\right)^n; \]
\[ E(S) = np\mu; \quad Var(S) = np\sigma^2 + npq\mu^2; \quad m_S(t) = \left[q + p\, m_X(t)\right]^n. \]
Negative Binomial:
\[ E(N) = r(1-p)/p; \quad Var(N) = r(1-p)/p^2; \quad m_N(t) = \left\{p / \left[1 - (1-p)e^t\right]\right\}^r; \]
\[ E(S) = r(1-p)\mu/p; \quad Var(S) = r(1-p)\sigma^2/p + r(1-p)\mu^2/p^2; \quad m_S(t) = \left\{p / \left[1 - (1-p)\, m_X(t)\right]\right\}^r. \]
Thus,
\[ P(S = 0) = e^{-2}; \]
\[ P(S = 1) = 0.2 e^{-2}; \]
\[ P(S = 2) = 0.42 e^{-2}; \]
\[ P(S = 3) = e^{-2}\,(0.6 + 0.08 + 0.0013) = 0.6813 e^{-2}; \]
\[ P(S = 4) = e^{-2}\left(0.8 + 0.2 + \tfrac{4}{3}\, 0.006 + \tfrac{2}{3}\, 0.0001\right) = 1.008067 e^{-2}. \]
2. $S$ is compound Poisson($\lambda = 2$) with $p(x) = 0.1x$, $x = 1, 2, 3, 4$. Write $f_j$ for the pmf of $S_j = 1 N_1 + \cdots + j N_j$, $j = 1, 2, 3, 4$, and $p_j$ for the pmf of $j N_j$, so that, for $x$ a multiple of $j$,
\[ p_j(x) = P(jN_j = x) = \exp(-0.2 j)\, (0.2 j)^{x/j} / (x/j)!. \]

x   p1 = f1   p2     f2     p3     f3     p4     f4
0   .819      .670   .549   .549   .301   .449   .1353
1   .164             .110          .060          .0270
2   .016      .268   .231          .127          .0568
3   .001             .045   .329   .205          .0922
4   .000      .054   .048          .062   .359   .1364
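Both representations describe the same compound Poisson($\lambda = 2$) distribution, so the multiples of $e^{-2}$ computed in part 1 can be recomputed with Panjer's recursion (a Python sketch, used here only as an independent check on the arithmetic):

```python
import math

lam = 2.0
p = {x: 0.1 * x for x in (1, 2, 3, 4)}  # claim size pmf p(x) = 0.1x

# Panjer recursion for the compound Poisson pmf:
# f(s) = (lam / s) * sum_j j * p(j) * f(s - j), with f(0) = exp(-lam)
f = [math.exp(-lam)]
for s in range(1, 5):
    f.append(lam / s * sum(j * p[j] * f[s - j] for j in p if j <= s))

coeffs = [fs / math.exp(-lam) for fs in f]  # multiples of exp(-2)
print([round(c, 6) for c in coeffs])  # [1.0, 0.2, 0.42, 0.681333, 1.008067]
```

The last column of the table ($f_4$) agrees, e.g. $f_4(0) = 0.1353 = e^{-2}$.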
Solution 2.15: [los16, Exercise] From Theorem 12.4.1, we know $S$ is compound Poisson with $\lambda = 5$ and individual claims distribution

x   P(x)
1   (3/5)(0.25) + (2/5)(0.10) = 0.19
2   (3/5)(0.75) + (2/5)(0.40) = 0.61
3   (2/5)(0.40) = 0.16
4   (2/5)(0.10) = 0.04
Thus, the mean of the individual claim amount is
\[ 1 \cdot 0.19 + 2 \cdot 0.61 + 3 \cdot 0.16 + 4 \cdot 0.04 = 2.05 \]
and the variance of the individual claim amount is
\[ (1 - 2.05)^2 \cdot 0.19 + (2 - 2.05)^2 \cdot 0.61 + (3 - 2.05)^2 \cdot 0.16 + (4 - 2.05)^2 \cdot 0.04 = 0.5075. \]
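A quick check of these two values (Python sketch):

```python
# mixed claim size distribution of S from the solution above
px = {1: 0.19, 2: 0.61, 3: 0.16, 4: 0.04}
mean = sum(x * q for x, q in px.items())
var = sum((x - mean) ** 2 * q for x, q in px.items())
print(round(mean, 2), round(var, 4))  # 2.05 0.5075
```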
Solution 2.17: [los18, Exercise] Define the sum of the numbers of claims arising from each possible claim amount by
\[ N = \sum_{i=1}^m N_i. \]
Solution 2.18: [los19, Exercise] Let $N$ be the number of accidents, which is given to be Poisson($\lambda$) distributed. Now, suppose $N_1$ is the number of these accidents that lead to claims (i.e. the damage amount exceeds the deductible or excess). Then clearly, conditionally on $N$, $N_1$ has a Binomial($N$, $p$) distribution, since an accident leads to either a claim or no claim. Thus, we have
\[ P(N_1 = n \,|\, N = m) = \frac{m!}{n!\,(m-n)!}\, p^n (1-p)^{m-n}, \quad n \le m. \]
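The step from here to the distribution of $N_1$ can be sketched as follows (summing over the distribution of $N$):

```latex
\begin{align*}
P(N_1 = n)
&= \sum_{m \ge n} P(N_1 = n \mid N = m)\, P(N = m)
 = \sum_{m \ge n} \frac{m!}{n!(m-n)!}\, p^n (1-p)^{m-n}\, e^{-\lambda} \frac{\lambda^m}{m!} \\
&= e^{-\lambda} \frac{(\lambda p)^n}{n!} \sum_{m \ge n} \frac{\left(\lambda(1-p)\right)^{m-n}}{(m-n)!}
 = e^{-\lambda} \frac{(\lambda p)^n}{n!}\, e^{\lambda(1-p)}
 = e^{-\lambda p} \frac{(\lambda p)^n}{n!},
\end{align*}
```

so $N_1 \sim \text{Poisson}(\lambda p)$.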
which gives the result for n = 0, 1, ... Note that for all n, we can write
Re-arranging, we get
Thus, the probability of having n jumps by time t + dt equals the probability of n - 1 jumps by time t and one more jump within the next tiny interval dt, plus the probability of n jumps by time t and no further jump within dt.
Note that this expression could have been written directly using the law of total probability
and the property of the Poisson process seen in the lecture.
(a) Assume that we are currently evaluating all the policies and we can break them down into three categories: no claim for 6+ years ($e^{-6 \cdot 0.2} = 0.3012$), no claim for 3-6 years ($e^{-3 \cdot 0.2} - e^{-6 \cdot 0.2} = 0.2476$) and the rest. So about 30.12% of policies receive a 30% discount and 24.76% of policies receive a 10% discount. Now assume that the new premium is $P$ and we wish to solve
\[ P\left[0.7\, e^{-6 \cdot 0.2} + 0.9\, \left(e^{-3 \cdot 0.2} - e^{-6 \cdot 0.2}\right) + 1 \cdot \left(1 - e^{-3 \cdot 0.2}\right)\right] = E[Y] \;\Longrightarrow\; P = 1.13\, E[Y]. \quad (2.4) \]
So in order to finance the bonus, we have to raise the base premium by 13 percent.
(b) For claims in the heterogeneous portfolio, we have assumed that the frequency parameter $\Lambda$ follows a Gamma($\gamma$, $c$) distribution. Using the fact that the mean is $0.2$ and $\text{Vco}(\Lambda) = 1$, we can work out that $\gamma = 1$ and $c = 5$. Thus, the probability of having zero claims in one year (conditioning on the value of $\Lambda$ and then averaging) is
\[ \Pr[\text{Number of Claims} = 0] = \int_0^{\infty} e^{-\lambda}\, 5 e^{-5\lambda}\, d\lambda = 5/6. \quad (2.5) \]
So the three categories have the following break-down: no claim for 6+ years ($(5/6)^6 = 0.334898$), no claim for 3-6 years ($(5/6)^3 - (5/6)^6 = 0.2438$) and the rest. Using the same method, the new premium solves
\[ P\left[0.7\, (5/6)^6 + 0.9\, \left((5/6)^3 - (5/6)^6\right) + 1 \cdot \left(1 - (5/6)^3\right)\right] = E[Y] \;\Longrightarrow\; P = 1.142661\, E[Y], \quad (2.6) \]
which yields a 14% increase on the base premium.
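Both premium multipliers follow from the same balance equation, which makes them easy to verify numerically (Python sketch; the helper `multiplier` is introduced here for illustration, not part of the course code):

```python
import math

def multiplier(p6, p3):
    """Premium multiplier P / E[Y] solving
    P * (0.7 * p6 + 0.9 * (p3 - p6) + (1 - p3)) = E[Y],
    where p6 (p3) is the probability of no claim for 6 (3) consecutive years."""
    return 1 / (0.7 * p6 + 0.9 * (p3 - p6) + (1 - p3))

m_a = multiplier(math.exp(-6 * 0.2), math.exp(-3 * 0.2))  # (a) Poisson(0.2)
m_b = multiplier((5 / 6) ** 6, (5 / 6) ** 3)              # (b) gamma mixing
print(round(m_a, 4), round(m_b, 4))  # 1.1301 1.1427
```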
Solution 2.21: [Fit1, Exercise] Note that there is no truncation in the observations and that $P(X > x) = e^{-\lambda x}$.
1. The likelihood function for the observed data $(x_j, \delta_j)$, $j = 1, 2, \ldots, n$ can be written as
\[ L(\lambda; x_j, \delta_j) = \prod_{j=1}^n \left(\lambda e^{-\lambda x_j}\right)^{1-\delta_j} \left(e^{-\lambda x_j}\right)^{\delta_j}, \]
so that the log-likelihood is $\ell(\lambda; x_j, \delta_j) = \log \lambda \sum_{j=1}^n (1 - \delta_j) - \lambda \sum_{j=1}^n x_j$, and setting $\partial \ell / \partial \lambda = 0$ gives
\[ \hat{\lambda} = \frac{\sum_{j=1}^n (1 - \delta_j)}{\sum_{j=1}^n x_j}. \]
3. Standard errors can be derived from the second derivative of the log-likelihood. In this case, it is
\[ \frac{\partial^2 \ell(\lambda; x_j, \delta_j)}{\partial \lambda^2} = -\frac{1}{\lambda^2} \sum_{j=1}^n (1 - \delta_j), \]
which should be negative at the optimum. So the standard error of our parameter estimate equals
\[ \left[\frac{1}{\hat{\lambda}^2} \sum_{j=1}^n (1 - \delta_j)\right]^{-1/2} = \frac{\hat{\lambda}}{\sqrt{\sum_{j=1}^n (1 - \delta_j)}} = \frac{\sqrt{\sum_{j=1}^n (1 - \delta_j)}}{\sum_{j=1}^n x_j}. \]
It is the square root of the negative of the inverse of the Hessian (which is the second derivative) evaluated at the MLE.
Solution 2.22: [Fit2, Exercise] Note that the density of the Pareto can be expressed as
\[ f_X(x; \alpha) = \frac{\alpha\, 200^{\alpha}}{(200 + x)^{\alpha+1}}. \]
Note also that the deductible of the excess of loss is equivalent to a policy limit from the point of view of the insurer.
\[ \ell(\alpha; x_i) = \log L(\alpha; x_i) = I \log \alpha + n\alpha \log 200 - \alpha (n - I) \log(200 + M) - (\alpha + 1) \sum_{i=1}^{I} \log(200 + x_i), \]
which gives
\[ \hat{\alpha} = \frac{I}{(n - I) \log(200 + M) - n \log 200 + \sum_{i=1}^{I} \log(200 + x_i)}. \]
You may wish to check that this gives the maximum by evaluating the second derivative:
\[ \frac{\partial^2 \ell(\alpha; x_i)}{\partial \alpha^2} = -\frac{I}{\alpha^2} < 0. \]
so that the log-likelihood is
\[ \ell(\alpha, \beta; y_i) = \log L(\alpha, \beta; y_i) = \sum_{i=1}^n \left[-\lambda_i + y_i \log \lambda_i - \log y_i!\right] \]
\[ = \sum_{i=1}^m \left[-e^{\alpha} + y_i \alpha - \log y_i!\right] + \sum_{i=m+1}^n \left[-e^{\alpha+\beta} + y_i (\alpha + \beta) - \log y_i!\right] \]
\[ = -m e^{\alpha} + \alpha \sum_{i=1}^m y_i - (n - m) e^{\alpha+\beta} + (\alpha + \beta) \sum_{i=m+1}^n y_i - \sum_{i=1}^n \log y_i!, \]
and
\[ \frac{\partial \ell(\alpha, \beta; y_i)}{\partial \beta} = -(n - m) e^{\alpha+\beta} + \sum_{i=m+1}^n y_i = 0. \]
These yield
\[ -m e^{\alpha} + \sum_{i=1}^m y_i - \sum_{i=m+1}^n y_i + \sum_{i=m+1}^n y_i = 0 \]
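Solving the two score equations gives (a completion of the derivation, which is truncated in the source at this point):

```latex
e^{\hat\alpha} = \frac{1}{m} \sum_{i=1}^m y_i,
\qquad
e^{\hat\alpha + \hat\beta} = \frac{1}{n-m} \sum_{i=m+1}^n y_i,
```

so that $\hat\alpha = \log\!\big(\tfrac{1}{m}\sum_{i=1}^m y_i\big)$ and $\hat\beta = \log\!\big(\tfrac{1}{n-m}\sum_{i=m+1}^n y_i\big) - \hat\alpha$: the MLEs are the log sample means of the two groups.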
1. We have
\[ \sum_{j=1}^n \frac{(x_j - \mu)^2}{x_j} = \sum_{j=1}^n \left(x_j - 2\mu + \frac{\mu^2}{x_j}\right) = \mu^2 \sum_{j=1}^n \left(\frac{1}{x_j} - \frac{1}{\bar{x}}\right) + \frac{n\mu^2}{\bar{x}} - 2n\mu + n\bar{x} \]
\[ = \mu^2 \sum_{j=1}^n \left(\frac{1}{x_j} - \frac{1}{\bar{x}}\right) + \frac{n}{\bar{x}} \left(\mu^2 - 2\mu\bar{x} + \bar{x}^2\right) = \mu^2 \sum_{j=1}^n \left(\frac{1}{x_j} - \frac{1}{\bar{x}}\right) + \frac{n}{\bar{x}}\, (\bar{x} - \mu)^2, \]
which implies
\[ \hat{\mu} = \bar{x} \]
which implies
\[ \hat{\theta} = \frac{n}{\sum_{j=1}^n \left(\dfrac{1}{x_j} - \dfrac{1}{\bar{x}}\right)}. \]
Solution 2.25: [Fit5, Exercise] Notice that the density of the given Pareto can be written as
\[ f_X(x) = \alpha\, 100^{\alpha}\, x^{-\alpha-1}. \]
Differentiating, we get
\[ \frac{\partial \ell(\alpha)}{\partial \alpha} = \frac{20}{\alpha} + 20 \log 100 - \sum \log x_i = 0, \]
so that
\[ \hat{\alpha} = \frac{20}{\sum \log x_i - 40 \log 10} = \frac{20}{99.1252 - 92.1034} = 2.848. \]
2. The required probability estimate is therefore
In the previous part of the question, the fitted distribution for the claim sizes $Y_i$ is a Pareto(50, 0.9824864) distribution. Next we count the number of claims for each year and fit them to a Poisson distribution. There are 2 claims in 1986, 2 in 1987, 1 in 1990, 1 in 1992, 2 in 1993, 1 in 1994, 3 in 1999, 2 in 2000 and 1 in 2005. These claim counts yield an MLE for the Poisson parameter of $\hat{\lambda}_{MLE} = 5/3$. The expected claim amount (per claim) is
\[ E[\min(Y_i, 2000)] = \int_0^{2000} y\, g(y)\, dy + 2000 \int_{2000}^{\infty} g(y)\, dy = 41.34829. \quad (2.9) \]
So the expected total yearly claim amount is (using properties of the compound Poisson distribution) $5/3 \cdot 41.34829 = 68.91382$.
The probability that we observe a storm and flood event next year which exceeds the level of CHF 2 billion is the product of the probability of having one claim and the probability that the claim amount exceeds 2 billion,
\[ \hat{\lambda}_{MLE}\, e^{-\hat{\lambda}_{MLE}} \int_{2000}^{\infty} g(y)\, dy = 0.00648. \quad (2.10) \]
Module 3
where cgf is an expression and where param is a list with the numerical values of the parameters of cgf. The following command, for an inverse Gaussian(alpha = 2, beta = 4),
CMom123Gam12(expression(alpha*(1-sqrt(1-2*t/beta))),list(alpha=2,beta=4))
should yield
0.500000 0.125000 0.093750 2.121320 7.500000
1. Create a function that will calculate and return (to the assigned object, e.g. item <- pmf_to_desc_stats(...)) a vector with the mean $E[\cdot]$, variance $Var(\cdot)$, skewness $\gamma_1(\cdot)$ and kurtosis $\gamma_2(\cdot)$ of a non-negative discrete random variable (with finite range) as a function of its pmf. In addition, a binary variable indicates whether these descriptive statistics should be printed in a data frame or not. Thus, the code (in a separate R document) should look like
pmf_to_desc_stats <- function(pmf,print) {
[code omitted]
where pmf is a vector with the probabilities and where print is a binary (0-1) variable indicating if the results should be printed or not.
2. Add this function to the code developed in Exercise 2.7 part 2 to print the descriptive
statistics. You should get something like
[1] "Descriptive statistics"
mean variance skewness kurtosis
1 12.05 35.1675 0.1213840 -0.4493699
Exercise 3.4: [NLI7, Solution][?, Exercise 7] Assume $Y \sim \Gamma(\gamma, c)$, where its density for $y \ge 0$ is
\[ g(y) = \frac{c^{\gamma}}{\Gamma(\gamma)}\, y^{\gamma - 1} \exp(-cy). \quad (3.1) \]
Prove the statements of the moment generating function $M_Y$ and the loss size index function $I(G(y))$. Hint: use the trick of the proof of Proposition 2.20 in ?.
Exercise 3.5: [los2K, Solution] [?, Problem 2.2.1] Determine the expected value and the variance of $X = IB$ if the claim probability equals 0.1. First, assume that $B$ equals 5 with probability 1. Then, let $B \sim \text{Uniform}(0, 10)$.
Exercise 3.6: [los4K, Solution] [?, Problem 2.2.5] If $X = IB$, what is $m_X(t)$?
Exercise 3.7: [los5K, Solution] [?, Problem 2.2.6] Consider the following cdf $F$:
\[ F(x) = \begin{cases} 0 & \text{for } x < 2 \\ \dfrac{x}{4} & \text{for } 2 \le x < 4 \\ 1 & \text{for } 4 \le x \end{cases} \]
Determine independent random variables $I$, $X$, and $Y$ such that $Z = IX + (1 - I)Y$ has cdf $F$, with $I$ Bernoulli, $X$ a discrete and $Y$ a continuous random variable.
Exercise 3.8: [los6K, Solution] [?, Problem 2.2.8] Suppose that $T = qX + (1 - q)Y$ and $Z = IX + (1 - I)Y$ with $I \sim \text{Bernoulli}(q)$. Compare $E[T^k]$ with $E[Z^k]$, $k = 1, 2$.
t 1 2 3 4 5 6 7 8 9 10
Nt 1000 997 985 989 1056 1070 994 986 1093 1054
vt 10000 10000 10000 10000 10000 10000 10000 10000 10000 10000
Exercise 3.10: [Fit6R, Solution] The following data are the results of a sample of 250 losses:

Loss interval     Number of losses
0 - 25            5
25 - 50           37
50 - 75           28
75 - 100          31
100 - 125         23
125 - 150         9
150 - 200         22
200 - 250         17
250 - 350         15
350 - 500         17
500 - 750         13
750 - 1,000       12
1,000 - 1,500     3
1,500 - 2,500     5
2,500 - 5,000     5
5,000 - 10,000    3
10,000 - 25,000   3
25,000 +          2
1. Determine the maximum likelihood estimate of $\theta$. (Be careful with the log-likelihood function, since the data are grouped.)
2. Conduct a $\chi^2$ goodness-of-fit test of this inverse exponential distribution model on the data.
3. Explain the two other types of hypothesis tests, together with their similarities and differ-
ences, that can be conducted in determining the quality of the fit of your chosen model.
Exercise 3.11: [NLI9, Solution][?, Exercise 9] Assume we have i.i.d. claim sizes $Y = (Y_1, \ldots, Y_n)'$ with $n = 1000$ which were generated by a gamma distribution, see Figure 3.1. The sample mean and sample standard deviation are given by
\[ \hat{\mu}_n = 0.1039 \quad \text{and} \quad \hat{\sigma}_n = 0.1039. \quad (3.3) \]
If we fit the parameters of the gamma distribution we obtain the method of moments estimators and the MLEs
\[ \hat{\gamma}^{MM} = 0.9794 \quad \text{and} \quad \hat{c}^{MM} = 9.4249, \quad (3.4) \]
\[ \hat{\gamma}^{MLE} = 1.0013 \quad \text{and} \quad \hat{c}^{MLE} = 9.6360. \quad (3.5) \]
This provides the fitted distributions displayed in Figure 3.2. The fits look perfect and the corresponding log-likelihoods are given by
\[ \ell_Y(\hat{\gamma}^{MM}, \hat{c}^{MM}) = 1264.013 \quad \text{and} \quad \ell_Y(\hat{\gamma}^{MLE}, \hat{c}^{MLE}) = 1264.171. \quad (3.6) \]
Figure 3.1: i.i.d. claim sizes Y = (Y1 , ..., Yn )0 with n = 1000; lhs: observed data; rhs: empirical
distribution function.
Figure 3.2: Fitted gamma distribution; lhs: log-log plot; rhs: QQ plot.
(a) Why is $\ell_Y(\hat{\gamma}^{MLE}, \hat{c}^{MLE}) > \ell_Y(\hat{\gamma}^{MM}, \hat{c}^{MM})$, and which fit should be preferred according to AIC?
(b) The estimates of $\gamma$ are very close to 1, so we could also use an exponential distribution function. For the exponential distribution function we obtain the MLE $\hat{c}^{MLE} = 9.6231$ and $\ell_Y(\hat{c}^{MLE}) = 1264.169$. Which model (gamma or exponential) should be preferred according to the AIC and the BIC?
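The comparison can be made explicit with $\mathrm{AIC} = -2\ell + 2k$ and $\mathrm{BIC} = -2\ell + k \log n$, where $k$ is the number of fitted parameters (a Python sketch using the log-likelihood values quoted above):

```python
import math

# log-likelihoods and parameter counts quoted in the exercise
n = 1000
ll_gamma, k_gamma = 1264.171, 2
ll_exp, k_exp = 1264.169, 1

def aic(ll, k):
    return -2 * ll + 2 * k

def bic(ll, k, n):
    return -2 * ll + k * math.log(n)

print(aic(ll_gamma, k_gamma), aic(ll_exp, k_exp))
print(bic(ll_gamma, k_gamma, n), bic(ll_exp, k_exp, n))
# lower is better: the exponential model has the smaller AIC and the
# smaller BIC here, since the tiny log-likelihood gain of the gamma
# model does not compensate for its extra parameter
```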
Exercise 3.12: [Fit7R, Solution][R ] The attachment liability.txt contains data on liability insurance claim sizes (in former German currency, Marks) for the year 1982.
2. Fit a Pareto distribution to these data using maximum likelihood estimation. Provide the
usual graphical comparisons (histogram vs. fitted parametric density function, empirical
CDF vs. fitted parametric CDF, Q-Q plot, P-P plot).
3. Fit a Weibull distribution to these data using maximum likelihood estimation. Provide the usual graphical comparisons.
4. Fit a lognormal distribution to these data using maximum likelihood estimation. Provide the usual comparisons.
5. Which distribution would you choose? Justify your answer.

3.4 Calculating within layers for claim sizes

Exercise 3.13: [sur12K, Solution] [?, Problem 3.8.5] Determine the cdf $\Pr[Z \le d]$ and the stop-loss premium $E[(Z - d)_+]$ for a mixture or combination $Z$ of exponential distributions as in
\[ p(x) = q \alpha e^{-\alpha x} + (1 - q) \beta e^{-\beta x}, \quad x > 0. \]
Also determine the conditional distribution of $Z - z$, given $Z > z$.

Exercise 3.14: [sur13, Solution] Show that
\[ E[(S - d)_+] = E[S] - d + \int_0^d (d - x)\, dF(x) = E[S] - \int_0^d \left[1 - F(x)\right] dx. \]

Exercise 3.15: [NLI10, Solution][?, Exercise 10] In Figure 3.3 we display the distribution function of the loss without reinsurance, $Y \sim G$, and the resulting distribution function of the loss to the insurer after applying different re-insurance covers to the loss $Y$. Can you explicitly determine the re-insurance covers from the graphs in Figure 3.3? Note that the functions below are cumulative distribution functions.

Figure 3.3: Cumulative distribution functions implied by re-insurance contracts

Exercise 3.16: [NLI11, Solution][?, Exercise 11] Assume claim sizes $Y_i$ in a given line of business can be described by a log-normal distribution with mean $E[Y_i] = 3000$ and $\text{Vco}(Y_i) = 4$ (coefficient of variation).
Up to now the insurance company was not offering contracts with deductibles. Now it wants to
offer the following three deductible versions d = 200, 500, 1000. Answer the following questions:
2. How does the expected claim size change by the introduction of deductibles?
Exercise 3.17: [sur14K, Solution] [?, Problem 3.10.4] Assume that X1, X2, . . . are independent and identically distributed risks that represent the loss on a portfolio in consecutive years. We could insure these risks with separate stop-loss contracts for one year with a retention d, but we could also consider only one contract for the whole period of n years with the retention nd. Show that E[(X1 − d)+] + . . . + E[(Xn − d)+] ≥ E[(X1 + . . . + Xn − nd)+]. If d ≥ E[Xi], examine how the total net stop-loss premium for the one-year contracts E[(X1 − d)+] relates to the stop-loss premium for the n-year period E[(X1 + . . . + Xn − nd)+].
Hint [?, Rule of thumb 3.10.1]: For retentions t larger than the expectation μ = E[U] = E[W], we have for the stop-loss premiums of risks U and W:

E[(U − t)+] / E[(W − t)+] ≈ Var(U) / Var(W).
Exercise 3.18: [sur15, Solution] [2005 Quiz 1 Question 4] An insurer has a portfolio consisting of 1000 one-year term life insurance policies, each paying $100 in the event of death within one year. The probability of death is 0.002.
The insurer has an EoL reinsurance for each policy in excess of $90.
1. For the insurer, calculate the expected total annual claims and the variance of total annual
claims without the reinsurance.
2. For the insurer, calculate the expected total annual claims and the variance of total annual
claims with the reinsurance.
3. For the reinsurer, calculate the expected total annual claims and the variance of total
annual claims.
Exercise 3.19: [sur16R, Solution][R ] This question is a follow-up to Exercise 4.18. In Exercise 4.18, we calculated x, fS(x), FS(x) for x = 0, 1, 2, . . . , 25. Now, using the same idea, prepare a table for each of the cases i = 1, 2 and 3 including:
3.5 Solutions
Solution 3.1: [los36R, Exercise] One possible code is as follows:
CMom123Gam12 <- function(cgf,param){
#initialising vectors
dcgf <- c(cgf) # for the cgf and its successive derivatives
kappa <- c() # for the cumulants
for(i in 1:4){
dcgf <- c(dcgf, D(dcgf[i],"t")) # i-th derivative
kappa <- c(kappa, eval(dcgf[i+1],param)) # i-th cumulant
}
# remember that the 4-th cumulant is not the 4-th central moment!
#returning results
c(kappa[1:3],gamma)
}
# descriptive statistics
m <- sum(pmf*(0:range))
v <- sum(pmf*(0:range - m)^2)
g1 <- sum(pmf*(0:range - m)^3)/v^(3/2)
g2 <- sum(pmf*(0:range - m)^4)/v^2-3
# print results
if(print==1){
results <- data.frame(mean=m,variance=v,skewness=g1,kurtosis=g2)
print(results)
}
Note that it is important to return a vector with any results you may need later in your code, because local variables are lost once the function has finished. In other words, if you run the function on one line, m (the mean) will not be available on subsequent lines. This also means (and this is the reason why) that you can use m in your main code and call the function without any conflict between the two ms.
2. Add the following lines to the code:
source("[path to file]")
print("Descriptive statistics")
pmf_to_desc_stats(fS[,conv+1],1)
Note that [path to the file] needs to be replaced by the path to the R document where
the code of your function is. If you create several functions, you could put all of them in
a single file that is sourced at the beginning of your normal R files. This way, you can
use all your functions whenever you want. Note that before spending time creating a new function, it is advisable to check whether such a function already exists in R. . .
E[Y²] = E[exp(2 log Y)] = M_{log Y}(2) = exp(2μ + 2σ²),

so that

Var(Y) = exp(2μ + σ²)(exp(σ²) − 1).

If Y ~ Pareto(α, x0) then Y/x0 ~ Pareto(α, 1) and log(1 + Y/x0) ~ Exponential(α).

Proof:

Pr[Y/x0 ≤ y] = Pr[Y ≤ y x0] = 1 − (x0/(x0 + y x0))^α = 1 − 1/(1 + y)^α,

which is the CDF of a Pareto(α, 1) random variable, and

Pr[ln(1 + Y/x0) ≤ y] = Pr[Y ≤ (e^y − 1) x0] = 1 − (x0/(x0 + x0(e^y − 1)))^α = 1 − e^{−αy},

which is the CDF of an Exp(α) random variable.

So

E[Y] = E[Y + x0] − x0
     = x0 E[Y/x0 + 1] − x0
     = x0 E[exp(1 · ln(Y/x0 + 1))] − x0
     = x0 M_Z(1) − x0, where Z ~ Exponential(α)
     = x0 α/(α − 1) − x0
     = x0/(α − 1).

Now

E[exp(2 ln(Y/x0 + 1))] = M_Z(2), where Z ~ Exponential(α)
                       = α/(α − 2).

But

E[exp(2 ln(Y/x0 + 1))] = E[((Y + x0)/x0)²] = (1/x0²) E[Y² + 2Y x0 + x0²].

So

E[Y²] = x0² α/(α − 2) − x0² − 2x0 · x0/(α − 1)
      = x0² [α(α − 1) − (α − 1)(α − 2) − 2(α − 2)] / ((α − 1)(α − 2))
      = x0² (2(α − 1) − 2(α − 2)) / ((α − 1)(α − 2))
      = 2x0² / ((α − 1)(α − 2)).

Then

Var[Y] = 2x0²/((α − 1)(α − 2)) − (x0/(α − 1))²
       = [2x0²(α − 1) − x0²(α − 2)] / ((α − 1)²(α − 2))
       = x0²(2α − 2 − α + 2) / ((α − 1)²(α − 2))
       = α x0² / ((α − 1)²(α − 2)).
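The closed forms for E[Y] and Var[Y] above are easy to sanity-check numerically. The following Python sketch (not part of the original solution; the parameter values α = 3, x0 = 2 are illustrative) integrates y^k against the Pareto density:

```python
import math

def pareto_moment(alpha, x0, k, n_steps=400_000, y_max=20_000.0):
    # Pareto (Lomax-type) density matching the derivation above:
    # f(y) = alpha * x0^alpha / (x0 + y)^(alpha + 1), y > 0.
    # Integrate y^k * f(y) over [0, y_max] with the midpoint rule.
    h = y_max / n_steps
    total = 0.0
    for i in range(n_steps):
        y = (i + 0.5) * h
        total += y**k * alpha * x0**alpha / (x0 + y)**(alpha + 1)
    return total * h

alpha, x0 = 3.0, 2.0   # illustrative values, not taken from the exercise
m1 = pareto_moment(alpha, x0, 1)
var = pareto_moment(alpha, x0, 2) - m1**2

print(m1)    # close to x0/(alpha - 1) = 1
print(var)   # close to alpha*x0^2/((alpha - 1)^2*(alpha - 2)) = 3
```

The truncation at y_max and the midpoint rule introduce only a small error here, so the output should agree with the closed forms to two decimal places.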
Solution 3.5: [los2K, Exercise] First note that with B = 5 w.p. 1, we have μ = E[B|I = 1] = 5 and σ² = Var[B|I = 1] = 0. We will use the decomposition of variance result Var[X] = Var[E[X|I]] + E[Var[X|I]]. Below, we also have q = 0.1, which is the probability of a claim occurring. It is then clear that

E(X) = qμ = 0.5

and

Var(X) = μ²q(1 − q) + σ²q = 9/4.

Now, if B ~ U(0, 10), then μ = 5 and σ² = 100/12. Thus,

E(X) = qμ = 0.5

and

Var(X) = μ²q(1 − q) + σ²q = 37/12.

mX(t) = E[E(e^{Xt} | I)]
      = (1 − q)e^0 + q mX|I=1(t)
      = 1 − q + q mB(t).
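The two cases can be re-evaluated exactly with rational arithmetic. A minimal Python sketch (ours, not part of the original solution) of the decomposition-of-variance computation above:

```python
from fractions import Fraction

def moments_of_X(q, mu, sigma2):
    # E(X) = q*mu and Var(X) = mu^2*q*(1-q) + sigma2*q (decomposition of variance)
    return q * mu, mu**2 * q * (1 - q) + sigma2 * q

q = Fraction(1, 10)
ex1, var1 = moments_of_X(q, Fraction(5), Fraction(0))         # B = 5 w.p. 1
ex2, var2 = moments_of_X(q, Fraction(5), Fraction(100, 12))   # B ~ U(0, 10)
print(ex1, var1)   # 1/2 and 9/4
print(ex2, var2)   # 1/2 and 37/12
```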
Solution 3.7: [los5K, Exercise] F(x) has a jump of size 1/2 at 2 and is uniform on (2, 4), so F is the following mixture of cdfs:

F(x) = (1/2) G(x) + (1/2) H(x)

with dG(2) = 1 (a point mass at 2) and H ~ Uniform(2, 4). The mixed r.v. IX + (1 − I)Y has cdf F for I ~ Bernoulli(1/2), X ≡ 2 and Y ~ Uniform(2, 4), independent.
E(T) = qE(X) + (1 − q)E(Y)

and

E(T²) = E[q²X² + 2q(1 − q)XY + (1 − q)²Y²]
      = q²E(X²) + 2q(1 − q)E(X)E(Y) + (1 − q)²E(Y²).

E(Z) = E[IX + (1 − I)Y]
     = E(I)E(X) + (1 − E(I))E(Y)
     = qE(X) + (1 − q)E(Y) = E(T)

and

E(Z²) = E[I²X² + 2I(1 − I)XY + (1 − I)²Y²]
      = E(I²)E(X²) + 0 + E[(1 − I)²]E(Y²)
      = qE(X²) + (1 − q)E(Y²).
Solution 3.9: [NLI5, Exercise] Obtaining the MLE for the Poisson model is easy and the formula is

λ̂^{MLE}_{POI} = (1/Σ_{t=1}^T v_t) Σ_{t=1}^T N_t. (3.11)

We compare this value to the 95%-quantile of the Chi-square distribution with 9 degrees of freedom, 16.91898. Since the value of the test statistic is smaller than the critical value, we cannot reject the null hypothesis at the 5% significance level.

For the negative-binomial model, we do not have an explicit formula for the MLE and we use the function fitdistr(...) from the MASS package in R. Note that we can do this since vt have uniform values of 10000, so we can find the maximum likelihood estimator for the size parameter and divide by 100000. The test statistic for the negative binomial model is similar to the Poisson one,

χ²_{NB} = Σ_{t=1}^T (N_t − λ̂^{MLE}_{NB} v_t)² / (λ̂^{MLE}_{NB} v_t) = 1955.112. (3.13)

This is way larger than the 95%-quantile of the Chi-square distribution with 9 degrees of freedom, 16.91898. So it is clear that the Poisson model is preferred.
R-code:
Solution 3.10: [Fit6R, Exercise] Denote the range of losses by [ai, bi) for i = 1, 2, . . . , 18, since there are 18 given intervals of losses. Note that a1 = 0 and b18 = ∞. Here ni refers to the number of claims observed in the i-th range of loss. The log-likelihood is given by

ℓ(θ; ai, bi, ni) = Σ_{i=1}^{18} ni log[exp(−θ/bi) − exp(−θ/ai)],

which can only be maximised numerically. So we will use Excel to find the MLE of θ. In the sheet Parameter Estimate of Fit6R.xls(x), the derivative is computed and Goal Seek is used to find the MLE such that the derivative is zero. The parameter estimate for θ is 93.18568.
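The Goal Seek step can be mimicked outside Excel. The sketch below is ours: the grouped counts are invented for illustration (the actual 18 intervals are in Fit6R.xls(x)), and the maximiser of the grouped-data log-likelihood is found by golden-section search:

```python
import math

# Hypothetical grouped data, purely for illustration (the real 18 intervals
# are in Fit6R.xls(x)). Each cell is (lower, upper, count); a1 = 0, b_last = inf.
cells = [(0.0, 50.0, 30), (50.0, 100.0, 25), (100.0, 200.0, 25), (200.0, math.inf, 20)]

def cell_prob(theta, a, b):
    # Inverse exponential cdf F(x) = exp(-theta/x), so P(a <= X < b) = F(b) - F(a)
    Fb = 1.0 if math.isinf(b) else math.exp(-theta / b)
    Fa = 0.0 if a == 0.0 else math.exp(-theta / a)
    return Fb - Fa

def loglik(theta):
    return sum(n * math.log(cell_prob(theta, a, b)) for a, b, n in cells)

# Golden-section search for the maximiser of the log-likelihood
lo, hi = 1e-6, 1e4
g = (math.sqrt(5.0) - 1.0) / 2.0
for _ in range(200):
    m1, m2 = hi - g * (hi - lo), lo + g * (hi - lo)
    if loglik(m1) < loglik(m2):
        lo = m1
    else:
        hi = m2
theta_hat = (lo + hi) / 2.0
print(theta_hat)
```

With the real counts from the spreadsheet, the same search should land on the quoted estimate 93.18568.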
2. In the sheet Chi-sq Test of Fit6R.xls(x), we use Excel to compute the Chi-square test
statistic 16.5232 and the associated p-value 0.34825.
It is clear that the result of the chi-square test above does support the Inverse Exponential model for the given data. The two other tests that can be conducted are the Kolmogorov-Smirnov (K-S) and the Anderson-Darling (A-D) tests. Both test, similar to the chi-square
test, whether the data comes from the assumed population. The K-S and A-D tests are quite similar: both look at the difference between the empirical and model dfs, K-S in absolute value, A-D in squared difference. But the A-D statistic is a weighted average, with more emphasis on good fit in the tails than in the middle; K-S puts no such emphasis. For the K-S and A-D tests, no adjustments are made to account for an increase in the number of parameters. The result is that more complex models will often fare better on these tests. On the other hand, the χ² test adjusts the d.f. for increases in the number of parameters. All 3 tests are sensitive to sample size.
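To make the K-S comparison concrete, the statistic is simply the largest vertical distance between the empirical and model cdfs, checked on both sides of each jump. A small Python illustration with a toy sample (this helper and the data are ours, not part of the course files):

```python
import math

def ks_statistic(sample, cdf):
    # K-S distance between the empirical cdf and a model cdf: the ecdf jumps at
    # each order statistic, so the supremum is attained at one side of a jump.
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        d = max(d, abs(fx - i / n), abs(fx - (i - 1) / n))
    return d

# Toy sample with an exponential fitted by matching the mean (illustrative only)
sample = [0.1, 0.4, 0.5, 1.2, 1.9, 2.3, 3.8]
rate = 1.0 / (sum(sample) / len(sample))
d = ks_statistic(sample, lambda x: 1.0 - math.exp(-rate * x))
print(d)
```

The A-D statistic is computed similarly but with the squared difference weighted by 1/(F(x)(1 − F(x))), which is exactly what the R functions f.p, f.w and f.l below implement.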
(b) We calculate the AIC and the BIC for the gamma and exponential fits.

According to both the AIC and the BIC criteria, the exponential distribution is preferred (smallest AIC/BIC).
data<-read.table("liability.txt",header=T)
attach(data)
source("DataSummStats.R")
DataSummStats(liability)
Value
Number 6.800000e+02
Mean 7.416837e+04
5th Q 3.000000e+04
25th Q 3.500000e+04
Median 5.000000e+04
75th Q 7.500000e+04
95th Q 2.000000e+05
Variance 8.689669e+09
StdDev 9.321839e+04
Minimum 3.000000e+04
Maximum 1.200000e+06
Skewness 6.440000e+00
Kurtosis 5.720000e+01
Substituting the data yields λ̂mle = 30000 and α̂mle = 2.012135. The graphical comparisons are shown below.
source("qpareto.R")
source("dpareto.R")
source("ppareto.R")
#MLE
lambda.hat<-min(liability)
alpha.hat<-860/sum(log(liability)-log(lambda.hat))
par.hat.p<-c(alpha.hat,lambda.hat)
par.hat.p
par(mfrow=c(2,2))
#histogram
hist(liability,breaks=100,prob=T,xlab="claims",main="Histogram of Claims")
xgrid<-seq(min(liability),max(liability),length=860)
lines(xgrid,dpareto(xgrid,par.hat.p[1],par.hat.p[2]),col=2)
legend(300000,0.00002,legend=c("Pareto Model"),lty=1,col=2)
#qqplot
plot(qpareto(empirical(liability),par.hat.p[1],par.hat.p[2]),liability,
xlab="theoretical quantiles",ylab="sample quantiles",main="Q-Q plot",cex=0.45)
abline(0,1,col=2)
#ppplot
plot(ppareto(liability,par.hat.p[1],par.hat.p[2]),empirical(liability),
xlab="theoretical probability",ylab="sample probability",main="P-P plot",cex=0.45)
abline(0,1,col=2)
λ̂mle = 79791.605439 (scale parameter)
k̂mle = 1.184008 (shape parameter).
loglike<-function(x,par){
-sum(log(dweibull(x,par[1],par[2])))
}
#histogram
hist(liability,breaks=100,prob=T,xlab="claims",main="Histogram of Claims")
xgrid<-seq(min(liability),max(liability),length=860)
lines(xgrid,dweibull(xgrid,par.hat.w[1],par.hat.w[2]),col=2)
legend(300000,0.00002,legend=c("Weibull Model"),lty=1,col=2)
#qqplot
plot(qweibull(empirical(liability),par.hat.w[1],par.hat.w[2]),liability,
xlab="theoretical quantiles",ylab="sample quantiles",main="Q-Q plot",cex=0.45)
abline(0,1,col=2)
#ppplot
plot(pweibull(liability,par.hat.w[1],par.hat.w[2]),empirical(liability),
xlab="theoretical probability",ylab="sample probability",main="P-P plot",cex=0.45)
abline(0,1,col=2)
4. For the log-normal case, there are two parameters to be estimated (μ and σ²). Using maximum likelihood estimation we obtain

μ̂mle = 10.9373482
σ̂mle = 0.6262126.
loglike<-function(x,par){
-sum(log(dlnorm(x,par[1],par[2])))
}
#histogram
hist(liability,breaks=100,prob=T,xlab="claims",main="Histogram of Claims")
xgrid<-seq(min(liability),max(liability),length=860)
lines(xgrid,dlnorm(xgrid,par.hat.l[1],par.hat.l[2]),col=2)
legend(300000,0.00002,legend=c("Lognormal Model"),lty=1,col=2)
#qqplot
plot(qlnorm(empirical(liability),par.hat.l[1],par.hat.l[2]),liability,
xlab="theoretical quantiles",ylab="sample quantiles",main="Q-Q plot",cex=0.45)
abline(0,1,col=2)
#ppplot
plot(plnorm(liability,par.hat.l[1],par.hat.l[2]),empirical(liability),
xlab="theoretical probability",ylab="sample probability",main="P-P plot",cex=0.45)
abline(0,1,col=2)
5. #loglikelihood
sum(log(dpareto(liability,par.hat.p[1],par.hat.p[2])))
sum(log(dweibull(liability,par.hat.w[1],par.hat.w[2])))
sum(log(dlnorm(liability,par.hat.l[1],par.hat.l[2])))
#kolmogorov-smirnoff
max(abs(ecdf(liability)(liability)-ppareto(liability,par.hat.p[1],par.hat.p[2])))
max(abs(ecdf(liability)(liability)-pweibull(liability,par.hat.w[1],par.hat.w[2])))
max(abs(ecdf(liability)(liability)-plnorm(liability,par.hat.l[1],par.hat.l[2])))
#A-D statistics
f.p<-function(x){
860*dpareto(x,par.hat.p[1],par.hat.p[2])*(ecdf(liability)(x)
-ppareto(x,par.hat.p[1],par.hat.p[2]))^2/
(ppareto(x,par.hat.p[1],par.hat.p[2])*(1-ppareto(x,par.hat.p[1],par.hat.p[2])))
}
f.w<-function(x){
860*dweibull(x,par.hat.w[1],par.hat.w[2])*(ecdf(liability)(x)
-pweibull(x,par.hat.w[1],par.hat.w[2]))^2/
(pweibull(x,par.hat.w[1],par.hat.w[2])*(1-pweibull(x,par.hat.w[1],par.hat.w[2])))
}
f.l<-function(x){
860*dlnorm(x,par.hat.l[1],par.hat.l[2])*(ecdf(liability)(x)
-plnorm(x,par.hat.l[1],par.hat.l[2]))^2/
(plnorm(x,par.hat.l[1],par.hat.l[2])*(1-plnorm(x,par.hat.l[1],par.hat.l[2])))
}
sum(f.p(seq(30001,1200000)))
sum(f.w(seq(30001,1200000)))
sum(f.l(seq(30001,1200000)))
#Chi-square statistics
n=100
interval<-seq(30000,1200000,length=n)
e.p<-c()
for(i in 1:(n-1)){ # note: 1:n-1 would parse as (1:n)-1 in R
e.p[i]<-860*(ppareto(interval[i+1],par.hat.p[1],par.hat.p[2])
-ppareto(interval[i],par.hat.p[1],par.hat.p[2]))
}
e.w<-c()
interval<-seq(30000,1200000,length=n)
for(i in 1:(n-1)){
e.w[i]<-860*(pweibull(interval[i+1],par.hat.w[1],par.hat.w[2])
-pweibull(interval[i],par.hat.w[1],par.hat.w[2]))
}
e.l<-c()
for(i in 1:(n-1)){
e.l[i]<-860*(plnorm(interval[i+1],par.hat.l[1],par.hat.l[2])
-plnorm(interval[i],par.hat.l[1],par.hat.l[2]))
}
o<-c()
for(i in 1:(n-1)){
o[i]<-860*(ecdf(liability)(interval[i+1])-ecdf(liability)(interval[i]))
}
sum((e.p-o)^2/e.p)
sum((e.w-o)^2/e.w)
sum((e.l-o)^2/e.l)
We will evaluate the models based on the following table.
Model       Loglikelihood   K-S         A-D        χ²
Pareto -7822.041 0.1820444 205.7854 95384.95
Weibull -8284.174 0.1876902 30.54793 37484626594
Lognormal -8083.975 0.1968117 21.46748 6058129
Using the table above, we choose the Pareto distribution, as it has the highest log-likelihood and the lowest K-S and χ² test statistics. Its high value of the A-D statistic is quite baffling, however, and is therefore not taken into account.
2. With the introduction of deductibles, the expected claim size will decrease.
3. Recall that E[(Y − d)+] = P[Y > d] e(d), where d > 0 is the deductible level and e(d) is the mean excess above d. For a log-normally distributed claim,

E[(Y − d)+] = E[Y] [1 − Φ((log d − (μ + σ²))/σ)] − d [1 − Φ((log d − μ)/σ)]. (3.16)

Substituting d = 200, 500, 1000 yields E[(Y − 200)+] = 2822.893, E[(Y − 500)+] = 2620.95 and E[(Y − 1000)+] = 2372.275.
R-codes:
mean<-3000
vco<-4
sigma2<-log(vco^2+1)
sigma<-sigma2^(1/2)
mu<-log(mean)-sigma2/2
d<-200
mean*(1-pnorm((log(d)-mu-sigma2)/sigma))-d*(1-pnorm((log(d)-mu)/sigma))
d<-500
mean*(1-pnorm((log(d)-mu-sigma2)/sigma))-d*(1-pnorm((log(d)-mu)/sigma))
d<-1000
mean*(1-pnorm((log(d)-mu-sigma2)/sigma))-d*(1-pnorm((log(d)-mu)/sigma))
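The same three values can be cross-checked without R; a hedged Python sketch of equation (3.16) using only the standard library:

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mean, vco = 3000.0, 4.0
sigma2 = math.log(vco**2 + 1.0)
sigma = math.sqrt(sigma2)
mu = math.log(mean) - sigma2 / 2.0

def stop_loss(d):
    # E[(Y - d)+] for a lognormal claim Y, as in equation (3.16)
    return mean * (1.0 - Phi((math.log(d) - mu - sigma2) / sigma)) \
        - d * (1.0 - Phi((math.log(d) - mu) / sigma))

for d in (200, 500, 1000):
    print(d, round(stop_loss(d), 3))
```

The outputs agree with the values 2822.893, 2620.95 and 2372.275 quoted above.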
Also,

Var(X̄) = (1/n) Var(X1).

Using the hint, for d ≥ μ,

E[(X̄ − d)+] / E[(X1 − d)+] ≈ Var(X̄) / Var(X1) = 1/n.

Since (X̄ − d)+ = (1/n)(Σ Xi − nd)+, this leads to

E[(Σ Xi − nd)+] / E[(X1 − d)+] ≈ 1.

Hence, the stop-loss premium for a one-year contract with retention d is about as large as the one for an n-year period with retention nd.
Solution 3.18: [sur15, Exercise] Let S be the total annual claims paid by the insurer in the absence of the reinsurance, S̃ be the total annual claims paid by the insurer in the presence of the reinsurance, and SR be the total annual claims paid by the reinsurer. We have:
Solution 3.19: [sur16R, Exercise] For the solution code, refer to solution of [los40R, Exercise]
1. Below are tables for d, E[(S d)+ ], E[((S d)+ )2 ], V ar[(S d)+ ], d = 0, 1, 2, . . . , 25.
portfolio, i.e. the negative binomial case. We will expect the least stop-loss premium for the least variable portfolio, i.e. the binomial case. This is illustrated by the diagram below.
[Figure: stop-loss premiums E[(S − d)+] as a function of d, for the Poisson, negative binomial and binomial cases.]
Module 4
4.1 Approximations
Exercise 4.1: [los29, Solution] [2004 Final Exam Question] For a specific type of insurance
risk, the claim amount X is being modeled as the product of two random variables I and B as
X = I B,
where I is the claim indicator random variable and B is the random variable representing the
amount of the claim, conditional on the event that a claim occurs.
Now, assume that the probability that a claim occurs is P(I = 1) = q, or otherwise, the probability that no claim occurs is P(I = 0) = 1 − q. Furthermore, denote by μ and σ² the mean and the variance of B, respectively.
B = bZ

where b is a fixed constant and Z is a Poisson random variable with parameter λ, so that B takes possible values 0, b, 2b, 3b, and so on. Determine expressions for the mean, variance, and probability distribution of X in terms of the parameters λ, q, and b.
Risk A: XA with b = 1, λ = 1.5, q = 0.1
Risk B: XB with b = 2, λ = 1.0, q = 0.2

Given that the two types of risk are independent, use the convolution formula to calculate the probability that XA + XB is greater than 3. Do not use any approximations.
4. Consider a portfolio of 100 independent Type A risks (above). Use the Normal (Central
Limit Theorem) approximation to calculate the probability that the average claim amount
per risk is greater than 0.5.
Exercise 4.2: [NLI12, Solution][?, Exercise 12] Assume that S has a compound Poisson distribution with expected number of claims λv > 0 and claim size distribution G having finite third moment.

1. Prove that the fit of moments approximation by a translated gamma distribution for S provides the following system of equations

λv E[Y1] = k + γ/c,   λv E[Y1²] = γ/c²   and   E[Y1³] / ((λv)^{1/2} E[Y1²]^{3/2}) = 2γ^{−1/2}. (4.1)

2. Solve this system of equations for k ∈ R, γ > 0 and c > 0 (assume that G(0) = 0).
Exercise 4.3: [los31K, Solution] [?, Problem 3.6.1] Assume S is compound Poisson distributed
with parameter = 12 and uniform(0, 1) distributed claims. Approximate Pr[S < 10] with the
CLT approximation, the translated gamma approximation and the NP approximation.
Exercise 4.4: [los32K, Solution] [?, Problem 2.5.9] An insurer's portfolio contains 2000 one-
year life insurance policies. Half of them are characterised by a payment b1 = 1 and a probability
of dying within 1 year of q1 = 1%. For the other half, we have b2 = 2 and q2 = 5%. Use the CLT
to determine the minimum safety loading, as a percentage, to be added to the net premium
to ensure that the probability that the total payment exceeds the total premium income is at
most 5%.
Exercise 4.5: [los33K, Solution] [?, Problem 2.5.13] A portfolio consists of two types of con-
tracts. For type k, k = 1, 2, the claim probability is qk and the number of policies is nk . If
there is a claim, then its size is x with probability pk (x):
Assume that the contracts are independent. Let Sk denote the total claim amount of the
contracts of type k and let S = S1 + S2 . Calculate the expected value and the variance of a
contract of type k, k = 1, 2. Then, calculate the expected value and the variance of S. Use the
CLT to determine the minimum capital that covers all claims with probability 95%.
Exercise 4.6: [los34, Solution] We want to approximate the individual model S̃ by a collective model S. Show that λj = −ln(1 − qj) yields both a larger expectation and a larger variance than those of S̃. For both cases, compare Pr[Ii = j] and Pr[Ni = j], j = 0, 1, 2, . . ., in both models, as well as the cdfs of Ii and Ni.
Exercise 4.7: [los35K, Solution] [?, Problem 3.7.2] Consider a portfolio of 100 one-year life
insurance policies. 25 policies have insured amounts 1 and probability dying within this year
0.01, 25 policies have insured amounts 1 and probability dying within this year 0.02, 25 policies
have insured amounts 2 and probability dying within this year 0.01 and 25 policies have insured
amounts 2 and probability dying within this year 0.02.
Determine the expectation and the variance of the total claims S̃. Choose an appropriate compound Poisson distribution S to approximate S̃ and compare the expectations and the variances. Determine for both S and S̃ the parameters of a suitable approximating translated gamma distribution.
Exercise 4.8: [los37R, Solution][R ] Develop a function that will return and print (if the user wishes so) the following approximations of Pr[S ≤ s] as a function of the first three central moments, as well as γ1 and γ2:

Normal Power 2

translated gamma
φ^{(k)}(x) = d^k/dx^k Φ(x).

Note

φ^{(1)}(x) = φ(x) = (1/√(2π)) e^{−x²/2}
φ^{(2)}(x) = −x φ(x)
φ^{(3)}(x) = (x² − 1) φ(x)

etc.
1. For each of the three cases i = 1, 2, 3, give the true value of Pr(Si > 50) as well as its
CLT, translated gamma and normal power approximations.
Exercise 4.12: [los26, Solution] Check the results of Exercise 2.5 using Panjer's recursion algorithm.
Exercise 4.13: [los27, Solution] The individual claim amount distribution has the following
distribution:
x P (X = x)
1 0.2
2 0.2
3 0.2
4 0.4
Determine fS (s) for s = 0, 1, 2, 3, 4:
Exercise 4.14: [los28, Solution] Let p(x) be Poisson(1). Verify that computing p^{∗n}(x) using De Pril's algorithm yields the exact probabilities of a Poisson(n) random variable for x = 0, 1, 2, 3.
Exercise 4.15: [los30R, Solution][R ] Develop a function for Panjer's recursion that returns fS(x) and FS(x), as a function of a, b, fX(x) and fS(0). In addition, the function should allow for the choice (type below) between computing

whatever number of values is necessary in order to have all FS(x) < 1 − ε and the first ≥ 1 − ε (vartype below).

Finally, the function should also allow its user to print fS(x) and FS(x) for all the recursions that were performed (binary print below). The function could then begin in the following way:
[code omitted ^]
Exercise 4.16: [los38R, Solution][R ] Develop a function that will discretise a continuous distribution in m steps of length h, as a function of its pdf or cdf. One possible beginning is

discretisation <- function(densityorcdf,type,h,m){
where densityorcdf is a function and where type is binary and defines whether densityorcdf is a cdf (1) or a density (0).

The following code discretises a gamma(α = 2, β = 4) distribution:

pmf<-discretisation(function(x){16*x*exp(-4*x)},0,0.002,1500)
plot(pmf)
[Figure: plot of the discretised pmf.]
Exercise 4.17: [los39R, Solution][R ] Let S ~ compound Poisson(λ = 20, p(x) = 0.2e^{−0.2x}). We are interested in determining Pr[S ≤ 150]. Use the functions developed in the previous exercises to compute all 6 approximations mentioned in Exercise 4.8, as well as the equivalent probability using Panjer's recursion after having discretised p(x) with h = 0.01 and m = 5000. Calculate the deviation between the approximations and the probability calculated with Panjer's recursion.
Here are the outputs you should get:
Approximations for Pr[X<=150]:
NP1 NP2 EW1 EW2 EW3 TG Panjer
Pr[X<=150] 0.9430769 0.93132 0.9295227 0.9306522 0.93277 0.9325275 0.9323526
and

Si = X1 + X2 + . . . + XNi, i = 1, 2, 3

with

N1 ~ Poisson(λ1 = 4),
N2 ~ Neg Bin(r2 = 4, p2 = 0.5), and
N3 ~ Bin(n3 = 8, p3 = 0.5).
1. Using Panjer's recursion formula, prepare a table for each of the cases i = 1, 2 and 3 including:
x, fS(x), FS(x), x = 0, 1, 2, . . . , 25

E[(Si)^k], k = 1, 2, 3

E[(Si − E[Si])^k], k = 2, 3

γ1(Si)

For the moments of Si, use the distribution of Si you computed with Panjer's recursion formula (after an appropriate number of recursions).
3. Interpret your results in part 1 and 2 (compare the three assumptions, i = 1, 2 or 3).
4.3 Solutions
Solution 4.1: [los29, Exercise] This was a Year 2004 final exam question.
E(X) = E[E(X|I)] = E(X|I = 1)P(I = 1) = qE(B) = qμ

Var(X) = Var[E(X|I)] + E[Var(X|I)]
       = [E(B)]² Var(I) + q Var(B)
       = q(1 − q)[E(B)]² + q Var(B)

so that

Var(X) = q(1 − q)μ² + qσ².

E(X) = qbλ

and

Var(X) = q(1 − q)b²λ² + qb²λ.
mX(t) = E[e^{tX}]
      = E[e^{tIB}]
      = E[e^{tIB} | I = 1] P(I = 1) + E[e^{tIB} | I = 0] P(I = 0)
      = q mB(t) + 1 · (1 − q)
      = q e^{λ(e^{bt} − 1)} + (1 − q)
      = q e^{−λ} Σ_{k=0}^∞ λ^k e^{kbt} / k! + (1 − q)
      = (1 − q) + q e^{−λ} + q e^{−λ} Σ_{k=1}^∞ λ^k e^{kbt} / k!.

The distribution of X can be identified by using the 1-1 correspondence between distribution and mgf, i.e. by matching

mX(t) = E[e^{tX}] = Σ_y e^{ty} P(X = y)

with the expression above. Thus, by carefully selecting the coefficients of e^{ty}, we find

P(X = 0) = (1 − q) + q e^{−λ}

and

P(X = bx) = q e^{−λ} λ^x / x!, for x = 1, 2, 3, . . .
3. Notice that XA takes possible values 0, 1, 2, . . . while XB takes possible values 0, 2, 4, . . ., with

P(XA = x) = q e^{−1.5} (1.5)^x / x!, x = 1, 2, . . .   and   P(XB = x) = q e^{−1} (1)^{x/2} / (x/2)!, x = 2, 4, . . .

Now, doing the convolution for XA + XB, we have

x   P(XA = x)   P(XB = x)   P(XA + XB = x)   P(XA + XB ≤ x)
0   0.922313    0.873576    0.805710         0.805710
1   0.033470    0           0.029238         0.834949
2   0.025102    0.073576    0.089789         0.924737
3   0.012551    0           0.013427         0.938164

so that P(XA + XB > 3) = 1 − 0.938164 = 0.061836.

and

Var(X) = q(1 − q)μ² + qσ² = 0.3525.
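The convolution table can be reproduced directly from the pmf found in part 2; a short Python check (ours, not part of the original solution):

```python
import math

def pmf_X(q, lam, b, x):
    # pmf from part 2: P(X=0) = (1-q) + q e^{-lam}; P(X=bk) = q e^{-lam} lam^k / k!
    if x == 0:
        return (1.0 - q) + q * math.exp(-lam)
    if x % b:
        return 0.0
    k = x // b
    return q * math.exp(-lam) * lam**k / math.factorial(k)

pA = [pmf_X(0.1, 1.5, 1, x) for x in range(4)]   # risk A: b = 1, lam = 1.5, q = 0.1
pB = [pmf_X(0.2, 1.0, 2, x) for x in range(4)]   # risk B: b = 2, lam = 1.0, q = 0.2

# convolution of the two pmfs on {0, 1, 2, 3}
conv = [sum(pA[j] * pB[x - j] for j in range(x + 1)) for x in range(4)]
prob_gt_3 = 1.0 - sum(conv)
print([round(c, 6) for c in conv], round(prob_gt_3, 6))
```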
ζS = ζX = E[(S − E[S])³] / σS³ = 2γ^{−1/2},

i.e.

E[Y1³] / ((λv)^{1/2} E[Y1²]^{3/2}) = 2γ^{−1/2}. (4.5)

Solving for γ gives

γ = 4 (λv) E[Y1²]³ / E[Y1³]². (4.6)

Then

c² = γ / (λv E[Y1²]) = 4 (λv) E[Y1²]³ / E[Y1³]² · 1/(λv E[Y1²]),   so that   c = 2 E[Y1²] / E[Y1³] > 0. (4.7)

Finally, substituting the solved γ and c into the equation for the expectation gives k:

k = λv E[Y1] − 2 (λv) E[Y1²]² / E[Y1³]. (4.8)
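To confirm that (4.6)-(4.8) indeed solve the system (4.1), one can substitute back numerically. The moment values below (λv = 10 and Y1 ~ Exp(1)) are illustrative, not part of the original solution:

```python
# Illustrative inputs: lambda*v = 10 and Y1 ~ Exp(1),
# so E[Y1] = 1, E[Y1^2] = 2, E[Y1^3] = 6.
lv, m1, m2, m3 = 10.0, 1.0, 2.0, 6.0

gamma = 4.0 * lv * m2**3 / m3**2        # equation (4.6)
c = 2.0 * m2 / m3                        # equation (4.7)
k = lv * m1 - 2.0 * lv * m2**2 / m3      # equation (4.8)

# residuals of the three fitted-moment equations (4.1); all should be ~0
r1 = (k + gamma / c) - lv * m1
r2 = gamma / c**2 - lv * m2
r3 = 2.0 / gamma**0.5 - m3 / (lv**0.5 * m2**1.5)
print(k, gamma, c, r1, r2, r3)
```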
Solution 4.3: [los31K, Exercise] Note that for compound Poisson moments, it can be derived that

E[Z] = λ E[X]
Var[Z] = λ E[X²]
Skew[Z] = E[(Z − E[Z])³]/σ³ = λ E[X³]/σ³

where X is the severity distribution and λ is the frequency parameter. Plugging in these expressions, we have that

E(S) = λ · 1/2 = 12 · 1/2 = 6

and

Var(S) = λ · 1/3 = 12 · 1/3 = 4

and

ζS = λ E[X³]/σ³ = 12 · (1/4) / 4^{3/2} = 3/8.

(Note that the third moment of a U(a, b) distribution is (1/4)(a + b)(a² + b²).)

CLT:

P(S < 10) ≈ Φ((10 − 6)/2) = Φ(2) = 0.977.

Translated Gamma: Using the known expression for the skewness of a Gamma distribution (skewness = 2/√α), we have that α = 4/ζ² = 28 4/9. Further, it is also known that β = √(α/σ²) = 8/3. Finally, using the expression in the lecture slides, we have that the shift is x0 = μ − α/β = −4 2/3. Thus,

P(S < 10) ≈ G(10 + 4 2/3; 28 4/9, 8/3) = 0.968.

NP approximation: Calculating the corrected quantile using the expressions in the lecture notes, we have that

P(S < 10) ≈ Φ(√97 − 8) = 0.968.
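The CLT and NP figures can be recomputed from the moments alone; a Python sketch (ours; the NP2 formula used here is the standard inversion with skewness g):

```python
import math

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lam = 12.0
m1, m2, m3 = 1/2, 1/3, 1/4         # first three moments of U(0, 1) claims
ES, VarS = lam * m1, lam * m2       # 6 and 4
g = lam * m3 / VarS**1.5            # skewness of S, 3/8

z = (10.0 - ES) / math.sqrt(VarS)   # standardised point, z = 2
clt = Phi(z)
# NP2: P(S < s) ~ Phi( sqrt(9/g^2 + 6z/g + 1) - 3/g ), which here is Phi(sqrt(97) - 8)
np2 = Phi(math.sqrt(9.0 / g**2 + 6.0 * z / g + 1.0) - 3.0 / g)
print(round(clt, 3), round(np2, 3))
```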
Solution 4.4: [los32K, Exercise] We have

E(S) = 1000 · 1 · 0.01 + 1000 · 2 · 0.05 = 110

and

Var(S) = 1000 · 1² · 0.01 · 0.99 + 1000 · 2² · 0.05 · 0.95 = 199.9.

We know Φ(y) = 0.95 for y = 1.645. So by letting P be equal to the smallest premium income, it should satisfy

P[Z ≤ (P − 110)/√199.9] ≥ 0.95,

therefore the premium income must be P ≥ 110 + 1.645 · √199.9 = 133.258, and the loading therefore has to be at least 23.258/110 = 21.14%.
Solution 4.5: [los33K, Exercise] Let X1 be a claim of type 1; then P(X1 = 0) = 1 − q1 and P(X1 = j) = q1 p1(j) for j = 1, 3. So E(X1) = 0.02 and

E(X1²) = 1² · 0.01 · 0.5 + 3² · 0.01 · 0.5 = 0.05,

so that Var(X1) = 0.0496. Similarly, E(X2) = 0.03 and

E(X2²) = 1² · 0.02 · 0.5 + 2² · 0.02 · 0.5 = 0.05,

so that Var(X2) = 0.0491. Then, calculating the expected value and variance of S:

E(S1) = 1000 · 0.02 = 20, Var(S1) = 1000 · 0.0496 = 49.6

and

E(S2) = 2000 · 0.03 = 60, Var(S2) = 2000 · 0.0491 = 98.2

and thus

E(S1 + S2) = 20 + 60 = 80, Var(S1 + S2) = 147.8.

This capital is

E(S) + 1.645 √Var(S) = 99.999.
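A quick Python check of the moment arithmetic and the resulting capital (ours, not part of the original solution):

```python
import math

E1, Var1 = 1000 * 0.02, 1000 * 0.0496   # type 1: 20 and 49.6
E2, Var2 = 2000 * 0.03, 2000 * 0.0491   # type 2: 60 and 98.2
ES, VarS = E1 + E2, Var1 + Var2         # 80 and 147.8
capital = ES + 1.645 * math.sqrt(VarS)  # 95% capital under the CLT
print(round(capital, 3))
```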
Solution 4.6: [los34, Exercise] Let S* be based on λj = −log(1 − qj), bj > 0, for all j. Note that qj < λj since −log(1 − qj) = qj + qj²/2 + qj³/3 + . . . Since S* = Σ_{j=1}^n Nj bj, we have

E(S*) = Σ_{j=1}^n λj bj > Σ_{j=1}^n qj bj = E(S) = E(S̃)

and

Var(S*) = Σ_{j=1}^n λj bj² > Σ_{j=1}^n qj bj² = Var(S) > Var(S̃).

Moreover, e^{−λi} = 1 − qi implies

P(Ni = 0) = 1 − qi = P(Ii = 0)
P(Ni = 1) < P(Ii = 1)
P(Ni = 2, 3, . . .) > P(Ii = 2, 3, . . .) = 0.

Furthermore,
and

Var(S̃) = Σ ni qi (1 − qi) bi² = 3.6875

and

γ(S̃) = Σ ni qi (1 − qi)(1 − 2qi) bi³ / σ³ = 0.906.¹

For S: μ = 2 1/4; σ² = 3.75; γ = μ₃/(σ²)^{3/2} = 6.75/3.75^{3/2} = 0.93.

For S̃:

α = 4.871; β = 1.149; x0 = −1.988.

For S:

α = 4.630; β = 1.111; x0 = −1.917.
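The translated gamma parameters for S follow mechanically from (μ, σ², μ₃); a short Python check (ours, not part of the original solution):

```python
import math

mu, sigma2, mu3 = 2.25, 3.75, 6.75   # E(S), Var(S), third central moment of S
g = mu3 / sigma2**1.5                # skewness, about 0.9295

alpha = 4.0 / g**2                   # gamma shape, from skewness = 2/sqrt(alpha)
beta = math.sqrt(alpha / sigma2)     # from Var = alpha / beta^2
x0 = mu - alpha / beta               # shift, from mean = x0 + alpha/beta
print(round(alpha, 3), round(beta, 3), round(x0, 3))
```

Running the same three lines with the S̃ moments (3.6875 and skewness 0.906) reproduces the first parameter set.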
#standardised s
z<-(s-Mom[1])/Mom[2]^.5
#Vector of results
aprob <- c()
# Normal Power #
################
# Edgeworth #
#############
#Edgeworth 1
aprob <- c(aprob,pnorm(z)-Mom[4]/6*eval(dphi[3],list(x=z)))
#Edgeworth 2
aprob <- c(aprob,as.double(aprob[3])+Mom[5]/24*eval(dphi[4],list(x=z)))
#Edgeworth 3
aprob <- c(aprob,as.double(aprob[4])+Mom[4]^2/72*eval(dphi[6],list(x=z)))
# Translated Gamma #
####################
#parameters
tgbeta <- 2*Mom[2]/Mom[3]
tgalpha <- tgbeta^2*Mom[2]
x0 <- Mom[1]-tgalpha/tgbeta
#approximation
¹ Correction to the skewness formula made on 21-9-2004.
#print results
if(print==1){
cat("Approximations for Pr[X",as.character(s),"]\n",sep="")
print(approximations)
} # end if
Solution 4.9: [los41R, Exercise] For the solution code, refer to [los40R, Exercise]
2. Recall that we have computed the skewness of the distribution of S under the different
assumptions.
We can see that under the negative binomial and Poisson assumptions for N, the coefficients of skewness of S take values of 1.15 and 0.419. When approximating S, the translated gamma method produces the most accurate results, as the method uses a gamma distribution, which is itself positively skewed, to approximate S. The normal power method performs quite well, while the CLT method performs very poorly.

Under the binomial assumption for N, S has a coefficient of skewness of 0.419. In this case, the normal power method performs best, as it uses the CLT idea, which preserves a certain degree of symmetry while at the same time allowing for a certain degree of skewness.
f(0) = e^{−2}
f(1) = .2 f(0) = .2 e^{−2}
f(2) = .42 f(0) = .42 e^{−2}

and so on.

Note that f(0) will be a factor in all the f(x), x > 0. Thus, multiplying all the elements by f(0) only at the end will be more efficient and will also prevent any rounding error (in the evaluation of e^{−2}) from spreading and amplifying for large x.
Solution 4.12: [los26, Exercise] Note that the i-th component is compound Poisson with parameters (λi = i, pi(i) = 1). It follows from Theorem 12.4.1 that S is compound Poisson with parameters λ = 1 + 2 + 3 = 6 and

p(x) = 1/6 for x = 1,  2/6 for x = 2,  3/6 for x = 3,  and 0 elsewhere.
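The combination result can be sanity-checked through moment generating functions, since the mgf of the sum of the independent compound Poissons must equal that of the single combined compound Poisson. A Python sketch (ours, not part of the original solution):

```python
import math

lams = [1.0, 2.0, 3.0]            # component i is compound Poisson(lam_i = i), claims == i
lam = sum(lams)                    # 6
p = {1: 1/6, 2: 2/6, 3: 3/6}       # claim size pmf of the combined model

lhs_vals, rhs_vals = [], []
for t in (0.1, 0.3, 0.5):
    # mgf of the sum of the three independent compound Poissons
    lhs_vals.append(math.exp(sum(li * (math.exp(t * i) - 1.0)
                                 for i, li in enumerate(lams, start=1))))
    # mgf of the single compound Poisson(6, p)
    rhs_vals.append(math.exp(lam * (sum(px * math.exp(t * x)
                                        for x, px in p.items()) - 1.0)))
print(lhs_vals, rhs_vals)
```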
fS (0) = P (N = 0) = e1
and
f_S(x) = \frac{1}{x}\sum_{h=1}^{x}(ax + bh)\,p(h)\,f_S(x-h)
       = \frac{\lambda}{x}\sum_{h=1}^{x} h\,p(h)\,f_S(x-h)
       = \frac{1}{x}\left[0.2 f_S(x-1) + 0.4 f_S(x-2) + 0.6 f_S(x-3) + 1.6 f_S(x-4)\right],
where in the Poisson case, a = 0, b = \lambda. Therefore:
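The coefficients 0.2, 0.4, 0.6, 1.6 are λh p(h), which pins down λ = 1 and severity p(1) = p(2) = p(3) = 0.2, p(4) = 0.4. The recursion can be checked against a brute-force convolution:

```python
import math

# Panjer recursion (a = 0, b = lam) vs. brute force for compound Poisson.
lam = 1.0
p = {1: 0.2, 2: 0.2, 3: 0.2, 4: 0.4}
n = 20

f = [math.exp(-lam)]                      # f_S(0) = e^{-lam}
for x in range(1, n + 1):
    f.append(sum(lam * h * p.get(h, 0) * f[x - h]
                 for h in range(1, x + 1)) / x)

# Brute force: f_S(x) = sum_k P(N=k) p^{*k}(x)
conv = [1.0] + [0.0] * n                  # p^{*0}: unit mass at 0
brute = [math.exp(-lam) * c for c in conv]
for k in range(1, 40):
    conv = [sum(p.get(h, 0) * conv[x - h] for h in range(1, x + 1))
            for x in range(n + 1)]
    w = math.exp(-lam) * lam ** k / math.factorial(k)
    brute = [b + w * c for b, c in zip(brute, conv)]

assert all(abs(a - b) < 1e-12 for a, b in zip(f, brute))
```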
Since Pr[X = 0] = 0,
f_S(0) = Pr[N = 0] = p^r = 0.04.
We have then
f_S(s) = 0.8 \sum_{j=1}^{\min(4,s)} \frac{s+j}{s}\, p(j)\, f_S(s-j), \qquad s = 1, 2, \ldots,
and thus
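This is Panjer's recursion for N ∼ NegBin(r, p) with a = 1 − p and b = (r − 1)(1 − p); the stated f_S(0) = p^r = 0.04 is consistent with r = 2, p = 0.2, which gives a = b = 0.8 and a + bj/s = 0.8(s + j)/s as displayed. A check against brute force, with a hypothetical severity q on {1, 2, 3, 4} chosen only to exercise the recursion:

```python
import math

r, pr = 2, 0.2
q = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}     # hypothetical severity pmf, q(0)=0
n = 15

f = [pr ** r]                            # f_S(0) = p^r = 0.04
for s in range(1, n + 1):
    f.append(0.8 * sum((s + j) / s * q.get(j, 0) * f[s - j]
                       for j in range(1, min(4, s) + 1)))

# Brute force via the negative binomial mixture f_S(s) = sum_k P(N=k) q^{*k}(s)
conv = [1.0] + [0.0] * n
brute = [0.0] * (n + 1)
for k in range(0, 100):
    w = math.comb(k + r - 1, k) * pr ** r * (1 - pr) ** k
    brute = [bx + w * cx for bx, cx in zip(brute, conv)]
    conv = [sum(q.get(j, 0) * conv[x - j] for j in range(1, x + 1))
            for x in range(n + 1)]

assert all(abs(x - y) < 1e-10 for x, y in zip(f, brute))
```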
We have then
p^{*n}(0) = (p(0))^n = e^{-n}
p^{*n}(1) = n\,p(1)\,p(0)^{n-1} = n e^{-n}
p^{*n}(2) = e^{-n}\left(\frac{n}{2} + \binom{n}{2}\right) = \frac{n^2}{2!}\, e^{-n}
p^{*n}(3) = e^{-n}\left(\frac{n}{3!} + \frac{n(n-1)}{2} + \binom{n}{3}\right) = \frac{n^3}{3!}\, e^{-n}.
# printing results
if(print==1) {
results <- data.frame(x=0:(i-1),fS=pmf,FS=df)
print(results)
} # end if
#returning results
array(c(pmf,df),c(i,2))
}
Note that we need to return an array now in order to be able to refer to the results as object[i,j]
with i for x and j for fS (j = 1) or FS (j = 2).
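The discretisation routine that follows implements the mass-dispersal (rounding) method: given a cdf F and span h, put mass F(h/2) at 0, F(kh + h/2) − F(kh − h/2) at kh for k = 1, …, m − 1, and the remaining tail mass at mh. A minimal Python sketch of the same idea, using an Exp(1) cdf purely as an illustrative choice:

```python
import math

def discretise(cdf, h, m):
    # mass-dispersal discretisation of a distribution on [0, infinity)
    pmf = [cdf(h / 2)]
    pmf += [cdf(k * h + h / 2) - cdf(k * h - h / 2) for k in range(1, m)]
    pmf.append(1 - cdf((m - 0.5) * h))    # tail mass lumped at m*h
    return pmf

F = lambda x: 1 - math.exp(-x)            # Exp(1) cdf (arbitrary example)
pmf = discretise(F, h=0.1, m=100)
assert abs(sum(pmf) - 1) < 1e-12          # the masses always sum to one
```

The discretised mean stays within h/2 of the true mean, which is why a small span h is preferred when the result feeds Panjer's recursion.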
discretisation <- function(densityorcdf,type,h,m){
if(type==1) { #cdf supplied
pmf <- c(densityorcdf(h/2))
for(i in 1:(m-1)) {
pmf <- c(pmf,densityorcdf(h*i+h/2)-densityorcdf(h*i-h/2))
}
pmf <- c(pmf,1-densityorcdf((m-.5)*h))
pmf
} else { #density supplied
pmf <- c(as.double(integrate(densityorcdf,0,h/2)[1]))
for(i in 1:(m-1)) {
beta <-.2
lambda <- 20
s<-150
###########################################
# Calculate the approximate probabilities #
##########################################
# Calculate the "Panjer" probability via #
# discretisation and Panjer #
#################
# print results #
The plot of f_S(x) shows that the distribution of S tends to the normal when \lambda is large, but the distribution is still slightly skewed (\gamma_1 = 0.474), which is why the CLT approximation performs badly.
Solution 4.18: [los40R, Exercise] This is the R-code for this question.
#Assignment 2008, Question 3
################################
# Assumptions and parameters #
################################
#Distribution of losses
p <- c(.2,.1,.05,.1,.05,.1,.05,.1,.05,.1,.1)
########
#this will be the array of the moments of X, S with Poisson, S with NB, S with B
Moments <- array (dim=c(6,4))
rownames(Moments) <- c("E[.]","E[.^2]","E[.^3]","Var(.)","E[(.-E[.])^3]","gamma(.)")
colnames(Moments) <- c("X "," S if N Poisson"," S if N Neg Bin"," S if N Binomial")
# Moments of X #
####################
# Initialization of parameters
P_lambda <- 4
NB_r <- 4
NB_p <- 0.5
B_n <- 8
B_p <- 0.5
#End of calculations
end <- 300
##############################################
# Distribution of S and moments of (S-d)+ #
##############################################
# Preliminaries #
#####################
#Masses at 0 in Fs
Output[1,1,3] <- Output[1,1,2]
Output[2,1,3] <- Output[2,1,2]
Output[3,1,3] <- Output[3,1,2]
Output[2,1,5] = NB_r*(1-NB_p)/NB_p*Moments[1,1]
Output[2,1,7] = NB_r*(1-NB_p)/NB_p^2*Moments[1,1]^2 + NB_r*(1-NB_p)/NB_p*Moments[4,1]
Output[2,1,6] = Output[2,1,7] + Output[2,1,5]^2
Output[3,1,5] = B_n*B_p*Moments[1,1]
Output[3,1,7] = B_n*B_p*(1-B_p)*Moments[1,1]^2+B_n*B_p*Moments[4,1]
Output[3,1,6] = Output[3,1,7] + Output[3,1,5]^2
# a-b parameters
ab <- array(dim=c(3,2))
ab[1,1] <- 0
ab[1,2] <- P_lambda
ab[2,1] <- 1-NB_p
ab[2,2] <- (NB_r-1)*(1-NB_p)
ab[3,1] <- -B_p/(1-B_p)
ab[3,2] <- (B_n+1)*B_p/(1-B_p)
#pdf and cdf and SL premium moments for the three cases
for (i in 1:3)
{
Output[i,1,1] <- 0 # column x
Output[i,1,4] <- 0 # column d
for (s in 1:end)
{
sum[i] <- 0
for (h in 1:min(s,length(p)-1))
{
sum[i] <- sum[i] + (ab[i,1]+ab[i,2]*h/s)*p[h+1]*Output[i,s-h+1,2]
}
Output[i,s+1,1] <- s # label column x
Output[i,s+1,2] <- 1/(1-ab[i,1]*p[1])*sum[i] #fs
Output[i,s+1,3] <- Output[i,s,3] + Output[i,s+1,2] # Fs
Output[i,s+1,4] <- s # label column d
Output[i,s+1,5] <- Output[i,s,5] - 1 + Output[i,s,3] # E[(S-d)+]
Output[i,s+1,6] <- Output[i,s,6] - 2*Output[i,s,5]+1-Output[i,s,3]
Output[i,s+1,7] <- Output[i,s+1,6] - Output[i,s+1,5]^2
}
}
####################
# Moments of S #
####################
# Remember: #
###############
##this will be the array of the moments of X, S with Poisson, S with NB, S with B
#Moments <- array (dim=c(6,4))
#rownames(Moments) <- c("E[.]","E[.^2]","E[.^3]","Var(.)","E[(.-E[.])^3]","gamma(.)")
#colnames(Moments) <- c("X "," S if N Poisson"," S if N Neg Bin"," S if N Binomial")
for (i in 2:4) {
Moments[1,i] <- sum(c(0:end)*c(Output[i-1,1:(end+1),2]))
Moments[2,i] <- sum(c(0:end)^2*c(Output[i-1,1:(end+1),2]))
Moments[3,i] <- sum(c(0:end)^3*c(Output[i-1,1:(end+1),2]))
Moments[4,i] <- sum((c(0:end)-Moments[1,i])^2*c(Output[i-1,1:(end+1),2]))
Moments[5,i] <- sum((c(0:end)-Moments[1,i])^3*c(Output[i-1,1:(end+1),2]))
Moments[6,i] <- Moments[5,i]/Moments[4,i]^(1.5)
}
################################
# Probability approximations #
################################
for (i in 1:3) {
Proba[i,1] <- 1-Output[i,ss+1,3]
Proba[i,2] <- 1-pnorm(ss,Moments[1,i+1],Moments[4,i+1]^.5)
Proba[i,3] <- 1-pgamma(ss-Moments[1,i+1]+2*Moments[4,i+1]^.5/Moments[6,i+1] ,
4/Moments[6,i+1]^2 , 2/Moments[6,i+1]/Moments[4,i+1]^.5)
Proba[i,4] <- 1-pnorm((9/Moments[6,i+1]^2+6*(ss-Moments[1,i+1])/Moments[4,i+1]^.5/Moments[6,i+1]+1)
^.5-3/Moments[6,i+1])
}
###################
# Print results #
###################
#End of table
prints <- 80
for (i in 1:7) {
for (j in 1:(prints+1)) {
Poisson[j,i] <- Output[1,j,i]
NegBin[j,i] <- Output[2,j,i]
Bin[j,i] <- Output[3,j,i]
}
}
print(Bin,print.gap=3)
# Moments #
###########
options(scipen=2)
# Probabilities #
#################
# plots #
#########
par(mfrow=c(2,2))
plot(Poisson[,1],Poisson[,2],main="f(x) of S if N Poisson",xlab="",ylab="",col="1")
plot(NegBin[,1],NegBin[,2],main="f(x) of S if N NegBin",xlab="",ylab="",col="2")
plot(Bin[,1],Bin[,2],main="f(x) of S if N Bin",xlab="",ylab="",col="3")
plot(Poisson[,1],Poisson[,3],main="F(x) of S",xlab="",ylab="",col="1",type="l")
lines(NegBin[,1],NegBin[,3],col="2")
lines(Bin[,1],Bin[,3],col="3")
plot(Poisson[,4],Poisson[,5],type="l",col="1",main="Stop-loss Premium",xlab="d",ylab="E[(S-d)+]",)
lines(NegBin[,4],NegBin[,5],col="2")
lines(Bin[,4],Bin[,5],col="3")
legend(60,10,legend=c("Poi","NegBin","Bin"),lty=1,col=1:3)
print(c("Distribution of S if N is Poisson"))
[1] "Distribution of S if N is Poisson"
print(Poisson[,1:3],print.gap=3)
x fs(x) Fs(x)
0 0.040762203978 0.04076220
1 0.016304881591 0.05706709
2 0.011413417114 0.06848050
3 0.020000654752 0.08848116
4 0.016185312460 0.10466647
5 0.024547760130 0.12921423
6 0.021824859398 0.15103909
7 0.030071258549 0.18111035
8 0.028466826771 0.20957717
9 0.036711555213 0.24628873
10 0.044414433611 0.29070316
11 0.031582034174 0.32228520
12 0.032987043085 0.35527224
13 0.033859163507 0.38913140
14 0.034284589883 0.42341599
15 0.035304798458 0.45872079
16 0.034700813781 0.49342161
17 0.035645183751 0.52906679
18 0.033930905728 0.56299770
19 0.034555188644 0.59755288
20 0.032430919914 0.62998380
21 0.028716574671 0.65870038
22 0.027888692422 0.68658907
23 0.026623935971 0.71321301
24 0.025483357983 0.73869637
25 0.024071665116 0.76276803
7 0.0329761903 0.25599482
8 0.0281639543 0.28415877
9 0.0384485051 0.32260728
10 0.0452089616 0.36781624
11 0.0266490320 0.39446527
12 0.0275746285 0.42203990
13 0.0278652951 0.44990519
14 0.0279380518 0.47784325
15 0.0284733228 0.50631657
16 0.0276653689 0.53398194
17 0.0282848455 0.56226678
18 0.0265446534 0.58881144
19 0.0270714614 0.61588290
20 0.0250558092 0.64093871
21 0.0218626373 0.66280134
22 0.0213481731 0.68414952
23 0.0205224539 0.70467197
24 0.0198092739 0.72448125
25 0.0189021953 0.74338344
print(c("Distribution of S if N is Binomial"))
[1] "Distribution of S if N is Binomial"
print(Bin[,1:3],print.gap=3)
x fs(x) Fs(x)
0 1.679616e-02 0.01679616
1 1.119744e-02 0.02799360
2 8.864640e-03 0.03685824
3 1.500768e-02 0.05186592
4 1.382022e-02 0.06568614
5 1.988766e-02 0.08557380
6 1.986838e-02 0.10544218
7 2.599662e-02 0.13143879
8 2.717911e-02 0.15861790
9 3.350957e-02 0.19212747
10 4.153728e-02 0.23366474
11 3.468674e-02 0.26835149
12 3.666887e-02 0.30502036
13 3.836575e-02 0.34338611
14 3.931281e-02 0.38269892
15 4.096584e-02 0.42366476
16 4.079371e-02 0.46445847
17 4.210595e-02 0.50656442
18 4.069388e-02 0.54725830
19 4.134746e-02 0.58860576
20 3.935274e-02 0.62795850
21 3.533203e-02 0.66329053
22 3.406891e-02 0.69735943
23 3.222769e-02 0.72958712
24 3.050720e-02 0.76009432
25 2.843743e-02 0.78853176
options(scipen=2)
print(c("Moments of the 4 distributions"))
[1] "Moments of the 4 distributions"
print(Moments,digits=3)
X S if N Poisson S if N Neg Bin S if N Binomial
E[.] 4.50 18.000 18.00 18.000
E[.^2] 32.50 454.000 535.00 413.500
E[.^3] 262.50 13902.000 20760.00 11019.750
Var(.) 12.25 130.000 211.00 89.500
E[(.-E[.])^3] 6.00 1050.000 3534.00 354.750
gamma(.) 0.14 0.708 1.15 0.419
N_1 \sim Poisson(\lambda_1 = 4),
N_2 \sim Neg Bin(r_2 = 4, p_2 = 0.5), and
N_3 \sim Bin(n_3 = 8, p_3 = 0.5).
The random variables N1 , N2 and N3 all have the expected value of 4. But their variances
are different. N1 has variance of 4, N2 has variance of 8 and N3 has variance of 2. Recall
that N in the collective risk model is the random variable that represents the number of
claims in a portfolio. The variability of this random variable will play a vital role in the
distribution of S. We will illustrate this using the following graphs.
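The variances quoted above propagate to S through the compound-variance formula Var(S) = E[N]Var(X) + Var(N)E[X]², using E[X] = 4.5 and Var(X) = 12.25 from the moments table:

```python
# Variance of the compound sum S under the three assumptions on N.
EX, VarX = 4.5, 12.25
cases = {
    "Poisson(4)":       (4.0, 4.0),   # (E[N], Var(N))
    "NegBin(4, 0.5)":   (4.0, 8.0),
    "Binomial(8, 0.5)": (4.0, 2.0),
}
for name, (EN, VarN) in cases.items():
    print(name, EN * VarX + VarN * EX ** 2)
```

The three results, 130, 211 and 89.5, reproduce the Var(.) row of the moments table and explain the ordering of the tails discussed below.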
[Figure: probability mass functions of S under the Binomial, Poisson and Negative Binomial assumptions on N (first three panels), and the three distribution functions overlaid (fourth panel).]
The first three graphs show the distribution of S under each assumption on N_i, i = 1, 2, 3. Notice that under the negative binomial assumption S has a heavy tail; under the Poisson assumption the tail is lighter, and under the binomial assumption it is lightest. This is caused by the different variances of the random variables N_i: under the negative binomial assumption N has the largest variance, which induces a large variability in the entire portfolio and is reflected in the heavy tail.
This idea is reinforced by the fourth graph, the cumulative distribution function under each assumption on N_i, i = 1, 2, 3. We notice that under the negative binomial assumption (red line) more probability mass is assigned to large values of x, whereas under the binomial assumption more mass is assigned to small values of x.
Module 5
Determine the smallest relative security loading the insurer can choose so that it is certain
that ruin does not occur at time 1.
where p1 = E(X).
and
lim R
Exercise 5.6: [sur7, Solution] Suppose that claims form a compound Poisson process, with
\lambda = 1 and p(x) = 1 for 0 < x < 1.
Premiums are received continuously at the rate of c = 1.
Find the adjustment coefficient if proportional reinsurance is purchased with a retention \alpha = 0.5 and with reinsurance loading equal to 100%.
Exercise 5.7: [sur8, Solution] An insurance company has aggregate claims that have a compound Poisson distribution with:
\lambda = 2;
p(x) = 1/2, 0 \le x \le 2; and
premium collection rate 6.
The insurer buys proportional reinsurance that reimburses 20% of each individual claim.
The adjustment coefficient with reinsurance is 1.75. Determine the reinsurance loading.
1. Find R if c = 3.
2. Plot R for \theta = i/100, i = 1, 2, \ldots, 100.
Exercise 5.9: [sur10R, Solution][R ] Consider Exercise 5.6 above. Assume now that \theta = 0.60 (where this is the loading of the insurer). Find the optimal \alpha that will minimise the probability of ruin and plot R for \alpha = 0.5 + i/100, i = 1, 2, \ldots, 49.
Exercise 5.10: [sur11, Solution] Consider a Brownian motion process with drift \mu and volatility \sigma^2. Using the moment generating function, show that the Brownian motion can be approximated by a shifted Poisson process \{\tau N(t) - ct, t \ge 0\}.
[Hint: you need to determine \lambda(\mu, \sigma, \tau) and c(\mu, \sigma, \tau).]
Exercise 5.13: [cop2, Solution] Let Y_1 and Y_2 be two (underlying) lifetimes with distribution functions H_1 and H_2. Suppose there exists an independent rv Z \sim Exp(\lambda) that represents the time until a common disaster. Assume both lives are subject to the same disaster, so that the age-at-death random variables X_1 and X_2 are given by
X_1 = \min(Y_1, Z) and X_2 = \min(Y_2, Z).
Additionally assume that Y_1 \sim Exp(\lambda_1) and Y_2 \sim Exp(\lambda_2).
Exercise 5.14: [cop3, Solution] Let (X_1, X_2) be a bivariate random vector with distribution function described by a copula as
P(X_1 \le x_1, X_2 \le x_2) = C(P(X_1 \le x_1), P(X_2 \le x_2)).
Exercise 5.15: [cop4, Solution] Consider the case of the bivariate Normal copula written as
C(u_1, u_2) = \Phi_\rho\left(\Phi^{-1}(u_1), \Phi^{-1}(u_2)\right),
where \Phi_\rho denotes the joint distribution function of a bivariate standard Normal, as given by
\Phi_\rho(x_1, x_2) = \int_{-\infty}^{x_2} \int_{-\infty}^{x_1} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left(-\frac{s^2 + t^2 - 2\rho st}{2(1-\rho^2)}\right) ds\, dt.
1. Prove that we can re-write the bivariate Normal copula in the equivalent form
C(u_1, u_2) = \int_0^{u_1} \Phi\left(\frac{\Phi^{-1}(u_2) - \rho\,\Phi^{-1}(z)}{\sqrt{1-\rho^2}}\right) dz.
2. Prove that the bivariate Normal copula generates the bivariate Normal distribution if and only if the marginals are standard Normal.
3. Find an expression for the copula density
c(u_1, u_2) = \frac{\partial^2 C(u_1, u_2)}{\partial u_1 \partial u_2}
and explain the significance of the copula density.
Exercise 5.16: [cop5R, Solution][R ] The loss and ALAE recorded for each of 24 claims are
provided in the table below:
[Table: the 24 recorded (Loss, ALAE) pairs.]
Assume that loss has a Pareto distribution with density of the form
f_{Loss}(x) = \frac{\alpha\lambda^\alpha}{(x + \lambda)^{\alpha+1}} for x > 0
and that ALAE has a Gamma distribution with density of the form
f_{ALAE}(x) = \frac{1}{\Gamma(\alpha)\,\beta^\alpha}\, x^{\alpha-1} e^{-x/\beta} for x > 0.
1. Write an R code to estimate the parameters, together with their standard errors, assuming loss and ALAE are independent random variables.
2. Re-write your R code to estimate the parameters, together again with their associated standard errors, assuming this time that loss and ALAE follow a Frank copula of the form
C(u_1, u_2) = -\frac{1}{\alpha} \log\left(1 + \frac{(e^{-\alpha u_1} - 1)(e^{-\alpha u_2} - 1)}{e^{-\alpha} - 1}\right) for \alpha \ne 0.
3. Based on the results of your model estimation, is there evidence to support that loss and
ALAE are not independent?
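A fact worth keeping in mind for part 3: as α → 0 the Frank copula tends to the independence copula u₁u₂, so a fitted α near zero is consistent with independence. A quick numerical check of the formula above:

```python
import math

def frank(u1, u2, alpha):
    # Frank copula, alpha != 0
    num = (math.exp(-alpha * u1) - 1) * (math.exp(-alpha * u2) - 1)
    return -1 / alpha * math.log(1 + num / (math.exp(-alpha) - 1))

u1, u2 = 0.3, 0.7
# near-zero alpha reproduces independence
assert abs(frank(u1, u2, 1e-6) - u1 * u2) < 1e-5
# Frechet bounds hold for a genuinely dependent case
c = frank(u1, u2, 5.0)
assert max(u1 + u2 - 1, 0) <= c <= min(u1, u2)
```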
Set v = 2t/(b + a).
In addition, the following algorithm generates random variates (u, v) from a Plackett distribution with parameter \theta:
Set v = [c - (1 - 2t)d]/(2b).
It can also be shown that Spearman's \rho for a member of the Plackett family of copulas with parameter \theta > 0 is given by
\rho_S = \frac{\theta + 1}{\theta - 1} - \frac{2\theta \ln \theta}{(\theta - 1)^2}.
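This formula is easy to evaluate; checking it at the parameter values used later in the solution (0.36 and 2.7, and the ±0.8 cases 0.0412 and 24.26):

```python
import math

def rho_plackett(theta):
    # Spearman's rho for the Plackett copula, theta > 0, theta != 1
    return (theta + 1) / (theta - 1) - 2 * theta * math.log(theta) / (theta - 1) ** 2

assert abs(rho_plackett(0.36) - (-0.329)) < 1e-3
assert abs(rho_plackett(2.70) - 0.321) < 1e-3
assert abs(rho_plackett(0.0412) - (-0.8)) < 1e-3
assert abs(rho_plackett(24.26) - 0.8) < 1e-3
assert abs(rho_plackett(1.001)) < 1e-3      # theta near 1: independence
```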
Generate 1000 random variates (u, v) from a Plackett distribution with parameter = 1. Use
R to do this, carefully documenting your code. Calculate the corresponding pair of random
variables (x, y) for a bivariate Pareto distribution; that is, with two Pareto marginals FX (x)
83
ACTL3162 General Insurance Techniques Exercises
and F_Y(y), each with parameters (2, 200). Repeat this exercise for parameters \theta = 0.36 and \theta = 2.7. Plot your six outcomes on one page (using R), with the first column containing the (u, v) results, and the second column containing the (x, y) results. The rows should contain ascending values of \theta.
Repeat the task outlined in the last paragraph, except now for random variates (u, v) from a Farlie-Gumbel-Morgenstern distribution; the parameter value used should reflect an equivalent value of Spearman's \rho to the value of \theta in the previous paragraph.
Comment on your results.
5.5 Solutions
Solution 5.1: [sur2, Exercise] Note that this surplus process is not a Cramér-Lundberg process since the number of claims is not random.
The surplus process can still be expressed as C(t) = c_0 + ct - S(t), where c_0 = u = 1 and the premium rate c = (1 + \theta)E(X) = 4.2(1 + \theta). Thus,
Solution 5.2: [sur3, Exercise] Recall that the adjustment coefficient R is the positive solution to:
1 + (1 + \theta)p_1 R = m_X(R).
If we multiply both sides by \lambda, we have
\lambda + (1 + \theta)\lambda p_1 R = \lambda m_X(R)
\lambda + cR = \lambda m_X(R).
1 + (1 + \theta)E(X)R = E\left[e^{RX}\right]
so that
\int_0^\infty \left[e^{Rx} - 1 - (1 + \theta)Rx\right] dP(x) = 0.
This implies
\int_0^\infty \int_0^x \left[R e^{Ry} - (1 + \theta)R\right] dy\, dP(x) = \int_0^\infty \left[R e^{Ry} - (1 + \theta)R\right] \int_y^\infty dP(x)\, dy
= \int_0^\infty (1 - P(y)) \left[R e^{Ry} - (1 + \theta)R\right] dy
= 0.
E\left[e^{R(S(1) - c)}\right] = 1, \quad\text{i.e.}\quad c = \frac{1}{R}\log m_{S(1)}(R),
which confirms (4).
Solution 5.4: [sur5A, Exercise] Think of the graph of 1 + (1 + \theta)p_1 r against m_X(r) seen in lectures.
As \theta \to 0, the slope of the line tends to p_1, which is exactly m'_X(0). There is then only one root, R = 0, and thus \psi(u) = 1, u \ge 0.
The case \theta \to \infty is less clear. As \theta \to \infty, the intersection of 1 + (1 + \theta)p_1 r with m_X(r) will be at the limit of m_X(r) as r increases (remember that m'_X(r) > 0 for r > 0). But the mgf of X is not always defined for all r \in [0, \infty). For instance, if X is exponential(\beta), then R \to \beta as \theta \to \infty. On the other hand, if X is inverse Gaussian, there will be no other root than the trivial 0 as \theta \to \infty (see A, example 13.4.3.e on page 412).
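The exponential case can be checked numerically: for X ∼ Exp(β) the adjustment coefficient has the closed form R = θβ/(1 + θ), which indeed approaches β as θ grows. A bisection sketch with β = 1:

```python
def adjustment_coefficient(theta, beta=1.0):
    # Solve 1 + (1+theta)*(1/beta)*R = beta/(beta - R) for the positive root R.
    g = lambda r: 1 + (1 + theta) * r / beta - beta / (beta - r)
    lo, hi = 1e-12, beta - 1e-12        # g > 0 near 0+, g < 0 near beta-
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

for theta in (0.5, 1.0, 10.0, 100.0):
    assert abs(adjustment_coefficient(theta) - theta / (1 + theta)) < 1e-9
```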
Solution 5.5: [sur6A, Exercise] Note that the expression on the right-hand side of (1) is the beginning of the Taylor expansion of the exponential on the left-hand side. Now remember that m_X(r) = E\left[e^{rX}\right], and thus
1 + (1 + \theta)p_1 R = E\left[e^{RX}\right] > 1 + Rp_1 + \frac{1}{2}R^2 p_2.
Subtracting 1 from both sides, dividing by R and rearranging yields the desired expression.
Solution 5.7: [sur8, Exercise] Since reinsurance is proportional, h(x) = (1 - \alpha)x = 0.2x and the reinsurance premium is
\pi_h = (1 + \xi)\lambda \int_0^2 h(x)\,p(x)\, dx = 0.4(1 + \xi),
Solution 5.8: [sur9R, Exercise] We solve for the positive R which is the solution to \lambda + cR = \lambda m_X(R). The m.g.f. is given by
m_X(t) = \frac{1}{4}\left(1 + e^t + e^{2t} + e^{4t}\right)
so that, with \lambda = 1 and c = (1 + \theta)p_1 = (1 + \theta)\frac{7}{4}, the equation to solve is
1 + (1 + \theta)\frac{7}{4}R - \frac{1}{4}\left(1 + e^R + e^{2R} + e^{4R}\right) = 0.
[Plot of R against theta.]
# create a function for finding the root of eqR for given theta
fR <- function(x){
uniroot(eqR,lower=0.001,upper=2,theta=x)$root
}
# part a
R <- fR(5/7)
print(R)
# part b
plot <- c()
for(i in 1:100){plot <- c(plot,fR(i/100))} # the y values
plot(1:100/100,plot,xlab="theta",ylab="R")
[Plot of R against alpha.]
# create a function for finding the root of eqR for given theta
fR <- function(x){
uniroot(eqR,lower=0.001,upper=100,alpha=x)$root
}
#plotting
plot <- c()
for(i in 1:49) plot <- c(plot,fR(.5+i/100))
plot(51:99/100,plot,xlab="alpha",ylab="R")
Solution 5.10: [sur11, Exercise] Consider the shifted Poisson process \{\tau N(t) - ct, t \ge 0\}; we will match the mean and variance of the shifted Poisson process with those of the Brownian motion (over 1 unit of time), i.e.
\tau\lambda - c = \mu \quad\text{and}\quad \tau^2\lambda = \sigma^2. (5.1)
Solving for \lambda and c yields
\lambda = \frac{\sigma^2}{\tau^2} \quad\text{and}\quad c = \frac{\sigma^2}{\tau} - \mu. (5.2)
Next we obtain the moment generating function of the process \{\tau N(t) - ct\}:
E\left[e^{k(\tau N(t) - ct)}\right] = e^{-tck}\, E\left[e^{k\tau N(t)}\right] = e^{-tck}\, e^{\lambda t (e^{k\tau} - 1)}. (5.3)
Expanding,
e^{k\tau} - 1 = k\tau + \frac{(k\tau)^2}{2!} + \frac{(k\tau)^3}{3!} + \cdots. (5.4)
If we substitute the expansion back into (5.3) and replace \lambda and c using (5.2), we obtain
E\left[e^{k(\tau N(t) - ct)}\right] = \exp\left\{-tck + \frac{\sigma^2}{\tau^2}\, t \left(k\tau + \frac{(k\tau)^2}{2!} + \frac{(k\tau)^3}{3!} + \cdots\right)\right\}
= \exp\left\{\mu t k + \frac{\sigma^2}{2}\, t k^2 + o(\tau)\right\}, (5.5)
which, as \tau \to 0, is the moment generating function of a Brownian motion process with drift \mu and volatility \sigma^2.
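The convergence in (5.5) is easy to see numerically: with λ = σ²/τ² and c = σ²/τ − μ, the mgf of τN(t) − ct approaches exp(μtk + σ²tk²/2) as τ → 0. Sample values μ = 3, σ = 6 are taken from the R code that follows:

```python
import math

mu, sigma, t, k = 3.0, 6.0, 1.0, 0.2

def mgf_shifted_poisson(tau):
    lam = sigma ** 2 / tau ** 2
    c = sigma ** 2 / tau - mu
    return math.exp(-t * c * k + lam * t * (math.exp(k * tau) - 1))

bm = math.exp(mu * t * k + sigma ** 2 * t * k ** 2 / 2)
err = [abs(mgf_shifted_poisson(tau) - bm) / bm for tau in (0.1, 0.01, 0.001)]
assert err[0] > err[1] > err[2] and err[2] < 1e-3   # error shrinks with tau
```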
#initialising parameters
mu <- 3
sigma <- 6
tau <- .03
# number of jumps
num <- round(3*lambda)
# calculate process
Brownian <- array(c(rep(0,2*(num+1))),c(num+1,2))
for(i in 1:num){
Brownian[i+1,1] <- Brownian[i,1] + times[i]
Brownian[i+1,2] <- tau*i-c*Brownian[i+1,1]
} #end of for loop
# plot Brownian
colnames(Brownian)<-c("t","W(t)")
plot(Brownian,type="l",lwd="0.1",main="Approximation of a Brownian Motion")
F_2(x_2) = F(\infty, x_2) = 1 - e^{-x_2}.
2. Let u_1 = F_1(x_1) and u_2 = F_2(x_2). We find that we can write the bivariate joint distribution function as
P(X_1 \le x_1, X_2 \le x_2) = C(u_1, u_2) = \exp\left\{-\left[(-\log u_1)^\theta + (-\log u_2)^\theta\right]^{1/\theta}\right\}.
This is the required copula function, and we can indeed re-write it as an Archimedean copula with generator
\phi(u) = (-\log u)^\theta.
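This is the Gumbel copula. Two of its basic properties can be checked directly: at θ = 1 it reduces to the independence copula, and for θ > 1 it exhibits positive dependence (it dominates independence pointwise):

```python
import math

def gumbel(u1, u2, theta):
    # Gumbel copula with generator (-log u)^theta, theta >= 1
    s = (-math.log(u1)) ** theta + (-math.log(u2)) ** theta
    return math.exp(-s ** (1 / theta))

assert abs(gumbel(0.3, 0.7, 1.0) - 0.3 * 0.7) < 1e-12   # independence
assert gumbel(0.3, 0.7, 2.5) >= 0.3 * 0.7               # positive dependence
assert abs(gumbel(0.3, 0.7, 2.5) - gumbel(0.7, 0.3, 2.5)) < 1e-15  # exchangeable
```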
Solution 5.13: [cop2, Exercise] Denote the marginals of X_1 and X_2 by F_1 and F_2, respectively.
F(x_1, x_2) = P(X_1 \le x_1, X_2 \le x_2)
= P(X_1 \le x_1) + P(X_2 \le x_2) + P(X_1 > x_1, X_2 > x_2) - 1,
where
P(X_1 > x_1, X_2 > x_2) = \min\left\{ P(X_2 > x_2)\left[P(X_1 > x_1)\right]^{\lambda_1/(\lambda_1 + \lambda)},\; P(X_1 > x_1)\left[P(X_2 > x_2)\right]^{\lambda_2/(\lambda_2 + \lambda)} \right\}.
We find that the required copula for the Marshall-Olkin bivariate exponential distribution has the form
C(u_1, u_2) = u_1 + u_2 - 1 + \min\left\{ (1 - u_2)(1 - u_1)^{\lambda_1/(\lambda_1 + \lambda)},\; (1 - u_1)(1 - u_2)^{\lambda_2/(\lambda_2 + \lambda)} \right\},
where
u_1 = P(X_1 \le x_1) = 1 - P(X_1 > x_1) \quad\text{and}\quad u_2 = P(X_2 \le x_2) = 1 - P(X_2 > x_2).
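The min-expression can be verified against the direct Marshall-Olkin joint survival function P(X₁ > x₁, X₂ > x₂) = exp(−λ₁x₁ − λ₂x₂ − λ max(x₁, x₂)); the rates below are arbitrary illustrative choices:

```python
import math

l1, l2, l = 1.0, 2.0, 0.5     # arbitrary rates for Y1, Y2 and the common shock Z

def direct(x1, x2):
    return math.exp(-l1 * x1 - l2 * x2 - l * max(x1, x2))

def via_min(x1, x2):
    s1 = math.exp(-(l1 + l) * x1)       # P(X1 > x1)
    s2 = math.exp(-(l2 + l) * x2)       # P(X2 > x2)
    return min(s2 * s1 ** (l1 / (l1 + l)), s1 * s2 ** (l2 / (l2 + l)))

for x1, x2 in [(0.7, 0.3), (0.3, 0.7), (1.0, 1.0)]:
    assert abs(direct(x1, x2) - via_min(x1, x2)) < 1e-12
```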
1. Note that
Now apply a change of variable w = \frac{s - \rho t}{\sqrt{1 - \rho^2}}, so that dw = \frac{ds}{\sqrt{1 - \rho^2}}, and
C(u_1, u_2) = \int_{-\infty}^{\Phi^{-1}(u_1)} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}t^2} \int_{-\infty}^{(\Phi^{-1}(u_2) - \rho t)/\sqrt{1 - \rho^2}} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}w^2\right) dw\, dt
= \int_{-\infty}^{\Phi^{-1}(u_1)} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}t^2}\, \Phi\left(\frac{\Phi^{-1}(u_2) - \rho t}{\sqrt{1 - \rho^2}}\right) dt.
Now applying the transformation t = \Phi^{-1}(z), so that
dz = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}t^2} dt,
we find that
C(u_1, u_2) = \int_0^{u_1} \Phi\left(\frac{\Phi^{-1}(u_2) - \rho\,\Phi^{-1}(z)}{\sqrt{1 - \rho^2}}\right) dz.
Indeed this is a nice result!
2. One direction is pretty obvious: if the marginals are standard Normal, then the bivariate Normal copula generates a bivariate Normal distribution. Consider that
C(F_1(x_1), F_2(x_2)) = \int_{-\infty}^{x_2} \int_{-\infty}^{x_1} \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left(-\frac{s^2 + t^2 - 2\rho st}{2(1 - \rho^2)}\right) ds\, dt,
which is the Normal copula with Normal marginals if and only if
\Phi^{-1}(F_1(x_1)) = x_1 \quad\text{and}\quad \Phi^{-1}(F_2(x_2)) = x_2,
i.e. whenever F_1 = F_2 = \Phi, the standard Normal marginals.
3. Using the canonical representation of the bivariate Normal with standard Normal marginals, we find that^2
f(x_1, x_2) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left(-\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1 - \rho^2)}\right)
= c(\Phi(x_1), \Phi(x_2)) \cdot \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x_1^2}{2}\right) \cdot \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x_2^2}{2}\right)
[Footnote 2: Corrections made as at 3 Sep 2010. Thanks to Alex, Kieran.]
so that
c(\Phi(x_1), \Phi(x_2)) = \frac{\frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left(-\frac{x_1^2 + x_2^2 - 2\rho x_1 x_2}{2(1 - \rho^2)}\right)}{\frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x_1^2}{2}\right) \cdot \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x_2^2}{2}\right)}
= \frac{1}{\sqrt{1 - \rho^2}} \exp\left(-\frac{\rho^2 x_1^2 + \rho^2 x_2^2 - 2\rho x_1 x_2}{2(1 - \rho^2)}\right).
To interpret: the copula density distorts independence so as to induce the actual dependence structure.
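The identity f(x₁, x₂) = c(Φ(x₁), Φ(x₂)) φ(x₁) φ(x₂) is simple to verify numerically at an arbitrary point:

```python
import math

rho, x1, x2 = 0.6, 0.5, -0.3

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
# bivariate standard normal density
f = math.exp(-(x1 ** 2 + x2 ** 2 - 2 * rho * x1 * x2) / (2 * (1 - rho ** 2))) \
    / (2 * math.pi * math.sqrt(1 - rho ** 2))
# copula density evaluated at (Phi(x1), Phi(x2)), from the closed form above
c = math.exp(-(rho ** 2 * x1 ** 2 + rho ** 2 * x2 ** 2 - 2 * rho * x1 * x2)
             / (2 * (1 - rho ** 2))) / math.sqrt(1 - rho ** 2)

assert abs(f - c * phi(x1) * phi(x2)) < 1e-12
```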
Solution 5.16: [cop5R, Exercise] R code for the optimisation of the log-likelihood, both in the case of independence and in the case of the Frank copula.
1. The following is the R code for maximising the log-likelihood in the case of independence.
The results of the estimation, together with the standard errors are as follows:
2. The following is the R code for maximizing the log-likelihood for the case of the Frank
copula.
return(num/den)
}
"neg.loglik" <- function(parm,x1,x2) {
p.lambda <- parm[1]
p.theta <- parm[2]
g.alpha <- parm[3]
g.beta <- parm[4]
f.alpha <- parm[5]
f1 <- dPareto(x1,alpha=p.theta,x0=p.lambda)
f2 <- dgamma(x2,shape=g.alpha,rate=1/g.beta)
u1 <- pPareto(x1,alpha=p.theta,x0=p.lambda)
u2 <- pgamma(x2,shape=g.alpha,rate=1/g.beta)
f12 <- C12.Frank(u1,u2,f.alpha)
log.ll <- log(f1)+log(f2)+log(f12)
return(sum(-log.ll))
}
init.est <- c(36000,2.6,0.75,804,-2.1)
The results of the estimation, together with the standard errors are as follows:
Therefore, we cannot reject the null hypothesis that Loss and ALAE are independent: there is no strong evidence that they are not. Indeed, the scatterplot of Loss against ALAE confirms this, as we see no evidence of any particular dependence between Loss and ALAE.
[Scatterplot of log(ALAE) against log(Loss).]
theta (Plackett)   theta (FGM)   rho_S     Type of dependence
0.36               -0.987        -0.329    negative
1.00                0.000         0.000    independence
2.70                0.962         0.321    positive
In fact, the Plackett copulas allow for stronger negative and positive dependence than the FGM copulas: the FGM parameter must lie within [-1, 1], whereas the Plackett parameter can take any positive real value. Note that
\rho_S = \frac{\theta + 1}{\theta - 1} - \frac{2\theta \ln \theta}{(\theta - 1)^2} = \frac{\theta^2 - 1 - 2\theta \ln \theta}{(\theta - 1)^2}.
When \theta = 1, this expression is indeterminate. Using de l'Hospital's rule (twice), we get
\lim_{\theta \to 1} \frac{\theta^2 - 1 - 2\theta \ln \theta}{(\theta - 1)^2} = \lim_{\theta \to 1} \frac{2\theta - 2\ln \theta - 2}{2(\theta - 1)} = \lim_{\theta \to 1} \frac{2 - 2/\theta}{2} = 0,
which could be expected since a Plackett copula with \theta = 1 is the independence copula.
# number of couples
n = 1000
# parameters
theta <- c(.36,1,2.7)
for (i in 1:3) {
u <- runif(n) # our first independent variables set us
t <- runif(n) # our second independent variables set ts
v <- c(rep(0,n)) # initialise the vector for the vs
#algorithm to create dependent vs out of the ts with Plackett
a <- t*(1-t)
b <- theta[i]+a*(theta[i]-1)^2
c <- 2*a*(u*theta[i]^2+1-u)+theta[i]*(1-2*a)
d <- sqrt(theta[i])*sqrt(theta[i]+4*a*u*(1-u)*(1-theta[i])^2)
v <- (c-(1-2*t)*d)/2/b
# mapping marginals into Paretos
x <- 200*((1-u)^(-1/2)-1)
y <- 200*((1-v)^(-1/2)-1)
# plots for that theta
plot(cbind(u,v),
main=paste("Plot of v against u, case theta=",theta[i]),
pch=20) # for small dots
plot(cbind(x,y),
main=paste("Plot of y against x, case theta=",theta[i]),
pch=20,
xlim=c(0,800),ylim=c(0,800)) #up to the 96% quantile
} # end i loop
# number of couples
n = 1000
# parameters
theta <- c(-.987,0,.962)
for (i in 1:3) {
u <- runif(n) # our first independent variables set us
t <- runif(n) # our second independent variables set ts
v <- c(rep(0,n)) # initialise the vector for the vs
#algorithm to create dependent vs out of the ts with FGM
a <- 1+theta[i]*(1-2*u)
b <- sqrt(a^2-4*(a-1)*t)
v <-2*t/(b+a)
# mapping marginals into Paretos
x <- 200*((1-u)^(-1/2)-1)
y <- 200*((1-v)^(-1/2)-1)
# plots for that theta
plot(cbind(u,v),
main=paste("Plot of v against u, case theta=",theta[i]),
pch=20) # for small dots
plot(cbind(x,y),
main=paste("Plot of y against x, case theta=",theta[i]),
pch=20,
xlim=c(0,800),ylim=c(0,800)) #up to the 96% quantile
} # end i loop
[Figure: scatterplots of (u, v) (left column) and of the corresponding Pareto pairs (x, y) (right column) for Plackett random variates with theta = 0.36, 1 and 2.7; the (x, y) axes are truncated at 800.]
[Figure: scatterplots of (u, v) (left column) and of the corresponding Pareto pairs (x, y) (right column) for FGM random variates with theta = -0.987, 0 and 0.962; the (x, y) axes are truncated at 800.]
First, note:
- The three different thetas considered for each copula correspond to negative dependence, independence and positive dependence.
- We are very close to the maximum negative and positive dependence possible for FGM copulas: Spearman's rho must lie within [-1/3, 1/3] for such copulas. Hence, the outputs give an idea of the whole spectrum of dependence FGM copulas can offer: it is limited.
- u-v plots: in the independence cases, points are scattered in a random way and no grouping is observed. For negative dependence, points are grouped more around the top-left and bottom-right regions, whereas for positive dependence, points are grouped more around the bottom-left and top-right regions, as they should be.
- x-y plots have been restricted to values up to 800 (the 96% quantile of the Pareto(2, 200)) in order to have a reasonable view of the observations. Otherwise, displaying extreme data points results in an aggregation of most of the data points around the origin.
- x-y plots: if we had been presented with these plots only, it would have been impossible (or very difficult) to guess the structure of dependence between the random variables, even after the modification described in the previous point. It is possible to recognise some negative dependence when relevant (points are grouped close to the axes and there are fewer points around the top-right region), but the cases of independence and positive dependence are not clear.
- Overall it is more difficult to see the type of dependence in the x-y scatterplots than in the u-v scatterplots, even though the marginal distributions are the same. This shows that it is often more useful to plot probabilities rather than the raw observations in order to get some sense of the dependence structure between random variables.
- In both cases, positive or negative dependence is observed in both the left and right tails (there is no such thing as dependence in only one of the tails).
- It seems that in the case of FGM copulas the maximum negative dependence is weaker than the maximum positive dependence, whereas for Plackett copulas it looks symmetrical.
- If we needed a level of dependence with Spearman's rho outside [-1/3, 1/3], it would not be possible to use an FGM copula. On the other hand, the family of Plackett copulas is a comprehensive family, that is, it can model the strongest negative and positive dependence (the countermonotonicity and comonotonicity copulas, respectively: the Frechet bounds). For instance, a Plackett copula can achieve a Spearman's rho of -0.8 and 0.8 if the parameter is 0.0412 and 24.26, respectively. This is illustrated below:
[Figure: scatterplots of (u, v) for Plackett random variates with theta = 0.0412 (left, rho_S approximately -0.8) and theta = 24.26 (right, rho_S approximately 0.8).]
Module 6
coefficient of variation).
Exercise 6.2: [NLI13.1, Solution] Suppose that we have two independent lines of business, S_1 and S_2, which have distributions S_1 \sim CompPoi(10, G_1 = Exp(10)) and S_2 \sim CompPoi(20, G_2 = Exp(20)).
for i = 1, 2, ..., I and j = 1, 2, ..., J. We wish to find the parameters \mu, \chi_{1,i} and \chi_{2,j} > 0 such that the following expression is minimised:
X^2 = \sum_{i,j} \frac{(S_{i,j} - v_{i,j}\,\mu\,\chi_{1,i}\,\chi_{2,j})^2}{v_{i,j}\,\mu\,\chi_{1,i}\,\chi_{2,j}}. (6.1)
We have a 2-by-4 table of risk classes. For simplicity, we can set v_{i,j} \equiv 1, and we need to determine the positive tariff factors \mu, \chi_{1,i} and \chi_{2,j} for i = 1, 2, ..., I and j = 1, 2, ..., J that minimise
X^2 = \sum_{i,j} \frac{(S_{i,j} - \mu\,\chi_{1,i}\,\chi_{2,j})^2}{\mu\,\chi_{1,i}\,\chi_{2,j}}. (6.2)
The observations S_{i,j} are given in the following table.
Exercise 6.5: [NLI21.1, Solution] Use the data from Table 6.1 (assuming a multiplicative tariff structure). Further assume that w_m \equiv 1 and \Sigma = \sigma^2 1. Also, to ensure uniqueness of the solutions, we assume \mu = 1 and \chi_{1,1} = 1. We follow a log-linear MLE structure for the claims.
(a) Provide the design matrix Z, the corresponding parameter vector \beta and the log-claim vector X.
(b) Using the log-linear MLE formula (Equation (7.9) in ?), determine the tariffication factors.
6.4 Solutions
Solution 6.1: [NLI13, Exercise] Let S = \sum_{i=1}^{3} S_i be the aggregation of the three individual lines of car fleet business; the moment generating function of S is
M_S(r) = \exp\left\{\sum_{i=1}^{3} \lambda_i v_i \left(M_{Y_1^{(i)}}(r) - 1\right)\right\}. (6.3)
1. Using the moment generating function of S, the expected claim amount of the car fleet is
E[S] = \sum_{i=1}^{3} \lambda_i v_i\, E[Y_1^{(i)}] = 39330. (6.4)
2. Using the moment generating function of S, the variance of the claim amount S is
Var(S) = \sum_{i=1}^{3} \lambda_i v_i\, E[(Y_1^{(i)})^2] = 693705000. (6.5)
The premium for the car fleet using the variance loading principle with \alpha = 3 \times 10^{-6} is
(a) Using the aggregation of compound Poisson distributions, the sum S = S_1 + S_2 also has a compound Poisson distribution:
S = S_1 + S_2 \sim CompPoi\left(30, \frac{1}{3}G_1(x) + \frac{2}{3}G_2(x)\right). (6.7)
(c) According to the Central Limit Theorem, the sum S = \sum_{i=1}^{n} S_i can be approximated by the normal distribution with mean nE[S_i] = E[S] and variance nVar(S_i) = Var(S). A natural candidate for the loading factor is 1.96 (the 97.5% quantile of a standard normal random variable). Such a premium corresponds to a 2.5% Value-at-Risk.
Solution 6.3: [NLI7.3, Exercise] Note that we need to set μ = 1 and χ_{1,1} = 1 to ensure that we have unique solutions.
R-code:
y2<-x[5]
temp<-mu*t(c(x1,x2,x3,x4)%*%t(c(y1,y2)))
sum((Sij-temp)^2/temp)
}
# use the optim function to produce output
OptimResult<-optim(c(30,30,30,30,1),ObjFun,method = "L-BFGS-B",hessian=T)
OptimResult
# check the eigenvalues: they are all positive
eigen(OptimResult$hessian)
# positive definite Hessian -> local minimiser
There are multiple ways to convert the table above into the design matrix Z (as long as it is full rank). If we choose to have an intercept term, then we need to drop one of the three car-type columns and one of the four year-type columns (to ensure full rank). Here we drop the columns truck and 51-60y. The resulting design matrix is
\[
Z = \begin{pmatrix}
1 & 1 & 0 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}. \tag{6.10}
\]
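The dummy coding behind this 12×6 matrix (intercept, three car types minus the baseline "truck", four year classes minus the baseline "51-60y") can be sketched programmatically. The non-baseline level names below are illustrative placeholders, not taken from the data set.

```python
# Sketch: build the 12x6 dummy-coded design matrix Z, dropping the baseline
# levels ("truck" and "51-60y") so that Z stays full rank.
car_types = ["type1", "type2", "truck"]                 # baseline: truck
year_types = ["21-30y", "31-40y", "41-50y", "51-60y"]   # placeholder names; baseline: 51-60y

Z = []
for car in car_types:
    for year in year_types:
        row = [1]                                        # intercept column
        row += [1 if car == c else 0 for c in car_types[:-1]]
        row += [1 if year == a else 0 for a in year_types[:-1]]
        Z.append(row)

assert Z[0] == [1, 1, 0, 1, 0, 0] and Z[-1] == [1, 0, 0, 0, 0, 0]
```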
(a) From Table 6.1, we can construct the corresponding categorical variables. The log-linear MLE formula gives
\[
\hat{\beta}^{\mathrm{MLE}} = \left( Z' \Sigma^{-1} Z \right)^{-1} Z' \Sigma^{-1} X, \tag{6.13}
\]
which yields
\[
\hat{\beta}^{\mathrm{MLE}} =
\begin{pmatrix} \beta_{1,2} \\ \beta_{2,1} \\ \beta_{2,2} \\ \beta_{2,3} \\ \beta_{2,4} \end{pmatrix}
= \begin{pmatrix} 0.2227 \\ 7.2215 \\ 7.0187 \\ 6.9276 \\ 7.0903 \end{pmatrix}. \tag{6.14}
\]
Since we assume that β_0 = 0 and β_{1,1} = 0, we now have a complete set of tariffication factors,
\[
\mu = \exp(\beta_0), \quad \chi_{1,i} = \exp(\beta_{1,i}) \quad \text{and} \quad \chi_{2,j} = \exp(\beta_{2,j}). \tag{6.15}
\]
Module 7
1. Normal(μ, σ²)
2. Poisson(λ)
3. Binomial(m, p)
4. Negbin(r, p)
5. Gamma(γ, c)
6. Inverse Gaussian(α, β)
Exercise 7.2: [glm2K, Solution] [?, Problem 8.6.8] The following is an extract from ?, pp. 193–194.
So we see that all the examples of distributions in the exponential dispersion family
that we have given can be generated by starting with prototypical elements of each
type, and next taking Esscher transforms . . . .
Exercise 7.3: [glm3, Solution] [Jiwook's Final Exam Question 2002 - modified] The density of the Binomial distribution is given by
\[
f(y; p) = \frac{n!}{(n-y)!\,y!}\, p^y (1-p)^{n-y}.
\]
Show that the Binomial distribution is a member of the exponential dispersion family with density
\[
f(y; \theta, \phi) = \exp\left\{ \frac{y\theta - b(\theta)}{\phi} + c(y; \phi) \right\}.
\]
Exercise 7.6: [NLI22, Solution][?, Exercise 22] Calculate the deviance statistics for the Poisson
and the gamma model, see also ?, equation (3.4).
Exercise 7.7: [glm6K, Solution] [?, Problem 8.6.6] Show that in general, the scaled deviance equals
\[
\frac{D}{\phi} = \frac{2}{\phi} \sum_i w_i \left\{ y_i \left( \tilde{\theta}_i - \hat{\theta}_i \right) - \left[ b(\tilde{\theta}_i) - b(\hat{\theta}_i) \right] \right\}.
\]
Exercise 7.8: [glm7, Solution] Show that the deviance for an Inverse Gaussian distribution has the following form:
\[
D = \sum_{i=1}^{n} \frac{(y_i - \hat{\mu}_i)^2}{\hat{\mu}_i^2\, y_i}.
\]
Exercise 7.10: [glm9, Solution] [Institute question, April 2006] An insurance company has a
set of n risks (i = 1, 2, ..., n) for which it has recorded the number of claims per month, Yij , for
m months (j = 1, 2, ..., m).
It is assumed that the number of claims for each risk, for each month, are independent Poisson
random variables with
E (Yij ) = ij .
These random variables are modelled using a Generalized Linear Model, where
\[
\bar{y}_i = \frac{1}{m} \sum_{j=1}^{m} y_{ij}.
\]
3. A company has data for each month over a 2 year period. For one risk, the average
number of claims per month was 17.45. In the most recent month for this risk, there were
9 claims. Calculate the contribution that this observation makes to the deviance.
Exercise 7.11: [glm10, Solution] [Institute question, Sep 2003] There are m male drivers in each of three age groups, and data on the number of claims made during the last year are available. Assume that the numbers of claims are independent Poisson random variables. If Y_ij is the number of claims for the jth male driver in group i (i = 1, 2, 3; j = 1, 2, ..., m), let E(Y_ij) = μ_ij and suppose log(μ_ij) = α_i.
1. Show that this is a Generalized Linear Model, identifying the link function and the linear predictor.
3. For a particular data set with 20 observations in each group, several models are fitted,
with deviances as shown below:
Link function Deviance
Exercise 7.12: [glm11, Solution] An insurance company tested for claim sizes under two factors, i.e. CAR, the insurance group into which the car was placed, and AGE, the age of the policyholder (i.e. a two-way contingency table). It was assumed that the claim size y_i follows a gamma distribution, i.e.
\[
f(y_i) = \frac{1}{\Gamma(\nu_i)\, y_i} \left( \frac{\nu_i y_i}{\mu_i} \right)^{\nu_i} \exp\left( -\frac{\nu_i y_i}{\mu_i} \right) \quad \text{for } y_i \geq 0,\ \mu_i > 0,
\]
with a log-link function. Analysis of a set of data for which n = 8 provided the following SAS
output:
Observation Claim size CAR type Age group Pred Xbeta Resdev
1 27 1 1 25.53 3.24 0.30
2 16 1 2 24.78 3.21 1.90
3 36 1 1 3.41 1.03
4 45 1 2 38.09 3.64 1.11
5 38 2 1 40.85 3.71 0.46
6 27 2 2 36.97 3.61 1.73
7 14 2 1 2.45 0.69
8 6 2 2 14.59 2.68 2.55
Exercise 7.13: [glm12R, Solution][R] In this question, the vehicle insurance data set car.csv is used. This data set is based on one-year vehicle insurance policies taken out in 2004 or 2005. There are 67,856 policies, of which 4,624 had at least one claim.
The data frame car.csv contains the claim occurrence indicator clm, which takes value 1 if there is a claim and 0 otherwise. The variable veh_value represents the vehicle value, which takes values from $0 to $350,000. We will not be concerned with other variables at the moment.
In this question, we will build a logistic regression model for the vehicle insurance data set. Previous study has shown that the relationship between the likelihood of occurrence of a claim and vehicle value is possibly quadratic or cubic.
1. Suppose the relationship between vehicle value and the probability of a claim is cubic; formulate the model and test the significance of the coefficients.
2. Use the AIC to determine which model is best: linear, quadratic or cubic.
Exercise 7.14: [glm13R, Solution][R] Third party insurance is a compulsory insurance for vehicle owners in Australia. It insures vehicle owners against injury caused to other drivers, passengers or pedestrians as a result of an accident.
In this question, the third party claims data set Third party claims.xls is used. This data set records the number of third party claims in a twelve-month period between 1984 and 1986 in each of 176 geographical areas (local government areas) in New South Wales, Australia.
1. Consider a model for the number of claims (claims) in an area as a function of the number of accidents (accidents). Produce a scatter plot of claims against accidents. Do you think a simple linear regression model is appropriate?
2. Fit a simple linear regression to the model and use the plot command to produce residual and diagnostic plots for the fitted model. What do the plots tell you?
3. Now fit a Poisson regression model with claims as response and log(accidents) as the predictor (include offset=log(population) in your code). Check if there is overdispersion in the model by computing the estimate of φ.
7.4 Solutions
Solution 7.1: [glm1, Exercise] See the lecture notes (or Table 3.1 in ?). The density and the mean are given in the table and the variance can be derived easily from the table with
\[
\sigma^2 = \phi\, V(\mu).
\]
Try to map some of the densities into the exponential family formulation.
Solution 7.2: [glm2K, Exercise] (8.6.8) Although this has not been discussed in lectures, it should not be a difficult exercise to show that the exponential dispersion property is preserved under the Esscher transformation. The proof is straightforward using the cgf argument in (8.42), although the mgf can also be used. Now, to prove the statements in Remark 8.6.10, start from
\[
\kappa_h(t) = \left( e^{t+h} - 1 \right) - \left( e^h - 1 \right) = e^h \left( e^t - 1 \right),
\]
which is clearly the cgf of a Poisson(e^h). As yet another example, in the Gamma(1, 1) case from Table A, we have
\[
\kappa_h(t) = \log \frac{1}{1-t-h} - \log \frac{1}{1-h} = \log \frac{1-h}{1-h-t},
\]
which is clearly the cgf of a Gamma(1, 1−h). For the Inverse Gaussian(1, 1), we have
\[
\begin{aligned}
\kappa_h(t) &= \left( 1 - \sqrt{1 - 2(t+h)} \right) - \left( 1 - \sqrt{1 - 2h} \right) \\
&= \sqrt{1 - 2h} - \sqrt{1 - 2(t+h)} \\
&= \sqrt{1 - 2h} \left( 1 - \sqrt{\frac{1 - 2t - 2h}{1 - 2h}} \right) \\
&= \sqrt{1 - 2h} \left( 1 - \sqrt{1 - \frac{2t}{1 - 2h}} \right).
\end{aligned}
\]
Solution 7.3: [glm3, Exercise] You ought to be able to verify that the Binomial belongs to the Exponential Dispersion family with
\[
b(\theta) = n \log\left( 1 + e^{\theta} \right), \quad c(y; \phi) = \log \frac{n!}{(n-y)!\,y!}, \quad \text{and} \quad \phi = 1.
\]
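This identification can be sanity-checked numerically; the Python sketch below compares the Binomial pmf with the exponential dispersion form term by term (n and p are arbitrary illustrative values).

```python
import math

# Check numerically that the Binomial(n, p) pmf equals the exponential
# dispersion form exp{y*theta - b(theta) + c(y)} with theta = log(p/(1-p)),
# b(theta) = n*log(1 + exp(theta)), phi = 1 and c(y) = log C(n, y).
n, p = 10, 0.3          # arbitrary illustrative values
theta = math.log(p / (1 - p))
b = n * math.log(1 + math.exp(theta))

for y in range(n + 1):
    pmf = math.comb(n, y) * p**y * (1 - p)**(n - y)
    edf = math.exp(y * theta - b + math.log(math.comb(n, y)))
    assert abs(pmf - edf) < 1e-12
```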
The three components of a generalized linear model are: (1) Stochastic component: the observations Y_i are independent and each follows an Exponential Dispersion distribution. (2) Systematic component: every observation has a linear predictor η_i = Σ_j x_{ij} β_j, where x_{ij} denotes the jth explanatory variable; and (3) Link function: the expected value E(Y_i) = μ_i is linked to the linear predictor η_i by the link function η_i = g(μ_i). Now to find the deviance of
Solution 7.4: [glm4K, Exercise] (8.4.1) We know that if D denotes the deviance, the scaled deviance is
\[
\frac{D}{\phi} = -2 \log \left( \hat{L} / \tilde{L} \right)
\]
by definition, where L̂ is the likelihood computed using the MLEs μ̂ under the current model replacing the μ, while L̃ is the likelihood computed with the μ replaced by the estimates under the full model, hence the actual observations y, in view of the remarks just below (8.22). To show that (8.23) results from this is basic algebra. To see this, note that
\[
\begin{aligned}
D &= -2 \log \frac{\prod_{i=1}^{n} e^{-\hat{\mu}_i} \hat{\mu}_i^{y_i} / y_i!}{\prod_{i=1}^{n} e^{-y_i} y_i^{y_i} / y_i!}
= -2 \log \prod_{i=1}^{n} e^{-(\hat{\mu}_i - y_i)} \left( \frac{\hat{\mu}_i}{y_i} \right)^{y_i} \\
&= -2 \sum_{i=1}^{n} \left[ -\left( \hat{\mu}_i - y_i \right) + y_i \log \frac{\hat{\mu}_i}{y_i} \right]
= 2 \sum_{i=1}^{n} \left[ \left( \hat{\mu}_i - y_i \right) - y_i \log \frac{\hat{\mu}_i}{y_i} \right].
\end{aligned}
\]
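The resulting formula can be verified numerically against the likelihood-ratio definition; in this Python sketch the y and μ̂ values are arbitrary illustrative data.

```python
import math

# Check of the Poisson deviance formula against the likelihood-ratio
# definition D = -2*log(L_hat / L_tilde), where L_tilde sets mu_i = y_i.
y  = [3.0, 7.0, 2.0, 11.0]   # illustrative observations
mu = [4.0, 6.0, 3.0, 9.0]    # illustrative fitted means

def loglik(mus):
    # Poisson log-likelihood sum_i [-mu_i + y_i*log(mu_i) - log(y_i!)]
    return sum(-m + yi * math.log(m) - math.lgamma(yi + 1)
               for m, yi in zip(mus, y))

D_direct  = 2 * (loglik(y) - loglik(mu))          # -2*log(L_hat/L_tilde)
D_formula = 2 * sum((m - yi) - yi * math.log(m / yi)
                    for m, yi in zip(mu, y))
assert abs(D_direct - D_formula) < 1e-12
```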
Solution 7.5: [glm5K, Exercise] (8.4.2) To show that (8.26) results, following the discussion in the previous problem, we can verify that, for exponential dispersion models, the scaled deviance can be expressed as
\[
\frac{D}{\phi} = -2 \log \left( \hat{L} / \tilde{L} \right)
= \frac{2}{\phi} \sum_{i=1}^{n} \left\{ y_i \left( \tilde{\theta}_i - \hat{\theta}_i \right) - \left[ b(\tilde{\theta}_i) - b(\hat{\theta}_i) \right] \right\}.
\]
Solution 7.7: [glm6K, Exercise] (8.6.6) Note that the log-likelihood can be expressed as
\[
\ell(\theta; y) = \sum_{i=1}^{n} \left[ \frac{y_i \theta_i - b(\theta_i)}{\phi / w_i} + c\left( y_i; \phi / w_i \right) \right],
\]
so that the scaled deviance is
\[
\frac{D}{\phi} = \frac{2}{\phi} \sum_{i=1}^{n} w_i \left[ y_i \tilde{\theta}_i - b(\tilde{\theta}_i) - y_i \hat{\theta}_i + b(\hat{\theta}_i) \right].
\]
Solution 7.8: [glm7, Exercise] Recall that the scaled deviance for any member of the Exponential Dispersion family has the form
\[
\frac{D}{\phi} = 2 \left[ \ell\left( \tilde{\theta}; y \right) - \ell\left( \hat{\theta}; y \right) \right]
= \frac{2}{\phi} \sum_{i=1}^{n} \left\{ y_i \tilde{\theta}_i - b(\tilde{\theta}_i) - \left[ y_i \hat{\theta}_i - b(\hat{\theta}_i) \right] \right\},
\]
where for the Inverse Gaussian, we have verified (in lecture) that
\[
\theta = -\frac{1}{2\mu^2} \quad \text{and} \quad b(\theta) = -\sqrt{-2\theta} = -\frac{1}{\mu}.
\]
Thus, the deviance can be expressed as
\[
\begin{aligned}
D &= 2 \sum_{i=1}^{n} \left[ y_i \left( -\frac{1}{2y_i^2} + \frac{1}{2\hat{\mu}_i^2} \right) + \frac{1}{y_i} - \frac{1}{\hat{\mu}_i} \right]
= \sum_{i=1}^{n} \frac{1}{y_i} \left[ 1 - \frac{2y_i}{\hat{\mu}_i} + \frac{y_i^2}{\hat{\mu}_i^2} \right] \\
&= \sum_{i=1}^{n} \frac{1}{y_i} \left( 1 - \frac{y_i}{\hat{\mu}_i} \right)^2
= \sum_{i=1}^{n} \frac{1}{y_i} \left( \frac{\hat{\mu}_i - y_i}{\hat{\mu}_i} \right)^2
= \sum_{i=1}^{n} \frac{(\hat{\mu}_i - y_i)^2}{\hat{\mu}_i^2\, y_i}.
\end{aligned}
\]
Solution 7.9: [glm8, Exercise] See Final Exams solution, Year 2005.
so that
\[
e^{\theta_i} = \frac{1}{m} \sum_{j=1}^{m} y_{ij} = \bar{y}_i
\]
and the MLE is
\[
\hat{\theta}_i = \log \bar{y}_i.
\]
2. The deviance is
\[
\begin{aligned}
2 \left[ \ell(y; y) - \ell(y; \mu) \right]
&= 2 \left[ \sum_{i=1}^{n} \sum_{j=1}^{m} \left( y_{ij} \log y_{ij} - y_{ij} - \log y_{ij}! \right)
- \sum_{i=1}^{n} \sum_{j=1}^{m} \left( y_{ij} \log \bar{y}_i - \bar{y}_i - \log y_{ij}! \right) \right] \\
&= 2 \sum_{i=1}^{n} \sum_{j=1}^{m} \left[ y_{ij} \log \frac{y_{ij}}{\bar{y}_i} - \left( y_{ij} - \bar{y}_i \right) \right].
\end{aligned}
\]
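Part 3 of Exercise 7.10 (monthly average 17.45, observation of 9 claims) follows directly from this per-observation expression; a quick Python check:

```python
import math

# Deviance contribution of a single observation: monthly average
# ybar = 17.45, most recent observation y = 9 claims, contribution
# 2*[y*log(y/ybar) - (y - ybar)].
y, ybar = 9.0, 17.45
contribution = 2 * (y * math.log(y / ybar) - (y - ybar))
print(round(contribution, 2))  # 4.98
```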
1. If Y has a Poisson distribution with mean parameter μ, then its density can be written as
\[
f(y; \mu) = e^{-\mu} \mu^{y} / y! = \exp\left\{ \frac{y \log \mu - \mu}{1} - \log y! \right\},
\]
which is of the exponential dispersion family form. The link function is the log, so that g(μ) = log μ, and the linear predictor is
\[
\eta = \log \mu_{ij} = \alpha_i.
\]
where y_{i+} refers to the sum of the observations in the ith group. Differentiating, we get
\[
\frac{\partial \ell(\alpha_i)}{\partial \alpha_i} = -m e^{\alpha_i} + y_{i+} = 0,
\]
so that the maximum likelihood estimator of α_i is
\[
\hat{\alpha}_i = \log \left( y_{i+} / m \right).
\]
3. In comparing the models, notice the nesting: Model 3 is the smallest and is contained in Model 2, which is contained in Model 1. We may use our rule of thumb of significant improvement if the decrease in deviance is larger than twice the number of additional parameters. Here we summarize in table form:

Model     Deviance   Difference   Additional d.f.   D1 - D2 > 2(p - q)?   Significant improvement?
Model 3   72.53      -            -                 -                     -
Model 2   61.64      10.89        1                 Yes                   Yes
Model 1   60.40      1.24         1                 No                    No
Solution 7.12: [glm11, Exercise] We know that the linear predictor, for the ith observation, is
\[
\eta_i = \log \mu_i = \sum_j x_{ij} \beta_j = x_i^T \beta \quad \text{(in vector form)}.
\]
Thus,
\[
E(y_i) = \mu_i = e^{x_i^T \beta},
\]
and therefore the predicted values are
\[
E(y_3) = e^{3.41} = 30.27 \quad \text{and} \quad E(y_7) = e^{2.45} = 11.59.
\]
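These two fitted values can be reproduced directly from the Xbeta column of the SAS output via the log link:

```python
import math

# The missing fitted values follow from the log link mu = exp(x'beta),
# using the linear predictors (Xbeta) reported in the output table.
print(round(math.exp(3.41), 2))  # 30.27, observation 3
print(round(math.exp(2.45), 2))  # 11.59, observation 7
```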
> car<-read.csv(".../car.csv")
> attach(car)
> names(car)
[1] "veh_value" "exposure" "clm" "numclaims" "claimcst0" "veh_body"
[7] "veh_age" "gender" "area" "agecat" "X_OBSTAT_"
> car.glm<-glm(clm~veh_value+I(veh_value^2)+I(veh_value^3),family=binomial,data=car)
> summary(car.glm)
Call:
glm(formula = clm ~ veh_value + I(veh_value^2) + I(veh_value^3),
family = binomial, data = car)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.4093 -0.3885 -0.3729 -0.3561 2.9462
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.9247606 0.0476282 -61.408 < 2e-16 ***
veh_value 0.2605947 0.0420331 6.200 5.66e-10 ***
I(veh_value^2) -0.0382409 0.0084167 -4.543 5.53e-06 ***
I(veh_value^3) 0.0008803 0.0002752 3.199 0.00138 **
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
The fit shows that all the coefficients are significant, as the p-values are all smaller than 0.01.
2. > car.qua<-glm(clm~veh_value+I(veh_value^2),family=binomial,data=car)
> car.cub<-glm(clm~veh_value+I(veh_value^2)+I(veh_value^3),family=binomial,data=car)
> car.lin<-glm(clm~veh_value,family=binomial,data=car)
> car.lin$aic
[1] 33749.12
> car.qua$aic
[1] 33718.92
> car.cub$aic
[1] 33718.72
The difference between the AIC of the cubic and quadratic models is less than one: including a cubic term improves the fit, as quantified by the AIC, by only 0.2. Therefore, by the principle of parsimony, the quadratic model is preferred. Further, the AIC of the quadratic model is much less than that of the linear model, suggesting that the linear model is inadequate.
1. plot(accidents,claims,xlab="Accidents",ylab="Claims")
We can clearly see that there is a concentration of points around the origin, making it difficult to discern the relationship between the predictor and the response. The data are also strongly heteroskedastic, i.e. more variable for higher values of the predictor. This is a violation of the homoskedasticity assumption of the linear model.
2. > third.lm<-lm(claims~accidents,offset=log(population))
> plot(third.lm)
The residuals vs fitted plot shows that the residuals clearly do not follow a standard normal distribution, and the variance seems to inflate as the fitted value increases. Diagnostic checks indicate a clear violation of the homoskedasticity assumption.
3. > third.poi<- glm(claims ~ log(accidents), family=poisson,offset=log(population))
> summary(third.poi)
Call:
glm(formula = claims ~ log(accidents), family = poisson, offset = log(population))
Deviance Residuals:
Min 1Q Median 3Q Max
-38.957 -3.551 0.116 3.842 45.965
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.093809 0.026992 -262.81 <2e-16 ***
log(accidents) 0.259103 0.003376 76.75 <2e-16 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
> sum(resid(third.poi,type="pearson")^2)/third.poi$df.residual
[1] 101.7168
The estimate of φ takes a value of 101.7168. The inflated dispersion parameter suggests there is overdispersion in the data.
4. > third.qpoi<- glm(claims ~ log(accidents), family=quasipoisson,offset=log(population))
> summary(third.qpoi)
Call:
glm(formula = claims ~ log(accidents), family = quasipoisson,
offset = log(population))
Deviance Residuals:
Min 1Q Median 3Q Max
-38.957 -3.551 0.116 3.842 45.965
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -7.09381 0.27223 -26.058 < 2e-16 ***
log(accidents) 0.25910 0.03405 7.609 1.66e-12 ***
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1 1
The quasi-Poisson estimates of β are identical to those of the Poisson model, but the standard errors are larger by a factor of φ̂^{1/2} = 10.085.
Module 8
Let X_jt denote the claim size of policy j during year t, for 1 ≤ j ≤ J and 1 ≤ t ≤ T. This random variable is a function of a risk profile, which cannot be observed and which is assumed to be the same for a given contract (for given j, but across t) but different between policies (across j).
The unobservable risk profile is modelled as a random variable Θ, and the risk profile of policy j is a possible outcome θ_j of Θ (but we cannot observe it). Risk profiles across contracts are assumed to be independent. We will denote
\[
\mu(\Theta) = E[X_{jt} \mid \Theta] \quad \text{and} \quad \sigma^2(\Theta) = \mathrm{Var}(X_{jt} \mid \Theta)
\]
as the expectation and variance of claim sizes, as functions of the risk profile, respectively. The moments of these quantities are key quantities and are denoted
\[
m = E[\mu(\Theta)], \quad s^2 = E[\sigma^2(\Theta)] \quad \text{and} \quad a = \mathrm{Var}[\mu(\Theta)].
\]
Note that when we need to consider μ(Θ) or σ²(Θ) for a particular policy j, we will write μ_j(Θ) or σ²_j(Θ). Finally, assume that for the same policy (given a certain risk profile θ), claim sizes are independent over time (across t). Claim sizes across policies (across j) are always independent.
For nonparametric estimates, we will denote
\[
\bar{X}_j = \frac{1}{T} \sum_{t=1}^{T} X_{jt} \quad \text{and} \quad \bar{X} = \frac{1}{J} \sum_{j=1}^{J} \bar{X}_j
\]
as the average claim size of policy j and the overall average claim size (across j as well), respectively. Finally, X_{j,T+1} is the claim size whose expectation we want to estimate for policy j for the (T+1)th period.
8.1 Preliminaries
Exercise 8.1: [cre1, Solution] Prove that
and that
\[
\mathrm{Cov}(aX, Y) = a\,\mathrm{Cov}(X, Y).
\]
Also, derive the formula
\[
\mathrm{Cov}(X, Y) = E\left[ \mathrm{Cov}(X, Y \mid Z) \right] + \mathrm{Cov}\left( E[X \mid Z], E[Y \mid Z] \right)
\]
for the decomposition into conditional covariances.
Exercise 8.7: [cre6R, Solution][R ] Suppose you are given the following observed claims for 3
groups and 6 years:
        t = 1   t = 2   t = 3   t = 4   t = 5   t = 6   X̄_j
j = 1    1047    1874    1501    1497    1876    1740   1589
j = 2    2003    1726    1524    1776    1764    2010   1800
j = 3    1597     943     920    1780     980    1010   1205
Test whether the mean claims per group are all equal. What does this imply regarding premium
calculations?
Exercise 8.9: [cre4, Solution] [April 2006 Institute of Actuaries CT6 Question] An insurer has for 2 years insured a number of domestic animals against veterinary costs. In year 1, there were n1 policies and in year 2, there were n2 policies. The number of claims per policy per year follows a Poisson distribution with unknown (mean) parameter λ.
Individual claim amounts were a constant c in year 1 and a constant c(1+r) in year 2. The average total claim amount per policy was y1 in year 1 and y2 in year 2. Prior beliefs about λ follow a Gamma distribution with mean α/β and variance α/β². In year 3, there are n3 policies, and individual claim amounts are c(1+r)². Let Y3 be the random variable denoting the average total claim amount per policy in year 3.
1. State the distribution of the number of claims on the whole portfolio over the 2 year
period.
2. Derive the posterior distribution of , given y1 and y2 .
3. Show that the posterior expectation of Y3 given y1 and y2 can be written in the form of a credibility estimate
\[
Z k + (1 - Z)\, c (1+r)^2 \frac{\alpha}{\beta},
\]
specifying expressions for k and Z.
4. Describe k in words and comment on the impact the values of n1 , n2 have on Z.
Exercise 8.10: [cre5, Solution] You are given that an individual automobile insured has an annual claim frequency distribution that follows a Poisson distribution with mean λ, where, because of parameter uncertainty, λ actually follows a Gamma distribution with parameters α and β. A total of one claim is observed for the insured over a five-year period.
One actuary assumes that α = 2 and β = 5, and a second actuary assumes the same mean for the Gamma distribution, but only half the variance.
Both actuaries determine the Bayesian premium for the expected number of claims in the
next year using their model assumptions.
Determine the ratio of the Bayesian premium that the first actuary calculates to the
Bayesian premium that the second actuary calculates.
\[
P = g_0 + g \bar{X}_j.
\]
Find values for g_0 and g such that P is unbiased and such that it minimises the quadratic error with respect to X_{j,T+1}. You may assume that minimising E\left[ \left( X_{j,T+1} - P \right)^2 \right] is equivalent to minimising E\left[ \left( \mu(\Theta) - P \right)^2 \right].
Exercise 8.12: [cre8, Solution] In the Bühlmann model, find the variance of the credibility premium
\[
P = z \bar{X}_j + (1-z) \bar{X},
\]
as well as its MSE (remember we want to estimate X_{j,T+1}).
Exercise 8.13: [cre9, Solution] In the Bühlmann model, recall that the mean square between (MSB) is
\[
MSB = \frac{T}{J-1} \sum_{j=1}^{J} \left( \bar{X}_j - \bar{X} \right)^2
\]
and the mean square within (MSW) is
\[
MSW = \frac{1}{J(T-1)} \sum_{j=1}^{J} \sum_{t=1}^{T} \left( X_{jt} - \bar{X}_j \right)^2.
\]
Exercise 8.15: [cre11K, Solution] [?, Problem 7.4.1] Let X_1, ..., X_T be independent random variables with variances Var(X_t) = s²/w_t for certain positive numbers w_t, t = 1, ..., T. Show that the variance Σ_t α_t² s²/w_t of the linear combination Σ_t α_t X_t with α_Σ = Σ_t α_t = 1 is minimal when we take α_t ∝ w_t, where the symbol ∝ means "proportional to". Hence the optimal solution has α_t = w_t/w_Σ. Prove also that the minimal value of the variance in this case is s²/w_Σ.
Exercise 8.16: [cre12K, Solution] [?, Problem 7.4.2] Prove that in the Bühlmann–Straub model, we have Var(X̄_zw) ≤ Var(X̄_ww). (Here,
\[
\bar{X}_{zw} = \sum_{j=1}^{J} \frac{z_j}{z_{\Sigma}} \bar{X}_{jw}, \qquad
\bar{X}_{jw} = \sum_{t=1}^{T} \frac{w_{jt}}{w_{j\Sigma}} X_{jt}, \qquad
\bar{X}_{ww} = \sum_{j=1}^{J} \frac{w_{j\Sigma}}{w_{\Sigma\Sigma}} \bar{X}_{jw},
\]
and z_j is the credibility factor.)
Exercise 8.17: [cre13K, Solution] [?, Problem 7.4.10] Estimate the credibility premiums in the Bühlmann–Straub setting when the claims experience for three years is given for three contracts, each with weight w_jt ≡ 1. The claims on the contracts are as follows:
                       t = 1    t = 2    t = 3    t = 4    t = 5
risk class 1  v_1,t      729      786      872      951     1019
              S_1,t      583     1100      262      837     1630
              X_1,t    80.0%   139.9%    30.0%    88.0%   160.0%
risk class 2  v_2,t     1631     1802     2090     2300     2368
              S_2,t       99     1298      326      463      895
              X_2,t     6.1%    72.0%    15.6%    20.1%    37.8%
risk class 3  v_3,t      796      827      874      917      944
              S_3,t     1433      496      699     1742     1038
              X_3,t   180.0%    60.0%    80.0%   190.0%   110.0%
risk class 4  v_4,t     3152     3454     3715     3859     4198
              S_4,t     1765     4145     3121     4129     3358
              X_4,t    56.0%   120.0%    84.0%   107.0%    80.0%
risk class 5  v_5,t      400      420      422      424      440
              S_5,t       40        0      169     1018       44
              X_5,t    10.0%     0.0%    40.0%   240.1%    10.0%

Table 8.1: Observed claims S_i,t and corresponding numbers of policies v_i,t.
\
\
(a) Choose the data of Table 8.1 and calculate the inhomogeneous credibility estimator ( i)
for the claims ratios under the assumption that the collective mean is given by 0 = 90%
and the variance between risk classes is given by 2 = 0.20.
(b) What changes if the variance between risk classes is given by 2 = 0.05?
Exercise 8.19: [NLI24, Solution][?, Exercise 24] Estimate the prediction uncertainty E\left[ \left( X_{i,T+1} - \widehat{\widehat{\mu(\Theta_i)}}^{\mathrm{hom}} \right)^2 \right] for the data in Table 8.1 under the assumption that the volume grows 5% in each risk class.
region i vi Ni
1 50,061 3,880
2 10,135 794
3 121,310 8,941
4 35,045 3,448
5 19,720 1,672
6 39,092 5,186
7 4,192 314
8 19,635 1,934
9 21,618 2,285
10 34,332 2,689
11 11,105 661
12 56,590 4,878
13 13,551 1,205
14 19,139 1,646
15 10,242 850
16 28,137 2,229
17 33,846 3,389
18 61,573 5,937
19 17,067 1,530
20 8,263 671
21 148,872 15,014
total 763,525 69,153
Table 8.2: Observed volumes vi and claims counts Ni in regions i = 1, 2, ..., 21.
8.4 Solutions
Solution 8.1: [cre1, Exercise]
We first have
Remember E[X + Y ] = E[X] + E[Y ] holds irrespective of the dependence structure of X and
Y . Then we have
Notice that when X = Y, it reduces to the familiar formula for conditional variances,
\[
\mathrm{Var}(X) = E\left[ \mathrm{Var}(X \mid Z) \right] + \mathrm{Var}\left[ E(X \mid Z) \right].
\]
Applying the decomposition,
\[
\mathrm{Cov}(X_{it}, X_{jk}) = E\left[ \mathrm{Cov}(X_{it}, X_{jk} \mid \Theta) \right] + \mathrm{Cov}\left[ E(X_{it} \mid \Theta), E(X_{jk} \mid \Theta) \right]
= E\left[ \mathrm{Cov}(X_{it}, X_{jk} \mid \Theta) \right] + \mathrm{Cov}\left[ \mu_i(\Theta), \mu_j(\Theta) \right],
\]
which for i = j and t = k gives
\[
\mathrm{Var}(X_{it}) = s^2 + a.
\]
For the last case, since policies are independent across multiple lines (when i ≠ j), we have Cov(X_{it}, X_{jk}) = 0.
for j = 1, ..., J and t = 1, ..., T, where the error terms ε_{jt} ~ N(0, s²), we can test equality of means with
\[
H_0: m_1 = m_2 = m_3,
\]
or equivalently
\[
H_0: \alpha_1 = \alpha_2 = \alpha_3 = 0.
\]
Here, J = 3 and T = 6. The test statistic is
\[
F = \frac{MSB}{MSW}
= \frac{\frac{1}{J-1} \sum_j T \left( \bar{X}_j - \bar{X} \right)^2}{\frac{1}{J(T-1)} \sum_j \sum_t \left( X_{jt} - \bar{X}_j \right)^2}.
\]
The p-value of 1.280%, which is less than the 5% level of significance, suggests evidence to reject the null hypothesis. Alternatively, we may compare this with the F-value from a table: F_{0.05}(2, 15) = 3.68. The observed F-statistic of 5.9099 is larger, therefore we reject the null. The groups are therefore non-homogeneous. This suggests that we should be asking for a different premium for each group. A credibility premium formula which attaches a credibility factor according to each group's own claim experience would be suitable.
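The F statistic can be reproduced from the data of Exercise 8.7; a Python sketch:

```python
# One-way ANOVA F statistic for the data of Exercise 8.7 (J = 3, T = 6).
data = {
    1: [1047, 1874, 1501, 1497, 1876, 1740],
    2: [2003, 1726, 1524, 1776, 1764, 2010],
    3: [1597,  943,  920, 1780,  980, 1010],
}
J, T = len(data), 6
means = {j: sum(xs) / T for j, xs in data.items()}
grand = sum(means.values()) / J     # equal group sizes, so simple average

msb = T / (J - 1) * sum((m - grand) ** 2 for m in means.values())
msw = sum((x - means[j]) ** 2
          for j, xs in data.items() for x in xs) / (J * (T - 1))
print(round(msb / msw, 4))  # 5.9099
```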
where c is a constant that makes this a proper density. Clearly this has the form of a Gamma density with parameters x_Σ + α and T + β.
Solution 8.9: [cre4, Exercise] Exact solutions drawn from the Institute paper/report.
1. The total number of claims has a Poisson distribution with parameter (n1 + n2 ) .
2. Let Y_i denote the average total claim amount per policy in year i and let X_i denote the total number of claims in year i. Then X_i has a Poisson distribution with parameter n_i λ, and
\[
X_1 = \frac{n_1}{c} Y_1 \quad \text{and} \quad X_2 = \frac{n_2}{c(1+r)} Y_2.
\]
We have
\[
\begin{aligned}
f(\lambda \mid y_1, y_2) &\propto f(y_1, y_2 \mid \lambda)\, \pi(\lambda) \\
&\propto e^{-n_1 \lambda} (n_1 \lambda)^{y_1 n_1 / c} \; e^{-n_2 \lambda} (n_2 \lambda)^{y_2 n_2 / c(1+r)} \; e^{-\beta\lambda} \lambda^{\alpha - 1} \\
&\propto e^{-(n_1 + n_2 + \beta)\lambda} \; \lambda^{\left( \alpha + y_1 n_1 / c + y_2 n_2 / c(1+r) \right) - 1},
\end{aligned}
\]
which implies that the posterior distribution of λ is Gamma with parameters α + y_1 n_1 / c + y_2 n_2 / c(1+r) and n_1 + n_2 + β.
3. Thus, our predicted value of Y_3, given the observed claims y_1 and y_2, is
\[
\begin{aligned}
E(Y_3 \mid y_1, y_2) &= \frac{c(1+r)^2}{n_3} E(X_3 \mid y_1, y_2)
= \frac{c(1+r)^2}{n_3} \, n_3 \, \frac{\alpha + y_1 n_1/c + y_2 n_2/c(1+r)}{n_1 + n_2 + \beta} \\
&= \frac{c(1+r)^2 \alpha + n_1 y_1 (1+r)^2 + n_2 y_2 (1+r)}{n_1 + n_2 + \beta} \\
&= c(1+r)^2 \frac{\alpha}{n_1 + n_2 + \beta}
+ \left( \frac{n_1 y_1 (1+r)^2 + n_2 y_2 (1+r)}{n_1 + n_2} \right) \frac{n_1 + n_2}{n_1 + n_2 + \beta}.
\end{aligned}
\]
Solution 8.10: [cre5, Exercise] It was shown in class that when claim frequencies X_1, ..., X_T are independent Poisson(λ) with λ having a Gamma(α, β) prior, the posterior distribution is Gamma(α + x_Σ, β + T), so that the Bayesian premium is given by
\[
E(\lambda \mid X_1, ..., X_T) = \frac{\alpha + x_\Sigma}{\beta + T}.
\]
According to the first actuary, α = 2 and β = 5. The second actuary sets the parameters with equal mean, but only half the variance. Therefore
\[
\frac{\alpha^*}{\beta^*} = \frac{2}{5} \quad \text{and} \quad \frac{\alpha^*}{(\beta^*)^2} = \frac{1}{2} \cdot \frac{2}{25} = \frac{1}{25},
\]
so that α* = 4 and β* = 10. Since there is only one claim in 5 years, x_Σ = 1 and T = 5. The first actuary sets the premium to
\[
\frac{\alpha + x_\Sigma}{\beta + T} = \frac{2+1}{5+5} = \frac{3}{10}
\]
and the second actuary to
\[
\frac{\alpha^* + x_\Sigma}{\beta^* + T} = \frac{4+1}{10+5} = \frac{5}{15} = \frac{1}{3}.
\]
The ratio is therefore
\[
\frac{3/10}{1/3} = 90\%,
\]
so, interestingly, despite assuming a larger variance, the first actuary has a smaller premium.
Intuitively, this is because as the increasing number of years contributes to larger credibility attached to one's own claims experience, it also provides a greater number of opportunities to correct for premium miscalculations in the past. Presumably, at policy inception, the first actuary will require a larger premium than the second actuary, in the absence of any claims experience. If claims experience becomes more favourable than expected, then there will be a larger correction in the premium calculated. The magnitude of the correction then increases with time, assuming, of course, that favourable experience continues.
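The two premiums and their ratio can be computed directly; a Python check:

```python
# Bayesian premiums (alpha + x_tot) / (beta + T): one claim over T = 5 years.
x_tot, T = 1, 5

prem1 = (2 + x_tot) / (5 + T)     # first actuary:  alpha = 2, beta = 5
prem2 = (4 + x_tot) / (10 + T)    # second actuary: alpha = 4, beta = 10
print(round(prem1 / prem2, 4))    # 0.9
```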
The unbiasedness condition implies the second term is zero. Now, note that
\[
\mathrm{Var}\left( X_{j,T+1} - g_0 - g\bar{X}_j \right)
= \mathrm{Var}(X_{j,T+1}) - 2\,\mathrm{Cov}\left( X_{j,T+1}, g_0 + g\bar{X}_j \right) + \mathrm{Var}\left( g_0 + g\bar{X}_j \right)
= \mathrm{Var}(X_{j,T+1}) - 2g\,\mathrm{Cov}\left( X_{j,T+1}, \bar{X}_j \right) + g^2\,\mathrm{Var}\left( \bar{X}_j \right),
\]
while the unbiasedness condition is equivalent to
\[
g_0 = (1 - g)\, E[\mu(\Theta)].
\]
Using Exercises 8.3 and 8.4 yields the familiar credibility formula:
\[
z = \frac{\mathrm{Cov}\left( X_{j,T+1}, \bar{X}_j \right)}{\mathrm{Var}\left( \bar{X}_j \right)}
= \frac{\mathrm{Var}[\mu(\Theta)]}{\frac{1}{T} E[\sigma^2(\Theta)] + \mathrm{Var}[\mu(\Theta)]}
= \frac{T\,\mathrm{Var}[\mu(\Theta)]}{E[\sigma^2(\Theta)] + T\,\mathrm{Var}[\mu(\Theta)]}
= \frac{T}{T + \left( E[\sigma^2(\Theta)] / \mathrm{Var}[\mu(\Theta)] \right)}.
\]
Solution 8.12: [cre8, Exercise] From Exercises 8.5 and 8.6, we obtained Var(X̄) and Cov(X̄_j, X̄). Now, we have the variance of P:
\[
\begin{aligned}
\mathrm{Var}\left( z\bar{X}_j + (1-z)\bar{X} \right)
&= z^2\, \mathrm{Var}\left( \bar{X}_j \right) + 2z(1-z)\, \mathrm{Cov}\left( \bar{X}_j, \bar{X} \right) + (1-z)^2\, \mathrm{Var}\left( \bar{X} \right) \\
&= z^2 \left( a + s^2/T \right) + 2z(1-z) \frac{a + s^2/T}{J} + (1-z)^2 \frac{a + s^2/T}{J} \\
&= \left( a + s^2/T \right) \left\{ z^2 + \left[ 2z(1-z) + (1-z)^2 \right] / J \right\}.
\end{aligned}
\]
The expression in curly braces is smaller than 1 as long as J > 1, which shows that the variance of the credibility premium is lower than that of X̄_j. Furthermore, choosing z = 0 or z = 1 yields the variances of X̄ and X̄_j, respectively, as it should.
The MSE (as an estimator of X_{j,T+1}) of the credibility premium can be derived as follows:
\[
\begin{aligned}
E\left[ \left( X_{j,T+1} - z\bar{X}_j - (1-z)\bar{X} \right)^2 \right]
&= \mathrm{Var}\left( X_{j,T+1} - z\bar{X}_j - (1-z)\bar{X} \right) \quad \text{[because of unbiasedness of the linear estimator]} \\
&= \mathrm{Var}(X_{j,T+1}) - 2\,\mathrm{Cov}\left( X_{j,T+1}, z\bar{X}_j + (1-z)\bar{X} \right) + \mathrm{Var}\left( z\bar{X}_j + (1-z)\bar{X} \right),
\end{aligned}
\]
with Var(X_{j,T+1}) = s² + a.
and k is therefore
\[
k = \frac{E[\sigma^2(\Theta)]}{\mathrm{Var}[\mu(\Theta)]} = \frac{s^2}{a}.
\]
Recall the formula for the credibility premium,
\[
P^{\mathrm{cred}}_{T+1} = z \bar{X}_j + (1-z) m.
\]
Firstly, notice that if we have more experience in the data (i.e. T increases), then z will increase. This makes sense, as more experience means that we will give more credibility to the individual mean X̄_j.
If the heterogeneity of the portfolio increases (a ↑), that is, risks are quite different amongst the portfolios, then we expect k to decrease. A decreased value of k will increase the credibility coefficient z. So if the portfolios we have are quite different from each other, we will use more information from the individual mean structure X̄_j, i.e. give it more credibility.
In the other situation, where the risk variability within the portfolio decreases (s² ↓), we also expect k to decrease, which results in an increasing value of z. This again makes sense: if each individual portfolio does not vary dramatically, then we will use more information on the mean structure of the individual portfolio X̄_j.
where Var(X_t) = s²/w_t, subject to the condition α_Σ = Σ_t α_t = 1. The Lagrangian for this problem can be written as
\[
L = \sum_t \alpha_t^2 s^2 / w_t - \lambda \left( \alpha_\Sigma - 1 \right).
\]
Solution 8.16: [cre12K, Exercise] In Exercise 8.15 we showed that if the variance is of the form s²/w_t, then the alphas of an estimator of X̄ should be proportional to the inverse: w_t (or equivalently w_t/s², since s² is a constant). We have for the unconditional variance of X̄_jw (whose expectation is the collective premium, which we need in the credibility premium) an expression which is proportional to 1/z_j and not to 1/w_jΣ. The best unconditional expected value should then be computed using the z_j's, not the w_jΣ's, as this will minimise the variance of the estimator.
Note that X̄_jw = Σ_{t=1}^T (w_{jt}/w_{jΣ}) X_{jt} and X̄_ww = Σ_{j=1}^J (w_{jΣ}/w_{ΣΣ}) X̄_jw.
Consider the sequence of independent random variables X̄_1w, ..., X̄_Jw. The variance of X̄_jw is proportional to 1/z_j. Hence the variance of the linear combination X̄_zw = Σ_{j=1}^J (z_j/z_Σ) X̄_jw is the smallest among all such linear combinations. Therefore,
\[
\mathrm{Var}\left( \bar{X}_{zw} \right) \leq \mathrm{Var}\left( \sum_{j=1}^{J} \frac{w_{j\Sigma}}{w_{\Sigma\Sigma}} \bar{X}_{jw} \right) = \mathrm{Var}\left( \bar{X}_{ww} \right).
\]
PT wjt PJ wj
Solution 8.17: [cre13K, Exercise] Note that X jw = t=1 wj Xjt and X ww = j=1 w Xjw .
From the formulas we saw in the lecture we get
1 X
se2 = wjt (Xjt Xjw )2 = 8
J (T 1) j,t
and
wj (Xjw Xww )2 (J 1) se2
P
j 11
a= P 2 = .
w j wj /w
e
3
Thus, the Buhlmann-Straub credibility factor is given by
aT
e 11
ze = 2
= .
aT + se
e 19
The credibility premiums are therefore:
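Given the estimates s̃² = 8 and ã = 11/3 above (the underlying claims table is not reproduced here), the credibility factor can be verified exactly with rational arithmetic:

```python
from fractions import Fraction

# Credibility factor from the estimates above: s~^2 = 8, a~ = 11/3, T = 3.
s2, a, T = Fraction(8), Fraction(11, 3), 3
z = a * T / (a * T + s2)
print(z)  # 11/19
```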
Solution 8.18: [NLI23, Exercise] From ?, Theorem 8.17, the inhomogeneous credibility estimator is given by
\[
\widehat{\widehat{\mu(\Theta_i)}} = \alpha_{i,T}\, \hat{X}_{i,1:T} + \left( 1 - \alpha_{i,T} \right) \mu_0, \tag{8.1}
\]
with credibility weight α_{i,T} and observation-based estimator X̂_{i,1:T}. Since τ² and μ0 are given, we only need to obtain the estimator σ̂²_T = 261.2. Then, using the above formula, the results are summarised in the following table:

                                   risk class 1  risk class 2  risk class 3  risk class 4  risk class 5
α̂_{i,T}                              76.94%        88.64%        76.94%        93.37%        61.72%
cred. estimator (τ² = 0.20)        0.9866506     0.3702171     1.1623246     0.8988722     0.7169975
cred. estimator (τ² = 0.05)        0.9512165     0.5048693     1.0550632     0.8990594     0.8148146
R-code:
v.matrix<-c(729,1631,796,3152,400,786,1802,827,3454,420,872,
2090,874,3715,422,951,2300,917,3859,424,1019,2368,944,4198,440)
dim(v.matrix)<-c(5,5)
S.matrix<-c(583,99,1433,1765,40,1100,1298,496,4145,0,262,326,
699,3121,169,837,463,1742,4129,1018,1630,895,1038,3358,44)
dim(S.matrix)<-c(5,5)
X.matrix<-S.matrix/v.matrix
w.matrix<-v.matrix
tau2<-0.2    # set tau2<-0.05 for part (b)
mu0<-0.9
# estimating sigma^2
Xbar<-rep(rowSums(w.matrix*X.matrix)/rowSums(w.matrix),5)
dim(Xbar)<-c(5,5)
s2hat<-rowSums(w.matrix*((X.matrix-Xbar)^2))/4
sigma2hat<-mean(s2hat)
# calculating \hat{alpha}_{i,T}
alphahat<-rowSums(w.matrix)/(rowSums(w.matrix)+sigma2hat/tau2)
# calculating \hat{X}_{i,1:T}
Xhat<-rowSums(w.matrix*X.matrix)/rowSums(w.matrix)
# inhomogeneous credibility estimator (8.1)
alphahat*Xhat+(1-alphahat)*mu0
Solution 8.19: [NLI24, Exercise] From ?, Equation (8.18), the prediction uncertainty is given by
$$\widehat{E\left[\left(X_{i,T+1} - \widehat{\widehat{\mu(\Theta_i)}}^{\,\mathrm{hom}}\right)^2\right]} = \frac{\hat{\sigma}_T^2}{w_{i,T+1}} + (1 - \alpha_{i,T})\,\hat{\tau}_T^2 \left(1 + \frac{1 - \alpha_{i,T}}{\sum_i \alpha_{i,T}}\right). \qquad (8.2)$$
We have estimators $\hat{\sigma}_T^2 = 261.2$ and $\hat{\tau}_T^2 = 0.1021$, and $w_{i,T+1}$ obtained using a 5% increment factor. Results are tabulated below,
                           risk class 1   risk class 2   risk class 3   risk class 4   risk class 5
prediction uncertainty      0.2482469      0.1062646      0.2676411      0.0597071      0.5744316
R-code:
v.matrix<-c(729,1631,796,3152,400,786,1802,827,3454,420,872,2090,
874,3715,422,951,2300,917,3859,424,1019,2368,944,4198,440)
dim(v.matrix)<-c(5,5)
S.matrix<-c(583,99,1433,1765,40,1100,1298,496,4145,0,262,326,
699,3121,169,837,463,1742,4129,1018,1630,895,1038,3358,44)
dim(S.matrix)<-c(5,5)
X.matrix<-S.matrix/v.matrix
w.matrix<-v.matrix
sigma2hat<-261.2
tau2hat<-0.1021
# calculating \hat{alpha}_{i,T}
alphahat<-rowSums(w.matrix)/(rowSums(w.matrix)+sigma2hat/tau2hat)
w.iT1<-w.matrix[,5]*1.05   # exposures w_{i,T+1}: 5% increment on the last column
# prediction uncertainty (8.2); this line completes the truncated listing
msep<-sigma2hat/w.iT1+(1-alphahat)*tau2hat*(1+(1-alphahat)/sum(alphahat))
$$\mu(\Theta_i) = E[X_i|\Theta_i] = \frac{E[N_i|\Theta_i]}{v_i} = \Theta_i \lambda_0, \qquad (8.3)$$
$$\frac{\sigma^2(\Theta_i)}{v_i} = \mathrm{Var}[X_i|\Theta_i] = \frac{\mathrm{Var}[N_i|\Theta_i]}{v_i^2} = \frac{\Theta_i \lambda_0}{v_i}. \qquad (8.4)$$
The collective mean is $\mu_0 = E[\mu(\Theta_1)] = \lambda_0 E[\Theta_1] = 8.8\%$. The prior uncertainty is given by $\tau^2 = 2.4 \cdot 10^{-4}$. The volatility within risk classes also happens to be $\sigma^2 = E[\sigma^2(\Theta_1)] = \lambda_0 E[\Theta_1] = 8.8\%$ (thanks to the Poisson assumption).
R-code:
N.vector<-c(3880,794,8941,3448,1672,5186,314,1934,2285,2689,661,
4878,1205,1646,850,2229,3389,5937,1530,671,15014)
# v.vector (the volumes v_i) is taken from the data table; its values are omitted here
x.vector<-N.vector/v.vector
# collective mean, vol between, vol within
mu0<-0.088
sigma2<-0.088
tau2<-0.00024
# credibility weights
alpha.vector<-v.vector/(v.vector+sigma2/tau2)
# credibility estimators
ans<-cbind(v.vector,N.vector,x.vector,
alpha.vector,alpha.vector*x.vector+(1-alpha.vector)*mu0)
Module 9
Claims Reserving
         0     1     2     3      4
  1     A1    A2    A3    B1     E
  2     A4    A5    A6    B2
  3     C1    C2    C3    X̂34
  4     D1    D2          X̂44
  5     F

$$\hat{X}_{44} = \frac{D\,(B + \hat{X}_{34})}{A + C}$$

indeed produces the same estimate.
Exercise 9.2: [IBNR1Y, Solution] Suppose that an insurer has observations of claims from year 2009 to year 2011. The observations of the claims arrivals and development are recorded in two tables: the first lists the accident date, reporting date, settlement status and settlement date (if applicable) of each observed claim; the second records the transactions on the claims.
Construct a 3-by-3 annual loss triangle based on the above two tables.
Exercise 9.4: [IBNR4K, Solution] [?, Problem 9.2.4] Apply the arithmetic separation method
to the same data of the previous exercise. Determine the missing values by linear or loglinear
interpolation, whichever seems more appropriate.
Exercise 9.5: [IBNR5, Solution] For a certain portfolio of general insurance policies, denote by $X_{ij}$ the claims that occur in accident year $i$ but are paid in development year $j$, where $i = 1, 2, \dots, t$ and $j = 0, 1, \dots, t-1$, with observable claims only for $i + j \le t$. The triangle below shows the observed incremental claims for this portfolio over a 3-year development period:
Accident Development Year
Year 0 1 2
1 2,541 1,029 217
2 2,824 790
3 1,981
1. Give five (5) reasons for the possible delay between the occurrence and the actual payment
of claims that gives rise to Incurred-but-not-Reported (IBNR) reserves.
2. In the Chain Ladder approach of estimating the bottom half of the claims run-off triangle, the claims $X_{ij}$ are assumed to be Poisson distributed with mean $\alpha_i \beta_j$. Derive explicit forms for the maximum likelihood estimators for the parameters $\alpha_i$ and $\beta_j$.
3. Using the result in (2), calculate the maximum likelihood estimates for $\alpha_i$ and $\beta_j$ for $i = 1, 2, 3$ and $j = 0, 1, 2$, and use these to estimate the bottom half of the triangle.
4. Explain the difference between the Chain Ladder approach and the Arithmetic Separation
Method.
Exercise 9.6: [IBNR7, Solution] Estimate the expected outstanding claims reserve for the data in the table below (figures in $1000), using the Bornhuetter-Ferguson method. Assume an expected loss ratio of 85% and that the total claims paid are $1,942,000.
Exercise 9.7: [IBNR6, Solution] Consider the following run-off triangle of claim payments $X_{ij}$:

                      Development Year
Year of Origin     2000     2001     2002
2000               X00      X01      X02
2001               X10      X11
2002               X20
Based on the following assumptions, we would like to estimate the outstanding claims:

- claim payments for each year of origin and development year have a log-normal distribution,
$$\ln(X_{ij}) \sim \text{Normal}\left(\mu + \sum_{k=1}^{i+j} \gamma_k + \sum_{k=1}^{j} \delta_k,\ \sigma^2\right)$$
where $X_{ij}$ denotes the claim amount paid in development year $j$ arising from losses occurring in year of origin $i$;
- claim payments for each year of origin and development year are independent;
- the expected value of the logarithm of the claim payments in year of origin 0 (Year 2000) and development year 0 is $\mu$;
- the expected change in the logarithm of the claim payments from one accounting year to the next is given by $\gamma_i$ for each accounting year $i$;
- for each year of origin, the expected change in the logarithm of the claims payment from development year $j-1$ $(j = 1, 2, \dots)$ to development year $j$ is equal to $\delta_j$, and this is the same for each year of origin;
- the logarithms of the claim payments have the same variance, $\sigma^2$, regardless of year of origin or year of development.
The $\gamma_i$ values allow for any inflation in values from one accounting year to the next. The $\delta_j$ values allow for the settlement pattern of claims over time arising from the same policy year.
The run-off triangle of expected values for the logarithm of the claims payments will then be

                            Development Year
Year of Origin     2000           2001                 2002
2000               μ              μ+γ1+δ1              μ+γ1+γ2+δ1+δ2
2001               μ+γ1           μ+γ1+γ2+δ1           μ+γ1+γ2+γ3+δ1+δ2
2002               μ+γ1+γ2        μ+γ1+γ2+γ3+δ1        μ+γ1+γ2+γ3+γ4+δ1+δ2
Assuming that $\gamma_3 = \gamma_4 = 0.018$, estimate the outstanding claims $X_{12}$, $X_{21}$ and $X_{22}$.
Exercise 9.8: [IBNR2Y, Solution] Consider a 3-by-3 incremental loss triangle where we have $I = t = 3$ and $J = 2$ (that is, we observe $\{x_{ij} : i + j \le 3,\ 1 \le i \le 3,\ 0 \le j \le 2\}$, and the observations are incremental, not cumulative). The exposure of the $i$th accident period is a known constant $c_i$ ($1 \le i \le 3$). We assume that $X_{ij}$ follows a normal distribution with parameters $\mu_{ij} = \beta_j c_i$ and $\sigma_{ij} = \sigma_j$ ($1 \le i \le 3$ and $0 \le j \le 2$). In other words, the probability density function of $X_{ij}$ is
$$f_{X_{ij}}(x_{ij}) = \frac{1}{\sigma_j \sqrt{2\pi}} \exp\left(-\frac{(x_{ij} - \beta_j c_i)^2}{2\sigma_j^2}\right). \qquad (9.1)$$
2. calculate the conditional mean square error of prediction (MSEP) of the reserving estimates with Mack's formula;
3. calculate the process uncertainty and parameter uncertainty involved in the above conditional MSEP;
5. estimate the reserves with a Poisson model and compare the results to the Chain-Ladder reserves; comment on the comparison.
The USAApaid data required for this exercise comes from the private passenger auto liabil-
ity/medical line of business of the United Services Automobile Association company (https:
//cran.r-project.org/web/packages/ChainLadder/ChainLadder.pdf). This is part of the
Schedule P dataset maintained by the National Association of Insurance Commissioners. The
Schedule P dataset provides real insurance data of nine lines of business over 10 years for all
U.S. general insurers. One can refer to the Casualty Actuarial Society website via http:
//www.casact.org/research/index.cfm?fa=loss_reserves_data for more information and
a clean subset of the data.
9.5 Solutions
Solution 9.1: [IBNR1K, Exercise] (9.2.1) This is immediate. Substituting $\hat{X}_{34} = C B / A$ into the given formula,
$$\hat{X}_{44} = \frac{D(B + \hat{X}_{34})}{A + C} = \frac{D(B + C B / A)}{A + C} = \frac{D\,B(A + C)/A}{A + C} = \frac{D B}{A},$$
which gives the result.
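The algebraic cancellation can also be checked numerically. The totals below are arbitrary positive placeholders chosen for illustration, not the triangle's actual data.

```python
# Numerical check of the identity D(B + CB/A)/(A + C) = DB/A from Solution 9.1.
# A, B, C, D are hypothetical placeholder totals.
A, B, C, D = 391.0, 845.0, 831.0, 789.0
X34_hat = C * B / A                  # chain-ladder estimate substituted in
lhs = D * (B + X34_hat) / (A + C)
rhs = D * B / A
print(abs(lhs - rhs) < 1e-9)         # True
```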
Solution 9.2: [IBNR1Y, Exercise] The construction and result of the loss triangle are shown
in the following table.
Solution 9.3: [IBNR3K, Exercise] (9.2.3) First, it can be verified that the row and column totals are:

Year of         Development Year                       Row
Origin          0      1      2      3      4         Total
1               232    338    373    389    391        391
2               258    373    429    456               456
3               221    303    307                      307
4               359    430                             430
5               349                                    349
Column Total    1419   374    95     43     2
You can proceed by estimating the parameters as suggested in the book (or by applying the mechanics of using ratios of cumulative claims as discussed in lecture). We have
$$\hat{\alpha}_1 = 391.0, \quad \hat{\alpha}_2 = 458.3, \quad \hat{\alpha}_3 = 325.1, \quad \hat{\alpha}_4 = 498.0, \quad \text{and} \quad \hat{\alpha}_5 = 545.5,$$
and
$$\hat{\beta}_0 = 0.640, \quad \hat{\beta}_1 = 0.224, \quad \hat{\beta}_2 = 0.081, \quad \hat{\beta}_3 = 0.051, \quad \text{and} \quad \hat{\beta}_4 = 0.0051.$$
As a result, we have the bottom part of the claims run-off triangle. Some differences may exist due to rounding. The total IBNR reserve is about 285.
Solution 9.4: [IBNR4K, Exercise] For the arithmetic separation method, we have
$$\hat{\gamma}_1 = 346.2170, \quad \hat{\gamma}_2 = 413.0731, \quad \hat{\gamma}_3 = 390.0336, \quad \hat{\gamma}_4 = 515.2749, \quad \text{and} \quad \hat{\gamma}_5 = 453.$$
These can be obtained as suggested by the maximum likelihood estimates derived in lecture. For example,
$$\hat{\gamma}_5 = \frac{\sum_{i+j=5} x_{ij}}{\sum_{j=0}^{4} \hat{\beta}_j} = \sum_{i+j=5} x_{ij} = 349 + 71 + 4 + 27 + 2 = 453,$$
the sum of the claims in the main diagonal, and so on. You should be able to verify the rest. Then, we can extrapolate the $\gamma$-factors linearly to yield $\hat{\gamma}_k$ for $k = 6, \dots, 9$:
$$\hat{\gamma}_6 = 518.27, \quad \hat{\gamma}_7 = 549.85, \quad \hat{\gamma}_8 = 581.43, \quad \hat{\gamma}_9 = 613.01 \qquad (\hat{\gamma}_k = 328.79 + 31.58\,k).$$
One can also extrapolate exponentially, leading to
$$\hat{\gamma}_k = e^{5.81 + 0.0759 k}: \quad \hat{\gamma}_6 = 526.05, \quad \hat{\gamma}_7 = 567.53, \quad \hat{\gamma}_8 = 612.29, \quad \hat{\gamma}_9 = 660.57.$$
(The value $X_{32} = 4$ arouses suspicion.) As a result, the lower right triangle of estimated values for the first case (linear extrapolation) becomes:

Year of      Development Year                           Row
Origin       0       1       2       3       4        Total
1
2                                            2.3        2.3
3                                    23.0    2.4       25.4
4                            36.3    24.4    2.6       63.3
5            109.4   38.5    25.8    2.7               176.4

The total IBNR reserve required would be about 267.4. (Try working out the exponential case!)
Alternatively, one can use linear extrapolation based on the last two $\gamma$-factors:
$$\hat{\gamma}_k = \hat{\gamma}_5 + \frac{k-5}{5-4}(\hat{\gamma}_5 - \hat{\gamma}_4) = \hat{\gamma}_5 + (\hat{\gamma}_5 - \hat{\gamma}_4)(k - 5); \quad k = 6, \dots, 9.$$
One can also use log-linear extrapolation based on the last two $\gamma$-factors:
$$\ln \hat{\gamma}_k = \ln \hat{\gamma}_5 + \frac{k-5}{5-4}(\ln \hat{\gamma}_5 - \ln \hat{\gamma}_4) = \ln \hat{\gamma}_5 + (\ln \hat{\gamma}_5 - \ln \hat{\gamma}_4)(k - 5); \quad k = 6, \dots, 9.$$
Solution 9.5: [IBNR5, Exercise]
1. Some possible reasons for delay: (1) delay in assessing the exact size or amount of claims; (2) delay in investigating whether a claim is valid; (3) long legal proceedings; (4) claims have occurred but are not filed until later; and (5) the claim consists of a series of payments (e.g. disability insurance).
2. First, notice that you can write the probability
$$P(X_{ij} = x_{ij}) = e^{-\alpha_i \beta_j} (\alpha_i \beta_j)^{x_{ij}} / x_{ij}!$$
for $i, j$ satisfying $i + j \le t$. The full likelihood of all observed values can be written as
$$L(\alpha, \beta) = \prod_{i,j} e^{-\alpha_i \beta_j} (\alpha_i \beta_j)^{x_{ij}} / x_{ij}!$$
Take the log of the likelihood and maximise. The solutions will have the form
$$\hat{\alpha}_i = \frac{\sum_j x_{ij}}{\sum_j \hat{\beta}_j} \quad \text{and} \quad \hat{\beta}_j = \frac{\sum_i x_{ij}}{\sum_i \hat{\alpha}_i},$$
where the sums run over the observed cells.
3. Notice that we can write the fitted claims in the chain-ladder approach as $\hat{X}_{ij} = \hat{\alpha}_i \hat{\beta}_j$. Together with the assumption that all claims settle after 3 development years, i.e. $\beta_0 + \beta_1 + \beta_2 = 1$, we can easily verify the following:
$$\hat{X}_{22} = \hat{\alpha}_2 \hat{\beta}_2 = 220; \quad \hat{X}_{31} = \hat{\alpha}_3 \hat{\beta}_1 = 672; \quad \hat{X}_{32} = \hat{\alpha}_3 \hat{\beta}_2 = 161.$$
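These figures can be verified with a short script. This sketch uses the development-factor mechanics of the chain ladder rather than the MLE route of part (2); for the Poisson model the two coincide.

```python
# Chain-ladder completion of the 3x3 incremental triangle in Exercise 9.5.
rows = [[2541, 1029, 217],
        [2824, 790],
        [1981]]
# cumulative claims per accident year
cum = [[sum(r[:k + 1]) for k in range(len(r))] for r in rows]
f1 = (cum[0][1] + cum[1][1]) / (cum[0][0] + cum[1][0])  # dev factor 0 -> 1
f2 = cum[0][2] / cum[0][1]                              # dev factor 1 -> 2
X22 = cum[1][1] * (f2 - 1)           # accident year 2, dev year 2
X31 = cum[2][0] * (f1 - 1)           # accident year 3, dev year 1
X32 = cum[2][0] * f1 * (f2 - 1)      # accident year 3, dev year 2
print(round(X22), round(X31), round(X32))   # 220 672 161
```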
4. The chain-ladder method assumes that the claims are Poisson distributed, $X_{ij} \sim \text{Poisson}(\alpha_i \beta_j)$, where the $\alpha$'s denote the accident year effect and the $\beta$'s denote the development year effect. It has no calendar year effect, unlike the Arithmetic Separation method, where the claims are assumed to be $\text{Poisson}(\beta_j \gamma_k)$. Here, as in the chain ladder, the $\beta$'s denote the development year effect, but the $\gamma$'s denote the calendar year effect. Both methods use maximum likelihood to estimate their corresponding parameters, although in predicting unpaid claims, because future calendar years have not occurred yet, in the Arithmetic Separation method the $\gamma$'s may have to be extrapolated from the estimated ones.
Solution 9.6: [IBNR7, Exercise] First calculate the initial expected total loss as 85% of the earned premium. This gives figures of 731, 799, 833 and 867.
Now calculate the development factors for individual years in the usual way. We find that the factors are 1.2406, 1.125, and 1.0362.
Tackling the years one at a time:
The total expected outgo for Accident Year 2005 is 715 as we are assuming that Accident Year
2005 is fully run-off.
For Accident Year 2006, the expected outgo was initially 799. On this basis we would expect to have paid out $\frac{799}{1.0362} = 771.09$ so far, so we would have to pay out $799 - 771.09 = 27.91$ in the future. In fact we have incurred 750, so our final figure would be $750 + 27.91 = 777.91$.
For Accident Year 2007, the expected outgo was initially 833. On this basis we would expect to have paid out $\frac{833}{1.0362 \times 1.125} = 714.58$ so far, so we would have to pay out $833 - 714.58 = 118.42$ in the future. In fact we have incurred 700 so far, so our final figure would be $700 + 118.42 = 818.42$.
For Accident Year 2008, the expected outgo was initially 867. On this basis we would expect to have paid out $\frac{867}{1.0362 \times 1.125 \times 1.2406} = 599.50$ so far, so we would have to pay out $867 - 599.50 = 267.50$ in the future. In fact we have incurred 647 so far, so our final figure would be $647 + 267.50 = 914.50$.
So the total payout expected is 3225.83, of which we have already paid 1942. So the balance is about 1284.
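The arithmetic can be replicated as follows. The underlying data table is not reproduced above, so the initial expected losses and the incurred-to-date figures in this sketch are taken directly from the worked solution.

```python
# Bornhuetter-Ferguson calculation mirroring Solution 9.6.
factors = [1.2406, 1.125, 1.0362]    # one-step development factors, oldest -> newest
initial = [731, 799, 833, 867]       # initial expected ultimates (85% of premium), AY 2005-2008
incurred = [715, 750, 700, 647]      # incurred to date per accident year

finals = []
for k, (u0, paid) in enumerate(zip(initial, incurred)):
    cum = 1.0
    for f in factors[len(factors) - k:]:   # cumulative factor to ultimate
        cum *= f
    outstanding = u0 * (1 - 1 / cum)       # BF estimate of future payments
    finals.append(paid + outstanding)

total = sum(finals)
print(round(total, 2))    # 3225.83
```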
Solution 9.7: [IBNR6, Exercise] (Jiwook's solution to the 2002 Final Exam, Question 2) Note that we can write the claims run-off model as:

                           Development Year
Year of Origin     0              1                    2
0                  μ              μ+γ1+δ1              μ+γ1+γ2+δ1+δ2
1                  μ+γ1           μ+γ1+γ2+δ1           μ+γ1+γ2+γ3+δ1+δ2
2                  μ+γ1+γ2        μ+γ1+γ2+γ3+δ1        μ+γ1+γ2+γ3+γ4+δ1+δ2
Firstly,
$$\mu_{12} = \mu + \gamma_1 + \gamma_2 + \gamma_3 + \delta_1 + \delta_2 = 7.0632 + 0.025 + 0.012 + 0.018 - 0.0208 + 0.0123 = 7.1097.$$
Hence,
$$\ln X_{12} \sim \text{Normal}(7.1097, 1^2),$$
so
$$\hat{X}_{12} = E(X_{12}) = e^{\mu_{12} + \sigma^2/2} = e^{7.1097 + 0.5} = e^{7.6097} = 2017.67.$$
Secondly,
$$\mu_{21} = \mu + \gamma_1 + \gamma_2 + \gamma_3 + \delta_1 = 7.0632 + 0.025 + 0.012 + 0.018 - 0.0208 = 7.0974.$$
Hence
$$\ln X_{21} \sim \text{Normal}(7.0974, 1^2),$$
so
$$\hat{X}_{21} = E(X_{21}) = e^{\mu_{21} + \sigma^2/2} = e^{7.5974} = 1993.01.$$
Lastly,
$$\mu_{22} = \mu + \gamma_1 + \gamma_2 + \gamma_3 + \gamma_4 + \delta_1 + \delta_2 = 7.0632 + 0.025 + 0.012 + 0.018 + 0.018 - 0.0208 + 0.0123 = 7.1277,$$
so
$$\hat{X}_{22} = E(X_{22}) = e^{\mu_{22} + \sigma^2/2} = e^{7.6277} = 2054.32.$$
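A quick numeric check of these three estimates, using the parameter values given in the solution (with $\sigma^2 = 1$):

```python
import math

# Log-normal reserve estimates from Solution 9.7: E[X] = exp(mu_ij + sigma^2/2).
mu = 7.0632
g = [0.025, 0.012, 0.018, 0.018]     # gamma_1..gamma_4 (accounting-year effects)
d = [-0.0208, 0.0123]                # delta_1, delta_2 (development-year effects)
sigma2 = 1.0

mu12 = mu + sum(g[:3]) + sum(d)      # i + j = 3 -> gamma_1..gamma_3, both deltas
mu21 = mu + sum(g[:3]) + d[0]        # i + j = 3, delta_1 only
mu22 = mu + sum(g[:4]) + sum(d)      # i + j = 4 -> gamma_1..gamma_4, both deltas

X12 = math.exp(mu12 + sigma2 / 2)
X21 = math.exp(mu21 + sigma2 / 2)
X22 = math.exp(mu22 + sigma2 / 2)
```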
Solution 9.9: [IBNR3Y, Exercise] Please see below the R codes with results.
# > summary(ResultMCL)
# $ByOrigin
# Latest Dev.To.Date Ultimate IBNR Mack.S.E CV(IBNR)
# 0 886334 1.0000000 886334.0 0.0000 0.000 NaN
# 1 982148 0.9992023 982932.1 784.0531 916.992 1.1695534
# 2 1075537 0.9966965 1079101.8 3564.7739 1327.393 0.3723639
# 3 1138375 0.9927425 1146697.2 8322.1732 1666.579 0.2002577
# 4 1226650 0.9843548 1246146.2 19496.1724 2945.469 0.1510794
# 5 1324732 0.9633905 1375072.8 50340.7522 7486.013 0.1487068
# 6 1320130 0.9164206 1440528.5 120398.4954 19876.901 0.1650926
# 7 1185300 0.8284440 1430754.6 245454.5883 34104.399 0.1389438
# 8 966162 0.6636149 1455907.7 489745.7092 63485.591 0.1296297
# 9 542021 0.3454994 1568804.4 1026783.4153 117999.863 0.1149219
#
# $Totals
# Totals
# Latest: 1.064739e+07
# Dev: 8.442082e-01
# Ultimate: 1.261228e+07
# IBNR: 1.964890e+06
# Mack S.E.: 1.491160e+05
# CV(IBNR): 7.589026e-02
# > ResultMCL$Mack.ProcessRisk[,ncol(USAApaid)]
# 0 1 2 3 4 5
# 0.0000 631.4351 969.2396 1269.0693 2450.1816 6582.9597
# 6 7 8 9
# 17940.2609 31237.4293 58835.1132 109898.2894
ResultMCL$Total.ProcessRisk[ncol(USAApaid)]
# > ResultMCL$Total.ProcessRisk[ncol(USAApaid)]
# 10
# 129958.2
ResultMCL$Mack.ParameterRisk[,ncol(USAApaid)]
# > ResultMCL$Mack.ParameterRisk[,ncol(USAApaid)]
# 0 1 2 3 4 5
# 0.0000 664.9542 906.9437 1080.2545 1634.7471 3564.4124
# 6 7 8 9
# 8557.9334 13686.9669 23850.5710 42968.9854
ResultMCL$Total.ParameterRisk[ncol(USAApaid)]
# > ResultMCL$Total.ParameterRisk[ncol(USAApaid)]
# [1] 73119.55
# 1-year CDR
CDR(ResultMCL)
# > CDR(ResultMCL)
# IBNR CDR(1)S.E. Mack.S.E.
# 0 0.0000 0.000 0.000
# 1 784.0531 916.992 916.992
# 2 3564.7739 1035.163 1327.393
# 3 8322.1732 1134.252 1666.579
# 4 19496.1724 2478.155 2945.469
# 5 50340.7522 6874.055 7486.013
# 6 120398.4954 18413.859 19876.901
# 7 245454.5883 28014.262 34104.399
# 8 489745.7092 53622.597 63485.591
# 9 1026783.4153 98121.714 117999.863
# Total 1964890.1331 125036.244 149116.032
# > summary(ResultPoisson)
# Latest Dev.To.Date Ultimate IBNR S.E CV
# 1 982148 0.9992024 982932 784 2330.911 2.97310136
# 2 1075537 0.9966963 1079102 3565 4509.067 1.26481552
# 3 1138375 0.9927426 1146697 8322 6562.979 0.78863004
# 4 1226650 0.9843550 1246146 19496 9699.278 0.49750094
One can see that the reserve estimates obtained from the Poisson model and the CL model are the same. One can prove this by setting the dispersion parameter $\phi = 1$ in the over-dispersed Poisson model and following the same steps as those in Section 9.3.2.
$$\hat{C}_{i,J}^{(t)} = E[C_{i,J} \mid \mathcal{D}_t] = E\big[E[C_{i,J} \mid C_{i,J-1}] \mid \mathcal{D}_t\big] = E[C_{i,J-1}\, f_{J-1} \mid \mathcal{D}_t] = \cdots = E\Big[C_{i,t-i} \prod_{j=t-i}^{J-1} f_j \,\Big|\, \mathcal{D}_t\Big] = C_{i,t-i} \prod_{j=t-i}^{J-1} \hat{f}_j^{(t)} \qquad (9.11)$$
1. A higher business volume tends to result in lower uncertainty (measured by both total
msep1/2 and CDR msep1/2 ) as a percentage of reserves. This can be explained by higher
diversification of claims associated with a higher business volume.
2. The ratio of the one-year claims development result uncertainty over the total run-off uncertainty stays relatively stable across companies. This is because the ratio depends on the nature of the business, and here we are concerned with the same business segment of different companies. This shows that knowing the next diagonal releases a major part of the claims run-off risk, that is, around 80% of the total run-off uncertainty.
(This is similar to Exercise 9.18)
Module 10
                 states
decision     θ1     θ2     θ3
d1           14     12     13
d2           13     15     14
d3           11     15      5
Exercise 10.2: [DnG2, Solution] [Decisions & Games notes, exercise # 3] The profit per client-day made by a privately owned health center depends on the variable costs involved. Variable costs, over which the owner of the health center has no control, take one of three levels: $\theta_1$ = high, $\theta_2$ = most likely, and $\theta_3$ = low. The owner has to decide at what level to set the number of client-days for the coming year. Client-days can be either $d_1 = 16$, $d_2 = 13.4$, or $d_3 = 10$ (each in 000s). The profit (in $) per client-day is as follows:

                 states
decision     θ1     θ2     θ3
d1           85     95     110
d2           105    115    130
d3           125    135    150
1. Determine the Bayes criterion solution based on the annual profits, given the probability distribution $P(\theta_1) = 0.1$, $P(\theta_2) = 0.6$, and $P(\theta_3) = 0.3$.
2. Determine both the minimax regret solution and the maximin solution to this problem.
Exercise 10.3: [DnG4, Solution] [Decisions & Games notes, exercise # 5] A firm is contem-
plating three investment alternatives: stocks, bonds, and a savings account, involving three
potential economic conditions: accelerated, normal, or slow growth. Each condition has a probability of occurrence: P(accelerated growth) = 0.2, P(normal growth) = 0.5, and P(slow growth) = 0.3. It is assumed that the decision maker, who has $100,000 to invest, wishes to invest all the funds in a single investment class. The annual returns ($) yielded from the stocks, bonds, and savings account are as follows:
                          Economic Conditions
Investment          accelerated    normal     slow
Alternative         growth         growth     growth
Stocks              20,000         13,000     -8,000
Bonds               16,000         12,000      2,000
Savings             10,000         10,000     10,000
2. Determine both the minimax regret solution and maximin solution to this problem.
3. Explain briefly when Bayesian decision analysis (i.e. Bayes rule) can be used.
Exercise 10.4: [DnG3, Solution] [Decisions & Games notes, exercise # 4] Consider the two-person zero-sum game with the following payoff matrix (payments from Player B to Player A):

                         Player B strategies
                        x       y       z
Player A     1         250     300     150
strategies   2          50     165     125
             3         100     275     225
1. Using the rule of dominance, reduce the payoff matrix to a 2-by-2 matrix.
2. Solve algebraically for the mixed-strategy probabilities for players A and B and determine
the expected gain for player A and the expected loss for player B. Discuss the meaning
of this solution value.
10.3 Solutions
Solution 10.1: [DnG1, Exercise] Decisions & Games, Exercise # 2: Solution from the Institute. First, check the table below:

                      states
decision    p1 = 0.25   p2 = 0.25   p3 = 0.5                 expected
               θ1          θ2          θ3       maximum        loss
d1             14          12          13         14           13
d2             13          15          14         15           14
d3             11          15           5         15            9
Thus, the minimax solution is $d_1$. The Bayes criterion solution is $d_3$, since it gives the smallest expected loss. [Refer to the bold values.]
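These conclusions can be checked mechanically, using the loss matrix and probabilities from the table above:

```python
# Minimax and Bayes solutions for the loss table of Solution 10.1.
losses = {"d1": [14, 12, 13], "d2": [13, 15, 14], "d3": [11, 15, 5]}
p = [0.25, 0.25, 0.5]

minimax = min(losses, key=lambda d: max(losses[d]))            # smallest worst-case loss
expected = {d: sum(pi * li for pi, li in zip(p, losses[d])) for d in losses}
bayes = min(expected, key=expected.get)                        # smallest expected loss
print(minimax, bayes)    # d1 d3
```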
Solution 10.2: [DnG2, Exercise] Decisions & Games, Exercise # 3: Solution from the Institute. First convert the table into annual profits (in $000) as follows:

                       annual profits
decision    p1 = 0.1   p2 = 0.6   p3 = 0.3    maximum    minimum    expected
               θ1         θ2         θ3        regret     profit     profit
d1            1360       1520       1760        47        1360       1576
d2            1407       1541       1742        18        1407       1587.9
d3            1250       1350       1500        260       1250       1385
Therefore, the Bayes criterion solution is to choose $d_2$, as it gives the largest expected profit. The minimax regret solution is to choose $d_2$ (smallest maximum regret), and the maximin solution is also to choose $d_2$ (largest minimum profit).
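The annual-profit figures and the three criteria can be reproduced as follows (client-days in 000s, as in the exercise):

```python
# Decision criteria for Solution 10.2: profit in $000 = client-days (000) x $/day.
days = {"d1": 16, "d2": 13.4, "d3": 10}
per_day = {"d1": [85, 95, 110], "d2": [105, 115, 130], "d3": [125, 135, 150]}
p = [0.1, 0.6, 0.3]

profit = {d: [days[d] * r for r in per_day[d]] for d in days}
expected = {d: sum(pi * v for pi, v in zip(p, profit[d])) for d in days}
bayes = max(expected, key=expected.get)

# regret per state, then minimax-regret decision
best = [max(profit[d][s] for d in profit) for s in range(3)]
max_regret = {d: max(best[s] - profit[d][s] for s in range(3)) for d in profit}
minimax_regret = min(max_regret, key=max_regret.get)

maximin = max(profit, key=lambda d: min(profit[d]))
print(bayes, minimax_regret, maximin)    # d2 d2 d2
```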
Solution 10.3: [DnG4, Exercise] Decisions & Games, Exercise # 5: Jiwook's solution, modified by Emil.
3. Bayes rule can be used when we want to revise probabilities of potential states of nature
based on additional information, experiments and personal judgments.
Solution 10.4: [DnG3, Exercise] Decisions & Games, Exercise # 4: Jiwook's solutions.

1. Using the rule of dominance, the payoff matrix reduces to:

                    Player B
                    x       z
Player A     1     250     150
             3     100     225
2. If Player B selects strategy x, the possible payoffs for Player A are 250 and 100; if Player B selects strategy z, they are 150 and 225. Therefore, if Player A selects strategy 1 with probability $p$, Player A's expected gain is $250p + 100(1-p)$ against x and $150p + 225(1-p)$ against z. Equating these two expected gains gives $p = 5/9$. Similarly, if Player B selects x with probability $q$, equating $250q + 150(1-q) = 100q + 225(1-q)$ gives $q = 1/3$. The expected gain for Player A, equal to the expected loss for Player B, is then $550/3 \approx 183.33$ per play: with these mixed strategies neither player can improve unilaterally, so this is the value of the game.
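The remaining algebra can be solved exactly with rational arithmetic. This sketch solves the 2-by-2 game left after the dominance reduction in part 1.

```python
from fractions import Fraction

# Mixed-strategy solution of the reduced 2x2 game (rows 1 and 3, columns x and z).
a11, a12 = Fraction(250), Fraction(150)   # Player A strategy 1 vs x, z
a21, a22 = Fraction(100), Fraction(225)   # Player A strategy 3 vs x, z

# Player A: choose p with a11*p + a21*(1-p) = a12*p + a22*(1-p)
p = (a22 - a21) / (a11 - a21 + a22 - a12)
# Player B: choose q with a11*q + a12*(1-q) = a21*q + a22*(1-q)
q = (a22 - a12) / (a11 - a12 + a22 - a21)
value = a11 * p + a21 * (1 - p)   # expected gain for A = expected loss for B
print(p, q, value)                # 5/9 1/3 550/3
```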