STAT 484 Actuarial Science: Models
INTRODUCTION
Who is an Actuary?
An actuary is a person who analyzes financial risks for different types of insurance and pension programs. The word “actuary” comes from the Latin for account keeper, deriving from actus, “public business.”
If you are seriously considering becoming an actuary, then visit
www.beanactuary.org
In the United States, actuaries have two professional organizations: the Society of Actuaries (SOA) and the Casualty Actuarial Society (CAS).
What is the Society of Actuaries (SOA)?
SOA members work in life and health insurance and in retirement programs.
Check their website at www.soa.org
What is the Casualty Actuarial Society (CAS)?
CAS members work in property (automobile, homeowner) and casualty insurance (workers’ compensation). Check their website at www.casact.org
Historical Background
The first actuary to practice in North America was Jacob Shoemaker of Philadelphia, a key organizer in 1809 of the Pennsylvania Company for Insurances on Lives and Granting Annuities. The Actuarial Society of America came into being in New York in 1889. In 1909, actuaries of life companies in the midwestern and southern United States organized the American Institute of Actuaries, with headquarters in Chicago. In 1914, the actuaries of property and liability companies formed the Casualty Actuarial and Statistical Society of America, which in 1921 was renamed the Casualty Actuarial Society. In 1949, the Society of Actuaries was created as the result of the merger between the Actuarial Society of America and the American Institute of Actuaries.
1889: Actuarial Society of America
1909: American Institute of Actuaries
1914: Casualty Actuarial and Statistical Society of America
1921: renamed the Casualty Actuarial Society
1949: Society of Actuaries


The information below is taken from The Society of Actuaries Basic Education Catalog, Spring 2006.

Anyone pursuing an actuarial career may apply to the SOA and remains a member as long as the dues are paid. Anyone who has passed a certain number of actuarial exams and met some additional requirements becomes an Associate of the Society of Actuaries (ASA); after passing more exams and meeting further requirements, one becomes a Fellow of the Society of Actuaries (FSA).

What are Actuarial Exams?

Historical Note: In 1896, after some hesitation, an examination system was adopted. The first Fellow by examination qualified in 1900.

SOA currently offers nine actuarial exams.

Exam 1: Probability (same as SOA Exam P)
Exam 2: Financial Mathematics (same as SOA Exam FM)
Exam 3: Actuarial Models: Segment 3F, Financial Economics (same as SOA Exam MFE), and Segment 3L, Life Contingencies and Statistics
Exam 4: Construction and Evaluation of Actuarial Models (same as SOA Exam C)
Exam 5: Introduction to Property and Casualty Insurance and Ratemaking
Exam 6: Reserving, Insurance Accounting Principles, Reinsurance, and Enterprise Risk Management
Exam 7: Law, Regulation, Government and Industry Insurance Programs, and Financial Reporting and Taxation
Exam 8: Investments and Financial Analysis
Exam 9: Advanced Ratemaking, Rate of Return, and Individual Risk Rating Plans

In this course we review the material for Exam 1/P and learn roughly 25% of what is needed to pass Exam 3, Segment L. A disclaimer is in order here. Taking this course doesn't guarantee that you pass these exams. The course prepares you to start preparing for the exams.

Risk and Insurance

Reference: Risk and Insurance, Study Notes, SOA web site, code P-21-05.

Definitions. People need economic security (food, clothing, shelter, medical care, etc.). The possibility of losing this economic security is called economic risk or simply risk. This risk causes many people to buy insurance. Insurance is an agreement where, for a stipulated payment called the premium, one party, called the insurer, agrees to pay the other, called the policyholder (or insured), or his designated beneficiary, a defined amount called the claim payment or benefit upon the occurrence of a specific loss. This defined claim payment can be a fixed amount or can reimburse all or a part of the loss that occurred. The insurance contract is called the policy.

Only a small percentage of policyholders suffer a loss. Their losses are paid out of the premiums collected from the pool of policyholders. Thus, the entire pool compensates the unfortunate few. Each policyholder exchanges an unknown loss for the payment of a known premium.

The insurer considers the losses expected for the insurance pool and the potential for variation in order to charge premiums that, in total, will be sufficient to cover all of the projected claim payments for the insurance pool. The insurer may restrict the particular kinds of losses covered. A peril is a potential cause of a loss. Perils may include fires, hurricanes, theft, or heart attacks. The insurance policy may define specific perils that are covered, or it may cover all perils with certain exclusions such as, for example, property loss as a result of a war or loss of life due to suicide.

Losses depend on two random variables. The first is the number of losses that will occur in a specified period. This random variable is called the frequency of loss, and its probability distribution is called the frequency distribution. The second random variable is the amount of the loss, given that a loss has occurred. This random variable is called the severity, and its distribution is called the severity distribution.
By combining the frequency and the severity distributions, one can determine the overall loss distribution.

Example. Suppose a car owner will have no accidents in a year with probability 0.8 and will have one accident with probability 0.2. This is the frequency distribution. Suppose also that with probability 0.5 the car will need repairs costing $500, with probability 0.4 the repairs will cost $5,000, and with probability 0.1 the car will need to be replaced at a cost of $25,000. This is the severity distribution. Combining the two distributions, we have that the distribution of X, the total loss due to an accident, is

f(x) =
  0.8, if x = 0,
  (0.2)(0.5) = 0.10, if x = 500,
  (0.2)(0.4) = 0.08, if x = 5,000,
  (0.2)(0.1) = 0.02, if x = 25,000.
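The combination above can be checked in a few lines of Python (an illustration added to these notes, which contain no code; the numbers are those of the example):

```python
# Combine the frequency and severity distributions of the example into the
# distribution of the total annual loss X, then compute the expected loss.

freq = {0: 0.8, 1: 0.2}                         # number of accidents in a year
severity = {500: 0.5, 5_000: 0.4, 25_000: 0.1}  # repair cost given an accident

loss = {0: freq[0]}            # P(X = 0): no accident occurs
for cost, p in severity.items():
    loss[cost] = freq[1] * p   # one accident of the given severity

expected_loss = sum(x * p for x, p in loss.items())

for x in sorted(loss):
    print(x, round(loss[x], 4))   # 0 0.8 / 500 0.1 / 5000 0.08 / 25000 0.02
print(round(expected_loss, 2))    # 950.0
```

The expected total loss of $950 is the expected amount of claim payments for this risk.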

Definitions (continued). The expected amount of claim payments is called the net premium or benefit premium. The gross premium is the total of the net premium and the amount to cover the insurer's expenses for selling and servicing the policy, including some profit. Policyholders are willing to pay a gross premium for an insurance policy, which exceeds the expected amount of their losses, to substitute the fixed premium payment for a potentially enormous payment if they are not insured.

What kind of risk is insurable?

An insurable risk must satisfy the following criteria:

  1. The potential loss must be significant so that substituting the premium payment for an unknown economic outcome (given no insurance) is desirable.
  2. The loss and its economic value must be well-defined and out of the policyholder's control. For example, the policyholder should not be allowed to cause a loss or to lie about its value.
  3. Covered losses should be reasonably independent. For example, an insurer should not insure all the stores in one area against fire.

Examples of Insurance.

  1. the auto liability insurance – will pay benefits to the other party if a policyholder causes a car accident.
  2. the auto collision insurance – will pay benefits to a policyholder in case of a car accident.
  3. the auto insurance against damages other than accident – will pay benefits to a policyholder in case the car is damaged from hailstones, tornado, vandalism, flood, earthquake, etc.
  4. the homeowners insurance – will pay benefits to a policyholder towards repairing or replacing the house in case of damage from a covered peril, such as flood, earthquake, landslide, tornado, etc. The contents of the house may also be covered in case of damage or theft.
  5. the health insurance (medical, dental, vision, etc.) – will cover some or all health expenses of a policyholder.
  6. the life insurance – will pay benefits to a designated beneficiary in case of a policyholder's death.
  7. the disability income insurance – will replace all or a portion of a policyholder's income in case of disability.
  8. the life annuity – will make regular payments to a policyholder after retirement until death.

Limits on Policy Benefits.

Definition. If an insurance does not reimburse the entire loss, the policyholder must cover part of the loss. This type of limit on policy benefits is

  1. The additive law for two events: P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
  2. The additive law for three events: P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C).
  3. The complement rule: P(Ac) = 1 − P(A).
  4. If A1, ..., An are disjoint, then P(A1 ∪ · · · ∪ An) = P(A1) + · · · + P(An).

  5. DeMorgan's laws: P((A ∩ B)c) = P(Ac ∪ Bc), P((A ∪ B)c) = P(Ac ∩ Bc).

Definition. Two events A and B are independent iff P(A ∩ B) = P(A) P(B).
Definition. If events A1, A2, ... are independent, then P(A1 ∩ A2 ∩ · · ·) = P(A1) P(A2) · · ·.
Definition. The conditional probability of A given B is

P(A | B) = P(A ∩ B) / P(B).

Useful Formula. P(A ∩ B) = P(A | B) P(B). This is the multiplicative law.
Definition. A set {A1, A2, ..., An} is a partition of the sample space S if (1) the Ai are mutually exclusive, and (2) A1 ∪ · · · ∪ An = S. Draw Venn Diagram.
The Bayes Theorem. Let {A1, A2, ..., An} be a partition of S. Given that an event B has occurred, the updated probability of Ai is

P(Ai | B) = P(B | Ai) P(Ai) / [P(B | A1) P(A1) + · · · + P(B | An) P(An)].
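A small numeric illustration of the Bayes theorem above, in Python. The partition and the numbers (a rare condition and an imperfect test) are hypothetical, chosen only to exercise the formula:

```python
# Posterior probabilities P(Ai | B) from priors P(Ai) and likelihoods P(B | Ai),
# following the Bayes formula with the law of total probability in the denominator.

def bayes(priors, likelihoods):
    """Return the posteriors P(Ai | B) for a partition A1, ..., An."""
    joint = [p * l for p, l in zip(priors, likelihoods)]   # P(B | Ai) P(Ai)
    total = sum(joint)                                     # P(B)
    return [j / total for j in joint]

# A1 = "has the condition" (prior 0.01), A2 = "does not" (prior 0.99);
# B = "test positive", with P(B | A1) = 0.95 and P(B | A2) = 0.05.
posterior = bayes([0.01, 0.99], [0.95, 0.05])
print(round(posterior[0], 4))  # 0.161, the updated probability of A1 given B
```

Even with a 95%-accurate test, the posterior stays modest because the prior P(A1) is small.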

Definition. A combination of objects is an unordered arrangement of the objects.
Definition. The number of combinations of k objects chosen from n distinct objects is given by the binomial coefficient

C(n, k) = n! / (k! (n − k)!).

Definition. The number of ways to separate n distinct objects into k groups of sizes n1, ..., nk, where n1 + · · · + nk = n, is given by the formula

C(n, n1) C(n − n1, n2) · · · C(nk, nk) = n! / (n1! · · · nk!).
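Both counting formulas can be verified with Python's standard library (an added illustration; `math.comb` computes the binomial coefficient C(n, k)):

```python
import math

# The multinomial coefficient computed two ways: as the telescoping product of
# binomial coefficients from the formula above, and directly as n!/(n1!...nk!).

def multinomial(sizes):
    """Number of ways to split sum(sizes) distinct objects into groups of the given sizes."""
    result, remaining = 1, sum(sizes)
    for s in sizes:
        result *= math.comb(remaining, s)  # choose the next group from what is left
        remaining -= s
    return result

print(math.comb(5, 2))         # 5!/(2! 3!) = 10
print(multinomial([2, 2, 1]))  # 5!/(2! 2! 1!) = 30
print(math.factorial(5) // (math.factorial(2) ** 2 * math.factorial(1)))  # 30 again
```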

3.1 – 3.9 Discrete Probability Distributions.

Definition. A random variable is a function that assigns a real number to every outcome in the sample space.
Definition. A discrete random variable assumes a finite or a countably infinite number of values.
Definition. The probability distribution of a discrete random variable X is the list of its values with the respective probabilities P(X = x) = p(x). The function p(x) is called the probability function. It has the properties: (i) p(x) ≥ 0 for all x, and (ii) Σx p(x) = 1.
Definition. The mean (or expected value or expectation or average) of a discrete random variable X is E(X) = Σx x p(x).
Useful Formulas.

  1. E(aX + b) = a E(X) + b.
  2. E(g(X)) = Σx g(x) p(x).
  3. E(f(X1) + g(X2)) = E(f(X1)) + E(g(X2)).

Definition. The 100αth percentile of a distribution of a random variable X is the value x such that P(X ≤ x) = α.
Definition. The first quartile Q1 of a distribution of X satisfies P(X < Q1) = .25.
Definition. The median M of a distribution of X satisfies P(X < M) = 0.5 = P(X > M).
Definition. The third quartile Q3 of a distribution of X satisfies P(X < Q3) = .75.
Definition. The mode of a distribution is a local maximum of the probability function (or density).
Definition. The kth moment of a random variable X is E(X^k).
Definition. The variance of a random variable X is Var(X) = E[(X − E(X))^2].
Useful Formulas.
  1. Var(X) = E(X^2) − (E(X))^2.
  2. Var(aX + b) = a^2 Var(X).
  3. If X1 and X2 are independent, then E(f(X1) g(X2)) = E(f(X1)) E(g(X2)).
  4. If X1 and X2 are independent, then Var(f(X1) + g(X2)) = Var(f(X1)) + Var(g(X2)).

Definition. The standard deviation of a random variable X is σ = √Var(X).
Definition. The interquartile range of a distribution is Q3 − Q1, the difference between the third and the first quartiles.
Definition. The moment generating function (m.g.f.) of a random variable X is m(t) = mX(t) = E(e^(tX)).
Useful Formulas.
  1. E(X) = m′(0), E(X^2) = m′′(0). In general, E(X^n) = m^(n)(0).
  2. If X1, ..., Xn are independent and X = X1 + · · · + Xn, then mX(t) = mX1(t) · · · mXn(t).

Certain Discrete Distributions.

Binomial: X ∼ Bi(n, p). P(X = x) = C(n, x) p^x (1 − p)^(n − x), x = 0, ..., n. E(X) = np, Var(X) = np(1 − p), m(t) = (p e^t + 1 − p)^n.

Geometric: X ∼ Geom(p). P(X = x) = p (1 − p)^(x − 1), x = 1, 2, .... E(X) = 1/p, Var(X) = (1 − p)/p^2, m(t) = p e^t / (1 − (1 − p) e^t).

Negative Binomial: X ∼ NB(p, r). P(X = x) = C(x − 1, r − 1) p^r (1 − p)^(x − r), x = r, r + 1, .... E(X) = r/p, Var(X) = r(1 − p)/p^2, m(t) = [p e^t / (1 − (1 − p) e^t)]^r.

Hypergeometric: X ∼ HG(N, n, k). P(X = x) = C(k, x) C(N − k, n − x) / C(N, n), x = 0, ..., n. E(X) = n k/N, Var(X) = n (k/N)(1 − k/N)(N − n)/(N − 1); m(t) has no closed form.

Poisson: X ∼ Poi(λ). P(X = x) = λ^x e^(−λ) / x!, x = 0, 1, .... E(X) = λ, Var(X) = λ, m(t) = e^(λ(e^t − 1)).
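As a numerical check of the table (an added illustration), the mean and variance of two of these distributions can be computed directly from the probability functions:

```python
import math

# Compute E(X) and Var(X) from the probability functions and compare with the
# closed forms: np and np(1 - p) for the binomial, lambda and lambda for the
# Poisson (whose infinite support is truncated far enough for a negligible tail).

def moments(pmf):
    mean = sum(x * p for x, p in pmf)
    var = sum(x ** 2 * p for x, p in pmf) - mean ** 2
    return mean, var

n, p = 10, 0.3
binom = [(x, math.comb(n, x) * p ** x * (1 - p) ** (n - x)) for x in range(n + 1)]

lam = 2.5
poisson = [(x, lam ** x * math.exp(-lam) / math.factorial(x)) for x in range(60)]

mb, vb = moments(binom)
mp, vp = moments(poisson)
print(round(mb, 6), round(vb, 6))  # 3.0 2.1
print(round(mp, 6), round(vp, 6))  # 2.5 2.5
```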

4.2 – 4.6, 4.9 Continuous Probability Distributions.

Normal: X ∼ N(μ, σ^2). f(x) = (1/√(2πσ^2)) e^(−(x − μ)^2 / (2σ^2)), −∞ < x < ∞. E(X) = μ, Var(X) = σ^2, m(t) = exp(μt + σ^2 t^2 / 2).

Lognormal: If log X ∼ N(μ, σ^2), then X has a lognormal distribution. The density of the lognormal distribution is f(x) = (1/(√(2πσ^2) x)) e^(−(log x − μ)^2 / (2σ^2)), x > 0. E(X^n) = e^(nμ + n^2 σ^2 / 2); m(t) does not exist.

Chi-squared: X ∼ χ^2(p). f(x) = (1/(Γ(p/2) 2^(p/2))) x^(p/2 − 1) e^(−x/2), x > 0. E(X) = p, Var(X) = 2p, m(t) = (1 − 2t)^(−p/2), t < 1/2.
Useful Facts:
  1. If Zi ∼ N(0, 1) i.i.d., i = 1, ..., n, then X = Z1^2 + · · · + Zn^2 ∼ χ^2(n), the chi-squared distribution with n degrees of freedom.
  2. χ^2(n) is Gamma(n/2, 1/2).

Beta: X ∼ Beta(α, β). f(x) = (Γ(α + β)/(Γ(α)Γ(β))) x^(α − 1) (1 − x)^(β − 1), 0 < x < 1. E(X) = α/(α + β), Var(X) = αβ/((α + β)^2 (α + β + 1)); m(t) has no simple closed form.

Pareto: X ∼ Pareto(α, β). f(x) = β α^β / x^(β + 1), x > α, α > 0, β > 0. E(X) = βα/(β − 1), β > 1; Var(X) = βα^2 / ((β − 1)^2 (β − 2)), β > 2; m(t) does not exist.

Weibull: X ∼ Weibull(γ, β). f(x) = (γ/β) x^(γ − 1) e^(−x^γ / β), x > 0, γ > 0, β > 0. E(X^n) = β^(n/γ) Γ(1 + n/γ); m(t) has no simple closed form.
Useful Fact: If X ∼ Exp(β), then X^(1/γ) ∼ Weibull(γ, β).

6.4 Transformations.
Definition. Let X be a continuous random variable with p.d.f. fX and c.d.f. FX, and let g be a function. Define a transformation Y = g(X). Then the c.d.f. of Y is FY(y) = P(Y ≤ y) = P(g(X) ≤ y) = ∫_{x: g(x) ≤ y} fX(x) dx, and the p.d.f. of Y is fY(y) = FY′(y).
In the special case when g is strictly increasing, FY(y) = P(X ≤ g⁻¹(y)) = FX(g⁻¹(y)), and fY(y) = fX(g⁻¹(y)) dg⁻¹(y)/dy. When g is strictly decreasing, FY(y) = P(X ≥ g⁻¹(y)) = 1 − FX(g⁻¹(y)), and fY(y) = −fX(g⁻¹(y)) dg⁻¹(y)/dy.
In general, if g is strictly increasing or decreasing, fY(y) = fX(g⁻¹(y)) |dg⁻¹(y)/dy|.

Order Statistics.

Definition. Let X1, ..., Xn be i.i.d. random variables with c.d.f. F and p.d.f. f. Suppose n realizations of these variables are observed. The observations in increasing order, X(1), ..., X(n), are called order statistics. The p.d.f. of the ith order statistic is

fX(i)(x) = [n! / ((i − 1)! (n − i)!)] [F(x)]^(i − 1) f(x) [1 − F(x)]^(n − i).

Interesting special cases are
  1. FX(n)(x) = P(Xmax ≤ x) = P(X1 ≤ x, ..., Xn ≤ x) = [F(x)]^n, so fX(n)(x) = n [F(x)]^(n − 1) f(x).
  2. 1 − FX(1)(x) = P(Xmin ≥ x) = P(X1 ≥ x, ..., Xn ≥ x) = [1 − F(x)]^n, so fX(1)(x) = n [1 − F(x)]^(n − 1) f(x).
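The order-statistic density can be sanity-checked numerically (an added illustration, taking the uniform case where F(x) = x and f(x) = 1):

```python
import math

# For X1, ..., Xn i.i.d. Uniform(0, 1), f_(i)(x) = n!/((i-1)!(n-i)!) x^(i-1) (1-x)^(n-i).
# We verify that it integrates to 1 and that the mean of the maximum of n = 5
# uniforms is i/(n+1) = 5/6, the Beta(5, 1) mean.

def order_stat_pdf(x, i, n):
    c = math.factorial(n) / (math.factorial(i - 1) * math.factorial(n - i))
    return c * x ** (i - 1) * (1 - x) ** (n - i)

def integrate(f, a, b, steps=100_000):
    h = (b - a) / steps                       # midpoint rule
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

n, i = 5, 5                                   # the maximum of five uniforms
total = integrate(lambda x: order_stat_pdf(x, i, n), 0.0, 1.0)
mean = integrate(lambda x: x * order_stat_pdf(x, i, n), 0.0, 1.0)
print(round(total, 6), round(mean, 6))  # 1.0 0.833333
```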

5.2 – 5.8 Multivariate Probability Distributions.

Definition. Let X1 and X2 be two discrete random variables. The joint probability function of X1 and X2 is p(x1, x2) = P(X1 = x1, X2 = x2).
Definition. Let Y1 and Y2 be two continuous random variables. The joint c.d.f. is F(y1, y2) = P(Y1 ≤ y1, Y2 ≤ y2). The joint density is

f(y1, y2) = ∂^2 F(y1, y2) / (∂y1 ∂y2),

or, equivalently, F(y1, y2) = ∫_{−∞}^{y1} ∫_{−∞}^{y2} f(u, v) dv du.
Definition. The marginal probability distribution of X1 is p1(x1) = Σ_{x2} p(x1, x2); that of X2 is p2(x2) = Σ_{x1} p(x1, x2).
Definition. The marginal density of Y1 is f1(y1) = ∫_{−∞}^{∞} f(y1, y2) dy2; that of Y2 is f2(y2) = ∫_{−∞}^{∞} f(y1, y2) dy1.
Definition. The conditional probability function of X1 given that X2 = x2 is

p(x1 | x2) = p(x1, x2) / p2(x2).

Definition. The conditional density of Y1 given that Y2 = y2 is

f(y1 | y2) = f(y1, y2) / f2(y2).

Useful Formula.

P(Y1 ≤ y1 | Y2 = y2) = ∫_{−∞}^{y1} f(u, y2) du / ∫_{−∞}^{∞} f(u, y2) du.

Definition. The joint m.g.f. of X and Y is m(t, s) = E(e^(tX + sY)).

Definition. The covariance between two random variables X and Y is

Cov(X, Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y).

Properties.
  1. Cov(X, Y) = Cov(Y, X).
  2. Cov(aX + b, cY + d) = ac Cov(X, Y).
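Property 2 can be illustrated on simulated data (an added sketch; the identity holds exactly for the plug-in covariance E(XY) − E(X)E(Y) on any sample, so the two sides agree up to floating-point rounding):

```python
import random

# Check Cov(aX + b, cY + d) = ac Cov(X, Y) using the sample analogue of
# E(XY) - E(X)E(Y) on simulated correlated data.

def cov(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum(x * y for x, y in zip(xs, ys)) / n - mx * my

random.seed(7)
xs = [random.gauss(0, 1) for _ in range(20_000)]
ys = [x + random.gauss(0, 1) for x in xs]      # Cov(X, Y) = Var(X) = 1

a, b, c, d = 2.0, 5.0, -3.0, 1.0
lhs = cov([a * x + b for x in xs], [c * y + d for y in ys])
rhs = a * c * cov(xs, ys)
print(abs(lhs - rhs) < 1e-6)  # True
```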

to play the game is if E(W ) = 0. That is, we both expect to win nothing (a fair game). For example, if I pay you $10 with probability 3/8 and you pay me $6 with probability 5/8.

Definition. When making a decision in a situation that involves randomness, one approach is to replace the distribution of possible outcomes by the expected value of the outcomes. This approach is called the expected value principle.
Definition. In economics, the expected value of random prospects with monetary payments is called the fair value (or actuarial value) of the prospect.

We might expect a similar principle to be applicable in the insurance business. It turns out that this is not always so. Consider the following situation. Suppose you own w = $100, which you might lose with probability p = 0.01. If you lose your wealth, an insurer offers to reimburse you in full (called complete coverage or complete insurance). How much would you be willing to pay for this protection?
Solution: Notice that if not insured, you expect to lose wp = (100)(.01) = $1. Suppose you are willing to pay $x for the insurance. The insurer's gain then equals x with probability 1 − p = .99 and x − 100 = x − w with probability p = .01. Thus, the insurer's expected gain is x(1 − p) + (x − w)p = x − wp. If it were to be a fair game, x − wp = 0, or x = wp; that is, you should be willing to pay the amount of your expected loss, wp = $1. Would you be willing to pay more than $1? Most likely not, if your wealth is just $100. But suppose your wealth is one million dollars. Then you should be willing to pay the insurer the amount of the expected loss, (1,000,000)(.01) = $10,000. Would you be willing to pay more than that? Most likely yes, because if not insured, there is a chance of a catastrophic loss. But how much more are you willing to pay? That depends on the person.

Definition. The value (or utility) that a particular decision-maker attaches to wealth of amount w can be specified in the form of a function u(w), called a utility function.

How to determine values of an individual’s utility function?

Example. Suppose a decision-maker has wealth w = $20,000, so his utility function is defined on the interval [0, 20,000]. We choose the endpoints of his utility function arbitrarily, for example, u(0) = −1 and u(20,000) = 0. To determine the values of the utility function at intermediate points, proceed as follows. Ask the decision-maker the following question: Suppose you can lose your $20,000 with probability 0.5. How much would you be willing to pay an insurer for a complete insurance against the loss? That is, define the maximum amount G such that u(20,000 − G) = (0.5) u(20,000) + (0.5) u(0) = (0.5)(0) + (0.5)(−1) = −0.5. Here the left-hand side represents the utility of the insured certain amount $20,000 − G, while the right-hand side is the expected utility of the uninsured wealth. Suppose the decision-maker sets G = $12,000. Therefore, u(8,000) = −0.5. Notice that the decision-maker is willing to pay more for the insurance than the expected loss of (0.5)(20,000) + (0.5)(0) = 10,000.

To determine the other values of the utility function, ask the question: What is the maximum amount you would pay for a complete insurance against a situation that could leave you with wealth w2 with probability p, or at reduced wealth w1 with probability 1 − p? That is, we ask the decision-maker to specify G such that u(w2 − G) = p u(w2) + (1 − p) u(w1). For example, G = 7,500 in the situation u(20,000 − G) = (0.5) u(20,000) + (0.5) u(8,000) = (0.5)(0) + (0.5)(−0.5) = −0.25. This defines u(12,500) = −0.25. Notice that G again exceeds the expected loss of (0.5)(0) + (0.5)(12,000) = 6,000. Picture.

The main theorem of the utility theory states that a decision-maker prefers the distribution of X to the distribution of Y , if E(u(X)) > E(u(Y )) and is indifferent between the two distributions, if E(u(X)) = E(u(Y )).

1.3. Insurance and Utility.

We apply the utility theory to the decision problems faced by a property owner. Suppose the random loss X to his property has a known distribution. The owner will be indifferent between paying a fixed amount G to an insurer, or assuming the risk himself. That is, u(w − G) = E(u(w − X)). Remember that to make a profit, an insurer must charge a premium that exceeds the expected loss, that is, it should be true that G > E(X) = μ. Which utility functions satisfy this property?

Proposition. If u′(w) > 0 and u′′(w) < 0, then G > μ.
Proof: We make use of Jensen's inequality, which states that if u′′(w) < 0, then E(u(X)) ≤ u(E(X)), with exact equality iff X = μ, a constant. By this inequality, u(w − G) = E(u(w − X)) ≤ u(E(w − X)) = u(w − μ). Now, since u′(w) > 0, u is an increasing function, and therefore w − G ≤ w − μ, or μ ≤ G, with μ < G unless X is a constant. ∎
Definition. A decision-maker with utility function u(w) is risk averse if u′′(w) < 0, and a risk lover if u′′(w) > 0.
Remark. According to the proposition, for risk-averse people G ≥ μ, and so they are able to get complete insurance. It can be shown that for risk lovers G ≤ μ, and so they won't be insured.

Three functions are commonly used to model utility functions.
Model 1. An exponential utility function is of the form u(w) = −e^(−αw), where w > 0 and α > 0 is a constant. It has the following properties: (1) u′(w) = α e^(−αw) > 0; (2) u′′(w) = −α^2 e^(−αw) < 0; (3) G doesn't depend on w. To see this, write

−e^(−α(w − G)) = E[−e^(−α(w − X))] = −e^(−αw) MX(α),

where MX is the m.g.f. of X. From here, G = ln MX(α)/α.
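The premium G = ln MX(α)/α is easy to evaluate once a loss distribution is fixed. As an illustration (our choice, not from the notes), take a normal loss X ∼ N(μ, σ^2), whose m.g.f. is exp(μt + σ^2 t^2 / 2); then G = μ + σ^2 α / 2, the expected loss plus a risk loading that grows with the risk-aversion parameter α:

```python
import math

# Exponential-utility premium G = ln(M_X(alpha)) / alpha for a normal loss.
# The answer mu + sigma^2 * alpha / 2 exceeds the expected loss mu, consistent
# with the proposition that a risk-averse decision-maker pays G > mu.

def exponential_utility_premium(mgf, alpha):
    return math.log(mgf(alpha)) / alpha

mu, sigma, alpha = 100.0, 30.0, 0.01
normal_mgf = lambda t: math.exp(mu * t + sigma ** 2 * t ** 2 / 2)

G = exponential_utility_premium(normal_mgf, alpha)
print(round(G, 6))  # 104.5 = 100 + 900 * 0.01 / 2
```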

1.5. Optimal Insurance.

Theorem 1.5.1. Suppose a decision-maker (1) has wealth w, (2) has a utility function u(w) such that u′(w) > 0 and u′′(w) < 0 (is risk averse), (3) faces a random loss X, and (4) is willing to spend the amount P purchasing insurance. Suppose also that the insurance market offers, for the payment P, all feasible insurance contracts of the form I(x), where 0 ≤ I(x) ≤ x (avoiding an incentive to incur the loss), with expected payoff E(I(X)) = β. Then, to maximize the expected utility, the decision-maker should choose the insurance policy

Id*(x) = 0 if x < d*,  x − d* if x ≥ d*,

where d* is the unique solution of E(Id(X)) = ∫_{d}^{∞} (x − d) f(x) dx = β.
Definition. A feasible insurance contract of the form

Id(x) = 0 if x < d,  x − d if x ≥ d

pays losses above the deductible amount d. This type of contract is called stop-loss or excess-of-loss insurance.

Example. Assume w = 100 and X ∼ Unif(0, 100). Then, by the theorem, d* solves

β = ∫_{d}^{100} (x − d) (1/100) dx = d^2/200 − d + 50.

That is, d* solves the quadratic equation d^2 − 200d + 10,000 − 200β = 0, or d* = 100 − √(200β). For example,

Expected payoff β   Deductible d*
 0                  100
10                   55.3
20                   36.8
30                   22.5
40                   10.6
45                    5.1
50                    0

Note that there is no need to specify u(w) and P.
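The deductible table can be reproduced directly from the closed form (an added check):

```python
import math

# For w = 100 and X ~ Unif(0, 100), the condition beta = (100 - d)^2 / 200
# inverts to d* = 100 - sqrt(200 * beta).

def deductible(beta):
    return 100 - math.sqrt(200 * beta)

for beta in [0, 10, 20, 30, 40, 45, 50]:
    print(beta, round(deductible(beta), 1))
# 0 100.0 / 10 55.3 / 20 36.8 / 30 22.5 / 40 10.6 / 45 5.1 / 50 0.0
```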

2.2. Models for Individual Claim Random Variables.

We will consider three individual risk models for short time periods. These models do not take into account the inflation of money.
Model 1. In a one-year term life insurance, the insurer agrees to pay an amount b if the insured dies within a year of policy issue and to pay nothing if the insured survives the year. The probability of a claim during the year is denoted by q. The claim random variable X has distribution

P(X = x) = 1 − q if x = 0,  q if x = b,  0 otherwise.

Notice that X = bI, where I ∼ Bernoulli(q) indicates whether a death has occurred and, therefore, is called an indicator. Thus, the expected claim is E(X) = bq and Var(X) = b^2 q(1 − q).

Model 2. Consider the above model X = IB, where the claim amount B varies. Suppose that if death is accidental, the benefit amount is B = $50,000, and otherwise B = $25,000. Suppose also that the probability of an accidental death within the year is .0005, and the probability of a nonaccidental death is .002. That is,

P(I = 1, B = 50,000) = .0005,  P(I = 1, B = 25,000) = .002.

Hence, P(I = 1) = .0005 + .002 = .0025, and P(I = 0) = .9975. Therefore, the distribution of X is P(X = 0) = .9975, P(X = 25,000) = .002, P(X = 50,000) = .0005. The expectation is E(X) = $75. Also, the conditional distribution of B, given I = 1, is

P(B = 25,000 | I = 1) = P(B = 25,000, I = 1) / P(I = 1) = .002/.0025 = .8

and

P(B = 50,000 | I = 1) = P(B = 50,000, I = 1) / P(I = 1) = .0005/.0025 = .2.

This means that 20% of all payoffs are for accidental deaths, and 80% are for nonaccidental ones. The expected payoff, given that a claim occurs, is (25,000)(.8) + (50,000)(.2) = $30,000.

Model 3. Consider an automobile collision coverage above a $250 deductible up to a maximum claim of $2,000. Assume that for an individual the probability of one claim in a period is .15 and the probability of more than one claim is zero; that is, P(I = 1) = .15 and P(I = 0) = .85. Assume also that

P(B ≤ x | I = 1) = 0 if x ≤ 0;  (.9)[1 − (1 − x/2,000)^2] if 0 < x < 2,000;  1 if x ≥ 2,000.

and MS(t) = (p e^t + 1 − p)^n, and therefore S ∼ Bi(n, p). (2) Show that the sum of r independent Geom(p) random variables is NB(p, r). (3) Show that the sum of α independent Exp(β) random variables is Gamma(α, β). (4) Show that the sum of n independent Poi(λ) random variables is Poi(nλ). (5) Show that the sum of n independent N(μ, σ^2) random variables is N(nμ, nσ^2).

2.5. Applications of the Central Limit Theorem to Insurance.

Example 2.5.1. The table below gives the number of insured nk, the benefit amount bk, and the probability of claim qk, where k = 1, ..., 4.

k   nk    bk   qk    bk qk   bk^2 qk(1 − qk)
1   500   1    .02   .02     .0196
2   500   2    .02   .04     .0784
3   300   1    .10   .10     .09
4   500   2    .10   .20     .36

The life insurance company wants to collect the amount equal to the 95th percentile of the distribution of total claims S. The share of the jth insured is (1 + θ)E(Xj). The amount θE(Xj) is called the security loading, and θ is called the relative security loading. This way the company protects itself from the loss of funds due to excess claims. Find θ.
Solution: We want to find θ such that P(S ≤ (1 + θ)E(S)) = .95. We use the CLT to obtain

P(Z ≤ θE(S)/√Var(S)) = .95,  so  θE(S)/√Var(S) = 1.645.

Here E(S) = Σk nk bk qk = 160 and Var(S) = Σk nk bk^2 qk(1 − qk) = 256. Therefore, θ = 1.645 · 16/160 = .1645.
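The portfolio sums and the loading can be recomputed in a few lines (an added check of Example 2.5.1):

```python
import math

# E(S) and Var(S) from the portfolio table, then the relative security loading
# theta from the 95th-percentile condition theta * E(S) / sqrt(Var(S)) = 1.645.

portfolio = [          # (number insured n_k, benefit b_k, claim probability q_k)
    (500, 1, 0.02),
    (500, 2, 0.02),
    (300, 1, 0.10),
    (500, 2, 0.10),
]

ES = sum(n * b * q for n, b, q in portfolio)
VarS = sum(n * b ** 2 * q * (1 - q) for n, b, q in portfolio)
theta = 1.645 * math.sqrt(VarS) / ES

print(round(ES, 6), round(VarS, 6))  # 160.0 256.0
print(round(theta, 6))               # 0.1645
```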

Example 2.5.3. In a life insurance portfolio, the claim probabilities are q1 = · · · = q5 = .02 and

Benefit amount   Number insured
10,000           8,000
20,000           3,500
30,000           2,500
50,000           1,500
100,000          500

The retention limit is the amount below which this company (the ceding company) will retain the insurance and above which it will purchase reinsurance coverage from another (the reinsuring) company. Suppose the insurance company sets the retention limit at 20,000. Suppose also that reinsurance is available at a cost of .025 per unit of coverage. It is assumed that the model is a closed model; that is, the number of insured units is known and doesn't change during the covered period. (Otherwise, the model allows migration in and out of the insurance system and is called an open model.) Find the probability that the company's retained claims plus the cost of reinsurance exceed 8,250,000.
Solution: Let's work in units of $10,000. Let S be the amount of retained claims paid. The portfolio of retained business is

k   bk   nk
1   1    8,000
2   2    8,000

so E(S) = 480 and Var(S) = 784. The total coverage in the plan is (8,000)(1) + (3,500)(2) + · · · + (500)(10) = 35,000, and the retained coverage is (8,000)(1) + (8,000)(2) = 24,000. The difference, 35,000 − 24,000 = 11,000, is the reinsured amount. The reinsurance cost is (11,000)(.025) = 275. Thus, the retained claims plus the reinsurance cost is S + 275. We need to compute

P(S + 275 > 825) = P(S > 550) = P(Z > (550 − 480)/√784) = P(Z > 2.5) = .0062.
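The whole computation, in units of $10,000, can be verified programmatically (an added check of Example 2.5.3):

```python
import math

# Policies with benefits above the retention limit of 2 units are retained at
# 2 units; the excess coverage is ceded to the reinsurer at .025 per unit.

q = 0.02
policies = [(8000, 1), (3500, 2), (2500, 3), (1500, 5), (500, 10)]  # (count, units)
retention = 2

total_cover = sum(n * b for n, b in policies)
retained = [(n, min(b, retention)) for n, b in policies]
retained_cover = sum(n * b for n, b in retained)
reinsurance_cost = (total_cover - retained_cover) * 0.025

ES = sum(n * b * q for n, b in retained)
VarS = sum(n * b ** 2 * q * (1 - q) for n, b in retained)
z = (825 - reinsurance_cost - ES) / math.sqrt(VarS)

print(total_cover, retained_cover)   # 35000 24000
print(round(ES, 6), round(VarS, 6))  # 480.0 784.0
print(round(z, 6))                   # 2.5, so the probability is P(Z > 2.5) = .0062
```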

12.1. Collective Risk Models for a Single Period. Introduction.

Definition. The collective risk model is the model of the aggregate claim amount generated by a portfolio of policies. Denote by N the number of claims generated by a portfolio of policies in a given time period. Let Xi be the amount of the ith claim (the severity of the ith claim). Then S = X1 + · · · + XN is the aggregate claim amount. The variables N, X1, ..., XN are random variables such that (1) the Xi are identically distributed and (2) N, X1, X2, ... are independent.

12.2. The Distribution of Aggregate Claims.

Notation. Denote by pk = E(X1^k) the kth moment of the i.i.d. Xi's. Let MX(t) = E[e^(tX)] be the m.g.f. of the Xi. Also, let MN(t) and MS(t) denote the m.g.f.'s of N and S, respectively.
Proposition.
(1) E(S) = E[E(S | N)] = E[p1 N] = p1 E(N).
(2) Var(S) = Var[E(S | N)] + E[Var(S | N)] = Var[p1 N] + E[(p2 − p1^2) N] = p1^2 Var(N) + (p2 − p1^2) E(N).
(3) MS(t) = E[E(e^(tS) | N)] = E[MX(t)^N] = E[e^(N ln MX(t))] = MN(ln MX(t)).
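The moment formulas can be checked by simulation (an added illustration; the choice N ∼ Poi(λ) with Xi ∼ Exp(1) is ours, not from the notes: then p1 = 1, p2 = 2, E(N) = Var(N) = λ, so E(S) = λ and Var(S) = 2λ):

```python
import math
import random

# Monte Carlo check of E(S) = p1 E(N) and
# Var(S) = p1^2 Var(N) + (p2 - p1^2) E(N) for a compound Poisson sum.

def poisson_sample(lam):
    """Knuth's product method for a Poisson(lam) variate."""
    threshold, k, prod = math.exp(-lam), 0, random.random()
    while prod > threshold:
        k += 1
        prod *= random.random()
    return k

random.seed(1)
lam, trials = 3.0, 40_000
samples = [sum(random.expovariate(1.0) for _ in range(poisson_sample(lam)))
           for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(round(mean, 2), round(var, 2))  # close to 3.0 and 6.0
```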

Examples 12.2.1 and 12.2.3. Let N ∼ Geom(p) and Xi ∼ Exp(1). Find the distribution of S.
Solution: Here N takes the values 0, 1, 2, ..., so

MN(t) = p / (1 − (1 − p)e^t)  and  MX(t) = 1/(1 − t).

Thus,

MS(t) = p / (1 − (1 − p)/(1 − t)) = p + (1 − p) · p/(p − t);

that is, S is a mixture: it equals 0 with probability p, and with probability 1 − p it is exponential with m.g.f. p/(p − t).