M. PHIL. IN STATISTICAL SCIENCE
Thursday 7 June 2001 1.30 to 4.30
STATISTICAL THEORY
You should attempt FOUR questions, no more than two of which should be from Section B.
You may not start to read the questions
printed on the subsequent pages until
instructed to do so by the Invigilator.

SECTION A

1 Let a d-dimensional parameter vector θ be partitioned as θ = (ψ, λ).

Explain what is meant by orthogonality of ψ and λ. Discuss briefly the consequences of parameter orthogonality for maximum likelihood estimation.

Suppose that Y is distributed according to a density of the form

p_Y(y; θ) = a(λ, y) exp{λ t(y; ψ)}.

Show that ψ and λ are orthogonal.
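For revision purposes, a sketch of one standard argument (using only the score identity and assuming λ ≠ 0; other routes exist):

```latex
% Log-likelihood from the stated density:
% \ell(\psi,\lambda; y) = \log a(\lambda, y) + \lambda\, t(y; \psi), so
\[
\frac{\partial \ell}{\partial \psi} = \lambda\,\frac{\partial t(y;\psi)}{\partial \psi},
\qquad
\frac{\partial^2 \ell}{\partial \psi\,\partial \lambda} = \frac{\partial t(y;\psi)}{\partial \psi}.
\]
% The score has mean zero, so for \lambda \neq 0,
\[
0 = E_\theta\!\left[\frac{\partial \ell}{\partial \psi}\right]
  = \lambda\, E_\theta\!\left[\frac{\partial t}{\partial \psi}\right]
\;\Longrightarrow\;
i_{\psi\lambda}(\theta)
  = -E_\theta\!\left[\frac{\partial^2 \ell}{\partial \psi\,\partial \lambda}\right]
  = -E_\theta\!\left[\frac{\partial t}{\partial \psi}\right] = 0.
\]
```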

2 Write a brief account of the concept and properties of profile likelihood.

Define what is meant by modified profile likelihood.

Let Y_1, ..., Y_n be independent, identically distributed according to an inverse Gaussian distribution with density

{ψ/(2πy^3)}^{1/2} exp{−ψ(y − λ)^2/(2λ^2 y)},   y > 0,

where ψ > 0 and λ > 0. The parameter of interest is ψ.

Find the form of the profile log-likelihood function and of the modified profile log-likelihood.
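As a revision aid, the profile log-likelihood can also be traced numerically. The sketch below is pure Python; the sample y and the crude grid over λ are invented for illustration and are not part of the exam question.

```python
import math

# Hypothetical sample (illustration only; not from the exam).
y = [0.8, 1.2, 0.5, 2.1, 1.0, 0.7, 1.6, 0.9]

def loglik(psi, lam, ys):
    """Log-likelihood for the inverse Gaussian density in the question."""
    return sum(
        0.5 * math.log(psi / (2.0 * math.pi * yi ** 3))
        - psi * (yi - lam) ** 2 / (2.0 * lam ** 2 * yi)
        for yi in ys
    )

def profile_loglik(psi, ys):
    """Profile log-likelihood: maximise over lambda on a crude grid."""
    grid = [0.05 * k for k in range(1, 201)]  # lambda in (0, 10]
    return max(loglik(psi, lam, ys) for lam in grid)

# Trace the profile log-likelihood over a few values of psi.
for psi in (0.5, 1.0, 2.0):
    print("psi =", psi, " profile loglik =", round(profile_loglik(psi, y), 3))
```

By construction the profile value at each ψ dominates the log-likelihood at any fixed λ, which is the defining property being exercised.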

3 (i) Let Y_1, ..., Y_n be independent, identically distributed random variables with density f_Y(y) and cumulant generating function K_Y(t). Describe in detail the saddlepoint approximation to the density of

Ȳ = n^{−1} ∑_{i=1}^{n} Y_i.

(ii) Let Y_1, ..., Y_n be independent random variables each with a Laplace density

f_Y(y) = exp{−|y|}/2,   −∞ < y < ∞.

Show that the cumulant generating function is K_Y(t) = −log(1 − t^2), |t| < 1, and derive the form of the saddlepoint approximation to the density of Ȳ.
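For this CGF the saddlepoint equation K'(t) = 2t/(1 − t^2) = x can be solved in closed form, so the approximation is easy to evaluate numerically. The sketch below (illustration only, not a model answer) uses the usual first-order saddlepoint formula f(x) ≈ {n/(2π K''(t̂))}^{1/2} exp{n[K(t̂) − t̂ x]}:

```python
import math

def K(t):
    """Cumulant generating function of the Laplace density, |t| < 1."""
    return -math.log(1.0 - t * t)

def K2(t):
    """Second derivative: K''(t) = 2(1 + t^2) / (1 - t^2)^2."""
    return 2.0 * (1.0 + t * t) / (1.0 - t * t) ** 2

def saddlepoint(x):
    """Root of K'(t) = 2t/(1 - t^2) = x lying in (-1, 1)."""
    if x == 0.0:
        return 0.0
    return (math.sqrt(1.0 + x * x) - 1.0) / x

def saddlepoint_density(x, n):
    """First-order saddlepoint approximation to the density of Ybar at x."""
    t = saddlepoint(x)
    return math.sqrt(n / (2.0 * math.pi * K2(t))) * math.exp(n * (K(t) - t * x))

# At x = 0 the saddlepoint is t = 0, K''(0) = 2, so the value is sqrt(n/(4*pi)).
print(saddlepoint_density(0.0, 10))
```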

SECTION B

7 (i) Assume that the n-dimensional observation vector Y may be written

Ω : Y = Xβ + ε,

where X is a given n × p matrix of rank p, β is an unknown vector, and

ε ∼ N_n(0, σ^2 I).

Let Q(β) = (Y − Xβ)^T (Y − Xβ). Show that Q(β) is a convex function of β, and find β̂, the least-squares estimator of β. Show also that

Q(β̂) = Y^T (I − H)Y,

where H is a matrix that you should define.

(ii) Let ε̂ = Y − Xβ̂. Find the distribution of ε̂, and discuss how this may be used to perform diagnostic checks of Ω.

(iii) Suppose that your data actually corresponded to the model

Y_i ∼ N(μ_i, σ_i^2), 1 ≤ i ≤ n, with σ_i^2 ∝ μ_i^2.

How would your diagnostic checks detect this, and what transformation of Y_i would be appropriate?

8 Suppose that Y_1, ..., Y_n are independent Poisson random variables, with E(Y_i) = μ_i t_i, 1 ≤ i ≤ n, where t_1, ..., t_n are given times. Discuss carefully how to fit the model

H_0 : log μ_i = β^T x_i, 1 ≤ i ≤ n,

where x_1, ..., x_n are given covariates, and β is a vector of unknown parameters.
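One way to make the fitting discussion concrete: under H_0 the mean is μ_i t_i = t_i exp(β^T x_i), so the model is a Poisson log-linear model with log t_i as a fixed offset, fitted by iteratively reweighted least squares. The sketch below is a pure-Python illustration with invented data (t, y and a single scalar covariate x are all hypothetical), not a general implementation:

```python
import math

# Hypothetical data (not from the exam): counts y over exposure times t.
t = [1.0, 2.0, 1.0, 3.0, 2.0]
y = [2, 6, 3, 14, 8]
x = [0.0, 0.5, 1.0, 1.5, 2.0]

# IRLS for mu_i * t_i = t_i * exp(b0 + b1 * x_i); log t_i is a fixed offset.
b0, b1 = 0.0, 0.0
for _ in range(50):
    eta = [math.log(ti) + b0 + b1 * xi for ti, xi in zip(t, x)]
    mu = [math.exp(e) for e in eta]
    # Working response (offset subtracted) and working weights w_i = mu_i.
    z = [(ei - math.log(ti)) + (yi - mi) / mi
         for ei, ti, yi, mi in zip(eta, t, y, mu)]
    # Weighted least squares step via the 2x2 normal equations.
    sw = sum(mu)
    swx = sum(m * xi for m, xi in zip(mu, x))
    swxx = sum(m * xi * xi for m, xi in zip(mu, x))
    swz = sum(m * zi for m, zi in zip(mu, z))
    swxz = sum(m * xi * zi for m, xi, zi in zip(mu, x, z))
    det = sw * swxx - swx * swx
    b0 = (swxx * swz - swx * swxz) / det
    b1 = (sw * swxz - swx * swz) / det

mu = [ti * math.exp(b0 + b1 * xi) for ti, xi in zip(t, x)]
# At the MLE the score vanishes: sum(y - mu) = 0 and sum((y - mu) x) = 0.
print(round(sum(yi - mi for yi, mi in zip(y, mu)), 8))
```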

9 Write a brief account of the role of conditioning in classical statistical inference.

Contrast briefly the handling of nuisance parameters in classical approaches to inference with that in the Bayesian approach.

10 Let X_1, ..., X_n be independent, identically distributed random variables, with the exponential density f(x; θ) = θ e^{−θx}, x > 0.

Obtain the maximum likelihood estimator θ̂ of θ. What is the asymptotic distribution of

√n(θ̂ − θ)?

Show that θ̂ is biased as an estimator of θ.

What is the minimum variance unbiased estimator of θ? Justify your answer carefully, stating clearly any results you use.
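A quick Monte Carlo check of the bias claim (illustration only; the seed, θ, n and replication count are arbitrary choices, not part of the exam). Here θ̂ = 1/X̄ = n/∑X_i, whose expectation is nθ/(n − 1), while (n − 1)/∑X_i is exactly unbiased:

```python
import random

rng = random.Random(42)
theta, n, reps = 2.0, 5, 20000

mle_vals, umvue_vals = [], []
for _ in range(reps):
    s = sum(rng.expovariate(theta) for _ in range(n))  # sum of n Exp(theta)
    mle_vals.append(n / s)          # MLE: theta_hat = 1 / xbar, biased upward
    umvue_vals.append((n - 1) / s)  # bias-corrected estimator

print("mean of MLE:  ", round(sum(mle_vals) / reps, 3))   # near n*theta/(n-1) = 2.5
print("mean of UMVUE:", round(sum(umvue_vals) / reps, 3)) # near theta = 2.0
```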