
M.Phil. in Statistical Science Exam: Statistical Theory

A past exam paper for the M.Phil. in Statistical Science program, focusing on statistical theory. It includes instructions for the exam, as well as questions covering topics such as profile likelihood, conditional likelihood, transformation models, minimal sufficient statistics, M-estimators, and Bayesian decision theory. Candidates are required to answer questions on these topics and demonstrate a deep understanding of statistical concepts.

M. PHIL. IN STATISTICAL SCIENCE

Thursday 2 June 2005    1.30 to 4.30

STATISTICAL THEORY

Attempt FOUR questions, not more than TWO of which should be from Section B.
There are TEN questions in total.
The questions carry equal weight.

STATIONERY REQUIREMENTS: Cover sheet, Treasury Tag, Script paper
SPECIAL REQUIREMENTS: None

You may not start to read the questions printed on the subsequent pages until instructed to do so by the Invigilator.


Section A

1 Explain briefly the concepts of profile likelihood and conditional likelihood, for inference about a parameter of interest $\psi$, in the presence of a nuisance parameter $\lambda$.

Suppose $Y_1, \ldots, Y_n$ are independent, identically distributed from the exponential family density

$$f(y; \psi, \lambda) = \exp\{\psi \tau_1(y) + \lambda \tau_2(y) - d(\psi, \lambda) - Q(y)\},$$

where $\psi, \lambda$ are both scalar.

Obtain a saddlepoint approximation to the density of $S = n^{-1} \sum_{i=1}^{n} \tau_2(Y_i)$. Show that use of the saddlepoint approximation leads to an approximate conditional log-likelihood function for $\psi$ of the form

$$l_p(\psi) + B(\psi),$$

where $l_p(\psi)$ is the profile log-likelihood, and $B(\psi)$ is an adjustment which you should specify carefully.
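
As a concrete illustration of the profile likelihood construction, the following minimal Python sketch takes the Gamma family with shape $\psi$ and rate $\lambda$ as one instance of the exponential family above (so that $\tau_1(y) = \log y$ and $\tau_2(y) = y$); the family choice, sample size, and parameter values are illustrative assumptions only.

```python
# A minimal sketch: profile log-likelihood for the shape parameter psi of a
# Gamma(shape = psi, rate = lambda) sample; lambda is the nuisance parameter
# and, for fixed psi, its MLE is available in closed form.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=1.0 / 3.0, size=50)   # true psi = 2, lambda = 3

def loglik(psi, lam, y):
    """Full log-likelihood of a Gamma(shape=psi, rate=lam) sample."""
    n = len(y)
    return (n * psi * np.log(lam) + (psi - 1) * np.sum(np.log(y))
            - lam * np.sum(y) - n * gammaln(psi))

def profile_loglik(psi, y):
    """l_p(psi) = l(psi, lambda_hat_psi), with lambda_hat_psi = psi / mean(y)."""
    return loglik(psi, psi / np.mean(y), y)

psis = np.linspace(0.5, 5.0, 200)
lp = np.array([profile_loglik(p, y) for p in psis])
print("maximiser of the profile log-likelihood:", psis[np.argmax(lp)])
```

Note that maximising the plain profile log-likelihood, as here, ignores the adjustment $B(\psi)$ that the question asks you to derive.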

2 Explain in detail what is meant by a transformation model.

What is meant by (i) a maximal invariant, (ii) an equivariant estimator, in the context of a transformation model?

Describe in detail how an equivariant estimator can be used to construct a maximal invariant. Illustrate the construction for the case of a location-scale model.
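
The location-scale construction can be checked numerically: if $(\hat\mu, \hat\sigma)$ is equivariant, the vector of standardised residuals $((Y_1 - \hat\mu)/\hat\sigma, \ldots, (Y_n - \hat\mu)/\hat\sigma)$ is a maximal invariant. A minimal Python sketch, using the sample mean and standard deviation as the (assumed) equivariant estimator:

```python
# The configuration of standardised residuals is unchanged by any
# location-scale transformation y -> a + b*y (b > 0), illustrating that it
# is invariant; maximality is the part the question asks you to establish.
import numpy as np

def configuration(y):
    mu_hat, sigma_hat = np.mean(y), np.std(y)   # equivariant estimates
    return (y - mu_hat) / sigma_hat

rng = np.random.default_rng(1)
y = rng.normal(size=10)
a, b = 5.0, 2.5                                  # arbitrary transformation
print(np.allclose(configuration(y), configuration(a + b * y)))   # True
```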


5 Let $Y_1, \ldots, Y_n$ be independent, identically distributed from a distribution $F$, with density $f$ symmetric about an unknown point $\theta$. Suppose we wish to test $H_0: \theta = \theta_0$ against $H_1: \theta < \theta_0$.

Explain how to test $H_0$ against $H_1$ using (i) the sign test, and (ii) the Wilcoxon signed rank test.

Show that the null mean and variance of the Wilcoxon signed rank statistic are $\frac{1}{4}n(n+1)$ and $\frac{1}{24}n(n+1)(2n+1)$ respectively.

What is meant by a one-sample $U$-statistic? State, without proof, a result concerning the asymptotic distribution of a one-sample $U$-statistic, and use it to deduce asymptotic normality of the Wilcoxon signed rank statistic.
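
For orientation, the statistic and the normal approximation built from the null moments quoted above can be computed directly; this Python sketch (the simulated data and the choice $\theta_0 = 0$ are assumptions) is illustrative only.

```python
# Wilcoxon signed rank statistic W+ and its normal approximation, using the
# null mean n(n+1)/4 and null variance n(n+1)(2n+1)/24 from the question.
import numpy as np
from scipy.stats import rankdata, norm

def wilcoxon_signed_rank(y, theta0):
    d = y - theta0
    d = d[d != 0]                       # conventionally, drop exact zeros
    n = len(d)
    ranks = rankdata(np.abs(d))         # ranks of the |differences|
    w_plus = np.sum(ranks[d > 0])       # sum of ranks of positive differences
    mean = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    z = (w_plus - mean) / np.sqrt(var)
    return w_plus, norm.cdf(z)          # small W+ favours H1: theta < theta0

rng = np.random.default_rng(2)
y = rng.normal(loc=-0.5, size=40)       # data centred below theta0 = 0
w, p = wilcoxon_signed_rank(y, theta0=0.0)
print(f"W+ = {w:.1f}, one-sided p ~ {p:.4f}")
```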

6 Write brief notes on four of the following:

(i) Edgeworth expansion;
(ii) parameter orthogonality;
(iii) Laplace approximation;
(iv) Bartlett correction;
(v) the invariance principle;
(vi) finite-sample versions of robustness measures;
(vii) tests based on the empirical distribution function;
(viii) large-sample likelihood theory.


Section B

7 Assume that the $n$-dimensional observation vector $Y$ may be written

$$Y = X\beta + \epsilon,$$

where $X$ is a given $n \times p$ matrix of rank $p$, $\beta$ is an unknown vector, and $\epsilon \sim N_n(0, \sigma^2 I)$.

Let $Q(\beta) = (Y - X\beta)^T (Y - X\beta)$. Find $\hat{\beta}$, the least-squares estimator of $\beta$, and show that

$$Q(\hat{\beta}) = Y^T (I - H) Y,$$

where $H$ is a matrix that you should define.

If now $X\beta$ is written as $X\beta = X_1\beta_1 + X_2\beta_2$, where $X = (X_1 : X_2)$, $\beta^T = (\beta_1^T : \beta_2^T)$, and $\beta_2$ is of dimension $p_2$, state without proof the form of the $F$-test for testing $H_0: \beta_2 = 0$.

What is meant by saying that $\beta_1$ is orthogonal to $\beta_2$? What is the practical relevance of orthogonality?
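
The identities in this question are easy to verify numerically. The following Python sketch (the simulated design and data are assumptions, not part of the question) computes $Q(\hat\beta) = Y^T(I - H)Y$ with $H = X(X^TX)^{-1}X^T$, and the partial $F$ statistic comparing the full and reduced models.

```python
# Least squares, the hat matrix H, and the F-test of H0: beta2 = 0 via the
# usual comparison of residual sums of squares.
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(3)
n, p1, p2 = 50, 2, 2
X = rng.normal(size=(n, p1 + p2))
Y = X[:, :p1] @ np.array([1.0, -2.0]) + rng.normal(size=n)   # beta2 = 0 holds

def rss(X, Y):
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix X (X^T X)^{-1} X^T
    return Y @ (np.eye(len(Y)) - H) @ Y          # Q(beta_hat) = Y^T (I - H) Y

p = p1 + p2
F = ((rss(X[:, :p1], Y) - rss(X, Y)) / p2) / (rss(X, Y) / (n - p))
print("F =", F, "p-value =", 1 - f_dist.cdf(F, p2, n - p))
```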

8 Suppose that $Y_1, \ldots, Y_n$ are independent binomial observations, with

$$Y_i \sim B(t_i, \pi_i) \quad \text{and} \quad \log\{\pi_i/(1 - \pi_i)\} = \beta^T x_i, \qquad 1 \le i \le n,$$

where $t_1, \ldots, t_n$ and $x_1, \ldots, x_n$ are given. Discuss carefully the estimation of $\beta$.

Your solution should include

(i) the method of checking the fit of the above logistic model, and

(ii) the method for finding an approximate 95% confidence interval for $\beta_2$, the second component of the vector $\beta$.
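
A standard route through this question is Fisher scoring (iteratively reweighted least squares), with the residual deviance for point (i) and a Wald interval for point (ii). The Python sketch below is one illustrative implementation; the simulated design, denominators, and coefficients are assumptions.

```python
# Binomial logistic regression via IRLS, a deviance goodness-of-fit check,
# and a Wald 95% interval for the second coefficient.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)
n = 30
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + covariate
t = rng.integers(10, 30, size=n)                         # binomial denominators
pi_true = 1 / (1 + np.exp(-(0.3 + 0.8 * x[:, 1])))
y = rng.binomial(t, pi_true)

beta = np.zeros(2)
for _ in range(25):                                      # Fisher scoring / IRLS
    pi = 1 / (1 + np.exp(-(x @ beta)))
    w = t * pi * (1 - pi)                                # working weights
    z = x @ beta + (y - t * pi) / w                      # working response
    beta = np.linalg.solve(x.T @ (w[:, None] * x), x.T @ (w * z))

pi = 1 / (1 + np.exp(-(x @ beta)))
w = t * pi * (1 - pi)
cov = np.linalg.inv(x.T @ (w[:, None] * x))              # approx. (X^T W X)^{-1}
se = np.sqrt(cov[1, 1])
print("Wald 95% CI for beta_2:", beta[1] - 1.96 * se, beta[1] + 1.96 * se)

# (i) Fit check: residual deviance against chi^2 on n - p degrees of freedom
# (reasonable here because the data are grouped with moderately large t_i).
with np.errstate(divide="ignore", invalid="ignore"):
    t1 = np.where(y > 0, y * np.log(y / (t * pi)), 0.0)
    t2 = np.where(y < t, (t - y) * np.log((t - y) / (t - t * pi)), 0.0)
dev = 2 * np.sum(t1 + t2)
print("deviance =", dev, "df =", n - 2, "p ~", 1 - chi2.cdf(dev, n - 2))
```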
