


STATISTICAL THEORY
Monday 5 June 2006 1.30 to 4.
Attempt FOUR questions. There are SIX questions in total.
The questions carry equal weight.
STATIONERY REQUIREMENTS: Cover sheet, Treasury Tag, Script paper
SPECIAL REQUIREMENTS: None
1 Let $X_1, \dots, X_n$ be independent and identically distributed random variables with distribution function $F$. Define the empirical distribution function $\hat F_n$. State and prove the Glivenko–Cantelli theorem.
Define the $p$th sample quantile $\hat F_n^{-1}(p)$. Subject to a smoothness condition which you should specify, write down the asymptotic distribution of the sample median, $\hat F_n^{-1}(1/2)$.
In each of the two cases below, compare the asymptotic variance of $n^{1/2} \hat F_n^{-1}(1/2)$ with that of $n^{1/2} \bar X_n$, where $\bar X_n = n^{-1}(X_1 + \dots + X_n)$:
(i) $F = \Phi$, the standard normal distribution function; (ii) $F$ has density $f(x) = 6x(1-x)$ for $x \in (0,1)$.
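As a numerical illustration of the comparison asked for here, the following Monte Carlo sketch (assuming only numpy; the seed, sample size, and replication count are arbitrary choices) estimates the variance of $n^{1/2}(\text{estimator} - \text{centre})$ by simulation; the quoted theoretical values follow from the asymptotics above.

```python
# Monte Carlo sketch for Question 1: estimated vs theoretical asymptotic
# variances of n^{1/2} * (sample median) and n^{1/2} * (sample mean).
import numpy as np

rng = np.random.default_rng(0)   # seed is an arbitrary choice
n, reps = 1000, 5000             # sample size and replications, also arbitrary

def avar(estimates, centre):
    # empirical variance of n^{1/2} * (estimator - centre)
    return n * np.var(estimates - centre)

# (i) F = Phi: theory gives avar(median) = 1/(4 phi(0)^2) = pi/2, avar(mean) = 1
x = rng.standard_normal((reps, n))
print(avar(np.median(x, axis=1), 0.0), np.pi / 2)
print(avar(x.mean(axis=1), 0.0), 1.0)

# (ii) f(x) = 6x(1-x) on (0,1), i.e. the Beta(2,2) density: median 1/2 and
#      f(1/2) = 3/2, so avar(median) = 1/9, while avar(mean) = Var(X) = 1/20
y = rng.beta(2.0, 2.0, size=(reps, n))
print(avar(np.median(y, axis=1), 0.5), 1 / 9)
print(avar(y.mean(axis=1), 0.5), 1 / 20)
```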
2 Let $Y_1, \dots, Y_n$ be independent and identically distributed with model function $f(y; \theta)$, where $\theta \in \Theta \subseteq \mathbb{R}^d$, and let $\theta_0$ denote the true parameter value. Derive the asymptotic distribution of the maximum likelihood estimator $\hat\theta_n$.
[You may assume that the usual regularity conditions hold. In particular, you may assume a Taylor expansion for the score function $U(\theta)$, of the form
$$0 = U(\hat\theta_n) = U(\theta_0) - j(\theta_0)(\hat\theta_n - \theta_0) + o_p(n^{1/2}),$$
as $n \to \infty$, where $j(\theta)$ is the observed information matrix at $\theta$.]
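A sketch of how the expansion yields the result, writing $i_1(\theta)$ for the per-observation Fisher information (a notational convention assumed here, not fixed by the question):

```latex
% Rearranging 0 = U(\theta_0) - j(\theta_0)(\hat\theta_n - \theta_0) + o_p(n^{1/2}),
% and noting that j(\theta_0) = O_p(n):
\hat\theta_n - \theta_0 = j(\theta_0)^{-1}\, U(\theta_0) + o_p(n^{-1/2}).
% CLT: n^{-1/2} U(\theta_0) \to N_d(0, i_1(\theta_0)); LLN: n^{-1} j(\theta_0) \to i_1(\theta_0)
% in probability. Slutsky's lemma then gives
n^{1/2}(\hat\theta_n - \theta_0) \xrightarrow{d} N_d\bigl(0,\; i_1(\theta_0)^{-1}\bigr).
```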
Describe how this asymptotic result is related to the Wald test of $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$. Now suppose that $\theta = (\psi, \lambda)$, where only $\psi$ is of interest. Describe the Wald test of $H_0: \psi = \psi_0$ against $H_1: \psi \neq \psi_0$.
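In symbols, a sketch of the two statistics (here $\hat{i}_1$ denotes any consistent estimate of $i_1(\theta_0)$, e.g. $n^{-1} j(\hat\theta_n)$; the choice of estimate is an assumption, not fixed by the question):

```latex
% Full-vector case, H_0: \theta = \theta_0:
w(\theta_0) = n\,(\hat\theta_n - \theta_0)^{\mathsf T}\, \hat{i}_1\, (\hat\theta_n - \theta_0)
  \xrightarrow{d} \chi^2_d \quad \text{under } H_0.
% Nuisance-parameter case, H_0: \psi = \psi_0 with \theta = (\psi, \lambda):
% use only the \psi component, with the (\psi,\psi) block of \hat{i}_1^{-1} inverted,
w_p(\psi_0) = n\,(\hat\psi - \psi_0)^{\mathsf T}
  \bigl\{(\hat{i}_1^{-1})_{\psi\psi}\bigr\}^{-1} (\hat\psi - \psi_0)
  \xrightarrow{d} \chi^2_{\dim\psi} \quad \text{under } H_0.
```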
Let $Y_1, \dots, Y_n$ be independent and identically distributed with the inverse Gaussian density
$$f(y; \psi, \lambda) = \Bigl(\frac{\psi}{2\pi y^3}\Bigr)^{1/2} \exp\Bigl\{-\frac{\psi}{2\lambda^2 y}(y - \lambda)^2\Bigr\}, \qquad y > 0,\ \psi > 0,\ \lambda > 0.$$
Show that the maximum likelihood estimator of $\psi$ is
$$\hat\psi = \frac{n}{\sum_{i=1}^n \bigl(1/Y_i - 1/\bar Y\bigr)},$$
where $\bar Y = n^{-1}(Y_1 + \dots + Y_n)$.
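One route to this estimator, sketched (the key step is expanding the quadratic in the exponent):

```latex
% Using (y - \lambda)^2/(\lambda^2 y) = y/\lambda^2 - 2/\lambda + 1/y, the log-likelihood is
\ell(\psi, \lambda) = \frac{n}{2}\log\psi
  - \frac{\psi}{2}\Bigl(\lambda^{-2}\sum_i Y_i - 2n\lambda^{-1} + \sum_i Y_i^{-1}\Bigr) + \text{const}.
% \partial\ell/\partial\lambda = 0 gives \hat\lambda = \bar Y; substituting \lambda = \bar Y
% and solving \partial\ell/\partial\psi = 0 then gives
\hat\psi^{-1} = n^{-1}\sum_{i=1}^n \bigl(Y_i^{-1} - \bar Y^{-1}\bigr).
```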
Using the fact that $E_{\psi,\lambda}(Y_1) = \lambda$, show further that the Wald statistics for testing $H_0: \psi = \psi_0$ against $H_1: \psi \neq \psi_0$ coincide in the two cases where $\lambda$ is known and where $\lambda$ is unknown.
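A relevant observation, sketched: $\psi$ and $\lambda$ are orthogonal parameters, so the $(\psi,\psi)$ block of the inverse information is the same in the two cases.

```latex
% Per observation, \partial^2 \log f / \partial\psi\,\partial\lambda = Y/\lambda^3 - 1/\lambda^2,
% so using E_{\psi,\lambda}(Y_1) = \lambda,
i_{\psi\lambda}(\psi, \lambda)
  = -E_{\psi,\lambda}\Bigl(\frac{\partial^2 \log f}{\partial\psi\,\partial\lambda}\Bigr)
  = \frac{1}{\lambda^2} - \frac{E(Y_1)}{\lambda^3} = 0.
% Hence (i_1^{-1})_{\psi\psi} = i_{\psi\psi}^{-1} = 2\psi^2 whether \lambda is known or
% estimated, which is the mechanism behind the agreement of the two Wald statistics.
```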
5 Let $f$ be a bounded density with a bounded, continuous second derivative $f''$ satisfying $\int_{-\infty}^{\infty} f''(x)^2\,dx < \infty$, and let $X_1, \dots, X_n$ be independent and identically distributed with density $f$. Define the kernel density estimator $\hat f_h(x)$ with kernel $K$ and bandwidth $h$. Under conditions on $h$ and $K$ which you should specify, derive the leading term of an asymptotic expansion for the bias of $\hat f_h(x)$ as a point estimator of $f(x)$.
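A sketch of the standard derivation, under the conditions $h \to 0$ and $nh \to \infty$ as $n \to \infty$, with $K$ a bounded symmetric density satisfying $\sigma_K^2 = \int z^2 K(z)\,dz < \infty$:

```latex
% Substituting z = (x - y)/h and Taylor-expanding f about x:
E\{\hat f_h(x)\} = \int_{-\infty}^{\infty} K(z)\, f(x - hz)\,dz
  = f(x) - h f'(x)\!\int\! z K(z)\,dz + \tfrac12 h^2 f''(x)\!\int\! z^2 K(z)\,dz + o(h^2).
% Symmetry of K kills the linear term, so the leading bias term is
\operatorname{Bias}\{\hat f_h(x)\} = \tfrac12\, h^2\, \sigma_K^2\, f''(x) + o(h^2).
```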
Observing that $\operatorname{Var}\{\hat f_h(x)\} = (nh)^{-1} R(K) f(x) + o\{1/(nh)\}$, where $R(K) = \int_{-\infty}^{\infty} K(z)^2\,dz$, and provided that $f''(x) \neq 0$, find the bandwidth $h_{\mathrm{AMSE}}(x)$ which minimises the asymptotic mean squared error of $\hat f_h(x)$ at the point $x$. Write down (or compute) the asymptotically optimal mean integrated squared error bandwidth, $h_{\mathrm{AMISE}}$.
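A sketch of the minimisation, in the notation above:

```latex
% Combining squared bias and variance:
\mathrm{AMSE}\{\hat f_h(x)\} = \tfrac14 h^4 \sigma_K^4 f''(x)^2 + (nh)^{-1} R(K) f(x).
% Setting the h-derivative to zero, h^3 \sigma_K^4 f''(x)^2 = (n h^2)^{-1} R(K) f(x), so
h_{\mathrm{AMSE}}(x) = \Bigl\{\frac{R(K)\, f(x)}{\sigma_K^4\, f''(x)^2}\Bigr\}^{1/5} n^{-1/5}.
% Integrating the AMSE over x (f''(x)^2 becomes R(f'') and f integrates to 1) gives
h_{\mathrm{AMISE}} = \Bigl\{\frac{R(K)}{\sigma_K^4\, R(f'')}\Bigr\}^{1/5} n^{-1/5}.
```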
For $f(x) = \phi(x)$, the standard normal density, show that
$$\inf_{x \in \mathbb{R} \setminus \{-1, 1\}} \frac{h_{\mathrm{AMSE}}(x)}{h_{\mathrm{AMISE}}} = \Bigl(\frac{9 e^5}{8192}\Bigr)^{1/10}.$$
[You may find it helpful to note that $R(\phi'') = \frac{3}{8\sqrt{\pi}}$.]
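The constant can be sanity-checked numerically. A minimal sketch assuming numpy only; it uses the ratio $h_{\mathrm{AMSE}}(x)/h_{\mathrm{AMISE}} = \{f(x) R(f'')/f''(x)^2\}^{1/5}$, which follows from the two bandwidth formulas above and does not depend on $K$:

```python
# Numerical check that inf_x h_AMSE(x)/h_AMISE = (9 e^5 / 8192)^(1/10) for f = phi.
import numpy as np

def ratio(x):
    # (h_AMSE/h_AMISE)^5 = phi(x) R(phi'') / phi''(x)^2, with
    # phi''(x) = (x^2 - 1) phi(x) and R(phi'') = 3 / (8 sqrt(pi))
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    r_phi2 = 3 / (8 * np.sqrt(np.pi))
    return (r_phi2 / ((x**2 - 1) ** 2 * phi)) ** 0.2

x = np.linspace(-6.0, 6.0, 200001)
x = x[np.abs(np.abs(x) - 1.0) > 1e-3]   # h_AMSE is undefined where phi''(x) = 0
print(ratio(x).min())                   # ~ 0.8341, attained near x = +/- sqrt(5)
print((9 * np.exp(5) / 8192) ** 0.1)    # (9 e^5 / 8192)^(1/10) ~ 0.8341
```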
6 Let $g : (a, b) \to \mathbb{R}$ be a smooth function with a unique minimum at $\tilde y \in (a, b)$ satisfying $g''(\tilde y) > 0$. Sketch a derivation of Laplace's method for approximating
$$g_n = \int_a^b e^{-n g(y)}\,dy.$$
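Informally, the outcome of the sketch:

```latex
% Expand g about \tilde y: g(y) = g(\tilde y) + (1/2) g''(\tilde y)(y - \tilde y)^2 + ...,
% then extend the resulting Gaussian integral from (a, b) to the whole line
% (the truncation error is exponentially small):
g_n = e^{-n g(\tilde y)} \int_a^b e^{-\frac{n}{2} g''(\tilde y)(y - \tilde y)^2 + \cdots}\,dy
    = e^{-n g(\tilde y)} \sqrt{\frac{2\pi}{n\, g''(\tilde y)}}\;\bigl\{1 + O(n^{-1})\bigr\}.
```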
[You may treat error terms informally. An explicit expression for the $O(n^{-1})$ term is not required.]
By making an appropriate substitution, use Laplace’s method to approximate
$$\Gamma(n+1) = \int_0^\infty y^n e^{-y}\,dy.$$
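A sketch of the substitution: putting $y = nz$ turns the integrand into the Laplace form,

```latex
% y = nz gives y^n e^{-y} dy = n^{n+1} e^{-n(z - \log z)} dz, so take
% g(z) = z - \log z on (0, \infty): g'(z) = 1 - 1/z vanishes at \tilde z = 1,
% with g(1) = 1 and g''(1) = 1. Laplace's method then yields Stirling's formula:
\Gamma(n+1) \approx n^{n+1} e^{-n} \sqrt{\frac{2\pi}{n}} = \sqrt{2\pi n}\; n^n e^{-n}.
```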
Let $p(\theta)$ denote a prior for a parameter $\theta \in \Theta \subseteq \mathbb{R}$, and let $Y_1, \dots, Y_n$ be independent and identically distributed with conditional density $f(y \mid \theta)$. Explain how Laplace's method may be used to approximate the posterior expectation of a function $g(\theta)$ of interest.
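A sketch of the standard approach, writing the posterior expectation as a ratio of two integrals and applying Laplace's method to each (the numerator-and-denominator device is the Tierney–Kadane approximation; the error rate is quoted informally):

```latex
E\{g(\theta) \mid y\}
  = \frac{\int_\Theta g(\theta)\, p(\theta) \prod_{i=1}^n f(y_i \mid \theta)\, d\theta}
         {\int_\Theta p(\theta) \prod_{i=1}^n f(y_i \mid \theta)\, d\theta}.
% Write each integrand as e^{-n g_n(\theta)} (absorbing g into the exponent in the
% numerator, assuming g > 0) and apply Laplace's method at the respective maximisers.
% The leading O(n^{-1}) corrections largely cancel in the ratio, leaving a relative
% error of order n^{-2}.
```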