

These lecture notes cover an introduction to the linear model, multivariate statistics, multivariate conditional distributions, and quadratic forms.
Quadratic Forms.
Recall that $Y'AY$ is called a quadratic form in $Y$ when $Y$ is a random vector.
Result. If $X \sim N_p(\mu, \Sigma)$ with $\Sigma > 0$, then $(X - \mu)'\Sigma^{-1}(X - \mu) \sim \chi^2_p$.
Proof. Let $Z = \Sigma^{-1/2}(X - \mu) \sim N_p(0, I_p)$; i.e., $Z_1, Z_2, \ldots, Z_p$ are i.i.d. $N(0,1)$. Therefore $Z'Z = \sum_{i=1}^p Z_i^2 \sim \chi^2_p$. Note that $(X - \mu)'\Sigma^{-1}(X - \mu) = Z'Z$.
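As a small numerical sanity check of the standardization step (the dimension, mean, and covariance below are illustrative assumptions, not from the notes), one can verify that the quadratic form equals $Z'Z$ after whitening:

```python
import numpy as np

# Illustrative parameters (an assumption, not from the notes): p = 3.
rng = np.random.default_rng(0)
p = 3
mu = np.array([1.0, -2.0, 0.5])
B = rng.standard_normal((p, p))
Sigma = B @ B.T + p * np.eye(p)            # symmetric positive definite

# Sigma^{-1/2} from the spectral decomposition Sigma = G diag(w) G'.
w, G = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = G @ np.diag(w ** -0.5) @ G.T

x = mu + rng.standard_normal(p)            # one draw standing in for X
z = Sigma_inv_sqrt @ (x - mu)              # Z = Sigma^{-1/2}(X - mu)

quad = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
print(np.isclose(quad, z @ z))             # the quadratic form equals Z'Z
```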
Result. If $X_1, X_2, \ldots, X_n$ is a random sample from $N(\mu, \sigma^2)$, then $\bar{X}$ and $S^2 = \sum_{i=1}^n (X_i - \bar{X})^2$ are independent, $\bar{X} \sim N(\mu, \sigma^2/n)$ and $S^2/\sigma^2 \sim \chi^2_{n-1}$.
Proof. First note that $X = (X_1, X_2, \ldots, X_n)' \sim N_n(\mu \mathbf{1}, \sigma^2 I_n)$. Now consider an orthogonal matrix $A_{n \times n} = ((a_{ij}))$ with first row $a_1' = (\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \ldots, \frac{1}{\sqrt{n}}) = \frac{1}{\sqrt{n}} \mathbf{1}'$. (Simply take a basis for $\mathbb{R}^n$ with $a_1$ as the first vector and orthogonalize the rest.) Now let $Y = AX$; i.e., $Y_i = a_i' X$, $i = 1, 2, \ldots, n$. Since $X \sim N_n(\mu \mathbf{1}, \sigma^2 I_n)$, we have $Y \sim N_n(\mu A \mathbf{1}, \sigma^2 A A') = N_n(\mu A \mathbf{1}, \sigma^2 I_n)$. Therefore the $Y_i$ are independent normal with variance $\sigma^2$. Further, $E(Y_i) = E(a_i' X) = \mu a_i' \mathbf{1}$. Thus $E(Y_1) = \mu a_1' \mathbf{1} = \mu \frac{1}{\sqrt{n}} \mathbf{1}' \mathbf{1} = \sqrt{n}\,\mu$. For $i > 1$, $E(Y_i) = \mu a_i' \mathbf{1} = \mu \sqrt{n}\, a_i' a_1 = 0$; i.e., $Y_2, \ldots, Y_n$ are i.i.d. $N(0, \sigma^2)$. Therefore $\sum_{i=2}^n Y_i^2 / \sigma^2 \sim \chi^2_{n-1}$. Further, $Y_1 = a_1' X = \frac{1}{\sqrt{n}} \sum_{i=1}^n X_i = \sqrt{n}\,\bar{X} \sim N(\sqrt{n}\,\mu, \sigma^2)$ and is independent of $(Y_2, \ldots, Y_n)$. Also, since $A$ is orthogonal, $\sum_{i=1}^n Y_i^2 = \sum_{i=1}^n X_i^2$, so
$$S^2 = \sum_{i=1}^n (X_i - \bar{X})^2 = \sum_{i=1}^n X_i^2 - n \bar{X}^2 = \sum_{i=1}^n Y_i^2 - Y_1^2 = \sum_{i=2}^n Y_i^2.$$
Hence $S^2/\sigma^2 \sim \chi^2_{n-1}$, which is independent of $Y_1$ and therefore of $\bar{X}$.
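The orthogonal-matrix construction in the proof can be sketched numerically (the sample values are an arbitrary illustration): build $A$ with first row $\frac{1}{\sqrt{n}} \mathbf{1}'$ via a QR factorization, and check that $Y_1 = \sqrt{n}\,\bar{X}$ and $\sum_{i=2}^n Y_i^2 = S^2$:

```python
import numpy as np

n = 5
x = np.array([2.0, 4.0, 4.0, 4.0, 5.0])   # an arbitrary sample (assumption)

# Build an orthogonal A whose first row is (1/sqrt(n)) 1': orthogonalize
# the basis {1, e_2, ..., e_n} (any completion of a_1 works).
M = np.eye(n)
M[:, 0] = 1.0 / np.sqrt(n)
Q, _ = np.linalg.qr(M)
A = Q.T
if A[0, 0] < 0:                            # fix QR's sign convention
    A = -A

y = A @ x                                  # Y = AX
xbar = x.mean()
S2 = np.sum((x - xbar) ** 2)

print(np.isclose(y[0], np.sqrt(n) * xbar))     # Y_1 = sqrt(n) * Xbar
print(np.isclose(np.sum(y[1:] ** 2), S2))      # sum_{i>=2} Y_i^2 = S^2
```

Orthogonality of $A$ is what makes $\sum_i Y_i^2 = \sum_i X_i^2$, which is the key identity behind the last display of the proof.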
If $X \sim N_p(0, I)$, then $X'X = \sum_{i=1}^p X_i^2 \sim \chi^2_p$; i.e., $X' I X \sim \chi^2_p$. Also note that $X' \left( \frac{1}{\sqrt{p}} \mathbf{1}\, \frac{1}{\sqrt{p}} \mathbf{1}' \right) X = p \bar{X}^2 \sim \chi^2_1$ and $X' \left( I - \frac{1}{p} \mathbf{1} \mathbf{1}' \right) X \sim \chi^2_{p-1}$.
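These two projection identities can be checked directly (a small numeric illustration; the vector is an arbitrary choice):

```python
import numpy as np

p = 4
J = np.ones((p, p)) / p            # (1/p) 1 1', rank-1 projection onto span(1)
P1 = J
P2 = np.eye(p) - J                 # rank p-1 projection onto the complement

x = np.array([1.0, 3.0, -2.0, 6.0])
xbar = x.mean()

print(np.isclose(x @ P1 @ x, p * xbar ** 2))             # X'(1/p)11'X = p Xbar^2
print(np.isclose(x @ P2 @ x, np.sum((x - xbar) ** 2)))   # = sum (X_i - Xbar)^2
print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))  # both idempotent
```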
What is the distribution of $X'AX$ for an arbitrary p.s.d. matrix $A$? Without loss of generality we may assume $A$ is symmetric, since $X'AX = X' \frac{1}{2}(A + A') X = X'BX$, where $B = \frac{1}{2}(A + A')$ is always symmetric.
Since $A$ is symmetric p.s.d., $A = \Gamma D_\lambda \Gamma'$, so $X'AX = X' \Gamma D_\lambda \Gamma' X = Y' D_\lambda Y$, where $Y = \Gamma' X \sim N_p(0, \Gamma' \Gamma = I)$. Therefore $X'AX = \sum_{i=1}^p d_i Y_i^2$, where the $d_i$ are the eigenvalues of $A$ and the $Y_i$ are i.i.d. $N(0,1)$. Therefore $X'AX$ has a $\chi^2$ distribution if each $d_i$ is $1$ or $0$. Equivalently, $X'AX \sim \chi^2$ if $A^2 = A$, i.e., $A$ is symmetric idempotent, i.e., $A$ is an orthogonal projection matrix. The equivalence may be seen as follows. If $d_1 \ge d_2 \ge \cdots \ge d_p \ge 0$ are such that $d_1 = d_2 = \cdots = d_r = 1$ and $d_{r+1} = \cdots = d_p = 0$, then
$$D_\lambda = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}, \qquad D_\lambda^2 = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} = D_\lambda,$$
so $A^2 = \Gamma D_\lambda^2 \Gamma' = \Gamma D_\lambda \Gamma' = A$.
Conversely, if $A^2 = A$, then $\Gamma D_\lambda \Gamma' \Gamma D_\lambda \Gamma' = \Gamma D_\lambda^2 \Gamma' = \Gamma D_\lambda \Gamma'$ implies that $D_\lambda^2 = D_\lambda$, i.e., $d_i^2 = d_i$, i.e., each $d_i$ is $0$ or $1$.
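A concrete instance (illustrative; the matrix $Z$ and its construction are assumptions, not from the notes): the projection onto the column space of any full-rank $Z$ is symmetric idempotent, and its eigenvalues are exactly $0$'s and $1$'s, with multiplicity of $1$ equal to the rank:

```python
import numpy as np

# Projection onto the column space of an arbitrary full-rank 6x2 matrix Z.
rng = np.random.default_rng(1)
Z = rng.standard_normal((6, 2))
A = Z @ np.linalg.inv(Z.T @ Z) @ Z.T       # A' = A and A^2 = A

d = np.linalg.eigvalsh(A)                  # ascending eigenvalues
print(np.allclose(A @ A, A))               # idempotent
print(np.allclose(d, [0, 0, 0, 0, 1, 1]))  # four 0's, two 1's (rank 2)
```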
We will show the converse now. Suppose $X'AX \sim \chi^2_r$ and $A$ is symmetric p.s.d. Then the mgf of $X'AX$ is
$$M_{X'AX}(t) = \int_0^\infty e^{tu}\, \frac{e^{-u/2}\, u^{r/2 - 1}}{2^{r/2} \Gamma(r/2)}\, du = \int_0^\infty \frac{e^{-\frac{u}{2}(1 - 2t)}\, u^{r/2 - 1}}{2^{r/2} \Gamma(r/2)}\, du = (1 - 2t)^{-r/2}, \quad \text{for } 1 - 2t > 0.$$
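The closed form $(1 - 2t)^{-r/2}$ can be verified by numerically integrating the $\chi^2_r$ density against $e^{tu}$ (an illustration; $r$ and $t$ are arbitrary choices satisfying $1 - 2t > 0$):

```python
import numpy as np
from math import gamma

r, t = 3, 0.2
h = 0.001
u = np.arange(h / 2, 100, h)               # midpoint grid; tail beyond 100 is negligible
dens = np.exp(-u / 2) * u ** (r / 2 - 1) / (2 ** (r / 2) * gamma(r / 2))
mgf_num = np.sum(np.exp(t * u) * dens) * h # midpoint-rule integral of exp(tu) * density
print(np.isclose(mgf_num, (1 - 2 * t) ** (-r / 2), atol=1e-4))
```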
But in distribution $X'AX = \sum_{i=1}^p d_i Y_i^2$, with the $Y_i$ i.i.d. $N(0,1)$, so
$$M_{X'AX}(t) = E\!\left[ \exp\!\Big( t \sum_{i=1}^p d_i Y_i^2 \Big) \right] = E\!\left[ \prod_{i=1}^p \exp(t d_i Y_i^2) \right] = \prod_{i=1}^p E\!\left[ \exp(t d_i Y_i^2) \right] = \prod_{i=1}^p (1 - 2t d_i)^{-1/2}, \quad \text{for } 1 - 2t d_i > 0.$$
Now note that $X'AX \sim \chi^2_r$ implies $X'AX > 0$ w.p. 1; i.e., $\sum_{i=1}^p d_i Y_i^2 > 0$ w.p. 1, which in turn implies that $d_i \ge 0$ for all $i$. (This is because, if $d_l < 0$, since $Y_l^2 \sim \chi^2_1$ independently of the $Y_i$, $i \ne l$, we would have $\sum_{i=1}^p d_i Y_i^2 < 0$ with positive probability.) Therefore, for $t < \min_i \frac{1}{2 d_i}$, equating the two mgf's we have
$$(1 - 2t)^{-r/2} = \prod_{i=1}^p (1 - 2t d_i)^{-1/2}, \quad \text{or} \quad (1 - 2t)^{r/2} = \prod_{i=1}^p (1 - 2t d_i)^{1/2}, \quad \text{or} \quad (1 - 2t)^r = \prod_{i=1}^p (1 - 2t d_i).$$
Equality of two polynomials means that their roots must be the same. Check that $r$ of the $d_i$'s must be $1$ and the rest $0$. Thus the following result follows.
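The matching of the two mgf expressions can be illustrated numerically (a small check, not part of the notes): with eigenvalues that are $r$ ones and the rest zeros the product collapses to $(1 - 2t)^{-r/2}$, while eigenvalues away from $\{0, 1\}$ break the equality even when they have the same sum:

```python
import numpy as np

# Eigenvalues of a rank-2 orthogonal projection in R^5: two 1's, three 0's.
d_proj = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
r = 2

for t in [-1.0, 0.0, 0.1, 0.2]:            # any t with 1 - 2 t d_i > 0
    lhs = (1 - 2 * t) ** (-r / 2)
    rhs = np.prod((1 - 2 * t * d_proj) ** (-0.5))
    print(np.isclose(lhs, rhs))            # the two mgf's agree

# Non-0/1 eigenvalues with the same sum do not match the chi^2_2 mgf:
d_bad = np.array([1.5, 0.5, 0.0, 0.0, 0.0])
t = 0.2
print(np.isclose((1 - 2 * t) ** (-r / 2),
                 np.prod((1 - 2 * t * d_bad) ** (-0.5))))
```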
Result. $X'AX \sim \chi^2_r$ iff $A$ is a symmetric idempotent matrix (an orthogonal projection matrix) of rank $r$.