Multivariate Statistics, Part 1 (Lecture Notes)

These notes cover an introduction to the linear model, multivariate statistics, multivariate conditional distributions, and quadratic forms.

Quadratic Forms.

Recall that $Y'AY$ is called a quadratic form in $Y$ when $Y$ is a random vector.

Result. If $X \sim N_p(\mu, \Sigma)$ with $\Sigma > 0$, then $(X-\mu)'\Sigma^{-1}(X-\mu) \sim \chi^2_p$.

Proof. Let $Z = \Sigma^{-1/2}(X-\mu) \sim N_p(0, I_p)$; i.e., $Z_1, Z_2, \ldots, Z_p$ are i.i.d. $N(0,1)$. Therefore $Z'Z = \sum_{i=1}^p Z_i^2 \sim \chi^2_p$, and note that $(X-\mu)'\Sigma^{-1}(X-\mu) = Z'Z$.
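As a quick numerical sanity check of this result (not part of the original notes; the dimension, mean, and covariance below are arbitrary illustrative choices), one can simulate the quadratic form and compare it to the $\chi^2_p$ distribution:

```python
# Simulation check (illustrative, not from the notes): if X ~ N_p(mu, Sigma)
# with Sigma > 0, then (X - mu)' Sigma^{-1} (X - mu) should follow chi^2_p.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = 3
mu = np.array([1.0, -2.0, 0.5])
M = rng.standard_normal((p, p))
Sigma = M @ M.T + p * np.eye(p)          # an arbitrary positive definite covariance

X = rng.multivariate_normal(mu, Sigma, size=100_000)
D = X - mu                               # rows are X - mu for each draw
Q = np.einsum('ij,jk,ik->i', D, np.linalg.inv(Sigma), D)

print(Q.mean(), Q.var())                   # should be close to p and 2p
print(stats.kstest(Q, 'chi2', args=(p,)))  # large p-value indicates a chi^2_p fit
```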

Result. If $X_1, X_2, \ldots, X_n$ is a random sample from $N(\mu, \sigma^2)$, then $\bar X$ and $S^2 = \sum_{i=1}^n (X_i - \bar X)^2$ are independent, with $\bar X \sim N(\mu, \sigma^2/n)$ and $S^2/\sigma^2 \sim \chi^2_{n-1}$.

Proof. First note that $X = (X_1, X_2, \ldots, X_n)' \sim N_n(\mu \mathbf{1}, \sigma^2 I_n)$. Now consider an orthogonal matrix $A_{n \times n} = ((a_{ij}))$ whose first row is $a_1' = (\frac{1}{\sqrt n}, \frac{1}{\sqrt n}, \ldots, \frac{1}{\sqrt n}) = \frac{1}{\sqrt n}\mathbf{1}'$. (Simply take a basis for $\mathbb{R}^n$ with $a_1$ as the first vector and orthogonalize the rest.) Now let $Y = AX$, i.e., $Y_i = a_i' X$ for $i = 1, 2, \ldots, n$. Since $X \sim N_n(\mu\mathbf{1}, \sigma^2 I_n)$, we have $Y \sim N_n(\mu A\mathbf{1}, \sigma^2 A A') = N_n(\mu A\mathbf{1}, \sigma^2 I_n)$. Therefore the $Y_i$ are independent normals with variance $\sigma^2$. Further, $E(Y_i) = E(a_i' X) = \mu a_i' \mathbf{1}$. Thus $E(Y_1) = \mu a_1'\mathbf{1} = \mu \frac{1}{\sqrt n}\mathbf{1}'\mathbf{1} = \sqrt n\,\mu$, while for $i > 1$, $E(Y_i) = \mu a_i'\mathbf{1} = \mu \sqrt n\, a_i' a_1 = 0$ because the rows of $A$ are orthonormal. That is, $Y_2, \ldots, Y_n$ are i.i.d. $N(0, \sigma^2)$, so $\sum_{i=2}^n Y_i^2/\sigma^2 \sim \chi^2_{n-1}$. Further, $Y_1 = a_1' X = \frac{1}{\sqrt n}\sum_{i=1}^n X_i = \sqrt n\,\bar X \sim N(\sqrt n\,\mu, \sigma^2)$ and is independent of $(Y_2, \ldots, Y_n)$. Also, since $A$ is orthogonal, $X'X = Y'Y$, so $S^2 = \sum_{i=1}^n (X_i - \bar X)^2 = \sum_{i=1}^n X_i^2 - n\bar X^2 = X'X - Y_1^2 = Y'Y - Y_1^2 = \sum_{i=2}^n Y_i^2$. Hence $S^2/\sigma^2 \sim \chi^2_{n-1}$ and is independent of $Y_1$, and therefore of $\bar X$.
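The orthogonal matrix in this proof is easy to construct numerically, e.g. via a QR factorization. The sketch below (illustrative values, not from the notes) verifies the two identities the proof relies on: $Y_1 = \sqrt n\,\bar X$ and $\sum_{i=2}^n Y_i^2 = S^2$.

```python
# Numerical check (illustrative, not from the notes): build an orthogonal A
# whose first row is (1/sqrt(n), ..., 1/sqrt(n)) and verify the identities
# Y_1 = sqrt(n) * Xbar and sum_{i>=2} Y_i^2 = S^2 for Y = A X.
import numpy as np

rng = np.random.default_rng(1)
n = 8
X = rng.normal(2.0, 1.5, size=n)   # a sample from N(mu, sigma^2), values illustrative

# Start from a basis whose first vector is 1 and orthonormalize it via QR.
B = np.eye(n)
B[:, 0] = 1.0
Q, _ = np.linalg.qr(B)
if Q[0, 0] < 0:
    Q[:, 0] *= -1                  # fix the sign so the first column is +1/sqrt(n) * 1
A = Q.T                            # first row of A is (1/sqrt(n), ..., 1/sqrt(n))

Y = A @ X
print(np.allclose(A @ A.T, np.eye(n)))          # A is orthogonal
print(np.isclose(Y[0], np.sqrt(n) * X.mean()))  # Y_1 = sqrt(n) * Xbar
print(np.isclose((Y[1:]**2).sum(), ((X - X.mean())**2).sum()))  # sum_{i>=2} Y_i^2 = S^2
```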

If $X \sim N_p(0, I)$, then $X'X = \sum_{i=1}^p X_i^2 \sim \chi^2_p$; i.e., $X' I X \sim \chi^2_p$. Also note that $X'\big(\frac{1}{\sqrt p}\mathbf{1}\,\frac{1}{\sqrt p}\mathbf{1}'\big)X = p\,\bar X^2 \sim \chi^2_1$ and $X'\big(I - \frac{1}{p}\mathbf{1}\mathbf{1}'\big)X \sim \chi^2_{p-1}$.
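These two quadratic forms are just matrix versions of familiar quantities; a minimal check (illustrative values, not from the notes) confirms the underlying algebraic identities:

```python
# Identity check (illustrative, not from the notes):
# X'(1/p 11')X = p * Xbar^2  and  X'(I - 1/p 11')X = sum_i (X_i - Xbar)^2.
import numpy as np

rng = np.random.default_rng(2)
p = 5
X = rng.standard_normal(p)          # one draw of X ~ N_p(0, I)

ones = np.ones((p, 1))
P1 = ones @ ones.T / p              # rank-1 projection onto span{1}
C = np.eye(p) - P1                  # centering matrix, rank p - 1

print(np.isclose(X @ P1 @ X, p * X.mean()**2))
print(np.isclose(X @ C @ X, ((X - X.mean())**2).sum()))
```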

What is the distribution of $X'AX$ for an arbitrary $A$ that is p.s.d.? Without loss of generality we can assume that $A$ is symmetric, since $X'AX = X'\big(\frac{1}{2}(A + A')\big)X = X'BX$, where $B = \frac{1}{2}(A + A')$ is always symmetric. Since $A$ is symmetric p.s.d., $A = \Gamma D_\lambda \Gamma'$, so $X'AX = X'\Gamma D_\lambda \Gamma' X = Y' D_\lambda Y$, where $Y = \Gamma' X \sim N_p(0, \Gamma'\Gamma = I)$. Therefore $X'AX = \sum_{i=1}^p d_i Y_i^2$, where the $d_i$ are the eigenvalues of $A$ and the $Y_i$ are i.i.d. $N(0,1)$. Hence $X'AX$ has a $\chi^2$ distribution (with degrees of freedom equal to the number of unit eigenvalues) if each $d_i$ is 1 or 0. Equivalently, $X'AX \sim \chi^2$ if $A^2 = A$, i.e., $A$ is symmetric idempotent, i.e., $A$ is an orthogonal projection matrix. The equivalence may be seen as follows. If $d_1 \ge d_2 \ge \cdots \ge d_p \ge 0$ are such that $d_1 = d_2 = \cdots = d_r = 1$ and $d_{r+1} = \cdots = d_p = 0$, then
\[
A = \Gamma \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} \Gamma'
\quad\Longrightarrow\quad
A^2 = \Gamma \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} \Gamma' = A.
\]
Conversely, if $A^2 = A$, then $\Gamma D_\lambda \Gamma' \Gamma D_\lambda \Gamma' = \Gamma D_\lambda^2 \Gamma' = \Gamma D_\lambda \Gamma'$ implies that $D_\lambda^2 = D_\lambda$, i.e., $d_i^2 = d_i$, so each $d_i$ is 0 or 1.
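These equivalences are easy to verify numerically. The sketch below (illustrative choices, not from the notes) builds an orthogonal projection $A = H(H'H)^{-1}H'$ onto the column space of an arbitrary full-rank $H$ and confirms that $A$ is symmetric, idempotent, and has eigenvalues consisting of $r$ ones and $p - r$ zeros:

```python
# Illustrative check (not from the notes): an orthogonal projection
# A = H (H'H)^{-1} H' is symmetric and idempotent, with eigenvalues
# consisting of r ones and p - r zeros.
import numpy as np

rng = np.random.default_rng(3)
p, r = 6, 2
H = rng.standard_normal((p, r))          # an arbitrary full-rank p x r matrix
A = H @ np.linalg.inv(H.T @ H) @ H.T     # projection onto the column space of H

print(np.allclose(A, A.T))               # symmetric
print(np.allclose(A @ A, A))             # idempotent: A^2 = A
print(np.linalg.eigvalsh(A).round(8))    # ascending: p - r zeros, then r ones
```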

We now show the converse. Suppose $X'AX \sim \chi^2_r$ and $A$ is symmetric p.s.d. Then the mgf of $X'AX$ is
\[
M_{X'AX}(t) = \int_0^\infty e^{tu}\, \frac{e^{-u/2}\, u^{r/2 - 1}}{2^{r/2}\,\Gamma(r/2)}\, du
= \int_0^\infty \frac{e^{-\frac{u}{2}(1 - 2t)}\, u^{r/2 - 1}}{2^{r/2}\,\Gamma(r/2)}\, du
= (1 - 2t)^{-r/2}, \qquad \text{for } 1 - 2t > 0.
\]
But in distribution $X'AX = \sum_{i=1}^p d_i Y_i^2$ with the $Y_i$ i.i.d. $N(0,1)$, so, using the independence of the $Y_i$,
\[
M_{X'AX}(t) = E\Big[\exp\Big(t \sum_{i=1}^p d_i Y_i^2\Big)\Big]
= E\Big[\prod_{i=1}^p \exp(t d_i Y_i^2)\Big]
= \prod_{i=1}^p E\big[\exp(t d_i Y_i^2)\big]
= \prod_{i=1}^p (1 - 2t d_i)^{-1/2}, \qquad \text{for } 1 - 2t d_i > 0.
\]
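The mgf identity can be checked by Monte Carlo (a rough sketch, not from the notes; the projection matrix, the value of $t$, and the sample size are all illustrative choices): the empirical mean of $e^{tQ}$ for $Q = X'AX$ should approach $(1-2t)^{-r/2}$.

```python
# Monte Carlo check (illustrative, not from the notes) of the mgf identity:
# for Q = X'AX with A a rank-r projection and 1 - 2t > 0, the empirical
# mean of exp(t * Q) should be close to (1 - 2t)^{-r/2}.
import numpy as np

rng = np.random.default_rng(5)
p, r, t = 6, 2, 0.2                      # any t with 1 - 2t > 0 works here
H = rng.standard_normal((p, r))
A = H @ np.linalg.inv(H.T @ H) @ H.T     # rank-r orthogonal projection

X = rng.standard_normal((200_000, p))    # rows are draws of X ~ N_p(0, I)
Q = np.einsum('ij,jk,ik->i', X, A, X)    # X'AX for each draw

print(np.exp(t * Q).mean())              # Monte Carlo estimate of M(t)
print((1 - 2*t) ** (-r / 2))             # theoretical value (1 - 2t)^{-r/2}
```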

Now note that $X'AX \sim \chi^2_r$ implies $X'AX > 0$ w.p. 1, i.e., $\sum_{i=1}^p d_i Y_i^2 > 0$ w.p. 1, which in turn implies that $d_i \ge 0$ for all $i$. (This is because, if some $d_l < 0$, then since $Y_l^2 \sim \chi^2_1$ independently of the $Y_i$, $i \ne l$, we would have $\sum_{i=1}^p d_i Y_i^2 < 0$ with positive probability.) Therefore, for $t < \min_{i:\, d_i > 0} \frac{1}{2 d_i}$, equating the two mgf's gives
\[
(1 - 2t)^{-r/2} = \prod_{i=1}^p (1 - 2t d_i)^{-1/2},
\quad\text{or}\quad
(1 - 2t)^{r/2} = \prod_{i=1}^p (1 - 2t d_i)^{1/2},
\quad\text{or}\quad
(1 - 2t)^r = \prod_{i=1}^p (1 - 2t d_i).
\]
Both sides are polynomials in $t$ that agree on an interval, so they are equal as polynomials and their roots must be the same; check that $r$ of the $d_i$'s must be 1 and the rest 0. Thus the following result follows.

Result. $X'AX \sim \chi^2_r$ if and only if $A$ is a symmetric idempotent matrix (equivalently, an orthogonal projection matrix) of rank $r$.
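Finally, a simulation sketch of this result (illustrative settings, not from the notes): for a rank-$r$ orthogonal projection $A$ and $X \sim N_p(0, I)$, the simulated values of $X'AX$ should fit $\chi^2_r$.

```python
# Simulation check (illustrative, not from the notes) of the final result:
# for a rank-r orthogonal projection A and X ~ N_p(0, I), X'AX ~ chi^2_r.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
p, r = 6, 2
H = rng.standard_normal((p, r))
A = H @ np.linalg.inv(H.T @ H) @ H.T     # rank-r orthogonal projection

X = rng.standard_normal((100_000, p))    # rows are draws of X ~ N_p(0, I)
Q = np.einsum('ij,jk,ik->i', X, A, X)    # X'AX for each draw

print(Q.mean(), Q.var())                   # should be close to r and 2r
print(stats.kstest(Q, 'chi2', args=(r,)))  # large p-value indicates a chi^2_r fit
```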