NATIONAL UNIVERSITY OF SINGAPORE
MATHEMATICS SOCIETY
PAST YEAR PAPER SOLUTIONS

ST2131/MA2216 Probability
AY 2007/2008 Sem 1
with credits to Teo Wei Hao, Zheng Shaoxuan
Question 1
(a) We are given that W ∼ N(400, 40²). Let n be the number of cars required for the probability of structural damage to the bridge to exceed 0.1, and let Xi be the r.v. of the weight of the i-th car, i = 1, 2, . . . , n. Since n is large, by the C.L.T. we have X1 + · · · + Xn ≈ N(3n, 0.3²n), and thus (X1 + · · · + Xn) − W ≈ N(3n − 400, 0.3²n + 40²).
By referring to the statistical table, we obtain
\[
P\{Z > 1.2816\} = 0.1 \le P\left\{ \sum_{i=1}^{n} X_i - W > 0 \right\} \approx P\left\{ Z > \frac{400 - 3n}{\sqrt{0.3^2 n + 40^2}} \right\}.
\]
Thus we conclude that 1.2816 ≥ (400 − 3n)/√(0.3²n + 40²). Let u = √(0.3²n + 40²), i.e. n = (u² − 40²)/0.3². This gives us
\[
1.2816 \ge \frac{1}{u}\left( 400 - 3\left( \frac{u^2 - 40^2}{0.3^2} \right) \right),
\]
\[
(1.2816)(0.3^2)\,u \ge (400)(0.3^2) - 3(u^2 - 40^2),
\]
\[
3u^2 + (1.2816)(0.3^2)\,u - (400)(0.3^2) - (3)(40^2) \ge 0.
\]
We solve the above quadratic inequality with u ≥ 0, and substitute back to get n ≥ 116.2. Thus n ≥ 117.
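As a quick numerical sanity check, the following Python sketch (assuming SciPy is available; 1.2816 is the 0.90 quantile of the standard normal) solves the quadratic and recovers n ≥ 117:

# Sketch: numerically verify n >= 117 for Question 1(a).
import math
from scipy.stats import norm

z = norm.ppf(0.9)              # ~1.2816, since P{Z > 1.2816} = 0.1

# Solve 3u^2 + z*0.3^2*u - (400*0.3^2 + 3*40^2) = 0 for u >= 0
a, b, c = 3.0, z * 0.3**2, -(400 * 0.3**2 + 3 * 40**2)
u = (-b + math.sqrt(b**2 - 4 * a * c)) / (2 * a)
print((u**2 - 40**2) / 0.3**2)   # ~116.2, so n >= 117

# Direct check: with n = 117 the damage probability just exceeds 0.1
n = 117
print(1 - norm.cdf((400 - 3 * n) / math.sqrt(0.3**2 * n + 40**2)))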
(b) Φ can be treated as a function of one variable; thus, together with X and Z being independent, we have
\[
E(\Phi(X)) = \int_{\mathbb{R}} \Phi(x) f_X(x)\, dx = \int_{\mathbb{R}} P\{Z \le x\}\, f_X(x)\, dx = P\{Z \le X\}.
\]
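This identity can also be checked by simulation; the sketch below assumes, purely for illustration, that X ∼ N(1, 1):

# Sketch: Monte Carlo check of E(Phi(X)) = P{Z <= X}, with illustrative X ~ N(1, 1)
# and Z ~ N(0, 1) independent of X.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.0, size=10**6)
z = rng.standard_normal(10**6)

print(norm.cdf(x).mean())   # estimate of E(Phi(X))
print((z <= x).mean())      # estimate of P{Z <= X}; agrees up to Monte Carlo error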
Question 2
(i) Since X1 , X2 , X3 are independent r.v., we have X ∼ P (λ1 + λ2 ) and Y ∼ P (λ2 + λ3 ).
(ii) We have E(X) = E(X1 ) + E(X2 ) = λ1 + λ2 , and E(Y ) = E(X2 ) + E(X3 ) = λ2 + λ3 .
(iii) We have Cov(Xi, Xj) = 0 if i ≠ j. Thus
Cov(X, Y ) = Cov(X1 + X2 , X2 + X3 )
= Cov(X1 , X2 ) + Cov(X2 , X2 ) + Cov(X1 , X3 ) + Cov(X2 , X3 )
= Var(X2 )
= λ2 .
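A short simulation sketch confirming (i)–(iii), with illustrative rates λ1 = 1, λ2 = 2, λ3 = 3 (these values are not part of the question):

# Sketch: simulation check of parts (i)-(iii) with illustrative rates.
import numpy as np

l1, l2, l3 = 1.0, 2.0, 3.0
rng = np.random.default_rng(1)
x1 = rng.poisson(l1, 10**6)
x2 = rng.poisson(l2, 10**6)
x3 = rng.poisson(l3, 10**6)
x, y = x1 + x2, x2 + x3

print(x.mean(), y.mean())      # ~ l1 + l2 and l2 + l3
print(np.cov(x, y)[0, 1])      # ~ l2, matching Cov(X, Y) = lambda2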
(iv) Since X1, X2, X3 are independent r.v., we have
\[
\begin{aligned}
P\{X = j, Y = k\} &= \sum_{i=0}^{j} P\{X = j, Y = k \mid X_2 = i\}\, P\{X_2 = i\} \\
&= \sum_{i=0}^{j} P\{X_1 = j - i, X_3 = k - i \mid X_2 = i\}\, P\{X_2 = i\} \\
&= \sum_{i=0}^{j} P\{X_1 = j - i\}\, P\{X_3 = k - i\}\, P\{X_2 = i\} \\
&= \sum_{i=0}^{\min(j,k)} \left( e^{-\lambda_1} \frac{\lambda_1^{j-i}}{(j-i)!} \right) \left( e^{-\lambda_3} \frac{\lambda_3^{k-i}}{(k-i)!} \right) \left( e^{-\lambda_2} \frac{\lambda_2^{i}}{i!} \right) \\
&= e^{-(\lambda_1 + \lambda_2 + \lambda_3)} \sum_{i=0}^{\min(j,k)} \frac{\lambda_1^{j-i}\, \lambda_2^{i}\, \lambda_3^{k-i}}{(j-i)!\; i!\; (k-i)!},
\end{aligned}
\]
where the upper limit may be taken as min(j, k) since P{X3 = k − i} = 0 whenever i > k.
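The closed form can be checked numerically against a direct sum of Poisson p.m.f.'s; the sketch below uses illustrative rates and a particular (j, k):

# Sketch: numeric check of the joint p.m.f. formula in (iv).
import math
from scipy.stats import poisson

l1, l2, l3 = 1.0, 2.0, 3.0
j, k = 4, 3

closed = math.exp(-(l1 + l2 + l3)) * sum(
    l1**(j - i) * l2**i * l3**(k - i)
    / (math.factorial(j - i) * math.factorial(i) * math.factorial(k - i))
    for i in range(min(j, k) + 1)
)
direct = sum(
    poisson.pmf(j - i, l1) * poisson.pmf(i, l2) * poisson.pmf(k - i, l3)
    for i in range(min(j, k) + 1)
)
print(closed, direct)   # the two values coincide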
Question 3
(i) We have,
\[
f_U(u) = \int_{\mathbb{R}} f_{(U,V)}(u,v)\, dv = \int_{u^{-1}}^{u} \frac{1}{2u^2 v}\, dv
= \left[ \frac{1}{2u^2} \ln v \right]_{u^{-1}}^{u}
= \frac{1}{2u^2}\left( \ln u - \ln u^{-1} \right) = \frac{1}{u^2} \ln u, \qquad 1 < u.
\]
Thus the marginal p.d.f. of U is given by
\[
f_U(u) = \begin{cases} \dfrac{1}{u^2} \ln u, & 1 < u; \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
For 0 < v < 1, we have 1 < v⁻¹ ≤ u. Therefore,
\[
f_V(v) = \int_{\mathbb{R}} f_{(U,V)}(u,v)\, du = \int_{v^{-1}}^{\infty} \frac{1}{2u^2 v}\, du
= \left[ \frac{-1}{2uv} \right]_{v^{-1}}^{\infty}
= \frac{1}{2}.
\]
For v ≥ 1, we have 1 ≤ v ≤ u. Therefore,
\[
f_V(v) = \int_{\mathbb{R}} f_{(U,V)}(u,v)\, du = \int_{v}^{\infty} \frac{1}{2u^2 v}\, du
= \left[ \frac{-1}{2uv} \right]_{v}^{\infty}
= \frac{1}{2v^2}.
\]
Thus the marginal p.d.f. of V is given by
\[
f_V(v) = \begin{cases} \dfrac{1}{2}, & 0 < v < 1; \\[4pt] \dfrac{1}{2v^2}, & v \ge 1; \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
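As a check, both marginals integrate to 1; the sketch below assumes the joint density f_{(U,V)}(u, v) = 1/(2u²v) on {u > 1, u⁻¹ ≤ v ≤ u} used above:

# Sketch: numeric check that the marginals above integrate to 1.
import numpy as np
from scipy.integrate import quad

f_U = lambda u: np.log(u) / u**2                       # for u > 1
f_V = lambda v: 0.5 if v < 1 else 0.5 / v**2           # for v > 0

print(quad(f_U, 1, np.inf)[0])                         # ~1.0
print(quad(f_V, 0, 1)[0] + quad(f_V, 1, np.inf)[0])    # ~1.0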
(ii) No.
Since f_{(U,V)}(u, v) ≠ fU(u)fV(v) for some u, v ∈ ℝ (e.g. at (u, v) = (2, 1/2), f_{(U,V)} = 1/4 while fU(2)fV(1/2) = (ln 2)/8), U and V are not independent.
(iii) Let x = √(uv) and y = √(u/v). This gives us u = xy and v = x/y.
For u ∈ (1, ∞) and v ∈ (0, ∞), we have x, y ∈ (0, ∞). Now since 1 < u and u⁻¹ ≤ v ≤ u, we have
1/(xy) ≤ x/y ≤ xy. This implies that 1 ≤ x², y², and since x, y > 0, we have 1 ≤ x, y.
Next, we obtain
\[
\frac{\partial x}{\partial u} = \frac{1}{2}\sqrt{\frac{v}{u}}, \qquad
\frac{\partial x}{\partial v} = \frac{1}{2}\sqrt{\frac{u}{v}}, \qquad
\frac{\partial y}{\partial u} = \frac{1}{2}\frac{1}{\sqrt{uv}}, \qquad
\frac{\partial y}{\partial v} = -\frac{1}{2}\sqrt{\frac{u}{v^3}}.
\]
Thus
\[
J(u,v) = \left( \frac{1}{2}\sqrt{\frac{v}{u}} \right)\left( -\frac{1}{2}\sqrt{\frac{u}{v^3}} \right) - \left( \frac{1}{2}\sqrt{\frac{u}{v}} \right)\left( \frac{1}{2}\frac{1}{\sqrt{uv}} \right) = -\frac{1}{2v}.
\]
Since v > 0, we have |J(u, v)| = 1/(2v). Therefore,
\[
f_{(X,Y)}(x,y) = \frac{1}{|J(u,v)|}\, f_{(U,V)}(u,v) = (2v)\left( \frac{1}{2u^2 v} \right) = \frac{1}{u^2} = \frac{1}{x^2 y^2}, \qquad x, y \ge 1.
\]
Thus the joint p.d.f. of X and Y is given by
\[
f_{(X,Y)}(x,y) = \begin{cases} \dfrac{1}{x^2 y^2}, & x, y \ge 1; \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
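The Jacobian and the resulting joint density can be verified symbolically; a small sketch assuming SymPy is available:

# Sketch: symbolic check of the Jacobian in (iii).
import sympy as sp

u, v = sp.symbols('u v', positive=True)
x = sp.sqrt(u * v)
y = sp.sqrt(u / v)

J = sp.Matrix([[sp.diff(x, u), sp.diff(x, v)],
               [sp.diff(y, u), sp.diff(y, v)]]).det()
print(sp.simplify(J))                                   # -1/(2*v)

f_uv = 1 / (2 * u**2 * v)
f_xy = sp.simplify(f_uv / sp.Abs(J))                    # = 1/u^2
print(sp.simplify(f_xy - 1 / (x * y)**2))               # 0, i.e. equals 1/(x^2 y^2)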
(iv) Since f_{(X,Y)}(x, y) = (1/x²)(1/y²), we have the marginal p.d.f.'s of X and Y given by
\[
f_X(x) = \begin{cases} \dfrac{1}{x^2}, & x \ge 1; \\[4pt] 0, & \text{otherwise,} \end{cases}
\qquad
f_Y(y) = \begin{cases} \dfrac{1}{y^2}, & y \ge 1; \\[4pt] 0, & \text{otherwise.} \end{cases}
\]
X and Y are thus independent.
Question 4
(i) We are given that X and Y are r.v. such that Y ∼ N(µ, σ²) and X | (Y = y) ∼ N(y, 1).
This gives us E(X | Y = y) = y. Thus E(X) = E(E(X | Y )) = E(Y ) = µ.
(ii) We have
E(X² | Y = y) = Var(X | Y = y) + [E(X | Y = y)]² = 1 + y².
(iii) Using (4ii.), we get
Var(X) = E(X²) − E(X)² = E(E(X² | Y )) − µ²
       = E(1 + Y²) − µ²
       = 1 + E(Y²) − µ²
       = 1 + Var(Y ) + E(Y )² − µ²
       = 1 + σ².
(iv) We have,
E(XY ) = E(E(XY | Y )) = E(Y E(X | Y ))
       = E(Y²)
       = Var(Y ) + E(Y )²
       = σ² + µ².
Thus Cov(X, Y ) = E(XY ) − E(X)E(Y ) = (σ² + µ²) − µ² = σ².
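A simulation sketch for (i)–(iv), with illustrative values µ = 2 and σ = 1.5 (any values would do):

# Sketch: simulation check of E(X), Var(X) and Cov(X, Y) with illustrative mu, sigma.
import numpy as np

mu, sigma = 2.0, 1.5
rng = np.random.default_rng(2)
y = rng.normal(mu, sigma, 10**6)
x = rng.normal(y, 1.0)               # X | Y = y ~ N(y, 1)

print(x.mean())                      # ~ mu
print(x.var())                       # ~ 1 + sigma^2 = 3.25
print(np.cov(x, y)[0, 1])            # ~ sigma^2 = 2.25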
(v) We have the covariance matrix
\[
\Sigma = \begin{pmatrix} \mathrm{Var}(X) & \mathrm{Cov}(X,Y) \\ \mathrm{Cov}(X,Y) & \mathrm{Var}(Y) \end{pmatrix}
= \begin{pmatrix} 1 + \sigma^2 & \sigma^2 \\ \sigma^2 & \sigma^2 \end{pmatrix}.
\]
We thus get det(Σ) = (1 + σ²)(σ²) − (σ²)(σ²) = σ².
(vi) We have
\[
f_{(X,Y)}(x,y) = f_{X|Y}(x|y)\, f_Y(y)
= \left( \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-y)^2} \right) \left( \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2} \right)
= \frac{1}{2\pi\sigma}\, e^{-\frac{1}{2}(x-y)^2 - \frac{1}{2}\left(\frac{y-\mu}{\sigma}\right)^2}, \qquad x, y \in \mathbb{R}.
\]
Now with what we determined in (4v), let r = (x − µ, y − µ)ᵀ. Then we have
\[
f_{(X,Y)}(x,y) = \frac{1}{2\pi (\det \Sigma)^{1/2}}\, e^{-\frac{1}{2}\, r^{T} \Sigma^{-1} r}, \qquad x, y \in \mathbb{R}.
\]
Thus (X, Y ) has a bivariate normal distribution.
(vii) Yes.
We have Cov(X − Y, Y ) = Cov(X, Y ) − Cov(Y, Y ) = σ² − σ² = 0.
Since (X − Y, Y ) is bivariate normal, we can thus conclude that X − Y and Y are independent.
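The same simulation idea checks (vii); a short self-contained sketch with the same illustrative µ and σ:

# Sketch: simulation check of (vii) with illustrative mu = 2, sigma = 1.5.
import numpy as np

mu, sigma = 2.0, 1.5
rng = np.random.default_rng(3)
y = rng.normal(mu, sigma, 10**6)
x = rng.normal(y, 1.0)
print(np.cov(x - y, y)[0, 1])        # ~ 0: X - Y and Y are uncorrelated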
Question 5
(a)
(i) We have,
P{Jk = 1} = P{Jk = 1 | Ik = 0}P{Ik = 0} + P{Jk = 1 | Ik = 1}P{Ik = 1}
= P{Ik−1 = 1 | Ik = 0}P{Ik = 0} + P{Ik−1 = 0 | Ik = 1}P{Ik = 1}
= P{Ik−1 = 1}P{Ik = 0} + P{Ik−1 = 0}P{Ik = 1}
= p(1 − p) + p(1 − p) = 2p(1 − p).
(ii) By the definition of the Jk 's, we have X = \sum_{k=2}^{n} J_k.
(iii) Thus,
\[
E(X) = E\left( \sum_{k=2}^{n} J_k \right) = \sum_{k=2}^{n} E(J_k) = \sum_{k=2}^{n} 2p(1 - p) = 2p(1 - p)(n - 1).
\]
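A simulation sketch for (iii), taking X to be the number of changeovers in n i.i.d. Bernoulli(p) trials as in the setup above, with illustrative n = 20 and p = 0.3:

# Sketch: simulation check of E(X) = 2p(1-p)(n-1) for the changeover count X.
import numpy as np

n, p = 20, 0.3
rng = np.random.default_rng(4)
trials = rng.random((10**5, n)) < p                        # I_1, ..., I_n per replicate
changes = (trials[:, 1:] != trials[:, :-1]).sum(axis=1)    # X = J_2 + ... + J_n

print(changes.mean())                                      # ~ 2 * 0.3 * 0.7 * 19 = 7.98
print(2 * p * (1 - p) * (n - 1))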
(b)
(i) We have
\[
f_X(x) = \int_{\mathbb{R}} f(x,y)\, dy = \int_{0}^{x} \frac{2}{x}\, e^{-2x}\, dy = 2e^{-2x}, \qquad x > 0.
\]
Thus the marginal p.d.f. of X is given by
\[
f_X(x) = \begin{cases} 2e^{-2x}, & x > 0; \\ 0, & \text{otherwise,} \end{cases}
\]
i.e. X ∼ Exp(2).
(ii) We have f_{Y|X}(y|x) = f(x, y)/fX(x) = 1/x, for 0 < y < x. Thus,
\[
E(Y \mid X = x) = \int_{\mathbb{R}} y\, f_{Y|X}(y|x)\, dy = \int_{0}^{x} \frac{y}{x}\, dy = \left[ \frac{y^2}{2x} \right]_{0}^{x} = \frac{x}{2}.
\]
(iii) By the result of (5bii), and using the fact that X ∼ Exp(2) gives us E(X) = 1/2, we have
\[
E(Y) = E(E(Y \mid X)) = E\left( \frac{X}{2} \right) = \frac{1}{2}\, E(X) = \frac{1}{2}\left( \frac{1}{2} \right) = \frac{1}{4}.
\]
(iv) Now using the additional fact that Var(X) = 1/4,
\[
E(XY) = E(E(XY \mid X)) = E(X\, E(Y \mid X)) = E\left( X \cdot \frac{X}{2} \right) = \frac{1}{2}\, E(X^2)
= \frac{1}{2}\left( \mathrm{Var}(X) + E(X)^2 \right) = \frac{1}{2}\left( \frac{1}{4} + \left( \frac{1}{2} \right)^2 \right) = \frac{1}{4}.
\]
Alt: The above method uses the solutions of all the parts prior to it. In the situation where we do not have the solution to any of those parts, there is still a short solution that obtains the answer to (5biv) directly, using
\[
\begin{aligned}
E(XY) &= \int_{0}^{\infty} \int_{y}^{\infty} xy\, f(x,y)\, dx\, dy = \int_{0}^{\infty} \int_{y}^{\infty} 2y e^{-2x}\, dx\, dy \\
&= \int_{0}^{\infty} y \left[ -e^{-2x} \right]_{y}^{\infty} dy \\
&= \int_{0}^{\infty} y e^{-2y}\, dy \\
&= \left[ -\frac{1}{2}\, y e^{-2y} \right]_{0}^{\infty} - \int_{0}^{\infty} -\frac{1}{2}\, e^{-2y}\, dy \\
&= 0 + \left[ -\frac{1}{4}\, e^{-2y} \right]_{0}^{\infty} = \frac{1}{4}.
\end{aligned}
\]
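Both computations of E(XY) can be checked by simulation; the sketch below uses the factorization X ∼ Exp(2) and Y | X = x ∼ Uniform(0, x) implied by f_{Y|X}(y|x) = 1/x above:

# Sketch: simulation check of E(Y) = 1/4 and E(XY) = 1/4 in (5b).
import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=0.5, size=10**6)    # Exp(2) has mean 1/2
y = rng.uniform(0.0, x)                       # Y | X = x is uniform on (0, x)

print(y.mean())                               # ~ E(Y) = 1/4
print((x * y).mean())                         # ~ E(XY) = 1/4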