Stochastic Processes 2 - solutions to 2007 exam
1. (a) This is an example presented in lectures. Let $\mathcal{F}_n = \sigma\{X_1, X_2, \ldots, X_n\}$. Then
\[
E[M_{n+1} \mid \mathcal{F}_n] = \frac{1}{\varphi(\theta)^{n+1}} E[e^{\theta S_n} e^{\theta X_{n+1}} \mid \mathcal{F}_n]
= \frac{e^{\theta S_n}}{\varphi(\theta)^{n+1}} E[e^{\theta X_{n+1}}]
= \frac{e^{\theta S_n}}{\varphi(\theta)^{n+1}}\, \varphi(\theta) = M_n.
\]
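As a numerical sanity check, the sketch below estimates $E[M_n]$ by Monte Carlo; the martingale property forces $E[M_n] = E[M_0] = 1$ for every $n$. Taking $X_i \sim N(0,1)$ (so that $\varphi(\theta) = e^{\theta^2/2}$) is an assumption made purely for illustration.

    # Monte Carlo check that M_n = exp(theta*S_n)/phi(theta)^n has mean 1.
    # The N(0,1) choice for the X_i is an illustrative assumption.
    import numpy as np

    rng = np.random.default_rng(0)
    theta, n, paths = 0.5, 10, 200_000
    X = rng.standard_normal((paths, n))           # X_i i.i.d. N(0,1)
    S = X.cumsum(axis=1)                          # S_1, ..., S_n along each path
    phi = np.exp(theta**2 / 2)                    # phi(theta) = E[exp(theta*X_1)]
    M = np.exp(theta * S) / phi ** np.arange(1, n + 1)
    print(M.mean(axis=0))                         # each entry should be close to 1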
(b) Let $M$ be a martingale and $T$ a stopping time satisfying any one of the following:
• $T \le$ constant;
• $T < \infty$ a.s. and $|M_{t \wedge T}| \le$ constant a.s. for all $t$;
• $E[T] < \infty$ and $|M_{(t+1) \wedge T} - M_{t \wedge T}| \le$ constant a.s. for all $t$;
then $E[M_T] = E[M_0]$.
(c) (i) Apply (a) with $X_k = Y_k - p$ and $\theta = L$, and observe that $\varphi(L) = E[e^{L(Y_1 - p)}] = E[e^{-L(U_1 - u)}] = 1$.
(ii) Jensen's inequality says $1 = E[e^{-L(U_1 - u)}] = E[e^{L(Y_1 - p)}] > e^{L E(Y_1 - p)}$. Since $L > 0$, taking logs gives the desired result. (We have strict inequality rather than $\ge$ because $e^{Lx}$ is strictly convex.)
(iii) Since $p > E(Y_1)$, the strong law of large numbers gives $U_n \to +\infty$ a.s. as $n \to \infty$, so $P(T_0 = \infty) > 0$ and consequently none of the conditions of the optional stopping theorem can be satisfied.
(iv) If $M_n$ is a martingale and $T$ a stopping time, then the stopped process $M_{T \wedge n}$ is also a martingale, so at any fixed time $N$, $E(M_{T \wedge N}) = M_0 = 1$. Alternatively, since the stopping time $T_0 \wedge N \le N$ is bounded, the optional stopping theorem applies.
(v) This problem is a variation of some tutorial problems, which concern random walks/Brownian motions drifting to $+\infty$. We have
\[
\begin{aligned}
1 = E[e^{-L(U_{T_0 \wedge N} - u)}] &= E[e^{-L(U_{T_0 \wedge N} - u)} I_{\{T_0 < N\}}] + E[e^{-L(U_{T_0 \wedge N} - u)} I_{\{T_0 \ge N\}}] \\
&\ge E[e^{-L(U_{T_0 \wedge N} - u)} I_{\{T_0 < N\}}] = E[e^{-L(U_{T_0} - u)} I_{\{T_0 < N\}}].
\end{aligned}
\]
Since $U_{T_0} \le 0$ by definition of $T_0$ and $L > 0$, $e^{-L(U_{T_0} - u)} \ge e^{Lu}$, so from the above we get
\[
1 \ge E[e^{-L(U_{T_0} - u)} I_{\{T_0 < N\}}] \ge e^{Lu} E[I_{\{T_0 < N\}}].
\]
Letting $N \to \infty$ then gives the desired result: $I_{\{T_0 < N\}} \to I_{\{T_0 < \infty\}}$ and, since $I_{\{T_0 < N\}} \le 1$, the dominated convergence theorem gives $E[I_{\{T_0 < N\}}] \to E[I_{\{T_0 < \infty\}}] = P(T_0 < \infty)$.
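The resulting bound $P(T_0 < \infty) \le e^{-Lu}$ can be illustrated numerically. In the sketch below the surplus is taken to be $U_n = u + np - (Y_1 + \cdots + Y_n)$, and the choice $Y_i \sim \mathrm{Exp}(1)$ with premium $p = 1.5$ is an assumption for illustration; for this choice $L$ solves $e^{-Lp}/(1 - L) = 1$.

    # Illustrative check of P(T0 < infinity) <= exp(-L*u) (Lundberg-type bound).
    # Exp(1) claims and p = 1.5 are assumed example values.
    import numpy as np
    from scipy.optimize import brentq

    p, u = 1.5, 5.0                               # premium rate and initial capital
    # Nontrivial root of phi(L) = exp(-L*p)/(1 - L) = 1 for Exp(1) claims:
    L = brentq(lambda l: np.exp(-l * p) / (1 - l) - 1, 1e-6, 1 - 1e-6)

    rng = np.random.default_rng(1)
    paths, N = 50_000, 2_000                      # horizon N stands in for T0 < infinity
    U = np.full(paths, u)
    ruined = np.zeros(paths, dtype=bool)
    for _ in range(N):
        U += p - rng.exponential(1.0, paths)      # U_{n+1} = U_n + p - Y_{n+1}
        ruined |= (U <= 0)
    print(ruined.mean(), "<=", np.exp(-L * u))    # estimated P(T0 < N) vs the bound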
2. Part (a) is a tutorial problem while the rest is new, although the results of (g) and (h)
were obtained in lectures via a different approach.
(a) Let $\mathcal{F}_n = \sigma\{W_1, \ldots, W_n\}$. Then $W_{n+1}$ is a compound sum of $W_n$ i.i.d. copies of $Y^{(n)}$, so
\[
E(W_{n+1} \mid \mathcal{F}_n) = \mu W_n \quad\Rightarrow\quad E(M_{n+1} \mid \mathcal{F}_n) = \frac{\mu W_n}{\mu^{n+1}} = \frac{W_n}{\mu^n} = M_n.
\]
(b) Since $E(M_n) = M_0 = 1$, $E(W_n) = \mu^n$.
(c) Let $M_n$ be a martingale. Then the following statements are equivalent:
(i) $M_n$ is uniformly integrable.
(ii) $M_n$ converges almost surely and in $L^1$ to a limit $M_\infty$, and $E[M_\infty] = E[M_0]$.
(iii) $M_n = E[M_\infty \mid \mathcal{F}_n]$ for some random variable $M_\infty \in L^1$.
(d) $E(W_1^2) = E(Y_1^2) = \mu^2 + \sigma^2$.
(e) Dropping the superscript on $Y_i^{(n)}$ for notational convenience,
\[
\begin{aligned}
E(W_{n+1}^2 \mid W_n) &= E[(Y_1 + Y_2 + \cdots + Y_{W_n})^2 \mid W_n] \\
&= E[Y_1^2 + \cdots + Y_{W_n}^2 + Y_1 Y_2 + Y_1 Y_3 + \cdots + Y_{W_n - 1} Y_{W_n} \mid W_n] \\
&= W_n E(Y_1^2) + W_n (W_n - 1) E(Y_1 Y_2) \\
&= W_n E(Y_1^2) + W_n (W_n - 1) E(Y_1) E(Y_2) \\
&= W_n (\mu^2 + \sigma^2) + \mu^2 W_n (W_n - 1) = \sigma^2 W_n + \mu^2 W_n^2.
\end{aligned}
\]
(We have used the fact that in the second line above there are $W_n$ terms of the form $Y_i^2$ and $W_n(W_n - 1)$ terms of the form $Y_i Y_j$, $i \ne j$.) Taking expectations above gives
\[
E(W_{n+1}^2) = \mu^2 E(W_n^2) + \sigma^2 E(W_n) = \mu^2 E(W_n^2) + \sigma^2 \mu^n.
\]
Alternatively, by independence of the $Y_i$, $\operatorname{Var}(W_{n+1} \mid W_n) = \sigma^2 W_n$. Then
\[
E(W_{n+1}^2 \mid W_n) = \operatorname{Var}(W_{n+1} \mid W_n) + E(W_{n+1} \mid W_n)^2 = \sigma^2 W_n + \mu^2 W_n^2.
\]
(f) For $n = 1$, the given formula gives $E(W_1^2) = \mu^2 + \sigma^2$, which agrees with (d). Hence by induction,
\[
\begin{aligned}
E(W_{n+1}^2) &= \mu^2 E(W_n^2) + \sigma^2 \mu^n \\
&= \mu^{2n+2} + \sigma^2 \Big\{ \mu^{n+1} \frac{\mu^n - 1}{\mu - 1} + \mu^n \Big\} \\
&= \mu^{2(n+1)} + \sigma^2 \mu^n \Big\{ \mu \frac{\mu^n - 1}{\mu - 1} + 1 \Big\} \\
&= \mu^{2(n+1)} + \sigma^2 \mu^n \Big\{ \frac{\mu^{n+1} - \mu + \mu - 1}{\mu - 1} \Big\} \\
&= \mu^{2(n+1)} + \sigma^2 \mu^n \Big\{ \frac{\mu^{n+1} - 1}{\mu - 1} \Big\}.
\end{aligned}
\]
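A simulation can be used to check both $E(W_n) = \mu^n$ from (b) and the second-moment formula just established. The Poisson$(\mu)$ offspring distribution below (for which $\sigma^2 = \mu$) is an assumed example, chosen because a sum of $W$ i.i.d. Poisson$(\mu)$ variables is again Poisson$(\mu W)$.

    # Monte Carlo check of E(W_n) = mu^n and the formula from (f).
    # Poisson(mu) offspring (sigma^2 = mu) is an assumed example distribution.
    import numpy as np

    rng = np.random.default_rng(2)
    mu, n, paths = 1.3, 8, 200_000
    sigma2 = mu                                   # variance of Poisson(mu)
    W = np.ones(paths, dtype=np.int64)
    for _ in range(n):
        W = rng.poisson(mu * W)                   # sum of W i.i.d. Poisson(mu) counts
    print(W.mean(), "vs", mu**n)
    print((W**2).mean(), "vs",
          mu**(2*n) + sigma2 * mu**(n-1) * (mu**n - 1) / (mu - 1))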
(g)
\[
\begin{aligned}
\sup_n E(M_n^2) = \sup_n \frac{E(W_n^2)}{\mu^{2n}} &= \sup_n \Big\{ 1 + \sigma^2 \mu^{-n-1} \frac{\mu^n - 1}{\mu - 1} \Big\} \\
&= \sup_n \Big\{ 1 + \sigma^2 \frac{1 - \mu^{-n}}{\mu(\mu - 1)} \Big\} \le 1 + \frac{\sigma^2}{\mu(\mu - 1)},
\end{aligned}
\]
since $\mu > 1$, so $\mu^{-n} \to 0$. This shows that $M_n$ is $L^2$-bounded and hence uniformly integrable.
(h) If $P(W_n = 0 \text{ for some } n) = 1$, then $W_n = 0$ eventually and hence $M_n = 0$ eventually, i.e. $M_\infty = 0$ almost surely. But this is impossible, since for a uniformly integrable martingale $M_n = E(M_\infty \mid \mathcal{F}_n)$, which would give $M_n = 0$ for all $n$, contradicting $M_0 = 1$.
3. (a) This has been presented in lectures, in even greater generality.
\[
d(e^{at} X_t) = a e^{at} X_t \, dt + e^{at} \, dX_t = a e^{at} X_t \, dt + e^{at} (-a X_t \, dt + \sigma \, dB_t) = \sigma e^{at} \, dB_t.
\]
Integrating gives
\[
e^{at} X_t - X_0 = \int_0^t \sigma e^{as} \, dB_s,
\]
hence
\[
X_t = X_0 e^{-at} + \sigma e^{-at} \int_0^t e^{as} \, dB_s.
\]
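The closed form can be checked against an Euler–Maruyama path driven by the same Brownian increments; the parameter values in the sketch below are arbitrary test choices.

    # Compare the closed form X_t = X_0 e^{-at} + sigma e^{-at} int_0^t e^{as} dB_s
    # with an Euler-Maruyama path of dX = -aX dt + sigma dB (same noise).
    import numpy as np

    rng = np.random.default_rng(3)
    a, sigma, X0, T, m = 2.0, 0.5, 1.0, 1.0, 100_000   # arbitrary test values
    dt = T / m
    dB = rng.normal(0.0, np.sqrt(dt), m)
    s = np.arange(m) * dt                         # left endpoints of the grid

    X = X0                                        # Euler-Maruyama recursion
    for db in dB:
        X += -a * X * dt + sigma * db

    stoch_int = (np.exp(a * s) * dB).sum()        # approximates int_0^T e^{as} dB_s
    X_exact = X0 * np.exp(-a * T) + sigma * np.exp(-a * T) * stoch_int
    print(X, "vs", X_exact)                       # agree up to discretisation error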
(b) This problem is a variation on Bessel processes, which have been covered in lectures.
(i) It is clear that $\beta_t$ is a (local) martingale. Its quadratic variation is
\[
d[\beta]_t = \frac{X_1(t)^2}{r_t} \, dt + \frac{X_2(t)^2}{r_t} \, dt + \cdots + \frac{X_n(t)^2}{r_t} \, dt = dt.
\]
Hence, by Lévy's characterisation, $\beta_t$ is a Brownian motion.
(ii)
\[
\begin{aligned}
dr_t &= 2 X_1(t) \, dX_1(t) + \cdots + 2 X_n(t) \, dX_n(t) + \tfrac{1}{2}\big(2 \, d[X_1]_t + \cdots + 2 \, d[X_n]_t\big) \\
&= 2 X_1(t)(-a X_1(t) \, dt + \sigma \, dB_1(t)) + \cdots + 2 X_n(t)(-a X_n(t) \, dt + \sigma \, dB_n(t)) + n \sigma^2 \, dt \\
&= \big(n \sigma^2 - 2a (X_1(t)^2 + \cdots + X_n(t)^2)\big) \, dt + 2 \sigma (X_1(t) \, dB_1(t) + \cdots + X_n(t) \, dB_n(t)) \\
&= (n \sigma^2 - 2a r_t) \, dt + 2 \sigma \sqrt{r_t} \, d\beta_t.
\end{aligned}
\]
(iii) Here we have $\sigma = a = n = 1$. So define a process $X_t$ by
\[
dX_t = -X_t \, dt + dB_t, \qquad X_0 = \sqrt{r_0},
\]
where $B_t$ is related to $\beta_t$ by $d\beta_t = (X_t / |X_t|) \, dB_t$, i.e. $B_t = \int_0^t \operatorname{sgn}(X_s) \, d\beta_s$. Then $X_t = \sqrt{r_0}\, e^{-t} + e^{-t} \int_0^t e^s \, dB_s$ and $r_t = X_t^2$.
(c) (i) Simply replace $a$ by $-a$ in (a): $M_t = \int_0^t \sigma e^{-as} \, dB_s$.
(ii)
\[
E(M_t^2) = \sigma^2 \int_0^t e^{-2as} \, ds = \frac{\sigma^2 (1 - e^{-2at})}{2a}.
\]
Hence $\sup_t E(M_t^2) \le \sigma^2 / (2a)$. Since $M_t$ is $L^2$-bounded, the desired result follows from the martingale convergence theorem.
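A quick Monte Carlo check of this variance formula, with arbitrary test parameters and the Itô integral approximated by left-point sums:

    # Check E(M_t^2) = sigma^2 (1 - e^{-2at}) / (2a) for M_t = int_0^t sigma e^{-as} dB_s.
    import numpy as np

    rng = np.random.default_rng(4)
    a, sigma, t, m, paths = 1.0, 0.8, 2.0, 500, 20_000  # arbitrary test values
    dt = t / m
    s = np.arange(m) * dt                         # left endpoints
    dB = rng.normal(0.0, np.sqrt(dt), (paths, m))
    M_t = (sigma * np.exp(-a * s) * dB).sum(axis=1)
    print((M_t**2).mean(), "vs", sigma**2 * (1 - np.exp(-2*a*t)) / (2*a))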
4. Although systems of SDEs were never presented in lectures, the questions involve only basic applications of Itô's formula, so they are not as hard as they seem.
(a) Itô's formula gives
\[
\begin{aligned}
dX_t = d(\cos B_t) &= -\sin B_t \, dB_t - \tfrac{1}{2} \cos B_t \, dt = -Y_t \, dB_t - \tfrac{1}{2} X_t \, dt, \\
dY_t = d(\sin B_t) &= \cos B_t \, dB_t - \tfrac{1}{2} \sin B_t \, dt = X_t \, dB_t - \tfrac{1}{2} Y_t \, dt.
\end{aligned}
\]
Finally, $X_0 = 1$, $Y_0 = 0$.
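One can check this numerically: an Euler–Maruyama path of the system, driven by the same increments as $B$, should track $(\cos B_t, \sin B_t)$ pathwise. The step count below is an arbitrary choice.

    # Pathwise check: Euler-Maruyama for dX = -Y dB - X/2 dt, dY = X dB - Y/2 dt,
    # driven by the same increments, should track (cos B_t, sin B_t).
    import numpy as np

    rng = np.random.default_rng(5)
    T, m = 1.0, 200_000
    dt = T / m
    dB = rng.normal(0.0, np.sqrt(dt), m)
    X, Y = 1.0, 0.0                               # X0 = cos(0), Y0 = sin(0)
    for db in dB:                                 # simultaneous update of the pair
        X, Y = (X - Y * db - 0.5 * X * dt,
                Y + X * db - 0.5 * Y * dt)
    B_T = dB.sum()
    print(X, Y, "vs", np.cos(B_T), np.sin(B_T))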
(b) (i) Itô's formula gives
\[
\begin{aligned}
d((\cos t) X_t - (\sin t) V_t) &= -(\sin t) X_t \, dt + (\cos t) \, dX_t - (\cos t) V_t \, dt - (\sin t) \, dV_t \\
&= -(\sin t) X_t \, dt + (\cos t) V_t \, dt - (\cos t) V_t \, dt - (\sin t)(-X_t \, dt + dB_t) \\
&= -\sin t \, dB_t.
\end{aligned}
\]
Similarly,
\[
\begin{aligned}
d((\sin t) X_t + (\cos t) V_t) &= (\cos t) X_t \, dt + (\sin t) \, dX_t - (\sin t) V_t \, dt + (\cos t) \, dV_t \\
&= (\cos t) X_t \, dt + (\sin t) V_t \, dt - (\sin t) V_t \, dt + (\cos t)(-X_t \, dt + dB_t) \\
&= \cos t \, dB_t.
\end{aligned}
\]
Integrating the above gives the desired result.
(ii) We have
\[
(\cos t) X_t - (\sin t) V_t = X_0 - \int_0^t \sin s \, dB_s, \tag{4.1}
\]
\[
(\sin t) X_t + (\cos t) V_t = V_0 + \int_0^t \cos s \, dB_s. \tag{4.2}
\]
Taking $\cos t \times (4.1) + \sin t \times (4.2)$ gives
\[
X_t = X_0 \cos t + V_0 \sin t - \cos t \int_0^t \sin s \, dB_s + \sin t \int_0^t \cos s \, dB_s,
\]
while $\cos t \times (4.2) - \sin t \times (4.1)$ gives
\[
V_t = V_0 \cos t - X_0 \sin t + \cos t \int_0^t \cos s \, dB_s + \sin t \int_0^t \sin s \, dB_s.
\]
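These closed forms can be compared with an Euler path of $dX_t = V_t \, dt$, $dV_t = -X_t \, dt + dB_t$ built from the same increments; the initial values and horizon below are arbitrary choices.

    # Compare Euler simulation of dX = V dt, dV = -X dt + dB with the
    # closed-form solution from (ii), using the same Brownian increments.
    import numpy as np

    rng = np.random.default_rng(6)
    X0, V0, T, m = 1.0, 0.0, 5.0, 200_000         # arbitrary test values
    dt = T / m
    dB = rng.normal(0.0, np.sqrt(dt), m)
    s = np.arange(m) * dt                         # left endpoints

    X, V = X0, V0                                 # Euler scheme for the system
    for k in range(m):
        X, V = X + V * dt, V - X * dt + dB[k]

    I_sin = (np.sin(s) * dB).sum()                # approximates int_0^T sin s dB_s
    I_cos = (np.cos(s) * dB).sum()                # approximates int_0^T cos s dB_s
    print(X, "vs", X0*np.cos(T) + V0*np.sin(T) - np.cos(T)*I_sin + np.sin(T)*I_cos)
    print(V, "vs", V0*np.cos(T) - X0*np.sin(T) + np.cos(T)*I_cos + np.sin(T)*I_sin)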
(iii) Itô's formula gives
\[
d(f(t) B_t) = f(t) \, dB_t + f'(t) B_t \, dt.
\]
Integrating and rearranging gives
\[
\int_0^t f(s) \, dB_s = f(t) B_t - \int_0^t B_s f'(s) \, ds.
\]
(iv) From (iii) we have
\[
\int_0^t \sin s \, dB_s = (\sin t) B_t - \int_0^t B_s \cos s \, ds,
\]
\[
\int_0^t \cos s \, dB_s = (\cos t) B_t + \int_0^t B_s \sin s \, ds.
\]
Substituting the above into the formulas in (ii) gives
\[
X_t = X_0 \cos t + V_0 \sin t + \cos t \int_0^t B_s \cos s \, ds + \sin t \int_0^t B_s \sin s \, ds,
\]
\[
\begin{aligned}
V_t &= V_0 \cos t - X_0 \sin t + (\cos t)^2 B_t + \cos t \int_0^t B_s \sin s \, ds + (\sin t)^2 B_t - \sin t \int_0^t B_s \cos s \, ds \\
&= V_0 \cos t - X_0 \sin t + B_t + \cos t \int_0^t B_s \sin s \, ds - \sin t \int_0^t B_s \cos s \, ds.
\end{aligned}
\]
Differentiating the above shows
\[
\begin{aligned}
\frac{dX_t}{dt} &= V_0 \cos t - X_0 \sin t + (\cos t)^2 B_t - \sin t \int_0^t B_s \cos s \, ds + (\sin t)^2 B_t + \cos t \int_0^t B_s \sin s \, ds \\
&= V_0 \cos t - X_0 \sin t + B_t + \cos t \int_0^t B_s \sin s \, ds - \sin t \int_0^t B_s \cos s \, ds = V_t.
\end{aligned}
\]
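The integration-by-parts identity from (iii), used in (iv), can also be verified pathwise on a fine grid; the discretisation below is an arbitrary choice.

    # Pathwise check of int_0^T sin s dB_s = (sin T) B_T - int_0^T B_s cos s ds
    # on one sampled Brownian path.
    import numpy as np

    rng = np.random.default_rng(7)
    T, m = 3.0, 1_000_000
    dt = T / m
    dB = rng.normal(0.0, np.sqrt(dt), m)
    B = np.concatenate(([0.0], dB.cumsum()))      # B at grid points 0, dt, ..., T
    s = np.arange(m) * dt                         # left endpoints
    lhs = (np.sin(s) * dB).sum()                  # Ito sum for int_0^T sin s dB_s
    rhs = np.sin(T) * B[-1] - (B[:-1] * np.cos(s) * dt).sum()
    print(lhs, "vs", rhs)                         # agree up to discretisation error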