Exam in SF2970 Martingales and Stochastic Integrals.
Wednesday March 12 2014 8.00-13.00.
Answers and suggestions for solutions.
1.
(a)
i. We have that
E[X] = (1/3)·1 + (1/6)·1 + (1/3)·0 + (1/6)·0 = 1/2.
Since X = I_{ω1,ω2} is constant on the sets generating F it is measurable
with respect to F, and
E[X|F] = X = I_{ω1,ω2}.
ii. Let B1 = {ω1, ω4} and B2 = {ω2, ω3}. Then
E[X|G] = E[X|{B1, B2}] = Σ_{i=1}^{2} E[X|Bi]·I_{Bi}.
Furthermore,
E[X|B1] = (1/P(B1)) ∫_{B1} X dP = (1·(1/3) + 0·(1/6)) / (1/3 + 1/6) = 2/3,
E[X|B2] = (1/P(B2)) ∫_{B2} X dP = (1·(1/6) + 0·(1/3)) / (1/6 + 1/3) = 1/3.
Thus
E[X|G](ω) = (2/3)·I_{B1}(ω) + (1/3)·I_{B2}(ω).
If X were independent of G, we would have E[X|G] = E[X], and since this is not
the case, X is not independent of G.
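As an illustration (not part of the original solution), the computations above can be checked directly on the four-point sample space; the probabilities P(ω1) = 1/3, P(ω2) = 1/6, P(ω3) = 1/3, P(ω4) = 1/6 are those read off from the calculations above.

    from fractions import Fraction as F

    # Probabilities as read off from the computations above, and X = I_{w1,w2}.
    P = {"w1": F(1, 3), "w2": F(1, 6), "w3": F(1, 3), "w4": F(1, 6)}
    X = {"w1": 1, "w2": 1, "w3": 0, "w4": 0}

    # E[X]
    print("E[X] =", sum(P[w] * X[w] for w in P))  # 1/2

    # E[X | G] for G generated by B1 = {w1, w4} and B2 = {w2, w3}:
    # on each block Bi the conditional expectation equals the block average.
    for name, block in [("B1", {"w1", "w4"}), ("B2", {"w2", "w3"})]:
        pb = sum(P[w] for w in block)
        print(f"E[X | {name}] =", sum(P[w] * X[w] for w in block) / pb)  # 2/3 and 1/3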
(b)
i. The Itô formula applied to Yt = f(t, Bt) yields
dYt = (∂f/∂t) dt + (∂f/∂x) dBt + (1/2)(∂²f/∂x²)(dBt)²
    = (∂f/∂t + (1/2) ∂²f/∂x²) dt + (∂f/∂x) dBt.
Now Y is a martingale if and only if the dt-term vanishes, i.e. if and only
if
∂f/∂t + (1/2) ∂²f/∂x² = 0.
For e^{−t/2 + Bt} we have that f(t, x) = e^{−t/2 + x}, and
∂f/∂t = −(1/2) e^{−t/2 + x},   and   ∂²f/∂x² = e^{−t/2 + x},
and thus the condition is satisfied.
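A quick symbolic verification of this condition (an illustrative sketch using sympy, not part of the original solution):

    import sympy as sp

    t, x = sp.symbols("t x")
    f = sp.exp(-t / 2 + x)  # f(t, x) = e^{-t/2 + x}

    # dt-coefficient in the Itô expansion: f_t + (1/2) f_xx
    drift = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)
    print(sp.simplify(drift))  # 0, so Y_t = f(t, B_t) is a martingale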
ii. Since we know from the previous exercise that Mt = e^{−t/2 + Bt} is a martingale
we have that
E[Mt] = E[M0] = E[e^{B0}] = 1,
since B0 = 0.
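As a numerical illustration (not part of the original solution), a small Monte Carlo experiment confirms that E[Mt] stays at 1; the time point and sample size below are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    t, n = 2.0, 10**6

    # B_t ~ N(0, t); the sample mean of M_t = exp(-t/2 + B_t) should be close to 1.
    B_t = rng.normal(0.0, np.sqrt(t), size=n)
    print(np.exp(-t / 2 + B_t).mean())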
2.
Let Yt = 1/Xt, where X solves the SDE dXt = (λXt − Xt²) dt + σXt dBt, X0 = x0.
The Itô formula applied to Y yields
dYt = −(1/Xt²) dXt + (1/2)(2/Xt³)(dXt)²
    = (−λ/Xt + 1) dt − (σ/Xt) dBt + (σ²/Xt) dt
    = [1 + (σ² − λ)Yt] dt − σYt dBt.
We thus see that Y satisfies a linear SDE. To solve this SDE let Yt = Ut Vt , where
dUt = (σ² − λ)Ut dt − σUt dBt,
and
dVt = at dt + bt dBt .
Then we have
dYt = d(Ut Vt)
    = Vt dUt + Ut dVt + dUt dVt
    = (σ² − λ)Yt dt − σYt dBt + at Ut dt + bt Ut dBt − bt σUt dt
    = [at Ut − bt σUt + (σ² − λ)Yt] dt + (bt Ut − σYt) dBt.
Identifying coefficients we obtain bt = 0, and at = 1/Ut . Now, U is geometric
Brownian motion and the solution is given by
Ut = U0 exp{((1/2)σ² − λ)t − σBt},
and to obtain Vt we just integrate
Vt = V0 + ∫_0^t (1/Us) ds.
In order to satisfy the initial condition Y0 = 1/x0 , let U0 = 1, and V0 = 1/x0 .
Now working backwards we see that
Xt = 1/Yt = 1/(Ut Vt) = Ut^{−1}/Vt
   = exp{(λ − (1/2)σ²)t + σBt} / (1/x0 + ∫_0^t exp{(λ − (1/2)σ²)s + σBs} ds)
   = x0 exp{(λ − (1/2)σ²)t + σBt} / (1 + x0 ∫_0^t exp{(λ − (1/2)σ²)s + σBs} ds).
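As an illustrative check (not part of the original solution), the closed-form expression can be compared pathwise with an Euler–Maruyama discretisation of the dynamics used above, dXt = (λXt − Xt²) dt + σXt dBt, driven by the same Brownian increments; the parameter values below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(1)

    # Arbitrary illustrative parameters.
    lam, sigma, x0, T, n = 1.0, 0.3, 0.5, 1.0, 100_000
    dt = T / n
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    B = np.concatenate(([0.0], np.cumsum(dB)))
    t = np.linspace(0.0, T, n + 1)

    # Euler-Maruyama scheme for dX = (lam*X - X^2) dt + sigma*X dB.
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(n):
        X[i + 1] = X[i] + (lam * X[i] - X[i] ** 2) * dt + sigma * X[i] * dB[i]

    # Closed-form solution evaluated along the same Brownian path.
    E = np.exp((lam - 0.5 * sigma**2) * t + sigma * B)
    integral = np.concatenate(([0.0], np.cumsum(E[:-1]) * dt))  # left-point Riemann sum
    X_closed = x0 * E / (1.0 + x0 * integral)

    print(abs(X[-1] - X_closed[-1]))  # small discretisation error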
3.
(a) We have that
E[Xt² + Yt²] = E[(∫_0^t cos(u) dBu)² + (∫_0^t sin(u) dBu)²]
             = E[(∫_0^t cos(u) dBu)²] + E[(∫_0^t sin(u) dBu)²]
             = ∫_0^t cos²(u) du + ∫_0^t sin²(u) du
             = ∫_0^t [cos²(u) + sin²(u)] du = t,
where we have used the Itô isometry to obtain the third equality. The identity
thus holds.
(b) The Itô formula applied to M yields
dMt = 2Xt dXt + (1/2)·2(dXt)² + 2Yt dYt + (1/2)·2(dYt)² − dt
    = 2cos(t)Xt dBt + cos²(t) dt + 2sin(t)Yt dBt + sin²(t) dt − dt
    = 2cos(t)Xt dBt + 2sin(t)Yt dBt.
Since this is a martingale increment (no dt-term), M is a martingale.
(c) The covariance is
Cov(Xt, Yt) = E[∫_0^t cos(u) dBu · ∫_0^t sin(u) dBu]
            = ∫_0^t cos(u) sin(u) du
            = ∫_0^t (1/2) sin(2u) du
            = [−(1/4) cos(2u)]_0^t
            = [1 − cos(2t)]/4.
The processes are uncorrelated when the covariance is zero, i.e. when
1 − cos(2t) = 0.
This is true for t = ±nπ, n = 0, 1, 2, . . . Here we are interested in positive
time, so
t = nπ,   n = 1, 2, 3, . . .
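Both identities, E[Xt² + Yt²] = t from (a) and Cov(Xt, Yt) = [1 − cos(2t)]/4 from (c), can be illustrated by simulating the two stochastic integrals with a simple Euler discretisation; a rough sketch (step size, horizon and sample count are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)

    t_end, n_steps, n_paths = 1.0, 200, 50_000
    dt = t_end / n_steps
    u = np.arange(n_steps) * dt                               # left endpoints
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

    # X_t = int_0^t cos(u) dB_u and Y_t = int_0^t sin(u) dB_u as discrete sums.
    X = (np.cos(u) * dB).sum(axis=1)
    Y = (np.sin(u) * dB).sum(axis=1)

    print((X**2 + Y**2).mean(), "vs", t_end)                      # part (a)
    print(np.cov(X, Y)[0, 1], "vs", (1 - np.cos(2 * t_end)) / 4)  # part (c)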
4.
(a) If X satisfies
dXs = µ(s, Xs) ds + σ(s, Xs) dWs,    (1)
Xt = x,    (2)
the Itô formula applied to Zs = e^{−∫_0^s c(Xu) du} F(s, Xs) yields
dZs = e^{−∫_0^s c(Xu) du} { −cF + ∂F/∂t + µ ∂F/∂x + (1/2)σ² ∂²F/∂x² } ds
      + σ e^{−∫_0^s c(Xu) du} (∂F/∂x) dWs,
where the expression in braces equals zero, by the assumption on F.
Since the ds-term vanishes, dZ is a martingale increment, and Z is a martingale
(if sufficiently integrable).
(b) The martingale property of Z yields
Zt = e^{−∫_0^t c(Xu) du} F(t, Xt) = E[ZT | Ft] = E[ e^{−∫_0^T c(Xu) du} F(T, XT) | Ft ].
Since e^{−∫_0^t c(Xu) du} ∈ Ft we can divide by it and move it into the conditional
expectation on the right-hand side, and if we also use that F(T, XT) = Φ(XT)
we obtain
F(t, Xt) = E[ e^{−∫_t^T c(Xu) du} Φ(XT) | Ft ].
(c) Using the representation formula stated we get
F(t, x) = E[ e^{−∫_t^T Xs ds} · 1 | Xt = x ],
where the dynamics of X are given by
dXs = dWs,
Xt = x.
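As an illustration (not part of the original solution), the expectation in this representation can be approximated by Monte Carlo; combining (c) with the result of (d) below, ∫_t^T Xs ds given Xt = x is N(x(T − t), (T − t)³/3), so exp(−x(T − t) + (T − t)³/6) can serve as a reference value. The parameter values below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)

    # Arbitrary illustrative values.
    t0, T, x = 0.0, 1.0, 0.7
    n_steps, n_paths = 400, 20_000
    dt = (T - t0) / n_steps

    # Simulate X_s = x + (W_s - W_t) on [t, T] and the time integral of X.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    X_paths = x + np.cumsum(dW, axis=1)
    integral = X_paths.sum(axis=1) * dt        # crude right-point Riemann sum

    F_mc = np.exp(-integral).mean()
    F_ref = np.exp(-x * (T - t0) + (T - t0) ** 3 / 6)  # via the N(x(T-t), (T-t)^3/3) law
    print(F_mc, "vs", F_ref)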
(d) The integral is the limit of Riemann sums of the form
Σ_{i=1}^{n} W(τi)(ti − ti−1),
where ti−1 ≤ τi ≤ ti. Each such sum is a linear combination of the components of the
multivariate normal vector (W(τ1), . . . , W(τn)), so the sums, and hence their limit,
are normally distributed.
For the expectation we have that
E[∫_0^t Ws ds] = ∫_0^t E[Ws] ds = 0,
since E[Ws ] = 0. For the variance we get
Var(∫_0^t Ws ds) = E[(∫_0^t Ws ds)²]
                 = E[∫_0^t Ws ds ∫_0^t Wu du]
                 = E[∫_0^t ∫_0^t Ws Wu ds du]
                 = ∫_0^t ∫_0^t E[Ws Wu] ds du.
Now, if s ≤ u,
E[Ws Wu] = E[E[Ws(Wu − Ws + Ws) | Fs]] = E[E[Ws² + Ws(Wu − Ws) | Fs]]
         = E[Ws² + Ws E[Wu − Ws]],
where we have used that Ws ∈ Fs and that Wu − Ws is independent of Fs .
Since E[Ws²] = s and E[Wu − Ws] = 0, we obtain
E[Ws Wu ] = min{s, u}.
(Recall that we assumed that s ≤ u.) Finally,
Var(∫_0^t Ws ds) = ∫_0^t ∫_0^t min{s, u} ds du = 2 ∫_0^t ∫_0^u s ds du = 2 · t³/6 = t³/3.
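As a numerical illustration (not part of the original solution), a Riemann-sum simulation of ∫_0^t Ws ds matches the N(0, t³/3) law; the horizon, step count and sample size below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    t_end, n_steps, n_paths = 2.0, 300, 20_000
    dt = t_end / n_steps

    # Simulate Brownian paths and approximate the time integral by a Riemann sum.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    integral = W.sum(axis=1) * dt

    print(integral.mean(), "vs 0")
    print(integral.var(), "vs", t_end**3 / 3)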
5.
Let W be a one-dimensional standard Brownian motion on the filtered probability
space (Ω, F, P, {Ft }t≥0 ). For the fixed time interval [0, T ], define the process X as
the solution to the SDE
dXt = dWt ,
X0 = x.
Define a new measure Q via the following Girsanov transformation
dQ = L(T) dP on FT,
where
dLt = Xt Lt dWt,    (3)
L0 = 1.    (4)
The Girsanov theorem then tells us that
dWt = Xt dt + dBt,
where B is a Q-Brownian motion. The SDE for X thus becomes
dXt = Xt dt + dBt ,
which means that
Xt = x + ∫_0^t Xs ds + Bt.
Now using that if Z ∈ Ft, then
E^Q[Z] = E^P[Lt Z],
we obtain
E^Q[f(Xt)] = E^P[Lt f(Xt)].    (5)
The solution of (3) with initial condition (4) is given by
Lt = exp{ ∫_0^t Xs dWs − (1/2) ∫_0^t Xs² ds }.
Now, using that Xt = x + Wt under P we obtain
∫_0^t Xs dWs = ∫_0^t (x + Ws) dWs = ∫_0^t x dWs + ∫_0^t Ws dWs = xWt + (1/2)(Wt² − t).
Inserting this into (5) we get
E^Q[f(Xt)] = E^P[ exp{ (1/2)(Wt² − t) + xWt − (1/2) ∫_0^t (x + Ws)² ds } f(x + Wt) ],
which was the formula to be proved.
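As a final illustration (not part of the original solution), the change-of-measure identity E^Q[f(Xt)] = E^P[Lt f(Xt)] can be checked by Monte Carlo: simulate dXt = Xt dt + dBt under Q with an Euler scheme on one hand, and weight paths of Xt = x + Wt by a discretised Lt under P on the other. The choice f(x) = cos(x) and all numerical parameters below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(5)

    f = np.cos                      # arbitrary test function
    x, t_end, n_steps, n_paths = 0.3, 1.0, 200, 20_000
    dt = t_end / n_steps

    # Left-hand side: E^Q[f(X_t)] with dX = X dt + dB simulated under Q.
    X_q = np.full(n_paths, x)
    for _ in range(n_steps):
        X_q += X_q * dt + rng.normal(0.0, np.sqrt(dt), size=n_paths)
    lhs = f(X_q).mean()

    # Right-hand side: under P, X = x + W; weight f(X_t) with
    # L_t = exp( int_0^t X_s dW_s - (1/2) int_0^t X_s^2 ds ), left-point discretisation.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)
    X_left = x + np.concatenate([np.zeros((n_paths, 1)), W[:, :-1]], axis=1)
    log_L = (X_left * dW).sum(axis=1) - 0.5 * (X_left**2).sum(axis=1) * dt
    rhs = (np.exp(log_L) * f(x + W[:, -1])).mean()

    print(lhs, "vs", rhs)  # the two estimates should be close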