Exercises for Stochastic Calculus
Abdelkader BENHARI
Probability measure, Review of probability theory, Markov chains, Recurrence transition matrices, Stationary distributions, Hitting times, Poisson processes, Renewal theory, Branching processes, Branching and point processes, Martingales in discrete time, Brownian motion, Martingales in continuous time
Series 1: Probability measure
1. Show that if A and B belong to the σ-algebra F (see Definition 1.3), then also B\A ∈ F. Also show that F is closed under countable intersections, i.e. if Ai ∈ F for i = 1, 2, . . ., then ∩_{i=1}^∞ Ai ∈ F.

Proof. 1) B\A = B ∩ A^c = (B^c ∪ A)^c. Since B^c ∈ F, also B^c ∪ A ∈ F, and so (B^c ∪ A)^c ∈ F, hence B\A ∈ F.

2) Take Ai ∈ F for i = 1, 2, . . .. Since ∪_{i=1}^∞ A_i^c ∈ F, it follows that (∪_{i=1}^∞ A_i^c)^c = ∩_{i=1}^∞ Ai ∈ F.
2. Throw a fair die once. Assume that we only can observe if the number obtained is small, A = {1, 2, 3}, and if the number is odd, B = {1, 3, 5}. Describe the resulting probability space; in particular, describe the σ-algebra F generated by A and B in terms of a suitable partition (for the definition of a partition, see Definition 1.9) of the sample space.

Proof. Looking at the Venn diagram (insert diagram), we conclude that there are at most four sets in the partition of the space, A ∩ B = {1, 3}, A ∩ B^c = {2}, A^c ∩ B = {5} and (A ∪ B)^c = {4, 6}, of which none is an empty set. These partition sets can be combined in 2^4 = 16 different ways to generate the σ-algebra F defined below.

F = {∅, Ω, A, B, A^c, B^c, A ∪ B, A ∪ B^c, A^c ∪ B, A^c ∪ B^c, A ∩ B, A ∩ B^c, A^c ∩ B, A^c ∩ B^c, (A ∩ B^c) ∪ (A^c ∩ B), (A ∩ B) ∪ (A^c ∩ B^c)}
= {∅, Ω, {1,2,3}, {1,3,5}, {4,5,6}, {2,4,6}, {1,2,3,5}, {1,2,3,4,6}, {1,3,4,5,6}, {2,4,5,6}, {1,3}, {2}, {5}, {4,6}, {2,5}, {1,3,4,6}}.

The probability measure of each of the sets in F may be deduced by using the additivity of the probability measure and the probability measure of each of the partition sets: P(A ∩ B) = 2/6, P(A ∩ B^c) = 1/6, P(A^c ∩ B) = 1/6 and P((A ∪ B)^c) = 2/6.
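As a quick sanity check of this enumeration, the σ-algebra generated by A and B can be computed by brute force: start from the generators and close the collection under complements and unions. The following sketch (our own illustration in plain Python, not part of the original solution; the function name generate_sigma_algebra is ours) returns the 16 sets listed above.

```python
from itertools import combinations

def generate_sigma_algebra(omega, generators):
    """Close a collection of subsets of a finite omega under complement and union."""
    sets = {frozenset(), frozenset(omega)} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        current = list(sets)
        for s in current:                      # add complements
            c = frozenset(omega) - s
            if c not in sets:
                sets.add(c); changed = True
        for s, t in combinations(current, 2):  # add pairwise unions
            u = s | t
            if u not in sets:
                sets.add(u); changed = True
    return sets

omega = {1, 2, 3, 4, 5, 6}
A, B = {1, 2, 3}, {1, 3, 5}
F = generate_sigma_algebra(omega, [A, B])
print(len(F))                    # 16, matching the enumeration above
print(sorted(map(sorted, F)))    # the 16 sets listed in the text
```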
3. Given a probability space (Ω, F, P) and functions X : Ω → R, Y : Ω → R, define Z = max{X, Y}.
1. Show that Z is F-measurable if both X and Y are F-measurable.
2. Find a special case where Z is F-measurable even though neither X nor Y is.

Proof. 1)
{ω ∈ Ω : Z(ω) ≤ x} = {ω ∈ Ω : max{X(ω), Y(ω)} ≤ x} = {ω ∈ Ω : X(ω) ≤ x} ∩ {ω ∈ Ω : Y(ω) ≤ x} ∈ F,
where both sets in the intersection belong to F and the last conclusion is based on the result of Proposition 3.2.

2) Suppose you threw two coins and our σ-algebra F was generated by the single set A = {HT, TH}, i.e. F = {∅, Ω, A, A^c} = {∅, {HH, TT, HT, TH}, {HT, TH}, {HH, TT}}. Let
X = 1 if ω ∈ {HT} and 0 if ω ∈ {HH, TT, TH},
Y = 1 if ω ∈ {TH} and 0 if ω ∈ {HH, TT, HT}.
Then
Z = 1 if ω ∈ {HT, TH} and 0 if ω ∈ {HH, TT},
hence X and Y are not F-measurable while Z is.
4. Toss a fair coin n = 4 times; describe the sample space Ω. We want to consider functions X : Ω → {−1, 1}.
1. Describe the probability space (Ω, F1, P1) that arises if we want each outcome to be a legitimate event. How many F1-measurable functions X with E[X] = 0 are there?
2. Now describe the probability space (Ω, F2, P2) that arises if we want that only combinations of sets of the type Ai = {ω ∈ Ω : number of heads = i}, i = 0, 1, 2, 3, 4, should be events. How many F2-measurable functions X with E[X] = 0 are there?
3. Solve 2) above for some other n.

Proof. 1) Each coin that is tossed has two possible outcomes, and since there are four coins to be tossed, there are 2^4 = 16 possible outcomes. So the sample space is Ω = {HHHH, HHHT, HHTH, etc.} with 16 members that describes all the information from the four tosses. F1 is the power set of Ω, consisting of all combinations of sets of Ω (don't forget that ∅ always is included in a σ-algebra), hence F1 = σ(Ω). P1 is deduced by first observing that each ω ∈ Ω has P(ω) = 1/16 and then using the additivity of the probability measure. For any function X : Ω → {−1, 1} that is measurable with respect to F1 there is a set A ∈ F1 such that
X(ω) = 1 if ω ∈ A and −1 if ω ∈ A^c.
Hence E[X] = P1(A) − P1(A^c), so if E[X] = 0 we must have P1(A) = P1(A^c). Since there are 16 outcomes each with probability measure 1/16, there are C(16, 8) ways of splitting them so that there are equally many of them in A and in A^c; hence there are C(16, 8) possible functions X : Ω → {−1, 1} such that E[X] = 0.

2) The sets
A0 = {TTTT},
A1 = {TTTH, TTHT, THTT, HTTT},
A2 = {TTHH, THHT, HHTT, HTTH, THTH, HTHT},
A3 = {HHHT, HHTH, HTHH, THHH},
A4 = {HHHH}
define a partition of Ω. The σ-algebra F2 is the σ-algebra generated by this partition. Define Pi = P(Ai); then P0 = 1/16, P1 = 4/16, P2 = 6/16, P3 = 4/16 and P4 = 1/16. For any function X : Ω → {−1, 1} that is measurable with respect to F2 there is a set A ∈ F2 such that
X(ω) = 1 if ω ∈ A and −1 if ω ∈ A^c.
Hence E[X] = P2(A) − P2(A^c), so if E[X] = 0 we must have P2(A) = P2(A^c). We may write E[X] = k0 P0 + k1 P1 + k2 P2 + k3 P3 + k4 P4 = k0/16 + 4 k1/16 + 6 k2/16 + 4 k3/16 + k4/16 with ki = ±1, which is zero either if ki = (−1)^i or if ki = −(−1)^i. Hence, there are two possible F2-measurable functions X : Ω → {−1, 1} such that E[X] = 0.

3) Using the same notation as in 2), we conclude that, for any given integer n > 0, A0, A1, . . . , An is a partition of Ω, with σ-algebra Fn generated by this partition and with Pn(Ai) = C(n, i)(1/2)^n for i = 0, 1, . . . , n. Then E[X] = Σ_{i=0}^{n} ki C(n, i)(1/2)^n for ki = ±1, where it should be noted that Σ_{i=0}^{n} C(n, i)(1/2)^n = 1. For any function X : Ω → {−1, 1} that is measurable with respect to Fn there is a set A ∈ Fn such that
X(ω) = 1 if ω ∈ A and −1 if ω ∈ A^c.
Hence E[X] = Pn(A) − Pn(A^c), so if E[X] = 0 we must have Pn(A) = Pn(A^c) = 1/2.
For even n, we have Σ_{i odd} C(n, i)(1/2)^n = 1/2 and Σ_{i even} C(n, i)(1/2)^n = 1/2, so E[X] = 0 if ki = (−1)^i or ki = −(−1)^i.
For odd n, we have Σ_{i=0}^{(n−1)/2} C(n, i)(1/2)^n = 1/2 and Σ_{i=(n−1)/2+1}^{n} C(n, i)(1/2)^n = 1/2, so E[X] = 0 if k_{n−i} = −k_i for i = 0, 1, . . . , (n − 1)/2. There are 2^{(n+1)/2} such sign patterns (one free sign for each of the (n + 1)/2 pairs {i, n − i}), hence there are 2^{(n+1)/2} possible Fn-measurable functions X : Ω → {−1, 1} such that E[X] = 0.
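The counts above are small enough to verify by brute force. The sketch below (our own illustration, not part of the original solution) enumerates all sign patterns on the blocks of the "number of heads" partition and counts those giving E[X] = 0; for n = 4 it returns 2, and for n = 3 it returns 4 = 2^{(3+1)/2}.

```python
from itertools import product
from math import comb

def count_zero_mean_block_functions(n):
    """Count X: Omega -> {-1, 1} measurable w.r.t. the 'number of heads' partition
    of n coin tosses with E[X] = 0, by brute force over the block signs."""
    total = 0
    for signs in product([1, -1], repeat=n + 1):
        if sum(k * comb(n, i) for i, k in enumerate(signs)) == 0:
            total += 1
    return total

def count_zero_mean_full_functions(n):
    """Same count when every subset of Omega is an event: choose |A| = 2**(n-1) outcomes."""
    return comb(2 ** n, 2 ** (n - 1))

print(count_zero_mean_block_functions(4))   # 2
print(count_zero_mean_block_functions(3))   # 4
print(count_zero_mean_full_functions(4))    # C(16, 8) = 12870
```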
5. Consider the probability space (Ω, F, P) where Ω = [0, 1], F is the Borel σ-algebra on Ω and P is the uniform probability measure on (Ω, F). Show that the two random variables X(ω) = ω and Y(ω) = 2|ω − 1/2| have the same distribution, but that P(X = Y) = 0.

Proof. (For the definition of the distribution function see Definition 3.4.) Since X is uniform, the distribution function is
P(X ≤ x) = 0 for x ∈ (−∞, 0], P(X ≤ x) = x for x ∈ [0, 1], P(X ≤ x) = 1 for x ∈ [1, ∞),
so for Y we get, with x ∈ [0, 1],
P(Y ≤ x) = P(2|X − 1/2| ≤ x) = P(|2X − 1| ≤ x) = {by symmetry} = 2P(0 ≤ 2X − 1 ≤ x) = 2P(1/2 ≤ X ≤ (x + 1)/2) = 2((x + 1)/2 − 1/2) = x.
Hence P(X ≤ x) = P(Y ≤ x) = x for x ∈ [0, 1]. But since
P(X = Y) = P(X = 2|X − 1/2|) = P({ω ∈ Ω : X = 2X − 1} ∪ {ω ∈ Ω : X = −2X + 1}) = P({ω ∈ Ω : X = 1} ∪ {ω ∈ Ω : X = 1/3}) = 0,
because single points have probability zero under the uniform measure, we conclude that P(X = Y) = 0.
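A simulation makes the point concrete: the empirical distribution functions of X and Y = 2|X − 1/2| agree, while X = Y essentially never happens. This is only an illustrative sketch using the standard library.

```python
import random

random.seed(0)
N = 100_000
xs = [random.random() for _ in range(N)]
ys = [2 * abs(x - 0.5) for x in xs]

# Compare empirical CDFs at a few points: both should be close to t.
for t in (0.25, 0.5, 0.75):
    fx = sum(x <= t for x in xs) / N
    fy = sum(y <= t for y in ys) / N
    print(t, round(fx, 3), round(fy, 3))

# X == Y only on the null set {1/3, 1}, so exact coincidences are not observed.
print(sum(x == y for x, y in zip(xs, ys)))   # 0 (almost surely)
```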
6. Show that the smallest σ-algebra containing a set A is the intersection of all σ-algebras containing A. Also, show by counterexample that the union of two σ-algebras is not necessarily a σ-algebra.

Proof. Let GA be the set of all σ-algebras containing A; we want to show that ∩_{G∈GA} G = {ω ∈ Ω : ω ∈ G for every G ∈ GA} = F is a σ-algebra. If it is, then it is the smallest σ-algebra containing A, since any σ-algebra containing A belongs to GA and therefore contains F. To check that F is a σ-algebra, we simply use Definition 1.3.
1. ∅ ∈ G for all G ∈ GA, hence ∅ ∈ F.
2. If ω ∈ G for all G ∈ GA, then ω^c ∈ G for all G ∈ GA, hence ω^c ∈ F.
3. If ω1, ω2, . . . ∈ G for all G ∈ GA, then ∪_{i=1}^∞ ωi ∈ G for all G ∈ GA, hence ∪_{i=1}^∞ ωi ∈ F.
So F is a σ-algebra.

To show that the union of two σ-algebras is not necessarily a σ-algebra, take F1 = {∅, Ω, A, A^c} and F2 = {∅, Ω, B, B^c} where A ⊂ B. Then F1 ∪ F2 = {∅, Ω, A, A^c, B, B^c} is not a σ-algebra since, e.g., B^c ∪ A ∉ F1 ∪ F2.
7. Given a probability space (Ω, F, P) and a random variable X. Let A be a sub-σ-algebra of F, and consider X̂ = E[X|A].
1. Show that E[(X − X̂)Y] = 0 if Y is an A-measurable random variable.
2. Show that the A-measurable random variable Y that minimizes E[(X − Y)^2] is Y = E[X|A].

Proof. 1) Using equality 4 followed by equality 2 in Proposition 4.8 we get
E[(X − X̂)Y] = E[E[(X − X̂)Y | A]] = E[E[(X − X̂) | A] Y] = E[(X̂ − X̂)Y] = 0.
2) Using equality 3 followed by equality 2 in Proposition 4.8 we get
E[(X − Y)^2] = E[X^2] − 2E[XY] + E[Y^2]
= E[X^2] − 2E[E[XY | A]] + E[Y^2]
= E[X^2] − 2E[Y E[X|A]] + E[Y^2]
= E[X^2] − E[E[X|A]^2] + E[E[X|A]^2] − 2E[Y E[X|A]] + E[Y^2]
= E[X^2] − E[E[X|A]^2] + E[(Y − E[X|A])^2],
where the last term is ≥ 0, hence E[(X − Y)^2] is minimized when Y = E[X|A].
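On a finite sample space, part 2 can be checked directly: among all functions that are constant on the blocks of a sub-σ-algebra, the block-wise conditional mean gives the smallest mean squared error. The sketch below is our own toy illustration (uniform probability on six outcomes, A generated by the partition {1,2,3}, {4,5,6}).

```python
X = {1: 1.0, 2: 4.0, 3: 1.0, 4: 0.0, 5: 2.0, 6: 10.0}   # uniform prob. 1/6 on {1,...,6}
blocks = [[1, 2, 3], [4, 5, 6]]      # partition generating the sub-sigma-algebra A

def mse(Y):
    return sum((X[w] - Y[w]) ** 2 for w in X) / len(X)

# E[X | A]: block-wise average of X (2.0 on the first block, 4.0 on the second).
cond_exp = {w: sum(X[v] for v in b) / len(b) for b in blocks for w in b}

# Any other A-measurable Y (constant on the blocks) does at least as badly:
for a in (1.0, 2.0, 3.0):
    for b in (3.0, 4.0, 5.0):
        Y = {w: (a if w <= 3 else b) for w in X}
        assert mse(Y) >= mse(cond_exp) - 1e-12
print(round(mse(cond_exp), 3))   # the minimal mean squared error
```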
8. Let X and Y be two integrable random variables defined on the same probability space (Ω, F, P). Let A be a sub-σ-algebra such that X is A-measurable.
1. Show that E[Y|A] = X implies that E[Y|X] = X.
2. Show by counterexample that E[Y|X] = X does not necessarily imply that E[Y|A] = X.

Proof. (For the definition of conditional expectation see Definition 4.6; for properties of the conditional expectation, see Proposition 4.8.)
1) Since X is A-measurable, σ(X) ⊆ A, hence, using equality 4 in Proposition 4.8, we get
E[Y|X] = E[Y|σ(X)] = E[E[Y|A] | X] = E[X|X] = X.
2) Let X and Z be independent and integrable random variables, assume that E[Z] = 0, and define Y = X + Z. Let A = σ(X, Z); then
E[Y|σ(X)] = E[X + Z|σ(X)] = X + 0 = X,
while
E[Y|A] = E[X + Z|σ(X, Z)] = X + Z ≠ X.
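Part 2 is easy to see numerically: with X and Z independent fair ±1 signs and Y = X + Z, conditioning on X alone averages Z away, while conditioning on (X, Z) does not. A small sketch (our own toy example, not part of the original solution):

```python
from itertools import product

# X, Z independent fair +/-1 signs, Y = X + Z; each outcome has probability 1/4.
outcomes = [(x, z, 0.25) for x, z in product((1, -1), repeat=2)]

def cond_exp_Y(keyfunc):
    """E[Y | the information carried by keyfunc(X, Z)], as a table."""
    table = {}
    for key in {keyfunc(x, z) for x, z, _ in outcomes}:
        num = sum(p * (x + z) for x, z, p in outcomes if keyfunc(x, z) == key)
        den = sum(p for x, z, p in outcomes if keyfunc(x, z) == key)
        table[key] = num / den
    return table

print(cond_exp_Y(lambda x, z: x))        # {1: 1.0, -1: -1.0}: E[Y | X] = X
print(cond_exp_Y(lambda x, z: (x, z)))   # values equal x + z: E[Y | sigma(X, Z)] = X + Z
```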
Series 2: review of probability theory

Exercise 1 For each given p let X have a binomial distribution with parameters p and N. Suppose N is itself binomially distributed with parameters q and M, M > N. (a) Show analytically that X has a binomial distribution with parameters pq and M. (b) Give a probabilistic argument for this result.

Exercise 2 Using the central limit theorem for suitable Poisson random variables, prove that
lim_{n→∞} e^{−n} Σ_{k=0}^{n} n^k / k! = 1/2.

Exercise 3 Let X and Y be independent, identically distributed, positive random variables with continuous density function f(x) satisfying f(x) > 0 for x > 0. Assume, further, that U = X − Y and V = min(X, Y) are independent random variables. Prove that
f(x) = λ e^{−λx} for x > 0, and f(x) = 0 elsewhere,
for some λ > 0. Hint: Show first that the joint density function of U and V is fU,V(u, v) = f(v) f(v + |u|). Next, equate this with the product of the marginal densities for U, V.

Exercise 4 Let X be a nonnegative integer-valued random variable with probability generating function f(s) = Σ_{n=0}^∞ a_n s^n. After observing X, then conduct X binomial trials with probability p of success. Let Y denote the resulting number of successes. (a) Determine the probability generating function of Y. (b) Determine the probability generating function of X given that Y = X. (c) Suppose that for every p (0 < p < 1) the probability generating functions of (a) and (b) coincide. Prove that the distribution of X is Poisson, i.e., f(s) = e^{λ(s−1)} for some λ > 0.
Series 2: review of probability theory
Solutions
Exercise 1 (a) Since the distribution of N is binomial(M, q), we have fN(s) = E[s^N] = (sq + 1 − q)^M. Since the conditional distribution of X given N = n is binomial(n, p), we have fX|N=n(s) = E[s^X | N = n] = (sp + 1 − p)^n. We can compute the generating function of X, say fX(s), by conditioning on N. We obtain
fX(s) = E[s^X] = Σ_{n=0}^{M} E[s^X | N = n] P[N = n] = Σ_{n=0}^{M} (sp + 1 − p)^n P[N = n] = fN(sp + 1 − p) = ((sp + 1 − p)q + 1 − q)^M = (s(pq) + 1 − pq)^M.
Here we recognize the generating function of the binomial(M, pq) distribution.
Here's another approach. For k ∈ {0, 1, 2, . . . , M}, we get
P[X = k] = Σ_{n=0}^{M} P[X = k | N = n] P[N = n]
= Σ_{n=k}^{M} C(n, k) p^k (1 − p)^{n−k} C(M, n) q^n (1 − q)^{M−n}
= Σ_{n=k}^{M} [M!/(k!(M − k)!)] (pq)^k ((1 − p)q)^{n−k} [(M − k)!/((n − k)!(M − n)!)] (1 − q)^{M−n}
= C(M, k) (pq)^k Σ_{j=0}^{M−k} C(M − k, j) ((1 − p)q)^j (1 − q)^{M−k−j}
= C(M, k) (pq)^k ((1 − p)q + (1 − q))^{M−k} = C(M, k) (pq)^k (1 − pq)^{M−k}.
This is the binomial(M, pq) distribution. (b) Imagine M kids. Each one of them will toss a silver coin and a gold coin. With a silver coin the probability of heads is q. With a gold coin, it is p. Let N be the number of kids who will get heads with the silver coin and let X be the number of kids who will get heads on both tosses. Clearly the distribution of N is binomial(M, q), the conditional distribution of X given N = n is binomial(n, p) and the distribution of X is binomial(M, pq).
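Part (a) is easy to corroborate numerically: simulate N ~ binomial(M, q), then X ~ binomial(N, p), and compare the sample mean and variance with those of binomial(M, pq). This is an illustrative sketch only; the parameter values are arbitrary.

```python
import random

random.seed(1)
M, p, q = 20, 0.3, 0.6
n_samples = 200_000

def binomial(n, prob):
    return sum(random.random() < prob for _ in range(n))

xs = [binomial(binomial(M, q), p) for _ in range(n_samples)]

mean = sum(xs) / n_samples
var = sum((x - mean) ** 2 for x in xs) / n_samples
print(round(mean, 3), round(M * p * q, 3))               # ~ M*pq = 3.6
print(round(var, 3), round(M * p * q * (1 - p * q), 3))  # ~ M*pq*(1-pq) = 2.952
```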
Exercise 2 Let X1, X2, X3, . . . be independent random variables, each with distribution Poisson(1), and let Sn = Σ_{k=1}^{n} Xk. Since the mean and the variance of the Poisson(1) distribution are both equal to 1, the central limit theorem gives us
lim_{n→∞} P[(Sn − n)/√n ≤ x] = (1/√(2π)) ∫_{−∞}^{x} e^{−u²/2} du
for every x ∈ R. With x = 0, this gives us
lim_{n→∞} P[(Sn − n)/√n ≤ 0] = 1/2.
This last equation can be written as
lim_{n→∞} P[Sn ≤ n] = 1/2.
In view of the lemma below, the distribution of Sn is Poisson(n). Therefore, the last equation can be written as
lim_{n→∞} e^{−n} Σ_{k=0}^{n} n^k / k! = 1/2.
Lemma. If U ∼ Poisson(α) and V ∼ Poisson(β) and if U and V are independent, then U + V ∼ Poisson(α + β). The elementary proof is left as an exercise.

Exercise 3 Let fX,Y(x, y) denote the joint density of X and Y. Let fU,V(u, v) denote the joint density of U and V. For the moment we do not make any independence assumptions. Fix u > 0 and v > 0. The following computation will be easy to follow if you look at the first quadrant of R² and visualize the set of points (x, y) for which min(x, y) ≤ v and x − y ≤ u.
fU,V(u, v) = ∂²/∂u∂v P[(U ≤ u) ∩ (V ≤ v)] = ∂²/∂u∂v P[(X − Y ≤ u) ∩ (min(X, Y) ≤ v)] = ∂²/∂u∂v (P[(X, Y) ∈ A] + P[(X, Y) ∈ B]),
where A = {(x, y) ∈ R² : v < y < ∞ and 0 < x < v} and B = {(x, y) ∈ R² : 0 < y ≤ v and 0 < x ≤ u + y}. Since A depends only on v, we have ∂²/∂u∂v P[(X, Y) ∈ A] = 0. Thus we have
fU,V(u, v) = ∂²/∂u∂v P[(X, Y) ∈ B] = ∂²/∂u∂v ∫_0^v ∫_0^{u+y} fX,Y(x, y) dx dy = ∂/∂u ∫_0^{u+v} fX,Y(x, v) dx = fX,Y(u + v, v).
A similar calculation gives us fU,V(u, v) = fX,Y(v − u, v) for the case where u > 0 and v < 0. (You should do the computation; be careful with the domain of integration in the (x, y) plane.) In summary, we have shown that
(1)  fU,V(u, v) = fX,Y(v + |u|, v) if u ∈ R and v > 0, and fU,V(u, v) = 0 otherwise.
If we assume that X and Y are independent and identically distributed with density f, then equation (1) can be written as fU,V(u, v) = f(v + |u|) f(v) if u ∈ R and v > 0, and 0 otherwise. If we assume furthermore that U and V are independent, i.e., fU,V(u, v) = fU(u) fV(v) for all v > 0 and u ∈ R, then the above result gives us
f(v) f(v + |u|) = fU(u) fV(v),   v > 0, −∞ < u < ∞.
In particular we have
(2)  f(u + v) = g(u) h(v),   u > 0, v > 0,
with g(u) = fU(u) and h(v) = fV(v)/f(v). From (2) we get, for all x > 0 and y > 0,
P[X > x + y]/P[X > x] = ∫_y^∞ f(x + v) dv / ∫_0^∞ f(x + v) dv = g(x) ∫_y^∞ h(v) dv / (g(x) ∫_0^∞ h(v) dv) = ∫_y^∞ h(v) dv / ∫_0^∞ h(v) dv.
Thus the ratio P[X > x + y]/P[X > x] does not depend on x. Taking the limit as x → 0, we see that this ratio is equal to P[X > y]. Thus we have
(3)  P[X > x + y] = P[X > x] P[X > y]   for all x > 0, y > 0.
The lemma below allows us to conclude that X ∼ exponential(λ).
Lemma. Let X be a non-negative random variable. (a) If X ∼ exponential(λ) for some λ > 0, then equation (3) is true. (b) If equation (3) is true, then X ∼ exponential(λ) (for some λ > 0). The proof is left to the reader.

Exercise 4 (a) The distribution of Y given X = n is binomial(n, p). Thus, E(s^Y | X = n) = (sp + 1 − p)^n. Using this, we have
fY(s) = E(s^Y) =
Σ_{n=0}^∞ E(s^Y | X = n) P(X = n) = Σ_{n=0}^∞ (sp + 1 − p)^n P(X = n) = f(sp + 1 − p).
(b) We have
P(X = Y) = Σ_{n=0}^∞ P(X = n, Y = n) = Σ_{n=0}^∞ p^n P(X = n) = f(p),
and then
P(X = n | X = Y) = P(X = n, X = Y)/P(X = Y) = P(X = n) P(Y = n | X = n)/f(p) = p^n P(X = n)/f(p).
To conclude,
f_{X|X=Y}(s) = Σ_{n=0}^∞ s^n P(X = n | X = Y) = (1/f(p)) Σ_{n=0}^∞ s^n p^n P(X = n) = f(sp)/f(p).
(c) From parts (a) and (b) we have
f(1 − p + ps) = f(ps)/f(p),   0 < p < 1 and −1 < s < 1.
If we fix p and take the derivative w.r.t. s, we obtain
f′(1 − p + ps) p = f′(ps) p / f(p),   0 < p < 1 and −1 < s < 1.
Now we divide by p on both sides and evaluate at s = 1. We obtain
f′(1) = f′(p)/f(p)
00 be a Markov chain on Z such that, for each n, the transitions n → n + 1 and n → n − 1 occur with probability p and 1 − p, respectively. We want to find P(Xn reaches a + b − 3 before reaching 5 | X0 = a), which is the same as P(Xn reaches a + b − 8 before reaching 0 | X0 = a − 5). By Gambler’s ruin, this is equal to
((q/p)^{a−5} − 1) / ((q/p)^{a+b−8} − 1)   if p ≠ q;
(a − 5)/(a + b − 8)   if p = q = 1/2.
Exercise 4 (1) The transition matrix is
P =
[ 0.995  0.005  0    ]
[ 0.9    0.05   0.05 ]
[ 0      0      1    ]
(2) Remark: for the purpose of this question, computing explicitely the eigenvalues of this 2 × 2 matrix and checking that their modulus are strictly smaller than 1 would be fine. Here, we study the spectrum using arguments that are easier to extend to more general situations. Note that for any i, one has X Ri j ≤ 1, j
Hence, for any x = (x1 , x2 ) ∈ (1)
C2,
X X XX Ri j xi ≤ Ri j |xi | ≤ kxk1 . kR xk1 = T
j
j
i
i
As a consequence, the spectrum of RT (which is the same as that of R) is contained in {λ ∈ Moreover, since X R2 j < 1,
C : |λ| ≤ 1}.
j
the inequality in (1) is strict if x2 , 0. Assume that there exists λ ∈ of modulus 1, and x ∈ 2 a non-zero vector such that RT x = λx. Then it is easy to see that x2 , 0, and thus kRT xk1 < kxk1 , a contradiction with the assumption that λ has modulus one. We have proved that the eigenvalues of RT , which are also those of R, have modulus strictly smaller than 1.
C
C
20
This clearly implies that Q and the series +∞ X
Rn
n=0
are well defined. Now, observe that (I − R)
+∞ X
Rn =
n=0
+∞ X n=0
Rn −
+∞ X
Rn = I,
n=1
P n so in fact Q = +∞ n=0 R . Finally, one has to see that Rnij is the probability that starting from i, the walk is at j at time n. By definition, this probability is X Pi0 i1 · · · Pin−1 in , i=i0 ,i1 ,...,in−1 ,in = j
where i1 , . . . , in−1 take all possible values in {1, 2, 3}. But since state 3 is absorbing (that is, P3 j = 0 if j , 3), the sum above is zero if one ik is equal to 3. So the sum is equal to X Ri0 i1 · · · Rin−1 in , i=i0 ,i1 ,...,in−1 ,in = j
where i1 , . . . , in−1 take all possible values in {1, 2}. This last expression is precisely Rnij . (3) A computation shows that " # 3800 20 Q= . 3600 20 Starting from a car that is working, the expected time before it goes out of order is the sum of expected times spent in states 1 and 2, which is thus 3820 days (and on average, the total number of days during which the car is working before it gets out of order is 3800 days). (4) A careful examination of the proof of part (2) shows that the only requirement is that the “out of order” state can be reached from any other state in a finite number of steps. (5) For instance, to get the expected time before getting “SF”, we may consider the (3 × 3) reduced transition matrix R given by FF FS S S FF q p 0 . FS 0 0 p 0 0 p SS For the inverse, we obtain (I − R)−1
1/p 1 p/q = 0 1 p/q . 0 0 1/q
It takes two tosses to “get the chain started”. But the moment of appearance of “SF” is not changed if instead we start in the state “FF”. We get that the expected time before “SF” appears is 1 p 1 1 +1+ = + , p q p q as expected.
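Both fundamental-matrix computations in this exercise (the car chain's Q = (I − R)^{-1} and the "SF" pattern's expected waiting time) are easy to reproduce numerically. The sketch below assumes the reduced matrices as reconstructed above and is only an illustration.

```python
def inv2(m):
    """Inverse of a 2x2 matrix (enough for the car example)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Car example: R = P restricted to the two non-absorbing ("working") states.
R = [[0.995, 0.005], [0.9, 0.05]]
I_minus_R = [[1 - R[0][0], -R[0][1]], [-R[1][0], 1 - R[1][1]]]
Q = inv2(I_minus_R)
print([[round(x) for x in row] for row in Q])   # [[3800, 20], [3600, 20]]
print(round(sum(Q[0])))                         # 3820 days in states 1 and 2 before breakdown

# "SF" pattern example: the FF-row sum of (I - R)^{-1} is 1/p + 1 + p/q = 1/p + 1/q.
p = q = 0.5
print(1 / p + 1 + p / q, 1 / p + 1 / q)         # 4.0 4.0
```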
21
Series 5: stationary distributions
Exercise 1 Consider a Markov chain with transition probabilities matrix
P =
[ p0    p1    p2    · · ·  pm   ]
[ pm    p0    p1    · · ·  pm−1 ]
[ · · ·  · · ·  · · ·  · · ·  · · · ]
[ p1    p2    p3    · · ·  p0   ]
where 0 < pi < 1 for each i and p0 + p1 + · · · + pm = 1. Determine lim_{n→∞} P^n_{ij}, the stationary distribution.
Exercise 2 An airline reservation system has two computers, only one of which is in operation at any given time. A computer may break down on any given day with probability p. There is a single repair facility which takes 2 days to restore a computer to normal. The facilities are such that only one computer at a time can be dealt with. Form a Markov chain by taking as states the pairs (x, y) where x is the number of machines in operating condition at the end of a day and y is 1 if a day's labor has been expended on a machine not yet repaired and 0 otherwise. Enumerating the states in the order (2, 0), (1, 0), (1, 1), (0, 1), the transition matrix is
[ q  p  0  0 ]
[ 0  0  q  p ]
[ q  p  0  0 ]
[ 0  1  0  0 ]
where p, q > 0 and p + q = 1. Find the stationary distribution in terms of p and q. Exercise 3 Sociologists often assume that the social classes of successive generations in a family can be regarded as a Markov chain. Thus, the occupation of a son is assumed to depend only on his father’s occupation and not on his grandfather’s. Suppose that such a model is appropriate and that the transition probability matrix is given by
                     Son's class
Father's class    Lower   Middle   Upper
Lower              .40     .50      .10
Middle             .05     .70      .25
Upper              .05     .50      .45
For such a population, what fraction of people are middle class in the long run? Exercise 4 Suppose that the weather on any day depends on the weather conditions for the previous two days. To be exact, suppose that if it was sunny today and yesterday, then it will be sunny tomorrow with probability .8; if it was sunny today but cloudy yesterday, then it will be sunny tomorrow with probability .6; if it was cloudy today but sunny yesterday, then it will be sunny tomorrow with probability .4; if it was cloudy for the last two days, then it will be sunny tomorrow with probability .1.
22
Such a model can be transformed into a Markov chain provided we say that the state at any time is determined by the weather conditions during both that day and the previous day. We say the process is in
• State (S,S) if it was sunny both today and yesterday;
• State (S,C) if it was sunny yesterday but cloudy today;
• State (C,S) if it was cloudy yesterday but sunny today;
• State (C,C) if it was cloudy both today and yesterday.
Enumerating the states in the order (S,S), (S,C), (C,S), (C,C), the transition matrix is
[ .8  .2  0   0  ]
[ 0   0   .4  .6 ]
[ .6  .4  0   0  ]
[ 0   0   .1  .9 ]
Find the stationary distribution for the Markov chain.
Exercise 5 (Chapter 3, Problem 4) Consider a discrete time Markov chain with states 0, 1, . . . , N whose matrix has elements
P_{ij} = μ_i if j = i − 1,  λ_i if j = i + 1,  1 − λ_i − μ_i if j = i,  and 0 if |j − i| > 1,  for i, j = 0, 1, . . . , N.
Suppose that μ0 = λ0 = μN = λN = 0, and all other μi's and λi's are strictly positive, and that the initial state of the process is k. Determine the absorption probabilities at 0 and N.
23
Series 5: stationary distributions
Solutions
Exercise 1 Since all entries in the transition matrix are strictly positive, the chain is irreducible and positive recurrent, so there exists a unique invariant measure. By the symmetric structure of the matrix, we can guess that 1 1 , . . . , m+1 ) is invariant, and indeed it is trivial to check that πP = π. the uniform distribution π = ( m+1 Exercise 2 Following the enumeration in the exercise, let (π1 , π2 , π3 , π4 ) denote the stationary distribution. It must satisfy: (1)
π1 + π2 + π3 + π4 = 1;
(2)
qπ1 + qπ3 = π1 ;
(3)
pπ1 + pπ3 + π4 = π2 ;
(4)
qπ2 = π3 ;
(5)
pπ2 = π4 .
Applying (4) in (2), we get qπ1 + q2 π2 = π1 =⇒ π1 =
q2 π2 . p
Using this, (4) and (5) in (1), we get q2 p . π2 + π2 + pπ2 + qπ2 = 1 =⇒ π2 = p 1 + p2 We thus get π1 =
q2 , 1+p2
π2 =
p , 1+p2
π3 =
qp , 1+p2
π4 =
p2 . 1+p2
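The stationary distribution just derived for Exercise 2, namely π1 = q²/(1 + p²), π2 = p/(1 + p²), π3 = qp/(1 + p²), π4 = p²/(1 + p²), can be double-checked by solving πP = π numerically for a concrete p. A small sketch (power iteration, p = 0.3), for illustration only:

```python
p = 0.3
q = 1 - p
# States in the order (2,0), (1,0), (1,1), (0,1).
P = [
    [q, p, 0, 0],
    [0, 0, q, p],
    [q, p, 0, 0],
    [0, 1, 0, 0],
]

pi = [0.25] * 4
for _ in range(10_000):                      # power iteration: pi <- pi P
    pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]

closed_form = [q * q / (1 + p * p), p / (1 + p * p),
               q * p / (1 + p * p), p * p / (1 + p * p)]
print([round(x, 6) for x in pi])
print([round(x, 6) for x in closed_form])    # the two lines agree
```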
Exercise 3 Since all the entries of the transition matrix are positive, the chain is irreducible and aperiodic, so there exists a unique stationary distribution. Denoting this distribution by π = (x, y, z), the quantity we are looking for – namely, the fraction of people that are middle class in the long run – is equal to y. We know that x + y + z = 1 and also that .40 .50 .10 (x y z) .05 .70 .25 = (x y z); .05 .50 .45 solving the corresponding system of linear equations, we get x =
1 13 ,
y=
5 8
and z =
31 104 .
Exercise 4 Let x, y, z, w denote the states of the chain in the order that they appear in the transition matrix. The invariant measure is obtained by solving the system .8 .2 .4 .6 = (x y z w); (x y z w) .6 .4 .1 .9 x + y + z + w = 1.
24
3 The unique solution is (x y z w) = ( 11 ,
1 1 6 11 , 11 , 11 ).
Exercise 5 For k ∈ {0, . . . , N}, define f (k) = P(∃n > 0 : Xn = N | X0 = k). Since 0 and N are absorbing, we have f (0) = 0, f (N) = 1. For k ∈ {1, . . . , N − 1}, we have f (k) = µk f (k − 1) + λk f (k + 1) =⇒ f (k + 1) − f (k) =
µk ( f (k) − f (k − 1)). λk
For k ∈ {1, . . . , N}, define ∆k = f (k) − f (k − 1). We can thus rewrite what we have obtained above as ∆k+1 = λµkk ∆k ; iterating this we get ∆k =
µ1 · · · µk−1 ∆1 λ1 · · · λk−1
∀k>2
We can now find ∆1 : 1 = f (N) − f (0) = ∆1 + · · · + ∆N = 1 + =⇒ ∆1 = 1 +
! µ1 · · · µk−1 µ1 ∆1 + ···+ λ1 λ1 · · · λk−1 !−1
µ1 · · · µk−1 µ1 + ···+ λ1 λ1 · · · λk−1
Then, for k ∈ {1, . . . , N}, we have ···µk−1 1 + λµ11 + · · · + λµ11 ···λ µ1 k−1 f (k) = f (k) − f (0) = ∆1 + · · · ∆k = ∆1 + ∆1 · · · + ∆k = µ1 µ1 ···µN−1 . λ1 + · · · + λ λ ···λ 1
25
1
N−1
Series 6: hitting times
Exercise 1 Let (Xn ) be an irreducible Markov chain with stationary distribution π. Find Eπ (min{n : Xn = X0 }). Exercise 2 Let P be the transition matrix of an irreducible Markov chain in state space S . A distribution π on S is called reversible for P if, for each x, y ∈ S , we have π(x)P(x, y) = π(y)P(y, x). (a) Show that, if π is reversible for P, then π is stationary for P. (b) Assuming that π is reversible for P, show that, for any n and any x0 , x1 , . . . xn ∈ S , Pπ (X0 = x0 , . . . , Xn = xn ) = Pπ (Xn = x0 , . . . , X0 = xn ). (c) Random walk on a graph. Let G = (V, E) be a finite connected graph. For each x ∈ V, let d(x) denote the degree of x. Let P be the Markov transition matrix for the Markov chain on V given by P(x, y) =
(
1 d(x)
0
if (x, y) ∈ E; otherwise.
This is called the random walk on G. Show that π(x) = d(x)/
P
z d(z)
is reversible for P.
Exercise 3 We identify the square grid Λ = {1, . . . , 8}2 with a chessboard. In chess, a knight is allowed to move on the board as follows. If the knight is in position (x, y), it can go to any of the positions in ! (x + 1, y + 2), (x − 1, y + 2), (x − 2, y + 1), (x − 2, y − 1), Λ∩ . (x − 1, y − 2), (x + 1, y − 2), (x + 2, y − 1), (x + 2, y + 1) Let (Xn ) denote the sequence of positions of a knight that at time zero is in position (1, 1) and at each step chooses uniformly among its allowable displacements. Find the expected time until the knight returns to (1, 1). (Hint: describe the chain as in Exercise 2(c).) Exercise 4 Consider a finite population (of fixed size N) of individuals of possible types A and a undergoing the following growth process. At instants of time t1 < t2 < t3 < . . ., one individual dies and is replaced by another of type A or a. If just before a replacement time tn there are j A’s and N − j a’s present, we postulate that the probability that an A individual dies is jµ1 /B j and that an a individual dies is (N − j)µ2 /B j where B j = µ1 j + µ2 (N − j). The rationale of this model is predicated on the following structure: Generally a type A individual has chance µ1 /(µ1 + µ2 ) of dying at each epoch tn and an a individual has chance µ2 /(µ1 + µ2 ) of dying at time tn . (µ1 /µ2 can be interpreted as the selective advantage of A types over a types). Taking account of the sizes of the population it is plausible to assign the probabilities µ1 j/B j and (µ2 (N − j)/B j) to the events that the replaced individual is of type A and type a, respectively. We assume no difference in the birth pattern of the two types and so the new individual is taken to be A with probability j/N and a with probability (N − j)/N. (1) Describe the transition probabilities of the Markov chain (Xn ) representing the number of individuals of type A at times tn (n = 1, 2, . . .). (2) Find the probability that the population is eventually all of type a, given k A’s and (N − k) a’s initially.
26
Exercise 5 Let (Xk ) be a Markov chain for which i is a recurrent state. Show that lim P(Xk , i for n + 1 6 k 6 n + N | X0 = i) = 0.
N→∞
If i is a positive recurrent state prove that the convergence in the above equation is uniform with respect to n.
27
Series 6: hitting times
Solutions
Exercise 1
Eπ (min{n : Xn = X0 }) =
X
π(x) · E x (min{n : Xn = X0 }) =
x∈S
X
π(x) ·
x∈S
1 = #S . π(x)
Exercise 2 (a) For any x ∈ S , X
π(y)P(y, x) =
y
X y
π(x)P(x, y) = π(x)
X
P(x, y) = π(x),
y
so π is stationary. (b) Pπ (X0 = x0 , . . . , Xn = xn ) = π(x0 )P(x0 , x1 )P(x1 , x2 ) · · · P(xn−1 , xn ) = P(x1 , x0 )π(x1 )P(x1 , x2 ) · · · P(xn−1 , xn ) = P(x1 , x0 )P(x2 , x1 )π(x2 )P(x2 , x3 ) · · · P(xn−1 , xn ) = · · · = P(x1 , x0 )P(x2 , x1 )P(x3 , x2 ) · · · P(xn , xn−1 )π(xn ) = Pπ (Xn = x0 , . . . , X0 = xn ). (c) Let x, y ∈ V. If x and y are not adjacent, we have π(x)P(x, y) = π(y)P(y, x) = 0. Otherwise, 1 d(y) 1 d(x) 1 · =P · π(x)P(x, y) = P =P = π(y)P(y, x). d(z) d(z) d(z) d(x) d(y) z z z
Exercise 3 We give a graph structure to the chessboard by taking the vertex set to be Λ and setting edges x ∼ y if and only if the knight can go from x to y (this implies that it can also go from y to x, so the relation is symmetric). The stochastic sequence (Xn ) is then a random walk on this graph. We claim that the graph is connected (equivalently, that the chain is irreducible). To see this, consider the reduced board Λ′ = {1, 2, 3}2 . Without leaving Λ′ , it is possible to go from (1, 1) to (1, 2): performing the moves (1, 1) → (3, 2) → (1, 3) → (2, 1); of course, reversing these moves allows us to go from (2, 1) to (1, 1). Now, consider two adjacent positions x, y ∈ Λ (x and y differ by 1 in one coordinate and coincide in the other). For any such pair x, y, it is possible to find a 3x3 subset of Λ such that either x or y is one of the corners of the square. Then, except possibly for a rotation, the moves given for Λ′ can be repeated so that we can go from x to y and from y to x. Now, for arbitrary points x, y ∈ Λ, we can construct a path from x to y such that at each step we go to an adjacent position, so all cases are now covered. P By Exercise 2(c), the stationary distribution is given by π(x) = d(x)/ z d(z). The degrees of the vertices 28
of the graph are given by 2 3 4 4 4 4 3 2
3 4 6 6 6 6 4 3
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
4 6 8 8 8 8 6 4
3 2 4 3 6 4 6 4 6 4 6 4 4 3 3 2 P The expected return time to (1, 1) is given by 1/π((1, 1)) = ( z d(z))/d((1, 1)) = 336/2 = 168. Exercise 4 (1) The transition probabilities are given by P j, j−1 =
µ1 j(N − j) , B jN
P j j = 1 − P j, j−1 − P j, j+1 ,
P j, j+1 =
Pi j = 0,
µ2 (N − j) j , B jN
for |i − j| > 1.
(2) We are exactly in the context of last week’s Exercise 5. Using what was obtained there, we have P(∃n : Xn = 0 | X0 = 0) = 1, P(∃n : Xn = 0 | X0 = N) = 0 and P(∃n : Xn = N | X0 = k) = Since in our present case we have
Pk,k−1 Pk,k+1
µ1 µ2
=
1+ 1+
P1,0 P2,1 P1,0 P1,2 + P1,2 P2,3 P1,0 P2,1 P1,0 P1,2 + P1,2 P2,3
+ ··· + ···
P1,0 ···Pk−1,k−2 P1,2 ···Pk−1,k P1,0 ···PN−1,N−2 P1,2 ···PN−1,N
for all k, we conclude that 1+
µ1 µ2
+ ··· +
1+
µ1 µ2
+ ··· +
P(∃n : Xn = 0 | X0 = k) = 1 − P(∃n : Xn = N | X0 = k) = 1 −
µ k−1 1
µ2
µ N−1 = 1
µ2
µ N 1
µ2
−
µ N 1
µ2
µ k 1
µ2
.
−1
Exercise 5 Let τi = min{k > 1 : Xk = i} be the first return time to i. When i is recurrent, we have Pi (τi < ∞) = 1, so lim Pi (τi > K) = 0.
(1)
K→∞
If, in addition, i is positive recurrent, we have ∞ X
Pi (τi > l) = Ei (τi ) < ∞,
l=0
so that (2)
lim
K→∞
∞ X
Pi (τi > l) = 0.
l=K
With this at hand, we are ready: Pi (∄k ∈ [n, n + N] : Xk = i) =
n−1 X
Pi (Xt = i, Xt+1 , . . . , Xn+N , i)
t=0
=
n−1 X
Pi (Xt = i) · Pi (τi > N + n − t) 6
t=0
n−1 X t=0
29
Pi (τi > N + n − t) =
n X l=1
Pi (τi > N + l).
It follows from (1) that, if we keep n fixed and take N to infinity, the last expression converges to zero. In addition, if i is positive recurrent, we can use (2) to obtain n X
Pi (τi > N + l) =
l=1
N+n X l=N+1
as N → ∞, so the convergence is uniform in n.
30
Pi (τi > l) 6
∞ X l=N+1
Pi (τi > l) → 0
Series 7: recap on Markov chains
Exercise 1 Let S = {0, 1, 2, 3}. We consider the Markov chain on S whose transition matrix is given by 1/4 1/4 1/2 0 1/6 1/6 1/3 1/3 . P = 0 1/3 2/3 0 0 0 1/2 1/2 1) What are the communication classes, the transient states ? Is the chain aperiodic ? If not, give the period. 2) What is P200 ? What approximately is P100 ? 3) Starting from 2, what is the expected number of visits to state 3 before returning to 2 ? Exercise 2 Consider the Markov chain on the state space below, where at each step a neighbour is uniformly chosen among sites that are linked by an edge to the current position.
1) What are the communication classes, the transient states ? Is the chain aperiodic ? If not, give the period. 2) Is there an invariant measure ? Is it unique ? In case it is, give it. 3) Starting from x, what is the probability to hit S before F ? 4) Starting from x, what is the expected number of visits to F before returning to x ? Exercise 3 We have n coins C1 , . . . , Cn . For each k, Ck is biased so that it has probability 1/(2k + 1) of falling heads. We toss the n coins. What is the probability of obtaining an odd number of heads ? (Hint: letting pk = 1/(2k + 1) and qk = 1 − pk , you may study the function f (x) = (q1 + p1 x) · · · (qn + pn x) and its power series.)
31
Series 7: recap on Markov chains
Solutions
Exercise 1 The communication classes are {0, 1} and {2, 3}. To see that state 0 is transient, note that for the chain starting at 0, there is a non-zero probability that the chain goes to state 2. Once there, there is no path going back to 0, so indeed, the probability that the return time to 0 is infinite is strictly positive. The same holds for state 1. The chain is aperiodic on its recurrent class {2, 3}. 2) We have 1 5 1 + = . P200 = (P00 )2 + P01 P10 = 16 24 48 To approximate P100 (that is, to find what Pn looks like when n tends to infinity), we first compute the stationary distribution on the communication class {2, 3}, which has transition matrix " # 1/3 2/3 . 1/2 1/2 We find a stationary distribution π with weights (3/7 4/7). On the set {2, 3}, the chain is irreducible and aperiodic, so starting from any point, the distribution of Xn converges to the stationary distribution. Hence, for i, j ∈ {2, 3}, one has Pnij → π( j) as n tends to infinity. Clearly, if i ∈ {2, 3} and j ∈ {0, 1}, then Pnij = 0. If j ∈ {0, 1}, then it is a transient state, so for any i, we have Pnij → 0 (in fact, we proved a stronger P statement in exercise 1 of series 3: that n Pnij is finite). There remains to determine Pnij when i ∈ {0, 1} and j ∈ {2, 3}. Intuitively, the chain enters the class {2, 3} at some point, and then converges to equilibrium in the class {2, 3}, so we should have lim Pnij = π( j).
(1)
n→+∞
Here is a rigorous proof of this fact. Let τ = inf{n : Xn ∈ {2, 3}}. Since i is transient, Pi [τ = +∞] = 0. For any ε > 0, we can find N large enough such that Pi [τ ≥ N] ≤ ε. For j ∈ {2, 3} and n ≥ N, we have (2)
Pnij = Pi [Xn = j] =
N−1 X k=0
We decompose Pi [Xn = j, τ = k] into
Pi [Xn = j, τ = k] + Pi [τ ≥ N] . | {z } ≤ε
Pi [Xn = j, τ = k, Xτ = 2] + Pi [Xn = j, τ = k, Xτ = 3], and we apply the strong Markov property at time τ on each term. The first term becomes Pi [τ = k, Xτ = 2] P2 [Xn−k = j] −−−−−→ Pi [τ = k, Xτ = 2] π( j), n→+∞
where we used the fact that the Markov chain on {2, 3} is aperiodic. Similarly, Pi [Xn = j, τ = k, Xτ = 3] −−−−−→ Pi [τ = k, Xτ = 3] π( j), n→+∞
32
and putting the two together, we obtain Pi [Xn = j, τ = k] −−−−−→ Pi [τ = k] π( j). n→+∞
So the sum appearing in the right-hand side of (2) is such that N−1 X
Pi [Xn = j, τ = k] −−−−−→ Pi [τ < N] π( j), n→+∞
k=0
and 1 − ε ≤ Pi [τ < N] ≤ 1. We have thus proved that lim sup Pnij ≤ π( j) + ε, n→+∞
and lim inf Pnij ≥ (1 − ε)π( j). n→+∞
Since ε > 0 was arbitrary, this proves (1). Conclusion : P100
0 0 3/7 0 0 3/7 ≃ lim Pn = n→+∞ 0 0 3/7 0 0 3/7
4/7 4/7 4/7 4/7
.
3) We can use the fact that this expected value is exactly π(3)/π(2) = 4/3. We can also take a more explicit approach and decompose the possible movements of the chain: with probability 1/3 the walk stays at 2 and therefore the number of visits to 3 is zero. Else it goes to 3, and then stays for a time geometric(1/2), and then goes back to state 2. The expectation we are looking for is thus +∞
2X 1 1 0+ k 3 3 k=1 2
!k
=
4 . 3
Exercise 2 1) The Markov chain is a random walk on a graph (see exercise 2 of series 5). Let G = (V, E) denote the graph. G is connected, that is, for any x, y ∈ V, there exists a path x0 = x, x1 , x2 , . . . , xN = y such that (xi , xi+1 ) ∈ E for each i. We then have P x (y is ever reached) > P(x, x1 )P(x1 , x2 ) · · · P(xN−1 , y) > 0, so the chain is irreducible, and there are no transient states. The chain is also aperiodic. To see this, fix any vertex that is the extreme point of one of the diagonal segments. Starting from this vertex, we can return to it in 3 steps (using the diagonal) and also in 4 steps (not using the diagonal), so the period of this vertex must divide 3 and 4, so it is 1. Since the chain is irreducible, we conclude that all vertices have period 1, so the chain is aperiodic. 2) since the chain is irreducible and the state space is finite, there exists a unique invariant measure. We can look for a reversible measure as in exercise 2 of series 5, and we find that the equilibrium distribution π evaluated at a vertex x is equal to the degree of x divided by the sum of the degrees of all vertices. In our case, this gives the following invariant measure. 2/52 3/52 3/52 2/52
3/52 5/52 5/52 3/52
3/52 5/52 5/52 3/52
2/52 3/52 3/52 2/52
(Remark: the invariant measure is represented here as a matrix just for ease of notation, following the positions of the graph; we can of course enumerate the 16 states of the chain and present the invariant measure as a row vector as we usually do.)
33
3) Consider the diagram 1/2 1 − b 1 − a 0 b 1/2 1 − c 1 − a a c 1/2 1 − b 1 a b 1/2 The value at each site represents the probability, starting from this site, to arrive at S before F. Notice that we have used symmetry several times. Using the relation P x (S is reached before F) =
1 X Py (S is reached before F), deg(x) y∼x
valid for any x < {S , F}, we obtain the system of linear equations a = 31 + 13 b + 13 c b = 31 a + 13 c = 2 a + 2 · 1 + 1 (1 − c) 5
5
2
5
n X
ak xk .
Solving this gives b = 47 . 4) This is π(F)/π(x) = 2/3. Exercise 3 Let us write
f (x) =
k=0
By developping, we observe that ak is the probability of getting exactly k heads. We want to know what is a1 + a3 + a5 + · · · . Note that n X f (1) = ak , k=0
while f (−1) =
n X k=0
ak (−1)k =
X
k even
ak −
X
ak .
k odd
So the quantity we are interested in is f (1) − f (−1) . 2 Clearly, f (1) = 1, while an easy computation gives f (−1) = 1/(2n + 1). The probability we are interested in is thus n/(2n + 1).
34
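The generating-function trick in Exercise 3 of this series is easy to check numerically: expand f(x) = Π_k (q_k + p_k x) by polynomial multiplication, read off the odd coefficients, and compare their sum with (f(1) − f(−1))/2 = n/(2n + 1). An illustrative sketch:

```python
from fractions import Fraction

def odd_heads_probability(n):
    # Coefficients of f(x) = prod_k (q_k + p_k x), built by polynomial multiplication.
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        p = Fraction(1, 2 * k + 1)
        q = 1 - p
        new = [Fraction(0)] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += q * c
            new[i + 1] += p * c
        coeffs = new
    return sum(c for i, c in enumerate(coeffs) if i % 2 == 1)

for n in (1, 2, 5, 10):
    print(n, odd_heads_probability(n), Fraction(n, 2 * n + 1))   # the two columns agree
```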
Series 8: Poisson processes
Exercise 1 Let X1 and X2 be independent exponentially distributed random variables with parameters λ1 and λ2 so that P(Xi > t) = exp(−λi t) Let N=
(
for t > 0.
1 if X1 < X2 ; 2 if X2 6 X1 ,
U = min{X1 , X2 } = XN , V = max{X1 , X2 }, W = V − U = |X1 − X2 |. Show a) P(N = 1) = λ1 /(λ1 + λ2 ) and P(N = 2) = λ2 /(λ1 + λ2 ). b) P(U > t) = exp{−(λ1 + λ2 )t} for t > 0. c) N and U are independent random variables. d) P(W > t|N = 1) = exp{−λ2 t} and P(W > t|N = 2) = exp{−λ1 t} for t > 0. e) U and W = V − U are independent random variables. Exercise 2 Assume a device fails when a cumulative effect of k shocks occur. If the shocks happen according to a Poisson process with parameter λ, find the density function for the life T of the device Answer: k k−1 −λt if t > 0; λ tΓ(k)e f (t) = 0, if t 6 0. Exercise 3 Let {X(t), t > 0} be a Poisson process with intensity parameter λ. Suppose each arrival is “registered” with probability p, independently of other arrivals. Let {Y(t), t > 0} be the process of “registered” arrivals. Prove that Y(t) is a Poisson process with parameter λp. Exercise 4 At time zero, a single bacterium in a dish divides into two bacteria. This species of bacteria has the following property: after a bacterium B divides into two new bacteria B1 and B2 , the subsequent length of time until each Bi divides is an exponential random variable of rate λ = 1, independently of everything else happening in the dish. a) Compute the expectation of the time T n at which the number of bacteria reaches n. b) Compute the variance of T n .
35
Series 8: Poisson processes
Solutions
Exercise 1 a) P(N = 1) = P(X1 < X2 ) =
Z
0
∞Z ∞
λ2 e−λ2 y λ1 e−λ1 x dy dx =
x
λ1 ; λ1 + λ2
λ2 . P(N = 2) = 1 − P(N = 1) = λ1 + λ2 b) P(U > t) = P(X1 , X2 > t) = P(X1 > t) P(X2 > t) = e−(λ1 +λ2 )t . c) Z
∞Z ∞
λ1 = P(U > t) P(N = 1). λ1 + λ2 t x P(U > t, N = 2) = P(U > t) − P(U > t, N = 1) = P(U > t)(1 − P(N = 1)) = P(U > t) P(N = 2). P(U > t, N = 1) = P(t < X1 < X2 ) =
λ2 e−λ2 y λ1 e−λ1 x dy dx =
d) P(W > t, N = 1) P(X2 > X1 + t) = P(N = 1) P(N = 1) Z ∞Z ∞ λ1 1 λ1 + λ2 · · e−λ2 t = e−λ2 t . = λ λ2 e−λ2 y λ1 e−λ1 x dydx = 1 λ λ + λ 1 1 2 0 x+t
P(W > t | N = 1) =
λ1 +λ2
Exchanging the roles of X1 and X2 , we get P(W > t | N = 2) = e−λ1 t . e) For all s, t > 0, P(U > s, W > t) = P(s < X1 < X2 − t) + P(s < X2 < X1 − t) Z ∞Z ∞ Z ∞Z ∞ −λ2 y −λ1 x λ1 e−λ1 x λ2 e−λ2 y dx dy λ2 e λ2 e dy dx + = s
x+t
s
y+t
λ1 λ2 = e−λ2 t e−(λ1 +λ2 )s + e−λ1 t e−(λ1 +λ2 )s . λ1 + λ2 λ1 + λ2 Also, P(W > t) = P(W > t | N = 1) · P(N = 1) + P(W > t | N = 2) · P(N = 2) =
λ1 λ2 e−λ2 t + e−λ1 t . λ1 + λ2 λ1 + λ2
We thus have P(U > s, W > t) = P(U > s) · P(W > t), so they are independent. Exercise 2 If T 1 , T 2 , T 3 , . . . are the arrival times for the Poisson process, then T 1 , T 2 −T 1 , T 3 −T 2 , . . . are independent and all have exponential(λ) distribution. We are looking for the law of T k = T 1 +(T 2 −T 1 )+· · ·+(T k −T k−1 ). This is given by the k-convolution of exponential λ. We claim that the density f (k) of this convolution is given by k k−1 t e−λt if t > 0; λΓ(k) (k) f (x) = 0, otherwise. 36
This is easy to prove by induction: when k = 1, it is trivial; assuming it is true for k, we get f (k+1) (x) = ( f (k) ∗ f (1) )(x) =
Z
x
λe−λ(x−y) 0
λk+1 −λx xk λk+1 xk λk yk−1 −λy e dy = e = · e−λx Γ(k) Γ(k) k Γ(k + 1)
since kΓ(k) = Γ(k + 1). Exercise 3 Let T 1 , T 2 , . . . denote the sequence of arrival times of X(t). One rigorous way of stating what is said in the exercise is as follows. Let Z1 , Z2 , . . . be a sequence of independent Bernoulli(p) random variables. PX(t) Assume that they are also independent of the process X. Then, put Y(t) = i=1 Zi . With this construction, Zi is 1 if arrival i is registered and 0 otherwise. Given an interval A ⊂ [0, ∞), we will denote respectively by NA and N˜ A the number of arrivals of X(t) and Y(t) in A. We prove that Y(t) is a Poisson process of parameter λp by verifying: • If A, B are disjoint intervals, then N˜ A and N˜ B are independent. X P(N˜ A = m, N˜ B = n, NA = m1 , NB = n1 ) P(N˜ A = m N˜ B = n) = m1 >m n1 >n
=
X
X
X
X
P
N˜ A = m, N˜ B = n, NA = m1 , T r , . . . , T r+m1 −1 ∈ A, NB = n1 , T s , . . . , T s+m2 −1 ∈ B
P
! P 2 Zi = m, s+m j=s Z j = n, NA = m1 , T r , . . . , T r+m1 −1 ∈ A, NB = n1 , T s , . . . , T s+m2 −1 ∈ B
r,s: m1 >m, n1 >n r+m1 6s
=
r,s: m1 >m, n1 >n r+m1 6s
=
Pr+m1 i=r
NA = m1 , T r , . . . , T r+m1 −1 ∈ A, NB = n1 , T s , . . . , T s+m2 −1 ∈ B
X
P(Bin(m1 , p) = m) · P(Bin(n1 , p) = n) ·
X
P(Bin(m1 , p) = m) · P(Bin(n1 , p) = n) P(NA = m1 , NB = n1 )
X
r,s: r+m1 6s
m1 >m, n1 >n
=
!
P
!
m1 >m, n1 >n
X P(Bin(m1 , p) = m) · P(NA = m1 ) = m1 >m
X P(Bin(n , p) = n) · P(N = n ) 1 B 1 n1 >n
Repeating the above computation in each of the last big parenthesis, we get P(N˜ A = m, N˜ B = n) = P(N˜ A = m) · P(N˜ B = n) as required.
• P(N˜ [t,t+h] > 2) = o(h). This is true because {N˜ [t,t+h] > 2} ⊂ {N[t,t+h] > 2} and P(N[t,t+h] > 2) = o(h). • P(N˜ [t,t+h] = 1) = λph + o(h). Indeed, P(N˜ [t,t+h] = 1) = P(N˜ [t,t+h] = 1, N[t,t+h] = 1) + P(N˜ [t,t+h] = 1, N[t,t+h] > 2) ∞ X = P(N[t,t+h] = 1, T i ∈ [t, t + h], Zi = 1) + o(h) i=1 ∞ X
=p
P(N[t,t+h] = 1, T i ∈ [t, t + h]) + o(h)
i=1
= p · P(N[t,t+h] = 1) + o(t) = pλh + po(h) + o(h).
37
Exercise 4 Let R0 = 0 and let R1 < R2 < . . . be the times at which the population size increases. Since we already start with two bacteria at time 0, we have T n = Rn−2 . Consider the following alternate experiment. Start at time S 0 = 0 a population of 2 bacteria and wait until one of them reproduces; call this time S 1 . At this instant, we forget about these bacteria and start watching another population with three new-born bacteria and wait until one of them reproduces; call this time S 2 . Then yet again we forget about this population and start a new one with four new-born bacteira, and wait until one of them reproduces, time S 3 , and so on. By the lack of memory of the exponential distribution, the law of R1 , R2 − R1 , R3 − R2 , . . . is equal to that of S 1 , S 2 − S 1 , S 3 − S 2 , . . . In the alternate experiment it is easy to see that S m − S m−1 is equal to the minimum of m + 1 independent exponential(1) random variables (the times until the reproduction of each bacterium in the mth population) and using Exercise 1b we conclude that S m −S m−1 has exponential(m+1) distribution. Thus, E(T n ) = E(Rn−2 ) = E(R1 + (R2 − R1 ) + · · · + (Rn−2 − Rn−3 )) =
1 1 1 + + ... + . 2 3 n−1
Using independence, Var(T n ) = Var(Rn−2 ) = Var(R1 ) + Var(R2 − R1 ) + · · · + Var(Rn−2 − Rn−3 )) =
38
1 1 1 + 2 + ... + . 2 2 3 (n − 1)2
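The formulas of Exercise 4, E(T_n) = 1/2 + · · · + 1/(n − 1) and Var(T_n) = 1/2² + · · · + 1/(n − 1)², can be checked by simulating the splitting times directly: when the population has size m, the waiting time to the next split is the minimum of m independent exponential(1) clocks, i.e. exponential(m). A Monte Carlo sketch, for illustration only:

```python
import random

random.seed(0)

def simulate_T(n):
    """Time for the population to grow from 2 to n bacteria."""
    t = 0.0
    for m in range(2, n):                 # a population of size m splits at rate m
        t += random.expovariate(m)
    return t

n, runs = 10, 200_000
samples = [simulate_T(n) for _ in range(runs)]
mean = sum(samples) / runs
var = sum((s - mean) ** 2 for s in samples) / runs

print(round(mean, 3), round(sum(1 / k for k in range(2, n)), 3))        # ~ 1/2 + ... + 1/9
print(round(var, 3), round(sum(1 / k ** 2 for k in range(2, n)), 3))    # ~ 1/4 + ... + 1/81
```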
Series 9: More on Poisson processes
Exercise 1 Messages arrive at a telegraph office in accordance with the laws of a Poisson process with mean rate of 3 messages per hour. (a) What is the probability that no message will have arrived during the morning hours (8 to 12)? (b) What is the distribution of the time at which the first afternoon message arrives? Exercise 2 A continuous time Markov chain has two states labeled 0 and 1. The waiting time in state 0 is exponentially distributed with parameter λ > 0. The waiting time in state 1 follows an exponential distribution with parameters µ > 0. Compute the probability P00 (t) of being in state 0 at time t starting at time 0 in state 0. µ λ + λ+µ e−(λ+µ)t . Solution: P00 (t) = λ+µ Exercise 3 In the above problem, let λ = µ and define N(t) to be the number of times the system has changed states in time t > 0. Find the probability distribution of N(t). n Solution: P(N(t) = n) = e−λt (λt) n! . Exercise 4 Let X(t) be a pure birth continuous time Markov chain. Assume that P(an event happens in (t, t + h) | X(t) is odd) = λ1 h + o(h); P(an event happens in (t, t + h)) | X(t) is even) = λ2 h + o(h), where o(h)/h → 0 as h → 0. Take X(0) = 0. Find the probabilities: P1 (t) = P(X(t) is odd),
P2 (t) = P(X(t) is even).
Exercise 5 Under the conditions of the above problem, show that EX(t) =
(λ1 − λ2 )λ2 2λ1 λ2 [exp{−(λ1 + λ2 )t} − 1]. t+ λ1 + λ2 (λ1 + λ2 )2
39
Series 9: More on Poisson processes
Solutions
Exercise 1 The distribution of the number of messages that arrive during a four-hour period is Poisson(4 · λ) = Poisson(12). Thus, P(No messages from hour 8 to 12) = P(Poi(12) = 0) = e−12 . By the lack of memory of the Poisson process, the arrival time T of the first message after hour 12 is given by T = 12 + X, where X is a random variable with exponential(3) distribution. Exercise 2 The forward Kolmogorov equation tells us that P′00 (t) = −λP00 (t) + µP01 (t) = −λP00 (t) + µ(1 − P00 (t)) = −(λ + µ)P00 (t) + µ. We can then observe that d P00 (t)e(λ+µ)t = µe(λ+µ)t . dt
Hence, there exists a constant c ∈
R such that
P00 (t)e(λ+µ)t =
µ −(λ+µ)t e + c. λ+µ
Observing further that P00 (t) = 1, we arrive at the conclusion. Exercise 3 Let T 0 = 0 and T 1 < T 2 < · · · be the times at which the chain changes state. Then, the hypothesis implies that T 1 , T 2 − T 1 , T 3 − T 2 , . . . is a sequence of independent random variables with the exponential λ distribution. Thus, N(t) = max{n : T n 6 t} is a Poisson process of intensity λ. In particular, for fixed t the law of N(t) is Poisson(λt), that is, P(N(t) = n) = ((λt)n /n!)e−λt . Exercise 4 For t > 0, h > 0, P1 (t) = P(X(t + h) is odd) = P (X(t) is odd, no arrivals in [t, t + h]) + P (X(t) is even, one arrival in [t, t + h]) + P (X(t + h) is odd, two or more arrivals in [t, t + h]) = P(X(t) is odd) P(No arrivals in [t, t + h]] | X(t) is odd) + P(X(t) is even) P(One arrival in [t, t + h] | X(t) is even) + o(h) = P1 (t)(1 − λ1 h − o(h)) + P2 (t)(λ2 h + o(h)) + o(h). Dividing by 1/h and taking h to zero, we conclude that P1 is the solution of the differential equation P′1 (t) = −λ1 P1 (t) + λ2 P2 (t).
40
The initial condition is P1 (0) = 0, since we start at zero which is even. The solution of this equation is P1 (t) =
λ2 λ2 − e−t(λ1 +λ2 ) . λ1 + λ2 λ1 + λ2
We then immediately get P2 (t) = 1 − P1 (t) =
λ2 λ1 + e−t(λ1 +λ2 ) λ1 + λ2 λ1 + λ2
Exercise 5 To begin with, note that, for t, h ≥ 0, one has E[X(t + h) − X(t)] = P[X(t + h) − X(t) = 1] + E[(X(t + h) − X(t)) 1X(t+h)−X(t)≥2 ]. We would like to let h tend to 0 in the equality above, and to show that the last term is negligible, that is, E[(X(t + h) − X(t)) 1X(t+h)−X(t)≥2 ] = o(h). Let λ′ = min(λ1 , λ2 ). It is possible to find a coupling of X(t + h) − X(t) with a Poisson random variable N of parameter λ′ h, such that X(t + h) − X(t) ≤ N. So E[(X(t + h) − X(t)) 1X(t+h)−X(t)≥2 ] ≤ ≤
+∞ X
k=2 +∞ X
kP[X(t + h) − X(t) = k] kP[N = k]
k=2
+∞ X k=2
′
ke−λ h
(λ′ h)k = O(h2 ). k!
As a consequence, E[X(t + h) − X(t)] = P[X(t + h) − X(t) = 1] + o(h) = P[X(t + h) − X(t) = 1 | X(t) is odd]P1 (t) + P[X(t + h) − X(t) = 1 | X(t) is even] + o(h), where P1 (t) = P[X(t) is odd] and P2 (t) = P[X(t) is even]. Using the assumptions of the exercise, we get E[X(t + h) − X(t)] = λ1 hP1 (t) + λ2 hP2 (t) + o(h). Hence, the function t 7→ E[X(t)] is (right-) differentiable, and d E[X(t)] = λ1 P1 (t) + λ2 P2 (t). dt Using the results of the previous exercise, together with the fact that E[X(0)] = 0, we get Z t E(X(t)) = [λ1 P1 (s) + λ2 P2 (s)] ds 0 Z t λ1 λ2 − λ22 −s(λ1 +λ2 ) 2λ1 λ2 2λ1 λ2 (λ1 − λ2 )λ2 [exp{−(λ1 + λ2 )t} − 1]. = + e t+ ds = λ1 + λ2 λ1 + λ2 (λ1 + λ2 )2 0 λ1 + λ2
41
Series 10: Renewal theory
Exercise 1 A patient arrives at a doctor’s office. With probability 1/5 he receives service immediately, while with probability 4/5 his service is deferred an hour. After an hour’s wait again with probability 1/5 his needs are serviced instantly or another delay of an hour is imposed and so on. (a) What is the patient’s waiting time distribution? (b) What is the distribution of the number of patients who receive service over an 8-hour period assuming the same procedure is followed independently for every arrival and the arrival pattern is that of a Poisson process with parameter 1? Exercise 2 The random lifetime of an item has distribution function F(x). Show that the mean remaining life of an item of age x is R∞ {1 − F(t)}dt . E(X − x | X > x) = x 1 − F(x) Hint. Recall the derivation, that applies to any positive integrable random variable Z, Z ∞ Z ∞Z x Z ∞Z ∞ Z ∞ E(Z) = x FZ (dx) = 1 dt FZ (dx) = FZ (dx)dt = (1 − FZ (t))dt. 0
0
0
0
t
0
Try to do something similar. Exercise 3 Find P(N(t) > k) in a renewal process having lifetime density ( −ρ(x−δ) ρe for x > δ; f (x) = 0 for x 6 δ. where δ > 0 is fixed. Exercise 4 For a renewal process with distribution F(t) =
Rt 0
xe−x dx, show that −x
P(Number of renewals in (0, x] is even) = e
·
∞ X n=0
x4n+1 x4n + (4n)! (4n + 1)!
!
Hint: P(Number of renewals in (0, x] is even) =
∞ X n=0
42
P(Number of renewals in (0, x] = 2n).
Solutions 10
Solutions
Exercise 1 (a) P(Waiting time = k) = (4/5)k−1 (1/5), k = 0, 1, 2, . . .. (b) We will need the following fact. Let X be a Binomial(N, p) random variable, where N is itself a Poi(λ) random variable. Then, the distribution of X is Poi(λp). This can be proved with probability generating functions: gBin(n,p) (s) = (sp + 1 − p)n , gPoi(λ) (s) = eλ(s−1) ; ! ∞ ∞ n X X X n λ −λ e = eλ(sp+1−p) e−λ = eλp(s−1) = gPoi(λp) (s). gX = E(s |N = n) P(N = n) = (1 + sp − p) n! n=0 n=0 Now, for i = 0, . . . 7, let Ni denote the number of patients that arrive in the time interval (i, i + 1] and Xi the number of these patients that receive service before hour 8. Since patients arrive as a Poisson process with parameter 1 and (i, i + 1] has length 1, Ni has Poisson(1) distribution. For fixed i, the probability that a given patient that arrives in (i, i + 1] receives service before hour 8 is equal to the probability that he waits 7 − i or less hours for service; this is equal to 1 − (4/5)7−i+1 = 1 − (4/5)8−i . Thus, Xi has Binomial(Ni , 1 − (4/5)8−i ) = Poi(1 − (4/5)8−1 ) distribution. Notice that X0 , . . . X7 are all independent, because they are each determined from arrivals of a Poisson process in disjoint intervals and patients’ waiting times, which are assumed to be independent. Since for the sum of independent Poisson random variables has Poisson distribution with parameter equal to the sum of the parameters, we have !8 !7 !1 4 4 4 +1− + ··· + 1 − X0 , . . . X7 ∼ Poi 1 − . 5 5 5 Exercise 2 Z ∞ Z ∞Z y 1 1 · · y F(dy) = −x + 1 dt F(dy) P(X > x) x P(X > x) x 0 ! Z xZ ∞ Z ∞Z ∞ 1 · F(dy) dt + F(dy) dt = −x + P(X > x) 0 x x t ! R∞ Z ∞ (1 − F(t)) 1 dt. x(1 − F(x)) + (1 − F(t))dt = x = −x + P(X > x) 1 − F(x) x
E(X − x | X > x) = −x +
Exercise 3 Let the sequence X1 , X2 , . . . of independent random variables with the assigned density be the sequence of inter-arrival times of the renewal process. The density in the statement of the exercise is the density for a random variable obtained by adding δ to a exponential(ρ) random variable. We can thus write Xi = Yi + δ, where Yi ∼ exp(ρ) and i > 0. Then, ( 0, if t − δk 6 0; P(N(t) > k) = P(X1 + · · · + Xk 6 t) = P(Y1 + · · · + Yk 6 t − δk) = ˜ − δk) > k), otherwise, P(N(t ˜ − δk) > k) = P(Poi(ρ) > k). where N˜ is a Poisson process of parameter ρ. We further remark that P(N(t
43
Exercise 4 Rx Let F be the distribution function of the renewal times; we have F(x) = 0 ye−y dy = 1 − e−x − xe−x for x > 0. Also let f (x) = xe−x · I{x>0} be the density function of the renewal times. Let f (n) denote the n-convolution of f ; let us show by induction that f (n) (x) =
1 x2n−1 e−x · I{x>0} . (2n − 1)!
Indeed, this trivially holds for n = 1 and, assuming it holds for fixed n and x > 0, Z x (n+1) f (x − y) · f (n) (y) dy f (x) = 0 Z x 1 = (x − y)e−(x−y) · y2n−1 e−y dy (2n − 1)! 0 ( 2n+1 ) x2n+1 x 1 e−x − x2(n+1)−1 e−x . = = (2n − 1)! 2n 2n + 1 (2(n + 1) − 1)! Of course, if x < 0 we have f n+1 (x) = 0 and so the induction is complete. Now, let N(x) denote the number of renewals in [0, x]; we have Z x Z x 1 (2n) y4n−1 e−y (e−(x−y) + (x − y)e−(x−y) ) dy (1 − F(x − y)) · f (y) dy = P(N(x) = 2n) = 0 (4n − 1)! 0 (Z x ) Z x Z x e−x = y4n−1 dy − y4n−1 dy + x y4n dy (4n − 1)! 0 0 0 ( 4n ) ) ( 4n x x4n+1 x4n+1 x4n+1 x e−x −x + − + =e . = (4n − 1)! 4n 4n 4n + 1 (4n)! (4n + 1)! The result now follows by summing this expression over n.
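The closed form of Exercise 4 can also be compared against a direct simulation of the renewal process: each inter-renewal time has density x e^{−x}, i.e. it is the sum of two independent exponential(1) variables. An illustrative sketch:

```python
import math
import random

random.seed(0)

def p_even_formula(x):
    total, n = 0.0, 0
    while True:
        a = x ** (4 * n) / math.factorial(4 * n)
        b = x ** (4 * n + 1) / math.factorial(4 * n + 1)
        total += a + b
        if a + b < 1e-15 * (total + 1):
            break
        n += 1
    return math.exp(-x) * total

def p_even_sim(x, runs=100_000):
    even = 0
    for _ in range(runs):
        t, count = 0.0, 0
        while True:
            t += random.expovariate(1.0) + random.expovariate(1.0)  # Gamma(2, 1) renewal
            if t > x:
                break
            count += 1
        even += (count % 2 == 0)
    return even / runs

for x in (0.5, 2.0, 5.0):
    print(x, round(p_even_formula(x), 4), round(p_even_sim(x), 4))   # close agreement
```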
44
Series 11: more on renewal theory
Exercise 1 The weather in a certain locale consists of rainy spells alternating with spells when the sun shines. Suppose that the number of days of each rainy spell is Poisson distributed with parameter 2 and a sunny spell is distributed according to an exponential distribution with mean 7 days. Assume that the successive random durations of rainy and sunny spells are statistically independent variables. In the long run, what is the probability on a given day that it will be raining? Exercise 2 Determine the distribution of the total life βt of the Poisson process. Exercise 3 Consider a renewal process with non-arithmetic, finite mean distribution of renewals, and suppose that the excess life γt and current life δt are independent random variables for all t. Establish that the process is Poisson. Hint: use limit theorems on the identity P[δt ≥ x, γt > y] = P[δt ≥ x] P[γt > y], to derive a functional equation for 1 v(x) = µ
Z
+∞
(1 − F(t)) dt. x
Exercise 4 Show that the renewal function corresponding to the lifetime density f (x) = λ2 xe−λx , is
x>0
1 1 M(t) = λt − (1 − e−2λt ). 2 4 Hint. Use the uniqueness of the solution of the renewal equation. Here are some shortcuts for the computations: Z T 1 e−λt dt = (1 − e−λT ); λ 0 Z T 1 te−λt dt = 2 (1 − e−λT − λT e−λT ); λ 0 ! Z T 1 λ2 T 2 e−λT 2 −λt −λT −λT ; t e dt = 3 1 − e − λT e − 2 λ 0 Z T 1 teλt dt = 2 (1 − eλT + λT eλT ). λ 0
45
Series 11: more on renewal theory
Solutions
Exercise 1 Since we are only interested in the long run result, we can assume that at time 0 we are starting a rainy spell. For t > 0, define A(t) = P(It is raining at time t); a(t) = P(The initial rainy spell is still taking place at time t). Also let F denote the distribution of the duration of two successive seasons (i.e., the distribution of a sum of two independent random variables, one with law Poisson(2) and the other with law exponential(1/7)). We then have the renewal equation Z t A(t − s) dF(s). A(t) = a(t) + 0
(Notice that F is neither purely discrete nor purely continuous, so the integral has to be seen as a sum plus an integral.) From the renewal theorem, we have
lim_{t→∞} A(t) = (1/µ) ∫_0^∞ a(s) ds,
where µ is the expectation associated with F, equal to 2 + 7. Denoting by Y the duration of the initial rainy spell, we have ∫_0^∞ a(s) ds = ∫_0^∞ P(Y > s) ds = E(Y) = 2. So the final answer is 2/(2 + 7) = 2/9.
Exercise 2
Let λ denote the parameter of the Poisson process. We have β_t = γ_t + δ_t, where γ_t = S_{N(t)+1} − t is the residual life and δ_t = t − S_{N(t)} is the current life. Moreover, since the process is Poisson(λ), γ_t and δ_t are independent, with distributions given by
F_{γ_t}(x) = 0 if x < 0, and 1 − e^{−λx} if x ≥ 0;
F_{δ_t}(x) = 0 if x < 0, 1 − e^{−λx} if 0 ≤ x < t, and 1 if x ≥ t.
Notice that the distribution of γ_t does not depend on t, and is the exponential(λ) distribution. The distribution of β_t is given by the convolution of the two distributions above. If x < t, we have
F_{β_t}(x) = ∫_0^x F_{γ_t}(x − s) F_{δ_t}(ds) = ∫_0^x (1 − e^{−λ(x−s)}) λ e^{−λs} ds = 1 − e^{−λx} − λx e^{−λx}.
If x ≥ t, we have to take into account that the distribution of δ_t has a mass of e^{−λt} at the point {t}; the convolution is then given by
F_{β_t}(x) = ∫_0^x F_{γ_t}(x − s) F_{δ_t}(ds) = ∫_0^t (1 − e^{−λ(x−s)}) λ e^{−λs} ds + (1 − e^{−λ(x−t)}) e^{−λt} = 1 − e^{−λx} − λt e^{−λx}.
Putting together the two cases, the solution can be expressed as
F_{β_t}(x) = 0 if x < 0, and 1 − e^{−λx} − λ · min(t, x) · e^{−λx} if x ≥ 0.
Exercise 3
It follows from the renewal theorem that
lim_{t→+∞} P[γ_t > y] = v(y).
Note that P[δ_t ≥ x, γ_t > y] = P[γ_{t−x} > y + x], and using the previous result, we get that this converges to v(x + y) as t tends to infinity. Similarly, observing the case y = 0, we infer that
lim_{t→+∞} P[δ_t ≥ x] = v(x).
We thus obtain that v(x + y) = v(x) v(y). Since v is monotone, it is a classical exercise to check that there must exist λ such that v(x) = e^{λx} (if g = log v, then g(x + y) = g(x) + g(y) . . . ). By differentiating, we find
1 − F(x) = −µ λ e^{λx},
so F is the distribution function of an exponential random variable of parameter −λ (> 0) (and µ = −1/λ), i.e. the renewal process is Poisson.
Exercise 4
Let M(t) = E(number of arrivals until time t) and
M̄(t) = (1/2) λ t − (1/4)(1 − e^{−2λt}).
Our objective is to show that M(t) = M̄(t). We know that the renewal equation
A(T) = F(T) + ∫_0^T A(T − t) F(dt)
has a unique solution and that this solution is M (see the course notes or page 183 of the textbook). So we only need to check that M̄(T) = F(T) + ∫_0^T M̄(T − t) F(dt).
First note that, for T ≥ 0,
F(T) = ∫_0^T λ² t e^{−λt} dt = 1 − e^{−λT} − λT e^{−λT}.
Next,
∫_0^T M̄(T − t) F(dt) = ∫_0^T ( (1/2)λ(T − t) − 1/4 + (1/4) e^{−2λ(T−t)} ) λ² t e^{−λt} dt
  = (λ³ T/2) ∫_0^T t e^{−λt} dt − (λ³/2) ∫_0^T t² e^{−λt} dt − (λ²/4) ∫_0^T t e^{−λt} dt + (λ²/4) e^{−2λT} ∫_0^T t e^{λt} dt.
Using the shortcuts in the statement of the exercise and simplifying, we get F(T) + ∫_0^T M̄(T − t) F(dt) = M̄(T), as required.
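As a numerical cross-check of Exercise 4 (not part of the original solution; λ = 1.3 and t = 4 are arbitrary values), the sketch below estimates the renewal function M(t) = E[N(t)] by simulation with Gamma(2, 1/λ) inter-arrival times, whose density is λ² x e^{−λx}, and compares it with the closed form above.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, n_sim, max_events = 1.3, 4.0, 100_000, 40

# inter-arrival density lambda^2 * x * exp(-lambda*x)  <=>  Gamma(shape=2, scale=1/lambda)
arrivals = np.cumsum(rng.gamma(shape=2.0, scale=1.0 / lam, size=(n_sim, max_events)), axis=1)
m_simulated = (arrivals <= t).sum(axis=1).mean()      # Monte Carlo estimate of E[N(t)]

m_formula = 0.5 * lam * t - 0.25 * (1.0 - np.exp(-2.0 * lam * t))
print(f"M(t): simulated {m_simulated:.4f}, formula {m_formula:.4f}")
```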
Series 12: branching processes
Exercise 1
Sir Galton is worried about the survival of his family name. He himself has three sons, and estimates that each of them has probability 1/8 to have no boy, probability 1/2 to have 1 boy, probability 1/4 to have 2 boys, and probability 1/8 to have 3 boys. He thinks that these probabilities will be constant in time, that the numbers of children his descendants will have are independent random variables, and, since he lives in the XIXth century, he believes that only men can pass on their name. According to these assumptions, what is the probability that his name will become extinct?
Exercise 2
A server takes 1 minute to serve each client. During the n-th minute, the number Z_n of clients that arrive and get in line for service is a random variable. We assume that these variables are independent and that
P(Z_n = 0) = 0.2,  P(Z_n = 1) = 0.2,  P(Z_n = 2) = 0.6.
The attendant can leave to have a coffee only if there are no clients to serve. What is the probability that he can ever leave for a coffee?
Exercise 3
(i) Show that, for a positive random variable Y with Σ_i i² P(Y = i) < ∞, we have
Var(Y) = g″_Y(1) + E(Y) − E(Y)²,
where g_Y is the probability generating function of Y.
(ii) Let (X_n)_{n≥0} be a branching process (as is usual, assume X_0 = 1) and let Z be a random variable whose distribution is equal to the distribution of the number of descendents in (X_n). Let m = E(Z). Using the relation g_{X_{n+1}} = g_{X_n} ∘ g_Z, show that E(X_n) = m^n.
(iii) Let σ² = Var(Z). Again using g_{X_{n+1}} = g_{X_n} ∘ g_Z, show that
g″_{X_{n+1}} = (g″_{X_n} ∘ g_Z)(g′_Z)² + (g′_{X_n} ∘ g_Z) g″_Z.
(iv) Show by induction that
Var(X_n) = (m^n (m^n − 1)/(m² − m)) σ²  if m ≠ 1,  and  n σ²  if m = 1.
Series 12: branching processes
Solutions
Exercise 1
Let p be the probability that the descendence of one of Galton's sons dies out. Since the average number of boys is strictly larger than 1, we know that p < 1. The fixed point equation in our case reads
p = 1/8 + (1/2) p + (1/4) p² + (1/8) p³.
As always, p = 1 is a solution. It is not the one we are looking for, so we can factor it out and simplify the equation into p² + 3p − 1 = 0. The solution we are looking for is
p = (√13 − 3)/2.
Now, the descendences of Galton's three sons are assumed to be independent. The probability that they all become extinct is thus
((√13 − 3)/2)³,
and eternal survival is the complementary event.
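A short simulation can be used to double-check the value ((√13 − 3)/2)³ ≈ 0.028. The sketch below (not part of the original solution; the 60-generation horizon, the early-exit threshold and the sample size are arbitrary assumptions) runs the branching process started from Galton's three sons and counts how often the male line dies out.

```python
import numpy as np

rng = np.random.default_rng(3)
offspring_probs = [1/8, 1/2, 1/4, 1/8]   # P(0), P(1), P(2), P(3) boys per man
n_sim, horizon = 20_000, 60

extinct = 0
for _ in range(n_sim):
    population = 3                        # Galton's three sons
    for _ in range(horizon):
        if population == 0 or population > 1000:
            break                         # died out, or essentially certain to survive
        population = rng.choice(4, size=population, p=offspring_probs).sum()
    extinct += (population == 0)

theory = ((13 ** 0.5 - 3) / 2) ** 3
print(f"simulated extinction probability {extinct / n_sim:.4f}, theory {theory:.4f}")
```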
Exercise 2
We use a branching process as a model to describe the problem. Let us suppose that there are X_0 clients in line. They constitute generation 0. The "direct descendents" of a client are those that arrive while that client is being served. Generation n + 1 is formed of the direct descendents of the clients of generation n. Thus, each one of the X_n clients of generation n will be served during a minute during which a certain number Z_i (i = 1, . . . , X_n) of clients arrive: his direct descendents. Once all clients of generation n have been served, we find X_{n+1} = Z_1 + · · · + Z_{X_n} new clients in line. X_n is thus a branching process. The probability that the server can have a pause is equal to the probability of extinction of the branching process, which in turn is given by the smallest solution of the equation
α = 1/5 + (1/5) α + (3/5) α²,
namely α = 1/3.
Exercise 3
(i) For |s| < 1, we have
g″_Y(s) = (d²/ds²) Σ_{n=0}^∞ s^n P(Y = n) = Σ_{n=0}^∞ (d²/ds²) s^n P(Y = n) = Σ_{n=0}^∞ n(n − 1) s^{n−2} P(Y = n).
When Σ_{n=1}^∞ n² P(Y = n) < ∞, we can take
g″_Y(1) = lim_{s→1} g″_Y(s) = Σ_{n=1}^∞ n(n − 1) P(Y = n) = Σ_{n=1}^∞ n² P(Y = n) − Σ_{n=1}^∞ n P(Y = n) = E(Y²) − E(Y).
Then,
Var(Y) = E(Y²) − E(Y)² = g″_Y(1) + E(Y) − E(Y)².
(ii) We know that the generating function of X_n is given by the recursion formula g_{X_{n+1}} = g_{X_n} ∘ g_Z, so that we get
g′_{X_{n+1}} = (g′_{X_n} ∘ g_Z) g′_Z    (1)
and then
E(X_{n+1}) = g′_{X_{n+1}}(1) = g′_{X_n}(g_Z(1)) · g′_Z(1) = E(X_n) E(Z),
since g_Z(1) = 1. Thus, E(X_n) = m^n for each n ≥ 1.
(iii)-(iv) We obviously have Var(X_0) = 0. Differentiating equation (1), we get
g″_{X_{n+1}} = (g′_{X_n} ∘ g_Z)′ g′_Z + (g′_{X_n} ∘ g_Z) g″_Z = (g″_{X_n} ∘ g_Z)(g′_Z)² + (g′_{X_n} ∘ g_Z) g″_Z.
We then have
Var(X_{n+1}) = g″_{X_{n+1}}(1) + E(X_{n+1}) − E(X_{n+1})²
  = g″_{X_n}(g_Z(1)) (g′_Z(1))² + g′_{X_n}(g_Z(1)) g″_Z(1) + E(X_{n+1}) − E(X_{n+1})²
  = (Var(X_n) + E(X_n)² − E(X_n)) E(Z)² + E(X_n)(Var(Z) + E(Z)² − E(Z)) + E(X_{n+1}) − E(X_{n+1})²
  = m² Var(X_n) + m^{2(n+1)} − m^{n+2} + m^n σ² + m^{n+2} − m^{n+1} + m^{n+1} − m^{2(n+1)}
  = m^n σ² + m² Var(X_n).
Thus,
Var(X_{n+1}) = m^n σ² + m² Var(X_n) = (m^n + m^{n+1}) σ² + m⁴ Var(X_{n−1})
  = · · · = (m^n + m^{n+1} + · · · + m^{2n}) σ² + m^{2(n+1)} Var(X_0)
  = (n + 1) σ² if m = 1, and (m^n (m^{n+1} − 1)/(m − 1)) σ² if m ≠ 1,
since Var(X_0) = 0. The result is now proved.
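The mean and variance formulas of Exercise 3 are easy to test by simulation. The sketch below (illustrative only; the three-point offspring law, the generation n = 6 and the sample size are arbitrary assumptions) compares the empirical mean and variance of X_n with m^n and the closed-form variance for m ≠ 1.

```python
import numpy as np

rng = np.random.default_rng(4)
probs = np.array([0.25, 0.40, 0.35])          # offspring law P(Z = 0, 1, 2)
m = (np.arange(3) * probs).sum()              # mean number of descendents
var_z = ((np.arange(3) ** 2) * probs).sum() - m ** 2
n, n_sim = 6, 50_000

samples = np.empty(n_sim)
for j in range(n_sim):
    x = 1                                     # X_0 = 1
    for _ in range(n):
        x = rng.choice(3, size=x, p=probs).sum() if x > 0 else 0
    samples[j] = x

var_theory = var_z * m ** n * (m ** n - 1) / (m ** 2 - m)   # formula for m != 1
print(f"E[X_n]: simulated {samples.mean():.3f}, theory {m ** n:.3f}")
print(f"Var[X_n]: simulated {samples.var():.3f}, theory {var_theory:.3f}")
```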
Series 13: branching and point processes
Exercise 1
Suppose that in a branching process the number of offspring of an initial particle has a distribution whose generating function is f(s). Each member of the first generation has a number of offspring whose distribution has generating function g(s). The next generation has generating function f, the next g, and the functions continue to alternate in this way from generation to generation. What is the probability of extinction? Does this probability change if we start the process with the function g, and then continue to alternate? Can the process go extinct with probability 1 in one case, but not in the other?
Exercise 2
Consider a branching process with initial size N and probability generating function
ϕ(s) = q + ps,  p, q > 0,  p + q = 1.
Determine the probability distribution of the time T when the population first becomes extinct.
Exercise 3
Points are thrown on ℝ² according to a Poisson point process of rate λ. What is the distribution of the distance from the origin to the closest point of the point process?
Exercise 4
Let t ↦ X(t) be a Poisson process of rate λ. What is the distribution of the set of jumps of the process t ↦ X(e^{−t})?
Series 13: branching and point processes
Solutions
Exercise 1
Considering only even generations, one obtains a usual branching process with generating function f(g(s)). The probability of extinction is thus the smallest p ≥ 0 such that f(g(p)) = p. If we start with the generating function g instead of f, then the probability of extinction is the smallest p such that g(f(p)) = p. These two probabilities are different in general. Choose
f(s) = 1/4 + (3/4) s,  g(s) = s².
Solving f(g(p)) = p for 0 < p < 1 leads to p = 1/3, while solving g(f(p)) = p for 0 < p < 1 leads to p = 1/9.
It is however not possible to find an example where one has a non-zero probability of survival in one case, but not in the other. Recall that this question is decided by comparing the expected number of descendants to the value 1. Here, if m_1 is the expected number of descendants associated to the generating function f, and m_2 is the expected number of descendants associated to the generating function g, then the expected number of descendants associated to the generating function f(g(s)) is m_1 m_2, which is the same as the value for g(f(s)). This can be proved by representing the process using random variables, but it can also be seen directly on the generating functions, since
(d/ds) f(g(s)) |_{s=1} = g′(1) f′(g(1)) = g′(1) f′(1) = m_1 m_2 = (d/ds) g(f(s)) |_{s=1}.
Exercise 2
Let us consider the process started with one individual, and let X_n be the size of the population at time n. The generating function of X_n is ϕ^(n)(s), and one can show by induction that
ϕ^(n)(s) = q + pq + · · · + p^{n−1} q + p^n s.
The probability that X_n = 0 is thus
ϕ^(n)(0) = q + pq + · · · + p^{n−1} q = q(1 − p^n)/(1 − p) = 1 − p^n.
When the population is started with N individuals, the offspring of these individuals are independent, and thus
P[T ≤ n] = P[the offspring of all N individuals are extinct by time n] = (1 − p^n)^N.
Exercise 3
The distance d from the origin to the closest point is larger than r if and only if there is no point of the Poisson process that falls within the ball of radius r centred at the origin. The number of such points follows a Poisson distribution with parameter πr²λ. Hence,
P[d > r] = exp(−πλr²).
It thus follows that d has probability density 2πλr exp(−πλr²) dr.
Exercise 4
Write X′ for the process t ↦ X(e^{−t}). For any interval I, let N(I) be the number of jumps of X occurring during I, and N′(I) the number of jumps of X′ that occur during I. One can check that N([e^{−b}, e^{−a}]) = N′([a, b]). As a consequence, it is easy to see that for two disjoint intervals I_1 and I_2, the random variables N′(I_1) and N′(I_2) are independent. Moreover, N′([a, b]) follows a Poisson distribution with parameter
λ(e^{−a} − e^{−b}) = ∫_a^b λ e^{−x} dx.
The set of jumps of the process t ↦ X(e^{−t}) thus forms a Poisson point process with intensity measure λ e^{−x} dx.
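To illustrate Exercise 4 numerically, the sketch below (not from the original text; λ = 2 and the interval [a, b] = [0.5, 1.5] are arbitrary) simulates the jump times of X, maps them through t = −log(s), and checks that the number of mapped jumps falling in [a, b] has mean close to λ(e^{−a} − e^{−b}) and variance close to the mean, as a Poisson count should.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, a, b, n_sim = 2.0, 0.5, 1.5, 100_000

counts = np.empty(n_sim)
for j in range(n_sim):
    # jump times of a rate-lambda Poisson process, kept on (0, 1]
    jumps = np.cumsum(rng.exponential(1 / lam, size=30))
    jumps = jumps[jumps <= 1.0]
    # a jump of X at time s becomes a jump of t -> X(exp(-t)) at time t = -log(s)
    mapped = -np.log(jumps) if jumps.size else np.array([])
    counts[j] = ((mapped >= a) & (mapped <= b)).sum()

mean_theory = lam * (np.exp(-a) - np.exp(-b))
print(f"mean count: simulated {counts.mean():.4f}, theory {mean_theory:.4f}")
print(f"variance:   simulated {counts.var():.4f} (Poisson => equal to the mean)")
```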
Series 14: martingales in discrete time
1. Let X_i be a sequence of independent random variables with E[X_i] = 0 and V(X_i) = E[(X_i − E[X_i])²] = σ_i². Show that the sequence
S_n = Σ_{i=1}^n (X_i² − σ_i²)
is a martingale with respect to F, the filtration generated by the sequence {X_i}.

Proof. (For the definition of a martingale see Definition 5.2.) We first check the integrability of S_n:
E[|S_n|] = E[|Σ_{i=1}^n (X_i² − σ_i²)|] ≤ E[Σ_{i=1}^n |X_i² − σ_i²|] ≤ Σ_{i=1}^n E[|X_i² − σ_i²|] ≤ Σ_{i=1}^n (E[X_i²] + σ_i²) = Σ_{i=1}^n 2σ_i² < ∞,
hence S_n is integrable.
To check that S_n is F_n-measurable, just observe that since X_i for i = 1, 2, . . . , n are F_n-measurable, so is their sum.
What remains to prove is the martingale property:
E[S_{n+1}|F_n] = E[S_n + X_{n+1}² − σ_{n+1}²|F_n] = E[S_n|F_n] + E[X_{n+1}²|F_n] − σ_{n+1}² = S_n + σ_{n+1}² − σ_{n+1}² = S_n.
Hence S_n is an F-martingale.
2. Let X_i be IID with X_i ∼ N(0, 1) for each i and put Y_n = Σ_{i=1}^n X_i. Show that
S_n = exp{αY_n − nα²/2}
is an F_n-martingale for every α ∈ ℝ.

Proof. (For the definition of a martingale see Definition 5.2.) We first check the integrability of S_n. Since S_n ≥ 0, and since E[e^{cX}] = e^{c²/2} for X ∼ N(0, 1) (moment generating function), we get
E[|S_n|] = E[S_n] = E[exp{α Σ_{i=1}^n X_i − nα²/2}] = e^{−nα²/2} Π_{i=1}^n E[e^{αX_i}] = 1 < ∞,
hence S_n is integrable.
To check that S_n is F_n-measurable, observe that X_i for i = 1, 2, . . . , n are F_n-measurable and that the exponential of their sum is a continuous function of them, so S_n is F_n-measurable as well.
What remains to prove is the martingale property:
E[S_{n+1}|F_n] = E[exp{α Σ_{i=1}^{n+1} X_i − (n + 1)α²/2}|F_n] = E[S_n exp{αX_{n+1} − α²/2}|F_n] = S_n E[exp{αX_{n+1} − α²/2}|F_n] = S_n · 1 = S_n,
where E[exp{αX_{n+1} − α²/2}|F_n] = E[exp{αX_{n+1} − α²/2}] = 1 since X_{n+1} is independent of F_n, as in the integrability computation above.
Hence S_n is an F-martingale.
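A quick numerical illustration of Problem 2 (a sketch, not part of the original; α = 0.3 and the path count are arbitrary): by the martingale property, E[S_n] = 1 for every n, which the simulation below checks. The estimate becomes noisier for large n because S_n is heavy-tailed.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, n_steps, n_paths = 0.3, 50, 50_000

x = rng.standard_normal((n_paths, n_steps))
y = np.cumsum(x, axis=1)                                   # Y_n = X_1 + ... + X_n
n = np.arange(1, n_steps + 1)
s = np.exp(alpha * y - n * alpha**2 / 2)                   # S_n = exp(alpha*Y_n - n*alpha^2/2)

for check in (1, 10, 50):
    print(f"E[S_{check}] ~ {s[:, check - 1].mean():.4f}  (should be close to 1)")
```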
3. Let X_i be a sequence of bounded random variables such that
S_n = Σ_{i=1}^n X_i
is an F-martingale. Show that Cov(X_i, X_j) = 0 for i ≠ j.

Proof. By Proposition 5.4 we get that E[X_i] = 0 for i > 1, hence for n ≥ 1 and m ≥ 1 we have
Cov(X_n, X_{n+m}) = E[X_n X_{n+m}] − E[X_n] E[X_{n+m}] = E[E[X_n X_{n+m}|F_n]] = E[X_n E[X_{n+m}|F_n]] = E[X_n E[S_{n+m} − S_{n+m−1}|F_n]] = 0,
where the last equality stems from the fact that S_n is an F-martingale, so that E[S_{n+m} − S_{n+m−1}|F_n] = S_n − S_n = 0. Hence the X_i are mutually uncorrelated.
4. Let {M_n} and {N_n} be square integrable F-martingales. Show that
E[M_{n+1} N_{n+1}|F_n] − M_n N_n = ⟨M, N⟩_{n+1} − ⟨M, N⟩_n.    (1)

Proof. (For the definition of square integrability see Definition 8.1; for the definition of quadratic variation and covariation see page 54.) The right hand side of equality (1) yields
⟨M, N⟩_{n+1} − ⟨M, N⟩_n = Σ_{i=0}^n E[(M_{i+1} − M_i)(N_{i+1} − N_i)|F_i] − Σ_{i=0}^{n−1} E[(M_{i+1} − M_i)(N_{i+1} − N_i)|F_i]
  = E[(M_{n+1} − M_n)(N_{n+1} − N_n)|F_n]
  = E[M_{n+1}N_{n+1} − M_{n+1}N_n − M_nN_{n+1} + M_nN_n|F_n].
Using the martingale property of the two processes {M_n} and {N_n}, and the measurability of M_n and N_n with respect to F_n, we get
E[M_{n+1}N_{n+1} − M_{n+1}N_n − M_nN_{n+1} + M_nN_n|F_n]
  = E[M_{n+1}N_{n+1}|F_n] − E[M_{n+1}N_n|F_n] − E[M_nN_{n+1}|F_n] + E[M_nN_n|F_n]
  = E[M_{n+1}N_{n+1}|F_n] − N_n E[M_{n+1}|F_n] − M_n E[N_{n+1}|F_n] + M_nN_n
  = E[M_{n+1}N_{n+1}|F_n] − M_nN_n,
and the proof is done.
5. Let {M_n} and {N_n} be square integrable F-martingales.
1. Let α and β be real numbers. Verify that, for every integer n ≥ 0,
⟨αM + βN⟩_n = α²⟨M⟩_n + 2αβ⟨M, N⟩_n + β²⟨N⟩_n.
2. Derive the Cauchy-Schwarz inequality
|⟨M, N⟩_n| ≤ √⟨M⟩_n √⟨N⟩_n,  n ≥ 0.

Proof. (For the definition of square integrability see Definition 8.1; for the definition of quadratic variation and covariation see page 54.)
1) By Definition 8.3 we get
⟨αM + βN⟩_n = Σ_{i=0}^{n−1} E[(αM_{i+1} + βN_{i+1} − αM_i − βN_i)²|F_i]
  = Σ_{i=0}^{n−1} E[(α(M_{i+1} − M_i) + β(N_{i+1} − N_i))²|F_i]
  = Σ_{i=0}^{n−1} E[α²(M_{i+1} − M_i)² + 2αβ(M_{i+1} − M_i)(N_{i+1} − N_i) + β²(N_{i+1} − N_i)²|F_i]
  = α²⟨M⟩_n + 2αβ⟨M, N⟩_n + β²⟨N⟩_n,
which is what we set out to prove.
2) The quadratic variation is always non-negative, and using this observation, combined with the result from the first part of this exercise, we get, for any λ ∈ ℝ,
0 ≤ ⟨M − λN⟩_n = ⟨M⟩_n − 2λ⟨M, N⟩_n + λ²⟨N⟩_n.
Let λ = ⟨M, N⟩_n/⟨N⟩_n; then
0 ≤ ⟨M⟩_n − 2⟨M, N⟩_n²/⟨N⟩_n + ⟨M, N⟩_n²/⟨N⟩_n = ⟨M⟩_n − ⟨M, N⟩_n²/⟨N⟩_n,
hence ⟨M, N⟩_n²/⟨N⟩_n ≤ ⟨M⟩_n, that is |⟨M, N⟩_n| ≤ √⟨M⟩_n √⟨N⟩_n, and the proof is done.
6. Let {M_n} and {N_n} be square integrable F-martingales. Check the following parallelogram equality:
⟨M, N⟩_n = (1/4)(⟨M + N⟩_n − ⟨M − N⟩_n).

Proof. Using the result from part 1) of problem 8.2 we get
⟨M + N⟩_n − ⟨M − N⟩_n = ⟨M⟩_n + 2⟨M, N⟩_n + ⟨N⟩_n − ⟨M⟩_n + 2⟨M, N⟩_n − ⟨N⟩_n = 4⟨M, N⟩_n,
hence ⟨M, N⟩_n = (1/4)(⟨M + N⟩_n − ⟨M − N⟩_n).
7. Let {M_n} and {N_n} be two square integrable F-martingales and let ϕ and ψ be bounded F-adapted processes. Derive the Cauchy-Schwarz inequality
|⟨I_M(ϕ), I_N(ψ)⟩_n| ≤ √⟨I_M(ϕ)⟩_n √⟨I_N(ψ)⟩_n,  n ≥ 0.

Proof. By Proposition 9.3 both I_M(ϕ) and I_N(ψ) are square integrable F-martingales, so the proof is identical to the one given in part 2) of problem 8.2.
8. In this problem we look at a simple market with only two assets: a bond and a stock. The bond price is modelled according to
B_n = (1 + r) B_{n−1} for n = 1, 2, . . . , N,  B_0 = 1,
where r > −1 is the constant rate of return of the bond. The stock price is assumed to be stochastic, with dynamics
S_n = (1 + R_n) S_{n−1} for n = 1, 2, . . . , N,  S_0 = s,
where s > 0 and {R_n} is a sequence of IID random variables on (Ω, F, P). Furthermore, let {F_n} be the filtration given by F_n = σ(R_1, . . . , R_n), n = 1, . . . , N.
a) When is S_n/B_n a martingale with respect to the filtration {F_n}?
We now look at portfolios consisting of the bond and the stock. For every n = 0, 1, 2, . . . , N let x_n and y_n be the number of stocks and bonds respectively bought at time n and held over the period [n, n + 1). Furthermore, let
V_n = x_n S_n + y_n B_n
be the value of the portfolio over [n, n + 1), and let V_0 be our initial wealth. The rebalancing of the portfolio is done in the following way. At every time n, we observe the value of our old portfolio, composed at time n − 1, which at time n is x_{n−1} S_n + y_{n−1} B_n. We are allowed to use only this amount to rebalance the portfolio at time n, i.e. we are not allowed to withdraw or add any money to the portfolio. A portfolio with this restriction is called a self-financing portfolio. Formally we define a self-financing portfolio as a pair {x_n, y_n} of {F_n}-adapted processes such that
x_{n−1} S_n + y_{n−1} B_n = x_n S_n + y_n B_n,  n = 1, . . . , N.
b) Show that if S_n/B_n is a martingale with respect to the filtration {F_n}, then so is V_n/B_n, where V_n is the portfolio value of any self-financing portfolio.
Finally we look at a type of self-financing portfolios called arbitrage strategies. A portfolio {x_n, y_n} is called an arbitrage if we have
V_0 = 0,  P(V_N ≥ 0) = 1,  P(V_N > 0) > 0
for the value process of the portfolio. The idea formalized in an arbitrage portfolio is that, starting from an initial wealth of 0, we end up with a non-negative portfolio value at time N with probability one and a strictly positive value with positive probability, i.e. we are certain not to lose money and may make money on the strategy. We say that a model is arbitrage free if the model does not permit arbitrage portfolios.
c) Show that if S_n/B_n is a martingale then the model is arbitrage free, i.e. no self-financing portfolio is an arbitrage.
Let Q_n be a square integrable martingale with respect to the filtration {F_n} such that Q_n > 0 a.s. and Q_0 = 1 a.s.
d) Show that even if S_n/B_n is not a martingale with respect to the filtration {F_n}, finding a process Q_n as defined above such that S_n Q_n/B_n is a martingale will give that V_n Q_n/B_n is a martingale with respect to the filtration {F_n}, and furthermore that V_n is arbitrage free.
Even though the multiplication by the positive martingale Q_n might seem unimportant, we will later in the course see that this is in fact a very special operation which gives us the ability to change measure. In financial applications this is important, since portfolio pricing theory says that a portfolio should be priced under a risk neutral measure, a measure under which every portfolio divided by the bank process B_n is a martingale. The reason for this is that the theory is based on a no-arbitrage assumption, which holds if S_n/B_n or S_n Q_n/B_n is a martingale, as proven in this exercise. So the existence of Q_n guarantees that V_n is arbitrage free, and using a change of measure closely related to Q_n we may price any portfolio V_n consisting of S_n and B_n in a consistent way.

Proof. a) Use Definition 5.2 to conclude that S_n/B_n is an {F_n}-martingale if the process is integrable and measurable and has the martingale property, i.e. E[S_{n+1}/B_{n+1}|F_n] = S_n/B_n. Since S_n/B_n ≥ 0 for every n, B_n is deterministic and the R_n's are IID, we get
E[|S_n/B_n|] = E[S_n/B_n] = s Π_{i=1}^n E[1 + R_i] / (1 + r)^n,    (2)
hence S_n/B_n is integrable if R_n is. Since F_n = σ(R_1, . . . , R_n) and the product is a continuous mapping, S_n/B_n is F_n-measurable. To check the martingale property, just add a conditioning to (2):
E[S_{n+1}/B_{n+1}|F_n] = (S_n/B_n) E[1 + R_{n+1}|F_n]/(1 + r) = (S_n/B_n)(1 + E[R_{n+1}|F_n])/(1 + r).
To get the martingale property E[S_{n+1}/B_{n+1}|F_n] = S_n/B_n we must have
(1 + E[R_{n+1}|F_n])/(1 + r) = 1,
or equivalently E[R_{n+1}|F_n] = r. Hence S_n/B_n is an {F_n}-martingale if E[R_{n+1}|F_n] = r.
b) Since the definition of a self-financing portfolio is that
x_{n−1} S_n + y_{n−1} B_n = x_n S_n + y_n B_n,  n = 1, . . . , N,
we get, by the definition of V_n,
E[V_{n+1}/B_{n+1}|F_n] = E[(x_{n+1} S_{n+1} + y_{n+1} B_{n+1})/B_{n+1}|F_n] = E[(x_n S_{n+1} + y_n B_{n+1})/B_{n+1}|F_n] = x_n E[S_{n+1}/B_{n+1}|F_n] + y_n,
since x_n and y_n are F_n-measurable. Under the assumption that S_n/B_n is an F_n-martingale we get
E[V_{n+1}/B_{n+1}|F_n] = x_n S_n/B_n + y_n = (x_n S_n + y_n B_n)/B_n = V_n/B_n,
so V_n/B_n is an F_n-martingale if S_n/B_n is.
c) From b) we have that the value V_n = x_n S_n + y_n B_n of any self-financing portfolio is such that V_n/B_n is a martingale if S_n/B_n is. For an arbitrage we must have V_0 = x_0 S_0 + y_0 B_0 = 0. Let S_n/B_n be a martingale; then by Proposition 5.4 a) we have
E[V_N/B_N] = V_0/B_0 = V_0 = 0.
Assume that P(V_N ≥ 0) = 1 and P(V_N > 0) > 0. Since B_N < ∞ we get
E[V_N/B_N] = E[(V_N/B_N) I{V_N = 0}] + E[(V_N/B_N) I{V_N > 0}] > 0,
where I{·} is the indicator function. This is a contradiction to E[V_N/B_N] = 0, hence there are no arbitrage strategies.
d) Following the same lines as in b), if S_n Q_n/B_n is a martingale with respect to the filtration {F_n} and {x_n, y_n} is self-financing,
E[V_{n+1} Q_{n+1}/B_{n+1}|F_n] = E[(x_n S_{n+1} Q_{n+1} + y_n B_{n+1} Q_{n+1})/B_{n+1}|F_n] = x_n E[S_{n+1} Q_{n+1}/B_{n+1}|F_n] + y_n E[Q_{n+1}|F_n] = (x_n S_n Q_n + y_n B_n Q_n)/B_n = V_n Q_n/B_n,
so V_n Q_n/B_n is an F_n-martingale if S_n Q_n/B_n is. Following the same lines as in the proof of c),
E[V_N Q_N/B_N] = V_0 Q_0/B_0 = V_0 = 0.
Assume that P(V_N ≥ 0) = 1 and P(V_N > 0) > 0. Since Q_N > 0 and B_N < ∞ we get
E[V_N Q_N/B_N] = E[(V_N Q_N/B_N) I{V_N = 0}] + E[(V_N Q_N/B_N) I{V_N > 0}] > 0,
where I{·} is the indicator function. This is a contradiction to E[V_N Q_N/B_N] = 0, hence there are no arbitrage strategies.
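The following sketch (not part of the original exercise; the two-point return distribution, all parameter values and the particular trading rule are illustrative assumptions) simulates a market of the type in Problem 8 with E[R_{n+1}|F_n] = r and checks numerically that the discounted value of a self-financing portfolio behaves like a martingale, i.e. that E[V_N/B_N] stays equal to V_0.

```python
import numpy as np

rng = np.random.default_rng(7)
r, u, d, s0, v0 = 0.01, 0.06, -0.04, 50.0, 100.0
p = (r - d) / (u - d)                 # makes E[R_n] = r, so S_n/B_n is a martingale
n_steps, n_paths = 20, 100_000

returns = np.where(rng.random((n_paths, n_steps)) < p, u, d)
S = s0 * np.cumprod(1.0 + returns, axis=1)
S = np.hstack([np.full((n_paths, 1), s0), S])          # S_0, ..., S_N
B = (1.0 + r) ** np.arange(n_steps + 1)                # B_0, ..., B_N

V = np.full(n_paths, v0)
for n in range(n_steps):
    # an arbitrary adapted (hence F_n-measurable) stock position ...
    x = np.where(S[:, n] > s0, 0.3, 0.8) * V / S[:, n]
    y = (V - x * S[:, n]) / B[n]                       # ... financed by the bond
    V = x * S[:, n + 1] + y * B[n + 1]                 # self-financing update

print(f"E[V_N / B_N] ~ {(V / B[-1]).mean():.3f}  (martingale property predicts {v0:.3f})")
```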
9. A coin is tossed N times, where the number N is known in advance. One unit invested in a coin toss gives a net profit of 1 unit with probability p ∈ (1/2, 1] and a net profit of −1 with probability 1 − p. If we let X_n, n = 1, 2, . . . , N, be the net profit per unit invested in the n-th coin toss, then
P(X_n = 1) = p and P(X_n = −1) = 1 − p,
and the X_n's are independent of each other. Let F_n = σ(X_1, . . . , X_n) and let S_n, n = 1, 2, . . . , N, be the wealth of the investor at time n. Assume further that the initial wealth S_0 is a given constant. Any non-negative amount C_n can be invested in coin toss n + 1, n = 0, 1, . . . , N − 1, but we assume that borrowing money is not allowed, so C_n ∈ [0, S_n]. Thus we have
S_{n+1} = S_n + C_n X_{n+1},  n = 0, 1, . . . , N − 1,  and  C_n ∈ [0, S_n].
Finally assume that the objective of the investor is to maximize the expected rate of return E[(1/N) log(S_N/S_0)].
a) Show that S_n is a submartingale with respect to the filtration {F_n}.
b) Show that whatever strategy C_n the investor uses in the investment game, L_n = log(S_n) − nα, where α = p log(p) + (1 − p) log(1 − p) + log(2), is a supermartingale with respect to the filtration {F_n}.
Hint: at some point you need to study the function
g(x) = p log(1 + x) + (1 − p) log(1 − x)
for x ∈ [0, 1] and p ∈ (1/2, 1).
c) Show that the fact that log(S_n) − nα is a supermartingale implies that E[log(S_N/S_0)] ≤ Nα.
d) Show that if C_n = S_n(2p − 1), then L_n is an {F_n}-martingale.

Proof. (For the definitions of submartingale and supermartingale see the text following Definition 5.2.)
a) To show that S_n is a submartingale w.r.t. {F_n} we want to show that E[S_{n+1}|F_n] ≥ S_n:
E[S_{n+1}|F_n] = E[S_n + C_n X_{n+1}|F_n] = {C_n and S_n are F_n-measurable} = S_n + C_n E[X_{n+1}|F_n] = {X_{n+1} is independent of F_n} = S_n + C_n (1 · p − 1 · (1 − p)) ≥ S_n,
since C_n ≥ 0 and 2p − 1 > 0. Hence S_n is a submartingale w.r.t. {F_n}.
b) We now want to show that E[L_{n+1}|F_n] ≤ L_n:
E[L_{n+1}|F_n] = E[log(S_{n+1}) − (n + 1)α|F_n] = E[log(S_n + C_n X_{n+1})|F_n] − (n + 1)α
  = E[log(S_n (1 + C_n X_{n+1}/S_n))|F_n] − (n + 1)α
  = E[log(S_n) + log(1 + C_n X_{n+1}/S_n)|F_n] − (n + 1)α
  = log(S_n) − nα + E[log(1 + C_n X_{n+1}/S_n)|F_n] − α
  = L_n + p log(1 + C_n/S_n) + (1 − p) log(1 − C_n/S_n) − α
  = L_n + g(C_n/S_n) − α.
Since g″(x) = −p/(1 + x)² − (1 − p)/(1 − x)² < 0 for x ∈ [0, 1), g is concave in that region, and the maximum is attained at x̂ = 2p − 1, since g′(x̂) = p/(1 + x̂) − (1 − p)/(1 − x̂) = 0; so g(x) ≤ g(x̂) for all x ∈ [0, 1]. Since C_n/S_n ∈ [0, 1],
g(C_n/S_n) ≤ g(x̂) = g(2p − 1) = p log(p) + (1 − p) log(1 − p) + log 2 = α,
hence
E[L_{n+1}|F_n] = L_n + g(C_n/S_n) − α ≤ L_n + α − α = L_n,
so L_n is a supermartingale w.r.t. {F_n}.
c) We have just shown that L_n = log(S_n) − nα is a supermartingale w.r.t. {F_n}. Because of this we also have E[L_N] ≤ L_0, so
E[log(S_N) − Nα] ≤ log(S_0) − 0 · α  ⟺  E[log(S_N/S_0)] ≤ Nα.
d) For C_n = S_n(2p − 1) we get
E[L_{n+1}|F_n] = L_n + g(C_n/S_n) − α = L_n + g(2p − 1) − α = L_n,
hence L_n is an {F_n}-martingale when using the strategy C_n = S_n(2p − 1).
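The optimal strategy of Problem 9 is the classical Kelly fraction C_n = S_n(2p − 1). The sketch below (illustrative only; p = 0.6, N = 200 and the competing fractions are arbitrary choices) compares the average growth rate (1/N) E[log(S_N/S_0)] of several constant-fraction strategies with the bound α = p log p + (1 − p) log(1 − p) + log 2.

```python
import numpy as np

rng = np.random.default_rng(8)
p, n_tosses, n_paths = 0.6, 200, 20_000
alpha = p * np.log(p) + (1 - p) * np.log(1 - p) + np.log(2)

x = np.where(rng.random((n_paths, n_tosses)) < p, 1.0, -1.0)   # net profit per unit bet

for frac in (0.1, 2 * p - 1, 0.5):
    # bet C_n = frac * S_n each toss: S_{n+1} = S_n * (1 + frac * X_{n+1})
    growth = np.log(1.0 + frac * x).sum(axis=1).mean() / n_tosses
    print(f"fraction {frac:.2f}: mean growth rate {growth:+.4f}")
print(f"upper bound alpha = {alpha:.4f} (attained at fraction 2p-1 = {2*p-1:.2f})")
```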
10. Assume that X_n, n = 0, 1, 2, . . . , is the price of a stock at time n and assume that X_n is a supermartingale with respect to the filtration {F_n}. This means that if we buy one unit of stock at time n, paying X_n, the expected price of the stock tomorrow (represented by the time n + 1) given the information F_n is lower than today's price. In other words, we expect the price to go down. Investing in the stock does not seem to be a good idea, but is it possible to find a strategy that performs better? The answer is no, and the objective of this exercise is to show that. Let C_n, with 0 ≤ C_n, n = 0, 1, 2, . . . , be a process adapted to {F_n} representing our investment strategy. We know that the gain of our trading after n days is given by I_X(C)_n, the stochastic integral of C with respect to X. Now, show that for any supermartingale X_n and any positive, adapted and bounded process C_n,
E[I_X(C)_{n+1}|F_n] ≤ I_X(C)_n,
i.e. that I_X(C)_n is also a supermartingale.

Proof. (For the definition of the stochastic integral I_X(C)_n see Definition 9.1.) We may write the stochastic integral as I_X(C)_n = Σ_{i=0}^{n−1} C_i(X_{i+1} − X_i), so taking the conditional expectation of the stochastic integral we get
E[I_X(C)_{n+1}|F_n] = E[Σ_{i=0}^n C_i(X_{i+1} − X_i)|F_n]
  = E[Σ_{i=0}^{n−1} C_i(X_{i+1} − X_i) + C_n(X_{n+1} − X_n)|F_n]
  = E[I_X(C)_n + C_n(X_{n+1} − X_n)|F_n]
  = {I_X(C)_n is F_n-measurable} = I_X(C)_n + E[C_n(X_{n+1} − X_n)|F_n]
  = {C_n and X_n are {F_n}-adapted} = I_X(C)_n + C_n(E[X_{n+1}|F_n] − X_n)
  ≤ {X_n is a supermartingale and C_n ≥ 0} ≤ I_X(C)_n + C_n(X_n − X_n) = I_X(C)_n.
Hence I_X(C)_n is a supermartingale with respect to {F_n} if X_n is.
Series 15: discrete Brownian motion

1. Let B_n, n = 0, 1, 2, . . . , be a discrete Brownian motion. Show that
B_n/⟨B⟩_n → 0 in probability as n → ∞,
that is, for every ε > 0,
P(|B_n/⟨B⟩_n| > ε) → 0 as n → ∞.

Proof. Recall that for a square integrable random variable X, Chebyshev's inequality reads
P(|X| > ε) ≤ E[X²]/ε².
Since ⟨B⟩_n = n (which is given in the text at page 64 if needed), we get
P(|B_n/⟨B⟩_n| > ε) = P(|B_n/n| > ε) = P(|B_n| > εn) ≤ E[B_n²]/(εn)² = n/(εn)² = 1/(ε²n) → 0 as n → ∞.
Hence B_n/⟨B⟩_n → 0 in probability as n → ∞.
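A small simulation (a sketch, not part of the original solution; ε = 0.1, the time points and the sample size are arbitrary) illustrates the convergence: the empirical probability P(|B_n/n| > ε) falls well below the Chebyshev bound 1/(ε²n) and tends to 0.

```python
import numpy as np

rng = np.random.default_rng(9)
eps, n_paths = 0.1, 200_000

for n in (100, 500, 2000):
    # B_n is a sum of n i.i.d. N(0,1) increments, hence N(0, n): sample it directly
    b_n = np.sqrt(n) * rng.standard_normal(n_paths)
    prob = (np.abs(b_n / n) > eps).mean()
    print(f"n = {n:5d}: P(|B_n/n| > {eps}) ~ {prob:.4f}, Chebyshev bound {1/(eps**2 * n):.4f}")
```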
2. Assume that the value of the bond's rate of return is r = e^{σ²/2} − 1 for some constant σ. What should be the distribution of the random variable (1 + R_n) in order to model S̃_n = S_n/B_n as a geometric Brownian motion, i.e.
S̃_n = s e^{σW_n − nσ²/2},  S̃_0 = s,
where W_n is a discrete Brownian motion?

Proof. From the definition of S̃_n we get
S̃_n = S_n/B_n = (1 + R_n) S_{n−1}/((1 + r) B_{n−1}) = (1 + R_n) S_{n−1}/(e^{σ²/2} B_{n−1}),
and therefore
S̃_{n+1}/S̃_n = (1 + R_{n+1})/e^{σ²/2}.
On the other hand, if S̃_n is to be a geometric Brownian motion we must have
S̃_{n+1}/S̃_n = s e^{σW_{n+1} − (n+1)σ²/2}/(s e^{σW_n − nσ²/2}) = e^{σ(W_{n+1} − W_n) − σ²/2}.
Combining the two results we get
(1 + R_{n+1})/e^{σ²/2} = e^{σ(W_{n+1} − W_n) − σ²/2},
which holds if 1 + R_{n+1} = e^{σ(W_{n+1} − W_n)}. Since W_{n+1} − W_n ∼ N(0, 1), this means that 1 + R_n should be lognormally distributed: 1 + R_n = e^{σZ_n} with Z_n IID N(0, 1).
Series 16: martingales in continuous time

1. Let {M_t} and {N_t} be square integrable F_t-martingales.
1. Let α and β be real numbers. Verify that, for every t ≥ 0,
⟨αM + βN⟩_t = α²⟨M⟩_t + 2αβ⟨M, N⟩_t + β²⟨N⟩_t.
2. Derive the Cauchy-Schwarz inequality
|⟨M, N⟩_t| ≤ √⟨M⟩_t √⟨N⟩_t,  t ≥ 0.

Proof. (For the definition of square integrability see Definition 11.3; for the definitions of quadratic variation and covariation see pages 74-75.) Below, the sums run over a partition Π = {0 = t_0 < t_1 < · · · < t_n = t} of [0, t], M_i is shorthand for M_{t_i}, and ‖Π‖ denotes the mesh of the partition.
1) We use the definition of the covariation process to get
⟨αM + βN⟩_t = lim_{‖Π‖→0} Σ_{i=0}^{n−1} (αM_{i+1} + βN_{i+1} − αM_i − βN_i)²
  = lim_{‖Π‖→0} Σ_{i=0}^{n−1} (α(M_{i+1} − M_i) + β(N_{i+1} − N_i))²
  = lim_{‖Π‖→0} Σ_{i=0}^{n−1} [α²(M_{i+1} − M_i)² + 2αβ(M_{i+1} − M_i)(N_{i+1} − N_i) + β²(N_{i+1} − N_i)²]
  = lim_{‖Π‖→0} Σ_{i=0}^{n−1} α²(M_{i+1} − M_i)² + 2 lim_{‖Π‖→0} Σ_{i=0}^{n−1} αβ(M_{i+1} − M_i)(N_{i+1} − N_i) + lim_{‖Π‖→0} Σ_{i=0}^{n−1} β²(N_{i+1} − N_i)²
  = α²⟨M⟩_t + 2αβ⟨M, N⟩_t + β²⟨N⟩_t,
which is what we set out to prove.
2) Recall the Cauchy-Schwarz inequality for n-dimensional Euclidean space:
Σ_{i=1}^n a_i b_i ≤ √(Σ_{i=1}^n a_i²) √(Σ_{i=1}^n b_i²).
We have
⟨M, N⟩_t = lim_{‖Π‖→0} Σ_{i=0}^{n−1} (M_{i+1} − M_i)(N_{i+1} − N_i) ≤ lim_{‖Π‖→0} √(Σ_{i=0}^{n−1} (M_{i+1} − M_i)²) √(Σ_{i=0}^{n−1} (N_{i+1} − N_i)²),
and since √· is continuous the limit may be passed inside the root sign:
⟨M, N⟩_t ≤ √(lim_{‖Π‖→0} Σ_{i=0}^{n−1} (M_{i+1} − M_i)²) √(lim_{‖Π‖→0} Σ_{i=0}^{n−1} (N_{i+1} − N_i)²) = √⟨M⟩_t √⟨N⟩_t,
and the proof is done.
2. Let {M_t} and {N_t} be square integrable F_t-martingales. Check the following parallelogram equality:
⟨M, N⟩_t = (1/4)(⟨M + N⟩_t − ⟨M − N⟩_t),  t ≥ 0.

Proof. Using the result from part 1) of problem 11.1 we get
⟨M + N⟩_t − ⟨M − N⟩_t = ⟨M⟩_t + 2⟨M, N⟩_t + ⟨N⟩_t − ⟨M⟩_t + 2⟨M, N⟩_t − ⟨N⟩_t = 4⟨M, N⟩_t,
hence ⟨M, N⟩_t = (1/4)(⟨M + N⟩_t − ⟨M − N⟩_t).
Series 17: martingales in continuous time

1. (The value of a European Call Option.) In the Black-Scholes model, the price S_t of a risky asset (i.e. an asset that has no deterministic payoff) at time t is given by the formula
S_t = s e^{(r − σ²/2)t + σB_t},
where B_t is a Brownian motion and s is a positive constant representing the initial value of the asset. The value of a European Call option with maturity time T and strike price K is (S_T − K)^+ at time T. If T > t, compute explicitly E[(S_T − K)^+ | F_t].

Proof. Because of the Markov property of the Brownian motion, the conditional expectation of a function h of the Brownian motion evaluated at time T, h(B_T), given F_t with t < T, depends only on the value B_t and the time to maturity T − t. By Proposition 12.4 we get
E[(S_T − K)^+|F_t] = E[(S_t e^{(r − σ²/2)(T−t) + σ(B_T − B_t)} − K)^+|F_t] = {Proposition 12.4} = E^{B_t}[(S_t e^{(r − σ²/2)(T−t) + σ(B_T − B_t)} − K)^+],
and by time homogeneity we may write B_T − B_t = √(T − t) X, where X ∼ N(0, 1), so
E[(S_T − K)^+|F_t] = E^{B_t}[(S_t e^{(r − σ²/2)(T−t) + σ√(T−t) X} − K)^+]
  = (1/√(2π)) ∫_ℝ (S_t e^{(r − σ²/2)(T−t) + σ√(T−t) x} − K)^+ e^{−x²/2} dx.
Since (·)^+ is non-zero only when S_t e^{(r − σ²/2)(T−t) + σ√(T−t) x} ≥ K, which may be rewritten as
x ≥ ( log(K/S_t) − (r − σ²/2)(T − t) ) / ( σ√(T − t) ) =: d_1,
the integral may be written as
E[(S_T − K)^+|F_t] = (1/√(2π)) ∫_{d_1}^∞ (S_t e^{(r − σ²/2)(T−t) + σ√(T−t) x} − K) e^{−x²/2} dx
  = (1/√(2π)) ∫_{d_1}^∞ S_t e^{(r − σ²/2)(T−t) + σ√(T−t) x} e^{−x²/2} dx − K (1/√(2π)) ∫_{d_1}^∞ e^{−x²/2} dx
  = (1/√(2π)) S_t e^{r(T−t)} ∫_{d_1}^∞ e^{−σ²(T−t)/2 + σ√(T−t) x − x²/2} dx − K P(X ≥ d_1)
  = (1/√(2π)) S_t e^{r(T−t)} ∫_{d_1}^∞ e^{−(x − σ√(T−t))²/2} dx − K P(X ≥ d_1)
  = {y = x − σ√(T−t), dy = dx}
  = S_t e^{r(T−t)} (1/√(2π)) ∫_{d_1 − σ√(T−t)}^∞ e^{−y²/2} dy − K P(X ≥ d_1)
  = S_t e^{r(T−t)} P(X ≥ d_1 − σ√(T − t)) − K P(X ≥ d_1).
This is the explicit form of the Call Option price.
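The closed-form expression can be cross-checked by Monte Carlo. The sketch below (not part of the original solution; all parameter values are arbitrary, and Φ denotes the standard normal distribution function, computed here via the error function) averages the payoff (S_T − K)^+ over simulated terminal values and compares it with the formula S_t e^{r(T−t)} P(X ≥ d_1 − σ√(T−t)) − K P(X ≥ d_1).

```python
import numpy as np
from math import erf, exp, log, sqrt

def phi(x):               # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

s_t, K, r, sigma, tau = 100.0, 105.0, 0.03, 0.25, 0.75     # tau = T - t
d1 = (log(K / s_t) - (r - sigma**2 / 2) * tau) / (sigma * sqrt(tau))

# formula derived above, using P(X >= a) = 1 - phi(a)
price_formula = s_t * exp(r * tau) * (1 - phi(d1 - sigma * sqrt(tau))) - K * (1 - phi(d1))

rng = np.random.default_rng(10)
x = rng.standard_normal(1_000_000)
s_T = s_t * np.exp((r - sigma**2 / 2) * tau + sigma * sqrt(tau) * x)
price_mc = np.maximum(s_T - K, 0.0).mean()

print(f"formula {price_formula:.4f}, Monte Carlo {price_mc:.4f}")
```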
2. Let B_t be a one dimensional Brownian motion and let F_t be the filtration generated by B_t. Show that
E[B_t³|F_s] = B_s³ + 3(t − s) B_s.

Proof. We start by separating the process into a part that is measurable with respect to F_s and one that is independent of F_s, namely
E[B_t³|F_s] = E[(B_t − B_s + B_s)³|F_s]
  = E[(B_t − B_s)³ + 3(B_t − B_s)² B_s + 3(B_t − B_s) B_s² + B_s³|F_s]
  = E[(B_t − B_s)³] + 3 B_s E[(B_t − B_s)²] + 3 B_s² E[B_t − B_s] + B_s³.
Since B_t − B_s ∼ N(0, t − s) and the normal distribution is symmetric, all odd moments are zero, so
E[B_t³|F_s] = 0 + 3 B_s (t − s) + 0 + B_s³ = B_s³ + 3(t − s) B_s.
3. Show that the following processes are martingales with respect to F_t, the filtration generated by the one dimensional Brownian motion B_t:
1. B_t³ − 3tB_t;
2. B_t⁴ − 6tB_t² + 3t².

Proof. 1) From Exercise 12.2 we have that
E[B_t³|F_s] = B_s³ + 3(t − s) B_s.
Using this together with the fact that B_t is an F_t-martingale we get
E[B_t³ − 3tB_t|F_s] = B_s³ + 3(t − s) B_s − 3t B_s = B_s³ − 3s B_s,
which proves the martingale property of B_t³ − 3tB_t with respect to F_t.
2) We start by computing E[B_t⁴|F_s], and as in Exercise 12.2 we do this by separating B_t into a part that is measurable with respect to F_s and a part that is independent of F_s:
E[B_t⁴|F_s] = E[(B_t − B_s + B_s)⁴|F_s]
  = E[(B_t − B_s)⁴ + 4(B_t − B_s)³ B_s + 6(B_t − B_s)² B_s² + 4(B_t − B_s) B_s³ + B_s⁴|F_s]
  = E[(B_t − B_s)⁴] + 4 B_s E[(B_t − B_s)³] + 6 B_s² E[(B_t − B_s)²] + 4 B_s³ E[B_t − B_s] + B_s⁴.
Recall that B_t − B_s ∼ N(0, t − s), so we may write B_t − B_s = √(t − s) X where X ∼ N(0, 1), and rewrite our expression as
E[B_t⁴|F_s] = (t − s)² E[X⁴] + 4 B_s (t − s)^{3/2} E[X³] + 6 B_s² (t − s) E[X²] + 4 B_s³ √(t − s) E[X] + B_s⁴,
and since all odd moments of the standard normal distribution are zero and the second moment is one, we have
E[B_t⁴|F_s] = (t − s)² E[X⁴] + 6 B_s² (t − s) + B_s⁴.
To evaluate E[X⁴] we use the moment generating function of the standard normal distribution,
Ψ_X(u) = E[e^{uX}] = e^{u²/2},
and the result that the n-th derivative of Ψ_X(u) evaluated at u = 0 is the n-th moment of X. The fourth derivative of Ψ_X(u) is
Ψ_X^(4)(u) = (3 + 6u² + u⁴) Ψ_X(u),
and since Ψ_X(0) = 1 we get Ψ_X^(4)(0) = E[X⁴] = 3. From this we get
E[B_t⁴|F_s] = (t − s)² E[X⁴] + 6 B_s² (t − s) + B_s⁴ = 3(t − s)² + 6 B_s² (t − s) + B_s⁴.
We now may derive the martingale property of B_t⁴ − 6tB_t² + 3t², using the fact that B_t² − t is an F_t-martingale:
E[B_t⁴ − 6tB_t² + 3t²|F_s] = 3(t − s)² + 6 B_s² (t − s) + B_s⁴ − 6t E[B_t²|F_s] + 3t²
  = 3(t − s)² + 6 B_s² (t − s) + B_s⁴ − 6t E[B_t² − t + t|F_s] + 3t²
  = 3(t − s)² + 6 B_s² (t − s) + B_s⁴ − 6t (B_s² − s + t) + 3t²
  = 3t² − 6ts + 3s² + 6 B_s² t − 6s B_s² + B_s⁴ − 6t B_s² + 6ts − 6t² + 3t²
  = B_s⁴ − 6s B_s² + 3s²,
hence B_t⁴ − 6tB_t² + 3t² is an F_t-martingale.
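A simple numerical consequence of Problem 3 (a sketch, not part of the original; the times are arbitrary): since both processes are martingales started at 0, their means stay at 0 for every t. The check below samples B_t ∼ N(0, t) directly.

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths = 500_000

for t in (0.5, 1.0, 2.0):
    b = np.sqrt(t) * rng.standard_normal(n_paths)       # B_t ~ N(0, t)
    m1 = (b**3 - 3 * t * b).mean()                       # both martingales start at 0,
    m2 = (b**4 - 6 * t * b**2 + 3 * t**2).mean()         # so their means stay 0
    print(f"t = {t:.1f}: E[B_t^3 - 3tB_t] ~ {m1:+.4f}, E[B_t^4 - 6tB_t^2 + 3t^2] ~ {m2:+.4f}")
```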
4. Let {t_i}_{i=0}^∞ be an increasing sequence of scalars and define t_i^* such that t_i < t_i^* ≤ t_{i+1}. Furthermore, let
S̃_n = Σ_{i=0}^{n−1} B_{t_i^*} (B_{t_{i+1}} − B_{t_i}),
where B_{t_i} is the discrete Brownian motion. Check that S̃_k, 0 ≤ k ≤ n, is not a martingale with respect to the filtration generated by B.

Proof. We only check the martingale property of S̃_k:
E[S̃_k|F_{k−1}] = E[Σ_{i=0}^{k−1} B_{t_i^*} (B_{t_{i+1}} − B_{t_i})|F_{k−1}]
  = {B_{t_i^*}(B_{t_{i+1}} − B_{t_i}) is F_{k−1}-measurable for i ≤ k − 2}
  = Σ_{i=0}^{k−2} B_{t_i^*} (B_{t_{i+1}} − B_{t_i}) + E[B_{t_{k−1}^*} (B_{t_k} − B_{t_{k−1}})|F_{k−1}]
  = {E[B_{t_{k−1}} (B_{t_k} − B_{t_{k−1}})|F_{k−1}] = 0}
  = S̃_{k−1} + E[B_{t_{k−1}^*} (B_{t_k} − B_{t_{k−1}})|F_{k−1}] − E[B_{t_{k−1}} (B_{t_k} − B_{t_{k−1}})|F_{k−1}]
  = S̃_{k−1} + E[(B_{t_{k−1}^*} − B_{t_{k−1}}) (B_{t_k} − B_{t_{k−1}})|F_{k−1}]
  = S̃_{k−1} + (t_{k−1}^* − t_{k−1}) ≠ S̃_{k−1}
for any t_{k−1} < t_{k−1}^* ≤ t_k, hence S̃_k is not a martingale with respect to the filtration generated by the Brownian motion B.
5. Let B_t be a Brownian motion and let X_t be the stochastic integral
X_t = ∫_0^t e^{s−t} dB_s.
1. Determine the expectation E[X_t] and the variance V(X_t) of X_t.
2. Show that the random variable
W_t = √(2(t + 1)) X_{log(t+1)/2}
has distribution W_t ∼ N(0, t).

Proof. 1) By part (vi) of Proposition 13.11, which lists properties of the Itô integral, we have that, since the integrand e^{s−t} of the stochastic integral is deterministic, the stochastic integral is normally distributed:
X_t ∼ N(0, ∫_0^t (e^{s−t})² ds) = N(0, ∫_0^t e^{2(s−t)} ds) = N(0, (1/2)(1 − e^{−2t})).
Hence X_t has the distribution X_t ∼ N(0, (1/2)(1 − e^{−2t})); in particular E[X_t] = 0 and V(X_t) = (1/2)(1 − e^{−2t}).
2) From the first part of the exercise, we know that X_t ∼ N(0, (1/2)(1 − e^{−2t})). For a normally distributed random variable Y ∼ N(0, σ²) it holds that cY ∼ N(0, c²σ²), hence
W_t ∼ N(0, 2(t + 1) · (1/2)(1 − e^{−2 log(t+1)/2})) = N(0, (t + 1)(1 − e^{−log(t+1)})) = N(0, (t + 1)(1 − 1/(t + 1))) = N(0, (t + 1) · t/(t + 1)) = N(0, t).
And the proof is done.
B
be a Brownian motion. Find
z∈R Z
F (ω) = z +
and
ϕ(s, ω) ∈ V
T
ϕ(s, ω)dBs 0
in the following cases 1.
F (ω) = BT3 (ω).
2.
F (ω) =
3.
F (ω) = eT /2 cosh(BT (ω)) = eT /2 12 (eBT (ω) + e−BT (ω) )
RT 0
Bs3 ds.
70
such that
Proof. 1) We get
1 d(Bt3 ) = 3Bt2 dBt + 6Bt dt 2 and since
d(tBt ) = tdBt + Bt dt Rt Rt tB = 0 sdBs + 0 Bs ds which may be rewritten Rt Rt t Bs ds = 0 (t − s)dBs . We may now write the BT3 0
we have
using
to get
as
Rt 0
tdBs
T
Z
BT3
tBt =
3(Bt2 + (T − t))dBt
=z+ 0
where
z = E[BT3 ] = 0.
Hence
ϕ(s, ω) = 3(Bs (ω)2 + (T − s))
2)
d(T BT3 ) = T (3BT2 dBT + 3BT dT ) + BT3 dT hence
T
Z
T BT3 = −z +
s3Bs2 dBs +
z ∈ R. Z
T
T
Z
Bs3 ds,
3sBs ds + 0
0 for some
Z
0
Rewriting the expression gives
T
Bs3 ds
=z+
T BT3
T
Z
s3Bs2 dBs
−
Z −
3sBs ds 0
0
0
T
which by problem 1) gives
T
Z
Bs3 ds = z +
RT
We need to rewrite
0
T
Z 3T (Bs2 + (T − s)) − s3Bs2 dBs −
3sBs ds.
0
0
0
ds.
T
Z
3sBs ds
on a form that is with respect to
dBs
instead of
Study
d(T 2 BT ) = 2T BT dT + T 2 dBT , hence
T 2 BT =
RT 0
2sBs ds +
RT 0
s2 dBs
T
Z
sBs ds = 0
1 2
and by writing
Z
T 2 BT =
RT 0
T 2 dBs
we get
T
(T 2 − s2 )dBs ,
0
hence
Z
T
Bs3 ds = z +
0
Z
T
0
We may now write the
Z 0
T
Bs3 ds
Z 3 T 2 3T (Bs2 + (T − s)) − s3Bs2 dBs − (T − s2 )dBs . 2 0 RT 3 Bs ds as 0
Z =z+ 0
T
3 3T (Bs2 + (T − s)) − s3Bs2 − (T 2 − s2 ) dBs . 2
71
RT RT z = E[ 0 Bs3 ds] = 0 E[Bs3 ]ds = 0. s)) − s3Bs (ω)2 − 23 (T 2 − s2 ).
Hence
where
ϕ(s, ω) = 3T (Bs (ω)2 + (T −
3) We notice that since
d(eBt −t/2 ) = eBt −t/2 dBt d(e−Bt +t/2 ) = −e−Bt +t/2 dBt , we may write
1 1 eT /2 (eBT (ω) + e−BT (ω) ) = eT /2 (eT /2 eBT (ω)−T /2 + e−T /2 e−BT (ω)+T /2 ) 2 2 Z T Z T 1 Bs −s/2 −T /2 T /2 T /2 e−Bs +s/2 dBs ) e dBs − e =z+e (e 2 0 0 Z T Z T 1 = z + (eT e−Bs +s/2 dBs ) eBs −s/2 dBs − 2 0 0 Z T T Bs −s/2 e e − e−Bs +s/2 =z+ dBs 2 0 where
z
is
1 1 z = E[F ] = eT /2 E[eBT ] + E[e−BT ] = eT /2 eT /2 E[eBT −T /2 ] | {z } 2 2 =1
+ e−T /2 E[e−BT +T /2 ] | {z }
1 1 = eT /2 eT /2 + e−T /2 = (eT + 1). 2 2
=1
7. Let
Xt
be a generalized geometric grownian motion given by
dXt = αt Xt dt + βt Xt dBt where
αt
and
βt
(3)
are bounded deterministic functions and
B
is a Brownian
motion. 1. Find an explicit expression for 2. Find
z∈R
Proof. 1) Let
and
ϕ(t, ω) ∈ V
Rt
Xt = e
dXt = αt e
0
Rt 0
αs ds αs ds
Yt ,
Xt
E[Xt ]. Rt X(T, ω) = z + 0 ϕ(s, ω)dBs (ω).
and compute
such that
the dierential of
Yt dt + e
Rt 0
αs ds
Xt
is
dYt = αt Xt dt + e
Rt 0
αs ds
dYt
for this expression to be equal to (3) we must have
e
which holds if
Rt 0
αs ds
dYt = βYt dBt
dYt = βt e |
hence
Yt = y0 e
72
Rt 0
αs ds
{z
=Xt
Yt dBt }
Yt is an exponential martingale given by R Rt β dBt − 12 0t βt2 dt 0 t
where
y0 = 1
since
X0 = 1.
This gives the following expresiion for
Xt = e
Rt 0
Using the martiongale property of
E[Xt ] = E[e
Rt
αs ds
0
Xt
R βt dBt + 0t (αt − 12 βt )2 dt
Yt
gives
Yt ] = e
Rt 0
αs ds
E[Yt ] = e | {z }
Rt 0
αs ds
.
=Y0
RT
XT = e 0 αs ds YT where YT is the exponential martingale with SDE dYt = βt Yt dBt hence XT may be written as Z T R Z T RT RT RT T αs ds αs ds αs ds 0 0 0 e 0 αs ds βs Ys dBs βs Ys dBs ) = |e {z } + XT = e YT = e (1 + 2) In part 1) it was shown that
0
and
ϕ(s, ω) = e
RT
αs ds
0
βs Ys (ω).
0
=z
ϕ(s, ω) ∈ V ,
We need to show that
the criterias
are given in Denition 13.1. Part 1 and 2 of Deniton 13.1 is showed by noticing that
ϕs
therefore universally measurable, and
RT
αu du
βs which is deterministic and Ys which is the exponential martingale and
is the product of the processes
e
0
therefore fullll the conditions 1) and 2). The product of these two processes does also fullll criterias 1) and 2).
βt
by using that
|βt | < K
(βt e
≤ |e Yt2 = e
Rt 0
T
Z
0 RT 0
0