
The Annals of Applied Probability
2010, Vol. 20, No. 2, 367–381
DOI: 10.1214/09-AAP637
© Institute of Mathematical Statistics, 2010
arXiv:1009.3373v1 [math.PR] 17 Sep 2010

A NEW FORMULA FOR SOME LINEAR STOCHASTIC EQUATIONS WITH APPLICATIONS

By Offer Kella¹ and Marc Yor

Hebrew University of Jerusalem and Université Pierre et Marie Curie
We give a representation of the solution for a stochastic linear equation of the form $X_t = Y_t + \int_{(0,t]} X_{s-}\,dZ_s$, where $Z$ is a càdlàg semimartingale and $Y$ is a càdlàg adapted process with bounded variation on finite intervals. As an application we study the case where $Y$ and $-Z$ are nondecreasing, jointly have stationary increments and the jumps of $-Z$ are bounded by 1. Special cases of this process are shot-noise processes, growth collapse (additive increase, multiplicative decrease) processes and clearing processes. When $Y$ and $Z$ are, in addition, independent Lévy processes, the resulting $X$ is called a generalized Ornstein–Uhlenbeck process.

1. Introduction. In this paper we show that when $Z$ is a càdlàg adapted semimartingale and $Y$ is càdlàg adapted and with bounded variation on compact intervals, then the unique càdlàg adapted solution of $X_t = Y_t + \int_{(0,t]} X_{s-}\,dZ_s$ is given via the representation $X_t = \int_{[0,t]} U_{u,t}\,dY_u$, where $U_{u,t}$ is defined by formula (2) below. This form seems to be new, and we note that the integral with respect to $Y$ is defined path-wise, while the integral in the integral equation can be a stochastic integral. Of course, when $Y$ is a semimartingale, one cannot expect such a representation of the solution, since $\{U_{u,t}\,|\,0\le u\le t\}$ is not adapted as a process indexed by $u$.

We discuss an application to the case where $Y$ and $-Z$ are nondecreasing processes jointly having stationary increments, and subsequently specialize to cases where one or both also have independent increments (Lévy processes). This model is a generalization of both the shot-noise process and the growth–collapse process (see, e.g., [7, 11, 16] and references therein) or, more generally, an additive increase and multiplicative decrease process. The latter have been used as models for the TCP window size in communication networks.

Received July 2009.
¹Supported by Grant 434/09 from the Israel Science Foundation and the Vigevani Chair in Statistics.
AMS 2000 subject classifications. Primary 60H20; secondary 60G51, 60K30.
Key words and phrases. Linear stochastic equation, growth collapse process, risk process, shot-noise process, generalized Ornstein–Uhlenbeck process.

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in The Annals of Applied Probability, 2010, Vol. 20, No. 2, 367–381. This reprint differs from the original in pagination and typographic detail.
We note that Jacod ([8], Theorem 6.8, page 194) and Yoeurp and Yor
[21] give a complete solution for the case where the integrator is a semi-
martingale and the driving process is càdlàg, Jaschke [9] gives a derivation
for the case where the integrator does not have jumps of size −1, and Protter
([20], Theorems 52 and 53, pages 322–323) treats the case with a continuous
integrator.
The literature related to generalized Ornstein–Uhlenbeck processes and their applications, which bears directly on some of the special cases that we consider, is huge and growing exponentially fast. We refer the reader to [1–6, 14, 15, 17–19, 22] and further references therein.

2. Main result. With respect to some standard (right-continuous, augmented) filtration, let $Y = \{Y_t\,|\,t\ge 0\}$ and $Z = \{Z_t\,|\,t\ge 0\}$ be two adapted càdlàg processes. Denote $Z_{0-} = 0$, and for $t>0$, $Z_{t-} = \lim_{s\uparrow t} Z_s$. Set $\Delta Z_t = Z_t - Z_{t-}$; when $Z$ is of bounded variation on compact intervals (BV), set $Z^c_t = Z_t - \sum_{0\le s\le t}\Delta Z_s$, and similarly for any other càdlàg process considered in this paper.

Theorem 1. Assume $Y$ and $Z$ are càdlàg and adapted, $Y$ is BV and $Z$ is a semimartingale. Then the unique càdlàg adapted solution to the equation $X_t = Y_t + \int_{(0,t]} X_{s-}\,dZ_s$ is
$$X_t = \int_{[0,t]} U_{u,t}\,dY_u, \tag{1}$$
where
$$U_{u,t} = \begin{cases} e^{Z_t - Z_u - (1/2)([Z,Z]^c_t - [Z,Z]^c_u)}\displaystyle\prod_{u<s\le t}(1+\Delta Z_s)e^{-\Delta Z_s}, & 0\le u<t,\\[2pt] 1, & 0\le u=t, \end{cases} \tag{2}$$
and $[Z,Z]$ is the quadratic variation process associated with $Z$. When $Z$ is BV, then (2) reduces to
$$U_{u,t} = \begin{cases} e^{Z^c_t - Z^c_u}\displaystyle\prod_{u<s\le t}(1+\Delta Z_s), & 0\le u<t,\\[2pt] 1, & 0\le u=t, \end{cases} \tag{3}$$
where $Z^c$ is the continuous part of $Z$ as defined earlier (rather than the continuous martingale part of $Z$, as is customary in stochastic calculus).

Proof. Note that with $T_0 = 0$ and, for $n\ge 1$, $T_n = \inf\{t>T_{n-1}\,|\,\Delta Z_t = -1\}$, we have for $T_n < u \le t < T_{n+1}$ that
$$\frac{U_{T_n,t}}{U_{T_n,u-}} = U_{u,t}(1+\Delta Z_u), \qquad \frac{U_{T_n,t}}{U_{T_n,u}} = U_{u,t}. \tag{4}$$
Also, since $Y$ is a BV process, the covariation process $[Y,Z]$ is given via $[Y,Z]_t = \sum_{0\le s\le t}\Delta Y_s\Delta Z_s$. If one follows the solution in equation (6.9) in Theorem (6.8) on page 194 of [8], then for $T_n \le t < T_{n+1}$ we have that
$$\begin{aligned} X_t &= U_{T_n,t}\left(\Delta Y_{T_n} + \int_{(T_n,t]} U^{-1}_{T_n,u-}\,dY_u - \int_{(T_n,t]} U^{-1}_{T_n,u}\,d[Y,Z]_u\right)\\ &= U_{T_n,t}\Delta Y_{T_n} + \int_{(T_n,t]} U_{u,t}(1+\Delta Z_u)\,dY_u - \sum_{T_n<u\le t} U_{u,t}\Delta Y_u\Delta Z_u\\ &= \int_{[T_n,t]} U_{u,t}\,dY_u, \end{aligned} \tag{5}$$
where the second equality is justified since the first integral on the right-hand side of the first equality is a path-wise Stieltjes integral, and the second is a sum which is also defined path-wise. If $Y$ were a general semimartingale, then interchanging $U_{T_n,t}$ with the integral sign like this would not be justified, as the resulting integrand would no longer be adapted. Clearly, if $n\ge 1$, then $U_{u,t} = 0$ for $u<T_n$, and thus
$$X_t = \int_{[0,t]} U_{u,t}\,dY_u. \tag{6}$$
Since this holds for all $n$, the proof for the more general case is complete. For the case where $Z$ is BV, it is evident that $[Z,Z]^c = 0$, and it is easy to check that $\sum_{u<s\le t}\Delta Z_s$ is convergent (actually, absolutely convergent), and hence the result follows. □
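To make the representation concrete, here is a minimal numerical sketch (ours, not from the paper; the drift, jump times and jump sizes are illustrative) that evaluates $U_{u,t}$ from (3) on a single BV path and checks the integral equation by an Euler sum:

```python
# A minimal numerical check of Theorem 1 in the BV case (formula (3)) on one
# deterministic path: Z = drift c*t plus two jumps, Y a pure-jump step path.
# All jump times and sizes below are illustrative.
import numpy as np

c = -0.3                                   # continuous (drift) part of Z
z_jumps = {0.25: -0.5, 0.6: 0.4}           # time -> jump size of Z
y_jumps = {0.1: 1.0, 0.5: 2.0}             # time -> jump size of Y (Y_0 = 0)

def U(u, t):                               # formula (3): U_{u,t}
    if u == t:
        return 1.0
    prod = np.prod([1.0 + dz for s, dz in z_jumps.items() if u < s <= t])
    return np.exp(c * (t - u)) * prod      # here Z^c_t = c*t

def X(t):                                  # X_t = int_[0,t] U_{u,t} dY_u
    return sum(U(u, t) * dy for u, dy in y_jumps.items() if u <= t)

def Zpath(t):
    return c * t + sum(dz for s, dz in z_jumps.items() if s <= t)

def Ypath(t):
    return sum(dy for s, dy in y_jumps.items() if s <= t)

# Euler check that X solves X_t = Y_t + int_(0,t] X_{s-} dZ_s:
T, n = 1.0, 20_000
grid = np.linspace(0.0, T, n + 1)
integral = sum(X(grid[i - 1]) * (Zpath(grid[i]) - Zpath(grid[i - 1]))
               for i in range(1, n + 1))
print(X(T), Ypath(T) + integral)           # should agree up to O(1/n)
```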

Of course, one may also define the counting process
$$N_t = \sum_{0<s\le t}\mathbf{1}_{\{\Delta Z_s=-1\}}, \tag{7}$$
which is a.s. finite for all $t\ge 0$ and right-continuous (possibly a.s. identically zero or terminating), and write
$$X_t = \int_{[T_{N_t},t]} U_{u,t}\,dY_u. \tag{8}$$

It is worthwhile to note that for the case where $Z$ is also a BV process, there is a more direct proof involving (path-wise) Stieltjes integration, which can be taught in a classroom, as follows. Write $Z = A - B$, where $A$ and $B$ are right-continuous and nondecreasing and have no jump points in common. Write $A^d_t = A_t - A^c_t = \sum_{0<s\le t}\max(\Delta Z_s,0)$ and similarly for $B$. Observe that by right continuity $\Delta A_t$, $\Delta B_t$, $A^d_t - A^d_0$ and $B^d_t - B^d_0$ all converge to zero as $t\downarrow 0$. In particular, for every $t$ for which $-1\le -\Delta B_s\,(\le 0)$ for $0<s\le t$, we have that
$$1 + (A^d_t - A^d_0) \le \prod_{0<s\le t}(1+\Delta A_s) \le e^{A^d_t - A^d_0} \tag{9}$$
and
$$1 - (B^d_t - B^d_0) \le \prod_{0<s\le t}(1-\Delta B_s) \le e^{-(B^d_t - B^d_0)}, \tag{10}$$
which implies that
$$\prod_{0<s\le t}(1+\Delta Z_s) = \left(\prod_{0<s\le t}(1+\Delta A_s)\right)\left(\prod_{0<s\le t}(1-\Delta B_s)\right) \to 1 \tag{11}$$
as $t\downarrow 0$.
Now note that with $C_t = e^{Z^c_t}$ and $D_t = \prod_{0<s\le t}(1+\Delta Z_s)$, ordinary (Stieltjes) integration by parts yields
$$U_t \equiv C_tD_t = C_{0+}D_{0+} + \int_{(0,t]} D_{s-}\,dC_s + \int_{(0,t]} C_{s-}\,dD_s + \sum_{0<s\le t}\Delta C_s\,\Delta D_s, \tag{12}$$
and it is easy to check that the continuity of $C$ and the fact that $dC_t = C_t\,dZ^c_t$ imply that
$$U_t = 1 + \int_{(0,t]} U_{s-}\,dZ_s. \tag{13}$$

With this formula established, it is clear that if we denote $U_{u,t}$ as in (3), then in an identical way to that in which (13) was obtained, we have (path-wise) that
$$U_{u,t} = 1 + \int_{(u,t]} U_{u,s-}\,dZ_s \tag{14}$$
for all $0\le u\le t$.


Now, if $X_t = \int_{[0,t]} U_{s,t}\,dY_s$, then $X_{t-} = \int_{[0,t)} U_{s,t-}\,dY_s$, and thus $\int_{(0,t]} X_{s-}\,dZ_s$ is given by
$$\int_{(0,t]}\int_{[0,s)} U_{u,s-}\,dY_u\,dZ_s = \int_{[0,t)}\int_{(u,t]} U_{u,s-}\,dZ_s\,dY_u = \int_{[0,t)} (U_{u,t}-1)\,dY_u, \tag{15}$$
but since $U_{t,t} = 1$, we can include $t$ in the domain of integration without changing the value, which gives
$$\int_{(0,t]} X_{s-}\,dZ_s = \int_{[0,t]} (U_{u,t}-1)\,dY_u = X_t - Y_t \tag{16}$$
as required.

3. Applications. Assume that $Y$ and $Z$ are right-continuous and nondecreasing, jointly having stationary increments in the strong sense that the law of $\theta_s(Y,Z)$ is independent of $s$, where
$$\theta_s(Y(t),Z(t)) = (Y(t+s)-Y(s),\,Z(t+s)-Z(s)). \tag{17}$$
It is standard to (uniquely) extend $(Y,Z)$ to a double-sided process having stationary increments, that is, with $t\in\mathbb{R}$ rather than $t\ge 0$; thus we assume this at the outset. Finally, we assume that $Z$ has jumps bounded by 1. Without loss of generality let us assume that $Y_0 = Z_0 = 0$; otherwise we perform what follows for $Y-Y_0$ and $Z-Z_0$, which also have stationary increments. We consider the unique process $X$ defined via $X_t = X_0 + Y_t - \int_{(0,t]} X_{s-}\,dZ_s$ for $t\ge 0$, where $X_0$ is almost surely finite; the unique solution of which is
$$X_t = X_0 e^{-Z^c_t}\prod_{0<s\le t}(1-\Delta Z_s) + \int_{(0,t]} e^{-(Z^c_t - Z^c_u)}\prod_{u<s\le t}(1-\Delta Z_s)\,dY_u, \tag{18}$$
where an empty product (when $u=t$, or when $t=0$ on the right) is defined to be 1.
Special cases of such processes are shot-noise processes, in which $Z_t = rt$ and $Y$ is compound Poisson; growth-collapse, or additive increase multiplicative decrease (AIMD), processes, in which $Y_t = rt$ and usually $Z = qN_\lambda$, where $N_\lambda$ is a Poisson process with rate $\lambda$ and $0<q<1$; as well as clearing processes, where $Z$ is a Poisson process or, more generally, a renewal counting process (see, e.g., [10, 12]).
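For instance, in the shot-noise case $Z_t = rt$ has no jumps and (18) reduces to $X_t = X_0e^{-rt} + \sum_{\tau_i\le t}e^{-r(t-\tau_i)}\Delta Y_{\tau_i}$. The following Monte Carlo sketch (ours, not from the paper; all parameter values are illustrative) simulates this reduction and compares the empirical mean with formula (32) of Section 3.2 below, for which $EZ_1 = r$:

```python
# Shot-noise special case: Z_t = r*t (no jumps), Y compound Poisson, so (18)
# reads X_t = X_0 e^{-rt} + sum_{tau_i <= t} e^{-r(t - tau_i)} * jump_i.
# Illustrative parameters; the empirical mean is compared with (32), where
# EZ_1 = r and EY_1 = lam_y * mean_jump.
import numpy as np

rng = np.random.default_rng(1)
r, lam_y, mean_jump, t = 0.5, 1.5, 2.0, 4.0    # X_0 = 0
n_paths = 20_000

vals = np.empty(n_paths)
for i in range(n_paths):
    k = rng.poisson(lam_y * t)                 # number of shots in (0, t]
    taus = rng.uniform(0.0, t, size=k)         # shot epochs (order irrelevant)
    jumps = rng.exponential(mean_jump, size=k)
    vals[i] = np.sum(np.exp(-r * (t - taus)) * jumps)

ey1 = lam_y * mean_jump
print(vals.mean(), (ey1 / r) * (1.0 - np.exp(-r * t)))  # MC vs (32), X_0 = 0
```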
Consider the nondecreasing processes
$$J_t = Z^c_t - \sum_{0<s\le t}\log(1-\Delta Z_s)\mathbf{1}_{\{\Delta Z_s<1\}} \tag{19}$$
and $N_t = \sum_{0<s\le t}\mathbf{1}_{\{\Delta Z_s=1\}}$. Then it is clear that $Y$, $J$, $N$ jointly have stationary increments (in the strong sense), and from (18) we have
$$X_t = X_0 e^{-J_t}\mathbf{1}_{\{N_t=0\}} + \int_{(0,t]} e^{-(J_t-J_s)}\mathbf{1}_{\{N_t-N_s=0\}}\,dY_s. \tag{20}$$

If $\int_{(-\infty,0]} e^{J_s}\,dY_s$ is a.s. finite (recalling that for $s\le 0$, $J_s\le J_0 = 0$), then setting $X^*_t = \int_{(-\infty,t]} e^{-(J_t-J_s)}\mathbf{1}_{\{N_t-N_s=0\}}\,dY_s$, it is clear that $X^*$ is a stationary process. Moreover, if, in addition, either $\lim_{t\to\infty}N_t\ge 1$ a.s. (equivalently, $T_1 = \inf\{t\,|\,\Delta Z_t=1\}$ is a.s. finite) or $J_t\to\infty$ a.s. as $t\to\infty$, then $|X^*_t - X_t|\to 0$ a.s. as $t\to\infty$, and thus for any a.s. finite initial $X_0$, a limiting distribution exists which is distributed like $X^*_0$.
In fact, when $X_0$ is independent of $(Y,Z)$, then shifting by $-t$, noting that $\theta_{-t}J_s = J_{s-t} - J_{-t}$ (so that $\theta_{-t}J_t = 0$) and similarly for $N$ and $Y$, it is clear that $X_t$ has the same distribution as
$$X_0 e^{J_{-t}}\mathbf{1}_{\{N_{-t}=0\}} + \int_{(0,t]} e^{J_{s-t}}\mathbf{1}_{\{N_{s-t}=0\}}\,dY_{s-t} = X_0 e^{J_{-t}}\mathbf{1}_{\{N_{-t}=0\}} + \int_{(-t,0]} e^{J_s}\mathbf{1}_{\{N_s=0\}}\,dY_s. \tag{21}$$
In particular, this implies that when $X_0 = 0$, then $X_t$ is stochastically increasing in $t\ge 0$.
Let us summarize our findings as follows.

Theorem 2. If $\int_{(-\infty,0]} e^{J_s}\,dY_s < \infty$ a.s., and either $T_1<\infty$ a.s. or $J_t\to\infty$ a.s. as $t\to\infty$, then $X$ has the unique stationary version
$$X^*_t = \int_{(-\infty,t]} e^{-(J_t-J_s)}\mathbf{1}_{\{N_t-N_s=0\}}\,dY_s, \tag{22}$$
and for every initial a.s. finite $X_0$, $X_t$ converges in distribution to $X^*_0$. Moreover, when $X_0 = 0$ a.s., then $X_t$ is stochastically increasing in $t\ge 0$.

We note that when $(Y,Z)$ also have independent increments, so that they form a Lévy process, then the negative of the time-reversed process is a left-continuous version of the forward process, and thus in this case [when $X_0$ is independent of $(Y,Z)$], $X_t$ is also distributed like
$$X_0 e^{-J_t}\mathbf{1}_{\{N_t=0\}} + \int_{(0,t]} e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,dY_s, \tag{23}$$
which is also the consequence of the usual time-reversal argument for Lévy processes. In what follows we will consider special cases of this structure.
We observe that in the general case $N$ is a simple (i.e., a.s. $\Delta N_t\in\{0,1\}$ for all $t$) counting process associated with a time-stationary point process. Special cases of such processes are Poisson processes and delayed renewal processes where the delay has the stationary excess lifetime distribution associated with the subsequent i.i.d. inter-renewal times. We will consider this special case a bit later.

3.1. $EX_t$ for independent $X_0$, $Y$, $Z$. Since $Y$ has stationary increments, it follows that $EY_t = EY_1t$. From (21) we have that when $EY_1$ and $EX_0$ are finite, then for $t\ge 0$,
$$EX_t = EX_0\,Ee^{J_{-t}}\mathbf{1}_{\{N_{-t}=0\}} + EY_1\int_{-t}^0 Ee^{J_s}\mathbf{1}_{\{N_s=0\}}\,ds, \tag{24}$$
and since for $s\le 0$ we have that $J_s = -(J_0-J_s)$ is distributed like $-J_{-s} = -(J_{-s}-J_0)$, and similarly for $N$, we have that
$$EX_t = EX_0\,Ee^{-J_t}\mathbf{1}_{\{N_t=0\}} + EY_1\int_0^t Ee^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds. \tag{25}$$

3.2. $EX_t$ for independent $X_0$, $Y$, $Z$ with Lévy $Z$. Here $Z$ is a subordinator with Laplace–Stieltjes exponent $-\eta_z(\alpha) = \log Ee^{-\alpha Z_1}$ where, for $\alpha\ge 0$,
$$\eta_z(\alpha) = c_z\alpha + \int_{(0,1]} (1-e^{-\alpha x})\,\nu_z(dx) \tag{26}$$
with $c_z\ge 0$ and $\int_{(0,1]} x\,\nu_z(dx)<\infty$. Since the jumps of $Z$ are bounded above by 1, $\nu_z((1,\infty)) = 0$.
In this case $Z^c_t = c_zt$, and $N$ is a Poisson process with rate $\lambda = \nu_z\{1\}$ which is independent of the subordinator
$$J_t = c_zt - \sum_{0<s\le t}\log(1-\Delta Z_s)\mathbf{1}_{\{\Delta Z_s<1\}}, \tag{27}$$
the Lévy measure of which, call it $\nu_j$, is defined via $\nu_j((a,b]) = \nu_z((1-e^{-a},1-e^{-b}])$ for $0<a<b<\infty$, and with exponent
$$\eta_j(\alpha) = c_z\alpha + \int_{(0,\infty)} (1-e^{-\alpha x})\,\nu_j(dx) = c_z\alpha + \int_{(0,1)} (1-(1-x)^\alpha)\,\nu_z(dx), \tag{28}$$
so that for $\alpha>0$,
$$\eta_j(\alpha)+\lambda = c_z\alpha + \int_{(0,1]} (1-(1-x)^\alpha)\,\nu_z(dx). \tag{29}$$
We note that
$$\int_{(0,\infty)}\min(x,1)\,\nu_j(dx) = \int_{(0,1)}\min(-\log(1-x),1)\,\nu_z(dx), \tag{30}$$
and since $-\log(1-x)\le\frac{x}{1-x}\le xe$ for $0<x\le 1-e^{-1}$, the right-hand side is dominated above by $e\int_{(0,1)} x\,\nu_z(dx)<\infty$, so that $\nu_j$ is indeed the proper Lévy measure of a subordinator. Now, for this case, $Ee^{-J_s} = e^{-\eta_j(1)s}$, where
$$\eta_j(1) = c_z + \int_{(0,1)} (1-(1-x)^1)\,\nu_z(dx) = c_z + \int_{(0,1)} x\,\nu_z(dx) = \eta_z'(0)-\lambda, \tag{31}$$
recalling $\lambda = \nu_z\{1\}$. Therefore, $Ee^{-J_s}\mathbf{1}_{\{N_s=0\}} = e^{-(\eta_z'(0)-\lambda)s}e^{-\lambda s} = e^{-\eta_z'(0)s}$, so that in this case, since $\eta_z'(0) = c_z + \int_{(0,1]} x\,\nu_z(dx) = EZ_1$, (25) becomes
$$EX_t = EX_0 e^{-EZ_1t} + \frac{EY_1}{EZ_1}(1-e^{-EZ_1t}). \tag{32}$$
Recall that here $Y$ need not have independent increments.
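As a hedged sanity check of (32) (ours, not the paper's; the parameter values are illustrative), one can simulate the growth-collapse example $Y_t = rt$, $Z = qN_\lambda$, for which $EZ_1 = q\lambda$ and each Poisson epoch multiplies $X$ by $1-q$:

```python
# Monte Carlo sanity check of (32) for the growth-collapse (AIMD) example:
# Y_t = r*t and Z = q*N_lam (Poisson jumps of size q), so EZ_1 = q*lam and at
# each Poisson epoch X jumps to (1-q)X.  Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
r, q, lam, x0, t = 1.0, 0.4, 2.0, 3.0, 5.0
n_paths = 20_000

total = 0.0
for _ in range(n_paths):
    x, s = x0, 0.0
    while True:
        gap = rng.exponential(1.0 / lam)       # time to next collapse epoch
        if s + gap > t:
            x += r * (t - s)                   # grow linearly up to time t
            break
        x = (x + r * gap) * (1.0 - q)          # grow, then collapse by 1-q
        s += gap
    total += x

ez1 = q * lam                                  # EZ_1 for Z = q * Poisson(lam)
theory = x0 * np.exp(-ez1 * t) + (r / ez1) * (1.0 - np.exp(-ez1 * t))
print(total / n_paths, theory)                 # should agree up to MC error
```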

3.3. Independent $X_0$, $Y$, $Z$ with Lévy $Y$. For every $0=t_0<t_1<\cdots<t_n=t$, the independence between $Y$ and $Z$, and hence the independence of $Y$ and $J$, yields
$$E\left[\exp\left(-\alpha\sum_{i=1}^n e^{-J_{t_{i-1}}}\mathbf{1}_{\{N_{t_{i-1}}=0\}}(Y_{t_i}-Y_{t_{i-1}})\right)\Bigg|\,Z\right] = \prod_{i=1}^n \exp(-\eta_y(\alpha e^{-J_{t_{i-1}}}\mathbf{1}_{\{N_{t_{i-1}}=0\}})(t_i-t_{i-1})). \tag{33}$$
It thus follows, as in equation (5.9) of [13] for the more general multivariate case and in Proposition 1 of [19] for the case where $Y$ and $Z$ are compound Poisson, that
$$E\left[\exp\left(-\alpha\int_{(0,t]} e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,dY_s\right)\Bigg|\,Z\right] = \exp\left(-\int_0^t \eta_y(\alpha e^{-J_s}\mathbf{1}_{\{N_s=0\}})\,ds\right). \tag{34}$$
This implies, as in Theorem 5.1 of [13], that the conditional distribution of $\int_{(0,t]} e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,dY_s$ given $Z$ is infinitely divisible, as, on the right-hand side, $-\eta_y/n$ is also a Laplace–Stieltjes exponent of a subordinator.
Equation (34), with $\xi_0(\alpha) = Ee^{-\alpha X_0}$, $a\wedge b = \min(a,b)$, and recalling
$$T_1 = \inf\{t\,|\,\Delta Z_t=1\} = \inf\{t\,|\,N_t>0\}, \tag{35}$$
yields
$$\begin{aligned} Ee^{-\alpha X_t} &= E\,\xi_0(\alpha e^{-J_t}\mathbf{1}_{\{N_t=0\}})\exp\left(-\int_0^t \eta_y(\alpha e^{-J_s})\mathbf{1}_{\{N_s=0\}}\,ds\right)\\ &= E\,\xi_0(\alpha e^{-J_t})\exp\left(-\int_0^t \eta_y(\alpha e^{-J_s})\,ds\right)\mathbf{1}_{\{T_1>t\}}\\ &\quad + E\exp\left(-\int_0^{T_1}\eta_y(\alpha e^{-J_s})\,ds\right)\mathbf{1}_{\{T_1\le t\}}. \end{aligned} \tag{36}$$
Clearly, when either $T_1<\infty$ a.s. or $J_t\to\infty$ a.s. as $t\to\infty$, then
$$\lim_{t\to\infty} Ee^{-\alpha X_t} = E\exp\left(-\int_0^{T_1}\eta_y(\alpha e^{-J_s})\,ds\right). \tag{37}$$
We now observe that if $N$ and $J$ are independent, as for instance in the case where $Z$ is a subordinator, and $N$ is the counting process associated with a time-stationary version of a renewal process, the latter having inter-renewal time distribution $F$ with a finite mean $\mu$, then it is well known that $N$ is a delayed renewal process in which the times between the $(i-1)$st and $i$th jumps are distributed $F$ for $i\ge 2$ and the time until the first jump (i.e., the delay) has a distribution with density $f_e(t) = (1-F(t))/\mu$. Therefore, in this case,
$$E\exp\left(-\int_0^{T_1}\eta_y(\alpha e^{-J_s})\,ds\right) = \int_0^\infty E\exp\left(-\int_0^t \eta_y(\alpha e^{-J_s})\,ds\right) f_e(t)\,dt. \tag{38}$$
Differentiating the right-hand side of the first equality in (36) once and setting $\alpha=0$ gives (25), as expected, while for the case where $X_0=0$ a.s., differentiating twice and setting $\alpha=0$ yields
$$EX_t^2 = (\eta_y'(0))^2\,E\left(\int_0^t e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^2 - \eta_y''(0)\,E\int_0^t e^{-2J_s}\mathbf{1}_{\{N_s=0\}}\,ds. \tag{39}$$

3.4. $EX_t^2$ for independent $Y$, $Z$ with Lévy $Y$, $Z$ and $X_0=0$. We note that for every $\beta>0$, $E\int_0^t e^{-\beta J_s}\mathbf{1}_{\{N_s=0\}}\,ds = \frac{1-e^{-(\eta_j(\beta)+\lambda)t}}{\eta_j(\beta)+\lambda}$, where $\lambda=\nu_z\{1\}$. Also, note that since $N_u\le N_s$ for $u\le s$,
$$\left(\int_0^t e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^2 = 2\int_0^t\int_0^s e^{-J_s-J_u}\mathbf{1}_{\{N_s=0\}}\,du\,ds = 2\int_0^t\int_0^s e^{-(J_s-J_u)}e^{-2J_u}\mathbf{1}_{\{N_s=0\}}\,du\,ds, \tag{40}$$
and therefore (using Fubini and the stationary independent increments property of $J$), the expected value of the left-hand side is
$$2\int_0^t\int_0^s e^{-(\eta_j(1)+\lambda)(s-u)}e^{-(\eta_j(2)+\lambda)u}\,du\,ds = 2\,\frac{(1-e^{-(\eta_j(1)+\lambda)t})/(\eta_j(1)+\lambda) - (1-e^{-(\eta_j(2)+\lambda)t})/(\eta_j(2)+\lambda)}{\eta_j(2)-\eta_j(1)}. \tag{41}$$
Finally, we observe that for every positive integer $n$ we obtain [recall (29)]
$$\eta_j(n)+\lambda = c_zn + \int_{(0,1]} (1-(1-x)^n)\,\nu_z(dx) = c_zn + \sum_{k=1}^n\binom{n}{k}(-1)^{k-1}\int_{(0,1]} x^k\,\nu_z(dx), \tag{42}$$
and since $\eta_z(0) = \eta_z^{(0)}(0) = 0$, $\eta_z'(0) = c_z + \int_{(0,1]} x\,\nu_z(dx)$ and $\eta_z^{(k)}(0) = (-1)^{k-1}\int_{(0,1]} x^k\,\nu_z(dx)$ for $k\ge 2$, it holds that
$$\eta_j(n)+\lambda = \sum_{k=0}^n\binom{n}{k}\eta_z^{(k)}(0). \tag{43}$$
In particular, $\eta_j(1)+\lambda = \eta_z'(0) = c_z + \int_{(0,1]} x\,\nu_z(dx)$ and $\eta_j(2)+\lambda = 2\eta_z'(0)+\eta_z''(0)$, so that $\eta_j(2)-\eta_j(1) = \eta_z'(0)+\eta_z''(0)$.


To summarize, when $X_0=0$, we have
$$EX_t^2 = 2(\eta_y'(0))^2\,\frac{(1-e^{-\eta_z'(0)t})/\eta_z'(0) - (1-e^{-(2\eta_z'(0)+\eta_z''(0))t})/(2\eta_z'(0)+\eta_z''(0))}{\eta_z'(0)+\eta_z''(0)} - \eta_y''(0)\,\frac{1-e^{-(2\eta_z'(0)+\eta_z''(0))t}}{2\eta_z'(0)+\eta_z''(0)}, \tag{44}$$
which converges to
$$\frac{2(\eta_y'(0))^2 - \eta_z'(0)\eta_y''(0)}{\eta_z'(0)(2\eta_z'(0)+\eta_z''(0))} = \frac{(\eta_y'(0)/\eta_z'(0))^2 - \eta_y''(0)/(2\eta_z'(0))}{1+\eta_z''(0)/(2\eta_z'(0))} \tag{45}$$
as $t\to\infty$. We note that as $\nu_z((1,\infty))=0$, clearly whenever either $c_z>0$ or $\nu_z((0,1))\neq 0$ (i.e., $Z-N$ is not identically zero), it holds that
$$\eta_z'(0) = c_z + \int_{(0,1]} x\,\nu_z(dx) > \int_{(0,1]} x^2\,\nu_z(dx) = -\eta_z''(0). \tag{46}$$
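As a quick consistency check (ours, not the paper's): for the growth-collapse example $Z = qN_\lambda$ with $Y_t = t$ we have $\eta_y'(0)=1$, $\eta_y''(0)=0$, $\eta_z'(0)=\lambda q$ and $\eta_z''(0)=-\lambda q^2$, so (44) must agree with the expression obtained directly from (41) with $\mu_1 = \lambda q$ and $\mu_2 = \lambda q(2-q)$. A short sketch evaluating both, with illustrative parameters:

```python
# Consistency check of (44) for Z = q*N_lam, Y_t = t, X_0 = 0, where
# eta_y'(0) = 1, eta_y''(0) = 0, eta_z'(0) = lam*q, eta_z''(0) = -lam*q**2.
# Parameter values are illustrative.
import math

q, lam, t = 0.4, 2.0, 5.0
ez1, ez2 = lam * q, -lam * q ** 2          # eta_z'(0), eta_z''(0)

# (44) with eta_y'(0) = 1, eta_y''(0) = 0:
lhs = 2 * ((1 - math.exp(-ez1 * t)) / ez1
           - (1 - math.exp(-(2 * ez1 + ez2) * t)) / (2 * ez1 + ez2)) / (ez1 + ez2)

# the same quantity via (41), with mu_1 = eta_j(1)+lam and mu_2 = eta_j(2)+lam:
mu1, mu2 = lam * (1 - (1 - q)), lam * (1 - (1 - q) ** 2)
rhs = 2 * ((1 - math.exp(-mu1 * t)) / mu1
           - (1 - math.exp(-mu2 * t)) / mu2) / (mu2 - mu1)

print(lhs, rhs)                            # identical up to rounding
```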

3.5. Lévy $Z$, linear $Y$ and $X_0=x$. It is of interest to consider the special case where $Y_t = rt$ for some $r>0$ and $X_0=x$ for some $x\ge 0$. For the case where $Z$ is compound Poisson, this model becomes the growth–collapse process from [16], where the computation of transient moments turns out to be especially tractable. Since
$$\frac{X_t}{r} = \frac{x}{r} + t - \int_{(0,t]}\frac{X_{s-}}{r}\,dZ_s, \tag{47}$$
we may without loss of generality assume that $r=1$. Recall (23). Following the ideas in the proof of Proposition 3.1 of [4], we first write, for $a\ge 0$ and $b\ge 1$,
$$\begin{aligned} Ee^{-aJ_t}\left(\int_0^t e^{-J_s}\,ds\right)^b &= b\,Ee^{-aJ_t}\int_0^t e^{-J_u}\left(\int_u^t e^{-J_s}\,ds\right)^{b-1}du\\ &= b\int_0^t E\left[e^{-a(J_t-J_u)}\left(\int_u^t e^{-(J_s-J_u)}\,ds\right)^{b-1}e^{-(a+b)J_u}\right]du\\ &= b\int_0^t e^{-\eta_j(a+b)u}\,Ee^{-aJ_{t-u}}\left(\int_0^{t-u} e^{-J_s}\,ds\right)^{b-1}du. \end{aligned} \tag{48}$$
Thus, if $T\sim\exp(\theta)$ for some $\theta>0$ and is independent of $Z$, then since the conditional distribution of $T-u$ given $T>u$ is the same as that of $T$ (memoryless property), it readily follows that
$$Ee^{-aJ_T}\left(\int_0^T e^{-J_s}\,ds\right)^b = \frac{b}{\eta_j(a+b)+\theta}\,Ee^{-aJ_T}\left(\int_0^T e^{-J_s}\,ds\right)^{b-1}. \tag{49}$$
For $a=0$ we have that, since $T_1\wedge T\sim\exp(\lambda+\theta)$ and $\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds = \int_0^{T_1\wedge T} e^{-J_s}\,ds$,
$$E\left(\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^b = \frac{b}{\eta_j(b)+\lambda+\theta}\,E\left(\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^{b-1}. \tag{50}$$
For $a>0$ we have, from the fact that $T_1\wedge T$ is independent of $\mathbf{1}_{\{T_1>T\}}$, that
$$\begin{aligned} Ee^{-aJ_T}\mathbf{1}_{\{N_T=0\}}\left(\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^b &= Ee^{-aJ_{T_1\wedge T}}\mathbf{1}_{\{T_1>T\}}\left(\int_0^{T_1\wedge T} e^{-J_s}\,ds\right)^b\\ &= \frac{\theta}{\lambda+\theta}\,Ee^{-aJ_{T_1\wedge T}}\left(\int_0^{T_1\wedge T} e^{-J_s}\,ds\right)^b \end{aligned} \tag{51}$$
and thus
$$\begin{aligned} Ee^{-aJ_T}\mathbf{1}_{\{N_T=0\}}\left(\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^b &= Ee^{-aJ_T}\mathbf{1}_{\{N_T=0\}}\left(\int_0^T e^{-J_s}\,ds\right)^b\\ &= \frac{b}{\eta_j(a+b)+\lambda+\theta}\,Ee^{-aJ_T}\mathbf{1}_{\{N_T=0\}}\left(\int_0^T e^{-J_s}\,ds\right)^{b-1}. \end{aligned} \tag{52}$$
Clearly, when $b=0$ and $a>0$, we have that
$$Ee^{-aJ_T}\mathbf{1}_{\{N_T=0\}} = Ee^{-(\eta_j(a)+\lambda)T} = \frac{\theta}{\eta_j(a)+\lambda+\theta}. \tag{53}$$
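Formula (53) is easy to test by simulation. In the growth-collapse example $Z = qN_\lambda$ we have $\nu_z\{1\}=0$ (so the $\lambda$ of (53) vanishes and $N\equiv 0$), $J_T = -K\log(1-q)$ with $K\sim\mathrm{Poisson}(\lambda T)$ given $T$, and $\eta_j(a) = \lambda(1-(1-q)^a)$. A Monte Carlo sketch (ours, not from the paper; parameter values are illustrative):

```python
# Monte Carlo check of (53) in the growth-collapse example Z = q*N_lam, where
# nu_z{1} = 0 (so the lambda in (53) vanishes), J_T = -K*log(1-q) with
# K ~ Poisson(lam*T) given T, and eta_j(a) = lam*(1 - (1-q)**a).
# Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
q, lam, theta, a = 0.4, 2.0, 1.0, 1.7
n_samples = 200_000

T = rng.exponential(1.0 / theta, size=n_samples)
K = rng.poisson(lam * T)                       # number of jumps of Z up to T
emp = np.mean((1.0 - q) ** (a * K))            # E exp(-a J_T) since N_T = 0
eta_j = lam * (1.0 - (1.0 - q) ** a)
print(emp, theta / (eta_j + theta))            # should agree up to MC error
```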

Now
$$\begin{aligned} EX_T^n &= E\left(xe^{-J_T}\mathbf{1}_{\{N_T=0\}} + \int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^n\\ &= \sum_{k=1}^n\binom{n}{k}x^k\,Ee^{-kJ_T}\mathbf{1}_{\{N_T=0\}}\left(\int_0^T e^{-J_s}\,ds\right)^{n-k} + E\left(\int_0^T e^{-J_s}\mathbf{1}_{\{N_s=0\}}\,ds\right)^n, \end{aligned} \tag{54}$$
and denoting [recall (43)]
$$\mu_i = \eta_j(i)+\lambda = c_zi + \int_{(0,1]} (1-(1-x)^i)\,\nu_z(dx) = \sum_{k=0}^i\binom{i}{k}\eta_z^{(k)}(0), \tag{55}$$
it follows from (50), (52), (53) and (54), with some manipulations, that
$$EX_T^n = \frac{n!}{\prod_{i=1}^n\mu_i}\left(\sum_{k=1}^n \frac{x^k\prod_{i=1}^k\mu_i}{k!}\left(\prod_{i=k+1}^n\frac{\mu_i}{\mu_i+\theta} - \prod_{i=k}^n\frac{\mu_i}{\mu_i+\theta}\right) + \prod_{i=1}^n\frac{\mu_i}{\mu_i+\theta}\right), \tag{56}$$

where an empty product is defined to be 1. Finally, noting that $EX_T^n = \int_0^\infty e^{-\theta t}\,dEX_t^n$, it follows that if $\{E_i\,|\,i\ge 1\}$ are i.i.d. random variables with distribution $\exp(1)$, then $E_i/\mu_i\sim\exp(\mu_i)$. It is well known and easy to check that
$$\prod_{i=k}^n\frac{\mu_i}{\mu_i+\theta} = \int_0^\infty e^{-\theta t}\,dP\left[\sum_{i=k}^n\frac{E_i}{\mu_i}\le t\right]; \tag{57}$$
hence, for $1\le k\le n$,
$$\prod_{i=k+1}^n\frac{\mu_i}{\mu_i+\theta} - \prod_{i=k}^n\frac{\mu_i}{\mu_i+\theta} = \int_0^\infty e^{-\theta t}\,dP\left[\sum_{i=k+1}^n\frac{E_i}{\mu_i}\le t < \sum_{i=k}^n\frac{E_i}{\mu_i}\right], \tag{58}$$
and thus we have the following somewhat curious result.

Theorem 3. Let $p_{ij}(t)$ be the transition matrix function of a pure death process $D = \{D_t\,|\,t\ge 0\}$ with death rates $\mu_i$, $i\ge 1$ (0 is absorbing). Then
$$EX_t^n = \frac{n!}{\prod_{i=1}^n\mu_i}\left(p_{n0}(t) + \sum_{k=1}^n\frac{x^k\prod_{i=1}^k\mu_i}{k!}\,p_{nk}(t)\right) = \frac{n!}{\prod_{i=1}^n\mu_i}\,E\left[\prod_{i=1}^{D_t}\frac{x\mu_i}{i}\,\Bigg|\,D_0=n\right], \tag{59}$$
where an empty product is 1.

In particular, when $x=0$, then
$$EX_t^n = \frac{n!}{\prod_{i=1}^n\mu_i}\,p_{n0}(t) = n!\int\!\cdots\!\int_{\substack{\sum_{i=1}^nx_i\le t\\ x_1,\ldots,x_n\ge 0}}\exp\left(-\sum_{i=1}^n\mu_ix_i\right)dx_1\cdots dx_n = n!\,t^n\int\!\cdots\!\int_{\substack{\sum_{i=1}^nx_i\le 1\\ x_1,\ldots,x_n\ge 0}}\exp\left(-t\sum_{i=1}^n\mu_ix_i\right)dx_1\cdots dx_n. \tag{60}$$

In fact, one may also give a simple finite algorithm with which to compute $EX_t^n$. For the sake of brevity we do it only for the case $x=0$. This can be done similarly to the Brownian motion case in the proof of Theorem 1 on page 31 of [22] or, equivalently, directly from (60), as follows. Set $f_0 = 1$ and, for $n\ge 1$ and $0<a_1<a_2<\cdots<a_n$, let
$$\begin{aligned} f_n(a_1,\ldots,a_n) &= \int\!\cdots\!\int_{\substack{\sum_{i=1}^nx_i\le 1\\ x_1,\ldots,x_n\ge 0}}\exp\left(-\sum_{i=1}^na_ix_i\right)dx_1\cdots dx_n\\ &= \int\!\cdots\!\int_{\substack{\sum_{i=2}^nx_i\le 1\\ x_2,\ldots,x_n\ge 0}}\left(\int_0^{1-\sum_{i=2}^nx_i}e^{-a_1x_1}\,dx_1\right)\exp\left(-\sum_{i=2}^na_ix_i\right)dx_2\cdots dx_n\\ &= \frac{f_{n-1}(a_2,\ldots,a_n) - e^{-a_1}f_{n-1}(a_2-a_1,\ldots,a_n-a_1)}{a_1}. \end{aligned} \tag{61}$$
Alternatively, denote $g_0 = 1$ and, for $n\ge 1$ and $b_1,\ldots,b_n>0$,
$$g_n(b_1,\ldots,b_n) = f_n(b_1,\,b_1+b_2,\ldots,\,b_1+\cdots+b_n). \tag{62}$$
Then
$$g_n(b_1,\ldots,b_n) = \frac{g_{n-1}(b_1+b_2,b_3,\ldots,b_n) - e^{-b_1}g_{n-1}(b_2,b_3,\ldots,b_n)}{b_1}. \tag{63}$$
b1

From the above it is also clear (see also [22], Theorem 1, page 31, for the case of Brownian motion) that, in fact,
$$EX_t^n = t^nn!\,f_n(\mu_1t,\ldots,\mu_nt) = t^nn!\,g_n(\mu_1t,\,(\mu_2-\mu_1)t,\ldots,\,(\mu_n-\mu_{n-1})t) \tag{64}$$
is a linear combination of exponentials. An algorithm for computing the coefficients of this linear combination is equivalent to the above simple algorithm, which involves only a finite number of additions and multiplications.
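The recursion (61)–(63) is immediately programmable. The following sketch (ours, not from the paper; parameter values are illustrative) implements (63)–(64) for $x=0$ in the growth-collapse example $Z = qN_\lambda$, where (55) gives $\mu_i = \lambda(1-(1-q)^i)$; the $n=1$ output can be cross-checked against (32):

```python
# A sketch of the moment recursion (61)-(64) for x = 0, specialized to the
# growth-collapse example Z = q*N_lam: here nu_z = lam*delta_q, c_z = 0 and
# nu_z{1} = 0, so by (55) mu_i = lam*(1 - (1-q)**i).  Parameters illustrative.
import math

def g(b):
    # g_n(b_1,...,b_n) via recursion (63); g_0 = 1 (the empty integral)
    if not b:
        return 1.0
    b1, rest = b[0], b[1:]
    left = g((b1 + rest[0],) + rest[1:]) if rest else 1.0
    return (left - math.exp(-b1) * g(rest)) / b1

def moment(n, t, mu):
    # EX_t^n = t^n n! g_n(mu_1 t, (mu_2-mu_1) t, ..., (mu_n-mu_{n-1}) t) by (64)
    b = tuple((mu[i] - (mu[i - 1] if i else 0.0)) * t for i in range(n))
    return t ** n * math.factorial(n) * g(b)

q, lam, t = 0.4, 2.0, 5.0
mu = [lam * (1.0 - (1.0 - q) ** i) for i in range(1, 6)]   # mu_1, ..., mu_5
for n in range(1, 5):
    print(n, moment(n, t, mu))
print((1.0 - math.exp(-mu[0] * t)) / mu[0])  # n = 1 cross-check against (32)
```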
We emphasize that Theorem 3 holding for all $n\ge 1$, and the moment algorithm being valid for all $n\ge 1$, are special to the case where $Z$ is a nonzero subordinator. This is so because this is the only case in which $\eta_j(n)$ is finite, strictly positive for all $n\ge 1$, and strictly increasing.

REFERENCES
[1] Bertoin, J., Lindner, A. and Maller, R. (2008). On continuity properties of the
law of integrals of Lévy processes. In Séminaire de Probabilités XLI. Lecture
Notes in Math. 1934 137–159. Springer, Berlin. MR2483729
[2] Bertoin, J. and Yor, M. (2005). Exponential functionals of Lévy processes. Probab.
Surv. 2 191–212 (electronic). MR2178044
[3] Bertoin, J., Biane, P. and Yor, M. (2004). Poissonian exponential functionals,
q-series, q-integrals, and the moment problem for log-normal distributions. In
Seminar on Stochastic Analysis, Random Fields and Applications IV. Progress
in Probability 58 45–56. Birkhäuser, Basel. MR2096279
[4] Carmona, P., Petit, F. and Yor, M. (1997). On the distribution and asymptotic
results for exponential functionals of Lévy processes. In Exponential Functionals
and Principal Values Related to Brownian Motion 73–130. Rev. Math. Iberoam.,
Madrid. MR1648657
[5] Carmona, P., Petit, F. and Yor, M. (2001). Exponential functionals of Lévy
processes. In Lévy Processes: Theory and Applications (O. E. Barndorff-
Nielsen, T. Mikosch and S. I. Resnick, eds.) 41–55. Birkhäuser, Boston,
MA. MR1833691
[6] Erickson, K. B. and Maller, R. A. (2005). Generalised Ornstein–Uhlenbeck
processes and the convergence of Lévy integrals. In Séminaire de Probabilités
XXXVIII. Lecture Notes in Math. 1857 70–94. Springer, Berlin. MR2126967
[7] Guillemin, F., Robert, P. and Zwart, B. (2004). AIMD algorithms and exponen-
tial functionals. Ann. Appl. Probab. 14 90–117. MR2023017
[8] Jacod, J. (1979). Calcul Stochastique et Problèmes de Martingales. Lecture Notes in
Mathematics 714. Springer, Berlin. MR542115
[9] Jaschke, S. (2003). A note on the inhomogeneous linear stochastic differential equa-
tion. Insurance Math. Econom. 32 461–464. MR1994504
[10] Kella, O. (1998). An exhaustive Lévy storage process with intermittent output.
Comm. Statist. Stochastic Models 14 979–992. MR1631475
[11] Kella, O. (2009). On growth collapse processes with stationary structure and their
shot-noise counterparts. J. Appl. Probab. 46 363–371.

[12] Kella, O., Perry, D. and Stadje, W. (2003). A stochastic clearing model with a
Brownian and a compound Poisson component. Probab. Engrg. Inform. Sci. 17
1–22. MR1959382
[13] Kella, O. and Whitt, W. (1999). Linear stochastic fluid networks. J. Appl. Probab.
36 244–260. MR1699623
[14] Klüppelberg, C., Lindner, A. and Maller, R. (2004). A continuous-time
GARCH process driven by a Lévy process: Stationarity and second-order be-
haviour. J. Appl. Probab. 41 601–622. MR2074811
[15] Lachal, A. (2003). Some probability distributions in modeling DNA replication.
Ann. Appl. Probab. 13 1207–1230. MR1994049
[16] Löpker, A. H. and van Leeuwaarden, J. S. H. (2008). Transient moments of the
TCP window size process. J. Appl. Probab. 45 163–175. MR2409318
[17] Lindner, A. and Maller, R. (2005). Lévy integrals and the stationarity of gener-
alised Ornstein–Uhlenbeck processes. Stochastic Process. Appl. 115 1701–1722.
MR2165340
[18] Lindner, A. and Sato, K.-i. (2009). Continuity properties and infinite divisibility
of stationary distributions of some generalized Ornstein–Uhlenbeck processes.
Ann. Probab. 37 250–274. MR2489165
[19] Nilsen, T. and Paulsen, J. (1996). On the distribution of a randomly discounted
compound Poisson process. Stochastic Process. Appl. 61 305–310. MR1386179
[20] Protter, P. E. (2004). Stochastic Integration and Differential Equations, 2nd ed.
Stochastic Modelling and Applied Probability 21. Springer, Berlin. MR2020294
[21] Yoeurp, C. and Yor, M. (1977). Espace orthogonal à une semi-martingale. Unpublished manuscript.
[22] Yor, M. (2001). Exponential Functionals of Brownian Motion and Related Processes.
Springer, Berlin. MR1854494

Department of Statistics
Hebrew University of Jerusalem
Jerusalem 91905
Israel
E-mail: [email protected]

Laboratoire de Probabilités et Modèles aléatoires
Université Pierre et Marie Curie
Boîte courrier 188
75252 Paris Cedex 05
France
E-mail: [email protected]
