Characteristics of a stochastic process
Theoretical grounds
In this chapter we consider random functions whose phase space is either the real
line R or the complex plane C.
Definition 2.1. Assume that E|X(t)| < +∞, t ∈ T. The function {aX(t) = EX(t), t ∈ T}
is called the mean function (or simply the mean) of the random function X. The function
X̃(t) = X(t) − aX(t), t ∈ T is called the centered (or compensated) function corresponding
to the function X.
Recall that the covariance of two real-valued random variables ξ and η, both having
second moments, is defined as cov(ξ, η) = E(ξ − Eξ)(η − Eη) = Eξη − EξEη.
If ξ, η are complex-valued and E|ξ|² < +∞, E|η|² < +∞, then cov(ξ, η) =
E(ξ − Eξ)(η̄ − Eη̄) = Eξη̄ − Eξ Eη̄ (here the overbar denotes complex
conjugation).
Definition 2.2. Assume that E|X(t)|² < +∞, t ∈ T. The function {RX(t, s) =
cov(X(t), X(s)), t, s ∈ T} is called the covariance function (or simply the covariance)
of the random function X. If X, Y are two functions with E|X(t)|² < +∞,
E|Y(t)|² < +∞, t ∈ T, then {RX,Y(t, s) = cov(X(t), Y(s)), t, s ∈ T} is called the
mutual covariance function for the functions X, Y.
Definition 2.3. Let T be some set, and let the function K be defined on T × T and
take values in C. The function K is nonnegatively defined if

∑_{j,k=1}^m K(t_j, t_k) c_j c̄_k ≥ 0

for every m ∈ N, t₁, …, t_m ∈ T, and c₁, …, c_m ∈ C.
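Definition 2.3 can be probed numerically on a finite set of points: the matrix (K(t_j, t_k))_{j,k} must have no negative eigenvalues. A minimal Python sketch, with the grid and the two sample kernels chosen only for illustration (min(t, s) is the covariance of the Wiener process, cf. Problems 2.1 and 2.17, and sin(t + s) reappears in Problem 2.15):

```python
import numpy as np

def is_nonneg_defined(K, ts, tol=1e-10):
    """Check Definition 2.3 on a finite grid: the matrix
    (K(t_j, t_k)) must have no eigenvalue below -tol."""
    M = np.array([[K(t, s) for s in ts] for t in ts])
    # For a real symmetric matrix, eigvalsh returns real eigenvalues.
    return bool(np.linalg.eigvalsh(M).min() >= -tol)

ts = np.linspace(0.1, 2.0, 20)
# min(t, s) is the Wiener covariance, hence nonnegatively defined.
print(is_nonneg_defined(lambda t, s: min(t, s), ts))      # True
# sin(t + s) fails the definition (cf. Problem 2.15(b)).
print(is_nonneg_defined(lambda t, s: np.sin(t + s), ts))  # False
```

A grid test of this kind can only refute nonnegative definiteness; a nonnegative result on one grid is evidence, not a proof.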
Remark 2.1. Recall that the mean vector and covariance matrix for a random vector
ξ = (ξ₁, …, ξ_m) are aξ = (Eξ_j)_{j=1}^m and Rξ = (cov(ξ_j, ξ_k))_{j,k=1}^m, respectively. If the
conditions of Proposition 2.1 hold, then for any m ∈ N, t₁, …, t_m ∈ T the covariance
matrix of the vector (X(t₁), …, X(t_m)) is equal to K_{t₁…t_m} (see Definition 2.4) and the
mean vector is equal to a_{t₁…t_m} = (a(t_j))_{j=1}^m.
The mean and covariance functions of a random function do not determine the finite-
dimensional distributions of this function uniquely (e.g., see Problem 6.7). On the
other hand, the family of finite-dimensional characteristic functions of the random
function X corresponds uniquely to its finite-dimensional distributions, because
the characteristic function of a random vector determines the distribution of
this vector uniquely. The following theorem is a reformulation of the Kolmogorov
theorem (Theorem 1.1) in terms of characteristic functions.
2 Mean and covariance functions. Characteristic functions 13
Bibliography
[9], Chapter II; [24], Volume 1, Chapter IV, §1; [25], Chapter I, §1; [79], Chapter 16.
Problems
2.1. Find the covariance function for (a) the Wiener process; (b) the Poisson process.
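The answers to Problem 2.1 (the covariances t ∧ s for the Wiener process and λ(t ∧ s) for the Poisson process, cf. the hint to Problem 2.17) can be checked by Monte Carlo simulation. A sketch, with grid sizes, seed, and intensity chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 200, 1.0
dt = T / n_steps
grid = np.linspace(dt, T, n_steps)

# Wiener paths: cumulative sums of independent N(0, dt) increments.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
# Poisson paths with intensity lam: cumulative sums of Poisson(lam*dt) increments.
lam = 3.0
N = np.cumsum(rng.poisson(lam * dt, (n_paths, n_steps)), axis=1)

def emp_cov(paths, i, j):
    """Empirical covariance of the process values at grid points i and j."""
    x, y = paths[:, i], paths[:, j]
    return float(np.mean((x - x.mean()) * (y - y.mean())))

i, j = 99, 159                              # grid points t = 0.5, s = 0.8
t, s = grid[i], grid[j]
print(emp_cov(W, i, j), min(t, s))          # both close to 0.5
print(emp_cov(N, i, j), lam * min(t, s))    # both close to 1.5
```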
2.2. Let W be the Wiener process. Find the mean and covariance functions for the
process X(t) = W 2 (t),t ≥ 0.
2.3. Let W be the Wiener process. Find the covariance function for the process X if
(a) X(t) = W(1/t), t > 0;
(b) X(t) = W(e^t), t ∈ R;
(c) X(t) = W(1 − t²), t ∈ [−1, 1].
2.4. Let W be the Wiener process. Find the characteristic function for W (2)+2W (1).
2.5. Let N be the Poisson process with intensity λ . Find the characteristic function
for N(2) + 2N(1).
2.8. Let W be the Wiener process and f ∈ C([0, 1]). Find the characteristic function
for the random variable ∫₀¹ f(s)W(s) ds (the integral is defined for every ω in the
Riemann sense; see Problem 1.25). Prove that this random variable is normally
distributed.
2.9. Let W be the Wiener process, f ∈ C([0, 1]), X(t) = ∫₀ᵗ f(s)W(s) ds, t ∈ [0, 1].
Find RW,X.
2.10. Let N be the Poisson process, f ∈ C([0, 1]). Find the characteristic functions of the
random variables: (a) ∫₀¹ f(s)N(s) ds; (b) ∫₀¹ f(s) dN(s) ≡ ∑ f(s), where the summation
is taken over all jump points s of the process N on [0, 1].
2.12. Find all one-dimensional and m-dimensional characteristic functions: (a) for
the process introduced in Problem 1.2; (b) for the process introduced in Problem 1.4.
2.13. Find the covariance function of the process X(t) = ξ₁ f₁(t) + · · · + ξ_n f_n(t),
t ∈ R, where f₁, …, f_n are nonrandom functions and ξ₁, …, ξ_n are uncorrelated
random variables with variances σ₁², …, σ_n².
2.14. Let {ξ_n, n ≥ 1} be a sequence of independent square-integrable random variables.
Denote a_n = Eξ_n, σ_n² = Var ξ_n.
(1) Prove that the series ∑_n ξ_n converges in the mean-square sense if and only if the series
∑_n a_n and ∑_n σ_n² both converge.
(2) Let {f_n(t), t ∈ R}_{n∈N} be a sequence of nonrandom functions. Formulate
necessary and sufficient conditions for the series X(t) = ∑_n ξ_n f_n(t) to converge in
the mean square for every t ∈ R. Find the mean and covariance functions of the
process X.
2.15. Are the following functions nonnegatively defined: (a) K(t, s) = sin t sin s;
(b) K(t, s) = sin(t + s); (c) K(t, s) = t² + s² (t, s ∈ R)?
2.18. Let N be the Poisson process with intensity λ . Let X(t) = 0 when N(t) is odd
and X(t) = 1 when N(t) is even.
(1) Find the mean and covariance of the process X.
(2) Find RN,X .
2.19. Let W and N be the independent Wiener process and Poisson process with
intensity λ , respectively. Find the mean and covariance of the process X(t) =
W (N(t)). Is X a process with independent increments?
2.20. Find RX,W and RX,N for the process from the previous problem.
2.21. Let N1 , N2 be two independent Poisson processes with intensities λ1 , λ2 ,
respectively. Define X(t) = N₁(t)^{N₂(t)}, t ∈ R₊ if at least one of the values N₁(t),
N₂(t) is nonzero, and X(t) = 1 if N₁(t) = N₂(t) = 0. Find:
(a) The mean function of the process X
(b) The covariance function of the process X
2.22. Let X,Y be two independent and centered processes and c > 0 be a constant.
Prove that RX+Y = RX + RY , R√cX = cRX , RXY = RX RY .
2.23. Let K1 , K2 be two nonnegatively defined functions and c > 0. Prove that
the following functions are nonnegatively defined: (a) R = K1 + K2 ; (b) R = cK1 ;
(c) R = K1 · K2 .
2.24. Let K be a nonnegatively defined function on T × T.
(1) Prove that for every polynomial P(·) with nonnegative coefficients the function
R = P(K) is nonnegatively defined.
(2) Prove that the function R = eK is nonnegatively defined.
(3) Assuming additionally that K(t, t) < p⁻¹, t ∈ T, for some p ∈ (0, 1),
prove that the function R = (1 − pK)⁻¹ is nonnegatively defined.
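Items (1) and (2) of Problem 2.24 concern entrywise operations on a kernel, so they can be illustrated on matrices: if (K(t_j, t_k)) is nonnegatively defined, its entrywise exponential should again have no negative eigenvalues. A small Python experiment; the random Gram-matrix construction is just one convenient way to produce a nonnegatively defined matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
A = rng.normal(size=(m, m))
K = (A @ A.T) / m        # a Gram matrix, hence nonnegatively defined
E = np.exp(K)            # ENTRYWISE exponential: the function R = e^K of 2.24(2)

# Both matrices should have no eigenvalue below numerical zero.
print(np.linalg.eigvalsh(K).min(), np.linalg.eigvalsh(E).min())
```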
2.25. Give the probabilistic interpretation of items (1)–(3) of the previous problem;
that is, construct the stochastic process for which R is the covariance function.
2.26. Let K(t, s) = ts,t, s ∈ R+ . Prove that for an arbitrary polynomial P the function
R = P(K) is nonnegatively defined if and only if all coefficients of the polynomial P
are nonnegative. Compare with item (1) of Problem 2.24.
2.27. Which of the following functions are nonnegatively defined: (a) K(t, s) =
sin(t − s); (b) K(t, s) = cos(t − s); (c) K(t, s) = e^{−(t−s)}; (d) K(t, s) = e^{−|t−s|};
(e) K(t, s) = e^{−(t−s)²}; (f) K(t, s) = e^{−(t−s)⁴}?
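A quick numerical probe for Problem 2.27: evaluate each kernel on a finite grid and inspect the smallest eigenvalue. A clearly negative value certifies that the function is not nonnegatively defined, while values ≥ 0 up to rounding only suggest, not prove, nonnegative definiteness. A Python sketch with an arbitrary grid:

```python
import numpy as np

ts = np.linspace(-3.0, 3.0, 40)
D = ts[:, None] - ts[None, :]        # matrix of differences t_j - t_k

def min_eig(M):
    """Smallest eigenvalue of a real symmetric matrix."""
    return float(np.linalg.eigvalsh(M).min())

# cos(t - s) and exp(-(t-s)^2) are nonnegatively defined:
# their smallest eigenvalues are >= 0 up to rounding error.
print(min_eig(np.cos(D)), min_eig(np.exp(-D**2)))
# exp(-(t-s)^4) is not: a clearly negative eigenvalue shows up.
print(min_eig(np.exp(-D**4)))
```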
2.28. Let K ∈ C ([a, b] × [a, b]). Prove that K is nonnegatively defined if and only if
the integral operator AK : L2 ([a, b]) → L2 ([a, b]), defined by
b
AK f (t) = K(t, s) f (s) ds, f ∈ L2 ([a, b]),
a
is nonnegative.
2.29. Let AK be the operator from the previous problem. Check the following
statements.
(a) The set of eigenvalues of the operator AK is at most countable.
(b) The function K is nonnegatively defined if and only if every eigenvalue of the
operator AK is nonnegative.
2.30. Let K(s,t) = F(t − s), t, s ∈ R, where the function F is periodic with period
2π and F(x) = π − |x| for |x| ≤ π . Construct the Gaussian process with covariance K
of the form ∑n εn fn (t), where {εn , n ≥ 1} is a sequence of the independent normally
distributed random variables.
2.31. Solve the previous problem assuming that F has period 2 and F(x) =
(1 − x)2 , x ∈ [0, 1].
2.32. Denote by {τ_n, n ≥ 1} the jump moments of the Poisson process N(t), with τ₀ = 0.
Let {ε_n, n ≥ 0} be i.i.d. random variables with expectation a and variance
σ². Consider the stochastic processes X(t) = ∑_{k=0}^n ε_k, t ∈ [τ_n, τ_{n+1}), and Y(t) = ε_n,
t ∈ [τ_n, τ_{n+1}), n ≥ 0. Find the mean and covariance functions of the processes X, Y.
Give examples of models that lead to such processes.
2.33. A radiation-measuring instrument accumulates radiation at a rate of a roentgens
per hour, right up to the moment of failure. Let X(t) be the reading at time t ≥ 0.
Find the mean and covariance functions for the process X
if X(0) = 0, the failure moment has distribution function F, and after the failure the
instrument's reading is fixed (a) at zero; (b) at the last reading.
2.34. A device registers a Poisson flow of particles with intensity λ > 0. The energies
of different particles are independent random variables; each particle's energy has
expectation a and variance σ². Let X(t) be the reading of the
device at time t ≥ 0. Find the mean and covariance functions of the process
X if the device shows
(a) The total energy of the particles that have arrived during the time interval [0, t]
(b) The energy of the last particle
(c) The sum of the energies of the last K particles
2.35. A Poisson flow of claims with intensity λ > 0 is observed. Let X(t),t ∈ R be
the time between t and the moment of the last claim coming before t. Find the mean
and covariance functions for the process X.
Hints
2.1. See the hint to Problem 2.17.
2.4. Because the variables (W (1),W (2)) are jointly Gaussian, the variable W (2) +
2W (1) is normally distributed. Calculate its mean and variance and use the formula
for the characteristic function of the Gaussian distribution. Another method is pro-
posed in the following hint.
2.5. N(2) + 2N(1) = (N(2) − N(1)) + 3N(1). The values N(2) − N(1) and N(1)
are Poisson-distributed random variables, and thus their characteristic functions are
known. These values are independent, so the required characteristic function can be
obtained as the product.
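The hint above can be verified by simulation: sample N(1) and the independent increment N(2) − N(1), and compare the empirical characteristic function of N(2) + 2N(1) with the product of the two Poisson characteristic functions. A sketch in which the intensity, the point z, and the sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, z, n = 1.5, 0.7, 200000

# Simulate N(1) and the independent increment N(2) - N(1).
n1 = rng.poisson(lam * 1.0, n)
inc = rng.poisson(lam * 1.0, n)          # N(2) - N(1), independent of N(1)
samples = (n1 + inc) + 2 * n1            # N(2) + 2 N(1) = (N(2) - N(1)) + 3 N(1)

empirical = np.mean(np.exp(1j * z * samples))
# Product of the two known Poisson characteristic functions, as in the hint:
theoretical = np.exp(lam * (np.exp(1j * z) - 1)) * np.exp(lam * (np.exp(3j * z) - 1))
print(abs(empirical - theoretical))      # small (Monte Carlo error only)
```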
2.6. (a) If η ∼ N(0, 1), then Eη^{2k−1} = 0, Eη^{2k} = (2k − 1)!! = (2k − 1)(2k − 3) · · · 1
for k ∈ N. Prove and use this for the calculations.
(b) Use the explicit formula for the Gaussian density.
(c) Use the formula cos x = (e^{ix} + e^{−ix})/2 and Problem 2.4.
2.17. (1) Let s ≤ t; then values X(t) − X(s) and X(s) are independent which means
that they are uncorrelated. Therefore cov(X(t), X(s)) = cov(X(t) − X(s), X(s)) +
cov(X(s), X(s)) = cov(X(t ∧ s), X(t ∧ s)). The case t ≤ s can be treated similarly.
2.23. Items (a) and (b) can be proved using the definition. In item (c) you can use the
previous problem.
2.24. The proof of item (1) can be obtained directly from the previous problem. For
the proof of items (2) and (3) use item (1), the Taylor expansions of the functions
x ↦ e^x, x ↦ (1 − px)^{−1}, and the fact that the pointwise limit of a sequence of nonnega-
tively defined functions is also a nonnegatively defined function. (Prove this fact!)
Answers and Solutions
2.3. For arbitrary f : R₊ → R₊, the covariance function of the process X(t) =
W(f(t)), t ∈ R₊ is equal to RX(t, s) = RW(f(t), f(s)) = f(t) ∧ f(s).
2.8. Let In = n⁻¹ ∑_{k=1}^n f(k/n)W(k/n). Because the process W a.s. has continuous
trajectories and the function f is continuous, the Riemann integral sum In converges
to I = ∫₀¹ f(t)W(t) dt a.s. Therefore φ_{In}(z) → φ_I(z), n → +∞, z ∈ R. Hence,

Ee^{izIn} = Ee^{izn⁻¹∑_{k=1}^n f(k/n)W(k/n)} = Ee^{i∑_{k=1}^n zn⁻¹(∑_{j=k}^n f(j/n))(W(k/n) − W((k−1)/n))}

= ∏_{k=1}^n e^{−(2n)⁻¹(zn⁻¹∑_{j=k}^n f(j/n))²} → e^{−(z²/2)∫₀¹(∫_t¹ f(s) ds)² dt}, n → ∞.

Thus I is a Gaussian random variable with zero mean and variance ∫₀¹(∫_t¹ f(s) ds)² dt.
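The limiting law obtained in this solution can be checked by Monte Carlo for the simplest case f ≡ 1, where the variance ∫₀¹(∫_t¹ f(s) ds)² dt equals ∫₀¹(1 − t)² dt = 1/3. A Python sketch; the path and sample counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps = 50000, 500
dt = 1.0 / n_steps

# Riemann sums I_n = n^{-1} sum_k f(k/n) W(k/n) for f == 1.
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
I = W.mean(axis=1)               # approximates the integral of W over [0, 1]

# For f == 1 the limiting variance is 1/3; the sample mean should be near 0.
print(I.mean(), I.var())         # approximately 0 and 1/3
```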
2.9. RW,X(t, s) = ∫₀ˢ f(r)(t ∧ r) dr.
2.10. (a) φ(z) = exp{λ ∫₀¹ (e^{iz∫_t¹ f(s) ds} − 1) dt}.
(b) φ(z) = exp{λ ∫₀¹ (e^{izf(t)} − 1) dt}.
2.12. (a) Let 0 ≤ t₁ < · · · < t_m ≤ 1; then φ_{t₁,…,t_m}(z₁, …, z_m) = t₁e^{i(z₁+···+z_m)} +
(t₂ − t₁)e^{i(z₂+···+z_m)} + · · · + (t_m − t_{m−1})e^{iz_m} + (1 − t_m).
(b) Let 0 ≤ t₁ < · · · < t_m ≤ 1; then

φ_{t₁,…,t_m}(z₁, …, z_m) = (F(t₁)e^{i(z₁+···+z_m)n⁻¹} + (F(t₂) − F(t₁))e^{i(z₂+···+z_m)n⁻¹} + · · ·
+ (F(t_m) − F(t_{m−1}))e^{iz_m n⁻¹} + (1 − F(t_m)))ⁿ.
2.20. RX,W(t, s) = E[N(t) ∧ s] = e^{−λt}(∑_{k<s} k(λt)^k/k! + s ∑_{k≥s} (λt)^k/k!), RX,N ≡ 0.
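The series in this answer can be cross-checked numerically: compute E[N(t) ∧ s] both from the series and by direct simulation. A Python sketch with illustrative parameter values:

```python
import numpy as np
from math import exp

def e_min_ns(lam, t, s, kmax=200):
    """E[N(t) ∧ s] via the series: sum_{k<s} k P(N(t)=k) + s sum_{k>=s} P(N(t)=k)."""
    p = exp(-lam * t)            # P(N(t) = 0)
    total = 0.0
    for k in range(kmax):
        total += min(k, s) * p
        p *= lam * t / (k + 1)   # P(N(t) = k+1) from P(N(t) = k)
    return total

lam, t, s = 2.0, 1.5, 2.5        # illustrative values; s need not be an integer
rng = np.random.default_rng(4)
mc = float(np.minimum(rng.poisson(lam * t, 500000), s).mean())
print(e_min_ns(lam, t, s), mc)   # the two numbers agree up to Monte Carlo error
```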
2.27. Functions from the items (b), (d), (e) are nonnegatively defined; the others
are not.
2.28. Let K be nonnegatively defined and f ∈ C([a, b]). Then

(A_K f, f)_{L₂([a,b])} = ∫_a^b ∫_a^b K(t, s)f(t)f(s) ds dt = lim_{n→∞} ((b − a)/n)² ∑_{j,k=1}^n K(t_j^n, t_k^n)f(t_j^n)f(t_k^n) ≥ 0,

where t_j^n = a + j(b − a)/n, because every sum under the limit sign is nonnegative. Because C([a, b]) is a
dense subset of L₂([a, b]), the above inequality yields (A_K f, f)_{L₂([a,b])} ≥ 0,
f ∈ L₂([a, b]). On the other hand, let (A_K f, f)_{L₂([a,b])} ≥ 0 for every f ∈ L₂([a, b]),
and let points t₁, …, t_m and constants z₁, …, z_m be fixed. Choose m sequences of
continuous functions {f_n¹, n ≥ 1}, …, {f_n^m, n ≥ 1} such that, for an arbitrary function
φ ∈ C([a, b]), ∫_a^b φ(t)f_n^j(t) dt → φ(t_j), n → ∞, j = 1, …, m. Putting f_n = ∑_{j=1}^m z_j f_n^j,
we obtain ∑_{j,k=1}^m z_j z̄_k K(t_j, t_k) = lim_{n→∞} ∫_a^b ∫_a^b K(t, s)f̄_n(t)f_n(s) ds dt = lim_{n→∞}(A_K f_n, f_n) ≥ 0.
2.29. Statement (a) is a particular case of the theorem on the spectrum of a compact
operator. Statement (b) follows from the previous problem and the theorem on the
spectral decomposition of a compact self-adjoint operator.