SoICT-IT2022-04-Stochastic Processes - 2021jun09 v1.2

STOCHASTIC PROCESSES
IV. Stochastic processes
• Definitions
• Stationary processes
• Spectral density function
• Ergodicity of stochastic processes
• Stochastic Data Processing Systems
IV. Stochastic processes
4.1. Definitions
• Stochastic processes
• Let \xi denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, \xi) is assigned. The collection of such waveforms forms a stochastic process.
[Figure: an ensemble of sample waveforms X(t, \xi_1), X(t, \xi_2), \ldots, X(t, \xi_n) plotted against time t, with sampling instants t_1 and t_2 marked.]
IV. Stochastic processes
4.1. Definitions
• The set of \{\xi_k\} and the time index t can each be continuous or discrete (countably infinite or finite).
• For fixed \xi_i \in S (the set of all experimental outcomes), X(t, \xi_i) is a specific time function.
• For fixed t, X_1 = X(t_1, \xi_i) is a random variable. The ensemble of all such realizations X(t, \xi) over time represents the stochastic process X(t).
IV. Stochastic processes
4.1. Definitions
• Example
• X(t) = A\cos(\omega_0 t + \varphi),
• where \varphi is a random variable uniformly distributed on (0, 2\pi), represents a stochastic process.
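A minimal simulation sketch of this ensemble view (the amplitude, frequency, and sample counts below are assumed for illustration): each random outcome \varphi_k selects a whole waveform X(t, \varphi_k), while fixing t gives a random variable across the ensemble.

```python
# Sketch: an ensemble of realizations of X(t) = A cos(w0 t + phi).
import numpy as np

rng = np.random.default_rng(0)
A, w0 = 1.0, 2 * np.pi                      # assumed amplitude and angular frequency
t = np.linspace(0.0, 2.0, 500)              # time axis
phi = rng.uniform(0.0, 2 * np.pi, size=10)  # one random phase per realization

X = A * np.cos(w0 * t[None, :] + phi[:, None])  # rows are realizations X(t, phi_k)

print(X.shape)      # (10, 500): 10 waveforms over 500 time instants
print(X[:, 100])    # fixing t = t[100]: a random variable across the ensemble
```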
• Some stochastic processes
• Brownian motion,
• Stock market fluctuations,
• Various queuing systems.
IV. Stochastic processes
4.1. Definitions
• If X(t) is a stochastic process, then for fixed t, X(t)
represents a random variable.
• Its distribution function is given by
  F_X(x, t) = P\{X(t) \le x\}
• Notice that F_X(x, t) depends on t, since for a different t we obtain a different random variable.
  f_X(x, t) \triangleq \frac{\partial F_X(x, t)}{\partial x}
• The derivative of F_X(x, t) represents the first-order probability density function of the process X(t).
IV. Stochastic processes
4.1. Definitions
• For t = t1 and t = t2,
• X(t) represents two different random variables X1 = X(t1) and
X2 = X(t2) respectively.
• Their joint distribution is given by
  F_X(x_1, x_2, t_1, t_2) = P\{X(t_1) \le x_1,\; X(t_2) \le x_2\}
• Their joint density function,
  f_X(x_1, x_2, t_1, t_2) \triangleq \frac{\partial^2 F_X(x_1, x_2, t_1, t_2)}{\partial x_1\, \partial x_2},
• represents the second-order density function of the process X(t).
IV. Stochastic processes
4.1. Definitions
• Similarly, f_X(x_1, \ldots, x_n, t_1, \ldots, t_n) represents the nth-order density function of the process X(t).
• Complete specification of the stochastic process X(t) requires the knowledge of f_X(x_1, \ldots, x_n, t_1, \ldots, t_n) for all t_i, i = 1, \ldots, n, and for all n (an almost impossible task in reality).
IV. Stochastic processes
4.1. Definitions
• Characteristics of stochastic processes
• Mean of a stochastic process:
  \mu_X(t) \triangleq E\{X(t)\} = \int_{-\infty}^{+\infty} x\, f_X(x, t)\, dx
• \mu_X(t) represents the mean value of the process X(t). In general, the mean of a process can depend on the time index t.
• The autocorrelation function of a process X(t) is defined as
  R_{XX}(t_1, t_2) \triangleq E\{X(t_1)\, X^*(t_2)\} = \iint x_1 x_2^*\, f_X(x_1, x_2, t_1, t_2)\, dx_1\, dx_2
• It represents the interrelationship between the random variables X_1 = X(t_1) and X_2 = X(t_2) generated from the process X(t).
IV. Stochastic processes
4.1. Definitions
• Properties of the autocorrelation function
1. R_{XX}(t_1, t_2) = R_{XX}^*(t_2, t_1) = [E\{X(t_2)\, X^*(t_1)\}]^*
2. R_{XX}(t, t) = E\{|X(t)|^2\} \ge 0 (average instantaneous power)
3. R_{XX}(t_1, t_2) is a nonnegative-definite function, i.e., for any set of constants \{a_i\}_{i=1}^n,
  \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j^*\, R_{XX}(t_i, t_j) \ge 0.
• This follows by noticing that E\{|Y|^2\} \ge 0 for Y = \sum_{i=1}^{n} a_i X(t_i).
IV. Stochastic processes
4.1. Definitions
• The autocovariance function of the process X(t):
  C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1)\, \mu_X^*(t_2)
• Example
• Given
  z = \int_{-T}^{T} X(t)\, dt,
  E[|z|^2] = \int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1)\, X^*(t_2)\}\, dt_1\, dt_2 = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1, t_2)\, dt_1\, dt_2.
IV. Stochastic processes
4.1. Definitions
• Consider the process
  X(t) = a\cos(\omega_0 t + \varphi), \quad \varphi \sim U(0, 2\pi).
  \mu_X(t) = E\{X(t)\} = a\, E\{\cos(\omega_0 t + \varphi)\}
           = a\cos\omega_0 t\; E\{\cos\varphi\} - a\sin\omega_0 t\; E\{\sin\varphi\} = 0,
  since E\{\cos\varphi\} = \frac{1}{2\pi}\int_0^{2\pi} \cos\varphi\, d\varphi = 0 = E\{\sin\varphi\}.
  R_{XX}(t_1, t_2) = a^2\, E\{\cos(\omega_0 t_1 + \varphi)\cos(\omega_0 t_2 + \varphi)\}
                  = \frac{a^2}{2}\, E\{\cos\omega_0(t_1 - t_2) + \cos(\omega_0(t_1 + t_2) + 2\varphi)\}
                  = \frac{a^2}{2}\cos\omega_0(t_1 - t_2).
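A quick Monte Carlo sketch of the derivation above (the constants a and \omega_0 are assumed): the ensemble mean is approximately 0, and E\{X(t_1)X(t_2)\} matches (a^2/2)\cos\omega_0(t_1 - t_2).

```python
# Sketch: verify mean and autocorrelation of X(t) = a cos(w0 t + phi).
import numpy as np

rng = np.random.default_rng(1)
a, w0 = 2.0, 3.0                               # assumed constants
phi = rng.uniform(0.0, 2 * np.pi, 200_000)     # ensemble of phase outcomes
t1, t2 = 0.7, 0.2

x1 = a * np.cos(w0 * t1 + phi)
x2 = a * np.cos(w0 * t2 + phi)

print(x1.mean())                              # ~ 0 = mu_X(t1)
print(np.mean(x1 * x2))                       # empirical R_XX(t1, t2)
print(0.5 * a**2 * np.cos(w0 * (t1 - t2)))    # theory: (a^2/2) cos(w0 (t1 - t2))
```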
IV. Stochastic processes
4.2. Stationary processes
• Strict sense and wide sense stationarity
• Stationarity
• Stationary processes exhibit statistical properties that are invariant to shifts in the time index.
• For example, second-order stationarity implies that the statistical properties of the pairs {X(t_1), X(t_2)} and {X(t_1 + c), X(t_2 + c)} are the same for any c.
• Similarly, first-order stationarity implies that the statistical properties of X(t_i) and X(t_i + c) are the same for any c.
IV. Stochastic processes
4.2. Stationary processes
• Strict sense stationarity
• In strict terms, the statistical properties of a stochastic process are
governed by the joint probability density function.
• A process is nth-order Strict-Sense Stationary (S.S.S.) if
  f_X(x_1, x_2, \ldots, x_n, t_1, t_2, \ldots, t_n) \equiv f_X(x_1, x_2, \ldots, x_n, t_1 + c, t_2 + c, \ldots, t_n + c)
for any c, where the left side represents the joint density function of the random variables X_1 = X(t_1), \ldots, X_n = X(t_n) and the right side corresponds to the joint density function of the random variables X'_1 = X(t_1 + c), \ldots, X'_n = X(t_n + c).
• A process X(t) is said to be strict-sense stationary if the equation above holds for all t_i, i = 1, \ldots, n; n = 1, 2, \ldots; and any c.
IV. Stochastic processes
4.2. Stationary processes
• First-order strict-sense stationary process
• For any c,
  f_X(x, t) \equiv f_X(x, t + c).
• In particular, if c = -t, then
  f_X(x, t) = f_X(x).
• That means the first-order density of X(t) is independent of t. In that case,
  E[X(t)] = \int_{-\infty}^{+\infty} x\, f(x)\, dx = \mu, \text{ a constant.} \quad (1)
IV. Stochastic processes
4.2. Stationary processes
• Second-order strict-sense stationary process
• From the definition, we have
  f_X(x_1, x_2, t_1, t_2) \equiv f_X(x_1, x_2, t_1 + c, t_2 + c)
• for any c. If c is chosen so that c = -t_2, we get
  f_X(x_1, x_2, t_1, t_2) \equiv f_X(x_1, x_2, t_1 - t_2).
• The second-order density function of a strict-sense stationary process depends only on the difference of the time indices, \tau = t_1 - t_2.
IV. Stochastic processes
4.2. Stationary processes
• The autocorrelation function is given by
  R_{XX}(t_1, t_2) \triangleq E\{X(t_1)\, X^*(t_2)\}
                 = \iint x_1 x_2^*\, f_X(x_1, x_2, \tau = t_1 - t_2)\, dx_1\, dx_2
                 = R_{XX}(t_1 - t_2) \triangleq R_{XX}(\tau) = R_{XX}^*(-\tau). \quad (2)
• The autocorrelation function of a second-order strict-sense stationary process depends only on the difference of the time indices, \tau = t_1 - t_2.
IV. Stochastic processes
4.2. Stationary processes
• Wide-sense stationarity
• A process X(t) is said to be Wide-Sense Stationary (w.s.s.) if
  E\{X(t)\} = \mu \quad \text{and} \quad E\{X(t_1)\, X^*(t_2)\} = R_{XX}(t_1 - t_2).
• Since these equations follow from (1) and (2), strict-sense stationarity always implies wide-sense stationarity.
• In general, the converse is not true.
• Exception: the Gaussian process (normal process).
• This follows since, if X(t) is a Gaussian process, then by definition X_1 = X(t_1), \ldots, X_n = X(t_n) are jointly Gaussian random variables for any t_1, \ldots, t_n, whose joint characteristic function is given by
  \phi_X(\omega_1, \omega_2, \ldots, \omega_n) = \exp\!\Big(j \sum_{k=1}^{n} \mu(t_k)\, \omega_k - \tfrac{1}{2} \sum_{l,k} C_{XX}(t_l, t_k)\, \omega_l \omega_k\Big).
IV. Stochastic processes
4.2. Stationary processes
• Examples
• The process
  X(t) = a\cos(\omega_0 t + \varphi), \quad \varphi \sim U(0, 2\pi),
is wide-sense stationary but not strict-sense stationary.
• If the process X(t) has zero mean, then for
  z = \int_{-T}^{T} X(t)\, dt,
the variance \sigma_z^2 reduces to
  \sigma_z^2 = E\{|z|^2\} = \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\, dt_1\, dt_2.
• As t_1 and t_2 vary from -T to T, \tau = t_1 - t_2 varies from -2T to 2T.
• R_{XX}(\tau) is constant over each shaded strip \tau = t_1 - t_2 = \text{const} of the integration square.
[Figure: the (t_1, t_2) integration square [-T, T]^2, with a shaded diagonal strip of length 2T - |\tau| along the line \tau = t_1 - t_2.]
IV. Stochastic processes
4.3. Power spectrum
• Power spectrum
• For a deterministic signal x(t)
• The spectrum is well defined: if X(\omega) represents its Fourier transform,
  X(\omega) = \int_{-\infty}^{+\infty} x(t)\, e^{-j\omega t}\, dt,
then |X(\omega)|^2 represents its energy spectrum. This follows from Parseval's theorem, since the signal energy is given by
  \int_{-\infty}^{+\infty} x^2(t)\, dt = \frac{1}{2\pi} \int_{-\infty}^{+\infty} |X(\omega)|^2\, d\omega = E.
• Thus |X(\omega)|^2\, \Delta\omega represents the signal energy in the band (\omega, \omega + \Delta\omega).
[Figure: a signal X(t) versus t, and its energy spectrum |X(\omega)|^2 with the band (\omega, \omega + \Delta\omega) shaded.]
IV. Stochastic processes
4.3. Power spectrum
• For stochastic processes,
• a direct application of the Fourier transform generates a different random variable for every \omega.
• For a stochastic process, E\{|X(t)|^2\} represents the ensemble average power (instantaneous energy) at the instant t.
• The partial Fourier transform of a process X(t) based on (-T, T) is given by
  X_T(\omega) = \int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt.
• The power distribution associated with one realization based on (-T, T) is represented by
  \frac{|X_T(\omega)|^2}{2T} = \frac{1}{2T} \left|\int_{-T}^{T} X(t)\, e^{-j\omega t}\, dt\right|^2.
IV. Stochastic processes
4.3. Power spectrum
• The average power distribution based on (-T, T) is the ensemble average of the power distribution for each \omega:
  P_T(\omega) = E\left\{\frac{|X_T(\omega)|^2}{2T}\right\} = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} E\{X(t_1)\, X^*(t_2)\}\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2
             = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1, t_2)\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2.
• This represents the power distribution of X(t) based on (-T, T).
• If X(t) is assumed to be w.s.s., then
  R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2)
and we have
  P_T(\omega) = \frac{1}{2T} \int_{-T}^{T}\!\int_{-T}^{T} R_{XX}(t_1 - t_2)\, e^{-j\omega(t_1 - t_2)}\, dt_1\, dt_2.
IV. Stochastic processes
4.3. Power spectrum
• Letting \tau = t_1 - t_2, we obtain
  P_T(\omega) = \frac{1}{2T} \int_{-2T}^{2T} R_{XX}(\tau)\, e^{-j\omega\tau}\, (2T - |\tau|)\, d\tau
             = \int_{-2T}^{2T} R_{XX}(\tau)\, e^{-j\omega\tau} \left(1 - \frac{|\tau|}{2T}\right) d\tau \ge 0.
• This is the power distribution of the w.s.s. process X(t) based on (-T, T).
• Letting T \to \infty, we obtain
  S_{XX}(\omega) = \lim_{T \to \infty} P_T(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau \ge 0.
• S_{XX}(\omega) is the power spectral density of the w.s.s. process X(t); see the numeric sketch below.
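A numeric sketch of P_T(\omega) \to S_{XX}(\omega). The autocorrelation R_{XX}(\tau) = e^{-|\tau|} is an assumed example (not from the text); its Fourier transform is S_{XX}(\omega) = 2/(1 + \omega^2), so P_T(1) should approach 1.0 as T grows.

```python
# Sketch: windowed power distribution P_T(w) converging to the PSD.
import numpy as np

def P_T(w, T, n=200_001):
    tau, dtau = np.linspace(-2 * T, 2 * T, n, retstep=True)
    R = np.exp(-np.abs(tau))                  # assumed w.s.s. autocorrelation
    window = 1.0 - np.abs(tau) / (2 * T)      # triangular (1 - |tau|/2T) factor
    # Integrand is real since R is even; Riemann sum approximates the integral.
    return np.sum(R * np.cos(w * tau) * window) * dtau

for T in (1, 10, 100):
    print(T, P_T(1.0, T))    # approaches S_XX(1) = 2 / (1 + 1^2) = 1.0
```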
IV. Stochastic processes
4.3. Power spectrum
• Khinchin-Wiener theorem
• The autocorrelation function and the power spectrum of a w.s.s. process form a Fourier transform pair:
  R_{XX}(\tau) \;\xleftrightarrow{\;\mathcal{F}\;}\; S_{XX}(\omega) \ge 0,
  R_{XX}(\tau) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{XX}(\omega)\, e^{j\omega\tau}\, d\omega, \qquad S_{XX}(\omega) = \lim_{T \to \infty} P_T(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau \ge 0.
• For \tau = 0,
  \frac{1}{2\pi} \int_{-\infty}^{+\infty} S_{XX}(\omega)\, d\omega = R_{XX}(0) = E\{|X(t)|^2\} = P, \text{ the total power.}
IV. Stochastic processes
4.3. Power spectrum
• The area under S_{XX}(\omega) represents the total power of the process X(t); hence S_{XX}(\omega) truly represents the power spectrum.
• S_{XX}(\omega)\, \Delta\omega represents the power in the band (\omega, \omega + \Delta\omega).
[Figure: S_{XX}(\omega) versus \omega, with the band (\omega, \omega + \Delta\omega) shaded.]
• The nonnegative-definiteness property of the autocorrelation function translates into the "nonnegative" property of its Fourier transform (the power spectrum):
  R_{XX}(\tau) \text{ nonnegative-definite} \;\Leftrightarrow\; S_{XX}(\omega) \ge 0.
IV. Stochastic processes
4.3. Power spectrum
• If X(t) is a real w.s.s. process, then R_{XX}(\tau) = R_{XX}(-\tau), so that
  S_{XX}(\omega) = \int_{-\infty}^{+\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau
               = \int_{-\infty}^{+\infty} R_{XX}(\tau)\cos\omega\tau\, d\tau
               = 2\int_{0}^{\infty} R_{XX}(\tau)\cos\omega\tau\, d\tau = S_{XX}(-\omega) \ge 0.
• The power spectrum is an even function (in addition to being real and nonnegative).
IV. Stochastic processes
4.4. Ergodicity
• Time averages
• Given a wide-sense stationary process X(t), define the time averages:
• Mean:
  \nu = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, dt
• Autocorrelation:
  \mathcal{R}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t + \tau)\, x(t)\, dt
• These limits are random variables.
• Problems: does \nu = E\{x(t)\}, and does \mathcal{R}(\tau) = E\{x(t + \tau)\, x(t)\}?
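A sketch (parameters assumed) comparing the time average over a single realization of X(t) = a\cos(\omega_0 t + \varphi) with the ensemble mean E\{X(t)\} = 0; for this process the two agree.

```python
# Sketch: time average of one realization vs. the ensemble mean.
import numpy as np

rng = np.random.default_rng(2)
a, w0, T = 1.0, 2 * np.pi, 500.0
t = np.linspace(-T, T, 1_000_000)
phi = rng.uniform(0.0, 2 * np.pi)    # one outcome: one realization x(t)

x = a * np.cos(w0 * t + phi)
print(x.mean())    # ~ (1/2T) * integral of x(t) over (-T, T) ~ 0 = E{X(t)}
```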
IV. Stochastic processes
4.4. Ergodicity
• Ergodicity
• X(t) is ergodic in the most general form if all of its statistics can be determined from a single realization X(t, \xi) of the process.
• X(t) is ergodic if time averages equal ensemble averages (expected values).
IV. Stochastic processes
4.4. Ergodicity
• Ergodicity of the mean
• Time average of a given process X(t):
  \nu_T = \frac{1}{2T} \int_{-T}^{T} x(t)\, dt
• \nu_T is a random variable.
• Since E\{X(t)\} is a constant, we have
  E\{\nu_T\} = E\{X(t)\} = \eta.
• The variance of \nu_T is given by
  \sigma_{\nu_T}^2 = \frac{1}{T} \int_{0}^{2T} \left(1 - \frac{\tau}{2T}\right) [R(\tau) - \eta^2]\, d\tau,
• where R(\tau) is the autocorrelation of X(t).
• If this variance tends to zero as T \to \infty, then \nu_T tends to its expected value.
IV. Stochastic processes
4.4. Ergodicity
• Ergodic theorem for E\{X(t)\}:
  \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, dt = E\{x(t)\} = \eta
iff
  \lim_{T \to \infty} \frac{1}{T} \int_{0}^{2T} \left(1 - \frac{\tau}{2T}\right) [R(\tau) - \eta^2]\, d\tau = 0.
IV. Stochastic processes
4.4. Ergodicity
• Ergodicity of the autocorrelation
• We form the average
  \mathcal{R}_T(\lambda) = \frac{1}{2T} \int_{-T}^{T} x(t + \lambda)\, x(t)\, dt.
• We have
  E\{\mathcal{R}_T(\lambda)\} = \frac{1}{2T} \int_{-T}^{T} E\{x(t + \lambda)\, x(t)\}\, dt = R(\lambda).
• For a given \lambda, \mathcal{R}_T(\lambda) is the time average of the process
  \Phi(t) = x(t + \lambda)\, x(t).
• The mean of the process \Phi(t) is given by
  E\{\Phi(t)\} = E\{x(t + \lambda)\, x(t)\} = R(\lambda).
IV. Stochastic processes
4.4. Ergodicity
• Its autocorrelation is
  R_{\Phi\Phi}(\tau) = E\{x(t + \lambda + \tau)\, x(t + \tau)\, x(t + \lambda)\, x(t)\}.
• Hence, applying the ergodic theorem for the mean to \Phi(t), we have:
• Ergodic theorem for the autocorrelation
• For a given \lambda,
  \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t + \lambda)\, x(t)\, dt = E\{x(t + \lambda)\, x(t)\} = R(\lambda)
iff
  \lim_{T \to \infty} \frac{1}{T} \int_{0}^{2T} \left(1 - \frac{\tau}{2T}\right) [R_{\Phi\Phi}(\tau) - R^2(\lambda)]\, d\tau = 0.
IV. Stochastic processes
4.4. Ergodicity
• Ergodicity of the distribution function
• We determine the first-order distribution F(x) = P\{X(t) \le x\} of a given process X(t) by a suitable time average.
• Consider the process
  y(t) = \begin{cases} 1 & \text{if } x(t) \le x \\ 0 & \text{if } x(t) > x \end{cases}
• Its mean is given by
  E\{y(t)\} = 1 \cdot P\{x(t) \le x\} = F(x).
• Its autocorrelation is
  E\{y(t + \tau)\, y(t)\} = 1 \cdot P\{x(t + \tau) \le x,\; x(t) \le x\} = F(x, x; \tau),
• where F(x, x; \tau) is the second-order distribution of x(t).

IV. Stochastic processes
4.4. Ergodicity
• We form the time average
  y_T = \frac{1}{2T} \int_{-T}^{T} y(t)\, dt.
• We have
  E\{y_T\} = E\{y(t)\} = F(x).
• The variance of y_T is given by the same expression as for the mean,
  \sigma_{y_T}^2 = \frac{1}{T} \int_{0}^{2T} \left(1 - \frac{\tau}{2T}\right) [F(x, x; \tau) - F^2(x)]\, d\tau,
• where R(\tau) and \eta are replaced by F(x, x; \tau) and F(x).
IV. Stochastic processes
4.4. Ergodicity
• Ergodic theorem for the distribution function
• For a given x,
  \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y(t)\, dt = F(x)
iff
  \lim_{T \to \infty} \frac{1}{T} \int_{0}^{2T} \left(1 - \frac{\tau}{2T}\right) [F(x, x; \tau) - F^2(x)]\, d\tau = 0.
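A sketch of estimating F(x) by the time average of the indicator process y(t). The stand-in samples are i.i.d. standard Gaussian, an assumption chosen because an i.i.d. sequence is trivially stationary and ergodic.

```python
# Sketch: the time average of y(t) = 1{x(t) <= x} approximates F(x).
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
x_level = 0.5
x_samples = rng.standard_normal(1_000_000)   # stand-in samples of a realization x(t)

y = (x_samples <= x_level)                   # indicator process y(t)
print(y.mean())                              # time average ~ F(x_level)
print(0.5 * (1 + erf(x_level / sqrt(2))))    # Gaussian CDF: F(0.5) ~ 0.6915
```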

Stochastic Data Processing System Models
• Classification of Linear Systems
• LTI systems
  • Systems without memory
  • Systems with memory
  • Time-invariant systems
• Autocorrelation of output processes
• Power Spectrum and Khinchin-Wiener Theorem
LTI systems with stochastic input
• Deterministic system transformation
  X(t) \;\longrightarrow\; T[\cdot] \;\longrightarrow\; Y(t)
[Figure: an input realization X(t, \xi_i) mapped by the system to an output realization Y(t, \xi_i).]
• Y(t) = T[X(t)].
• Problem formulation. Goal: to study the output process statistics in terms of the input process statistics and the system function.
• Is the output process stochastic?
• What are the statistics of the output process?
• What is the relation between the input and output processes?
LTI systems with stochastic input
• Scope: response of deterministic systems to the action of stochastic processes.
• Deterministic systems divide into:
  • memoryless systems: Y(t) = g[X(t)];
  • systems with memory:
    • time-varying systems: Y(t) = L[X(t)];
    • linear time-invariant (LTI) systems:
      Y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau.
[Diagram: X(t) \to h(t) \to Y(t).]
4.1 Systems with stochastic inputs
• Memoryless systems
• Response of memoryless systems: Y(t) = g[X(t)].
• First-order distribution and density, F_Y(y; t) and f_Y(y; t), in relation to F_X(x; t) and f_X(x; t):
  F_Y(y) = P(Y(\xi) \le y) = P(g(X(\xi)) \le y) = P(X(\xi) \in g^{-1}((-\infty, y])).
• Expectation:
  E\{Y(t)\} = \int_{-\infty}^{+\infty} g(x)\, f_X(x; t)\, dx.
4.1 Systems with stochastic inputs
• Second-order statistics:
• The second-order density f_Y(y_1, y_2; t_1, t_2) of Y(t) can be determined in terms of f_X(x_1, x_2; t_1, t_2).
• Autocorrelation E\{Y(t_1)\, Y(t_2)\}:
  E\{Y(t_1)\, Y(t_2)\} = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} g(x_1)\, g(x_2)\, f_X(x_1, x_2; t_1, t_2)\, dx_1\, dx_2.
• nth-order statistics: the nth-order density function of Y(t),
  f_Y(y_1, y_2, \ldots, y_n; t_1, t_2, \ldots, t_n),
follows from Y(t_1) = g(X(t_1)), \ldots, Y(t_n) = g(X(t_n)).

4.1 Systems with stochastic inputs
• Memoryless systems: the output Y(t) depends only on the present value of the input X(t), i.e.,
  Y(t) = g\{X(t)\}.
• Strict-sense stationary input → memoryless system → strict-sense stationary output.
• Wide-sense stationary input → memoryless system → output need not be stationary in any sense.
• Stationary Gaussian input X(t) with autocorrelation R_{XX}(\tau) → memoryless system → stationary output Y(t), but not Gaussian, with
  R_{XY}(\tau) = \eta\, R_{XX}(\tau).
4.1 LTI systems under the action of stochastic processes
• Examining the stationarity of the output signal:
• When the input signal is strict-sense stationary, the output is also strict-sense stationary;
• if the input X(t) is Nth-order stationary, the response Y(t) is also Nth-order stationary;
• if X(t) is stationary over an interval, then Y(t) is stationary over that interval as well;
• if X(t) is wide-sense stationary, Y(t) may fail to be stationary in any sense.
• Example:
• a square-law detector: Y(t) = X^2(t).
4.1 Systems with stochastic inputs
• Consider the memoryless system (a hard limiter)
  g(x) = \begin{cases} 1, & x \ge 0 \\ -1, & x < 0 \end{cases}
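A numeric sketch of this hard limiter driven by zero-mean Gaussian inputs. The comparison value uses the standard Gaussian "arcsine law" identity E\{\mathrm{sgn}(X_1)\,\mathrm{sgn}(X_2)\} = (2/\pi)\arcsin\rho, which is not stated in the text; the input correlation \rho is assumed.

```python
# Sketch: output correlation of the hard limiter g(x) = sign(x).
import numpy as np

rng = np.random.default_rng(4)
rho = 0.6                                       # assumed input correlation coefficient
cov = [[1.0, rho], [rho, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, 500_000).T

y1, y2 = np.sign(x1), np.sign(x2)               # y = g(x), the hard limiter
print(np.mean(y1 * y2))                         # empirical output correlation
print(2 / np.pi * np.arcsin(rho))               # arcsine-law prediction ~ 0.4097
```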

4.1 Systems with stochastic inputs
• Linear systems: L[\cdot] represents a linear system if
  L\{a_1 X(t_1) + a_2 X(t_2)\} = a_1 L\{X(t_1)\} + a_2 L\{X(t_2)\}.
• Let Y(t) = L\{X(t)\} represent the output of a linear system.
• Time-invariant system: L[\cdot] represents a time-invariant system if
  Y(t) = L\{X(t)\} \;\Rightarrow\; L\{X(t - t_0)\} = Y(t - t_0),
i.e., a shift in the input results in the same shift in the output.
• If L[\cdot] satisfies both equations, then it corresponds to a linear time-invariant (LTI) system.
• LTI systems can be uniquely represented in terms of their impulse response h(t), the output of the system to a delta function input:
  \delta(t) \;\longrightarrow\; \text{LTI} \;\longrightarrow\; h(t).
4.1 Systems with stochastic inputs
• For an arbitrary input X(t), the output of an LTI system is
  Y(t) = \int_{-\infty}^{+\infty} h(t - \tau)\, X(\tau)\, d\tau = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau.
• This follows by expressing X(t) as
  X(t) = \int_{-\infty}^{+\infty} X(\tau)\, \delta(t - \tau)\, d\tau,
so that
  Y(t) = L\{X(t)\} = L\Big\{\int_{-\infty}^{+\infty} X(\tau)\, \delta(t - \tau)\, d\tau\Big\}
       = \int_{-\infty}^{+\infty} L\{X(\tau)\, \delta(t - \tau)\}\, d\tau \quad \text{(by linearity)}
       = \int_{-\infty}^{+\infty} X(\tau)\, L\{\delta(t - \tau)\}\, d\tau \quad \text{(by time-invariance)}
       = \int_{-\infty}^{+\infty} X(\tau)\, h(t - \tau)\, d\tau = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau.
4.1 Systems with stochastic inputs
• Output statistics
• Mean of the output:
  \mu_Y(t) = E\{Y(t)\} = \int_{-\infty}^{+\infty} E\{X(\tau)\}\, h(t - \tau)\, d\tau = \int_{-\infty}^{+\infty} \mu_X(\tau)\, h(t - \tau)\, d\tau = \mu_X(t) * h(t).
• Cross-correlation of input and output:
  R_{XY}(t_1, t_2) = E\{X(t_1)\, Y^*(t_2)\}
                  = E\Big\{X(t_1) \int_{-\infty}^{+\infty} X^*(t_2 - \alpha)\, h^*(\alpha)\, d\alpha\Big\}
                  = \int_{-\infty}^{+\infty} E\{X(t_1)\, X^*(t_2 - \alpha)\}\, h^*(\alpha)\, d\alpha
                  = \int_{-\infty}^{+\infty} R_{XX}(t_1, t_2 - \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(t_1, t_2) * h^*(t_2).
4.1 Systems with stochastic inputs
• Autocorrelation of the output:
  R_{YY}(t_1, t_2) = E\{Y(t_1)\, Y^*(t_2)\}
                  = E\Big\{\int_{-\infty}^{+\infty} X(t_1 - \beta)\, h(\beta)\, d\beta\; Y^*(t_2)\Big\}
                  = \int_{-\infty}^{+\infty} E\{X(t_1 - \beta)\, Y^*(t_2)\}\, h(\beta)\, d\beta
                  = \int_{-\infty}^{+\infty} R_{XY}(t_1 - \beta, t_2)\, h(\beta)\, d\beta = R_{XY}(t_1, t_2) * h(t_1),
so that
  R_{YY}(t_1, t_2) = R_{XX}(t_1, t_2) * h^*(t_2) * h(t_1).
[Block diagrams: (a) \mu_X(t) \to h(t) \to \mu_Y(t); (b) R_{XX}(t_1, t_2) \to h^*(t_2) \to R_{XY}(t_1, t_2) \to h(t_1) \to R_{YY}(t_1, t_2).]
4.1 Systems with stochastic inputs
• If X(t) is wide-sense stationary, \mu_X(t) = \mu_X and R_{XX}(t_1, t_2) = R_{XX}(t_1 - t_2), so
  \mu_Y(t) = \mu_X \int_{-\infty}^{+\infty} h(\tau)\, d\tau = \mu_X c, \text{ a constant.}
• X(t) and Y(t) are jointly w.s.s.:
  R_{XY}(t_1, t_2) = \int_{-\infty}^{+\infty} R_{XX}(t_1 - t_2 + \alpha)\, h^*(\alpha)\, d\alpha = R_{XX}(\tau) * h^*(-\tau) = R_{XY}(\tau), \quad \tau = t_1 - t_2.
• Y(t) is w.s.s.:
  R_{YY}(t_1, t_2) = \int_{-\infty}^{+\infty} R_{XY}(t_1 - \beta - t_2)\, h(\beta)\, d\beta = R_{XY}(\tau) * h(\tau) = R_{YY}(\tau), \quad \tau = t_1 - t_2,
so that
  R_{YY}(\tau) = R_{XX}(\tau) * h^*(-\tau) * h(\tau).

4.1 Systems with stochastic inputs
• The output process is also wide-sense stationary. This gives rise to the following representations:
(a) wide-sense stationary process X(t) → LTI system h(t) → wide-sense stationary process Y(t);
(b) strict-sense stationary process X(t) → LTI system h(t) → strict-sense stationary process Y(t) (see Text for proof);
(c) Gaussian process X(t) (also stationary) → linear system → Gaussian process Y(t) (also stationary).
4.1 Systems with stochastic inputs
• Theorem:
• For linear systems:
E{L[X(t)]} = L[E{X(t)}]
4.1 Systems with stochastic inputs
• White noise
• W(t) is said to be a white noise process if
  R_{WW}(t_1, t_2) = q(t_1)\, \delta(t_1 - t_2),
• i.e., E[W(t_1)\, W^*(t_2)] = 0 unless t_1 = t_2.
• W(t) is said to be wide-sense stationary (w.s.s.) white noise if E[W(t)] = constant and
  R_{WW}(t_1, t_2) = q\, \delta(t_1 - t_2) = q\, \delta(\tau).
• If W(t) is also a Gaussian process (white Gaussian noise), then all of its samples are independent random variables.
4.1 Systems with stochastic inputs
• White noise W(t) → LTI system h(t) → colored noise N(t) = h(t) * W(t).
• For a w.s.s. white noise input W(t), we have
  E[N(t)] = \mu_W \int_{-\infty}^{+\infty} h(\tau)\, d\tau, \text{ a constant,}
  R_{NN}(\tau) = q\, \delta(\tau) * h^*(-\tau) * h(\tau) = q\, h^*(-\tau) * h(\tau) = q\, \rho(\tau),
• where
  \rho(\tau) = h(\tau) * h^*(-\tau) = \int_{-\infty}^{+\infty} h(\alpha)\, h^*(\alpha + \tau)\, d\alpha.
• Thus the output of a white noise process through an LTI system represents a (colored) noise process.
• White noise need not be Gaussian.
• “White” and “Gaussian” are two different concepts.
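A discrete-time sketch of coloring white noise with an LTI filter. The FIR impulse response h and noise level q are assumed for illustration; the output lags should match R_{NN}(n) = q \sum_m h(m)\, h(m + n), the discrete form of q\,\rho(\tau).

```python
# Sketch: white noise through an FIR filter becomes colored noise.
import numpy as np

rng = np.random.default_rng(5)
q = 1.0
w = rng.normal(0.0, np.sqrt(q), 200_000)     # white noise: R_WW(n) = q delta(n)
h = np.array([1.0, 0.5, 0.25])               # assumed FIR impulse response

n_out = np.convolve(w, h)[: len(w)]          # colored noise N = h * W

r = np.correlate(h, h, mode="full")          # deterministic rho(n) = h(n) * h(-n)
for lag in range(3):
    empirical = np.mean(n_out[lag:] * n_out[: len(n_out) - lag])
    print(lag, round(empirical, 3), round(q * r[len(h) - 1 + lag], 3))
```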
4.2 Discrete-Time Stochastic Processes
• Definition
• A discrete-time stochastic process X_n = X(nT) is a sequence of random variables.
• The mean, autocorrelation, and autocovariance functions of a discrete-time process are:
  \mu_n = E\{X(nT)\},
  R(n_1, n_2) = E\{X(n_1 T)\, X^*(n_2 T)\},
  C(n_1, n_2) = R(n_1, n_2) - \mu_{n_1}\, \mu_{n_2}^*.
4.2 Discrete-Time Stochastic Processes
• Stationarity
• The strict-sense and wide-sense stationarity definitions apply here also.
• X(nT) is wide-sense stationary if
  E\{X(nT)\} = \mu, \text{ a constant, and}
  E[X\{(k + n)T\}\, X^*\{kT\}] = R(n) = r_n = r_{-n}^*,
• i.e., R(n_1, n_2) = R(n_1 - n_2) = R^*(n_2 - n_1).
• The positive-definite property of the autocorrelation sequence gives the Hermitian Toeplitz matrices
  T_n = \begin{pmatrix} r_0 & r_1 & r_2 & \cdots & r_n \\ r_1^* & r_0 & r_1 & \cdots & r_{n-1} \\ \vdots & & \ddots & & \vdots \\ r_n^* & r_{n-1}^* & \cdots & r_1^* & r_0 \end{pmatrix} = T_n^*, \quad n = 0, 1, 2, \ldots
4.2 Discrete-Time Stochastic Processes
• Output of LTI systems
• X(nT) is w.s.s., the LTI system has impulse response h(nT), and Y(nT) is the response.
• The input-output correlations are:
  R_{XY}(n) = R_{XX}(n) * h^*(-n),
  R_{YY}(n) = R_{XY}(n) * h(n),
  R_{YY}(n) = R_{XX}(n) * h^*(-n) * h(n).
• Thus wide-sense stationarity from input to output is preserved for discrete-time systems also.
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Consider an input-output representation of an LTI system:
  X(n) = -\sum_{k=1}^{p} a_k X(n - k) + \sum_{k=0}^{q} b_k W(n - k),
• where X(n) may be considered as the output of a system \{h(n)\} driven by the input W(n): W(n) \to h(n) \to X(n).
• The transfer function:
  X(z) \sum_{k=0}^{p} a_k z^{-k} = W(z) \sum_{k=0}^{q} b_k z^{-k}, \quad a_0 \equiv 1,
  H(z) = \sum_{k=0}^{\infty} h(k)\, z^{-k} = \frac{X(z)}{W(z)} = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}}{1 + a_1 z^{-1} + a_2 z^{-2} + \cdots + a_p z^{-p}} = \frac{B(z)}{A(z)},
  X(n) = \sum_{k=0}^{\infty} h(n - k)\, W(k).
4.3. Auto-Regressive Moving Average (ARMA) Processes
• The transfer function has p poles and q zeros.
• The output undergoes regression over p of its previous values, and at the same time a moving average of the input over (q + 1) values, based on W(n), W(n-1), \ldots, W(n-q), is added to it: an ARMA(p, q) process.
• The input \{W(n)\} represents a sequence of uncorrelated random variables of zero mean \mu = 0 and constant variance \sigma_W^2, so that R_{WW}(n) = \sigma_W^2\, \delta(n).
• If \{W(n)\} is normally distributed, then the output \{X(n)\} also represents a strict-sense stationary normal process.
• If q = 0, we get an AR(p) (all-pole) process.
• If p = 0, we get an MA(q) process.
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Autoregressive process AR(1)
• An AR(1) process has the form
  X(n) = a X(n - 1) + W(n).
• The corresponding system transfer function:
  H(z) = \frac{1}{1 - a z^{-1}} = \sum_{n=0}^{\infty} a^n z^{-n},
• a stable system if |a| < 1.
• System impulse response: h(n) = a^n, \; |a| < 1.
• Output autocorrelation:
  R_{XX}(n) = \sigma_W^2\, \delta(n) * \{a^{-n}\} * \{a^n\} = \sigma_W^2 \sum_{k=0}^{\infty} a^{|n|+k}\, a^k = \sigma_W^2\, \frac{a^{|n|}}{1 - a^2}.
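A simulation sketch of the AR(1) recursion (the coefficient a and the noise variance are assumed), checking the closed form R_{XX}(n) = \sigma_W^2\, a^{|n|} / (1 - a^2).

```python
# Sketch: empirical vs. theoretical autocorrelation of an AR(1) process.
import numpy as np

rng = np.random.default_rng(6)
a, sigma_w, N = 0.8, 1.0, 500_000
w = rng.normal(0.0, sigma_w, N)

x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]          # X(n) = a X(n-1) + W(n)

for lag in range(4):
    emp = np.mean(x[lag:] * x[: N - lag])
    theory = sigma_w**2 * a**lag / (1 - a**2)
    print(lag, round(emp, 3), round(theory, 3))
```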
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Normalized autocorrelation of the output:
  \rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a^{|n|}, \quad |n| \ge 0.
• The case where the process X(t) is combined with noise V(t):
• V(t) is an uncorrelated random sequence with zero mean and variance \sigma_V^2;
• Y(t) = X(t) + V(t),
  R_{YY}(n) = R_{XX}(n) + R_{VV}(n) = R_{XX}(n) + \sigma_V^2\, \delta(n) = \sigma_W^2\, \frac{a^{|n|}}{1 - a^2} + \sigma_V^2\, \delta(n),
  \rho_Y(n) = \frac{R_{YY}(n)}{R_{YY}(0)} = \begin{cases} 1, & n = 0 \\ c\, a^{|n|}, & n = \pm 1, \pm 2, \ldots \end{cases} \qquad c = \frac{\sigma_W^2}{\sigma_W^2 + \sigma_V^2 (1 - a^2)} < 1.
4.3. Auto-Regressive Moving Average (ARMA) Processes
• The effect of superimposing an error sequence on an AR(1) model:
  \rho_X(0) = \rho_Y(0) = 1, \qquad \rho_X(k) > \rho_Y(k), \; k \ne 0.
[Figure: \rho_X(k) and \rho_Y(k) versus lag k; the noisy sequence has uniformly smaller correlations for k \ne 0.]
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Autoregressive process AR(2):
  X(n) = a_1 X(n - 1) + a_2 X(n - 2) + W(n).
• Transfer function:
  H(z) = \sum_{n=0}^{\infty} h(n)\, z^{-n} = \frac{1}{1 - a_1 z^{-1} - a_2 z^{-2}} = \frac{b_1}{1 - \lambda_1 z^{-1}} + \frac{b_2}{1 - \lambda_2 z^{-1}},
  h(0) = 1, \quad h(1) = a_1, \quad h(n) = a_1 h(n - 1) + a_2 h(n - 2), \; n \ge 2.
• With \lambda_1, \lambda_2 the poles of the system, the impulse response is
  h(n) = b_1 \lambda_1^n + b_2 \lambda_2^n, \quad n \ge 0,
• and the relations
  b_1 + b_2 = 1, \quad b_1 \lambda_1 + b_2 \lambda_2 = a_1, \quad \lambda_1 + \lambda_2 = a_1, \quad \lambda_1 \lambda_2 = -a_2.
• H(z) is stable provided |\lambda_1| < 1, |\lambda_2| < 1.
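A sketch checking the closed form h(n) = b_1 \lambda_1^n + b_2 \lambda_2^n against the AR(2) recursion; the coefficients a_1, a_2 are assumed, chosen so both poles lie inside |z| < 1.

```python
# Sketch: AR(2) impulse response, recursion vs. closed form via the poles.
import numpy as np

a1, a2 = 0.5, 0.3
l1, l2 = np.roots([1.0, -a1, -a2])            # poles: z^2 - a1 z - a2 = 0
b1, b2 = l1 / (l1 - l2), -l2 / (l1 - l2)      # solves b1 + b2 = 1, b1 l1 + b2 l2 = a1

h = [1.0, a1]                                 # h(0) = 1, h(1) = a1
for n in range(2, 8):
    h.append(a1 * h[-1] + a2 * h[-2])         # h(n) = a1 h(n-1) + a2 h(n-2)

for n in range(8):
    print(n, round(h[n], 6), round(float(b1 * l1**n + b2 * l2**n), 6))
```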
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Autocorrelation of the output process
• Autocorrelation (for n \ge 1, the term E\{W(n + m)\, X^*(m)\} vanishes, since W(n + m) is uncorrelated with past outputs):
  R_{XX}(n) = E\{X(n + m)\, X^*(m)\}
           = E\{[a_1 X(n + m - 1) + a_2 X(n + m - 2)]\, X^*(m)\} + E\{W(n + m)\, X^*(m)\}
           = a_1 R_{XX}(n - 1) + a_2 R_{XX}(n - 2).
• Correlation coefficient:
  \rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = a_1 \rho_X(n - 1) + a_2 \rho_X(n - 2).
4.3. Auto-Regressive Moving Average (ARMA) Processes
• ARIMA processes (Autoregressive Integrated Moving Average)
• Used for modeling time series.
• Consider ARMA processes (W is white noise):
  X(n) = -\sum_{k=1}^{p} a_k X(n - k) + \sum_{k=0}^{q} b_k W(n - k),
• or, in operator form,
  \Big(1 + \sum_{k=1}^{p} a_k D^k\Big) X(n) = \Big(b_0 + \sum_{k=1}^{q} b_k D^k\Big) W(n),
• where D^i is the delay (lag) operator.
• An ARIMA(p', d, q) process is a particular case of an ARMA(p, q) process whose autoregressive polynomial has d unit roots, with p' = p - d.
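A sketch of the ARIMA idea: d-fold differencing with (1 - D) removes the unit roots, leaving a stationary ARMA series. The random walk below is an assumed toy example with exactly one unit root (d = 1).

```python
# Sketch: differencing an integrated series recovers a stationary one.
import numpy as np

rng = np.random.default_rng(7)
w = rng.standard_normal(10_000)
x = np.cumsum(w)                 # X(n) = X(n-1) + W(n): an ARIMA(0, 1, 0) series

dx = np.diff(x)                  # apply (1 - D): recovers the white noise W(n)
print(round(np.var(x[:1000]), 1), round(np.var(x[-1000:]), 1))  # variance drifts: X nonstationary
print(round(dx.mean(), 3), round(dx.var(), 3))                  # differenced series: ~ N(0, 1) white noise
```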
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Autocorrelation:
  R_{XX}(n) = R_{WW}(n) * h^*(-n) * h(n) = \sigma_W^2\, h^*(-n) * h(n) = \sigma_W^2 \sum_{k=0}^{\infty} h^*(n + k)\, h(k)
           = \sigma_W^2 \left( \frac{|b_1|^2 (\lambda_1^*)^n}{1 - |\lambda_1|^2} + \frac{b_1^* b_2 (\lambda_1^*)^n}{1 - \lambda_1^* \lambda_2} + \frac{b_1 b_2^* (\lambda_2^*)^n}{1 - \lambda_1 \lambda_2^*} + \frac{|b_2|^2 (\lambda_2^*)^n}{1 - |\lambda_2|^2} \right),
  \rho_X(n) = \frac{R_{XX}(n)}{R_{XX}(0)} = c_1 (\lambda_1^*)^n + c_2 (\lambda_2^*)^n.
4.3. Auto-Regressive Moving Average (ARMA) Processes
• Moving average process MA(q):
  X(n) = \sum_{k=0}^{q} b_k W(n - k),
• q zeros, no poles;
• a non-regressive system;
• impulse response and transfer function:
  H(z) = \sum_{k=0}^{q} h(k)\, z^{-k} = b_0 + b_1 z^{-1} + b_2 z^{-2} + \cdots + b_q z^{-q}.
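A sketch of an MA(2) process with assumed coefficients. Since the impulse response is finite, the output autocorrelation vanishes for lags |n| > q.

```python
# Sketch: MA(2) process and its finite-support autocorrelation.
import numpy as np

rng = np.random.default_rng(8)
b = np.array([1.0, 0.6, 0.3])                   # b0, b1, b2 (q = 2)
w = rng.standard_normal(500_000)
x = np.convolve(w, b)[: len(w)]                 # X(n) = sum_k b_k W(n - k)

for lag in range(5):
    emp = np.mean(x[lag:] * x[: len(x) - lag])
    print(lag, round(emp, 3))                   # lags 0..2: sum_k b_k b_{k+lag}; lags > 2: ~ 0
```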
Thank you for your attention!
