STATIONARY PROCESSES
We shall describe the fundamental concepts in the theory of time series models: stochastic processes, mean and covariance functions, stationary processes, and autocorrelation functions.
The sequence of random variables {Y_t : t = 0, ±1, ±2, ...} is a stochastic process, and the observed series y_1, y_2, ..., y_n is regarded as a realisation of the underlying process.
(i) The mean function, μ_t, is given by
μ_t = E(Y_t) (4.2.1)
(ii) The autocovariance function, γ_{t,s}, is given by
γ_{t,s} = Cov(Y_t, Y_s) = E[(Y_t − μ_t)(Y_s − μ_s)] = E(Y_t Y_s) − μ_t μ_s (4.2.2)
(iii) The autocorrelation function, ρ_{t,s}, is given by
ρ_{t,s} = Corr(Y_t, Y_s) (4.2.3)
where
Corr(Y_t, Y_s) = Cov(Y_t, Y_s) / √[Var(Y_t)Var(Y_s)] = γ_{t,s} / √(γ_{t,t} γ_{s,s}) (4.2.4)
(iv) Recall that both covariance and correlation are measures of the linear dependence between random variables, but the unitless correlation is somewhat easier to interpret.
(v) The following properties follow from known results and our definitions:
γ_{t,t} = Var(Y_t), ρ_{t,t} = 1
γ_{t,s} = γ_{s,t}, ρ_{t,s} = ρ_{s,t} (4.2.5)
|γ_{t,s}| ≤ √(γ_{t,t} γ_{s,s}), |ρ_{t,s}| ≤ 1
(vi) If c_1, c_2, ..., c_m and d_1, d_2, ..., d_n are constants, and t_1, t_2, ..., t_m and s_1, s_2, ..., s_n are time points, then
Cov(Σ_{i=1}^m c_i Y_{t_i}, Σ_{j=1}^n d_j Y_{s_j}) = Σ_{i=1}^m Σ_{j=1}^n c_i d_j Cov(Y_{t_i}, Y_{s_j}) (4.2.6)
(vii) Var(Σ_{i=1}^m c_i Y_{t_i}) = Σ_{i=1}^m c_i² Var(Y_{t_i}) + 2 Σ_{i=2}^m Σ_{j=1}^{i−1} c_i c_j Cov(Y_{t_i}, Y_{t_j}) (4.2.7)
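As a computational aside (a minimal numpy sketch, not from the notes; the helper names are my own), the sample versions of these functions replace expectations with averages over one realisation y_1, ..., y_n:

```python
import numpy as np

def sample_autocov(y, k):
    """Sample autocovariance at lag k >= 0 (estimate of gamma_k)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ybar = y.mean()                      # estimate of the mean function
    # Divide by n (not n - k) so the estimates form a positive semi-definite sequence
    return np.sum((y[k:] - ybar) * (y[:n - k] - ybar)) / n

def sample_acf(y, max_lag):
    """Sample autocorrelation rho_k = gamma_k / gamma_0 for k = 0, ..., max_lag."""
    g0 = sample_autocov(y, 0)
    return np.array([sample_autocov(y, k) / g0 for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
print(sample_acf(rng.normal(size=500), 5))   # white noise: rho_0 = 1, rest near 0
```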
4.3 Stationarity
(i) A process {Y_t} is said to be strongly stationary or strictly stationary if the joint distribution of Y_{t_1}, Y_{t_2}, ..., Y_{t_n} is the same as the joint distribution of Y_{t_1−k}, Y_{t_2−k}, ..., Y_{t_n−k} for all sets of time points t_1, t_2, ..., t_n and all lags k.
(ii) If a process is strictly stationary and has finite variance, then the covariance function must depend only on the time lag.
(iii) The distribution of Y_t is the same as that of Y_{t−k} for all t and k; that is, the Y's are marginally identically distributed.
(iv) Moreover,
γ_{t,s} = Cov(Y_t, Y_s)
= Cov(Y_{t−s}, Y_0)   (shifting both times by s)
= Cov(Y_0, Y_{s−t})   (shifting both times by t)
= Cov(Y_0, Y_{|t−s|})
= γ_{0,|t−s|}
The covariance between Y_t and Y_s depends on time only through the time difference t − s, and not on the actual times t and s.
For example, Cov(Y_2, Y_4) = Cov(Y_0, Y_2) = Cov(Y_8, Y_{10}): all covariances at lag 2 are equal. More generally, Cov(Y_t, Y_{t−k}) = Cov(Y_0, Y_k) for every t.
(v) Thus, for a stationary process:
γ_k = Cov(Y_t, Y_{t−k})
ρ_k = Corr(Y_t, Y_{t−k}) = γ_k / γ_0
with the properties
γ_0 = Var(Y_t), ρ_0 = 1
γ_k = γ_{−k}, ρ_k = ρ_{−k} (4.2.8)
|γ_k| ≤ γ_0, |ρ_k| ≤ 1
A process {Y_t} is said to be weakly (second-order) stationary if:
(i) E(Y_t) = μ ∀t, and
(ii) γ_{t,t−k} = γ_{0,k} for all t and lags k.
A series {Y_t} is said to be lagged if its time axis is shifted: shifting by k lags gives the series {Y_{t−k}}.
Illustration 1: Random Walk
Let e_1, e_2, ... be a sequence of independent, identically distributed random variables, each with zero mean and variance σ_e², and let Y_t = e_1 + e_2 + ... + e_t be the resulting random walk. Find the mean, variance, and autocorrelation functions of {Y_t}.
Illustration 1 solution
(a) μ_t = E(Y_t) = E(e_1 + e_2 + ... + e_t)
= E(e_1) + E(e_2) + ... + E(e_t)
= 0 ∀t
(b) Var(Y_t) = Var(e_1 + e_2 + ... + e_t)
= Var(e_1) + Var(e_2) + ... + Var(e_t)   (by independence of the e_t's)
= σ_e² + σ_e² + ... + σ_e²
= tσ_e²; the process variance increases linearly with time.
Implication: since Var(Y_t) depends on t, the random walk is not a stationary process.
(c) Suppose 1 ≤ t ≤ s. Then
γ_{t,s} = Cov(Y_t, Y_s) = Cov(e_1 + e_2 + ... + e_t, e_1 + e_2 + ... + e_s)
= Σ_{i=1}^t Σ_{j=1}^s Cov(e_i, e_j)
Since the e's are independent, Cov(e_i, e_j) = 0 for i ≠ j and Cov(e_i, e_i) = σ_e², so the double sum contains exactly min(t, s) = t nonzero terms:
γ_{t,s} = tσ_e²
Since γ_{t,s} = γ_{s,t}, this gives the autocovariance function for all time points t and s:
γ_{t,s} = tσ_e² for 1 ≤ t ≤ s
The autocorrelation function is
ρ_{t,s} = Corr(Y_t, Y_s) = tσ_e² / √(tσ_e² · sσ_e²) = √(t/s), 1 ≤ t ≤ s
For example:
ρ_{1,2} = √(1/2) = 0.707, ρ_{24,25} = √(24/25) = 0.98
Notice that values of Y at neighbouring time points are strongly correlated, while values of Y at distant time points are less correlated.
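A short simulation (a sketch under the same assumptions: iid zero-mean steps with σ_e = 1; the seed and sizes are arbitrary) makes both facts visible: Var(Y_t) grows like tσ_e², and Corr(Y_t, Y_s) ≈ √(t/s):

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_paths, sigma_e = 25, 20_000, 1.0

# Each row is one realisation of the random walk Y_t = e_1 + ... + e_t
e = rng.normal(0.0, sigma_e, size=(n_paths, n_steps))
Y = e.cumsum(axis=1)

# Var(Y_t) should be close to t * sigma_e^2
print(Y[:, 0].var(), Y[:, 23].var())            # ~1 and ~24

# Corr(Y_1, Y_2) ~ sqrt(1/2) = 0.707 and Corr(Y_24, Y_25) ~ sqrt(24/25) = 0.98
print(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1])
print(np.corrcoef(Y[:, 23], Y[:, 24])[0, 1])
```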
Illustration 2: Moving Average
Suppose Y_t = (e_t + e_{t−1})/2, where the e's are as in Illustration 1. Find the mean, variance, and autocovariance functions of {Y_t}.
Illustration 2 solution
(a) E(Y_t) = E[(e_t + e_{t−1})/2] = [E(e_t) + E(e_{t−1})]/2 = 0
(b) Var(Y_t) = Var[(e_t + e_{t−1})/2] = Var(e_t + e_{t−1})/4
= [Var(e_t) + Var(e_{t−1})]/4   (by independence)
= (σ_e² + σ_e²)/4 = σ_e²/2
(c) Cov(Y_t, Y_{t−1}) = Cov[(e_t + e_{t−1})/2, (e_{t−1} + e_{t−2})/2]
= (1/4){Cov(e_t, e_{t−1}) + Cov(e_t, e_{t−2}) + Cov(e_{t−1}, e_{t−1}) + Cov(e_{t−1}, e_{t−2})}
= Cov(e_{t−1}, e_{t−1})/4   (all other covariances are zero, by independence)
= σ_e²/4
i.e. γ_{t,t−1} = σ_e²/4 ∀t
Similarly,
Cov(Y_t, Y_{t−2}) = Cov[(e_t + e_{t−1})/2, (e_{t−2} + e_{t−3})/2]
= 0, since the e's are independent,
and likewise Cov(Y_t, Y_{t−k}) = 0 for all k > 1. Collecting the results, the autocovariance function is
γ_{t,s} = σ_e²/2   if |t − s| = 0
        = σ_e²/4   if |t − s| = 1
        = 0        if |t − s| > 1
For the autocorrelation function (acf):
ρ_{t,s} = 1     if |t − s| = 0
        = 1/2   if |t − s| = 1
        = 0     if |t − s| > 1
Note that ρ_{2,1} = ρ_{3,2} = ρ_{4,3} = ρ_{9,8} = 1/2. That is, values of Y precisely one time unit apart have exactly the same correlation, no matter where they occur in time. Similarly, ρ_{3,1} = ρ_{4,2} = ρ_{t,t−2} = 0.
In general, ρ_{t,t−k} is the same for all values of t.
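These values are easy to confirm by simulation; the sketch below (my own, assuming standard normal e's) estimates the lag-1 and lag-2 autocorrelations of Y_t = (e_t + e_{t−1})/2:

```python
import numpy as np

rng = np.random.default_rng(2)
e = rng.normal(size=100_001)
y = (e[1:] + e[:-1]) / 2          # Y_t = (e_t + e_{t-1}) / 2

def acf(y, k):
    """Sample autocorrelation at lag k."""
    y = y - y.mean()
    return np.dot(y[k:], y[:len(y) - k]) / np.dot(y, y)

print(acf(y, 1))                  # ~ 0.5
print(acf(y, 2))                  # ~ 0.0
```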
Example 4.3.1
Suppose that the observed process is given by
Y_t = Z_t + θZ_{t−1}, where {Z_t} ~ iid N(0, σ_z²).
Find the autocorrelation function (acf) for Y_t when θ = 3 and when θ = 1/3.
E(Y_t) = E(Z_t + θZ_{t−1}) = 0
Var(Y_t) = Var(Z_t) + θ²Var(Z_{t−1})   (by independence)
= σ_z² + θ²σ_z² = σ_z²(1 + θ²)
For k = 1:
Cov(Y_t, Y_{t−1}) = Cov(Z_t + θZ_{t−1}, Z_{t−1} + θZ_{t−2})
= θ Cov(Z_{t−1}, Z_{t−1})   (by independence, all other covariances are zero)
= θσ_z²
For k > 1:
Cov(Y_t, Y_{t−k}) = Cov(Z_t + θZ_{t−1}, Z_{t−k} + θZ_{t−k−1}) = 0
Hence
Corr(Y_t, Y_{t−k}) = Cov(Y_t, Y_{t−k}) / √[Var(Y_t)Var(Y_{t−k})]
so that
ρ_k = 1                                 k = 0
    = θσ_z² / [σ_z²(1 + θ²)] = θ/(1 + θ²)   k = 1
    = 0                                 k > 1
For θ = 3: ρ_1 = 3/(1 + 3²) = 3/10
For θ = 1/3: ρ_1 = (1/3)/(1 + (1/3)²) = (1/3)/(10/9) = 3/10
So the autocorrelation functions are identical.
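The underlying identity is that ρ_1 = θ/(1 + θ²) is unchanged when θ is replaced by 1/θ; a two-line check (the helper rho1 is hypothetical):

```python
def rho1(theta):
    """Lag-1 autocorrelation of the MA(1) process Y_t = Z_t + theta * Z_{t-1}."""
    return theta / (1 + theta**2)

print(rho1(3), rho1(1 / 3))   # both 0.3: theta and 1/theta give the same acf
```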
Example 4.3.2
Suppose Y_t = 5 + 2t + Z_t, where {Z_t} is a zero-mean stationary series with autocovariance function γ_k.
(a) Find the mean function for {Y_t}
(b) Find the autocovariance function for {Y_t}
(c) Is {Y_t} stationary? Why or why not?
Solution 4.3.2
(a) E(Y_t) = E(5 + 2t + Z_t) = 5 + 2t + E(Z_t) = 5 + 2t, a function of time
(b) Cov(Y_t, Y_{t−k}) = Cov(5 + 2t + Z_t, 5 + 2(t − k) + Z_{t−k})
= Cov(Z_t, Z_{t−k})
= γ_k
(c) Since the process mean varies with time, Y_t is not stationary.
Example 4.3.3
Suppose {Y_t} is a process whose first difference W_t = ∇Y_t = Y_t − Y_{t−1} is stationary (part (i)), with E(W_t) = 0 and autocovariance function γ_k. Let U_t = ∇²Y_t be the second difference. Find the mean and autocovariance function for U_t, and show that {U_t} is stationary.
Solution 4.3.3
U_t = ∇²Y_t = ∇(∇Y_t) = ∇(Y_t − Y_{t−1})
= (Y_t − Y_{t−1}) − (Y_{t−1} − Y_{t−2})
= Y_t − 2Y_{t−1} + Y_{t−2}
Equivalently, U_t = W_t − W_{t−1}.
Now, U_t is the first difference of the process {∇Y_t}, and by part (i), {∇Y_t} is stationary. So U_t is the difference of a stationary process and is itself stationary, as the following calculation confirms.
E(U_t) = E(W_t) − E(W_{t−1}) = 0 − 0 = 0
Cov(U_t, U_{t−k}) = Cov(W_t − W_{t−1}, W_{t−k} − W_{t−k−1})
= Cov(W_t, W_{t−k}) − Cov(W_t, W_{t−k−1}) − Cov(W_{t−1}, W_{t−k}) + Cov(W_{t−1}, W_{t−k−1})
= γ_k − γ_{k+1} − γ_{k−1} + γ_k
= 2γ_k − γ_{k+1} − γ_{k−1},
which depends only on the lag k. Hence {U_t} is stationary.
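In practice this is why second differencing is useful: it removes a quadratic trend while leaving stationary noise. A minimal numpy sketch (the trend coefficients are my own example):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(200)
y = 0.05 * t**2 + 2 * t + rng.normal(size=200)   # quadratic trend + noise

w = np.diff(y)          # W_t = (first difference): still trending, linear in t
u = np.diff(y, n=2)     # U_t = (second difference): trend removed

print(w[:5].mean(), w[-5:].mean())     # means drift over time
print(u[:50].mean(), u[-50:].mean())   # means roughly constant (near 0.1)
```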
Example 4.3.4
Suppose Y_t = αY_{t−1} + Z_t, where |α| < 1 and {Z_t} ~ iid(0, σ_z²), with Z_t independent of Y_{t−1}, Y_{t−2}, .... Find the mean and autocovariance function for {Y_t}.
Solution 4.3.4
E(Y_t) = E(αY_{t−1} + Z_t) = αE(Y_{t−1})
so μ = αμ ⇒ (1 − α)μ = 0 ⇒ E(Y_t) = 0 ∀t
Multiply the series by Y_t and take expectations:
E(Y_t²) = αE(Y_t Y_{t−1}) + E(Y_t Z_t)
γ_0 = αγ_1 + E(Y_t Z_t)
Now:
E(Y_t Z_t) = E[(αY_{t−1} + Z_t)Z_t]
= αE(Y_{t−1}Z_t) + E(Z_t²)
= 0 + σ_z²
Hence, γ_0 = αγ_1 + σ_z² (1)
Equivalently, by repeated substitution,
Y_t = Z_t + αY_{t−1}
= Z_t + α(Z_{t−1} + αY_{t−2}) = Z_t + αZ_{t−1} + α²Y_{t−2}
= Z_t + αZ_{t−1} + α²Z_{t−2} + α³Y_{t−3}
= ... = Σ_{i=0}^∞ α^i Z_{t−i},
which converges because |α| < 1. Since the Z's are iid(0, σ_z²),
Var(Y_t) = Σ_{i=0}^∞ α^{2i} Var(Z_{t−i}) = σ_z²/(1 − α²).
Also, Y_{t+k} = α^k Y_t + (terms involving only Z_{t+1}, ..., Z_{t+k}), and these Z's are independent of Y_t, so
Cov(Y_t, Y_{t+k}) = α^k Var(Y_t) = α^k γ_0, and hence ρ_k = α^k.
Multiply the series by Y_{t−1}:
Y_t Y_{t−1} = αY²_{t−1} + Y_{t−1}Z_t
Take expectations:
E(Y_t Y_{t−1}) = αE(Y²_{t−1}) + E(Y_{t−1}Z_t)
γ_1 = αγ_0 + 0 (2)
In general, γ_k = αγ_{k−1} for k ≥ 2:
when k = 2: γ_2 = αγ_1 = α²γ_0
k = 3: γ_3 = αγ_2 = α³γ_0
So γ_k = α^k γ_0 and ρ_k = α^k, k ≥ 1
Alternative: substituting (2) into (1),
γ_0 = α²γ_0 + σ_z²
⇒ γ_0 = σ_z²/(1 − α²)
∴ γ_k = αγ_{k−1} = α^k σ_z²/(1 − α²), k ≥ 1
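A simulation check of ρ_k = α^k (a sketch; α, the burn-in length, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n, burn = 0.7, 50_000, 500

# Simulate Y_t = alpha * Y_{t-1} + Z_t, with a burn-in so the start is forgotten
z = rng.normal(size=n + burn)
y = np.zeros(n + burn)
for t in range(1, n + burn):
    y[t] = alpha * y[t - 1] + z[t]
y = y[burn:]

yc = y - y.mean()
for k in range(1, 5):
    rho_k = np.dot(yc[k:], yc[:-k]) / np.dot(yc, yc)
    print(k, round(rho_k, 3), round(alpha**k, 3))   # sample acf vs theoretical alpha^k
```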
4.4 General Linear Process
Let {Y_t} denote the observed time series, and let {Z_t} represent an unobserved white noise series, that is, a sequence of independent, identically distributed, zero-mean random variables. The assumption of independence could be replaced by the weaker assumption that the {Z_t} are uncorrelated random variables, but we will not pursue that slight generality.
A general linear process {Y_t} is one that can be represented as a weighted linear combination of present and past white noise terms:
Y_t = Z_t + ψ_1 Z_{t−1} + ψ_2 Z_{t−2} + ... (4.4.1)
Since this is an infinite series, certain conditions must be placed on the ψ-weights for it to be meaningful mathematically. We assume that
Σ_{i=1}^∞ ψ_i² < ∞ (4.4.2)
An important special case takes ψ_j = φ^j, where −1 < φ < 1 (that is, |φ| < 1).
So: Y_t = Z_t + φZ_{t−1} + φ²Z_{t−2} + ...
Then
Var(Y_t) = Var(Z_t + φZ_{t−1} + φ²Z_{t−2} + ...)
= Var(Z_t) + φ²Var(Z_{t−1}) + φ⁴Var(Z_{t−2}) + ...   (by independence)
= σ_e²(1 + φ² + φ⁴ + ...)
= σ_e²/(1 − φ²)   (by summing a geometric series)
Furthermore,
Cov(Y_t, Y_{t−1}) = Cov(Z_t + φZ_{t−1} + φ²Z_{t−2} + ..., Z_{t−1} + φZ_{t−2} + φ²Z_{t−3} + ...)
= Cov(φZ_{t−1}, Z_{t−1}) + Cov(φ²Z_{t−2}, φZ_{t−2}) + ...
= φσ_e² + φ³σ_e² + φ⁵σ_e² + ...
= φσ_e²(1 + φ² + φ⁴ + ...)
= φσ_e²/(1 − φ²)   (by summing a geometric series)
Also
Corr(Y_t, Y_{t−1}) = [φσ_e²/(1 − φ²)] / [σ_e²/(1 − φ²)] = φ
A parallel calculation at lag 2 gives
Cov(Y_t, Y_{t−2}) = Cov(Z_t + φZ_{t−1} + φ²Z_{t−2} + ..., Z_{t−2} + φZ_{t−3} + ...)
= φ²σ_e²(1 + φ² + φ⁴ + ...) = φ²σ_e²/(1 − φ²), so Corr(Y_t, Y_{t−2}) = φ².
Likewise, Cov(Y_t, Y_{t−k}) = φ^k σ_e²/(1 − φ²) and Corr(Y_t, Y_{t−k}) = φ^k.
The process is weakly stationary: the autocovariance structure depends only on the time lag.
For a general linear process, Y_t = Z_t + ψ_1 Z_{t−1} + ψ_2 Z_{t−2} + ..., similar calculations yield the following results:
E(Y_t) = 0
γ_k = Cov(Y_t, Y_{t−k}) = σ_e² Σ_{i=0}^∞ ψ_i ψ_{i+k}, k ≥ 0, with ψ_0 = 1
More generally, for a series {X_t} we may consider
Y_t = Σ_{j=−∞}^∞ a_j X_{t−j} = A(B)X_t
where A(B) = Σ_{j=−∞}^∞ a_j B^j is called a linear filter (B denotes the backshift operator, BX_t = X_{t−1}).
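The general formula γ_k = σ_e² Σ ψ_i ψ_{i+k} can be evaluated numerically once the ψ-weights are truncated at a large order. The sketch below (my own; truncation at 200 terms is arbitrary) does this for ψ_j = φ^j and recovers the closed form φ^k σ_e²/(1 − φ²):

```python
import numpy as np

def gamma(psi, k, sigma2=1.0):
    """gamma_k = sigma^2 * sum_i psi_i * psi_{i+k}; psi includes psi_0 = 1."""
    return sigma2 * np.dot(psi[:len(psi) - k], psi[k:])

phi = 0.6
psi = phi ** np.arange(200)              # psi_j = phi^j, truncated at j = 199

for k in range(4):
    closed_form = phi**k / (1 - phi**2)
    print(k, gamma(psi, k), closed_form)  # the two columns agree
```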
4.5 Summary
(i) Stationarity
A stationary series:
- is roughly horizontal
- has constant variance
- shows no patterns predictable in the long term
(ii) Non-stationarity
To identify a non-stationary series:
- examine the time plot
- the ACF of stationary data drops to zero relatively quickly
- the ACF of non-stationary data decreases slowly
- the value of r_1 is often large and positive
(iii) Differencing
- Differencing helps to stabilise the mean.
- The differenced series will have only T − 1 values, since it is not possible to calculate a difference for the first observation.
- Occasionally the differenced data will not appear stationary and it may be necessary to difference the data a second time.
(iv) Seasonal differencing
A seasonal difference is the difference between an observation and the
corresponding observation from the previous year.
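For monthly data, the seasonal difference is ∇_{12}Y_t = Y_t − Y_{t−12}; a one-line numpy version (the synthetic monthly series is my own example):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(120)                                   # ten years of monthly data
y = 10 * np.sin(2 * np.pi * t / 12) + rng.normal(size=120)

seasonal_diff = y[12:] - y[:-12]     # Y_t - Y_{t-12}: the seasonal cycle cancels
print(seasonal_diff.std(), y.std())  # variability drops once the cycle is gone
```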