
ECMT5001: Principles of Econometrics
Lecture 11: Time series and autocorrelation

Instructor: Simon Kwok
School of Economics, The University of Sydney
(Based on lecture notes by Nicolas de Roos.)

Outline
1 Time series data
2 Trends in time series
3 Autocorrelation
4 Detecting autocorrelation
5 Remedies for autocorrelation

1 Time series data

Time series data is ordered by time:
- each separate observation is a different time period
- the unit of observation is a day, week, month, year, etc. (compared with an individual, firm, country, etc. in cross-section data)
- we use the subscript t rather than i for an observation

We no longer have a random sample of individual units:
- we have one realisation of a stochastic (random) process

Time series models: Examples

A static model relates contemporaneous variables:

    y_t = β_0 + β_1 z_t + u_t

- e.g. quantity demanded depends on price

A finite distributed lag (FDL) model allows one or more variables to affect y with a lag:

    y_t = α_0 + δ_0 z_t + δ_1 z_{t-1} + δ_2 z_{t-2} + u_t

- e.g. GDP might be affected by lagged interest rates
- an FDL model of order q includes q lags of z_t
- lags are important in time series analysis

Classical assumptions of the time series model

TS.1 Linearity in parameters

    y_t = β_0 + β_1 x_{1t} + β_2 x_{2t} + ... + β_k x_{kt} + u_t

TS.2 No perfect collinearity

TS.3 Zero conditional mean

    E(u_t | X) = 0,   t = 1, 2, ..., n

- the error term in any period is uncorrelated with the explanatory variables in all time periods
- i.e. u_t must be uncorrelated with x_s even when s ≠ t
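The lecture does not prescribe software, but as one illustration of how lagged regressors for an FDL model might be built in practice, here is a minimal sketch using pandas and statsmodels; the data are simulated and all variable names are hypothetical.

```python
# Minimal sketch: estimating a finite distributed lag (FDL) model of order 2.
# The data are simulated purely for illustration; in practice y and z would be
# observed series (e.g. GDP growth and an interest rate).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"z": rng.normal(size=n)})
# build lags of z with .shift(); each lag loses one usable observation
df["z_lag1"] = df["z"].shift(1)
df["z_lag2"] = df["z"].shift(2)
df["y"] = 1.0 + 0.5 * df["z"] + 0.3 * df["z_lag1"] + 0.1 * df["z_lag2"] + rng.normal(size=n)

df = df.dropna()                      # drop the first q = 2 periods with missing lags
X = sm.add_constant(df[["z", "z_lag1", "z_lag2"]])
fdl_fit = sm.OLS(df["y"], X).fit()
print(fdl_fit.params)                 # estimates of (alpha_0, delta_0, delta_1, delta_2)
```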

Classical assumptions

Notation: the matrix X contains data on the complete time paths of all explanatory variables,

    X = [ x_11  x_12  ...  x_1k ]
        [  ...   ...        ... ]
        [ x_t1  x_t2  ...  x_tk ]   <- row t: explanatory variables in period t
        [  ...   ...        ... ]
        [ x_n1  x_n2  ...  x_nk ]

In time series models we require E(u_t | X) = 0, t = 1, 2, ..., n.
- in cross-section models we assume E(u_i | x_i) = 0
- there we need not worry about how the error term for one individual is associated with the explanatory variables for another individual: the random sampling assumption ensures there is no correlation

The zero conditional mean assumption implies that the x's are strictly exogenous:
- this is stronger than the contemporaneous exogeneity assumption used in cross-section models, E(u_i | x_i) = 0
- we need strict exogeneity to establish small-sample properties such as unbiasedness
- contemporaneous exogeneity is sufficient for large-sample properties such as consistency

Note: we have not required a random sample; strict exogeneity plays a similar role.
Properties of OLS

Theorem (Unbiasedness of OLS). Under assumptions TS.1-TS.3, the OLS estimators are unbiased:

    E(β̂_j) = β_j,   j = 0, 1, ..., k

Assumptions TS.1-TS.3 ensure unbiasedness, but they do not allow us to derive variances:
- we need two more assumptions to derive the estimator variances

Classical assumptions (continued)

TS.4 Homoskedasticity

    Var(u_t | X) = Var(u_t) = σ²

- the error variance is independent of the x's
- the error variance is constant over time

TS.5 No serial correlation

    Corr(u_t, u_s | X) = 0,   t ≠ s

- if TS.5 fails, the model suffers from autocorrelation (serial correlation)

Properties of OLS (continued)

Theorem (Variance of the OLS estimators). Under assumptions TS.1-TS.5,

    Var(β̂_j | X) = σ² / [SST_j (1 − R_j²)],   j = 1, ..., k

- this is the same formula as in the cross-section case
- SST_j is the total sum of squares of x_j
- R_j² is the R² from the regression of x_j on the other x's
- we do not usually know σ²; the next theorem tells us how to estimate it

Theorem (Estimation of the error variance). Under assumptions TS.1-TS.5,

    E(σ̂²) = σ²,   where   σ̂² = SSR / (n − k − 1)

- σ̂² is an unbiased estimator of the error variance σ²

Theorem (Gauss-Markov theorem). Under assumptions TS.1-TS.5, the OLS estimators have the least variance among all linear unbiased estimators.
- the OLS estimators are BLUE

Inference with time series

The conditions for valid inference are similar to the cross-section context:
- for small samples we need to assume normality of errors; this ensures we can use the t distribution
- for large samples, we can rely on a version of the Central Limit Theorem for asymptotic normality; we can then use the t distribution as an approximation

2 Trends in time series

Economic time series often have a trend:
- growth or inflation causes many macroeconomic variables to share a common trend
- e.g. GDP, retail sales, imports, exports, etc.

If two series have a common trend, we cannot assume the relationship between them is causal:
- often both will be trending because of other unobservable factors
- even if those factors are unobserved, we can control for them by allowing for a trend

Common trend specifications:

    Linear trend:        y_t = α_0 + α_1 t + u_t

    Quadratic trend:     y_t = α_0 + α_1 t + α_2 t² + u_t

    Exponential trend:   log(y_t) = α_0 + α_1 t + u_t,
                         equivalently y_t = a_0 e^{α_1 t} v_t, with a_0 = e^{α_0} and v_t = e^{u_t}

Seasonality

Often we observe periodic patterns in the data:
- e.g. sales in December may be higher than in other months
- e.g. demand for heating may be higher in winter
- e.g. demand for soft drinks may be higher in summer

We can use seasonal dummy variables. For example, with quarterly data we could use 3 dummies:
- d1 = 1 if the period is in Quarter 1, else d1 = 0
- d2 = 1 if the period is in Quarter 2, else d2 = 0
- d3 = 1 if the period is in Quarter 3, else d3 = 0

Consider a seasonal time series with a linear trend:

    y_t = β_0 + β_1 t + δ_1 d1_t + δ_2 d2_t + δ_3 d3_t + u_t

Predicted values:

    ŷ_t = β̂_0 + β̂_1 t + δ̂_1   if t is in Quarter 1
    ŷ_t = β̂_0 + β̂_1 t + δ̂_2   if t is in Quarter 2
    ŷ_t = β̂_0 + β̂_1 t + δ̂_3   if t is in Quarter 3
    ŷ_t = β̂_0 + β̂_1 t          if t is in Quarter 4

- hypothesis test for a trend: H0: β_1 = 0
- hypothesis test for seasonality: H0: δ_1 = δ_2 = δ_3 = 0
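A minimal sketch of how the trend and seasonality tests might be carried out with statsmodels; the quarterly series is simulated and all names and coefficient values are illustrative, not taken from the lecture.

```python
# Minimal sketch: linear trend + quarterly seasonal dummies, with a t test for
# the trend and an F test for joint seasonality. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120                                   # 30 years of quarterly observations
t = np.arange(1, n + 1)
quarter = (t - 1) % 4 + 1                 # 1, 2, 3, 4, 1, 2, ...
d1 = (quarter == 1).astype(float)
d2 = (quarter == 2).astype(float)
d3 = (quarter == 3).astype(float)
y = 2.0 + 0.05 * t + 1.5 * d1 - 0.5 * d2 + 0.8 * d3 + rng.normal(size=n)

X = sm.add_constant(pd.DataFrame({"t": t, "d1": d1, "d2": d2, "d3": d3}))
fit = sm.OLS(y, X).fit()

print(fit.tvalues["t"])                       # t test for a trend, H0: beta_1 = 0
print(fit.f_test("d1 = 0, d2 = 0, d3 = 0"))   # joint F test for seasonality
```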

3 Autocorrelation

Autocorrelation:
- error terms are correlated with their previous values
- this violates assumption TS.5: Corr(u_t, u_s | X) = 0, t ≠ s

Consequences of autocorrelation for the OLS estimator:
- still unbiased (as long as the other TS assumptions are met)
- incorrect variance
- we cannot perform valid inference

Note: the implications are similar to those of heteroskedasticity.

Causes of autocorrelation

1. Inertia
- a random variable continues its state of motion until hit by an external force
- e.g. cycles in economic time series persist until there is an external shock

Example: business cycles
- recurring and self-sustaining fluctuations in economic activity
- momentum built into the series
- the classic example is real GDP; there are many other examples

Business cycle example
[Figure: regression line fitted to business cycle data; the residuals alternate in runs of sign (positive, negative, positive, ...)]

Autocorrelation of order 1, or AR(1)

The AR(1) model of autocorrelation:

    u_t = ρ u_{t-1} + e_t,   t = 2, ..., n

where u_t is the regression error, e_t is i.i.d., and |ρ| < 1.

Positive autocorrelation (0 < ρ < 1):
- positive residuals are likely to be followed by positive residuals
- e.g. business cycles

Negative autocorrelation (−1 < ρ < 0):
- positive residuals are likely to be followed by negative residuals
- e.g. disruption then catch-up

Causes of autocorrelation (continued)

2. Model misspecification

Omitted variables:
- if the omitted variable is correlated over time, the regression residuals will be autocorrelated

Incorrect functional form, e.g. consider a cost function (MC = marginal cost):

    MC_t = β_0 + β_1 output_t + β_2 output_t² + u_t   (true model)
    MC_t = β_0 + β_1 output_t + v_t                   (fitted model)
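To build intuition for what positive and negative AR(1) errors look like, a short simulation can help; this sketch is illustrative only and is not part of the lecture.

```python
# Minimal sketch: simulating AR(1) errors u_t = rho * u_{t-1} + e_t to see how
# positive and negative autocorrelation behave. Illustrative only.
import numpy as np

def simulate_ar1(rho, n=200, seed=0):
    rng = np.random.default_rng(seed)
    e = rng.normal(size=n)          # i.i.d. innovations e_t
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + e[t]
    return u

u_pos = simulate_ar1(rho=0.8)       # long runs of same-signed values (business-cycle-like)
u_neg = simulate_ar1(rho=-0.8)      # frequent sign flips (disruption then catch-up)

# first-order sample autocorrelations should be close to +0.8 and -0.8
print(np.corrcoef(u_pos[1:], u_pos[:-1])[0, 1])
print(np.corrcoef(u_neg[1:], u_neg[:-1])[0, 1])
```
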
Functional form example
[Figure omitted.]

Causes of autocorrelation (continued)

3. Data smoothing
- data are often "smoothed" by official agencies
- a well-meaning attempt to remove trends or cycles

Common examples:
- moving averages
- seasonal adjustment
- interpolation between data periods (e.g. between Census years)

These procedures can induce correlation in the errors.

4 Detecting autocorrelation

Detection: Graphical analysis

Residual plots are an informal test for autocorrelation:
1. plot of residuals against time
2. plot of residuals against their lagged values

Plot of residuals against time:
- a run of positive û_t followed by a run of negative û_t suggests positive autocorrelation (0 < ρ < 1)
- residuals that constantly change from positive to negative suggest negative autocorrelation (−1 < ρ < 0)
- uncorrelated residuals should show neither pattern (ρ = 0)

Plot of û_t against û_{t−1}:
- if residuals are positively correlated we should see a positive relationship
- if residuals are negatively correlated we should see a negative relationship
- if residuals are uncorrelated we should see no obvious relationship

[Figures omitted: example residual plots illustrating each case.]
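A minimal sketch of the two informal plots, using matplotlib on residuals from a simulated regression with AR(1) errors; none of this is prescribed by the lecture.

```python
# Minimal sketch: the two informal residual plots used to eyeball autocorrelation.
# Residuals come from a simulated regression; illustrative only.
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 150
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):                      # AR(1) errors with rho = 0.7
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

resid = sm.OLS(y, sm.add_constant(x)).fit().resid

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(resid)                            # residuals against time
ax1.set(title="Residuals vs time", xlabel="t", ylabel="u_hat")
ax2.scatter(resid[:-1], resid[1:])         # residuals against lagged residuals
ax2.set(title="u_hat(t) vs u_hat(t-1)", xlabel="u_hat(t-1)", ylabel="u_hat(t)")
plt.show()
```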

Testing for autocorrelation

We will consider the following tests:
1. t test for AR(1) errors
2. Durbin-Watson test
   - the DW test applies to a static model with AR(1) errors and is not applicable to models with lagged y's
3. Breusch-Godfrey test
   - the BG test can test for higher-order autocorrelation and is valid in the presence of lagged y's

t test: hypotheses

Intuition:
- examine the sample correlation between û_t and û_{t−1}
- if positive first-order autocorrelation exists, we expect the sample correlation to be significantly positive

The null hypothesis is no first-order autocorrelation:

    H0: ρ = 0

The alternative hypothesis is one of the following:

    H1: ρ ≠ 0   (either positive or negative autocorrelation)
    H1: ρ > 0   (positive autocorrelation)
    H1: ρ < 0   (negative autocorrelation)

t test: estimating ρ

The estimate of ρ is derived from a regression of the residuals on the lagged residuals (i.e. estimation of an AR(1) model on the residuals):

    û_t = α_0 + ρ û_{t−1} + e_t

The estimate of ρ is the usual OLS estimate,

    ρ̂ = Σ_{t=2}^{n} û_t û_{t−1} / Σ_{t=2}^{n} û_{t−1}²

t test: steps

1. Estimate the model by OLS: y_t = β_0 + β_1 x_t + u_t
2. Obtain the residuals û_t
3. Regress û_t on û_{t−1}, with or without a constant term (either is asymptotically valid)
4. Use the t statistic of ρ̂ to test H0: ρ = 0
   - as a rough guide, autocorrelation is a problem if H0 is rejected at the 5% level
   - the test can be made robust to heteroskedasticity by using robust standard errors
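A minimal sketch of these steps: regress the OLS residuals on their first lag and read off the t statistic on ρ̂. The data are simulated and the setup is illustrative only.

```python
# Minimal sketch of the AR(1) t test: regress the OLS residuals on their lag
# and inspect the t statistic on rho_hat. Simulated data for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.5 * u[t - 1] + rng.normal()   # AR(1) errors
y = 1.0 + 2.0 * x + u

# Steps 1-2: estimate by OLS and obtain residuals
uhat = sm.OLS(y, sm.add_constant(x)).fit().resid

# Step 3: regress u_hat_t on u_hat_{t-1} (here with a constant term)
ar1_fit = sm.OLS(uhat[1:], sm.add_constant(uhat[:-1])).fit()

# Step 4: rho_hat, its t statistic, and the p-value for H0: rho = 0
print(ar1_fit.params[1], ar1_fit.tvalues[1], ar1_fit.pvalues[1])
```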

Durbin-Watson test

The Durbin-Watson test uses the statistic

    d = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²

d is closely related to ρ̂ since, for large n,

    d ≈ 2(1 − ρ̂)

Because −1 < ρ < 1, it follows that 0 < d < 4.

Interpreting d:
- if ρ ≈ 0 then d ≈ 2: no evidence of first-order autocorrelation
- if d is "close" to zero: significant positive first-order autocorrelation
- if d approaches 4: significant negative first-order autocorrelation

Summary

    ρ     d    case
    −1    4    perfect negative AR(1)
     0    2    no autocorrelation
     1    0    perfect positive AR(1)
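A minimal sketch computing d both directly from the formula and via statsmodels' durbin_watson helper, on simulated data; the helper is one convenient implementation, not something specified in the lecture.

```python
# Minimal sketch: the Durbin-Watson d statistic from OLS residuals, computed
# from the formula and via statsmodels. Simulated data only.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()
y = 0.5 + 1.5 * x + u

uhat = sm.OLS(y, sm.add_constant(x)).fit().resid

d_manual = np.sum(np.diff(uhat) ** 2) / np.sum(uhat ** 2)   # the formula for d
print(d_manual, durbin_watson(uhat))                        # the two should agree
# large-sample approximation d = 2(1 - rho_hat)
print(2 * (1 - np.sum(uhat[1:] * uhat[:-1]) / np.sum(uhat[:-1] ** 2)))
```
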
Durbin-Watson test: Steps

1. Estimate the model by OLS: y_t = β_0 + β_1 x_t + u_t
2. Obtain the residuals û_t
3. Calculate the d statistic
4. Obtain critical values from Durbin-Watson tables and conduct the test

- most software packages automatically perform the DW test for time series data
- the DW test is not valid if there is a lagged dependent variable (e.g. if y_{t−1} is an explanatory variable)

Breusch-Godfrey test

The BG test allows for more general forms of autocorrelation than AR(1):
- e.g. higher-order autocorrelation is possible for seasonal data

Suppose we suspect autocorrelation of order q:

    u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ... + ρ_q u_{t−q} + e_t

We want to test

    H0: ρ_1 = ρ_2 = ... = ρ_q = 0

Breusch-Godfrey test: Steps

1. Estimate the model by OLS: y_t = β_0 + β_1 x_t + u_t
2. Obtain the residuals û_t
3. Regress û_t on all the x's and û_{t−1}, ..., û_{t−q}
4. (For large n) Use the R² from this auxiliary regression to calculate the LM test statistic:

    (n − q) R² ~ χ²_q   (q degrees of freedom)

5. An F test of the restrictions is also valid.

Note: the regressors can include lagged dependent variables (y_{t−1}, y_{t−2}, ...).

The DW and BG tests

The BG test has some advantages over the DW test:
1. it can test for higher orders of autocorrelation
2. the original model can include lagged y's as regressors

The DW test is not applicable to models with lagged y's as explanatory variables.
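A minimal sketch of the BG test using statsmodels' acorr_breusch_godfrey on a fitted OLS model; the simulated AR(2) errors and the choice nlags=2 are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch: Breusch-Godfrey test for autocorrelation of order q = 2,
# using statsmodels on a fitted OLS model. Simulated data only.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(2, n):
    u[t] = 0.4 * u[t - 1] + 0.3 * u[t - 2] + rng.normal()   # AR(2) errors
y = 1.0 + 2.0 * x + u

ols_fit = sm.OLS(y, sm.add_constant(x)).fit()

# LM statistic with its p-value, and the F-test version with its p-value
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(ols_fit, nlags=2)
print(lm_stat, lm_pval, f_stat, f_pval)
```
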
5 Remedies for autocorrelation

Recall that with autocorrelation:
- OLS estimates are unbiased and consistent
- standard errors are biased, so inference (e.g. t and F tests) is not valid

Two kinds of remedy:
1. construct a new estimator (GLS) that accounts for autocorrelation
2. adjust the standard errors so that valid inference is possible

Note: these solutions are the same as for heteroskedasticity.

1. The GLS estimator

Suppose the errors follow an AR(1) process and consider constructing a BLUE estimator.

Consider the simple regression model

    y_t = β_0 + β_1 x_t + u_t,   u_t = ρ u_{t−1} + e_t,   −1 < ρ < 1

where

    e_t ~ N(0, σ²),   Cov(e_t, e_s) = 0 for all t ≠ s

From the model we have

    y_{t−1} = β_0 + β_1 x_{t−1} + u_{t−1}

Multiply by ρ:

    ρ y_{t−1} = ρ β_0 + ρ β_1 x_{t−1} + ρ u_{t−1}

Subtract this from the original model:

    y_t − ρ y_{t−1} = β_0 (1 − ρ) + β_1 (x_t − ρ x_{t−1}) + (u_t − ρ u_{t−1})

The transformed error term is white noise:

    e_t = u_t − ρ u_{t−1}

The GLS estimator (continued)

The data in the rearranged model

    y_t − ρ y_{t−1} = β_0 (1 − ρ) + β_1 (x_t − ρ x_{t−1}) + e_t

are known as quasi-differenced data.

Rewrite the model as

    y*_t = β*_0 + β*_1 x*_t + e_t,

where

    y*_t = y_t − ρ y_{t−1},   x*_t = x_t − ρ x_{t−1},   β*_0 = β_0 (1 − ρ).

GLS estimation involves estimating the quasi-differenced model by OLS.

GLS properties

We use the estimates of β*_0 and β*_1 to obtain BLUE estimates of the parameters β_0 and β_1:
- the approach can easily be extended to AR(q) models and to multiple regression

Note: we lose one observation when making the transformation, because y_1 − ρ y_0 is not available.
- for large n this is not a problem

Feasible GLS

How do we obtain the transformed variables y*_t and x*_t? We need to know ρ to calculate

    y*_t = y_t − ρ y_{t−1},   x*_t = x_t − ρ x_{t−1}

Usually ρ is unknown, but we can estimate it: this gives feasible GLS.

Feasible GLS: Steps

1. Estimate the model by OLS: y_t = β_0 + β_1 x_t + u_t
2. Obtain the residuals û_t
3. Regress û_t on û_{t−1} to obtain an estimate ρ̂ of ρ
4. Estimate the transformed equation by OLS:

    y*_t = β*_0 + β*_1 x*_t + e_t

where

    β*_0 = (1 − ρ̂) β_0,   y*_t = y_t − ρ̂ y_{t−1},   x*_t = x_t − ρ̂ x_{t−1}
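A minimal sketch of these steps written out by hand with numpy and statsmodels OLS, on simulated data; in practice one would usually rely on a packaged routine (see the Cochrane-Orcutt remark below).

```python
# Minimal sketch: feasible GLS for AR(1) errors by manual quasi-differencing.
# Simulated data; variable names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

# Steps 1-3: OLS, residuals, and an estimate of rho
uhat = sm.OLS(y, sm.add_constant(x)).fit().resid
rho_hat = np.sum(uhat[1:] * uhat[:-1]) / np.sum(uhat[:-1] ** 2)

# Step 4: quasi-difference the data (losing the first observation) and re-run OLS
y_star = y[1:] - rho_hat * y[:-1]
x_star = x[1:] - rho_hat * x[:-1]
fgls_fit = sm.OLS(y_star, sm.add_constant(x_star)).fit()

beta1_hat = fgls_fit.params[1]                  # slope estimate
beta0_hat = fgls_fit.params[0] / (1 - rho_hat)  # recover beta_0 from beta*_0 = (1 - rho) beta_0
print(rho_hat, beta0_hat, beta1_hat)
```
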
Feasible GLS: Issues

Problems with feasible GLS:
- it is not unbiased
- the t and F statistics are only approximately t and F distributed

However, feasible GLS is consistent:
- with large samples, the t and F statistics are reasonable approximations

Other variations on feasible GLS are available:
- Cochrane-Orcutt estimation uses an iterative method to calculate the correlation coefficient
- this method is quite commonly used (e.g. Gretl implements it)

Feasible GLS: Remarks

We should examine the transformed model for autocorrelation:
- if we reject H0 (no autocorrelation), then we have the wrong model for the autocorrelation
- e.g. we may have the wrong order of autocorrelation

Also, be careful comparing the R² reported for the quasi-differenced data with the R² for the original model:
- GLS now has a different dependent variable, y*_t
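For reference, statsmodels' GLSAR class provides one packaged iterative feasible-GLS routine in the spirit of Cochrane-Orcutt; whether it matches Gretl's implementation exactly is not claimed here, so treat this as an illustrative alternative to the manual sketch above.

```python
# Minimal sketch: iterative feasible GLS with statsmodels' GLSAR (Cochrane-
# Orcutt-style alternation between rho and beta). Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

model = sm.GLSAR(y, sm.add_constant(x), rho=1)   # AR(1) error structure
results = model.iterative_fit(maxiter=10)        # alternate between estimating rho and beta
print(model.rho, results.params)                 # estimated rho and coefficients
```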

2. Adjusting the standard errors

An alternative to feasible GLS is to correct the standard errors:
- we can calculate autocorrelation-robust standard errors in a similar way to heteroskedasticity-robust standard errors

We scale the OLS standard errors to take the autocorrelation into account:
- this yields the HAC (heteroskedasticity and autocorrelation consistent) estimator

Newey and West derive a formula for the variance:
- it is consistent: in large samples its distribution collapses around the true value

Note: the Newey-West estimator is consistent even if there is no autocorrelation present.
- however, White's (heteroskedasticity-robust) estimator is more efficient if there is no autocorrelation
- when the errors satisfy the Gauss-Markov assumptions, the OLS variance estimator is more efficient than either the White or the Newey-West estimator
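A minimal sketch of how HAC (Newey-West) standard errors might be requested in statsmodels; the lag truncation maxlags=4 is an arbitrary illustration rather than a recommendation from the lecture, and the data are simulated.

```python
# Minimal sketch: OLS point estimates with Newey-West (HAC) standard errors.
# Simulated data; the choice maxlags=4 is purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 300
x = np.zeros(n)
u = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal()   # autocorrelated regressor
    u[t] = 0.6 * u[t - 1] + rng.normal()   # AR(1) errors
y = 1.0 + 2.0 * x + u

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()                                          # conventional OLS standard errors
hac_fit = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})   # Newey-West (HAC)

print(ols_fit.bse)   # not valid under autocorrelation
print(hac_fit.bse)   # robust to autocorrelation and heteroskedasticity
```
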
Summary

1. OLS inference is valid with time series data under assumptions similar to those of the cross-section model.
2. Strict exogeneity is required for unbiasedness.
3. If the errors are autocorrelated, OLS estimates remain unbiased, but inference is invalid.
4. Autocorrelation can be detected graphically or with formal tests (e.g. the t, Durbin-Watson and Breusch-Godfrey tests).
5. Two possible remedies:
   a. calculate robust standard errors: inference is valid, but the OLS estimates are inefficient
   b. estimate by GLS: the GLS estimates are BLUE
