
Autocorrelation: Nature and Detection

13.1
Aims and Learning Objectives

By the end of this session students should be able to:

• Explain the nature of autocorrelation

• Understand the causes and consequences of autocorrelation

• Perform tests to determine whether a regression model has autocorrelated disturbances

13.2
Nature of Autocorrelation

Autocorrelation is a systematic pattern in the errors, which can be either attracting (positive autocorrelation) or repelling (negative autocorrelation).

For efficiency (accurate estimation and prediction), all systematic information needs to be incorporated into the regression model.

13.3
Regression Model

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + U_t$$

No autocorrelation: $\mathrm{Cov}(U_i, U_j) = 0$, or $E(U_i U_j) = 0$

Autocorrelation: $\mathrm{Cov}(U_i, U_j) \neq 0$, or $E(U_i U_j) \neq 0$

Note: $i \neq j$

In general: $E(U_t U_{t-s}) \neq 0$
13.4
[Figure: three plots of the disturbance $U_t$ against time $t$: positive (attracting) autocorrelation, where successive errors tend to stay on the same side of zero; no autocorrelation, where the errors are random; and negative (repelling) autocorrelation, where successive errors tend to alternate in sign.]

13.5
Order of Autocorrelation
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + U_t$$

1st order: $U_t = \rho U_{t-1} + \varepsilon_t$

2nd order: $U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \varepsilon_t$

3rd order: $U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \rho_3 U_{t-3} + \varepsilon_t$

where $-1 < \rho < +1$.

We will assume first-order autocorrelation, AR(1): $U_t = \rho U_{t-1} + \varepsilon_t$
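As an illustration (not from the original slides), a minimal Python sketch, assuming NumPy, that simulates AR(1) disturbances for a chosen $\rho$; all names are our own:

```python
import numpy as np

rng = np.random.default_rng(42)
n, rho = 200, 0.7            # sample size and AR(1) coefficient (assumed values)
eps = rng.normal(size=n)     # white-noise innovations epsilon_t
u = np.zeros(n)              # AR(1) disturbances U_t

for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]   # U_t = rho * U_{t-1} + epsilon_t

# With rho > 0 the disturbances are attracting: successive values tend to
# share the same sign.
```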
13.6
Causes of Autocorrelation

Direct:
• Inertia or persistence
• Spatial correlation
• Cyclical influences

Indirect:
• Omitted variables
• Functional form
• Seasonality

13.7
Consequences of Autocorrelation

1. Ordinary least squares estimators are still linear and unbiased.

2. Ordinary least squares is not efficient.

3. The usual formulas give incorrect standard errors for least squares.

4. Confidence intervals and hypothesis tests based on the usual standard errors are wrong.

13.8
$$Y_t = \hat{\beta}_1 + \hat{\beta}_2 X_t + e_t$$

Autocorrelated disturbances: $E(e_t e_{t-s}) \neq 0$

Formula for the ordinary least squares variance (no autocorrelation in the disturbances):

$$\mathrm{Var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum x_t^2}$$

Formula for the ordinary least squares variance (autocorrelated disturbances):

$$\mathrm{Var}(\hat{\beta}_2) = \frac{\sigma^2}{\sum x_t^2}\left[\,1 + \frac{2}{\sum x_t^2}\sum_{i<j} \rho^{\,j-i} x_i x_j \right]$$

where $x_t$ denotes $X_t$ in deviations from its mean. Therefore, when the errors are autocorrelated, ordinary least squares estimators are inefficient (i.e. not “best”).
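To see consequence 3 concretely, a small Monte Carlo sketch in Python (assuming NumPy; the setup, a trending regressor with $\rho = 0.8$, is our own illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, reps = 100, 0.8, 2000
x = np.arange(n, dtype=float)          # trending regressor (itself autocorrelated)
xd = x - x.mean()                      # deviations from the mean
slopes, naive_vars = [], []

for _ in range(reps):
    eps = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):              # AR(1) disturbances
        u[t] = rho * u[t - 1] + eps[t]
    y = 1.0 + 2.0 * x + u              # true beta_2 = 2
    b2 = (xd @ y) / (xd @ xd)          # OLS slope estimate
    e = y - y.mean() - b2 * xd         # OLS residuals
    s2 = (e @ e) / (n - 2)             # usual sigma^2 estimate
    slopes.append(b2)
    naive_vars.append(s2 / (xd @ xd))  # "no autocorrelation" variance formula

print("sampling variance of b2:", np.var(slopes))
print("average naive formula:  ", np.mean(naive_vars))  # markedly smaller here
```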
13.9
Detecting Autocorrelation

$$Y_t = \hat{\beta}_1 + \hat{\beta}_2 X_{2t} + \hat{\beta}_3 X_{3t} + e_t$$

The residuals $e_t$ provide proxies for the unobserved disturbances $U_t$.

Preliminary Analysis (Informal Tests)

• Data: autocorrelation most often occurs in time-series data (exceptions: spatial correlation, panel data)

• Graphical examination of residuals: plot $e_t$ against time or against $e_{t-1}$ to see whether there is a relation (a plotting sketch follows below)
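A minimal plotting sketch, assuming NumPy/Matplotlib and some residual series; the file name is a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt

e = np.loadtxt("residuals.txt")   # hypothetical source of the residuals e_t

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(e, marker="o")           # e_t against time
ax1.axhline(0, color="grey")
ax1.set(xlabel="t", ylabel="e_t", title="Residuals over time")

ax2.scatter(e[:-1], e[1:])        # e_t against e_{t-1}
ax2.set(xlabel="e_{t-1}", ylabel="e_t", title="Residuals against lagged residuals")

plt.tight_layout()
plt.show()
```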
13.10
Formal Tests for Autocorrelation

Runs test: analyse the runs, i.e. uninterrupted sequences of residuals with the same sign.

Durbin-Watson (DW) d test: the ratio of the sum of squared differences in successive residuals to the residual sum of squares.

Breusch-Godfrey LM test: a more general test which does not assume the disturbances are AR(1).

13.11
Durbin-Watson d Test

$H_0: \rho = 0$ vs. $H_1: \rho \neq 0$, $\rho > 0$, or $\rho < 0$

The Durbin-Watson test statistic, d, is:

$$d = \frac{\sum_{t=2}^{n} (e_t - e_{t-1})^2}{\sum_{t=1}^{n} e_t^2}$$

i.e. the ratio of the sum of squared differences in successive residuals to the residual sum of squares.
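Computed directly from a residual array, a minimal sketch (assuming NumPy; the function name is our own):

```python
import numpy as np

def durbin_watson_d(e: np.ndarray) -> float:
    """Sum of squared successive residual differences over the residual sum of squares."""
    return float(np.sum(np.diff(e) ** 2) / np.sum(e ** 2))
```

statsmodels ships the same computation as statsmodels.stats.stattools.durbin_watson (used in the sketch after the decision rule below).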
13.12
The test statistic, d, is approximately related to $\hat{\rho}$ as:

$$d \approx 2(1 - \hat{\rho})$$

When $\hat{\rho} = 0$, the Durbin-Watson statistic is $d \approx 2$.

When $\hat{\rho} = 1$, the Durbin-Watson statistic is $d \approx 0$.

When $\hat{\rho} = -1$, the Durbin-Watson statistic is $d \approx 4$.

For example, $\hat{\rho} = 0.7$ gives $d \approx 2(1 - 0.7) = 0.6$, well below 2, which points to positive autocorrelation.
13.13
DW d Test
4 Steps

Step 1: Estimate $\hat{Y}_i = \hat{\beta}_1 + \hat{\beta}_2 X_{2i} + \hat{\beta}_3 X_{3i}$ and obtain the residuals.

Step 2: Compute the DW d test statistic.

Step 3: Obtain $d_L$ and $d_U$, the lower and upper critical points, from the Durbin-Watson tables.

13.14
Step 4: Implement the following decision rule (a statsmodels sketch follows below):

If $d < d_L$: reject the null of no positive autocorrelation.

If $d_L \leq d \leq d_U$: no decision.

If $d_U < d < 4 - d_U$: do not reject the null of no positive or negative autocorrelation.

If $4 - d_U \leq d \leq 4 - d_L$: no decision.

If $d > 4 - d_L$: reject the null of no negative autocorrelation.
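Steps 1 and 2 in statsmodels, as a sketch: sm.OLS, sm.add_constant, and durbin_watson are real statsmodels APIs, while the data here are made up for illustration:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
n = 50
X2, X3 = rng.normal(size=n), rng.normal(size=n)   # placeholder regressors
y = 1 + 2 * X2 - X3 + rng.normal(size=n)          # placeholder dependent variable

X = sm.add_constant(np.column_stack([X2, X3]))    # the DW test requires an intercept
res = sm.OLS(y, X).fit()

d = durbin_watson(res.resid)
print(f"DW d = {d:.3f}")   # compare with d_L and d_U from the DW tables
```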

13.15
Restrictive Assumptions:
• There is an intercept in the model

• X values are non-stochastic

• Disturbances are AR(1)

• Model does not include a lagged dependent variable as an explanatory variable, e.g.

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$

13.16
Breusch-Godfrey LM Test

This test is valid with lagged dependent variables and can be used to test for higher-order autocorrelation.

Suppose, for example, that we estimate:

$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$

and wish to test for autocorrelation of the form:

$$U_t = \rho_1 U_{t-1} + \rho_2 U_{t-2} + \rho_3 U_{t-3} + v_t$$
13.17
Breusch-Godfrey LM Test
4 steps

Step 1. Estimate
$$Y_t = \beta_1 + \beta_2 X_{2t} + \beta_3 X_{3t} + \beta_4 Y_{t-1} + U_t$$
and obtain the residuals ($e_t$).

Step 2. Estimate the following auxiliary regression model:

$$e_t = b_1 + b_2 X_{2t} + b_3 X_{3t} + b_4 Y_{t-1} + c_1 e_{t-1} + c_2 e_{t-2} + c_3 e_{t-3} + w_t$$

13.18
Breusch-Godfrey LM Test

Step 3. For large sample sizes, the test statistic is:

$$(n - p)R^2 \sim \chi^2_p$$

where $R^2$ is taken from the auxiliary regression and p is the number of lagged residual terms tested.

Step 4. If the test statistic exceeds the critical chi-square value, we can reject the null hypothesis of no serial correlation in any of the $\rho$ terms.
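In practice this can be run with statsmodels (acorr_breusch_godfrey is a real statsmodels function; the data below are made up for illustration):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(2)
n = 100
X2, X3 = rng.normal(size=n), rng.normal(size=n)   # placeholder regressors
y = 1 + 2 * X2 - X3 + rng.normal(size=n)          # placeholder dependent variable

res = sm.OLS(y, sm.add_constant(np.column_stack([X2, X3]))).fit()
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=3)
print(f"LM = {lm_stat:.3f}, p = {lm_pval:.3f}")   # small p: reject no serial correlation
```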

13.19
Summary

In this lecture we have:

1. Analysed the theoretical causes and consequences of autocorrelation

2. Described a number of methods for detecting the presence of autocorrelation

13.20
