
AUTOCORRELATION

PRESENTED BY:
BHARAT SAINI : 2K21/BBA/28
ESHA SEMWAL: 2K21/BBA/45
STUTI AGGARWAL : 2K21/BBA/148
TANMAY CHHIKARA : 2K21/BBA/155
AGENDA

INTRODUCTION

CAUSES OF AUTOCORRELATION

CONSEQUENCES

DETECTING AUTOCORRELATION

GRAPHICAL METHOD

THE DURBIN-WATSON TEST

THE BREUSCH-GODFREY TEST

THE RUN TEST

REMEDIAL MEASURES

REFERENCES
INTRODUCTION
Autocorrelation occurs in time-series studies when the errors associated with a
given time period carry over into future time periods.

For example, if we are predicting the growth of stock dividends, an overestimate
in one year is likely to lead to overestimates in succeeding years.

One of the assumptions of the classical linear regression model (CLRM) is that
there is no autocorrelation in the disturbances:

E(u_i u_j) = 0   for i ≠ j

but when E(u_i u_j) ≠ 0 for i ≠ j, we have the condition known as autocorrelation.

Autocorrelation is the correlation between members of a series of observations
ordered in time (as in time-series data) or space (as in cross-sectional data).

For example, consider the regression of family consumption expenditure on family
income. The effect of an increase in one family's income on its expenditure is not
expected to affect the consumption of another family; but if there is such
dependence, we have autocorrelation.
CAUSES OF AUTOCORRELATION
INERTIA - Macroeconomic data experience business cycles, e.g. GDP, price indices, unemployment.

SPECIFICATION BIAS- Excluded variable

Important factors influencing the data are not included in the model, leading to a
misrepresentation of the relationship between variables and potentially causing
autocorrelation.

SPECIFICATION BIAS- Incorrect Functional Form

It occurs when the chosen model structure does not accurately represent the true
relationship between variables, potentially leading to autocorrelation in the
residuals.

COBWEB PHENOMENON- In agricultural markets, supply reacts to price with a lag of one time period
because supply decisions take time to implement. This is known as the cobweb phenomenon.

Thus, at the beginning of this year's planting of crops, farmers are influenced by the price prevailing last year.
Lags

In an autoregression, one of the explanatory variables is the lagged value of the dependent
variable.

If this lagged term is neglected, the resulting error term will reflect a systematic pattern due to the
influence of lagged consumption on current consumption.

Data manipulation

The error term in the original (level-form) equation may not be autocorrelated, but the error
term in the first-difference form can be autocorrelated.

Non-stationarity

A time series is stationary if its characteristics (e.g. mean, variance and covariance) do not
change over time; non-stationarity (time-variant behaviour) can give rise to autocorrelation.
CONSEQUENCES

The OLS estimators are unbiased and consistent but inefficient, i.e. no longer
BLUE.

They are still normally distributed in large samples.

This can lead to R^2 being unduly high.

The residual variance is likely to underestimate the true variance.

In most cases the standard errors are underestimated.

Thus, the hypothesis-testing procedure becomes suspect, since the estimated
standard errors may not be reliable, even in large samples.
DETECTING AUTOCORRELATION

There are two ways in general.

The first is the informal way which is done through graphs and therefore
we call it the graphical method.

The second is through formal tests for autocorrelation, like the following
ones:
1. The Durbin-Watson Test
2. The Breusch-Godfrey Test
3. The Run Test
GRAPHICAL METHOD
In this method the residuals are plotted against time.

This plot is called a Time Sequence Plot.

If the time sequence plot doesn't exhibit any pattern, autocorrelation is said to
be absent (figure e).

If it exhibits some pattern, autocorrelation is said to be present (figures a, b,
c & d).
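
A minimal sketch of such a time sequence plot, using simulated data with AR(1) errors so that a pattern is visible; all variable names (y, x, res) are hypothetical and are reused in the later sketches.

```python
# A sketch of a time sequence plot of OLS residuals (hypothetical data).
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)

# Build an AR(1) error series: u_t = 0.8 * u_{t-1} + noise.
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.8 * u[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()

plt.plot(res.resid, marker="o", linestyle="-")
plt.axhline(0, color="grey", linewidth=1)
plt.xlabel("Time")
plt.ylabel("OLS residual")
plt.title("Time sequence plot of residuals")
plt.show()
```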
The Durbin-Watson Test

This test was developed by the statisticians Durbin and Watson.

It is the most frequently used test for the detection of autocorrelation. It is
also called the Durbin-Watson d test.

It is used to test the null hypothesis that there is no autocorrelation.

The Durbin-Watson d statistic is given by:

d = Σ_{t=2}^{n} (e_t − e_{t−1})² / Σ_{t=1}^{n} e_t²

where e_t are the OLS residuals.
The value of the d statistic lies between 0 and 4. A value near 0 indicates the
presence of positive autocorrelation, a value near 4 indicates negative
autocorrelation, and a value near 2 indicates the absence of autocorrelation.

However, it is difficult to decide how near to 0, 2 or 4 the statistic must be.
Therefore decision criteria were suggested by Durbin and Watson: they constructed
tables of an upper bound (dU) and a lower bound (dL) for the d statistic. The
tables cover 6 to 200 observations and up to 20 explanatory variables. The
decision criteria are explained in the figure on the next slide.
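
A short sketch of computing d, both directly from the formula and with the durbin_watson helper in statsmodels; it assumes the fitted OLS result res from the earlier hypothetical example.

```python
# Durbin-Watson d statistic for OLS residuals.
# `res` is assumed to be the fitted OLS result from the earlier sketch.
import numpy as np
from statsmodels.stats.stattools import durbin_watson

e = np.asarray(res.resid)

# Direct use of the formula: sum of squared successive differences of the
# residuals divided by the sum of squared residuals.
d_manual = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# The same statistic via statsmodels' helper.
d_stat = durbin_watson(e)

print(f"d = {d_stat:.3f}")  # near 0: positive, near 2: none, near 4: negative autocorrelation
```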
The Durbin-Watson test is used only under the following assumptions:

1. The regression model includes an intercept term.

2. The explanatory variables must be non-stochastic (non-random or fixed).

3. The error term must be normally distributed.

4. The regression does not include any lagged (past) values of the dependent variable.

5. There must be no missing observations.

6. It can be used only for first-order autocorrelation.


BREUSCH-GODFREY (BG) TEST

• The test allows for:

(1) Lagged values of the dependent variable to be included as regressors.

(2) Higher-order autoregressive schemes, such as AR(2), AR(3), etc.

(3) Moving-average terms of the error term, such as u_{t−1}, u_{t−2}, etc.

• The error term in the main equation follows the AR(p) autoregressive
structure:

u_t = ρ_1 u_{t−1} + ρ_2 u_{t−2} + ... + ρ_p u_{t−p} + ε_t

• The null hypothesis of no serial correlation is:

H0: ρ_1 = ρ_2 = ... = ρ_p = 0

BG test steps

Regress the OLS residuals e_t on the regressors in the original model and on the
p lagged residuals e_{t−1}, ..., e_{t−p}, and obtain R^2 from this auxiliary
regression.

If the sample size is large, Breusch and Godfrey have shown that

(n − p) R^2 ~ χ²_p

That is, in large samples, (n − p) times R^2 follows the chi-square distribution
with p degrees of freedom.

Rejection of the null hypothesis implies evidence of autocorrelation.
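
A brief sketch of running the test with statsmodels' acorr_breusch_godfrey; res is assumed to be the fitted OLS results object from the earlier hypothetical example, and nlags plays the role of the AR order p.

```python
# Breusch-Godfrey test on a fitted statsmodels OLS results object `res`
# (from the earlier sketch). nlags is the AR order p being tested.
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

lm_stat, lm_pvalue, f_stat, f_pvalue = acorr_breusch_godfrey(res, nlags=2)
print(f"LM statistic = {lm_stat:.3f}, p-value = {lm_pvalue:.4f}")
# A small p-value rejects H0: rho_1 = ... = rho_p = 0 (no serial correlation).
```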
RUN TEST
This method is similar to the run test for randomness.

In this method the regression model is first fitted using the OLS method and the
residuals are obtained.

The residuals are arranged according to time.

The number of runs (R) formed by the + and − signs is counted. If it exceeds the
tabulated (critical) value, autocorrelation is said to be absent.

If N1 and N2 are the numbers of + and − signs respectively, then for large
samples the test can be approximated by a normal (Wald–Wolfowitz) test using:

E(R) = 2 N1 N2 / (N1 + N2) + 1

Var(R) = 2 N1 N2 (2 N1 N2 − N1 − N2) / [(N1 + N2)² (N1 + N2 − 1)]

Z = (R − E(R)) / √Var(R) ~ N(0, 1)
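
A sketch of the runs test using this large-sample normal approximation; it again assumes the fitted result res from the earlier hypothetical example.

```python
# Runs test on the signs of the OLS residuals (large-sample normal
# approximation); `res` is the fitted OLS result from the earlier sketch.
import numpy as np
from scipy.stats import norm

e = np.asarray(res.resid)
signs = np.sign(e)
signs = signs[signs != 0]                        # drop exact zeros, if any

n1 = int(np.sum(signs > 0))                      # number of + signs
n2 = int(np.sum(signs < 0))                      # number of - signs
runs = 1 + int(np.sum(signs[1:] != signs[:-1]))  # number of runs, R

mean_r = 2 * n1 * n2 / (n1 + n2) + 1
var_r = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))

z = (runs - mean_r) / np.sqrt(var_r)
p_value = 2 * (1 - norm.cdf(abs(z)))
print(f"R = {runs}, E(R) = {mean_r:.2f}, z = {z:.2f}, p = {p_value:.4f}")
```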
REMEDIAL MEASURES

First-Difference Transformation

If the autocorrelation is of AR(1) type, we have:

u_t = ρ u_{t−1} + ε_t

Assume ρ = 1 and run the first-difference model (taking the first difference of
the dependent variable and all regressors).

Generalized Transformation

Estimate the value of ρ through a regression of the residuals on the lagged
residuals, and use this estimate to run the transformed (quasi-differenced)
regression. A sketch follows below.
Newey-West Method
Generates HAC (heteroscedasticity and autocorrelation
consistent) standard errors.
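
A short sketch of obtaining Newey-West (HAC) standard errors in statsmodels for the same hypothetical regression; the maxlags value here is purely illustrative.

```python
# Newey-West HAC standard errors for the same regression; y and x are the
# hypothetical variables from the earlier sketch.
import statsmodels.api as sm

res_hac = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res_hac.bse)  # HAC standard errors; coefficient estimates equal plain OLS
```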
REFERENCES

Gujarati, D. N. & Porter, D. C., Basic Econometrics, 5th edition (2009), McGraw-Hill.

Draper, N. R. & Smith, H., Applied Regression Analysis, 3rd edition (1998), John Wiley & Sons Inc.

Johnston, J. & DiNardo, J., Econometric Methods, 4th edition (1997), McGraw-Hill Companies.
