
2.4 Autocorrelation
Recall that one of the assumptions about the error terms is that the error term of one observation is not correlated with the error term of another observation (the correlation among the various error terms is zero). If they are correlated, the situation is said to be one of autocorrelation, also called the problem of serial correlation. Heteroscedasticity is associated more with cross-sectional data, whereas autocorrelation is usually more associated with time series data. Of course, autocorrelation can be present even in cross-sectional data, where it is sometimes called spatial correlation (correlation in space rather than in time). In the CLRM we assume that there is no autocorrelation. This implies:

𝐶𝑜𝑣(𝜀𝑖 , 𝜀𝑗 ) = 𝐸(𝜀𝑖 𝜀𝑗 ) = 0, 𝑖 ≠ 𝑗

Sources of autocorrelation
Some of the possible reasons for the introduction of autocorrelation into the data are as follows:
1. Inertia or carryover of effect: Carryover of effect, at least in part, is an important source of
autocorrelation. For example, monthly data on household expenditure are influenced by the
expenditure of the preceding month. Likewise, time series such as gross domestic product (GDP),
production, employment, and money supply reflect recurring and self-sustaining fluctuations in
economic activity. When an economy is recovering from a recession, most of these series will be
moving upwards, so the value of a series at one point in time tends to be greater than its previous
value. Such momentum continues until it slows down due to, say, an increase in taxes or interest
rates, or both. Hence, in regressions involving time series data, successive observations are
generally interdependent, or correlated. This effect is termed 'inertia', meaning a situation that
continues to hold in a similar manner for many successive time periods.
2. Deletion of some variables: Another source of autocorrelation is the effect of deleting some
variables. In regression modeling, it is not possible to include all the relevant variables in the
model. There can be various reasons for this: some variables may be qualitative, or direct
observations on a variable may not be available. The joint effect of such deleted variables gives
rise to autocorrelation in the data.
3. Model misspecification: The misspecification of the form of the relationship can also introduce
autocorrelation in the data. It is assumed that the relationship between the study (dependent) and
explanatory variables is linear. If the true relationship involves log or exponential terms, so that
the linearity of the model is questionable, fitting the linear model gives rise to autocorrelation in
the data.
4. Data smoothing: Sometimes we need to average the data; taking averages amounts to 'data
smoothing'. For example, we may convert monthly data into quarterly data by averaging over
every three months. However, this smoothness, desirable in many contexts, may itself induce a
systematic pattern in the disturbances, resulting in autocorrelation.
5. Measurement errors: The difference between the observed and true values of a variable is
called measurement error, or errors-in-variables. The presence of measurement error in the
dependent variable may also introduce autocorrelation into the data.
Autocorrelation may be positive or negative depending on the data. Generally, economic data
exhibit positive autocorrelation, because most series move upwards or downwards over time and
such a trend continues at least for some time, i.e., for some months or quarters. In other words,
series are not generally expected to exhibit a sudden upward or downward movement unless there
is a reason, or a shock.
The following structures are popular for modeling autocorrelation (a small simulation sketch
follows the list):
a) Autoregressive (AR) process;
b) Moving average (MA) process;
c) Joint autoregressive moving average (ARMA) process.
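
To make these structures concrete, here is a minimal Python sketch that simulates an error series under each process. The coefficient values (0.7 and 0.5) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
v = rng.normal(size=n)  # white-noise innovations v_t

# a) AR(1): u_t = 0.7 * u_{t-1} + v_t
u_ar = np.zeros(n)
for t in range(1, n):
    u_ar[t] = 0.7 * u_ar[t - 1] + v[t]

# b) MA(1): u_t = v_t + 0.5 * v_{t-1}
u_ma = v.copy()
u_ma[1:] += 0.5 * v[:-1]

# c) ARMA(1,1): u_t = 0.7 * u_{t-1} + v_t + 0.5 * v_{t-1}
u_arma = np.zeros(n)
for t in range(1, n):
    u_arma[t] = 0.7 * u_arma[t - 1] + v[t] + 0.5 * v[t - 1]
```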

Consequences of Autocorrelation
When the assumption of no autocorrelation is violated, the estimators of the regression model
based on sample data suffer certain consequences. More specifically, the OLS estimators suffer
from the following.

a. The least squares estimators are still linear and unbiased. In other words, the estimated values
of the parameters continue to be unbiased. However, they are not efficient because they no longer
have minimum variance. Therefore, the usual OLS estimators are not BLUE.
b. The estimated variances of the OLS estimators 𝛽̂𝑗 are biased. Hence, the usual formulas for the
estimated variances and standard errors underestimate the true variances and standard errors.
Consequently, rejecting a parameter on the basis of t-values, concluding that a particular
coefficient is statistically different from zero, may be an incorrect conclusion (a small simulation
sketch after this list illustrates the bias). In particular:
• the usual t-ratio and F-ratio tests provide misleading results;
• we are likely to overestimate 𝑅²;
• confidence intervals are narrower than they should be;
• predictions may have large variances.
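
As an illustration of consequence (b), the following hypothetical Monte Carlo sketch (sample size, 𝜌, and coefficient values are all assumed purely for illustration) generates AR(1) errors and compares the conventional OLS standard error of the slope with its true sampling variability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, rho, beta2 = 100, 2000, 0.8, 1.0
x = np.linspace(0, 1, n)             # fixed, trending regressor
X = np.column_stack([np.ones(n), x])

b2_hats, se2_hats = [], []
for _ in range(reps):
    # generate AR(1) errors: u_t = rho * u_{t-1} + v_t
    v = rng.normal(size=n)
    u = np.zeros(n)
    for t in range(1, n):
        u[t] = rho * u[t - 1] + v[t]
    y = 2.0 + beta2 * x + u
    # OLS estimates and the conventional (iid-based) standard error of beta2
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    e = y - X @ b
    s2 = e @ e / (n - 2)
    b2_hats.append(b[1])
    se2_hats.append(np.sqrt(s2 * XtX_inv[1, 1]))

print("true sd of beta2_hat over reps:", np.std(b2_hats))
print("average conventional SE:       ", np.mean(se2_hats))
# With rho > 0 the average reported SE falls well below the true sampling
# variability, so t-ratios are inflated, as described in point (b).
```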

Detection or tests of Autocorrelation


There are many methods of detecting the presence of autocorrelation.
1. Graphical Method
A visual examination of the OLS residuals 𝑒𝑡 quite often conveys the presence of autocorrelation
among the error terms 𝑢𝑡 . Such a graphical presentation (Fig. 12.3) is known as a 'time sequence
plot'. The first part of the figure shows no clear pattern in the movement of the error terms,
which indicates an absence of autocorrelation. In the lower part of Fig. 12.3, the correlation
between successive residual terms is first negative and then becomes positive. Therefore, plotting
the sample residuals gives us a first indication of the presence or absence of autocorrelation.
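
A minimal sketch of such a time sequence plot, using hypothetical data (the data-generating process below is assumed purely for illustration; it is not from the text):

```python
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm

# hypothetical data with serially correlated noise
rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = 1.0 + 0.5 * x + rng.normal(size=100).cumsum() * 0.3
res = sm.OLS(y, sm.add_constant(x)).fit()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(res.resid)                         # time sequence plot of e_t
ax1.set(title="Residuals over time", xlabel="t", ylabel="e_t")
ax2.scatter(res.resid[:-1], res.resid[1:])  # e_t against e_{t-1}
ax2.set(title="e_t vs e_{t-1}", xlabel="e_{t-1}", ylabel="e_t")
plt.tight_layout()
plt.show()
```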

2. Durbin-Watson Test
The Durbin-Watson test, or the DW test as it is popularly called, is an analytical method of
detecting the presence of autocorrelation. It is used for testing the hypothesis of no first-order
autocorrelation in the disturbance term. The null hypothesis is

𝐻0 : 𝜌 = 0 (there is no autocorrelation)

Use OLS to estimate 𝛽 in 𝑌 = 𝑋𝛽 + 𝜀 and obtain the residual vector

𝑒 = 𝑌 − 𝑋𝛽̂

The test statistic is

𝑑 = ∑(𝑒𝑡 − 𝑒𝑡−1 )² / ∑ 𝑒𝑡² ≈ 2(1 − 𝑟),

where the sums run over 𝑡 = 2, …, 𝑛 and 𝑡 = 1, …, 𝑛 respectively, and 𝑟 is the sample
autocorrelation coefficient from the residuals based on the OLSE, which can be regarded as the
regression coefficient of 𝑒𝑡 on 𝑒𝑡−1 . Hence a value of 𝑑 near 2 suggests no first-order
autocorrelation, a value near 0 suggests positive autocorrelation, and a value near 4 suggests
negative autocorrelation.
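A minimal sketch of the d-statistic computed directly from a residual series, checked against statsmodels' built-in durbin_watson; the residual values below are made up for illustration.

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    # d = sum_{t=2}^{n} (e_t - e_{t-1})^2 / sum_{t=1}^{n} e_t^2
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

e = np.array([0.5, 0.6, 0.4, 0.7, 0.3, -0.2, -0.4, -0.1])  # illustrative residuals
print(dw(e), durbin_watson(e))  # the two agree; values near 2 suggest no AR(1)
```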
Limitations of the D-W test
• If 𝑑 falls in the inconclusive zone, i.e., between the tabulated bounds 𝑑𝐿 and 𝑑𝑈 (or between
4 − 𝑑𝑈 and 4 − 𝑑𝐿 ), no conclusive inference can be drawn.
• The D-W test is not applicable when the intercept term is absent from the model.
3. The Breusch-Godfrey (BG) Test
To avoid the pitfalls of the Durbin-Watson d-test, Breusch and Godfrey have proposed a test
criterion for autocorrelation that is general in nature, in the sense that:
a) it can handle non-stochastic regressors as well as lagged values of 𝑌𝑡 ;
b) it can deal with higher-order autoregressive schemes such as AR(2), AR(3), etc.;
c) it can also handle simple or higher-order moving averages.
Let us now consider a two-variable regression model to see how the BG test works:

𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡 ,   𝑢𝑡 = 𝜌1 𝑢𝑡−1 + 𝜌2 𝑢𝑡−2 + ⋯ + 𝜌𝑝 𝑢𝑡−𝑝 + 𝑣𝑡 ,

where 𝑣𝑡 is the white-noise stochastic error term. We wish to test:
𝐻0 : 𝜌1 = 𝜌2 = ⋯ = 𝜌𝑝 = 0
The null hypothesis says that there is no autocorrelation of any order up to 𝑝. The BG test
involves the following steps:
1. Estimate the model 𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡 by OLS and obtain the residuals 𝑒𝑡 .
2. Regress the residuals 𝑒𝑡 on 𝑋𝑡 and on the 𝑝 lagged values of the estimated residuals obtained
in step (1), i.e., 𝑒𝑡−1 , 𝑒𝑡−2 , … , 𝑒𝑡−𝑝 . Here we use the residuals 𝑒𝑡 , which are estimates of the
errors 𝑢𝑡 , as the errors themselves are not observed.
3. Obtain 𝑅² from this auxiliary regression.
4. For large samples, the Breusch-Godfrey test statistic is computed as

(𝑛 − 𝑝)𝑅² ~ 𝜒²(𝑝) asymptotically under 𝐻0 ,

so we reject 𝐻0 when (𝑛 − 𝑝)𝑅² exceeds the critical chi-square value.
The BG test is also referred to as the LM (Lagrange Multiplier) test.
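
In practice the test is available ready-made. A minimal sketch using statsmodels' acorr_breusch_godfrey on a fitted OLS result, reusing the same hypothetical data-generating process as in the earlier sketch:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = 1.0 + 0.5 * x + rng.normal(size=100).cumsum() * 0.3  # serially correlated noise
res = sm.OLS(y, sm.add_constant(x)).fit()

# nlags = p, the highest order of autocorrelation being tested
lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(res, nlags=2)
print(lm_stat, lm_pval)  # a small p-value rejects H0: no autocorrelation up to order 2
```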

Remedial Measures for Autocorrelation
To suggest remedial measures for autocorrelation, we assume a specific form of interdependence
in the error term 𝑢𝑡 of the regression model, namely the first-order autoregressive scheme

𝑢𝑡 = 𝜌𝑢𝑡−1 + 𝑣𝑡 ,

where 𝑣𝑡 is assumed to satisfy the OLS assumptions. We first consider the case where 𝜌 is known.
Here, transforming the model in a certain manner (the Cochrane-Orcutt procedure) reduces the
equation to a model compatible with OLS.
1. Cochrane-Orcutt Transformation (Autoregressive Scheme Is Known)
Suppose we know the value of 𝜌. This helps us transform the regression model 𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡
in such a manner that the error term becomes free from autocorrelation; we then apply the OLS
method to the transformed model. For this, consider the model lagged by one period:
𝑌𝑡−1 = 𝛽1 + 𝛽2 𝑋𝑡−1 + 𝑢𝑡−1 (1)
Multiplying equation (1) on both sides by 𝜌, we obtain:
𝜌𝑌𝑡−1 = 𝜌𝛽1 + 𝜌𝛽2 𝑋𝑡−1 + 𝜌𝑢𝑡−1 (2)
Subtracting equation (2) from the original model 𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡 , we obtain:

𝑌𝑡 − 𝜌𝑌𝑡−1 = 𝛽1 (1 − 𝜌) + 𝛽2 (𝑋𝑡 − 𝜌𝑋𝑡−1 ) + (𝑢𝑡 − 𝜌𝑢𝑡−1 )

Note that 𝑢𝑡 − 𝜌𝑢𝑡−1 = 𝑣𝑡 , so we use 𝑣𝑡 for the new disturbance term. Let us now denote:

𝑌𝑡∗ = 𝑌𝑡 − 𝜌𝑌𝑡−1 ,   𝑋𝑡∗ = 𝑋𝑡 − 𝜌𝑋𝑡−1 ,   𝛽1∗ = 𝛽1 (1 − 𝜌)

The transformed model will be

𝑌𝑡∗ = 𝛽1∗ + 𝛽2 𝑋𝑡∗ + 𝑣𝑡

Since 𝑣𝑡 satisfies the OLS assumptions, the estimators obtained by applying the OLS method to
this transformed model have the desirable BLUE property. They are called the Generalized Least
Squares (GLS) estimators, and the transformation suggested above is known as the Cochrane-
Orcutt transformation procedure.
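
A minimal sketch of the transformation with a known 𝜌 (the value 0.7 is assumed), followed by statsmodels' GLSAR, which iterates the same idea when 𝜌 has to be estimated; the data-generating process is the same hypothetical one used above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = 1.0 + 0.5 * x + rng.normal(size=100).cumsum() * 0.3  # serially correlated noise

rho = 0.7                                  # assumed known here
y_star = y[1:] - rho * y[:-1]              # Y*_t = Y_t - rho * Y_{t-1}
x_star = x[1:] - rho * x[:-1]              # X*_t = X_t - rho * X_{t-1}
res_co = sm.OLS(y_star, sm.add_constant(x_star)).fit()  # intercept = beta1 * (1 - rho)

# When rho is unknown, GLSAR estimates an AR(1) rho iteratively:
res_glsar = sm.GLSAR(y, sm.add_constant(x), rho=1).iterative_fit(maxiter=10)
print(res_co.params, res_glsar.model.rho)
```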
2. Autoregressive Scheme Is Not Known
Suppose we do not know 𝜌; we then need methods for estimating it. We first consider the case
where 𝜌 = 1. This amounts to assuming that the error terms are perfectly positively
autocorrelated, and this case is called the First Difference Method. If the assumption holds, a
generalized difference equation can be obtained by subtracting the one-period lag of the model
𝑌𝑡 = 𝛽1 + 𝛽2 𝑋𝑡 + 𝑢𝑡 from the model itself:

𝛥𝑌𝑡 = 𝛽2 𝛥𝑋𝑡 + 𝛥𝑢𝑡 , i.e., 𝑌𝑡 − 𝑌𝑡−1 = 𝛽2 (𝑋𝑡 − 𝑋𝑡−1 ) + (𝑢𝑡 − 𝑢𝑡−1 ),

where the symbol 𝛥 (read as delta) is the first-difference operator. Note that the difference model
has no intercept. If 𝜌 is not known and cannot be assumed to equal 1, we can estimate it by the
following methods.
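
A minimal sketch of the first-difference method under the assumption 𝜌 = 1, again using the hypothetical data from the earlier sketches:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = 1.0 + 0.5 * x + rng.normal(size=100).cumsum() * 0.3  # serially correlated noise

dy, dx = np.diff(y), np.diff(x)   # delta-Y_t and delta-X_t
res_fd = sm.OLS(dy, dx).fit()     # no constant: the difference model has no intercept
print(res_fd.params)              # estimate of beta2
```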

Durbin-Watson Method
From the d-statistic formula above, we see that the d-statistic and 𝜌 are related, and we can use
this relationship to estimate 𝜌. The relationship is approximately:

𝑑 ≈ 2(1 − 𝜌̂), so that 𝜌̂ ≈ 1 − 𝑑/2

If the value of 𝑑 is known, then 𝜌̂ can be estimated from the d-statistic and used in the
Cochrane-Orcutt transformation above.
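
A minimal sketch: back out 𝜌̂ from the d-statistic of a fitted model (same hypothetical data as above).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(1)
x = np.arange(100, dtype=float)
y = 1.0 + 0.5 * x + rng.normal(size=100).cumsum() * 0.3
res = sm.OLS(y, sm.add_constant(x)).fit()

d = durbin_watson(res.resid)
rho_hat = 1 - d / 2   # from d ≈ 2(1 - rho_hat)
print(d, rho_hat)     # rho_hat can now feed the Cochrane-Orcutt transformation
```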
