Week 3.2
Juan R. Hernández¹

¹ I will draw heavily on the contents of the reference book and some slides on its website.
1. Diagnostics and Refinements: Omitted Variables
Some immediate questions arise when, for example, some variables are individually not statistically significant, yet the F-test suggests that the estimates are not all jointly zero. Consider the model:
yt = β1 + β2 x2t + β3 x3t + ut
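The joint F-test behind this situation can be sketched numerically by comparing the restricted and unrestricted residual sums of squares. The data below are simulated purely for illustration (the coefficient values and sample size are assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data for y_t = b1 + b2*x2_t + b3*x3_t + u_t
n = 100
x2 = rng.normal(size=n)
x3 = rng.normal(size=n)
y = 1.0 + 0.5 * x2 + 0.5 * x3 + rng.normal(size=n)

def rss(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

# Unrestricted model: intercept + x2 + x3.
# Restricted model under H0: b2 = b3 = 0, i.e. intercept only.
X_u = np.column_stack([np.ones(n), x2, x3])
X_r = np.ones((n, 1))
rss_u, rss_r = rss(y, X_u), rss(y, X_r)

m, k = 2, 3  # number of restrictions; parameters in the unrestricted model
F = ((rss_r - rss_u) / m) / (rss_u / (n - k))  # ~ F(m, n - k) under H0
print(F)
```

A large F rejects the joint null even when individual t-statistics look weak, e.g. when x2 and x3 are highly collinear.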
1. Diagnostics and Refinements: Stability
To conduct the Chow Test, follow these steps:
(i) Split the data into two sub-periods. Estimate the regression over the
whole period (this is now the restricted regression) and then for the two
sub-periods separately (3 regressions). Obtain the RSS for each
regression.
(ii) Compute an F-test to assess the difference between the RSS's:

test statistic = [RSS − (RSS1 + RSS2)] / (RSS1 + RSS2) × (T − 2k) / k

which follows an F(k, T − 2k) under the null of no structural break, where T is the total number of observations and k the number of parameters in each regression. Example sample periods:
▶ 1987M11–1992M12
▶ 1981M1–1992M12
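The steps above can be sketched as follows. The data are simulated with a deliberate slope shift (the break date, coefficients, and sample size are assumptions for illustration, not the Glaxo data from the slides):

```python
import numpy as np

def chow_test(y, X, split):
    """Chow test for a structural break at observation `split`.

    RSS      : pooled (restricted) regression over the whole sample
    RSS1, 2  : separate regressions on each sub-period
    Returns the F statistic, ~ F(k, T - 2k) under H0 of no break.
    """
    def rss(y, X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    T, k = X.shape
    rss_pooled = rss(y, X)
    rss_1 = rss(y[:split], X[:split])
    rss_2 = rss(y[split:], X[split:])
    return ((rss_pooled - (rss_1 + rss_2)) / k) / ((rss_1 + rss_2) / (T - 2 * k))

# Illustrative use: simulated data with a built-in break at t = 60
rng = np.random.default_rng(1)
T = 120
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
beta2 = np.where(np.arange(T) < 60, 0.5, 1.5)  # slope shifts in sub-period 2
y = 1.0 + beta2 * x + 0.3 * rng.normal(size=T)
print(chow_test(y, X, split=60))
```

Because the simulated slope genuinely changes, the statistic comes out well above typical F critical values.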
1. Diagnostics and Refinements: Stability
The results of the Chow test are:
▶ The null hypothesis is
H0 : β1^(1) = β1^(2) and β2^(1) = β2^(2)

where the superscripts (1) and (2) denote the two sub-periods.
1. Diagnostics and Refinements: Stability
A problem with the Chow test is that we need enough data to run the
regression on both sub-samples. An alternative is the predictive failure test:
▶ Estimate the regression over a “long” sub-period (i.e. most of the data)
and then predict values for the rest of the sample and compare the two.²
To calculate the test statistic:
– Run the regression for the whole period (the restricted regression) and
obtain the RSS.
– Run the regression for the “long” sub-period and obtain the RSS
(called RSS1).³
test statistic = (RSS − RSS1) / RSS1 × (n1 − K) / n2

where n2 = number of observations that the model is attempting to
‘predict’. The test statistic will follow an F(n2, n1 − K).
² There are 2 types of predictive failure tests. Forward predictive failure tests keep the last few observations for forecast testing (e.g. if we have observations for 1970Q1–1994Q4, estimate the model over 1970Q1–1993Q4 and predict 1994Q1–1994Q4). Backward predictive failure tests aim to “back-cast” the first few observations (e.g. estimate the model over 1971Q1–1994Q4 and backcast 1970Q1–1970Q4).
³ Note the label for the number of observations n1 in the “long” sub-period (even though it may come second).
1. Diagnostics and Refinements: Stability
An example of the predictive failure test: consider the following
model estimated for the CAPM beta (β2) on Glaxo:
▶ 1981M1–1992M12 (whole sample)
Can this regression adequately ‘forecast’ the values for the last two
years? The test statistic would be given by
test statistic = (0.0434 − 0.0420) / 0.0420 × (120 − 2) / 24 = 0.164
Compare the test statistic with an F(24,118) = 1.66 at the 5% level.
So we fail to reject the null hypothesis that the model can adequately
predict the last few observations.
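The arithmetic from the slide can be reproduced in a few lines (the RSS values and sample sizes are the ones quoted above):

```python
# Predictive failure test statistic, using the Glaxo CAPM numbers above
RSS, RSS1 = 0.0434, 0.0420  # whole-sample RSS and "long" sub-period RSS
n1, n2, K = 120, 24, 2      # long-period obs, predicted obs, parameters

stat = (RSS - RSS1) / RSS1 * (n1 - K) / n2
print(round(stat, 3))  # 0.164, to be compared with an F(24, 118) critical value
```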
1. Diagnostics and Refinements: Stability
OK... but how do we decide which sub-periods to use?
▶ As a rule of thumb, we could use all or some of the following:
– Plot the dependent variable, y, over time and split the data
according to any obvious structural changes in the series:
[Figure: yt (ranging roughly 0–1400) plotted against observation number (1–449), used to spot obvious structural breaks]
2. Time Series Regression
2. Dynamic Models
We could extend the model even further by adding extra lags, e.g.
x2t−2 , yt−3 .
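Constructing such lagged regressors is a one-liner per lag in pandas; a minimal sketch with hypothetical simulated data (the series and lengths are assumptions for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical data, for illustration only
rng = np.random.default_rng(2)
df = pd.DataFrame({"y": rng.normal(size=10), "x2": rng.normal(size=10)})

# Add lagged regressors, e.g. x2_{t-2} and y_{t-3}; shift(k) moves values down k rows
df["x2_lag2"] = df["x2"].shift(2)
df["y_lag3"] = df["y"].shift(3)

# The first max-lag rows contain NaNs and are dropped before estimation
df = df.dropna()
print(len(df))  # 10 - 3 = 7 usable observations
```

Each extra lag costs observations at the start of the sample, which is one practical limit on how many lags a short series can support.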
2. Dynamic Models
There are many reasons why we might want (or need) to include lags
in a regression:
▶ Inertia of the dependent variable (e.g. inflation).