
CHAPTER 5: TESTING FOR LINEAR RESTRICTIONS AND

STRUCTURAL CHANGE

5.1 Testing linear equality restrictions (Restricted Least Squares)


There are occasions where economic theory suggests that the coefficients in a
regression model satisfy some linear equality restrictions.
For instance, consider the Cobb-Douglas production function:
𝑌𝑖 = 𝛽1 𝑋2𝑖^𝛽2 𝑋3𝑖^𝛽3 𝑒^𝜇𝑖 , (1)

where 𝑌 = output, 𝑋2 = labour input, and 𝑋3 = capital input. In log form, the equation
becomes

𝑙𝑛𝑌𝑖 = 𝛽0 + 𝛽2 𝑙𝑛𝑋2𝑖 + 𝛽3 𝑙𝑛𝑋3𝑖 + 𝜇𝑖 , (2)


where 𝛽0 = 𝑙𝑛𝛽1 .
If there are constant returns to scale (an equiproportional change in the inputs produces the same proportional change in output), then
𝛽2 + 𝛽3 = 1, (3)
which is an example of a linear equality restriction.
How does one find out whether there are constant returns to scale, that is, whether the restriction is valid?
Two approaches can be used: the t-test approach and the F-test approach.
The F-Test Approach
We incorporate the restriction (3) into the estimating procedure by writing 𝛽2 = 1 − 𝛽3.
We can write the Cobb-Douglas production function as:
𝑙𝑛𝑌𝑖 = 𝛽0 + (1 − 𝛽3 )𝑙𝑛𝑋2𝑖 + 𝛽3 𝑙𝑛𝑋3𝑖 + 𝜇𝑖 ,
= 𝛽0 + 𝑙𝑛𝑋2𝑖 + 𝛽3 (𝑙𝑛𝑋3𝑖 − 𝑙𝑛𝑋2𝑖 ) + 𝜇𝑖
(𝑙𝑛𝑌𝑖 − 𝑙𝑛𝑋2𝑖 ) = 𝛽0 + 𝛽3 (𝑙𝑛𝑋3𝑖 − 𝑙𝑛𝑋2𝑖 ) + 𝜇𝑖 (4)
or
ln(𝑌𝑖 /𝑋2𝑖 ) = 𝛽0 + 𝛽3 ln(𝑋3𝑖 /𝑋2𝑖 ) + 𝜇𝑖 , (5)
where (𝑌𝑖/𝑋2𝑖) = output/labour ratio and (𝑋3𝑖/𝑋2𝑖) = capital/labour ratio.
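As an illustration, the unrestricted regression (2) and the restricted regression (5) could be estimated in SAS along the following lines. This is only a sketch: the dataset cobbdata and the variables y, x2 and x3 (output, labour, capital) are hypothetical names.

data cobbdata2;
  set cobbdata;
  lny    = log(y);        /* ln(output)                                   */
  lnx2   = log(x2);       /* ln(labour)                                   */
  lnx3   = log(x3);       /* ln(capital)                                  */
  lnyx2  = log(y/x2);     /* ln(output/labour), dependent variable in (5) */
  lnx3x2 = log(x3/x2);    /* ln(capital/labour), regressor in (5)         */
run;

proc reg data=cobbdata2;
  model lny = lnx2 lnx3;     /* unrestricted regression (2): gives RSS_UR */
run;

proc reg data=cobbdata2;
  model lnyx2 = lnx3x2;      /* restricted regression (5): gives RSS_R    */
run;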

The procedure outlined in Equation (4) or (5) is restricted least squares (RLS). The
procedure can be generalized to models containing any number of explanatory
variables and more than one linear restriction.
𝐹 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈𝑅)/𝑚] / [𝑅𝑆𝑆𝑈𝑅/(𝑛 − 𝑘)] follows the F distribution with 𝑚 and (𝑛 − 𝑘) df, where

𝑚 =number of linear restrictions


𝑘 =number of parameters in the unrestricted regression
𝑛 =number of observations
𝑅𝑆𝑆𝑈𝑅 = 𝑅𝑆𝑆 of the unrestricted regression (2)
𝑅𝑆𝑆𝑅 = 𝑅𝑆𝑆 of the restricted regression (4)

F-test for linear restrictions


Suppose we have m linear restrictions on the parameters of a model. To test
whether the restrictions are valid, we use the F-test constructed as follows:
• Apply OLS to the unrestricted model and obtain the residual sum of squares
𝑅𝑆𝑆𝑈𝑅 with 𝑛 − 𝑘 df where 𝑘 is the number of parameters.
• Estimate the restricted model (which has 𝑘 − 𝑚 parameters) and obtain the corresponding
residual sum of squares 𝑅𝑆𝑆𝑅 with 𝑛 − (𝑘 − 𝑚) = 𝑛 − 𝑘 + 𝑚 df.
• Compute the 𝐹-statistic: 𝐹𝑐𝑎𝑙 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈𝑅)/𝑚] / [𝑅𝑆𝑆𝑈𝑅/(𝑛 − 𝑘)].
• The null hypothesis that the restrictions hold is rejected at the 𝛼 level of significance if
𝐹𝑐𝑎𝑙 > 𝐹𝑚,(𝑛−𝑘),𝛼 (see the SAS sketch after this list).
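The calculation above can be carried out directly in a SAS data step. The following is a minimal sketch; the four input values are arbitrary placeholders to be replaced by the RSS figures and dimensions from one's own regressions (FINV gives the critical value and PROBF the cumulative F probability):

data ftest;
  rss_ur = 150;            /* placeholder: RSS of the unrestricted regression */
  rss_r  = 200;            /* placeholder: RSS of the restricted regression   */
  m = 1; n = 30; k = 3;    /* placeholders: restrictions, observations, parameters */
  f_cal = ((rss_r - rss_ur)/m) / (rss_ur/(n - k));
  f_cv  = finv(0.95, m, n - k);          /* 5% critical value */
  p_val = 1 - probf(f_cal, m, n - k);    /* p-value of the test */
run;

proc print data=ftest;
run;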

Testing for Linear Restrictions in SAS


Example 1
𝑌𝑖 = 𝛽0 + 𝛽1 𝑋𝑖1 + 𝛽2 𝑋𝑖2 + 𝛽3 𝑋𝑖3 + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖 (Unrestricted model)

𝐻0 : 𝛽1 = 1 and 𝛽2 − 𝛽3 = 4

𝐻1 : 𝛽1 ≠ 1 and/or 𝛽2 − 𝛽3 ≠ 4 (at least one of the restrictions does not hold)

𝑌∗ = 𝛽0 + 𝛽3 𝑋∗ + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖 (Restricted model)

Derivation of the restricted model

𝑌𝑖 = 𝛽0 + 𝛽1 𝑋𝑖1 + 𝛽2 𝑋𝑖2 + 𝛽3 𝑋𝑖3 + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖

Imposing 𝛽1 = 1 and 𝛽2 = 𝛽3 + 4 (from 𝛽2 − 𝛽3 = 4) gives
𝑌𝑖 = 𝛽0 + 𝑋𝑖1 + (𝛽3 + 4)𝑋𝑖2 + 𝛽3 𝑋𝑖3 + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖
= 𝛽0 + 𝑋𝑖1 + 𝛽3 𝑋𝑖2 + 4𝑋𝑖2 + 𝛽3 𝑋𝑖3 + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖
[𝑌𝑖 − 𝑋𝑖1 − 4𝑋𝑖2] = 𝛽0 + 𝛽3 (𝑋𝑖2 + 𝑋𝑖3) + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖

𝑌∗ = 𝛽0 + 𝛽3 𝑋∗ + 𝛽4 𝑋𝑖4 + 𝛽5 𝑋𝑖5 + 𝑢𝑖 ,

where 𝑌∗ = [𝑌𝑖 − 𝑋𝑖1 − 4𝑋𝑖2] and 𝑋∗ = (𝑋𝑖2 + 𝑋𝑖3).
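Using the variable names that appear in the SAS example below (dataset sasrest1 with regressors x1–x5), the restricted model could be estimated after a data step along these lines (a sketch):

data sasrest1r;
  set sasrest1;
  ystar = y - x1 - 4*x2;   /* Y* = Y - X1 - 4*X2 */
  xstar = x2 + x3;         /* X* = X2 + X3       */
run;

proc reg data=sasrest1r;
  model ystar = xstar x4 x5;   /* restricted regression: gives RSS_R */
run;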

Test Statistic
𝐹𝑐𝑎𝑙 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑢)/𝑚] / [𝑅𝑆𝑆𝑢/(𝑛 − 𝑘)]

𝑅𝑆𝑆𝑢 = 8858.906
𝑅𝑆𝑆𝑅 = 103612.6
𝑚 =? , 𝑘 =? , 𝑛 = 20
𝐹𝑐𝑎𝑙 =?
𝐹𝑐𝑣 =?
Decision

F-test for linear restrictions in SAS


proc reg data=sasrest1;
  model y = x1 x2 x3 x4 x5;
  test x1=1, x2-x3=4;
run;

proc reg data=sasrest2;
  model y = x1 x2 x3 x4 x5;
  test x2=1, x3-x4=2;
run;
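The TEST statement computes the F-test for the restrictions from the unrestricted fit. If the restricted coefficient estimates themselves are of interest, PROC REG also provides a RESTRICT statement that imposes the restrictions during estimation; a sketch for Example 1:

proc reg data=sasrest1;
  model y = x1 x2 x3 x4 x5;
  restrict x1=1, x2-x3=4;   /* restricted least squares estimates under H0 */
run;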

5.2 Testing for Structural Change/ Parameter Stability of Regression Models
When we use regression models involving time series data, it may happen that there
is structural change in the relationship between 𝑌 and the regressors. By structural
change we mean that the values of the parameters of the model do not remain the
same throughout the entire period. Structural change may be due to external forces
like recession, oil price shocks, policy changes, tax changes, strikes, conflicts,
embargoes, sanctions etc.
When we estimate a multiple regression equation and use it for prediction at future points in
time, we assume that the parameters are constant over the entire period of estimation and
prediction. To test the hypothesis of parameter constancy (stability), i.e. structural stability
of the regression model, several tests have been proposed; one of them is the Chow test.
Example
Consider the relationship between savings and personal income in the pre- and post-1982
recession periods. The data are divided into two time periods, 1970 − 1981 and 1982 − 1995.
Possible regressions:
Time period 1970 − 1981: 𝑌𝑡 = 𝜆1 + 𝜆2 𝑋𝑡 + 𝜇1𝑡 , 𝑛1 = 12 (1)
Time period 1982 − 1995: 𝑌𝑡 = 𝛾1 + 𝛾2 𝑋𝑡 + 𝜇2𝑡 , 𝑛2 = 14 (2)
Time period 1970 − 1995: 𝑌𝑡 = 𝛼1 + 𝛼2 𝑋𝑡 + 𝜇3𝑡 , 𝑛 = 26 (3)
Regression (3) assumes that there is no difference between the two time periods and
therefore estimates the relationship between savings and income for the entire period
(𝑛 = 26). This regression assumes that the intercept as well as the slope coefficient
remains the same over the entire period, i.e. there is no structural change. If this is the
situation, then 𝛼1 = 𝜆1 = 𝛾1 and 𝛼2 = 𝜆2 = 𝛾2.
Regressions (1) and (2) assume that the regressions in the two time periods are
different, i.e. the intercepts and slope coefficients differ, as indicated by the
different parameters.
Chow Test
Purpose: Testing for structural change or parameter stability of regression models.
Assumptions
The test assumes that:
1. 𝜇1𝑡 ~ 𝑁(0, 𝜎²) and 𝜇2𝑡 ~ 𝑁(0, 𝜎²), i.e. the error terms in the subperiod regressions
are normally distributed with the same (homoscedastic) variance 𝜎².
2. The error terms 𝜇1𝑡 and 𝜇2𝑡 are independently distributed.
Hypotheses:
𝐻0 :There is no structural change i.e. regressions in the subperiods are the same.

𝐻1 :There is structural change.
Procedure
• Estimate regression (3), which is appropriate if the parameters are stable (no structural
change), and obtain 𝑅𝑆𝑆𝑅 (the restricted residual sum of squares, because it is obtained by
imposing the restrictions 𝜆1 = 𝛾1 and 𝜆2 = 𝛾2 , i.e. that the subperiod regressions are
not different).
• Estimate equation (1) and obtain 𝑅𝑆𝑆1.
• Estimate equation (2) and obtain 𝑅𝑆𝑆2.
• Obtain the unrestricted residual sum of squares 𝑅𝑆𝑆𝑈 . 𝑅𝑆𝑆𝑈 = 𝑅𝑆𝑆1 + 𝑅𝑆𝑆2 .
The idea behind the Chow test is that if there is no structural change (i.e. regressions (1)
and (2) are the same), then 𝑅𝑆𝑆𝑅 and 𝑅𝑆𝑆𝑈 should not be statistically different.

• Calculate
𝐹𝑐𝑎𝑙 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈)/𝑘] / [𝑅𝑆𝑆𝑈/(𝑛1 + 𝑛2 − 2𝑘)] ~ 𝐹(𝑘, 𝑛1+𝑛2−2𝑘) ,
where 𝑘 is the number of parameters estimated in each subperiod regression.
Decision rule
We do not reject the null hypothesis of parameter stability (no structural change) if
𝐹𝑐𝑎𝑙 < 𝐹𝑐𝑣 ,
OR
𝐹𝑐𝑎𝑙 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈)/(𝑘 + 1)] / [𝑅𝑆𝑆𝑈/(𝑛1 + 𝑛2 − 2𝑘 − 2)] ~ 𝐹(𝑘+1, 𝑛1+𝑛2−2𝑘−2) ,
where 𝑘 is the number of explanatory variables.
The regression from the pooled data imposes the restriction that the parameters are
the same in the two periods.
The dummy variable method is used to test for stability of individual coefficients.
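As a sketch of the dummy variable approach for the savings-income example above (the dataset savdata and the variables year, y and x are hypothetical names):

data savdata2;
  set savdata;
  d  = (year >= 1982);   /* 1 for 1982-1995, 0 for 1970-1981 */
  dx = d*x;              /* slope interaction term           */
run;

proc reg data=savdata2;
  model y = x d dx;      /* d captures an intercept shift, dx a slope shift */
  test d=0, dx=0;        /* joint test of no structural change              */
run;

The individual t-tests on d and dx indicate whether it is the intercept, the slope, or both that differ between the periods.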
Example
Data on per capita food consumption, the price of food and per capita income for the years
1927 − 1941 and 1948 − 1962 were used to estimate the following equations. World
War II years were excluded.
Equation 1: 𝑙𝑛𝑞 = 𝛼 + 𝛽1 𝑙𝑛𝑃 + 𝛽2 𝑙𝑛𝑌
Equation 2: 𝑙𝑛𝑞 = 𝛼 + 𝛽1 𝑙𝑛𝑃 + 𝛽2 𝑙𝑛𝑌 + 𝛽3 (𝑙𝑛𝑃)(𝑙𝑛𝑌)

Equation 𝟏:
𝐻0 : Parameter stability (no structural change)
𝐻1 : Parameter instability (structural change)
𝑅𝑆𝑆𝑈 = (𝑅𝑆𝑆1 + 𝑅𝑆𝑆2 ) = (0.1151 + 0.0544)/100 = 0.001695 with 24 df
𝑅𝑆𝑆𝑅 = 0.002866 with 27 df
𝐹𝑐𝑎𝑙 = [(0.002866 − 0.001695)/3] / [0.001695/24] = 5.53.

𝐹𝑐𝑣 = 𝐹(3,24)0.05 = 3.01.


Since 𝐹𝑐𝑎𝑙 > 𝐹𝑐𝑣 , we reject the null hypothesis of parameter stability.

If we look at the individual coefficients, 𝛽̂1 is almost the same in the two subperiod
regressions. Thus, it appears that the price elasticity has remained constant, but the income
elasticity has changed between the two periods.
Drawbacks of the Chow test
• The Chow test only tells us whether the two subperiod regressions are different,
without telling us whether the difference is on account of the intercepts, the slopes
or both.
• The Chow test assumes that we know the point of structural break in advance.
Other tests that can be used are the Likelihood ratio test (LR), the Wald test (W) and
the Lagrange Multiplier test.

Practical Example: Chow test for structural change

𝑌 = 𝛼0 + 𝛼1 𝑋 + 𝑢 (e.g. 𝑛 = 20): find 𝑅𝑆𝑆𝑅
𝑌 = 𝛽0 + 𝛽1 𝑋 + 𝑢 (e.g. 𝑛1 = 11): find 𝑅𝑆𝑆1
𝑌 = 𝛾0 + 𝛾1 𝑋 + 𝑢 (e.g. 𝑛2 = 9): find 𝑅𝑆𝑆2
𝐻0 : 𝛽0 = 𝛾0 and 𝛽1 = 𝛾1 (no structural break/change)
𝐻1 : 𝛽0 ≠ 𝛾0 and/or 𝛽1 ≠ 𝛾1 (structural break)
𝐹 = [(𝑅𝑆𝑆𝑅 − 𝑅𝑆𝑆𝑈)/𝑘] / [𝑅𝑆𝑆𝑈/(𝑛1 + 𝑛2 − 2𝑘)], where 𝑅𝑆𝑆𝑈 = 𝑅𝑆𝑆1 + 𝑅𝑆𝑆2 .

Chow test in SAS


proc autoreg data=saschow1;
  model y = x1 / chow=(12);
run;
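The same test can also be computed by hand from three separate regressions, as in the practical example above. The following sketch uses the same dataset saschow1; the obs= and firstobs= options select the two subperiods:

proc reg data=saschow1;
  model y = x1;                       /* pooled regression (n = 20): gives RSS_R  */
run;

proc reg data=saschow1(obs=11);
  model y = x1;                       /* first subperiod (n1 = 11): gives RSS_1   */
run;

proc reg data=saschow1(firstobs=12);
  model y = x1;                       /* second subperiod (n2 = 9): gives RSS_2   */
run;

𝑅𝑆𝑆𝑈 = 𝑅𝑆𝑆1 + 𝑅𝑆𝑆2, and the F-statistic then follows from the formula above with 𝑘 = 2.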
