EViews 9 User's Guide II
ISBN:978-1-880411-27-8
This software product, including program code and manual, is copyrighted, and all rights are
reserved by IHS Global Inc. The distribution and sale of this product are intended for the use of
the original purchaser only. Except as permitted under the United States Copyright Act of 1976,
no part of this product may be reproduced or distributed in any form or by any means, or stored
in a database or retrieval system, without the prior written permission of IHS Global Inc.
Disclaimer
The authors and IHS Global Inc. assume no responsibility for any errors that may appear in this
manual or the EViews program. The user assumes all responsibility for the selection of the program to achieve intended results, and for the installation, use, and results obtained from the program.
Trademarks
EViews is a registered trademark of IHS Global Inc. Windows, Excel, PowerPoint, and Access
are registered trademarks of Microsoft Corporation. PostScript is a trademark of Adobe Corporation. X11.2 and X12-ARIMA Version 0.2.7, and X-13ARIMA-SEATS are seasonal adjustment programs developed by the U. S. Census Bureau. Tramo/Seats is copyright by Agustin Maravall and
Victor Gomez. Info-ZIP is provided by the persons listed in the infozip_license.txt file. Please
refer to this file in the EViews directory for more information on Info-ZIP. Zlib was written by
Jean-loup Gailly and Mark Adler. More information on zlib can be found in the zlib_license.txt
file in the EViews directory. Bloomberg is a trademark of Bloomberg Finance L.P. All other product names mentioned in this manual may be trademarks or registered trademarks of their respective companies.
Table of Contents
EVIEWS 9 USERS GUIDE I 1
PREFACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
PART I. EVIEWS FUNDAMENTALS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
CHAPTER 1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
What is EViews? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
The EViews Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Custom Edit Fields in EViews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Breaking or Canceling in EViews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Closing EViews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Where to Go For Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
EViews Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
CHAPTER 2. A DEMONSTRATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Getting Data into EViews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Examining the Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Estimating a Regression Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Specification and Hypothesis Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Modifying the Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Forecasting from an Estimated Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Additional Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Seasonal Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Automatic ARIMA Forecasting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Forecast Averaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Exponential Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Hodrick-Prescott Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .491
Frequency (Band-Pass) Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Whiten Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
Distribution Plot Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .845
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
CHAPTER 39. STATE SPACE MODELS AND THE KALMAN FILTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . .673
Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
Specifying a State Space Model in EViews . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
Working with the State Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
Converting from Version 3 Sspace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
Technical Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1002
INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043
Preface
The first volume of the EViews User's Guide describes the basics of using EViews and documents a number of tools for basic statistical analysis using series and group objects.
The second volume of the EViews User's Guide offers a description of EViews' interactive tools for advanced statistical and econometric analysis. The material in User's Guide II may be divided into several parts:
• Part V. "Basic Single Equation Analysis" on page 3 discusses the use of the equation object to perform standard regression analysis: ordinary least squares, weighted least squares, nonlinear least squares, basic time series regression, specification testing, and forecasting.
• Part VI. "Advanced Single Equation Analysis," beginning on page 229, documents two-stage least squares (TSLS) and generalized method of moments (GMM), autoregressive conditional heteroskedasticity (ARCH) models, single-equation cointegration specifications, discrete and limited dependent variable models, generalized linear models (GLM), robust least squares, least squares regression with breakpoints, threshold regression, switching regression, quantile regression, and user-specified likelihood estimation.
• Part VII. "Advanced Univariate Analysis," on page 525, describes advanced tools for univariate time series analysis, including unit root tests in both conventional and panel data settings, variance ratio tests, and the BDS test for independence.
• Part VIII. "Multiple Equation Analysis," on page 581, describes estimation and forecasting with systems of equations (least squares, weighted least squares, SUR, system TSLS, 3SLS, FIML, GMM, multivariate ARCH), vector autoregression and error correction models (VARs and VECs), state space models, and model solution.
• Part IX. "Panel and Pooled Data," on page 755, documents working with and estimating models with time series cross-sectional data. The analysis may involve small numbers of cross-sections, with series for each cross-section variable (pooled data), or large numbers of cross-sections, with stacked data (panel data).
• Part X. "Advanced Multivariate Analysis," beginning on page 937, describes tools for testing for cointegration and for performing Factor Analysis.
Equation Objects
Single equation regression estimation in EViews is performed using the equation object. To
create an equation object in EViews: select Object/New Object.../Equation or Quick/Estimate Equation from the main menu, or simply type the keyword equation in the command window.
Next, you will specify your equation in the Equation Specification dialog box that appears,
and select an estimation method. Below, we provide details on specifying equations in
EViews. EViews will estimate the equation and display results in the equation window.
The estimation results are stored as part of the equation object so they can be accessed at
any time. Simply open the object to display the summary results, or to access EViews tools
for working with results from an equation object. For example, you can retrieve the sum-of-squares from any equation, or you can use the estimated equation as part of a multi-equation model.
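For instance, the following command-window sketch creates an equation object, displays it, and retrieves a stored result (the object name EQ_CS is hypothetical; the series CS and INC are those used in the examples below):
equation eq_cs.ls cs c inc
show eq_cs
scalar ssr_cs = eq_cs.@ssr
The first line creates and estimates the equation by least squares, the second opens its window, and the third saves the sum-of-squared residuals as a scalar object.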
Note the presence of the series name C in the list of regressors. This is a built-in EViews
series that is used to specify a constant in a regression. EViews does not automatically
include a constant in a regression so you must explicitly list the constant (or its equivalent)
as a regressor. The internal series C does not appear in your workfile, and you may not use
it outside of specifying an equation. If you need a series of ones, you can generate a new
series, or use the number 1 as an auto-series.
You may have noticed that there is a pre-defined object C in your workfile. This is the default coefficient vector. When you specify an equation by listing variable names, EViews stores the estimated coefficients in this vector, in the order of appearance in the list. In the example above, the constant will be stored in C(1) and the coefficient on INC will be held in C(2).
Lagged series may be included in statistical operations using the same notation as in generating a new series with a formula: put the lag in parentheses after the name of the series.
For example, the specification:
cs cs(-1) c inc
tells EViews to regress CS on its own lagged value, a constant, and INC. The coefficient for
lagged CS will be placed in C(1), the coefficient for the constant is C(2), and the coefficient
of INC is C(3).
You can include a consecutive range of lagged series by using the word "to" between the lags. For example:
cs c cs(-1 to -4) inc
regresses CS on a constant, CS(-1), CS(-2), CS(-3), CS(-4), and INC. If you don't include the
first lag, it is taken to be zero. For example:
cs c inc(to -2) inc(-4)
regresses CS on a constant, INC, INC(-1), INC(-2), and INC(-4).
Typing the list of series may be cumbersome, especially if you are working with many
regressors. If you wish, EViews can create the specification list for you. First, highlight the
dependent variable in the workfile window by single clicking on the entry. Next, CTRL-click
on each of the explanatory variables to highlight them as well. When you are done selecting
all of your variables, double click on any of the highlighted series, and select Open/Equation, or right click and select Open/as Equation.... The Equation Specification dialog
box should appear with the names entered in the specification field. The constant C is automatically included in this list; you must delete the C if you do not wish to include the constant.
An equation formula in EViews is a mathematical expression involving regressors and coefficients. To specify an equation using a formula, simply enter the expression in the dialog in
place of the list of variables. EViews will add an implicit additive disturbance to this equation and will estimate the parameters of the model using least squares.
When you specify an equation by list, EViews converts this into an equivalent equation formula. For example, the list,
log(cs) c log(cs(-1)) log(inc)
is interpreted by EViews as the formula:
log(cs) = c(1) + c(2)*log(cs(-1)) + c(3)*log(inc)
Equations do not have to have a dependent variable followed by an equal sign and then an expression. The "=" sign can be anywhere in the formula, as in:
log(urate) - c(1)*dmr = c(2)
The residuals for this equation are given by:
$e = \log(\text{URATE}) - c(1)\,\text{DMR} - c(2)$  (19.1)
and EViews will minimize the sum-of-squares of these residuals.
If you wish, you can also specify an equation as a simple expression, without a dependent variable and an equal sign; EViews then treats the entire expression as the disturbance term. For example, if you specify an equation as:
c(1)*x + c(2)*y + 4*z
EViews will find the coefficient values that minimize the sum of squares of the given expression, in this case (C(1)*X+C(2)*Y+4*Z). While EViews will estimate an expression of this
type, since there is no dependent variable, some regression statistics (e.g. R-squared) are not
reported and the equation cannot be used for forecasting. This restriction also holds for any
equation that includes coefficients to the left of the equal sign. For example, if you specify:
x + c(1)*y = c(2)*z
EViews finds the values of C(1) and C(2) that minimize the sum of squares of (X + C(1)*Y - C(2)*Z). The estimated coefficients will be identical to those from an equation specified
using:
x = -c(1)*y + c(2)*z
To estimate a nonlinear model, simply enter the nonlinear formula. EViews will automatically detect the nonlinearity and estimate the model using nonlinear least squares. For
details, see Nonlinear Least Squares on page 40.
One benefit to specifying an equation by formula is that you can elect to use a different coefficient vector. To create a new coefficient vector, choose Object/New Object and select
Matrix-Vector-Coef from the main menu, type in a name for the coefficient vector, and click
OK. In the New Matrix dialog box that appears, select Coefficient Vector and specify how
many rows there should be in the vector. The object will be listed in the workfile directory
with the coefficient vector icon (the little b ).
You may then use this coefficient vector in your specification. For example, suppose you created coefficient vectors A and BETA, each with a single row. Then you can specify your
equation using the new coefficients in place of C:
log(cs) = a(1) + beta(1)*log(cs(-1))
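The same setup may be performed from the command line. A minimal sketch, assuming the coef declaration statement (which creates a named coefficient vector of the specified length):
coef(1) a
coef(1) beta
equation eq_cf.ls log(cs) = a(1) + beta(1)*log(cs(-1))
Here the estimates are stored in A(1) and BETA(1) rather than in the default vector C; the equation name EQ_CF is arbitrary.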
Estimation Sample
You should also specify the sample to be used in estimation. EViews will fill out the dialog
with the current workfile sample, but you can change the sample for purposes of estimation
by entering your sample string or object in the edit box (see "Samples" on page 127 of User's Guide I for details). Changing the estimation sample does not affect the current workfile sample.
If any of the series used in estimation contain missing data, EViews will temporarily adjust
the estimation sample of observations to exclude those observations (listwise exclusion).
EViews notifies you that it has adjusted the sample by reporting the actual sample used in
the estimation results:
Dependent Variable: Y
Method: Least Squares
Date: 08/08/09 Time: 14:44
Sample (adjusted): 1959M01 1989M12
Included observations: 340 after adjustments
Here we see the top of an equation output view. EViews reports that it has adjusted the sample. Out of the 372 observations in the period 1959M01 to 1989M12, EViews uses the 340
observations with valid data for all of the relevant variables.
You should be aware that if you include lagged variables in a regression, the degree of sample adjustment will differ depending on whether data for the pre-sample period are available
or not. For example, suppose you have nonmissing data for the two series M1 and IP over
the period 1959M01 to 1989M12 and specify the regression as:
m1 c ip ip(-1) ip(-2) ip(-3)
If you set the estimation sample to the period 1959M01 to 1989M12, EViews adjusts the sample to:
Dependent Variable: M1
Method: Least Squares
Date: 08/08/09 Time: 14:45
Sample: 1960M01 1989M12
Included observations: 360
since data for IP(-3) are not available until 1959M04. However, if you set the estimation sample to the period 1960M01 to 1989M12, EViews will not make any adjustment to the sample since all values of IP(-3) are available during the estimation sample.
Some operations, most notably estimation with MA terms and ARCH, do not allow missing
observations in the middle of the sample. When executing these procedures, an error message is displayed and execution is halted if an NA is encountered in the middle of the sample. EViews handles missing data at the very start or the very end of the sample range by
adjusting the sample endpoints and proceeding with the estimation procedure.
Estimation Options
EViews provides a number of estimation options. These options allow you to weight the estimating equation, to compute heteroskedasticity and auto-correlation robust covariances,
and to control various features of your estimation algorithm. These options are discussed in
detail in Estimation Options on page 43.
Equation Output
When you click OK in the Equation Specification dialog, EViews displays the equation window containing the estimation output view (the examples in this chapter are obtained using the workfile Basics.WF1):
Dependent Variable: LOG(M1)
Method: Least Squares
Date: 08/08/09   Time: 14:51
Sample: 1959M01 1989M12
Included observations: 372

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             -1.699912     0.164954    -10.30539     0.0000
LOG(IP)        1.765866     0.043546     40.55199     0.0000
TB3           -0.011895     0.004628    -2.570016     0.0106

R-squared            0.886416   Mean dependent var      5.663717
Adjusted R-squared   0.885800   S.D. dependent var      0.553903
S.E. of regression   0.187183   Akaike info criterion  -0.505429
Sum squared resid    12.92882   Schwarz criterion      -0.473825
Log likelihood       97.00979   Hannan-Quinn criter.   -0.492878
F-statistic          1439.848   Durbin-Watson stat      0.008687
Prob(F-statistic)    0.000000
Using matrix notation, the underlying regression model may be written as:
$y = X\beta + \epsilon$  (19.2)
where $y$ is the vector of observations on the dependent variable, $X$ is the matrix of regressors, $\beta$ is the coefficient vector, and $\epsilon$ is the disturbance vector.
Coefficient Results
Regression Coefficients
The column labeled Coefficient depicts the estimated coefficients. The least squares
regression coefficients b are computed by the standard OLS formula:
$b = (X'X)^{-1}X'y$  (19.3)
If your equation is specified by list, the coefficients will be labeled in the Variable column
with the name of the corresponding regressor; if your equation is specified by formula,
EViews lists the actual coefficients, C(1), C(2), etc.
For the simple linear models considered here, the coefficient measures the marginal contribution of the independent variable to the dependent variable, holding all other variables
fixed. If you have included C in your list of regressors, the corresponding coefficient is the
constant or intercept in the regression: it is the base level of the prediction when all of the
other independent variables are zero. The other coefficients are interpreted as the slope of
the relation between the corresponding independent variable and the dependent variable,
assuming all other variables do not change.
Standard Errors
The Std. Error column reports the estimated standard errors of the coefficient estimates.
The standard errors measure the statistical reliability of the coefficient estimates: the larger the standard errors, the more statistical noise in the estimates. If the errors are normally distributed, there are about 2 chances in 3 that the true regression coefficient lies within one
standard error of the reported coefficient, and 95 chances out of 100 that it lies within two
standard errors.
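As a quick worked illustration using the output above, the coefficient on LOG(IP) is 1.765866 with a standard error of 0.043546, so an approximate two-standard-error band is $1.765866 \pm 2 \times 0.043546 \approx [1.679,\ 1.853]$, an interval that lies well away from zero.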
The covariance matrix of the estimated coefficients is computed as:
$\mathrm{var}(b) = s^2 (X'X)^{-1}; \qquad s^2 = \hat{e}'\hat{e}/(T-k); \qquad \hat{e} = y - Xb$  (19.4)
where $\hat{e}$ is the vector of residuals. The standard errors of the estimated coefficients are the square
roots of the diagonal elements of the coefficient covariance matrix. You can view the whole
covariance matrix by choosing View/Covariance Matrix.
t-Statistics
The t-statistic, which is computed as the ratio of an estimated coefficient to its standard
error, is used to test the hypothesis that a coefficient is equal to zero. To interpret the t-statistic, you should examine the probability of observing the t-statistic given that the coefficient
is equal to zero. This probability computation is described below.
In cases where normality can only hold asymptotically, EViews will often report a z-statistic
instead of a t-statistic.
Probability
The last column of the output shows the probability of drawing a t-statistic (or a z-statistic)
as extreme as the one actually observed, under the assumption that the errors are normally
distributed, or that the estimated coefficients are asymptotically normally distributed.
This probability is also known as the p-value or the marginal significance level. Given a p-value, you can tell at a glance if you reject or accept the hypothesis that the true coefficient
is zero against a two-sided alternative that it differs from zero. For example, if you are performing the test at the 5% significance level, a p-value lower than 0.05 is taken as evidence
to reject the null hypothesis of a zero coefficient. If you want to conduct a one-sided test, the
appropriate probability is one-half that reported by EViews.
For the above example output, the hypothesis that the coefficient on TB3 is zero is rejected
at the 5% significance level but not at the 1% level. However, if theory suggests that the
coefficient on TB3 cannot be positive, then a one-sided test will reject the zero null hypothesis at the 1% level.
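In terms of the numbers reported above, the two-sided p-value for TB3 is 0.0106, so the one-sided p-value is $0.0106/2 = 0.0053$, which is below 0.01 and therefore rejects the zero null at the 1% level.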
The p-values for t-statistics are computed from a t-distribution with $T-k$ degrees of freedom. The p-values for z-statistics are computed using the standard normal distribution.
Summary Statistics
R-squared
The R-squared ($R^2$) statistic measures the success of the regression in predicting the values of the dependent variable within the sample. In standard settings, $R^2$ may be interpreted as the fraction of the variance of the dependent variable explained by the independent variables. The statistic will equal one if the regression fits perfectly, and zero if it fits no better than the simple mean of the dependent variable. It can be negative for a number of reasons: for example, if the regression does not have an intercept or constant, if the regression contains coefficient restrictions, or if the estimation method is two-stage least squares or ARCH.
EViews computes the (centered) R-squared as:
$R^2 = 1 - \dfrac{\hat{e}'\hat{e}}{(y - \bar{y})'(y - \bar{y})}; \qquad \bar{y} = \sum_{t=1}^{T} y_t / T$  (19.5)
where $\bar{y}$ is the mean of the dependent variable.
Adjusted R-squared
One problem with using $R^2$ as a measure of goodness of fit is that the $R^2$ will never decrease as you add more regressors. In the extreme case, you can always obtain an $R^2$ of one if you include as many independent regressors as there are sample observations.
The adjusted $R^2$, commonly denoted as $\bar{R}^2$, penalizes the $R^2$ for the addition of regressors which do not contribute to the explanatory power of the model. The adjusted $R^2$ is computed as:
$\bar{R}^2 = 1 - (1 - R^2)\,\dfrac{T-1}{T-k}$  (19.6)
The $\bar{R}^2$ is never larger than the $R^2$, can decrease as you add regressors, and for poorly fitting models, may be negative.
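As a check against the output above, with $R^2 = 0.886416$, $T = 372$, and $k = 3$, $\bar{R}^2 = 1 - (1 - 0.886416) \times 371/369 \approx 0.8858$, which matches the reported adjusted R-squared of 0.885800.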
Standard Error of the Regression (S.E. of regression)
The standard error of the regression is a summary measure based on the estimated variance of the residuals. It is computed as:
$s = \sqrt{\dfrac{\hat{e}'\hat{e}}{T-k}}$  (19.7)
Sum-of-Squared Residuals
The sum-of-squared residuals can be used in a variety of statistical calculations, and is presented separately for your convenience:
$\hat{e}'\hat{e} = \sum_{t=1}^{T} (y_t - x_t'b)^2$  (19.8)
Log Likelihood
EViews reports the value of the log likelihood function (assuming normally distributed
errors) evaluated at the estimated values of the coefficients. Likelihood ratio tests may be
conducted by looking at the difference between the log likelihood values of the restricted
and unrestricted versions of an equation.
The log likelihood is computed as:
$l = -\dfrac{T}{2}\left(1 + \log(2\pi) + \log(\hat{e}'\hat{e}/T)\right)$  (19.9)
When comparing EViews output to that reported from other sources, note that EViews does
not ignore constant terms in the log likelihood.
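Using the rounded values from the output above ($\hat{e}'\hat{e} = 12.92882$, $T = 372$), $l \approx -(372/2)\left(1 + \log 2\pi + \log(12.92882/372)\right) \approx 97.0$, in line with the reported log likelihood of 97.00979.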
Durbin-Watson Statistic
The Durbin-Watson statistic measures the serial correlation in the residuals. The statistic is
computed as
$DW = \dfrac{\sum_{t=2}^{T} (\hat{e}_t - \hat{e}_{t-1})^2}{\sum_{t=1}^{T} \hat{e}_t^2}$  (19.10)
See Johnston and DiNardo (1997, Table D.5) for a table of the significance points of the distribution of the Durbin-Watson statistic.
As a rule of thumb, if the DW is less than 2, there is evidence of positive serial correlation.
The DW statistic in our output is very close to zero, indicating the presence of strong positive serial correlation in the residuals. See "Background," beginning on page 87, for a more extensive discussion of the Durbin-Watson statistic and the consequences of serially correlated residuals.
There are better tests for serial correlation. In Testing for Serial Correlation on page 95, we
discuss the Q-statistic, and the Breusch-Godfrey LM test, both of which provide a more general testing framework than the Durbin-Watson test.
Mean and Standard Deviation (S.D.) of the Dependent Variable
The mean and standard deviation of the dependent variable $y$ are computed using the standard formulae:
$\bar{y} = \sum_{t=1}^{T} y_t / T; \qquad s_y = \sqrt{\dfrac{\sum_{t=1}^{T} (y_t - \bar{y})^2}{T-1}}$  (19.11)
Akaike Information Criterion
The Akaike Information Criterion (AIC) is computed as:
$AIC = -2l/T + 2k/T$  (19.12)
where $l$ is the log likelihood. The AIC is often used in model selection: smaller values of the AIC are preferred.
Schwarz Criterion
The Schwarz Criterion (SC) is an alternative to the AIC that imposes a larger penalty for additional coefficients:
$SC = -2l/T + (k \log T)/T$  (19.13)
Hannan-Quinn Criterion
The Hannan-Quinn Criterion (HQ) employs yet another penalty function:
$HQ = -2l/T + 2k \log(\log T)/T$  (19.14)
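As a check against the output above, with $l = 97.00979$, $T = 372$, and $k = 3$, $AIC = -2(97.00979)/372 + 6/372 \approx -0.5054$, matching the reported value of -0.505429; the Schwarz and Hannan-Quinn criteria may be verified in the same way.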
F-Statistic
The F-statistic reported in the regression output is from a test of the hypothesis that all of
the slope coefficients (excluding the constant, or intercept) in a regression are zero. For ordinary least squares models, the F-statistic is computed as:
$F = \dfrac{R^2/(k-1)}{(1 - R^2)/(T-k)}$  (19.15)
Under the null hypothesis with normally distributed errors, this statistic has an F-distribution with $k-1$ numerator degrees of freedom and $T-k$ denominator degrees of freedom.
The p-value given just below the F-statistic, denoted Prob(F-statistic), is the marginal significance level of the F-test. If the p-value is less than the significance level you are testing,
say 0.05, you reject the null hypothesis that all slope coefficients are equal to zero. For the
example above, the p-value is essentially zero, so we reject the null hypothesis that all of the
regression coefficients are zero. Note that the F-test is a joint test so that even if all the t-statistics are insignificant, the F-statistic can be highly significant.
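Again using the output above, $F = (0.886416/2) / \left((1 - 0.886416)/369\right) \approx 1439.8$, in agreement with the reported F-statistic of 1439.848.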
Note that since the F-statistic depends only on the sums-of-squared residuals of the estimated equation, it is not robust to heteroskedasticity or serial correlation. The use of robust estimators of the coefficient covariances ("Robust Standard Errors" on page 32) will have no
effect on the F-statistic. If you do choose to employ robust covariance estimators, EViews
will also report a robust Wald test statistic and p-value for the hypothesis that all non-intercept coefficients are equal to zero.
Selected Keywords that Return Scalar Values
@coefcov(i,j)   covariance of coefficient estimates i and j
@coefs(i)   i-th coefficient value
@dw   Durbin-Watson statistic
@f   F-statistic
@fprob   F-statistic probability
@hq   Hannan-Quinn information criterion
@jstat   J-statistic: value of the GMM objective function (for GMM)
@logl   value of the log likelihood function
@meandep   mean of the dependent variable
@ncoef   number of estimated coefficients
@r2   R-squared statistic
@rbar2   adjusted R-squared statistic
@rlogl   restricted log likelihood (where applicable)
@regobs   number of observations in the regression
@schwarz   Schwarz information criterion
@sddep   standard deviation of the dependent variable
@se   standard error of the regression
@ssr   sum of squared residuals
@stderrs(i)   standard error for coefficient i
@tstats(i)   t-statistic value for coefficient i
c(i)   i-th element of the default coefficient vector for the equation (if applicable)
Selected Keywords that Return Vector or Matrix Objects
@coefcov   covariance matrix of the coefficient estimates
@coefs   coefficient vector
@stderrs   vector of standard errors for the coefficients
@tstats   vector of t-statistic values for the coefficients
@pvals   vector of p-values for the coefficients
In addition, @smpl (the estimation sample) and @updatetime (the time and date at which the equation was last estimated) return string values.
See also Equation (p. 31) in the Object Reference for a complete list.
Functions that return a vector or matrix object should be assigned to the corresponding
object type. For example, you should assign the results from @tstats to a vector:
vector tstats = eq1.@tstats
For documentation on using vectors and matrices in EViews, see Chapter 11. Matrix Language, on page 257 of the Command and Programming Reference.
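For example, assuming an estimated equation named EQ1 as above, the following lines store several of these results in workfile objects:
scalar rsq = eq1.@r2
scalar dw_stat = eq1.@dw
vector se_all = eq1.@stderrs
sym covmat = eq1.@coefcov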
Views of an Equation
• Representations. Displays the equation in three basic forms: EViews command form showing the command associated with the equation, as an algebraic equation with symbolic coefficients, and as an equation with a text representation of the estimated values of the coefficients. You can cut-and-paste from the representations view into any application that supports the Windows clipboard.
• Estimation Output. Displays the equation output results described above.
• Actual, Fitted, Residual. These views display the actual and fitted values of the dependent variable and the residuals from the regression in tabular and graphical form. Actual, Fitted, Residual Table displays these values in table form. Note that the actual value is always the sum of the fitted value and the residual. Actual, Fitted, Residual Graph displays a standard EViews graph of the actual values, fitted values, and residuals, along with dotted lines showing plus and minus one estimated standard error. Residual Graph plots only the residuals, while the Standardized Residual Graph plots the residuals divided by the estimated residual standard deviation.
• ARMA structure.... Provides views which describe the estimated ARMA structure of your residuals. Details on these views are provided in "ARMA Structure" on page 116.
• Gradients and Derivatives. Provides views which describe the gradients of the objective function and the information about the computation of any derivatives of the regression function. Details on these views are provided in Appendix D. "Gradients and Derivatives," on page 1019.
• Covariance Matrix. Displays the covariance matrix of the coefficient estimates as a spreadsheet view. To save this covariance matrix as a matrix object, use the @coefcov member of the equation, as in
sym mycov = eq1.@coefcov
Procedures of an Equation
• Specify/Estimate. Brings up the Equation Specification dialog box so that you can modify your specification. You can edit the equation specification, or change the estimation method or estimation sample.
• Forecast. Forecasts or fits values using the estimated equation. Forecasting using equations is discussed in Chapter 23. "Forecasting from an Equation," on page 135.
• Make Residual Series. Saves the residuals from the regression as a series in the workfile. Depending on the estimation method, you may choose from three types of residuals: ordinary, standardized, and generalized. For ordinary least squares, only the ordinary residuals may be saved.
• Make Regressor Group. Creates an untitled group comprised of all the variables used in the equation (with the exception of the constant).
• Make Gradient Group. Creates a group containing the gradients of the objective function with respect to the coefficients of the model.
• Make Derivative Group. Creates a group containing the derivatives of the regression function with respect to the coefficients in the regression function.
• Make Model. Creates an untitled model containing a link to the estimated equation if a named equation or the substituted coefficients representation of an untitled equation. This model can be solved in the usual manner. See Chapter 40. "Models," on page 699 for information on how to use models for forecasting and simulations.
• Update Coefs from Equation. Places the estimated coefficients of the equation in the coefficient vector. You can use this procedure to initialize starting values for various estimation procedures.
There is an even better approach to saving the residuals. Even if you have already overwritten the RESID series, you can always create the desired series using EViews built-in procedures if you still have the equation object. If your equation is named EQ1, open the equation window and select Proc/Make Residual Series..., or enter:
eq1.makeresid res1
to create the residual series RES1 in the workfile.
The estimated coefficients may also be used to compute fitted values. For example, the command:
series cshat = eq1.c(1) + eq1.c(2)*gdp
forms the fitted value of CS, CSHAT, from the OLS regression coefficients and the independent variables from the equation object EQ1.
Note that while EViews will accept a series generating equation which does not explicitly
refer to a named equation:
series cshat = c(1) + c(2)*gdp
and will use the existing values in the C coefficient vector, we strongly recommend that you
always use named equations to identify the appropriate coefficients. In general, C will contain the correct coefficient values only immediately following estimation or a coefficient
update. Using a named equation, or selecting Proc/Update Coefs from Equation, guarantees that you are using the correct coefficient values.
An alternative to referring to the coefficient vector is to reference the @coefs elements of
your equation (see Selected Keywords that Return Scalar Values on page 16). For example,
the examples above may be written as:
series cshat=eq1.@coefs(1)+eq1.@coefs(2)*gdp
EViews assigns an index to each coefficient in the order that it appears in the representations
view. Thus, if you estimate the equation:
equation eq01.ls y=c(10)+b(5)*y(-1)+a(7)*inc
then EQ01.@COEFS(1) contains C(10), EQ01.@COEFS(2) contains B(5), and EQ01.@COEFS(3) contains A(7), following the order of appearance in the representations view. The same indexing applies to an equation, EQ02, specified by formula using BETA(1) and BETA(2), where BETA is a coefficient vector. Again, however, we recommend that you use the @coefs
elements to refer to the coefficients of EQ02. Alternatively, you can update the coefficients in
BETA prior to use by selecting Proc/Update Coefs from Equation from the equation window. Note that EViews does not allow you to refer to the named equation coefficients
EQ02.BETA(1) and EQ02.BETA(2). You must instead use the expressions, EQ02.@COEFS(1)
and EQ02.@COEFS(2).
Estimation Problems
Exact Collinearity
If the regressors are very highly collinear, EViews may encounter difficulty in computing the
regression estimates. In such cases, EViews will issue an error message Near singular
matrix. When you get this error message, you should check to see whether the regressors
are exactly collinear. The regressors are exactly collinear if one regressor can be written as a
linear combination of the other regressors. Under exact collinearity, the regressor matrix X
does not have full column rank and the OLS estimator cannot be computed.
You should watch out for exact collinearity when you are using dummy variables in your
regression. A set of mutually exclusive dummy variables and the constant term are exactly
collinear. For example, suppose you have quarterly data and you try to run a regression with
the specification:
y c x @seas(1) @seas(2) @seas(3) @seas(4)
EViews will return a Near singular matrix error message since the constant and the four
quarterly dummy variables are exactly collinear through the relation:
c = @seas(1) + @seas(2) + @seas(3) + @seas(4)
In this case, simply drop either the constant term or one of the dummy variables.
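For example, either of the following specifications avoids the exact collinearity implied by the relation above; the first drops one seasonal dummy and keeps the constant, while the second drops the constant and keeps all four dummies:
y c x @seas(1) @seas(2) @seas(3)
y x @seas(1) @seas(2) @seas(3) @seas(4)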
The textbooks listed above provide extensive discussion of the issue of collinearity.
References
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford:
Oxford University Press.
Greene, William H. (2008). Econometric Analysis, 6th Edition, Upper Saddle River, NJ: Prentice-Hall.
Johnston, Jack and John Enrico DiNardo (1997). Econometric Methods, 4th Edition, New York: McGraw-Hill.
Pindyck, Robert S. and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts, 4th Edition, New York: McGraw-Hill.
Wooldridge, Jeffrey M. (2013). Introductory Econometrics: A Modern Approach, Mason, OH: South-Western Cengage Learning.
$y_t = w_t'\delta + \beta_0 x_t + \beta_1 x_{t-1} + \dots + \beta_k x_{t-k} + \epsilon_t$  (20.1)
The coefficients $\beta$ describe the lag in the effect of $x$ on $y$. In many cases, the coefficients
can be estimated directly using this specification. In other cases, the high collinearity of current and lagged values of x will defeat direct estimation.
You can reduce the number of parameters to be estimated by using polynomial distributed
lags (PDLs) to impose a smoothness condition on the lag coefficients. Smoothness is
expressed as requiring that the coefficients lie on a polynomial of relatively low degree. A
polynomial distributed lag model with order $p$ restricts the $\beta$ coefficients to lie on a $p$-th order polynomial of the form,
$\beta_j = \gamma_1 + \gamma_2 (j - c) + \gamma_3 (j - c)^2 + \dots + \gamma_{p+1} (j - c)^p$  (20.2)
for $j = 0, 1, 2, \dots, k$, where $c$ is a pre-specified constant given by:
$c = \begin{cases} k/2 & \text{if } k \text{ is even} \\ (k-1)/2 & \text{if } k \text{ is odd} \end{cases}$  (20.3)
The PDL is sometimes referred to as an Almon lag. The constant c is included only to avoid
numerical problems that can arise from collinearity and does not affect the estimates of b .
This specification allows you to estimate a model with k lags of x using only p parameters
(if you choose p > k , EViews will return a Near Singular Matrix error).
If you specify a PDL, EViews substitutes Equation (20.2) into (20.1), yielding,
$y_t = w_t'\delta + \gamma_1 z_1 + \gamma_2 z_2 + \dots + \gamma_{p+1} z_{p+1} + \epsilon_t$  (20.4)
where:
$z_1 = x_t + x_{t-1} + \dots + x_{t-k}$
$z_2 = -c\,x_t + (1-c)\,x_{t-1} + \dots + (k-c)\,x_{t-k}$
$\vdots$
$z_{p+1} = (-c)^p\,x_t + (1-c)^p\,x_{t-1} + \dots + (k-c)^p\,x_{t-k}$  (20.5)
Once we estimate $\gamma$ from Equation (20.4), we can recover the parameters of interest $\beta$, and their standard errors, using the relationship described in Equation (20.2). This procedure is straightforward since $\beta$ is a linear transformation of $\gamma$.
The specification of a polynomial distributed lag has three elements: the length of the lag $k$, the degree of the polynomial (the highest power in the polynomial) $p$, and the constraints that you want to apply. A near end constraint restricts the one-period lead effect of $x$ on $y$ to be zero:
$\beta_{-1} = \gamma_1 + \gamma_2 (-1 - c) + \dots + \gamma_{p+1} (-1 - c)^p = 0.$  (20.6)
A far end constraint restricts the effect of $x$ on $y$ to die off beyond the number of specified lags:
$\beta_{k+1} = \gamma_1 + \gamma_2 (k + 1 - c) + \dots + \gamma_{p+1} (k + 1 - c)^p = 0.$  (20.7)
If you restrict either the near or far end of the lag, the number of $\gamma$ parameters estimated is reduced by one to account for the restriction; if you restrict both the near and far end of the lag, the number of $\gamma$ parameters is reduced by two.
By default, EViews does not impose constraints.
To estimate a polynomial distributed lag, include a term of the form pdl(series, lags, order[, constraint]) in the regressor list, where the optional constraint code selects a near end constraint, a far end constraint, or both. You may omit the constraint code if you do not want to constrain the lag polynomial. Any number of pdl terms may be included in an equation. Each one tells EViews to fit distributed lag coefficients to the series and to constrain the coefficients to lie on a polynomial.
For example, the commands:
ls sales c pdl(orders,8,3)
fits SALES to a constant, and a distributed lag of current and eight lags of ORDERS, where
the lag coefficients of ORDERS lie on a third degree polynomial with no endpoint constraints. Similarly:
ls div c pdl(rev,12,4,2)
fits DIV to a distributed lag of current and 12 lags of REV, where the coefficients of REV lie
on a 4th degree polynomial with a constraint at the far end.
The pdl specification may also be used in two-stage least squares. If the series in the pdl is
exogenous, you should include the PDL of the series in the instruments as well. For this purpose, you may specify pdl(*) as an instrument; all pdl variables will be used as instruments. For example, if you specify the TSLS equation as,
sales c inc pdl(orders(-1),12,4)
with instruments:
fed fed(-1) pdl(*)
the distributed lag of ORDERS will be used as instruments together with FED and FED(-1).
Polynomial distributed lags cannot be used in nonlinear specifications.
Example
We may estimate a distributed lag model of industrial production (IP) on money (M1) in the
workfile Basics.WF1 by entering the command:
ls ip c m1(0 to -12)
Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             40.67568      0.823866     49.37171     0.0000
M1             0.129699     0.214574      0.604449    0.5459
M1(-1)        -0.045962     0.376907     -0.121944    0.9030
M1(-2)         0.033183     0.397099      0.083563    0.9335
M1(-3)         0.010621     0.405861      0.026169    0.9791
M1(-4)         0.031425     0.418805      0.075035    0.9402
M1(-5)        -0.048847     0.431728     -0.113143    0.9100
M1(-6)         0.053880     0.440753      0.122245    0.9028
M1(-7)        -0.015240     0.436123     -0.034944    0.9721
M1(-8)        -0.024902     0.423546     -0.058795    0.9531
M1(-9)        -0.028048     0.413540     -0.067825    0.9460
M1(-10)        0.030806     0.407523      0.075593    0.9398
M1(-11)        0.018509     0.389133      0.047564    0.9621
M1(-12)       -0.057373     0.228826     -0.250728    0.8022

R-squared            0.852398   Mean dependent var     71.72679
Adjusted R-squared   0.846852   S.D. dependent var     19.53063
S.E. of regression   7.643137   Akaike info criterion   6.943606
Sum squared resid    20212.47   Schwarz criterion       7.094732
Log likelihood      -1235.849   Hannan-Quinn criter.    7.003697
F-statistic          153.7030   Durbin-Watson stat      0.008255
Prob(F-statistic)    0.000000
Taken individually, none of the coefficients on lagged M1 are statistically different from zero.
Yet the regression as a whole has a reasonable $R^2$ with a very significant F-statistic (though
with a very low Durbin-Watson statistic). This is a typical symptom of high collinearity
among the regressors and suggests fitting a polynomial distributed lag model.
To estimate a fifth-degree polynomial distributed lag model with no constraints, set the sample using the command,
smpl 1959m01 1989m12
then estimate the equation specification:
ip c pdl(m1,12,5)
by entering the expression in the Equation Estimation dialog and estimating using Least Squares.
The following result is reported at the top of the equation window:
Dependent Variable: IP
Method: Least Squares
Date: 08/08/09   Time: 15:35
Sample (adjusted): 1960M01 1989M12
Included observations: 360 after adjustments

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             40.67311      0.815195     49.89374     0.0000
PDL01        -4.66E-05      0.055566     -0.000839    0.9993
PDL02        -0.015625      0.062884     -0.248479    0.8039
PDL03        -0.000160      0.013909     -0.011485    0.9908
PDL04         0.001862      0.007700      0.241788    0.8091
PDL05         2.58E-05      0.000408      0.063211    0.9496
PDL06        -4.93E-05      0.000180     -0.273611    0.7845

R-squared            0.852371   Mean dependent var     71.72679
Adjusted R-squared   0.849862   S.D. dependent var     19.53063
S.E. of regression   7.567664   Akaike info criterion   6.904899
Sum squared resid    20216.15   Schwarz criterion       6.980462
Log likelihood      -1235.882   Hannan-Quinn criter.    6.934944
F-statistic          339.6882   Durbin-Watson stat      0.008026
Prob(F-statistic)    0.000000
This portion of the view reports the estimated coefficients $\gamma$ of the polynomial in Equation (20.2) on page 23. The terms PDL01, PDL02, PDL03, ..., correspond to $z_1, z_2, \dots$ in Equation (20.4).
The implied coefficients of interest $\beta_j$ in Equation (20.1) are reported at the bottom of the table, together with a plot of the estimated polynomial.
The Sum of Lags reported at the bottom of the table is the sum of the estimated coefficients
on the distributed lag and has the interpretation of the long run effect of M1 on IP, assuming
stationarity.
Note that selecting View/Coefficient Diagnostics for an equation estimated with PDL terms
tests the restrictions on $\gamma$, not on $\beta$. In this example, the coefficients on the fourth- (PDL05) and fifth-order (PDL06) terms are individually insignificant and very close to zero.
To test the joint significance of these two terms, click View/Coefficient Diagnostics/Wald
Test-Coefficient Restrictions and enter:
c(6)=0, c(7)=0
in the Wald Test dialog box (see Wald Test (Coefficient Restrictions) on page 170 for an
extensive discussion of Wald tests in EViews). EViews displays the result of the joint test:
Wald Test:
Equation: Untitled
Null Hypothesis: C(6)=0, C(7)=0

Test Statistic   Value      df         Probability
F-statistic      0.039852   (2, 353)   0.9609
Chi-square       0.079704   2          0.9609

Null Hypothesis Summary:
Normalized Restriction (= 0)   Value       Std. Err.
C(6)                            2.58E-05    0.000408
C(7)                           -4.93E-05    0.000180
There is no evidence to reject the null hypothesis, suggesting that you could have fit a lower
order polynomial to your lag structure.
When used in an equation specification, @expand creates a set of dummy variables that
span the unique integer or string values of the input series.
For example consider the following two variables:
SEX is a numeric series which takes the values 1 and 0.
REGION is an alpha series which takes the values North, South, East, and
West.
The equation list specification
income age @expand(sex)
is used to regress INCOME on the regressor AGE, and two dummy variables, one for
SEX=0 and one for SEX=1.
Similarly, the @expand statement in the equation list specification,
income @expand(sex, region) age
creates dummy variables for each combination of the distinct values of SEX and REGION (eight dummies in all for this example).
We caution you to take some care in using @expand since it is very easy to generate excessively large numbers of regressors.
@expand may also be used as part of a general mathematical expression, for example, in
interactions with another variable as in:
2*@expand(x)
log(x+y)*@expand(z)
a*@expand(x)/b
Somewhat less useful (at least its uses may not be obvious) but supported are cases like:
log(x+y*@expand(z))
(@expand(x)-@expand(y))
As with all expressions included on an estimation or group creation command line, they should be enclosed in parentheses if they contain spaces. Thus, the following expressions are valid,
a*@expand(x)
(a - @expand(x))
while this expression is not:
a - @expand(x)
Example
Following Wooldridge (2000, Example 3.9, p. 106), we regress the log median housing price,
LPRICE, on a constant, the log of the amount of pollution (LNOX), and the average number of rooms per house in the community, ROOMS, using data from Harrison and Rubinfeld (1978). The
data are available in the workfile Hprice2.WF1.
We expand the example to include a dummy variable for each value of the series RADIAL,
representing an index for community access to highways. We use @expand to create the
dummy variables of interest, with a list specification of:
lprice lnox rooms @expand(radial)
We deliberately omit the constant term C since the @expand creates a full set of dummy
variables. The top portion of the results is depicted below:
Variable     Coefficient   Std. Error   t-Statistic   Prob.
LNOX          -0.487579     0.084998     -5.736396     0.0000
ROOMS          0.284844     0.018790     15.15945      0.0000
RADIAL=1       8.930255     0.205986     43.35368      0.0000
RADIAL=2       9.030875     0.209225     43.16343      0.0000
RADIAL=3       9.085988     0.199781     45.47970      0.0000
RADIAL=4       8.960967     0.198646     45.11016      0.0000
RADIAL=5       9.110542     0.209759     43.43330      0.0000
RADIAL=6       9.001712     0.205166     43.87528      0.0000
RADIAL=7       9.013491     0.206797     43.58621      0.0000
RADIAL=8       9.070626     0.214776     42.23297      0.0000
RADIAL=24      8.811812     0.217787     40.46069      0.0000
Note that EViews has automatically created dummy variable expressions for each distinct
value in RADIAL. If we wish to renormalize our dummy variables with respect to a different
omitted category, we may include the C in the regression list, and explicitly exclude a value.
For example, to exclude the category RADIAL=24, we use the list:
lprice c lnox rooms @expand(radial, @drop(24))
Variable     Coefficient   Std. Error   t-Statistic   Prob.
C              8.811812     0.217787     40.46069      0.0000
LNOX          -0.487579     0.084998     -5.736396     0.0000
ROOMS          0.284844     0.018790     15.15945      0.0000
RADIAL=1       0.118444     0.072129      1.642117     0.1012
RADIAL=2       0.219063     0.066055      3.316398     0.0010
RADIAL=3       0.274176     0.059458      4.611253     0.0000
RADIAL=4       0.149156     0.042649      3.497285     0.0005
RADIAL=5       0.298730     0.037827      7.897337     0.0000
RADIAL=6       0.189901     0.062190      3.053568     0.0024
RADIAL=7       0.201679     0.077635      2.597794     0.0097
RADIAL=8       0.258814     0.066166      3.911591     0.0001

R-squared            0.573871   Mean dependent var      9.941057
Adjusted R-squared   0.565262   S.D. dependent var      0.409255
S.E. of regression   0.269841   Akaike info criterion   0.239530
Sum squared resid    36.04295   Schwarz criterion       0.331411
Log likelihood      -49.60111   Hannan-Quinn criter.    0.275566
F-statistic          66.66195   Durbin-Watson stat      0.671010
Prob(F-statistic)    0.000000
$\Sigma = E(b - \beta)(b - \beta)'$
$\quad = (X'X)^{-1}\, E(X'\epsilon\epsilon'X)\, (X'X)^{-1}$
$\quad = (X'X)^{-1}\, T\,\Omega\, (X'X)^{-1}$
$\quad = \sigma^2 (X'X)^{-1}$  (20.8)
A key part of this derivation is the assumption that the error terms, $\epsilon$, are conditionally homoskedastic, which implies that $\Omega = E(X'\epsilon\epsilon'X/T) = \sigma^2\, E(X'X/T)$. A sufficient, but
not necessary, condition for this restriction is that the errors are i.i.d. In cases where this
assumption is relaxed to allow for heteroskedasticity or autocorrelation, the expression for
the covariance matrix will be different.
EViews provides built-in tools for estimating the coefficient covariance under the assumption that the residuals are conditionally heteroskedastic, and under the assumption of heteroskedasticity and autocorrelation. The coefficient covariance estimator under the first
assumption is termed a Heteroskedasticity Consistent Covariance (White) estimator, and the estimator under the second is a Heteroskedasticity and Autocorrelation Consistent Covariance (HAC or Newey-West) estimator.
The White estimator of $\Omega$ is given by:
$\hat{\Omega} = \dfrac{T}{T-k}\,\sum_{t=1}^{T} \hat{e}_t^2\, X_t X_t' \big/ T$  (20.9)
where $\hat{e}_t$ are the estimated residuals, $T$ is the number of observations, $k$ is the number of regressors, and $T/(T-k)$ is an optional degree-of-freedom correction. The degree-of-freedom corrected White heteroskedasticity consistent covariance matrix estimator is then given by:
$\hat{\Sigma}_W = \dfrac{T}{T-k}\,(X'X)^{-1}\left(\sum_{t=1}^{T} \hat{e}_t^2\, X_t X_t'\right)(X'X)^{-1}$  (20.10)
To illustrate the use of White covariance estimates, we use an example from Wooldridge
(2000, p. 251) of an estimate of a wage equation for college professors. The equation uses
dummy variables to examine wage differences between four groups of individuals: married
men (MARRMALE), married women (MARRFEM), single women (SINGLEFEM), and the
base group of single men. The explanatory variables include levels of education (EDUC),
experience (EXPER) and tenure (TENURE). The data are in the workfile Wooldridge.WF1.
To select the White covariance estimator, specify the equation
as before, then select the Options tab and select Huber-White
in the Covariance method drop-down. You may, if desired,
use the checkbox to remove the default d.f. Adjustment, but
in this example, we will use the default setting. (Note that the
Information matrix combo setting is not important in linear
specifications).
The output for the robust covariances for this regression is shown below:
Variable     Coefficient   Std. Error   t-Statistic   Prob.
C              0.321378     0.109469      2.935791     0.0035
MARRMALE       0.212676     0.057142      3.721886     0.0002
MARRFEM       -0.198268     0.058770     -3.373619     0.0008
SINGFEM       -0.110350     0.057116     -1.932028     0.0539
EDUC           0.078910     0.007415     10.64246      0.0000
EXPER          0.026801     0.005139      5.215010     0.0000
EXPER^2       -0.000535     0.000106     -5.033361     0.0000
TENURE         0.029088     0.006941      4.190731     0.0000
TENURE^2      -0.000533     0.000244     -2.187835     0.0291

R-squared              0.460877   Mean dependent var      1.623268
Adjusted R-squared     0.452535   S.D. dependent var      0.531538
S.E. of regression     0.393290   Akaike info criterion   0.988423
Sum squared resid      79.96799   Schwarz criterion       1.061403
Log likelihood        -250.9552   Hannan-Quinn criter.    1.016998
F-statistic            55.24559   Durbin-Watson stat      1.784785
Prob(F-statistic)      0.000000   Wald F-statistic        51.69553
Prob(Wald F-statistic) 0.000000
As Wooldridge notes, the heteroskedasticity robust standard errors for this specification are
not very different from the non-robust forms, and the test statistics for statistical significance
of coefficients are generally unchanged. While robust standard errors are often larger than
their usual counterparts, this is not necessarily the case, and indeed this equation has some
robust standard errors that are smaller than the conventional estimates.
Notice that EViews reports both an F-statistic and associated probability and the robust
Wald test statistic and p-value for the hypothesis that all non-intercept coefficients are equal
to zero. Recall that the familiar residual based F-statistic for testing the null hypothesis
depends only on the coefficient point estimates, and not their standard errors, and is valid
only under the maintained hypotheses of no heteroskedasticity or serial correlation. For
ordinary least squares with conventionally estimated standard errors, this statistic is numerically identical to the Wald statistic. If, however, robust standard errors are employed, the
numerical equivalence between the two breaks down, so EViews reports both statistics.
EViews reports the robust F-statistic as the Wald F-statistic in equation output, and the corresponding p-value as Prob(Wald F-statistic). In this example, the non-robust F-statistic
and the robust Wald show that the non-intercept coefficients are statistically significant.
(20.11)
Variable     Coefficient   Std. Error   t-Statistic   Prob.
FDD            0.503798     0.139563      3.609818     0.0003
FDD(-1)        0.169918     0.088943      1.910407     0.0566
FDD(-2)        0.067014     0.060693      1.104158     0.2700
FDD(-3)        0.071087     0.044894      1.583444     0.1139
FDD(-4)        0.024776     0.031656      0.782679     0.4341
FDD(-5)        0.031935     0.030763      1.038086     0.2997
FDD(-6)        0.032560     0.047602      0.684014     0.4942
FDD(-7)        0.014913     0.015743      0.947323     0.3439
FDD(-8)       -0.042196     0.034885     -1.209594     0.2269
FDD(-9)       -0.010300     0.051452     -0.200181     0.8414
FDD(-10)      -0.116300     0.070656     -1.646013     0.1003
FDD(-11)      -0.066283     0.053014     -1.250288     0.2117
FDD(-12)      -0.142268     0.077424     -1.837518     0.0666
FDD(-13)      -0.081575     0.042992     -1.897435     0.0583
FDD(-14)      -0.056372     0.035300     -1.596959     0.1108
FDD(-15)      -0.031875     0.028018     -1.137658     0.2557
FDD(-16)      -0.006777     0.055701     -0.121670     0.9032
FDD(-17)       0.001394     0.018445      0.075584     0.9398
FDD(-18)       0.001824     0.016973      0.107450     0.9145
C             -0.340237     0.273659     -1.243289     0.2143

R-squared              0.128503   Mean dependent var     -0.115821
Adjusted R-squared     0.100532   S.D. dependent var      5.065300
S.E. of regression     4.803944   Akaike info criterion   6.008886
Sum squared resid      13662.11   Schwarz criterion       6.153223
Log likelihood        -1818.719   Hannan-Quinn criter.    6.065023
F-statistic            4.594247   Durbin-Watson stat      1.821196
Prob(F-statistic)      0.000000   Wald F-statistic        2.257876
Prob(Wald F-statistic) 0.001769
Note in particular that the top of the equation output shows the use of HAC covariance estimates along with relevant information about the settings used to compute the long-run
covariance matrix.
The robust Wald p-value is slightly higher than the corresponding non-robust F-statistic p-value, but both are significant at conventional test levels.
Heteroskedasticity does not affect the consistency properties of ordinary least squares estimates, but OLS is no longer efficient and conventional estimates of the coefficient standard errors are not valid.
If the variances $\sigma_t^2$ are known up to a positive scale factor, you may use weighted least squares (WLS) to obtain efficient estimates that support valid inference. Specifically, if
$y_t = x_t'\beta + \epsilon_t$
$E(\epsilon_t \mid X_t) = 0$  (20.12)
$\mathrm{Var}(\epsilon_t \mid X_t) = \sigma_t^2$
and we observe $h_t = a\,\sigma_t^2$, the WLS estimator for $\beta$ minimizes the weighted sum-of-squared residuals:
$S(\beta) = \sum_t \dfrac{1}{h_t}\,(y_t - x_t'\beta)^2 = \sum_t w_t\,(y_t - x_t'\beta)^2$  (20.13)
with respect to $\beta$, where the weights $w_t = 1/h_t$ are proportional to the inverse conditional variances.
The estimated residual variance is computed as:
$s^2 = \dfrac{1}{T-k}\,(y - X b_{WLS})'\,W\,(y - X b_{WLS})$  (20.16)
where $b_{WLS}$ denotes the WLS coefficient estimates and $W$ is a diagonal matrix with the weights $w_t$ along the diagonal.
You will use the three parts of the Weights section of the Options tab to specify your
weights.
The Type dropdown is used to specify the form in which the weight data
are provided. If, for example, your weight series VARWGT contains values
proportional to the conditional variance, you should select Variance.
Alternately, if your series INVARWGT contains the values proportional to
the inverse of the standard deviation of the residuals you should choose Inverse std. dev.
Next, you should enter an expression for your weight series in the Weight series edit field.
Lastly, you should choose a scaling method for the weights. There are three choices: Average, None, and (in some cases) EViews default. If you select Average, EViews will, prior to use, scale the weights so that the $w_i$ sum to $T$. The EViews default specification scales the weights so that the square roots of the $w_i$ sum to $T$. (The latter square root scaling, which offers backward compatibility to EViews 6 and earlier, was originally introduced in an effort to make the weighted residuals $\sqrt{w_t}\,(y_t - x_t'b)$ comparable to the unweighted residuals.) Note that the EViews default method is only available if you select Inverse std. dev. as weighting Type.
Unless there is a good reason to do otherwise, we recommend that you employ Inverse std. dev. weights with EViews default scaling, even if it means you must transform your weight series. The other weight types and scaling methods were introduced in EViews 7, so equations estimated using the alternate settings may not be read by prior versions of EViews.
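For instance, if your workfile contains a series VARWGT proportional to the conditional variance (as in the Type descriptions above), you could construct an inverse standard deviation weight series yourself with a command along these lines (a sketch; VARWGT is a hypothetical series name):
series invwgt = 1/@sqrt(varwgt)
and then supply INVWGT in the Weight series field with Inverse std. dev. selected as the Type.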
Click on the Options tab, and fill out the Weights section as
depicted here. We select Inverse std. dev. as our Type, and specify 1/SIGMA for our Weight series. Lastly, we select EViews
default as our Scaling method.
Click on OK to estimate the specified equation. The results are
given by:
Dependent Variable: Y
Method: Least Squares
Date: 06/17/09   Time: 10:01
Sample: 1 9
Included observations: 9
Weighting series: 1/SIGMA
Weight type: Inverse standard deviation (EViews default scaling)

Variable     Coefficient   Std. Error   t-Statistic   Prob.
C             3406.640      80.98322     42.06600     0.0000
X              154.1526     16.95929      9.089565    0.0000

Weighted Statistics

R-squared            0.921893   Mean dependent var     4098.417
Adjusted R-squared   0.910734   S.D. dependent var     629.1767
S.E. of regression   126.6652   Akaike info criterion  12.71410
Sum squared resid    112308.5   Schwarz criterion      12.75793
Log likelihood      -55.21346   Hannan-Quinn criter.   12.61952
F-statistic          82.62018   Durbin-Watson stat     1.183941
Prob(F-statistic)    0.000040   Weighted mean dep.     4039.404

Unweighted Statistics

R-squared            0.935499   Mean dependent var     4161.667
Adjusted R-squared   0.926285   S.D. dependent var     420.5954
S.E. of regression   114.1939   Sum squared resid      91281.79
Durbin-Watson stat   1.141034
The top portion of the output displays the estimation settings which show both the specified
weighting series and the type of weighting employed in estimation. The middle section
shows the estimated coefficient values and corresponding standard errors, t-statistics and
probabilities.
The bottom portion of the output displays two sets of statistics. The Weighted Statistics
show statistics corresponding to the actual estimated equation. For purposes of discussion,
there are two types of summary statistics: those that are (generally) invariant to the scaling
of the weights, and those that vary with the weight scale.
The R-squared, Adjusted R-squared, F-statistic and Prob(F-stat), and the Durbin-Watson stat, are all invariant to your choice of scale. Notice that these are all fit measures
or test statistics which involve ratios of terms that remove the scaling.
One additional invariant statistic of note is the Weighted mean dep. which is the weighted
mean of the dependent variable, computed as:
$$\bar y_w = \frac{\sum_t w_t y_t}{\sum_t w_t} \qquad (20.17)$$
The weighted mean is the value of the estimated intercept in the restricted model, and is
used in forming the reported F-test.
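If you wish to verify the reported value, the weighted mean in (20.17) may be computed directly from the underlying series. The names Y and W below are placeholders for your dependent variable and weight series:

' weighted mean of the dependent variable, as in (20.17)
series wy = w*y
scalar ybar_w = @sum(wy)/@sum(w)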
The remaining statistics such as the Mean dependent var., Sum squared resid, and the Log likelihood all depend on the choice of scale. They may be thought of as the statistics computed using the weighted data, $y_t^* = \sqrt{w_t}\,y_t$ and $x_t^* = \sqrt{w_t}\,x_t$. For example, the mean of the dependent variable is computed as $\bigl(\sum_t y_t^*\bigr)/T$, and the sum-of-squared residuals is given by $\sum_t w_t\,(y_t - x_t'b)^2$. These values should not be compared across equations estimated using different weight scaling.
Lastly, EViews reports a set of Unweighted Statistics. As the name suggests, these are statistics computed using the unweighted data and the WLS coefficients.
Nonlinear Least Squares
Consider the general regression specification:
$$y_t = f(x_t, b) + \epsilon_t \qquad (20.18)$$
where $f$ is a general function of the explanatory variables $x_t$ and the parameters $b$. Least squares estimation chooses the parameter values that minimize the sum of squared residuals:
$$S(b) = \sum_t \bigl(y_t - f(x_t, b)\bigr)^2 = \bigl(y - f(X,b)\bigr)'\bigl(y - f(X,b)\bigr) \qquad (20.19)$$
We say that a model is linear in parameters if the derivatives of f with respect to the parameters do not depend upon b ; if the derivatives are functions of b , we say that the model is
nonlinear in parameters.
For example, consider the model given by:
$$y_t = b_1 + b_2\log L_t + b_3\log K_t + \epsilon_t. \qquad (20.20)$$
It is easy to see that this model is linear in its parameters, implying that it can be estimated
using ordinary least squares.
In contrast, the model
$$y_t = b_1 L_t^{b_2} K_t^{b_3} + \epsilon_t \qquad (20.21)$$
has derivatives that depend upon the elements of b . There is no way to rearrange the terms
in this model so that ordinary least squares can be used to minimize the sum-of-squared
residuals. We must use nonlinear least squares techniques to estimate the parameters of the
model.
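To make the distinction concrete, the two models above might be entered as follows (the series names Y, L, and K are illustrative). The first is a list specification estimated by ordinary least squares; the second is an explicit expression that EViews estimates by nonlinear least squares:

' linear-in-parameters model (20.20)
equation eq_lin.ls y c log(l) log(k)
' nonlinear-in-parameters model (20.21)
equation eq_nls.ls y = c(1)*l^c(2)*k^c(3)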
Nonlinear least squares minimizes the sum-of-squared residuals with respect to the choice of parameters $b$. While there is no closed form solution for the parameters, estimates may be obtained from iterative methods as described in "Optimization Algorithms," beginning on page 1011.
Estimates of the coefficient covariance take the general form:
$$\hat\Sigma_{NLLS} = c\,\hat A^{-1}\hat B\,\hat A^{-1} \qquad (20.22)$$
where $\hat A$ is an estimate of the information, $\hat B$ is the variance of the residual weighted gradients, and $c$ is a scale parameter.
For the ordinary covariance estimator, we assume that $\hat A = \hat B$. Then we have
$$\hat\Sigma_{NLLS} = c\,\hat A^{-1} \qquad (20.23)$$
where $\hat A$ may be estimated using the outer product of the gradients (OPG) of the mean function,
$$\hat\Sigma_{NLLS} = c\left(\frac{\partial f(b)}{\partial b}'\,\frac{\partial f(b)}{\partial b}\right)^{-1} \qquad (20.24)$$
or using one-half of the Hessian of the objective function,
$$\hat\Sigma_{NLLS} = c\left(\frac{1}{2}\,\frac{\partial^2 S(b)}{\partial b\,\partial b'}\right)^{-1} \qquad (20.25)$$
evaluated at $b_{NLLS}$.
Alternately, we may assume distinct $A$ and $B$ and employ a White or HAC sandwich estimator for the coefficient covariance as in "Robust Standard Errors," beginning on page 32. In this case, $A$ is estimated using the OPG or Hessian, $B$ is a robust estimate of the variance of the gradient weighted residuals, and $c$ is a scalar representing the degree-of-freedom correction, if employed.
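As a rough command-line sketch, a robust sandwich covariance for a nonlinear specification might be requested with the cov= option; the option keyword and the specification below are illustrative assumptions rather than a prescription:

' nonlinear least squares with Huber-White coefficient covariance (option name assumed)
equation eq_rob.ls(cov=white) y = c(1) + c(2)*x^c(3)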
For additional discussion of nonlinear estimation, see Pindyck and Rubinfeld (1998, pp. 265-273), Davidson and MacKinnon (1993), or Amemiya (1983).
For example, y = c(1) + c(2)*(k^c(3)+l^c(4)) is a nonlinear specification that uses the first through the fourth elements of the default coefficient vector, C.
To create a new coefficient vector, select Object/New Object.../Matrix-Vector-Coef in the
main menu and provide a name. You may now use this coefficient vector in your specification. For example, if you create a coefficient vector named CF, you can rewrite the specification above as:
y = cf(11) + cf(12)*(k^cf(13)+l^cf(14))
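The coefficient vector and equation may equivalently be created from the command line; the object names here are arbitrary:

' create a coefficient vector CF with 14 elements, then estimate using elements 11 through 14
coef(14) cf
equation eq_cf.ls y = cf(11) + cf(12)*(k^cf(13)+l^cf(14))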
If you specify an equation such as

y = (c(1)*x + c(2)*z + 4)^2

EViews minimizes the sum-of-squared residuals:
$$S(c(1),c(2)) = \sum_t \bigl\{\, y_t - (c(1)x_t + c(2)z_t + 4)^2 \,\bigr\}^2 \qquad (20.26)$$
If you wish, the equation specification may be given by a simple expression that does not include a dependent variable. For example, the input,

(c(1)*x + c(2)*z + 4)^2

is interpreted by EViews as $-(c(1)x_t + c(2)z_t + 4)^2$, and EViews will minimize:
$$S(c(1),c(2)) = \sum_t \bigl\{\,(c(1)x_t + c(2)z_t + 4)^2 \,\bigr\}^2 \qquad (20.27)$$
While EViews will estimate the parameters of this last specification, the equation cannot be
used for forecasting and cannot be included in a model. This restriction also holds for any
equation that includes coefficients to the left of the equal sign. For example, if you specify,
x + c(1)*y = z^c(2)
EViews will find the values of C(1) and C(2) that minimize the sum of squares of the
implicit equation:
$$x_t + c(1)y_t - z_t^{c(2)} = \epsilon_t \qquad (20.28)$$
The estimated equation cannot be used in forecasting or included in a model, since there is
no dependent variable.
Estimation Options
Clicking on the Options tab displays the nonlinear least squares estimation options:
Coefficient Covariance
EViews allows you to compute ordinary coefficient covariances using the inverse of either
the OPG of the mean function or the observed Hessian of the objective function, or to compute robust sandwich estimators for the covariance matrix using White or HAC (Newey-West) estimators.
The topmost Covariance method dropdown menu should be used to choose between
the default Ordinary or the robust Huber-White or HAC (Newey-West) methods.
In the Information matrix menu you should choose between the OPG and the Hessian - observed estimators for the information.
If you select HAC (Newey-West), you will be presented with a HAC options button
that, if pressed, brings up a dialog to allow you to control the long-run variance computation.
See Robust Standard Errors, beginning on page 32 for a discussion of White and HAC
standard errors.
You may use the d.f. Adjustment checkbox to enable or disable the degree-of-freedom correction for the coefficient covariance. For the Ordinary method, this setting amounts to
determining whether the residual variance estimator is or is not degree-of-freedom corrected. For the sandwich estimators, the degree-of-freedom correction is applied to the entire
matrix.
Optimization
You may control the iterative process by specifying the optimization method, convergence
criterion, and maximum number of iterations.
The Optimization method dropdown menu lets you choose between the default Gauss-Newton and BFGS, Newton-Raphson, and EViews legacy methods.
In general, the differences between the estimates should be small for well-behaved nonlinear specifications, but if you are experiencing trouble, you may wish to experiment with
methods. Note that EViews legacy is a particular implementation of Gauss-Newton with
Marquardt or line search steps, and is provided for backward estimation compatibility.
The Step method allows you to choose the approach for choosing candidate iterative steps.
The default method is Marquardt, but you may instead select Dogleg or Line Search.
See Optimization Method on page 1006, and Optimization Algorithms on page 1011 for
related discussion.
EViews will report that the estimation procedure has converged if the convergence test value is below your convergence tolerance. While there is no best choice of convergence tolerance, and the choice is somewhat individual, as a guideline note that we generally set ours to something on the order of 1e-8 or so and then adjust it upward if necessary for models with difficult to compute numeric derivatives.
See Iteration and Convergence on page 1006 for additional discussion.
In most cases, you need not change the maximum number of iterations. However, for some
difficult to estimate models, the iterative procedure may not converge within the maximum
number of iterations. If your model does not converge within the allotted number of iterations, simply click on the Estimate button, and, if desired, increase the maximum number of
iterations. Click on OK to accept the options, and click on OK to begin estimation. EViews
will start estimation using the last set of parameter values as starting values.
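From the command line, the iteration limit and convergence tolerance may be set with the m= and c= estimation options; the specification below is illustrative:

' allow up to 1000 iterations and use a convergence tolerance of 1e-8
equation eq_opt.ls(m=1000, c=1e-8) y = c(1) + c(2)*x^c(3)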
These options may also be set from the global options dialog. See Appendix A, Estimation
Defaults on page 822 for details.
Derivative Methods
Estimation in EViews requires computation of the derivatives of the regression function with
respect to the parameters.
In most cases, you need not worry about the settings for the derivative computation. The EViews estimation engine will employ analytic expressions for the derivatives, if possible, or will compute highly accurate numeric derivatives, switching between lower precision computation early in the iterative procedure and higher precision computation for later iterations and final computation. You may elect to use only numeric derivatives.
See Derivative Computation on page 1009 for additional discussion.
Starting Values
Iterative estimation procedures require starting values for the coefficients of the model. The
closer to the true values the better, so if you have reasonable guesses for parameter values,
these can be useful. In some cases, you can obtain good starting values by estimating a
restricted version of the model using least squares. In general, however, you may need to
experiment in order to find starting values.
There are no general rules for selecting starting values for parameters so there are no settings in this page for choosing values. EViews uses the values in the coefficient vector at the
time you begin the estimation procedure as starting values for the iterative procedure. It is
easy to examine and change these coefficient starting values. To see the current starting values, double click on the coefficient vector in the workfile directory. If the values appear to
be reasonable, you can close the window and proceed with estimating your model.
If you wish to change the starting values, first make certain that the spreadsheet view of
your coefficients is in edit mode, then enter the coefficient values. When you are finished
setting the initial values, close the coefficient vector window and estimate your model.
You may also set starting coefficient values from the command window using the PARAM command. Simply enter the PARAM keyword, followed by each coefficient and desired value. For example, if your default coefficient vector is C, the statement:

param c(1) 153 c(2) .68 c(3) .15

sets C(1)=153, C(2)=0.68, and C(3)=0.15.
Variable    Coefficient   Std. Error   t-Statistic   Prob.
C(1)        2.839332      0.281733     10.07810      0.0000
C(2)        0.259119      0.041680     6.216837      0.0000
C(3)        0.182315      0.020335     8.965475      0.0000

R-squared            0.997260    Mean dependent var       7.472280
Adjusted R-squared   0.997231    S.D. dependent var       0.463744
S.E. of regression   0.024403    Akaike info criterion   -4.572707
Sum squared resid    0.112552    Schwarz criterion       -4.521808
Log likelihood       441.9798    Hannan-Quinn criter.    -4.552093
F-statistic          34393.45    Durbin-Watson stat       0.136871
Prob(F-statistic)    0.000000
If the estimation procedure has converged, EViews will report this fact, along with the number of iterations that were required. If the iterative procedure did not converge, EViews will report "Convergence not achieved after" followed by the number of iterations attempted.
Below the line describing convergence, and a description of the method employed in computing the coefficient covariances, EViews will repeat the nonlinear specification so that you can easily interpret the estimated coefficients of your model.
EViews provides you with all of the usual summary statistics for regression models. Provided that your model has converged, the standard statistical results and tests are asymptotically valid.
For example, you might wish to estimate the consumption specification with autoregressive errors:
$$CS_t = c_1 + c_2\,GDP_t + u_t, \qquad u_t = c_3\,u_{t-1} + c_4\,u_{t-2} + \epsilon_t \qquad (20.29)$$
See "Initializing the AR Errors," on page 130 for additional details. EViews does not currently estimate nonlinear models with MA errors, nor does it estimate weighted models with AR terms; if you add AR terms to a weighted nonlinear model, the weighting series will be ignored.
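A specification such as (20.29) may be entered by list, simply appending the AR terms; the series names CS and GDP are those of the example:

' consumption equation with second-order autoregressive errors
equation eq_ar.ls cs c gdp ar(1) ar(2)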
Weighted NLS
Weights can be used in nonlinear estimation in a manner analogous to weighted linear least
squares in equations without ARMA terms. To estimate an equation using weighted nonlinear least squares, enter your specification, press the Options button and fill in the weight
specification.
EViews minimizes the sum of the weighted squared residuals:
$$S(b) = \sum_t w_t\,\bigl(y_t - f(x_t, b)\bigr)^2 = \bigl(y - f(X,b)\bigr)'W\bigl(y - f(X,b)\bigr) \qquad (20.30)$$
with respect to the parameters $b$, where $w_t$ are the values of the weight series and $W$ is the diagonal matrix of weights. The first-order conditions are given by,
$$\frac{\partial f(b)}{\partial b}'\,W\,\bigl(y - f(X,b)\bigr) = 0 \qquad (20.31)$$
and the default OPG d.f. corrected covariance estimate is computed as:
$$\hat\Sigma_{WNLLS} = s^2\left(\frac{\partial f(b)}{\partial b}'\,W\,\frac{\partial f(b)}{\partial b}\right)^{-1} \qquad (20.32)$$
while the corresponding Hessian-based estimate is given by:
$$\hat\Sigma_{WNLLS} = s^2\left(\frac{1}{2}\,\frac{\partial^2 S(b)}{\partial b\,\partial b'}\right)^{-1} \qquad (20.33)$$
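As a command-line sketch, weighted nonlinear least squares combines an explicit expression with a weight option; the w= keyword and the series names below are assumptions to be checked against your own workfile:

' weighted NLS using the weight series 1/SIGMA (option name assumed)
equation eq_wnls.ls(w=1/sigma) y = c(1) + c(2)*x^c(3)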
Starting Values
If you experience problems with the very first iteration of a nonlinear procedure, the problem is almost certainly related to starting values. See the discussion in Starting Values on
page 45 for details on how to examine and change your starting values.
Model Identification
If EViews goes through a number of iterations and then reports that it encounters a Near
Singular Matrix, you should check to make certain that your model is identified. Models are
said to be non-identified if there are multiple sets of coefficients which identically yield the
minimized sum-of-squares value. If this condition holds, it is impossible to choose between
the coefficients on the basis of the minimum sum-of-squares criterion.
For example, the nonlinear specification:
$$y_t = b_1 b_2 + b_2^2\,x_t + \epsilon_t \qquad (20.34)$$
is not identified, since any coefficient pair $(b_1, b_2)$ is indistinguishable from the pair $(-b_1, -b_2)$ in terms of the sum-of-squared residuals.
For a thorough discussion of identification of nonlinear least squares models, see Davidson
and MacKinnon (1993, Sections 2.3, 5.2 and 6.3).
Optimization Algorithm
In general, the choice of optimization algorithm should have little effect on the computation
of estimates. That said, if you are experiencing trouble, you may wish to experiment with
different methods. In addition, you may wish to experiment with different optimizers to
ensure that your estimates are robust to the choice of optimization method.
Note that EViews legacy is a particular implementation of Gauss-Newton with Marquardt or
line search steps, and is provided for backward estimation compatibility.
See Optimization on page 44 for discussion.
Convergence Criterion
EViews may report that it is unable to improve the sums-of-squares. This result may be evidence of non-identification or model misspecification. Alternatively, it may be the result of
setting your convergence criterion too low, which can occur if your nonlinear specification is
particularly complex.
If you wish to change the convergence criterion, enter the new value in the Options tab. Be
aware that increasing this value increases the possibility that you will stop at a local minimum, and may hide misspecification or non-identification of your model.
See Setting Estimation Options on page 1005, for related discussion.
regressors by selecting the Use number of regressors option and providing a number in the corresponding edit field.
You may also set the maximum number of steps taken by the procedure. To set the maximum number of additions to the model, change the Forwards steps, and to set the maximum number of removals, change the Backwards steps. You may also set the total number
of additions and removals. In general it is best to leave these numbers at a high value. Note,
however, that the Stepwise routines have the potential to repetitively add and remove the
same variables, and by setting the maximum number of steps you can mitigate this behavior.
The Swapwise method lets you choose whether you wish to use Max R-squared or Min R-squared, and choose the number of additional variables to be selected. The Combinatorial
method simply prompts you to provide the number of additional variables. By default both
of these procedures have the number of additional variables set to one. In both cases this
merely chooses the single variable that will lead to the largest increase in R-squared.
For additional discussion, see Selection Methods, beginning on page 53.
Lastly, each of the methods lets you choose a Weight series to perform weighted least
squares estimation. Simply check the Use weight series option, then enter the name of the
weight series in the edit field. See Weighted Least Squares on page 36 for details.
Example
As an example we use the following code to generate a workfile with 40 independent variables (X1-X40), and a dependent variable, Y, which is a linear combination of a constant, variables X11-X15, and a normally distributed random error term.
' create an undated workfile with 100 observations
create u 100
rndseed 1
' build a group XS holding the 40 candidate regressors X1-X40
group xs
for !i=1 to 40
  series x!i = nrnd
  %name = "x" + @str(!i)
  xs.add {%name}
next
' construct Y as a constant plus X11-X15 (with coefficients 11-15) plus noise
series y = nrnd + 3
for !i=11 to 15
  y = y + !i*x{!i}
next
Given this data we can use a forwards stepwise routine to choose the best 5 regressors,
after the constant, from the group of 40 in XS. We do this by entering Y C in the first Specification box of the estimation dialog, and XS in the List of search regressors box. In the
Stopping Criteria section of the Options tab we check Use Number of Regressors, and
enter 5 as the number of regressors. Estimating this specification yields the results:
Dependent Variable: Y
Method: Stepwise Regression
Date: 08/08/09   Time: 22:39
Sample: 1 100
Included observations: 100
Number of always included regressors: 1
Number of search regressors: 40
Selection method: Stepwise forwards
Stopping criterion: p-value forwards/backwards = 0.5/0.5
Stopping criterion: Number of search regressors = 5

Variable    Coefficient   Std. Error   t-Statistic   Prob.*
C           2.973731      0.102755     28.93992      0.0000
X15         14.98849      0.091087     164.5517      0.0000
X14         14.01298      0.091173     153.6967      0.0000
X12         11.85221      0.101569     116.6914      0.0000
X13         12.88029      0.102182     126.0526      0.0000
X11         11.02252      0.102758     107.2664      0.0000

R-squared            0.999211    Mean dependent var      -0.992126
Adjusted R-squared   0.999169    S.D. dependent var       33.58749
S.E. of regression   0.968339    Akaike info criterion    2.831656
Sum squared resid    88.14197    Schwarz criterion        2.987966
Log likelihood      -135.5828    Hannan-Quinn criter.     2.894917
F-statistic          23802.50    Durbin-Watson stat       1.921653
Prob(F-statistic)    0.000000

Selection Summary

Added   X15
Added   X14
Added   X12
Added   X13
Added   X11
The top portion of the output shows the equation specification and information about the
stepwise method. The next section shows the final estimated specification along with coefficient estimates, standard errors and t-statistics, and p-values. Note that the stepwise routine
chose the correct five regressors, X11-X15. The bottom portion of the output shows a summary of the steps taken by the selection method. Specifications with a large number of steps may show only a brief summary.
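For completeness, a command-line sketch of this type of estimation uses the stepls equation method, with the always-included regressors before the @ sign and the search regressors (here the group XS) after it. The stopping criteria are left at their defaults below because the exact option keywords should be taken from the Command and Programming Reference:

' stepwise selection of regressors from the group XS (default options)
equation eq_step.stepls y c @ xs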
Selection Methods
EViews allows you to specify variables to be included as regressors along with a set of variables from which the selection procedure will choose additional regressors. The first set of
variables are termed the always included variables, and the latter are the set of potential
added variables. EViews supports several procedures for selecting the added variables.
Uni-directional-Forwards
The Uni-directional-Forwards method uses either a lowest p-value or largest t-statistic criterion for adding variables.
The method begins with no added regressors. If using the p-value criterion, we select the
variable that would have the lowest p-value were it added to the regression. If the p-value is
lower than the specified stopping criteria, the variable is added. The selection continues by
selecting the variable with the next lowest p-value, given the inclusion of the first variable.
The procedure stops when the lowest p-value of the variables not yet included is greater
than the specified forwards stopping criterion, or the number of forward steps or number of
added regressors reach the optional user specified limits.
If using the largest t-statistic criterion, the same variables are selected, but the stopping criterion is specified in terms of the statistic value instead of the p-value.
Uni-directional-Backwards
The Uni-directional-Backwards method is analogous to the Uni-directional-Forwards
method, but begins with all possible added variables included, and then removes the variable with the highest p-value. The procedure continues by removing the variable with the
next highest p-value, given that the first variable has already been removed. This process
continues until the highest p-value is less than the specified backwards stopping criteria, or
the number of backward steps or number of added regressors reach the optional user specified limits.
The largest t-statistic may be used in place of the lowest p-value as a selection criterion.
Stepwise-Forwards
The Stepwise-Forwards method is a combination of the Uni-directional-Forwards and Backwards methods. Stepwise-Forwards begins with no additional regressors in the regression,
then adds the variable with the lowest p-value. The variable with the next lowest p-value
given that the first variable has already been chosen, is then added. Next both of the added
variables are checked against the backwards p-value criterion. Any variable whose p-value
is higher than the criterion is removed.
Once the removal step has been performed, the next variable is added. At this, and each successive addition to the model, all the previously added variables are checked against the
backwards criterion and possibly removed. The Stepwise-Forwards routine ends when the
lowest p-value of the variables not yet included is greater than the specified forwards stopping criteria (or the number of forwards and backwards steps or the number of added
regressors has reached the corresponding optional user specified limit).
You may elect to use the largest t-statistic in place of the lowest p-value as the selection criterion.
Stepwise-Backwards
The Stepwise-Backwards procedure reverses the Stepwise-Forwards method. All possible
added variables are first included in the model. The variable with the highest p-value is first
removed. The variable with the next highest p-value, given the removal of the first variable,
is also removed. Next both of the removed variables are checked against the forwards pvalue criterion. Any variable whose p-value is lower than the criterion is added back in to
the model.
Once the addition step has been performed, the next variable is removed. This process continues where at each successive removal from the model, all the previously removed variables are checked against the forwards criterion and potentially re-added. The StepwiseBackwards routine ends when the largest p-value of the variables inside the model is less
than the specified backwards stopping criterion, or the number of forwards and backwards
steps or number of regressors reaches the corresponding optional user specified limit.
The largest t-statistic may be used in place of the lowest p-value as a selection criterion.
Combinatorial
For a given number of added variables, the Combinatorial method evaluates every possible
combination of added variables, and selects the combination that leads to the largest R-squared in a regression using the added and always included variables as regressors. This
method is more thorough than the previous methods, since those methods do not compare
every possible combination of variables, and obviously requires additional computation.
With large numbers of potential added variables, the Combinatorial approach can take a
very long time to complete.
References
Amemiya, Takeshi (1983). "Nonlinear Regression Models," Chapter 6 in Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, Volume 1, Amsterdam: Elsevier Science Publishers B.V.
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford: Oxford University Press.
Derksen, S. and H. J. Keselman (1992). "Backward, Forward and Stepwise Automated Subset Selection Algorithms: Frequency of Obtaining Authentic and Noise Variables," British Journal of Mathematical and Statistical Psychology, 45, 265-282.
Fair, Ray C. (1970). "The Estimation of Simultaneous Equation Models With Lagged Endogenous Variables and First Order Serially Correlated Errors," Econometrica, 38, 507-516.
Fair, Ray C. (1984). Specification, Estimation, and Analysis of Macroeconometric Models, Cambridge, MA: Harvard University Press.
Harrison, D. and D. L. Rubinfeld (1978). "Hedonic Housing Prices and the Demand for Clean Air," Journal of Environmental Economics and Management, 5, 81-102.
Hurvich, C. M. and C. L. Tsai (1990). "The Impact of Model Selection on Inference in Linear Regression," American Statistician, 44, 214-217.
Johnston, Jack and John Enrico DiNardo (1997). Econometric Methods, 4th Edition, New York: McGraw-Hill.
Newey, Whitney and Kenneth West (1987a). "Hypothesis Testing with Efficient Method of Moments Estimation," International Economic Review, 28, 777-787.
Newey, Whitney and Kenneth West (1987b). "A Simple Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55, 703-708.
Pindyck, Robert S. and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts, 4th edition, New York: McGraw-Hill.
Roecker, E. B. (1991). "Prediction Error and its Estimation for Subset-Selection Models," Technometrics, 33, 459-469.
Tauchen, George (1986). "Statistical Properties of Generalized Method-of-Moments Estimators of Structural Parameters Obtained From Financial Market Data," Journal of Business & Economic Statistics, 4, 397-416.
White, Halbert (1980). "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity," Econometrica, 48, 817-838.
Wooldridge, Jeffrey M. (2000). Introductory Econometrics: A Modern Approach, Cincinnati, OH: South-Western College Publishing.
Background
A fundamental assumption of regression analysis is that the right-hand side variables are
uncorrelated with the disturbance term. If this assumption is violated, both OLS and
weighted LS are biased and inconsistent.
There are a number of situations where some of the right-hand side variables are correlated
with disturbances. Some classic examples occur when:
There are endogenously determined variables on the right-hand side of the equation.
Right-hand side variables are measured with error.
For simplicity, we will refer to variables that are correlated with the residuals as endogenous,
and variables that are not correlated with the residuals as exogenous or predetermined.
The standard approach in cases where right-hand side variables are correlated with the
residuals is to estimate the equation using instrumental variables regression. The idea
behind instrumental variables is to find a set of variables, termed instruments, that are both
(1) correlated with the explanatory variables in the equation, and (2) uncorrelated with the
disturbances. These instruments are used to eliminate the correlation between right-hand
side variables and the disturbances.
There are many different approaches to using instruments to eliminate the effect of variable
and residual correlation. EViews offers three basic types of instrumental variable estimators:
Two-stage Least Squares (TSLS), Limited Information Maximum Likelihood and K-Class Estimation (LIML), and Generalized Method of Moments (GMM).
In the first stage of two-stage least squares, each right-hand side variable in the model is regressed on the full set of instruments. The second stage is a regression of the original equation,
with all of the variables replaced by the fitted values from the first-stage regressions. The
coefficients of this regression are the TSLS estimates.
You need not worry about the separate stages of TSLS since EViews will estimate both stages
simultaneously using instrumental variables techniques. More formally, let Z be the matrix
of instruments, and let y and X be the dependent and explanatory variables. The linear
TSLS objective function is given by:
$$\Psi(b) = (y - Xb)'Z(Z'Z)^{-1}Z'(y - Xb) \qquad (21.1)$$
Then the coefficients computed in two-stage least squares are given by,
$$b_{TSLS} = \bigl(X'Z(Z'Z)^{-1}Z'X\bigr)^{-1}X'Z(Z'Z)^{-1}Z'y, \qquad (21.2)$$
and the standard estimated covariance matrix of these coefficients may be computed using:
$$\hat\Sigma_{TSLS} = s^2\,\bigl(X'Z(Z'Z)^{-1}Z'X\bigr)^{-1}, \qquad (21.3)$$
where $s^2$ is the estimated residual variance (square of the standard error of the regression). If desired, $s^2$ may be replaced by the non-d.f. corrected estimator. Note also that EViews offers both White and HAC covariance matrix options for two-stage least squares.
There are a few things to keep in mind as you enter your instruments:
• In order to calculate TSLS estimates, your specification must satisfy the order condition for identification, which says that there must be at least as many instruments as there are coefficients in your equation. There is an additional rank condition which must also be satisfied. See Davidson and MacKinnon (1993) and Johnston and DiNardo (1997) for additional discussion.
• For econometric reasons that we will not pursue here, any right-hand side variables that are not correlated with the disturbances should be included as instruments.
• EViews will, by default, add a constant to the instrument list. If you do not wish a constant to be added to the instrument list, the Include a constant check box should be unchecked.
To illustrate the estimation of two-stage least squares, we use an example from Stock and Watson (2007, p. 438), which estimates the demand for cigarettes in the United States in 1995. (The data are available in the workfile "Sw_cig.WF1".) The dependent variable is the per capita log of packs sold LOG(PACKPC). The exogenous variables are a constant, C, and the log of real per capita state income LOG(PERINC). The endogenous variable is the log of real after-tax price per pack LOG(RAVGPRS). The additional instruments are average state sales tax RTAXSO, and cigarette specific taxes RTAXS. Stock and Watson use the White covariance estimator for the standard errors.
The equation specification is then,

log(packpc) c log(ravgprs) log(perinc)

and the instrument list is:

c log(perinc) rtaxso rtaxs

This specification satisfies the order condition for identification, which requires that there are at least as many instruments (four) as there are coefficients (three) in the equation specification. Note that listing C as an instrument is redundant, since by default, EViews automatically adds it to the instrument list.
To specify the use of White heteroskedasticity robust standard errors, we will select White in the Coefficient covariance matrix dropdown menu on the Options tab. By default, EViews will estimate the coefficient covariance using the Ordinary method with d.f. Adjustment as specified in Equation (21.3).
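The same estimation may be sketched from the command line using the tsls method, with the instrument list following the @ sign; the cov=white option keyword is an assumption and should be verified:

' TSLS for the cigarette demand equation with White standard errors (option name assumed)
equation eq_cig.tsls(cov=white) log(packpc) c log(ravgprs) log(perinc) @ c log(perinc) rtaxso rtaxs

Estimating the equation produces the output shown below.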
Variable        Coefficient   Std. Error   t-Statistic   Prob.
C               9.894956      0.959217     10.31566      0.0000
LOG(RAVGPRS)    -1.277424     0.249610     -5.117680     0.0000
LOG(PERINC)     0.280405      0.253890     1.104436      0.2753

R-squared            0.429422    Mean dependent var    4.538837
Adjusted R-squared   0.404063    S.D. dependent var    0.243346
S.E. of regression   0.187856    Sum squared resid     1.588044
F-statistic          13.28079    Durbin-Watson stat    1.946351
Prob(F-statistic)    0.000029    Second-Stage SSR      1.845868
Instrument rank      4           J-statistic           0.311833
Prob(J-statistic)    0.576557
EViews identifies the estimation procedure, as well as the list of instruments in the header.
This information is followed by the usual coefficient, t-statistics, and asymptotic p-values.
The summary statistics reported at the bottom of the table are computed using the formulae
outlined in Summary Statistics on page 13. Bear in mind that all reported statistics are
only asymptotically valid. For a discussion of the finite sample properties of TSLS, see Johnston and DiNardo (1997, pp. 355-358) or Davidson and MacKinnon (1993, pp. 221-224).
Three other summary statistics are reported: Instrument rank, the J-statistic and the
Prob(J-statistic). The Instrument rank is simply the rank of the instrument matrix, and is
equal to the number of instruments used in estimation. The J-statistic is calculated as:
$$\frac{1}{T}\, u'Z\,\bigl(s^2 Z'Z / T\bigr)^{-1}Z'u \qquad (21.4)$$
where u are the regression residuals. See Generalized Method of Moments, beginning on
page 69 for additional discussion of the J-statistic.
EViews uses the structural residuals $u_t = y_t - x_t'b_{TSLS}$ in calculating the summary statistics. For example, the default estimator of the standard error of the regression used in the covariance calculation is:
$$s^2 = \sum_t u_t^2 / (T - k). \qquad (21.5)$$
These structural, or regression, residuals should be distinguished from the second stage
residuals that you would obtain from the second stage regression if you actually computed
the two-stage least squares estimates in two separate stages. The second stage residuals are
given by $\tilde u_t = \hat y_t - \hat x_t'\,b_{TSLS}$, where $\hat y_t$ and $\hat x_t$ are the fitted values from the first-stage regressions.
We caution you that some of the reported statistics should be interpreted with care. For
example, since different equation specifications will have different instrument lists, the
reported $R^2$ for TSLS can be negative even when there is a constant in the equation.
Variable        Coefficient   Std. Error   t-Statistic   Prob.
C               10.02006      0.996752     10.05272      0.0000
LOG(RAVGPRS)    -1.309245     0.271683     -4.819022     0.0000
LOG(PERINC)     0.291047      0.290818     1.000785      0.3225
AR(1)           0.026532      0.133425     0.198852      0.8433

R-squared            0.431689    Mean dependent var    4.537196
Adjusted R-squared   0.392039    S.D. dependent var    0.245709
S.E. of regression   0.191584    Sum squared resid     1.578284
Durbin-Watson stat   1.951380    Instrument rank       7
J-statistic          1.494632    Prob(J-statistic)     0.683510

Inverted AR Roots      .03
The Options button in the estimation box may be used to change the iteration limit and convergence criterion for the nonlinear instrumental variables procedure.
First-order AR errors
Suppose your specification is:
$$y_t = x_t'b + w_t'\gamma + u_t, \qquad u_t = \rho_1 u_{t-1} + \epsilon_t \qquad (21.6)$$
where $x_t$ is a vector of endogenous variables, and $w_t$ is a vector of predetermined variables, which, in this context, may include lags of the dependent variable. $z_t$ is a vector of instrumental variables not in $w_t$ that is large enough to identify the parameters of the model.
In this setting, there are important technical issues to be raised in connection with the choice of instruments. In a widely cited result, Fair (1970) shows that if the model is estimated using an iterative Cochrane-Orcutt procedure, all of the lagged left- and right-hand side variables ($y_{t-1}, x_{t-1}, w_{t-1}$) must be included in the instrument list to obtain consistent estimates. In this case, the instrument list should include:
$$(w_t,\; z_t,\; y_{t-1},\; x_{t-1},\; w_{t-1}). \qquad (21.7)$$
EViews estimates the model as a nonlinear regression model so that Fair's warning does not apply. Estimation of the model does, however, require specification of additional instruments to satisfy the instrument order condition for the transformed specification. By default, the first-stage instruments employed in TSLS are formed as if one were running Cochrane-Orcutt using Fair's prescription. Thus, if you omit the lagged left- and right-hand side terms from the instrument list, EViews will, by default, automatically add the lagged terms as instruments. This addition will be noted in your output.
You may instead instruct EViews not to add the lagged left- and right-hand side terms as
instruments. In this case, you are responsible for adding sufficient instruments to ensure the
order condition is satisfied.
For example, if you include a single AR(4) term and choose not to have EViews add the lagged terms automatically, a minimal instrument list is:
$$(w_t,\; z_t,\; y_{t-4},\; x_{t-4},\; w_{t-4}) \qquad (21.8)$$
If you include AR terms from 1 through 4, one possible instrument list is:
$$(w_t,\; z_t,\; y_{t-1},\ldots, y_{t-4},\; x_{t-1},\ldots, x_{t-4},\; w_{t-1},\ldots, w_{t-4}) \qquad (21.9)$$
Note that while conceptually valid, this instrument list has a large number of overidentifying
instruments, which may lead to computational difficulties and large finite sample biases
(Fair (1984, p. 214), Davidson and MacKinnon (1993, p. 222-224)). In theory, adding instruments should always improve your estimates, but as a practical matter this may not be so in
small samples.
In this case, you may wish to turn off the automatic lag instrument addition and handle the
additional instrument specification directly.
Examples
Suppose that you wish to estimate the consumption function by two-stage least squares,
allowing for first-order serial correlation. You may then use two-stage least squares with the
variable list,
cons c gdp ar(1)
Notice that the lags of both the dependent and endogenous variables (CONS(-1) and GDP(-1)) are included in the instrument list.
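A command-line sketch of this example, with a hypothetical additional instrument M1, would be:

' TSLS with AR(1) errors; EViews adds CONS(-1) and GDP(-1) to the instrument list by default
equation eq_cons.tsls cons c gdp ar(1) @ c m1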
Similarly, consider the consumption function:
cons c cons(-1) gdp ar(1)
Here we treat the lagged left and right-hand side variables from the original specification as
predetermined and add the lagged values to the instrument list.
Lastly, consider the specification:
cons c gdp ar(1 to 4)
Illustration
Suppose that you wish to estimate the consumption function by two-stage least squares,
accounting for first-order moving average errors. You may then use two-stage least squares
with the variable list,
cons c gdp ma(1)
EViews will add both first and second lags of CONS and GDP to the instrument list.
Technical Details
Most of the technical details are identical to those outlined above for AR errors. EViews
transforms the model that is nonlinear in parameters (employing backcasting, if appropriate) and then estimates the model using nonlinear instrumental variables techniques.
Recall that by default, EViews augments the instrument list by adding lagged dependent and
regressor variables corresponding to the AR lags. Note however, that each MA term involves
an infinite number of AR terms. Clearly, it is impossible to add an infinite number of lags to
the instrument list, so that EViews performs an ad hoc approximation by adding a truncated
set of instruments involving the MA order and an additional lag. If for example, you have an
MA(5), EViews will add lagged instruments corresponding to lags 5 and 6.
Of course, you may instruct EViews not to add the extra instruments. In this case, you are
responsible for adding enough instruments to ensure the instrument order condition is satisfied.
Nonlinear Two-stage Least Squares
Suppose the specification is nonlinear in the parameters:
$$y_t = f(x_t, b) + \epsilon_t, \qquad (21.10)$$
Nonlinear two-stage least squares minimizes the objective function:
$$\Psi(b) = \bigl(y - f(X,b)\bigr)'Z(Z'Z)^{-1}Z'\bigl(y - f(X,b)\bigr) \qquad (21.11)$$
with first-order conditions:
$$G(b)'Z(Z'Z)^{-1}Z'\bigl(y - f(X,b)\bigr) = 0 \qquad (21.12)$$
where $G(b)$ is the matrix of derivatives of $f$ with respect to the parameters. The default reported standard errors are based on the covariance matrix estimate:
$$\hat\Sigma_{TSNLLS} = s^2\bigl(G(b)'Z(Z'Z)^{-1}Z'G(b)\bigr)^{-1} \qquad (21.13)$$
evaluated at $b \equiv b_{TSNLLS}$.
With nonlinear two-stage least squares estimation, you have a great deal of flexibility with
your choice of instruments. Intuitively, you want instruments that are correlated with the
derivatives G ( b ) . Since G is nonlinear, you may begin to think about using more than just
the exogenous and predetermined variables as instruments. Various nonlinear functions of
these variables, for example, cross-products and powers, may also be valid instruments. One
should be aware, however, of the possible finite sample biases resulting from using too
many instruments.
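As an illustration, a nonlinear two-stage least squares specification is entered as an explicit expression followed by the instrument list; the series names are placeholders, and the squared instrument is included only to illustrate using a nonlinear function of an exogenous variable:

' nonlinear TSLS: expression specification with instruments after the @ sign
equation eq_ntsls.tsls y = c(1) + c(2)*x^c(3) @ c z z^2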
For weighted nonlinear two-stage least squares, EViews minimizes the objective:
$$\Psi(b) = \bigl(y - f(X,b)\bigr)'WZ(Z'WZ)^{-1}Z'W\bigl(y - f(X,b)\bigr). \qquad (21.14)$$
The default reported standard errors are based on the covariance matrix estimate given by:
$$\hat\Sigma_{WTSNLLS} = s^2\bigl(G(b)'WZ(Z'WZ)^{-1}Z'WG(b)\bigr)^{-1} \qquad (21.15)$$
where $b \equiv b_{WTSNLLS}$ and $W$ is the diagonal matrix of weights.
Limited Information Maximum Likelihood (LIML) is a form of instrumental variable estimation that, like TSLS, addresses the problem where one or more of the right-hand side variables in the regression are correlated with the residuals.
LIML was first introduced by Anderson and Rubin (1949), prior to the introduction of two-stage least squares. However, TSLS has traditionally been favored by researchers over LIML as a method of instrumental variable estimation. If the equation is exactly identified, LIML and TSLS will be numerically identical. Recent studies (for example, Hahn and Inoue (2002)) have, however, found that LIML performs better than TSLS in situations where there are many weak instruments.
The linear LIML estimator minimizes
$$\Psi(b) = T\,\frac{(y - Xb)'Z(Z'Z)^{-1}Z'(y - Xb)}{(y - Xb)'(y - Xb)} \qquad (21.16)$$
with respect to $b$, where $y$ is the dependent variable, $X$ are explanatory variables, and $Z$ are instrumental variables.
Computationally, it is often easier to write this minimization problem in a slightly different form. Let $\tilde W = (y, X)$ and $\tilde b = (1, -b')'$. Then the linear LIML objective function can be written as:
$$\Psi(b) = T\,\frac{\tilde b'\tilde W'Z(Z'Z)^{-1}Z'\tilde W\tilde b}{\tilde b'\tilde W'\tilde W\tilde b} \qquad (21.17)$$
Minimizing this objective with respect to $\tilde b$ yields the LIML estimates; the minimum eigenvalue associated with this problem, $\lambda$, is reported with the LIML output. The log-likelihood value reported for LIML estimation is computed as:
$$L = -\frac{T}{2}\Bigl(\log(u'u) + \log\bigl|X'AX - X'AZ(Z'AZ)^{-1}Z'AX\bigr|\Bigr) \qquad (21.18)$$
K-Class
K-Class estimation is a third form of instrumental variable estimation; in fact TSLS and LIML
are special cases of K-Class estimation. The linear K-Class objective function is, for a fixed
$k$, given by:
$$\Psi(b) = (y - Xb)'(I - kM_Z)(y - Xb) \qquad (21.19)$$
The corresponding K-Class coefficient estimator is:
$$b_k = \bigl(X'(I - kM_Z)X\bigr)^{-1}X'(I - kM_Z)y \qquad (21.20)$$
where $P_Z = Z(Z'Z)^{-1}Z'$ and $M_Z = I - Z(Z'Z)^{-1}Z' = I - P_Z$.
If $k = 1$, then the K-Class estimator is the TSLS estimator. If $k = 0$, then the K-Class estimator is OLS. LIML is a K-Class estimator with $k = \lambda$, the minimum eigenvalue described above.
The obvious K-Class covariance matrix estimator is given by:
$$\hat\Sigma_k = s^2\bigl(X'(I - kM_Z)X\bigr)^{-1} \qquad (21.21)$$
Bekker (1994) offers a covariance matrix estimator for K-Class estimators with normal error terms that is more robust to weak instruments. The Bekker covariance matrix estimate is given by:
$$\hat\Sigma_{BEKK} = H^{-1}\hat\Sigma\,H^{-1} \qquad (21.22)$$
where
$$H = X'P_ZX - a\,(X'X), \qquad
\hat\Sigma = s^2\Bigl((1-a)^2\,\tilde X'P_Z\tilde X + a^2\,\tilde X'M_Z\tilde X\Bigr) \qquad (21.23)$$
for
$$a = \frac{u'P_Zu}{u'u} \quad\text{and}\quad \tilde X = X - \frac{u\,u'X}{u'u}.$$
Hansen, Hausman and Newey (2006) offer an extension to Bekker's covariance matrix estimate for cases with non-normal error terms.
Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           17.14765      1.840295     9.317882      0.0000
Y           -0.222513     0.201748     -1.102927     0.2854
Y(-1)       0.396027      0.173598     2.281293      0.0357
W           0.822559      0.055378     14.85347      0.0000

R-squared            0.956572    Mean dependent var      53.99524
Adjusted R-squared   0.948909    S.D. dependent var      6.860866
S.E. of regression   1.550791    Sum squared resid       40.88419
Durbin-Watson stat   1.487859    LIML min. eigenvalue    1.498746
EViews identifies the LIML estimation procedure, along with the choice of covariance matrix
type and the list of instruments in the header. This information is followed by the usual
coefficient, t-statistics, and asymptotic p-values.
The standard summary statistics reported at the bottom of the table are computed using the
formulae outlined in Summary Statistics on page 13. Along with the standard statistics,
the LIML minimum eigenvalue is also reported, if the estimation type was LIML.
$$E\bigl(m(y_t, b)\bigr) = 0. \qquad (21.24)$$
In EViews (as in most econometric applications), we restrict our attention to moment conditions that may be written as an orthogonality condition between the residuals of an equation, $u_t(b) = u(y_t, X_t, b)$, and a set of $L$ instruments $Z_t$:
$$E\bigl(Z_t'\,u_t(b)\bigr) = 0 \qquad (21.25)$$
The traditional Method of Moments estimator is defined by replacing the moment conditions in Equation (21.25) with their sample analog:
$$m_T(b) = \frac{1}{T}\sum_t Z_t'\,u_t(b) = \frac{1}{T}\,Z'u(b) = 0 \qquad (21.26)$$
and finding the parameter vector $b$ which solves this set of $L$ equations.
When there are more moment conditions than parameters ($L > K$), the system of equations given in Equation (21.26) may not have an exact solution. Such a system is said to be overidentified. Though we cannot generally find an exact solution for an overidentified system, we can reformulate the problem as one of choosing a $b$ so that the sample moment $m_T(b)$ is as close to zero as possible, where "close" is defined using the quadratic form:
$$J(b, \hat W_T) = T\, m_T(b)'\,\hat W_T^{-1}\, m_T(b) = \frac{1}{T}\, u(b)'Z\,\hat W_T^{-1}\,Z'u(b) \qquad (21.27)$$
where $\hat W_T$ is a symmetric weighting matrix.
The GMM estimator is $\sqrt{T}$ asymptotically normally distributed,
$$\sqrt{T}\,(b - b_0) \;\rightarrow\; N(0, V) \qquad (21.28)$$
The asymptotic covariance matrix $V$ of $\sqrt{T}\,(b - b_0)$ is given by
$$V = (S'W^{-1}S)^{-1}\, S'W^{-1}\Sigma\, W^{-1}S\, (S'W^{-1}S)^{-1} \qquad (21.29)$$
for
$$W = \operatorname{plim}\,\hat W_T, \qquad S = \operatorname{plim}\,\frac{1}{T}\,Z'\nabla u(b), \qquad \Sigma = \operatorname{plim}\,\frac{1}{T}\,Z'u(b)u(b)'Z \qquad (21.30)$$
In the leading case where the $u_t(b)$ are the residuals from a linear specification so that $u_t(b) = y_t - X_t'b$, the GMM objective function is given by
$$J(b, \hat W_T) = \frac{1}{T}\,(y - Xb)'Z\,\hat W_T^{-1}\,Z'(y - Xb) \qquad (21.31)$$
and the GMM estimator yields the unique solution $\hat b = \bigl(X'Z\hat W_T^{-1}Z'X\bigr)^{-1}X'Z\hat W_T^{-1}Z'y$. The asymptotic covariance matrix is given by Equation (21.29), with
$$S = \operatorname{plim}\,\frac{1}{T}\,(Z'X) \qquad (21.32)$$
It can be seen from this formulation that both two-stage least squares and ordinary least squares estimation are special cases of GMM estimation. The two-stage least squares objective is simply the GMM objective function multiplied by $\hat\sigma^2$ using the weighting matrix $\hat W_T = (\hat\sigma^2 Z'Z/T)$. Ordinary least squares is equivalent to the two-stage least squares objective with the instruments set equal to the derivatives of $u_t(b)$, which in the linear case are the regressors.
An efficient or optimal GMM estimator of $b$ may be obtained by choosing the weighting matrix so that it converges to the long-run covariance of the sample moments:
$$\operatorname{plim}\,\hat W_T = \Sigma \qquad (21.33)$$
Intuitively, this result follows since we naturally want to assign less weight to the moment
conditions that are measured imprecisely. For a GMM estimator with an optimal weighting
matrix, the asymptotic covariance matrix of b is given by
$$V = (S'\Sigma^{-1}S)^{-1}\,S'\Sigma^{-1}\Sigma\,\Sigma^{-1}S\,(S'\Sigma^{-1}S)^{-1} = (S'\Sigma^{-1}S)^{-1} \qquad (21.34)$$
EViews lets you choose among the following specifications for the estimation weighting matrix:
• Two-stage least squares: the two-stage least squares weighting matrix is given by $\hat W_T = (\hat\sigma^2 Z'Z/T)$, where $\hat\sigma^2$ is an estimator of the residual variance based on an initial estimate of $b$. The estimator for the variance will be $s^2$ or the no d.f. corrected equivalent, depending on your settings for the coefficient covariance calculation.
• White: the White weighting matrix is a heteroskedasticity consistent estimator of the long-run covariance matrix of $\{Z_t u_t(b)\}$ based on an initial estimate of $b$.
• HAC - Newey-West: the HAC weighting matrix is a heteroskedasticity and autocorrelation consistent estimator of the long-run covariance matrix of $\{Z_t u_t(b)\}$ based on an initial estimate of $b$.
• User-specified: this method allows you to provide your own weighting matrix (specified as a sym matrix containing a scaled estimate of the long-run covariance $\hat U = T\hat S$).
For related discussion of the White and HAC - Newey West robust standard error estimators, see Robust Standard Errors on page 32.
4. Minimize the GMM objective function with weighting matrix $\hat W_T = \hat S_T(b_0)$:
$$J(b_1, b_0) = \frac{1}{T}\, u(b_1)'Z\,\hat S_T(b_0)^{-1}\, Z'u(b_1) \qquad (21.35)$$
An alternative approach due to Hansen, Heaton and Yaron (1996) notes that since the optimal weighting matrix is dependent on the parameters, we may rewrite the GMM objective
function as
$$J(b) = \frac{1}{T}\, u(b)'Z\,\hat S_T(b)^{-1}\, Z'u(b) \qquad (21.36)$$
where the weighting matrix is a direct function of the b being estimated. The estimator
which minimizes Equation (21.36) with respect to b has been termed the Continuously
Updated Estimator (CUE).
convergence, is conducted in step 4. The iterations are therefore simultaneous in the sense
that each weight iteration is paired with a coefficient iteration.
1-Step Weight Plus 1 Iteration performs a single weight iteration after the initial two-stage
least squares estimates, and then a single iteration of the non-linear optimizer based on the
updated weight matrix.
The Continuously Updating approach is again based on Equation (21.36).
Conventional Estimators
Using Equation (21.29) and inserting estimators and sample moments, we obtain an estimator for the asymptotic covariance matrix of b 1 :
$$\hat V_T(b_1, b_0) = \hat A^{-1}\,\hat B(\hat S)\,\hat A^{-1} \qquad (21.37)$$
where
$$\hat A = \nabla u(b_1)'Z\,\hat S_T(b_0)^{-1}Z'\nabla u(b_1), \qquad
\hat B = \nabla u(b_1)'Z\,\hat S_T(b_0)^{-1}\,\hat S\,\hat S_T(b_0)^{-1}Z'\nabla u(b_1) \qquad (21.38)$$
Notice that the estimator depends on both the final coefficient estimates b 1 and the b 0
used to form the estimation weighting matrix, as well as an additional estimate of the longrun covariance matrix S . For weight update methods which iterate the weights until the
coefficients converge the two sets of coefficients will be identical.
EViews offers six different covariance specifications of this form, Estimation default, Estimation updated, Two-stage Least Squares, White, HAC (Newey-West), and User defined,
each corresponding to a different estimator for S .
Of these, Estimation default and Estimation update are the most commonly employed
coefficient covariance methods. Both methods compute S using the estimation weighting
matrix specification (i.e. if White was chosen as the estimation weighting matrix, then
White will also be used for estimating S ).
• Estimation default uses the previously computed estimate of the long-run covariance matrix to form $\hat S = \hat S_T(b_0)$. The asymptotic covariance matrix simplifies considerably in this case so that $\hat V_T(b) = \hat A^{-1}$.
• Estimation updated performs one more step 3 in the iterative estimation procedure, computing an estimate of the long-run covariance using the final coefficient estimates to obtain $\hat S = \hat S_T(b_1)$. Since this method relies on the iterative estimation procedure, it is not available for equations estimated by CUE.
In cases, where the weighting matrices are iterated to convergence, these two approaches
will yield identical results.
The remaining specifications compute estimates of $\hat S$ at the final parameters $b_1$ using the indicated long-run covariance method. You may use these methods to estimate your equation using one set of assumptions for the weighting matrix $\hat W_T = \hat S_T(b_0)$, while you compute the coefficient covariance using a different set of assumptions for $\hat S = \hat S_T(b_1)$.
The primary application for this mixed weighting approach is in computing robust standard
errors. Suppose, for example, that you want to estimate your equation using TSLS weights,
but with robust standard errors. Selecting Two-stage least squares for the estimation
weighting matrix and White for the covariance calculation method will instruct EViews to
compute TSLS estimates with White coefficient covariances and standard errors. Similarly,
estimating with Two-stage least squares estimation weights and HAC - Newey-West covariance weights produces TSLS estimates with HAC coefficient covariances and standard
errors.
Note that it is possible to choose combinations of estimation and covariance weights that,
while reasonable, are not typically employed. You may, for example, elect to use White estimation weights with HAC covariance weights, or perhaps HAC estimation weights using one
set of HAC options and HAC covariance weights with a different set of options. It is also possible, though not recommended, to construct odder pairings such as HAC estimation weights
with TSLS covariance weights.
Windmeijer Estimator
Various Monte Carlo studies (e.g. Arellano and Bond 1991) have shown that the above covariance estimators can produce standard errors that are downward biased in small samples.
Windmeijer (2000, 2005) observes that part of this downward bias is due to extra variation
caused by the initial weight matrix estimation being itself based on consistent estimates of
the equation parameters.
Following this insight it is possible to calculate bias-corrected standard error estimates
which take into account the variation of the initial parameter estimates. Windmeijer provides two forms of bias corrected standard errors; one for GMM models estimated in a onestep (one optimal GMM weighting matrix) procedure, and one for GMM models estimated
using an iterate-to-convergence procedure.
The Windmeijer corrected variance-covariance matrix of the one-step estimator is given by:
$$\hat V_{W2Step} = \hat V_1 + \hat V_1 D_{2S}' + D_{2S}\hat V_1 + D_{2S}\hat V_2 D_{2S}' \qquad (21.39)$$
where:
• $\hat V_1 = \hat A^{-1}$, the estimation default covariance estimator
• $\hat W_{2T} = \hat S_T(b_1)$, the updated weighting matrix (at final parameter estimates)
• $\hat V_2 = \hat A^{-1}\hat B\hat A^{-1}$, the estimation updated covariance estimator where $\hat S = \hat S_T(b_1)$
• $D_{2S}$ is a matrix whose $j$-th column is formed from the derivative of the one-step weighting matrix $\hat W_{1T}$ with respect to the parameter $b_j$
The Windmeijer corrected variance-covariance matrix of the iterate-to-convergence estimator is given by:
$$\hat V_{WIC} = (I - D_C)^{-1}\,\hat V_C\,(I - D_C')^{-1} \qquad (21.40)$$
Weighted GMM
Weights may also be used in GMM estimation. The objective function for weighted GMM is,
$$S(b) = \frac{1}{T}\,\bigl(y - f(X,b)\bigr)'\Lambda Z\,\hat S_T^{-1}\,Z'\Lambda\bigl(y - f(X,b)\bigr) \qquad (21.41)$$
where $\hat S_T$ is the long-run covariance of $w_t Z_t \epsilon_t$ and where we now use $\Lambda$ to indicate the diagonal matrix with observation weights $w_t$.
The default reported standard errors are based on the covariance matrix estimate given by:
$$\hat\Sigma_{WGMM} = \bigl(G(b)'\Lambda Z\,\hat S_T^{-1}\,Z'\Lambda G(b)\bigr)^{-1} \qquad (21.42)$$
where $b \equiv b_{WGMM}$.
For example, if you estimate the linear specification y c x using the instrument list c z w, EViews uses the sample moment conditions:
$$\sum_t \bigl(y_t - c(1) - c(2)x_t\bigr) = 0$$
$$\sum_t \bigl(y_t - c(1) - c(2)x_t\bigr)z_t = 0$$
$$\sum_t \bigl(y_t - c(1) - c(2)x_t\bigr)w_t = 0 \qquad (21.43)$$
Similarly, if you specify the expression c(1)*log(y) + x^c(2) with the instrument list c z z(-1), the moment conditions are:
$$\sum_t \bigl(c(1)\log y_t + x_t^{c(2)}\bigr) = 0$$
$$\sum_t \bigl(c(1)\log y_t + x_t^{c(2)}\bigr)z_t = 0$$
$$\sum_t \bigl(c(1)\log y_t + x_t^{c(2)}\bigr)z_{t-1} = 0 \qquad (21.44)$$
Beneath the Instrument list box there are two dropdown menus that let you set the Estimation weighting matrix and the Weight updating.
The Estimation weight matrix dropdown specifies the type of GMM weighting matrix that
will be used during estimation. You can choose from Two-stage least squares, White, HAC
(Newey-West), and User-specified. If you select HAC (Newey West) then a button appears
that lets you set the weighting matrix computation options. If you select User-specified you
must enter the name of a symmetric matrix in the workfile containing an estimate of the weighting matrix (long-run covariance) scaled by the number of observations ($\hat U = T\hat S$). Note that the matrix must have as many columns as the number of instruments specified.
The $\hat U$ matrix can be retrieved from any equation estimated by GMM using the @instwgt data member (see "Equation Data Members" on page 35 of the Command and Programming Reference). @instwgt returns $\hat U$, which is an implicit estimator of the long-run covariance scaled by the number of observations.
For example, for GMM equations estimated using the Two-stage least squares weighting matrix, $\hat U$ will contain $\hat\sigma^2(Z'Z)$ (where the estimator for the variance will use $s^2$ or the no d.f. corrected equivalent, depending on your options for coefficient covariance calculation). Equations estimated with a White weighting matrix will return $\sum_t e_t^2\, Z_t'Z_t$.
Storing the user weighting matrix from one equation, and using it during the estimation of a
second equation may prove useful when computing diagnostics that involve comparing J-statistics between two different equations.
The Weight updating dropdown menu lets you set the estimation algorithm type. For linear
equations, you can choose between N-Step Iterative, Iterate to Convergence, and Continuously Updating. For non-linear equations, the choice is between Sequential N-Step Iterative, Sequential Iterate to Convergence, Simultaneous Iterate to Convergence, 1-Step
Weight Plus 1 Iteration, and Continuously Updating.
To illustrate estimation of GMM models in EViews, we estimate the same Klein model introduced in "Estimating LIML and K-Class in EViews," on page 67, as again replicated by Greene (2008, p. 385). We again estimate the Consumption equation, where consumption
(CONS) is regressed on a constant, private profits (Y), lagged private profits (Y(-1)), and
wages (W) using data in Klein.WF1. The instruments are a constant, lagged corporate
profits (P(-1)), lagged capital stock (K(-1)), lagged GNP (X(-1)), a time trend (TM), Govern-
ment wages (WG), Government spending (G) and taxes (T). Greene uses the White weighting matrix, and an N-Step Iterative updating procedure, with N set to 2. The results of this
estimation are shown below:
Dependent Variable: CONS
Method: Generalized Method of Moments
Date: 04/21/09   Time: 12:17
Sample (adjusted): 1921 1941
Included observations: 21 after adjustments
Linear estimation with 2 weight updates
Estimation weighting matrix: White
Standard errors & covariance computed using estimation weighting matrix
No d.f. adjustment for standard errors & covariance
Instrument specification: C P(-1) K(-1) X(-1) TM WG G T

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           14.31902      0.896606     15.97025      0.0000
Y           0.090243      0.061598     1.465032      0.1612
Y(-1)       0.143328      0.065493     2.188443      0.0429
W           0.863930      0.029250     29.53616      0.0000

R-squared            0.976762    Mean dependent var    53.99524
Adjusted R-squared   0.972661    S.D. dependent var    6.860866
S.E. of regression   1.134401    Sum squared resid     21.87670
Durbin-Watson stat   1.420878    Instrument rank       8
J-statistic          3.742084    Prob(J-statistic)     0.442035
The EViews output header shows a summary of the estimation type and settings, along with
the instrument specification. Note that in this case the header shows that the equation was
linear, with a 2 step iterative weighting update performed. It also shows that the weighing
matrix type was White, and this weighting matrix was used for the covariance matrix, with
no degree of freedom adjustment.
Following the header the standard coefficient estimates, standard errors, t-statistics and
associated p-values are shown. Below that information are displayed the summary statistics.
Apart from the standard statistics shown in an equation, the instrument rank (the number of
linearly independent instruments used in estimation) is also shown (8 in this case), and the
J-statistic and associated p-value is also shown.
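The consumption equation may also be estimated from the command line with the gmm equation method. The sketch below gives only the basic specification and instrument list, leaving the weighting matrix and updating options at their defaults, since the exact option keywords should be taken from the Command and Programming Reference:

' GMM estimation of the Klein consumption equation (default weighting options)
equation eq_gmm.gmm cons c y y(-1) w @ c p(-1) k(-1) x(-1) tm wg g t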
As a second example, we also estimate the equation for Investment. Investment (I) is regressed on a constant, private profits (Y), lagged private profits (Y(-1)) and lagged capital stock (K(-1)). The instruments are again a constant, lagged corporate profits (P(-1)), lagged capital stock (K(-1)), lagged GNP (X(-1)), a time trend (TM), Government wages (WG), Government spending (G) and taxes (T).
Unlike Greene, we will use a HAC weighting matrix, with pre-whitening (fixed at 1 lag), a Tukey-Hanning kernel, and Andrews Automatic Bandwidth selection. We will also use the Continuously Updating weight updating procedure. The output from this equation is shown below:
Dependent Variable: I
Method: Generalized Method of Moments
Date: 08/10/09   Time: 10:48
Sample (adjusted): 1921 1941
Included observations: 21 after adjustments
Continuously updating weights & coefficients
Estimation weighting matrix: HAC (Prewhitening with lags = 1, Tukey-Hanning kernel, Andrews bandwidth = 2.1803)
Standard errors & covariance computed using estimation weighting matrix
Convergence achieved after 30 iterations
No d.f. adjustment for standard errors & covariance
Instrument specification: C P(-1) K(-1) X(-1) TM WG G T

Variable    Coefficient   Std. Error   t-Statistic   Prob.
C           22.20609      5.693625     3.900168      0.0012
Y           -0.261377     0.277758     -0.941024     0.3599
Y(-1)       0.935801      0.235666     3.970878      0.0010
K(-1)       -0.157050     0.024042     -6.532236     0.0000

R-squared            0.659380    Mean dependent var    1.266667
Adjusted R-squared   0.599271    S.D. dependent var    3.551948
S.E. of regression   2.248495    Sum squared resid     85.94740
Durbin-Watson stat   1.804037    Instrument rank       8
J-statistic          1.949180    Prob(J-statistic)     0.745106
Note that the header information for this equation shows slightly different information from the previous estimation. The inclusion of the HAC weighting matrix yields information on the prewhitening choice (lags = 1) and on the kernel specification, including the bandwidth that was chosen by the Andrews procedure (2.1803). Since the CUE procedure is used, the number of optimization iterations that took place is also reported (30).
Instrument Summary
The Instrument Summary view of an equation is available for non-panel equations estimated by GMM, TSLS or LIML. The summary will display the number of instruments specified, the instrument specification, and a list of the instruments that were used in estimation.
For most equations, the instruments used will be the same as the instruments that were specified in the equation; however, if two or more of the instruments are collinear, EViews will automatically drop instruments until the instrument matrix is of full rank. In cases where instruments have been dropped, the summary will list which instruments were dropped.
The Instrument Summary view may be found under View/IV Diagnostics & Tests/Instrument Summary.
Instrument Orthogonality Test
Recall that the IV and GMM estimators rely upon the orthogonality condition between the instruments and the residuals:
    E(Z'u(b)) = 0    (21.45)
The Instrument Orthogonality Test evaluates whether this condition possibly holds for a subset of the instruments but not for the remaining instruments:
    E(Z1'u(b)) = 0
    E(Z2'u(b)) ≠ 0    (21.46)
where Z = (Z1, Z2), and Z1 are the instruments for which the condition is assumed to hold.
The test statistic, C_T, is calculated as the difference in J-statistics between the original equation and a secondary equation estimated using only Z1 as instruments:
    C_T = (1/T) u(b)'Z W_T^(-1) Z'u(b) − (1/T) u(b~)'Z1 W~_T1^(-1) Z1'u(b~)    (21.47)
where b are the parameter estimates from the original TSLS or GMM estimation, W_T^(-1) is the original weighting matrix, b~ are the estimates from the test equation, and W~_T1^(-1) is the matrix for the test equation formed by taking the subset of W_T^(-1) corresponding to the instruments in Z1. The test statistic is Chi-squared distributed with degrees of freedom equal to the number of instruments in Z2.
To perform the Instrument Orthogonality Test in EViews, click on View/IV Diagnostics and Tests/Instrument Orthogonality Test. A dialog box will then open asking you to enter a list of the Z2 instruments for which the orthogonality condition may not hold. Click on OK and the test results will be displayed.
Regressor Endogeneity Test
Recall that exogenous variables are those which are specified in both the regressor list and the instrument list, whereas endogenous variables are those which are specified in the regressor list only.
The Endogeneity Test tests whether a subset of the endogenous variables are actually exogenous. This is calculated by running a secondary estimation where the test variables are
treated as exogenous rather than endogenous, and then comparing the J-statistic between
this secondary estimation and the original estimation:
    H_T = (1/T) u(b)'Z W_T^(-1) Z'u(b) − (1/T) u(b~)'Z~ W~_T*^(-1) Z~'u(b~)    (21.48)
where b are the parameter estimates from the original TSLS or GMM estimation obtained using weighting matrix W_T, b~ are the estimates from the test equation estimated using Z~, the instruments augmented by the variables which are being tested, and W~_T* is the weighting matrix from the secondary estimation.
Note that in the case of GMM estimation, the matrix W~_T* should be a sub-matrix of W_T to ensure positivity of the test statistic. Accordingly, in computing the test statistic, EViews first estimates the secondary equation to obtain b~, and then forms a new matrix W_T*, which is the subset of W_T corresponding to the original instruments Z. A third estimation is then performed using the subset matrix for weighting, and the test statistic is calculated as:
    H_T = (1/T) u(b)'Z W_T^(-1) Z'u(b) − (1/T) u(b~)'Z~ W_T*^(-1) Z~'u(b~)    (21.49)
The test statistic is distributed as a Chi-squared random variable with degrees of freedom
equal to the number of regressors tested for endogeneity.
To perform the Regressor Endogeneity Test in EViews, click on View/IV Diagnostics and Tests/Regressor Endogeneity Test. A dialog box will then open asking you to enter a list of regressors to test for endogeneity. Once you have entered those regressors, click OK and the test results are shown.
Weak Instrument Diagnostics
The Weak Instrument Diagnostics view provides diagnostics on the strength of the instruments used in estimation. For discussion of the problems associated with estimation when the instruments are weak, see, for example, Moreira (2001), Stock and Yogo (2004), or Stock, Wright and Yogo (2002).
Although the Cragg-Donald statistic is only valid for TSLS and other K-class estimators, EViews also reports it for equations estimated by GMM for comparative purposes.
The Cragg-Donald statistic is calculated as:
    G_t = [(T − k1 − k2)/k2] (X_E' M_XZ X_E)^(-1/2) (M_X X_E)' M_X Z_Z ((M_X Z_Z)'(M_X Z_Z))^(-1) (M_X Z_Z)' (M_X X_E) (X_E' M_XZ X_E)^(-1/2)    (21.50)
where:
    M_XZ = I − X_Z (X_Z' X_Z)^(-1) X_Z'
    M_X = I − X_X (X_X' X_X)^(-1) X_X'
    k1 = number of columns of X_X
    k2 = number of columns of Z_Z
The statistic does not follow a standard distribution; however, Stock and Yogo provide a table of critical values for certain combinations of numbers of instruments and endogenous variables. EViews will report these critical values if they are available for the specified number of instruments and endogenous variables in the equation.
Moment Selection Criteria (MSC) are a form of information criteria that can be used to compare different instrument sets. Comparison of the MSC from equations estimated with different instruments can help determine which instruments perform best. EViews reports three different MSCs: two proposed by Andrews (1999), one based on the Schwarz criterion and one based on the Hannan-Quinn criterion, and a third proposed by Hall, Inoue, Jana and Shin (2007), the Relevant Moment Selection Criterion. They are calculated as follows:
    SIC-based = J_T − (c − k) ln(T)
    HQIQ-based = J_T − 2.01 (c − k) ln(ln(T))
    Relevant MSC = ln(TQ) − (1/τ)(c − k) ln(τ)
where
    τ = T^(1/2) / b
and b is equal to 1 for TSLS and White GMM estimation, and equal to the bandwidth used in HAC GMM estimation.
To view the Weak Instrument Diagnostics in EViews, click on View/IV Diagnostics & Tests/
Weak Instrument Diagnostics.
GMM Breakpoint Test
The GMM breakpoint test allows you to test for structural change in an equation estimated by GMM. EViews computes three such statistics. The first, an Andrews-Fair Wald-type statistic, compares the coefficient estimates from each subsample:
    AF1 = (v1 − v2)' [(1/T1) V1 + (1/T2) V2]^(-1) (v1 − v2)    (21.51)
where v_i refers to the coefficient estimates from subsample i, T_i refers to the number of observations in subsample i, and V_i is the estimate of the variance-covariance matrix for subsample i.
The Andrews-Fair LR-type statistic is a comparison of the J-statistics from each of the subsample estimations:
    AF2 = J_R − (J1 + J2)    (21.52)
where J_R is a J-statistic calculated with the original equation's residuals, but a GMM weighting matrix equal to the weighted (by number of observations) sum of the estimated weighting matrices from each of the subsample estimations.
The third statistic, the O-statistic, is the sum of the J-statistics from each of the subsample estimations:
    O_T = J1 + J2    (21.53)
The first two statistics have an asymptotic χ² distribution with (m − 1)k degrees of freedom, where m is the number of subsamples, and k is the number of coefficients in the original equation. The O-statistic also follows an asymptotic χ² distribution, but with 2(q − (m − 1)k) degrees of freedom.
To apply the GMM Breakpoint test, click on View/Breakpoint Test. In the dialog box that
appears simply enter the dates or observation numbers of the breakpoint you wish to test.
References
Amemiya, T. (1975). The Nonlinear Limited-Information Maximum-Likelihood Estimator and the Modified Nonlinear Two-Stage Least-Squares Estimator, Journal of Econometrics, 3, 375-386.
Anderson, T.W. and H. Rubin (1950). The Asymptotic Properties of Estimates of the Parameters of a Single Equation in a Complete System of Stochastic Equations, The Annals of Mathematical Statistics,
21(4), 570-582.
Andrews, D.W.K. (1999). Consistent Moment Selection Procedures for Generalized Method of Moments
Estimation, Econometrica, 67(3), 543-564.
Andrews, D.W.K. (Oct. 1988). Inference in Nonlinear Econometric Models with Structural Change, The
Review of Economic Studies, 55(4), 615-639.
Anderson, T. W. and H. Rubin (1949). Estimation of the Parameters of a Single Equation in a Complete System of Stochastic Equations, Annals of Mathematical Statistics, 20, 46-63.
Arellano, M. and S. Bond (1991). Some Tests of Specification For Panel Data: Monte Carlo Evidence and
an Application to Employment Equations, Review of Economic Studies, 38, 277-297.
Bekker, P. A. (1994). Alternative Approximations to the Distributions of Instrumental Variable Estimators, Econometrica, 62(3), 657-681.
Cragg, J.G. and S. G. Donald (1993). Testing Identifiability and Specification in Instrumental Variable
Models, Econometric Theory, 9(2), 222-240.
Eichenbaum, M., L.P. Hansen, and K.J. Singleton (1988). A Time Series Analysis of Representative Agent
Models of Consumption and Leisure Choice under Uncertainty, The Quarterly Journal of Economics, 103(1), 51-78.
Hahn, J. and A. Inoue (2002). A Monte Carlo Comparison of Various Asymptotic Approximations to the
Distribution of Instrumental Variables Estimators, Econometric Reviews, 21(3), 309-336
Hall, A.R., A. Inoue, K. Jana, and C. Shin (2007). Information in Generalized Method of Moments Estimation and Entropy-based Moment Selection, Journal of Econometrics, 38, 488-512.
Hansen, C., J. Hausman, and W. Newey (2006). Estimation with Many Instrumental Variables, MIMEO.
Hausman, J., J.H. Stock, and M. Yogo (2005). Asymptotic Properties of the Han-Hausman Test for Weak
Instruments, Economics Letters, 89, 333-342.
Moreira, M.J. (2001). Tests With Correct Size When Instruments Can Be Arbitrarily Weak, MIMEO.
Stock, J.H. and M. Yogo (2004). Testing for Weak Instruments in Linear IV Regression, MIMEO.
Stock, J.H., J.H. Wright, and M. Yogo (2002). A Survey of Weak Instruments and Weak Identification in
Generalized Method of Moments, Journal of Business & Economic Statistics, 20(4), 518-529.
Windmeijer, F. (2000). A finite Sample Correction for the Variance of Linear Two-Step GMM Estimators,
The Institute for Fiscal Studies, Working Paper 00/19.
Windmeijer, F. (2005). A finite Sample Correction for the Variance of Linear efficient Two-Step GMM Estimators, Journal of Econometrics, 126, 25-51.
Background
A common occurrence in time series regression is the presence of correlation between residuals and their lagged values. This serial correlation violates the standard assumption of
regression theory which requires uncorrelated regression disturbances. Among the problems
associated with unaccounted for serial correlation in a regression framework are:
OLS is no longer efficient among linear estimators. Intuitively, since prior residuals
help to predict current residuals, we can take advantage of this information to form a
better prediction of the dependent variable.
Standard errors computed using the textbook OLS formula are not correct, and are
generally understated.
If there are lagged dependent variables on the right-hand side of the equation specification, OLS estimates are biased and inconsistent.
A popular framework for modeling serial dependence is the Autoregressive-Moving Average
(ARMA) and Autoregressive-Integrated-Moving Average (ARIMA) models popularized by
Box and Jenkins (1976) and generalized to Autoregressive-Fractionally Integrated-Moving
Average (ARFIMA) specifications.
(Note that ARMA and ARIMA models which allow for explanatory variables in the mean are sometimes termed ARMAX and ARIMAX. We will generally use ARMA to refer to models both with and without explanatory variables unless there is a specific reason to distinguish between the two types.)
The autoregressive model of order p, AR(p), may be written as:
    Y_t = r_1 Y_(t-1) + r_2 Y_(t-2) + … + r_p Y_(t-p) + e_t
        = Σ_(j=1)^p r_j Y_(t-j) + e_t    (22.1)
where e_t are the independent and identically distributed innovations for the process and the autoregressive parameters r_j characterize the nature of the dependence. Note that the autocorrelations of a stationary AR(p) are infinite, but decline geometrically so they die off quickly, and the partial autocorrelations for lags greater than p are zero.
It will be convenient for the discussion to follow to define a lag operator L such that:
    L^k Y_t = Y_(t-k)    (22.2)
so that the AR(p) model may be written as:
    Y_t = Σ_(j=1)^p r_j L^j Y_t + e_t    (22.3)
or
    r(L) Y_t = e_t
where
    r(L) = 1 − Σ_(j=1)^p r_j L^j    (22.4)
is a lag polynomial that characterizes the AR process. If we add a mean to the model, we obtain:
    r(L)(Y_t − m_t) = e_t    (22.5)
The first-order autoregressive, AR(1), regression model is given by:
    Y_t = X_t'b + u_t
    u_t = r u_(t-1) + e_t    (22.6)
Substituting the expression for u_t into the first equation, we have:
    Y_t = X_t'b + r L(Y_t − X_t'b) + e_t
        = X_t'b + r(Y_(t-1) − X_(t-1)'b) + e_t    (22.7)
In the representation it is easy to see that the AR(1) model incorporates the residual from
the previous observation into the regression model for the current observation.
Rearranging terms and using the lag operator, we have the polynomial form
    (1 − rL)(Y_t − X_t'b) = e_t    (22.8)
Higher-Order AR Models
A regression model with an autoregressive process of order p , AR( p ), is given by:
    Y_t = X_t'b + u_t
    u_t = r_1 u_(t-1) + r_2 u_(t-2) + … + r_p u_(t-p) + e_t    (22.9)
Substituting, we have:
    Y_t = X_t'b + Σ_(j=1)^p r_j (Y_(t-j) − X_(t-j)'b) + e_t    (22.10)
or, in terms of the lag polynomial,
    (1 − Σ_(j=1)^p r_j L^j)(Y_t − X_t'b) = e_t    (22.11)
MA Models
The moving average model of order q, MA(q), may be written as:
    Y_t = e_t + v_1 e_(t-1) + v_2 e_(t-2) + … + v_q e_(t-q)
        = e_t + Σ_(j=1)^q v_j e_(t-j)    (22.12)
        = v(L) e_t
where e_t are the innovations, and
    v(L) = 1 + Σ_(j=1)^q v_j L^j    (22.13)
is the moving average polynomial with parameters v i that characterize the MA process.
Note that the autocorrelations of an MA model are zero for lags greater than q .
You should pay particular attention to the definition of the lag polynomial when comparing
results across different papers, books, or software, as the opposite sign convention is sometimes employed for the v coefficients.
Adding a mean to the model, we get the mean adjusted form:
    Y_t − m = v(L) e_t    (22.14)
The first-order moving average, MA(1), regression model is given by:
    Y_t = X_t'b + u_t
    u_t = e_t + v e_(t-1)    (22.15)
The parameter v is the first-order moving average coefficient. Substituting, the MA(1) may be written as:
    Y_t = X_t'b + e_t + v e_(t-1)    (22.16)
or
    Y_t − X_t'b = (1 + vL) e_t    (22.17)
ARMA Models
We may combine the AR and MA specifications. In mean-adjusted form, the combined model may be written as:
    r(L)(Y_t − m_t) = v(L) e_t    (22.18)
We term this model an ARMA(p, q) to indicate that there are p lags in the AR and q terms in the MA.
The ARMA(1, 1) regression model is given by:
    Y_t = X_t'b + u_t
    u_t = r u_(t-1) + e_t + v e_(t-1)    (22.19)
The parameter r is the first-order serial correlation coefficient, and v is the moving average coefficient. Substituting, the ARMA(1, 1) may be written as:
    Y_t = X_t'b + r u_(t-1) + e_t + v e_(t-1)
        = X_t'b + r(Y_(t-1) − X_(t-1)'b) + e_t + v e_(t-1)    (22.20)
or equivalently,
    (1 − rL)(Y_t − X_t'b) = (1 + vL) e_t    (22.21)
Seasonal ARMA Terms
SAR and SMA terms allow you to form products of ordinary and seasonal lag polynomials. These products produce higher order ARMA models with nonlinear restrictions on the coefficients.
Seasonal AR Terms
A SAR( p ) term is a seasonal autoregressive term with lag p . A SAR adds to an existing AR
specification a polynomial with a lag of p :
    1 − f_p L^p    (22.22)
The SAR is not intended to be used alone. The SAR allows you to form the product of lag
polynomials, with the resulting lag structure defined by the product of the AR and SAR lag
polynomials.
For example, a second-order AR process without seasonality is given by,
    Y_t = r_1 Y_(t-1) + r_2 Y_(t-2) + e_t    (22.23)
or
    (1 − r_1 L − r_2 L^2) Y_t = e_t    (22.24)
For quarterly data, we might wish to add a SAR(4) term because we believe that there is correlation between a quarter and the same quarter in the previous year. Then the resulting process would be:
    (1 − r_1 L − r_2 L^2)(1 − f_4 L^4) Y_t = e_t    (22.25)
Expanding the product of the lag polynomials, this is equivalent to:
    Y_t = r_1 Y_(t-1) + r_2 Y_(t-2) + f_4 Y_(t-4) − f_4 r_1 Y_(t-5) − f_4 r_2 Y_(t-6) + e_t    (22.26)
The parameter f_4 is associated with the seasonal part of the process. Note that this is an AR(6) process with nonlinear restrictions on the coefficients.
Seasonal MA Terms
Similarly, SMA(q) can be included in your specification to specify a seasonal moving average term with lag q. The resulting MA lag structure is obtained from the product of the lag polynomial specified by the MA terms and the one specified by any SMA terms.
For example, a second-order MA process without seasonality may be written as:
    Y_t = e_t + v_1 e_(t-1) + v_2 e_(t-2)    (22.27)
or
    Y_t = (1 + v_1 L + v_2 L^2) e_t    (22.28)
To take account of seasonality in a quarterly workfile you may wish to add an SMA(4). Then the resulting process is:
    Y_t = (1 + v_1 L + v_2 L^2)(1 + q_4 L^4) e_t    (22.29)
Expanding the product of the lag polynomials, this is equivalent to:
    Y_t = e_t + v_1 e_(t-1) + v_2 e_(t-2) + q_4 e_(t-4) + q_4 v_1 e_(t-5) + q_4 v_2 e_(t-6)    (22.30)
The parameter q_4 is associated with the seasonal part of the process, which is an MA(6) process with nonlinear restrictions on the coefficients.
Integrated Models
A time series Y_t is said to be integrated of order 0, or I(0), if it may be written as an MA process Y_t = v(L) e_t with coefficients such that:
    Σ_(i=1)^∞ |v_i| < ∞    (22.31)
Roughly speaking, an I ( 0 ) process is a moving average with autocovariances that die off
sufficiently quickly, a condition which is necessary for stationarity (Hamilton, 2004).
A series is said to be integrated of order d, or I(d), if the d-th integer difference of the series is I(0).
ARIMA Model
An ARIMA( p, d, q ) model is defined as an I ( d ) process whose d -th integer difference follows a stationary ARMA( p, q ) process. In polynomial form we have:
    r(L)(1 − L)^d (Y_t − m_t) = v(L) e_t    (22.32)
Example
The ARIMA(1,1,1) Model
An ARIMA(1,1,1) model for Y t assumes that the first difference of Y t is an ARMA(1,1).
    (1 − rL)(1 − L)(Y_t − X_t'b) = e_t + v e_(t-1)    (22.33)
or
    ΔY_t = ΔX_t'b + r(ΔY_(t-1) − ΔX_(t-1)'b) + e_t + v e_(t-1)    (22.34)
or
    ΔY_t = ΔX_t'b + u_t
    u_t = r u_(t-1) + e_t + v e_(t-1)    (22.35)
ARFIMA Model
Stationary processes are said to have long memory when autocorrelations are persistent,
decaying more slowly than the rate associated with ARMA models. Modeling long term
dependence is difficult for standard ARMA specifications as it requires non-parsimonious,
large-order ARMA representations that are generally accompanied by undesirable short-run
dynamics (Sowell, 1992).
One popular approach to modeling long memory processes is to employ the notion of fractional integration (Granger and Joyeux, 1980; Hosking, 1981). A fractionally integrated
series is one with long-memory that is not I ( 1 ) .
Following Granger and Joyeux (1981) and Hosking (1981), we may define a discrete time
fractional difference operator which depends on the parameter d :
    Δ^d = (1 − L)^d = Σ_(k=0)^∞ (−1)^k binom(d, k) L^k = Σ_(k=0)^∞ [Γ(−d + k) / (Γ(−d) Γ(k + 1))] L^k    (22.36)
The ARFIMA(p, d, q) model may then be written as:
    r(L)(1 − L)^d (Y_t − m_t) = v(L) e_t    (22.37)
Notice that the ARFIMA specification is identical to the standard Box-Jenkins ARIMA formulation in Equation (22.32), but allowing for non-integer d . Note also that the range restriction on d is non-binding as we may apply integer differencing or summing until d is in the
desired range.
By combining fractional differencing with a traditional ARMA specification, the ARFIMA model allows for flexible dynamic patterns. Crucially, when -1/2 < d < 1/2, the autocorrelations and partial autocorrelations of the ARFIMA process decay more slowly (hyperbolically) than the rates associated with ARMA specifications. Thus, the ARFIMA model allows you to model slowly decaying long-run dependence using the d parameter and more rapidly decaying short-run dynamics using a parsimonious ARMA(p, q).
The Durbin-Watson Statistic
The Durbin-Watson (DW) statistic reported in standard regression output is a test for first-order serial correlation, that is, a test of the hypothesis r = 0 in the specification:
    u_t = r u_(t-1) + e_t    (22.38)
If there is no serial correlation, the DW statistic will be around 2. The DW statistic will fall
below 2 if there is positive serial correlation (in the worst case, it will be near zero). If there
is negative correlation, the statistic will lie somewhere between 2 and 4.
Positive serial correlation is the most commonly observed form of dependence. As a rule of
thumb, with 50 or more observations and only a few independent variables, a DW statistic
below about 1.5 is a strong indication of positive first order serial correlation. See Johnston
and DiNardo (1997, Chapter 6.6.1) for a thorough discussion on the Durbin-Watson test and
a table of the significance points of the statistic.
There are three main limitations of the DW test as a test for serial correlation. First, the distribution of the DW statistic under the null hypothesis depends on the data matrix X. The usual approach to handling this problem is to place bounds on the critical region, creating a region where the test results are inconclusive. Second, if there are lagged dependent variables on the right-hand side of the regression, the DW test is no longer valid. Lastly, you may only test the null hypothesis of no serial correlation against the alternative hypothesis of first-order serial correlation.
Two other tests of serial correlation, the Q-statistic and the Breusch-Godfrey LM test, overcome these limitations and are preferred in most applications.
Serial Correlation LM Test
The Breusch-Godfrey LM test statistic (Obs*R-squared) has an asymptotic χ² distribution under the null hypothesis. The distribution of the accompanying F-statistic is not known, but it is often used to conduct an informal test of the null.
See Serial Correlation LM Test on page 183 for further discussion of the serial correlation LM test.
Example
As an example of the application of serial correlation testing procedures, consider the following results from estimating a simple consumption function by ordinary least squares
using data in the workfile Uroot.WF1:
Dependent Variable: CS
Method: Least Squares
Date: 08/10/09   Time: 11:06
Sample: 1948Q3 1988Q4
Included observations: 162

Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             -9.227624     5.898177     -1.564487     0.1197
GDP            0.038732     0.017205      2.251193     0.0257
CS(-1)         0.952049     0.024484     38.88516      0.0000

R-squared            0.999625   Mean dependent var    1781.675
Adjusted R-squared   0.999621   S.D. dependent var     694.5419
S.E. of regression   13.53003   Akaike info criterion  8.066046
Sum squared resid    29106.82   Schwarz criterion      8.123223
Log likelihood      -650.3497   Hannan-Quinn criter.   8.089261
F-statistic          212047.1   Durbin-Watson stat     1.672255
Prob(F-statistic)    0.000000
A quick glance at the results reveals that the coefficients are statistically significant and the
fit is very tight. However, if the error term is serially correlated, the estimated OLS standard
errors are invalid and the estimated coefficients will be biased and inconsistent due to the
presence of a lagged dependent variable on the right-hand side. The Durbin-Watson statistic
is not appropriate as a test for serial correlation in this case, since there is a lagged dependent variable on the right-hand side of the equation.
Selecting View/Residual Diagnostics/Correlogram-Q-statistics for the first 12 lags from
this equation produces the following view:
The correlogram has spikes at lags up to three and at lag eight. The Q-statistics are significant at all lags, indicating significant serial correlation in the residuals.
Selecting View/Residual Diagnostics/Serial Correlation LM Test and entering a lag of 4
yields the following result (top portion only):
Breusch-Godfrey Serial Correlation LM Test:

F-statistic       3.654696     Prob. F(4,155)         0.0071
Obs*R-squared     13.96215     Prob. Chi-Square(4)    0.0074
The test rejects the hypothesis of no serial correlation up to order four. The Q-statistic and
the LM test both indicate that the residuals are serially correlated and the equation should
be re-specified before using it for hypothesis tests and forecasting.
Putting aside the Equation specification for a moment, consider the Estimation settings section at the bottom of the dialog.
When estimating ARMA models, you may choose LS - Least Squares (NLS and ARMA), TSLS - Two-Stage Least Squares (TSNLS and ARMA), or GMM - Generalized Method of Moments in the estimation Method dropdown menu.
Note that some estimation techniques and methods (notably maximum likelihood and fractional integration) are only available under the least squares option.
Enter the sample specification in the Sample edit dialog.
As the focus of our discussion will be on the equation specification for standard ARIMA and
ARFIMA models and on the corresponding settings on the Options tab, the remainder of our
discussion will assume you have selected the LS Least Squares (NLS and ARMA) method
in the dropdown menu. We will make brief comments about other specifications when
appropriate.
Equation Specification
EViews estimates general ARIMA and ARFIMA specifications that allow for right-hand side
explanatory variables (ARIMAX and ARFIMAX).
You should enter your equation specification in the top edit field. As with other equation
specifications, you may enter your equation by listing the dependent variable followed by
explanatory variables and special keywords, or you may provide an explicit expression for
the equation.
To specify your ARIMA model, you will:
Difference your dependent variable, if necessary, to account for the integer order of
integration.
Describe your structural regression model (dependent variables and mean regressors)
and add AR, SAR, MA, SMA terms, as necessary.
To specify your ARFIMA model you will:
Difference your dependent variable, if necessary, to account for an integer order of
integration.
Describe your structural regression model (dependent variables and regressors) and
add any ordinary and seasonal ARMA terms, if desired.
Add the d keyword to the specification to indicate that you would like to estimate and
use a fractional difference parameter d .
Specifying AR Terms
To specify an AR term in EViews, you will use the keyword ar, followed by the desired lag
or lag range enclosed in parentheses. You must explicitly instruct EViews to use each AR lag
you wish to include.
First-Order AR
For specifications defined by list, simply add the ar keywords to the list. For example, to estimate a simple consumption function with AR(1) errors, enter your list of variables as usual, adding the keyword expression AR(1) to the end of your list.
For the specification:
    CS_t = c_1 + c_2 GDP_t + u_t
    u_t = r u_(t-1) + e_t    (22.39)
with the series CS and GDP in the workfile, you may specify your equation as:
cs c gdp ar(1)
For specifications defined by expression, specify your model using EViews expressions, followed by an additive term describing the AR lag coefficient assignment enclosed in square
brackets. For the revised specification:
    CS_t = c_1 + GDP_t^(c_2) + u_t
    u_t = r u_(t-1) + e_t    (22.40)
Higher-Order AR
Estimating higher order AR models is only slightly more complicated. To estimate an AR( k ),
you should enter your specification, followed by expressions for each AR lag you wish to
include. You may use the to keyword to define a lag range.
If you wish to estimate a model with autocorrelations from one to five:
    CS_t = c_1 + c_2 GDP_t + u_t
    u_t = r_1 u_(t-1) + r_2 u_(t-2) + … + r_5 u_(t-5) + e_t    (22.41)
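you could, for example, list each AR term explicitly:
cs c gdp ar(1) ar(2) ar(3) ar(4) ar(5)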
or more concisely
cs c gdp ar(1 to 5)
The latter form specifies a lag range from 1 to 5 using the to keyword.
We emphasize the fact that you must explicitly list AR lags that you wish to include. By
requiring that you enter all of the desired AR terms, EViews allows you the flexibility to
restrict lower order correlations to be zero. For example, if you have quarterly data and want
only to include a single term to account for seasonal autocorrelation, you could enter
cs c gdp ar(4)
For specifications defined by expression, you must list the coefficient assignment for each of
the lags separately, separated by commas:
    CS_t = c_1 + GDP_t^(c_2) + u_t
    u_t = r_1 u_(t-1) + r_2 u_(t-2) + … + r_5 u_(t-5) + e_t    (22.42)
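Using the same bracketed syntax as above (coefficient numbering again illustrative), the specification might be entered as:
cs = c(1) + gdp^c(2) + [ar(1)=c(3), ar(2)=c(4), ar(3)=c(5), ar(4)=c(6), ar(5)=c(7)]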
Seasonal AR
Seasonal AR terms may be added using the sar keyword, followed by a lag or lag range
enclosed in parentheses. The specification
cs c gdp ar(1) sar(4)
will define an AR(5) model with coefficient restrictions as described above (Seasonal
ARMA Terms on page 90).
Note that in the absence of ordinary AR terms, the sar is equivalent to an ar. Thus,
cs c gdp ar(4)
cs c gdp sar(4)
are equivalent specifications.
Specifying MA Terms
To specify an MA term in EViews, you will use the keyword ma, followed by the desired lag
or lag range enclosed in parentheses. You must explicitly instruct EViews to use each MA lag
you wish to include. You may use the to keyword to define a lag range.
For specifications defined by list, a regression with MA(2) errors of the form:
    CS_t = c_1 + c_2 GDP_t + u_t
    u_t = e_t + v_1 e_(t-1) + v_2 e_(t-2)    (22.43)
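may be entered, for example, by listing each MA term explicitly:
cs c gdp ma(1) ma(2)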
or more concisely as
cs c gdp ma(1 to 2)
For specifications defined by expression, the corresponding specification is:
    CS_t = c_1 + GDP_t^(c_2) + u_t
    u_t = e_t + v_1 e_(t-1) + v_2 e_(t-2)    (22.44)
Seasonal MA
Seasonal MA terms may be added using the sma keyword, followed by a lag enclosed in parentheses. The specification
cs c gdp ma(1) sma(4)
will define an MA(5) model with coefficient restrictions as described above (Seasonal ARMA Terms on page 90). Note that in the absence of ordinary MA terms, the sma is equivalent to an ma. Thus,
cs c gdp ma(4)
cs c gdp sma(4)
are equivalent specifications.
Specifying Differencing
There are two distinct methods of specifying differencing in EViews:
For integer differencing, you will apply the difference operator to the dependent and
explanatory variables either before estimation, or by using series expressions in the
equation specification.
For fractional differencing, you will include the d keyword in the by-list equation specification to indicate that the dependent and explanatory variables should be fractionally differenced.
Integer Differencing
The d operator may be used to specify integer differences of series. To specify first differencing, simply include the series name in parentheses after d. For example, d(gdp) specifies the first difference of GDP, or GDP − GDP(-1).
Higher-order and seasonal differencing may be specified using the two optional parameters,
n and s . d(x,n) specifies the n -th order difference of the series X:
    d(x, n) = (1 − L)^n x    (22.45)
where L is the lag operator. For example, d(gdp,2) specifies the second order difference of GDP:
    d(gdp,2) = gdp − 2*gdp(-1) + gdp(-2)
d(x,n,s) specifies n -th order ordinary differencing of X with a multiplicative seasonal dif-
ference at lag s :
    d(x, n, s) = (1 − L)^n (1 − L^s) x    (22.46)
For example, d(gdp,0,4) specifies zero ordinary differencing with a seasonal difference at lag 4, or GDP − GDP(-4).
If you need to work in logs, you can also use the dlog operator, which returns differences in the log values. For example, dlog(gdp) specifies the first difference of log(GDP), or log(GDP) − log(GDP(-1)). You may also specify the n and s options as described for the simple d operator, dlog(x,n,s).
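For instance, dlog(gdp,0,4) would specify the seasonal (lag 4) difference of the logged series, log(GDP) − log(GDP(-4)).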
There are two ways to estimate ARIMA models in EViews. First, you may generate a new
series containing the differenced data, and then estimate an ARMA model using the new
data. For example, to estimate a Box-Jenkins ARIMA(1, 1, 1) model for M1 you can first create the difference series by typing in the command line:
series dm1 = d(m1)
and then use this series when you enter your equation specification:
dm1 c ar(1) ma(1)
Alternatively, you may include the difference operator d directly in the estimation specification. For example, the same ARIMA(1,1,1) model can be estimated using the command:
d(m1) c ar(1) ma(1)
The latter method should generally be employed for an important reason. If you define a
new variable, such as DM1 above, and use it in your estimation procedure, then when you
forecast from the estimated model, EViews will produce forecasts of the dependent variable
DM1. That is, you will get a forecast of the differenced series. If you are really interested in
forecasts of the level variable, in this case M1, you will have to manually transform the forecasted value and adjust the computed standard errors accordingly.
Furthermore, if any other transformation or lags of the original series M1 are included as
regressors, EViews will not know that they are related to DM1. If, however, you specify the
model using the difference operator expression for the dependent variable, d(m1), the forecasting procedure will provide you with the option of forecasting the level variable, in this
case M1.
The difference operator may also be used in specifying exogenous variables and can be used
in equations with or without ARMA terms. Simply include the series expression in the list of
regressors. For example:
d(cs, 2) c d(gdp,2) d(gdp(-1),2) d(gdp(-2),2) time
is a valid specification that employs the difference operator on both the left-hand and right-hand sides of the equation.
Fractional Differencing
If you wish to perform fractional differencing as part of ARFIMA estimation, simply add the
d keyword to the existing specification.
Note that fractional integration models may only be estimated in equations specified by list.
You may not specify an ARFIMA model using expression.
Specification Examples
For example, to estimate a second-order autoregressive and first-order moving average error
process ARMA(2, 1), you would include expressions for the AR(1), AR(2), and MA(1) terms
along with the dependent variable (INC) and your other regressors (in this case C and GOV):
inc c gov ar(1 to 2) ma(1)
Once again, you need not use AR and MA terms consecutively. For example, if you want to
fit a fourth-order autoregressive model, you could use AR(4) by itself, resulting in a
restricted ARMA(4, 0):
inc c gov ar(4)
You may also specify a pure moving average model by using only MA terms. Thus:
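a specification such as the following (a restricted ARMA(0, 2), using the illustrative series names from the examples above) uses only MA terms:
inc c gov ma(1 to 2)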
For equations specified by expression, simply enter the explicit equation involving the possibly differenced dependent variable, and add any expressions for AR and MA terms in square
brackets:
dlog(cs) = c(1) + dlog(gdp)^c(2) + [ar(1)=c(3), ar(2)=c(4),
ma(1)=c(5), ma(2)=c(6)]
To estimate an ARFIMA(2, d , 1) (fractionally integrated second-order autoregressive, firstorder moving average error model), you would include expressions for the AR(1), AR(2),
and MA(1) terms and the d keyword along with the dependent variable (INC) and other
regressors (C and GOV):
log(inc) c log(gov) ar(1 to 2) ma(1) d
Estimation Options
Clicking on the Options tab displays a variety of estimation options. The available options
will differ depending on whether your equation is specified by list or by expression and
whether there are ARMA and fractional differencing components. For the remainder of this
discussion, we will assume that you have included ARMA or fractional differencing in the
equation specification, and we discuss in turn the settings available for each specification
method.
ARMA
The ARMA section of the page controls the method for estimating your ARMA components
and setting starting values.
ARMA Method
The Method dropdown specifies the objective function used in the estimation method:
For models without fractional differencing, you may choose between the default ML
(maximum likelihood), GLS (generalized least squares), and CLS (conditional least
squares) estimation.
For models with fractional differencing, you may choose between the default ML and
GLS estimation (CLS is not available for ARFIMA models).
See Estimation Method Details on page 128 for discussion of these objective functions.
Starting Values
The nonlinear estimation techniques used to estimate ARMA and ARFIMA models require
starting values for all coefficient estimates. Normally, EViews determines its own starting
values and for the most part this is an issue with which you need not be concerned. There
are, however, occasions where you may want to override the default starting values.
First, estimation will sometimes halt when the maximum number of iterations is reached,
despite the fact that convergence is not achieved. Resuming the estimation with starting values left over from previous estimation instructs EViews to continue from where it left off
instead of starting over. You may also want to try different starting values to ensure that the
estimates are a global rather than a local minimum. You might also want to supply starting
values if you have a rough idea of what the answers should be, and want to speed up the
estimation process.
The Starting ARMA coefficient values dropdown will offer choices for overriding the
default EViews starting values. The available starting value options will differ depending on
the ARMA method selected above:
If you select ML or GLS estimation as your method, you will be presented with the
choice of Automatic, EViews fixed, Random, and User-specified.
For User-specified, all of the coefficients are taken from the values in the coefficient
vector in the workfile as described below.
For each of the remaining methods, the mean coefficients are obtained from simple
OLS regression.
The default EViews Automatic initializes the ARMA coefficients using least squares
regression of residuals against lagged residuals (for AR terms) and innovations (for
MA terms), where innovations are obtained by first regressing residuals against many
lags of residuals. EViews fixed sets the ARMA coefficients to arbitrary fixed values of
0.0025 for ordinary ARMA and 0.01 for seasonal ARMA terms. Random generates randomized ARMA coefficients.
For ARFIMA estimation, the fractional difference parameter is initialized using the Geweke and Porter-Hudak (1983) log periodogram regression (Automatic), a fixed value of 0.1 (EViews fixed), or a randomly generated uniform [-0.5, 0.5] value (Random).
If you select the CLS estimation method, the starting values dropdown will let you choose between OLS/TSLS, .8 x OLS/TSLS, .5 x OLS/TSLS, .3 x OLS/TSLS, Zero, and User-specified.
For the User-specified selection, all of the coefficients are initialized from the values
in the coefficient vector in the workfile as described below.
For the variants of OLS/TSLS, EViews will initialize the mean coefficients at the specified fraction of the simple OLS or TSLS estimates while Zero sets the mean coefficients to zero.
Coefficients for ARMA terms are always set to arbitrary fixed values of 0.0025 for ordinary ARMA and 0.01 for seasonal ARMA terms.
For you to usefully set user-specified starting values, you will need a little more information
about how EViews assigns coefficients for the ARMA terms.
EViews assigns coefficient numbers to the variables in the following order:
First are the coefficients of the variables, in order of entry.
Next is the ARFIMA coefficient.
Next come the AR terms in the order of entry.
The SAR, MA, and SMA coefficients follow, in that order.
(Following estimation, you can always see the assignment of coefficients by looking at the
Representations view of your equation.)
Thus the following two specifications will have their coefficients in the same order:
y c x ma(2) ma(1) sma(4) ar(1)
y sma(4) c ar(1) ma(2) x ma(1)
By default EViews uses the built-in C coefficient vector, but this may be overridden (see
Coefficient Name on page 110). To set initial values, you may edit the corresponding elements of the coefficient vector in the workfile, or you may also assign values in the vector
using the param command:
param c(1) 50 c(2) .8 c(3) .2 c(4) .6 c(5) .1 c(6) .5
The starting values will be 50 for the constant, 0.8 for X, 0.2 for AR(1), 0.6 for MA(2), 0.1
for MA(1) and 0.5 for SMA(4).
Backcasting
If your specification includes MA terms and the ARMA estimation method is CLS, EViews
will display a checkbox for whether or not to use backcasting to initialize the MA innovations. By default, EViews performs backcasting as described in Initializing MA Innovations on page 132, but you can unselect this option to set the presample innovations to
their unconditional expectation of zero.
Coefficient Covariance
The Coefficient covariance section of the page controls the computation of the estimates of
the precision of your coefficient estimates.
The options that are available will depend on the ARMA estimation method.
For ML or GLS estimation, covariances are always calculated by taking the inverse of an estimate of the information matrix.
The default setting for Information matrix estimation uses the outer product of the
gradients (OPG), but you may instead use the dropdown to use the observed Hessian
(Hessian - observed).
For CLS estimation, you may choose a Covariance method using the dropdown
menu.
The default Ordinary method takes the inverse of the estimate of the information
matrix. Alternately you may choose to compute Huber-White or HAC (Newey-West)
sandwich covariances.
In the latter case, EViews will display a HAC options button which you may use to
access various settings for controlling the long-run covariance estimation.
The Information matrix dropdown menu will offer you the choice between computing the information matrix estimate using the outer product of the gradients (OPG) or
the observed Hessian (Hessian - observed).
If you select GLS or CLS estimation, the covariance matrix will, by default, employ a degreeof-freedom correction. If you select ML estimation the default computation will not employ
degree-of-freedom correction. In all three cases, the d.f. Adjustment checkbox may be used
to modify the computation.
Estimation Algorithm
EViews provides a number of options that allow you to control the iterative procedure of the
estimation algorithm. In general, you can rely on the EViews choices, but on occasion you
may wish to override the default settings.
The Estimation algorithm section of the dialog contains settings for the numeric optimization of your likelihood or least squares objective function.
By default, EViews estimates ARMA and ARFIMA models using the Broyden, Fletcher, Goldfarb and Shanno (BFGS) algorithm. You may use the Optimization method dropdown to
select a different method:
For models estimated using ML and GLS, you may choose to estimate your model
using BFGS, OPG-BHHH (Gauss-Newton using the outer-product of the gradient),
Kohn-Ansley (transformation to pseudo-GLS regression model), and Newton-Raphson.
For models estimated using CLS, you may choose between BFGS, Gauss-Newton,
Newton-Raphson, and EViews. The latter employs OPG/BHHH with a Marquardt
diagonal adjustment.
For all but EViews, the Step method combo lets you choose between the default Marquardt, Dogleg, and Line Search determined steps. The default method is Marquardt.
In addition, you can use the Maximum iterations and Convergence tolerance edit fields to
change the stopping rules from their global default settings. Checking the Display settings
in output box instructs EViews to put information about starting values and other optimization settings at the top of your equation output.
Coefficient Name
For equations specified by list EViews will, by default, use the built-in C vector to hold coefficient estimates. You may change this assignment by entering the name of a coefficient
object in the Coefficient name edit field.
If the coefficient does not exist, EViews will create it and size it appropriately. If the coefficient already exists, it will be resized if necessary so that it is large enough to hold the
results. If an object of a different type with that name is present in the workfile, EViews will
issue an error message.
You may use the page to control the computation of the coefficient covariance, the optimization method, the ARMA starting coefficient values, and the default coefficient name.
Coefficient Covariance
The Coefficient covariance section of the page allows you to specify a Covariance method
using the dropdown menu. You may choose to compute the default Ordinary, or the HuberWhite, or HAC (Newey-West) sandwich covariances. If you select HAC (Newey-West),
EViews will display a HAC options button which you may use to access various settings for
controlling the long-run covariance estimation.
As before, the Information matrix dropdown menu will offer you the choice between computing the information matrix estimate using the outer product of the gradients (OPG) or the
observed Hessian (Hessian - observed).
By default, EViews will apply a degree-of-freedom correction to the estimated covariance
matrix. You may uncheck the d.f. Adjustment checkbox to remove this correction.
Estimation Algorithm
By default, EViews estimates by-expression ARMA and ARFIMA models using BFGS. You
may use the Optimization method dropdown to choose between BFGS, Gauss-Newton,
Newton-Raphson, and EViews, the latter of which employs Gauss-Newton with a Marquardt diagonal adjustment.
Where appropriate, the Step method combo lets you choose between the default Marquardt, Dogleg, and Line Search determined steps.
The Maximum iterations and Convergence tolerance edit fields may be used to limit the
number of iterations and to set the algorithm stopping rule. Checking the Display settings
in output box instructs EViews to put information about starting values and other optimization settings at the top of your equation output.
ARMA Method
You will not be able to specify an ARMA method as ARMA equations specified by expression
may only use the CLS objective.
Starting Values
The starting value dropdown menu lets you choose between the default OLS/TSLS, and .8 x OLS/TSLS, .5 x OLS/TSLS, .3 x OLS/TSLS, Zero, and User-specified.
For the variants of OLS/TSLS, EViews will initialize the mean coefficients at the specified
fraction of the simple OLS or TSLS estimates (ignoring ARMA terms), while Zero sets the
mean coefficients to zero. Coefficients for ARMA terms are always set to arbitrary fixed values of 0.0025 for ordinary ARMA and 0.01 for seasonal ARMA terms.
For the User-specified selection, the coefficients are initialized from the values in the coefficient vector in the workfile.
Estimation Output
EViews displays a variety of results in the output view following estimation.
The top portion of the output displays information about the optimization technique, ARMA
estimation method, the coefficient covariance calculation, and if requested, the starting values used to initialize the optimization procedure.
Dependent Variable: DLOG(GNP)
Method: ARMA Maximum Likelihood (BFGS)
Date: 02/06/15 Time: 10:20
Sample: 1947Q2 1989Q4
Included observations: 171
Convergence achieved after 8 iterations
Coefficient covariance computed using outer product of gradients
The next section shows the estimated coefficients, coefficient standard errors, and t-statistics. In addition to the estimates of the ARMA coefficients, EViews will display estimates of
the fractional integration parameter for ARFIMA models, and the estimate of the error variance if the ARMA estimation method is maximum likelihood.
Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             0.008051      0.003814     2.111053      0.0362
D             0.288756      0.057164     5.051407      0.0000

R-squared            0.099981   Mean dependent var     0.008032
Adjusted R-squared   0.094656   S.D. dependent var     0.010760
S.E. of regression   0.010238   Akaike info criterion -6.312490
Sum squared resid    0.017713   Schwarz criterion     -6.275746
Log likelihood       541.7179   Hannan-Quinn criter.  -6.297581
F-statistic          18.77385   Durbin-Watson stat     1.824729
Prob(F-statistic)    0.000025
Note that all of the equation summary results involving residuals differ from those computed in standard OLS settings so that some care should be taken in interpreting results. To
understand the issues, keep in mind that there are two different residuals associated with an
ARMA model. The first are the estimated unconditional residuals:
    u_t = Y_t − X_t'b    (22.47)
which are computed using the original explanatory variables and the estimated coefficients,
b . These residuals are the errors that you would obtain if you made a prediction of the
value of Y t using contemporaneous information while ignoring the information contained
in the lagged residuals.
Generally, there is little reason to examine the unconditional residuals, and EViews does not
automatically compute them following estimation.
The second set of residuals are the estimated one-period ahead forecast errors, e . As the
name suggests, these residuals represent the forecast errors you would make if you computed forecasts using a prediction of the residuals based upon past values of your data, in
addition to the contemporaneous information. In essence, you improve upon the unconditional forecasts and residuals by taking advantage of the predictive power of the lagged
residuals.
For ARMA models, the computed residuals, and all of the residual-based regression statistics, such as the R², the standard error of regression, and the Durbin-Watson statistic reported by EViews, are based on the estimated one-period ahead forecast errors, e.
Lastly, to aid in the interpretation of the results for ARMA and ARFIMA models, EViews displays the reciprocal roots of the AR and MA polynomials in the lower block of the results.
EViews reports these roots as Inverted AR Roots and Inverted MA Roots at the bottom of
the regression output. For our general ARMA model with the lag polynomials r ( L ) and
v ( L ) , the reported roots are the roots of the polynomials:
    r(x^(-1)) = 0  and  v(x^(-1)) = 0    (22.48)
The roots, which may be imaginary, should have modulus no greater than one. The output
will display a warning message if any of the roots violate this condition.
If r has a real root whose absolute value exceeds one or a pair of complex reciprocal roots
outside the unit circle (that is, with modulus greater than one), it means that the autoregressive process is explosive.
For example, in the simple AR(1) model, the estimated parameter r is the serial correlation
coefficient of the unconditional residuals. For a stationary AR(1) model, the true r lies
between 1 (extreme negative serial correlation) and +1 (extreme positive serial correlation).
If v has reciprocal roots outside the unit circle, we say that the MA process is noninvertible,
which makes interpreting and using the MA results difficult. However, noninvertibility poses
no substantive problem, since as Hamilton (1994a, p. 65) notes, there is always an equivalent representation for the MA model where the reciprocal roots lie inside the unit circle.
Accordingly, you should try to re-estimate your model with different starting values until
you get a moving average process that satisfies invertibility. Alternatively, you may wish to
turn off MA backcasting (see Initializing MA Innovations on page 132).
If the estimated MA process has roots with modulus close to one, it is a sign that you may
have over-differenced the data, which introduced an MA unit root. The process will be difficult to estimate and even more difficult to forecast. If possible, you should re-estimate with
one less round of differencing, perhaps using ARFIMA to account for long-run dependence.
Consider the following example output from ARMA estimation:
Dependent Variable: CP
Method: ARMA Maximum Likelihood (BFGS)
Date: 03/01/15   Time: 15:25
Sample: 1954M01 1993M07
Included observations: 475
Convergence achieved after 117 iterations
Coefficient covariance computed using outer product of gradients

Variable      Coefficient   Std. Error   t-Statistic   Prob.
C             5.836704      1.750241      3.334801     0.0009
AR(1)         0.973815      0.007755    125.5649       0.0000
SAR(4)        0.225555      0.049713      4.537146     0.0000
MA(1)         0.466481      0.016635     28.04168      0.0000
MA(4)        -0.344940      0.043602     -7.911135     0.0000
SIGMASQ       0.249337      0.007393     33.72769      0.0000

R-squared            0.974433   Mean dependent var     6.331116
Adjusted R-squared   0.974161   S.D. dependent var     3.126173
S.E. of regression   0.502520   Akaike info criterion  1.483257
Sum squared resid    118.4349   Schwarz criterion      1.535846
Log likelihood      -346.2736   Hannan-Quinn criter.   1.503938
F-statistic          3575.030   Durbin-Watson stat     1.986429
Prob(F-statistic)    0.000000

Inverted AR Roots     .97   .00-.69i   -.00+.69i   -.69
Inverted MA Roots     .67   -.11-.74i   -.92
This estimated specification corresponds to:
    y_t = 5.84 + u_t
    (1 − 0.97L)(1 − 0.23L^4) u_t = (1 + 0.47L − 0.34L^4) e_t    (22.49)
or equivalently, to:
    (1 − 0.97L − 0.23L^4 + 0.22L^5)(y_t − 5.84) = (1 + 0.47L − 0.34L^4) e_t    (22.50)
Note the signs of the MA terms, which may be reversed from those in some textbooks. Note
also that the inverted AR roots have moduli very close to one, which is typical for many
macro time series models.
Equation Diagnostics
In addition to the usual views and procs for an equation such as coefficient confidence
ellipses, Wald tests, omitted and redundant variables tests, EViews offers diagnostics for
examining the properties of your ARMA model and the properties of the estimated innovations.
ARMA Structure
This set of views provides access to several diagnostic views that help you assess the structure of the ARMA portion of the estimated equation. The view is currently available only for models specified by list that include at least one AR or MA term and are estimated by least squares. There are three views available: roots, correlogram, and impulse response.
To display the ARMA structure, select View/ARMA Structure... from the menu of an estimated equation. If the equation type supports this view and there are ARMA components in the specification, EViews will open the ARMA Diagnostic Views dialog:
On the left-hand side of the dialog, you will
select one of the three types of diagnostics.
When you click on one of the types, the
right-hand side of the dialog will change to show you the options for each type.
Roots
The roots view displays the inverse roots of the AR and/or MA characteristic polynomial.
The roots may be displayed as a graph or as a table by selecting the appropriate radio button.
The graph view plots the roots in the complex plane where the horizontal axis is the real
part and the vertical axis is the imaginary part of each root.
If the estimated ARMA process is
(covariance) stationary, then all AR
roots should lie inside the unit circle.
If the estimated ARMA process is
invertible, then all MA roots should lie
inside the unit circle. The table view
displays all roots in order of decreasing modulus (square root of the sum
of squares of the real and imaginary
parts).
For imaginary roots (which come in
conjugate pairs), we also display the
cycle corresponding to that root. The
cycle is computed as 2π/a, where a = atan(i/r), and i and r are the imaginary and real parts of the root, respectively. The cycle for a real root is infinite and is not reported.
Inverse Roots of AR/MA Polynomial(s)
Specification: R C AR(1) SAR(4) MA(1) MA(4)
Date: 03/01/15   Time: 16:19
Sample: 1954M01 1994M12
Included observations: 470

AR Root(s)                 Modulus    Cycle
 0.987785                  0.987785
 0.617381                  0.617381
-0.617381                  0.617381
 2.60e-17 ± 0.617381i      0.617381   4.000000

MA Root(s)                 Modulus    Cycle
-0.815844                  0.815844
-0.112642 ± 0.619634i      0.629790   3.589119
 0.557503                  0.557503
Correlogram
The correlogram view compares the autocorrelation pattern of the structural residuals
and that of the estimated model for a specified number of periods (recall that the structural residuals are the residuals after
removing the effect of the fitted exogenous
regressors but not the ARMA terms). For a
properly specified model, the residual and
theoretical (estimated) autocorrelations and
partial autocorrelations should be close.
To perform the comparison, simply select the Correlogram diagnostic, specify a number of
lags to be evaluated, and a display format (Graph or Table).
Impulse Response
The ARMA impulse response view traces the response of the ARMA part of the estimated
equation to shocks in the innovation.
An impulse response function traces the response to a one-time shock in the innovation. The accumulated response is the accumulated sum of the impulse responses. It can be interpreted as the response to a step impulse where the same shock occurs in every period from the first.
To compute the impulse response (and accumulated responses), select the Impulse
Response diagnostic, enter the number of
periods, and display type, and define the
shock. For the latter, you have the choice of
using a one standard deviation shock (using
the standard error of the regression for the
estimated equation), or providing a user specified value. Note that if you select a one standard deviation shock, EViews will take
account of innovation uncertainty when estimating the standard errors of the responses.
Frequency Spectrum
If a series has strong AR components, the shape of the frequency spectrum will contain peaks at the points of high cyclical frequencies. Here we show a typical AR(2) model, where the data were generated such that r_1 = 0.7 and r_2 = -0.5.
Q-statistics
If your ARMA model is correctly specified, the residuals from the model should be nearly
white noise. This means that there should be no serial correlation left in the residuals. The
Durbin-Watson statistic reported in the regression output is a test for AR(1) in the absence of
lagged dependent variables on the right-hand side. As discussed in Correlograms and Q-statistics on page 96, more general tests for serial correlation in the residuals may be carried
out with View/Residual Diagnostics/Correlogram-Q-statistic and View/Residual Diagnostics/Serial Correlation LM Test.
For the example seasonal ARMA model, the 12-period residual correlogram looks as follows:
The correlogram has a significant spike at lag 5, and all subsequent Q-statistics are highly
significant. This result clearly indicates the need for respecification of the model.
Examples
To illustrate the estimation of ARIMA and ARFIMA specifications in EViews we consider
examples from Sowell (1992a) which model the natural logarithm of postwar quarterly U.S.
real GDP from 1947q1 to 1989q4. Sowell estimates a number of models which are compared
using AIC and SIC. We will focus on the ARMA(3, 2) and ARFIMA(3, d , 2) specifications
(Table 2, p. 288 and Table 3, p. 289).
To estimate the ARMA(3, 2) we open an equation dialog by selecting Object/New Object/
Equation, by selecting Quick/Estimate Equation..., or by typing the command keyword
equation in the command line. EViews will display the least squares dialog:
We enter the expression for the dependent variable, followed by the AR and MA terms using
ranges that include all of the desired terms, and C to indicate that we wish to include an
intercept. Next, we click on the Options tab to display the estimation settings.
First, we instruct EViews to compute coefficient standard errors using the observed Hessian
by setting the Information matrix dropdown to Hessian - observed. In addition, we set the
Optimization method to BFGS, the Convergence tolerance to 1e-8, and the ARMA
Method to ML. Click on OK to estimate the model.
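Equivalently, the specification may be declared from the command line. A minimal sketch, assuming we name the equation object EQ_ARMA32 (the estimation settings described above, ML with BFGS and the observed Hessian, are then chosen on the Options tab):

equation eq_arma32.ls dlog(gnp) c ar(1) ar(2) ar(3) ma(1) ma(2)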
EViews will perform the iterative maximum likelihood estimation using BFGS and will display the estimation results:
Dependent Variable: DLOG(GNP)
Method: ARMA Maximum Likelihood (BFGS)
Date: 03/01/15   Time: 20:18
Sample: 1947Q2 1989Q4
Included observations: 171
Convergence achieved after 18 iterations
Coefficient covariance computed using observed Hessian

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C              0.008008      0.001189       6.732176      0.0000
AR(1)          0.599278      0.148087       4.046797      0.0001
AR(2)         -0.671335      0.178976      -3.750983      0.0002
AR(3)          0.137677      0.104632       1.315814      0.1901
MA(1)         -0.277643      0.121744      -2.280545      0.0239
MA(2)          0.793977      0.118172       6.718800      0.0000
SIGMASQ        9.24E-05      9.99E-06       9.245754      0.0000

R-squared             0.197098     Mean dependent var       0.008032
Adjusted R-squared    0.167724     S.D. dependent var       0.010760
S.E. of regression    0.009816     Akaike info criterion   -6.366372
Sum squared resid     0.015801     Schwarz criterion       -6.237766
Log likelihood        551.3248     Hannan-Quinn criter.    -6.314189
F-statistic           6.709856     Durbin-Watson stat       1.998281
Prob(F-statistic)     0.000002

Inverted AR Roots      .24     .18±.74i
Inverted MA Roots      .14±.88i
The top portion of the output displays information about the estimation method, optimization, and covariance calculation.
The next section contains the coefficient estimates, standard errors, t-statistics, and corresponding p-values. (It is worth pointing out that the reported ARMA coefficients use a different sign convention than those in Sowell, so that the ARMA coefficients all have the opposite sign.)
Notice that since we estimated the model using ML, EViews displays the estimate of the
error variance as one of the estimated coefficients. You should be aware that the EViews
reported p-value for SIGMASQ is for the two-sided test, despite the fact that SIGMASQ must
be non-negative. (If desired, you may use the reported coefficient, standard error, and the
@CTDIST function to compute the appropriate one-sided p-value.)
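For instance, a rough sketch of the one-sided computation for SIGMASQ, assuming the equation above is named EQ_ARMA32 and that SIGMASQ is the seventh coefficient as listed in the output (the scalar names are illustrative):

' one-sided p-value for H0: sigma^2 = 0 against the alternative sigma^2 > 0
scalar tstat_sig = eq_arma32.@coefs(7)/eq_arma32.@stderrs(7)
scalar df_sig = eq_arma32.@regobs - eq_arma32.@ncoef
scalar pval_onesided = 1 - @ctdist(tstat_sig, df_sig)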
The final section shows the inverted AR and MA roots.
It may be instructive to compare these results to those obtained from an alternative conditional least squares approach to estimating the specification. To reestimate your equation
using CLS, click on the Estimate button to bring up the dialog, then on the Options tab to
show the estimation options. In the ARMA section of the page, we have:
Select CLS in the Method dropdown, then click on OK to accept the changes and re-estimate the model.
Dependent Variable: DLOG(GNP)
Method: ARMA Conditional Least Squares (BFGS / Marquardt steps)
Date: 03/01/15 Time: 21:00
Sample (adjusted): 1948Q1 1989Q4
Included observations: 168 after adjustments
Convergence achieved after 38 iterations
Coefficient covariance computed using observed Hessian
MA Backcast: 1947Q3 1947Q4
Variable
Coefficient
Std. Error
t-Statistic
Prob.
C
AR(1)
AR(2)
AR(3)
MA(1)
MA(2)
0.007994
0.563811
-0.673101
0.158506
-0.242197
0.814550
0.001254
0.176602
0.161797
0.108283
0.153644
0.098533
6.373517
3.192556
-4.160159
1.463812
-1.576346
8.266750
0.0000
0.0017
0.0001
0.1452
0.1169
0.0000
R-squared
Adjusted R-squared
S.E. of regression
Sum squared resid
Log likelihood
F-statistic
Prob(F-statistic)
Inverted AR Roots
Inverted MA Roots
0.200837
0.176172
0.009844
0.015698
540.9882
8.142440
0.000001
.27
.12-.89i
0.008045
0.010845
-6.368908
-6.257337
-6.323627
1.994105
.15+.76i
The top of the new equation output now reports that estimation was performed using CLS
and that the MA errors were initialized using backcasting. Despite the different objectives,
we see that the CLS ARMA coefficient estimates are generally quite similar to those obtained
from exact ML estimation. Lastly, we note that the estimate of the variance is not reported as
part of the coefficient output for CLS estimation.
Next, following Sowell, we estimate an ARFIMA(3, d , 2). Once again, click on the Estimate
button to bring up the dialog:
and add the special d keyword to the list of regressors to tell EViews that you wish to estimate the fractional integration parameter. Click on OK to estimate the updated equation.
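The corresponding command-line declaration is sketched below, assuming we call the new object EQ_ARFIMA; as before, the ML, BFGS, and observed Hessian settings are chosen on the Options tab:

equation eq_arfima.ls dlog(gnp) c d ar(1) ar(2) ar(3) ma(1) ma(2)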
Dependent Variable: DLOG(GNP)
Method: ARMA Maximum Likelihood (BFGS)
Date: 03/01/15   Time: 21:18
Sample: 1947Q2 1989Q4
Included observations: 171
Convergence achieved after 29 iterations
Coefficient covariance computed using observed Hessian

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C              0.007886      0.000373      21.15945       0.0000
D             -0.606793      0.306851      -1.977481      0.0497
AR(1)          1.195165      0.351233       3.402768      0.0008
AR(2)         -0.939049      0.295641      -3.176311      0.0018
AR(3)          0.516754      0.178254       2.898971      0.0043
MA(1)         -0.291411      0.125001      -2.331272      0.0210
MA(2)          0.811038      0.114772       7.066532      0.0000
SIGMASQ        9.02E-05      9.76E-06       9.239475      0.0000

R-squared             0.216684     Mean dependent var       0.008032
Adjusted R-squared    0.183044     S.D. dependent var       0.010760
S.E. of regression    0.009725     Akaike info criterion   -6.373358
Sum squared resid     0.015416     Schwarz criterion       -6.226380
Log likelihood        552.9221     Hannan-Quinn criter.    -6.313720
F-statistic           6.441378     Durbin-Watson stat       1.995509
Prob(F-statistic)     0.000001

Inverted AR Roots      .82     .19±.77i
Inverted MA Roots      .15±.89i
Notice first that EViews has switched from CLS estimation to ML since ARFIMA models may
only be estimated using ML or GLS.
Turning to the estimate of the fractional differencing parameter, we see that it is negative and statistically significantly different from zero at the 5% level. Thus, we can reject the unit root hypothesis under this specification. Alternately, we cannot reject the time trend null hypothesis that d = −1.0.
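These hypotheses may also be examined with a Wald test on the fractional differencing coefficient. A minimal sketch, assuming the equation is named EQ_ARFIMA and that d is assigned the second coefficient, C(2), as it is listed second in the output above:

eq_arfima.wald c(2)=0     ' unit root hypothesis (d = 0)
eq_arfima.wald c(2)=-1    ' time trend hypothesis (d = -1)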
(Note: the results reported in Sowell differ slightly, presumably due to differences in the nonlinear optimization procedure in general, and the estimate of the observed Hessian in particular; for what it is worth, the EViews likelihood is slightly higher than the likelihood reported by Sowell. Notably, Sowell's conclusions differ slightly from those outlined here, as he finds that the unit root and trend hypotheses are both consistent with the ARFIMA estimates. Sowell does not reject the zero null at the 5% level, but does reject at the 10% level. See Sowell for a detailed interpretation of the results.)
Additional Topics
Dealing with Estimation Problems
Since EViews uses nonlinear estimation algorithms to estimate ARMA models, all of the discussion in Chapter 20, Solving Estimation Problems on page 48, is applicable, especially
the advice to try alternative starting values.
There are a few other issues to consider that are specific to estimation of ARMA and
ARFIMA models.
First, MA models are notoriously difficult to estimate. In particular, you should avoid high
order MA terms unless absolutely required for your model as they are likely to cause estimation difficulties. For example, a single large autocorrelation spike at lag 57 in the correlogram
does not necessarily require you to include an MA(57) term in your model unless you know
there is something special happening every 57 periods. It is more likely that the spike in the
correlogram is simply the product of one or more outliers in the series. By including many
MA terms in your model, you lose degrees of freedom, and may sacrifice stability and reliability of your estimates.
If the underlying roots of the MA process have modulus close to one, you may encounter
estimation difficulties, with EViews reporting that it cannot improve the sum-of-squares or
that it failed to converge in the maximum number of iterations. This behavior may be a sign
that you have over-differenced the data. You should check the correlogram of the series to
determine whether you can re-estimate with one less round of differencing.
Lastly, if you continue to have problems, you may wish to turn off MA backcasting.
For a discussion of how to estimate TSLS specifications with ARMA errors, see Nonlinear
Two-stage Least Squares on page 64.
For example, suppose you wish to estimate a nonlinear specification with an AR(2) error:

CS_t = c_1 + GDP_t^{c_2} + u_t
u_t = c_3 u_{t-1} + c_4 u_{t-2} + ε_t                                      (22.51)
Simply specify your model using EViews expressions, followed by an additive term describing the AR correction enclosed in square brackets. The AR term should contain a coefficient
assignment for each AR lag, separated by commas:
cs = c(1) + gdp^c(2) + [ar(1)=c(3), ar(2)=c(4)]
EViews transforms this nonlinear model by differencing, and estimates the transformed nonlinear specification using a Gauss-Newton iterative procedure (see Initializing the AR
Errors on page 130).
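A hedged sketch of the full command for declaring and estimating such an equation, using an illustrative object name EQ_CS:

equation eq_cs.ls cs = c(1) + gdp^c(2) + [ar(1)=c(3), ar(2)=c(4)]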
Consider a general ARIMA/ARFIMA specification

ρ(L)(1 − L)^d (Y_t − μ_t) = v(L) ε_t
ρ(L) u_t = v(L) ε_t                                                        (22.52)

with unconditional residuals

u_t = (1 − L)^d (Y_t − μ_t) = (1 − L)^d (Y_t − X_t'β)                      (22.53)

and innovations

ε_t = u_t − ρ_1 u_{t-1} − ⋯ − ρ_p u_{t-p} − v_1 ε_{t-1} − ⋯ − v_q ε_{t-q}  (22.54)
We will use the expressions for the unconditional residuals and innovations to describe
three objective functions that may be used to estimate the ARIMA model.
(For simplicity of notation our discussion abstracts from SAR and SMA terms and coefficients. It is straightforward to allow for the inclusion of these seasonal terms).
The exact Gaussian log likelihood may be written in terms of the unconditional residuals u and their covariance matrix Ω as

log L(β, ρ, v, σ², d) = −(T/2) log(2π) − (1/2) log|Ω| − (1/2) u'Ω⁻¹u
                      = −(T/2) log(2π) − (1/2) log|Ω| − (1/2) S(β, ρ, v, d)   (22.55)
ARIMA ML
It is well-known that for ARIMA models where d is a known integer, we may employ the
Kalman filter to efficiently evaluate the likelihood. The Kalman filter works with the state
space prediction error decomposition form of the likelihood, which eliminates the need to
invert the large matrix Ω.
See Hamilton (1994, Chapter 13, p. 372) or Box, Jenkins, and Reinsel (2008, Section 7.4, p. 275) for extensive discussion.
ARFIMA ML
Sowell (1992) and Doornik and Ooms (2003) offer detailed descriptions of the evaluation of
the likelihood for ARFIMA models. In particular, practical evaluation of Equation (22.55)
requires that we address several computational issues.
First, we must compute the autocovariances of the ARFIMA process that appear in Ω, which involve an infinite-order MA representation. Fortunately, Hosking (1981) and Sowell (1992) describe closed-form alternatives, and Sowell (1992) derives efficient recursive algorithms using hypergeometric functions.
Second, we must compute the determinant of the variance matrix and the generalized (inverse variance weighted) residuals in a manner that is computationally and storage efficient. Doornik and Ooms (2003) describe a Levinson-Durbin algorithm for efficiently performing this operation with minimal operation count while eliminating the need to store the full T × T matrix Ω.
Third, where possible, we follow Doornik and Ooms (2003) in concentrating the likelihood with respect to the regression coefficients β and the scale parameter σ².

The GLS objective minimizes the generalized sum-of-squared residuals

S(β, ρ, v, d) = u'Ω⁻¹u                                                     (22.56)
The third objective function is conditional least squares (CLS). The recursive innovation equation in Equation (22.54) is easy to evaluate given parameter values, lagged values of the differenced Y_t and X_t, and estimates of the lagged innovations. Note, however, that neither the lagged u_t nor the lagged ε_t are available in the first periods, so they cannot be substituted into the recursion until we have values with which to start up the difference equation.
We discuss below methods for starting up the recursion by specifying presample values of u_t and ε_t. Given these presample values, the conditional likelihood function for normally distributed innovations is given by
log ℓ(β, ρ, v, σ²) = −(T/2) log(2πσ²) − (1/(2σ²)) Σ_{t=1}^{T} ε_t²
                   = −(T/2) log(2πσ²) − (1/(2σ²)) S(β, ρ, v)               (22.57)
Notice that the conditional likelihood function depends on the data and the mean and ARMA parameters only through the conditional least squares function S(β, ρ, v), so that the conditional likelihood may be maximized by minimizing S(β, ρ, v).
Coefficient standard errors for the CLS estimation are the same as those for any other nonlinear least squares routine: ordinary inverse of the estimate of the information matrix, or a
White robust or Newey-West HAC sandwich covariance estimator. In all three cases, one can
use either the Gauss-Newton outer-product of the Jacobians, or the Newton-Raphson negative of the Hessian to estimate the information matrix.
In the remainder of this section we discuss the initialization of the recursion. EViews initializes the AR errors using lagged data (adjusting the estimation sample if necessary), and initializes the MA innovations using backcasting or the unconditional (zero) expectation.
Initializing the AR Errors
Consider an AR(p) regression model of the form:

Y_t = X_t'β + u_t
u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + ⋯ + ρ_p u_{t-p} + ε_t                    (22.58)

for t = 1, 2, …, T. Estimation of this model using conditional least squares requires computation of the innovations ε_t for each period in the estimation sample.
We can rewrite our model as

ε_t = Y_t − X_t'β − (ρ_1 u_{t-1} + ρ_2 u_{t-2} + ⋯ + ρ_p u_{t-p})          (22.59)
For example, EViews transforms the linear AR(1) model,

Y_t = X_t'β + u_t
u_t = ρ u_{t-1} + ε_t                                                      (22.60)

into a nonlinear model by substituting the second equation into the first, writing u_{t-1} in terms of observables, and rearranging terms:

Y_t = X_t'β + ρ u_{t-1} + ε_t
    = X_t'β + ρ (Y_{t-1} − X_{t-1}'β) + ε_t                                (22.61)
    = ρ Y_{t-1} + (X_t − ρ X_{t-1})'β + ε_t

so that the innovation recursion written in terms of observables is given by

ε_t = (Y_t − ρ Y_{t-1}) − (X_t − ρ X_{t-1})'β                              (22.62)
Notice that we require observations on Y_t and X_t in the period before the start of the recursion. If these values are not available, we must adjust the period of interest to begin at t = 2, so that the values of the observed data in t = 1 may be substituted into the equation to obtain an expression for u_1.
Higher order AR specifications are handled analogously. For example, a nonlinear AR(3) is estimated using nonlinear least squares on the innovations given by:

Y_t = (ρ_1 Y_{t-1} + ρ_2 Y_{t-2} + ρ_3 Y_{t-3}) + f(X_t, β) − ρ_1 f(X_{t-1}, β) − ρ_2 f(X_{t-2}, β) − ρ_3 f(X_{t-3}, β) + ε_t        (22.63)
It is important to note that textbooks often describe techniques for estimating linear AR
models like Equation (22.58). The most widely discussed approaches, the Cochrane-Orcutt,
Prais-Winsten, Hatanaka, and Hildreth-Lu procedures, are multi-step approaches designed
so that estimation can be performed using standard linear regression. These approaches proceed by obtaining an initial consistent estimate of the AR coefficients r and then estimating
the remaining coefficients via a second-stage linear regression.
All of these approaches suffer from important drawbacks which occur when working with
models containing lagged dependent variables as regressors, or models using higher-order
AR specifications; see Davidson and MacKinnon (1993, pp. 329–341) and Greene (2008, pp. 648–652).
In contrast, the EViews conditional least squares approach estimates the coefficients ρ and β simultaneously by minimizing the nonlinear sum-of-squares function S(β, ρ, v) (which maximizes the conditional likelihood). The nonlinear least squares approach has the advantage of being easy to understand, generally applicable, and easily extended to models that contain endogenous right-hand side variables and to nonlinear mean specifications.
Thus, for a nonlinear mean AR(1) specification, EViews transforms the nonlinear model,

Y_t = f(X_t, β) + u_t
u_t = ρ u_{t-1} + ε_t                                                      (22.64)

into the alternative nonlinear specification

Y_t = ρ Y_{t-1} + f(X_t, β) − ρ f(X_{t-1}, β) + ε_t                        (22.65)

and estimates the parameters using the innovations

ε_t = (Y_t − ρ Y_{t-1}) − (f(X_t, β) − ρ f(X_{t-1}, β))                    (22.66)

Similarly, the innovations for the nonlinear AR(3) specification in Equation (22.63) are given by

ε_t = Y_t − (ρ_1 Y_{t-1} + ρ_2 Y_{t-2} + ρ_3 Y_{t-3}) − f(X_t, β) + (ρ_1 f(X_{t-1}, β) + ρ_2 f(X_{t-2}, β) + ρ_3 f(X_{t-3}, β))      (22.67)
For additional detail, see Fair (1984, pp. 210–214) and Davidson and MacKinnon (1993, pp. 331–341).
Initializing MA Innovations
Consider an MA(q) regression model of the form:

Y_t = X_t'β + u_t
u_t = ε_t + v_1 ε_{t-1} + v_2 ε_{t-2} + ⋯ + v_q ε_{t-q}                    (22.68)

for t = 1, 2, …, T. Estimation of this model using conditional least squares requires computation of the innovations ε_t for each period in the estimation sample.
Computing the innovations is a straightforward process. Suppose we have an initial estimate of the coefficients, (β, v), and estimates of the pre-estimation sample values of ε:

{ε_{-(q-1)}, ε_{-(q-2)}, …, ε_0}                                           (22.69)

Then, after first computing the unconditional residuals u_t = Y_t − X_t'β, we may use forward recursion to solve for the remaining values of the innovations:

ε_t = u_t − v_1 ε_{t-1} − ⋯ − v_q ε_{t-q}                                  (22.70)

for t = 1, 2, …, T.
All that remains is to specify a method of obtaining estimates of the pre-sample values of ε:

{ε_{-(q-1)}, ε_{-(q-2)}, …, ε_0}                                           (22.71)
One may employ backcasting to obtain the pre-sample innovations (Box and Jenkins, 1976). As the name suggests, backcasting uses a backward recursion method to obtain estimates of ε for the pre-estimation sample periods.
To start the recursion, the q values for the innovations beyond the estimation sample are set to zero:

ε_{T+1} = ε_{T+2} = ⋯ = ε_{T+q} = 0                                        (22.72)

EViews then uses the unconditional residuals to perform the backward recursion:

ε_t = u_t − v_1 ε_{t+1} − ⋯ − v_q ε_{t+q}                                  (22.73)

which is run backward through the estimation sample to obtain estimates of the pre-estimation sample innovations. Alternatively, you may choose not to backcast, in which case the presample innovations are simply set to their unconditional expectation of zero:

ε_{-(q-1)} = ⋯ = ε_0 = 0                                                   (22.74)
Whichever method is used to initialize the presample values, the sum-of-squared residuals (SSR) is formed recursively as a function of β and v, using the fitted values of the lagged innovations:

S(β, v) = Σ_{t=q+1}^{T} (Y_t − X_t'β − v_1 ε_{t-1} − ⋯ − v_q ε_{t-q})²     (22.75)
References
Baillie, Richard (1996). "Long Memory Processes and Fractional Integration in Econometrics," Journal of Econometrics, 73, 5–59.
Box, George E. P. and Gwilym M. Jenkins (1976). Time Series Analysis: Forecasting and Control, Revised Edition, Oakland, CA: Holden-Day.
Box, George E. P., Gwilym M. Jenkins, and Gregory C. Reinsel (2008). Time Series Analysis: Forecasting and Control, Fourth Edition, Hoboken, NJ: John Wiley & Sons.
Doornik, Jurgen A. and Marius Ooms (2003). "Computational Aspects of Maximum Likelihood Estimation of Autoregressive Fractionally Integrated Moving Average Models," Computational Statistics & Data Analysis, 42, 333–348.
Fair, Ray C. (1984). Specification, Estimation, and Analysis of Macroeconometric Models, Cambridge, MA: Harvard University Press.
Geweke, J. F. and S. Porter-Hudak (1983). "The Estimation and Application of Long Memory Time Series Models," Journal of Time Series Analysis, 4, 221–238.
Granger, C. W. J. and Roselyne Joyeux (1980). "An Introduction to Long-Memory Time Series Models and Fractional Differencing," Journal of Time Series Analysis, 1, 15–29.
Greene, William H. (2008). Econometric Analysis, 6th Edition, Upper Saddle River, NJ: Prentice-Hall.
Hamilton, James D. (1994). Time Series Analysis, Princeton, NJ: Princeton University Press.
Dependent Variable: HS
Method: Least Squares
Date: 08/09/09   Time: 07:45
Sample (adjusted): 1959M03 1990M01
Included observations: 371 after adjustments
Convergence achieved after 6 iterations

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C              0.321924      0.117278       2.744973      0.0063
HS(-1)         0.952653      0.016218      58.74151       0.0000
SP             0.005222      0.007588       0.688248      0.4917
AR(1)         -0.271254      0.052114      -5.205025      0.0000

R-squared             0.861373     Mean dependent var       7.324051
Adjusted R-squared    0.860240     S.D. dependent var       0.220996
S.E. of regression    0.082618     Akaike info criterion   -2.138453
Sum squared resid     2.505050     Schwarz criterion       -2.096230
Log likelihood        400.6830     Hannan-Quinn criter.    -2.121683
F-statistic           760.1338     Durbin-Watson stat       2.013460
Prob(F-statistic)     0.000000

Inverted AR Roots     -.27
Note that the estimation sample is adjusted by two observations to account for the first difference of the lagged endogenous variable used in deriving AR(1) estimates for this model.
To get a feel for the fit of the model, select View/Actual, Fitted, Residual, then choose
Actual, Fitted, Residual Graph:
The actual and fitted values depicted on the upper portion of the graph are virtually indistinguishable. This view provides little control over the process of producing fitted values, and
does not allow you to save your fitted values. These limitations are overcome by using
EViews' built-in forecasting procedures to compute fitted values for the dependent variable.
In addition, in specifications that contain ARMA terms, you can set the Structural
option, instructing EViews to ignore any ARMA terms in the equation when forecasting. By default, when your equation has ARMA terms, both dynamic and static solution methods form forecasts of the residuals. If you select Structural, all forecasts will
ignore the forecasted residuals and will form predictions using only the structural part
of the ARMA specification.
Sample range. You must specify the sample to be used for the forecast. By default,
EViews sets this sample to be the workfile sample. By specifying a sample outside the
sample used in estimating your equation (the estimation sample), you can instruct
EViews to produce out-of-sample forecasts.
Note that you are responsible for supplying the values for the independent variables
in the out-of-sample forecasting period. For static forecasts, you must also supply the
values for any lagged dependent variables.
Output. You can choose to see the forecast output as a graph or a numerical forecast
evaluation, or both. Forecast evaluation is only available if the forecast sample
includes observations for which the dependent variable is observed.
Insert actuals for out-of-sample observations. By default, EViews will fill the forecast series with the values of the actual dependent variable for observations not in the
forecast sample. This feature is convenient if you wish to show the divergence of the
forecast from the actual values; for observations prior to the beginning of the forecast
sample, the two series will contain the same values, then they will diverge as the forecast differs from the actuals. In some contexts, however, you may wish to have forecasted values only for the observations in the forecast sample. If you uncheck this
option, EViews will fill the out-of-sample observations with missing values.
Note that when performing forecasts from equations specified using expressions or autoupdating series, you may encounter a version of the Forecast dialog that differs from the
basic dialog depicted above. See Forecasting from Equations with Expressions on page 155
for details.
An Illustration
Suppose we produce a dynamic forecast using EQ01 over the sample 1959M01 to 1996M01.
The forecast values will be placed in the series HSF, and EViews will display a graph of the
forecasts and the plus and minus two standard error bands, as well as a forecast evaluation:
This is a dynamic forecast for the period from 1959M01 through 1996M01. For every period,
the previously forecasted values for HS(-1) are used in forming a forecast of the subsequent
value of HS. As noted in the output, the forecast values are saved in the series HSF. Since
HSF is a standard EViews series, you may examine your forecasts using all of the standard
tools for working with series objects.
For example, we may examine the actual versus fitted values by creating a group containing
HS and HSF, and plotting the two series. Select HS and HSF in the workfile window, then
right-mouse click and select Open/as Group. Then select View/Graph... and select Line &
Symbol in the Graph Type/Basic type page to display a graph of the two series:
Note the considerable difference between this actual and fitted graph and the Actual, Fitted,
Residual Graph depicted above.
To perform a series of one-step ahead forecasts, click on Forecast on the equation toolbar,
and select Static forecast. Make certain that the forecast sample is set to 1959m01
1995m06. Click on OK. EViews will display the forecast results:
We may also compare the actual and fitted values from the static forecast by examining a
line graph of a group containing HS and the new HSF.
The one-step ahead static forecasts are more accurate than the dynamic forecasts since, for
each period, the actual value of HS(-1) is used in forming the forecast of HS. These one-step
ahead static forecasts are the same forecasts used in the Actual, Fitted, Residual Graph displayed above.
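The dynamic and static forecasts described above may also be produced from the command line. A minimal sketch, assuming the estimated equation object is named EQ01 and using illustrative names for the output series (recall that the forecast procs use the current workfile sample as the forecast sample):

smpl 1959m01 1996m01
eq01.forecast hsf hsfse        ' dynamic (multi-step) forecast with standard errors
smpl 1959m01 1995m06
eq01.fit hsf_static            ' static (one-step ahead) forecasts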
Lastly, we construct a dynamic forecast beginning in 1990M02 (the first period following the
estimation sample) and ending in 1996M01. Keep in mind that data are available for SP for
this entire period. The plot of the actual and the forecast values for 1989M01 to 1996M01 is
given by:
Since we use the default settings for out-of-forecast sample values, EViews backfills the forecast series prior to the forecast sample (up through 1990M01), then dynamically forecasts
HS for each subsequent period through 1996M01. This is the forecast that you would have
constructed if, in 1990M01, you predicted values of HS from 1990M02 through 1996M01,
given knowledge about the entire path of SP over that period.
The corresponding static forecast is displayed below:
Again, EViews backfills the values of the forecast series, HSF1, through 1990M01. This forecast is the one you would have constructed if, in 1990M01, you used all available data to
estimate a model, and then used this estimated model to perform one-step ahead forecasts
every month for the next six years.
The remainder of this chapter focuses on the details associated with the construction of
these forecasts, the corresponding forecast evaluations, and forecasting in more complex settings involving equations with expressions or auto-updating series.
Forecast Basics
EViews stores the forecast results in the series specified in the Forecast name field. We will
refer to this series as the forecast series.
The forecast sample specifies the observations for which EViews will try to compute fitted or
forecasted values. If the forecast is not computable, a missing value will be returned. In
some cases, EViews will carry out automatic adjustment of the sample to prevent a forecast
consisting entirely of missing values (see Adjustment for Missing Values on page 143,
below). Note that the forecast sample may or may not overlap with the sample of observations used to estimate the equation.
For values not included in the forecast sample, there are two options. By default, EViews fills
in the actual values of the dependent variable. If you turn off the Insert actuals for out-of-sample option, out-of-forecast-sample values will be filled with NAs.
As a consequence of these rules, all data in the forecast series will be overwritten during the
forecast procedure. Existing values in the forecast series will be lost.
Suppose, for example, that the estimated equation is specified as:

y_t = c(1) + c(2) x_t + c(3) z_t                                           (23.1)
You should make certain that you have valid values for the exogenous right-hand side variables for all observations in the forecast period. If any data are missing in the forecast sample, the corresponding forecast observation will be an NA.
If you set the beginning of the forecast sample to the beginning of the workfile range, EViews will adjust the forecast sample forward by 2 observations and will use the pre-forecast-sample values of the lagged variables. (The loss of 2 observations occurs because the residual loses one observation due to the lagged endogenous variable, so that the forecast of the error term can begin only from the third observation.)
Suppose the regression model is

y_t = x_t'β + ε_t                                                          (23.2)

with point forecasts computed as

ŷ_t = x_t'β̂                                                               (23.3)

Forecasts are made with error, where the error is simply the difference between the actual and forecasted value, e_t = y_t − x_t'β̂. Assuming that the model is correctly specified, there are two sources of forecast error: residual uncertainty and coefficient uncertainty.
Residual Uncertainty
The first source of error, termed residual or innovation uncertainty, arises because the innovations e in the equation are unknown for the forecast period and are replaced with their
expectations. While the residuals are zero in expected value, the individual values are nonzero; the larger the variation in the individual residuals, the greater the overall error in the
forecasts.
The standard measure of this variation is the standard error of the regression (labeled S.E.
of regression in the equation output). Residual uncertainty is usually the largest source of
forecast error.
In dynamic forecasts, innovation uncertainty is compounded by the fact that lagged dependent variables and ARMA terms depend on lagged innovations. EViews also sets these equal
to their expected values, which differ randomly from realized values. This additional source
of forecast uncertainty tends to rise over the forecast horizon, leading to a pattern of increasing forecast errors. Forecasting with lagged dependent variables and ARMA terms is discussed in more detail below.
Coefficient Uncertainty
The second source of forecast error is coefficient uncertainty. The estimated coefficients β̂ of the equation deviate from the true coefficients β in a random fashion. The standard error of the estimated coefficient, given in the regression output, is a measure of the precision with which the estimated coefficients measure the true coefficients.
The effect of coefficient uncertainty depends upon the exogenous variables. Since the estimated coefficients are multiplied by the exogenous variables x in the computation of forecasts, the more the exogenous variables deviate from their mean values, the greater is the
forecast uncertainty.
Forecast Variability
The variability of forecasts is measured by the forecast standard errors. For a single equation
without lagged dependent variables or ARMA terms, the forecast standard errors are computed as:
forecast se = s √(1 + x_t'(X'X)⁻¹ x_t)                                     (23.4)
where s is the standard error of regression. These standard errors account for both innovation (the first term) and coefficient uncertainty (the second term). Point forecasts made from
linear regression models estimated by least squares are optimal in the sense that they have
the smallest forecast variance among forecasts made by linear unbiased estimators. Moreover, if the innovations are normally distributed, the forecast errors have a t-distribution and
forecast intervals can be readily formed.
If you supply a name for the forecast standard errors, EViews computes and saves a series of
forecast standard errors in your workfile. You can use these standard errors to form forecast
intervals. If you choose the Do graph option for output, EViews will plot the forecasts with
plus and minus two standard error bands. These two standard error bands provide an
approximate 95% forecast interval; if you (hypothetically) make many forecasts, the actual
value of the dependent variable will fall inside these bounds 95 percent of the time.
Additional Details
EViews accounts for the additional forecast uncertainty generated when lagged dependent
variables are used as explanatory variables (see Forecasts with Lagged Dependent Variables on page 148).
There are cases where coefficient uncertainty is ignored in forming the forecast standard
error. For example, coefficient uncertainty is always ignored in equations specified by
expression, for example, nonlinear least squares, and equations that include PDL (polynomial distributed lag) terms (Forecasting with Nonlinear and PDL Specifications on
page 161).
In addition, forecast standard errors do not account for GLS weights in estimated panel
equations.
Forecast Evaluation
Suppose we construct a dynamic forecast for HS over the period 1990M02 to 1996M01 using
our estimated housing equation. If the Forecast evaluation option is checked, and there are
actual data for the forecasted variable for the forecast sample, EViews reports a table of statistical results evaluating the forecast:
Forecast: HSF
Actual: HS
Sample: 1990M02 1996M01
Included observations: 72

Root Mean Squared Error          0.318700
Mean Absolute Error              0.297261
Mean Absolute Percentage Error   4.205889
Theil Inequality Coefficient     0.021917
  Bias Proportion                0.869982
  Variance Proportion            0.082804
  Covariance Proportion          0.047214
Note that EViews cannot compute a forecast evaluation if there are no data for the dependent variable for the forecast sample.
The forecast evaluation is saved in one of two formats. If you turn on the Do graph option,
the forecasts are included along with a graph of the forecasts. If you wish to display the eval-
uations in their own table, you should turn off the Do graph option in the Forecast dialog
box.
Suppose the forecast sample is t = T+1, T+2, …, T+h, and denote the actual and forecasted value in period t as y_t and ŷ_t, respectively. The reported forecast error statistics are computed as follows:

Root Mean Squared Error:          √( Σ_{t=T+1}^{T+h} (ŷ_t − y_t)² / h )
Mean Absolute Error:              Σ_{t=T+1}^{T+h} |ŷ_t − y_t| / h
Mean Absolute Percentage Error:   100 · Σ_{t=T+1}^{T+h} |(ŷ_t − y_t)/y_t| / h
Theil Inequality Coefficient:     √( Σ_{t=T+1}^{T+h} (ŷ_t − y_t)² / h ) / ( √( Σ_{t=T+1}^{T+h} ŷ_t² / h ) + √( Σ_{t=T+1}^{T+h} y_t² / h ) )
The first two forecast error statistics depend on the scale of the dependent variable. These
should be used as relative measures to compare forecasts for the same series across different
models; the smaller the error, the better the forecasting ability of that model according to
that criterion. The remaining two statistics are scale invariant. The Theil inequality coefficient always lies between zero and one, where zero indicates a perfect fit.
The mean squared forecast error can be decomposed as:

Σ (ŷ_t − y_t)² / h = ( (Σ ŷ_t / h) − ȳ )² + (s_ŷ − s_y)² + 2(1 − r) s_ŷ s_y     (23.5)

where Σ ŷ_t / h and ȳ are the means of the forecasts and the actuals, s_ŷ and s_y are the (biased) standard deviations of the forecasts and the actuals, and r is the correlation between the forecasts and the actuals. The proportions are defined as:

Bias Proportion:        ( (Σ ŷ_t / h) − ȳ )² / ( Σ (ŷ_t − y_t)² / h )
Variance Proportion:    (s_ŷ − s_y)² / ( Σ (ŷ_t − y_t)² / h )
Covariance Proportion:  2(1 − r) s_ŷ s_y / ( Σ (ŷ_t − y_t)² / h )
The bias proportion tells us how far the mean of the forecast is from the mean of the
actual series.
The variance proportion tells us how far the variation of the forecast is from the variation of the actual series.
The covariance proportion measures the remaining unsystematic forecasting errors.
Note that the bias, variance, and covariance proportions add up to one.
If your forecast is good, the bias and variance proportions should be small, so that most of the error is concentrated in the covariance proportion. For additional discussion of forecast evaluation, see Pindyck and Rubinfeld (1998, pp. 210–214).
For the example output, the bias proportion is large, indicating that the mean of the forecasts
does a poor job of tracking the mean of the dependent variable. To check this, we will plot
the forecasted series together with the actual series in the forecast sample with the two standard error bounds. Suppose we saved the forecasts and their standard errors as HSF and
HSFSE, respectively. Then the plus and minus two standard error series can be generated by
the commands:
smpl 1990m02 1996m01
series hsf_high = hsf + 2*hsfse
series hsf_low = hsf - 2*hsfse
Create a group containing the four series. You can highlight the four series HS, HSF,
HSF_HIGH, and HSF_LOW, double click on the selected area, and select Open Group, or you
can select Quick/Show and enter the four series names. Once you have the group open,
select View/Graph... and select Line & Symbol from the left side of the dialog.
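Equivalently, the group and its graph may be created from the command line; a minimal sketch using an illustrative group name (here shown as a simple line graph):

group g_fcast hs hsf hsf_high hsf_low
g_fcast.line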
The forecasts completely miss the downturn at the start of the 1990s, but, subsequent to the
recovery, track the trend reasonably well from 1992 to 1996.
Forecasts with Lagged Dependent Variables
Forecasting is complicated by the presence of lagged dependent variables on the right-hand side of the equation. Suppose, for example, that we augment the earlier specification to include the first lag of Y,

y_t = c(1) + c(2) x_t + c(3) z_t + c(4) y_{t-1}

and click on the Forecast button and fill out the series names in the dialog as above. There is some question, however, as to how we should evaluate the lagged value of Y that appears on the right-hand side of the equation. There are two possibilities: dynamic forecasting and static forecasting.
Dynamic Forecasting
If you select dynamic forecasting, EViews will perform a multi-step forecast of Y, beginning
at the start of the forecast sample. For our single lag specification above:
The initial observation in the forecast sample will use the actual value of lagged Y. Thus, if S is the first observation in the forecast sample, EViews will compute:

ŷ_S = c(1) + c(2) x_S + c(3) z_S + c(4) y_{S-1}                            (23.6)

where y_{S-1} is the value of the lagged endogenous variable in the period prior to the start of the forecast sample. This is the one-step ahead forecast.
Forecasts for subsequent observations will use the previously forecasted values of Y:

ŷ_{S+k} = c(1) + c(2) x_{S+k} + c(3) z_{S+k} + c(4) ŷ_{S+k-1}              (23.7)
These forecasts may differ significantly from the one-step ahead forecasts.
If there are additional lags of Y in the estimating equation, the above algorithm is modified
to account for the non-availability of lagged forecasted values in the additional period. For
example, if there are three lags of Y in the equation:
The first observation (S) uses the actual values for all three lags, y_{S-3}, y_{S-2}, and y_{S-1}.
The second observation (S+1) uses actual values for y_{S-2} and y_{S-1}, and the forecasted value ŷ_S for the first lag of y_{S+1}.
The third observation (S+2) will use the actual value for y_{S-1}, and the forecasted values ŷ_{S+1} and ŷ_S for the first and second lags of y_{S+2}.
All subsequent observations will use the forecasted values for all three lags.
The selection of the start of the forecast sample is very important for dynamic forecasting.
The dynamic forecasts are true multi-step forecasts (from the start of the forecast sample),
since they use the recursively computed forecast of the lagged value of the dependent variable. These forecasts may be interpreted as the forecasts for subsequent periods that would
be computed using information available at the start of the forecast sample.
Dynamic forecasting requires that data for the exogenous variables be available for every
observation in the forecast sample, and that values for any lagged dependent variables be
observed at the start of the forecast sample (in our example, y_{S-1}, but more generally, any lags of y). If necessary, the forecast sample will be adjusted.
Any missing values for the explanatory variables will generate an NA for that observation
and in all subsequent observations, via the dynamic forecasts of the lagged dependent variable.
Lastly, we note that for non-linear dynamic forecasting, EViews produces what Tong and Lim (1980) term the "eventual forecasting function," in which the lagged forecasted values are substituted recursively into the one-step ahead function. This approach differs from simulation-based approaches to multi-step forecasting, which employ stochastic simulation.
If you wish to obtain the latter forecasts, you may create a model from your equation using
Proc/Make Model, and then use the resulting model to perform the dynamic stochastic simulation.
Static Forecasting
Static forecasting performs a series of one-step ahead forecasts of the dependent variable. For each observation in the forecast sample, EViews computes:

ŷ_{S+k} = c(1) + c(2) x_{S+k} + c(3) z_{S+k} + c(4) y_{S+k-1}              (23.8)

always using the actual value of the lagged endogenous variable.
Static forecasting requires that data for both the exogenous and any lagged endogenous variables be observed for every observation in the forecast sample. As above, EViews will, if
necessary, adjust the forecast sample to account for pre-sample lagged variables. If the data
are not available for any period, the forecasted value for that observation will be an NA. The
presence of a forecasted value of NA does not have any impact on forecasts for subsequent
observations.
Structural Forecasts
By default, EViews will forecast values for the residuals using the estimated ARMA structure, as described below.
For some types of work, you may wish to assume that the ARMA errors are always zero. If
you select the structural forecast option by checking Structural (ignore ARMA), EViews
computes the forecasts assuming that the errors are always zero. If the equation is estimated
without ARMA terms, this option has no effect on the forecasts.
If you choose the Dynamic option, both the lagged dependent variable and the lagged residuals will be forecasted dynamically. If you select Static, both will be set to the actual lagged
values. For example, consider the following AR(2) model:
y_t = x_t'b + u_t
u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + ε_t                                      (23.9)

Denote the fitted residuals as e_t = y_t − x_t'b, and suppose the model was estimated using data up to t = S−1. Then, provided that the x_t values are available, the static and dynamic forecasts for t = S, S+1, …, are given by:

          Static                                    Dynamic
ŷ_S       x_S'b + ρ_1 e_{S-1} + ρ_2 e_{S-2}         x_S'b + ρ_1 e_{S-1} + ρ_2 e_{S-2}
ŷ_{S+1}   x_{S+1}'b + ρ_1 e_S + ρ_2 e_{S-1}         x_{S+1}'b + ρ_1 û_S + ρ_2 e_{S-1}
ŷ_{S+2}   x_{S+2}'b + ρ_1 e_{S+1} + ρ_2 e_S         x_{S+2}'b + ρ_1 û_{S+1} + ρ_2 û_S

where the residuals û_t = ŷ_t − x_t'b are formed using the forecasted values of y_t. For subsequent observations, the dynamic forecast will always use the residuals based upon the multi-step forecasts, while the static forecast will use the one-step ahead forecast residuals.
Forecasting with MA Errors
If your equation includes MA terms, the forecast of the MA part of the equation for the first forecast period takes the form

ŷ_S = v_1 ε_{S-1} + ⋯ + v_q ε_{S-q}                                        (23.10)

(ignoring the other terms in the equation), so you will need values for the pre-forecast sample innovations, ε_{S-1}, ε_{S-2}, …, ε_{S-q}. Similarly, constructing a static forecast for a given period will require estimates of the q lagged innovations at every period in the forecast sample.
If your equation is estimated with backcasting turned on, EViews will perform backcasting
to obtain these values. If your equation is estimated with backcasting turned off, or if the
forecast sample precedes the estimation sample, the initial values will be set to zero.
Backcast Sample
The first step in obtaining pre-forecast innovations is obtaining estimates of the pre-estimation sample innovations: ε_0, ε_{-1}, ε_{-2}, …, ε_{-(q-1)}. (For notational convenience, we normalize the start and end of the estimation sample to t = 1 and t = T, respectively.)
EViews offers two different approaches for obtaining estimates: you may use the MA backcast dropdown menu to choose between the default Estimation period and the Forecast available (v5) methods.
The Estimation period method uses data for the estimation sample to compute backcast
estimates. Then as in estimation (Initializing MA Innovations on page 132), the q values
for the innovations beyond the estimation sample are set to zero:
ε_{T+1} = ε_{T+2} = ⋯ = ε_{T+q} = 0                                        (23.11)

EViews then uses the unconditional residuals to perform the backward recursion:

ε_t = u_t − v_1 ε_{t+1} − ⋯ − v_q ε_{t+q}                                  (23.12)
Pre-Forecast Innovations
Given the backcast estimates of the pre-estimation sample residuals, forward recursion is
used to obtain values for the pre-forecast sample innovations.
For dynamic forecasting, one need only obtain innovation values for the q periods prior to the start of the forecast sample; all subsequent innovations are set to zero. EViews obtains estimates of the pre-sample ε_{S-1}, ε_{S-2}, …, ε_{S-q} using the recursion:

ε_t = u_t − v_1 ε_{t-1} − ⋯ − v_q ε_{t-q}                                  (23.13)
Additional Notes
Note that EViews computes the residuals used in backcast and forward recursion from the
observed data and estimated coefficients. If EViews is unable to compute values for the
unconditional residuals u t for a given period, the sequence of innovations and forecasts will
be filled with NAs. In particular, static forecasts must have valid data for both the dependent
and explanatory variables for all periods from the beginning of estimation sample to the end
of the forecast sample, otherwise the backcast values of the innovations, and hence the forecasts will contain NAs. Likewise, dynamic forecasts must have valid data from the beginning
of the estimation period through the start of the forecast period.
Example
As an example of forecasting from ARMA models, consider forecasting the monthly new
housing starts (HS) series. The estimation period is 1959M01–1984M12 and we forecast for the period 1985M01–1991M12. We estimated the following simple multiplicative seasonal
autoregressive model,
hs c ar(1) sar(12)
yielding:
Dependent Variable: HS
Method: Least Squares
Date: 08/08/06   Time: 17:42
Sample (adjusted): 1960M02 1984M12
Included observations: 299 after adjustments
Convergence achieved after 5 iterations

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C              7.317283      0.071371     102.5243        0.0000
AR(1)          0.935392      0.021028      44.48403       0.0000
SAR(12)       -0.113868      0.060510      -1.881798      0.0608

R-squared             0.862967     Mean dependent var       7.313496
Adjusted R-squared    0.862041     S.D. dependent var       0.239053
S.E. of regression    0.088791     Akaike info criterion   -1.995080
Sum squared resid     2.333617     Schwarz criterion       -1.957952
Log likelihood        301.2645     Hannan-Quinn criter.    -1.980220
F-statistic           932.0312     Durbin-Watson stat       2.452568
Prob(F-statistic)     0.000000

Inverted AR Roots      .94       .81±.22i     .59±.59i     .22±.81i
                      -.22±.81i  -.59±.59i    -.81±.22i
To perform a dynamic forecast from this estimated model, click Forecast on the equation
toolbar, enter 1985m01 1991m12 in the Forecast sample field, then select Forecast evaluation and unselect Forecast graph. The forecast evaluation statistics for the model are
shown below:
The large variance proportion indicates that the forecasts are not tracking the variation in
the actual HS series. To plot the actual and forecasted series together with the two standard
error bands, you can type:
smpl 1985m01 1991m12
plot hs hs_f hs_f+2*hs_se hs_f-2*hs_se
where HS_F and HS_SE are the forecasts and standard errors of HS.
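A command-line sketch of producing these forecast and standard error series, assuming the estimated seasonal equation is stored as an object named EQ_HS (the series names match those used above):

smpl 1985m01 1991m12
eq_hs.forecast hs_f hs_se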
As indicated by the large variance proportion, the forecasts track the seasonal movements in
HS only at the beginning of the forecast sample and quickly flatten out to the mean forecast
value.
In discussing the relevant issues, we distinguish between specifications that contain only
auto-series expressions such as LOG(X), and those that contain auto-updating series such as
EXPZ.
Point Forecasts
EViews always provides you with the option to forecast the dependent variable expression.
If the expression can be normalized (solved for the first series in the expression), EViews
also provides you with the option to forecast the normalized series.
For example, suppose you estimated an equation with the specification:
(log(hs)+sp) c hs(-1)
If you press the Forecast button, EViews will open a dialog prompting you for your forecast
specification.
The resulting Forecast dialog is a
slightly more complex version of the
basic dialog, providing you with a
new section allowing you to choose
between two series to forecast: the
normalized series, HS, or the equation dependent variable,
LOG(HS)+SP.
Simply select the radio button for the
desired forecast series. Note that you
are not provided with the opportunity to forecast SP directly since HS,
the first series that appears on the
left-hand side of the estimation
equation, is offered as the choice of
normalized series.
It is important to note that the Dynamic forecast method is available since EViews is able to
determine that the forecast equation has dynamic elements, with HS appearing on the lefthand side of the equation (either directly as HS or in the expression LOG(HS)+SP) and on
the right-hand side of the equation in lagged form. If you select dynamic forecasting, previ-
ously forecasted values for HS(-1) will be used in forming forecasts of either HS or
LOG(HS)+SP.
If the formula can be normalized, EViews will compute the forecasts of the transformed
dependent variable by first forecasting the normalized series and then transforming the forecasts of the normalized series. This methodology has important consequences when the formula includes lagged series. For example, consider the following two models:
series dhs = d(hs)
equation eq1.ls d(hs) c sp
equation eq2.ls dhs c sp
The dynamic forecasts of the first difference D(HS) from the first equation will be numerically identical to those for DHS from the second equation. However, the static forecasts for
D(HS) from the two equations will not be identical. In the first equation, EViews knows that
the dependent variable is a transformation of HS, so it will use the actual lagged value of HS
in computing the static forecast of the first difference D(HS). In the second equation, EViews
simply views DHS as an ordinary series, so that only the estimated constant and SP are used
to compute the static forecast.
One additional word of caution: when you have dependent variables that use lagged values
of a series, you should avoid referring to the lagged series before the current series in a
dependent variable expression. For example, consider the two equation specifications:
d(hs) c sp
(-hs(-1)+hs) c sp
Both specifications have the first difference of HS as the dependent variable and the estimation results are identical for the two models. However, if you forecast HS from the second
model, EViews will try to calculate the forecasts of HS using leads of the actual series HS.
These forecasts of HS will differ from those produced by the first model, which may not be
what you expected.
In some cases, EViews will not be able to normalize the dependent variable expression. In
this case, the Forecast dialog will only offer you the option of forecasting the entire expression. If, for example, you specify your equation as:
log(hs)+1/log(hs) = c(1) + c(2)*hs(-1)
EViews will not be able to normalize the dependent variable for forecasting. The corresponding Forecast dialog will reflect this fact.
For the first equation, you may choose to forecast either HS or D(HS). In both cases, the
forecast standard errors will be exact, since the expression involves only linear transformations. The two standard errors will, however, differ in dynamic forecasts since the forecast
standard errors for HS take into account the forecast uncertainty from the lagged value of
HS. In the second example, the forecast standard errors for LOG(HS) will be exact. If, however, you request a forecast for HS itself, the standard errors saved in the series will be the
approximate (linearized) forecast standard errors for HS.
Note that when EViews displays a graph view of the forecasts together with standard error
bands, the standard error bands are always exact. Thus, in forecasting the underlying dependent variable in a nonlinear expression, the standard error bands will not be the same as
those you would obtain by constructing series using the linearized standard errors saved in
the workfile.
Suppose in our second example above that you store the forecast of HS and its standard
errors in the workfile as the series HSHAT and SE_HSHAT. Then the approximate two standard error bounds can be generated manually as:
series hshat_high1 = hshat + 2*se_hshat
series hshat_low1 = hshat - 2*se_hshat
These forecast error bounds will be symmetric about the point forecasts HSHAT.
On the other hand, when EViews plots the forecast error bounds of HS, it proceeds in two
steps. It first obtains the forecast of LOG(HS) and its standard errors (named, say, LHSHAT
and SE_LHSHAT) and forms the forecast error bounds on LOG(HS):
lhshat + 2*se_lhshat
lhshat - 2*se_lhshat
It then normalizes (inverts the transformation) of the two standard error bounds to obtain
the prediction interval for HS:
series hshat_high2 = exp(lhshat + 2*se_lhshat)
series hshat_low2 = exp(lhshat - 2*se_lhshat)
Because this transformation is a non-linear transformation, these bands will not be symmetric around the forecast.
To take a more complicated example, suppose that you generate the series DLHS and LHS,
and then estimate three equivalent models:
series dlhs = dlog(hs)
series lhs = log(hs)
equation eq1.ls dlog(hs) c sp
equation eq2.ls d(lhs) c sp
equation eq3.ls dlhs c sp
The estimated equations from the three models are numerically identical. If you choose to
forecast the underlying dependent (normalized) series from each model, EQ1 will forecast
HS, EQ2 will forecast LHS (the log of HS), and EQ3 will forecast DLHS (the first difference of
the logs of HS, LOG(HS)−LOG(HS(−1))). The forecast standard errors saved from EQ1 will be
linearized approximations to the forecast standard error of HS, while those from the latter
two will be exact for the forecast standard error of LOG(HS) and the first difference of the
logs of HS.
Static forecasts from all three models are identical because the forecasts from previous periods are not used in calculating this period's forecast when performing static forecasts. For
dynamic forecasts, the log of the forecasts from EQ1 will be identical to those from EQ2 and
the log first difference of the forecasts from EQ1 will be identical to the first difference of the
forecasts from EQ2 and to the forecasts from EQ3. For static forecasts, the log first difference
of the forecasts from EQ1 will be identical to the first difference of the forecasts from EQ2.
However, these forecasts differ from those obtained from EQ3 because EViews does not
know that the generated series DLHS is actually a difference term so that it does not use the
dynamic relation in the forecasts.
It is worth pointing out that this specification yields results that are identical to those obtained from estimating an equation using the expressions directly, LOG(HS) and LOG(HS(−1)):
log(hs) c log(hs(-1))
The Forecast dialog for the first equation specification (using LOGHS and LOGHSLAG) contains an additional dropdown menu allowing you to specify whether to interpret the autoupdating series as ordinary series, or whether to look inside LOGHS and LOGHSLAG to use
their expressions.
EViews will display a message in the status line at the bottom of the EViews window when
forecast standard errors only account for innovation uncertainty.
For example, consider the three specifications:
log(y) c x
y = c(1) + c(2)*x
y = exp(c(1)*x)
y c x pdl(z, 4, 2)
Forecast standard errors from the first model account for both coefficient and innovation
uncertainty since the model is specified by list, and does not contain a PDL specification.
The remaining specifications have forecast standard errors that account only for residual
uncertainty.
Note also that for non-linear dynamic forecasting, EViews produces what Tong and Lim
(1980) term the eventual forecasting function in which the lagged forecasted values are
substituted recursively into the one-step ahead function. If you wish to obtain simulation-based multi-step forecasting, you may create a model from your equation using Proc/Make
Model, and then use the resulting model to perform the dynamic stochastic simulation.
References
Pindyck, Robert S. and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts, 4th Edition, New York: McGraw-Hill.
Tong, H. and K. S. Lim (1980). "Threshold Autoregression, Limit Cycles and Cyclical Data," Journal of the Royal Statistical Society, Series B (Methodological), 42, 245–292.
Background
Each test procedure described below involves the specification of a null hypothesis, which is
the hypothesis under test. Output from a test command consists of the sample values of one
or more test statistics and their associated probability numbers (p-values). The latter indicate the probability of obtaining a test statistic whose absolute value is greater than or equal
to that of the sample statistic if the null hypothesis is true. Thus, low p-values lead to the
rejection of the null hypothesis. For example, if a p-value lies between 0.05 and 0.01, the
null hypothesis is rejected at the 5 percent but not at the 1 percent level.
Bear in mind that there are different assumptions and distributional results associated with each test. For example, some of the test statistics have exact, finite sample distributions (usually t or F-distributions). Others are large sample test statistics with asymptotic χ² distributions. Details vary from one test to another and are given below in the description of each test.
The View button on the equation toolbar gives you a choice among three categories of tests
to check the specification of the equation. For some equations estimated using particular
methods, only a subset of these categories will be available.
Additional tests are discussed elsewhere in the Users Guide.
These tests include unit root tests (Performing Unit Root Tests
in EViews on page 528), the Granger causality test (Granger
Causality on page 564 of Users Guide I), tests specific to
binary, order, censored, and count models (Chapter 28. Discrete and Limited Dependent
Variable Models, on page 297), and the tests for cointegration (Testing for Cointegration
on page 270).
Coefficient Diagnostics
These diagnostics provide information and evaluate restrictions on the estimated coefficients, including the special case of tests for omitted and redundant variables.
Scaled Coefficients
The Scaled Coefficients view displays the coefficient estimates, the standardized coefficient estimates, and the elasticity at means. The standardized coefficients are the point estimates of the coefficients standardized by multiplying by the ratio of the standard deviation of the regressor to the standard deviation of the dependent variable. The elasticities at means are the point estimates of the coefficients scaled by the ratio of the mean of the regressor to the mean of the dependent variable.
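As a rough illustration of these definitions, the standardized coefficient and elasticity at means for a single regressor can be computed by hand. The sketch below assumes an equation named EQ01 with dependent variable HS in which the regressor SP is the third listed coefficient; all names are illustrative:

scalar beta_std_sp = eq01.@coefs(3)*@stdev(sp)/@stdev(hs)
scalar elast_sp    = eq01.@coefs(3)*@mean(sp)/@mean(hs)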
Wald tests allow you to evaluate restrictions on the estimated coefficients from an estimation object. When you perform a Wald test, EViews provides a table of output showing the numeric values associated with the test.
An alternative approach to displaying the results of a Wald test is to display a confidence
interval. For a given test size, say 5%, we may display the one-dimensional interval within
which the test statistic must lie for us not to reject the null hypothesis. Comparing the realization of the test statistic to the interval corresponds to performing the Wald test.
The one-dimensional confidence interval may be generalized to the case involving two
restrictions, where we form a joint confidence region, or confidence ellipse. The confidence
ellipse may be interpreted as the region in which the realization of two test statistics must lie
for us not to reject the null.
To display confidence ellipses in EViews, simply select View/Coefficient Diagnostics/Confidence Ellipse... from the estimation object toolbar. EViews will display a dialog prompting
you to specify the coefficient restrictions and test size, and to select display options.
The first part of the dialog is identical to that found in
the Wald test view. Here, you will enter your coefficient restrictions into the edit box, with multiple
restrictions separated by commas. The computation
of the confidence ellipse requires a minimum of two
restrictions. If you provide more than two restrictions,
EViews will display all unique pairs of confidence
ellipses.
In this simple example depicted here using equation
EQ01 from the workfile Cellipse.WF1, we provide a
(comma separated) list of coefficients from the estimated equation. This description of the restrictions
takes advantage of the fact that EViews interprets any expression without an explicit equal
sign as being equal to zero (so that C(1) and C(1)=0 are equivalent). You may, of
course, enter an explicit restriction involving an equal sign (for example, C(1)+C(2) =
C(3)/2).
Next, select a size or sizes for the confidence ellipses. Here, we instruct EViews to construct
a 95% confidence ellipse. Under the null hypothesis, the test statistic values will fall outside
of the corresponding confidence ellipse 5% of the time.
Lastly, we choose a display option for the individual confidence intervals. If you select Line
or Shade, EViews will mark the confidence interval for each restriction, allowing you to see,
at a glance, the individual results. Line will display the individual confidence intervals as
dotted lines; Shade will display the confidence intervals as a shaded region. If you select
None, EViews will not display the individual intervals.
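The view is also available by command. A minimal sketch, assuming the equation object is named EQ01 and that your version supports the cellipse equation view (see the Command Reference for the available options, such as the test size):
eq01.cellipse c(1), c(2), c(3)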
The output depicts three confidence ellipses that result from pairwise tests implied by the
three restrictions (C(1)=0, C(2)=0, and C(3)=0).
[Confidence ellipse plot: pairwise confidence ellipses for C(1), C(2), and C(3), with the individual confidence intervals shown as dotted lines.]
Notice first the presence of the dotted lines showing the corresponding confidence intervals for the individual coefficients.
EViews allows you to display more than one size for your confidence ellipses. This feature
allows you to draw confidence contours so that you may see how the rejection region
changes at different probability values. To do so, simply enter a space delimited list of confidence levels. Note that while the coefficient restriction expressions must be separated by
commas, the contour levels must be separated by spaces.
[Confidence contour plot: confidence ellipses for C(2) and C(3) drawn at several confidence levels, with the individual intervals shown as shaded regions.]
Here, the individual confidence intervals are depicted with shading. The individual intervals
are based on the largest size confidence level (which has the widest interval), in this case,
0.9.
Computational Details
Consider two functions of the parameters, $f_1(\beta)$ and $f_2(\beta)$, and define the bivariate function $f(\beta) = (f_1(\beta), f_2(\beta))$.
The size $\alpha$ joint confidence ellipse is defined as the set of points $b$ such that:
$$ (b - f(\hat\beta))' \, (\hat V(\hat\beta))^{-1} \, (b - f(\hat\beta)) = c_\alpha \qquad (24.1) $$
where $\hat\beta$ are the parameter estimates, $\hat V(\hat\beta)$ is the covariance matrix of $\hat\beta$, and $c_\alpha$ is the size $\alpha$ critical value for the related distribution. If the parameter estimates are least-squares based, the $F(2, n-2)$ distribution is used; if the parameter estimates are likelihood based, the $\chi^2(2)$ distribution will be employed.
The individual intervals are two-sided intervals based on either the t-distribution (in the cases where $c_\alpha$ is computed using the F-distribution), or the normal distribution (where $c_\alpha$ is taken from the $\chi^2$ distribution).
Variance Inflation Factors
The following output shows the coefficient variances and uncentered variance inflation factors (VIFs) for an example equation with regressors X1-X5:

Variable    Coefficient Variance    Uncentered VIF
X1          0.002909                1010.429
X2          3.72E-06                106.8991
X3          0.002894                1690.308
X4          1.43E-06                31.15205
X5          1.74E-06                28.87596
Coefficient Variance Decomposition
The coefficient variance decomposition is based on an eigenvalue decomposition of the coefficient covariance matrix. Writing the least squares covariance as:
$$ \mathrm{var}(\hat\beta) = \hat\sigma^2 (X'X)^{-1} = V S V' \qquad (24.2) $$
where $S$ is a diagonal matrix containing the eigenvalues $\mu_j$ of the covariance matrix and $V$ is a matrix whose columns are the corresponding eigenvectors, the variance of an individual coefficient estimate is:
$$ \mathrm{var}(\hat\beta_i) = \sum_j v_{ij}^2 \, \mu_j \qquad (24.3) $$
The condition number associated with eigenvalue $\mu_j$ is defined as:
$$ \kappa_j = \frac{\min_m(\mu_m)}{\mu_j} \qquad (24.4) $$
If we let:
$$ \phi_{ij} = v_{ij}^2 \, \mu_j \qquad (24.5) $$
and
$$ \phi_i = \sum_j \phi_{ij} \qquad (24.6) $$
then the variance-decomposition proportions are given by:
$$ \pi_{ji} = \frac{\phi_{ij}}{\phi_i} \qquad (24.7) $$
These proportions, together with the condition numbers, can then be used as a diagnostic
tool for determining collinearity between each of the coefficients.
Belsley, Kuh and Welsch recommend the following procedure:
Check the condition numbers of the matrix. A condition number smaller than 1/900 (0.001) could signify the presence of collinearity. Note that BKW use a rule of any number greater than 30, but base it on the condition numbers of $X$ rather than $(X'X)^{-1}$.
If there are one or more small condition numbers, then the variance-decomposition
proportions should be investigated. Two or more variables with values greater than
0.5 associated with a small condition number indicate the possibility of collinearity
between those two variables.
To view the coefficient variance decomposition in EViews, select View/Coefficient Diagnostics/Coefficient Variance Decomposition. EViews will then display a table showing the
Eigenvalues, Condition Numbers, corresponding Variance Decomposition Proportions and,
for comparison purposes, the corresponding Eigenvectors.
As an example, we estimate an equation using data from Longley (1967), as republished in Greene (2008). The workfile Longley.WF1 contains macroeconomic variables for the US between 1947 and 1962, and is often used as an example of multicollinearity in a data set. The equation we estimate regresses Employment on Year (YEAR), the GNP Deflator (PRICE), GNP, and Armed Forces Size (ARMED). The coefficient variance decomposition for this equation is shown below.
Eigenvalues    17208.87     0.208842     0.054609     1.88E-07
Condition      1.09E-11     9.02E-07     3.45E-06     1.000000

Variance Decomposition Proportions
                        Associated Eigenvalue
Variable          1            2            3            4
YEAR          0.988939     0.010454     0.000607     2.60E-13
PRICE         1.000000     9.20E-09     5.75E-10     7.03E-19
GNP           0.978760     0.002518     0.017746     0.000975
ARMED         0.037677     0.441984     0.520339     9.31E-11

Eigenvectors
                        Associated Eigenvalue
Variable          1            2            3            4
YEAR          0.030636    -0.904160    -0.426067    -0.004751
PRICE        -0.999531    -0.027528    -0.013451    -0.000253
GNP           0.000105     0.001526     0.007921    -0.999967
ARMED         0.000434     0.426303    -0.904557    -0.006514
The top line of the table shows the eigenvalues, sorted from largest to smallest, with the
condition numbers below. Note that the final condition number is always equal to 1. Three
of the four eigenvalues have condition numbers smaller than 0.001, with the smallest condition number being very small: 1.09E-11, which would indicate a large amount of collinearity.
The second section of the table displays the decomposition proportions. The proportions
associated with the smallest condition number are located in the first column. Three of these
values are larger than 0.5, indeed they are very close to 1. This indicates that there is a high
level of collinearity between those three variables, YEAR, PRICE and GNP.
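As a quick arithmetic check on the table, the first condition number is simply the smallest eigenvalue divided by the first (largest) eigenvalue:
$$ \kappa_1 = \frac{\min_m(\mu_m)}{\mu_1} = \frac{1.88\times 10^{-7}}{17208.87} \approx 1.09\times 10^{-11} $$
which matches the value reported in the first column above.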
Wald Test (Coefficient Restrictions)
The Wald test computes a test statistic based on the unrestricted regression, measuring how close the unrestricted estimates come to satisfying the restrictions under the null hypothesis. As an example, suppose a Cobb-Douglas production function has been estimated in the form:
$$ \log Q = A + \alpha \log L + \beta \log K + \epsilon \qquad (24.8) $$
where $Q$, $K$ and $L$ denote value-added output and the inputs of capital and labor respectively. The hypothesis of constant returns to scale is then tested by the restriction: $\alpha + \beta = 1$.
Estimation of the Cobb-Douglas production function using annual data from 1947 to 1971 in
the workfile Coef_test.WF1 provided the following result:
Dependent Variable: LOG(Q)
Method: Least Squares
Date: 08/10/09   Time: 11:46
Sample: 1947 1971
Included observations: 25

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C              -2.327939      0.410601      -5.669595      0.0000
LOG(L)          1.591175      0.167740       9.485970      0.0000
LOG(K)          0.239604      0.105390       2.273498      0.0331

R-squared            0.983672    Mean dependent var        4.767586
Adjusted R-squared   0.982187    S.D. dependent var        0.326086
S.E. of regression   0.043521    Akaike info criterion    -3.318997
Sum squared resid    0.041669    Schwarz criterion        -3.172732
Log likelihood       44.48746    Hannan-Quinn criter.     -3.278429
F-statistic          662.6819    Durbin-Watson stat        0.637300
Prob(F-statistic)    0.000000
The sum of the coefficients on LOG(L) and LOG(K) appears to be in excess of one, but to determine whether the difference is statistically significant, we will conduct the hypothesis test of constant returns.
To carry out a Wald test, choose View/Coefficient Diagnostics/Wald-Coefficient Restrictions from the equation toolbar. Enter the restrictions into the edit box, with multiple
coefficient restrictions separated by commas. The restrictions should be expressed as equations involving the estimated coefficients and constants. The coefficients should be referred
to as C(1), C(2), and so on, unless you have used a different coefficient vector in estimation.
If you enter a restriction that involves a series name, EViews will prompt you to enter an observation at which the test statistic will be evaluated. The value of the series at that period will be treated as a constant for purposes of constructing the test statistic.
To test the hypothesis of constant returns to scale, type the following restriction in the dialog
box:
c(2) + c(3) = 1
and click OK. EViews reports the following result of the Wald test:
Wald Test:
Equation: EQ1
Null Hypothesis: C(2) + C(3) = 1

Test Statistic      Value        df          Probability
t-statistic         10.95526     22          0.0000
F-statistic         120.0177     (1, 22)     0.0000
Chi-square          120.0177     1           0.0000

Null Hypothesis Summary:
Normalized Restriction (= 0)     Value        Std. Err.
-1 + C(2) + C(3)                 0.830779     0.075834
EViews reports an F-statistic and a Chi-square statistic with associated p-values. In cases
with a single restriction, EViews reports the t-statistic equivalent of the F-statistic. See
Wald Test Details on page 175 for a discussion of these statistics. In addition, EViews
reports the value of the normalized (homogeneous) restriction and an associated standard
error. In this example, we have a single linear restriction so the F-statistic and Chi-square
statistic are identical, with the p-value indicating that we can decisively reject the null
hypothesis of constant returns to scale.
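The reported values can be reproduced directly from the coefficient estimates. The normalized restriction is C(2) + C(3) - 1, and the t-statistic is its value divided by its standard error:
$$ 1.591175 + 0.239604 - 1 = 0.830779, \qquad t = \frac{0.830779}{0.075834} \approx 10.955 $$
With a single restriction, the F-statistic and Chi-square statistic are both $t^2 \approx 120.02$. The same test can also be issued by command using the equation's wald view, e.g. (assuming the equation is named EQ1):
eq1.wald c(2) + c(3) = 1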
To test more than one restriction, separate the restrictions by commas. For example, to test
the hypothesis that the elasticity of output with respect to labor is 2/3 and the elasticity with
respect to capital is 1/3, enter the restrictions as,
c(2)=2/3, c(3)=1/3
Wald Test:
Equation: EQ1
Null Hypothesis: C(2)=2/3, C(3)=1/3

Test Statistic      Value        df          Probability
F-statistic         106.6113     (2, 22)     0.0000
Chi-square          213.2226     2           0.0000

Null Hypothesis Summary:
Normalized Restriction (= 0)     Value        Std. Err.
-2/3 + C(2)                      0.924508     0.167740
-1/3 + C(3)                     -0.093729     0.105390
Note that in addition to the test statistic summary, we report the values of both of the normalized restrictions, along with their standard errors (the square roots of the diagonal elements of the restriction covariance matrix).
As an example of a nonlinear model with a nonlinear restriction, we estimate a general production function of the form:
$$ \log Q = \beta_1 + \beta_2 \log\!\left( \beta_3 K^{\beta_4} + (1 - \beta_3) L^{\beta_4} \right) + \epsilon \qquad (24.9) $$
and test the constant elasticity of substitution (CES) production function restriction $\beta_2 = 1/\beta_4$. This is an example of a nonlinear restriction. To estimate the (unrestricted)
nonlinear model, you may initialize the parameters using the command
param c(1) -2.6 c(2) 1.8 c(3) 1e-4 c(4) -6
then select Quick/Estimate Equation and then estimate the following specification:
log(q) = c(1) + c(2)*log(c(3)*k^c(4)+(1-c(3))*l^c(4))
to obtain
Variable       Coefficient    Std. Error    t-Statistic    Prob.
C(1)           -2.655953      0.337610      -7.866935      0.0000
C(2)           -0.301579      0.245596      -1.227944      0.2331
C(3)            4.37E-05      0.000318       0.137553      0.8919
C(4)           -6.121195      5.100604      -1.200092      0.2435

R-squared            0.985325    Mean dependent var        4.767586
Adjusted R-squared   0.983229    S.D. dependent var        0.326086
S.E. of regression   0.042229    Akaike info criterion    -3.345760
Sum squared resid    0.037450    Schwarz criterion        -3.150740
Log likelihood       45.82200    Hannan-Quinn criter.     -3.291670
F-statistic          470.0092    Durbin-Watson stat        0.725156
Prob(F-statistic)    0.000000
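The estimation and test in this example can also be run from the command line; a sketch, where the equation name eq_ces is our own choice:
param c(1) -2.6 c(2) 1.8 c(3) 1e-4 c(4) -6
equation eq_ces.ls log(q) = c(1) + c(2)*log(c(3)*k^c(4)+(1-c(3))*l^c(4))
eq_ces.wald c(2)=1/c(4)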
To test the nonlinear restriction $\beta_2 = 1/\beta_4$, choose View/Coefficient Diagnostics/Wald-Coefficient Restrictions... from the equation toolbar and type the following restriction in the Wald Test dialog box:
c(2)=1/c(4)
and click OK. EViews reports:

Wald Test:
Equation: Untitled
Null Hypothesis: C(2)=1/C(4)

Test Statistic      Value        df          Probability
t-statistic        -1.259105     21          0.2218
F-statistic         1.585344     (1, 21)     0.2218
Chi-square          1.585344     1           0.2080

Null Hypothesis Summary:
Normalized Restriction (= 0)     Value        Std. Err.
C(2) - 1/C(4)                   -0.138212     0.109770
We focus on the p-values for the statistics which show that we fail to reject the null hypothesis. Note that EViews reports that it used the delta method (with analytic derivatives) to
compute the Wald restriction variance for the nonlinear restriction.
It is well-known that nonlinear Wald tests are not invariant to the way that you specify the nonlinear restrictions. In this example, the nonlinear restriction $\beta_2 = 1/\beta_4$ may equivalently be written as $\beta_2 \beta_4 = 1$ or $\beta_4 = 1/\beta_2$ (for nonzero $\beta_2$ and $\beta_4$). For example,
entering the restriction as,
c(2)*c(4)=1
yields:
Wald Test:
Equation: Untitled
Null Hypothesis: C(2)*C(4)=1

Test Statistic      Value        df          Probability
t-statistic         11.11048     21          0.0000
F-statistic         123.4427     (1, 21)     0.0000
Chi-square          123.4427     1           0.0000

Null Hypothesis Summary:
Normalized Restriction (= 0)     Value        Std. Err.
-1 + C(2)*C(4)                   0.846022     0.076146
so that the test now decisively rejects the null hypothesis. We hasten to add that this type of
inconsistency in results is not unique to EViews, but is a more general property of the Wald
test. Unfortunately, there does not seem to be a general solution to this problem (see Davidson and MacKinnon, 1993, Chapter 13).
Wald Test Details
Consider a general nonlinear regression model:
$$ y = f(\beta) + \epsilon \qquad (24.10) $$
where $y$ and $\epsilon$ are $T$-vectors and $\beta$ is a $k$-vector of parameters to be estimated. Any restrictions on the parameters can be written as:
$$ H_0: \; g(\beta) = 0, \qquad (24.11) $$
where $g$ is a smooth function, $g: R^k \rightarrow R^q$, imposing $q$ restrictions on $\beta$. The Wald statistic is then computed as:
$$ W = g(b)' \left( \frac{\partial g(b)}{\partial b'} \, \hat V(b) \, \frac{\partial g(b)'}{\partial b} \right)^{-1} g(b) \;\bigg|_{\,b = \hat\beta} \qquad (24.12) $$
where $T$ is the number of observations, $b$ is the vector of unrestricted parameter estimates, and $\hat V$ is an estimate of the $b$ covariance. In the standard regression case, $\hat V$ is given by:
$$ \hat V(b) = s^2 \left( \sum_i \frac{\partial f_i(b)}{\partial b} \frac{\partial f_i(b)}{\partial b'} \right)^{-1} \bigg|_{\,b = \hat\beta} \qquad (24.13) $$
where $u$ is the vector of unrestricted residuals, and $s^2$ is the usual estimator of the unrestricted residual variance, $s^2 = (u'u)/(N - k)$, but the estimator of $V$ may differ. For example, $\hat V$ may be a robust variance matrix estimator computed using White or Newey-West techniques.
More formally, under the null hypothesis $H_0$, the Wald statistic has an asymptotic $\chi^2(q)$ distribution, where $q$ is the number of restrictions under $H_0$.
For the textbook case of a linear regression model,
$$ y = X\beta + \epsilon \qquad (24.14) $$
and linear restrictions:
$$ H_0: \; R\beta - r = 0, \qquad (24.15) $$
where $R$ is a known $q \times k$ matrix and $r$ is a $q$-vector, the Wald statistic in Equation (24.12) reduces to:
$$ W = (Rb - r)' \left( R \, s^2 (X'X)^{-1} R' \right)^{-1} (Rb - r), \qquad (24.16) $$
which is asymptotically distributed as $\chi^2(q)$ under $H_0$.
If we further assume that the errors are independent and identically distributed normal random variables, we have an exact, finite sample F-statistic:
$$ F = \frac{W}{q} = \frac{(\tilde u'\tilde u - u'u)/q}{(u'u)/(T - k)}, \qquad (24.17) $$
where $\tilde u$ is the vector of residuals from the restricted regression. In this case, the F-statistic compares the residual sum of squares computed with and without the restrictions imposed.
We remind you that the expression for the finite sample F-statistic in (24.17) is for standard
linear regression, and is not valid for more general cases (nonlinear models, ARMA specifications, or equations where the variances are estimated using other methods such as
Newey-West or White). In non-standard settings, the reported F-statistic (which EViews always computes as $W/q$) does not possess the desired finite-sample properties. In these cases, while asymptotically valid, F-statistic (and corresponding t-statistic) results should be viewed as illustrative and for comparison purposes only.
Omitted Variables
This test enables you to add a set of variables to an existing equation and to ask whether the
set makes a significant contribution to explaining the variation in the dependent variable.
The null hypothesis H 0 is that the additional set of regressors are not jointly significant.
The output from the test is an F-statistic and a likelihood ratio (LR) statistic with associated
p-values, together with the estimation results of the unrestricted model under the alternative. The F-statistic is based on the difference between the residual sums of squares of the
restricted and unrestricted regressions and is only valid in linear regression based settings.
The LR statistic is computed as:
$$ LR = -2\,(l_r - l_u) \qquad (24.18) $$
where $l_r$ and $l_u$ are the maximized values of the (Gaussian) log likelihood function of the restricted and unrestricted regressions, respectively. Under $H_0$, the LR statistic has an asymptotic $\chi^2$ distribution with degrees of freedom equal to the number of restrictions (the number of added variables).
Bear in mind that:
The omitted variables test requires that the same number of observations exist in the
original and test equations. If any of the series to be added contain missing observations over the sample of the original equation (which will often be the case when you
add lagged variables), the test statistics cannot be constructed.
The omitted variables test can be applied to equations estimated with linear LS, ARCH
(mean equation only), binary, ordered, censored, truncated, and count models. The
test is available only if you specify the equation by listing the regressors, not by a formula.
Equations estimated by Two-Stage Least Squares and GMM offer a variant of this test
based on the difference in J-statistics.
To perform an LR test in these settings, you can estimate a separate equation for the unrestricted and restricted models over a common sample, and evaluate the LR statistic and pvalue using scalars and the @cchisq function, as described above.
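A minimal sketch of that manual LR computation for the example below, assuming the restricted and unrestricted equations are estimated over the same sample, that @cchisq returns the cumulative chi-square distribution function (so the p-value is one minus it), and using object names of our own:
equation eq_u.ls log(q) c log(l) log(k) log(l)^2 log(k)^2
equation eq_r.ls log(q) c log(l) log(k)
scalar lr = -2*(eq_r.@logl - eq_u.@logl)
scalar lr_pval = 1 - @cchisq(lr, 2)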
To carry out the test, select View/Coefficient Diagnostics/Omitted Variables Test - Likelihood Ratio... from the equation toolbar and list the names of the test series, each separated by at least one space. If, for the Cobb-Douglas equation estimated above, you enter
log(l)^2 log(k)^2
in the dialog, then EViews reports the results of the unrestricted regression containing the two additional explanatory variables, and displays statistics testing the hypothesis that the coefficients on the new variables are jointly zero. The top part of the output depicts the test results (the bottom portion shows the estimated test equation):
Omitted Variables Test
Equation: EQ1
Specification: LOG(Q) C LOG(L) LOG(K)
Omitted Variables: LOG(L)^2 LOG(K)^2

                        Value        df         Probability
F-statistic             2.490982     (2, 20)    0.1082
Likelihood ratio        5.560546     2          0.0620

F-test summary:
                        Sum of Sq.   df    Mean Squares
Test SSR                0.008310      2    0.004155
Restricted SSR          0.041669     22    0.001894
Unrestricted SSR        0.033359     20    0.001668
Unrestricted SSR        0.033359     20    0.001668

LR test summary:
                        Value        df
Restricted LogL         44.48746     22
Unrestricted LogL       47.26774     20
The F-statistic has an exact finite sample F-distribution under $H_0$ for linear models if the errors are independent and identically distributed normal random variables. The numerator degrees of freedom is the number of additional regressors and the denominator degrees of freedom is the number of observations less the total number of regressors. The log likelihood ratio statistic is the LR test statistic and is asymptotically distributed as a $\chi^2$ with degrees of freedom equal to the number of added regressors.
In our example, neither test rejects the null hypothesis that the two series do not belong to
the equation at a 5% significance level.
Redundant Variables
The redundant variables test allows you to test for the statistical significance of a subset of
your included variables. More formally, the test is for whether a subset of variables in an
equation all have zero coefficients and might thus be deleted from the equation. The redundant variables test can be applied to equations estimated by linear LS, TSLS, ARCH (mean
equation only), binary, ordered, censored, truncated, and count methods. The test is available only if you specify the equation by listing the regressors, not by a formula.
To carry out the test, select View/Coefficient Diagnostics/Redundant Variables Test - Likelihood Ratio... from the equation toolbar and list the names of each of the test variables, separated by at least one space. Suppose, for example, that the initial regression specification is:
log(q) c log(l) log(k) log(l)^2 log(k)^2
If you enter the list
log(l)^2 log(k)^2
in the dialog, then EViews reports the results of the restricted regression dropping the two regressors, followed by the statistics associated with the test of the hypothesis that the coefficients on the two variables are jointly zero. The top portion of the output is:
Redundant Variables Test
Equation: EQ1
Specification: LOG(Q) C LOG(L) LOG(K) LOG(L)^2 LOG(K)^2
Redundant Variables: LOG(L)^2 LOG(K)^2

                        Value        df         Probability
F-statistic             2.490982     (2, 20)    0.1082
Likelihood ratio        5.560546     2          0.0620

F-test summary:
                        Sum of Sq.   df    Mean Squares
Test SSR                0.008310      2    0.004155
Restricted SSR          0.041669     22    0.001894
Unrestricted SSR        0.033359     20    0.001668
Unrestricted SSR        0.033359     20    0.001668

LR test summary:
                        Value        df
Restricted LogL         44.48746     22
Unrestricted LogL       47.26774     20
The reported test statistics are the F-statistic and the Log likelihood ratio. The F-statistic has
an exact finite sample F-distribution under H 0 if the errors are independent and identically
distributed normal random variables and the model is linear. The numerator degrees of freedom are given by the number of coefficient restrictions in the null hypothesis. The denominator degrees of freedom are given by the total regression degrees of freedom. The LR test is an asymptotic test, distributed as a $\chi^2$ with degrees of freedom equal to the number of excluded variables under $H_0$. In this case, there are two degrees of freedom.
Factor Breakpoint Test
The Factor Breakpoint test splits an estimated equation's sample into a number of subsamples classified by one or more variables and examines whether there are significant differences in equations estimated in each of those subsamples. A significant difference indicates a structural change in the relationship. For example, you can use this test to examine whether the demand function for energy differs between the different states of the USA. The test may be used with least squares and two-stage least squares regressions.
By default the Factor Breakpoint test tests whether there is a structural change in all of the
equation parameters. However if the equation is linear EViews allows you to test whether
there has been a structural change in a subset of the parameters.
To carry out the test, we partition the data by splitting the estimation sample into subsamples of each unique value of the classification variable. Each subsample must contain more
observations than the number of coefficients in the equation so that the equation can be
estimated. The Factor Breakpoint test compares the sum of squared residuals obtained by fitting a single equation to the entire sample with the sum of squared residuals obtained when
separate equations are fit to each subsample of the data.
EViews reports three test statistics for the Factor Breakpoint test. The F-statistic is based on
the comparison of the restricted and unrestricted sum of squared residuals and in the simplest case involving two subsamples, is computed as:
$$ F = \frac{\left(\tilde u'\tilde u - (u_1'u_1 + u_2'u_2)\right)/k}{(u_1'u_1 + u_2'u_2)/(T - 2k)} \qquad (24.19) $$
where $\tilde u'\tilde u$ is the restricted sum of squared residuals, $u_i'u_i$ is the sum of squared residuals from subsample $i$, $T$ is the total number of observations, and $k$ is the number of parameters in the equation. This formula can be generalized naturally to more than two subsamples. The F-statistic has an exact finite sample F-distribution if the errors are independent
and identically distributed normal random variables.
The log likelihood ratio statistic is based on the comparison of the restricted and unrestricted maximum of the (Gaussian) log likelihood function. The LR test statistic has an asymptotic $\chi^2$ distribution with degrees of freedom equal to $(m - 1)k$ under the null hypothesis of no structural change, where $m$ is the number of subsamples.
The Wald statistic is computed from a standard Wald test of the restriction that the coefficients on the equation parameters are the same in all subsamples. As with the log likelihood ratio statistic, the Wald statistic has an asymptotic $\chi^2$ distribution with $(m - 1)k$ degrees of freedom, where $m$ is the number of subsamples.
For example, suppose we have estimated an equation specification of
lwage c grade age high
and run the Factor Breakpoint test using a classification variable. The top portion of the output reports the three test statistics:

F-statistic              6.227078     Prob.   0.0000
Log likelihood ratio    73.19468     Prob.   0.0000
Wald statistic          74.72494     Prob.   0.0000
Residual Diagnostics
EViews provides tests for serial correlation, normality, heteroskedasticity, and autoregressive
conditional heteroskedasticity in the residuals from your estimated equation. Not all of these
tests are available for every specification.
Correlograms and Q-statistics
This view displays the autocorrelations and partial autocorrelations of the equation residuals up to the specified number of lags. Further details on these statistics and the Ljung-Box Q-statistics that are also computed are provided in "Q-Statistics" on page 395 in User's Guide I.
This view is available for the residuals from least squares, two-stage least squares, nonlinear
least squares and binary, ordered, censored, and count models. In calculating the probability
values for the Q-statistics, the degrees of freedom are adjusted to account for estimated
ARMA terms.
To display the correlograms and Q-statistics, push View/Residual Diagnostics/Correlogram-Q-statistics on the equation toolbar. In the Lag Specification dialog box, specify the
number of lags you wish to use in computing the correlogram.
Serial Correlation LM Test
This test is an alternative to the Q-statistics for testing serial correlation. The null hypothesis of the LM test is that there is no serial correlation up to lag order $p$, where $p$ is a pre-specified integer. The test statistic is computed by an auxiliary regression. Suppose you have estimated the regression:
$$ y_t = X_t'\beta + \epsilon_t \qquad (24.20) $$
where $\beta$ are the estimated coefficients and $\epsilon$ are the errors. The test statistic for lag order $p$ is based on the auxiliary regression for the residuals $e = y - X\hat\beta$:
$$ e_t = X_t'\gamma + \sum_{s=1}^{p} \alpha_s e_{t-s} + v_t. \qquad (24.21) $$
Following the suggestion by Davidson and MacKinnon (1993), EViews sets any presample
values of the residuals to 0. This approach does not affect the asymptotic distribution of the
statistic, and Davidson and MacKinnon argue that doing so provides a test statistic which
has better finite sample properties than an approach which drops the initial observations.
This is a regression of the residuals on the original regressors X and lagged residuals up to
order p . EViews reports two test statistics from this test regression. The F-statistic is an
omitted variable test for the joint significance of all lagged residuals. Because the omitted
variables are residuals and not independent variables, the exact finite sample distribution of
the F-statistic under H 0 is still not known, but we present the F-statistic for comparison
purposes.
The Obs*R-squared statistic is the Breusch-Godfrey LM test statistic. This LM statistic is computed as the number of observations times the (uncentered) $R^2$ from the test regression. Under quite general conditions, the LM test statistic is asymptotically distributed as a $\chi^2(p)$.
The serial correlation LM test is available for residuals from either least squares or two-stage
least squares estimation. The original regression may include AR and MA terms, in which
case the test regression will be modified to take account of the ARMA terms. Testing in 2SLS
settings involves additional complications, see Wooldridge (1990) for details.
To carry out the test, push View/Residual Diagnostics/Serial
Correlation LM Test on the equation toolbar and specify the
highest order of the AR or MA process that might describe the
serial correlation. If the test indicates serial correlation in the
residuals, LS standard errors are invalid and should not be used
for inference.
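The test can also be requested by command for a named equation; a sketch, assuming the auto equation view (which computes the Breusch-Godfrey test for the specified number of lags) is available in your version:
equation eq_m1.ls m1 c ip(0 to -3)
eq_m1.auto(2)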
To illustrate, consider the macroeconomic data in our
Basics.WF1 workfile. We begin by regressing money supply M1 on a constant, contemporaneous industrial production IP and three lags of IP using the equation specification
m1 c ip(0 to -3)
The serial correlation LM test results for this equation with 2 lags in the test equation
strongly reject the null of no serial correlation:
Breusch-Godfrey Serial Correlation LM Test:

F-statistic          25280.60    Prob. F(2,353)          0.0000
Obs*R-squared        357.5040    Prob. Chi-Square(2)     0.0000

Test Equation:
Dependent Variable: RESID
Method: Least Squares
Date: 08/10/09   Time: 14:58
Sample: 1960M01 1989M12
Included observations: 360
Presample missing value lagged residuals set to zero.

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C              -0.584837      1.294016      -0.451955      0.6516
IP             -11.36147      0.599613      -18.94800      0.0000
IP(-1)          17.13281      1.110223       15.43187      0.0000
IP(-2)         -5.029158      1.241122      -4.052107      0.0001
IP(-3)         -0.717490      0.629348      -1.140054      0.2550
RESID(-1)       1.158582      0.051233       22.61410      0.0000
RESID(-2)      -0.156513      0.051610      -3.032587      0.0026

R-squared            0.993067    Mean dependent var       -6.00E-15
Adjusted R-squared   0.992949    S.D. dependent var        76.48159
S.E. of regression   6.422212    Akaike info criterion     6.576655
Sum squared resid    14559.42    Schwarz criterion         6.652218
Log likelihood      -1176.798    Hannan-Quinn criter.      6.606700
F-statistic          8426.868    Durbin-Watson stat        1.582614
Prob(F-statistic)    0.000000
Heteroskedasticity Tests
This set of tests allows you to test for a range of specifications of heteroskedasticity in the
residuals of your equation. Ordinary least squares estimates are consistent in the presence of
heteroskedasticity, but the conventional computed standard errors are no longer valid. If you
find evidence of heteroskedasticity, you should either choose the robust standard errors
option to correct the standard errors (see Heteroskedasticity Consistent Covariances
(White) on page 33) or you should model the heteroskedasticity to obtain more efficient
estimates using weighted least squares.
EViews lets you employ a number of different heteroskedasticity tests, or use the custom test wizard to test for heteroskedasticity using a combination of methods. Each of these tests involves performing an auxiliary regression using the residuals from the original equation. These tests are available for equations estimated by least squares, two-stage least squares, and nonlinear least squares. The individual tests are outlined below.
Breusch-Pagan-Godfrey (BPG)
The Breusch-Pagan-Godfrey test (see Breusch-Pagan, 1979, and Godfrey, 1978) is a Lagrange multiplier test of the null hypothesis of no heteroskedasticity against heteroskedasticity of the form $\sigma_t^2 = \sigma^2 h(z_t'\alpha)$, where $z_t$ is a vector of independent variables. Usually this vector contains the regressors from the original least squares regression, but it is not necessary.
The test is performed by completing an auxiliary regression of the squared residuals from the original equation on $(1, z_t)$. The explained sum of squares from this auxiliary regression is then divided by $2\hat\sigma^4$ to give an LM statistic, which follows a $\chi^2$ distribution with degrees of freedom equal to the number of variables in $z$ under the null hypothesis of no heteroskedasticity. Koenker (1981) suggested that a more easily computed statistic of Obs*R-squared (where $R^2$ is from the auxiliary regression) be used. Koenker's statistic is also distributed as a $\chi^2$ with degrees of freedom equal to the number of variables in $z$. Along with these two statistics, EViews also quotes an F-statistic for a redundant variable test for the joint significance of the variables in $z$ in the auxiliary regression.
As an example of a BPG test suppose we had an original equation of
log(m1) = c(1) + c(2)*log(ip) + c(3)*tb3
and we believed that there was heteroskedasticity in the residuals that depended on a function of LOG(IP) and TB3, then the following auxiliary regression could be performed
resid^2 = c(1) + c(2)*log(ip) + c(3)*tb3
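A hand-rolled version of the Obs*R-squared (Koenker) form of the statistic might look like the following sketch; the object names are ours, makeresid saves the equation residuals to a series, and we again assume @cchisq is the chi-square CDF:
equation eq_orig.ls log(m1) c log(ip) tb3
eq_orig.makeresid res_orig
series res2 = res_orig^2
equation eq_aux.ls res2 c log(ip) tb3
scalar lm_bpg = eq_aux.@regobs*eq_aux.@r2
scalar lm_pval = 1 - @cchisq(lm_bpg, 2)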
Note that both the ARCH and White tests outlined below can be seen as Breusch-Pagan-Godfrey type tests, since both are auxiliary regressions of the squared residuals on a set of
regressors and a constant.
Harvey
The Harvey (1976) test for heteroskedasticity is similar to the Breusch-Pagan-Godfrey test. However, Harvey tests a null hypothesis of no heteroskedasticity against heteroskedasticity of the form $\sigma_t^2 = \exp(z_t'\alpha)$, where, again, $z_t$ is a vector of independent variables.
To test for this form of heteroskedasticity, an auxiliary regression of the log of the original equation's squared residuals on $(1, z_t)$ is performed. The LM statistic is then the explained sum of squares from the auxiliary regression divided by $\psi'(0.5)$, the derivative of the log gamma function evaluated at 0.5. This statistic is distributed as a $\chi^2$ with degrees of freedom equal to the number of variables in $z$. EViews also quotes the Obs*R-squared statistic, and the redundant variable F-statistic.
Glejser
The Glejser (1969) test is also similar to the Breusch-Pagan-Godfrey test. This test tests against an alternative hypothesis of heteroskedasticity of the form $\sigma_t^2 = (\sigma^2 + z_t'\alpha)^m$ with $m = 1, 2$. The auxiliary regression that Glejser proposes regresses the absolute value of the residuals from the original equation upon $(1, z_t)$. An LM statistic can be formed by dividing the explained sum of squares from this auxiliary regression by $((1 - 2/\pi)\hat\sigma^2)$. As with the previous tests, this statistic is distributed from a chi-squared distribution with degrees of freedom equal to the number of variables in $z$. EViews also quotes the Obs*R-squared statistic, and the redundant variable F-statistic.
ARCH LM Test
The ARCH test is a Lagrange multiplier (LM) test for autoregressive conditional heteroskedasticity (ARCH) in the residuals (Engle 1982). This particular heteroskedasticity specification was motivated by the observation that in many financial time series, the magnitude of
residuals appeared to be related to the magnitude of recent residuals. ARCH in itself does not
invalidate standard LS inference. However, ignoring ARCH effects may result in loss of efficiency; see Chapter 25. ARCH and GARCH Estimation, on page 231 for a discussion of estimation of ARCH models in EViews.
The ARCH LM test statistic is computed from an auxiliary test regression. To test the null
hypothesis that there is no ARCH up to order q in the residuals, we run the regression:
$$ e_t^2 = \beta_0 + \sum_{s=1}^{q} \beta_s \, e_{t-s}^2 + v_t, \qquad (24.22) $$
where e is the residual. This is a regression of the squared residuals on a constant and
lagged squared residuals up to order q . EViews reports two test statistics from this test
regression. The F-statistic is an omitted variable test for the joint significance of all lagged
squared residuals. The Obs*R-squared statistic is Engle's LM test statistic, computed as the number of observations times the $R^2$ from the test regression. The exact finite sample distribution of the F-statistic under $H_0$ is not known, but the LM test statistic is asymptotically distributed as a $\chi^2(q)$ under quite general conditions.
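If your version supports the archtest equation view, the same ARCH LM test can be issued by command for a named equation, e.g. for four lags (the equation name eq01 is a placeholder of ours):
eq01.archtest(4)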
White's Heteroskedasticity Test
White's (1980) test is a test of the null hypothesis of no heteroskedasticity against heteroskedasticity of unknown, general form. The test statistic is computed by an auxiliary regression in which the squared residuals are regressed on all possible (non-redundant) cross products of the regressors. Suppose, for example, that we estimated the regression:
$$ y_t = b_1 + b_2 x_t + b_3 z_t + e_t \qquad (24.23) $$
where the $b$ are the estimated parameters and $e$ the residual. The test statistic is then based on the auxiliary regression:
$$ e_t^2 = a_0 + a_1 x_t + a_2 z_t + a_3 x_t^2 + a_4 z_t^2 + a_5 x_t z_t + v_t. \qquad (24.24) $$
Prior to EViews 6, White tests always included the level values of the regressors (i.e., the cross product of the regressors and a constant) whether or not the original regression included a constant term. This is no longer the case; level values are only included if the original regression included a constant.
EViews reports three test statistics from the test regression. The F-statistic is a redundant variable test for the joint significance of all cross products, excluding the constant. It is presented for comparison purposes.
The Obs*R-squared statistic is White's test statistic, computed as the number of observations times the centered $R^2$ from the test regression. The exact finite sample distribution of the F-statistic under $H_0$ is not known, but White's test statistic is asymptotically distributed as a $\chi^2$ with degrees of freedom equal to the number of slope coefficients (excluding the constant) in the test regression.
The third statistic, an LM statistic, is the explained sum of squares from the auxiliary regression divided by $2\hat\sigma^4$. This, too, is distributed as a chi-squared distribution with degrees of freedom equal to the number of slope coefficients (minus the constant) in the auxiliary regression.
White also describes this approach as a general test for model misspecification, since the
null hypothesis underlying the test assumes that the errors are both homoskedastic and
independent of the regressors, and that the linear specification of the model is correct. Failure of any one of these conditions could lead to a significant test statistic. Conversely, a nonsignificant test statistic implies that none of the three conditions is violated.
When there are redundant cross-products, EViews automatically drops them from the test
regression. For example, the square of a dummy variable is the dummy variable itself, so
EViews drops the squared term to avoid perfect collinearity.
You may choose which type of test to perform by clicking on the name in the Test type box.
The remainder of the dialog will change, allowing you to specify various options for the
selected test.
The BPG, Harvey and Glejser tests allow you to specify which variables to use in the auxiliary regression. Note that you may choose to add all of the variables used in the original
equation by pressing the Add equation regressors button. If the original equation was nonlinear this button will add the coefficient gradients from that equation. Individual gradients
can be added by using the @grad keyword to add the i-th gradient (e.g., @grad(2)).
The ARCH test simply lets you specify the number of lags to include for the ARCH specification.
The White test lets you choose whether to include cross terms or no cross terms using the
Include cross terms checkbox. The cross terms version of the test is the original version of
White's test that includes all of the cross product terms. However, the number of cross-product terms increases with the square of the number of right-hand side variables in the regression; with large numbers of regressors, it may not be practical to include all of these terms.
The no cross terms specification runs the test regression using only squares of the regressors.
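These dialog-based tests are generally also accessible through the hettest equation view; as a hedged sketch (check the Command Reference for the exact option names available in your version), a test using LOG(IP) and TB3 as the auxiliary regressors might be entered as:
eq1.hettest log(ip) tb3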
The Custom Test Wizard lets you combine or specify in greater detail the various tests. The
following example, using EQ1 from the Basics.WF1 workfile, shows how to use the Custom Wizard. The equation has the following specification:
log(m1) = c(1) + c(2)*log(ip) + c(3)*tb3
The first page of the wizard allows you to choose which transformation of the residuals you
want to use as the dependent variable in the auxiliary regression. Note this is really a choice
between doing a Breusch-Pagan-Godfrey, a Harvey, or a Glejser type test. In our example we
choose to use the LOG of the squared residuals:
Once you have chosen a dependent variable, click on Next. Step two of the wizard lets you
decide whether to include a White specification. If you check the Include White specification checkbox and click on Next, EViews will display the White Specification page which
lets you specify options for the test. If you do not elect to include a White specification and
click on Next, EViews will skip the White Specification page, and continue on to the next
section of the wizard.
There are two parts to the dialog. In the upper section you may use the Type of White Test
dropdown menu to select the basic test.
You may choose to include cross terms or not, whether to run
an EViews 5 compatible test (as noted above, the auxiliary
regression run by EViews differs slightly in Version 6 and
later when there is no constant in the original equation), or,
by choosing Custom, whether to include a set of variables not identical to those used in the original equation. The custom test allows you to perform a test where you include the squares and cross products of an arbitrary set of regressors. Note that if you provide a set of variables that differs from those in the original equation, the test is no longer a White test, but could still be a valid test for heteroskedasticity. For our example we choose to include C and LOG(IP) as regressors, and choose to use cross terms.
Click on Next to continue to the next section of the wizard. EViews prompts you for whether
you wish to add any other variables as part of a Harvey (Breusch-Pagan-Godfrey/Harvey/
Glejser) specification. If you elect to do so, EViews will display a dialog prompting you to
add additional regressors. Note that if you have already included a White specification and
your original equation had a constant term, your auxiliary regression will already include
level values of the original equation regressors (since the cross-product of the constant term
and those regressors is their level values). In our example we choose to add the variable Y to
the auxiliary regression:
Next we can add ARCH terms to the auxiliary regression. The ARCH specification lets you
specify a lag structure. You can either specify a number of lags, so that the auxiliary regression will include lagged values of the squared residuals up to the number you choose, or
you may provide a custom lag structure. Custom structures are entered in pairs of lags. In
our example we choose to include lags of 1, 2, 3 and 6:
The final step of the wizard is to view the final specification of the auxiliary regression, with
all the options you have previously chosen, and make any modifications. For our choices,
the final specification looks like this:
Our ARCH specification with lags of 1, 2, 3, 6 is shown first, followed by the White specification, and then the additional term, Y. Upon clicking Finish the main Heteroskedasticity
Tests dialog has been filled out with our specification:
Note, rather than go through the wizard, we could have typed this specification directly into
the dialog.
This test results in the following output:
F-statistic              203.6910    Prob. F(10,324)          0.0000
Obs*R-squared            289.0262    Prob. Chi-Square(10)     0.0000
Scaled explained SS      160.8560    Prob. Chi-Square(10)     0.0000
Test Equation:
Dependent Variable: LRESID2
Method: Least Squares
Date: 08/10/09   Time: 15:06
Sample (adjusted): 1959M07 1989M12
Included observations: 335 after adjustments

Variable          Coefficient    Std. Error    t-Statistic    Prob.
C                  2.320248      10.82443       0.214353      0.8304
LRESID2(-1)        0.875599      0.055882       15.66873      0.0000
LRESID2(-2)        0.061016      0.074610       0.817805      0.4141
LRESID2(-3)       -0.035013      0.061022      -0.573768      0.5665
LRESID2(-6)        0.024621      0.036220       0.679761      0.4971
LOG(IP)           -1.622303      5.792786      -0.280056      0.7796
(LOG(IP))^2        0.255666      0.764826       0.334280      0.7384
(LOG(IP))*TB3     -0.040560      0.154475      -0.262566      0.7931
TB3                0.097993      0.631189       0.155252      0.8767
TB3^2              0.002845      0.005380       0.528851      0.5973
Y                 -0.023621      0.039166      -0.603101      0.5469

R-squared            0.862765    Mean dependent var       -4.046849
Adjusted R-squared   0.858529    S.D. dependent var        1.659717
S.E. of regression   0.624263    Akaike info criterion     1.927794
Sum squared resid    126.2642    Schwarz criterion         2.053035
Log likelihood      -311.9056    Hannan-Quinn criter.      1.977724
F-statistic          203.6910    Durbin-Watson stat        2.130511
Prob(F-statistic)    0.000000
This output contains both the set of test statistics, and the results of the auxiliary regression
on which they are based. All three statistics reject the null hypothesis of homoskedasticity.
Stability Diagnostics
EViews provides several test statistic views that examine whether the parameters of your
model are stable across various subsamples of your data.
One common approach is to split the $T$ observations in your data set into $T_1$ observations to be used for estimation, and $T_2 = T - T_1$ observations to be used for testing and evaluation. In time series work, you will usually take the first $T_1$ observations for estimation and the last $T_2$ for testing. With cross-section data, you may wish to order the data by some variable, such as household income, sales of a firm, or other indicator variables and use a subset for testing.
Note that the alternative of using all available sample observations for estimation promotes
a search for a specification that best fits that specific data set, but does not allow for testing
predictions of the model against data that have not been used in estimating the model. Nor
does it allow one to test for parameter constancy, stability and robustness of the estimated
relationship.
There are no hard and fast rules for determining the relative sizes of $T_1$ and $T_2$. In some cases there may be obvious points at which a break in structure might have taken place: a war, a piece of legislation, a switch from fixed to floating exchange rates, or an oil shock. Where there is no reason a priori to expect a structural break, a commonly used rule-of-thumb is to use 85 to 90 percent of the observations for estimation and the remainder for testing.
EViews provides built-in procedures which facilitate variations on this type of analysis.
Chow's Breakpoint Test
The idea of the breakpoint Chow test is to fit the equation separately for each subsample and to see whether there are significant differences in the estimated equations; a significant difference indicates a structural change in the relationship. EViews reports three test statistics for the Chow breakpoint test. The F-statistic is based on the comparison of the restricted and unrestricted sums of squared residuals and, in the simplest case involving a single breakpoint, is computed as:
$$ F = \frac{\left(\tilde u'\tilde u - (u_1'u_1 + u_2'u_2)\right)/k}{(u_1'u_1 + u_2'u_2)/(T - 2k)} \qquad (24.25) $$
where $\tilde u'\tilde u$ is the restricted sum of squared residuals, $u_i'u_i$ is the sum of squared residuals from subsample $i$, $T$ is the total number of observations, and $k$ is the number of parameters in the equation. This formula can be generalized naturally to more than one breakpoint.
The F-statistic has an exact finite sample F-distribution if the errors are independent and
identically distributed normal random variables.
The log likelihood ratio statistic is based on the comparison of the restricted and unrestricted maximum of the (Gaussian) log likelihood function. The LR test statistic has an asymptotic $\chi^2$ distribution with degrees of freedom equal to $(m - 1)k$ under the null hypothesis of no structural change, where $m$ is the number of subsamples.
The Wald statistic is computed from a standard Wald test of the restriction that the coefficients on the equation parameters are the same in all subsamples. As with the log likelihood ratio statistic, the Wald statistic has an asymptotic $\chi^2$ distribution with $(m - 1)k$ degrees of freedom, where $m$ is the number of subsamples.
One major drawback of the breakpoint test is that each subsample requires at least as many
observations as the number of estimated parameters. This may be a problem if, for example,
you want to test for structural change between wartime and peacetime where there are only
a few observations in the wartime sample. The Chow forecast test, discussed below, should
be used in such cases.
To apply the Chow breakpoint test, push View/
Stability Diagnostics/Chow Breakpoint Test
on the equation toolbar. In the dialog that
appears, list the dates or observation numbers for
the breakpoints in the upper edit field, and the
regressors that are allowed to vary across breakpoints in the lower edit field.
For example, if your original equation was estimated from 1950 to 1994, entering:
1960 1970
specifies three subsamples, 1950 to 1959, 1960 to 1969, and 1970 to 1994.
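The equivalent command form uses the chow equation view with the breakpoint dates as arguments (the equation name eq1 is a placeholder of ours):
eq1.chow 1960 1970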
The results of a test applied to EQ1 in the workfile Coef_test.WF1, using the settings
above are:
F-statistic               186.8638    Prob. F(6,363)          0.0000
Log likelihood ratio      523.8566    Prob. Chi-Square(6)     0.0000
Wald Statistic           1121.183     Prob. Chi-Square(6)     0.0000
Quandt-Andrews Breakpoint Test
The Quandt-Andrews Breakpoint Test tests for one or more unknown structural breakpoints in the sample for a specified equation. A single Chow breakpoint test is performed at every observation between two dates, or observations, $t_1$ and $t_2$, and the individual $F(t)$ statistics are then summarized into one of three statistics: the Maximum statistic,
$$ \mathrm{MaxF} = \max_{t_1 \le t \le t_2} F(t) \qquad (24.26) $$
the Exp statistic,
$$ \mathrm{ExpF} = \ln\!\left( \frac{1}{k} \sum_{t = t_1}^{t_2} \exp\!\left( \tfrac{1}{2} F(t) \right) \right) \qquad (24.27) $$
and the Ave statistic,
$$ \mathrm{AveF} = \frac{1}{k} \sum_{t = t_1}^{t_2} F(t) \qquad (24.28) $$
where $k$ is the number of individual Chow F-statistics summarized.
The distribution of these test statistics is non-standard. Andrews (1993) developed their true
distribution, and Hansen (1997) provided approximate asymptotic p-values. EViews reports
the Hansen p-values. The distribution of these statistics becomes degenerate as t 1
approaches the beginning of the equation sample, or t 2 approaches the end of the equation
sample. To compensate for this behavior, it is generally suggested that the ends of the equation sample not be included in the testing procedure. A standard level for this trimming is
15%, where we exclude the first and last 15% of the observations. EViews sets trimming at
15% by default, but also allows the user to choose other levels. Note EViews only allows
symmetric trimming, i.e. the same number of observations are removed from the beginning
of the estimation sample as from the end.
The Quandt-Andrews Breakpoint Test can be evaluated
for an equation by selecting
View/Stability Diagnostics/
Quandt-Andrews Breakpoint Test from the equation toolbar. The resulting
dialog allows you to choose
the level of symmetric observation trimming for the test,
and, if your original equation was linear, which variables you wish to test for the unknown break point. You may also choose to save the
individual Chow Breakpoint test statistics into new series within your workfile by entering a
name for the new series.
As an example we estimate a consumption function, EQ02 in the workfile DEMO.WF1,
using quarterly data from 1952Q1 to 1992Q4. To test for an unknown structural break point
amongst all the original regressors we run the Quandt-Andrews test with 15% trimming.
This test gives the following results:
Note that all three of the summary statistic measures fail to reject the null hypothesis of no structural breaks at the 1% level within the 113 possible dates tested. The maximum statistic was in 1982Q2, and that is the most likely breakpoint location. Also, since the original equation was linear, note that the p-value for the LR F-statistic is identical to that for the Wald F-statistic.
Background
We consider a standard multiple linear regression model with $T$ periods and $m$ potential breaks (producing $m + 1$ regimes). For the observations $T_j, T_j + 1, \ldots, T_{j+1} - 1$ in regime $j$ we have the regression model
$$ y_t = X_t'\beta + Z_t'\delta_j + \epsilon_t \qquad (24.29) $$
for the regimes $j = 0, \ldots, m$. Note that the regressors are divided into two groups. The $X$
variables are those whose parameters do not vary across regimes, while the Z variables
have coefficients that are regime specific.
While it is slightly more convenient to define breakdates to be the last date of a regime, we follow EViews's convention in defining the breakdate to be the first date of the subsequent regime. We tie down the endpoints by setting $T_0 = 1$ and $T_{m+1} = T + 1$.
The multiple breakpoint tests that we consider may broadly be divided into three categories: tests that employ global maximizers for the breakpoints, tests that employ sequentially determined breakpoints, and hybrid tests, which combine the two approaches.
Global Maximizer Tests
The global maximizer tests of Bai and Perron (1998) begin by defining, for a given set of $m$ breakpoints $\{T\}$, the sum of squared residuals
$$ S(\beta, \delta \mid \{T\}) = \sum_{j=0}^{m} \sum_{t = T_j}^{T_{j+1} - 1} \left( y_t - X_t'\beta - Z_t'\delta_j \right)^2 \qquad (24.30) $$
which is minimized using standard least squares regression to obtain estimates $(\hat\beta, \hat\delta)$. The global $m$-break
optimizers are the set of breakpoints and corresponding coefficient estimates that minimize
sum-of-squares across all possible sets of m -break partitions.
Note that the number of comparison models increases rapidly in both m and T so that efficient algorithms for computing the optimizers are required. Practical algorithms for computing the global optimizers for multiple breakpoint models are outlined in Bai and Perron
(2003a).
These global breakpoint estimates are then used as the basis for several breakpoint tests.
EViews supports both the Bai and Perron (1998) tests of $l$ breaks versus none (along with the double maximum variants of this test in which $l$ is determined as part of the testing procedure), and information criterion methods (Yao, 1988 and Liu, Wu, and Zidek, 1997) for determining the number of breaks.
Global L Breaks vs. None
Bai and Perron (1998) describe a generalization of the Quandt-Andrews test (Andrews, 1993) in which we test for equality of the $\delta_j$ across multiple regimes. For a test of the null of no breaks against an alternative of $l$ breaks, we employ an F-statistic to evaluate the null hypothesis that $\delta_0 = \delta_1 = \cdots = \delta_l$ across the $l + 1$ regimes. The general form of the statistic (Bai-Perron 2003a) is:
$$ F(\hat\delta) = \frac{1}{T} \left( \frac{T - (l+1)q - p}{kq} \right) (R\hat\delta)' \left( R \hat V(\hat\delta) R' \right)^{-1} (R\hat\delta) \qquad (24.31) $$
where $\hat\delta$ are the regime-specific coefficient estimates for the $l$-break model, $\hat V(\hat\delta)$ is an estimate of their covariance matrix, and $R$ is the matrix expressing the equality restrictions on the $\delta_j$.
Bai and Perron also consider double maximum statistics based on maximizing the individual F-statistics across the number of breaks $l$. UDmax is the unweighted maximum of the statistics;
WDmax applies weights to the individual statistics so that the implied marginal p -values
are equal prior to taking the maximum.
The distributions of these test statistics are non-standard, but Bai and Perron (2003b) provide critical value and response surface computations for various trimming parameters
(minimum sample sizes for estimating a break), numbers of regressors, and numbers of
breaks.
Information Criteria
Yao (1988) shows that under relatively strong conditions, the number of breaks l that minimizes the Schwarz criterion is a consistent estimator of the true number of breaks in a
breaking mean model.
More generally, Liu, Wu, and Zidek (1997) propose the use of a modified Schwarz criterion for
determining the number of breaks in a regression framework. LWZ offer theoretical results
showing consistency of the estimated number of breakpoints, and provide simulation results
to guide the choice of the modified penalty criterion.
The sequential and repartition estimation procedures follow Bai (1997) and Bai and Perron (1998). Critical value and response surface computations are again provided by Bai and Perron (2003b).
The dialog is divided into the Test specification, Breakpoint variables, and Options sections.
Test Specification
The Test specification section contains a Method dropdown where you may specify the type of test you wish to perform: sequential tests of $l + 1$ versus $l$ breaks, tests of globally determined breaks versus none, mixtures of the two approaches, and information criterion methods.
Breakpoint Variables
EViews supports the testing of partial structural change models in which only a subset of the
variables in the regression are subject to change across regimes. The variables which have
regime specific coefficients should be listed in the Regressors to vary across breakpoints
edit field.
By default, all of the variables in your specification will be included in this list. To treat some
of these variables as non-varying X s, you may simply delete them from the list. Note that
there must be at least one variable in the list.
Options
The Options section of the dialog allows you to specify the maximum number of breaks or break levels to consider, the trimming percentage of the sample, the significance level for any test computations (if relevant), and assumptions regarding the computation of the variance matrices used in testing (if relevant):
The Maximum breaks limits the number of breakpoints
allowed via global testing and in sequential or mixed l vs.
l + 1 testing. If you have selected the Sequential tests all
subsets method, the edit field will be labeled Maximum levels to indicate that the
restriction is on the maximum number of break levels allowed. This change in labeling reflects the fact that the Bai all subsets approach potentially adds l + 1 breaks for
a given set of l breaks.
Examples
To illustrate the use of these tools in practice, we consider a simple model of the U.S. ex-post
real interest rate from Garcia and Perron (1996) that is used as an example by Bai and Perron
(2003a). The data, which consist of observations for the three-month treasury rate deflated
by the CPI for the period 1961q1 to 1986q3, are provided in the series RATES in the workfile realrate.WF1. The regression model consists of a constant regressor, and allows for serial correlation that differs across regimes through the use of HAC covariance estimation. We allow up to 5 breaks in the model, and employ a trimming percentage of 15% ($\epsilon = 0.15$).
Since there are 103 observations in the sample, the trimming value implies that regimes are
restricted to have at least 15 observations.
Following Bai and Perron
(2003a), we begin by estimating the equation specification
using least squares. Our equation specification consists of
the dependent variable and a
single (constant) regressor, so
we enter
rate c
Click on OK to accept the HAC settings, and then on OK to estimate the equation. The estimation results should be as depicted below:
Dependent Variable: RATES
Method: Least Squares
Date: 12/03/12   Time: 14:09
Sample: 1961Q1 1986Q3
Included observations: 103
HAC standard errors & covariance (Prewhitening with lags = 1,
  Quadratic-Spectral kernel, Andrews bandwidth = 1.9610)

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C              1.375142       0.599818      2.292600       0.0239

R-squared            0.000000    Mean dependent var        1.375142
Adjusted R-squared   0.000000    S.D. dependent var        3.451231
S.E. of regression   3.451231    Akaike info criterion     5.325001
Sum squared resid    1214.922    Schwarz criterion         5.350580
Log likelihood      -273.2375    Hannan-Quinn criter.      5.335361
Durbin-Watson stat   0.745429
To construct multiple breakpoint tests for this equation, select View/Stability Diagnostics/
Multiple Breakpoint Test... from the equation dialog. We consider examples for three different approaches for multiple breakpoint testing with this equation.
Sequential Bai-Perron
The default Method setting (Sequential L+1 breaks vs. L) instructs EViews to perform
sequential testing of l + 1 versus l breaks using the methods outlined by Bai (1997) and Bai
and Perron (1998).
The middle section of the table presents the actual sequential test results:
Sequential F-statistic determined breaks:  3

Break Test     F-statistic    Scaled F-statistic    Critical Value**
0 vs. 1 *      57.90582       57.90582              8.58
1 vs. 2 *      33.92749       33.92749              10.13
2 vs. 3 *      14.72464       14.72464              11.14
3 vs. 4        0.033044       0.033044              11.83
EViews displays the F-statistic, along with the F-statistic scaled by the number of varying
regressors (which is the same in this case, since we only have the single, varying regressor),
and the Bai-Perron critical value for the scaled statistic. The sequential test results indicate
that there are three breakpoints: we reject the nulls of 0, 1, and 2 breakpoints in favor of the
alternatives of 1, 2, and 3 breakpoints, but the test of 4 versus 3 breakpoints does not reject
the null.
The bottom portion of the output shows the estimated breakdates:
Break dates:
          Sequential    Repartition
    1     1980Q4        1967Q1
    2     1972Q4        1972Q4
    3     1967Q1        1980Q4
EViews displays both the breakdates obtained from the original sequential procedure, and
those obtained following the repartition procedure. In this case, the dates do not change.
Again bear in mind that the results follow the EViews convention in defining breakdates to
be the first date of the subsequent regime.
For our second example, we select the Bai-Perron tests of 1 to M globally determined breaks method in the dialog. We again leave the remaining settings at their default values, with the exception of the Allow error distributions to differ across breaks checkbox, which we select. Click on OK to perform the test.
The top portion of the output, which shows the test settings, is almost identical to the output for the previous example. The only difference is a line identifying the test method as
being Bai-Perron tests of 1 to M globally determined breaks.
The middle portion of the output contains the test results:
Sequential F-statistic determined breaks:        5
Significant F-statistic largest breaks:          5
UDmax determined breaks:                         1
WDmax determined breaks:                         1

Breaks   F-statistic   Scaled F-statistic   Weighted F-statistic   Critical Value
  1*      57.90582         57.90582              57.90582               8.58
  2*      43.01429         43.01429              51.11671               7.22
  3*      33.32281         33.32281              47.97143               5.96
  4*      24.77054         24.77054              42.59143               4.99
  5*      18.32587         18.32587              40.21381               3.91

UDMax statistic*    57.90582    UDMax critical value**    8.88
WDMax statistic*    57.90582    WDMax critical value**    9.91
The first four lines summarize the results for different approaches to determining the number of breaks. The Sequential result is obtained by performing tests from 1 to the maximum number until we cannot reject the null; the Significant result chooses the largest
statistically significant breakpoint. In both cases, the multiple breakpoint test indicates that
there are 5 breaks. The UDmax and WDmax results show the number of breakpoints as
determined by application of the unweighted and weighted maximized statistics. The maximized statistics both indicate the presence of a single break.
The remaining lines show the individual test statistics (original, scaled, weighted) along
with the critical values for the scaled statistics. In each case, the statistics far exceed the critical value so that we reject the null of no breaks. Note that the values corresponding to the
UDmax and WDmax statistics are shaded for easy identification.
The last two lines of output show the test results for double maximum statistics. In both
cases, the maximized value clearly exceeds the critical value, so that we reject the null of no
breaks in favor of the alternative of a single break.
The bottom portion of the output shows the global optimizers for the breakpoints for each number of breaks:
Estimated break dates:
1: 1980Q4
2: 1972Q4, 1980Q4
3: 1967Q1, 1972Q4, 1980Q4
4: 1967Q1, 1972Q4, 1977Q1, 1980Q4
5: 1965Q1, 1968Q4, 1972Q4, 1977Q1, 1980Q4
Note that the three-break global optimizers are the same as those obtained in the sequential
testing example (Sequential Bai-Perron on page 205). This equivalence will not hold in
general.
Here we see the dialog when we select Global information criteria in the Method dropdown menu. Note that there are no options for computing the coefficient covariances since
this method does not require their calculation. Click on OK to construct the table of results.
The top and bottom portions of the output are similar to the results seen previously so we
focus only on the test summaries themselves:
Schwarz criterion selected breaks:   2
LWZ criterion selected breaks:       2

Breaks   # of Coefs.   Sum of Sq. Resids.     Log-L       Schwarz* Criterion   LWZ* Criterion
  0           1            1214.922         -273.2375        2.512703            2.550154
  1           3             644.9955        -240.6282        1.969506            2.082148
  2           5             455.9502        -222.7649        1.712641            1.900875
  3           7             445.1819        -221.5340        1.778735            2.042977
  4           9             444.8797        -221.4990        1.868051            2.208735
  5          11             449.6395        -222.0471        1.968688            2.386267
The two summary rows show that both the Schwarz and the LWZ information criteria select
2 breaks. The remainder of the output shows, for each number of breaks, the number of estimated coefficients, the optimized sum-of-squared residuals and log likelihood, and the values of the information criteria. The minimized Schwarz and LWZ values are shaded for easy identification.
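As a point of reference, the reported Schwarz values can be reproduced as log(SSR/T) + k·log(T)/T, where k is the number of estimated coefficients: for the no-break row, log(1214.922/103) + 1·log(103)/103 ≈ 2.513, and for the two-break row, log(455.9502/103) + 5·log(103)/103 ≈ 1.713, matching the table.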
  F = [ (ũ'ũ − u'u)/T₂ ] / [ u'u/(T₁ − k) ],    (24.32)

where ũ'ũ is the residual sum of squares when the equation is fitted to all T = T₁ + T₂ sample observations, u'u is the residual sum of squares when the equation is fitted to the first T₁ observations, and k is the number of estimated coefficients. This F-statistic follows an exact finite sample F-distribution if the errors are independent and identically normally distributed.
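For equations estimated by least squares, the same statistic can be pieced together from two auxiliary regressions; a rough sketch using the consumption function example discussed below, in which the equation names EQ_FULL and EQ_SUB and the hard-coded forecast length are illustrative:

smpl 1947q1 1994q4
equation eq_full.ls log(cs) c log(gdp)    ' fit over all T observations
smpl 1947q1 1972q4
equation eq_sub.ls log(cs) c log(gdp)     ' fit over the first T1 observations
scalar t2 = 88                            ' number of forecast points (1973q1-1994q4)
scalar k = 2                              ' number of estimated coefficients
scalar f_stat = ((eq_full.@ssr - eq_sub.@ssr)/t2)/(eq_sub.@ssr/(eq_sub.@regobs - k))
scalar f_pval = 1 - @cfdist(f_stat, t2, eq_sub.@regobs - k)
show f_stat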
The log likelihood ratio statistic is based on the comparison of the restricted and unrestricted maximum of the (Gaussian) log likelihood function. Both the restricted and unrestricted log likelihood are obtained by estimating the regression using the whole sample. The restricted regression uses the original set of regressors, while the unrestricted regression adds a dummy variable for each forecast point. The LR test statistic has an asymptotic χ² distribution with degrees of freedom equal to the number of forecast points T₂ under the null hypothesis of no structural change.
For example, suppose we estimate a consumption function over 1947q1-1994q4 and specify 1973q1 as the first observation of the forecast period, so that the equation is reestimated through 1972q4 and used to predict the remaining quarters. The test output is:

                          Value         df        Probability
F-statistic              0.708348    (88, 102)      0.9511
Likelihood ratio        91.57087         88         0.3761

F-test summary:
                        Sum of Sq.      df       Mean Squares
Test SSR                 0.061798       88         0.000702
Restricted SSR           0.162920      190         0.000857
Unrestricted SSR         0.101122      102         0.000991
Unrestricted SSR         0.101122      102         0.000991

LR test summary:
                          Value         df
Restricted LogL         406.4749       190
Unrestricted LogL       452.2603       102
Neither of the forecast test statistics rejects the null hypothesis of no structural change in the consumption function before and after 1973q1.
If we test the same hypothesis using the Chow breakpoint test, the result is:
Chow Breakpoint Test: 1973Q1
Null Hypothesis: No breaks at specified breakpoints
Varying regressors: All equation variables
Equation Sample: 1947Q1 1994Q4
F-statistic              38.39198    Prob. F(2,188)          0.0000
Log likelihood ratio     65.75466    Prob. Chi-Square(2)     0.0000
Wald Statistic           76.78396    Prob. Chi-Square(2)     0.0000
Note that the breakpoint test statistics decisively reject the hypothesis from above. This
example illustrates the possibility that the two Chow tests may yield conflicting results.
Ramsey's RESET Test
RESET is a general test for specification error proposed by Ramsey (1969). The classical normal linear regression model is specified as:

  y = Xβ + ε,    (24.33)

where the disturbance vector ε is presumed to follow the multivariate normal distribution N(0, σ²I). Specification error is an omnibus term which covers any departure from the assumptions of the maintained model. Serial correlation, heteroskedasticity, or non-normality of ε all violate the assumption that the disturbances are distributed N(0, σ²I). Tests for these specification errors have been described above. In contrast, RESET is a general test for the following types of specification errors:
Omitted variables; X does not include all relevant variables.
Incorrect functional form; some or all of the variables in y and X should be transformed to logs, powers, reciprocals, or in some other way.
Correlation between X and e , which may be caused, among other things, by measurement error in X , simultaneity, or the presence of lagged y values and serially
correlated disturbances.
Under such specification errors, LS estimators will be biased and inconsistent, and conventional inference procedures will be invalidated. Ramsey (1969) showed that any or all of these specification errors produce a non-zero mean vector for ε. Therefore, the null and alternative hypotheses of the RESET test are:

  H₀: ε ~ N(0, σ²I)
  H₁: ε ~ N(μ, σ²I),   μ ≠ 0    (24.34)
The test is based on an augmented regression:

  y = Xβ + Zγ + ε.    (24.35)

The test of specification error evaluates the restriction γ = 0. The crucial question in constructing the test is to determine what variables should enter the Z matrix. Note that the Z matrix may, for example, be comprised of variables that are not in the original specification, so that the test of γ = 0 is simply the omitted variables test described above.
In testing for incorrect functional form, the nonlinear part of the regression model may be some function of the regressors included in X. For example, if a linear relation,

  y = β₀ + β₁·X + ε,    (24.36)

is specified instead of the true relation:

  y = β₀ + β₁·X + β₂·X² + ε    (24.37)

the augmented model has Z = X² and we are back to the omitted variable case. A more general example might be the specification of an additive relation,

  y = β₀ + β₁·X₁ + β₂·X₂ + ε    (24.38)

instead of a multiplicative relation,

  y = β₀·X₁^β₁·X₂^β₂ + ε.    (24.39)

A Taylor series approximation of the multiplicative relation would involve powers and cross-products of the regressors. Ramsey's suggestion is to approximate such terms using powers of the fitted values of the dependent variable:

  Z = [ ŷ², ŷ³, ŷ⁴, … ]    (24.40)

where ŷ is the vector of fitted values from the regression of y on X. The superscripts indicate the powers to which these predictions are raised. The first power is not included since it is perfectly collinear with the X matrix.
Output from the test reports the test regression and the F-statistic and log likelihood ratio for
testing the hypothesis that the coefficients on the powers of fitted values are all zero. A
study by Ramsey and Alexander (1984) showed that the RESET test could detect specification error in an equation which was known a priori to be misspecified but which nonetheless gave satisfactory values for all the more traditional test criteria: goodness of fit, test for first order serial correlation, high t-ratios.
To apply the test, select View/Stability Diagnostics/Ramsey RESET Test and specify the number of fitted terms to include in the test regression. The fitted terms are the powers of the fitted values from the original regression, starting with the square or second power. For example, if you specify 1, then the test will add ŷ² to the regression, and if you specify 2, then the test will add ŷ² and ŷ³ to the regression, and so on. If you specify a large number of fitted terms, EViews may report a near singular matrix error message since the powers of the fitted values are likely to be highly collinear. The Ramsey RESET test is only applicable to equations estimated using selected methods.
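The same test can also be constructed by hand with an auxiliary regression; a rough sketch under the assumption that the original equation regresses log(cs) on a constant and log(gdp) (all object names are illustrative):

equation eq_base.ls log(cs) c log(gdp)       ' original specification
eq_base.fit yhat                             ' fitted values from the original regression
series yhat2 = yhat^2                        ' second power of the fitted values
series yhat3 = yhat^3                        ' third power
equation eq_aux.ls log(cs) c log(gdp) yhat2 yhat3
eq_aux.wald c(3)=0, c(4)=0                   ' RESET: joint test that the fitted-value terms are zero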
More formally, let X_{t-1} denote the (t-1) × k matrix of the regressors from period 1 to period t-1, and y_{t-1} the corresponding vector of observations on the dependent variable. These data up to period t-1 give an estimated coefficient vector, denoted by b_{t-1}. This coefficient vector gives you a forecast of the dependent variable in period t. The forecast is x_t'b_{t-1}, where x_t' is the row vector of observations on the regressors in period t. The forecast error is y_t − x_t'b_{t-1}, and the forecast variance is given by:

  σ²·( 1 + x_t'(X_{t-1}'X_{t-1})⁻¹x_t ).    (24.41)

The recursive residual w_t is defined as the standardized one-step-ahead forecast error:

  w_t = ( y_t − x_t'b_{t-1} ) / ( 1 + x_t'(X_{t-1}'X_{t-1})⁻¹x_t )^{1/2}.    (24.42)
Recursive Residuals
This option shows a plot of the recursive residuals about the zero line. Plus and minus two
standard errors are also shown at each point. Residuals outside the standard error bands
suggest instability in the parameters of the equation.
CUSUM Test
The CUSUM test (Brown, Durbin, and Evans, 1975) is based on the cumulative sum of the
recursive residuals. This option plots the cumulative sum together with the 5% critical lines.
The test finds parameter instability if the cumulative sum goes outside the area between the
two critical lines.
The CUSUM test is based on the statistic:
  W_t = Σ_{r=k+1}^{t} w_r / s,    (24.43)

for t = k+1, …, T, where w_r is the recursive residual defined above, and s is the standard deviation of the recursive residuals w_t. If the β vector remains constant from period to period, E(W_t) = 0, but if β changes, W_t will tend to diverge from the zero mean value line. The significance of any departure from the zero line is assessed by reference to a pair of 5% significance lines, the distance between which increases with t. The 5% significance lines are found by connecting the points:

  [ k, ±0.948·(T − k)^{1/2} ]   and   [ T, ±3·0.948·(T − k)^{1/2} ].    (24.44)
[Figure: CUSUM of recursive residuals with 5% significance lines]
The test clearly indicates instability in the equation during the sample period.
CUSUM of Squares Test
The CUSUM of squares test (Brown, Durbin, and Evans, 1975) is based on the test statistic:

  S_t = ( Σ_{r=k+1}^{t} w_r² ) / ( Σ_{r=k+1}^{T} w_r² ).    (24.45)

The expected value of S_t under the hypothesis of parameter constancy is:

  E(S_t) = (t − k)/(T − k),    (24.46)

which goes from zero at t = k to unity at t = T. The significance of the departure of S_t from its expected value is assessed by reference to a pair of parallel lines around the expected value. See Brown, Durbin, and Evans (1975) or Johnston and DiNardo (1997, Table D.8) for a table of significance lines for the CUSUM of squares test.
D.8) for a table of significance lines for the CUSUM of squares test.
The CUSUM of squares test provides a plot of S t against t and the pair of 5 percent critical
lines. As with the CUSUM test, movement outside the critical lines is suggestive of parameter or variance instability.
The cumulative sum of squares is generally
within the 5% significance lines, suggesting that the residual variance is somewhat
stable.
[Figure: CUSUM of squares with 5% significance lines]
[Figure: One-Step Probability and Recursive Residuals]
For the test equation, there is evidence of instability early in the sample period.
Note that you can use the recursive residuals to reconstruct the CUSUM and CUSUM of
squares series.
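For example, if the recursive residuals have been saved in a series, the statistics in Equations (24.43) and (24.45) follow from simple series expressions; a minimal sketch in which the series name W_REC is illustrative and the workfile sample is assumed to be set to the observations for which W_REC is available:

scalar s = @stdev(w_rec)                          ' standard deviation of the recursive residuals
series cusum_w = @cumsum(w_rec)/s                 ' W_t from Equation (24.43)
series cusum_sq = @cumsum(w_rec^2)/@sum(w_rec^2)  ' S_t from Equation (24.45)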
Leverage Plots
Leverage plots are the multivariate equivalent of a simple residual plot in a univariate
regression. Like influence statistics, leverage plots can be used as a method for identifying
influential observations or outliers, as well as a method of graphically diagnosing any potential failures of the underlying assumptions of a regression model.
Leverage plots are calculated by, in essence, turning a multivariate regression into a collection of univariate regressions. Following the notation given in Belsley, Kuh, and Welsch (2004, Section 2.1), the leverage plot for the k-th coefficient is computed as follows:
Let X_k be the k-th column of the data matrix (the k-th variable in a linear equation, or the k-th gradient in a non-linear equation), and X_[k] be the remaining columns. Let u_k be the residuals
from a regression of the dependent variable, y on X [ k ] , and let v k be the residuals from a
regression of X k on X [ k ] . The leverage plot for the k-th coefficient is then a scatter plot of
u k on v k .
It can easily be shown that in an auxiliary regression of u k on a constant and v k , the coefficient on v k will be identical to the k-th coefficient from the original regression. Thus the
original regression can be represented as a series of these univariate auxiliary regressions.
In a univariate regression, a plot of the residuals against the explanatory variable is often
used to check for outliers (any observation whose residual is far from the regression line), or
to check whether the model is possibly mis-specified (for example to check for linearity).
Leverage plots can be used in the same way in a multivariate regression, since each coefficient has been modelled in a univariate auxiliary regression.
To display leverage plots in EViews select View/
Stability Diagnostics/Leverage Plots.... EViews
will then display a dialog which lets you choose
some simple options for the leverage plots.
The Variables to plot box lets you enter which
variables, or coefficients in a non-linear equation,
you wish to plot. By default this box will be filled
in with the original regressors from your equation.
Note that EViews will let you enter variables that
were not in the original equation, in which case
the plot will simply show the original equation
residuals plotted against the residuals from a
regression of the new variable against the original
regressors.
To add a regression line to each scatter plot, select the Add fit lines checkbox. If you do not
wish to create plots of the partialed variables, but would rather plot the original regression
residuals against the raw regressors, unselect the Partial out variables checkbox.
Finally, if you wish to save the partial residuals for each variable into a series in the workfile, you may enter a naming suffix in the Enter a naming suffix to save the variables as a
series box. EViews will then append the name of each variable to the suffix you entered as
the name of the created series.
We illustrate using an example taken from Wooldridge (2000, Example 9.8) for the regression of R&D expenditures (RDINTENS) on sales (SALES), profits (PROFITMARG), and a
constant (using the workfile Rdchem.WF1). The leverage plots for equation E1 are displayed here:
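The partialed series behind any one of these plots can also be produced by hand; a rough sketch for the SALES coefficient, with illustrative object names:

equation eq_u.ls rdintens c profitmarg       ' residuals of RDINTENS on the remaining regressors
eq_u.makeresid u_sales
equation eq_v.ls sales c profitmarg          ' residuals of SALES on the remaining regressors
eq_v.makeresid v_sales
group g_lev v_sales u_sales
g_lev.scat                                   ' scatter of U_SALES against V_SALES: the leverage plot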
Influence Statistics
Influence statistics are a method of discovering influential observations, or outliers. They are a measure of the difference that a single observation makes to the regression results, or how different an observation is from the other observations in an equation's sample. EViews provides a selection of six different influence statistics: RStudent, DRResid, DFFITS, CovRatio, HatMatrix, and DFBETAS.
RStudent is the studentized residual; the residual of the equation at that observation divided by an estimate of its standard deviation:

  e*_i = e_i / ( s(i)·(1 − h_i)^{1/2} )    (24.47)

where e_i is the original residual for that observation, s(i) is the variance of the residuals that would have resulted had observation i not been included in the estimation, and h_i is the i-th diagonal element of the Hat Matrix, i.e. x_i(X'X)⁻¹x_i'. The RStudent is also numerically identical to the t-statistic that would result from putting a dummy variable in the original equation which is equal to 1 on that particular observation and zero elsewhere. Thus it can be interpreted as a test for the significance of that observation.
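The dummy-variable interpretation gives a quick way to obtain RStudent for a single observation; a rough sketch in which the equation, series, and observation date are purely illustrative:

smpl @all
series d_obs = 0
smpl 1973q1 1973q1
d_obs = 1                                    ' dummy equal to one only at the observation of interest
smpl @all
equation eq_rstud.ls log(cs) c log(gdp) d_obs
' the t-statistic on D_OBS in EQ_RSTUD equals the RStudent statistic for that observation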
DFFITS is the scaled difference in fitted values for that observation between the original equation and an equation estimated without that observation, where the scaling is done by dividing the difference by an estimate of the standard deviation of the fit:

  DFFITS_i = ( h_i/(1 − h_i) )^{1/2} · e_i / ( s(i)·(1 − h_i)^{1/2} )    (24.48)
DRResid is the dropped residual, an estimate of the residual for that observation had the equation been run without that observation's data.
COVRATIO is the ratio of the determinant of the covariance matrix of the coefficients
from the original equation to the determinant of the covariance matrix from an equation without that observation.
DFBETAS are the scaled difference in the estimated coefficients between the original equation and an equation estimated without that observation:

  DFBETAS_{i,j} = ( b_j − b_j(i) ) / ( s(i)·√var(b_j) )    (24.49)

where b_j is the original coefficient estimate and b_j(i) is the estimate from an equation estimated without observation i.
Applications
For illustrative purposes, we provide a demonstration of how to carry out some other specification tests in EViews. For brevity, the discussion is based on commands, but most of these
procedures can also be carried out using the menu system.
The coefficients estimated over two subsamples, b₁ and b₂, with coefficient covariance matrices V₁ and V₂, may be compared using the Wald statistic:

  W = (b₁ − b₂)'(V₁ + V₂)⁻¹(b₁ − b₂),    (24.50)

which has an asymptotic χ² distribution with degrees of freedom equal to the number of estimated parameters in the b vector.
To carry out this test in EViews, we estimate the model in each subsample and save the estimated coefficients and their covariance matrix. For example, consider the quarterly workfile of macroeconomic data in the workfile Coef_test2.WF1 (containing data for 1947q1-1994q4) and suppose we wish to test whether there was a structural change in the consumption function in 1973q1. First, estimate the model in the first sample and save the results by the commands:
coef(2) b1
smpl 1947q1 1972q4
equation eq_1.ls log(cs)=b1(1)+b1(2)*log(gdp)
sym v1=eq_1.@cov
The first line declares the coefficient vector, B1, into which we will place the coefficient estimates in the first sample. Note that the equation specification in the third line explicitly
refers to elements of this coefficient vector. The last line saves the coefficient covariance
matrix as a symmetric matrix named V1. Similarly, estimate the model in the second sample
and save the results by the commands:
coef(2) b2
smpl 1973q1 1994q4
equation eq_2.ls log(cs)=b2(1)+b2(2)*log(gdp)
sym v2=eq_2.@cov
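The statistic in Equation (24.50) can then be formed directly with matrix operations; a minimal sketch (the matrix name WALD matches the discussion below, and the expression assumes the objects B1, B2, V1, and V2 created above):

matrix wald = @transpose(b1-b2)*@inverse(v1+v2)*(b1-b2)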
The Wald statistic is saved in the 1 × 1 matrix named WALD. To see the value, either double click on WALD or type show wald. You can compare this value with the critical values from the χ² distribution with 2 degrees of freedom. Alternatively, you can compute the p-value in EViews using the command:

scalar wald_p=1-@cchisq(wald(1,1),2)

The p-value is saved as a scalar named WALD_P. To see the p-value, double click on WALD_P or type show wald_p. The WALD statistic value of 53.1243 has an associated p-value of 2.9e-12, so that we decisively reject the null hypothesis of no structural change.
A second application is the Hausman (1978) test for the endogeneity of a regressor. We use the simple form of the test proposed by Davidson and MacKinnon (1989, 1993), which carries out the test by running an auxiliary regression.
The following equation in the Basics.WF1 workfile was estimated by OLS:
Dependent Variable: LOG(M1)
Method: Least Squares
Date: 08/10/09  Time: 16:08
Sample (adjusted): 1959M02 1995M04
Included observations: 435 after adjustments

Variable        Coefficient   Std. Error   t-Statistic   Prob.

C                -0.022699    0.004443     -5.108528     0.0000
LOG(IP)           0.011630    0.002585      4.499708     0.0000
DLOG(PPI)        -0.024886    0.042754     -0.582071     0.5608
TB3              -0.000366    9.91E-05     -3.692675     0.0003
LOG(M1(-1))       0.996578    0.001210    823.4440       0.0000

R-squared             0.999953    Mean dependent var        5.844581
Adjusted R-squared    0.999953    S.D. dependent var        0.670596
S.E. of regression    0.004601    Akaike info criterion    -7.913714
Sum squared resid     0.009102    Schwarz criterion        -7.866871
Log likelihood        1726.233    Hannan-Quinn criter.     -7.895226
F-statistic           2304897.    Durbin-Watson stat        1.265920
Prob(F-statistic)     0.000000
Suppose we are concerned that industrial production (IP) is endogenously determined with money (M1) through the money supply function. If endogeneity is present, then OLS estimates will be biased and inconsistent. To test this hypothesis, we need to find a set of instrumental variables that are correlated with the suspect variable IP but not with the error term of the money demand equation. The choice of the appropriate instrument is a crucial step. Here, we take the unemployment rate (URATE) and Moody's AAA corporate bond yield (AAA) as instruments.
To carry out the Hausman test by artificial regression, we run two OLS regressions. In the
first regression, we regress the suspect variable (log) IP on all exogenous variables and
instruments and retrieve the residuals:
equation eq_test.ls log(ip) c dlog(ppi) tb3 log(m1(-1)) urate aaa
eq_test.makeresid res_ip
Then in the second regression, we re-estimate the money demand function including the
residuals from the first regression as additional regressors. The result is:
Variable        Coefficient   Std. Error   t-Statistic   Prob.

C                -0.007145    0.007473     -0.956158     0.3395
LOG(IP)           0.001560    0.004672      0.333832     0.7387
DLOG(PPI)         0.020233    0.045935      0.440465     0.6598
TB3              -0.000185    0.000121     -1.527775     0.1273
LOG(M1(-1))       1.001093    0.002123    471.4894       0.0000
RES_IP            0.014428    0.005593      2.579826     0.0102

R-squared             0.999954    Mean dependent var        5.844581
Adjusted R-squared    0.999954    S.D. dependent var        0.670596
S.E. of regression    0.004571    Akaike info criterion    -7.924511
Sum squared resid     0.008963    Schwarz criterion        -7.868300
Log likelihood        1729.581    Hannan-Quinn criter.     -7.902326
F-statistic           1868171.    Durbin-Watson stat        1.307838
Prob(F-statistic)     0.000000
If the OLS estimates are consistent, then the coefficient on the first stage residuals should
not be significantly different from zero. In this example, the test rejects the hypothesis of
consistent OLS estimates at conventional levels.
Note that an alternative form of a regressor endogeneity test may be computed using the
Regressor Endogeneity Test view of an equation estimated by TSLS or GMM (see Regressor
Endogeneity Test on page 81).
Non-nested Tests
Most of the tests discussed in this chapter are nested tests in which the null hypothesis is
obtained as a special case of the alternative hypothesis. Now consider the problem of choosing between the following two specifications of a consumption function:
  H₁:  CS_t = α₁ + α₂·GDP_t + α₃·GDP_{t-1} + ε_t
  H₂:  CS_t = β₁ + β₂·GDP_t + β₃·CS_{t-1} + ε_t    (24.51)
for the variables in the workfile Coef_test2.WF1. These are examples of non-nested models since neither model may be expressed as a restricted version of the other.
The J-test proposed by Davidson and MacKinnon (1993) provides one method of choosing
between two non-nested models. The idea is that if one model is the correct model, then the
fitted values from the other model should not have explanatory power when estimating that
model. For example, to test model H 1 against model H 2 , we first estimate model H 2 and
retrieve the fitted values:
equation eq_cs2.ls cs c gdp cs(-1)
eq_cs2.fit(f=na) cs2
The second line saves the fitted values as a series named CS2. Then estimate model H 1
including the fitted values from model H 2 . The result is:
Dependent Variable: CS
Method: Least Squares
Date: 08/10/09  Time: 16:17
Sample (adjusted): 1947Q2 1994Q4
Included observations: 191 after adjustments

Variable        Coefficient   Std. Error   t-Statistic   Prob.

C                 7.313232    4.391305      1.665389     0.0975
GDP               0.278749    0.029278      9.520694     0.0000
GDP(-1)          -0.314540    0.029287    -10.73978      0.0000
CS2               1.048470    0.019684     53.26506      0.0000

R-squared             0.999833    Mean dependent var        1953.966
Adjusted R-squared    0.999830    S.D. dependent var         848.4387
S.E. of regression    11.05357    Akaike info criterion      7.664104
Sum squared resid     22847.93    Schwarz criterion          7.732215
Log likelihood       -727.9220    Hannan-Quinn criter.       7.691692
F-statistic           373074.4    Durbin-Watson stat         2.253186
Prob(F-statistic)     0.000000
The fitted values from model H₂ enter significantly in model H₁ and we reject model H₁.
We may also test model H₂ against model H₁. First, estimate model H₁ and retrieve the fitted values:

equation eq_cs1a.ls cs c gdp gdp(-1)
eq_cs1a.fit(f=na) cs1f

Then estimate model H₂ including the fitted values from model H₁. The results of this reverse test regression are given by:
Variable        Coefficient   Std. Error   t-Statistic   Prob.

C                -1413.901    130.6449    -10.82247      0.0000
GDP               5.131858    0.472770     10.85486      0.0000
CS(-1)            0.977604    0.018325     53.34810      0.0000
CS1F             -7.240322    0.673506    -10.75020      0.0000

R-squared             0.999836    Mean dependent var        1962.779
Adjusted R-squared    0.999833    S.D. dependent var         854.9810
S.E. of regression    11.04237    Akaike info criterion      7.661969
Sum squared resid     22923.56    Schwarz criterion          7.729833
Log likelihood       -731.5490    Hannan-Quinn criter.       7.689455
F-statistic           381618.5    Durbin-Watson stat         2.260786
Prob(F-statistic)     0.000000
The fitted values are again statistically significant and we reject model H₂.
In this example, we reject both specifications against the alternatives, suggesting that another model for the data is needed. It is also possible that we fail to reject both models, in which case the data do not provide enough information to discriminate between the two models.
References
Andrews, Donald W. K. (1993). Tests for Parameter Instability and Structural Change With Unknown Change Point, Econometrica, 61(4), 821-856.
Andrews, Donald W. K. and W. Ploberger (1994). Optimal Tests When a Nuisance Parameter is Present Only Under the Alternative, Econometrica, 62(6), 1383-1414.
Bai, Jushan (1997). Estimating Multiple Breaks One at a Time, Econometric Theory, 13, 315-352.
Bai, Jushan and Pierre Perron (1998). Estimating and Testing Linear Models with Multiple Structural Changes, Econometrica, 66, 47-78.
Bai, Jushan and Pierre Perron (2003a). Computation and Analysis of Multiple Structural Change Models, Journal of Applied Econometrics, 18, 1-22.
Bai, Jushan and Pierre Perron (2003b). Critical Values for Multiple Structural Change Tests, Econometrics Journal, 6, 72-78.
Breusch, T. S., and A. R. Pagan (1979). A Simple Test for Heteroskedasticity and Random Coefficient Variation, Econometrica, 48, 1287-1294.
Brown, R. L., J. Durbin, and J. M. Evans (1975). Techniques for Testing the Constancy of Regression Relationships Over Time, Journal of the Royal Statistical Society, Series B, 37, 149-192.
Davidson, Russell and James G. MacKinnon (1989). Testing for Consistency using Artificial Regressions, Econometric Theory, 5, 363-384.
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford: Oxford University Press.
Engle, Robert F. (1982). Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of U.K. Inflation, Econometrica, 50, 987-1008.
Garcia, Rene and Pierre Perron (1996). An Analysis of the Real Interest Rate Under Regime Shifts, The Review of Economics and Statistics, 78, 111-125.
Glejser, H. (1969). A New Test For Heteroscedasticity, Journal of the American Statistical Association, 64, 316-323.
Godfrey, L. G. (1978). Testing for Multiplicative Heteroscedasticity, Journal of Econometrics, 8, 227-236.
Godfrey, L. G. (1988). Specification Tests in Econometrics, Cambridge: Cambridge University Press.
Hansen, B. E. (1997). Approximate Asymptotic P Values for Structural-Change Tests, Journal of Business and Economic Statistics, 15(1), 60-67.
Harvey, Andrew C. (1976). Estimating Regression Models with Multiplicative Heteroscedasticity, Econometrica, 44, 461-465.
Hausman, Jerry A. (1978). Specification Tests in Econometrics, Econometrica, 46, 1251-1272.
Johnston, Jack and John Enrico DiNardo (1997). Econometric Methods, 4th Edition, New York: McGraw-Hill.
Koenker, R. (1981). A Note on Studentizing a Test for Heteroskedasticity, Journal of Econometrics, 17, 107-112.
Liu, Jian, Wu, Shiying, and James V. Zidek (1997). On Segmented Multivariate Regression, Statistica Sinica, 7, 497-525.
Longley, J. W. (1967). An Appraisal of Least Squares Programs for the Electronic Computer from the Point of View of the User, Journal of the American Statistical Association, 62(319), 819-841.
Perron, Pierre (2006). Dealing with Structural Breaks, in Palgrave Handbook of Econometrics, Vol. 1: Econometric Theory, T. C. Mills and K. Patterson (eds.). New York: Palgrave Macmillan.
Ramsey, J. B. (1969). Tests for Specification Errors in Classical Linear Least Squares Regression Analysis, Journal of the Royal Statistical Society, Series B, 31, 350-371.
Ramsey, J. B. and A. Alexander (1984). The Econometric Approach to Business-Cycle Analysis Reconsidered, Journal of Macroeconomics, 6, 347-356.
White, Halbert (1980). A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity, Econometrica, 48, 817-838.
Wooldridge, Jeffrey M. (1990). A Note on the Lagrange Multiplier and F-statistics for Two Stage Least Squares Regression, Economics Letters, 34, 151-155.
Wooldridge, Jeffrey M. (2000). Introductory Econometrics: A Modern Approach. Cincinnati, OH: South-Western College Publishing.
Yao, Yi-Ching (1988). Estimating the Number of Change-points via Schwarz Criterion, Statistics & Probability Letters, 6, 181-189.
Chapter 36. Univariate Time Series Analysis, on page 527 describes tools for univariate time series analysis, including unit root tests in both conventional and panel data
settings, variance ratio tests, and the BDS test for independence.
The standard GARCH(1, 1) specification is:

  Y_t = X_t'θ + ε_t    (25.1)

  σ²_t = ω + α·ε²_{t-1} + β·σ²_{t-1}    (25.2)

in which the mean equation given in (25.1) is written as a function of exogenous variables with an error term. Since σ²_t is the one-period ahead forecast variance based on past information, it is called the conditional variance. The conditional variance equation specified in (25.2) is a function of three terms:

A constant term: ω.
News about volatility from the previous period, measured as the lag of the squared residual from the mean equation: ε²_{t-1} (the ARCH term).
Last period's forecast variance: σ²_{t-1} (the GARCH term).

If we recursively substitute for the lagged variance on the right-hand side of Equation (25.2), we can express the conditional variance as a weighted average of all of the lagged squared residuals:

  σ²_t = ω/(1 − β) + α·Σ_{j=1}^{∞} β^{j-1}·ε²_{t-j}.    (25.3)

We see that the GARCH(1,1) variance specification is analogous to the sample variance, but that it down-weights more distant lagged squared errors.
The error in the squared returns is given by υ_t = ε²_t − σ²_t. Substituting for the variances in the variance equation and rearranging terms we can write our model in terms of the errors:

  ε²_t = ω + (α + β)·ε²_{t-1} + υ_t − β·υ_{t-1}.    (25.4)

Thus, the squared errors follow a heteroskedastic ARMA(1, 1) process. The autoregressive root which governs the persistence of volatility shocks is the sum of α plus β. In many applied settings, this root is very close to unity so that shocks die out rather slowly.
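To make the recursion in Equation (25.2) concrete, the conditional variance path implied by a given set of parameter values can be traced out with a recursive series assignment; a minimal sketch in which the residual series E and the parameter values are purely illustrative:

scalar omega = 0.000001                      ' illustrative GARCH(1,1) parameter values
scalar alpha = 0.05
scalar beta = 0.90
smpl @all
series sig2 = @var(e)                        ' start the recursion at the sample variance of E
smpl @first+1 @last
sig2 = omega + alpha*e(-1)^2 + beta*sig2(-1) ' evaluate (25.2) observation by observation
smpl @all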
Higher order GARCH models may be specified by allowing additional lags of both terms:

  σ²_t = ω + Σ_{j=1}^{q} β_j·σ²_{t-j} + Σ_{i=1}^{p} α_i·ε²_{t-i}    (25.5)
The ARCH-M Model
Introducing the conditional variance into the mean equation gives the ARCH-in-Mean (ARCH-M) model (Engle, Lilien and Robins, 1987):

  Y_t = X_t'θ + λ·σ²_t + ε_t.    (25.6)

The ARCH-M model is often used in financial applications where the expected return on an asset is related to the expected asset risk. The estimated coefficient on the expected risk is a measure of the risk-return tradeoff.

Two variants of this ARCH-M specification use the conditional standard deviation or the log of the conditional variance in place of the variance in Equation (25.6):

  Y_t = X_t'θ + λ·σ_t + ε_t    (25.7)

  Y_t = X_t'θ + λ·log(σ²_t) + ε_t    (25.8)
Exogenous or predetermined regressors, z, may also be included in the variance equation:

  σ²_t = ω + Σ_{j=1}^{q} β_j·σ²_{t-j} + Σ_{i=1}^{p} α_i·ε²_{t-i} + z_t'π.    (25.9)
Note that the forecasted variances from this model are not guaranteed to be positive. You
may wish to introduce regressors in a form where they are always positive to minimize the
possibility that a single, large negative value generates a negative forecasted value.
Distributional Assumptions
To complete the basic ARCH specification, we require an assumption about the conditional
distribution of the error term e . There are three assumptions commonly employed when
working with ARCH models: normal (Gaussian) distribution, Students t-distribution, and
the Generalized Error Distribution (GED). Given a distributional assumption, ARCH models
are typically estimated by the method of maximum likelihood.
For example, for the GARCH(1, 1) model with conditionally normal errors, the contribution
to the log-likelihood for observation t is:
  l_t = −(1/2)·log(2π) − (1/2)·log(σ²_t) − (1/2)·(y_t − X_t'θ)²/σ²_t,    (25.10)
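Given residual and conditional variance series, the sum of these contributions is straightforward to evaluate directly; a minimal sketch in which the series names E and SIG2 are illustrative:

series llt = -0.5*log(2*@acos(-1)) - 0.5*log(sig2) - 0.5*e^2/sig2   ' Equation (25.10); 2*@acos(-1) is pi
scalar loglik = @sum(llt)                                           ' log likelihood = sum of contributions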
For the Student's t-distribution, the log-likelihood contributions are of the form:

  l_t = −(1/2)·log[ π(ν − 2)·Γ(ν/2)² / Γ((ν + 1)/2)² ] − (1/2)·log(σ²_t) − ((ν + 1)/2)·log[ 1 + (y_t − X_t'θ)²/(σ²_t·(ν − 2)) ]    (25.11)

where the degree of freedom parameter ν > 2 controls the tail behavior. The t-distribution approaches the normal as ν → ∞.
For the GED, we have:

  l_t = −(1/2)·log[ Γ(1/r)³ / ( Γ(3/r)·(r/2)² ) ] − (1/2)·log(σ²_t) − [ Γ(3/r)·(y_t − X_t'θ)² / ( σ²_t·Γ(1/r) ) ]^{r/2}    (25.12)

where the tail parameter r > 0. The GED is a normal distribution if r = 2, and fat-tailed if r < 2.
By default, ARCH models in EViews are estimated by the method of maximum likelihood
under the assumption that the errors are conditionally normally distributed.
Class of models
To estimate one of the standard GARCH models as described above, select the GARCH/
TARCH entry in the Model dropdown menu. The other entries (EGARCH, PARCH, and
Component ARCH(1, 1)) correspond to more complicated variants of the GARCH specification. We discuss each of these models in Additional ARCH Models on page 244.
In the Order section, you should choose the number of ARCH and GARCH terms. The
default, which includes one ARCH and one GARCH term is by far the most popular specification.
If you wish to estimate an asymmetric model, you should enter the number of asymmetry
terms in the Threshold order edit field. The default settings estimate a symmetric model
with threshold order 0.
Variance regressors
In the Variance regressors edit box, you may optionally list variables you wish to include in
the variance specification. Note that, with the exception of IGARCH models, EViews will
always include a constant as a variance regressor so that you do not need to add C to this
list.
The distinction between the permanent and transitory regressors is discussed in The Component GARCH (CGARCH) Model on page 247.
Restrictions
If you choose the GARCH/TARCH model, you may restrict the parameters of the GARCH
model in two ways. One option is to set the Restrictions dropdown to IGARCH, which
restricts the persistent parameters to sum up to one. Another is Variance Target, which
restricts the constant term to a function of the GARCH parameters and the unconditional
variance:
  ω = σ̂²·( 1 − Σ_{j=1}^{q} β_j − Σ_{i=1}^{p} α_i )    (25.13)
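As a worked illustration, the constant implied by variance targeting can be computed from the unconditional variance of the mean-equation residuals; a minimal sketch with purely illustrative parameter values and residual series E:

scalar alpha = 0.05
scalar beta = 0.90
scalar uncond = @sumsq(e)/@obs(e)            ' unconditional variance estimate
scalar omega = uncond*(1 - beta - alpha)     ' constant implied by Equation (25.13)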
Estimation Options
EViews provides you with access to a number of optional estimation settings. Simply click
on the Options tab and fill out the dialog as desired.
Backcasting
By default, both the innovations used in initializing MA estimation and the initial variance required for the GARCH terms are computed using backcasting methods. Details on the MA backcasting procedure are provided in Initializing MA Innovations on page 132.

When computing backcast initial variances for GARCH, EViews first uses the coefficient values to compute the residuals of the mean equation, and then computes an exponential smoothing estimator of the initial values,
  σ²₀ = ε̂²₀ = λ^T·σ̂² + (1 − λ)·Σ_{j=0}^{T-1} λ^{T-j-1}·ε̂²_{T-j},    (25.14)

where ε̂ are the residuals from the mean equation, σ̂² is the unconditional variance estimate:

  σ̂² = Σ_{t=1}^{T} ε̂²_t / T    (25.15)

and the smoothing parameter λ = 0.7. However, you have the option to choose from a number of weights from 0.1 to 1, in increments of 0.1, by using the Presample variance drop-down list. Notice that if the parameter is set to 1, then the initial value is simply the unconditional variance, i.e., backcasting is not performed:

  σ²₀ = σ̂².    (25.16)
Using the unconditional variance provides another common way to set the presample variance.
Our experience has been that GARCH models initialized using backcast exponential smoothing often outperform models initialized using the unconditional variance.
Derivative Methods
EViews uses both numeric and analytic derivatives in estimating ARCH models. Fully analytic derivatives are available for GARCH(p, q) models with simple mean specifications
assuming normal or unrestricted t-distribution errors.
Analytic derivatives are not available for models with ARCH in mean specifications, complex
variance equation specifications (e.g. threshold terms, exogenous variance regressors, or
integrated or target variance restrictions), models with certain error assumptions (e.g. errors
following the GED or fixed parameter t-distributions), and all non-GARCH(p, q) models (e.g.
EGARCH, PARCH, component GARCH).
Some specifications offer analytic derivatives for a subset of coefficients. For example, simple GARCH models with non-constant regressors allow for analytic derivatives for the variance coefficients but use numeric derivatives for any non-constant regressor coefficients.
You may control the method used in computing numeric derivatives to favor speed (fewer
function evaluations) or to favor accuracy (more function evaluations).
Starting Values
As with other iterative procedures, starting coefficient values are required. EViews will supply its own starting values for ARCH procedures using OLS regression for the mean equation. Using the Options dialog, you can also set starting values to various fractions of the
OLS starting values, or you can specify the values yourself by choosing the User Specified
option, and placing the desired coefficients in the default coefficient vector.
GARCH(1,1) examples
To estimate a standard GARCH(1,1) model with no regressors in the mean and variance
equations:
  R_t = c + ε_t    (25.17)
  σ²_t = ω + α·ε²_{t-1} + β·σ²_{t-1}
you should enter the various parts of your specification:
Fill in the Mean Equation Specification edit box as
r c
Enter 1 for the number of ARCH terms, and 1 for the number of GARCH terms, and
select GARCH/TARCH.
Select None for the ARCH-M term.
Leave blank the Variance Regressors edit box.
To estimate the ARCH(4)-M model:
  R_t = γ₀ + γ₁·DUM_t + γ₂·σ_t + ε_t
  σ²_t = ω + α₁·ε²_{t-1} + α₂·ε²_{t-2} + α₃·ε²_{t-3} + α₄·ε²_{t-4} + γ₃·DUM_t    (25.18)
The output from ARCH estimation is presented in a standard equation output view. For example, the results from a GARCH(1, 1) model fit to a daily stock return series are given by:

                      Coefficient   Std. Error   z-Statistic   Prob.

C                      0.000597     0.000149      4.013882     0.0001

                           Variance Equation

C                      5.83E-07     1.37E-07      4.261215     0.0000
RESID(-1)^2            0.053317     0.005152     10.34861      0.0000
GARCH(-1)              0.939955     0.006125    153.4702       0.0000

R-squared             -0.000014    Mean dependent var        0.000564
Adjusted R-squared    -0.000014    S.D. dependent var        0.008888
S.E. of regression     0.008889    Akaike info criterion    -6.807476
Sum squared resid      0.199649    Schwarz criterion        -6.798243
Log likelihood         8608.650    Hannan-Quinn criter.     -6.804126
Durbin-Watson stat     1.964029
By default, the estimation output header describes the estimation sample, and the methods used for computing the coefficient standard errors, the initial variance terms, and the variance equation. Also noted is the method for computing the presample variance, in this case backcasting with smoothing parameter λ = 0.7.
The main output from ARCH estimation is divided into two sectionsthe upper part provides the standard output for the mean equation, while the lower part, labeled Variance
Equation, contains the coefficients, standard errors, z-statistics and p-values for the coefficients of the variance equation.
The ARCH parameters correspond to α and the GARCH parameters to β in Equation (25.2) on page 231. The bottom panel of the output presents the standard set of regression statistics using the residuals from the mean equation. Note that measures such as R² may not be meaningful if there are no regressors in the mean equation. Here, for example, the R² is negative.
In this example, the sum of the ARCH and GARCH coefficients (α + β) is very close to one, indicating that volatility shocks are quite persistent. This result is often observed in high frequency financial data.
Residual Diagnostics/Correlogram Squared Residuals displays the correlogram of the squared standardized residuals, which can be used to check the variance specification; if the variance equation is correctly specified, all Q-statistics should not be significant. See Correlogram on page 393 of
Users Guide I for an explanation of correlograms and Q-statistics. See also Residual
Diagnostics/ARCH LM Test.
Residual Diagnostics/HistogramNormality Test displays descriptive statistics and a
histogram of the standardized residuals. You can use the Jarque-Bera statistic to test
the null of whether the standardized residuals are normally distributed. If the standardized residuals are normally distributed, the Jarque-Bera statistic should not be
significant. See Descriptive Statistics & Tests, beginning on page 374 of Users Guide
I for an explanation of the Jarque-Bera test. For example, the histogram of the standardized residuals from the GARCH(1,1) model fit to the daily stock return looks as
follows:
The standardized residuals are leptokurtic and the Jarque-Bera statistic strongly
rejects the hypothesis of normal distribution.
Residual Diagnostics/ARCH LM Test carries out Lagrange multiplier tests to test
whether the standardized residuals exhibit additional ARCH. If the variance equation
is correctly specified, there should be no ARCH left in the standardized residuals. See
ARCH LM Test on page 186 for a discussion of testing. See also Residual Diagnostics/Correlogram Squared Residuals.
Forecast uses the estimated ARCH model to compute static and dynamic forecasts of
the mean, its forecast standard error, and the conditional variance. To save any of
these forecasts in your workfile, type a name in the corresponding dialog box. If you
choose the Forecast Graph option, EViews displays the graphs of the forecasts and
two standard deviation bands for the mean forecast.
Note that the squared residuals ε²_t may not be available for presample values or when computing dynamic forecasts. In such cases, EViews will replace the term by its expected value. In the simple GARCH(p, q) case, for example, the expected value of the squared residual is the fitted variance, e.g., E(ε²_t) = σ²_t. In other models, the expected value of the residual term will differ depending on the distribution and, in some cases, the estimated parameters of the model.
For example, to construct dynamic forecasts of SPX using the previously estimated
model, click on Forecast and fill in the Forecast dialog, setting the sample to
2001m01 @last so the dynamic forecast begins immediately following the estimation period. Unselect the Forecast Evaluation checkbox and click on OK to display
the forecast results.
It will be useful to display these results in two columns. Right-mouse click then select
Position and align graphs..., enter 2 for the number of Columns, and select Automatic spacing. Click on OK to display the rearranged graph:
The first graph is the forecast of SPX (SPXF) from the mean equation with two standard deviation bands. The second graph is the forecast of the conditional variance σ²_t.
Make Residual Series saves the residuals as named series in your workfile. You have the option to save the ordinary residuals, ε_t, or the standardized residuals, ε_t/σ_t. The residuals will be named RESID1, RESID2, and so on; you can rename the series with the name button in the series window.
Make GARCH Variance Series... saves the conditional variances σ²_t as named series in your workfile. You should provide a name for the target conditional variance series and, if relevant, you may provide a name for the permanent component series. You may take the square root of the conditional variance series to get the conditional standard deviations as displayed by the View/GARCH Graph/Conditional Standard Deviation.
The Integrated GARCH (IGARCH) Model
If we restrict the parameters of the GARCH model to sum to one and drop the constant term:

  σ²_t = Σ_{j=1}^{q} β_j·σ²_{t-j} + Σ_{i=1}^{p} α_i·ε²_{t-i}    (25.19)

such that

  Σ_{j=1}^{q} β_j + Σ_{i=1}^{p} α_i = 1    (25.20)
then we have an integrated GARCH. This model was originally described in Engle and
Bollerslev (1986). To estimate this model, select IGARCH in the Restrictions drop-down
menu for the GARCH/TARCH model.
The Threshold GARCH (TARCH) Model
TARCH or Threshold ARCH models were introduced independently by Zakoïan (1994) and Glosten, Jagannathan, and Runkle (1993). The specification for the conditional variance is:

  σ²_t = ω + Σ_{j=1}^{q} β_j·σ²_{t-j} + Σ_{i=1}^{p} α_i·ε²_{t-i} + Σ_{k=1}^{r} γ_k·ε²_{t-k}·I_{t-k}    (25.21)

where I_t = 1 if ε_t < 0 and 0 otherwise.

In this model, good news, ε_{t-i} > 0, and bad news, ε_{t-i} < 0, have differential effects on the conditional variance; good news has an impact of α_i, while bad news has an impact of α_i + γ_i. If γ_i > 0, bad news increases volatility, and we say that there is a leverage effect for the i-th order. If γ_i ≠ 0, the news impact is asymmetric.
Note that GARCH is a special case of the TARCH model where the threshold term is set to
zero. To estimate a TARCH model, specify your GARCH model with ARCH and GARCH order
and then change the Threshold order to the desired value.
The Exponential GARCH (EGARCH) Model
The EGARCH or Exponential GARCH model was proposed by Nelson (1991). The specification for the conditional variance is:

  log(σ²_t) = ω + Σ_{j=1}^{q} β_j·log(σ²_{t-j}) + Σ_{i=1}^{p} α_i·|ε_{t-i}/σ_{t-i}| + Σ_{k=1}^{r} γ_k·(ε_{t-k}/σ_{t-k}).    (25.22)

Note that the left-hand side is the log of the conditional variance. This implies that the leverage effect is exponential, rather than quadratic, and that forecasts of the conditional variance are guaranteed to be nonnegative. The presence of leverage effects can be tested by the hypothesis that γ_i < 0. The impact is asymmetric if γ_i ≠ 0.
There are two differences between the EViews specification of the EGARCH model and the
original Nelson model. First, Nelson assumes that the e t follows a Generalized Error Distribution (GED), while EViews offers you a choice of normal, Students t-distribution, or GED.
Second, Nelson's specification for the log conditional variance is a restricted version of:
  log(σ²_t) = ω + Σ_{j=1}^{q} β_j·log(σ²_{t-j}) + Σ_{i=1}^{p} α_i·[ |ε_{t-i}/σ_{t-i}| − E|ε_{t-i}/σ_{t-i}| ] + Σ_{k=1}^{r} γ_k·(ε_{t-k}/σ_{t-k})
Notice that we have specified the mean equation using an explicit expression. Using the explicit expression is for illustration purposes only; we could just as well have entered dlog(ibm) c dlog(spx) as our specification.
The Power ARCH (PARCH) Model
In the Power ARCH model (Ding, Granger, and Engle, 1993), the power parameter δ of the standard deviation is estimated rather than imposed, and optional asymmetry terms are allowed:

  σ^δ_t = ω + Σ_{j=1}^{q} β_j·σ^δ_{t-j} + Σ_{i=1}^{p} α_i·( |ε_{t-i}| − γ_i·ε_{t-i} )^δ    (25.23)

where δ > 0, and the symmetric model sets γ_i = 0 for all i. To estimate the standard deviation GARCH model of Taylor (1986) and Schwert (1989), for example, you will set the order of the asymmetric terms to zero and will set δ to 1.
The Component GARCH (CGARCH) Model
The conditional variance in the GARCH(1, 1) model:

  σ²_t = ω̄ + α·(ε²_{t-1} − ω̄) + β·(σ²_{t-1} − ω̄)    (25.24)

shows mean reversion to ω̄, which is a constant for all time. By contrast, the component model allows mean reversion to a varying level m_t, modeled as:

  σ²_t − m_t = α·(ε²_{t-1} − m_{t-1}) + β·(σ²_{t-1} − m_{t-1})    (25.25)
  m_t = ω̄ + ρ·(m_{t-1} − ω̄) + φ·(ε²_{t-1} − σ²_{t-1}).

Here σ²_t is still the volatility, while m_t takes the place of ω̄ and is the time varying long-run volatility. The first equation describes the transitory component, σ²_t − m_t, which converges to zero with powers of (α + β). The second equation describes the long run component m_t, which converges to ω̄ with powers of ρ. ρ is typically between 0.99 and 1 so that m_t approaches ω̄ very slowly. We can combine the transitory and permanent equations and write:

  σ²_t = (1 − α − β)·(1 − ρ)·ω̄ + (α + φ)·ε²_{t-1} − (α·ρ + (α + β)·φ)·ε²_{t-2}
         + (β − φ)·σ²_{t-1} − (β·ρ − (α + β)·φ)·σ²_{t-2}    (25.26)
which shows that the component model is a (nonlinear) restricted GARCH(2, 2) model.
To select the Component ARCH model, simply choose Component ARCH(1,1) in the Model
dropdown menu. You can include exogenous variables in the conditional variance equation
of component models, either in the permanent or transitory equation (or both). The variables in the transitory equation will have an impact on the short run movements in volatility, while the variables in the permanent equation will affect the long run levels of volatility.
An asymmetric Component ARCH model may be estimated by checking the Include threshold term checkbox. This option combines the component model with the asymmetric
TARCH model, introducing asymmetric effects in the transitory equation and estimates models of the form:
  y_t = x_t'π + ε_t
  m_t = ω̄ + ρ·(m_{t-1} − ω̄) + φ·(ε²_{t-1} − σ²_{t-1}) + θ₁·z_{1t}    (25.27)
  σ²_t − m_t = α·(ε²_{t-1} − m_{t-1}) + γ·(ε²_{t-1} − m_{t-1})·d_{t-1} + β·(σ²_{t-1} − m_{t-1}) + θ₂·z_{2t}

where z are the exogenous variables and d is the dummy variable indicating negative shocks. γ > 0 indicates the presence of transitory leverage effects in the conditional variance.
Examples
As an illustration of ARCH modeling in EViews, we estimate a model for the daily S&P 500 stock index from 1990 to 1999 (in the workfile Stocks.WF1). The dependent variable is the daily continuously compounding return, log(s_t/s_{t-1}), where s_t is the daily close of the index. A graph of the return series clearly shows volatility clustering. We will specify our mean equation with a simple constant:
[Figure: DLOG(SPX), daily returns, 1990-1999]
  log(s_t/s_{t-1}) = c₁ + ε_t
For the variance specification, we employ an EGARCH(1, 1) model:
  log(σ²_t) = ω + β·log(σ²_{t-1}) + α·|ε_{t-1}/σ_{t-1}| + γ·(ε_{t-1}/σ_{t-1})    (25.28)
When we previously estimated a GARCH(1,1) model with the data, the standardized residual showed evidence of excess kurtosis. To model the thick tail in the residuals, we will
assume that the errors follow a Student's t-distribution.
To estimate this model, open the GARCH estimation dialog, enter the mean specification:
dlog(spx) c
select the EGARCH method, enter 1 for the ARCH and GARCH orders and the Asymmetric
order, and select Students t for the Error distribution. Click on OK to continue.
EViews displays the results of the estimation procedure. The top portion contains a description of the estimation specification, including the estimation sample, error distribution
assumption, and backcast assumption.
Below the header information are the results for the mean and the variance equations, followed by the results for any distributional parameters. Here, we see that the relatively small
degrees of freedom parameter for the t-distribution suggests that the distribution of the standardized errors departs significantly from normality.
                      Coefficient   Std. Error   z-Statistic   Prob.

C                      0.000513     0.000135      3.810600     0.0001

                           Variance Equation

C(2)                  -0.196710     0.039150     -5.024490     0.0000
C(3)                   0.113675     0.017550      6.477203     0.0000
C(4)                  -0.064068     0.011575     -5.535009     0.0000
C(5)                   0.988584     0.003360    294.2102       0.0000

T-DIST. DOF            6.703688     0.844702      7.936156     0.0000

R-squared             -0.000032    Mean dependent var        0.000564
Adjusted R-squared    -0.000032    S.D. dependent var        0.008888
S.E. of regression     0.008889    Akaike info criterion    -6.871798
Sum squared resid      0.199653    Schwarz criterion        -6.857949
Log likelihood         8691.953    Hannan-Quinn criter.     -6.866773
Durbin-Watson stat     1.963994
To test whether there any remaining ARCH effects in the residuals, select View/Residual
Diagnostics/ARCH LM Test... and specify the order to test. EViews will open the general
Heteroskedasticity Tests dialog opened to the ARCH page. Enter 7 in the dialog for the
number of lags and click on OK.
The top portion of the output from testing up-to an ARCH(7) is given by:
Heteroskedasticity Test: ARCH

F-statistic        0.398895    Prob. F(7,2513)        0.9034
Obs*R-squared      2.798042    Prob. Chi-Square(7)    0.9030
To examine the distribution of the standardized residuals more closely, save them in the workfile (here as RESID02) and generate a comparison series TDIST that simulates a random draw from the t-distribution with 6.7 degrees of freedom. Then, create a group containing the series RESID02 and TDIST. Select View/Graph... and choose Quantile-Quantile from the left-hand side of the dialog and Empirical from the Q-Q graph dropdown on the right-hand side.
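A rough sketch of the commands behind these steps, assuming the standardized residuals have been saved as RESID02 and using the 6.7 degrees-of-freedom estimate from the output above (the series name TDIST is illustrative):

series tdist = @qtdist(rnd, 6.7)             ' simulated draws from a t-distribution with 6.7 d.f.
group g_qq resid02 tdist                     ' group used for the empirical quantile-quantile plot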
The large negative residuals more closely follow a straight line. On the other hand, one can see a slight deviation from the t-distribution for large positive shocks. This is expected, as the previous QQ-plot suggested that, with the exception of the large negative shocks, the residuals were close to normally distributed.
To see how the model might fit real data, we
examine static forecasts for out-of-sample
data. Click on the Forecast button on the
equation toolbar, type in SPX_VOL in the
GARCH field to save the forecasted conditional variance, change the sample to the
post-estimation sample period 1/1/2000 1/1/
2002 and click on Static to select a static
forecast.
Since the actual volatility is unobserved, we will use the squared return series (DLOG(SPX)^2) as a proxy for the realized volatility. A plot of the proxy against the forecasted volatility for the years 2000 and 2001 provides an indication of the model's ability to track variations in market volatility.
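A minimal sketch of the comparison, in which the proxy series name RV_PROXY is illustrative and SPX_VOL is the forecasted conditional variance saved above:

smpl 1/1/2000 12/31/2001
series rv_proxy = dlog(spx)^2                ' squared return as a proxy for realized volatility
group g_vol rv_proxy spx_vol
g_vol.line                                   ' plot the proxy against the variance forecast
smpl @all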
[Figure: DLOG(SPX)^2 and SPX_VOL, 2000-2001]
References
Bollerslev, Tim (1986). Generalized Autoregressive Conditional Heteroskedasticity, Journal of Econometrics, 31, 307-327.
Bollerslev, Tim, Ray Y. Chou, and Kenneth F. Kroner (1992). ARCH Modeling in Finance: A Review of the Theory and Empirical Evidence, Journal of Econometrics, 52, 5-59.
Bollerslev, Tim, Robert F. Engle and Daniel B. Nelson (1994). ARCH Models, Chapter 49 in Robert F. Engle and Daniel L. McFadden (eds.), Handbook of Econometrics, Volume 4, Amsterdam: Elsevier Science B.V.
Bollerslev, Tim and Jeffrey M. Wooldridge (1992). Quasi-Maximum Likelihood Estimation and Inference in Dynamic Models with Time Varying Covariances, Econometric Reviews, 11, 143-172.
Ding, Zhuanxin, C. W. J. Granger, and R. F. Engle (1993). A Long Memory Property of Stock Market Returns and a New Model, Journal of Empirical Finance, 1, 83-106.
Engle, Robert F. (1982). Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of U.K. Inflation, Econometrica, 50, 987-1008.
Engle, Robert F., and Bollerslev, Tim (1986). Modeling the Persistence of Conditional Variances, Econometric Reviews, 5, 1-50.
Engle, Robert F., David M. Lilien, and Russell P. Robins (1987). Estimating Time Varying Risk Premia in the Term Structure: The ARCH-M Model, Econometrica, 55, 391-407.
Glosten, L. R., R. Jagannathan, and D. Runkle (1993). On the Relation between the Expected Value and the Volatility of the Normal Excess Return on Stocks, Journal of Finance, 48, 1779-1801.
Nelson, Daniel B. (1991). Conditional Heteroskedasticity in Asset Returns: A New Approach, Econometrica, 59, 347-370.
Schwert, W. (1989). Stock Volatility and the Crash of '87, Review of Financial Studies, 3, 77-102.
Taylor, S. (1986). Modeling Financial Time Series, New York: John Wiley & Sons.
Zakoïan, J. M. (1994). Threshold Heteroskedastic Models, Journal of Economic Dynamics and Control, 18, 931-944.
Background
It is well known that many economic time series are difference stationary. In general, a
regression involving the levels of these I(1) series will produce misleading results, with conventional Wald tests for coefficient significance spuriously showing a significant relationship
between unrelated series (Phillips 1986).
Engle and Granger (1987) note that a linear combination of two or more I(1) series may be
stationary, or I(0), in which case we say the series are cointegrated. Such a linear combination defines a cointegrating equation with cointegrating vector of weights characterizing the
long-run relationship between the variables.
We will work with the standard triangular representation of a regression specification and
assume the existence of a single cointegrating vector (Hansen 1992b, Phillips and Hansen
1990). Consider the n + 1 dimensional time series vector process (y_t, X_t'), with cointegrating equation

  y_t = X_t'β + D_{1t}'γ₁ + u_{1t}    (26.1)

where D_t = (D_{1t}', D_{2t}')' are deterministic trend regressors and the n stochastic regressors X_t are governed by the system of equations:

  X_t = Γ₂₁'D_{1t} + Γ₂₂'D_{2t} + ε_{2t}
  Δε_{2t} = u_{2t}    (26.2)
The p₁-vector of D_{1t} regressors enter into both the cointegrating equation and the regressors equations, while the p₂-vector of D_{2t} are deterministic trend regressors which are included in the regressors equations but excluded from the cointegrating equation (if a non-trending regressor such as the constant is present, it is assumed to be an element of D_{1t} so it is not in D_{2t}).
Following Hansen (1992b), we assume that the innovations u_t = (u_{1t}, u_{2t}')' are strictly stationary and ergodic with zero mean, contemporaneous covariance matrix Σ, one-sided long-run covariance matrix Λ, and covariance matrix Ω, each of which we partition conformably with u_t:

  Σ = E(u_t·u_t') = [ σ₁₁  σ₁₂ ; σ₂₁  Σ₂₂ ]

  Λ = Σ_{j=0}^{∞} E(u_t·u_{t-j}') = [ λ₁₁  λ₁₂ ; λ₂₁  Λ₂₂ ]    (26.3)

  Ω = Σ_{j=-∞}^{∞} E(u_t·u_{t-j}') = [ ω₁₁  ω₁₂ ; ω₂₁  Ω₂₂ ] = Λ + Λ' − Σ

In addition, we assume a rank n long-run covariance matrix Ω with non-singular submatrix Ω₂₂. Taken together, the assumptions imply that the elements of y_t and X_t are I(1) and cointegrated but exclude both cointegration amongst the elements of X_t and multicointegration. Discussions of additional and in some cases alternate assumptions for this specification are provided by Phillips and Hansen (1990), Hansen (1992b), and Park (1992).
It is well-known that if the series are cointegrated, ordinary least squares estimation (static
OLS) of the cointegrating vector b in Equation (26.1) is consistent, converging at a faster
rate than is standard (Hamilton 1994). One important shortcoming of static OLS (SOLS) is that the estimates have an asymptotic distribution that is generally non-Gaussian, exhibits asymptotic bias and asymmetry, and depends on non-scalar nuisance parameters. Since conventional testing procedures are not valid unless modified substantially, SOLS is generally not recommended if one wishes to conduct inference on the cointegrating vector.
The problematic asymptotic distribution of SOLS arises from the presence of long-run correlation between the cointegrating equation errors and the regressor innovations ($\omega_{12}$), and cross-correlation between the cointegrating equation errors and the regressors ($\lambda_{12}$). In the special case where the $X_t$ are strictly exogenous regressors so that $\omega_{12} = 0$ and $\lambda_{12} = 0$, the bias, asymmetry, and dependence on non-scalar nuisance parameters vanish, and the SOLS estimator has a fully efficient asymptotic Gaussian mixture distribution which permits standard Wald testing using conventional limiting $\chi^2$-distributions.
Alternately, SOLS has an asymptotic Gaussian mixture distribution if the number of deterministic trends excluded from the cointegrating equation, $p_2$, is no less than the number of stochastic regressors $n$. Let $m_2 = \max(n - p_2, 0)$ represent the number of cointegrating regressors less the number of deterministic trend regressors excluded from the cointegrating equation. Then, roughly speaking, when $m_2 = 0$, the deterministic trends in the regressors asymptotically dominate the stochastic trend components in the cointegrating equation.
While Park (1992) notes that these two cases are rather exceptional, they are relevant in
motivating the construction of our three asymptotically efficient estimators and computation
of critical values for residual-based cointegration tests. Notably, the fully efficient estimation
methods supported by EViews involve transformations of the data or modifications of the
cointegrating equation specification to mimic the strictly exogenous X t case.
Equation Specification
The cointegrating equation is
described in the Equation
specification section. You
should enter the name of the
dependent variable, y , followed by a list of cointegrating
regressors, X , in the edit field,
then use the Trend specification dropdown to choose from
a list of deterministic trend variable assumptions (None, Constant (Level), Linear Trend,
Quadratic Trend). The dropdown menu selections imply trends up to the specified order so
that the Quadratic Trend selection depicted includes a constant and a linear trend term
along with the quadratic.
If you wish to add deterministic regressors that are not offered in the pre-specified list to
D 1 , you may enter the series names in the Deterministic regressors edit box.
The FMOLS estimator employs preliminary estimates of the symmetric and one-sided long-run covariance matrices of the residuals. Let $\hat{u}_{1t}$ be the residuals obtained after estimating Equation (26.1). The $\hat{u}_{2t}$ may be obtained indirectly as $\hat{u}_{2t} = \Delta\hat{\epsilon}_{2t}$ from the levels regressions

$X_t = \hat{\Gamma}_{21}' D_{1t} + \hat{\Gamma}_{22}' D_{2t} + \hat{\epsilon}_{2t}$   (26.4)

or directly from the difference regressions

$\Delta X_t = \hat{\Gamma}_{21}' \Delta D_{1t} + \hat{\Gamma}_{22}' \Delta D_{2t} + \hat{u}_{2t}$   (26.5)

Let $\hat{\Omega}$ and $\hat{\Lambda}$ be the long-run covariance matrices computed using the residuals $\hat{u}_t = (\hat{u}_{1t}, \hat{u}_{2t}')'$. Then we may define the modified data

$y_t^{+} = y_t - \hat{\omega}_{12}\hat{\Omega}_{22}^{-1}\hat{u}_{2t}$   (26.6)

and an estimated bias correction term

$\hat{\lambda}_{12}^{+} = \hat{\lambda}_{12} - \hat{\omega}_{12}\hat{\Omega}_{22}^{-1}\hat{\Lambda}_{22}$   (26.7)

The FMOLS estimator is given by

$\hat{\theta} = \begin{bmatrix}\hat{\beta} \\ \hat{\gamma}_1\end{bmatrix} = \left(\sum_{t=2}^{T} Z_t Z_t'\right)^{-1}\left(\sum_{t=2}^{T} Z_t y_t^{+} - T\begin{bmatrix}\hat{\lambda}_{12}^{+\prime} \\ 0\end{bmatrix}\right)$   (26.8)

where $Z_t = (X_t', D_t')'$. The estimated conditional long-run variance

$\hat{\omega}_{1.2} = \hat{\omega}_{11} - \hat{\omega}_{12}\hat{\Omega}_{22}^{-1}\hat{\omega}_{21}$   (26.9)

is used in forming the coefficient covariance matrix, so that Wald statistics for the restrictions $R\theta = r$ may be computed as

$W = (R\hat{\theta} - r)'\left(R\,\hat{V}(\hat{\theta})\,R'\right)^{-1}(R\hat{\theta} - r)$   (26.10)

with

$\hat{V}(\hat{\theta}) = \hat{\omega}_{1.2}\left(\sum_{t=2}^{T} Z_t Z_t'\right)^{-1}$   (26.11)
The results of FMOLS estimation of the cointegrating equation for the Hamilton (1994) consumption (LC) and income (LY) data are given by:

Variable              Coefficient    Std. Error    t-Statistic    Prob.
LY                     0.987548      0.009188      107.4880       0.0000
C                     -0.035023      6.715362       -0.005215     0.9958

R-squared              0.998171      Mean dependent var     720.5078
Adjusted R-squared     0.998160      S.D. dependent var      41.74069
S.E. of regression     1.790506      Sum squared resid      538.5929
Durbin-Watson stat     0.406259      Long-run variance       25.46653
The top portion of the results describes the settings used in estimation: in particular, the specification of the deterministic regressors in the cointegrating equation, the kernel nonparametric method used to compute the long-run variance estimators $\hat{\Omega}$ and $\hat{\Lambda}$, and the d.f. correction option used in the calculation of the coefficient covariance. Also displayed is the bandwidth of 14.9878 selected by the Andrews automatic bandwidth procedure.
The estimated coefficients are presented in the middle of the output. Of central importance is the coefficient on LY, which implies that the estimated cointegrating vector for LC and LY is (1, -0.9875). Note that we present the standard error, t-statistic, and p-value for the constant even though they are not, strictly speaking, valid.
The summary statistic portion of the output is relatively familiar but does require a bit of
comment. First, all of the descriptive and fit statistics are computed using the original data,
not the FMOLS transformed data. Thus, while the measures of fit and the Durbin-Watson
stat may be of casual interest, you should exercise extreme caution in using these measures.
Second, EViews displays a Long-run variance value which is an estimate of the long-run variance of $u_{1t}$ conditional on $u_{2t}$. This statistic, which takes the value of 25.47 in this example, is the $\hat{\omega}_{1.2}$ employed in forming the coefficient covariances, and is obtained from the $\hat{\Omega}$ and $\hat{\Lambda}$ used in estimation. Since we are not d.f. correcting the coefficient covariance matrix, the $\hat{\omega}_{1.2}$ reported here is not d.f. corrected.
Once you have estimated your equation using FMOLS you may use the various cointegrating
regression equation views and procedures. We will discuss these tools in greater depth in
"Working with an Equation" on page 279, but for now we focus on a simple Wald test for the coefficients. To test whether the cointegrating vector is (1, -1), select View/Coefficient Diagnostics/Wald Test - Coefficient Restrictions and enter C(1)=1 in the dialog.
EViews displays the output for the test:
Wald Test:
Equation: FMOLS
Null Hypothesis: C(1)=1

Test Statistic       Value        df          Probability
t-statistic         -1.355362     168         0.1771
F-statistic          1.837006     (1, 168)    0.1771
Chi-square           1.837006     1           0.1753

Null Hypothesis Summary:
Normalized Restriction (= 0)     Value        Std. Err.
-1 + C(1)                       -0.012452     0.009188
The t-statistic and Chi-square p-values are both around 0.17, indicating that we cannot reject the null hypothesis that the cointegrating regressor coefficient value is equal to 1.
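As a simple arithmetic check, the reported t-statistic can be reproduced directly from the FMOLS estimates above:

$t = \frac{0.987548 - 1}{0.009188} \approx -1.355, \qquad \chi^2 = t^2 \approx 1.837$

which matches the Wald output up to rounding of the displayed coefficient and standard error.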
Note that this Wald test is for a simple linear restriction. Hansen points out that his theoretical results do not directly extend to testing nonlinear hypotheses in models with trend regressors, but EViews does allow tests with nonlinear restrictions since others, such as Phillips and Loretan (1991) and Park (1992), provide results in the absence of the trend regressors. We do urge caution in interpreting nonlinear restriction test results for equations involving such regressors.
Canonical Cointegrating Regression

Following Park (1992), the CCR estimator works by transforming the data so as to remove the long-run dependence between the cointegrating equation errors and the regressor innovations. Collect the one-sided long-run covariances between $u_{2t}$ and $u_t$ in

$\hat{\Lambda}_2 = (\hat{\lambda}_{12}', \hat{\Lambda}_{22}')'$   (26.12)

and form the transformed data

$X_t^{*} = X_t - \left(\hat{\Sigma}^{-1}\hat{\Lambda}_2\right)'\hat{u}_t$

$y_t^{*} = y_t - \left(\hat{\Sigma}^{-1}\hat{\Lambda}_2\hat{\beta} + \begin{bmatrix}0 \\ \hat{\Omega}_{22}^{-1}\hat{\omega}_{21}\end{bmatrix}\right)'\hat{u}_t$   (26.13)
where $\hat{\beta}$ are estimates of the cointegrating equation coefficients, typically the SOLS estimates used to obtain the residuals $\hat{u}_{1t}$.
The CCR estimator is defined as ordinary least squares applied to the transformed data:

$\begin{bmatrix}\hat{\beta} \\ \hat{\gamma}_1\end{bmatrix} = \left(\sum_{t=1}^{T} Z_t^{*} Z_t^{*\prime}\right)^{-1}\sum_{t=1}^{T} Z_t^{*} y_t^{*}$   (26.14)

where $Z_t^{*} = (X_t^{*\prime}, D_{1t}')'$.
Park shows that the CCR transformations asymptotically eliminate the endogeneity caused by the long-run correlation of the cointegrating equation errors and the stochastic regressor innovations, and simultaneously correct for asymptotic bias resulting from the contemporaneous correlation between the regression and stochastic regressor errors. Estimates based on the CCR are therefore fully efficient and have the same unbiased, mixture normal asymptotics as FMOLS. Wald testing may be carried out as in Equation (26.10) with $Z_t^{*}$ used in place of $Z_t$ in Equation (26.11).
Estimating the same specification by CCR, using VAR prewhitening when computing the long-run covariance estimates, yields:

Variable              Coefficient    Std. Error    t-Statistic    Prob.
LY                     0.988975      0.007256      136.3069       0.0000
C                     -1.958828      5.298819       -0.369673     0.7121

R-squared              0.997780      Mean dependent var     720.5078
Adjusted R-squared     0.997767      S.D. dependent var      41.74069
S.E. of regression     1.972481      Sum squared resid      653.6343
Durbin-Watson stat     0.335455      Long-run variance       15.91571
The first thing we note is that the VAR prewhitening has a strong effect on the kernel part of
the calculation of the long-run covariances, shortening the Andrews optimal bandwidth
from almost 15 down to 1.6. Furthermore, as a result of prewhitening, the estimate of the
conditional long-run variance changes quite a bit, decreasing from 25.47 to 15.92. This
decrease contributes to estimated coefficient standard errors for CCR that are smaller than
their FMOLS counterparts. Differences aside, however, the estimates of the cointegrating
vector are qualitatively similar. In particular, a Wald test of the null hypothesis that the cointegrating vector is equal to (1, -1) yields a p-value of 0.1305.
Dynamic OLS
A simple approach to constructing an asymptotically efficient estimator that eliminates the
feedback in the cointegrating system has been advocated by Saikkonen (1992) and Stock
and Watson (1993). Termed Dynamic OLS (DOLS), the method involves augmenting the
cointegrating regression with lags and leads of $\Delta X_t$ so that the resulting cointegrating equation error term is orthogonal to the entire history of the stochastic regressor innovations:

$y_t = X_t'\beta + D_{1t}'\gamma_1 + \sum_{j=-q}^{r} \Delta X_{t+j}'\delta_j + v_{1t}$   (26.15)
Under the assumption that adding $q$ lags and $r$ leads of the differenced regressors soaks up all of the long-run correlation between $u_{1t}$ and $u_{2t}$, least-squares estimates of $\theta = (\beta', \gamma_1')'$ using Equation (26.15) have the same asymptotic distribution as those obtained from FMOLS and CCR.
An estimator of the asymptotic variance matrix of $\hat{\theta}$ may be obtained by computing the usual OLS coefficient covariance, but replacing the usual estimator for the residual variance of $v_{1t}$ with an estimator of the long-run variance of the residuals. Alternately, you could compute a robust HAC estimator of the coefficient covariance matrix.
To estimate your equation using DOLS, first fill out the equation specification, then select
Dynamic OLS (DOLS) in the Nonstationary estimation settings dropdown menu. The dialog will change to display settings for DOLS.
By default, the Lag & lead method
is Fixed with Lags and Leads each
set to 1. You may specify a different
number of lags or leads, or you can use the dropdown to select automatic information criterion selection of the lag and lead orders (Akaike, Schwarz, or Hannan-Quinn). If you select None, EViews will estimate SOLS.
If you select one of the info criterion selection methods, you will be
prompted for a maximum lag and
lead length. You may enter a value,
or you may retain the default entry
"*", which instructs EViews to use an observation-based rule-of-thumb, given in Equation (26.16), to set the maximum, where $k$ is the number of coefficients in the cointegrating equation.
This rule-of-thumb is a slightly modified version of the rule suggested by Schwert (1989) in
the context of unit root testing. (We urge careful thought in the use of automatic selection
methods since the purpose of including leads and lags is to remove long-run dependence by
orthogonalizing the equation residual with respect to the history of stochastic regressor
innovations; the automatic methods were not designed to produce this effect.)
For DOLS estimation we may also specify the method used to compute the coefficient covariance matrix. Click on the Options tab of the dialog to see the relevant options.
The dropdown menu allows you to choose between the
Default (rescaled OLS), Ordinary Least Squares, White, or
HAC - Newey West. The default computation method rescales the ordinary least squares coefficient covariance using
an estimator of the long-run variance of DOLS residuals
(multiplying by the ratio of the long-run variance to the ordinary squared standard error). Alternately, you may employ a
sandwich-style HAC (Newey-West) covariance matrix estimator. In both cases, the HAC Options button may be used to override the default method
for computing the long-run variance (non-prewhitened Bartlett kernel and a Newey-West
fixed bandwidth). In addition, EViews offers options for estimating the coefficient covariance using the White covariance or Ordinary Least Squares methods. These methods are
offered primarily for comparison purposes.
Lastly, the Options tab may be used to remove the degree-of-freedom correction that is
applied to the estimate of the conditional long-run variance or robust coefficient covariance.
Estimating the consumption equation by DOLS with a constant and linear trend in the cointegrating equation yields:

Variable              Coefficient    Std. Error    t-Statistic    Prob.
LY                     0.681179      0.071981       9.463267      0.0000
C                      199.1406      47.20878       4.218297      0.0000
@TREND                 0.268957      0.062004       4.337740      0.0000

R-squared              0.999395      Mean dependent var     720.5532
Adjusted R-squared     0.999351      S.D. dependent var      39.92349
S.E. of regression     1.017016      Sum squared resid      155.1484
Durbin-Watson stat     0.422921      Long-run variance       10.19830
The top portion describes the settings used in estimation, showing the trend assumptions,
the lag and lead specification, and method for computing the long-run variance used in
forming the coefficient covariances. The actual estimate of the latter, in this case 10.198, is again displayed in the bottom portion of the output (if you had selected OLS as your coefficient covariance method, this value would simply be the ordinary S.E. of the regression; if you had selected White or HAC, the statistic would not have been computed).
The estimated coefficients are displayed in the middle of the output. First, note that EViews
does not display the results for the lags and leads of the differenced cointegrating regressors
since we cannot perform inference on these short-term dynamics nuisance parameters. Second, the coefficient on the linear trend is statistically different from zero at conventional levels, indicating that there is a deterministic time trend common to both LC and LY. Lastly, the
estimated cointegrating vector for LC and LY is (1, -0.6812), which differs qualitatively from
the earlier results. A Wald test of the restriction that the cointegrating vector is (1, -1) yields
a t-statistic of -4.429, strongly rejecting that null hypothesis.
While EViews does not display the coefficients for the short-run dynamics, the short-run
coefficients are used in constructing the fit statistics in the bottom portion of the results view
(we again urge caution in using these measures). The short-run dynamics are also used in
computing the residuals used by various equation views and procs such as the residual plot
or the gradient view.
The short-run coefficients are not included in the representations view of the equation,
which focuses only on the estimates for Equation (26.1). Furthermore, forecasting and
model solution using an equation estimated by DOLS are also based on the long-run relationship. If you wish to construct forecasts that incorporate the short-run dynamics, you
may use least squares to estimate an equation that explicitly includes the lags and leads of
the cointegrating regressors.
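For example, a minimal sketch of such a least squares specification for the consumption example, assuming one lag and one lead of the differenced regressor (the equation name is arbitrary, and the series and trend choices follow the DOLS example above):

' re-estimate the DOLS specification by ordinary least squares,
' keeping the short-run lag/lead terms in the equation
equation eq_dols_ls.ls lc c @trend ly d(ly(-1)) d(ly) d(ly(1))

Forecasts constructed from this equation will then incorporate the estimated short-run dynamics.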
Residual-based Tests
The Engle-Granger and Phillips-Ouliaris residual-based tests for cointegration are simply
unit root tests applied to the residuals obtained from SOLS estimation of Equation (26.1).
Under the assumption that the series are not cointegrated, all linear combinations of
( y t, X t ) , including the residuals from SOLS, are unit root nonstationary. Therefore, a test
of the null hypothesis of no cointegration against the alternative of cointegration corresponds
to a unit root test of the null of nonstationarity against the alternative of stationarity.
The two tests differ in the method of accounting for serial correlation in the residual series;
the Engle-Granger test uses a parametric, augmented Dickey-Fuller (ADF) approach, while
the Phillips-Ouliaris test uses the nonparametric Phillips-Perron (PP) methodology.
The Engle-Granger test estimates a $p$-lag augmented regression of the form

$\Delta \hat{u}_{1t} = (\rho - 1)\hat{u}_{1t-1} + \sum_{j=1}^{p} \delta_j \Delta \hat{u}_{1t-j} + v_t$   (26.17)

The number of lagged differences $p$ should increase to infinity with the (zero-lag) sample size $T$, but at a rate slower than $T^{1/3}$.
We consider the two standard ADF test statistics, one based on the t-statistic for testing the null hypothesis of nonstationarity ($\rho = 1$) and the other based directly on the normalized autocorrelation coefficient $\hat{\rho} - 1$:

$\hat{\tau} = \frac{\hat{\rho} - 1}{se(\hat{\rho})}$

$\hat{z} = \frac{T(\hat{\rho} - 1)}{1 - \sum_j \hat{\delta}_j}$   (26.18)

where $se(\hat{\rho})$ is the usual OLS estimator of the standard error of the estimated $\hat{\rho}$,

$se(\hat{\rho}) = \hat{s}_v \left(\sum_t \hat{u}_{1t-1}^2\right)^{-1/2}$   (26.19)

(Stock 1986, Hayashi 2000). There is a practical question as to whether the standard error estimate in Equation (26.19) should employ a degree-of-freedom correction. Following common usage, EViews standalone unit root tests and the Engle-Granger cointegration tests both use the d.f.-corrected estimated standard error $\hat{s}_v$, with the latter test offering an option to turn off the correction.
In contrast to the Engle-Granger test, the Phillips-Ouliaris test obtains an estimate of $\rho$ by running the unaugmented Dickey-Fuller regression

$\Delta \hat{u}_{1t} = (\hat{\rho} - 1)\hat{u}_{1t-1} + w_t$   (26.20)

and using the results to compute estimates of the long-run variance $\hat{\omega}_w$ and the strict one-sided long-run variance $\hat{\lambda}_{1w}$ of the residuals. By default, EViews d.f.-corrects the estimates of both long-run variances, but the correction may be turned off. (The d.f. correction employed in the Phillips-Ouliaris test differs slightly from the ones in FMOLS and CCR estimation since the former applies to the estimators of both long-run variances, while the latter apply only to the estimate of the conditional long-run variance.)

The bias corrected autocorrelation coefficient is then given by

$(\hat{\rho}^{*} - 1) = (\hat{\rho} - 1) - T\hat{\lambda}_{1w}\left(\sum_t \hat{u}_{1t-1}^2\right)^{-1}$   (26.21)

and the test statistics are

$\hat{\tau} = \frac{\hat{\rho}^{*} - 1}{se(\hat{\rho}^{*})}$

$\hat{z} = T(\hat{\rho}^{*} - 1)$   (26.22)

where

$se(\hat{\rho}^{*}) = \hat{\omega}_w^{1/2}\left(\sum_t \hat{u}_{1t-1}^2\right)^{-1/2}$   (26.23)
As with ADF and PP statistics, the asymptotic distributions of the Engle-Granger and Phillips-Ouliaris z and t statistics are non-standard and depend on the deterministic regressors
specification, so that critical values for the statistics are obtained from simulation results.
Note that the dependence on the deterministics occurs despite the fact that the auxiliary
regressions themselves exclude the deterministics (since those terms have already been
removed from the residuals). In addition, the critical values for the ADF and PP test statistics
must account for the fact that the residuals used in the tests depend upon estimated coefficients.
MacKinnon (1996) provides response surface regression results for obtaining critical values for four different assumptions about the deterministic regressors in the cointegrating equation (none, constant (level), linear trend, quadratic trend) and values of $k = m_2 + 1$ from 1 to 12. (Recall that $m_2 = \max(n - p_2, 0)$ is the number of cointegrating regressors less the number of deterministic trend regressors excluded from the cointegrating equation.)
When computing critical values, EViews will ignore the presence of any user-specified deterministic regressors since corresponding simulation results are not available. Furthermore,
results for k = 12 will be used for cases that exceed that value.
Continuing with our consumption and income example from Hamilton, we construct Engle-Granger and Phillips-Ouliaris tests from an estimated equation where the deterministic regressors include a constant and linear trend. Since SOLS is used to obtain the first-stage residuals, the test results do not depend on the method used to estimate the original equation; only the specification itself is used in constructing the test.
To perform the Engle-Granger test, open an estimated equation, select View/Cointegration Test..., and choose Engle-Granger in the Test Method dropdown. The dialog will change to display options for specifying the number $p$ of augmenting lags in the ADF regression.
By default, EViews uses automatic lag-length selection
using the Schwarz information criterion. The default
number of lags is the observation-based rule given in
Equation (26.16). Alternately you may specify a Fixed
(User-specified) lag-length, select a different information criterion (Akaike, Hannan-Quinn, Modified
Akaike, Modified Schwarz, or Modified Hannan-Quinn), or specify sequential testing of the highest
order lag using a t-statistic and specified p-value
threshold. For our purposes the default settings suffice
so simply click on OK.
The Engle-Granger test results are divided into three
distinct sections. The first portion displays the test specification and settings, along with the
test values and corresponding p-values:
                                   Value         Prob.*
Engle-Granger tau-statistic       -4.536843      0.0070
Engle-Granger z-statistic        -33.43478       0.0108
The probability values are derived from the MacKinnon response surface simulation results.
In settings where using the MacKinnon results may not be appropriate, for example when
the cointegrating equation contains user-specified deterministic regressors or when there are
more than 12 stochastic trends in the asymptotic distribution, EViews will display a warning
message below these results.
Looking at the test description, we first confirm that the test statistic is computed using C
and @TREND as deterministic regressors, and note that the choice to include a single lagged
difference in the ADF regression was determined using automatic lag selection with a
Schwarz criterion and a maximum lag of 13.
As to the tests themselves, the Engle-Granger tau-statistic (t-statistic) and normalized autocorrelation coefficient (which we term the z-statistic) both reject the null hypothesis of no
cointegration (unit root in the residuals) at the 5% level. In addition, the tau-statistic rejects
at a 1% significance level. On balance, the evidence clearly suggests that LC and LY are
cointegrated.
The middle section of the output displays intermediate results used in constructing the test
statistic that may be of interest:
Intermediate Results:
Rho - 1                          -0.241514
Rho S.E.                          0.053234
Residual variance                 0.642945
Long-run residual variance        0.431433
Number of lags                    1
Number of observations            169
Number of stochastic trends**     2
Most of the entries are self-explanatory, though a few deserve a bit of discussion. First, the
Rho S.E. and Residual variance are the (possibly) d.f. corrected coefficient standard
error and the squared standard error of the regression. Next, the Long-run residual variance is the estimate of the long-run variance of the residual based on the estimated para-
metric model. The estimator is obtained by taking the residual variance and dividing it by
the square of 1 minus the sum of the lag difference coefficients. These residual variance and
long-run variances are used to obtain the denominator of the z-statistic (Equation (26.18)).
Lastly, the Number of stochastic trends entry reports the k = m 2 + 1 value used to
obtain the p-values. In the leading case, k is simply the number of cointegrating variables
(including the dependent) in the system, but the value must generally account for deterministic trend terms in the system that are excluded from the cointegrating equation.
The bottom section of the output depicts the results for the actual ADF test equation:
Engle-Granger Test Equation:
Dependent Variable: D(RESID)
Method: Least Squares
Date: 04/21/09   Time: 10:37
Sample (adjusted): 1947Q3 1989Q3
Included observations: 169 after adjustments

Variable              Coefficient    Std. Error    t-Statistic    Prob.
RESID(-1)             -0.241514      0.053234      -4.536843      0.0000
D(RESID(-1))          -0.220759      0.071571      -3.084486      0.0024

R-squared              0.216944      Mean dependent var     -0.024433
Adjusted R-squared     0.212255      S.D. dependent var      0.903429
S.E. of regression     0.801838      Akaike info criterion   2.407945
Sum squared resid      107.3718      Schwarz criterion       2.444985
Log likelihood        -201.4713      Hannan-Quinn criter.    2.422976
Durbin-Watson stat     1.971405
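The reported statistics can be reproduced by hand from the intermediate results and the test equation (a simple arithmetic check using the values above):

$\hat{\tau} = \frac{-0.241514}{0.053234} \approx -4.537, \qquad \hat{z} = \frac{169 \times (-0.241514)}{1 - (-0.220759)} \approx -33.43$

and the long-run residual variance is the residual variance divided by the square of one minus the sum of the lagged difference coefficients, $0.642945 / (1 + 0.220759)^2 \approx 0.431$, as described above.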
Alternately, you may compute the Phillips-Ouliaris test statistic. Simply select View/Cointegration and choose Phillips-Ouliaris in the Test Method dropdown.
The dialog changes to show a single Options button
for controlling the estimation of the long-run variance
$\hat{\omega}_w$ and the strict one-sided long-run variance $\hat{\lambda}_{1w}$.
The default settings instruct EViews to compute these
long-run variances using a non-prewhitened Bartlett
kernel estimator with a fixed Newey-West bandwidth.
To change these settings, click on the Options button
and fill out the dialog. Since the default settings are
sufficient for our needs, simply click on the OK button
to compute the test statistics.
As before, the output may be divided into three parts;
we will focus on the first two. The test results are given
by:
                                     Value         Prob.*
Phillips-Ouliaris tau-statistic     -5.138345      0.0009
Phillips-Ouliaris z-statistic      -43.62100       0.0010
At the top of the output EViews notes that we estimated the long-run variance and one-sided long-run variance using a Bartlett kernel and an observation-based bandwidth of 5.0. More importantly, the test statistics show that, as with the Engle-Granger tests, the Phillips-Ouliaris tests reject the null hypothesis of no cointegration (unit root in the residuals) at roughly the 1% significance level.
The intermediate results are given by:
Intermediate Results:
Rho - 1                              -0.279221
Bias corrected Rho - 1 (Rho* - 1)    -0.256594
Rho* S.E.                             0.049937
Residual variance                     0.730377
Long-run residual variance            0.659931
Long-run residual autocovariance     -0.035223
Number of observations                170
Number of stochastic trends**         2
There are a couple of new results. The Bias corrected Rho - 1 reports the estimated value of Equation (26.21) and the Rho* S.E. corresponds to Equation (26.23). The Long-run residual variance and Long-run residual autocovariance are the estimates of $\hat{\omega}_w$ and $\hat{\lambda}_{1w}$, respectively. It is worth noting that the ratio of $\hat{\omega}_w^{1/2}$ to the S.E. of the regression, which is a measure of the amount of residual autocorrelation in the long-run variance, is the scaling factor used in adjusting the raw t-statistic to form tau.
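Again, the headline statistics follow directly from these intermediate values (an arithmetic check using the numbers above):

$\hat{\tau} = \frac{-0.256594}{0.049937} \approx -5.138, \qquad \hat{z} = 170 \times (-0.256594) \approx -43.62$

matching the reported Phillips-Ouliaris tau- and z-statistics.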
The bottom portion of the output displays results for the test equation.
Hansen's Instability Test

Hansen (1992b) outlines a test of the null hypothesis of cointegration against the alternative of no cointegration, noting that under the alternative one should expect to see evidence of parameter instability. He proposes (among others) use of the $L_c$ test statistic, which arises from the theory of Lagrange Multiplier tests for parameter instability, to evaluate the stability of the parameters.
The $L_c$ statistic examines time-variation in the scores from the estimated equation. Let $\hat{s}_t$ be the vector of estimated individual score contributions from the estimated equation, and define the partial sums

$\hat{S}_t = \sum_{r=2}^{t} \hat{s}_r$   (26.24)

where for FMOLS the score contributions are

$\hat{s}_t = (Z_t \hat{u}_{1t}^{+}) - \begin{bmatrix}\hat{\lambda}_{12}^{+\prime} \\ 0\end{bmatrix}$   (26.25)

with $\hat{u}_{1t}^{+}$ the residuals from the transformed FMOLS regression. The test statistic is then

$L_c = \sum_{t=2}^{T} \hat{S}_t'\, \hat{G}^{-1} \hat{S}_t$   (26.26)

where

$\hat{G} = \hat{\omega}_{1.2} \sum_{t=2}^{T} Z_t Z_t'$   (26.27)
The s t and G may be defined analogously to least squares for CCR using the transformed
data. For DOLS s t is defined for the subset of original regressors Z t , and G may be computed using the method employed in computing the original coefficient standard errors.
The distribution of $L_c$ is nonstandard and depends on $m_2 = \max(n - p_2, 0)$, the number of cointegrating regressors less the number of deterministic trend regressors excluded from the cointegrating equation, and $p$, the number of trending regressors in the system. Hansen (1992) has tabulated simulation results and provided polynomial functions allowing for computation of p-values for various values of $m_2$ and $p$. When computing p-values, EViews ignores the presence of user-specified deterministic regressors in your equation.
In contrast to the residual-based cointegration tests, Hansen's test does rely on estimates from the original equation. We continue our illustration by considering an equation estimated on the consumption data using a constant and trend, FMOLS with a Quadratic Spectral kernel, Andrews automatic bandwidth selection, and no d.f. correction for the long-run variance and coefficient covariance estimates. The equation estimates are given by:
Variable              Coefficient    Std. Error    t-Statistic    Prob.
LY                     0.651766      0.057711      11.29361       0.0000
C                      220.1345      37.89636       5.808855      0.0000
@TREND                 0.289900      0.049542       5.851627      0.0000

R-squared              0.999098      Mean dependent var     720.5078
Adjusted R-squared     0.999087      S.D. dependent var      41.74069
S.E. of regression     1.261046      Sum squared resid      265.5695
Durbin-Watson stat     0.514132      Long-run variance       8.223497
There are no options for the Hansen test so you may simply click on View/Cointegration
Tests..., select Hansen Instability in the dropdown menu, then click on OK.
Cointegration Test - Hansen Parameter Instability
Date: 08/11/09   Time: 13:48
Equation: EQ_19_3_31
Series: LC LY
Null hypothesis: Series are cointegrated
Cointegrating equation deterministics: C @TREND
No d.f. adjustment for score variance

                    Stochastic     Deterministic    Excluded
Lc statistic        Trends (m)     Trends (k)       Trends (p2)     Prob.*
0.575537            1              1                0               0.0641
The top portion of the output describes the test hypothesis, the deterministic regressors, and
any relevant information about the construction of the score variances. In this case, we see
that the original equation had both C and @TREND as deterministic regressors, and that the
score variance is based on the usual FMOLS variance with no d.f. correction.
The results are displayed below. The test statistic value of 0.5755 is presented in the first column. The next three columns describe the trends that determine the asymptotic distribution.
Here there is a single stochastic regressor (LY) and one deterministic trend (@TREND) in the
cointegrating equation, and no additional trends in the regressors equations. Lastly, we see
from the final column that the Hansen test does not reject the null hypothesis that the series
are cointegrated at conventional levels, though the relatively low p-value is cause for some concern given the Engle-Granger and Phillips-Ouliaris results.
Park's Added Variables Test

Park's added variables test is computed by testing for the significance of spurious time trends added to a cointegrating equation estimated using one of the methods described above. If the original specification includes deterministic trends up to order $p$, the test equation augments it with trend terms up to order $q$:

$y_t = X_t'\beta + \sum_{s=0}^{p} t^s \gamma_s + \sum_{s=p+1}^{q} t^s \gamma_s + u_{1t}$   (26.28)
and tests for the joint significance of the coefficients $(\gamma_{p+1}, \ldots, \gamma_q)$. Under the null hypothesis of cointegration, the spurious trend coefficients should be insignificant since the residual is stationary, while under the alternative, the spurious trend terms will mimic the remaining stochastic trend in the residual. Note that unless you wish to treat the constant as one of your spurious regressors, it should be included in the original equation specification.

Since the additional variables are simply deterministic regressors, we may apply a joint Wald test of significance to $(\gamma_{p+1}, \ldots, \gamma_q)$. Under the maintained hypothesis that the original specification of the cointegrating equation is correct, the resulting test statistic is asymptotically $\chi^2_{q-p}$.
While one could estimate an equation with spurious trends and then test for their significance using a Wald test, EViews offers a view which performs these steps for you. First estimate an equation where you include all trends that are assumed to be in the cointegrating equation. Next, select View/Cointegration Test... and choose Park Added Variables in the dropdown menu. The dialog will change to allow you to specify the spurious trends.
                 Value         df      Probability
Chi-square       12.72578      2       0.0017
The null hypothesis is that the series are cointegrated. The original specification includes a constant and linear trend, and the test equation will include up to a cubic trend. The Park test evaluates the statistical significance of the @TREND^2 and the (@TREND/170)^3 terms using a conventional Wald test. (You may notice that the latter cubic trend term, and any higher-order trends that you may include, uses the trend scaled by the number of observations in the sample.)
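Since the chi-square tail probability with two degrees of freedom has the simple closed form $P(\chi^2_2 > c) = e^{-c/2}$, the reported probability can be verified directly: $e^{-12.72578/2} \approx 0.0017$.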
The test results reject the null hypothesis of cointegration, in direct contrast to the results for the Engle-Granger, Phillips-Ouliaris, and Hansen tests (though the latter, which also tests the null of cointegration, is borderline). Note, however, that adding a quadratic trend to the original equation and then testing for cointegration yields results that, for all four tests, point to cointegration between LC and LY.
Working with an Equation

Once you have estimated your cointegrating equation, EViews offers a variety of views and procedures for working with the results. For the most part, these views and procedures are a subset of those available in other estimation settings such as least squares estimation. (The one new view, for cointegration testing, is described in depth in "Testing for Cointegration," beginning on page 270.) In some cases there have been modifications to account for the nature of cointegrating regression.
Views
For the most part, the views of a cointegrating equation require little discussion. For example, the Representations view offers text
descriptions of the estimated cointegrating equation, the Covariance Matrix displays the coefficient covariance, and the Residual
Diagnostics (Correlogram - Q-statistics, Correlogram Squared
Residuals, Histogram - Normality Test) offer statistics based on
pooled residuals. That said, a few comments about the construction of these views are in order.
First, the Representations and Covariance Matrix views of an
equation only show results for the cointegrating equation and the long-run coefficients. In
particular, the short-run dynamics included in a DOLS equation are not incorporated into
the equation. Similarly, Coefficient Diagnostics and Gradients views do not include any of
the short-run coefficients.
Second, the computation of the residuals used in the Actual, Fitted, Residual views and the
Residual Diagnostics views differs depending on the estimation method. For FMOLS and
CCR, the residuals are derived simply by substituting the estimated coefficients into the
cointegrating equation and computing the residuals. The values are not based on the transformed data. For DOLS, the residuals from the cointegrating equation are adjusted for the
estimated short-run dynamics. In all cases, the test statistic results in the Residual Diagnostics should be viewed as illustrative only, as they are not supported by asymptotic theory.
Note that standardized residuals are simply the residuals divided through by the long-run
variance estimate.
The Gradient (score) views are based on the moment conditions implied by the particular
estimation method. For FMOLS and CCR, these moment conditions are based on the transformed data (see Equation (26.25) for the expression for FMOLS scores). For DOLS, these values are simply proportional (-2 times) to the residuals times the regressors.
Procedures
The procs for an equation estimated using cointegrating regression are virtually identical to those found in least squares estimation.
Most of the relevant issues were discussed previously (e.g.,
construction of residuals and gradients), however you should
also note that forecasts constructed using the Forecast... procedure and models created using Make Model procedure follow
the Representations view in omitting DOLS short-run dynamics. Furthermore, the forecast
standard errors generated by the Forecast... proc and from solving models created using the
Make Model... proc both employ the S.E. of the regression reported in the estimation output. This may not be appropriate.
Data Members
The summary statistics results in the bottom of the equation output may be accessed using
data member functions (see Equation Data Members on page 35 for a list of common data
members). For equations estimated using DOLS (with default standard errors), FMOLS, or
CCR, EViews computes an estimate of the long-run variance of the residuals. This statistic
may be accessed using the @lrvar member function, so that if you have an equation named
FMOLS,
scalar mylrvar = fmols.@lrvar
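The member function may also be combined with other data members in expressions. For instance, a minimal sketch (assuming an equation object named DOLS, estimated with the default rescaled covariance, exists in the workfile) computing the rescaling ratio described earlier:

' ratio of the long-run residual variance to the squared S.E. of the regression
scalar lr_ratio = dols.@lrvar / dols.@se^2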
References
Engle, R. F., and C. W. J. Granger (1987). Co-integration and Error Correction: Representation, Estimation, and Testing, Econometrica, 55, 251-276.
Hamilton, James D. (1994). Time Series Analysis, Princeton: Princeton University Press.
Hansen, Bruce E. (1992a). Efficient Estimation and Testing of Cointegrating Vectors in the Presence of
Deterministic Trends, Journal of Econometrics, 53, 87-121.
Hansen, Bruce E. (1992b). Tests for Parameter Instability in Regressions with I(1) Processes, Journal of
Business and Economic Statistics, 10, 321-335.
Hayashi, Fumio (2000). Econometrics, Princeton: Princeton University Press.
MacKinnon, James G. (1996). Numerical Distribution Functions for Unit Root and Cointegration Tests,
Journal of Applied Econometrics, 11, 601-618.
Ogaki, Masao (1993). Unit Roots in Macroeconometrics: A Survey, Monetary and Economic Studies, 11,
131-154.
Park, Joon Y. (1992). Canonical Cointegrating Regressions, Econometrica, 60, 119-143.
Park, Joon Y. and Masao Ogaki (1991). Inferences in Cointegrated Models Using VAR Prewhitening to
Estimate Short-run Dynamics, Rochester Center for Economic Research Working Paper No. 281.
Phillips, Peter C. B. and Bruce E. Hansen (1990). Statistical Inference in Instrumental Variables Regression with I(1) Processes, Review of Economic Studies, 57, 99-125.
Phillips, Peter C. B. and Hyungsik R. Moon (1999). Linear Regression Limit Theory for Nonstationary
Panel Data, Econometrica, 67, 1057-1111.
Phillips, Peter C. B. and Mico Loretan (1991). Estimating Long-run Economic Equilibria, Review of Economic Studies, 59, 407-436.
Saikkonen, Pentti (1992). Estimation and Testing of Cointegrated Systems by an Autoregressive Approximation, Econometric Theory, 8, 1-27.
Stock, James H. (1994). Unit Roots, Structural Breaks and Trends, Chapter 46 in Handbook of Econometrics, Volume 4, R. F. Engle & D. McFadden (eds.), 2739-2841, Amsterdam: Elsevier Science Publishers B.V.
Stock, James H. and Mark Watson (1993). A Simple Estimator Of Cointegrating Vectors In Higher Order
Integrated Systems, Econometrica, 61, 783-820.
Background
Specification
An ARDL is a least squares regression containing lags of the dependent and explanatory
variables. ARDLs are usually denoted with the notation ARDL($p, q_1, \ldots, q_k$), where $p$ is the number of lags of the dependent variable, $q_1$ is the number of lags of the first explanatory variable, and $q_k$ is the number of lags of the $k$-th explanatory variable.
An ARDL model may be written as:

$y_t = a + \sum_{i=1}^{p} \gamma_i y_{t-i} + \sum_{j=1}^{k} \sum_{i=0}^{q_j} X_{j,t-i}'\beta_{j,i} + \epsilon_t$   (27.1)
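For instance, with a single explanatory variable and one lag of each series, the ARDL(1,1) model in this notation is simply

$y_t = a + \gamma_1 y_{t-1} + \beta_{1,0} x_{1,t} + \beta_{1,1} x_{1,t-1} + \epsilon_t$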
Some of the explanatory variables, X j , may have no lagged terms in the model ( q j = 0 ).
These variables are called static or fixed regressors. Explanatory variables with at least one
lagged term are called dynamic regressors.
To specify an ARDL model, you must determine how many lags of each variable should be
included (i.e. specify p and q 1, , q k ). Fortunately simple model selection procedures are
available for determining these lag lengths. Since an ARDL model can be estimated via least
squares regression, standard Akaike, Schwarz and Hannan-Quinn information criteria may be used for model selection. Alternatively, one could employ the adjusted $R^2$ from the various least squares regressions.
Post-Estimation Diagnostics
Long-run Relationships
Since an ARDL model estimates the dynamic relationship between a dependent variable and
explanatory variables, it is possible to transform the model into a long-run representation,
showing the long run response of the dependent variable to a change in the explanatory
variables. The calculation of these estimated long-run coefficients is given by:
$\hat{\theta}_j = \frac{\sum_{i=0}^{q_j} \hat{\beta}_{j,i}}{1 - \sum_{i=1}^{p} \hat{\gamma}_i}$   (27.2)
The standard error of these long-run coefficients can be calculated from the standard errors
of the original regression using the delta method.
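In the ARDL(1,1) case written out above, Equation (27.2) reduces to the familiar long-run multiplier

$\hat{\theta}_1 = \frac{\hat{\beta}_{1,0} + \hat{\beta}_{1,1}}{1 - \hat{\gamma}_1}$

so that the long-run response is the sum of the short-run coefficients on $x$ scaled up by the persistence of the dependent variable.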
Cointegrating Relationships
Traditional methods of estimating cointegrating relationships, such as Engle-Granger (1987) or Johansen's (1991, 1995) method, or single-equation methods such as Fully Modified OLS or Dynamic OLS, either require all variables to be I(1) or require prior knowledge and specification of which variables are I(0) and which are I(1).
To alleviate this problem, Pesaran and Shin (1999) showed that cointegrating systems can be
estimated as ARDL models, with the advantage that the variables in the cointegrating relationship can be either I(0) or I(1), without needing to pre-specify which are I(0) or I(1).
Pesaran and Shin also note that unlike other methods of estimating cointegrating relationships, the ARDL representation does not require symmetry of lag lengths; each variable can
have a different number of lag terms.
The cointegrating regression form of an ARDL model is obtained by transforming (27.1) into
differences and substituting the long-run coefficients from (27.2):
$\Delta y_t = \sum_{i=1}^{p-1} \tilde{\gamma}_i \Delta y_{t-i} + \sum_{j=1}^{k} \sum_{i=0}^{q_j - 1} \Delta X_{j,t-i}'\tilde{\beta}_{j,i} - \phi\, EC_{t-1} + \epsilon_t$   (27.3)

where

$EC_t = y_t - a - \sum_{j=1}^{k} X_{j,t}'\hat{\theta}_j$

$\phi = 1 - \sum_{i=1}^{p} \gamma_i$

$\tilde{\gamma}_i = -\sum_{m=i+1}^{p} \gamma_m$

$\tilde{\beta}_{j,i} = -\sum_{m=i+1}^{q_j} \beta_{j,m}$   (27.4)
The standard error of the cointegrating relationship coefficients can be calculated from the
standard errors of the original regression using the delta method.
Bounds Testing
Using the cointegrating relationship form in Equation (27.3), Pesaran, Shin and Smith (2001)
describe a methodology for testing whether the ARDL model contains a level (or long-run)
relationship between the independent variable and the regressors.
The Bounds test procedure transforms (27.3) into the following representation:

$\Delta y_t = \sum_{i=1}^{p-1} \tilde{\gamma}_i \Delta y_{t-i} + \sum_{j=1}^{k} \sum_{i=0}^{q_j - 1} \Delta X_{j,t-i}'\tilde{\beta}_{j,i} - \rho\, y_{t-1} - a - \sum_{j=1}^{k} X_{j,t-1}'\delta_j + \epsilon_t$   (27.5)

The test for the existence of level relationships is then simply a test of

$\rho = 0$
$\delta_1 = \delta_2 = \cdots = \delta_k = 0$   (27.6)
The coefficient estimates used in the test may be obtained from a regression using (27.1), or
can be estimated directly from a regression using (27.5).
The test statistic based on Equation (27.5) has a different distribution under the null hypothesis (of no level relationships), depending on whether the regressors are all I(0) or all I(1).
Further, under both cases the distribution is non-standard. Pesaran, Shin and Smith provide
critical values for the cases where all regressors are I(0) and the cases where all regressors
are I(1), and suggest using these critical values as bounds for the more typical cases where
the regressors are a mixture of I(0) and I(1).
The Specification tab allows you to specify the variables used in the regression, and
whether to let EViews automatically detect the appropriate number of lags for each variable.
To begin, enter the name of the dependent variable, followed by a space delimited list of
dynamic regressors (i.e., variables which will have lag terms in the model) in the Dynamic
Specification edit box. You may then select whether you wish EViews to automatically
select the number of lags for each variable, or whether the number of lags is fixed, using the
Automatic Selection and Fixed radio buttons.
If you choose automatic selection, you must then select the maximum number of lags to test
for the dependent variable and regressors using the Max lags dropdowns. If you select to
use a fixed number of lags, the same dropdowns can be used to select the number of lags for
the dependent variable and regressors. Note that when using fixed lags for regressors, each
regressor will be given the same number of lags.
The Fixed regressors area lets you specify any fixed/static variables (regressors without
lags). The Trend specification dropdown may be used to specify whether the model
includes a constant term, or a constant and trend. Any other static regressors can be specified by entering their name in the List of fixed regressors box.
The Options tab allows you to specify the type of model selection to be used if you chose
automatic selection on the Specification tab. You may choose between the Akaike Information Criterion (AIC), Schwarz Criterion (SC), Hannan-Quinn Criterion (HQ), or the Adjusted
R-squared.
You may also select the type of covariance matrix to use in the final estimates, using the
Coefficient covariance matrix dropdown. Note that this selection does not affect the model
selection criteria.
An Example
Greene (2008, page 685) uses an ARDL model on data from a number of quarterly US macroeconomic variables between 1950 and 2000. In particular, he estimates an ARDL model
using the log of real consumption as the dependent variable, and the log of real GDP as a
single regressor (along with a constant).
We can open the Greene data with the following EViews command:
wfopen http://www.stern.nyu.edu/~wgreene/Text/Edition7/TableF52.txt
Next we bring up the ARDL estimation dialog by clicking on Quick/Estimate Equation and
using the Method combo to change the estimation method to ARDL.
Following Greene's example, we estimate an ARDL model with the log of real consumption
as the dependent variable, and log GDP as the regressor, by entering:
log(realcons) log(realgdp)
We do not make any changes to the Options tab, leaving all settings at their default value.
The results are shown below:
Variable                Coefficient    Std. Error    t-Statistic    Prob.*
LOG(REALCONS(-1))        0.854510      0.064428      13.26300       0.0000
LOG(REALCONS(-2))        0.258776      0.082121       3.151153      0.0019
LOG(REALCONS(-3))       -0.156598      0.071521      -2.189542      0.0298
LOG(REALCONS(-4))       -0.194069      0.070465      -2.754106      0.0065
LOG(REALCONS(-5))        0.169457      0.048486       3.494951      0.0006
LOG(REALGDP)             0.547615      0.048246      11.35042       0.0000
LOG(REALGDP(-1))        -0.475684      0.051091      -9.310547      0.0000
@QUARTER=1              -0.000348      0.001176      -0.295813      0.7677
@QUARTER=2              -0.000451      0.001165      -0.386775      0.6994
@QUARTER=3               0.000854      0.001171       0.729123      0.4668
C                       -0.058209      0.027842      -2.090705      0.0379

R-squared                0.999873      Mean dependent var      7.902158
Adjusted R-squared       0.999867      S.D. dependent var      0.502623
S.E. of regression       0.005805      Akaike info criterion  -7.406420
Sum squared resid        0.006336      Schwarz criterion      -7.224378
Log likelihood           747.9388      Hannan-Quinn criter.   -7.332743
F-statistic              148407.0      Durbin-Watson stat      1.865392
Prob(F-statistic)        0.000000

*Note: p-values and any subsequent tests do not account for model selection.
The first part of the output gives a summary of the settings used during estimation. Here we
see that automatic selection (using the Akaike Information Criterion) was used with a maximum of 8 lags of both the dependent variable and the regressor. Out of the 72 models evaluated, the procedure has selected an ARDL(5,1) model - 5 lags of the dependent variable,
LOG(REALCONS), and a single lag (along with the level value) of LOG(REALGDP).
EViews also notes that since the selected model has fewer lags than the maximum, the sample used in the final estimation will not match that used during selection.
The rest of the output is standard least squares output for the selected model. Note that each
of the regressors (with the exception of the quarterly dummies) is significant, and that the
coefficient on the one period lag of the dependent variable, LOG(REALCONS), is quite high,
at 0.85.
To view the relative superiority of the selected model against alternatives, we click on View/
Model Selection Summary/Criteria Graph to view a graph of the AIC of the top twenty
models.
The selected ARDL(5,1) model was only slightly better than an ARDL(5,2) model, which
was in turn only slightly better than an ARDL(5,3). It is notable that the top three models all
use five lags of the dependent variable.
Rather than using automatic selection to choose the best model, Greene (Example 20.4) analyzes these data with a fixed ARDL(3,3) model. We can replicate this by pressing the Estimate button to bring up the Equation Estimation dialog again. We change the number of
lags on both dependent and regressors to 3, and then select the Fixed radio button to switch
off automatic selection.
Variable                Coefficient    Std. Error    t-Statistic    Prob.*
LOG(REALCONS(-1))        0.723341      0.069767      10.36794       0.0000
LOG(REALCONS(-2))        0.391367      0.079618       4.915576      0.0000
LOG(REALCONS(-3))       -0.233653      0.068672      -3.402444      0.0008
LOG(REALGDP)             0.565088      0.051953      10.87699       0.0000
LOG(REALGDP(-1))        -0.390884      0.083934      -4.657023      0.0000
LOG(REALGDP(-2))        -0.237950      0.086882      -2.738778      0.0068
LOG(REALGDP(-3))         0.190243      0.058922       3.228753      0.0015
@QUARTER=1              -0.000259      0.001266      -0.204677      0.8380
@QUARTER=2              -0.000259      0.001259      -0.205412      0.8375
@QUARTER=3               0.000915      0.001256       0.728608      0.4671
C                       -0.109962      0.029236      -3.761208      0.0002

R-squared                0.999855      Mean dependent var      7.893303
Adjusted R-squared       0.999847      S.D. dependent var      0.507884
S.E. of regression       0.006274      Akaike info criterion  -7.251681
Sum squared resid        0.007479      Schwarz criterion      -7.070903
Log likelihood           739.7939      Hannan-Quinn criter.   -7.178530
F-statistic              131047.1      Durbin-Watson stat      1.785975
Prob(F-statistic)        0.000000

*Note: p-values and any subsequent tests do not account for model selection.
The one-period lag on the dependent variable remains high, at 0.72, and again all coefficients are significant (with the exception of the dummies).
We can then examine the long-run coefficients by selecting View/Coefficient Diagnostics/
Cointegration and Long Run Form.
Cointegrating Form

Variable                  Coefficient    Std. Error    t-Statistic    Prob.
DLOG(REALCONS(-1))        -0.157714      0.069795      -2.259665      0.0250
DLOG(REALCONS(-2))         0.233653      0.068672       3.402444      0.0008
DLOG(REALGDP)              0.565088      0.051953      10.876990      0.0000
DLOG(REALGDP(-1))          0.237950      0.086882       2.738778      0.0068
DLOG(REALGDP(-2))         -0.190243      0.058922      -3.228753      0.0015
D(@QUARTER = 1)           -0.000259      0.001266      -0.204677      0.8380
D(@QUARTER = 2)           -0.000259      0.001259      -0.205412      0.8375
D(@QUARTER = 3)            0.000915      0.001256       0.728608      0.4671
CointEq(-1)               -0.118944      0.030474      -3.903191      0.0001

Long Run Coefficients

Variable                  Coefficient    Std. Error    t-Statistic    Prob.
LOG(REALGDP)               1.063498      0.007908     134.480542      0.0000
@QUARTER=1                -0.002178      0.010645      -0.204601      0.8381
@QUARTER=2                -0.002174      0.010583      -0.205420      0.8375
@QUARTER=3                 0.007691      0.010799       0.712205      0.4772
C                         -0.924478      0.065935     -14.021150      0.0000
The long-run coefficients, at the bottom of the output, show that the long-run impact of a change in log(REALGDP) on log(REALCONS) has essentially no lagged effects. The long-run change is very close to being equal to the initial change (the coefficient is close to one).
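The long-run coefficient on LOG(REALGDP) can be reproduced from the ARDL(3,3) level estimates using Equation (27.2) (an arithmetic check using the values reported above):

$\hat{\theta}_1 = \frac{0.565088 - 0.390884 - 0.237950 + 0.190243}{1 - 0.723341 - 0.391367 + 0.233653} = \frac{0.126497}{0.118945} \approx 1.0635$

Note also that the denominator equals the negative of the CointEq(-1) coefficient, $\hat{\phi} \approx 0.118944$, up to rounding.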
In a second example, Example 20.5, Greene examines an ARDL(1,1) model's cointegrating
form. To perform this in EViews, we again bring up the Equation Estimation dialog and
change the number of lags to 1 for both dependent and regressors, remove the quarterly
dummies, and then click OK.
Cointegrating Form

Variable              Coefficient    Std. Error    t-Statistic    Prob.
DLOG(REALGDP)          0.584210      0.051411      11.363511      0.0000
CointEq(-1)           -0.095416      0.030589      -3.119291      0.0021

Long Run Coefficients

Variable              Coefficient    Std. Error    t-Statistic    Prob.
LOG(REALGDP)           1.060339      0.010630      99.753786      0.0000
C                     -0.894307      0.089041     -10.043809      0.0000
References
Greene, William H. (2008). Econometric Analysis, 6th Edition, Upper Saddle River, NJ: Prentice-Hall.
Pesaran, M.H. and Shin, Y. (1999). An Autoregressive Distributed Lag Modelling Approach to Cointegration Analysis. Econometrics and Economic Theory in the 20th Century: The Ragnar Frisch Centennial Symposium, Strom, S. (ed.) Cambridge University Press.
Pesaran, M.H., Shin, Y. and Smith, R. (2001). Bounds Testing Approaches to the Analysis of Level Relationships. Journal of Applied Econometrics, 16, 289-326.
Background
Suppose that a binary dependent variable, y , takes on values of zero and one. A simple linear regression of y on x is not appropriate, since among other things, the implied model of
the conditional mean places inappropriate restrictions on the residuals of the model. Furthermore, the fitted value of y from a simple linear regression is not restricted to lie
between zero and one.
Instead, we model the probability of observing a value of one as:

$\Pr(y_i = 1 \mid x_i, \beta) = 1 - F(-x_i'\beta)$   (28.1)
where $F$ is a continuous, strictly increasing function that takes a real value and returns a value ranging from zero to one. In this, and in the remaining discussion of this chapter, we adopt the standard simplifying convention of assuming that the index specification is linear in the parameters so that it takes the form $x_i'\beta$. Note, however, that EViews allows you to estimate models with nonlinear index specifications.
The choice of the function $F$ determines the type of binary model. It follows that:

$\Pr(y_i = 0 \mid x_i, \beta) = F(-x_i'\beta)$   (28.2)

Given such a specification, we can estimate the parameters of this model using the method of maximum likelihood. The likelihood function is given by:

$l(\beta) = \sum_{i=1}^{n} \left[\, y_i \log\bigl(1 - F(-x_i'\beta)\bigr) + (1 - y_i)\log\bigl(F(-x_i'\beta)\bigr)\right]$   (28.3)
The first order conditions for this likelihood are nonlinear so that obtaining parameter estimates requires an iterative solution. By default, EViews uses a second derivative method for
iteration and computation of the covariance matrix of the parameter estimates. As discussed
below, EViews allows you to override these defaults using the Options dialog (see Second
Derivative Methods on page 1011 for additional details on the estimation methods).
There are two alternative interpretations of this specification that are of interest. First, the
binary model is often motivated as a latent variables specification. Suppose that there is an
unobserved latent variable $y_i^{*}$ that is linearly related to $x$:

$y_i^{*} = x_i'\beta + u_i$   (28.4)

where $u_i$ is a random disturbance. The observed dependent variable is then determined by whether $y_i^{*}$ exceeds a threshold value:

$y_i = \begin{cases} 1 & \text{if } y_i^{*} > 0 \\ 0 & \text{if } y_i^{*} \le 0 \end{cases}$   (28.5)

In this case, the threshold is set to zero, but the choice of a threshold value is irrelevant, so long as a constant term is included in $x_i$. Then:

$\Pr(y_i = 1 \mid x_i, \beta) = \Pr(y_i^{*} > 0) = \Pr(x_i'\beta + u_i > 0) = 1 - F_u(-x_i'\beta)$   (28.6)

where $F_u$ is the cumulative distribution function of $u$. Second, the binary model may be viewed as a conditional mean specification, since

$E(y_i \mid x_i, \beta) = 1 \cdot \Pr(y_i = 1 \mid x_i, \beta) + 0 \cdot \Pr(y_i = 0 \mid x_i, \beta) = \Pr(y_i = 1 \mid x_i, \beta)$   (28.7)
Using the conditional mean interpretation, we may write the binary model as a regression:

$y_i = \bigl(1 - F(-x_i'\beta)\bigr) + \epsilon_i$   (28.8)

where $\epsilon_i$ is a residual representing the deviation of the binary $y_i$ from its conditional mean. Then:

$E(\epsilon_i \mid x_i, \beta) = 0$
$\mathrm{var}(\epsilon_i \mid x_i, \beta) = F(-x_i'\beta)\bigl(1 - F(-x_i'\beta)\bigr)$   (28.9)
We will use the conditional mean interpretation in our discussion of binary model residuals
(see Make Residual Series on page 311).
Probit

$\Pr(y_i = 1 \mid x_i, \beta) = 1 - \Phi(-x_i'\beta) = \Phi(x_i'\beta)$

where $\Phi$ is the cumulative distribution function of the standard normal distribution.

Logit

$\Pr(y_i = 1 \mid x_i, \beta) = 1 - \frac{e^{-x_i'\beta}}{1 + e^{-x_i'\beta}} = \frac{e^{x_i'\beta}}{1 + e^{x_i'\beta}}$

which is based upon the cumulative distribution function for the logistic distribution.

Extreme value (Gompit)

$\Pr(y_i = 1 \mid x_i, \beta) = 1 - \bigl(1 - \exp(-e^{-x_i'\beta})\bigr) = \exp(-e^{-x_i'\beta})$

which is based upon the CDF for the Type-I extreme value distribution. Note that this distribution is skewed.
For example, consider the probit specification example described in Greene (2008, p. 781-783) where we analyze the effectiveness of teaching methods on grades. The variable GRADE represents improvement on grades following exposure to the new teaching method PSI (the data are provided in the workfile "Binary.WF1"). Also controlling for alternative measures of knowledge (GPA and TUCE), we have the specification:

grade c gpa tuce psi
Once you have specified the model, click OK. EViews estimates the parameters of the model
using iterative procedures, and will display information in the status line. EViews requires
that the dependent variable be coded with the values zero-one with all other observations
dropped from the estimation.
Following estimation, EViews displays results in the equation window. The top part of the
estimation output is given by:
Dependent Variable: GRADE
Method: ML - Binary Probit (BFGS / Marquardt steps)
Date: 03/09/15   Time: 15:54
Sample: 1 32
Included observations: 32
Convergence achieved after 23 iterations
Coefficient covariance computed using observed Hessian

Variable        Coefficient    Std. Error    z-Statistic    Prob.
C               -7.452320      2.542472      -2.931131      0.0034
GPA              1.625810      0.693882       2.343063      0.0191
TUCE             0.051729      0.083890       0.616626      0.5375
PSI              1.426332      0.595038       2.397045      0.0165
The header contains basic information regarding the estimation technique (ML for maximum likelihood) and the sample used in estimation, as well as information on the number
of iterations required for convergence, and on the method used to compute the coefficient
covariance matrix.
Displayed next are the coefficient estimates, asymptotic standard errors, z-statistics and corresponding p-values.
Interpretation of the coefficient values is complicated by the fact that estimated coefficients
from a binary model cannot be interpreted as the marginal effect on the dependent variable.
The marginal effect of $x_j$ on the conditional probability is given by:

$\frac{\partial E(y_i \mid x_i, \beta)}{\partial x_{ij}} = f(-x_i'\beta)\,\beta_j$   (28.10)

where $f(x) = dF(x)/dx$ is the density function corresponding to $F$. Note that $\beta_j$ is weighted by a factor that depends on the values of all of the regressors, so the magnitude of the marginal effect varies across observations; the direction of the effect, however, is determined by the sign of $\beta_j$. In addition, the ratios of coefficients provide a measure of the relative changes in the probabilities:

$\frac{\beta_j}{\beta_k} = \frac{\partial E(y_i \mid x_i, \beta) / \partial x_{ij}}{\partial E(y_i \mid x_i, \beta) / \partial x_{ik}}$   (28.11)
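As an illustration of these formulas, here is a minimal sketch using the probit estimates reported above and purely hypothetical regressor values (GPA=3.0, TUCE=20, PSI=1; the scalar names are arbitrary):

' index, fitted probability, and marginal effect of GPA at the chosen values
scalar xb = -7.452320 + 1.625810*3.0 + 0.051729*20 + 1.426332*1
scalar prob = @cnorm(xb)              ' estimated Pr(GRADE=1), roughly 0.45
scalar me_gpa = @dnorm(xb)*1.625810   ' marginal effect of GPA, roughly 0.64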
In addition to the summary statistics of the dependent variable, EViews also presents the following summary statistics:
McFadden R-squared       0.377478      Mean dependent var       0.343750
S.D. dependent var       0.482559      S.E. of regression       0.386128
Akaike info criterion    1.051175      Sum squared resid        4.174660
Schwarz criterion        1.234392      Log likelihood         -12.81880
Hannan-Quinn criter.     1.111907      Restr. log likelihood  -20.59173
LR statistic             15.54585      Avg. log likelihood     -0.400588
Prob(LR statistic)       0.001405
First, there are several familiar summary descriptive statistics: the mean and standard deviation of the dependent variable, standard error of the regression, and the sum of the squared
residuals. The latter two measures are computed in the usual fashion using the ordinary
residuals:
$e_i = y_i - E(y_i \mid x_i, \hat{\beta}) = y_i - \bigl(1 - F(-x_i'\hat{\beta})\bigr)$   (28.12)
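The reported values are consistent with this definition; for example, the sum of squared residuals and the standard error of the regression in the output above satisfy $0.386128^2 \times (32 - 4) \approx 4.1747$, reflecting the degrees-of-freedom adjustment ($n - k = 28$) used in computing the standard error.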
Estimation Options
The iteration limit, convergence criterion, and coefficient name may be set in the usual fashion by clicking on the Options tab in the Equation Estimation dialog. In addition, there are
options that are specific to binary models. These options are described below.
Optimization
By default, EViews uses Newton-Raphson with Marquardt steps to obtain parameter estimates.
If you wish, you can use the Optimization method dropdown menu to select a different
method. In addition to Newton-Raphson, you may select BFGS, OPG - BHHH, or EViews
legacy.
For non-legacy estimation, the Step method may be chosen between Marquardt, Dogleg,
and Line search. For legacy estimation the Legacy method is set to the default Quadratic
hill climbing (Marquardt steps) or BHHH (line search).
Note that for legacy estimation, the default optimization algorithm does influence the
default method of computing coefficient covariances.
See Optimization Method on page 1006 and Technical Notes on page 353 for discussion.
Coefficient Covariances
For binary dependent variable models, EViews allows you to estimate the standard errors
using the default (inverse of the estimated information matrix), quasi-maximum likelihood
(Huber/White) or generalized linear model (GLM) methods.
In addition, for ordinary and GLM covariances, you may choose to compute the information
matrix estimate using the outer-product of the gradients (OPG) or using the negative of the
matrix of log-likelihood second derivatives (Hessian - observed).
You may elect to compute your covariances with or without a d.f. Adjustment.
Note that for legacy estimation, the default algorithm does influence the default method of
computing coefficient covariances.
See Technical Notes on page 353 for discussion.
Starting Values
As with other estimation procedures, EViews allows you to specify starting values. In the
options menu, select one of the items from the dropdown menu. You can use the default
EViews values, or you can choose a fraction of those values, zero coefficients, or user-supplied values. To employ the latter, enter the coefficients in the C coefficient vector, and select
User Supplied in the dropdown menu.
The EViews default values are selected using an algorithm that is specialized for each type of
binary model. Unless there is a good reason to choose otherwise, we recommend that you
use the default values.
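For the user-supplied case, a minimal command-line sketch is to place values in the C coefficient vector before estimating (the values below are purely illustrative):
param c(1) 0 c(2) 0.5 c(3) 0 c(4) 0.5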
Estimation Problems
In general, estimation of binary models is quite straightforward, and you should experience
little difficulty in obtaining parameter estimates. There are a few situations, however, where
you may experience problems.
First, you may get the error message "Dependent variable has no variance." This error
means that there is no variation in the dependent variable (the variable is always one or
zero for all valid observations). This error most often occurs when EViews excludes the
entire sample of observations for which y takes values other than zero or one, leaving too
few observations for estimation.
You should make certain to recode your data so that the binary indicators take the values
zero and one. This requirement is not as restrictive as it may first seem, since the recoding
may easily be done using auto-series. Suppose, for example, that you have data where y
takes the values 1000 and 2000. You could then use the boolean auto-series, y=1000, or
perhaps, y<1500, as your dependent variable.
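For instance, the recoding could also be made explicit by creating a new 0-1 series (Y and YBIN are the hypothetical names used in this illustration):
series ybin = (y<1500)
' YBIN equals 1 when Y is 1000 and 0 when Y is 2000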
Second, you may receive an error message of the form "[xxxx] perfectly predicts binary
response [success/failure]", where xxxx is a sample condition. This error occurs when one
of the regressors contains a separating value for which all of the observations with values
below the threshold are associated with a single binary response, and all of the values above
the threshold are associated with the alternative response. In this circumstance, the method
of maximum likelihood breaks down.
For example, if all values of the explanatory variable x > 0 are associated with y = 1 , then
x is a perfect predictor of the dependent variable, and EViews will issue an error message
and stop the estimation procedure.
The only solution to this problem is to remove the offending variable from your specification. Usually, the variable has been incorrectly entered in the model, as when a researcher
includes a dummy variable that is identical to the dependent variable (for discussion, see
Greene, 2008).
Third, you may experience the error "Non-positive likelihood value observed for observation [xxxx]". This error most commonly arises when the starting values for estimation are
poor. The default EViews starting values should be adequate for most uses. You may wish to
check the Options dialog to make certain that you are not using user-specified starting values, or you may experiment with alternative user-specified values.
Lastly, the error message "Near-singular matrix" indicates that EViews was unable to invert
the matrix required for iterative estimation. This will occur if the model is not identified. It
may also occur if the current parameters are far from the true values. If you believe the latter
to be the case, you may wish to experiment with starting values or the estimation algorithm.
The BHHH and quadratic hill-climbing algorithms are less sensitive to this particular problem than is Newton-Raphson.
                               Mean
Variable         Dep=0         Dep=1           All
C              1.000000      1.000000      1.000000
GPA            2.951905      3.432727      3.117188
TUCE           21.09524      23.54545      21.93750
PSI            0.285714      0.727273      0.437500

                        Standard Deviation
Variable         Dep=0         Dep=1           All
C              0.000000      0.000000      0.000000
GPA            0.357220      0.503132      0.466713
TUCE           3.780275      3.777926      3.901509
PSI            0.462910      0.467099      0.504016

Observations         21            11            32
                      Estimated Equation               Constant Probability
                  Dep=0     Dep=1     Total         Dep=0     Dep=1     Total
P(Dep=1)<=C          18         3        21            21        11        32
P(Dep=1)>C            3         8        11             0         0         0
Total                21        11        32            21        11        32
Correct              18         8        26            21         0        21
% Correct         85.71     72.73     81.25        100.00      0.00     65.63
% Incorrect       14.29     27.27     18.75          0.00    100.00     34.38
Total Gain*      -14.29     72.73     15.63
Percent Gain**       NA     72.73     45.45
Overall, the estimated equation is 15.62 percentage points better at predicting responses than the constant
probability model. This change represents a 45.45 percent improvement over the 65.62 percent correct prediction of the default model.
The bottom portion of the equation window contains analogous prediction results based
upon expected value calculations:
                      Estimated Equation               Constant Probability
                  Dep=0     Dep=1     Total         Dep=0     Dep=1     Total
E(# of Dep=0)     16.89      4.14     21.03         13.78      7.22     21.00
E(# of Dep=1)      4.11      6.86     10.97          7.22      3.78     11.00
Total             21.00     11.00     32.00         21.00     11.00     32.00
Correct           16.89      6.86     23.74         13.78      3.78     17.56
% Correct         80.42     62.32     74.20         65.63     34.38     54.88
% Incorrect       19.58     37.68     25.80         34.38     65.63     45.12
Total Gain*       14.80     27.95     19.32
Percent Gain**    43.05     42.59     42.82
In the left-hand table, we compute the expected number of $y = 0$ and $y = 1$ observations in the sample. For example, E(# of Dep=0) is computed as:

$\sum_i \Pr(y_i = 0 \mid x_i, \beta) = \sum_i F(-x_i'\beta)$ ,   (28.13)

where the cumulative distribution function $F$ is for the normal, logistic, or extreme value
distribution.
In the right-hand table, we compute the expected number of $y = 0$ and $y = 1$ observations for a model estimated with only a constant. For this restricted model, E(# of Dep=0) is computed as $n(1 - \tilde{p})$, where $\tilde{p}$ is the sample proportion of $y = 1$ observations. EViews also reports summary measures of the total gain and the percent (of the incorrect expectation) gain.
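For example, using the values reported above, the constant probability model with $n = 32$ and $\tilde{p} = 11/32 = 0.34375$ gives E(# of Dep=0) $= 32 \times (1 - 0.34375) = 21.00$, which matches the corresponding entry in the right-hand table.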
Among the 21 individuals with y = 0 , the expected number of y = 0 observations in the
estimated model is 16.89. Among the 11 observations with y = 1 , the expected number of
y = 1 observations is 6.86. These numbers represent roughly a 19.32 percentage point
(42.82 percent) improvement over the constant probability model.
Goodness-of-Fit Tests
This view allows you to perform Pearson $\chi^2$-type tests of goodness-of-fit. EViews carries out
two goodness-of-fit tests: Hosmer-Lemeshow (1989) and Andrews (1988a, 1988b). The idea
underlying these tests is to compare the fitted expected values to the actual values by group.
If these differences are large, we reject the model as providing an insufficient fit to the
data.
Details on the two tests are described in the Technical Notes on page 353. Briefly, the tests differ in how the observations are grouped and in the asymptotic distribution of the test statistic. The Hosmer-Lemeshow test groups observations on the basis of the predicted probability that $y = 1$. The Andrews test is a more general test that groups observations on the basis of any series or series expression.
To carry out the test, select View/Goodness-of-Fit Test...
You must first decide on the grouping variable. You can select Hosmer-Lemeshow (predicted probability) grouping by clicking on the corresponding radio button, or you can select series
grouping, and provide a series to be used in forming the groups.
Next, you need to specify the grouping rule. EViews allows you to group on the basis of
either distinct values or quantiles of the grouping variable.
If your grouping variable takes relatively few distinct values, you should choose the Distinct
values grouping. EViews will form a separate group for each distinct value of the grouping
variable. For example, if your grouping variable is TUCE, EViews will create a group for each
distinct TUCE value and compare the expected and actual numbers of y = 1 observations
in each group. By default, EViews limits you to 100 distinct values. If the number of distinct values in
your grouping series exceeds this limit, EViews will return an error message. If you wish to
evaluate the test for more than 100 values, you must explicitly increase the maximum number of distinct values.
If your grouping variable takes on a large number of distinct values, you should select
Quantiles, and enter the number of desired bins in the edit field. If you select this method,
EViews will group your observations into the number of specified bins, on the basis of the
ordered values of the grouping series. For example, if you choose to group by TUCE, select
Quantiles, and enter 10, EViews will form groups on the basis of TUCE deciles.
If you choose to group by quantiles and there are ties in the grouping variable, EViews may
not be able to form the exact number of groups you specify unless tied values are assigned
to different groups. Furthermore, the number of observations in each group may be very
unbalanced. Selecting the randomize ties option randomly assigns ties to adjacent groups in
order to balance the number of observations in each group.
Since the properties of the test statistics require that the number of observations in each
group is large, some care needs to be taken in selecting a rule so that you do not end up
with a large number of cells, each containing small numbers of observations.
By default, EViews will perform the test using Hosmer-Lemeshow grouping. The default
grouping method is to form deciles. The test result using the default specification is given
by:
Goodness-of-Fit Evaluation for Binary Specification
Andrews and Hosmer-Lemeshow Tests
Equation: EQ_PROBIT
Date: 03/09/15 Time: 16:16
Grouping based upon predicted risk (randomize ties)
      Quantile of Risk          Dep=0                 Dep=1          Total     H-L
        Low      High     Actual    Expect      Actual    Expect      Obs     Value
 1    0.0161   0.0185        3     2.94722         0     0.05278       3     0.05372
 2    0.0186   0.0272        3     2.93223         0     0.06777       3     0.06934
 3    0.0309   0.0457        3     2.87888         0     0.12112       3     0.12621
 4    0.0531   0.1088        3     2.77618         0     0.22382       3     0.24186
 5    0.1235   0.1952        2     3.29779         2     0.70221       4     2.90924
 6    0.2732   0.3287        3     2.07481         0     0.92519       3     1.33775
 7    0.3563   0.5400        2     1.61497         1     1.38503       3     0.19883
 8    0.5546   0.6424        1     1.20962         2     1.79038       3     0.06087
 9    0.6572   0.8342        0     0.84550         3     2.15450       3     1.17730
10    0.8400   0.9522        1     0.45575         3     3.54425       4     0.73351

Total                       21     21.0330        11     10.9670      32     6.90863

H-L Statistic        6.9086      Prob. Chi-Sq(8)      0.5465
Andrews Statistic   20.6045      Prob. Chi-Sq(10)     0.0240
The columns labeled Quantile of Risk depict the high and low value of the predicted
probability for each decile. Also depicted are the actual and expected number of observations in each group, as well as the contribution of each group to the overall Hosmer-Lemeshow (H-L) statistic; large values indicate large differences between the actual and
predicted values for that decile.
The $\chi^2$ statistics are reported at the bottom of the table. Since grouping on the basis of the
fitted values falls within the structure of an Andrews test, we report results for both the H-L
and the Andrews test statistic. The p-value for the H-L test is large while the p-value for the
Andrews test statistic is small, providing mixed evidence of problems. Furthermore, the relatively small sample sizes suggest that caution is in order in interpreting the results.
Forecast
EViews allows you to compute either the fitted probability, p i = 1 F ( x i b ) , or the fitted values of the index x i b . From the equation toolbar select Proc/Forecast (Fitted Probability/Index), and then click on the desired entry.
As with other estimators, you can select a forecast sample, and display a graph of the forecast. If your explanatory variables, x t , include lagged values of the binary dependent variable y t , forecasting with the Dynamic option instructs EViews to use the fitted values
p t 1 , to derive the forecasts, in contrast with the Static option, which uses the actual
(lagged) y t 1 .
Neither forecast evaluations nor automatic calculation of standard errors of the forecast are
currently available for this estimation method. The latter can be computed using the variance matrix of the coefficients obtained by displaying the covariance matrix view using
View/Covariance Matrix or using the @covariance member function.
You can use the fitted index in a variety of ways, for example, to compute the marginal
effects of the explanatory variables. Simply forecast the fitted index and save the results in a
series, say XB. Then the auto-series @dnorm(-xb), @dlogistic(-xb), or @dextreme(-xb) may be multiplied by the coefficients of interest to provide an estimate of the derivatives of the expected value of $y_i$ with respect to the j-th variable in $x_i$:

$\dfrac{\partial E(y_i \mid x_i, \beta)}{\partial x_{ij}} = f(-x_i'\beta)\,\beta_j$ .   (28.14)
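As a minimal sketch, assuming the fitted index has been saved in the series XB and that GPA is the second coefficient of the estimated equation EQ_PROBIT, the marginal effect of GPA at each observation could be formed as:
series me_gpa = @dnorm(-xb)*eq_probit.@coefs(2)
' f(-x'b) evaluated at each observation, scaled by the GPA coefficient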
Ordinary        $e_{oi} = y_i - \hat{p}_i$

Standardized    $e_{si} = \dfrac{y_i - \hat{p}_i}{\sqrt{\hat{p}_i(1 - \hat{p}_i)}}$

Generalized     $e_{gi} = \dfrac{(y_i - \hat{p}_i)\, f(-x_i'\hat\beta)}{\hat{p}_i(1 - \hat{p}_i)}$

where $\hat{p}_i = 1 - F(-x_i'\hat\beta)$ is the fitted probability, and the distribution and density functions $F$ and $f$ depend on the specified distribution.
The ordinary residuals have been described above. The standardized residuals are simply
the ordinary residuals divided by an estimate of the theoretical standard deviation. The generalized residuals are derived from the first order conditions that define the ML estimates.
The first order conditions may be regarded as an orthogonality condition between the generalized residuals and the regressors x .
$\dfrac{\partial l(\beta)}{\partial \beta} = \sum_{i=1}^{N} \dfrac{\big(y_i - (1 - F(-x_i'\beta))\big)\, f(-x_i'\beta)}{F(-x_i'\beta)\big(1 - F(-x_i'\beta)\big)}\, x_i = \sum_{i=1}^{N} e_{g,i}\, x_i$ .   (28.15)
This property is analogous to the orthogonality condition between the (ordinary) residuals
and the regressors in linear regression models.
The usefulness of the generalized residuals derives from the fact that you can easily obtain
the score vectors by multiplying the generalized residuals by each of the regressors in x .
These scores can be used in a variety of LM specification tests (see Chesher, Lancaster and
Irish (1985), and Gourieroux, Monfort, Renault, and Trognon (1987)). We provide an example below.
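For instance, assuming the generalized residuals have been saved in a series named RESID_G (using Proc/Make Residual Series), the scores are simply products of RESID_G with the regressors:
series score_gpa = resid_g*gpa
series score_tuce = resid_g*tuce
series score_psi = resid_g*psi
' each score series sums approximately to zero at the maximum likelihood estimates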
Demonstrations
You can easily use the results of a binary model in additional analysis. Here, we provide
demonstrations of using EViews to plot a probability response curve and to test for heteroskedasticity in the residuals.
The Scenario Specification dialog allows us to define a set of assumptions under which we
will solve the model. Click on the Overrides tab and enter GPA PSI TUCE. Defining these
overrides tells EViews to use the values in the series GPA_1, PSI_1, and TUCE_1 instead of
the original GPA, PSI, and TUCE when solving for GRADE under Scenario 1.
Having defined the first scenario, we must create the series GPA_1, PSI_1 and TUCE_1 in
our workfile. We wish to use these series to evaluate the GRADE probabilities for various
values of GPA holding TUCE equal to its mean value and PSI equal to 0.
First, we will use the command line to fill GPA_1 with a grid of values ranging from 2 to 4.
The easiest way to do this is to use the @trend function:
series gpa_1 = 2+(4-2)*@trend/(@obs(@trend)-1)
Recall that @trend creates a series that begins at 0 in the first observation of the sample,
and increases by 1 for each subsequent observation, up through @obs-1.
Next we create series TUCE_1 containing the mean values of TUCE and a series PSI_1 which
we set to zero:
series tuce_1 = @mean(tuce)
series psi_1 = 0
Having prepared our data for the first scenario, we will now use the model object to define
an alternate scenario where PSI=1. Return to the Select Scenario tab, select Copy Scenario,
then select Scenario 1 as the Source, and New Scenario as the Destination. Copying Scenario 1 creates a new scenario, Scenario 2, that instructs EViews to use the values in the
series GPA_2, PSI_2, and TUCE_2 when solving for GRADE. These values are initialized
from the corresponding Scenario 1 series defined previously. We then set PSI_2 equal to 1 by
issuing the command
series psi_2 = 1
We are now ready to solve the model under the two scenarios. Click on the Solve button and
set the Active solution scenario to Scenario 1 and the Alternate solution scenario to Scenario 2. Be sure to click on the checkbox Solve for Alternate along with Active so that
EViews knows to solve for both. You can safely ignore the remaining solution settings and
simply click on OK.
EViews will report that your model has solved successfully and will place the solutions in
the series GRADE_1 and GRADE_2, respectively. To display the results, select Object/New
Object.../Group, and enter:
gpa_1 grade_1 grade_2
EViews will open an untitled group window containing these three series. Select View/
Graph/XY line to display a graph of the fitted GRADE probabilities plotted against GPA for
those with PSI=0 (GRADE_1) and with PSI=1 (GRADE_2), both computed with TUCE
evaluated at means.
We have annotated the graph slightly so that you can better judge the effect of the new
teaching methods (PSI) on the probability of grade improvement for various values of the
student's GPA.
$\mathrm{var}(u_i) = \exp(2 z_i'\gamma)$ ,   (28.16)

where $\gamma$ is an unknown parameter. In this example, we take PSI as the only variable in $z$.
The test statistic is the explained sum of squares from the regression:

$\dfrac{y_i - \hat{p}_i}{\sqrt{\hat{p}_i(1 - \hat{p}_i)}} = \dfrac{f(-x_i'\hat\beta)}{\sqrt{\hat{p}_i(1 - \hat{p}_i)}}\, x_i' b_1 + \dfrac{f(-x_i'\hat\beta)\,(-x_i'\hat\beta)}{\sqrt{\hat{p}_i(1 - \hat{p}_i)}}\, z_i' b_2 + v_i$ ,   (28.17)

To carry out the test, first forecast the fitted probability and the fitted index from the estimated probit, saving the results in the series P_HAT and XB, respectively.
Next, the dependent variable in the test regression may be obtained as the standardized
residual. Select Proc/Make Residual Series and select Standardized Residual. We will
save the series as BRMR_Y.
Lastly, we will use the built-in EViews functions for evaluating the normal density and
cumulative distribution function to create a group object containing the independent variables:
series fac=@dnorm(-xb)/@sqrt(p_hat*(1-p_hat))
group brmr_x fac (gpa*fac) (tuce*fac) (psi*fac)
Then estimate the artificial regression of BRMR_Y on the variables in the group BRMR_X using least squares. You can obtain the fitted values by clicking on the Forecast button in the equation toolbar of
this artificial regression. The LM test statistic is the sum of squares of these fitted values. If
the fitted values from the artificial regression are saved in BRMR_YF, the test statistic can be
saved as a scalar named LM_TEST:
scalar lm_test=@sumsq(brmr_yf)
which contains the value 1.5408. You can compare the value of this test statistic with the
critical values from the chi-square table with one degree of freedom. To save the p-value as
a scalar, enter the command:
scalar p_val=1-@cchisq(lm_test,1)
To examine the value of LM_TEST or P_VAL, double click on the name in the workfile window; the value will be displayed in the status line at the bottom of the EViews window. The
p-value in this example is roughly 0.21, so we have little evidence against the null hypothesis of homoskedasticity.
$y_i^* = x_i'\beta + \epsilon_i$ ,   (28.18)

where the $\epsilon_i$ are independent and identically distributed random variables. The observed $y_i$
is determined from $y_i^*$ using the rule:

$y_i = \begin{cases} 0 & \text{if } y_i^* \le \gamma_1 \\ 1 & \text{if } \gamma_1 < y_i^* \le \gamma_2 \\ 2 & \text{if } \gamma_2 < y_i^* \le \gamma_3 \\ \;\vdots & \\ M & \text{if } \gamma_M < y_i^* \end{cases}$   (28.19)

It is worth noting that the actual values chosen to represent the categories in $y$ are completely arbitrary. All the ordered specification requires is for ordering to be preserved so that
$y_i^* < y_j^*$ implies that $y_i \le y_j$.
It follows that the probabilities of observing each value of $y$ are given by

$\Pr(y_i = 0 \mid x_i, \beta, \gamma) = F(\gamma_1 - x_i'\beta)$
$\Pr(y_i = 1 \mid x_i, \beta, \gamma) = F(\gamma_2 - x_i'\beta) - F(\gamma_1 - x_i'\beta)$
$\Pr(y_i = 2 \mid x_i, \beta, \gamma) = F(\gamma_3 - x_i'\beta) - F(\gamma_2 - x_i'\beta)$
$\;\vdots$
$\Pr(y_i = M \mid x_i, \beta, \gamma) = 1 - F(\gamma_M - x_i'\beta)$   (28.20)

where $F$ is the cumulative distribution function of $\epsilon$.
The threshold values $\gamma$ are estimated along with the $\beta$ coefficients by maximizing the log
likelihood function:

$l(\beta, \gamma) = \sum_{i=1}^{N} \sum_{j=0}^{M} \log\big(\Pr(y_i = j \mid x_i, \beta, \gamma)\big) \cdot 1(y_i = j)$ ,   (28.21)

where $1(\cdot)$ is an indicator function which takes the value 1 if the argument is true, and 0 if
the argument is false. By default, EViews uses analytic second derivative methods to obtain
the parameter estimates and the variance matrix of the estimated coefficients (see Quadratic hill-climbing (Goldfeld-Quandt) on page 1012).
those values. EViews will estimate an identical model if the dependent variable is recoded
to take the values 1, 2, 3, 4, 5 or 10, 234, 3243, 54321, 123456.
(The data, which are from Allison, Truett, and D. V. Cicchetti (1976). "Sleep in Mammals:
Ecological and Constitutional Correlates," Science, 194, 732–734, are available in the
Order.WF1 dataset. A more complete version of the data may be obtained from StatLib:
http://lib.stat.cmu.edu/datasets/sleep).
To estimate this model, select Quick/Estimate Equation from the main menu. From the
Equation Estimation dialog, select estimation method ORDERED. The standard estimation
dialog will change to match this specification.
There are three parts to specifying an ordered variable model: the equation specification, the
error specification, and the sample specification. First, in the Equation specification field,
you should type the name of the ordered dependent variable followed by the list of your
regressors, or you may enter an explicit expression for the index. In our example, you will
enter:
danger body brain sleep
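The same specification may be estimated from the command line. As a sketch, assuming the ordered equation method accepts the d=n option to select the normal (ordered probit) specification, and using EQ_ORDER simply as a name chosen here:
equation eq_order.ordered(d=n) danger body brain sleep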
The top portion of the output shows the usual header information, including the estimation method, the sample, the number of iterations required for convergence, the number of distinct values for y, and the method of computing the coefficient
covariance matrix.
Dependent Variable: DANGER
Method: ML - Ordered Probit (Quadratic hill climbing)
Date: 08/12/09 Time: 00:13
Sample (adjusted): 1 61
Included observations: 58 after adjustments
Number of ordered indicator values: 5
Convergence achieved after 7 iterations
Covariance matrix computed using second derivatives

Variable      Coefficient    Std. Error    z-Statistic    Prob.

BODY           0.000247      0.000421       0.587475      0.5569
BRAIN         -0.000397      0.000418      -0.950366      0.3419
SLEEP         -0.199508      0.041641      -4.791138      0.0000
Below the header information are the coefficient estimates and asymptotic standard errors,
and the corresponding z-statistics and significance levels. The estimated coefficients of the
ordered model must be interpreted with care (see Greene (2008, section 23.10) or Johnston
and DiNardo (1997, section 13.9)).
The sign of $\beta_j$ shows the direction of the change in the probability of falling in the endpoint
rankings ($y = 0$ or $y = M$) when $x_{ij}$ changes. Pr($y = 0$) changes in the opposite direction of the sign of $\beta_j$ and Pr($y = M$) changes in the same direction as the sign of $\beta_j$. The
effects on the probability of falling in any of the middle rankings are given by:

$\dfrac{\partial \Pr(y = k)}{\partial \beta_j} = \dfrac{\partial F(\gamma_{k+1} - x_i'\beta)}{\partial \beta_j} - \dfrac{\partial F(\gamma_k - x_i'\beta)}{\partial \beta_j}$   (28.22)
                               Limit Points

LIMIT_2:C(4)    -2.798449      0.514784      -5.436166      0.0000
LIMIT_3:C(5)    -2.038945      0.492198      -4.142527      0.0000
LIMIT_4:C(6)    -1.434567      0.473679      -3.028563      0.0025
LIMIT_5:C(7)    -0.601211      0.449109      -1.338675      0.1807

Pseudo R-squared         0.147588     Akaike info criterion     2.890028
Schwarz criterion        3.138702     Log likelihood          -76.81081
Hannan-Quinn criter.     2.986891     Restr. log likelihood   -90.10996
LR statistic            26.59830     Avg. log likelihood      -1.324324
Prob(LR statistic)       0.000007
Note that the coefficients are labeled both with the identity of the limit point, and the coefficient number. Just below the limit points are the summary statistics for the equation.
Estimation Problems
Most of the previous discussion of estimation problems for binary models (Estimation
Problems on page 304) also holds for ordered models. In general, these models are well-behaved and will require little intervention.
There are cases, however, where problems will arise. First, EViews currently has a limit of
750 total coefficients in an ordered dependent variable model. Thus, if you have 25 right-hand side variables, and a dependent variable with 726 distinct values, you will be unable to
estimate your model using EViews.
Second, you may run into identification problems and estimation difficulties if you have
some groups where there are very few observations. If necessary, you may choose to combine adjacent groups and re-estimate the model.
EViews may stop estimation with the message "Parameter estimates for limit points are non-ascending", most likely on the first iteration. This error indicates that parameter values for
the limit points were invalid, and that EViews was unable to adjust these values to make
them valid. Make certain that if you are using user-defined parameters, the limit points are
strictly increasing. Better yet, we recommend that you employ the EViews starting values
since they are based on a consistent first-stage estimation procedure, and should therefore
be quite well-behaved.
                              Estimated Equation
Dep. Value      Obs.    Correct    Incorrect    % Correct    % Incorrect
1                18       10           8          55.556         44.444
2                14        6           8          42.857         57.143
3                10        0          10           0.000        100.000
4                 9        3           6          33.333         66.667
5                 7        6           1          85.714         14.286
Total            58       25          33          43.103         56.897

                           Constant Probability Spec.
Dep. Value      Obs.    Correct    Incorrect    % Correct    % Incorrect
1                18       18           0         100.000          0.000
2                14        0          14           0.000        100.000
3                10        0          10           0.000        100.000
4                 9        0           9           0.000        100.000
5                 7        0           7           0.000        100.000
Total            58       18          40          31.034         68.966
Each row represents a distinct value for the dependent variable. The Obs column
indicates the number of observations with that value. Of those, the number of Correct observations are those for which the predicted probability of the response is the
highest. Thus, 10 of the 18 individuals with a DANGER value of 1 were correctly specified. Overall, 43% of the observations were correctly specified for the fitted model
versus 31% for the constant probability model.
The bottom portion of the output shows additional statistics measuring this improvement:
                            Gain over Constant Prob. Spec.
                          Equation       Constant
Dep. Value      Obs.    % Incorrect    % Incorrect    Total Gain*    Pct. Gain**
1                18        44.444          0.000        -44.444            NA
2                14        57.143        100.000         42.857        42.857
3                10       100.000        100.000          0.000         0.000
4                 9        66.667        100.000         33.333        33.333
5                 7        14.286        100.000         85.714        85.714
Total            58        56.897         68.966         12.069        17.500
Note that the improvement in the prediction for DANGER values 2, 4, and especially 5
comes from refinement of the constant-only prediction of DANGER=1.
$e_{gi} = -\dfrac{f(\gamma_{y_i+1} - x_i'\hat\beta) - f(\gamma_{y_i} - x_i'\hat\beta)}{F(\gamma_{y_i+1} - x_i'\hat\beta) - F(\gamma_{y_i} - x_i'\hat\beta)}$ ,   (28.23)

where $\gamma_0 = -\infty$, and $\gamma_{M+1} = \infty$.
Background
Consider the following latent variable regression model:

$y_i^* = x_i'\beta + \sigma\epsilon_i$ ,   (28.24)

where $\sigma$ is a scale parameter. The scale parameter $\sigma$ is identified in censored and truncated
regression models, and will be estimated along with the $\beta$.
In the canonical censored regression model, known as the tobit (when there are normally distributed errors), the observed data $y$ are given by:

$y_i = \begin{cases} 0 & \text{if } y_i^* \le 0 \\ y_i^* & \text{if } y_i^* > 0 \end{cases}$   (28.25)

In other words, all negative values of $y_i^*$ are coded as 0. We say that these data are left censored at 0. Note that this situation differs from a truncated regression model where negative
values of $y_i^*$ are dropped from the sample. More generally, EViews allows for both left and
right censoring at arbitrary limit points so that:

$y_i = \begin{cases} \underline{c}_i & \text{if } y_i^* \le \underline{c}_i \\ y_i^* & \text{if } \underline{c}_i < y_i^* \le \bar{c}_i \\ \bar{c}_i & \text{if } \bar{c}_i < y_i^* \end{cases}$   (28.26)

where $\underline{c}_i$, $\bar{c}_i$ are fixed numbers representing the censoring points. If there is no left censoring, then we can set $\underline{c}_i = -\infty$. If there is no right censoring, then $\bar{c}_i = \infty$. The canonical
tobit model is a special case with $\underline{c}_i = 0$ and $\bar{c}_i = \infty$.
The parameters $\beta$, $\sigma$ are estimated by maximizing the log likelihood function:

$l(\beta, \sigma) = \sum_{i=1}^{N} \log\!\left(\frac{1}{\sigma} f\!\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right) \cdot 1(\underline{c}_i < y_i < \bar{c}_i)$   (28.27)

$\qquad + \sum_{i=1}^{N} \log\!\left(F\!\left(\frac{\underline{c}_i - x_i'\beta}{\sigma}\right)\right) \cdot 1(y_i = \underline{c}_i) + \sum_{i=1}^{N} \log\!\left(1 - F\!\left(\frac{\bar{c}_i - x_i'\beta}{\sigma}\right)\right) \cdot 1(y_i = \bar{c}_i)$   (28.28)

where $f$, $F$ are the density and cumulative distribution functions of $\epsilon$, respectively.
where hours worked (HRS) is left censored at zero. To estimate this model, select Quick/
Estimate Equation from the main menu. Then from the Equation Estimation dialog,
select the CENSORED - Censored or Truncated Data (including Tobit) estimation method.
Alternately, enter the keyword censored in the command line and press ENTER. The dialog
will change to provide a number of different input options.
Next, select one of the three distributions for the error term. EViews allows you three possible choices for the distribution of $\epsilon$:

Standard normal:          $E(\epsilon) = 0$, $\mathrm{var}(\epsilon) = 1$
Logistic:                 $E(\epsilon) = 0$, $\mathrm{var}(\epsilon) = \pi^2/3$
Extreme value (Type I):   $E(\epsilon) = -0.5772$ (Euler's constant), $\mathrm{var}(\epsilon) = \pi^2/6$
Bear in mind that the extreme value distribution is asymmetric.
For example, in the canonical tobit model the data are censored on the left at zero, and are
uncensored on the right. This case may be specified as:
Left edit field: 0
Right edit field: [blank]
Similarly, top-coded censored data may be specified as,
Left edit field: [blank]
Right edit field: 20000
while the more general case of left and right censoring is given by:
Left edit field: 10000
Right edit field: 20000
EViews also allows more general specifications where the censoring points are known to differ across observations. Simply enter the name of the series or auto-series containing the
censoring points in the appropriate edit field. For example:
Left edit field: lowinc
Right edit field: vcens1+10
specifies a model with LOWINC censoring on the left-hand side, and right censoring at the
value of VCENS1+10.
in the edit fields. If the data are censored on both the left and the right, use separate binary
indicators for each form of censoring:
Left edit field: lcens
Right edit field: rcens
where LCENS is also a binary indicator.
Once you have specified the model, click OK. EViews will estimate the parameters of the
model using appropriate iterative techniques.
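As a rough command-line sketch of the Fair tobit discussed below, assuming the censored keyword accepts l= and d= options for setting the left censoring value and the error distribution (here normal):
equation eq_tobit.censored(l=0, d=n) y_pt c z1 z2 z3 z4 z5 z6 z7 z8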
Below the header are the usual results for the coefficients, including the asymptotic standard
errors, z-statistics, and significance levels. As in other limited dependent variable models,
the estimated coefficients do not have a direct interpretation as the marginal effect of the
associated regressor j for individual i , x ij . In censored regression models, a change in x ij
has two effects: an effect on the mean of y , given that it is observed, and an effect on the
probability of y being observed (see McDonald and Moffitt, 1980).
In addition to results for the regression coefficients, EViews reports an additional coefficient
named SCALE, which is the estimated scale factor $\sigma$. This scale factor may be used to estimate the standard deviation of the residual, using the known variance of the assumed distribution. For example, if the estimated SCALE has a value of 0.4766 for a model with extreme
value errors, the implied standard error of the error term is $0.5977 = 0.4766 \times \pi/\sqrt{6}$.
Most of the other output is self-explanatory. As in the binary and ordered models above,
EViews reports summary statistics for the dependent variable and likelihood based statistics.
The regression statistics at the bottom of the table are computed in the usual fashion, using
the residuals $\hat{e}_i = y_i - E(y_i \mid x_i, \hat\beta, \hat\sigma)$ from the observed $y$.
Ordinary        $e_{oi} = y_i - E(y_i \mid x_i, \hat\beta, \hat\sigma)$

Standardized    $e_{si} = \dfrac{y_i - E(y_i \mid x_i, \hat\beta, \hat\sigma)}{\sqrt{\mathrm{var}(y_i \mid x_i, \hat\beta, \hat\sigma)}}$

Generalized     $e_{gi} = -\dfrac{f\big((\underline{c}_i - x_i'\hat\beta)/\hat\sigma\big)}{\hat\sigma\, F\big((\underline{c}_i - x_i'\hat\beta)/\hat\sigma\big)} \cdot 1(y_i = \underline{c}_i) \;-\; \dfrac{f'\big((y_i - x_i'\hat\beta)/\hat\sigma\big)}{\hat\sigma\, f\big((y_i - x_i'\hat\beta)/\hat\sigma\big)} \cdot 1(\underline{c}_i < y_i < \bar{c}_i)$
$\qquad\qquad\; +\; \dfrac{f\big((\bar{c}_i - x_i'\hat\beta)/\hat\sigma\big)}{\hat\sigma\big(1 - F\big((\bar{c}_i - x_i'\hat\beta)/\hat\sigma\big)\big)} \cdot 1(y_i = \bar{c}_i)$

where $f$, $F$ are the density and distribution functions, and where 1 is an indicator function
which takes the value 1 if the condition in parentheses is true, and 0 if it is false. All of the
above terms will be evaluated at the estimated $\hat\beta$ and $\hat\sigma$. See the discussion of forecasting
for details on the computation of $E(y_i \mid x_i, \hat\beta, \hat\sigma)$.
The generalized residuals may be used as the basis of a number of LM tests, including LM
tests of normality (see Lancaster, Chesher and Irish (1985), Chesher and Irish (1987), and
Gourieroux, Monfort, Renault and Trognon (1987); Greene (2008) provides a brief discussion
and additional references).
Forecasting
EViews provides you with the option of forecasting the expected dependent variable,
E ( y i x i, b, j ) , or the expected latent variable, E ( y i x i, b, j ) . Select Forecast from the
equation toolbar to open the forecast dialog.
To forecast the expected latent variable, click on Index - Expected latent variable, and enter
a name for the series to hold the output. The forecasts of the expected latent variable
E ( y i x i, b, j ) may be derived from the latent model using the relationship:
y i = E ( y i x i, b , j ) = x i b j g .
(28.29)
To forecast the expected observed dependent variable, select Expected dependent variable, and enter a series name. These forecasts are computed using the relationship:

$\hat{y}_i = E(y_i \mid x_i, \hat\beta, \hat\sigma) = \underline{c}_i \Pr(y_i = \underline{c}_i \mid x_i, \hat\beta, \hat\sigma) + E(y_i^* \mid \underline{c}_i < y_i^* < \bar{c}_i;\, x_i, \hat\beta, \hat\sigma) \Pr(\underline{c}_i < y_i^* < \bar{c}_i \mid x_i, \hat\beta, \hat\sigma) + \bar{c}_i \Pr(y_i = \bar{c}_i \mid x_i, \hat\beta, \hat\sigma)$   (28.30)

Note that these forecasts always satisfy $\underline{c}_i \le \hat{y}_i \le \bar{c}_i$. The probabilities associated with being
in the various classifications are computed by evaluating the cumulative distribution function of the specified distribution. For example, the probability of being at the lower limit is
given by:

$\Pr(y_i = \underline{c}_i \mid x_i, \hat\beta, \hat\sigma) = \Pr(y_i^* \le \underline{c}_i \mid x_i, \hat\beta, \hat\sigma) = F\big((\underline{c}_i - x_i'\hat\beta)/\hat\sigma\big)$ .   (28.31)
Variable       Coefficient    Std. Error    z-Statistic    Prob.

C               7.608487      3.905987       1.947904      0.0514
Z1              0.945787      1.062866       0.889847      0.3735
Z2             -0.192698      0.080968      -2.379921      0.0173
Z3              0.533190      0.146607       3.636852      0.0003
Z4              1.019182      1.279575       0.796500      0.4257
Z5             -1.699000      0.405483      -4.190061      0.0000
Z6              0.025361      0.227667       0.111394      0.9113
Z7              0.212983      0.321157       0.663173      0.5072
Z8             -2.273284      0.415407      -5.472429      0.0000

                         Error Distribution

SCALE:C(10)     8.258432      0.554581      14.89131       0.0000

Mean dependent var       1.455907     S.D. dependent var        3.298758
S.E. of regression       3.058957     Akaike info criterion     2.378473
Sum squared resid        5539.472     Schwarz criterion         2.451661
Log likelihood         -704.7311     Hannan-Quinn criter.      2.406961
Avg. log likelihood     -1.172597
Left censored obs             451     Right censored obs               0
Uncensored obs                150     Total obs                      601
Tests of Significance
EViews does not, by default, provide you with the usual likelihood ratio test of the overall
significance for the tobit and other censored regression models. There are several ways to
perform this test (or an asymptotically equivalent test).
First, you can use the built-in coefficient testing procedures to test the exclusion of all of the
explanatory variables. Select the redundant variables test and enter the names of all of the
explanatory variables you wish to exclude. EViews will compute the appropriate likelihood
ratio test statistic and the p-value associated with the statistic.
To take an example, suppose we wish to test whether the variables in the Fair tobit, above,
contribute to the fit of the model. Select View/Coefficient Diagnostics/Redundant Variables - Likelihood Ratio and enter all of the explanatory variables:
z1 z2 z3 z4 z5 z6 z7 z8
EViews will estimate the restricted model for you and compute the LR statistic and p-value.
In this case, the value of the test statistic is 80.01, which for eight degrees of freedom, yields
a p-value of less than 0.000001.
Alternatively, you could test the restriction using the Wald test by selecting View/Coefficient Diagnostics/Wald - Coefficient Restrictions, and entering the restriction that:
c(2)=c(3)=c(4)=c(5)=c(6)=c(7)=c(8)=c(9)=0
with degrees of freedom given by the number of coefficient restrictions in the constant only
model. You can double click on the LRSTAT icon or the LRPROB icon in the workfile window to display the results.
smpl @all
Then estimate a probit by replacing the dependent variable Y_PT by Y_C. A simple way to
do this is to press Object/Copy Object from the tobit equation toolbar. From the new untitled equation window that appears, press Estimate, edit the specification, replacing the
dependent variable Y_PT with Y_C, choose Method: BINARY and click OK. Save the
probit equation by pressing the Name button, say as EQ_BIN.
To estimate the truncated model, press Object/Copy Object again from the tobit equation
toolbar again. From the new untitled equation window that appears, press Estimate, mark
the Truncated sample option, and click OK. Save the truncated regression by pressing the
Name button, say as EQ_TR.
Then the LR test statistic and its p-value can be saved as a scalar by the commands:
scalar lr_test=2*(eq_bin.@logl+eq_tr.@logl-eq_tobit.@logl)
scalar lr_pval=1-@cchisq(lr_test,eq_tobit.@ncoef)
Double click on the scalar name to display the value in the status line at the bottom of the
EViews window. For the example data set, the p-value is 0.066, which rejects the tobit
model at the 10% level, but not at the 5% level.
For other specification tests for the tobit, see Greene (2008, 23.3.4) or Pagan and Vella
(1989).
In the truncated regression model, the latent variable

$y_i^* = x_i'\beta + \sigma\epsilon_i$   (28.32)

is observed only if it satisfies

$\underline{c}_i < y_i^* < \bar{c}_i$ .   (28.33)

The log likelihood function is:

$l(\beta, \sigma) = \sum_{i=1}^{N} \log\!\left(\frac{1}{\sigma} f\!\left(\frac{y_i - x_i'\beta}{\sigma}\right)\right) \cdot 1(\underline{c}_i < y_i < \bar{c}_i) \;-\; \sum_{i=1}^{N} \log\!\left(F\!\left(\frac{\bar{c}_i - x_i'\beta}{\sigma}\right) - F\!\left(\frac{\underline{c}_i - x_i'\beta}{\sigma}\right)\right)$ .   (28.34)
The likelihood function is maximized with respect to b and j , using standard iterative
methods.
Enter the name of the truncated dependent variable and the list of the regressors, or
provide an explicit expression for the equation, in the Equation Specification field, and
select one of the three distributions for the error term.
Indicate that you wish to estimate the truncated model by checking the Truncated
sample option.
Specify the truncation points of the dependent variable by entering the appropriate
expressions in the two edit fields. If you leave an edit field blank, EViews will assume
that there is no truncation along that dimension.
You should keep a few points in mind. First, truncated estimation is only available for models where the truncation points are known, since the likelihood function is not otherwise
defined. If you attempt to specify your truncation points by index, EViews will issue an error
message indicating that this selection is not available.
Second, EViews will issue an error message if any values of the dependent variable are outside the truncation points. Furthermore, EViews will automatically exclude any observations
that are exactly equal to a truncation point. Thus, if you specify zero as the lower truncation
limit, EViews will issue an error message if any observations are less than zero, and will
exclude any observations where the dependent variable exactly equals zero.
The cumulative distribution function and density of the assumed distribution will be used to
form the likelihood function, as described above.
Ordinary        $e_{oi} = y_i - E(y_i^* \mid \underline{c}_i < y_i^* < \bar{c}_i;\, x_i, \hat\beta, \hat\sigma)$

Standardized    $e_{si} = \dfrac{y_i - E(y_i^* \mid \underline{c}_i < y_i^* < \bar{c}_i;\, x_i, \hat\beta, \hat\sigma)}{\sqrt{\mathrm{var}(y_i^* \mid \underline{c}_i < y_i^* < \bar{c}_i;\, x_i, \hat\beta, \hat\sigma)}}$

Generalized     $e_{gi} = -\dfrac{f'\big((y_i - x_i'\hat\beta)/\hat\sigma\big)}{\hat\sigma\, f\big((y_i - x_i'\hat\beta)/\hat\sigma\big)} \;+\; \dfrac{f\big((\bar{c}_i - x_i'\hat\beta)/\hat\sigma\big) - f\big((\underline{c}_i - x_i'\hat\beta)/\hat\sigma\big)}{\hat\sigma\big(F\big((\bar{c}_i - x_i'\hat\beta)/\hat\sigma\big) - F\big((\underline{c}_i - x_i'\hat\beta)/\hat\sigma\big)\big)}$
where f , F , are the density and distribution functions. Details on the computation of
E ( y i c i < y i < c i ; x i, b , j ) are provided below.
The generalized residuals may be used as the basis of a number of LM tests, including LM
tests of normality (see Chesher and Irish (1984, 1987), and Gourieroux, Monfort and Trognon (1987); Greene (2008) provides a brief discussion and additional references).
Forecasting
EViews provides you with the option of forecasting the expected observed dependent variable, $E(y_i \mid x_i, \hat\beta, \hat\sigma)$, or the expected latent variable, $E(y_i^* \mid x_i, \hat\beta, \hat\sigma)$.
To forecast the expected latent variable, select Forecast from the equation toolbar to open
the forecast dialog, click on Index - Expected latent variable, and enter a name for the
series to hold the output. The forecasts of the expected latent variable $E(y_i^* \mid x_i, \hat\beta, \hat\sigma)$ are
computed using:

$\hat{y}_i^* = E(y_i^* \mid x_i, \hat\beta, \hat\sigma) = x_i'\hat\beta - \hat\sigma\gamma$ .   (28.35)

To forecast the expected observed dependent variable for the truncated model, select Expected dependent variable and enter a series name. These forecasts are computed using:

$\hat{y}_i = E(y_i^* \mid \underline{c}_i < y_i^* < \bar{c}_i;\, x_i, \hat\beta, \hat\sigma)$   (28.36)

so that the expectations for the latent variable are taken with respect to the conditional (on
being observed) distribution of the $y_i^*$. Note that these forecasts always satisfy the inequality $\underline{c}_i < \hat{y}_i < \bar{c}_i$.
It is instructive to compare this latter expected value with the expected value derived for the
censored model in Equation (28.30) above (repeated here for convenience):
y i = E ( y i x i, b , j ) = c i Pr ( y i = c i x i, b , j )
+ E ( y i c i < y i < c i ; x i, b , j ) Pr ( c i < y i < c i x i, b , j )
+ c i Pr ( y i = c i x i, b , j ).
(28.37)
The expected value of the dependent variable for the truncated model is the first part of the
middle term of the censored expected value. The differences between the two expected values (the probability weight and the first and third terms) reflect the different treatment of
latent observations that do not lie between c i and c i . In the censored case, those observations are included in the sample and are accounted for in the expected value. In the truncated case, data outside the interval are not observed and are not used in the expected value
computation.
An Illustration
As an example, we reestimate the Fair tobit model from above, truncating the data so that
observations at or below zero are removed from the sample. The output from truncated estimation of the Fair model is presented below:
Dependent Variable: Y_PT
Method: ML - Censored Normal (TOBIT) (Newton-Raphson /
Marquardt steps)
Date: 03/09/15 Time: 16:26
Sample (adjusted): 452 601
Included observations: 150 after adjustments
Truncated sample
Left censoring (value) at zero
Convergence achieved after 11 iterations
Coefficient covariance computed using observed Hessian
Variable       Coefficient    Std. Error    z-Statistic    Prob.

C              12.37287       5.178533       2.389261      0.0169
Z1             -1.336854      1.451426      -0.921063      0.3570
Z2             -0.044791      0.116125      -0.385719      0.6997
Z3              0.544174      0.217885       2.497527      0.0125
Z4             -2.142868      1.784389      -1.200897      0.2298
Z5             -1.423107      0.594582      -2.393459      0.0167
Z6             -0.316717      0.321882      -0.983953      0.3251
Z7              0.621418      0.477420       1.301618      0.1930
Z8             -1.210020      0.547810      -2.208833      0.0272

                         Error Distribution

SCALE:C(10)     5.379485      0.623787       8.623910      0.0000

Mean dependent var       5.833333     S.D. dependent var        4.255934
S.E. of regression       4.013126     Akaike info criterion     5.344456
Sum squared resid        2254.725     Schwarz criterion         5.545165
Log likelihood         -390.8342     Hannan-Quinn criter.      5.425998
Avg. log likelihood     -2.605561
Left censored obs               0     Right censored obs               0
Uncensored obs                150     Total obs                      150
Note that the header information indicates that the model is a truncated specification with a
sample that is adjusted accordingly, and that the frequency information at the bottom of the
screen shows that there are no left and right censored observations.
$y_i = X_i'\beta + \epsilon_i$   (28.38)
$z_i = W_i'\gamma + u_i$   (28.39)

where $z_i$ is a binary variable, with $y_i$ only observed when $z_i = 1$. $\epsilon_i$ and $u_i$ are error
terms which follow a bivariate normal distribution:

$\begin{pmatrix} \epsilon_i \\ u_i \end{pmatrix} \sim N\!\left(0,\; \begin{pmatrix} \sigma^2 & \rho\sigma \\ \rho\sigma & 1 \end{pmatrix}\right)$   (28.40)

with scale parameter $\sigma$ and correlation coefficient $\rho$. Note that we have normalized the
variance of $u_i$ to 1 since this variance is not identified in this model.
Equation (28.38) is generally referred to as the response equation, with $y_i$ the
variable of interest. Equation (28.39) is termed the selection equation and determines whether $y_i$ is observed or not.
EViews offers two different methods of estimating this model: Heckman's original two-step
method and a maximum likelihood method.
$E(y_i \mid z_i = 1) = X_i'\beta + \rho\sigma\lambda_i(W_i'\gamma)$   (28.41)

where $\lambda(X) = \phi(X)/\Phi(X)$ is the Inverse Mills Ratio (Greene, 2008), and $\phi$ and $\Phi$ are
the standard normal density and cumulative distribution function, respectively. Then we
may specify a regression model:

$y_i = X_i'\beta + \rho\sigma\lambda_i(W_i'\gamma) + v_i$   (28.42)

The two-step method proceeds by first estimating a Probit regression for Equation (28.39) to
obtain an estimate of $\hat\gamma$, from which $\lambda_i(W_i'\hat\gamma)$ may be calculated. A least squares regression of $y_i$ on $X_i$ and $\hat\lambda_i$,

$y_i = X_i'\beta + \rho\sigma\hat\lambda_i + v_i$   (28.43)
(28.44)
Maximum Likelihood
The maximum likelihood method of estimating the Heckman Selection Model is performed
using the log-likelihood function given by:
$\log L(\beta, \gamma, \rho, \sigma \mid X, W) = \sum_{i:\, z_i = 0} \log\big(1 - \Phi(W_i'\gamma)\big)$
$\qquad + \sum_{i:\, z_i = 1} \left[\, \log \phi\!\left(\dfrac{y_i - X_i'\beta}{\sigma}\right) - \log(\sigma) + \log \Phi\!\left(\dfrac{W_i'\gamma + \rho\,(y_i - X_i'\beta)/\sigma}{\sqrt{1 - \rho^2}}\right) \right]$   (28.45)
where the first summation is over observations for which z i = 0 (i.e., when y i is unobserved), and the second for observations for which z i = 1 (i.e., when y i is observed).
It is straightforward to maximize this log-likelihood function with respect to the parameters,
$\beta, \gamma, \rho, \sigma$. However, this maximization is unrestricted with regards to $\rho$ and $\sigma$, when, in
fact, there are restrictions of the form $-1 < \rho < 1$ and $\sigma > 0$ imposed on the parameters.
EViews optimizes the model using transformed versions of the parameters:

$\sigma = \exp(\sigma^*)$   (28.46)
$\rho = \dfrac{2}{\pi}\arctan(\rho^*)$   (28.47)

where $\sigma^*$ and $\rho^*$ are the unrestricted parameters actually estimated (reported as @LOG(SIGMA) and TFORM(RHO) in the output).
As with most maximum likelihood estimations, the covariance matrix of the estimated
parameters can be calculated as either $(-H)^{-1}$ (where $H$ is the Hessian matrix, the information matrix), $(G G')^{-1}$ (where $G$ is the matrix of gradients), or as $H^{-1} G G' H^{-1}$ (the
Huber/White matrix).
An Example
As an example of the estimation of the Heckman Selection model, we take one of the results
from Econometric Analysis by William H. Greene (6th Edition, p. 888, Example 24.8), which
uses data from the Mroz (1987) study of the labor supply of married women to estimate a
wage equation for women. Only 428 of the 753 women studied participated in the labor
force, so a selection equation is provided to model the sample selection behavior of married
women.
The wage equation is given by:

$\mathrm{WAGE}_i = \beta_1 + \beta_2\,\mathrm{EXPER}_i + \beta_3\,\mathrm{EXPER}_i^2 + \beta_4\,\mathrm{EDUC}_i + \beta_5\,\mathrm{CITY}_i + \epsilon_i$   (28.48)

where EXPER is a measure of each woman's experience, EDUC is her level of education, and
CITY is a dummy variable for whether she lives in a city or not.
The selection equation is given by:

$\mathrm{LFP}_i = 1\big(\gamma_1 + \gamma_2\,\mathrm{AGE}_i + \gamma_3\,\mathrm{AGE}_i^2 + \gamma_4\,\mathrm{FAMINC}_i + \gamma_5\,\mathrm{EDUC}_i + \gamma_6\,\mathrm{KIDS}_i + u_i > 0\big)$   (28.49)

where LFP is a binary variable taking a value of 1 if the woman is in the labor force, and 0
otherwise, AGE is her age, FAMINC is the level of household income not earned by the
woman, and KIDS is a dummy variable for whether she has children.
You can bring the Mroz data directly into EViews from Greene's website, using the following
EViews command:
wfopen http://www.stern.nyu.edu/~wgreene/Text/Edition7/TableF51.txt
In this data, the wage data are in the series WW, experience is AX, education is in WE, the
city dummy is CIT, labor force participation is LFP, age is WA, and family income is FAMINC. There is no kids dummy variable, but there are two variables containing the number of
children below K6 education (KL6), and the number of kids between K6 education and 18
(K618). We can create the dummy variable simply by testing whether the sum of those two
variables is greater than 0.
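For example, the dummy may be generated directly as a series (KIDS is the name used in the selection equation above):
series kids = (kl6+k618)>0
' equals 1 for women with any children, 0 otherwise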
To estimate this equation in EViews, we click on Quick/Estimate Equation, and then
change the equation method to Heckit. In the Response Equation box we type:
ww c ax ax^2 we cit
To begin, we select the Heckman two-step estimation method. After clicking OK, the estimation results are displayed, replicating the results in the first pane of Table 24.3 in Greene (note that
Greene only shows the estimates of the wage equation, plus $\rho$ and $\sigma$).
Dependent Variable: WW
Method: Two-Step Heckman Selection
Date: 03/09/15 Time: 16:31
Sample: 1 753
Included observations: 753
Selection Variable: LFP
Coefficient covariance computed using two-step Heckman
method
Variable          Coefficient    Std. Error    t-Statistic    Prob.

Response Equation - WW
C                 -0.971200      2.132849      -0.455353      0.6490
AX                 0.021061      0.062532       0.336804      0.7364
AX^2               0.000137      0.001882       0.072842      0.9420
WE                 0.417017      0.104157       4.003746      0.0001
CIT                0.443838      0.316531       1.402194      0.1613

Selection Equation - LFP
C                 -4.156807      1.402086      -2.964730      0.0031
WA                 0.185395      0.065967       2.810436      0.0051
WA^2              -0.002426      0.000774      -3.136096      0.0018
FAMINC             4.58E-06      4.21E-06       1.088918      0.2765
WE                 0.098182      0.022984       4.271744      0.0000
(KL6+K618)>0      -0.448987      0.130911      -3.429697      0.0006

Mean dependent var       4.177682     S.D. dependent var        3.310282
S.E. of regression       2.418304     Akaike info criterion     6.017314
Sum squared resid        4327.663     Schwarz criterion         6.084863
Log likelihood         -2254.519     Hannan-Quinn criter.      6.043337
We can modify our equation to use maximum likelihood as the estimation method. Click on the Estimate button
to bring up the estimation dialog and change the method to Maximum Likelihood. Next,
click on the Options tab and change the Information matrix to OPG and click on OK to estimate the equation. The results match the second pane of Table 24.3 in Greene.
Dependent Variable: WW
Method: ML Heckman Selection (Newton-Raphson / Marquardt
steps)
Date: 03/09/15 Time: 16:34
Sample: 1 753
Included observations: 753
Selection Variable: LFP
Convergence achieved after 6 iterations
Coefficient covariance computed using outer product of gradients
Variable          Coefficient    Std. Error    t-Statistic    Prob.

Response Equation - WW
C                 -1.963024      1.680330      -1.168237      0.2431
AX                 0.027868      0.075614       0.368562      0.7126
AX^2              -0.000104      0.002341      -0.044369      0.9646
WE                 0.457005      0.096271       4.747067      0.0000
CIT                0.446529      0.426937       1.045889      0.2960

Selection Equation - LFP
C                 -4.119692      1.410456      -2.920822      0.0036
WA                 0.184015      0.065841       2.794837      0.0053
WA^2              -0.002409      0.000773      -3.114124      0.0019
FAMINC             5.68E-06      3.89E-06       1.460278      0.1446
WE                 0.095281      0.023999       3.970163      0.0001
(KL6+K618)>0      -0.450615      0.136668      -3.297155      0.0010

Interaction terms
@LOG(SIGMA)        1.134100      0.026909      42.14565       0.0000
TFORM(RHO)        -0.210301      0.367061      -0.572931      0.5669

SIGMA              3.108376      0.083644      37.16219       0.0000
RHO               -0.131959      0.223781      -0.589676      0.5556

Mean dependent var       4.177682     S.D. dependent var        3.310282
S.E. of regression       2.361759     Akaike info criterion     4.234416
Sum squared resid        4127.650     Schwarz criterion         4.314247
Log likelihood         -1581.258     Hannan-Quinn criter.      4.265171
Count Models
Count models are employed when $y$ takes integer values that represent the number of
events that occur. Examples of count data include the number of patents filed by a company, and the number of spells of unemployment experienced over a fixed time interval.
EViews provides support for the estimation of several models of count data. In addition to
the standard Poisson and negative binomial maximum likelihood (ML) specifications,
EViews provides a number of quasi-maximum likelihood (QML) estimators for count data.
$m(x_i, \beta) = E(y_i \mid x_i, \beta) = \exp(x_i'\beta)$ .   (28.50)
Next, click on Options and, if desired, change the default estimation algorithm, convergence criterion, starting values, and method of computing the coefficient covariance.
Lastly, select one of the entries listed under count estimation method, and if appropriate, specify a value for the variance parameter. Details for each method are provided
in the following discussion.
Poisson Model
For the Poisson model, the conditional density of $y_i$ given $x_i$ is:

$f(y_i \mid x_i, \beta) = e^{-m(x_i, \beta)}\, m(x_i, \beta)^{y_i} / \, y_i!$   (28.51)
where y i is a non-negative integer valued random variable. The maximum likelihood estimator (MLE) of the parameter b is obtained by maximizing the log likelihood function:
$l(\beta) = \sum_{i=1}^{N} \big[\, y_i \log m(x_i, \beta) - m(x_i, \beta) - \log(y_i!) \,\big]$ .   (28.52)
Provided the conditional mean function is correctly specified and the conditional distribution of y is Poisson, the MLE b is consistent, efficient, and asymptotically normally distributed, with coefficient variance matrix consistently estimated by the inverse of the Hessian:
$V = \mathrm{var}(\hat\beta) = \left(\sum_{i=1}^{N} \hat{m}_i\, x_i x_i'\right)^{-1}$   (28.53)

where $\hat{m}_i = m(x_i, \hat\beta)$. Alternately, one could estimate the coefficient covariance using the
inverse of the outer-product of the scores:

$V = \mathrm{var}(\hat\beta) = \left(\sum_{i=1}^{N} (y_i - \hat{m}_i)^2\, x_i x_i'\right)^{-1}$   (28.54)
The Poisson assumption imposes restrictions that are often violated in empirical applications. The most important restriction is the equality of the (conditional) mean and variance:
$v(x_i, \beta) = \mathrm{var}(y_i \mid x_i, \beta) = E(y_i \mid x_i, \beta) = m(x_i, \beta)$ .   (28.55)
If the mean-variance equality does not hold, the model is misspecified. EViews provides a
number of other estimators for count data which relax this restriction.
We note here that the Poisson estimator may also be interpreted as a quasi-maximum likelihood estimator. The implications of this result are discussed below.
$l(\beta, \eta^2) = \sum_{i=1}^{N} \Big[\, y_i \log\big(\eta^2 m(x_i, \beta)\big) - \big(y_i + 1/\eta^2\big)\log\big(1 + \eta^2 m(x_i, \beta)\big) + \log \Gamma(y_i + 1/\eta^2) - \log(y_i!) - \log \Gamma(1/\eta^2) \,\Big]$   (28.56)

where $\eta^2$ is a variance parameter to be jointly estimated with the conditional mean parameters $\beta$. EViews estimates the log of $\eta^2$, and labels this parameter as the SHAPE parameter in the output. Standard errors are computed using the inverse of the information matrix.
The negative binomial distribution is often used when there is overdispersion in the data, so
that $v(x_i, \beta) > m(x_i, \beta)$, since the following moment conditions hold:

$E(y_i \mid x_i, \beta) = m(x_i, \beta)$
$\mathrm{var}(y_i \mid x_i, \beta) = m(x_i, \beta)\big(1 + \eta^2\, m(x_i, \beta)\big)$   (28.57)

$\eta^2$ is therefore a measure of the extent to which the conditional variance exceeds the conditional mean.
Consistency and efficiency of the negative binomial ML requires that the conditional distribution of y be negative binomial.
Poisson
The Poisson MLE is also a QMLE for data from alternative distributions. Provided that the
conditional mean is correctly specified, it will yield consistent estimates of the parameters b
of the mean function. By default, EViews reports the ML standard errors. If you wish to compute the QML standard errors, you should click on Options, select Robust Covariances, and
select the desired covariance matrix estimator.
Exponential
The log likelihood for the exponential distribution is given by:
$l(\beta) = \sum_{i=1}^{N} \big[\, -\log m(x_i, \beta) - y_i / m(x_i, \beta) \,\big]$ .   (28.58)
As with the other QML estimators, the exponential QMLE is consistent even if the conditional distribution of y i is not exponential, provided that m i is correctly specified. By
default, EViews reports the robust QML standard errors.
Normal
The log likelihood for the normal distribution is:
$l(\beta) = \sum_{i=1}^{N} \left[ -\frac{1}{2}\left(\frac{y_i - m(x_i, \beta)}{\sigma}\right)^2 - \frac{1}{2}\log(\sigma^2) - \frac{1}{2}\log(2\pi) \right]$ .   (28.59)
For fixed $\sigma^2$ and correctly specified $m_i$, maximizing the normal log likelihood function provides consistent estimates even if the distribution is not normal. Note that maximizing the
normal log likelihood for a fixed $\sigma^2$ is equivalent to minimizing the sum of squares for the
nonlinear regression model:

$y_i = m(x_i, \beta) + \epsilon_i$ .   (28.60)

EViews sets $\sigma^2 = 1$ by default. You may specify any other (positive) value for $\sigma^2$ by
changing the number in the Fixed variance parameter field box. By default, EViews reports
the robust QML standard errors when estimating this specification.
Negative Binomial
If we maximize the negative binomial log likelihood, given above, for fixed $\eta^2$, we obtain
the QMLE of the conditional mean parameters $\beta$. This QML estimator is consistent even if
the conditional distribution of $y$ is not negative binomial, provided that $m_i$ is correctly
specified.
EViews sets $\eta^2 = 1$ by default, which is a special case known as the geometric distribution. You may specify any other (positive) value by changing the number in the Fixed variance parameter field box. For the negative binomial QMLE, EViews by default reports the
robust QMLE standard errors.
Ordinary        $e_{oi} = y_i - m(x_i, \hat\beta)$

Standardized (Pearson)    $e_{si} = \dfrac{y_i - m(x_i, \hat\beta)}{\sqrt{v(x_i, \hat\beta, \hat\gamma)}}$

Generalized     $e_{gi} = $ (varies)
where the $\hat\gamma$ represents any additional parameters in the variance specification. Note
that the specification of the variances may vary significantly between specifications.
For example, the Poisson model has $v(x_i, \hat\beta) = m(x_i, \hat\beta)$, while the exponential has
$v(x_i, \hat\beta) = m(x_i, \hat\beta)^2$.
The generalized residuals can be used to obtain the score vector by multiplying the
generalized residuals by each variable in x . These scores can be used in a variety of
LM or conditional moment tests for specification testing; see Wooldridge (1997).
Demonstrations
A Specification Test for Overdispersion
Consider the model:
$\mathrm{NUMB}_i = \beta_1 + \beta_2\, \mathrm{IP}_i + \beta_3\, \mathrm{FEB}_i + \epsilon_i$ ,   (28.61)
where the dependent variable NUMB is the number of strikes, IP is a measure of industrial
production, and FEB is a February dummy variable, as reported in Kennan (1985, Table 1)
and provided in the workfile Strike.WF1.
The results from Poisson estimation of this model are presented below:
Dependent Variable: NUMB
Method: ML/QML - Poisson Count (Newton-Raphson / Marquardt
steps)
Date: 03/09/15 Time: 16:43
Sample: 1 103
Included observations: 103
Convergence achieved after 3 iterations
Coefficient covariance computed using observed Hessian
Variable       Coefficient    Std. Error    z-Statistic    Prob.

C               1.725630      0.043656      39.52764       0.0000
IP              2.775334      0.819104       3.388254      0.0007
FEB            -0.377407      0.174520      -2.162539      0.0306

R-squared                0.064502     Mean dependent var        5.495146
Adjusted R-squared       0.045792     S.D. dependent var        3.653829
S.E. of regression       3.569190     Akaike info criterion     5.583421
Sum squared resid        1273.912     Schwarz criterion         5.660160
Log likelihood         -284.5462     Hannan-Quinn criter.      5.614503
Restr. log likelihood  -292.9694     LR statistic             16.84645
Avg. log likelihood     -2.762584     Prob(LR statistic)        0.000220
Cameron and Trivedi (1990) propose a regression based test of the Poisson restriction
$v(x_i, \beta) = m(x_i, \beta)$. To carry out the test, first estimate the Poisson model and obtain the
fitted values of the dependent variable. Click Forecast and provide a name for the forecasted
dependent variable, say NUMB_F. The test is based on an auxiliary regression of $e_{oi}^2 - y_i$ on
$\hat{y}_i^2$ and testing the significance of the regression coefficient. For this example, the test regression can be estimated by the command:
equation testeq.ls (numb-numb_f)^2-numb numb_f^2
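For reference, the full sequence may also be run from the command line; a rough sketch, where the count option letter for the Poisson distribution and the equation names are assumptions:

equation eq_pois.count(d=p) numb c ip feb    ' Poisson estimation (assumed option syntax)
eq_pois.forecast numb_f                      ' save the fitted values of the dependent variable
equation testeq.ls (numb-numb_f)^2-numb numb_f^2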
Variable          Coefficient    Std. Error    t-Statistic    Prob.
NUMB_F^2           0.238874      0.052115      4.583571      0.0000

R-squared              0.043930    Mean dependent var      6.872929
Adjusted R-squared     0.043930    S.D. dependent var      17.65726
S.E. of regression     17.26506    Akaike info criterion   8.544908
Sum squared resid      30404.41    Schwarz criterion       8.570488
Log likelihood        -439.0628    Hannan-Quinn criter.    8.555269
Durbin-Watson stat     1.711805
The t-statistic of the coefficient is highly significant, leading us to reject the Poisson restriction. Moreover, the estimated coefficient is significantly positive, indicating overdispersion
in the residuals.
An alternative approach, suggested by Wooldridge (1997), is to regress $e_{si}^2 - 1$ on $\hat{y}_i$. To perform this test, select Proc/Make Residual Series and select Standardized. Save the results in a series, say SRESID. Then estimate the regression specification:

sresid^2-1 numb_f
Variable          Coefficient    Std. Error    t-Statistic    Prob.
NUMB_F             0.221238      0.055002      4.022326      0.0001

R-squared              0.017556    Mean dependent var      1.161573
Adjusted R-squared     0.017556    S.D. dependent var      3.138974
S.E. of regression     3.111299    Akaike info criterion   5.117619
Sum squared resid      987.3785    Schwarz criterion       5.143199
Log likelihood        -262.5574    Hannan-Quinn criter.    5.127980
Durbin-Watson stat     1.764537
Both tests suggest the presence of overdispersion, with the variance approximated by roughly $v = m(1 + 0.23m)$.

Given the evidence of overdispersion and the rejection of the Poisson restriction, we will re-estimate the model, allowing for mean-variance inequality. Our approach will be to estimate the two-step negative binomial QMLE specification (termed the quasi-generalized pseudo-maximum likelihood estimator by Gourieroux, Monfort, and Trognon (1984a, 1984b)) using the estimate of $\hat\eta^2$ from the Wooldridge test derived above. To compute this estimator, simply select Negative Binomial (QML) and enter 0.221238 in the edit field for Fixed variance parameter.

We will use the GLM variance calculations, so you should click on Options in the Equation Specification dialog and choose the GLM option in the Covariance method dropdown menu. The estimation results are shown below:
Dependent Variable: NUMB
Method: QML - Negative Binomial Count (Newton-Raphson / Marquardt steps)
Date: 03/09/15   Time: 16:48
Sample: 1 103
Included observations: 103
QML parameter used in estimation: 0.22124
Convergence achieved after 4 iterations
Coefficient covariance computed using observed Hessian
GLM adjusted covariance (variance factor =0.961161659819)

Variable          Coefficient    Std. Error    z-Statistic    Prob.
C                  1.724906      0.064023      26.94197      0.0000
IP                 2.833103      1.198416      2.364039      0.0181
FEB               -0.369558      0.235617     -1.568474      0.1168

R-squared              0.064374    Mean dependent var      5.495146
Adjusted R-squared     0.045661    S.D. dependent var      3.653829
S.E. of regression     3.569435    Akaike info criterion   5.174385
Sum squared resid      1274.087    Schwarz criterion       5.251125
Log likelihood        -263.4808    Hannan-Quinn criter.    5.205468
Restr. log likelihood -522.9973    LR statistic            519.0330
Avg. log likelihood   -2.558066    Prob(LR statistic)      0.000000
The negative binomial QML should be consistent, and under the GLM assumption, the standard errors should be consistently estimated. It is worth noting that the coefficient on FEB,
which was strongly statistically significant in the Poisson specification, is no longer significantly different from zero at conventional significance levels.
Variable          Coefficient    Std. Error    z-Statistic    Prob.
C                  1.725630      0.065140      26.49094      0.0000
IP                 2.775334      1.222202      2.270766      0.0232
FEB               -0.377407      0.260405     -1.449307      0.1473

R-squared              0.064502    Mean dependent var      5.495146
Adjusted R-squared     0.045792    S.D. dependent var      3.653829
S.E. of regression     3.569190    Akaike info criterion   5.583421
Sum squared resid      1273.912    Schwarz criterion       5.660160
Log likelihood        -284.5462    Hannan-Quinn criter.    5.614503
Restr. log likelihood -292.9694    LR statistic            16.84645
Avg. log likelihood   -2.762584    Prob(LR statistic)      0.000220
Note that when you select the GLM robust standard errors, EViews reports the estimated, here d.f. corrected, variance factor. You can then use EViews to compute the QLR statistic and the p-value associated with this statistic, placing the results in scalars using the following commands:
scalar qlr = eq1.@lrstat/2.226420477
scalar qpval = 1-@cchisq(qlr, 2)
You can examine the results by clicking on the scalar objects in the workfile window and
viewing the results. The QLR statistic is 7.5666, and the p-value is 0.023. The statistic and p-value are valid under the weaker conditions that the conditional mean is correctly specified, and that the conditional variance is proportional (but not necessarily equal) to the conditional mean.
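For reference, the arithmetic behind these commands, using the LR statistic from the Poisson output and the d.f. corrected variance factor, is:

$$QLR = \frac{LR}{\hat\sigma^2} = \frac{16.84645}{2.226420} \approx 7.5666, \qquad p = P\bigl(\chi^2_2 > 7.5666\bigr) \approx 0.023$$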
Technical Notes
Default Standard Errors
The default standard errors are obtained by taking the inverse of the estimated information matrix. If you estimate your equation using a Newton-Raphson or Quadratic Hill Climbing method, EViews will use the inverse of the Hessian, $-\hat{H}^{-1}$, to form your coefficient covariance estimate. If you employ BHHH, the coefficient covariance will be estimated using the inverse of the outer product of the scores, $(\hat{g}\hat{g}')^{-1}$, where $\hat{g}$ and $\hat{H}$ are the gradient (or score) and Hessian of the log likelihood evaluated at the ML estimates.

Huber/White (QML) Standard Errors

The QML standard errors are computed using:

$$\widehat{\mathrm{var}}_{QML}(\hat\beta) = \bigl(-\hat{H}\bigr)^{-1}\hat{g}\hat{g}'\bigl(-\hat{H}\bigr)^{-1}. \tag{28.62}$$

Note that these standard errors are not robust to heteroskedasticity in binary dependent variable models. They are robust to certain misspecifications of the underlying distribution of $y$.

GLM Standard Errors

Many of the models in this chapter assume that the conditional mean of $y_i$ is a smooth, nonlinear transformation of the linear index $x_i'\beta$:

$$E(y_i\,|\,x_i,\beta) = h(x_i'\beta). \tag{28.63}$$
Even though the QML covariance is robust to general misspecification of the conditional distribution of $y_i$, it does not possess any efficiency properties. An alternative consistent estimate of the covariance is obtained if we impose the GLM condition that the (true) variance of $y_i$ is proportional to the variance of the distribution used to specify the log likelihood:

$$\mathrm{var}(y_i\,|\,x_i,\beta) = \sigma^2\,\mathrm{var}_{ML}(y_i\,|\,x_i,\beta). \tag{28.64}$$

In other words, the ratio of the (conditional) variance to the mean is some constant $\sigma^2$ that is independent of $x$. The most empirically relevant case is $\sigma^2 > 1$, which is known as overdispersion. If this proportional variance condition holds, a consistent estimate of the GLM covariance is given by:

$$\widehat{\mathrm{var}}_{GLM}(\hat\beta) = \hat\sigma^2\,\widehat{\mathrm{var}}_{ML}(\hat\beta), \tag{28.65}$$
where

$$\hat\sigma^2 = \frac{1}{N-K}\sum_{i=1}^{N}\frac{(y_i - \hat{y}_i)^2}{v(x_i,\hat\beta,\hat\gamma)} = \frac{1}{N-K}\sum_{i=1}^{N}\frac{\hat{u}_i^2}{v(x_i,\hat\beta,\hat\gamma)}. \tag{28.66}$$
If you do not choose to d.f. correct, the leading term in Equation (28.66) is $1/N$. When you select GLM standard errors, the estimated proportionality term $\hat\sigma^2$ is reported as the variance factor estimate in EViews.

(Note that the EViews legacy estimator always estimates a d.f. corrected variance factor, while the other estimators permit you to choose whether to override the default of no correction. Since the default behavior has changed, you will need to explicitly request d.f. correction to match the legacy covariance results.)
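A rough command sketch of the variance factor computation in Equation (28.66) for the Poisson case (where $v(x_i,\hat\beta) = m(x_i,\hat\beta)$), assuming the fitted means are held in NUMB_F and the estimated equation is EQ1; the scalar names are illustrative:

' Equation (28.66) with and without the d.f. correction, Poisson variance v = m
scalar vfac_nodf = @sum((numb-numb_f)^2/numb_f)/eq1.@regobs
scalar vfac_df   = @sum((numb-numb_f)^2/numb_f)/(eq1.@regobs-eq1.@ncoef)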
For detailed discussion on GLMs and the phenomenon of overdispersion, see McCullagh and Nelder (1989).
Hosmer-Lemeshow Test

Letting $n_j$ denote the number of observations in group $j$, $j = 1, \dots, J$, define the group sums of observed responses and the average fitted probabilities:

$$y(j) = \sum_{i\in j} y_i, \qquad \bar{p}(j) = \sum_{i\in j}\hat{p}_i/n_j = \sum_{i\in j}\bigl(1 - F(-x_i'\hat\beta)\bigr)/n_j \tag{28.67}$$

The Hosmer-Lemeshow statistic is then computed as:

$$HL = \sum_{j=1}^{J}\frac{\bigl(y(j) - n_j\,\bar{p}(j)\bigr)^2}{n_j\,\bar{p}(j)\bigl(1-\bar{p}(j)\bigr)}. \tag{28.68}$$

The distribution of the HL statistic is not known; however, Hosmer and Lemeshow (1989, p. 141) report evidence from extensive simulation indicating that when the model is correctly specified, the distribution of the statistic is well approximated by a $\chi^2$ distribution with $J - 2$ degrees of freedom. Note that these findings are based on a simulation where $J$ is close to $n$.
References
Aitchison, J. and S. D. Silvey (1957). "The Generalization of Probit Analysis to the Case of Multiple Responses," Biometrika, 44, 131–140.
Agresti, Alan (1996). An Introduction to Categorical Data Analysis, New York: John Wiley & Sons.
Andrews, Donald W. K. (1988a). "Chi-Square Diagnostic Tests for Econometric Models: Theory," Econometrica, 56, 1419–1453.
Andrews, Donald W. K. (1988b). "Chi-Square Diagnostic Tests for Econometric Models: Introduction and Applications," Journal of Econometrics, 37, 135–156.
Cameron, A. Colin and Pravin K. Trivedi (1990). "Regression-based Tests for Overdispersion in the Poisson Model," Journal of Econometrics, 46, 347–364.
Chesher, A. and M. Irish (1987). "Residual Analysis in the Grouped Data and Censored Normal Linear Model," Journal of Econometrics, 34, 33–62.
Chesher, A., T. Lancaster, and M. Irish (1985). "On Detecting the Failure of Distributional Assumptions," Annales de l'INSEE, 59/60, 7–44.
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford: Oxford University Press.
Gourieroux, C., A. Monfort, E. Renault, and A. Trognon (1987). "Generalized Residuals," Journal of Econometrics, 34, 5–32.
Gourieroux, C., A. Monfort, and C. Trognon (1984a). "Pseudo-Maximum Likelihood Methods: Theory," Econometrica, 52, 681–700.
Gourieroux, C., A. Monfort, and C. Trognon (1984b). "Pseudo-Maximum Likelihood Methods: Applications to Poisson Models," Econometrica, 52, 701–720.
Greene, William H. (2008). Econometric Analysis, 6th Edition, Upper Saddle River, NJ: Prentice-Hall.
Harvey, Andrew C. (1987). "Applications of the Kalman Filter in Econometrics," Chapter 8 in Truman F. Bewley (ed.), Advances in Econometrics: Fifth World Congress, Volume 1, Cambridge: Cambridge University Press.
Harvey, Andrew C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge: Cambridge University Press.
Heckman, James (1976). "The Common Structure of Statistical Models of Truncation, Sample Selection, and Limited Dependent Variables and a Simple Estimator for Such Models," Annals of Economic and Social Measurement, 5, 475–492.
Hosmer, David W. Jr. and Stanley Lemeshow (1989). Applied Logistic Regression, New York: John Wiley & Sons.
Johnston, Jack and John Enrico DiNardo (1997). Econometric Methods, 4th Edition, New York: McGraw-Hill.
Kennan, John (1985). "The Duration of Contract Strikes in U.S. Manufacturing," Journal of Econometrics, 28, 5–28.
Maddala, G. S. (1983). Limited-Dependent and Qualitative Variables in Econometrics, Cambridge: Cambridge University Press.
McCullagh, Peter, and J. A. Nelder (1989). Generalized Linear Models, Second Edition, London: Chapman & Hall.
McDonald, J. and R. Moffitt (1980). "The Uses of Tobit Analysis," Review of Economics and Statistics, 62, 318–321.
Mroz, Thomas (1987). "The Sensitivity of an Empirical Model of Married Women's Hours of Work to Economic and Statistical Assumptions," Econometrica, 55, 765–799.
Pagan, A. and F. Vella (1989). "Diagnostic Tests for Models Based on Individual Data: A Survey," Journal of Applied Econometrics, 4, S29–S59.
Pindyck, Robert S. and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts, 4th Edition, New York: McGraw-Hill.
Powell, J. L. (1986). "Symmetrically Trimmed Least Squares Estimation for Tobit Models," Econometrica, 54, 1435–1460.
Wooldridge, Jeffrey M. (1997). "Quasi-Likelihood Methods for Count Data," Chapter 8 in M. Hashem Pesaran and P. Schmidt (eds.), Handbook of Applied Econometrics, Volume 2, Malden, MA: Blackwell, 352–406.
Overview
Suppose we have $i = 1, \dots, N$ independent response variables $Y_i$, each of whose conditional mean depends on $k$-vectors of explanatory variables $X_i$ and unknown coefficients $\beta$. We may decompose $Y_i$ into a systematic mean component, $\mu_i$, and a stochastic component $\epsilon_i$:

$$Y_i = \mu_i + \epsilon_i \tag{29.1}$$

The conventional linear regression model assumes that $\mu_i$ is a linear predictor formed from the explanatory variables and coefficients, $\mu_i = X_i'\beta$, and that $\epsilon_i$ is normally distributed with zero mean and constant variance $V_i = \sigma^2$.
The GLM framework of Nelder and Wedderburn (1972) generalizes linear regression by allowing the mean component $\mu_i$ to depend on a linear predictor through a nonlinear function, and the distribution of the stochastic component $\epsilon_i$ to be any member of the linear exponential family. Specifically, a GLM specification consists of:

• A linear predictor or index $\eta_i = X_i'\beta + o_i$, where $o_i$ is an optional offset term.

• A distribution for $Y_i$ belonging to the linear exponential family.

• A smooth, invertible link function, $g(\mu_i) = \eta_i$, relating the mean $\mu_i$ and the linear predictor $\eta_i$.
A wide range of familiar models may be cast in the form of a GLM by choosing an appropriate distribution and link function. For example:
Model                     Family      Link
Linear Regression         Normal      Identity: $g(\mu) = \mu$
Exponential Regression    Normal      Log: $g(\mu) = \log(\mu)$
Logistic Regression       Binomial    Logit: $g(\mu) = \log\bigl(\mu/(1-\mu)\bigr)$
Probit Regression         Binomial    Probit: $g(\mu) = \Phi^{-1}(\mu)$
Poisson Count             Poisson     Log: $g(\mu) = \log(\mu)$

For a detailed description of these and other familiar specifications, see McCullagh and Nelder (1989) and Hardin and Hilbe (2007). It is worth noting that the GLM framework is able to nest models for continuous (normal), proportion (logistic and probit), and discrete count (Poisson) data.
Taken together, the GLM assumptions imply that the first two moments of $Y_i$ may be written as functions of the linear predictor:

$$\mu_i = g^{-1}(\eta_i)$$
$$V_i = (\phi/w_i)\,V_\mu\bigl(g^{-1}(\eta_i)\bigr) \tag{29.2}$$

where $V_\mu(\mu)$ is a distribution-specific variance function describing the mean-variance relationship, the dispersion constant $\phi > 0$ is a possibly known scale factor, and $w_i > 0$ is a known prior weight that corrects for unequal scaling between observations.
Crucially, the properties of the GLM maximum likelihood estimator depend only on these
two moments. Thus, a GLM specification is principally a vehicle for specifying a mean and
variance, where the mean is determined by the link assumption, and the mean-variance
relationship is governed by the distributional assumption. In this respect, the distributional
assumption of the standard GLM is overly restrictive.
Accordingly, Wedderburn (1974) shows that one need only specify a mean and variance
specification as in Equation (29.2) to define a quasi-likelihood that may be used for coefficient and covariance estimation. Not surprisingly, for variance functions derived from exponential family distributions, the likelihood and quasi-likelihood functions coincide.
McCullagh (1983) offers a full set of distributional results for the quasi-maximum likelihood
(QML) estimator that mirror those for ordinary maximum likelihood.
QML estimators are an important tool for the analysis of GLM and related models. In particular, these estimators permit us to estimate GLM-like models involving mean-variance specifications that extend beyond those for known exponential family distributions, and to
estimate models where the mean-variance specification is of exponential family form, but
the observed data do not satisfy the distributional requirements (Agresti 1990, 13.2.3 offers
a nice non-technical overview of QML).
Alternately, Gourieroux, Monfort, and Trognon (1984) show that consistency of the GLM maximum likelihood estimator requires only correct specification of the conditional mean. Misspecification of the variance relationship does, however, lead to invalid inference, though this may be corrected using robust coefficient covariance estimation. In contrast to the QML results, the robust covariance correction does not require correct specification of a GLM conditional variance.
Specification
The main page of the dialog
is used to describe the basic
GLM specification.
We will focus attention on the GLM Equation specification section, since the Estimation settings section at the bottom of the dialog should be self-explanatory. By default, you should enter the name of the dependent (response) variable followed by a list of regressors defining the linear predictor, as in the examples below.
Alternately, you may enter an explicit linear specification like Y=C(1)+C(2)*X. The
response variable will be taken to be the variable on the left-hand side of the equality (Y)
and the linear predictor will be taken from the right-hand side of the expression
(C(1)+C(2)*X). Offsets may be entered directly in the expression or they may be entered
on the Options page. Note that this specification should not be taken as a literal description
of the mean equation; it is merely a convenient syntax for specifying both the response and
the linear predictor.
Family
Next, you should use the Family dropdown to specify your distribution. The default family is the Normal distribution, but you are free to
choose from the list of linear exponential family and quasi-likelihood
distributions. Note that the last three entries (Exponential Mean,
Power Mean (p), Binomial Squared) are for quasi-likelihood specifications not associated with exponential families.
If the selected distribution requires
specification of an ancillary parameter, you will be prompted to provide
the values. For example, the Binomial
Count and Binomial Proportion distributions both require specification of the number of
trials n i , while the Negative Binomial requires specification of the excess-variance parameter k i .
For descriptions of the various exponential and quasi-likelihood families, see Distribution,
beginning on page 375.
Link
Lastly, you should use the Link dropdown to specify a link function.
EViews will initialize the Link setting to the default for the selected family. In general, the canonical link is used as the default link; however, the Log link is used as the default for the Negative Binomial
family. The Exponential Mean, Power Mean (p), and Binomial
Squared quasi-likelihood families will default to use the Identity,
Log, and Logit links, respectively.
If the link that you select requires specification of parameter values, you will be prompted to
enter the values.
For detailed descriptions of the link functions, see Link, beginning on page 377.
Options
Click on the Options tab to display additional settings for the GLM specification. You may
use this page to augment the equation specification, to choose a dispersion estimator, to
specify the estimation algorithm and associated settings, or to define a coefficient covariance
estimator.
Specification Options
The Specification Options section of the Options tab allows you
to augment the GLM specification.
To include an offset in your linear predictor, simply enter a
series name or expression in the Offset edit field.
The Frequency weights edit field should be used to specify replicates for each observation in the workfile. In practical terms,
the frequency weights act as a form of variance weighting and
inflate the number of observations associated with the data records.
You may also specify prior variance weights using the Weights dropdown and associated edit fields. To specify your weights, simply select a description for the form of the
weighting series (Inverse std. dev., Inverse variance, Std. deviation, Variance), then enter
the corresponding weight series name or expression. EViews will translate the values in the
weighting series into the appropriate values for w i . For example, to specify w i directly, you
should select Inverse variance then enter the series or expression containing the w i values.
If you instead choose Variance, EViews will set $w_i$ to the inverse of the values in the weight series. See "Weighted Least Squares" on page 36 for additional discussion.
Dispersion Options
The Method dropdown may be used to select the dispersion computation
method. You will always be given the opportunity to choose between the
Default setting or Pearson Chi-Sq., Fixed at 1, and User-Specified. Additionally, if the specified distribution is in the linear exponential family, you
may choose to use the Deviance statistic.
The Default entry instructs EViews to use the default method for
computing the dispersion, which will depend on the specified
family. For families with a free dispersion parameter, the default
is to use the Pearson Chi-Sq. statistic, otherwise the default is
Fixed at 1. The current default setting will be displayed directly below the dropdown.
Estimation Options
The Estimation section of the page lets you specify the
optimization algorithm, starting values, and other estimation settings.
The Optimization Algorithm and Step method dropdown
menus control your estimation method.
The default Optimization Algorithm is Newton-Raphson, but you may instead select BFGS, OPG - BHHH, Fisher Scoring (IRLS), or EViews legacy.
The default Step method is Marquardt, but you may use the menu to select Dogleg
or Line search.
If you select optimization using EViews legacy, you will be prompted to select a legacy method in place of a step method. The Legacy method dropdown offers the choice of the default Quadratic Hill Climbing (Newton-Raphson with Marquardt steps), Newton-Raphson with line search, IRLS - Fisher Scoring, and BHHH (OPG with line search).
By default, the Starting Values dropdown is set to EViews Supplied. The EViews default starting values for $\beta$ are obtained using the suggestion of McCullagh and Nelder to initialize the IRLS algorithm at $\mu_i = (n_i y_i + 0.5)/(n_i + 1)$ for the binomial proportion family, and $\mu_i = (y_i + \bar{y})/2$ otherwise, then running a single IRLS coefficient update to obtain the initial $\beta$. Alternately, you may specify starting values that are a fraction of the default values, or you may instruct EViews to use your own values.
You may use the IRLS iterations edit field to instruct EViews to perform a fixed number of
additional IRLS updates to refine coefficient values prior to starting the specified estimation
algorithm.
The Max Iterations and Convergence edit fields are self-explanatory. Selecting the Display
settings checkbox instructs EViews to show detailed information on tolerances and initial
values in the equation output.
Coefficient Name
You may use the Coefficient name section of the dialog to change the coefficient vector from
the default C. EViews will create and resize the vector if necessary.
Examples
In this section, we offer three examples illustrating GLM estimation in EViews.
Exponential Regression
Our first example uses the Kennan (1985) dataset (Strike.WF1) on the number of strikes (NUMB), industrial production (IP), and a dummy variable representing the month of February (FEB). To account for the non-negative response variable NUMB, we may estimate a nonlinear specification of the form:

$$NUMB_i = \exp(\beta_1 + \beta_2 IP_i + \beta_3 FEB_i) + \epsilon_i \tag{29.3}$$

where $\epsilon_i \sim N(0, \sigma^2)$. This model falls into the GLM framework with a log link and normal family. To estimate this specification, bring up the GLM dialog and fill out the equation specification page as follows:
numb c ip feb
then change the Link function to Log. For the moment, we leave the remaining settings and
those on the Options page at their default values. Click on OK to accept the specification
and estimate the model. EViews displays the following results:
Variable          Coefficient    Std. Error    z-Statistic    Prob.
C                  1.727368      0.066206      26.09097      0.0000
IP                 2.664874      1.237904      2.152732      0.0313
FEB               -0.391015      0.313445     -1.247476      0.2122

Mean dependent var      5.495146    S.D. dependent var      3.653829
Sum squared resid       1273.783    Log likelihood         -275.6964
Akaike info criterion   5.411580    Schwarz criterion       5.488319
Hannan-Quinn criter.    5.442662    Deviance                1273.783
Deviance statistic      12.73783    Restr. deviance         1361.748
LR statistic            6.905754    Prob(LR statistic)      0.031654
Pearson SSR             1273.783    Pearson statistic       12.73783
Dispersion              12.73783
The top portion of the output displays the estimation settings and basic results, in particular the choice of algorithm (Newton-Raphson with Marquardt steps), distribution family (Normal), and link function (Log), as well as the dispersion estimator, coefficient covariance estimator, and estimation status. We see that the dispersion estimator is based on the Pearson $\chi^2$ statistic and the coefficient covariance is computed using the inverse of the (negative of the) observed Hessian.

The coefficient estimates indicate that IP is positively related to the number of strikes, and that the relationship is statistically significant at conventional levels. The FEB dummy variable is negatively related to NUMB, but the relationship is not statistically significant.

The bottom portion of the output displays various descriptive statistics. Note that in place of some of the more familiar statistics, EViews reports the deviance, deviance statistic (deviance divided by the degrees-of-freedom), restricted deviance (for the model with only a constant), and the corresponding LR test statistic and probability. The test indicates that the IP and FEB variables are jointly significant at roughly the 3% level. Also displayed are the sum-of-squared Pearson residuals and the estimate of the dispersion, which in this example is the Pearson statistic.
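The same specification may also be estimated from the command line; a minimal sketch, where the family and link option keywords shown are assumptions about the glm method's option syntax and the equation name is illustrative:

equation eq_expreg.glm(family=normal, link=log) numb c ip feb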
Binomial
We illustrate the estimation of GLM binomial logistic regression using a simple example
from Agresti (2007, Table 3.1, p. 69) examining the relationship between snoring and heart
disease. The data in the first page of the workfile Snoring.WF1 consist of grouped binomial response data for 2,484 subjects divided into four risk factor groups for snoring level
(SNORING), coded as 0, 2, 4, 5. Associated with each of the four groups is the number of
individuals in the group exhibiting heart disease (DISEASE) as well as a total group size
(TOTAL).
SNORING    DISEASE    TOTAL
0          24         1379
2          35         638
4          21         213
5          30         254
We may estimate a logistic regression model for these data in either raw frequency or proportions form.
To estimate the model in raw frequency form, bring up the GLM equation dialog, enter the
linear predictor specification:
disease c snoring
select Binomial Count in the Family dropdown, and enter TOTAL in the Number of trials
edit field. Next switch over to the Options page and turn off the d.f. Adjustment for the
coefficient covariance. Click on OK to estimate the equation.
Dependent Variable: DISEASE
Method: Generalized Linear Model (Newton-Raphson / Marquardt
steps)
Date: 03/10/15 Time: 15:19
Sample: 1 4
Included observations: 4
Family: Binomial Count (n = TOTAL)
Link: Logit
Dispersion fixed at 1
Summary statistics are for the binomial proportions and implicit
variance weights used in estimation
Convergence achieved after 2 iterations
Coefficient covariance computed using observed Hessian
No d.f. adjustment for standard errors & covariance
The output header shows relevant information for the estimation procedure. Note in particular the EViews message that summary statistics are computed for the binomial proportions
data. This message is a hint at the fact that EViews estimates the binomial count model by
scaling the dependent variable by the number of trials, and estimating the corresponding
proportions specification.
Accordingly, you could have specified the model in proportions form. Simply enter the linear
predictor specification:
disease/total c snoring
with Binomial Proportions specified in the Family dropdown and TOTAL entered in the
Number of trials edit field.
Variable          Coefficient    Std. Error    z-Statistic    Prob.
C                 -3.866248      0.166214     -23.26061      0.0000
SNORING            0.397337      0.050011      7.945039      0.0000

Mean dependent var      0.023490    S.D. dependent var      0.001736
Sum squared resid       0.000357    Log likelihood         -11.53073
Akaike info criterion   6.765367    Schwarz criterion       6.458514
Hannan-Quinn criter.    6.092001    Deviance                2.808912
Deviance statistic      1.404456    Restr. deviance         65.90448
LR statistic            63.09557    Prob(LR statistic)      0.000000
Pearson SSR             2.874323    Pearson statistic       1.437162
Dispersion              1.000000
The top portion of the output changes to show the different settings, but the remaining output is identical. In particular, there is strong evidence that SNORING is related to heart disease in these data, with the estimated probability of heart disease increasing with the level
of snoring.
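For illustration, plugging the reported coefficients into the logistic link gives the fitted probability of heart disease at each snoring level:

$$\hat\pi(\text{SNORING}) = \frac{\exp(-3.866 + 0.397\,\text{SNORING})}{1 + \exp(-3.866 + 0.397\,\text{SNORING})}$$

so that the fitted probability rises from roughly 0.02 at SNORING = 0 to roughly 0.13 at SNORING = 5.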
It is worth mentioning that data of this form are sometimes represented in a frequency weighted form in which the data for each group are divided into two records, one for the binomial successes, and one for the failures. Each record contains the number of repeats in the group and a binary indicator for success (the total number of records is $2G$, where $G$ is the number of groups). The FREQ page of the Snoring.WF1 workfile contains the data represented in this fashion:
SNORING    DISEASE    N
0          1          24
2          1          35
4          1          21
5          1          30
0          0          1355
2          0          603
4          0          192
5          0          224
In this representation, DISEASE is an indicator for whether the record corresponds to individuals with heart disease or not, and N is the number of individuals in the category.
Estimation of the equivalent GLM model specified using the frequency weighted data is
straightforward. Simply enter the linear predictor specification:
disease c snoring
with either Binomial Proportions or Binomial Count specified in the Family dropdown.
Since each observation corresponds to a binary indicator, you should enter 1 in the Number of trials edit field. The multiple individuals in the category are handled by entering
N in the Frequency weights field in the Options page.
Dependent Variable: DISEASE
Method: Generalized Linear Model (Newton-Raphson / Marquardt
steps)
Date: 03/10/15 Time: 15:16
Sample: 1 8
Included cases: 8
Total observations: 2484
Family: Binomial Count (n = 1)
Link: Logit
Frequency weight series: N
Dispersion fixed at 1
Convergence achieved after 6 iterations
Coefficient covariance computed using observed Hessian
No d.f. adjustment for standard errors & covariance
Variable          Coefficient    Std. Error    z-Statistic    Prob.
C                 -3.866248      0.166214     -23.26061      0.0000
SNORING            0.397337      0.050011      7.945039      0.0000

Mean dependent var      0.044283    S.D. dependent var      0.205765
Sum squared resid       102.1917    Log likelihood         -418.8658
Akaike info criterion   0.338861    Schwarz criterion       0.343545
Hannan-Quinn criter.    0.340562    Deviance                837.7316
Deviance statistic      0.337523    Restr. deviance         900.8272
LR statistic            63.09557    Prob(LR statistic)      0.000000
Pearson SSR             2412.870    Pearson statistic       0.972147
Dispersion              1.000000
Note that while a number of the summary statistics differ due to the different representation of the data (notably the Deviance and Pearson SSRs), the coefficient estimates and LR test statistics in this case are identical to those outlined above. There will, however, be substantive differences between the two results in settings when the dispersion is estimated, since the effective number of observations differs in the two settings.
Lastly, the data may be represented in individual trial form, which expands the observations for each trial in the group into a separate record. The total number of records in the data is $\sum_i n_i$, where $n_i$ is the number of trials in the i-th (of $G$) group. This representation is the traditional ungrouped binary response form for the data. Results for data in this representation should match those for the frequency weighted data.
Binomial Proportions
Papke and Wooldridge (1996) apply GLM techniques to the analysis of fractional response
data for 401K tax advantaged savings plan participation rates (401kjae.WF1). Their analysis focuses on the relationship between plan participation rates (PRATE) and the employer
matching contribution rates (MRATE), accounting for the log of total employment
(LOG(TOTEMP), LOG(TOTEMP)^2), plan age (AGE, AGE^2), and a binary indicator for
whether the plan is the only pension plan offered by the plan sponsor (SOLE).
We focus on two of the equations estimated in the paper. In both, the authors employ a GLM
specification using a binomial proportion family and logit link. Information on the binomial
group size $n_i$ is ignored, but variance misspecification is accounted for in two ways: first using a binomial QMLE with GLM standard errors, and second using the robust Huber-White covariance approach.
To estimate the GLM standard error specification, we first call up the GLM dialog and enter
the linear predictor specification:
prate mprate log(totemp) log(totemp)^2 age age^2 sole
Next, select the Binomial Proportion family, and enter the sample description
@all if mrate<=1
Lastly, we leave the Number of trials edit field at the default value of 1, but correct for heterogeneity by going to the Options page and specifying Pearson Chi-Sq. dispersion estimates. Click on OK to continue.
The resulting estimates correspond to the coefficient estimates and first set of standard errors in Papke and Wooldridge (Table II, column 2):
Variable          Coefficient    Std. Error    z-Statistic    Prob.
MRATE              1.390080      0.100368      13.84981      0.0000
LOG(TOTEMP)       -1.001875      0.111222     -9.007914      0.0000
LOG(TOTEMP)^2      0.052186      0.007105      7.345545      0.0000
AGE                0.050113      0.008710      5.753136      0.0000
AGE^2             -0.000515      0.000211     -2.444532      0.0145
SOLE               0.007947      0.046785      0.169860      0.8651
C                  5.057998      0.426942      11.84703      0.0000

Mean dependent var      0.847769    S.D. dependent var      0.169961
Sum squared resid       92.69516    Quasi-log likelihood   -8075.397
Deviance                765.0353    Deviance statistic      0.202551
Restr. deviance         895.5505    Quasi-LR statistic      680.4838
Prob(Quasi-LR stat)     0.000000    Pearson SSR             724.4200
Dispersion              0.191798    Pearson statistic       0.191798
Papke and Wooldridge offer a detailed analysis of the results (p. 628–629), which we will not duplicate here. We will point out that the estimate of the dispersion (0.191798) taken from the Pearson statistic is far from the restricted value of 1.0.
The results using the QML with GLM standard errors rely on validity of the GLM assumption
for the variance given in Equation (29.2), an assumption that may be too restrictive. We may
instead estimate the equation without imposing a particular conditional variance specification by computing our estimates using a robust Huber-White sandwich method. Click on
Estimate to bring up the equation dialog, select the Options tab, then change the Covariance method from Default to Huber/White. Click on OK to estimate the revised specification:
Variable          Coefficient    Std. Error    z-Statistic    Prob.
MRATE              1.390080      0.107792      12.89596      0.0000
LOG(TOTEMP)       -1.001875      0.110524     -9.064757      0.0000
LOG(TOTEMP)^2      0.052186      0.007134      7.315681      0.0000
AGE                0.050113      0.008852      5.661091      0.0000
AGE^2             -0.000515      0.000212     -2.432326      0.0150
SOLE               0.007947      0.050242      0.158172      0.8743
C                  5.057998      0.421199      12.00858      0.0000

Mean dependent var      0.847769    S.D. dependent var      0.169961
Sum squared resid       92.69516    Log likelihood         -1179.279
Akaike info criterion   0.626997    Schwarz criterion       0.638538
Hannan-Quinn criter.    0.631100    Deviance                765.0353
Deviance statistic      0.202551    Restr. deviance         895.5505
LR statistic            130.5153    Prob(LR statistic)      0.000000
Pearson SSR             724.4200    Pearson statistic       0.191798
Dispersion              1.000000
EViews reports the new method of computing the coefficient covariance in the header. The
coefficient estimates are unchanged, since the alternative computation of the coefficient
covariance is a post-estimation procedure, and the new standard error estimates correspond to the second set of standard errors in Papke and Wooldridge (Table II, column 2). Notably, the use
of an alternative estimator for the coefficient covariance has little substantive effect on the
results.
Residuals
The main equation output offers summary statistics for the sum-of-squared response residuals (Sum squared resid), and the sum-of-squared Pearson residuals (Pearson SSR).
The Actual, Fitted, Residual views and Residual Diagnostics allow you to examine properties of your residuals. The Actual, Fitted, Residual Table and Graph show the fit of the
unweighted data. As the name suggests, the Standardized Residual Graph displays the
standardized (scaled Pearson) residuals.
The Residual Diagnostics show Histograms of the standardized residuals and Correlograms of the standardized residuals and the squared standardized residuals.
The Make Residuals proc allows you to save the Ordinary (response), Standardized (scaled Pearson), or Generalized (score) residuals into the workfile. The latter
may be useful for constructing test statistics (note, however, that in some cases, it may be more useful to compute the gradients of the model directly using Proc/Make
Gradient Group).
Given standardized residuals SRES for equation EQ1, the
unscaled Pearson residuals may be obtained using the command
series pearson = sres * @sqrt(eq1.@dispersion)
Forecasting
EViews offers built-in tools for producing in and out-of-sample forecasts (fits) from your
GLM estimated equation. Simply click on the Forecast button on your estimated equation to
bring up the forecast dialog, then enter the desired settings.
You should first use the radio buttons to specify whether you wish to
forecast the expected dependent
variable m i or the linear index h i .
Next, enter the name of the series to
hold the forecast output, and set the
forecast sample.
Lastly, specify whether you wish to
produce a forecast graph and
whether you wish to fill non-forecast values in the workfile with
actual values or to fill them with
NAs. For most cross-section applications, we recommend that you
uncheck this box.
Click on OK to produce the forecast.
Note that while EViews does not presently offer a menu item for saving the fitted GLM variances or scaled variances, you can easily obtain results by saving the ordinary and standardized residuals and taking ratios ("Residuals" on page 384). If ORESID are the ordinary and SRESID are the standardized residuals for equation EQ1, then the following commands will compute the fitted variance series:

series glmsvar = (oresid / sresid)^2
series glmvar = glmsvar * eq1.@dispersion
Testing
You may perform Wald tests of coefficient restrictions. Simply select View/Coefficient Diagnostics/Wald - Coefficient Restrictions, then enter your restrictions in the edit field. For the
Papke-Wooldridge example above with Huber-White robust covariances, we may use a Wald
test to evaluate the joint significance of AGE^2 and SOLE by entering the restriction
C(5)=C(6)=0 and clicking on OK to perform the test.
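The same test may also be issued from the command line; a sketch, assuming the estimated equation is named EQ2_QMLE_R as in the output below:

eq2_qmle_r.wald c(5)=0, c(6)=0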
Wald Test:
Equation: EQ2_QMLE_R
Null Hypothesis: C(5)=C(6)=0

Test Statistic    Value       df           Probability
F-statistic       2.970226    (2, 3777)    0.0514
Chi-square        5.940451    2            0.0513

Null Hypothesis Summary:
Normalized Restriction (= 0)    Value        Std. Err.
C(5)                           -0.000515     0.000212
C(6)                            0.007947     0.050242
The test results show joint significance at just above the 5% level. The Confidence Intervals and Confidence Ellipses... views will also employ the robust covariance matrix estimates.
The Omitted Variables... and Redundant Variables... views and the Ramsey RESET Test...
views are likelihood ratio based tests. Note that the RESET test is a special case of an omitted variables test where the omitted variables are powers of the fitted values from the original equation.
We illustrate these tests by performing the RESET test on the first Papke-Wooldridge QMLE
equation with GLM covariances. Select View/Stability Diagnostics/Ramsey Reset Test...
and change the default to include 2 fitted terms in the test equation.
Ramsey RESET Test
Equation: EQ2_QMLE
Specification: PRATE MRATE LOG(TOTEMP) LOG(TOTEMP)^2 AGE AGE^2 SOLE C
Omitted Variables: Powers of fitted values from 2 to 3

                   Value       df           Probability
F-statistic        0.311140    (2, 3775)    0.7326
QLR* statistic     0.622280    2            0.7326

F-test summary:
                        Sum of Sq.    df      Mean Squares
Test Deviance           0.119389      2       0.059694
Restricted Deviance     765.0353      3777    0.202551
Unrestricted Deviance   764.9159      3775    0.202627
Dispersion SSR          724.2589      3775    0.191857

QLR* test summary:
                        Value         df
Restricted Deviance     765.0353      3777
Unrestricted Deviance   764.9159      3775
Dispersion              0.191857
The top portion of the output shows the test settings, and the test summaries. The bottom
portion of the output shows the estimated test equation. The results show little evidence of
nonlinearity.
Notice that in contrast to LR tests in most other equation views, the likelihood ratio test statistics in GLM equations are obtained from analysis of the deviances or quasi-deviances. Suppose $D_0$ is the unscaled deviance under the null and $D_1$ is the corresponding statistic under the alternative hypothesis. The usual asymptotic $\chi^2$ likelihood ratio test statistic may be written in terms of the difference of deviances with common scaling,

$$\frac{D_0 - D_1}{\hat\phi} \sim \chi^2_r \tag{29.4}$$

as $N\to\infty$, where $r$ is the number of restrictions imposed under the null. The corresponding F-statistic form of the test is

$$\frac{(D_0 - D_1)/r}{\hat\phi} \sim F_{r,\,N-p} \tag{29.5}$$

where $N - p$ is the degrees-of-freedom under the alternative and $\hat\phi$ is an estimate of the dispersion. EViews will estimate $\hat\phi$ under the alternative hypothesis using the method specified in your equation.
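Using the deviances and dispersion reported in the RESET output above, for example:

$$QLR^* = \frac{D_0 - D_1}{\hat\phi} = \frac{765.0353 - 764.9159}{0.191857} \approx 0.6223$$

which matches the QLR* statistic reported by EViews.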
We point out that the Ramsey test results (and all other GLM LR test statistics) presented here may be problematic since they rely on the GLM variance assumption. Papke and Wooldridge offer a robust LM formulation for the Ramsey RESET test. This test is not currently built into EViews, but it may be constructed with some effort using auxiliary results provided by EViews (see Papke and Wooldridge, p. 625 for details on the test construction).
Technical Details
The following discussion offers a brief technical summary of GLMs, describing specification,
estimation, and hypothesis testing in this framework. Those wishing greater detail should consult McCullagh and Nelder's (1989) monograph or the book-length survey by Hardin
and Hilbe (2007).
Distribution
A GLM assumes that the $Y_i$ are independent random variables following a linear exponential family distribution with density:

$$f(y_i, \theta_i, \phi, w_i) = \exp\left(\frac{y_i\theta_i - b(\theta_i)}{\phi/w_i} + c(y_i,\phi,w_i)\right) \tag{29.6}$$

where $b$ and $c$ are distribution-specific functions. $\theta_i = \theta(\mu_i)$, which is termed the canonical parameter, fully parameterizes the distribution in terms of the conditional mean, the dispersion value $\phi$ is a possibly known scale nuisance parameter, and $w_i$ is a known prior weight that corrects for unequal scaling between observations with otherwise constant $\phi$.

The exponential family assumption implies that the mean and variance of $Y_i$ may be written as

$$E(Y_i) = b'(\theta_i) = \mu_i$$
$$\mathrm{Var}(Y_i) = (\phi/w_i)\,b''(\theta_i) = (\phi/w_i)\,V_\mu(\mu_i) \tag{29.7}$$

where $b'(\theta_i)$ and $b''(\theta_i)$ are the first and second derivatives of the $b$ function, respectively, and $V_\mu$ is a distribution-specific variance function that depends only on $\mu_i$.
EViews supports the following exponential family distributions:

Family                                 $\theta_i$                                 $b(\theta_i)$                      $V_\mu(\mu)$
Normal                                 $\mu_i$                                    $\theta_i^2/2$                     $1$
Gamma                                  $-1/\mu_i$                                 $-\log(-\theta_i)$                 $\mu^2$
Inverse Gaussian                       $-1/(2\mu_i^2)$                            $-(-2\theta_i)^{1/2}$              $\mu^3$
Poisson                                $\log(\mu_i)$                              $e^{\theta_i}$                     $\mu$
Binomial Proportion ($n_i$ trials)     $\log\bigl(\mu_i/(1-\mu_i)\bigr)$          $\log(1 + e^{\theta_i})$           $\mu(1-\mu)$
Negative Binomial ($k_i$ is known)     $\log\bigl(k_i\mu_i/(1+k_i\mu_i)\bigr)$    $-\log(1 - e^{\theta_i})/k_i$      $\mu(1+k_i\mu)$
The corresponding density functions for each of these distributions are given by:

Normal

$$f(y_i,\mu_i,\sigma^2,w_i) = (2\pi\sigma^2/w_i)^{-1/2}\exp\left(-\frac{y_i^2 - 2y_i\mu_i + \mu_i^2}{2\sigma^2/w_i}\right) \tag{29.8}$$

Gamma

$$f(y_i,\mu_i,r_i) = \frac{(y_i r_i/\mu_i)^{r_i}\exp\bigl(-y_i r_i/\mu_i\bigr)}{y_i\,\Gamma(r_i)} \tag{29.9}$$

Inverse Gaussian

$$f(y_i,\mu_i,\lambda,w_i) = \bigl(2\pi y_i^3(\lambda/w_i)\bigr)^{-1/2}\exp\left(-\frac{(y_i-\mu_i)^2}{2y_i\mu_i^2(\lambda/w_i)}\right) \tag{29.10}$$

for $y_i > 0$.

Poisson

$$f(y_i,\mu_i) = \frac{\mu_i^{y_i}\exp(-\mu_i)}{y_i!} \tag{29.11}$$

Binomial Proportion

$$f(y_i,n_i,\mu_i) = \binom{n_i}{n_i y_i}\,\mu_i^{n_i y_i}(1-\mu_i)^{n_i(1-y_i)} \tag{29.12}$$

Negative Binomial

$$f(y_i,\mu_i,k_i) = \frac{\Gamma(y_i + 1/k_i)}{\Gamma(y_i+1)\,\Gamma(1/k_i)}\,\frac{(k_i\mu_i)^{y_i}}{(1+k_i\mu_i)^{y_i+1/k_i}} \tag{29.13}$$

The quasi-likelihood families offered by EViews are characterized by the following variance functions:

Family                     $V_\mu(\mu)$
Poisson                    $\mu$
Binomial Proportion        $\mu(1-\mu)$
Negative Binomial ($k$)    $\mu(1+k\mu)$
Power Mean ($r$)           $\mu^r$
Exponential Mean           $e^{\mu}$
Binomial Squared           $\mu^2(1-\mu)^2$
The first three entries in the table correspond to overdispersed or prior weighted versions of
the specified distribution. The last three entries are pure quasi-likelihood distributions that
do not correspond to exponential family distributions. See Quasi-likelihoods, beginning on
page 379 for additional discussion.
Link
The following table lists the names, functions, and associated range restrictions for the supported links:
Name                        Link Function $g(\mu)$                                                                            Range of $\mu$
Identity                    $\mu$                                                                                             $(-\infty, \infty)$
Log                         $\log(\mu)$                                                                                       $(0, \infty)$
Log-Complement              $\log(1-\mu)$                                                                                     $(-\infty, 1)$
Logit                       $\log\bigl(\mu/(1-\mu)\bigr)$                                                                     $(0, 1)$
Probit                      $\Phi^{-1}(\mu)$                                                                                  $(0, 1)$
Log-Log                     $-\log(-\log(\mu))$                                                                               $(0, 1)$
Complementary Log-Log       $\log(-\log(1-\mu))$                                                                              $(0, 1)$
Inverse                     $1/\mu$                                                                                           $(-\infty, \infty)$
Power ($p$)                 $\mu^p$ if $p\neq 0$; $\log(\mu)$ if $p = 0$                                                      $(0, \infty)$
Power Odds Ratio ($p$)      $\bigl(\mu/(1-\mu)\bigr)^p$ if $p\neq 0$; $\log\bigl(\mu/(1-\mu)\bigr)$ if $p = 0$                $(0, 1)$
Box-Cox ($p$)               $(\mu^p - 1)/p$ if $p\neq 0$; $\log(\mu)$ if $p = 0$                                              $(0, \infty)$
Box-Cox Odds Ratio ($p$)    $\bigl((\mu/(1-\mu))^p - 1\bigr)/p$ if $p\neq 0$; $\log\bigl(\mu/(1-\mu)\bigr)$ if $p = 0$        $(0, 1)$
EViews does not restrict the link choices associated with a given distributional family. Thus,
it is possible for you to choose a link function that returns invalid mean values for the specified distribution at some parameter values, in which case your likelihood evaluation and
estimation will fail.
One important role of the inverse link function is to map the real number domain of the linear index into the range of the dependent variable. Consequently, the choice of link function is often governed in part by the desire to enforce range restrictions on the fitted mean. For example, the mean of a binomial proportions model must lie between 0 and 1, while the Poisson, Gamma, and negative binomial distributions require a positive mean value. Accordingly, the use of a Logit, Probit, Log-Log, Complementary Log-Log, Power Odds Ratio, or Box-Cox Odds Ratio link is common with a binomial distribution, while the Log, Power, and Box-Cox families are generally viewed as more appropriate for Poisson or Gamma distribution data.
EViews will default to use the canonical link for a given distribution. The canonical link is the function that equates the canonical parameter $\theta$ of the exponential family distribution and the linear predictor, $\eta = g(\mu) = \theta(\mu)$. The canonical links for relevant distributions are given by:

Family                 Canonical Link
Normal                 Identity
Gamma                  Inverse
Inverse Gaussian       Power ($p = -2$)
Poisson                Log
Binomial Proportion    Logit
The negative binomial canonical link is not supported in EViews so the log link is used as
the default choice in this case. We note that while the canonical link offers computational
and conceptual convenience, it is not necessarily the best choice for a given problem.
Quasi-likelihoods
Wedderburn (1974) proposed the method of maximum quasi-likelihood for estimating
regression parameters when one has knowledge of a mean-variance relationship for the
response, but is unwilling or unable to commit to a valid fully specified distribution function.
Under the assumption that the $Y_i$ are independent with mean $\mu_i$ and variance $\mathrm{Var}(Y_i) = V_\mu(\mu_i)(\phi/w_i)$, the function

$$U_i = u(\mu_i, y_i, \phi, w_i) = \frac{y_i - \mu_i}{(\phi/w_i)V_\mu(\mu_i)} \tag{29.14}$$

behaves like the derivative of a log-likelihood with respect to $\mu_i$, so that its integral,

$$Q(\mu_i, y_i, \phi, w_i) = \int_{y_i}^{\mu_i}\frac{y_i - t}{(\phi/w_i)V_\mu(t)}\,dt \tag{29.15}$$

if it exists, should behave very much like a log-likelihood contribution. We may use the individual contributions $Q_i$ to define the quasi-log-likelihood, and the scaled and unscaled quasi-deviance functions

$$q(\mu, y, \phi, w) = \sum_{i=1}^{N}Q(\mu_i, y_i, \phi, w_i)$$
$$D^*(\mu, y, \phi, w) = -2\,q(\mu, y, \phi, w)$$
$$D(\mu, y, w) = \phi\,D^*(\mu, y, \phi, w) \tag{29.16}$$
The supported variance functions, their restrictions, and the associated distributions (where they exist) are given below:

$V_\mu(\mu)$        Restrictions                          Distribution
$1$                 None                                  Normal
$\mu$               $\mu > 0$, $y \geq 0$                 Poisson
$\mu^2$             $\mu > 0$, $y > 0$                    Gamma
$\mu^r$             $\mu > 0$, $r \neq 0, 1, 2$           ---
$e^{\mu}$           None                                  ---
$\mu(1-\mu)$        $0 < \mu < 1$, $0 \leq y \leq 1$      Binomial Proportion
$\mu^2(1-\mu)^2$    $0 < \mu < 1$, $0 \leq y \leq 1$      ---
$\mu(1+k\mu)$       $\mu > 0$, $y \geq 0$                 Negative Binomial

Note that the power-mean $\mu^r$, exponential mean $e^{\mu}$, and squared binomial proportion $\mu^2(1-\mu)^2$ variance assumptions do not correspond to exponential family distributions.
Estimation
Estimation of GLM models may be divided into the estimation of three basic components: the $\beta$ coefficients, the coefficient covariance matrix $\Sigma$, and the dispersion parameter $\phi$.
Coefficient Estimation
The estimation of $\beta$ is accomplished using the method of maximum likelihood (ML). Let $y = (y_1, \dots, y_N)$ and $\mu = (\mu_1, \dots, \mu_N)$. We may write the log-likelihood function as

$$l(\mu, y, \phi, w) = \sum_{i=1}^{N}\log f(y_i, \theta_i, \phi, w_i) \tag{29.17}$$

Differentiating with respect to $\beta$ yields

$$\frac{\partial l}{\partial\beta} = \sum_{i=1}^{N}\frac{\partial\log f(y_i,\theta_i,\phi,w_i)}{\partial\theta_i}\,\frac{\partial\theta_i}{\partial\beta} = \sum_{i=1}^{N}\frac{y_i - b'(\theta_i)}{\phi/w_i}\,\frac{\partial\theta_i}{\partial\mu_i}\,\frac{\partial\mu_i}{\partial\eta_i}\,X_i' = \sum_{i=1}^{N}\frac{w_i}{\phi}\,\frac{(y_i - \mu_i)}{V_\mu(\mu_i)}\,\frac{\partial\mu_i}{\partial\eta_i}\,X_i' \tag{29.18}$$

where the last equality uses the fact that $\partial\mu_i/\partial\theta_i = V_\mu(\mu_i)$. Since the scalar dispersion parameter $\phi$ is incidental to the first-order conditions, we may ignore it when estimating $\beta$. In practice this is accomplished by evaluating the likelihood function at $\phi = 1$.
It will prove useful in our discussion to define the scaled deviance $D^*$ and the unscaled deviance $D$ as

$$D^*(\mu, y, \phi, w) = -2\bigl\{l(\mu, y, \phi, w) - l(y, y, \phi, w)\bigr\}$$
$$D(\mu, y, w) = \phi\,D^*(\mu, y, \phi, w) \tag{29.19}$$
respectively. The scaled deviance $D^*$ compares the log-likelihood for the saturated (unrestricted) model, $l(y, y, \phi, w)$, with the log-likelihood function evaluated at an arbitrary $\mu$, $l(\mu, y, \phi, w)$.

The unscaled deviance $D$ is simply the scaled deviance multiplied by the dispersion, or equivalently, the scaled deviance evaluated at $\phi = 1$. It is easy to see that minimizing either deviance with respect to $\beta$ is equivalent to maximizing the log-likelihood with respect to $\beta$.
In general, solving the first-order conditions for $\beta$ requires an iterative approach. EViews offers three different algorithms for obtaining solutions: Newton-Raphson, BHHH, and IRLS - Fisher Scoring. All of these methods are variants of Newton's method but differ in the method for computing the gradient weighting matrix used in coefficient updates (see Optimization Algorithms on page 1011).

IRLS, which stands for Iterated Reweighted Least Squares, is a commonly used algorithm for estimating GLM models. IRLS is equivalent to Fisher Scoring, a Newton-method variant that employs the Fisher Information (negative of the expected Hessian matrix) as the update weighting matrix in place of the negative of the observed Hessian matrix used in standard Newton-Raphson, or the outer-product of the gradients (OPG) used in BHHH.

In the GLM context, the IRLS-Fisher Scoring coefficient updates have a particularly simple form that may be implemented using weighted least squares, where the weights are known functions of the fitted mean that are updated at each iteration. For this reason, IRLS is particularly attractive in cases where one does not have access to custom software for estimating GLMs. Moreover, in cases where one's preference is for an observed-Hessian Newton method, the least squares nature of the IRLS updates makes the latter well-suited to refining starting values prior to employing one of the other methods.
Coefficient Covariance Estimation

EViews considers three estimators of the information matrix: the expected information, the observed Hessian, and the outer-product of the gradients,

$$\hat{I} = -E\left(\frac{\partial^2 l}{\partial\beta\,\partial\beta'}\right)\bigg|_{\hat\beta} = X'\Lambda_I X, \qquad \hat{H} = -\frac{\partial^2 l}{\partial\beta\,\partial\beta'}\bigg|_{\hat\beta} = X'\Lambda_H X, \qquad \hat{J} = \sum_{i=1}^{N}\frac{\partial\log f_i}{\partial\beta}\,\frac{\partial\log f_i}{\partial\beta'}\bigg|_{\hat\beta} = X'\Lambda_J X \tag{29.20}$$

where the diagonal weight matrices $\Lambda_I$, $\Lambda_H$, and $\Lambda_J$ have representative elements

$$\lambda_{I,i} = \left(\frac{w_i}{\phi}\right)V_\mu(\mu_i)^{-1}\left(\frac{\partial\mu_i}{\partial\eta}\right)^2$$
$$\lambda_{H,i} = \lambda_{I,i} + \left(\frac{w_i}{\phi}\right)(y_i-\mu_i)\left[V_\mu(\mu_i)^{-2}\left(\frac{\partial\mu_i}{\partial\eta}\right)^2\frac{\partial V_\mu(\mu_i)}{\partial\mu} - V_\mu(\mu_i)^{-1}\frac{\partial^2\mu_i}{\partial\eta^2}\right] \tag{29.21}$$
$$\lambda_{J,i} = \left[\left(\frac{w_i}{\phi}\right)(y_i-\mu_i)\,V_\mu(\mu_i)^{-1}\frac{\partial\mu_i}{\partial\eta}\right]^2$$
Given correct specification of the likelihood, asymptotically consistent estimators for the coefficient covariance $\Sigma$ may be obtained by taking the inverse of one of these estimators of the information matrix. In practice, one typically matches the covariance matrix estimator with the method of estimation (i.e., using the inverse of the expected information estimator, $\hat\Sigma_I = \hat{I}^{-1}$, when estimation is performed using IRLS), but mirroring is not required. By default, EViews will pair the estimation and covariance methods, but you are free to mix and match as you see fit.

If the variance function is incorrectly specified, the GLM inverse information covariance estimators are no longer consistent for $\Sigma$. The Huber-White Sandwich estimator (Huber 1967, White 1980) permits non GLM-variances and is robust to misspecification of the variance function. EViews offers two forms for the estimator; you may choose between one that employs the expected information ($\hat\Sigma_{IJ} = \hat{I}^{-1}\hat{J}\hat{I}^{-1}$) or one that uses the observed Hessian ($\hat\Sigma_{HJ} = \hat{H}^{-1}\hat{J}\hat{H}^{-1}$).

Lastly, you may choose to estimate the coefficient covariance with or without a degree-of-freedom correction. In practical terms, this computation is most easily handled by using a non d.f.-corrected version of $\hat\phi$ in the basic calculation, then multiplying the coefficient covariance matrix by $N/(N-k)$ when you want to apply the correction.
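A minimal sketch of applying this correction by hand, assuming an estimated GLM equation named EQ_GLM whose covariance was computed without the d.f. correction; the object names are illustrative:

' rescale the coefficient covariance by N/(N-k)
sym cov_dfc = (eq_glm.@regobs/(eq_glm.@regobs-eq_glm.@ncoef))*eq_glm.@coefcov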
Dispersion Estimation
Recall that the dispersion parameter $\phi$ may be ignored when estimating $\beta$. Once we have obtained $\hat\beta$, we may turn attention to obtaining an estimate of $\phi$. With respect to the estimation of $\phi$, we may divide the distribution families into two classes: distributions with a free dispersion parameter, and distributions where the dispersion is fixed.

For distributions with a free dispersion parameter (Normal, Gamma, Inverse Gaussian), we must estimate $\phi$. An estimate of the free dispersion parameter may be obtained using the generalized Pearson $\chi^2$ statistic (Wedderburn 1972, McCullagh 1983),

$$\hat\phi_P = \frac{1}{N-k}\sum_{i=1}^{N}\frac{w_i(y_i - \hat\mu_i)^2}{V_\mu(\hat\mu_i)} \tag{29.22}$$

where $k$ is the number of estimated coefficients, or using the unscaled deviance statistic,

$$\hat\phi_D = \frac{D(\hat\mu, y, w)}{N-k} \tag{29.23}$$

For distributions where the dispersion is fixed (Poisson, Binomial, Negative Binomial), $\phi$ is naturally set to the theoretically prescribed value of 1.0.
In fixed dispersion settings, the theoretical restriction on the dispersion is sometimes violated in the data. This situation is generically termed overdispersion since $\phi$ typically exceeds 1.0 (though underdispersion is a possibility). At a minimum, unaccounted for overdispersion leads to invalid inference, with estimated standard errors of the $\hat\beta$ typically understating the variability of the coefficient estimates.

The easiest way to correct for overdispersion is by allowing a free dispersion parameter in the variance function, estimating $\phi$ using one of the methods described above, and using the estimate when computing the covariance matrix as described in Coefficient Covariance Estimation, on page 382. The resulting covariance matrix yields what are sometimes termed GLM standard errors.

Bear in mind that estimating $\hat\phi$ given a fixed dispersion distribution violates the assumptions of the likelihood, so that standard ML theory does not apply. This approach is, however, consistent with a quasi-likelihood estimation framework (Wedderburn 1974), under which the coefficient estimator and covariance calculations are theoretically justified (see Quasi-likelihoods, beginning on page 379). We also caution that overdispersion may be evidence of more serious problems with your specification. You should take care to evaluate the appropriateness of your model.
Computational Details
The following provides additional details for the computation of results:
Residuals
There are several different types of residuals that are computed for a GLM specification:
• The ordinary or response residuals are defined as

$$e_{oi} = y_i - \hat\mu_i \tag{29.24}$$

The ordinary residuals are simply the deviations from the mean in the original scale of the responses.

• The weighted or Pearson residuals are given by

$$e_{pi} = \bigl[(1/w_i)V_\mu(\hat\mu_i)\bigr]^{-1/2}(y_i - \hat\mu_i) \tag{29.25}$$

The weighted residuals divide the ordinary residuals by the square root of the unscaled variance. For models with fixed dispersion, the resulting residuals should have unit variance. For models with free dispersion, the weighted residuals may be used to form an estimator of $\phi$.

• The standardized or scaled Pearson residuals are computed as

$$e_{si} = \bigl[(\hat\phi/w_i)V_\mu(\hat\mu_i)\bigr]^{-1/2}(y_i - \hat\mu_i) \tag{29.26}$$

• The generalized or score residuals are given by

$$e_{gi} = \bigl[(\hat\phi/w_i)V_\mu(\hat\mu_i)\bigr]^{-1}\left(\frac{\partial\hat\mu_i}{\partial\eta}\right)(y_i - \hat\mu_i) \tag{29.27}$$

The scores of the GLM specification are obtained by multiplying the explanatory variables by the generalized residuals (Equation (29.18)). Not surprisingly, the generalized residuals may be used in the construction of LM hypothesis tests.

Dividing the Pearson SSR by $(N-k)$ produces the Pearson $\chi^2$ statistic, which may be used as an estimator of $\phi$ (Dispersion Estimation on page 383) and, in some cases, as a measure of goodness-of-fit.
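For example, a sketch of recovering the Pearson dispersion estimate from the unscaled Pearson residuals constructed as in the Residuals section above (series PEARSON, for equation EQ1; the names are illustrative):

scalar phi_pearson = @sumsq(pearson)/(eq1.@regobs-eq1.@ncoef)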
References

Agresti, Alan (1990). Categorical Data Analysis, New York: John Wiley & Sons.
Agresti, Alan (2007). An Introduction to Categorical Data Analysis, 2nd Edition, New York: John Wiley & Sons.
Hardin, James W. and Joseph M. Hilbe (2007). Generalized Linear Models and Extensions, 2nd Edition, College Station, TX: Stata Press.
McCullagh, Peter (1983). "Quasi-Likelihood Functions," Annals of Statistics, 11, 59–67.
McCullagh, Peter, and J. A. Nelder (1989). Generalized Linear Models, Second Edition, London: Chapman & Hall.
Nelder, J. A. and R. W. M. Wedderburn (1972). "Generalized Linear Models," Journal of the Royal Statistical Society, A, 135, 370–384.
Papke, Leslie E. and Jeffrey M. Wooldridge (1996). "Econometric Methods for Fractional Response Variables With an Application to 401(K) Plan Participation Rates," Journal of Applied Econometrics, 11, 619–632.
Wedderburn, R. W. M. (1974). "Quasi-Likelihood Functions, Generalized Linear Models and the Gauss-Newton Method," Biometrika, 61, 439–447.
Wooldridge, Jeffrey M. (1997). "Quasi-Likelihood Methods for Count Data," Chapter 8 in M. Hashem Pesaran and P. Schmidt (eds.), Handbook of Applied Econometrics, Volume 2, Malden, MA: Blackwell, 352–406.
Background
Before describing the mechanics of estimating robust regression models in EViews, it
will be useful to review the basics of the three estimation methods and to outline alternative approaches for computing the covariance matrix of the coefficient estimates.
M-estimation
The traditional least squares estimator is computed by finding coefficient values that minimize the sum of the squared residuals:

$$\hat\beta_{LS} = \mathop{\mathrm{argmin}}_{\beta}\sum_{i=1}^{N}r_i(\beta)^2 \tag{30.1}$$

where

$$r_i(\beta) = r_i = y_i - X_i'\beta \tag{30.2}$$
Since the residuals r i enter the objective function on the right-hand side of Equation (30.1)
after squaring, the effects of outliers are magnified accordingly.
M-estimator definition
One obvious approach to robust regression replaces the squaring of residuals in Equation (30.1) with a function that provides less weight to outliers. The Huber M-estimator ("M" for maximum likelihood estimator-like) computes the coefficient values that minimize the summed values of a function $\rho$ of the residuals:

$$\hat\beta_M = \mathop{\mathrm{argmin}}_{\beta}\sum_{i=1}^{N}\rho_c\!\left(\frac{r_i(\beta)}{\sigma w_i}\right) \tag{30.3}$$

where $\sigma$ is a measure of the scale of the residuals, $c$ is an arbitrary positive tuning constant associated with the function, and where the $w_i$ are individual weights that are generally set to 1, but may be set to

$$w_i = \sqrt{1 - X_i'(X'X)^{-1}X_i} \tag{30.4}$$

to down-weight observations with high leverage (large diagonals of the Hat Matrix).
The potential choices for the function $\rho$ (Andrews, Bisquare, Cauchy, Fair, Huber, Huber-Bisquare, Logistic, Median, Talworth, Welsch) are outlined below along with the default values of the tuning constants:

Name        $\rho_c(X)$                                                                                           Default $c$
Andrews     $c^2\bigl[1-\cos(X/c)\bigr]$ if $|X| \leq \pi c$;  $2c^2$ otherwise                                   1.339
Bisquare    $\dfrac{c^2}{6}\Bigl[1-\bigl(1-(X/c)^2\bigr)^3\Bigr]$ if $|X| \leq c$;  $\dfrac{c^2}{6}$ otherwise     4.685
Cauchy      $\dfrac{c^2}{2}\log\bigl(1+(X/c)^2\bigr)$                                                             2.385
Fair        $c^2\Bigl[\dfrac{|X|}{c}-\log\Bigl(1+\dfrac{|X|}{c}\Bigr)\Bigr]$                                      1.4
Huber       $\dfrac{X^2}{2}$ if $|X| \leq c$;  $c\Bigl(|X|-\dfrac{c}{2}\Bigr)$ otherwise                          1.345
Logistic    $c^2\log\bigl(\cosh(X/c)\bigr)$                                                                       1.205
Median      $\dfrac{X^2}{2c}$ if $|X| \leq c$;  $|X|-\dfrac{c}{2}$ otherwise                                      0.01
Talworth    $\dfrac{X^2}{2}$ if $|X| \leq c$;  $\dfrac{c^2}{2}$ otherwise                                         2.796
Welsch      $\dfrac{c^2}{2}\Bigl[1-\exp\bigl(-(X/c)^2\bigr)\Bigr]$                                                2.985
The default tuning constants for each function are taken from Holland and Welsch (1977),
and are chosen so that the estimator achieves 95% asymptotic efficiency under residual normality.
M-estimator calculation
If the scale σ is known, then the k-vector of coefficient estimates \hat{\beta}_M may be found using standard iterative techniques for solving the k nonlinear first-order equations:

\sum_{i=1}^{N} \psi_c\!\left(\frac{r_i(\beta)}{\sigma\, w_i}\right) \frac{x_{ij}}{w_i} = 0,    j = 1, \ldots, k    (30.5)

where ψ_c is the derivative of ρ_c. When the scale is unknown, EViews iterates between coefficient and scale estimation: initial coefficient estimates \hat{\beta}_M^{(0)} are obtained from ordinary least squares. The initial coefficients are used to compute a scale estimate, σ^{(1)}, and from that are formed new coefficient estimates \hat{\beta}_M^{(1)}, followed by a new scale estimate σ^{(2)}, and so on until convergence is reached.
Given an estimate \hat{\beta}_M^{(s-1)}, the updated scale σ^{(s)} is estimated using one of three different methods: Median Absolute Deviation, Zero Centered (MADZERO), Median Absolute Deviation, Median Centered (MADMED), or Huber Scaling:
MADZERO:   σ^{(s)} = median( |r_i^{(s-1)}| ) / 0.6745

MADMED:    σ^{(s)} = median( |r_i^{(s-1)} - median(r_i^{(s-1)})| ) / 0.6745

Huber:     σ^{(s)} = \sqrt{ \frac{1}{h} \sum_{i=1}^{N} (\sigma^{(s-1)})^2\, \chi\!\left(\frac{r_i^{(s-1)}}{\sigma^{(s-1)}}\right) }

where

χ(v) = min(v^2/2,\ 2.5^2/2),    h = 0.48878 N

and where r_i^{(s-1)} are the residuals associated with \hat{\beta}_M^{(s-1)}. The initial scale required for the Huber method is estimated by:

σ^{(0)} = \sqrt{ \frac{1}{h} \sum_{i=1}^{N} (r_i^{(0)})^2 }    (30.6)
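To make the iteration concrete, here is a short Python sketch (numpy only; an illustrative sketch, not the EViews implementation) of Huber M-estimation. It solves the first-order conditions of Equation (30.5), with w_i = 1, by iteratively reweighted least squares, recomputing the MADMED scale at each step; the simulated data and tolerances are assumptions for the example.

import numpy as np

def huber_irls_weight(u, c=1.345):
    """IRLS weight psi_c(u)/u for the Huber function."""
    au = np.abs(u)
    return np.where(au <= c, 1.0, c / au)

def m_estimate(y, X, c=1.345, tol=1e-8, maxit=200):
    """Huber M-estimation with MADMED scale updates (sketch)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                  # OLS starting values
    scale = 1.0
    for _ in range(maxit):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745     # MADMED scale
        w = huber_irls_weight(r / scale, c)                      # observation weights
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)           # weighted LS step
        if np.max(np.abs(beta_new - beta)) < tol * (1.0 + np.max(np.abs(beta))):
            return beta_new, scale
        beta = beta_new
    return beta, scale

# usage on simulated data with one gross outlier
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=50)
y[0] += 20.0
beta_m, sigma_m = m_estimate(y, X)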
M-estimator summary statistics
EViews reports several summary statistics for M-estimated equations.
R-squared
The robust R-squared is computed as

R^2 = \frac{ \sum_{i=1}^{N} \rho_c\!\left(\frac{y_i - \hat{m}}{\hat{\sigma} w_i}\right) - \sum_{i=1}^{N} \rho_c\!\left(\frac{r_i}{\hat{\sigma} w_i}\right) }{ \sum_{i=1}^{N} \rho_c\!\left(\frac{y_i - \hat{m}}{\hat{\sigma} w_i}\right) }    (30.7)

where \hat{m} is the M-estimate from the constant-only specification. An adjusted version of this statistic is computed from the unadjusted value in the usual fashion.
Both of these statistics can be highly sensitive to the choice of function, even when the coefficient estimates and standard errors are not. Studies have also found that these statistics
may be upwardly biased (see, for example, Renaud and Victoria-Feser (2010)).
Rw-squared
Renaud and Victoria-Feser (2010) propose the R_W^2 statistic, and provide simulation results showing R_W^2 to be a better measure of fit than the robust R^2 outlined above. The R_W^2 statistic is defined as

R_W^2 = \frac{ \left( \sum_{i=1}^{N} \hat{w}_{ci}\, (y_i - \bar{y}_W)(\hat{y}_i - \bar{\hat{y}}_W) \right)^2 }{ \left( \sum_{i=1}^{N} \hat{w}_{ci}\, (y_i - \bar{y}_W)^2 \right) \left( \sum_{i=1}^{N} \hat{w}_{ci}\, (\hat{y}_i - \bar{\hat{y}}_W)^2 \right) }    (30.8)

where \hat{y}_i are the fitted values, \hat{w}_{ci} are the robust estimation weights, and the weighted means are

\bar{y}_W = \frac{\sum_{i=1}^{N} \hat{w}_{ci}\, y_i}{\sum_{i=1}^{N} \hat{w}_{ci}},    \bar{\hat{y}}_W = \frac{\sum_{i=1}^{N} \hat{w}_{ci}\, \hat{y}_i}{\sum_{i=1}^{N} \hat{w}_{ci}}    (30.9)
As with the robust R^2, an adjusted value of R_W^2 may be calculated from the unadjusted statistic:

\bar{R}_W^2 = 1 - (1 - R_W^2)\, \frac{N - 1}{N - k}    (30.10)
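Since R_W^2 is just a weighted squared correlation between the actual and fitted values, it is easy to compute once the robust weights are in hand. The Python sketch below is illustrative only; the weight vector w is assumed to hold the final robust weights from the fit.

import numpy as np

def rw_squared(y, yhat, w, k=None):
    """Renaud and Victoria-Feser R_W^2; optionally also the adjusted value."""
    ybar = np.sum(w * y) / np.sum(w)            # weighted mean of y
    fbar = np.sum(w * yhat) / np.sum(w)         # weighted mean of the fitted values
    num = np.sum(w * (y - ybar) * (yhat - fbar)) ** 2
    den = np.sum(w * (y - ybar) ** 2) * np.sum(w * (yhat - fbar) ** 2)
    rw2 = num / den
    if k is None:
        return rw2
    n = len(y)
    return rw2, 1.0 - (1.0 - rw2) * (n - 1) / (n - k)   # Equation (30.10)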
Rn-squared Statistic
The R_N^2 statistic is a robust version of a Wald test of the hypothesis that all of the coefficients are equal to zero. It is calculated using the standard Wald test quadratic form:

R_N^2 = \hat{\beta}_1'\, \hat{Q}_1^{-1}\, \hat{\beta}_1    (30.11)

where \hat{\beta}_1 are the k - 1 non-intercept robust coefficient estimates and \hat{Q}_1 is the corresponding estimated covariance. Under the null hypothesis that all of the coefficients are equal to zero, the R_N^2 statistic is asymptotically distributed as a χ^2(k - 1).
Deviance
The deviance is the value of the objective function Equation (30.3) evaluated at the final coefficient estimates and estimate of the scale:

Deviance = 2\, \hat{\sigma}^2 \sum_{i=1}^{N} \rho_c\!\left(\frac{r_i(\hat{\beta})}{\hat{\sigma}\, w_i}\right)    (30.12)
Information Criteria
EViews reports two information criteria for M-estimated equations: the robust equivalent of the Akaike Information Criterion (AIC_R), and a corresponding robust Schwarz Information Criterion (BIC_R):

AIC_R = 2 \sum_{i=1}^{N} \rho_c\!\left(\frac{r_i(\hat{\beta})}{\hat{\sigma} w_i}\right) + 2k\, \frac{ \sum_{i=1}^{N} \psi_c\!\left(\frac{r_i(\hat{\beta})}{\hat{\sigma} w_i}\right)^{2} }{ \sum_{i=1}^{N} \psi_c'\!\left(\frac{r_i(\hat{\beta})}{\hat{\sigma} w_i}\right) }

BIC_R = 2 \sum_{i=1}^{N} \rho_c\!\left(\frac{r_i(\hat{\beta})}{\hat{\sigma} w_i}\right) + 2k \log(T)    (30.13)

where ψ_c(⋅) = ρ_c'(⋅), ψ_c' is its derivative, and T is the number of observations, as outlined in Holland and Welsch (1977). See Ronchetti (1985) for details.
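Given the scaled residuals from an M-estimation, the deviance and the robust information criteria are simple sums. The Python sketch below (illustrative only) uses the Huber function; T is taken to be the number of observations, and the residuals, scale, and tuning constant are assumed inputs.

import numpy as np

def huber_rho(u, c=1.345):
    au = np.abs(u)
    return np.where(au <= c, 0.5 * u ** 2, c * au - 0.5 * c ** 2)

def huber_psi(u, c=1.345):
    return np.clip(u, -c, c)

def huber_dpsi(u, c=1.345):
    return (np.abs(u) <= c).astype(float)

def m_summary(r, scale, k, c=1.345):
    """Deviance, AIC_R and BIC_R for a Huber M-estimate (sketch of Eq. 30.12-30.13)."""
    u = r / scale
    sum_rho = np.sum(huber_rho(u, c))
    deviance = 2.0 * scale ** 2 * sum_rho
    penalty = np.sum(huber_psi(u, c) ** 2) / np.sum(huber_dpsi(u, c))
    aic_r = 2.0 * sum_rho + 2.0 * k * penalty
    bic_r = 2.0 * sum_rho + 2.0 * k * np.log(len(r))
    return deviance, aic_r, bic_r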
S-estimation
The S-estimator (S for scale statistic) is a member of the class of high-breakdown-value
estimators introduced by Rousseeuw and Yohai (1984). The breakdown-value of an estimator can be seen as a measure of an estimator's robustness to outliers. (A good description of
breakdown-values and high-breakdown-value estimators can be found in Hubert and
Debruyne (2009)).
S-estimator definition
S-estimators find the set of coefficients β that provide the smallest estimate of the scale S such that:

\frac{1}{N - k} \sum_{i=1}^{N} \eta_c\!\left(\frac{r_i(\beta)}{S}\right) = b    (30.14)

for the function η_c(⋅) with tuning constant c > 0, where b is taken to be E_Φ(η_c) with Φ the standard normal. The breakdown value B for this estimator is B = b / max(η_c).
Following Rousseeuw and Yohai, we choose a function based on the integral of the Biweight function

η_c(X) = (X/c)^6 - 3(X/c)^4 + 3(X/c)^2   if |X| <= c;   1   otherwise    (30.15)
and estimate the scale S using the Median Absolute Deviation, Zero Centered (MADZERO)
method.
Note that c affects the objective function through h c and b . c is typically chosen to
achieve a desired breakdown value. EViews defaults to a c value of 1.5476 implying a
breakdown value of 0.5. Other notable values for c (with associated B ) are:
     c         B
  5.1824     0.10
  4.0963     0.15
  3.4207     0.20
  2.9370     0.25
  2.5608     0.30
  1.9880     0.40
  1.5476     0.50
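The correspondence between the tuning constant c and the breakdown value B can be verified numerically, since B = E_Φ(η_c) when max(η_c) = 1. The Python check below is illustrative only (the quadrature grid is an arbitrary choice) and reproduces the table entries:

import numpy as np

def eta_c(x, c):
    """Integrated Biweight function of Equation (30.15)."""
    z = np.abs(x) / c
    inside = z ** 6 - 3.0 * z ** 4 + 3.0 * z ** 2
    return np.where(np.abs(x) <= c, inside, 1.0)

def breakdown_value(c, ngrid=200001, span=10.0):
    """B = E_Phi[eta_c], approximated on an equally spaced grid."""
    x = np.linspace(-span, span, ngrid)
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)    # standard normal density
    return np.sum(eta_c(x, c) * phi) * (x[1] - x[0])

print(round(breakdown_value(1.5476), 3))   # approximately 0.50
print(round(breakdown_value(2.9370), 3))   # approximately 0.25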
S-estimator calculation
Calculation of S-estimates is computationally intensive, and there exist a number of fast
algorithms that provide accurate approximations. EViews uses the Fast-S algorithm of Salibian-Barrera and Yohai (2006):
1. Obtain a random subsample of size m from the data and compute the least squares regression to obtain \hat{\beta}^{(0)}. By default m is set equal to k, the number of regressors. (Note that with the default m = k, the regression will produce an exact fit for the subsample.)
2. Using the full sample, perform a set of r_0 refinements to the initial coefficient estimates using a variant of M-estimation which takes a single step toward the solution of Equation (30.5) at every \hat{\beta}^{(s)} update. These modified M-estimate refinements employ the Bisquare function η_c with tuning parameter c and scale estimator

σ^{(s)} = \sqrt{ \frac{1}{N - k} \cdot \frac{(\sigma^{(s-1)})^2}{B} \sum_{i=1}^{N} \eta_c\!\left(\frac{r_i^{(s-1)}}{\sigma^{(s-1)}}\right) }    (30.16)

where σ^{(s-1)} is the previous iteration's estimate of the scale and B is the breakdown value defined earlier. The initial scale estimate σ^{(0)} is obtained using MADZERO.
3. Compute a new set of residuals over the entire sample using the possibly refined initial coefficient estimates, compute an estimate of the scale S^{(0)} using MADZERO, and produce a final estimate of S by iterating Equation (30.16) (with S in place of σ) to convergence or until S^{(j)} < B.
4. Steps 1-3 are repeated Q times. The best (smallest) q scale estimates are refined using M-estimation as in Step 2 with r = 50 (or until convergence). The smallest scale from those refined scales is the final estimate of S, and the final coefficient estimates are the corresponding estimates of β.
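The following Python sketch (numpy only) illustrates the structure of the Fast-S algorithm described above. It is not the EViews implementation: the refinement counts, convergence handling, and random-number details are simplified assumptions.

import numpy as np

def bisq_eta(u, c):
    """Integrated Biweight function eta_c of Equation (30.15)."""
    z = np.minimum(np.abs(u) / c, 1.0)
    return 3.0 * z ** 2 - 3.0 * z ** 4 + z ** 6

def bisq_weight(u, c):
    """IRLS weight for the Bisquare psi function."""
    z = np.abs(u) / c
    w = (1.0 - z ** 2) ** 2
    w[z > 1.0] = 0.0
    return w

def i_step(y, X, beta, scale, c, B):
    """One refinement: scale update of Equation (30.16) plus a single weighted LS step."""
    n, k = X.shape
    r = y - X @ beta
    scale = np.sqrt(scale ** 2 / ((n - k) * B) * np.sum(bisq_eta(r / scale, c)))
    w = bisq_weight(r / scale, c)
    Xw = X * w[:, None]
    beta = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
    return beta, scale

def fast_s(y, X, c=1.5476, B=0.5, Q=200, r0=2, q=5, seed=0):
    """Sketch of the Fast-S algorithm of Salibian-Barrera and Yohai (2006)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    candidates = []
    for _ in range(Q):
        idx = rng.choice(n, size=k, replace=False)              # random elemental subsample
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]   # exact fit when m = k
        scale = np.median(np.abs(y - X @ beta)) / 0.6745        # MADZERO initial scale
        for _ in range(r0):                                     # a few refinement steps
            beta, scale = i_step(y, X, beta, scale, c, B)
        candidates.append((scale, beta))
    best = []
    for scale, beta in sorted(candidates, key=lambda t: t[0])[:q]:
        for _ in range(50):                                     # fully refine the best q
            beta, scale = i_step(y, X, beta, scale, c, B)
        best.append((scale, beta))
    scale, beta = min(best, key=lambda t: t[0])                 # smallest refined scale wins
    return beta, scale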
R-squared

R^2 = 1 - \frac{(N - k)\, S_F^2}{N\, S_C^2}    (30.17)
where S F is the estimate of the scale from the final estimation, and S C is an estimate of the
scale from S-estimation with only a constant as a regressor.
Deviance
The S-estimator deviance value is given by:
Deviance = 2S    (30.18)
Rn-squared Statistic
The R_N^2 statistic is identical to the one computed for M-estimation. See "Rn-squared Statistic" on page 391 for discussion.
MM Estimation
MM-estimation addresses outliers in both the dependent and the independent variables by
combining S-estimation with M-estimation.
The MM-estimator first computes S-estimates of the coefficients and scale, then uses the
estimate of the scale as a fixed value in iterating to find a solution to Equation (30.5). The
second stage M-estimation in EViews uses the Bisquare function with a default tuning
parameter value of 4.684 which gives 95% relative efficiency for normal errors (Yohai,
1987).
The summary statistics for MM-estimation are obtained from the second-stage M-estimation
procedure.
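A minimal Python sketch of the second M-stage is given below (illustrative only). In practice the starting coefficients and, especially, the fixed scale come from the S-estimation stage (for instance, the fast_s sketch earlier); here OLS values and a MAD scale are substituted purely so the example runs.

import numpy as np

def bisq_weight(u, c):
    z = np.abs(u) / c
    w = (1.0 - z ** 2) ** 2
    w[z > 1.0] = 0.0
    return w

def mm_step(y, X, beta0, s_scale, c=4.684, maxit=500, tol=1e-9):
    """Second-stage M-estimation with the scale held fixed at the S-estimate."""
    beta = beta0.copy()
    for _ in range(maxit):
        r = y - X @ beta
        w = bisq_weight(r / s_scale, c)                 # the scale is NOT updated
        Xw = X * w[:, None]
        beta_new = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = X @ np.array([0.5, -1.0]) + rng.normal(scale=0.3, size=60)
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
s_scale = np.median(np.abs(y - X @ beta0)) / 0.6745     # stand-in for the S-scale
beta_mm = mm_step(y, X, beta0, s_scale)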
Coefficient covariance
EViews offers three methods for computing the covariance matrix of the robust coefficient estimates, referred to as Huber Type I (the default), Type II, and Type III:

Type I (default):   λ \frac{ [1/(N - k)] \sum_{i=1}^{N} \psi_c(r_i)^2 }{ \left[ (1/N) \sum_{i=1}^{N} \psi_c'(r_i) \right]^2 }\, (X'X)^{-1}

Type II:            λ \frac{ [1/(N - k)] \sum_{i=1}^{N} \psi_c(r_i)^2 }{ (1/N) \sum_{i=1}^{N} \psi_c'(r_i) }\, W^{-1}

Type III:           λ \frac{1}{N - k} \left[ \sum_{i=1}^{N} \psi_c(r_i)^2 \right] W^{-1} (X'X) W^{-1}

with

λ = 1 + \frac{k}{N} \cdot \frac{ (1/N) \sum_{i=1}^{N} \left[ \psi_c'(r_i) - \bar{\psi}_c' \right]^2 }{ (\bar{\psi}_c')^2 }

\bar{\psi}_c' = \frac{1}{N} \sum_{i=1}^{N} \psi_c'(r_i)    (30.19)

W_{js} = \sum_{i=1}^{N} \psi_c'(r_i)\, x_{ij} x_{is},    j, s = 1, \ldots, k

where, as before, ψ_c(⋅) = ρ_c'(⋅) and x_{ij} is the value of the j-th regressor for observation i.
The first method (which is the easiest computationally) is the default choice.
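The three covariance estimators are straightforward to assemble from the residuals and regressors. The Python sketch below is illustrative only: it uses the Huber function, applies ψ_c to residuals standardized by the estimated scale, and multiplies by the squared scale to put the covariance on the scale of the coefficients; this placement of the scale factor is an assumption of the sketch.

import numpy as np

def huber_psi(u, c=1.345):
    return np.clip(u, -c, c)

def huber_dpsi(u, c=1.345):
    return (np.abs(u) <= c).astype(float)

def huber_covariances(X, r, scale, c=1.345):
    """Type I, II and III coefficient covariances in the spirit of Equation (30.19)."""
    n, k = X.shape
    u = r / scale
    psi, dpsi = huber_psi(u, c), huber_dpsi(u, c)
    dpsi_bar = dpsi.mean()
    lam = 1.0 + (k / n) * np.mean((dpsi - dpsi_bar) ** 2) / dpsi_bar ** 2
    XX_inv = np.linalg.inv(X.T @ X)
    W = X.T @ (X * dpsi[:, None])                        # W_js = sum_i psi'_c x_ij x_is
    W_inv = np.linalg.inv(W)
    a = np.sum(psi ** 2) / (n - k)
    type1 = scale ** 2 * lam * a / dpsi_bar ** 2 * XX_inv
    type2 = scale ** 2 * lam * a / dpsi_bar * W_inv
    type3 = scale ** 2 * lam * a * (W_inv @ (X.T @ X) @ W_inv)
    return type1, type2, type3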
Estimating Robust Regression in EViews
The Specification tab lets you enter the basic regression specification and the type of robust
regression to be performed:
Enter the regression specification in list form (dependent variable followed by the list
of regressors) in the Equation specification variable edit field.
Specify the estimation type by choosing one of the three estimation types M-estimation, S-estimation, or MM-estimation in the Robust estimation type dropdown. By
default, EViews will perform M-estimation.
Enter the estimation Sample in the edit field.
Click on OK to estimate the equation using the default settings, or click on Options to
inspect settings for advanced options.
Options
Clicking on the Options tab of the dialog lets you specify advanced estimation options. The
tab will display different settings depending on whether you choose M-estimation, S-estimation, or MM-estimation in the Robust estimation type dropdown.
M-estimation options
For M-estimation, you will be offered choices for the objective specification, scale estimator,
and covariance type.
Objective specification
The Objective specification section of the dialog controls the choice of function and the
tuning constant:
You should use the Function dropdown to choose from among the 10 available functions: Andrews, Bisquare, Cauchy, Fair, Huber, Huber-Bisquare, Logistic, Median, Talworth, and Welsch (Bisquare is the default).
The Scale using H-matrix checkbox may be used to define individual weights w i as
described in Equation (30.4) on page 388.
The Default constant and User-specified constant radio buttons should be used to
specify the value of the tuning constant. Choosing Default constant will use the Holland and Welsch (1977) values of the tuning constant as described on page 388. To
provide your own tuning value, select User-specified constant and enter a positive
number or name of a scalar object in the Tuning value edit field.
Scale estimates
The Scale estimates dropdown is used to select between Mean Absolute Deviation (MAD)
with either zero or median centering, Huber scaling, or a user-specified scale. The default
estimator is MAD with median centering. To provide a user-specified scale, select Fixed user
in the dropdown and enter a positive number or name of a scalar object in the User scale
edit field.
Other settings
The Covariance type dropdown allows you to choose between the three types of Huber
covariance methods.
The Iteration control section controls the maximum iterations and convergence tolerances
used when solving the nonlinear equations in Equation (30.5). Click on Display Settings to
show information in the EViews output.
You may use the Coefficient name edit field to specify a coefficient vector other than the
default C to hold the results from estimation.
S-estimation options
The S-estimator offers a set of estimation options that differs markedly from those offered
by the M-estimator. In contrast to the M-estimator, there is no option for choosing the scale
estimator. You will, however, be offered a slightly modified set of Objective specification
options and a new set of S options.
Objective specification
The Objective specification section of the dialog allows you to specify the values of the tuning and breakdown constants:
You should select Default constant to use the default c value of 1.5476 (0.5 breakdown) or you may select User-specified constant and enter a value or name of a scalar in the Tuning value edit field. The tuning value must be positive.
Note that the function choice dropdown is disabled since the S-estimation function is
restricted to be the Tukey Bisquare.
S settings
The S options portion of the dialog allows you to control the settings for the Fast-S algorithm:
Number of trials controls the number Q of S subsample estimates to be computed.
By default, EViews will compute 200 estimates.
Initial sample size specifies the size m of each random subsample used in the S initializing regression. By default, this field will be initialized at the number of regressors.
Max refinements controls the number of refinements r to each initial subsample
regression estimate. Each refinement consists of a single modified M-estimator step
toward the solution of the nonlinear equations.
Number of comparisons is the number of best estimates q that are candidates for
refinement and comparison to find the final estimate.
The Random generator and Seed fields control the construction of the random subsamples required for the Fast-S algorithm. You may leave the Seed field blank, in
which case EViews will use the clock to obtain a seed at the time of estimation, or you
may provide an integer from 0 to 2,147,483,647. The Clear button may be used to
clear the seed used by a previously estimated equation.
For additional discussion of these settings, see S-estimator calculation on page 393.
Other settings
The Coefficient name, Covariance type, and Iteration control settings are as described in
M-estimation options on page 396.
MM-estimation options
The options for the MM-estimator are closely related to the options for the S-estimator
described in S-estimation options on page 398.
The main difference between the MM and S options is in the settings for the tuning parameters. Since the MM estimator combines both S and M estimation, the dialog has separate fields for
the tuning values used in the S-estimation and the tuning value
used in the M-estimation.
The Default constants setting sets an S tuning parameter of
1.5476 (0.5 breakdown) and a default M tuning value of 4.684 (for 0.95 relative efficiency
under normal errors).
An Illustration
For our example of robust regression estimation, we employ the salinity data set taken
from Rousseeuw and Leroy (1987, page 82), which has been used in many studies of robust
regression and outlier effects. See, for example, Rousseeuw and van Zomeren (1992) and
Fung (1993). The data consist of 28 observations on water salinity (salt concentration) and
river discharge measurements taken from Pamlico Sound in North Carolina.
We are interested in modeling the relationship between the amount of discharge and the
level of salinity. The regression model of interest is:
S_t = β_1 + β_2 S_{t-1} + β_3 t + β_4 D_t + ε_t    (30.20)
The least squares estimates of this specification (equation EQ01) are given by:

Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                9.590263       3.125086       3.068799     0.0053
LAGSAL           0.777105       0.086222       9.012849     0.0000
TREND           -0.025512       0.161079      -0.158384     0.8755
DISCHARGE       -0.295036       0.106804      -2.762410     0.0108

R-squared             0.826388    Mean dependent var       10.55357
Adjusted R-squared    0.804687    S.D. dependent var        3.010166
S.E. of regression    1.330320    Akaike info criterion     3.540279
Sum squared resid    42.47401     Schwarz criterion         3.730594
Log likelihood      -45.56391     Hannan-Quinn criter.      3.598460
F-statistic          38.07988     Durbin-Watson stat        2.660574
Prob(F-statistic)     0.000000
Rousseeuw and Leroy identify observation 16 as being an outlier. We can confirm this finding by looking at the influence statistics and leverages for this equation. From the EQ01
menu, display the influence statistics dialog by selecting View/Stability Diagnostics/Influence Statistics...
Check the box labeled Hat Matrix to tell EViews that you want to view the diagonals of the
matrix along with the default results, then click on OK to display the graphs:
The spikes in the graphs for all four influence measures point to observation 16 as being an
outlier. This finding is confirmed by the leverage plot view of EQ01. Select View/Stability
Diagnostics/Leverage Plots... and click on OK to accept the default settings:
The graphs support the view that observation 16 has high leverage, especially in the relationship between SALINITY and DISCHARGE. Using the mouse pointer to hover over the
outlier confirms the identity of the outlier observation. (For additional discussion of these
diagnostics, see Leverage Plots on page 218 and Influence Statistics on page 219.)
M-estimation example
Given the presence of outliers, we re-estimate the regression using robust M-estimation. Create a new equation object by clicking on Quick/Estimate Equation, or by selecting
Object/New Object/Equation and then select ROBUSTLS from the Method dropdown
menu. Enter the dependent variable followed by the list of regressor variables in the Equation specification edit field:
salinity c lagsal trend discharge
and click on OK to instruct EViews to estimate the specification using the default estimator
and settings. (For convenience, we have included the equation object EQ02 estimated using
these settings in your workfile.)
Variable        Coefficient    Std. Error    z-Statistic    Prob.
C               18.29590        2.660551       6.876732     0.0000
LAGSAL           0.722380       0.073405       9.840981     0.0000
TREND           -0.202283       0.137135      -1.475064     0.1402
DISCHARGE       -0.623040       0.090928      -6.852032     0.0000

                          Robust Statistics
R-squared               0.620467    Adjusted R-squared        0.573026
Rw-squared              0.933495    Adjust Rw-squared         0.933495
Akaike info criterion  52.08135     Schwarz criterion        57.40400
Deviance               23.76607     Scale                     0.734314
Rn-squared statistic  202.9058      Prob(Rn-squared stat.)    0.000000

                        Non-robust Statistics
Mean dependent var     10.55357     S.D. dependent var        3.010166
S.E. of regression      1.577654    Sum squared resid        59.73579
A description of the settings used in the M-estimation is presented at the top of the output.
Here we see that the Bisquare function with a default tuning parameter value of 4.685 was
used, that the scale was estimated using the median centered, median absolute deviation
method, and that the z-statistics in the output are based on Huber Type I covariance estimates.
Turning to the coefficient estimates, we see the effect on the coefficient estimates of moving
from least squares to robust M-estimation. The M-estimator produces a much larger negative
impact of DISCHARGE on SALINITY than does ordinary least squares (-0.623 versus -0.295)
with the M-estimator coefficient estimated with similar precision (0.091 versus 0.107). The
sensitivity of the DISCHARGE coefficient estimates to robust estimation is in accord with the
earlier EQ01 diagnostic suggesting that observation 16 had high leverage for the relationship
between SALINITY and DISCHARGE.
The bottom portion of the output displays the R^2 and R_W^2 goodness-of-fit measures and their adjusted counterparts, which indicate that the model accounts for roughly 60-90% of the variation relative to the constant-only model. The R_N^2 statistic of 202.906 and corresponding p-value of 0.00 indicate strong rejection of the null hypothesis that all non-intercept coefficients are equal to zero. Lastly, the output shows the value of the deviance, information criteria, and the estimated scale. These measures may be of use when comparing models. See "M-estimator summary statistics" on page 390 for formulae and discussion.
MM-estimation example
Next, we estimate the equation using MM-estimation. In the Specification tab:
Specify your equation estimation method as ROBUSTLS - Robust Least Squares,
Change the Robust Estimation Type dropdown to MM-estimation
Fill out the Specification edit field with salinity c lagsal trend discharge as before.
Next, we will provide values for the tuning and breakdown values, and will specify options
to control the S-estimation refinement.
Click on the Options tab to display the additional estimation settings:
For the objective specification, we will provide tuning and breakdown values. Select the User-specified constants radio button and enter the values as depicted. The S tuning value of 2.937 is chosen to provide a breakdown of 0.25; the M tuning value of 3.44 is chosen to produce a relative efficiency of 0.85.
Select Huber Type II standard errors.
Under S options, enter values for the Number of trials, and Max refinements, as depicted. The Initial sample size will be pre-filled with the number of regressor variables specified on the first tab of the dialog; we will leave this at the default setting.
Enter 5 in the Number of comparisons edit field so that EViews will refine the best 5 of the 200 trials.
Additional detail on these settings is provided in "S-estimation" on page 392 and "S-estimation options" on page 398.
Click on OK to accept the specification and options and to estimate the equation. EViews
will display the results of the MM-estimation.
Dependent Variable: SALINITY
Method: Robust Least Squares
Date: 08/15/12 Time: 16:41
Sample: 1 28
Included observations: 28
Method: MM-estimation
S settings: tuning=2.937, breakdown=0.25, trials=200, subsmpl=4,
refine=2, compare=5
M settings: weight=Bisquare, tuning=3.44
Random number generator: rng=kn, seed=801674785
Huber Type II Standard Errors & Covariance
Variable        Coefficient    Std. Error    z-Statistic    Prob.
C               18.35831        3.589233       5.114829     0.0000
LAGSAL           0.715431       0.066356      10.78163      0.0000
TREND           -0.187484       0.139891      -1.340216     0.1802
DISCHARGE       -0.625891       0.133533      -4.687167     0.0000

                          Robust Statistics
R-squared               0.661056    Adjusted R-squared        0.618688
Rw-squared              0.922073    Adjust Rw-squared         0.922073
Akaike info criterion  23.15291     Schwarz criterion        32.83208
Deviance               26.94331     Scale                     1.175363
Rn-squared statistic  202.9223      Prob(Rn-squared stat.)    0.000000

                        Non-robust Statistics
Mean dependent var     10.55357     S.D. dependent var        3.010166
S.E. of regression      1.586231    Sum squared resid        60.38708
Notice that the top of the output displays various settings for both the S and the M-portions
of the MM-estimation. In addition to showing the S-tuning value of 2.937 and associated
breakdown value of 0.25, EViews reports that the S-estimation consists of 200 trials with initial coefficients obtained from a random initial sample size of 4 and 2 initial refinement
steps. The final comparison involves fully refining 5 sets of the scale estimates and choosing
the smallest scale estimate.
Once the scale estimate is obtained, EViews performs fixed scale M-estimation using the
reported 3.44 tuning parameter.
EViews also reports information on the random number generator used to obtain the random subsamples, and the method used to obtain coefficient estimate covariances.
Turning to the results, we note that despite the difference in robust estimation method, relative efficiency settings, and method of computing standard errors, the results from the M-estimation and the MM-estimation are generally quite similar. Most importantly, both estimates show statistically significant DISCHARGE coefficients of around -0.62 with roughly comparable coefficient standard errors (0.091 versus 0.13). The results for other coefficients are even closer.
The MM-estimate of the scale is considerably larger than that obtained from M-estimation (1.18 versus 0.73), but the overall goodness-of-fit measures and R_N^2 statistic and test results are quite similar.
References
Croux, C., G. Dhaene, and D. Hoorelbeke (2003). "Robust Standard Errors for Robust Estimators," Discussion Papers Series 03.16, K.U. Leuven, CES.
Fung, Wing-Kam (1993). "Unmasking Outliers and Leverage Points: A Confirmation," Journal of the American Statistical Association, 88(422), 515–519.
Holland, Paul W. and Roy E. Welsch (1977). "Robust Regression Using Iteratively Reweighted Least Squares," Communications in Statistics - Theory and Methods, 6(9), 813–827.
Huber, Peter J. (1973). "Robust Regression: Asymptotics, Conjectures and Monte Carlo," The Annals of Statistics, 1(5), 799–821.
Huber, Peter J. (1981). Robust Statistics. New York: John Wiley & Sons.
Hubert, Mia and Michiel Debruyne (2009). "Breakdown Value," Wiley Interdisciplinary Reviews: Computational Statistics, 1(3), 296–302.
Maronna, Ricardo A., R. Douglas Martin, and Victor J. Yohai (2006). Robust Statistics. Chichester, England: John Wiley & Sons, Ltd.
Renaud, Olivier and Maria-Pia Victoria-Feser (2010). "A Robust Coefficient of Determination for Regression," Journal of Statistical Planning and Inference, 140, 1852–1862.
Ronchetti, Elvezio (1985). "Robust Model Selection in Regression," Statistics & Probability Letters, 3, 21–23.
Rousseeuw, P. J. and A. M. Leroy (1987). Robust Regression and Outlier Detection. New York: John Wiley & Sons, Inc.
Rousseeuw, P. J. and Bert C. van Zomeren (1992). "A Comparison of Some Quick Algorithms for Robust Regression," Computational Statistics & Data Analysis, 14(1), 107–116.
Rousseeuw, P. J. and V. J. Yohai (1984). "Robust Regression by Means of S-Estimators," in Robust and Nonlinear Time Series, J. Franke, W. Härdle, and D. Martin, eds., Lecture Notes in Statistics No. 26, Berlin: Springer-Verlag.
Salibian-Barrera, Matías and Víctor J. Yohai (2006). "A Fast Algorithm for S-Regression Estimates," Journal of Computational and Graphical Statistics, 15(2), 414–427.
Yohai, Víctor J. (1987). "High Breakdown-Point and High Efficiency Robust Estimates for Regression," The Annals of Statistics, 15(2), 642–656.
Background
We consider a standard multiple linear regression model with T periods and m potential breaks (producing m + 1 regimes). For the observations T_j, T_j + 1, \ldots, T_{j+1} - 1 in regime j we have the regression model

y_t = X_t'\beta + Z_t'\delta_j + \epsilon_t    (31.1)
for the regimes j = 0, , m . Note that the regressors are divided into two groups. The X
variables are those whose parameters do not vary across regimes, while the Z variables
have coefficients that are regime-specific.
While it is slightly more convenient to define breakdates to be the last date of a regime, we follow EViews' convention in defining the breakdate to be the first date of the subsequent regime. We tie down the endpoints by setting T_0 = 1 and T_{m+1} = T + 1.
Once the number and identity of the breakpoints is determined, the model may be estimated using standard regression techniques. We may rewrite the equation specification as a standard regression equation

y_t = X_t'\beta + \bar{Z}_t'\bar{\delta} + \epsilon_t    (31.2)

where \bar{Z}_t interacts the Z_t regressors with regime dummy variables and \bar{\delta} stacks the regime-specific coefficients \delta_0, \ldots, \delta_m.
The breakpoints may be known a priori or they may be estimated using a variety of approaches.
The breakpoint estimation methods that we consider may broadly be divided into two categories: global maximizers for the breakpoints and sequentially determined breakpoints.
Global Maximization
Bai and Perron (1998) describe global optimization procedures for identifying the m multiple breaks and associated coefficients which minimize the sums-of-squared residuals of the
regression model Equation (31.1).
If the desired number of breakpoints is known, the global m -break optimizers are the set of
breakpoints and corresponding coefficient estimates that minimize the sum-of-squares for
that model.
If the desired number of breakpoints is not known, we may specify a maximum number of
breakpoints and employ testing to determine the optimal number of breakpoints. The various test approaches are outlined in detail in Global Maximizer Tests on page 198, but
briefly speaking, involve:
Global tests of l breaks versus none (Bai-Perron 1998). The test of l versus no breaks
procedure may be applied sequentially beginning with a single break until the null is
not rejected. Alternately, it may be applied to all breaks with the selected break being
the highest statistically significant number of breaks, or it may employ the
unweighted or weighted double maximum statistics ( UDmax or WDmax ).
Information criteria based model selection of the number of breaks (Yao 1988; Liu, Wu, and Zidek 1997), where we minimize the specified information criteria with
respect to the number of breaks.
Sequential tests of l + 1 versus l globally determined breakpoints. The procedure is
applied sequentially, beginning with a single break, until the null is not rejected. This
approach is a modified Bai (1997) method in which, at each test step, the l breakpoints under the null are obtained by global optimization, and the candidate breakpoints are obtained by sequential estimation.
Sequential Determination
Bai (1997) describes an intuitive approach for obtaining estimates for more than one break.
The procedure involves sequential application of breakpoint tests.
Begin with the full sample and perform a test of parameter constancy with unknown
break. At each stage, test for breakpoints in breakpoint tests in each subsample. Add a
breakpoint whenever a subsample null is rejected. (Alternately, one could test only
the single subsample which shows the greatest improvement in the sum-of-squared
residuals.) If any of the tests reject, add the specified breakpoint to the current set.
Repeat the procedure until all of the subsamples do not reject the null hypothesis, or
until the maximum number of breakpoints allowed or maximum subsample intervals
to test is reached.
Perform refinement so that breakpoints are re-estimated if they are obtained from a
subsample containing more than one break. This procedure is required so that the
breakpoint estimates have the same limiting distribution as those obtained from the
global optimization procedure.
If the number of breakpoints is pre-specified, we simply estimate the specified number of
breakpoints using the one-at-a-time method.
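The heart of both the global and the sequential procedures is a search for the single breakdate that minimizes the sum of squared residuals over a segment, subject to a minimum segment length implied by the trimming percentage. The Python sketch below (illustrative only; it uses plain numpy least squares, ignores the F-tests, and treats all regressors as breaking) shows that computation and a greedy one-at-a-time refinement.

import numpy as np

def ssr(y, X):
    """Sum of squared residuals from an OLS fit of y on X."""
    if len(y) == 0:
        return 0.0
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def best_single_break(y, X, h):
    """Breakdate (first index of the second regime) minimizing total SSR,
    subject to a minimum segment length h."""
    T = len(y)
    best_date, best_ssr = None, np.inf
    for tb in range(h, T - h + 1):
        total = ssr(y[:tb], X[:tb]) + ssr(y[tb:], X[tb:])
        if total < best_ssr:
            best_date, best_ssr = tb, total
    return best_date, best_ssr

def sequential_breaks(y, X, m, trim=0.15):
    """Add up to m breaks one at a time, each in the segment it improves most."""
    T = len(y)
    h = int(np.floor(trim * T))
    breaks = []
    while len(breaks) < m:
        bounds = [0] + sorted(breaks) + [T]
        candidates = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if hi - lo >= 2 * h:
                tb, s = best_single_break(y[lo:hi], X[lo:hi], h)
                candidates.append((ssr(y[lo:hi], X[lo:hi]) - s, lo + tb))
        if not candidates:
            break
        breaks.append(max(candidates)[1])          # largest SSR improvement
    return sorted(breaks)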
Estimating Least Squares with Breakpoints in EViews
You should enter the dependent variable followed by a list of variables with breaking regressors in the top edit field, and optionally, a list of non-breaking regressors in the bottom edit
field.
Next, click on the Options tab to display additional settings for calculation of the coefficient
covariance matrix, specification of the breakpoints, weighting, and the coefficient name.
The weighting and coefficient name settings are common to other EViews estimators so we
focus on the covariance computation and break specification.
Bai and Perron do impose the data homogeneity restriction when computing HAC robust variance estimators with homogeneous errors. To match the Bai-Perron common error
assumptions, you will have to select the Assume common data distribution checkbox.
(Note that EViews does not allow you to specify heterogeneous error distributions and
robust covariances in partial breaking models.)
Break Specification
The Break specification section of the dialog contains a
Method drop-down where you may specify the type of test
you wish to perform. You may choose between:
Sequential L+1 breaks vs. L
Sequential tests all subsets
Global L breaks vs. none
L+1 breaks vs. global L
Global information criteria
Fixed number - sequential
Fixed number - global
User-specified
The first two entries determine the optimal number of breaks based on the sequential methodology as described in Sequential Determination on page 408 above. The methods differ
in whether, for a given l breakpoints, we test for an additional breakpoint in each of the
l + 1 segments (Sequential tests all subsets), or whether we test the single added breakpoint that most reduces the sum-of-squares (Sequential L+1 breaks vs. L).
The next three methods employ the global optimizers to determine the number and identities of breaks as described in Global Maximization on page 408. If you select one of the
global methods, you will see a second drop-down prompting you to specify a sub-method.
For the Global L breaks vs. none method, there are
four possible sub-methods. The Sequential evaluation method chooses the last significant number of
breaks, determined sequentially. Selecting Highest
significant chooses the number of breaks that is
largest from amongst the significant tests. The latter
two settings choose the number of breaks using the
corresponding double max test.
Similarly, if you select the L+1 breaks vs. global L method, a drop-down offers a choice
between Sequential evaluation and Highest significant.
The Global information criteria method lets you choose between using the Schwarz
criterion or the LWZ criterion.
The next two methods, Fixed number - sequential and Fixed number - global, pre-specify
the number of breaks and choose the breakpoint dates using the specified method.
The User-specified method allows you to specify your own break dates.
Depending on your choice of method, you may be prompted to provide information on one
or more additional settings:
If you specify one of the two fixed number of break methods, you will be prompted
for the number of breakpoints (not depicted).
The Trimming percentage, ε = 100(h/T), implicitly determines h, the minimum
segment length permitted when constructing a test. Small values of the trimming percentage can lead to estimates of coefficients and variances which are based on very
few observations.
The Maximum breaks and Maximum levels setting limits the number of breakpoints
allowed via global testing, and in sequential or mixed l vs. l + 1 testing.
The Significance level drop-down menu should be used to choose between test size
values of (0.01, 0.025, 0.05, and 0.10). This setting is not relevant for methods which
do not employ testing.
Additional detail on all of the methodologies outlined above is provided in Multiple Breakpoint Tests on page 198.
Working with Breakpoint Equations
Essentially all of the discussion of the linear regression model in Chapter 19. "Basic Regression Analysis," on page 5 and Chapter 24. "Specification and Diagnostic Tests," on page 163 applies. We focus our attention in this section on the unique aspects of the breakpoint equation.
Estimation Output
To illustrate the output from estimation of a breakpoint equation, we employ data from Hansen's (2001) labor productivity example. Hansen's example uses monthly (February 1947 to
April 2001) U. S. labor productivity in the manufacturing durables sector as measured by the
growth rate of the ratio of the Industrial Production Index to average weekly labor hours.
The data are in the series DDUR in the workfile hansen_jep.wf1.
We estimate a breakpoint model with DDUR regressed on its lag DDUR(-1) and a constant.
The output is presented below:
Dependent Variable: DDUR
Method: Least Squares with Breaks
Date: 12/10/12 Time: 10:55
Sample (adjusted): 1947M03 2001M04
Included observations: 650 after adjustments
Break type: Bai-Perron tests of L+1 vs. L sequentially determined
breaks
Break selection: Trimming 0.05, Max. breaks 5, Sig. level 0.10
Breaks: 1963M12, 1994M12
White heteroskedasticity-consistent standard errors & covariances
No d.f. adjustment for covariances
Variable        Coefficient    Std. Error    t-Statistic    Prob.

                     1947M03 - 1963M11 -- 201 obs
DDUR(-1)         0.089257       0.097749       0.913124     0.3615
C                3.034604       1.086232       2.793697     0.0054

                     1963M12 - 1994M11 -- 372 obs
DDUR(-1)        -0.240237       0.060248      -3.987492     0.0001
C                3.891848       0.553035       7.037262     0.0000

                     1994M12 - 2001M04 -- 77 obs
DDUR(-1)        -0.186131       0.140463      -1.325130     0.1856
C                9.329578       1.562201       5.972073     0.0000

R-squared             0.049806    Mean dependent var        3.756325
Adjusted R-squared    0.042429    S.D. dependent var       11.15357
S.E. of regression   10.91439     Akaike info criterion     7.627229
Sum squared resid   76715.79      Schwarz criterion         7.668555
Log likelihood     -2472.849      Hannan-Quinn criter.      7.643258
F-statistic           6.751313    Durbin-Watson stat        2.027022
Prob(F-statistic)     0.000004
The top portion of the output shows equation specification information. As we see, the two
estimated breakdates 1963m12 and 1994m12 were determined using the Bai-Perron sequential breakpoint methodology, with a maximum of 5 breaks, 5% trimming, and a test size of
0.10. Coefficient covariances for the tests and estimates are computed using White's method
with no d.f. correction.
The middle section labels each regime and shows the corresponding coefficient estimates,
standard errors, and p-values.
The bottom portion of the output shows the standard summary statistics. Most are self-explanatory. We do note that the R-squared, the F-statistic, and the corresponding probability are all based on a comparison with the fully restricted, no-breakpoint, constant-only model. Note also that the F-statistic is based on the difference of the sums-of-squares so, despite the presence of White coefficient standard errors, it is not robust to heteroskedasticity.
Representations View
The representations view shows you the equation specification estimated by EViews:
Note in particular the use of the @before, @during, and @after functions to create regime
dummy variables that interact with the regressors. You could have used these functions to
specify an equivalent model using the ordinary least squares estimator.
The remaining portion shows the intermediate results for the breakpoint determination:
Current breakpoint calculations:
Multiple breakpoint tests
Bai-Perron tests of L+1 vs. L sequentially determined breaks
Date: 12/11/12 Time: 10:09
Sample: 1947M03 2001M04
Included observations: 650
Breakpoint variables: DDUR(-1) C
Break test options: Trimming 0.15, Max. breaks 5, Sig. level 0.05
Test statistics employ White heteroskedasticity-consistent
covariances assuming common data distribution
No d.f. adjustment for covariances
Sequential F-statistic determined breaks:                         2

                                      Scaled        Critical
Break Test      F-statistic      F-statistic        Value**
0 vs. 1 *          8.967659         17.93532          11.47
1 vs. 2 *          7.710196         15.42039          12.95
2 vs. 3            3.817321          7.634643         14.03

Break dates:
                  Sequential       Repartition
      1             1982M01           1965M08
      2             1991M06           1991M06
Here, the first two columns correspond to the coefficients on DDUR(-1) and the intercept in
the first regime, the next two columns are the results for the second regime, and so on. Any
non-regime specific coefficients appear at the end of the blocks of varying coefficients.
In most table output, EViews groups and labels the variables by regimes. For example, the
scaled coefficients view (View/Coefficient Diagnostics/Scaled Coefficients) mirrors the
format of the equation results output:
Scaled Coefficients
Date: 12/10/12 Time: 12:21
Sample: 1947M01 2001M04
Included observations: 650
Variable        Coefficient    Standardized Coefficient    Elasticity at Means

                     1947M03 - 1965M07 -- 221 obs
DDUR(-1)         0.081718            0.057258                   0.026049
C                3.256673            0.138422                   0.294775

                     1965M08 - 1991M05 -- 310 obs
DDUR(-1)        -0.239991           -0.150939                  -0.082734
C                3.342257            0.149785                   0.424351

                     1991M06 - 2001M04 -- 119 obs
DDUR(-1)        -0.209215           -0.091133                  -0.071046
C                8.383666            0.290912                   0.408606
Alternately, in leverage plots (View/Stability Diagnostics/Leverage Plots...) EViews displays graphs that are labeled with the full dummy variable interaction variables previously
seen in the representations view:
[Figure: "DDUR vs Variables (Partialled on Regressors)" -- six leverage plot panels labeled @BEFORE("1965M08")*DDUR(-1), @BEFORE("1965M08"), @DURING("1965M08 1991M05")*DDUR(-1), @DURING("1965M08 1991M05"), @AFTER("1991M06")*DDUR(-1), and @AFTER("1991M06").]
Notice that in both cases, you should specify these variables in terms of the original, non-breaking variables.
Forecasting Proc
Forecasting in breakpoint least squares works the same way as in the least squares estimator. Click on the Forecast button on the toolbar or select Proc/Forecast... to bring up the
dialog.
All of the settings are as in the standard linear regression case. However, it is worth noting
that the saved forecast S.E. assumes a common variance across regimes even if you have
relaxed this assumption in the computation of the coefficient covariances. Note also that
out-of-sample forecasts will use the first regime specific coefficients for periods prior to the
estimation period, and the last regime specific coefficients for periods after the estimation
period.
Example
To illustrate the use of these tools in practice, we employ the simple model of the U.S. ex-post real interest rate from Garcia and Perron (1996) that is used as an example by Bai and Perron (2003a). The data, which consist of observations for the three-month treasury rate deflated by the CPI for the period 1961q1–1986q3, are provided in the series RATES in the workfile realrate.WF1.
Select Object/New Object.../Equation or Quick/Estimate Equation from the main menu
or enter the command breakls in the command line and hit Enter.
The regression model consists of a regime-specific constant regressor so we enter the dependent variable RATES and C in the topmost edit field. The sample is set to the full workfile
range.
Next, click on the Options tab and specify HAC (Newey-West) standard errors, check Allow
error distributions to differ across breaks, choose the Bai-Perron Global L breaks vs.
none method using the Unweighted-Max F (UDMax) test to determine the number of
breaks, and set a Trimming percentage of 15, and a Significance level of 0.05.
Lastly, to match the test example in Bai and Perron (2003a), we click on the HAC Options
button and set the options to use a Quadratic-Spectral kernel with Andrews automatic
bandwidth and single pre-whitening lag:
Click on OK to accept the settings and estimate the model. EViews displays the results of the
breakpoint selection and coefficient estimation:
Variable        Coefficient    Std. Error    t-Statistic    Prob.

                     1961Q1 - 1980Q3 -- 79 obs
C                0.078612       0.404959       0.194122     0.8465

                     1980Q4 - 1986Q3 -- 24 obs
C                5.642890       0.608843       9.268219     0.0000

R-squared             0.469105    Mean dependent var        1.375142
Adjusted R-squared    0.463849    S.D. dependent var        3.451231
S.E. of regression    2.527072    Akaike info criterion     4.711226
Sum squared resid   644.9955      Schwarz criterion         4.762386
Log likelihood     -240.6282      Hannan-Quinn criter.      4.731948
F-statistic          89.24490     Durbin-Watson stat        1.382941
Prob(F-statistic)     0.000000
The UDMax methodology selects a single statistically significant break at 1980Q4. The
results clearly show a significant difference in the mean RATES prior to and after 1980Q4.
Click on View/Actual, Fitted, Residual/Actual, Fitted, Residual Graph, to see in-sample
fitted data alongside the original series and the residuals:
[Figure: Actual, Fitted, Residual graph for the one-break equation, 1962-1986.]
Casual inspection of the residuals suggests that the model might be improved with the addition of another breakpoint in the early 1970s. Click on the Estimate button, select the
Options tab, and modify the Method to use the Global information criteria with LWZ criterion. Click on OK to re-estimate the equation using the new method.
EViews reports new estimates featuring two breaks (1972Q4 and 1980Q4), defining medium, low, and high rate regimes, respectively:
Variable        Coefficient    Std. Error    t-Statistic    Prob.

                     1961Q1 - 1972Q3 -- 47 obs
C                1.355037       0.157766       8.588880     0.0000

                     1972Q4 - 1980Q3 -- 32 obs
C               -1.796138       0.518593      -3.463485     0.0008

                     1980Q4 - 1986Q3 -- 24 obs
C                5.642890       0.611880       9.222223     0.0000

R-squared             0.624708    Mean dependent var        1.375142
Adjusted R-squared    0.617202    S.D. dependent var        3.451231
S.E. of regression    2.135299    Akaike info criterion     4.383784
Sum squared resid   455.9502      Schwarz criterion         4.460524
Log likelihood     -222.7649      Hannan-Quinn criter.      4.414866
F-statistic          83.22967     Durbin-Watson stat        1.942392
Prob(F-statistic)     0.000000
[Figure: Actual, Fitted, Residual graph for the two-break equation, 1962-1986.]
References
Bai, Jushan (1997). "Estimating Multiple Breaks One at a Time," Econometric Theory, 13, 315–352.
Bai, Jushan and Pierre Perron (1998). "Estimating and Testing Linear Models with Multiple Structural Changes," Econometrica, 66, 47–78.
Bai, Jushan and Pierre Perron (2003a). "Computation and Analysis of Multiple Structural Change Models," Journal of Applied Econometrics, 18, 1–22.
Bai, Jushan and Pierre Perron (2003b). "Critical Values for Multiple Structural Change Tests," Econometrics Journal, 6, 72–78.
Garcia, Rene and Pierre Perron (1996). "An Analysis of the Real Interest Rate Under Regime Shifts," The Review of Economics and Statistics, 78, 111–125.
Hansen, Bruce E. (2001). "The New Econometrics of Structural Change: Dating Breaks in U.S. Labor Productivity," Journal of Economic Perspectives, 15, 117–128.
Liu, Jian, Shiying Wu, and James V. Zidek (1997). "On Segmented Multivariate Regression," Statistica Sinica, 7, 497–525.
Perron, Pierre (2006). "Dealing with Structural Breaks," in Palgrave Handbook of Econometrics, Vol. 1: Econometric Theory, T. C. Mills and K. Patterson (eds.). New York: Palgrave Macmillan.
Yao, Yi-Ching (1988). "Estimating the Number of Change-points via Schwarz Criterion," Statistics & Probability Letters, 6, 181–189.
Background
We begin with a standard multiple linear regression model with T observations and m
potential thresholds (producing m + 1 regimes). (While we will use t to index the T
observations, there is nothing in the structure of the model that requires time series data.)
For the observations in regime j = 0, 1, \ldots, m we have the linear regression specification

y_t = X_t'\beta + Z_t'\delta_j + \epsilon_t    (32.1)
Note that the regressors are divided into two groups. The X variables are those whose
parameters do not vary across regimes, while the Z variables have coefficients that are
regime-specific.
Suppose that there is an observable threshold variable q_t and strictly increasing threshold values (\gamma_1 < \gamma_2 < \ldots < \gamma_m) such that we are in regime j if and only if:

\gamma_j \le q_t < \gamma_{j+1}

where we set \gamma_0 = -\infty and \gamma_{m+1} = \infty. Thus, we are in regime j if the value of the
threshold variable is at least as large as the j-th threshold value, but not as large as the
( j + 1 ) -th threshold. (Note that we follow EViews convention by defining the threshold values as the first values of each regime.)
For example, in the single threshold, two regime model, we have:

y_t = X_t'\beta + Z_t'\delta_1 + \epsilon_t    if  -\infty < q_t < \gamma_1
y_t = X_t'\beta + Z_t'\delta_2 + \epsilon_t    if  \gamma_1 \le q_t < \infty    (32.2)
Using an indicator function 1(⋅) which takes the value 1 if the expression is true and 0 otherwise, and defining 1_j(q_t, \gamma) = 1(\gamma_j \le q_t < \gamma_{j+1}), we may combine the m + 1 individual regime specifications into a single equation:

y_t = X_t'\beta + \sum_{j=0}^{m} 1_j(q_t, \gamma)\, Z_t'\delta_j + \epsilon_t    (32.3)
The identity of the threshold variable q_t and the regressors X_t and Z_t will determine the type of TR specification. If q_t is the d-th lagged value of y, Equation (32.3) is a self-exciting (SE) model with delay d; if it is not a lagged dependent variable, it is a conventional TR model. If the regressors X_t and Z_t contain only a constant and lags of the dependent variable, we have an autoregressive (AR) model. Thus, a SETAR model is a threshold regression that combines an autoregressive specification with a lagged dependent threshold variable.
Given the threshold variable and the regression specification in Equation (32.1), we wish to find the coefficients δ and β, and usually, the threshold values γ. We may also use model selection to identify the threshold variable q_t.
Nonlinear least squares is a natural approach for estimating the parameters of the model. If we define the sum-of-squares objective function

S(\delta, \beta, \gamma) = \sum_{t=1}^{T} \left( y_t - X_t'\beta - \sum_{j=0}^{m} 1_j(q_t, \gamma)\, Z_t'\delta_j \right)^2    (32.4)

then the least squares estimates of the coefficients and thresholds are the values that minimize this objective.
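For a single threshold, the minimization can be carried out by direct search over the sorted values of the threshold variable. The Python sketch below is illustrative only: it omits the non-varying X regressors, uses a simple trimming rule for the minimum regime size, and makes no attempt at the refinements EViews applies.

import numpy as np

def ssr(y, Z):
    if len(y) == 0:
        return 0.0
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    r = y - Z @ beta
    return float(r @ r)

def single_threshold(y, Z, q, trim=0.15):
    """Estimate one threshold value and regime coefficients by grid search over q."""
    T = len(y)
    h = int(np.floor(trim * T))
    candidates = np.unique(np.sort(q)[h:T - h])        # respect minimum regime sizes
    best_gamma, best_ssr = None, np.inf
    for gamma in candidates:
        low = q < gamma                                 # regime 0: q_t below the threshold
        total = ssr(y[low], Z[low]) + ssr(y[~low], Z[~low])
        if total < best_ssr:
            best_gamma, best_ssr = gamma, total
    low = q < best_gamma
    d0, *_ = np.linalg.lstsq(Z[low], y[low], rcond=None)
    d1, *_ = np.linalg.lstsq(Z[~low], y[~low], rcond=None)
    return best_gamma, d0, d1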
There are two tabs in the threshold regression dialog: Specification and Options. We discuss
each of the pages in turn.
Specification
There are three distinct sections in the threshold regression Specification page: Equation
specification, Threshold specification, and Sample specification. Since the sample specification should be familiar, we will focus on the first two sections.
In the first edit field of the Equation specification section you should enter the dependent
variable followed by a list of variables with threshold specific coefficients. The list of explanatory variables may include lagged series and ranges of lagged series specified with the word
to (lag ranges are common in threshold regression models). In the second edit field, you
may optionally specify a list of non-threshold varying regressors.
Next, in the Threshold variable specification edit field, you should enter a specification for one
or more threshold variables. You may enter this specification as a single integer or integer
pairs, or you may provide a list of variables:
If you enter a single integer, EViews will interpret the value as the delay parameter in a SETAR model. Thus, if your dependent variable is Y and you enter a 3 in the edit field, EViews will use Y(-3) as the threshold variable.
If you enter a single variable name, EViews will use that variable as the threshold
variable. Thus, if you enter W, EViews will estimate the specification using the
series W as the threshold variable.
If you enter one or more lag pairs, EViews will use model selection to determine the best delay parameter amongst all of the implied lag values. Thus, if you enter "1 4 7 9", EViews will estimate SETAR models with delay parameters between 1 and 4 and between 7 and 9 (threshold variables {Y(-1), Y(-2), Y(-3), Y(-4), Y(-7), Y(-8), Y(-9)}),
and determine the specification that minimizes the sum-of-squared residuals.
If you specify more than one variable, by providing a list of names, entering a group
name, or using wildcard expressions, EViews will estimate TR models using each variable as the threshold variable and will employ model selection to choose the specification that minimizes the sum-of-squares.
Note that your threshold specification may not mix integer specifications and explicit variable lists.
In the example depicted above, we specify a threshold regime specific AR(11) specification
for LYNX_TRANSF and enter the range pair 1 5 in the Threshold variable specification
edit field. The result is a SETAR model where we will perform model selection for the
threshold variable using lags of LYNX_TRANSF from 1 to 5 .
Options
The Options page contains additional settings for the calculation of the coefficient covariance matrix, the determination of thresholds, and the coefficient name. Most of the settings
are identical to those found in breakpoint least squares, and extensive discussion may be
found elsewhere (Estimating Least Squares with Breakpoints in EViews on page 409).
We offer a brief description of the threshold specification methods below.
Note that threshold determination based on testing should be viewed as informal in the TAR setting, as the lagged endogenous regressors in the model are themselves subject to structural breaks, which violates the assumptions for the Sup-F statistics (Hansen, 2000; Hansen, 1999).
For a given m , global estimation of thresholds compares the SSRs for all possible sets of m
threshold values. The following global methods are used to identify threshold values and the
associated regression coefficients. In the first two methods the number of thresholds is
unknown and the user must specify the maximum number of thresholds allowed. In the last
case the desired number of thresholds must be entered.
Global L thresholds versus none
Minimizing the information criteria
Fixed number - global
Threshold values may also be estimated sequentially by finding an initial threshold value
that minimizes the residual sums of squares, then searching for additional values (given the
initial value) that minimize the SSR until the desired number of thresholds, possibly determined through testing, is obtained. Sequential tests are used in the following methods.
Again, in the first two methods, the number of thresholds is not known and the user must
enter the maximum number of thresholds allowed. In the last case, the user must enter the
desired number of thresholds:
Sequential L + 1 breaks vs. L
Sequential tests all subsets
Fixed number - sequential
The global tests are mixed with sequential testing in the L + 1 versus global L method.
Additional details for each of these methods may be found in the discussion of breakpoint
regression (Background, beginning on page 407) and breakpoint testing (Multiple Breakpoint Tests, beginning on page 198).
Estimation Output
Suppose we estimate a two-regime threshold regression model with an AR(11) in each
regime and model selection over threshold dependent variable lags from -1 to -5.
The top part of the output shows equation specification information.
Dependent Variable: LYNX_TRANSF
Method: Threshold Regression
Date: 03/05/15 Time: 10:29
Sample (adjusted): 1832 1934
Included observations: 103 after adjustments
Threshold type: Fixed number of sequentially determined thresholds
Threshold variables considered: LYNX_TRANSF(-1)
LYNX_TRANSF(-2) LYNX_TRANSF(-3) LYNX_TRANSF(-4)
LYNX_TRANSF(-5)
Threshold variable chosen: LYNX_TRANSF(-3)
Threshold value used: 3.404149
In addition to the usual dependent variable, method, date, and sample information, EViews
displays information about the threshold specification. Here we see that the threshold value
was found using the fixed number (one) of sequentially determined thresholds. Since we
instructed EViews to perform model selection using 1 to 5 lags of the LYNX_TRANSF series,
EViews displays the names of all of the candidate series. Lastly, EViews displays the selected
threshold variable and the estimated threshold value.
Some comments on the reported threshold value are in order. Recall that the threshold values are only identified up to an interval defined by adjacent values of the sorted threshold
variable (Tsay, 1989). For purposes of display, EViews reports the observed value of the
threshold variable at the beginning of a regime, truncated to a more readable form, while
ensuring that the representation satisfies the threshold inequalities.
The middle part of the output displays coefficient values and associated statistics for
each regime. The bottom portion of the output contains the usual summary statistics.
Variable              Coefficient    Std. Error    t-Statistic    Prob.

                    LYNX_TRANSF(-3) < 3.404149
C                      0.901519       0.326423       2.761809     0.0071
LYNX_TRANSF(-1)        1.059387       0.111013       9.542922     0.0000
LYNX_TRANSF(-2)       -0.179744       0.156881      -1.145737     0.2554
LYNX_TRANSF(-3)       -0.054279       0.149394      -0.363329     0.7173
LYNX_TRANSF(-4)       -0.150385       0.150522      -0.999085     0.3208
LYNX_TRANSF(-5)        0.047953       0.155663       0.308059     0.7588
LYNX_TRANSF(-6)       -0.041777       0.155923      -0.267935     0.7894
LYNX_TRANSF(-7)       -0.036509       0.158275      -0.230671     0.8182
LYNX_TRANSF(-8)        0.159765       0.162665       0.982170     0.3290
LYNX_TRANSF(-9)        0.009293       0.162477       0.057197     0.9545
LYNX_TRANSF(-10)       0.184069       0.154652       1.190213     0.2375
LYNX_TRANSF(-11)      -0.308074       0.098137      -3.139209     0.0024

                    3.404149 <= LYNX_TRANSF(-3)
C                      1.074224       1.128350       0.952031     0.3440
LYNX_TRANSF(-1)        1.625837       0.191162       8.505018     0.0000
LYNX_TRANSF(-2)       -1.980078       0.307855      -6.431855     0.0000
LYNX_TRANSF(-3)        1.512621       0.445142       3.398063     0.0011
LYNX_TRANSF(-4)       -1.033482       0.444150      -2.326876     0.0225
LYNX_TRANSF(-5)        0.755411       0.357410       2.113569     0.0377
LYNX_TRANSF(-6)        0.424123       0.411788       1.029955     0.3062
LYNX_TRANSF(-7)       -0.946509       0.438604      -2.158001     0.0340
LYNX_TRANSF(-8)       -0.086284       0.278625      -0.309677     0.7576
LYNX_TRANSF(-9)        0.414445       0.257290       1.610811     0.1112
LYNX_TRANSF(-10)       0.141073       0.242708       0.581244     0.5627
LYNX_TRANSF(-11)      -0.242090       0.173382      -1.396282     0.1665

R-squared             0.928938    Mean dependent var        2.879249
Adjusted R-squared    0.908249    S.D. dependent var        0.562305
S.E. of regression    0.170325    Akaike info criterion    -0.501483
Sum squared resid     2.291827    Schwarz criterion         0.112434
Log likelihood       49.82638     Hannan-Quinn criter.     -0.252825
F-statistic          44.90026     Durbin-Watson stat        2.109955
Prob(F-statistic)     0.000000
Most of the summary statistics are self-explanatory. We do note that the R-squared, the F-statistic, and the corresponding probability are all based on a comparison with the fully restricted, no threshold, constant-only model.
These two views display the model selection criteria used to select the threshold variable in
a line plot or a table, ordered by the selection criterion.
For example, the criteria graph for this equation is shown below:
In this figure, the threshold variable whose model has the lowest AIC is clearly visible on the
left of the graph.
Here, we see the same set of results in table form. This view also includes information about
the common sample used for model selection estimation, and the number of regimes
employed for each candidate model.
Representations View
The representations view (View/Representations) shows the expanded equation specification, which combines the coefficients from different regimes with the threshold variable and
limits and various inequalities into a single equation.
Note that estimating this single equation specification via ordinary least squares will produce the same coefficients as the estimated threshold model.
Threshold Specification
The threshold specification view displays more detailed information about the threshold
variable, values, along with information about the method of selecting the number of
thresholds. To display this view, click on View/Threshold Specification from the equation
menu.
The top portion of the output displays information about the threshold and threshold values:
Threshold Specification
Description of the threshold specification used in estimation
Equation: TAR
Date: 03/05/15 Time: 11:28
Summary
Threshold variable: LYNX_TRANSF(-3)
Specified number of thresholds: 1
Method: Fixed number of sequentially determined thresholds
Threshold data value: 3.40414924921
Adjacent data value: 3.39984671271
Threshold value used: 3.404149
The detailed information on the threshold values includes the actual data value corresponding to the break (in this case 3.40414924921), the adjacent (next lower) data value (here 3.39984671271), and the truncated value EViews uses for display and representation purposes (3.404149). (Note that any value between the lower adjacent data value and
the threshold data value would produce the same observed fit).
The lower portion of the output displays calculations used in determining the thresholds:
                                       Scaled       Critical
Threshold Test    F-statistic     F-statistic       Value**
0 vs. 1 *            4.201074        50.41289         27.03
1 vs. 2 *            2.900849        34.81019         29.24
2 vs. 3              1.528676        18.34411         30.45

Threshold values:
                  Sequential       Repartition
      1             3.404149          2.328379
      2             2.328379          3.404149
In this case, EViews displays the results for sequentially determined thresholds using the Bai-Perron Sup-F test statistics. We caution again that since this TAR specification contains lagged endogenous variables, the conditions required for the distributional results are violated (Hansen, 1999; Hansen, 2000).
Simply enter the variables you wish to add in the appropriate edit field.
Alternately, the redundant variables test will prompt you to enter variables from the original
specification that you wish to drop.
We point out that these tests are performed conditionally on the thresholds identified in the
estimation step. EViews will use the threshold variables and values previously determined
and perform the test on the conditional linear specification. This may not be the test you
wish to perform.
Forecasting
Static or one-step ahead forecasting from an estimated TR equation is straightforward: it involves conditioning on the observed regressors, including any lagged endogenous variables, and computing the forecast.
For TAR and other models with lagged endogenous variables, n-step ahead nonlinear dynamic forecasting is considerably trickier (see, for example, the discussion in Potter, 1999, or Tong and Lim, 1980, who distinguish between the eventual forecasting function and the n-step ahead forecasts). For dynamic threshold regression models, EViews computes forecasts by stochastic simulation, with the forecasts and forecast standard errors obtained from the sample average and standard deviation of the simulated values.
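To illustrate the idea (this is a schematic sketch rather than EViews' internal algorithm), the following Python fragment simulates many future paths of a hypothetical two-regime TAR(1) model, with made-up coefficients, threshold, and error scale, and summarizes them into point forecasts and forecast standard errors:

import numpy as np

# Sketch of n-step dynamic forecasting by stochastic simulation for a
# hypothetical self-exciting TAR(1).  All parameter values are placeholders.
rng = np.random.default_rng(12345)

def tar_forecast(y_last, horizon, reps=10000,
                 lo=(0.5, 0.6), hi=(2.0, 0.2), c=3.0, sigma=0.25):
    """Return the mean and std. dev. of simulated values at each horizon."""
    paths = np.empty((reps, horizon))
    for r in range(reps):
        y = y_last
        for h in range(horizon):
            a, b = lo if y <= c else hi       # regime chosen by the threshold
            y = a + b * y + sigma * rng.standard_normal()
            paths[r, h] = y
    return paths.mean(axis=0), paths.std(axis=0)

point, se = tar_forecast(y_last=3.4, horizon=8)
print(point, se)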
To perform the forecast simulation, click on the Forecast button on the equation toolbar or
select Proc/Forecast... from the equation menu to display the dialog:
Most of this Forecast dialog should be familiar. Note, however, that under the Method section are two new edit fields for controlling the stochastic simulation, one for the number of
Repetitions, and the second for the % failed reps before halting.
Change the settings as desired and click on OK to continue. Here we tell EViews we wish to
compute the simulated dynamic forecasts from 1900 to 1934. EViews displays the results of
the forecast along with evaluation statistics:
We may use the saved values of the forecast and standard error series to show a comparison
with the actuals. First, display a graph containing the series using the command:
plot lynx_transf lynx_transfcst lynx_transfcst+2*lynx_transfse lynx_transfcst-2*lynx_transfse
After a bit of editing to change line colors, patterns, and legend entries, we have:
References
Bai, Jushan and Pierre Perron (1998). "Estimating and Testing Linear Models with Multiple Structural Changes," Econometrica, 66, 47–78.
Bai, Jushan and Pierre Perron (2003). "Computation and Analysis of Multiple Structural Change Models," Journal of Applied Econometrics, 18(1), 1–22.
Hansen, Bruce (1999). "Testing for Linearity," Journal of Economic Surveys, 13, 551–576.
Hansen, Bruce (2000). "Testing for Structural Change in Conditional Models," Journal of Econometrics, 97, 93–115.
Hansen, Bruce (2011). "Threshold Autoregression in Economics," Statistics and Its Interface, 4, 123–127.
Potter, Simon (1999). "Nonlinear Time Series Modelling: An Introduction," Journal of Economic Surveys, 13, 505–528.
Tong, H. and K. S. Lim (1980). "Threshold Autoregression, Limit Cycles and Cyclical Data," Journal of the Royal Statistical Society, Series B (Methodological), 42, 245–292.
Tsay, Ruey S. (1989). "Testing and Modeling Threshold Autoregressive Processes," Journal of the American Statistical Association, 84, 231–240.
Background
The following discussion describes only the basic features of switching models. Switching
models have a long history in economics that is detailed in numerous surveys (Goldfeld and
Quandt, 1973, 1976; Maddala, 1986; Hamilton, 1994; Frühwirth-Schnatter, 2006), and we encourage you to explore these resources for additional discussion.
Assume that the dependent variable $y_t$ is generated by a regression whose mean depends on an unobserved state (regime) variable $s_t$ that takes integer values between 1 and $M$. When the process is in regime $s_t = m$, the regime-specific mean is given by

$$\mu_t(m) = X_t'\beta_m + Z_t'\gamma \qquad (33.1)$$

where $\beta_m$ and $\gamma$ are $k_X$ and $k_Z$ vectors of coefficients. Note that the $\beta_m$ coefficients for $X_t$ are indexed by regime and that the $\gamma$ coefficients associated with $Z_t$ are regime invariant.

Lastly, we assume that the regression errors are normally distributed with variance that may depend on the regime. Then we have the model:

$$y_t = \mu_t(m) + \sigma(m)\epsilon_t \qquad (33.2)$$

when $s_t = m$, where $\epsilon_t$ is iid standard normally distributed. Note that the standard deviation $\sigma$ may be regime dependent, $\sigma(m) = \sigma_m$.
The likelihood contribution for a given observation may be formed by weighting the density
function in each of the regimes by the one-step ahead probability of being in that regime:
$$L_t(\beta, \gamma, \sigma, \delta) = \sum_{m=1}^{M} \frac{1}{\sigma(m)}\,\phi\!\left(\frac{y_t - \mu_t(m)}{\sigma(m)}\right) P(s_t = m \mid \mathcal{I}_{t-1}, \delta) \qquad (33.3)$$

where $\beta = (\beta_1, \ldots, \beta_M)$, $\sigma = (\sigma_1, \ldots, \sigma_M)$, $\delta$ are parameters that determine the regime probabilities, $\phi(\cdot)$ is the standard normal density function, and $\mathcal{I}_{t-1}$ is the information set in period $t-1$. In the simplest case, the $\delta$ represent the regime probabilities themselves.
The full log-likelihood is a normal mixture

$$l(\beta, \gamma, \sigma, \delta) = \sum_{t=1}^{T} \log\left(\sum_{m=1}^{M} \frac{1}{\sigma(m)}\,\phi\!\left(\frac{y_t - \mu_t(m)}{\sigma(m)}\right) P(s_t = m \mid \mathcal{I}_{t-1}, \delta)\right) \qquad (33.4)$$
Simple Switching
To this point, we have treated the regime probabilities $P(s_t = m \mid \mathcal{I}_{t-1}, \delta)$ in an abstract fashion. This section considers a simple switching model featuring independent regime
probabilities. We begin by focusing on the specification of the regime probabilities, then
describe likelihood evaluation and estimation of those probabilities.
It should be emphasized that the following discussion is valid only for specifications with
uncorrelated errors. Models with correlated errors are described in Serial Correlation on
page 449.
Regime Probabilities
In the case where the probabilities are constant values, we could simply treat them as additional parameters in the likelihood in Equation (33.4). More generally, we may allow for varying probabilities by assuming that $p_m$ is a function of vectors of exogenous observables $G_{t-1}$ and coefficients $\delta$, parameterized using a multinomial logit specification:

$$P(s_t = m \mid \mathcal{I}_{t-1}, \delta) \equiv p_m(G_{t-1}, \delta) = \frac{\exp(G_{t-1}'\delta_m)}{\sum_{j=1}^{M} \exp(G_{t-1}'\delta_j)} \qquad (33.5)$$
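As a small illustration (not EViews code), the multinomial logit in Equation (33.5) can be evaluated with a standard softmax calculation; the coefficient values below are hypothetical, and one regime's coefficient vector is normalized to zero, a common identifying assumption:

import numpy as np

# Regime probabilities from a multinomial logit in the probability
# regressors G(t-1); delta holds one coefficient vector per regime.
def regime_probs(G_lag, delta):
    z = delta @ G_lag          # linear index for each regime
    z -= z.max()               # numerical safeguard against overflow
    w = np.exp(z)
    return w / w.sum()

delta = np.array([[0.4, 1.2],   # regime 1 coefficients (hypothetical)
                  [0.0, 0.0]])  # regime 2 normalized to zero
print(regime_probs(np.array([1.0, 0.3]), delta))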
Likelihood Evaluation
We may use Equation (33.4) and Equation (33.5) to obtain a normal mixture log-likelihood
function:
$$l(\beta, \gamma, \sigma, \delta) = \sum_{t=1}^{T} \log\left(\sum_{m=1}^{M} \frac{1}{\sigma(m)}\,\phi\!\left(\frac{y_t - \mu_t(m)}{\sigma(m)}\right) p_m(G_{t-1}, \delta)\right) \qquad (33.6)$$
This likelihood may be maximized with respect to the parameters $(\beta, \gamma, \sigma, \delta)$ using iterative methods.
It is worth noting that the likelihood function for this normal mixture model is unbounded
for certain parameter values. However, local optima have the usual consistency, asymptotic
normality, and efficiency properties. See Maddala (1986) for discussion of this issue as well
as a survey of different algorithms and approaches for estimating the parameters.
Given parameter point-estimates, coefficient covariances may be estimated using conventional methods, e.g., inverse negative Hessian, inverse outer-product of the scores, and
robust sandwich.
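For concreteness, a minimal sketch of the normal-mixture log-likelihood in Equation (33.6) is shown below, written for the simple case of constant regime probabilities; the data and parameter values are simulated placeholders, and an iterative optimizer would be used to maximize the function:

import numpy as np
from scipy.stats import norm

# Normal mixture log-likelihood for a simple switching model with
# constant regime probabilities p (a sketch of Equation (33.6)).
def mixture_loglik(y, mu, sigma, p):
    dens = norm.pdf((y[:, None] - mu) / sigma) / sigma   # T x M regime densities
    return np.log(dens @ p).sum()

y = np.random.default_rng(0).normal(size=200)
print(mixture_loglik(y, mu=np.array([-1.0, 1.0]),
                     sigma=np.array([0.8, 1.2]),
                     p=np.array([0.3, 0.7])))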
Filtering
The likelihood expression in Equation (33.6) depends on the one-step ahead probabilities of being in a regime, $P(s_t = m \mid \mathcal{I}_{t-1})$. Note, however, that observing the value of the dependent variable in a given period provides additional information about which regime is in effect. We may use this contemporaneous information to obtain updated estimates of the regime probabilities.

The process by which the probability estimates are updated is commonly termed filtering. By Bayes' theorem and the laws of conditional probability, we have the filtering expressions:

$$P(s_t = m \mid \mathcal{I}_t) = P(s_t = m \mid y_t, \mathcal{I}_{t-1}) = \frac{f(y_t \mid s_t = m, \mathcal{I}_{t-1})\, P(s_t = m \mid \mathcal{I}_{t-1})}{f(y_t \mid \mathcal{I}_{t-1})} \qquad (33.7)$$
The expressions on the right-hand side are obtained as a by-product of the densities
obtained during likelihood evaluation. Substituting, we have:
$$P(s_t = m \mid \mathcal{I}_t) = \frac{\dfrac{1}{\sigma(m)}\,\phi\!\left(\dfrac{y_t - \mu_t(m)}{\sigma(m)}\right) p_m(G_{t-1}, \delta)}{\displaystyle\sum_{j=1}^{M} \dfrac{1}{\sigma(j)}\,\phi\!\left(\dfrac{y_t - \mu_t(j)}{\sigma(j)}\right) p_j(G_{t-1}, \delta)} \qquad (33.8)$$
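A one-line Bayes update captures this filtering step. The sketch below (with hypothetical values) reweights the one-step ahead probabilities by the regime densities as in Equation (33.8):

import numpy as np
from scipy.stats import norm

# Filtered regime probabilities P(s_t = m | I_t) from one-step ahead
# probabilities and the observed y_t (a sketch of Equations (33.7)-(33.8)).
def filter_update(y_t, mu, sigma, prob_ahead):
    joint = (norm.pdf((y_t - mu) / sigma) / sigma) * prob_ahead
    return joint / joint.sum()

print(filter_update(0.4, mu=np.array([-1.0, 1.0]),
                    sigma=np.array([0.8, 1.2]),
                    prob_ahead=np.array([0.3, 0.7])))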
Markov Switching
The Markov switching regression model extends the simple exogenous probability framework by specifying a first-order Markov process for the regime probabilities. We begin by
describing the regime probability specification, then discuss likelihood computation, filtering, and smoothing.
Regime Probabilities
The first-order Markov assumption requires that the probability of being in a regime depends on the previous state, so that

$$P(s_t = j \mid s_{t-1} = i) = p_{ij}(t) \qquad (33.9)$$

These probabilities may be gathered in a transition matrix

$$p(t) = \begin{bmatrix} p_{11}(t) & \cdots & p_{1M}(t) \\ \vdots & & \vdots \\ p_{M1}(t) & \cdots & p_{MM}(t) \end{bmatrix} \qquad (33.10)$$

where the $ij$-th element represents the probability of transitioning from regime $i$ in period $t-1$ to regime $j$ in period $t$. (Note that some authors use the transpose of $p(t)$ so that all of their indices are reversed from those used here.)
As in the simple switching model, we may parameterize the probabilities in terms of a multinomial logit. Note that since each row of the transition matrix specifies a full set of conditional probabilities, we define a separate multinomial specification for each row $i$ of the matrix:

$$p_{ij}(G_{t-1}, \delta_i) = \frac{\exp(G_{t-1}'\delta_{ij})}{\sum_{s=1}^{M} \exp(G_{t-1}'\delta_{is})} \qquad (33.11)$$
1. First, the filtered probabilities from the previous period are combined with the transition probabilities to form the one-step ahead probabilities of being in each regime:

$$P(s_t = m \mid \mathcal{I}_{t-1}) = \sum_{j=1}^{M} P(s_t = m \mid s_{t-1} = j)\, P(s_{t-1} = j \mid \mathcal{I}_{t-1}) = \sum_{j=1}^{M} p_{jm}(G_{t-1}, \delta_j)\, P(s_{t-1} = j \mid \mathcal{I}_{t-1}) \qquad (33.12)$$
2. Next, we use these one-step ahead probabilities to form the one-step ahead joint densities of the data and regimes in period $t$:

$$f(y_t, s_t = m \mid \mathcal{I}_{t-1}) = \frac{1}{\sigma(m)}\,\phi\!\left(\frac{y_t - \mu_t(m)}{\sigma(m)}\right) P(s_t = m \mid \mathcal{I}_{t-1}) \qquad (33.13)$$
3. The likelihood contribution for period $t$ is obtained by summing the joint densities across the unobserved states to obtain the marginal distribution of the observed data:

$$L_t(\beta, \gamma, \sigma, \delta) = f(y_t \mid \mathcal{I}_{t-1}) = \sum_{j=1}^{M} f(y_t, s_t = j \mid \mathcal{I}_{t-1}) \qquad (33.14)$$
4. The final step is to filter the probabilities by using the results in Equation (33.13) to update the one-step ahead predictions of the probabilities:

$$P(s_t = m \mid \mathcal{I}_t) = \frac{f(y_t, s_t = m \mid \mathcal{I}_{t-1})}{\sum_{j=1}^{M} f(y_t, s_t = j \mid \mathcal{I}_{t-1})} \qquad (33.15)$$
These steps are repeated successively for each period, $t = 1, \ldots, T$. All that we require for implementation are the initial filtered probabilities, $P(s_0 = m \mid \mathcal{I}_0)$, or alternately, the initial one-step ahead regime probabilities $P(s_1 = m \mid \mathcal{I}_0)$. See "Initial Probabilities" on page 448 for discussion.
The likelihood obtained by summing the terms in Equation (33.14) may be maximized with respect to the parameters $(\beta, \gamma, \sigma, \delta)$ using iterative methods. Coefficient covariances may be estimated using standard approaches.
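The four steps translate directly into a short recursion. The sketch below (illustrative Python, not EViews code; the transition matrix, means, and data are hypothetical) computes the log-likelihood and the filtered probabilities for a two-regime model with constant transition probabilities:

import numpy as np
from scipy.stats import norm

# Markov switching filter: Equations (33.12)-(33.15).
# P is the M x M transition matrix with rows summing to one.
def markov_filter(y, mu, sigma, P, p0):
    filt, loglik = p0, 0.0
    filtered = np.empty((len(y), len(p0)))
    for t, y_t in enumerate(y):
        pred = P.T @ filt                                      # Eq. (33.12)
        joint = (norm.pdf((y_t - mu) / sigma) / sigma) * pred  # Eq. (33.13)
        f_t = joint.sum()                                      # Eq. (33.14)
        loglik += np.log(f_t)
        filt = joint / f_t                                     # Eq. (33.15)
        filtered[t] = filt
    return loglik, filtered

P = np.array([[0.90, 0.10],
              [0.25, 0.75]])
y = np.random.default_rng(1).normal(size=50)
ll, probs = markov_filter(y, mu=np.array([1.0, -0.4]),
                          sigma=np.array([1.0, 1.0]), P=P,
                          p0=np.array([0.5, 0.5]))
print(ll)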
Smoothing
Estimates of the regime probabilities may be improved by using all of the information in the sample. The smoothed estimates for the regime probabilities in period $t$ use the information set in the final period, $\mathcal{I}_T$, in contrast to the filtered estimates which employ only contemporaneous information, $\mathcal{I}_t$. Intuitively, using information about future realizations of the dependent variable $y_s$ ($s > t$) improves our estimates of being in regime $m$ in period $t$ because the Markov transition probabilities link together the likelihood of the observed data in different periods.
Kim (1994) provides an efficient smoothing algorithm that requires only a single backward recursion through the data. Under the Markov assumption, Kim shows that the joint probability is given by

$$\begin{aligned} P(s_t = i, s_{t+1} = j \mid \mathcal{I}_T) &= P(s_t = i \mid s_{t+1} = j, \mathcal{I}_T)\, P(s_{t+1} = j \mid \mathcal{I}_T) \\ &= \frac{P(s_t = i, s_{t+1} = j \mid \mathcal{I}_t)}{P(s_{t+1} = j \mid \mathcal{I}_t)}\, P(s_{t+1} = j \mid \mathcal{I}_T) \end{aligned} \qquad (33.16)$$
The key in moving from the first to the second line of Equation (33.16) is the fact that, under appropriate assumptions, if $s_{t+1}$ were known, there is no additional information about $s_t$ in the future data $(y_{t+1}, \ldots, y_T)$.
The smoothed probability in period $t$ is then obtained by marginalizing the joint probability with respect to $s_{t+1}$:

$$P(s_t = i \mid \mathcal{I}_T) = \sum_{j=1}^{M} P(s_t = i, s_{t+1} = j \mid \mathcal{I}_T) \qquad (33.17)$$
Note that apart from the smoothed probability terms, $P(s_{t+1} = j \mid \mathcal{I}_T)$, all of the terms on the right-hand side of Equation (33.16) are obtained as part of the filtering computations. Given the set of filtered probabilities, we initialize the smoother using $P(s_T = j \mid \mathcal{I}_T)$, and iterate computation of Equation (33.16) and Equation (33.17) for $t = T-1, \ldots, 1$ to obtain the smoothed values.
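The backward recursion is equally compact. The following sketch (again illustrative rather than EViews code) applies Equations (33.16) and (33.17) to a set of filtered probabilities such as those produced by the filter sketched earlier; the inputs shown are hypothetical:

import numpy as np

# Kim smoother: one backward pass converting filtered probabilities
# (T x M array) into smoothed probabilities, given transition matrix P.
def kim_smoother(filtered, P):
    T, M = filtered.shape
    smoothed = np.empty_like(filtered)
    smoothed[-1] = filtered[-1]
    for t in range(T - 2, -1, -1):
        pred = P.T @ filtered[t]                  # P(s_{t+1} = j | I_t)
        joint = filtered[t][:, None] * P * (smoothed[t + 1] / pred)[None, :]  # Eq. (33.16)
        smoothed[t] = joint.sum(axis=1)           # Eq. (33.17)
    return smoothed

filt = np.array([[0.6, 0.4],
                 [0.3, 0.7],
                 [0.8, 0.2]])
P = np.array([[0.90, 0.10],
              [0.25, 0.75]])
print(kim_smoother(filt, P))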
Initial Probabilities
The Markov switching filter requires initialization of the filtered regime probabilities in period 0, $P(s_0 = m \mid \mathcal{I}_0)$.
There are a few ways to proceed. Most commonly, the initial regime probabilities are set to
the ergodic (steady state) values implied by the Markov transition matrix (see, for example
Hamilton (1999, p. 192) or Kim and Nelson (1999, p. 70) for discussion and results). The
values are thus treated as functions of the parameters that determine the transition matrix.
Alternately, we may use prior knowledge to specify regime probability values, or we can be
agnostic and assign equal probabilities to regimes. Lastly, we may treat the initial probabilities as parameters to be estimated.
Note that the initialization to ergodic values using period 0 information is somewhat arbitrary in the case of time-varying transition probabilities.
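For the common ergodic initialization, the steady state probabilities solve $\pi' = \pi' p$ subject to the elements of $\pi$ summing to one. A small sketch (with a hypothetical transition matrix) obtains them from the unit-eigenvalue left eigenvector of the transition matrix:

import numpy as np

# Ergodic (steady state) regime probabilities implied by a constant
# Markov transition matrix P.
def ergodic_probs(P):
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

P = np.array([[0.90, 0.10],
              [0.25, 0.75]])
print(ergodic_probs(P))   # approximately [0.714, 0.286] for this example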
Dynamic Models
We may extend the basic switching model to allow for dynamics in the form of lagged
endogenous variables and serially correlated errors. The two methods require different
assumptions about the dynamic response to changes in regime.
Our discussion is very brief. Frühwirth-Schnatter (2006) offers a nice overview of the differences between these two approaches, and provides further discussion and references.
Dynamic Regression
The most straightforward method of adding dynamics to the switching model is to include lagged endogenous variables. For a model with $p$ lagged endogenous regressors, and random state variable $s_t$ taking the value $m$, we have:

$$\mu_t(m) = X_t'\beta_m + Z_t'\gamma + \sum_{r=1}^{p} \varphi_{rm}\, y_{t-r} \qquad (33.18)$$

$$y_t = \mu_t(m) + \sigma(m)\epsilon_t$$

where $\epsilon_t$ is again iid standard normally distributed. The coefficients on the lagged endogenous variables are allowed to be regime-varying, but this generality is not required.
In the Markov switching context, this model has been termed the Markov switching dynamic regression (MSDR) model (Frühwirth-Schnatter, 2006). In the special case where the lagged endogenous coefficients are regime-invariant, the model may be viewed as a variant of the Markov switching intercept (MSI) specification (Krolzig, 1997).

Of central importance is the fact that the mean specification depends only on the contemporaneous state variable $s_t$, so that lagged endogenous regressors may be treated as additional regime specific $X_t$ or invariant $Z_t$ for purposes of likelihood evaluation, filtering, and smoothing. Thus, the discussions in "Simple Switching" on page 444 and "Markov Switching" on page 445 are directly applicable in MSDR settings.
Serial Correlation
An alternative dynamic approach assumes that the errors are serially correlated (Hamilton, 1989). With serial correlation of order $p$, we have the AR specification

$$\left(1 - \sum_{r=1}^{p} \rho_r(s_t)\, L^r\right)\left(y_t - \mu_t(s_t)\right) = \sigma(s_t)\epsilon_t \qquad (33.19)$$

or, equivalently,

$$y_t = \mu_t(s_t) + \sum_{r=1}^{p} \rho_r(s_t)\left(y_{t-r} - \mu_{t-r}(s_{t-r})\right) + \sigma(s_t)\epsilon_t \qquad (33.20)$$

In the Markov switching literature, this specification has been termed the Markov switching autoregressive (MSAR) (Frühwirth-Schnatter, 2006) or the Markov switching mean (MSM) model (Krolzig, 1997). The MSAR model is perhaps most commonly referred to as the Hamilton model of switching with dynamics.
Note that, in contrast to the MSDR specification, the mean equation in the MSAR model depends on lagged states. The presence of the regime-specific lagged mean adjustments on the right-hand side of Equation (33.20) implies that probabilities for a $p+1$ dimensional state vector representing the current and $p$ previous regimes are required to obtain a representation of the likelihood.
For example, in a two regime model with an AR(1), we have the standard prediction error representation of the likelihood:

$$l(\beta, \gamma, \sigma, \delta, \rho) = \sum_{t=2}^{T} \log\left(\sum_{i=1}^{2}\sum_{j=1}^{2} \frac{1}{\sigma(i)}\,\phi\!\left(\frac{y_t - \mu_t(i) - \rho_1(i)\left(y_{t-1} - \mu_{t-1}(j)\right)}{\sigma(i)}\right) P(s_t = i, s_{t-1} = j \mid \mathcal{I}_{t-1})\right) \qquad (33.21)$$

which requires that we consider probabilities for the four potential regime outcomes for the state vector $(s_t, s_{t-1})$.
More generally, since there is a $p+1$ dimensional state vector and $M$ regimes, the number of potential realizations is $M^* = M^{p+1}$. The description of the basic Markov switching model above ("Markov Switching" on page 445) is no longer valid since it does not handle the filtering and smoothing for the full $M^*$ vector of probabilities.
Markov Switching AR
Hamilton (1989) derived the form of the MSAR specification and outlined an operational filtering procedure for evaluating the likelihood function. Hamilton (1989), Kim (1994), and
Kim and Nelson (1999, Chapter 4) all offer excellent descriptions of the construction of this
lagged-state filtering procedure.
Briefly, the Hamilton filter extends the analysis in "Markov Switching" on page 445 to handle the larger $p+1$ dimensional state vector. While the mechanics of the procedure are a bit more involved, the concepts follow directly from the simple filter described above ("Likelihood Evaluation and Filtering" on page 446). The filtered probabilities for lagged values of the states, $s_{t-1}, \ldots, s_{t-p}$, conditional on the information set $\mathcal{I}_{t-1}$, are obtained from the previous iteration of the filter, and the one-step ahead joint probabilities for the state vector are obtained by applying the Markov updates to the filtered probabilities. These joint probabilities are used to evaluate a likelihood contribution and in obtaining updated filtered probabilities.

Hamilton also offers a modified lag-state smoothing algorithm that may be used with the MSAR model, but the approach is computationally unwieldy. Kim (1994) improves significantly on the Hamilton smoother with an efficient smoothing filter that handles the $M^*$ probabilities using a single backward recursion pass through the data. This approach is a straightforward extension of the basic Kim smoother (see "Smoothing" on page 447).
Simple Switching AR
The simple switching results outlined earlier ("Simple Switching" on page 444) do not hold for the simple switching with autocorrelation (SSAR) model. As with the MSAR specification, the presence of lagged states in the specification complicates the dynamics and requires handling a $p+1$ dimensional state variable representing the current and lagged states.

Conveniently, we may obtain results for the specification by treating it as a restricted Markov switching model with transition probabilities that do not depend on the origin regime:

$$P(s_t = j \mid s_{t-1} = i) = p_{ij}(t) = p_j(t) \qquad (33.22)$$

$$p(t) = \begin{bmatrix} p_1(t) & \cdots & p_M(t) \\ \vdots & & \vdots \\ p_1(t) & \cdots & p_M(t) \end{bmatrix} \qquad (33.23)$$
We may then apply the Hamilton filter and Kim smoother to this restricted specification to
obtain the one-step ahead, likelihood, filtered, and smoothed values.
Initial Probabilities
In the serial correlation setting, the Markov switching filter requires initialization of the vector of probabilities associated with the $M^{p+1}$ dimensional state vector. We may proceed as in the uncorrelated model by setting $M$ initial probabilities in period $-(p+1)$ using one of the methods described in "Initial Probabilities" on page 448, and recursively applying the Markov transition updates to obtain the joint initial probabilities for the $M^{p+1}$ dimensional initial probability vector in period 0.

Again note that the initialization to steady state values using the period $-(p+1)$ information is somewhat arbitrary in the case of time-varying transition probabilities.
There are two tabs in this dialog. The first tab is used for basic specification of your switching regression. The second tab contains options for modifying selected features of the specification and for controlling computational aspects of the estimation.
Specification
The Specification page contains three sections: Equation specification, Switching specification, and Estimation settings. We focus on the first two sections since the last should
already be familiar.
Equation Specification
The top portion of the page contains the Equation specification section where you should
specify the behavior of the model in the different regimes.
You should enter the name of the dependent variable series ($y_t$) followed by any regressors with switching coefficients ($X_t$) in the first edit field. Regressors that have non-varying coefficients ($Z_t$) should be entered in the second edit field.
Switching Specification
The Switching specification section controls the specification of the regime probabilities.

The Switching type dropdown allows you to choose between Simple and Markov switching. The default setting is to estimate a simple switching model.

You should specify the number of regimes $M > 1$ in the edit field. By default, EViews assumes that you have two regimes. Bear in mind that switching models with more than a few regimes may be difficult to estimate.

You may specify additional regressors that determine the unconditional regime probabilities (for simple switching) or the regime transition probability matrix (for Markov switching). By default, EViews sets the list so that there is a single constant term, resulting in time-invariant probabilities.

Important note: the data for the probability regressors that determine the transition or regime probabilities for period $t$ should be located in period $t$ of the workfile. That is, the $G_{t-1}$ data should be in period $t$ of the workfile, not period $t-1$. You may, of course, employ standard EViews lag expressions to refer to data in the previous period.
Additional options for setting the initial regime probabilities and restricting the elements of
the probability vector or transition matrix are located on the Options tab and described in
Switching on page 454.
Options
Clicking on the Options tab displays options for modifying features of the switching specification and for controlling various aspects of the computation.
Switching
The Switching options section may be used to specify the initial state probabilities and any
restrictions to the regime probability vector or transition matrix.
Recall that evaluation of the likelihood in Markov switching
and SSAR models requires presample values for the filtered
probabilities (Initial Probabilities on page 448). The Initial
regime probabilities dropdown lets you choose the method of
initializing these values (Ergodic solution (default), Estimated, Uniform, User-specified). If you select User-specified, you will be prompted for the name of an $M$-element vector in the workfile that contains the initial probabilities.
The Probability restriction vector/Transition restriction matrix edit field allows you to specify restrictions on the regime probabilities. Markov switching models, in particular, will sometimes require restrictions on the transition matrix probabilities. For example, we may have $p_{wr} = 0$ if it is impossible to transition directly from state $w$ to state $r$. Similarly, if state $w$ is an absorbing state, then $p_{ww} = 1$ and $p_{wr} = 0$ for $r \neq w$.
To specify restrictions, you should enter the name of an $M$-element vector in the workfile (for an SSAR model), or an $M \times M$ matrix in the workfile (for Markov switching) in the edit field. The vector or matrix should contain valid probability values for elements that are restricted and NAs for elements that are to be estimated. For example, in a three regime Markov switching model where state 3 is an absorbing state, you would have

$$\begin{bmatrix} \text{NA} & \text{NA} & \text{NA} \\ \text{NA} & \text{NA} & \text{NA} \\ 0 & 0 & 1 \end{bmatrix} \qquad (33.24)$$
You should take care not to specify invalid or inconsistent restrictions. For example, rows of
a Markov transition matrix may not be specified so that there is a single unrestricted cell
since the adding up condition for the row places a restriction on that cell. Similarly, fixed
values should be valid probabilities that do not generate row sums greater than 1.
EViews will detect these types of errors and will refuse to estimate the model.
In Regime Heteroskedasticity on page 472, we offer an example that employs restrictions
and discuss several practical issues associated with estimating the model.
Start Method
The Start method dropdown allows you to specify a basic
method for choosing starting values (EViews Supplied, .8 x
EViews Supplied, .5 x EViews Supplied, .3 x EViews Supplied, Zero, User-Supplied).
The EViews supplied methods employ simple least squares coefficient estimates or the specified fraction of those estimates. AR coefficients are arbitrarily initialized to zero.
The Zero and User-Supplied methods are self-explanatory, with the latter taken from the
default coefficient vector specified in the dialog (typically the coefficient vector C).
Randomized Estimates
The first two edit fields under Randomized estimates allow you to choose random starting values based on those specified in the Start method dropdown.
EViews will generate the number of random starts specified in No. of random starts, and for each random start will perform the number of iterations specified in Iterations for starts. The coefficients with the highest likelihood value will be chosen as the starting values.
For non user-supplied starting values, EViews will, by default, generate 25 sets of random starting values and refine each with 10 iterations before choosing the best set as
the starting values. By default, there is no randomization of user-supplied values.
In addition to randomizing based on the initial values, you may randomize based on the
final coefficient estimates. The No. of random from final edit field determines the number
of random coefficients to try following estimation.
The random starting values are chosen by taking the best estimated values to date and
adding random normals with scale given by the Random scale fraction of the final
coefficient standard deviations. The estimates with the highest likelihood become the
final estimates.
By default, EViews does not perform randomization based on the final coefficient estimates.
For both initial and final randomization, the random starting values are chosen by taking the
base values and adding random normals with scale given by the Random scale fraction of
the root of the estimated coefficient variances (or the scale fraction itself if the variances are
not available). The random values will be generated using the Generator specified in the
dropdown and the random Seed specified in the edit field. If a random seed is not specified,
EViews will obtain one from a single draw from the generator.
Optimization Options
You can use the Optimization method dropdown menu to select a different method.
By default, EViews uses BFGS with Marquardt steps to obtain parameter estimates. You may
use the Optimization method dropdown to choose between BFGS, OPG - BHHH, or Newton-Raphson. The Step method dropdown offers the choice of Marquardt, Dogleg, and
Line search.
See Optimization Method on page 1006 for discussion.
The remainder of the section allows you to specify a convergence criterion and the maximum number of iterations, and to instruct EViews to display starting value and other optimization information in the output.
Estimation Output
Estimating the equation produces the standard estimation output view. Here we see example
output from a simple switching regression model, estimated using data in the workfile
GNP_hamilton.WF1:
Dependent Variable: G
Method: Simple Switching Regression (BFGS / Marquardt steps)
Date: 03/10/15 Time: 16:03
Sample (adjusted): 1951Q4 1984Q4
Included observations: 133 after adjustments
Number of states: 2
Standard errors & covariance computed using observed
Hessian
Random search: 25 starting values with 10 iterations using 1 standard
deviation (rng=kn, seed=216937)
Convergence achieved after 16 iterations
Variable                Coefficient    Std. Error     z-Statistic    Prob.

                                    Regime 1
C                        -0.769089      0.233452      -3.294420      0.0010
G(-1)                     0.493466      0.140850       3.503494      0.0005

                                    Regime 2
C                         0.951307      0.138822       6.852728      0.0000
G(-1)                     0.272289      0.090619       3.004760      0.0027

                                    Common
G(-2)                    -0.012522      0.081574      -0.153503      0.8780
LOG(SIGMA)               -0.342578      0.117598      -2.913118      0.0036

                            Probabilities Parameters
P1-C                     -0.985846      0.440799      -2.236495      0.0253

Mean dependent var        0.719740      S.D. dependent var        1.058739
S.E. of regression        1.020420      Sum squared resid         132.2396
Durbin-Watson stat        2.043722      Log likelihood           -184.7977
Akaike info criterion     2.884176      Schwarz criterion         3.036300
Hannan-Quinn criter.      2.945993
The top portion of the output describes the type of switching model and basic sample information, along with information about the computation of the coefficient covariance and the
method of producing coefficient estimates.
The middle section displays the coefficient estimates. Regime specific coefficients are presented in blocks at the top, followed by any common coefficients, and then the logistic coefficients for the regime probabilities. Note that we have specified G(-1) to be a regime specific regressor, G(-2) to be common, and have assumed a common error variance. In this example of a simple switching model with two regimes and no additional probability regressors, there is only a single estimated probability parameter, the constant P1-C.
The bottom section shows the standard descriptive statistics for the equation. Most are self-explanatory. Of note are the residual-based statistics, which employ the expected value of the residuals obtained by taking the sum of the regime specific residuals weighted by the one-step ahead (unfiltered) regime probabilities (Maheu and McCurdy, 2000).
Switching Views
Once you have estimated your equation, EViews offers a variety
of views for examining your results.
Most of these routines are familiar tools for working with an estimated equation. You may, for example, examine Actual, Fitted,
Residual plots for your estimated equation, examine the coefficient Covariance Matrix, or use Coefficient Diagnostics view
submenu to examine coefficient confidence ellipses, compute
Wald or omitted and redundant variable tests, or use the Residual Diagnostics submenu to examine properties of your residuals.
Since the presence of multiple regimes creates a few wrinkles, we offer a few comments on
select views.
Regime Results
EViews offers specialized tools for examining the regime transition results and predicted
regime probabilities.
Transition Results
To display the transition results dialog, select View/Regime
Results/Transition Results... EViews offers to display different
types of output: Summary, Transition probabilities, and
Expected durations.
The default Summary display shows a table containing both
the transition matrix and the expected durations (Kim and Nelson, 1999, p. 71-72) implied by the transition matrix. For example,
Equation: EQ01
Date: 10/16/12   Time: 11:18
Transition summary: Constant simple switching
 transition probabilities and expected durations
Sample (adjusted): 1951Q4 1984Q4
Included observations: 133 after adjustments

Constant transition probabilities:
P(i, k) = P(s(t) = k | s(t-1) = i)
(row = i / column = j)
                  1             2
1                 0.271733      0.728267
2                 0.271733      0.728267

Constant expected durations:
                  1             2
                  1.373122      3.680077
Here, we see results from the simple switching model with constant transition probabilities.
Note that since the model assumes simple switching, the probabilities of being in regime 1
and regime 2 (approximately 0.27 and 0.73, respectively) do not depend on the origin state.
These probabilities imply that the expected duration in a regime is roughly 1.37 quarters in
regime 1 and 3.68 quarters in regime 2.
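These durations can be checked by hand. Since the regime probabilities here do not depend on the origin state, the run length in a regime is geometric, so that

$$E(\text{duration of regime } m) = \frac{1}{1 - p_m}, \qquad \frac{1}{1 - 0.271733} \approx 1.37, \qquad \frac{1}{1 - 0.728267} \approx 3.68 .$$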
In models with varying transition probabilities, the transition probability summary will
instead show the means and standard deviations of the transition probabilities and the
expected durations.
The latter two output type choices may prove useful in models with time-varying transition
probabilities:
Selecting Transition probabilities allows you to see the transition matrix in every
period. You will be offered a choice of displaying the transition probabilities in Graph
(default), Sheet, or Table form.
The Graph display shows a multiple graph showing each transition probability. For
purposes of this display simple switching models are treated as restricted Markov
switching models. Thus, a two-regime switching model will always show four separate graphs, one for each transition. Note that for constant transition probability models, each graph will be a straight line.
The Sheet display shows the same results in spreadsheet format while the Table form
displays results for each period in a table form similar to that used in the summary
display. For constant probability models, the spreadsheet format will contain identical
rows and the table form will show a single transition matrix.
Switching Views461
Lastly, you may display the Expected durations associated with the transition matrix
in each period. The results may be displayed in Graph (default), Sheet, or Table form.
For constant probability models, the table form will only show the single set of
expected durations.
Regime Probabilities
To display the estimated regime probabilities select View/Regime Results/Regime Probabilities...
EViews offers a choice between the One-step-ahead, Filtered, and Smoothed probabilities.
For simple switching models without AR terms, the filtered and smoothed results are the
same. In addition, you may display the results in Multiple graphs, in a Single graph, or in
Sheet form.
For graphical output you may also select which regimes to plot by entering the corresponding indices in the Regimes to plot edit field. While the default is to show the probabilities
for all regimes, you may use the edit field to remove redundant probabilities or to focus on a
specific regime of interest.
AR Structure
EViews offers several ways of viewing the underlying structure of your AR specification.
These tools are described in detail elsewhere (see ARMA Structure, beginning on
page 116). Note, however, that these tools are currently available only for models with
regime-invariant AR structures.
Switching Procs
EViews offers several procs for working with switching equations. Most of these procs are self-explanatory, but we offer
brief comments about forecasting and saving regime results to
the workfile.
Forecasting
The forecasting procedure follows Davidson (2004) in employing the one-step or n-step ahead regime probabilities to compute the expected forecasted value.
To forecast from an estimated equation, click on the Forecast button on the equation toolbar
or select Proc/Forecast... from the menu.
EViews will display the standard forecast dialog allowing you to specify a
forecast sample and output series, to
select between dynamic and static
forecasting, to choose whether to
include AR terms, and whether to display a forecast graph and forecast
evaluation.
In standard settings, dynamic and static forecasting differ principally in their handling of lagged dependent variables, with dynamic forecasts using lagged predicted values as regressors where appropriate, while static forecasts use only actual lagged values.
In a switching regression setting, dynamic and static forecasting methods also differ in the construction of regime probabilities. The static forecasts use the observed dependent variable, if available, to filter the regime probabilities in preparation for the next forecast period. Dynamic
forecasts do not use the available data to filter the probabilities.
Transition Results
To save transition probability or expected duration results in the workfile click on Proc/
Make Regime Results/Make Transition Results...
In addition to prompting you to choose
between saving the Transition probabilities
or the Expected durations, you must select
an output format. By default, EViews will
save the results in a group of series in the
workfile. The series names will be formed
using the base name specified in the edit
field, as in TPROB12, TPROB22, etc. for
transitions, and TPROB1, TPROB2, etc. for expected durations.
You may instead elect to save the results in a matrix. In this case, EViews will prompt you
for the name of the matrix and for an observation at which to evaluate the transition matrix
or expected duration. By default, the dialog will be filled with the first observation in the
estimation sample.
Regime Probabilities
To save the regime probabilities, select
Proc/Make Regime Results/Make
Regime Probabilities Group...
Select the type of probability you wish to
compute (One-step-ahead, Filtered, or
Smoothed), and enter the names of series
to hold the results, one for each probability you wish to retain.
Examples
As illustrations of switching regression estimation, we consider three examples: Hamilton's (1989) MSAR(4) specification for post-war U.S. GNP, Kim and Nelson's (1999) example of a time-varying transition probability model of industrial production, and Kim and Nelson's (1999) three state Markov model of regime heteroskedasticity.
Markov Switching AR
Hamilton (1989) specifies a two-state Markov switching model in which the mean growth
rate of GNP is subject to regime switching, and where the errors follow a regime-invariant
AR(4) process. The data for this example, which consist of the series G containing (100 times) the log difference of quarterly U.S. GNP for 1951q1–1984q4, may be found in the workfile GNP_hamilton.WF1.

To estimate the Hamilton model, open a switching equation dialog and enter the specification as depicted below:
The equation specification consists of a two-state Markov switching model with a single
switching mean regressor C and the four non-switching AR terms. The error variance is
assumed to be common across the regimes. The only probability regressor is the constant C
since we have time-invariant regime transition probabilities.
With the exception of the convergence tolerance which we set to 1e-8, we leave the rest of
the settings at their default values. Click on OK to estimate the equation and display the estimation results.
The top portion of the output describes the estimation settings:
Examples465
Dependent Variable: G
Method: Markov Switching Regression (BFGS / Marquardt
steps)
Date: 03/10/15 Time: 16:04
Sample (adjusted): 1952Q2 1984Q4
Included observations: 131 after adjustments
Number of states: 2
Initial probabilities obtained from ergodic solution
Standard errors & covariance computed using observed Hessian
Random search: 25 starting values with 10 iterations using 1 standard
deviation (rng=kn, seed=216937)
Convergence achieved after 36 iterations
(Bear in mind that if you attempt to replicate this estimation using the default settings, you may obtain different results due to the different set of random starting values. You may use the random number generator seed settings to obtain the same starting values.)
The middle section displays the coefficients for the regime specific mean and the invariant
error distribution coefficients. We see, in the differences in the regime specific means, what
Hamilton terms the fast and slow growth rates for the U.S. economy.
Variable                Coefficient    Std. Error     z-Statistic    Prob.

                                    Regime 1
C                         1.163517      0.077218       15.06795      0.0000

                                    Regime 2
C                        -0.358813      0.274116      -1.308983      0.1905

                                    Common
AR(1)                     0.013487      0.124301       0.108504      0.9136
AR(2)                    -0.057521      0.142645      -0.403246      0.6868
AR(3)                    -0.246983      0.110779      -2.229508      0.0258
AR(4)                    -0.212921      0.114528      -1.859115      0.0630
LOG(SIGMA)               -0.262658      0.089928      -2.920758      0.0035
The remaining results show the parameters of the transition matrix and summary statistics
for the estimated equation.
                          Transition Matrix Parameters
P11-C                     2.243457      0.450936       4.975108      0.0000
P21-C                    -1.123682      0.540208      -2.080092      0.0375

Mean dependent var        0.719835      S.D. dependent var        1.066382
S.E. of regression        1.005677      Sum squared resid         125.4118
Durbin-Watson stat        1.923927      Log likelihood           -181.2634
Akaike info criterion     2.904785      Schwarz criterion         3.102317
Hannan-Quinn criter.      2.985051

Inverted AR Roots         .48-.62i      .48+.62i      -.47+.35i     -.47-.35i
Instead of focusing on the transition matrix parameters, we examine the transition matrix
probabilities directly by selecting View/Regime Results/Transition Results... and clicking
on OK to display the default summary view:
Equation: MSAR4
Date: 03/10/15   Time: 16:07
Transition summary: Constant Markov transition
 probabilities and expected durations
Sample (adjusted): 1951Q2 1984Q4
Included observations: 135 after adjustments

Constant transition probabilities:
P(i, k) = P(s(t) = k | s(t-1) = i)
(row = i / column = j)
                  1             2
1                 0.904085      0.095915
2                 0.245329      0.754671

Constant expected durations:
                  1             2
                  10.42590      4.076158
Here, we see the transition probability matrix and the expected durations. Note that there is
considerable state dependence in the transition probabilities with a relatively higher probability of remaining in the origin regime (0.90 for the high output state, 0.75 for the low output state). The corresponding expected durations in a regime are approximately 10.4 and 4.1
quarters, respectively.
Lastly, we display the filtered and full sample estimates of the probabilities of being in the
two regimes. First select View/Regime Results/Regime Probabilities... and choose the filtered results. We will display the results only for the first regime. Then repeat the procedure
choosing the smoothed results.
After saving the two views as graphs, editing the labels, and applying the RecShade add-in
(http://www.eviews.com/Addins/addins.shtml) to label the NBER recessions, we see that
the predicted probabilities of being in the low output state coincide nicely with the commonly employed definition of recessions:
Time-Varying Transitions
Kim and Nelson (1999, p. 93) provide example data for estimating an MSAR(4) model with
time-varying transition probabilities, as in Filardo (1994). Filardo models the log growth rate
of industrial production (DLOGIP) using an MSAR(4) switching mean specification, using
(among other variables) the log growth rate of the Composite Index of Eleven Leading Indicators (DLOGIDX) as a business-cycle predictor.
The Kim and Nelson monthly data for 1948m01 to 1991m04 are included in the workfile kimnelson_tvp.WF1. Note that these data correspond to, but differ slightly from, the data used in Filardo (1994).
We open the switching regression dialog and fill out the Specification tab with the switching
mean AR(4) spec and the switching probability specification:
Note that we have used the lag of the leading indicator variable as our probability regressor so that the period $t$ data for the regressor correspond to the values influencing the transitions from $t-1$ to $t$.
We leave the remaining settings at their defaults and click on OK to estimate the equation.
Following estimation, EViews displays the results:
Variable                Coefficient    Std. Error     z-Statistic    Prob.

                                    Regime 1
C                         0.517304      0.077974       6.634341      0.0000

                                    Regime 2
C                        -0.865887      0.154691      -5.597543      0.0000

                                    Common
AR(1)                     0.189474      0.050923       3.720827      0.0002
AR(2)                     0.079344      0.051670       1.535587      0.1246
AR(3)                     0.110945      0.052516       2.112572      0.0346
AR(4)                     0.122252      0.051106       2.392135      0.0168
LOG(SIGMA)               -0.362469      0.038370      -9.446653      0.0000

                          Transition Matrix Parameters
P11-C                     4.359390      0.755435       5.770703      0.0000
P11-DLOGIDX(-1)           1.770205      0.513537       3.447085      0.0006
P21-C                    -1.649359      0.450407      -3.661931      0.0003
P21-DLOGIDX(-1)           0.994559      0.571838       1.739232      0.0820

Mean dependent var        0.245376      S.D. dependent var        0.878113
S.E. of regression        0.772775      Sum squared resid         303.3684
Durbin-Watson stat        2.059688      Log likelihood           -586.5718
Akaike info criterion     2.320667      Schwarz criterion         2.411319
Hannan-Quinn criter.      2.356194

Inverted AR Roots         .76           -.04+.57i     -.04-.57i     -.50
The results are broadly similar to, but differ slightly from, those reported by Filardo. The coefficients on DLOGIDX(-1) both differ from zero with opposite (statistically significant) signs. As to the transition matrix parameters, we see that increases in the log growth
rate are associated with higher probabilities of being in the high production growth regime,
lowering the transition probability out of regime 1 and increasing the transition probability
from regime 2 into regime 1.
We can examine more directly the transition probabilities. The default transition probability
summary (View/Regime Results/Transition Results...) shows descriptive statistics for the
elements of the transition matrix:
Equation: MSAR4_TVP
Date: 03/10/15   Time: 16:13
Transition summary: Time-varying Markov
 transition probabilities and expected durations
Sample (adjusted): 1948M03 1991M04
Included observations: 518 after adjustments

Time-varying transition probabilities:
P(i, k) = P(s(t) = k | s(t-1) = i)
(row = i / column = j)

Mean
                  1             2
1                 0.951776      0.048224
2                 0.200155      0.799845

Std. Dev.
                  1             2
1                 0.112571      0.112571
2                 0.145082      0.145082

Expected durations:
                  Mean          Std. Dev.
1                 503.5106      2948.119
2                 10.50145      16.90769
We can see the variation in the time-varying probabilities by examining graphs of the transition probabilities for each observation. Select View/Regime Results/Transition Results...
and click on the Transition probabilities radio button. The default view for showing the
probabilities is a graph so you may simply click on OK to show graphs for all four probabilities.
The filtered probabilities of being in the low production regime are presented below, with
NBER recession shading.
To create this graph, select View/Regime Results/Regime Probabilities..., select the Filtered radio button and enter 2 in the Regimes to plot edit field to only show the second
(low production) regime.
Click on OK to display the graph view, then click on the Freeze button to save the view as a
graph object. Select Proc/Add-ins/Add USA Recession Shading in the graph menu (you must first install the RecShade Add-in to use the automatic shading feature; see http://www.eviews.com/Addins/addins.shtml).
Regime Heteroskedasticity
Kim and Nelson (1999) offer an example (Section 4.6, p. 86) of a three state Markov switching model of regime heteroskedastic stock returns from 1926m1–1986m12. The data, which consist of monthly CRSP equal-weighted excess returns, are in the series EXCESS, provided in the workfile ew_excs.WF1.
The specification of the model is as depicted in the dialog below:
The excess returns are assumed to have mean zero so we enter only the name of the dependent variable in the topmost edit field. Since we wish to model regime heteroskedasticity, the Regime specific error variances box is checked. The model assumes Markov switching probabilities with 3 regimes and constant transition probabilities.
Preliminary analysis indicates that this model is particularly difficult to estimate with a
number of local roots exhibiting coefficient singularity.
To obtain estimates we instruct EViews to perform extra randomized starting value estimation. Click on the Options tab and change the starting value settings so EViews generates
200 sets of random starts (instead of 25) with 50 (instead of 10) iteration refinements. Set
the convergence tolerance to 1e-5. You may need to set the random seed as depicted below.
Click on OK to proceed with the estimation.
EViews estimates the model and displays the standard switching regression output:
Dependent Variable: EXCESS
Method: Markov Switching Regression (BFGS / Marquardt
steps)
Date: 03/18/15 Time: 11:25
Sample: 1926M01 1986M12
Included observations: 732
Number of states: 3
Initial probabilities obtained from ergodic solution
Standard errors & covariance computed using observed
Hessian
Random search: 200 starting values with 50 iterations using 1
standard deviation (rng=kn, seed=1727802456)
Convergence achieved after 1 iteration
Variable                Coefficient    Std. Error     z-Statistic    Prob.

                                    Regime 1
LOG(SIGMA)               -3.352112      0.077154      -43.44684      0.0000

                                    Regime 2
LOG(SIGMA)               -1.736500      0.092396      -18.79415      0.0000

                                    Regime 3
LOG(SIGMA)               -2.760187      0.059783      -46.16981      0.0000

                          Transition Matrix Parameters
P11-C                     3.608554      0.700534       5.151145      0.0000
P12-C                    -16.92853      1799.045      -0.009410      0.9925
P21-C                    -2.818798      8.158197      -0.345517      0.7297
P22-C                     3.028018      0.875604       3.458206      0.0005
P31-C                    -3.894605      0.630337      -6.178607      0.0000
P32-C                    -4.417053      0.649727      -6.798325      0.0000

Mean dependent var       -2.44E-18      S.D. dependent var        0.080088
S.E. of regression        0.080198      Sum squared resid         4.688728
Durbin-Watson stat        1.616117      Log likelihood            1001.904
Akaike info criterion    -2.712851      Schwarz criterion        -2.656346
Hannan-Quinn criter.     -2.691054
The results show estimates of the log standard deviations in the low, high, and medium volatility regimes. The implied standard deviations are 0.035, 0.176, and 0.063, respectively.
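These implied values follow directly from exponentiating the LOG(SIGMA) estimates reported above:

$$\hat{\sigma}_1 = e^{-3.352112} \approx 0.035, \qquad \hat{\sigma}_2 = e^{-1.736500} \approx 0.176, \qquad \hat{\sigma}_3 = e^{-2.760187} \approx 0.063 .$$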
Constant transition probabilities:
P(i, k) = P(s(t) = k | s(t-1) = i)
(row = i / column = j)
                  1             2             3
1                 0.973624      2.46E-12      0.026376
2                 0.002748      0.951202      0.046050
3                 0.019712      0.011691      0.968597

Constant expected durations:
                  1             2             3
                  37.91325      20.49264      31.84411
The transition probabilities point to a possible explanation of the difficulty in estimating the
model. The transition probability from the low volatility regime 1 to the high volatility
regime 2 is essentially zero.
We may impose a zero restriction on the transition from the low volatility state to the high volatility state by creating a $3 \times 3$ matrix object RESTR containing:

$$\begin{bmatrix} \text{NA} & 0 & \text{NA} \\ \text{NA} & \text{NA} & \text{NA} \\ \text{NA} & \text{NA} & \text{NA} \end{bmatrix} \qquad (33.25)$$
Next, go to the original equation and make a copy by selecting Object/Copy Object. Click on the Estimate button, go to the Options tab, enter the name of the restriction matrix in the Transition prob restr. matrix edit field, and change the Start method to User Specified. Click on OK to accept the changes and estimate the equation.

The updated results are displayed below:
Dependent Variable: EXCESS
Method: Markov Switching Regression (OPG - BHHH /
Marquardt
steps)
Date: 03/18/15 Time: 11:29
Sample: 1926M01 1986M12
Included observations: 732
Number of states: 3
Fixed probability matrix: RESTR
Initial probabilities obtained from ergodic solution
Standard errors & covariance computed using observed
Hessian
Random search: 200 starting values with 50 iterations using 1
standard deviation (rng=kn, seed=1727802456)
Convergence achieved after 56 iterations
Variable                Coefficient    Std. Error     z-Statistic    Prob.

                                    Regime 1
LOG(SIGMA)               -3.352111      0.077102      -43.47652      0.0000

                                    Regime 2
LOG(SIGMA)               -1.736499      0.092333      -18.80698      0.0000

                                    Regime 3
LOG(SIGMA)               -2.760187      0.059743      -46.20125      0.0000

                          Transition Matrix Parameters
P11-C                     3.608567      0.700107       5.154310      0.0000
P21-C                    -2.818959      8.157176      -0.345580      0.7297
P22-C                     3.027993      0.875130       3.460047      0.0005
P31-C                    -3.894608      0.629907      -6.182827      0.0000
P32-C                    -4.417052      0.649280      -6.802999      0.0000

Mean dependent var       -2.44E-18      S.D. dependent var        0.080088
S.E. of regression        0.080198      Sum squared resid         4.688728
Durbin-Watson stat        1.616117      Log likelihood            1001.904
Akaike info criterion    -2.715584      Schwarz criterion        -2.665356
Hannan-Quinn criter.     -2.696208
Note that the coefficient associated with the restricted probability $p_{12}$ has been removed. The transition probabilities view shows us the updated probabilities with the restricted value for $p_{12}$:
Equation: HETEROSK2
Date: 03/10/15   Time: 16:52
Transition summary: Constant Markov transition probabilities and
 expected durations
Sample: 1926M01 1986M12
Included observations: 732

Constant transition probabilities:
P(i, k) = P(s(t) = k | s(t-1) = i)
(row = i / column = j)
                  1             2             3
1                 0.973624      0.000000      0.026376
2                 0.002748      0.951202      0.046050
3                 0.019712      0.011691      0.968597

Constant expected durations:
                  1             2             3
                  37.91325      20.49264      31.84411
Lastly, we display the smoothed regime probabilities by selecting View/Regime Results/Regime Probabilities..., choosing Smoothed, and clicking on OK to accept the remaining settings. After rearranging, we have:
References
Davidson, James (2004). "Forecasting Markov-switching Dynamic, Conditionally Heteroscedastic Processes," Statistics & Probability Letters, 68, 137–147.
Diebold, Francis X., Lee, Joon-Haeng, and Gretchen C. Weinbach (1994). "Regime Switching with Time-Varying Transition Probabilities," in C. Hargreaves (ed.), Nonstationary Time Series Analysis and Cointegration, Oxford: Oxford University Press, 283–302.
Filardo, Andrew J. (1994). "Business-Cycle Phases and Their Transitional Dynamics," Journal of Business & Economic Statistics, 12, 299–308.
Frühwirth-Schnatter, Sylvia (2006). Finite Mixture and Markov Switching Models, New York: Springer Science + Business Media LLC.
Goldfeld, Stephen M. and Richard E. Quandt (1973). "A Markov Model for Switching Regressions," Journal of Econometrics, 1, 3–16.
Goldfeld, Stephen M. and Richard E. Quandt (1976). Studies in Nonlinear Estimation, Cambridge, MA: Ballinger Publishing Company.
Hamilton, James D. (1989). "A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle," Econometrica, 57, 357–384.
Hamilton, James D. (1990). "Analysis of Time Series Subject to Changes in Regime," Journal of Econometrics, 45, 39–70.
Hamilton, James D. (1994). Time Series Analysis, Chapter 22, Princeton: Princeton University Press.
Hamilton, James D. (1996). "Specification Testing in Markov-switching Time-series Models," Journal of Econometrics, 70, 127–157.
Hansen, B. E. (1992). "The Likelihood Ratio Test Under Nonstandard Conditions: Testing the Markov Switching Model of GNP," Journal of Applied Econometrics, 7, S61–S82.
Kim, Chang-Jin (1994). "Dynamic Linear Models with Markov-Switching," Journal of Econometrics, 60, 1–22.
Kim, Chang-Jin and Charles R. Nelson (1999). State-Space Models With Regime Switching, Cambridge: The MIT Press.
Krolzig, Hans-Martin (1997). Markov-Switching Vector Autoregressions: Modelling, Statistical Inference, and Application to Business Cycle Analysis, Berlin: Springer-Verlag.
Maddala, G. S. (1986). "Disequilibrium, Self-Selection, and Switching Models," Chapter 28 in Z. Griliches and M. D. Intriligator (eds.), Handbook of Econometrics, Volume 3, Amsterdam: North-Holland.
Maheu, John M. and Thomas H. McCurdy (2000). "Identifying Bull and Bear Markets in Stock Returns," Journal of Business & Economic Statistics, 18, 100–112.
Smith, Daniel R. (2008). "Evaluating Specification Tests for Markov-switching Time-series Models," Journal of Time Series Analysis, 29, 629–652.
Specification
The dialog has two pages. The
first page, depicted here, is
used to specify the variables in
the conditional quantile function, the quantile to estimate,
and the sample of observations to use.
You may enter the Equation
specification using a list of
the dependent and regressor
variables, as depicted here, or
you may enter an explicit
expression. Note that if you
enter an explicit expression it must be linear in the coefficients.
The Quantile to estimate edit field is where you will enter your desired quantile. By default,
EViews estimates the median regression as depicted here, but you may enter any value
between 0 and 1 (though values very close to 0 and 1 may cause estimation difficulties).
Here we specify a conditional median function for Y that depends on a constant term and the series X. EViews will compute the LAD estimator for the entire sample of 235 observations.
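For readers who want an independent cross-check of a median regression like this one, the sketch below (illustrative Python, not part of EViews; the data are simulated and the names Y and X simply mirror the text) fits the same kind of conditional median using statsmodels:

import numpy as np
import statsmodels.api as sm

# LAD (median) regression of y on a constant and x via statsmodels QuantReg.
rng = np.random.default_rng(7)
x = rng.uniform(200, 2000, size=235)
y = 80.0 + 0.55 * x + 60.0 * rng.standard_t(df=4, size=235)

X = sm.add_constant(x)
res = sm.QuantReg(y, X).fit(q=0.5)   # q=0.5 requests the conditional median
print(res.params)
print(res.bse)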
Estimation Options
Most of the quantile regression settings are set using this page. The options on the left-hand
side of the page control the method for computing the coefficient covariances, allow you to
specify a weight series for weighted estimation, and specify the method for computing scalar
sparsity estimates.
Iteration Control
The iteration control section offers the standard edit field for changing the maximum number of iterations, a dropdown menu for specifying starting values, and a check box for displaying the estimation settings in the output. Note that the default starting value for quantile regression is 0, but you may choose a fraction of the OLS estimates, or provide a set of user-specified values.
Bootstrap Settings
When you select Bootstrap in the Coefficient Covariance dropdown, the right side of the
dialog changes to offer a set of bootstrap options.
You may use the Method dropdown menu to choose from
one of four bootstrap methods: Residual, XY-pair, MCMB,
MCMB-A. See Bootstrapping, beginning on page 497 for a
discussion of the various methods. The default method is
XY-pair.
Just below the dropdown menu are two edit fields labeled
Replications and No. of obs. By default, EViews will perform 100 bootstrap replications, but you may override this by
entering your desired value. The No. of obs. edit field controls the size of the bootstrap sample. If the edit field is left blank, EViews will draw samples
of the same size as the original data. There is some evidence that specifying a bootstrap
sample size smaller than the original data may produce more accurate results, especially for
very large sample sizes; Koenker (2005, p. 108) provides a brief summary.
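The flavor of the xy-pair method is easy to convey in a few lines. The sketch below (illustrative Python, not the EViews implementation, and it does not cover the MCMB variants) resamples (y, x) rows with replacement, re-estimates the median regression on each draw, and uses the dispersion of the replicated coefficients as a standard error estimate:

import numpy as np
import statsmodels.api as sm

# xy-pair bootstrap standard errors for a quantile regression.
def xypair_bootstrap_se(y, X, q=0.5, reps=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((reps, X.shape[1]))
    for r in range(reps):
        idx = rng.integers(0, n, size=n)      # resample observation pairs
        draws[r] = sm.QuantReg(y[idx], X[idx]).fit(q=q).params
    return draws.std(axis=0, ddof=1)

This could be applied, for example, to the simulated y and X constructed in the earlier quantile regression sketch.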
To save the results of your bootstrap replications in a matrix object, enter the name in the
Output edit field.
The last two items control the generation of random numbers. The Random generator dropdown should be self-explanatory. Simply use the dropdown to choose your desired generator. EViews will initialize the dropdown using the default settings for the choice of generator.
By default, the first time that you perform a bootstrap for a given equation, the Seed edit
field will be blank; you may provide your own integer value, if desired. If an initial seed is
not provided, EViews will randomly select a seed value. The value of this initial seed will be
saved with the equation so that by default, subsequent estimation will employ the same
seed, allowing you to replicate results when re-estimating the equation, and when performing tests. If you wish to use a different seed, simply enter a value in the Seed edit field or
press the Clear button to have EViews draw a new random seed value.
Estimation Output
Once you have provided your quantile regression specification and specified your options,
you may click on OK to estimate your equation. Unless you are performing bootstrapping
with a very large number of observations, the estimation results should be displayed shortly.
Our example uses the Engel dataset containing food expenditure and household income considered by Koenker (2005, p. 78-79, 297-307). The default model estimates the median of
food expenditure Y as a function of a constant term and household income X.
Variable                Coefficient     Std. Error      t-Statistic     Prob.
C                       81.48225        24.03494        3.390158        0.0008
X                       0.560181        0.031370        17.85707        0.0000

Pseudo R-squared        0.620556        Mean dependent var      624.1501
Adjusted R-squared      0.618927        S.D. dependent var      276.4570
S.E. of regression      120.8447        Objective               8779.966
Quantile dependent var  582.5413        Restr. objective        23139.03
Sparsity                209.3504        Quasi-LR statistic      548.7091
Prob(Quasi-LR stat)     0.000000
The top portion of the output displays the estimation settings. Here we see that our estimates use the Huber sandwich method for computing the covariance matrix, with individual sparsity estimates obtained using kernel methods. The bandwidth uses the Hall and
Sheather formula, yielding a value of 0.15744.
Below the header information are the coefficients, along with standard errors, t-statistics and associated p-values. We see that both coefficients are statistically significantly different from zero at conventional levels.
The bottom portion of the output reports the Koenker and Machado (1999) goodness-of-fit measure (pseudo R-squared), an adjusted version of the statistic, and the scalar estimate of the sparsity using the kernel method. Note that this scalar estimate is not used in the computation of the standard errors in this case since we are employing the Huber sandwich method.
Also reported are the minimized value of the objective function (Objective), the minimized constant-only version of the objective (Restr. objective), the constant-only coefficient estimate (Quantile dependent var), and the corresponding L_n(τ) form of the Quasi-LR statistic and its associated probability for the difference between the two specifications (Koenker and Machado, 1999). Note that despite the fact that the coefficient covariances are computed using the robust Huber sandwich, the QLR statistic assumes i.i.d. errors and uses the estimated value of the sparsity.
The reported S.E. of the regression is based on the usual d.f. adjusted sample variance of the residuals. This measure of scale is used in forming standardized residuals and forecast standard errors. It is replaced by the Koenker and Machado (1999) scale estimator in the computation of the L_n(τ) form of the QLR statistics (see Standard Views and Procedures on page 485 and Quasi-Likelihood Ratio Tests on page 499).
We may elect instead to perform bootstrapping to obtain the covariance matrix. Click on the
Estimate button to bring up the dialog, then on Estimation Options to show the options
tab. Select Bootstrap as the Coefficient Covariance, then choose MCMB-A as the bootstrap
method. Next, we increase the number of replications to 500. Lastly, to see the effect of
using a different estimator of the sparsity, we change the scalar sparsity estimation method
to Siddiqui (mean fitted). Click on OK to estimate the specification.
Dependent Variable: Y
Method: Quantile Regression (Median)
Date: 08/12/09   Time: 11:49
Sample: 1 235
Included observations: 235
Bootstrap Standard Errors & Covariance
Bootstrap method: MCMB-A, reps=500, rng=kn, seed=47500547
Sparsity method: Siddiqui using fitted quantiles
Bandwidth method: Hall-Sheather, bw=0.15744
Estimation successfully identifies unique optimal solution
Variable                Coefficient     Std. Error      t-Statistic     Prob.
C                       81.48225        22.01534        3.701158        0.0003
X                       0.560181        0.023804        23.53350        0.0000

Pseudo R-squared        0.620556        Mean dependent var      624.1501
Adjusted R-squared      0.618927        S.D. dependent var      276.4570
S.E. of regression      120.8447        Objective               8779.966
Quantile dependent var  582.5413        Restr. objective        23139.03
Sparsity                267.8284        Quasi-LR statistic      428.9034
Prob(Quasi-LR stat)     0.000000
For the most part the results are quite similar. The header information shows the different
method of computing coefficient covariances and sparsity estimates. The Huber Sandwich
and bootstrap standard errors are reasonably close (24.03 versus 22.02, and 0.031 versus
0.024). There are moderate differences between the two sparsity estimates, with the Siddiqui
estimator of the sparsity roughly 25% higher (267.83 versus 209.35), but this difference has
no substantive impact on the probability of the QLR statistic.
Process Coefficients
You may select View/Quantile Process/Process Coefficients to examine the process coefficients estimated at various quantiles.
The Output section of the
Specification tab is used
to control how the process
results are displayed. By
default, EViews displays
the results as a table of
coefficient estimates, standard errors, t-statistics,
and p-values. You may
instead click on the Graph
radio button and enter the
size of the confidence
interval in the edit field
that appears. The default
is to display a 95% confidence interval.
The Quantile specification section of the page determines the quantiles at which the process will be estimated. By default, EViews will estimate models for each of the deciles (τ = {0.1, 0.2, ..., 0.9}). You may specify a different number of quantiles using
the edit field, or you may select User-specified quantiles and then enter a list of quantiles
or one or more vectors containing quantile values.
The Coefficient specification radio buttons permit you to choose a subset of the coefficients
to display. By default, EViews will produce results for all of the coefficients in your model.
You may select Intercept only to produce results only for the intercept, or you may select
User-specified coefficients and enter a list of coefficient names to show results for specific
coefficients. Entering, for example, C(2) C(3) will produce process results only for the second and third coefficients.
Quantile Slope Equality Test
Equation: UNTITLED
Specification: Y C X

Test Summary          Chi-Sq. Statistic    Chi-Sq. d.f.    Prob.
Wald Test             25.22366             2               0.0000

Quantiles     Variable     Restr. Value     Std. Error     Prob.
0.25, 0.5     X            -0.086077        0.025923       0.0009
0.5, 0.75     X            -0.083834        0.030529       0.0060
The top portion of the output shows the equation specification, and the Wald test summary. Not surprisingly (given the graph of the coefficients above), we see that the χ²-statistic value of 25.22 is statistically significant at conventional test levels. We conclude that the coefficients differ across quantile values and that the conditional quantiles are not identical.
Symmetric Quantiles Test
If the distribution of Y given X is symmetric, then the coefficients at the τ and 1 − τ quantiles satisfy:
\[ \frac{\beta(\tau) + \beta(1-\tau)}{2} = \beta(1/2) \]  (34.1)
To perform the test, select View/Quantile Process/Symmetric Quantiles Test... and fill out
the dialog.
By default, EViews will test for symmetry using the estimated quantile and the quartiles as specified in the dialog. Thus, if the estimated model fits the median, there will be a single set of restrictions: (β(0.25) + β(0.75))/2 = β(0.5). If the estimated model fits the 0.6 quantile, there will be an additional set of restrictions: (β(0.4) + β(0.6))/2 = β(0.5).
As with the other process routines, you may select User-specified quantiles and provide your own values. EViews will estimate a model for both the specified quantile, τ, and its complement 1 − τ, and will compare the results to the median estimates.
If your original model is for a quantile other than the median, you will be offered a third
choice of performing the test using only the estimated quantile. For example, if the model is
fit to the 0.6 quantile, an additional radio button will appear: Estimation quantile only
(0.6). Choosing this form of the test, there will be a single set of restrictions:
(β(0.4) + β(0.6))/2 = β(0.5).
Also, if it is known a priori that the errors are i.i.d., but possibly not symmetrically distributed, one can restrict the null to examine only the restriction associated with the intercept.
To perform this restricted version of the test, simply click on Intercept only in the Test Specification portion of the page. Alternately, you may click on User-specified coefficients and
enter a list of coefficient names (e.g. C(3) C(4)) to perform tests for specific coefficients.
Lastly, you may use the Output page to save the results from the supplementary process
estimation. You may provide a name for the vector of quantiles, the matrix of process coefficients, and the covariance matrix of the coefficients.
The default test of symmetry for the basic median Engel curve specification is given below:
Symmetric Quantiles Test
Equation: UNTITLED
Specification: Y C X
Test statistic compares all coefficients

Test Summary          Chi-Sq. Statistic    Chi-Sq. d.f.    Prob.
Wald Test             0.530024             2               0.7672

Quantiles     Variable     Restr. Value     Std. Error     Prob.
0.25, 0.75    C            -5.084370        34.59898       0.8832
              X            -0.002244        0.045012       0.9602
We see that the test compares estimates at the first and third quartile with the median specification. While earlier we saw strong evidence that the slope coefficients are not constant across quantiles, we now see that there is little evidence of departures from symmetry. The overall p-value for the test is around 0.77, and the individual coefficient restriction test values show even less evidence of asymmetry.
Background
We present here a brief discussion of quantile regression. As always, the discussion is necessarily brief and omits considerable detail. For a book-length treatment of quantile regression
see Koenker (2005).
The Model
Suppose that we have a random variable Y with probability distribution function
\[ F(y) = \mathrm{Prob}(Y \le y) \]  (34.2)
so that for 0 < τ < 1, the τ-th quantile of Y may be defined as the smallest y satisfying F(y) ≥ τ:
\[ Q(\tau) = \inf\{\, y : F(y) \ge \tau \,\} \]  (34.3)
Given a sample of n observations on Y, the empirical distribution function is given by:
\[ F_n(y) = \frac{1}{n}\sum_{i=1}^{n} 1\!\left(Y_i \le y\right) \]  (34.4)
where 1(z) is an indicator function that takes the value 1 if the argument z is true and 0 otherwise. The associated empirical quantile is given by,
\[ Q_n(\tau) = \inf\{\, y : F_n(y) \ge \tau \,\} \]  (34.5)
Equivalently, the empirical quantile may be obtained as the solution to a simple minimization problem:
\[ Q_n(\tau) = \operatorname*{argmin}_{y}\left(\ \sum_{i:\,Y_i \ge y} \tau\,\lvert Y_i - y\rvert \; + \sum_{i:\,Y_i < y} (1-\tau)\,\lvert Y_i - y\rvert \right) = \operatorname*{argmin}_{y} \sum_{i} \rho_\tau\!\left(Y_i - y\right) \]  (34.6)
where ρ_τ(u) = u(τ − 1(u < 0)) is the so-called check function which weights positive and negative values asymmetrically.
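For example, for the median (τ = 0.5) the check function reduces to ρ_{0.5}(u) = |u|/2, so that Q_n(0.5) simply minimizes one half of the sum of absolute deviations Σ_i |Y_i − y|; this is the familiar least absolute deviations (LAD) problem, which is why median regression is also referred to as LAD estimation.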
Quantile regression extends this simple formulation to allow for regressors X. We assume a linear specification for the conditional quantile of the response variable Y given values for the p-vector of explanatory variables X:
\[ Q\!\left(\tau \mid X_i, \beta(\tau)\right) = X_i'\beta(\tau) \]  (34.7)
where β(τ) is the vector of coefficients associated with the τ-th quantile. The quantile regression estimator is then obtained by minimizing the analogous objective function over the sample:
\[ \hat\beta_n(\tau) = \operatorname*{argmin}_{\beta(\tau)} \sum_{i} \rho_\tau\!\left(Y_i - X_i'\beta(\tau)\right) \]  (34.8)
Estimation
The quantile regression estimator can be obtained as the solution to a linear programming
problem. Several algorithms for obtaining a solution to this problem have been proposed in
the literature. EViews uses a modified version of the Koenker and D'Orey (1987) implementation of the Barrodale and Roberts (1973) simplex algorithm.
The Barrodale and Roberts (BR) algorithm has received more than its fair share of criticism
for being computationally inefficient, with dire theoretical results for worst-case scenarios in
problems involving large numbers of observations. Simulations showing poor relative performance of the BR algorithm as compared with alternatives such as interior point methods
appear to bear this out, with estimation times that are roughly quadratic in the number of
observations (Koenker and Hallock, 2001; Portnoy and Koenker, 1997).
Our experience with our optimized version of the BR algorithm is that its performance is certainly better than commonly portrayed. Using various subsets of the low-birthweight data
described in Koenker and Hallock (2001), we find that while certainly not as fast as Cholesky-based linear regression (and possibly not as fast as interior point methods), the estimation times for the modified BR algorithm are quite reasonable.
For example, estimating a 16 explanatory variable model for the median using the first
20,000 observations of the data set takes a bit more than 1.2 seconds on a 3.2GHz Pentium
4, with 1.0Gb of RAM; this time includes both estimation and computation of a kernel based
estimator of the coefficient covariance matrix. The same specification using the full sample
of 198,377 observations takes under 7.5 seconds.
Overall, our experience is that estimation times for the modified BR algorithm are roughly
linear in the number of observations through a broad range of sample sizes. While our
results are not definitive, we see no real impediment to using this algorithm for virtually all
practical problems.
Asymptotic Distributions
Under mild regularity conditions, quantile regression coefficients may be shown to be
asymptotically normally distributed (Koenker, 2005) with different forms of the asymptotic
covariance matrix depending on the model assumptions.
When the errors are i.i.d., the quantile regression estimator has the asymptotic distribution:
\[ \sqrt{n}\left(\hat\beta(\tau) - \beta(\tau)\right) \;\xrightarrow{d}\; N\!\left(0,\ \tau(1-\tau)\, s(\tau)^{2}\, J^{-1}\right) \]  (34.9)
where:
\[ J = \lim_{n\to\infty}\left(\sum_i X_i X_i' / n\right) = \lim_{n\to\infty}\left(X'X / n\right) \]  (34.10)
and s(τ) = 1/f(F⁻¹(τ)) is the sparsity function (the quantile density function), the reciprocal of the density of Y evaluated at the τ-th quantile.
Sparsity Estimation
We have seen the importance of the sparsity function in the formula for the asymptotic covariance matrix of the quantile regression estimates for i.i.d. data. Unfortunately, the sparsity
is a function of the unknown distribution F , and therefore is a nuisance quantity which
must be estimated.
EViews provides three methods for estimating the scalar sparsity s(τ): two Siddiqui (1960) difference quotient methods (Koenker, 1994; Bassett and Koenker, 1982) and one kernel density estimator (Powell, 1986; Jones, 1992; Buchinsky, 1995).
Siddiqui Difference Quotient
The Siddiqui difference quotient approach estimates the sparsity using a simple two-sided numeric derivative of the quantile function:
\[ \hat s(\tau) = \left[\hat F^{-1}(\tau + h_n) - \hat F^{-1}(\tau - h_n)\right] / (2h_n) \]  (34.11)
for some bandwidth h_n tending to zero as the sample size n → ∞. To make this procedure operational we need to determine: (1) how to obtain estimates of the empirical quantile function F̂⁻¹(τ) at the two evaluation points, and (2) what bandwidth to employ.
The first approach to evaluating the quantile functions, which EViews terms Siddiqui
(mean fitted), is due to Bassett and Koenker (1982). The approach involves estimating two
additional quantile regression models for τ − h_n and τ + h_n, and using the estimated coefficients to compute fitted quantiles. Substituting the fitted quantiles into the numeric derivative expression yields:
\[ \hat s(\tau) = X'\left(\hat\beta(\tau + h_n) - \hat\beta(\tau - h_n)\right) / (2h_n) \]  (34.12)
for an arbitrary X. While the i.i.d. assumption implies that X may be set to any value, Bassett and Koenker propose using the mean value of X, noting that the mean possesses two very desirable properties: the precision of the estimate is maximized at that point, and the empirical quantile function is monotone in τ when evaluated at X = X̄, so that ŝ(τ) will always yield a positive value for suitable h_n.
A second, less computationally intensive approach to evaluating the quantile functions computes the τ + h_n and τ − h_n empirical quantiles of the residuals from the original quantile regression equation, as in Koenker (1994). Following Koenker, we compute quantiles for the residuals excluding the p residuals that are set to zero in estimation, and interpolating values to get a piecewise linear version of the quantile function. EViews refers to this method as Siddiqui (residual).
Both Siddiqui methods require specification of a bandwidth h n . EViews offers the Bofinger
(1975), Hall-Sheather (1988), and Chamberlain (1994) bandwidth methods (along with the
ability to specify an arbitrary bandwidth).
The Bofinger bandwidth, which is given by:
\[ h_n = n^{-1/5}\left[\frac{4.5\left(f\!\left(F^{-1}(\tau)\right)\right)^{4}}{\left(2\left(F^{-1}(\tau)\right)^{2} + 1\right)^{2}}\right]^{1/5} \]  (34.13)
(approximately) minimizes the mean square error (MSE) of the sparsity estimates.
Hall-Sheather proposed an alternative bandwidth that is designed specifically for testing. The Hall-Sheather bandwidth is given by:
\[ h_n = n^{-1/3}\, z_\alpha^{2/3}\left[\frac{1.5\left(f\!\left(F^{-1}(\tau)\right)\right)^{2}}{2\left(F^{-1}(\tau)\right)^{2} + 1}\right]^{1/3} \]  (34.14)
where z_α is the standard normal critical value associated with the size α of the test.
The Chamberlain (1994) bandwidth is given by:
\[ h_n = z_\alpha\sqrt{\frac{\tau(1-\tau)}{n}} \]  (34.15)
which is derived using the exact and normal asymptotic confidence intervals for the order statistics (Buchinsky, 1995).
Kernel Density
Kernel density estimators of the sparsity offer an important alternative to the Siddiqui approach. Most of the attention has focused on kernel methods for estimating the derivative of F⁻¹(τ) directly (Falk, 1986; Welsh, 1988), but one may also estimate s(τ) using the inverse of a kernel density function estimator (Powell, 1986; Jones, 1992; Buchinsky, 1995). In the present context, we may compute:
\[ \hat s(\tau) = \left[\,(1/n)\sum_{i=1}^{n} c_n^{-1}\, K\!\left(\hat u_i(\tau)/c_n\right)\right]^{-1} \]  (34.16)
where û_i(τ) are the residuals from the quantile regression fit and K is a kernel function. EViews supports the latter density function approach, which is termed the Kernel (residual) method, since it is closely related to the more commonly employed Powell (1984, 1989) kernel estimator for the non-i.i.d. case described below.
Kernel estimation of the density function requires specification of a bandwidth c_n. We follow Koenker (2005, p. 81) in choosing:
\[ c_n = \kappa\left(\Phi^{-1}(\tau + h_n) - \Phi^{-1}(\tau - h_n)\right) \]  (34.17)
where κ = min(s, IQR/1.34) is the Silverman (1986) robust estimate of scale (with s the sample standard deviation and IQR the interquartile range), Φ⁻¹ is the standard normal quantile function, and h_n is the Siddiqui bandwidth.
Independent, Non-Identical
We may relax the assumption that the quantile density function does not depend on X. The asymptotic distribution of √n(β̂(τ) − β(τ)) in the i.n.i.d. setting takes the Huber sandwich form (see, among others, Hendricks and Koenker, 1992):
\[ \sqrt{n}\left(\hat\beta(\tau) - \beta(\tau)\right) \;\xrightarrow{d}\; N\!\left(0,\ \tau(1-\tau)\, H(\tau)^{-1} J\, H(\tau)^{-1}\right) \]  (34.18)
where:
\[ J = \lim_{n\to\infty} \sum_i X_i X_i' / n \]  (34.19)
and:
\[ H(\tau) = \lim_{n\to\infty} \sum_i X_i X_i' f_i\!\left(q_i(\tau)\right) / n \]  (34.20)
Here f_i(q_i(τ)) is the conditional density function of the response, evaluated at the τ-th conditional quantile for individual i. Note that if the conditional density does not depend on the observation, the Huber sandwich form of the variance in Equation (34.18) reduces to the simple scalar sparsity form given in Equation (34.9).
Computation of a sample analogue to J is straightforward, so we focus on estimation of H(τ). EViews offers a choice of two methods for estimating H(τ): a Siddiqui-type difference method proposed by Hendricks and Koenker (1992), and a Powell (1984, 1989) kernel method based on residuals of the estimated model. EViews labels the first method Siddiqui (mean fitted), and the latter method Kernel (residual).
The Siddiqui-type method proposed by Hendricks and Koenker (1992) is a straightforward generalization of the scalar Siddiqui method (see Siddiqui Difference Quotient, beginning on page 494). As before, two additional quantile regression models are estimated for τ − h_n and τ + h_n, and the estimated coefficients may be used to compute the Siddiqui difference quotient:
\[ \hat f_i\!\left(q_i(\tau)\right) = 2h_n\left[\hat F_i^{-1}(\tau + h_n) - \hat F_i^{-1}(\tau - h_n)\right]^{-1} = 2h_n\left[X_i'\left(\hat\beta(\tau + h_n) - \hat\beta(\tau - h_n)\right)\right]^{-1} \]  (34.21)
Note that in the absence of identically distributed data, the quantile density function f_i(q_i(τ)) must be evaluated for each individual. One minor complication is that Equation (34.21) is not guaranteed to be positive except at X_i = X̄. Accordingly, Hendricks and Koenker modify the expression slightly to use only positive values:
\[ \hat f_i\!\left(q_i(\tau)\right) = \max\!\left(0,\ 2h_n\left[X_i'\left(\hat\beta(\tau + h_n) - \hat\beta(\tau - h_n)\right) - \delta\right]^{-1}\right) \]  (34.22)
where δ is a small positive tolerance included to prevent division by zero.
The estimated quantile densities f̂_i(q_i(τ)) are then used to form an estimator Ĥ_n of H(τ):
\[ \hat H_n = \sum_i \hat f_i\!\left(q_i(\tau)\right) X_i X_i' / n \]  (34.23)
The Powell (1984, 1989) kernel approach replaces the Siddiqui difference with a kernel density estimator using the residuals of the original fitted model:
\[ \hat H_n = (1/n)\sum_i c_n^{-1}\, K\!\left(\hat u_i(\tau)/c_n\right) X_i X_i' \]  (34.24)
where c_n is the kernel bandwidth described above.
Bootstrapping
The direct methods of estimating the asymptotic covariance matrices of the estimates
require the estimation of the sparsity nuisance parameter, either at a single point, or conditionally for each observation. One method of avoiding this cumbersome estimation is to
employ bootstrapping techniques for the estimation of the covariance matrix.
EViews supports four different bootstrap methods: the residual bootstrap (Residual), the design, or XY-pair, bootstrap (XY-pair), and two variants of the Markov Chain Marginal Bootstrap (MCMB and MCMB-A).
The following discussion provides a brief overview of the various bootstrap methods. For additional detail, see Buchinsky (1995), He and Hu (2002), and Kocherginsky, He, and Mu (2005).
Residual Bootstrap
The residual bootstrap is constructed by resampling (with replacement) separately from the residuals û_i(τ) and from the X_i.
Let u* be an m-vector of resampled residuals, and let X* be an m × p matrix of independently resampled X. (Note that m need not be equal to the original sample size n.) We form the dependent variable using the resampled residuals, resampled data, and estimated coefficients, Y* = X*β̂(τ) + u*, and then construct a bootstrap estimate of β(τ) using Y* and X*.
This procedure is repeated for B bootstrap replications, and the estimator of the asymptotic covariance matrix is formed from:
\[ \hat V(\hat\beta) = n\left(\frac{m}{n}\right)\frac{1}{B}\sum_{j=1}^{B}\left(\hat\beta_j(\tau) - \bar\beta(\tau)\right)\left(\hat\beta_j(\tau) - \bar\beta(\tau)\right)' \]  (34.25)
where β̂_j(τ) is the estimate from the j-th replication and β̄(τ) is the mean of the bootstrap estimates. The bootstrap covariance matrix V̂(β̂) is simply a (scaled) estimate of the sample variance of the bootstrap estimates of β(τ).
Note that the validity of using separate draws from û_i(τ) and X_i requires independence of the u and the X.
Goodness-of-Fit
Koenker and Machado (1999) define a goodness-of-fit statistic for quantile regression that is analogous to the R² from conventional regression analysis. We begin by recalling our linear quantile specification, written with the intercept separated from the remaining coefficients:
\[ Q\!\left(\tau \mid X_i, \beta(\tau)\right) = \beta_0(\tau) + X_{i1}'\beta_1(\tau) \]  (34.26)
Define:
\[ \hat V(\tau) = \min_{\beta(\tau)} \sum_i \rho_\tau\!\left(Y_i - \beta_0(\tau) - X_{i1}'\beta_1(\tau)\right), \qquad \tilde V(\tau) = \min_{\beta_0(\tau)} \sum_i \rho_\tau\!\left(Y_i - \beta_0(\tau)\right) \]  (34.27)
the minimized unrestricted and intercept-only objective functions. The Koenker and Machado goodness-of-fit criterion is given by:
\[ R^{1}(\tau) = 1 - \hat V(\tau) / \tilde V(\tau) \]  (34.28)
Quasi-Likelihood Ratio Tests
Koenker and Machado (1999) also define a pair of quasi-likelihood ratio (QLR) statistics based on the difference between the restricted and unrestricted objective functions:
\[ L_n(\tau) = \frac{2\left(\tilde V(\tau) - \hat V(\tau)\right)}{\tau(1-\tau)\, s(\tau)}, \qquad \Lambda_n(\tau) = \frac{2\,\tilde V(\tau)}{\tau(1-\tau)\, s(\tau)}\,\log\!\left(\tilde V(\tau)/\hat V(\tau)\right) \]  (34.29)
which are both asymptotically distributed as χ² with q degrees of freedom, where q is the number of restrictions imposed by the null hypothesis.
You should note the presence of the sparsity term s(τ) in the denominator of both expressions. Any of the sparsity estimators outlined in Sparsity Estimation, on page 493, may be employed for either the null or alternative specifications; EViews uses the sparsity estimated under the alternative. The presence of s(τ) should be a tipoff that these test statistics require that the quantile density function does not depend on X, as in the pure location-shift model.
Note that EViews will always compute an estimate of the scalar sparsity, even when you
specify a Huber sandwich covariance method. This value of the sparsity will be used to
compute QLR test statistics which may be less robust than the corresponding Wald counterparts.
Coefficient Tests
Given estimates of the asymptotic covariance matrix for the quantile regression estimates,
you may construct Wald-type tests of hypotheses and construct coefficient confidence
ellipses as in Coefficient Diagnostics, beginning on page 164.
More generally, we may construct tests involving coefficients estimated at different quantiles. Stack the coefficient estimates for K distinct quantiles in the vector:
\[ B = \left(\beta(\tau_1)',\ \beta(\tau_2)',\ \ldots,\ \beta(\tau_K)'\right)' \]  (34.30)
Then
\[ \sqrt{n}\left(\hat B - B\right) \;\xrightarrow{d}\; N(0,\ \Omega) \]  (34.31)
where the blocks of the covariance matrix are given by:
\[ \Omega_{ij} = \left[\min(\tau_i, \tau_j) - \tau_i\tau_j\right]\, H(\tau_i)^{-1}\, J\, H(\tau_j)^{-1} \]  (34.32)
In the i.i.d. setting this simplifies to:
\[ \Omega = \Omega_0 \otimes J^{-1} \]  (34.33)
where Ω₀ has representative element:
\[ \omega_{ij} = \frac{\min(\tau_i, \tau_j) - \tau_i\tau_j}{f\!\left(F^{-1}(\tau_i)\right)\, f\!\left(F^{-1}(\tau_j)\right)} \]  (34.34)
Estimation of Ω may be performed directly using (34.32), (34.33) and (34.34), or using one of the bootstrap variants.
Slope Equality Testing
The slope equality test, for example, evaluates the null hypothesis that the slope (non-intercept) coefficients are identical across the K quantiles:
\[ H_0:\ \beta_1(\tau_1) = \beta_1(\tau_2) = \cdots = \beta_1(\tau_K) \]  (34.35)
Symmetry Testing
Newey and Powell (1987) construct a test of the less restrictive hypothesis of symmetry, for
asymmetric least squares estimators, but the approach may easily be applied to the quantile
regression case.
The premise of the Newey and Powell test is that if the distribution of Y given X is symmetric, then:
\[ \frac{\beta(\tau) + \beta(1-\tau)}{2} = \beta(1/2) \]  (34.36)
We may evaluate this restriction using Wald tests on the quantile process. Suppose that there are an odd number, K, of sets of estimated coefficients ordered by τ_k. The middle value τ_{(K+1)/2} is assumed to be equal to 0.5, and the remaining τ are symmetric around 0.5, with τ_j = 1 − τ_{K−j+1} for j = 1, ..., (K−1)/2. Then the Newey and Powell test null is the joint hypothesis that:
\[ H_0:\ \frac{\beta(\tau_j) + \beta(\tau_{K-j+1})}{2} = \beta(1/2) \]  (34.37)
for j = 1, ..., (K−1)/2.
The Wald test is formed in the usual fashion from the restrictions, which are zero under the null hypothesis of symmetry. The null imposes p(K−1)/2 restrictions, so the Wald statistic is asymptotically distributed as a χ² with p(K−1)/2 degrees of freedom. Newey and Powell point out that if it is known a priori that the errors are i.i.d., but possibly asymmetric, one can restrict the null to examine only the restriction associated with the intercept. This restricted null imposes only (K−1)/2 restrictions on the process coefficients.
References
Barrodale, I. and F. D. K. Roberts (1974). Solution of an Overdetermined System of Equations in the l1 Norm, Communications of the ACM, 17(6), 319-320.
Bassett, Gilbert Jr. and Roger Koenker (1982). An Empirical Quantile Function for Linear Models with
i.i.d. Errors, Journal of the American Statistical Association, 77(378), 407-415.
Bofinger, E. (1975). Estimation of a Density Function Using Order Statistics, Australian Journal of Statistics, 17, 1-7.
Buchinsky, M. (1995). Estimating the Asymptotic Covariance Matrix for Quantile Regression Models: A
Monte Carlo Study, Journal of Econometrics, 68, 303-338.
Chamberlain, Gary (1994). Quantile Regression, Censoring and the Structure of Wages, in Advances in
Econometrics, Christopher Sims, ed., New York: Elsevier, 171-209.
Falk, Michael (1986). On the Estimation of the Quantile Density Function, Statistics & Probability Letters, 4, 69-73.
Hall, Peter and Simon J. Sheather (1988). On the Distribution of the Studentized Quantile, Journal of the Royal Statistical Society, Series B, 50(3), 381-391.
He, Xuming and Feifang Hu (2002). Markov Chain Marginal Bootstrap, Journal of the American Statistical Association, 97(459), 783-795.
Hendricks, Wallace and Roger Koenker (1992). Hierarchical Spline Models for Conditional Quantiles and
the Demand for Electricity, Journal of the American Statistical Association, 87(417), 58-68.
Jones, M. C. (1992). Estimating Densities, Quantiles, Quantile Densities and Density Quantiles, Annals
of the Institute of Statistical Mathematics, 44(4), 721-727.
Kocherginsky, Masha, Xuming He, and Yunming Mu (2005). Practical Confidence Intervals for Regression Quantiles, Journal of Computational and Graphical Statistics, 14(1), 41-55.
Koenker, Roger (1994). Confidence Intervals for Regression Quantiles, in Asymptotic Statistics, P. Mandl and M. Huskova, eds., New York: Springer-Verlag, 349-359.
Koenker, Roger (2005). Quantile Regression. New York: Cambridge University Press.
Koenker, Roger and Gilbert Bassett, Jr. (1978). Regression Quantiles, Econometrica, 46(1), 33-50.
Koenker, Roger and Gilbert Bassett, Jr. (1982a). Robust Tests for Heteroskedasticity Based on Regression
Quantiles, Econometrica, 50(1), 43-62.
Koenker, Roger and Gilbert Bassett, Jr. (1982b). Tests of Linear Hypotheses and l1 Estimation, Econometrica, 50(6), 1577-1584.
Koenker, Roger W. and Vasco D'Orey (1987). Algorithm AS 229: Computing Regression Quantiles, Applied Statistics, 36(3), 383-393.
Koenker, Roger and Kevin F. Hallock (2001). Quantile Regression, Journal of Economic Perspectives,
15(4), 143-156.
Koenker, Roger and Jose A. F. Machado (1999). Goodness of Fit and Related Inference Processes for
Quantile Regression, Journal of the American Statistical Association, 94(448), 1296-1310.
Newey, Whitney K., and James L. Powell (1987). Asymmetric Least Squares Estimation, Econometrica,
55(4), 819-847.
Portnoy, Stephen and Roger Koenker (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators, Statistical Science, 12(4), 279-300.
Powell, J. (1984). Least Absolute Deviations Estimation for the Censored Regression Model, Journal of
Econometrics, 25, 303-325.
Powell, J. (1986). Censored Regression Quantiles, Journal of Econometrics, 32, 143-155.
Powell, J. (1989). Estimation of Monotonic Regression Models Under Quantile Restrictions, in Non-parametric and Semiparametric Methods in Econometrics, W. Barnett, J. Powell, and G. Tauchen, eds.,
Cambridge: Cambridge University Press.
Siddiqui, M. M. (1960). Distribution of Quantiles in Samples from a Bivariate Population, Journal of Research of the National Bureau of Standards-B, 64(3), 145-150.
Silverman, B. W. (1986). Density Estimation for Statistics and Data Analysis, London: Chapman & Hall.
Welsh, A. H. (1988). Asymptotically Efficient Estimation of the Sparsity Function at a Point, Statistics &
Probability Letters, 6, 427-432.
Overview
Most of the work in estimating a model using the logl object is in creating the text specification which will be used to evaluate the likelihood function.
If you are familiar with the process of generating series in EViews, you should find it easy to
work with the logl specification, since the likelihood specification is merely a list of series
assignment statements which are evaluated iteratively during the course of the maximization procedure. All you need to do is write down a set of statements which, when evaluated,
will describe a series containing the contributions of each observation to the log likelihood
function.
To take a simple example, suppose you believe that your data are generated by the conditional heteroskedasticity regression model:
\[ y_t = \beta_1 + \beta_2 x_t + \beta_3 z_t + \epsilon_t, \qquad \epsilon_t \sim N\!\left(0,\ \sigma^2 z_t^{\alpha}\right) \]  (35.1)
where x, y, and z are the observed series (data) and β₁, β₂, β₃, σ, α are the parameters of the model. The log likelihood function (the log of the density of the observed data) for a sample of T observations can be written as:
\[ l(\beta, \alpha, \sigma) = -\frac{T}{2}\left(\log(2\pi) + \log\sigma^2\right) - \frac{\alpha}{2}\sum_{t=1}^{T}\log(z_t) - \frac{1}{2}\sum_{t=1}^{T}\frac{\left(y_t - \beta_1 - \beta_2 x_t - \beta_3 z_t\right)^2}{\sigma^2 z_t^{\alpha}} = \sum_{t=1}^{T}\left[\log\phi\!\left(\frac{y_t - \beta_1 - \beta_2 x_t - \beta_3 z_t}{\sigma z_t^{\alpha/2}}\right) - \frac{1}{2}\log\!\left(\sigma^2 z_t^{\alpha}\right)\right] \]  (35.2)
where φ is the standard normal density function. Written in terms of the individual observations, the likelihood is the sum of the contributions:
\[ l(\beta, \alpha, \sigma) = \sum_{t=1}^{T} l_t(\beta, \alpha, \sigma) \]  (35.3)
where:
\[ l_t(\beta, \alpha, \sigma) = \log\phi\!\left(\frac{y_t - \beta_1 - \beta_2 x_t - \beta_3 z_t}{\sigma z_t^{\alpha/2}}\right) - \frac{1}{2}\log\!\left(\sigma^2 z_t^{\alpha}\right) \]  (35.4)
Suppose that you know the true parameter values of the model, and you wish to generate a
series in EViews which contains the contributions for each observation. To do this, you
could assign the known values of the parameters to the elements C(1) to C(5) of the coefficient vector, and then execute the following list of assignment statements as commands or
in an EViews program:
series res = y - c(1) - c(2)*x - c(3)*z
series var = c(4) * z^c(5)
series logl1 = log(@dnorm(res/@sqrt(var))) - log(var)/2
The first two statements describe series which will contain intermediate results used in the
calculations. The first statement creates the residual series, RES, and the second statement
creates the variance series, VAR. The series LOGL1 contains the set of log likelihood contributions for each observation.
Now suppose instead that you do not know the true parameter values of the model, and
would like to estimate them from the data. The maximum likelihood estimates of the parameters are defined as the set of parameter values which produce the largest value of the likelihood function evaluated across all the observations in the sample.
The logl object makes finding these maximum likelihood estimates easy. Simply create a
new log likelihood object, input the assignment statements above into the logl specification
view, then ask EViews to estimate the specification.
In entering the assignment statements, you need only make two minor changes to the text
above. First, the series keyword must be removed from the beginning of each line (since
the likelihood specification implicitly assumes it is present). Second, an extra line must be
added to the specification which identifies the name of the series in which the likelihood
contributions will be contained. Thus, you should enter the following into your log likelihood object:
@logl logl1
res = y - c(1) - c(2)*x - c(3)*z
var = c(4) * z^c(5)
logl1 = log(@dnorm(res/@sqrt(var))) - log(var)/2
The first line in the log likelihood specification, @logl logl1, tells EViews that the series
LOGL1 should be used to store the likelihood contributions. The remaining lines describe
the computation of the intermediate results, and the actual likelihood contributions.
When you tell EViews to estimate the parameters of this model, it will execute the assignment statements in the specification repeatedly for different parameter values, using an iterative algorithm to search for the set of values that maximize the sum of the log likelihood
contributions. When EViews can no longer improve the overall likelihood, it will stop iterating and will report final parameter values and estimated standard errors in the estimation
output.
The remainder of this chapter discusses the rules for specification, estimation and testing
using the likelihood object in greater detail.
Specification
To create a likelihood object, choose Object/New Object/LogL or type the keyword logl
in the command window. The likelihood window will open with a blank specification view.
The specification view is a text window into which you enter a list of statements which
describe your statistical model, and in which you set options which control various aspects
of the estimation procedure.
The only required element of the likelihood specification is a control statement of the form:
@logl series_name
where series_name is the name of the series which will contain the likelihood contributions. This control statement may appear anywhere in the logl specification.
Whenever the specification is evaluated, whether for estimation or for carrying out a View
or Proc, each assignment statement will be evaluated at the current parameter values, and
the results stored in a series with the specified name. If the series does not exist, it will be
created automatically. If the series already exists, EViews will use the existing series for storage, and will overwrite the data contained in the series.
If you would like to remove one or more of the series used in the specification after evaluation, you can use the @temp statement, as in:
@temp series_name1 series_name2
This statement tells EViews to delete any series in the list after evaluation of the specification is completed. Deleting these series may be useful if your logl creates a lot of intermediate results, and you do not want the series containing these results to clutter your workfile.
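For instance, in the conditional heteroskedasticity example above, the intermediate series RES and VAR could be removed after evaluation by adding a line along the following lines to the specification (a minimal illustration using the series names from that example):
@temp res var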
Parameter Names
In the example above, we used the coefficients C(1) to C(5) as names for our unknown
parameters. More generally, any element of a named coefficient vector which appears in the
specification will be treated as a parameter to be estimated.
In the conditional heteroskedasticity example, you might choose to use coefficients from three different coefficient vectors: one vector for the mean equation coefficients, one for the variance scale parameter, and one for the variance power parameter. You would first create the three named coefficient vectors with the commands:
coef(3) beta
coef(1) scale
coef(1) alpha
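Using these vectors, the specification of the example might then be rewritten along the following lines (a sketch that simply mirrors the earlier C-vector version with the new parameter names):
@logl logl1
res = y - beta(1) - beta(2)*x - beta(3)*z
var = scale(1) * z^alpha(1)
logl1 = log(@dnorm(res/@sqrt(var))) - log(var)/2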
Since all elements of named coefficient vectors in the specification will be treated as parameters, you should make certain that all coefficients really do affect the value of one or more
of the likelihood contributions. If a parameter has no effect upon the likelihood, you will
experience a singularity error when you attempt to estimate the parameters.
Note that all objects other than coefficient elements will be considered fixed and will not be
updated during estimation. For example, suppose that SIGMA is a named scalar in your
workfile. Then if you redefine the subexpression for VAR as:
var = sigma*z^alpha(1)
EViews will not estimate SIGMA. The value of SIGMA will remain fixed at its value at the
start of estimation.
Order of Evaluation
The logl specification contains one or more assignment statements which generate the series
containing the likelihood contributions. EViews always evaluates from top to bottom when
executing these assignment statements, so expressions which are used in subsequent calculations should always be placed first.
EViews must also iterate through the observations in the sample. Since EViews iterates
through both the equations in the specification and the observations in the sample, you will
need to specify the order in which the evaluation of observations and equations occurs.
By default, EViews evaluates the specification by observation so that all of the assignment
statements are evaluated for the first observation, then for the second observation, and so on
across all the observations in the estimation sample. This is the correct order for recursive
models where the likelihood of an observation depends on previously observed (lagged) values, as in AR or ARCH models.
You can change the order of evaluation so EViews evaluates the specification by equation, so
the first assignment statement is evaluated for all the observations, then the second assignment statement is evaluated for all the observations, and so on for each of the assignment
statements in the specification. This is the correct order for models where aggregate statistics from intermediate series are used as input to subsequent calculations.
You can control the method of evaluation by adding a statement to the likelihood specification. To force evaluation by equation, simply add a line containing the keyword @byeqn.
To explicitly state that you require evaluation by observation, the @byobs keyword can be
used. If no keyword is provided, @byobs is assumed.
In the conditional heteroskedasticity example above, it does not matter whether the assignment statements are evaluated by equation (line by line) or by observation, since the results
do not depend upon the order of evaluation.
However, if the specification has a recursive structure, or if the specification requires the calculation of aggregate statistics based on intermediate series, you must select the appropriate
evaluation order if the calculations are to be carried out correctly.
As an example of the @byeqn statement, consider the following specification:
@logl robust1
@byeqn
res1 = y-c(1)-c(2)*x
delta = @abs(res1)/6/@median(@abs(res1))
weight = (delta<1)*(1-delta^2)^2
robust1 = -(weight*res1^2)
Analytic Derivatives
By default, when maximizing the likelihood and forming estimates of the standard errors,
EViews computes numeric derivatives of the likelihood function with respect to the parameters. If you would like to specify an analytic expression for one or more of the derivatives,
you may use the @deriv statement. The @deriv statement has the form:
@deriv pname1 sname1 pname2 sname2
where pname is a parameter in the model and sname is the name of the corresponding
derivative series generated by the specification.
For example, consider the following likelihood object that specifies a multinomial logit
model:
' multinomial logit with 3 outcomes
@logl logl1
xb2 = b2(1)+b2(2)*x1+b2(3)*x2
xb3 = b3(1)+b3(2)*x1+b3(3)*x2
denom = 1+exp(xb2)+exp(xb3)
See Greene (2008), Chapter 23.11.1 for a discussion of multinomial logit models. There are
three possible outcomes, and the parameters of the three regressors (X1, X2 and the constant) are normalized relative to the first outcome. The analytic derivatives are particularly
simple for the multinomial logit model and the two @deriv statements in the specification
instruct EViews to use the expressions for GRAD21, GRAD22, GRAD23, GRAD31, GRAD32,
and GRAD33, instead of computing numeric derivatives.
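For reference, a complete specification along the lines described here might read as follows. This sketch fills in the probability, likelihood, and derivative lines using the expressions given in the extended example later in this chapter, and assumes that DD1, DD2, and DD3 are 0/1 indicator series for the three outcomes:
' multinomial logit with 3 outcomes
@logl logl1
xb2 = b2(1)+b2(2)*x1+b2(3)*x2
xb3 = b3(1)+b3(2)*x1+b3(3)*x2
denom = 1+exp(xb2)+exp(xb3)
' probabilities for each outcome
pr1 = 1/denom
pr2 = exp(xb2)/denom
pr3 = exp(xb3)/denom
' log likelihood contributions
logl1 = dd1*log(pr1)+dd2*log(pr2)+dd3*log(pr3)
' analytic derivatives
@deriv b2(1) grad21 b2(2) grad22 b2(3) grad23
@deriv b3(1) grad31 b3(2) grad32 b3(3) grad33
grad21 = dd2-pr2
grad22 = grad21*x1
grad23 = grad21*x2
grad31 = dd3-pr3
grad32 = grad31*x1
grad33 = grad31*x2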
When working with analytic derivatives, you may wish to check the validity of your expressions for the derivatives by comparing them with numerically computed derivatives. EViews
provides you with tools which will perform this comparison at the current values of parameters or at the specified starting values. See the discussion of the Check Derivatives view of
the likelihood object in Check Derivatives on page 515.
Derivative Step Sizes
When evaluating numeric derivatives, EViews uses a step size that is updated at each iteration. If v^(i) denotes the value of a parameter at iteration i, the step size used at the next iteration is:
\[ s^{(i+1)} = \max\!\left(r\,v^{(i)},\ m\right) \]  (35.5)
where r is the relative step size and m is the minimum step size.
The two-sided numeric derivative is evaluated as:
\[ \frac{f\!\left(v^{(i)} + s^{(i+1)}\right) - f\!\left(v^{(i)} - s^{(i+1)}\right)}{2\,s^{(i+1)}} \]  (35.6)
while the one-sided numeric derivative is evaluated as:
\[ \frac{f\!\left(v^{(i)} + s^{(i+1)}\right) - f\!\left(v^{(i)}\right)}{s^{(i+1)}} \]  (35.7)
where f is the likelihood function. Two-sided derivatives are more accurate, but require
roughly twice as many evaluations of the likelihood function and so take about twice as long
to evaluate.
The @derivstep statement can be used to control the step size and method used to evaluate the derivative at each iteration. The @derivstep keyword should be followed by sets of
three arguments: the name of the parameter to be set (or the keyword @all), the relative
step size, and the minimum step size.
The default setting is (approximately):
@derivstep(1) @all 1.49e-8 1e-10
where 1 in the parentheses indicates that one-sided numeric derivatives should be used and @all indicates that the following setting applies to all of the parameters. The first number following @all is the relative step size and the second number is the minimum step size. The default relative step size is set to the square root of machine epsilon (r = 1.49 × 10⁻⁸) and the minimum step size is set to m = 10⁻¹⁰.
The step size can be set separately for each parameter in a single or in multiple @derivstep
statements. The evaluation method option specified in parentheses is a global option; it cannot be specified separately for each parameter.
For example, if you include the line:
@derivstep(2) c(2) 1e-7 1e-10
the relative step size for coefficient C(2) will be increased to 10⁻⁷ and a two-sided derivative will be used to evaluate the derivative. In a more complex example,
@derivstep(2) @all 1.49e-8 1e-10 c(2) 1e-7 1e-10 c(3) 1e-5 1e-8
computes two-sided derivatives using the default step sizes for all coefficients except C(2)
and C(3). The values for these latter coefficients are specified directly.
Estimation
Once you have specified the logl object, you can ask EViews to find the parameter values which maximize the likelihood function. Simply click the Estimate button in the likelihood window toolbar to open the Estimation Options dialog.
Starting Values
Since EViews uses an iterative algorithm to find the maximum likelihood estimates, the
choice of starting values is important. For problems in which the likelihood function is globally concave, it will influence how many iterations are taken for estimation to converge. For
problems where the likelihood function is not concave, it may determine which of several
local maxima is found. In some cases, estimation will fail unless reasonable starting values
are provided.
By default, EViews uses the values stored in the coefficient vector or vectors prior to estimation. If a @param statement is included in the specification, the values specified in the statement will be used instead.
In our conditional heteroskedasticity regression example, one choice of starting values for the mean equation coefficients is the set of simple OLS estimates, since OLS remains consistent for the mean parameters under this form of heteroskedasticity.
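One way to obtain these values is to first run the corresponding OLS regression. A minimal sketch (the equation name EQ1 matches the object referenced below; the workfile is assumed to contain the series Y, X, and Z) might be:
equation eq1.ls y c x z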
After estimating this equation, the elements C(1), C(2), C(3) of the C coefficient vector will
contain the OLS estimates. To set the variance scale parameter C(4) to the estimated OLS
residual variance, you can type the assignment statement in the command window:
c(4) = eq1.@se^2
For the final heteroskedasticity parameter C(5), you can use the residuals from the original
OLS regression to carry out a second OLS regression, and set the value of C(5) to the appropriate coefficient. Alternatively, you can arbitrarily set the parameter value using a simple
assignment statement:
c(5) = 1
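If you prefer the regression-based approach for C(5), one possibility is to regress the log of the squared OLS residuals on a constant and log(z), since log(σ²z^α) = log σ² + α log z. The following is a rough sketch; the makeresid proc and the @coefs data member are standard equation tools, but the series and equation names are illustrative, and note that estimating EQ2 overwrites the default C vector, so the earlier starting values are refreshed afterward:
eq1.makeresid ols_res
equation eq2.ls log(ols_res^2) c log(z)
' store the slope as the heteroskedasticity parameter
c(5) = eq2.@coefs(2)
' restore the mean and scale starting values from EQ1
c(1) = eq1.@coefs(1)
c(2) = eq1.@coefs(2)
c(3) = eq1.@coefs(3)
c(4) = eq1.@se^2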
Now, if you estimate the logl specification immediately after carrying out the OLS estimation
and subsequent commands, it will use the values that you have placed in the C vector as
starting values.
As noted above, an alternative method of initializing the parameters to known values is to
include a @param statement in the likelihood specification. For example, if you include the
line:
@param c(1) 0.1 c(2) 0.1 c(3) 0.1 c(4) 1 c(5) 1
in the specification of the logl, EViews will always set the starting values to
C(1)=C(2)=C(3)=0.1, C(4)=C(5)=1.
See also the discussion of starting values in Starting Coefficient Values on page 1007.
Estimation Sample
EViews uses the sample of observations specified in the Estimation Options dialog when
estimating the parameters of the log likelihood. EViews evaluates each expression in the logl
for every observation in the sample at current parameter values, using the by observation or
by equation ordering. All of these evaluations follow the standard EViews rules for evaluating series expressions.
If there are missing values in the log likelihood series at the initial parameter values, EViews
will issue an error message and the estimation procedure will stop. In contrast to the behavior of other EViews built-in procedures, logl estimation performs no endpoint adjustments or
dropping of observations with missing values when estimating the parameters of the model.
LogL Views
Likelihood Specification: displays the window where you specify and edit the likelihood specification.
Estimation Output: displays the estimation results obtained from maximizing the
likelihood function.
Covariance Matrix: displays the estimated covariance matrix of the parameter estimates. These are computed from the inverse of the sum of the outer product of the
first derivatives evaluated at the optimum parameter values. To save this covariance
matrix as a symmetric matrix object, you may use the @coefcov data member.
Wald Coefficient Tests: performs the Wald coefficient restriction test. See Wald
Test (Coefficient Restrictions) on page 170, for a discussion of Wald tests.
Gradients: displays view of the gradients (first derivatives) of the log likelihood at the
current parameter values (if the model has not yet been estimated), or at the converged parameter values (if the model has been estimated). These views may prove to
be useful diagnostic tools if you are experiencing problems with convergence.
Check Derivatives: displays the values of the numeric derivatives and analytic derivatives (if available) at the starting values (if a @param statement is included), or at
current parameter values (if there is no @param statement).
LogL Procs
Estimate: brings up a dialog to set estimation options, and to estimate the parameters of the log likelihood.
Make Model: creates an untitled model object out of the estimated likelihood specification.
Make Gradient Group: creates an untitled group of the gradients (first derivatives) of
the log likelihood at the estimated parameter values. These gradients are often used in
constructing Lagrange multiplier tests.
Update Coefs from LogL: updates the coefficient vector(s) with the estimates from
the likelihood object. This procedure allows you to export the maximum likelihood
estimates for use as starting values in other estimation problems.
Most of these procedures should be familiar to you from other EViews estimation objects.
We describe below the features that are specific to the logl object.
Estimation Output
In addition to the coefficient and standard error estimates, the standard output for the logl
object describes the method of estimation, sample used in estimation, date and time that the
logl was estimated, evaluation order, information about the convergence of the estimation
procedure, and the method used to compute the coefficient covariance matrix.
LogL: MLOGIT
Method: Maximum Likelihood (BFGS / Marquardt steps)
Date: 03/10/15   Time: 21:47
Sample: 1 1000
Included observations: 1000
Evaluation order: By equation
Convergence achieved after 22 iterations
Coefficient covariance computed using outer product of gradients

                Coefficient     Std. Error      z-Statistic     Prob.
B2(1)           -0.521793       0.205568        -2.538302       0.0111
B2(2)           0.994358        0.267963        3.710798        0.0002
B2(3)           0.134983        0.265655        0.508115        0.6114
B3(1)           -0.262307       0.207174        -1.266122       0.2055
B3(2)           0.176770        0.274756        0.643371        0.5200
B3(3)           0.399166        0.274056        1.456511        0.1453

Log likelihood        -1089.415      Akaike info criterion     2.190830
Avg. log likelihood   -1.089415      Schwarz criterion         2.220277
Number of Coefs.      6              Hannan-Quinn criter.      2.202022
EViews also provides the log likelihood value, average log likelihood value, number of coefficients, and three Information Criteria. By default, the starting values are not displayed.
Here, we have used the Estimation Options dialog to instruct EViews to display the estimation starting values in the output.
Gradients
The gradient summary table and gradient summary graph view allow you to examine the
gradients of the likelihood. These gradients are computed at the current parameter values (if
the model has not yet been estimated), or at the converged parameter values (if the model
has been estimated). See Appendix D. Gradients and Derivatives, on page 1019 for additional details.
Check Derivatives
You can use the Check Derivatives view to examine your numeric derivatives or to check
the validity of your expressions for the analytic derivatives. If the logl specification contains
a @param statement, the derivatives will be evaluated at the specified values, otherwise, the
derivatives will be computed at the current coefficient values.
Consider the derivative view
for coefficients estimated
using the logl specification.
The first part of this view displays the names of the user
supplied derivatives, step
size parameters, and the
coefficient values at which
the derivatives are evaluated.
The relative and minimum
step sizes shown in this
example are the default settings.
The second part of the view
computes the sum (over all
individuals in the sample) of
the numeric and, if applicable, the analytic derivatives for each coefficient. If appropriate, EViews will also compute the
largest individual difference between the analytic and the numeric derivatives in both absolute, and percentage terms.
Troubleshooting
Because the logl object provides a great deal of flexibility, you are more likely to experience
problems with estimation using the logl object than with EViews built-in estimators.
If you are experiencing difficulties with estimation the following suggestions may help you
in solving your problem:
Check your likelihood specification. A simple error involving a wrong sign can easily stop the estimation process from working. You should also verify that the parameters of the model are really identified (in some specifications you may have to impose
a normalization across the parameters). Also, every parameter which appears in the
model must feed directly or indirectly into the likelihood contributions. The Check
Derivatives view is particularly useful in helping you spot the latter problem.
Choose your starting values. If any of the likelihood contributions in your sample
cannot be evaluated due to missing values or because of domain errors in mathematical operations (logs and square roots of negative numbers, division by zero, etc.) the
estimation will stop immediately with the message: Cannot compute @logl due to
missing values. In other cases, a bad choice of starting values may lead you into
regions where the likelihood function is poorly behaved. You should always try to initialize your parameters to sensible numerical values. If you have a simpler estimation
technique available which approximates the problem, you may wish to use estimates
from this method as starting values for the maximum likelihood specification.
Make sure lagged values are initialized correctly. In contrast to most other estimation routines in EViews, the logl estimation procedure will not automatically drop
observations with NAs or lags from the sample when estimating a log likelihood
model. If your likelihood specification involves lags, you will either have to drop
observations from the beginning of your estimation sample, or you will have to carefully code the specification so that missing values from before the sample do not
cause NAs to propagate through the entire sample (see the AR(1) and GARCH examples for a demonstration).
Since the series used to evaluate the likelihood are contained in your workfile (unless you
use the @temp statement to delete them), you can examine the values in the log likelihood
and intermediate series to find problems involving lags and missing values.
Verify your derivatives. If you are using analytic derivatives, use the Check Derivatives view to make sure you have coded the derivatives correctly. If you are using
numerical derivatives, consider specifying analytic derivatives or adjusting the
options for derivative method or step size.
Reparametrize your model. If you are having problems with parameter values causing mathematical errors, you may wish to consider reparameterizing the model to
restrict the parameter within its valid domain. See the discussion below for examples.
Most of the error messages you are likely to see during estimation are self-explanatory. The
error message near singular matrix may be less obvious. This error message occurs when
EViews is unable to invert the matrix of the sum of the outer product of the derivatives so
that it is impossible to determine the direction of the next step of the optimization. This
error may indicate a wide variety of problems, including bad starting values, but will almost
always occur if the model is not identified, either theoretically, or in terms of the available
data.
Limitations
The likelihood object can be used to estimate parameters that maximize (or minimize) a
variety of objective functions. Although the main use of the likelihood object will be to specify a log likelihood, you can specify least squares and minimum distance estimation problems with the likelihood object as long as the objective function is additive over the sample.
You should be aware that the algorithm used in estimating the parameters of the log likelihood is not well suited to solving arbitrary maximization or minimization problems. The
algorithm forms an approximation to the Hessian of the log likelihood, based on the sum of
the outer product of the derivatives of the likelihood contributions. This approximation
relies on both the functional form and statistical properties of maximum likelihood objective
functions, and may not be a good approximation in general settings. Consequently, you may
or may not be able to obtain results with other functional forms. Furthermore, the standard
error estimates of the parameter values will only have meaning if the series describing the
log likelihood contributions are (up to an additive constant) the individual contributions to a
correctly specified, well-defined theoretical log likelihood.
Currently, the expressions used to describe the likelihood contribution must follow the rules
of EViews series expressions. This restriction implies that we do not allow matrix operations
in the likelihood specification. In order to specify likelihood functions for multiple equation
models, you may have to write out the expression for the determinants and quadratic forms.
Although possible, this may become tedious for models with more than two or three equations. See the multivariate GARCH sample programs for examples of this approach.
Additionally, the logl object does not directly handle optimization subject to general inequality constraints. There are, however, a variety of well-established techniques for imposing
simple inequality constraints. We provide examples below. The underlying idea is to apply a
monotonic transformation to the coefficient so that the new coefficient term takes on values
only in the desired range. The commonly used transformations are the @exp for one-sided
restrictions and the @logit and @atan for two-sided restrictions.
You should be aware of the limitations of the transformation approach. First, the approach
only works for relatively simple inequality constraints. If you have several cross-coefficient
inequality restrictions, the solution will quickly become intractable. Second, in order to per-
form hypothesis tests on the untransformed coefficient, you will have to obtain an estimate
of the standard errors of the associated expressions. Since the transformations are generally
nonlinear, you will have to compute linear approximations to the variances yourself (using
the delta method). Lastly, inference will be poor near the boundary values of the inequality
restrictions.
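As an illustration of a one-sided restriction, suppose that in a simple regression of Y on X you wish to keep the coefficient on X below one. Writing the coefficient as 1-exp(c(2)) imposes the restriction; the following is a minimal sketch (the series names and the treatment of the variance parameter C(3) are illustrative):
@logl logl1
' coefficient on X is 1-exp(c(2)), which is always less than one
res = y - c(1) - (1-exp(c(2)))*x
' c(3) is the (positive) residual variance
logl1 = log(@dnorm(res/@sqrt(c(3)))) - log(c(3))/2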
Note that EViews will report the point estimate and the standard error for the parameter
C(2), not the coefficient of X. To find the standard error of the expression 1-exp(c(2)),
you will have to use the delta method; see for example Greene (2008).
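Similarly, for a two-sided restriction, suppose you wish to keep the coefficient on X between -1 and 1. Writing the coefficient as 2*@logit(c(2))-1 keeps it in that interval; a sketch paralleling the previous one:
@logl logl1
' coefficient on X is 2*@logit(c(2))-1, restricted to lie in (-1, 1)
res = y - c(1) - (2*@logit(c(2))-1)*x
logl1 = log(@dnorm(res/@sqrt(c(3)))) - log(c(3))/2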
Again, EViews will report the point estimate and standard error for the parameter C(2). You
will have to use the delta method to compute the standard error of the transformation
expression 2*@logit(c(2))-1.
More generally, if you want to restrict the parameter to lie between L and H, you can use the
transformation:
(H-L)*@logit(c(1)) + L
where C(1) is the parameter to be estimated. In the above example, L=-1 and H=1.
Examples
In this section, we provide extended examples of working with the logl object to estimate a
multinomial logit and a maximum likelihood AR(1) specification. Example programs for
these and several other specifications are provided in your default EViews data directory. If
you set your default directory to point to the EViews data directory, you should be able to
issue a RUN command for each of these programs to create the logl object and to estimate
the unknown parameters.
Suppose the dependent variable y takes one of three categories, and that the probability of each category depends on two regressors, x1 and x2:

$\Pr(y_i = j) = \frac{\exp(\beta_{0j} + \beta_{1j}x_{1i} + \beta_{2j}x_{2i})}{\sum_{k=1}^{3}\exp(\beta_{0k} + \beta_{1k}x_{1i} + \beta_{2k}x_{2i})} = P_{ij}$   (35.8)

for $j = 1, 2, 3$. Note that the parameters $\beta$ are specific to each category so there are $3 \times 3 = 9$ parameters in this specification. The parameters are not all identified unless we impose a normalization, so we normalize the parameters of the first choice category $j = 1$ to be all zero: $\beta_{0,1} = \beta_{1,1} = \beta_{2,1} = 0$ (see, for example, Greene (2008, Section 23.11.1)).
The log likelihood function for the multinomial logit can be written as:

$l = \sum_{i=1}^{N}\sum_{j=1}^{3} d_{ij}\,\log(P_{ij})$   (35.9)

where $d_{ij}$ is a dummy variable that takes the value 1 if observation $i$ has chosen alternative $j$ and 0 otherwise. The first-order conditions are:

$\frac{\partial l}{\partial \beta_{kj}} = \sum_{i=1}^{N}\big(d_{ij} - P_{ij}\big)\,x_{ki}$   (35.10)

for $k = 0, 1, 2$ and $j = 1, 2, 3$.
We have provided, in the Example Files subdirectory of your default EViews directory, a
workfile Mlogit.WK1 containing artificial multinomial data. The program begins by loading this workfile:
' load artificial data
%evworkfile = @evpath + "\example files\logl\mlogit"
load "{%evworkfile}"
coef(3) b2
coef(3) b3
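Before the derivative statements, the program specifies the likelihood itself. The following is a sketch of a specification consistent with the series and coefficient names used in the rest of this example; the name of the dependent choice series (taken here to be Y, with values 1, 2, and 3) is an assumption made for illustration:

' sketch of the likelihood specification (dependent series name y is assumed)
logl mlogit
mlogit.append @logl logl1
' index for each non-normalized category
mlogit.append xb2 = b2(1) + b2(2)*x1 + b2(3)*x2
mlogit.append xb3 = b3(1) + b3(2)*x1 + b3(3)*x2
' choice probabilities (category 1 is the normalized base category)
mlogit.append denom = 1 + exp(xb2) + exp(xb3)
mlogit.append pr1 = 1/denom
mlogit.append pr2 = exp(xb2)/denom
mlogit.append pr3 = exp(xb3)/denom
' dummy variables for the observed choice
mlogit.append dd1 = (y=1)
mlogit.append dd2 = (y=2)
mlogit.append dd3 = (y=3)
' likelihood contributions
mlogit.append logl1 = dd1*log(pr1) + dd2*log(pr2) + dd3*log(pr3)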
Since the analytic derivatives for the multinomial logit are particularly simple, we also specify the expressions for the analytic derivatives to be used during estimation and the appropriate @deriv statements:
' specify analytic derivatives
for !i = 2 to 3
	mlogit.append @deriv b{!i}(1) grad{!i}1 b{!i}(2) grad{!i}2 b{!i}(3) grad{!i}3
	mlogit.append grad{!i}1 = dd{!i}-pr{!i}
	mlogit.append grad{!i}2 = grad{!i}1*x1
	mlogit.append grad{!i}3 = grad{!i}1*x2
next
Note that if you were to specify this likelihood interactively, you would simply type the
expression that follows each append statement directly into the MLOGIT object.
This concludes the actual specification of the likelihood object. Before estimating the model,
we get the starting values by estimating a series of binary logit models:
' get starting values from binomial logit
equation eq2.binary(d=l) dd2 c x1 x2
b2 = eq2.@coefs
equation eq3.binary(d=l) dd3 c x1 x2
b3 = eq3.@coefs
To check whether you have specified the analytic derivatives correctly, choose View/Check
Derivatives or use the command:
show mlogit.checkderiv
If you have correctly specified the analytic derivatives, they should be fairly close to the
numeric derivatives.
We are now ready to estimate the model. Either click the Estimate button or use the command:
' do MLE
mlogit.ml(showopts, m=1000, c=1e-5)
show mlogit.output
Note that you can examine the derivatives for this model using the Gradient Table view, or
you can examine the series in the workfile containing the gradients. You can also look at the
intermediate results and log likelihood values. For example, to look at the likelihood contributions for each individual, simply double click on the LOGL1 series.
The exact Gaussian likelihood function for an AR(1) model is given by:

$f(y_t \mid c, \rho, \sigma) = \begin{cases} \dfrac{1}{\sigma\sqrt{2\pi/(1-\rho^2)}}\exp\!\left(-\dfrac{\big(y_t - c/(1-\rho)\big)^2}{2\sigma^2/(1-\rho^2)}\right) & t = 1 \\[2ex] \dfrac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\dfrac{\big(y_t - c - \rho\,y_{t-1}\big)^2}{2\sigma^2}\right) & t > 1 \end{cases}$   (35.11)

where $c$ is the constant term, $\rho$ is the AR(1) coefficient, and $\sigma^2$ is the error variance, all to be estimated (see for example Hamilton, 1994, Chapter 5.2).
Since the likelihood function evaluation differs for the first observation in our sample, we
create a dummy variable indicator for the first observation:
' create dummy variable for first obs
series d1 = 0
smpl @first @first
d1 = 1
smpl @all
Next, we declare the coefficient vectors to store the parameter estimates and initialize them
with the least squares estimates:
' set starting values to LS (drops first obs)
equation eq1.ls y c ar(1)
coef(1) rho = c(2)
coef(1) s2 = eq1.@se^2
We then specify the likelihood function. We make use of the @recode function to differentiate the evaluation of the likelihood for the first observation from the remaining observations.
Note: the @recode function used here uses the updated syntax for this function; please double-check the current documentation for details.
' set up likelihood
logl ar1
ar1.append @logl logl1
ar1.append var = @recode(d1=1,s2(1)/(1-rho(1)^2),s2(1))
ar1.append res = @recode(d1=1,y-c(1)/(1-rho(1)),y-c(1)-rho(1)*y(-1))
ar1.append sres = res/@sqrt(var)
ar1.append logl1 = log(@dnorm(sres))-log(var)/2
The likelihood specification uses the built-in function @dnorm for the standard normal density. The second term is the Jacobian term that arises from transforming the standard normal variable to one with non-unit variance. (You could, of course, write out the likelihood
for the normal distribution without using the @dnorm function.)
The program displays the MLE together with the least squares estimates:
' do MLE
ar1.ml(showopts, m=1000, c=1e-5)
show ar1.output
' compare with EViews AR(1) which ignores first obs
show eq1.output
Additional Examples
The following additional example programs can be found in the Example Files subdirectory of your default EViews directory.
GARCH with coefficient restrictions (garch1.prg): estimates a GARCH model with coefficient restrictions in the conditional variance equation, of the form estimated by Bollerslev, Engle, and Nelson (1994, equation 9.1, page 3015) for different data.
EGARCH with generalized error distributed errors (egarch1.prg): estimates Nelson's (1991) exponential GARCH with generalized error distribution. The specification and likelihood are described in Hamilton (1994, pp. 668-669). Note that this model may more easily be estimated using the standard ARCH estimation tools provided in EViews (Chapter 25. "ARCH and GARCH Estimation," on page 231).
Multivariate GARCH (bv_garch.prg and tv_garch.prg): estimates the bi- or the trivariate version of the BEKK GARCH specification (Engle and Kroner, 1995). Note that
this specification may be estimated using the built-in procedures available in the system object (System Estimation, on page 583).
References
Bollerslev, Tim, Robert F. Engle and Daniel B. Nelson (1994). "ARCH Models," Chapter 49 in Robert F. Engle and Daniel L. McFadden (eds.), Handbook of Econometrics, Volume 4, Amsterdam: Elsevier Science B.V.
Engle, Robert F. and K. F. Kroner (1995). "Multivariate Simultaneous Generalized ARCH," Econometric Theory, 11, 122-150.
Greene, William H. (2008). Econometric Analysis, 6th Edition, Upper Saddle River, NJ: Prentice-Hall.
Hamilton, James D. (1994). Time Series Analysis, Princeton University Press.
Judge, George G., W. E. Griffiths, R. Carter Hill, Helmut Lütkepohl, and Tsoung-Chao Lee (1985). The Theory and Practice of Econometrics, 2nd edition, New York: John Wiley & Sons.
Nelson, Daniel B. (1991). "Conditional Heteroskedasticity in Asset Returns: A New Approach," Econometrica, 59, 347-370.
Quandt, Richard E. (1988). The Econometrics of Disequilibrium, Oxford: Blackwell Publishing Co.
Vuong, Q. H. (1989). "Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses," Econometrica, 57, 307-333.
$y_t = y_{t-1} + \epsilon_t$   (36.1)

where $\epsilon_t$ is a stationary random disturbance term. The series $y$ has a constant forecast value, conditional on $t$, and the variance is increasing over time. The random walk is a difference stationary series since the first difference of $y$ is stationary:

$y_t - y_{t-1} = (1 - L)\,y_t = \epsilon_t$   (36.2)
The first part of the unit root output provides information about the form of the test (the
type of test, the exogenous variables, and lag length used), and contains the test output,
associated critical values, and in this case, the p-value:
Null Hypothesis: TBILL has a unit root
Exogenous: Constant
Lag Length: 1 (Automatic based on SIC, MAXLAG=14)

                                                   t-Statistic    Prob.*
Augmented Dickey-Fuller test statistic              -1.417410     0.5734
Test critical values:      1% level                 -3.459898
                           5% level                 -2.874435
                           10% level                -2.573719
The ADF statistic value is -1.417 and the associated one-sided p-value (for a test with 221
observations) is .573. In addition, EViews reports the critical values at the 1%, 5% and 10%
levels. Notice here that the t_a statistic value is greater than the critical values so that we do not reject the null at conventional test sizes.
The second part of the output shows the intermediate test equation that EViews used to calculate the ADF statistic:
Augmented Dickey-Fuller Test Equation
Dependent Variable: D(TBILL)
Method: Least Squares
Date: 08/08/06 Time: 13:55
Sample: 1953M03 1971M07
Included observations: 221
                          Coefficient   Std. Error   t-Statistic   Prob.
TBILL(-1)                  -0.022951     0.016192     -1.417410    0.1578
D(TBILL(-1))               -0.203330     0.067007     -3.034470    0.0027
C                           0.088398     0.056934      1.552626    0.1220

R-squared                   0.053856    Mean dependent var         0.013826
Adjusted R-squared          0.045175    S.D. dependent var         0.379758
S.E. of regression          0.371081    Akaike info criterion      0.868688
Sum squared resid           30.01882    Schwarz criterion          0.914817
Log likelihood             -92.99005    Hannan-Quinn criter.       0.887314
F-statistic                 6.204410    Durbin-Watson stat         1.976361
Prob(F-statistic)           0.002395
If you had chosen to perform any of the other unit root tests (PP, KPSS, ERS, NP), the right
side of the dialog would show the different options associated with the specified test. The
options are associated with the method used to estimate the zero frequency spectrum term,
f 0 , that is used in constructing the particular test statistic. As before, you only need pay
attention to these settings if you wish to change from the EViews defaults.
                                                   Adj. t-Stat    Prob.*
Phillips-Perron test statistic                      -1.519035     0.5223
Test critical values:      1% level                 -3.459898
                           5% level                 -2.874435
                           10% level                -2.573719

Residual variance (no correction)                    0.141569
HAC corrected variance (Bartlett kernel)             0.107615
As with the ADF test, we fail to reject the null hypothesis of a unit root in the TBILL series at
conventional significance levels.
Note that your test output will differ somewhat for alternative test specifications. For example, the KPSS output only provides the asymptotic critical values tabulated by KPSS:
Null Hypothesis: TBILL is stationary
Exogenous: Constant
Bandwidth: 11 (Newey-West automatic) using Bartlett kernel

                                                                 LM-Stat.
Kwiatkowski-Phillips-Schmidt-Shin test statistic                 1.537310
Asymptotic critical values*:     1% level                        0.739000
                                 5% level                        0.463000
                                 10% level                       0.347000

Residual variance (no correction)                                2.415060
HAC corrected variance (Bartlett kernel)                         26.11028
Similarly, the NP test output will contain results for all four test statistics, along with the NP
tabulated critical values.
A word of caution. You should note that the critical values reported by EViews are valid only
for unit root tests of a data series, and will be invalid if the series is based on estimated values. For example, Engle and Granger (1987) proposed a two-step method of testing for cointegration which looks for a unit root in the residuals of a first-stage regression. Since these
residuals are estimates of the disturbance term, the asymptotic distribution of the test statistic differs from the one for ordinary series. See Chapter 46. Cointegration Testing, on
page 948 for EViews routines to perform testing in this setting.
$y_t = \rho\,y_{t-1} + x_t'\,\delta + \epsilon_t$   (36.3)

where $x_t$ are optional exogenous regressors which may consist of a constant, or a constant and trend, $\rho$ and $\delta$ are parameters to be estimated, and the $\epsilon_t$ are assumed to be white noise. If $|\rho| \ge 1$, $y$ is a nonstationary series and the variance of $y$ increases with time and approaches infinity. If $|\rho| < 1$, $y$ is a (trend-)stationary series. Thus, the hypothesis of (trend-)stationarity can be evaluated by testing whether the absolute value of $\rho$ is strictly less than one.

The standard Dickey-Fuller test is carried out by estimating Equation (36.3) after subtracting $y_{t-1}$ from both sides:

$\Delta y_t = \alpha\,y_{t-1} + x_t'\,\delta + \epsilon_t$   (36.4)

where $\alpha = \rho - 1$. The null and alternative hypotheses may be written as

$H_0: \alpha = 0, \qquad H_1: \alpha < 0$   (36.5)

and evaluated using the conventional t-ratio for $\alpha$:

$t_\alpha = \hat\alpha / \mathrm{se}(\hat\alpha)$   (36.6)

The augmented Dickey-Fuller (ADF) test adds lagged difference terms of $y$ to the test regression in order to correct for higher-order serial correlation:

$\Delta y_t = \alpha\,y_{t-1} + x_t'\,\delta + \beta_1\,\Delta y_{t-1} + \beta_2\,\Delta y_{t-2} + \cdots + \beta_p\,\Delta y_{t-p} + v_t$   (36.7)
This augmented specification is then used to test (36.5) using the t-ratio (36.6). An important result obtained by Fuller is that the asymptotic distribution of the t-ratio for $\alpha$ is independent of the number of lagged first differences included in the ADF regression. Moreover,
while the assumption that y follows an autoregressive (AR) process may seem restrictive,
Said and Dickey (1984) demonstrate that the ADF test is asymptotically valid in the presence
of a moving average (MA) component, provided that sufficient lagged difference terms are
included in the test regression.
You will face two practical issues in performing an ADF test. First, you must choose whether
to include exogenous variables in the test regression. You have the choice of including a constant, a constant and a linear time trend, or neither in the test regression. One approach
would be to run the test with both a constant and a linear trend since the other two cases
are just special cases of this more general specification. However, including irrelevant regressors in the regression will reduce the power of the test to reject the null of a unit root. The
standard recommendation is to choose a specification that is a plausible description of the
data under both the null and alternative hypotheses. See Hamilton (1994, p. 501) for discussion.
Second, you will have to specify the number of lagged difference terms (which we will term
the lag length) to be added to the test regression (0 yields the standard DF test; integers
greater than 0 correspond to ADF tests). The usual (though not particularly useful) advice is
to include a number of lags sufficient to remove serial correlation in the residuals. EViews
provides both automatic and manual lag length selection options. For details, see Automatic
Bandwidth and Lag Length Selection, beginning on page 538.
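The dialog-based test shown above can also be run as a series view from the command line. A minimal sketch is given below; the uroot options controlling the lag-selection criterion and the maximum lag are not spelled out here, since their exact names should be taken from the command reference:

' sketch: ADF unit root test on TBILL with an intercept
tbill.uroot(adf, const)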
The DFGLS test of Elliott, Rothenberg, and Stock (ERS) is based on quasi-differenced data. For a given value of $a$, define:

$d(y_t \mid a) = \begin{cases} y_t & \text{if } t = 1 \\ y_t - a\,y_{t-1} & \text{if } t > 1 \end{cases}$   (36.8)

and consider the OLS regression of the quasi-differenced data $d(y_t \mid a)$ on the quasi-differenced $d(x_t \mid a)$:

$d(y_t \mid a) = d(x_t \mid a)'\,\delta(a) + \eta_t$   (36.9)

where $x_t$ contains either a constant, or a constant and trend, and let $\hat\delta(a)$ be the OLS estimates from this regression.
All that we need now is a value for $a$. ERS recommend the use of $a = \bar{a}$, where:

$\bar{a} = \begin{cases} 1 - 7/T & \text{if } x_t = \{1\} \\ 1 - 13.5/T & \text{if } x_t = \{1, t\} \end{cases}$   (36.10)
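As a quick numerical illustration, reusing the T = 221 observations from the TBILL example above, the constant-only case gives:

$\bar{a} = 1 - 7/221 \approx 0.968$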
We now define the GLS detrended data, $y_t^d$, using the estimates associated with $\bar{a}$:

$y_t^d \equiv y_t - x_t'\,\hat\delta(\bar{a})$   (36.11)

Then the DFGLS test involves estimating the standard ADF test equation, (36.7), after substituting the GLS detrended $y_t^d$ for the original $y_t$:

$\Delta y_t^d = \alpha\,y_{t-1}^d + \beta_1\,\Delta y_{t-1}^d + \cdots + \beta_p\,\Delta y_{t-p}^d + v_t$   (36.12)

Note that since the $y_t^d$ are detrended, we do not include the $x_t$ in the DFGLS test equation.
The Phillips-Perron (PP) test statistic modifies the non-augmented Dickey-Fuller t-ratio so that serial correlation does not affect its asymptotic distribution:

$\tilde{t}_\alpha = t_\alpha \left(\frac{\gamma_0}{f_0}\right)^{1/2} - \frac{T\,(f_0 - \gamma_0)\,\mathrm{se}(\hat\alpha)}{2\,f_0^{1/2}\,s}$   (36.13)

where $\gamma_0$ is a consistent estimate of the error variance of the test equation, $f_0$ is an estimator of the residual spectrum at frequency zero, and $s$ is the standard error of the test regression.
The asymptotic distribution of the PP modified t -ratio is the same as that of the ADF statistic. EViews reports MacKinnon lower-tail critical and p-values for this test.
The KPSS test differs from the other unit root tests described here in that the series is assumed to be (trend-)stationary under the null. The KPSS statistic is based on the residuals from the OLS regression of $y_t$ on the exogenous variables $x_t$:

$y_t = x_t'\,\delta + u_t$   (36.14)

The LM statistic is defined as:

$LM = \sum_t S(t)^2 / (T^2 f_0)$   (36.15)

where $f_0$ is an estimator of the residual spectrum at frequency zero and $S(t)$ is a cumulative residual function:

$S(t) = \sum_{r=1}^{t} \hat{u}_r$   (36.16)

based on the residuals $\hat{u}_t = y_t - x_t'\,\hat\delta(0)$. We point out that the estimator of $\delta$ used in this calculation differs from the estimator for $\delta$ used by GLS detrending since it is based on a regression involving the original data and not on the quasi-differenced data.
To specify the KPSS test, you must specify the set of exogenous regressors x t and a method
for estimating f 0 . See Frequency Zero Spectrum Estimation on page 536 for discussion.
The reported critical values for the LM test statistic are based upon the asymptotic results
presented in KPSS (Table 1, p. 166).
The ERS point optimal test is based on the sums of squared residuals from the quasi-differencing regression (36.9), $SSR(a) = \sum_t \hat\eta_t^2(a)$, and is defined as:

$P_T = \big(SSR(\bar{a}) - \bar{a}\,SSR(1)\big) / f_0$   (36.17)

where $f_0$ is an estimator of the residual spectrum at frequency zero. Critical values for the ERS test statistic are computed by interpolating the simulation results provided by ERS (1996, Table 1, p. 825) for $T = \{50, 100, 200, \infty\}$.
The Ng and Perron (NP) tests are based on the GLS detrended data $y_t^d$ and make use of the term:

$\kappa = \sum_{t=2}^{T} (y_{t-1}^d)^2 / T^2$   (36.18)

The modified statistics may then be written as:

$MZ_\alpha^d = \big(T^{-1}(y_T^d)^2 - f_0\big) / (2\kappa)$

$MZ_t^d = MZ_\alpha^d \times MSB^d$

$MSB^d = (\kappa / f_0)^{1/2}$

$MP_T^d = \begin{cases} \big(\bar{c}^2\kappa - \bar{c}\,T^{-1}(y_T^d)^2\big)/f_0 & \text{if } x_t = \{1\} \\ \big(\bar{c}^2\kappa + (1-\bar{c})\,T^{-1}(y_T^d)^2\big)/f_0 & \text{if } x_t = \{1, t\} \end{cases}$   (36.19)

where:

$\bar{c} = \begin{cases} -7 & \text{if } x_t = \{1\} \\ -13.5 & \text{if } x_t = \{1, t\} \end{cases}$   (36.20)
The NP tests require a specification for x t and a choice of method for estimating f 0 (see
Frequency Zero Spectrum Estimation on page 536).
The kernel-based sum-of-covariances estimator of the frequency zero spectrum is based on a weighted sum of the sample autocovariances:

$\hat{f}_0 = \sum_{j=-(T-1)}^{T-1} \hat\gamma(j)\,K(j/l)$   (36.21)

where $l$ is a bandwidth parameter (which acts as a truncation lag in the covariance weighting), $K$ is a kernel function, and where $\hat\gamma(j)$, the j-th sample autocovariance of the residuals $\hat{u}_t$, is defined as:

$\hat\gamma(j) = \sum_{t=j+1}^{T} (\hat{u}_t\,\hat{u}_{t-j}) / T$   (36.22)
Note that the residuals $\hat{u}_t$ that EViews uses in estimating the autocovariance functions in (36.22) will differ depending on the specified unit root test:

Unit root test        Source of residuals
ADF, DFGLS            not applicable
KPSS                  residuals from the level regression (36.14)
EViews supports the following kernel functions:

Bartlett:
$K(x) = \begin{cases} 1 - |x| & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$

Parzen:
$K(x) = \begin{cases} 1 - 6x^2(1 - |x|) & \text{if } 0.0 \le |x| \le 0.5 \\ 2(1 - |x|)^3 & \text{if } 0.5 < |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$

Quadratic Spectral:
$K(x) = \frac{25}{12\pi^2 x^2}\left(\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right)$
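To make the weighting concrete, with the Bartlett kernel and the bandwidth of l = 11 selected in the KPSS output shown earlier, the sample autocovariances receive the linearly declining weights

$K(j/11) = 1 - j/11, \qquad j = 1, \ldots, 10,$

so that the first autocovariance gets weight 10/11, the second 9/11, and autocovariances at lags of eleven or more receive weight zero.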
The autoregressive (AR) spectral estimator of $f_0$ is based upon the auxiliary regression:

$\Delta\tilde{y}_t = \alpha\,\tilde{y}_{t-1} + \vartheta\,x_t'\,\delta + \beta_1\,\Delta\tilde{y}_{t-1} + \cdots + \beta_p\,\Delta\tilde{y}_{t-p} + u_t$   (36.23)

EViews provides three autoregressive spectral methods: OLS, OLS detrending, and GLS detrending, corresponding to different choices for the data $\tilde{y}_t$. The following table summarizes the auxiliary equation estimated by the various AR spectral density estimators:

AR spectral method     Auxiliary regression data
OLS                    $\tilde{y}_t = y_t$, with $\vartheta = 1$ and $x_t = x_t$
OLS detrended          $\tilde{y}_t = y_t - x_t'\,\hat\delta(0)$, with $\vartheta = 0$
GLS detrended          $\tilde{y}_t = y_t - x_t'\,\hat\delta(\bar{a}) = y_t^d$, with $\vartheta = 0$

where $\hat\delta(\bar{a})$ are the coefficient estimates from the regression defined in (36.9).

The AR spectral estimator of the frequency zero spectrum is defined as:

$\hat{f}_0 = \hat\sigma_u^2 \big/ \big(1 - \hat\beta_1 - \hat\beta_2 - \cdots - \hat\beta_p\big)^2$   (36.24)

where $\hat\sigma_u^2 = \sum_t \hat{u}_t^2 / T$ is the residual variance, and $\hat\beta$ are the estimates from (36.23). We note here that EViews uses the non-degree of freedom estimator of the residual variance. As a result, spectral estimates computed in EViews may differ slightly from those obtained from other sources.
Not surprisingly, the spectrum estimator is sensitive to the number of lagged difference
terms in the auxiliary equation. You may either specify a fixed parameter or have EViews
automatically select one based on an information criterion. Automatic lag length selection is
examined in Automatic Bandwidth and Lag Length Selection on page 538.
Default Settings
By default, EViews will choose the estimator of f 0 used by the authors of a given test specification. You may, of course, override the default settings and choose from either family of
estimation methods. The default settings are listed below:
Unit root test        Frequency zero spectrum default method
ADF, DFGLS            not applicable
PP, KPSS              Kernel (Bartlett) sum-of-covariances
NP                    AR spectral regression (GLS-detrended)
The first situation occurs when you are selecting the bandwidth parameter l for the kernel-based estimators of f_0. For the kernel estimators, EViews provides you with the option of using the Newey-West (1994) or the Andrews (1991) data-based automatic bandwidth parameter methods. See the original sources for details. For those familiar with the Newey-West procedure, we note that EViews uses the lag selection parameter formulae given in the corresponding first lines of Table II-C. The Andrews method is based on an AR(1) specification. (See "Automatic Bandwidth Selection" on page 1035 for discussion.)
The latter two situations occur when the unit root test requires estimation of a regression
with a parametric correction for serial correlation as in the ADF and DFGLS test equation
regressions, and in the AR spectral estimator for f 0 . In all of these cases, p lagged difference terms are added to a regression equation. The automatic selection methods choose p
(less than the specified maximum) to minimize one of the following criteria:
Information criterion       Definition
Akaike (AIC)                $-2(l/T) + 2k/T$
Schwarz (SIC)               $-2(l/T) + k\log(T)/T$
Hannan-Quinn (HQ)           $-2(l/T) + 2k\log(\log(T))/T$
Modified AIC (MAIC)         $-2(l/T) + 2(k+\tau)/T$
Modified SIC (MSIC)         $-2(l/T) + (k+\tau)\log(T)/T$
where $l$ is the value of the log likelihood, $T$ the number of observations, $k$ the number of estimated coefficients, and the modification factor $\tau$ is computed as:

$\tau = \hat\alpha^2 \sum_t \tilde{y}_{t-1}^2 \big/ \hat\sigma_u^2$   (36.25)

for $\tilde{y}_t = y_t$, when computing the ADF test equation, and for $\tilde{y}_t$ as defined in "Autoregressive Spectral Density Estimator" on page 537, when estimating $f_0$. Ng and Perron (2001)
propose and examine the modified criteria, concluding with a recommendation of the MAIC.
For the information criterion selection methods, you must also specify an upper bound to
the lag length. By default, EViews chooses a maximum lag of:
$k_{\max} = \mathrm{int}\big(12 \cdot (T/100)^{1/4}\big)$   (36.26)
See Hayashi (2000, p. 594) for a discussion of the selection of this upper bound.
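As a rough arithmetic check against the ADF output shown earlier, where the series provides roughly T = 221 usable observations:

$k_{\max} = \mathrm{int}\big(12 \cdot (221/100)^{1/4}\big) = \mathrm{int}(14.6) = 14,$

which matches the MAXLAG=14 reported in that output.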
EViews provides a number of conventional unit root tests, including the Dickey-Fuller, Elliott, Rothenberg, and Stock (ERS), Ng and Perron (NP), and Kwiatkowski, Phillips, Schmidt, and Shin (KPSS) tests ("Unit Root Testing," on page 527).
However, as Perron (1989) points out, structural change and unit roots are closely related,
and researchers should bear in mind that conventional unit root tests are biased toward a
false unit root null when the data are trend stationary with a structural break. This observation has spurred development of a large literature outlining various unit root tests that
remain valid in the presence of a break (see Hansen, 2001 for an overview).
EViews offers support for several types of modified augmented Dickey-Fuller tests which
allow for levels and trends that differ across a single break date. You may compute unit root
tests with a single break where:
The break can occur slowly or immediately.
The break consists of a level shift, a trend break, or both a shift and break.
The break date is known, or the break date is unknown and estimated from the data.
The data are non-trending or trending.
Background
We begin with a brief discussion of the specifications underlining the testing methodology.
As always, our discussion is necessarily brief and we encourage you to consult the enclosed
references for additional detail.
Our discussion follows the basic framework outlined in Perron (1989), Vogelsang and Perron
(1998), Zivot and Andrews (1992), Banerjee et al. (1992) and others. For a useful overview
of the literature, see Perron (2006). Note that our notation differs slightly from the above
sources.
Break Variables
Before proceeding, it will be useful to define a few variables which allow us to characterize
the breaks. Let 1 ( ) be an indicator function that takes the value 1 if the argument ( ) is
true, and 0 otherwise. Then the following variables are defined in terms of a specified break
date T_b:

An intercept break variable:

$DU_t(T_b) = 1(t \ge T_b)$   (36.27)

that takes the value 0 for all dates prior to the break, and 1 thereafter.

A trend break variable:

$DT_t(T_b) = 1(t \ge T_b)\cdot(t - T_b + 1)$   (36.28)

which takes the value 0 for all dates prior to the break, and is a break date re-based trend for all subsequent dates.

A one-time break dummy variable:

$D_t(T_b) = 1(t = T_b)$   (36.29)

which takes the value of 1 only on the break date and 0 otherwise.
Note that following EViews convention, we define the break date as the first date for the new
regime. This is in contrast to much of the literature which defines the break date as the last
date of the previous regime.
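To make the dating convention concrete, suppose annual data with a break date of 1929 (as in the Perron examples below): DU_t(1929) equals 0 through 1928 and 1 from 1929 onward; DT_t(1929) equals 0 through 1928 and then 1, 2, 3, ... from 1929 onward; and D_t(1929) equals 1 in 1929 only and 0 in every other year.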
The Model
Following Perron (1989), we consider four basic models for data with a one-time break. For
non-trending data, we have a model with (O) a one-time change in level; for trending data,
we have models with (A) a change in level, (B) a change in both level and trend, and (C) a
change in trend.
In addition, we consider two versions of the four models which differ in their treatment of
the break dynamics: the innovational outlier (IO) model assumes that the break occurs gradually, with the breaks following the same dynamic path as the innovations, while the additive outlier (AO) model assumes the breaks occur immediately. The tests considered here
evaluate the null hypothesis that the data follow a unit root process, possibly with a break,
against a trend stationary with break alternative.
Within this basic framework there are a variety of specifications for the null and alternative
hypotheses, depending on the assumptions one wishes to make about the break dynamics,
trend behavior, and whether the break date is known or determined endogenously.
As in Perron (1989), we consider two distinct approaches to modeling the break dynamics.
For the innovational outlier (IO) model, the null hypothesis is a unit root process with breaks that enter through the innovation dynamics:

$y_t = y_{t-1} + b + w(L)\big(v\,D_t(T_b) + g\,DU_t(T_b) + e_t\big)$   (36.30)

where $e_t$ are i.i.d. innovations, and $w(L)$ is a lag polynomial representing the dynamics of the stationary and invertible ARMA error process. Note that the break variables enter the model with the same dynamics as the $e_t$ innovations.

For our alternative hypothesis, we assume a trend stationary model with breaks in the intercept and trend:

$y_t = m + b\,t + w(L)\big(v\,DU_t(T_b) + g\,DT_t(T_b) + e_t\big)$   (36.31)

with the breaks again following the innovation dynamics.

We may construct a general Dickey-Fuller test equation which nests the two hypotheses:

$y_t = m + b\,t + v\,DU_t(T_b) + g\,DT_t(T_b) + q\,D_t(T_b) + a\,y_{t-1} + \sum_{i=1}^{k} c_i\,\Delta y_{t-i} + u_t$   (36.32)

The individual models are obtained by imposing restrictions on (36.32).

Model 0: non-trending data with intercept break:

$y_t = m + v\,DU_t(T_b) + q\,D_t(T_b) + a\,y_{t-1} + \sum_{i=1}^{k} c_i\,\Delta y_{t-i} + u_t$   (36.33)
Setting the trend and trend break coefficients b and g to zero yields a test of a random walk against a stationary model with intercept break.
Model 1: trending data with intercept break:
$y_t = m + b\,t + v\,DU_t(T_b) + q\,D_t(T_b) + a\,y_{t-1} + \sum_{i=1}^{k} c_i\,\Delta y_{t-i} + u_t$   (36.34)
Setting the trend break coefficient g to zero produces a test of a random walk with
drift against a trend stationary model with intercept break.
Model 2: trending data with intercept and trend break:
$y_t = m + b\,t + v\,DU_t(T_b) + g\,DT_t(T_b) + q\,D_t(T_b) + a\,y_{t-1} + \sum_{i=1}^{k} c_i\,\Delta y_{t-i} + u_t$   (36.35)
The unrestricted Dickey-Fuller equation tests the random walk with drift against a
trend stationary with intercept and trend break alternative.
Model 3: trending data with trend break:
$y_t = m + b\,t + g\,DT_t(T_b) + a\,y_{t-1} + \sum_{i=1}^{k} c_i\,\Delta y_{t-i} + u_t$   (36.36)
Setting the intercept break and break dummy coefficients v and q to zero tests a random walk with drift null against a trend stationary with trend break alternative.
Note that the test equation for Model 3 follows the methodology of Zivot and Andrews
(1992) and Banerjee et al. (1992) which does not nest the null and alternatives, as
DU t ( T b ) is absent from the test equation; see Vogelsang and Perron (1998), p. 1077
for discussion.
You should bear in mind that whether one specifies a known break date or estimates the
break date from the data affects the allowable specifications for the null hypothesis.
If the break date is known as in Perron (1989), Models 0, 1, and 2 allow for breaks under the
null hypothesis. Model 3 does not allow for a break under the null.
If the break date is estimated, the test statistics considered here do not permit a breaking
trend under the null. Vogelsang and Perron (1998) offer a detailed discussion of this point,
noting that this undesirable restriction is required to obtain distributional results for the
resulting Dickey-Fuller t-statistic. They offer practical advice for testing in the case where you wish to allow $g \ne 0$ under the null. See also Kim and Perron (2009) for more recent work that directly tackles this issue.
For the additive outlier (AO) model, the unit root null hypothesis takes the form:

$y_t = y_{t-1} + b + v\,D_t(T_b) + g\,DU_t(T_b) + w(L)\,e_t$   (36.37)
where e t are i.i.d. innovations, and w ( L ) is a lag polynomial representing the dynamics of
the stationary and invertible ARMA error process, and b is a drift parameter. Note that the
full impact of the break variables occurs immediately.
The alternative hypothesis is for a trend stationary model with possible breaks in the intercept and trend:
$y_t = m + b\,t + v\,DU_t(T_b) + g\,DT_t(T_b) + w(L)\,e_t$   (36.38)
Testing for a unit root in the AO framework is a two-step procedure where we first use the
intercept, trend, and breaking variables to detrend the series using OLS, and then use the
detrended series to test for a unit root using a modified Dickey-Fuller regression.
In the first-step of the AO test, we detrend the data using a model with appropriate trend and
break variables:
Model 0: non-trending data with intercept break:
$y_t = m + v\,DU_t(T_b) + \tilde{y}_t$   (36.39)

Model 1: trending data with intercept break:

$y_t = m + b\,t + v\,DU_t(T_b) + \tilde{y}_t$   (36.40)

Model 2: trending data with intercept and trend break:

$y_t = m + b\,t + v\,DU_t(T_b) + g\,DT_t(T_b) + \tilde{y}_t$   (36.41)

Model 3: trending data with trend break:

$y_t = m + b\,t + g\,DT_t(T_b) + \tilde{y}_t$   (36.42)
In the second step, let $\tilde{y}_t$ be the residuals obtained from the detrending equation. The resulting Dickey-Fuller unit root test equation is given by,

Models 0, 1, 2:

$\tilde{y}_t = \sum_{i=0}^{k} q_i\,D_{t-i}(T_b) + a\,\tilde{y}_{t-1} + \sum_{i=1}^{k} c_i\,\Delta\tilde{y}_{t-i} + u_t$   (36.43)

Model 3:

$\tilde{y}_t = a\,\tilde{y}_{t-1} + \sum_{i=1}^{k} c_i\,\Delta\tilde{y}_{t-i} + u_t$   (36.44)
Test Options
For a given test equation described above, you must choose a number of lags k to include in
the test equation, and you must specify the candidate date T b at which to evaluate the
break. EViews offers a number of tools for you to use when making these choices.
Lag Selection
The theoretical properties of the test statistics require that we choose the number of lag terms k in the Dickey-Fuller equations to be large enough to eliminate the effect of the correlation structure of the errors on the asymptotic distribution of the statistic. EViews offers several methods for choosing k:

Fixed (with observation-based suggestion from Said and Dickey, 1984).
All of the remaining methods are data dependent, and require specification of a maximum
lag length k max . A different optimal lag length k is obtained for each candidate break
date.
t-test.
Following Perron (1989), Perron and Vogelsang (1992a, 1992b), and Vogelsang and
Perron (1998), k is chosen so that the coefficient on the last included dependent
variable lag difference is significant at a specified probability value, while the coefficients on the last included lag difference in higher-order autoregressions up to k max
are all insignificant at the same level. The probability values for the t-statistics are
computed using the t-distribution.
The t-test method requires the specification of a p-value for use in evaluating significance. The default p-value of 0.10 may be changed by the user.
F-test.
Based on an approach of Said and Dickey (1984) (see also Perron and Vogelsang, 1992a, 1992b), the approach uses an F-test of the joint significance of the lag coefficients for a given k° against all higher lags up to k_max. If any of the tests against higher-order lags is significant at a specified probability level, we set k = k° + 1. If none of the test statistics is significant, we lower k° by 1 and continue. We begin the procedure with k° = k_max - 1 and continue until we achieve a rejection with k = k° + 1, or until the lower bound k° = 0 is evaluated without rejection, in which case we set k = 0.
The F-test method requires the specification of a p-value for use in evaluating significance. The default p-value of 0.10 may be changed by the user.
Information criterion
Following the approach of Hall (1994) and Ng and Perron (1995), k is chosen to
minimize the specified information criterion amongst models with 0 to k max lags.
You may choose between the Akaike, Schwarz, Hannan-Quinn, Modified Akaike, Modified Schwarz, and Modified Hannan-Quinn criteria. Note that the sample used for model selection is adjusted so that data for the full set of lag differences up to k_max are available.
The first section tells EViews whether you wish to compute the test using the raw
data (Level), or whether to test for higher order integration using differences (1st
difference or 2nd difference) of the original data.
The Trend specification section determines the trend components that are included
in the test. Using the Basic dropdown, you may choose between an Intercept only or
an Intercept and trend specification. If you include a trend in the specification you
will be prompted to indicate which deterministic components are breaking by choosing Intercept, Intercept and trend, or Trend in the Breaking dropdown menu.
The Lag length section describes the method for selecting lags k for each of the augmented Dickey-Fuller test specifications ("Lag Selection" on page 544). You may choose between Akaike criterion (AIC), Schwarz criterion (BIC), Hannan-Quinn criterion (HQC), Modified Akaike, Modified Schwarz, Modified Hannan-Quinn, t-statistic, F-statistic, and Fixed lag specifications. For all but the Fixed lag method, you must provide a Max. lag to test; by default, EViews will suggest a maximum lag based on the number of observations in the series. For the test methods (t-statistic, F-statistic), you must specify a p-value for the tests; for the Fixed lag method, you must specify the actual number of lags using the User lags edit field.
The Break type section allows you to choose between the default Innovation outlier
and the Additive outlier specifications (The Model on page 541).
The Breakpoint selection section specifies the method for determining the identity
of the breakpoint (Break Date Selection on page 545).
For a model with an intercept break, you may choose between minimizing the t-statistic for a in the ADF test (Dickey-Fuller min-t), minimizing the t-statistic for the intercept break coefficient (Intercept break min-t), maximizing the t-statistic for the break coefficient (Intercept break max-t), maximizing the absolute value of the t-statistic for the intercept break coefficient (Intercept break max-abs-t), or providing a specific date (User-specified).
For models with a trend break, there will be corresponding entries for minimizing
and maximizing the t-statistic or absolute value of the t-statistic for the trend break
coefficient. For models with both an intercept and trend break you will be offered an
additional choice of using the F-statistic for the break coefficients (Incpt.+trend
break max-F) to select the breakpoint.
You will be prompted to specify a trimming percentage when employing methods that involve the t-statistic or F-statistic of the break coefficients; EViews will remove from consideration as the breakpoint this percentage of the observations from each endpoint of the sample.
For the User-specified break choice you will be prompted to specify a single date.
Lastly, the Additional output controls the output produced by the view. The checkbox Display test and selection graphs controls whether to show only the test results
with the selected break, or to show the test results and graphs depicting the break
selection criterion results for each candidate break.
If you provide a name in the Results matrix edit field, EViews will save the results from each of the candidate augmented Dickey-Fuller tests in a matrix in the workfile. The first column contains the observation identifier for the break; the next columns contain the autoregressive coefficient, the autoregressive coefficient standard error, the number of observations, the number of variables, and the number of selected lags in the Dickey-Fuller regressions.
If appropriate, the remaining columns contain results for the breakpoint selection, with the contents varying with the method chosen. When minimizing the Dickey-Fuller t_a, the output consists of a single column containing the t_a statistics. For methods involving one of v or g, the output contains the coefficient value, standard error, and the corresponding t-statistic; for the F-statistic method, the output columns consist of the estimates of v, the standard error of v, the estimates of g, the standard error of g, and the F-statistic for testing the significance of the two coefficients.
Examples
As examples, we replicate some of the results given in Perron (1997), using data originally
provided by Nelson and Plosser (1982). The dataset contains fourteen annual macroeconomic series with values between 1860 and 1988. These data are provided in the workfile
nelson_plosser.wf1.
Real GNP
To begin, we replicate the results in the second row of Table 3 in Perron (1997), which tests
for a unit root in the log of real GNP using data between 1909 and 1970. We display the log of real GNP, and set the workfile sample to dates from 1909 to 1970 with the commands
smpl 1909 1970
show log(rgnp)
To perform the unit root test with breakpoints, we click on View/Breakpoint Unit Root
Test... which brings up the test dialog. In this example Perron tests for the existence of a unit
root of the data in levels. The test assumes an innovation outlier break, with a trend specification given by Model 2 (Equation (36.35), above); trending data with both intercept and
trend break.
Perron selects a breakpoint by minimizing the Dickey-Fuller t-statistic, and selects a lag
length using the F-test.
We can match these settings by clicking the Level and Innovation Outlier buttons, changing
the Basic Trend specification to Trend and Intercept and the Breaking Trend specification
to Intercept, selecting Dickey-Fuller min-t as the Breakpoint selection, and changing the
Lag length Method to F-statistic:
The top section of this output describes the test that was performed, with a description of
the underlying series, the trend and break specification, and the break type. The second section displays the selected break date, which in this case is 1929. Recall that, unlike Perron,
EViews reports the break date as the start of the new regime instead of the last date of the old regime, so the EViews-reported date of 1929 matches Perron's 1928 result. Lastly, we see that the selected number of lags for the corresponding test regression, chosen on the basis of the F-statistic method, is eight.
The lower section reports the Augmented Dickey-Fuller t-statistic for the unit root test, along with Vogelsang's asymptotic p-values. Our test resulted in a statistic of -5.50, with a p-value less than 0.01, leading us to reject the null hypothesis of a unit root.
EViews also provides a graph of the Augmented Dickey-Fuller statistics and AR coefficients
at each test date:
Both graphs show a large dip in 1929, leaving little doubt as to which date should be
selected as the break point.
Employment
Our second example replicates row nine of Table 3 in Perron (1997). This example performs
a unit root test on the log of employment using data from 1890 to 1970. We again begin with
issuing commands to set the sample and display the log of employment:
smpl 1890 1970
show log(totalemp)
In this test, Perron again assumes an innovation outlier break, with a trend specification
given by Model 2 (Equation (36.35), above); trending data with intercept and trend break.
However Perron now selects the breakpoint corresponding to the minimum intercept break
t-statistic, and selects the lag-length using the t-statistic method. We replicate these choices
with the following dialog settings:
The first section of the results of this test are shown below:
Again, the top section of this output describes the test that was performed, notably the underlying series, the trend and break specifications, and the break type. From the second section we can see that again a date of 1929 was chosen as the most likely break date. The t-statistic based lag selection chose seven lags for this test regression.

The next section displays the test statistic and associated p-value. The statistic value of -4.918 matches the value reported by Perron, and the p-value again means that we reject (at a 5% significance level) the null hypothesis of a unit root.
GNP Deflator
Our final example replicates row 12 of Table 3 in Perron (1997), and performs a unit root test
with breaks on the log of the GNP deflator between 1889 and 1970. We set the workfile sample and display the log of the GNP deflator by issuing the commands
smpl 1889 1970
show log(gnpdeflat)
Here, 1920 was selected as the most likely break date, and the automatic lag selection routine selected 9 lags.
The t-statistic of -3.869 matches that reported by Perron, and the corresponding p-value of
0.27 indicates we cannot reject the hypothesis that the log of the GNP deflator has a unit
root.
The dropdown menu at the top of the dialog is where you will choose the type of test to perform. There are seven settings: Summary, Common root - Levin, Lin, Chu, Common
root - Breitung, Individual root - Im, Pesaran, Shin, Individual root - Fisher - ADF,
Individual root - Fisher - PP, and Hadri, corresponding to one or more of the tests
listed above. The dropdown menu labels include a brief description of the assumptions
under which the tests are computed. Common root indicates that the tests are estimated
assuming a common AR structure for all of the series; Individual root is used for tests
which allow for different AR coefficients in each series.
We have already pointed out that the Summary default instructs EViews to estimate the first
five of the tests, where applicable, and to provide a brief summary of the results. Selecting
an individual test type allows you better control over the computational method and provides additional detail on the test results.
The next two sets of radio buttons allow you to control the specification of your test equation. First, you may choose to conduct the unit root on the Level, 1st difference, or 2nd difference of your series. Next, you may choose between sets of exogenous regressors to be
included. You can select Individual intercept if you wish to include individual fixed effects,
Individual intercepts and individual trends to include both fixed effects and trends, or
None for no regressors.
The Use balanced sample option is present only if you are estimating a Pool or a Group unit
root test. If you select this option, EViews will adjust your sample so that only observations
where all series values are not missing will be included in the test equations.
Depending on the form of the test or tests to be computed, you will be presented with various advanced options on the right side of the dialog. For tests that involve regressions on
lagged difference terms (Levin, Lin, and Chu, Breitung, Im, Pesaran, and Shin, Fisher - ADF)
these options relate to the choice of the number of lags to be included. For the tests involving kernel weighting (Levin, Lin, and Chu, Fisher - PP, Hadri), the options relate to the
choice of bandwidth and kernel type.
For a group or pool unit root test, the EViews default is to use automatic selection methods:
information criterion based selection for the number of lag difference terms (with automatic
selection of the maximum lag to evaluate), and the Andrews or Newey-West method for
bandwidth selection. For unit root tests on a series in a panel workfile, the default behavior
uses user-specified options.
If you wish to override these settings, simply enter the appropriate information. You may, for
example, select a fixed, user-specified number of lags by entering a number in the User
specified field. Alternatively, you may customize the settings for automatic lag selection
method. Alternative criteria for evaluating the optimal lag length may be selected via the
dropdown menu (Akaike, Schwarz, Hannan-Quinn, Modified Akaike, Modified Schwarz,
Modified Hannan-Quinn), and you may limit the number of lags to try in automatic selec-
tion by entering a number in the Maximum lags box. For the kernel based methods, you
may select a kernel type from the dropdown menu (Bartlett, Parzen, Quadratic spectral),
and you may specify either an automatic bandwidth selection method (Andrews, NeweyWest) or user-specified fixed bandwidth.
As an illustration, we perform a panel unit root tests on real gross investment data (I) in the
oft-cited Grunfeld data containing data on R&D expenditure and other economic measures
for 10 firms for the years 1935 to 1954 found in Grunfeld_Baltagi.WF1. We compute the
summary panel unit root test, using individual fixed effects as regressors, and automatic lag
difference term and bandwidth selection (using the Schwarz criterion for the lag differences,
and the Newey-West method and the Bartlett kernel for the bandwidth). The results for the
panel unit root test are presented below:
Panel unit root test: Summary
Series: I
Date: 08/12/09   Time: 14:17
Sample: 1935 1954
Exogenous variables: Individual effects
Automatic selection of maximum lags
Automatic lag length selection based on SIC: 0 to 3
Newey-West automatic bandwidth selection and Bartlett kernel

                                                          Cross-
Method                             Statistic   Prob.**   sections     Obs
Null: Unit root (assumes common unit root process)
Levin, Lin & Chu t*                  2.39544    0.9917       10       184

Null: Unit root (assumes individual unit root process)
Im, Pesaran and Shin W-stat          2.80541    0.9975       10       184
ADF - Fisher Chi-square              12.0000    0.9161       10       184
PP - Fisher Chi-square               12.9243    0.8806       10       190

** Probabilities for Fisher tests are computed using an asymptotic Chi-square
   distribution. All other tests assume asymptotic normality.
The top of the output indicates the type of test, exogenous variables and test equation
options. If we were instead estimating a Pool or Group test, a list of the series used in the
test would also be depicted. The lower part of the summary output gives the main test
results, organized both by null hypothesis as well as the maintained hypothesis concerning
the type of unit root process.
All of the results indicate the presence of a unit root, as the LLC, IPS, and both Fisher tests
fail to reject the null of a unit root.
If you only wish to compute a single unit root test type, or if you wish to examine the tests
results in greater detail, you may simply repeat the unit root test after selecting the desired
test in Test type dropdown menu. Here, we show the bottom portion of the LLC test specific
output for the same data:
Intermediate results on I

Cross                                Max
section              Lag             Lag        Bandwidth        Obs
  1                   0               4             1.0           19
  2                   1               4            11.0           18
  3                   3               4             5.0           16
  4                   0               4             7.0           19
  5                   1               4            18.0           18
  6                   0               4             1.0           19
  7                   0               4            17.0           19
  8                   1               4             6.0           18
  9                   0               4             2.0           19
 10                   0               4             5.0           19

Pooled     Coefficient    t-Stat     SE Reg      mu*       sig*       Obs
            -0.01940      -0.464      1.079     -0.554     0.919      184
For each cross-section, the autoregression coefficient, variance of the regression, HAC of the
dependent variable, the selected lag order, maximum lag, bandwidth truncation parameter,
and the number of observations used are displayed.
Consider the following AR(1) process for panel data:

$y_{it} = \rho_i\,y_{it-1} + X_{it}'\,\delta_i + \epsilon_{it}$   (36.45)

where the $X_{it}$ are exogenous variables (including any fixed effects or individual trends). The LLC and Breitung tests assume a common unit root process and consider the following basic ADF specification:

$\Delta y_{it} = \alpha\,y_{it-1} + \sum_{j=1}^{p_i} \beta_{ij}\,\Delta y_{it-j} + X_{it}'\,\delta + \epsilon_{it}$   (36.46)
where we assume a common $\alpha = \rho - 1$, but allow the lag order for the difference terms, $p_i$, to vary across cross-sections. The null and alternative hypotheses for the tests may be written as:

$H_0: \alpha = 0$   (36.47)

$H_1: \alpha < 0$   (36.48)

Under the null hypothesis, there is a unit root, while under the alternative, there is no unit root.
The LLC method requires proxies for $\Delta y_{it}$ and $y_{it-1}$ that are standardized and free of autocorrelations and deterministic components. These are obtained from auxiliary regressions of $\Delta y_{it}$ and of $y_{it-1}$ on the lag terms $\Delta y_{it-j}$ and the exogenous variables $X_{it}$; denote the two sets of estimated coefficients by $(\hat\beta, \hat\delta)$ and $(\dot\beta, \dot\delta)$, respectively. We first define $\Delta\tilde{y}_{it}$ by removing the autocorrelations and deterministic components using the first set of estimates:

$\Delta\tilde{y}_{it} = \Delta y_{it} - \sum_{j=1}^{p_i}\hat\beta_{ij}\,\Delta y_{it-j} - X_{it}'\,\hat\delta$   (36.49)

Likewise, we may define the analogous $\tilde{y}_{it-1}$ using the second set of coefficients:

$\tilde{y}_{it-1} = y_{it-1} - \sum_{j=1}^{p_i}\dot\beta_{ij}\,\Delta y_{it-j} - X_{it}'\,\dot\delta$   (36.50)

Next, we obtain our proxies by standardizing both $\Delta\tilde{y}_{it}$ and $\tilde{y}_{it-1}$, dividing by the regression standard error:

$\Delta\tilde{y}_{it} = (\Delta\tilde{y}_{it} / s_i), \qquad \tilde{y}_{it-1} = (\tilde{y}_{it-1} / s_i)$   (36.51)
where s i are the estimated standard errors from estimating each ADF in Equation (36.46).
Lastly, an estimate of the coefficient $\alpha$ may be obtained from the pooled proxy equation:

$\Delta\tilde{y}_{it} = \alpha\,\tilde{y}_{it-1} + \eta_{it}$   (36.52)

LLC show that under the null, a modified t-statistic for the resulting $\hat\alpha$ is asymptotically normally distributed:

$t_\alpha^* = \frac{t_\alpha - (N\tilde{T})\,\hat{S}_N\,\hat\sigma^{-2}\,\mathrm{se}(\hat\alpha)\,\mu_{m\tilde{T}}^*}{\sigma_{m\tilde{T}}^*} \;\to\; N(0,1)$   (36.53)

where $t_\alpha$ is the standard t-statistic for $\hat\alpha = 0$, $\hat\sigma^2$ is the estimated variance of the error term $\eta$, $\mathrm{se}(\hat\alpha)$ is the standard error of $\hat\alpha$, and:

$\tilde{T} = T - \Big(\sum_i p_i / N\Big) - 1$   (36.54)
The remaining terms, which involve complicated moment calculations, are described in
greater detail in LLC. The average standard deviation ratio, S N , is defined as the mean of
the ratios of the long-run standard deviation to the innovation standard deviation for each
individual. Its estimate is derived using kernel-based techniques. The remaining two terms,
$\mu_{m\tilde{T}}^*$ and $\sigma_{m\tilde{T}}^*$, are adjustment terms for the mean and standard deviation.
The LLC method requires a specification of the number of lags used in each cross-section
ADF regression, p i , as well as kernel choices used in the computation of S N . In addition,
you must specify the exogenous variables used in the test equations. You may elect to
include no exogenous regressors, or to include individual constant terms (fixed effects), or
to employ individual constants and trends.
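As a concrete check using the Grunfeld example reported earlier: the sample 1935-1954 gives T = 20, the selected lags in the intermediate output sum to 6 across the N = 10 firms, so (36.54) gives T-tilde = 20 - 6/10 - 1 = 18.4, and N times T-tilde = 184 matches the pooled number of observations shown in that output.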
Breitung
The Breitung method differs from LLC in two distinct ways. First, only the autoregressive
portion (and not the exogenous components) is removed when constructing the standardized proxies:

$\Delta\tilde{y}_{it} = \Big(\Delta y_{it} - \sum_{j=1}^{p_i}\hat\beta_{ij}\,\Delta y_{it-j}\Big) \big/ s_i$

$\tilde{y}_{it-1} = \Big(y_{it-1} - \sum_{j=1}^{p_i}\dot\beta_{ij}\,\Delta y_{it-j}\Big) \big/ s_i$   (36.55)

where $\hat\beta$, $\dot\beta$, and $s_i$ are as defined for LLC above. Second, the proxies are transformed and detrended:

$\Delta y_{it}^* = \sqrt{\frac{T-t}{T-t+1}}\left(\Delta\tilde{y}_{it} - \frac{\Delta\tilde{y}_{it+1} + \cdots + \Delta\tilde{y}_{iT}}{T-t}\right)$

$y_{it}^* = \tilde{y}_{it} - \tilde{y}_{i1} - \frac{t-1}{T-1}\big(\tilde{y}_{iT} - \tilde{y}_{i1}\big)$   (36.56)

The persistence parameter is then estimated from the pooled proxy equation:

$\Delta y_{it}^* = \alpha\,y_{it-1}^* + \nu_{it}$   (36.57)

Breitung shows that under the null, the resulting estimator $\hat\alpha$ is asymptotically distributed as a standard normal.
The Breitung method requires only a specification of the number of lags used in each cross-section ADF regression, $p_i$, and the exogenous regressors. Note that in contrast with LLC, no kernel computations are required.
Hadri
The Hadri panel unit root test is similar to the KPSS unit root test, and has a null hypothesis
of no unit root in any of the series in the panel. Like the KPSS test, the Hadri test is based on
the residuals from the individual OLS regressions of y it on a constant, or on a constant and
a trend. For example, if we include both the constant and a trend, we derive estimates from:
$y_{it} = \delta_i + \eta_i\,t + \epsilon_{it}$   (36.58)

Given the residuals $\hat\epsilon$ from the individual regressions, we form the LM statistic:

$LM_1 = \frac{1}{N}\sum_{i=1}^{N}\Big(\sum_t S_i(t)^2 / T^2\Big) \Big/ f_0$   (36.59)

where $S_i(t)$ are the cumulative residuals:

$S_i(t) = \sum_{s=1}^{t}\hat\epsilon_{is}$   (36.60)

and $f_0$ is the average of the individual estimators of the residual spectrum at frequency zero:

$f_0 = \sum_{i=1}^{N} f_{i0} \big/ N$   (36.61)

EViews provides several methods for estimating the $f_{i0}$. See "Unit Root Testing" on page 527 for additional details.

An alternative form of the LM statistic allows for heteroskedasticity across $i$:

$LM_2 = \frac{1}{N}\sum_{i=1}^{N}\Big(\sum_t S_i(t)^2 / T^2\Big) \Big/ f_{i0}$   (36.62)

Hadri shows that under mild assumptions the standardized statistic

$Z = \frac{\sqrt{N}\,(LM - \xi)}{\zeta} \;\to\; N(0,1)$   (36.63)

where $\xi = 1/6$ and $\zeta = 1/45$, if the model only includes constants ($\eta_i$ is set to 0 for all $i$), and $\xi = 1/15$ and $\zeta = 11/6300$, otherwise.
The Hadri panel unit root tests require only the specification of the form of the OLS regressions: whether to include only individual specific constant terms, or whether to include both
constant and trend terms. EViews reports two Z -statistic values, one based on LM 1 with
the associated homoskedasticity assumption, and the other using LM 2 that is heteroskedasticity consistent.
It is worth noting that simulation evidence suggests that in various settings (for example,
small T ), Hadri's panel unit root test experiences significant size distortion in the presence
of autocorrelation when there is no unit root. In particular, the Hadri test appears to overreject the null of stationarity, and may yield results that directly contradict those obtained
using alternative test statistics (see Hlouskova and Wagner (2006) for discussion and
details).
The Im, Pesaran, and Shin (IPS) test begins by specifying a separate ADF regression for each cross-section:

$\Delta y_{it} = \alpha_i\,y_{it-1} + \sum_{j=1}^{p_i}\beta_{ij}\,\Delta y_{it-j} + X_{it}'\,\delta + \epsilon_{it}$   (36.64)

The null hypothesis may be written as,

$H_0: \alpha_i = 0, \text{ for all } i$   (36.65)

while the alternative hypothesis is given by:

$H_1: \begin{cases} \alpha_i = 0 & \text{for } i = 1, 2, \ldots, N_1 \\ \alpha_i < 0 & \text{for } i = N_1+1, N_1+2, \ldots, N \end{cases}$   (36.66)
(where the i may be reordered as necessary) which may be interpreted as a non-zero fraction of the individual processes is stationary.
After estimating the separate ADF regressions, the average of the t-statistics for $\alpha_i$ from the individual ADF regressions, $t_{iT_i}(p_i)$, is computed:

$\bar{t}_{NT} = \Big(\sum_{i=1}^{N} t_{iT_i}(p_i)\Big) \Big/ N$   (36.67)

IPS show that a properly standardized $\bar{t}_{NT}$ has an asymptotic standard normal distribution:

$W_{\bar{t}_{NT}} = \frac{\sqrt{N}\Big(\bar{t}_{NT} - N^{-1}\sum_{i=1}^{N} E\big(t_{iT}(p_i)\big)\Big)}{\sqrt{N^{-1}\sum_{i=1}^{N}\mathrm{Var}\big(t_{iT}(p_i)\big)}} \;\to\; N(0,1)$   (36.68)
The expressions for the expected mean and variance of the ADF regression t-statistics,
E ( t iT ( p i ) ) and Var ( t iT ( p i ) ) , are provided by IPS for various values of T and p and differing test equation assumptions, and are not provided here.
The IPS test statistic requires specification of the number of lags and the specification of the
deterministic component for each cross-section ADF equation. You may choose to include
individual constants, or to include individual constant and trend terms.
The Fisher-type tests combine the p-values from individual unit root tests. If we define $\pi_i$ as the p-value from any individual unit root test for cross-section $i$, then under the null of a unit root for all $N$ cross-sections we have the asymptotic result that

$-2\sum_{i=1}^{N}\log(\pi_i) \;\to\; \chi^2_{2N}$   (36.69)

In addition, we have the standard normal form:

$Z = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\Phi^{-1}(\pi_i) \;\to\; N(0,1)$   (36.70)

where $\Phi^{-1}$ is the inverse of the standard normal cumulative distribution function.
EViews reports both the asymptotic chi-square and standard normal statistics using ADF and Phillips-Perron individual unit root tests. The null and alternative hypotheses are the same as for the IPS test.
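As a check against the Grunfeld summary output shown earlier: with N = 10 cross-sections the ADF-Fisher statistic is compared with a chi-square distribution with 2N = 20 degrees of freedom, and the reported value of 12.0000 corresponds to an upper-tail probability of roughly 0.92, in line with the Prob. column of that output.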
For both Fisher tests, you must specify the exogenous variables for the test equations. You
may elect to include no exogenous regressors, to include individual constants (effects), or
include individual constant and trend terms.
Additionally, when the Fisher tests are based on ADF test statistics, you must specify the
number of lags used in each cross-section ADF regression. For the PP form of the test, you
must instead specify a method for estimating f 0 . EViews supports estimators for f 0 based
on kernel-based sum-of-covariances. See Frequency Zero Spectrum Estimation, beginning
on page 536 for details.
Test          Null            Alternative                       Possible          Autocorrelation
                                                                Deterministic     Correction
                                                                Component         Method
LLC           Unit root       No Unit Root                      None, F, T        Lags
Breitung      Unit root       No Unit Root                      None, F, T        Lags
IPS           Unit Root       Some cross-sections without UR    F, T              Lags
Fisher-ADF    Unit Root       Some cross-sections without UR    None, F, T        Lags
Fisher-PP     Unit Root       Some cross-sections without UR    None, F, T        Kernel
Hadri         No Unit Root    Unit Root                         F, T              Kernel

None - no exogenous variables; F - fixed effect; and T - individual effect and individual trend.
An Example
In our example, we employ the time series data on nominal exchange rates used by Wright
(2000) to illustrate his modified variance ratio tests (Wright.WF1). The data in the first
page (WRIGHT) of the workfile provide the relative-to-U.S. exchange rates for the Canadian
dollar, French franc, German mark, Japanese yen, and the British pound for the 1,139 weeks
from August 1974 through May 1996. Of interest is whether the exchange rate returns, as
measured by the log differences of the rates, are i.i.d. or martingale difference, or alternately, whether the exchange rates themselves follow an exponential random walk.
We begin by performing tests on the Japanese yen. Open the JP series, then select View/Variance Ratio... to display the dialog. We will make a few changes to the default settings to match Wright's calculations. First, select Exponential random walk in the Data specification section to tell EViews that you wish to work with the log returns. Next, uncheck the Use unbiased variances and Use heteroskedastic robust S.E. checkboxes to perform the i.i.d. version of the Lo-MacKinlay test with no bias correction. Lastly, change the user-specified test periods to "2 5 10 30" to match the test periods examined by Wright. Click on OK to compute and display the results.
The top portion of the output shows the test settings and basic test results.
Joint Tests                       Value          df       Probability
Max |z| (at period 5)*            4.295371       1138     0.0001
Wald (Chi-Square)                 22.63414       4        0.0001

Individual Tests
Period      Var. Ratio     Std. Error     z-Statistic     Probability
 2          1.056126       0.029643       1.893376        0.0583
 5          1.278965       0.064946       4.295371        0.0000
 10         1.395415       0.100088       3.950676        0.0001
 30         1.576815       0.182788       3.155651        0.0016

*Probability approximation using studentized maximum modulus with
 parameter value 4 and infinite degrees of freedom
Since we have specified more than one test period, there are two sets of test results. The
Joint Tests are the tests of the joint null hypothesis for all periods, while the Individual
Tests are the variance ratio tests applied to individual periods. Here, the Chow-Denning
maximum z statistic of 4.295 is associated with the period 5 individual test. The approximate p-value of 0.0001 is obtained using the studentized maximum modulus with infinite
degrees of freedom so that we strongly reject the null of a random walk. The results are
quite similar for the Wald test statistic for the joint hypotheses. The individual statistics generally reject the null hypothesis, though the period 2 variance ratio statistic p-value is
slightly greater than 0.05.
The bottom portion of the output shows the intermediate results for the variance ratio test
calculations, including the estimated mean, individual variances, and number of observations used in each calculation.
Test Details (Mean = -0.000892835617901)

Period      Variance      Var. Ratio      Obs.
 1          0.00021       --              1138
 2          0.00022       1.05613         1137
 5          0.00027       1.27897         1134
 10         0.00029       1.39541         1129
 30         0.00033       1.57682         1109
Alternately, we may display a graph of the test statistics using the same settings. Simply
click again on View/Variance Ratio Test..., change the Output dropdown from Table to
Graph, then fill out the dialog as before and click on OK:
Joint Tests                       Value          df       Probability
Max |z| (at period 5)*            3.646683       1138     0.0012

Individual Tests
Period      Std. Error     z-Statistic     Probability
 2          0.037086       1.513412        0.1316
 5          0.076498       3.646683        0.0004
 10         0.115533       3.422512        0.0010
 30         0.205582       2.805766        0.0058
Note that the Wald test is no longer displayed since the test methodology is not consistent
with the use of heteroskedastic robust standard errors in the individual tests. The p-values
for the individual variance ratio tests, which are all generated using the wild bootstrap, are
generally consistent with the previous results, albeit with probabilities that are slightly
higher than before. The individual period 2 test, which was borderline (in)significant in the
Joint Tests                       Value          df       Probability
Max |z| (at period 5)*            5.415582       1138     0.0000
Wald (Chi-Square)                 37.92402       4        0.0000

Individual Tests
Period      Std. Error     z-Statistic     Probability
 2          0.029643       2.763085        0.0050
 5          0.064946       5.415582        0.0000
 10         0.100088       4.665193        0.0000
 30         0.182788       4.324203        0.0000
The standard errors employed in forming the individual z-statistics (and those displayed in
the corresponding graph view) are obtained from the asymptotic normal results. The probabilities for the individual z-statistics and the joint max z and Wald statistics, which all
strongly reject the null hypothesis, are obtained from the permutation bootstrap.
The preceding analysis may be extended to tests that jointly consider all five exchange rates
in a panel setting. The second page (WRIGHT_STK) of the Wright.WF1 workfile contains
the panel dataset of the relative-to-U.S. exchange rates described above (Canada, Germany,
France, Japan, U.K.). Click on the WRIGHT_STK tab to make the second page active, double
click on the EXCHANGE series to open the stacked exchange rates series, then select View/
Variance Ratio Test...
We will redo the heterogeneous Lo and MacKinlay test example from above using the panel
data series. Select Table - Fisher Combined in the Output dropdown then fill out the
remainder of the dialog as before, then click on OK. The output, which takes a moment to
generate since we are performing 5000 bootstrap replications for each cross-section, consists
of two distinct parts. The top portion of the output:
Max |z|       df        Prob.
28.252        10        0.0016
shows the test settings and provides the joint Fisher combined test statistic which, in this
case, strongly rejects the joint null hypothesis that all of the cross-sections are martingales.
The bottom portion of the output:
Cross-section Joint Tests

Cross-section      Max |z|      Prob.       Obs.
CAN                2.0413       0.0952      1138
DEU                1.7230       0.1952      1138
FRA                2.0825       0.0946      1138
JP                 3.6467       0.0016      1138
UK                 1.5670       0.2606      1138
depicts the max z statistics for the individual cross-sections, along with corresponding
wild bootstrap probabilities. Note that four of the five individual test statistics do not reject
the joint hypothesis at conventional levels. It would therefore appear that the Japanese yen
result is the driving force behind the Fisher combined test rejection.
Technical Details
Suppose we have the time series $\{Y_t\} = (Y_0, Y_1, Y_2, \ldots, Y_T)$ satisfying
$$\Delta Y_t = \mu + \epsilon_t \tag{36.71}$$
where $\mu$ is an arbitrary drift parameter. The key properties of a random walk that we would like to test are $E(\epsilon_t) = 0$ for all $t$ and $E(\epsilon_t \epsilon_{t-j}) = 0$ for any positive $j$.
First, Lo and MacKinlay make the strong assumption that the $\epsilon_t$ are i.i.d. Gaussian with variance $\sigma^2$ (though the normality assumption is not strictly necessary). Lo and MacKinlay term this the homoskedastic random walk hypothesis, though others refer to this as the i.i.d. null.

Alternately, Lo and MacKinlay outline a heteroskedastic random walk hypothesis where they weaken the i.i.d. assumption and allow for fairly general forms of conditional heteroskedasticity and dependence. This hypothesis is sometimes termed the martingale null, since it offers a set of sufficient (but not necessary) conditions for $\epsilon_t$ to be a martingale difference sequence (m.d.s.).
We may define estimators for the mean of the first difference and the scaled variance of the $q$-th difference:
$$\hat{\mu} = \frac{1}{T}\sum_{t=1}^{T}(Y_t - Y_{t-1}), \qquad \hat{\sigma}^2(q) = \frac{1}{Tq}\sum_{t=q}^{T}(Y_t - Y_{t-q} - q\hat{\mu})^2 \tag{36.72}$$
and the variance ratio $VR(q) = \hat{\sigma}^2(q)/\hat{\sigma}^2(1)$. The Lo and MacKinlay variance ratio test statistic is
$$z(q) = (VR(q) - 1)\cdot\big[s^2(q)\big]^{-1/2} \tag{36.73}$$
which is asymptotically standard normal. Under the i.i.d. null hypothesis, the asymptotic variance is
$$s^2(q) = \frac{2(2q-1)(q-1)}{3qT} \tag{36.74}$$
while under the m.d.s. assumption we may use the kernel estimator,
$$s^2(q) = \sum_{j=1}^{q-1}\left[\frac{2(q-j)}{q}\right]^2\hat{\delta}_j \tag{36.75}$$
where
$$\hat{\delta}_j = \left\{\sum_{t=j+1}^{T}(\Delta Y_t - \hat{\mu})^2(\Delta Y_{t-j} - \hat{\mu})^2\right\}\bigg/\left\{\sum_{t=j+1}^{T}(\Delta Y_t - \hat{\mu})^2\right\}^2 \tag{36.76}$$
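The following is a minimal NumPy sketch (not EViews code) of these calculations; the function name and arguments are hypothetical, and no bias corrections are applied, so it simply mirrors Equations (36.72) through (36.76).

import numpy as np

def lo_mackinlay_vr(y, q, robust=False):
    """Variance ratio VR(q) and z(q) for a vector of levels y, per Eq. (36.72)-(36.76)."""
    d = np.diff(y)                                # first differences
    T = d.size
    mu = d.mean()                                 # Eq. (36.72): mean estimator
    var1 = np.sum((d - mu) ** 2) / T              # scaled variance at q = 1
    dq = y[q:] - y[:-q]                           # q-th differences Y_t - Y_{t-q}
    varq = np.sum((dq - q * mu) ** 2) / (T * q)   # Eq. (36.72): sigma^2(q)
    vr = varq / var1                              # variance ratio VR(q)
    if not robust:
        s2 = 2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * T)         # Eq. (36.74): i.i.d. case
    else:
        e2 = (d - mu) ** 2
        delta = np.array([np.sum(e2[j:] * e2[:-j]) / np.sum(e2[j:]) ** 2
                          for j in range(1, q)])                  # Eq. (36.76)
        weights = (2.0 * (q - np.arange(1, q)) / q) ** 2          # kernel weights
        s2 = np.sum(weights * delta)                              # Eq. (36.75): m.d.s. case
    z = (vr - 1.0) / np.sqrt(s2)                  # Eq. (36.73)
    return vr, z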
Wild Bootstrap
Kim (2006) offers a wild bootstrap approach to improving the small sample properties of
variance ratio tests. The approach involves computing the individual (Lo and MacKinlay)
and joint (Chow and Denning, Wald) variance ratio test statistics on samples of T observations formed by weighting the original data by mean 0 and variance 1 random variables, and
using the results to form bootstrap distributions of the test statistics. The bootstrap p-values
are computed directly from the fraction of replications falling outside the bounds defined by
the estimated statistic.
EViews offers three distributions for constructing wild bootstrap weights: the two-point, the Rademacher, and the normal. Kim's simulations indicate that the test results are generally insensitive to the choice of wild bootstrap distribution.
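A rough sketch of the wild bootstrap just described (again illustrative NumPy rather than EViews code, with hypothetical names, a normal weighting distribution, and the i.i.d. form of the statistic) recomputes the statistic on re-weighted differences and takes the p-value as the fraction of replications at least as extreme as the observed value:

import numpy as np

def iid_vr_z(d, q):
    """z(q) from a vector of first differences d (i.i.d. form, no bias correction)."""
    T, mu = d.size, d.mean()
    var1 = np.sum((d - mu) ** 2) / T
    levels = np.concatenate(([0.0], np.cumsum(d)))      # rebuild levels from the differences
    dq = levels[q:] - levels[:-q]
    varq = np.sum((dq - q * mu) ** 2) / (T * q)
    s2 = 2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * T)
    return (varq / var1 - 1.0) / np.sqrt(s2)

def wild_bootstrap_pvalue(d, q, reps=5000, seed=0):
    rng = np.random.default_rng(seed)
    z_obs = iid_vr_z(d, q)
    z_boot = np.empty(reps)
    for b in range(reps):
        eta = rng.standard_normal(d.size)     # mean 0, variance 1 weights
        z_boot[b] = iid_vr_z(d * eta, q)      # statistic on the weighted data
    return np.mean(np.abs(z_boot) >= np.abs(z_obs))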
Rank and Rank Score Tests

Wright's rank-based tests replace the differences with standardized ranks. Letting $r(\Delta Y_t)$ denote the rank of $\Delta Y_t$ among all $T$ first differences, define
$$r_{1t} = \left(r(\Delta Y_t) - \frac{T+1}{2}\right)\bigg/\sqrt{\frac{(T-1)(T+1)}{12}}, \qquad r_{2t} = \Phi^{-1}\!\left(\frac{r(\Delta Y_t)}{T+1}\right) \tag{36.77}$$
In cases where there are tied ranks, the denominator in $r_{1t}$ may be modified slightly to account for the tie handling.
The Wright variance ratio test statistics are obtained by computing the Lo and MacKinlay
homoskedastic test statistic using the ranks or rank scores in place of the original data.
Under the i.i.d. null hypothesis, the exact sampling distribution of the statistics may be
approximated using a permutation bootstrap.
Sign Test
Wright also proposes a modification of the homoskedastic Lo and MacKinlay statistic in which each $\Delta Y_t$ is replaced by its sign. This statistic is valid under the m.d.s. null hypothesis, and under the assumption that $\mu = 0$, the exact sampling distribution may also be approximated using a permutation bootstrap. (EViews does not allow for non-zero means when performing the sign test since allowing $\mu \ne 0$ introduces a nuisance parameter into the sampling distribution.)
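As a rough illustration (not EViews code, and assuming a zero mean as in the sign test), the rank, rank score, and sign transforms may be computed as below; each transformed series is then passed through the homoskedastic variance ratio statistic, and its null distribution is approximated by recomputing the statistic over random permutations of the transformed series.

import numpy as np
from scipy.stats import rankdata, norm

def wright_transforms(d):
    """Rank (r1), rank score (r2), and sign series for first differences d; see Eq. (36.77)."""
    T = d.size
    r = rankdata(d)                                            # ranks of the differences
    r1 = (r - (T + 1) / 2.0) / np.sqrt((T - 1) * (T + 1) / 12.0)
    r2 = norm.ppf(r / (T + 1))                                 # inverse normal rank scores
    s = np.where(d > 0, 1.0, -1.0)                             # sign transform (mu = 0)
    return r1, r2, s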
Panel Statistics
EViews offers two approaches to variance ratio testing in panel settings.
First, under the assumption that cross-sections are independent, with cross-section heterogeneity of the processes, we may compute separate joint variance ratio tests for each cross-section, then combine the p-values from the cross-section results using the Fisher approach as in Maddala and Wu (1999). If we define $p_i$ to be a p-value from the $i$-th cross-section, then under the hypothesis that the null hypothesis holds for all $N$ cross-sections,
$$-2\sum_{i=1}^{N}\log(p_i) \to \chi^2_{2N} \tag{36.78}$$
as $T \to \infty$.
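For instance, the Fisher statistic for the five cross-section p-values reported earlier in this section can be reproduced in a few lines (an illustrative computation, not EViews code):

import numpy as np
from scipy.stats import chi2

p = np.array([0.0952, 0.1952, 0.0946, 0.0016, 0.2606])   # CAN, DEU, FRA, JP, UK
fisher = -2.0 * np.sum(np.log(p))                         # Eq. (36.78); approximately 28.25
prob = chi2.sf(fisher, df=2 * p.size)                     # compare to chi-square with 2N df
print(fisher, prob)                                       # roughly 28.25 and 0.0016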
Alternately, if we assume homogeneity across all cross-sections, we may stack the panel
observations and compute the variance ratio test for the stacked data. In this approach, the
only adjustment for the panel nature of the stacked data is in ensuring that lag calculations
do not span cross-section boundaries.
For example, the residuals from a fitted ARMA model can be tested to see if there is any non-linear dependence in the series after the linear ARMA model has been fitted.
The idea behind the test is fairly simple. To perform the test, we first choose a distance, $\epsilon$. We then consider a pair of points. If the observations of the series truly are iid, then for any pair of points, the probability of the distance between these points being less than or equal to $\epsilon$ will be constant. We denote this probability by $c_1(\epsilon)$.
We can also consider sets consisting of multiple pairs of points. One way we can choose sets
of pairs is to move through the consecutive observations of the sample in order. That is,
given an observation $s$, and an observation $t$ of a series $X$, we can construct a set of pairs of the form:
$$\big\{\{X_s, X_t\},\ \{X_{s+1}, X_{t+1}\},\ \{X_{s+2}, X_{t+2}\},\ \ldots,\ \{X_{s+m-1}, X_{t+m-1}\}\big\} \tag{36.79}$$
where $m$ is the number of consecutive points used in the set, or embedding dimension. We denote the joint probability of every pair of points in the set satisfying the epsilon condition by the probability $c_m(\epsilon)$.
The BDS test proceeds by noting that under the assumption of independence, this probability will simply be the product of the individual probabilities for each pair. That is, if the
observations are independent,
$$c_m(\epsilon) = c_1(\epsilon)^m. \tag{36.80}$$
When working with sample data, we do not directly observe $c_1(\epsilon)$ or $c_m(\epsilon)$. We can only estimate them from the sample. As a result, we do not expect this relationship to hold
exactly, but only with some error. The larger the error, the less likely it is that the error is
caused by random sample variation. The BDS test provides a formal basis for judging the
size of this error.
To estimate the probability for a particular dimension, we simply go through all the possible
sets of that length that can be drawn from the sample and count the number of sets which
satisfy the $\epsilon$ condition. The ratio of the number of sets satisfying the condition divided by the total number of sets provides the estimate of the probability. Given a sample of $n$ observations of a series $X$, we can state this condition in mathematical notation,
$$c_{m,n}(\epsilon) = \frac{2}{(n-m+1)(n-m)}\sum_{s=1}^{n-m+1}\ \sum_{t=s+1}^{n-m+1}\ \prod_{j=0}^{m-1} I_\epsilon(X_{s+j}, X_{t+j}) \tag{36.81}$$
where
$$I_\epsilon(x, y) = \begin{cases} 1 & \text{if } |x - y| \le \epsilon \\ 0 & \text{otherwise.}\end{cases} \tag{36.82}$$
We can then use these sample estimates of the probabilities to construct a test statistic for independence:
$$b_{m,n}(\epsilon) = c_{m,n}(\epsilon) - c_{1,n-m+1}(\epsilon)^m \tag{36.83}$$
where the second term discards the last $m-1$ observations from the sample so that it is based on the same number of terms as the first statistic.
Under the assumption of independence, we would expect this statistic to be close to zero. In
fact, it is shown in Brock et al. (1996) that
$$\sqrt{n-m+1}\;\frac{b_{m,n}(\epsilon)}{\sigma_{m,n}(\epsilon)} \to N(0, 1) \tag{36.84}$$
where
$$\sigma^2_{m,n}(\epsilon) = 4\left(k^m + 2\sum_{j=1}^{m-1}k^{m-j}c_1^{2j} + (m-1)^2 c_1^{2m} - m^2 k c_1^{2m-2}\right) \tag{36.85}$$
and where $c_1$ can be estimated using $c_{1,n}$. $k$ is the probability of any triplet of points lying within $\epsilon$ of each other, and is estimated by counting the number of sets satisfying the sample condition:
$$k_n(\epsilon) = \frac{2}{n(n-1)(n-2)}\sum_{t=1}^{n}\ \sum_{s=t+1}^{n}\ \sum_{r=s+1}^{n}\big[I_\epsilon(X_t, X_s)I_\epsilon(X_s, X_r) + I_\epsilon(X_t, X_r)I_\epsilon(X_r, X_s) + I_\epsilon(X_s, X_t)I_\epsilon(X_t, X_r)\big] \tag{36.86}$$
As an illustration, consider a series generated by the nonlinear moving average model
$$y_t = u_t + 8u_{t-1}u_{t-2} \tag{36.87}$$
where $u_t$ is a normal random variable. On simulated data, the correlogram of this series shows no statistically significant correlations, yet the BDS test strongly rejects the hypothesis that the observations of the series are independent (note that the Q-statistics on the squared levels of the series also reject independence).
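A brute-force sketch of the correlation-integral estimates in Equations (36.81) through (36.83) is shown below (illustrative NumPy code, not the EViews implementation, and quadratic in the sample size so only suitable for short series):

import numpy as np

def bds_terms(x, m, eps):
    """Return c_{m,n}, c_{1,n-m+1}^m, and their difference b_{m,n} of Eq. (36.83)."""
    n = x.size

    def corr_integral(series, dim):
        npts = series.size - dim + 1                     # number of overlapping m-histories
        count = 0
        for s in range(npts - 1):
            for t in range(s + 1, npts):
                # every pair {X_{s+j}, X_{t+j}}, j = 0,...,dim-1, must lie within eps
                if np.all(np.abs(series[s:s + dim] - series[t:t + dim]) <= eps):
                    count += 1
        return 2.0 * count / (npts * (npts - 1))          # Eq. (36.81)

    c_m = corr_integral(x, m)
    c_1 = corr_integral(x[:n - m + 1], 1)                 # drops the last m-1 observations
    return c_m, c_1 ** m, c_m - c_1 ** m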
References
Banerjee, Anindya, Robin L. Lumsdaine, and James H. Stock (1992). Recursive and Sequential Tests of the Unit-Root and Trend-Break Hypotheses: Theory and International Evidence, Journal of Business & Economic Statistics, 10, 271–287.
Bhargava, A. (1986). On the Theory of Testing for Unit Roots in Observed Time Series, Review of Economic Studies, 53, 369–384.
Breitung, Jörg (2000). The Local Power of Some Unit Root Tests for Panel Data, in B. Baltagi (ed.), Advances in Econometrics, Vol. 15: Nonstationary Panels, Panel Cointegration, and Dynamic Panels, Amsterdam: JAI Press, 161–178.
Brock, William, Davis Dechert, Jose Scheinkman and Blake LeBaron (1996). A Test for Independence Based on the Correlation Dimension, Econometric Reviews, August, 15(3), 197–235.
Choi, I. (2001). Unit Root Tests for Panel Data, Journal of International Money and Finance, 20, 249–272.
Chow, K. Victor and Karen C. Denning (1993). A Simple Multiple Variance Ratio Test, Journal of Econometrics, 58, 385–401.
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford: Oxford University Press.
Dezhbaksh, Hashem (1990). The Inappropriate Use of Serial Correlation Tests in Dynamic Linear Models, Review of Economics and Statistics, 72, 126–132.
Dickey, D.A. and W.A. Fuller (1979). Distribution of the Estimators for Autoregressive Time Series with a Unit Root, Journal of the American Statistical Association, 74, 427–431.
Elliott, Graham, Thomas J. Rothenberg and James H. Stock (1996). Efficient Tests for an Autoregressive Unit Root, Econometrica, 64, 813–836.
Engle, Robert F. and C. W. J. Granger (1987). Co-integration and Error Correction: Representation, Estimation, and Testing, Econometrica, 55, 251–276.
Fong, Wai Mun, See Kee Koh, and Sam Ouliaris (1997). Joint Variance-Ratio Tests of the Martingale Hypothesis for Exchange Rates, Journal of Business and Economic Statistics, 15, 51–59.
Fisher, R. A. (1932). Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.
Hadri, Kaddour (2000). Testing for Stationarity in Heterogeneous Panel Data, Econometric Journal, 3, 148–161.
Hall, Alistair (1994). Testing for a Unit Root in Time Series With Pretest Data Based Model Selection, Journal of Business and Economic Statistics, 12, 461–470.
Hamilton, James D. (1994). Time Series Analysis, Princeton: Princeton University Press.
Hayashi, Fumio (2000). Econometrics, Princeton, NJ: Princeton University Press.
Hlouskova, Jaroslava and M. Wagner (2006). The Performance of Panel Unit Root and Stationarity Tests: Results from a Large Scale Simulation Study, Econometric Reviews, 25, 85–116.
Im, K. S., M. H. Pesaran, and Y. Shin (2003). Testing for Unit Roots in Heterogeneous Panels, Journal of Econometrics, 115, 53–74.
Kim, Dukpa and Pierre Perron (2009). Unit Root Tests Allowing for a Break in the Trend Function at an Unknown Time Under Both the Null and Alternative Hypotheses, Journal of Econometrics, 148, 1–13.
Kwiatkowski, Denis, Peter C. B. Phillips, Peter Schmidt and Yongcheol Shin (1992). Testing the Null Hypothesis of Stationarity against the Alternative of a Unit Root, Journal of Econometrics, 54, 159–178.
Levin, A., C. F. Lin, and C. Chu (2002). Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties, Journal of Econometrics, 108, 1–24.
Lo, Andrew W. and A. Craig MacKinlay (1988). Stock Market Prices Do Not Follow Random Walks: Evidence From a Simple Specification Test, The Review of Financial Studies, 1, 41–66.
Lo, Andrew W. and A. Craig MacKinlay (1989). The Size and Power of the Variance Ratio Test in Finite Samples, Journal of Econometrics, 40, 203–238.
MacKinnon, James G. (1991). Critical Values for Cointegration Tests, Chapter 13 in R. F. Engle and C. W. J. Granger (eds.), Long-run Economic Relationships: Readings in Cointegration, Oxford: Oxford University Press.
MacKinnon, James G. (1996). Numerical Distribution Functions for Unit Root and Cointegration Tests, Journal of Applied Econometrics, 11, 601–618.
Maddala, G. S. and Shaowen Wu (1999). A Comparative Study of Unit Root Tests with Panel Data and a New Simple Test, Oxford Bulletin of Economics and Statistics, 61, 631–652.
Nelson, Charles R. and Charles I. Plosser (1982). Trends and Random Walks in Macroeconomic Time Series, Journal of Monetary Economics, 10, 139–162.
Newey, Whitney and Kenneth West (1994). Automatic Lag Selection in Covariance Matrix Estimation, Review of Economic Studies, 61, 631–653.
Ng, Serena and Pierre Perron (2001). Lag Length Selection and the Construction of Unit Root Tests with Good Size and Power, Econometrica, 69, 1519–1554.
Perron, Pierre (1989). The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis, Econometrica, 57, 1361–1401.
Perron, Pierre (1997). Further Evidence on Breaking Trend Functions in Macroeconomic Variables, Journal of Econometrics, 80, 355–385.
Perron, Pierre (2006). Dealing with Structural Breaks, in Palgrave Handbook of Econometrics, Vol. 1: Econometric Theory, K. Patterson and T. C. Mills (eds.), Palgrave Macmillan, 278–352.
Perron, Pierre and Timothy J. Vogelsang (1992a). Nonstationarity and Level Shifts with an Application to Purchasing Power Parity, Journal of Business & Economic Statistics, 10, 301–320.
Perron, Pierre and Timothy J. Vogelsang (1992b). Testing for a Unit Root in a Time Series with a Changing Mean: Corrections and Extensions, Journal of Business & Economic Statistics, 10, 467–470.
Phillips, P.C.B. and P. Perron (1988). Testing for a Unit Root in Time Series Regression, Biometrika, 75, 335–346.
Richardson, Matthew and Tom Smith (1991). Tests of Financial Models in the Presence of Overlapping Observations, The Review of Financial Studies, 4, 227–254.
Said, Said E. and David A. Dickey (1984). Testing for Unit Roots in Autoregressive Moving Average Models of Unknown Order, Biometrika, 71, 599–607.
Vogelsang, Timothy J. and Pierre Perron (1998). Additional Test for Unit Root Allowing for a Break in the Trend Function at an Unknown Time, International Economic Review, 39, 1073–1100.
Vogelsang, Timothy J. (1993). Unpublished computer program.
Wright, Jonathan H. (2000). Alternative Variance-Ratio Tests Using Ranks and Signs, Journal of Business and Economic Statistics, 18, 1–9.
Zivot, Eric and Donald W. K. Andrews (1992). Further Evidence on the Great Crash, the Oil-Price Shock, and the Unit-Root Hypothesis, Journal of Business & Economic Statistics, 10, 251–270.
Background
A system is a group of equations containing unknown parameters. Systems can be estimated
using a number of multivariate techniques that take into account the interdependencies
among the equations in the system.
The general form of a system is:
$$f(y_t, x_t, \beta) = \epsilon_t \tag{37.1}$$
where $y_t$ is a vector of endogenous variables, $x_t$ is a vector of exogenous variables, and $\epsilon_t$ is a vector of possibly serially correlated disturbances. The task of estimation is to find estimates of the vector of parameters $\beta$.
Systems and models often work together quite closely. You might estimate the parameters of
a system of equations, and then create a model in order to forecast or simulate values of the
endogenous variables in the system. We discuss this process in greater detail in Chapter 40.
Models, on page 699.
Cross-Equation Weighting
This method accounts for cross-equation heteroskedasticity by minimizing the weighted
sum-of-squared residuals. The equation weights are the inverses of the estimated equation
variances, and are derived from unweighted estimation of the parameters of the system. This
method yields identical results to unweighted single-equation least squares if there are no
cross-equation restrictions.
You may type the specification directly into the system window, or you may load a specification from a text file. You may also insert a text file using the right-mouse button menu and selecting Insert Text File...
To estimate the parameters of your system of equations, you should first create a system
object and specify the system of equations. Click on Object/New Object.../System or type
system in the command window. The system object window should appear. When you first
create the system, the window will be blank. You will fill the system specification window
with text describing the equations, and potentially, lines describing the instruments and the
parameter starting values.
From a list of selected variables, EViews can also automatically generate linear equations in a system. To use this procedure, first highlight the dependent variables that will be in the system. Next, double click on any of the highlighted series, and select Open/Open System..., or right click and select Open/as System.... The Make System dialog box should appear with the variable names entered in the Dependent variables field. You can augment the specification by adding regressors or AR terms, either estimated with common or equation specific coefficients. See System Procs on page 599 for additional details on this dialog.

The Make System proc is also available from a Group object (see Make System, on page 566).
Equations
Enter your equations, by formula, using standard EViews expressions. The equations in your
system should be behavioral equations with unknown coefficients and an implicit error
term.
You may also impose adding up constraints. Suppose for the equation:
y = c(1)*x1 + c(2)*x2 + c(3)*x3
you wish to impose C(1)+C(2)+C(3)=1. You can impose this restriction by specifying the equation as:
y = c(1)*x1 + c(2)*x2 + (1-c(1)-c(2))*x3
The equations in a system may contain autoregressive (AR) error specifications, but
not MA, SAR, or SMA error specifications. You must associate coefficients with each
AR specification. Enclose the entire AR specification in square brackets and follow
each AR with an =-sign and a coefficient. For example:
cs = c(1) + c(2)*gdp + [ar(1)=c(3), ar(2)=c(4)]
You can constrain all of the equations in a system to have the same AR coefficient by
giving all equations the same AR coefficient number, or you can estimate separate AR
processes, by assigning each equation its own coefficient.
Equations in a system need not have a dependent variable followed by an equal sign
and then an expression. The =-sign can be anywhere in the formula, as in:
log(unemp/(1-unemp)) = c(1) + c(2)*dmr
You can also write the equation as a simple expression without a dependent variable,
as in:
(c(1)*x + c(2)*y + 4)^2
When encountering an expression that does not contain an equal sign, EViews sets
the entire expression equal to the implicit error term.
Instruments
If you plan to estimate your system using two-stage least squares, three-stage least squares,
or GMM, you must specify the instrumental variables to be used in estimation. There are
several ways to specify your instruments, with the appropriate form depending on whether
you wish to have identical instruments in each equation, and whether you wish to compute
the projections on an equation-by-equation basis, or whether you wish to compute a
restricted projection using the stacked system.
In the simplest (default) case, EViews will form your instrumental variable projections on an
equation-by-equation basis. If you prefer to think of this process as a two-step (2SLS) procedure, the first-stage regression of the variables in your model on the instruments will be run
separately for each equation.
In this setting, there are two ways to specify your instruments. If you would like to use identical instruments in every equation, you should include a line beginning with the keyword @INST or INST, followed by a list of all the exogenous variables to be used as instruments. For example, the line:
@inst gdp(-1 to -4) x gov
instructs EViews to use these six variables as instruments for all of the equations in the system. System estimation will involve a separate projection for each equation in your system.
You may also specify different instruments for each equation by appending an @-sign at
the end of the equation, followed by a list of instruments for that equation. For example:
cs = c(1)+c(2)*gdp+c(3)*cs(-1) @ cs(-1) inv(-1) gov
inv = c(4)+c(5)*gdp+c(6)*gov @ gdp(-1) gov
The first equation uses CS(-1), INV(-1), GOV, and a constant as instruments, while the second equation uses GDP(-1), GOV, and a constant as instruments.
Lastly, you can mix the two methods. Any equation without individually specified instruments will use the instruments specified by the @inst statement. The system:
@inst gdp(-1 to -4) x gov
cs = c(1)+c(2)*gdp+c(3)*cs(-1)
inv = c(4)+c(5)*gdp+c(6)*gov @ gdp(-1) gov
will use the instruments GDP(-1), GDP(-2), GDP(-3), GDP(-4), X, GOV, and C, for the CS equation, but only GDP(-1), GOV, and C, for the INV equation.
As noted above, the EViews default behavior is to perform the instrumental variables projection on an equation-by-equation basis. You may, however, wish to perform the projections
on the stacked system. Notably, where the number of instruments is large, relative to the
number of observations, stacking the equations and instruments prior to performing the projection may be the only feasible way to compute 2SLS estimates.
To designate instruments for a stacked projection, you should use the @stackinst statement (note: this statement is only available for systems estimated by 2SLS or 3SLS; it is not
available for systems estimated using GMM).
In a @stackinst statement, the @STACKINST keyword should be followed by a list of
stacked instrument specifications. Each specification is a comma delimited list of series
enclosed in parentheses (one per equation), describing the instruments to be constrained in
a stacked specification.
For example, the following @stackinst specification creates two instruments in a three
equation model:
@stackinst (z1,z2,z3) (m1,m1,m1)
This statement instructs EViews to form two stacked instruments, one by stacking the separate series Z1, Z2, and Z3, and the other formed by stacking M1 three times. The first-stage
instrumental variables projection is then of the variables in the stacked system on the
stacked instruments.
When working with systems that have a large number of equations, the above syntax may
be unwieldy. For these cases, EViews provides a couple of shortcuts. First, for instruments
that are identical in all equations, you may use an * after the comma to instruct EViews to
repeat the specified series. Thus, the above statement is equivalent to:
@stackinst (z1,z2,z3) (m1,*)
Second, for non-identical instruments, you may specify a set of stacked instruments using an EViews group object, so long as the number of variables in the group is equal to the number of equations in the system. Thus, if you create a group Z with,
group z z1 z2 z3
you may refer to the group by name in the stacked instrument list in place of the parenthesized list of series.
You can, of course, combine ordinary instrument and stacked instrument specifications.
This situation is equivalent to having common and equation specific coefficients for variables in your system. Simply think of the stacked instruments as representing common
(coefficient) instruments, and ordinary instruments as representing equation specific (coefficient) instruments. For example, consider the system given by,
@stackinst (z1,z2,z3) (m1,*)
@inst ia
y1 = c(1)*x1
y2 = c(1)*x2
y3 = c(1)*x3 @ ic
The stacked instruments for the three equations in this system are given by:
$$\begin{pmatrix} Z1 & M1 & IA & C & 0 & 0 & 0 & 0 & 0 \\ Z2 & M1 & 0 & 0 & IA & C & 0 & 0 & 0 \\ Z3 & M1 & 0 & 0 & 0 & 0 & IA & C & IC \end{pmatrix} \tag{37.2}$$
so it is easy to see that this specification is equivalent to the following stacked specification,
@stackinst (z1, z2, z3) (m1, *) (ia, 0, 0) (0, ia, 0) (0, 0, ia) (0, 0, ic)
since the common instrument specification,
@inst ia
is equivalent to:
@stackinst (ia, 0, 0) (0, ia, 0) (0, 0, ia)
Additional Comments
If you include a C in the stacked instrument list, it will not be included in the individual equations. If you do not include the C as a stacked instrument, it will be
included as an instrument in every equation, whether specified explicitly or not.
You should list all exogenous right-hand side variables as instruments for a given
equation.
Identification requires that there should be at least as many instruments (including
the constant) in each equation as there are right-hand side variables in that equation.
The @stackinst statement is only available for estimation by 2SLS and 3SLS. It is
not currently supported for GMM.
If you estimate your system using a method that does not use instruments, all instrument specification lines will be ignored.
Starting Values
For systems that contain nonlinear equations, you can include a line that begins with param
to provide starting values for some or all of the parameters. List pairs of parameters and values. For example:
param
c(1) .15
b(3) .5
sets the initial values of C(1) and B(3). If you do not provide starting values, EViews uses
the values in the current coefficient vector. In ARCH estimation, by default, EViews does
provide a set of starting coefficients. Users are able to provide their own set of starting values by selecting User Supplied in the Starting coefficient value field located in the Options
tab.
The drop-down menu marked Estimation Method provides you with several options for the
estimation method. You may choose from one of a number of methods for estimating the
parameters of your specification.
The estimation dialog may change to reflect your choice, providing you with additional
options. If you select an estimator which uses instrumental variables, a checkbox will
appear, prompting you to choose whether to Add lagged regressors to instruments for linear equations with AR terms. As the checkbox label suggests, if selected, EViews will add
lagged values of the dependent and independent variables to the instrument list when estimating AR models. The lag order for these instruments will match the AR order of the specification. This automatic lag inclusion reflects the fact that EViews transforms the linear
specification to a nonlinear specification when estimating AR models, and that the lagged
values are ideal instruments for the transformed specification. If you wish to maintain precise control over the instruments added to your model, you should unselect this option.
Additional options appear if you are estimating a GMM specification. Note that the GMM-Cross section option uses a weighting matrix that is robust to heteroskedasticity and contemporaneous correlation of unknown form, while the GMM-Time series (HAC) option extends this robustness to autocorrelation of unknown form.
If you select either GMM method, EViews will display a checkbox labeled Identity weighting matrix in estimation. If selected, EViews will estimate the model using identity
weights, and will use the estimated coefficients and GMM specification you provide to compute a coefficient covariance matrix that is robust to cross-section heteroskedasticity (White)
or heteroskedasticity and autocorrelation (Newey-West). If this option is not selected,
EViews will use the GMM weights both in estimation, and in computing the coefficient
covariances.
When you select the GMM-Time series (HAC) option, the dialog displays additional options for specifying the weighting matrix. The new options will appear on the right side of the dialog. These options control the computation of the heteroskedasticity and autocorrelation robust (HAC) weighting matrix. See Technical Discussion on page 610 for a more detailed discussion of these options.
The Kernel Options determines the functional form of the kernel used to weight the autocovariances to compute the weighting matrix. The Bandwidth Selection option determines how the weights given by the kernel change with the lags of the autocovariances in the computation of the weighting matrix. If you select Fixed bandwidth, you may enter a number for the bandwidth or type nw to use Newey and West's fixed bandwidth selection criterion.
The Prewhitening option runs a preliminary VAR(1) prior to estimation to soak up the
correlation in the moment conditions.
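As a point of reference, a HAC weighting matrix of the kind controlled by these options may be sketched as follows (a Bartlett kernel with a fixed bandwidth, in illustrative NumPy code rather than EViews; the actual EViews computation supports additional kernels, bandwidth selection rules, and prewhitening):

import numpy as np

def bartlett_hac(u, bandwidth):
    """Long-run covariance of a T x k matrix of mean-zero moment contributions u."""
    T = u.shape[0]
    S = u.T @ u / T                                   # lag-0 term
    for lag in range(1, int(bandwidth) + 1):
        w = 1.0 - lag / (bandwidth + 1.0)             # Bartlett kernel weight
        gamma = u[lag:].T @ u[:-lag] / T              # lag-th autocovariance
        S += w * (gamma + gamma.T)                    # add the symmetrized weighted term
    return S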
If the ARCH - Conditional Heteroskedasticity method is selected, the dialog displays the
options appropriate for ARCH models. Model type allows you to select among three different multivariate ARCH models: Diagonal VECH, Constant Conditional Correlation (CCC),
and Diagonal BEKK. Auto-regressive order indicates the number of autoregressive terms
included in the model. You may use the Variance Regressors edit field to specify any regressors in the variance equation.
The coefficient specifications for the auto-regressive terms and regressors in the variance
equation may be fine-tuned using the controls in the ARCH coefficient restrictions section
of the dialog page. Each auto-regression or regressor term is displayed in the Coefficient list.
You should select a term to modify it, and in the Restriction field select a type coefficient
specification for that term. For the Diagonal VECH model, each of the coefficient matrices
may be restricted to be Scalar, Diagonal, Rank One, Full Rank, Indefinite Matrix or (in the
case of the constant coefficient) Variance Target. The options for the BEKK model behave
the same except that the ARCH, GARCH, and TARCH terms are restricted to be Diagonal. For the CCC model, Scalar is the only option for the ARCH, TARCH and GARCH terms, while Scalar and Variance Target are allowed for the constant term. For exogenous variables you may choose between Individual and Common, indicating whether the parameters are restricted to be the same for all variance equations (common) or are unrestricted.
By default, the conditional distribution of the error terms is assumed to be Multivariate
Normal. You have the option of instead using Multivariate Student's t by selecting it in the
Error distribution dropdown list.
Options
For weighted least squares, SUR, weighted TSLS, 3SLS, GMM, and nonlinear systems of equations, there are additional issues involving the procedure for computing the GLS weighting matrix and the coefficient vector, and for ARCH systems, the coefficient vector used in estimation, as well as backcasting and robust standard error options.
To specify the method used in iteration, click on the Options tab.
The estimation option controls the method of iterating over coefficients, over the weighting matrices, or both:

Update weights once, then Iterate coefs to convergence is the default method. By default, EViews carries out a first-stage estimation of the coefficients using no weighting matrix (the identity matrix). Using starting values obtained from OLS (or TSLS, if there are instruments), EViews iterates the first-stage estimates until the coefficients converge. If the specification is linear, this procedure involves a single OLS or TSLS regression. The residuals from this first-stage iteration are used to form a consistent estimate of the weighting matrix. In the second stage of the procedure, EViews uses the estimated weighting matrix in forming new estimates of the coefficients. If the model is nonlinear, EViews iterates the coefficient estimates until convergence.

Update weights once, then Update coefs once performs the first-stage estimation of the coefficients, and constructs an estimate of the weighting matrix. In the second stage, EViews does not iterate the coefficients to convergence, instead performing a single coefficient iteration step. Since the first stage coefficients are consistent, this one-step update is asymptotically efficient, but unless the specification is linear, does not produce results that are identical to the first method.
For ARCH systems, the presample conditional covariance matrix is initialized by backcasting. With smoothing parameter $\lambda$, the presample covariance is set to
$$H_0 = \bar{e}_0\bar{e}_0' = \lambda^T\hat{H} + (1-\lambda)\sum_{j=0}^{T-1}\lambda^{T-j-1}\,e_{T-j}\,e_{T-j}' \tag{37.3}$$
where:
$$\hat{H} = \sum_{t=1}^{T}(e_te_t')/T \tag{37.4}$$
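A minimal sketch of this backcast (illustrative NumPy code, not EViews; the function name is hypothetical and the default smoothing parameter of 0.7 matches the value reported in the ARCH output later in this chapter) is:

import numpy as np

def backcast_presample_cov(e, lam=0.7):
    """Presample covariance H_0 per Eq. (37.3)-(37.4); e is a T x k residual matrix."""
    T = e.shape[0]
    H_bar = e.T @ e / T                                       # Eq. (37.4)
    H0 = lam ** T * H_bar
    for j in range(T):                                        # j = 0, ..., T-1
        et = e[T - 1 - j][:, None]                            # residual e_{T-j} as a column
        H0 = H0 + (1.0 - lam) * lam ** (T - j - 1) * (et @ et.T)
    return H0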
Estimation Output
The system estimation output contains parameter estimates, standard errors, and t-statistics
(or z-statistics for maximum likelihood estimations), for each of the coefficients in the system. Additionally, EViews reports the determinant of the residual covariance matrix, and, for
ARCH and FIML estimates, the maximized likelihood values, Akaike and Schwarz criteria.
For ARCH estimations, the mean equation coefficients are separated from the variance coefficient section.
In addition, EViews reports a set of summary statistics for each equation. The R² statistic,
Durbin-Watson statistic, standard error of the regression, sum-of-squared residuals, etc., are
computed for each equation using the standard definitions, based on the residuals from the
system estimation procedure.
In ARCH estimations, the raw coefficients of the variance equation do not necessarily give a
clear understanding of the variance equations in many specifications. An extended coefficient view is supplied at the end of the output table to provide an enhanced view of the
coefficient values involved.
You may access most of these results using regression statistics functions. See Chapter 19,
page 16 for a discussion of the use of these functions, and Chapter 1. Object View and Procedure Reference, on page 2 of the Command and Programming Reference for a full listing
of the available functions for systems.
System Views
The System Specification view displays the specification window for the system. The
specification window may also be displayed by pressing Spec on the toolbar.
Representations provides you with the estimation command, the estimated equations and the substituted coefficient counterpart. For ARCH estimation this view also includes the additional variance and covariance specification in matrix form, as well as the single equation representations with and without substituted coefficients.
The Estimation Output view displays the coefficient estimates and summary statistics
for the system. You may also access this view by pressing Stats on the system toolbar.
Residuals/Graphs displays a separate graph of the residuals from each equation in
the system.
Residuals/Correlation Matrix computes the contemporaneous correlation matrix for
the residuals of each equation.
Residuals/Covariance Matrix computes the contemporaneous covariance matrix for
the residuals. See also the function @residcov in System on page 717 of the Command and Programming Reference.
Gradients and Derivatives provides views which describe the gradients of the objective function and the information about the computation of any derivatives of the
regression functions. Details on these views are provided in Appendix D. Gradients
and Derivatives, on page 1019.
Conditional Covariance gives you the option to generate conditional covariances,
variances, correlations or standard deviations for systems estimated using ARCH
methods.
Coefficient Covariance Matrix allows you to examine the estimated covariance
matrix.
Coefficient Tests allows you to display confidence ellipses or to perform hypothesis
tests for restrictions on the coefficients. These views are discussed in greater depth in
Confidence Intervals and Confidence Ellipses on page 164 and Wald Test (Coefficient Restrictions) on page 170.
A number of Residual Diagnostics are supported, including Correlograms, Portmanteau Autocorrelation Test, and Normality Test. For most estimation methods, the
Correlogram and Portmanteau views employ raw residuals, while Normality tests are
based on standardized residuals. For ARCH estimation, the user has the added option
of using a number of standardized residuals to calculate Correlogram and Portmanteau tests. The available standardization methods include Cholesky, Inverse Square
Residual Tests on page 627 for details on these tests and factorization methods.
Endogenous Table presents a spreadsheet view of the endogenous variables in the
system.
Endogenous Graph displays graphs of each of the endogenous variables.
System Procs
One notable difference between systems and single equation objects is that there is no forecast procedure for systems. To forecast or perform simulation using an estimated system,
you must use a model object.
EViews provides you with a simple method of incorporating the results of a system into a
model. If you select Proc/Make Model, EViews will open an untitled model object containing the estimated system. This model can be used for forecasting and simulation. An alternative approach, creating the model and including the system object by name, is described
in Building a Model on page 717.
There are other procedures for working with the system:
Define System provides an easy way to define a system without having to type in every equation. Dependent variables allows you to list the dependent variables in the system. You have the option to transform these variables by selecting from the Dependent variable transformation list in the Option section. Regressors and AR( ) terms that share the same coefficient across equations can be listed in Common coefficients, while those that do not can be placed in Equation specific coefficients. Common instruments can be listed in the Common field in the Instrument list section.
Estimate opens the dialog for estimating the system of equations. It may also be
accessed by pressing Estimate on the system toolbar.
Make Residuals creates a number of series containing the residuals for each equation
in the system. The residuals will be given the next unused name of the form RESID01,
RESID02, etc., in the order that the equations are specified in the system.
Make Endogenous Group creates an untitled group object containing the endogenous
variables.
Make Loglikelihoods (for system ARCH) creates a series containing the log likelihood
contribution.
Make Conditional Covariance (for system ARCH) allows you to generate estimates of
the conditional variances, covariances, or correlations for the specified set of dependent variables. (EViews automatically places all of the dependent variables in the
Variable field. You have the option to modify this field to include only the variable of
interest.)
If you select Group under Format, EViews will save the data in series. The Base name
edit box indicates the base name to be used when generating series data. For the conditional variance series, the naming convention will be the specified base name plus
terms of the form _01, _02. For covariances or correlations, the naming convention will use the base name plus _01_02, _01_03, etc., where the additional text
indicates the covariance/correlation between member 1 and 2, member 1 and 3, etc.
If Matrix is selected, then a matrix with the name given in the Matrix name field will be generated for the date entered in the Date (or Presample, if it is checked) edit field.
Example
As an illustration of the process of estimating a system of equations in EViews, we estimate
a translog cost function using data from Berndt and Wood (1975) as presented in Greene
(1997). The data are provided in G_cost.WF1. The translog cost function has four factors
with three equations of the form:
$$\begin{aligned}
c_K &= \beta_K + \delta_{KK}\log\!\left(\frac{p_K}{p_M}\right) + \delta_{KL}\log\!\left(\frac{p_L}{p_M}\right) + \delta_{KE}\log\!\left(\frac{p_E}{p_M}\right) + \epsilon_K\\
c_L &= \beta_L + \delta_{LK}\log\!\left(\frac{p_K}{p_M}\right) + \delta_{LL}\log\!\left(\frac{p_L}{p_M}\right) + \delta_{LE}\log\!\left(\frac{p_E}{p_M}\right) + \epsilon_L\\
c_E &= \beta_E + \delta_{EK}\log\!\left(\frac{p_K}{p_M}\right) + \delta_{EL}\log\!\left(\frac{p_L}{p_M}\right) + \delta_{EE}\log\!\left(\frac{p_E}{p_M}\right) + \epsilon_E
\end{aligned} \tag{37.5}$$
where $c_i$ and $p_i$ are the cost share and price of factor $i$, respectively. $\beta$ and $\delta$ are the parameters to be estimated. Note that there are cross equation coefficient restrictions that ensure symmetry of the cross partial derivatives.
We first estimate this system without imposing the cross equation restrictions and test
whether the symmetry restrictions hold. Create a system by clicking Object/New Object.../
System in the main toolbar or type system in the command window. Press the Name button and type in the name SYS_UR to name the system.
Next, type in the system window and specify the system as:
c_k = c(1) + c(2)*log(p_k/p_m) + c(3)*log(p_l/p_m) + c(4)*log(p_e/p_m)
c_l = c(5) + c(6)*log(p_k/p_m) + c(7)*log(p_l/p_m) + c(8)*log(p_e/p_m)
c_e = c(9) + c(10)*log(p_k/p_m) + c(11)*log(p_l/p_m) + c(12)*log(p_e/p_m)
We estimate this model by full information maximum likelihood (FIML). FIML is invariant
to the equation that is dropped. Press the Estimate button and choose Full Information
Maximum Likelihood. Click on OK to perform the estimation. EViews presents the estimated coefficients and regression statistics for each equation. The top portion of the output
describes the coefficient estimates:
System: SYS_UR
Estimation Method: Full Information Maximum Likelihood (Marquardt)
Date: 08/13/09  Time: 09:10
Sample: 1947 1971
Included observations: 25
Total system (balanced) observations 75
Convergence achieved after 128 iterations

           Coefficient    Std. Error    z-Statistic    Prob.
C(1)        0.054983      0.009353       5.878830      0.0000
C(2)        0.035130      0.035677       0.984676      0.3248
C(3)        0.004136      0.025616       0.161445      0.8717
C(4)        0.023633      0.084444       0.279867      0.7796
C(5)        0.250180      0.012019      20.81592       0.0000
C(6)        0.014758      0.024771       0.595766      0.5513
C(7)        0.083909      0.032188       2.606811      0.0091
C(8)        0.056411      0.096020       0.587493      0.5569
C(9)        0.043257      0.007981       5.420095      0.0000
C(10)      -0.007707      0.012518      -0.615722      0.5381
C(11)      -0.002183      0.020123      -0.108489      0.9136
C(12)       0.035624      0.061802       0.576422      0.5643

Log likelihood             349.0326     Schwarz criterion        -26.37755
Avg. log likelihood        4.653769     Hannan-Quinn criter.     -26.80034
Akaike info criterion     -26.96261
Determinant residual covariance         1.50E-16
while the bottom portion of the output (not depicted) describes equation specific statistics.

A Wald test of the symmetry restrictions C(3) = C(6), C(4) = C(10), and C(8) = C(11) yields:

Test Statistic        Value         df      Probability
Chi-square            0.418796      3       0.9363

Normalized Restriction (= 0)        Value          Std. Err.
C(3) - C(6)                        -0.010622       0.039838
C(4) - C(10)                        0.031340       0.077783
C(8) - C(11)                        0.058594       0.090758

The test fails to reject the symmetry restrictions. To estimate the system imposing the symmetry restrictions, copy the object using Object/Copy Object, click View/System Specification and modify the system.
We have named the system SYS_TLOG. Note that to impose symmetry in the translog specification, we have restricted the coefficients on the cross-price terms to be the same (we have also renumbered the 9 remaining coefficients so that they are consecutive). The restrictions are imposed by using the same coefficients in each equation. For example, the coefficient on the log(P_L/P_M) term in the C_K equation, C(3), is the same as the coefficient on the log(P_K/P_M) term in the C_L equation.
To estimate this model using FIML, click Estimate and choose Full Information Maximum Likelihood. The top part of the output describes the estimation specification, and provides coefficient and standard error estimates, z-statistics, p-values, and summary statistics:
System: SYS_TLOG
Estimation Method: Full Information Maximum Likelihood (BFGS / Marquardt steps)
Date: 03/10/15  Time: 22:09
Sample: 1947 1971
Included observations: 25
Total system (balanced) observations 75
Convergence achieved after 38 iterations
Coefficient covariance computed using outer product of gradients

           Coefficient    Std. Error    z-Statistic    Prob.
C(1)        0.057022      0.003306      17.24913       0.0000
C(2)        0.029742      0.012583       2.363697      0.0181
C(3)       -0.000369      0.011205      -0.032971      0.9737
C(4)       -0.010228      0.006027      -1.697176      0.0897
C(5)        0.253398      0.005050      50.17733       0.0000
C(6)        0.075427      0.015483       4.871616      0.0000
C(7)       -0.004414      0.009141      -0.482900      0.6292
C(8)        0.044286      0.003349      13.22339       0.0000
C(9)        0.018767      0.014894       1.260009      0.2077

Log likelihood             344.5916     Schwarz criterion        -26.40853
Avg. log likelihood        4.594555     Hannan-Quinn criter.     -26.72563
Akaike info criterion     -26.84733
Determinant residual covariance         2.14E-16
The log likelihood value reported at the bottom of the first part of the table may be used to
construct likelihood ratio tests.
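For example, comparing the unrestricted SYS_UR results above (log likelihood 349.0326) with the restricted SYS_TLOG results (log likelihood 344.5916) gives a likelihood ratio statistic of 2(349.0326 - 344.5916), or approximately 8.88, which may be compared with a chi-square distribution with 3 degrees of freedom (the 12 - 9 = 3 symmetry restrictions).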
Since maximum likelihood assumes the errors are
multivariate normal, we may wish to test whether
the residuals are normally distributed. Click Proc/
Make Residuals to display the residuals dialog. You
may choose to save the ordinary or standardized
residuals. If you choose the latter, you can elect to
standardize the residuals using the Cholesky factor
of the (conditional) covariance, the square root of
the (conditional) correlation matrix, or the square
root of the (conditional) covariance matrix. You
must enter a basename for saving the residuals. The
residuals will be named using the next available names in the workfile, in this case
RESID01, RESID02, ...., if those names are not already used.
To plot the series of elasticity of substitution between capital and labor for each observation,
double click on the series name ES_KL in the workfile and select View/Graph/Line & Symbol:
While it varies over the sample, the elasticity of substitution is generally close to one, which
is consistent with the assumption of a Cobb-Douglas cost function.
As an example of system ARCH estimation, consider a model in which the mean equations for the log returns of three exchange rate series (JY, SF, and BP) contain only constants:
$$\begin{aligned}
\log(jy_t/jy_{t-1}) &= c_1 + \epsilon_{1t}\\
\log(sf_t/sf_{t-1}) &= c_2 + \epsilon_{2t}\\
\log(bp_t/bp_{t-1}) &= c_3 + \epsilon_{3t}
\end{aligned} \tag{37.6}$$
where $\epsilon_t = [\epsilon_{1t}, \epsilon_{2t}, \epsilon_{3t}]'$ is assumed to be distributed normally with mean zero and conditional covariance $H_t$. The conditional covariance is modeled with a basic Diagonal VECH model:
$$H_t = \Omega + A \circ \epsilon_{t-1}\epsilon_{t-1}' + B \circ H_{t-1} \tag{37.7}$$
where $\circ$ denotes the element-by-element (Hadamard) product.
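A minimal sketch of the recursion in Equation (37.7) (illustrative NumPy code, not EViews; the presample term is set to a backcast value H_0 as in Equation (37.3)) is:

import numpy as np

def diagonal_vech_path(e, Omega, A, B, H0):
    """Conditional covariances H_t for T x k residuals e; Omega, A, B, H0 are k x k."""
    T = e.shape[0]
    H = np.empty((T,) + H0.shape)
    H_prev = H0
    for t in range(T):
        outer_prev = np.outer(e[t - 1], e[t - 1]) if t > 0 else H0   # presample term
        H_prev = Omega + A * outer_prev + B * H_prev                 # element-by-element products
        H[t] = H_prev
    return H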
To estimate this model, create a system SYS01 with the following specification:
dlog(jy) = c(1)
dlog(sf) = c(2)
dlog(bp) = c(3)
We estimate this model by selecting ARCH - Conditional Heteroskedasticity as the estimation method in the estimation dialog. Since the model we want to estimate is the default
Diagonal VECH model we leave most of the settings as they are. In the sample field, we
change the sample to 1980 2000 to use only a portion of the data. Click on OK to estimate
the system.
EViews displays the results of the estimation, which are similar to other system estimation
output with a few differences. The ARCH results contain the coefficients statistics section
(which includes both the mean and raw variance coefficients), model and equation specific
statistics, and an extended section describing the variance coefficients.
The coefficient section at the top is separated into two parts, one contains the estimated
coefficient for the mean equation and the other contains the estimated raw coefficients for
the variance equation. The parameters estimates of the mean equation, C(1), C(2) and C(3),
are listed in the upper portion of the coefficient list.
System: SYSTEM01
Estimation Method: ARCH Maximum Likelihood (BFGS / Marquardt steps)
Covariance specification: Diagonal VECH
Date: 03/10/15  Time: 22:15
Sample: 12/31/1979 12/25/2000
Included observations: 1096
Total system (balanced) observations 3288
Presample covariance: backcast (parameter = 0.7)
Convergence achieved after 68 iterations
Coefficient covariance computed using outer product of gradients

           Coefficient    Std. Error    z-Statistic    Prob.
C(1)       -0.000865      0.000446      -1.936740      0.0528
C(2)        5.43E-05      0.000454       0.119510      0.9049
C(3)       -3.49E-05      0.000378      -0.092282      0.9265
C(4)        6.49E-06      1.10E-06       5.919901      0.0000
C(5)        3.64E-06      9.67E-07       3.759945      0.0002
C(6)       -2.64E-06      7.39E-07      -3.575569      0.0003
C(7)        1.04E-05      2.28E-06       4.550943      0.0000
C(8)       -8.03E-06      1.62E-06      -4.972743      0.0000
C(9)        1.39E-05      2.49E-06       5.590122      0.0000
C(10)       0.059566      0.007893       7.546435      0.0000
C(11)       0.052100      0.007282       7.154661      0.0000
C(12)       0.046822      0.008259       5.668999      0.0000
C(13)       0.058630      0.007199       8.144178      0.0000
C(14)       0.067051      0.007508       8.931132      0.0000
C(15)       0.112734      0.008091      13.93397       0.0000
C(16)       0.917973      0.010867      84.47647       0.0000
C(17)       0.928844      0.009860      94.20352       0.0000
C(18)       0.924802      0.010562      87.55899       0.0000
C(19)       0.908492      0.011498      79.01305       0.0000
C(20)       0.886249      0.011892      74.52704       0.0000
C(21)       0.829154      0.012741      65.07734       0.0000
The variance coefficients are displayed in their own section. Coefficients C(4) to C(9) are the
coefficients for the constant matrix, C(10) to C(15) are the coefficients for the ARCH term,
and C(16) through C(21) are the coefficients for the GARCH term.
Note that the number of variance coefficients in an ARCH model can be very large. Even in
this small 3-variable system, 18 parameters are estimated, making interpretation somewhat
difficult. To aid you in interpreting the results, EViews provides a covariance specification
section at the bottom of the estimation output that re-labels and transforms coefficients:
           Coefficient    Std. Error    z-Statistic    Prob.
M(1,1)      6.49E-06      1.10E-06       5.919903      0.0000
M(1,2)      3.64E-06      9.67E-07       3.759946      0.0002
M(1,3)     -2.64E-06      7.39E-07      -3.575568      0.0003
M(2,2)      1.04E-05      2.28E-06       4.550942      0.0000
M(2,3)     -8.03E-06      1.62E-06      -4.972744      0.0000
M(3,3)      1.39E-05      2.49E-06       5.590125      0.0000
A1(1,1)     0.059566      0.007893       7.546440      0.0000
A1(1,2)     0.052100      0.007282       7.154665      0.0000
A1(1,3)     0.046822      0.008259       5.669004      0.0000
A1(2,2)     0.058630      0.007199       8.144180      0.0000
A1(2,3)     0.067051      0.007508       8.931139      0.0000
A1(3,3)     0.112734      0.008091      13.93396       0.0000
B1(1,1)     0.917973      0.010867      84.47655       0.0000
B1(1,2)     0.928844      0.009860      94.20361       0.0000
B1(1,3)     0.924802      0.010562      87.55915       0.0000
B1(2,2)     0.908492      0.011498      79.01313       0.0000
B1(2,3)     0.886249      0.011892      74.52720       0.0000
B1(3,3)     0.829154      0.012741      65.07757       0.0000
The first line of this section states the covariance model used in estimation, in this case
Diagonal VECH. The next line of the header describes the model that we have estimated in
abbreviated text form. In this case, GARCH is the conditional variance matrix, M is the
constant matrix coefficient, A1 is the coefficient matrix for the ARCH term and B1 is the
coefficient matrix for the GARCH term. M, A1, and B1 are all specified as indefinite matrices.
Next, the estimated values of the matrix elements as well as other statistics are displayed.
Since the variance matrices are indefinite, the values are identical to those reported for the
raw variance coefficients. For example, M(1,1), the (1,1) element in matrix M, corresponds
to raw coefficient C(4), M(1,2) corresponds to C(5), A1(1,1) to C(10), etc.
For matrix coefficients that are rank 1 or full rank, the values reported in this section are a
transformation of the raw estimated coefficients, i.e. they are a function of one or more of
the raw coefficients. Thus, the reported values do not have a one-to-one correspondence
with the raw parameters.
[Conditional correlation graphs: Cor(DLOG(JY),DLOG(BP)) and Cor(DLOG(SF),DLOG(BP)), 1980 to 2000]
The correlation looks to be time varying, which is a general characteristic of this model.
Another possibility is to model the covariance matrix using the CCC specification, which
imposes a constant correlation over time. We proceed by creating a new system with specification identical to the one above. We'll select Constant Conditional Correlation this time as
the Model type for estimation and leave the remaining settings as they are. The basic
results:
System: UNTITLED
Estimation Method: ARCH Maximum Likelihood (BFGS / Marquardt steps)
Covariance specification: Constant Conditional Correlation
Date: 03/10/15  Time: 22:29
Sample: 12/31/1979 12/25/2000
Included observations: 1096
Total system (balanced) observations 3288
Presample covariance: backcast (parameter = 0.7)
Convergence achieved after 48 iterations
Coefficient covariance computed using outer product of gradients

           Coefficient    Std. Error    z-Statistic    Prob.
C(1)       -0.000804      0.000450      -1.788285      0.0737
C(2)       -0.000232      0.000467      -0.497008      0.6192
C(3)        8.56E-05      0.000377       0.226828      0.8206
C(4)        5.84E-06      1.30E-06       4.482923      0.0000
C(5)        0.062911      0.010085       6.238135      0.0000
C(6)        0.916958      0.013613      67.35993       0.0000
C(7)        4.89E-05      1.72E-05       2.836869      0.0046
C(8)        0.063178      0.012988       4.864468      0.0000
C(9)        0.772214      0.064005      12.06496       0.0000
C(10)       1.47E-05      3.11E-06       4.735844      0.0000
C(11)       0.104348      0.009262      11.26665       0.0000
C(12)       0.828535      0.017936      46.19303       0.0000
C(13)       0.571323      0.018238      31.32550       0.0000
C(14)      -0.403219      0.023634     -17.06082       0.0000
C(15)      -0.677329      0.014588     -46.43001       0.0000
Schwarz criterion         -17.40991
Hannan-Quinn criter.      -17.45244
Note that this specification has only 12 free parameters in the variance equation, as compared with 18 in the previous model. The extended variance section represents the variance
equation as,
GARCH(i) = M(i) + A1(i)*RESID(i)(-1)^2 + B1(i)*GARCH(i)(-1)
The lower portion of the output shows that the correlations, R(1, 2), R(1, 3), and R(2, 3) are
0.5713, -0.4032, and -0.6773, respectively:
           Coefficient    Std. Error    z-Statistic    Prob.
M(1)        5.84E-06      1.30E-06       4.482923      0.0000
A1(1)       0.062911      0.010085       6.238137      0.0000
B1(1)       0.916958      0.013613      67.35994       0.0000
M(2)        4.89E-05      1.72E-05       2.836869      0.0046
A1(2)       0.063178      0.012988       4.864469      0.0000
B1(2)       0.772214      0.064005      12.06496       0.0000
M(3)        1.47E-05      3.11E-06       4.735844      0.0000
A1(3)       0.104348      0.009262      11.26665       0.0000
B1(3)       0.828536      0.017936      46.19308       0.0000
R(1,2)      0.571323      0.018238      31.32550       0.0000
R(1,3)     -0.403219      0.023634     -17.06082       0.0000
R(2,3)     -0.677329      0.014588     -46.43002       0.0000
Is this model better than the previous model? While the log likelihood value is lower, it also has fewer coefficients. We may compare the two systems by looking at model selection criteria. The Akaike, Schwarz and Hannan-Quinn all show lower information criteria values for the VECH model than the CCC specification, suggesting that the time-varying Diagonal VECH specification may be preferred.
Technical Discussion
While the discussion to follow is expressed in terms of a balanced system of linear equations, the analysis carries forward in a straightforward way to unbalanced systems containing nonlinear equations.
Denote a system of $M$ equations in stacked form as:
$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{pmatrix} = \begin{pmatrix} X_1 & 0 & \cdots & 0 \\ 0 & X_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X_M \end{pmatrix}\begin{pmatrix} \beta_1 \\ \beta_2 \\ \vdots \\ \beta_M \end{pmatrix} + \begin{pmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_M \end{pmatrix} \tag{37.8}$$
or, more compactly,
$$y = X\beta + \epsilon. \tag{37.9}$$
Under the standard assumptions, the residual variance matrix from this stacked system is given by:
$$V = E(\epsilon\epsilon') = \sigma^2(I_M \otimes I_T). \tag{37.10}$$
Other residual structures are of interest. First, the errors may be heteroskedastic across the $M$ equations. Second, they may be heteroskedastic and contemporaneously correlated. We can characterize both of these cases by defining the $M \times M$ matrix of contemporaneous correlations, $\Sigma$, where the $(i,j)$-th element of $\Sigma$ is given by $\sigma_{ij} = E(\epsilon_{it}\epsilon_{jt})$ for all $t$. If the errors are contemporaneously uncorrelated, then $\sigma_{ij} = 0$ for $i \ne j$, and we can write:
$$V = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \ldots, \sigma_M^2) \otimes I_T \tag{37.11}$$
whereas if the errors are both heteroskedastic and contemporaneously correlated:
$$V = \Sigma \otimes I_T. \tag{37.12}$$
Lastly, at the most general level, there may be heteroskedasticity, contemporaneous correlation, and autocorrelation of the residuals. The general variance matrix of the residuals may be written:
$$V = \begin{pmatrix} \sigma_{11}\Sigma_{11} & \sigma_{12}\Sigma_{12} & \cdots & \sigma_{1M}\Sigma_{1M} \\ \sigma_{21}\Sigma_{21} & \sigma_{22}\Sigma_{22} & & \vdots \\ \vdots & & \ddots & \\ \sigma_{M1}\Sigma_{M1} & \cdots & & \sigma_{MM}\Sigma_{MM} \end{pmatrix} \tag{37.13}$$
The ordinary least squares estimator of the stacked system,
$$b_{LS} = (X'X)^{-1}X'y \tag{37.14}$$
has the usual estimated coefficient variance
$$\mathrm{var}(b_{LS}) = s^2(X'X)^{-1}. \tag{37.15}$$
Weighted least squares uses an estimate $\hat{V}$ of the residual variance matrix:
$$b_{WLS} = (X'\hat{V}^{-1}X)^{-1}X'\hat{V}^{-1}y \tag{37.16}$$
where the elements of $\hat{V}$ are estimated from the first-stage residuals,
$$s_{ij} = \big[(y_i - X_ib_{LS})'(y_j - X_jb_{LS})\big]\big/\max(T_i, T_j) \tag{37.17}$$
where the inner product is taken over the non-missing common elements of $i$ and $j$. The max function in Equation (37.17) is designed to handle the case of unbalanced data by down-weighting the covariance terms. Provided the missing values are asymptotically negligible, this yields a consistent estimator of the variance elements. Note also that there is no adjustment for degrees of freedom.
When specifying your estimation specification, you are given a choice of which coefficients to use in computing the $s_{ij}$. If you choose not to iterate the weights, the OLS coefficient estimates will be used to estimate the variances. If you choose to iterate the weights, the current parameter estimates (which may be based on the previously computed weights) are used in computing the $s_{ij}$. This latter procedure may be iterated until the weights and coefficients converge.
The estimator for the coefficient variance matrix is:
$$\mathrm{var}(b_{WLS}) = (X'\hat{V}^{-1}X)^{-1}. \tag{37.18}$$
The weighted least squares estimator is efficient, and the variance estimator consistent,
under the assumption that there is heteroskedasticity, but no serial or contemporaneous correlation in the residuals.
It is worth pointing out that if there are no cross-equation restrictions on the parameters of
the model, weighted LS on the entire system yields estimates that are identical to those
obtained by equation-by-equation LS. Consider the following simple model:
$$y_1 = X_1\beta_1 + \epsilon_1, \qquad y_2 = X_2\beta_2 + \epsilon_2 \tag{37.19}$$
If $\beta_1$ and $\beta_2$ are unrestricted, the WLS estimator given in Equation (37.18) yields:
$$b_{WLS} = \begin{pmatrix} \big((X_1'X_1)/s_{11}\big)^{-1}\big((X_1'y_1)/s_{11}\big) \\ \big((X_2'X_2)/s_{22}\big)^{-1}\big((X_2'y_2)/s_{22}\big) \end{pmatrix} = \begin{pmatrix} (X_1'X_1)^{-1}X_1'y_1 \\ (X_2'X_2)^{-1}X_2'y_2 \end{pmatrix} \tag{37.20}$$
The expression on the right is equivalent to equation-by-equation OLS. Note, however, that
even without cross-equation restrictions, the standard errors are not the same in the two
cases.
The SUR estimator computed using the estimated contemporaneous covariance matrix Σ̂ is given by:

    b_SUR = (X'(Σ̂⁻¹ ⊗ I_T)X)⁻¹ X'(Σ̂⁻¹ ⊗ I_T)y .     (37.21)
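For intuition, here is a hypothetical NumPy/SciPy sketch (not EViews code; the function name and argument layout are assumptions) of the SUR computation in Equation (37.21) for a balanced system, building the stacked regressor matrix and the Σ̂⁻¹ ⊗ I_T weighting explicitly:

    import numpy as np
    from scipy.linalg import block_diag

    def sur_estimator(y_list, X_list, sigma_hat):
        """Feasible GLS / SUR coefficients as in (37.21) for a balanced system.
        y_list: list of (T,) arrays; X_list: list of (T, k_i) arrays;
        sigma_hat: (M, M) estimated contemporaneous residual covariance."""
        T = len(y_list[0])
        y = np.concatenate(y_list)                        # stacked dependent variable
        X = block_diag(*X_list)                           # block-diagonal regressors
        W = np.kron(np.linalg.inv(sigma_hat), np.eye(T))  # Sigma^{-1} (x) I_T
        XtW = X.T @ W
        return np.linalg.solve(XtW @ X, XtW @ y)

In practice one would avoid forming the T·M by T·M Kronecker product, but the direct translation makes the structure of the estimator transparent.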
When the j-th equation is specified with autoregressive error terms of orders 1 through p_j, the estimating equation may be written as:

    y_jt = X_jt β_j + Σ_{r=1}^{p_j} ρ_jr ( y_j(t−r) − X_j(t−r) β_j ) + ε_jt     (37.22)
A system of simultaneous equations may be written in structural form as:

    YΓ_j + XB_j + ε_j = 0     (37.23)

or, alternatively:

    y_j = Y_j γ_j + X_j β_j + ε_j = Z_j δ_j + ε_j     (37.24)

where Γ_j = (−1, γ_j', 0)', B_j = (β_j', 0)', Z_j = (Y_j, X_j) and δ_j = (γ_j', β_j')'. Y is the matrix of endogenous variables and X is the matrix of exogenous variables; Y_j is the matrix of endogenous variables not including y_j.
In the first stage, we regress the right-hand side endogenous variables Y_j on all exogenous variables X and get the fitted values:

    Ŷ_j = X(X'X)⁻¹X'Y_j .     (37.25)

In the second stage, we regress y_j on Ŷ_j and X_j to get:

    δ̂_2SLS = (Ẑ_j'Ẑ_j)⁻¹ Ẑ_j'y_j     (37.26)

where Ẑ_j = (Ŷ_j, X_j). The residuals from an equation using these coefficients are used to form the weights.
Weighted TSLS applies the weights in the second stage so that:

    δ̂_W2SLS = (Ẑ_j'V̂⁻¹Ẑ_j)⁻¹ Ẑ_j'V̂⁻¹y     (37.27)

where the elements of the variance matrix are estimated in the usual fashion using the residuals from unweighted TSLS.

If you choose to iterate the weights, V̂ is estimated at each step using the current values of the coefficients and residuals.
The 3SLS estimator is given by:

    δ̂_3SLS = ( Z'(Σ̂⁻¹ ⊗ X(X'X)⁻¹X')Z )⁻¹ Z'(Σ̂⁻¹ ⊗ X(X'X)⁻¹X')y     (37.28)
(37.29)
If you choose to iterate the weights, the current coefficients and residuals will be used to estimate Σ̂.
FIML estimates a system of equations written in implicit form:

    f(y_t, x_t, β) = ε_t ,     (37.30)

Assuming the errors are multivariate normal, the log likelihood is:

    log L = −(T/2) log|Σ| + Σ_{t=1}^{T} log|det(∂f_t/∂y_t')| − (1/2) Σ_{t=1}^{T} f_t'Σ⁻¹f_t     (37.31)

where f_t = f(y_t, x_t, β). Note that the log determinant of the derivatives of f_t captures the simultaneity in the system of equations.
Using the first-order conditions for the variance parameters, we may write the likelihood in concentrated form:

    log L = Σ_{t=1}^{T} log|det(∂f_t/∂y_t')| − (T/2) log| (1/T) Σ_{t=1}^{T} f_t f_t' |     (37.32)
The FIML estimator maximizes the concentrated likelihood with respect to β (or the full likelihood with respect to β and Σ).

The estimator is asymptotically normally distributed, with coefficient covariance typically computed using either the inverse of the outer-product of the gradient or the inverse of the negative of the observed Hessian of the concentrated likelihood. EViews employs the OPG covariance by default, but there is evidence that one should take seriously the choice of method (Calzolari and Panattoni, 1988). In addition, EViews offers a QML covariance computation that employs a Huber-White sandwich using both the OPG and the inverse negative Hessian.

Over the years, a number of approaches for FIML estimation have been proposed (see, for example, Parke 1982, Belsley 1980, Dagenais 1978, or Amemiya 1977). EViews offers standard BFGS, Newton-Raphson, and OPG/BHHH algorithms with various step methods in trust region form, as well as a simple implementation of BHHH with Marquardt and line search steps ("Optimization Algorithms" on page 1011). See Calzolari and Panattoni (1987) and Weihs, Calzolari, and Panattoni (1987) for simulation results on the performance of various estimators. Whichever method you select, we encourage you to perform sensitivity analysis.
The GMM estimator is based on theoretical moment conditions of the form:

    E( m(y, θ) ) = 0 .     (37.33)

The method of moments estimator is defined by replacing the moment condition (37.33) by its sample analog:

    (1/T) Σ_t m(y_t, θ) = 0 .     (37.34)
However, condition (37.34) will not be satisfied for any θ when there are more restrictions m than there are parameters θ. To allow for such overidentification, the GMM estimator is defined by minimizing the following criterion function:

    Σ_t m(y_t, θ)' A(y_t, θ) m(y_t, θ)     (37.35)

which measures the "distance" between m and zero. A is a weighting matrix that weights each moment condition. Any symmetric positive definite matrix A will yield a consistent estimate of θ. However, it can be shown that a necessary (but not sufficient) condition to obtain an (asymptotically) efficient estimate of θ is to set A equal to the inverse of the covariance matrix Q of the sample moments m. This follows intuitively, since we want to put less weight on the conditions that are more imprecise.
To obtain GMM estimates in EViews, you must be able to write the moment conditions in Equation (37.33) as an orthogonality condition between the residuals of a regression equation, u(y, θ, X), and a set of instrumental variables, Z, so that:

    m(θ, y, X, Z) = Z'u(θ, y, X)     (37.36)

For example, the OLS estimator is obtained as a GMM estimator with the orthogonality conditions:

    X'(y − Xβ) = 0 .     (37.37)
For the GMM estimator to be identified, there must be at least as many instrumental variables Z as there are parameters θ. See the section on Generalized Method of Moments, beginning on page 69, for additional examples of GMM orthogonality conditions.
An important aspect of specifying a GMM problem is the choice of the weighting matrix A. EViews uses the optimal A = Q̂⁻¹, where Q̂ is the estimated long-run covariance matrix of the sample moments m. EViews uses the consistent TSLS estimates for the initial estimate of θ in forming the estimate of Q.

If you select the White covariance option, the weighting matrix is estimated as:

    Q̂_W = Γ̂(0) = (1/(T−k)) Σ_{t=1}^{T} Z_t'u_t u_t'Z_t     (37.38)

where u_t is the vector of residuals, and Z_t is a k × p matrix such that the p moment conditions at t may be written as m(θ, y_t, X_t, Z_t) = Z_t'u(θ, y_t, X_t).
If you select the HAC (heteroskedasticity and autocorrelation consistent) option, the long-run covariance matrix is estimated as:

    Q̂_HAC = Γ̂(0) + Σ_{j=1}^{T−1} k(j, q)( Γ̂(j) + Γ̂(j)' )     (37.39)

where:

    Γ̂(j) = (1/(T−k)) Σ_{t=j+1}^{T} Z_{t−j}'u_{t−j} u_t'Z_t .     (37.40)
You also need to specify the kernel function k and the bandwidth q.

Kernel Options

The kernel function k is used to weight the covariances so that Q̂ is ensured to be positive semi-definite. EViews provides two choices for the kernel, Bartlett and quadratic spectral (QS). The Bartlett kernel is given by:

    k(x) = 1 − x   for 0 ≤ x ≤ 1
         = 0       otherwise     (37.41)
The quadratic spectral kernel is given by:

    k(j/q) = ( 25 / (12π²x²) ) ( sin(6πx/5)/(6πx/5) − cos(6πx/5) )     (37.42)

where x = j/q. The QS kernel has a faster rate of convergence than the Bartlett and is smooth and not truncated (Andrews 1991). Note that even though the QS kernel is not truncated, it still depends on the bandwidth q (which need not be an integer).
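To make the weighting concrete, here is a hypothetical NumPy sketch (not EViews code; function names are made up for this illustration) of the two kernels in Equations (37.41)-(37.42) and the HAC sum in Equation (37.39), applied to a matrix whose rows are the sample moments m_t = Z_t'u_t. It omits the T − k adjustment used in (37.38)-(37.40):

    import numpy as np

    def bartlett(x):
        """Bartlett kernel, Equation (37.41)."""
        x = abs(x)
        return 1.0 - x if x <= 1.0 else 0.0

    def quadratic_spectral(x):
        """Quadratic spectral kernel, Equation (37.42); k(0) = 1 by continuity."""
        if x == 0.0:
            return 1.0
        z = 6.0 * np.pi * x / 5.0
        return 25.0 / (12.0 * np.pi ** 2 * x ** 2) * (np.sin(z) / z - np.cos(z))

    def hac_long_run_covariance(moments, q, kernel=bartlett):
        """HAC estimate of Q as in (37.39)-(37.40), dividing by T instead of T - k.
        moments: (T, p) array whose rows are m_t."""
        m = np.asarray(moments, dtype=float)
        T = m.shape[0]
        Q = m.T @ m / T                       # Gamma(0)
        for j in range(1, T):
            w = kernel(j / q)
            if w == 0.0:
                continue
            G_j = m[:-j].T @ m[j:] / T        # Gamma(j) = sum_t m_{t-j} m_t' / T
            Q += w * (G_j + G_j.T)
        return Q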
Bandwidth Selection
The bandwidth q determines how the weights given by the kernel change with the lags in the estimation of Q. The Newey-West fixed bandwidth is based solely on the number of observations in the sample and is given by:

    q = int( 4(T/100)^(2/9) )     (37.43)

For the automatic selection methods, the bandwidth is given by:

    q = int( 1.1447 (â(1)T)^(1/3) )   for the Bartlett kernel
    q = 1.3221 (â(2)T)^(1/5)          for the quadratic spectral kernel     (37.44)
The two methods, Andrews and Variable-Newey-West, differ in how they estimate â(1) and â(2).

Andrews (1991) is a parametric method that assumes the sample moments follow an AR(1) process. We first fit an AR(1) to each sample moment (37.36) and estimate the autocorrelation coefficients ρ̂_i and the residual variances σ̂_i² for i = 1, 2, ..., p. Then â(1) and â(2) are estimated by:
    â(1) = [ Σ_{i=1}^{p} 4σ̂_i⁴ρ̂_i² / ( (1−ρ̂_i)⁶(1+ρ̂_i)² ) ] / [ Σ_{i=1}^{p} σ̂_i⁴ / (1−ρ̂_i)⁴ ]

    â(2) = [ Σ_{i=1}^{p} 4σ̂_i⁴ρ̂_i² / (1−ρ̂_i)⁸ ] / [ Σ_{i=1}^{p} σ̂_i⁴ / (1−ρ̂_i)⁴ ]     (37.45)
Note that we weight all moments equally, including the moment corresponding to the constant.
Newey-West (1994) is a nonparametric method based on a truncated weighted sum of the estimated cross-moments Γ̂(j). â(1) and â(2) are estimated by:

    â(p) = ( ι'F̂(p)ι / ι'F̂(0)ι )²     (37.46)

where ι is a vector of ones and:
    F̂(p) = (p − 1)Γ̂(0) + Σ_{i=1}^{L} i^p ( Γ̂(i) + Γ̂(i)' ) ,     (37.47)

for p = 1, 2.
One practical problem with the Newey-West method is that we have to choose a lag selection parameter L. The choice of L is arbitrary, subject to the condition that it grow at a certain rate with the sample size. EViews sets the lag parameter to:

    L = int( 4(T/100)^a )     (37.48)

where a = 2/9 for the Bartlett kernel and a = 4/25 for the quadratic spectral kernel.
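As a quick numerical illustration of Equations (37.43) and (37.48), a short Python sketch (illustrative only; not EViews code):

    def newey_west_fixed_bandwidth(T):
        """Fixed bandwidth of Equation (37.43): q = int(4 (T/100)^(2/9))."""
        return int(4 * (T / 100.0) ** (2.0 / 9.0))

    def newey_west_lag_parameter(T, kernel="bartlett"):
        """Lag selection parameter L of Equation (37.48)."""
        a = 2.0 / 9.0 if kernel == "bartlett" else 4.0 / 25.0
        return int(4 * (T / 100.0) ** a)

    print(newey_west_fixed_bandwidth(250))      # T = 250 gives q = 4
    print(newey_west_lag_parameter(250, "qs"))  # L = 4 for the QS kernel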
Prewhitening

You can also choose to prewhiten the sample moments m to soak up the correlations in m prior to GMM estimation. We first fit a VAR(1) to the sample moments:

    m_t = Am_{t−1} + v_t .     (37.49)

The GMM estimator is then found by minimizing the criterion function:

    u'Z Q̂⁻¹ Z'u     (37.50)
Note that while Andrews and Monahan (1992) adjust the VAR estimates to avoid singularity
when the moments are near unit root processes, EViews does not perform this eigenvalue
adjustment.
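For intuition only, here is a hypothetical NumPy sketch (not EViews code) of the prewhitening step: fit a VAR(1) to the sample moments as in Equation (37.49), estimate the long-run covariance of the whitened residuals with a user-supplied HAC estimator (for example, the hac_long_run_covariance sketch above), and recolor. The recoloring step shown is the standard Andrews-Monahan construction and is an assumption of this sketch, not a statement about EViews internals:

    import numpy as np

    def prewhitened_long_run_covariance(moments, hac):
        """Prewhiten the moments with a VAR(1), m_t = A m_{t-1} + v_t, apply the
        HAC estimator `hac` to the whitened residuals, then recolor:
        Q = (I - A)^{-1} Q_v (I - A)^{-T}  (Andrews-Monahan style recoloring)."""
        m = np.asarray(moments, dtype=float)
        m_lag, m_cur = m[:-1], m[1:]
        A = np.linalg.lstsq(m_lag, m_cur, rcond=None)[0].T   # VAR(1) coefficient
        v = m_cur - m_lag @ A.T                              # whitened residuals
        Q_v = hac(v)                                         # long-run covariance of v_t
        D = np.linalg.inv(np.eye(A.shape[0]) - A)            # recoloring matrix
        return D @ Q_v @ D.T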
Multivariate ARCH
ARCH estimation uses maximum likelihood to jointly estimate the parameters of the mean
and the variance equations.
Assuming multivariate normality, the log likelihood contributions for GARCH models are
given by:
    l_t = −(1/2) m log(2π) − (1/2) log|H_t| − (1/2) ε_t'H_t⁻¹ε_t     (37.51)
where m is the number of mean equations, and ε_t is the m-vector of mean equation residuals. For Student's t-distribution, the contributions are of the form:

    l_t = log[ Γ((v+m)/2) / ( (vπ)^(m/2) Γ(v/2) (v−2)^(m/2) ) ] − (1/2) log|H_t| − ((v+m)/2) log( 1 + ε_t'H_t⁻¹ε_t/(v−2) )     (37.52)
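For illustration, a hypothetical NumPy sketch (not EViews code; the function name is an assumption) of the normal log likelihood contribution in Equation (37.51) for a single observation:

    import numpy as np

    def normal_contribution(eps_t, H_t):
        """Log likelihood contribution l_t of Equation (37.51) under multivariate
        normality, for residual vector eps_t and conditional covariance H_t."""
        m = len(eps_t)
        sign, logdet = np.linalg.slogdet(H_t)
        quad = eps_t @ np.linalg.solve(H_t, eps_t)    # eps' H^{-1} eps
        return -0.5 * (m * np.log(2 * np.pi) + logdet + quad)

    # toy example for a 2-variable system
    eps = np.array([0.01, -0.02])
    H = np.array([[0.0004, 0.0001], [0.0001, 0.0009]])
    print(normal_contribution(eps, H))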
Diagonal VECH

Bollerslev, et al. (1988) introduce a restricted version of the general multivariate VECH model of the conditional covariance with the following formulation:

    H_t = Ω + A ∘ ε_{t−1}ε_{t−1}' + B ∘ H_{t−1}     (37.53)

where the coefficient matrices A, B, and Ω are N × N symmetric matrices, and the operator ∘ is the element by element (Hadamard) product. The coefficient matrices may be parameterized in several ways. The most general way is to allow the parameters in the matrices to vary without any restrictions, i.e. parameterize them as indefinite matrices. In that case the model may be written in single equation format as:

    (H_t)_ij = (Ω)_ij + (A)_ij ε_{jt−1}ε_{it−1} + (B)_ij (H_{t−1})_ij     (37.54)

where, for instance, (H_t)_ij is the element in the i-th row and j-th column of the matrix H_t.
Each matrix contains N(N+1)/2 parameters. This model is the most unrestricted version of a Diagonal VECH model. At the same time, it does not ensure that the conditional covariance matrix is positive semidefinite (PSD). As summarized in Ding and Engle (2001), there are several approaches for specifying coefficient matrices that restrict H_t to be PSD, possibly by reducing the number of parameters. One example is:

    H_t = Ω̃Ω̃' + Ã Ã' ∘ ε_{t−1}ε_{t−1}' + B̃ B̃' ∘ H_{t−1}     (37.55)

where the raw matrices Ã, B̃, and Ω̃ may be any matrix up to rank N. For example, one may use the rank N Cholesky factorized matrix of the coefficient matrix. This method is labeled the Full Rank Matrix in the coefficient Restriction selection of the system ARCH dialog. While this method contains the same number of parameters as the indefinite version, it does ensure that the conditional covariance is PSD.

A second method, which we term Rank One, reduces the number of parameters estimated to N and guarantees that the conditional covariance is PSD. In this case, the estimated raw matrix is restricted, with all but the first column of coefficients equal to zero.

In both of these specifications, the reported raw variance coefficients are elements of Ã, B̃, and Ω̃. These coefficients must be transformed to obtain the matrices of interest: A = ÃÃ', B = B̃B̃', and Ω = Ω̃Ω̃'. These transformed coefficients are reported in the extended variance coefficient section at the end of the system estimation results.
There are two other covariance specifications that you may employ. First, the values in the N × N matrix may be a constant, so that:

    B = b ιι'     (37.56)

where b is a scalar and ι is an N × 1 vector of ones. This Scalar specification implies that for a particular term, the parameters of the variance and covariance equations are restricted to be the same. Alternately, the matrix coefficients may be parameterized as Diagonal so that all off-diagonal elements are restricted to be zero. In both of these parameterizations, the coefficients are not restricted to be positive, so that H_t is not guaranteed to be PSD.

Lastly, for the constant matrix Ω, we may also impose a Variance Target on the coefficients which restricts the values of the coefficient matrix so that:

    Ω = Ω_0 ∘ (ιι' − A − B)     (37.57)

where Ω_0 is the unconditional sample variance of the residuals. When using this option, the constant matrix is not estimated, reducing the number of estimated parameters.
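The following hypothetical NumPy sketch (not EViews code; the function name and initialization at the sample covariance are assumptions) illustrates the recursion in Equation (37.53) under the Rank One parameterization of Equation (37.55), with the constant set by the variance target of Equation (37.57):

    import numpy as np

    def diagonal_vech_path(resid, a_raw, b_raw):
        """Conditional covariances H_t from a Rank One Diagonal VECH with a
        variance-targeted constant; see Equations (37.53), (37.55), (37.57).
        resid: (T, N) residuals; a_raw, b_raw: length-N raw coefficient vectors."""
        T, N = resid.shape
        A = np.outer(a_raw, a_raw)                  # rank-one ARCH coefficient matrix
        B = np.outer(b_raw, b_raw)                  # rank-one GARCH coefficient matrix
        omega0 = resid.T @ resid / T                # unconditional sample covariance
        Omega = omega0 * (np.ones((N, N)) - A - B)  # variance target, (37.57)
        H = np.empty((T, N, N))
        H[0] = omega0                               # initialize at the sample covariance
        for t in range(1, T):
            e = resid[t - 1]
            H[t] = Omega + A * np.outer(e, e) + B * H[t - 1]  # Hadamard products
        return H

    # toy usage with simulated residuals for a 2-variable system
    rng = np.random.default_rng(0)
    eps = rng.normal(scale=0.01, size=(100, 2))
    H_path = diagonal_vech_path(eps, np.array([0.3, 0.2]), np.array([0.9, 0.85]))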
You may specify a different type of coefficient matrix for each term. For example, if one estimates a multivariate GARCH(1,1) model with an indefinite matrix coefficient for the constant while specifying the coefficients of the ARCH and GARCH terms to be rank one matrices, then the number of parameters will be N(N+1)/2 + 2N, instead of 3N(N+1)/2.
Constant Conditional Correlation (CCC)

In the Constant Conditional Correlation specification (Bollerslev, 1990), each conditional variance follows a univariate GARCH-type process and each conditional covariance is proportional to the square root of the product of the corresponding variances:

    h_iit = c_i + a_i ε²_{it−1} + d_i I_{it−1} ε²_{it−1} + b_i h_{iit−1}
    h_ijt = ρ_ij √(h_iit h_jjt)     (37.58)

Restrictions may be imposed on the constant term using variance targeting so that:

    c_i = σ²_{0i} (1 − a_i − b_i)     (37.59)

where σ²_{0i} is the unconditional residual variance of the i-th equation. Exogenous variables may also be included in the variance equations, for example:

    h_iit = c_i + a_i ε²_{it−1} + d_i I_{it−1} ε²_{it−1} + b_i h_{iit−1} + e_i x_{1t} + g x_{2t}     (37.60)
Diagonal BEKK

The BEKK model (Engle and Kroner, 1995) is defined as:

    H_t = ΩΩ' + Aε_{t−1}ε_{t−1}'A' + BH_{t−1}B'     (37.61)

EViews does not estimate the general form of BEKK in which A and B are unrestricted. However, a common and popular form, diagonal BEKK, may be specified that restricts A and B to be diagonal matrices. This Diagonal BEKK model is identical to the Diagonal VECH model where the coefficient matrices are rank one matrices. For convenience, EViews provides an option to estimate the Diagonal VECH model, but display the result in Diagonal BEKK form.
References

Amemiya, Takeshi (1977). "The Maximum Likelihood and the Nonlinear Three-Stage Least Squares Estimator in the General Nonlinear Simultaneous Equation Model," Econometrica, 45, 955–966.

Andrews, Donald W. K. (1991). "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation," Econometrica, 59, 817–858.

Andrews, Donald W. K. and J. Christopher Monahan (1992). "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator," Econometrica, 60, 953–966.

Belsley, David (1980). "On the Efficient Computation of the Nonlinear Full-Information Maximum Likelihood Estimator," Journal of Econometrics, 14, 203–225.

Berndt, Ernst R. and David O. Wood (1975). "Technology, Prices and the Derived Demand for Energy," Review of Economics and Statistics, 57(3), 259–268.

Bollerslev, Tim (1990). "Modelling the Coherence in Short-run Nominal Exchange Rates: A Multivariate Generalized ARCH Model," The Review of Economics and Statistics, 72, 498–505.

Bollerslev, Tim, Robert F. Engle, and Jeffrey M. Wooldridge (1988). "A Capital-Asset Pricing Model with Time-varying Covariances," Journal of Political Economy, 96, 116–131.

Calzolari, Giorgio and Lorenzo Panattoni (1987). "Computational Efficiency of FIML Estimation," Journal of Econometrics, 36, 299–310.

Calzolari, Giorgio and Lorenzo Panattoni (1988). "Alternative Estimators of FIML Covariance Matrix: A Monte Carlo Study," Econometrica, 56, 701–714.

Dagenais, Marcel G. (1978). "The Computation of FIML Estimates as Iterative Generalized Least Squares Estimates in Linear and Nonlinear Simultaneous Equations Models," Econometrica, 46, 1351–1362.

Ding, Zhuanxin and R. F. Engle (2001). "Large Scale Conditional Covariance Matrix Modeling, Estimation and Testing," Academia Economic Papers, 29, 157–184.

Engle, Robert F. and K. F. Kroner (1995). "Multivariate Simultaneous Generalized ARCH," Econometric Theory, 11, 122–150.

Greene, William H. (1997). Econometric Analysis, 3rd Edition, Upper Saddle River, NJ: Prentice-Hall.

Newey, Whitney and Kenneth West (1994). "Automatic Lag Selection in Covariance Matrix Estimation," Review of Economic Studies, 61, 631–653.

Parke, William R. (1982). "An Algorithm for FIML and 3SLS Estimation of Large Nonlinear Models," Econometrica, 50, 81–95.

Weihs, C., G. Calzolari, and L. Panattoni (1987). "The Behavior of Trust-Region Methods in FIML Estimation," Computing, 38, 89–100.
    y_t = A_1 y_{t−1} + ... + A_p y_{t−p} + Bx_t + ε_t     (38.1)

For example, a two-lag VAR for industrial production (IP) and the money supply (M1), with a constant as the only exogenous variable, may be written as:

    IP_t = a_11 IP_{t−1} + a_12 M1_{t−1} + b_11 IP_{t−2} + b_12 M1_{t−2} + c_1 + ε_1t
    M1_t = a_21 IP_{t−1} + a_22 M1_{t−1} + b_21 IP_{t−2} + b_22 M1_{t−2} + c_2 + ε_2t     (38.2)

where a_ij, b_ij, c_i are the parameters to be estimated.
The determinant of the residual covariance (degree of freedom adjusted) is computed as:

    |Σ̂| = det( (1/(T−p)) Σ_t ε̂_t ε̂_t' )     (38.3)

where p is the number of parameters per equation in the VAR. The unadjusted calculation ignores the p. The log likelihood value is computed assuming a multivariate normal (Gaussian) distribution as:

    l = −(T/2) { k(1 + log 2π) + log|Σ̂| }     (38.4)

The AIC and SC information criteria are computed as:

    AIC = −2l/T + 2n/T
    SC  = −2l/T + n log(T)/T     (38.5)

where n = k(d + pk) is the total number of estimated parameters in the VAR. These information criteria can be used for model selection such as determining the lag length of the VAR, with smaller values of the information criterion being preferred. It is worth noting that some reference sources may define the AIC/SC differently, either omitting the inessential constant terms from the likelihood, or not dividing by T (see also Appendix E. "Information Criteria," on page 1027 for additional discussion of information criteria).
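A hypothetical NumPy sketch (not EViews code; the function name and arguments are assumptions) of the calculations in Equations (38.3)-(38.5), given a matrix of VAR residuals:

    import numpy as np

    def var_information_criteria(resid, lags, n_exog=1):
        """Gaussian log likelihood, AIC and SC for a VAR, Equations (38.3)-(38.5).
        resid: (T, k) residual matrix; lags: VAR lag order p; n_exog: d exogenous
        regressors per equation (1 for a constant only)."""
        T, k = resid.shape
        params_per_eq = n_exog + lags * k               # d + pk coefficients per equation
        sigma = resid.T @ resid / (T - params_per_eq)   # d.f.-adjusted covariance, (38.3)
        _, logdet = np.linalg.slogdet(sigma)
        loglik = -T / 2.0 * (k * (1.0 + np.log(2.0 * np.pi)) + logdet)   # (38.4)
        n = k * params_per_eq                           # total estimated parameters
        aic = -2.0 * loglik / T + 2.0 * n / T           # (38.5)
        sc = -2.0 * loglik / T + n * np.log(T) / T
        return loglik, aic, sc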
Diagnostic Views
A set of diagnostic views are provided under the menus View/
Lag Structure and View/Residual Tests in the VAR window.
These views should help you check the appropriateness of the
estimated VAR.
Lag Structure
EViews offers several views for investigating the lag structure of your equation.
AR Roots Table/Graph

Reports the inverse roots of the characteristic AR polynomial; see Lütkepohl (1991). The estimated VAR is stable (stationary) if all roots have modulus less than one and lie inside the unit circle. If the VAR is not stable, certain results (such as impulse response standard errors) are not valid. There will be kp roots, where k is the number of endogenous variables and p is the largest lag. If you estimated a VEC with r cointegrating relations, k − r roots should be equal to unity.
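A hypothetical NumPy sketch (not EViews code) of the stability check: stack the estimated lag matrices into the companion form and examine the moduli of its eigenvalues (the inverse roots of the characteristic polynomial).

    import numpy as np

    def inverse_ar_roots(lag_matrices):
        """Eigenvalues of the VAR companion matrix; the VAR is stable when all
        of them have modulus strictly less than one.
        lag_matrices: list [A_1, ..., A_p] of (k, k) coefficient matrices."""
        k = lag_matrices[0].shape[0]
        p = len(lag_matrices)
        companion = np.zeros((k * p, k * p))
        companion[:k, :] = np.hstack(lag_matrices)     # first block row: [A_1 ... A_p]
        companion[k:, :-k] = np.eye(k * (p - 1))       # identity blocks below
        return np.linalg.eigvals(companion)

    roots = inverse_ar_roots([np.array([[0.5, 0.1], [0.0, 0.4]]),
                              np.array([[0.2, 0.0], [0.1, 0.3]])])
    print(np.abs(roots).max() < 1.0)                   # True for this stable example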
Lag Exclusion Tests

Carries out lag exclusion tests for each lag in the VAR. For each lag, the χ² (Wald) statistic for the joint significance of all endogenous variables at that lag is reported for each equation separately and jointly (last column).
Lag Length Criteria

Computes various criteria for selecting the lag order of an unrestricted VAR. The sequential modified likelihood ratio (LR) test statistic is computed as:

    LR = (T − m) { log|Σ̂_{ℓ−1}| − log|Σ̂_ℓ| } ~ χ²(k²)     (38.6)

where m is the number of parameters per equation under the alternative. Note that we employ Sims' (1980) small sample modification which uses (T − m) rather than T. We compare the modified LR statistics to the 5% critical values starting from the maximum lag, and decreasing the lag one at a time until we first get a rejection. The alternative lag order from the first rejected test is marked with an asterisk (if no test rejects, the minimum lag will be marked with an asterisk). It is worth emphasizing that even though the individual tests have size 0.05, the overall size of the test will not be 5%; see the discussion in Lütkepohl (1991, p. 125–126).
Residual Tests
You may use these views to examine the properties of the residuals from your estimated
VAR.
Correlograms

Displays the pairwise cross-correlograms (sample autocorrelations) for the estimated residuals in the VAR for the specified number of lags. The cross-correlograms can be displayed in three different formats. There are two tabular forms, one ordered by variables (Tabulate by Variable) and one ordered by lags (Tabulate by Lag). The Graph form displays a matrix of pairwise cross-correlograms. The dotted lines in the graphs represent plus or minus two times the asymptotic standard errors of the lagged correlations (computed as ±2/√T).
Autocorrelation LM Test

Reports the multivariate LM test statistics for residual serial correlation up to the specified order. The test statistic for lag order h is computed by running an auxiliary regression of the residuals u_t on the original right-hand regressors and the lagged residual u_{t−h}, where the missing first h values of u_{t−h} are filled with zeros. See Johansen (1995, p. 22) for the formula of the LM statistic. Under the null hypothesis of no serial correlation of order h, the LM statistic is asymptotically distributed χ² with k² degrees of freedom.
Normality Test
Reports the multivariate extensions of the Jarque-Bera residual normality test, which compares the third and fourth moments of the residuals to those from the normal distribution.
For the multivariate test, you must choose a factorization of the k residuals that are orthogonal to each other (see Impulse Responses on page 631 for additional discussion of the
need for orthogonalization).
Let P be a k × k factorization matrix such that:

    v_t = Pu_t ~ N(0, I_k)     (38.7)

where u_t is the vector of demeaned residuals. Define the third and fourth moment vectors m_3 = Σ_t v_t³/T and m_4 = Σ_t v_t⁴/T. Then:
    √T [ m_3      ]  →  N( 0, [ 6I_k    0   ] )     (38.8)
       [ m_4 − 3  ]           [  0    24I_k ]
under the null hypothesis of a normal distribution. Since each component is independent of the others, we can form a χ² statistic by summing squares of any of these third and fourth moments.
EViews provides you with choices for the factorization matrix P :
Cholesky (Lütkepohl 1991, p. 155–158): P is the inverse of the lower triangular Cholesky factor of the residual covariance matrix. The resulting test statistics depend on the ordering of the variables in the VAR.
Inverse Square Root of Residual Correlation Matrix (Doornik and Hansen 1994): P = HΛ^(−1/2)H'V, where Λ is a diagonal matrix containing the eigenvalues of the residual correlation matrix on the diagonal, H is a matrix whose columns are the corresponding eigenvectors, and V is a diagonal matrix containing the inverse square root of the residual variances on the diagonal. This P is essentially the inverse square root of the residual correlation matrix. The test is invariant to the ordering and to the scale of the variables in the VAR. As suggested by Doornik and Hansen (1994), we perform a small sample correction to the transformed residuals v_t before computing the statistics.
Inverse Square Root of Residual Covariance Matrix (Urzua 1997): P = GD^(−1/2)G', where D is the diagonal matrix containing the eigenvalues of the residual covariance matrix on the diagonal and G is a matrix whose columns are the corresponding eigenvectors. This test has a specific alternative, which is the quartic exponential distribution. According to Urzua, this is the most likely alternative to the multivariate normal with finite fourth moments since it can approximate the multivariate Pearson family as closely as needed. As recommended by Urzua, we make a small sample correction to the transformed residuals v_t before computing the statistics. This small sample correction differs from the one used by Doornik and Hansen (1994); see Urzua (1997, Section D).
Factorization from Identified (Structural) VAR: P = B⁻¹A, where A, B are estimated from the structural VAR model. This option is available only if you have estimated the factorization matrices A and B using the structural VAR (see page 637, below).
EViews reports test statistics for each orthogonal component (labeled RESID1, RESID2, and so on) and for the joint test. For individual components, the estimated skewness m_3 and kurtosis m_4 are reported in the first two columns together with the p-values from the χ²(1) distribution (in square brackets). The Jarque-Bera column reports:

    T ( m_3²/6 + (m_4 − 3)²/24 )     (38.9)

with p-values from the χ²(2) distribution. Note: in contrast to the Jarque-Bera statistic computed in the series view, this statistic is not computed using a degrees of freedom correction.
For the joint tests, we will generally report:

    λ_3 = T m_3'm_3 / 6 ~ χ²(k)
    λ_4 = T (m_4 − 3)'(m_4 − 3) / 24 ~ χ²(k)
    λ = λ_3 + λ_4 ~ χ²(2k).     (38.10)

If, however, you choose Urzua's (1997) test, λ will not only use the sum of squares of the pure third and fourth moments but will also include the sum of squares of all cross third and fourth moments. In this case, λ is asymptotically distributed as a χ² with k(k+1)(k+2)(k+7)/24 degrees of freedom.
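A hypothetical NumPy sketch (not EViews code; no small sample correction is applied) of the component and joint statistics in Equations (38.7)-(38.10), using the Cholesky factorization option for P:

    import numpy as np

    def multivariate_jarque_bera(resid):
        """Multivariate residual normality statistics, Equations (38.7)-(38.10),
        using the inverse Cholesky factor of the residual covariance as P."""
        u = resid - resid.mean(axis=0)                 # demeaned residuals
        T, k = u.shape
        P = np.linalg.inv(np.linalg.cholesky(u.T @ u / T))
        v = u @ P.T                                    # orthogonalized residuals v_t = P u_t
        m3 = (v ** 3).mean(axis=0)                     # skewness vector
        m4 = (v ** 4).mean(axis=0)                     # kurtosis vector
        jb_components = T * (m3 ** 2 / 6.0 + (m4 - 3.0) ** 2 / 24.0)   # (38.9)
        lam3 = T * (m3 @ m3) / 6.0                     # joint skewness, chi2(k)
        lam4 = T * ((m4 - 3.0) @ (m4 - 3.0)) / 24.0    # joint kurtosis, chi2(k)
        return jb_components, lam3 + lam4              # joint statistic, chi2(2k)

    rng = np.random.default_rng(1)
    print(multivariate_jarque_bera(rng.normal(size=(500, 3))))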
Cointegration Test
This view performs the Johansen cointegration test for the variables in your VAR. See
Johansen Cointegration Test, on page 939 for a description of the basic test methodology.
Note that Johansen cointegration tests may also be performed from a Group object; however, tests performed using the latter do not permit you to impose identifying restrictions on the cointegrating vector.
Notes on Comparability
Many of the diagnostic tests given above may be computed manually by estimating the
VAR using a system object and selecting View/Wald Coefficient Tests... We caution you
that the results from the system will not match those from the VAR diagnostic views for various reasons:
The system object will, in general, use the maximum possible observations for each
equation in the system. By contrast, VAR objects force a balanced sample in case there
are missing values.
The estimates of the weighting matrix used in system estimation do not contain a
degrees of freedom correction (the residual sums-of-squares are divided by T rather
than by T k ), while the VAR estimates do perform this adjustment. Even though
estimated using comparable specifications and yielding identifiable coefficients, the
test statistics from system SUR and the VARs will show small (asymptotically insignificant) differences.
Impulse Responses
A shock to the i-th variable not only directly affects the i-th variable but is also transmitted
to all of the other endogenous variables through the dynamic (lag) structure of the VAR. An
impulse response function traces the effect of a one-time shock to one of the innovations on
current and future values of the endogenous variables.
If the innovations e t are contemporaneously uncorrelated, interpretation of the impulse
response is straightforward. The i-th innovation e i, t is simply a shock to the i-th endogenous variable y i, t . Innovations, however, are usually correlated, and may be viewed as having a common component which cannot be associated with a specific variable. In order to
interpret the impulses, it is common to apply a transformation P to the innovations so that
they become uncorrelated:
    v_t = Pε_t ~ (0, D)     (38.11)
To obtain the impulse response functions, first estimate a VAR. Then select View/Impulse
Response... from the VAR toolbar. You will see a dialog box with two tabs: Display and
Impulse Definition.
The Display tab provides the following options:
Display Format: displays
results as a table or graph.
Keep in mind that if you
choose the Combined Graphs
option, the Response Standard Errors option will be
ignored and the standard
errors will not be displayed.
Note also that the output table
format is ordered by response
variables, not by impulse variables.
Display Information: you should enter the variables for which you wish to generate
innovations (Impulses) and the variables for which you wish to observe the
responses (Responses). You may either enter the name of the endogenous variables
or the numbers corresponding to the ordering of the variables. For example, if you
specified the VAR as GDP, M1, CPI, then you may either type,
GDP CPI M1
or,
1 3 2
The order in which you enter these variables only affects the display of results.
You should also specify a positive integer for the number of periods to trace the
response function. To display the accumulated responses, check the Accumulate
Response box. For stationary VARs, the impulse responses should die out to zero and
the accumulated responses should asymptote to some (non-zero) constant.
Response Standard Errors: provides options for computing the response standard
errors. Note that analytic and/or Monte Carlo standard errors are currently not available for certain Impulse options and for vector error correction (VEC) models. If you
choose Monte Carlo standard errors, you should also specify the number of repetitions to use in the appropriate edit box.
If you choose the table format, the estimated standard errors will be reported in
parentheses below the responses. If you choose to display the results in multiple
graphs, the graph will contain the plus/minus two standard error bands about the
impulse responses. The standard error bands are not displayed in combined graphs.
The Impulse tab provides the following options for transforming the impulses:
Residual-One Unit sets the impulses to one unit of the residuals. This option
ignores the units of measurement and the correlations in the VAR residuals so that no
transformation is performed. The responses from this option are the MA coefficients
of the infinite MA order Wold representation of the VAR.
Residual-One Std. Dev. sets the impulses to one standard deviation of the residuals.
This option ignores the correlations in the VAR residuals.
Cholesky uses the inverse of the Cholesky factor of the residual covariance matrix to
orthogonalize the impulses. This option imposes an ordering of the variables in the
VAR and attributes all of the effect of any common component to the variable that
comes first in the VAR system. Note that responses can change dramatically if you
change the ordering of the variables. You may specify a different VAR ordering by
reordering the variables in the Cholesky Ordering edit box.
The (d.f. adjustment) option makes a small sample degrees of freedom correction when estimating the residual covariance matrix used to derive the Cholesky factor. The (i,j)-th element of the residual covariance matrix with degrees of freedom correction is computed as Σ_t e_{i,t} e_{j,t} / (T − p), where p is the number of parameters per equation in the VAR. The (no d.f. adjustment) option estimates the (i,j)-th element of the residual covariance matrix as Σ_t e_{i,t} e_{j,t} / T. Note: early versions of EViews computed the impulses using the Cholesky factor from the residual covariance matrix with no degrees of freedom adjustment.
Generalized Impulses as described by Pesaran and Shin (1998) constructs an orthogonal set of innovations that does not depend on the VAR ordering. The generalized
impulse responses from an innovation to the j-th variable are derived by applying a
variable specific Cholesky factor computed with the j-th variable at the top of the
Cholesky ordering.
Structural Decomposition uses the orthogonal transformation estimated from the
structural factorization matrices. This approach is not available unless you have estimated the structural factorization matrices as explained in Structural (Identified)
VARs on page 637.
User Specified allows you to specify your own impulses. Create a matrix (or vector)
that contains the impulses and type the name of that matrix in the edit box. If the VAR
has k endogenous variables, the impulse matrix must have k rows and 1 or k columns, where each column is an impulse vector.
For example, say you have a k = 3 variable VAR and wish to apply simultaneously a
positive one unit shock to the first variable and a negative one unit shock to the second variable. Then you will create a 3 × 1 impulse matrix containing the values 1, -1,
and 0. Using commands, you can enter:
matrix(3,1) shock
shock.fill(by=c) 1,-1,0
and type the name of the matrix SHOCK in the edit box.
Variance Decomposition
While impulse response functions trace the effects of a shock to one endogenous variable on
to the other variables in the VAR, variance decomposition separates the variation in an
endogenous variable into the component shocks to the VAR. Thus, the variance decomposition provides information about the relative importance of each random innovation in affecting the variables in the VAR.
To obtain the variance decomposition,
select View/Variance Decomposition...
from the var object toolbar. You should
provide the same information as for
impulse responses above. Note that since
non-orthogonal factorization will yield
decompositions that do not satisfy an adding up property, your choice of factorization is limited to the Cholesky orthogonal
factorizations.
The table format displays a separate variance decomposition for each endogenous
variable. The second column, labeled
S.E., contains the forecast error of the
variable at the given forecast horizon. The source of this forecast error is the variation in the
current and future values of the innovations to each endogenous variable in the VAR. The
remaining columns give the percentage of the forecast variance due to each innovation, with
each row adding up to 100.
As with the impulse responses, the variance decomposition based on the Cholesky factor
can change dramatically if you alter the ordering of the variables in the VAR. For example,
the first period decomposition for the first variable in the VAR ordering is completely due to
its own innovation.
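For intuition, a hypothetical NumPy sketch (not EViews code; the function name and VAR(1) simplification are assumptions) of a Cholesky-based variance decomposition: the h-step forecast error variance of each variable is split across the orthogonalized innovations, and the shares in each row sum to 100.

    import numpy as np

    def variance_decomposition(A1, sigma, horizon):
        """Cholesky variance decomposition for a VAR(1) y_t = A1 y_{t-1} + e_t.
        Returns an array (horizon, k, k): percentage of the forecast error
        variance of variable i attributable to orthogonalized innovation j."""
        k = A1.shape[0]
        P = np.linalg.cholesky(sigma)                 # lower triangular Cholesky factor
        Phi = np.eye(k)                               # MA coefficient Psi_0
        contrib = np.zeros((k, k))                    # running sums of squared responses
        shares = np.zeros((horizon, k, k))
        for h in range(horizon):
            theta = Phi @ P                           # orthogonalized responses at lag h
            contrib += theta ** 2
            total = contrib.sum(axis=1, keepdims=True)    # forecast error variance
            shares[h] = 100.0 * contrib / total
            Phi = A1 @ Phi                            # next MA coefficient
        return shares

    A1 = np.array([[0.5, 0.1], [0.2, 0.3]])
    sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
    print(variance_decomposition(A1, sigma, 4)[0])    # first-period decomposition

Note how the first-period decomposition attributes 100 percent of the first variable's variance to its own innovation, exactly as described above.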
Factorization based on structural orthogonalization is available only if you have estimated
the structural factorization matrices as explained in Structural (Identified) VARs on
page 637. Note that the forecast standard errors should be identical to those from the Cholesky factorization if the structural VAR is just identified. For over-identified structural VARs,
the forecast standard errors may differ in order to maintain the adding up property.
Procs of a VAR
Most of the procedures available for a VAR are common to those available for a system
object (see System Procs on page 599). Here, we discuss only those procedures that are
unique to the VAR object.
Forecasting
You may produce forecasts directly from an estimated VAR object by clicking on the Forecast
button or by selecting Proc/Forecast. EViews will display the forecast dialog:
Most of the dialog should be familiar from the standard equation forecast dialog. There are,
however, a few minor differences.
First, the fields in which you enter the forecast name and optional S.E. series names now
refer to the character suffix which you will use to form output series names. By default, as
depicted here, EViews will append the letter f to the end of the original series names to
form the output series names. If necessary, the original name will be converted into a valid
EViews series name.
Second, if you choose to compute standard errors of the forecast, EViews will obtain those
values via simulation. You will be prompted for the number of Simulation repetitions, and
the % failed reps before halting the simulation setting.
Lastly, in addition to a Forecast evaluation, you are given a choice of whether to display the
output graphs as Individual graphs, as Multiple graphs, or both.
Clicking on OK instructs EViews to perform the forecast and, if appropriate, to display the output:
In this case, the output consists of a spool containing the forecast evaluation of the series in
the VAR, along with individual graphs of the forecasts and the corresponding actuals series.
Make System
This proc creates a system object that contains an equivalent VAR specification. If you want
to estimate a non-standard VAR, you may use this proc as a quick way to specify a VAR in a
system object which you can then modify to meet your needs. For example, while the VAR
object requires each equation to have the same lag structure, you may want to relax this
restriction. To estimate a VAR with unbalanced lag structure, use the Proc/Make System
procedure to create a VAR system with a balanced lag structure and edit the system specification to meet the desired lag specification.
The By Variable option creates a system whose specification (and coefficient number) is
ordered by variables. Use this option if you want to edit the specification to exclude lags of a
specific variable from some of the equations. The By Lag option creates a system whose
specification (and coefficient number) is ordered by lags. Use this option if you want to edit
the specification to exclude certain lags from some of the equations.
For vector error correction (VEC) models, treating the coefficients of the cointegrating vector
as additional unknown coefficients will make the resulting system unidentified. In this case,
EViews will create a system object where the coefficients for the cointegrating vectors are
fixed at the estimated values from the VEC. If you want to estimate the coefficients of the
cointegrating vector in the system, you may edit the specification, but you should make certain that the resulting system is identified.
You should also note that while the standard VAR can be estimated efficiently by equation-by-equation OLS, this is generally not the case for the modified specification. You may wish
to use one of the system-wide estimation methods (e.g. SUR) when estimating non-standard
VARs using the system object.
Structural (Identified) VARs

EViews estimates the structural factorization in the class of models where the reduced form and structural innovations are related by:

    Ae_t = Bu_t     (38.12)

where e_t and u_t are vectors of length k. e_t is the observed (or reduced form) residual, while u_t is the unobserved structural innovation. A and B are k × k matrices to be estimated. The structural innovations u_t are assumed to be orthonormal, i.e. their covariance matrix is the identity matrix, E[u_t u_t'] = I. The assumption of orthonormal innovations u_t imposes the following identifying restrictions on A and B:

    AΣA' = BB'     (38.13)

where Σ = E[e_t e_t'] is the residual covariance matrix. Noting that the expressions on either side of (38.13) are symmetric, this imposes k(k+1)/2 restrictions on the 2k² unknown elements in A and B. Therefore, in order to identify A and B, you need to supply at least 2k² − k(k+1)/2 = k(3k−1)/2 additional restrictions.
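A small Python sketch (illustrative only) of this counting exercise for the order condition:

    def svar_order_condition(k):
        """Number of additional restrictions needed to identify A and B in an
        AB-model SVAR with k variables: 2k^2 - k(k+1)/2 = k(3k-1)/2."""
        unknowns = 2 * k * k                  # free elements of A and B
        covariance_restrictions = k * (k + 1) // 2
        return unknowns - covariance_restrictions

    for k in (2, 3, 4):
        print(k, svar_order_condition(k))     # 2 -> 5, 3 -> 12, 4 -> 22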
There are two types of identifying restrictions: short-run and long-run. For either type, the identifying restrictions can be specified either in text form or by pattern matrices.
Short-run Restrictions

For example, suppose you have a k = 3 variable VAR where you wish to restrict A to be a lower triangular matrix with ones on the main diagonal and B to be a diagonal matrix. Writing unrestricted elements as NA, the pattern matrices are:

    A = [  1   0   0 ]          [ NA   0   0 ]
        [ NA   1   0 ] ,    B = [  0  NA   0 ] .     (38.14)
        [ NA  NA   1 ]          [  0   0  NA ]
You can create these matrices interactively. Simply use Object/New Object... to create two
new 3 3 matrices, A and B, and then use the spreadsheet view to edit the values. Alternatively, you can issue the following commands:
matrix(3,3) pata
' fill matrix in row major order
pata.fill(by=r) 1,0,0, na,1,0, na,na,1
matrix(3,3) patb = 0
patb(1,1) = na
patb(2,2) = na
patb(3,3) = na
Once you have created the pattern matrices, select Proc/Estimate Structural Factorization... from the VAR window menu. In the SVAR Options dialog, click the Matrix button
and the Short-Run Pattern button and type in the name of the pattern matrices in the relevant edit boxes.
To take an example, suppose again that you have a k = 3 variable VAR where you want to
restrict A to be a lower triangular matrix with ones on the main diagonal and B to be a
diagonal matrix. Under these restrictions, the relation Ae_t = Bu_t can be written as:

    e_1 = b_11 u_1
    e_2 = −a_21 e_1 + b_22 u_2     (38.15)
    e_3 = −a_31 e_1 − a_32 e_2 + b_33 u_3
To specify these restrictions in text form, select Proc/Estimate Structural Factorization...
from the VAR window and click the Text button. In the edit window, you should type the
following:
@e1 = c(1)*@u1
@e2 = -c(2)*@e1 + c(3)*@u2
@e3 = -c(4)*@e1 - c(5)*@e2 + c(6)*@u3
The special key symbols "@e1," "@e2," "@e3" represent the first, second, and third elements of the e_t vector, while "@u1," "@u2," "@u3" represent the first, second, and third elements of the u_t vector. In this example, all unknown elements of the A and B matrices
are represented by elements of the C coefficient vector.
Long-run Restrictions
The identifying restrictions embodied in the relation Ae_t = Bu_t are commonly referred to as short-run restrictions. Blanchard and Quah (1989) proposed an alternative identification method based on restrictions on the long-run properties of the impulse responses. The (accumulated) long-run response C to structural innovations takes the form:

    C = Ψ̂ A⁻¹B     (38.16)

where Ψ̂ = (I − Â_1 − ... − Â_p)⁻¹ is the estimated accumulated response to the reduced form (observed) shocks. Long-run identifying restrictions are specified in terms of the elements of this C matrix, typically in the form of zero restrictions. The restriction C_{i,j} = 0 means that the (accumulated) response of the i-th variable to the j-th structural shock is zero in the long-run.
It is important to note that the expression for the long-run response (38.16) involves the
inverse of A . Since EViews currently requires all restrictions to be linear in the elements of
A and B , if you specify a long-run restriction, the A matrix must be the identity matrix.
To specify long-run restrictions by a pattern matrix, create a named matrix that contains the
pattern for the long-run response matrix C . Unrestricted elements in the C matrix should
be assigned a missing value NA. For example, suppose you have a k = 2 variable VAR
where you want to restrict the long-run response of the second endogenous variable to the
first structural shock to be zero C 2, 1 = 0 . Then the long-run response matrix will have the
following pattern:
    C = [ NA  NA ]     (38.17)
        [  0  NA ]
Once you have created the pattern matrix, select Proc/Estimate Structural Factorization...
from the VAR window menu. In the SVAR Options dialog, click the Matrix button and the
Long-Run Pattern button and type in the name of the pattern matrix in the relevant edit
box.
To specify the same long-run restriction in text form, select Proc/Estimate Structural Factorization... from the VAR window and click the Text button. In the edit window, you would
type the following:
@lr2(@u1)=0  ' zero LR response of 2nd variable to 1st shock
where everything on the line after the apostrophe is a comment. This restriction begins with
the special keyword @LR#, with the # representing the response variable to restrict.
Inside the parentheses, you must specify the impulse keyword @U and the innovation
number, followed by an equal sign and the value of the response (typically 0). We caution
you that while you can list multiple long-run restrictions, you cannot mix short-run and
long-run restrictions.
Note that it is possible to specify long-run restrictions as short-run restrictions (by obtaining
the infinite MA order representation). While the estimated A and B matrices should be the
same, the impulse response standard errors from the short-run representation would be
incorrect (since it does not take into account the uncertainty in the estimated infinite MA
order coefficients).
The identifying restriction assumes that the structural innovations u t have unit variances. Therefore, you will almost always want to estimate the diagonal elements of
the B matrix so that you obtain estimates of the standard deviations of the structural
shocks.
It is common in the literature to assume that the structural innovations have a diagonal covariance matrix rather than an identity matrix. To compare your results to those from these studies, you will have to divide each column of the B matrix by the diagonal element in that column (so that the resulting B matrix has ones on the main diagonal). To illustrate this transformation, consider a simple k = 2 variable model with A = I:

    e_1,t = b_11 u_1,t + b_12 u_2,t
    e_2,t = b_21 u_1,t + b_22 u_2,t     (38.18)

which may be rewritten in terms of rescaled innovations as:

    e_1,t = v_1,t + (b_12/b_22) v_2,t
    e_2,t = (b_21/b_11) v_1,t + v_2,t     (38.19)
where now:

    B = [    1       b_12/b_22 ] ,    v_t = [ v_1,t ] ~ ( 0, [ b_11²    0   ] )     (38.20)
        [ b_21/b_11      1     ]            [ v_2,t ]        [   0    b_22² ]
Note that the transformation involves only rescaling elements of the B matrix and not the A matrix. For the case where B is a diagonal matrix, the elements on the main diagonal are simply the estimated standard deviations of the structural shocks.
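A hypothetical NumPy sketch (not EViews code; the function name is an assumption) of this rescaling: divide each column of the estimated B matrix by its diagonal element, and collect those diagonal elements as the standard deviations of the rescaled structural shocks.

    import numpy as np

    def rescale_b_matrix(B):
        """Rescale B so the main diagonal is one; the rescaled structural shocks
        then have a diagonal covariance with the squared diagonal elements."""
        d = np.diag(B).copy()        # b_11, b_22, ... (shock standard deviations)
        B_rescaled = B / d           # divide column j by b_jj (broadcast over rows)
        return B_rescaled, d

    B = np.array([[0.8, 0.2],
                  [0.1, 0.5]])
    B_tilde, shock_sd = rescale_b_matrix(B)
    print(B_tilde)                   # ones on the main diagonal
    print(shock_sd)                  # [0.8, 0.5]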
Identification Conditions

As stated above, the assumption of orthonormal structural innovations imposes k(k+1)/2 restrictions on the 2k² unknown elements in A and B, where k is the number of endogenous variables in the VAR. In order to identify A and B, you need to provide at least 2k² − k(k+1)/2 = k(3k−1)/2 additional identifying restrictions. This is a necessary order condition for identification and is checked by counting the number of restrictions provided.
As discussed in Amisano and Giannini (1997), a sufficient condition for local identification can be checked by the invertibility of the "augmented" information matrix (see Amisano and Giannini, 1997). This local identification condition is evaluated numerically at the starting values. If EViews returns a singularity error message for different starting values, you should make certain that your restrictions identify the A and B matrices.

We also require the A and B matrices to be square and non-singular. The non-singularity condition is checked numerically at the starting values. If the A or B matrix is singular at the starting values, an error message will ask you to provide a different set of starting values.
Sign Indeterminacy
For some restrictions, the signs of the A and B matrices are not identified; see Christiano,
Eichenbaum, and Evans (1999) for a discussion of this issue. When the sign is indeterminate, we choose a normalization so that the diagonal elements of the factorization matrix
A 1 B are all positive. This normalization ensures that all structural impulses have positive
signs (as does the Cholesky factorization). The default is to always apply this normalization
rule whenever applicable. If you do not want to switch the signs, deselect the Normalize
Sign option from the Optimization Control tab of the SVAR Options dialog.
A and B are estimated by maximum likelihood, assuming the innovations are multivariate
normal. We evaluate the likelihood in terms of unconstrained parameters by substituting out
the constraints. The log likelihood is maximized by the method of scoring (with a Marquardt-type diagonal correction; see "Marquardt," on page 1013), where the gradient and
expected information matrix are evaluated analytically. See Amisano and Giannini (1997)
for the analytic expression of these derivatives.
Optimization Control
Options for controlling the optimization process are provided in the Optimization Control
tab of the SVAR Options dialog. You have the option to specify the starting values, maximum number of iterations, and the convergence criterion.
The starting values are those for the unconstrained parameters after substituting out the
constraints. Fixed sets all free parameters to the value specified in the edit box. User Specified uses the values in the coefficient vector as specified in text form as starting values. For
restrictions specified in pattern form, user specified starting values are taken from the first
m elements of the default C coefficient vector, where m is the number of free parameters.
Draw from... options randomly draw the starting values for the free parameters from the
specified distributions.
Estimation Output
Once convergence is achieved, EViews displays the estimation output in the VAR window.
The point estimates, standard errors, and z-statistics of the estimated free parameters are
reported together with the maximized value of the log likelihood. The estimated standard
errors are based on the inverse of the estimated information matrix (negative expected value
of the Hessian) evaluated at the final estimates.
For overidentified models, we also report the LR test for over-identification. The LR test statistic is computed as:

    LR = 2(l_u − l_r) = T ( tr(P) − log|P| − k )     (38.21)

where P = A'(B')⁻¹B⁻¹AΣ̂. Under the null hypothesis that the restrictions are valid, the LR statistic is asymptotically distributed χ²(q − k), where q is the number of identifying restrictions.
If you switch the view of the VAR window, you can come back to the previous results (without reestimating) by selecting View/Estimation Output from the VAR window. In addition,
some of the SVAR estimation results can be retrieved as data members of the VAR; see Var
Data Members on page 814 of the Command and Programming Reference for a list of available VAR data members.
For example, consider a two-variable system with one cointegrating equation and no lagged difference terms. The cointegrating equation is:

    y_2,t = βy_1,t     (38.22)

and the corresponding VEC model is:

    Δy_1,t = α_1 ( y_2,t−1 − βy_1,t−1 ) + ε_1,t
    Δy_2,t = α_2 ( y_2,t−1 − βy_1,t−1 ) + ε_2,t     (38.23)

In this simple model, the only right-hand side variable is the error correction term. In long run equilibrium, this term is zero. However, if y_1 and y_2 deviate from the long run equilibrium, the error correction term will be nonzero and each variable adjusts to partially restore the equilibrium relation. The coefficient α_i measures the speed of adjustment of the i-th endogenous variable towards the equilibrium.
Cointegrating Relations
View/Cointegration Graph displays a graph of the estimated cointegrating relations as used
in the VEC. To store these estimated cointegrating relations as named series in the workfile,
use Proc/Make Cointegration Group. This proc will create and display an untitled group
object containing the estimated cointegrating relations as named series. These series are
named COINTEQ01, COINTEQ02 and so on.
Forecasting
To forecast from your VEC, click on the Forecast button on the toolbar and fill out the dialog
as described in "Forecasting," on page 635.
Data Members
Various results from the estimated VAR/VEC can be retrieved through the command line
data members. Var Data Members on page 814 of the Command and Programming Refer-
ence provides a complete list of data members that are available for a VAR object. Here, we
focus on retrieving the estimated coefficients of a VAR/VEC.
To examine the correspondence between each element of the coefficient matrices (C for a VAR; A, B, and C for a VEC) and the estimated coefficients, select View/Representations from the VAR toolbar.
Imposing Restrictions
Since the cointegrating vector b is not fully identified, you may wish to impose your own
identifying restrictions when performing estimation.
Restrictions can be imposed on the
cointegrating vector (elements of the
b matrix) and/or on the adjustment
coefficients (elements of the a
matrix). To impose restrictions in estimation, open the test, select Vector
Error Correction in the main VAR
estimation dialog, then click on the
VEC Restrictions tab. You will enter
your restrictions in the edit box that
appears when you check the Impose
Restrictions box:
Restrictions on the cointegrating vector are expressed in terms of the elements B(i,j), the coefficient on the j-th variable in the i-th cointegrating relation, where y1, y2, ... are the (lagged) endogenous variables. Then, if you want to impose the restriction that the coefficient on y1 for the second cointegrating equation is 1, you would type the following in the edit box:

B(2,1) = 1
You can impose multiple restrictions by separating each restriction with a comma on the
same line or typing each restriction on a separate line. For example, if you want to impose
the restriction that the coefficients on y1 for the first and second cointegrating equations are
1, you would type:
B(1,1) = 1
B(2,1) = 1
Currently all restrictions must be linear (or more precisely affine) in the elements of the b matrix. So, for example, a restriction such as

B(1,1) * B(2,1) = 1

is not valid.

Restrictions on the adjustment coefficients are currently limited to linear homogeneous restrictions, so that you must be able to write your restriction as R vec(a) = 0, where R is a known q × kr matrix. This condition implies, for example, that the restriction

A(1,1) = A(2,1)

is valid, but:

A(1,1) = 1

is not.
To impose multiple restrictions, you may either separate each restriction with a comma on
the same line or type each restriction on a separate line. For example, to test whether the
second endogenous variable is weakly exogenous with respect to b in a VEC with two cointegrating relations, you can type:
A(2,1) = 0
A(2,2) = 0
You may also impose restrictions on both b and a. However, the restrictions on b and a must be independent. So, for example,

A(1,1) = 0
B(1,1) = 1

is a valid set of restrictions.
Bayesian VAR
VARs are frequently used in the study of macroeconomic data. Since VARs frequently require estimation of a large number of parameters, over-parameterization of VAR models is often a problem, with too few observations to estimate the parameters of the model.

One approach for solving this problem is shrinkage, where we impose restrictions on parameters to reduce the parameter set. Bayesian VAR (BVAR) methods (Litterman, 1986; Doan, Litterman, and Sims, 1984; Sims and Zha, 1998) are one popular approach for achieving shrinkage, since Bayesian priors provide a logical and consistent method of imposing parameter restrictions.

The remainder of this discussion describes the estimation of VARs with Bayesian shrinkage restrictions. We first describe the set of EViews tools for estimating and working with BVARs and provide examples of the approach. This first section assumes that you are familiar with the various methods outlined in the literature. The remaining section outlines the methods in somewhat more detail.
The two BVAR specific tabs, Prior type and Prior specification, allow you to customize
your specification. The following discussion of these settings assumes that you are familiar
with the basics of the various prior types and associated settings. For additional detail, see
Technical Background on page 663.
Prior Type
The Prior type tab lets you specify the type of prior you wish to use, along with options for
calculating the initial residual covariance matrix.
Prior Specification
The Prior specification tab lets you
further specify the prior distributions
by either assigning hyper-parameter values, or providing a user-supplied prior matrix. If you
wish to assign hyper-parameter values, you should select the Hyper-parameters radio button in the Prior specification type box.
Litterman/Minnesota Prior

For the Litterman/Minnesota prior depicted here, you may specify the hyper-parameters using the four scalars μ_1, λ_1, λ_2, and λ_3.

As described below, the prior mean is likely to have most or all of its elements set to zero to lessen the risk of over-fitting, and this implies that μ_1 should be close to zero.

For reference, Koop and Korobilis (2009) set λ_3 equal to 2, whereas Kadiyala and Karlsson (1997) choose λ_3 to be 1 (a special case, linear decay) for their particular application.
To specify your own hyper-parameter values, select the User-specified radio button. If you choose User-specified, you should provide the following information:
Coefficient means. Fill in the edit box with the name of a vector in the workfile containing a prior mean for the coefficients.
Coefficient covariance. If desired, you may provide the name of a matrix containing a
prior covariance for the coefficients.
Normal-Wishart Prior

For the normal-Wishart prior, you can specify the two hyper-parameters μ_1 and λ_1 (where the prior coefficient mean and covariance are μ_1 ι_m and λ_1 I_m, respectively, for ι_m an m-element unit vector and I_m an m × m identity matrix).

Note that the prior covariance has the form V_0 = λ_1 I_m (to ensure natural conjugacy of the prior). This result implies that the prior covariance in every equation is identically equal to λ_1, which may be an undesirable restriction.
If you select User-specified you should enter the name of a vector in your workfile containing a prior mean for the coefficients.
Sims-Zha Priors

The hyper-parameters for both Sims-Zha priors may be specified by setting the five scalar values μ_5, μ_6, λ_1, λ_3, and λ_0.

If you select User-specified you should provide the name of a matrix containing a prior covariance for the coefficients in the H matrix edit box, and the name of a matrix containing a residual prior scale matrix in the Residual scale matrix edit box.
An Example

To illustrate the Bayesian approach, we now estimate the coefficients of a VAR(2) model for the first differences of the logarithms of investment, income, and consumption, stored in the example data as DLINVESTMENT, DLINCOME, and DLCONSUMPTION. The raw data are provided in the EViews workfile "wgmacro.WF1". This data set was examined by Lütkepohl (2007, page 228).

Click on Quick/Estimate VAR... to open the main VAR specification dialog. In the VAR type box, select Bayesian VAR and in the Endogenous Variables box, type:

dlincome dlinvestment dlconsumption

Here, you will see the pre-filled settings including the variable names. You may change the default settings, but for now, we assume that the default settings are used.
Next, click on the Prior type tab to select the prior type for the VAR. By default, EViews will
choose the Litterman/Minnesota prior and the Univariate AR estimate for the Initial
residual covariance options, but you can change the prior type and the initial covariance
estimation option from the menus.
The Prior specification tab shows the hyper-parameter settings. Note that the settings may
vary depending on the prior type. We will use the default settings for our example so that
you may click on OK to continue.
EViews estimates the VAR and displays the results view. The top portion of the main results
is shown below. The heading information provides the basic information about the settings
used in estimation, and the basic prior information:
                      DLINVESTMENT    DLINCOME        DLCONSUMPTION

 DLINVESTMENT(-1)     -0.093779        0.017748       -0.003903
                      (0.07669)       (0.01955)       (0.01652)
                      [-1.22277]      [ 0.90787]      [-0.23629]

 DLINVESTMENT(-2)     -0.010859        0.005534        0.007179
                      (0.04612)       (0.01173)       (0.00991)
                      [-0.23544]      [ 0.47195]      [ 0.72462]

 DLINCOME(-1)          0.150255       -0.017130        0.066732
                      (0.30170)       (0.07784)       (0.06538)
                      [ 0.49802]      [-0.22007]      [ 1.02066]

 DLINCOME(-2)          0.059967        0.010609        0.047408
                      (0.17853)       (0.04617)       (0.03868)
                      [ 0.33589]      [ 0.22975]      [ 1.22557]

 DLCONSUMPTION(-1)     0.272233        0.103522       -0.047166
                      (0.35591)       (0.09128)       (0.07758)
                      [ 0.76489]      [ 1.13412]      [-0.60799]

 DLCONSUMPTION(-2)     0.088063        0.002904        0.036281
                      (0.21118)       (0.05415)       (0.04615)
                      [ 0.41701]      [ 0.05362]      [ 0.78621]

 C                     0.008495        0.017854        0.017587
                      (0.01140)       (0.00293)       (0.00248)
                      [ 0.74534]      [ 6.09886]      [ 7.10390]

 R-squared             0.057882        0.058994        0.097916
 Adj. R-squared       -0.027765       -0.026552        0.015909
 Sum sq. resids        0.151955        0.009629        0.007093
 S.E. equation         0.047983        0.012079        0.010367
 F-statistic           0.675823        0.689612        1.193989
 Mean dependent        0.018229        0.020283        0.019802
 S.D. dependent        0.047330        0.011921        0.010451
In his study of these data, Lütkepohl chose a set of hyper-parameters different from those set by default in EViews, and chose to use a diagonal VAR to estimate the initial residual covariance. We can replicate his results by selecting the Diagonal VAR estimate on the Prior type tab of the dialog.
Since the estimates in the third row of Table 5.3 of Lütkepohl's example may be obtained using EViews' default hyper-parameter values, click on OK to estimate the modified BVAR specification.
 Standard errors in ( ) & t-statistics in [ ]

                      DLINVESTMENT      DLINCOME   DLCONSUMPTION

 DLINVESTMENT(-1)        -0.096453      0.017885       -0.003959
                         (0.07622)     (0.01924)       (0.01551)
                        [-1.26547]    [ 0.92950]      [-0.25524]

 DLINVESTMENT(-2)        -0.011337      0.005721        0.007308
                         (0.04601)     (0.01159)       (0.00934)
                        [-0.24639]    [ 0.49375]      [ 0.78263]

 DLINCOME(-1)             0.150439     -0.019351        0.069184
                         (0.30206)     (0.07717)       (0.06183)
                        [ 0.49805]    [-0.25076]      [ 1.11887]

 DLINCOME(-2)             0.061511      0.010797        0.049405
                         (0.17965)     (0.04601)       (0.03677)
                        [ 0.34239]    [ 0.23465]      [ 1.34362]

 DLCONSUMPTION(-1)        0.297322      0.112852       -0.051735
                         (0.36589)     (0.09293)       (0.07531)
                        [ 0.81261]    [ 1.21434]      [-0.68697]

 DLCONSUMPTION(-2)        0.100237      0.003454        0.040620
                         (0.22109)     (0.05615)       (0.04563)
                        [ 0.45338]    [ 0.06151]      [ 0.89022]

 C                        0.007766      0.017691        0.017498
                         (0.01147)     (0.00292)       (0.00235)
                        [ 0.67684]    [ 6.06270]      [ 7.43245]

 R-squared                0.060117      0.061359        0.102341
 Adj. R-squared          -0.025327     -0.023972        0.020736
 Sum sq. resids           0.151595      0.009605        0.007059
 S.E. equation            0.047926      0.012064        0.010342
 F-statistic              0.703587      0.719073        1.254100
 Mean dependent           0.018229      0.020283        0.019802
 S.D. dependent           0.047330      0.011921        0.010451
The results in the other rows of Table 5.3 may be obtained by changing the hyper-parameters. For example, to obtain the results in the fourth row, go to the Prior Specification tab in the
estimation dialog and change Lambda1 to 0.01:
Click on OK to estimate the updated specification. The resulting estimation output is displayed
below:
 Standard errors in ( ) & t-statistics in [ ]

                      DLINVESTMENT      DLINCOME   DLCONSUMPTION

 DLINVESTMENT(-1)        -0.001468      0.000349       -6.90E-05
                         (0.00996)     (0.00250)       (0.00202)
                        [-0.14733]    [ 0.13943]      [-0.03417]

 DLINVESTMENT(-2)        -7.78E-05      5.85E-05        0.000101
                         (0.00500)     (0.00126)       (0.00101)
                        [-0.01558]    [ 0.04655]      [ 0.10020]

 DLINCOME(-1)             0.003243      7.94E-05        0.001141
                         (0.03884)     (0.00996)       (0.00795)
                        [ 0.08350]    [ 0.00797]      [ 0.14351]

 DLINCOME(-2)             0.000815      0.000184        0.000605
                         (0.01947)     (0.00500)       (0.00399)
                        [ 0.04186]    [ 0.03685]      [ 0.15188]

 DLCONSUMPTION(-1)        0.005173      0.002182       -0.000644
                         (0.04817)     (0.01223)       (0.00996)
                        [ 0.10740]    [ 0.17837]      [-0.06470]

 DLCONSUMPTION(-2)        0.001138      6.03E-05        0.000608
                         (0.02416)     (0.00614)       (0.00499)
                        [ 0.04712]    [ 0.00982]      [ 0.12166]

 C                        0.018046      0.020225        0.019766
                         (0.00559)     (0.00142)       (0.00114)
                        [ 3.23065]    [ 14.2547]      [ 17.2838]

 R-squared                0.001169      0.001471        0.001673
 Adj. R-squared          -0.089634     -0.089305       -0.089084
 Sum sq. resids           0.161103      0.010218        0.007850
 S.E. equation            0.049406      0.012442        0.010906
 F-statistic              0.012870      0.016203        0.018431
 Mean dependent           0.018229      0.020283        0.019802
 S.D. dependent           0.047330      0.011921        0.010451
Different priors
To illustrate the importance of the prior selection, we estimate the same model using the
Sims-Zha normal-flat prior, with a univariate AR estimate for the initial residual covariance,
and the default hyper-parameter settings.
 Standard errors in ( ) & t-statistics in [ ]

                      DLINVESTMENT      DLINCOME   DLCONSUMPTION

 DLINVESTMENT(-1)        -0.093886      0.017931       -0.003959
                         (0.11763)     (0.02994)       (0.02557)
                        [-0.79818]    [ 0.59894]      [-0.15486]

 DLINVESTMENT(-2)        -0.010879      0.005632        0.007293
                         (0.46548)     (0.11847)       (0.10117)
                        [-0.02337]    [ 0.04754]      [ 0.07208]

 DLINCOME(-1)             0.151304     -0.016722        0.067412
                         (0.54919)     (0.13978)       (0.11937)
                        [ 0.27550]    [-0.11963]      [ 0.56473]

 DLINCOME(-2)             0.060782      0.010793        0.048193
                         (0.07073)     (0.01800)       (0.01537)
                        [ 0.85933]    [ 0.59956]      [ 3.13470]

 DLCONSUMPTION(-1)        0.275072      0.104512       -0.047074
                         (0.27613)     (0.07028)       (0.06002)
                        [ 0.99616]    [ 1.48710]      [-0.78432]

 DLCONSUMPTION(-2)        0.089479      0.002897        0.036280
                         (0.32665)     (0.08314)       (0.07100)
                        [ 0.27393]    [ 0.03485]      [ 0.51098]

 C                        0.008372      0.017819        0.017554
                         (0.01755)     (0.00447)       (0.00381)
                        [ 0.47701]    [ 3.98922]      [ 4.60183]

 R-squared                0.058142      0.059442        0.098831
 Adj. R-squared          -0.027481     -0.026063        0.016906
 Sum sq. resids           0.151914      0.009625        0.007086
 S.E. equation            0.047976      0.012076        0.010362
 F-statistic              0.679046      0.695184        1.206364
 Mean dependent           0.018229      0.020283        0.019802
 S.D. dependent           0.047330      0.011921        0.010451
We can see that the point estimates of the coefficients have changed, in some cases by a
large degree, when compared to our initial BVAR estimation using default settings. For example, the coefficient in the DLINVESTMENT equation for the lagged value of DLCONSUMPTION has decreased from a value of 0.272 to 0.004, with a corresponding change in t-statistic from 0.76 to 0.02.
Technical Background
Bayesian analysis requires knowledge of the distributional properties of the prior, likelihood,
and posterior. In Bayesian statistics and econometrics, anything about which we are uncertain, including the true value of a parameter, can be thought of as a random variable to which we can assign a probability distribution.
The prior is the external distributional information based on the researcher's beliefs about the parameters of interest. The likelihood is the data information contained in the sample probability distribution function (pdf). Combining the prior distribution via Bayes' theorem with the
data likelihood results in the posterior distribution.
In particular, denote the parameters of interest in a given model by v = (b, S_e) and the data by y. Let us say that the prior distribution is p(v) and the likelihood is l(y | v); then the posterior distribution p(v | y) is the distribution of v given the data y and may be derived as

$$p(v \mid y) = \frac{p(v)\,l(y \mid v)}{\int p(v)\,l(y \mid v)\,dv}$$

Note that the denominator $\int p(v)\,l(y \mid v)\,dv$ is a normalizing constant which does not depend on v, and thus the posterior is proportional to the product of the likelihood and the prior:

$$p(v \mid y) \propto p(v)\,l(y \mid v)$$
The main target of Bayesian estimation is to find the posterior moments of the parameter of
interest. For instance, location and dispersion are the general estimates which are comparable to those obtained in classical estimation (namely the classical coefficient estimate and
coefficient standard error). These point estimates can be easily derived from the posterior
because the posterior distribution contains all the information available on the parameter v .
To relate this general framework to Bayesian VAR (BVAR) models, suppose that we have the
VAR(p) model:
$$y_t = a_0 + \sum_{j=1}^{p} A_j y_{t-j} + e_t$$

This may be written in matrix form as

$$Y = XA + E \tag{38.24}$$

or, stacking columns so that y = vec(Y), e = vec(E), and v = vec(A), in vectorized form as

$$y = (I_m \otimes X)v + e \tag{38.25}$$

so that the likelihood function is

$$l(v, S_e) \propto |S_e \otimes I_T|^{-1/2}\exp\Big\{-\tfrac{1}{2}\big(y - (I_m \otimes X)v\big)'\big(S_e \otimes I_T\big)^{-1}\big(y - (I_m \otimes X)v\big)\Big\} \tag{38.26}$$
To illustrate how to derive the posterior moments, let us assume that S_e is known and assign a multivariate normal prior to v:

$$P(v) \propto |V_0|^{-1/2}\exp\Big\{-\tfrac{1}{2}(v - v_0)'V_0^{-1}(v - v_0)\Big\} \tag{38.27}$$
where v 0 is the prior mean and V 0 is the prior covariance. When we combine this prior
with the likelihood function in Equation (38.26), the posterior density can be written as
$$P(v \mid y) \propto \exp\Big\{-\tfrac{1}{2}\Big[\big(V_0^{-1/2}(v - v_0)\big)'\big(V_0^{-1/2}(v - v_0)\big) + \big\{(S_e^{-1/2}\otimes I_T)y - (S_e^{-1/2}\otimes X)v\big\}'\big\{(S_e^{-1/2}\otimes I_T)y - (S_e^{-1/2}\otimes X)v\big\}\Big]\Big\} \tag{38.28}$$

Defining

$$w = \begin{bmatrix} V_0^{-1/2}v_0 \\ (S_e^{-1/2}\otimes I_T)y \end{bmatrix}, \qquad W = \begin{bmatrix} V_0^{-1/2} \\ S_e^{-1/2}\otimes X \end{bmatrix} \tag{38.29}$$

the posterior may be written compactly as

$$P(v \mid y) \propto \exp\Big\{-\tfrac{1}{2}(w - Wv)'(w - Wv)\Big\} \tag{38.30}$$

Completing the square around

$$\bar v = (W'W)^{-1}W'w = \big[V_0^{-1} + (S_e^{-1}\otimes X'X)\big]^{-1}\big[V_0^{-1}v_0 + (S_e^{-1}\otimes X)'y\big]$$
we see that, since S_e is known, the second term in the resulting decomposition involves no randomness about v. The posterior may therefore be summarized as
$$P(v \mid y) \propto \exp\Big\{-\tfrac{1}{2}(v - \bar v)'\bar V^{-1}(v - \bar v)\Big\}, \qquad \text{where} \quad \bar V = \big[V_0^{-1} + (S_e^{-1}\otimes X'X)\big]^{-1}$$
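As a check on the interpretation of this result (a standard observation, not anything specific to EViews), note that when the prior is made diffuse, so that V₀⁻¹ → 0, the posterior mean collapses to the purely data-based quantity

$$\bar v \;\rightarrow\; \big(S_e^{-1}\otimes X'X\big)^{-1}\big(S_e^{-1}\otimes X\big)'y = \big(I_m \otimes (X'X)^{-1}X'\big)y,$$

which is simply the equation-by-equation OLS estimator. With no prior information, the Bayesian point estimate reproduces the classical one, a point noted again for the normal-Wishart prior below.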
Priors
A fundamental feature of Bayesian econometrics is the formulation of the prior distribution
of the parameters, based upon information which reflects the researchers' beliefs. A proper
Bayesian analysis will incorporate the prior information to strengthen inferences about the
true value of the parameters. An obvious argument against the use of prior distributions is
that a prior is intrinsically subjective and therefore offers the potential for manipulation.
EViews offers four different priors which have been popular in the BVAR literature:
1. The Litterman/Minnesota prior: A normal prior on v with fixed S e .
2. The Normal-Wishart prior: A normal prior on v and a Wishart prior on S_e.
3. The Sims-Zha normal-Wishart prior.
4. The Sims-Zha normal-flat prior: A normal prior on v and a non-informative prior on S_e.
It is worth noting that EViews only offers conjugate priors (whose posterior has the same
distributional family as the prior distribution). This restriction allows for analytical calculation of the Bayesian VAR, rather than simulation-based estimation (e.g. the MCMC method)
as is generally required. It is also worth noting that the choice of priors does not imply the
need for different Bayesian techniques of estimation. Disagreement over the priors may be
addressed by post-estimation sensitivity analysis evaluating the robustness of posterior
quantities of interest to different prior specifications.
Litterman/Minnesota prior
The Litterman/Minnesota prior treats the residual covariance S_e as fixed, replacing it with an estimate Ŝ_e that may be computed in one of three ways:
Univariate AR: the i-th diagonal element of Ŝ_e is the standard OLS estimate of the error variance calculated from a univariate AR regression using the i-th variable.
Full VAR: estimates a standard classical VAR and uses the covariance matrix from that estimation as the initial estimate of Ŝ_e. This choice is not always feasible in cases where there are not enough observations to estimate the full VAR.
Diagonal VAR: Ŝ_e is restricted to be a diagonal matrix (as in the univariate AR estimator); however, the diagonal elements of the matrix are calculated from the full classical VAR (i.e., the diagonal elements are equal to those in the full VAR method, and the non-diagonal elements are set equal to zero).
Since S_e is replaced by Ŝ_e, we need only specify a prior for the VAR coefficients v. The Litterman prior assumes that the prior of v is

$$v \sim N(v_0, V_0)$$

with v₀ = 0 (where the hyper-parameter μ₁ = 0, which indicates a zero mean model) and nonzero prior covariance V₀ ≠ 0. Note that although the choice of zero mean could lessen the risk of over-fitting, theoretically any value for μ₁ is possible.
To explain the Minnesota/Litterman prior for the covariance V 0 , note that the explanatory
variables in the VAR in any equation can be divided into own lags of the dependent variable,
lags of the other dependent variables, and finally any exogenous variables, including the
constant term. The elements of V 0 corresponding to exogenous variables are set to infinity
(i.e., no information about the exogenous variables is contained within the prior).
The remainder of V₀ is then a diagonal matrix whose diagonal elements v^l_{ij}, for l = 1, ..., p, are given by

$$v_{ij}^{l} = \begin{cases} \left(\dfrac{\lambda_1}{l^{\lambda_3}}\right)^{2} & \text{for } i = j \\[1.5ex] \left(\dfrac{\lambda_1 \lambda_2\, \sigma_i}{l^{\lambda_3}\,\sigma_j}\right)^{2} & \text{for } i \neq j \end{cases} \tag{38.31}$$
Researchers can experiment with different values of these hyper-parameters for themselves; Litterman (1986) provides additional discussion of these choices.
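To see how (38.31) shrinks different coefficients, consider a purely illustrative choice of hyper-parameters (hypothetical values, not the EViews defaults): λ₁ = 0.1, λ₂ = 0.5, λ₃ = 1, and σᵢ = σⱼ. Then

$$v_{ii}^{1} = \left(\frac{0.1}{1}\right)^{2} = 0.01, \qquad v_{ii}^{2} = \left(\frac{0.1}{2}\right)^{2} = 0.0025, \qquad v_{ij}^{1} = (0.1 \times 0.5)^{2} = 0.0025 \quad (i \neq j),$$

so own lags receive the loosest prior, lags of other variables are shrunk more heavily toward the zero prior mean through λ₂, and all coefficients at longer lags are shrunk increasingly hard through the λ₃ lag decay.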
Given this choice of prior, the posterior for v takes the form
$$v \sim N(\bar v, \bar V)$$

where

$$\bar V = \big[V_0^{-1} + (\hat S_e^{-1} \otimes X'X)\big]^{-1}$$

and

$$\bar v = \bar V\big[V_0^{-1}v_0 + (\hat S_e^{-1} \otimes X)'y\big]$$
Normal-Wishart prior
When the assumption that S_e is known is relaxed, a prior for the residual covariance can also be chosen. One well-known conjugate prior for normal data is the normal-Wishart:

$$v \mid S_e \sim N(v_0,\ S_e \otimes V_0)$$

where v₀ = μ₁ i_m is the AR(1) coefficient mean and V₀ = λ₁ I_m is the coefficient covariance, with the two prior hyper-parameters μ₁ and λ₁, and

$$S_e^{-1} \sim W(n_0, S_0^{-1})$$

where n₀ = m is the degree of freedom and S₀ = I_m is the scale matrix (S₀ > 0). Any values for the hyper-parameters can be chosen; however, it is worth noting that a non-informative prior is obtained by setting the hyper-parameters as V₀⁻¹ = S₀ = cI and letting c → 0. It can be seen that the non-informative prior leads to a posterior based on OLS quantities which are identical to classical VAR estimation results.
According to the Bayes updating rule, the posterior becomes:

$$v \mid S_e \sim N(\bar v,\ S_e \otimes \bar V) \qquad \text{and} \qquad S_e^{-1} \sim W(\bar n, \bar S^{-1})$$

where

$$\bar V = [V_0^{-1} + X'X]^{-1}, \qquad \bar v = \bar V[V_0^{-1}v_0 + X'X\hat v] \tag{38.32}$$

$$\bar n = n_0 + T \tag{38.33}$$

$$\bar S = SSE + S_0 + \hat v'X'X\hat v + v_0'V_0^{-1}v_0 - \bar v'(V_0^{-1} + X'X)\bar v$$

with v̂ the OLS estimate of the coefficients.
Since the natural conjugate priors have the same distributional form for the prior, likelihood,
and posterior, the prior can be considered as dummy observations. In the following section,
we will discuss how this interpretation develops the priors for structural VARs.
Sims-Zha priors
Sims and Zha (1998) show how the dummy observations approach can be used to elicit the
priors for structural VAR models. To illustrate the Sims-Zha priors, suppose that we allow for contemporaneous correlation among the series, so that the model can be written as:
$$A_0 y_t = a_0 + \sum_{j=1}^{p} A_j y_{t-j} + e_t$$
where e_t ~ N(0, I_m) and S_e = A₀⁻¹(A₀⁻¹)'. Note that, given appropriate identifying restrictions, there will be a mapping from the parameters of the reduced form VAR to the structural VAR. This form can also be written in multivariate regression form by defining A to be the matrix of coefficients on the lagged variables:

$$YA_0 - XA = E$$

where Y is T × m, A₀ is m × m, X is T × (mp + 1), A is (mp + 1) × m, and E is T × m. Note that X contains the lagged Ys and a column of 1s corresponding to the constant.
Sims and Zha suggest the conditional prior (the Sims-Zha prior) on A₀ and A. In particular,

$$p(A_0)\,p(A \mid A_0) = p(A_0)\,\phi(v_0, H_0) \tag{38.34}$$

where φ(v₀, H₀) denotes a normal density with mean v₀ and covariance H₀. Under the normality of e_t, the likelihood of the structural form may be written as

$$l(Y \mid A_0, A) \propto |A_0|^{T}\exp\Big\{-\tfrac{1}{2}\operatorname{tr}\big[(Z\bar A)'(Z\bar A)\big]\Big\} \tag{38.35}$$

where

$$Z = \begin{bmatrix} Y & X \end{bmatrix}, \qquad \bar A = \begin{bmatrix} A_0 \\ -A \end{bmatrix} \tag{38.36}$$

so that ZĀ = YA₀ - XA.
Combining Equation (38.34) and Equation (38.35), we can derive the posterior density as:
$$p(a) \propto p_0(a_0)\,|A_0|^{T}\,|H_0|^{-1/2}\exp\Big[-0.5\big(a_0'(I \otimes Y'Y)a_0 - 2a'(I \otimes X'Y)a_0 + a'(I \otimes X'X)a + (a - v_0)'H_0^{-1}(a - v_0)\big)\Big]$$
where a denotes the vectorized A. Since this posterior has a nonstandard form, a direct analysis of the likelihood may be computationally infeasible. However, the conditional posterior distribution of A given A₀ can be derived analytically:

$$p(a \mid a_0) = \phi\big(\bar v,\ (I \otimes X'X + H_0^{-1})^{-1}\big) \tag{38.37}$$

where

$$\bar v = (I \otimes X'X + H_0^{-1})^{-1}\big((I \otimes X'Y)a_0 + H_0^{-1}v_0\big) \tag{38.38}$$
This specification differs from the Litterman/Minnesota case in a few respects. First, there is no distinction between the prior variances on own lags versus other lags. Second, there is only one scale factor in the denominator, σⱼ², rather than the ratio of scale factors σᵢ²/σⱼ². In particular, each element of H₀, for i, j = 1, ..., m and l = 1, ..., p, is written as

$$H_{0l,ij} = \left(\frac{\lambda_0 \lambda_1}{\sigma_j\, l^{\lambda_3}}\right)^{2} \tag{38.39}$$

where σⱼ² is the j-th diagonal element of Ŝ_e, and H_{0l,ij} corresponds to the l-th lag of series i in equation j.
EViews offers two different choices for the estimate Ŝ_e: the univariate AR estimate and the diagonal VAR estimate, as described above. The three hyper-parameters λ₀, λ₁ and λ₃ reflect general beliefs about the VAR, and in practice these are specified on the basis of the researcher's prior knowledge. Specifically, λ₀ is the overall tightness of beliefs on A₀, λ₁ is the standard deviation around A, and λ₃ represents the rate of lag decay.
Based on the recognition that the prior information can be considered as dummy observations, Sims and Zha suggest two sets of dummy observations (Y_d and X_d),

$$Y_d = \begin{bmatrix} Y_{1d} \\ Y_{2d} \end{bmatrix}, \qquad X_d = \begin{bmatrix} X_{1d} \\ X_{2d} \end{bmatrix}$$
which account for unit roots (Y_{1d} and X_{1d}) and trends (Y_{2d} and X_{2d}), and write the model as

$$\begin{bmatrix} Y_d \\ Y \end{bmatrix} A_0 - \begin{bmatrix} X_d \\ X \end{bmatrix} A = E$$

The first set of dummies is given by

$$Y_{1d} = \begin{bmatrix} \mu_5 \bar y_{1,0} & & 0 \\ & \ddots & \\ 0 & & \mu_5 \bar y_{m,0} \end{bmatrix}, \qquad X_{1d} = \begin{bmatrix} (Y_{1d}, \ldots, Y_{1d})_{m \times mp} & 0_{m \times 1} \end{bmatrix}$$

where ȳ_{i,0} denotes the average of the initial values of variable i, and the hyper-parameter μ₅ expresses beliefs about the degree of stationarity of the series. Note that the last columns of X_{1d}, which correspond to the constant term and any exogenous variables, are set to zero.
The second set of dummies reflects a belief that the average of the initial values of variable i (i.e., E(y_{ij}) for j = 1, ..., p) is likely to be a good forecast of y_{ij}. The dummies for the initial observation are

$$Y_{2d} = \begin{bmatrix} \mu_6 E(y_{10}) & \mu_6 E(y_{20}) & \cdots & \mu_6 E(y_{m0}) \end{bmatrix}, \qquad X_{2d} = \begin{bmatrix} (Y_{2d}, \ldots, Y_{2d})_{1 \times mp} & \mu_6 \end{bmatrix}$$
For the Sims-Zha normal-Wishart prior, the residual covariance is assigned the Wishart prior

$$S_e^{-1} \sim W(n_0, S_0^{-1})$$

where n₀ = m + 1 is the degree of freedom and S₀ = λ₀²(Y - Xv̂)'(Y - Xv̂) is the scale matrix, with v̂ = (X'X)⁻¹X'Y. The posterior for S_e is then calculated analytically as

$$S_e^{-1} \sim W(\bar n, \bar S^{-1})$$

where n̄ = n₀ + (T - p) - m - 1 and

$$\bar S = T^{-1}\big(S_0 + Y'Y + v_0'H_0^{-1}v_0 - \bar v'(X'X + H_0^{-1})\bar v\big)$$
For the Sims-Zha normal-flat prior, the prior on S_e is the non-informative

$$p(S_e) \propto |S_e|^{-(m+1)/2}$$

and the corresponding posterior estimate of the residual covariance is

$$S_e = T^{-1}(Y - X\bar v)'(Y - X\bar v).$$
Note that the coefficient parameter v is updated by the rule in Equation (38.38).
References
Amisano, Gianni and Carlo Giannini (1997). Topics in Structural VAR Econometrics, 2nd ed, Berlin:
Springer-Verlag.
Blanchard, Olivier and Danny Quah (1989). The Dynamic Effects of Aggregate Demand and Aggregate
Supply Disturbances, American Economic Review, 79, 655-673.
Boswijk, H. Peter (1995). Identifiability of Cointegrated Systems, Technical Report, Tinbergen Institute.
Christiano, L. J., M. Eichenbaum, C. L. Evans (1999). Monetary Policy Shocks: What Have We Learned
and to What End? Chapter 2 in J. B. Taylor and M. Woodford, (eds.), Handbook of Macroeconomics,
Volume 1A, Amsterdam: Elsevier Science Publishers B.V.
Dickey, D.A. and W.A. Fuller (1979). Distribution of the Estimators for Autoregressive Time Series with a Unit Root, Journal of the American Statistical Association, 74, 427-431.
Doornik, Jurgen A. (1995). Testing General Restrictions on the Cointegrating Space, manuscript.
Doornik, Jurgen A. and Henrik Hansen (1994). An Omnibus Test for Univariate and Multivariate Normality, manuscript.
Engle, Robert F. and C. W. J. Granger (1987). Co-integration and Error Correction: Representation, Estimation, and Testing, Econometrica, 55, 251-276.
Fisher, R. A. (1932). Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.
Johansen, Søren (1991). Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models, Econometrica, 59, 1551-1580.
Johansen, Søren (1995). Likelihood-based Inference in Cointegrated Vector Autoregressive Models, Oxford: Oxford University Press.
Johansen, Søren and Katarina Juselius (1990). Maximum Likelihood Estimation and Inferences on Cointegration with applications to the demand for money, Oxford Bulletin of Economics and Statistics, 52, 169-210.
Kao, C. (1999). Spurious Regression and Residual-Based Tests for Cointegration in Panel Data, Journal of Econometrics, 90, 1-44.
Kelejian, H. H. (1982). An Extension of a Standard Test for Heteroskedasticity to a Systems Framework,
Journal of Econometrics, 20, 325-333.
Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis, New York: Springer-Verlag.
Lütkepohl, Helmut (2007). New Introduction to Multiple Time Series Analysis, New York: Springer-Verlag.
Maddala, G. S. and S. Wu (1999). A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test, Oxford Bulletin of Economics and Statistics, 61, 631-652.
MacKinnon, James G., Alfred A. Haug, and Leo Michelis (1999), Numerical Distribution Functions of
Likelihood Ratio Tests for Cointegration, Journal of Applied Econometrics, 14, 563-577.
Newey, Whitney and Kenneth West (1994). Automatic Lag Selection in Covariance Matrix Estimation,
Review of Economic Studies, 61, 631-653.
Osterwald-Lenum, Michael (1992). A Note with Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics, Oxford Bulletin of Economics and Statistics, 54, 461-472.
Pedroni, P. (1999). Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors, Oxford Bulletin of Economics and Statistics, 61, 653-670.
Pedroni, P. (2004). Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis, Econometric Theory, 20, 597-625.
Pesaran, M. Hashem and Yongcheol Shin (1998). Impulse Response Analysis in Linear Multivariate Models, Economics Letters, 58, 17-29.
Phillips, P.C.B. and P. Perron (1988). Testing for a Unit Root in Time Series Regression, Biometrika, 75, 335-346.
Said, Said E. and David A. Dickey (1984). Testing for Unit Roots in Autoregressive Moving Average Models of Unknown Order, Biometrika, 71, 599-607.
Sims, Christopher (1980). Macroeconomics and Reality, Econometrica, 48, 1-48.
Sims, Christopher and Tao Zha (1998). Bayesian Methods for Dynamic Multivariate Models, International Economic Review, 39, 949-968.
Urzua, Carlos M. (1997). Omnibus Tests for Multivariate Normality Based on a Class of Maximum Entropy Distributions, in Advances in Econometrics, Volume 12, Greenwich, Conn.: JAI Press, 341-358.
White, Halbert (1980). A Heteroskedasticity-Consistent Covariance Matrix and a Direct Test for Heteroskedasticity, Econometrica, 48, 817-838.
Background
We present here a very brief discussion of the specification and estimation of a linear state
space model. Those desiring greater detail are directed to Harvey (1989), Hamilton (1994a,
Chapter 13; 1994b), and especially the excellent treatment of Koopman, Shephard, and
Doornik (1999), whose approach we largely follow.
Specification
A linear state space representation of the dynamics of the n × 1 vector y_t is given by the system of equations:

$$y_t = c_t + Z_t a_t + e_t \tag{39.1}$$

$$a_{t+1} = d_t + T_t a_t + v_t \tag{39.2}$$
We will refer to the first set of equations as the signal or observation equations and the
second set as the state or transition equations. The disturbance vectors e t and v t are
assumed to be serially independent, with contemporaneous variance structure:

$$\Omega_t = \operatorname{var}\begin{bmatrix} e_t \\ v_t \end{bmatrix} = \begin{bmatrix} H_t & G_t \\ G_t' & Q_t \end{bmatrix} \tag{39.3}$$
Filtering
Consider the conditional distribution of the state vector a t given information available at
time s . We can define the mean and variance matrix of the conditional distribution as:
$$a_{t \mid s} \equiv E_s(a_t) \tag{39.4}$$

$$P_{t \mid s} \equiv E_s\big[(a_t - a_{t \mid s})(a_t - a_{t \mid s})'\big] \tag{39.5}$$
where the subscript below the expectation operator indicates that expectations are taken
using the conditional distribution for that period.
One important conditional distribution is obtained by setting s = t - 1, so that we obtain the one-step ahead mean a_{t|t-1} and one-step ahead variance P_{t|t-1} of the states a_t. Under the Gaussian error assumption, a_{t|t-1} is also the minimum mean square error estimator of a_t and P_{t|t-1} is the mean square error (MSE) of a_{t|t-1}. If the normality assumption is dropped, a_{t|t-1} is still the minimum mean square linear estimator of a_t.
Given the one-step ahead state conditional mean, we can also form the (linear) minimum MSE one-step ahead estimate of y_t:

$$\hat y_t \equiv y_{t \mid t-1} \equiv E_{t-1}(y_t) = E(y_t \mid a_{t \mid t-1}) = c_t + Z_t a_{t \mid t-1} \tag{39.6}$$

The one-step ahead prediction error is

$$\hat e_t \equiv e_{t \mid t-1} \equiv y_t - \hat y_{t \mid t-1} \tag{39.7}$$

and the corresponding prediction error variance is

$$F_t \equiv F_{t \mid t-1} \equiv \operatorname{var}(e_{t \mid t-1}) = Z_t P_{t \mid t-1} Z_t' + H_t \tag{39.8}$$
The Kalman (Bucy) filter is a recursive algorithm for sequentially updating the one-step ahead estimate of the state mean and variance given new information. Details on the recursion are provided in the references above. For our purposes, it is sufficient to note that given initial values for the state mean and covariance, values for the system matrices, and observations on y_t, the Kalman filter may be used to compute one-step ahead estimates of the state and the associated mean square error matrix, {a_{t|t-1}, P_{t|t-1}}, the contemporaneous or filtered state mean and variance, {a_t, P_t}, and the one-step ahead prediction, prediction error, and prediction error variance, {y_{t|t-1}, e_{t|t-1}, F_{t|t-1}}. Note that we may also obtain the standardized prediction residual, ê_{t|t-1}, by dividing e_{t|t-1} by the square root of the corresponding diagonal element of F_{t|t-1}.
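Although EViews handles the recursion internally, a minimal statement of the standard filter in the notation above may help fix ideas. The form below is a textbook sketch that assumes the signal and state disturbances are uncorrelated (G_t = 0): given a_{t|t-1} and P_{t|t-1},

$$a_t = a_{t \mid t-1} + P_{t \mid t-1} Z_t' F_{t \mid t-1}^{-1} e_{t \mid t-1}, \qquad P_t = P_{t \mid t-1} - P_{t \mid t-1} Z_t' F_{t \mid t-1}^{-1} Z_t P_{t \mid t-1},$$

$$a_{t+1 \mid t} = d_t + T_t a_t, \qquad P_{t+1 \mid t} = T_t P_t T_t' + Q_t,$$

with e_{t|t-1} and F_{t|t-1} as defined in (39.7) and (39.8). Iterating these two pairs of equations forward from the initial values a_{1|0} and P_{1|0} produces all of the filtered and one-step ahead quantities listed above.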
Fixed-Interval Smoothing
Suppose that we observe the sequence of data up to time period T . The process of using
this information to form expectations at any time period up to T is known as fixed-interval
smoothing. Despite the fact that there are a variety of other distinct forms of smoothing (e.g.,
fixed-point, fixed-lag), we will use the term smoothing to refer to fixed-interval smoothing.
Additional details on the smoothing procedure are provided in the references given above.
For now, note that smoothing uses all of the information in the sample to provide smoothed estimates of the states, â_t ≡ a_{t|T} ≡ E_T(a_t), and smoothed estimates of the state variances, V_t ≡ var_T(a_t). The matrix V_t may also be interpreted as the MSE of the smoothed state estimate â_t.
As with the one-step ahead states and variances above, we may use the smoothed values to form smoothed estimates of the signal variables,

$$\hat y_t \equiv E(y_t \mid \hat a_t) = c_t + Z_t \hat a_t \tag{39.9}$$

and the variance of the smoothed signal estimates,

$$S_t \equiv \operatorname{var}(\hat y_{t \mid T}) = Z_t V_t Z_t'. \tag{39.10}$$

Smoothing also yields estimates of the signal and state disturbances, ê_t and v̂_t, together with their smoothed variance matrix

$$\hat\Omega_t \equiv \operatorname{var}_T\begin{bmatrix} e_t \\ v_t \end{bmatrix} \tag{39.11}$$

Dividing the smoothed disturbance estimates by the square roots of the corresponding diagonal elements of the smoothed variance matrix yields the standardized smoothed disturbance estimates ê_t and ν̂_t.
Forecasting
There are a variety of types of forecasting which may be performed with state space models.
These methods differ primarily in what and how information is used. We will focus on the
three methods that are supported by EViews built-in forecasting routines.
n-Step Ahead Forecasting
In n-step ahead forecasting, we use information available at time t to form expectations n periods ahead. The forecasts of the states and their MSE matrix are

$$a_{t+n \mid t} \equiv E_t(a_{t+n}) \tag{39.12}$$

$$P_{t+n \mid t} \equiv E_t\big[(a_{t+n} - a_{t+n \mid t})(a_{t+n} - a_{t+n \mid t})'\big] \tag{39.13}$$

and the corresponding forecasts of the signals and their MSEs are

$$y_{t+n \mid t} \equiv E_t(y_{t+n}) = c_{t+n} + Z_{t+n} a_{t+n \mid t} \tag{39.14}$$

$$F_{t+n \mid t} \equiv \operatorname{MSE}(y_{t+n \mid t}) = Z_{t+n} P_{t+n \mid t} Z_{t+n}' + H_{t+n} \tag{39.15}$$
Dynamic Forecasting
The concept of dynamic forecasting should be familiar to you from other EViews estimation
objects. In dynamic forecasting, we start at the beginning of the forecast sample, t, and compute a complete set of n-period ahead forecasts for each period in the forecast interval. Thus, if we wish to start at period t and forecast dynamically to t + n, we would compute a one-step ahead forecast for t + 1, a two-step ahead forecast for t + 2, and so forth, up to an n-step ahead forecast for t + n. It may be useful to note that as
with n-step ahead forecasting, we simply initialize a Kalman filter at time t + 1 and run the
filter forward additional periods using no additional signal information. For dynamic forecasting, however, only one n-step ahead forecast is required to compute all of the forecast
values since the information set is not updated from the beginning of the forecast period.
Smoothed Forecasting
Alternatively, we can compute smoothed forecasts which use all available signal data over
the forecast sample (for example, a t + n t + n ). These forward looking forecasts may be computed by initializing the states at the start of the forecast period, and performing a Kalman
smooth over the entire forecast period using all relevant signal data. This technique is useful
in settings where information on the entire path of the signals is used to interpolate values
throughout the forecast sample.
We make one final comment about the forecasting methods described above. For traditional
n-step ahead and dynamic forecasting, the states are typically initialized using the one-step
ahead forecasts of the states and variances at the start of the forecast window. For smoothed
forecasts, one would generally initialize the forecasts using the corresponding smoothed values of states and variances. There may, however, be situations where you wish to choose a
different set of initial values for the forecast filter or smoother. The EViews forecasting routines (described in State Space Procedures, beginning on page 693) provide you with considerable control over these initial settings. Be aware, however, that the interpretation of the
forecasts in terms of the available information will change if you choose alternative settings.
Estimation
To implement the Kalman filter and the fixed-interval smoother, we must first replace any
unknown elements of the system matrices by their estimates. Under the assumption that the
e t and v t are Gaussian, the sample log likelihood:
$$\log L(v) = -\frac{nT}{2}\log 2\pi - \frac{1}{2}\sum_t \log\big|F_t(v)\big| - \frac{1}{2}\sum_t e_t(v)'F_t(v)^{-1}e_t(v) \tag{39.16}$$
may be evaluated using the Kalman filter. Using numeric derivatives, standard iterative techniques may be employed to maximize the likelihood with respect to the unknown parameters v (see Appendix C. Estimation and Solution Options, on page 1011).
Initial Conditions
Evaluation of the Kalman filter, smoother, and forecasting procedures all require that we
provide the initial one-step ahead predicted values for the states, a_{1|0}, and variance matrix, P_{1|0}. With some stationary models, steady-state conditions allow us to use the system matrices to solve for the values of a_{1|0} and P_{1|0}. In other cases, we may have preliminary estimates of a_{1|0}, along with measures of uncertainty about those estimates. But in many
cases, we may have no information, or diffuse priors, about the initial conditions.
Specification Syntax
State Equations
A state equation contains the @STATE keyword followed by a valid state equation specification. Bear in mind that:
Each equation must have a unique dependent variable name; expressions are not
allowed. Since EViews does not automatically create workfile series for the states, you
may use the name of an existing (non-series) EViews object.
State equations may not contain signal equation dependent variables, or leads or lags
of these variables.
Each state equation must be linear in the one-period lag of the states. Nonlinearities
in the states, or the presence of contemporaneous, lead, or multi-period lag states will
generate an error message. We emphasize the point that the one-period lag restriction
on states is not restrictive since higher order lags may be written as new state variables.
Examples
The following two state equations define an unobserved error with an AR(2) process:
@state sv1 = c(2)*sv1(-1) + c(3)*sv2(-1) + [var = exp(c(5))]
@state sv2 = sv1(-1)
The first equation parameterizes the AR(2) for SV1 in terms of an AR(1) coefficient, C(2),
and an AR(2) coefficient, C(3). The error variance specification is given in square brackets.
Note that the state equation for SV2 defines the lag of SV1 so that SV2(-1) is the two period
lag of SV1.
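The same device extends to higher-order processes: each additional lag simply becomes another identity state. For instance, a sketch of an AR(3) unobserved component (the coefficient numbering here is arbitrary) is:

@state sv1 = c(2)*sv1(-1) + c(3)*sv2(-1) + c(4)*sv3(-1) + [var = exp(c(5))]
@state sv2 = sv1(-1)
@state sv3 = sv2(-1)

so that SV2(-1) and SV3(-1) supply the second and third lags of SV1.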
Similarly, the following are valid state equations:
@state sv1 = sv1(-1) + [var = exp(c(3))]
@state sv2 = c(1) + c(2)*sv2(-1) + [var = exp(c(3))]
@state sv3 = c(1) + exp(c(3)*x/z) + c(2)*sv3(-1) + [var = exp(c(3))]
describing a random walk, and an AR(1) with drift (without/with exogenous variables).
The following are not valid state equations:
@state exp(sv1) = sv1(-1) + [var = exp(c(3))]
@state sv2 = log(sv2(-1)) + [var = exp(c(3))]
@state sv3 = c(1) + c(2)*sv3(-2) + [var=exp(c(3))]
since they violate at least one of the conditions described above (in order: expression for
dependent state variable, nonlinear in state, multi-period lag of state variables).
Observation/Signal Equations
By default, if an equation specification is not specifically identified as a state equation using
the @STATE keyword, it will be treated by EViews as an observation or signal equation.
Signal equations may also be identified explicitly by the keyword @SIGNAL. There are
some aspects of signal equation specification to keep in mind:
Signal equation dependent variables may involve expressions.
Signal equations may not contain current values or leads of signal variables. You
should be aware that any lagged signals are treated as predetermined for purposes of
multi-step ahead forecasting (for discussion and alternative specifications, see Harvey
1989, p. 367-368).
Signal equations must be linear in the contemporaneous states. Nonlinearities in the
states, or the presence of leads or lags of states will generate an error message. Again,
the restriction that there are no state lags is not restrictive since additional deterministic states may be created to represent the lagged values of the states.
Signal equations may have exogenous variables and unknown coefficients, and may
be nonlinear in these elements.
Signal equations may also contain an optional error or error variance specification. If there
is no error or error variance, the equation is assumed to be deterministic. Specification of the
error structure of state space models is described in greater detail in Errors and Variances
on page 680.
Examples
The following are valid signal equation specifications:
log(passenger) = c(1) + c(3)*x + sv1 + c(4)*sv2
@signal y = sv1 + sv2*x1 + sv3*x2 + sv4*y(-1) + [var=exp(c(1))]
z = sv1 + sv2*x1 + sv3*x2 + c(1) + [var=exp(c(2))]
By contrast, signal equations that contain a lag of a state variable, that are nonlinear in a state variable, or that contain a lead of the signal variable violate the conditions described above and are not valid.
The specified variance may be a known constant value, or it can be an expression containing unknown parameters to be estimated. You may also build time-variation into the variances using a series expression. Variance expressions may not, however, contain state or
signal variables.
While straightforward, this direct variance specification method does not admit correlation
between errors in different equations (by default, EViews assumes that the covariance
between error terms is 0). If you require a more flexible variance structure, you will need to
use the named error approach to define named errors with variances and covariances, and
then to use these named errors as parts of expressions in the signal and state equations.
The first step of this general approach is to define your named errors. You may declare a
named error by including a line with the keyword @ENAME followed by the name of the
error:
@ename e1
@ename e2
Once declared, a named error may enter linearly into state and signal equations. In this
manner, one can build correlation between the equation errors. For example, the errors in
the state and signal equations in the sspace specification:
y = c(1) + sv1*x1 + e1
@state sv1 = sv1(-1) + e2 + c(2)*e1
@ename e1
@ename e2
are, in general, correlated since the named error E1 appears in both equations.
In the special case where a named error is the only error in a given equation, you can both
declare and use the named residual by adding an error expression consisting of the keyword
ENAME followed by an assignment and a name identifier:
y = c(1) + sv1*x1 + [ename = e1]
@state sv1 = sv1(-1) + [ename = e2]
The final step in building a general error structure is to define the variances and covariances
associated with your named errors. You should include a sspace line comprised of the keyword @EVAR followed by an assignment statement for the variance of the error or the
covariance between two errors:
@evar cov(e1, e2) = c(2)
@evar var(e1) = exp(c(3))
@evar var(e2) = exp(c(4))*x
The syntax for the @EVAR assignment statements should be self-explanatory. Simply indicate whether the term is a variance or covariance, identify the error(s), and enter the specification for the variance or covariance. There should be a separate line for each named error
covariance or variance that you wish to specify. If an error term is named, but there are no
corresponding VAR= or @EVAR specifications, the missing variance or covariance specifications will remain at the default values of NA and 0, respectively.
As you might expect, in the special case where an equation contains a single error term, you
may combine the named error and direct variance assignment statements:
@state sv1 = sv1(-1) + [ename = e1, var = exp(c(3))]
@state sv2 = sv2(-1) + [ename = e2, var = exp(c(4))]
@evar cov(e1, e2) = c(5)
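Putting these pieces together, a minimal sketch of a complete specification with correlated signal and state errors (the series names and functional forms are purely illustrative) might read:

y = c(1) + sv1*x1 + [ename = e1]
@state sv1 = sv1(-1) + [ename = e2]
@evar var(e1) = exp(c(2))
@evar var(e2) = exp(c(3))
@evar cov(e1, e2) = c(4)

Here each equation contains a single named error, so the errors are declared within the equations themselves, and the @EVAR lines supply the full variance-covariance structure.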
Specification Examples
ARMAX(2, 3) with a Random Coefficient
We can use the syntax described above to define an ARMAX(2,3) with a random coefficient
for the regression variable X:
y = c(1) + sv5*x + sv1 + c(4)*sv2 + c(5)*sv3 + c(6)*sv4
@state sv1 = c(2)*sv1(-1) + c(3)*sv2(-1) + [var=exp(c(7))]
@state sv2 = sv1(-1)
@state sv3 = sv2(-1)
@state sv4 = sv3(-1)
@state sv5 = sv5(-1) + [var=3]
The AR coefficients are parameterized in terms of C(2) and C(3), while the MA coefficients
are given by C(4), C(5) and C(6). The variance of the innovation is restricted to be a positive
function of C(7). SV5 is the random coefficient on X, with variance restricted to be 3.
The variances and covariances in the model are parameterized in terms of the coefficients
C(2), C(3) and C(4), with the variances of the observed Y and the unobserved state SV1
restricted to be non-negative functions of the parameters.
To set the initial state variance matrix, enter @VPRIOR followed by the name of a sym
object (note that it must be a sym object, and not an ordinary matrix object). The dimensions of the sym must match the state dimension, with the ordering following the order in
which the states appear in the specification. If you wish to set a specific element to be diffuse, simply assign the element the NA missing value. EViews will reset all of the corresponding variances and covariances to be diffuse.
For example, suppose you have a two equation state space object named SS1 and you want
to set the initial values of the state vector and the state variance matrix as:
$$\begin{bmatrix} SV1 \\ SV2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \operatorname{var}\begin{bmatrix} SV1 \\ SV2 \end{bmatrix} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 2 \end{bmatrix} \tag{39.17}$$
First, create a named vector object, say SVEC0, to hold the initial values. Click Object/New
Object, choose Matrix-Vector-Coef and enter the name SVEC0. Click OK, and then choose
the type Vector and specify the size of the vector (in this case 2 rows). When you click OK,
EViews will display the spreadsheet view of the vector SVEC0. Click the Edit +/ button to
toggle on edit mode and type in the desired values. Then create a named symmetric matrix
object, say SVAR0, in an analogous fashion.
Alternatively, you may find it easier to create and initialize the vector and matrix using commands. You can enter the following commands in the command window:
vector(2) svec0
svec0.fill 1, 0
sym(2) svar0
svar0.fill 1, 0.5, 2
Then attach these initial values by adding the lines @mprior svec0 and @vprior svar0 to your sspace object, editing the specification window directly. Alternatively, you can type the
following commands in the command window:
ss1.append @mprior svec0
ss1.append @vprior svar0
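If you instead wanted, say, the second state to be initialized with a diffuse prior, recall from above that assigning the NA missing value to an element of the sym marks the corresponding variances and covariances as diffuse. A sketch (this assumes NA may be assigned to a matrix element by command; you may equivalently edit the element in the matrix spreadsheet view):

sym(2) svar1
svar1.fill 1, 0.5, 2
svar1(2,2) = na
ss1.append @vprior svar1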
For more details on matrix objects and the fill and append commands, see Chapter 11.
Matrix Language, on page 257 of the Command and Programming Reference.
Specification Views
State space models may be very complex. To aid you in examining your specification,
EViews provides views which allow you to view the text specification in a more compact
form, and to examine the numerical values of your system matrices evaluated at current
parameter values.
Click on the View menu and select Specification... The following Specification views are
always available, regardless of whether the sspace has previously been estimated:
Covariance Description. Text description of the covariance matrix of the state space
specification. For example, the ARMAX example has the following Covariance
Description view:
Coefficient Values. Numeric description of the structure of the signal and the state
equations evaluated at current parameter values. If the system coefficient matrix is
time-varying, EViews will prompt you for a date/observation at which to evaluate the
matrix.
Covariance Values. Numeric description of the structure of the state space specification evaluated at current parameter values. If the system covariance matrix is timevarying, EViews will prompt you for a date/observation at which to evaluate the
matrix.
Auto-Specification
To aid you in creating a state space specification, EViews provides you with auto-specification tools which will create the text representation of a model that you specify using dialogs. This tool may be very useful if your model is a standard regression with fixed,
recursive, and various random coefficient specifications, and/or your errors have a general
ARMA structure.
When you select Proc/Define State Space... from the menu, EViews opens a three tab dialog. The first tab is used to describe the basic regression portion of your specification. Enter
the dependent variable, and any regressors which have fixed or recursive coefficients. You
can choose which COEF object EViews uses for indicating unknowns when setting up the
specification. At the bottom, you can specify an ARMA structure for your errors. Here, we
have specified a simple ARMA(2,1) specification for LOG(PASSENGER).
The second tab of the dialog is used to add any regressors which have random coefficients.
Simply enter the appropriate regressors in each of the four edit fields. EViews allows you to
define regressors with any combination of constant mean, AR(1), random walk, or random
walk (with drift) coefficients.
Lastly, the Auto-Specification dialog allows you to choose between basic variance structures for your state space model. Click on the Variance Specification tab, and choose
between an Identity matrix, Common Diagonal (diagonal with common variances), Diagonal, or general (Unrestricted) variance matrix for the signals and for the states. The dialog
also allows the signal equation(s) and state equation(s) to have non-zero error covariances.
We emphasize the fact that your sspace object is not restricted to the choices provided in
this dialog. If you find that the set of specifications supported by Auto-Specification is too
restrictive, you may use the dialogs as a tool to build a basic specification, and then edit
the specification to describe your model.
The Ordinary default Covariance method employs the inverse of the matrix specified in the
Information matrix dropdown menu. Alternately, you may compute the sandwich covariance by selecting Huber White in the Covariance method menu.
The outer-product of the gradients (OPG) is the default information matrix estimator. If you
are performing non-legacy optimization, you may use the observed Hessian by selecting
Hessian Observed.
The default settings should provide a good start for most problems; if you choose to change
the settings, see Setting Estimation Options on page 1005 for related discussion of estimation options.
When you click on OK, EViews will begin estimation using the specified settings.
There are two additional things to keep in mind when estimating your model:
Although the EViews Kalman filter routines will automatically handle any missing
values in your sample, EViews does require that your estimation sample be contiguous, with no gaps between successive observations.
If there are no unknown coefficients in your specification, you will still have to estimate your sspace to run the Kalman filter and initialize elements that EViews needs
in order to perform further analysis.
Once you specify and estimate the model, EViews will display the estimation output view:
Sspace: SS_ARMA21
Method: Maximum likelihood (BFGS / Marquardt steps)
Date: 03/16/15   Time: 11:42
Sample: 1949M01 1960M12
Included observations: 144
Convergence achieved after 9 iterations
Coefficient covariance computed using outer product of gradients

                      Coefficient    Std. Error    z-Statistic     Prob.

 C(1)                    5.499767      0.257517       21.35687     0.0000
 C(2)                    0.409013      0.167201       2.446239     0.0144
 C(3)                    0.547165      0.164608       3.324055     0.0009
 C(4)                    1.188382      0.141461       8.400799     0.0000
 C(5)                   -4.934585      0.308276      -16.00704     0.0000

                      Final State      Root MSE    z-Statistic     Prob.

 SV1                     0.245396      0.084850       2.892117     0.0038
 SV2                     0.319569      0.047896       6.672101     0.0000

 Log likelihood          124.3367      Akaike info criterion    -1.657454
 Parameters                     5      Schwarz criterion        -1.554336
 Diffuse priors                 0      Hannan-Quinn criter.     -1.615553
The bulk of the output view should be familiar from other EViews estimation objects. The
information at the top describes the basics of the estimation: the name of the sspace object,
estimation method, the date and time of estimation, sample and number of objects in the
sample, convergence information, and the coefficient estimates. The bottom part of the view
reports the maximized log likelihood value, the number of estimated parameters, and the
associated information criteria.
Some parts of the output, however, are new and may require discussion. The bottom section
provides additional information about the handling of missing values in estimation. Likelihood observations reports the actual number of observations that are used in forming the
likelihood. This number (which is the one used in computing the information criteria) will
differ from the Included observations reported at the top of the view when EViews drops
an observation from the likelihood calculation because all of the signal equations have missing values. The number of omitted observations is reported in Missing observations. Partial observations reports the number of observations that are included in the likelihood, but
for which some equations have been dropped. Diffuse priors indicates the number of initial state covariances for which EViews is unable to solve and for which there is no user initialization. EViews handling of initial states and covariances is described in greater detail in
Initial Conditions on page 697.
EViews also displays the final one-step ahead values of the state vector, a_{T+1|T}, and the corresponding RMSE values (square roots of the diagonal elements of P_{T+1|T}). For settings
where you may care about the entire path of the state vector and covariance matrix, EViews
provides you with a variety of views and procedures for examining the state results in
greater detail.
Signal Views
When you click on View/Signal Views, EViews displays a
sub-menu containing additional view selections. Two of these
selections are always available, even if the state space model
has not yet been estimated:
Actual Signal Table and Actual Signal Graph display
the dependent signal variables in spreadsheet and graphical forms, respectively. If
there are multiple signal equations, EViews will display each series with its own
axes.
The remaining views are only available following estimation.
Graph Signal Series... opens a dialog with
choices for the results to be displayed. The
dialog allows you to choose between the one-step ahead predicted signals, y_{t|t-1}, the corresponding one-step residuals, e_{t|t-1}, or standardized one-step residuals, the smoothed signals, ŷ_t, smoothed signal disturbances, ê_t, or the standardized smoothed signal disturbances. Where appropriate, ±2 (root mean square) standard error bands are plotted.
Std. Residual Correlation Matrix and Std. Residual Covariance Matrix display the correlation and covariance matrix of the standardized one-step ahead signal residuals, ê_{t|t-1}.
State Views
To examine the unobserved state components, click on View/
State Views to display the state submenu. EViews allows you to
examine the initial or final values of the state components, or to
graph the full time-path of various filtered or smoothed state
data.
Two of the views are available either before or after estimation:
Initial State Vector and Initial State Covariance Matrix display the values of the initial state vector, a 0 , and covariance matrix, P 0 . If the unknown parameters have previously been estimated, EViews will evaluate the initial conditions using the
estimated values. If the sspace has not been estimated, the current coefficient values
will be used in evaluating the initial conditions.
This information is especially relevant in models where EViews is using the current
values of the system matrices to solve for the initial conditions. In cases where you
are having difficulty starting your estimation, you may wish to examine the values of
the initial conditions at the starting parameter values for any sign of problems.
The remainder of the views are only available following successful estimation:
Final State Vector and Final State Covariance Matrix display the values of the final
state vector, a T , and covariance matrix, P T , evaluated at the estimated parameters.
Select Graph State Series... to display a dialog containing several choices for the state information. You can graph the one-step ahead predicted states, a_{t|t-1}, the filtered (contemporaneous) states, a_t, the smoothed state estimates, â_t, smoothed state disturbance estimates, v̂_t, or the standardized smoothed state disturbances, η̂_t. In each case, the data are displayed along with corresponding ±2 standard error bands.
Forecast... performs n-period ahead forecasting, as described in Forecasting on page 676. Note that any
lagged endogenous variables on the right-hand side of your signal equations will be
treated as predetermined for purposes of forecasting.
EViews allows you to save
various types of forecast output in series in your workfile. Simply check any of the
output boxes, and specify the
names for the series in the
corresponding edit field.
You may specify the names
either as a list or using a
wildcard expression. If you
choose to list the names, the
number of identifiers must
match the number of signals
in your specification. You should be aware that if an output series with a specified
name already exists in the workfile, EViews will overwrite the entire contents of the
series.
If you use a wildcard expression, EViews will substitute the name of each signal in the
appropriate position in the wildcard expression. For example, if you have a model
with signals Y1 and Y2, and elect to save the one-step predictions in PRED*,
EViews will use the series PREDY1 and PREDY2 for output. There are two limitations
to this feature: (1) you may not use the wildcard expression * to save signal results
since this will overwrite the original signal data, and (2) you may not use a wildcard
when any signal dependent variables are specified by expression, or when there are
multiple equations for a signal variable. In both cases, EViews will be unable to create
the new series and will generate an error message.
Keep in mind that if your signal dependent variable is an expression, EViews will only
provide forecasts of the expression. Thus, if your signal variable is LOG(Y), EViews
will forecast the logarithm of Y.
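If you need the forecast in levels rather than logs, you can transform the saved series yourself. A minimal sketch, assuming the predictions for the LOG(Y) signal were saved in a hypothetical series named PREDLY:

series y_level = exp(predly)

Note that this simple transformation ignores any adjustment for the variance of the log forecast.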
Now enter a sample and specify the treatment of the initial states, and then click OK.
EViews will compute the forecast and will place the results in the specified series. No
output window will open.
There are several options available for setting the initial conditions. If you wish, you
can instruct the sspace object to use the One-step ahead or Smoothed estimates of
the state and state covariance as initial values for the forecast period. The two initialization methods differ in the amount of information used from the estimation sample;
one-step ahead uses information up to the beginning of the forecast period, while
smoothed uses the entire estimation period.
Make State Series... opens a dialog allowing you to create series containing results for
the state variables computed over the estimation sample. You can choose to save
either the one-step ahead state estimate, a_{t|t-1}, the filtered state mean, a_t, the smoothed states, â_t, state disturbances, v̂_t, standardized state disturbances, η̂_t, or the corresponding standard error series (square roots of the diagonal elements of P_{t|t-1}, P_t, V_t and Ω̂_t).
Simply select one of the output types,
and enter the names of the output
series in the edit field. The rules for
specifying the output names are the
same as for the Forecast... procedure
described above. Note that the wildcard expression * is permitted
when saving state results. EViews
will simply use the state names
defined in your specification.
We again caution you that if an output series exists in the workfile,
EViews will overwrite the entire contents of the series.
Click on Make Endogenous Group to create a group object containing the signal
dependent variable series.
Make Gradient Group creates a group object with series containing the gradients of
the log likelihood. These series are named GRAD## where ## is a unique number in
the workfile.
Make Kalman Filter creates a new state space object containing the current specification, but with all parameters replaced by their estimated values. In this way you can
freeze the current state space for additional analysis. This procedure is similar to
the Make Model procedure found in other estimation objects.
Make Model creates a model object containing the state space equations.
Update Coefs from Sspace will place the estimated parameters in the appropriate
coefficient vectors.
specifications that were not supported in earlier versions may be estimated with the current
sspace object.
The cost of these additional features and added flexibility is that Version 3 sspace objects are
not fully compatible with those in the current version. This has two important practical
effects:
If you load in a workfile which contains a Version 3 sspace object, all previous estimation results will be cleared and the text of the specification will be translated to the
current syntax. The original text will be retained as comments at the bottom of your
sspace specification.
If you take a workfile which contains a new sspace object created with EViews 4 or
later and attempt to read it into an earlier version of EViews, the object will not be
read, and EViews will warn you that a partial load of the workfile was performed. If
you subsequently save the workfile, the original sspace object will not be saved with
the workfile.
Technical Discussion
Initial Conditions
If there are no @MPRIOR or @VPRIOR statements in the specification, EViews will either:
(1) solve for the initial state mean and variance, or (2) initialize the states and variances
using diffuse priors.
Solving for the initial conditions is only possible if the state transition matrix T and the variance matrix Q are not time-varying and satisfy certain stability conditions (see Harvey, 1989, p. 121). If possible, EViews will solve for P_{1|0} using the familiar relationship (I - T⊗T) vec(P) = vec(Q). If this is not possible, the states will be treated as diffuse unless otherwise specified.
When using diffuse priors, EViews follows the method adopted by Koopman, Shephard and Doornik (1999) in setting a_{1|0} = 0 and P_{1|0} = κI_M, where κ is an arbitrarily chosen large number. EViews uses the authors' recommendation that one first set κ = 10^6 and then adjust it for scale by multiplying by the largest diagonal element of the residual covariances.
References
Box, George E. P. and Gwilym M. Jenkins (1976). Time Series Analysis: Forecasting and Control, Revised
Edition, Oakland, CA: Holden-Day.
Hamilton, James D. (1994a). Time Series Analysis, Princeton University Press.
Hamilton, James D. (1994b). State Space Models, Chapter 50 in Robert F. Engle and Daniel L. McFadden
(eds.), Handbook of Econometrics, Volume 4, Amsterdam: Elsevier Science B.V.
Harvey, Andrew C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter, Cambridge:
Cambridge University Press.
Koopman, Siem Jan, Neil Shephard, and Jurgen A. Doornik (1999). Statistical Algorithms for Models in
State Space using SsfPack 2.2, Econometrics Journal, 2(1), 107-160.
Overview
The following section provides a brief introduction to the purpose and structure of the
EViews model object, and introduces terminology that will be used throughout the rest
of the chapter.
A model consists of a set of equations that describe the relationships between a set of
variables.
The variables in a model can be divided into two categories: those determined inside
the model, which we refer to as the endogenous variables, and those determined outside the model, which we refer to as the exogenous variables. A third category of variables, the add factors, are a special case of exogenous variables.
In its most general form, a model can be written in mathematical notation as:
F ( y, x ) = 0
(40.1)
where y is the vector of endogenous variables, x is the vector of exogenous variables, and
F is a vector of real-valued functions f i ( y, x ) . For the model to have a unique solution,
there should typically be as many equations as there are endogenous variables.
In EViews, each equation in the model must have a unique endogenous variable assigned to
it. That is, each equation in the model must be able to be written in the form:
y i = f i ( y, x )
(40.2)
where y i is the endogenous variable assigned to equation i . EViews has the ability to normalize equations involving simple transformations of the endogenous variable, rewriting
them automatically into explicit form when necessary. Any variable that is not assigned as
the endogenous variable for any equation is considered exogenous to the model.
Equations in an EViews model can either be inline or linked. An inline equation contains the
specification for the equation as text within the model. A linked equation is one that brings
its specification into the model from an external EViews object such as a single or multiple
equation estimation object, or even another model. Linking allows you to couple a model
more closely with the estimation procedure underlying the equations, or with another model
on which it depends. For example, a model for industry supply and demand might link to
another model and to estimated equations:
Industry Supply And Demand Model
link to macro model object for forecasts of total consumption
link to equation object containing industry supply equation
link to equation object containing industry demand equation
inline identity: supply = demand
Equations can also be divided into stochastic equations and identities. Roughly speaking, an
identity is an equation that we would expect to hold exactly when applied to real world
data, while a stochastic equation is one that we would expect to hold only with random
error. Stochastic equations typically result from statistical estimation procedures while identities are drawn from accounting relationships between the variables.
The most important operation performed on a model is to solve the model. By solving the
model, we mean that for a given set of values of the exogenous variables, X, we will try to
find a set of values for the endogenous variables, Y, so that the equations in the model are
satisfied within some numerical tolerance. Often, we will be interested in solving the model
over a sequence of periods, in which case, for a simple model, we will iterate through the
periods one by one. If the equations of the model contain future endogenous variables, we
may require a more complicated procedure to solve for the entire set of periods simultaneously.
In EViews, when solving a model, we must first associate data with each variable in the
model by binding each of the model variables to a series in the workfile. We then solve the
model for each observation in the selected sample and place the results in the corresponding
series.
When binding the variables of the model to specific series in the workfile, EViews will often
modify the name of the variable to generate the name of the series. Typically, this will
involve adding an extension of a few characters to the end of the name. For example, an
endogenous variable in the model may be called Y, but when EViews solves the model, it
may assign the result into an observation of a series in the workfile called Y_0. We refer to
this mapping of names as aliasing. Aliasing is an important feature of an EViews model, as
it allows the variables in the model to be mapped into different sets of workfile series, without having to alter the equations of the model.
When a model is solved, aliasing is typically applied to the endogenous variables so that historical data is not overwritten. Furthermore, for models which contain lagged endogenous
variables, aliasing allows us to bind the lagged variables to either the actual historical data,
which we refer to as a static forecast, or to the values solved for in previous periods, which
we refer to as a dynamic forecast. In both cases, the lagged endogenous variables are effectively treated as exogenous variables in the model when solving the model for a single
period.
Aliasing is also frequently applied to exogenous variables when using model scenarios.
Model scenarios allow you to investigate how the predictions of your model vary under different assumptions concerning the path of exogenous variables or add factors. In a scenario,
you can change the path of an exogenous variable by overriding the variable. When a variable is overridden, the values for that variable will be fetched from a workfile series specific
to that scenario. The name of the series is formed by adding a suffix associated with the scenario to the variable name. This same suffix is also used when storing the solutions of the
model for the scenario. By using scenarios it is easy to compare the outcomes predicted by
your model under a variety of different assumptions without having to edit the structure of
your model.
The following table gives a typical example of how model aliasing might map variable
names in a model into series names in the workfile:
Model Variable      Workfile Series
endogenous Y        Y      historical data
                    Y_0    baseline solution
                    Y_1    scenario 1
exogenous X         X_1    scenario 1
Earlier, we mentioned a third category of variables called add factors. An add factor is a special type of exogenous variable that is used to shift the results of a stochastic equation to
provide a better fit to historical data or to fine-tune the forecasting results of the model.
While there is nothing that you can do with an add factor that could not be done using exogenous variables, EViews provides a separate interface for add factors to facilitate a number
of common tasks.
An Example Model
In this section, we demonstrate how we can use the EViews model object to implement a
simple macroeconomic model of the U.S. economy. The specification of the model is taken
from Pindyck and Rubinfeld (1998, p. 390). We have provided the data and other objects
relating to the model in the sample workfile Macromod.WF1. You may find it useful to follow along with the steps in the example, and you can use the workfile to experiment further
with the model object.
(A second, simpler example may be found in Plotting Probability Response Curves on
page 312).
The macro model contains three stochastic equations and one identity. In EViews notation,
these can be written:
cn = c(1) + c(2)*y + c(3)*cn(-1)
i = c(4) + c(5)*(y(-1)-y(-2)) + c(6)*y + c(7)*r(-4)
r = c(8) + c(9)*y + c(10)*(y-y(-1)) + c(11)*(m-m(-1)) + c(12)*(r(-1)+r(-2))
y = cn + i + g
where:
CN is real personal consumption
I is real private investment
G is real government expenditure
Y is real GDP less net exports
R is the interest rate on three-month treasury bills
M is the real money supply, narrowly defined (M1)
The three stochastic equations are estimated by least squares using the following specifications:
Equation EQCN: cn c y cn(-1)
Equation EQI: i c y(-1)-y(-2) y r(-4)
Equation EQR: r c y y-y(-1) m-m(-1) r(-1)+r(-2)
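If you prefer working from the command line, the same equations can be estimated with commands along the following lines (a sketch only; the object names EQCN, EQI, and EQR are those used in this example):
equation eqcn.ls cn c y cn(-1)
equation eqi.ls i c y(-1)-y(-2) y r(-4)
equation eqr.ls r c y y-y(-1) m-m(-1) r(-1)+r(-2)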
The three equations estimate satisfactorily and provide a reasonably close fit to the data,
although much of the fit probably comes from the lagged endogenous variables. The consumption and investment equations show signs of heteroskedasticity, possibly indicating
that we should be modeling the relationships in log form. All three equations show signs of
serial correlation. We will ignore these problems for the purpose of this example, although
you may like to experiment with alternative specifications and compare their performance.
When first created, the model object defaults to equation view. Equation view allows us to
browse through the specifications and properties of the equations contained in the model.
Since we have not yet added any equations to the model, this window will appear empty.
Performing a Dynamic Solution
An alternative way of evaluating the model is to examine how the model performs when used to forecast many periods into the future. To do this, we must use our forecasts from previous periods, not actual historical data, when assigning values to the lagged endogenous terms in our model. In EViews, we refer to such a forecast as a dynamic forecast.
To perform a dynamic forecast, we will solve the model with a slightly different set of options. Return to the model window and again click on the Solve button. In the model solution dialog, choose Dynamic solution in the Dynamics section of the dialog, and set the solution sample to 1985 1999.
Click on OK to solve the model. To examine the results, we will use Proc/Make Graph exactly as above to display the actuals and the baseline solutions for the endogenous variables. Make sure the sample is set to 1985Q1 to 1999Q4, then click on OK.
The results illustrate how our model would have performed if we had used it back in 1985 to make a forecast for the economy over the next fifteen years, assuming that we had used the correct paths for the exogenous variables (in reality, we would not have known these values at the time the forecasts were generated). Not surprisingly, the results show substantial deviations from the actual outcomes, although they do seem to follow the general trends in the data.
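For reference, the same dynamic solution can be sketched from the command line (the model name MACROMOD is illustrative, and the solveopt option letter shown should be verified against the command reference):
smpl 1985q1 1999q4
macromod.solveopt(d=d)  ' d=d requests a dynamic solution; see the solveopt entry for the full option list
macromod.solve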
Forecasting
Once we are satisfied with the performance of our model against historical data, we can use
the model to forecast future values of our endogenous variables. The first step in producing
such a forecast is to decide on values for our exogenous variables during the forecast period.
These may be based on our best guess as to what will actually happen, or they may be simply one particular possibility that we are interested in considering. Often we will be interested in constructing several different paths and then comparing the results.
To produce a set of future values for G, we can use this equation to perform a dynamic forecast for G from 2000Q1 to 2005Q4, saving the results back into G itself; see Chapter 23.
Forecasting from an Equation, on page 135 for details. Later we will show you how to
instruct the model to use the data in a different series, say G_1, in place of the data in G
(Using Scenarios for Alternate Assumptions on page 715), so that you may preserve the
original state of the series G.
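Assuming the equation for G has been saved in an equation object (here called EQG, an illustrative name), one way to sketch this step in commands is:
smpl 2000q1 2005q4
eqg.forecast g    ' dynamic forecast over the current sample, written back into G
smpl @all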
We now have a set of possible values for our exogenous variables over the forecast period.
We observe strange behavior in the results. At the beginning of the forecast period, we see a heavy dip in investment, GDP, and interest rates. This is followed by a series of oscillations in these series with a period of about a year, which die out slowly during the forecast period. This is not a particularly convincing forecast.
There is little in the paths of our exogenous variables or the history of our endogenous variables that would lead to this sharp dip, suggesting that the problem may lie with the residuals of our equations. Our investment equation is the most likely candidate, as it has a large,
persistent positive residual near the end of the historical data (see figure below). This residual will be set to zero over the forecast period when solving the model, which might be the
cause of the sudden drop in investment at the beginning of the forecast.
With the add factor in place, we can follow exactly the same procedure that we followed
above to produce a new set of solutions for the model and a new graph for the results.
Performing a Stochastic Simulation
So far, we have
been working under the assumption that our stochastic equations hold exactly over the forecast period. In reality, we would expect to see the same sort of errors occurring in the future
as we have seen in history. We have also been ignoring the fact that some of the coefficients
in our equations are estimated, rather than fixed at known values. We may like to reflect this
uncertainty about our coefficients in some way in the results from our model.
We can incorporate these features into our EViews model using stochastic simulation.
Up until now, we have thought of our model as forecasting a single point for each of our
endogenous variables at each observation. As soon as we add uncertainty to the model, we
should think instead of our model as predicting a whole distribution of outcomes for each
variable at each observation. Our goal is to summarize these distributions using appropriate
statistics.
If the model is linear (as in our example) and the errors are normal, then the endogenous
variables will follow a normal distribution, and the mean and standard deviation of each
distribution should be sufficient to describe the distribution completely. In this case, the
mean will actually be equal to the deterministic solution to the model. If the model is not
linear, then the distributions of the endogenous variables need not be normal. In this case,
the quantiles of the distribution may be more informative than the first two moments, since
the distributions may have tails which are very different from the normal case. In a non-linear model, the mean of the distribution need not match up to the deterministic solution of
the model.
EViews makes it easy to calculate statistics to describe the distributions of your endogenous
variables in an uncertain environment. To simulate the distributions, the model object uses
a Monte Carlo approach, where the model is solved many times with pseudo-random numbers substituted for the unknown errors at each repetition. This method provides only
approximate results. However, as the number of repetitions is increased, we would expect
the results to approach their true values.
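As a rough guide, the Monte Carlo noise in a simulated mean falls at the rate 1/sqrt(R), where R is the number of repetitions, so quadrupling the number of repetitions roughly halves the simulation error in the reported statistics.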
To return to our simple macroeconomic model, we can use a stochastic simulation to provide some measure of the uncertainty in our results by adding error bounds to our predictions. From the model window, click on the Solve button. When the model solution dialog
appears, choose Stochastic for the simulation type and choose Dynamic solution for the
sample 2000 2005. In the Solution scenarios & output box on the right-hand side of the
dialog, make sure that the Std. Dev. checkbox in the Active section is checked. Click on OK
to begin the simulation.
Status messages will appear to indicate progress of the simulation. When the simulation is
complete select Proc/Make Graph to display the results. As before, we will set the Model
variables to Endogenous variables and the sample to 1995 2005. In addition, you should
choose Mean +- 2 standard deviations in the Solution Series box, check the Actuals and
Active scenario boxes, and set the latter to Baseline. Click on OK to produce the graph.
The error bounds in the resulting output graph show that we should be reluctant to place too much weight on the point forecasts of our model, since the bounds are quite wide on several of the variables. Much of the uncertainty is probably due to the large residual in the investment equation, which is creating a lot of variation in investment and interest rates in the stochastic simulation.
Alternately, after exiting the Scenario Specification dialog, you may select View/Variables to
return to the variable window of the model, click on the variable M, use the right mouse
button to call up the Properties dialog for the variable, and then in the Scenario box, click
on the checkbox for Use override series in scenario. A message will appear asking if you
would like to create the new series. Click on Yes to create the series, then OK to return to the
variable window.
In the variable window, the variable name M should now appear in red, indicating that it
has been overridden in the active scenario. This means that the variable M will now be
bound to the series M_1 instead of the series M when solving the model using Scenario 1.
(You may use the Aliasing tab to change the extension from _1. Note also that depending
on how you created the override, you may still need to create the series M_1 in your workfile
by copying the values of M.)
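The same override can also be set up by command; a sketch, again using the illustrative model name MACROMOD:
macromod.scenario "Scenario 1"    ' make Scenario 1 the active scenario
macromod.override m               ' override M so the scenario reads its values from M_1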
In our previous forecast for M, we assumed that the real money supply would be kept at a
constant level during the forecast period. For our alternative scenario, we are going to
assume that the real money supply is contracted sharply at the beginning of the forecast
period, and held at this lower value throughout the forecast. We can set the new values
using a few simple commands:
smpl 2000q1 2005q4
series m_1 = 900
smpl @all
As before, we can solve the model by clicking on the Solve button. Restore the Simulation
type to deterministic, make sure that Scenario 1 is the active scenario, and Baseline is
the alternate scenario, and that Solve for Alternate along with Active is checked. Set the
solution sample to 2000 2005. Click on OK to solve.
Once the solution is complete, we can use Proc/Make Graph to display the results following the same procedure as above. First, set the Model variables selection to display the
Endogenous variables. Next, set the Solution series list box to the setting Deterministic
solutions, then check both the Active and Compare solution check boxes, making sure that
the active scenario is set to Scenario 1, and the comparison scenario is set to Baseline.
Set the sample to 1995Q1 to 2005Q4, then click on OK. The following graph should be displayed:
Building a Model
Creating a Model
The first step in working with a model is to create the model object itself. There are several
different ways of creating a model:
You can create an empty model by using Object/New Object and then choosing
Model, or by performing the same operation using the right mouse button menu from
inside the workfile window.
You can select a list of estimation objects in the workfile window (equations, VARs,
systems), and then select Open as Model from the right mouse button menu. This
item will create a model which contains the equations from the selected objects as
links.
You can use the Make model procedure from an estimation object to create a model
containing the equation or equations in that object.
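In addition to the interactive approaches listed above, a model can be created and populated entirely by command; a minimal sketch (the names MYMODEL and EQ1 are illustrative):
model mymodel                  ' create an empty model object
mymodel.merge eq1              ' bring in the equations of an existing estimation object as a link
mymodel.append y = cn + i + g  ' add an inline identity as text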
EViews will associate the equation with the variable X. If we would like the equation to be
associated with the variable Y, we would have to rewrite the equation:
1 / y * x = z
Note that EViews has the ability to handle simple expressions involving the endogenous
variable. You may use functions like LOG, D, and DLOG on the left-hand side of your equation. EViews will normalize the equation into explicit form if the Gauss-Seidel method is
selected for solving the model.
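For example, an inline equation entered as (an illustrative specification):
dlog(cn) = c(1) + c(2)*dlog(y)
would be associated with the variable CN, and EViews will normalize it so that CN itself appears alone on the left-hand side when the Gauss-Seidel solver is used.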
Equation View
The equation view is used for displaying, selecting, and modifying the equations contained
in the model. An example of the equation view can be seen on page 705.
Each line of the window is used to represent either a linked object or an inline text equation.
Linked objects will appear similarly to how they do in the workfile, with an icon representing their type, followed by the name of the object. Even if the linked object contains many
equations, it will use only one line in the view. Inline equations will appear with a TXT
icon, followed by the beginning of the equation text in quotation marks.
The remainder of the line contains the equation number, followed by a symbolic representation of the equation, indicating which variables appear in the equation.
Any errors in the model will appear as red lines containing an error message describing the
cause of the problem.
You can open any linked objects directly from the equation view. Simply select the line representing the object using the mouse, then choose Open Link from the right mouse button
menu.
The contents of a line can be examined in more detail using the equation properties dialog.
Simply select the line with the mouse, then choose Properties from the right mouse button menu. Alternatively, simply double click on the object to call up the dialog.
For a link to a single equation, the dialog shows the functional form of the equation, the values of any estimated coefficients, and the standard error of the equation residual from the estimation. If the link is to an object containing many equations, you can move between the different equations imported from the object using the Endogenous list box at the top of the dialog. For an inline equation, the dialog simply shows the text of the equation.
The Edit Equation or Link Specification button allows you to edit the text of an inline
equation or to modify a link to point to an object with a different name. A link is represented
in text form as a colon followed by the name of the object. Note that you cannot modify the specification of a linked object from within the model object; you must work directly with the linked object itself.
In the bottom right of the dialog, there are a set of fields that allow you to set the stochastic
properties of the residual of the equation. If you are only performing deterministic simulations, then these settings will not affect your results in any way. If you are performing stochastic simulations, then these settings are used in conjunction with the solution options to
determine the size of the random innovations applied to this equation.
The Stochastic with S.D. option for Equation type lets you set a standard deviation for any
random innovations applied to the equation. If the standard deviation field is blank or is set
to NA, then the standard deviation will be estimated from the historical data. The Identity
option specifies that the selected equation is an identity, and should hold without error even
in a stochastic simulation. See Stochastic Options on page 736 below for further details.
The equation properties dialog also gives you access to the property dialogs for the endogenous variable and add factor associated with the equation. Simply click on the appropriate
tab. These will be discussed in greater detail below.
Variable View
The variable view is used for adjusting options related to variables and for displaying and
editing the series associated with the model (see the discussion in Examining the Solution
Results (p. 706)). The variable view lists all the variables contained in the model, with
each line representing one variable. Each line begins with an icon classifying the variable as
endogenous, exogenous or an add factor. This is followed by the name of the variable, the
equation number associated with the variable, and the description of the variable. The
description is read from the associated series in the workfile.
Note that the names and types of the variables in the model are determined fully by the
equations of the model. The only way to add a variable or to change the type of a variable in
the model is to modify the model equations.
You can adjust what is displayed in the variable view in a number of ways. By clicking on
the Filter/Sort button just above the variable list, you can choose to display only variables
that match a certain name pattern, or to display the variables in a particular order. For example, sorting by type of variable makes the division into endogenous and exogenous variables
clearer, while sorting by override highlights which variables have been overridden in the
currently active scenario.
The variable view also allows you to browse through the dependencies between variables in
the model by clicking on the Dependencies button. Each equation in the model can be
thought of as a set of links that connect other variables in the model to the endogenous variable of the equation. Starting from any variable, we can travel up the links, showing all the
endogenous variables that this variable directly feeds into, or we can travel down the links,
showing all the variables upon which this variable directly depends. This may sometimes be
useful when trying to find the cause of unexpected behavior. Note, however, that in a simultaneous model, every endogenous variable is indirectly connected to every other variable in
the same block, so that it may be hard to understand the model as a whole by looking at any
particular part.
You can quickly view or edit one or more of the series associated with a variable by double
clicking on the variable. For several variables, simply select each of them with the mouse
then double click inside the selected area.
Block Structure View
Block structure refers to whether the model can be split into a number of smaller parts, each
of which can be solved for in sequence. For example, consider the system:
block 1
x = y + 4
y = 2*x - 3
block 2
z = x + y
Because the variable Z does not appear in either of the first two equations, we can split this
equation system into two blocks: a block containing the first two equations, and a block
containing the third equation. We can use the first block to solve for the variables X and Y,
then use the second block to solve for the variable Z. By using the block structure of the system, we can reduce the number of variables we must solve for at any one time. This typically improves performance when calculating solutions.
Blocks can be classified further into recursive and simultaneous blocks. A recursive block is
one which can be written so that each equation contains only variables whose values have
already been determined. A recursive block can be solved by a single evaluation of all the
equations in the block. A simultaneous block cannot be written in a way that removes feedback between the variables, so it must be solved as a simultaneous system. In our example
above, the first block is simultaneous, since X and Y must be solved for jointly, while the
second block is recursive, since Z depends only on X and Y, which have already been determined in solving the first block.
The block structure view displays the structure of the model, labeling each of the blocks as
recursive or simultaneous. EViews uses this block structure whenever the model is solved.
The block structure of a model may also be interesting in its own right, since reducing the
system to a set of smaller blocks can make the dependencies in the system easier to understand.
Text View
The text view of a model allows you to see the entire structure of the model in a single
screen of text. This provides a quick way to input small models, or a way to edit larger models using copy-and-paste.
The text view consists of a series of lines. In a simple model, each line simply contains the
text of one of the inline equations of the model. More complicated models may contain one
or more of the following:
A line beginning with a colon : represents a link to an external object. The colon
must be followed by the name of an object in the workfile. Equations contained in the
external object will be imported into the model whenever the model is opened, or
when links are updated.
A line beginning with @ADD specifies an add factor. The add factor command has
the form:
@add(v) endogenous_name add_name
where endogenous_name is the name of the endogenous variable to which the add
factor is applied and add_name is the name of the add factor series. (A short example
combining these elements follows this list.)
A line beginning with @INNOV specifies an innovation variance. When applied to an
endogenous variable, it has the form:
@innov endogenous_name number
where endogenous_name is the name of the endogenous variable and number is the
standard deviation of the innovation to be applied during stochastic simulation. When
applied to an exogenous variable, it has the form:
@innov exogenous_name number_or_series
where exogenous_name is the name of the exogenous variable and number_or_series
is either a number or the name of the series that contains the standard deviation to be
applied to the variable during stochastic simulation. Note that when an equation in a
model is linked to an external estimation object, the variance from the estimated
equation will be brought into the model automatically and does not require an
@innov specification unless you would like to modify its value.
The keyword @TRACE, followed by the names of the endogenous variables that
you wish to trace, may be used to request model solution diagnostics. See Diagnostics on page 741.
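Putting these elements together, the text view of a small model might look like the following sketch (the object and series names are illustrative):
:eqcn
:eqi
:eqr
y = cn + i + g
@add(v) cn cn_a
@innov g 5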
Users of earlier versions of EViews should note that two commands that were previously
available, @assign and @exclude, are no longer part of the text form of the model. These
commands have been removed because they now address options that apply only to specific
model scenarios rather than to the model as a whole. When loading in models created by
earlier versions of EViews, these commands will be converted automatically into scenario
options in the new model object.
Specifying Scenarios
When working with a model, you will often want to compare model predictions under a
variety of different assumptions regarding the paths of your exogenous variables, or with
one or more of your equations excluded from the model. Model scenarios allow you to do
this without overwriting previous data or changing the structure of your model.
The most important function of a scenario is to specify which series will be used to hold the
data associated with a particular solution of the model. To distinguish the data associated
with different scenarios, each scenario modifies the names of the model variables according
to an aliasing rule. Typically, aliasing will involve adding an underline followed by a number, such as _0 or _1 to the variable names of the model. The data for each scenario will
be contained in series in the workfile with the aliased names.
Model scenarios support the analysis of different assumptions for exogenous variables by
allowing you to override a set of variables you would like to alter. Exogenous variables
which are overridden will draw their values from series with names aliased for that scenario, while exogenous variables which are not overridden will draw their values from
series with the same name as the variable.
Scenarios also allow you to exclude one or more endogenous variables from the model.
When an endogenous variable is excluded, the equation associated with that variable is
dropped from the model and the value of the variable is taken directly from the workfile
series with the same name. Excluding an endogenous variable effectively treats the variable
as an exogenous variable for the purposes of solving the model.
When excluding an endogenous variable, you can specify a sample range over which the
variable should be excluded. One use of this is to handle the case where more recent historical data is available for some of your endogenous variables than others. By excluding the
variables for which you have data, your forecast can use actual data where possible, and
results from the model where data are not yet available.
Each model can contain many scenarios. You can view the scenarios associated with the
current model by choosing View/Scenario Specification, as shown above on page 715.
There are two special scenarios associated with every model: actuals and baseline. These
two scenarios have in common the special property that they cannot contain any overrides
or excludes. They differ in that the actuals scenario writes the values for endogenous variables back into the series with the same name as the variables in the model, while the baseline scenario modifies the names. When solving the model using actuals as your active
scenario, you should be careful not to accidentally overwrite your historical data.
The baseline scenario gets its name from the fact that it provides the base case from which
other scenarios are constructed. Scenarios differ from the baseline by having one or more
variables overridden or excluded. By comparing the results from another scenario against
those of the baseline case, we can separate out the movements in the endogenous variables
that are due to the changes made in that particular scenario from movements which are
present in the baseline itself.
The Select Scenario page of the dialog allows you to select, create, copy, delete and rename
the scenarios associated with the model. You may also apply the selected scenario to the
baseline data, which involves copying the series associated with any overridden variables in
the selected scenario on top of the baseline values. Applying a scenario to the baseline is a
way of committing to the edited values of the selected scenario, making them a permanent
part of the baseline case.
The Scenario overrides page provides a summary of variables which have been overridden
in the selected scenario and equations which have been excluded. This is a useful way of
seeing a complete list of all the changes which have been made to the scenario from the
baseline case.
The Aliasing page allows you to examine the name aliasing rules associated with any scenario. The page displays the complete set of aliases that will be applied to the different types
of variables in the model.
Although the scenario dialog lets you see all the settings for a scenario in one place, you will
probably alter most scenario settings directly from the variable view instead. For both exogenous variables and add factors, you can select the variable from the variable view window,
then use the right mouse button menu to call up the properties page for the variable. The
override status of the variable can be adjusted using the Use override checkbox. Once a
variable has been overridden, it will appear in red in the variable view.
Edit override may also be used for endogenous variables, in which case EViews will simultaneously exclude and override the variable for the current scenario.
You may quickly revert an overridden variable back to its original non-overridden values by
using the Revert right button menu item.
Using Add Factors
Add factors provide a simple way to adjust the results of the model without respecifying or reestimating the equations of the model.
In reality, an add factor is just an extra exogenous variable which is included in the selected
equation in a particular way. EViews allows an add factor to take one of two forms. If our
equation has the form:
f(y_i) = f_i(y, x)
(40.3)
then we can provide an add factor for the equation intercept or residual by simply including
the add factor at the end of the equation:
f(y_i) = f_i(y, x) + a
(40.4)
Alternatively, we may provide an add factor for the endogenous variable of the model by
using the add factor as an offset:
f(y_i - a) = f_i(y, x)
(40.5)
where the sign of the add factor is reversed so that it acts in the same direction as for the
previous case.
If the endogenous variable appears by itself on the left hand side of the equal sign, then the
two types of add factor are equivalent. If the endogenous variable is contained in an expression, for example, a log transformation, then this is no longer the case. Although the two
add factors will have a similar effect, they will be expressed in different units with the former in the units of the residual of the equation, and the latter in the units of the endogenous
variable of the equation.
There are two ways to include add factors. The easiest way is to go to the equation view of
the model, then double click on the equation in which you would like to include an add factor.
Once an add factor has been added to an equation, it will appear in the variable view of the
model as an additional variable. If an add factor is present in any scenario, then it must be
present in every scenario, although the values of the add factor can be overridden for a particular scenario in the same way as for an exogenous variable.
The second way to handle add factors is to assign, initialize or override them for all the
equations in the model at the same time using the Proc/Add Factors menu from the model
window. For example, to create a complete set of add factors that make the model solve to
actual values over history, we can use Add Factors/Equation Assignment... to create add
factors for every equation, then use Add Factors/Set Values... to set the add factors so that
all the equations have no residuals at the actual values.
When solving a model with an add factor, any missing values in the add factor will be
treated as zeros.
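These steps can also be scripted; a sketch, using the illustrative model name MACROMOD (the exact option syntax for initializing the add factor values should be taken from the addinit entry in the command reference):
macromod.addassign @all   ' create an add factor for every equation
macromod.addinit @all     ' initialize the add factor values (see addinit for the available options)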
During each repetition, errors are generated for each observation in accordance with the residual uncertainty and the exogenous variable uncertainty in the model. At the end of each
repetition, the statistics for the tracked endogenous variables are updated to reflect
the additional results.
If a comparison is being performed with an alternate scenario, then the same set of
random residuals and exogenous variable shocks are applied to both scenarios during
each repetition. This is done so that the deviation between the two is based only on
differences in the exogenous and excluded variables, not on differences in random
errors.
F(y(-maxlag), ..., y(-1), y, y(1), ..., y(maxlead), x) = 0
(40.6)
where F is the complete set of equations of the model, y is a vector of all the endogenous
variables, x is a vector of all the exogenous variables, and the parentheses follow the usual
EViews syntax to indicate leads and lags.
Since solving the model for any particular period requires both past and future values of the
endogenous variables, it is not possible to solve the model recursively in one pass. Instead,
the equations from all the periods across which the model will be solved must be treated as
a simultaneous system, and we will require terminal as well as initial conditions. For example, in the case with a single lead and a single lag and a sample that runs from s to t , we
must effectively solve the entire stacked system:
F(y_{s-1}, y_s, y_{s+1}, x) = 0
F(y_s, y_{s+1}, y_{s+2}, x) = 0
F(y_{s+1}, y_{s+2}, y_{s+3}, x) = 0
...
F(y_{t-2}, y_{t-1}, y_t, x) = 0
F(y_{t-1}, y_t, y_{t+1}, x) = 0
(40.7)
where the unknowns are y_s, y_{s+1}, ..., y_t, the initial conditions are given by y_{s-1}, and the
terminal conditions are used to determine y_{t+1}. Note that if the leads or lags extend more
than one period, we will require multiple periods of initial or terminal conditions.
To solve models such as these, EViews applies a Gauss-Seidel iterative scheme across all the
observations of the sample. Roughly speaking, this involves looping repeatedly through
every observation in the forecast sample, at each observation solving the model while treating the past and future values as fixed, where the loop is repeated until changes in the values of the endogenous variables between successive iterations become less than a specified
tolerance.
This method is often referred to as the Fair-Taylor method, although the Fair-Taylor algorithm includes a particular handling of terminal conditions (the extended path method) that
is slightly different from the options provided by EViews. When solving the model, EViews
allows the user to specify fixed end conditions by providing values for the endogenous variables beyond the end of the forecast sample, or to determine the terminal conditions endogenously by adding extra equations for the terminal periods which impose either a constant
level, a linear trend, or a constant growth rate on the endogenous variables for values
beyond the end of the forecast period.
Although this method is not guaranteed to converge, failure to converge is often a sign of the
instability which results when the influence of the past or the future on the present does not
die out as the length of time considered is increased. Such instability is often undesirable for
other reasons and may indicate a poorly specified model.
y_S = f_1 e_{S-1} + ... + f_q e_{S-q},
(40.8)
Initialization Methods
If your equation was estimated with backcasting turned on, EViews will, by default, perform
backcasting to obtain initial values for model solution. If your equation is estimated with
backcasting turned off, or if the forecast sample precedes the estimation sample, the initial
values will be set to zero.
You may examine the equation specification in the model to determine whether backcasting
was employed in estimation. The specification will include either the expression BACKCAST=, or INITMA= followed by an observation identifier for the first period of the
estimation sample. As one might guess, BACKCAST= is used to indicate the use of backcasting in estimation; alternately, INITMA= indicates that the pre-sample values were initialized with zeros.
Backcast Methods
EViews offers alternate approaches for obtaining backcast estimates of the innovations when BACKCAST= is specified.
The estimation period method uses data for the estimation sample to compute backcast estimates. The post-backcast sample innovations are initialized to zero and backward recursion
is employed to obtain estimates of the pre-estimation sample innovations. A forward recursion is then run to the end of the estimation sample and the resulting values are used as estimates of the innovations.
The alternative forecast available method offers different approaches for dynamic and static
forecasting:
For dynamic forecasting, EViews applies the backcasting procedure using data from
the beginning of the estimation sample to either the beginning of the forecast period,
or the end of the estimation sample, whichever comes first.
For static forecasting, the backcasting procedure uses data from the beginning of the
estimation sample to the end of the forecast period.
As before, the post-backcast sample innovations are set to zero and backward recursion is
used to obtain estimates of the pre-estimation sample innovations, and forward recursion is
used to obtain innovation estimates. Note that the forecast available method does not guarantee that the pre-sample forecast innovations match those employed in estimation.
See Forecasting with MA Errors on page 151 for additional discussion.
The backcast initialization method employed by EViews for an equation in model solution
depends on a variety of factors:
For equations estimated using EViews 6 and later, the initialization method is determined from the equation specification. If the equation was estimated using estimation
sample backcasting, its specification will contain BACKCAST= and ESTSMPL=
statements instructing EViews to backcast using the specified sample.
The example dialog above shows an equation estimated using the estimation sample
backcasting method.
For equations estimated prior to EViews 6, the model will only contain the BACKCAST= statement so that by default, the equation will be initialized using forecast
available.
In both cases, you may override the default settings by changing the specification of
the equation in the model. To ensure that the equation backcasts using the forecast
available method, simply delete the ESTSMPL= portion of the equation specification. To force the estimation sample method for model solution, you may add an
ESTSMPL= statement to the equation specification.
Note that models containing post-EViews 6 equations solved in previous versions of EViews
will always backcast using the forecast available method.
Basic Options
To begin solving a model, you can use Proc/Solve Model... or you can simply click on the
Solve button on the model toolbar. EViews will display a tabbed dialog containing the solution options.
The basic options page contains the most important options for the simulation. While the options on other pages can often be left at their default values, the options on this page will need to be set appropriately for the task you are trying to perform.
At the top left, the Simulation type box allows you to determine whether the model should be simulated deterministically or stochastically. In a deterministic simulation, all equations in the model are solved so that they hold without error during the simulation period, all coefficients are held fixed at their point estimates, and all exogenous variables are held constant. This results in a single path for the endogenous variables which can be evaluated by solving the model once.
In a stochastic simulation, the equations of the model are solved so that they have residuals
which match to randomly drawn errors, and, optionally, the coefficients and exogenous variables of the model are also varied randomly (see Stochastic Options on page 736 for
details). For stochastic simulation, the model solution generates a distribution of outcomes
for the endogenous variables in every period. We approximate the distribution by solving
the model many times using different draws for the random components in the model then
calculating statistics over all the different outcomes.
Typically, you will first analyze a model using deterministic simulation, and then later proceed to stochastic simulation to get an idea of the sensitivity of the results to various sorts of
error. You should generally make sure that the model can be solved deterministically and is
behaving as expected before trying a stochastic simulation, since stochastic simulation can
be very time consuming.
The next option is the Dynamics box. This option determines how EViews uses historical
data for the endogenous variables when solving the model:
When Dynamic solution is chosen, only values of the endogenous variables from
before the solution sample are used when forming the forecast. Lagged endogenous
variables and ARMA terms in the model are calculated using the solutions calculated
in previous periods, not from actual historical values. A dynamic solution is typically
the correct method to use when forecasting values several periods into the future (a
multi-step forecast), or evaluating how a multi-step forecast would have performed
historically.
When Static solution is chosen, values of the endogenous variables up to the previous period are used each time the model is solved. Lagged endogenous variables and
ARMA terms in the model are based on actual values of the endogenous variables. A
static solution is typically used to produce a set of one-step ahead forecasts over the
historical data so as to examine the historical fit of the model. A static solution cannot
be used to predict more than one observation into the future.
When the Fit option is selected, values of the endogenous variables for the current
period are used when the model is solved. All endogenous variables except the one
variable for the equation being evaluated are replaced by their actual values. The fit
option can be used to examine the fit of each of the equations in the model when considered separately, ignoring their interdependence in the model. The fit option can
only be used for periods when historical values are available for all the endogenous
variables.
In addition to these options, the Structural checkbox gives you the option of ignoring any
ARMA specifications that appear in the equations of the model.
At the bottom left of the dialog is a box for the solution sample. The solution sample is the
set of observations over which the model will be solved. Unlike in some other EViews procedures, the solution sample will not be contracted automatically to exclude missing data. For
the solution to produce results, data must be available for all exogenous variables over the
course of the solution sample. If you are carrying out a static solution or a fit, data must also
be available for all endogenous variables during the solution sample. If you are performing a
dynamic solution, only pre-sample values are needed to initialize any lagged endogenous or
ARMA terms in the model.
On the right-hand side of the dialog are controls for selecting which scenarios we would like
to solve. By clicking on one of the Edit Scenario Options buttons, you can quickly examine
the settings of the selected scenario. The option Solve for Alternate along with Active
should be used mainly in a stochastic setting, where the two scenarios must be solved
together to ensure that the same set of random shocks is used in both cases. Whenever two
models are solved together stochastically, a set of series will also be created containing the
deviations between the scenarios (this is necessary because in a non-linear model, the difference of the means need not equal the mean of the differences).
When stochastic simulation has been selected, additional checkboxes are available for
selecting which statistics you would like to calculate for your tracked endogenous variables.
A series for the mean will always be calculated. You can also optionally collect series for the
standard deviation or quantile bounds. Quantile bounds require considerable working memory, but are useful if you suspect that your endogenous variables may have skewed distributions or fat tails. If standard deviations or quantile bounds are chosen for either the active or
alternate scenario, they will also be calculated for the deviations series.
Stochastic Options
The stochastic options page contains settings used during stochastic simulation. In many
cases, you can leave these options at their default settings.
appended to the name of the new page so that it does not conflict with any existing page
names.
The Confidence interval box sets options for how confidence
intervals should be calculated, assuming they have been
selected. The Calc from entire sample option uses the sample
quantile as an estimate of the quantile of the underlying distribution. This involves storing complete tails for the observed outcomes. This can be very memory intensive since the memory used increases linearly in the
number of repetitions. The Reduced memory approx option uses an updating algorithm
due to Jain and Chlamtac (1985). This requires much less memory overall, and the amount
used is independent of the number of repetitions. The updating algorithm should provide a
reasonable estimate of the tails of the underlying distribution as long as the number of repetitions is not too small.
The Interval size (2 sided) box lets you select the size of the confidence interval given by
the upper and lower bounds. The default size of 0.95 provides a 95% confidence interval
with a weight of 2.5% in each tail. If, instead, you would like to calculate the interquartile
range for the simulation results, you should input 0.5 to obtain a confidence interval with
bounds at the 25% and 75% quantiles.
The Innovation generation box on the right side of the dialog determines how the innovations to stochastic equations will be generated. There are two basic methods available for
generating the innovations. If Method is set to Normal Random Numbers the innovations
will be generated by drawing a set of random numbers from the standard normal distribution. If Method is set to Bootstrap the innovations will be generated by drawing randomly
(with replacement) from the set of actual innovations observed within a specified sample.
Using bootstrapped innovations may be more appropriate than normal random numbers in
cases where the equation innovations do not seem to follow a normal distribution, for example if the innovations appear asymmetric or appear to contain more outlying values than a
normal distribution would suggest. Note, however, that a set of bootstrapped innovations
drawn from a small sample may provide only a rough approximation to the true underlying
distribution of the innovations.
When normal random numbers are used, a set of independent random numbers is drawn from the standard normal distribution at each time period, then these numbers are scaled to match the desired variance-covariance structure of the system. In the general case, this involves multiplying the vector of random numbers by the Cholesky factor of the covariance matrix. If the matrix is diagonal, this reduces to multiplying each random number by its desired standard deviation.
The Scale variances to match equation specified standard deviations box lets you determine how the variances of the residuals in the equations are determined. If the box is not
checked, the variances are calculated from the model equation residuals. If the box is
checked, then any equation that contains a specified standard deviation will use that number instead (see page 720 for details on how to specify a standard deviation from the equation properties page). Note that the sample used for estimation in a linked equation may
differ from the sample used when estimating the variances of the model residuals.
The Diagonal covariance matrix box lets you determine how the off diagonal elements of
the covariance matrix are determined. If the box is checked, the off diagonal elements are
set to zero. If the box is not checked, the off diagonal elements are set so that the correlation
of the random draws matches the correlation of the observed equation residuals. If the variances are being scaled, this will involve rescaling the estimated covariances so that the correlations are maintained.
The Estimation sample box allows you to specify the set of observations that will be used
when estimating the variance-covariance matrix of the model residuals. By default, EViews
will use the default workfile sample.
The Multiply covariance matrix field allows you to set an overall scale factor to be applied
to the entire covariance matrix. This can be useful for seeing how the stochastic behavior of
the model changes as levels of random variation are applied which are different from those
that were observed historically, or as a means of trouble-shooting the model by reducing the
overall level of random variation if the model behaves badly.
When bootstrapped innovations are used, the dialog changes
to show options available for bootstrapping. Similar options
are available to those provided when using normal random
numbers, although the meanings of the options are slightly
different.
The field Bootstrap residual draw sample may be used to
specify a sample period from which to draw the residuals
used in the bootstrap procedure. If no sample is provided,
the bootstrap sample will be set to include the set of observations from the start of the workfile to the last observation before the start of the solution
sample. Note that if the bootstrap sample is different from the estimation sample for an
equation, then the variance of the bootstrapped innovations need not match the variance of
the innovations as estimated by the equation.
The Diagonal covariance matrix - draw resid independently for each equation checkbox
specifies whether each equation draws independently from a separate observation of the
bootstrap sample, or whether a single observation is drawn from the bootstrap sample for all
the equations in the model. If the innovation is drawn independently for each equation,
there will be no correlation between the innovations used in the different equations in the
model. If the same observation is used for all residuals, then the covariance of the innovations in the forecast period will match the covariance of the observed innovations within the
bootstrap sample.
The Multiply bootstrap resids by option can be used to rescale all bootstrapped innovations
by the specified factor before applying them to the equations. This can be useful for providing a broad adjustment to the overall level of uncertainty to be applied to the model, which
can be useful for trouble-shooting if the model is producing errors during stochastic simulation. Note that multiplying the innovation by the specified factor causes the variance of the
innovation to increase by the square of the factor, so this option has a slightly different
meaning in the bootstrap case than when using normally distributed errors.
As noted above, stochastic simulation may include both coefficient uncertainty and exogenous variable uncertainty. There are very different methods of specifying these two
types of uncertainty.
The Include coefficient uncertainty field at the bottom right
of the Stochastic Options dialog specifies whether estimated
coefficients in linked equations should be varied randomly
during a stochastic simulation. When this option is selected, coefficients are randomly
redrawn at the beginning of each repetition, using the coefficient variability in the estimated
equation, if possible. This technique provides a method of incorporating uncertainty surrounding the true values of the coefficients into variation in our forecast results. Note that
coefficient uncertainty is ignored in nonlinear equations and in linear equations estimated
with PDL terms.
We emphasize that the dynamic behavior of a model may be altered considerably when the
coefficients in the model are varied randomly. A model which is stable may become unstable, or a model which converges exponentially may develop cyclical oscillations. One consequence is that the standard errors from a stochastic simulation of a single equation may vary
from the standard errors obtained when the same equation is forecast using the EViews
equation object. This result arises since the equation object uses an analytic approach to calculating standard errors based on a local linear approximation that effectively imposes stationarity on the original equation.
To specify exogenous variable uncertainty, you must provide information about the variability of each relevant exogenous variable. First, display the model in variable view by selecting
View/Variables or clicking on the Variables button in the toolbar. Next, select the exogenous variable in question, and right mouse click, select Properties..., and enter the exogenous variable variance in the resulting dialog. If you supply a positive value, EViews will
incorporate exogenous variable uncertainty in the simulation; if the variance is not a valid
value (negative or NA), the exogenous variable will be treated as deterministic.
Tracked Variables
The Tracked Variables page of the dialog lets you examine and modify which endogenous
variables are being tracked by the model. When a variable is tracked, the results for that
variable are saved in a series in the workfile after the simulation is complete. No results are
saved for variables that are not tracked.
Tracking is most useful when working with large models, where keeping the results for
every endogenous variable in the model would clutter the workfile and use up too much
memory.
By default, all variables are tracked. You can switch on selective tracking using the radio
button at the top of the dialog. Once selective tracking is selected, you can type in variable
names in the dialog below, or use the properties dialog for the endogenous variable to
switch tracking on and off.
You can also see which variables are currently being tracked using the variable view, since
the names of tracked variables appear in blue.
Diagnostics
The Diagnostics dialog page lets you set options to control the display of intermediate output. This can be useful if you are having problems getting your model to solve.
When the Display detailed messages box is checked, extra output will be produced in the
solution messages window as the model is solved.
The traced variables list lets you specify a list of variables for which intermediate values will
be stored during the iterations of the solution process. These results can be examined by
switching to the Trace Output view after the model is complete. Tracing intermediate values
may give you some idea of where to look for problems when a model is generating errors or
failing to converge.
Solver
The Solver dialog page sets options relating to the non-linear equation solver which is
applied to the model.
Broyden's method retains many of the desirable properties of Newton's method, such as being invariant to equation reordering or rewriting. (See Broyden's Method, on page 1016.)
Note that even if Newton's or Broyden's method is selected for solving within each period of
the model, a Gauss-Seidel type method is used between all the periods if the model requires
iterative forward solution. See Models Containing Future Values on page 730.
The Excluded variables/Initialize from Actuals checkbox controls where EViews takes values for excluded variables. By default, this box is checked and all excluded observations for
solved endogenous variables (both in the solution sample and pre-solution observations) are
initialized to the actual values of the endogenous variables prior to the start of a model solution. If this box is unchecked, EViews will initialize the excluded variables with values from
the solution series (aliased series), so that you may set the values manually without editing
the original series.
The Order simultaneous blocks for minimum feedback checkbox tells the solver to reorder the equations/variables within each simultaneous block in a way that will typically
reduce the time required to solve the model. You should generally leave this box checked
unless your model fails to converge, in which case you may want to see whether the same
behavior occurs when the option is switched off.
The goal of the reordering is to separate a subset of the equations/variables of the simultaneous block into a subsystem which is recursive conditional on the values of the variables
not included in the recursive subsystem. In mathematical notation, if F are the equations of
the simultaneous block and y are the endogenous variables:
F(y, x) = 0    (40.9)
then the reordering is equivalent to rewriting this system in partitioned form:
F1(y1, y2, x) = 0
F2(y1, y2, x) = 0    (40.10)
where F has been partitioned into F1 and F2, and y has been partitioned into y1 and y2.
The equations in F1 are chosen so that they form a recursive system in the variables in the first partition, y1, conditional on the values of the variables in the second partition, y2. By a recursive system we mean that the first equation in F1 may contain only the first element of y1, the second equation in F1 may contain only the first and second elements of y1, and so on.
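To make the recursive structure concrete, here is a minimal illustrative sketch with three variables in the recursive partition (the functions g1, g2, g3 are purely hypothetical and are written in explicit form only for clarity):

y1(1) = g1(y2, x)
y1(2) = g2(y1(1), y2, x)
y1(3) = g3(y1(1), y1(2), y2, x)

Given trial values for the feedback variables y2, these equations can be evaluated one after another without iteration; only the smaller system involving the feedback variables needs to be solved simultaneously.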
The reordering is chosen to make the first (recursive) partition as large as possible, or,
equivalently, to make the second (feedback) partition as small as possible. Finding the best
possible reordering is a time consuming problem for a large system, so EViews uses an algorithm proposed by Levy and Low (1988) to obtain a reordering which will generally be close
to optimal, although it may not be the best of all possible reorderings. Note that in models
containing hundreds of equations the recursive partition will often contain 90% or more of
the equations/variables of the simultaneous block, with only 10% or less of the equations/
variables placed in the feedback partition.
The reordering is used by the solution algorithms in a variety of ways.
If the Gauss-Seidel algorithm is used, the basic operations performed by the algorithm
are unchanged, but the equations are evaluated in the minimum feedback order
instead of the order that they appear in the model. While for any particular model,
either order could require fewer iterations to converge, in practice many models seem to
converge faster when the equations are evaluated using the minimum feedback ordering.
If the Newton solution algorithm is used, the reordering implies that the Jacobian
matrix used in the Newton step has a bordered lower triangular structure (it has an
upper left corner that is lower triangular). This structure is used inside the Newton
solver to reduce the number of calculations required to find the solution to the linearized set of equations used by the Newton step.
If the Broyden solution algorithm is used, the reordering is used to reduce the size of
the equation system presented to the Broyden solver by using the equations of the
recursive partition to 'substitute out' the variables of the recursive partition, producing a system which has only the feedback variables as unknowns. This more compact
system of equations can generally be solved more quickly than the complete set of
equations of the simultaneous block.
The Use Analytic Derivatives checkbox determines whether the solver will take analytic
derivatives of the equations with respect to the endogenous variables within each simultaneous block when using solution methods that require the Jacobian matrix. If the box is not
checked, derivatives will be obtained numerically. Analytic derivatives will often be faster to
evaluate than numeric derivatives, but they will require more memory than numeric derivatives since an additional expression must be stored for each non-zero element of the Jacobian matrix. Analytic derivatives must also be recompiled each time the equations in the
model are changed. Note that analytic derivatives will be discarded automatically if the
expression for the derivative is much larger than the expression for the original equation, as
in this case the numeric derivative will be both faster to evaluate and require less memory.
The Preferred solution starting values section lets you select the values to be used as starting values in the iterative procedure. When Actuals is selected, EViews will first try to use
values contained in the actuals series as starting values. If these are not available, EViews
will try to use the values solved for in the previous period. If these are not available, EViews
will default to using arbitrary starting values of 0.1. When Previous periods solution is
selected, the order is changed so that the previous period's values are tried first, and only if
they are not available, are the actuals used.
The Solution control section allows you to set termination options for the solver. Max iterations sets the maximum number of iterations that the solver will carry out before aborting.
Convergence sets the threshold for the convergence test. If the largest relative change
between iterations of any endogenous variable has an absolute value less than this threshold, then the solution is considered to have converged. Stop on missing data means that the
solver should stop as soon as one or more exogenous (or lagged endogenous) variables is
not available. If this option is not checked, the solver will proceed to subsequent periods,
storing NAs for this period's results.
The Forward solution section allows you to adjust options that affect how the model is
solved when one or more equations in the model contain future (forward) values of the
endogenous variables. The Terminal conditions section lets you specify how the values of
the endogenous variables are determined for leads that extend past the end of the forecast
period. If User supplied in Actuals is selected, the values contained in the Actuals series
after the end of the forecast sample will be used as fixed terminal values. If no values are
available, the solver will be unable to proceed. If Constant level is selected, the terminal
values are determined endogenously by adding the condition to the model that the values of
the endogenous variables are constant over the post-forecast period at the same level as the
final forecasted values (y_t = y_(t-1) for t = T, T+1, ..., T+k-1, where T is the first observation past the end of the forecast sample and k is the maximum lead in the model).
This option may be a good choice if the model converges to a stationary state. If Constant
difference is selected, the terminal values are determined endogenously by adding the condition that the values of the endogenous variables follow a linear trend over the post-forecast period, with a slope given by the difference between the last two forecasted values:
y_t - y_(t-1) = y_(t-1) - y_(t-2)    (40.11)
If Constant growth rate is selected, the terminal values are determined endogenously by adding the condition that the endogenous variables grow over the post-forecast period at the same rate as the growth between the last two forecasted values:
(y_t - y_(t-1)) / y_(t-1) = (y_(t-1) - y_(t-2)) / y_(t-2)    (40.12)
depending on the level of forward or backward persistence in the model. You should choose
whichever setting results in a lower iteration count for your particular model.
The Solution round-off section of the dialog controls how the results are rounded after convergence has been achieved. Because the solution algorithms are iterative and provide only
approximate results to a specified tolerance, small variations can occur when comparing
solutions from models, even when the results should be identical in theory. Rounding can be
used to remove some of this minor variation so that results will be more consistent. The
default settings will normally be adequate, but if your model has one or more endogenous
variables of very small magnitude, you will need to switch off the rounding to zero or
rescale the variables so that their solutions are farther from zero.
target value. If the procedure fails, you may like to try moving the trajectory series closer to
values that you are sure the model can achieve.
Editing Data
The easiest way to make simple changes to the data associated with a model is to open a
series or group spreadsheet window containing the data, then edit the data by hand.
To open a series window from within the model, simply select the variable using the mouse
in the variable view, then use the right mouse button menu to choose Open selected
series, followed by Actuals, Active Scenario or Alternate Scenario. If you select several
series before using the option, an unnamed group object will be created to hold all the
series.
To edit the data, click the Edit+/- button to make sure the spreadsheet is in edit mode. You
can either edit the data directly in levels or use the Units button to work with a transformed
form of the data, such as the differences or percentage changes.
To create a group which allows you to edit more than one of the series associated with a
variable at the same time, you can use the Make Group/Table procedure discussed below to
create a dated data table, then switch the group to spreadsheet view to edit the data.
More complicated changes to the data may require using a genr command to calculate the
series by specifying an expression. Click the Genr button from the series window toolbar to
call up the dialog, then type in the expression to generate values for the series and set the
workfile sample to the range of values you would like to modify.
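As a minimal sketch, suppose the active scenario stores government spending in a series named G_1 (the series name and dates here are hypothetical). You could raise spending by five percent over part of the forecast period with:

smpl 2000q1 2005q4
genr g_1 = g_1 * 1.05
smpl @all

Resetting the sample with the final command ensures that subsequent operations apply to the full workfile range rather than just the edited period.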
Displaying Data
The EViews model object provides two main forms in which to display data: as a graph or as
a table. Both of these can be generated easily from the model window.
From the variable view, select the variables you wish to display, then use the right mouse button menu or the main menu to select Proc and then Make Group/Table or Make Graph.
The dialogs for the two procs are almost identical. Here we see the Make Graph dialog, which we saw earlier in our macro model example. The majority of fields in the dialog control which series you would like the table or graph to contain. At the top left of the dialog is the Model Variables box, which is used to select the set of variables to place in the graph.
By default, the table or graph will contain the variables that are currently selected in the
variable view. You can expand this to include all model variables, or add or remove particular variables from the list of selected variables using the radio buttons and text box labeled
From. You can also restrict the set of variables chosen according to variable type using the
drop down menu next to Select. By combining these fields, it is easy to select sets of variables such as all of the endogenous variables of the model, or all of the overridden variables.
Once the set of variables has been determined, it is necessary to map the variable names
into the names of series in the workfile. This typically involves adding an extension to each
name according to which scenario the data is from and the type of data contained in the
series. The options affecting this are contained in the Graph series (if you are making a
graph) or Series types (if you are making a group/table) box at the right of the dialog.
The Solution series box lets you choose which solution results you would like to examine
when working with endogenous variables. You can choose from a variety of series generated
during deterministic or stochastic simulations.
The series of checkboxes below determine which scenarios you would like to display in the
graphs, as well as whether you would like to calculate deviations between various scenarios.
You can choose to display the actual series, the series from the active scenario, or the series
from an alternate scenario (labeled Compare). You can also display either the difference
between the active and alternate scenario (labeled Deviations: Active from Compare), or
the ratio between the active and alternate scenario in percentage terms (labeled % Deviation: Active from Compare).
The final field in the Graph series or Series types box is the Transform listbox. This lets
you apply a transformation to the data similar to the Transform button in the series spreadsheet.
While the deviations and units options allow you to present a variety of transformations of
your data, in some cases you may be interested in other transformations that are not directly
available. Similarly, in a stochastic simulation, you may be interested in examining standard
errors or confidence bounds on the transformed series, which will not be available when
you apply transformations to the data after the simulation is complete. In either of these
cases, it may be worth adding an identity to the model that generates the series you are
interested in examining as part of the model solution.
For example, if your model contains a variable GDP, you may like to add a new equation to
the model to calculate the percentage change of GDP:
pgdp = @pch(gdp)
After you have solved the model, you can use the variable PGDP to examine the percentage
change in GDP, including examining the error bounds from a stochastic simulation. Note
that the cost of adding such identities is relatively low, since EViews will place all such identities in a final recursive block which is evaluated only once after the main endogenous variables have already been solved.
The remaining option, at the bottom left of the dialog, lets you determine how the series will
be grouped in the output. The options are slightly different for tables and graphs. For tables,
you can choose to either place all series associated with the same model variable together,
or to place each series of the same series type together. For graphs, you have the same two
choices, and one additional choice, which is to place every series in its own graph.
In the graph dialog, you also have the option of setting a sample for the graph. This is often
useful when you are plotting forecast results since it allows you to choose the amount of historical data to display in the graph prior to the forecast results. By default, the sample is set
to the workfile sample.
When you have finished setting the options, simply click on OK to create the new table or
graph. All of the usual EViews editing features are available to modify the table or graph for
final presentation.
Managing Data
When working with a model, you will often create many series in the workfile for each variable, each containing different types of results or the data from different scenarios. The
model object provides a number of tools to help you manage these series, allowing you to
perform copy, fetch, store and delete operations directly from within the model.
Because the series names are related to the variable names in a consistent way, management
tasks can often also be performed from outside the model by using the pattern matching features available in EViews commands (see Appendix A. Wildcards, on page 735 of the
Command and Programming Reference).
The data management operations from within the model window proceed very similarly to the data display operations. First, select the variables you would like to work with from the variable view, then choose Copy, Store series, Fetch series or Delete series from the right mouse button menu or the object procedures menu. A dialog will appear, similar to the one used when making a table or graph.
In the same way as for the table and graph dialogs, the left side of the dialog is used to
choose which of the model variables to work with, while the right side of the dialog is used
to select one or more series associated with each variable. Most of the choices are exactly
the same as for graphs and tables. One significant difference is that the checkboxes for
active and comparison scenarios include exogenous variables only if they have been overridden in the scenario. Unlike when displaying or editing the data, if an exogenous variable has
not been overridden, the actual series will not be included in its place. The only way to
store, fetch or delete any actual series is to use the Actuals checkbox.
After clicking on OK, you will receive the usual prompts for the store, fetch and delete operations. You can proceed as usual.
Once you have solved your model for different scenarios, you may wish to quickly compare
the results between those scenarios to see which variables differ. Clicking on the menu item
View/Compare solutions... brings up a dialog that allows you to do this. The first part of
the dialog is similar to that of the data display dialog above. Select which variables you
would like to compare by using the Select drop-down box, and the From edit field.
Having selected your variable, you may select which scenarios to compare using the Series
to compare area. The first choice, using the Solution series drop-down is whether you wish
to compare the deterministic solutions, or compare the means from a stochastic solve. Note
you must have already performed the type of solve you choose prior to comparing it.
The Compare the Active Scenario drop-down lets you choose the set of variables for comparison. By default the drop-down will be set to the model's current active scenario. Note
changing this drop-down to another entry will change the active scenario for the model, as
well as for comparison.
There are two choices for specifying the second set of variables. You may either select a
comparison scenario (by selecting the Scenario radio button, and then selecting the scenario in the drop-down), or you may specify a pattern matching scheme by selecting the
Pattern radio button. With pattern matching, you should use the * wildcard to represent
the variable names in the pattern. For example, if you wish to compare I_0 (the current
active scenario) with a series called I_OLD, you would enter a pattern of *_OLD, having
specified I as the variable to compare. Note that you may reference series stored in a database using the standard dbname:: syntax, or series in another page using the standard
pagename\ syntax.
The series used for comparison should already exist in the workfile (or storage location if
you specified another container with pattern matching). Note this means that you should
have already solved the model for the specified scenario if applicable.
The Include threshold edit box lets you set the tolerance level for detecting differences
between the solutions. By default it is set to 0.1%, i.e., any relative difference less than 0.001
will be ignored. You can specify a value of 0 to tell EViews to show all differences, no matter
how small.
Finally the Comparison sample edit field lets you set the sample over which you wish to
compare the series.
Clicking OK produces the comparison table:
The table lists any variables for which the percentage difference between the two series for
each scenario is greater than the specified tolerance.
In this case we are comparing the solution between the Baseline scenario (_0) and
Scenario 1 (_1), and two variables, M and G, have a difference greater than the specified tolerance of 1e-04. The four columns in the table show details about the variables. The first
shows the variable name. The second, Delta%, shows the maximum difference between the
two series for each variable. The third and fourth columns, First and Last, give the date
(or observation number) of the first period in which the two series differ and the last period
in which the two series differ. Here, the first period in which M_0 differs from M_1 is
1960Q1, and the last period in which they differ is 1999Q4.
References
Dennis, J. E. and R. B. Schnabel (1983). "Secant Methods for Systems of Nonlinear Equations," Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, London.
Jain, Raj and Imrich Chlamtac (1985). "The P² Algorithm for Dynamic Calculation of Quantiles and Histograms Without Storing Observations," Communications of the ACM, 28(10), 1076–1085.
Levy, Hanoch and David W. Low (1988). "A Contraction Algorithm for Finding Small Cycle Cutsets," Journal of Algorithms, 9, 470–493.
Pindyck, Robert S. and Daniel L. Rubinfeld (1998). Econometric Models and Economic Forecasts, 4th edition, New York: McGraw-Hill.
For each cross-section specific variable, you should have a separate series corresponding to each cross-section/
variable combination. For example, if you have time series data for an economic variable
like investment that differs for each of 10 firms, you should have 10 separate investment
series in the workfile with names that follow the user-defined convention.
Lastly, and most importantly, a pool workfile must contain one or more pool objects, each of
which contains a (possibly different) description of the pooled structure of your workfile in
the form of rules specifying the user-defined naming convention for your series.
There are various approaches that you may use to set up your pool workfile:
First, you may simply create a new workfile in the usual manner, by describing the
time series structure of your data. Once you have a workfile with the desired structure, you may define a pool object, and use this object as a tool in creating the series
of interest and importing data into the series.
Second, you may create an EViews workfile containing your data in stacked form.
Once you have your stacked data, you may use the built-in workfile reshaping tools to
create a workfile containing the desired structure and series.
Both of these procedures require a bit more background on the nature of the pool object,
and the way that your pooled data are held in the workfile. We begin with a brief description
of the basic components of the pool object, and then return to a description of the task of
setting up your workfile and data (Setting up a Pool Workfile on page 763).
Cross-section Identifiers
The central feature of a pool object is a list of cross-section members which provides a naming convention for series in the workfile. The entries in this list are termed cross-section identifiers. For example, in a cross-country study, you might use _USA to refer to the United
States, _KOR to identify Korea, _JPN for Japan, and _UK for the United Kingdom.
Since the cross-section identifiers will be used as a base in forming series names, we recommend that they be kept relatively short.
Specifying the list of cross-section identifiers in a pool tells EViews about the structure of your
data. When using a pool with the four cross-section identifiers given above, you instruct
EViews to work with separate time series data for each of the four countries, with the data held in series that contain the identifiers as part of the series names.
The most direct way of creating a pool object is to select Object/New Object.../Pool. EViews
will open the pool specification view into which you should enter or copy-and-paste a list of
identifiers, with individual entries separated by spaces, tabs, or carriage returns. Here, we
have entered four identifiers on separate lines.
There are no special restrictions on the labels that you can use for cross-section identifiers, though you must be able to form legal EViews series names containing these identifiers.
Note that we have used the _ character at the start of each of the identifiers in our list; this is not necessary, but you may find that it makes it easier to spot the identifier when it is used as the end of a series name.
Before moving on, it is important to note that a pool object is simply a description of the
underlying structure of your data, so that it does not itself contain series or data. This separation of the object and the data has important consequences.
First, you may use pool objects to define multiple sets of cross-section identifiers. Suppose,
for example, that the pool object POOL01 contains the definitions given above. You may also
have a POOL02 that contains the identifiers _GER, _AUS, _SWTZ, and a POOL03 that
contains the identifiers _JPN and _KOR. Each of these three pool objects defines a different set of identifiers, and may be used to work with different sets of series in the workfile.
Alternatively, you may have multiple pool objects in a workfile, each of which contain the
same list of identifiers. A POOL04 that contains the same identifiers as POOL01 may be used
to work with data from the same set of countries.
Second, since pool objects contain only definitions and not series data, deleting a pool will
not delete underlying series data. You may, however, use a pool object to delete, create, and
manipulate underlying series data.
Group Definitions
In addition to the main list of cross-section identifiers, you may define groups made up of
subsets of your identifiers. To define a group of identifiers, you should enter the keyword
@GROUP followed by a name for the group, and the subset of the pool identifiers that are
to be used in the group. EViews will define a group using the specified name and any identifiers provided.
We may, for example, define the ASIA group containing the _JPN and _KOR identifiers,
or the NORTHAMERICA group containing the _USA identifier by adding:
@group asia _jpn _kor
@group northamerica _usa
To copy a pool object, highlight it and select Object/Copy Selected in the main workfile toolbar, or right mouse-click and select Object/Copy... and enter the new name.
Pooled Data
As noted previously, all of your pooled data will be held in ordinary EViews series. These
series can be used in all of the usual ways: they may, among other things, be tabulated,
graphed, used to generate new series, or used in estimation. You may also use a pool object
to work with sets of the individual series.
There are two classes of series in a pooled workfile: ordinary series and cross-section specific
series.
Ordinary Series
An ordinary series is one that has common values across all cross-sections. A single series
may be used to hold the data for each variable, and these data may be applied to every
cross-section. For example, in a pooled workfile with firm cross-section identifiers, data on
overall economic conditions such as GDP or money supply do not vary across firms. You
need only create a single series to hold the GDP data, and a single series to hold the money
supply variable.
Since ordinary series do not interact with cross-sections, they may be defined without reference to a pool object. Most importantly, there are no naming conventions associated with
ordinary series beyond those for ordinary EViews objects.
_USAGDP, _KORGDP, _JPNGDP, and _UKGDP. The identifiers may also be placed in
the middle of series names, for example, using the names GDP_USAIN, GDP_KORIN,
GDP_JPNIN, GDP_UKIN.
It really doesn't matter whether the identifiers are used at the beginning, middle, or end of
your cross-section specific names; you should adopt a naming style that you find easiest to
manage. Consistency in the naming of the set of cross-section series is, however, absolutely
essential. You should not, for example, name your four GDP series GDP_USA,
GDP_KOR, _JPNGDPIN, _UKGDP, as this will make it impossible for EViews to refer
to the set of series using a pool object.
Pool Series
Once your series names have been chosen to correspond with the identifiers in your pool,
the pool object can be used to work with a set of series as though it were a single item. The
key to this processing is the concept of a pool series.
A pool series is actually a set of series defined by a base name and the entire list of cross-section identifiers in a specified pool. Pool series are specified using the base name, and a
? character placeholder for the cross-section identifier. If your series are named
GDP_USA, GDP_KOR, GDP_JPN, and GDP_UK, the corresponding pool series may
be referred to as GDP?. If the names of your series are _USAGDP, _KORGDP,
_JPNGDP, and _UKGDP, the pool series is ?GDP.
When you use a pool series name, EViews understands that you wish to work with all of the
series in the workfile that match the pool series specification. EViews loops through the list
of cross-section identifiers in the specified pool, and substitutes each identifier in place of
the ?. EViews then uses the complete set of cross-section specific series formed in this
fashion.
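As a sketch of how this substitution works in practice, suppose the pool object POOL01 contains the four identifiers above and the workfile holds the series GDP_USA, GDP_KOR, GDP_JPN, and GDP_UK (the LOGGDP base name below is purely illustrative). The pool genr statement:

pool01.genr loggdp? = log(gdp?)

is evaluated once for each identifier, creating LOGGDP_USA from GDP_USA, LOGGDP_KOR from GDP_KOR, and so on. Pool genr is discussed further below.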
In addition to pool series defined with ?, EViews provides a special function, @INGRP,
that you may use to generate a group identity pool series that takes the value 1 if an observation is in the specified group, and 0 otherwise.
Consider, for example, the @GROUP for ASIA defined using the identifiers _KOR and
_JPN, and suppose that we wish to create a dummy variable series for whether an observation is in the group. One approach to representing these data is to create the following
four cross-section specific series:
series asia_usa = 0
series asia_kor = 1
series asia_jpn = 1
series asia_uk = 0
and to refer to them collectively as the pool series ASIA_?. While not particularly difficult
to do, this direct approach becomes more cumbersome the greater the number of cross-section identifiers.
More easily, we may use the special pool series expression:
@ingrp(asia)
to define a special virtual pool series in which each observation takes a 0 or 1 indicator for
whether an observation is in the specified group. This expression is equivalent to creating
the four cross-section specific series, and referring to them as ASIA_?.
We must emphasize that pool series specifiers using the ? and the @INGRP function may
only be used through a pool object, since they have no meaning without a list of cross-section identifiers. If you attempt to use a pool series outside the context of a pool object,
EViews will attempt to interpret the ? as a wildcard character (see Appendix A. Wildcards, on page 735 in the Command and Programming Reference). The result, most often,
will be an error message saying that your variable is not defined.
Direct Setup
The direct approach to setting up your pool workfile involves three distinct steps: first creating a workfile with the desired time series structure; next, creating one or more pool objects
containing the desired cross-section identifiers; and lastly, using pool object tools to import
data into individual series in the workfile.
Simply select File/New workfile... to bring up the Workfile Create dialog which you will
use to describe the structure of your workfile. For additional detail, see Creating a Workfile
by Describing its Structure on page 43 of Users Guide I.
For example, to create a pool workfile that has annual data ranging from 1950 to 1992, simply select Annual in the Frequency dropdown menu, and enter 1950 as the Start date
and 1992 as the End date.
Next, you should create one or more pool objects containing cross-section identifiers and
group definitions as described in The Pool Object on page 758.
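Both steps may also be carried out with commands. A minimal sketch for the annual 1950–1992 workfile and the four-country pool used in this chapter (the workfile name POOLDEMO is hypothetical):

wfcreate(wf=pooldemo) a 1950 1992
pool pool01 _usa _uk _jpn _kor

The first line creates an annual workfile covering 1950 to 1992; the second declares a pool object named POOL01 containing the four cross-section identifiers.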
Unstacked Data
In this form, observations on a given variable for a given cross-section are grouped together,
but are separated from observations for other variables and other cross sections. For example, suppose the top of our Excel data file contains the following:
year    c_usa   c_kor   c_jpn   g_usa   g_jpn   g_kor
1954    61.6    77.4    66      17.8    18.7    17.6
1955    61.1    79.2    65.7    15.8    17.1    16.9
1956    61.7    80.2    66.1    15.7    15.9    17.5
1957    62.4    78.6    65.5    16.3    14.8    16.3
Here, the base name C represents consumption, while G represents government expenditure. Each country has its own separately identified column for consumption, and its own
column for government expenditure.
EViews pooled workfiles are structured to work naturally with data that are unstacked, since
the sets of cross-section specific series in the pool workfile correspond directly to the multiple columns of unstacked source data. You may read unstacked data directly into EViews using the standard workfile creation procedures described in Creating a Workfile by Reading from a Foreign Data Source on page 47 of Users Guide I. Each cross-section specific variable should be read as an individual series, with the names of the resulting series following the pool naming conventions given in your pool object. Ordinary series may be imported in
the usual fashion with no additional complications.
In this example, we should use the standard EViews tools to read separate series for each
column. We create the individual series YEAR, C_USA, C_KOR, C_JPN, G_USA,
G_JPN, and G_KOR.
Stacked Data
Pooled data can also be arranged in stacked form, where all of the data for a variable are
grouped together in a single column.
In the most common form, the data for different cross-sections are stacked on top of one
another, with all of the sequentially dated observations for a given cross-section grouped
together. We may say that these data are stacked by cross-section:
id      year    c       g
_usa    1954    61.6    17.8
_usa     ...     ...     ...
_usa    1992    68.1    13.2
_kor    1954    77.4    17.6
_kor     ...     ...     ...
_kor    1992    na      na
Alternatively, we may have data that are stacked by date, with all of the observations of a
given period grouped together:
per     id      c       g
1954    _usa    61.6    17.8
1954    _uk     62.4    23.8
1954    _jpn    66      18.7
1954    _kor    77.4    17.6
 ...     ...     ...     ...
1992    _usa    68.1    13.2
1992    _uk     67.9    17.3
1992    _jpn    54.2    7.6
1992    _kor    na      na
Each column again represents a single variable, but within each column, all of the cross-sections for a given year are grouped together. If data are stacked by year, you should make certain that the ordering of the cross-sectional identifiers within a year is consistent across
years.
There are two primary approaches to importing data into your pool series: you may read the
data in stacked form then use EViews tools to restructure the data in pool form, or you may
directly read or copy the data into a stacked representation of the pooled series.
Indirect Setup (Restructuring) of Stacked Data
The easiest approach to reading stacked pool data is to create an EViews workfile containing
the data in stacked form, and then use the built-in workfile reshaping tools to create a pool
workfile with the desired structure and data. (Alternatively, you can perform the first step and
simply work with the data in stacked form: see Chapter 42. Working with Panel Data, on
page 807 for details.)
The first step in the indirect setup of a pool workfile is to create a workfile containing the
contents of your stacked data file. You may manually create the workfile and import the
stacked series data, or you may use EViews tools for opening foreign source data directly
into a new workfile (Creating a Workfile by Reading from a Foreign Data Source on
page 47 of Users Guide I).
Once you have your stacked data in an EViews workfile, you may use the workfile reshaping
tools to unstack the data into a pool workfile page. In addition to unstacking the data into
multiple series, EViews will create a pool object containing identifiers obtained from patterns in the series names. See Reshaping a Workfile, beginning on page 286 of Users
Guide I for a general discussion of reshaping, and Unstacking a Workfile on page 289 of
Users Guide I for a more specific discussion of the unstack procedure.
The indirect method is generally easier to use than the direct approach and has the advantage of not requiring that the stacked data be balanced. It has the disadvantage of using
more computer memory since EViews must have two copies of the source data in memory at
the same time.
Direct Import of Stacked Data
An alternative approach is to enter or read the data directly into the workfile using a pool
object. You may enter or copy-and-paste data from the source into a stacked representation of your data, or you may use the pool object to describe how to read the stacked data
into the unstacked workfile.
To enter data or copy-and-paste, you use the pool object to create a stacked representation of
the data in EViews:
First, specify which time series observations will be included in your stacked spreadsheet by setting the workfile sample.
Next, open the pool, then select View/Spreadsheet View. EViews will prompt you
for a list of series. You can enter ordinary series names or pool series names. If the
series exist, then EViews will display the data in the series. If the series do not exist,
then EViews will create the series or group of series, using the cross-section identifiers
if you specify a pool series.
EViews will open the stacked spreadsheet view of the pool series. If desired, click on
the Order +/- button to toggle between stacking by cross-section and stacking by
date.
Click Edit +/- to turn on edit mode in the spreadsheet window, and enter your data,
or cut-and-paste from another application.
For example, if we have a pool object that contains
the identifiers _USA, _UK, _JPN, and _KOR,
we can instruct EViews to create the series C_USA,
C_UK, C_JPN, C_KOR, and G_USA, G_UK, G_JPN,
G_KOR, and YEAR simply by entering the pool
series names C?, G? and the ordinary series
name YEAR, and pressing OK.
EViews will open a stacked spreadsheet view of the
series in your list. Here we see the series stacked by cross-section, with the pool or ordinary
series names in the column header, and the cross-section/date identifiers labeling each row.
Note that since YEAR is an ordinary series, its values are repeated for each cross-section in
the stacked spreadsheet.
For a discussion of the text specific settings in the dialog, see References on page 166 of
Users Guide I.
Generation of a pool series applies the formula you supply using an implicit loop across
cross-section identifiers, creating or modifying one or more series as appropriate.
You may use pool and ordinary genr together to generate new pool variables. For example,
to create a dummy variable that is equal to 1 for the US and 0 for all other countries, first
select PoolGenr and enter:
dum? = 0
to initialize all four of the dummy variable series to 0. Then, to set the US values to 1, select
Quick/Generate Series from the main menu, and enter:
dum_usa = 1
It is worth pointing out that a superior method of creating this pool series is to use @GROUP
to define a group called US containing only the _USA identifier (see Group Definitions
on page 760), then to use the @INGRP function:
dum? = @ingrp(us)
to generate and implicitly refer to the four series (see Pool Series on page 762).
To modify a set of series using a pool, select PoolGenr, and enter the new pool series expression:
dum? = dum? * (g? > c?)
It is worth remembering that the pool genr works by performing an implicit loop across the cross-section identifiers. This implicit loop may be exploited in various ways, for example, to perform calculations across cross-sectional units in a given period. Suppose we have an ordinary series SUM which is initialized to zero. The pool genr expression:
sum = sum + c?
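With the four cross-section identifiers used in this chapter, the implicit loop expands this single statement into the following sequence of ordinary genr statements, evaluated one after the other, so that SUM ends up holding the sum of consumption across the cross-sections for each period:

sum = sum + c_usa
sum = sum + c_kor
sum = sum + c_jpn
sum = sum + c_uk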
Bear in mind that this example is provided merely to illustrate the notion of implicit looping,
since EViews provides built-in features to compute period-specific statistics.
object. One convenient way to create groups of series is to use tools for creating groups out of pool and ordinary series; another is to use wildcard expressions in forming the group.
Stacked data: display statistics for each variable in the list, computed over all cross-sections and periods. These are the descriptive statistics that you would get if you
ignored the pooled nature of the data, stacked the data, and computed descriptive statistics.
Stacked - means removed: compute statistics for each variable in the list after
removing the cross-sectional means, taken over all cross-sections and periods.
Cross-section specific: show the descriptive statistics for each cross-sectional variable, computed across all periods. These are the descriptive statistics derived by computing statistics for the individual series.
Time period specific: compute period-specific statistics. For each period, compute the
statistic using data on the variable from all the cross-sectional units in the pool.
Click on OK, and EViews will display a pool view containing tabular output with the
requested statistics. If you select Stacked data or Stacked - means removed, the view will
show a single column containing the descriptive statistics for each ordinary and pool series
in the list, computed from the stacked data. If you select Cross-section specific, EViews will
show a single column for each ordinary series, and multiple columns for each pool series. If
you select Time period specific, the view will show a single column for each ordinary or
pool series statistic, with each row of the column corresponding to a period in the workfile.
Note that there will be a separate column for each statistic computed for an ordinary or pool
series; a column for the mean, a column for the variance, etc.
You should be aware that the latter two methods may produce a great deal of output. Cross-section specific computation generates a set of statistics for each pool series/cross-section
combination. If you ask for statistics for three pool series and there are 20 cross-sections in
your pool, EViews will display 60 columns of descriptive statistics. For time period specific
computation, EViews computes a set of statistics for each date/series combination. If you
have a sample with 100 periods and you provide a list of three pool series, EViews will compute and display a view with columns corresponding to 3 sets of statistics, each of which
contains values for 100 periods.
If you wish to compute period-specific statistics, you may save the results in series objects.
See Making Period Stats on page 775.
Making a System
Suppose that you wish to estimate a complex specification that cannot easily be estimated
using the built-in features of the pool object. For example, you may wish to estimate a
pooled equation imposing arbitrary coefficient restrictions, or using specialized GMM techniques that are not available in pooled estimation.
In these circumstances, you may use the pool to create a system object using both common
and cross-section specific coefficients, AR terms, and instruments. The resulting system
object may then be further customized, and estimated using all of the techniques available
for system estimation.
Select Proc/Make System and fill out the dialog. You may enter the dependent variable, common and cross-section specific variables, and use the checkbox to allow for cross-sectional fixed effects. You may also enter a list of common and cross-section specific instrumental variables, and instruct EViews to add lagged dependent and independent regressors as instruments in models with AR specifications.
When you click on OK, EViews will take your specification and create a new system object
containing a single equation for each cross-section, using the specification provided.
Pooled Estimation
EViews pool objects allow you to estimate your model using least squares or instrumental
variables (two-stage least squares), with correction for fixed or random effects in both the
cross-section and period dimensions, AR errors, GLS weighting, and robust standard errors,
all without rearranging or reordering your data.
We begin our discussion by walking you through the steps that you will take in estimating a
pool equation. The wide range of models that EViews supports means that we cannot
exhaustively describe all of the settings and specifications. A brief background discussion of
the supported techniques is provided in Estimation Background, beginning on page 793.
First, you should specify the estimation settings in the lower portion of the dialog. Using the
Method dropdown menu, you may choose between LS - Least Squares (and AR), which performs ordinary least squares regression, and TSLS - Two-Stage Least Squares (and AR), which performs two-stage least squares (instrumental variable) regression. If you select the latter, the dialog will differ slightly from
this example, with the provision of an additional tab (page) for you to specify your instruments (see Instruments on page 783).
You should also provide an estimation sample in the Sample edit box. By default, EViews
will use the specified sample string to form the largest sample possible in each cross-section. An observation will be excluded if any of the explanatory or dependent variables for
that cross-section are unavailable in that period.
The checkbox for Balanced Sample instructs EViews to perform listwise exclusion over all
cross-sections. EViews will eliminate an observation if data are unavailable for any cross-section in that period. This exclusion ensures that estimates for each cross-section will be based
on a common set of dates.
Note that if no observations are available for a cross-section unit, that unit will temporarily be removed from the pool for purposes of estimation. The EViews output will inform you if any cross-sections were dropped from the estimation sample.
You may now proceed to fill out the remainder of the dialog.
Dependent Variable
List a pool variable, or an EViews expression containing ordinary and pool variables, in the
Dependent Variable edit box.
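For example, you might enter a pool series such as I? directly, or an expression such as I?/K? or log(I?); these particular expressions are purely illustrative and are not part of the example estimated below.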
Weights
By default, all observations are given equal weight in estimation. You may instruct EViews
to estimate your specification with estimated GLS weights using the dropdown menu labeled
Weights.
If you select Cross section weights, EViews will estimate a feasible GLS
specification assuming the presence of cross-section heteroskedasticity.
If you select Cross-section SUR, EViews estimates a feasible GLS specification correcting for both cross-section heteroskedasticity and contemporaneous correlation. Similarly, Period weights allows for period heteroskedasticity, while
Period SUR corrects for both period heteroskedasticity and general correlation of observations within a given cross-section. Note that the SUR specifications are each examples of
what is sometimes referred to as the Parks estimator.
Options
Clicking on the Options tab in the dialog brings up a page displaying a variety of estimation options for pool estimation. Settings that are not currently applicable will be grayed out.
Weighting Options
If you are estimating a specification that includes random effects, EViews will
provide you with a Random effects method dropdown menu so that you may specify one of
the methods for calculating estimates of the component variances. You may choose among the default Swamy-Arora, Wallace-Hussain, and Wansbeek-Kapteyn methods. See Random Effects on page 797 for discussion of the differences between the methods. Note that
the default Swamy-Arora method should be the most familiar from textbook discussions.
Details on these methods are provided in Baltagi (2005), Baltagi and Chang (1994), and Wansbeek and Kapteyn (1989).
The checkbox labeled Keep GLS weights may be selected to require EViews to save all estimated GLS weights with the equation, regardless of their size. By default, EViews will not
save estimated weights in system (SUR) settings, since the size of the required matrix may
be quite large. If the weights are not saved with the equation, there may be some pool views
and procedures that are not available.
Coefficient Name
By default, EViews uses the default coefficient vector C to hold the estimates of the coefficients and effects. If you wish to change the default, simply enter a name in the edit field. If
the specified coefficient object exists, it will be used, after resizing if necessary. If the object
does not exist, it will be created with the appropriate size. If the object exists but is an
incompatible type, EViews will generate an error.
Iteration Control
The familiar Max Iterations and Convergence criterion edit boxes allow you to set the convergence test for the coefficients and GLS weights.
If your specification contains AR terms, the AR starting coefficient values dropdown menu
allows you to specify starting values as a fraction of the OLS (with no AR) coefficients, zero,
or user-specified values.
If Display Settings is checked, EViews will display additional information about convergence settings and initial coefficient values (where relevant) at the top of the regression output.
The last set of radio buttons is used to determine the iteration settings for coefficients and
GLS weighting matrices.
The first two settings, Simultaneous updating and Sequential updating should be
employed when you want to ensure that both coefficients and weighting matrices are iterated to convergence. If you select the first option, EViews will, at every iteration, update
both the coefficient vector and the GLS weights; with the second option, the coefficient vector will be iterated to convergence, then the weights will be updated, then the coefficient
vector will be iterated, and so forth. Note that the two settings are identical for GLS models
without AR terms.
If you select one of the remaining two cases, Update coefs to convergence and Update
coefs once, the GLS weights will only be updated once. In both settings, the coefficients are
first iterated to convergence, if necessary, in a model with no weights, and then the weights
are computed using these first-stage coefficient estimates. If the first option is selected,
EViews will then iterate the coefficients to convergence in a model that uses the first-stage
weight estimates. If the second option is selected, the first-stage coefficients will only be iterated once. Note again that the two settings are identical for GLS models without AR terms.
By default, EViews will update GLS weights once, and then will update the coefficients to
convergence.
Instruments
To estimate a pool specification using instrumental variables techniques, you should select
TSLS - Two-Stage Least Squares (and AR) in the Method dropdown menu at the bottom of
the main (Specification) dialog page. EViews will respond by creating a three-tab dialog in
which the middle tab (page) is used to specify your instruments.
As with the regression specification, the instrument list specification is divided into a set of Common, Cross-section specific, and Period specific instruments. The interpretation of these lists is the same as for the regressors; if there are cross-section specific instruments, the number of these instruments equals the product of the number of pool identifiers and the number of variables in the list; if there are period specific instruments, the number of corresponding instruments is the number of periods times the number of variables in the list.
Note that you need not specify constant terms explicitly since EViews will internally add
constants to the lists corresponding to the specification in the main page.
Lastly, there is a checkbox labeled Include lagged regressors for equations with AR terms
that will be displayed if your specification includes AR terms. Recall that when estimating
an AR specification, EViews performs nonlinear least squares on an AR differenced specification. By default, EViews will add lagged values of the dependent and independent regressors to the corresponding lists of instrumental variables to account for the modified
differenced specification. If, however, you desire greater control over the set of instruments,
you may uncheck this setting.
We obviously cannot demonstrate all of the specifications that may be estimated using these
data, but we provide a few illustrative examples.
Fixed Effects
First, we estimate a model regressing I? on the common regressors F? and K?, with a cross-section fixed effect. All regression coefficients are restricted to be the same across all cross-sections, so this is equivalent to estimating a model on the stacked data, using the cross-sectional identifiers only for the fixed effect.
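For reference, a command-line sketch of this estimation, assuming the data are described by a pool object named POOL01 (the object name is an assumption; the cx=f option requests cross-section fixed effects):

pool01.ls(cx=f) i? c f? k?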
The top portion of the output from this regression, which shows the dependent variable,
method, estimation and sample information is given by:
Dependent Variable: I?
Method: Pooled Least Squares
Date: 12/03/03 Time: 12:21
Sample: 1935 1954
Included observations: 20
Number of cross-sections used: 10
Total pool (balanced) observations: 200
Variable                  Coefficient   Std. Error    t-Statistic   Prob.

C                         -58.74394     12.45369      -4.716990     0.0000
F?                          0.110124     0.011857       9.287901     0.0000
K?                          0.310065     0.017355      17.86656      0.0000
Fixed Effects (Cross)
AR--C                     -55.87287
CH--C                      30.93464
DM--C                      52.17610
GE--C                    -176.8279
GM--C                     -11.55278
GY--C                     -28.47833
IB--C                      35.58264
UO--C                      -7.809534
US--C                     160.6498
WH--C                       1.198282
EViews displays both the estimates of the coefficients and the fixed effects. Note that EViews
automatically includes a constant term so that the fixed effects estimates sum to zero and
should be interpreted as deviations from an overall mean.
Note also that the estimates of the fixed effects do not have reported standard errors since
EViews treats them as nuisance parameters for the purposes of estimation. If you wish to
compute standard errors for the cross-section effects, you may estimate a model without a
constant and explicitly enter the C in the Cross-section specific coefficients edit field.
The bottom portion of the output displays the effects specification and summary statistics
for the estimated model.
Effects Specification
Cross-section fixed (dummy variables)
R-squared               0.944073     Mean dependent var        145.9583
Adjusted R-squared      0.940800     S.D. dependent var        216.8753
S.E. of regression      52.76797     Akaike info criterion     10.82781
Sum squared resid       523478.1     Schwarz criterion         11.02571
Log likelihood         -1070.781     Hannan-Quinn criter.      10.90790
F-statistic             288.4996     Durbin-Watson stat        0.716733
Prob(F-statistic)       0.000000
A few of these summary statistics require discussion. First, the reported R-squared and F-statistics are based on the difference between the residual sums of squares from the estimated model, and the sums of squares from a single constant-only specification, not from a
fixed-effect-only specification. As a result, the interpretation of these statistics is that they
describe the explanatory power of the entire specification, including the estimated fixed
effects. Second, the reported information criteria use, as the number of parameters, the
number of estimated coefficients, including fixed effects. Lastly, the reported Durbin-Watson
stat is formed simply by computing the first-order residual correlation on the stacked set of
residuals.
Variable                  Coefficient   Std. Error    t-Statistic   Prob.

C                         -58.74394     19.61460      -2.994909     0.0031
F?                          0.110124     0.016932       6.504061     0.0000
K?                          0.310065     0.031541       9.830701     0.0000
The new output shows the method used for computing the standard errors, and the new
standard error estimates, t-statistic values, and probabilities reflecting the robust calculation
of the coefficient covariances.
Alternatively, we may adopt the Arellano (1987) approach of computing White coefficient
covariance estimates that are robust to arbitrary within cross-section residual correlation
(clustering by cross-section). Select the Options page and choose White period as the coefficient covariance method. The coefficient results are given by:
White period standard errors & covariance (d.f. corrected)
WARNING: estimated coefficient covariance matrix is of reduced rank

Variable                  Coefficient   Std. Error    t-Statistic   Prob.

C                         -58.74394     26.87312      -2.185974     0.0301
F?                          0.110124     0.014793       7.444423     0.0000
K?                          0.310065     0.051357       6.037432     0.0000
We caution that the White period results assume that the number of cross-sections is large,
which is not the case in this example. In fact, the resulting coefficient covariance matrix is of
reduced rank, a fact that EViews notes in the output.
AR Estimation
We may add an AR(1) term to the specification, and estimate using Cross-section SUR (PCSE) methods so that the standard errors are robust to contemporaneous correlation. EViews will estimate the transformed model using nonlinear least squares, will form an estimate of the residual covariance matrix, and will use the estimate in forming standard errors. The top portion of the results is given by:
Dependent Variable: I?
Method: Pooled Least Squares
Date: 08/17/09   Time: 14:45
Sample (adjusted): 1936 1954
Included observations: 19 after adjustments
Cross-sections included: 10
Total pool (balanced) observations: 190
Cross-section SUR (PCSE) standard errors & covariance (d.f. corrected)
Convergence achieved after 14 iterations

Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                -63.45169      28.79868      -2.203285     0.0289
F?                0.094744       0.015577      6.082374     0.0000
K?                0.350205       0.050155      6.982469     0.0000
AR(1)             0.686108       0.105119      6.526979     0.0000
Note in particular the description of the sample adjustment, which shows that the estimation drops one observation for each cross-section when performing the AR differencing, as well as the description of the method used to compute coefficient covariances.
Random Effects
Alternatively, we may produce estimates for the two-way random effects specification. First, in the Specification page, we set both the cross-section and period effects dropdown menus to Random. Note that the dialog changes to show that weighted estimation is not available with random effects (nor is AR estimation).
Next, in the Options page we estimate the coefficient covariance using the Ordinary method and we change the Random effects method to use the Wansbeek-Kapteyn method of computing the estimates of the random component variances.
Lastly, we click on OK to estimate the model.
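A command-line sketch of the same estimation, assuming a pool object named POOL1 and assuming the standard pool.ls options for two-way random effects (cx=r and per=r) and the Wansbeek-Kapteyn component variance method (rancalc=wk), would be:

' two-way random effects with Wansbeek-Kapteyn component variances
pool1.ls(cx=r, per=r, rancalc=wk) i? c f? k?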
The top portion of the output displays basic information about the specification, including
the method used to compute the component variances, as well as the coefficient estimates
and associated statistics:
Dependent Variable: I?
Method: Pooled EGLS (Two-way random effects)
Date: 12/03/03 Time: 14:28
Sample: 1935 1954
Included observations: 20
Number of cross-sections used: 10
Total pool (balanced) observations: 200
Wansbeek and Kapteyn estimator of component variances
Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                -63.89217      30.53284      -2.092573     0.0377
F?                0.111447       0.010963     10.16577      0.0000
K?                0.323533       0.018767     17.23947      0.0000
The middle portion of the output (not depicted) displays the best linear unbiased predictor
estimates of the random effects themselves.
The next portion of the output describes the estimates of the component variances:
Effects Specification
                                S.D.        Rho
Cross-section random         89.26257     0.7315
Period random                15.77783     0.0229
Idiosyncratic random         51.72452     0.2456
Here, we see that the estimated cross-section, period, and idiosyncratic error component
standard deviations are 89.26, 15.78, and 51.72, respectively. As seen from the values of
Rho, these components comprise 0.73, 0.02 and 0.25 of the total variance. Taking the cross-section component, for example, Rho is computed as:
$$\text{Rho}_{\text{cross}} = \frac{\hat{\sigma}_\delta^2}{\hat{\sigma}_\delta^2 + \hat{\sigma}_\gamma^2 + \hat{\sigma}_\epsilon^2} = \frac{89.26257^2}{89.26257^2 + 15.77783^2 + 51.72452^2} \approx 0.7315 \qquad (41.1)$$
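The same arithmetic may be verified directly in the command window (the scalar name is arbitrary):

scalar rho_cs = 89.26257^2 / (89.26257^2 + 15.77783^2 + 51.72452^2)
show rho_cs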
In addition, EViews reports summary statistics for the random effects GLS weighted data
used in estimation, and a subset of statistics computed for the unweighted data.
Dependent Variable: I?
Method: Pooled EGLS (Cross-section weights)
Date: 12/18/03 Time: 14:40
Sample: 1935 1954
Included observations: 20
Number of cross-sections used: 10
Total pool (balanced) observations: 200
Linear estimation after one-step weighting matrix
Variable        Coefficient    Std. Error    t-Statistic    Prob.
C                -4.696363      1.103187      -4.257089     0.0000
F?                0.074084      0.004077      18.17140      0.0000
AR--KAR           0.092557      0.007019      13.18710      0.0000
CH--KCH           0.321921      0.020352      15.81789      0.0000
DM--KDM           0.434331      0.151100       2.874468     0.0045
GE--KGE          -0.028400      0.034018      -0.834854     0.4049
GM--KGM           0.426017      0.026380      16.14902      0.0000
GY--KGY           0.074208      0.007050      10.52623      0.0000
IB--KIB           0.273784      0.019948      13.72498      0.0000
UO--KUO           0.129877      0.006307      20.59268      0.0000
US--KUS           0.807432      0.074870      10.78444      0.0000
WH--KWH          -0.004321      0.031420      -0.137511     0.8908
Note that EViews displays results for each of the cross-section specific K? series, labeled
using the equation identifier followed by the series name. For example, the coefficient
labeled AR--KAR is the coefficient of KAR in the cross-section equation for firm AR.
Suppose that we define a group named MYGRP containing the identifiers GE, GM, and GY, and estimate a specification that adds the expression @INGRP(MYGRP) to the common regressors C, F?, and K?, where the latter pool series expression refers to a set of 10 implicit series containing dummy
variables for group membership. The implicit series associated with the identifiers GE,
GM, and GY will contain the value 1, and the remaining seven series will contain the
value 0.
The results from this estimation are given by:
Dependent Variable: I?
Method: Pooled Least Squares
Date: 08/22/06 Time: 10:47
Sample: 1935 1954
Included observations: 20
Cross-sections included: 10
Total pool (balanced) observations: 200
Variable            Coefficient    Std. Error    t-Statistic    Prob.
C                    -34.97580      8.002410      -4.370659     0.0000
F?                    0.139257      0.005515      25.25029      0.0000
K?                    0.259056      0.021536      12.02908      0.0000
@INGRP(MYGRP)        -137.3389      14.86175      -9.241093     0.0000

R-squared               0.869338     Mean dependent var       145.9583
Adjusted R-squared      0.867338     S.D. dependent var       216.8753
S.E. of regression      78.99205     Akaike info criterion    11.59637
Sum squared resid       1222990.     Schwarz criterion        11.66234
Log likelihood         -1155.637     Hannan-Quinn criter.     11.62306
F-statistic             434.6841     Durbin-Watson stat       0.356290
Prob(F-statistic)       0.000000
We see that the mean value of I? for the three firms in the group is substantially lower than for the remaining firms, and that the difference is statistically significant at conventional levels.
Representation
Select View/Representations to examine your specification. EViews estimates your pool as
a system of equations, one for each cross-section unit.
Estimation Output
View/Estimation Output will change the display to show the results from the pooled estimation.
As with other estimation objects, you can examine the estimates of the coefficient covariance matrix by selecting View/Coef Covariance Matrix.
Testing
EViews allows you to perform coefficient tests on the estimated parameters of your pool
equation. Select View/Wald Coefficient Tests and enter the restriction to be tested. Additional tests are described in the panel discussion Panel Equation Testing on page 857.
Residuals
You can view your residuals in spreadsheet or graphical format by selecting View/Residuals/Table or View/Residuals/Graph. EViews will display the residuals for each cross-sectional equation. Each residual will be named using the base name RES, followed by the
cross-section identifier.
If you wish to save the residuals in series for later use, select Proc/Make Resids. This procedure is particularly useful if you wish to form specification or hypothesis tests using the
residuals.
Residual Covariance/Correlation
You can examine the estimated residual contemporaneous covariance and correlation matrices. Select View/Residual and then either Covariance Matrix or Correlation Matrix to
examine the appropriate matrix.
Forecasting
To perform forecasts using a pool equation you will first make a model. Select Proc/Make
Model to create an untitled model object that incorporates all of the estimated coefficients. If
desired, this model can be edited. Solving the model will generate forecasts for the dependent variable for each of the cross-section units. For further details, see Chapter 40. Models, on page 699.
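A command sketch of this procedure, assuming the pool object is named POOL1 and assuming that the makemodel proc (the command form of Proc/Make Model) and the model solve proc are available:

' create a model from the estimated pool and solve it over the current sample
pool1.makemodel(poolmod)
poolmod.solve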
Estimation Background
The basic class of models that can be estimated using a pool object may be written as:
$$Y_{it} = \alpha + X_{it}'\beta_{it} + \delta_i + \gamma_t + \epsilon_{it} \qquad (41.2)$$
where $Y_{it}$ is the dependent variable, $X_{it}$ is a $k$-vector of regressors, and $\epsilon_{it}$ are the error terms for $i = 1, 2, \ldots, M$ cross-sectional units observed for dated periods $t = 1, 2, \ldots, T$. The $\alpha$ parameter represents the overall constant in the model, while the $\delta_i$ and $\gamma_t$ represent cross-section or period specific effects (random or fixed). Identification obviously requires that the $\beta_{it}$ coefficients have restrictions placed upon them. They may be divided into sets of common (across cross-sections and periods), cross-section specific, and period specific regressor parameters.
While most of our discussion will be in terms of a balanced sample, EViews does not require
that your data be balanced; missing values may be used to represent observations that are
not available for analysis in a given period. We will detail the unbalanced case only where
deemed necessary.
We may view these data as a set of cross-section specific regressions so that we have M
cross-sectional equations each with T observations stacked on top of one another:
$$Y_i = \alpha l_T + X_i \beta_i + \delta_i l_T + I_T \gamma + \epsilon_i \qquad (41.3)$$
for each cross-section $i$, or as a set of period specific equations,
$$Y_t = \alpha l_M + X_t \beta_t + I_M \delta + \gamma_t l_M + \epsilon_t \qquad (41.4)$$
for each period $t$. Stacking the full set of observations yields
$$Y = \alpha l_{MT} + X\beta + (I_M \otimes l_T)\delta + (l_M \otimes I_T)\gamma + \epsilon \qquad (41.5)$$
where the matrices $\beta$ and $X$ are set up to impose any restrictions on the data and parameters between cross-sectional units and periods, and where the general form of the unconditional error covariance matrix is given by:
$$\Omega = E(\epsilon\epsilon') = E\begin{pmatrix} \epsilon_1\epsilon_1' & \epsilon_1\epsilon_2' & \cdots & \epsilon_1\epsilon_M' \\ \epsilon_2\epsilon_1' & \epsilon_2\epsilon_2' & & \vdots \\ \vdots & & \ddots & \\ \epsilon_M\epsilon_1' & \cdots & & \epsilon_M\epsilon_M' \end{pmatrix} \qquad (41.6)$$
If instead we treat the specification as a set of period specific equations, the stacked (by
period) representation is given by,
$$Y = \alpha l_{MT} + X\beta + (l_M \otimes I_T)\delta + (I_M \otimes l_T)\gamma + \epsilon \qquad (41.7)$$
with error covariance
$$\Omega = E(\epsilon\epsilon') = E\begin{pmatrix} \epsilon_1\epsilon_1' & \epsilon_1\epsilon_2' & \cdots & \epsilon_1\epsilon_T' \\ \epsilon_2\epsilon_1' & \epsilon_2\epsilon_2' & & \vdots \\ \vdots & & \ddots & \\ \epsilon_T\epsilon_1' & \cdots & & \epsilon_T\epsilon_T' \end{pmatrix} \qquad (41.8)$$
The remainder of this section describes briefly the various components that you may employ
in an EViews pool specification.
The coefficients may be common across all observations, cross-section specific, or period specific. Before turning to the general specification, we consider three extreme
cases.
First, if all of the $\beta_{it}$ are common across cross-sections and periods, we may simplify the expression for Equation (41.2) to:
$$Y_{it} = \alpha + X_{it}'\beta + \delta_i + \gamma_t + \epsilon_{it} \qquad (41.9)$$
Second, if all of the coefficients are cross-section specific, we have:
$$Y_{it} = \alpha + X_{it}'\beta_i + \delta_i + \gamma_t + \epsilon_{it} \qquad (41.10)$$
Lastly, if all of the coefficients are period specific:
$$Y_{it} = \alpha + X_{it}'\beta_t + \delta_i + \gamma_t + \epsilon_{it} \qquad (41.11)$$
More generally, a specification may combine common, cross-section specific, and period specific regressors and coefficients (Equation (41.12)).
If there are $k_1$ common regressors, $k_2$ cross-section specific regressors, and $k_3$ period specific regressors, there are a total of $k_0 = k_1 + k_2 M + k_3 T$ regressors in $\beta$.
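For instance, with $M = 10$ cross-sections and $T = 20$ periods, as in the example above, a specification with one common regressor, one cross-section specific regressor, and one period specific regressor implies
$$k_0 = 1 + 1 \cdot 10 + 1 \cdot 20 = 31$$
slope coefficients in $\beta$.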
EViews estimates these models by internally creating interaction variables, M for each
regressor in the cross-section regressor list and T for each regressor in the period-specific
list, and using them in the regression. Note that estimating models with cross-section or
period specific coefficients may lead to the generation of a large number of implicit interaction variables, and may be computationally intensive, or lead to singularities in estimation.
AR Specifications
EViews provides convenient tools for estimating pool specifications that include AR terms.
Consider a restricted version of Equation (41.2) on page 793 that does not admit period specific regressors or effects,
$$Y_{it} = \alpha + X_{it}'\beta_i + \delta_i + \epsilon_{it} \qquad (41.13)$$
where the cross-section effect $\delta_i$ is either not present, or is specified as a fixed effect. We then allow the residuals to follow a general AR process:
$$\epsilon_{it} = \sum_{r=1}^{p} \rho_{ri}\,\epsilon_{it-r} + \eta_{it} \qquad (41.14)$$
for all $i$, where the innovations $\eta_{it}$ are independent and identically distributed, assuming further that there is no unit root. Note that we allow the autocorrelation coefficients $\rho$ to be cross-section, but not period, specific.
If, for example, we assume that $\epsilon_{it}$ follows an AR(1) process with cross-section specific AR coefficients, EViews will estimate the transformed equation:
$$Y_{it} = \rho_{1i} Y_{it-1} + \alpha(1-\rho_{1i}) + (X_{it} - \rho_{1i} X_{it-1})'\beta_i + \delta_i(1-\rho_{1i}) + \eta_{it} \qquad (41.15)$$
Fixed Effects
The fixed effects portions of specifications are handled using orthogonal projections. In the
simple one-way fixed effect specifications and the balanced two-way fixed specification,
these projections involve the familiar approach of removing cross-section or period specific
means from the dependent variable and exogenous regressors, and then performing the
specified regression using the demeaned data (see, for example Baltagi, 2005). More generally, we apply the results from Davis (2002) for estimating multi-way error components
models with unbalanced data.
Note that if instrumental variables estimation is specified with fixed effects, EViews will
automatically add to the instrument list the constants implied by the fixed effects so that
the orthogonal projection is also applied to the instrument list.
Random Effects
The random effects specifications assume that the corresponding effects $\delta_i$ and $\gamma_t$ are realizations of independent random variables with mean zero and finite variance. Most importantly, the random effects specification assumes that the effect is uncorrelated with the idiosyncratic residual $\epsilon_{it}$.
EViews handles the random effects models using feasible GLS techniques. The first step, estimation of the covariance matrix for the composite error formed by the effects and the residual (e.g., $\nu_{it} = \delta_i + \gamma_t + \epsilon_{it}$ in the two-way random effects specification), uses one of the quadratic unbiased estimators (QUE) from Swamy-Arora, Wallace-Hussain, or Wansbeek-Kapteyn. Briefly, the three QUE methods use the expected values from quadratic forms in one or more sets of first-stage estimated residuals to compute moment estimates of the component variances $(\sigma_\delta^2, \sigma_\gamma^2, \sigma_\epsilon^2)$. The methods differ only in the specifications estimated in evaluating the residuals, and the resulting forms of the moment equations and estimators.
The Swamy-Arora estimator of the component variances, cited most often in textbooks, uses
residuals from the within (fixed effect) and between (means) regressions. In contrast, the
Wansbeek and Kapteyn estimator uses only residuals from the fixed effect (within) estimator, while the Wallace-Hussain estimator uses only OLS residuals. In general, the three
should provide similar answers, especially in large samples. The Swamy-Arora estimator
requires the calculation of an additional model, but has slightly simpler expressions for the
component variance estimates. The remaining two may prove easier to estimate in some settings.
Additional details on random effects models are provided in Baltagi (2005), Baltagi and
Chang (1994), Wansbeek and Kapteyn (1989). Note that your component estimates may differ slightly from those obtained from other sources since EViews always uses the more complicated unbiased estimators involving traces of matrices that depend on the data (see
Baltagi (2005) for discussion, especially Note 3 on p. 28).
Once the component variances have been estimated, we form an estimator of the composite
residual covariance, and then GLS transform the dependent and regressor data.
If instrumental variables estimation is specified with random effects, EViews will GLS transform both the data and the instruments prior to estimation. This approach to random effects
estimation has been termed generalized two-stage least squares (G2SLS). See Baltagi (2005,
p. 113-116) and Random Effects and GLS on page 801 for additional discussion.
Note that all of the GLS specifications described below may be estimated in one-step form,
where we estimate coefficients, compute a GLS weighting transformation, and then re-estimate on the weighted data, or in iterative form, where we repeat this process until the coefficients and weights converge.
Cross-section Heteroskedasticity
Cross-section heteroskedasticity allows for a different residual variance for each cross section. Residuals between different cross-sections and different periods are assumed to be 0.
Thus, we assume that:
$$E(\epsilon_{it}\epsilon_{it} \mid X_i^*) = \sigma_i^2$$
$$E(\epsilon_{is}\epsilon_{jt} \mid X_i^*) = 0 \qquad (41.16)$$
for all $i$, $j$, $s$ and $t$ with $i \neq j$ or $s \neq t$, where $X_i^*$ contains $X_i$ and, if estimated by fixed effects, the relevant cross-section or period effects. Using the cross-section specific residual vectors, we may rewrite the first assumption as:
$$E(\epsilon_i\epsilon_i' \mid X_i^*) = \sigma_i^2 I_T \qquad (41.17)$$
Period Heteroskedasticity
Exactly analogous to the cross-section case, period specific heteroskedasticity allows for a
different residual variance for each period. Residuals between different cross-sections and
different periods are still assumed to be 0 so that:
$$E(\epsilon_{it}\epsilon_{it} \mid X_t^*) = \sigma_t^2$$
$$E(\epsilon_{is}\epsilon_{jt} \mid X_t^*) = 0 \qquad (41.18)$$
for all $i$, $j$, $s$ and $t$ with $i \neq j$ or $s \neq t$, where $X_t^*$ contains $X_t$ and, if estimated by fixed effects, the relevant cross-section or period effects ($\delta_i$, $\gamma_t$).
Using the period specific residual vectors, we may rewrite the first assumption as:
$$E(\epsilon_t\epsilon_t' \mid X_t^*) = \sigma_t^2 I_M \qquad (41.19)$$
We perform preliminary estimation to obtain period specific residual vectors, then we use
these residuals to form estimates of the period variances, reweight the data, and then form
the feasible GLS estimates.
This specification allows for conditional correlation between the contemporaneous residuals for cross-sections $i$ and $j$, while restricting residuals in different periods to be uncorrelated. Specifically, we assume that:
$$E(\epsilon_{it}\epsilon_{jt} \mid X_t^*) = \sigma_{ij}$$
$$E(\epsilon_{is}\epsilon_{jt} \mid X_t^*) = 0 \qquad (41.20)$$
for all $i$, $j$, $s$ and $t$ with $s \neq t$. The errors may be thought of as cross-sectionally correlated. Alternately, this error structure is sometimes referred to as clustered by period since observations for a given period are correlated (form a cluster). Note that in this specification the contemporaneous covariances do not vary over $t$.
Using the period specific residual vectors, we may rewrite this assumption as,
$$E(\epsilon_t\epsilon_t' \mid X_t^*) = \Omega_M \qquad (41.21)$$
where
$$\Omega_M = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1M} \\ \sigma_{12} & \sigma_{22} & & \vdots \\ \vdots & & \ddots & \\ \sigma_{M1} & \cdots & & \sigma_{MM} \end{pmatrix} \qquad (41.22)$$
We term this a Cross-section SUR specification since it involves covariances across cross-sections as in a seemingly unrelated regressions type framework (where each equation corresponds to a cross-section).
Cross-section SUR generalized least squares on this specification (sometimes referred to as
the Parks estimator) is simply the feasible GLS estimator for systems where the residuals are
both cross-sectionally heteroskedastic and contemporaneously correlated. We employ residuals from first stage estimates to form an estimate of $\hat{\Omega}_M$. In the second stage, we perform
feasible GLS.
Bear in mind that there are potential pitfalls associated with the SUR/Parks estimation (see
Beck and Katz (1995)). For one, EViews may be unable to compute estimates for this model
when the dimension of the relevant covariance matrix is large and there are a small number of observations available from which to obtain covariance estimates. For example, if we have a cross-section SUR specification with large numbers of cross-sections and a small number of time periods, it is quite likely that the estimated residual correlation matrix will be singular so that feasible GLS is not possible.
It is worth noting that an attractive alternative to the SUR methodology estimates the model without a GLS correction, then corrects the coefficient estimate covariances to account for the contemporaneous correlation (the panel corrected standard error, or PCSE, approach described in Robust Coefficient Covariances on page 803).
Analogously, we may allow the residuals within a given cross-section to be heteroskedastic and serially correlated, while restricting residuals in different cross-sections to be uncorrelated:
$$E(\epsilon_{is}\epsilon_{it} \mid X_i^*) = \sigma_{st}$$
$$E(\epsilon_{is}\epsilon_{jt} \mid X_i^*) = 0 \qquad (41.23)$$
for all $i$, $j$, $s$ and $t$ with $i \neq j$. Note that in this specification the heteroskedasticity and serial correlation do not vary across cross-sections $i$.
Using the cross-section specific residual vectors, we may rewrite this assumption as,
$$E(\epsilon_i\epsilon_i' \mid X_i^*) = \Omega_T \qquad (41.24)$$
where
$$\Omega_T = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1T} \\ \sigma_{12} & \sigma_{22} & & \vdots \\ \vdots & & \ddots & \\ \sigma_{T1} & \cdots & & \sigma_{TT} \end{pmatrix} \qquad (41.25)$$
We term this a Period SUR specification since it involves covariances across periods within a
given cross-section, as in a seemingly unrelated regressions framework with period specific
equations. In estimating a specification with Period SUR, we employ residuals obtained from
first stage estimates to form an estimate of $\hat{\Omega}_T$. In the second stage, we perform feasible
GLS.
See Contemporaneous Covariances (Cross-section SUR) on page 799 for related discussion
of errors clustered-by-period.
Instrumental Variables
All of the pool specifications may be estimated using instrumental variables techniques. In
general, the computation of the instrumental variables estimator is a straightforward extension of the standard OLS estimator. For example, in the simplest model, the OLS estimator
may be written as:
$$\hat{\beta}_{OLS} = \left(\sum_i X_i' X_i\right)^{-1}\left(\sum_i X_i' Y_i\right) \qquad (41.26)$$
while the corresponding instrumental variables estimator is given by:
$$\hat{\beta}_{IV} = \left(\sum_i X_i' P_{Z_i} X_i\right)^{-1}\left(\sum_i X_i' P_{Z_i} Y_i\right) \qquad (41.27)$$
where $P_{Z_i} = Z_i(Z_i'Z_i)^{-1}Z_i'$ is the projection onto the instruments $Z_i$.
Fixed Effects
If instrumental variables estimation is specified with fixed effects, EViews will automatically
add to the instrument list any constants implied by the fixed effects so that the orthogonal
projection is also applied to the instrument list. Thus, if Q is the fixed effects transformation operator, we have:
$$\hat{\beta}_{OLS} = \left(\sum_i X_i' Q X_i\right)^{-1}\left(\sum_i X_i' Q Y_i\right)$$
$$\hat{\beta}_{IV} = \left(\sum_i X_i' Q P_{\tilde{Z}_i} Q X_i\right)^{-1}\left(\sum_i X_i' Q P_{\tilde{Z}_i} Q Y_i\right) \qquad (41.28)$$
where $\tilde{Z}_i = Q Z_i$.
Random Effects and GLS
Similarly, for random effects and other GLS estimators, EViews applies the weighting to the
instruments as well as the dependent variable and regressors in the model. For example,
with data estimated using cross-sectional GLS, we have:
$$\hat{\beta}_{GLS} = \left(\sum_i X_i'\hat{\Omega}_M^{-1} X_i\right)^{-1}\left(\sum_i X_i'\hat{\Omega}_M^{-1} Y_i\right)$$
while the corresponding GLS instrumental variables estimator is:
$$\hat{\beta}_{GIV} = \left(\sum_i X_i'\hat{\Omega}_M^{-1/2} P_{\tilde{Z}_i}\hat{\Omega}_M^{-1/2} X_i\right)^{-1}\left(\sum_i X_i'\hat{\Omega}_M^{-1/2} P_{\tilde{Z}_i}\hat{\Omega}_M^{-1/2} Y_i\right) \qquad (41.29)$$
where $\tilde{Z}_i = \hat{\Omega}_M^{-1/2} Z_i$.
In the context of random effects specifications, this approach to IV estimation is termed the generalized two-stage least squares (G2SLS) method (see Baltagi (2005, p. 113-116) for references and discussion). Note that in implementing the various random effects methods
(Swamy-Arora, Wallace-Hussain, Wansbeek-Kapteyn), we have extended the existing results
to derive the unbiased variance components estimators in the case of instrumental variables
estimation.
More generally, the approach may simply be viewed as a special case of the Generalized
Instrumental Variables (GIV) approach in which the data and the instruments are both transformed using the estimated covariances. You should be aware that this approach has the
effect of altering the implied orthogonality conditions. See Wooldridge (2002) for discussion
and comparison with a three-stage least squares approach in which the instruments are not
transformed. See GMM Details on page 881 for an alternative approach.
AR Specifications
EViews estimates AR specifications by transforming the data to a nonlinear least squares
specification, and jointly estimating the original and the AR coefficients.
This transformation approach raises questions as to what instruments to use in estimation.
By default, EViews adds instruments corresponding to the lagged endogenous and lagged
exogenous variables introduced into the specification by the transformation.
For example, in an AR(1) specification, we have the original specification,
$$Y_{it} = \alpha + X_{it}'\beta_i + \delta_i + \epsilon_{it} \qquad (41.30)$$
and the transformed equation:
$$Y_{it} = \rho_{1i} Y_{it-1} + \alpha(1-\rho_{1i}) + (X_{it} - \rho_{1i} X_{it-1})'\beta_i + \delta_i(1-\rho_{1i}) + \eta_{it} \qquad (41.31)$$
where $Y_{it-1}$ and $X_{it-1}$ are introduced by the transformation. EViews will, by default, add
these to the previously specified list of instruments $Z_{it}$.
You may, however, instruct EViews not to add these additional instruments. Note, however,
that the order condition for the transformed model is different than the order condition for
the untransformed specification since we have introduced additional coefficients corresponding to the AR coefficients. If you elect not to add the additional instruments automati-
cally, you should make certain that you have enough instruments to account for the
additional terms.
Robust Coefficient Covariances
The White cross-section method assumes that the errors are contemporaneously (cross-sectionally) correlated, and computes the coefficient covariance estimates as:
$$\frac{N}{N-K}\left(\sum_t X_t' X_t\right)^{-1}\left(\sum_t X_t'\hat{\epsilon}_t\hat{\epsilon}_t' X_t\right)\left(\sum_t X_t' X_t\right)^{-1} \qquad (41.32)$$
where the leading term is a degrees of freedom adjustment depending on the total number
of observations in the stacked data, N is the total number of stacked observations and K
is the total number of estimated parameters.
This estimator is robust to cross-equation (contemporaneous) correlation and heteroskedasticity. Specifically, the unconditional contemporaneous variance matrix $E(\epsilon_t\epsilon_t') = \Omega_{Mt}$ is unrestricted and may now vary with $t$, with conditional variance matrix $E(\epsilon_t\epsilon_t' \mid X_t^*)$ that may depend on $X_t^*$ in arbitrary, unknown fashion. See Wooldridge (2002, p. 148-153) and Arellano (1987).
Alternatively, the White period method assumes that the errors for a cross-section are heteroskedastic and serially correlated (cross-section clustered). The coefficient covariances are
calculated using a White cross-section clustered estimator:
$$\frac{N}{N-K}\left(\sum_i X_i' X_i\right)^{-1}\left(\sum_i X_i'\hat{\epsilon}_i\hat{\epsilon}_i' X_i\right)\left(\sum_i X_i' X_i\right)^{-1} \qquad (41.33)$$
where, in contrast to Equation (41.32), the summations are taken over individuals and individual stacked data instead of periods.
The estimator is designed to accommodate arbitrary heteroskedasticity and within cross-section serial correlation. The corresponding multivariate regression (with an equation for each
period) allows the unconditional variance matrix $E(\epsilon_i\epsilon_i') = \Omega_{Ti}$ to be unrestricted and varying with $i$, with conditional variance matrix $E(\epsilon_i\epsilon_i' \mid X_i^*)$ depending on $X_i^*$ in general fashion.
The White diagonal method is robust to observation specific heteroskedasticity:
$$\frac{N}{N-K}\left(\sum_{i,t} X_{it} X_{it}'\right)^{-1}\left(\sum_{i,t}\hat{\epsilon}_{it}^2\, X_{it} X_{it}'\right)\left(\sum_{i,t} X_{it} X_{it}'\right)^{-1} \qquad (41.34)$$
The Cross-section SUR (PCSE) method replaces the outer product of the period residuals in Equation (41.32) with an estimate of the contemporaneous covariance $\hat{\Omega}_M$:
$$\frac{N}{N-K}\left(\sum_t X_t' X_t\right)^{-1}\left(\sum_t X_t'\hat{\Omega}_M X_t\right)\left(\sum_t X_t' X_t\right)^{-1} \qquad (41.35)$$
Analogously, the Period SUR (PCSE) handles between period correlation (cross-section clustering) by replacing the outer product of the period residuals in Equation (41.33) with an estimate of the period covariance $\hat{\Omega}_T$:
$$\frac{N}{N-K}\left(\sum_i X_i' X_i\right)^{-1}\left(\sum_i X_i'\hat{\Omega}_T X_i\right)\left(\sum_i X_i' X_i\right)^{-1} \qquad (41.36)$$
The two diagonal forms of these estimators, Cross-section weights (PCSE) and Period weights (PCSE), use only the diagonal elements of the relevant $\hat{\Omega}_M$ and $\hat{\Omega}_T$. These covariance estimators are robust to heteroskedasticity across cross-sections or periods, respectively, but not to general correlation of residuals.
The non degree-of-freedom corrected versions of these estimators remove the leading term
involving the number of observations and number of coefficients.
References
Arellano, M. (1987). Computing Robust Standard Errors for Within-groups Estimators, Oxford Bulletin of Economics and Statistics, 49, 431–434.
Baltagi, Badi H. (2005). Econometric Analysis of Panel Data, Third Edition, West Sussex, England: John Wiley & Sons.
Baltagi, Badi H. and Young-Jae Chang (1994). Incomplete Panels: A Comparative Study of Alternative Estimators for the Unbalanced One-way Error Component Regression Model, Journal of Econometrics, 62, 67–89.
Beck, Nathaniel and Jonathan N. Katz (1995). What to Do (and Not to Do) With Time-series Cross-section Data, American Political Science Review, 89(3), 634–647.
Breitung, Jörg (2000). The Local Power of Some Unit Root Tests for Panel Data, in B. Baltagi (ed.), Advances in Econometrics, Vol. 15: Nonstationary Panels, Panel Cointegration, and Dynamic Panels, Amsterdam: JAI Press, p. 161–178.
Choi, I. (2001). Unit Root Tests for Panel Data, Journal of International Money and Finance, 20, 249–272.
Davis, Peter (2002). Estimating Multi-way Error Components Models with Unbalanced Data Structures, Journal of Econometrics, 106, 67–95.
Fisher, R. A. (1932). Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.
Grunfeld, Yehuda (1958). The Determinants of Corporate Investment, Unpublished Ph.D. Thesis, Department of Economics, University of Chicago.
Hadri, Kaddour (2000). Testing for Stationarity in Heterogeneous Panel Data, Econometric Journal, 3, 148–161.
Im, K. S., M. H. Pesaran, and Y. Shin (2003). Testing for Unit Roots in Heterogeneous Panels, Journal of Econometrics, 115, 53–74.
Kao, C. (1999). Spurious Regression and Residual-Based Tests for Cointegration in Panel Data, Journal of Econometrics, 90, 1–44.
Levin, A., C. F. Lin, and C. Chu (2002). Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties, Journal of Econometrics, 108, 1–24.
Maddala, G. S. and S. Wu (1999). A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test, Oxford Bulletin of Economics and Statistics, 61, 631–652.
Pedroni, P. (1999). Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors, Oxford Bulletin of Economics and Statistics, 61, 653–670.
Pedroni, P. (2004). Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis, Econometric Theory, 20, 597–625.
Wansbeek, Tom, and Arie Kapteyn (1989). Estimation of the Error Components Model with Incomplete Panels, Journal of Econometrics, 41, 341–361.
Wooldridge, Jeffrey M. (2002). Econometric Analysis of Cross Section and Panel Data, Cambridge, MA: The MIT Press.
You may, at any time, click on the Range display line or select Proc/Structure/Resize Current Page... to bring up the Workfile Structure dialog so that you may modify or remove
your panel structure.
Observation Labels
The left-hand side of every workfile contains observation labels that identify each observation. In a simple unstructured workfile, these labels are simply the integers from 1 to the
total number of observations in the workfile. For dated, non-panel workfiles, these labels are
representations of the unique dates associated with each observation. For example, in an
annual workfile ranging from 1935 to 1950, the observation labels are of the form 1935,
1936, etc.
The observation labels in a panel workfile must reflect the fact that observations possess
both cross-section and within-cross-section identifiers. Accordingly, EViews will form observation identifiers using both the cross-section and the cell ID values.
Here, we see the observation labels in an
annual panel workfile formed using the
cross-section identifiers and a two-digit
year identifier.
Workfile Structure
First, the workfile statistics view provides a convenient place for you to examine the structure of your panel workfile. Simply click on View/Statistics from the main menu to display
a summary of the structure and contents of your workfile.
Workfile Statistics
Date: 06/17/07   Time: 16:04
Name: GRUNFELD_BALTAGI_PANEL
Number of pages: 1

Page: Untitled
Workfile structure: Panel - Annual
Indices: FN x DATEID
Panel dimension: 10 x 20
Range: 1935 1954 x 10 -- 200 obs

Object          Count     Data Points
series          7         1400
coef            1         751
Total           8         2151
The top portion of the display for our first example workfile is depicted above. The statistics view identifies the page as an annual panel workfile that is structured using the identifiers FN and DATEID. There are 10 cross-sections with 20 observations each, for years ranging from
1935 to 1954. For unbalanced data, the number of observations per cross-section reported
will be the largest number observed across the cross-sections.
To return the display to the original workfile directory, select View/Workfile Directory from
the main workfile menu.
Identifier Indices
EViews provides series expressions and functions that provide information about the cross-section, cell, and observation IDs associated with each observation in a panel workfile.
Cross-section Index
The series expression @crossid provides index identifiers for each observation corresponding to the cross-section to which the observation belongs. If, for example, there are 8 observations with cross-section identifier alpha series values (in order), B, A, A, A, B,
A, A, and B, the command:
series cxid = @crossid
assigns a group identifier value of 1 or 2 to each observation in the workfile. Since the panel
workfile is sorted by the cross-section ID values, observations with the identifier value A
will be assigned a CXID value of 1, while B will be assigned 2.
A one-way tabulation of the CXID series shows the number of observations in each crosssection or group:
Tabulation of CXID
Date: 02/04/04 Time: 09:08
Sample: 1 8
Included observations: 8
Number of categories: 2
                           Cumulative   Cumulative
Value     Count   Percent     Count      Percent
1           5      62.50        5         62.50
2           3      37.50        8        100.00
Total       8     100.00        8        100.00
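A tabulation like the one above may also be produced from the command line, using the one-way tabulation (freq) view of the series; the table object name here is arbitrary:

' capture the one-way tabulation of CXID in a table
freeze(tab_cxid) cxid.freq
show tab_cxid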
Cell Index
Similarly, @cellid may be used to obtain integers uniquely indexing cell IDs. @cellid
numbers observations using an index corresponding to the ordered unique values of the cell
or date ID values. Note that since the indexing uses all unique values of the cell or date ID
series, the observations within a cross-section may be indexed non-sequentially.
Suppose, for example, we have a panel workfile with two cross-sections. There are 5 observations in the cross-section A with cell ID values 1991, 1992, 1993, 1994, and
1999, and 3 observations in the cross-section B with cell ID values 1993, 1996,
1998. There are 7 unique cell ID values (1991, 1992, 1993, 1994, 1996, 1998,
1999) in the workfile.
The series assignment
series cellid = @cellid
will assign to the A observations in CELLID the values 1, 2, 3, 4, and 7, and to the B observations the values 3, 5, and 6.
A one-way tabulation of the CELLID series provides you with information about the number
of observations with each index value:
Tabulation of CELLID
Date: 02/04/04 Time: 09:11
Sample: 1 8
Included observations: 8
Number of categories: 7
                           Cumulative   Cumulative
Value     Count   Percent     Count      Percent
1           1      12.50        1         12.50
2           1      12.50        2         25.00
3           2      25.00        4         50.00
4           1      12.50        5         62.50
5           1      12.50        6         75.00
6           1      12.50        7         87.50
7           1      12.50        8        100.00
Total       8     100.00        8        100.00
Likewise, the @obsid function indexes the observations within each cross-section in the order in which they appear, beginning at one. In the example above, it would number the 5 observations in cross-section A from 1 through 5, and the 3 observations in group B from 1 through 3.
Bear in mind that while @cellid uses information about all of the ID values in creating its
index, @obsid only uses the ordered observations within a cross-section in forming the
index. As a result, the only similarity between observations that share an @obsid value is
their ordering within the cross-section. In contrast, observations that share a @cellid value
also share values for the underlying cell ID.
It is worth noting that if a panel workfile is balanced so that each cross-section has the same cell ID values,
@obsid and @cellid yield identical
results.
The @obsnum keyword allows you to number the observations in the workfile in sequential
order from 1 to the total number of observations.
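For example, the assignment (the series name is arbitrary):

series obs_number = @obsnum

creates a series containing 1 for the first observation in the workfile, 2 for the second, and so on, irrespective of the cross-section to which each observation belongs.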
Similarly, you may use lags to obtain the name of the previous child in household cross-sections. The command:
alpha older = childname(-1)
assigns to the alpha series OLDER the name of the preceding observation. Note that since
lags never cross over cross-section boundaries, the first value of OLDER in a household will
be missing.
Panel Samples
The description of the current workfile sample in the workfile window provides an obvious
indication that samples for dated and undated workfiles are specified in different ways.
If, for example, you change the sample so that it begins in 1940, EViews will exclude all observations that are dated from 1935 through 1939. We see that the
new sample has eliminated observations for those dates from each cross-section.
As in non-panel workfiles, you may combine the date specification with additional if conditions to exclude additional observations. For example:
smpl 1940 1945 1950 1954 if i>50
uses any panel observations that are dated from 1940 to 1945 or 1950 to 1954 that have values of the series I that are greater than 50.
Additionally, you may use special keywords to refer to the first and last observations for
cross-sections. For dated panels, the sample keywords @first and @last refer to the set of
first and last observations for each cross-section. For example, you may specify the sample:
smpl @first 2000
to use data from the first observation in each cross-section and observations up through the
end of the year 2000. Likewise, the two sample statements:
smpl @first @first+5
smpl @last-5 @last
use (at most) the first five and the last five observations in each cross-section, respectively.
Note that the included observations for each cross-section may begin at a different date, and
that:
smpl @all
smpl @first @last
are equivalent.
The sample statement keywords @firstmin and @lastmax are used to refer to the earliest
of the start and latest of the end dates observed over all cross-sections, so that the sample:
smpl @firstmin @firstmin+20
sets the start date to the earliest observed date, and includes the next 20 observations in
each cross-section. The command:
smpl @lastmax-20 @lastmax
includes the last observed date, and the previous 20 observations in each cross-section.
Similarly, you may use the keywords @firstmax and @lastmin to refer to the latest of the
cross-section start dates, and earliest of the end dates. For example, with regular annual data
that begin and end at different dates, you may balance the starts and ends of your data using
the statement:
smpl @firstmax @lastmin
which sets the sample to begin at the latest observed start date, and to end at the earliest
observed end date.
The special keywords are perhaps most usefully combined with observation offsets. By adding plus and minus terms to the keywords, you may adjust the sample by dropping or adding observations within each cross-section. For example, to drop the first observation from
each cross-section, you may use the sample statement:
smpl @first+1 @last
The following commands generate a series containing cumulative sums of the series X for
each cross-section:
smpl @first @first
series xsum = x
smpl @first+1 @last
xsum = xsum(-1) + x
The first two commands initialize the cumulative sum for the first observation in each crosssection. The last two commands accumulate the sum of values of X over the remaining
observations.
Similarly, if you wish to estimate your equation on a subsample of data and then perform
cross-validation on the last 20 observations in each cross-section, you may use the sample
defined by,
smpl @first @last-20
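The held-out observations may then be selected with the complementary statement; note that @last-19 through @last spans 20 observations in each cross-section:

smpl @last-19 @last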
Note that the processing of sample offsets for each cross-section follows the same rules as
for non-panel workfiles Sample Offsets on page 131 of Users Guide I.
to drop the first 9 and the last 6 observations in the workfile from the current sample.
One consequence of the use of observation pairs in undated panels is that the keywords
@first, @firstmin, and @firstmax all refer to observation 1, and @last, @lastmin, and
@lastmax, refer to the last observation in the workfile. Thus, in our example, the command:
smpl @first+9 @lastmax-6
will also drop the first 9 and the last 6 observations in the workfile from the current sample.
Undated panel sample restrictions of this form are not particularly interesting since they
require detailed knowledge of the pattern of observation numbers across those cross-sections. Accordingly, most sample statements in undated workfiles will employ IF conditions in place of range pairs.
For example, the sample statement,
smpl if townid<>10 and lstat >-.3
selects all observations with TOWNID values not equal to 10, and LSTAT values greater than -0.3.
You may combine the sample IF conditions with the special functions that return information about the observations in the panel. For example, we may use the @obsid workfile
function to identify each observation in a cross-section, so that:
smpl if @obsid>1
uses all observations other than the first observation in each cross-section.
The @maxsby function returns the number of non-NA observations for each TOWNID value.
Note that we employ the @ALL sample to ensure that we compute the @maxsby over the
entire workfile sample.
Panel Spreadsheets
When looking at the spreadsheet view of a series in a panel workfile, the default view will be to show the stacked form of the series: each cross-section's data will be below the previous cross-section's data.
You may change this by clicking on the Wide +/- button (you will almost certainly need to
widen the window to see the button as it is far to the right of the more commonly used buttons). The first time you click the button, EViews will change the display of the series such
that each row of the spreadsheet contains data for a specific date, and each column contains
data for a cross-section.
Clicking the Wide +/- button a second time transposes this so cross-sections are now
shown per row, and dates per column.
A third click of the button takes the view back to the original stacked form.
Trends
EViews provides several functions that may be used to construct a time trend in your panel
structured workfile. A trend in a panel workfile has the property that the values are initialized at the start of a cross-section, increase for successive observations in the specific cross-section, and are reset at the start of the next cross-section.
You may use the following to construct your time trend:
The @obsid function may be used to return the simplest notion of a trend in which
the values for each cross-section begin at one and increase by one for successive
observations in the cross-section.
The @trendc function computes trends in which values for observations with the earliest observed date are normalized to zero, and values for successive observations are
incremented based on the calendar associated with the workfile frequency.
The @cellid and @trend functions return time trends in which the values increase
based on a calendar defined by the observed dates in the workfile.
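For example, the following assignments (series names arbitrary) construct the different trend variants described above:

series trend1 = @obsid
series trend2 = @trendc
series trend3 = @trend

TREND1 starts at one within each cross-section, while TREND2 and TREND3 are based on the observed dates as described above.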
See also Panel Workfile Functions on page 597 and Panel Trend Functions on page 598
of the Command and Programming Reference for discussion.
By-Group Statistics
The by-group statistical functions (By-Group Statistics on page 563 of the Command
and Programming Reference) may be used to compute the value of a statistic for observations in a subgroup, and to assign the computed value to individual observations.
While not strictly panel functions, these tools deserve a place in the current discussion since
they are well suited for working with panel data. To use the by-group statistical functions in
a panel context, you need only specify the group ID series as the classifier series in the function.
Suppose, for example, that we have the undated panel structured workfile with the group ID
series TOWNID, and that you wish to assign to each observation in the workfile the mean
value of LSTAT in the corresponding town. You may perform the series assignment using the
command,
series meanlstat = @meansby(lstat, townid, "@all")
or equivalently,
series meanlstat = @meansby(lstat, @crossid, "@all")
to assign the desired values. EViews will compute the mean value of LSTAT for observations
with each TOWNID (or equivalently @crossid, since the workfile is structured using TOWNID) value, and will match merge these values to the corresponding observations.
Likewise, we may use the by-group statistics functions to compute the variance of LSTAT or
the number of non-NA values for LSTAT for each subgroup using the assignment statements:
series varlstat = @varsby(lstat, townid, "@all")
series nalstat = @nasby(lstat, @crossid, "@all")
To compute the statistic over subsamples of the workfile data, simply include a sample
string or object as an argument to the by-group statistic, or set the workfile sample prior to
issuing the command. For example, the combination of commands:
smpl @all if zn=0
series meanlstat1 = @meansby(lstat, @cellid)
is equivalent to:
smpl @all
series meanlstat2 = @meansby(lstat, @cellid, "@all if zn=0")
In the former example, the by-group function uses the workfile sample to compute the statistic for each cell ID value, while in the latter, the optional argument explicitly overrides the
workfile sample.
One important application of by-group statistics is to compute the within deviations for a
series by subtracting off panel group means or medians. The following lines:
smpl @all
series withinlstat1 = lstat - @meansby(lstat, townid)
series withinlstat2 = lstat - @mediansby(lstat, townid)
compute deviations from the TOWNID specific means and medians. In this example, we
omit the optional sample argument from the by-group statistics functions since the workfile
sample is previously set to use all observations.
Combined with standard EViews tools, the by-group statistics allow you to perform quite
complex calculations with little effort. For example, the panel within standard deviation
for LSTAT may be computed from the commands:
series temp = lstat - @meansby(lstat, townid, "@all")
scalar within_std = @stdev(temp)
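The between standard deviation may be computed in a similar fashion; a brief sketch, using the MEANLSTAT series created earlier and the @obsid function described above:

smpl @all if @obsid = 1
scalar between_std = @stdev(meanlstat)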
The first line sets the sample to the first observation in each cross-section. The second line
calculates the standard deviation of the group means using the single cross-sectional observations. Note that the group means are calculated over the entire sample. An alternative
approach to performing this calculation is described in the next section.
Viewing Summaries
The easiest way to compute by-group statistics is to use the standard by-group statistics
view of a series. Simply open the series window for the series of interest and select View/
Descriptive Statistics & Tests/Stats by Classification... to open the Statistics by Classification dialog.
            Mean       Std. Dev.     Obs.
1         4333.845     904.3048       20
2         1971.825     301.0879       20
3         1941.325     413.8433       20
4         693.2100     160.5993       20
5         231.4700     73.84083       20
6         419.8650     217.0098       20
7         149.7900     32.92756       20
8         670.9100     222.3919       20
9         333.6500     77.25478       20
10        70.92100     9.272833       20
All       1081.681     1314.470      200
Alternately, to compute statistics for each period in the panel, you should enter DATEID
instead of FN as the classifier series.
Saving Summaries
Alternately, you may wish to compute the by-group panel statistics and save them in their
own workfile page. The standard EViews tools for working with workfiles and creating
series links make this task virtually effortless.
Here, we see the newly created FIRM page and newly created FN series containing the
unique values from FN in the other page. Note that the new page is structured as an
Undated with ID series page, using the new FN series.
Repeating this process using the
DATEID series will create an annual
page. First click on the original panel
page to make it active, then select New
Page/Specify by Identifier series... to
bring up the previous dialog. Delete
the Cross-section ID series specification FN from the dialog, provide a
name for the new page by entering
annual in the Page edit field, and
click on OK. EViews creates the third
page, a regular frequency annual page
dated 1935 to 1954.
To create links containing the desired summaries, first click on the original panel page tab to
make it active, select one or more series of interest, then right mouse click and select Copy.
Next, click on either the firm or the annual page, right mouse click, and select Paste Special.... Alternately, right-click to select the series then drag the selected series onto the tab
for the destination page. EViews will open the Link Dialog, prompting you to specify a
method for summarizing the data.
Suppose, for example, that you
select the C01, F, and I series from
the panel page and then Paste Special... in the firm page. In this case,
EViews analyzes the two pages,
and determines that most likely, we
wish to match merge the contracted data from the first page into
the second page. Accordingly,
EViews sets the Merge by setting
to General match merge criteria,
and prefills the Source ID and Destination ID series with two FN
cross-section ID series. The default Contraction method is set to compute the mean values
of the series for each value of the ID.
You may provide a different pattern to be used in naming the link series, a contraction
method, and a sample over which the contraction should be calculated. Here, we create new
series with the same names as the originals, computing means over the entire sample in the
panel page. Click on OK to All to link all three series into the firm page, yielding:
You may compute other summary statistics by repeating the copy-and-paste-special procedure using alternate contraction methods. For example, selecting the Standard Deviation
contraction computes the standard deviation for each cross-section and specified series and
uses the linking to merge the results into the firm page. Saving them using the pattern *SD
will create links named C01SD, FSD, and ISD.
Likewise, to compute summary statistics across cross-sections for each year, first create an
annual page using New Page/Specify by Identifier series..., then paste-special the panel
page series as links in the annual page.
References
Breitung, Jörg (2000). The Local Power of Some Unit Root Tests for Panel Data, in B. Baltagi (ed.), Advances in Econometrics, Vol. 15: Nonstationary Panels, Panel Cointegration, and Dynamic Panels, Amsterdam: JAI Press, p. 161–178.
Choi, I. (2001). Unit Root Tests for Panel Data, Journal of International Money and Finance, 20, 249–272.
Fisher, R. A. (1932). Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.
Hadri, Kaddour (2000). Testing for Stationarity in Heterogeneous Panel Data, Econometric Journal, 3, 148–161.
Hlouskova, Jaroslava and M. Wagner (2006). The Performance of Panel Unit Root and Stationarity Tests: Results from a Large Scale Simulation Study, Econometric Reviews, 25, 85–116.
Holzer, H., R. Block, M. Cheatham, and J. Knott (1993). Are Training Subsidies Effective? The Michigan Experience, Industrial and Labor Relations Review, 46, 625–636.
Im, K. S., M. H. Pesaran, and Y. Shin (2003). Testing for Unit Roots in Heterogeneous Panels, Journal of Econometrics, 115, 53–74.
Johansen, Søren (1991). Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models, Econometrica, 59, 1551–1580.
Kao, C. (1999). Spurious Regression and Residual-Based Tests for Cointegration in Panel Data, Journal of Econometrics, 90, 1–44.
Levin, A., C. F. Lin, and C. Chu (2002). Unit Root Tests in Panel Data: Asymptotic and Finite-Sample Properties, Journal of Econometrics, 108, 1–24.
Maddala, G. S. and S. Wu (1999). A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test, Oxford Bulletin of Economics and Statistics, 61, 631–652.
Pedroni, P. (1999). Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors, Oxford Bulletin of Economics and Statistics, 61, 653–670.
Pedroni, P. (2004). Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis, Econometric Theory, 20, 597–625.
Wooldridge, Jeffrey M. (2002). Econometric Analysis of Cross Section and Panel Data, Cambridge, MA: The MIT Press.
Using the effects dropdown menus, you may include either Fixed or Random effects in either the cross-section or period dimension, or both. See
the pool discussion of Fixed and Random Effects on page 796 for details.
You should be aware that when you select a fixed or random effects specification, EViews
will automatically add a constant to the common coefficients portion of the specification if
necessary, to ensure that the effects sum to zero.
Next, you should specify settings for GLS Weights. You may choose to
estimate with no weighting, or with Cross-section weights, Cross-section SUR, Period weights, or Period SUR. The Cross-section SUR setting
allows for contemporaneous correlation between cross-sections (clustering by period), while the Period SUR allows for general correlation of residuals across periods for a specific cross-section (clustering by individual). Cross-section weights and Period
weights allow for heteroskedasticity in the relevant dimension.
For example, if you select Cross-section weights, EViews will estimate a feasible GLS specification assuming the presence of cross-section heteroskedasticity. If you select Cross-section SUR, EViews estimates a feasible GLS specification correcting for heteroskedasticity
and contemporaneous correlation. Similarly, Period weights allows for period heteroskedasticity, while Period SUR corrects for heteroskedasticity and general correlation of observations within a cross-section. Note that the SUR specifications are both examples of what is
sometimes referred to as the Parks estimator. See the pool discussion of Generalized Least
Squares on page 797 for additional details.
Lastly, you should specify a method for computing coefficient
covariances. You may use the dropdown menu labeled Coef
covariance method to select from the various robust methods
available for computing the coefficient standard errors. The
covariance calculations may be chosen to be robust under various assumptions, for example, general correlation of observations within a cross-section, or
perhaps cross-section heteroskedasticity. Click on the checkbox No d.f. correction to perform the calculations without the leading degree of freedom correction term.
Each of the coefficient covariance methods is described in greater detail in Robust Coefficient Covariances on page 803 of the pool chapter.
You should note that some combinations of specifications and estimation settings are not
currently supported. You may not, for example, estimate random effects models with cross-section specific coefficients, AR terms, or weighting. Furthermore, while two-way random
effects specifications are supported for balanced data, they may not be estimated in unbalanced designs.
LS Options
Lastly, clicking on the Options tab in the dialog brings up a page displaying computational options for panel estimation. Settings that are not currently applicable will be grayed out.
These options control settings for derivative taking, random effects component variance calculation, coefficient usage, iteration control, and the saving of estimation weights with the equation object.
These options are identical to those found in pool equation estimation, and are described in considerable detail in Options on page 782.
IV Instrument Specification
There are only two parts to the instrumental variables page. First, in the edit box labeled Instrument list, you will list the names of the series or groups of series you wish to use as instruments.
Next, if your specification contains AR terms, you should use the checkbox to indicate
whether EViews should automatically create instruments to be used in estimation from lags
of the dependent and regressor variables in the original specification. When estimating an
equation specified by list that contains AR terms, EViews transforms the linear model and
estimates the nonlinear differenced specification. By default, EViews will add lagged values
of the dependent and independent regressors to the corresponding lists of instrumental variables to account for the modified specification, but if you wish, you may uncheck this
option.
See the pool chapter discussion of Instrumental Variables on page 800 for additional
detail.
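For example, a two-stage least squares panel equation may be specified by list from the command line, with the instrument list following the @ sign; the series names Y, X1, X2, Z1, and Z2 here are hypothetical:

' panel 2SLS: dependent variable, regressors, then instruments after @
equation eq_iv.tsls y c x1 x2 @ c z1 z2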
GMM Estimation
To estimate a panel specification using GMM techniques, you should select GMM / DPD Generalized Method of Moments / Dynamic Panel Data in the Method dropdown menu at
the bottom of the main (Specification) dialog page. Again, you should make certain that
your workfile has a panel structure. EViews will respond by displaying a four page dialog
that differs significantly from the previous dialogs.
GMM Specification
The specification page is similar to the earlier dialogs. As in the earlier dialogs, you will enter your equation specification in the upper edit box and your sample in the lower edit box.
Note, however, the presence of the Dynamic Panel Wizard... button on the bottom of the dialog. Pressing this button opens a wizard that will aid you in filling out the dialog so that you may employ dynamic panel data techniques such as the Arellano-Bond 1-step estimator for models with lagged endogenous variables and cross-section fixed effects. We will return to this wizard shortly (GMM Example on page 850).
weights allows you to estimate the GMM specification typically referred to as Arellano-Bond
1-step estimation. Similarly, you may choose the White period (AB 1-step) weights if you
wish to compute Arellano-Bond 2-step or multi-step estimation. Note that the White period
weights have been relabeled to indicate that they are typically associated with a specific estimation technique.
Note also that if you estimate your model using difference or orthogonal deviation methods,
some GMM weighting methods will no longer be available.
GMM Instruments
Instrument specification in GMM estimation follows the discussion above with a few additional complications.
First, you may enter your instrumental variables as usual by providing the names of series
or groups in the edit field. In addition, you may tag instruments as period-specific predetermined instruments, using the @dyn keyword, to indicate that the number of implied instruments expands dynamically over time as additional predetermined variables become
available.
To specify a set of dynamic instruments associated with the series X, simply enter
@DYN(X) as an instrument in the list. EViews will, by default, use the series X(-2), X(-3),
..., X(-T), as instruments for each period (where available). Note that the default set of
instruments grows very quickly as the number of periods increases. With 20 periods, for
example, there are 171 implicit instruments associated with a single dynamic instrument. To
limit the number of implied instruments, you may use only a subset of the instruments by
specifying additional arguments to @dyn describing a range of lags to be used.
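The figure of 171 follows because in period $t$ (for $t \ge 3$) only lags 2 through $t-1$ of X are available, so that period $t$ contributes $t-2$ instruments; summing over the sample gives
$$\sum_{t=3}^{T}(t-2) = \frac{(T-1)(T-2)}{2} = \frac{19 \cdot 18}{2} = 171 \quad \text{for } T = 20.$$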
For example, you may limit the maximum number of lags to be used by specifying both a
minimum and maximum number of lags as additional arguments. The instrument specification:
@dyn(x, -2, -5)
includes only lags 2 through 5 of X as instruments for each period.
If, for example, you estimate an equation that uses orthogonal deviations to remove a cross-section fixed effect, EViews will, by default, compute orthogonal deviations of the instruments provided prior to their use. Thus, the instrument list:
z1 z2 @lev(z3)
will use the transformed Z1 and Z2, and the original Z3 as the instruments for the specification.
Note that in specifications where @dyn and @lev keywords are not relevant, they will be
ignored. If, for example, you first estimate a GMM specification using first differences with
both dynamic and level instruments, and then re-estimate the equation using LS, EViews
will ignore the keywords, and use the instruments in their original forms.
GMM Options
Lastly, clicking on the Options tab in the dialog brings up a page displaying computational
options for GMM estimation. These options are virtually identical to those for both LS and
IV estimation (see LS Options on page 834). The one difference is in the option for saving
estimation weights with the object. In the GMM context, this option applies to both the saving of GLS as well as GMM weights.
Δy_{i,t} = φ_i EC_{i,t} + Σ_{j=0}^{q-1} ΔX_{i,t-j}′ β_{i,j} + Σ_{j=1}^{p-1} λ_{i,j} Δy_{i,t-j} + ε_{i,t}    (43.1)

where

EC_{i,t} = y_{i,t-1} − X_{i,t}′ θ    (43.2)
Note that it is assumed that both the dependent variable and the regressors have the same
number of lags in each cross-section. For notational convenience, it is also assumed that the
regressors X , have the same number of lags q in each cross-section, but this assumption is
not strictly required for estimation.
PSS derive the concentrated (with respect to the long-run coefficients, θ, and the adjustment coefficients, φ_i) log-likelihood function:

l_T(ϑ) = − Σ_{i=1}^{N} (T_i/2) log(2π σ²_i) − (1/2) Σ_{i=1}^{N} (1/σ²_i) (ΔY_i − φ_i EC_i)′ H_i (ΔY_i − φ_i EC_i)    (43.3)
where

ΔY_i = (Δy_{i,1}, Δy_{i,2}, …, Δy_{i,T_i})′
EC_i = (EC_{i,1}, EC_{i,2}, …, EC_{i,T_i})′
H_i = I_{T_i} − W_i (W_i′ W_i)^{-1} W_i′    (43.4)
W_i = (ΔY_{i,-1}, …, ΔY_{i,-p+1}, ΔX_i, ΔX_{i,-1}, …, ΔX_{i,-q+1})
ΔX_i = (ΔX_{i,1}, ΔX_{i,2}, …, ΔX_{i,T_i})′

where, with some abuse of notation, we define the j-th lags of ΔY_i and ΔX_i as ΔY_{i,-j}
and ΔX_{i,-j}, respectively.
This log-likelihood can be maximized directly. However, PSS suggest an iterative procedure
based upon the first derivatives of (43.3). Initial least squares estimates of θ based on the
regression y_t = θ′X_t (where y_t and X_t are the stacked forms of y_{i,t} and X_{i,t}) are used
to compute estimates, via the first-derivative relationships, of φ_i and σ²_i. These estimates
are then used to compute new estimates of θ, and the process continues until convergence.
Given the final estimates of θ, φ_i, and σ²_i, estimates of β_{i,j} and λ_{i,j} may be computed.
Although this iterative procedure's estimates converge to the full maximum likelihood estimates, their
covariance matrix does not. Fortunately, PSS (equation 13, page 625) provide the analytical
form of the estimate of the covariance matrix based upon the coefficient estimates.
The first tab of the dialog, the Specification tab, allows you to specify the variables used in
the regression, and whether to let EViews automatically detect the number of lags for each
variable. Enter the dependent variable, followed by a space delimited list of dynamic regressors (i.e. regressors which will have lag terms in the model) in the Dynamic Specification
edit box. You may then select whether you wish EViews to automatically select the number
of lags for each variable, or whether the number of lags is fixed, using the Automatic Selection and Fixed radio buttons.
If you choose automatic selection, you must then select the maximum number of lags to test
for the dependent variable and regressors using the Max lags dropdown menu. If you select
to use a fixed number of lags, the same menu may be used to select the number of lags for
the dependent variable and regressors. Note that, unlike the non-panel form of ARDL model
selection in EViews, each regressor will be given the same number of lags even when using
automatic model selection.
The Fixed regressors area lets you specify any fixed/static variables (regressors without
lags). The Trend specification dropdown may be used to specify whether the model
includes a constant term, or a constant and trend, or neither. Finally, any other static regressors should be entered in the List of fixed regressors box.
The Options tab of the dialog lets you specify the type of model selection to be used if you
chose automatic selection on the Specification tab. You may choose between the Akaike
Information Criterion (AIC), Schwarz Criterion (SC), or Hannan-Quinn Criterion (HQ) as
methods for selection.
Once you have clicked the OK button on the estimation dialog, EViews will present you with
the estimation results for both the long-run and short-run coefficients. The presented short-run coefficients (and standard errors) are the mean (and standard deviation) of the cross-section specific coefficients. A separate View menu item allows you to see the cross-section
specific coefficients in detail.
Dependent Variable: MV
Method: Panel Least Squares
Date: 08/23/06 Time: 14:29
Sample: 1 506
Periods included: 30
Cross-sections included: 92
Total panel (unbalanced) observations: 506
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                       8.993272      0.134738      66.74632     0.0000
CRIM                   -0.625400      0.104012     -6.012746     0.0000
CHAS                   -0.452414      0.298531     -1.515467     0.1304
NOX                    -0.558938      0.135011     -4.139949     0.0000
RM                      0.927201      0.122470      7.570833     0.0000
AGE                    -1.406955      0.486034     -2.894767     0.0040
DIS                     0.801437      0.711727      1.126045     0.2608
B                       0.663405      0.103222      6.426958     0.0000
LSTAT                  -2.453027      0.255633     -9.595892     0.0000

                          Effects Specification

Cross-section fixed (dummy variables)

R-squared               0.918370    Mean dependent var       9.942268
Adjusted R-squared      0.898465    S.D. dependent var       0.408758
S.E. of regression      0.130249    Akaike info criterion   -1.063668
Sum squared resid       6.887683    Schwarz criterion       -0.228384
Log likelihood          369.1080    Hannan-Quinn criter.    -0.736071
F-statistic             46.13805    Durbin-Watson stat       1.999986
Prob(F-statistic)       0.000000
The results for the fixed effects estimation are depicted here. Note that as in pooled estimation, the reported R-squared and F-statistics are based on the difference between the residual sums of squares from the estimated model and the sums of squares from a single
constant-only specification, not from a fixed-effect-only specification. Similarly, the reported
information criteria report likelihoods adjusted for the number of estimated coefficients,
including fixed effects. Lastly, the reported Durbin-Watson stat is formed simply by computing the first-order residual correlation on the stacked set of residuals.
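If you prefer to work from the command line, a sketch of an equivalent estimation command is shown below (the equation name EQ_FE is our own choice; the cx=f option requests cross-section fixed effects, as in the command example given later in this chapter):

equation eq_fe.ls(cx=f) mv c crim chas nox rm age dis b lstat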
We may click on the Estimate button to modify the specification to match the Wallace-Hussain random effects specification considered by Baltagi and Chang. We modify the specification to include the additional regressors (ZN, INDUS, RAD, TAX, PTRATIO) used in
estimation, change the cross-section effects to be estimated as a random effect, and use the
Options page to set the random effects computation method to Wallace-Hussain.
The top portion of the resulting output is given by:
Dependent Variable: MV
Method: Panel EGLS (Cross-section random effects)
Date: 08/23/06 Time: 14:34
Sample: 1 506
Periods included: 30
Cross-sections included: 92
Total panel (unbalanced) observations: 506
Wallace and Hussain estimator of component variances
C
CRIM
ZN
INDUS
CHAS
NOX
RM
AGE
DIS
RAD
TAX
PTRATIO
B
LSTAT
Coefficient
Std. Error
t-Statistic
Prob.
9.684427
-0.737616
0.072190
0.164948
-0.056459
-0.584667
0.908064
-0.871415
-1.423611
0.961362
-0.376874
-2.951420
0.565195
-2.899084
0.207691
0.108966
0.684633
0.426376
0.304025
0.129825
0.123724
0.487161
0.462761
0.280649
0.186695
0.958355
0.106121
0.249300
46.62904
-6.769233
0.105443
0.386860
-0.185703
-4.503496
7.339410
-1.788760
-3.076343
3.425493
-2.018658
-3.079674
5.325958
-11.62891
0.0000
0.0000
0.9161
0.6990
0.8528
0.0000
0.0000
0.0743
0.0022
0.0007
0.0441
0.0022
0.0000
0.0000
Effects Specification
S.D.
Cross-section random
Idiosyncratic random
0.126983
0.140499
Rho
0.4496
0.5504
Note that the estimates of the component standard deviations must be squared to match the
component variances reported by Baltagi and Chang (0.016 and 0.020, respectively).
Next, we consider an example of
estimation with standard errors
that are robust to serial correlation.
For this example, we employ data
on job training grants
(Jtrain.WF1) used in examples
from Wooldridge (2002, p. 276 and
282).
As before, the first step is to structure the workfile as a panel workfile. Click on Range: to bring up the dialog, and enter YEAR as the date identifier and
FCODE as the cross-section ID.
EViews will structure the workfile
so that it is a panel workfile with
157 cross-sections, and three
annual observations. Note that
even though there are 471 observations in the workfile, a large number of them contain missing values for variables of interest.
To estimate the fixed effect specification with robust
standard errors (Wooldridge example 10.5, p. 276),
click on Quick/Estimate Equation...
from the main EViews menu. Enter the list specification:
lscrap c d88 d89 grant grant_1
in the Equation specification edit box, select Fixed for the Cross-section effects on the Panel Options page, and choose the White period method with No d.f. correction for the coefficient covariance.
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                       0.597434      0.062489      9.560565     0.0000
D88                    -0.080216      0.095719     -0.838033     0.4039
D89                    -0.247203      0.192514     -1.284075     0.2020
GRANT                  -0.252315      0.140329     -1.798022     0.0751
GRANT_1                -0.421589      0.276335     -1.525648     0.1301

                          Effects Specification

Cross-section fixed (dummy variables)

R-squared               0.927572    Mean dependent var      0.393681
Adjusted R-squared      0.887876    S.D. dependent var      1.486471
S.E. of regression      0.497744    Akaike info criterion   1.715383
Sum squared resid       25.76593    Schwarz criterion       2.820819
Log likelihood         -80.94602    Hannan-Quinn criter.    2.164207
F-statistic             23.36680    Durbin-Watson stat      1.996983
Prob(F-statistic)       0.000000
Note that EViews automatically adjusts for the missing values in the data. There are only
162 observations on 54 cross-sections used in estimation. The top portion of the output indicates that the results use robust White period standard errors with no d.f. correction. Notice
that EViews warns you that the estimated coefficient covariance matrix is not of full rank, which
occurs in this case since the number of periods is less than the number of cross-sections.
Alternately, we may estimate a first difference estimator for these data with robust standard
errors (Wooldridge example 10.6, p. 282). Open a new equation dialog by clicking on
Quick/Estimate Equation..., or modify the existing equation by clicking on the Estimate
button on the equation toolbar. Enter the specification:
d(lscrap) c d89 d(grant) d(grant_1)
in the Equation specification edit box on the main page, select None in the Cross-section
effects specification dropdown menu, and choose White period with No d.f. correction for the coefficient covariance method on the Panel Options page. The results are given by:
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -0.090607      0.088082     -1.028671     0.3060
D89                    -0.096208      0.111002     -0.866721     0.3881
D(GRANT)               -0.222781      0.128580     -1.732624     0.0861
D(GRANT_1)             -0.351246      0.264662     -1.327147     0.1874

R-squared               0.036518    Mean dependent var     -0.221132
Adjusted R-squared      0.008725    S.D. dependent var      0.579248
S.E. of regression      0.576716    Akaike info criterion   1.773399
Sum squared resid       34.59049    Schwarz criterion       1.872737
Log likelihood         -91.76352    Hannan-Quinn criter.    1.813677
F-statistic             1.313929    Durbin-Watson stat      1.498132
Prob(F-statistic)       0.273884
While current versions of EViews do not provide a full set of specification tests for panel
equations, it is a straightforward task to construct some tests using residuals obtained from
the panel estimation.
To continue with the Wooldridge example, we may test for AR(1) serial correlation in the
first-differenced equation by regressing the residuals from this specification on the lagged
residuals using data for the year 1989. First, we save the residual series in the workfile. Click
on Proc/Make Residual Series... on the estimated equation toolbar, and save the residuals
to the series RESID01.
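If you prefer commands, the same steps can be sketched as follows (the equation name EQ_AR and the use of YEAR as the date identifier series follow the setup above; this is only a sketch of the dialog-based procedure):

smpl @all if year=1989
equation eq_ar.ls resid01 resid01(-1)
' (remember to restore the full sample with "smpl @all" afterwards)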
Next, regress RESID01 on RESID01(-1), yielding:
Variable             Coefficient    Std. Error    t-Statistic    Prob.

RESID01(-1)             0.236906      0.133357      1.776481     0.0814

R-squared               0.056199    Mean dependent var      6.17E-18
Adjusted R-squared      0.056199    S.D. dependent var      0.571061
S.E. of regression      0.554782    Akaike info criterion   1.677863
Sum squared resid       16.31252    Schwarz criterion       1.714696
Log likelihood         -44.30230    Hannan-Quinn criter.    1.692068
Durbin-Watson stat      0.000000
Under the null hypothesis that the original idiosyncratic errors are uncorrelated, the residuals from this equation should have an autocorrelation coefficient of -0.5. Here, we obtain an
estimate of ρ̂₁ = 0.237, which appears to be far from the null value. A formal Wald hypothesis test rejects the null that the original idiosyncratic errors are serially uncorrelated. Perform a Wald test on the test equation by clicking on View/Coefficient Diagnostics/Wald - Coefficient Restrictions... and entering the restriction C(1)=-0.5 in the edit box:
Wald Test:
Equation: Untitled
Null Hypothesis: C(1)=-0.5

Test Statistic        Value           df        Probability

t-statistic          5.525812         53           0.0000
F-statistic          30.53460       (1, 53)        0.0000
Chi-square           30.53460          1           0.0000

Null Hypothesis Summary:

Normalized Restriction (= 0)        Value        Std. Err.

0.5 + C(1)                         0.736906       0.133357
The formal test confirms our casual observation, strongly rejecting the null hypothesis.
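The same restriction can be tested from the command line with the equation's wald proc. A sketch, assuming the test equation has been saved under a name such as EQ_AR (as in the command sketch above):

eq_ar.wald c(1)=-0.5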
to regress the difference of log unemployment claims (LUCLMS) on the lag difference, and
the difference of enterprise zone designation (EZ). Since the model is estimated with time
intercepts, you should click on the Panel Options page, and select Fixed for the Period
effects.
Next, click on the Instruments tab, and add the names:
c d(luclms(-2)) d(ez)
to the Instrument list edit box. Note that adding the constant C to the regressor and instrument boxes is not required since the fixed effects estimator will add it for you. Click on OK
to accept the dialog settings. EViews displays the output for the IV regression:
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -0.201654      0.040473     -4.982442     0.0000
D(LUCLMS(-1))           0.164699      0.288444      0.570992     0.5690
D(EZ)                  -0.218702      0.106141     -2.060493     0.0414

                          Effects Specification

Period fixed (dummy variables)

R-squared               0.280533    Mean dependent var     -0.235098
Adjusted R-squared      0.239918    S.D. dependent var      0.267204
S.E. of regression      0.232956    Sum squared resid       6.729300
F-statistic             9.223709    Durbin-Watson stat      2.857769
Prob(F-statistic)       0.000000    Second-Stage SSR        6.150596
Instrument rank         8.000000
Note that the instrument rank in this equation is 8 since the period dummies also serve as
instruments, so you have the 3 instruments specified explicitly, plus 5 for the non-collinear
period dummy variables.
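For reference, a command-line sketch of this two-stage least squares estimation is shown below. The dependent variable, regressor, and instrument lists follow the text; the per=f option name for period fixed effects is our assumption, so check the command reference before relying on it:

equation eq_iv.tsls(per=f) d(luclms) c d(luclms(-1)) d(ez) @ c d(luclms(-2)) d(ez)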
GMM Example
To illustrate the estimation of dynamic panel data models using GMM, we employ the unbalanced 1031 observation panel of firm level data (Abond_pan.WF1) from Layard and Nickell (1986), previously examined by Arellano and Bond (1991). The analysis fits the log of
employment (N) to the log of the real wage (W), log of the capital stock (K), and the log of
industry output (YS).
in the regressor edit box to include these variables. Since the desired specification will
include time dummies, make certain that the checkbox for Include period dummy variables is selected, then click on Next to proceed.
The next page of the wizard is used to specify a
transformation to remove
the cross-section fixed
effect. You may choose to
use first Differences or
Orthogonal deviations. In
addition, if your specification includes period
dummy variables, there is
a checkbox asking
whether you wish to transform the period dummies,
or to enter them in levels.
Here we specify the first
difference transformation,
and choose to include untransformed period dummies in the transformed equation. Click on
Next to continue.
The next page is where
you will specify your
dynamic period-specific
(predetermined) instruments. The instruments
should be entered with the
@DYN tag to indicate
that they are to be
expanded into sets of predetermined instruments,
with optional arguments to
indicate the lags to be
included. If no arguments
are provided, the default is
to include all valid lags
(from -2 to -infinity).
Here, we instruct EViews that we wish to use the default lags for N as predetermined instruments by entering the series name in the first edit box, and click on Next to proceed to the final page.
The final page allows you to specify your GMM weighting and coefficient covariance calculation choices. In the first dropdown menu, you will choose a GMM Iteration option. You may select 1-step (for i.i.d. innovations) to compute the Arellano-Bond 1-step estimator, 2-step (update weights once) to compute the Arellano-Bond 2-step estimator, or n-step (iterate to convergence) to iterate the weight calculations. In the first case, EViews will provide you with choices for
computing the standard errors, but here only White period robust standard errors are
allowed. Clicking on Next takes you to the final page. Click on Finish to return to the Equation Estimation dialog.
EViews has filled out the Equation Estimation dialog with our choices from the DPD wizard. You should take a moment to examine the settings that have been filled out for you
since, in the future, you may wish to enter the specification directly into the dialog without
using the wizard. You may also, of course, modify the settings in the dialog prior to continuing. For example, click on the Panel Options tab and check the No d.f. correction setting in
the covariance calculation to match the original Arellano-Bond results (Table 4(b), p. 290).
Click on OK to estimate the specification.
The top portion of the output describes the estimation settings, coefficient estimates, and
summary statistics. Note that both the weighting matrix and covariance calculation method
used are described in the top portion of the output.
Dependent Variable: N
Method: Panel Generalized Method of Moments
Transformation: First Differences
Date: 08/24/06 Time: 14:21
Sample (adjusted): 1979 1984
Periods included: 6
Cross-sections included: 140
Total panel (unbalanced) observations: 611
White period instrument weighting matrix
White period standard errors & covariance (no d.f. correction)
Instrument list: @DYN(N, -2) W W(-1) K YS YS(-1)
@LEV(@SYSPER)
Variable                            Coefficient    Std. Error    t-Statistic    Prob.

N(-1)                                  0.474150      0.088714      5.344699     0.0000
N(-2)                                 -0.052968      0.026721     -1.982222     0.0479
W                                     -0.513205      0.057323     -8.952838     0.0000
W(-1)                                  0.224640      0.080614      2.786626     0.0055
K                                      0.292723      0.042243      6.929542     0.0000
YS                                     0.609775      0.111029      5.492054     0.0000
YS(-1)                                -0.446371      0.125598     -3.553963     0.0004
@LEV(@ISPERIOD("1979"))                0.010509      0.006831      1.538482     0.1245
@LEV(@ISPERIOD("1980"))                0.014142      0.009924      1.425025     0.1547
@LEV(@ISPERIOD("1981"))               -0.040453      0.012197     -3.316629     0.0010
@LEV(@ISPERIOD("1982"))               -0.021640      0.011353     -1.906127     0.0571
@LEV(@ISPERIOD("1983"))               -0.001847      0.010807     -0.170874     0.8644
@LEV(@ISPERIOD("1984"))               -0.010221      0.010548     -0.968937     0.3330
The standard errors that we report here are the standard Arellano-Bond 2-step estimator
standard errors. Note that there is evidence in the literature that the standard errors for the
two-step estimator may not be reliable.
The bottom portion of the output displays additional information about the specification and
summary statistics:
                          Effects Specification

Cross-section fixed (first differences)
Period fixed (dummy variables)

Mean dependent var     -0.063168    S.D. dependent var      0.137637
S.E. of regression      0.116243    Sum squared resid       8.080432
J-statistic             30.11247    Instrument rank         38.000000
Note in particular the results labeled J-statistic and Instrument rank. Since the reported
J-statistic is simply the Sargan statistic (value of the GMM objective function at estimated
parameters), and the instrument rank of 38 is greater than the number of estimated coefficients (13), we may use it to construct the Sargan test of over-identifying restrictions. It is
worth noting here that the J-statistic reported by a panel equation differs from that reported
by an ordinary equation by a factor equal to the number of observations. Under the null
hypothesis that the over-identifying restrictions are valid, the Sargan statistic is distributed
as a χ²(p − k), where k is the number of estimated coefficients and p is the instrument
rank. The p-value of 0.22 in this example may be computed using the command "scalar pval = @chisq(30.11247, 25)".
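For example, the degrees of freedom and p-value can be computed with a couple of commands using only the values reported above (the scalar names are our own):

' Sargan test of over-identifying restrictions from the reported J-statistic
scalar df = 38 - 13                    ' instrument rank less number of estimated coefficients
scalar pval = @chisq(30.11247, df)     ' upper-tail chi-square p-value, roughly 0.22
show pval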
in the Dynamic Specification box, select the Fixed radio button to turn off model selection,
and change the number of dependent variable lags and regressor lags to 1. To match PSS, we change
the estimation sample to 1962–1993. Leaving all other options at their default values, we
click OK to estimate the model.
                     Coefficient    Std. Error    t-Statistic    Prob.*

Long Run Equation

                       -0.464856      0.056705     -8.197798     0.0000
                        0.904398      0.008687      104.1114     0.0000

Short Run Equation

                       -0.199949      0.032126     -6.223894     0.0000
                       -0.018883      0.025371     -0.744295     0.4570
                        0.327691      0.057425      5.706388     0.0000
                        0.154571      0.021679      7.130068     0.0000

Log likelihood          2327.084

*Note: p-values and any subsequent tests do not account for model
selection.
The coefficient on the log of inflation is an estimate of the long-run inflation elasticity, and is
negative (and strongly significant), as economic theory, and PSS, expect. We would expect
the long run income elasticity (the coefficient on the log of income) to be equal to one, but
the estimated value is slightly less at 0.904. We can perform a Wald test of unit elasticity by
clicking on View/Coefficient Diagnostics/Wald Test, and entering the restriction of
C(2)=1. This test rejects the null hypothesis of unit elasticity.
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -0.168993      0.078872     -2.142622     0.0344
D89                    -0.104279      0.111542     -0.934881     0.3520

R-squared               0.008178    Mean dependent var     -0.221132
Adjusted R-squared     -0.001179    S.D. dependent var      0.579248
S.E. of regression      0.579589    Akaike info criterion   1.765351
Sum squared resid       35.60793    Schwarz criterion       1.815020
Log likelihood         -93.32896    Hannan-Quinn criter.    1.785490
F-statistic             0.874003    Durbin-Watson stat      1.445487
Prob(F-statistic)       0.351974
We wish to test the significance of the first differences of the omitted job training grant variables GRANT and GRANT_1. Click on View/Coefficient Diagnostics/Omitted Variables Likelihood Ratio... and type D(GRANT) and D(GRANT_1) to enter the two variables in
differences. Click on OK to display the omitted variables test results.
The top portion of the results contains a brief description of the test, the test statistic values,
and the associated significance levels:
Omitted Variables Test
Equation: UNTITLED
Specification: D(LSCRAP) C D89
Omitted Variables: GRANT GRANT_1

                        Value            df        Probability

F-statistic            1.529525       (2, 104)        0.2215
Likelihood ratio       3.130883          2            0.2090
Here, the test statistics do not reject, at conventional significance levels, the null hypothesis
that D(GRANT) and D(GRANT_1) are jointly irrelevant.
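The same test can be run from the command line with the equation's testadd proc, which is the command form of the omitted variables test. A sketch, assuming the restricted equation has been saved under a name such as EQ_RESTR:

eq_restr.testadd d(grant) d(grant_1)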
The remainder of the results shows summary information and the test equation estimated
under the unrestricted alternative:
F-test summary:
                       Sum of Sq.       df       Mean Squares

Test SSR                 1.017443        2          0.508721
Restricted SSR           35.60793      106          0.335924
Unrestricted SSR         34.59049      104          0.332601
Unrestricted SSR         34.59049      104          0.332601

LR test summary:
                          Value          df

Restricted LogL        -93.32896        106
Unrestricted LogL      -91.76352        104
Note that if appropriate, the alternative specification will be estimated using the cross-section or period GLS weights obtained from the restricted specification. If these weights were
not saved with the restricted specification and are not available, you may first be asked to
reestimate the original specification.
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                       0.414833      0.242965      1.707379     0.0897
D88                    -0.093452      0.108946     -0.857779     0.3923
D89                    -0.269834      0.131397     -2.053577     0.0417
UNION                   0.547802      0.409837      1.336635     0.1833
GRANT                  -0.214696      0.147500     -1.455565     0.1475
GRANT_1                -0.377070      0.204957     -1.839747     0.0677

                          Effects Specification
                                                   S.D.        Rho

Cross-section random                             1.390029     0.8863
Idiosyncratic random                             0.497744     0.1137
Note in particular that our unrestricted model is a random effects specification using Swamy
and Arora estimators for the component variances, and that the estimates of the cross-section and idiosyncratic random effects standard deviations are 1.390 and 0.4978, respectively.
If we select the redundant variables test, and perform a joint test on GRANT and GRANT_1,
EViews displays the test results in the top of the results window:
Redundant Variables Test
Equation: UNTITLED
Specification: LSCRAP C D88 D89 UNION GRANT GRANT_1
Redundant Variables: GRANT GRANT_1

                        Value            df        Probability

F-statistic            1.832264       (2, 156)        0.1635

F-test summary:
                       Sum of Sq.       df       Mean Squares

Test SSR                 0.911380        2          0.455690
Restricted SSR           39.70907      158          0.251323
Unrestricted SSR         38.79769      156          0.248703
Unrestricted SSR         38.79769      156          0.248703
Here we see that the statistic value of 1.832 does not, at conventional significance levels,
lead us to reject the null hypothesis that GRANT and GRANT_1 are redundant in the unrestricted specification.
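Again, a command-line sketch is available: testdrop is the command form of the redundant variables test (the equation name below is our own placeholder for the unrestricted equation):

eq_unrest.testdrop grant grant_1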
The restricted test equation results are depicted in the bottom portion of the window. Here
we see the top portion of the results for the restricted equation:
Restricted Test Equation:
Dependent Variable: LSCRAP
Method: Panel EGLS (Cross-section random effects)
Date: 08/18/09   Time: 12:39
Sample: 1987 1989
Periods included: 3
Cross-sections included: 54
Total panel (balanced) observations: 162
Use pre-specified random component estimates
Swamy and Arora estimator of component variances

Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                       0.419327      0.242949      1.725987     0.0863
D88                    -0.168993      0.095791     -1.764187     0.0796
D89                    -0.442265      0.095791     -4.616981     0.0000
UNION                   0.534321      0.409752      1.304010     0.1941

                          Effects Specification
                                                   S.D.        Rho

Cross-section random                             1.390029     0.8863
Idiosyncratic random                             0.497744     0.1137
The important thing to note is that the restricted specification removes the test variables
GRANT and GRANT_1. Note further that the output indicates that we are using existing estimates of the random component variances (Use pre-specified random component estimates), and that the displayed results for the effects match those for the unrestricted
specification.
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -0.855103      0.385169     -2.220073     0.0272
LINCOMEP                0.051369      0.091386      0.562103     0.5745
LRPMG                  -0.192850      0.042860     -4.499545     0.0000
LCARPCAP               -0.593448      0.027669    -21.44787      0.0000

                          Effects Specification

Cross-section fixed (dummy variables)
Period fixed (dummy variables)

R-squared               0.980564    Mean dependent var      4.296242
Adjusted R-squared      0.978126    S.D. dependent var      0.548907
S.E. of regression      0.081183    Akaike info criterion  -2.077237
Sum squared resid       1.996961    Schwarz criterion      -1.639934
Log likelihood          394.2075    Hannan-Quinn criter.   -1.903027
F-statistic             402.2697    Durbin-Watson stat      0.348394
Prob(F-statistic)       0.000000
Note that the specification has both cross-section and period fixed effects. When you select
the fixed effect test from the equation menu, EViews estimates three restricted specifications: one with period fixed effects only, one with cross-section fixed effects only, and one
with only a common intercept. The test results are displayed at the top of the results window:
Redundant Fixed Effects Tests
Equation: Untitled
Test cross-section and period fixed effects

Effects Test                           Statistic       d.f.       Prob.

Cross-section F                       113.351303     (17,303)     0.0000
Cross-section Chi-square              682.635958       17         0.0000
Period F                                6.233849     (18,303)     0.0000
Period Chi-square                     107.747064       18         0.0000
Cross-Section/Period F                 55.955615     (35,303)     0.0000
Cross-Section/Period Chi-square       687.429282       35         0.0000
Notice that there are three sets of tests. The first set consists of two tests (Cross-section F
and Cross-section Chi-square) that evaluate the joint significance of the cross-section
effects using sums-of-squares (F-test) and the likelihood function (Chi-square test). The corresponding restricted specification is one in which there are period effects only. The two statistic values (113.35 and 682.64) and the associated p-values strongly reject the null that the
cross-section effects are redundant.
The next two tests evaluate the significance of the period dummies in the unrestricted model
against a restricted specification in which there are cross-section effects only. Both forms of
the statistic strongly reject the null of no period effects.
The remaining results evaluate the joint significance of all of the effects. Both
forms of the test statistic reject the restricted model in which there is only a single intercept.
Below the test statistic results, EViews displays the results for the test equations. In this
example, there are three distinct restricted equations so EViews shows three sets of estimates.
Lastly, note that this test statistic is not currently available for instrumental variables and
GMM specifications.
Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -57.83441      28.88930     -2.001932     0.0467
F                       0.109781      0.010489      10.46615     0.0000
C01                     0.308113      0.017175      17.93989     0.0000

                          Effects Specification
                                                   S.D.        Rho

Cross-section random                             84.20095     0.7180
Idiosyncratic random                             52.76797     0.2820
Next we select the Hausman test from the equation menu by clicking on View/Fixed/Random Effects Testing/Correlated Random Effects - Hausman Test. EViews estimates the corresponding fixed effects estimator, evaluates the test, and displays the results in the equation
window. If the original specification is a two-way random effects model, EViews will test the
two sets of effects separately as well as jointly.
There are three parts to the output. The top portion describes the test statistic and provides
a summary of the results. Here we have:
Correlated Random Effects - Hausman Test
Equation: Untitled
Test cross-section random effects

Test Summary                   Chi-Sq. Statistic    Chi-Sq. d.f.    Prob.

Cross-section random                2.131366              2         0.3445
The statistic provides little evidence against the null hypothesis that there is no misspecification.
The next portion of output provides additional test detail, showing the coefficient estimates
from both the random and fixed effects estimators, along with the variance of the difference
and associated p-values for the hypothesis that there is no difference. Note that in some
cases, the estimated variances can be negative so that the probabilities cannot be computed.
Variable            Fixed          Random        Var(Diff.)      Prob.

F                  0.110124       0.109781        0.000031      0.9506
C01                0.310065       0.308113        0.000006      0.4332
The bottom portion of the output contains the results from the corresponding fixed effects
estimation:
Cross-section random effects test equation:
Dependent Variable: I
Method: Panel Least Squares
Date: 08/18/09   Time: 12:51
Sample: 1935 1954
Periods included: 20
Cross-sections included: 10
Total panel (balanced) observations: 200

Variable             Coefficient    Std. Error    t-Statistic    Prob.

C                      -58.74394      12.45369     -4.716990     0.0000
F                       0.110124      0.011857      9.287901     0.0000
C01                     0.310065      0.017355      17.86656     0.0000

                          Effects Specification

Cross-section fixed (dummy variables)

R-squared               0.944073    Mean dependent var      145.9583
Adjusted R-squared      0.940800    S.D. dependent var      216.8753
S.E. of regression      52.76797    Akaike info criterion   10.82781
Sum squared resid       523478.1    Schwarz criterion       11.02571
Log likelihood         -1070.781    Hannan-Quinn criter.    10.90790
F-statistic             288.4996    Durbin-Watson stat      0.716733
Prob(F-statistic)       0.000000
In some cases, EViews will automatically drop non-varying variables in order to construct
the test statistic. These dropped variables will be indicated in this latter estimation output.
EViews offers testing for individual and time effects using both F-statistic (likelihood ratio)
and Lagrange multiplier (LM) tests. The following discussion describes LM testing for random
effects (the F-statistic tests for fixed effects are described elsewhere in this manual).
The most popular random effects test is the Breusch-Pagan (1980) LM test. Honda (1985)
derives component LM tests with one-sided alternatives, obtaining a uniformly most powerful (UMP) test statistic. Moulton and Randolph (1989) propose a standardized version of the
Honda test that has improved asymptotic size. King and Wu (1997) introduce a locally mean
most powerful (LMMP) one-sided LM test. In addition, Baltagi and Li (1992) and Baltagi, Chang
and Li (1999) extend the Breusch-Pagan, Honda, and King and Wu approaches to unbalanced designs.
The EViews panel effects (PE) test view computes the following LM tests:
Conventional LM (Breusch-Pagan, 1980)
Uniformly most powerful LM (Honda, 1985)
Standardized LM (Moulton and Randolph, 1989; Honda, 1991; Baltagi et al., 1999)
Locally mean most powerful (LMMP) (King and Wu, 1997)
Gourieroux, Holly, and Monfort (1982)
All of these tests may be computed from estimated regressions for equation objects in a
panel structured workfile, or estimated pool objects in a non-panel workfile. Note that
EViews offers these tests for equations estimated using both regression and instrumental
variables so long as the equations are free of estimated effects, AR terms, and GLS weighting, despite the fact that these LM tests are not, strictly speaking, applicable in the instrumental variables case. One should employ appropriate caution in the use of such results in
this setting.
Background
Our discussion follows closely the survey by Baltagi (2008). We consider two-way error
components disturbances:
u_it = μ_i + λ_t + ν_it    (43.5)

For the remaining discussion, it will be useful to write the component specification in
stacked matrix form. Let T = max(T_1, T_2, …, T_N). Then, defining the cross-section residual vector u_i = (u_i1, u_i2, …, u_iT_i)′ and the stacked residuals u = (u_1′, u_2′, …, u_N′)′, we
have

u = D_μ μ + D_λ λ + ν    (43.6)

The LM test statistics are based on quantities of the form

A_r = [ û′ D_r D_r′ û / û′û ] − 1    for r = μ, λ    (43.7)
where û are the residuals obtained from the restricted model. Then defining

LM_μ = [ NT / (2(T−1)) ] A_μ²    (43.8)

LM_λ = [ NT / (2(N−1)) ] A_λ²    (43.9)

and

LM_μλ = LM_μ + LM_λ

which is distributed as χ²(2) under the joint null H₀^{μλ}: σ²_μ = σ²_λ = 0. To test H₀^μ and H₀^λ individually, we may use LM_μ and
LM_λ respectively, both of which are distributed as χ²(1) under the corresponding null.
Baltagi and Li (1990) derived corresponding statistics for unbalanced samples:

LM_μ = (n²/2) · A_μ² / (M_μ − n)

LM_λ = (n²/2) · A_λ² / (M_λ − n)    (43.10)

where

n = Σ_{i=1}^{N} T_i,    M_μ = Σ_{i=1}^{N} T_i²,    M_λ = Σ_{t=1}^{T} N_t²    (43.11)
Honda

Honda (1985) proposes one-sided tests based on the signed square roots of the component statistics:

HO_μ = sqrt[ NT / (2(T−1)) ] · A_μ  →  N(0, 1)    (43.12)

HO_λ = sqrt[ NT / (2(N−1)) ] · A_λ  →  N(0, 1)    (43.13)

Note that both of these statistics are the square roots of the corresponding Breusch-Pagan
LM statistics.
Honda's statistics can be generalized to the unbalanced case, yielding square roots of the
unbalanced Breusch-Pagan LM statistics:

HO_μ = [ n / sqrt(2(M_μ − n)) ] · A_μ

HO_λ = [ n / sqrt(2(M_λ − n)) ] · A_λ    (43.14)

Honda does not derive a uniformly most powerful statistic for the joint hypothesis H₀^{μλ}: σ²_μ = σ²_λ = 0 against the
one-sided alternative, but does suggest a handy one-sided test statistic:

HO_μλ = (HO_μ + HO_λ) / √2    (43.15)

which also converges to N(0, 1).
King and Wu
King and Wu (1997) propose locally mean most powerful (LMMP) one-sided test statistics
KW_μ and KW_λ for H₀^μ: σ²_μ = 0 against the one-sided alternative H₁^μ: σ²_μ > 0, and for H₀^λ: σ²_λ = 0
against H₁^λ: σ²_λ > 0. These two statistics are identical to the corresponding Honda UMP statistics.

Baltagi, Chang, and Li (1992) derive the corresponding LMMP test for H₀^{μλ}: σ²_μ = σ²_λ = 0
against the one-sided alternative:

KW_μλ = sqrt[ (T−1)/(N+T−2) ] · HO_μ + sqrt[ (N−1)/(N+T−2) ] · HO_λ  →  N(0, 1)    (43.16)

and Baltagi, Chang, and Li (1999) obtain results for the unbalanced case:

KW_μλ = sqrt[ (M_μ−n)/(M_μ+M_λ−2n) ] · HO_μ + sqrt[ (M_λ−n)/(M_μ+M_λ−2n) ] · HO_λ    (43.17)
Standardized LM Tests
Moulton and Randolph (1989) showed that the asymptotic approximation for the one-sided
statistics can be poor when the number of regressors is large or the inter-correlation of
regressors is high. Alternatively, they propose a standardized one-sided LM (SLM) statistic
which centers and scales the statistic so that its mean is zero and its variance is one.

For H₀^μ: σ²_μ = 0 against the one-sided alternative H₁^μ: σ²_μ > 0, they show that the standardized Honda (or
King-Wu) statistic is given by:

SLM_μ = [ HO_μ − E(HO_μ) ] / sqrt(Var(HO_μ))  →  N(0, 1)    (43.18)

Expressions for the expected value and variance may be found in Moulton and Randolph
(1989) and Baltagi (2008). The corresponding statistic for the time effects hypothesis is

SLM_λ = [ HO_λ − E(HO_λ) ] / sqrt(Var(HO_λ))  →  N(0, 1)    (43.19)
For the two-way model, Honda (1991) proposes a standardized Honda-type SLM test statistic, and Baltagi, Chang and Li (1999) describe a standardized King-Wu statistic. Under
H₀^{μλ}: σ²_μ = σ²_λ = 0, these SLM statistics are asymptotically distributed as N(0, 1) and their
critical values should be more accurate than those of the corresponding unstandardized
tests. See Baltagi, Chang, and Li (1999) and Baltagi (2008) for details.
Gourieroux, Holly, and Monfort

Gourieroux, Holly, and Monfort (1982) propose a one-sided joint test which combines the component LM statistics:

GHM = LM_μ + LM_λ    if LM_μ > 0, LM_λ > 0
      LM_μ           if LM_μ > 0, LM_λ ≤ 0
      LM_λ           if LM_μ ≤ 0, LM_λ > 0
      0              if LM_μ ≤ 0, LM_λ ≤ 0    (43.20)

which is asymptotically distributed as a mixed χ² (χ²_mixed) under the null hypothesis.
Example
The LM test for random effects view implements Lagrange multiplier tests of individual and/or time effects based on the results of the pooled model. As an example, we use the Grunfeld (1958) data, which contain 10 large US manufacturing firms observed over 20 years (1935–1954), and which are available in the workfile Grunfeld_Baltagi_panel.wf1 in the Working with Panel
Data folder in your Example Files directory.
Following Grunfeld (1958), we consider the following investment equation:

I_it = α + β₁ F_it + β₂ C_it + u_it    (43.21)

where I_it denotes real gross investment for firm i in year t, F_it is the real value of the
firm (shares outstanding), and C_it is the real value of the capital stock. We estimate this
model using ordinary pooled least squares on the specification:

i f c c01

and name our equation EQ01.
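A command-line sketch of this estimation, using the equation name EQ01 from the text, would be:

equation eq01.ls i f c c01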
To test for the presence of individual and time effects in this model, we can click on the View/Fixed-Random Effects Testing/Omitted Random Effects - Lagrange Multiplier menu item.
The results of the LM tests are shown below:
                                   Test Hypothesis
Test                        Cross-section        Time             Both

Breusch-Pagan                 798.1615          6.453882         804.6154
                              (0.0000)          (0.0111)         (0.0000)

Honda                         28.25175         -2.540449         18.18064
                              (0.0000)             --            (0.0000)

Standardized Honda               ---               ---           16.29814
                                                                  (0.0000)

King-Wu                       28.25175         -2.540449         21.83221
                              (0.0000)             --            (0.0000)

Standardized King-Wu             ---               ---           20.96591
                                                                  (0.0000)

Standardized                  32.66605         -2.432565            --
                              (0.0000)             --

Gourierioux, et al.              --                --            798.1615
                                                                  (< 0.01)*
From the first column, we see that there is strong evidence that there are unaccounted for
cross-section random effects in the pooled estimator residuals. All three of the cross-section
tests have p-values well below conventional significance levels.
However, for testing time-specific effects, there is a marked difference between the results
for the two-sided Breusch-Pagan test and the one-sided tests, with the former suggesting the
presence of effects, and the latter, with negative values, indicating that there are no time effects. These data clearly show the benefits of using one-sided tests in an empirical setting.
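The relationships among the reported statistics are easy to verify by hand. The following is a small sketch using only the values in the table and the dimensions N = 10, T = 20 (the scalar names are our own):

' Honda one-way statistics are square roots of the Breusch-Pagan LM statistics
scalar ho_cs = @sqrt(798.1615)                              ' = 28.25175
' the two-way Honda statistic combines the one-way statistics
scalar ho_both = (28.25175 - 2.540449)/@sqrt(2)             ' = 18.18064
' the two-way King-Wu statistic uses the weights in Equation (43.16)
scalar kw_both = @sqrt(19/28)*28.25175 - @sqrt(9/28)*2.540449   ' = 21.83221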
Background
Following Pesaran (2004), suppose that we have the panel data model

y_it = β_i′ x_it + u_it    (43.22)

The null hypothesis of no cross-section dependence may be stated in terms of the correlations between the disturbances in different cross-section units:

H₀: ρ_ij = cor(u_it, u_jt) = 0    for i ≠ j    (43.23)

In the balanced case, ρ_ij may be estimated using the sample pairwise correlation coefficients

ρ̂_ij = Σ_{t∈(i,j)} û_it û_jt / [ (Σ_{t∈(i,j)} û_it²)^{1/2} · (Σ_{t∈(i,j)} û_jt²)^{1/2} ]    (43.24)
In the unbalanced case, Pesaran proposes use of the centered correlation coefficient

ρ̂_ij = Σ_{t∈(i,j)} (û_it − ū_i)(û_jt − ū_j) / [ (Σ_{t∈(i,j)} (û_it − ū_i)²)^{1/2} · (Σ_{t∈(i,j)} (û_jt − ū_j)²)^{1/2} ]    (43.25)

where the notation t ∈ (i, j) is used to indicate that we sum over the subset of T_ij observations common to i and j, and the pairwise mean

ū_i = Σ_{t∈(i,j)} û_it / T_ij    (43.26)

is used to adjust for the fact that the residuals in pairwise subsets are not necessarily mean
zero.
(Note that in practice EViews always employs centered correlations as in Equation (43.25)
as this allows for estimation methods where the residuals are not constrained to have zero
means in each cross-section. These results may differ from those that would have been
obtained using the non-centered correlations in Equation (43.24). EViews will provide a
message informing you when non-zero means are found.)
Breusch-Pagan LM
The most well-known cross-section dependence diagnostic is the Breusch-Pagan (1980)
Lagrange Multiplier (LM) test statistic. In a seemingly unrelated regressions context,
Breusch and Pagan show that under the null hypothesis in Equation (43.23), a LM statistic
for dependence is given by:
LM = Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} T_ij ρ̂_ij²  →  χ²( N(N−1)/2 )    (43.27)

where the ρ̂_ij are the correlation coefficients obtained from the residuals of the model as
described above. The asymptotic χ² distribution is obtained for N fixed as T_ij → ∞ for all
(i, j), and follows from a normality assumption on the errors.
Pesaran Scaled LM
It is well known that the standard Breusch-Pagan LM test statistic is not appropriate for testing in large N settings. To address this shortcoming, Pesaran (2004) proposes a standardized version of the LM statistic:

LM_S = sqrt[ 1 / (N(N−1)) ] · Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} (T_ij ρ̂_ij² − 1)  →  N(0, 1)    (43.28)

Pesaran notes one shortcoming of the scaled LM: E(T_ij ρ̂_ij² − 1) is not zero for finite T_ij,
so the statistic is likely to exhibit size distortion for small T_ij, with the distortion worsening as N increases.
Pesaran CD
To address the size distortion of LM and LM_S, Pesaran (2004) proposes an alternative statistic based on the average of the pairwise correlation coefficients ρ̂_ij:

CD_P = sqrt[ 2 / (N(N−1)) ] · Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} sqrt(T_ij) · ρ̂_ij  →  N(0, 1)    (43.29)

Baltagi, Feng, and Kao

Baltagi, Feng, and Kao (2012) propose a simple asymptotic bias correction for the scaled Breusch-Pagan LM statistic:

LM_BC = sqrt[ 1 / (N(N−1)) ] · Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} (T_ij ρ̂_ij² − 1) − N / (2(T−1))  →  N(0, 1)    (43.30)
Example
We illustrate the use of cross-section dependence tests for equation objects using an empirical example from Baltagi (2008) examining gasoline demand in 18 OECD countries over the
period 1960–1978 (Table 2.8, p. 29).
We download the data and create a panel-structured workfile by entering the following command in the EViews command window:
wfopen http://www.wiley.com///wileychi/baltagi/supp/Gasoline.dat
lastobs=342
and clicking on Finish in the import wizard to accept the default settings.
The equation of interest is a cross-section fixed effects regression of log motor gasoline consumption per auto (LGASPCAR) on log real per capita income (LINCOMEP), log real
motor gasoline price (LRPMG), and the log of the stock of cars per capita (LCARPCAP).
We estimate this fixed effect specification by entering the command:
equation gas.ls(cx=f) lgaspcar c lincomep lrpmg lcarpcap
which creates the equation object GAS and displays the estimation results:
Implicit in our approach to estimation in this example, and in the validity of the computed t-statistics, is the assumption that the errors for different cross-sectional units are uncorrelated.
To test for the presence of cross-sectional dependence, we click on View/Residual Diagnostics/Cross-section Dependence Test.
EViews will compute the cross-section dependence tests and display the results in the object
window.
The top of the table displays the test hypothesis and information about the number of cross-section and period observations in the panel. The bottom portion of the table contains the
test results.
The first line contains results for the Breusch-Pagan LM test. EViews shows the test statistic
value, test degrees-of-freedom, and the associated p-value. In this case, the value of the test
statistic, 1027.14, is well into the upper tail of a χ² distribution with 153 degrees of freedom, and we strongly reject the null of no
correlation at conventional significance levels.
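The degrees of freedom and p-value are easily verified with a couple of commands (a small sketch; the 18 cross-sections give 18·17/2 = 153 pairwise correlations, and the scalar names are our own):

scalar df_bp = 18*17/2                     ' = 153
scalar pval_bp = @chisq(1027.14, df_bp)    ' essentially zero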
The next two lines present results for the two scaled Breusch-Pagan tests. Both the Pesaran
scaled Breusch-Pagan LM, and the Baltagi et al. bias-adjusted LM tests are asymptotically
standard normal, and the test statistic results of 48.94 and 48.44 respectively, strongly reject
the null at conventional levels. Note that in this example, the bias correction has a relatively
small effect on the scaled LM statistic as N and T are of similar magnitude.
Since T is relatively small, we may instead wish to focus on the results for the asymptotically standard normal Pesaran CD test which are presented in the final line of the table.
While the test statistic value of 3.22 is significantly below that of the scaled LM tests, the
Pesaran CD test still rejects the null at conventional significance levels.
The Arellano-Bond tests for serial correlation in the first-differenced residuals are based on the statistics

m_j = r_j / sqrt(Var(r_j))    (43.31)

where

r_j = [ 1 / (T − 3 − j) ] Σ_{t=4+j}^{T} r_tj    (43.32)

r_tj = E( Δε_{i,t} Δε_{i,t−j} )    (43.33)
This equation replicates the estimates shown in Table 4(b), page 290, of Arellano and Bond
(1991).
Dependent Variable: N
Method: Panel Generalized Method of Moments
Transformation: First Differences
Date: 12/20/12 Time: 14:47
Sample (adjusted): 1979 1984
Periods included: 6
Cross-sections included: 140
Total panel (unbalanced) observations: 611
White period instrument weighting matrix
Instrument specification: @DYN(N,-2) W W(-1) K YS YS(-1)
@LEV(@SYSPER)
Constant added to instrument list
Variable                            Coefficient    Std. Error    t-Statistic    Prob.

N(-1)                                  0.474150      0.085303      5.558409     0.0000
N(-2)                                 -0.052968      0.027284     -1.941324     0.0527
W                                     -0.513205      0.049345     -10.40027     0.0000
W(-1)                                  0.224640      0.080063      2.805796     0.0052
K                                      0.292723      0.039463      7.417748     0.0000
YS                                     0.609775      0.108524      5.618813     0.0000
YS(-1)                                -0.446371      0.124815     -3.576272     0.0004
@LEV(@ISPERIOD("1979"))                0.010509      0.007251      1.449224     0.1478
@LEV(@ISPERIOD("1980"))                0.014142      0.009959      1.420077     0.1561
@LEV(@ISPERIOD("1981"))               -0.040453      0.011551     -3.502122     0.0005
@LEV(@ISPERIOD("1982"))               -0.021640      0.011891     -1.819843     0.0693
@LEV(@ISPERIOD("1983"))               -0.001847      0.010412     -0.177358     0.8593
@LEV(@ISPERIOD("1984"))               -0.010221      0.011468     -0.891270     0.3731

                          Effects Specification

Cross-section fixed (first differences)
Period fixed (dummy variables)

Mean dependent var     -0.063168    S.D. dependent var      0.137637
S.E. of regression      0.116243    Sum squared resid       8.080432
J-statistic             30.11247    Instrument rank         38
Prob(J-statistic)       0.220105
Test order     m-Statistic         rho         SE(rho)       Prob.

AR(1)           -2.427825      -2.106427      0.867619      0.0152
AR(2)           -0.332535      -0.075912      0.228281      0.7395
Although the original 1991 Arellano and Bond paper does not display results for the first order
test, the same data are used as an example in Doornik, Bond and Arellano (2006, p. 11),
which does display corrected results for both tests.
The tests show that the first order statistic is statistically significant, whereas the second
order statistic is not, which is what we would expect if the model error terms are serially
uncorrelated in levels.
Estimation Background
The basic class of models that can be estimated using panel techniques may be written as:

Y_it = f(X_it, β) + δ_i + γ_t + ε_it    (43.34)

The leading case involves a linear conditional mean specification, so that we have:

Y_it = α + X_it′ β + δ_i + γ_t + ε_it    (43.35)

where Y_it is the dependent variable, X_it is a k-vector of regressors, and ε_it are the
error terms for i = 1, 2, …, M cross-sectional units observed for dated periods
t = 1, 2, …, T. The α parameter represents the overall constant in the model, while the
δ_i and γ_t represent cross-section or period specific effects (random or fixed).
Note that in contrast to the pool specifications described in Equation (41.2) on page 793,
EViews panel equations allow you to specify equations in general form, allowing for nonlinear conditional mean equations with additive effects. Panel equations do not automatically
allow for β coefficients that vary across cross-sections or periods, but you may, of course,
create interaction variables that permit such variation.
Other than these differences, the pool equation discussion of Estimation Background on
page 793 applies to the estimation of panel equations. In particular, the calculation of fixed
and random effects, GLS weighting, AR estimation, and coefficient covariances for least
squares and instrumental variables is equally applicable in the present setting.
Accordingly, the remainder of this discussion will focus on a brief review of the relevant
econometric concepts surrounding GMM estimation of panel equations.
GMM Details
The following is a brief review of GMM estimation and dynamic panel estimators. As
always, the discussion is merely an overview. For detailed surveys of the literature, see
Wooldridge (2002) and Baltagi (2005).
Background
The basic GMM panel estimators are based on moments of the form

g(β) = Σ_{i=1}^{M} g_i(β) = Σ_{i=1}^{M} Z_i′ e_i(β)    (43.36)

where

e_i(β) = ( Y_i − f(X_it, β) )    (43.37)

In some cases we will work symmetrically with moments where the summation is taken
over periods t instead of i.
GMM estimation minimizes the quadratic form:

S(β) = [ Σ_{i=1}^{M} Z_i′ e_i(β) ]′ H [ Σ_{i=1}^{M} Z_i′ e_i(β) ] = g(β)′ H g(β)    (43.38)

with respect to β for a suitably chosen p × p weighting matrix H.
Given estimates of the coefficient vector, β̂, an estimate of the coefficient covariance matrix
is computed as

V(β̂) = (G′HG)^{-1} (G′HΛHG) (G′HG)^{-1}    (43.39)

where Λ is an estimator of the covariance matrix of the moment conditions and

G(β̂) = Σ_{i=1}^{M} Z_i′ ∇f_i(β̂)    (43.40)
In the simple linear case where f(X_it, β) = X_it′β, we may write the coefficient estimator
in closed form as

β̂ = [ (Σ_{i=1}^{M} Z_i′X_i)′ H (Σ_{i=1}^{M} Z_i′X_i) ]^{-1} [ (Σ_{i=1}^{M} Z_i′X_i)′ H (Σ_{i=1}^{M} Z_i′Y_i) ]
  = (M_ZX′ H M_ZX)^{-1} (M_ZX′ H M_ZY)    (43.41)

with variance estimator

V(β̂) = (M_ZX′ H M_ZX)^{-1} (M_ZX′ H Λ H M_ZX) (M_ZX′ H M_ZX)^{-1}    (43.42)

where, for matrices A_i and B_i, we use the notation

M_AB = Σ_{i=1}^{M} A_i′ B_i    (43.43)
The basics of GMM estimation involve: (1) specifying the instruments Z, (2) choosing the
weighting matrix H, and (3) determining an estimator for Λ.
It is worth pointing out that the summations here are taken over individuals; we may equivalently write the expressions in terms of summations taken over periods. This symmetry will
prove useful in describing some of the GMM specifications that EViews supports.
A wide range of specifications may be viewed as specific cases in the GMM framework. For
example, the simple 2SLS estimator, using ordinary estimates of the coefficient covariance,
specifies:
H = (σ̂² M_ZZ)^{-1}
Λ = σ̂² M_ZZ    (43.44)

Substituting, we have the familiar expressions

β̂ = [ M_ZX′ (σ̂² M_ZZ)^{-1} M_ZX ]^{-1} [ M_ZX′ (σ̂² M_ZZ)^{-1} M_ZY ]
  = (M_ZX′ M_ZZ^{-1} M_ZX)^{-1} (M_ZX′ M_ZZ^{-1} M_ZY)    (43.45)

and

V(β̂) = σ̂² (M_ZX′ M_ZZ^{-1} M_ZX)^{-1}    (43.46)
Standard errors that are robust to conditional or unconditional heteroskedasticity and contemporaneous correlation may be computed by substituting a new expression for Λ,

Λ = (1/T) Σ_{t=1}^{T} Z_t′ ê_t ê_t′ Z_t    (43.47)

where the summation is now taken over periods. Similarly, the weighting matrix may be chosen as

H = [ (1/T) Σ_{t=1}^{T} Z_t′ Ω̂_M Z_t ]^{-1}    (43.48)

or as

H = [ (1/M) Σ_{i=1}^{M} Z_i′ ê_i ê_i′ Z_i ]^{-1}    (43.49)

These latter GMM weights are associated with specifications that have arbitrary serial correlation and time-varying variances in the disturbances.
GLS Specifications
EViews allows you to estimate a GMM specification on GLS transformed data. Note that the
moment conditions are modified to reflect the GLS weighting:

g(β) = Σ_{i=1}^{M} g_i(β) = Σ_{i=1}^{M} Z_i′ Ω̂_i^{-1} e_i(β)    (43.50)
An important class of specifications handled in this framework is the dynamic panel data model

Y_it = Σ_{j=1}^{p} ρ_j Y_{i,t−j} + X_it′ β + δ_i + ε_it    (43.51)

First-differencing this specification eliminates the individual effect and produces an equation
of the form

ΔY_it = Σ_{j=1}^{p} ρ_j ΔY_{i,t−j} + ΔX_it′ β + Δε_it    (43.52)
The Arellano-Bond estimator uses, as instruments for period t, the lagged levels of the dependent variable dated t−2 and earlier, so that the instrument set for cross-section i has the form

W_i = ( (Y_{i,1}), (Y_{i,1}, Y_{i,2}), …, (Y_{i,1}, Y_{i,2}, …, Y_{i,T_i−2}) )    (43.53)

For the first-difference transformation, the corresponding one-step (first-stage) weighting matrix is

H = [ (1/M) Σ_{i=1}^{M} Z_i′ ϒ Z_i ]^{-1}    (43.54)
where ϒ is the tridiagonal matrix with 2 on the main diagonal, −1 on the first sub- and super-diagonals, and 0 elsewhere:

ϒ = (1/2) ·
    [  2  −1   0  …   0 ]
    [ −1   2  −1  …   0 ]
    [  0  −1   2  …   0 ]
    [  ⋮               ⋮ ]
    [  0   0   0  …   2 ]    (43.55)
Given estimates of the residuals from the one-step estimator, we may replace the H
weighting matrix with one estimated using computational forms familiar from White period
covariance estimation:

H = [ (1/M) Σ_{i=1}^{M} Z_i′ ê_i ê_i′ Z_i ]^{-1}    (43.56)
This weighting matrix is the one used in the Arellano-Bond two-step estimator.
Lastly, we note that an alternative method of transforming the original equation to eliminate
the individual effect involves computing orthogonal deviations (Arellano and Bover, 1995).
We will not reproduce the details here, but we do note that residuals transformed using
orthogonal deviations have the property that the optimal first-stage weighting matrix for the
transformed specification is simply the 2SLS weighting matrix:

H = [ (1/M) Σ_{i=1}^{M} Z_i′ Z_i ]^{-1}    (43.57)
References
Arellano, M. (1987). "Computing Robust Standard Errors for Within-groups Estimators," Oxford Bulletin
of Economics and Statistics, 49, 431–434.
Arellano, M., and S. R. Bond (1991). "Some Tests of Specification for Panel Data: Monte Carlo Evidence
and an Application to Employment Equations," Review of Economic Studies, 58, 277–297.
Arellano, M., and O. Bover (1995). "Another Look at the Instrumental Variables Estimation of Error-components Models," Journal of Econometrics, 68, 29–51.
Baltagi, B. H. (2005). Econometric Analysis of Panel Data, Third Edition. New York: John Wiley & Sons.
Baltagi, B. H. (2008). Econometric Analysis of Panel Data, 4th Edition. New York: John Wiley & Sons.
Baltagi, B. H. and Young-Jae Chang (1994). "Incomplete Panels: A Comparative Study of Alternative Estimators for the Unbalanced One-way Error Component Regression Model," Journal of Econometrics,
62, 67–89.
Baltagi, B. H., Chang, Y. J., and Q. Li (1999). "Testing For Random Individual And Time Effects Using
Unbalanced Panel Data," Advances in Econometrics, 13, 1–20.
Baltagi, B. H., Chang, Y. J., and Q. Li (1992). "Monte Carlo Results on Several New and Existing Tests for
the Error Components Model," Journal of Econometrics, 54, 95–120.
Baltagi, B. H., Feng, Q., and C. Kao (2012). "A Lagrange Multiplier Test for Cross-sectional Dependence in a
Fixed Effects Panel Data Model," Journal of Econometrics, 170, 164–177.
Breusch, T., and A. Pagan (1980). "The Lagrange Multiplier Test and its Application to Model Specification
in Econometrics," Review of Economic Studies, 47, 239–253.
Gourieroux, C., A. Holly, and A. Monfort (1982). "Likelihood Ratio Test, Wald Test, and Kuhn-Tucker Test
in Linear Models with Inequality Constraints on the Regression Parameters," Econometrica, 50, 63–80.
Harrison, D. and D. L. Rubinfeld (1978). "Hedonic Housing Prices and the Demand for Clean Air," Journal
of Environmental Economics and Management, 5, 81–102.
Background
We will work with the standard triangular representation of a regression specification and
assume the existence of a single cointegrating vector as in Hansen (1992). Consider a panel
structure for the (n + 1)-dimensional time series vector process (y_it, X_it′), with cointegrating equation

y_it = X_it′ β + D_1it′ γ_1i + u_1it    (44.1)

for cross-sections i and periods t, where D_it = (D_1it′, D_2it′)′ are deterministic trend
regressors and the n stochastic regressors X_it are governed by the system of equations:

X_it = Γ_21i′ D_1it + Γ_22i′ D_2it + ε_2it
Δε_2it = u_2it    (44.2)
The p_1-vector of D_1it regressors enters into both the cointegrating equation and the regressors equations, while the p_2-vector of D_2it are deterministic trend regressors which are
included in the regressors equations but excluded from the cointegrating equation (see
Cointegrating Regression on page 255 for further discussion).
It is worth mentioning that most authors have focused attention on the leading case in
which the deterministic trend terms in the panel cointegrating equation consist only of
cross-section dummy variables:

y_it = X_it′ β + γ_i + u_1it
ΔX_it = u_2it    (44.3)
Following Phillips and Moon (1999), we assume that the errors for cross-section i,
u_it = (u_1it, u_2it′)′, are strictly stationary and ergodic with zero mean, contemporaneous covariance matrix Σ_i, one-sided long-run covariance matrix Λ_i,
and long-run covariance matrix Ω_i, each of which we partition conformably with u_it:

Σ_i = E(u_it u_it′) = [ σ_11i  σ_12i ]
                      [ σ_21i  Σ_22i ]

Λ_i = Σ_{j=0}^{∞} E(u_it u_{i,t−j}′) = [ λ_11i  λ_12i ]    (44.4)
                                       [ λ_21i  Λ_22i ]

Ω_i = Σ_{j=−∞}^{∞} E(u_it u_{i,t−j}′) = [ ω_11i  ω_12i ] = Λ_i + Λ_i′ − Σ_i
                                        [ ω_21i  Ω_22i ]

and define what Phillips and Moon term the long-run average covariance matrices Λ̄ = E(Λ_i) and Ω̄ = E(Ω_i).
Lastly, and perhaps most importantly, we assume that there is independence in the errors
across cross-sections.
Given this basic structure, we may define panel estimators of the cointegrating relationship
coefficient b using extensions of single-equation FMOLS and DOLS methods. There are different variants for each of the estimators depending on the assumptions that one wishes to
make about the long-run covariances and how one wishes to use the panel structure of the
data.
We begin by describing how to estimate panel FMOLS and DOLS models in EViews. We then
discuss the views and procedures associated with the panel equation and offer a simple
example. Lastly, we provide technical details on the estimation methods.
Equation Specification
You should enter the name of the
dependent variable, y , followed by
a list of cointegrating regressors, X ,
in the Equation specification edit
field, then use the Trend specification drop-down menu to specify the
deterministic trend components
(None, Constant (Level), Linear Trend, Quadratic Trend). Your selection will include all
trends up to the specified order. You may use the Deterministic regressors edit box to add
deterministic trend regressors that are not offered in the pre-specified list.
Fully-Modified OLS
To estimate your equation using
panel FMOLS, select Fully-modified OLS (FMOLS) in the Nonstationary estimation settings
dropdown menu. The main dialog
and options pages will change to show the available settings.
First, you should choose between the pooled, weighted, and group mean (averaged) FMOLS
estimators:
Pooled estimation performs standard FMOLS on the pooled sample after removing the
deterministic components from both the dependent variable and the regressors.
Pooled (weighted) estimation accounts for heterogeneity by using cross-section specific estimates of the long-run covariances to reweight the data prior to computing
pooled FMOLS.
Grouped mean estimation computes the cross-section average of the individual cross-section FMOLS estimates.
See Fully-Modified OLS on page 901 for a detailed description of the methods.
Additionally, you may click on the Long-run variances: Options button to specify options
for computing the long-run covariances. By default, EViews will estimate the individual and
long-run average covariance matrices using a (non-prewhitened) kernel approach with a
Bartlett kernel function and Newey-West fixed bandwidth. To change the whitening or kernel settings, click on the Options button and enter your changes in the sub-dialog.
Here we have specified that the long-run covariances be computed using a nonparametric
method with the quadratic-spectral kernel and a real-valued bandwidth chosen by Andrews
automatic bandwidth selection method. Click on OK to accept the updated settings.
Lastly, you can specify the form of the first-stage cointegrating equation regression that
EViews uses to obtain u it for computing the long-run covariances. By default, the first-stage
regression assumes homogeneous long-run coefficients, but you may allow for different
coefficients by selecting the Heterogeneous first-stage long-run coefficients checkbox.
Clicking on the Options tab of the estimation dialog shows the settings for computing the
coefficient covariance for the long-run coefficients and specifying the default coefficient
name:
For pooled estimation, you may choose between the Default (homogeneous variances)
moment estimator or a Sandwich (heterogeneous variances) method as described in
Pooled FMOLS, on page 901. You may also elect to apply or not apply a degrees-of-freedom adjustment to the estimated coefficient covariance.
The pooled weighted and grouped methods only offer the d.f. Adjustment option.
Dynamic OLS
To estimate your equation using DOLS, first fill out the equation specification, then select
Dynamic OLS (DOLS) in the Nonstationary estimation settings dropdown menu. The dialog will change to display the settings for DOLS.
You should use the Panel method
drop-down to choose between the
pooled, weighted, and group mean
(averaged) DOLS estimators:
Pooled estimation performs
standard DOLS on the pooled sample of data after first removing the deterministic
components from both the dependent variable and the regressors.
Pooled (weighted) estimation accounts for heterogeneity by using cross-section specific estimates of the conditional long-run residual variances to reweight the moments
for each cross-section when computing the pooled DOLS estimator.
Grouped mean estimation computes the cross-section average of the individual cross-section DOLS estimates.
If you specify pooled weighted estimation, EViews will display a Long-run var wgts:
Options button which will allow you to specify the settings used in computing the long-run
variances for use as weights.
Next, you should specify the method of selecting leads and lags. By default, the Lag & lead
method is Fixed with Lags and Leads each set to 1. You may specify a different number of
lags or leads or you can use the dropdown to enable automatic information criterion selection of the lag and lead orders for each cross-section by selecting Akaike, Schwarz, or Hannan-Quinn. Note that the automatic lag selection method is conducted by estimating
separate regressions for each cross-section. If you select None, EViews will estimate static
OLS.
If you select one of the info criterion selection methods, you will be prompted for a maximum lag and lead length. You may enter a value, or you may retain the default entry *,
which instructs EViews to use the observation-based rule-of-thumb described in Equation (B.1)
to set the maximum for each cross-section, where k is the number of coefficients in the equation. As in the non-panel setting, we urge careful thought in the use of automatic selection methods since the
purpose of including leads and lags is to remove long-run dependence, and automatic methods were not designed for this purpose.
When you are done modifying the main estimation settings, click on the Options tab of the
dialog to see the options for computing the long-run coefficient covariance matrix estimates
and specifying the default coefficient name:
For pooled estimation you will be prompted to specify the Default (homogeneous variances) moment estimator or a Sandwich (heterogeneous variances) method as described
in Pooled DOLS, on page 904. You will also be prompted to specify the use of a Long-run
variance estimator or Ordinary variance estimator for use in scaling the moment matrix or
in computing the individual variance weights sandwich estimator, and to choose whether to
perform a d.f. Adjustment.
Pooled weighted estimation offers only a choice of whether to perform the degree-of-freedom correction (since the long-run variance settings are specified on the first page of the
dialog).
Grouped estimation offers a variety of choices for computing the individual coefficient covariance matrices. You may use the Individual covariances method drop-down to choose
between the Default (rescaled OLS), Ordinary Least Squares, White, or HAC - Newey
West.
The Default (rescaled OLS) method re-scales the ordinary least squares coefficient covariance using an estimator of the long-run variance of DOLS residuals
(multiplying by the ratio of the long-run variance to the
ordinary squared standard error). Alternately, you may
employ a sandwich-style HAC (Newey-West) covariance
matrix estimator.
In both cases, you may use the options button (labeled Options or HAC Options, respectively) to override the default method for computing the long-run variance (non-prewhitened Bartlett kernel and a Newey-West fixed bandwidth). You may also select White
covariances or Ordinary Least Squares covariances. The latter two methods are offered primarily for comparison purposes.
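To make the rescaling concrete, the following is a minimal sketch (in Python, not EViews code) of the idea described above: the OLS coefficient covariance is multiplied by the ratio of a long-run variance estimate of the DOLS residuals to the ordinary residual variance. The function name and arguments are ours, and the long-run variance is assumed to have been computed separately (for example, by a kernel estimator).

import numpy as np

def rescaled_ols_covariance(V_ols, longrun_var, s2):
    """Scale the OLS coefficient covariance V_ols by the ratio of the
    long-run residual variance estimate (longrun_var) to the ordinary
    squared standard error (s2)."""
    return (longrun_var / s2) * np.asarray(V_ols)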
Views
For the most part, the views of a cointegrating equation require little discussion.
For example, the Estimation Output shows the estimated coefficients and summary statistics of the equation, the Representations view offers text descriptions of the estimated cointegrating
equation, the Covariance Matrix displays the coefficient covariance, and the Residual Diagnostics (Correlogram - Q-statistics,
Correlogram Squared Residuals, Histogram - Normality Test)
offer statistics based on residuals.
That said, a few comments about the construction of these views
are in order.
The Estimation Output, Representations, Covariance Matrix views of an equation
only show results for the cointegrating equation and the long-run coefficients. In the
representations view, the presence of individual trend coefficients is represented by
the presence of the expression [CX=DETERM]. Similarly, the Coefficient Diagnostics do not include any of the deterministics. Note also that the short-run dynamics
added in DOLS estimation are not included in these views.
(Note that while EViews does not display the coefficients for the deterministics and
short-run dynamics, these coefficients are used in constructing relevant measures such
as fit statistics and residuals.)
You may use the Individual Coefficients view to examine the estimated trend coefficients for each cross-section.
The method used to compute residuals in the Actual, Fitted, Residual views and the
Residual Diagnostics views differs depending on the estimation method. For FMOLS,
the values are not based on the transformed data; the residuals are derived by substituting the estimated coefficients into the original cointegrating equation and computing the residuals. For DOLS, the residuals from the cointegrating equation are adjusted
for the deterministics and estimated short-run dynamics. Standardized residuals are
simply the residuals divided through by the long-run variance estimate.
The test statistics in the Residual Diagnostics are computed using the pooled residual
data and probably should be used only for illustrative purposes.
Procedures
The procs for an equation estimated using panel cointegrating regression are a subset of
those found in least squares estimation.
While most of the relevant issues were discussed in the previous section (e.g., construction of residuals), you should note
that:
Forecasts constructed using the Forecast... procedure
and models created using Make Model procedure follow the Representations view in omitting DOLS shortrun dynamics. If you wish to construct forecasts that
incorporate the short-run dynamics, you should use ordinary least squares to estimate
an equation that explicitly includes the lags and leads of the cointegrating regressors.
The forecast standard errors generated by the Forecast... proc and those obtained
from solving models created using the Make Model... proc both employ the S.E. of
the regression reported in the estimation output. This may not be appropriate.
When creating a model from a panel equation with deterministic trends, EViews will
create a series in the workfile containing the fitted values of the trend terms and will
incorporate this series in the equation specification. If you wish to solve for your
model with out-of-sample values, you will need to fill in the appropriate fitted values
in the series.
Examples
To illustrate the estimation of panel cointegration models in EViews, we follow Kao, Chiang,
and Chen (KCC, 1999) who apply panel cointegration analysis to the study of economic
growth by estimating the cointegrating relationship for total factor productivity and domestic and foreign R&D capital stock.
The KCC data, which we provide in the workfile tfpcoint.WF1 consist of annual data on
log total factor productivity (LTFP), log domestic (LRD), and log foreign (LFRD) R&D capital
stock for 22 countries for the years 1971 to 1990. We consider estimation of simple pooled
FMOLS and DOLS estimators for the cointegrating vectors as in Table 4(i) (p. 703) and Table
5(i) (p. 704).
To begin, display the panel cointegrating equation dialog, and fill out the top portion of the
dialog as depicted below:
Following KCC, we assume a fixed effect specification with LTFP as the dependent variable
and LRD and LFRD as the cointegrating regressors. To handle the fixed effect we specify a
Constant (Level) in the Trend specification drop-down menu.
The default panel cointegration estimation method Pooled estimation using Fully-modified
OLS (FMOLS) corresponds to the estimates in Table 4(i) of KCC, so we leave those settings
unchanged.
To match the KCC estimates, we click on Long-run variances: Options button to display the
long-run covariance settings, and change the Kernel options by setting a user-specified
bandwidth value of 6:
Click on OK to accept the changes. Since we wish to estimate the equation using the default
coefficient covariances, we simply click on OK again to estimate the equation using the
specified settings. EViews estimates the equation and displays the results:
Dependent Variable: LTFP
Method: Panel Fully Modified Least Squares (FMOLS)
Date: 01/14/13 Time: 15:23
Sample (adjusted): 1972 1990
Periods included: 19
Cross-sections included: 22
Total panel (balanced) observations: 418
Panel method: Pooled estimation
Cointegrating equation deterministics: C
Coefficient covariance computed using default method
Long-run covariance estimates (Bartlett kernel, User bandwidth =
6.0000)
Variable          Coefficient    Std. Error    t-Statistic    Prob.
LRD               0.082284       0.017282      4.761167       0.0000
LFRD              0.114272       0.029055      3.933005       0.0001

R-squared             0.608017      Mean dependent var     -0.016190
Adjusted R-squared    0.585135      S.D. dependent var      0.031831
S.E. of regression    0.020502      Sum squared resid       0.165613
Durbin-Watson stat    0.286810      Long-run variance       0.001347
The top portion of the output displays the estimation method and information about the
sample employed in estimation. Just below the sample information EViews shows that the
estimates are based on pooled estimation using only a constant as the cross-section specific
trend regressor. The coefficient covariances are computed using the default settings, and the
long-run covariances used a Bartlett kernel with the user-specified bandwidth.
The middle section shows the coefficient estimates, standard errors, and t-statistics, which
differ a bit from the results in KCC Table 4(i), as KCC report estimates for a slightly different
model. As in KCC, both R&D variables, LRD and LFRD are positively related to LTFP, and the
coefficients are statistically significant.
The bottom portion of the output shows various summary statistics. Note in particular the
reported Long-run variance, which shows ω̂_1.2, the estimated long-run average variance of
u_1it conditional on u_2it obtained from the residuals. The square root of this variance, 0.0367, is somewhat higher than the S.E. of regression value of 0.0205, which is
based on the ordinary estimator of the residual variance.
Clicking View/Representations shows the commands used to estimate the equation, along
with a text representation of the long-run relationship:
Note the [CX=DETERM] component which shows that there are additional heterogeneous trend terms in the relationship (in this case the fixed effects). The presence of this
term instructs EViews to use this information when constructing models, and when computing fits and forecasts.
Suppose, for example, we select Proc/Make Model from our estimated equation. EViews
will create a model object containing the equation results:
Notice the presence of EQ4I_EFCT in the model equation and in the workfile. Double-clicking on EQ4I in the model object displays the equation:
Notice that we have replaced the deterministic components [CX=DETERM] in the equation specification with the fitted values in the series EQ4I_EFCT. In this case EQ4I_EFCT just
holds the estimated fixed effects, but more generally it will hold the fitted values for the
deterministic terms in your regression.
To estimate the model using DOLS, we again display the equation dialog, fill out the top portion as before:
and change the Method to Dynamic OLS (DOLS). To match the settings in KCC, we set the
Panel Method to Pooled, and specify the Fixed lags and leads, with 2 lags and 1 lead:
Click on OK to estimate the equation using the default covariance method. EViews will display the results:
Dependent Variable: LTFP
Method: Panel Dynamic Least Squares (DOLS)
Date: 01/15/13 Time: 15:36
Sample (adjusted): 1974 1989
Periods included: 16
Cross-sections included: 22
Total panel (balanced) observations: 352
Panel method: Pooled estimation
Cointegrating equation deterministics: C
Fixed leads and lags specification (lead=1, lag=2)
Coefficient covariance computed using default method
Long-run variance (Bartlett kernel, Newey-West fixed bandwidth) used
for coefficient covariances
Variable          Coefficient    Std. Error    t-Statistic    Prob.
LRD               0.109353       0.023067      4.740719       0.0000
LFRD              0.047674       0.037756      1.262690       0.2082

R-squared             0.932997      Mean dependent var     -0.018869
Adjusted R-squared    0.851443      S.D. dependent var      0.034313
S.E. of regression    0.013225      Sum squared resid       0.034632
Long-run variance     0.000156
Again, the top portion of the output shows the estimation method, sample, and information
about settings employed in estimation. Note in particular that the default coefficient covariance matrix computation uses an estimator of the long-run variance computed using a
Bartlett kernel and fixed Newey-West bandwidth.
The long-run coefficients, standard errors, and t-statistics are close to their counterparts in
KCC Table 5(i).
We may contrast these results to the group-mean estimates of the same specifications. The
group-mean FMOLS results may be obtained by calling up the original FMOLS equation and
selecting Grouped in the Panel method drop-down menu. The group-mean FMOLS coefficient results are given by:
Variable          Coefficient    Std. Error    t-Statistic    Prob.
LRD               0.319009       0.021539      14.81044       0.0000
LFRD             -0.061544       0.022454     -2.740894       0.0064
which differ markedly from the pooled estimates, suggesting that heterogeneity in the cointegrating equation or the long-run covariances may be important. Likewise, the corresponding group-mean DOLS results are:
Variable          Coefficient    Std. Error    t-Statistic    Prob.
LRD               0.401746       0.063727      6.304167       0.0000
LFRD             -0.093889       0.055137     -1.702828       0.0902
Technical Details
Fully-Modified OLS
Phillips and Moon (1999), Pedroni (2000), and Kao and Chiang (2000) offer extensions of
the Phillips and Hansen (1990) fully modified OLS estimator to panel settings.
Pooled FMOLS
The pooled FMOLS estimator outlined by Phillips and Moon (1999) is a straightforward
extension of the standard Phillips and Hansen estimator. Given estimates of the average
long-run covariances, Λ̂ and Ω̂, we may define the modified dependent variable and serial
correlation correction terms

\hat{y}_{it}^{+} = \tilde{y}_{it} - \hat{\omega}_{12}\hat{\Omega}_{22}^{-1}\hat{u}_{2it}    (44.5)

\hat{\lambda}_{12}^{+} = \hat{\lambda}_{12} - \hat{\omega}_{12}\hat{\Omega}_{22}^{-1}\hat{\Lambda}_{22}    (44.6)

where ỹ_it and X̃_it are the data purged of the individual deterministic trends, and

\hat{\beta}_{FP} = \Bigl( \sum_{i=1}^{N}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}' \Bigr)^{-1} \sum_{i=1}^{N}\sum_{t=1}^{T} \bigl( \tilde{X}_{it}\hat{y}_{it}^{+} - \hat{\lambda}_{12}^{+\prime} \bigr)    (44.7)
It is worth noting the pooled estimator simply sums across cross-sections separately in the
numerator and denominator.
The estimates of the long-run covariances may be obtained by standard approaches using
the u it residuals obtained from estimating Equation (44.1) and after removing the deterministic components in Equation (44.2). Note that EViews allows you to relax the assumption of
common β in these first-stage estimates. Given estimates of the individual long-run covariances for each cross-section, Λ̂_i and Ω̂_i, we form our estimators by taking simple cross-section averages:

\hat{\Lambda} = \frac{1}{N}\sum_{i=1}^{N}\hat{\Lambda}_i, \qquad \hat{\Omega} = \frac{1}{N}\sum_{i=1}^{N}\hat{\Omega}_i    (44.8)
Phillips and Moon (1999) show that, under appropriate assumptions, the pooled estimator is asymptotically normal under sequential limits as (T, N → ∞):

\sqrt{N}\,T\,(\hat{\beta}_{FP} - \beta) \xrightarrow{d} N\bigl(0,\; a\,\omega_{1.2}\,\Omega_{22}^{-1}\bigr)    (44.9)

for a constant a that depends on the deterministic variable specification, where ω_1.2 is the
long-run variance of u_1t conditional on u_2t, given by

\omega_{1.2} = \omega_{11} - \omega_{12}\Omega_{22}^{-1}\omega_{21}
Instead of estimating the asymptotic variance directly using estimates of ω_1.2, Ω_22, and the
corresponding a for every possible deterministic specification, EViews adopts the Pedroni
(2000) and Mark and Sul (2003) approach of forming a consistent estimator using moments
of the regressors:

\hat{V}_{FP} = \hat{\omega}_{1.2}\,\hat{M}_{FP}^{-1}    (44.10)

where

\hat{M}_{FP} = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}'    (44.11)
In related work, Mark and Sul (2003) propose a sandwich form of this estimator which
allows for heterogeneous variances:
\hat{V}_{FP} = \hat{M}_{FP}^{-1}\,\hat{D}_{FP}\,\hat{M}_{FP}^{-1}    (44.12)

where

\hat{D}_{FP} = \frac{1}{N}\sum_{i=1}^{N} \hat{\omega}_{1.2i}\,\frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}'    (44.13)
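As a rough illustration of Equations (44.5) through (44.8), the following Python sketch (not EViews code) computes the pooled FMOLS point estimate from data that have already been purged of their deterministic components, assuming that the long-run covariance blocks for each cross-section have been estimated elsewhere (for example, by a kernel method). All function and argument names are ours.

import numpy as np

def pooled_fmols(y, X, u2, omega12, Omega22, Lambda22, lambda12):
    """y[i]: (T,) demeaned dependent data; X[i]: (T, n) demeaned regressors;
    u2[i]: (T, n) residuals from the regressors equations; omega12[i] (n,),
    Omega22[i] (n, n), Lambda22[i] (n, n), lambda12[i] (n,) are the estimated
    long-run covariance blocks for cross-section i."""
    N = len(y)
    # long-run average covariance blocks, Eq. (44.8)
    omega12_bar = sum(omega12) / N
    Omega22_bar = sum(Omega22) / N
    Lambda22_bar = sum(Lambda22) / N
    lambda12_bar = sum(lambda12) / N
    w = omega12_bar @ np.linalg.inv(Omega22_bar)      # omega12 * Omega22^{-1}
    lam_plus = lambda12_bar - w @ Lambda22_bar        # Eq. (44.6)
    num, den = 0.0, 0.0
    for i in range(N):
        y_plus = y[i] - u2[i] @ w                     # Eq. (44.5)
        den = den + X[i].T @ X[i]
        # subtract T copies of the serial correlation correction, Eq. (44.7)
        num = num + X[i].T @ y_plus - X[i].shape[0] * lam_plus
    return np.linalg.solve(den, num)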
Weighted FMOLS
Pedroni (2000) and Kao and Chiang (2000) describe feasible pooled FMOLS estimators for
heterogeneous cointegrated panels where the long-run variances differ across cross-sections.
We again use first-stage estimates of the long-run and regressors equations to obtain the
residuals, estimate the individual long-run covariances Λ̂_i and Ω̂_i, and let

\hat{\lambda}_{12i}^{+} = \hat{\lambda}_{12i} - \hat{\omega}_{12i}\hat{\Omega}_{22i}^{-1}\hat{\Lambda}_{22i}    (44.14)
and define the weighted variables and correction terms

\hat{y}_{it}^{++} = \tilde{y}_{it} - \hat{\omega}_{12i}\hat{\Omega}_{22i}^{-1}\hat{u}_{2it} - \hat{\omega}_{1.2i}^{-1/2}\bigl(\hat{\omega}_{1.2i}^{1/2}\tilde{X}_{it} - \hat{\omega}_{12i}\tilde{X}_{it}\bigr)\hat{\beta}_{0}    (44.15)

\hat{\lambda}_{12i}^{++} = \hat{\omega}_{1.2i}^{-1/2}\,\hat{\lambda}_{12i}^{+}\,\hat{\Omega}_{22i}^{-1/2}    (44.16)

where β̂_0 is a preliminary estimate of the long-run coefficients.
The weighted FMOLS estimator is then

\hat{\beta}_{FW} = \Bigl( \sum_{i=1}^{N}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}' \Bigr)^{-1} \sum_{i=1}^{N}\sum_{t=1}^{T} \bigl( \tilde{X}_{it}\hat{y}_{it}^{++} - \hat{\lambda}_{12i}^{++\prime} \bigr)    (44.17)

and the asymptotic covariance is estimated using a moment estimator as in Pedroni (2000):

\hat{V}_{FW} = \Bigl( \frac{1}{N}\sum_{i=1}^{N} \frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}' \Bigr)^{-1}    (44.18)
Group-Mean FMOLS
Pedroni (2000, 2001) proposes a grouped-mean FMOLS estimator which averages over the
individual cross-section FMOLS estimates:
\hat{\beta}_{FG} = \frac{1}{N}\sum_{i=1}^{N} \Bigl( \sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}' \Bigr)^{-1} \sum_{t=1}^{T} \bigl( \tilde{X}_{it}\hat{y}_{it}^{+} - \hat{\lambda}_{12i}^{+\prime} \bigr)    (44.19)
Pedroni (2001) notes that in the presence of heterogeneity in the cointegrating relationships,
the grouped-mean estimator offers the desirable property of providing consistent estimates
of the sample mean of the cointegrating vectors, in contrast to the pooled and weighted estimators.
We estimate the asymptotic covariance matrix for this estimator by computing the variance
of the average of the individual estimates:
\hat{V}_{FG} = \frac{1}{N^{2}}\sum_{i=1}^{N} \hat{\omega}_{1.2i}\Bigl( \frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{X}_{it}\tilde{X}_{it}' \Bigr)^{-1}    (44.20)
It is worth noting that the basic t-statistics obtained using this covariance estimator differ
from the t-statistics proposed by Pedroni (2001), which aggregate individual statistics across
the cross-section dimension.
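The grouped-mean calculation itself is simple averaging. The sketch below (Python, not EViews code) shows the structure of Equations (44.19) and (44.20): average the per-cross-section estimates, and estimate the covariance of the average from the per-cross-section covariance estimates. The inputs are assumed to have been obtained from individual cross-section FMOLS estimation, and all names are ours.

import numpy as np

def grouped_mean(betas, covs):
    """betas: list of per-cross-section coefficient vectors;
    covs: list of the corresponding covariance matrix estimates."""
    N = len(betas)
    beta_bar = sum(np.asarray(b) for b in betas) / N        # average estimate, Eq. (44.19)
    V_bar = sum(np.asarray(V) for V in covs) / (N * N)      # variance of the average, Eq. (44.20)
    return beta_bar, V_bar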
Pooled DOLS
Kao and Chiang (2000) describe the pooled DOLS estimator in which we use ordinary least
squares to estimate an augmented cointegrating regression equation:
\tilde{y}_{it} = \tilde{X}_{it}'\beta + \sum_{j=-q_i}^{r_i} \Delta\tilde{X}_{it+j}'\,\delta_i + v_{1it}    (44.21)

where ỹ_it and X̃_it are the data purged of the individual deterministic trends. Note that the
short-run dynamics coefficients δ_i are allowed to be cross-section specific.

Let Z̃_it be regressors formed by interacting the ΔX̃_{it+j} terms with cross-section dummy
variables, and let W̃_it = (X̃_it', Z̃_it')'. Then the pooled DOLS estimator may be written as

\begin{bmatrix} \hat{\beta}_{DP} \\ \hat{\gamma}_{DP} \end{bmatrix} = \Bigl( \sum_{i=1}^{N}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}' \Bigr)^{-1} \sum_{i=1}^{N}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{y}_{it}    (44.22)
Kao and Chiang (2000) show that the asymptotic distribution of this estimator is the same as
for pooled FMOLS. We may estimate the asymptotic covariance matrix of the b DP using the
corresponding sub-matrix of:
\hat{V}_{DP} = \hat{\omega}_{1.2}\,\hat{M}_{DP}^{-1}    (44.23)

where

\hat{M}_{DP} = \frac{1}{N}\sum_{i=1}^{N} \frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}'    (44.24)

Alternately, we may use the sandwich form of the estimator,

\hat{V}_{DP} = \hat{M}_{DP}^{-1}\,\hat{D}_{DP}\,\hat{M}_{DP}^{-1}    (44.25)

where

\hat{D}_{DP} = \frac{1}{N}\sum_{i=1}^{N} \hat{\omega}_{1.2i}\,\frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}'    (44.26)

employs the individual long-run variance estimates ω̂_1.2i.
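The following Python sketch (not EViews code) illustrates the mechanics of Equations (44.21) and (44.22): build the lead and lag difference terms separately for each cross-section, interact them with cross-section dummies so that the short-run coefficients are heterogeneous, and estimate the common long-run coefficients by pooled least squares. The data are assumed to be already purged of deterministic components, and all names are ours.

import numpy as np

def pooled_dols(y, X, lags=2, leads=1):
    """y[i]: (T,) demeaned dependent data; X[i]: (T, n) demeaned regressors."""
    N, n = len(y), X[0].shape[1]
    rows_y, rows_X, rows_Z = [], [], []
    width = (lags + leads + 1) * n            # lead/lag block size per cross-section
    for i in range(N):
        dX = np.diff(X[i], axis=0)            # first differences of the regressors
        T = X[i].shape[0]
        for t in range(lags + 1, T - leads):  # usable observations
            z = np.concatenate([dX[t - 1 + j] for j in range(-lags, leads + 1)])
            rows_y.append(y[i][t])
            rows_X.append(X[i][t])
            zfull = np.zeros(N * width)       # interact leads/lags with cross-section dummies
            zfull[i * width:(i + 1) * width] = z
            rows_Z.append(zfull)
    W = np.hstack([np.asarray(rows_X), np.asarray(rows_Z)])
    coefs, *_ = np.linalg.lstsq(W, np.asarray(rows_y), rcond=None)
    return coefs[:n]                          # beta: the long-run coefficients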
Weighted DOLS
Mark and Sul (1999) describe a simple weighted DOLS estimator which allows for heterogeneity in the long-run variances. Define the weighted regression:
\begin{bmatrix} \hat{\beta}_{DW} \\ \hat{\gamma}_{DW} \end{bmatrix} = \Bigl( \sum_{i=1}^{N} \hat{\omega}_{1.2i}^{-1}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}' \Bigr)^{-1} \sum_{i=1}^{N} \hat{\omega}_{1.2i}^{-1}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{y}_{it}    (44.27)
for individual long-run variance estimates ω̂_1.2i obtained after preliminary DOLS estimation.
In EViews, we estimate the asymptotic covariance matrix of the b DW using the corresponding sub-matrix of:
\hat{V}_{DW} = \Bigl( \frac{1}{N}\sum_{i=1}^{N} \hat{\omega}_{1.2i}^{-1}\,\frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}' \Bigr)^{-1}    (44.28)
Note that this very simple form of weighted estimation differs from the more complex estimator described in Kao and Chiang (2000), which mixes the FMOLS endogeneity correction,
weighting of both the dependent variable and regressors, and the DOLS serial correlation correction.
Group-mean DOLS
Pedroni (2001) extends the grouped estimator concept to DOLS estimation by averaging over
the individual cross-section DOLS estimates:
\begin{bmatrix} \hat{\beta}_{DG} \\ \hat{\gamma}_{DG} \end{bmatrix} = \frac{1}{N}\sum_{i=1}^{N} \Bigl( \sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}' \Bigr)^{-1} \sum_{t=1}^{T} \tilde{W}_{it}\tilde{y}_{it}    (44.29)

The asymptotic covariance matrix is obtained from the corresponding sub-matrix of the variance of the average of the individual estimators:

\hat{V}_{DG} = \frac{1}{N^{2}}\sum_{i=1}^{N} \hat{\omega}_{1.2i}\Bigl( \frac{1}{T^{2}}\sum_{t=1}^{T} \tilde{W}_{it}\tilde{W}_{it}' \Bigr)^{-1}    (44.30)
We again note that the basic t-statistics involving this covariance estimator differ from the t-statistics proposed by Pedroni (2001), which aggregate individual statistics across the cross-section dimension.
References
Baltagi, Badi (2008). Econometric Analysis of Panel Data, New York: John Wiley & Sons.
Baltagi, Badi and Chihwa Kao (2000). Nonstationary Panels, Cointegration in Panels and Dynamic Panels: A Survey, in Baltagi, B. H. ed., Nonstationary Panels, Panel Cointegration and Dynamic Panels,
15, Amsterdam: Elsevier, 7-51.
Breitung, Jörg and M. Hashem Pesaran (2008). Unit Roots and Cointegration in Panels, in Mátyás,
László and Patrick Sevestre, eds. The Econometrics of Panel Data, Berlin: Springer-Verlag Berlin Heidelberg.
Hansen, Bruce E. (1992). Efficient Estimation and Testing of Cointegrating Vectors in the Presence of
Deterministic Trends, Journal of Econometrics, 53, 87-121.
Kao, Chihwa and Min-Hsien Chiang (2000). On the Estimation and Inference of a Cointegrated Regression in Panel Data, in Baltagi, B. H. et al. eds., Nonstationary Panels, Panel Cointegration and
Dynamic Panels, 15, Amsterdam: Elsevier, 179-222.
Kao, Chihwa, Chiang, Min-Hsien, and Bangtian Chen (1999). International R&D Spillovers: An Application of Estimation and Inference in Panel Cointegration, Oxford Bulletin of Economics and Statistics,
61, 693-711.
Mark, Nelson C. and Donggyu Sul (1999). A Computationally Simple Cointegration Vector Estimator for
Panel Data, Ohio State University manuscript.
Mark, Nelson C. and Donggyu Sul (2003). Cointegration Vector Estimation by Panel DOLS and Long-run
Money Demand, Oxford Bulletin of Economics and Statistics, 65, 655-680.
Pedroni, Peter (2000). Fully Modified OLS for Heterogeneous Cointegrated Panels, in Baltagi, B. H. ed.,
Nonstationary Panels, Panel Cointegration and Dynamic Panels, 15, Amsterdam: Elsevier, 93-130.
Pedroni, Peter (2001). Purchasing Power Parity Tests in Cointegrated Panels, The Review of Economics
and Statistics, 83, 727-731.
Phillips, Peter C. B. and Bruce E. Hansen (1990). Statistical Inference in Instrumental Variables Regression with I(1) Processes, Review of Economics Studies, 57, 99-125.
Phillips, Peter C. B. and Hyungsik R. Moon (1999). Linear Regression Limit Theory for Nonstationary
Panel Data, Econometrica, 67, 1057-1111.
Saikkonen, Pentti (1992). Estimation and Testing of Cointegrated Systems by an Autoregressive Approximation, Econometric Theory, 8, 1-27.
Stock, James H. and Mark Watson (1993). A Simple Estimator Of Cointegrating Vectors In Higher Order
Integrated Systems, Econometrica, 61, 783-820.
Here we see the dialog for graphing a single series. Note in particular the panel workfile specific Panel options section which controls how the multiple cross-sections in your panel
should be handled. If you select Stack cross sections EViews will display a single graph of
the stacked data, labeled with both the cross-section and date. For example, with a Line &
Symbol type graph, we have
Alternately, selecting Individual cross sections displays separate time series graphs for each
cross-section, while Combined cross sections displays separate lines for each cross-section
911
in a single graph. We caution you that both types of panel graphs may become difficult to
read when there are large numbers of cross-sections. For example, the individual graphs for
the 10 cross-section panel data depicted here provide information on general trends, but little in the way of detail:
Nevertheless, the graph does offer you the ability to examine all of your cross-sections at a glance.
The remaining two options allow you to plot a single graph containing summary statistics
for each period.
For line graphs, you may select Mean plus SD
bounds, and then use the drop down menu on the
lower right to choose between displaying no
bounds, and 1, 2, or 3 standard deviation bounds.
For other graph types such as area or spike, you
may only display the means of the data by period.
For line graphs you may select Median plus quantiles, and then use the drop down menu to choose
additional extreme quantiles to be displayed. For
other graph types, only the median may be plotted.
Suppose, for example, that we display a line graph
containing the mean and 2 standard deviation
bounds for the F series. EViews computes, for each period, the mean and standard deviation
of F across cross-sections, and displays these in a time series graph:
Similarly, we may display a spike graph of the medians of F for each period:
Displaying graph views of a group object in a panel workfile involves similar choices about
the handling of the panel structure.
By-Statistics
While not specifically panel aware, there are a variety of places in EViews where you may
use a classification variable to compute statistics by-group. In these cases, you may use the
@crossid identifier to compute statistics for each cross-section.
For example, you may open a series object and select View/Stats by Classification... to display summary statistics for various groups:
(The resulting table reports the mean, standard deviation, and number of observations of the series for each of the 22 cross-section groups, each based on 20 observations, together with an All row showing an overall mean of 0.853332 and standard deviation of 0.195387 across the full 440 observations.)
Similarly, you may test equality of means across cross-sections (View/Equality Tests by
Classification...). Simply open the series, then select View/Descriptive Statistics & Tests/
Equality Tests by Classification.... Enter FN in the Series/Group for Classify edit field, and
select OK to continue. EViews will compute and display the results for an ANOVA for F, classifying the data by firm ID. The top portion of the ANOVA results is given by:
Test for Equality of Means of F
Categorized by values of FN
Date: 08/22/06 Time: 17:11
Sample: 1935 1954
Included observations: 200
Method            df              Value        Probability
Anova F-test      (9, 190)        293.4251     0.0000
Welch F-test*     (9, 71.2051)    259.3607     0.0000

Analysis of Variance

Source of Variation    df      Sum of Sq.    Mean Sq.
Between                9       3.21E+08      35640052
Within                 190     23077815      121462.2
Total                  199     3.44E+08      1727831.
Note in this example that we have relatively few cross-sections with moderate numbers of
observations in each firm. Data with very large numbers of group identifiers and few observations are not recommended for this type of testing. To test equality of means between
periods, call up the dialog and enter either YEAR or DATEID as the series by which you will
classify.
A graphical summary of
the primary information
in the ANOVA may be
obtained by displaying
boxplots by cross-section
or period. For moderate
numbers of distinct classifier values, the graphical display may prove
informative. Select View/
Graph... to bring up the
Graph Options dialog.
Select Categorical graph
from the drop down on
the top left, select Boxplot from the list of graph
types, and enter FN in the
Within graph edit field. Click OK to display the boxplots using the default settings.
One particularly useful set of non-panel specific tools that may be used for panel analysis
are the by-group statistics functions (Chapter 13. Operator and Function Reference, beginning on page 563 of the Command and Programming Reference), which allow you to compute statistics by cross-section ID and match merge those results
back to the original data. For example, the simple expression
series ydemean = y - @meansby(y, @crossid)
computes the deviations from the cross-section means for the series Y and places the results
in the series YDEMEAN.
Panel Covariances
Panel structured data employ more than one dimension to identify a given observation. In
the most common case where the panel combines time series and cross-sectional data, we
have data for cross-section units i = 1, , N and periods t = 1, , T . In this setting,
we focus on a single random variable X , with individual observations denoted X it .
It is sometimes convenient to view the X for different cross-sections (or time periods) as
being distinct random variables. This unstacking of a single random variable into multiple
random variables permits us to define measures of association between cross-sections or
periods for a given panel series.
For example, we may define the contemporaneous or between cross-section covariances for
X:
\sigma_{ij} = E\{ (X_i - E(X_i))(X_j - E(X_j)) \}    (45.1)

where X_i = (X_i1, X_i2, ..., X_iT) is the random variable associated with X for the i-th
cross-section, i = 1, ..., N. The contemporaneous covariances are a measure of association (dependence) between the data for different cross-sections at a given point in time.
Similarly, we may define the period or within cross-section covariances for X:

\sigma_{st} = E\{ (X_s - E(X_s))(X_t - E(X_t)) \}    (45.2)
The Pearson product-moment estimators of the contemporaneous covariances use variation along the time dimension:

\hat{\sigma}_{ij} = \frac{1}{T}\sum_{t=1}^{T} (X_{it} - \bar{X}_i)(X_{jt} - \bar{X}_j)    (45.3)

where \bar{X}_i = T^{-1}\sum_{t=1}^{T} X_{it} and \bar{X}_j = T^{-1}\sum_{t=1}^{T} X_{jt}.

The corresponding Pearson estimators of the period covariances use variation in the cross-section dimension to provide estimates:

\hat{\sigma}_{st} = \frac{1}{N}\sum_{i=1}^{N} (X_{it} - \bar{X}_t)(X_{is} - \bar{X}_s)    (45.4)

where \bar{X}_t = N^{-1}\sum_{i=1}^{N} X_{it} and \bar{X}_s = N^{-1}\sum_{i=1}^{N} X_{is}.
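A minimal Python sketch (not EViews code) of the computation in Equation (45.3): unstack the panel series into a periods-by-cross-sections matrix and compute the Pearson covariances and correlations across the column dimension. The array layout and names are assumptions of the sketch.

import numpy as np

def contemporaneous_cov(x):
    """x: (T, N) array with x[t, i] = X_it, assumed to have no missing values."""
    T = x.shape[0]
    dev = x - x.mean(axis=0)            # remove the cross-section means X-bar_i
    cov = dev.T @ dev / T               # N x N matrix of sigma_ij, Eq. (45.3)
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)         # the corresponding correlations
    return cov, corr

# Between-period covariances (Eq. (45.4)) are obtained the same way after
# transposing x, so that cross-sections index the rows.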
Other measures of association may be defined similarly. For discussion of the various methods that EViews supports, see Covariance Analysis, beginning on page 526 of Users Guide
I.
Telling EViews to compute measures of association for a series in a panel structured workfile
is straightforward. Simply open the series, and select View/Panel Covariance... to display
the dialog. Note that the workfile must be structured as a panel for the panel covariance
menu entry to be available.
EViews will open the Covariance Analysis dialog
which provides options for
controlling the computation, display, and saving of
results.
For the most part, the dialog is unchanged from the
covariance dialog for a
group of series and the discussion of settings there is
directly relevant (see Covariance Analysis, beginning on page 526 of Users Guide I).
The one notable difference in the current dialog is the pair of radio buttons that allow you to
choose whether to compute Contemporaneous covariances or Between periods covariances.
By changing the settings in the Statistics portion of the dialog, you may instruct EViews to compute a variety of other measures of association (uncentered Pearson, Spearman rank correlations, and Kendall's tau), as well as test statistics for whether the measure of association is
zero.
In addition, you may specify a sample to be used in computation and select whether you
wish EViews to employ listwise deletion to balance the sample in the event that there are
missing values in the series. If you will be working with series with missing observations
you should bear in mind that:
EViews will compute covariances for all of the cross-sections (for contemporaneous
covariances) or periods (for between-period covariances) in the specified sample,
even if there are no valid observations for a relevant cross-section or period. If you
wish to exclude periods or cross-sections from the analysis, you should do so by setting the sample.
For cross-section covariances, checking the Balance sample - (listwise deletion) setting instructs EViews to balance the data by removing data for periods where there
are missing values for any cross-section.
For period covariances, the Balance sample - (listwise deletion) setting will remove
data for entire cross-sections where there are missing observations for any period.
To illustrate, we follow Obstfeld and Rogoff (2001) in computing the cross-country correlations for per capita consumption growth (DCPCH) for the Group of Seven countries over the
period from 1973 to 1992. The data, which are from the Penn World Table 5.6, are provided
for you in the workfile PWT56_CONSUMP.wf1 in the Example Files folder in your EViews
installation directory.
Open the workfile and the series DCPCH, select View/Panel covariance... and fill in the dialog as depicted above. Click on OK to compute the requested statistics and display the
results.
Covariance
Correlation
Observations     CANADA      U.S.A.      JAPAN       FRANCE      GERMANY, WEST   ITALY       U.K.
CANADA           0.000929
                 1.000000
                 20
U.S.A.           0.000364    0.000344
                 0.643323    1.000000
                 20          20
JAPAN            2.39E-05    0.000156    0.000280
                 0.046872    0.504028    1.000000
                 20          20          20
FRANCE           7.83E-05    9.83E-05    0.000117    0.000107
                 0.248871    0.513199    0.680323    1.000000
                 20          20          20          20
GERMANY, WEST    0.000211    0.000173    0.000114    9.58E-05    0.000283
                 0.411637    0.554703    0.403295    0.551423    1.000000
                 20          20          20          20          20
ITALY            0.000236    4.42E-05    6.15E-05    4.88E-05    8.84E-05        0.000315
                 0.436899    0.134410    0.207144    0.266630    0.296038        1.000000
                 20          20          20          20          20              20
U.K.             0.000364    0.000358    0.000293    0.000133    0.000198        0.000160    0.000885
                 0.401499    0.648652    0.588630    0.434526    0.394636        0.302919    1.000000
                 20          20          20          20          20              20          20
These results show the correlations in the values of DCPCH between cross-sections.
Likewise, if we instruct EViews to compute the between-period covariances, we obtain
correlations between periods. Fill in the dialog as in the previous example, changing the
Type to Between periods covariances, and change the sample to 1973 1992 since data for
DCPCH in 1972 are not available (due to the lag in the difference).
If you were to use the original sample of 1972 1992 the resulting between period correlation matrix would contain only NAs (since the balanced sample option would remove all
observations). Click on OK to accept the settings and compute the between covariances and
correlations.
Panel Principal Components
The first tab of the dialog, labeled Components, specifies the display, output, and selection
settings for the principal components. The tab is virtually identical to the one displayed
when you compute the principal components for a group of series (see Performing Covariance Analysis, beginning on page 527). The one minor difference is in the edit field for the
Maximum number of components to be retained. In the panel setting the edit field is filled
with *, which is a stand-in for the maximum number of components (the number of cross-sections or periods); in the group setting, this edit field is filled in with the number of variables.
The dialog is virtually unchanged from the one displayed for saving principal components
scores of a group; indeed the first tab is identical.
In the first tab you will describe the output you wish EViews to produce:
You should provide names for the series in which you wish to save the scores, and
optionally, names for the loadings and eigenvector matrices, and the eigenvalues
vector.
Importantly, the Scaling dropdown on the bottom of the dialog is used to determine
the properties of your scores. By default, the scaling is set to Normalize loadings so
that the scores have variances equal to the eigenvalues of the decomposition. You
may instead elect to save normalized scores (Normalize scores), equal weighted
scores and loadings (Symmetric weights), or user weighted loadings (User loading
weight).
The second tab is used to describe the computation of the measure of association used in the
principal components analysis. The options are those for computing panel covariances, as described in
Viewing Principal Components on page 920.
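The computation underlying this view amounts to an eigendecomposition of the chosen measure of association. Below is a minimal Python sketch (not EViews code), assuming a complete T x N unstacked panel and ordinary correlations; names are ours.

import numpy as np

def panel_principal_components(x):
    """x: (T, N) unstacked panel (periods in rows, cross-sections in columns)."""
    corr = np.corrcoef(x, rowvar=False)            # N x N correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)          # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]               # largest components first
    eigval, eigvec = eigval[order], eigvec[:, order]
    proportion = eigval / eigval.sum()             # share of total scaled variance
    return eigval, eigvec, proportion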
An Illustration
To illustrate, we compute principal components of the cross-country correlations for per
capita consumption growth (DCPCH) for the Group of Seven countries over the period
from 1973 to 1992 (Obstfeld and Rogoff, 2001). The data, which are from the Penn World
Table 5.6, are provided for you in the workfile PWT56_CONSUMP.wf1 in the Example
Files folder in your EViews installation directory.
Open the workfile and the series DCPCH, select View/Panel Principal Components... and
click on OK to compute the principal components using the default settings and to display
the basic results in table form:
Panel Principal Components Analysis
Series: DCPCH
Date: 08/29/12 Time: 15:55
Sample: 1972 1992
Included observations: 147
Analysis of contemporaneous (between cross-section) relationships
Computed using: Ordinary correlations
Extracting 7 of 7 possible components
Eigenvalues: (Sum = 7, Average = 1)
Number    Value       Difference    Proportion    Cumulative Value    Cumulative Proportion
1         3.735308    2.644171      0.5336        3.735308            0.5336
2         1.091137    0.246793      0.1559        4.826445            0.6895
3         0.844344    0.214297      0.1206        5.670789            0.8101
4         0.630047    0.258421      0.0900        6.300836            0.9001
5         0.371627    0.163062      0.0531        6.672463            0.9532
6         0.208565    0.089592      0.0298        6.881027            0.9830
7         0.118973    ---           0.0170        7.000000            1.0000

Eigenvectors (loadings):

Cross-section      PC 1        PC 2        PC 3        PC 4        PC 5        PC 6        PC 7
CANADA             0.326828    0.680374   -0.151109   -0.035987    0.329621   -0.022346    0.544974
U.S.A.             0.429951    0.113511   -0.499405   -0.018909    0.172759    0.356079   -0.629172
JAPAN              0.386426   -0.539826    0.095487    0.196637    0.118652    0.556905    0.432733
FRANCE             0.408552   -0.346313    0.187099   -0.283874    0.506102   -0.574024   -0.109172
GERMANY, WEST      0.392449    0.019595    0.049549   -0.656370   -0.634238    0.030900    0.095454
ITALY              0.280698    0.330954    0.793017    0.245221   -0.047112    0.154858   -0.310595
U.K.               0.399098   -0.054291   -0.228706    0.623011   -0.432216   -0.456208    0.048876
Here we see header information describing the computation and two out of three of the
sections of output. The first section provides a summary of the eigenvalues of the correlation matrix, while the second section shows the corresponding eigenvectors. Not depicted
here, but present in the actual output, is the estimated correlation matrix itself.
These results in the first section show that the first four components account for about 90%
of the total scaled variance in the values of DCPCH between cross-sections. The second section describes the linear combination coefficients. We see that the first principal component
(labeled PC 1) is a roughly equal-weighted linear combination of the per capita consumption
growth of all seven countries. This component might be thought of as representing the common
component in G7 consumption growth.
Alternately, we may instruct EViews to compute and graph the eigenvalues associated with
the between-period correlations. Click on View/Panel Principal Components... to display
the dialog. In the Display section, select Eigenvalues plots and check all of the display checkboxes
so that EViews will display all three of the eigenvalue plots. Next, click on the Calculation
tab and click on the Between periods covariances button so that EViews will unstack the
data into different periods.
It is important to note that you must change the sample to 1973 1992 since data for
DCPCH in 1972 are not available (DCPCH is a lagged difference). If you were to use the original sample of 1972 1992, the balanced sample option would remove all observations and
the resulting between period correlation matrix would contain only NAs. Principal components analysis on this matrix would fail.
Click on OK to accept the settings. The results of this view (after rearranging the graphs
slightly) are depicted below:
Panel Causality Testing
Granger causality testing in a panel setting is based on bivariate regressions of the form:

y_{i,t} = \alpha_{0,i} + \alpha_{1,i}y_{i,t-1} + \cdots + \alpha_{l,i}y_{i,t-l} + \beta_{1,i}x_{i,t-1} + \cdots + \beta_{l,i}x_{i,t-l} + \epsilon_{i,t}    (45.5)

x_{i,t} = \alpha_{0,i} + \alpha_{1,i}x_{i,t-1} + \cdots + \alpha_{l,i}x_{i,t-l} + \beta_{1,i}y_{i,t-1} + \cdots + \beta_{l,i}y_{i,t-l} + \epsilon_{i,t}    (45.6)

where t denotes the time period dimension of the panel, and i denotes the cross-sectional
dimension.
The different forms of panel causality test differ on the assumptions made about the homogeneity of the coefficients across cross-sections.
EViews offers two of the simplest approaches to causality testing in panels. The first is to
treat the panel data as one large stacked set of data, and then perform the Granger Causality
test in the standard way, with the exception of not letting data from one cross-section enter
the lagged values of data from the next cross-section. This method assumes that all coefficients are the same across all cross-sections:

\alpha_{0,i} = \alpha_{0,j},\ \alpha_{1,i} = \alpha_{1,j},\ \ldots,\ \alpha_{l,i} = \alpha_{l,j} \quad \forall\, i, j    (45.7)

\beta_{1,i} = \beta_{1,j},\ \ldots,\ \beta_{l,i} = \beta_{l,j} \quad \forall\, i, j    (45.8)

The second approach, due to Dumitrescu and Hurlin (2012), makes the opposite assumption, allowing all coefficients to differ across cross-sections:

\alpha_{0,i} \neq \alpha_{0,j},\ \alpha_{1,i} \neq \alpha_{1,j},\ \ldots,\ \alpha_{l,i} \neq \alpha_{l,j} \quad \forall\, i, j    (45.9)

\beta_{1,i} \neq \beta_{1,j},\ \ldots,\ \beta_{l,i} \neq \beta_{l,j} \quad \forall\, i, j    (45.10)
This test is calculated by simply running standard Granger causality regressions for each
cross-section individually. The next step is to take the average of the test statistics, which is
termed the Wbar statistic. Dumitrescu and Hurlin show that the standardized version of this statistic, appropriately weighted in unbalanced panels, follows a standard normal distribution. This is
termed the Zbar statistic.
(EViews does not provide built-in versions of other panel causality tests since they are often
based upon regressions using some assumptions on Equation (45.5), or in some cases two-stage least squares regressions, often using a fixed or a random effects model. It is possible
to perform these tests by estimating the models using an EViews equation object and then
performing Wald coefficient restriction tests on the appropriate coefficients.)
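The following Python sketch (not EViews code) illustrates the Dumitrescu-Hurlin logic under large-T asymptotics: estimate the Granger regression of Equation (45.5) separately for each cross-section, average the individual Wald statistics into Wbar, and standardize to obtain Zbar. EViews also reports a standardization based on exact finite-sample moments; that refinement is omitted here, and all names are ours.

import numpy as np

def dh_wbar_zbar(y, x, K=2):
    """y[i], x[i]: (T,) arrays for cross-section i; K: number of lags."""
    stats = []
    for yi, xi in zip(y, x):
        T = len(yi)
        # regressors: constant, K lags of y, K lags of x
        Z = np.column_stack([np.ones(T - K)] +
                            [yi[K - j - 1:T - j - 1] for j in range(K)] +
                            [xi[K - j - 1:T - j - 1] for j in range(K)])
        dep = yi[K:]
        b, res, *_ = np.linalg.lstsq(Z, dep, rcond=None)
        u = dep - Z @ b
        s2 = u @ u / (len(dep) - Z.shape[1])
        XtX_inv = np.linalg.inv(Z.T @ Z)
        # Wald statistic for the K restrictions that all x-lag coefficients are zero
        bx = b[1 + K:]
        V = s2 * XtX_inv[1 + K:, 1 + K:]
        stats.append(bx @ np.linalg.solve(V, bx))
    wbar = np.mean(stats)
    N = len(stats)
    zbar = np.sqrt(N / (2.0 * K)) * (wbar - K)   # asymptotic standardization
    return wbar, zbar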
To perform the test, create a group containing the series of interest, then select View/
Granger Causality... to display the test dialog:
Select the Test type using the radio buttons and provide a number of Lags to include. Click
on OK to accept the settings and compute the test.
Pairwise Dumitrescu Hurlin Panel Causality Tests
Date: 02/05/13 Time: 15:58
Sample: 1960 1978
Lags: 2

Null Hypothesis:                                       W-Stat.    Zbar-Stat.    Prob.
LGASPCAR does not homogeneously cause LCARPCAP         2.81017    0.59203       0.5538
LCARPCAP does not homogeneously cause LGASPCAR         4.59763    3.17200       0.0015
Here we show results for the pairwise Dumitrescu-Hurlin tests using data from gasoline.WF1 (which is available in your examples directory). We reject the null hypothesis that LCARPCAP does not homogeneously cause LGASPCAR, but do not reject the null of no causality in the opposite direction.
The resulting long-run average covariances are shown in the group window:
and the individual cross-section results are stored in the matrix PAN_RESULTS, with the
vech of the individual cross-section covariances stored in each row:
Method                            Statistic    Prob.**    Cross-sections    Obs
Levin, Lin & Chu t*               1.71727      0.9570     10                180
Im, Pesaran and Shin W-stat      -0.51923      0.3018     10                180
ADF - Fisher Chi-square          33.1797       0.0322     10                180
PP - Fisher Chi-square           41.9742       0.0028     10                190
Note that there is a fair amount of disagreement in these results as to whether F has a unit
root, even within tests that evaluate the same null hypothesis (e.g., Im, Pesaran and Shin vs.
the Fisher ADF and PP tests).
To obtain additional information about intermediate results, we may rerun the panel unit
root procedure, this time choosing a specific test statistic. Computing the results for the IPS
test, for example, displays (in addition to the previous IPS results) ADF test statistic results
for each cross-section in the panel:
Intermediate ADF test results

Cross-section    t-Stat     Prob.     E(t)      E(Var)    Lag    Max Lag    Obs
1               -2.3596     0.1659    -1.511    0.953     1      1          18
2               -3.6967     0.0138    -1.511    0.953     1      1          18
3               -2.1030     0.2456    -1.511    0.953     1      1          18
4               -3.3293     0.0287    -1.511    0.953     1      1          18
5                0.0597     0.9527    -1.511    0.953     1      1          18
6                1.8743     0.9994    -1.511    0.953     1      1          18
7               -1.8108     0.3636    -1.511    0.953     1      1          18
8               -0.5541     0.8581    -1.511    0.953     1      1          18
9               -1.3223     0.5956    -1.511    0.953     1      1          18
10              -3.4695     0.0218    -1.511    0.953     1      1          18

Average         -1.6711               -1.511    0.953
                              Statistic     Prob.      Weighted Statistic    Prob.
Panel v-Statistic             4.219500      0.0001     4.119485              0.0001
Panel rho-Statistic          -0.400152      0.3682    -2.543473              0.0157
Panel PP-Statistic            0.671083      0.3185    -1.254923              0.1815
Panel ADF-Statistic          -0.216806      0.3897     0.172158              0.3931
                              Statistic     Prob.
Group rho-Statistic          -1.776207      0.0824
Group PP-Statistic           -0.824320      0.2840
Group ADF-Statistic           0.538943      0.3450
The top portion of the output indicates the type of test, null hypothesis, exogenous variables, and other test options. The next section provides several Pedroni panel cointegration
test statistics which evaluate the null against both the homogeneous and the heterogeneous
alternatives. In this case, eight of the eleven statistics do not reject the null hypothesis of no
cointegration at the conventional size of 0.05.
The bottom portion of the table reports auxiliary cross-section results showing the intermediate
calculations used in forming the statistics. For the Pedroni test this section is split into two
parts: the first contains the Phillips-Perron non-parametric results, and the second presents the Augmented Dickey-Fuller parametric results.
Cross ID    AR(1)    Variance     HAC         Bandwidth    Obs
AUT         0.959    54057.16     46699.67    23.00        321
BUS         0.959    98387.47     98024.05     7.00        321
CON         0.966    144092.9     125609.0     4.00        321
CST         0.933    579515.0     468780.9     6.00        321
DEP         0.908    896700.4     572964.8     7.00        321
HOA         0.941    146702.7     165065.5     6.00        321
MAE         0.975    2996615.     2018633.     3.00        321
MIS         0.991    2775962.     3950850.     7.00        321
Cross ID    AR(1)    Variance     Lag    Max lag    Obs
AUT         0.983    48285.07     5      16         316
BUS         0.971    95843.74     1      16         320
CON         0.966    144092.9     0      16         321
CST         0.949    556149.1     1      16         320
DEP         0.974    647340.5     2      16         319
HOA         0.941    146702.7     0      16         321
MAE         0.976    2459970.     6      16         315
MIS         0.977    2605046.     3      16         318
Panel Resampling
Resample... performs resampling on all of the series in the group. A description of the
resampling procedure is provided in Resample on page 411 of Users Guide I. When you
resample from a panel workfile, EViews offers you an additional option of whether or not to resample across cross-sections. The default assumes that cross-sections are not identical, so
that the resampling is not done across cross-sections but is instead performed on a cross-section-by-cross-section basis.
References
Dumitrescu, Elena-Ivona and Christophe Hurlin (2012). Testing for Granger Non-causality in Heterogeneous Panels, Economic Modelling, 29, 1450-1460.
Phillips, Peter C. B. and Hyungsik R. Moon (1999). Linear Regression Limit Theory for Nonstationary
Panel Data, Econometrica, 67, 1057-1111.
Consider a VAR of order p:

y_t = A_1 y_{t-1} + \cdots + A_p y_{t-p} + B x_t + \epsilon_t    (46.1)

where y_t is a k-vector of non-stationary I(1) variables, x_t is a d-vector of deterministic variables, and \epsilon_t is a vector of innovations. We may rewrite this VAR as:

\Delta y_t = \Pi y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \Delta y_{t-i} + B x_t + \epsilon_t    (46.2)

where:

\Pi = \sum_{i=1}^{p} A_i - I, \qquad \Gamma_i = -\sum_{j=i+1}^{p} A_j    (46.3)
Granger's representation theorem asserts that if the coefficient matrix Π has reduced rank
r < k, then there exist k x r matrices α and β, each with rank r, such that Π = αβ' and
β'y_t is I(0). r is the number of cointegrating relations (the cointegrating rank) and each
column of β is a cointegrating vector. As explained below, the elements of α are known
as the adjustment parameters in the VEC model. Johansen's method is to estimate the Π
matrix from an unrestricted VAR and to test whether we can reject the restrictions implied
by the reduced rank of Π.
The asymptotic distribution of the LR test statistic for cointegration does not have the usual chi-squared distribution and depends on the assumptions made with respect to deterministic trends.
Therefore, in order to carry out the test, you need to make an assumption regarding the
trend underlying your data.
For each row case in the dialog, the COINTEQ column lists the deterministic variables that
appear inside the cointegrating relations (error correction term), while the OUTSIDE column
lists the deterministic variables that appear in the VEC equation outside the cointegrating
relations. Cases 2 and 4 do not have the same set of deterministic terms in the two columns.
For these two cases, some of the deterministic term is restricted to belong only in the cointegrating relation. For cases 3 and 5, the deterministic terms are common in the two columns
and the decomposition of the deterministic effects inside and outside the cointegrating space
is not uniquely identified; see the technical discussion below.
In practice, cases 1 and 5 are rarely used. You should use case 1 only if you know that all
series have zero mean. Case 5 may provide a good fit in-sample but will produce implausible forecasts out-of-sample. As a rough guide, use case 2 if none of the series appear to have
a trend. For trending series, use case 3 if you believe all trends are stochastic; if you believe
some of the series are trend stationary, use case 4.
If you are not certain which trend assumption to use, you may choose the Summary of all 5
trend assumptions option (case 6) to help you determine the choice of the trend assumption. This option indicates the number of cointegrating relations under each of the 5 trend
assumptions, and you will be able to assess the sensitivity of the results to the trend
assumption.
We may summarize the five deterministic trend cases considered by Johansen (1995, p. 80-84) as:
1. The level data y_t have no deterministic trends and the cointegrating equations do not
have intercepts:
   H_2(r): \Pi y_{t-1} + B x_t = \alpha\beta'y_{t-1}
2. The level data y_t have no deterministic trends and the cointegrating equations have
intercepts:
   H_1^*(r): \Pi y_{t-1} + B x_t = \alpha(\beta'y_{t-1} + \rho_0)
3. The level data y_t have linear trends but the cointegrating equations have only intercepts:
   H_1(r): \Pi y_{t-1} + B x_t = \alpha(\beta'y_{t-1} + \rho_0) + \alpha_{\perp}\gamma_0
4. The level data y_t and the cointegrating equations have linear trends:
   H^*(r): \Pi y_{t-1} + B x_t = \alpha(\beta'y_{t-1} + \rho_0 + \rho_1 t) + \alpha_{\perp}\gamma_0
5. The level data y_t have quadratic trends and the cointegrating equations have linear
trends:
   H(r): \Pi y_{t-1} + B x_t = \alpha(\beta'y_{t-1} + \rho_0 + \rho_1 t) + \alpha_{\perp}(\gamma_0 + \gamma_1 t)
The terms associated with α⊥ are the deterministic terms outside the cointegrating relations. When a deterministic term appears both inside and outside the cointegrating relation,
the decomposition is not uniquely identified. Johansen (1995) identifies the part that
belongs inside the error correction term by orthogonally projecting the exogenous terms
onto the α space, where α⊥ is the null space of α such that α'α⊥ = 0. EViews uses a different identification method so that the error correction term has a sample mean of zero.
More specifically, we identify the part inside the error correction term by regressing the cointegrating relations β'y_t on a constant (and linear trend).
Exogenous Variables
The test dialog allows you to specify additional exogenous variables x t to include in the test
VAR. The constant and linear trend should not be listed in the edit box since they are specified using the five Trend Specification options. If you choose to include exogenous variables, be aware that the critical values reported by EViews do not account for these
variables.
The most commonly added deterministic terms are seasonal dummy variables. Note, however, that if you include standard 0-1 seasonal dummy variables in the test VAR, this will
affect both the mean and the trend of the level series y t . To handle this problem, Johansen
(1995, page 84) suggests using centered (orthogonalized) seasonal dummy variables, which
shift the mean without contributing to the trend. Centered seasonal dummy variables for
quarterly and monthly series can be generated by the commands:
series d_q = @seas(q) - 1/4
series d_m = @seas(m) - 1/12
Lag Intervals
You should specify the lags of the test VAR as pairs of intervals. Note that the lags are specified as lags of the first differenced terms used in the auxiliary regression, not in terms of the
levels. For example, if you type 1 2 in the edit field, the test VAR regresses Δy_t on Δy_{t-1},
Δy_{t-2}, and any other exogenous variables that you have specified. Note that in terms of the
level series y_t the largest lag is 3. To run a cointegration test with one lag in the level series,
type 0 0 in the edit field.
Critical Values
By default, EViews will compute the critical values for the test using the MacKinnon-Haug-Michelis (1999) p-values. You may elect instead to report the Osterwald-Lenum (1992) critical values at the
5% and 1% levels by changing the radio button selection from MHM to Osterwald-Lenum.
As indicated in the header of the output, the test assumes no trend in the series with a
restricted intercept in the cointegration relation (we computed the test using assumption 2
in the dialog, Intercept (no trend) in CE - no intercept in VAR), includes three orthogonalized seasonal dummy variables D1-D3, and uses one lag in differences (two lags in levels),
which is specified as 1 1 in the edit field.
Unrestricted Cointegration Rank Test (Trace)

Hypothesized No. of CE(s)    Eigenvalue    Trace Statistic    0.05 Critical Value    Prob.**
None                         0.433165      49.14436           54.07904               0.1282
At most 1                    0.177584      19.05691           35.19275               0.7836
At most 2                    0.112791      8.694964           20.26184               0.7644
At most 3                    0.043411      2.352233           9.164546               0.7071

Unrestricted Cointegration Rank Test (Maximum Eigenvalue)

Hypothesized No. of CE(s)    Eigenvalue    Max-Eigen Statistic    0.05 Critical Value    Prob.**
None *                       0.433165      30.08745               28.58808               0.0319
At most 1                    0.177584      10.36195               22.29962               0.8059
At most 2                    0.112791      6.342731               15.89210               0.7486
At most 3                    0.043411      2.352233               9.164546               0.7071

* denotes rejection of the hypothesis at the 0.05 level
** MacKinnon-Haug-Michelis (1999) p-values
The first block of the output reports the trace statistic, which tests the null hypothesis of r cointegrating relations against the alternative of k cointegrating relations, where k is the number of endogenous variables. It is computed as:

LR_{tr}(r|k) = -T \sum_{i=r+1}^{k} \log(1 - \lambda_i)    (46.4)

where λ_i is the i-th largest eigenvalue of the Π matrix in Equation (46.3), which is reported in the
second column of the output table.

The second block of the output reports the maximum eigenvalue statistic which tests the
null hypothesis of r cointegrating relations against the alternative of r + 1 cointegrating
relations. This test statistic is computed as:

LR_{max}(r|r+1) = -T \log(1 - \lambda_{r+1}) = LR_{tr}(r|k) - LR_{tr}(r+1|k)    (46.5)

for r = 0, 1, ..., k - 1.
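As a quick illustration of Equations (46.4) and (46.5), the following Python sketch (not EViews code) builds the trace and maximum eigenvalue statistics from a set of eigenvalues. The eigenvalues are taken from the example output reported below, and the value of T is an assumption chosen to be consistent with that output.

import numpy as np

# eigenvalues of the Pi matrix, taken from the example output below
eigenvalues = np.array([0.433165, 0.177584, 0.112791, 0.043411])
T = 53  # number of usable observations (assumed, consistent with the output)

log_terms = np.log(1.0 - eigenvalues)
trace_stats = [-T * log_terms[r:].sum() for r in range(len(eigenvalues))]   # LR_tr(r|k)
max_eigen_stats = [-T * log_terms[r] for r in range(len(eigenvalues))]      # LR_max(r|r+1)
# Note that LR_max(r|r+1) = LR_tr(r|k) - LR_tr(r+1|k).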
There are a few other details to keep in mind:
Critical values are available for up to k = 10 series. Also note that the critical values
depend on the trend assumptions and may not be appropriate for models that contain
other deterministic regressors. For example, a shift dummy variable in the test VAR
implies a broken linear trend in the level series y t .
The trace statistic and the maximum eigenvalue statistic may yield conflicting results.
For such cases, we recommend that you examine the estimated cointegrating vector
and base your choice on the interpretability of the cointegrating relations; see Johansen and Juselius (1990) for an example.
In some cases, the individual unit root tests will show that some of the series are integrated, but the cointegration test will indicate that the P matrix has full rank
( r = k ). This apparent contradiction may be the result of low power of the cointegration tests, stemming perhaps from a small sample size or serving as an indication
of specification error.
Cointegrating Relations
The second part of the output provides estimates of the cointegrating relations β and the
adjustment parameters α. As is well known, the cointegrating vector β is not identified
unless we impose some arbitrary normalization. The first block reports estimates of β and
α based on the normalization β'S_11 β = I, where S_11 is defined in Johansen (1995). Note
that the transpose of β is reported under Unrestricted Cointegrating Coefficients so that
the first row is the first cointegrating vector, the second row is the second cointegrating vector, and so on.
Unrestricted Cointegrating Coefficients (normalized by b'*S11*b=I):

      LRM           LRY           IBO           IDE           C
     -21.97409      22.69811     -114.4173      92.64010      133.1615
      14.65598     -20.05089      3.561148      100.2632     -62.59345
      7.946552     -25.64080      4.277513     -44.87727      62.74888
      1.024493     -1.929761      24.99712     -14.64825     -2.318655

Unrestricted Adjustment Coefficients (alpha):

D(LRM)      0.004406      0.001980      0.009691     -0.000329
D(LRY)      0.006284      0.001082     -0.005234      0.001348
D(IBO)      0.000438     -0.001536     -0.001055     -0.000723
D(IDE)     -0.000354     -4.65E-05     -0.001338     -0.002063
The remaining blocks report estimates from a different normalization for each possible number of cointegrating relations $r = 1, 2, \ldots, k-1$. This alternative normalization expresses the first $r$ variables as functions of the remaining $k - r$ variables in the system. Asymptotic standard errors are reported in parentheses for the parameters that are identified.
In our example, for one cointegrating equation we have:
1 Cointegrating Equation(s):    Log likelihood    669.1154

           C
   -6.059932
   (0.86239)
Imposing Restrictions
Since the cointegrating vector $\beta$ is not fully identified, you may wish to impose your own identifying restrictions. If you are performing your Johansen cointegration test using an estimated Var object, EViews offers you the opportunity to impose restrictions on $\beta$. Restrictions can be imposed on the cointegrating vector (elements of the $\beta$ matrix) and/or on the adjustment coefficients (elements of the $\alpha$ matrix).

To perform the cointegration test from a Var object, you will first need to estimate a VAR with your variables as described in "Estimating a VAR in EViews" on page 624. Next, select View/Cointegration Test... from the Var menu and specify the options in the Cointegration Test Specification tab as explained above. Then bring up the VEC Restrictions tab. You will enter your restrictions in the edit box that appears when you check the Impose Restrictions box:
Hypothesized No. of CE(s)   Restricted Log-likelihood   LR Statistic   Degrees of Freedom   Probability
1                                          668.6698        0.891088                    1      0.345183
2                                          674.2964              NA                   NA            NA
3                                          677.4677              NA                   NA            NA
If the restrictions are not binding for a particular rank, the corresponding rows will be filled with NAs. If the restrictions are binding but the algorithm did not converge, the corresponding row will be filled with an asterisk "*". (You should redo the test by increasing the number of iterations or relaxing the convergence criterion.) For the example output displayed above, we see that the single restriction $\alpha_{31} = 0$ is binding only under the assumption that there is one cointegrating relation. Conditional on there being only one cointegrating relation, the LR test does not reject the imposed restriction at conventional levels.
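As a quick check of the reported probability, note that the LR statistic for a binding restriction is asymptotically chi-squared with the listed degrees of freedom. A sketch (Python, not EViews code) using the values from the table above:

from scipy.stats import chi2

lr_stat, df = 0.891088, 1
p_value = chi2.sf(lr_stat, df)   # survival function = 1 - CDF
print(p_value)                   # approximately 0.345, matching the reported probability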
The output also reports the estimated $\beta$ and $\alpha$ imposing the restrictions. Since the cointegration test does not specify the number of cointegrating relations, results for all ranks that are consistent with the specified restrictions will be displayed. For example, suppose the restriction is:

B(2,1) = 1

Since this is a restriction on the second cointegrating vector, EViews will display results for ranks $r = 2, 3, \ldots, k-1$ (if the VAR has only $k = 2$ variables, EViews will return an error message pointing out that the implied rank from restrictions must be of reduced order).

For each rank, the output reports whether convergence was achieved and the number of iterations. The output also reports whether the restrictions identify all cointegrating parameters under the assumed rank. If the cointegrating vectors are identified, asymptotic standard errors will be reported together with the parameters $\beta$.
To carry out the test, select View/Cointegration Test/Single-Equation Cointegration Test from the group toolbar or main menu. The Cointegration Test Specification page opens to prompt you for information about the test.
The dropdown menu at the top allows you to choose between the default Engle-Granger test or the Phillips-Ouliaris test. Below the dropdown are the options for the test statistic. The Engle-Granger test requires a specification for the number of lagged differences to include in the test regression, and whether to d.f. adjust the standard error estimate when forming the ADF test statistics. To match Hamilton's example, we specify a Fixed (User-specified) lag specification of 12, and retain the default d.f. correction of the standard error estimate.
The right side of the dialog is used to specify the form of the cointegrating equation. The
main cointegrating equation is described in the Equation specification section. You should
use the Trend specification dropdown to choose from the list of pre-specified deterministic
trend variable assumptions (None, Constant (Level), Linear Trend, Quadratic Trend). If
you wish to include deterministic regressors that are not offered in the pre-specified list, you
may enter the series names or expressions in the Deterministic regressors edit box. For our
example, we will leave the settings at their default values, with the Trend specification set
to Constant (Level), and no additional deterministic regressors specified.
The Regressors specification section should be used to specify any deterministic trends or
other regressors that should be included in the regressors equations but not in the cointegrating equation. In our example, Hamilton points to evidence of non-zero drift in the
regressors, so we will select Linear trend in the Additional trends dropdown.
Click on OK to compute and display the test results.
Dependent    tau-statistic    Prob.*    z-statistic    Prob.*
P_T              -2.730940    0.4021      -26.42791    0.0479
S_T              -2.069678    0.7444      -13.83563    0.4088
PSTAR_T          -2.631078    0.4548      -22.75737    0.0962
Intermediate Results:
                                     P_T          S_T      PSTAR_T
Rho - 1                        -0.030478    -0.030082    -0.031846
Rho S.E.                        0.011160     0.014535     0.012104
Residual variance               0.114656     5.934605     0.468376
Long-run residual variance      2.413438     35.14397     6.695884
Number of lags                        12           12           12
Number of observations               189          189          189
Number of stochastic trends            2            2            2
The top two portions of the output describe the test setup and summarize the test results. Regarding the test results, note that EViews computes both the Engle-Granger tau-statistic (t-statistic) and the normalized autocorrelation coefficient (which we term the z-statistic) for residuals obtained using each series in the group as the dependent variable in a cointegrating regression. Here we see that the test results are broadly similar for different dependent variables, with the tau-statistic uniformly failing to reject the null of no cointegration at conventional levels. The results for the z-statistics are mixed, with the residuals from the P_T equation rejecting the unit root null at the 5% level. On balance, however, the test statistics suggest that we cannot reject the null hypothesis of no cointegration.
The bottom portion of the results shows intermediate calculations for the test corresponding to each dependent variable. "Residual-based Tests," on page 270 offers a discussion of these statistics. We do note that there are only 2 stochastic trends in the asymptotic distribution (instead of the 3 corresponding to the number of variables in the group) as a result of our assumption of a non-zero drift in the regressors.
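For readers who want to see the mechanics behind the tau-statistic, the following is a minimal sketch (Python/NumPy, not EViews code) of the two-step residual-based procedure: an OLS cointegrating regression with an intercept, followed by an ADF-style regression on the residuals with a fixed number of lagged differences. The simulated data and the helper name engle_granger_tau are hypothetical, and no MacKinnon (1996) p-values are computed.

import numpy as np

def engle_granger_tau(y, X, p=12):
    T = len(y)
    # Step 1: cointegrating regression with an intercept (Constant (Level))
    Z = np.column_stack([np.ones(T), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    e = y - Z @ beta

    # Step 2: ADF regression  d(e_t) = (rho-1)*e_{t-1} + sum_j w_j*d(e_{t-j}) + u_t
    de = np.diff(e)
    rows, dep = [], []
    for t in range(p, len(de)):
        rows.append(np.r_[e[t], de[t - p:t][::-1]])   # lagged level plus p lagged differences
        dep.append(de[t])
    W = np.asarray(rows)
    dep = np.asarray(dep)
    b, *_ = np.linalg.lstsq(W, dep, rcond=None)
    resid = dep - W @ b
    s2 = resid @ resid / (len(dep) - W.shape[1])       # d.f.-corrected residual variance
    se = np.sqrt(s2 * np.linalg.inv(W.T @ W)[0, 0])
    return b[0] / se                                   # the tau-statistic

# Example usage with simulated (cointegrated-by-construction) data:
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=300))
y = 0.5 * x + rng.normal(size=300)
print(engle_granger_tau(y, x))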
Alternately, you may compute the Phillips-Ouliaris test statistic. Once again select View/
Cointegration Test/Single-Equation Cointegration Test from the Group toolbar or main
menu, but this time choose Phillips-Ouliaris in the Test Method dropdown.
Dependent    tau-statistic    Prob.*    z-statistic    Prob.*
P_T              -2.023222    0.7645      -7.542281    0.8039
S_T              -1.723248    0.8710      -6.457868    0.8638
PSTAR_T          -1.997466    0.7753      -7.474681    0.8078
Intermediate Results:
                                          P_T          S_T      PSTAR_T
Rho - 1                             -0.016689    -0.014395    -0.017550
Bias-corrected Rho - 1 (Rho* - 1)   -0.037524    -0.032129    -0.037187
Rho* S.E.                            0.018547     0.018644     0.018617
Residual variance                    0.162192     6.411674     0.619376
Long-run residual variance           0.408224     13.02214     1.419722
Long-run residual autocovariance     0.123016     3.305234     0.400173
Bandwidth                            13.00000     13.00000     13.00000
Number of observations                    201          201          201
Number of stochastic trends                 2            2            2
In contrast with the Engle-Granger results, the results are quite similar across all six of the Phillips-Ouliaris tests, with none of them rejecting the null hypothesis that the series are not cointegrated. As before, the bottom portion of the output displays intermediate results for the test associated with each dependent variable.
where T i is the length of the cross-section i . Alternatively, you may provide your own
value by selecting User specified, and entering a value in the edit field.
The Pedroni test employs both parametric and non-parametric kernel estimation of the long
run variance. You may use the Variance calculation and Lag length sections to control the
computation of the parametric variance estimators. The Spectral estimation portion of the
dialog allows you to specify settings for the non-parametric estimation. You may select from
a number of kernel types (Bartlett, Parzen, Quadratic spectral) and specify how the bandwidth is to be selected (Newey-West automatic, Newey-West fixed, User specified). The
Newey-West fixed bandwidth is given by $4(T_i / 100)^{2/9}$. The Kao test uses the Lag length and the Spectral estimation portion of the dialog settings as described below.
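As a small illustration (Python, not EViews code) of the fixed bandwidth rule quoted above; truncation of the result to an integer is an assumption of this sketch:

import math

for T_i in (50, 100, 200, 500):
    bw = math.floor(4 * (T_i / 100.0) ** (2.0 / 9.0))   # 4*(T_i/100)^(2/9), truncated
    print(T_i, bw)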
Here, we see the options for
the Fisher test selection.
These options are similar to
the options available in the
Johansen cointegration test
(Johansen Cointegration
Test, beginning on
page 939).
The Deterministic trend
specification section determines the type of exogenous
trend to be used.
The Lag intervals section
specifies the lag-pair to be
used in estimation.
Pedroni proposes several tests for cointegration that allow for heterogeneous intercepts and trend coefficients across cross-sections. Consider the following regression:

$$y_{it} = \alpha_i + \delta_i t + \beta_{1i} x_{1i,t} + \beta_{2i} x_{2i,t} + \cdots + \beta_{Mi} x_{Mi,t} + e_{i,t} \qquad (46.6)$$

for $t = 1, \ldots, T$; $i = 1, \ldots, N$; $m = 1, \ldots, M$; where $y$ and $x$ are assumed to be integrated of order one, e.g. I(1). The parameters $\alpha_i$ and $\delta_i$ are individual and trend effects which may be set to zero if desired.
Under the null hypothesis of no cointegration, the residuals $e_{i,t}$ will be I(1). The general approach is to obtain residuals from Equation (46.6) and then to test whether the residuals are I(1) by running the auxiliary regression,

$$\hat e_{it} = \rho_i \hat e_{it-1} + u_{it} \qquad (46.7)$$

or

$$\hat e_{it} = \rho_i \hat e_{it-1} + \sum_{j=1}^{p_i} \psi_{ij}\, \Delta\hat e_{it-j} + v_{it} \qquad (46.8)$$

for each cross-section. Pedroni describes various methods of constructing statistics for testing the null hypothesis of no cointegration ($\rho_i = 1$). There are two alternative hypotheses: the homogeneous alternative, $(\rho_i = \rho) < 1$ for all $i$ (which Pedroni terms the within-dimension test or panel statistics test), and the heterogeneous alternative, $\rho_i < 1$ for all $i$ (also referred to as the between-dimension or group statistics test).
The Pedroni panel cointegration statistic $\aleph_{N,T}$ is constructed from the residuals from either Equation (46.7) or Equation (46.8). A total of eleven statistics with varying degrees of properties (size and power for different $N$ and $T$) are generated.

Pedroni shows that the standardized statistic is asymptotically normally distributed,

$$\frac{\aleph_{N,T} - \mu\sqrt{N}}{\sqrt{v}} \Rightarrow N(0, 1) \qquad (46.9)$$

where $\mu$ and $v$ are Monte Carlo generated adjustment terms.

The Kao test follows the same basic approach, but specifies cross-section specific intercepts and homogeneous coefficients on the first-stage regressors. In the bivariate case described in Kao (1999), we have

$$y_{it} = \alpha_i + \beta x_{it} + e_{it} \qquad (46.10)$$

for

$$y_{it} = y_{it-1} + u_{i,t} \qquad (46.11)$$

$$x_{it} = x_{it-1} + \epsilon_{i,t} \qquad (46.12)$$

Kao then runs either the pooled auxiliary regression,

$$\hat e_{it} = \rho \hat e_{it-1} + v_{it} \qquad (46.13)$$

or the augmented version of the pooled specification,

$$\hat e_{it} = \tilde\rho\, \hat e_{it-1} + \sum_{j=1}^{p} \psi_j\, \Delta\hat e_{it-j} + v_{it} \qquad (46.14)$$
Under the null of no cointegration, Kao shows that the statistics

$$DF_\rho = \frac{\sqrt{N}\, T(\hat\rho - 1) + 3\sqrt{N}}{\sqrt{10.2}} \qquad (46.15)$$

$$DF_t = \sqrt{1.25}\, t_\rho + \sqrt{1.875 N} \qquad (46.16)$$

$$DF_\rho^{*} = \frac{\sqrt{N}\, T(\hat\rho - 1) + 3\sqrt{N}\,\hat\sigma_v^2 / \hat\sigma_{0v}^2}{\sqrt{3 + 36\,\hat\sigma_v^4 / (5\,\hat\sigma_{0v}^4)}} \qquad (46.17)$$

$$DF_t^{*} = \frac{t_\rho + \sqrt{6N}\,\hat\sigma_v / (2\hat\sigma_{0v})}{\sqrt{\hat\sigma_{0v}^2 / (2\hat\sigma_v^2) + 3\hat\sigma_v^2 / (10\hat\sigma_{0v}^2)}} \qquad (46.18)$$

and for the ADF statistic,

$$ADF = \frac{t_{\tilde\rho} + \sqrt{6N}\,\hat\sigma_v / (2\hat\sigma_{0v})}{\sqrt{\hat\sigma_{0v}^2 / (2\hat\sigma_v^2) + 3\hat\sigma_v^2 / (10\hat\sigma_{0v}^2)}} \qquad (46.19)$$

converge to $N(0, 1)$ asymptotically, where the estimated variance is $\hat\sigma_v^2 = \hat\sigma_u^2 - \hat\sigma_{u\epsilon}^2 \hat\sigma_\epsilon^{-2}$ with estimated long run variance $\hat\sigma_{0v}^2 = \hat\sigma_{0u}^2 - \hat\sigma_{0u\epsilon}^2 \hat\sigma_{0\epsilon}^{-2}$.

The covariance of

$$w_{it} = \begin{bmatrix} \hat u_{it} \\ \hat e_{it} \end{bmatrix} \qquad (46.20)$$

is estimated as
$$\hat\Sigma = \begin{bmatrix} \hat\sigma_u^2 & \hat\sigma_{u\epsilon} \\ \hat\sigma_{u\epsilon} & \hat\sigma_\epsilon^2 \end{bmatrix} = \frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T} \hat w_{it}\hat w_{it}' \qquad (46.21)$$

and the long run covariance is estimated using the usual kernel estimator

$$\hat\Omega = \begin{bmatrix} \hat\sigma_{0u}^2 & \hat\sigma_{0u\epsilon} \\ \hat\sigma_{0u\epsilon} & \hat\sigma_{0\epsilon}^2 \end{bmatrix} = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{1}{T}\sum_{t=1}^{T}\hat w_{it}\hat w_{it}' + \frac{1}{T}\sum_{\tau=1}^{T}\kappa(\tau / b)\sum_{t=\tau+1}^{T}\left(\hat w_{it}\hat w_{it-\tau}' + \hat w_{it-\tau}\hat w_{it}'\right)\right] \qquad (46.22)$$

where $\kappa(\cdot)$ is one of the supported kernel functions and $b$ is the bandwidth.
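The following sketch (Python, not EViews code) simply evaluates the first two Kao statistics in (46.15) and (46.16); the pooled AR(1) coefficient, its t-statistic, and the panel dimensions are placeholders rather than values from the text.

import math

rho_hat, t_rho = 0.93, -2.10   # pooled AR(1) coefficient and its t-statistic (assumed)
N, T = 20, 40                  # number of cross-sections and time periods (assumed)

DF_rho = (math.sqrt(N) * T * (rho_hat - 1.0) + 3.0 * math.sqrt(N)) / math.sqrt(10.2)
DF_t = math.sqrt(1.25) * t_rho + math.sqrt(1.875 * N)

# Both statistics are compared with the standard normal under the null of no cointegration.
print(DF_rho, DF_t)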
$$-2\sum_{i=1}^{N}\log(\pi_i) \rightarrow \chi^2_{2N} \qquad (46.23)$$

where $\pi_i$ is the p-value from an individual cointegration test for cross-section $i$. By default, EViews reports the $\chi^2$ value based on MacKinnon-Haug-Michelis (1999) p-values for Johansen's cointegration trace test and maximum eigenvalue test.
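A minimal sketch (Python, not EViews code) of the Fisher combination in (46.23), using hypothetical per-cross-section p-values:

import numpy as np
from scipy.stats import chi2

pvals = np.array([0.03, 0.12, 0.40, 0.08])         # assumed per-cross-section p-values
fisher_stat = -2.0 * np.sum(np.log(pvals))
p_value = chi2.sf(fisher_stat, 2 * len(pvals))      # chi-squared with 2N degrees of freedom
print(fisher_stat, p_value)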
References
Boswijk, H. Peter (1995). "Identifiability of Cointegrated Systems," Technical Report, Tinbergen Institute.

Engle, Robert F. and C. W. J. Granger (1987). "Co-integration and Error Correction: Representation, Estimation, and Testing," Econometrica, 55, 251-276.

Fisher, R. A. (1932). Statistical Methods for Research Workers, 4th Edition, Edinburgh: Oliver & Boyd.

Hamilton, James D. (1994). Time Series Analysis, Princeton: Princeton University Press.

Johansen, Søren (1991). "Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models," Econometrica, 59, 1551-1580.

Johansen, Søren (1995). Likelihood-based Inference in Cointegrated Vector Autoregressive Models, Oxford: Oxford University Press.

Johansen, Søren and Katarina Juselius (1990). "Maximum Likelihood Estimation and Inferences on Cointegration, with Applications to the Demand for Money," Oxford Bulletin of Economics and Statistics, 52, 169-210.

Kao, Chinwa D. (1999). "Spurious Regression and Residual-Based Tests for Cointegration in Panel Data," Journal of Econometrics, 90, 1-44.

MacKinnon, James G. (1996). "Numerical Distribution Functions for Unit Root and Cointegration Tests," Journal of Applied Econometrics, 11, 601-618.

MacKinnon, James G., Alfred A. Haug, and Leo Michelis (1999). "Numerical Distribution Functions of Likelihood Ratio Tests for Cointegration," Journal of Applied Econometrics, 14, 563-577.

Maddala, G. S. and S. Wu (1999). "A Comparative Study of Unit Root Tests with Panel Data and A New Simple Test," Oxford Bulletin of Economics and Statistics, 61, 631-652.

Osterwald-Lenum, Michael (1992). "A Note with Quantiles of the Asymptotic Distribution of the Maximum Likelihood Cointegration Rank Test Statistics," Oxford Bulletin of Economics and Statistics, 54, 461-472.

Pedroni, P. (1999). "Critical Values for Cointegration Tests in Heterogeneous Panels with Multiple Regressors," Oxford Bulletin of Economics and Statistics, 61, 653-670.

Pedroni, P. (2004). "Panel Cointegration: Asymptotic and Finite Sample Properties of Pooled Time Series Tests with an Application to the PPP Hypothesis," Econometric Theory, 20, 597-625.
to say the least, and we cannot possibly attempt a comprehensive overview. For those requiring a detailed treatment, Harman's (1976) book-length treatment is a standard reference. Other useful surveys include Gorsuch (1983) and Tucker and MacCallum (1977).
Data Specification
The first item in the Data tab is the Type dropdown menu, which is used to specify whether
you wish to compute a Correlation or Covariance matrix from the series data, or to provide
a User-matrix containing a previously computed measure of association.
Covariance Specification
Here we see the dialog layout
when Correlation or Covariance
is selected.
Most of these fields should be
familiar from the Covariance
Analysis view of a group. Additional details on all of these settings may be found in
Covariance Analysis, beginning on page 526.
Method
You may use the Method dropdown to specify the calculation
method: ordinary Pearson
covariances, uncentered covariances, Spearman rank-order
covariances, and Kendall's tau measures of association.
Note that the computation of factor scores ("Scoring" on page 1000) is not supported for factor models fit to Spearman or Kendall's tau measures. If you wish to compute scores for measures based on these methods you may, however, estimate a factor model fit to a user-specified matrix.
Variables
You should enter the list of series or groups containing series that you wish to employ for
analysis.
(Note that when you create your factor object from a group object or a set of highlighted
series, EViews assumes that you wish to compute a measure of association from the specified series and will initialize the edit field using the series names.)
Sample
You should specify a sample of observations and indicate whether you wish to balance the
sample. By default, EViews will perform listwise deletion when it encounters missing values. This option is ignored when performing partial analysis (which may only be computed
for balanced samples).
Partialing
Partial covariances or correlations may be computed for each pair of analysis variables by
entering a list of conditioning variables in the edit field.
Computation of factor scores is not supported for models fit to partial covariances or correlations. To compute scores for measures in this setting you may, however, estimate a factor
model fit to a user-specified matrix.
Weighting
When you specify a weighting method, you will be prompted to enter the name of a weight
series. There are five different weight choices: frequency, variance, standard deviation,
scaled variance, and scaled standard deviation.
Degrees-of-Freedom Correction
You may choose to compute covariances using the maximum likelihood estimator or the
degree-of-freedom corrected formula. By default, EViews computes ML estimates (no d.f.
correction) of the covariances. Note that this choice may be relevant even if you will be
working with a correlation matrix since standardized data may be used when constructing
factor scores.
User-matrix Specification
If you select User-matrix in the Type dropdown, the dialog changes,
prompting you for the name of
the matrix and optional information for the number of observations, the degrees-of-freedom
adjustment, and column names.
You should specify the
name of an EViews matrix
object containing the measure of association to be
fit. The matrix should be
square and symmetric,
though it need not be a
sym matrix object.
You may enter a scalar
value for the number of observations, or a matrix containing the pairwise numbers of
observations. A number of results will not be computed if a number of observations is
not provided. If the pairwise number of observations is not constant, EViews will use
the minimum number of observations when computing statistics.
Column names may be provided for labeling results. If not provided, variables will be
labeled V1, V2, etc. You need not provide names for all columns; the generic
names will be replaced with the specified names in the order they are provided.
Estimation Specification
The main estimation settings are displayed when you click on the Estimation tab of the Factor Specification dialog. There are four sections in the dialog allowing you to control the
method, number of factors, initial communalities, and other options. We describe each in
turn.
Method
In the Method dropdown menu,
you should select your estimation method. EViews supports
estimation using Maximum
likelihood, Generalized least
squares, Unweighted least
squares, Principal factors, Iterated principal factors, and Partitioned (PACE) methods.
Depending on the method, different settings may appear in the
Options section to the right.
Number of Factors
EViews supports a variety of
methods for selecting the number of factors. By default,
EViews uses Velicer's (1976) minimum average partial method (MAP). Simulation evidence suggests that MAP (along with parallel analysis) is more accurate than more commonly used methods such as Kaiser-Guttman (Zwick and Velicer, 1986). See "Number of Factors," beginning on page 991 for a brief summary of the various methods.
You may change the default by selecting an alternative method from
the dropdown menu. The dialog may change to prompt you for additional input:
• The Minimum eigenvalue method allows you to employ a modified Kaiser-Guttman rule that uses a different threshold. Simply enter your threshold in the Cutoff edit field.

• If you select Fraction of total variance, EViews will prompt you to enter the target threshold.

• If you select either Parallel analysis (mean) or Parallel analysis (quantile) from the dropdown menu, the dialog page will change to provide you with a number of additional options.
Initial Communalities
Initial estimates of the common variances are required for most estimation methods. For
iterative methods like ML and GLS, the initial communalities are simply starting values for
the estimation of uniquenesses. For principal factor estimation, the initial communalities are
fundamental to the construction of the estimates (see Principal Factors, on page 993).
By default, EViews will compute SMC based estimates of the communalities. You may select a different method using the Initial
communalities dropdown menu. Most of the methods should be
self-explanatory; a few require additional comment.
• Partitioned (PACE) performs a non-iterative PACE estimation of the factor model and uses the fitted estimates of the common variances. The number of factors used is taken from the main estimation settings.

• The Random diagonal fractions setting instructs EViews to use a different random fraction of each diagonal element of the original dispersion matrix.

• The User-specified uniqueness values will be subtracted from the original variances to form communality estimates. You will specify the name of the vector containing the uniquenesses in the Vector edit field. By default, EViews will look at the first elements of the C coefficient vector for uniqueness values.

To facilitate the use of this option, EViews will place the estimated uniqueness values in the coefficient vector C. In addition, you may use the equation data member @unique to access the estimated uniqueness from a named factor object.
See Communality Estimation, on page 994 for additional discussion.
Estimation Options
We have already seen the iteration control and random number options that are available for
various estimation and number of factor methods. The remaining options concern the scaling of results and the handling of Heywood cases.
Scaling
Some estimation methods guarantee that the sums of the uniqueness estimates and the estimated communalities equal the diagonal dispersion matrix elements; for example, principal
factors models compute the uniqueness estimates as the residual after accounting for the
estimated communalities.
In other cases, the uniqueness and loadings are both estimated directly. In these settings, it
is possible for the sum of the components to differ substantively from the original variances.
You can enforce the adding up condition by checking the Scale estimates to match
observed variances box. If this option is selected, EViews will automatically adjust your
uniqueness and loadings estimates so the sum of the unique and common variances
matches the diagonals of the dispersion matrix. Note that when scaling has been applied,
the reported uniquenesses and loadings will differ from those used to compute fit statistics;
the main estimation output will indicate the presence of scaled results.
Rotating Factors
You may perform factor rotation on an estimated factor object with two or more retained
factors. Simply call up the Factor Rotation dialog by clicking on the Rotate button or by
selecting Proc/Rotate... from the factor object menu, and select the desired rotation settings.
The Type and Method dropdowns may be used to specify the basic rotation method (see "Types of Rotation," on page 998 for a description of the supported methods). For some methods, you will also be prompted to enter parameter values.

In the depicted example, we specify an oblique Promax rotation with a power parameter of 3.0. The Promax orthogonal pre-rotation step performs Varimax (Orthomax with a parameter of 1).
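For readers curious about the orthogonal rotation criterion itself, the following is a minimal NumPy sketch of the Orthomax family (Varimax corresponds to a parameter of 1, Quartimax to 0). It illustrates the standard gradient-projection algorithm and is not EViews' rotation code.

import numpy as np

def orthomax(L, gamma=1.0, max_iter=500, tol=1e-8):
    """Rotate a p x m loadings matrix L; returns (rotated loadings, rotation matrix T)."""
    p, m = L.shape
    T = np.eye(m)
    obj_old = 0.0
    for _ in range(max_iter):
        LT = L @ T
        # Gradient of the Orthomax criterion with parameter gamma
        G = L.T @ (LT ** 3 - (gamma / p) * LT @ np.diag(np.sum(LT ** 2, axis=0)))
        U, s, Vt = np.linalg.svd(G)
        T = U @ Vt                    # project onto the orthogonal matrices
        obj = s.sum()
        if obj_old != 0.0 and obj < obj_old * (1.0 + tol):
            break
        obj_old = obj
    return L @ T, T

# Usage: rotated, T = orthomax(unrotated_loadings, gamma=1.0)   # Varimax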
By default, EViews does not row weight the loadings prior to rotation. To standardize the
data, simply change the Row weight dropdown menu to Kaiser or Cureton-Mulaik.
In addition, EViews uses the identity matrix (unrotated loadings) as the default starting
value for the rotation iterations. The section labeled Starting values allows you to perform
different initializations:
• You may instruct EViews to use an initial random rotation by selecting Random in the Starting values dropdown. The dialog changes to prompt you to specify the number of random starting matrices to compare, the random number generator, and the initial seed settings. If you select random, EViews will perform the requested number of rotations, and will use the rotation that minimizes the criterion function.

As with the random number generator used in parallel analysis, the value of this initial seed will be saved with the factor object so that by default, subsequent rotation will employ the same random values. You may override this initialization by entering a value in the Seed edit field or press the Clear button to have EViews draw a new random seed value.

• You may provide a user-specified initial rotation. Simply select User-specified in the Starting values dropdown, then provide the name of an $m \times m$ matrix to be used as the starting $T$.

• Lastly, if you have previously performed a rotation, you may use the existing results as starting values for a new rotation. You may, for example, perform an oblique Quartimax rotation starting from an orthogonal Varimax solution.
Once you have specified your rotation method you may click on OK. EViews will estimate
the rotation matrix, and will present a table reporting the rotated loadings, factor correlation, factor rotation matrix, loading rotation matrix, and rotation objective function values.
Note that the factor structure matrix is not included in the table output; it may be viewed
separately by selecting View/Structure Matrix from the factor object menu.
In addition EViews will save the results from the rotation with the factor object. Other routines that rely on estimated loadings such as factor scoring will offer you the option of using
the unrotated or the rotated loadings. You may display your rotation results table at any time
by selecting View/Rotation Results from the factor menu.
Estimating Scores
Factor score estimation may be performed as a factor object view or procedure.
Viewing Scores
To display score coefficients or scores, click on the Score button on the factor toolbar, or
select View/Scores... from the factor menu.
The scores view allows you to display: (1) a table showing the factor score coefficients, indeterminacy and validity indices, and univocality measures; (2) a table of factor score values for a set of observations; (3) a line graph of the scores; (4) scatterplots of scores on pairs of factors; (5) biplots of scores and loadings on pairs of factors.
You should specify the display format by clicking in the list box to choose one of: Table
summary, Spreadsheet, Line graph, Scatterplot, and Biplot graph.
Scores Coefficients
To estimate scores, you must first specify a method for computing the score coefficients. For
a brief discussion of methods, see Score Estimation on page 1001. Details are provided in
Gorsuch (1983), Ten Berge et al. (1999), Grice (2001), McDonald (1981), and Green (1969).
You must first decide whether to use refined coefficients (Exact coefficients), to adjust the
refined coefficients (Coarse coefficients), or to compute coarse coefficients based on the
factor loadings (Coarse loadings). By default, EViews will compute scores estimates using
exact coefficients.
Next, if rotated factors are available, they will be used as a default. You should check Use
unrotated loadings to use the original loadings.
Depending on your selections, you will be prompted for additional information:
• If you select Exact coefficients or Coarse coefficients, EViews will prompt you for a Coef Method. You may choose between the following methods: Regression (Thurstone),
Scores Data
You will need to specify a set of observable variables to use in scoring and a sample of
observations. The estimated scores will be computed by taking linear combinations of the
standardized observables over the specified samples.
If available, EViews will fill the Observables edit field with the names of the original variables used in computation. You will be prompted for whether to standardize the specified
data using the moments obtained from estimation, or whether to standardize the data using
the newly computed moments obtained from the data. In the typical case, where we score
observations using the same data that we used in estimation, these moments will coincide.
When computing scores for observations or variables that differ from estimation, the choice
is of considerable importance.
If you have estimated your object from a user-specified matrix, you must enter the names of
the variables you wish to use as observables. Since moments of the original data are not
available in this setting, they will be computed from the specified variables.
Graph Options
When displaying graph views of your results, you will be prompted for which factors to display; by default, EViews will graph all of your factors. Scatterplots and biplots provide additional options for handling multiple graphs, for centering the graph around 0, and, for biplot graphs, options for labeling observations and for loading scaling, all of which should be familiar from our discussion of principal components (see "Other Graphs (Variable Loadings, Component Scores, Biplots)," beginning on page 549).
Saving Scores
The score procedure allows you to save score values to series in the workfile. When saving
scores using the Proc/Make Scores..., EViews opens a dialog that differs only slightly from
the view dialog. Instead of a Display section, EViews provides an Output specification section in which you should enter a list of scores to be saved or a list of indices for the scores in
the edit field.
To save the first two factors as series AA and BB, you may enter AA BB in the edit field. If,
instead, you provide the indices 1 2, EViews will save the first two factors using the
default names F1 and F2, unless you have previously named your factors using Proc/
Name Factors....
Factor Views
EViews provides a number of factor object views that allow you to examine the properties of
your estimated factor model.
Specification
The specification view provides a
text representation of the estimation
specification, as well as the rotation
specifications and assigned factor
names (if relevant).
In this example, we see that we have
estimated a ML factor model for
seven variables, using a convergence
criterion of 1e-07. The model was
estimated using the default SMC initial communalities and Velicer's MAP
criterion to select the number of factors.
In addition, the object has a valid rotation method, oblique Quartimax, that was estimated
using the default 25 random oblique rotations. If no rotations had been performed, the rotation specification would have read Factor does not have a valid rotation.
Lastly, we see that we have provided two factor names, Verbal and Spatial, that will be
used in place of the default names of the first two factors F1 and F2.
Estimation Output
Select View/Estimation Output to display the main estimation output (unrotated loadings,
communalities, uniquenesses, variance accounted for by factors, selected goodness-of-fit
statistics). Alternately, you may click on the Stats toolbar button to display this view.
Rotation Results
Click View/Rotation Results to show the output table produced when performing a rotation
(rotated loadings, factor correlation, factor rotation matrix, loading rotation matrix, and
rotation objective function values).
Goodness-of-fit Summary
Select View/Goodness-of-fit Summary to display a table of goodness-of-fit statistics. For
models estimated by ML or GLS, EViews computes a large number of absolute and relative
fit measures. For details on these measures, see Model Evaluation, beginning on page 995.
Matrix Views
You may display spreadsheet views of various matrices of interest. These matrix views are
divided into four groups: matrices based on the observed dispersion matrix, matrices based
on the reduced matrix, fitted matrices, and residual matrices.
Observed Covariances
You may examine the observed matrices by selecting View/Observed Covariance Matrix/
and the desired sub-matrix:
• The Covariance entry displays the original dispersion matrix, while the Scaled Covariance matrix scales the original matrix to have unit diagonals. In the case where the original matrix is a correlation, these two matrices will obviously be the same.

• Observations displays a matrix of the number of observations used in each pairwise comparison.

• If you select Anti-image Covariance, EViews will display the anti-image covariance of the original matrix. The anti-image covariance is computed by scaling the rows and columns of the inverse (or generalized inverse) of the original matrix by the inverse of its diagonals:

$$A = \mathrm{diag}(S^{-1})^{-1}\, S^{-1}\, \mathrm{diag}(S^{-1})^{-1}$$

• Partial correlations will display the matrix of partial correlations, where every element represents the partial correlation of the variables conditional on the remaining variables. The partial correlations may be computed by scaling the anti-image covariance to unit diagonals and then performing a sign adjustment.
Reduced Covariance
You may display the initial or final reduced matrices by selecting View/Reduced Covariance
Matrix/ and Using Initial Uniqueness or Using Final Uniqueness.
Fitted Covariances
To display the fitted covariance matrices, select View/Fitted Covariance Matrix/ and the
desired sub-matrix. Total Covariance displays the estimated covariance using both the common and unique variance estimates, while Common Covariance displays the estimate of the
variance based solely on the common factors.
Residual Covariances
The different residual matrices are based on the total and the common covariance matrix.
Select View/Residual Covariance Matrix/ and the desired matrix, Using Total Covariance,
or Using Common Covariance. The residual matrix computed using the total covariance
will generally have numbers close to zero on the main diagonal; the matrix computed using
the common covariance will have numbers close to the uniquenesses on the diagonal (see
Scaling, on page 965 for caveats).
Loadings Views
You may examine your rotated or unrotated loadings in spreadsheet or graphical form.
You may select View/Loadings/Loadings Matrix to display the current loadings matrix in spreadsheet form. If a rotation has been performed, then this view will show the rotated loadings; otherwise it will display the unrotated loadings. To view the unrotated loadings, you may always select View/Loadings/Unrotated Loadings Matrix.
Scores
Select View/Scores... to compute estimates of factor score coefficients and to compute factor score values for observations. This view and the corresponding procedure are described
in detail in Estimating Scores, on page 967.
Eigenvalues
One important class of factor model diagnostics is an examination of eigenvalues of the
unreduced and the reduced matrices. In addition to being of independent interest, these
eigenvalues are central to various methods for selecting the number of factors.
Select View/Eigenvalues... to open the Eigenvalue
Display dialog. By default, EViews will display a table
view containing a description of the eigenvalues of
the observed dispersion matrix.
The dialog options allow you to control the output format and method of calculation:
• You may change the Output format to display a graph of the ordered eigenvalues. By default, EViews will display the resulting Scree plot along with a line representing the mean eigenvalue.

• To base calculations on the scaled observed, initial reduced or final reduced matrix, select the appropriate item in the Eigenvalues of dropdown.

• For table display, you may include the corresponding eigenvectors and dispersion matrix in the output by clicking on the appropriate Additional output checkbox.

• For graph display, you may also display the eigenvalue differences, and the cumulative proportion of variance represented by each eigenvalue. The difference graphs also display the mean value of the difference; the cumulative proportion graph shows a reference line with slope equal to the mean eigenvalue.
Additional Views
Additional views allow you to examine:
• The matrix of maximum absolute correlations (View/Maximum Absolute Correlation).

• The squared multiple correlations (SMCs) and the related anti-image covariance matrix (View/Squared Multiple Correlations).

• The Kaiser-Meyer-Olkin (Kaiser 1970; Kaiser and Rice, 1974; Dziuban and Shirkey, 1974) measure of sampling adequacy (MSA) and corresponding matrix of partial correlations (View/Kaiser's Measure of Sampling Adequacy).

The first two views correspond to the calculations used in forming initial communality estimates (see "Communality Estimation" on page 994). The latter view is an index of factorial simplicity that lies between 0 and 1 and indicates the degree to which the data are suitable for common factor analysis. Values for the MSA above 0.90 are deemed "marvelous"; values in the 0.80s are "meritorious"; values in the 0.70s are "middling"; values in the 0.60s are "mediocre"; values in the 0.50s are "miserable"; and all others are "unacceptable" (Kaiser and Rice, 1974).
Factor Procedures
The factor procedures may be accessed either by clicking on the Proc button on the factor toolbar or by selecting Proc from the main factor object menu, and then selecting the desired procedure:

• Specify/Estimate... is the main procedure for estimating the factor model. When selected, EViews will display the main Factor Specification dialog. See "Specifying the Model" on page 960.

• Rotate... is used to perform factor rotation using the Factor Rotation dialog. See "Rotating Factors" on page 966.

• Make Scores... is used to save estimated factor scores as series in the workfile. See "Estimating Scores" on page 967.

• Name Factors... may be used to provide user-specified labels for the factors. By default, the factors will be labeled "F1" and "F2" or "Factor 1" and "Factor 2", etc. To provide your own names, select Proc/Name Factors... and enter a list of factor names. EViews will use the specified names instead of the generic labels in table and graph output.

To clear a set of previously specified factor names, simply call up the dialog and delete the existing names.

• Clear Rotation removes an existing rotation from the object.
For a full list of the factor object data members, see Factor Data Members on page 176 in
the Command and Programming Reference.
An Example
We illustrate the basic features of the factor object by analyzing a subset of the classic Holzinger and Swineford (1939) data, consisting of measures on 24 psychological tests for 145
Chicago area children attending the Grant-White school (Gorsuch, 1983). A large number of
authors have used these data for illustrating various features of factor analysis. The raw data
are provided in the EViews workfile Holzinger24.WF1. We will work with a subset consisting of seven of the 24 variables: VISUAL (visual perception), CUBES (spatial relations),
PARAGRAPH (paragraph comprehension), SENTENCE (sentence completion), WORDM
(word meaning), PAPER1 (paper shapes), and FLAGS1 (lozenge shapes).
(As noted by Gorsuch (1983, p. 12), the raw data and the published correlations do not
match; for example, the data in Holzinger24.WF1 produces correlations that differ from
those reported in Table 7.4 of Harman (1976). Here, we will assume that the raw data are
correct; later, we will show you how to work directly with the correlation matrix reported by Harman.)
starting values for the communalities will be taken from the squared multiple correlations
(SMCs). We will use the default settings for our example so you may click on OK to continue.
EViews estimates the model and displays the results view. Here, we see the top portion of
the main results. The heading information provides basic information about the settings
used in estimation, and basic status information. We see that the estimation used all 145
observations in the workfile, and converged after five iterations.
Factor Method: Maximum Likelihood
Date: 09/11/06 Time: 12:00
Covariance Analysis: Ordinary Correlation
Sample: 1 145
Included observations: 145
Number of factors: Minimum average partial
Prior communalities: Squared multiple correlation
Convergence achieved after 5 iterations
Unrotated Loadings
                    F1           F2    Communality    Uniqueness
VISUAL        0.490722     0.567542       0.562912      0.437088
CUBES         0.295593     0.342066       0.204384      0.795616
PARAGRAPH     0.855444    -0.124213       0.747214      0.252786
SENTENCE      0.817094    -0.154615       0.691548      0.308452
WORDM         0.810205    -0.162990       0.682998      0.317002
PAPER1        0.348352     0.425868       0.302713      0.697287
FLAGS1        0.462895     0.375375       0.355179      0.644821
Below the heading is a section displaying the estimates of the unrotated orthogonal loadings, communalities, and uniqueness estimates obtained from estimation.
We first see that Velicer's MAP method has retained two factors, labeled "F1" and "F2". A brief examination of the unrotated loadings indicates that PARAGRAPH, SENTENCE and WORDM load on the first factor, while VISUAL, CUBES, PAPER1, and FLAGS1 load on the second factor. We therefore might reasonably label the first factor as a measure of verbal ability and the second factor as an indicator of spatial ability. We will return to this interpretation shortly.

To the right of the loadings are communality and uniqueness estimates which apportion the diagonals of the correlation matrix into common (explained) and individual (unexplained) components. The communalities are obtained by computing the row norms of the loadings matrix, while the uniquenesses are obtained directly from the ML estimation algorithm. We see, for example, that 56% ($0.563 = 0.491^2 + 0.568^2$) of the correlation for the VISUAL variable and 69% ($0.692 = 0.817^2 + (-0.155)^2$) of the SENTENCE correlation are accounted for by the two common factors.
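A quick arithmetic check (Python, not EViews code) of the communality calculation just described, using the loadings from the table above:

f1 = {"VISUAL": 0.490722, "SENTENCE": 0.817094}
f2 = {"VISUAL": 0.567542, "SENTENCE": -0.154615}

for name in ("VISUAL", "SENTENCE"):
    communality = f1[name] ** 2 + f2[name] ** 2        # row norm of the loadings
    uniqueness = 1.0 - communality                     # complement for a correlation matrix
    print(name, round(communality, 6), round(uniqueness, 6))
# VISUAL   -> roughly 0.563 and 0.437, matching the table
# SENTENCE -> roughly 0.692 and 0.308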
The next section provides summary information on the total variance and proportion of common variance accounted for by each of the factors, derived by taking column norms of the loadings matrix. First, we note that the variance accounted for by the two factors is 3.55, which is close to 51% ($3.55 / 7.0$) of the total variance (sum of the diagonals of the correlation matrix). Furthermore, we see that the first factor F1 accounts for 77% ($2.72 / 3.55$) of the common variance and the second factor F2 accounts for the remaining 23% ($0.82 / 3.55$).
Factor      Variance    Cumulative    Difference    Proportion    Cumulative
F1          2.719663      2.719663      1.892380      0.766762      0.766762
F2          0.827282      3.546945           ---      0.233238      1.000000
Total       3.546945      3.546945           ---      1.000000           ---
The bottom portion of the output shows basic goodness-of-fit information for the estimated specification. The first column displays the discrepancy function, number of parameters, and degrees-of-freedom (against the saturated model) for the estimated specification. For this extraction method (ML), EViews also displays the chi-square goodness-of-fit test and the Bartlett adjusted version of the test. Both versions of the test have p-values of over 0.75, indicating that two factors adequately explain the variation in the data.
                             Model    Independence    Saturated
Discrepancy               0.034836        2.411261     0.000000
Chi-square statistic      5.016316        347.2215          ---
Chi-square prob.            0.7558          0.0000          ---
Bartlett chi-square       4.859556        339.5859          ---
Bartlett probability        0.7725          0.0000          ---
Parameters                      20               7           28
Degrees-of-freedom               8              21          ---
For purposes of comparison, EViews also presents results for the independence (no factor)
model which show that a model with no factors does not adequately model the variances.
Goodness-of-fit Summary
Factor: FACTOR01
Date: 09/13/06   Time: 15:36

                          Model    Independence    Saturated
Parameters                   20               7           28
Degrees-of-freedom            8              21          ---
Parsimony ratio        0.380952        1.000000          ---

(The remainder of the summary reports the discrepancy, chi-square, and Bartlett statistics together with the absolute and relative fit measures for each specification.)
As you can see, EViews computes a large number of absolute and relative fit measures. In
addition to the discrepancy, chi-square and Bartlett chi-square statistics seen previously,
EViews computes scaled information criteria, expected cross-validation indices, generalized
fit indices, as well as various measures based on estimates of noncentrality. Also presented
are incremental fit indices which compare the fit of the estimated model against the independence model (see Model Evaluation, beginning on page 995 for discussion).
In addition, you may examine various matrices associated with the estimation procedure.
You may examine the computed correlation matrix, various reduced and fitted matrices, and
a variety of residual matrices. For example, you may view the residual variance matrix by
selecting View/Residual Covariance Matrix/Using Total Covariance.
Note that the diagonal elements of the residual matrix are zero since we have subtracted off
the total fitted covariance (which includes the uniquenesses). To replace the (almost) zero
diagonals with the uniqueness estimates, select instead View/Residual Covariance Matrix/
Using Common Covariance.
You may examine eigenvalues
of relevant matrices using the
eigenvalue view. EViews allows
you to compute eigenvalues for
a variety of matrices and display
the results in tabular or graphical form, but for the moment
we will simply produce a scree
plot for the observed correlation
matrix. Select View/Eigenvalues... and change the Output
format to Graph.
Click on OK to accept the settings. EViews will display the
scree plot for the data, along
with a line indicating the average eigenvalue.
To examine the Kaiser Measure of Sampling Adequacy, select View/Kaiser's Measure of Sampling Adequacy. The top portion of the display shows the individual measures and the overall MSA (0.803), which falls in the category deemed by Kaiser to be "meritorious".
MSA:
VISUAL          0.800894
CUBES           0.825519
PARAGRAPH       0.785366
SENTENCE        0.802312
WORDM           0.800434
PAPER1          0.800218
FLAGS1          0.839796

Kaiser's MSA    0.803024
The bottom portion of the display shows the matrix of partial correlations:
Partial Correlation:

              VISUAL      CUBES   PARAGRAPH   SENTENCE      WORDM     PAPER1     FLAGS1
VISUAL      1.000000
CUBES       0.169706   1.000000
PARAGRAPH   0.051684   0.070761    1.000000
SENTENCE    0.015776  -0.057423    0.424832   1.000000
WORDM       0.070918   0.044531    0.420902   0.342159   1.000000
PAPER1      0.239682   0.192417    0.102062   0.042837  -0.088688   1.000000
FLAGS1      0.321404   0.047793    0.022723   0.105600   0.050006   0.102442   1.000000
Each cell of this matrix contains the partial correlation for the two variables, controlling for
the remaining variables.
Factor Rotation
Factor rotation may be used to simplify the factor structure and to ease the interpretation of
factors. For this example, we will consider one orthogonal and one oblique rotation. To perform a factor rotation, click on the Rotate button on the factor toolbar or select Proc/
Rotate... from the main factor menu.
                    F1           F2
VISUAL        0.255573     0.705404
CUBES         0.153876     0.425095
PARAGRAPH     0.843364     0.189605
SENTENCE      0.818407     0.147509
WORDM         0.814965     0.137226
PAPER1        0.173214     0.522217
FLAGS1        0.298237     0.515978
As with the unrotated loadings, the variables PARAGRAPH, SENTENCE, and WORDM load
on the first factor while VISUAL, CUBES, PAPER1, and FLAGS1 load on the second factor.
The remaining sections of the output display the rotated factor correlation, initial rotation
matrix, the rotation matrices applied to the factors and loadings, and objective functions for
the rotations. In this case, the factor correlation and initial rotation matrices are identity
matrices since we are performing an orthogonal rotation from the unrotated loadings. The
remaining results are presented below:
Factor rotation matrix: T
                  F1           F2
F1          0.934003     0.357265
F2         -0.357265     0.934003

Loading rotation matrix: (T')^{-1}
                  F1           F2
F1          0.934003     0.357265
F2         -0.357265     0.934003

Rotation objective function values: 1.226715, 0.909893
Note that the factor rotation and loading rotation matrices are identical since we are performing an orthogonal rotation.
Perhaps more interesting
are the results for an
oblique rotation. To replace
the Varimax results with an
oblique Quartimax/Quartimin rotation, select Proc/
Rotate... and change the
Type dropdown to
Oblique, and select Quartimax. We will make a few
other changes in the dialog.
We will use random
orthogonal rotations as
starting values for our rotation, so that under Starting values, you should select Random. Set the random generator
options as depicted and change the convergence tolerance to 1e-06. By default, EViews will
perform 25 oblique rotations using random orthogonal rotation matrices as the starting values, and will select the results with the smallest objective function value. Click on OK to
accept these settings.
The top portion of the results shows information on the rotation method and initial loadings.
Just below the header are the rotated loadings. Note that the relative importance of the
VISUAL, CUBES, PAPER1, and FLAGS1 loadings on the second factor is somewhat more
apparent for the oblique factors.
                     F1           F2
VISUAL        -0.016856     0.759022
CUBES         -0.010310     0.457438
PARAGRAPH      0.846439     0.033230
SENTENCE       0.836783    -0.009926
WORDM          0.837340    -0.021054
PAPER1        -0.030042     0.565436
FLAGS1         0.109927     0.530662

The factor correlation matrix is:

                     F1           F2
F1             1.000000
F2             0.527078     1.000000
with the large off-diagonal element indicating that the orthogonality factor restriction was
very much binding.
The rotation matrices and objective functions are given by:
Factor rotation matrix: T
                     F1           F2
F1             0.984399     0.668380
F2            -0.175949     0.743820

Loading rotation matrix: (T')^{-1}
                     F1           F2
F1             0.875272     0.207044
F2            -0.786500     1.158366
Note that in the absence of orthogonality, the factor rotation and loading rotation matrices
differ.
Factor Scores
The factors used to explain the
covariance structure of the
observed data are unobserved,
but may be estimated from the
rotated or unrotated loadings
and observable data.
Click on View/Scores... to bring up the factor score dialog. As you can see, there are several
ways to estimate the factors and several views of the results. For now, we will focus on displaying a summary of the factor score regression estimates, and in producing a biplot of the
scores and loadings.
The default method of producing scores is to use exact coefficients from Thurstone's regression method, and to apply these coefficients to the observables data used in factor extraction. In our example, EViews will prefill the sample and observables information; all we need to do is to select our Display output setting, and the method for computing coefficients. Selecting Table summary, EViews produces output describing the score coefficient estimation.
The top portion of the output summarizes the factor score coefficient estimation settings and
displays the factor coefficients used in computing scores:
Factor Score Summary
Factor: Untitled
Date: 09/12/06 Time: 11:52
Exact scoring coefficients
Method: Regression (based on rotated loadings)
Standardize observables using moments from estimation
Sample: 1 145
Included observations: 145
Factor Coefficients:
                  VERBAL      SPATIAL
VISUAL          0.030492     0.454344
CUBES           0.010073     0.150424
PARAGRAPH       0.391755     0.101888
SENTENCE        0.314600     0.046201
WORDM           0.305612     0.035791
PAPER1          0.011325     0.211658
FLAGS1          0.036384     0.219118
We see that the VERBAL score for an individual is computed as a linear combination of the
centered data for VISUAL, CUBES, etc., with weights given by the first column of coefficients
(0.03, 0.01, etc.).
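For reference, a minimal sketch (Python/NumPy, not EViews code) of the regression (Thurstone) method for exact score coefficients; the treatment of oblique factors via the structure matrix (loadings times the factor correlation) is an assumption of the sketch.

import numpy as np

def regression_score_coefficients(R, loadings, phi=None):
    """Exact regression coefficients B = R^{-1} * structure.
    R is the observed correlation matrix; for oblique factors pass the factor
    correlation matrix phi so that structure = loadings @ phi."""
    structure = loadings if phi is None else loadings @ phi
    return np.linalg.solve(R, structure)            # p x m coefficient matrix

def scores(X, coef):
    """Scores are linear combinations of the standardized observables."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z @ coef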
The next section contains the factor indeterminacy indices:
Indeterminacy Indices:
             Multiple-R    R-squared    Minimum Corr.
VERBAL         0.940103     0.883794         0.767589
SPATIAL        0.859020     0.737916         0.475832
The indeterminacy indices show that the correlation between the estimated factors and the variables is high; the multiple correlation for the first factor is well over 0.90, while the correlation for the second factor is around 0.85. The minimum correlation indices are also reasonable, suggesting that alternative factor score solutions are highly correlated. At a minimum, the correlation between two different measures of the SPATIAL factors will be nearly 0.50.
The following sections report the validity coefficients, the off-diagonal elements of the univocality matrix, and for comparison purposes, the theoretical factor correlation matrix and
estimated scores correlation:
Validity Coefficients:
                Validity
VERBAL          0.940103
SPATIAL         0.859020

Univocality (rows are factors; columns are factor scores):
                  VERBAL      SPATIAL
VERBAL               ---     0.590135
SPATIAL         0.539237          ---

Factor Correlation:
                  VERBAL      SPATIAL
VERBAL          1.000000
SPATIAL         0.527078     1.000000

Estimated Scores Correlation:
                  VERBAL      SPATIAL
VERBAL          1.000000
SPATIAL         0.627734     1.000000
The validity coefficients are both in excess of the Gorsuch (1983) recommended 0.80, and
close to the stricter target of 0.90 advocated for using the estimated scores as replacements
for the original variables.
The univocality matrix reports the correlations between the factors and the factor scores,
which should be similar to the corresponding elements of the factor correlation matrix.
Comparing results, we see that univocality correlation of 0.539 between the SPATIAL factor
and the VERBAL estimated scores is close to the population correlation value of 0.527. The
correlation between the VERBAL factor and the SPATIAL estimated score is somewhat
higher, 0.590, but still close to the population correlation.
Similarly, the estimated scores correlation matrix should be close to the population factor
correlation matrix. The off-diagonal values generally match, though as is often the case, the
factor score correlation of 0.627 is a bit higher than the population value of 0.527.
To display a biplot of using these scores, select View/Scores... and select Biplot graph in
the Display list box.
The positive correlation between the VERBAL and SPATIAL scores is obvious. The outliers
show that individual 96 scores high and individual 38 low on both spatial and verbal ability,
while individual 52 scores poorly on spatial relative to verbal ability.
To save scores to the workfile, select Proc/Make Scores... and fill out the dialog. The procedure dialog differs from the view dialog only in the Output specification section. Here, you should enter a list of scores to be saved or a list of indices for the scores. Since we have previously named our factors, we may specify the indices "1 2" and click on OK.
EViews will open an untitled group containing the results saved in the series VERBAL and
SPATIAL.
Background
We begin with a brief sketch of the basic features of the common factor model. Our notation
parallels the discussion in Johnson and Wichern (1992).
The Model
The factor model assumes that for individual $i$, the observable multivariate $p$-vector $X_i$ is generated by:

$$X_i - \mu = L F_i + \epsilon_i \qquad (47.1)$$

where $\mu$ is a $p$-vector of variable means, $L$ is a $p \times m$ matrix of factor loadings, $F_i$ is an $m$-vector of common factors with $\mathrm{var}(F_i) = \Phi$, and $\epsilon_i$ is a $p$-vector of unique errors with diagonal variance matrix $\Psi$, assumed uncorrelated with the factors. The dispersion matrix of $X$ may then be written as:

$$\mathrm{var}(X) = E[(X_i - \mu)(X_i - \mu)'] = E[(L F_i + \epsilon_i)(L F_i + \epsilon_i)'] = L \Phi L' + \Psi \qquad (47.2)$$

The variances of the individual variables may be decomposed into:

$$\sigma_{jj} = h_j^2 + \psi_j \qquad (47.3)$$

for each $j$, where the $h_j^2$ are taken from the diagonal elements of $L \Phi L'$, and $\psi_j$ is the corresponding diagonal element of $\Psi$. $h_j^2$ represents the common portion of the variance of the j-th variable, termed the communality, while $\psi_j$ is the unique portion of the variance, also referred to as the uniqueness.

Furthermore, the factor structure matrix containing the correlations between the variables and factors may be obtained from:

$$\mathrm{var}(X, F) = E[(X_i - \mu) F_i'] = E[(L F_i + \epsilon_i) F_i'] = L \Phi \qquad (47.4)$$

Initially, we make the further assumption that the factors are orthogonal so that $\Phi = I$ (we will relax this assumption shortly). Then:

$$\mathrm{var}(X) = L L' + \Psi \qquad\quad \mathrm{var}(X, F) = L \qquad (47.5)$$

Note that with orthogonal factors, the communalities $h_j^2$ are given by the diagonal elements of $L L'$ (the row-norms of $L$).

The primary task of factor analysis is to model the $p(p+1)/2$ observed variances and covariances of $X$ as functions of the $pm$ factor loadings in $L$, and $p$ specific variances in $\Psi$. Given estimates of $\hat L$ and $\hat\Psi$, we may form estimates of the fitted total variance matrix, $\hat\Sigma = \hat L \hat L' + \hat\Psi$, and the fitted common variance matrix, $\hat\Sigma_C = \hat L \hat L'$. If $S$ is the observed dispersion matrix, these fitted matrices may be compared with $S$ to assess the adequacy of the factor model.
Number of Factors
Choosing the number of factors is generally agreed to be one of the most important decisions one makes in factor analysis (Preacher and MacCallum, 2003; Fabrigar, et al., 1999;
Jackson, 1993; Zwick and Velicer, 1986). Accordingly, there is a large and varied literature
describing methods for determining the number of factors, of which the references listed
here are only a small subset.
Broken Stick
We may compare the relative proportions of the total variance that are accounted for by
each eigenvalue to the expected proportions obtained by chance (Jackson, 1993). More precisely, the broken stick method compares the proportion of variance given by the j-th largest
eigenvalue of the unreduced matrix with the corresponding expected value obtained from
the broken stick distribution. The number of factors retained is the number of proportions
that exceed their expected values.
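A small sketch (Python/NumPy, not EViews code) of the broken stick comparison; the expected proportion for the j-th largest of p eigenvalues is taken to be (1/p) times the sum of 1/i for i from j to p, the usual broken stick distribution.

import numpy as np

def broken_stick_retained(eigenvalues):
    lam = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p = len(lam)
    observed = lam / lam.sum()                                 # observed proportions
    expected = np.array([sum(1.0 / i for i in range(j, p + 1)) / p
                         for j in range(1, p + 1)])            # broken stick proportions
    # number of observed proportions exceeding their expected values
    return int(np.sum(observed > expected))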
Parallel Analysis
Parallel analysis (Horn, 1965; Humphreys and Ilgen, 1969; Humphreys and Montanelli,
1975) involves comparing eigenvalues of the (unreduced or reduced) dispersion matrix to
results obtained from simulation using uncorrelated data.
The parallel analysis simulation is conducted by generating multiple random data sets of
independent random variables with the same variances and number of observations as the
original data. The Pearson covariance or correlation matrix of the simulated data is computed and an eigenvalue decomposition performed for each data set. The number of factors
retained is then based on the number of eigenvalues that exceed their simulated counterpart. The threshold for comparison is typically chosen to be the mean values of the simulated data as in Horn (1965), or a specific quantile as recommended by Glorfeld (1995).
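The following is a minimal simulation sketch (Python/NumPy, not EViews code) of parallel analysis as just described; the number of replications and the use of standard normal draws are assumptions of the sketch.

import numpy as np

def parallel_analysis(X, reps=200, quantile=None, seed=0):
    """Compare observed correlation eigenvalues with those of uncorrelated data."""
    rng = np.random.default_rng(seed)
    T, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((reps, p))
    for r in range(reps):
        Z = rng.normal(size=(T, p))                            # independent data
        sims[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = (sims.mean(axis=0) if quantile is None
                 else np.quantile(sims, quantile, axis=0))
    return int(np.sum(obs > threshold))                        # retained number of factors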
Estimation Methods
There are several methods for extracting (estimating) the factor loadings and specific variances from an observed dispersion matrix.
EViews supports estimation using maximum likelihood (ML), generalized least squares
(GLS), unweighted least squares (ULS), principal factors and iterated principal factors, and
partitioned covariance matrix estimation (PACE).
The ML, GLS, and ULS estimators are defined in terms of the discrepancy functions:
$D_{ML}(S, \Sigma) = \mathrm{tr}(\Sigma^{-1}S) - \ln|\Sigma^{-1}S| - p$
$D_{GLS}(S, \Sigma) = \mathrm{tr}([I_p - S^{-1}\Sigma]^2)/2$   (47.6)
$D_{ULS}(S, \Sigma) = \mathrm{tr}([S - \Sigma]^2)/2$
Each estimation method involves minimizing the appropriate discrepancy function with respect to the loadings matrix $L$ and unique variances $\Psi$. An iterative algorithm for this optimization is detailed in Jöreskog (1977). The functions all achieve an absolute minimum value of 0 when $\Sigma = S$, but in general this minimum will not be achieved.
The ML and GLS methods are scale invariant so that rescaling of the original data matrix or
the dispersion matrix does not alter the basic results. The ML and GLS methods do require
that the dispersion matrix be positive definite.
ULS does not require a positive definite dispersion matrix. The solution is equivalent to the
iterated principal factor solution.
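For concreteness, the three discrepancy functions in (47.6) may be evaluated as in the following Python/NumPy sketch, where S is the observed dispersion matrix and Sigma is a fitted matrix $LL' + \Psi$ (the function name is illustrative only, and this is not EViews code):

import numpy as np

def discrepancies(S, Sigma):
    # Discrepancy functions in (47.6); S observed, Sigma fitted.
    p = S.shape[0]
    SinvS = np.linalg.solve(Sigma, S)             # Sigma^{-1} S
    d_ml = np.trace(SinvS) - np.log(np.linalg.det(SinvS)) - p
    M = np.eye(p) - np.linalg.solve(S, Sigma)     # I - S^{-1} Sigma
    d_gls = 0.5 * np.trace(M @ M)
    d_uls = 0.5 * np.trace((S - Sigma) @ (S - Sigma))
    return d_ml, d_gls, d_uls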
Principal Factors
The principal factor (principal axis) method is derived from the notion that the common factors should explain the common portion of the variance: the off-diagonal elements of the
dispersion matrix and the communality portions of the diagonal elements. Accordingly, for
some initial estimate of the unique variances $\Psi_0$, we may define the reduced dispersion matrix $S_R(\Psi_0) = S - \Psi_0$, and then fit this matrix using common factors (see, for example, Gorsuch, 1983).
The principal factor method fits the reduced matrix using the first $m$ eigenvalues and eigenvectors. Loading estimates, $\hat{L}_1$, are obtained from the eigenvectors of the reduced matrix. Given the loading estimates, we may form a common variance residual matrix, $E_1 = S - \hat{L}_1\hat{L}_1'$. Estimates of the uniquenesses are obtained from the diagonal elements of this residual matrix.
Communality Estimation
The construction of the reduced matrix is often described as replacing the diagonal elements
of the dispersion matrix with estimates of the communalities. The estimation of these communalities has received considerable attention in the literature. Among the approaches are
(Gorsuch, 1983):
Fraction of the diagonals: use a constant fraction $\alpha$ of the original diagonal elements of $S$. One important special case is to use $\alpha = 1$; the resulting estimates may be
viewed as those from a truncated principal components solution.
Largest correlation: select the largest absolute correlation of each variable with any
other variable in the matrix.
Squared multiple correlations (SMC): by far the most popular method; uses the
squared multiple correlation between a variable and the other variables as an estimate
of the communality. SMCs provide a conservative communality estimate since they
are a lower bound to the communality in the population. The SMC-based communalities are computed as $h_{i0}^2 = 1 - (1/r^{ii})$, where $r^{ii}$ is the i-th diagonal element of the inverse of the observed dispersion matrix. Where the inverse cannot be computed we may employ instead the generalized inverse.
Iteration
Having obtained principal factor estimates based on initial estimates of the communalities,
we may repeat the principal factors extraction using the row norms of L 1 as updated estimates of the communalities. This step may be repeated for a fixed number of iterations, or
until the results are stable.
While the approach is a popular one, some authors are strongly opposed to iterating principal factors to convergence (e.g., Gorsuch, 1983, pp. 107-108). Performing a small number of
iterations appears to be less contentious.
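A minimal Python/NumPy sketch of principal-axis extraction with SMC starting communalities and a fixed number of iterations follows. It assumes S is a correlation matrix and is intended only to illustrate the mechanics described above, not to reproduce EViews' implementation:

import numpy as np

def principal_factors(S, m, iterations=10):
    # Principal-axis factoring: SMC initial communalities, then repeated
    # eigen-decomposition of the reduced matrix with updated communalities.
    r_inv_diag = np.diag(np.linalg.pinv(S))      # generalized inverse if S is singular
    h2 = 1.0 - 1.0 / r_inv_diag                  # SMC communality estimates
    for _ in range(iterations):
        S_r = S - np.diag(np.diag(S) - h2)       # reduced matrix: communalities on diagonal
        vals, vecs = np.linalg.eigh(S_r)
        idx = np.argsort(vals)[::-1][:m]
        L = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
        h2 = np.sum(L**2, axis=1)                # updated communalities (row norms of L)
    psi = np.diag(S) - h2
    return L, psi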
Model Evaluation
One important step in factor analysis is evaluation of the fit of the estimated model. Since a
factor analysis model is necessarily an approximation, we would like to examine how well a
specified model fits the data, taking into account the number of parameters (factors) employed
and the sample size.
There are two general classes of indices for model selection and evaluation in factor analytic
models. The first class, which may be termed absolute fit indices, are evaluated using the
results of the estimated specification. Various criteria have been used for measuring absolute
fit, including the familiar chi-square test of model adequacy. There is no reference specification against which the model is compared, though there may be a comparison with the
observed dispersion of the saturated model.
The second class, which may be termed relative fit indices, compare the estimated specification against results for a reference specification, typically the zero common factor (independence model).
Before describing the various indices we first define the chi-square test statistic as a function of the discrepancy function, $T = (N - k)D(S, \hat{\Sigma})$, and note that a model with $p$ variables and $m$ factors has $q = p(m + 1) - m(m - 1)/2$ free parameters ($pm$ factor loadings and $p$ uniqueness elements, less $m(m - 1)/2$ implicit zero correlation restrictions on the factors). Since there are $p(p + 1)/2$ distinct elements of the dispersion matrix, there are a total of $df = p(p + 1)/2 - q$ remaining degrees-of-freedom.
One useful measure of the parsimony of a factor model is the parsimony ratio $PR = df/df_0$, where $df_0$ is the degrees of freedom for the independence model.
Note also that the measures described below are not reported for all estimation methods.
Absolute Fit
Most of the absolute fit measures are based on the number of observations and conditioning
variables, the estimated discrepancy function, D , and the number of degrees-of-freedom.
It is well known that the performance of the $T$ statistic is poor for small samples and non-normal settings. One popular adjustment for small sample size involves applying a Bartlett correction to the test statistic so that the multiplicative factor $N - k$ in the definition of $T$ is replaced by $N - k - (2p + 4m + 5)/6$ (Johnson and Wichern, 1992).
Note that there are two distinct sets of chi-square tests that are commonly performed. The first set
compares the fit of the estimated model against a saturated model; the second set of tests
examines the fit of the independence model. The former are sometimes termed tests of
model adequacy since they evaluate whether the estimated model adequately fits the data.
The latter tests are sometimes referred to as tests of sphericity since they test the assumption
that there are no common factors in the data.
Information Criteria
Standard information criteria (IC) such as Akaike (AIC), Schwarz (SC), Hannan-Quinn (HQ)
may be adapted for use with ML and GLS factor analysis. These indices are useful measures
of fit since they reward parsimony by penalizing based on the number of parameters.
Construction of the EViews factor analysis information criteria measures employs a scaled version of the discrepancy as the log-likelihood, $l = -((N - k)/2)D$, and begins by forming the standard IC. Following Akaike (1987), we re-center the criteria by subtracting off the value for the saturated model, and following Cudeck and Browne (1983) and EViews convention, we further scale by the number of observations to eliminate the effect of sample size. The resulting factor analysis forms of the information criteria are given by:
$AIC = (N - k)D/N - (2/N)\,df$
$SC = (N - k)D/N - (\ln(N)/N)\,df$   (47.7)
$HQ = (N - k)D/N - (2\ln(\ln(N))/N)\,df$
You should be aware that these statistics are often quoted in unscaled form, sometimes
without adjusting for the saturated model. Most often, if there are discrepancies, multiplying
the EViews reported values by N will line up results. Note also that the current definition
uses the adjusted number of observations in the numerator of the leading term.
When using information criteria for model selection, bear in mind that the model with the
smallest value is considered most desirable.
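The scaled criteria in (47.7) are simple functions of the minimized discrepancy. A small Python sketch (hypothetical helper name; not EViews code) is:

import math

def factor_ic(D, N, k, df):
    # Scaled factor-analysis information criteria of (47.7):
    #   D  - minimized discrepancy function
    #   N  - number of observations
    #   k  - number of conditioning variables
    #   df - remaining degrees of freedom of the factor model
    base = (N - k) * D / N
    aic = base - (2.0 / N) * df
    sc = base - (math.log(N) / N) * df
    hq = base - (2.0 * math.log(math.log(N)) / N) * df
    return aic, sc, hq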
Other Measures
The root mean square residual (RMSR) is given by the square root of the mean of the unique
squared total covariance residuals. The standardized root mean square residual (SRMSR) is
a variance standardized version of this RMSR that scales the residuals using the diagonals of
the original dispersion matrix, then computes the RMSR of the scaled residuals (Hu and
Bentler, 1999).
There are a number of other measures of absolute fit. We refer you to Hu and Bentler (1995, 1999), Browne and Cudeck (1993), McDonald and Marsh (1990), and Marsh, Balla, and McDonald (1988) for details on these measures and recommendations on their use. Note
that where there are small differences in the various descriptions of the measures due to
degree-of-freedom corrections, we have used the formulae provided by Hu and Bentler
(1999).
Incremental Fit
Incremental fit indices measure the improvement in fit of the model over a more restricted
specification. Typically, the restricted specification is chosen to be the zero factor or independence model.
EViews reports up to five relative fit measures: the generalized Tucker-Lewis Nonnormed Fit Index (NNFI), Bentler and Bonett's Normed Fit Index (NFI), Bollen's Relative Fit Index (RFI), Bollen's Incremental Fit Index (IFI), and Bentler's Comparative Fit Index (CFI). See Hu and Bentler (1995) for details.
Traditionally, the rule of thumb was for acceptable models to have fit indices that exceed
0.90, but recent evidence suggests that this cutoff criterion may be inadequate. Hu and
Bentler (1999) provide some guidelines for evaluating values of the indices; for ML estimation, they recommend use of two indices, with cutoff values close to 0.95 for the NNFI, RFI,
IFI, CFI.
Rotation
The estimated loadings and factors are not unique; we may obtain others that fit the
observed covariance structure identically. This observation lies behind the notion of factor
rotation, in which we apply transformation matrices to the original factors and loadings in
the hope of obtaining a simpler factor structure.
To elaborate, we begin with the orthogonal factor model from above:
$X_i - \mu = LF_i + \varepsilon_i$   (47.8)
Given an invertible $m \times m$ transformation matrix $T$, we may obtain an alternate loading and factor structure:
$X_i - \mu = L(T')^{-1}T'F_i + \varepsilon_i = L^*F_i^* + \varepsilon_i$   (47.9)
where $L^* = L(T')^{-1}$ are the rotated loadings and $F_i^* = T'F_i$ are the rotated factors, with
$E(F_i^*F_i^{*\prime}) = T'T = \Phi^*$   (47.10)
See Browne (2001) and Bernaards and Jennrich (2005) for details.
Types of Rotation
There are two basic types of rotation that involve different restrictions on $\Phi^*$. In orthogonal rotation, we impose $m(m - 1)/2$ constraints on the transformation matrix $T$ so that $\Phi^* = I$, implying that the rotated factors are orthogonal. In oblique rotation, we impose only $m$ constraints on $T$, requiring that the diagonal elements of $\Phi^*$ equal 1.
There are a large number of rotation methods. The majority of methods involve minimizing an objective function that measures the complexity of the rotated factor matrix with respect to the choice of $T$, subject to any constraints on the factor correlation. Jennrich (2001, 2002) describes algorithms for performing orthogonal and oblique rotations by minimizing the complexity objective.
For example, suppose we form the $p \times m$ matrix $\Lambda$ where every element $\lambda_{ij}$ equals the square of the corresponding factor loading $l_{ij}$: $\lambda_{ij} = l_{ij}^2$. Intuitively, one or more measures of simplicity of the rotated factor pattern can be expressed as a function of these squared loadings. One such function defines the Crawford-Ferguson family of complexities:
$f(\Lambda) = (1 - \kappa)\sum_{i=1}^{p}\sum_{j=1}^{m}\sum_{k \neq j}\lambda_{ij}\lambda_{ik} + \kappa\sum_{j=1}^{m}\sum_{i=1}^{p}\sum_{l \neq i}\lambda_{ij}\lambda_{lj}$   (47.11)
for weighting parameter $\kappa$. The Crawford-Ferguson (CF) family is notable since it encompasses a large number of popular rotation methods (including Varimax, Quartimax, Equamax, Parsimax, and Factor Parsimony).
The first summation term in parentheses, which is based on the outer-product of the i-th
row of the squared loadings, provides a measure of complexity. Those rows which have few
non-zero elements will have low complexity compared to rows with many non-zero elements. Thus, the first term in the function is a measure of the row (variables) complexity of
the loadings matrix. Similarly, the second summation term in parentheses is a measure of
the complexity of the j-th column of the squared loadings matrix. The second term provides
a measure of the column (factor) complexity of the loadings matrix. It follows that higher
values for k assign greater weight to factor complexity and less weight to variable complexity.
Along with the CF family, EViews supports the following rotation methods:
Biquartimax, Crawford-Ferguson, Entropy, Entropy Ratio, Equamax, Factor Parsimony, Generalized Crawford-Ferguson, Geomin, Oblimax, Oblimin, Orthomax, Parsimax, Pattern Simplicity, Promax, Quartimax/Quartimin, Simplimax, Tandem I, Tandem II, Target, and Varimax. Each method is available for orthogonal rotation, oblique rotation, or both.
EViews employs the Crawford-Ferguson variants of the Biquartimax, Equamax, Factor Parsimony, Orthomax, Parsimax, Quartimax, and Varimax objective functions. For example, the EViews Orthomax objective for parameter $\gamma$ is evaluated using the Crawford-Ferguson objective with factor complexity weight $\kappa = \gamma/p$.
These forms of the objective functions yield the same results as the standard versions in the
orthogonal case, but are better behaved (e.g., do not permit factor collapse) under direct
oblique rotation (see Browne 2001, p. 118-119). Note that oblique Crawford-Ferguson Quartimax is equivalent to Quartimin.
The two orthoblique methods, Promax and Harris-Kaiser, both perform an initial orthogonal rotation, followed by an oblique adjustment. For both of these methods, EViews provides some flexibility in the choice of initial rotation. By default, EViews will perform an initial Orthomax rotation with the default parameter set to 1 (Varimax). To perform the initial rotation with Quartimax, you should set the Orthomax parameter to 0. See Gorsuch (1983) and Harris and Kaiser (1964) for details.
Some rotation methods require specification of one or more parameters: Crawford-Ferguson (the factor complexity weight), Generalized Crawford-Ferguson (a vector of weights for, in order, total squares, variable complexity, factor complexity, and diagonal quartics; no default), Geomin, Oblimin, Orthomax, Promax, Simplimax, and Target (a $p \times m$ matrix of target loadings in which missing values correspond to unrestricted elements; no default).
Standardization
Weighting the rows of the initial loading matrix prior to rotation can sometimes improve the
rotated solution (Browne, 2001). Kaiser standardization weights the rows by the inverse
square roots of the communalities. Cureton-Mulaik standardization assigns weights between
zero and one to the rows of the loading matrix using a more complicated function of the
original matrix.
Both standardization methods may lead to instability in cases with small communalities.
Starting Values
Starting values for the rotation objective minimization procedures are typically taken to be
the identity matrix (the unrotated loadings). The presence of local minima is a distinct possibility and it may be prudent to consider random rotations as alternate starting values. Random orthogonal rotations may be used as starting values for orthogonal rotation; random
orthogonal or oblique rotations may be used to initialize the oblique rotation objective minimization.
Scoring
The factors used to explain the covariance structure of the observed data are unobserved,
but may be estimated from the loadings and observable data. These factor score estimates
may be used in subsequent diagnostic analysis, or as substitutes for the higher-dimensional
observed data.
Score Estimation
We may compute factor score estimates $\hat{G}_i$ as a linear combination of observed data:
$\hat{G}_i = \hat{W}'(Z_i - \hat{\mu}_Z)$   (47.12)
for a matrix of factor score coefficients $\hat{W}$.
Score Evaluation
There are an infinite number of factor score estimates that are consistent with an estimated
factor model. This lack of identification, termed factor indeterminacy, has received considerable attention in the literature (see for example, Mulaik (1996); Steiger (1979)), and is a primary reason for the multiplicity of estimation methods, and for the development of
procedures for evaluating the quality of a given set of scores (Gorsuch, 1983, p. 272).
See Gorsuch (1983) and Grice (2001) for additional discussion of the following measures.
Indeterminacy Indices
There are two distinct types of indeterminacy indices. The first set measures the multiple correlation between each factor and the observed variables, $\rho$, and its square, $\rho^2$. The squared multiple correlations are obtained from the diagonals of the matrix $P = \Gamma'S^{-1}\Gamma$, where $S$ is the observed dispersion matrix and $\Gamma = L\Phi$ is the factor structure matrix. Both of these indices range from 0 to 1, with high values being desirable.
The second type of indeterminacy index reports the minimum correlation between alternate estimates of the factor scores, $\rho^* = 2\rho^2 - 1$. The minimum correlation measure ranges from -1 to 1. High positive values are desirable since they indicate that differing sets of factor scores will yield similar results.
Grice (2001) suggests that values for $\rho$ that do not exceed 0.707 by a significant degree are problematic since values below this threshold imply that we may generate two sets of factor scores that are orthogonal or negatively correlated (Green, 1976).
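Given estimated loadings and factor correlations, these indices may be sketched in Python/NumPy as follows. The construction of P as the quadratic form in the structure matrix follows the discussion above; the function name is illustrative and this is not EViews code:

import numpy as np

def indeterminacy_indices(S, L, Phi=None):
    # Multiple correlations of factors with variables and the minimum
    # correlation index rho* = 2*rho^2 - 1, from the factor structure
    # matrix Gamma = L*Phi and the observed dispersion matrix S.
    m = L.shape[1]
    Phi = np.eye(m) if Phi is None else Phi
    Gamma = L @ Phi
    P = Gamma.T @ np.linalg.solve(S, Gamma)      # Gamma' S^{-1} Gamma
    rho_sq = np.diag(P)
    rho = np.sqrt(rho_sq)
    rho_star = 2.0 * rho_sq - 1.0
    return rho, rho_sq, rho_star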
Validity, Univocality, Correlational Accuracy
Following Gorsuch (1983), we may define R ff as the population factor correlation matrix,
R ss as the factor score correlation matrix, and R fs as the correlation matrix of the known
factors with the score estimates. In general, we would like these matrices to be similar.
The diagonal elements of R fs are termed validity coefficients. These coefficients range from
-1 to 1, with high positive values being desired. Differences between the validities and the
multiple correlations are evidence that the computed factor scores have determinacies lower
than those computed using the $\rho$-values. Gorsuch (1983) recommends obtaining validity
values of at least 0.80, and notes that values larger than 0.90 may be necessary if we wish to
use the score estimates as substitutes for the factors.
The off-diagonal elements of R fs allow us to measure univocality, or the degree to which
the estimated factor scores have correlations with those of other factors. Off-diagonal values
of R fs that differ from those in R ff are evidence of univocality bias.
Lastly, we obviously would like the estimated factor scores to match the correlations among
the factors themselves. We may assess the correlational accuracy of the scores estimates by
comparing the values of the R ss with the values of R ff .
From our earlier discussion, we know that the population correlation $R_{ff} = \hat{W}'S\hat{W}$. $R_{ss}$ may be obtained from moments of the estimated scores. Computation of $R_{fs}$ is more complicated, but follows the steps outlined in Gorsuch (1983).
References
Akaike, H. (1987). Factor Analysis and AIC, Psychometrika, 52(3), 317-332.
Anderson, T. W. and H. Rubin (1956). Statistical Inference in Factor Analysis, in Neyman, J., editor, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume V, 111-150. Berkeley and Los Angeles: University of California Press.
Bernaards, C. A., and R. I. Jennrich (2005). Gradient Projection Algorithms and Software for Arbitrary Rotation Criteria in Factor Analysis, Educational and Psychological Measurement, 65(5), 676-696.
Browne, M. W. (2001). An Overview of Analytic Rotation in Exploratory Factor Analysis, Multivariate Behavioral Research, 36(1), 111-150.
Browne, M. W. and R. Cudeck (1993). Alternative Ways of Assessing Model Fit, in K. A. Bollen and J. S. Long (eds.), Testing Structural Equation Models, Newbury Park, CA: Sage.
Cudeck, R. and M. W. Browne (1983). Cross-validation of Covariance Structures, Multivariate Behavioral Research, 18, 147-167.
Dziuban, C. D. and E. C. Shirkey (1974). When is a Correlation Matrix Appropriate for Factor Analysis, Psychological Bulletin, 81(6), 358-361.
Fabrigar, L. R., D. T. Wegener, R. C. MacCallum, and E. J. Strahan (1999). Evaluating the Use of Exploratory Factor Analysis in Psychological Research, Psychological Methods, 4(3), 272-299.
Glorfeld, L. W. (1995). An Improvement on Horn's Parallel Analysis Methodology for Selecting the Correct Number of Factors to Retain, Educational and Psychological Measurement, 55(3), 377-393.
Gorsuch, R. L. (1983). Factor Analysis, Hillsdale, New Jersey: Lawrence Erlbaum Associates, Inc.
Green, B. F., Jr. (1969). Best Linear Composites with a Specified Structure, Psychometrika, 34(3), 301-318.
Green, B. F., Jr. (1976). On the Factor Score Controversy, Psychometrika, 41(2), 263-266.
Grice, J. W. (2001). Computing and Evaluating Factor Scores, Psychological Methods, 6(4), 430-450.
Harman, H. H. (1976). Modern Factor Analysis, Third Edition Revised, Chicago: University of Chicago Press.
Harris, C. W. and H. F. Kaiser (1964). Oblique Factor Analytic Solutions by Orthogonal Transformations, Psychometrika, 29(4), 347-362.
Hendrickson, A. and P. White (1964). Promax: A Quick Method for Rotation to Oblique Simple Structure, The British Journal of Statistical Psychology, 17(1), 65-70.
Horn, J. L. (1965). A Rationale and Test for the Number of Factors in Factor Analysis, Psychometrika, 30(2), 179-185.
Hu, L.-T. and P. M. Bentler (1995). Evaluating Model Fit, in R. H. Hoyle (Ed.), Structural Equation Modeling: Concepts, Issues, and Applications, Thousand Oaks, CA: Sage.
Hu, L.-T. and P. M. Bentler (1999). Cut-off Criteria for Fit Indexes in Covariance Structure Analysis: Conventional Criteria Versus New Alternatives, Structural Equation Modeling, 6(1), 1-55.
Humphreys, L. G. and D. R. Ilgen (1969). Note on a Criterion for the Number of Common Factors, Educational and Psychological Measurement, 29, 571-578.
Humphreys, L. G. and R. G. Montanelli, Jr. (1975). An Investigation of the Parallel Analysis Criterion for Determining the Number of Common Factors, Multivariate Behavioral Research, 10, 193-206.
Ihara, M. and Y. Kano (1995). A New Estimator of the Uniqueness in Factor Analysis, Psychometrika, 51(4), 563-566.
Jackson, D. A. (1993). Stopping Rules in Principal Components Analysis: A Comparison of Heuristical and Statistical Approaches, Ecology, 74(8), 2204-2214.
Jennrich, R. I. (2001). A Simple General Procedure for Orthogonal Rotation, Psychometrika, 66(2), 289-306.
Jennrich, R. I. (2002). A Simple General Method for Oblique Rotation, Psychometrika, 67(1), 7-20.
Johnson, R. A., and D. W. Wichern (1992). Applied Multivariate Statistical Analysis, Third Edition, Upper Saddle River, New Jersey: Prentice-Hall, Inc.
Jöreskog, K. G. (1977). Factor Analysis by Least-Squares and Maximum Likelihood Methods, in Statistical Methods for Digital Computers, K. Enslein, A. Ralston, and H. S. Wilf (eds.), New York: John Wiley & Sons, Inc.
Kaiser, H. F. (1970). A Second Generation Little Jiffy, Psychometrika, 35(4), 401-415.
Kaiser, H. F. and J. Rice (1974). Little Jiffy, Mark IV, Educational and Psychological Measurement, 34, 111-117.
Kano, Y. (1990). Noniterative Estimation and the Choice of the Number of Factors in Exploratory Factor Analysis, Psychometrika, 55(2), 277-291.
Marsh, H. W., J. R. Balla and R. P. McDonald (1988). Goodness of Fit Indexes in Confirmatory Factor Analysis: The Effect of Sample Size, Psychological Bulletin, 103(3), 391-410.
McDonald, R. P. (1981). Constrained Least Squares Estimators of Oblique Common Factors, Psychometrika, 46(2), 277-291.
McDonald, R. P. and H. W. Marsh (1990). Choosing a Multivariate Model: Noncentrality and Goodness of Fit, Psychological Bulletin, 107(2), 247-255.
Preacher, K. J. and R. C. MacCallum (2003). Repairing Tom Swift's Electric Factor Analysis Machine, Understanding Statistics, 2(1), 13-32.
Ten Berge, J. M. F., W. P. Krijnen, T. Wansbeek, and A. Shapiro (1999). Some New Results on Correlation Preserving Factor Scores Prediction Methods, Linear Algebra and Its Applications, 289, 311-318.
Tucker, L. R., and R. C. MacCallum (1997). Exploratory Factor Analysis, Unpublished manuscript.
Velicer, W. F. (1976). Determining the Number of Components from the Matrix of Partial Correlations, Psychometrika, 41(3), 321-327.
Zoski, K. W. and S. Jurs (1996). An Objective Counterpart to the Visual Scree Test for Factor Analysis: The Standard Error Scree, Educational and Psychological Measurement, 56(3), 443-451.
Zwick, W. R. and W. F. Velicer (1986). Factors Influencing Five Rules for Determining the Number of Components to Retain, Psychological Bulletin, 99(3), 432-442.
Optimization Method
A majority of the EViews nonlinear estimators offer you the choice of optimization method.
For these estimators, the Optimization method dropdown menu lets you choose between
the BFGS, Gauss-Newton, Newton-Raphson, and EViews Legacy methods. The default
method is estimator specific.
In general, the differences between the estimates should be small for well-behaved nonlinear specifications, but if you are experiencing optimization difficulties, you may wish to
experiment with methods. Note that EViews Legacy is a particular implementation of Gauss-Newton with Marquardt or line search steps, and is provided for backward estimation compatibility.
The Step method dropdown allows you to choose the approach for selecting candidate iterative steps.
The default method is Marquardt, but you may instead select Dogleg or Line Search.
See Optimization Algorithms on page 1011 for extensive discussion.
The stopping rule for iteration is based on the relative change in the parameter values:
$\frac{\|v(i+1) - v(i)\|_2}{\|v(i)\|_2} \le tol$   (C.1)
where $v$ is the vector of parameters, $\|x\|_2$ is the 2-norm of $x$, and $tol$ is the specified tolerance. However, before taking the norms, each parameter is scaled based on the largest
observed norm across iterations of the derivative of the least squares residuals with respect
to that parameter. This automatic scaling system makes the convergence criteria more robust
to changes in the scale of the data, but does mean that restarting the optimization from the
final converged values may cause additional iterations to take place, due to slight changes in
the automatic scaling value when started from the new parameter values.
The estimation process achieves convergence if the stopping rule is reached using the tolerance specified in the Convergence edit box of the Estimation Dialog or the Estimation
Options Dialog. By default, the box will be filled with the tolerance value specified in the
global estimation options, or if the estimation object has previously been estimated, it will
be filled with the convergence value specified for the last set of estimates.
EViews may stop iterating even when convergence is not achieved. This can happen for two
reasons. First, the number of iterations may have reached the prespecified upper bound. In
this case, you should reset the maximum number of iterations to a larger number and try
iterating until convergence is achieved.
Second, EViews may issue an error message indicating a "Failure to improve" after a number
of iterations. This means that even though the parameters continue to change, EViews could
not find a direction or step size that improves the objective function. This can happen when
the objective function is ill-behaved; you should make certain that your model is identified.
You might also try other starting values to see if you can approach the optimum from other
directions.
Lastly, EViews may converge, but warn you that there is a singularity and that the coefficients are not unique. In this case, EViews will not report standard errors or t-statistics for
the coefficient estimates.
Starting Coefficient Values
For nonlinear least squares type problems, EViews uses the values in the coefficient
vector at the time you begin the estimation procedure as starting values.
For system estimators and ARCH, EViews uses starting values based upon preliminary
single equation OLS or TSLS estimation. In the dialogs for these estimators, the dropdown menu for setting starting values will not appear.
For selected estimation techniques (binary, ordered, count, censored and truncated),
EViews has built-in algorithms for determining the starting values using specific information about the objective function. These will be labeled in the Starting coefficient
values dropdown menu as EViews supplied.
In the latter two cases, you may change this default behavior by selecting an item from the
Starting coefficient values drop down menu. You may choose fractions of the default starting values, zero, or arbitrary User Supplied.
If you select User Supplied, EViews will use the values stored in the C coefficient vector at
the time of estimation as starting values. To see the starting values, double click on the coefficient vector in the workfile directory. If the values appear to be reasonable, you can close
the window and proceed with estimating your model.
If you wish to change the starting values, first make certain that the spreadsheet view of the
coefficient vector is in edit mode, then enter the coefficient values. When you are finished
setting the initial values, close the coefficient vector window and estimate your model.
You may also set starting coefficient values from the command window using the PARAM
command. Simply enter the param keyword, followed by pairs of coefficients and their
desired values:
param c(1) 153 c(2) .68 c(3) .15
sets C(1)=153, C(2)=.68, and C(3)=.15. All of the other elements of the coefficient vector
are left unchanged.
Lastly, if you want to use estimated coefficients from another equation, select Proc/Update
Coefs from Equation from the equation window toolbar.
For nonlinear least squares problems or situations where you specify the starting values,
bear in mind that:
The objective function must be defined at the starting values. For example, if your
objective function contains the expression 1/C(1), then you cannot set C(1) to zero.
Similarly, if the objective function contains LOG(C(2)), then C(2) must be greater than
zero.
A poor choice of starting values may cause the nonlinear least squares algorithm to
fail. EViews begins nonlinear estimation by taking derivatives of the objective func-
tion with respect to the parameters, evaluated at these values. If these derivatives are
not well behaved, the algorithm may be unable to proceed.
If, for example, the starting values are such that the derivatives are all zero, you will
immediately see an error message indicating that EViews has encountered a Near
Singular Matrix, and the estimation procedure will stop.
Unless the objective function is globally concave, iterative algorithms may stop at a
local optimum. There will generally be no evidence of this fact in any of the output
from estimation.
If you are concerned with the possibility of local optima, you may wish to select various starting values and see whether the estimates converge to the same values. One
common suggestion is to estimate the model and then randomly alter each of the estimated coefficients by some percentage, then use these new coefficients as starting values in estimation.
Derivative Computation
In many EViews estimation procedures, you can specify the form of the function for the
mean equation or the objective function. For example, when estimating a regression model,
you may specify an arbitrary nonlinear expression in the coefficients. In these cases, when
estimating the model, EViews needs to compute derivatives of the user-specified function.
EViews uses two techniques for evaluating derivatives: numeric (finite difference) and analytic.
In most cases, you need not worry about the settings for the derivative computation. The
EViews estimation engine will generally employ analytic expressions for the derivatives, if
possible, or will compute high precision numeric derivatives, switching between lower precision computation early in the iterative procedure and higher precision computation for later iterations
and final computation.
For the legacy optimizer, EViews may offer you the option of computing analytic expressions for these derivatives (if possible), or computing finite difference numeric derivatives in cases where the derivative is not constant. Furthermore, if numeric derivatives are computed, you can choose whether to favor speed of computation (fewer function evaluations) or to favor accuracy (more function evaluations).
In some cases, EViews will offer you settings for controlling the derivative taking:
By default, EViews will fill the options dialog with the global estimation settings. If
the Use numeric only setting is chosen, EViews will only compute the derivatives
using finite difference methods. If this setting is not checked, EViews will attempt to
compute analytic derivatives, and will use numeric derivatives only where necessary.
EViews will ignore the numeric derivative setting and use an analytic derivative
whenever a coefficient derivative is a constant value.
For some procedures where the range of specifications allowed is limited (e.g., VARs,
pools), EViews always uses analytic first and/or second derivatives, whatever the values of these settings.
In a limited number of cases, EViews will always use numeric derivatives. For example, selected GARCH (see Derivative Methods on page 238) and state space models
always use numeric derivatives. As noted above, MA coefficient derivatives are
always computed numerically.
Logl objects always use numeric derivatives unless you provide the analytic derivatives in the specification.
Where relevant, the estimation options dialog allows you to control the method of taking derivatives. For example, the options dialog for standard regression allows you to override the use of EViews analytic derivatives. If you elect to use EViews legacy estimation, the dialog will also allow you to choose between favoring speed or accuracy in the computation of any numeric derivatives (note that the additional LS and TSLS options are discussed in detail in Chapter 20. Additional Regression Tools, beginning on page 23).
Computing the more accurate numeric derivatives requires additional objective function
evaluations. EViews legacy computes numeric derivatives using either a one-sided finite difference (favor speed), or using a four-point routine using Richardson extrapolation (favor
precision). Additional details are provided in Kincaid and Cheney (1996). The newer EViews
engine computes derivatives in an adaptive method to achieve high precision.
Analytic derivatives will often be faster and more accurate than numeric derivatives, especially if the analytic derivatives have been simplified and carefully optimized to remove
common subexpressions. Numeric derivatives will sometimes involve fewer floating point
operations than analytic, and in these circumstances, may be faster.
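To convey why additional function evaluations buy accuracy, the following Python sketch contrasts a one-sided difference with a four-point central-difference (Richardson-type) formula for a scalar parameter. It illustrates the general idea only and is not the precise routine used by EViews:

import numpy as np

def one_sided_diff(f, x, h=1e-6):
    # Cheapest approximation: one extra function evaluation.
    return (f(x + h) - f(x)) / h

def richardson_diff(f, x, h=1e-4):
    # Four-point central-difference formula:
    # (f(x-2h) - 8 f(x-h) + 8 f(x+h) - f(x+2h)) / (12 h)
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12*h)

f = lambda c: np.exp(-c * 2.0)            # derivative at c = 0.5 is -2*exp(-1)
exact = -2.0 * np.exp(-1.0)
print(one_sided_diff(f, 0.5) - exact)     # larger error, fewer evaluations
print(richardson_diff(f, 0.5) - exact)    # smaller error, more evaluations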
Optimization Algorithms
Given the importance of the proper setting of EViews estimation options, it may prove useful
to review briefly various basic optimization algorithms used in nonlinear estimation. Recall
that the problem faced in non-linear estimation is to find the values of parameters v that
optimize (maximize or minimize) an objective function F ( v ) .
Iterative optimization algorithms work by taking an initial set of values for the parameters,
say v ( 0 ) , then performing calculations based on these values to obtain a better set of parameter values, v ( 1 ) . This process is repeated for v ( 2 ) , v ( 3 ) and so on until the objective function F no longer improves between iterations.
There are three main parts to the optimization process: (1) obtaining the initial parameter
values, (2) updating the candidate parameter vector v at each iteration, and (3) determining
when we have reached the optimum.
If the objective function is globally concave so that there is a single maximum, any algorithm which improves the parameter vector at each iteration will eventually find this maximum (assuming that the size of the steps taken does not become negligible). If the objective
function is not globally concave, different algorithms may find different local maxima, but
all iterative algorithms will suffer from the same problem of being unable to tell apart a local
and a global maximum.
The main thing that distinguishes different algorithms is how quickly they find the maximum. Unfortunately, there are no hard and fast rules. For some problems, one method may
be faster, for other problems it may not. EViews provides different algorithms, and will often
let you choose which method you would like to use.
The following sections outline these methods. The algorithms used in EViews may be
broadly classified into three types: second derivative methods, first derivative methods, and
derivative free methods. EViews second derivative methods evaluate current parameter values and the first and second derivatives of the objective function for every observation. First
derivative methods use only the first derivatives of the objective function during the iteration process. As the name suggests, derivative free methods do not compute derivatives.
Newton-Raphson
Candidate values for the parameters $v(1)$ may be obtained using the method of Newton-Raphson by linearizing the first order conditions $\partial F/\partial v$ at the current parameter values, $v(i)$:
$g(i) + H(i)(v(i+1) - v(i)) = 0$   (C.2)
$v(i+1) = v(i) - H(i)^{-1}g(i)$
where $g(i)$ is the gradient vector of $F$ and $H(i)$ is the Hessian matrix, both evaluated at $v(i)$.
Quadratic Hill-Climbing (Goldfeld-Quandt)
This method modifies Newton-Raphson by adding a correction matrix (or ridge factor) to the Hessian, so that the updating algorithm becomes:
$v(i+1) = v(i) - \tilde{H}(i)^{-1}g(i)$   (C.3)
with $-\tilde{H}(i) = -H(i) + \alpha I$, where $I$ is the identity matrix and $\alpha$ is a positive number that is chosen by the algorithm.
The effect of this modification is to push the parameter estimates in the direction of the gradient vector. The idea is that when we are far from the maximum, the local quadratic
approximation to the function may be a poor guide to its overall shape, so we may be better
off simply following the gradient. The correction may provide better performance at locations far from the optimum, and allows for computation of the direction vector in cases
where the Hessian is near singular.
For models which may be estimated using second derivative methods, EViews uses quadratic hill-climbing as its default method. You may elect to use traditional Newton-Raphson,
or the first derivative methods described below, by selecting the desired algorithm in the
Options menu.
Note that asymptotic standard errors are always computed from the unmodified Hessian
once convergence is achieved.
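The following Python/NumPy sketch illustrates the quadratic hill-climbing idea for maximizing a user-supplied objective with gradient and Hessian functions. The rule used here for adjusting the ridge factor alpha is a deliberate simplification; as noted below, EViews determines the scale factor by a trial-and-error search:

import numpy as np

def quadratic_hill_climb(grad, hess, v0, alpha=1.0, tol=1e-8, max_iter=100):
    # Maximize F via Newton-Raphson steps with a ridge correction:
    # the candidate step d solves (-H + alpha*I) d = g, per (C.3).
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        g = grad(v)
        H = hess(v)
        d = np.linalg.solve(-H + alpha * np.eye(len(v)), g)
        if np.linalg.norm(d) < tol * (1.0 + np.linalg.norm(v)):
            break
        v = v + d
        alpha = max(alpha * 0.5, 1e-10)    # crude shrinking of the ridge factor
    return v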
Nonlinear single equation and system models are estimated using the Marquardt method.
Gauss-Newton/BHHH
This algorithm follows Newton-Raphson, but replaces the negative of the Hessian by an
approximation formed from the sum of the outer product of the gradient vectors for each
observation's contribution to the objective function. For least squares and log likelihood
functions, this approximation is asymptotically equivalent to the actual Hessian when evaluated at the parameter values which maximize the function. When evaluated away from the
maximum, this approximation may be quite poor.
The algorithm is referred to as Gauss-Newton for general nonlinear least squares problems,
and often attributed to Berndt, Hall, Hall and Hausman (BHHH) for maximum likelihood
problems.
The advantages of approximating the negative Hessian by the outer product of the gradient
are that (1) we need to evaluate only the first derivatives, and (2) the outer product is necessarily positive semi-definite. The disadvantage is that, away from the maximum, this
approximation may provide a poor guide to the overall shape of the function, so that more
iterations may be needed for convergence.
Marquardt
The Marquardt algorithm modifies the Gauss-Newton algorithm in exactly the same manner
as quadratic hill climbing modifies the Newton-Raphson method (by adding a correction
matrix (or ridge factor) to the Hessian approximation).
The ridge correction handles numerical problems when the outer product is near singular
and may improve the convergence rate. As above, the algorithm pushes the updated parameter values in the direction of the gradient.
For models which may be estimated using first derivative methods, EViews uses Marquardt
as its default method. In many cases, you may elect to use traditional Gauss-Newton via the
Options menu.
Note that asymptotic standard errors are always computed from the unmodified (Gauss-Newton) Hessian approximation once convergence is achieved.
than the choice of the step size. It is possible, however, that EViews will be unable to find a
step size that improves the objective function. In this case, EViews will issue an error message.
EViews also performs a crude trial-and-error search to determine the scale factor a for Marquardt and quadratic hill-climbing methods.
Gauss-Seidel
By default, EViews uses the Gauss-Seidel method when solving systems of nonlinear equations. Suppose the system of equations is given by:
$x_1 = f_1(x_1, x_2, \ldots, x_N, z)$
$x_2 = f_2(x_1, x_2, \ldots, x_N, z)$
$\vdots$
$x_N = f_N(x_1, x_2, \ldots, x_N, z)$   (C.4)
where x are the endogenous variables and z are the exogenous variables.
The problem is to find a fixed point such that x = f ( x, z ) . Gauss-Seidel employs an iterative updating rule of the form:
$x^{(i+1)} = f(x^{(i)}, z)$   (C.5)
to find the solution. At each iteration, EViews solves the equations in the order that they
appear in the model. If an endogenous variable that has already been solved for in that iteration appears later in some other equation, EViews uses the value as solved in that iteration.
For example, the k-th variable in the i-th iteration is solved by:
$x_k^{(i)} = f_k(x_1^{(i)}, x_2^{(i)}, \ldots, x_{k-1}^{(i)}, x_k^{(i-1)}, x_{k+1}^{(i-1)}, \ldots, x_N^{(i-1)}, z)$   (C.6)
The performance of the Gauss-Seidel method can be affected by the ordering of the equations.
If the Gauss-Seidel method converges slowly or fails to converge, you should try moving the
equations with relatively few and unimportant right-hand side endogenous variables so that
they appear early in the model.
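A compact Python sketch of the Gauss-Seidel updating rule (C.5)-(C.6), for a system supplied as a list of functions, one per endogenous variable (illustrative only, not EViews code):

import numpy as np

def gauss_seidel(f_list, x0, z, tol=1e-8, max_iter=1000):
    # Solve x_k = f_k(x, z) for k = 1..N; values already updated in the current
    # iteration are used for equations solved earlier in the ordering, as in (C.6).
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_prev = x.copy()
        for k, f_k in enumerate(f_list):
            x[k] = f_k(x, z)           # x already holds updated x_1 .. x_{k-1}
        if np.max(np.abs(x - x_prev)) < tol * (1.0 + np.max(np.abs(x_prev))):
            return x
    return x

# Example: x1 = 0.5*x2 + z, x2 = 0.25*x1 + 1
sol = gauss_seidel([lambda x, z: 0.5*x[1] + z, lambda x, z: 0.25*x[0] + 1.0],
                   x0=[0.0, 0.0], z=2.0)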
Newton's Method
Newton's method for solving a system of nonlinear equations consists of repeatedly solving
a local linear approximation to the system.
Consider the system of equations written in implicit form:
F ( x, z ) = 0
(C.7)
where F is the set of equations, x is the vector of endogenous variables and z is the vector
of exogenous variables.
In Newton's method, we take a linear approximation to the system around some values $x^*$ and $z^*$:
$F(x, z) = F(x^*, z^*) + \frac{\partial F}{\partial x}(x^*, z^*)\,\Delta x = 0$   (C.8)
and then use this approximation to construct an iterative procedure for updating our current guess for $x$:
$x_{t+1} = x_t - \left[\frac{\partial F}{\partial x}(x_t, z)\right]^{-1}F(x_t, z)$   (C.9)
Broyden's Method
Broyden's Method is a modification of Newton's method which tries to decrease the calculational cost of each iteration by using an approximation to the derivatives of the equation system rather than the true derivatives of the equation system when calculating the Newton
step. That is, at each iteration, Broyden's method takes a step:
$x_{t+1} = x_t - J_t^{-1}F(x_t, z)$   (C.10)
where J t is the current approximation to the matrix of derivatives of the equation system.
As well as updating the value of x at each iteration, Broyden's method also updates the
existing Jacobian approximation, J t , at each iteration based on the difference between the
observed change in the residuals of the equation system and the change in the residuals predicted by a linear approximation to the equation system based on the current Jacobian
approximation.
In particular, Broyden's method uses the following equation to update J :
$J_{t+1} = J_t + \frac{(F(x_{t+1}, z) - F(x_t, z) - J_t\,\Delta x)\,\Delta x'}{\Delta x'\,\Delta x}$   (C.11)
where $\Delta x = x_{t+1} - x_t$.
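The update (C.10)-(C.11) may be sketched in Python/NumPy as follows; the initial Jacobian guess and the stopping rule here are arbitrary illustrative choices, not EViews defaults:

import numpy as np

def broyden_solve(F, x0, z, J0=None, tol=1e-10, max_iter=200):
    # Solve F(x, z) = 0 using Broyden updates: Newton-like steps with an
    # approximate Jacobian J that is updated from observed residual changes.
    x = np.array(x0, dtype=float)
    J = np.eye(len(x)) if J0 is None else np.array(J0, dtype=float)
    Fx = F(x, z)
    for _ in range(max_iter):
        dx = -np.linalg.solve(J, Fx)                      # step (C.10)
        x_new = x + dx
        F_new = F(x_new, z)
        if np.max(np.abs(F_new)) < tol:
            return x_new
        J = J + np.outer(F_new - Fx - J @ dx, dx) / (dx @ dx)   # update (C.11)
        x, Fx = x_new, F_new
    return x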
References
Amemiya, Takeshi (1983). Nonlinear Regression Models, Chapter 6 in Z. Griliches and M. D. Intriligator
(eds.), Handbook of Econometrics, Volume 1, Amsterdam: Elsevier Science Publishers B.V.
Berndt, E., Hall, B., Hall, R., and Hausman, J. (1974). Estimation and Inference in Nonlinear Structural
Models, Annals of Economic and Social Measurement, Vol. 3, 653-665.
Dennis, J. E. and R. B. Schnabel (1983). Secant Methods for Systems of Nonlinear Equations, Numerical
Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, London.
Kincaid, David, and Ward Cheney (1996). Numerical Analysis, 2nd edition, Pacific Grove, CA: Brooks/
Cole Publishing Company.
Moré and Sorensen (1983). Computing a Trust Region Step, SIAM Journal on Scientific and Statistical Computing, 4, 553-572.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery (1992). Numerical Recipes in C, 2nd edition, Cambridge University Press.
Quandt, Richard E. (1983). Computational Problems and Methods, Chapter 12 in Z. Griliches and M. D.
Intriligator (eds.), Handbook of Econometrics, Volume 1, Amsterdam: Elsevier Science Publishers
B.V.
Thisted, Ronald A. (1988). Elements of Statistical Computing, New York: Chapman and Hall.
Gradients
EViews provides you with the ability to examine and work with the gradients of the objective function for a variety of estimation objects. Examining these gradients can provide useful information for evaluating the behavior of your nonlinear estimation routine, or can be
used as the basis of various tests of specification.
Since EViews provides a variety of estimation methods and techniques, the notion of a gradient is a bit difficult to describe in casual terms. EViews will generally report the values of
the first-order conditions used in estimation. To take the simplest example, ordinary least
squares minimizes the sum-of-squared residuals:
$S(b) = \sum_t (y_t - X_t'b)^2$   (D.1)
The first-order conditions for this objective function are obtained by differentiating with
respect to b , yielding
$-2\sum_t (y_t - X_t'b)X_t$   (D.2)
EViews allows you to examine both the sum and the corresponding average, as well as the
value for each of the individual observations. Furthermore, you can save the individual values in series for subsequent analysis.
The individual gradient computations are summarized in the following table:
Least squares:  $g_t = -2(y_t - f_t(X_t, b))\,\dfrac{\partial f_t(X_t, b)}{\partial b}$
Weighted least squares:  $g_t = -2(y_t - f_t(X_t, b))\,w_t^2\,\dfrac{\partial f_t(X_t, b)}{\partial b}$
Two-stage least squares:  $g_t = -2(y_t - f_t(X_t, b))\,P_t\,\dfrac{\partial f_t(X_t, b)}{\partial b}$
Weighted two-stage least squares:  $g_t = -2(y_t - f_t(X_t, b))\,w_t\tilde{P}_t w_t\,\dfrac{\partial f_t(X_t, b)}{\partial b}$
Maximum likelihood:  $g_t = \dfrac{\partial l_t(X_t, b)}{\partial b}$
where $P$ and $\tilde{P}$ are the projection matrices corresponding to the expressions for the estimators in Chapter 21. Instrumental Variables and GMM, beginning on page 57, and $l$ is the log likelihood contribution function.
Note that the expressions for the regression gradients are adjusted accordingly in the presence of ARMA error terms.
Gradient Summary
To view the summary of the gradients, select View/Gradients and Derivatives/Gradient
Summary, or View/Gradients/Summary. EViews will display a summary table showing the
sum, mean, and Newton direction associated with the gradients. Here is an example table
from a nonlinear least squares estimation equation:
Gradients of the Objective Function
Gradients evaluated at estimated parameters
Equation: EQ01
Method: Least Squares
Specification: LOG(CS) = C(1) +C(2)*(GDP^C(3)-1)/C(3)
Computed using analytic derivatives
Coefficient      Sum            Mean           Newton Dir.
C(1)             5.21E-10       2.71E-12       1.41E-14
C(2)             9.53E-09       4.96E-11      -3.11E-18
C(3)             3.81E-08       1.98E-10       2.47E-18
There are several things to note about this table. The first line of the table indicates that the
gradients have been computed at estimated parameters. If you ask for a gradient view for an
estimation object that has not been successfully estimated, EViews will compute the gradients at the current parameter values and will note this in the table. This behavior allows you
to diagnose unsuccessful estimation problems using the gradient values.
Second, you will note that EViews informs you that the gradients were computed using analytic derivatives. EViews will also inform you if the specification is linear, if the derivatives
were computed numerically, or if EViews used a mixture of analytic and numeric techniques. We remind you that all MA coefficient derivatives are computed numerically.
Lastly, there is a table showing the sum and mean of the gradients as well as a column
labeled Newton Dir. The column reports the non-Marquardt adjusted Newton direction
used in first-derivative iterative estimation procedures (see First Derivative Methods on
page 1012).
In the example above, all of the values are close to zero. While one might expect these values always to be close to zero when evaluated at the estimated parameters, there are a number of reasons why this will not always be the case. First, note that the sum and mean
values are highly scale variant so that changes in the scale of the dependent and independent variables may lead to marked changes in these values. Second, you should bear in
mind that while the Newton direction is related to the terms used in the optimization procedures, EViews' test for convergence does not directly use the Newton direction. Third, some
of the iteration options for system estimation do not iterate coefficients or weights fully to
convergence. Lastly, you should note that the values of these gradients are sensitive to the
accuracy of any numeric differentiation.
Gradient Series
You can save the individual gradient values in series using the Make Gradient Group procedure. EViews will create a new group containing series with names of the form GRAD##
where ## is the next available name.
Note that when you store the gradients, EViews will fill the series for the full workfile range.
If you view the series, make sure to set the workfile sample to the sample used in estimation
if you want to reproduce the table displayed in the gradient views.
Application to LM Tests
The gradient series are perhaps most useful for carrying out Lagrange multiplier tests for
nonlinear models by running what is known as artificial regressions (Davidson and MacKinnon 1993, Chapter 6). A generic artificial regression for hypothesis testing takes the form of
regressing:
$\hat{u}_t$ on $\dfrac{\partial f_t(X_t, \hat{b})}{\partial b'}$ and $Z_t$   (D.3)
where $\hat{u}$ are the estimated residuals under the restricted (null) model, and $\hat{b}$ are the estimated coefficients. The $Z$ are a set of misspecification indicators which correspond to departures from the null hypothesis.
An example program (GALLANT2.PRG) for performing an LM auxiliary regression test is
provided in your EViews installation directory.
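A generic Python/NumPy sketch of a T·R² form of this artificial regression is given below. It is only one common variant; the exact artificial regression and test statistic depend on the model and hypothesis (see Davidson and MacKinnon, 1993), and the function name is hypothetical:

import numpy as np

def lm_artificial_regression(u, dfdb, Z):
    # Regress restricted residuals u on the restricted-model derivatives dfdb
    # and the misspecification indicators Z; T*R^2 from this regression is a
    # common LM-type statistic (asymptotically chi-squared, dof = cols(Z)).
    W = np.column_stack([dfdb, Z])
    coef, *_ = np.linalg.lstsq(W, u, rcond=None)
    fitted = W @ coef
    r2 = 1.0 - np.sum((u - fitted)**2) / np.sum((u - np.mean(u))**2)
    return len(u) * r2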
Gradient Availability
The gradient views are currently available for the equation, logl, sspace and system objects.
The views are not, however, currently available for equations estimated by GMM or ARMA
equations specified by expression.
Derivatives
EViews employs a variety of rules for computing the derivatives used by iterative estimation
procedures. These rules, and the user-defined settings that control derivative taking, are
described in detail in Derivative Computation on page 1009.
In addition, EViews provides both object views and object procedures which allow you to
examine the effects of those choices, and the results of derivative taking. These views and
procedures provide you with quick and easy access to derivatives of your user-specified
functions.
It is worth noting that these views and procedures are not available for all estimation techniques. For example, the derivative views are currently not available for binary models since
only a limited set of specifications are allowed.
Derivative Description
The Derivative Description view provides a quick summary of the derivatives used in estimation.
For example, consider the simple nonlinear regression model:
$y_t = c(1)(1 - \exp(-c(2)x_t)) + \varepsilon_t$   (D.4)
Following estimation of this single equation, we can display the description view by selecting View/Gradients and Derivatives.../Derivative Description.
Derivatives of the Equation Specification
Equation: EQ02
Method: Least Squares
Specification: RESID = Y - ((C(1)*(1-EXP(-C(2)*X))))
Computed using analytic derivatives
Variable    Derivative of Specification
C(1)        -1 + exp(-c(2) * x)
C(2)        -c(1) * x * exp(-c(2) * x)
There are three parts to the output from this view. First, the line labeled Specification:
describes the equation specification that we are estimating. You will note that we have written the specification in terms of the implied residual from our specification.
The next line describes the method used to compute the derivatives used in estimation.
Here, EViews reports that the derivatives were computed analytically.
Lastly, the bottom portion of the table displays the expressions for the derivatives of the
regression function with respect to each coefficient. Note that the derivatives are in terms of
the implied residual so that the signs of the expressions have been adjusted accordingly.
In this example, all of the derivatives were computed analytically. In some cases, however,
EViews will not know how to take analytic derivatives of your expression with respect to
one or more of the coefficients. In this situation, EViews will use analytic expressions where
possible, and numeric where necessary, and will report which type of derivative was used
for each coefficient.
Suppose, for example, that we estimate:
$y_t = c(1)(1 - \exp(-\phi(c(2)x_t))) + \varepsilon_t$   (D.5)
where $\phi$ is the standard normal density function. The derivative view of this equation is:
Variable    Derivative of Specification
C(1)        -1 + exp(-@dnorm(c(2) * x))
C(2)        --- accurate numeric ---
Here, EViews reports that it attempted to use analytic derivatives, but that it was forced to
use a numeric derivative for C(2) (since it has not yet been taught the derivative of the
@dnorm function).
If we set the estimation option so that we only compute fast numeric derivatives, the view
would change to
Derivatives of the Equation Specification
Equation: EQ02
Method: Least Squares
Specification: RESID = Y - ((C(1)*(1-EXP(-C(2)*X))))
Computed using fast numeric derivatives
Variable    Derivative of Specification
C(1)        --- fast numeric ---
C(2)        --- fast numeric ---
Recall that the derivatives of the objective function with respect to the AR components are
always computed analytically using the derivatives of the regression specification, and the
lags of these values.
One word of caution about derivative expressions. For many equation specifications, analytic derivative expressions will be quite long. In some cases, the analytic derivatives will be
longer than the space allotted to them in the table output. You will be able to identify these
cases by the trailing ... in the expression.
To see the entire expression, you will have to create a table object and then resize the appropriate column. Simply click on the Freeze button on the toolbar to create a table object, and
then highlight the column of interest. Click on Width on the table toolbar and enter in a
larger number.
Derivative Table and Graph
The derivative spreadsheet view displays the value of the derivatives for each observation in the standard spreadsheet form. The graph view plots the value of each of these derivatives for each coefficient.
Derivative Series
You can save the derivative values in series for later use. Simply select Proc/Make Derivative Group and EViews will create an untitled group object containing the new series. The
series will be named DERIV##, where ## is a number associated with the next available free
name. For example, if you have the objects DERIV01 and DERIV02, but not DERIV03 in the
workfile, EViews will save the next derivative in the series DERIV03.
References
Davidson, Russell and James G. MacKinnon (1993). Estimation and Inference in Econometrics, Oxford:
Oxford University Press.
Definitions
The basic information criteria are given by:
Akaike info criterion (AIC):    $-2(l/T) + 2(k/T)$
Schwarz criterion (SC):         $-2(l/T) + k\log(T)/T$
Hannan-Quinn criterion (HQ):    $-2(l/T) + 2k\log(\log(T))/T$
Let l be the value of the log of the likelihood function with the k parameters estimated
using T observations. The various information criteria are all based on -2 times the average
log likelihood function, adjusted by a penalty function.
For factor analysis models, EViews follows convention (Akaike, 1987), re-centering the criteria by subtracting off the value for the saturated model. The resulting factor analysis forms of the information criteria are given by:
Akaike info criterion (AIC):    $(T - k)D/T - (2/T)\,df$
Schwarz criterion (SC):         $(T - k)D/T - (\log(T)/T)\,df$
Hannan-Quinn criterion (HQ):    $(T - k)D/T - (2\log(\log(T))/T)\,df$
The modified information criteria used in unit root testing replace $k$ with $k + \tau$:
Modified AIC:    $-2(l/T) + 2((k + \tau)/T)$
Modified SIC:    $-2(l/T) + (k + \tau)\log(T)/T$
Modified HQ:     $-2(l/T) + 2(k + \tau)\log(\log(T))/T$
where
$\tau = \hat{\alpha}^2\sum_t \tilde{y}_{t-1}^2 / \hat{\sigma}^2$   (E.1)
for $\tilde{y}_t = y_t$ when computing the ADF test equation (36.7), and for $\tilde{y}_t$ as defined in (Autoregressive Spectral Density Estimator, beginning on page 537) when estimating the frequency zero spectrum (see Ng and Perron, 2001, for a discussion of the modified information criteria).
Note also that:
• The definitions used by EViews may differ slightly from those used by some authors. For example, Grasa (1989, equation 3.21) does not divide the AIC by $T$. Other authors omit inessential constants of the Gaussian log likelihood (generally, the terms involving $2\pi$).
• While very early versions of EViews reported information criteria that omitted inessential constant terms, the current version of EViews always uses the value of the full likelihood function. All of your equation objects estimated in earlier versions of EViews will automatically be updated to reflect this change. You should, however, keep this fact in mind when comparing results from frozen table objects or printed output from previous versions.
• For systems of equations, where applicable, the information criteria are computed using the full system log likelihood. The log likelihood value is computed assuming a multivariate normal (Gaussian) distribution as:
$$l = -\frac{TM}{2}\left(1 + \log 2\pi\right) - \frac{T}{2}\log\left|\hat{\Omega}\right| \tag{E.2}$$
where
$$\left|\hat{\Omega}\right| = \det\left(\sum_t \hat{\epsilon}_t \hat{\epsilon}_t'/T\right) \tag{E.3}$$
and $M$ is the number of equations. Note that these expressions are only strictly valid when there are equal numbers of observations for each equation. When your system is unbalanced, EViews replaces these expressions with the appropriate summations (the balanced case is illustrated in the sketch following these notes).
• The factor analysis forms of the statistics are often quoted in unscaled form, sometimes without adjusting for the saturated model. Most often, if there are discrepancies, multiplying the EViews reported values by $T$ will line up the results.
• Many estimation methods, including least squares regression, do not treat the error variance term, sigma, as an estimated coefficient, and as such omit this term from the calculation of $k$.
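For the balanced system case, the log likelihood in Equation (E.2) is straightforward to compute from the matrix of stacked residuals; the following is a minimal Python/NumPy sketch using a hypothetical residual matrix, not EViews code:

    import numpy as np

    def system_loglik(ehat):
        """Gaussian system log likelihood (Equation (E.2)) from a T x M residual matrix."""
        T, M = ehat.shape
        omega = ehat.T @ ehat / T                   # Equation (E.3): residual covariance
        sign, logdet = np.linalg.slogdet(omega)
        return -(T * M / 2.0) * (1.0 + np.log(2.0 * np.pi)) - (T / 2.0) * logdet

    # Hypothetical residuals from a two-equation system with 200 observations
    rng = np.random.default_rng(1)
    print(system_loglik(rng.standard_normal((200, 2))))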
References
Akaike, H. (1987). "Factor Analysis and AIC," Psychometrika, 52(3), 317-332.
Grasa, Antonio Aznar (1989). Econometric Model Selection: A New Approach, Dordrecht: Kluwer Academic Publishers.
Lütkepohl, Helmut (1991). Introduction to Multiple Time Series Analysis, New York: Springer-Verlag.
Ng, Serena and Pierre Perron (2001). "Lag Length Selection and the Construction of Unit Root Tests with Good Size and Power," Econometrica, 69(6), 1519-1554.
Technical Discussion
Our basic discussion and notation follows the framework of Andrews (1991) and Hansen
(1992a).
Consider a sequence of mean-zero random $p$-vectors $\{V_t(\theta)\}$ that may depend on a $K$-vector of parameters $\theta$, and let $V_t \equiv V_t(\theta_0)$, where $\theta_0$ is the true value of $\theta$. We are interested in estimating the LRCOV matrix $\Omega$,
$$\Omega = \sum_{j=-\infty}^{\infty} \Gamma(j) \tag{F.1}$$
where
$$\Gamma(j) = \begin{cases} E(V_t V_{t-j}') & j \ge 0 \\ \Gamma(-j)' & j < 0 \end{cases} \tag{F.2}$$
is the autocovariance matrix of $V_t$ at lag $j$. It is also useful to define the one-sided matrices
$$\Lambda_1 = \sum_{j=1}^{\infty} \Gamma(j), \qquad \Lambda_0 = \sum_{j=0}^{\infty} \Gamma(j) = \Gamma(0) + \Lambda_1 \tag{F.3}$$
The matrix $\Lambda_1$, which we term the strict one-sided LRCOV, is the sum of the lag covariances, while $\Lambda_0$ also includes the contemporaneous covariance $\Gamma(0)$. The two-sided LRCOV matrix $\Omega$ is related to the one-sided matrices through $\Omega = \Gamma(0) + \Lambda_1 + \Lambda_1'$ and $\Omega = \Lambda_0 + \Lambda_0' - \Gamma(0)$.
Despite the important role the one-sided LRCOV matrices play in the literature, we will focus our attention on $\Omega$, since results are generally applicable to all three measures; exceptions will be made for specific issues that require additional comment.
In the econometric literature, methods for using a consistent estimator $\hat{\theta}$ and the corresponding $\hat{V}_t \equiv V_t(\hat{\theta})$ to form a consistent estimate of $\Omega$ are often referred to as heteroskedasticity and autocorrelation consistent (HAC) covariance matrix estimators.
There have been three primary approaches to estimating $\Omega$:
1. The nonparametric kernel approach (Andrews 1991, Newey-West 1987) forms estimates of $\Omega$ by taking a weighted sum of the sample autocovariances of the observed data.
2. The parametric VARHAC approach (Den Haan and Levin 1997) specifies and fits a parametric time series model to the data, then uses the estimated model to obtain the implied autocovariances and the corresponding $\Omega$.
3. The prewhitened kernel approach (Andrews and Monahan 1992) is a hybrid method that combines the first two approaches, using a parametric model to obtain residuals that whiten the data, and a nonparametric kernel estimator to obtain an estimate of the LRCOV of the whitened data. The estimate of $\Omega$ is obtained by recoloring the prewhitened LRCOV to undo the effects of the whitening transformation.
Below, we offer a brief description of each of these approaches, paying particular attention
to issues of kernel choice, bandwidth selection, and lag selection.
Nonparametric Kernel
The class of kernel HAC covariance matrix estimators in Andrews (1991) may be written as:
$$\hat{\Omega} = \frac{T}{T-K} \sum_{j=-\infty}^{\infty} k(j/b_T)\,\hat{\Gamma}(j) \tag{F.4}$$
where the sample autocovariances $\hat{\Gamma}(j)$ are given by
$$\hat{\Gamma}(j) = \frac{1}{T}\sum_{t=j+1}^{T} \hat{V}_t \hat{V}_{t-j}' \quad (j \ge 0), \qquad \hat{\Gamma}(j) = \hat{\Gamma}(-j)' \quad (j < 0) \tag{F.5}$$
$k$ is a symmetric kernel (or lag window) function that, among other conditions, is continuous at the origin and satisfies $|k(x)| \le 1$ for all $x$ with $k(0) = 1$, and $b_T > 0$ is a bandwidth parameter.
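To make the construction concrete, here is a minimal Python/NumPy sketch of the estimator in Equation (F.4) using the Bartlett kernel (an illustration only, not the EViews implementation; the data, bandwidth, and degrees-of-freedom adjustment are hypothetical):

    import numpy as np

    def bartlett_kernel(x):
        # Bartlett kernel: 1 - |x| inside the unit interval, zero outside
        return np.where(np.abs(x) <= 1.0, 1.0 - np.abs(x), 0.0)

    def kernel_hac(V, bandwidth, K=0):
        """Kernel HAC estimate of the LRCOV of the T x p array V (Equation (F.4)),
        using the Bartlett kernel and an optional adjustment T/(T - K)."""
        T, p = V.shape
        omega = np.zeros((p, p))
        for j in range(-(T - 1), T):
            w = bartlett_kernel(j / bandwidth)
            if w == 0.0:
                continue
            # Sample autocovariance Gamma(j) as in Equation (F.5)
            if j >= 0:
                gamma = V[j:].T @ V[:T - j] / T
            else:
                gamma = (V[-j:].T @ V[:T + j] / T).T
            omega += w * gamma
        return omega * T / (T - K)

    # Hypothetical example: bivariate AR(1) data with a hand-picked bandwidth
    rng = np.random.default_rng(2)
    e = rng.standard_normal((500, 2))
    V = np.zeros_like(e)
    for t in range(1, 500):
        V[t] = 0.5 * V[t - 1] + e[t]
    print(kernel_hac(V - V.mean(axis=0), bandwidth=8.0))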
Kernel Functions
There are a large number of kernel functions that satisfy the required conditions. EViews supports use of the following kernel shapes:

Truncated uniform:
$$k(x) = \begin{cases} 1 & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Bartlett:
$$k(x) = \begin{cases} 1 - |x| & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Bohman:
$$k(x) = \begin{cases} (1 - |x|)\cos(\pi x) + \sin(\pi|x|)/\pi & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Daniell:
$$k(x) = \sin(\pi x)/(\pi x)$$
Parzen:
$$k(x) = \begin{cases} 1 - 6x^2(1 - |x|) & \text{if } 0.0 \le |x| \le 0.5 \\ 2(1 - |x|)^3 & \text{if } 0.5 < |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Parzen-Riesz:
$$k(x) = \begin{cases} 1 - x^2 & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Parzen-Geometric:
$$k(x) = \begin{cases} 1/(1 + |x|) & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Parzen-Cauchy:
$$k(x) = \begin{cases} 1/(1 + x^2) & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Quadratic Spectral:
$$k(x) = \frac{25}{12\pi^2 x^2}\left(\frac{\sin(6\pi x/5)}{6\pi x/5} - \cos(6\pi x/5)\right)$$
Tukey-Hamming:
$$k(x) = \begin{cases} 0.54 + 0.46\cos(\pi x) & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Tukey-Hanning:
$$k(x) = \begin{cases} 0.50 + 0.50\cos(\pi x) & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Tukey-Parzen:
$$k(x) = \begin{cases} 0.436 + 0.564\cos(\pi x) & \text{if } |x| \le 1.0 \\ 0 & \text{otherwise} \end{cases}$$
Note that $k(x) = 0$ for $|x| > 1$ for all kernels with the exception of the Daniell and the Quadratic Spectral. The Daniell kernel is presented in truncated form in Neave (1972), but EViews uses the more common untruncated form. The Bartlett kernel is sometimes referred to as the Fejér kernel (Neave 1972).
A wide range of kernels have been employed in HAC estimation. The truncated uniform is
used by Hansen (1982) and White (1984), the Bartlett kernel is used by Newey and West
(1987), and the Parzen is used by Gallant (1987). The Tukey-Hanning and Quadratic Spectral were introduced to the econometrics literature by Andrews (1991), who shows that the
latter is optimal in the sense of minimizing the asymptotic truncated MSE of the estimator
(within a particular class of kernels). The remaining kernels are discussed in Parzen (1958,
1961, 1967).
Bandwidth
The bandwidth $b_T$ operates in concert with the kernel function to determine the weights for the various sample autocovariances in Equation (F.4). While some authors restrict the bandwidth values to integers, we follow Andrews (1991), who argues in favor of allowing real-valued bandwidths.
To construct an operational nonparametric kernel estimator, we must choose a value for the bandwidth $b_T$. Under general conditions (Andrews 1991), consistency of the kernel estimator requires that $b_T$ be chosen so that $b_T \to \infty$ and $b_T/T \to 0$ as $T \to \infty$. Alternately, Kiefer and Vogelsang (2002) propose setting $b_T = T$ in a testing context.
For the great majority of supported kernels, $k(j/b_T) = 0$ for $|j| > b_T$, so that the bandwidth acts indirectly as a lag truncation parameter. Relating $b_T$ to the corresponding integer number of included lags $m$ requires, however, examining the properties of the kernel at the endpoints ($|j/b_T| = 1$). For kernel functions where $k(1) \ne 0$ (e.g., Truncated uniform, Parzen-Geometric, Tukey-Hanning), $b_T$ is simply a real-valued truncation lag, with at most $m = \mathrm{floor}(b_T)$ autocovariances having non-zero weight. Alternately, for kernel functions where $k(1) = 0$ (e.g., Bartlett, Bohman, Parzen), the relationship is slightly more complex, with $m = \mathrm{ceil}(b_T) - 1$ autocovariances entering the estimator with non-zero weights.
The varying relationship between the bandwidth and the lag-truncation parameter implies that one should examine the kernel function when choosing bandwidth values to match computations that are quoted in lag truncation form. For example, matching Newey-West's (1987) Bartlett kernel estimator, which uses $m$ weighted autocovariance lags, requires setting $b_T = m + 1$. In contrast, Hansen's (1982) or White's (1984) estimators, which sum the first $m$ unweighted autocovariances, should be implemented using the Truncated uniform kernel with $b_T = m$.
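These relationships are easy to verify numerically; a small Python sketch with a hypothetical lag count m = 4:

    import numpy as np

    m = 4
    lags = np.arange(1, m + 2)                 # lags 1, ..., m+1

    # Bartlett kernel with b_T = m + 1: lags 1..m get positive weight, lag m+1 gets zero
    bartlett = np.maximum(1.0 - lags / (m + 1), 0.0)
    print(bartlett)        # [0.8 0.6 0.4 0.2 0. ]

    # Truncated uniform kernel with b_T = m: lags 1..m get unit weight, lag m+1 is dropped
    truncated = (lags <= m).astype(float)
    print(truncated)       # [1. 1. 1. 1. 0.]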
Under quite general conditions, the bandwidth that minimizes the asymptotic mean-square error of the kernel estimator grows at the rate
$$b_T = \gamma T^{1/(2q+1)} \tag{F.6}$$
where $\gamma$ is a constant, and $q$ is a parameter that depends on the kernel function that you select (Parzen 1958, Andrews 1991). For the Bartlett and Parzen-Geometric kernels ($q = 1$), $b_T$ should grow (at most) at the rate $T^{1/3}$. The Truncated uniform kernel does not have a theoretical optimal rate, but Andrews (1991) reports Monte Carlo simulations that suggest that $T^{1/5}$ works well. The remaining EViews-supported kernels have $q = 2$, so their optimal bandwidths grow at rate $T^{1/5}$ (though we point out that the Daniell kernel does not satisfy the conditions for the optimal bandwidth theorems).
While theoretically useful, knowledge of the rate at which bandwidths should increase as $T \to \infty$ does not tell us the optimal bandwidth for a given sample size, since the constant $\gamma$ remains unspecified.
Andrews (1991) and Newey and West (1994) offer two approaches to estimating $\gamma$. We may term these techniques automatic bandwidth selection methods, since they involve estimating the optimal bandwidth from the data, rather than specifying a value a priori. Both the Andrews and Newey-West estimators for $\gamma$ may be written as:
$$\hat{\gamma}(q) = c_k \left(\hat{\alpha}(q)\right)^{1/(2q+1)} \tag{F.7}$$
where $q$ and the constant $c_k$ depend on properties of the selected kernel, and $\hat{\alpha}(q)$ is an estimator of $\alpha(q)$, a measure of the smoothness of the spectral density at frequency zero that depends on the autocovariances $\Gamma(j)$. Substituting into Equation (F.6), the resulting plug-in estimator for the optimal automatic bandwidth is given by:
$$\hat{b}_T = c_k \left(\hat{\alpha}(q)\,T\right)^{1/(2q+1)} \tag{F.8}$$
The $q$ that one uses depends on properties of the selected kernel function. The Bartlett and Parzen-Geometric kernels should use $\hat{\alpha}(1)$ since they have $q = 1$; $\hat{\alpha}(2)$ should be used for the other EViews-supported kernels, which have $q = 2$. The Truncated uniform kernel does not have a theoretically prescribed choice, but Andrews recommends using $\hat{\alpha}(2)$. The Daniell kernel has $q = 2$, though we remind you that it does not satisfy the conditions for Andrews's theorems. "Kernel Function Properties" on page 1041 summarizes the values of $c_k$ and $q$ for the various kernel functions.
It is of note that the Andrews and Newey-West estimators both require an estimate of $\alpha(q)$, which in turn requires forming preliminary estimates of $\Omega$ and the smoothness of $\Omega$. Andrews and Newey-West offer alternative methods for forming these estimates.
Andrews Automatic Selection
The Andrews (1991) method estimates $\alpha(q)$ parametrically: fitting a simple parametric time series model to the original data, then deriving the autocovariances $\Gamma(j)$ and corresponding $\alpha(q)$ implied by the estimated model:
$$\hat{\alpha}(q) = \frac{\sum_{s=1}^{p} w_s \left(\hat{f}_s^{(q)}\right)^2}{\sum_{s=1}^{p} w_s \left(\hat{f}_s^{(0)}\right)^2} \tag{F.9}$$
where the $\hat{f}_s^{(q)}$ are parametric estimators of the smoothness of the spectral density for the $s$-th variable (Parzen's (1957) $q$-th generalized spectral derivatives) at frequency zero. Estimators for $f_s^{(q)}$ are given by:
$$\hat{f}_s^{(q)} = \frac{1}{2\pi} \sum_{j=-\infty}^{\infty} |j|^q\, \hat{\Gamma}_s(j) \tag{F.10}$$
where the $\hat{\Gamma}_s(j)$ are the autocovariances implied by the fitted parametric model. Fitting an AR(1) model to each of the $p$ variables, with estimated autoregressive coefficients $\hat{\rho}_s$ and innovation variances $\hat{\sigma}_s^2$, yields the estimators:
$$\hat{\alpha}(1) = \frac{\displaystyle\sum_{s=1}^{p} w_s \frac{4\hat{\rho}_s^2\hat{\sigma}_s^4}{(1-\hat{\rho}_s)^6(1+\hat{\rho}_s)^2}}{\displaystyle\sum_{s=1}^{p} w_s \frac{\hat{\sigma}_s^4}{(1-\hat{\rho}_s)^4}}, \qquad \hat{\alpha}(2) = \frac{\displaystyle\sum_{s=1}^{p} w_s \frac{4\hat{\rho}_s^2\hat{\sigma}_s^4}{(1-\hat{\rho}_s)^8}}{\displaystyle\sum_{s=1}^{p} w_s \frac{\hat{\sigma}_s^4}{(1-\hat{\rho}_s)^4}} \tag{F.11}$$
which may be inserted into Equation (F.8) to obtain expressions for the optimal bandwidths.
Lastly, we note that the expressions for $\hat{\alpha}(q)$ depend on the weighting vector $w$, which governs how we combine the individual $\hat{f}_s^{(q)}$ into a single measure of relative smoothness. Andrews suggests using either $w_s = 1$ for all $s$, or $w_s = 1$ for all but the instrument corresponding to the intercept in regression settings. EViews adopts the first suggestion, setting $w_s = 1$ for all $s$.
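The following is a minimal Python/NumPy sketch of the Andrews AR(1) plug-in bandwidth for the Bartlett kernel (q = 1, c_k = 1.1447, equal weights w_s = 1), intended only to illustrate the calculation described above, not to reproduce the EViews implementation:

    import numpy as np

    def andrews_bartlett_bandwidth(V):
        """Andrews (1991) AR(1) plug-in bandwidth for the Bartlett kernel (q = 1)."""
        T, p = V.shape
        num, den = 0.0, 0.0
        for s in range(p):
            v = V[:, s]
            # Fit an AR(1) model to each series to get rho and the innovation variance
            rho = v[1:] @ v[:-1] / (v[:-1] @ v[:-1])
            resid = v[1:] - rho * v[:-1]
            sigma2 = resid @ resid / (T - 1)
            # alpha(1) components from Equation (F.11), with w_s = 1
            num += 4.0 * rho**2 * sigma2**2 / ((1.0 - rho)**6 * (1.0 + rho)**2)
            den += sigma2**2 / (1.0 - rho)**4
        alpha1 = num / den
        return 1.1447 * (alpha1 * T) ** (1.0 / 3.0)   # Equation (F.8) with c_k = 1.1447

    # Hypothetical bivariate AR(1) data
    rng = np.random.default_rng(3)
    e = rng.standard_normal((500, 2))
    V = np.zeros_like(e)
    for t in range(1, 500):
        V[t] = 0.5 * V[t - 1] + e[t]
    print(andrews_bartlett_bandwidth(V - V.mean(axis=0)))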
Newey-West Automatic Selection
Newey-West (1994) employ a nonparametric approach to estimating $\alpha(q)$. In contrast to Andrews, who computes parametric estimates of the individual $f_s^{(q)}$, Newey-West use a Truncated kernel estimator to estimate the $f^{(q)}$ corresponding to aggregated data.
First, Newey and West define, for various lags, the scalar autocovariance estimators:
$$\hat{\sigma}_j = \frac{1}{T}\sum_{t=j+1}^{T} w'\hat{V}_t \hat{V}_{t-j}'w = w'\hat{\Gamma}(j)w \tag{F.12}$$
The $\hat{\sigma}_j$ may either be viewed as the sample autocovariances of a weighted linear combination of the data using weights $w$, or as a weighted combination of the sample autocovariances.
Next, Newey and West use the $\hat{\sigma}_j$ to compute nonparametric truncated kernel estimators of the Parzen measures of smoothness:
$$\hat{f}^{(q)} = \frac{1}{2\pi}\sum_{j=-n}^{n} |j|^q\, \hat{\sigma}_j \tag{F.13}$$
for $q = 0, 1, 2$. These nonparametric estimators are weighted sums of the scalar autocovariances $\hat{\sigma}_j$ obtained above for $j$ from $-n$ to $n$, where $n$, which Newey and West term the lag selection parameter, may be viewed as the bandwidth of a kernel estimator for $f^{(q)}$. The Newey-West estimator of $\alpha(q)$ is then given by:
$$\hat{\alpha}(q) = \left(\hat{f}^{(q)}/\hat{f}^{(0)}\right)^2 \tag{F.14}$$
for $q = 1, 2$. This expression may be inserted into Equation (F.8) to obtain the expression for the plug-in optimal bandwidth estimator.
In comparing the Andrews estimator in Equation (F.11) with the Newey-West estimator in Equation (F.14), we see two very different methods of distilling results from the $p$ dimensions of the original data into a scalar measure $\hat{\alpha}(q)$. Andrews computes parametric estimates of the generalized derivatives for the $p$ individual elements, then aggregates the estimates into a single measure. In contrast, Newey and West aggregate early, forming linear combinations of the autocovariance matrices, then use the scalar results to compute nonparametric estimators of the Parzen smoothness measures.
To implement the Newey-West optimal bandwidth selection method we require a value for $n$, the lag-selection parameter, which governs how many autocovariances to use in forming the nonparametric estimates of $f^{(q)}$. Newey and West show that $n$ should increase at (less than) a rate that depends on the properties of the kernel. For the Bartlett and the Parzen-Geometric kernels, the rate is $T^{2/9}$. For the Quadratic Spectral kernel, the rate is $T^{2/25}$. For the remaining kernels, the rate is $T^{4/25}$ (with the exception of the Truncated uniform and the Daniell kernels, for which the Newey-West theorems do not apply).
In addition, one must choose a weight vector $w$. Newey-West (1994) leave open the choice of $w$, but follow Andrews's (1991) suggestion of $w_s = 1$ for all but the intercept in their Monte Carlo simulations. EViews differs from this choice slightly, setting $w_s = 1$ for all $s$.
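A corresponding sketch of the Newey-West calculation for the Bartlett kernel (Python/NumPy, equal weights, with the lag-selection parameter n supplied by the user; again an illustration under the assumptions stated above rather than the EViews implementation):

    import numpy as np

    def newey_west_bartlett_bandwidth(V, n):
        """Newey-West (1994) plug-in bandwidth for the Bartlett kernel (q = 1),
        using lag-selection parameter n and weights w_s = 1."""
        T, p = V.shape
        w = np.ones(p)
        # Scalar autocovariances of the aggregated series w'V_t (Equation (F.12))
        u = V @ w
        sigma = np.array([u[j:] @ u[:T - j] / T for j in range(n + 1)])
        # Truncated-kernel estimates of the smoothness measures (Equation (F.13))
        f0 = (sigma[0] + 2.0 * sigma[1:].sum()) / (2.0 * np.pi)
        f1 = 2.0 * (np.arange(1, n + 1) * sigma[1:]).sum() / (2.0 * np.pi)
        alpha1 = (f1 / f0) ** 2                          # Equation (F.14)
        return 1.1447 * (alpha1 * T) ** (1.0 / 3.0)      # Equation (F.8)

    # Hypothetical bivariate AR(1) data with a hand-picked lag-selection parameter
    rng = np.random.default_rng(4)
    e = rng.standard_normal((500, 2))
    V = np.zeros_like(e)
    for t in range(1, 500):
        V[t] = 0.5 * V[t - 1] + e[t]
    print(newey_west_bartlett_bandwidth(V - V.mean(axis=0), n=6))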
Parametric VARHAC
Den Haan and Levin (1997) advocate the use of parametric methods, notably VARs, for LRCOV estimation. The VAR spectral density estimator, which they term VARHAC, involves estimating a parametric VAR model to filter the $\hat{V}_t$, computing the contemporaneous covariance of the filtered data, then using the estimates from the VAR model to obtain the implied autocovariances and corresponding LRCOV matrix of the original data.
Suppose we fit a VAR($q$) model to the $\{\hat{V}_t\}$. Let $\hat{A}_j$ be the $p \times p$ matrix of estimated $j$-th order AR coefficients, $j = 1, \ldots, q$. Then we may define the innovation (filtered) data and estimated innovation covariance matrix as:
$$\hat{V}_t^{*} = \hat{V}_t - \sum_{j=1}^{q} \hat{A}_j \hat{V}_{t-j} \tag{F.15}$$
and
$$\hat{\Gamma}^{*}(0) = \frac{1}{T-q}\sum_{t=q+1}^{T} \hat{V}_t^{*} \hat{V}_t^{*\prime} \tag{F.16}$$
Given the estimated VAR coefficients and the innovation covariance, the VARHAC estimator of the two-sided LRCOV matrix is:
$$\hat{\Omega} = \frac{T-q}{T-q-K}\, \hat{D}\, \hat{\Gamma}^{*}(0)\, \hat{D}' \tag{F.17}$$
where
$$\hat{D} = \left(I_p - \sum_{j=1}^{q} \hat{A}_j\right)^{-1} \tag{F.18}$$
Implementing VARHAC requires a specification for $q$, the order of the VAR. Den Haan and Levin use model selection criteria (AIC or BIC-Schwarz) with a maximum lag of $T^{1/3}$ to determine the lag order, and provide simulations of the performance of the estimator using data-dependent lag orders.
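The two-sided VARHAC calculation in Equations (F.15) through (F.18) is compact enough to sketch directly; the following Python/NumPy fragment uses a hand-picked VAR(1) filter rather than a criterion-selected lag order, and is illustrative only:

    import numpy as np

    def varhac_var1(V, K=0):
        """VARHAC long-run covariance estimate using a VAR(1) filter
        (Equations (F.15)-(F.18) with q = 1)."""
        T, p = V.shape
        Y, X = V[1:], V[:-1]
        A1 = np.linalg.solve(X.T @ X, X.T @ Y).T         # estimated VAR(1) coefficients
        resid = Y - X @ A1.T                             # filtered (innovation) data, Eq. (F.15)
        gamma0 = resid.T @ resid / (T - 1)               # innovation covariance, Eq. (F.16)
        D = np.linalg.inv(np.eye(p) - A1)                # Eq. (F.18)
        return (T - 1) / (T - 1 - K) * D @ gamma0 @ D.T  # Eq. (F.17)

    # Hypothetical bivariate AR(1) data
    rng = np.random.default_rng(5)
    e = rng.standard_normal((500, 2))
    V = np.zeros_like(e)
    for t in range(1, 500):
        V[t] = 0.5 * V[t - 1] + e[t]
    print(varhac_var1(V - V.mean(axis=0)))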
The corresponding VARHAC estimators for the one-sided matrices $\Lambda_1$ and $\Lambda_0$ do not have simple expressions in terms of $\hat{A}_j$ and $\hat{\Gamma}^{*}(0)$. We can, however, obtain insight into the construction of the one-sided VARHAC LRCOVs by examining results for the VAR(1) case. Given estimation of a VAR(1) specification, the estimators for the one-sided long-run variances may be written as:
$$\hat{\Lambda}_1 = \frac{T-q}{T-q-K}\sum_{j=1}^{\infty} (\hat{A}_1)^j\,\hat{\Gamma}(0) = \frac{T-q}{T-q-K}\,\hat{A}_1 (I_p - \hat{A}_1)^{-1}\,\hat{\Gamma}(0)$$
$$\hat{\Lambda}_0 = \frac{T-q}{T-q-K}\sum_{j=0}^{\infty} (\hat{A}_1)^j\,\hat{\Gamma}(0) = \frac{T-q}{T-q-K}\,(I_p - \hat{A}_1)^{-1}\,\hat{\Gamma}(0) \tag{F.19}$$
Both expressions are computed easily from the estimated VAR(1) coefficients $\hat{A}_1$ and $\hat{\Gamma}(0)$.
Prewhitened Kernel
Andrews and Monahan (1992) propose a simple modification of the kernel estimator which performs a parametric VAR prewhitening step to reduce autocorrelation in the data, followed by kernel estimation performed on the whitened data. The resulting prewhitened LRCOV estimate is then recolored to undo the effects of the transformation. The Andrews and Monahan approach is a hybrid that combines the parametric VARHAC and nonparametric kernel techniques.
There is evidence (Andrews and Monahan 1992, Newey-West 1994) that this prewhitening approach has desirable properties, reducing bias, improving confidence interval coverage probabilities, and improving the sizes of test statistics constructed using the kernel HAC estimators.
The Andrews and Monahan estimator follows directly from our earlier discussion. As in VARHAC, we first fit a VAR($q$) model to the $\hat{V}_t$ and obtain the whitened data (residuals):
$$\hat{V}_t^{*} = \hat{V}_t - \sum_{j=1}^{q} \hat{A}_j \hat{V}_{t-j} \tag{F.20}$$
In contrast to the VAR specification in the VARHAC estimator, the prewhitening VAR specification is not necessarily believed to be the true time series model, but is merely a tool for obtaining $\hat{V}_t^{*}$ values that are closer to white noise. (In addition, Andrews and Monahan adjust their VAR(1) estimates to avoid singularity when the VAR is near unstable, but EViews does not perform this eigenvalue adjustment.)
Next, we obtain an estimate of the LRCOV of the whitened data by applying a kernel estimator to the residuals:
$$\hat{\Omega}^{*} = \sum_{j=-\infty}^{\infty} k(j/b_T)\,\hat{\Gamma}^{*}(j) \tag{F.21}$$
where the sample autocovariances $\hat{\Gamma}^{*}(j)$ are given by
$$\hat{\Gamma}^{*}(j) = \frac{1}{T-q}\sum_{t=j+q+1}^{T} \hat{V}_t^{*} \hat{V}_{t-j}^{*\prime} \quad (j \ge 0), \qquad \hat{\Gamma}^{*}(j) = \hat{\Gamma}^{*}(-j)' \quad (j < 0) \tag{F.22}$$
Lastly, we recolor the estimator to obtain the VAR prewhitened kernel LRCOV estimator:
$$\hat{\Omega} = \frac{T-q}{T-q-K}\, \hat{D}\, \hat{\Omega}^{*}\, \hat{D}' \tag{F.23}$$
The prewhitened kernel procedure differs from VARHAC only in the computation of the LRCOV of the residuals. The VARHAC estimator in Equation (F.17) assumes that the residuals $\hat{V}_t^{*}$ are white noise, so that the LRCOV may be estimated using the contemporaneous variance matrix $\hat{\Gamma}^{*}(0)$, while the prewhitened kernel estimator in Equation (F.21) allows for residual heteroskedasticity and serial dependence through its use of the HAC estimator $\hat{\Omega}^{*}$. Accordingly, it may be useful to view the VARHAC procedure as a special case of the prewhitened kernel with $k(0) = 1$ and $k(x) = 0$ for $x \ne 0$.
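Combining the VARHAC and kernel sketches gives the prewhitened kernel estimator; the following Python/NumPy sketch uses VAR(1) prewhitening and a Bartlett kernel on the whitened residuals (illustrative assumptions and hand-picked bandwidth, not the EViews implementation):

    import numpy as np

    def prewhitened_hac_var1(V, bandwidth, K=0):
        """VAR(1)-prewhitened Bartlett-kernel LRCOV estimate (Equations (F.20)-(F.23))."""
        T, p = V.shape
        Y, X = V[1:], V[:-1]
        A1 = np.linalg.solve(X.T @ X, X.T @ Y).T          # prewhitening VAR(1) coefficients
        W = Y - X @ A1.T                                   # whitened residuals, Eq. (F.20)
        n = W.shape[0]
        # Bartlett-kernel LRCOV of the whitened data, Eqs. (F.21)-(F.22)
        omega_star = W.T @ W / n
        for j in range(1, n):
            wgt = 1.0 - j / bandwidth
            if wgt <= 0.0:
                break
            gamma = W[j:].T @ W[:n - j] / n
            omega_star += wgt * (gamma + gamma.T)
        # Recoloring step, Eq. (F.23)
        D = np.linalg.inv(np.eye(p) - A1)
        return n / (n - K) * D @ omega_star @ D.T

    # Hypothetical bivariate AR(1) data
    rng = np.random.default_rng(6)
    e = rng.standard_normal((500, 2))
    V = np.zeros_like(e)
    for t in range(1, 500):
        V[t] = 0.5 * V[t - 1] + e[t]
    print(prewhitened_hac_var1(V - V.mean(axis=0), bandwidth=4.0))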
The recoloring step for one-sided prewhitened kernel estimators is complicated when we allow for HAC estimation of $\Lambda_1$ (Park and Ogaki, 1991). As in the VARHAC setting, the expressions for the one-sided LRCOVs are quite involved, but the VAR(1) specification may be used to provide insight. Suppose that the VARHAC estimators of the one-sided LRCOV matrices defined in Equation (F.19) are given by $\hat{\Lambda}_1$ and $\hat{\Lambda}_0$, and let $\hat{\Lambda}_1^{*}$ be the strict one-sided kernel estimator computed using the prewhitened data:
$$\hat{\Lambda}_1^{*} = \sum_{j=1}^{\infty} k(j/b_T)\,\hat{\Gamma}^{*}(j) \tag{F.24}$$
Then the prewhitened kernel one-sided LRCOV estimators are given by:
$$\tilde{\Lambda}_1 = \hat{\Lambda}_1 + \frac{T-q}{T-q-K}\,\hat{D}\,\hat{\Lambda}_1^{*}\,\hat{D}', \qquad \tilde{\Lambda}_0 = \hat{\Lambda}_0 + \frac{T-q}{T-q-K}\,\hat{D}\,\hat{\Lambda}_1^{*}\,\hat{D}' \tag{F.25}$$
Kernel Function Properties

    Kernel               c_k      r_B    r_n
    Truncated uniform    0.6611   1/5    ---
    Bartlett             1.1447   1/3    2/9
    Bohman               2.4201   1/5    4/25
    Daniell              0.4462   1/5    ---
    Parzen               2.6614   1/5    4/25
    Parzen-Riesz         1.1340   1/5    4/25
    Parzen-Geometric     1.0000   1/3    2/9
    Parzen-Cauchy        1.0924   1/5    4/25
    Quadratic Spectral   1.3221   1/5    2/25
    Tukey-Hamming        1.6694   1/5    4/25
    Tukey-Hanning        1.7462   1/5    4/25
    Tukey-Parzen         1.8576   1/5    4/25

Notes: $r_B = 1/(2q+1)$ is the optimal rate of increase for the LRCOV kernel bandwidth. $r_n$ is the optimal rate of increase for the lag selection parameter in the Newey-West (1994) automatic bandwidth selection procedure. The Truncated uniform kernel does not have theoretically prescribed values for $c_k$ and $r_B$, but Andrews (1991) reports Monte Carlo simulations that suggest that these values work well. The Daniell kernel value for $r_B$ does not follow from the theory since the kernel does not satisfy the conditions of the optimal bandwidth theorems.
References
Andrews, Donald W. K. (1991). "Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation," Econometrica, 59, 817-858.
Andrews, Donald W. K. and J. Christopher Monahan (1992). "An Improved Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimator," Econometrica, 60, 953-966.
den Haan, Wouter J. and Andrew Levin (1997). "A Practitioner's Guide to Robust Covariance Matrix Estimation," Chapter 12 in Maddala, G. S. and C. R. Rao (eds.), Handbook of Statistics Vol. 15, Robust Inference, Amsterdam: North-Holland, 291-341.
Gallant, A. Ronald (1987). Nonlinear Statistical Models. New York: John Wiley & Sons.
Hamilton, James D. (1994). Time Series Analysis, Princeton: Princeton University Press.
Hansen, Bruce E. (1992a). "Consistent Covariance Matrix Estimation for Dependent Heterogeneous Processes," Econometrica, 60, 967-972.
Hansen, Bruce E. (1992b). "Tests for Parameter Instability in Regressions with I(1) Processes," Journal of Business and Economic Statistics, 10, 321-335.
Hansen, Lars Peter (1982). "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, 50, 1029-1054.
Kiefer, Nicholas M. and Timothy J. Vogelsang (2002). "Heteroskedasticity-Autocorrelation Robust Standard Errors Using the Bartlett Kernel Without Truncation," Econometrica, 70, 2093-2095.
Neave, Henry R. (1972). "A Comparison of Lag Window Generators," Journal of the American Statistical Association, 67, 152-158.
Newey, Whitney K. and Kenneth D. West (1987). "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55, 703-708.
Newey, Whitney K. and Kenneth D. West (1994). "Automatic Lag Length Selection in Covariance Matrix Estimation," Review of Economic Studies, 61, 631-653.
Park, Joon Y. and Masao Ogaki (1991). "Inferences in Cointegrated Models Using VAR Prewhitening to Estimate Short-run Dynamics," Rochester Center for Economic Research Working Paper No. 281.
Parzen, Emanuel (1957). "Consistent Estimates of the Spectrum of a Stationary Time Series," The Annals of Mathematical Statistics, 28, 329-348.
Parzen, Emanuel (1958). "On Asymptotically Efficient Consistent Estimates of the Spectral Density Function of a Stationary Time Series," Journal of the Royal Statistical Society, Series B, 20, 303-322.
Parzen, Emanuel (1961). "Mathematical Considerations in the Estimation of Spectra," Technometrics, 3, 167-190.
Parzen, Emanuel (1967). "On Empirical Multiple Time Series Analysis," in Lucien M. Le Cam and Jerzy Neyman (eds.), Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1, 305-340.
White, Halbert (1984). Asymptotic Theory for Econometricians. Orlando: Academic Press.
Index
(Key: I = Users Guide I; II = Users Guide II)
Symbols
?
pool cross section identifier II:762
.DB? files I:310
.EDB file I:306
.RTF file I:749
.WF1 file I:76
@all I:129
@cellid II:813
@clear I:194
@count I:187
@crossid II:812
@elem I:170
@eqnq I:176
@expand I:188, II:28
@first I:129
@firstmax II:818
@firstmin II:818
@ingrp II:762
@isna I:176
@last I:129
@lastmax II:818
@lastmin II:818
@map I:216
@neqna I:176
@obsid II:814
@obsnum
panel observation numbering II:815
@ranks I:174
@seriesname I:187
@unmap I:217
@unmaptxt I:217
~, in backup file name I:76, I:823
Numerics
1-step GMM
single equation II:73, II:78
2sls (Two-Stage Least Squares) II:57, II:64
diagnostics II:80
dropped instruments II:80
in systems II:585
instrument orthogonality test II:81
instrument summary II:80
J-statistic II:60
nonlinear II:64
nonlinear with AR specification II:65
order condition II:59
panels II:834
rank condition II:59
regressor endogeneity test II:81
residuals II:60
system estimation II:613
weak instruments II:82
weighted in systems II:585, II:613
weighted nonlinear II:65, II:76
with AR specification II:61, II:127
with MA specification II:63
with pooled data II:800
3sls (Three Stage Least Squares) II:585, II:614
64-bit version I:829
A
Abort key I:14
Across factors I:689
Active window I:108
Add factor II:699, II:712
Add text to graph I:709
Adding data I:279
Add-ins IV:775
ADF
See also Unit root tests.
Adjusted R-squared
for regression II:13
Advanced database query I:323
AIC II:1027
See also Akaike criterion.
Akaike criterion II:15, II:1027
for equation II:15
Alias II:701
database I:320
OBALIAS.INI file I:331
object I:329
Almon lag II:24
Alpha series I:196
additional views I:203
declaring and creating I:197
maximum length I:198, I:817
spreadsheet view I:203
truncation I:198, I:817
Analysis of variance I:384
by ranks I:387
Analytical derivatives II:1022
logl II:508
Analytical graphs I:636
And operator I:130, I:171
Anderson-Darling test I:389
Andrews automatic bandwidth II:1036
cointegrating regression II:267
GMM estimation II:78
long-run covariance estimation I:561
panel cointegrating regression 1:893
system GMM II:617
Andrews function 1:388
Andrews test II:308, II:355
Andrews-Quandt breakpoint test II:196
ANOVA I:384
by ranks I:387
Appending data I:279
AR roots
inverted II:114
AR Roots (VAR) II:626
AR specification
forecast II:150
in 2SLS II:61
in ARIMA models II:88, II:100
in nonlinear 2SLS II:65
in nonlinear least squares II:47
in pool II:780
in systems II:588
AR(1)
coefficient II:88
Durbin-Watson statistic II:95
estimation II:100
AR(p) II:89
estimation II:101
ARCH II:231
See also GARCH.
correlogram test II:182
LM test II:186
multivariate II:586
system II:586
ARCH test II:186
ARCH-M II:233
ARDL
bounds testing II:285
cointegrating relationships II:284
long-run relationships II:283
panel II:838
pooled mean group estimation II:838
Area band graph I:626
Area graph I:624
Arellano-Bond serial correlation test II:878
AREMOS data I:336
data banks I:335
ARFIMA II:93, II:104
ARFIMA models II:92
ARIMA II:92
ARIMA models II:92
automatic forecasting I:449
automatic selection I:449
automatic selection using X-13 I:423, I:426
Box-Jenkins approach II:94
correlogram II:117
diagnostic checking II:115
difference operator II:103
frequency spectrum II:119
identification II:94
impulse response II:118
roots II:116
specification II:100
starting values II:106, II:112
structure II:116
X-13 I:422
ARIMAX II:99
ARMA terms
in models II:735
seasonal II:90
testing II:183
using state spaces models for II:682
ARMAX II:682
Array expressions 1:835
Arrows
adding to a graph I:711
Artificial regression II:187, II:224
ASCII file
import I:141
open as workfile I:48
Asymptotic test II:163
Attributes I:62
adding I:65
replacing I:67
viewing I:62
Augmented Dickey-Fuller test II:532
See also Unit root tests.
Augmented regression II:212
Auto tab indent I:823
Autocorrelation I:393, II:14
robust standard errors II:32
Autocorrelation test See Serial correlation test
Automatic bandwidth selection
cointegrating regression II:267
GMM estimation II:78
long-run covariance estimation I:561
panel cointegrating regression 1:893
robust standard errors II:35
technical details II:1035
Automatic forecast
ARIMA I:449
ETS smoothing I:479, I:481
using X-13 I:427
Automatic variable selection II:49
Autoregressive distributed lag models
See ARDL.
Autoregressive spectral density estimator II:537
Auto-search
database I:321
Auto-series I:181
forecasting II:156
generate new series I:181, I:318
in estimation I:185
in groups I:185
in regression I:318
with database I:317
Auto-updating graph I:704
Auto-updating series I:191
and databases I:195
converting to ordinary series I:194
Auxiliary graphs I:655
Auxiliary regression II:183, II:186
Average log likelihood II:302
Average shifted histogram I:641
Axis I:605
assignment I:606
characteristics I:608, I:611
custom obs labels I:725
data ticks and lines I:609
date labels I:615
date ticks I:615
duplicating I:610
format I:610
labels I:608, I:609, I:610
remove custom date labels I:727
scale I:611
B
Backcast
in GARCH models II:237
MA terms II:108
Backup files I:76, I:823
Bai Perron breakpoint test II:198
computing in EViews II:201
examples II:203
Bai sequential breakpoint
estimation with 1:408
test II:200
Balanced data II:768
Balanced sample II:779
Baltagi, Feng and Kao test II:872, II:934
Band-Pass filter I:492
Bandwidth
Andrews II:617, II:1036
automatic selection See Automatic bandwidth
selection
bracketing I:644, I:660, I:661
cointegrating regression II:267
GMM estimation II:78
kernel - technical details II:1034
kernel graph I:644, I:659
local regression I:661
long-run covariance estimation I:561
Newey-West (automatic) II:618, II:1037
Newey-West (fixed) II:617
panel cointegrating regression 1:893
robust standard errors II:35
selection in system GMM II:593, II:617
Bar graph I:624
Bartlett kernel II:617
cointegrating regression II:267
GMM estimation II:78
C
C
coef vector II:6
constant in regression II:6
Cache I:365
Cancel keystroke I:14
Canonical cointegrating regression II:257, II:264
Categorical graphs I:669
See also Graphs.
analytical I:678
binning I:687
category summaries I:670
descriptive statistics I:670
factor display settings I:689
factor labeling I:698
factor ordering I:690
factors I:686
identifying categories I:681
line I:674
specifying factors I:686
summaries I:670
Categorical regressor stats II:305, II:328
Cauchy function 1:388
Causality
Dumitrescu-Hurlin II:927
Granger's test I:564
panel data II:926
CD test II:872, II:934
CEIC I:340
Cell
annotation I:747
formatting I:745
merging I:747
selection I:741
Censored dependent variable II:323
estimation II:324
expected dependent variable II:329
fitted index II:329
forecasting II:329
goodness-of-fit tests II:330
interpretation of coefficient II:328
log likelihood II:324
residuals II:328
scale factor II:328
specifying the censoring point II:325
views II:328
Census X-11
Commands
history of I:8
Comments I:111
spool I:758
tables I:747
Common sample I:175
Communalities II:964
Comparing workfiles and pages I:88
Comparison operators I:170
with missing values I:176
Component GARCH II:247
Component plots I:549
Conditional independence I:542
Conditional standard deviation
display graph of II:241
Conditional variance VI:229, II:231, II:232
forecast II:243
in the mean equation II:233
make series from ARCH II:244
Confidence ellipses I:664, II:164
Confidence interval II:164
ellipses II:164
for forecast II:145
for stochastic model solution II:738
Constant
in equation II:6, II:12
in ordered models II:318
Contemporaneous covariance (in panels) II:915
Contingency coefficient I:542
Continuously updating GMM
single equation II:73, II:78
Contracting data I:282
Convergence criterion II:1006, II:1019
default setting I:822
in nonlinear least squares II:44, II:49
in pool estimation II:783
Convert
panel to pool I:289
pool to panel I:295
Copy I:282
and paste I:113, I:749, 1:779
and paste See also OLE.
by link I:283
by value I:284
command I:159
data I:151
data cut-and-paste I:139
database I:331
objects I:113
pool objects II:760
table to clipboard I:749
to and from database I:313
to spool I:755
Copy special 1:806
Correlogram I:393, I:396, II:96
ARMA models II:117
autocorrelation function I:393
cross I:557
partial autocorrelation function I:394
Q-statistic I:395
squared residuals II:182, II:241
VAR II:628
Count models II:343
estimation II:344
forecasting II:348
negative binomial (ML) II:345
Poisson II:344
QML II:346
residuals II:348
Covariance
matrix, of estimated coefficients II:17
matrix, systems II:598
Covariance analysis I:526
details I:534
panel II:915
Covariance proportion II:147
Cragg-Donald II:82
Cramer's V I:542
Cramer-von Mises test I:389
Create
database I:307
dated data table I:509
factor II:960
graph I:703
group I:125, I:186
link I:235
objects I:101
page I:79
series I:118, I:178
spool I:753
table I:741
text object I:751
workfile I:42
Cross correlation I:557
D
Daniell kernel
cointegrating regression II:267
GMM estimation II:78
long-run covariance estimation I:561
panel cointegrating regression 1:893
robust standard errors II:35
technical details II:1033
Data
appending more I:279
cut and paste I:141, I:151
enter from keyboard I:137, 1:835
export I:150, I:152
Federal Reserve Economic data I:352
FRED I:352
import I:141
import as matrix I:150
import as table I:150, I:750
irregular I:251
keyboard entry I:139
pool II:764
regular I:251
remove I:282
structure I:251
Database
alias I:320
auto-search I:321
auto-series I:319
cache I:365
copy I:331
copy objects I:313
create I:307
data storage precision I:821
default I:310
default in search order I:321
delete I:331
delete objects I:315
display all objects I:308
export I:313
fetch objects I:311
field I:324
foreign formats I:333
frequency in query I:326
group storage options I:821
link I:312, I:313, I:364
link options I:367
maintenance I:331
match operator in query I:325
open I:307
packing I:332
previewing contents I:103
queries I:321
rebuild I:333
registry I:830
rename I:331
rename object I:315
repair I:333
sharing violation I:308
statistics I:332
store objects I:310
test integrity I:333
using auto-updating series with I:195
window I:307
Database registry I:319, I:830
Datastream I:341
Date pairs I:128
Date series I:204
Dated data table I:507
create I:509
customization I:520
customize I:509
data format I:513
example I:522
fonts I:517
formatting options I:515
frequency conversion I:515
headers I:518
table options I:509
templates I:520
transformation methods I:514
Dated import I:143
Dates
default display format I:822
display format I:204
format in a spreadsheet See Display format
global options I:815
match merging using I:227
Default
database I:9, I:310
database in search order I:321
directory I:9, I:828
set directory I:91
setting global options I:811
update directory I:91
window appearance I:812
Delete I:114
database I:331
graph element I:719
objects from database I:315
observation in series I:124
page I:87
series using pool II:776
spool objects I:762
Demonstration
estimation I:27
examining data I:20
forecasting I:34
getting data into EViews I:17
specification test I:30
Den Haan and Levin II:1038
Denton
frequency conversion method I:161, I:162
Dependent variable
no variance in binary models II:304
Derivatives II:1009, II:1022
checking II:515
E
Easy query I:322
Economy.com I:362
EcoWin database I:342
Edit
group I:126
series I:123, I:503
table I:743
EGARCH II:245
See also GARCH
EGLS (estimated GLS) II:781, II:799, II:833
EHS test II:81
EIA (U.S. Energy Administration) data I:346
Eigenvalues
factor analysis II:973
plots I:548
Elasticity at means II:164
Elliot, Rothenberg, and Stock point optimal test
II:535
See also Unit root tests.
Embedded spools I:756
Empirical CDF
graph I:647
Empirical distribution tests I:389
Empirical quantile graph I:649
Empirical survivor graph I:648
End field I:64, I:326
Endogeneity II:223
test of II:81
Endogenous variables II:57
in models II:699
Engle-Granger cointegration test II:948
Enterprise Edition I:336, I:337, I:340, I:341,
I:342, I:351, I:358, I:362
Epanechnikov kernel I:643
Equality tests I:383
groups I:543
mean I:384
median I:386
variance I:388
Equation II:5
add to model II:704
automatic dummy variables in II:28
coefficient covariance matrix II:17
coefficient covariance scalar II:16
coefficient p-values vector II:17
derivatives II:1022
failure to improve II:1007
for pool II:778
GLM II:359
GMM II:69
log likelihood II:510
logl II:510
missing data II:10
multi-equation II:584
near singular matrix problems II:1009
nonlinear least squares II:40
options II:1005
ordered models II:317
output II:11
panel II:831
residuals from equation II:20
robust regression 1:395
sample II:9
sample adjustment II:10
single equation methods II:9
starting values II:1007
state space II:677, II:688
systems II:584, II:592
truncated models II:333
two-stage least squares II:57
user-defined IV:775
VAR II:624
VEC II:644
ETS smoothing I:470
AMSE based I:477
example I:483
forecast details I:479, I:481
initial states I:476
MLE based I:477
model selection I:478, I:481
output I:482
parameters I:476, I:481
performing in EViews I:479
specification I:480
technical details I:471
Evaluating forecasts I:397
Evaluation order I:168
logl II:507
EViews
auto-update I:15, I:831
EViews Databases I:305
EViews Enterprise Edition I:342, I:351, I:357,
I:362
F
Factor analysis II:959
background II:990
communalities II:964
creation II:960
data members II:975
details II:990
eigenvalues II:973
example II:975
goodness of fit II:971, II:995
graph scores II:969
Kaiser's measure of sampling adequacy II:974
loading views II:972
method II:961, II:963
method details II:992
model evaluation II:995
PACE II:963
procedures II:974
reduced covariance II:972
rotation II:966
rotation (theory) II:997
scaling II:965
score estimation II:967
specification II:960
theory of II:990
views II:970
Factor and graph layout options I:693
Factor breakpoint test II:179
Factor display settings I:689
Factset I:351
Fair function 1:388
Fair-Taylor model solution II:731
FAME database I:351
Federal Reserve Economic Data I:352
Fetch I:116
from database I:311
from pool II:776
fetch I:159
Fields in database I:324
description I:327
display_name I:327
end I:326
expressions I:325
freq I:326
history I:327
last_update I:327
last_write I:327
name I:325
remarks I:327
source I:327
start I:326
type I:325
units I:327
Files
default locations I:828
open session on double click I:814
opening/saving on a cloud location I:92
Filter
Hodrick-Prescott I:491
Markov switching 1:446
state space models II:674
switching regression 1:445
workfile objects I:73
FIML II:586
system II:614
First derivative methods II:1012
Fisher-ADF II:563
Fisher-Johansen II:957
Fisher-PP II:563
Fit lines (graph) I:594
Fitted index
binary models II:311
censored models II:329
truncated models II:335
Fitted probability
binary models II:311
Fitted values
of equation II:18
Fixed effects
panel estimation II:833
pool II:781
pool description II:796
test II:861
Fixed variance parameter
negative binomial QML II:348
normal QML II:347
Flatten
spools I:763
FMOLS See Fully modified OLS (FMOLS)
Fonts
defaults I:815
tables I:746
text in graph I:709, I:734
Forecast
AR specification II:150
ARIMA I:449
ARIMA using X-13 I:427
automatic with ARIMA models I:449
automatic with ETS smoothing I:479, I:481
auto-series II:156
averaging I:458
backcasting II:151
binary models II:311
by exponential smoothing I:470
censored models II:329
Chow test II:210
combination testing I:398
combining I:458
conditional variance II:243
count models II:348
demonstration I:34
dynamic II:148, II:676
equations with formula II:155
error II:143
ETS smoothing I:479, I:481
evaluation I:397, II:145
example II:138
expressions and auto-updating series II:155
fitted values II:142
from estimated equation II:135
GLM II:372
innovation initialization in models II:732
interval II:145
lagged dependent variables II:148
MA specification II:151
Markov switching 1:462
missing values II:143
models II:708
nonlinear models II:161
n-step ahead II:676
n-step test II:217
one-step test II:216
ordered models II:322
out-of-sample II:142
PDLs II:161
smoothed II:677
standard error II:144, II:158
state space II:676
static II:149
structural II:150
switching regression 1:462
system II:599
truncated models II:335
VAR/VEC II:635, II:645
variance II:143
with AR errors II:151
Foreign data
import into workfile I:141
open as workfile I:17
Format
tables I:745
Formula
forecast II:155
implicit assignment I:179
normalize I:180
specify equation by II:7
Forward solution for models II:730
Fractional difference
Specification II:104
Fractional integration II:93
Frame I:603
size I:604
FRED I:352
Freedman-Diaconis I:638
Freeze I:114
create graph from view I:703
Freq
field in database query I:326
Frequency (Band-Pass) filter I:492
Frequency conversion I:113, I:114, I:158, I:815
Chow-Lin I:161, I:163
constant match I:161
cubic I:161, I:162
dated data table I:515
default settings I:164
Denton I:161
DRI database I:367
linear I:161, I:162
links I:242
Litterman I:161, I:164
methods I:160
panels I:233
point I:161, I:162
propagate NAs I:160
quadratic I:161
G
GARCH II:231
ARCH-M model II:233
asymmetric component model II:248
backcasting II:237
component models (CGARCH) II:247
estimation in EViews II:234
examples II:239
exponential GARCH (EGARCH) II:245
GARCH(1,1) model II:231
GARCH(p,q) model II:233
initialization II:237
Integrated GARCH (IGARCH) II:244
mean equation II:235
multivariate II:524
power ARCH (PARCH) II:246
procedures II:242
robust standard errors II:238
test for II:186
threshold (TARCH) II:244
variance equation II:235
Gauss file I:48
Gauss-Newton II:1013
Gauss-Seidel algorithm II:742, II:1014
Generalized error distribution II:245
Generalized least squares See GLS
Generalized linear models
example II:363
forecasting II:372
link function II:360, II:377
overview II:357
performing in EViews II:359
quasi-likelihood ratio test II:348
residuals II:371
robust standard errors II:362
specification II:359
technical details II:375
testing II:373
variance factor II:354
Generalized method of moments, See GMM.
Generalized residual
binary models II:311
censored models II:329
count models II:348
GLM II:371
ordered models II:322
score vector II:312
truncated models II:335
Generate series I:178
by command I:180
dynamic assignment I:179
for pool II:770
implicit assignment I:179
implicit formula I:179
using samples I:178
Geometric moving average I:185
GiveWin data I:357
Glejser heteroskedasticity test II:186
GLM
See Generalized linear models.
Global breakpoint
estimation with 1:408
tests II:198
Global optimum II:1009
GLS
detrending II:533
pool estimation details II:797
weights II:833
GMM II:69, II:615
bandwidth selection (single equation) II:78
bandwidth selection (system) II:593
breakpoint test II:84
continuously updating (single equation) II:73,
II:78
diagnostics II:80
dropped instruments II:80
estimate single equation by II:69
estimate system by II:586
HAC weighting matrix (single equation) II:78
HAC weighting matrix (system) II:616
automating I:740
auto-updating I:704
auxiliary graphs I:655
average shifted histogram I:641
axis borders I:579
axis control I:724
axis label format I:610
axis See also Axis.
background color I:723
background printing I:723
bar graph I:624
basic customization I:602
border I:723
boxplot I:652
categorical I:723
categorical See also Categorical graphs.
color settings I:723
combining I:708
combining graphs I:708
confidence ellipse I:664
coordinates for positioning elements I:709
creating I:703, I:704
custom obs labels I:725
customization I:708
customize axis labels I:610
customizing lines and symbols I:728
data option I:577
date label format I:612
date label frequency I:611
date label positioning I:614
dot plot I:629
drawing lines and shaded areas I:710
empirical CDF I:647
empirical log survivor I:648
empirical quantile I:649
empirical survivor I:648
error bar I:630
fill areas I:621
first vs. all I:591
fit lines I:594
font I:709
font options I:734
frame I:603
frame border I:604
frame color I:603
frame fill I:723
freeze I:703
freezing I:704
frequency I:579
grid lines I:723
groups I:584
high-low-open-close I:630
histogram I:637
histogram edge polygon I:640
histogram polygon I:639
identifying points I:573
indentation I:723
kernel density I:642
kernel regression I:658
legend I:617
legend font I:619
legend options I:727
legend placement I:618
legend settings I:727
legend text I:619
line formats I:619
line graph I:623
lines I:711
link frequency I:579
location I:605
means I:577
merging multiple I:102
mixed frequency data I:588
mixed line I:627
modifying I:720
multiple graph options I:736
multiple series option I:586
nearest neighbor regression I:660
non-consecutive observations I:723
observation graphs I:623
observations to label I:611
orientation I:578
orthogonal regression I:663
pairwise data I:590
panel data II:909
panel data options I:581
pie I:634
place text in I:709
position I:605, I:736
print in color I:738
printing I:738
quantile-quantile I:650, I:651, I:652
raw data I:577
regression line I:655
remove custom date labels I:727
remove elements I:719
rotate I:578
rotation I:611
sample break plotting options I:723
saving I:739
scale I:611
scatter I:631
scatterplot matrix I:592
scores II:969
seasonal I:635
series I:575
series view I:374
settings for multiple graphs I:735
shade options I:734
size I:604
slider bar (pasting with) 1:786
sorting I:718
sorting observations I:718
spike I:627
stacked I:591
symbol graph I:623
symbols I:619
templates I:730
text justification I:709
text options I:734
theoretical distribution I:646
type I:576, I:585, I:623, I:721
update settings I:705
XY area I:633
XY bar I:633
XY line I:632
XY pairs I:591
Grid lines I:615
table I:744
Grid search II:1014
Group I:186, I:501
add member I:501
adding series I:502
adding to I:126
auto-series I:185
create I:125, I:186
display format I:125
display type I:119
edit mode default I:819
edit series I:503
editing I:126
element I:187
graph view I:526
graphing I:584
H
HAC
cointegrating regression II:267
GMM estimation II:78
panel cointegrating regression 1:893
robust standard errors II:32, II:35
system GMM II:617
Hadri II:561
Hannan-Quinn criterion II:1027
for equation II:15
Hansen instability test II:275
Harvey heteroskedasticity test II:186
Hat matrix II:219
Hatanaka two-step estimator II:131
Hausman test II:223, II:863
Haver Analytics Database I:357
Heckit model See Heckman selection
Heckman selection II:337
example II:340
ML estimation II:338
performing in EViews II:339
two-step model II:337
Heckman two-step II:337
Help I:14
EViews Forum I:15
help system I:14
World Wide Web I:15
Heteroskedasticity
binary models II:315
cross-sectional details II:798
groupwise I:543
of known form II:36
Brown-Forsythe I:389
chi-square test I:386
Chow breakpoint II:194
coefficient based II:164
coefficient p-value II:12
CUSUM II:214
CUSUM of squares II:215
demonstration I:30
descriptive statistic tests I:380
distribution I:389
F-test I:388
Hausman test II:223
heteroskedasticity II:185
irrelevant or redundant variable II:178
Kruskal-Wallis test I:387
Levene test I:389
mean I:380
median I:382
multi-sample equality I:383
nonnested II:225
normality II:182
omitted variables II:177
Ramsey RESET II:212
residual based II:181
Siegel-Tukey test I:388
single sample I:380
stability test II:193
unit root I:396, II:527
unknown breakpoint II:196
Van der Waerden test I:382, I:387
variance I:381
Wald coefficient restriction test II:170
White heteroskedasticity II:187
Wilcoxon rank sum test I:386
Wilcoxon signed ranks test I:382
I
Icon I:99
Identification
Box-Jenkins II:94
GMM II:70
nonlinear models II:48
structural VAR II:641
Identity
in model II:700
in system II:589
If condition in samples I:129
IGARCH II:244
J
Jarque-Bera statistic I:376, II:182, II:242
in VAR II:628
JPEG I:739
J-statistic
2sls II:60
GMM II:70
panel equation II:855
J-test II:225
K
Kaiser's measure of sampling adequacy II:974
Kaiser-Guttman II:991
Kalman filter II:675
L
Label
See Label object
Label object I:112
automatic update option I:816
capitalization I:111
LAD II:479
output II:482
performing in EViews II:479
quantile process views II:485
Lag
M
MA roots
inverted II:114
MA specification
backcasting II:108
forecast II:151
in ARIMA models II:89, II:102
in model solution II:732
in two stage least squares II:63
MADMED
definition 1:390
MADZERO
definition 1:390
MAE I:398
Mann-Whitney test I:386
MAPE I:398
Marginal significance level II:12, II:163
Markov switching 1:443, 1:445
AR 1:449, 1:450
autoregressive models 1:449, 1:450
dynamic regression 1:449
estimation in EViews 1:451
example 1:463
expected durations 1:459
filtering 1:446
forecast 1:462
initial probabilities 1:448, 1:454
mean models 1:449
regime probabilities 1:446, 1:461, 1:463
smoothing 1:447
transition probabilities 1:459
transition results 1:459, 1:463
views available 1:459
Marquardt II:1013
Match merge I:222
by date I:227
many-to-many I:225
many-to-one I:224
one-to-many I:223
panels I:229
using links I:222
Match operator in database query I:325
Match-merge
as import I:146
Matlab IV:775
Maximization See Optimization (user-defined).
Maximum
number of observations I:829
Maximum likelihood
See also Logl.
See also Optimization (user-defined).
full information II:586
quasi-generalized pseudo-maximum likelihood
II:351, II:369
quasi-maximum likelihood II:346, II:357
user specified II:503
McFadden R-squared II:302
Mean I:375
equality test I:384
hypothesis test of I:380
Mean absolute error I:398, II:146
Mean absolute percentage error I:398, II:146
Mean equation (GARCH) II:235
Mean square error I:459, I:479, II:146
Measurement equation II:674
Measurement error II:57, II:212
Median I:375
equality test I:386
hypothesis test of I:382
Median function 1:389
Memory allocation I:828
Memory, running out of I:828
Menu I:109
objects I:110
Merge I:113
See Match merge.
graphs I:102
into panel workfiles II:827
store option I:311
Messages I:811
M-estimation 1:387
solving II:729
solving to match target II:746
starting values II:744
static solution II:735
static solve II:705
stochastic equations II:700
stochastic simulation II:735
stochastic solution II:737
text description of II:722
text keywords II:722
tracking variables II:741
updating links II:719
variable dependencies II:721
variable shift add factor II:727
variable view II:721
Moment condition II:70
Moment selection criteria II:83
Moody's Economy.com I:362
Moving statistics
functions I:173
geometric mean I:185
MSAR 1:449, 1:450
MSE I:459, I:479, II:146
MSI 1:449
MSM 1:449
Multicollinearity II:22
coefficient variance decomposition II:168
test of II:167, II:168
Multiple processors I:829
Multivariate ARCH II:586
N
NA See NAs and Missing data.
Nadaraya-Watson I:659
Name
object I:110
reserved I:111
Name field in database query I:325
Naming objects
spool I:758
NAs I:175
forecasting II:143
inequality comparison I:176
See also Missing data
test I:176
Near singular matrix II:22
O
Object I:97
allow multiple untitled I:813
basics I:98
closing untitled I:813
copy I:113
create I:101
data I:98, I:117
delete I:114
freeze I:114
icon I:99
label See Label object
naming I:111
open I:102
preview I:103, I:313
print I:115
procedure I:99
sample I:136
show I:103
store I:115
type I:100
window I:108
Object linking and embedding
See OLE.
Objects menu I:110
Observation equation II:674, II:679
Observation graphs I:580, I:623
missing values I:580
Observation identifiers I:292
Observation number I:122
Observation scale I:611
Observations, number of
maximum I:828
ODBC I:48
OLE 1:779
copy special 1:806
embedding (definition) 1:780
linking (definition) 1:780
paste EViews object 1:784
pasting graphs 1:781
pasting numerical data 1:795
pasting with the workfile sample 1:802
using 1:780
OLS (ordinary least squares)
See also Equation.
adjusted R-squared II:13
coefficient standard error II:12
coefficient t-statistic II:12
coefficients II:11
standard error of regression II:14
sum of squared residuals II:14
system estimation II:584, II:611
Omitted variables test II:177, II:212
panel II:857
OneDrive I:92
One-step forecast test II:216
One-step GMM
single equation II:73, II:78
One-way frequency table I:392
Open
database I:307
foreign data as matrix I:150
foreign data as table I:150, I:750
multiple objects I:102
object I:102
options I:303
workfile I:78
Operator I:167
arithmetic I:167
conjunction (and, or) I:171
difference I:172
lag I:172
lead I:172
parentheses I:168
relational I:170
Optimization
methods II:1006
Optimization algorithms
BHHH II:1013
first derivative methods II:1012
Gauss-Newton II:1013
Goldfeld-Quandt II:1012
P
PACE II:963
details II:994
Pack database I:332
Packable space I:308, I:332
Page
create new I:79
delete page I:87
rename I:87
reorder I:87
Page breaks I:771
Pairwise graphs I:590
Panel
random components test II:865
residual cross-section dependence test II:872,
II:934
Panel cointegrating regression 1:887
equation specification 1:889
examples 1:895
performing in EViews 1:888
PMG models II:838
technical details 1:887, 1:901
Panel data II:807
analysis II:828
balanced I:258
cell identifier II:813
cointegration testing II:932, II:952
convert to pool I:289
covariance analysis II:915
create workfile of I:46
cross-section identifiers II:812
cross-section summaries II:822
dated I:257
duplicate identifiers I:256, I:273
dynamic panel data II:835
graphs 1:782
spreadsheets 1:787
tables 1:787
Paste special See OLE.
PcGive data I:357
PDF
save graph as I:739
PDL (polynomial distributed lag) II:23, II:145
far end restriction II:24
forecast standard errors II:145
instrumental variables II:25
near end restriction II:24
specification II:24
Pearson covariance I:526
Pedroni panel cointegration test II:933, II:954
Period
summaries II:822
SUR II:800
Perron unit root test II:539
Pesaran scaled LM test II:872, II:934
Pesaran, Shin and Smith II:838
Phillips-Ouliaris cointegration test II:948
Phillips-Perron test II:534
Pie graph I:634
PMG II:838
PNG I:739
Point
frequency conversion method I:161, I:162
Poisson count model II:344
Polynomial distributed lags, See PDL.
Pool II:757
? placeholder II:762
and cross-section specific series II:761
AR specification II:780
balanced data II:768, II:772
balanced sample II:779
base name II:761
coefficient test II:792
cointegration II:774
common coefficients II:780
convergence criterion II:783
convert to panel I:295
copy II:760
creating II:763
cross-section II:759
cross-section specific coefficients II:780
defining II:759
Q
QML II:346, II:357, II:379
QQ-plot I:650, I:651, I:652
save data I:496
Q-statistic
Ljung-Box I:395
residual serial correlation test II:628
serial correlation test II:96
Quadratic
R
R IV:775
R project IV:775
Ramsey RESET test II:212
Random components test II:865
Random effects
LM test for II:865
panel estimation II:833
pool II:781
pool descriptions II:797
test for correlated effects (Hausman) II:863
Random walk II:527
Rank condition for identification II:59
Ranks
observations in series or vector I:174
Ratio to moving-average I:448
RATS data
4.x native format I:362
portable format I:363
Read II:764
data from foreign file as matrix I:150
data from foreign file as table I:150, I:750
Reading EViews data (in other applications) I:152
Rebuild database I:333
Recursive coefficient II:217
save as series II:217
Recursive estimation
least squares II:213
using state space II:682
Recursive least squares II:213
Recursive residual II:213, II:214
CUSUM II:214
CUSUM of squares II:215
n-step forecast test II:217
one-step forecast test II:216
save as series II:217
Reduced covariance II:972
Redundant variables test II:178
panel II:859
Regime probabilities 1:461
outputting 1:463
Regime switching 1:444
Registry I:319
Regression
See also Equation.
adjusted R-squared II:13
breakpoint estimation 1:407
coefficient standard error II:12
coefficients II:11
collinearity II:22
forecast II:135
F-statistic II:15
studentized II:219
sum of squares II:14
symmetrically trimmed II:331
system II:600
tests of II:181
truncated dependent variable II:334
unconditional II:113
Resize
spools I:761
table columns and rows I:744
workfile I:263, I:276
Restricted estimation II:8
Restricted log likelihood II:302
Restricted VAR II:637
Restructuring II:766
Results
display or retrieve II:16
Rich Text Format I:749
RMSE I:398, II:146
Rn-squared statistic
definition 1:391
Robust least squares 1:387
Andrews function 1:388
Bisquare function 1:388
Cauchy function 1:388
example 1:400
Fair function 1:388
Huber function 1:389
Logistic function 1:389
Median function 1:389
M-estimation 1:387
Talworth function 1:389
Welsch function 1:389
Robust regression See Robust least squares.
Robust standard errors II:32
Bollerslev-Wooldridge for GARCH II:238
clustered II:833
GLM II:353, II:362
GMM II:74
Huber-White (QML) II:353, II:362
Robustness iterations I:657, I:662
Root mean square error I:398, II:146
Rotate
factors II:966, II:997
graphs I:578
Rotation of factors II:966
details II:997
Row
functions I:187
height I:744
R-squared
adjusted II:13
for regression II:13
from two stage least squares II:61
McFadden II:302
negative II:240
uncentered II:183, II:187
with AR specification II:113
RTF I:749, I:750
create I:832
redirecting print to I:832
Rw-squared statistic
definition II:391
S
SAIC I:460
Sample
@all I:129
@first I:129
adjustment in estimation II:10
all observations I:129
balanced II:779
breaks I:580
change I:128
command I:130
common I:175
current I:61
date pairs I:128
first observation I:129
if condition I:129
individual I:175
intraday data I:131
last observation I:129
range pairs I:128
selection and missing values I:130
specifying sample object I:136
specifying samples in panel workfiles II:816
used in estimation II:9
using sample objects in expressions I:136
with expressions I:131
workfile I:127
SAR specification II:90, II:94
SAR(p)
estimation II:101
SARMA II:90
SAS file I:48
Save
backup workfile I:76
graphs I:739
options I:303
save as new workfile I:76
spool I:772
tables I:750
workfile I:75
workfile as foreign file I:150
workfile precision and compression I:77
Scalar I:190
Scale factor II:328
Scaled coefficients II:164
Scaling
factor analysis II:965
Scatterplot I:631
categorical I:681
matrix of I:592
with confidence ellipse I:664
with kernel regression fit I:658
with nearest neighbor fit I:660
with orthogonal regression line I:663
with regression line I:655
Scenarios II:715
simple example II:313
Schwarz criterion II:1027
for equation II:15
Score coefficients II:968
Score vector II:312
Scores II:967
Seasonal
ARMA terms II:90
difference I:173, II:103
graphs I:635
Seasonal adjustment I:416
additive I:448
Census X-11 (historical) I:443
Census X-12 I:434
Census X-13 I:416
multiplicative I:448
Tramo/Seats I:443
Second derivative methods II:1011
Seemingly unrelated regression II:585, II:612
Select
all I:102
object I:101
Selection model See Heckman selection.
Sensitivity of binary prediction II:307
Sequential breakpoint
estimation with II:408
tests II:200
Serial correlation
ARIMA models II:92
Durbin-Watson statistic II:14, II:95
first order II:88
higher order II:89
nonlinear models II:127
switching models II:449
theory II:87
two stage regression II:127
Serial correlation test
equations II:95, II:183
panels II:878
VARs II:628
Series I:373
adjust values I:405
auto-series I:181
auto-updating I:191
auto-updating and databases I:195
auto-updating and forecasting II:155
binning I:407
classification I:407
comparison I:405
create I:118, I:178
cross-section specific II:761
delete observation I:124
description of I:117
descriptive statistics I:374
difference I:172
display format I:119
display type I:119
dynamic assignment I:179
edit in differences I:503
edit mode default I:819
editing I:123, I:835
fill values I:405
functions I:169
generate by command I:180
graph I:374, I:575
implicit assignment I:179
in pool objects II:762
insert observation I:124
interpolate I:413
lag I:172
lead I:172
pooled II:762
previewing contents I:103
procs I:406
properties I:404
ranking I:174
resample I:411
setting graph axis I:606
smpl+/- I:122
spreadsheet view I:374
spreadsheet view defaults I:818
using expressions in place of I:181
S-estimation II:392
performing in EViews II:395
tuning constants II:393
weight function II:392
SETAR II:427
Shade region of graph I:710
Shadowing of object names I:330
Sharing violation I:308
Show object view I:102
Siddiqui difference quotient II:481, II:494
Siegel-Tukey test I:388
Sign test I:382
Signal equation II:679
Signal variables
views II:692
Silverman bandwidth I:644
Sims-Zha prior II:653, II:668
Simultaneous equations See systems.
Singular matrix
error in binary estimation II:305
error in estimation II:22, II:48, II:1009
error in logl II:507, II:515, II:517
error in PDL estimation II:24
error in RESET test II:213
error in VAR estimation II:642
Skewness I:375
Slope equality test (quantile regression) II:488
technical details II:500
SMA specification II:90, II:94
Smoothed AIC weights I:460
Smoothing
ETS model I:470
likelihood based I:470
Markov switching II:447
methods I:464
parameters I:465
state space II:675
Smpl command I:130
Smpl+/- I:122
Solve
Broyden II:742
Gauss-Seidel II:1014
Newton-Raphson II:1011
Sort
display I:504
observations in a graph I:579, I:718
spreadsheet display I:504
valmaps I:211
workfile I:303
Source
field in database query I:327
Sparse label option I:378, I:540
Spearman rank correlation I:526
Spearman rank-order
theory I:535
Specification
by formula II:7
by list II:6
of equation II:6
of nonlinear equation II:42
of systems II:588
Specification test
for binary models II:315
for overdispersion II:349
for tobit II:331
of equation II:163
RESET (Ramsey) II:212
White II:187
Specificity of binary prediction II:307
Spectrum estimation II:536, II:537
Spike graph I:627
Spool I:753
add to I:754
appending I:755
comments I:758
copying to I:755
create I:753
customization I:765
delete objects I:762
display mode I:767
embedding I:756
extract I:762
flatten tree hierarchy I:763
hiding objects I:759
indentation I:762
management I:754
naming objects I:758
order I:762
print I:832
print size I:772
print to I:754
printing I:771
properties I:765
rearrange I:762
redirecting print to I:832
resize I:761
saving I:772
Spreadsheet
file import I:141
file import as matrix I:150
file import as table I:150, I:750
series I:374
sort display default I:819
sort display order I:504
view option I:818, I:819
Spreadsheet view
alpha I:203
display type I:119
group I:502
SPSS file I:48
SSAR II:451
SSCP I:528
Stability test II:193
Bai Perron tests II:198
Chow breakpoint II:194
Chow forecast II:210
RESET II:212
with unequal variance II:222
Stacked data II:765
balanced II:768
descriptive statistics II:773
order II:767
Stacking data I:295
Standard deviation I:375
Standard error
for estimated coefficient II:12
forecast II:144, II:158
of the regression II:14
See also Robust standard errors.
VAR II:632
Standardized coefficients II:164
Standardized residual II:18
binary models II:311
censored models II:329
count models II:348
GLM II:371
truncated models II:334
Start
field in database query I:326
field in workfile details I:64
Start page I:811
Starting values
(G)ARCH models II:237
binary models II:303
for ARMA estimation II:106, II:112
for coefficients II:46, II:1007
for nonlinear least squares II:43, II:45
for systems II:592
logl II:511
param statement II:46, II:1008
state space II:683
user supplied II:107, II:112
Stata file I:48
State equation II:674, II:678
State space II:673
@mprior II:683
@vprior II:683
estimation II:677, II:688
filtering II:674
forecasting II:676
interpreting II:689
observation equation II:674
representation II:673
specification II:673, II:678
specification (automatic) II:686
starting values II:683
state equation II:674
views II:691
State variables II:673
State views II:692
Static forecast II:149
Static OLS II:256, II:257
Stationary time series II:527
Status line I:9
Step size II:1013
logl II:509
Stepwise II:49
swapwise II:54
uni-directional II:53
Stochastic equations
in model II:700
Store I:115
as .DB? file I:311
from pool II:776
in database I:310
merge objects I:311
Structural change
estimation in the presence of II:407
tests of II:193, II:198
Structural forecast II:150
Structural solution of models II:735
Structural VAR II:637
estimation II:642
factorization matrix II:629
identification II:641
long-run restrictions II:639
short-run restrictions II:638
Structuring a workfile I:251
Studentized residual II:219
Subtitle
Breusch-Pagan LM test II:874
Sum of squared residuals
for regression II:14
Summarizing data I:507
Summary statistics
for regression variables II:13
SUR II:585, II:612
Survivor function I:648
log I:648
save data I:496
Swapwise II:54
Switching regression II:443
dynamic models II:448
estimation in EViews II:451
expected durations II:459
filtering II:445
forecast II:462
initial probabilities II:454
regime probabilities II:444, II:461, II:463
serial correlation II:449
transition probabilities II:459
transition results II:459, II:463
views available II:459
T
Tab settings I:823
Table I:741
cell annotation I:747
cell format I:745
cell merging I:747
color I:746
column resize I:744
column width See Column width.
comments I:747
copy I:749
copy to other windows programs I:749
customization I:744
edit I:743
editing I:743
font I:746
formatting I:745
gridlines I:744
merging I:747
paste as unformatted text I:749
print I:749
read data from foreign source I:750
row resize I:744
save to disk I:750
selecting cells I:741
title I:744
Tabs
See Page.
Tabulation
n-way I:539
one-way I:392
Talworth function II:389
TAR II:427
TARCH II:244
Template
dated data tables I:520
graphs I:730
Test
See also Hypothesis tests, Specification test and Goodness of fit.
ARCH II:186
Arellano-Bond serial correlation II:878
breakpoint II:194, II:196, II:198
coefficient II:164
cross-section dependence II:872, II:934
Durbin-Wu-Hausman II:81
Granger causality I:564, II:926
Hansen instability II:275
heteroskedasticity II:185
multiple breakpoint II:198
Park added variable II:278
pooled II:792
RESET II:212
residual II:181
stability tests II:193
unit root with break II:539
variance ratio II:565
White II:187
Text I:751
Text file
import as matrix I:150
import as table I:150, I:750
U
U.S. Energy Information Administration data I:346
UMP random effects test II:865
Unconditional residual II:113
Undo I:405
Uni-directional II:53
Unit root test I:396, II:527
augmented Dickey-Fuller II:532
Dickey-Fuller II:532
Dickey-Fuller GLS detrended II:533
Elliot, Rothenberg, and Stock II:535
KPSS II:535
panel data II:555, II:930
Phillips-Perron II:534, II:535
pooled data II:773
trend assumption II:533
with breakpoints II:539
Units
field in database query I:327
Unstacked data II:764
Unstacking data I:289
Unstacking identifiers I:291
Untitled I:110, I:111
Update
automatic I:191
V
Valmap I:207
cautions I:218
find label for value I:216
find numeric value for label I:217
find string value for label I:217
functions I:216
properties I:212
sorting I:211
Value map See Valmap.
Van der Waerden I:647
Van der Waerden test I:382, I:387
VAR
AR roots II:626
autocorrelation LM test II:628
autocorrelation test II:628
coefficients II:646
cointegration II:939
correlograms II:628
decomposition II:634
estimation II:624
estimation output II:624
factorization matrix in normality test II:629
forecasting II:635, II:645
Granger causality test II:627
impulse response II:631
Jarque-Bera normality test II:628
lag exclusion test II:627
lag length II:627
lag length choice II:627
lag structure II:626
mathematical model II:623
response standard errors II:632
restrictions II:637
See also Impulse response, Structural VAR.
VARHAC I:558
technical details II:1038
Variance
equality test I:388
hypothesis test of I:381
Variance decomposition II:168, II:634
Variance equation See ARCH and GARCH.
Variance factor II:354
Variance inflation factor (VIF) II:167
Variance proportion II:147
Variance ratio test II:565
example II:567
technical details II:572
VEC II:643
estimating II:644
Vector autoregression
See VAR.
Vector error correction model See VEC and VAR.
Verbose mode I:824
View
default I:102
Vogelsang-Perron unit root tests II:539
Volatility II:232
W
Wald test II:170
coefficient restriction II:170
demonstration I:30
formula II:175
F-statistic II:176
joint restriction II:172
nonlinear restriction II:175
structural change with unequal variance II:222
Warning on close option I:813
Watson test I:389
Weak instruments II:66, II:82
Weight functions
M-estimation II:388
S-estimation II:392
Weighted least squares II:36
cross-equation weighting II:584
nonlinear II:47
nonlinear two stage II:65, II:76
pool II:781
Workfile
append to I:279
applying structure to I:261
attributes I:62
automatic backup I:821
common structure errors I:274
comparing I:88
contract I:282
copy from I:282
create I:42
description of I:41
details display I:62
directory I:61
export I:303
filtering objects I:73
load existing from disk I:78
multi-page I:78
observation numbers I:122
panel II:807
pool II:757, II:766
remove structure I:276
reshape I:286
resize I:263, I:276
sample I:127
save I:75
sorting I:303
stacking I:295
statistics I:75
storage defaults I:820
storage precision and compression I:820
structure settings I:262
structuring I:251
summary view I:75
undated I:253
unstacking I:289
window I:60
Write II:776
X
X-11 I:443
using X-12 I:436
using X-13 I:423
X-12 I:434
X-13 I:416
ARIMA estimation I:427
ARIMA forecasting I:427
ARIMA models I:422
automatic outliers I:421
example I:432
Y
Yates continuity correction I:387
Z
Zivot-Andrews unit root test II:539