var svar — Structural vector autoregressive models
Description
svar fits a vector autoregressive (VAR) model subject to short- or long-run constraints you place on
the resulting impulse–response functions (IRFs). Economic theory typically motivates the constraints,
allowing a causal interpretation of the IRFs to be made. See [TS] var intro for a list of commands
that are used in conjunction with svar.
Quick start
Structural VAR model for y1, y2, and y3 using tsset data with short-run constraints on impulse
responses given by predefined matrices A and B
svar y1 y2 y3, aeq(A) beq(B)
Structural VAR model for y1, y2, and y3 with long-run constraint on impulse responses given by the
predefined matrix C
svar y1 y2 y3, lreq(C)
Add exogenous variables x1 and x2
svar y1 y2 y3, lreq(C) exog(x1 x2)
Same as above, but include third and fourth lags of the dependent variables instead of first and second
svar y1 y2 y3, lreq(C) exog(x1 x2) lags(3 4)
Menu
Statistics > Multivariate time series > Structural vector autoregression (SVAR)
Syntax
Short-run constraints
svar depvarlist [if] [in] [, aconstraints(constraintsa) aeq(matrixaeq) acns(matrixacns)
     bconstraints(constraintsb) beq(matrixbeq) bcns(matrixbcns) short_run_options]
Long-run constraints
svar depvarlist [if] [in] [, lrconstraints(constraintslr) lreq(matrixlreq)
     lrcns(matrixlrcns) long_run_options]
You must tsset your data before using svar; see [TS] tsset.
depvarlist and varlistexog may contain time-series operators; see [U] 11.4.4 Time-series varlists.
by, collect, fp, rolling, statsby, and xi are allowed; see [U] 11.1.10 Prefix commands.
See [U] 20 Estimation and postestimation commands for more capabilities of estimation commands.
Options
Model
noconstant; see [R] Estimation options.
aconstraints(constraintsa ), aeq(matrixaeq ), acns(matrixacns )
bconstraints(constraintsb ), beq(matrixbeq ), bcns(matrixbcns )
These options specify the short-run constraints in an SVAR model. To specify a short-run SVAR
model, you must specify at least one of these options. The first list of options specifies constraints
on the parameters of the A matrix; the second list specifies constraints on the parameters of the
B matrix (see Short-run SVAR models). If at least one option is selected from the first list and
none are selected from the second list, svar sets B to the identity matrix. Similarly, if at least
one option is selected from the second list and none are selected from the first list, svar sets A
to the identity matrix.
None of these options may be specified with any of the options that define long-run constraints.
aconstraints(constraintsa ) specifies a numlist of previously defined Stata constraints to be
applied to A during estimation.
aeq(matrixaeq ) specifies a matrix that defines a set of equality constraints. This matrix must be
square with dimension equal to the number of equations in the underlying VAR model. The
elements of this matrix must be missing or real numbers. A missing value in the (i, j ) element
of this matrix specifies that the (i, j ) element of A is a free parameter. A real number in the
(i, j ) element of this matrix constrains the (i, j ) element of A to this real number. For example,
$$\mathbf{A} = \begin{pmatrix} 1 & 0 \\ . & 1.5 \end{pmatrix}$$
specifies that A[1, 1] = 1, A[1, 2] = 0, A[2, 2] = 1.5, and A[2, 1] is a free parameter.
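For instance, assuming a hypothetical two-variable model with dependent variables y1 and y2 (placeholder names, not from an example in this entry), the matrix above could be defined and passed to aeq() as in the following syntax sketch:
. matrix A = (1, 0 \ ., 1.5)    // missing values mark free parameters
. svar y1 y2, aeq(A)            // with no B options specified, B is set to the identity matrix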
acns(matrixacns ) specifies a matrix that defines a set of exclusion or cross-parameter equality
constraints on A. This matrix must be square with dimension equal to the number of equations
in the underlying VAR model. Each element of this matrix must be missing, 0, or a positive
integer. A missing value in the (i, j ) element of this matrix specifies that no constraint be placed
on this element of A. A zero in the (i, j ) element of this matrix constrains the (i, j ) element
of A to be zero. Any strictly positive integers must be in two or more elements of this matrix.
A strictly positive integer in the (i, j ) element of this matrix constrains the (i, j ) element of
A to be equal to all the other elements of A that correspond to elements in this matrix that
contain the same integer. For example, consider the matrix
$$\mathbf{A} = \begin{pmatrix} . & 1 \\ 1 & 0 \end{pmatrix}$$
Specifying acns(A) in a two-equation SVAR model constrains A[2, 1] = A[1, 2] and A[2, 2] = 0
while leaving A[1, 1] free.
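As a syntax sketch, again with hypothetical dependent variables y1 and y2, the cross-parameter constraint above could be imposed with
. matrix A = (., 1 \ 1, 0)    // equal positive integers impose equality; 0 excludes; . leaves free
. svar y1 y2, acns(A)         // with no B options specified, B is set to the identity matrix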
bconstraints(constraintsb ) specifies a numlist of previously defined Stata constraints to be
applied to B during estimation.
beq(matrixbeq ) specifies a matrix that defines a set of equality constraints. This matrix must
be square with dimension equal to the number of equations in the underlying VAR model.
The elements of this matrix must be either missing or real numbers. The syntax of implied
constraints is analogous to the one described in aeq(), except that it applies to B rather than
to A.
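A minimal two-variable sketch that combines aeq() and beq(), again with hypothetical dependent variables y1 and y2, mirrors the three-variable example fit later in this entry:
. matrix A = (1, 0 \ ., 1)    // lower triangular with ones on the diagonal
. matrix B = (., 0 \ 0, .)    // diagonal with free variance parameters
. svar y1 y2, aeq(A) beq(B)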
noisure specifies that the VAR model coefficients be estimated via one-step seemingly unrelated
regression when varconstraints() is specified. By default, svar estimates the coefficients in
the VAR model via iterated seemingly unrelated regression when varconstraints() is specified.
When the varconstraints() option is not specified, the VAR model coefficient estimates are
obtained via OLS, a noniterative procedure. As a result, noisure may be specified only with
varconstraints().
dfk specifies that a small-sample degrees-of-freedom adjustment be used when estimating Σ, the
covariance matrix of the VAR disturbances. Specifically, 1/(T − m) is used instead of the large-
sample divisor 1/T , where m is the average number of parameters in the functional form for yt
over the K equations.
small causes svar to calculate and report small-sample t and F statistics instead of the large-sample
normal and χ2 statistics.
noidencheck requests that the Amisano and Giannini (1997) check for local identification not be
performed. This check is local to the starting values used. Because of this dependence on the
starting values, you may wish to suppress this check by specifying the noidencheck option.
However, be careful in specifying this option. Models that are not structurally identified can still
converge, thereby producing meaningless results that only appear to have meaning.
nobigf requests that svar not compute the estimated parameter vector that incorporates coefficients
that have been implicitly constrained to be zero, such as when some lags have been omitted from
a model. e(bf) is used for computing asymptotic standard errors in the postestimation commands
irf create and fcast compute. Therefore, specifying nobigf implies that the asymptotic
standard errors will not be available from irf create and fcast compute. See Fitting models
with some lags excluded in [TS] var.
Reporting
level(#); see [R] Estimation options.
full shows constrained parameters in table.
var specifies that the output from var also be displayed. By default, the underlying VAR model is
fit quietly.
lutstats specifies that the Lütkepohl versions of the lag-order selection statistics be computed. See
Methods and formulas in [TS] varsoc for a discussion of these statistics.
nocnsreport; see [R] Estimation options.
display options: noci, nopvalues, cformat(% fmt), pformat(% fmt), and sformat(% fmt); see
[R] Estimation options.
Maximization
maximize options: difficult, technique(algorithm spec), iterate(#), [no]log, trace,
gradient, showstep, hessian, showtolerance, tolerance(#), ltolerance(#),
nrtolerance(#), nonrtolerance, and from(init specs); see [R] Maximize. These options are
seldom used.
The following option is available with svar but is not shown in the dialog box:
coeflegend; see [R] Estimation options.
Introduction
This entry assumes that you have already read [TS] var intro and [TS] var; if not, please do. Here
we illustrate how to fit SVAR models in Stata subject to short-run and long-run restrictions. For more
detailed information on SVAR models, see Amisano and Giannini (1997) and Hamilton (1994). For
good introductions to VAR models, see Lütkepohl (2005), Hamilton (1994), Stock and Watson (2001),
and Becketti (2020).
In the Cholesky-restricted short-run model, $\tilde{\mathbf{A}}$ is a lower triangular matrix with ones on the
diagonal and $\tilde{\mathbf{B}}$ is a diagonal matrix. Because the $\mathbf{P}$ matrix for this model is
$\mathbf{P}_{\mathrm{sr}} = \tilde{\mathbf{A}}^{-1}\tilde{\mathbf{B}}$, its estimate, $\widehat{\mathbf{P}}_{\mathrm{sr}}$, obtained by plugging in estimates
of $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{B}}$, should equal the Cholesky decomposition of $\widehat{\boldsymbol{\Sigma}}$.
To illustrate, we use the German macroeconomic data discussed in Lütkepohl (2005) and used
in [TS] var. In this example, y_t = (dln_inv, dln_inc, dln_consump), where dln_inv is the
first difference of the log of investment, dln_inc is the first difference of the log of income, and
dln_consump is the first difference of the log of consumption. Because the first difference of the
natural log of a variable can be treated as an approximation of the percentage change in that variable,
we will refer to these variables as percentage changes in inv, inc, and consump, respectively.
We will impose the Cholesky restrictions on this system by applying equality constraints with the
constraint matrices
$$\mathbf{A} = \begin{pmatrix} 1 & 0 & 0 \\ . & 1 & 0 \\ . & . & 1 \end{pmatrix}
\qquad\text{and}\qquad
\mathbf{B} = \begin{pmatrix} . & 0 & 0 \\ 0 & . & 0 \\ 0 & 0 & . \end{pmatrix}$$
With these structural restrictions, we assume that the percentage change in inv is not contemporaneously
affected by the percentage changes in either inc or consump. We also assume that the
percentage change of inc is affected by contemporaneous changes in inv but not consump. Finally,
we assume that percentage changes in consump are affected by contemporaneous changes in both
inv and inc.
The following commands fit an SVAR model with these constraints.
. use https://www.stata-press.com/data/r18/lutkepohl2
(Quarterly SA West German macro data, Bil DM, from Lutkepohl 1993 Table E.1)
. matrix A = (1,0,0\.,1,0\.,.,1)
. matrix B = (.,0,0\0,.,0\0,0,.)
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)
Estimating short-run parameters
(output omitted )
Structural vector autoregression
( 1) [/A]1_1 = 1
( 2) [/A]1_2 = 0
( 3) [/A]1_3 = 0
( 4) [/A]2_2 = 1
( 5) [/A]2_3 = 0
( 6) [/A]3_3 = 1
( 7) [/B]1_2 = 0
( 8) [/B]1_3 = 0
( 9) [/B]2_1 = 0
(10) [/B]2_3 = 0
(11) [/B]3_1 = 0
(12) [/B]3_2 = 0
Sample: 1960q4 thru 1978q4 Number of obs = 73
Exactly identified model Log likelihood = 606.307
                 Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
/A
1_1 1 (constrained)
2_1 -.0336288 .0294605 -1.14 0.254 -.0913702 .0241126
3_1 -.0435846 .0194408 -2.24 0.025 -.0816879 -.0054812
1_2 0 (constrained)
2_2 1 (constrained)
3_2 -.424774 .0765548 -5.55 0.000 -.5748187 -.2747293
1_3 0 (constrained)
2_3 0 (constrained)
3_3 1 (constrained)
/B
1_1 .0438796 .0036315 12.08 0.000 .036762 .0509972
2_1 0 (constrained)
3_1 0 (constrained)
1_2 0 (constrained)
2_2 .0110449 .0009141 12.08 0.000 .0092534 .0128365
3_2 0 (constrained)
1_3 0 (constrained)
2_3 0 (constrained)
3_3 .0072243 .0005979 12.08 0.000 .0060525 .0083962
The SVAR output has four parts: an iteration log, a display of the constraints imposed, a header with
sample and SVAR log-likelihood information, and a table displaying the estimates of the parameters
from the A and B matrices. From the output above, we can see that the equality constraint matrices
supplied to svar imposed the intended constraints and that the SVAR header informs us that the model
we fit is just identified. The estimates of /A:2_1, /A:3_1, and /A:3_2 are all negative. Because the
off-diagonal elements of the A matrix contain the negative of the actual contemporaneous effects,
the estimated effects are positive, as expected.
The estimates $\widehat{\mathbf{A}}$ and $\widehat{\mathbf{B}}$ are stored in e(A) and e(B), respectively, allowing us to compute the
estimated Cholesky decomposition.
. matrix Aest = e(A)
. matrix Best = e(B)
. matrix chol_est = inv(Aest)*Best
. matrix list chol_est
chol_est[3,3]
dln_inv dln_inc dln_consump
dln_inv .04387957 0 0
dln_inc .00147562 .01104494 0
dln_consump .00253928 .0046916 .00722432
svar stores the estimated Σ from the underlying var in e(Sigma). The output below illustrates
the computation of the Cholesky decomposition of e(Sigma). It is the same as the output computed
from the SVAR estimates.
. matrix sig_var = e(Sigma)
. matrix chol_var = cholesky(sig_var)
. matrix list chol_var
chol_var[3,3]
dln_inv dln_inc dln_consump
dln_inv .04387957 0 0
dln_inc .00147562 .01104494 0
dln_consump .00253928 .0046916 .00722432
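One way to confirm that the two matrices agree numerically is to compare them with the mreldif() matrix function; a minimal check, assuming the matrices computed above are still in memory:
. display mreldif(chol_est, chol_var)    // relative difference; should be essentially zero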
We might now wonder why we bother obtaining parameter estimates via nonlinear estimation if
we can obtain them simply by a transform of the estimates produced by var. When the model is just
identified, as in the previous example, the SVAR parameter estimates can be computed via a transform
of the VAR estimates. However, when the model is overidentified, such is not the case.
To illustrate, we now impose the additional overidentifying restriction that the percentage change in
inc is not contemporaneously affected by the percentage change in inv; that is, we also constrain
A[2,1] to be zero. The output below contains the commands and results we obtained by fitting this
overidentified model on the Lütkepohl data.
. matrix B = (.,0,0\0,.,0\0,0,.)
. matrix A = (1,0,0\0,1,0\.,.,1)
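The table below presumably comes from the same svar call as in the previous example, now picking up the additional restriction through the redefined A matrix; a sketch of that call, with the iteration log, constraint display, and header omitted:
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)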
                 Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
/A
1_1 1 (constrained)
2_1 0 (constrained)
3_1 -.0435911 .0192696 -2.26 0.024 -.0813589 -.0058233
1_2 0 (constrained)
2_2 1 (constrained)
3_2 -.4247741 .0758806 -5.60 0.000 -.5734973 -.2760508
1_3 0 (constrained)
2_3 0 (constrained)
3_3 1 (constrained)
/B
1_1 .0438796 .0036315 12.08 0.000 .036762 .0509972
2_1 0 (constrained)
3_1 0 (constrained)
1_2 0 (constrained)
2_2 .0111431 .0009222 12.08 0.000 .0093356 .0129506
3_2 0 (constrained)
1_3 0 (constrained)
2_3 0 (constrained)
3_3 .0072243 .0005979 12.08 0.000 .0060525 .0083962
The footer in this example reports a test of the overidentifying restriction. The null hypothesis of this
test is that any overidentifying restrictions are valid. In the case at hand, we cannot reject this null
hypothesis at any of the conventional levels.
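The test statistic and its degrees of freedom are also saved in e(); a minimal sketch of retrieving them after the overidentified fit above, using the stored results documented later in this entry:
. display "LR chi2(" e(oid_df) ") = " e(chi2_oid)
. display "p-value = " chi2tail(e(oid_df), e(chi2_oid))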
                 Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
dln_inv
dln_inv
L1. -.3196318 .1192898 -2.68 0.007 -.5534355 -.0858282
L2. -.1605508 .118767 -1.35 0.176 -.39333 .0722283
dln_inc
L1. .1459851 .5188451 0.28 0.778 -.8709326 1.162903
L2. .1146009 .508295 0.23 0.822 -.881639 1.110841
dln_consump
L1. .9612288 .6316557 1.52 0.128 -.2767936 2.199251
L2. .9344001 .6324034 1.48 0.140 -.3050877 2.173888
dln_inc
dln_inv
L1. .0439309 .0302933 1.45 0.147 -.0154427 .1033046
L2. .0500302 .0301605 1.66 0.097 -.0090833 .1091437
dln_inc
L1. -.1527311 .131759 -1.16 0.246 -.4109741 .1055118
L2. .0191634 .1290799 0.15 0.882 -.2338285 .2721552
dln_consump
L1. .2884992 .1604069 1.80 0.072 -.0258926 .6028909
L2. -.0102 .1605968 -0.06 0.949 -.3249639 .3045639
dln_consump
dln_inv
L1. -.002423 .0244142 -0.10 0.921 -.050274 .045428
L2. .0338806 .0243072 1.39 0.163 -.0137607 .0815219
dln_inc
L1. .2248134 .1061884 2.12 0.034 .0166879 .4329389
L2. .3549135 .1040292 3.41 0.001 .1510199 .558807
dln_consump
L1. -.2639695 .1292766 -2.04 0.041 -.517347 -.010592
L2. -.0222264 .1294296 -0.17 0.864 -.2759039 .231451
The equation-level model tests reported in the header indicate that we cannot reject the null
hypotheses that all the coefficients in the first equation are zero, nor can we reject the null that all the
coefficients in the second equation are zero at the 5% significance level. We use a combination of theory
and the p-values from the output above to place some exclusion restrictions on the underlying VAR(2)
model. Specifically, in the equation for the percentage change of inv, we constrain the coefficients
on L2.dln_inv, L.dln_inc, L2.dln_inc, and L2.dln_consump to be zero. In the equation for
dln_inc, we constrain the coefficients on L2.dln_inv, L2.dln_inc, and L2.dln_consump to be
zero. Finally, in the equation for dln_consump, we constrain L.dln_inv and L2.dln_consump to
be zero. We then refit the SVAR model from the previous example.
. constraint 1 [dln_inv]L2.dln_inv = 0
. constraint 2 [dln_inv]L.dln_inc = 0
. constraint 3 [dln_inv]L2.dln_inc = 0
. constraint 4 [dln_inv]L2.dln_consump = 0
. constraint 5 [dln_inc]L2.dln_inv = 0
. constraint 6 [dln_inc]L2.dln_inc = 0
. constraint 7 [dln_inc]L2.dln_consump = 0
. constraint 8 [dln_consump]L.dln_inv = 0
. constraint 9 [dln_consump]L2.dln_consump = 0
. svar dln_inv dln_inc dln_consump if qtr<=tq(1978q4), aeq(A) beq(B)
> varconst(1/9) noislog
Estimating short-run parameters
(output omitted )
Structural vector autoregression
( 1) [/A]1_1 = 1
( 2) [/A]1_2 = 0
( 3) [/A]1_3 = 0
( 4) [/A]2_1 = 0
( 5) [/A]2_2 = 1
( 6) [/A]2_3 = 0
( 7) [/A]3_3 = 1
( 8) [/B]1_2 = 0
( 9) [/B]1_3 = 0
(10) [/B]2_1 = 0
(11) [/B]2_3 = 0
(12) [/B]3_1 = 0
(13) [/B]3_2 = 0
                 Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
/A
1_1 1 (constrained)
2_1 0 (constrained)
3_1 -.0418708 .0187579 -2.23 0.026 -.0786356 -.0051061
1_2 0 (constrained)
2_2 1 (constrained)
3_2 -.4255808 .0745298 -5.71 0.000 -.5716565 -.2795051
1_3 0 (constrained)
2_3 0 (constrained)
3_3 1 (constrained)
/B
1_1 .0451851 .0037395 12.08 0.000 .0378557 .0525145
2_1 0 (constrained)
3_1 0 (constrained)
1_2 0 (constrained)
2_2 .0113723 .0009412 12.08 0.000 .0095276 .013217
3_2 0 (constrained)
1_3 0 (constrained)
2_3 0 (constrained)
3_3 .0072417 .0005993 12.08 0.000 .006067 .0084164
If we displayed the underlying VAR(2) results by using the var option, we would see that most of
the unconstrained coefficients are now significant at the 10% level and that none of the equation-level
model statistics fail to reject the null hypothesis at the 10% level. The svar output reveals that the
p-value of the overidentification test rose and that the coefficient on /A:3_1 is still insignificant at
the 1% level but not at the 5% level.
Before moving on to models with long-run constraints, consider these limitations. We cannot place
constraints on the elements of A in terms of the elements of B, or vice versa. This limitation is
imposed by the form of the check for identification derived by Amisano and Giannini (1997). As
noted in Methods and formulas, this test requires separate constraint matrices for the parameters in
A and B. Another limitation is that we cannot mix short-run and long-run constraints.
Long-run SVAR models

As discussed in [TS] var intro, a long-run SVAR model has the form

$$\mathbf{y}_t = \mathbf{C}\,\mathbf{e}_t$$
In long-run models, the constraints are placed on the elements of C, and the free parameters are
estimated. These constraints are often exclusion restrictions. For instance, constraining C[1, 2] to be
zero can be interpreted as setting the long-run response of variable 1 to the structural shocks driving
variable 2 to be zero.
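As a syntax sketch with hypothetical variables y1 and y2, that single exclusion restriction on C[1, 2] could be imposed as follows, leaving the remaining long-run responses free:
. matrix C = (., 0 \ ., .)    // constrain C[1,2] = 0; other elements are free parameters
. svar D.y1 D.y2, lreq(C)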
Similar to the short-run model, the Plr matrix such that Plr Plr′ = Σ identifies the structural
impulse–response functions. Plr = C is identified by the restrictions placed on the parameters in
C. There are K² parameters in C, and the order condition for identification requires that there be
at least K² − K(K + 1)/2 restrictions placed on those parameters. As in the short-run model, this
order condition is necessary but not sufficient, so the Amisano and Giannini (1997) check for local
identification is performed by default.
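For the bivariate example that follows, the order condition works out as shown below; this is simply the arithmetic from the expression above, displayed for concreteness:
. scalar K = 2
. display "parameters in C:         " K^2
. display "maximum estimable:       " K*(K+1)/2
. display "restrictions required:   " K^2 - K*(K+1)/2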
To illustrate, we fit a long-run SVAR model to the growth rates of money and GDP, d.ln_m1 and
d.ln_gdp, constraining each variable's long-run response to the other variable's structural shock to
be zero.
. use https://www.stata-press.com/data/r18/m1gdp
. matrix lr = (.,0\0,.)
. svar d.ln_m1 d.ln_gdp, lreq(lr)
Estimating long-run parameters
(output omitted )
Structural vector autoregression
( 1) [/C]1_2 = 0
( 2) [/C]2_1 = 0
Sample: 1959q4 thru 2002q2 Number of obs = 171
Overidentified model Log likelihood = 1151.614
                 Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
/C
1_1 .0301007 .0016277 18.49 0.000 .0269106 .0332909
2_1 0 (constrained)
1_2 0 (constrained)
2_2 .0129691 .0007013 18.49 0.000 .0115946 .0143436
We have assumed that the underlying VAR model has 2 lags; four of the five selection-order criteria
computed by varsoc (see [TS] varsoc) recommended this choice. The test of the overidentifying
restrictions provides no indication that it is not valid.
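The selection-order statistics mentioned here can be recomputed with varsoc; a minimal sketch, assuming the same tsset dataset is in memory (output omitted):
. varsoc d.ln_m1 d.ln_gdp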
Stored results
svar stores the following in e():
Scalars
e(N) number of observations
e(N cns) number of constraints
e(k eq) number of equations in e(b)
e(k dv) number of dependent variables
e(ll) log likelihood from svar
e(N gaps var) number of gaps in the sample
e(k var) number of coefficients in a VAR model
e(k eq var) number of equations in an underlying VAR model
e(k dv var) number of dependent variables in an underlying VAR model
e(df eq var) average number of parameters in an equation
e(df r var) if small, residual degrees of freedom
e(obs # var) number of observations on equation #
e(k # var) number of coefficients in equation #
e(df m# var) model degrees of freedom for equation #
e(df r# var) residual degrees of freedom for equation # (small only)
e(r2 # var) R2 for equation #
e(ll # var) log likelihood for equation # VAR model
e(chi2 # var) χ2 statistic for equation #
e(F # var) F statistic for equation # (small only)
e(rmse # var) root mean squared error for equation #
e(mlag var) highest lag in VAR model
e(tparms var) number of parameters in all equations
e(aic var) Akaike information criterion
e(hqic var) Hannan–Quinn information criterion
e(sbic var) Schwarz’s Bayesian information criterion
e(fpe var) final prediction error
e(ll var) log likelihood from var
e(detsig var) determinant of e(Sigma)
e(detsig ml var) determinant of the ML estimate of Σ
e(tmin) first time period in the sample
e(tmax) maximum time
e(chi2 oid) overidentification test
e(oid df) number of overidentifying restrictions
e(rank) rank of e(V)
e(ic ml) number of iterations
e(rc ml) return code from ml
Macros
e(cmd) svar
e(cmdline) command as typed
e(lrmodel) long-run model, if specified
e(lags var) lags in model
e(depvar var) names of dependent variables
e(endog var) names of endogenous variables
e(exog var) names of exogenous variables, if specified
e(nocons var) nocons, if noconstant specified
e(cns lr) long-run constraints
e(cns a) cross-parameter equality constraints on A
e(cns b) cross-parameter equality constraints on B
e(dfk var) alternate divisor (dfk), if specified
e(eqnames var) names of equations
e(lutstats var) lutstats, if specified
e(constraints var) constraints var, if there are constraints on VAR model
e(small) small, if specified
e(tsfmt) format of timevar
e(timevar) name of timevar
e(title) title in estimation output
e(properties) b V
e(predict) program used to implement predict
Matrices
e(b) coefficient vector
e(Cns) constraints matrix
e(Sigma) estimated Σ matrix
e(V) variance–covariance matrix of the estimators
e(b var) coefficient vector of underlying VAR model
e(V var) VCE of underlying VAR model
e(bf var) full coefficient vector with zeros in dropped lags
e(G var) Gamma matrix stored by var; see Methods and formulas in [TS] var
e(aeq) aeq(matrix), if specified
e(acns) acns(matrix), if specified
e(beq) beq(matrix), if specified
e(bcns) bcns(matrix), if specified
e(lreq) lreq(matrix), if specified
e(lrcns) lrcns(matrix), if specified
e(Cns var) constraint matrix from var, if varconstraints() is specified
e(A) estimated A matrix, if a short-run model
e(B) estimated B matrix
e(C) estimated C matrix, if a long-run model
e(A1) estimated A matrix, if a long-run model
Functions
e(sample) marks estimation sample
Note that results stored in r() are updated when the command is replayed and will be replaced when
any r-class command is run after the estimation command.
Methods and formulas

The log likelihood for the short-run SVAR model is

$$L(\mathbf{A},\mathbf{B}) = -\frac{NK}{2}\,\ln(2\pi) + \frac{N}{2}\,\ln\!\bigl(|\mathbf{W}|^{2}\bigr) - \frac{N}{2}\,\operatorname{tr}\bigl(\mathbf{W}'\mathbf{W}\widehat{\boldsymbol{\Sigma}}\bigr)$$

where $\mathbf{W} = \mathbf{B}^{-1}\mathbf{A}$.
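As an illustration, this log likelihood can be recomputed from the stored results after a short-run fit; the sketch below assumes the model was fit without the dfk option, so that e(Sigma) holds the ML estimate of Σ:
. matrix W = inv(e(B))*e(A)          // W = B^-1 A
. matrix WWS = W'*W*e(Sigma)         // W'W Sigma-hat
. matrix T = trace(WWS)              // 1 x 1 matrix holding the trace
. scalar N = e(N)
. scalar K = e(k_dv)
. scalar ll = -N*K/2*ln(2*_pi) + N/2*ln(det(W)^2) - N/2*T[1,1]
. display ll "  versus stored e(ll) = " e(ll)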
When there are long-run constraints, because $\mathbf{C} = \bar{\mathbf{A}}^{-1}\mathbf{B}$ and $\mathbf{A} = \mathbf{I}_K$, we have
$\mathbf{W} = \mathbf{B}^{-1} = \mathbf{C}^{-1}\bar{\mathbf{A}}^{-1} = (\bar{\mathbf{A}}\mathbf{C})^{-1}$. Substituting the last term for $\mathbf{W}$ in the short-run log
likelihood produces the long-run log likelihood

$$L(\mathbf{C}) = -\frac{NK}{2}\,\ln(2\pi) + \frac{N}{2}\,\ln\!\bigl(|\widetilde{\mathbf{W}}|^{2}\bigr) - \frac{N}{2}\,\operatorname{tr}\bigl(\widetilde{\mathbf{W}}'\widetilde{\mathbf{W}}\widehat{\boldsymbol{\Sigma}}\bigr)$$

where $\widetilde{\mathbf{W}} = (\bar{\mathbf{A}}\mathbf{C})^{-1}$.
For both the short-run and the long-run models, the maximization is performed by the scoring
method. See Harvey (1990) for a discussion of this method.
Based on results from Amisano and Giannini (1997), the score vector for the short-run model is

$$\frac{\partial L(\mathbf{A},\mathbf{B})}{\partial[\operatorname{vec}(\mathbf{A}),\operatorname{vec}(\mathbf{B})]} = N\Bigl[\{\operatorname{vec}(\mathbf{W}'^{-1})\}' - \{\operatorname{vec}(\mathbf{W})\}'\,(\widehat{\boldsymbol{\Sigma}}\otimes\mathbf{I}_K)\Bigr]\Bigl[(\mathbf{I}_K\otimes\mathbf{B}^{-1}),\ -(\mathbf{A}'\mathbf{B}'^{-1}\otimes\mathbf{B}^{-1})\Bigr]$$

and the expected information matrix is

$$\mathcal{I}\bigl[\operatorname{vec}(\mathbf{A}),\operatorname{vec}(\mathbf{B})\bigr] = N\begin{bmatrix}(\mathbf{W}^{-1}\otimes\mathbf{B}'^{-1})\\ -(\mathbf{I}_K\otimes\mathbf{B}'^{-1})\end{bmatrix}(\mathbf{I}_{K^2}+\oplus)\begin{bmatrix}(\mathbf{W}'^{-1}\otimes\mathbf{B}^{-1}),\ -(\mathbf{I}_K\otimes\mathbf{B}^{-1})\end{bmatrix}$$
where ⊕ is the commutation matrix defined in Magnus and Neudecker (2019, 54–55).
Using results from Amisano and Giannini (1997), we can derive the score vector and the expected
information matrix for the case with long-run restrictions. The score vector is

$$\frac{\partial L(\mathbf{C})}{\partial\operatorname{vec}(\mathbf{C})} = N\Bigl[\{\operatorname{vec}(\mathbf{W}'^{-1})\}' - \{\operatorname{vec}(\mathbf{W})\}'\,(\widehat{\boldsymbol{\Sigma}}\otimes\mathbf{I}_K)\Bigr]\Bigl[-(\bar{\mathbf{A}}'^{-1}\mathbf{C}'^{-1}\otimes\mathbf{C}^{-1})\Bigr]$$
Checking for identification is based on the results of Rothenberg (1971) and Amisano and
Giannini (1997). For the short-run case, the model is identified if the matrix

$$\mathbf{V}^{*}_{\mathrm{sr}} = \begin{bmatrix}\mathbf{N}_K & \mathbf{N}_K\\ \mathbf{R}_a(\mathbf{W}'\otimes\mathbf{B}) & \mathbf{0}_{K^2}\\ \mathbf{0}_{K^2} & \mathbf{R}_b(\mathbf{I}_K\otimes\mathbf{B})\end{bmatrix}$$

has full column rank of 2K², where N_K = (1/2)(I_{K²} + ⊕), R_a is the constraint matrix for the
parameters in A (that is, R_a vec(A) = r_a), and R_b is the constraint matrix for the parameters in B
(that is, R_b vec(B) = r_b).
For the long-run case, based on results from the C model in Amisano and Giannini (1997), the
model is identified if the corresponding matrix, built from N_K and R_c, has full column rank of K²,
where R_c is the constraint matrix for the parameters in C; that is, R_c vec(C) = r_c.
The likelihood-ratio test of the overidentifying restrictions is computed as

$$\mathrm{LR} = 2\,(\mathrm{LL}_{\mathrm{var}} - \mathrm{LL}_{\mathrm{svar}})$$
where LR is the value of the test statistic against the null hypothesis that the overidentifying restrictions
are valid, LLvar is the log likelihood from the underlying VAR(p) model, and LLsvar is the log likelihood
from the SVAR model. The test statistic is asymptotically distributed as χ2 (q), where q is the number
of overidentifying restrictions. Amisano and Giannini (1997, 38–39) emphasize that, because this test
of the validity of the overidentifying restrictions is an omnibus test, it can be interpreted as a test of
the null hypothesis that all the restrictions are valid.
Because constraints might not be independent either by construction or because of the data, the
number of restrictions is not necessarily equal to the number of constraints. The rank of e(V) gives the
number of parameters that were independently estimated after applying the constraints. The maximum
number of parameters that can be estimated in an identified short-run or long-run SVAR model is
K(K + 1)/2. This implies that the number of overidentifying restrictions, q , is equal to K(K + 1)/2
minus the rank of e(V).
The number of overidentifying restrictions is also linked to the order condition for each model. In
a short-run SVAR model, there are 2K² parameters. Because no more than K(K + 1)/2 parameters
may be estimated, the order condition for a short-run SVAR model is that at least 2K² − K(K + 1)/2
restrictions be placed on the model. Similarly, there are K² parameters in a long-run SVAR model.
Because no more than K(K + 1)/2 parameters may be estimated, the order condition for a long-run
SVAR model is that at least K² − K(K + 1)/2 restrictions be placed on the model.
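After an svar fit, the pieces of this calculation are available in the stored results; a minimal sketch of the arithmetic, assuming the estimation results are still active:
. display "maximum estimable parameters: " e(k_dv)*(e(k_dv)+1)/2
. display "independently estimated:      " e(rank)
. display "overidentifying restrictions: " e(k_dv)*(e(k_dv)+1)/2 - e(rank)
. display "stored e(oid_df):             " e(oid_df)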
Acknowledgment
We thank Gianni Amisano of the Board of Governors of the Federal Reserve System for his helpful
comments.
References
Amisano, G., and C. Giannini. 1997. Topics in Structural VAR Econometrics. 2nd ed., revised and enlarged. Heidelberg:
Springer.
Baum, C. F., and S. Hurn. 2021. Environmental Econometrics Using Stata. College Station, TX: Stata Press.
Becketti, S. 2020. Introduction to Time Series Using Stata. Rev. ed. College Station, TX: Stata Press.
Christiano, L. J., M. Eichenbaum, and C. L. Evans. 1999. Monetary policy shocks: What have we learned and to
what end? In Handbook of Macroeconomics: Volume 1A, ed. J. B. Taylor and M. Woodford. New York: Elsevier.
https://doi.org/10.1016/S1574-0048(99)01005-8.
Hamilton, J. D. 1994. Time Series Analysis. Princeton, NJ: Princeton University Press.
Harvey, A. C. 1990. The Econometric Analysis of Time Series. 2nd ed. Cambridge, MA: MIT Press.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis. 2nd ed. New York: Springer.
———. 2005. New Introduction to Multiple Time Series Analysis. New York: Springer.
Magnus, J. R., and H. Neudecker. 2019. Matrix Differential Calculus with Applications in Statistics and Econometrics.
3rd ed. Hoboken, NJ: Wiley.
Rothenberg, T. J. 1971. Identification in parametric models. Econometrica 39: 577–591. https://doi.org/10.2307/1913267.
Schenck, D. 2016a. Long-run restrictions in a structural vector autoregression. The Stata Blog: Not Elsewhere Classified.
http://blog.stata.com/2016/10/27/long-run-restrictions-in-a-structural-vector-autoregression/.
———. 2016b. Structural vector autoregression models. The Stata Blog: Not Elsewhere Classified.
http://blog.stata.com/2016/09/20/structural-vector-autoregression-models/.
Stock, J. H., and M. W. Watson. 2001. Vector autoregressions. Journal of Economic Perspectives 15: 101–115.
https://doi.org/10.1257/jep.15.4.101.
Also see
[TS] var svar postestimation — Postestimation tools for svar
[TS] tsset — Declare data to be time-series data
[TS] var — Vector autoregressive models+
[TS] var intro — Introduction to vector autoregressive models
[TS] var ivsvar — Instrumental-variables structural vector autoregressive models+
[TS] varbasic — Fit a simple VAR and graph IRFs or FEVDs
[TS] vec — Vector error-correction models
[U] 20 Estimation and postestimation commands
Stata Dynamic Stochastic General Equilibrium Models Reference Manual