
VAR Models


For a set of n time series variables y_t = (y_1t, y_2t, ..., y_nt)', a VAR model of order p (VAR(p)) can be written as:

(1) y_t = A_1 y_{t-1} + A_2 y_{t-2} + ... + A_p y_{t-p} + u_t

where the A_i's are (n x n) coefficient matrices and u_t = (u_1t, u_2t, ..., u_nt)' is an unobservable i.i.d. zero mean error term.
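A minimal simulation sketch of such a system (the coefficient matrices A1, A2 below are made-up illustrative values, not estimates from any data):

```python
import random

# Made-up (2x2) coefficient matrices for an illustrative bivariate VAR(2)
A1 = [[0.5, 0.1], [0.2, 0.3]]
A2 = [[0.2, 0.0], [0.0, 0.1]]

def mat_vec(A, x):
    # (n x n) matrix times n-vector
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def simulate_var(T, seed=0):
    # y_t = A1 y_{t-1} + A2 y_{t-2} + u_t, with i.i.d. N(0,1) errors u_t
    rng = random.Random(seed)
    y = [[0.0, 0.0], [0.0, 0.0]]       # two pre-sample start-up values
    for _ in range(T):
        u = [rng.gauss(0, 1), rng.gauss(0, 1)]
        lag1, lag2 = mat_vec(A1, y[-1]), mat_vec(A2, y[-2])
        y.append([lag1[i] + lag2[i] + u[i] for i in range(2)])
    return y[2:]                       # drop the start-up values

series = simulate_var(200)
print(len(series), len(series[0]))     # 200 2
```

Each observation is a linear function of its own past, the other variable's past, and the current error, exactly as in (1).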


VAR Analysis
(Enders Chapter 5)

Consider a two-variable VAR(1) (n = 2).
(1) y_t = b_10 + b_12 z_t + c_11 y_{t-1} + c_12 z_{t-1} + ε_yt

(2) z_t = b_20 + b_21 y_t + c_21 y_{t-1} + c_22 z_{t-1} + ε_zt

with ε_it ~ i.i.d.(0, σ_i²) and cov(ε_y, ε_z) = 0.

In matrix form:

(3) [ 1     -b_12 ] [ y_t ]   [ b_10 ]   [ c_11  c_12 ] [ y_{t-1} ]   [ ε_yt ]
    [ -b_21  1    ] [ z_t ] = [ b_20 ] + [ c_21  c_22 ] [ z_{t-1} ] + [ ε_zt ]


More simply:

(4) B X_t = Γ_0 + Γ_1 X_{t-1} + ε_t        Structural VAR (SVAR) or the Primitive System

To normalize the LHS vector, we need to multiply the equation by the inverse of B:
B^{-1} B X_t = B^{-1} Γ_0 + B^{-1} Γ_1 X_{t-1} + B^{-1} ε_t, thus:

(5) X_t = A_0 + A_1 X_{t-1} + e_t        VAR in standard form (unstructured VAR = UVAR).
or:

(6) [ y_t ]   [ a_10 ]   [ a_11  a_12 ] [ y_{t-1} ]   [ e_1t ]
    [ z_t ] = [ a_20 ] + [ a_21  a_22 ] [ z_{t-1} ] + [ e_2t ]


These error terms are composites of the structural innovations from the primitive system.

What are their characteristics/moments?
e_t = B^{-1} ε_t, where B^{-1} = (B*)^T / |B|, with B* = cofactor matrix of B and (B*)^T = its transpose:

B^{-1} = 1/(1 - b_12 b_21) [ 1     b_12 ]
                           [ b_21  1    ]
Thus:

(7) [ e_1t ]                      [ 1     b_12 ] [ ε_yt ]
    [ e_2t ] = 1/(1 - b_12 b_21) [ b_21  1    ] [ ε_zt ]


Or:

e_1t = (ε_yt + b_12 ε_zt)/Δ, where Δ = 1 - b_12 b_21

e_2t = (ε_zt + b_21 ε_yt)/Δ


The ε's are white noise, thus the e's are (0, σ_i²):

E(e_it) = 0

Var(e_1t) = E(e_1t²) = E[(ε_yt + b_12 ε_zt)²]/Δ² = (σ_y² + b_12² σ_z²)/Δ²

which is time independent, and the same is true for Var(e_2t).

But the covariances are not zero:

Cov(e_1t, e_2t) = E(e_1t e_2t) = E[(ε_yt + b_12 ε_zt)(ε_zt + b_21 ε_yt)]/Δ² = (b_12 σ_z² + b_21 σ_y²)/Δ² ≠ 0

So the shocks in a standard VAR are correlated. The only way to remove the correlation and make the covariance zero is to assume that the contemporaneous effects are zero: b_12 = b_21 = 0.

The var/covar matrix of the VAR shocks:

Σ = [ σ_1²  σ_12 ]
    [ σ_21  σ_2² ]
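These moments can be checked numerically. A sketch with made-up illustrative values for b_12, b_21 and the structural standard deviations (the simulated Var(e_1) and Cov(e_1, e_2) should match the analytic expressions above):

```python
import random

# Illustrative (made-up) contemporaneous coefficients and structural std devs
b12, b21 = 0.4, 0.3
sy, sz = 1.0, 2.0
delta = 1 - b12 * b21

rng = random.Random(42)
N = 200_000
e1, e2 = [], []
for _ in range(N):
    ey, ez = rng.gauss(0, sy), rng.gauss(0, sz)
    e1.append((ey + b12 * ez) / delta)   # composite reduced-form errors
    e2.append((ez + b21 * ey) / delta)

var_e1 = sum(x * x for x in e1) / N
cov_12 = sum(a * b for a, b in zip(e1, e2)) / N

# analytic moments derived in the text
var_theory = (sy**2 + b12**2 * sz**2) / delta**2
cov_theory = (b12 * sz**2 + b21 * sy**2) / delta**2
print(round(var_e1, 2), round(var_theory, 2))
print(round(cov_12, 2), round(cov_theory, 2))
```

The nonzero covariance disappears only if both contemporaneous coefficients are set to zero, as the text notes.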

Identification
We can estimate (6) with OLS, since the RHS consists of predetermined variables and the
error terms are white noise. The errors are serially uncorrelated but correlated across
equations. Although SUR could be used in these cases, here we do not need it since all the
RHS variables are identical, so there is no efficiency gain in using SUR over OLS. But we
cannot use OLS to estimate the SVAR because of the contemporaneous effects, which are
correlated with the ε's (structural innovations).

Our goal:
To see how a structural innovation ε_it affects the dependent variables in our original model.
We estimate the reduced form (standard VAR), so how can we recover the parameters of the
primitive system from the estimated system?
VAR: 9 parameters (= 6 coefficient estimates + 2 variance estimates + 1 covariance estimate).
SVAR: 10 parameters (= 8 coefficients + 2 variances). It is underidentified.
Sims (1980) suggested using a recursive system. For this we need to restrict some of the
parameters in the VAR. Ex: assume y is contemporaneously affected by z but not vice-versa.
Thus we assume that b_21 = 0. In other words, y is affected by both structural innovations of y
and z, while z is affected only by its own structural innovation. This is a triangular
decomposition, also called a Cholesky decomposition. Then we have 9 parameter estimates
and 9 unknown structural parameters, and the SVAR is exactly identified.

Now the SVAR system becomes:

(8) [ 1  -b_12 ] [ y_t ]   [ b_10 ]   [ c_11  c_12 ] [ y_{t-1} ]   [ ε_yt ]
    [ 0    1   ] [ z_t ] = [ b_20 ] + [ c_21  c_22 ] [ z_{t-1} ] + [ ε_zt ]

and

B^{-1} = 1/(1 - b_12 b_21) [ 1     b_12 ]   [ 1  b_12 ]
                           [ b_21  1    ] = [ 0  1    ]

Hence the VAR system in standard form can be written:

(8) [ y_t ]   [ b_10 + b_12 b_20 ]   [ c_11 + b_12 c_21   c_12 + b_12 c_22 ] [ y_{t-1} ]   [ ε_yt + b_12 ε_zt ]
    [ z_t ] = [ b_20             ] + [ c_21               c_22             ] [ z_{t-1} ] + [ ε_zt             ]


If we match the coefficients in (8) with the estimates in (6), we can extract the coefficients of the SVAR:

a_10 = b_10 + b_12 b_20        a_20 = b_20        e_1 = ε_y + b_12 ε_z
a_11 = c_11 + b_12 c_21        a_21 = c_21        e_2 = ε_z
a_12 = c_12 + b_12 c_22        a_22 = c_22        Cov_12 = (b_12 σ_z² + b_21 σ_y²)/Δ² = b_12 σ_z²
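This matching can be run in both directions. A sketch (with the Cholesky restriction b_21 = 0 imposed, and made-up structural values for the b's, c's and σ_z²) that maps the structural parameters to reduced-form ones and recovers them again:

```python
# Made-up structural parameters (with the Cholesky restriction b21 = 0)
b10, b20, b12 = 0.5, 0.3, 0.8
c11, c12, c21, c22 = 0.6, 0.1, 0.2, 0.5
sz2 = 1.5   # variance of the z structural innovation

# Structural -> reduced form, following the matching equations in the text
a10 = b10 + b12 * b20
a20 = b20
a11 = c11 + b12 * c21
a21 = c21
a12 = c12 + b12 * c22
a22 = c22
cov12 = b12 * sz2            # Cov(e1, e2) when b21 = 0

# Reduced form -> structural: invert the matching equations
b12_hat = cov12 / sz2
c11_hat = a11 - b12_hat * a21   # c21 = a21
c12_hat = a12 - b12_hat * a22   # c22 = a22
b10_hat = a10 - b12_hat * a20   # b20 = a20

print(b12_hat, c11_hat, c12_hat, b10_hat)
```

With the restriction imposed, the 9 reduced-form estimates exactly pin down the 9 structural parameters, which is what "exactly identified" means here.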

Impulse response functions
We want to trace out the time path of the effect of structural shocks on the dependent
variables of the model. For this, we first need to transform the VAR into a VMA
representation.

Rewrite the UVAR more compactly:

(5) X_t = A_0 + A_1 X_{t-1} + e_t   ⟹   X_t = (I - A_1 L)^{-1} A_0 + (I - A_1 L)^{-1} e_t





First, consider the first component on the RHS. Since A_0 is a vector of constants:

(I - A_1 L)^{-1} A_0 = (I - A_1)^{-1} A_0
 = 1/[(1 - a_11)(1 - a_22) - a_12 a_21] [ 1 - a_22    a_12   ] [ a_10 ]
                                        [ a_21      1 - a_11 ] [ a_20 ]
 = 1/[(1 - a_11)(1 - a_22) - a_12 a_21] [ (1 - a_22) a_10 + a_12 a_20 ]   [ ȳ ]
                                        [ a_21 a_10 + (1 - a_11) a_20 ] = [ z̄ ]

where ȳ and z̄ are the unconditional means of y and z.

Stability requires that the roots of (I - A_1 L) lie outside the unit circle. We will assume that this is
the case. Then, we can write the second component as:

(I - A_1 L)^{-1} e_t = Σ_{i=0}^∞ A_1^i e_{t-i} = Σ_{i=0}^∞ [ a_11  a_12 ]^i [ e_1,t-i ]
                                                           [ a_21  a_22 ]   [ e_2,t-i ]
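For the 2x2 case the stability condition is easy to check numerically: the roots of |I - A_1 L| = 0 lie outside the unit circle exactly when the eigenvalues of A_1 lie inside it. A sketch using the A_1 that appears in the worked example later in these notes:

```python
import math

# A1 from the two-variable example in these notes
a11, a12, a21, a22 = 0.7, 0.2, 0.2, 0.7

# Eigenvalues of a 2x2 matrix via the characteristic polynomial
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = tr * tr - 4 * det            # discriminant (real here)
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

print(lam1, lam2)                    # approximately 0.9 and 0.5
print(max(abs(lam1), abs(lam2)) < 1)  # True: the VAR is stable
```

Both eigenvalues are inside the unit circle, so the geometric sum above converges and the VMA representation exists.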


We can thus write the VAR as a VMA with the standard VAR's error terms:

(9) [ y_t ]   [ ȳ ]    ∞
    [ z_t ] = [ z̄ ] +  Σ  [ a_11  a_12 ]^i [ e_1,t-i ]
                      i=0 [ a_21  a_22 ]   [ e_2,t-i ]



But these are composite errors consisting of the structural innovations. We must thus replace
the e's with the ε's from (7):

e_t = 1/(1 - b_12 b_21) [ 1     b_12 ] ε_t
                        [ b_21  1    ]

(9a) [ y_t ]   [ ȳ ]                     ∞
     [ z_t ] = [ z̄ ] + 1/(1 - b_12 b_21) Σ  A_1^i [ 1     b_12 ] [ ε_y,t-i ]
                                        i=0       [ b_21  1    ] [ ε_z,t-i ]

             [ ȳ ]    ∞  [ φ_11(i)  φ_12(i) ] [ ε_y,t-i ]
           = [ z̄ ] +  Σ  [ φ_21(i)  φ_22(i) ] [ ε_z,t-i ]
                     i=0

or, more compactly, X_t = μ + Σ_{i=0}^∞ φ(i) ε_{t-i}.

Impact multipliers
They trace the impact effect of a one unit change in a structural innovation. Ex: find the
impact effect of ε_zt on y_t and z_t:

dy_t/dε_zt = φ_12(0)        dz_t/dε_zt = φ_22(0)

Let's trace the effect one period ahead, on y_{t+1} and z_{t+1}:

dy_{t+1}/dε_zt = φ_12(1)        dz_{t+1}/dε_zt = φ_22(1)

Note that this is the same as the effect on y_t and z_t of a structural innovation one period ago:

dy_t/dε_z,t-1 = φ_12(1)        dz_t/dε_z,t-1 = φ_22(1)


Impulse response functions are the plots of the effect of ε_zt on current and all future y and z.
IRs show how {y_t} or {z_t} react to different shocks.

Ex:
Impulse response function of y to a one unit change in the shock to z = φ_12(0), φ_12(1), φ_12(2), ...

Cumulated effect is the sum over the IR functions: Σ_{i=0}^n φ_12(i).

Long-run cumulated effect: lim_{n→∞} Σ_{i=0}^n φ_12(i)

In practice we cannot calculate these effects since the SVAR is underidentified. So we must
impose additional restrictions on the VAR to identify the impulse responses.
If we use the Cholesky decomposition and assume that y does not have a contemporaneous
effect on z, then b_21 = 0. Thus the error structure becomes triangular:

(10) [ e_1t ]   [ 1  b_12 ] [ ε_yt ]
     [ e_2t ] = [ 0  1    ] [ ε_zt ]

The ε_y shock doesn't affect z directly, but it affects it indirectly through its lagged effect in
the VAR.

Granger Causality: If the z shock affects e_1 and e_2, while the y shock affects e_1 but not e_2, then z is causally prior to y.

Example:
Calculate the impulse response functions on {y_t}, {z_t} of a unit change in the z shock (ε_zt) from
an estimate of a two-variable VAR(1):

y_t = 0.7 y_{t-1} + 0.2 z_{t-1} + e_1t
z_t = 0.2 y_{t-1} + 0.7 z_{t-1} + e_2t

σ_1² = σ_2² and ρ_12 = 0.8.

For this, we must get the estimates of the primitive function (SVAR) from the estimated
coefficients:
Assume the Cholesky decomposition b_21 = 0.

ρ_12 = Cov_{1,2}/(SE_1 SE_2) = b_12 σ_z²/σ² = b_12 = 0.8, so b_12 = 0.8.

Although this information is sufficient to calculate the impulse responses in this simple
model, we can extract all of the coefficients of the primitive system as follows:
a_10 = a_20 = 0: b_20 = 0 and b_10 + b_12 b_20 = b_10 = 0
a_22 = c_22 = 0.7 and a_21 = c_21 = 0.2
From a_11 = c_11 + b_12 c_21 = 0.7 we get c_11 = 0.7 - 0.8(0.2) = 0.54.
From a_12 = c_12 + b_12 c_22 = 0.2 we get c_12 = 0.2 - 0.8(0.7) = -0.36.

Substitute b_12 into (10) to get:

e_1t = ε_yt + 0.8 ε_zt
e_2t = ε_zt

A 1-unit ε_zt shock is instantaneously absorbed 100% by z and 80% by y.

Impact multipliers:

(11) y_t = 0.7 y_{t-1} + 0.2 z_{t-1} + ε_yt + 0.8 ε_zt
     z_t = 0.2 y_{t-1} + 0.7 z_{t-1} + ε_zt

At t=0: dy_t/dε_zt = 0.8        dz_t/dε_zt = 1


At t=1: forward (11) by one period:
dy_{t+1}/dε_zt = 0.7 (dy_t/dε_zt) + 0.2 (dz_t/dε_zt) = 0.7(0.8) + 0.2(1) = 0.76
dz_{t+1}/dε_zt = 0.2 (dy_t/dε_zt) + 0.7 (dz_t/dε_zt) = 0.2(0.8) + 0.7(1) = 0.86

At t=2: forward (11) by two periods:
dy_{t+2}/dε_zt = 0.7 (dy_{t+1}/dε_zt) + 0.2 (dz_{t+1}/dε_zt) = 0.7(0.76) + 0.2(0.86) = 0.70
dz_{t+2}/dε_zt = 0.2 (dy_{t+1}/dε_zt) + 0.7 (dz_{t+1}/dε_zt) = 0.2(0.76) + 0.7(0.86) = 0.75



Long-run multipliers: both variables go back to zero.

Cumulative multipliers:
Σ_{i=0}^n dy_{t+i}/dε_zt = 0.8 + 0.76 + 0.70 + ...
Σ_{i=0}^n dz_{t+i}/dε_zt = 1 + 0.86 + 0.75 + ...
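These multipliers can be generated recursively: the impact responses (0.8 and 1) come from the error structure, and forwarding the estimated VAR one period maps each pair of responses into the next. A minimal sketch:

```python
# Impulse responses of (y, z) to a one-unit z shock, using the example's
# estimates: impact responses from e1 = eps_y + 0.8 eps_z, e2 = eps_z
dy, dz = 0.8, 1.0
responses = [(dy, dz)]
for _ in range(2):
    # forward the estimated VAR one period:
    # dy' = 0.7 dy + 0.2 dz,  dz' = 0.2 dy + 0.7 dz
    dy, dz = 0.7 * dy + 0.2 * dz, 0.2 * dy + 0.7 * dz
    responses.append((dy, dz))

print([(round(a, 2), round(b, 2)) for a, b in responses])
# [(0.8, 1.0), (0.76, 0.86), (0.7, 0.75)]
```

Extending the loop traces the full impulse response functions; since the VAR is stable, both sequences decay toward zero.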


- Results are ordering-dependent. If you choose the decomposition such that b_12 = 0
  instead of b_21 = 0, you can get quite different results. One robustness check is therefore
  to change the ordering. If the results don't change, then the estimates are robust to the
  ordering.
- If the correlation between the errors is low (ρ_12 small), then changing the ordering does
  not make a big difference.
- Eviews specification:
-Residual: ignores the correlations in the VAR residuals; gives the MA
coefficients of the infinite MA representation of the VAR.
-Cholesky (with and without degrees of freedom adjustment for small-sample
correction).
-Generalized impulses: Pesaran and Shin (1998) methodology. Independent
of the VAR ordering. Applies a Cholesky factorization to each variable with
the j-th variable at the top of the ordering.

Confidence Intervals
These help to see the degree of precision in the coefficient estimates. They are obtained by Monte Carlo
study. Eviews provides two types of calculations of the standard errors for the confidence
intervals: Monte Carlo and Analytic. For M-C you need to provide the number of draws.
Eviews then gives ±2 standard error bands around the impulse responses.

Note that for VECM, these confidence intervals are not available on Eviews. For those
interested in programming themselves, instructions to generate confidence bounds for
SVARS are available at: http://www.eviews.com/support/examples/docs/svar.htm#blanquah3


Variance Decomposition
It tells us how much of a change in a variable is due to its own shock and how much is due to
shocks to other variables. In the short run, most of the variation is due to the own shock. But as the
lagged variables' effect starts kicking in, the percentage of the effect of other shocks
increases over time.

To see this, consider the VMA representation of the VAR in (9a):

x_t = μ + Σ_{i=0}^∞ φ(i) ε_{t-i}, where φ(i) = A_1^i (1 - b_12 b_21)^{-1} [ 1 b_12; b_21 1 ] has elements φ_jk(i).

We want to calculate the n-period forecast error of x in order to find that of, say, y.

Start from 1 period:
x_{t+1} = μ + φ_0 ε_{t+1} + φ_1 ε_t + φ_2 ε_{t-1} + ...
E_t x_{t+1} = μ + φ_1 ε_t + φ_2 ε_{t-1} + ...
1-period forecast error: x_{t+1} - E_t x_{t+1} = φ_0 ε_{t+1}

Proceed in the same way and get the 2-period forecast error:
x_{t+2} - E_t x_{t+2} = φ_0 ε_{t+2} + φ_1 ε_{t+1}

3-period forecast error:
x_{t+3} - E_t x_{t+3} = φ_0 ε_{t+3} + φ_1 ε_{t+2} + φ_2 ε_{t+1}

n-period forecast error:
x_{t+n} - E_t x_{t+n} = φ_0 ε_{t+n} + φ_1 ε_{t+n-1} + ... + φ_{n-1} ε_{t+1} = Σ_{i=0}^{n-1} φ_i ε_{t+n-i}

Now consider y, the first element of the x vector. Its n-step-ahead forecast error is:
y_{t+n} - E_t y_{t+n} = [φ_11(0) ε_y,t+n + φ_11(1) ε_y,t+n-1 + ... + φ_11(n-1) ε_y,t+1]
                      + [φ_12(0) ε_z,t+n + φ_12(1) ε_z,t+n-1 + ... + φ_12(n-1) ε_z,t+1]

The variance of its n-step-ahead forecast error is:

σ_y(n)² = σ_y² [φ_11(0)² + φ_11(1)² + ... + φ_11(n-1)²] + σ_z² [φ_12(0)² + φ_12(1)² + ... + φ_12(n-1)²]

The first term over σ_y(n)² is the proportion of the variance due to the own shock; it decreases over time. The second term over σ_y(n)² is the proportion of the variance due to a z shock; it grows over time.


- If ε_z can explain none of the forecast error variance of the sequence {y_t} at all forecast
  horizons (σ_z² Σ_i φ_12(i)² / σ_y(n)² ≈ 0), then {y_t} is exogenous.
- If ε_z can explain most of the forecast error variance of the sequence {y_t} at all forecast
  horizons (σ_z² Σ_i φ_12(i)² / σ_y(n)² ≈ 0.9, for example), then {y_t} is endogenous.
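A sketch of this decomposition for the worked example above (A_1 = [0.7 0.2; 0.2 0.7], b_12 = 0.8, b_21 = 0; the structural variances σ_y² = 0.36 and σ_z² = 1 are the values implied by normalizing σ_1² = σ_2² = 1, an assumption made here only to get concrete numbers):

```python
def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A1 = [[0.7, 0.2], [0.2, 0.7]]     # estimated VAR(1) coefficients from the example
phi = [[1.0, 0.8], [0.0, 1.0]]    # phi(0): e_t = [1 0.8; 0 1] eps_t (b12 = 0.8, b21 = 0)
sy2, sz2 = 0.36, 1.0              # assumed structural variances (see lead-in)

own, cross = 0.0, 0.0
shares = []                        # share of y's FE variance due to its own shock
for n in range(1, 9):
    own += sy2 * phi[0][0] ** 2    # adds sigma_y^2 * phi_11(n-1)^2
    cross += sz2 * phi[0][1] ** 2  # adds sigma_z^2 * phi_12(n-1)^2
    shares.append(own / (own + cross))
    phi = mat_mul(A1, phi)         # phi(i+1) = A1 phi(i)

print([round(s, 3) for s in shares])
```

The own-shock share of y's forecast error variance falls steadily with the horizon, which is the pattern the text describes.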

Note that exogeneity is not the same as Granger-causality. It is a concept involving the
contemporaneous value of an endogenous variable and the contemporaneous error term of
another variable.

Same identification problem as for the impulse response functions. But if the cross-correlation is not significant, then the ordering will not matter.

Impulse responses + Variance decomposition = innovation accounting.


Hypothesis Testing

1. Specification of the VAR model
- Decide on the variables that enter the VAR: need a model for this. If the VAR is
misspecified because of missing variables, it will create an omitted variable(s)
problem and be reflected in serially correlated error terms.
- Number of lags. We need to include the optimal number of lags. Note that
increasing the number of lags does not solve the residual correlation if there are
omitted variables.
- Even if there are no omitted variables and we include the optimal (or a reasonable)
number of lags, the residuals can still reflect a problem caused by structural breaks.
At this stage we will control for them by determining the break dates exogenously.

2. Determination of optimal lag length
a. LR tests

(10) LR = (T - m)(ln|Σ_r| - ln|Σ_u|) ~ χ²(q)

T = # observations (after accounting for lags)
m = # parameters estimated in each equation of the unrestricted system, including the
constant.
ln|Σ_r| = natural log of the determinant of the covariance matrix of residuals of the restricted
system.
q = total number of restrictions in the system (= # lags times n²), and n = # variables (or
equations).

If the LR statistic > critical value, reject the null of the restricted system.

Eviews follows the Lutkepohl (1991) methodology in conducting a sequential LR test
(adjusting for m). You start with the maximum # of lags following your prior. Suppose you
decide on k lags. Then you compare the k-th (largest) lag's covariance matrix determinant with
that of k-1. If the LR statistic > critical value, reject the null of k-1 lags in favor of k lags.
The LR test statistic then becomes:

(11) LR = (T - m)(ln|Σ_{k-1}| - ln|Σ_k|) ~ χ²(q), and q = n²

However, if you want to compare, say, the 12th lag with the 8th lag, you have to calculate the
test statistic yourself, using the formula in (10).

b. Information criteria
AIC = T ln|Σ| + 2N
SBC = T ln|Σ| + N ln T
(N = total number of parameters estimated in all equations)
Choose the # of lags that minimizes the criteria.
Note that these criteria are not tests; they mainly indicate goodness of fit of alternatives.
So they should be used as complements to the LR tests.

You can use the information criteria to compare nonsequential tests.

3. Diagnostic tests of the residuals (in Eviews)
- Portmanteau Autocorrelation Test (Box-Pierce/Ljung-Box Q statistics) for residual
correlation.
Null hypothesis: no serial correlation up to the chosen lag.
The Q statistic is distributed χ² with dof = n²(h - p), where n = # variables, h = max # of
chosen lags, p = order of the VAR.
Not a good statistic to use if there is a quasi-unit root (it requires high-order MA
coefficients to be 0).

- Autocorrelation LM Test.
Null hypothesis: no autocorrelation up to lag h.
The LM statistic is distributed χ² with dof = n².

- Normality tests
Multivariate version of the Jarque-Bera test. It compares the 3rd and 4th moments
(skewness and kurtosis) to those of a normal distribution. You must specify a
factorization of the residuals. Choices in Eviews:
Cholesky: the statistics will depend on the ordering of the variables.
Doornik and Hansen (1994), inverse square root of the residual correlation matrix:
invariant to the ordering and to the scale of the variables in the system.
Urzua (1997), inverse square root of the residual covariance matrix: same advantage as
Doornik and Hansen, but better.
Factorization from SVAR (later; you need to have estimated an SVAR).


4. Granger Causality
In a two-variable VAR(p), the process {z_t} does not G-cause {y_t} if all coefficients in
A_12(L) are zero (or, a joint test of a_12(1) = a_12(2) = ... = a_12(p) = 0 is not rejected). This
concept involves the effect of past values of z on the current value of y. So it answers the
question whether past and current values of z help predict the future value of y.
It is different from exogeneity tests, which look at whether the current values of z
explain current and future values of y.

In an n-variable VAR(p), the block-exogeneity (= block-G-causality) test looks at whether the
lags of any variable G-cause any other variable in the system. You can test this using the
LR test in (10).


Application
Create a bivariate VAR(1) and apply the tests to get the best specification of the model.
Workfile:ENDERSQUARTERLY.wf

-Generate the rate of growth of money supply:
m = log(m1nsa) - log(m1nsa(-1))

-Generate the rate of PPI inflation:
inf = log(ppi) - log(ppi(-1))

-Generate seasonal dummies for each quarter of the year:
di = @seas(i), where i = 1, 2, 3

or: endersquartdummies.prg
smpl @all
inf=log(ppi)-log(ppi(-1))
m=log(m1nsa)-log(m1nsa(-1))
for !j=1 to 4
series d{!j}=@seas({!j})
next

-Check whether m and inf are I(0)

Now we can create our bivariate VAR(1):
Endogenous variables: m, inf
Exogenous variables: constant, seasonal dummies d1, d2, d3

Estimate an unrestricted VAR

1. Test the lag length

The sequential LR statistic indicates 5 lags. This is also confirmed by the FPE and AIC criteria.
View-Lag Structure-Lag length criteria-lags to include [8]-OK

VAR Lag Order Selection Criteria
Endogenous variables: INF M
Exogenous variables: C D1 D2 D3
Sample: 1960Q1 2002Q4
Included observations: 160


Lag LogL LR FPE AIC SC HQ


0 927.0090 NA 3.52e-08 -11.48761 -11.33385 -11.42518
1 993.6745 128.3309 1.61e-08 -12.27093 -12.04029* -12.17728
2 1000.381 12.74192 1.55e-08 -12.30476 -11.99724 -12.17989
3 1010.920 19.76184 1.43e-08 -12.38650 -12.00211 -12.23041
4 1018.108 13.29702 1.38e-08 -12.42635 -11.96507 -12.23904*
5 1023.710 10.22334* 1.35e-08* -12.44637* -11.90822 -12.22785
6 1025.908 3.956218 1.38e-08 -12.42385 -11.80881 -12.17410
7 1028.766 5.072764 1.40e-08 -12.40957 -11.71766 -12.12861
8 1033.418 8.142326 1.39e-08 -12.41773 -11.64894 -12.10555


* indicates lag order selected by the criterion
LR: sequential modified LR test statistic (each test at 5% level)
FPE: Final prediction error
AIC: Akaike information criterion
SC: Schwarz information criterion
HQ: Hannan-Quinn information criterion


You may have priors and want to test for the lag length yourself using an LR test. Suppose we
start with 12 lags and compare it with 8 lags.

Calculate the determinant of the residual covariance matrix. Eviews gives it at the bottom of the estimation output:

|Σ̂| = det[ 1/(T - p) Σ_t ε̂_t ε̂_t' ]

with p = # parameters per equation in the VAR. The unadjusted determinant ignores p.





Estimate with 12 lags the unrestricted VAR

Vector Autoregression Estimates
Date: 04/12/07 Time: 12:12
Sample (adjusted): 1963Q2 2002Q1
Included observations: 156 after adjustments
Standard errors in ( ) & t-statistics in [ ]


Determinant resid
covariance (dof adj.) 1.10E-08
Determinant resid
covariance 7.41E-09
Log likelihood 1017.509
Akaike information
criterion -12.32704
Schwarz criterion -11.23222


Estimate with 8 lags over the same VAR over the same sample:

Determinant resid
covariance (dof adj.) 1.10E-08
Determinant resid
covariance 8.41E-09
Log likelihood 1033.418
Akaike information
criterion -12.41773
Schwarz criterion -11.64894

To do the comparison properly, we must use the same sample of 12 lags (1963q2 2002q1)
Determinant resid
covariance (dof adj.) 1.14E-08
Determinant resid
covariance 8.65E-09
Log likelihood 1005.422
Akaike information
criterion -12.37721
Schwarz criterion -11.59520

Form the LR test statistic: LR = (T - m)(ln|Σ_r| - ln|Σ_u|) ~ χ²(q), with m = # parameters in each equation of the
unrestricted system including the constants, q = # restrictions = # lags dropped x n², n = # variables:

LR = [156 - (4 + (2x12))][ln(8.65E-09) - ln(7.41E-09)] = 19.80 < chisqr((12-8)x4) = chisqr(16) = 34

Do not reject the null.
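The arithmetic of this LR test can be verified directly (a sketch using the sample size and determinants reported above):

```python
import math

T = 156                     # common estimation sample
m = 4 + 2 * 12              # per-equation parameters in the 12-lag system:
                            # 4 exogenous (C D1 D2 D3) + 2 variables x 12 lags
det_r = 8.65e-09            # |Sigma| of the restricted (8-lag) system, same sample
det_u = 7.41e-09            # |Sigma| of the unrestricted (12-lag) system

LR = (T - m) * (math.log(det_r) - math.log(det_u))
q = (12 - 8) * 2 ** 2       # restrictions: 4 dropped lags x n^2
print(round(LR, 2), q)      # LR ~ chi-square(16); compare with the critical value
```

Since the statistic falls below the critical value, the 8-lag restriction is not rejected.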

2. Test the significance of the dummies using the same LR test.

3. Diagnostic tests of the residuals
View-Residual tests-


Portmanteau test:

VAR Residual Portmanteau Tests for Autocorrelations
H0: no residual autocorrelations up to lag h
Sample: 1963Q2 2002Q4
Included observations: 156


Lags Q-Stat Prob. Adj Q-Stat Prob. df


1 0.033361 NA* 0.033577 NA* NA*
2 0.316277 NA* 0.320167 NA* NA*
3 1.186238 NA* 1.207185 NA* NA*
4 1.759066 NA* 1.795088 NA* NA*
5 3.215159 NA* 3.299396 NA* NA*
6 3.736958 NA* 3.842067 NA* NA*
7 4.377997 NA* 4.513222 NA* NA*
8 5.299534 NA* 5.484572 NA* NA*
9 5.875070 0.2087 6.095345 0.1921 4 (14.86)
10 9.854399 0.2754 10.34723 0.2415 8 (21.9)
11 17.23996 0.1408 18.29308 0.1071 12
12 18.81050 0.2786 19.99450 0.2205 16


*The test is valid only for lags larger than the VAR lag order.
df is degrees of freedom for (approximate) chi-square distribution

Do not reject the null.

LM test

VAR Residual Serial Correlation LM
Tests
H0: no serial correlation at lag order h
Sample: 1963Q2 2002Q4
Included observations: 156


Lags LM-Stat Prob


1 2.327028 0.6759
2 4.861899 0.3018
3 15.30102 0.0041
4 5.459386 0.2433
5 9.271766 0.0547
6 2.422662 0.6585
7 3.174393 0.5291
8 2.091522 0.7189
9 0.727926 0.9478
10 4.659113 0.3241
11 8.771122 0.0671
12 1.905281 0.7532


Probs from chi-square with 4 df.

Chisqr(4)=14.86
Mostly not reject the null.

Normality Test

VAR Residual Normality Tests
Orthogonalization: Residual Correlation (Doornik-Hansen)
H0: residuals are multivariate normal
Sample: 1963Q2 2002Q4
Included observations: 156



Component Skewness Chi-sq df Prob.


1 -0.071003 0.142376 1 0.7059
2 0.087734 0.217132 1 0.6412


Joint 0.359509 2 0.8355



Component Kurtosis Chi-sq df Prob.


1 3.606088 3.832680 1 0.0503
2 1.695635 21.40130 1 0.0000


Joint 25.23398 2 0.0000



Component Jarque-Bera df Prob.


1 3.975057 2 0.1370
2 21.61843 2 0.0000


Joint 25.59349 4 0.0000



The null is a joint test of both the skewness and the kurtosis.
Normality is not rejected for inf but is rejected for m, due to a kurtosis problem.
Is this something we should worry about? In principle, rejection of the normal distribution invalidates the
test statistics. But measures of skewness are found to be not informative in small samples (Bai and Ng,
Boston College WP 115, 2001).





4. Granger causality
View-lag structure-G-causality/block exogeneity test

VAR Granger Causality/Block Exogeneity Wald Tests
Sample: 1963Q2 2002Q4
Included observations: 156



Dependent variable: INF


Excluded Chi-sq df Prob.


M 7.107555 5 0.2128


All 7.107555 5 0.2128



Dependent variable: M


Excluded Chi-sq df Prob.


INF 17.95420 5 0.0030


All 17.95420 5 0.0030




Chisqr(5)=16.75

It tests bilaterally whether the lags of the excluded variable affect the endogenous variable.
The null: the lagged coefficients of the excluded variable are jointly equal to 0.
All: joint test that the lags of all other variables are jointly zero in the equation for the endogenous variable.
Ex: in the top panel, the first row shows whether the lagged values of M can be excluded from
the INF equation; the second row shows whether the lagged values of all variables other than INF
can be excluded (in our case both tests are identical since we only have two variables).

The null is not rejected for INF but is rejected for M: inf helps predict m, while there is no
evidence that m helps predict inf.
