Nonlinearity Test Summary - Bima
06211640000124
If the $X_t$ are i.i.d., this probability should be equal to the following in the limiting case:
$$
C_{1,T}(\epsilon)^m = P(|X_t - X_s| < \epsilon)^m.
$$
Brock et al. (1996) define the BDS statistic as follows:
$$
V_m = \sqrt{T}\,\frac{C_{m,T}(\epsilon) - C_{1,T}(\epsilon)^m}{s_{m,T}},
$$
where $s_{m,T}$ is the standard deviation and can be estimated consistently as documented by Brock et al. (1987). Under fairly moderate regularity conditions, the BDS statistic converges in distribution to $N(0,1)$.
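For illustration, the test can be run with the `bds` function in `statsmodels.tsa.stattools` (assuming a recent statsmodels; the exact signature and defaults may differ across versions):

```python
# Minimal sketch: BDS test on an i.i.d. series, for which H0 should
# not be rejected. epsilon defaults to a multiple of the sample
# standard deviation inside statsmodels.
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(0)
x = rng.standard_normal(500)

# V_m and its p-value for embedding dimensions m = 2, ..., 5
stat, pvalue = bds(x, max_dim=5)
for m, (v, p) in enumerate(zip(stat, pvalue), start=2):
    print(f"m={m}: V_m={v:.3f}, p-value={p:.3f}")
```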
In the neural network (NN) test, a single-hidden-layer feedforward network augments the linear model, and the null hypothesis of linearity is equivalent to the optimal weights of the network being equal to zero; that is, the null hypothesis of the NN test is $\beta_j^* = 0$ for $j = 1, 2, \ldots, q$, for given $q$ and $\gamma_j$.
Operationally, the NN test can be implemented as a Lagrange multiplier test of
$$
H_0 : E(\Phi_t e_t^*) = 0 \quad \text{against} \quad H_1 : E(\Phi_t e_t^*) \neq 0.
$$
The elements $e_t^*$ are replaced by the OLS residuals $\hat{e}_t = y_t - \tilde{X}_t'\hat{\theta}$ to obtain the test statistic
$$
M_n = \left( n^{-1/2} \sum_{t=1}^{n} \Phi_t \hat{e}_t \right)' \hat{W}_n^{-1} \left( n^{-1/2} \sum_{t=1}^{n} \Phi_t \hat{e}_t \right),
$$
where $\hat{W}_n$ is a consistent estimator of $W^* = \operatorname{var}\!\left( n^{-1/2} \sum_{t=1}^{n} \Phi_t e_t^* \right)$ and, under $H_0$,
$$
M_n \xrightarrow{d} \chi^2(q).
$$
To circumvent multicollinearity of the elements of $\Phi_t$ with each other and with $X_t$, as well as computational issues in obtaining $\hat{W}_n$, two practical solutions are adopted. First, the test is conducted on $q^* < q$ principal components of $\Phi_t$. Second, an equivalent test statistic is used to avoid calculation of $\hat{W}_n$: with $R^2$ taken from the auxiliary regression of $\hat{e}_t$ on $\tilde{X}_t$ and the $q^*$ principal components,
$$
nR^2 \xrightarrow{d} \chi^2(q^*).
$$
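As an illustration, the following Python sketch implements the $nR^2$ form of the test under stated assumptions: logistic activations, direction parameters $\gamma_j$ drawn uniformly at random, and principal components computed by SVD. The function name `nn_test` and the defaults for `q` and `q_star` are illustrative, not from the source.

```python
# Sketch of the NN test in nR^2 form, under the assumptions above.
import numpy as np
from scipy import stats

def nn_test(y, X, q=10, q_star=2, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    Xt = np.column_stack([np.ones(n), X])      # linear regressors incl. constant

    # Step 1: OLS residuals from the linear model
    theta, *_ = np.linalg.lstsq(Xt, y, rcond=None)
    e_hat = y - Xt @ theta

    # Step 2: hidden-unit activations Phi_t = psi(X~' gamma_j), psi logistic
    Gamma = rng.uniform(-2, 2, size=(Xt.shape[1], q))
    Phi = 1.0 / (1.0 + np.exp(-Xt @ Gamma))

    # Step 3: keep q* principal components of (centered) Phi
    Phi_c = Phi - Phi.mean(axis=0)
    _, _, Vt = np.linalg.svd(Phi_c, full_matrices=False)
    pcs = Phi_c @ Vt[:q_star].T

    # Step 4: auxiliary regression of e_hat on (X~, principal components)
    Z = np.column_stack([Xt, pcs])
    b, *_ = np.linalg.lstsq(Z, e_hat, rcond=None)
    u = e_hat - Z @ b
    r2 = 1.0 - (u @ u) / (e_hat @ e_hat)       # e_hat has mean 0 by OLS

    stat = n * r2                              # nR^2 -> chi2(q*) under H0
    return stat, stats.chi2.sf(stat, df=q_star)
```

Under $H_0$ the statistic is compared with a $\chi^2(q^*)$ critical value, matching the limit above.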
The RESET test of Ramsey applies to the linear AR($p$) model
$$
X_t = \phi_0 + \phi_1 X_{t-1} + \cdots + \phi_p X_{t-p} + a_t.
$$
The first step of the RESET test is to obtain the least squares estimate $\hat{\phi}$, compute the residuals $\hat{a}_t = X_t - \hat{X}_t$, and the sum of squared residuals
$$
SSR_0 = \sum_{t=p+1}^{n} \hat{a}_t^2.
$$
In the second step, regress $\hat{a}_t$ on $X_{t-1}$ and $M_{t-1}$,
$$
\hat{a}_t = X_{t-1}'\alpha + M_{t-1}'\beta + v_t,
$$
where $X_{t-1} = (1, X_{t-1}, \ldots, X_{t-p})'$ and $M_{t-1} = (\hat{X}_t^2, \ldots, \hat{X}_t^{s+1})'$ for some $s \geq 1$, and compute the least squares residuals $\hat{v}_t$. In the third step, compute the sum of squared residuals $SSR_1 = \sum_{t=p+1}^{n} \hat{v}_t^2$.
If the linear AR($p$) model is adequate, then $\alpha$ and $\beta$ should be zero. This can be tested in the fourth step by the usual $F$ statistic,
$$
F = \frac{(SSR_0 - SSR_1)/g}{SSR_1/(n - p - g)} \quad \text{with } g = s + p + 1,
$$
which, under linearity and normality, has an $F_{g,\,n-p-g}$ distribution.
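A minimal Python sketch of these four steps, assuming the AR($p$) design is built directly from the series; the helper name `reset_ar` and its defaults are illustrative:

```python
# Sketch of the RESET test for an AR(p) model, using powers
# X_hat^2, ..., X_hat^{s+1} of the fitted values as M_{t-1}.
import numpy as np
from scipy import stats

def reset_ar(x, p=2, s=1):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Design matrix (1, X_{t-1}, ..., X_{t-p}) for t = p+1, ..., n
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - i:n - i] for i in range(1, p + 1)])
    y = x[p:]

    # Step 1: fit AR(p); residuals and SSR_0
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ phi
    a_hat = y - y_hat
    ssr0 = a_hat @ a_hat

    # Steps 2-3: regress residuals on X and M = (y_hat^2, ..., y_hat^{s+1})
    M = np.column_stack([y_hat ** k for k in range(2, s + 2)])
    Z = np.column_stack([X, M])
    b, *_ = np.linalg.lstsq(Z, a_hat, rcond=None)
    v_hat = a_hat - Z @ b
    ssr1 = v_hat @ v_hat

    # Step 4: F statistic with g = s + p + 1 and n - p - g residual df
    g = s + p + 1
    F = ((ssr0 - ssr1) / g) / (ssr1 / (n - p - g))
    return F, stats.f.sf(F, g, n - p - g)
```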
Clearly, if $\sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} \theta_{uv} a_{t-u} a_{t-v}$ is zero, the approximation is linear, so Keenan's idea shares the principle of an $F$ test. The procedure follows the same steps as Ramsey's test. First, select (with a selection criterion, e.g. AIC) the number $p$ of lags involved in the regression, then fit $X_t$ on $(1, X_{t-1}, \ldots, X_{t-p})$ to obtain the fitted values $(\hat{X}_t)$, the residuals $(\hat{a}_t)$ and the residual sum of squares $SSR$. Then regress $\hat{X}_t^2$ on $(1, X_{t-1}, \ldots, X_{t-p})$ to obtain the residuals $(\hat{\zeta}_t)$. Finally,
calculate
$$
\hat{\eta} = \frac{\sum_{t=p+1}^{n} \hat{a}_t \hat{\zeta}_t}{\left( \sum_{t=p+1}^{n} \hat{\zeta}_t^2 \right)^{1/2}}
$$
and the test statistic equals
$$
\hat{F} = \frac{(n - 2p - 2)\,\hat{\eta}^2}{SSR - \hat{\eta}^2}.
$$
Under the null hypothesis of linearity, i.e.
$$
H_0 : \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} \theta_{uv} a_{t-u} a_{t-v} = 0,
$$
and the assumption that $(a_t)$ are i.i.d. Gaussian, asymptotically $\hat{F} \sim F_{1,\,n-2p-2}$.
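The following Python sketch mirrors these steps under the assumption that $p$ is given rather than selected by AIC; `keenan_test` is an illustrative name:

```python
# Sketch of Keenan's test: fit AR(p), regress the squared fitted
# values on the same regressors, and form the F statistic from eta_hat.
import numpy as np
from scipy import stats

def keenan_test(x, p=2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.column_stack([np.ones(n - p)] +
                        [x[p - i:n - i] for i in range(1, p + 1)])
    y = x[p:]

    # Fit X_t on (1, X_{t-1}, ..., X_{t-p}): fitted values, residuals, SSR
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ phi
    a_hat = y - y_hat
    ssr = a_hat @ a_hat

    # Regress X_hat_t^2 on the same regressors; keep residuals zeta_hat
    c, *_ = np.linalg.lstsq(X, y_hat ** 2, rcond=None)
    zeta_hat = y_hat ** 2 - X @ c

    # eta_hat and the F statistic with (1, n - 2p - 2) df
    eta = (a_hat @ zeta_hat) / np.sqrt(zeta_hat @ zeta_hat)
    F = (n - 2 * p - 2) * eta ** 2 / (ssr - eta ** 2)
    return F, stats.f.sf(F, 1, n - 2 * p - 2)
```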
Tsay (1986) improved on the power of the Keenan (1985) test by allowing for disaggregated nonlinear variables (all cross products $X_{t-i} X_{t-j}$, $1 \le i \le j \le p$), thus generalizing the Keenan test by explicitly looking for quadratic serial dependence in the data. While the first step of the Keenan test is unchanged, in the second step of the Tsay test, instead of $\hat{X}_t^2$, the products $X_{t-i} X_{t-j}$, $1 \le i \le j \le p$, are regressed on $(1, X_{t-1}, \ldots, X_{t-p})$. The corresponding test statistic $\tilde{F}$ is asymptotically distributed as $F_{m,\,n-m-p-1}$, where $m = p(p+1)/2$.
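A matching Python sketch of the Tsay test, assuming the same AR($p$) setup as above; residualizing the products against the linear regressors is one way to realize the partial $F$ test, and `tsay_test` is an illustrative name:

```python
# Sketch of Tsay's test: the m = p(p+1)/2 distinct products
# X_{t-i} X_{t-j}, i <= j, replace X_hat_t^2 in Keenan's construction.
import numpy as np
from scipy import stats

def tsay_test(x, p=2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lags = [x[p - i:n - i] for i in range(1, p + 1)]   # X_{t-1}, ..., X_{t-p}
    X = np.column_stack([np.ones(n - p)] + lags)
    y = x[p:]

    # Step 1: fit the AR(p) model and keep the residuals
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)
    a_hat = y - X @ phi
    ssr0 = a_hat @ a_hat

    # Step 2: residualize every product X_{t-i} X_{t-j}, i <= j, against X
    prods = np.column_stack([lags[i] * lags[j]
                             for i in range(p) for j in range(i, p)])
    c, *_ = np.linalg.lstsq(X, prods, rcond=None)
    U = prods - X @ c

    # Step 3: F test of a_hat on the residualized products,
    # with m and n - p - m - 1 degrees of freedom
    m = p * (p + 1) // 2
    b, *_ = np.linalg.lstsq(U, a_hat, rcond=None)
    resid = a_hat - U @ b
    ssr1 = resid @ resid
    F = ((ssr0 - ssr1) / m) / (ssr1 / (n - p - m - 1))
    return F, stats.f.sf(F, m, n - p - m - 1)
```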