Nonlinearity Test Summary - Bima

Ramsey's RESET Test, the White Test, and the Terasvirta Test are used to detect linearity and non-linearity in models. Ramsey's RESET test compares the fit of a model with and without higher-order terms added as predictors. The White Test regresses the residuals of a linear model on the original predictors plus additional transformed predictors to test for neglected non-linearity. The Terasvirta Test is similar to the White Test but uses a Taylor-expansion approximation instead of the additional transformations. Example R syntax is provided for conducting these tests on time series data.

Uploaded by

Bima Vhaleandra

Bima Putra Goklas

06211640000124

Summary of Ramsey’s RESET Test, Lagrange-Multiplier Test, White Test, and Terasvirta Test
Ramsey's RESET Test, the White Test, and the Terasvirta Test are used to detect whether a model follows a linear or a non-linear pattern. Ramsey’s RESET test statistic is

F = \frac{(R_{new}^2 - R_{old}^2)/p}{(1 - R_{new}^2)/(n - k)}

where p is the number of new independent variables, k is the number of parameters in the new model, and n is the number of observations. H0 is rejected if F > F(\alpha; p, n - k).
The White Test is a non-linearity detection test developed from the neural network model proposed by White (1989). The White test uses \chi^2 and F statistics. The procedure for the \chi^2 version is:
a. Regress y_t on 1, x_1, x_2, ..., x_p and compute the residuals \hat{u}_t.
b. Regress \hat{u}_t on 1, x_1, x_2, ..., x_p and m additional predictors, then compute the coefficient of determination R^2 of this regression. In this test, the m additional predictors are the values \psi(\gamma_j w_t) obtained from a principal-component transformation.
c. Compute \chi^2 = nR^2, where n is the number of observations used.
Under the linearity hypothesis, the statistic is asymptotically distributed as \chi^2(m); reject H0 if the p-value is less than \alpha.
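A minimal pure-Python sketch of steps a–c on simulated data, using a single squared term as the m = 1 additional predictor in place of the principal-component transformation \psi(\gamma_j w_t) (data-generating process and all numbers are made up):

```python
import random

def ols_fit(X, y):
    """Least squares via the normal equations (Gaussian elimination
    with partial pivoting); X is a list of observation rows."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(0)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
# Nonlinear (quadratic) data-generating process, so H0 should be rejected
y = [2.0 + 0.5 * xi + 0.8 * xi ** 2 + random.gauss(0, 1) for xi in x]

# Step a: regress y_t on (1, x_t) and keep the residuals u_t
b1 = ols_fit([[1.0, xi] for xi in x], y)
u = [yi - (b1[0] + b1[1] * xi) for xi, yi in zip(x, y)]

# Step b: regress u_t on (1, x_t) plus m = 1 extra predictor (x_t^2 here)
X2 = [[1.0, xi, xi ** 2] for xi in x]
b2 = ols_fit(X2, u)
fitted = [sum(bj * xj for bj, xj in zip(b2, row)) for row in X2]
ubar = sum(u) / n
r2 = 1.0 - (sum((ui - fi) ** 2 for ui, fi in zip(u, fitted))
            / sum((ui - ubar) ** 2 for ui in u))

# Step c: chi-square statistic nR^2, compared with chi2(m = 1)
chi2 = n * r2
print(chi2 > 3.84)  # 3.84 is roughly the 5% critical value of chi2(1)
```

With this strongly quadratic process, nR² far exceeds the χ²(1) critical value, so linearity is rejected.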
The Terasvirta test is a non-linearity detection test, also developed from a neural network model; it belongs to the Lagrange Multiplier (LM) family and is derived via a Taylor expansion (Terasvirta, 1993). For all three tests, the conclusion can be read from the p-value: reject H0 if it is less than \alpha.
R syntax example for the nonlinearity tests (resettest() is in package lmtest; white.test() and terasvirta.test() are in package tseries):
> library(lmtest)
> resettest(y.t. ~ t, power = 2, type = "regressor", data = kasus1)
> library(tseries)
> t <- kasus1$t
> y.t. <- kasus1$y.t.
> white.test(t, y.t.)
> terasvirta.test(t, y.t.)
Section 3 Testing linearity

If the X_t are iid, this probability should, in the limiting case, be equal to

C_{1,T}(\epsilon)^m = P(|X_t - X_s| < \epsilon)^m.

Brock et al. (1996) define the BDS statistic as follows

V_m = \sqrt{T}\, \frac{C_{m,T}(\epsilon) - C_{1,T}(\epsilon)^m}{s_{m,T}}

where s_{m,T} is the standard deviation and can be estimated consistently as documented by Brock et al. (1987). Under fairly moderate regularity conditions, the BDS statistic converges in distribution to N(0, 1).
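A Python sketch of the correlation integrals underlying the BDS statistic (the consistent estimation of s_{m,T} is omitted; the series, ε and m below are made up):

```python
import random

def corr_integral(x, m, eps):
    """C_{m,T}(eps): fraction of pairs of m-histories of the series x
    whose maximum coordinate-wise distance is below eps."""
    hist = [x[i:i + m] for i in range(len(x) - m + 1)]
    close = 0
    pairs = 0
    for i in range(len(hist)):
        for j in range(i + 1, len(hist)):
            pairs += 1
            if max(abs(a - b) for a, b in zip(hist[i], hist[j])) < eps:
                close += 1
    return close / pairs

random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)]  # iid series: no structure
eps, m = 1.0, 2
c_m = corr_integral(x, m, eps)
c_1 = corr_integral(x, 1, eps)
# Under iid-ness C_{m,T}(eps) ~ C_{1,T}(eps)^m,
# so the numerator of V_m should be close to zero:
print(abs(c_m - c_1 ** m) < 0.05)
```

For non-iid (e.g. chaotic or nonlinear) series, C_{m,T}(ε) drifts away from C_{1,T}(ε)^m and V_m grows in absolute value.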

3.1.3 White (1989) and Teräsvirta et al. (1993) Neural Network tests


The Neural Network test (White, 1989) for neglected nonlinearity, NN test hereafter, is built on neural network models. One of the most common is the single hidden layer feedforward network, where unit inputs send a vector X of signals X_i, i = 1, ..., k along links (connections) that attenuate or amplify the original signals by a factor \gamma_{ij} (weights). The intermediate or hidden processing unit j receives the signals X_i \gamma_{ij}, i = 1, ..., k and processes them. In general, incoming signals are summed by the hidden units so that an output is produced by means of an activation function \Phi(\tilde{X}'\gamma_j), where \Phi is typically the logistic function⁴ and \tilde{X} = (1, X_1, ..., X_k), passed to the output layer

f(X, \delta) = \beta_0 + \sum_{j=1}^{q} \beta_j \Phi(\tilde{X}'\gamma_j), \qquad q \in N \qquad (8)

where \beta_0, ..., \beta_q are hidden-to-output weights and \delta = (\beta_0, ..., \beta_q, \gamma_1', ..., \gamma_q')'.
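A small Python sketch of the output in (8) for the logistic activation (all weights below are made up for illustration):

```python
import math

def logistic(z):
    # a common choice for the activation function Phi
    return 1.0 / (1.0 + math.exp(-z))

def network_output(x, beta0, betas, gammas):
    """Single-hidden-layer feedforward output
    f(X, delta) = beta0 + sum_j beta_j * Phi(X~' gamma_j),
    with X~ = (1, X1, ..., Xk)."""
    x_tilde = [1.0] + list(x)
    out = beta0
    for bj, gj in zip(betas, gammas):
        out += bj * logistic(sum(xi * gi for xi, gi in zip(x_tilde, gj)))
    return out

# k = 2 inputs, q = 2 hidden units, hypothetical weights
print(round(network_output([0.5, -1.0], beta0=0.1,
                           betas=[1.0, -0.5],
                           gammas=[[0.0, 1.0, 1.0],
                                   [0.2, -0.3, 0.4]]), 4))
```

Each gamma vector has k + 1 entries because the constant 1 in X~ carries the hidden-unit intercept.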


The NN test in particular employs a single hidden layer network, augmented by connections from input to output. The output o of the network is

o = \tilde{X}'\theta + \sum_{j=1}^{q} \beta_j \Phi(\tilde{X}'\gamma_j)

and the null hypothesis of linearity is equivalent to the optimal weights of the network being equal to zero, that is, the null hypothesis of the NN test is \beta_j^* = 0 for j = 1, 2, ..., q, for given q and \gamma_j.
Operatively, the NN test can be implemented as a Lagrange multiplier test:

H_0: E(\Phi_t e_t^*) = 0
H_1: E(\Phi_t e_t^*) \neq 0

where the elements \Phi_t \equiv (\Phi(\tilde{X}_t'\Gamma_1), ..., \Phi(\tilde{X}_t'\Gamma_q)) and \Gamma \equiv (\Gamma_1, ..., \Gamma_q) are chosen a priori, independently of X_t and for given q. To practically carry out the test,
⁴ By definition, \Phi belongs to a class of flexible functional forms. White (1989) showed that for a wide class of nonlinear functions \Phi, the neural network can provide arbitrarily accurate approximations to arbitrary functions in various normed function spaces if q is large enough.
L. Bisaglia, M. Gerolimetto

the elements e_t^* are replaced by the OLS residuals \hat{e}_t = y_t - \tilde{X}_t'\hat{\theta}, to obtain the test statistic

M_n = \left( n^{-1/2} \sum_{t=1}^{n} \Phi_t \hat{e}_t \right)' \hat{W}_n^{-1} \left( n^{-1/2} \sum_{t=1}^{n} \Phi_t \hat{e}_t \right)

where \hat{W}_n is a consistent estimator of W^* = var(n^{-1/2} \sum_{t=1}^{n} \Phi_t e_t^*) and, under H_0,

M_n \xrightarrow{d} \chi^2(q).
To circumvent multicollinearity of the \Phi_t with themselves and with X_t, as well as computational issues when obtaining \hat{W}_n, two practical solutions are adopted. First, the test is conducted on q^* < q principal components \Phi_t^* of \Phi_t. Second, the following equivalent test statistic is used to avoid calculation of \hat{W}_n:

nR^2 \xrightarrow{d} \chi^2(q^*)

where R^2 is the uncentered squared multiple correlation from a standard linear regression of \hat{e}_t on (\Phi_t^*, \tilde{X}_t).
Teräsvirta et al. (1993) proved that the result of this test is affected by the presence of the intercept in the power of the logistic function chosen as activation function. Moreover, they documented a loss of power due to the random choice of the \gamma parameters. Building on this, Teräsvirta et al. (1993) replaced the expression \sum_{j=1}^{q} \beta_j \Phi(\tilde{X}'\gamma_j) in (8) with an approximation based on the Taylor expansion and derived an alternative LM test that has been shown to have better power properties.

3.1.4 Ramsey (1969) RESET test


Ramsey (1969) proposes a specification test for linear least squares regression analysis, whose argument is that nonlinearity will be reflected in the diagnostics of a fitted linear model if the residuals of the linear model are correlated with terms raised to a certain power. In other words, this test, referred to as the RESET test, focuses on specification errors in the linear regression, including those coming from unmodeled non-linearity, and is readily applicable to linear AR models.
Consider the linear AR(p) model:

Xt = φ0 + φ1 Xt−1 + · · · + φp Xt−p + at .

The first step of the RESET test is to obtain the least squares estimate \hat{\phi}, compute the residuals \hat{a}_t = X_t - \hat{X}_t, and the sum of squared residuals

SSR_0 = \sum_{t=p+1}^{n} \hat{a}_t^2

where n is the sample size.


In the second step, consider the linear regression

\hat{a}_t = X_{t-1}'a + M_{t-1}'b + v_t

where X_{t-1} = (1, X_{t-1}, ..., X_{t-p}) and M_{t-1} = (\hat{X}_t^2, ..., \hat{X}_t^{s+1}) for some s \geq 1, and compute the least squares residuals

\hat{v}_t = \hat{a}_t - X_{t-1}'\hat{a} - M_{t-1}'\hat{b}


In the third step, the sum of squared residuals

SSR_1 = \sum_{t=p+1}^{n} \hat{v}_t^2

is computed.

If the linear AR(p) model is adequate, then a and b should be zero. This can be tested in the fourth step by the usual F statistic:

F = \frac{(SSR_0 - SSR_1)/g}{SSR_1/(n - p - g)}, \qquad g = s + p + 1

which, under linearity and normality, has an F_{g, n-p-g} distribution.
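As a one-function illustration of this fourth-step statistic (the SSR values, n, p and s below are made up):

```python
def reset_f_ssr(ssr0, ssr1, n, p, s):
    """RESET F statistic from the two residual sums of squares:
    F = ((SSR0 - SSR1) / g) / (SSR1 / (n - p - g)), g = s + p + 1."""
    g = s + p + 1
    return ((ssr0 - ssr1) / g) / (ssr1 / (n - p - g))

# Hypothetical values: AR(2) model (p = 2), powers up to s = 1 added,
# n = 150 observations, SSR falling from 120.0 to 100.0
print(round(reset_f_ssr(120.0, 100.0, n=150, p=2, s=1), 3))
```

The result would be compared with the F_{g, n-p-g} critical value at the chosen level.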

3.1.5 Keenan’s (1985) test and Tsay’s (1986) test


Keenan (1985) proposes a nonlinearity test for time series that uses \hat{X}_t^2 only and modifies the second step of the RESET test to avoid multicollinearity between \hat{X}_t^2 and X_{t-1}. In particular, Keenan assumes that the series can be approximated (Volterra expansion) as follows:

X_t = \mu + \sum_{u=-\infty}^{\infty} \theta_u a_{t-u} + \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} \theta_{uv} a_{t-u} a_{t-v}

Clearly, if \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} \theta_{uv} a_{t-u} a_{t-v} is zero, the approximation is linear, so Keenan's idea shares the principle of an F test. The procedure follows the same steps as Ramsey's test. First, select (with a selection criterion, e.g. AIC) the value p of the number of lags involved in the regression, then fit X_t on (1, X_{t-1}, ..., X_{t-p}) to obtain the fitted values \hat{X}_t, the residuals \hat{a}_t, and the residual sum of squares SSR. Then regress \hat{X}_t^2 on (1, X_{t-1}, ..., X_{t-p}) to obtain the residuals \hat{\zeta}_t. Finally calculate

\hat{\eta} = \frac{\sum_{t=p+1}^{n} \hat{a}_t \hat{\zeta}_t}{\left( \sum_{t=p+1}^{n} \hat{\zeta}_t^2 \right)^{1/2}}
and the test statistic equals

\hat{F} = \frac{(n - 2p - 2)\, \hat{\eta}^2}{SSR - \hat{\eta}^2}
Under the null hypothesis of linearity, i.e.

H_0: \sum_{u=-\infty}^{\infty} \sum_{v=-\infty}^{\infty} \theta_{uv} a_{t-u} a_{t-v} = 0

and the assumption that the (a_t) are i.i.d. Gaussian, asymptotically \hat{F} \sim F_{1, n-2p-2}.
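A quick numerical illustration of Keenan's statistic (the values of \hat{\eta}, SSR, n and p below are made up):

```python
def keenan_f(eta_hat, ssr, n, p):
    """Keenan's test statistic F^ = (n - 2p - 2) * eta^2 / (SSR - eta^2),
    compared with the F(1, n - 2p - 2) distribution."""
    return (n - 2 * p - 2) * eta_hat ** 2 / (ssr - eta_hat ** 2)

# Hypothetical values: eta_hat = 1.5, SSR = 80.0, n = 100, p = 3
f_hat = keenan_f(eta_hat=1.5, ssr=80.0, n=100, p=3)
print(round(f_hat, 3))
```

Values of F̂ above the F(1, n − 2p − 2) critical value lead to rejecting linearity.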
Tsay (1986) improved on the power of the Keenan (1985) test by allowing for disaggregated nonlinear variables (all cross products X_{t-i} X_{t-j}, i, j = 1, ..., p), thus generalizing Keenan's test by explicitly looking for quadratic serial dependence in the data. While the first step of Keenan's test is unchanged, in the second step of Tsay's test the products X_{t-i} X_{t-j}, i \leq j, i, j = 1, ..., p, rather than \hat{X}_t^2, are regressed on (1, X_{t-1}, ..., X_{t-p}). The corresponding test statistic \tilde{F} is asymptotically distributed as F_{m, n-m-p-1}, where m = p(p + 1)/2.
