
Econ 582

Forecasting

Eric Zivot

April 15, 2013


Forecasting

Let $\{y_t\}$ be a covariance stationary and ergodic process, e.g. an ARMA($p,q$) process with Wold representation

$$y_t = \mu + \sum_{j=0}^{\infty}\psi_j\varepsilon_{t-j} = \mu + \varepsilon_t + \psi_1\varepsilon_{t-1} + \psi_2\varepsilon_{t-2} + \cdots, \qquad \varepsilon_t \sim \mathrm{WN}(0,\sigma^2),\ \psi_0 = 1$$

Let $I_t = \{y_t, y_{t-1}, \ldots\}$ denote the information set available at time $t$. Recall,

$$E[y_t] = \mu, \qquad \mathrm{var}(y_t) = \sigma^2\sum_{j=0}^{\infty}\psi_j^2$$
Goal: Using $I_t$, produce optimal forecasts of $y_{t+h}$ for $h = 1, 2, \ldots$ Note that

$$y_{t+h} = \mu + \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1} + \psi_h\varepsilon_t + \psi_{h+1}\varepsilon_{t-1} + \cdots$$

Define $y_{t+h|t}$ as the forecast of $y_{t+h}$ based on $I_t$ with known parameters. The forecast error is

$$\varepsilon_{t+h|t} = y_{t+h} - y_{t+h|t}$$

and the mean squared error of the forecast is

$$\mathrm{MSE}(y_{t+h|t}) = E[\varepsilon_{t+h|t}^2] = E[(y_{t+h} - y_{t+h|t})^2]$$

Theorem: The minimum MSE forecast (best forecast) of $y_{t+h}$ based on $I_t$ is

$$y_{t+h|t} = E[y_{t+h} \mid I_t]$$

Proof: See Hamilton, pages 72-73.
Remarks

1. The computation of $E[y_{t+h} \mid I_t]$ depends on the distribution of $\{\varepsilon_t\}$ and may be a very complicated nonlinear function of the history of $\{\varepsilon_t\}$. Even if $\{\varepsilon_t\}$ is an uncorrelated process (e.g. white noise), it may be the case that $E[\varepsilon_{t+1} \mid I_t] \neq 0$.

2. If $\{\varepsilon_t\}$ is independent white noise, then $E[\varepsilon_{t+1} \mid I_t] = 0$ and $E[y_{t+h} \mid I_t]$ will be a simple linear function of $\{\varepsilon_t\}$:

$$y_{t+h|t} = \mu + \psi_h\varepsilon_t + \psi_{h+1}\varepsilon_{t-1} + \cdots$$
Linear Predictors

$$y_{t+h} = \mu + \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1} + \psi_h\varepsilon_t + \psi_{h+1}\varepsilon_{t-1} + \cdots$$

A linear predictor of $y_{t+h}$ is a linear function of the variables in $I_t$.

Theorem: The minimum MSE linear forecast (best linear predictor) of $y_{t+h}$ based on $I_t$ is

$$y_{t+h|t} = \mu + \psi_h\varepsilon_t + \psi_{h+1}\varepsilon_{t-1} + \cdots$$

Proof: See Hamilton, page 74.


The forecast error of the best linear predictor is

$$\varepsilon_{t+h|t} = y_{t+h} - y_{t+h|t}$$
$$= \mu + \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1} + \psi_h\varepsilon_t + \cdots - (\mu + \psi_h\varepsilon_t + \psi_{h+1}\varepsilon_{t-1} + \cdots)$$
$$= \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1}$$

and the MSE of the forecast error is

$$\mathrm{MSE}(y_{t+h|t}) = \sigma^2(1 + \psi_1^2 + \cdots + \psi_{h-1}^2)$$
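As a small illustration of this formula (not part of the original notes), the following sketch computes the $h$-step forecast MSE from a given sequence of Wold $\psi$-weights; the weight sequence and $\sigma^2$ used in the example are hypothetical placeholders.

```python
import numpy as np

def forecast_mse(psi, sigma2, h):
    """h-step BLP forecast MSE: sigma^2 * (1 + psi_1^2 + ... + psi_{h-1}^2).

    psi is the Wold weight sequence [psi_0, psi_1, ...] with psi_0 = 1.
    """
    psi = np.asarray(psi, dtype=float)
    return sigma2 * np.sum(psi[:h] ** 2)

# Hypothetical weights, e.g. those of an AR(1) with phi = 0.8: psi_j = 0.8**j
psi = 0.8 ** np.arange(25)
mse_by_horizon = [forecast_mse(psi, sigma2=1.0, h=h) for h in (1, 2, 4, 8)]
```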


Remarks

1. $E[\varepsilon_{t+h|t}] = 0$

2. $\varepsilon_{t+h|t}$ is uncorrelated with any element in $I_t$

3. The form of $\varepsilon_{t+h|t}$ is closely related to the IRF

4. $\mathrm{var}(\varepsilon_{t+h|t}) = \mathrm{MSE}(y_{t+h|t}) \leq \mathrm{var}(y_t)$

5. $\lim_{h \to \infty} y_{t+h|t} = \mu$

6. $\lim_{h \to \infty} \mathrm{MSE}(y_{t+h|t}) = \mathrm{var}(y_t)$
Example: BLP for MA(1) process

$$y_t = \mu + \varepsilon_t + \theta\varepsilon_{t-1}, \qquad \varepsilon_t \sim \mathrm{WN}(0, \sigma^2)$$

Here

$$\psi_1 = \theta, \qquad \psi_j = 0 \text{ for } j > 1$$

Therefore,

$$y_{t+1|t} = \mu + \theta\varepsilon_t$$
$$y_{t+2|t} = \mu$$
$$y_{t+h|t} = \mu \text{ for } h > 1$$

The forecast errors and MSEs are

$$\varepsilon_{t+1|t} = \varepsilon_{t+1}, \qquad \mathrm{MSE}(y_{t+1|t}) = \sigma^2$$
$$\varepsilon_{t+2|t} = \varepsilon_{t+2} + \theta\varepsilon_{t+1}, \qquad \mathrm{MSE}(y_{t+2|t}) = \sigma^2(1 + \theta^2)$$
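For concreteness, a minimal numerical sketch of these MA(1) forecasts (not from the original notes); the parameter values and the last shock $\varepsilon_t$ are hypothetical, and $\varepsilon_t$ is treated as known, as the known-parameter BLP assumes.

```python
# Hypothetical MA(1) parameters and last shock (illustration only)
mu, theta, sigma2 = 2.0, 0.5, 1.0
eps_t = 0.3                          # assumed known for the sketch

y_hat_1 = mu + theta * eps_t         # y_{t+1|t} = mu + theta * eps_t
y_hat_h = mu                         # y_{t+h|t} = mu for h > 1

mse_1 = sigma2                       # MSE at h = 1
mse_2 = sigma2 * (1 + theta ** 2)    # MSE at h >= 2
```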
Prediction Confidence Intervals

If $\{\varepsilon_t\}$ is Gaussian then

$$y_{t+h} \mid I_t \sim N\!\left(y_{t+h|t},\ \sigma^2(1 + \psi_1^2 + \cdots + \psi_{h-1}^2)\right)$$

A 95% confidence interval for the $h$-step prediction has the form

$$y_{t+h|t} \pm 1.96\sqrt{\sigma^2(1 + \psi_1^2 + \cdots + \psi_{h-1}^2)}$$
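A minimal sketch (not from the notes) of this interval calculation, given a point forecast and its MSE; the numerical inputs in the example call are hypothetical.

```python
import math

def prediction_interval_95(y_hat, mse_h):
    """95% prediction interval: y_hat +/- 1.96 * sqrt(MSE)."""
    half_width = 1.96 * math.sqrt(mse_h)
    return y_hat - half_width, y_hat + half_width

# Example: MA(1) two-step-ahead forecast with sigma2 = 1.0 and theta = 0.5
lo, hi = prediction_interval_95(y_hat=2.0, mse_h=1.0 * (1 + 0.5 ** 2))
```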
Predictions with Estimated Parameters

Let $\hat{y}_{t+h|t}$ denote the BLP with estimated parameters:

$$\hat{y}_{t+h|t} = \hat{\mu} + \hat{\psi}_h\hat{\varepsilon}_t + \hat{\psi}_{h+1}\hat{\varepsilon}_{t-1} + \cdots$$

where $\hat{\varepsilon}_t$ is the estimated residual from the fitted model. The forecast error with estimated parameters is

$$\hat{\varepsilon}_{t+h|t} = y_{t+h} - \hat{y}_{t+h|t}$$
$$= (\mu - \hat{\mu}) + \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1} + (\psi_h\varepsilon_t - \hat{\psi}_h\hat{\varepsilon}_t) + (\psi_{h+1}\varepsilon_{t-1} - \hat{\psi}_{h+1}\hat{\varepsilon}_{t-1}) + \cdots$$

Obviously,

$$\mathrm{MSE}(\hat{y}_{t+h|t}) \neq \mathrm{MSE}(y_{t+h|t}) = \sigma^2(1 + \psi_1^2 + \cdots + \psi_{h-1}^2)$$

Note: Most software computes

$$\widehat{\mathrm{MSE}}(\hat{y}_{t+h|t}) = \hat{\sigma}^2(1 + \hat{\psi}_1^2 + \cdots + \hat{\psi}_{h-1}^2)$$
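For concreteness, here is a minimal sketch (not part of the notes) of this plug-in calculation for an AR(1) fitted by OLS; the function name is hypothetical, and the calculation deliberately ignores parameter-estimation error, exactly as in the formula above.

```python
import numpy as np

def ar1_plugin_mse(y, h):
    """Fit y_t = c + phi*y_{t-1} + eps_t by OLS; return sigma2_hat * sum of squared psi_hat weights."""
    y = np.asarray(y, dtype=float)
    x, z = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(x), x])     # regressors: constant and y_{t-1}
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ b
    phi_hat = b[1]
    sigma2_hat = resid @ resid / (len(z) - 2)     # residual variance estimate
    psi_hat = phi_hat ** np.arange(h)             # psi_hat_j = phi_hat^j for an AR(1)
    return sigma2_hat * np.sum(psi_hat ** 2)      # sigma2_hat*(1 + psi_1_hat^2 + ... + psi_{h-1}_hat^2)
```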
Computing the Best Linear Predictor

The BLP $y_{t+h|t}$ may be computed in many different but equivalent ways.

The algorithm for computing $y_{t+h|t}$ from an AR(1) model is simple, and the methodology allows for the computation of forecasts for general ARMA models as well as multivariate models.

Example: AR(1) Model

$$y_t = \mu(1 - \phi) + \phi y_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{iid}\ (0, \sigma^2)$$

$\mu$, $\phi$, and $\sigma^2$ are known.

In the Wold representation, $\psi_j = \phi^j$. Starting at $y_t$ and iterating forward $h$ periods gives

$$y_{t+h} = \mu + \phi^h(y_t - \mu) + \varepsilon_{t+h} + \phi\varepsilon_{t+h-1} + \cdots + \phi^{h-1}\varepsilon_{t+1}$$
$$= \mu + \phi^h(y_t - \mu) + \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1}$$

Based on information at time $t$, the best forecast of $\varepsilon_{t+1}, \ldots, \varepsilon_{t+h}$ is zero because $\varepsilon_t \sim \mathrm{iid}\ (0, \sigma^2)$. Hence,

$$y_{t+h|t} = \mu + \phi^h(y_t - \mu), \qquad h = 1, 2, \ldots$$
The best linear forecasts of $y_{t+1}, y_{t+2}, \ldots, y_{t+h}$ can be recursively computed using the chain-rule of forecasting (law of iterated projections):

$$y_{t+1|t} = \mu + \phi(y_t - \mu)$$
$$y_{t+2|t} = \mu + \phi(y_{t+1|t} - \mu) = \mu + \phi(\phi(y_t - \mu)) = \mu + \phi^2(y_t - \mu)$$
$$\vdots$$
$$y_{t+h|t} = \mu + \phi(y_{t+h-1|t} - \mu) = \mu + \phi^h(y_t - \mu)$$

The corresponding forecast errors are

$$\varepsilon_{t+1|t} = y_{t+1} - y_{t+1|t} = \varepsilon_{t+1}$$
$$\varepsilon_{t+2|t} = y_{t+2} - y_{t+2|t} = \varepsilon_{t+2} + \phi\varepsilon_{t+1} = \varepsilon_{t+2} + \psi_1\varepsilon_{t+1}$$
$$\vdots$$
$$\varepsilon_{t+h|t} = y_{t+h} - y_{t+h|t} = \varepsilon_{t+h} + \phi\varepsilon_{t+h-1} + \cdots + \phi^{h-1}\varepsilon_{t+1} = \varepsilon_{t+h} + \psi_1\varepsilon_{t+h-1} + \cdots + \psi_{h-1}\varepsilon_{t+1}$$
The forecast error variances are

$$\mathrm{var}(\varepsilon_{t+1|t}) = \sigma^2$$
$$\mathrm{var}(\varepsilon_{t+2|t}) = \sigma^2(1 + \phi^2) = \sigma^2(1 + \psi_1^2)$$
$$\vdots$$
$$\mathrm{var}(\varepsilon_{t+h|t}) = \sigma^2(1 + \phi^2 + \cdots + \phi^{2(h-1)}) = \sigma^2\,\frac{1 - \phi^{2h}}{1 - \phi^2} = \sigma^2(1 + \psi_1^2 + \cdots + \psi_{h-1}^2)$$

Clearly,

$$\lim_{h \to \infty} y_{t+h|t} = \mu = E[y_t]$$
$$\lim_{h \to \infty} \mathrm{var}(\varepsilon_{t+h|t}) = \frac{\sigma^2}{1 - \phi^2} = \sigma^2\sum_{j=0}^{\infty}\psi_j^2 = \mathrm{var}(y_t)$$
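A minimal sketch (not from the notes) of this chain-rule recursion with known parameters; the function name and the parameter values in the example call are hypothetical.

```python
import numpy as np

def ar1_forecasts(y_t, mu, phi, sigma2, h_max):
    """Chain-rule forecasts and forecast-error variances for a known-parameter AR(1).

    Returns arrays of y_{t+h|t} and var(eps_{t+h|t}) for h = 1, ..., h_max.
    """
    forecasts, variances = [], []
    y_hat, mse = y_t, 0.0
    for _ in range(h_max):
        y_hat = mu + phi * (y_hat - mu)      # y_{t+h|t} = mu + phi*(y_{t+h-1|t} - mu)
        mse = sigma2 + phi ** 2 * mse        # sigma^2*(1 + phi^2 + ... + phi^(2(h-1)))
        forecasts.append(y_hat)
        variances.append(mse)
    return np.array(forecasts), np.array(variances)

# Hypothetical parameter values (illustration only)
fc, v = ar1_forecasts(y_t=1.5, mu=0.0, phi=0.8, sigma2=1.0, h_max=12)
```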
AR(p) Models

Consider the AR(p) model


$$\phi(L)(y_t - \mu) = \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{iid}\ (0, \sigma^2)$$
$$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$$

The forecasting algorithm for AR($p$) models is essentially the same as that for AR(1) models once we put the AR($p$) model in state space form. Let $\tilde{y}_t = y_t - \mu$. The AR($p$) model in state space form is

$$\begin{pmatrix} \tilde{y}_t \\ \tilde{y}_{t-1} \\ \vdots \\ \tilde{y}_{t-p+1} \end{pmatrix} = \begin{pmatrix} \phi_1 & \phi_2 & \cdots & \phi_p \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} \tilde{y}_{t-1} \\ \tilde{y}_{t-2} \\ \vdots \\ \tilde{y}_{t-p} \end{pmatrix} + \begin{pmatrix} \varepsilon_t \\ 0 \\ \vdots \\ 0 \end{pmatrix}$$

or

$$\boldsymbol{\xi}_t = \mathbf{F}\boldsymbol{\xi}_{t-1} + \mathbf{w}_t, \qquad \mathrm{var}(\mathbf{w}_t) = \boldsymbol{\Sigma}$$
Starting at $\boldsymbol{\xi}_t$ and iterating forward $h$ periods gives

$$\boldsymbol{\xi}_{t+h} = \mathbf{F}^h\boldsymbol{\xi}_t + \mathbf{w}_{t+h} + \mathbf{F}\mathbf{w}_{t+h-1} + \cdots + \mathbf{F}^{h-1}\mathbf{w}_{t+1}$$

Then the best linear forecasts of $\boldsymbol{\xi}_{t+1}, \boldsymbol{\xi}_{t+2}, \ldots, \boldsymbol{\xi}_{t+h}$, computed using the chain-rule of forecasting, are

$$\boldsymbol{\xi}_{t+1|t} = \mathbf{F}\boldsymbol{\xi}_t$$
$$\boldsymbol{\xi}_{t+2|t} = \mathbf{F}\boldsymbol{\xi}_{t+1|t} = \mathbf{F}^2\boldsymbol{\xi}_t$$
$$\vdots$$
$$\boldsymbol{\xi}_{t+h|t} = \mathbf{F}\boldsymbol{\xi}_{t+h-1|t} = \mathbf{F}^h\boldsymbol{\xi}_t$$
The forecast of $y_{t+h}$ is given by $\mu$ plus the first element of $\boldsymbol{\xi}_{t+h|t} = \mathbf{F}^h\boldsymbol{\xi}_t$:

$$\boldsymbol{\xi}_{t+h|t} = \mathbf{F}^h\boldsymbol{\xi}_t = \begin{pmatrix} \phi_1 & \phi_2 & \cdots & \phi_p \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{pmatrix}^{\!h} \begin{pmatrix} \tilde{y}_t \\ \tilde{y}_{t-1} \\ \vdots \\ \tilde{y}_{t-p+1} \end{pmatrix}$$
The forecast errors are given by

$$\mathbf{w}_{t+1|t} = \boldsymbol{\xi}_{t+1} - \boldsymbol{\xi}_{t+1|t} = \mathbf{w}_{t+1}$$
$$\mathbf{w}_{t+2|t} = \boldsymbol{\xi}_{t+2} - \boldsymbol{\xi}_{t+2|t} = \mathbf{w}_{t+2} + \mathbf{F}\mathbf{w}_{t+1}$$
$$\vdots$$
$$\mathbf{w}_{t+h|t} = \boldsymbol{\xi}_{t+h} - \boldsymbol{\xi}_{t+h|t} = \mathbf{w}_{t+h} + \mathbf{F}\mathbf{w}_{t+h-1} + \cdots + \mathbf{F}^{h-1}\mathbf{w}_{t+1}$$

and the corresponding forecast MSE matrices are

$$\mathrm{var}(\mathbf{w}_{t+1|t}) = \mathrm{var}(\mathbf{w}_{t+1}) = \boldsymbol{\Sigma}$$
$$\mathrm{var}(\mathbf{w}_{t+2|t}) = \mathrm{var}(\mathbf{w}_{t+2}) + \mathbf{F}\,\mathrm{var}(\mathbf{w}_{t+1})\,\mathbf{F}' = \boldsymbol{\Sigma} + \mathbf{F}\boldsymbol{\Sigma}\mathbf{F}'$$
$$\vdots$$
$$\mathrm{var}(\mathbf{w}_{t+h|t}) = \sum_{j=0}^{h-1}\mathbf{F}^j\boldsymbol{\Sigma}\mathbf{F}^{j\prime}$$

Notice that

$$\mathrm{var}(\mathbf{w}_{t+h|t}) = \boldsymbol{\Sigma} + \mathbf{F}\,\mathrm{var}(\mathbf{w}_{t+h-1|t})\,\mathbf{F}'$$
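A minimal sketch (not from the notes) implementing this companion-form recursion for a known-parameter AR(p); the function name and the AR(2) values in the example call are hypothetical.

```python
import numpy as np

def arp_forecasts(y, mu, phi, sigma2, h_max):
    """Companion-form forecasts for a known-parameter AR(p).

    y   : observed values, most recent last (needs at least p values)
    phi : [phi_1, ..., phi_p]
    Returns forecasts y_{t+h|t} and forecast-error variances for h = 1, ..., h_max.
    """
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    # Companion matrix F and state xi_t = (y_t - mu, ..., y_{t-p+1} - mu)'
    F = np.zeros((p, p))
    F[0, :] = phi
    F[1:, :-1] = np.eye(p - 1)
    xi = np.asarray(y, dtype=float)[-p:][::-1] - mu
    Sigma = np.zeros((p, p))
    Sigma[0, 0] = sigma2                      # var(w_t)
    forecasts, variances = [], []
    V = np.zeros((p, p))
    for _ in range(h_max):
        xi = F @ xi                           # xi_{t+h|t} = F xi_{t+h-1|t}
        V = Sigma + F @ V @ F.T               # var(w_{t+h|t}) = Sigma + F var(w_{t+h-1|t}) F'
        forecasts.append(mu + xi[0])          # y_{t+h|t} = mu + first element of xi_{t+h|t}
        variances.append(V[0, 0])             # MSE of y_{t+h|t}
    return np.array(forecasts), np.array(variances)

# Hypothetical AR(2) example (illustration only)
fc, v = arp_forecasts(y=[0.2, 0.5, 1.1], mu=0.0, phi=[0.6, 0.2], sigma2=1.0, h_max=8)
```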
