
Lecture 5: Estimation and

Forecasting of AR(1) and MA(1)


Overview
• Models: Let ε_t, t = 1, 2, 3, …, be i.i.d. N(0, σ²) random variables.
• MA(1): X_t = μ + ε_t − θε_{t−1}
• AR(1): X_t = m + φX_{t−1} + ε_t

• Parameter estimation: estimating the unknown parameters μ, θ, m, φ, σ² from the data. You will learn two methods: moment matching and least squares.

• Forecasting: predicting X_{n+k} with a prediction interval (usually at the 95% or 99% level).
Parameter Estimation - Moment Matching
• Solve the following equations simultaneously:
• Theoretical mean = empirical mean
• Theoretical variance = empirical variance
• Theoretical autocorrelations = empirical autocorrelations

• For MA(1) and AR(1), there are only three unknown parameters, so a single autocorrelation equation (for ρ_1) is sufficient.
Parameter Estimation - Moment Matching
• For AR(1), the equations are

m / (1 − φ) = X̄

σ² / (1 − φ²) = γ̂_0

φ = ρ̂_1
Parameter Estimation - Moment Matching
• In the example, the equations become

m / (1 − φ) = 0.244682

σ² / (1 − φ²) = 1.467377²

φ = 0.740

• Solving them, we get φ̂ = 0.740, σ̂ = 0.987, m̂ = 0.0636.
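These three equations can be solved in closed form from the sample moments. A minimal sketch in Python for illustration (the lecture itself uses R; the function name `ar1_moment_match` is made up here, and the numbers plugged in are the sample moments from the example above):

```python
import math

def ar1_moment_match(xbar, s, rho1):
    """Moment-matching estimates for AR(1): X_t = m + phi*X_{t-1} + eps_t.

    xbar : sample mean X-bar
    s    : sample standard deviation (square root of gamma-hat_0)
    rho1 : lag-1 sample autocorrelation rho-hat_1
    """
    phi = rho1                          # from phi = rho-hat_1
    sigma = s * math.sqrt(1 - phi**2)   # from sigma^2 / (1 - phi^2) = gamma-hat_0
    m = xbar * (1 - phi)                # from m / (1 - phi) = X-bar
    return m, phi, sigma

# Sample moments from the example in the slides:
m, phi, sigma = ar1_moment_match(0.244682, 1.467377, 0.740)
# phi = 0.740, sigma ≈ 0.987, m ≈ 0.0636
```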
Parameter Estimation - Moment Matching
• For MA(1), the equations are

μ = X̄

(1 + θ²)σ² = γ̂_0

−θ / (1 + θ²) = ρ̂_1
Parameter Estimation - Moment Matching
• In the example, the equations become

μ = −0.05832

(1 + θ²)σ² = 1.370795²

−θ / (1 + θ²) = 0.493

• Solving them, we get θ̂ = −0.8451, σ̂ = 1.0470, μ̂ = −0.05832. (The equation for θ is a quadratic with two roots; keep the invertible one, |θ| < 1.)
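Unlike the AR(1) case, the equation for θ is quadratic, so root selection matters. A Python sketch under the same notation (`ma1_moment_match` is an illustrative name; it assumes |ρ̂_1| ≤ 0.5 so that a real root exists):

```python
import math

def ma1_moment_match(xbar, s, rho1):
    """Moment-matching estimates for MA(1): X_t = mu + eps_t - theta*eps_{t-1}.

    Solving -theta / (1 + theta^2) = rho1 gives the quadratic
    rho1*theta^2 + theta + rho1 = 0; keep the invertible root |theta| < 1.
    """
    mu = xbar
    disc = 1 - 4 * rho1**2              # real roots require |rho1| <= 0.5
    roots = [(-1 + math.sqrt(disc)) / (2 * rho1),
             (-1 - math.sqrt(disc)) / (2 * rho1)]
    theta = next(r for r in roots if abs(r) < 1)
    sigma = s / math.sqrt(1 + theta**2)  # from (1 + theta^2) sigma^2 = gamma-hat_0
    return mu, theta, sigma

# Sample moments from the example in the slides:
mu, theta, sigma = ma1_moment_match(-0.05832, 1.370795, 0.493)
# theta ≈ -0.8451, sigma ≈ 1.0470, mu = -0.05832
```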


Least Squares Estimate
Parameter Estimation - Least Squares Estimate
• The idea is to minimize the sum of squared errors.

• For the AR(1) model, the error ε_t can be backed out as

ε_t = X_t − m − φX_{t−1}

• For the MA(1) model, the error ε_t has to be backed out recursively as

ε_t = X_t − μ + θε_{t−1}
ε_{t−1} = X_{t−1} − μ + θε_{t−2}
⋮
ε_1 = X_1 − μ + θε_0

If the MA(1) model is invertible, replacing ε_0 by zero is acceptable.
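The recursion above is straightforward to implement. A Python sketch for illustration (function names are made up here; the demo values are taken from the MA(1) forecasting example later in this lecture):

```python
def ma1_residuals(x, mu, theta, eps0=0.0):
    """Back out eps_t = X_t - mu + theta*eps_{t-1} recursively,
    starting from eps_0 (set to 0 when the model is invertible)."""
    eps_prev = eps0
    residuals = []
    for xt in x:
        eps = xt - mu + theta * eps_prev
        residuals.append(eps)
        eps_prev = eps
    return residuals

def sse(residuals):
    """Sum of squared errors, the quantity minimized over the parameters."""
    return sum(e**2 for e in residuals)

# With mu = 3.952, theta = 0.2311 and X_1 = 3.976, X_2 = 3.901:
res = ma1_residuals([3.976, 3.901], mu=3.952, theta=0.2311)
# res ≈ [0.024, -0.045]
```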
Parameter Estimation - Least Squares Estimate
• To be more precise, the notation

ε̂_t(m, φ) or ε̂_t(μ, θ)

can be used to represent the backed-out errors of the AR(1) and MA(1) models, respectively.
Parameter Estimation - Least Squares Estimate
• Minimize the sum of squared errors, e.g. Σ_{t=2}^{n} ε̂_t²(m, φ).

• The minimization procedure has already been implemented in R:

• arima(Series1, order=c(1,0,0))

• The order argument is c(p, 0, q): for AR(1), specify p = 1 (order=c(1,0,0)); for MA(1), specify q = 1 (order=c(0,0,1)).


Parameter Estimation - Least Squares Estimate
• How does R minimize the sum of squares?

• Usually, numerical methods like the BFGS algorithm are used. Interested students may take a course on optimization.
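For AR(1), no numerical optimizer is actually needed: the conditional least-squares criterion from the previous slides is an ordinary linear regression of X_t on X_{t−1}, which has a closed-form solution. A pure-Python sketch of that special case (illustrative only, not R's actual routine):

```python
def ar1_least_squares(x):
    """Conditional least squares for AR(1): X_t = m + phi*X_{t-1} + eps_t.

    Minimizing sum_{t=2}^n (X_t - m - phi*X_{t-1})^2 is an ordinary
    linear regression of X_t on X_{t-1}, so phi and m have closed forms.
    """
    y, z = x[1:], x[:-1]              # responses X_t and regressors X_{t-1}
    n = len(y)
    zbar, ybar = sum(z) / n, sum(y) / n
    phi = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
           / sum((zi - zbar) ** 2 for zi in z))
    m = ybar - phi * zbar
    return m, phi

# Sanity check: a noise-free series X_t = 1 + 0.5*X_{t-1} must give back
# m = 1 and phi = 0.5 (up to floating-point error).
x = [0.0]
for _ in range(20):
    x.append(1 + 0.5 * x[-1])
m, phi = ar1_least_squares(x)
```

For MA(1) there is no such closed form, because the residuals depend on θ through the recursion; that is where iterative methods like BFGS come in.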
Parameter Estimation - Least Squares Estimate
• Example: In the file lecture5.csv, there are two time series, Series 1 and Series 2. Choose appropriate models for them, then fit the models.
• For the AR(1) model, the R output means

X_t − 0.166 = 0.6795(X_{t−1} − 0.166) + ε_t

• For the MA(1) model, the output means

X_t = 0.0603 + ε_t + 0.4056ε_{t−1}

(Note: R's arima writes the MA part with a plus sign, X_t = μ + ε_t + θ_1 ε_{t−1}, so the reported ma1 coefficient corresponds to θ = −0.4056 in the notation of this lecture.)

Forecasting
• k-step ahead forecasting:
• Predicting X_{n+k} at time n
• Conditional mean E[X_{n+k} | F_n]
• Conditional variance Var[X_{n+k} | F_n]
• F_n is all information available at time n

• Basic technique: repeated substitution


Forecasting in AR(1)
• For AR(1): X_t − μ = φ(X_{t−1} − μ) + ε_t, where the ε_t are i.i.d. N(0, σ²)

• Predicted value

E[X_{n+k} | F_n] = μ + φ^k (X_n − μ)

• Variance of prediction

Var[X_{n+k} | F_n] = (1 + φ² + ⋯ + φ^{2(k−1)}) σ²

• For simplicity, we neglect the estimation error in the parameters.


Forecasting in AR(1)
• Proof: By repeated substitution,

X_{n+k} − μ = φ(X_{n+k−1} − μ) + ε_{n+k}
= φ²(X_{n+k−2} − μ) + φε_{n+k−1} + ε_{n+k}
= ⋯
= φ^k (X_n − μ) + φ^{k−1} ε_{n+1} + ⋯ + φε_{n+k−1} + ε_{n+k}

At time n, φ^k (X_n − μ) is known, while ε_{n+1}, …, ε_{n+k} are unknown.


Forecasting in AR(1)
• Example: Consider the AR(1) model X_t − 5 = 0.9(X_{t−1} − 5) + ε_t. You are given X_100 = 4.8 and σ² = 0.04. Predict X_101 at t = 100 and give a 95% prediction interval.

• Solution: We have

E[X_101 | F_100] = 5 + 0.9¹(4.8 − 5) = 4.82 and Var[X_101 | F_100] = 0.04.

• The 95% prediction interval is then

[4.82 − 1.96(0.2), 4.82 + 1.96(0.2)] = [4.428, 5.212]
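The two formulas above are easy to package into one helper. A Python sketch (the function name `ar1_forecast` is illustrative; the lecture's own tool for this is R's predict):

```python
import math

def ar1_forecast(mu, phi, sigma2, xn, k, z=1.96):
    """k-step ahead AR(1) forecast with a normal prediction interval.

    mean = mu + phi^k (X_n - mu)
    var  = (1 + phi^2 + ... + phi^(2(k-1))) * sigma^2
    """
    mean = mu + phi**k * (xn - mu)
    var = sigma2 * sum(phi**(2 * j) for j in range(k))
    half = z * math.sqrt(var)
    return mean, var, (mean - half, mean + half)

# The example from the slides: X_t - 5 = 0.9 (X_{t-1} - 5) + eps_t,
# X_100 = 4.8, sigma^2 = 0.04, one step ahead, 95% level.
mean, var, (lo, hi) = ar1_forecast(5, 0.9, 0.04, 4.8, k=1)
# mean = 4.82, var = 0.04, interval ≈ [4.428, 5.212]
```

As k grows, the variance sum approaches σ²/(1 − φ²), the stationary variance, so long-horizon intervals stop widening.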
Forecasting in MA(1)
• For MA(1): X_t = μ + ε_t − θε_{t−1}, where the ε_t are i.i.d. N(0, σ²)

• In forecasting, we may need to back out ε̂_{t−1} using repeated substitution

• The one-step ahead forecast is

E[X_{n+1} | F_n] = μ − θε̂_n with Var[X_{n+1} | F_n] = σ²

• Forecasts two or more steps ahead (k ≥ 2) are

E[X_{n+k} | F_n] = μ with Var[X_{n+k} | F_n] = σ²(1 + θ²)
Forecasting in MA(1)
• Note: To back out ε̂_t, the information in X_1, X_2, …, X_t is needed.

• At time t, the errors ε_t, ε_{t−1}, ε_{t−2}, … can be approximately known by repeated substitution. On the other hand, ε_{t+1}, ε_{t+2}, … are unknown and are considered random.
Forecasting in MA(1)
• Example: The parameters of an MA(1) model X_t = μ + ε_t − θε_{t−1} are μ = 3.952, θ = 0.2311, and σ² = 0.002592. Suppose that X_1 = 3.976 and X_2 = 3.901. Set ε̂_0 = 0.000.

• Question: Predict X_3 at t = 2. Give a 99% prediction interval.

• Solution: First, estimate ε̂_1 and ε̂_2:

ε̂_1 = X_1 − μ + θε̂_0 = 3.976 − 3.952 + 0.2311(0.000) = 0.024
ε̂_2 = X_2 − μ + θε̂_1 = 3.901 − 3.952 + 0.2311(0.024) = −0.045

Then E[X_3 | F_2] = 3.952 − 0.2311(−0.045) = 3.9624 with Var[X_3 | F_2] = 0.002592.
The 99% prediction interval is then
[3.9624 − 2.58√0.002592, 3.9624 + 2.58√0.002592] = [3.831, 4.094]
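The same computation can be sketched in Python (the function name `ma1_forecast` is illustrative; small differences from the slide's numbers come from rounding ε̂_2 to −0.045 there):

```python
import math

def ma1_forecast(mu, theta, sigma2, eps_hat_n, k, z=2.58):
    """k-step ahead MA(1) forecast with a normal prediction interval.

    k = 1 : mean = mu - theta * eps_hat_n, var = sigma^2
    k >= 2: mean = mu,                     var = sigma^2 * (1 + theta^2)
    """
    if k == 1:
        mean, var = mu - theta * eps_hat_n, sigma2
    else:
        mean, var = mu, sigma2 * (1 + theta**2)
    half = z * math.sqrt(var)
    return mean, (mean - half, mean + half)

# Back out the errors first, then forecast one step ahead at t = 2:
eps1 = 3.976 - 3.952 + 0.2311 * 0.0
eps2 = 3.901 - 3.952 + 0.2311 * eps1
mean, (lo, hi) = ma1_forecast(3.952, 0.2311, 0.002592, eps2, k=1)
# mean ≈ 3.9625, interval ≈ [3.831, 4.094]
```

Calling the same function with k = 2 exercises the second branch and reproduces the two-step example that follows.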
Forecasting in MA(1)
• Example: The parameters of an MA(1) model X_t = μ + ε_t − θε_{t−1} are μ = 3.952, θ = 0.2311, and σ² = 0.002592. Suppose that X_1 = 3.976 and X_2 = 3.901. Set ε̂_0 = 0.000.

• Question: Predict X_4 at t = 2. Give a 99% prediction interval.

• Solution:

E[X_4 | F_2] = 3.952 with Var[X_4 | F_2] = 0.002592(1 + 0.2311²) = 0.002730.
The 99% prediction interval is then
[3.952 − 2.58√0.002730, 3.952 + 2.58√0.002730] = [3.817, 4.087]
Forecasting in RStudio
• Forecasting can be done directly in R.

• res = arima(Series1, order=c(1,0,0))
res2 = predict(res, n.ahead=3)

• The 95% prediction intervals are given by
res2$pred - 1.96*res2$se
res2$pred + 1.96*res2$se

• For AR(1), specify p = 1 (order=c(1,0,0)); for MA(1), specify q = 1 (order=c(0,0,1)).
