Forecasting
Francis X. Diebold
University of Pennsylvania
1 / 323
Copyright
© 2013 onward, by Francis X. Diebold.
These materials are freely available for your use, but be warned:
they are highly preliminary, significantly incomplete, and rapidly
evolving. All are licensed under the Creative Commons
Attribution-NonCommercial-NoDerivatives 4.0 International
License. (Briefly: I retain copyright, but you can use, copy and
distribute non-commercially, so long as you give me attribution and
do not modify. To view a copy of the license, visit
http://creativecommons.org/licenses/by-nc-nd/4.0/.) In return I
ask that you please cite the books whenever appropriate, as:
”Diebold, F.X. (year here), Book Title Here, Department of
Economics, University of Pennsylvania,
http://www.ssc.upenn.edu/ fdiebold/Textbooks.html.”
2 / 323
Elements of Forecasting in Business, Finance, Economics
and Government
1. Forecasting in Action
1.1 Operations planning and control
1.2 Marketing
1.3 Economics
1.4 Financial speculation
1.5 Financial risk management
1.6 Capacity planning
1.7 Business and government budgeting
1.8 Demography
1.9 Crisis management
3 / 323
Forecasting Methods: An Overview
4 / 323
Statistical Graphics for Forecasting
5 / 323
Modeling and Forecasting Trend
- Modeling trend
- Estimating trend models
- Forecasting trend
- Selecting forecasting models using the Akaike and Schwarz criteria
- Application: forecasting retail sales
6 / 323
Modeling and Forecasting Seasonality
7 / 323
Characterizing Cycles
8 / 323
Modeling Cycles: MA, AR and ARMA Models
9 / 323
Forecasting Cycles
- Optimal forecasts
- Forecasting moving average processes
- Forecasting infinite-ordered moving averages
- Making the forecasts operational
- The chain rule of forecasting
- Application: forecasting employment
10 / 323
Putting it all Together: A Forecasting Model with Trend,
Seasonal and Cyclical Components
11 / 323
Forecasting with Regression Models
12 / 323
Evaluating and Combining Forecasts
13 / 323
Unit Roots, Stochastic Trends, ARIMA Forecasting
Models, and Smoothing
14 / 323
Volatility Measurement, Modeling and Forecasting
15 / 323
Useful Books, Journals and Software
Books
16 / 323
Useful Books, Journals and Software cont.
17 / 323
Useful Books, Journals and Software cont.
Special insights:
- Armstrong, J.S. (Ed.) (1999), The Principles of Forecasting.
  Norwell, Mass.: Kluwer Academic Publishers.
- Makridakis, S. and Wheelwright, S.C. (1997), Forecasting:
  Methods and Applications, Third Edition. New York: John Wiley.
- Bails, D.G. and Peppers, L.C. (1997), Business Fluctuations.
  Englewood Cliffs: Prentice Hall.
- Taylor, S. (1996), Modeling Financial Time Series, Second
  Edition. New York: Wiley.
18 / 323
Useful Books, Journals and Software cont.
Journals
- Journal of Forecasting
- Journal of Business Forecasting Methods and Systems
- Journal of Business and Economic Statistics
- Review of Economics and Statistics
- Journal of Applied Econometrics
19 / 323
Useful Books, Journals and Software cont.
Software
- General:
  - Eviews
  - S+
  - Minitab
  - SAS
  - R
  - Python
  - Many more...
- Cross-section:
  - Stata
- Open-ended:
  - Matlab
20 / 323
Useful Books, Journals and Software cont.
Online Information
- Resources for Economists:
21 / 323
A Brief Review of Probability, Statistics, and Regression
for Forecasting
Topics
- Discrete Random Variable
- Discrete Probability Distribution
- Continuous Random Variable
- Probability Density Function
- Moment
- Mean, or Expected Value
- Location, or Central Tendency
- Variance
- Dispersion, or Scale
- Standard Deviation
- Skewness
- Asymmetry
- Kurtosis
- Leptokurtosis
22 / 323
A Brief Review of Probability, Statistics, and Regression
for Forecasting
Topics cont.
- Skewness
- Asymmetry
- Kurtosis
- Leptokurtosis
- Normal, or Gaussian, Distribution
- Marginal Distribution
- Joint Distribution
- Covariance
- Correlation
- Conditional Distribution
- Conditional Moment
- Conditional Mean
- Conditional Variance
23 / 323
A Brief Review of Probability, Statistics, and Regression
for Forecasting cont.
Topics cont.
- Population Distribution
- Sample
- Estimator
- Statistic, or Sample Statistic
- Sample Mean
- Sample Variance
- Sample Standard Deviation
- Sample Skewness
- Sample Kurtosis
- χ² Distribution
- t Distribution
- F Distribution
- Jarque-Bera Test
24 / 323
Regression as Curve Fitting
Least-squares estimation:

min_{β0,β1} Σ_{t=1}^T [yt − β0 − β1 xt]²

Fitted values:

ŷt = β̂0 + β̂1 xt

Residuals:

et = yt − ŷt
25 / 323
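As a concrete illustration of the least-squares computation above, here is a minimal NumPy sketch on synthetic data (variable names are illustrative, not from the slides):

```python
import numpy as np

# Synthetic data: y_t = 1 + 0.5 x_t + noise
rng = np.random.default_rng(0)
T = 100
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)

# Solve min_b sum_t (y_t - b0 - b1 x_t)^2 by least squares
X = np.column_stack([np.ones(T), x])      # design matrix [1, x_t]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

y_fit = X @ beta_hat                      # fitted values yhat_t
e = y - y_fit                             # residuals e_t
print(beta_hat, e.var())
```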
Regression as a probabilistic model
Simple regression:
yt = β0 + β1 xt + εt
εt ∼ iid (0, σ²)
Multiple regression:
yt = β0 + β1 xt + β2 zt + εt
εt ∼ iid (0, σ²)
26 / 323
Regression as a probabilistic model cont.
27 / 323
Regression as a probabilistic model cont.
F-statistic 30.89

F = [(SSRres − SSR)/(k − 1)] / [SSR/(T − k)]

S.E. of regression 0.99

s² = Σ_{t=1}^T et² / (T − k)

SER = √s² = √[Σ_{t=1}^T et² / (T − k)]
28 / 323
Regression as a probabilistic model cont.
R-squared 0.58

R² = 1 − Σ_{t=1}^T et² / Σ_{t=1}^T (yt − ȳ)²

or

R² = 1 − [(1/T) Σ_{t=1}^T et²] / [(1/T) Σ_{t=1}^T (yt − ȳ)²]

Adjusted R-squared 0.56

R̄² = 1 − [(1/(T − k)) Σ_{t=1}^T et²] / [(1/(T − 1)) Σ_{t=1}^T (yt − ȳ)²]
29 / 323
Regression as a probabilistic model cont.
30 / 323
Regression as a probabilistic model cont.
yt = β0 + β1 xt + εt
εt = φεt−1 + vt
vt ∼ iid N(0, σ²)

DW = Σ_{t=2}^T (et − et−1)² / Σ_{t=1}^T et²
31 / 323
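A quick sketch of the Durbin-Watson computation from a residual series (assuming the residuals e are already in hand; NumPy only):

```python
import numpy as np

def durbin_watson(e):
    """DW = sum_{t=2}^T (e_t - e_{t-1})^2 / sum_{t=1}^T e_t^2.
    Values near 2 suggest no first-order serial correlation."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Example: AR(1) residuals (phi = 0.5) push DW well below 2
rng = np.random.default_rng(1)
v = rng.normal(size=200)
e = np.zeros(200)
for t in range(1, 200):
    e[t] = 0.5 * e[t - 1] + v[t]
print(durbin_watson(e))
```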
Regression of y on x and z
32 / 323
Scatterplot of y versus x
33 / 323
Scatterplot of y versus x – Regression Line Superimposed
34 / 323
Scatterplot of y versus z – Regression Line Superimposed
35 / 323
Residual Plot – Regression of y on x and z
36 / 323
Six Considerations Basic to Successful Forecasting
1. The Loss Function
L(e) = e²
L(e) = |e|
2. The Forecast Object
- Event outcome, event timing, time series.
3. The Forecast Statement
- Point forecast, interval forecast, density forecast, probability forecast
37 / 323
Six Considerations Basic to Successful Forecasting cont.
4. The Forecast Horizon
5. The Information Set
Ω_T^univariate = {yT, yT−1, ..., y1}
Ω_T^multivariate = {yT, xT, yT−1, xT−1, ..., y1, x1}
6. Methods and Complexity, the Parsimony Principle, and the Shrinkage Principle
- Signal vs. noise
- Smaller is often better
- Even incorrect restrictions can help
38 / 323
Six Considerations Basic to Successful Forecasting cont.
39 / 323
Six Considerations Basic to Successful Forecasting cont.
40 / 323
Quadratic Loss
41 / 323
Absolute Loss
42 / 323
Asymmetric Loss
43 / 323
Forecast Statement
44 / 323
Forecast Statement cont.
45 / 323
Extrapolation Forecast
46 / 323
Extrapolation Forecast cont.
47 / 323
Statistical Graphics For Forecasting
1. Why Graphical Analysis is Important
- Graphics helps us summarize and reveal patterns in data
- Graphics helps us identify anomalies in data
- Graphics facilitates and encourages comparison of different pieces of data
- Graphics enables us to present a huge amount of data in a small space, and it enables us to make huge data sets coherent
2. Simple Graphical Techniques
- Univariate, multivariate
- Time series vs. distributional shape
- Relational graphics
3. Elements of Graphical Style
- Know your audience, and know your goals.
- Show the data, and appeal to the viewer.
- Revise and edit, again and again.
4. Application: Graphing Four Components of Real GNP
48 / 323
Anscombe’s Quartet
49 / 323
Anscombe’s Quartet
50 / 323
Anscombe’s Quartet – Bivariate Scatterplot
51 / 323
1-Year Treasury Bond Rates
52 / 323
Change in 1-Year Treasury Bond Rates
53 / 323
Liquor Sales
54 / 323
Histogram and Descriptive Statistics – Change in 1-Year
Treasury Bond Rates
55 / 323
Scatterplot 1-Year vs. 10-year Treasury Bond Rates
56 / 323
Scatterplot Matrix – 1-, 10-, 20-, and 30-Year Treasury
Bond Rates
57 / 323
Time Series Plot – Aspect Ratio 1:1.6
58 / 323
Time Series Plot – Banked to 45 Degrees
59 / 323
Time Series Plot – Aspect Ratio 1:1.6
60 / 323
Graph
61 / 323
Graph
62 / 323
Graph
63 / 323
Graph
64 / 323
Graph
65 / 323
Components of Real GDP (Millions of Current Dollars,
Annual)
66 / 323
Modeling and Forecasting Trend
1. Modeling Trend
Tt = β0 + β1 TIMEt
Tt = β0 + β1 TIMEt + β2 TIMEt²
Tt = β0 e^(β1 TIMEt)
67 / 323
Modeling and Forecasting Trend
2. Estimating Trend Models
(β̂0, β̂1) = argmin_{β0,β1} Σ_{t=1}^T [yt − β0 − β1 TIMEt]²

(β̂0, β̂1, β̂2) = argmin_{β0,β1,β2} Σ_{t=1}^T [yt − β0 − β1 TIMEt − β2 TIMEt²]²

(β̂0, β̂1) = argmin_{β0,β1} Σ_{t=1}^T [yt − β0 e^(β1 TIMEt)]²

(β̂0, β̂1) = argmin_{β0,β1} Σ_{t=1}^T [ln yt − ln β0 − β1 TIMEt]²
68 / 323
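A minimal sketch of fitting the three trend models by least squares, with the exponential trend fit in logs as on the slide (synthetic data; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120
time = np.arange(1, T + 1, dtype=float)
y = np.exp(0.5 + 0.01 * time) * np.exp(0.05 * rng.normal(size=T))  # exponential trend + noise

# Linear trend: regress y on [1, TIME]
b_lin, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), time]), y, rcond=None)

# Quadratic trend: regress y on [1, TIME, TIME^2]
b_quad, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), time, time**2]), y, rcond=None)

# Exponential trend via logs: ln y = ln b0 + b1 TIME
b_exp, *_ = np.linalg.lstsq(np.column_stack([np.ones(T), time]), np.log(y), rcond=None)
beta0_hat, beta1_hat = np.exp(b_exp[0]), b_exp[1]
print(b_lin, b_quad, (beta0_hat, beta1_hat))
```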
Modeling and Forecasting Trend
3. Forecasting Trend
yt = β0 + β1 TIMEt + εt
yT+h,T = β0 + β1 TIMET+h
69 / 323
Modeling and Forecasting Trend
yT+h,T ± 1.96σ
ŷT+h,T ± 1.96σ̂
N(yT+h,T , σ 2 )
N(ŷT+h,T , σ̂ 2 )
70 / 323
Modeling and Forecasting Trend
71 / 323
Modeling and Forecasting Trend
R̄² = 1 − [Σ_{t=1}^T et² / (T − k)] / [Σ_{t=1}^T (yt − ȳ)² / (T − 1)] = 1 − s² / [Σ_{t=1}^T (yt − ȳ)² / (T − 1)]
72 / 323
Modeling and Forecasting Trend
73 / 323
Labor Force Participation Rate
74 / 323
Increasing and Decreasing Labor Trends
75 / 323
Labor Force Participation Rate
76 / 323
Linear Trend – Female Labor Force Participation Rate
77 / 323
Linear Trend – Male Labor Force Participation Rate
78 / 323
Volume on the New York Stock Exchange
79 / 323
Various Shapes of Quadratic Trends
80 / 323
Quadratic Trend – Volume on the New York Stock
Exchange
81 / 323
Log Volume on the New York Stock Exchange
82 / 323
Various Shapes of Exponential Trends
83 / 323
Linear Trend – Log Volume on the New York Stock
Exchange
84 / 323
Exponential Trend – Volume on the New York Stock
Exchange
85 / 323
Degree-of-Freedom Penalties – Various Model Selection
Criteria
86 / 323
Retail Sales
87 / 323
Retail Sales – Linear Trend Regression
88 / 323
Retail Sales – Linear Trend Residual Plot
89 / 323
Retail Sales – Quadratic Trend Regression
90 / 323
Retail Sales – Quadratic Trend Residual Plot
91 / 323
Retail Sales – Log Linear Trend Regression
92 / 323
Retail Sales – Log Linear Trend Residual Plot
93 / 323
Retail Sales – Exponential Trend Regression
94 / 323
Retail Sales – Exponential Trend Residual Plot
95 / 323
Model Selection Criteria
96 / 323
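The AIC and SIC used for model selection here can be computed directly from the residuals. A sketch using the sum-of-squares forms I believe the associated text uses (AIC = e^(2k/T) Σet²/T, SIC = T^(k/T) Σet²/T); treat the exact functional form as an assumption to verify:

```python
import numpy as np

def aic_sic(e, k):
    """AIC and SIC in sum-of-squares form:
    AIC = exp(2k/T) * mean(e^2),  SIC = T^(k/T) * mean(e^2),
    where k is the number of estimated parameters. Smaller is better."""
    e = np.asarray(e, dtype=float)
    T = e.size
    mse = np.mean(e ** 2)
    return np.exp(2 * k / T) * mse, T ** (k / T) * mse

# SIC penalizes extra parameters more heavily than AIC.
rng = np.random.default_rng(3)
e = rng.normal(size=150)
print(aic_sic(e, k=2), aic_sic(e, k=6))
```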
Retail Sales – History January, 1990 – December, 1994
97 / 323
Retail Sales – History January, 1990 – December, 1994
98 / 323
Retail Sales – History January, 1990 – December, 1994
99 / 323
Retail Sales – History January, 1990 – December, 1994
100 / 323
Modeling and Forecasting Seasonality
1. The Nature and Sources of Seasonality
2. Modeling Seasonality
D1 = (1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, ...)
D2 = (0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, ...)
D3 = (0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, ...)
D4 = (0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, ...)
yt = Σ_{i=1}^s γi Dit + εt

yt = β1 TIMEt + Σ_{i=1}^s γi Dit + εt

yt = β1 TIMEt + Σ_{i=1}^s γi Dit + Σ_{i=1}^{v1} δi^HD HDVit + Σ_{i=1}^{v2} δi^TD TDVit + εt
101 / 323
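A sketch of building quarterly seasonal dummies and estimating the trend-plus-seasonal regression (holiday and trading-day terms omitted for brevity; data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
T, s = 80, 4                               # 20 years of quarterly data
time = np.arange(1, T + 1, dtype=float)
season = np.arange(T) % s                  # 0,1,2,3,0,1,2,3,...
gamma_true = np.array([10.0, 12.0, 9.0, 11.0])
y = 0.3 * time + gamma_true[season] + rng.normal(size=T)

D = np.zeros((T, s))
D[np.arange(T), season] = 1.0              # full set of seasonal dummies, no intercept

X = np.column_stack([time, D])             # y_t = beta1*TIME_t + sum_i gamma_i D_it + eps_t
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta1:", coef[0], "seasonal factors:", coef[1:])
```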
Modeling and Forecasting Seasonality
3. Forecasting Seasonal Series
yt = β1 TIMEt + Σ_{i=1}^s γi Dit + Σ_{i=1}^{v1} δi^HD HDVit + Σ_{i=1}^{v2} δi^TD TDVit + εt

yT+h = β1 TIMET+h + Σ_{i=1}^s γi Di,T+h + Σ_{i=1}^{v1} δi^HD HDVi,T+h + Σ_{i=1}^{v2} δi^TD TDVi,T+h + εT+h

yT+h,T = β1 TIMET+h + Σ_{i=1}^s γi Di,T+h + Σ_{i=1}^{v1} δi^HD HDVi,T+h + Σ_{i=1}^{v2} δi^TD TDVi,T+h

ŷT+h,T = β̂1 TIMET+h + Σ_{i=1}^s γ̂i Di,T+h + Σ_{i=1}^{v1} δ̂i^HD HDVi,T+h + Σ_{i=1}^{v2} δ̂i^TD TDVi,T+h
102 / 323
Modeling and Forecasting Seasonality
yT+h,T ± 1.96σ
ŷT+h,T ± 1.96σ̂
N(yT+h,T , σ 2 )
N(ŷT+h,T , σ̂ 2 )
103 / 323
Gasoline Sales
104 / 323
Liquor Sales
105 / 323
Durable Goods Sales
106 / 323
Housing Starts, January, 1946 – November, 1994
107 / 323
Housing Starts, January, 1990 – November, 1994
108 / 323
Housing Starts Regression Results - Seasonal Dummy
Variable Model
109 / 323
Residual Plot
110 / 323
Housing Starts – Estimated Seasonal Factors
111 / 323
Housing Starts
112 / 323
Housing Starts
113 / 323
Characterizing Cycles
1. Covariance Stationary Time Series
- Realization
- Sample Path
- Covariance Stationary

Eyt = µt
Eyt = µ

corr(x, y) = cov(x, y) / (σx σy)

ρ(τ) = γ(τ)/γ(0), τ = 0, 1, 2, ....
115 / 323
2. White Noise
yt ∼ WN(0, σ²)
yt ∼ iid (0, σ²)
yt ∼ iid N(0, σ²)
116 / 323
2. White Noise Cont.
E(yt) = 0
var(yt) = σ²
E(yt |Ωt−1) = 0
var(yt |Ωt−1) = E[(yt − E(yt |Ωt−1))²|Ωt−1] = σ²
117 / 323
3. The Lag Operator
Lyt = yt−1
L²yt = L(L(yt)) = L(yt−1) = yt−2
B(L) = b0 + b1 L + b2 L² + ... + bm L^m
L^m yt = yt−m
∆yt = (1 − L)yt = yt − yt−1
(1 + .9L + .6L²)yt = yt + .9yt−1 + .6yt−2
B(L) = b0 + b1 L + b2 L² + ... = Σ_{i=0}^∞ bi L^i
B(L)εt = b0 εt + b1 εt−1 + b2 εt−2 + ... = Σ_{i=0}^∞ bi εt−i
118 / 323
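A small sketch of applying a lag polynomial B(L) to a series, matching the expansion above (the helper function is my own, not from the text):

```python
import numpy as np

def apply_lag_poly(b, x):
    """Return B(L)x, where B(L) = b[0] + b[1]L + ... + b[m]L^m.
    The first m entries are NaN because those lags are unavailable."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    m = len(b) - 1
    for t in range(m, len(x)):
        out[t] = sum(b[j] * x[t - j] for j in range(len(b)))
    return out

x = np.arange(1.0, 11.0)
print(apply_lag_poly([1.0, 0.9, 0.6], x))  # (1 + .9L + .6L^2) x_t
print(apply_lag_poly([1.0, -1.0], x))      # first difference (1 - L) x_t
```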
4.Wold’s Theorem, the General Linear Process, and
Rational Distributed Lags
Wold’s Theorem
∞
X
yt = B(L)εt = bi εt−i
i=0
εt ∼ WN(0, σ 2 )
where
b0 = 1
and
∞
X
b2i < ∞
i=0
119 / 323
The General Linear Process
yt = B(L)εt = Σ_{i=0}^∞ bi εt−i

εt ∼ WN(0, σ²),
where b0 = 1 and
Σ_{i=0}^∞ bi² < ∞
120 / 323
The General Linear Process Cont.
Unconditional mean and variance:

E(yt) = E(Σ_{i=0}^∞ bi εt−i) = Σ_{i=0}^∞ bi E(εt−i) = Σ_{i=0}^∞ bi · 0 = 0

var(yt) = var(Σ_{i=0}^∞ bi εt−i) = Σ_{i=0}^∞ bi² var(εt−i) = Σ_{i=0}^∞ bi² σ² = σ² Σ_{i=0}^∞ bi²

Conditional mean and variance:

E(yt |Ωt−1) = Σ_{i=1}^∞ bi εt−i

var(yt |Ωt−1) = E[(yt − E(yt |Ωt−1))²|Ωt−1] = E(ε²t |Ωt−1) = E(ε²t) = σ²
121 / 323
Rational Distributed Lags
B(L) = Θ(L)/Φ(L)

Θ(L) = Σ_{i=0}^q θi L^i

Φ(L) = Σ_{i=0}^p φi L^i

B(L) ≈ Θ(L)/Φ(L)
122 / 323
5. Estimation and Inference for the Mean, Autocorrelation
and Partial Autocorrelation Functions

ȳ = (1/T) Σ_{t=1}^T yt

ρ̂(τ) ∼ N(0, 1/T)

√T ρ̂(τ) ∼ N(0, 1)

T ρ̂²(τ) ∼ χ²(1)

QBP = T Σ_{τ=1}^m ρ̂²(τ)

QLB = T(T + 2) Σ_{τ=1}^m [1/(T − τ)] ρ̂²(τ)
124 / 323
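A sketch of the sample autocorrelations and the Ljung-Box statistic exactly as defined above (NumPy only; the p-value lookup is omitted to keep it dependency-free):

```python
import numpy as np

def sample_acf(y, max_lag):
    """rho_hat(tau) = gamma_hat(tau)/gamma_hat(0), tau = 1..max_lag."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    T = y.size
    g0 = np.sum(y ** 2) / T
    return np.array([np.sum(y[tau:] * y[:-tau]) / T / g0
                     for tau in range(1, max_lag + 1)])

def ljung_box(y, m):
    """Q_LB = T(T+2) sum_{tau=1}^m rho_hat(tau)^2 / (T - tau)."""
    T = len(y)
    r = sample_acf(y, m)
    taus = np.arange(1, m + 1)
    return T * (T + 2) * np.sum(r ** 2 / (T - taus))

rng = np.random.default_rng(5)
print(ljung_box(rng.normal(size=200), m=12))  # roughly chi^2(12) under white noise
```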
A Rigid Cycle Pattern
125 / 323
Autocorrelation Function, One-Sided Gradual Damping
126 / 323
Autocorrelation Function, Non-Damping
127 / 323
Autocorrelation Function, Gradual Damped Oscillation
128 / 323
Autocorrelation Function, Sharp Cutoff
129 / 323
Realization of White Noise Process
130 / 323
Population Autocorrelation Function of White Noise
Process
131 / 323
Population Partial Autocorrelation Function of White Noise
Process
132 / 323
Canadian Employment Index
133 / 323
Canadian Employment Index Correlogram
Sample: 1962:1 1993:4
Included observations: 128
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.949 0.949 .088 118.07 0.000
2 0.877 −0.244 .088 219.66 0.000
3 0.795 −0.101 .088 303.72 0.000
4 0.707 −0.070 .088 370.82 0.000
5 0.617 −0.063 .088 422.27 0.000
6 0.526 −0.048 .088 460.00 0.000
7 0.438 −0.033 .088 486.32 0.000
8 0.351 −0.049 .088 503.41 0.000
9 0.258 −0.149 .088 512.70 0.000
10 0.163 −0.070 .088 516.43 0.000
11 0.073 −0.011 .088 517.20 0.000
12 −0.005 0.016 .088 517.21 0.000
134 / 323
Canadian Employment Index, Sample Autocorrelation and
Partial Autocorrelation Functions
135 / 323
Modeling Cycles: MA, AR, and ARMA Models
yt = εt + θεt−1 = (1 + θL)εt
εt ∼ WN(0, σ²)
If invertible (|θ| < 1), the autoregressive representation is:
yt = θyt−1 − θ²yt−2 + θ³yt−3 − ... + εt
136 / 323
Modeling Cycles: MA, AR, and ARMA Models Cont.
var(yt |Ωt−1) = E[(yt − E(yt |Ωt−1))²|Ωt−1] = E(ε²t |Ωt−1) = E(ε²t) = σ²
137 / 323
The MA(q) Process
yt = εt + θ1 εt−1 + ... + θq εt−q = Θ(L)εt
where
Θ(L) = 1 + θ1 L + ... + θq L^q
138 / 323
The AR(1) Process
yt = φyt−1 + εt
εt ∼ WN(0, σ²)
If covariance stationary (|φ| < 1), the moving average representation is:
yt = εt + φεt−1 + φ²εt−2 + ...
139 / 323
Moment Structure
Unconditional mean:
E(yt) = 0
Unconditional variance:
var(yt) = σ² + φ²σ² + φ⁴σ² + ... = σ² Σ_{i=0}^∞ φ^{2i} = σ²/(1 − φ²)
140 / 323
Moment Structure Cont.
Conditional mean:
E(yt |yt−1) = φyt−1 + E(εt) = φyt−1 + 0 = φyt−1
Conditional variance:
var(yt |yt−1) = var(εt) = 0 + σ² = σ²
141 / 323
Moment Structure Cont.
Autocovariances and autocorrelations:
yt = φyt−1 + εt
yt yt−τ = φyt−1 yt−τ + εt yt−τ
For τ ≥ 1, taking expectations gives γ(τ) = φγ(τ − 1). Thus
γ(τ) = φ^τ σ²/(1 − φ²), τ = 0, 1, 2, ....
and
ρ(τ) = φ^τ, τ = 0, 1, 2, ....
Partial autocorrelations:
p(1) = φ; p(τ) = 0 for τ > 1.
143 / 323
The AR(p) Process
144 / 323
The ARMA(1,1) Process
yt = φyt−1 + εt + θεt−1
εt ∼ WN(0, σ²)
MA representation (if |φ| < 1):
yt = [(1 + θL)/(1 − φL)] εt
AR representation (if |θ| < 1):
[(1 − φL)/(1 + θL)] yt = εt
145 / 323
The ARMA(p,q) Process
εt ∼ WN(0, σ²)
Φ(L)yt = Θ(L)εt
146 / 323
Realization of Two MA(1) Processes
147 / 323
Population Autocorrelation Function MA(1) Process
θ = .4
148 / 323
Population Autocorrelation Function MA(1) Process
θ = .95
149 / 323
Population Partial Autocorrelation Function MA(1)
Process
θ = .4
150 / 323
Population Partial Autocorrelation Function MA(1)
Process
θ = .95
151 / 323
Realization of Two AR(1) Processes
152 / 323
Population Autocorrelation Function AR(1) Process
φ = .4
153 / 323
Population Autocorrelation Function AR(1) Process
φ = .95
154 / 323
Population Partial Autocorrelation Function AR(1) Process
φ = .4
155 / 323
Population Partial Autocorrelation Function AR(1) Process
φ = .95
156 / 323
Population Autocorrelation Function AR(2) Process with
Complex Roots
157 / 323
Employment: MA(4) Model
158 / 323
Employment MA(4) Residual Plot
159 / 323
Employment: MA(4) Model
Residual Correlogram
Sample: 1962:1 1993:4
Included observations: 128
Q−statistic probabilities adjusted for 4 ARMA term(s)
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.345 0.345 .088 15.614
2 0.660 0.614 .088 73.089
3 0.534 0.426 .088 111.01
4 0.427 −0.042 .088 135.49
5 0.347 -0.398 .088 151.79 0.000
6 0.484 0.145 .088 183.70 0.000
7 0.121 −0.118 .088 185.71 0.000
8 0.348 −0.048 .088 202.46 0.000
9 0.148 −0.019 .088 205.50 0.000
10 0.102 −0.066 .088 206.96 0.000
11 0.081 −0.098 .088 207.89 0.000
12 0.029 −0.113 .088 208.01 0.000
160 / 323
Employment: MA(4) Model
Residual Sample Autocorrelation and Partial Autocorrelation
Functions, With Plus or Minus Two Standard Error Bands
161 / 323
Employment: AR(2) Model
162 / 323
Employment AR(2) Model Residual Plot
163 / 323
Employment AIC Values of Various ARMA Models
                        MA Order
                0       1       2       3       4
AR Order  0             2.86    2.32    2.47    2.20
          1     1.01    .83     .79     .80     .81
          2     .762    .77     .78     .80     .80
          3     .77     .761    .77     .78     .79
          4     .79     .79     .77     .79     .80
164 / 323
Employment SIC Values of Various ARMA Models
                        MA Order
                0       1       2       3       4
AR Order  0             2.91    2.38    2.56    2.31
          1     1.05    .90     .88     .91     .94
          2     .83     .86     .89     .92     .96
          3     .86     .87     .90     .94     .96
          4     .90     .92     .93     .97     1.00
165 / 323
Employment: ARMA(3,1) Model
166 / 323
Employment ARMA(3,1) Model Residual Plot
167 / 323
Employment: ARMA(3,1) Model Residual Correlogram
Sample: 1962:1 1993:4
Included observations: 128
Q−statistic probabilities adjusted for 4 ARMA term(s)
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 −0.032 −0.032 .09 0.1376
2 0.041 0.040 .09 0.3643
3 0.014 0.017 .09 0.3904
4 0.048 0.047 .09 0.6970
5 0.006 0.007 .09 0.7013 0.402
6 0.013 0.009 .09 0.7246 0.696
7 −0.017 −0.019 .09 0.7650 0.858
8 0.064 0.060 .09 1.3384 0.855
9 0.092 0.097 .09 2.5182 0.774
10 0.039 0.040 .09 2.7276 0.842
11 −0.016 −0.022 .09 2.7659 0.906
12 −0.137 −0.153 .09 5.4415 0.710
168 / 323
Employment: ARMA(3,1) Model
169 / 323
Forecasting Cycles
Consider the general linear process
yt = B(L)εt = Σ_{i=0}^∞ bi εt−i
where
εt ∼ WN(0, σ²)
170 / 323
Forecasting Cycles Cont.
b0 = 1, and
σ² Σ_{i=0}^∞ bi² < ∞.

Forecast error:

eT+h,T = (yT+h − yT+h,T) = Σ_{i=0}^{h−1} bi εT+h−i,

with variance

σh² = σ² Σ_{i=0}^{h−1} bi².
171 / 323
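A sketch computing the h-step forecast-error variance σh² from the moving-average weights bi, per the formula above; the AR(1) weights bi = φ^i are used as an illustrative example:

```python
import numpy as np

def h_step_variance(b, sigma2, h):
    """sigma_h^2 = sigma^2 * sum_{i=0}^{h-1} b_i^2 for the general linear process."""
    b = np.asarray(b, dtype=float)
    return sigma2 * np.sum(b[:h] ** 2)

phi, sigma2 = 0.8, 1.0
b = phi ** np.arange(50)                 # MA weights of a stationary AR(1): b_i = phi^i
for h in (1, 4, 12):
    var_h = h_step_variance(b, sigma2, h)
    print(h, var_h, 1.96 * np.sqrt(var_h))   # half-width of the 95% interval
```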
Interval and Density Forecasts
yT+h,T ± 1.96σh
N(yT+h,T , σh2 )
172 / 323
Employment History and Forecast MA(4) Model
173 / 323
Employment History and Long-Horizon Forecast MA(4)
Model
174 / 323
Employment History, Forecast and Realization MA(4)
Model
175 / 323
Employment History and Forecast AR(2) Model
176 / 323
Employment History and Long-Horizon Forecast AR(2)
Model
177 / 323
Employment History and Very Long-Horizon Forecast
AR(2) Model
178 / 323
Employment History, Forecast and Realization AR(2)
Model
179 / 323
Putting it all Together
A Forecast Model with Trend, Seasonal and Cyclical Components
yt = Tt(θ) + Σ_{i=1}^s γi Dit + Σ_{i=1}^{v1} δi^HD HDVit + Σ_{i=1}^{v2} δi^TD TDVit + εt

Φ(L)εt = Θ(L)vt
Φ(L) = 1 − φ1 L − ... − φp L^p
Θ(L) = 1 + θ1 L + ... + θq L^q
vt ∼ WN(0, σ²).
180 / 323
Point Forecasting
yT+h = TT+h(θ) + Σ_{i=1}^s γi Di,T+h + Σ_{i=1}^{v1} δi^HD HDVi,T+h + Σ_{i=1}^{v2} δi^TD TDVi,T+h + εT+h

yT+h,T = TT+h(θ) + Σ_{i=1}^s γi Di,T+h + Σ_{i=1}^{v1} δi^HD HDVi,T+h + Σ_{i=1}^{v2} δi^TD TDVi,T+h

ŷT+h,T = TT+h(θ̂) + Σ_{i=1}^s γ̂i Di,T+h + Σ_{i=1}^{v1} δ̂i^HD HDVi,T+h + Σ_{i=1}^{v2} δ̂i^TD TDVi,T+h
181 / 323
Interval Forecasting and Density Forecasting
Interval Forecasting:
ŷT+h,T ± 1.96σ̂h
Density Forecasting:
N(ŷT+h,T , σ̂h2 )
182 / 323
Recursive Estimation
yt = Σ_{k=1}^K βk xkt + εt
εt ∼ iid N(0, σ²),
t = 1, ..., T.
OLS estimation uses the full sample, t = 1, ..., T. Recursive estimation
instead uses expanding subsamples t = 1, ..., τ, for τ = K, ..., T,
yielding a sequence of parameter estimates.
183 / 323
Recursive Residuals
êt+1,t ∼ N(0, σ 2 rt )
184 / 323
Standardized Recursive Residuals and CUSUM
wt+1,t ≡ êt+1,t / (σ√rt),
t = K, ..., T − 1.
Then

CUSUMt* ≡ Σ_{t=K}^{t*} wt+1,t,  t* = K, ..., T − 1
185 / 323
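A sketch of the recursive residuals and CUSUM sequence. Here rt = 1 + x′t+1 (X′t Xt)⁻¹ xt+1 as in the standard recursive-residual setup, and σ is estimated once from full-sample OLS for simplicity (the slides treat it as known):

```python
import numpy as np

def recursive_residuals(X, y, K):
    """Standardized one-step recursive residuals w_{t+1,t}."""
    T = len(y)
    beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.sqrt(np.sum((y - X @ beta_full) ** 2) / (T - K))
    w = []
    for t in range(K, T):                   # forecast obs t from obs 0..t-1
        Xt, yt = X[:t], y[:t]
        beta_t, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
        e = y[t] - X[t] @ beta_t            # recursive residual e_{t+1,t}
        r = 1 + X[t] @ np.linalg.solve(Xt.T @ Xt, X[t])
        w.append(e / (sigma * np.sqrt(r)))
    return np.array(w)

rng = np.random.default_rng(6)
T = 120
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)
w = recursive_residuals(X, y, K=2)
cusum = np.cumsum(w)                        # wanders near 0 under parameter stability
print(cusum[-1])
```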
Liquor Sales, 1968.1-1993.12
186 / 323
Log Liquor Sales, 1968.01 - 1993.12
187 / 323
Log Liquor Sales: Quadratic Trend Regression
188 / 323
Liquor Sales Quadratic Trend Regression Residual Plot
189 / 323
Liquor Sales Quadratic Trend Regression Residual
Correlogram
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.117 0.117 .056 4.3158 0.0
2 −0.149 −0.165 .056 11.365 0.0
3 −0.106 −0.069 .056 14.943 0.0
4 −0.014 −0.017 .056 15.007 0.0
5 0.142 0.125 .056 21.449 0.0
6 0.041 −0.004 .056 21.979 0.0
7 0.134 0.175 .056 27.708 0.0
8 −0.029 −0.046 .056 27.975 0.0
9 −0.136 −0.080 .056 33.944 0.0
10 −0.205 −0.206 .056 47.611 0.0
11 0.056 0.080 .056 48.632 0.0
12 0.888 0.879 .056 306.26 0.0
13 0.055 −0.507 .056 307.25 0.0
14 −0.187 −0.159 .056 318.79 0.0
15 −0.159 −0.144 .056 327.17 0.0
190 / 323
Liquor Sales Quadratic Trend Regression Residual Sample
Autocorrelation Functions
191 / 323
Liquor Sales Quadratic Trend Regression Residual Partial
Autocorrelation Functions
192 / 323
Log Liquor Sales: Quadratic Trend Regression With
Seasonal Dummies and AR(3) Disturbances
193 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies Residual Plot
194 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies Residual Correlogram
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.700 0.700 .056 154.34 0.000
2 0.686 0.383 .056 302.86 0.000
3 0.725 0.369 .056 469.36 0.000
4 0.569 −0.141 .056 572.36 0.000
5 0.569 0.017 .056 675.58 0.000
6 0.577 0.093 .056 782.19 0.000
7 0.460 −0.078 .056 850.06 0.000
8 0.480 0.043 .056 924.38 0.000
9 0.466 0.030 .056 994.46 0.000
10 0.327 −0.188 .056 1029.1 0.000
11 0.364 0.019 .056 1072.1 0.000
12 0.355 0.089 .056 1113.3 0.000
13 0.225 −0.119 .056 1129.9 0.000
14 0.291 0.065 .056 1157.8 0.000
15 0.211 −0.119 .056 1172.4 0.000
195 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies Residual Sample Autocorrelation Functions
196 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies Residual Sample Partial Autocorrelation
Functions
197 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies and AR(3) Disturbances Residual Plot
198 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies and AR(3) Disturbances Residual Correlogram
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.056 0.056 .056 0.9779 0.323
2 0.037 0.034 .056 1.4194 0.492
3 0.024 0.020 .056 1.6032 0.659
4 −0.084 −0.088 .056 3.8256 0.430
5 −0.007 0.001 .056 3.8415 0.572
6 0.065 0.072 .056 5.1985 0.519
7 −0.041 −0.044 .056 5.7288 0.572
8 0.069 0.063 .056 7.2828 0.506
9 0.080 0.074 .056 9.3527 0.405
10 −0.163 −0.169 .056 18.019 0.055
11 −0.009 −0.005 .056 18.045 0.081
12 0.145 0.175 .056 24.938 0.015
13 −0.074 −0.078 .056 26.750 0.013
14 0.149 0.113 .056 34.034 0.002
15 −0.039 −0.060 .056 34.532 0.003
199 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies and AR(3) Disturbances Residual Sample
Autocorrelation Functions
200 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies and AR(3) Disturbances Residual Sample Partial
Autocorrelation Functions
201 / 323
Liquor Sales Quadratic Trend Regression with Seasonal
Dummies and AR(3) Disturbances Residual Histogram and
Normality Test
202 / 323
Log Liquor Sales History and 12-Month-Ahead Forecast
203 / 323
Log Liquor Sales History, 12-Month-Ahead Forecast, and
Realization
204 / 323
Log Liquor Sales History and 60-Month-Ahead Forecast
205 / 323
Log Liquor Sales Long History and 60-Month-Ahead
Forecast
206 / 323
Liquor Sales Long History and 60-Month-Ahead Forecast
207 / 323
Recursive Analysis Constant Parameter Model
208 / 323
Recursive Analysis Breaking Parameter Model
209 / 323
Log Liquor Sales: Quadratic Trend Regression with
Seasonal Dummies and AR(3) Disturbances: Residuals and
Two Standard Error Bands
210 / 323
Log Liquor Sales: Quadratic Trend Regression with
Seasonal Dummies and AR(3) Disturbances Recursive
Parameter Estimates
211 / 323
Log Liquor Sales: Quadratic Trend Regression with
Seasonal Dummies and AR(3) Disturbances CUSUM
Analysis
212 / 323
Forecasting with Regression Models
yt = β0 + β1 xt + εt
εt ∼ N(0, σ²)
Point forecast:
yT+h,T |x*T+h = β0 + β1 x*T+h
Density forecast:
N(yT+h,T |x*T+h, σ²)
213 / 323
Unconditional Forecasting Models
yT+h,T = β0 + β1 xT+h,T
214 / 323
Distributed Lags
yt = β0 + δxt−1 + εt
Generalize to
yt = β0 + Σ_{i=1}^{Nx} δi xt−i + εt
215 / 323
Polynomial Distributed Lags
min_{β0, δi} Σ_{t=Nx+1}^T [yt − β0 − Σ_{i=1}^{Nx} δi xt−i]²

subject to
P(Nx) = 0
216 / 323
Rational Distributed Lags
yt = [A(L)/B(L)] xt + εt
Equivalently,
B(L)yt = A(L)xt + B(L)εt
Another way:
distributed lag regression with lagged dependent variables

yt = β0 + Σ_{i=1}^{Ny} αi yt−i + Σ_{j=1}^{Nx} δj xt−j + εt

Another way:
distributed lag regression with ARMA disturbances

yt = β0 + Σ_{i=1}^{Nx} δi xt−i + εt
εt = [Θ(L)/Φ(L)] vt
vt ∼ WN(0, σ²)
218 / 323
Another Way: The Transfer Function Model and Various
Special Cases
The general transfer function model:
yt = [A(L)/B(L)] xt + [C(L)/D(L)] εt
Special cases:
Univariate ARMA:
yt = [C(L)/D(L)] εt
Distributed Lag with Lagged Dep. Variables:
B(L)yt = A(L)xt + εt, or
yt = [A(L)/B(L)] xt + [1/B(L)] εt
Distributed Lag with ARMA Disturbances:
yt = A(L)xt + [C(L)/D(L)] εt
219 / 323
Vector Autoregressions
e.g., bivariate VAR(1):
y1,t = φ11 y1,t−1 + φ12 y2,t−1 + ε1,t
y2,t = φ21 y1,t−1 + φ22 y2,t−1 + ε2,t
where the disturbances are white noise, possibly correlated with each other.
220 / 323
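A sketch of iterating a bivariate VAR(1) forward to produce point forecasts (the chain rule of forecasting; the coefficient matrix below is illustrative, not estimated):

```python
import numpy as np

Phi = np.array([[0.5, 0.2],      # illustrative VAR(1) coefficient matrix
                [0.1, 0.6]])
y_T = np.array([1.0, -0.5])      # last observed vector

# h-step point forecast in the zero-mean case: y_{T+h,T} = Phi^h y_T
forecasts = []
y_hat = y_T.copy()
for h in range(1, 9):
    y_hat = Phi @ y_hat
    forecasts.append(y_hat.copy())
print(np.array(forecasts))       # decays toward zero when the VAR is stationary
```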
Point and Interval Forecast
221 / 323
U.S. Housing Starts and Completions, 1968.01-1996.06
222 / 323
Starts Correlogram
Sample: 1968:01 1991:12
Included observations: 288
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.937 0.937 0.059 255.24 0.000
2 0.907 0.244 0.059 495.53 0.000
3 0.877 0.054 0.059 720.95 0.000
4 0.838 −0.077 0.059 927.39 0.000
5 0.795 −0.096 0.059 1113.7 0.000
6 0.751 −0.058 0.059 1280.9 0.000
7 0.704 −0.067 0.059 1428.2 0.000
8 0.650 −0.098 0.059 1554.4 0.000
9 0.604 0.004 0.059 1663.8 0.000
10 0.544 −0.129 0.059 1752.6 0.000
11 0.496 0.029 0.059 1826.7 0.000
12 0.446 −0.008 0.059 1886.8 0.000
13 0.405 0.076 0.059 1936.8 0.000
14 0.346 −0.144 0.059 1973.3 0.000
223 / 323
Starts Sample Autocorrelations
224 / 323
Starts Sample Partial Autocorrelations
225 / 323
Completions Correlogram
Sample: 1968:01 1991:12
Included observations: 288
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.939 0.939 0.059 256.61 0.000
2 0.920 0.328 0.059 504.05 0.000
3 0.896 0.066 0.059 739.19 0.000
4 0.874 0.023 0.059 963.73 0.000
5 0.834 −0.165 0.059 1168.9 0.000
6 0.802 −0.067 0.059 1359.2 0.000
7 0.761 −0.100 0.059 1531.2 0.000
8 0.721 −0.070 0.059 1686.1 0.000
9 0.677 −0.055 0.059 1823.2 0.000
10 0.633 −0.047 0.059 1943.7 0.000
11 0.583 −0.080 0.059 2046.3 0.000
12 0.533 −0.073 0.059 2132.2 0.000
226 / 323
Completions Sample Autocorrelations
227 / 323
Completions Partial Autocorrelations
228 / 323
Starts and Completions: Sample Cross Correlations
229 / 323
VAR Order Selection with AIC and SIC
230 / 323
VAR Starts Equation
231 / 323
VAR Start Equation Residual Plot
232 / 323
VAR Starts Equation Residual Correlogram
Sample: 1968:01 1991:12
Included observations: 284
Acorr.  P. Acorr.  Std. Error  Ljung-Box  p-value
1 0.001 0.001 0.059 0.0004 0.985
2 0.003 0.003 0.059 0.0029 0.999
3 0.006 0.006 0.059 0.0119 1.000
4 0.023 0.023 0.059 0.1650 0.997
5 −0.013 −0.013 0.059 0.2108 0.999
6 0.022 0.021 0.059 0.3463 0.999
7 0.038 0.038 0.059 0.7646 0.998
8 −0.048 −0.048 0.059 1.4362 0.994
9 0.056 0.056 0.059 2.3528 0.985
10 −0.114 −0.116 0.059 6.1868 0.799
11 −0.038 −0.038 0.059 6.6096 0.830
12 −0.030 −0.028 0.059 6.8763 0.866
13 0.192 0.193 0.059 17.947 0.160
14 0.014 0.021 0.059 18.010 0.206
233 / 323
VAR Starts Equation Residual Sample Autocorrelations
234 / 323
VAR Starts Equation Residual Sample Partial
Autocorrelations
235 / 323
Evaluating and Combining Forecasts
Evaluating a single forecast
Process:
yt = µ + εt + b1 εt−1 + b2 εt−2 + ...
εt ∼ WN(0, σ²),
237 / 323
Assessing optimality with respect to an information set
Unforecastability principle: The errors from good forecasts should
not be forecastable!
Regression:

et+h,t = α + Σ_{i=1}^{k−1} αi xit + ut

MSE = (1/T) Σ_{t=1}^T e²t+h,t

MSPE = (1/T) Σ_{t=1}^T p²t+h,t

RMSE = √[(1/T) Σ_{t=1}^T e²t+h,t]
239 / 323
Forecast encompassing
yt+h = βa yᵃt+h,t + βb yᵇt+h,t + εt+h,t
240 / 323
Variance-covariance forecast combination

σc² = ω²σ²aa + (1 − ω)²σ²bb + 2ω(1 − ω)σ²ab

ω* = (σ²bb − σ²ab) / (σ²aa + σ²bb − 2σ²ab)

ω̂* = (σ̂²bb − σ̂²ab) / (σ̂²aa + σ̂²bb − 2σ̂²ab)
241 / 323
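A sketch of estimating the optimal combining weight ω̂* from two forecast-error series, per the formula above (synthetic errors; names are illustrative):

```python
import numpy as np

def combining_weight(ea, eb):
    """omega* = (var_b - cov_ab) / (var_a + var_b - 2 cov_ab):
    the weight on forecast a that minimizes combined error variance."""
    var_a, var_b = np.var(ea), np.var(eb)
    cov_ab = np.cov(ea, eb)[0, 1]
    return (var_b - cov_ab) / (var_a + var_b - 2 * cov_ab)

rng = np.random.default_rng(7)
common = rng.normal(size=500)
ea = common + rng.normal(scale=1.0, size=500)   # errors of forecast a
eb = common + rng.normal(scale=2.0, size=500)   # errors of forecast b (noisier)
w = combining_weight(ea, eb)
ec = w * ea + (1 - w) * eb
print(w, np.var(ea), np.var(eb), np.var(ec))    # combined variance is smallest
```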
Regression-based forecast combination
yt+h = β0 + β1 yᵃt+h,t + β2 yᵇt+h,t + εt+h,t
242 / 323
Unit Roots, Stochastic Trends, ARIMA Forecasting
Models, and Smoothing
Φ(L)yt = Θ(L)εt
Φ(L) = Φ′(L)(1 − L)
Φ′(L)∆yt = Θ(L)εt
I(0) vs I(1) processes
243 / 323
Unit Roots, Stochastic Trends, ARIMA Forecasting
Models, and Smoothing Cont.
- Random Walk:
yt = yt−1 + εt
εt ∼ WN(0, σ²)
- Random Walk with Drift:
yt = δ + yt−1 + εt
εt ∼ WN(0, σ²)
For the driftless random walk (given y0):
E(yt) = y0
var(yt) = tσ²
lim_{t→∞} var(yt) = ∞
245 / 323
Random walk with drift
yt = δ + yt−1 + εt
εt ∼ WN(0, σ²)
Given y0:
E(yt) = y0 + tδ
var(yt) = tσ²
lim_{t→∞} var(yt) = ∞
246 / 323
ARIMA(p,1,q) model
Φ(L)(1 − L)yt = Θ(L)εt
where
Φ(L) = 1 − Φ1 L − ... − Φp L^p
Θ(L) = 1 − Θ1 L − ... − Θq L^q
and all the roots of both lag operator polynomials are outside the
unit circle.
247 / 323
ARIMA(p,d,q) model
Φ(L)(1 − L)^d yt = Θ(L)εt
where
Φ(L) = 1 − Φ1 L − ... − Φp L^p
Θ(L) = 1 − Θ1 L − ... − Θq L^q
and all the roots of both lag operator polynomials are outside the
unit circle.
248 / 323
Properties of ARIMA(p,1,q) processes
249 / 323
Random walk example
Point forecast
Recall that for the AR(1) process,
yt = φyt−1 + εt
εt ∼ WN(0, σ²)
the optimal forecast is
yT+h,T = φ^h yT
Thus in the random walk case (φ = 1),
yT+h,T = yT for all h.
250 / 323
Random walk example Cont.
Interval and density forecasts
Recall the error associated with the optimal AR(1) forecast:
eT+h,T = (yT+h − yT+h,T) = εT+h + φεT+h−1 + ... + φ^{h−1}εT+1
with variance

σh² = σ² Σ_{i=0}^{h−1} φ^{2i}

Thus in the random walk case,

eT+h,T = Σ_{i=0}^{h−1} εT+h−i

σh² = hσ²

h-step-ahead 95% interval: yT ± 1.96σ√h
252 / 323
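A sketch of random-walk interval forecasts with σh² = hσ², per the slide; σ is estimated from first differences (simulated data, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(8)
y = np.cumsum(rng.normal(size=300))          # simulated random walk
sigma_hat = np.std(np.diff(y), ddof=1)       # estimate sigma from first differences

y_T = y[-1]                                  # point forecast is just y_T at every h
for h in (1, 4, 12):
    half = 1.96 * sigma_hat * np.sqrt(h)     # interval widens like sqrt(h)
    print(h, (y_T - half, y_T + half))
```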
Unit Root Tests
yt = φyt−1 + εt
εt ∼ iid N(0, σ²)

τ̂ = (φ̂ − 1) / [s √(1/Σ_{t=2}^T y²t−1)]

"Dickey-Fuller τ̂ distribution"
Trick regression:
yt − yt−1 = (φ − 1)yt−1 + εt
253 / 323
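A sketch of the Dickey-Fuller τ̂ statistic in the no-constant, no-trend case, following the formula above (critical values come from the Dickey-Fuller distribution, not the normal; none are computed here):

```python
import numpy as np

def df_tau(y):
    """tau_hat = (phi_hat - 1)/se(phi_hat) from y_t = phi*y_{t-1} + eps_t."""
    y = np.asarray(y, dtype=float)
    y_lag, y_cur = y[:-1], y[1:]
    phi_hat = np.sum(y_lag * y_cur) / np.sum(y_lag ** 2)
    e = y_cur - phi_hat * y_lag
    s2 = np.sum(e ** 2) / (len(y_cur) - 1)
    return (phi_hat - 1) / np.sqrt(s2 / np.sum(y_lag ** 2))

rng = np.random.default_rng(9)
rw = np.cumsum(rng.normal(size=500))     # unit-root series
ar = np.zeros(500)                       # stationary AR(1), phi = 0.5
eps = rng.normal(size=500)
for t in range(1, 500):
    ar[t] = 0.5 * ar[t - 1] + eps[t]
print(df_tau(rw), df_tau(ar))            # stationary case is far more negative
```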
Allowing for nonzero mean under the alternative
Basic model:
(yt − µ) = φ(yt−1 − µ) + εt
which we rewrite as
yt = α + φyt−1 + εt
where
α = µ(1 − φ)
254 / 323
Allowing for deterministic linear trend under the alternative
Basic model:
(yt − a − b TIMEt) = φ(yt−1 − a − b TIMEt−1) + εt
or
yt = α + βTIMEt + φyt−1 + εt
where α = a(1 − φ) + bφ and β = b(1 − φ).
- Under the null hypothesis we have a random walk with drift,
yt = b + yt−1 + εt
255 / 323
Allowing for higher-order autoregressive dynamics
AR(p) process:

yt + Σ_{j=1}^p φj yt−j = εt

Rewrite:

yt = ρ1 yt−1 + Σ_{j=2}^p ρj (yt−j+1 − yt−j) + εt

where p ≥ 2, ρ1 = −Σ_{j=1}^p φj, and ρi = Σ_{j=i}^p φj, i = 2, ..., p.
256 / 323
Allowing for a nonzero mean in the AR(p) case
(yt − µ) + Σ_{j=1}^p φj (yt−j − µ) = εt

or

yt = α + ρ1 yt−1 + Σ_{j=2}^p ρj (yt−j+1 − yt−j) + εt,
257 / 323
Allowing for trend under the alternative
(yt − a − b TIMEt) + Σ_{j=1}^p φj (yt−j − a − b TIMEt−j) = εt

or

yt = k1 + k2 TIMEt + ρ1 yt−1 + Σ_{j=2}^p ρj (yt−j+1 − yt−j) + εt

where

k1 = a(1 + Σ_{i=1}^p φi) − b Σ_{i=1}^p iφi

and

k2 = b(1 + Σ_{i=1}^p φi)

In the unit root case, k1 = −b Σ_{i=1}^p iφi and k2 = 0.
258 / 323
The three augmented Dickey-Fuller regressions, with k − 1 augmentation lags:

yt = ρ1 yt−1 + Σ_{j=2}^{k−1} ρj (yt−j+1 − yt−j) + εt

yt = α + ρ1 yt−1 + Σ_{j=2}^{k−1} ρj (yt−j+1 − yt−j) + εt

yt = k1 + k2 TIMEt + ρ1 yt−1 + Σ_{j=2}^{k−1} ρj (yt−j+1 − yt−j) + εt
259 / 323
Simple moving average smoothing
260 / 323
Exponential Smoothing
261 / 323
Exponential smoothing algorithm
262 / 323
Demonstration that the weights are exponential
Start:
ȳt = αyt + (1 − α)ȳt−1
Substitute backward for ȳ:

ȳt = Σ_{j=0}^{t−1} wj yt−j

where
wj = α(1 − α)^j
263 / 323
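A sketch of the smoothing recursion, which reproduces the exponential weights wj = α(1 − α)^j derived above (initialization at y0 is one common choice, an assumption here):

```python
import numpy as np

def exp_smooth(y, alpha):
    """ybar_t = alpha*y_t + (1-alpha)*ybar_{t-1}, initialized at y_0."""
    ybar = np.empty_like(np.asarray(y, dtype=float))
    ybar[0] = y[0]
    for t in range(1, len(y)):
        ybar[t] = alpha * y[t] + (1 - alpha) * ybar[t - 1]
    return ybar

rng = np.random.default_rng(10)
y = np.cumsum(rng.normal(size=100))
print(exp_smooth(y, alpha=0.3)[-5:])   # smaller alpha -> smoother, slower to adapt
```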
Holt-Winters Smoothing
c1t = c1,t−1 + νt
264 / 323
Holt-Winters smoothing algorithm
1. Initialize at t = 2:
ȳ2 = y2
F2 = y2 − y1
2. Update for t = 3, ..., T:
ȳt = αyt + (1 − α)(ȳt−1 + Ft−1), 0 < α < 1
Ft = β(ȳt − ȳt−1) + (1 − β)Ft−1, 0 < β < 1
265 / 323
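A sketch of the two-equation Holt-Winters smoother matching the algorithm above (level ȳt and trend Ft), with the h-step forecast ȳT + h·FT; these are the standard recursions, so verify against the text:

```python
import numpy as np

def holt(y, alpha, beta):
    """Level/trend smoother: ybar_2 = y_2, F_2 = y_2 - y_1, then
    ybar_t = alpha*y_t + (1-alpha)*(ybar_{t-1} + F_{t-1}),
    F_t    = beta*(ybar_t - ybar_{t-1}) + (1-beta)*F_{t-1}."""
    ybar, F = y[1], y[1] - y[0]
    for t in range(2, len(y)):
        ybar_prev = ybar
        ybar = alpha * y[t] + (1 - alpha) * (ybar + F)
        F = beta * (ybar - ybar_prev) + (1 - beta) * F
    return ybar, F

rng = np.random.default_rng(11)
y = 0.5 * np.arange(100) + np.cumsum(rng.normal(scale=0.5, size=100))
level, trend = holt(y, alpha=0.3, beta=0.1)
print([level + h * trend for h in (1, 4, 12)])   # h-step-ahead forecasts
```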
Random Walk – Level and Change
266 / 323
Random Walk With Drift – Level and Change
267 / 323
U.S. Per Capita GNP – History and Two Forecasts
268 / 323
U.S. Per Capita GNP – History, Two Forecasts, and
Realization
269 / 323
Random Walk, Levels – Sample Autocorrelation Function
(Top Panel) and Sample Partial Autocorrelation Function
(Bottom Panel)
270 / 323
Random Walk, First Differences – Sample Autocorrelation
Function (Top Panel) and Sample Partial Autocorrelation
Function (Bottom Panel)
271 / 323
Log Yen / Dollar Exchange Rate (Top Panel) and Change
in Log Yen / Dollar Exchange Rate (Bottom Panel)
272 / 323
Log Yen / Dollar Exchange Rate – Sample
Autocorrelations (Top Panel) and Sample Partial
Autocorrelations (Bottom Panel)
273 / 323
Log Yen / Dollar Exchange Rate, First Differences –
Sample Autocorrelations (Top Panel) and Sample Partial
Autocorrelations (Bottom Panel)
274 / 323
Log Yen / Dollar Rate, Levels – AIC and SIC Values of
Various ARMA Models
275 / 323
Log Yen / Dollar Exchange Rate – Best-Fitting
Deterministic-Trend Model
276 / 323
Log Yen / Dollar Exchange Rate – Best-Fitting
Deterministic-Trend Model : Residual Plot
277 / 323
Log Yen / Dollar Rate – History and Forecast : AR(2) in
Levels with Linear Trend
278 / 323
Log Yen / Dollar Rate – History and Long-Horizon
Forecast : AR(2) in Levels with Linear Trend
279 / 323
Log Yen / Dollar Rate – History, Forecast and Realization :
AR(2) in Levels with Linear Trend
280 / 323
Log Yen / Dollar Exchange Rate – Augmented
Dickey-Fuller Unit Root Test
281 / 323
Log Yen / Dollar Rate, Changes – AIC and SIC Values of
Various ARMA Models
282 / 323
Log Yen / Dollar Exchange Rate – Best-Fitting
Stochastic-Trend Model
283 / 323
Log Yen / Dollar Exchange Rate – Best-Fitting
Stochastic-Trend Model : Residual Plot
284 / 323
Log Yen / Dollar Rate – History and Forecast : AR(1) in
Differences with Intercept
285 / 323
Log Yen / Dollar Rate – History and Long-Horizon
Forecast : AR(1) in Differences with Intercept
286 / 323
Log Yen / Dollar Rate – History, Forecast and Realization :
AR(1) in Differences with Intercept
287 / 323
Log Yen / Dollar Exchange Rate – Holt-Winters
Smoothing
288 / 323
Log Yen / Dollar Rate – History and Forecast :
Holt-Winters Smoothing
289 / 323
Log Yen / Dollar Rate – History and Long-Horizon
Forecast : Holt-Winters Smoothing
290 / 323
Log Yen / Dollar Rate – History, Forecast and Realization :
Holt-Winters Smoothing
291 / 323
Volatility Measurement, Modeling and Forecasting
The main idea:
293 / 323
The Basic ARCH Process
yt = B(L)εt

B(L) = Σ_{i=0}^∞ bi L^i,  Σ_{i=0}^∞ bi² < ∞,  b0 = 1

εt |Ωt−1 ∼ N(0, σt²)

σt² = ω + γ(L)ε²t

ω > 0,  γ(L) = Σ_{i=1}^p γi L^i,  γi ≥ 0 for all i,  Σ_{i=1}^p γi < 1.
294 / 323
The Basic ARCH Process cont.
ARCH(1) process:
σt² = ω + αr²t−1
295 / 323
The GARCH Process
yt = εt
εt |Ωt−1 ∼ N(0, σt²)

σt² = ω + α(L)ε²t + β(L)σt²

α(L) = Σ_{i=1}^p αi L^i,  β(L) = Σ_{i=1}^q βi L^i

ω > 0,  αi ≥ 0,  βi ≥ 0,  Σ αi + Σ βi < 1.
296 / 323
Time Variation in Volatility and Prediction Error Variance
297 / 323
ARMA Representation in Squares
Important result:
r²t = σt² + νt, where νt ≡ r²t − σt² has conditional mean zero.
Thus r²t is a noisy indicator of σt²
298 / 323
GARCH(1,1) and Exponential Smoothing
Exponential smoothing recursion:
σ̂t² = γ r²t + (1 − γ)σ̂²t−1
so that σ̂t² = Σ_j wj r²t−j, where
wj = γ(1 − γ)^j
GARCH(1,1):
σt² = ω + αr²t−1 + βσ²t−1
299 / 323
Unconditional Symmetry and Leptokurtosis
300 / 323
Estimation and Testing
Estimation: easy!
Maximum Likelihood Estimation
f(rt |Ωt−1; θ) = (2π)^{−1/2} σt²(θ)^{−1/2} exp[−r²t / (2σt²(θ))].
302 / 323
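A sketch of the GARCH(1,1) variance recursion and the Gaussian log-likelihood built from the conditional density above. Parameters are fixed for illustration; a real application would maximize over θ = (ω, α, β), and the sample-variance initialization is an assumption:

```python
import numpy as np

def garch11_loglik(theta, r):
    """Gaussian log-likelihood of a GARCH(1,1):
    sig2_t = omega + alpha*r_{t-1}^2 + beta*sig2_{t-1}."""
    omega, alpha, beta = theta
    T = len(r)
    sig2 = np.empty(T)
    sig2[0] = np.var(r)                     # one common initialization choice
    for t in range(1, T):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi) + np.log(sig2) + r ** 2 / sig2)

# Simulate returns from a GARCH(1,1), then evaluate the likelihood at the truth
rng = np.random.default_rng(12)
T, omega, alpha, beta = 1000, 0.1, 0.1, 0.8
r = np.empty(T)
s2 = omega / (1 - alpha - beta)             # start at the unconditional variance
for t in range(T):
    r[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2
print(garch11_loglik((omega, alpha, beta), r))
```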
Asymmetric Response and the Leverage Effect:
TARCH:
σt² = ω + αr²t−1 + γr²t−1 Dt−1 + βσ²t−1
where Dt = 1 if rt < 0 and Dt = 0 otherwise:
positive return (good news): α effect on volatility
negative return (bad news): α + γ effect on volatility
γ ≠ 0: asymmetric news response
γ > 0: "leverage effect"
303 / 323
Asymmetric Response and the Leverage Effect Cont.
304 / 323
Introducing Exogenous Variables
305 / 323
Component GARCH
Standard GARCH:
306 / 323
Component GARCH Cont.
307 / 323
Regression with GARCH Disturbances
yt = x′t β + εt
308 / 323
GARCH-M and Time-Varying Risk Premia
Standard GARCH regression model:
yt = x′t β + εt
GARCH-M model (the conditional variance enters the conditional mean):
yt = x′t β + γσt² + εt
310 / 323
Histogram and Related Diagnostic Statistics – NYSE
Returns
311 / 323
Correlogram – NYSE Returns
312 / 323
Time Series Plot – Squared NYSE Returns
313 / 323
Correlogram – Squared NYSE Returns
314 / 323
AR(5) Model – Squared NYSE Returns
315 / 323
ARCH(5) Model – NYSE Returns
316 / 323
Correlogram – Standardized ARCH(5) Residuals : NYSE
Returns
317 / 323
GARCH(1,1) Model – NYSE Returns
318 / 323
Correlogram – Standardized GARCH(1,1) Residuals :
NYSE Returns
319 / 323
Estimated Conditional Standard Deviation – GARCH(1,1)
Model : NYSE Returns
320 / 323
Estimated Conditional Standard Deviation – Exponential
Smoothing : NYSE Returns
321 / 323
Conditional Standard Deviation – History and Forecast :
GARCH(1,1) Model
322 / 323
Conditional Standard Deviation – Extended History and
Extended Forecast : GARCH(1,1) Model
323 / 323