Auto Regressive Model

An autoregressive (AR) model predicts future behavior based on past values in a time series, utilizing linear regression of current data against previous values. The AR(p) model specifies the order of lagged values used as predictors, with AR(1) depending on the immediate past value and AR(2) on the two most recent values. Autoregressive models are commonly used for forecasting, but their accuracy can be compromised by sudden market changes, as demonstrated by the financial crisis of 2008.

Uploaded by Bhavana Poluru

Auto Regressive Model

Autoregression:
An autoregressive (AR) model predicts future behaviour based on past
behaviour. It is used for forecasting when there is some correlation between values in a
time series and the values that precede and succeed them. Only past data from the
series itself are used to model the behaviour, hence the name autoregressive (auto-
means "self"). The process is essentially a linear regression of the current values of
the series against one or more past values of the same series.
In an AR model, the value of the outcome variable (Y) at some point t in time is,
as in "regular" linear regression, a linear function of its predictors. Where
simple linear regression and AR models differ is that in an AR model the predictors
are previous values of Y itself, rather than a separate variable X.
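The point that an AR model is just linear regression against the series' own past can be shown directly. The sketch below (parameter values are illustrative, not from the text) simulates an AR(1) series and then recovers its coefficients by ordinary least-squares regression of each value on the one before it:

```python
import numpy as np

rng = np.random.default_rng(0)
phi_true, c_true, n = 0.7, 1.0, 5000

# Simulate an AR(1) series: X_t = c + phi * X_{t-1} + noise.
x = np.zeros(n)
for t in range(1, n):
    x[t] = c_true + phi_true * x[t - 1] + rng.normal()

# Ordinary linear regression of current values on the lagged values.
A = np.column_stack([np.ones(n - 1), x[:-1]])   # intercept column + lag-1 column
c_hat, phi_hat = np.linalg.lstsq(A, x[1:], rcond=None)[0]
print(round(c_hat, 2), round(phi_hat, 2))
```

With enough data the regression recovers the intercept and the lag coefficient closely, exactly as it would recover a slope and intercept in simple linear regression.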

The AR process is an example of a stochastic process, which has degrees of
uncertainty or randomness built in. The randomness means that you might be able to
predict future trends fairly well with past data, but you are never going to get 100
percent accuracy. Usually, the process gets "close enough" for it to be useful in most
scenarios.

AR models are also called conditional models, Markov models, or transition models.

AR(p) Model:
An AR(p) model is an autoregressive model where specific lagged values of
yt are used as predictor variables. Lags are where results from one time period affect
following periods.

The simplest AR process is AR(0), which has no dependence between the terms. Only
the error/innovation/noise term contributes to the output of the process, so in a plot
of AR processes, AR(0) corresponds to white noise.

The value for “p” is called the order. For example, an AR(1) would be a “first order
autoregressive process.” The outcome variable in a first order AR process at some
point in time t is related only to time periods that are one period apart (i.e. the value of
the variable at t – 1). A second or third order AR process would be related to data two
or three periods apart.

For an AR(1) process with a positive φ, only the previous term in the process and the
noise term contribute to the output. If φ is close to 0, then the process still looks like
white noise, but as φ approaches 1, the output gets a larger contribution from the
previous term relative to the noise. This results in a "smoothing" or integration of the
output, similar to a low-pass filter.
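This smoothing effect is easy to see numerically. The sketch below (the φ values 0.1 and 0.9 are chosen for illustration, not taken from the text) simulates two AR(1) processes and compares their lag-1 sample autocorrelations; the closer φ is to 1, the more strongly each value resembles the one before it:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(phi, n=20000):
    # X_t = phi * X_{t-1} + noise (zero intercept for simplicity).
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

def lag1_autocorr(x):
    # Sample correlation between consecutive values.
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

rough = lag1_autocorr(simulate_ar1(0.1))   # nearly white noise
smooth = lag1_autocorr(simulate_ar1(0.9))  # heavily smoothed, low-pass-like
print(round(rough, 2), round(smooth, 2))
```

For an AR(1) process the theoretical lag-1 autocorrelation equals φ itself, so the two estimates land near 0.1 and 0.9.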

For an AR(2) process, the previous two terms and the noise term contribute to the
output. If both φ1 and φ2 are positive, the output will resemble a low-pass filter, with
the high-frequency part of the noise decreased. If φ1 is positive while φ2 is negative,
then the process favours changes in sign between terms of the process: the output
oscillates. This can be likened to edge detection or detection of change in direction.
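The two AR(2) regimes can be contrasted in simulation. In this sketch (coefficients chosen for illustration), the process with both coefficients positive stays positively correlated over several lags, while the process with a strongly negative φ2 oscillates, which shows up as a negative autocorrelation two lags apart:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar2(phi1, phi2, n=20000):
    # X_t = phi1 * X_{t-1} + phi2 * X_{t-2} + noise.
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal()
    return x

def autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[lag:], x[:-lag]) / np.dot(x, x)

smooth = simulate_ar2(0.5, 0.3)    # both positive: low-pass-like behaviour
oscill = simulate_ar2(0.5, -0.8)   # phi2 negative: oscillatory behaviour
print(round(autocorr(smooth, 2), 2), round(autocorr(oscill, 2), 2))
```

The oscillatory process swings back against itself every few steps, so values two lags apart tend to have opposite signs, whereas the low-pass process drifts smoothly and stays positively correlated.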

AR(1) Model:

An AR(1) model is given by:

Xt = c + φ·Xt−1 + εt

where εt is a white noise process with zero mean and constant variance σ². The process
is wide-sense stationary if |φ| < 1, since it is then obtained as the output of a stable
filter whose input is white noise. The mean E(Xt) is identical for all values of t by the
very definition of wide-sense stationarity. If the mean is denoted by µ, it follows from

E(Xt) = E(c) + φ·E(Xt−1) + E(εt)

that µ = c + φµ, and hence µ = c / (1 − φ).
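Taking expectations of both sides of the AR(1) recursion gives µ = c + φµ (the noise term has zero mean), so the stationary mean is µ = c / (1 − φ). A quick simulation check, with c and φ as example values not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(3)
c, phi, n = 2.0, 0.5, 200000

# Simulate a long AR(1) path: X_t = c + phi * X_{t-1} + noise.
x = np.zeros(n)
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + rng.normal()

mu_theory = c / (1 - phi)          # = 4.0
mu_sample = x[n // 10:].mean()     # discard the burn-in at the start
print(mu_theory, round(mu_sample, 2))
```

The sample mean of the simulated path settles close to c / (1 − φ), matching the derivation above.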

The autoregressive (AR) model is mostly used for forecasting: using previous
time-series data, it makes predictions about the current and future values of a
variable based on the values of the same variable at earlier stages.

An autoregression model in the study of ionospheric anomalies needs no additional
information, but it must satisfy the following two conditions.

(1) The series must be autocorrelated; if the autocorrelation coefficient (R) is less
than 0.5, the model should not be used, as its predictions will be highly inaccurate.

(2) An autoregression model applies only to forecasting a phenomenon from its own
earlier, related behaviour, i.e. one that is influenced by its own historical factors.
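Condition (1) can be turned into a simple screening check. The sketch below (the 0.5 threshold is the one stated above; the data are synthetic) computes the lag-1 autocorrelation coefficient R of a series and accepts or rejects the use of an AR model accordingly:

```python
import numpy as np

rng = np.random.default_rng(4)

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation coefficient R of a series.
    x = x - x.mean()
    return np.dot(x[1:], x[:-1]) / np.dot(x, x)

def ar_model_applicable(x, threshold=0.5):
    # Reject the AR model when R falls below the threshold.
    return lag1_autocorr(x) >= threshold

persistent = np.cumsum(rng.normal(size=2000))   # strongly autocorrelated series
noise = rng.normal(size=2000)                   # essentially uncorrelated series
print(ar_model_applicable(persistent), ar_model_applicable(noise))
```

The persistent series passes the check, while pure noise fails it, signalling that an AR forecast of the noise would be highly inaccurate.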

Real-Time Example of an Autoregressive Model:
Autoregressive models are based on the assumption that past values have an
effect on current values. For example, an investor using an autoregressive model to
forecast stock prices would need to assume that new buyers and sellers of that stock
are influenced by recent market transactions when deciding how much to offer or
accept for the security.
Although this assumption will hold under most circumstances, this is not always the
case. For example, in the years prior to the 2008 Financial Crisis, most investors were
not aware of the risks posed by the large portfolios of mortgage-backed securities held
by many financial firms. During those times, an investor using an autoregressive
model to predict the performance of U.S. financial stocks would have had good reason
to predict an ongoing trend of stable or rising stock prices in that sector.

However, once it became public knowledge that many financial institutions were at
risk of imminent collapse, investors suddenly became less concerned with these
stocks' recent prices and far more concerned with their underlying risk exposure.
Therefore, the market rapidly revalued financial stocks to a much lower level, a move
which would have utterly confounded an autoregressive model.

It is important to note that, in an autoregressive model, a one-time shock will affect
the values of the calculated variables infinitely far into the future. Therefore, the
legacy of the financial crisis lives on in today's autoregressive models.
