Cheat Sheet ECON 334

This document discusses various statistical and econometric concepts related to time series analysis and linear regression modeling. It defines key terms like autocorrelation, heteroskedasticity, and multicollinearity. It also summarizes assumptions of the classical linear regression model and various tests that can be used to detect issues like autocorrelation, heteroskedasticity, and model misspecification.

Uploaded by AbhishekMaran

Testing sample autocorrelations: Null H0: τ_s = 0; reject the null if |Z-stat| = |τ̂_s − 0| / √(1/T) = √T |τ̂_s| > Z_{α/2}.
Apply the test to autocorrelations in squared returns, rather than raw returns, to detect volatility clustering.


Test that all autocorrelations up to lag m are jointly zero (H0: τ_1 = … = τ_m = 0) using the Q stat: Q = T Σ_{s=1..m} τ̂_s² ~ χ²(m).

Ljung-Box stat: Q* = T(T + 2) Σ_{s=1..m} τ̂_s² / (T − s) ~ χ²(m), with better small-sample properties.

An MA process is simply a linear combination of white noise processes.
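The Ljung-Box statistic above can be computed directly; a sketch under the same definitions, with `ljung_box` a hypothetical helper name:

```python
def ljung_box(y, m):
    """Ljung-Box Q* = T(T+2) * sum_{s=1}^{m} tau_s^2 / (T - s), ~ chi2(m) under H0."""
    T = len(y)
    mu = sum(y) / T
    g0 = sum((x - mu) ** 2 for x in y) / T
    q = 0.0
    for s in range(1, m + 1):
        gs = sum((y[t] - mu) * (y[t - s] - mu) for t in range(s, T)) / T
        q += (gs / g0) ** 2 / (T - s)
    return T * (T + 2) * q

# Trending series: all low-order autocorrelations near 1, so Q* is huge
y = [float(t) for t in range(200)]
print(ljung_box(y, 5) > 11.07)  # True: 11.07 is the 5% critical value of chi2(5)
```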

Autoregressive Processes: an AR process models the current value y_t as a linear function of its own past values plus a white noise error term. A random walk is a special case of AR(1) (lag coefficient of one). Lag operator: L y_t = y_{t−1}.
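Both processes are easy to simulate; a sketch using Gaussian white noise, where `simulate_ar1` and `simulate_ma1` are hypothetical helper names:

```python
import random

random.seed(0)

def simulate_ar1(phi, c, T):
    """AR(1): y_t = c + phi * y_{t-1} + u_t, u_t white noise.
    phi = 1, c = 0 gives a random walk."""
    y, prev = [], 0.0
    for _ in range(T):
        prev = c + phi * prev + random.gauss(0.0, 1.0)
        y.append(prev)
    return y

def simulate_ma1(theta, T):
    """MA(1): y_t = u_t + theta * u_{t-1} -- a linear combination of white noise."""
    u = [random.gauss(0.0, 1.0) for _ in range(T + 1)]
    return [u[t] + theta * u[t - 1] for t in range(1, T + 1)]

walk = simulate_ar1(1.0, 0.0, 100)   # random walk: unit coefficient, no drift
ma = simulate_ma1(0.5, 100)
```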
TOPIC 1

Returns: (P_t − P_{t−1}) / P_{t−1}; Log returns: ln(P_t / P_{t−1}).

TOPIC 3

Factor Models: APT and consumption-based CAPM. Factors: consumption, marginal utility, returns, interest rates, growth in GNP, investment and other macro variables. In the CAPM the factor is the market portfolio.
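The two return definitions can be checked side by side; a small sketch with a hypothetical price series:

```python
import math

prices = [100.0, 101.5, 99.8, 102.3]  # hypothetical price series

simple = [(prices[t] - prices[t - 1]) / prices[t - 1] for t in range(1, len(prices))]
logret = [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

# For small moves the two are close, since ln(1 + r) is approx r
print(simple[0], logret[0])
```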
PDF: a Discrete Probability Function gives all possible realisations of X and the probability that each value occurs. CDF: F(x) = P(X ≤ x), the probability that X is less than or equal to a specific value.

Continuous r.v.: a random variable X is continuous if its CDF is a continuous function. We always have P(X = x) = 0; we calculate P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a).

Normal Dist: f(x) = (1 / (√(2π) σ)) e^(−(x − μ)² / (2σ²)); Z = (X − μ)/σ ~ N(0, 1). Lognormal Dist: X is lognormal if ln(X) is normally distributed.

Fama-French:
SMB: small market cap stocks minus big market cap stocks = the size premium, or small firm effect, where smaller firms outperform larger ones.
HML: high book-to-price (value) stocks minus low book-to-price (growth) stocks = the value premium; value stocks tend to outperform growth stocks.

R²: goodness-of-fit statistic, equal to the square of the correlation coefficient. Total sum of squares = Explained + Residual; R² = ESS/TSS, ranging between 0 and 1.

Adjusted R² = 1 − [(T − 1)/(T − k)] (1 − R²).
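The R² and adjusted R² formulas can be sketched with hypothetical sums of squares (`ess`, `tss` and the helper names are my own):

```python
def r_squared(ess, tss):
    """R2 = ESS / TSS."""
    return ess / tss

def adjusted_r2(r2, T, k):
    """Adjusted R2 = 1 - (1 - R2)(T - 1)/(T - k); penalises extra regressors."""
    return 1 - (1 - r2) * (T - 1) / (T - k)

r2 = r_squared(ess=60.0, tss=100.0)  # hypothetical values
print(r2)                            # 0.6
print(adjusted_r2(r2, T=50, k=3))    # slightly below 0.6 after the penalty
```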
EG: assuming independent and identically distributed returns, lognormal with μ = 0.0165 and stdev = 0.0730, what is the probability that the price of the security increases over each of the next two weeks? Use the Z-score: P(r > 0) = P(Z > (0 − μ)/σ), then square the one-week probability for two independent weeks.

Mean: expected value; sample mean = (1/T) Σ x_i.

Variance: (1/T) Σ (x_i − μ_x)².

F-Statistic = [(Restricted residual SS − Unrestricted residual SS) / Unrestricted residual SS] × (T − k)/m, where m = no. of restrictions.

Assumptions of CLRM: 1. Mean of disturbances is 0 (unable to test with OLS). 2. Homoskedasticity: equal variances, i.e. the variance of the errors is constant. Heteroskedasticity = differing variance: errors without constant variance across different observations. Occurs when there are large differences in the magnitude of observations; related to the size of the explanatory variables.
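The two-week probability in the example can be worked through with the standard normal CDF; a sketch using only the stated μ and σ (`norm_cdf` is my own helper built on the error function):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 0.0165, 0.0730  # weekly mean and std dev from the example

# P(price rises in one week) = P(r > 0) = P(Z > (0 - mu)/sigma)
p_week = 1 - norm_cdf((0 - mu) / sigma)
# i.i.d. weeks => probability of a rise in each of the next two weeks
p_two = p_week ** 2
print(p_week, p_two)
```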

White Test for Heteroskedasticity: Obs×R² ~ χ²(m); reject homoskedasticity in favour of heteroskedasticity if Obs×R² > the χ²(m) critical value, where m = no. of regressors excluding the constant.

Consequences of hetero: OLS still gives unbiased coefficients, but is no longer BLUE; OLS standard errors will be incorrect, and so statistical tests are incorrect. Use Generalised Least Squares if the hetero form is known, or transform variables to logs, or use hetero-consistent SEs. t-test with White SE: t = (β̂ − β) / SE_W(β̂).

Skewness: (1/T) Σ (x_i − μ_x)³ / σ³. Left skewed: mass pushed to the right, with a longer left tail.

Joint PDF: X and Y are independent if f(x, y) = f(x) f(y). Independence is a stronger relation than linear association: normal r.v.s with 0 correlation are independent, but correlation of 0 also happens with non-linear relationships. Correlation = Cov(X, Y) / (σ(X) σ(Y)) = E[(X − E(X))(Y − E(Y))] / (σ(X) σ(Y)). Conditional density: f_{Y|X}(y|x) = f_{XY}(x, y) / f_X(x). Covariance = (1/T) Σ (X_i − X̄)(Y_i − Ȳ).

Autocorrelation: due to omission of relevant variables, misspecification error from an inappropriate functional form, or seasonality and other patterns present. Durbin-Watson test: assumes the relationship is between 2 consecutive errors; values close to 2 indicate no autocorrelation. LM test for autocorrelation: H0: ρ_1 = 0, ρ_2 = 0, …; H1: ρ_1 ≠ 0, ρ_2 ≠ 0, …; if Obs×R² > χ²(r), reject H0. Consequences of ignoring autocorrelation: coefficient estimates are still unbiased but inefficient; not BLUE; SEs are incorrect; when residuals are positively correlated, R² will be higher. Dealing with autocorrelation: add lagged dependent variables to the regression, or use robust SEs. t-test with Newey-West SE: t = (β̂ − β) / SE_NW(β̂); robust to both autocorrelation and heteroskedasticity.

Non-random X's: if regressors are not correlated with the residuals, the OLS estimator is consistent and unbiased in the presence of stochastic regressors. If one or more of the explanatory variables is contemporaneously correlated with the disturbance term, the OLS estimator will be biased and inconsistent.

Linear Regression: Y = a + bX + u, where u is the error term. u captures: 1. determinants of Y that are left out; 2. random influences which can't be modelled; 3. errors in the measurement of Y which can't be modelled. Ordinary Least Squares provides solutions for a and b.

Factor Pricing and CAPM: E(R) = R_f + β(E(R_m) − R_f).
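The Durbin-Watson statistic can be computed from residuals directly; a sketch with hypothetical residuals (`durbin_watson` is my own helper name):

```python
def durbin_watson(resid):
    """DW = sum (e_t - e_{t-1})^2 / sum e_t^2.
    Near 2 => no autocorrelation; near 0 => positive; near 4 => negative."""
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den

# Alternating residuals are strongly negatively correlated, so DW is near 4
print(durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0]))  # 20/6, approx 3.33
```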

To estimate β: (R_t − R_{f,t}) = a + b(R_{m,t} − R_{f,t}) + u_t.

Multicollinearity: when explanatory variables are highly correlated, we are still able to compute OLS estimates, but R² is likely to be high while the estimated coefficients are imprecise. Thus CIs for the parameters will be wide and significance tests will suggest that the regressors are insignificant; the regression becomes very sensitive to small changes in the specification. Measure by looking at the matrix of correlations (high correlation between y and one of the x variables is good; high correlation among the x's is the problem). Solutions: drop a collinear variable, transform the highly correlated variables into a ratio, or collect more data.

Adopting the wrong functional form? RESET test: H0: B1 = B2 = … = 0; when |test stat| > critical value, reject H0 and conclude the functional form is inadequate.

Assumptions of Classical Linear Regression Model: E(u_t) = 0; var(u_t) = σ² < ∞; cov(u_i, u_j) = 0; cov(u_t, x_t) = 0; u_t ~ N(0, σ²).

Market Efficiency: prices reflect all relevant information. White Noise: a collection of r.v.s y_t is a white noise process if the y_t are independent and identically distributed.

Martingale Model: a stochastic process is a collection of r.v.s; a martingale satisfies E(y_t | y_{t−1}, y_{t−2}, …) = y_{t−1}. Price changes/returns are uncorrelated with past data; the best forecast of tomorrow's price is today's price. No consideration of risk; adjust for risk, then the martingale applies.

Error variance: s² = Σ û_t² / (T − k); standardised residual: û_t / s.

Random walk: y_t = c + y_{t−1} + u_t, where y_t = log(P_t), u_t is white noise, and c = E(y_t − y_{t−1}) is the drift; when c = 0 there is no drift.

Hypothesis testing: test statistic = (β̂ − β*) / SE(β̂) ~ t_{T−k} under H0. Confidence interval: β̂ ± t_crit × SE(β̂). Type 1 error: rejecting H0 when it is true; Type 2 error: not rejecting H0 when it is false.

Autocovariance: γ_s = E[(y_t − E(y_t))(y_{t−s} − E(y_{t−s}))] = cov(y_t, y_{t−s}).

Weakly Stationary: y_t satisfies 1. E(y_t) = μ for all t; 2. E(y_t − μ)² = γ_0 < ∞; 3. E[(y_{t1} − μ)(y_{t2} − μ)] = γ_{t2−t1}, depending only on the lag.

Autocorrelation: τ_s = γ_s / γ_0 = cov(y_t, y_{t−s}) / var(y_t).
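The bivariate OLS solution (as used to estimate a CAPM beta) can be sketched in a few lines; the data and helper name `ols` are hypothetical:

```python
def ols(x, y):
    """OLS for y = a + b*x + u: b = cov(x, y)/var(x), a = ybar - b*xbar."""
    T = len(x)
    xbar, ybar = sum(x) / T, sum(y) / T
    b = sum((x[t] - xbar) * (y[t] - ybar) for t in range(T)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

# Exact linear data recovers the coefficients
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]  # y = 1 + 2x
a, b = ols(x, y)
print(a, b)  # 1.0 2.0
```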
