Matrix

The document discusses the linear regression model y = Xβ + u and the method of least squares for estimating the unknown parameters β. It defines the residual sum of squares (RSS) and shows that minimizing the RSS leads to the normal equations (X'X)b = X'y for estimating b. It also discusses assumptions of the error term u, including homoscedasticity and no autocorrelation. Model selection criteria like the Akaike Information Criterion (AIC) and Schwarz Criterion are also mentioned.


y = Xβ + u

where y is an n×1 vector of observations on the dependent
variable, X is an n×k matrix whose first column is a column of
ones (for the intercept) and whose remaining columns contain the
observations on the explanatory variables X2, ..., Xk, β is a k×1
vector of unknown parameters, and u is an n×1 vector of
disturbances.

If the unknown vector β in the above equation is replaced by some
estimate b, this defines a vector of residuals e,

e = y − Xb

The least-squares principle is to choose b to minimize the
residual sum of squares, e'e, namely

RSS = e'e
    = (y − Xb)'(y − Xb)
    = y'y − b'X'y − y'X'b + b'X'Xb
    = y'y − 2b'X'y + b'X'Xb

The first-order conditions are

∂RSS/∂b = −2X'y + 2X'Xb = 0

giving the normal equations

(X'X)b = X'y

and the variance-covariance matrix of the estimator,

Var(b) = σ²(X'X)⁻¹
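
The normal equations and the variance formula translate directly into a few lines of linear algebra. Below is a minimal sketch, assuming NumPy and simulated data; the variable names (X, y, b, sigma2_hat) are illustrative and not part of the original notes, and σ² is replaced by its usual unbiased estimate RSS/(n − k).

```python
# Minimal sketch: least squares via the normal equations (X'X)b = X'y.
# Simulated data; names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                       # n observations, k regressors (incl. intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Solve (X'X)b = X'y directly rather than inverting X'X.
b = np.linalg.solve(X.T @ X, X.T @ y)

e = y - X @ b                      # residual vector e = y − Xb
RSS = e @ e                        # residual sum of squares e'e
sigma2_hat = RSS / (n - k)         # unbiased estimate of the disturbance variance
var_b = sigma2_hat * np.linalg.inv(X.T @ X)   # estimated Var(b) = s²(X'X)⁻¹

print("b =", b)
print("Var(b) diagonal =", np.diag(var_b))
```

Solving the linear system with np.linalg.solve is numerically preferable to forming (X'X)⁻¹ explicitly.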
Decomposition of the Sum of Squares

(y'y − nȳ²) = (b'X'Xb − nȳ²) + e'e

that is,

TSS = ESS + RSS
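
This decomposition is easy to verify numerically. The sketch below, assuming NumPy and made-up data (all names are illustrative), fits b by least squares and checks that TSS = ESS + RSS up to floating-point error.

```python
# Numerical check of the decomposition TSS = ESS + RSS for an OLS fit.
import numpy as np

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

ybar = y.mean()
TSS = y @ y - n * ybar**2             # y'y − n·ȳ²
ESS = b @ X.T @ X @ b - n * ybar**2   # b'X'Xb − n·ȳ²
RSS = e @ e                           # e'e

print(np.isclose(TSS, ESS + RSS))     # True: the decomposition holds
```

Note that the identity relies on the regression including an intercept (the column of ones in X).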
Goodness of Fit / Fit of Various Specifications

** The adjusted R² takes explicit account of the number of
regressors used in the equation. It is useful for comparing the
fit of specifications that differ in the addition or deletion of
explanatory variables.
** The unadjusted R² will never decrease with the addition of any
variable to the set of regressors.
R² = ESS/TSS = 1 − RSS/TSS

R̄² = 1 − (1 − R²)(n − 1)/(n − k)

Schwarz Criterion

SC = ln(e'e/n) + (k/n) ln n

Akaike Information Criterion

AIC = ln(e'e/n) + 2k/n
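
All four fit measures follow from a single least-squares fit. The following sketch, assuming NumPy and simulated data (every variable name is an illustrative assumption), evaluates R², adjusted R², SC, and AIC exactly as defined above.

```python
# Computing R², adjusted R², Schwarz, and Akaike criteria from one OLS fit.
import numpy as np

rng = np.random.default_rng(2)
n, k = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b

TSS = y @ y - n * y.mean() ** 2
RSS = e @ e                                   # e'e
R2 = 1 - RSS / TSS                            # unadjusted R²
R2_adj = 1 - (1 - R2) * (n - 1) / (n - k)     # adjusted R²

SC = np.log(RSS / n) + (k / n) * np.log(n)    # Schwarz Criterion
AIC = np.log(RSS / n) + 2 * k / n             # Akaike Information Criterion

print(f"R2={R2:.4f}  adj R2={R2_adj:.4f}  SC={SC:.4f}  AIC={AIC:.4f}")
```

Lower SC and AIC values indicate a better trade-off between fit and the number of regressors, which is what makes them useful for comparing specifications.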

The assumptions on the disturbance term are

E(u) = 0 and Var(u) = E(uu') = σ²I

In full, the zero-mean assumption states

E(u) = E[u1, u2, ..., un]' = [E(u1), E(u2), ..., E(un)]' = [0, 0, ..., 0]' = 0
The matrix E(uu') is the variance-covariance matrix of the
disturbance term. The variances are displayed on the main
diagonal and the covariances in the off-diagonal positions.
The first assumption is that the disturbance variance is constant
and finite at each sample point. This condition of constancy of
variance is termed homoscedasticity, and the opposite condition,
where the disturbance variances are not the same at all sample
points, is termed heteroscedasticity.
The second assumption is that the disturbances are pairwise
uncorrelated. In other words, the u terms of all observations
are statistically independent and not interrelated. When this
condition of mutual statistical independence of the disturbances
fails, the disturbances are said to be autocorrelated or serially
correlated.
E(uu') =

  [ E(u1²)    E(u1u2)   ...  E(u1un) ]
  [ E(u2u1)   E(u2²)    ...  E(u2un) ]
  [   ...       ...     ...    ...   ]
  [ E(unu1)   E(unu2)   ...  E(un²)  ]

= [ var(u1)     cov(u1,u2)   ...  cov(u1,un) ]
  [ cov(u2,u1)  var(u2)      ...  cov(u2,un) ]
  [    ...         ...       ...     ...     ]
  [ cov(un,u1)  cov(un,u2)   ...  var(un)    ]

= σ²I = [ σ²   0    ...  0  ]
        [ 0    σ²   ...  0  ]
        [ ...  ...  ...  ...]
        [ 0    0    ...  σ² ]
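
The structure of E(uu') under these two assumptions can be illustrated by simulation. The sketch below, assuming NumPy, draws many homoscedastic, uncorrelated disturbance vectors and checks that the sample variance-covariance matrix is close to σ²I; the values of n, sigma2, and reps are made up for the example.

```python
# Illustration: the sample variance-covariance matrix of simulated
# homoscedastic, uncorrelated disturbances approximates σ²I.
import numpy as np

rng = np.random.default_rng(3)
n, sigma2, reps = 4, 2.0, 200_000
U = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))  # each row is one draw of u

sample_cov = U.T @ U / reps       # Monte Carlo estimate of E(uu')
print(np.round(sample_cov, 2))    # ≈ σ²I: about 2.0 on the diagonal, 0 elsewhere
```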
