Handout Econometrics - Module
ECONOMETRICS
Prepared by: Bealu T
Chapter One
Introduction
Dear distance students! Having given the background to our attempt at defining
'ECONOMETRICS', we may now formally define what econometrics is.
WHAT IS ECONOMETRICS?
Literally interpreted, econometrics means “economic measurement”, but the scope
of econometrics is much broader, as described by leading econometricians. Various
econometricians have used different wordings to define econometrics, but if
we distill the fundamental features/concepts of all these definitions, we may obtain
the following definition.
Example: Economic theory postulates that the demand for a commodity depends
on its price, on the prices of other related commodities, on consumers' income and
on tastes. This is an exact relationship which can be written mathematically as:

Q = b0 + b1P + b2P0 + b3Y + b4t

The above demand equation is exact. However, many more factors may affect
demand. In econometrics the influence of these 'other' factors is taken into
account by introducing a random variable into the economic relationship. In
our example, the demand function studied with the tools of econometrics would be
of the stochastic form:

Q = b0 + b1P + b2P0 + b3Y + b4t + u

where u stands for the random factors which affect the quantity demanded.
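As a rough numerical illustration of the difference between the exact and the stochastic form (not taken from the handout; every coefficient value and the normally distributed disturbance below are assumptions chosen purely for demonstration), the following Python sketch generates both versions of the demand relationship:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical determinants of demand (assumed values, illustration only)
P = rng.uniform(1, 10, n)       # own price
P0 = rng.uniform(1, 10, n)      # price of a related commodity
Y = rng.uniform(20, 100, n)     # consumers' income
t = rng.uniform(0, 1, n)        # taste index
u = rng.normal(0, 2, n)         # random disturbance

# Exact (deterministic) relationship vs. stochastic relationship
Q_exact = 50 - 2.0 * P + 0.5 * P0 + 0.3 * Y + 4.0 * t
Q_stochastic = Q_exact + u

print(Q_exact[:3])
print(Q_stochastic[:3])
```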
causes of the variation of other variables. Starting with the postulated theoretical
relationships among economic variables, econometric research or inquiry
generally proceeds along the following lines/stages.
1. Specification of the model
2. Estimation of the model
3. Evaluation of the estimates
4. Evaluation of the forecasting power of the estimated model
Specification of the model is the most important and the most difficult stage of any
econometric research. It is often the weakest point of most econometric
applications. In this stage there exists an enormous degree of likelihood of
committing specification errors.
meaningful and statistically and econometrically correct for the sample period
for which the model has been estimated; yet it may not be suitable for
forecasting due to various factors (reasons). Therefore, this stage involves the
investigation of the stability of the estimates and their sensitivity to changes in
the size of the sample. Consequently, we must establish whether the estimated
function performs adequately outside the sample of data, i.e., we must test the
extra-sample performance of the model.
Review questions
Chapter Two
Economic theories are mainly concerned with the relationships among various
economic variables. These relationships, when phrased in mathematical terms, can
predict the effect of one variable on another. The functional relationship of these
variables defines the dependence of one variable upon the other variable(s) in a
specific form. The specific functional form may be linear, quadratic, logarithmic,
exponential, hyperbolic, or any other form.
Assuming that the supply for a certain commodity depends on its price (other
determinants taken to be constant) and the function being linear, the relationship
can be put as:
Q = f(P) = α + βP ………………………………………………………..(2.1)
The above relationship between P and Q is such that for a particular value of P,
there is only one corresponding value of Q. This is, therefore, a deterministic
(non-stochastic) relationship since for each price there is always only one
corresponding quantity supplied. This implies that all the variation in quantity
supplied is due solely to changes in price, and that there are no other factors
affecting the dependent variable.
If this were true all the points of price-quantity pairs, if plotted on a two-
dimensional plane, would fall on a straight line. However, if we gather
observations on the quantity actually supplied in the market at various prices and
we plot them on a diagram we see that they do not fall on a straight line.
The deviation of the observations from the line may be attributed to several
factors.
a. Omission of variables from the function
b. Random behavior of human beings
c. Imperfect specification of the mathematical form of the model
d. Error of aggregation
e. Error of measurement
Thus a stochastic model is a model in which the dependent variable is not only
determined by the explanatory variable(s) included in the model but also by others
which are not included in the model.
2.2. Simple Linear Regression model.
The above stochastic relationship (2.2) with one explanatory variable is called
simple linear regression model.
The true relationship which connects the variables involved is split into two parts:
a part represented by a line and a part represented by the random term 'u'.
The scatter of observations represents the true relationship between Y and X. The
line represents the exact part of the relationship and the deviation of the
observation from the line represents the random component of the relationship.
- Were it not for the errors in the model, we would observe all the points on the
line Y1', Y2', ......, Yn' corresponding to X1, X2, ....., Xn. However, because of the random
disturbances, the observed values deviate from this line.
- The first component in the bracket is the part of Y explained by the changes
in X and the second is the part of Y not explained by X, that is to say the
change in Y is due to the random influence of u i .
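A minimal simulation sketch of this decomposition; the "true" α, β and the error spread below are assumptions picked only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, sigma = 2.0, 0.8, 1.5    # assumed "true" parameters

X = np.linspace(1, 20, 30)
u = rng.normal(0, sigma, X.size)      # random component
systematic = alpha + beta * X         # part of Y explained by changes in X
Y = systematic + u                    # observations scatter around the line

# the deviation of each observation from the true line is exactly u
print(np.allclose(Y - systematic, u))
```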
The classical econometricians made important assumptions in their analysis of
regression. The most important of these assumptions are discussed below.
Dear distance students! Check yourself whether the following models satisfy the
above assumption and give your answer to your tutor.
a. ln Yi² = α + β ln Xi² + Ui
b. Yi = α + βXi + Ui
This means that the value which u may assume in any one period depends on
chance; it may be positive, negative or zero. Every value has a certain probability
of being assumed by u in any particular instance.
Mathematically, E(Ui) = 0 ………………………………..….(2.3)
For all values of X, the u's will show the same dispersion around their mean.
In Fig. 2.c this assumption is denoted by the fact that the values that u can
assume lie within the same limits, irrespective of the value of X. For X1, u
can assume any value within the range AB; for X2, u can assume any value
within the range CD, which is equal to AB, and so on.
Graphically: (see Fig. 2.c)
Mathematically: Var(Ui) = E(Ui²) = σ²u (a constant) ………………………..(2.4)
E(uiuj) = 0 for i ≠ j …………………………..….(2.5)
Cov(Xi, Ui) = E(XiUi) − E(Xi)E(Ui)
            = E(XiUi) = 0
8. The explanatory variables are measured without error
- U absorbs the influence of omitted variables and possibly errors of
measurement in the Y's, i.e., we will assume that the regressors are error-free,
while the Y values may or may not include errors of measurement.
Dear students! We can now use the above assumptions to derive the following
basic concepts.
Proof:
Mean: E(Yi) = E(α + βXi + ui)
            = α + βXi                      (since E(ui) = 0)
Variance: Var(Yi) = E[Yi − E(Yi)]²
                  = E[α + βXi + ui − (α + βXi)]²
                  = E(ui²)
                  = σ²                     (since E(ui²) = σ²)
Therefore, var(Yi) = σ² ……………………………………….(2.8)
the distribution of Yi:
Yi ~ N(α + βXi, σ²)
Proof:
Cov(Yi, Yj) = E{[Yi − E(Yi)][Yj − E(Yj)]}
            = E{[α + βXi + Ui − E(α + βXi + Ui)][α + βXj + Uj − E(α + βXj + Uj)]}
              (since Yi = α + βXi + Ui and Yj = α + βXj + Uj)
            = E(UiUj) = 0
Therefore, Cov(Yi, Yj) = 0.
α̂ and β̂ are estimated from the sample of Y and X, and ei represents the sample
counterpart of the random term Ui. The method of classical least squares
(CLS) involves finding values for the estimates α̂ and β̂ which will minimize the
sum of the squared residuals (Σei²):

Σei² = Σ(Yi − α̂ − β̂Xi)² ……………………….(2.7)

To find the values of α̂ and β̂ that minimize this sum, we have to partially
differentiate Σei² with respect to α̂ and β̂ and set the partial derivatives equal to
zero.

1. ∂Σei²/∂α̂ = −2Σ(Yi − α̂ − β̂Xi) = 0 .......................................................(2.8)
2. ∂Σei²/∂β̂ = −2ΣXi(Yi − α̂ − β̂Xi) = 0 ..................................................(2.11)

Note at this point that the term in parentheses in equations (2.8) and (2.11) is the
residual, ei = Yi − α̂ − β̂Xi. Hence it is possible to rewrite (2.8) and (2.11) as
Σei = 0 and ΣXiei = 0 ............................................(2.12)

Rearranging (2.8) and (2.11) gives
ΣYi = nα̂ + β̂ΣXi ………………………………………………………….(2.9)
α̂ = Ȳ − β̂X̄ ………………………………………………………………(2.10)
ΣYiXi = α̂ΣXi + β̂ΣXi² …………………………………………………..(2.13)
Equations (2.9) and (2.13) are called the Normal Equations. Substituting the
value of α̂ from (2.10) into (2.13), we get:
ΣYiXi = ΣXi(Ȳ − β̂X̄) + β̂ΣXi²
ΣYiXi = ȲΣXi + β̂(ΣXi² − X̄ΣXi)
ΣXY − nX̄Ȳ = β̂(ΣXi² − nX̄²)

β̂ = (ΣXY − nX̄Ȳ) / (ΣXi² − nX̄²) ………………….(2.14)

Also, Σ(X − X̄)(Y − Ȳ) = ΣXY − nX̄Ȳ …………………………………..(2.15)
and Σ(X − X̄)² = ΣX² − nX̄² ………………………………………………(2.16)
Substituting (2.15) and (2.16) in (2.14), we get
β̂ = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²

β̂ = Σxiyi / Σxi² ……………………………………… (2.17)
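The slope and intercept formulas can be verified numerically. The sketch below applies equation (2.17) and α̂ = Ȳ − β̂X̄ to the price–quantity data of review question 3 at the end of this chapter (used here simply as convenient sample data), and cross-checks the result against NumPy's built-in least-squares fit:

```python
import numpy as np

# Data borrowed from review question 3 (P and S)
X = np.array([2.0, 7, 5, 1, 4, 8, 2, 8])
Y = np.array([15.0, 41, 32, 9, 28, 43, 17, 40])

x = X - X.mean()                              # deviations from the mean
y = Y - Y.mean()

beta_hat = (x * y).sum() / (x ** 2).sum()     # equation (2.17)
alpha_hat = Y.mean() - beta_hat * X.mean()    # from normal equation (2.10)

print(beta_hat, alpha_hat)
print(np.polyfit(X, Y, 1))                    # cross-check: [slope, intercept]
```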
Estimation with a zero intercept: suppose we now want to fit the line subject to the
restriction α̂ = 0.
The composite function then becomes
Z = Σ(Yi − α̂ − β̂Xi)² + λα̂,  where λ is a Lagrange multiplier.
Partially differentiating Z with respect to α̂, β̂ and λ and setting the derivatives
equal to zero gives, among others,
∂Z/∂λ = α̂ = 0 ……………………………………………..(iii)
Substituting (iii) in (ii) and rearranging, we obtain:
ΣXi(Yi − β̂Xi) = 0
ΣYiXi − β̂ΣXi² = 0
β̂ = ΣXiYi / ΣXi² ……………………………………..(2.18)
This formula involves the actual values (observations) of the variables and not
their deviation forms, as in the case of the unrestricted estimator β̂ in (2.17).
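A short sketch contrasting the restricted estimator (2.18) with the unrestricted one (2.17); the small data set below is made up purely for illustration:

```python
import numpy as np

X = np.array([1.0, 2, 3, 4, 5])        # assumed data, illustration only
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

beta_restricted = (X * Y).sum() / (X ** 2).sum()      # equation (2.18), no intercept
x, y = X - X.mean(), Y - Y.mean()
beta_unrestricted = (x * y).sum() / (x ** 2).sum()    # equation (2.17), with intercept

print(beta_restricted, beta_unrestricted)
```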
According to this theorem, under the basic assumptions of the classical linear
regression model, the least squares estimators are linear, unbiased and have
minimum variance (i.e. are the best of all linear unbiased estimators). Sometimes the
theorem is referred to as the BLUE theorem, i.e. Best, Linear, Unbiased Estimator. An
estimator is called BLUE if it is:
a. Linear: a linear function of a random variable, such as the
dependent variable Y.
b. Unbiased: its average or expected value is equal to the true population
parameter.
c. Minimum variance: it has minimum variance in the class of linear
and unbiased estimators. An unbiased estimator with the least variance
is known as an efficient estimator.
According to the Gauss-Markov theorem, the OLS estimators possess all the
BLUE properties. The detailed proofs of these properties are presented below.
Dear colleague, let us prove these properties one by one.
a. Linearity (for β̂)
β̂ = Σxiyi/Σxi² = Σxi(Yi − Ȳ)/Σxi² = ΣxiYi/Σxi²
(but Σxi = Σ(X − X̄) = ΣX − nX̄ = nX̄ − nX̄ = 0, so the term ȲΣxi vanishes)

β̂ = ΣxiYi/Σxi²;  now let Ki = xi/Σxi²  (i = 1, 2, ....., n)

β̂ = ΣKiYi ……………………………………………………………………(2.19)

⇒ β̂ is linear in Y.
b. Unbiasedness:
Proposition: α̂ & β̂ are the unbiased estimators of the true parameters α & β.
From your statistics course, you may recall that if θ̂ is an estimator of θ, then
E(θ̂) − θ is the amount of bias, and if θ̂ is an unbiased estimator of θ, then bias = 0.
In our case, α̂ & β̂ are estimators of the true parameters α & β. To show that they
are the unbiased estimators of their respective parameters means to prove that:
E(β̂) = β and E(α̂) = α

To prove E(β̂) = β, recall that β̂ = ΣkiYi = Σki(α + βXi + Ui) = αΣki + βΣkiXi + ΣkiUi.

Σki = Σxi/Σxi² = Σ(X − X̄)/Σxi² = (ΣX − nX̄)/Σxi² = (nX̄ − nX̄)/Σxi² = 0
⇒ Σki = 0 …………………………………………………………………(2.20)

ΣkiXi = ΣxiXi/Σxi² = Σ(X − X̄)Xi/Σxi² = (ΣX² − X̄ΣX)/Σxi²
      = (ΣX² − nX̄²)/(ΣX² − nX̄²) = 1
⇒ ΣkiXi = 1 ............................. ……………………………………………(2.21)

Therefore, β̂ = β + Σkiui ⇒ β̂ − β = Σkiui ……………………………………(2.22)
E(β̂) = β + ΣkiE(ui) = β, since E(ui) = 0.
Therefore, β̂ is an unbiased estimator of β.

Similarly, for α̂ we have α̂ = Ȳ − β̂X̄ = (1/n)ΣYi − β̂X̄. Substituting
Yi = α + βXi + ui and β̂ = β + Σkiui gives:
α̂ = α + βX̄ + (1/n)Σui − βX̄ − X̄Σkiui
  = α + (1/n)Σui − X̄Σkiui
⇒ α̂ − α = Σ(1/n − X̄ki)ui ……………………(2.23)
E(α̂) = α + Σ(1/n − X̄ki)E(ui) = α ……………………………………………(2.24)
Therefore, α̂ is an unbiased estimator of α.

To establish that α̂ and β̂ also possess minimum variance, we have to
first obtain the variances of α̂ and β̂ and then establish that each has the minimum
variance in comparison with the variances of other linear and unbiased estimators
obtained by any econometric method other than OLS.

a. Variance of β̂
var(β̂) = E[β̂ − E(β̂)]² = E(β̂ − β)² ……………………………………(2.25)
var(β̂) = E(Σkiui)² = σ²Σki²

ki = xi/Σxi², and therefore, Σki² = Σxi²/(Σxi²)² = 1/Σxi²

⇒ var(β̂) = σ²Σki² = σ²/Σxi² ……………………………………………..(2.26)
b. Variance of α̂
var(α̂) = E[α̂ − E(α̂)]² = E(α̂ − α)² ……………………………………(2.27)
       = E[Σ(1/n − X̄ki)ui]²
       = σ²Σ(1/n − X̄ki)²
       = σ²Σ(1/n² − (2/n)X̄ki + X̄²ki²)
       = σ²(1/n − (2X̄/n)Σki + X̄²Σki²),  and since Σki = 0,
       = σ²(1/n + X̄²Σki²)
       = σ²(1/n + X̄²/Σxi²),  since Σki² = Σxi²/(Σxi²)² = 1/Σxi²

Again:
1/n + X̄²/Σxi² = (Σxi² + nX̄²)/(nΣxi²) = ΣXi²/(nΣxi²)

⇒ var(α̂) = σ²(1/n + X̄²/Σxi²) = σ²ΣXi²/(nΣxi²) …………………………………………(2.28)
Dear student! We have now computed the variances of the OLS estimators. It is
time to check whether these OLS estimators possess the minimum variance
property compared to other estimators of the true α and β, other
than α̂ and β̂.

1. Minimum variance of β̂
Suppose β* is an alternative linear and unbiased estimator of β, and
let β* = ΣwiYi .........................................………………………………(2.29)
where wi ≠ ki, but wi = ki + ci, with ci an arbitrary set of constants.

β* = Σwi(α + βXi + ui),  since Yi = α + βXi + Ui
   = αΣwi + βΣwiXi + Σwiui
var(β*) = Σwi²var(Yi) = σ²Σwi²
        = σ²Σ(ki + ci)² = σ²Σki² + σ²Σci²   (the cross term vanishes since Σkici = 0
                                             by the unbiasedness conditions)
        = var(β̂) + σ²Σci²

Given that ci is an arbitrary constant, σ²Σci² is positive, i.e. it is greater than zero.
Thus var(β*) > var(β̂). This proves that β̂ possesses the minimum variance property.
In a similar way we can prove that the least squares estimate of the constant
intercept (α̂) possesses minimum variance.
2. Minimum variance of α̂
We take a new estimator α*, which we assume to be a linear and unbiased
estimator of α. The least squares estimator α̂ is given by:
α̂ = Σ(1/n − X̄ki)Yi
By analogy with the proof of the minimum variance property of β̂, let's use
the weights wi = ki + ci. Consequently:
α* = Σ(1/n − X̄wi)Yi
Since Yi = α + βXi + ui,
α* = Σ(1/n − X̄wi)(α + βXi + ui)
   = α + βX̄ + (1/n)Σui − αX̄Σwi − βX̄ΣwiXi − X̄Σwiui

For α* to be an unbiased estimator of α, we must have
E(Σwi) = 0, E(ΣwiXi) = 1 and E(Σwiui) = 0,
i.e., Σwi = 0 and ΣwiXi = 1. These conditions imply that Σci = 0 and ΣciXi = 0.

var(α*) = Σ(1/n − X̄wi)²var(Yi)
        = σ²Σ(1/n − X̄wi)²
        = σ²Σ(1/n² − (2/n)X̄wi + X̄²wi²)
        = σ²(1/n + X̄²Σwi² − (2X̄/n)Σwi)
        = σ²(1/n + X̄²Σwi²),  since Σwi = 0
        = σ²[1/n + X̄²Σ(ki² + ci²)]       (since Σkici = 0)
        = σ²(1/n + X̄²/Σxi²) + σ²X̄²Σci²

var(α*) = σ²ΣXi²/(nΣxi²) + σ²X̄²Σci²

The first term in this expression is var(α̂); hence
var(α*) = var(α̂) + σ²X̄²Σci² > var(α̂).

Therefore, we have proved that the least squares estimators of the linear regression
model are best, linear and unbiased (BLU) estimators.

We now turn to the estimation of the error variance. It can be shown that
σ̂² = Σei²/(n − 2)
is an unbiased estimator of σ². To prove this we have to compute Σei² from the
expressions of Y, Ŷ, y, ŷ and ei.

Proof:
Yi = α̂ + β̂Xi + ei
Ŷi = α̂ + β̂Xi
⇒ Yi = Ŷi + ei ……………………………………………………………(2.31)
⇒ ei = Yi − Ŷi ……………………………………………………………(2.32)
Summing (2.31) over the sample and dividing by n (noting that Σei = 0) gives
Ȳ = ΣŶi/n, i.e. the mean of the fitted values equals Ȳ ………………………(2.33)

Subtracting (2.33) from (2.31):
(Yi − Ȳ) = (Ŷi − Ȳ) + ei
⇒ yi = ŷi + ei ………………………………………………(2.34)

From (2.34):
ei = yi − ŷi ………………………………………………..(2.35)

From Yi = α + βXi + Ui and Ȳ = α + βX̄ + Ū
we get, by subtraction,
yi = (Yi − Ȳ) = β(Xi − X̄) + (Ui − Ū) = βxi + (Ui − Ū)
⇒ yi = βxi + (U − Ū) …………………………………………………….(2.36)
Note that we assumed earlier that E(u) = 0, i.e. in taking a very large number of
samples we expect U to have a mean value of zero, but in any particular single
sample Ū is not necessarily zero.

Similarly, from
Ŷi = α̂ + β̂Xi   and   Ȳ = α̂ + β̂X̄
we get, by subtraction,
Ŷi − Ȳ = β̂(Xi − X̄)
⇒ ŷi = β̂xi …………………………………………………………….(2.37)
ei = yi − ŷi = (ui − ū) − (β̂ − β)xi

The summation over the n sample values of the squares of the residuals yields:
Σei² = Σ[(ui − ū) − (β̂ − β)xi]²
     = Σ(ui − ū)² + (β̂ − β)²Σxi² − 2(β̂ − β)Σxi(ui − ū)

Taking expected values, we have:
E(Σei²) = E[Σ(ui − ū)²] + E[(β̂ − β)²]Σxi² − 2E[(β̂ − β)Σxi(ui − ū)] …………(2.38)

The terms on the right-hand side are evaluated one by one. First,
E[Σ(ui − ū)²] = E[Σui² − nū²]
             = ΣE(ui²) − nE(ū²)
             = nσ²u − n·(1/n²)E(u1 + u2 + ....... + un)²     (since E(ui²) = σ²u)
             = nσ²u − (1/n)E(Σui² + 2Σuiuj),  i ≠ j
             = nσ²u − (1/n)[nσ²u + 2ΣE(uiuj)]
             = nσ²u − σ²u                     (given E(uiuj) = 0)
             = σ²u(n − 1) ……………………………………………..(2.39)
The second term:
E[(β̂ − β)²]Σxi² = Σxi²·var(β̂) = Σxi²·σ²u·(1/Σxi²) = σ²u ……………………(2.40)

The third term:
−2E[(β̂ − β)Σxi(ui − ū)] = −2E[(Σxiui/Σxi²)(Σxiui)]
                          (since β̂ − β = Σkiui with ki = xi/Σxi², and Σxiū = ūΣxi = 0)
    = −2E[(Σxiui)²/Σxi²]
    = −2E[Σxi²ui² + 2Σxixjuiuj]/Σxi²,  i ≠ j
    = −2Σxi²E(ui²)/Σxi²                 (given E(uiuj) = 0)
    = −2σ²u ………………………………………………………………..(2.41)

Consequently, equation (2.38) can be written in terms of (2.39), (2.40) and (2.41)
as follows:
E(Σei²) = (n − 1)σ²u + σ²u − 2σ²u = (n − 2)σ²u ………………………….(2.42)

From which we get
E[Σei²/(n − 2)] = E(σ̂²u) = σ²u ………………………………………………..(2.43)

since σ̂²u = Σei²/(n − 2).
Thus, σ̂² = Σei²/(n − 2) is an unbiased estimate of the true variance of the error
term (σ²).

Dear student! The conclusion that we can draw from the above proof is that we
can substitute σ̂² = Σei²/(n − 2) for (σ²) in the variance expressions of α̂ and β̂,
since E(σ̂²) = σ²:

Var(β̂) = σ̂²/Σxi² = Σei²/[(n − 2)Σxi²] ……………………………………(2.44)

Var(α̂) = σ̂²ΣXi²/(nΣxi²) = Σei²ΣXi²/[n(n − 2)Σxi²] ……………………………(2.45)

Note: Σei² can be computed as Σei² = Σyi² − β̂Σxiyi.

Dear student! Do not worry about the derivation of this expression; we will
perform the derivation in a subsequent subtopic.
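Continuing the earlier illustrative computation, the sketch below estimates σ̂² and the standard errors of α̂ and β̂ using (2.43)–(2.45) and the shortcut Σei² = Σyi² − β̂Σxiyi:

```python
import numpy as np

# Same illustrative sample as before (review question 3 data)
X = np.array([2.0, 7, 5, 1, 4, 8, 2, 8])
Y = np.array([15.0, 41, 32, 9, 28, 43, 17, 40])
n = X.size

x, y = X - X.mean(), Y - Y.mean()
beta_hat = (x * y).sum() / (x ** 2).sum()

rss = (y ** 2).sum() - beta_hat * (x * y).sum()       # sum e^2 = sum y^2 - beta_hat * sum xy
sigma2_hat = rss / (n - 2)                            # unbiased estimate of sigma^2, eq. (2.43)

var_beta = sigma2_hat / (x ** 2).sum()                               # eq. (2.44)
var_alpha = sigma2_hat * (X ** 2).sum() / (n * (x ** 2).sum())       # eq. (2.45)

print(sigma2_hat, np.sqrt(var_beta), np.sqrt(var_alpha))
```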
Figure 'd'. Actual and estimated values of the dependent variable Y.
As can be seen from fig. (d) above, Y − Ȳ measures the variation of the
sample observation values of the dependent variable around the mean. However,
the variation in Y that can be attributed to the influence of X (i.e. the regression line)
is given by the vertical distance Ŷ − Ȳ. The part of the total variation in Y about
Now, we may write the observed Y as the sum of the predicted value (Ŷ) and the
residual term (ei):
Yi = Ŷi + ei
(observed Yi = predicted Yi + residual)

From equation (2.34) we have the same equation in deviation form:
yi = ŷi + ei. By squaring and summing both sides, we obtain the following
expression:
Σyi² = Σ(ŷi + ei)²
Σyi² = Σŷi² + Σei² + 2Σŷiei
But Σŷiei = β̂Σxiei = 0  (since Σxiei = 0) ………………………………………(2.46)

Therefore:
Σyi² = Σŷi² + Σei² ………………………………...(2.47)
(Total variation = Explained variation + Unexplained variation)

OR,
i.e.
TSS = ESS + RSS ……………………………………….(2.48)

Mathematically, the explained variation as a percentage of the total variation is
expressed as:
ESS/TSS = Σŷ²/Σy² ……………………………………….(2.49)

From equation (2.37) we have ŷ = β̂x. Squaring and summing both sides gives us
Σŷ² = β̂²Σx² ……………………………………………………………..(2.50)

Substituting (2.50) in (2.49), we obtain:
ESS/TSS = β̂²Σx²/Σy²
        = (Σxiyi/Σxi²)²·(Σx²/Σy²),   since β̂ = Σxiyi/Σxi²
        = (Σxy/Σx²)(Σxy/Σy²)

⇒ R² = (Σxy)²/(Σx²Σy²) ………………………………………(2.52)
The limits of R²: the value of R² falls between zero and one, i.e. 0 ≤ R² ≤ 1.
Interpretation of R2
Suppose R² = 0.9; this means that the regression line gives a good fit to the
observed data, since this line explains 90% of the total variation of the Y values
around their mean. The remaining 10% of the total variation in Y is unaccounted
for by the regression line and is attributed to the factors included in the disturbance
variable ui.
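Using the same illustrative data as before, R² can be computed from (2.49)–(2.50) as follows:

```python
import numpy as np

X = np.array([2.0, 7, 5, 1, 4, 8, 2, 8])     # illustrative data (review question 3)
Y = np.array([15.0, 41, 32, 9, 28, 43, 17, 40])

x, y = X - X.mean(), Y - Y.mean()
beta_hat = (x * y).sum() / (x ** 2).sum()

ess = beta_hat ** 2 * (x ** 2).sum()   # explained sum of squares, eq. (2.50)
tss = (y ** 2).sum()                   # total sum of squares
r_squared = ess / tss                  # eq. (2.49); equals (sum xy)^2 / (sum x^2 * sum y^2)

print(r_squared)
```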
Exercise:
Suppose rxy is the correlation coefficient between Y and X and is given by:
rxy = Σxiyi / √(Σxi² Σyi²)
And let r²yŷ be the square of the correlation coefficient between Y and Ŷ, given by:
r²yŷ = (Σyŷ)² / (Σy² Σŷ²)
We have already assumed that the error term is normally distributed with mean
zero and variance σ², i.e. Ui ~ N(0, σ²). Similarly, we also showed that
Yi ~ N(α + βXi, σ²). We now want to show that:

1. β̂ ~ N(β, σ²/Σx²)

2. α̂ ~ N(α, σ²ΣX²/(nΣx²))

To show whether α̂ and β̂ are normally distributed or not, we need to make use of
one property of the normal distribution: "........ any linear function of a normally
distributed variable is itself normally distributed."

β̂ = ΣkiYi = k1Y1 + k2Y2 + .... + knYn
β̂ ~ N(β, σ²/Σx²);   α̂ ~ N(α, σ²ΣX²/(nΣx²))

The OLS estimates α̂ and β̂ are obtained from a sample of observations on Y and
X. Since sampling errors are inevitable in all estimates, it is necessary to apply
test of significance in order to measure the size of the error and determine the
degree of confidence in order to measure the validity of these estimates. This can
be done by using various tests. The most common ones are:
i) Standard error test ii) Student’s t-test iii) Confidence interval
All of these testing procedures reach the same conclusion. Let us now see these
testing methods one by one.
i) Standard error test
This test helps us decide whether the estimates α̂ and β̂ are significantly different
from zero, i.e. whether the sample from which they have been estimated might
have come from a population whose true parameters are zero,
α = 0 and/or β = 0.
Formally we test the null hypothesis
H0: βi = 0 against the alternative hypothesis H1: βi ≠ 0.
First: compute the standard errors of the estimates:
SE(β̂) = √var(β̂)
SE(α̂) = √var(α̂)
Second: compare the standard errors with the numerical values of α̂ and β̂.
Decision rule:
If SE(β̂i) > ½|β̂i|, accept the null hypothesis and reject the alternative
If SE(β̂i) < ½|β̂i|, reject the null hypothesis and accept the alternative
hypothesis, i.e. conclude that β̂i is statistically significant.

Numerical example: Test the significance of the slope parameter at the 5% level of
significance using the standard error test, given
SE(β̂) = 0.025
β̂ = 0.6
½β̂ = 0.3
Since SE(β̂) = 0.025 < ½β̂ = 0.3, we reject H0 and conclude that β̂ is statistically
significant.
ii) Student's t-test
Recall from your statistics course that, for a sample mean,
t = (X̄ − μ)/sX̄,  with n − 1 degrees of freedom,
where sX̄ = sX/√n, sX = √[Σ(X − X̄)²/(n − 1)], and n is the sample size.

We can derive the t-values of the OLS estimates in the same way:
t(β̂) = (β̂ − β)/SE(β̂)
t(α̂) = (α̂ − α)/SE(α̂)
each with n − k degrees of freedom,
where:
SE = standard error
k = number of parameters in the model.

Since we have two parameters in simple linear regression with an intercept different
from zero, our degrees of freedom are n − 2. Like the standard error test, we formally
test the hypothesis H0: βi = 0 against the alternative H1: βi ≠ 0 for the slope
parameter, and H0: α = 0 against the alternative H1: α ≠ 0 for the intercept.
To undertake the test we compute
t* = (β̂ − 0)/SE(β̂) = β̂/SE(β̂)
Then this is a two-tail test. If the level of significance is 5%, divide it by two to
obtain the critical value of t from the t-table.
Step 4: Obtain the critical value of t, called tc, at α/2 and n − 2 degrees of freedom for a
two-tail test.
Step 5: Compare t* (the computed value of t) and tc (the critical value of t):
If |t*| > tc, reject H0 and accept H1. The conclusion is that β̂ is statistically
significant.
If |t*| < tc, accept H0 and reject H1. The conclusion is that β̂ is statistically
insignificant.
Numerical Example:
Suppose that from a sample size n=20 we estimate the following consumption
function:
C = 100 + 0.70Y + e
     (75.5)  (0.21)
The values in the brackets are standard errors. We want to test the null hypothesis
H0: βi = 0 against the alternative H1: βi ≠ 0 using the t-test at the 5% level of
significance.
a. The t-value for the test statistic is:
t* = (β̂ − 0)/SE(β̂) = β̂/SE(β̂) = 0.70/0.21 ≈ 3.3
b. The critical value of 't' is found at α/2 = 0.025 and 18 degrees of freedom (df),
i.e. n − 2 = 20 − 2. From the t-table, 'tc' at the 0.025 level of significance and 18 df is 2.10.
c. Since t* = 3.3 and tc = 2.1, t* > tc. It implies that β̂ is statistically significant.
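The arithmetic of this example can be checked with SciPy (the 5% level and two-tailed set-up are as stated above):

```python
from scipy import stats

beta_hat, se_beta = 0.70, 0.21
n, k = 20, 2

t_star = beta_hat / se_beta                       # computed t-ratio
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - k)      # two-tailed 5% critical value

print(round(t_star, 2), round(t_crit, 2))         # roughly 3.3 and 2.10
print(abs(t_star) > t_crit)                       # True -> reject H0
```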
iii) Confidence interval
In order to define how close the estimate is to the true parameter, we must construct
a confidence interval for the true parameter; in other words, we must establish
limiting values around the estimate within which the true parameter is expected to
lie with a certain "degree of confidence". In this respect we say that with a
given probability the population parameter will be within the defined confidence
interval (confidence limits).
sample, would include the true population parameter in 95% of the cases. In the
other 5% of the cases the population parameter will fall outside the confidence
interval.
In a two-tail test at the α level of significance, the probability of obtaining the specific
t-value of either −tc or tc is α/2 at n − 2 degrees of freedom. The probability of
obtaining any value of t between −tc and tc at n − 2 degrees of freedom is
1 − (α/2 + α/2), i.e. 1 − α:
Pr{−tc < t* < tc} = 1 − α,
where t* = (β̂ − β)/SE(β̂) …………………………………………………….(2.58)
Pr{−SE(β̂)tc < β̂ − β < SE(β̂)tc} = 1 − α   (multiplying through by SE(β̂))
The limits within which the true β lies at the (1 − α)% degree of confidence are
β̂ ± SE(β̂)tc.
We test H0: β = 0 against H1: β ≠ 0.
Decision rule: If the hypothesized value of β in the null hypothesis is within the
confidence limits, accept H0 and reject H1; this indicates that β̂ is statistically
insignificant. If the hypothesized value of β in the null
hypothesis is outside the limits, reject H0 and accept H1. This indicates that β̂ is
statistically significant.
Numerical Example:
Suppose we have estimated the following regression line from a sample of 20
observations:
Y = 128.5 + 2.88X + e
     (38.2)   (0.85)
The confidence limits for the slope are β̂ ± SE(β̂)tc, with
β̂ = 2.88
SE(β̂) = 0.85
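A quick sketch computing the 95% confidence limits from the figures above:

```python
from scipy import stats

beta_hat, se_beta, n = 2.88, 0.85, 20
t_crit = stats.t.ppf(0.975, df=n - 2)             # about 2.10 for 18 df

lower = beta_hat - t_crit * se_beta
upper = beta_hat + t_crit * se_beta
print(round(lower, 2), round(upper, 2))           # roughly (1.09, 4.67)
```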
The results of the regression analysis are reported in conventional formats. It is
not sufficient merely to report the estimates of the β's. In practice we report
regression coefficients together with their standard errors and the value of R2. It
has become customary to present the estimated equations with standard errors
placed in parenthesis below the estimated parameter values. Sometimes, the
estimated coefficients, the corresponding standard errors, the p-values, and some
other indicators are presented in tabular form.
These results are supplemented by R² (written to the right of the regression
equation).
Example:
Y = 128.5 + 2.88X,   R² = 0.93
     (38.2)   (0.85)
The numbers in the
parenthesis below the parameter estimates are the standard errors. Some
econometricians report the t-values of the estimated coefficients in place of the
standard errors.
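In practice such a report is usually produced by software. One possible way (not prescribed by the handout) is the statsmodels package, sketched here with the same made-up sample as in the earlier illustrations:

```python
import numpy as np
import statsmodels.api as sm

X = np.array([2.0, 7, 5, 1, 4, 8, 2, 8])      # illustrative data only
Y = np.array([15.0, 41, 32, 9, 28, 43, 17, 40])

model = sm.OLS(Y, sm.add_constant(X)).fit()
print(model.params)      # alpha_hat, beta_hat
print(model.bse)         # standard errors to place in parentheses
print(model.rsquared)    # R^2 reported beside the equation
```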
Review Questions
1. Econometrics deals with the measurement of economic relationships which are stochastic
or random. The simplest form of economic relationship between two variables X and Y
can be represented by:
Yi = β0 + β1Xi + Ui; where β0 and β1 are regression parameters and Ui is the random
disturbance term.
b. Calculate the coefficient of determination for the data and interpret its value
c. If in a 9th economy the rate of interest is R=8.1, predict the demand for money(M) in
this economy.
3. The following data refers to the price of a good „P‟ and the quantity of the good supplied,
„S‟.
P 2 7 5 1 4 8 2 8
S 15 41 32 9 28 43 17 40
a. Estimate the linear regression line S = α + βP
ΣXiYi = 1,296,836
ΣYi² = 539,512
i) Estimate the regression line of sales on price and interpret the results
ii) What is the part of the variation in sales which is not explained by the
regression line?
iii) Estimate the price elasticity of sales.
5. The following table includes the GNP(X) and the demand for food (Y) for a country over
ten years period.
year 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989
Y 6 7 8 10 8 9 10 9 11 10
X 50 52 55 59 57 58 62 65 68 70
a. Estimate the food function
b. Compute the coefficient of determination and find the explained and unexplained
variation in the food expenditure.
c. Compute the standard error of the regression coefficients and conduct test of
significance at the 5% level of significance.
6. A sample of 20 observations corresponding to the regression model Yi = α + βXi + Ui
gave the following data:
ΣYi = 21.9        Σ(Yi − Ȳ)² = 86.9
ΣXi = 186.2       Σ(Xi − X̄)² = 215.4
Σ(Xi − X̄)(Yi − Ȳ) = 106.4
a. Estimate α and β
b. Calculate the variance of our estimates
c. Estimate the conditional mean of Y corresponding to a value of X fixed at X = 10.
7. Suppose that a researcher estimates a consumption function and obtains the following
results:
C = 15 + 0.81Yd        n = 19
    (3.1)  (18.7)      R² = 0.99
where C = consumption, Yd = disposable income, and the numbers in the parentheses are the 't-ratios'.
a. Test the significance of Yd statistically using the t-ratios
b. Determine the estimated standard deviations of the parameter estimates
8. State and prove the Gauss-Markov theorem.
9. Given the model:
Yi = β0 + β1Xi + Ui with the usual OLS assumptions, derive the expression for the error
variance.
Chapter Three
3.1 Introduction
and any other (minor) factors, other than xi that might influence Y.
In this chapter we will first start our discussion with the assumptions of the
multiple regression model, then proceed with the case of two
explanatory variables, and then generalize the multiple regression model to
the case of k explanatory variables using matrix algebra.
3. Homoscedasticity: The variance of each ui is the same for all the Xi values,
   i.e. Ui ~ N(0, σ²).
5. No auto or serial correlation: The values of ui (corresponding to Xi) are
   independent of the values of any other uj (corresponding to Xj),
   i.e. E(uiuj) = 0 for i ≠ j.
We cannot exhaustively list all the assumptions, but the above are some
of the basic assumptions that enable us to proceed with our analysis.

Yi = β0 + β1X1i + β2X2i + Ui ……………………………………………….(3.3)
is a multiple regression with two explanatory variables. The expected value of the
above model is called the population regression equation, i.e.
E(Y) = β0 + β1X1 + β2X2, since E(Ui) = 0. …………………................(3.4)
To obtain the OLS estimators we partially differentiate Σei² with respect to β̂0, β̂1
and β̂2 and set the partial derivatives equal to zero:

∂Σei²/∂β̂0 = −2Σ(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 ………………………. (3.8)

∂Σei²/∂β̂1 = −2ΣX1i(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 ……………………. (3.9)

∂Σei²/∂β̂2 = −2ΣX2i(Yi − β̂0 − β̂1X1i − β̂2X2i) = 0 ………… ………..(3.10)

We know that
ΣXiYi − nX̄iȲi = Σxiyi
ΣXi² − nX̄i² = Σxi²

Substituting the above identities in equation (3.14), the normal equations (3.12) and
(3.13) can be written in deviation form as follows:

Σx1y = β̂1Σx1² + β̂2Σx1x2 …………………………………………(3.16)
Σx2y = β̂1Σx1x2 + β̂2Σx2² ………………………………………..(3.17)

or, in matrix form,

| Σx1²    Σx1x2 | | β̂1 |   | Σx1y |
| Σx1x2   Σx2²  | | β̂2 | = | Σx2y | ………….(3.20)

Solving this system for β̂1 and β̂2 gives:

β̂1 = (Σx1y·Σx2² − Σx2y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²) ……………………………(3.21)

β̂2 = (Σx2y·Σx1² − Σx1y·Σx1x2) / (Σx1²·Σx2² − (Σx1x2)²) ………………….……………………… (3.22)
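A small sketch applying formulas (3.21) and (3.22); the data-generating values below are arbitrary assumptions used only to exercise the formulas:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X1 = rng.normal(10, 2, n)                 # assumed regressors, illustration only
X2 = rng.normal(5, 1, n)
Y = 4 + 1.5 * X1 - 0.8 * X2 + rng.normal(0, 1, n)

x1, x2, y = X1 - X1.mean(), X2 - X2.mean(), Y - Y.mean()

d = (x1**2).sum() * (x2**2).sum() - (x1 * x2).sum()**2
b1 = ((x1*y).sum() * (x2**2).sum() - (x2*y).sum() * (x1*x2).sum()) / d   # eq. (3.21)
b2 = ((x2*y).sum() * (x1**2).sum() - (x1*y).sum() * (x1*x2).sum()) / d   # eq. (3.22)
b0 = Y.mean() - b1 * X1.mean() - b2 * X2.mean()

print(b0, b1, b2)
```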
Econometrics 56
Prepared by: Bealu T
Σy² = β̂1Σx1iyi + β̂2Σx2iyi + Σei² ----------------- (3.26)
(Total sum of squares = Explained sum of squares + Residual sum of squares,
 i.e. total variation = explained variation + unexplained variation)
the model, Ŷt. In this case, the model is said to "fit" the data well. If R² is low,
there is no association between the values of Yt and the values predicted by the
model, Ŷt, and the model does not fit the data well.
This measure does not always go up when a variable is added, because of the
degrees-of-freedom term (n − k) in the adjustment. As the number of variables k
increases, RSS goes down, but so does n − k. The net effect on the adjusted R²
depends on how much RSS falls relative to the loss of degrees of freedom. While
solving one problem, this corrected measure of goodness of fit unfortunately
introduces another one: it loses its interpretation; the adjusted R² is no longer the
percent of variation explained. This modified R² is sometimes used and misused as
a device for selecting the appropriate set of explanatory variables.
3.4.General Linear Regression Model and Matrix Approach
So far we have discussed the regression models containing one or two explanatory
variables. Let us now generalize the model assuming that it contains k variables.
It will be of the form:
Y = β0 + β1X1 + β2X2 + ...... + βkXk + U
∂Σei²/∂β̂1 = −2Σ(Yi − β̂0 − β̂1X1i − β̂2X2i − ...... − β̂kXki)(X1i) = 0
……………………………………………………..
∂Σei²/∂β̂k = −2Σ(Yi − β̂0 − β̂1X1i − β̂2X2i − ...... − β̂kXki)(Xki) = 0

The general form of the above equations (except the first) may be written as:
∂Σei²/∂β̂j = −2Σ(Yi − β̂0 − β̂1X1i − ...... − β̂kXki)(Xji) = 0 ;  where (j = 1, 2, ....k)

The resulting normal equations are:
ΣYi = nβ̂0 + β̂1ΣX1i + β̂2ΣX2i + .................. + β̂kΣXki
ΣYiX1i = β̂0ΣX1i + β̂1ΣX1i² + β̂2ΣX1iX2i + .................. + β̂kΣX1iXki
   :        :          :           :                            :
ΣYiXki = β̂0ΣXki + β̂1ΣX1iXki + β̂2ΣX2iXki + .................. + β̂kΣXki²
Solving the above normal equations directly results in algebraic complexity, but we
can solve this easily using matrices. Hence in the next section we discuss the
matrix approach to the linear regression model.

Writing the model for each observation:
Y1 = β0 + β1X11 + β2X21 + β3X31 + ............. + βkXk1 + U1
Y2 = β0 + β1X12 + β2X22 + β3X32 + ............. + βkXk2 + U2
Y3 = β0 + β1X13 + β2X23 + β3X33 + ............. + βkXk3 + U3
…………………………………………………...
Yn = β0 + β1X1n + β2X2n + β3X3n + ............. + βkXkn + Un

In matrix form:

| Y1 |   | 1  X11  X21 ....... Xk1 | | β0 |   | U1 |
| Y2 |   | 1  X12  X22 ....... Xk2 | | β1 |   | U2 |
| Y3 | = | 1  X13  X23 ....... Xk3 | | β2 | + | U3 |
| .  |   | .   .    .   .......  . | | .  |   | .  |
| Yn |   | 1  X1n  X2n ....... Xkn | | βk |   | Un |

   Y   =            X                  β    +   U

In short, Y = Xβ + U ……………………………………………………(3.29)
β̂ = (β̂0, β̂1, ....., β̂k)'   and   e = (e1, e2, ....., en)'

Σei² = e1² + e2² + e3² + ......... + en² = [e1, e2, ......, en](e1, e2, ......, en)' = e'e

⇒ Σei² = e'e = (Y − Xβ̂)'(Y − Xβ̂) = Y'Y − 2β̂'X'Y + β̂'X'Xβ̂

Minimizing e'e with respect to β̂ (using the rule ∂(X'AX)/∂X = 2AX, or equivalently
2X'A, for a symmetric matrix A) gives
∂(e'e)/∂β̂ = −2X'Y + 2X'Xβ̂ = 0
⇒ (X'X)β̂ = X'Y
⇒ β̂ = (X'X)⁻¹X'Y
Hence β̂ is the vector of required least squares estimators, β̂0, β̂1, β̂2, ........, β̂k.

Properties of the OLS estimators:
1. Linearity
Let C = (X'X)⁻¹X', a matrix of fixed elements. Then
β̂ = CY …………………………………………….(3.33)
⇒ β̂ is a linear function of Y.
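The matrix formula β̂ = (X'X)⁻¹X'Y can be exercised directly with NumPy; the simulated data below are assumptions used only for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
X1 = rng.normal(size=n)                  # assumed regressors, illustration only
X2 = rng.normal(size=n)
Y = 2 + 0.5 * X1 - 1.2 * X2 + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), X1, X2])          # design matrix with a constant column
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y        # beta_hat = (X'X)^(-1) X'Y

print(beta_hat)
print(np.linalg.lstsq(X, Y, rcond=None)[0])        # numerically safer cross-check
```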
2. Unbiasedness
β̂ = (X'X)⁻¹X'Y
  = (X'X)⁻¹X'(Xβ + U)
  = β + (X'X)⁻¹X'U

E(β̂) = E[β + (X'X)⁻¹X'U]
     = β + (X'X)⁻¹X'E(U)
     = β,  since E(U) = 0.

Thus, the least squares estimators are unbiased.
3. Minimum variance
Before showing that all the OLS estimators are best (possess the minimum variance
property), it is important to derive their variances.

We know that var(β̂) = E(β̂ − β)² = E[(β̂ − β)(β̂ − β)'], which is the matrix

| E(β̂1−β1)²           E(β̂1−β1)(β̂2−β2)   .......   E(β̂1−β1)(β̂k−βk) |
| E(β̂2−β2)(β̂1−β1)     E(β̂2−β2)²          .......   E(β̂2−β2)(β̂k−βk) |
|        :                    :                           :        |
| E(β̂k−βk)(β̂1−β1)     E(β̂k−βk)(β̂2−β2)   .......   E(β̂k−βk)²       |

The above matrix is symmetric, containing variances along its main
diagonal and covariances of the estimators everywhere else. This matrix is,
therefore, called the variance-covariance matrix of the least squares estimators of
the regression coefficients. Thus,

var(β̂) = E[(β̂ − β)(β̂ − β)'] ……………………………………………(3.35)

From the unbiasedness derivation, β̂ − β = (X'X)⁻¹X'U ……………………(3.36)
var(β̂) = E[(β̂ − β)(β̂ − β)'] = E[(X'X)⁻¹X'UU'X(X'X)⁻¹]
       = (X'X)⁻¹X'E(UU')X(X'X)⁻¹
       = σ²u(X'X)⁻¹X'X(X'X)⁻¹
       = σ²u(X'X)⁻¹ …………………………………………………………(3.37)

where (X'X)⁻¹ is the inverse of

        | n      ΣX1i     .......  ΣXki    |
(X'X) = | ΣX1i   ΣX1i²    .......  ΣX1iXki |
        |  :       :                 :     |
        | ΣXki   ΣX1iXki  .......  ΣXki²   |

and the X's are in their absolute form. We can, therefore, obtain the variance of any
estimator, say β̂1, by taking the corresponding term from the principal diagonal of
(X'X)⁻¹ and multiplying it by σ²u.

When the x's are in deviation form we can write the multiple regression in matrix
form as
β̂ = (x'x)⁻¹x'y
The above column vector β̂ does not include the constant term β̂0. Under such
conditions the variances of the slope parameters in deviation form can be written
as: var(β̂) = σ²u(x'x)⁻¹ …………………………………………………….(3.38)
(the proof is the same as for (3.37) above). In general we can illustrate the variances of
the parameters by taking two explanatory variables.

The multiple regression with two explanatory variables, written in deviation form, is
yi = β̂1x1i + β̂2x2i + ei
var(β̂) = E[(β̂ − β)(β̂ − β)']
In this model:

(β̂ − β) = | β̂1 − β1 |        (β̂ − β)' = [β̂1 − β1   β̂2 − β2]
          | β̂2 − β2 |

E[(β̂ − β)(β̂ − β)'] = | E(β̂1 − β1)²           E(β̂1 − β1)(β̂2 − β2) |   | var(β̂1)       cov(β̂1, β̂2) |
                      | E(β̂1 − β1)(β̂2 − β2)   E(β̂2 − β2)²         | = | cov(β̂1, β̂2)   var(β̂2)     |

In the case of two explanatory variables, x in deviation form is

    | x11  x21 |
x = | x12  x22 |         x' = | x11  x12  .......  x1n |
    |  :    :  |              | x21  x22  .......  x2n |
    | x1n  x2n |

(x'x)⁻¹ = (1 / [Σx1²Σx2² − (Σx1x2)²]) |  Σx2²    −Σx1x2 |
                                      | −Σx1x2    Σx1²  |

so that

σ²u(x'x)⁻¹ = (σ²u / [Σx1²Σx2² − (Σx1x2)²]) |  Σx2²    −Σx1x2 |
                                           | −Σx1x2    Σx1²  |

i.e.,  var(β̂1) = σ²uΣx2² / [Σx1²Σx2² − (Σx1x2)²] ……………………………………(3.39)

and,   var(β̂2) = σ²uΣx1² / [Σx1²Σx2² − (Σx1x2)²] ………………. …….…….(3.40)

cov(β̂1, β̂2) = −σ²uΣx1x2 / [Σx1²Σx2² − (Σx1x2)²] …………………………………….(3.41)
As we have seen in the simple regression model, σ̂² = Σei²/(n − 2). For k parameters
(including the constant parameter), σ̂² = Σei²/(n − k).
In the above model we have three parameters including the constant term, so
σ̂² = Σei²/(n − 3)

where, in general,
Σei² = Σyi² − β̂1Σx1y − β̂2Σx2y − ......... − β̂kΣxky ………………………(3.42)
and, for the two-regressor model,
Σei² = Σyi² − β̂1Σx1y − β̂2Σx2y ………………………………………...(3.43)
This is all about the variances and covariances of the parameters. Now it is time to
see the minimum variance property.

Minimum variance of β̂
To show that all the β̂i's in the β̂ vector are best estimators, we also have to
prove that the variances obtained in (3.37) are the smallest amongst all other
possible linear unbiased estimators. We follow the same procedure as in the
case of the single explanatory variable model, where we first assumed an alternative
linear unbiased estimator and then established that its variance is greater
than that of the least squares estimator.

Assume that β* is an alternative unbiased and linear estimator of β. Suppose that
β* = [(X'X)⁻¹X' + B]Y, where B is a matrix of constants.
E(β*) = E[(X'X)⁻¹X'(Xβ + U) + B(Xβ + U)]
      = (X'X)⁻¹X'Xβ + (X'X)⁻¹X'E(U) + BXβ + BE(U)
      = β + BXβ,   [since E(U) = 0] ……………………………….(3.44)

Since our assumption regarding the alternative β* is that it is to be an unbiased
estimator of β, E(β*) should equal β; in other words, (BX) should
be a null matrix.

var(β*) = E[(β* − β)(β* − β)']
        = E{[((X'X)⁻¹X' + B)(Xβ + U) − β][((X'X)⁻¹X' + B)(Xβ + U) − β]'}     (BX = 0)
        = E{[(X'X)⁻¹X'U + BU][U'X(X'X)⁻¹ + U'B']}
        = σ²u[(X'X)⁻¹ + BB']          (using BX = 0 and X'B' = 0)

var(β*) = σ²u(X'X)⁻¹ + σ²uBB' ……………………………………….(3.45)
Or, in other words, var(β*) exceeds var(β̂) by the expression σ²uBB', and it follows
that β̂ possesses the minimum variance property.

We now derive R² in matrix form. We know that
Σei² = e'e = Y'Y − 2β̂'X'Y + β̂'X'Xβ̂,  and since (X'X)β̂ = X'Y,
Σei² = e'e = Y'Y − β̂'X'Y.

We know that yi = Yi − Ȳ, and
Σyi² = ΣYi² − (1/n)(ΣYi)²

In matrix notation:
Σyi² = Y'Y − (1/n)(ΣYi)² ………………………………………………(3.48)

Equation (3.48) gives the total sum of squares (the total variation) in the model.

Explained sum of squares = Σyi² − Σei²
                         = Y'Y − (1/n)(ΣYi)² − e'e
                         = Y'Y − (1/n)(ΣYi)² − Y'Y + β̂'X'Y
                         = β̂'X'Y − (1/n)(ΣYi)² ……………………….(3.49)

Since R² = Explained sum of squares / Total sum of squares,

R² = [β̂'X'Y − (1/n)(ΣYi)²] / [Y'Y − (1/n)(ΣYi)²] = (β̂'X'Y − nȲ²) / (Y'Y − nȲ²) ……………………(3.50)
Dear students! We hope that from the discussion made so far on the multiple
regression model you may make the following summary of results.
(i) Model: Y = Xβ + U
(vi) Coefficient of determination: R² = (β̂'X'Y − nȲ²) / (Y'Y − nȲ²)

The hypotheses of interest for the individual slope coefficients are:
A. H0: β1 = 0
   H1: β1 ≠ 0
B. H0: β2 = 0
   H1: β2 ≠ 0
The null hypothesis (A) states that, holding X2 constant, X1 has no (linear)
influence on Y. Similarly, hypothesis (B) states that, holding X1 constant, X2 has no
influence on the dependent variable Yi. To test these null hypotheses we will use
the following tests:

i- Standard error test: under this and the following testing methods we
test only for β̂1; the test for β̂2 is done in the same way.

SE(β̂1) = √var(β̂1) = √[σ̂²Σx2i² / (Σx1i²Σx2i² − (Σx1x2)²)],  where σ̂² = Σei²/(n − 3).

If SE(β̂1) > ½|β̂1|, we accept the null hypothesis; otherwise we reject it and conclude
that β̂1 is statistically significant.

ii- Student's t-test: we compute
t* = β̂i/SE(β̂i) ~ t(n − k), where n is the number of observations and k is the number of
parameters, and compare it with the critical t-value. For β2, for example,
t* = β̂2/SE(β̂2)
In this section we extend this idea to a joint test of the relevance of all the included
explanatory variables. Now consider the model
Y = β0 + β1X1 + β2X2 + ......... + βkXk + Ui
and the joint null hypothesis
H0: β1 = β2 = β3 = ............ = βk = 0
against the alternative that at least one slope coefficient is different from zero.
from the one used in testing the significance of β̂3 under the null hypothesis that
β3 = 0. But to test the joint hypothesis in that way, we would be violating the
assumptions underlying the test procedure.
The test procedure for any set of hypothesis can be based on a comparison of the
sum of squared errors from the original, the unrestricted multiple regression
model to the sum of squared errors from a regression model in which the null
hypothesis is assumed to be true. When a null hypothesis is assumed to be true,
we in effect place conditions or constraints, on the values that the parameters can
take, and the sum of squared errors increases. The idea of the test is that if these
sum of squared errors are substantially different, then the assumption that the joint
null hypothesis is true has significantly reduced the ability of the model to fit the
data, and the data do not support the null hypothesis.
If the null hypothesis is true, we expect that the data are compatible with the
conditions placed on the parameters. Thus, there would be little change in the sum
of squared errors when the null hypothesis is assumed to be true.
Let the Restricted Residual Sum of Square (RRSS) be the sum of squared errors
in the model obtained by assuming that the null hypothesis is true and URSS be
the sum of the squared error of the original unrestricted model i.e. unrestricted
residual sum of square (URSS). It is always true that RRSS − URSS ≥ 0.
¹ Gujarati, 3rd ed.
Yi = Ŷi + ei
ei = Yi − Ŷi
This sum of squared errors is called the unrestricted residual sum of squares (URSS).
This is the case when the null hypothesis is not true. If the null hypothesis is
assumed to be true, i.e. when all the slope coefficients are zero, the model reduces to
Y = β̂0 + ei
β̂0 = ΣYi/n = Ȳ (applying OLS) …………………………….(3.52)
e = Y − β̂0,  but β̂0 = Ȳ
e = Y − Ȳ
The sum of squared errors when the null hypothesis is assumed to be true is called the
Restricted Residual Sum of Squares (RRSS), and this is equal to the total sum of
squares (TSS).
The ratio:
F = [(RRSS − URSS)/(k − 1)] / [URSS/(n − k)] ~ F(k−1, n−k) ……………………… (3.53)
(it has an F-distribution with k − 1 and n − k degrees of freedom for the numerator
and denominator respectively)
RRSS = TSS
F = [(TSS − RSS)/(k − 1)] / [RSS/(n − k)]

F = [ESS/(k − 1)] / [RSS/(n − k)] ………………………………………………. (3.54)

If we divide the numerator and denominator above by Σy² = TSS, then:

F = [(ESS/TSS)/(k − 1)] / [(RSS/TSS)/(n − k)]

F = [R²/(k − 1)] / [(1 − R²)/(n − k)] …………………………………………..(3.55)

This implies that the computed value of F can be calculated either from ESS and
RSS or from R² and 1 − R². If the null hypothesis is not true, then the difference between
RRSS and URSS (TSS and RSS) becomes large, implying that the constraints
placed on the model by the null hypothesis have a large effect on the ability of the
model to fit the data, and the value of F tends to be large. Thus, we reject the null
hypothesis if the F test statistic becomes too large. This value is compared with the
critical value of F which leaves a probability of α in the upper tail of the F-
distribution with k − 1 and n − k degrees of freedom.
If the computed value of F is greater than the critical value F(k − 1, n − k), then the
parameters of the model are jointly significant, or the dependent variable Y is
linearly related to the independent variables included in the model.
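A quick sketch of this F-test, plugging in the R², n and k of the worked example that follows (k counts the constant, so there are k − 1 = 3 slope restrictions):

```python
from scipy import stats

r_squared, n, k = 0.97, 10, 4        # values from the worked example below

f_star = (r_squared / (k - 1)) / ((1 - r_squared) / (n - k))   # equation (3.55)
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)

print(round(f_star, 2), round(f_crit, 2))
print(f_star > f_crit)               # True -> reject H0: all slopes are zero
```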
Table 2.1. Numerical example for the computation of the OLS estimators.

n    Y    X1   X2   X3    yi   x1   x2   x3   yi²  x1x2  x2x3  x1x3  x1²  x2²  x3²  x1yi  x2yi  x3yi
1    49   35   53   200   -3   -7   -9    0    9    63     0     0   49   81    0    21    27     0
2    40   35   53   212  -12   -7   -9   12  144    63  -108   -84   49   81  144    84   108  -144
3    41   38   50   211  -11   -4  -12   11  121    48  -132   -44   16  144  121    44   132  -121
4    46   40   64   212   -6   -2    2   12   36    -4    24   -24    4    4  144    12   -12   -72
5    52   40   70   203    0   -2    8    3    0   -16    24    -6    4   64    9     0     0     0
6    59   42   68   194    7    0    6   -6   49     0   -36     0    0   36   36     0    42   -42
7    53   44   59   194    1    2   -3   -6    1    -6    18   -12    4    9   36     2    -3    -6
8    61   46   73   188    9    4   11  -12   81    44  -132   -48   16  121  144    36    99  -108
9    55   50   59   196    3    8   -3   -4    9   -24    12   -32   64    9   16    24    -9   -12
10   64   50   71   190   12    8    9  -10  144    72   -90   -80   64   81  100    96   108  -120
Sum  520  420  620  2000   0    0    0    0  594   240  -420  -330  270  630  750   319   492  -625

From the table, the means of the variables are: Ȳ = 52, X̄1 = 42, X̄2 = 62, X̄3 = 200.
Based on the above table and model, answer the following questions.
i. Estimate the parameters using the matrix approach
ii. Compute the variance of the parameters.
iii. Compute the coefficient of determination (R2)
iv. Report the regression result.
Solution:
(i) In matrix notation β̂ = (x'x)⁻¹x'y (when we use the data in deviation form), where

     | β̂1 |        | x11  x21  x31 |
β̂ =  | β̂2 | ,  x = | x12  x22  x32 | ,  so that
     | β̂3 |        |  :    :    :  |
                   | x1n  x2n  x3n |

        | Σx1²    Σx1x2   Σx1x3 |   |  270   240  -330 |         | Σx1y |   |  319 |
x'x  =  | Σx1x2   Σx2²    Σx2x3 | = |  240   630  -420 | , x'y = | Σx2y | = |  492 |
        | Σx1x3   Σx2x3   Σx3²  |   | -330  -420   750 |         | Σx3y |   | -625 |

Note: the calculations may be made easier by taking 30 out as a common factor from
all the elements of the matrix (x'x); this will not affect the final results.

|x'x| = 34,668,000
(ii) The variances of the parameters are obtained from the diagonal elements of σ̂²u(x'x)⁻¹:
var(β̂1) = σ̂²u(0.0085)
var(β̂2) = σ̂²u(0.0027)       where σ̂²u = Σei²/(n − k) = 17.11/6 = 2.851
var(β̂3) = σ̂²u(0.0032)

(iii) R² = [β̂'X'Y − (1/n)(ΣYi)²] / [Y'Y − (1/n)(ΣYi)²]
         = (β̂1Σx1y + β̂2Σx2y + β̂3Σx3y) / Σy²
         = 576.89 / 594
         = 0.97

(iv) The estimated relation may be put in the following form:
Ŷ = 134.28 + 0.2063X1 + 0.3309X2 − 0.5572X3
Example 2. The following matrix gives the variances and covariances of the
three variables:

        y       x1      x2
y      7.59    3.12    26.99
x1            29.16    30.80
x2                    133.00

The first row and the first column of the above matrix show Σy², Σx1y and Σx2y;
the remaining elements give Σx1², Σx1x2 and Σx2², where
y = Y − Ȳ, x1 = X1 − X̄1, and x2 = X2 − X̄2.
The above matrix is based on the transformed (deviation-form) model. Using the
values in the matrix we can now estimate the parameters of the original model.
(a) β̂ = (x'x)⁻¹x'y gives

β̂1 = (Σx1yΣx2² − Σx2yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²) = −0.1421
β̂2 = (Σx2yΣx1² − Σx1yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²) = 0.2358

(c) R² = (β̂1Σx1y + β̂2Σx2y) / Σyi²
       = [(−0.1421)(3.12) + (0.2358)(26.99)] / 7.59
⇒ R² = 0.78;   Σei² = (1 − R²)(Σyi²) = 1.6680

σ̂²u = 1.6680/17 = 0.0981
Example 3:
Consider the model: Yi = β0 + β1X1i + β2X2i + Ui
On the basis of the information given below, answer the questions that follow.

ΣX1² = 3200      ΣX1X2 = 4300     ΣX2 = 400
ΣX2² = 7300      ΣX1Y = 8400      ΣX2Y = 13500
ΣY = 800         ΣX1 = 250        n = 25
ΣYi² = 28,000

The slope estimator β̂1 is given by
β̂1 = (Σx1yΣx2² − Σx2yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
Since the x's and y's in the above formula are in deviation form, we have to find
the corresponding deviation forms of the given values.
We know that:
Σx1x2 = ΣX1X2 − nX̄1X̄2 = 4300 − (25)(10)(16) = 300
Σx1y  = ΣX1Y − nX̄1Ȳ   = 8400 − 25(10)(32)   = 400
Σx2y  = ΣX2Y − nX̄2Ȳ   = 13500 − 25(16)(32)  = 700
Σx1²  = ΣX1² − nX̄1²    = 3200 − 25(10)²      = 700
Σx2²  = ΣX2² − nX̄2²    = 7300 − 25(16)²      = 900

Now we can compute the parameters:
β̂1 = (Σx1yΣx2² − Σx2yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
   = [(400)(900) − (700)(300)] / [(900)(700) − (300)²]
   = 0.278
β̂2 = (Σx2yΣx1² − Σx1yΣx1x2) / (Σx1²Σx2² − (Σx1x2)²)
   = [(700)(700) − (400)(300)] / [(900)(700) − (300)²]
   = 0.685

b. var(β̂1) = σ̂²Σx2² / [Σx1²Σx2² − (Σx1x2)²]

where σ̂² = Σei²/(n − k) and k is the number of parameters. In our case k = 3, so
σ̂² = Σei²/(n − 3).

Σy² = ΣYi² − nȲ² = 28,000 − 25(32)² = 2400

Σei² = Σy² − β̂1Σx1y − β̂2Σx2y
     = 2400 − 0.278(400) − 0.685(700)
     = 1809.3

σ̂² = Σei²/(n − 3) = 1809.3/(25 − 3) = 82.24

var(β̂1) = (82.24)(900)/540,000 = 0.137

var(β̂2) = σ̂²Σx1² / [Σx1²Σx2² − (Σx1x2)²]
        = (82.24)(700)/540,000 = 0.1067
c. The computed t-ratio for β̂1 is
t* = β̂1/SE(β̂1) = 0.278/√0.137 = 0.278/0.370 = 0.751.
Since this is well below the critical t-value at the 5% level of significance (about 2.07
with 22 df), we accept H0: the sample is consistent with a population of Y and X1 in
which there is no relationship between Y and X1 (i.e. β1 = 0).

d. R² can easily be computed using the following equation:
R² = ESS/TSS = 1 − RSS/TSS
We know that RSS = Σei², TSS = Σy², and ESS = Σŷ² = β̂1Σx1y + β̂2Σx2y + ...... + β̂kΣxky.
For the two-explanatory-variable model:
R² = 1 − RSS/TSS = 1 − 1809.3/2400 = 0.246
⇒ About 25% of the total variation in Y is explained by the regression line.
Adjusted R² = 1 − [Σei²/(n − k)] / [Σy²/(n − 1)] = 1 − (1 − R²)(n − 1)/(n − k)
            = 1 − (1809.3/22)/(2400/24)
            = 0.178

e. Let's first set up the joint hypothesis
H0: β1 = β2 = 0 against H1: β1 and β2 are not both zero.
The computed F-ratio is
F* = [R²/(k − 1)] / [(1 − R²)/(n − k)] = (0.246/2)/(0.754/22) ≈ 3.59,
which is compared with the critical value of F at the 5% level of significance with
(2, 22) degrees of freedom for the numerator and denominator respectively:
Fc(2,22) = 3.44
Since F* > Fc, the decision rule is to reject H0 and accept H1. We can say that
the model is significant, i.e. the dependent variable is, at least, linearly
related to one of the explanatory variables.
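The arithmetic of Example 3 can be reproduced from its summary statistics as follows (the inputs are exactly the values given in the example):

```python
import numpy as np

# Summary statistics of Example 3, in deviation form
sx1x2, sx1y, sx2y = 300.0, 400.0, 700.0
sx1sq, sx2sq, sy2 = 700.0, 900.0, 2400.0
n, k = 25, 3

d = sx1sq * sx2sq - sx1x2 ** 2                      # 540,000
b1 = (sx1y * sx2sq - sx2y * sx1x2) / d              # approx 0.278
b2 = (sx2y * sx1sq - sx1y * sx1x2) / d              # approx 0.685

rss = sy2 - b1 * sx1y - b2 * sx2y                   # approx 1809.3
sigma2 = rss / (n - k)                              # approx 82.24
var_b1 = sigma2 * sx2sq / d                         # approx 0.137
var_b2 = sigma2 * sx1sq / d                         # approx 0.1067
r2 = 1 - rss / sy2                                  # approx 0.246
f_star = (r2 / (k - 1)) / ((1 - r2) / (n - k))      # approx 3.6

print(round(b1, 3), round(b2, 3), round(var_b1, 3), round(var_b2, 4))
print(round(r2, 3), round(f_star, 2))
```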
Instructions:
Read the following instructions carefully.
Make sure that your exam paper contains 4 pages
The exam has four parts. Attempt
All questions of part one
Only two questions from part two
One question from part three
And the question in part four.
Maximum weight of the exam is 40%
Part One: Attempt all of the following questions (15pts).
1. Discuss briefly the goals of econometrics.
2. A researcher is using data for a sample of 10 observations to estimate the relation
between consumption expenditure and income. Preliminary analysis of the sample data
produces the following:
Σxy = 700,  Σx² = 1000,
ΣX = 100
ΣY = 200
where x = Xi − X̄ and y = Yi − Ȳ.
a. Use the above information to compute OLS estimates of the intercept and slope
coefficients and interpret the result
b. Calculate the variance of the slope parameter
c. Compute the value R2 (coefficient of determination) and interpret the result
d. Compute 95% confidence interval for the slope parameter
e. Test the significance of the slope parameter at the 5% level of significance using the t-test
3. If the model Yi = β0 + β1X1i + β2X2i + Ui is to be estimated from a sample of 20 observations using
the semi-processed data given below in matrix (deviation) form:

(x'x)⁻¹ = | 0.5    0.08 |          x'y = | 100 |
          | 0.08   0.6  |                | 250 |

X̄1 = 10,  X̄2 = 25  and  Ȳ = 30
(x'x)⁻¹ = | 0.1    0.12   0.03 |            | 10,000 |
          | 0.12   0.04   0.02 |      X'Y = | 20,300 |
          | 0.03   0.02   0.08 |            | 10,100 |
                                            | 30,200 |

ΣX1 = 400,  ΣX2 = 200,  and  ΣX3 = 600
where x = Xi − X̄i and y = Yi − Ȳ.
2. In a study of 100 firms, the total cost (C) was assumed to be dependent on the rate of output
(X1) and the rate of absenteeism (X2). The means were C̄ = 6, X̄1 = 3 and X̄2 = 4. The matrix
showing sums of squares and cross products adjusted for the means is:

        c     x1    x2
c      100    50    40
x1      50    50   -70        where xi = Xi − X̄i and c = Ci − C̄
x2      40   -70   900

Estimate the linear relationship between C and the other two variables. (10 points)