
Quantitative Methods in Economics Functional Form: Maximilian Kasy

This document discusses functional forms used in linear regression analysis. It provides examples of quadratic, interaction, discrete regressor, and polynomial functional forms. It also discusses interpreting coefficients from these forms and converting predictions from log units to level units. The document uses Mincer's earnings equation as an example polynomial functional form to illustrate returns to education and experience.


Functional form

Quantitative Methods in Economics

Maximilian Kasy

Harvard University, fall 2016

Roadmap, Part I

1. Linear predictors and least squares regression
2. Conditional expectations
3. Some functional forms for linear regression
4. Regression with controls and residual regression
5. Panel data and generalized least squares

Takeaways for these slides

Functional forms:
- Quadratic: decreasing or increasing returns
- Interactions: returns vary with covariates
- Discrete regressors, dummy variables, and saturated regressions
- Polynomials
- Linear in logarithms: elasticities
- Justification via the Mincer model

- Quadratic polynomial:
  Y = earnings, Z = experience; X1 = Z, X2 = Z².

  E∗(Y | 1, X1, X2) = β0 + β1 X1 + β2 X2

  Evaluating this at Z = c gives β0 + β1 c + β2 c².

- Interactions:
  Z1 = experience, Z2 = education; X1 = Z1, X2 = Z2, X3 = Z1 · Z2.

  E∗(Y | 1, X1, X2, X3) = β0 + β1 X1 + β2 X2 + β3 X3

  Evaluating this at Z1 = c, Z2 = d gives β0 + β1 c + β2 d + β3 c · d.
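These two predictors are easy to evaluate in code. Below is a minimal sketch with made-up coefficient values (they are illustrative, not estimates from these slides), showing how the quadratic term produces decreasing returns to experience and how the interaction makes the return to education depend on experience:

```python
# Hypothetical coefficient values, for illustration only
b0, b1, b2 = 1.0, 0.08, -0.002           # quadratic in experience Z
g0, g1, g2, g3 = 1.0, 0.05, 0.10, 0.004  # interaction of experience and education

def quad_predictor(z):
    """E*(Y | 1, Z, Z^2) evaluated at Z = z."""
    return b0 + b1 * z + b2 * z**2

def quad_marginal(z):
    """Derivative in z: the predicted return to one more year of experience."""
    return b1 + 2 * b2 * z

def inter_marginal_educ(z1):
    """Return to education; it varies with experience z1 via the interaction."""
    return g2 + g3 * z1

# Decreasing returns: the marginal effect shrinks as experience grows
print(quad_marginal(5), quad_marginal(25))
# With the interaction, the return to education depends on experience
print(inter_marginal_educ(0), inter_marginal_educ(20))
```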


Questions for you

- Interpret these two functional forms.
- What happens as education is increased?
- How does that depend on the level of education we start with?
- How does that depend on experience?

- Recall the conditional expectation: the solution to

  min_g E[(Y − g(Z))²]

  is the regression function

  r(z) = E(Y | Z = z).

- Orthogonality condition: consider any function h(·) and define

  U = Y − r(Z).

- Then U ⊥ h(Z), i.e. E[U h(Z)] = 0, and in particular

  E∗(Y | r(Z), h(Z)) = β1 r(Z) + β2 h(Z) = r(Z),

  i.e. β1 = 1 and β2 = 0.

- Put differently: if E(Y | X = x) is linear in x, then

  E(Y | X = x) = E∗(Y | X = x).
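The orthogonality claim can be checked by simulation: regressing Y on r(Z) and an arbitrary h(Z) (no intercept, as on the slide) should recover coefficients close to (1, 0). A sketch with an assumed r(Z) = Z² and h(Z) = sin(Z), neither of which is from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
z = rng.uniform(-2, 2, n)
r = z**2                      # true regression function r(Z) = E(Y | Z)
u = rng.normal(0, 1, n)       # noise with E(U | Z) = 0, independent of Z
y = r + u

# Regress Y on r(Z) and an arbitrary h(Z); no intercept, as on the slide
h = np.sin(z)
X = np.column_stack([r, h])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to [1, 0]: the fitted predictor is just r(Z)
```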

Discrete regressors

- Assume

  Z1 ∈ {λ1, …, λJ},  Z2 ∈ {δ1, …, δK}.

- Dummy variables:

  Xjk = 1(Z1 = λj, Z2 = δk),

  i.e. Xjk = 1 if Z1 = λj and Z2 = δk, and Xjk = 0 otherwise.

- Claim: E(Y | Z1, Z2) = E∗(Y | X11, …, XJ1, …, X1K, …, XJK)

Questions for you


Prove this.


Solution:

- Any function g(Z1, Z2) can be written as

  g(Z1, Z2) = ∑_{j=1}^{J} ∑_{k=1}^{K} γjk Xjk

  with γjk = g(λj, δk).

- Thus

  E(Y | Z1, Z2) = ∑_{j=1}^{J} ∑_{k=1}^{K} βjk Xjk,

  where

  βjk = E(Y | Z1 = λj, Z2 = δk).

- Since E(Y | X = x) is linear in x, we get

  E(Y | Z1, Z2) = E(Y | X) = E∗(Y | X).


Sample Analog

- Data: (yi, zi1, zi2), i = 1, …, n.
- Dummy variables: xi,jk = 1(zi1 = λj, zi2 = δk), stacked into vectors

  y = (y1, …, yn)′,  xjk = (x1,jk, …, xn,jk)′.

- Least squares:

  min_b ∑_{i=1}^{n} ( yi − ∑_{j=1}^{J} ∑_{k=1}^{K} bjk xi,jk )².

- This gives:

  bjk = ∑_{i=1}^{n} yi xi,jk / ∑_{i=1}^{n} xi,jk,

  the sample mean of y within the cell (z1 = λj, z2 = δk).
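A quick numerical check of this result: least-squares coefficients on a saturated set of cell dummies coincide with the within-cell sample means of y. The category values and data-generating process below are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
z1 = rng.choice([0, 1, 2], n)   # three categories for Z1 (the lambda_j)
z2 = rng.choice([0, 1], n)      # two categories for Z2 (the delta_k)
y = 1.0 + 0.5 * z1 + 2.0 * z2 + rng.normal(0, 1, n)

# Saturated dummy design: one column per (lambda_j, delta_k) cell
cells = [(j, k) for j in (0, 1, 2) for k in (0, 1)]
X = np.column_stack([(z1 == j) & (z2 == k) for j, k in cells]).astype(float)
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# The least-squares coefficients equal the within-cell means of y
cell_means = np.array([y[(z1 == j) & (z2 == k)].mean() for j, k in cells])
print(np.max(np.abs(b - cell_means)))  # zero up to floating-point error
```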

Polynomial regressors

- Assume

  E(Y | Z1 = s, Z2 = t) ≈ β0 + β1 s + β2 s² + β3 t·s + β4 t + β5 t².

- Example: Jacob Mincer, Schooling, Experience and Earnings, 1974, Table 5.1; 1-in-1000 sample of the 1960 census; 1959 annual earnings; n = 31,093;
- y = log(earnings), s = years of schooling, t = years of work experience;

  ŷ = 4.87 + .255 s − .0029 s² − .0043 t·s + .148 t − .0018 t².


Predictive Effect

- Returns to college:

  E(Y | Z1 = 16, Z2 = t) − E(Y | Z1 = 12, Z2 = t) ≈ β1 · 4 + β2 (16² − 12²) + β3 · 4 · t.

- Returns to high school:

  E(Y | Z1 = 12, Z2 = t) − E(Y | Z1 = 8, Z2 = t) ≈ β1 · 4 + β2 (12² − 8²) + β3 · 4 · t.


Plugging in the estimates

Experience   Returns to college   Returns to high school
     0              .70                   .79
    10              .52                   .62
    20              .35                   .44

Questions for you


Verify this.
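One way to verify the table: plug the estimated coefficients from Mincer's equation into the return formulas from the previous slide:

```python
# Coefficients from the estimated Mincer equation:
# yhat = 4.87 + .255 s - .0029 s^2 - .0043 t*s + .148 t - .0018 t^2
b1, b2, b3 = 0.255, -0.0029, -0.0043

def returns(s_hi, s_lo, t):
    """Predicted log-earnings gap between schooling s_hi and s_lo at experience t."""
    return b1 * (s_hi - s_lo) + b2 * (s_hi**2 - s_lo**2) + b3 * (s_hi - s_lo) * t

for t in (0, 10, 20):
    print(t, round(returns(16, 12, t), 2), round(returns(12, 8, t), 2))
# reproduces the table rows: (.70, .79), (.52, .62), (.35, .44)
```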


From predicting log W to predicting W

- Suppose E(log W | Z) is a linear function:

  E(log W | Z) = β0 + β1 Z.

- Define U = log W − β0 − β1 Z, so E(U | Z) = 0.
- Then

  log W = β0 + β1 Z + U
  ⇒ W = exp(β0 + β1 Z) · exp(U).

- If U and Z are independent, then E[exp(U) | Z] = E[exp(U)], so

  E(W | Z = d) / E(W | Z = c) = exp[β1 (d − c)] ≈ 1 + β1 (d − c),

  and therefore

  100 · [E(W | Z = d) / E(W | Z = c) − 1] ≈ 100 · β1 (d − c).
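A small numerical illustration of how the approximation exp(β1(d − c)) ≈ 1 + β1(d − c) behaves as the log gap grows (the gap values below are arbitrary):

```python
import math

# Accuracy of exp(x) ~ 1 + x, the basis for reading 100*beta1 as a percent effect
for gap in (0.01, 0.05, 0.10, 0.35, 0.70):
    exact = math.exp(gap) - 1   # exact relative change in E(W | Z)
    approx = gap                # the "100 * beta1 percent" shortcut
    print(gap, round(100 * exact, 1), round(100 * approx, 1))
```

The approximation is accurate for small log differences and increasingly understates the true relative change as the gap grows.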

Mincer model

- Compound interest (Δ = fraction of one year):

  $1 → $(1 + rΔ) → $(1 + rΔ)² → $(1 + rΔ)³ → …

- Annual return: (1 + rΔ)^(1/Δ).
- Using

  lim_{x→0} [f(1 + x) − f(1)] / x = f′(1),  so  lim_{x→0} log(1 + x) / x = 1,

  we get

  lim_{Δ→0} log[(1 + rΔ)^(1/Δ)] = r · lim_{Δ→0} log(1 + rΔ) / (rΔ) = r.

- Thus

  lim_{Δ→0} (1 + rΔ)^(1/Δ) = exp(r) ≈ 1 + r.

- The approximation exp(r) ≈ 1 + r works for small r, less so for large r:

  exp(.06) = 1.062;  exp(.35) = 1.42;  exp(.70) = 2.01.
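The continuous-compounding limit can be confirmed numerically by shrinking the compounding interval Δ:

```python
import math

r = 0.06
# Annual return from compounding at rate r over ever-shorter intervals delta
for delta in (1.0, 0.25, 1 / 12, 1 / 365):
    print(delta, (1 + r * delta) ** (1 / delta))
print(math.exp(r))  # the limit as delta -> 0; approximately 1 + r for small r
```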



- PV(S) = present value at t = 0 of earning 0 while in school for an additional S years, and then earning W(S) for a very long time:

  PV(S) = ∫_S^∞ W(S) exp(−rt) dt = W(S) · exp(−rS) / r.

- Returns such that students are indifferent about dropping out:

  PV(S) = PV(0) ⇒ W(S) · exp(−rS) / r = W(0) / r.

- Thus:

  log(W(S)) = log(W(0)) + rS.

- Linear predictor:

  E∗(Y | 1, S) = γ0 + (r + γ1) S,

  where Y = log(W(S)), A = log(W(0)), and

  E∗(A | 1, S) = γ0 + γ1 S.
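A numerical sanity check of the indifference condition, with hypothetical values of r and W(0): when log W(S) = log W(0) + rS, the present value is the same for every schooling choice S:

```python
import math

# Hypothetical parameter values, for illustration only
r, w0 = 0.06, 30_000.0

def pv(s):
    """PV of earning 0 until time s, then W(s) forever, discounted at rate r."""
    w_s = w0 * math.exp(r * s)          # wage implied by the log-linear schedule
    return w_s * math.exp(-r * s) / r   # closed form of the integral on the slide

print(pv(0), pv(12), pv(16))  # all equal: students are indifferent across S
```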
