
Calculus

Daniel N. Hauser

This note summarizes some of the key concepts from the second set
of lectures.

Derivatives
Margin note: Economic "applications" of derivatives (beyond finding maxes):

1. Comparative statics: How do endogenous choices respond to small changes in exogenous parameters? How much does a small change in price change demand?

2. Dynamics: How does an economy change over time? Is an economy going to approach a specific steady state?

We have a class of functions that are easy to work with and that we understand very well (linear functions), and another, larger class of functions that are at least in principle "well-behaved" (continuous functions). Can we leverage the tractability of the former to help us analyze the latter? The derivative, when it exists, gives us a tool that does exactly that. The gradient of a function, denoted

∇f(x) = ( ∂f(x)/∂x1 , ∂f(x)/∂x2 , ... , ∂f(x)/∂xn )^T,

linearizes f at x, in the sense that for any x̂

f(x̂) = f(x) + ∇f(x) · (x̂ − x) + o(x̂ − x).

So for values close to x, there is a linear function that approximates f that has "slope" ∇f(x).¹ ² For a twice differentiable f : Rm → R we can write the second derivative matrix/Hessian

D²f(x) = [ ∂²f(x)/∂x1∂x1   ···   ∂²f(x)/∂x1∂xm
                 ⋮            ⋱           ⋮
           ∂²f(x)/∂xm∂x1   ···   ∂²f(x)/∂xm∂xm ].

For most functions this matrix is symmetric.³ We can use this to construct even better approximations using the Taylor expansion: for any twice continuously differentiable function,

f(x̂) = f(x) + (x̂ − x)^T ∇f(x) + (1/2)(x̂ − x)^T D²f(x)(x̂ − x) + o((x̂ − x)²).

¹ Similarly, the Taylor expansion lets us approximate a function with a polynomial by using higher-order derivatives. The higher the degree of the polynomial approximation, the faster the approximation error vanishes.

² For a function f : Rm → Rn, we denote the derivative matrix as

Df(x) = [ ∂f1(x)/∂x1   ···   ∂f1(x)/∂xm
               ⋮          ⋱          ⋮
          ∂fn(x)/∂x1   ···   ∂fn(x)/∂xm ].

Note that for a function that maps to R, Df(x)^T = ∇f(x). In the next section, I use Dx to denote the derivative matrix with respect to x (which may be a vector).

³ Young's theorem.
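A quick numerical sanity check of the linearization (a minimal sketch; the function f(x1, x2) = x1²·x2, the base point, and the step direction are my own illustrative choices, not from the lectures). The error of the first-order approximation should shrink faster than the step size, consistent with the o(x̂ − x) term:

```python
import numpy as np

# Illustrative example (not from the note): f(x1, x2) = x1^2 * x2
def f(x):
    return x[0] ** 2 * x[1]

def grad_f(x):
    # gradient computed by hand: (2*x1*x2, x1^2)
    return np.array([2 * x[0] * x[1], x[0] ** 2])

x = np.array([1.0, 2.0])
direction = np.array([1.0, -1.0])

for h in [1e-1, 1e-2, 1e-3]:
    x_hat = x + h * direction
    linear = f(x) + grad_f(x) @ (x_hat - x)   # f(x) + ∇f(x)·(x̂ − x)
    error = abs(f(x_hat) - linear)
    # error/h should go to 0 as h shrinks, i.e. error = o(step size)
    print(f"h = {h:.0e}   error = {error:.2e}   error/h = {error / h:.2e}")
```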

Unconstrained Optimization

The derivative gives us a tool to find local maximizers and minimizers.

Theorem 1 (First order conditions). The gradient at an interior local maximum is 0.

So, for any maximum x* that occurs on the interior of the domain, a necessary condition is that ∇f(x*) = 0. This provides us with a set of candidate maximizers, called critical points. First order conditions are necessary, but not sufficient, for a maximizer.⁴

⁴ Clearly, this must also hold at minimizers. This can also hold at points that are neither, e.g. x = 0 for the function f(x) = x³.

The second derivative gives us a second test to help narrow our search:

Theorem 2 (Second order conditions). At any critical point x* of f, if D²f(x*) is negative definite then x* is a local maximum. If x* is a local maximum then D²f(x*) is negative semidefinite.

Margin note: This gives us a good way to look for maximizers of a function f if it is twice differentiable.

1. First, find all points where ∇f(x) = 0.

2. Then look at D²f at those points:

   (a) If D²f(x) is not negative semidefinite, it's not a max.

   (b) If D²f(x) is negative definite, it is a (local) max.

   (c) If D²f(x) is negative semidefinite, it could be a local max.

3. To find the global max, compare the values of f at the points you found in steps 2b and 2c and the points on the boundary/where the function wasn't differentiable.
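A minimal sketch of this recipe on a made-up example (the objective f(x1, x2) = −x1² − x2² + x1·x2 is my own choice, not from the lectures), using sympy to find the critical point and check definiteness of the Hessian via its eigenvalues:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
f = -x1**2 - x2**2 + x1 * x2          # illustrative objective (my choice)

grad = [sp.diff(f, v) for v in (x1, x2)]
critical_points = sp.solve(grad, [x1, x2], dict=True)   # step 1: solve ∇f = 0
print(critical_points)                                   # [{x1: 0, x2: 0}]

H = sp.hessian(f, (x1, x2))                              # step 2: look at D²f
for point in critical_points:
    eigs = list(H.subs(point).eigenvals().keys())
    # all eigenvalues negative  =>  negative definite  =>  local max
    print(point, eigs, all(e < 0 for e in eigs))
```

Here the eigenvalues are −1 and −3, so the Hessian is negative definite and the critical point at the origin is a local maximum.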
The Implicit Function Theorem

Consider functions f : Rn+m → Rn. We are often going to be confronted with problems where, for any given vector of exogenous variables x ∈ Rm, we know the corresponding endogenous variables y ∈ Rn solve an equation of the form⁵

f(y, x) = 0.

⁵ For instance, the first order conditions of the consumer problem tell us that if there are two goods x1, x2 being sold at prices p1, p2, then at the optimum the consumer sets the marginal rate of substitution equal to the ratio of the prices, i.e.

u1(x1, x2)/u2(x1, x2) − p1/p2 = 0.

The prices are exogenous, while the amounts of goods 1 and 2 consumed are endogenous.

A natural question to ask would be "How does y change as x varies?" If there were a nice, differentiable y(x) that solved this equation at each x, then we'd know exactly what to do. Using the chain rule,

Dx f(y(x), x) = Dx 0
Dy f · Dy + Dx f = 0
Dy = −(Dy f)^{-1} Dx f.

But for most problems it's not clear that such a y exists, much less that it is differentiable. The implicit function theorem tells us when we can do something like this:

Theorem 3 (Implicit Function Theorem). Let (x*, y*) solve f(y*, x*) = 0. Then if Dy f(y*, x*) has full rank, in a neighborhood of x* there exists a unique differentiable y s.t. f(y(x), x) = 0 and y(x*) = y*. Moreover, in this neighborhood

Dy = −(Dy f)^{-1} Dx f.
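To illustrate the formula Dy = −(Dy f)^{-1} Dx f in the scalar case, here is a small comparative-statics check (a sketch; the equation f(y, x) = y³ + x·y − 1 = 0 and the point (x*, y*) = (0, 1) are my own illustrative choices, not from the lectures). The implicit derivative is compared against a finite-difference estimate of y(x):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = y**3 + x * y - 1          # illustrative f(y, x) = 0 (my choice)

# Implicit function theorem (scalar case): dy/dx = -(df/dy)^(-1) * (df/dx)
dy_dx = -sp.diff(f, x) / sp.diff(f, y)

# A point (x*, y*) on the curve: x* = 0 gives y* = 1, since 1^3 - 1 = 0.
x_star, y_star = 0, 1
print(dy_dx.subs({x: x_star, y: y_star}))    # -1/3

# Finite-difference check: solve f(y, x* + h) = 0 numerically and compare.
h = 1e-6
y_h = sp.nsolve(f.subs(x, x_star + h), y, y_star)
print((y_h - y_star) / h)                    # ≈ -0.333...
```

Note that Dy f(y*, x*) = 3y*² + x* = 3 ≠ 0, so the full-rank condition of the theorem holds at this point.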

Optimization with Equality Constraints

Consider a maximization problem of the form

max_{x ∈ Rm} f(x)
s.t. g(x) = 0

for f : Rm → R, g : Rm → Rn. The extreme value theorem tells us that for continuous f and nice enough g, this has a solution. It would be nice to have something like first order conditions to help us find it. In class, we used the implicit function theorem to show that at any maximum x*, if Dg has full rank then

∇f(x*) = λ^T Dg(x*)

for some λ ∈ Rn. The λ's are called Lagrange multipliers.⁶ So, as with unconstrained optimization problems, we can turn the problem of finding a maximum into the problem of solving a system of (potentially non-linear) equations, with the additional headache that we've added an "extra" variable for each constraint (the λ).

Margin note: Maximizing utility subject to spending your entire budget,

max_{x ∈ Rm} u(x)
s.t. p · x − m = 0,

is probably the most familiar of these.

⁶ There are second order conditions for these, but they are annoying, so we aren't going to talk about them. Appealing to the concavity of the objective and the convexity of the feasible set is mostly what you'll do in the first year. More on this in the next sets of lectures.
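As a concrete sketch of solving this system (the utility u(x1, x2) = x1·x2 and the numeric prices and income are my own illustrative choices, not from the lectures), the two first order conditions ∇u(x*) = λ Dg(x*) together with the budget constraint pin down the consumer's demand:

```python
import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lambda", positive=True)
p1, p2, m = 2, 1, 12             # illustrative prices and income (my choice)

u = x1 * x2                      # illustrative utility
g = p1 * x1 + p2 * x2 - m        # budget constraint g(x) = 0

# First order conditions: ∇u(x) = λ ∇g(x), together with g(x) = 0.
equations = [
    sp.Eq(sp.diff(u, x1), lam * sp.diff(g, x1)),
    sp.Eq(sp.diff(u, x2), lam * sp.diff(g, x2)),
    sp.Eq(g, 0),
]
solution = sp.solve(equations, [x1, x2, lam], dict=True)
print(solution)   # x1 = 3, x2 = 6, λ = 3: half of income is spent on each good
```

With one constraint we have added one extra unknown (λ) and one extra equation, so the system remains square, exactly as in the unconstrained case.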
