Lecture 09 Handouts
Bewley Models
Part 2
Patrick Macnamara
ECON80301: Advanced Macroeconomics
University of Manchester
Fall 2024
Today’s Lecture
• Today, we will continue discussing Bewley models, which feature the following
ingredients:
• Consumers face idiosyncratic labor income uncertainty.
• Incomplete markets – consumers only have access to one risk-free asset,
subject to an exogenous borrowing constraint.
• First, we will review the two key models, Huggett (1993) and Aiyagari (1994),
paying attention to some issues regarding existence and uniqueness.
• Second, we will discuss the overall solution method for these models.
• Third, we will discuss different methods to solve for the invariant distribution of
households in these economies (this is the difficult part).
2 / 49
Overview of the Models
3 / 49
Huggett (1993) and Aiyagari (1994)
• Bellman equation in Huggett (1993):
$$V(a, y) = \max_{a', c} \; u(c) + \beta \sum_{y'} \pi(y' \mid y)\, V(a', y') \quad (1)$$
$$\text{s.t.} \quad c + a' = y + (1 + r)a, \qquad a' \geq \underline{a}$$
Here we approximate the AR(1) process for the labor endowment with a
discrete Markov chain, with grid L = {ℓ_1, . . . , ℓ_n}.
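For concreteness, here is a minimal Matlab sketch of one standard way to build such an approximation (a Tauchen-style discretization; the values of rho, sigma, n, and m below are purely illustrative, not taken from the lecture):

% Sketch: Tauchen-style discretization of y' = rho*y + sigma*eps, eps ~ N(0,1).
rho = 0.9; sigma = 0.2; n = 5; m = 3;     % illustrative parameter values
Phi = @(x) 0.5 * erfc(-x / sqrt(2));      % standard normal CDF
sy = sigma / sqrt(1 - rho^2);             % unconditional standard deviation
ygrid = linspace(-m*sy, m*sy, n);         % grid spans +/- m unconditional sds
step = ygrid(2) - ygrid(1);
Pi = zeros(n, n);                         % Pi(i,j) = pi(y_j | y_i)
for i = 1:n
    for j = 1:n
        lo = (ygrid(j) - rho*ygrid(i) - step/2) / sigma;
        hi = (ygrid(j) - rho*ygrid(i) + step/2) / sigma;
        if j == 1
            Pi(i,j) = Phi(hi);            % everything below first midpoint
        elseif j == n
            Pi(i,j) = 1 - Phi(lo);        % everything above last midpoint
        else
            Pi(i,j) = Phi(hi) - Phi(lo);
        end
    end
end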
4 / 49
Law of Motion for Distribution
• In Huggett (1993), define law of motion for µ:
$$\mu'(B) = \int_{A \times Y} Q(a, y, B)\, d\mu(a, y) \quad \text{for all } B \subset A \times Y \quad (3)$$
$$Q(a, y, \underbrace{A_0 \times Y_0}_{B}) = \sum_{y' \in Y_0} \pi(y' \mid y)\, \mathbf{1}\{g(a, y) \in A_0\} \quad (4)$$
• Essentially, (3) and (4) add up everyone in the current distribution who
transitions to B next period.
5 / 49
Invariant Distribution
• (3) and (4) implicitly define an operator T ∗ , which maps the current
distribution µ to next period’s distribution µ′ (i.e., µ′ = T ∗ (µ)).
6 / 49
Invariant Distribution: Existence
¹ See lecture 5 and Stokey and Lucas, Theorem 12.10.
² See lecture 5. Apply Stokey and Lucas, Theorem 9.14, to establish Q has the Feller property.
7 / 49
Invariant Distribution: Uniqueness
³ See lecture 5 and Stokey and Lucas, Theorem 12.12.
⁴ See lecture 5.
8 / 49
Existence and Uniqueness of Equilibrium
• Equilibrium when
$$A(r) \equiv \int_{A \times Y} g(a, y)\, d\mu(a, y) = 0 \quad \text{(Huggett, 1993)}$$
$$K^s(r) = K^d(r) \quad \text{(Aiyagari, 1994)}$$
⁵ For example, apply Theorem 12.13 in Stokey and Lucas.
9 / 49
Overall Solution Method
10 / 49
Solution Method
• I will first present the general solution method for Huggett (1993).
• I will then show how to modify this algorithm for Aiyagari (1994).
11 / 49
Overall Solution Method, Huggett
Step 1 Start with a guess for the market-clearing r ∈ [−1, 1/β − 1).
From earlier analysis, we know r < 1/β − 1.
We also know A(r) = a̲ (the borrowing limit) when r = −1.
Step 2 Taking as given r, solve for the value function V(a, y) and the associated policy
functions c(a, y) and g(a, y).⁶
Step 3 Given the policy function a′ = g(a, y), solve for the invariant distribution µ
(more on this later).
Step 4 Check whether markets clear:
$$A(r) \equiv \int_{A \times Y} g(a, y \mid r)\, d\mu(a, y \mid r) = 0$$
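If markets do not clear, update r and repeat. Below is a minimal Matlab sketch of this outer loop, assuming A(r) is increasing in r over the bracket; beta is the discount factor, and solve_household, invariant_dist, and aggregate_assets are hypothetical helpers standing in for Steps 2 to 4.

% Sketch: bisection on r, shrinking a bracket around A(r) = 0.
rlo = -0.99;                        % near r = -1, where A(r) < 0
rhi = 1/beta - 1 - 1e-6;            % just below 1/beta - 1
for it = 1:100
    r = (rlo + rhi) / 2;
    g  = solve_household(r);            % Step 2 (hypothetical helper)
    mu = invariant_dist(g, r);          % Step 3 (hypothetical helper)
    A  = aggregate_assets(g, mu, r);    % Step 4 (hypothetical helper)
    if abs(A) < 1e-8, break, end
    if A > 0
        rhi = r;    % too much saving: lower the interest rate
    else
        rlo = r;    % too little saving: raise the interest rate
    end
end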
13 / 49
Step 2: Solve for g(a, y )
[Figure: policy functions g(a, y_L) and g(a, y_H) plotted against assets, together with the 45-degree line; the asset axis runs from a_min to a_max.]
• Notice that the borrowing constraint (a′ ≥ a̲) can bind for low asset values.
• Be sure to set ā high enough so that g(ā, y) < ā for y ∈ {y_L, y_H}.
We don't know in advance what ā should be.
14 / 49
Step 3: Solve for µ
15 / 49
Step 4: Market Clearing
[Figure: the interest rate r (vertical axis) plotted against aggregate saving (horizontal axis); the equilibrium r is where aggregate saving equals zero.]
16 / 49
Overall Solution Method, Aiyagari, 1/2
⁷ See lecture 6.
17 / 49
Overall Solution Method, Aiyagari, 2/2
18 / 49
Solving for the Invariant Distribution
19 / 49
Solving for the Invariant Distribution
• There are three approaches we can use to solve for the invariant distribution.
1. Iterate on Q matrix
2. Approximate CDF of distribution
3. Monte Carlo methods
20 / 49
Method 1: Iterate on Q Matrix
21 / 49
Method 1: Iterate on Q Matrix
• With this approach, we iterate on the transition rule, defined in (3), until
convergence.
• Make a discrete state approximation for assets.
• Assume assets a ∈ {a_1, . . . , a_M}, where a_1 = a̲ and a_M = ā.
• Make sure that the grid is much finer than the grid used to compute the policy
function g(a, y ).
• We will assume that agents can only transition to asset values on this finer grid
(only when computing the distribution).
22 / 49
Representing the Distribution, 1/2
[Figure: the distribution at t and at t + 1, drawn as bars over the asset grid a_1, . . . , a_M and the income states y_1, y_2.]
23 / 49
Representing the Distribution, 2/2
• Represent distribution as M × 2 matrix:
$$\mu = \begin{bmatrix} \mu(a_1, y_1) & \mu(a_1, y_2) \\ \vdots & \vdots \\ \mu(a_M, y_1) & \mu(a_M, y_2) \end{bmatrix}$$
⁸ In memory, the underlying data elements are stored sequentially in column-major order: the first column, followed by the second column, and so on.
24 / 49
Manipulating the Distribution
• We can first flatten the distribution into a 1D vector with M·2 elements, as in the sketch below. When we do this, the underlying order of the elements will not change.
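This flattening is one line in Matlab (mu2d is the M × 2 matrix from the previous slide):

% flatten: stack the two columns into a single (M*2) x 1 vector
mu1d = mu2d(:);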
• We can then reshape it back into a 2D matrix using
% reshape will return an M x 2 matrix
% mu1d must have M*2 elements
mu2d = reshape(mu1d, M, 2);
25 / 49
Constructing the Q Matrix, 1/3
26 / 49
Constructing the Q Matrix, 2/3
• The trick is to assume that agents will transition to one of the two grid points
nearest to g(ai , yj ).
• First find the interval [a_k, a_{k+1}] on the finer grid containing g(a_i, y_j), and let λ be its relative position within the interval, λ = (g(a_i, y_j) − a_k)/(a_{k+1} − a_k). Then:
$$Q_{(i,j),(i',j')} = \begin{cases} \pi(y_{j'} \mid y_j)(1 - \lambda) & \text{if } i' = k \\ \pi(y_{j'} \mid y_j)\,\lambda & \text{if } i' = k + 1 \\ 0 & \text{otherwise} \end{cases}$$
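A minimal Matlab sketch of this construction (agrid is the M × 1 fine grid, gpol(i, j) holds g(a_i, y_j) evaluated on that grid, and Pi(j, jp) = π(y_jp | y_j); all names are assumptions for illustration):

% Sketch: assemble Q as a sparse (M*ny) x (M*ny) matrix, splitting the
% mass for g(a_i, y_j) between the two bracketing grid points.
[M, ny] = size(gpol);
rows = []; cols = []; vals = [];
for j = 1:ny
    for i = 1:M
        g = min(max(gpol(i,j), agrid(1)), agrid(end));  % clamp to the grid
        k = min(find(agrid <= g, 1, 'last'), M - 1);    % interval [a_k, a_{k+1}]
        lam = (g - agrid(k)) / (agrid(k+1) - agrid(k));
        row = (j-1)*M + i;      % state (a_i, y_j) in column-major stacking
        for jp = 1:ny
            rows = [rows; row; row];
            cols = [cols; (jp-1)*M + k; (jp-1)*M + k + 1];
            vals = [vals; Pi(j,jp)*(1 - lam); Pi(j,jp)*lam];
        end
    end
end
Q = sparse(rows, cols, vals, M*ny, M*ny);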
27 / 49
Constructing the Q Matrix, 3/3
28 / 49
Main Iteration
• Given today’s distribution µ, tomorrow’s distribution µ′ is
$$\mu'(a_{i'}, y_{j'}) = \sum_{j=1}^{2} \sum_{i=1}^{M} Q(a_i, y_j, a_{i'}, y_{j'})\, \mu(a_i, y_j) \quad \text{for } i' = 1, \dots, M \text{ and } j' = 1, 2$$
mu1 = Q'*mu0;
⁹ Note we could also solve for the eigenvector of the transposed Q matrix associated with the unit eigenvalue. However, this Q matrix is very big (but also very sparse), so this approach would likely be slower.
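Putting the pieces together, a minimal sketch of the iteration (the uniform initial guess and tolerance are illustrative choices; Q, M, and ny are as in the construction sketch above):

% Sketch: iterate the distribution until convergence.
mu0 = ones(M*ny, 1) / (M*ny);       % uniform initial guess
for it = 1:10000
    mu1 = Q' * mu0;                 % apply the law of motion once
    if max(abs(mu1 - mu0)) < 1e-12, break, end
    mu0 = mu1;
end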
29 / 49
Summary of Method 1
Note that the policy function g(a, y) is defined on a different grid for assets, so
you would need to use linear interpolation here to compute g(a_i, y_j).
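For example (a one-line sketch, assuming the policy was computed on a coarser grid agrid_coarse and is needed on the finer grid agrid_fine):

% evaluate the policy function on the finer distribution grid
gpol_fine = interp1(agrid_coarse, gpol_coarse(:,j), agrid_fine, 'linear');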
30 / 49
Method 2: Approximate CDF of Distribution
31 / 49
Method 2: Approximate CDF of Distribution
• Let Φ(a_i, y_j) be the mass of agents with a ≤ a_i and y = y_j.
$\sum_{j=1}^{2} \Phi(a_i, y_j)$ is the mass of ALL agents with a ≤ a_i.
[Figure: two example panels plotted from a̲ to ā: one showing Φ(a, y_L) and Φ(a, y_H) (rising to 1/3 and 2/3), the other showing the total Φ(a, y_L) + Φ(a, y_H) (rising to 1). Notice the positive mass at a̲.]
32 / 49
Updating the Distribution, 1/2
• How can we compute Φ_{n+1} given Φ_n?
• We need to invert the policy function, exploiting the fact that it is
monotonically increasing.
[Figure: the policy function a′ = g(a, y) for y_L and y_H, with the 45-degree line; for a given a′, the inverses g⁻¹(a′, y_H) and g⁻¹(a′, y_L) are marked on the asset axis between a_min and a_max.]
33 / 49
Updating the Distribution, 2/2
Example: suppose we want to compute Φ_{n+1}(a_{i′}, y_{j′}).
• Today, households with a ≤ g⁻¹(a_{i′}, y_L) and y = y_L will transition to a′ ≤ a_{i′}.
• There are Φ_n(g⁻¹(a_{i′}, y_L), y_L) of these households.
• π(y_{j′} | y_L) of them will also transition to y_{j′}.
• Today, households with a ≤ g⁻¹(a_{i′}, y_H) and y = y_H will transition to a′ ≤ a_{i′}.
• There are Φ_n(g⁻¹(a_{i′}, y_H), y_H) of these households.
• π(y_{j′} | y_H) of them will also transition to y_{j′}.
This implies
$$\Phi_{n+1}(a_{i'}, y_{j'}) = \sum_{j=1}^{2} \pi(y_{j'} \mid y_j)\, \Phi_n\!\left(g^{-1}(a_{i'}, y_j),\, y_j\right)$$
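A minimal Matlab sketch of one update step (Phi0 is the current M × 2 CDF on agrid, gpol(:,j) is the policy for state j, Pi is the transition matrix, and policy_inv is a hypothetical helper sketched on the "Inverting the Policy Function" slide below):

% Sketch: update the CDF using the inverted policy function.
Phi1 = zeros(M, 2);
for jp = 1:2
    for j = 1:2
        for i = 1:M
            ainv = policy_inv(agrid(i), agrid, gpol(:,j)); % g^{-1}(a_i, y_j)
            if isnan(ainv)
                mass = 0;      % no one transitions below this asset level
            else
                mass = interp1(agrid, Phi0(:,j), ainv, 'linear');
            end
            Phi1(i,jp) = Phi1(i,jp) + Pi(j,jp) * mass;
        end
    end
end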
34 / 49
Inverting the Policy Function
[Figure: as before, g(a, y_L) and g(a, y_H) with the 45-degree line; the inverses g⁻¹(a′, y_H) and g⁻¹(a′, y_L) are marked on the asset axis.]
• For a given a′, we would like to solve for the value of a such that g(a, y_j) = a′.
Denote the inverse by g⁻¹(a′, y_j).
• Use linear interpolation to interpolate g(a, y_j).
• Use a root-finding algorithm to solve for the inverse. However, we need to pay
very close attention to detail here (see the next two slides and the sketch below).
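A minimal sketch of this inverse, covering the three cases discussed on the next two slides. Here the "inverse" is taken to be the largest a with g(a, y_j) ≤ a′, which is exactly the point at which Φ_n needs to be evaluated:

% Sketch: invert the (monotone) policy function on the grid.
% gj holds g(a, y_j) evaluated at each point of agrid.
function a = policy_inv(aprime, agrid, gj)
    if aprime < gj(1)
        a = NaN;            % case 3 (high y): no one saves this little
    elseif aprime >= gj(end)
        a = agrid(end);     % case 1: treat the policy as vertical at abar
    else
        k = find(gj <= aprime, 1, 'last');   % case 2: bracket, interpolate
        a = agrid(k) + (aprime - gj(k)) * ...
            (agrid(k+1) - agrid(k)) / (gj(k+1) - gj(k));
    end
end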
35 / 49
Low Endowment Case
[Figure: inverting a′ = g(a, y_L); the asset axis marks a̲, g⁻¹(a̲, y_L), and ā, with regions (1), (2), and (3) labeled along the policy function.]
(1) Case 1: for a′ above g(ā, y_L), pretend the policy function is vertical there, i.e., set the inverse to ā; do not extrapolate.¹⁰
(2) Case 2: in the normal region, solve fun(a) = a′ − g(a, y_L) = 0 with a root finder.
(3) Case 3: for a′ = a̲, where the borrowing constraint binds, compute the point g⁻¹(a̲, y_L), the largest a with g(a, y_L) = a̲.
¹⁰ We could extrapolate the policy function, but that's not necessary.
36 / 49
High Endowment Case
[Figure: inverting a′ = g(a, y_H); the asset axis runs from a̲ to ā, with regions (1), (2), and (3) labeled along the policy function.]
(1) Case 1: again, for a′ above g(ā, y_H), pretend the policy function is vertical; do not extrapolate.
(2) Case 2: the normal region; solve for the inverse with a root finder as before.
(3) Case 3: for a′ below g(a̲, y_H), no inverse exists. It is not possible for anyone to transition to an asset level below g(a̲, y_H); that is the lowest level chosen.
37 / 49
Summary of Method 2
[Figure: an initial guess for the CDFs, Φ₀(a, y_L) (rising to 1/3) and Φ₀(a, y_H) (rising to 2/3), plotted from a̲ to ā.]
38 / 49
Method 3: Monte Carlo Methods
We can use the following procedure to simulate next period’s endowment for all
households i = 1, . . . , N.
Step 1 For each household i = 1, . . . , N, draw u_i from a uniform distribution over
(0, 1). In Matlab, we can use rand.
Step 2 If u_i ≤ π(y_L | y_i), set y_i′ = y_L. Otherwise, set y_i′ = y_H.
[Diagram: the unit interval split at π(y_L | y_i); draws with u_i ≤ π(y_L | y_i) give y′ = y_L, the rest give y′ = y_H.]
This can easily be generalized to Markov processes with more than two states
(see next slide).
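A minimal Matlab sketch (ycur is the N × 1 vector of current states, coded 1 for low and 2 for high, and Pi(j, jp) = π(y_jp | y_j)):

% Sketch: simulate next period's endowment state for all N households.
u = rand(N, 1);                 % Step 1: uniform draws on (0,1)
ynext = ones(N, 1);             % default to the low state
ynext(u > Pi(ycur, 1)) = 2;     % Step 2: u_i > pi(y_L | y_i) => high state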
41 / 49
Markov Chain with More Than 2 States
[Diagram: the unit interval partitioned at the cumulative transition probabilities; the segment containing u_i determines whether y′ = y_1, y_2, y_3, or y_4.]
• For household i, draw u_i from a uniform distribution over (0, 1), like before.
• If the household's current state is y_i, set next period's state to be the y_k such
that
$$\sum_{j=1}^{k-1} \pi(y_j \mid y_i) \;<\; u_i \;\le\; \sum_{j=1}^{k} \pi(y_j \mid y_i)$$
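A sketch of this rule in Matlab, using row-wise cumulative probabilities:

% Sketch: simulate a K-state Markov chain for N households.
cumPi = cumsum(Pi, 2);          % cumPi(i,k) = sum_{j<=k} pi(y_j | y_i)
u = rand(N, 1);
ynext = zeros(N, 1);
for i = 1:N
    ynext(i) = find(u(i) <= cumPi(ycur(i), :), 1, 'first');
end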
42 / 49
Testing for Convergence, 1/4
[Figure: mean assets in the Monte Carlo simulation over 250 iterations; the series settles down but keeps fluctuating.]
• This shows mean assets (i.e., $m = \frac{1}{N}\sum_{i=1}^{N} a_i$) in the MC simulation, over time.
• While the mean does converge, the simulation introduces a lot of error.
• If we test for convergence using the usual approach (e.g., |m_t − m_{t−1}| < ε),
we will likely never achieve the desired tolerance.
44 / 49
Testing for Convergence, 3/4
How can we test for convergence when Monte Carlo simulation introduces so much
randomness?
Keep track of the minimum error achieved and stop when the error hasn’t improved
for the last T0 iterations (e.g., T0 = 150).
(1) At each iteration, compute the error, e.g., e_t = max_j |m_{j,t} − m_{j,t−1}|.
(2) Keep track of the minimum error achieved so far, e_min.¹³
(3) Stop iterating if the simulation hasn't improved upon the best error achieved for the
last T0 iterations.
¹³ In your code, you don't have to store the whole history of errors, e_t. Initialize e_min to ∞ and on each iteration, update e_min = min(e_t, e_min).
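In code, the stopping rule might look like this (a sketch; maxit, m_now, and m_prev are placeholders for the iteration cap and whatever simulated moments you track):

% Sketch: stop when the best error has not improved for T0 iterations.
T0 = 150; emin = inf; last_improve = 0;
for t = 1:maxit
    % ... one simulation step, updating m_prev and m_now ...
    et = max(abs(m_now - m_prev));          % error this iteration
    if et < emin
        emin = et; last_improve = t;        % new best error
    end
    if t - last_improve >= T0, break, end   % no improvement for T0 iterations
end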
45 / 49
Testing for Convergence, 4/4
[Figure: the Monte Carlo simulation error and the running minimum error over 250 iterations; both fall quickly and then flatten out.]
• This is an example of the error (e_t = max_j |m_{j,t} − m_{j,t−1}|) from a Monte Carlo
simulation.
• In this case, the iteration stops because the algorithm did not improve upon the
error for the last 150 iterations.
46 / 49
Comparing the Three Methods
• The Q matrix and CDF methods will both be accurate. However, they can be
tricky to implement (especially the CDF method).
• The randomness from Monte Carlo methods will introduce some error into the
calculated distributions (but this is manageable).
• However, in more complicated models, the first two methods will become
infeasible. Monte Carlo will always work (and will always be easy to code).
47 / 49
Resulting Invariant Distribution
[Figure: the invariant distribution over (a, y) computed three ways: iterating on the Q matrix, the CDF approximation method, and the Monte Carlo method. Each panel plots the distribution for y_L and y_H against assets from a̲ to ā.]
• All methods produce similar distributions, but notice that the Monte Carlo
simulation introduces some error.
48 / 49
What’s Next?
49 / 49
Equilibrium Definition, Huggett (Review)
A stationary recursive equilibrium in Huggett (1993) consists of:
(i) the value function V (a, y ),
(ii) the policy functions c(a, y ) and g(a, y ),
(iii) the interest rate r , and
(iv) the distribution µ
such that
(1) Given r , V (a, y ) solves the Bellman equation given in (1) and c(a, y ), g(a, y )
are the associated policy functions.
(2) Markets clear:
$$A(r) = \int_{A \times Y} g(a, y)\, d\mu(a, y) = 0$$
1/2
Equilibrium Definition, Aiyagari (Review)
A stationary recursive equilibrium in Aiyagari (1994) consists of:
(a) the value function V (a, ℓ),
(b) the policy functions c(a, ℓ) and a′ (a, ℓ),
(c) the prices w and r , and
(d) the distribution µ over (a, ℓ),
such that
(1) Given w and r , V (a, ℓ) solves the Bellman equation in (2), and c(a, ℓ) and
a′ (a, ℓ) are the associated policy functions.
(2) Labor and capital markets clear:
$$K^d(r) = K^s(r), \qquad w = (1 - \alpha)\left(\frac{K^d(r)}{\bar{L}}\right)^{\alpha}$$
2/2