
Lecture 9:

Bewley Models
Part 2

Patrick Macnamara
ECON80301: Advanced Macroeconomics
University of Manchester

Fall 2024
Today’s Lecture

• Today, we will continue discussing Bewley models, which feature the following
ingredients:
• Consumers face idiosyncratic labor income uncertainty.
• Incomplete markets – consumers only have access to one risk-free asset,
subject to an exogenous borrowing constraint.

• First, we will review the two key models, Huggett (1993) and Aiyagari (1994),
paying attention to some issues regarding existence and uniqueness.
• Second, we will discuss the overall solution method for these models.
• Third, we will discuss different methods to solve for the invariant distribution of
households in these economies (this is the difficult part).

2 / 49
Overview of the Models

3 / 49
Huggett (1993) and Aiyagari (1994)
• Bellman equation in Huggett (1993):

      V(a, y) = max_{a′, c} { u(c) + β Σ_{y′} π(y′|y) V(a′, y′) }     (1)

      s.t.  c + a′ = y + (1 + r)a
            a′ ≥ a̲

• Bellman equation in Aiyagari (1994):

      V(a, ℓ) = max_{a′, c} { u(c) + β Σ_{ℓ′} π(ℓ′|ℓ) V(a′, ℓ′) }     (2)

      s.t.  c + a′ = wℓ + (1 + r)a
            a′ ≥ a̲

  Here we approximate the AR(1) process for the labor endowment with a
  discrete Markov chain, with grid L = {ℓ_1, . . . , ℓ_n}.
4 / 49
Law of Motion for Distribution
• In Huggett (1993), define the law of motion for µ:

      µ′(B) = ∫_{A×Y} Q(a, y, B) dµ(a, y)   for all B ⊂ A × Y     (3)

      Q(a, y, A_0 × Y_0) = Σ_{y′ ∈ Y_0} π(y′|y) 1{g(a, y) ∈ A_0},   where B = A_0 × Y_0,

  A = [a̲, ā], Y = {y_L, y_H}, and ā is the upper bound for assets.

• In Aiyagari (1994), define the law of motion for µ:

      µ′(B) = ∫_{A×L} Q(a, ℓ, B) dµ(a, ℓ)   for all B ⊂ A × L     (4)

      Q(a, ℓ, A_0 × L_0) = Σ_{ℓ′ ∈ L_0} π(ℓ′|ℓ) 1{g(a, ℓ) ∈ A_0},   where B = A_0 × L_0

• Essentially, (3) and (4) add up everyone in the current distribution who
transitions to B next period.
5 / 49
Invariant Distribution

• (3) and (4) implicitly define an operator T ∗ , which maps the current
distribution µ to next period’s distribution µ′ (i.e., µ′ = T ∗ (µ)).

• We’re interested in the stationary distribution (i.e., fixed point of T ∗ ).

• How can we guarantee existence and uniqueness of a stationary distribution?

6 / 49
Invariant Distribution: Existence

• An invariant distribution exists if we can establish two properties:¹

  1. compactness of the state space,
  2. Q has the Feller property.
• Compactness holds when β(1 + r) < 1 and preferences display decreasing absolute
risk aversion (we saw this last week).
• The Feller property of Q requires that the associated Markov operator map the set
of continuous and bounded functions into itself.²

¹ See Lecture 5 and Stokey and Lucas, Theorem 12.10.
² See Lecture 5. Apply Stokey and Lucas, Theorem 9.14, to establish that Q has the Feller property.
7 / 49
Invariant Distribution: Uniqueness

• Uniqueness follows if we can further establish two properties:³

  1. Q is monotone, and
  2. Q satisfies the monotone mixing condition.
• Q is monotone:
  • For every increasing function f, the expected value of f under Q is also increasing.⁴
  • For example, Huggett assumed π(y_H|y_H) ≥ π(y_H|y_L) to get this.
• Q satisfies the monotone mixing condition:
  • Requires that agents can move to any part of the distribution from anywhere.
  • e.g., the poorest agent can become the richest agent with positive probability, as
    long as enough time passes.
  • e.g., the richest agent can become the poorest agent with positive probability, as
    long as enough time passes.

³ See Lecture 5 and Stokey and Lucas, Theorem 12.12.
⁴ See Lecture 5.
8 / 49
Existence and Uniqueness of Equilibrium

• We can establish existence of a stationary equilibrium.⁵
  But we don’t know whether it’s unique. (See the equilibrium definitions for
  Huggett and Aiyagari in the appendix.)

• Equilibrium when

      A(r) ≡ ∫_{A×Y} g(a, y) dµ(a, y) = 0     (Huggett, 1993)

      K^s(r) = K^d(r)     (Aiyagari, 1994)

• There are no results on the monotonicity of A(r) or K^s(r) with respect to r,
  so uniqueness is not guaranteed.
  Intuition: a higher r has both an income and a substitution effect on savings;
  the relative dominance of the two could switch, so the savings policy function
  may not be monotone.
• In practice, this isn’t much of an issue.

⁵ For example, apply Theorem 12.13 in Stokey and Lucas.
9 / 49
Overall Solution Method

10 / 49
Solution Method

• I will first present the general solution method for Huggett (1993).

• Nevertheless, the basic algorithm applies more generally.

• I will then show how to modify this algorithm for Aiyagari (1994).

11 / 49
Overall Solution Method, Huggett
Step 1 Start with a guess for the market clearing r ∈ [−1, 1/β − 1).
From earlier analysis, we know r < 1/β − 1.
We also know A(r) = a̲ when r = −1.
Step 2 Taking as given r , solve for the value function V (a, y ) and the associated policy
functions c(a, y ) and g(a, y ).6
Step 3 Given the policy function a′ = g(a, y ), solve for the invariant distribution µ
(more on this later).
Step 4 Check whether markets clear:

      A(r) ≡ ∫_{A×Y} g(a, y | r) dµ(a, y | r) = 0

If not, update your guess for r and return to Step 2.


⁶ Notice that we are exploiting the fact that r is constant in a stationary equilibrium. We solve
for the value function V(a, y) taking r as given, and then later check whether that r clears the
market.
market.
12 / 49
Step 2: Solve for V (a, y )

• Take as given r and solve for V (a, y ).


• We know how to do this by now.
• You can adapt the algorithm from the stochastic growth model
(this should be straightforward).
• Construct grid for assets.
• Endowment y already assumed to be discrete (no need to discretize y ).
• Use any numerical optimization routine we’ve learned.
• Iterate on the Bellman equation until convergence.

• Remember to use the Howard improvement algorithm!
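As a concrete illustration, here is a minimal value function iteration sketch in
Matlab for the Huggett household problem. All parameter values, grids, and the
two-state income process below are illustrative assumptions, and the Howard
improvement step is omitted for brevity:

    % Minimal VFI sketch for the Huggett household problem.
    % All parameters, grids, and the income process are assumptions.
    sigma = 2; beta = 0.96; r = 0.02;
    u = @(c) (c.^(1-sigma) - 1)/(1-sigma);     % CRRA utility
    a_grid = linspace(0, 20, 500)';            % asset grid (borrowing limit 0 here)
    y_grid = [0.5 1.5];                        % assumed two-state endowment
    Pi = [0.9 0.1; 0.1 0.9];                   % Pi(j,jn) = pi(y_jn | y_j)
    M = numel(a_grid); n = numel(y_grid);
    V = zeros(M, n); g_idx = ones(M, n);
    for it = 1:1000
        EV = V * Pi';                          % EV(:,j) = sum_jn Pi(j,jn)*V(:,jn)
        Vnew = V;
        for j = 1:n
            for i = 1:M
                c = y_grid(j) + (1+r)*a_grid(i) - a_grid;  % c for each choice a'
                val = u(max(c, 1e-12)) + beta*EV(:, j);
                val(c <= 0) = -Inf;                        % rule out c <= 0
                [Vnew(i,j), g_idx(i,j)] = max(val);
            end
        end
        if max(abs(Vnew(:) - V(:))) < 1e-8, break; end
        V = Vnew;
    end
    g = a_grid(g_idx);                         % policy function a' = g(a, y)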



13 / 49
Step 2: Solve for g(a, y )

[Figure: policy function a′ = g(a, y) for y ∈ {y_L, y_H}, plotted against the
45-degree line on [a_min, a_max].]

• Notice that the borrowing constraint (a′ ≥ a̲) can bind for low asset values.
• Be sure to set ā high enough so that g(ā, y) < ā for y ∈ {y_L, y_H}.
  We don’t know in advance what ā should be.

14 / 49
Step 3: Solve for µ

• Given g(a, y ) and r , we need to solve for the invariant distribution µ.

• Today, we will cover several approaches.

• We will go through these methods in detail (in a few slides).

15 / 49
Step 4: Market Clearing

[Figure: aggregate saving A(r) plotted against the interest rate r; the
equilibrium interest rate is where aggregate saving equals zero.]

• Check whether A(r) = ∫ g(a, y | r) dµ(a, y | r) = 0.

• We can use a simple root-finding algorithm here, e.g., bisection or Matlab’s fzero.
• We know some theoretical limits: r ∈ [−1, 1/β − 1).
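As a sketch, the outer loop could look as follows, assuming A(r) is increasing
over the bracket (which usually holds in practice); excess_saving is an assumed
helper that runs Steps 2–3 for a given r and returns A(r):

    % Sketch of the outer bisection loop on r (Huggett).
    % excess_saving is an assumed helper: it solves the household problem and
    % the invariant distribution at r, then returns aggregate saving A(r).
    beta = 0.96;                            % assumed
    r_lo = -0.99; r_hi = 1/beta - 1 - 1e-4; % inside the theoretical bounds
    for it = 1:60
        r = 0.5*(r_lo + r_hi);
        A = excess_saving(r);
        if abs(A) < 1e-8, break; end
        if A > 0
            r_hi = r;                       % too much saving: try a lower r
        else
            r_lo = r;                       % too little saving: try a higher r
        end
    end

Matlab’s fzero over the same bracket would work equally well.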

16 / 49
Overall Solution Method, Aiyagari, 1/2

• It is straightforward to adapt this algorithm for Aiyagari (1994).

• First, approximate the AR(1) process for the labor endowment as a discrete
  Markov chain, using either Tauchen (1986) or Kopecky and Suen (2010).
• The approximation yields a probability transition matrix P_n, with grid
  L = {ℓ_1, . . . , ℓ_n}.
• Construct the invariant distribution of P_n: {µ̄_1, . . . , µ̄_n}, where Σ_{i=1}^{n} µ̄_i = 1.⁷
• Compute aggregate labor supply:

      L̄ = Σ_{i=1}^{n} µ̄_i ℓ_i

• The key is that we can calculate L̄ before solving the full model.

⁷ See Lecture 6.
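A minimal sketch of this step, assuming Pn and the grid ell_grid come from the
discretization:

    % Sketch: invariant distribution of Pn and aggregate labor supply.
    % Pn and ell_grid are assumed inputs from the discretization step.
    n = size(Pn, 1);
    mu_bar = ones(1, n)/n;                 % initial guess (row vector)
    for it = 1:10000
        mu_new = mu_bar * Pn;              % iterate the chain forward
        if max(abs(mu_new - mu_bar)) < 1e-12, break; end
        mu_bar = mu_new;
    end
    Lbar = mu_bar * ell_grid(:);           % aggregate labor supply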
17 / 49
Overall Solution Method, Aiyagari, 2/2

Step 1 Start with a guess for r ∈ (−δ, 1/β − 1).
       From earlier analysis, we know r < 1/β − 1.
       We also know K^d(r) → ∞ as r → −δ.
Step 2 Given r, compute K^d(r) = (α/(r + δ))^{1/(1−α)} L̄.
Step 3 Given K^d(r), compute w = (1 − α)(K^d(r)/L̄)^α.
Step 4 Given w and r, solve for the household’s value function V(a, ℓ) and the
       associated policy function a′(a, ℓ).
       The household’s problem is very similar to Huggett (1993).
Step 5 Solve for the invariant distribution µ over (a, ℓ).
       More on this soon.
Step 6 Check whether K^s(r) = K^d(r). If this is satisfied, we’re done. Otherwise,
       update the guess for r (e.g., with bisection or fzero) and return to Step 2.

18 / 49
Solving for the Invariant Distribution

19 / 49
Solving for the Invariant Distribution

• There are three approaches we can use to solve for the invariant distribution.

1. Iterate on Q matrix
2. Approximate CDF of distribution
3. Monte Carlo methods

• We now examine each of these methods in detail, applied to Huggett (1993).


The same methods can be used for Aiyagari (1994).

20 / 49
Method 1: Iterate on Q Matrix

21 / 49
Method 1: Iterate on Q Matrix

• With this approach, we iterate on the transition rule, defined in (3), until
convergence.
• Make a discrete-state approximation for assets.
  • Assume assets a ∈ {a_1, . . . , a_M}, where a_1 = a̲ and a_M = ā.
  • Make sure that this grid is much finer than the grid used to compute the policy
    function g(a, y).
  • We will assume that agents can only transition to asset values on this finer grid
    (only when computing the distribution).

• Represent the distribution µ as an M × 2 matrix, where µ(a_i, y_j) is the mass of
agents with assets a_i and income y_j.

22 / 49
Representing the Distribution, 1/2

[Figure: bar charts of the distribution at t and at t + 1, each over the asset
grid {a_1, . . . , a_M} and income states {y_1, y_2}.]

• Agents are restricted to be on the grid.


• Given today’s distribution µ_t (which is an M × 2 matrix), we want to compute
next period’s distribution µ_{t+1}.

23 / 49
Representing the Distribution, 2/2
• Represent the distribution as an M × 2 matrix:

      µ = [ µ(a_1, y_1)   µ(a_1, y_2)
              ⋮             ⋮
            µ(a_M, y_1)   µ(a_M, y_2) ]

• In Matlab, arrays are stored in “column-major” order:⁸

      Address   Value
      1         µ(a_1, y_1)
      ⋮         ⋮
      M         µ(a_M, y_1)
      M + 1     µ(a_1, y_2)
      ⋮         ⋮
      2M        µ(a_M, y_2)

⁸ In memory, the underlying data elements are actually stored sequentially, with the
first column followed by the second column, and so on.
24 / 49
Manipulating the Distribution

• Sometimes it will be more convenient to represent µ as a 1D vector rather than
a 2D matrix.
• In Matlab, we can reshape a matrix as follows:

      % reshape returns a 2M x 1 column vector
      % mu2d must have M*2 = 2M elements
      mu1d = reshape(mu2d, 2*M, 1);

  When we do this, the underlying order of the elements will not change.
• We can then reshape it back into a 2D matrix using

      % reshape returns an M x 2 matrix
      % mu1d must have M*2 = 2M elements
      mu2d = reshape(mu1d, M, 2);

25 / 49
Constructing the Q Matrix, 1/3

• We need to calculate the following probability transition matrix
(which is a 2M × 2M 2D matrix):

      Q = [ Q_{(1,1),(1,1)}  ⋯  Q_{(1,1),(M,1)}   Q_{(1,1),(1,2)}  ⋯  Q_{(1,1),(M,2)}
            ⋮                    ⋮                 ⋮                    ⋮
            Q_{(M,1),(1,1)}  ⋯  Q_{(M,1),(M,1)}   Q_{(M,1),(1,2)}  ⋯  Q_{(M,1),(M,2)}
            Q_{(1,2),(1,1)}  ⋯  Q_{(1,2),(M,1)}   Q_{(1,2),(1,2)}  ⋯  Q_{(1,2),(M,2)}
            ⋮                    ⋮                 ⋮                    ⋮
            Q_{(M,2),(1,1)}  ⋯  Q_{(M,2),(M,1)}   Q_{(M,2),(1,2)}  ⋯  Q_{(M,2),(M,2)} ]

  where Q_{(i,j),(i′,j′)} is the probability of transitioning from (a_i, y_j) to (a_{i′}, y_{j′}).

• Rows index (a, y) and columns index (a′, y′); each row sums to 1.
• Most elements in this matrix will be zero.

26 / 49
Constructing the Q Matrix, 2/3
• The trick is to assume that agents transition to one of the two grid points
nearest to g(a_i, y_j).
• First, find the interval [a_k, a_{k+1}] on the finer grid containing g(a_i, y_j),
using linear interpolation to evaluate g(a_i, y_j).
• Then, set the transition probabilities as follows:

      Q_{(i,j),(i′,j′)} = π(y_{j′}|y_j)(1 − λ)   if i′ = k
                        = π(y_{j′}|y_j) λ        if i′ = k + 1
                        = 0                      otherwise

  where λ = (g(a_i, y_j) − a_k)/(a_{k+1} − a_k).

27 / 49
Constructing the Q Matrix, 3/3

• In Matlab, it is probably easiest to construct the Q matrix by starting with a
4D matrix with dimensions M × 2 × M × 2.
  For example, Q4d(i,j,in,jn) would then represent the probability of
  transitioning from (a_i, y_j) to (a_in, y_jn).
• Once you’ve constructed this matrix, you can reshape it into a 2D matrix with
dimensions 2M × 2M:

      % reshape returns a 2M x 2M matrix
      % Q4d must have 2M*2M elements
      Q2d = reshape(Q4d, M*2, M*2);

  This won’t change the underlying order of the elements.
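Putting the last two slides together, a sketch of the Q construction might look
as follows; a_fine, Pi, and g_fine (the policy interpolated onto the finer grid)
are assumed inputs:

    % Sketch: build Q4d(i,j,in,jn) using the two-nearest-grid-points trick.
    % a_fine (M x 1), Pi (n x n), and g_fine(i,j) = g(a_i, y_j) are assumed.
    M = numel(a_fine); n = size(Pi, 1);
    Q4d = zeros(M, n, M, n);
    for j = 1:n
        for i = 1:M
            ap = min(max(g_fine(i,j), a_fine(1)), a_fine(end)); % clamp to grid
            k = min(find(a_fine <= ap, 1, 'last'), M-1);        % left bracket a_k
            lam = (ap - a_fine(k))/(a_fine(k+1) - a_fine(k));   % weight on a_{k+1}
            for jn = 1:n
                Q4d(i, j, k,   jn) = Pi(j, jn)*(1 - lam);
                Q4d(i, j, k+1, jn) = Pi(j, jn)*lam;
            end
        end
    end
    Q2d = reshape(Q4d, M*n, M*n);          % 2M x 2M when n = 2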

28 / 49
Main Iteration
• Given today’s distribution µ, tomorrow’s distribution µ′ is

      µ′(a_{i′}, y_{j′}) = Σ_{j=1}^{2} Σ_{i=1}^{M} Q((a_i, y_j), (a_{i′}, y_{j′})) µ(a_i, y_j)

  for i′ = 1, . . . , M and j′ = 1, 2.
  This is the transition rule from Equation (3).

• In Matlab, this can be concisely written as

      % mu0 is a 2M x 1 column vector
      % Q is a 2M x 2M transition matrix
      mu1 = Q'*mu0;

  Iterate on this until convergence.⁹

⁹ Note we could also solve for the eigenvector of the transposed Q matrix associated with the
unit eigenvalue. However, this Q matrix is very big (but also very sparse), so this approach
would likely be slower.
29 / 49
Summary of Method 1

Step 1 Construct a finer grid for assets, {a_1, . . . , a_M}.

Step 2 Construct the 2M × 2M Q matrix.
Step 3 Start with an initial guess for the distribution, µ_0, represented as a 2M × 1 column
       vector. Initialize all elements to 1/(2M).
Step 4 Given the current distribution µ_t, compute next period’s distribution µ_{t+1} by
       iterating on the law of motion. Iterate until convergence.
Step 5 Reshape the invariant distribution µ into a 2D (M × 2) matrix and use it to compute
       aggregate statistics. For example, aggregate saving is

           A(r) ≡ ∫ g(a, y) dµ = Σ_{j=1}^{2} Σ_{i=1}^{M} g(a_i, y_j) µ(a_i, y_j)

       Note that the policy function g(a, y) is defined on a different (coarser) grid for assets, so
       you need to use linear interpolation here to compute g(a_i, y_j).
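A sketch of Steps 3–5, with Q2d from the previous slides and g_fine the policy
interpolated onto the finer grid (both assumed):

    % Sketch: iterate mu' = Q'*mu to convergence, then aggregate.
    M2 = size(Q2d, 1);                     % M2 = 2M
    mu = ones(M2, 1)/M2;                   % uniform initial guess
    for it = 1:100000
        mu_new = Q2d' * mu;                % law of motion, Equation (3)
        if max(abs(mu_new - mu)) < 1e-12, break; end
        mu = mu_new;
    end
    mu2d = reshape(mu, M2/2, 2);           % back to an M x 2 matrix
    A_r = sum(sum(g_fine .* mu2d));        % aggregate saving A(r)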

30 / 49
Method 2: Approximate CDF of Distribution

31 / 49
Method 2: Approximate CDF of Distribution
• Let Φ(a_i, y_j) be the mass of agents with a ≤ a_i and y = y_j.
  Σ_{j=1}^{2} Φ(a_i, y_j) is the mass of ALL agents with a ≤ a_i.

[Figure: example CDFs on [a̲, ā]: Φ(a, y_L) and Φ(a, y_H) in one panel (reaching
1/3 and 2/3 at ā in this example), and their sum Φ(a, y_L) + Φ(a, y_H) in the
other (reaching 1). Notice the positive mass at a̲.]

• The basic approach is:

  (1) Start with an initial guess Φ_0.
  (2) Use the policy function to compute Φ_{n+1} given Φ_n.
  (3) Iterate until convergence.

32 / 49
Updating the Distribution, 1/2
• How can we compute Φ_{n+1} given Φ_n?
• We need to invert the policy function, exploiting the fact that it is
monotonically increasing.

[Figure: policy functions g(a, y_L) and g(a, y_H) against the 45-degree line;
for a given a′ on the vertical axis, the inverses g^{−1}(a′, y_H) and
g^{−1}(a′, y_L) are read off the horizontal axis.]

  We use g^{−1}(a′, y_j) to compute Φ_{n+1}(a′, y′) (see the plot above).

• However, there are several special cases we need to account for
(more on this shortly).

33 / 49
Updating the Distribution, 2/2
Example: suppose we want to compute Φ_{n+1}(a_{i′}, y_{j′}).
• Today, households with a ≤ g^{−1}(a_{i′}, y_L) and y = y_L will transition to a′ ≤ a_{i′}.
  • There are Φ_n(g^{−1}(a_{i′}, y_L), y_L) such households.
  • A fraction π(y_{j′}|y_L) of them will also transition to y_{j′}.
• Today, households with a ≤ g^{−1}(a_{i′}, y_H) and y = y_H will transition to a′ ≤ a_{i′}.
  • There are Φ_n(g^{−1}(a_{i′}, y_H), y_H) such households.
  • A fraction π(y_{j′}|y_H) of them will also transition to y_{j′}.

This implies

      Φ_{n+1}(a_{i′}, y_{j′}) = Σ_{j=1}^{2} π(y_{j′}|y_j) Φ_n(g^{−1}(a_{i′}, y_j), y_j)

• Use linear interpolation to compute Φ_n(a, y_j) at a = g^{−1}(a_{i′}, y_j).

• Special case not clear from the notation: if it’s not possible to transition to a′ ≤ a_{i′}
from y = y_j (i.e., there’s no inverse), then set Φ_n(g^{−1}(a_{i′}, y_j), y_j) to zero.

34 / 49
Inverting the Policy Function

[Figure: same policy-function plot as before; for a given a′ on the vertical
axis, g^{−1}(a′, y_H) and g^{−1}(a′, y_L) are the assets at which g(a, y_H) and
g(a, y_L) cross a′.]

• For a given a′, we would like to solve for the value of a such that g(a, y_j) = a′.
  Denote the inverse by g^{−1}(a′, y_j).
• Use linear interpolation to interpolate g(a, y_j).
• Use a root-finding algorithm to solve for the inverse. However, we need to pay
very close attention to detail here (see the next two slides).

35 / 49
Low Endowment Case

[Figure: g(a, y_L) against the 45-degree line, with the three cases below
marked; g_0^{−1} is the largest a at which g(a, y_L) = a̲.]

(1) a′ ≥ g(ā, y_L): Set the inverse to ā.¹⁰ (Treat the policy function as vertical here;
    do not extrapolate.)
(2) a̲ < a′ < g(ā, y_L): This is the normal case: solve a′ − g(a, y_L) = 0 with a root finder.
(3) a′ = a̲: Find the largest a such that g(a, y_L) = a̲ (i.e., g_0^{−1} in the graph above).

¹⁰ We could extrapolate the policy function, but that’s not necessary.
36 / 49
High Endowment Case

[Figure: g(a, y_H) against the 45-degree line, with the three cases below marked.]

(1) a′ ≥ g(ā, y_H): Set the inverse to ā. (Again, treat the policy function as vertical;
    do not extrapolate.)
(2) g(a̲, y_H) ≤ a′ < g(ā, y_H): This is the normal case.
(3) a′ < g(a̲, y_H): No inverse exists; it is not possible for anyone to transition to
    assets below this level from y_H. Do not extrapolate; set Φ_n(g^{−1}(a′, y_H), y_H) = 0.
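A sketch of the inversion for one target a′ and one endowment y_j, covering the
cases above; a_grid and g_j = g(:, j) on that grid (increasing in a) are assumed:

    % Sketch: invert a' = g(a, y_j) for a single target ap.
    % a_grid and g_j (the policy on that grid, increasing in a) are assumed.
    if ap >= g_j(end)
        a_inv = a_grid(end);               % case (1): cap at a_bar, no extrapolation
    elseif ap < g_j(1)
        a_inv = NaN;                       % case (3), high endowment: no inverse;
                                           % the caller sets the mass to zero
    elseif ap == g_j(1)
        a_inv = a_grid(1);                 % boundary; for y_L with ap = a_lb, instead
                                           % take the largest a with g(a, y_L) = a_lb
    else
        fun = @(a) ap - interp1(a_grid, g_j, a);        % case (2): normal case
        a_inv = fzero(fun, [a_grid(1) a_grid(end)]);
    end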

37 / 49
Summary of Method 2

Step 1 Construct a finer grid for assets, {a_1, . . . , a_M}.

Step 2 Compute the inverse of g(a, y). Notice we only need to do this once.
Step 3 Start with an initial guess for Φ_0. For example, put all mass at a̲:

       [Figure: flat initial guess with Φ_0(a, y_L) = 1/3 and Φ_0(a, y_H) = 2/3
       for all a ∈ [a̲, ā].]

Step 4 Use the inverse of g(a, y) to compute Φ_{n+1} given Φ_n.
       Be very careful to account for special cases, including situations when the policy
       function has no inverse!
Step 5 Iterate until convergence (e.g., |Φ_{n+1} − Φ_n| < ε).
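A sketch of one Step 4 update, assuming the inverses have been precomputed into
ainv(i, j) = g^{−1}(a_fine(i), y_j), with NaN where no inverse exists:

    % Sketch: one CDF update, Phi_new from Phi.
    % a_fine (M x 1), Phi (M x 2), Pi, and ainv (M x 2) are assumed inputs.
    M = numel(a_fine);
    Phi_new = zeros(M, 2);
    for jp = 1:2                           % next period's endowment y_{j'}
        for i = 1:M                        % next period's asset threshold a_{i'}
            for j = 1:2                    % today's endowment y_j
                if isnan(ainv(i, j))
                    mass = 0;              % special case: no inverse
                else
                    mass = interp1(a_fine, Phi(:, j), ainv(i, j));
                end
                Phi_new(i, jp) = Phi_new(i, jp) + Pi(j, jp)*mass;
            end
        end
    end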

38 / 49
Method 3: Monte Carlo Methods

39 / 49


Method 3: Monte Carlo Methods
With this approach, we construct a sample of N households, simulate the economy,
and track these households over time.
Step 1 Pick a large N (e.g., N = 100,000).
Step 2 Initialize the seed for the random number generator.
       In Matlab, use the built-in function rng, with any seed that you want.
Step 3 Initialize the simulated assets and endowments for all households, {(a_i, y_i)}_{i=1}^{N}.¹¹
       Initialize a_i = 0 and y_i = y_L for all households.
Step 4 For each household i = 1, . . . , N:
       (1) Update next period’s asset choice, using a′ = g(a_i, y_i).
           Use linear interpolation.
       (2) Simulate next period’s endowment, y′ (I’ll explain how soon).
Step 5 Keep iterating until convergence.
       Testing for convergence with MC methods is tricky (more on this soon).

¹¹ Note that i here indexes a particular household in the simulation.
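A sketch of one pass of Step 4, vectorized over endowment states; a_sim and s
hold each household’s assets and endowment-state index, and draw_markov is an
assumed helper sketched on the next two slides:

    % Sketch: one simulation step for all N households.
    % a_grid, g (policy on the grid), Pi, a_sim (N x 1), s (N x 1) are assumed.
    a_next = zeros(N, 1);
    for j = 1:size(g, 2)                   % loop over endowment states
        idx = (s == j);
        a_next(idx) = interp1(a_grid, g(:, j), a_sim(idx));  % a' = g(a_i, y_i)
    end
    s_next = draw_markov(s, Pi);           % assumed helper; see the next slides
    a_sim = a_next; s = s_next;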
40 / 49
Simulating a Markov Chain

We can use the following procedure to simulate next period’s endowment for all
households i = 1, . . . , N.
Step 1 For each household i = 1, . . . , N, draw u_i from a uniform distribution over
       (0, 1). In Matlab, we can use rand.
Step 2 If u_i ≤ π(y_L|y_i), set y_i′ = y_L. Otherwise, set y_i′ = y_H.

       [Figure: the unit interval split at π(y_L|y_i); draws u_i below the split
       give y′ = y_L, draws above give y′ = y_H.]

This can easily be generalized to Markov processes with more than two states
(see the next slide).

41 / 49
Markov Chain with More Than 2 States

• For example, suppose a Markov chain has 4 states, {y_1, y_2, y_3, y_4}.

[Figure: the unit interval split into 4 segments at the cumulative probabilities
π(y_1|y_i), Σ_{j=1}^{2} π(y_j|y_i), and Σ_{j=1}^{3} π(y_j|y_i); a draw u_i in
segment k gives y′ = y_k.]

• For household i, draw u_i from a uniform distribution over (0, 1), like before.
• If the household’s current state is y_i, set next period’s state to be the y_k such
that

      Σ_{j=1}^{k−1} π(y_j|y_i) < u_i ≤ Σ_{j=1}^{k} π(y_j|y_i)
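A sketch of this rule in Matlab, which could serve as the draw_markov helper
assumed earlier; s is the vector of current state indices:

    % Sketch: draw next period's states for all households at once.
    function s_next = draw_markov(s, Pi)
        N = numel(s);
        cumPi = cumsum(Pi, 2);             % row-wise cumulative probabilities
        u = rand(N, 1);                    % one uniform draw per household
        s_next = zeros(N, 1);
        for i = 1:N
            % smallest k with u_i <= sum_{j<=k} pi(y_j | y_{s_i})
            s_next(i) = find(u(i) <= cumPi(s(i), :), 1, 'first');
        end
    end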

42 / 49
Testing for Convergence, 1/4

How do we test for convergence?

Simplest approach:
• Run the simulation for a fixed number of iterations, T.
• Choose T to be sufficiently large (e.g., T = 2,000).

More complicated approach:
• At each iteration, calculate a set of moments from the distribution, (m_1, m_2, . . .).
  • For example, one moment could be average savings.
  • Computing moments is now very easy.
• Because of the randomness introduced by the Monte Carlo simulation, the usual
convergence criterion¹² will likely never be achieved for a low tolerance.
Therefore, we’ll need to modify the stopping rule (more on this shortly).

¹² The usual convergence criterion would be to stop when |m_{j,t} − m_{j,t−1}| < ε for all moments j.
43 / 49
Testing for Convergence, 2/4
[Figure: mean assets in the Monte Carlo simulation, plotted over 250 iterations.]

• This shows mean assets (i.e., m = (1/N) Σ_{i=1}^{N} a_i) in the MC simulation, over time.
• While the mean does converge, the simulation introduces a lot of error.
• If we test for convergence using the usual approach (e.g., |m_t − m_{t−1}| < ε),
we will likely never achieve the desired tolerance.

44 / 49
Testing for Convergence, 3/4
How can we test for convergence when Monte Carlo simulation introduces so much
randomness?
Keep track of the minimum error achieved and stop when the error hasn’t improved
for the last T_0 iterations (e.g., T_0 = 150).
(1) At each iteration, compute the error:

      e_t = max_j |m_{j,t} − m_{j,t−1}|

(2) Simultaneously, keep track of the best error achieved:¹³

      e_{min,t} = min(e_1, e_2, . . . , e_t)

(3) Stop iterating if the simulation hasn’t improved upon the best error achieved for the
last T_0 iterations.

¹³ In your code, you don’t have to store the whole history of errors e_t. Initialize e_min to ∞ and
on each iteration update e_min = min(e_t, e_min).
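A sketch of this stopping rule; sim holds the simulated cross-section, and
simulate_one_period and moments are assumed helpers (one simulation pass, and
the tracked moments such as mean assets):

    % Sketch: stop when the error has not improved on its best value for T0
    % consecutive iterations. sim, simulate_one_period, moments are assumed.
    T0 = 150; e_min = Inf; last_improve = 0;
    m_old = moments(sim);
    for t = 1:20000
        sim = simulate_one_period(sim);    % Step 4 of Method 3
        m = moments(sim);
        e = max(abs(m - m_old));           % e_t = max_j |m_{j,t} - m_{j,t-1}|
        if e < e_min
            e_min = e; last_improve = t;   % new best error
        end
        if t - last_improve >= T0, break; end
        m_old = m;
    end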
45 / 49
Testing for Convergence, 4/4
[Figure: the error e_t and the running minimum error from a Monte Carlo
simulation, plotted over 250 iterations.]

• This is an example of the error (e_t = max_j |m_{j,t} − m_{j,t−1}|) from a Monte Carlo
simulation.
• In this case, the iteration stops because the algorithm did not improve upon the
best error for the last 150 iterations.

46 / 49
Comparing the Three Methods

• The Q-matrix and CDF methods will both be accurate. However, they can be
tricky to implement (especially the CDF method).
• The randomness from Monte Carlo methods will introduce some error into the
calculated distributions (but this is manageable).
• However, in more complicated models, the first two methods will become
infeasible. Monte Carlo will always work (and will always be easy to code).

47 / 49
Resulting Invariant Distribution
[Figure: the invariant distribution over (a, y), computed with all three methods
(iterate on Q matrix, CDF approximation, Monte Carlo) and plotted separately for
y_L and y_H over assets a.]

• All methods produce similar distributions, but notice that the Monte Carlo
simulation introduces some error.
48 / 49
What’s Next?

• Next week, we’ll cover another application of dynamic programming:
overlapping generations models.

49 / 49
Equilibrium Definition, Huggett (Review)
A stationary recursive equilibrium in Huggett (1993) consists of
(i) the value function V(a, y),
(ii) the policy functions c(a, y) and g(a, y),
(iii) the interest rate r, and
(iv) the distribution µ,
such that
(1) Given r, V(a, y) solves the Bellman equation given in (1), and c(a, y) and g(a, y)
are the associated policy functions.
(2) Markets clear:

      A(r) = ∫_{A×Y} g(a, y) dµ(a, y) = 0

(3) µ is a stationary measure, i.e., a fixed point of (3).

1/2
Equilibrium Definition, Aiyagari (Review)
A stationary recursive equilibrium in Aiyagari (1994) consists of
(a) the value function V(a, ℓ),
(b) the policy functions c(a, ℓ) and a′(a, ℓ),
(c) the prices w and r, and
(d) the distribution µ over (a, ℓ),
such that
(1) Given w and r, V(a, ℓ) solves the Bellman equation in (2), and c(a, ℓ) and
a′(a, ℓ) are the associated policy functions.
(2) Labor and capital markets clear:

      K^d(r) = K^s(r),   w = (1 − α)(K^d(r)/L̄)^α

(3) µ is a stationary measure, i.e., a fixed point of (4).

2/2
