Notes on Value Function Iteration
Spring 2024
1 Introduction
These notes discuss how to solve dynamic economic models using value function iteration.
Consider the neoclassical growth model with CRRA utility:

$$V(K) = \max_{C}\; \frac{C^{1-\sigma}-1}{1-\sigma} + \beta V(K')$$

s.t.

$$K' = K^{\alpha} - C + (1-\delta)K$$

Imposing that the constraint holds, the problem can be rewritten as one of choosing the future state:

$$V(K) = \max_{K'}\; \frac{\left(K^{\alpha} + (1-\delta)K - K'\right)^{1-\sigma}-1}{1-\sigma} + \beta V(K')$$
The basic idea of value function iteration is as follows. Create a grid of possible values of the state, K, with N elements. Then make a guess of the value function, V^0(K). This guess will be an N × 1 vector – one value for each possible state. For that guess of the value function, compute V^1(K) as follows:
$$V^1(K) = \max_{K'}\; \frac{\left(K^{\alpha} + (1-\delta)K - K'\right)^{1-\sigma}-1}{1-\sigma} + \beta V^0(K')$$
In doing this, it is important to take care of the “max” operator inside the Bellman equation: conditional on V^0(K), you have to choose K' to maximize the right-hand side. For each grid value of K, this gives you a new value of the value function, call this V^1(K). Then compare V^1(K) to your initial guess, V^0(K). If the two are not close, take your new value function, V^1(K), as the “guess” and repeat the process. On the nth iteration you will have V^n(K) and V^{n-1}(K). For n large, if your problem is well-defined, these should be very close together and you can stop.

I now turn to the MATLAB implementation. The first step is to construct a grid for capital, centered on the steady state capital stock, kstar; the number of grid intervals is governed by kgrid (both kstar and kgrid are set earlier in the program):
kmin = 0.25*kstar;           % lower bound of the capital grid
kmax = 1.75*kstar;           % upper bound of the capital grid
grid = (kmax-kmin)/kgrid;    % step size implied by kgrid grid intervals

kmat = kmin:grid:kmax;       % capital grid
kmat = kmat';
[N,n] = size(kmat);          % N = number of grid points
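The grid brackets the non-stochastic steady state. For this model, kstar presumably comes from the steady-state version of the Euler equation:

$$1 = \beta\left(\alpha K^{*\,\alpha-1} + 1 - \delta\right) \quad\Longrightarrow\quad K^{*} = \left(\frac{\alpha}{1/\beta - (1-\delta)}\right)^{\frac{1}{1-\alpha}}$$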
The resulting vector is called “kmat” and is 100×1. Next I guess a value function. For simplicity
I will just guess a vector of zeros:
v0 = zeros(N,1);
I need to specify a tolerance for when to stop searching over value functions. In particular, my objective is going to be the norm of the difference between V^n(K) and V^{n-1}(K). The norm I’m going to use is the square root of the sum of squared deviations:

$$\text{dif} = \left( \sum_{i=1}^{N} \left( V^{n}(K_i) - V^{n-1}(K_i) \right)^2 \right)^{1/2}$$

One could use other measures of closeness of the value functions. I need to set a tolerance that tells MATLAB when to stop; I set this tolerance at 0.01. I can also specify the maximum number of iterations that MATLAB can take, which I set at 1,000. The MATLAB code for these preliminaries is:
tol = 0.01;
maxits = 1000;
I’m going to write a loop over all possible values of the state. For each value of K, I find the K' that maximizes the Bellman equation given my guess of the value function. I have a function file to do the maximizing that we will return to in a minute. Wrapped around this loop is a “while” statement, which tells MATLAB to keep repeating the loop as long as the difference between value functions is greater than the tolerance. The MATLAB code for this part looks as follows.
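This is a sketch: the structure mirrors the stochastic version shown later, the handle name valfun matches the function file given at the end of these notes, and it assumes that valfun can see k0 and v0 (e.g. through global declarations); the original file may differ in details.

while dif > tol && its < maxits
    for i = 1:N
        k0 = kmat(i,1);                     % current capital stock
        k1 = fminbnd(@valfun, kmin, kmax);  % K' that maximizes the Bellman equation
        v1(i,1) = -valfun(k1);              % optimized value (undo the sign flip in valfun)
        k11(i,1) = k1;                      % policy function: optimal K' at this grid point
    end
    dif = norm(v1 - v0)                     % no semi-colon, so convergence is displayed
    v0 = v1;
    its = its + 1
end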
This code can be interpreted as follows. For each value i in the state space, I get the current value of K. Then the fminbnd command finds the argmax of the Bellman equation, which is evaluated in the function file “valfun.m”. The next line collects the optimized value into the new value function (called v1), and the line after that records the policy function associated with this choice (i.e. the optimal choice of K', which is called k1 in this code). After the loop over the possible values of the state I calculate the difference and write out the iteration number. I do not include a semi-colon after these statements so that I can see the convergence taking place in the command window.
A complication arises in the function file because “fminbnd” assumes a continuous choice set: it will search over all values of k1 between “kmin” and “kmax”, which will include points not on the original capital grid. For this reason, we need to interpolate values of the value function off of the grid. I will use linear interpolation. Suppose that MATLAB chooses a point K that is off the grid. I can approximate the value function at that point by assuming that the value function is linear between the two grid points immediately around it. In particular, I find i such that K(i) < K < K(i + 1). Then I can approximate the value function at this point as:

$$V(K) \approx V(K(i)) + \frac{K - K(i)}{K(i+1) - K(i)}\Big( V(K(i+1)) - V(K(i)) \Big)$$

In MATLAB this is done with interp1:
g = interp1(kmat,v0,k,'linear'); % smooths out previous value function
We need to include a penalty to prevent consumption from going negative and then calculate
the Bellman equation as follows:
if c <= 0
    val = -888888888888888888 - 800*abs(c); % keeps it from going negative
else
    %val = log(c) + beta*g;
    val = (1/(1-s))*(c^(1-s)-1) + beta*g;
end
val = -val; % make it negative since we're maximizing and the code is set up to minimize
The commented-out part is for use when s = 1, in which case the utility function collapses to the natural log. My complete code for the main file and the function file is given below:
clear all;
close all;

tic

% Eric Sims
% University of Notre Dame
% Graduate Macro II

% ... (introductory comments omitted)

%s = 2;

% set parameters
alpha = 0.33; % capital's share
beta = 0.95;
delta = 0.1; % depreciation rate (annual)
s = 2;

tol = 0.01;
maxits = 1000;
dif = tol+1000;
its = 0;

% ... (definitions of kstar and kgrid omitted)

ystar = kstar^(alpha);

kmin = 0.25*kstar;
kmax = 1.75*kstar;
grid = (kmax-kmin)/kgrid;

kmat = kmin:grid:kmax;
kmat = kmat';
[N,n] = size(kmat);

polfun = zeros(kgrid+1,3);

v0 = zeros(N,1);
dif = 10;
its = 0;

% ... (value function iteration loop omitted, as described above)

for i = 1:N
    con(i,1) = kmat(i,1)^(alpha) - k11(i,1) + (1-delta)*kmat(i,1);
    polfun(i,1) = kmat(i,1)^(alpha) - k11(i,1) + (1-delta)*kmat(i,1);
end

figure
plot(kmat,v1,'-k','Linewidth',1)
xlabel('k')
ylabel('V(k)')

figure
plot(kmat,polfun,'-k','Linewidth',1)
xlabel('k')
ylabel('c')

toc
function val=valfun(k)

% This program gets the value function for a neoclassical growth model with
% no uncertainty and CRRA utility
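The body of the function follows the pieces developed above. A minimal sketch of what it contains; the global statement used to pass objects in, and the name k0 for the current capital stock, are assumptions:

global v0 beta delta alpha kmat k0 s   % assumed mechanism for passing objects in

g = interp1(kmat, v0, k, 'linear');    % interpolate the previous value function at k
c = k0^alpha - k + (1-delta)*k0;       % consumption implied by choosing K' = k

if c <= 0
    val = -888888888888888888 - 800*abs(c);   % penalty keeps consumption positive
else
    val = (1/(1-s))*(c^(1-s)-1) + beta*g;     % flow utility plus discounted continuation value
end
val = -val;                            % negative because fminbnd minimizes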
Here is a plot of the resulting value function, plotted against the value of the current capital
stock:
[Figure: the value function V(k) plotted against the capital stock k]
Next I show a plot of the policy function – given K, what is the optimal choice of K'? I plot this with a 45 degree line showing all points where K' = K, so that we can visually verify that a steady state indeed exists and is stable:
[Figure: the policy function, k_{t+1} plotted against k_t, together with the 45 degree line]

[Figure: the consumption policy function, c plotted against k]
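As a rough guide, commands along the following lines could produce the policy function figure; the formatting choices are assumptions, and k11 is the vector of optimal K' choices collected in the loop:

figure
plot(kmat, k11, '-k', 'Linewidth', 1)   % policy function: optimal K' at each K
hold on
plot(kmat, kmat, '--k')                 % 45 degree line: points where K' = K
xlabel('k_t')
ylabel('k_{t+1}')
legend('Policy Function','45 Degree Line')
title('Policy Function')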
Now consider a stochastic version of the model in which productivity, A, is random:

$$V(K,A) = \max_{C}\; \frac{C^{1-\sigma}-1}{1-\sigma} + \beta E\,V(K',A')$$

s.t.

$$K' = AK^{\alpha} - C + (1-\delta)K$$
You can impose that the constraint holds and re-write the problem as choosing the future state:
$$V(K,A) = \max_{K'}\; \frac{\left(AK^{\alpha} + (1-\delta)K - K'\right)^{1-\sigma}-1}{1-\sigma} + \beta E\,V(K',A')$$
The stochastic part comes in because A is stochastic. I assume that A follows a Markov process, which generically means that the current state is a sufficient statistic for forecasting future values of the state. Let Ã denote a q × 1 vector of possible realizations of A. Then let P be a q × q matrix whose (i, j) element is the probability of transitioning to state j from state i; hence the rows must sum to one. I assume that A can take on the following three values: low, medium, and high:
$$\tilde{A} = \begin{pmatrix} 0.9 \\ 1.0 \\ 1.1 \end{pmatrix}$$
Combined with my choice of P, this means that the expected value of A next period is always 1, regardless of what current productivity is. It also means that the non-stochastic steady state value of A is also 1.
Another complication now is that my value function is going to have N × q elements. In other words, there is a different value to being in every different capital/productivity combination. I construct this as an N × q matrix, though I could conceivably use Kronecker products and make the state space a one-dimensional object.
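For concreteness, a minimal sketch of how these objects could be set up in MATLAB; the particular transition matrix shown is only a hypothetical example consistent with the properties described in the text, not necessarily the one used in the actual code:

% Illustrative setup (the transition matrix here is only a hypothetical
% example: identical rows give no persistence and E[A'] = 1, but the
% actual P may differ).
amat = [0.9; 1.0; 1.1];       % possible realizations of A (q = 3)
P = [1/3 1/3 1/3;
     1/3 1/3 1/3;
     1/3 1/3 1/3];            % each row sums to one
v0 = zeros(N,3);              % initial guess: one column per productivity state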
The value function iteration proceeds similarly to above, but I have to take account of the
fact that there are expectations over future states now. I start with an initial guess of the value
function, V 0 (K, A). Then I construct the Bellman equation:
$$V^1(K,A) = \max_{K'}\; \frac{\left(AK^{\alpha} + (1-\delta)K - K'\right)^{1-\sigma}-1}{1-\sigma} + \beta E\left[V^0(K',A')\right]$$
Here the expectation is taken over the future value function. The uncertainty comes in over future values of A': you are choosing K' based on an expectation of A'. Given a choice of K', there are three possible values of A' given a current value of A, with probabilities given by the row of the transition matrix corresponding to the current state of A.
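Concretely, if the current state is the i-th element of Ã, the expectation is a probability-weighted sum using the i-th row of P:

$$E\left[V^0(K',A')\,\middle|\,A = \tilde{A}_i\right] = \sum_{j=1}^{q} P_{ij}\,V^0(K',\tilde{A}_j)$$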
My new MATLAB code in the main file looks like this. It is almost identical to the code above, except there is now a double loop (i.e. looping over both states):
while dif > tol && its < maxits
    for j = 1:3
        for i = 1:N
            k0 = kmat(i,1);
            a0 = amat(j,1);
            k1 = fminbnd(@valfun_stoch,kmin,kmax);
            v1(i,j) = -valfun_stoch(k1);
            k11(i,j) = k1;
        end
    end
    %g = abs(v1-v0);
    dif = norm(v1-v0)
    v0 = v1;
    its = its+1
end
The function file takes account of the expectations in the following way:
function val = valfun_stoch(k)

% This program gets the value function for a stochastic growth model with
% CRRA utility
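A minimal sketch of how the expectation can be handled inside the function file, following the post-multiplication described in the next paragraph; the global statement and the use of the loop index j for the current productivity state are assumptions:

global v0 beta delta alpha kmat k0 a0 s P j   % assumed mechanism for passing objects in

g  = interp1(kmat, v0, k, 'linear');   % 1 x 3 row: interpolated V0(k, A') for each possible A'
Eg = g*P(j,:)';                        % expectation: post-multiply by transposed row j of P

c = a0*k0^alpha - k + (1-delta)*k0;    % consumption implied by choosing K' = k

if c <= 0
    val = -888888888888888888 - 800*abs(c);   % penalty keeps consumption positive
else
    val = (1/(1-s))*(c^(1-s)-1) + beta*Eg;
end
val = -val;                            % negative because fminbnd minimizes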
Basically, I post-multiply the interpolated value function by the transpose of the proper row of the probability matrix. The resulting value functions – V(K) plotted against K for different values of A – are plotted below. As one would expect, the value of being in a “high” A state exceeds the value of being in the low A states. These differences are not very large; this is primarily because my specification of P is such that there is no real persistence in A.
[Figure: value functions V(k) plotted against k for A = 0.9, A = 1.0, and A = 1.1]

[Figure: consumption policy functions c plotted against k for A = 0.9, A = 1.0, and A = 1.1]
As a final extension, consider a version of the non-stochastic model in which labor supply, n, is chosen endogenously and leisure is valued:

$$V(K) = \max_{C,n}\; \frac{C^{1-\sigma}-1}{1-\sigma} + \theta \ln(1-n) + \beta V(K')$$

s.t.

$$K' = K^{\alpha}n^{1-\alpha} - C + (1-\delta)K$$

The first order condition for the choice of labor is:

$$\frac{\theta}{1-n} = C^{-\sigma}(1-\alpha)K^{\alpha}n^{-\alpha} \qquad (2)$$
This first order condition must hold for any choice of C (equivalently, K ′ ) along an optimal
path. Hence, you can effectively treat this first order condition as a “constraint” that must hold.
It essentially determines n implicitly as a function of C and K. Then you can write the modified
Bellman equation as:
$$V(K) = \max_{K'}\; \frac{\left(K^{\alpha}n^{1-\alpha} + (1-\delta)K - K'\right)^{1-\sigma}-1}{1-\sigma} + \theta \ln(1-n) + \beta V(K')$$

s.t.

$$\frac{\theta}{1-n} = \left(K^{\alpha}n^{1-\alpha} + (1-\delta)K - K'\right)^{-\sigma}(1-\alpha)K^{\alpha}n^{-\alpha}$$
The “constraint” (the FOC for labor) implicitly defines n as a function of K and K'. So you can effectively plug that in everywhere n shows up, leaving the problem as one of just choosing K' given K.
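One way to implement this numerically is to solve the labor FOC with a root finder inside the function file for each candidate K'. A minimal sketch; the use of fzero, the starting guess, and the parameter name theta are assumptions, and the same negative-consumption penalty as before would still be needed:

% For a given current capital stock k0 and candidate K' = k, solve the labor
% FOC for n, then evaluate the period return. g is the interpolated
% continuation value, exactly as in the earlier function files.
focn = @(n) theta/(1 - n) - ...
    (k0^alpha*n^(1-alpha) + (1-delta)*k0 - k)^(-s) * (1-alpha)*k0^alpha*n^(-alpha);
n   = fzero(focn, 0.3);                              % 0.3 is just an initial guess for hours
c   = k0^alpha*n^(1-alpha) + (1-delta)*k0 - k;       % consumption implied by (k0, k, n)
val = (c^(1-s) - 1)/(1-s) + theta*log(1-n) + beta*g; % flow utility plus continuation value
val = -val;                                          % fminbnd minimizes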