
Graduate Macro Theory II:

Notes on Value Function Iteration


Eric Sims
University of Notre Dame

Spring 2024

1 Introduction
These notes discuss how to solve dynamic economic models using value function iteration.

2 A Deterministic Growth Model


The solution to the deterministic growth model can be written as a Bellman equation as follows:

$$V(K) = \max_{C} \left\{ \frac{C^{1-\sigma} - 1}{1-\sigma} + \beta V(K') \right\}$$

s.t.

$$K' = K^\alpha - C + (1-\delta)K$$

Variables with a ′ denote future values. K is a predetermined state variable. C is a jump


variable. You can impose that the constraint holds and re-write the problem as choosing the future
state:

$$V(K) = \max_{K'} \left\{ \frac{(K^\alpha + (1-\delta)K - K')^{1-\sigma} - 1}{1-\sigma} + \beta V(K') \right\}$$

The basic idea of value function iteration is as follows. Create a grid of possible values of the state, K, with N elements. Then make a guess of the value function, V^0(K). This guess will be an N × 1 vector, with one value for each possible state. Given that guess of the value function, compute V^1(K) as follows:

$$V^1(K) = \max_{K'} \left\{ \frac{(K^\alpha + (1-\delta)K - K')^{1-\sigma} - 1}{1-\sigma} + \beta V^0(K') \right\}$$

In doing this, it is important to take care of the “max” operator inside the Bellman equation: conditional on V^0(K), you have to choose K′ to maximize the right-hand side. For each grid value of K, this gives you a new value of the value function; call this V^1(K). Then compare V^1(K) to your initial guess, V^0(K). If the two are not close, take your new value function, V^1(K), as the guess and repeat the process. On the nth iteration, you will have V^n(K) and V^{n−1}(K). For n large, if your problem is well-defined, these should be very close together and you can stop.
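The iteration just described can also be sketched compactly outside MATLAB. The following is a minimal Python/NumPy illustration, not the code used in these notes: the grid bounds, the tolerance, and the brute-force maximization over grid points (rather than a continuous optimizer) are all simplifying assumptions.

```python
import numpy as np

# Illustrative parameter values (the same ones used in the numerical example below)
alpha, beta, delta, sigma = 0.33, 0.95, 0.1, 2.0

kgrid = np.linspace(1.0, 5.0, 100)   # grid over the state K (bounds are arbitrary)
N = kgrid.size

# One-period utility for every (K, K') pair; infeasible pairs get a large penalty
C = kgrid[:, None]**alpha + (1 - delta)*kgrid[:, None] - kgrid[None, :]
U = np.full((N, N), -1e10)
feasible = C > 0
U[feasible] = (C[feasible]**(1 - sigma) - 1) / (1 - sigma)

V = np.zeros(N)                      # initial guess V^0
for it in range(1000):
    V_new = np.max(U + beta*V[None, :], axis=1)   # Bellman operator: max over K'
    if np.max(np.abs(V_new - V)) < 1e-6:          # stop when successive guesses agree
        break
    V = V_new

policy = kgrid[np.argmax(U + beta*V[None, :], axis=1)]  # optimal K' for each K
```

Because the choice of K′ is restricted to the grid here, the policy function is a step function; the MATLAB code below instead searches over a continuous interval and interpolates.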

2.1 A Numerical Example


Suppose that the parameter values are as follows: σ = 2, β = 0.95, δ = 0.1, α = 0.33. This can be taken to be a reasonable parameterization at an annual frequency. The Euler equation for this problem is:

$$C^{-\sigma} = \beta C'^{-\sigma} \left( \alpha K'^{\alpha - 1} + (1-\delta) \right)$$

In steady state, C′ = C = C∗ and K′ = K = K∗. Imposing this in the Euler equation gives 1 = β(αK∗^(α−1) + (1 − δ)), which can be solved for the steady-state capital stock:

$$K^* = \left( \frac{\alpha}{1/\beta - (1-\delta)} \right)^{\frac{1}{1-\alpha}} \qquad (1)$$
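Equation (1) is easy to check directly. Here is a standalone Python computation using the parameter values given above:

```python
# Steady-state capital from equation (1), using the parameter values above
alpha, beta, delta = 0.33, 0.95, 0.1

kstar = (alpha / (1/beta - (1 - delta)))**(1/(1 - alpha))
print(round(kstar, 3))  # -> 3.161
```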
For the parameters I have chosen, I get a value of K∗ ≈ 3.161. We then need to construct a grid of possible values of K. I will construct a grid with N = 100 values, ranging between 25 percent of the steady-state value of K and 175 percent of it. The MATLAB code is:

1 kmin = 0.25*kstar;
2 kmax = 1.75*kstar;
3 grid = (kmax-kmin)/kgrid; % kgrid = 99, so there are kgrid+1 = 100 grid points
4
5 kmat = kmin:grid:kmax;
6 kmat = kmat';
7 [N,n] = size(kmat);

The resulting vector is called “kmat” and is 100×1. Next I guess a value function. For simplicity
I will just guess a vector of zeros:

1 v0 = zeros(N,1);

I need to specify a tolerance for when to stop searching over value functions. In particular, my convergence criterion is the norm of the difference between V^n(K) and V^{n−1}(K). The norm I’m going to use is the square root of the sum of squared deviations; one could use other measures of closeness between the value functions. I set this tolerance at 0.01. I also specify the maximum number of iterations that MATLAB can take, which I set at 1000. The MATLAB code for these preliminaries is:

1 tol = 0.01;
2 maxits = 1000;

I’m going to write a loop over all possible values of the state. For each value of K, I find the K′ that maximizes the Bellman equation given my guess of the value function. A function file, which we will return to in a minute, does the maximizing. Outside of the loop I have a “while” statement, which tells MATLAB to keep repeating the loop as long as the difference between successive value functions is greater than the tolerance. The MATLAB code for this part is:

1 while dif>tol & its < maxits
2     for i = 1:N
3         k0 = kmat(i,1);
4         k1 = fminbnd(@valfun,kmin,kmax);
5         v1(i,1) = -valfun(k1);
6         k11(i,1) = k1;
7     end
8     dif = norm(v1-v0);
9     v0 = v1;
10     its = its+1
11 end

This code can be interpreted as follows. For each value i in the state space, I get the initial value of K. The command on line 4 then finds the argmax of the Bellman equation, which is computed in the function file “valfun.m”. Line 5 collects the optimized value into the new value function (called v1), and line 6 records the policy function associated with this choice (i.e., the optimal choice of K′, called k1 in this code). After the loop over the possible values of the state, I calculate the difference between successive value functions and write out the iteration number. I do not include a semi-colon after these statements so that I can see the convergence taking place in the command window.
A complication arises in the function file because “fminbnd” assumes a continuous choice set: it will search over all values of k1 between “kmin” and “kmax”, including points not in the original capital grid. For this reason, we need to interpolate values of the value function off of the grid. I will use linear interpolation. Suppose that MATLAB chooses a point K that is off the grid. I can approximate the value function at that point by assuming that the value function is linear between the two grid points immediately around it. In particular, I find the grid points such that K(i) < K < K(i + 1). Then I can approximate the value function at this point as:

$$V(K) \approx V(K(i)) + \frac{V(K(i+1)) - V(K(i))}{K(i+1) - K(i)} \left( K - K(i) \right)$$
One can use other forms of interpolation. If the problem is not too non-linear, and your grid points are close enough to one another, linear interpolation ought to be pretty good. MATLAB has a built-in interpolation function called “interp1” that will do this for you.

1 g = interp1(kmat,v0,k,'linear'); % smooths out previous value function
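The linear interpolation formula above is easy to verify numerically. Here is an illustrative Python snippet with a made-up three-point grid; NumPy's interp plays the role of MATLAB's interp1:

```python
import numpy as np

kmat = np.array([1.0, 2.0, 3.0])     # a tiny capital grid (made-up numbers)
v0 = np.array([0.0, 10.0, 14.0])     # value function known on the grid

k = 2.5                              # an off-grid point chosen by the optimizer
# linear interpolation between the two surrounding grid points:
g = v0[1] + (v0[2] - v0[1]) / (kmat[2] - kmat[1]) * (k - kmat[1])
print(g)                             # 12.0
assert abs(g - np.interp(k, kmat, v0)) < 1e-12   # matches the built-in routine
```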

Now, given the choice of K ′ , we can define C in Matlab as:

1 c = k0^alpha - k + (1-delta)*k0; % consumption

We need to include a penalty to prevent consumption from going negative and then calculate
the Bellman equation as follows:

1 if c <= 0
2     val = -888888888888888888 - 800*abs(c); % keeps consumption from going negative
3     else
4     %val = log(c) + beta*g;
5     val = (1/(1-s))*(c^(1-s)-1) + beta*g;
6 end
7 val = -val; % make it negative since we're maximizing and code is to minimize.

The commented-out part is for use when s = 1, in which case the utility function collapses to the natural log. My complete code for the main file and the function file is given below:

1 clear all;
2 close all;
3
4 tic
5
6 % Eric Sims
7 % University of Notre Dame
8 % Graduate Macro II
9
10 % This program uses Value Function Iteration to solve a neoclassical growth
11 % model with no uncertainty
12
13 global v0 beta delta alpha kmat k0 s
14
15 plott = 0; % set to 1 to see plots
16
17 % set parameters
18 alpha = 0.33; % capital's share
19 beta = 0.95;
20 delta = 0.1; % depreciation rate (annual)
21 s = 2;
22
23 tol = 0.01;
24 maxits = 1000;
25 dif = tol+1000;
26 its = 0;
27
28 kgrid = 99; % number of grid points is kgrid+1 = 100
29 kstar = (alpha/(1/beta - (1-delta)))^(1/(1-alpha)); % steady state k
30 cstar = kstar^(alpha) - delta*kstar;
31 istar = delta*kstar;
32 ystar = kstar^(alpha);
33
34 kmin = 0.25*kstar;
35 kmax = 1.75*kstar;
36 grid = (kmax-kmin)/kgrid;
37
38 kmat = kmin:grid:kmax;
39 kmat = kmat';
40 [N,n] = size(kmat);
41
42 polfun = zeros(kgrid+1,3);
43
44 v0 = zeros(N,1);
45 dif = 10;
46 its = 0;
47
48 while dif>tol & its < maxits
49     for i = 1:N
50         k0 = kmat(i,1);
51         k1 = fminbnd(@valfun,kmin,kmax);
52         v1(i,1) = -valfun(k1);
53         k11(i,1) = k1;
54     end
55     dif = norm(v1-v0);
56     v0 = v1;
57     its = its+1
58 end
59
60 for i = 1:N
61     con(i,1) = kmat(i,1)^(alpha) - k11(i,1) + (1-delta)*kmat(i,1);
62     polfun(i,1) = kmat(i,1)^(alpha) - k11(i,1) + (1-delta)*kmat(i,1);
63 end
64
65 figure
66 plot(kmat,v1,'-k','Linewidth',1)
67 xlabel('k')
68 ylabel('V(k)')
69
70 figure
71 plot(kmat,polfun,'-k','Linewidth',1)
72 xlabel('k')
73 ylabel('c')
74
75 toc

1 function val = valfun(k)
2
3 % This program gets the value function for a neoclassical growth model with
4 % no uncertainty and CRRA utility
5
6 global v0 beta delta alpha kmat k0 s
7
8 g = interp1(kmat,v0,k,'linear'); % smooths out previous value function
9
10 c = k0^alpha - k + (1-delta)*k0; % consumption
11 if c <= 0
12     val = -888888888888888888 - 800*abs(c); % keeps consumption from going negative
13 else
14     val = (1/(1-s))*(c^(1-s)-1) + beta*g;
15 end
16 val = -val; % make it negative since we're maximizing and code is to minimize.

Here is a plot of the resulting value function, plotted against the value of the current capital
stock:
[Figure: the value function V(k), plotted against k for k between 0 and 6.]

Next I show a plot of the policy function – given K, what is the optimal choice of K ′ ? I plot
this with a 45 degree line showing all points where K ′ = K, so that we can visually verify that a
steady state indeed exists and is stable:

[Figure: the policy function k_{t+1} plotted against k_t, together with a 45 degree line; the two lines cross at the steady state.]

Once I know K ′ , I can get C. Here is a plot of C against K:

[Figure: consumption c plotted against k, for k between 0 and 6.]

3 The Stochastic Growth Model


Making the problem stochastic presents no special challenges, other than an expansion of the state
space and having to deal with expectations and probabilities. The problem can be written:

$$V(K, A) = \max_{C} \left\{ \frac{C^{1-\sigma} - 1}{1-\sigma} + \beta E\left[ V(K', A') \right] \right\}$$

s.t.

$$K' = AK^\alpha - C + (1-\delta)K$$

You can impose that the constraint holds and re-write the problem as choosing the future state:

$$V(K, A) = \max_{K'} \left\{ \frac{(AK^\alpha + (1-\delta)K - K')^{1-\sigma} - 1}{1-\sigma} + \beta E\left[ V(K', A') \right] \right\}$$

The stochastic part comes in because A is stochastic. I assume that A follows a Markov process, which generically means that the current state is a sufficient statistic for forecasting future values of the state. Let Ã be a q × 1 vector of possible realizations of A, and let P be a q × q matrix whose (i, j) element is the probability of transitioning to state j from state i; hence the rows must sum to one. I assume that A can take on three values: low, medium, and high:
 
$$\tilde{A} = \begin{bmatrix} 0.9 \\ 1.0 \\ 1.1 \end{bmatrix}$$

I define the probability matrix so that all of its elements are 1/3:


 1 1 1

3 3 3
P =
 1 1 1 
3 3 3 
1 1 1
3 3 3

I can specify the A process as follows:

1 amat = [0.9 1 1.1]';
2 prob = (1/3)*ones(3,3);

This just means that the expected value of A next period is always 1, regardless of what current productivity is. It also means that the non-stochastic steady state value of A is also 1.
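This claim is easy to verify: each row of P is a conditional distribution, so the conditional means are just the row-weighted averages of the states. A quick Python check:

```python
import numpy as np

amat = np.array([0.9, 1.0, 1.1])   # possible realizations of A
prob = np.ones((3, 3)) / 3         # each row is a conditional distribution over A'

EA = prob @ amat                   # conditional expectation of A' from each state
print(EA)                          # each entry is (numerically) 1
```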
Another complication now is that my value function is going to have N × q elements. In other words, there is a different value to being in every capital/productivity combination. I store this as an N × q matrix, though I could conceivably use Kronecker products and make the state space a one-dimensional object.
The value function iteration proceeds similarly to above, but I have to take account of the
fact that there are expectations over future states now. I start with an initial guess of the value
function, V 0 (K, A). Then I construct the Bellman equation:

$$V^1(K, A) = \max_{K'} \left\{ \frac{(AK^\alpha + (1-\delta)K - K')^{1-\sigma} - 1}{1-\sigma} + \beta E\left[ V^0(K', A') \right] \right\}$$

Here the expectation is taken over the future value function. The uncertainty comes in over future values of A′: you are choosing K′ based on an expectation of A′. Given a choice of K′ and a current value of A, there are three possible values of A′, with probabilities given by the row of the transition matrix corresponding to the current state of A.
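Concretely, with the value function stored as an N × q matrix, the expected continuation value is just a matrix-vector product against the relevant row of P. Here is an illustrative Python sketch with made-up numbers, mirroring the g*prob(j,:)' line in the MATLAB function file below:

```python
import numpy as np

prob = np.ones((3, 3)) / 3               # transition matrix P
V0 = np.array([[1.0, 1.2, 1.4],          # V^0(K, A): rows index K, columns index A
               [2.0, 2.3, 2.6]])         # (made-up numbers, N = 2, q = 3)

j = 0                                    # index of the current state of A
EV = V0 @ prob[j, :]                     # E[V^0(K', A') | A = A_j] for each K'
print(EV)                                # approximately [1.2, 2.3]
```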
The new MATLAB code in the main file looks like this. It is almost identical to the code above, except that there is now a double loop (i.e., looping over both states):

1 while dif>tol & its < maxits
2     for j = 1:3
3         for i = 1:N
4             k0 = kmat(i,1);
5             a0 = amat(j,1);
6             k1 = fminbnd(@valfun_stoch,kmin,kmax);
7             v1(i,j) = -valfun_stoch(k1);
8             k11(i,j) = k1;
9         end
10     end
11     dif = norm(v1-v0)
12     v0 = v1;
13     its = its+1
14 end

The function file takes account of the expectations in the following way:

1 function val = valfun_stoch(k)
2
3 % This program gets the value function for a stochastic growth model with
4 % CRRA utility
5
6 global v0 beta delta alpha kmat k0 prob a0 s j
7
8 g = interp1(kmat,v0,k,'linear'); % smooths out previous value function
9
10 c = a0*k0^alpha - k + (1-delta)*k0; % consumption
11 if c <= 0
12     val = -8888888888888888 - 800*abs(c); % keeps consumption from going negative
13 else
14     val = (1/(1-s))*(c^(1-s) - 1) + beta*(g*prob(j,:)');
15 end
16 val = -val; % make it negative since we're maximizing and code is to minimize.

Basically, I post-multiply the interpolated value function by the transpose of the appropriate row of the probability matrix. The resulting value functions – V(K) plotted against K for different values of A – are plotted below. As one would expect, the value of being in a “high” A state exceeds the value of being in the low A states. The differences are not very large, primarily because my specification of P features no real persistence in A.

[Figure: the value function V(k) against k for A = 0.9, A = 1.0, and A = 1.1.]

[Figure: the consumption policy function c against k for A = 0.9, A = 1.0, and A = 1.1.]

4 Dealing with “Static” Variables


As written above, all of the variables are dynamic – choices today directly affect the states tomorrow.
In many models we will also want to work with “static” variables, variables whose choice today
does not directly influence the value of future states. An example here is variable employment.
Suppose that we take the neoclassical growth model with variable employment.
Let the time endowment of the agents be normalized to 1, and let their work effort be given by
n. They get utility from leisure, which is 1 − n. The dynamic program can be written as:

$$V(K) = \max_{C,\,n} \left\{ \frac{C^{1-\sigma} - 1}{1-\sigma} + \theta \ln(1-n) + \beta V(K') \right\}$$

s.t.

$$K' = K^\alpha n^{1-\alpha} - C + (1-\delta)K$$

The first order condition for the choice of employment is:

$$\frac{\theta}{1-n} = C^{-\sigma} (1-\alpha) K^\alpha n^{-\alpha} \qquad (2)$$

This first order condition must hold for any choice of C (equivalently, K ′ ) along an optimal
path. Hence, you can effectively treat this first order condition as a “constraint” that must hold.
It essentially determines n implicitly as a function of C and K. Then you can write the modified
Bellman equation as:
$$V(K) = \max_{K'} \left\{ \frac{\left( K^\alpha n^{1-\alpha} + (1-\delta)K - K' \right)^{1-\sigma} - 1}{1-\sigma} + \theta \ln(1-n) + \beta V(K') \right\}$$

s.t.

$$\frac{\theta}{1-n} = \left( K^\alpha n^{1-\alpha} + (1-\delta)K - K' \right)^{-\sigma} (1-\alpha) K^\alpha n^{-\alpha}$$
The “constraint” (the FOC for labor) implicitly defines n as a function of K and K ′ . So you
can effectively plug that in everywhere you see n showing up, leaving the problem as one of just
choosing K ′ given K.
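To make that last step concrete, here is one way to recover n from the FOC for a given (K, K′) pair: a Python sketch using simple bisection. The value θ = 1 and the bracketing interval are assumptions chosen for illustration, not values from the notes; the bracket must be chosen so that consumption stays positive and the residual changes sign.

```python
# Solving the labor FOC numerically by bisection (illustrative; theta is assumed)
alpha, delta, sigma, theta = 0.33, 0.1, 2.0, 1.0

def labor_foc(n, K, Kprime):
    """Residual of the FOC: theta/(1-n) - C^(-sigma)*(1-alpha)*K^alpha*n^(-alpha),
    with C = K^alpha*n^(1-alpha) + (1-delta)*K - Kprime substituted in."""
    C = K**alpha * n**(1 - alpha) + (1 - delta)*K - Kprime
    return theta/(1 - n) - C**(-sigma) * (1 - alpha) * K**alpha * n**(-alpha)

def solve_n(K, Kprime, lo=0.15, hi=0.99, tol=1e-10):
    """Bisection on the FOC residual; [lo, hi] is assumed to bracket the root."""
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if labor_foc(lo, K, Kprime) * labor_foc(mid, K, Kprime) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5*(lo + hi)

n = solve_n(K=3.16, Kprime=3.16)   # evaluated near the deterministic steady state
print(round(n, 3))                 # an interior solution, roughly 0.58 here
```

Inside the value function iteration, this root-finding step would be performed at each candidate (K, K′) before evaluating the Bellman objective.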
