
STOCHASTIC PROCESSES, ITÔ CALCULUS, AND APPLICATIONS IN ECONOMICS
Timothy P. Hubbard & Yigit Saglam

Department of Economics
University of Iowa
March 3, 2006
Abstract
This document provides an introduction to stochastic processes and Itô calculus, with emphasis on what an economist needs to understand to do research on optimal control and dynamic programming problems involving randomness. Specifically, we show why learning this material is important by illustration with an example in the first section. We then discuss stochastic processes and, in particular, Wiener processes and Brownian motion. In the third section we introduce Itô calculus, a tool for dealing with stochastic integrals and stochastic differential equations. In the last section of the manuscript we discuss some applications from the field of natural resource economics.

Department of Economics, University of Iowa, W210 Pappajohn Business Building, Iowa City,
IA 52242, [email protected], [email protected]
1. Introduction
Economists often want to model problems in a way that incorporates the notion that today's choices affect future decisions (there is a trend component) but also allows for shocks that alter the state in future periods (there is a random component). Take, for example, a stochastic growth model as in Stokey and Lucas with Prescott (1989). In their example, output y, which is f(k)z in a stochastic one-period growth model, is determined by the size of the current capital stock, k, and a stochastic technology shock, z. Assuming z is distributed independently and identically over time, the objective can be expressed in the following Bellman equation:

v(k, z) = \max_{k' \in [0, f(k)z]} \left\{ u\big(f(k)z - k'\big) + \beta E\big[v(k', z')\big] \right\}.   (1.1)

The Bellman equation shows that production depends on the shock, z, and hence so does the agent's utility today. Notice also that, unlike in deterministic dynamic programming, we now have discounted expected utility entering into our optimization problem. What does this new E operator mean? Suppose that z \in \{z_1, \ldots, z_N\} are the possible values that the shock z could take, and that \text{Prob}(z_i) is \pi_i, where \sum_{i=1}^N \pi_i = 1.
Then (1.1) can be written as

v(k, z) = \max_{k' \in [0, f(k)z]} \left\{ u\big(f(k)z - k'\big) + \beta \sum_{i=1}^N v(k', z_i)\,\pi_i \right\}.
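The discrete-shock Bellman equation above can be solved numerically by value function iteration. A minimal sketch, where the primitives are assumed purely for illustration (the text specifies none of them): log utility, Cobb-Douglas production f(k) = k**alpha, and a symmetric two-point i.i.d. shock.

```python
import math

# Value function iteration for the discrete-shock Bellman equation.
# Assumed (illustrative) primitives: log utility, f(k) = k**alpha,
# and a two-point i.i.d. technology shock with probabilities pi_i.
alpha, beta = 0.3, 0.95
kgrid = [0.05 + 0.025 * i for i in range(40)]      # grid for capital k
zvals, probs = [0.9, 1.1], [0.5, 0.5]              # shock values and pi_i

v = {(k, z): 0.0 for k in kgrid for z in zvals}    # initial guess
for _ in range(200):
    v_new = {}
    for k in kgrid:
        for z in zvals:
            y = z * k ** alpha                     # output f(k)z
            best = -math.inf
            for kp in kgrid:                       # candidate k' in [0, f(k)z)
                if kp >= y:
                    break
                ev = sum(p * v[(kp, zp)] for zp, p in zip(zvals, probs))
                best = max(best, math.log(y - kp) + beta * ev)
            v_new[(k, z)] = best
    v = v_new
```

The expectation E[v(k', z')] is just the probability-weighted sum inside the loop, and the converged v is increasing in both k and z.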
In general we want to construct problems where we can allow \pi_i to depend on today's shock z, so that if z' \in [a, b] and [c, d] \subseteq [a, b],

\text{Prob}\{z' \in [c, d]\} = \int_c^d \pi(z')\,dz'.

In this case we rewrite (1.1) as

v(k, z) = \max_{k' \in [0, f(k)z]} \left\{ u\big(f(k)z - k'\big) + \beta \int_a^b v(k', z')\,\pi(z')\,dz' \right\}.   (1.2)
To understand what this equation means, we first need to learn some concepts from measure theory and stochastic calculus. This document serves as a basic introduction to these ideas.
2. Stochastic Processes
A stochastic process can be thought of as a system that evolves over time in a random manner. Mathematically, this means that a stochastic process maps an element from a probability space into a state space. If the time parameter t can take on any value in \mathbb{R}_+, then the process is a continuous-time stochastic process, whereas if t \in \mathbb{Z}_+, where \mathbb{Z}_+ is the set of nonnegative integers, then the process is a discrete-time process. Some specific examples of stochastic processes include a Wiener process (explained below), a random walk, and a Markov process (a Markov chain is the corresponding discrete-time process). A Wiener process, W(\cdot), is a continuous-time stochastic process defined to have the following properties:
1. W(t) \sim N(0, t) for all t \geq 0,
2. if 0 \leq s < t then [W(t) - W(s)] \sim N(0, t - s), which results because W(\cdot) is a Gaussian stochastic process,
3. if [s, t) and [u, v) do not overlap then [W(t) - W(s)] and [W(v) - W(u)] are independent random variables.
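These defining properties are easy to check by simulation. A small sketch (illustrative parameter values, standard library only) that builds Wiener paths from independent N(0, dt) increments and checks that var[W(T)] is approximately T, as property 1 requires:

```python
import random

# Build Wiener paths from independent N(0, dt) increments and check
# property 1: var[W(T)] should be approximately T.
random.seed(0)
n_paths, n_steps, T = 2000, 100, 1.0
dt = T / n_steps

endpoints = []
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += random.gauss(0.0, dt ** 0.5)   # increment ~ N(0, dt)
    endpoints.append(w)                     # W(T) for this path

mean_hat = sum(endpoints) / n_paths                 # near 0
var_hat = sum(w * w for w in endpoints) / n_paths   # near T = 1
```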
2.1. Brownian Motion
Brownian motion is a Wiener process in which a quantity of interest constantly undergoes small, random fluctuations. The differential form of Brownian motion can be written

dS(t) = \mu\,dt + \sigma\,dW(t)   (2.1)

where W(t) is a Wiener process, \mu\,dt is E[dS(t)], and \sigma^2\,dt is \text{var}[dS(t)]. The process was named for Scottish botanist Robert Brown, who first studied such fluctuations while observing pollen grains suspended in water. Given an initial starting point, one notices that a particle drifts over time, and it is possible to compute the probability of a particle moving a specific distance in any direction during a certain time interval in a specific medium; i.e., the probability a particle will move more than a specified distance (think survivor function). Albert Einstein later concluded that a smaller particle, a less viscous fluid, and a higher temperature all increase the amount of motion one would expect to observe. Let S(t) denote the position of the particle at time t; then the position of the particle at time t + dt is distributed as follows:

[S(t + dt)\,|\,S(t) = g] \sim N(g + \mu\,dt,\, \sigma^2\,dt),   (2.2)
[Figure 1: a realized sample path of a Brownian motion over 200 time periods, starting from zero.]
where \mu is the mean of the increments (drift) and \sigma^2 the variance of the increments (this results from (2.1) and is shown later, after we introduce stochastic integration). This formula specifies the probability density function for how far the particle moves. In Figure 1 we have traced the realized path of a quantity over 200 time periods, given that it follows a Brownian motion and assuming the initial quantity is zero. We note without proof that, as is clear in the figure, Brownian motion is almost surely continuous and almost surely nowhere differentiable in t. Another important property is that Brownian motion is a Markovian process; that is, the probability of being in state S(t) at time t, given all states up to time t - 1, depends only on the previous state, S(t - 1), at time t - 1. Said differently, the prediction of what states will occur in the future depends only on the current state.
Stokey (1998) discusses a way to numerically approximate a Brownian motion to within any specified degree of accuracy, using a discrete-time random walk process as an approximation. The relationship between a random walk process and Brownian motion is fairly straightforward. Consider a process S that, for each time increment \Delta, increases by h with probability p and decreases by h with probability (1 - p) (this process can be modeled as a binomial tree). Then the process S has the following properties:

E[S(t + \Delta) - S(t)] = ph - (1 - p)h = h(2p - 1)

and

E\big[S(t + \Delta) - S(t)\big]^2 = ph^2 + (1 - p)h^2 = h^2.

Define Q(N) to be \sum_{i=1}^N \Delta S(i), which represents the position of a process that follows a random walk after N steps, where we assume p equals 1/2 and h equals one. If the N steps take T periods, then \Delta is defined to be T/N. Since we assumed p is 1/2, this implies E[\Delta S(t)] is zero and \text{var}[\Delta S(t)] is one. Consequently E[Q(N)] is zero and \text{var}[Q(N)] is N, by independence. By the Central Limit Theorem, as N \to \infty, Q(N)/\sqrt{N} \xrightarrow{D} N(0, 1). Thus, if we divide each Q by \sqrt{N}, we obtain a sequence of random processes that converges to Brownian motion as N \to \infty. It can also be shown that one can obtain a Brownian motion with parameters (\mu, \sigma) by setting \mu\Delta equal to h(2p - 1) and \sigma^2\Delta equal to h^2.
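The convergence of the scaled random walk can be checked by simulation. A sketch (illustrative values of N and the sample size): with p = 1/2 and h = 1, the scaled position Q(N)/\sqrt{N} should be approximately N(0, 1) for large N.

```python
import random

# Random-walk approximation to Brownian motion: with p = 1/2, h = 1,
# Q(N)/sqrt(N) is approximately N(0, 1) for large N (CLT).
random.seed(1)
N, n_samples = 400, 3000

scaled = []
for _ in range(n_samples):
    q = sum(random.choice((1, -1)) for _ in range(N))   # Q(N)
    scaled.append(q / N ** 0.5)

mean = sum(scaled) / n_samples
var = sum((x - mean) ** 2 for x in scaled) / n_samples
# mean should be near 0 and var near 1, the CLT limit N(0, 1)
```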
2.2. Geometric Brownian Motion
Often authors assume geometric Brownian motion (GBM), which has the property that the logarithm of a randomly varying quantity follows a Brownian motion. This assumption is frequently employed when the randomly varying quantity can take on only positive values. A stochastic process S(t) follows a GBM if it satisfies

dS(t) = uS(t)\,dt + vS(t)\,dW(t)   (2.3)

where W(t) is a Wiener process, u is the expected proportional change in S(t), and v^2 is the variance of the proportional change. Note that \log[S(t)/S(0)] \sim N\big((u - \tfrac{1}{2}v^2)t,\, v^2 t\big); that is, increments of a GBM are normal relative to the values of the variable in the current state. As an example, consider a model for stock price behavior; then (2.3) can be interpreted as in the Black-Scholes model, where S(t) is the stock price at time t, u is the instantaneous rate of return on a riskless asset, and v is the volatility of the stock. As (2.3) hints, we are often more interested in the process dS(t) than in the process S(t), but studying it requires the use of Itô calculus, which we discuss next.
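The distribution of log increments stated above can be checked directly by simulation, using the standard closed-form solution S(t) = S(0)\exp[(u - v^2/2)t + vW(t)]. A sketch with illustrative parameter values:

```python
import math
import random

# Simulate GBM via its exact solution S(T) = S0*exp((u - v**2/2)T + v*W(T))
# and check that log[S(T)/S(0)] ~ N((u - v**2/2)T, v**2 T).
random.seed(2)
u, v, T, S0 = 0.08, 0.2, 1.0, 100.0

logs = []
for _ in range(5000):
    WT = random.gauss(0.0, math.sqrt(T))                  # W(T) ~ N(0, T)
    ST = S0 * math.exp((u - 0.5 * v ** 2) * T + v * WT)   # always positive
    logs.append(math.log(ST / S0))

mean = sum(logs) / len(logs)                                # near (u - v**2/2)T = 0.06
variance = sum((x - mean) ** 2 for x in logs) / len(logs)   # near v**2 T = 0.04
```

Note that S(T) is positive in every draw, which is exactly why GBM suits quantities restricted to positive values.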
3. Itô Calculus
In this section we use concepts from stochastic processes to introduce the idea of a stochastic integral and to illustrate some of its important properties.
3.1. Solving an Itô Integral
To begin talking about how to interpret and solve (2.3), we need a method for dealing with stochastic differential equations. Our approach is similar to the pedagogical method one would use to teach Riemann integrals, in that we use a limiting approach to help illustrate the intuition of stochastic calculus and Itô integrals. We begin by considering simple functions, which are a step closer to stochastic integrals in the sense that they are like the rectangular step functions one uses to approximate the area under a curve, with the exception that their stochastic counterparts, simple processes, are like random step functions. Knowing a little measure theory will help in understanding this analogy but is not required.
Let 1_{A_i}(s) be an indicator function that takes on a value of one if s \in A_i and zero otherwise, where A_i \subseteq A. A simple function \phi(s) can be defined as

\phi(s) = \sum_{i=1}^n a_i\,1_{A_i}(s)   (3.1)

where a_i is the value the function attains if s \in A_i. Now consider a simple function \phi(s) and a measure space (A, \mathcal{F}, \mu), where A is a set, \mathcal{F} is a \sigma-algebra, and \mu is a measure mapping \mathcal{F} into \mathbb{R}. Then the integral of \phi(\cdot) with respect to measure \mu is

\int_s \phi(s)\,\mu(ds) = \sum_{i=1}^n a_i\,\mu(A_i).   (3.2)

The interpretation is that in computing the stochastic integral we want to weight things that are more likely to happen. This integral is like a typical Riemann integral in the sense that we are still adding together the areas of rectangles, except that now we weight the regions by how likely they are according to the measure \mu.
The following example will help illustrate this idea. Consider the set A = \mathbb{R}_+ with subsets \{A_1, A_2, A_3, A_4\} such that \cup_i A_i is \mathbb{R}_+ and A_i \cap A_j is empty for all i \neq j, and let \mu(A_1) equal .3, \mu(A_2) equal .2, \mu(A_3) equal .1, and \mu(A_4) equal .4 (note that \mu is a probability measure, as \sum_i \mu(A_i) is one). Also let a_1, the value that \phi(s) attains for any element in A_1, be 5. Likewise let a_2, a_3, and a_4 take on the values 3, 2, and 6, respectively. For this example, the simple function \phi(s) is shown in Figure 2. In this example,

\sum_{i=1}^4 a_i\,\mu(A_i) = 5(.3) + 3(.2) + 2(.1) + 6(.4) = 4.7.
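The worked example translates directly into code: the integral of a simple function with respect to a probability measure is just the measure-weighted sum of its values.

```python
# Integral of a simple function phi with respect to a probability
# measure mu: sum_i a_i * mu(A_i), as in (3.2).
values = [5.0, 3.0, 2.0, 6.0]       # a_i, the value of phi on each A_i
measure = [0.3, 0.2, 0.1, 0.4]      # mu(A_i); these sum to one

integral = sum(a * m for a, m in zip(values, measure))
print(round(integral, 10))   # 4.7
```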
[Figure 2: the simple function \phi(s), a step function on \mathbb{R}_+ taking the values a_1 = 5, a_2 = 3, a_3 = 2, and a_4 = 6 on the sets A_1, A_2, A_3, and A_4.]
While (3.2) brings us one step closer (compared to a Riemann sum) to solving stochastic integrals, we need some definitions and conditions that will allow us to solve stochastic integrals of the form

I_S(t) = \int_0^t S(t)\,dW(t), \quad t \geq 0   (3.3)

where, again suppressing arguments for convenience, I_S is the integral of S with respect to W. We maintain the assumption that the integral is bounded by imposing the following restriction:

E\left[\int_0^t S(x)^2\,dx\right] < \infty, \quad \forall\, t > 0.   (3.4)
To begin solving stochastic integrals as in (3.3), define a simple process to be a stochastic process that has a countable ordered sequence \{t_k\}_{k=0}^K such that

S(r) = S(t_{k-1}), \quad r \in [t_{k-1}, t_k).   (3.5)

To solve the stochastic integral of a simple process with steps \{t_k\}_{k=0}^K, we slightly adjust our integral for simple functions discussed above to account for the stochastic (Wiener) process W, so that

\int_0^t S(t)\,dW(t) \equiv \sum_{i=0}^{n-1} S(t_i)\,[W(t_{i+1}) - W(t_i)] + S(t_n)\,[W(t) - W(t_n)]   (3.6)

where 0 < t_n < t. Note that in this equation the state value at each point in time is multiplied by an increment [W(t_{i+1}) - W(t_i)] \sim N(0, t_{i+1} - t_i) (by definition of a Wiener process). Thus the state variable amplifies the random fluctuations. In Figure 2 we can think of S(t) as the height of the simple function \phi(s) and the W(t_i) as the endpoints of the subsets A_i.
To extend the concept of the Itô integral to a broader class of integrands, we need to show that there exists a unique sequence of simple processes that approximates the function or, in the case where there is more than one such sequence, that all approximating sequences converge to the same value. Thus we approximate a general integrand \phi(t) by a sequence of simple processes and use the integral defined in (3.6). We state the following without proving existence and uniqueness of the sequence of simple processes, as the proof is beyond the scope of this document and is not useful for applications.

Proposition: If \phi(t) satisfies (3.4), then \phi(t) is integrable and there exists a stochastic process such that (3.3) is satisfied.

Thus our definition of an Itô integral for a simple process in (3.6) provides a way to approximate stochastic integrals for a general function, so long as (3.4) is satisfied.
Given our definition of the Itô integral as the limit of a sequence of integrals of simple processes, we can now derive some properties. First note that

E\left[\int_t S(t)\,dW(t)\right] = E\left[\sum_{i=0}^n S(t_i)\,[W(t_{i+1}) - W(t_i)]\right] = 0

where the last equality results because the S(t_i) are independent of [W(t_{i+1}) - W(t_i)]. This says that the expected value of a stochastic integral is zero! In terms of variance,

\text{var}\left[\int_t S(t)\,dW(t)\right] = E\left[\left(\int_t S(t)\,dW(t)\right)^2\right] = \int_t E\left[S(t)^2\right]dt

where the last integral is a standard (Riemann) integral, which results from the fact that dW^2 is dt (this is shown explicitly in the next subsection). Along the same lines, if we consider two simple processes, S(t) and P(t), then

\text{cov}\left[\int_t S(t)\,dW(t),\, \int_t P(t)\,dW(t)\right] = E\left[\int_t S(t)\,dW(t)\int_t P(t)\,dW(t)\right] = E\left[\int_t S(t)P(t)\,dt\right]

where again the last integral is a Riemann integral and we have substituted dt for dW^2. Not surprisingly, if we again consider two simple processes and two constants, a and b, then

\int_t [aS(t) + bP(t)]\,dW(t) = a\int_t S(t)\,dW(t) + b\int_t P(t)\,dW(t)

which implies that the stochastic integral of a weighted sum is the weighted sum of the stochastic integrals.
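The zero-mean and variance properties above can be checked by Monte Carlo, taking S(t) = W(t) as the integrand (an illustrative choice): the integral then has mean zero, and its variance is \int_0^T E[W(t)^2]\,dt = \int_0^T t\,dt = T^2/2.

```python
import math
import random

# Monte Carlo check of the stochastic-integral properties for S(t) = W(t):
# mean 0, variance int_0^T E[W(t)**2] dt = T**2/2. Values illustrative.
random.seed(3)
T, n_steps, n_paths = 1.0, 200, 4000
dt = T / n_steps

integrals = []
for _ in range(n_paths):
    w, ito = 0.0, 0.0
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))
        ito += w * dw        # evaluate S at the left endpoint, as in (3.6)
        w += dw
    integrals.append(ito)

mean = sum(integrals) / n_paths                           # near 0
var = sum((x - mean) ** 2 for x in integrals) / n_paths   # near T**2/2 = 0.5
```

Evaluating the integrand at the left endpoint of each step is essential: it is what makes S(t_i) independent of the increment, and hence what makes the mean zero.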
3.2. Itô's Lemma
An often-used result in applications involving stochastic differential equations is Itô's lemma. One key to proving this lemma derives from property 2 of the definition of a Wiener process. Specifically, since W(t) - W(s) \sim N(0, t - s), we can write \Delta W as \epsilon\sqrt{\Delta t}, where \epsilon \sim N(0, 1). Letting \Delta t get infinitesimally small, we write dW as \epsilon\sqrt{dt}, with the following properties:

E(dW) = E\left(\epsilon\sqrt{dt}\right) = \sqrt{dt}\,E(\epsilon) = 0

and

\text{var}(dW) = E(dW^2) - E^2(dW) = E\left(\epsilon^2\,dt\right) = \text{var}(\epsilon)\,dt = dt.
Now, assuming a function f(S, t) is C^1 in t and C^2 in S, we can approximate the total differential df (omitting arguments for shorthand) by a Taylor series expansion as follows:

df = \frac{\partial f}{\partial t}dt + \frac{\partial f}{\partial S}dS + \frac{1}{2}\frac{\partial^2 f}{\partial S^2}dS^2 + \text{higher-order terms}
   = \frac{\partial f}{\partial t}dt + \frac{\partial f}{\partial S}(\mu\,dt + \sigma\,dW) + \frac{1}{2}\frac{\partial^2 f}{\partial S^2}\left(\mu^2\,dt^2 + 2\mu\sigma\,dt\,dW + \sigma^2\,dW^2\right) + \text{higher-order terms}.

If we drop terms of order higher than dt or dW^2, this simplifies to

df = \frac{\partial f}{\partial t}dt + \mu\frac{\partial f}{\partial S}dt + \sigma\frac{\partial f}{\partial S}dW + \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial S^2}dW^2.
Using the fact (shown above) that E(dW^2) is dt, we obtain Itô's lemma:

df = \left(\frac{\partial f}{\partial t} + \mu\frac{\partial f}{\partial S} + \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial S^2}\right)dt + \sigma\frac{\partial f}{\partial S}dW   (3.7)

which can be thought of as a chain rule for stochastic functions.
Consider, as a commonly used example, the case where S(t) follows a Brownian motion with parameters \mu and \sigma as in (2.1), and let R[S(t), t] be defined as \exp[S(t)]. Then we have the following:

\frac{\partial R}{\partial t} = 0, \quad \frac{\partial R}{\partial S} = \exp(S) = R, \quad \frac{\partial^2 R}{\partial S^2} = \exp(S) = R.

Now applying Itô's lemma we obtain

dR = \left(\mu + \frac{1}{2}\sigma^2\right)R\,dt + \sigma R\,dW   (3.8)

which shows that R(t) follows a GBM as in (2.3), where u is \left(\mu + \frac{1}{2}\sigma^2\right) and v is \sigma. This is a particularly important example in economics, as we often restrict variables (like R(t) here) to take on positive values.
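The drift in (3.8) can be checked by simulation. A sketch with illustrative parameter values: if S follows dS = \mu\,dt + \sigma\,dW with S(0) = 0, then (3.8) implies E[R(T)] = \exp[(\mu + \sigma^2/2)T].

```python
import math
import random

# Check Ito's lemma for R = exp(S): (3.8) implies
# E[R(T)] = exp((mu + sigma**2/2) * T) when S(0) = 0.
random.seed(4)
mu, sigma, T = 0.05, 0.3, 1.0

n_paths = 20000
total = 0.0
for _ in range(n_paths):
    ST = mu * T + sigma * random.gauss(0.0, math.sqrt(T))   # S(T)
    total += math.exp(ST)                                   # R(T) = exp(S(T))

sample_mean = total / n_paths
predicted = math.exp((mu + 0.5 * sigma ** 2) * T)   # about 1.0997
```

The extra \tfrac{1}{2}\sigma^2 in the growth rate of E[R] is exactly the second-order correction that Itô's lemma adds to the ordinary chain rule.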
3.3. Hamilton-Jacobi-Bellman Equation
Assume that X(t) follows a Brownian motion with an initial value X(0) of zero and parameters (\mu, \sigma^2). Then

X(t) = X(0) + \mu t + \sigma W(t), \quad \forall\, t,

where W is a Wiener process. This can be written in differential form as

dX(t) = \mu\,dt + \sigma\,dW(t), \quad \forall\, t.
Now, assume that F(t, x) is differentiable at least once in t and twice in x. Taking the total differential and using a Taylor series expansion yields

dF = \frac{\partial F}{\partial t}dt + \frac{\partial F}{\partial X}dX + \frac{1}{2}\frac{\partial^2 F}{\partial X^2}dX^2 + \ldots

After some manipulation, and after dropping higher-order terms, we obtain

dF = \frac{\partial F}{\partial t}dt + \mu\frac{\partial F}{\partial X}dt + \sigma\frac{\partial F}{\partial X}dW + \frac{1}{2}\sigma^2\frac{\partial^2 F}{\partial X^2}dW^2.

Since E(dW) is zero and E[(dW)^2] is dt, after taking expectations we find the mean and variance of dF:

E(dF) = \left[\frac{\partial F}{\partial t} + \mu\frac{\partial F}{\partial X} + \frac{1}{2}\sigma^2\frac{\partial^2 F}{\partial X^2}\right]dt

and

\text{var}(dF) = E\left[dF - E(dF)\right]^2 = \sigma^2\left(\frac{\partial F}{\partial X}\right)^2 dt.
Let v(x_0) be the expected discounted value of a stream of returns, \pi[X(t)], given that the initial state X(0) is x_0, which implies

v(x_0) = E\left[\int_0^\infty e^{-\rho t}\,\pi[X(t)]\,dt \;\middle|\; X(0) = x_0\right].   (3.9)

Assuming \pi(\cdot) is continuous and bounded, the Riemann integral in (3.9) is well defined. Now, for any small interval of time \Delta t, (3.9) has the following Bellman-type property:

v(x_0) \approx \pi(x_0)\Delta t + \frac{1}{1 + \rho\Delta t}\,E\{v[X(0 + \Delta t)]\,|\,X(0) = x_0\}.

After some manipulation, as \Delta t \to 0, we can derive

\rho v(x_0) = \lim_{\Delta t \to 0}\left\{\pi(x_0)(1 + \rho\Delta t) + \frac{1}{\Delta t}E[\Delta v\,|\,X(0) = x_0]\right\} = \pi(x_0) + \frac{1}{dt}E[dv\,|\,X(0) = x_0],   (3.10)

where \Delta v is defined as \{v[X(\Delta t, \omega)] - v(x_0)\}. Since we can write the second term on the right-hand side as
E(dv) = \left[\mu v'(x) + \frac{1}{2}\sigma^2 v''(x)\right]dt,

we can rewrite (3.10) as

\rho v(x) = \pi(x) + \mu(x)v'(x) + \frac{1}{2}\sigma^2(x)v''(x), \quad \forall\, x,   (3.11)

which is the Hamilton-Jacobi-Bellman (HJB) equation. The interpretation of the last equation is that the return on the asset, the term on the left-hand side, is equal to the sum of the dividend and the expected capital gain, the latter of which is captured by the last two terms on the right-hand side. Note that in a deterministic setting \sigma^2(x) is zero, and (3.11) thus simplifies to the standard Bellman equation.
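An HJB equation of this form can be solved numerically by finite differences. A sketch under assumed primitives, none of which come from the text: constant \mu and \sigma, and the illustrative flow payoff \pi(x) = x, chosen because the exact solution v(x) = x/\rho + \mu/\rho^2 is then available for validation; Dirichlet boundary values are taken from that exact solution.

```python
# Finite-difference solve of rho*v = pi(x) + mu*v'(x) + (sigma**2/2)*v''(x)
# on a grid, with assumed constant mu, sigma and illustrative pi(x) = x.
rho, mu, sigma = 0.05, 0.02, 0.1
n, h = 101, 0.01
xs = [i * h for i in range(n)]

def exact(x):
    return x / rho + mu / rho ** 2   # closed form when pi(x) = x

# Tridiagonal system A v = d, row i:
# rho*v_i - mu*(v_{i+1}-v_{i-1})/(2h) - (sigma**2/2)*(v_{i+1}-2v_i+v_{i-1})/h**2 = x_i
a = [0.0] * n
b = [1.0] * n
c = [0.0] * n
d = [0.0] * n
d[0] = exact(xs[0])          # Dirichlet boundaries from the exact solution
d[n - 1] = exact(xs[n - 1])
for i in range(1, n - 1):
    a[i] = mu / (2 * h) - 0.5 * sigma ** 2 / h ** 2
    b[i] = rho + sigma ** 2 / h ** 2
    c[i] = -mu / (2 * h) - 0.5 * sigma ** 2 / h ** 2
    d[i] = xs[i]             # pi(x) = x

# Thomas algorithm for the tridiagonal solve
for i in range(1, n):
    w = a[i] / b[i - 1]
    b[i] = b[i] - w * c[i - 1]
    d[i] = d[i] - w * d[i - 1]
v = [0.0] * n
v[n - 1] = d[n - 1] / b[n - 1]
for i in range(n - 2, -1, -1):
    v[i] = (d[i] - c[i] * v[i + 1]) / b[i]
```

Because the exact solution is linear in x, the centered differences are exact here and the grid solution should reproduce v(x) = x/\rho + \mu/\rho^2 up to rounding.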
4. Applications
This section discusses papers that embed stochastic environments in the kinds of models discussed above.
4.1. Clarke & Reed (1989)
The standard deterministic tree-cutting problem prescribes cutting down the tree when the proportional growth rate \dot{R}(t)/R(t) at time t equals the market discount rate \rho. Clarke and Reed extend this model to allow the asset's intrinsic value to evolve according to a continuous-time stochastic process. In a tree-cutting problem, randomness can be introduced into the model in two ways: through the physical size of a biological asset and through the intrinsic value of a unit (price). The authors construct a model in which the physical size is subject to random variability in growth (size fluctuates according to a stochastic differential equation) and price follows a GBM. Specifically, the following functional forms are assumed:

dq = b\,dt + \sigma_q\,dW_q   (4.1)

and

dy = g(t)\,dt + \sigma_y\,dW_y.   (4.2)
Note that q(t) \sim N\big(q(0) + bt,\, \sigma_q^2 t\big); thus equation (4.1) says the asset's price, P(t), which equals \exp[q(t)], evolves according to a GBM with constant drift, b, and constant variance, \sigma_q^2. The stochastic differential equation in (4.2) shows that \log X(t), which equals y(t), where X(t) is the asset's size at time t, behaves like a Brownian motion with time-dependent drift g(t) and variance \sigma_y^2. Note that the drift g(t) depends only on the asset's age, t, and not on the asset's size (in which case the drift would be h(y)). Optimal harvest rules with size-dependent policies are more complicated, as size can fluctuate over an asset's lifetime, whereas age is deterministic. The authors discuss the former in Reed and Clarke (1990) and the latter in this paper.
By applying Itô's lemma to (4.1) and using the definition of P(t), we obtain

dP = \left(b + \frac{1}{2}\sigma_q^2\right)P\,dt + \sigma_q P\,dW_q   (4.3)
where P(t) is lognormally distributed. What implications does this have? Since q(t) is defined as \log P(t), the expectation of P(t) increases exponentially over time. Note that

\text{Prob}\{P(t) < E[P(t)]\} = \text{Prob}\left\{P(t) < P(0)\exp\left[bt + \frac{1}{2}\sigma_q^2 t\right]\right\} = \text{Prob}\left\{q(t) < q(0) + bt + \frac{1}{2}\sigma_q^2 t\right\} = \Phi\left(\frac{1}{2}\sigma_q\sqrt{t}\right),   (4.4)

where \Phi(\cdot) is the standard normal cumulative distribution function. Thus, as t approaches \infty, almost all sample asset-price paths will end up below the exponential curve E[P(t)]. Likewise, applying Itô's lemma to (4.2) and using the definition of X(t), we derive

dX = \left[g(t) + \frac{1}{2}\sigma_y^2\right]X\,dt + \sigma_y X\,dW_y.   (4.5)
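The probability in (4.4) can be verified by simulation. A sketch with illustrative parameter values: the fraction of GBM price paths with P(t) < E[P(t)] should equal \Phi(\sigma_q\sqrt{t}/2).

```python
import math
import random

# Check (4.4): the fraction of GBM price paths below E[P(t)] should be
# Phi(sigma_q * sqrt(t) / 2). Parameter values are illustrative.
random.seed(5)
b, sigma_q, t, P0 = 0.03, 0.4, 25.0, 1.0

mean_P = P0 * math.exp(b * t + 0.5 * sigma_q ** 2 * t)   # E[P(t)]
n = 20000
below = 0
for _ in range(n):
    qt = b * t + sigma_q * random.gauss(0.0, math.sqrt(t))   # q(t) - q(0)
    if P0 * math.exp(qt) < mean_P:
        below += 1

frac = below / n
phi = 0.5 * (1.0 + math.erf(0.5 * sigma_q * math.sqrt(t) / math.sqrt(2)))
# frac and phi are both about 0.84: most paths lie below the mean path
```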
If we assume risk neutrality, then the market value W(y, q, t) is the expected present value given that we choose to cut the tree down at the optimal time \tau:

W(y, q, t) = \sup_{\tau \geq t} E\left\{\exp[-\rho(\tau - t) + q(\tau) + y(\tau)]\right\}.   (4.6)

Clarke and Reed then use an optimal stopping rule and assume a myopic-look-ahead (MLA) rule to simplify the problem. An MLA rule in a discrete-time model says to stop when the actual gain from cutting the tree is greater than the expected gain from waiting one period longer. Analogously, in a continuous-time model we want the asset's value at \tau, R(\tau), which is \exp[y(\tau) + q(\tau)], to equal the discounted expected value an infinitesimal time later, at \tau + d\tau; i.e., we need

\exp[y(\tau) + q(\tau)] = E_\tau\left\{\exp(-\rho\,d\tau)\exp[q(\tau) + dq + y(\tau) + dy]\right\}.
This condition and our specified processes in (4.1) and (4.2) imply the following MLA rule:

g(\tau_M) = \rho - b - \frac{1}{2}\left(\sigma_y^2 + \sigma_q^2 + 2\sigma_{yq}\right)   (4.7)

which implies that with age-dependent growth the optimal rule is to harvest the tree at a fixed age, \tau_M, at which the nonrandom component of the asset's proportional growth equals the discount rate. Note that the policy is independent of the price at \tau_M! This latter conclusion results because price follows a GBM. Consider now applying Itô's lemma and using the definition of R(t) to obtain

E(dR) = \left\{[b + g(t)] + \frac{1}{2}\left(\sigma_y^2 + \sigma_q^2 + 2\sigma_{yq}\right)\right\}R\,dt + o(dt)

which, using our MLA rule (4.7), implies

E_{\tau_M}\left[\frac{dR(\tau_M)}{R(\tau_M)}\right] = \rho\,d\tau_M,

just as in the standard deterministic optimal tree-cutting case. The authors proceed to show that the MLA rule is the optimal harvest rule in the age-dependent growth model by showing that (4.6) satisfies the HJB equation and all boundary conditions.
Clarke and Reed then derive some comparative statics results showing that price and growth uncertainty increase \tau_M, and that either an increase in the price drift or a reduction in the discount rate will increase \tau_M and W_{\tau_M} (the current market value). They also show that increases in the volatility \sigma_y (\sigma_q) increase the value of the asset, which simply obtains from the implication that E[X(t)] (E[P(t)]) increases exponentially in \sigma_y^2 (\sigma_q^2).
4.2. Reed & Clarke (1990)
The variables in the biological-asset valuation model are defined as follows:
P(t) = the unit price of the asset at time t,
q(t) = log[P(t)],
X(t) = the size of the asset at time t,
y(t) = log[X(t)],
R(t) = P(t)X(t), the aggregate intrinsic value (revenue yielded by harvesting) at time t,
V(X, P, t) = the asset's market value when its size is X and the price is P at time t,
\rho = positive, constant, instantaneous discount rate.
PRICE: The price follows a stochastic process called geometric Brownian motion,

\frac{dP}{P} = \alpha\,dt + \sigma_q\,dw_q.

The Brownian motion for prices indicates that prices evolve with a drift, \alpha, and a variance, \sigma_q^2. In logs,

dq = b\,dt + \sigma_q\,dw_q,

where b = \alpha - \frac{1}{2}\sigma_q^2 if Itô calculus is used.
SIZE: We might have two different processes for the log size of the biological asset:

dy = g(t)\,dt + \sigma_y\,dw_y   (4.8)

or

dy = h(y)\,dt + \sigma_y\,dw_y.   (4.9)

The first case is called pure age-dependent growth, in which the proportional growth in size depends on the current age, t, whereas the second is called pure size-dependent growth, in which the proportional growth in size depends on the current log size of the biological asset, y. The choice between them depends not only on the biological asset analyzed in a study but also on the availability of information about the asset. Pure size-dependent growth is assumed throughout this paper. The authors assume h(y) is decreasing, representing compensatory density-dependent growth.
Assuming risk neutrality and ignoring costs,

V(X, P, t) = \sup_{\tau \geq t} E\left\{\exp[-\rho(\tau - t)]\,P(\tau)X(\tau)\,\middle|\,P(t) = P,\, X(t) = X\right\}.

If we use y and q instead of X and P, then by assuming stationarity we get

V(X, P, t) = W(y, q) = \sup_{\tau \geq 0} E\left\{\exp[-\rho\tau + q(\tau) + y(\tau)]\,\middle|\,q(0) = q,\, y(0) = y\right\}.   (4.10)
STOPPING RULE: Given our objective function, we can now talk about the optimal stopping rule. A stopping rule partitions the y-q space into two regions: a stopping region and a continuation region. The process is stopped, i.e., the maximization problem is solved, when the pair \{y(t), q(t)\} enters the stopping region. Given the initial conditions and the objective function, and using a stopping rule S, we can use the HJB equation to find

\rho W^S(y, q) = h(y)W^S_y + bW^S_q + \frac{1}{2}\sigma_y^2 W^S_{yy} + \frac{1}{2}\sigma_q^2 W^S_{qq}.   (4.11)
The free-boundary problem is defined by the HJB equation together with the value-matching condition on the boundary,

W^S(y, q) = e^{y+q},

and the smooth-pasting conditions

W^S_y(y, q) = \frac{\partial}{\partial y}e^{y+q} = e^{y+q}

and

W^S_q(y, q) = \frac{\partial}{\partial q}e^{y+q} = e^{y+q}.
BARRIER RULE: A simple possible solution, which turns out to be optimal, is the barrier rule. A barrier rule says that whenever y exceeds some barrier value \bar{y}, the process stops. In this case the solution gives us

W(y, q) = E\left\{\exp[-\rho T_{y,\bar{y}} + q(T_{y,\bar{y}}) + \bar{y}]\,\middle|\,q(0) = q,\, y(0) = y\right\},   (4.12)

where T_{y,\bar{y}} is the first passage time for the \{y(t)\} process to reach \bar{y} from an initial state y.
MYOPIC-LOOK-AHEAD RULE: This rule keeps the process in the continuation region as long as the expected discounted intrinsic value of the asset an infinitesimal time ahead of the current time is larger than the current intrinsic value of the asset. The stopping condition is then

R(\tau) = E_\tau\left[e^{-\rho\,d\tau}R(\tau + d\tau)\right],   (4.13)

where E_\tau shows that the expectation is conditional on the state variables at time \tau. In Clarke and Reed (1989), discussed above, we showed that when the process for size is pure age-dependent, the MLA rule is an optimal harvest rule. However, in the case of pure size-dependent evolution of the size variable, the MLA rule is not optimal.
ONGOING vs. SINGLE ROTATIONS: Instead of a single rotation, i.e., harvesting the land only once, there may be ongoing rotations, i.e., the land becomes available for new forest growth after each harvest. In the latter case, the net present value of the revenues derived from a piece of land over all future harvests is known as the land expectation value; in a deterministic model it is given by

R(T)/(1 - e^{-\rho T}).

In the stochastic case it takes the form

L(P_0) = \sup E\left\{\sum_{i=1}^\infty e^{-\rho T_i}\,P(T_i)\,\tilde{X}(T_i)\,\middle|\,P(0) = P_0,\, X(0) = x_0\right\},   (4.14)

where the supremum is taken over \{T_i\}_{i=1,2,\ldots}, \tilde{X}(T_i) is X(T_i - T_{i-1}), the size of a stand at absolute time T_i, and X_0 is the initial size of a newly planted stand.
The result in the ongoing-rotation case is that the optimal harvest time is shorter than in the single-rotation case; the ongoing- and single-rotation rules generalize the tree-cutting rules of the Faustmann and Wicksell types, respectively.
B. Bibliography
Clarke, Harry R. and William J. Reed (1989): "The Tree-Cutting Problem in a Stochastic Environment," Journal of Economic Dynamics & Control, 13, 569-595.
Mitchell, Matthew F. (2004): Class notes, University of Iowa.
Reed, William J. and Harry R. Clarke (1990): "Harvest Decisions and Asset Valuation for Biological Resources Exhibiting Size-Dependent Stochastic Growth," International Economic Review, 31, 147-169.
Ross, Sheldon M. (1997): Introduction to Probability Models, 6th ed., San Diego, CA: Academic Press.
Stokey, Nancy L. (1998): Course materials, http://home.uchicago.edu/~nstokey/, University of Chicago.
Stokey, Nancy L. and Robert E. Lucas, Jr. with Edward C. Prescott (1989): Recursive Methods in Economic Dynamics, Cambridge, MA: Harvard University Press.