
Dynamic Optimization in Economics.

An Introduction
Renato Aguilar

Contents

1 Introduction
  1.1 Dynamic Models
  1.2 The Discrete-Time Case
  1.3 Dynamic Optimization Problems

2 Preliminary Matters
  2.1 A Few Critical Definitions
  2.2 A Useful Lemma
  2.3 Integration by Parts
  2.4 Differentiating Integrals

3 Differential Equations
  3.1 Introduction
  3.2 A Few Basic Definitions
  3.3 Separable Differential Equations
  3.4 First-Order Linear Differential Equations
      3.4.1 Solving first-order differential equations
      3.4.2 An important application: Bernoulli's equation
      3.4.3 Phase Diagrams
  3.5 Second-Order Linear Differential Equations
  3.6 An Application to Natural Resources
  3.7 Exercises

4 References

Chapter 1

Introduction
The main aim of these notes is to support an introductory course on dynamic optimization models in economics. We also present several important applications. Some of them are classic dynamic economic models, while others are interesting but more specialized models appearing in the economic literature. However, our focus is not on economic theory, or on the economic content of these models, but on the mathematics of their specification and solution.

When discussing Mathematics, a quest for formalization and rigor seems natural. However, we think that the typical reader of this text is not interested in Mathematics for its own sake, but as an instrument for presenting, discussing, and analyzing economic hypotheses and ideas. Thus, we had to reach a compromise between rigor and formalization, on one side, and readability and applicability, on the other. Such a trade-off is not easy to strike. We can also note that modern Economic Theory has a strong orientation towards formalization and mathematical rigor; observe, for example, the insistence on existence theorems in the contemporary literature.

1.1 Dynamic Models

Our starting point is the conviction that economic relationships and systems must be understood and analyzed in a dynamic, or temporal, perspective. In this sense, we strongly limit, or even reject, the possibility of instantaneous contemporaneous effects between economic variables. We will consider a real-valued variable $t$, often defined on an interval $[t_0, t_1] \subset \mathbb{R}$ in the real line. We will systematically denote this variable as time. Note that, from a purely mathematical point of view, this is an arbitrary denotation. What is eventually relevant is that it is a real variable defined over a given interval.
Thus, the state of an economic system can be represented by an $n$-dimensional vector function $x: [t_0, t_1] \subset \mathbb{R} \to \mathbb{R}^n$, $x(t)$, defined over time. In general, these functions of time are called time paths. In particular, the time paths representing the state of an economic system are called state variables. Then, we can represent the dynamic behavior of this economic system by the ordinary differential equation
$$\dot{x} = f(x, t), \qquad (1.1)$$
where $\dot{x} = dx/dt$ and $f$ is a function relating the rate of change of the state variable $x$ to the same variable and to time. That is, formally we can write $f: \mathbb{R}^{n+1} \to \mathbb{R}^n$.
There are several interesting questions to be answered for a model represented by equation (1.1), most often including initial conditions such as $x(t_0) = \bar{x}$. First, we need to determine a procedure for finding the path $x(t)$ for interesting and relevant specifications of the function $f$; that is, we need to solve the differential equation in (1.1). On the other hand, it is interesting to find conditions for the existence of a solution, as well as for its possible uniqueness. The properties of the solution are also interesting: for example, which conditions lead to a bounded function as a solution. Thus, analyzing and solving dynamic models is equivalent to analyzing and solving ordinary differential equations.
Sometimes we write this differential equation in a slightly different manner, including one more variable, $u: [t_0, t_1] \subset \mathbb{R} \to \mathbb{R}^k$, among the arguments of $f$ in (1.1). The new model is, then,
$$\dot{x} = f(x, u, t). \qquad (1.2)$$
The problem is now finding a time path $u$ for which a time path $x$ exists with the dynamic behavior indicated in equation (1.2). This is the control problem, and the variable $u$ is called a control variable. An alternative name, quite popular in economic contexts, is policy variable. When the argument $t$ is not included in equation (1.1) or (1.2), we call this an autonomous problem. In spite of the fact that this model was developed in a quite different context, it is especially suitable for the analysis of economic policy. For example, assume that the state variable, $x$, represents the level of prices and the control variable, $u$, is the money supply. Then, (1.2) is simply an equation for inflation. This model represents the classical central-bank monetary policy problem.

1.2 The Discrete-Time Case

These models, both the basic dynamic model and the control problem, are interesting and useful analytical instruments. But Economic Theory sometimes favors an alternative version. In this case, instead of considering continuous, or at least piecewise continuous, time paths as the state variables describing the behavior of the economic system under analysis, we use time paths that are real-valued sequences. This is the discrete-time dynamic model. The model becomes:
$$x_{t+1} = f(x_t, t). \qquad (1.3)$$
In this case the problem is trying to find a sequence $\{x_t\}$ (Appendix B presents a short discussion of sequences) that satisfies the finite-difference equation in (1.3).

The corresponding control problem becomes:
$$x_{t+1} = f(x_t, u_t, t). \qquad (1.4)$$
Now the problem is trying to find a control sequence $\{u_t\}$, as well as a state sequence $\{x_t\}$, satisfying the finite-difference equation in (1.4).
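To make the discrete-time control model (1.4) concrete, the following minimal sketch simulates a state sequence for an assumed law of motion and an assumed constant control path; the function $f$ and all parameter values are illustrative, not taken from the text.

```python
# A minimal sketch of the discrete-time control model x_{t+1} = f(x_t, u_t, t).
# The law of motion f and the constant control path are illustrative assumptions.

def f(x, u, t):
    # Hypothetical law of motion: partial adjustment of x plus the control u.
    return 0.9 * x + u

def simulate(x0, controls):
    """Generate the state sequence {x_t} implied by a control sequence {u_t}."""
    path = [x0]
    for t, u in enumerate(controls):
        path.append(f(path[-1], u, t))
    return path

if __name__ == "__main__":
    states = simulate(x0=1.0, controls=[0.5] * 10)
    for t, x in enumerate(states):
        print(f"t = {t:2d}, x_t = {x:.4f}")
```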

1.3 Dynamic Optimization Problems

However, a main focus of Economic Theory is optimization. That is, a main aim of Economic Theory is to analyze the conduct of economic agents when they take rational decisions; in other words, when they follow an optimizing behavior. Thus, in this text we will mainly be interested in optimizing the behavior of an economic system whose dynamics can be described by an equation such as (1.1) or (1.2). The most interesting problem in this sense is not only finding a time path for the state variable, or the control variable, but a time path that is optimal in some sense. In order to find this optimal time path we need to define some criterion or indicator of the performance of the time path. We will present a couple of examples in order to provide some insight into the problem of dynamic optimization.
Example 1.1. Let us consider a classical geometrical problem to illustrate this argument. Suppose that we have an initial point $A$ with coordinates $(t_0, y_0)$ and a terminal point $B$ with coordinates $(t_1, y_1)$ in the plane $(t, y)$. Our aim is to find the shortest path from $A$ to $B$.

[Figure 1.1: Dynamic Problem — several admissible paths from A to B in the (t, y) plane.]


This problem is presented in Figure 1.1, where we have included the initial and terminal points in the (t, y) space, as well as several possible time paths connecting point A with point B. The thick red line, a straight line, represents the shortest path; that is, the optimal one. Notice that we can identify the following elements in this problem:
- An initial and a terminal point (A and B).
- A state variable $y(t)$ describing the state of the world at time $t$; in this case, points on the path from A to B.
- A set of admissible paths from A to B (from $y(t_0)$ to $y(t_1)$). Sometimes we need to limit the range of paths that can be admitted. In the example above we could consider only paths in the first quadrant of the (t, y) plane.
- Values serving as performance indices associated with the various paths; for example, the lengths of the different paths.
- A specific objective: maximize or minimize the performance index of the paths.
Let us formalize this problem. Notice that for a small variation of the variable $t$, the length of a given path $y(t)$ can be derived from the Pythagorean theorem as
$$ds^2 = dt^2 + dy^2,$$
which can be rewritten as
$$ds = \sqrt{1 + \dot{y}^2}\, dt,$$
where $\dot{y} = dy/dt$. Then, the length of the path between point A and point B is given by
$$V[y] = \int_{t_0}^{t_1} \sqrt{1 + \dot{y}^2}\, dt.$$
Thus, our problem can be formalized as
$$\min_{y} V[y] = \int_{t_0}^{t_1} \sqrt{1 + \dot{y}^2}\, dt.$$
The answer is, obviously, that the shortest distance between A and B is given by the straight line connecting these two points. Eventually we will solve this problem, giving a mathematical foundation to this well-known intuitive result.
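As a quick numerical sanity check of this functional (an addition of ours, not in the original text), the sketch below approximates $V[y]$ for the straight line between $(0,0)$ and $(1,1)$ and for an arbitrary curved path between the same endpoints; the curved path comes out longer.

```python
# Minimal sketch: approximate the arc-length functional V[y] = ∫ sqrt(1 + y'(t)^2) dt
# for two paths between (0, 0) and (1, 1). The curved path is an arbitrary
# illustrative choice, not from the text.
import math

def arc_length(y, t0=0.0, t1=1.0, n=10_000):
    """Numerically approximate V[y] with a finite-difference derivative."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + i * h
        dy = y(t + h) - y(t)
        total += math.sqrt(h * h + dy * dy)   # ds = sqrt(dt^2 + dy^2)
    return total

straight = lambda t: t                 # the straight line from (0,0) to (1,1)
curved = lambda t: t ** 3              # an admissible but longer path

print("straight line:", arc_length(straight))   # ≈ sqrt(2) ≈ 1.41421
print("curved path:  ", arc_length(curved))     # strictly larger
```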
This problem can be given an economic content. We can assume that point A represents the initial state of an economy, as indicated by the level of GDP, by the level of consumption, or by a combination of several indicators. Point B represents the end state, or desired state at time $t_1$, of the same economy as measured by the same indicators. The problem is that there are many possible time paths from A to B. In Figure 1.1 we have drawn a few of these possible paths. We can expect that one of these paths will be optimal, given a few regularity conditions to be discussed later on. However, optimal in which sense? Thus, we need to introduce an indicator that evaluates the performance of each path over time, considering its dynamic behavior. Most often we will use an integral for this purpose. Thus, our problem can be generalized as
$$\max_{y} V[y] = \int_{t_0}^{t_1} F(y, \dot{y}, t)\, dt,$$
given that $y(t_0) = y_0$ and $y(t_1) = y_1$.

Formally, we will say that this is a variational problem, or a Calculus of Variations problem, with fixed initial and terminal points. The solution of this problem will be discussed later on.
Example 1.2. The Ramsey Model. The use of capital, $K$, produces output $f(K)$, where $f$ is a concave function representing the technology. Part of this output is consumed, $C$, and part of it is saved in order to increase the stock of capital. Thus, net investment is represented by $\dot{K}$. These ideas can be summarized by the basic national-accounts identity:
$$f(K) = C + \dot{K} \;\Longrightarrow\; C = f(K) - \dot{K}.$$
The welfare level of society is represented by a utility function of consumption; that is, welfare is given by $U(C)$. Thus, we are interested in a time path for capital $K$, and consequently for savings and consumption, that maximizes welfare over a given time horizon. This problem can be stated as a Calculus of Variations problem:
$$\max \int_{t_0}^{t_1} U\!\left[f(K) - \dot{K}\right] dt,$$
given that $K(t_0) = K_0$.

The equivalent Optimal Control problem can be written as
$$\max V = \int_{t_0}^{t_1} F(x, u, t)\, dt, \qquad \text{s.t.} \quad \dot{x} = f(x, u, t),$$
where $F$ and $f$ are functions complying with a few regularity conditions to be discussed later on.
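To make the law of motion $\dot{K} = f(K) - C$ concrete, here is a small numerical sketch (our addition) that integrates the capital path under an assumed Cobb-Douglas technology $f(K) = K^{\alpha}$ and an assumed constant saving rate; it only illustrates the dynamics and does not solve the optimization problem.

```python
# Sketch: integrate K' = f(K) - C with f(K) = K**alpha (an assumed technology)
# and C = (1 - s) * f(K) for a fixed saving rate s. Parameter values are
# illustrative; this is not the optimal Ramsey policy, just the state dynamics.
import numpy as np
from scipy.integrate import solve_ivp

alpha, s = 0.3, 0.2      # hypothetical technology and saving parameters

def capital_dynamics(t, K):
    output = K[0] ** alpha
    consumption = (1.0 - s) * output
    return [output - consumption]        # K' = f(K) - C = s * f(K)

sol = solve_ivp(capital_dynamics, t_span=(0.0, 50.0), y0=[1.0],
                t_eval=np.linspace(0.0, 50.0, 6))
for t, K in zip(sol.t, sol.y[0]):
    print(f"t = {t:5.1f}, K(t) = {K:.4f}")
```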

Chapter 2

Preliminary Matters
In this chapter we discuss a few preliminary matters which are important for understanding the arguments presented later on. These are a number of definitions and results from Calculus, some of them quite classical, that play a critical role in some of the demonstrations and in the procedures for finding the solutions of these models.

2.1 A Few Critical Definitions

Let us introduce some notation first. Consider the following formal definition.

Definition 2.1. A function $x: [t_0, t_1] \subset \mathbb{R} \to \mathbb{R}$, $x = x(t)$, is called a time path.

In this context, $x[t_0, \tau]$ and $x(\tau, t_1]$ are the function $x$ restricted to the intervals $[t_0, \tau]$ and $(\tau, t_1]$, respectively, where $\tau \in [t_0, t_1]$. In particular, $x^*(t)$ will denote the optimal path in a dynamic problem.

Definition 2.2. Consider $\mathcal{F} = \{x(t) \mid x(t) \text{ is a time path}\}$. That is, $x(t)$ is a real function of a real variable. Then $V[x(t)]$ is a functional if and only if $V: \mathcal{F} \to \mathbb{R}$.

Observe that there is nothing really new in this definition. We have just singled out a particular class of functions: those functions whose argument is also a function. Notice that the objective functions used in the Calculus of Variations,
$$V(x) = \int_{t_0}^{t_1} F(x, \dot{x}, t)\, dt,$$
or in the Optimal Control problem,
$$V(x) = \int_{t_0}^{t_1} F(x, u, t)\, dt,$$
presented in the previous chapter comply with this definition. That is, the aim of these problems is to optimize a functional.

2.2 A Useful Lemma

Let us now present a lemma that will play a critical role later on, when we discuss the solution of the Calculus of Variations problem.

Lemma 2.1. Let $f$ be a continuous real-valued function defined on an interval $[t_0, t_1]$. Then, this function has the property
$$\int_{t_0}^{t_1} f(t)\, p(t)\, dt = 0 \qquad (2.1)$$
for every real-valued function $p$ defined on the interval $[t_0, t_1]$ and such that $p(t_0) = p(t_1) = 0$, if and only if $f(t) = 0$ for all $t \in [t_0, t_1]$, provided that the integral indicated in (2.1) exists.
Proof. That $f(t) = 0$ for all $t \in [t_0, t_1]$ implies equation (2.1) is trivial. The real problem is demonstrating that equation (2.1) implies that $f(t) = 0$ for all $t \in [t_0, t_1]$. We demonstrate this by showing that if we assume that there exists at least one $c \in [t_0, t_1]$ for which $f(c) \neq 0$, then we can find a function $p$, defined on the interval $[t_0, t_1]$, for which the integral in (2.1) exists but
$$\int_{t_0}^{t_1} f(t)\, p(t)\, dt \neq 0.$$

1. Assume that condition (2.1) is true, but there exists a point $c \in [t_0, t_1]$ such that $f(c) > 0$. Because $f$ is continuous, $c$ cannot be an isolated point with this property. Thus, there must be an open interval $(a, b)$ containing $c$ such that $f(t) > 0$ for all $t \in (a, b) \subset [t_0, t_1]$. Now define the function $p$ as
$$p(t) = \begin{cases} 0 & \text{if } t \notin (a, b), \\ (t - a)^k (b - t)^k, \text{ for some } k \in \{1, 2, 3, \ldots\}, & \text{if } t \in (a, b). \end{cases}$$
It is easy to see that
$$\lim_{t \to a} p(t) = \lim_{t \to b} p(t) = 0.$$

[Figure 2.1: Graph of f(t) p(t) over [t0, t1].]


Thus, $p(t)$ is continuous on $(t_0, t_1)$ and the integral in (2.1) exists. Notice that $p(t) > 0$ and $f(t) > 0$ for all $t \in (a, b)$. Thus,
$$\int_a^b f(t)\, p(t)\, dt > 0.$$
But
$$\int_{t_0}^{t_1} f(t)\, p(t)\, dt = \int_{t_0}^{a} f(t)\, p(t)\, dt + \int_{a}^{b} f(t)\, p(t)\, dt + \int_{b}^{t_1} f(t)\, p(t)\, dt = 0 + \int_a^b f(t)\, p(t)\, dt + 0 > 0.$$
Thus, if we assume that there exists a point $c \in [t_0, t_1]$ such that $f(c) > 0$, we can find a function $p(t)$ that contradicts condition (2.1). Then, necessarily, $f(t) \leq 0$ for all $t \in [t_0, t_1]$.

2. A similar argument allows us to prove that $f(t) \geq 0$ for all $t \in [t_0, t_1]$.

3. That is, we have
$$f(t) \leq 0 \text{ and } f(t) \geq 0 \;\Longrightarrow\; f(t) = 0 \text{ for all } t \in [t_0, t_1].$$

Observe that this is a quite general result. We need to impose very few conditions on the function $p$; the critical point is that the relevant integrals exist. For our purposes we can safely restrict this result to a particular class of functions, namely to functions $p$ that are twice differentiable. For instance, choose $k \geq 2$ in the function $p$ introduced at point 1 of our demonstration.
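As a numerical illustration of the construction used in the proof (our own sketch), the code below chooses a function $f$ that is positive on an interval $(a, b)$, builds the bump function $p(t) = (t-a)^k (b-t)^k$, and checks that the integral of $f \cdot p$ over $[t_0, t_1]$ is strictly positive.

```python
# Numerical illustration of the bump-function argument in Lemma 2.1.
# f is an arbitrary continuous function that is positive around c = 0.5;
# p vanishes outside (a, b) and is positive inside, so the integral of f*p is > 0.
from scipy.integrate import quad

t0, t1 = 0.0, 1.0
a, b, k = 0.4, 0.6, 2          # interval where f > 0, and smoothness exponent

def f(t):
    return t - 0.3              # continuous, positive on (a, b) = (0.4, 0.6)

def p(t):
    if a < t < b:
        return (t - a) ** k * (b - t) ** k
    return 0.0                  # p(t0) = p(t1) = 0, as the lemma requires

value, _ = quad(lambda t: f(t) * p(t), t0, t1)
print(f"integral of f*p over [{t0}, {t1}] = {value:.3e}  (> 0)")
```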

2.3 Integration by Parts

Consider two differentiable real-valued functions $u(x)$ and $v(x)$ defined on an interval $[a, b] \subset \mathbb{R}$. Then we can write
$$d[u(x)\, v(x)] = u'(x)\, v(x)\, dx + u(x)\, v'(x)\, dx. \qquad (2.2)$$
Assume that both $u'(x)\, v(x)$ and $u(x)\, v'(x)$ are integrable on the interval $[a, b]$. Then, equation (2.2) allows us to write
$$\left[u(x)\, v(x)\right]_a^b = \int_a^b u'(x)\, v(x)\, dx + \int_a^b u(x)\, v'(x)\, dx,$$
which leads to the formula known as integration by parts:
$$\int_a^b u'(x)\, v(x)\, dx = \left[u(x)\, v(x)\right]_a^b - \int_a^b u(x)\, v'(x)\, dx.$$

Example 2.1. Calculate
$$\int_a^b t \log t\, dt.$$
Using the same notation as above, we assume
$$u'(t)\, dt = t\, dt \;\Rightarrow\; u(t) = \frac{t^2}{2}, \qquad v(t) = \log t \;\Rightarrow\; v'(t)\, dt = \frac{dt}{t}.$$
Thus, applying the formula for integration by parts,
$$\int_a^b t \log t\, dt = \left[\frac{t^2}{2} \log t\right]_a^b - \int_a^b \frac{t^2}{2}\, \frac{dt}{t}
= \left[\frac{t^2}{2} \log t\right]_a^b - \int_a^b \frac{t}{2}\, dt
= \left[\frac{t^2}{2} \log t - \frac{t^2}{4}\right]_a^b
= \frac{b^2}{2}\left(\log b - \frac{1}{2}\right) - \frac{a^2}{2}\left(\log a - \frac{1}{2}\right).$$
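A quick symbolic check of this result (our addition, using SymPy) confirms that the closed form above agrees with direct integration:

```python
# Verify Example 2.1 symbolically: the integral of t*log(t) from a to b equals
# b^2/2 * (log b - 1/2) - a^2/2 * (log a - 1/2).
import sympy as sp

t, a, b = sp.symbols("t a b", positive=True)

direct = sp.integrate(t * sp.log(t), (t, a, b))
by_parts = b**2 / 2 * (sp.log(b) - sp.Rational(1, 2)) \
         - a**2 / 2 * (sp.log(a) - sp.Rational(1, 2))

print(sp.simplify(direct - by_parts))   # prints 0
```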

2.4 Differentiating Integrals

Consider a real-valued function $f: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$, $z = f(y, x)$. Then we can define a function $I(x)$ as
$$I(x) = \int_a^b f(t, x)\, dt,$$
assuming that the function $f$ is integrable with respect to the first variable. Then we have that
$$\frac{dI}{dx} = \int_a^b \frac{\partial f(t, x)}{\partial x}\, dt,$$
assuming that the function $f$ is differentiable with respect to the second variable and that this integral exists.

Following the previous result, it is easy to see that
$$\frac{d}{dy}\int_a^b f(x, y)\, dx = \int_a^b \frac{\partial f(x, y)}{\partial y}\, dx.$$
Consider now an integrable function $g$ on an interval $[a, b]$. We can define a new function $I$ as $I(y) = \int_a^y g(x)\, dx$ for all $y \in [a, b]$. Then the fundamental theorem of calculus implies that
$$\frac{dI}{dy} = \frac{d}{dy}\int_a^y g(x)\, dx = g(y).$$
If $y = \varphi(z)$, then, using the chain rule, we can write
$$\frac{dI}{dz} = \frac{dI}{dy}\frac{dy}{dz} = g(y)\, \varphi'(z) = g[\varphi(z)]\, \varphi'(z), \quad \text{or} \quad \frac{d}{dz}\int_a^{\varphi(z)} g(x)\, dx = g[\varphi(z)]\, \varphi'(z).$$
Now write
$$I(u, v) = \int_a^u f(x, v)\, dx,$$
where $u = \varphi(y)$ and $v = y$. Then, using the chain rule once again,
$$\frac{dI}{dy} = \frac{\partial I}{\partial u}\frac{du}{dy} + \frac{\partial I}{\partial v}\frac{dv}{dy} = f(u, v)\, \varphi'(y) + \int_a^u \frac{\partial f(x, v)}{\partial v}\, dx,$$
$$\frac{d}{dy}\int_a^{\varphi(y)} f(x, y)\, dx = f[\varphi(y), y]\, \varphi'(y) + \int_a^{\varphi(y)} \frac{\partial f(x, y)}{\partial y}\, dx. \qquad (2.3)$$
This result is known as the Leibniz formula.
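To make formula (2.3) concrete, here is a small SymPy check (our addition) for a particular choice of $f(x, y)$ and upper limit $\varphi(y)$; both sides of the Leibniz formula agree:

```python
# Check the Leibniz formula (2.3) for f(x, y) = x*y**2 and upper limit phi(y) = y**2.
import sympy as sp

x, y = sp.symbols("x y", positive=True)
a = sp.Integer(0)
f = x * y**2
phi = y**2

# Left-hand side: differentiate the integral with respect to y.
lhs = sp.diff(sp.integrate(f, (x, a, phi)), y)

# Right-hand side: f(phi(y), y) * phi'(y) + integral of df/dy from a to phi(y).
rhs = f.subs(x, phi) * sp.diff(phi, y) + sp.integrate(sp.diff(f, y), (x, a, phi))

print(sp.simplify(lhs - rhs))   # prints 0
```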


Chapter 3

Differential Equations
This chapter presents a short and rather concise discussion of differential equations. This is an essential topic for understanding dynamic models. We assume that the reader has some familiarity with differential equations; thus, this chapter is just a short and fast reminder of the main ideas of the subject.

3.1 Introduction

Let us begin by explaining what a differential equation is. In a rather intuitive manner, a differential equation is an equation where the unknown is a function, and the equation includes at least one of its derivatives. With a somewhat higher degree of formalism, we can assume that we have a differentiable function $x: [t_0, t_1] \subset \mathbb{R} \to \mathbb{R}$. Then, a differential equation can be written in the following manner:
$$\frac{dx}{dt} = f(x, t), \qquad (3.1)$$
where $f: \mathbb{R}^2 \to \mathbb{R}$ is a continuous function. This equation can be generalized by considering a vector function $x: [t_0, t_1] \subset \mathbb{R} \to \mathbb{R}^n$ instead of the real-valued function $x$ considered in (3.1). We can also consider higher-order derivatives. A differential equation like this, where the domain of the function $x$ is a subset of the real numbers, is called an ordinary differential equation. When the domain of the function $x$ is a subset of $\mathbb{R}^n$ with $n > 1$ and we include its partial derivatives in the differential equation, we will talk about a partial differential equation.

Any real function $x$ that satisfies equation (3.1) is called a solution of this differential equation. The domain of the function $x$ is simply a subset of the real numbers for an ordinary differential equation. However, in most of the applications within economics we assume that the argument of the function $x$ represents time. Thus, our favorite notation for this argument will be $t$, and often we will write
$$\dot{x} = \frac{dx}{dt} = x^{(1)}, \qquad \ddot{x} = \frac{d^2 x}{dt^2} = x^{(2)}, \quad \ldots$$

Differential equations can be classified in different manners. For example, we already discussed the classification into ordinary and partial differential equations. The highest-order derivative included in a differential equation indicates its order. Thus, we will have first-order differential equations, second-order differential equations, etc. Formally, we can say that the equation
$$x^{(n)}(t) + f_{n-1}(t)\, x^{(n-1)}(t) + \cdots + f_1(t)\, x'(t) + f_0(t)\, x(t) = r(t),$$
where the functions $f_k(t)$, $k \in \{0, 1, \ldots, n-1\}$, and $r(t)$ are real functions of $t$ and $f_0(t) \neq 0$ for all $t \in [t_0, t_1]$, is called a linear differential equation of order $n$. When the equation is given by an algebraic expression in $x$, the highest power of $x$ indicates the degree of the differential equation.
Example 3.1 (Exponential growth). Consider the differential equation $\dot{x} = rx$. This first-order linear differential equation is often used for describing the behavior of a population growing at a constant percentage rate. The ratio $\dot{x}/x = r$ is the percentage rate at which this population is growing.

Example 3.2 (Rational mechanics equations). If we call $s$ the distance covered by a moving point at time $t$, we can write
$$v = \dot{s} = \frac{ds}{dt}.$$
This is the definition of velocity, or speed. Similarly, we can write acceleration as a second-order differential equation:
$$a = \dot{v} = \frac{dv}{dt} = \ddot{s} = \frac{d^2 s}{dt^2}.$$
These are probably the oldest differential equations formally treated.

Example 3.3 (Gordon-Schaefer model). In this case we consider the equation $\dot{x} = rx - ax^m + b$. This model is often used for describing the dynamic behavior of an animal population. We will discuss some versions of this model in detail later on.

3.2 A Few Basic Definitions

There are no general methods or approaches for solving differential equations. We have, instead, a number of methods for solving a number of families of differential equations. Thus, we discuss here the solution of those cases that are most important and relevant for the analysis and study of dynamic models in economics. We begin with a few definitions that will focus our discussion.

Definition 3.1. Consider the differential equation
$$\dot{x} = F(x, t). \qquad (3.2)$$
A differentiable function $x = x(t)$, defined on an open interval $(t_0, t_1) \subset \mathbb{R}$ and satisfying the equation above, is called a solution, and its graph is called an integral curve.

This definition introduces two new problems. First, one can wonder whether there is a solution for equation (3.2); more precisely, under which conditions on the function $F$ in (3.2) a solution exists. Second, if there is a solution, one can wonder whether it is unique or whether there are other possible solutions. These are the existence and uniqueness problems. We will skip the discussion of these issues because it exceeds the purpose of this text.

As an example, consider the algebraic equation $ax + b = 0$. We know that if $a \neq 0$ there is a solution for this equation and it is unique, namely $x = -b/a$. The corresponding question for differential equations is much more difficult and complicated.

Note 3.1. Most often the problem is stated as $\dot{x} = F(x, t)$ given $x(t_0) = a$. Then $x(t_0) = a$ is an initial condition. For a problem of order $n$ we need $n$ initial conditions.

3.3 Separable Differential Equations

A differential equation is said to be separable if it can be written as
$$\dot{x} = f(t)\, g(x). \qquad (3.3)$$
Notice that if $g(x)$ has a zero (there exists some point $a$ such that $g(a) = 0$), then $x(t) = a$ is a solution: for the constant path $x(t) = a$ we would have
$$f(t)\, g(a) = 0 = \dot{x}.$$
If $g(x) \neq 0$ for all $t$, then equation (3.3) can be written as
$$\frac{dx}{g(x)} = f(t)\, dt.$$
We can integrate this expression and, hopefully, solve the equation as
$$\int \frac{dx}{g(x)} = \int f(t)\, dt + C, \qquad \text{for } x \text{ such that } g(x) \neq 0.$$
Naturally, this will work only if we can find the primitives indicated above. More precisely, we can say that the problem
$$\dot{x} = f(t)\, g(x), \qquad x(t_0) = x_0,$$
has a unique solution, given implicitly by
$$\int_{x_0}^{x} \frac{d\xi}{g(\xi)} = \int_{t_0}^{t} f(s)\, ds,$$
provided that $g(\xi) \neq 0$ for all $\xi \in [x_0, x]$.

Examples
Example 3.4. Consider the separable equation $\dot{x} = rx$, $x > 0$, where $r$ is a constant. Rewriting this equation as
$$\frac{dx}{x} = r\, dt,$$
we can see that this differential equation represents a population of size $x(t)$ at time $t$, growing at a constant rate $r$. Integrating this last expression we get
$$\ln x = rt + \ln C \;\Longrightarrow\; x = Ce^{rt}.$$
If we differentiate this last expression we reproduce our differential equation.
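As a quick check (our addition), SymPy's ODE solver recovers the same general solution:

```python
# Solve x'(t) = r*x(t) symbolically and confirm the general solution C*exp(r*t).
import sympy as sp

t, r = sp.symbols("t r")
x = sp.Function("x")

solution = sp.dsolve(sp.Eq(x(t).diff(t), r * x(t)), x(t))
print(solution)   # Eq(x(t), C1*exp(r*t))
```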
Example 3.5. Consider the differential equation $\dot{x} = 2t e^{-x}$. This equation can be rewritten as
$$e^{x}\, dx = 2t\, dt.$$
Integrating this expression we get
$$e^{x} = t^2 + C, \qquad x = \ln\left(t^2 + C\right).$$

3.4 First-Order Linear Differential Equations

3.4.1 Solving first-order differential equations

First-order differential equations appear often in economic models. They have the following canonical, or standard, form:
$$\dot{x} + a(t)\, x = b(t), \qquad (3.4)$$
where $a(t)$ and $b(t)$ are functions of $t$ not depending on $x$. We begin our discussion with the case where the function $a(t)$ is a constant; that is, when $a(t) = a$ for all $t \in [t_0, t_1]$. Then equation (3.4) becomes
$$\dot{x} + ax = b(t). \qquad (3.5)$$

Multiply both sides of equation (3.5) by $e^{at}$ to get
$$\dot{x} e^{at} + a x e^{at} = b(t)\, e^{at}.$$
Notice that the left-hand side of the equation above is simply the derivative of $x e^{at}$ with respect to $t$. Thus, we can write
$$\frac{d}{dt}\left(x e^{at}\right) = \dot{x} e^{at} + a x e^{at} = b(t)\, e^{at},$$
and thus, after integrating both sides, we have
$$x e^{at} = \int e^{at} b(t)\, dt + C.$$
The solution to equation (3.5) can then be written as
$$\dot{x} + ax = b(t) \;\Longrightarrow\; x = e^{-at}\left[C + \int e^{at} b(t)\, dt\right].$$
When $b(t)$ is also a constant, the result above becomes
$$\dot{x} + ax = b \;\Longrightarrow\; x = Ce^{-at} + \frac{b}{a}.$$
Example 3.6. Let us solve the differential equation
$$\dot{x} = rx + b, \qquad x(0) = 0.$$
We begin by rewriting this equation in our canonical, or standard, form:
$$\dot{x} - rx = b.$$
Notice that in this example the coefficient of $x$ is $-r$. Thus, we must multiply both sides by $e^{-rt}$, which gives
$$\dot{x} e^{-rt} - r x e^{-rt} = b e^{-rt}.$$
Integrating both sides,
$$x e^{-rt} = \int b e^{-rt}\, dt + C = b \int e^{-rt}\, dt + C,$$
$$x e^{-rt} = -\frac{b}{r} e^{-rt} + C.$$
Thus, the general solution is
$$x = Ce^{rt} - \frac{b}{r}.$$
We can now use the initial condition, setting $t = 0$:
$$x(0) = 0 = C - \frac{b}{r} \;\Longrightarrow\; C = \frac{b}{r}.$$
Thus, the particular solution is
$$x(t) = \frac{b}{r}\left(e^{rt} - 1\right).$$
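A symbolic check of Example 3.6 (our addition), imposing the initial condition $x(0) = 0$:

```python
# Verify Example 3.6: x' = r*x + b with x(0) = 0 has solution (b/r)*(exp(r*t) - 1).
import sympy as sp

t = sp.symbols("t")
r, b = sp.symbols("r b", nonzero=True)
x = sp.Function("x")

sol = sp.dsolve(sp.Eq(x(t).diff(t), r * x(t) + b), x(t), ics={x(0): 0})
print(sp.simplify(sol.rhs - b / r * (sp.exp(r * t) - 1)))   # prints 0
```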

The general case for first-order linear equations is when $a(t)$ in equation (3.4) is a genuine function of $t$. In order to solve this case we can use the following basic result:
$$\frac{d}{dt}\int a(t)\, dt = a(t).$$
Thus we have that
$$\frac{d}{dt}\left[x e^{\int a(t)\, dt}\right] = \dot{x} e^{\int a(t)\, dt} + a(t)\, x e^{\int a(t)\, dt} = b(t)\, e^{\int a(t)\, dt}.$$
Integrating both sides we get
$$x e^{\int a(t)\, dt} = \int e^{\int a(t)\, dt}\, b(t)\, dt + C,$$
and the solution to the general case follows as
$$\dot{x} + a(t)\, x = b(t) \;\Longrightarrow\; x = e^{-\int a(t)\, dt}\left[C + \int e^{\int a(t)\, dt}\, b(t)\, dt\right].$$
With an initial condition $x(t_0) = x_0$:
$$\dot{x} + a(t)\, x = b(t) \;\Longrightarrow\; x = x_0\, e^{-\int_{t_0}^{t} a(\tau)\, d\tau} + \int_{t_0}^{t} b(s)\, e^{-\int_{s}^{t} a(\tau)\, d\tau}\, ds.$$

Example 3.7. Consider the differential equation
$$\dot{y} + 3ty = 6t, \qquad y(0) = 3.$$
In this example the coefficient of $y$ is $a(t) = 3t$. Notice that
$$\int 3t\, dt = \frac{3}{2}t^2.$$
So we can write
$$y e^{\frac{3}{2}t^2} = \int 6t\, e^{\frac{3}{2}t^2}\, dt + C = 2\int 3t\, e^{\frac{3}{2}t^2}\, dt + C,$$
$$y e^{\frac{3}{2}t^2} = 2 e^{\frac{3}{2}t^2} + C,$$
$$y = 2 + C e^{-\frac{3}{2}t^2}.$$
Using the initial condition, setting $t = 0$, we get
$$y(0) = 3 = 2 + C \;\Longrightarrow\; C = 1.$$
Thus the solution is
$$y(t) = 2 + e^{-\frac{3}{2}t^2}.$$
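Again as a quick check (our addition), SymPy reproduces the particular solution of Example 3.7:

```python
# Verify Example 3.7: y' + 3*t*y = 6*t with y(0) = 3 has solution 2 + exp(-3*t**2/2).
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

sol = sp.dsolve(sp.Eq(y(t).diff(t) + 3 * t * y(t), 6 * t), y(t), ics={y(0): 3})
print(sp.simplify(sol.rhs - (2 + sp.exp(-sp.Rational(3, 2) * t**2))))   # prints 0
```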
Let us now see a more difficult example.

Example 3.8. Consider the differential equation
$$\left(1 + t^2\right)\dot{y} + 3ty = 6t, \qquad y(0) = 3.$$
This equation can be rewritten as
$$\dot{y} + \frac{3t}{1 + t^2}\, y = \frac{6t}{1 + t^2}.$$
Note that putting $u = 1 + t^2$, we get $du = 2t\, dt$. Thus, we have
$$\frac{3t\, dt}{1 + t^2} = \frac{(3/2)\, du}{u},$$
$$\int \frac{3t}{1 + t^2}\, dt = \frac{3}{2}\int \frac{du}{u} = \frac{3}{2}\log u = \frac{3}{2}\log\left(1 + t^2\right).$$
Thus, we can write
$$y e^{\frac{3}{2}\log\left(1 + t^2\right)} = \int \frac{6t}{1 + t^2}\, e^{\frac{3}{2}\log\left(1 + t^2\right)}\, dt,$$
$$y \left(1 + t^2\right)^{3/2} = \int \frac{6t}{1 + t^2}\left(1 + t^2\right)^{3/2} dt = \int 6t\left(1 + t^2\right)^{1/2} dt = 3\int \left(1 + t^2\right)^{1/2} 2t\, dt = 3\int u^{1/2}\, du = 3\cdot\frac{2}{3}u^{3/2} + C,$$
$$y \left(1 + t^2\right)^{3/2} = 2\left(1 + t^2\right)^{3/2} + C,$$
$$y = 2 + C\left(1 + t^2\right)^{-3/2}.$$
Using the initial condition, setting $t = 0$, we have
$$y(0) = 3 = 2 + C \;\Longrightarrow\; C = 1.$$
Thus, the solution is
$$y = 2 + \left(1 + t^2\right)^{-3/2}.$$

3.4.2 An important application: Bernoulli's equation

The following differential equation, called Bernoulli's equation, defined over an interval $t \in [t_0, t_1]$, appears often in different contexts:
$$\dot{x} = Q(t)\, x + R(t)\, x^n, \qquad (3.6)$$
where $Q(t) \neq 0$ and $R(t) \neq 0$ for all $t \in [t_0, t_1]$.

In order to solve equation (3.6), substitute $z = x^{1-n}$. Notice that then
$$\dot{z} = (1 - n)\, x^{-n}\, \dot{x} \;\Longrightarrow\; \dot{x} = \frac{x^{n}}{1 - n}\, \dot{z}.$$
Multiplying equation (3.6) by $x^{-n}$ we get
$$\dot{x}\, x^{-n} = Q(t)\, x^{1-n} + R(t).$$
Substituting in this equation we get
$$\frac{\dot{z}}{1 - n} = Q(t)\, z + R(t).$$
This is a first-order linear equation that can be solved using the methods described above.
Example 3.9. Consider the differential equation
$$t\dot{x} + 2x = t x^2, \qquad (t \neq 0).$$
This is a Bernoulli equation with $n = 2$. Our transformation is, then, $z = x^{-1}$, so that $\dot{z} = -x^{-2}\dot{x}$. Multiplying both sides of the equation by $x^{-2}$ we get
$$t\dot{x}\, x^{-2} + 2x^{-1} = t,$$
$$-t\dot{z} + 2z = t,$$
$$\dot{z} - 2t^{-1} z = -1.$$
This is a first-order linear equation. In order to solve it, notice that
$$\int -2t^{-1}\, dt = -2\log t + C.$$
Thus, we can write
$$z e^{-2\log t} = \int -e^{-2\log t}\, dt + C,$$
$$z t^{-2} = -\int t^{-2}\, dt + C,$$
$$z t^{-2} = t^{-1} + C,$$
$$z = t + t^2 C,$$
$$x^{-1} = t\left(1 + Ct\right),$$
$$x = \frac{1}{t\left(1 + Ct\right)}.$$
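A symbolic spot check of Example 3.9 (our addition): for an arbitrary constant $C$, the closed form $x = 1/(t(1 + Ct))$ satisfies $t\dot{x} + 2x = tx^2$.

```python
# Spot-check Example 3.9 symbolically: x = 1/(t*(1 + C*t)) satisfies t*x' + 2*x = t*x**2.
import sympy as sp

t, C = sp.symbols("t C")
x = 1 / (t * (1 + C * t))

residual = t * sp.diff(x, t) + 2 * x - t * x**2
print(sp.simplify(residual))   # prints 0
```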

3.4.3 Phase Diagrams

For $\dot{x} = F(x, t)$, a point $x = a$ represents an equilibrium state, or stationary state, if $F(a) = 0$. To draw a phase diagram, plot $\dot{x}$ against $x$.
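As an illustration (our addition), the following sketch tabulates $\dot{x} = F(x)$ against $x$ for the logistic law of motion used in Section 3.6, with illustrative parameter values; the stationary states can be read off where $F$ changes sign, and a plotting library could be used to draw the actual diagram.

```python
# Tabulate xdot = F(x) = r*x*(1 - x/K) against x; sign changes mark stationary states.
# r and K are illustrative parameter values.
import numpy as np

r, K = 0.5, 100.0

def F(x):
    return r * x * (1.0 - x / K)

for x in np.linspace(0.0, 120.0, 7):
    print(f"x = {x:6.1f}, xdot = {F(x):8.3f}")
# xdot is positive for 0 < x < K and negative for x > K, so x = K is a stable
# stationary state and x = 0 an unstable one.
```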

3.5 Second-Order Linear Differential Equations

The standard, or canonical, form of a second-order linear differential equation is
$$\ddot{x} + a(t)\, \dot{x} + b(t)\, x = f(t). \qquad (3.7)$$
We assume that $b(t) \neq 0$ for all $t \in [t_0, t_1]$.

Associated with equation (3.7) we can define the following homogeneous equation:
$$\ddot{x} + a(t)\, \dot{x} + b(t)\, x = 0. \qquad (3.8)$$
Our approach to the solution of second-order linear differential equations begins with a couple of important general results.

Theorem 3.2. The homogeneous differential equation
$$\ddot{x} + a(t)\, \dot{x} + b(t)\, x = 0$$
has the general solution
$$x = A u_1(t) + B u_2(t),$$
where $u_1(t)$ and $u_2(t)$ are two linearly independent solutions, and $A$ and $B$ are two arbitrary real constants.

We skip a general discussion of what linearly independent means. Instead, we just indicate that, in this context of second-order linear differential equations, it means that $u_1(t)$ and $u_2(t)$ are not proportional; that is, there is no real constant $k \neq 0$ such that $u_1(t) = k u_2(t)$ for all $t \in [t_0, t_1]$. Notice that if we can find two linearly independent solutions, we can find infinitely many solutions just by giving arbitrary values to the constants $A$ and $B$. Moreover, it can be proved that if, for a particular solution $x(t)$, we are given two values $x(t_1)$ and $x(t_2)$, then we can find unique values for the constants $A$ and $B$.

Theorem 3.3. The nonhomogeneous differential equation
$$\ddot{x} + a(t)\, \dot{x} + b(t)\, x = f(t)$$
has the general solution
$$x = A u_1(t) + B u_2(t) + u^*(t),$$
where $A u_1(t) + B u_2(t)$ is the general solution of the associated homogeneous equation, and $u^*(t)$ is a particular solution of the nonhomogeneous equation.

Proof. Notice that $u(t) = A u_1(t) + B u_2(t)$ is a solution of the homogeneous equation associated with equation (3.7). So the proposed solution becomes $x = u(t) + u^*(t)$. Substituting in equation (3.7) we get
$$\ddot{u}(t) + \ddot{u}^*(t) + a(t)\, \dot{u}(t) + a(t)\, \dot{u}^*(t) + b(t)\, u(t) + b(t)\, u^*(t) = f(t).$$
Reordering terms we get
$$\left[\ddot{u} + a(t)\, \dot{u} + b(t)\, u\right] + \ddot{u}^* + a(t)\, \dot{u}^* + b(t)\, u^* = f(t).$$
Notice that the first bracket equals zero because $u(t)$ is a solution of the homogeneous equation. What is left is the nonhomogeneous equation, which $u^*(t)$ satisfies by definition.

These results give us a two-step strategy for solving an equation such as (3.7). First, find a general solution for the associated homogeneous equation (3.8). Then, find a particular solution for the nonhomogeneous equation (3.7). The expression in Theorem 3.3 gives us a general solution. Applying the initial conditions, we can find unique values for $A$ and $B$ and, in consequence, the particular solution.
Let us now see how to perform the first step in the strategy above. In the next theorem we present a solution for equations with constant coefficients; that is, when $a(t) = a$ and $b(t) = b$ for all $t \in [t_0, t_1]$.

Theorem 3.4. The general solution of the homogeneous second-order linear equation with constant coefficients,
$$\ddot{x} + a\dot{x} + bx = 0,$$
is as follows:

1. $\frac{1}{4}a^2 - b > 0 \;\Longrightarrow\; x = A e^{r_1 t} + B e^{r_2 t}$, where $r_{1,2} = -\frac{1}{2}a \pm \sqrt{\frac{1}{4}a^2 - b}$.

2. $\frac{1}{4}a^2 - b = 0 \;\Longrightarrow\; x = (A + Bt)\, e^{rt}$, where $r = -\frac{1}{2}a$.

3. $\frac{1}{4}a^2 - b < 0 \;\Longrightarrow\; x = e^{\alpha t}\left(A \cos \beta t + B \sin \beta t\right)$, where $\alpha = -\frac{1}{2}a$ and $\beta = \sqrt{b - \frac{1}{4}a^2}$.

Proof. Suppose that $x = e^{rt}$. Then, differentiating, we have
$$\dot{x} = r e^{rt}, \qquad \ddot{x} = r^2 e^{rt}.$$
Substituting into the homogeneous equation we get
$$r^2 e^{rt} + a r e^{rt} + b e^{rt} = 0.$$
As $e^{rt} > 0$ for all real $t$, we can divide this equation by $e^{rt}$ and get
$$r^2 + ar + b = 0.$$
That is, our differential equation becomes a second-degree equation. This equation, often called the characteristic equation, solves as
$$r_{1,2} = \frac{-a \pm \sqrt{a^2 - 4b}}{2},$$
with two solutions. There are three possible cases, depending on the sign of the number under the radical.

1. If $a^2 - 4b > 0$, the second-degree equation above has two different real solutions $r_1$ and $r_2$. Moreover, the two solutions $u_1 = e^{r_1 t}$ and $u_2 = e^{r_2 t}$ are not proportional. To see this, assume that there exists a real constant $k \neq 0$ such that $e^{r_1 t} = k e^{r_2 t}$. Taking logarithms we get
$$r_1 t = r_2 t + \log k, \qquad t\,(r_1 - r_2) = \log k.$$
Because $(r_1 - r_2)$ is a nonzero constant, there is no constant $k$ satisfying this last equation for all $t \in [t_0, t_1]$. Thus, following Theorem 3.2, the general solution is
$$x(t) = A e^{r_1 t} + B e^{r_2 t},$$
with $A$ and $B$ arbitrary constants.

2. If $a^2 - 4b = 0$, then we have just one solution for the second-degree characteristic equation, namely $r = -a/2$. Thus, one solution of our differential equation is $u_1(t) = e^{rt}$. In order to find the general solution we will prove that $u_2(t) = t e^{rt}$ is also a solution of the differential equation. Notice that
$$u_2(t) = t e^{rt} \;\Longrightarrow\; \dot{u}_2(t) = e^{rt} + r t e^{rt}, \qquad \ddot{u}_2(t) = r e^{rt} + r e^{rt} + r^2 t e^{rt} = 2 r e^{rt} + r^2 t e^{rt}.$$
Substituting into the homogeneous equation we get
$$\ddot{u}_2 + a \dot{u}_2 + b u_2 = 2 r e^{rt} + r^2 t e^{rt} + a e^{rt} + a r t e^{rt} + b t e^{rt} = t e^{rt}\left(r^2 + ar + b\right) + e^{rt}\left(2r + a\right).$$
Notice that the first parenthesis is simply the characteristic equation and, because $r$ is a solution of it, this parenthesis equals zero. Remembering that $r = -a/2$, we can also see that the second parenthesis equals zero. Then, we can write
$$\ddot{u}_2 + a \dot{u}_2 + b u_2 = 0.$$
Thus, it is proved that $u_2(t) = t e^{rt}$ is also a solution of the differential equation. We can use the same procedure as in case 1 to prove that $u_1(t) = e^{rt}$ and $u_2(t) = t e^{rt}$ are not proportional. Thus, following Theorem 3.2, the general solution is
$$x(t) = A u_1(t) + B u_2(t) = e^{rt}\left(A + Bt\right).$$

3. If $a^2 - 4b < 0$, then the characteristic equation has two complex conjugate solutions. It can be proved that we can always choose two complex conjugate constants such that a linear combination of the solutions is a real-valued function. We leave the demonstration for the appendix, because it requires some calculations with complex numbers. Thus, the general solution becomes
$$x = e^{\alpha t}\left(A \cos \beta t + B \sin \beta t\right), \qquad \text{where } \alpha = -\frac{1}{2}a \text{ and } \beta = \sqrt{b - \frac{1}{4}a^2}.$$

Unfortunately, the second step, searching for a particular solution of the nonhomogeneous equation, cannot be treated in a systematic and rigorous manner. All we have is a practical rule: for the particular solution of the nonhomogeneous equation, required in step 2, try a function that mimics $f(t)$ in (3.7). For example:

1. $f(t) = A \Rightarrow u^* = A/b$ (try a constant $u^*$).

2. $f(t)$ is a polynomial of degree $n$: try $u^* = A_n t^n + A_{n-1} t^{n-1} + \cdots + A_1 t + A_0$. Then, if this fails, try a polynomial of degree $n + 1$.

3. $f(t) = p e^{qt}$: try $u^* = A e^{qt}$.

4. $f(t) = p \sin rt + q \cos rt$: try $u^* = A \sin rt + B \cos rt$.
Example 3.10. Solve the differential equation
$$\ddot{x} + \dot{x} - 20x = 20t + 19,$$
with $x(0) = 6$ and $\dot{x}(0) = 3$.

The associated homogeneous equation is
$$\ddot{x} + \dot{x} - 20x = 0,$$
and the characteristic equation is
$$r^2 + r - 20 = 0.$$
This second-degree equation solves as
$$r = \frac{-1 \pm \sqrt{1 + 80}}{2} = \frac{-1 \pm 9}{2} \;\Longrightarrow\; r_1 = 4 \text{ and } r_2 = -5.$$
Now, in order to look for a particular solution of the nonhomogeneous equation, let us assume $u^*(t) = Ct + D$. Then we have $\dot{u}^* = C$ and $\ddot{u}^* = 0$. Substituting into the nonhomogeneous equation gives
$$C - 20\,(Ct + D) = 20t + 19,$$
$$-20Ct + C - 20D = 20t + 19.$$
We need to solve the last equation for $C$ and $D$, for all possible values of $t$. This can be done using the method of undetermined coefficients: the equation states the equality of two polynomials in $t$, which can hold for all $t$ only if the corresponding coefficients are equal. Thus, we can write
$$-20C = 20 \;\Longrightarrow\; C = -1,$$
and
$$C - 20D = 19 \;\Longrightarrow\; -1 - 20D = 19 \;\Longrightarrow\; -20D = 20 \;\Longrightarrow\; D = -1.$$
So we can choose $u^*(t) = -t - 1$. The general solution is, then,
$$x = A e^{4t} + B e^{-5t} - t - 1,$$
and, differentiating, we have
$$\dot{x} = 4A e^{4t} - 5B e^{-5t} - 1.$$
We can now use the initial conditions to get
$$x(0) = 6 = A + B - 1, \qquad \dot{x}(0) = 3 = 4A - 5B - 1.$$
Collecting terms and multiplying the first equation by 5, we get
$$35 = 5A + 5B, \qquad 4 = 4A - 5B.$$
Adding the two equations together we get
$$39 = 9A \;\Longrightarrow\; A = \frac{39}{9}.$$
Solving the first linear equation for $B$ we get
$$B = 7 - A \;\Longrightarrow\; B = \frac{24}{9}.$$
The solution is, then,
$$x = \frac{39}{9} e^{4t} + \frac{24}{9} e^{-5t} - t - 1.$$
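A quick symbolic verification of Example 3.10 (our addition): the candidate solution satisfies the equation and both initial conditions.

```python
# Verify Example 3.10: x(t) = 39/9*exp(4t) + 24/9*exp(-5t) - t - 1 solves
# x'' + x' - 20x = 20t + 19 with x(0) = 6 and x'(0) = 3.
import sympy as sp

t = sp.symbols("t")
x = sp.Rational(39, 9) * sp.exp(4 * t) + sp.Rational(24, 9) * sp.exp(-5 * t) - t - 1

residual = sp.diff(x, t, 2) + sp.diff(x, t) - 20 * x - (20 * t + 19)
print(sp.simplify(residual))                     # prints 0
print(x.subs(t, 0), sp.diff(x, t).subs(t, 0))    # prints 6 and 3
```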

Example 3.11. Solve the differential equation
$$\ddot{x}(t) - 5\dot{x}(t) + 6x(t) = e^{3t},$$
where $x(0) = 0$ and $\dot{x}(1) = e^3$.

The characteristic equation of the associated homogeneous equation is
$$r^2 - 5r + 6 = 0 \;\Longrightarrow\; r = \frac{5 \pm \sqrt{25 - 24}}{2} \;\Longrightarrow\; r_1 = 3, \quad r_2 = 2.$$
Consider $u^* = Ce^{3t}$ as a possible particular solution of the nonhomogeneous equation. Then we have
$$\dot{u}^* = 3Ce^{3t} \quad \text{and} \quad \ddot{u}^* = 9Ce^{3t}.$$
Substituting in the nonhomogeneous equation we get
$$9Ce^{3t} - 15Ce^{3t} + 6Ce^{3t} = e^{3t}, \qquad 9C - 15C + 6C = 1.$$
It is easy to see that there is no constant $C$ satisfying this last equation. Let us instead try $u^* = Cte^{3t}$ as a possible particular solution of the nonhomogeneous equation. Then we have
$$\dot{u}^* = 3Cte^{3t} + Ce^{3t} = Ce^{3t}\left(1 + 3t\right),$$
$$\ddot{u}^* = 3Ce^{3t}\left(1 + 3t\right) + 3Ce^{3t} = 3Ce^{3t}\left(2 + 3t\right).$$
Substituting in the nonhomogeneous equation we get
$$3Ce^{3t}\left(2 + 3t\right) - 5Ce^{3t}\left(1 + 3t\right) + 6Cte^{3t} = e^{3t},$$
$$3C\left(2 + 3t\right) - 5C\left(1 + 3t\right) + 6Ct = 1.$$
Notice that the coefficient of $t$ on the right-hand side of this equation is 0; collecting the terms in $t$ we can write
$$Ct\left(9 - 15 + 6\right) = 0,$$
and any constant $C$ satisfies this equation. For the remaining terms we can write
$$C\left(6 - 5\right) = 1 \;\Longrightarrow\; C = 1.$$
Thus, the particular solution is $u^* = te^{3t}$. The general solution is, then,
$$x = Ae^{3t} + Be^{2t} + te^{3t},$$
and
$$\dot{x} = 3Ae^{3t} + 2Be^{2t} + 3te^{3t} + e^{3t}.$$
Using the first condition, we can write
$$x(0) = 0 = A + B \;\Longrightarrow\; B = -A.$$
Using the second condition we can write
$$\dot{x}(1) = e^3 = 3Ae^3 - 2Ae^2 + 3e^3 + e^3,$$
$$-3e^3 = Ae^2\left(3e - 2\right),$$
$$A = \frac{3e}{2 - 3e}.$$
Thus, the solution is
$$x = \frac{3e}{2 - 3e}\left(e^{3t} - e^{2t}\right) + te^{3t}.$$

3.6 An Application to Natural Resources

Very often we study animal populations, especially fisheries, using the Schaefer model (named after the biologist M. B. Schaefer). This model is based on the following differential equation, describing the behavior of the stock, $x$, in terms of two parameters: the intrinsic growth rate, $r$, and the carrying capacity, $K$:
$$\frac{dx}{dt} = rx\left(1 - \frac{x}{K}\right).$$
Let us solve this differential equation. We can solve it by separation; that is, rewrite it as
$$\frac{K}{x\left(K - x\right)}\, \frac{dx}{dt} = r. \qquad (3.9)$$
Now write
$$\frac{A}{x} + \frac{B}{K - x} = \frac{K}{x\left(K - x\right)}. \qquad (3.10)$$
Then it must be that
$$AK - Ax + Bx = K.$$
This expression gives us two equations:
$$B - A = 0 \;\Longrightarrow\; A = B,$$
$$AK = K \;\Longrightarrow\; A = 1 \text{ and } B = 1.$$

Using these results in (3.10) we have
$$\frac{1}{x} + \frac{1}{K - x} = \frac{K}{x\left(K - x\right)}.$$
Thus, the equation in (3.9) can be solved as
$$\int \frac{dx}{x} + \int \frac{dx}{K - x} = \int r\, dt.$$
These integrals solve as
$$\ln x - \ln\left(K - x\right) = rt + \ln C. \qquad (3.11)$$
We can solve this as
$$\frac{x}{K - x} = Ce^{rt},$$
$$x = \frac{KCe^{rt}}{1 + Ce^{rt}},$$
$$x = \frac{K}{1 + C^{-1}e^{-rt}}.$$
The constant $C$ can be found from (3.11), assuming that $x(0) = x_0$. That is,
$$\ln\frac{x_0}{K - x_0} = \ln C \;\Longrightarrow\; C = \frac{x_0}{K - x_0}.$$
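As a check of this closed-form solution (our addition), the snippet below integrates the Schaefer equation numerically and compares the result with the logistic formula derived above, for illustrative parameter values.

```python
# Compare the closed-form logistic solution x(t) = K / (1 + C^{-1} e^{-rt}),
# with C = x0 / (K - x0), against a numerical integration of dx/dt = r x (1 - x/K).
# Parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

r, K, x0 = 0.5, 100.0, 10.0
C = x0 / (K - x0)

def schaefer(t, x):
    return [r * x[0] * (1.0 - x[0] / K)]

t_eval = np.linspace(0.0, 20.0, 5)
numeric = solve_ivp(schaefer, (0.0, 20.0), [x0], t_eval=t_eval, rtol=1e-8).y[0]
closed_form = K / (1.0 + np.exp(-r * t_eval) / C)

print("max abs difference:", np.max(np.abs(numeric - closed_form)))  # very small
```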

3.7 Exercises

3.1. Solve $\dot{x} + 3t^2 x = t^2 e^{-t^3}$, given that $x(1) = 0$.

3.2. Solve
$$\frac{dy}{dt} + 12y + 2e^t = 0,$$
subject to $y(0) = 6/7$.


3.3. Solve
$$\ddot{x} + \dot{x} - 20x = 20t - 19,$$
when $x(0) = 6$ and $\dot{x}(0) = 3$.

3.4. Solve the following differential equation:
$$y''(t) + 5y'(t) + 4y(t) = 1, \qquad y(0) = 4, \quad y'(0) = 2.$$

3.5. Solve the following differential equation:
$$\ddot{x} - 2a\dot{x} + a^2 x = a^2 \sin at.$$
3.6. Consider the following differential equation:
$$\ddot{x} - \frac{2}{1 - \alpha}\,\dot{x} + \frac{1}{1 - \alpha}\, x = \frac{ac}{1 - \alpha}, \qquad \alpha \neq 1, \quad ac \neq 0.$$

1. Find the general solution of the associated homogeneous equation. What is the solution when $\alpha = 0$?

2. Find the general solution of the nonhomogeneous equation.

3.7. Solve the following differential equation:
$$\ddot{x}(t) - 5\dot{x}(t) + 6x(t) = e^{3t}, \qquad x(0) = 0, \quad x(1) = e^4.$$
(Hint: try $x(t) = Bte^{3t}$.)


3.8. Solve the following differential equation:
$$\ddot{x} + 2\dot{x} - 15x + 1 = t^2, \qquad x(0) = 0, \quad x(1) = 2.$$

3.9. Solve the following differential equation:
$$\ddot{x}(t) - 5\dot{x}(t) + 6x(t) = e^{3t}, \qquad x(0) = 0, \quad x(1) = e^4.$$
(Hint: try $x(t) = Bte^{3t}$.)

3.10. Solve
$$y'' + 6y' + 5y = 10.$$

Chapter 4

References.
Enders, Walter (1995), Applied Econometric Time Series, New York: John Wiley & Sons, Inc.

Samuelson, Paul (1939), "Interactions Between the Multiplier Analysis and the Principle of Acceleration", Review of Economics and Statistics, 21, 75-78.
