
Theory of Ordinary Differential Equations

Christopher P. Grant
Brigham Young University

Contents

1 Fundamental Theory
  1.1 ODEs and Dynamical Systems
  1.2 Existence of Solutions
  1.3 Uniqueness of Solutions
  1.4 Picard-Lindelöf Theorem
  1.5 Intervals of Existence
  1.6 Dependence on Parameters

2 Linear Systems
  2.1 Constant Coefficient Linear Equations
  2.2 Understanding the Matrix Exponential
  2.3 Generalized Eigenspace Decomposition
  2.4 Operators on Generalized Eigenspaces
  2.5 Real Canonical Form
  2.6 Solving Linear Systems
  2.7 Qualitative Behavior of Linear Systems
  2.8 Exponential Decay
  2.9 Nonautonomous Linear Systems
  2.10 Nearly Autonomous Linear Systems
  2.11 Periodic Linear Systems

3 Topological Dynamics
  3.1 Invariant Sets and Limit Sets
  3.2 Regular and Singular Points
  3.3 Definitions of Stability
  3.4 Principle of Linearized Stability
  3.5 Lyapunov’s Direct Method
  3.6 LaSalle’s Invariance Principle

4 Conjugacies
  4.1 Hartman-Grobman Theorem: Part 1
  4.2 Hartman-Grobman Theorem: Part 2
  4.3 Hartman-Grobman Theorem: Part 3
  4.4 Hartman-Grobman Theorem: Part 4
  4.5 Hartman-Grobman Theorem: Part 5
  4.6 Constructing Conjugacies
  4.7 Smooth Conjugacies

5 Invariant Manifolds
  5.1 Stable Manifold Theorem: Part 1
  5.2 Stable Manifold Theorem: Part 2
  5.3 Stable Manifold Theorem: Part 3
  5.4 Stable Manifold Theorem: Part 4
  5.5 Stable Manifold Theorem: Part 5
  5.6 Stable Manifold Theorem: Part 6
  5.7 Center Manifolds
  5.8 Computing and Using Center Manifolds

6 Periodic Orbits
  6.1 Poincaré-Bendixson Theorem
  6.2 Liénard’s Equation
  6.3 Liénard’s Theorem




1 Fundamental Theory

1.1 ODEs and Dynamical Systems

Ordinary Differential Equations

An ordinary differential equation (or ODE) is an equation involving


derivatives of an unknown quantity with respect to a single variable.
More precisely, suppose j, n ∈ N, E is a Euclidean space, and

F : dom(F) ⊆ R × E × · · · × E → Rʲ,        (1.1)

where the product contains n + 1 copies of E.

Then an nth order ordinary differential equation is an equation of the


form

F(t, x(t), ẋ(t), ẍ(t), x⁽³⁾(t), · · · , x⁽ⁿ⁾(t)) = 0.        (1.2)

If I ⊆ R is an interval, then x : I → E is said to be a solution of (1.2) on I


if x has derivatives up to order n at every t ∈ I, and those derivatives
satisfy (1.2). Often, we will use notation that suppresses the depen-
dence of x on t. Also, there will often be side conditions given that
narrow down the set of solutions. In these notes, we will concentrate
on initial conditions which prescribe x⁽ℓ⁾(t₀) for some fixed t₀ ∈ R
(called the initial time) and some choices of ℓ ∈ {0, 1, . . . , n}. Some ODE
texts examine two-point boundary-value problems, in which the value
of a function and its derivatives at two different points are required to
satisfy given algebraic equations, but we won’t focus on them in this
one.

First-order Equations

Every ODE can be transformed into an equivalent first-order equation.


In particular, given x : I → E, suppose we define

y₁ := x
y₂ := ẋ
y₃ := ẍ
⋮
yₙ := x⁽ⁿ⁻¹⁾,

and let y : I → Eⁿ be defined by y = (y₁, . . . , yₙ). For i = 1, 2, . . . , n−1,


define
Gi : R × E n × E n → E
by

G₁(t, u, p) := p₁ − u₂
G₂(t, u, p) := p₂ − u₃
G₃(t, u, p) := p₃ − u₄
⋮
Gₙ₋₁(t, u, p) := pₙ₋₁ − uₙ,

and, given F as in (1.1), define Gₙ : dom(Gₙ) ⊆ R × Eⁿ × Eⁿ → Rʲ by

Gₙ(t, u, p) := F(t, u₁, . . . , uₙ, pₙ),

where

dom(Gₙ) = {(t, u, p) ∈ R × Eⁿ × Eⁿ : (t, u₁, . . . , uₙ, pₙ) ∈ dom(F)}.

Letting G : dom(Gₙ) ⊆ R × Eⁿ × Eⁿ → Eⁿ⁻¹ × Rʲ be defined by

G := (G₁, G₂, G₃, . . . , Gₙ)ᵀ,
2




theoryofodes July 4, 2007 13:20 Page 3 

ODEs and Dynamical Systems

we see that x satisfies (1.2) if and only if y satisfies G(t, y(t), ẏ(t)) =
0.
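To make the reduction concrete, here is a small numerical sketch (not part of the original notes; the pendulum equation and all names are illustrative). The second-order ODE θ̈ + sin θ = 0 becomes the first-order system ẏ₁ = y₂, ẏ₂ = −sin y₁:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduce theta'' + sin(theta) = 0 to a first-order system with
# y1 = theta, y2 = theta', exactly as in the construction above.
def pendulum(t, y):
    y1, y2 = y
    return [y2, -np.sin(y1)]

# Integrate from (theta, theta') = (1.0, 0.0) over [0, 10].
sol = solve_ivp(pendulum, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])  # theta(10)
```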

Equations Resolved with Respect to the Derivative

Consider the first-order initial-value problem (or IVP)

F(t, x, ẋ) = 0,   x(t₀) = x₀,   ẋ(t₀) = p₀,        (1.3)

where F : dom(F ) ⊆ R × Rn × Rn → Rn , and x0 , p0 are given ele-


ments of Rn satisfying F (t0 , x0 , p0 ) = 0. The Implicit Function The-
orem says that typically the solutions (t, x, p) of the (algebraic) equa-
tion F (t, x, p) = 0 near (t0 , x0 , p0 ) form an (n+1)-dimensional surface
that can be parametrized by (t, x). In other words, locally the equation
F (t, x, p) = 0 is equivalent to an equation of the form p = f (t, x) for
some f : dom(f ) ⊆ R × Rn → Rn with (t0 , x0 ) in the interior of dom(f ).
Using this f, (1.3) is locally equivalent to the IVP

ẋ = f(t, x),   x(t₀) = x₀.

Autonomous Equations

Let f : dom(f ) ⊆ R × Rn → Rn . The ODE

ẋ = f (t, x) (1.4)

is autonomous if f doesn’t really depend on t, i.e., if dom(f ) = R × Ω


for some Ω ⊆ Rn and there is a function g : Ω → Rn such that f (t, u) =
g(u) for every t ∈ R and every u ∈ Ω.
Every nonautonomous ODE is actually equivalent to an autonomous
ODE. To see why this is so, given x : R → Rⁿ, define y : R → Rⁿ⁺¹ by
y(t) = (t, x₁(t), . . . , xₙ(t)), and given f : dom(f) ⊆ R × Rⁿ → Rⁿ,
define a new function f̃ : dom(f̃) ⊆ Rⁿ⁺¹ → Rⁿ⁺¹ by

f̃(p) = (1, f₁(p₁, (p₂, . . . , pₙ₊₁)), . . . , fₙ(p₁, (p₂, . . . , pₙ₊₁)))ᵀ,

where f = (f₁, . . . , fₙ)ᵀ and

dom(f̃) = {p ∈ Rⁿ⁺¹ : (p₁, (p₂, . . . , pₙ₊₁)) ∈ dom(f)}.

Then x satisfies (1.4) if and only if y satisfies ẏ = f˜(y).


Because of the discussion above, we will focus our study on first-
order autonomous ODEs that are resolved with respect to the deriva-
tive. This decision is not completely without loss of generality, because
by converting other sorts of ODEs into equivalent ones of this form, we
may be neglecting some special structure that might be useful for us
to consider. This trade-off between abstractness and specificity is one
that you will encounter (and have probably already encountered) in
other areas of mathematics. Sometimes, when transforming the equa-
tion would involve too great a loss of information, we’ll specifically
study higher-order and/or nonautonomous equations.

Dynamical Systems

As we shall see, by placing conditions on the function f : Ω ⊆ Rn → Rn


and the point x₀ ∈ Ω we can guarantee that the autonomous IVP

ẋ = f(x),   x(0) = x₀        (1.5)

has a solution defined on some interval I containing 0 in its interior,


and this solution will be unique (up to restriction or extension). Fur-
thermore, it is possible to “splice” together solutions of (1.5) in a nat-
ural way, and, in fact, get solutions to IVPs with different initial times.
These considerations lead us to study a structure known as a dynami-
cal system.
Given Ω ⊆ Rn , a continuous dynamical system (or a flow) on Ω is a
function ϕ : R × Ω → Ω satisfying:

1. ϕ(0, x) = x for every x ∈ Ω;

2. ϕ(s, ϕ(t, x)) = ϕ(s + t, x) for every x ∈ Ω and every s, t ∈ R;

3. ϕ is continuous.

If f and Ω are sufficiently “nice” we will be able to define a function


ϕ : R × Ω → Ω by letting ϕ(·, x0 ) be the unique solution of (1.5), and

this definition will make ϕ a dynamical system. Conversely, any con-


tinuous dynamical system ϕ(t, x) that is differentiable with respect to
t is generated by an IVP.
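As a sanity check on these axioms (an illustration, not from the original text), the IVP ẋ = x, x(0) = x₀ generates the explicit flow ϕ(t, x₀) = x₀eᵗ, whose flow properties can be verified directly:

```python
import numpy as np

# Flow of x' = x on Omega = R: phi(t, x) = x * exp(t).
def phi(t, x):
    return x * np.exp(t)

x, s, t = 1.7, 0.3, -1.2
assert np.isclose(phi(0.0, x), x)                    # property 1
assert np.isclose(phi(s, phi(t, x)), phi(s + t, x))  # property 2 (group)
# property 3 (continuity) holds because exp is continuous
```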

Exercise 1 Suppose that:

• Ω ⊆ Rⁿ;

• ϕ : R × Ω → Ω is a continuous dynamical system;

• ∂ϕ(t, x)/∂t exists for every t ∈ R and every x ∈ Ω;

• x₀ ∈ Ω is given;

• y : R → Ω is defined by y(t) := ϕ(t, x₀);

• f : Ω → Rⁿ is defined by f(p) := ∂ϕ(s, p)/∂s |_{s=0}.

Show that y solves the IVP

ẏ = f(y),   y(0) = x₀.

In these notes we will also discuss discrete dynamical systems. Giv-


en Ω ⊆ Rn , a discrete dynamical system on Ω is a function ϕ : Z × Ω →
Ω satisfying:

1. ϕ(0, x) = x for every x ∈ Ω;

2. ϕ(ℓ, ϕ(m, x)) = ϕ(ℓ+m, x) for every x ∈ Ω and every ℓ, m ∈ Z;

3. ϕ is continuous.

There is a one-to-one correspondence between discrete dynamical sys-


tems ϕ and homeomorphisms (continuous functions with continuous
inverses) F : Ω → Ω, this correspondence being given by ϕ(1, ·) = F . If
we relax the requirement of invertibility and take a (possibly noninvert-
ible) continuous function F : Ω → Ω and define ϕ : {0, 1, . . .} × Ω → Ω

by

ϕ(n, x) = F(F(· · · (F(x)) · · · ))   (n copies of F),

then ϕ will almost meet the requirements to be a dynamical system,
the only exception being that property 2, known as the group property,
may fail because ϕ(n, x) is not even defined for n < 0. We may still
call this a dynamical system; if we’re being careful we may call it a
semidynamical system.
In a dynamical system, the set Ω is called the phase space. Dynam-
ical systems are used to describe the evolution of physical systems in
which the state of the system at some future time depends only on the
initial state of the system and on the elapsed time. As an example,
Newtonian mechanics permits us to view the earth-moon-sun system
as a dynamical system, but the phase space is not physical space R3 ,
but is instead an 18-dimensional Euclidean space in which the coordi-
nates of each point reflect the position and momentum of each of the
three objects. (Why isn’t a 9-dimensional space, corresponding to the
 three spatial coordinates of the three objects, sufficient?) 



1.2 Existence of Solutions

Approximate Solutions

Consider the IVP

ẋ = f(t, x),   x(t₀) = a,        (1.6)

where f : dom(f ) ⊆ R × Rn → Rn is continuous, and (t0 , a) ∈ dom(f )


is constant. The Fundamental Theorem of Calculus implies that (1.6)
is equivalent to the integral equation
x(t) = a + ∫_{t₀}^{t} f(s, x(s)) ds.        (1.7)

How does one go about proving that (1.7) has a solution if, unlike
the case with so many IVPs studied in introductory courses, a formula
for a solution cannot be found? One idea is to construct a sequence
of “approximate” solutions, with the approximations becoming better
and better, in some sense, as we move along the sequence. If we can

show that this sequence, or a subsequence, converges to something,


that limit might be an exact solution.
One way of constructing approximate solutions is Picard iteration.
Here, we plug an initial guess in for x on the right-hand side of (1.7),
take the resulting value of the left-hand side and plug that in for x
again, etc. More precisely, we can set x₁(t) := a and recursively define
xₖ₊₁ in terms of xₖ for k ≥ 1 by

xₖ₊₁(t) := a + ∫_{t₀}^{t} f(s, xₖ(s)) ds.

Note that if, for some k, xk = xk+1 then we have found a solution.
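Here is a sketch of Picard iteration in code (illustrative only, not from the notes; the grid-based trapezoidal integral is an extra approximation). For ẋ = x, x(0) = 1, the iterates are essentially the Taylor polynomials of eᵗ:

```python
import numpy as np

# Picard iteration for x' = f(t, x), x(t0) = a, on a uniform grid.
def picard(f, t0, a, T, iterations, n=1001):
    t = np.linspace(t0, T, n)
    x = np.full(n, float(a))                 # x_1(t) := a
    for _ in range(iterations):
        g = f(t, x)
        # cumulative trapezoidal approximation of the integral from t0 to t
        integral = np.concatenate(
            ([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
        x = a + integral                     # x_{k+1}
    return t, x

t, x = picard(lambda t, x: x, 0.0, 1.0, 1.0, iterations=10)
print(x[-1], np.e)  # the iterates approach e at t = 1
```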
Another approach is to construct a Tonelli sequence. For each k ∈ N,
let xₖ(t) be defined by

xₖ(t) = a,                                   if t₀ ≤ t ≤ t₀ + 1/k;
xₖ(t) = a + ∫_{t₀}^{t−1/k} f(s, xₖ(s)) ds,   if t ≥ t₀ + 1/k        (1.8)

for t ≥ t₀, and define xₖ(t) similarly for t ≤ t₀.


 We will use the Tonelli sequence to show that (1.7) (and therefore 

(1.6)) has a solution, and will use Picard iterates to show that, under an

additional hypothesis on f , the solution of (1.7) is unique.
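A corresponding sketch of the Tonelli construction (1.8) in code (again illustrative, not from the notes): because xₖ(t) only uses values of xₖ from times at least 1/k earlier, it can be computed by marching forward in t:

```python
import numpy as np

# Tonelli approximation x_k for x' = f(t, x), x(t0) = a, following (1.8).
def tonelli(f, t0, a, T, k, dt=1e-3):
    t = np.arange(t0, T + dt, dt)
    x = np.full_like(t, float(a))            # x_k = a on [t0, t0 + 1/k]
    lag = int(round(1.0 / (k * dt)))         # grid offset for the delay 1/k
    for i in range(lag + 1, len(t)):
        # left Riemann sum for the integral from t0 to t_i - 1/k
        x[i] = a + np.sum(f(t[:i - lag], x[:i - lag])) * dt
    return t, x

t, x = tonelli(lambda t, x: x, 0.0, 1.0, 1.0, k=50)
print(x[-1])  # approaches e as k grows and dt shrinks
```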

Existence

For the first result, we will need the following definitions and theorems.

Definition A sequence of functions gk : U ⊆ R → Rn is uniformly


bounded if there exists M > 0 such that |gk (t)| ≤ M for every t ∈ U
and every k ∈ N.

Definition A sequence of functions gk : U ⊆ R → Rn is uniformly


equicontinuous if for every ε > 0 there exists a number δ > 0 such
that |gk (t1 ) − gk (t2 )| < ε for every k ∈ N and every t1 , t2 ∈ U satisfy-
ing |t1 − t2 | < δ.

Definition A sequence of functions gk : U ⊆ R → Rn converges uni-


formly to a function g : U ⊆ R → Rn if for every ε > 0 there exists a
number N ∈ N such that if k ≥ N and t ∈ U then |gk (t) − g(t)| < ε.

Definition If a ∈ Rⁿ and β > 0, then the open ball of radius β centered
at a, denoted B(a, β), is the set

{x ∈ Rⁿ : |x − a| < β}.

Theorem (Arzela-Ascoli) Every uniformly bounded, uniformly equicon-


tinuous sequence of functions gk : U ⊆ R → Rn has a subsequence that
converges uniformly on compact (closed and bounded) sets.

Theorem (Uniform Convergence) If a sequence of continuous functions
hₖ : [b, c] → Rⁿ converges uniformly to h : [b, c] → Rⁿ, then

lim_{k↑∞} ∫_{b}^{c} hₖ(s) ds = ∫_{b}^{c} h(s) ds.

We are now in a position to state and prove the Cauchy-Peano Exis-


tence Theorem.

Theorem (Cauchy-Peano) Suppose f : [t0 − α, t0 + α] × B(a, β) → Rn is


 continuous and bounded by M > 0. Then (1.7) has a solution defined on 

[t0 − b, t0 + b], where b = min{α, β/M}.


Proof. For simplicity, we will only consider t ∈ [t0 , t0 + b]. For each
k ∈ N, let xk : [t0 , t0 + b] → Rn be defined by (1.8). We will show that
(xk ) converges to a solution of (1.6).
Step 1: Each xk is well-defined.
Fix k ∈ N. Note that the point (t0 , a) is in the interior of a set on which
f is well-defined. Because of the formula for xk (t) and the fact that
it is, in essence, recursively defined on intervals of width 1/k moving
steadily to the right, if xk failed to be defined on [t0 , t0 + b] then there
would be t₁ ∈ [t₀ + 1/k, t₀ + b) for which |xₖ(t₁) − a| = β. Pick the first
such t₁. Using (1.8) and the bound on f, we see that

|xₖ(t₁) − a| = |∫_{t₀}^{t₁−1/k} f(s, xₖ(s)) ds| ≤ ∫_{t₀}^{t₁−1/k} |f(s, xₖ(s))| ds
             ≤ ∫_{t₀}^{t₁−1/k} M ds = M(t₁ − t₀ − 1/k) < M(b − 1/k)
             ≤ β − M/k < β = |xₖ(t₁) − a|,


which is a contradiction.

Step 2: (xₖ) is uniformly bounded.
Calculating as above, we find that (1.8) implies that, for k ≥ 1/b,

|xₖ(t)| ≤ |a| + ∫_{t₀}^{t₀+b−1/k} |f(s, xₖ(s))| ds ≤ |a| + (b − 1/k)M < |a| + β.

Step 3: (xₖ) is uniformly equicontinuous.
If t₁, t₂ ≥ t₀ + 1/k, then (1.8) gives

|xₖ(t₁) − xₖ(t₂)| = |∫_{t₂−1/k}^{t₁−1/k} f(s, xₖ(s)) ds| ≤ |∫_{t₂−1/k}^{t₁−1/k} |f(s, xₖ(s))| ds| ≤ M|t₂ − t₁|.

Since xₖ is constant on [t₀, t₀ + 1/k] and continuous at t₀ + 1/k, the
estimate |xₖ(t₁) − xₖ(t₂)| ≤ M|t₂ − t₁| holds for every t₁, t₂ ∈ [t₀, t₀ + b].
Thus, given ε > 0, we can set δ = ε/M and see that uniform
equicontinuity holds.
Step 4: Some subsequence (xkℓ ) of (xk ) converges uniformly, say
to x, on [t0 , t0 + b].
This follows directly from the previous steps and the Arzela-Ascoli
Theorem.



Step 5: The function f (·, x(·)) is the uniform limit of (f (·, xkℓ (·)))
on [t0 , t0 + b].
Let ε > 0 be given. Since f is continuous on a compact set, it is
uniformly continuous. Thus, we can pick δ > 0 such that |f (s, p) −
f (s, q)| < ε whenever |p − q| < δ. Since (xkℓ ) converges uniformly
to x, we can pick N ∈ N such that |xkℓ (s) − x(s)| < δ whenever
s ∈ [t0 , t0 + b] and ℓ ≥ N. If ℓ ≥ N, then |f (s, xkℓ (s)) − f (s, x(s))| < ε.
Step 6: The function x is a solution of (1.6).
Fix t ∈ [t₀, t₀ + b]. If t = t₀, then clearly (1.7) holds. If t > t₀, then for
ℓ sufficiently large

x_{kℓ}(t) = a + ∫_{t₀}^{t} f(s, x_{kℓ}(s)) ds − ∫_{t−1/kℓ}^{t} f(s, x_{kℓ}(s)) ds.        (1.9)

Obviously, the left-hand side of (1.9) converges to x(t) as ℓ ↑ ∞. By


the Uniform Convergence Theorem and the uniform convergence of
(f (·, xkℓ (·))), the first integral on the right-hand side of (1.9) converges
to

∫_{t₀}^{t} f(s, x(s)) ds,

and by the boundedness of f the second integral converges to 0. Thus,


taking the limit of (1.9) as ℓ ↑ ∞ we see that x satisfies (1.7), and
therefore (1.6), on [t0 , t0 + b].

Note that this theorem guarantees existence, but not necessarily


uniqueness, of a solution of (1.6).

Exercise 2 How many solutions of the IVP

ẋ = 2√|x|,   x(0) = 0,

on the interval (−∞, ∞) are there? Give formulas for all of them.

1.3 Uniqueness of Solutions


 Uniqueness 



If more than continuity of f is assumed, it may be possible to prove
that

ẋ = f(t, x),   x(t₀) = a,        (1.10)

has a unique solution. In particular Lipschitz continuity of f (t, ·) is


sufficient. (Recall that g : dom(g) ⊆ Rn → Rn is Lipschitz continuous if
there exists a constant L > 0 such that |g(x1 ) − g(x2 )| ≤ L|x1 − x2 |
for every x1 , x2 ∈ dom(g); L is called a Lipschitz constant for g.)
One approach to uniqueness is developed in the following exercise,
which uses what are known as Gronwall inequalities.

Exercise 3 Assume that the conditions of the Cauchy-Peano


Theorem hold and that, in addition, f (t, ·) is Lipschitz continu-
ous with Lipschitz constant L for every t. Show that the solution
of (1.10) is unique on [t0 , t0 +b] by completing the following steps.


(The solution can similarly be shown to be unique on [t0 − b, t0 ],


but we won’t bother doing that here.)

(a) If x1 and x2 are each solutions of (1.10) on [t0 , t0 + b] and


U : [t0 , t0 + b] → R is defined by U(t) := |x1 (t) − x2 (t)|,
show that

U(t) ≤ L ∫_{t₀}^{t} U(s) ds        (1.11)

for every t ∈ [t0 , t0 + b].

(b) Pick ε > 0 and let


V(t) := ε + L ∫_{t₀}^{t} U(s) ds.

Show that
V ′ (t) ≤ LV (t) (1.12)

for every t ∈ [t0 , t0 + b], and that V (t0 ) = ε.


 (c) Dividing both sides of (1.12) by V (t) and integrating from t = 



t0 to t = T , show that V (T ) ≤ ε exp[L(T − t0 )].

(d) By using (1.11) and letting ε ↓ 0, show that U(T ) = 0 for all
T ∈ [t0 , t0 + b], so x1 = x2 .

We will prove an existence-uniqueness theorem that combines the


results of the Cauchy-Peano Theorem and Exercise 3. The reason for
this apparently redundant effort is that the concepts and techniques
introduced in this proof will be useful throughout these notes.
First, we review some definitions and results pertaining to metric
spaces.

Definition A metric space is a set X together with a function d : X ×X →


R satisfying:

1. d(x, y) ≥ 0 for every x, y ∈ X, with equality if and only if x = y;

2. d(x, y) = d(y, x) for every x, y ∈ X;



3. d(x, y) + d(y, z) ≥ d(x, z) for every x, y, z ∈ X.

Definition A normed vector space is a vector space together with a func-
tion ‖ · ‖ : X → R satisfying:

1. ‖x‖ ≥ 0 for every x ∈ X, with equality if and only if x = 0;

2. ‖λx‖ = |λ|‖x‖ for every x ∈ X and every scalar λ;

3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ for every x, y ∈ X.

Every normed vector space is a metric space with metric d(x, y) =
‖x − y‖.

Definition An inner product space is a vector space together with a
function ⟨·, ·⟩ : X × X → R satisfying:

1. ⟨x, x⟩ ≥ 0 for every x ∈ X, with equality if and only if x = 0;

2. ⟨x, y⟩ = ⟨y, x⟩ for every x, y ∈ X;

3. ⟨λx + µy, z⟩ = λ⟨x, z⟩ + µ⟨y, z⟩ for every x, y, z ∈ X and all
scalars µ, λ.

Every inner product space is a normed vector space with norm
‖x‖ = √⟨x, x⟩.

Definition A sequence (xn ) in a metric space X is said to be (a) Cauchy


(sequence) if for every ε > 0, there exists N ∈ N such that d(xm , xn ) <
ε whenever m, n ≥ N.

Definition A sequence (xn ) in a metric space X converges to x if for


every ε > 0, there exists N ∈ N such that d(xn , x) < ε whenever n ≥ N.

Definition A metric space is said to be complete if every Cauchy se-


quence in X converges (in X). A complete normed vector space is
called a Banach space. A complete inner product space is called a
Hilbert space.

Definition A function f : X → Y from a metric space to a metric


space is said to be Lipschitz continuous if there exists L ∈ R such that
d(f (u), f (v)) ≤ Ld(u, v) for every u, v ∈ X. We call L a Lipschitz con-
stant, and write Lip(f ) for the smallest Lipschitz constant that works.

Definition A contraction is a Lipschitz continuous function from a met-


ric space to itself that has Lipschitz constant less than 1.

Definition A fixed point of a function T : X → X is a point x ∈ X such


that T (x) = x.

Theorem (Contraction Mapping Principle) If X is a nonempty, complete


metric space and T : X → X is a contraction, then T has a unique fixed
point in X.

Proof. Pick λ < 1 such that d(T(x), T(y)) ≤ λd(x, y) for every x, y ∈
X. Pick any point x₀ ∈ X. Define a sequence (xₖ) by the recursive
formula

xₖ₊₁ = T(xₖ).        (1.13)

If k ≥ ℓ ≥ N, then

d(xₖ, xℓ) ≤ d(xₖ, xₖ₋₁) + d(xₖ₋₁, xₖ₋₂) + · · · + d(xℓ₊₁, xℓ)
          ≤ λd(xₖ₋₁, xₖ₋₂) + λd(xₖ₋₂, xₖ₋₃) + · · · + λd(xℓ, xℓ₋₁)
          ⋮
          ≤ λᵏ⁻¹d(x₁, x₀) + λᵏ⁻²d(x₁, x₀) + · · · + λℓd(x₁, x₀)
          ≤ (λᴺ/(1 − λ)) d(x₁, x₀).

Hence, (xk ) is a Cauchy sequence. Since X is complete, (xk ) converges


to some x ∈ X. By letting k ↑ ∞ in (1.13) and using the continuity of
T , we see that x = T (x), so x is a fixed point of T .
If there were another fixed point y of T , then

d(x, y) = d(T (x), T (y)) ≤ λd(x, y),

so d(x, y) = 0, which means x = y. This shows uniqueness of the


fixed point.
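As a concrete illustration (not from the original text): T(x) = cos x maps [0, 1] into itself and is a contraction there (|T′| ≤ sin 1 < 1), so the iteration (1.13) converges to the unique fixed point regardless of the starting point:

```python
import math

# Fixed-point iteration x_{k+1} = T(x_k) for the contraction T = cos on [0, 1].
x = 0.5
for _ in range(100):
    x = math.cos(x)
print(x)                     # ~0.739085..., the unique fixed point
print(abs(math.cos(x) - x))  # residual near machine precision
```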

1.4 Picard-Lindelöf Theorem

Theorem The space C([a, b]) of continuous functions from [a, b] to Rⁿ
equipped with the norm

‖f‖∞ := sup{|f(x)| : x ∈ [a, b]}

is a Banach space.

Definition Two different norms ‖ · ‖₁ and ‖ · ‖₂ on a vector space X are
equivalent if there exist constants m, M > 0 such that

m‖x‖₁ ≤ ‖x‖₂ ≤ M‖x‖₁

for every x ∈ X.

Theorem If (X, ‖ · ‖₁) is a Banach space and ‖ · ‖₂ is equivalent to ‖ · ‖₁
on X, then (X, ‖ · ‖₂) is a Banach space.

 
Theorem A closed subspace of a complete metric space is a complete



metric space.

We are now in a position to state and prove the Picard-Lindelöf
Existence-Uniqueness Theorem. Recall that we are dealing with the
IVP

ẋ = f(t, x),   x(t₀) = a.        (1.14)

Theorem (Picard-Lindelöf) Suppose f : [t0 − α, t0 + α] × B(a, β) → Rn


is continuous and bounded by M. Suppose, furthermore, that f (t, ·) is
Lipschitz continuous with Lipschitz constant L for every t ∈ [t0 − α, t0 +
α]. Then (1.14) has a unique solution defined on [t0 − b, t0 + b], where
b = min{α, β/M}.

Proof. Let X be the set of continuous functions from [t₀ − b, t₀ + b] to
B(a, β). The norm

‖g‖w := sup{e^{−2L|t−t₀|}|g(t)| : t ∈ [t₀ − b, t₀ + b]}

is equivalent to the standard supremum norm ‖ · ‖∞ on C([t₀ − b, t₀ +
b]), so this vector space is complete under this weighted norm. The set
X endowed with this norm/metric is a closed subset of this complete
Banach space, so X equipped with the metric d(x₁, x₂) := ‖x₁ − x₂‖w
is a complete metric space.
Given x ∈ X, define T(x) to be the function on [t₀ − b, t₀ + b] given
by the formula

T(x)(t) = a + ∫_{t₀}^{t} f(s, x(s)) ds.
Step 1: If x ∈ X then T (x) makes sense.
This should be obvious.
Step 2: If x ∈ X then T (x) ∈ X.
If x ∈ X, then it is clear that T(x) is continuous (and, in fact, differen-
tiable). Furthermore, for t ∈ [t₀ − b, t₀ + b],

|T(x)(t) − a| = |∫_{t₀}^{t} f(s, x(s)) ds| ≤ |∫_{t₀}^{t} |f(s, x(s))| ds| ≤ Mb ≤ β,

so T(x)(t) ∈ B(a, β). Hence, T(x) ∈ X.


Step 3: T is a contraction on X.
Let x, y ∈ X, and note that ‖T(x) − T(y)‖w is

sup{ e^{−2L|t−t₀|} |∫_{t₀}^{t} [f(s, x(s)) − f(s, y(s))] ds| : t ∈ [t₀ − b, t₀ + b] }.

For a fixed t ∈ [t₀ − b, t₀ + b],

e^{−2L|t−t₀|} |∫_{t₀}^{t} [f(s, x(s)) − f(s, y(s))] ds|
  ≤ e^{−2L|t−t₀|} |∫_{t₀}^{t} |f(s, x(s)) − f(s, y(s))| ds|
  ≤ e^{−2L|t−t₀|} |∫_{t₀}^{t} L|x(s) − y(s)| ds|
  ≤ Le^{−2L|t−t₀|} ‖x − y‖w |∫_{t₀}^{t} e^{2L|s−t₀|} ds|
  = (‖x − y‖w/2)(1 − e^{−2L|t−t₀|})
  ≤ ‖x − y‖w/2.

Taking the supremum over all t ∈ [t₀ − b, t₀ + b], we find that T is a
contraction (with λ = 1/2).

By the contraction mapping principle, we therefore know that T has


a unique fixed point in X. This means that (1.14) has a unique solution
in X (which is the only place a solution could be).

1.5 Intervals of Existence

Maximal Interval of Existence

We begin our discussion with some definitions and an important theo-


rem of real analysis.

Definition Given f : D ⊆ R × Rn → Rn , we say that f (t, x) is locally


Lipschitz continuous with respect to x on D if for each (t0 , a) ∈ D there
is a number L and a product set I × U ⊆ D containing (t0 , a) in its
interior such that the restriction of f (t, ·) to U is Lipschitz continuous
with Lipschitz constant L for every t ∈ I.

Definition A subset K of a topological space is compact if whenever


 K is contained in the union of a collection of open sets, there is a fi- 



nite subcollection of that collection whose union also contains K. The
original collection is called a cover of K, and the finite subcollection is
called a finite subcover of the original cover.

Theorem (Heine-Borel) A subset of Rn is compact if and only if it is closed


and bounded.

Now, suppose that D is an open subset of R × Rn , (t0 , a) ∈ D, and


f : D → Rn is locally Lipschitz continuous with respect to x on D.
Then the Picard-Lindelöf Theorem indicates that the IVP


ẋ = f(t, x),   x(t₀) = a        (1.15)

has a solution existing on some time interval containing t0 in its inte-


rior and that the solution is unique on that interval. Let’s say that an
interval of existence is an interval containing t0 on which a solution of
(1.15) exists. The following theorem indicates how large an interval of
existence may be.

Theorem (Maximal Interval of Existence) The IVP (1.15) has a maxi-


mal interval of existence, and it is of the form (ω− , ω+ ), with ω− ∈
[−∞, ∞) and ω+ ∈ (−∞, ∞]. There is a unique solution x(t) of (1.15)
on (ω− , ω+ ), and (t, x(t)) leaves every compact subset K of D as
t ↓ ω− and as t ↑ ω+ .

Proof.
Step 1: If I1 and I2 are open intervals of existence with correspond-
ing solutions x1 and x2 , then x1 and x2 agree on I1 ∩ I2 .
Let I = I1 ∩ I2 , and let I ∗ be the largest interval containing t0 and con-
tained in I on which x1 and x2 agree. By the Picard-Lindelöf Theorem,
I ∗ is nonempty. If I ∗ ≠ I, then I ∗ has an endpoint t1 in I. By conti-
nuity, x1 (t1 ) = x2 (t1 ) =: a1 . The Picard-Lindelöf Theorem implies that
the new IVP

ẋ = f(t, x),   x(t₁) = a₁        (1.16)

 has a local solution that is unique. But restrictions of x1 and x2 near t1 

each provide a solution to (1.16), so x1 and x2 must agree in a neigh-



borhood of t1 . This contradicts the definition of t1 and tells us that
I ∗ = I.
Now, let (ω− , ω+ ) be the union of all open intervals of existence.
Step 2: (ω− , ω+ ) is an interval of existence.
Given t ∈ (ω− , ω+ ), pick an open interval of existence Ĩ that contains
t, and let x(t) = x̃(t), where x̃ is a solution to (1.15) on Ĩ. Because
of step 1, this determines a well-defined function x : (ω− , ω+ ) → Rn ;
clearly, it solves (1.15).
Step 3: (ω− , ω+ ) is the maximal interval of existence.
An extension argument similar to the one in Step 1 shows that every
interval of existence is contained in an open interval of existence. Every
open interval of existence is, in turn, a subset of (ω− , ω+ ).
Step 4: x is the only solution of (1.15) on (ω− , ω+ ).
This is a special case of Step 1.
Step 5: (t, x(t)) leaves every compact subset K ⊂ D as t ↓ ω− and
as t ↑ ω+ .
We only treat what happens as t ↑ ω+ ; the other case is similar. Fur-
thermore, the case when ω+ = ∞ is immediate, so suppose ω+ is finite.

Let a compact subset K of D be given. Since D is open, for each


point (t, a) ∈ K we can pick numbers α(t, a) > 0 and β(t, a) > 0 such
that
[t − 2α(t, a), t + 2α(t, a)] × B(a, 2β(t, a)) ⊂ D.

Note that the collection of sets

{(t − α(t, a), t + α(t, a)) × B(a, β(t, a)) : (t, a) ∈ K}

is a cover of K. Since K is compact, a finite subcollection, say

{(tᵢ − α(tᵢ, aᵢ), tᵢ + α(tᵢ, aᵢ)) × B(aᵢ, β(tᵢ, aᵢ))}ᵢ₌₁ᵐ,

covers K. Let

K′ := ⋃ᵢ₌₁ᵐ ( [tᵢ − 2α(tᵢ, aᵢ), tᵢ + 2α(tᵢ, aᵢ)] × B(aᵢ, 2β(tᵢ, aᵢ)) ),

let

α̃ := min{α(tᵢ, aᵢ)}ᵢ₌₁ᵐ,

and let

β̃ := min{β(tᵢ, aᵢ)}ᵢ₌₁ᵐ.


Since K ′ is a compact subset of D, there is a constant M > 0 such that


f is bounded by M on K ′ . By the triangle inequality,

[t0 − α̃, t0 + α̃] × B(a, β̃) ⊆ K ′ ,

for every (t0 , a) ∈ K, so f is bounded by M on each such product


set. According to the Picard-Lindelöf Theorem, this means that for
every (t0 , a) ∈ K a solution to ẋ = f (t, x) starting at (t0 , a) exists for
at least min{α̃, β̃/M} units of time. Hence, x(t) ∉ K for t > ω+ −
min{α̃, β̃/M}.

Corollary If D′ is a bounded set and D = (c, d) × D′ (with c ∈ [−∞, ∞)


and d ∈ (−∞, ∞]), then either ω+ = d or x(t) → ∂D′ as t ↑ ω+ , and
either ω− = c or x(t) → ∂D′ as t ↓ ω− .

Corollary If D = (c, d) × Rn (with c ∈ [−∞, ∞) and d ∈ (−∞, ∞]), then


either ω+ = d or |x(t)| ↑ ∞ as t ↑ ω+ , and either ω− = c or |x(t)| ↑ ∞
as t ↓ ω− .

If we’re dealing with an autonomous equation on a bounded set,


then the first corollary applies to tell us that the only way a solution
could fail to exist for all time is for it to approach the boundary of
the spatial domain. (Note that this is not the same as saying that x(t)
converges to a particular point on the boundary; can you give a relevant
example?) The second corollary says that autonomous equations on all
of Rn have solutions that exist until they become unbounded.
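For instance (an illustration not in the original notes), ẋ = x², x(0) = 1 has the solution x(t) = 1/(1 − t), so ω₊ = 1 and |x(t)| ↑ ∞ as t ↑ ω₊. A numerical integrator exhibits the same blow-up:

```python
from scipy.integrate import solve_ivp

# x' = x^2, x(0) = 1 blows up at t = 1, where x(t) = 1/(1 - t).
sol = solve_ivp(lambda t, x: x**2, (0.0, 2.0), [1.0], rtol=1e-10)
print(sol.status)  # -1: the solver cannot continue past the blow-up
print(sol.t[-1])   # final time reached, just below 1
```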

Global Existence

For the solution set of the autonomous ODE ẋ = f (x) to be repre-


sentable by a dynamical system, it is necessary for solutions to exist
for all time. As the discussion above illustrates, this is not always the
case. When solutions do die out in finite time by hitting the boundary
of the phase space Ω or by going off to infinity, it may be possible to
change the vector field f to a vector field f˜ that points in the same
direction as the original but has solutions that exist for all time.
For example, if Ω = Rⁿ, then we could consider the modified equa-
tion

ẋ = f(x)/√(1 + |f(x)|²).

Clearly, |ẋ| < 1, so it is impossible for |x| to approach infinity in finite
time.
If, on the other hand, Ω ≠ Rⁿ, then consider the modified equation

ẋ = ( f(x)/√(1 + |f(x)|²) ) · ( d(x, Rⁿ∖Ω)/√(1 + d(x, Rⁿ∖Ω)²) ),

where d(x, Rn \ Ω) is the distance from x to the complement of Ω. It is


not hard to show that it is impossible for a solution x of this equation
to become unbounded or to approach the complement of Ω in finite
time, so, again, we have global existence.
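A sketch of the effect of this rescaling (hypothetical code, not from the notes): the rescaled field has speed less than 1, so the solution exists on any time interval while tracing the same curve in phase space:

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: x**2                                     # blows up at t = 1
f_mod = lambda t, x: f(t, x) / np.sqrt(1.0 + f(t, x)**2)  # |x'| < 1

sol = solve_ivp(f_mod, (0.0, 50.0), [1.0], rtol=1e-10)
print(sol.status, sol.t[-1], sol.y[0, -1])  # 0, 50.0: global existence
```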
It may or may not seem obvious that if two vector fields point in
the same direction at each point, then the solution curves of the cor-
responding ODEs in phase space match up. In the following exercise,
you are asked to prove that this is true.


Exercise 4 Suppose that Ω is a subset of Rn , that f : Ω → Rn


and g : Ω → Rn are (continuous) vector fields, and that there is a
continuous function h : Ω → (0, ∞) such that g(u) = h(u)f (u)
for every u ∈ Ω. If x is the only solution of

ẋ = f(x),   x(0) = a

(defined on the maximal interval of existence) and y is the only
solution of

ẏ = g(y),   y(0) = a

(defined on the maximal interval of existence), show that there is
an increasing function j : dom(y) → dom(x) such that y(t) =
x(j(t)) for every t ∈ dom(y).

 

1.6 Dependence on Parameters


Parameters vs. Initial Conditions

Consider the IVP

ẋ = f(t, x),   x(t₀) = a,        (1.17)

and the parameterized IVP

ẋ = f(t, x, µ),   x(t₀) = a,        (1.18)

where µ ∈ Rk . We are interested in studying how the solution of (1.17)


depends on the initial condition a and how the solution of (1.18) de-
pends on the parameter µ. In a sense, these two questions are equiv-
alent. For example, if x solves (1.17) and we let x̃ := x − a and
f̃(t, x̃, a) := f(t, x̃ + a), then x̃ solves

x̃′ = f̃(t, x̃, a),   x̃(t₀) = 0,





so a appears as a parameter rather than an initial condition. If, on
the other hand, x solves (1.18), and we let x̃ := (x, µ) and f̃(t, x̃) :=
(f(t, x, µ), 0), then x̃ solves

x̃′ = f̃(t, x̃),   x̃(t₀) = (a, µ),

so µ appears as part of the initial condition, rather than as a parameter


in the ODE.
We will concentrate on (1.18).

Continuous Dependence

The following result can be proved by an approach like that outlined


in Exercise 3.

Theorem (Gronwall Inequality) Suppose that X(t) is a nonnegative,
continuous, real-valued function on [t₀, T] and that there are constants
C, K > 0 such that

X(t) ≤ C + K ∫_{t₀}^{t} X(s) ds

for every t ∈ [t₀, T]. Then

X(t) ≤ Ce^{K(t−t₀)}

for every t ∈ [t₀, T].

Using the Gronwall Inequality, we can prove that the solution of
(1.18) depends continuously on µ.

Theorem (Continuous Dependence) Suppose

f : [t0 − α, t0 + α] × Ω1 × Ω2 ⊆ R × Rn × Rk → Rn

is continuous. Suppose, furthermore, that f (t, ·, µ) is Lipschitz continu-


ous with Lipschitz constant L1 > 0 for every (t, µ) ∈ [t0 − α, t0 + α] × Ω2
and f (t, x, ·) is Lipschitz continuous with Lipschitz constant L2 > 0 for
every (t, x) ∈ [t0 − α, t0 + α] × Ω1 . If xi : [t0 − α, t0 + α] → Rn (i = 1, 2)

satisfy

ẋᵢ = f(t, xᵢ, µᵢ),   xᵢ(t₀) = a,

then

|x₁(t) − x₂(t)| ≤ (L₂/L₁)|µ₁ − µ₂|(e^{L₁|t−t₀|} − 1)        (1.19)

for t ∈ [t₀ − α, t₀ + α].

This theorem shows continuous dependence on parameters if, in


addition to the hypotheses of the Picard-Lindelöf Theorem, the right-
hand side of the ODE in (1.18) is assumed to be Lipschitz continuous
with respect to the parameter (on finite time intervals). The connec-
tion between (1.17) and (1.18) shows that the hypotheses of the Picard-
Lindelöf Theorem are sufficient to guarantee continuous dependence
on initial conditions. Note the exponential dependence of the modulus
of continuity on |t − t0 |.

Proof. For simplicity, we only consider t ≥ t₀. Note that

|x₁(t) − x₂(t)| = |∫_{t₀}^{t} [f(s, x₁(s), µ₁) − f(s, x₂(s), µ₂)] ds|
  ≤ ∫_{t₀}^{t} |f(s, x₁(s), µ₁) − f(s, x₂(s), µ₂)| ds
  ≤ ∫_{t₀}^{t} |f(s, x₁(s), µ₁) − f(s, x₁(s), µ₂)| ds
    + ∫_{t₀}^{t} |f(s, x₁(s), µ₂) − f(s, x₂(s), µ₂)| ds
  ≤ ∫_{t₀}^{t} [L₂|µ₁ − µ₂| + L₁|x₁(s) − x₂(s)|] ds.

Let X(t) = L₂|µ₁ − µ₂| + L₁|x₁(t) − x₂(t)|. Then

X(t) ≤ L₂|µ₁ − µ₂| + L₁ ∫_{t₀}^{t} X(s) ds,

so by the Gronwall Inequality X(t) ≤ L₂|µ₁ − µ₂|e^{L₁(t−t₀)}. This means
that (1.19) holds.

Exercise 5 Suppose that f : R × R → R and g : R × R → R


are continuous and are each Lipschitz continuous with respect to
their second variable. Suppose, also, that x is a global solution to


ẋ = f (t, x)

x(t ) = a, 0

and y is a global solution to




ẏ = g(t, y)

y(t ) = b. 0

(a) If f (t, p) < g(t, p) for every (t, p) ∈ R × R and a < b, show
that x(t) < y(t) for every t ≥ t0 .

(b) If f (t, p) ≤ g(t, p) for every (t, p) ∈ R × R and a ≤ b, show


that x(t) ≤ y(t) for every t ≥ t0 . (Hint: You may want to
use the results of part (a) along with a limiting argument.)
 

Differentiable Dependence

Theorem (Differentiable Dependence) Suppose f : R × R × R → R is a
continuous function and is continuously differentiable with respect to x
and µ. Then the solution x(·, µ) of

ẋ = f(t, x, µ),   x(t₀) = a        (1.20)

is differentiable with respect to µ, and y = x_µ := ∂x/∂µ satisfies

ẏ = f_x(t, x(t, µ), µ)y + f_µ(t, x(t, µ), µ),   y(t₀) = 0.        (1.21)

That xµ , if it exists, should satisfy the IVP (1.21) is not terribly sur-
prising; (1.21) can be derived (formally) by differentiating (1.20) with
respect to µ. The real difficulty is showing that xµ exists. The key to

the proof is to use the fact that (1.21) has a solution y and then to
use the Gronwall inequality to show that difference quotients for xµ
converge to y.

Proof. Given µ, it is not hard to see that the right-hand side of the ODE
in (1.21) is continuous in t and y and is locally Lipschitz continuous
with respect to y, so by the Picard-Lindelöf Theorem we know that
(1.21) has a unique solution y(·, µ). Let

w(t, µ, ∆µ) := (x(t, µ + ∆µ) − x(t, µ))/∆µ.

We want to show that w(t, µ, ∆µ) → y(t, µ) as ∆µ → 0.
Let z(t, µ, ∆µ) := w(t, µ, ∆µ) − y(t, µ). Then

(dz/dt)(t, µ, ∆µ) = (dw/dt)(t, µ, ∆µ) − f_x(t, x(t, µ), µ)y(t, µ) − f_µ(t, x(t, µ), µ),

and

(dw/dt)(t, µ, ∆µ) = (f(t, x(t, µ + ∆µ), µ + ∆µ) − f(t, x(t, µ), µ))/∆µ
  = (f(t, x(t, µ + ∆µ), µ + ∆µ) − f(t, x(t, µ), µ + ∆µ))/∆µ
    + (f(t, x(t, µ), µ + ∆µ) − f(t, x(t, µ), µ))/∆µ
  = f_x(t, x(t, µ) + θ₁∆x, µ + ∆µ)w(t, µ, ∆µ) + f_µ(t, x(t, µ), µ + θ₂∆µ),

for some θ1 , θ2 ∈ [0, 1] (by the Mean Value Theorem), where

∆x := x(t, µ + ∆µ) − x(t, µ).

Hence,

|(dz/dt)(t, µ, ∆µ)| ≤ |f_µ(t, x(t, µ), µ + θ₂∆µ) − f_µ(t, x(t, µ), µ)|
  + |f_x(t, x(t, µ) + θ₁∆x, µ + ∆µ)| · |z(t, µ, ∆µ)|
  + |f_x(t, x(t, µ) + θ₁∆x, µ + ∆µ) − f_x(t, x(t, µ), µ + ∆µ)| · |y(t, µ)|
  + |f_x(t, x(t, µ), µ + ∆µ) − f_x(t, x(t, µ), µ)| · |y(t, µ)|
  ≤ p(t, µ, ∆µ) + (|f_x(t, x(t, µ), µ)| + p(t, µ, ∆µ))|z(t, µ, ∆µ)|,

where p(t, µ, ∆µ) → 0 as ∆µ → 0, uniformly on bounded sets.



Letting X(t) = ε + (K + ε)|z|, we see that if

|f_x(t, x(t, µ), µ)| ≤ K        (1.22)

and

|p(t, µ, ∆µ)| < ε,        (1.23)

then

|z(t, µ, ∆µ)| ≤ |∫_{t₀}^{t} |(dz/ds)(s, µ, ∆µ)| ds| ≤ ∫_{t₀}^{t} X(s) ds,

so

X(t) ≤ ε + (K + ε) ∫_{t₀}^{t} X(s) ds,

which gives X(t) ≤ εe^{(K+ε)(t−t₀)}, by Gronwall’s Inequality. This, in turn,
gives

|z| ≤ ε(e^{(K+ε)(t−t₀)} − 1)/(K + ε).
Given t ≥ t₀, pick K so large that (1.22) holds. As ∆µ → 0, we can
take ε arbitrarily small and still have (1.23) hold, to see that

lim_{∆µ→0} z(t, µ, ∆µ) = 0.
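A numerical sketch of this theorem (hypothetical code, not in the notes): for ẋ = µx, x(0) = a, we have x(t, µ) = ae^{µt} and x_µ(t, µ) = ate^{µt}; solving (1.20) and (1.21) together reproduces a finite-difference quotient in µ:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, t0, a, T = 0.5, 0.0, 1.0, 2.0
# For f(t, x, mu) = mu * x: f_x = mu and f_mu = x.
def aug(t, v):            # v = (x, y), with y = dx/dmu solving (1.21)
    x, y = v
    return [mu * x, mu * y + x]

y_T = solve_ivp(aug, (t0, T), [a, 0.0], rtol=1e-10).y[1, -1]

h = 1e-6                  # finite-difference check of dx/dmu at time T
xp = solve_ivp(lambda t, x: (mu + h) * x, (t0, T), [a], rtol=1e-10).y[0, -1]
x0 = solve_ivp(lambda t, x: mu * x, (t0, T), [a], rtol=1e-10).y[0, -1]
print(y_T, (xp - x0) / h, a * T * np.exp(mu * T))  # all three agree
```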

2 Linear Systems

2.1 Constant Coefficient Linear Equations

Linear Equations

Definition Given
f : R × Rn → Rn ,

 we say that the first-order ODE 



ẋ = f (t, x) (2.1)

is linear if every linear combination of solutions of (2.1) is a solution


of (2.1).

Definition Given two vector spaces X and Y, L(X, Y) is the space of


linear maps from X to Y.

Exercise 6 Show that if (2.1) is linear (and f is continuous) then


there is a function A : R → L(Rn , Rn ) such that f (t, p) = A(t)p,
for every (t, p) ∈ R × Rn .

ODEs of the form ẋ = A(t)x + g(t) are also often called linear, al-
though they don’t satisfy the definition given above. These are called
inhomogeneous; ODEs satisfying the previous definition are called ho-
mogeneous.

Constant Coefficients and the Matrix Exponential

Here we will study the autonomous IVP

ẋ = Ax,   x(0) = x₀,        (2.2)

where A ∈ L(Rⁿ, Rⁿ), or equivalently A is a (constant) n × n matrix.
If n = 1, then we’re dealing with ẋ = ax. The solution is x(t) =
e^{ta}x₀. When n > 1, we will show that we can similarly define e^{tA} in a
natural way, and the solution of (2.2) will be given by x(t) = e^{tA}x₀.
Given B ∈ L(Rⁿ, Rⁿ), we define its matrix exponential

e^B := Σ_{k=0}^{∞} B^k/k!.

We will show that this series converges, but first we specify a norm on
L(Rn , Rn ).

Definition The operator norm ‖B‖ of an element B ∈ L(Rⁿ, Rⁿ) is de-
fined by

‖B‖ = sup_{x≠0} |Bx|/|x| = sup_{|x|=1} |Bx|.

L(Rⁿ, Rⁿ) is a Banach space under the operator norm. Thus, to
show that the series for e^B converges, it suffices to show that

‖Σ_{k=ℓ}^{m} B^k/k!‖

can be made arbitrarily small by taking m ≥ ℓ ≥ N for N sufficiently
large.
Suppose B₁, B₂ ∈ L(Rⁿ, Rⁿ) and B₂ does not map everything to
zero. Then

‖B₁B₂‖ = sup_{x≠0} |B₁B₂x|/|x| = sup_{B₂x≠0, x≠0} (|B₁B₂x|/|B₂x|) · (|B₂x|/|x|)
       ≤ (sup_{y≠0} |B₁y|/|y|)(sup_{x≠0} |B₂x|/|x|) = ‖B₁‖ · ‖B₂‖.

If B₂ does map everything to zero, then ‖B₂‖ = ‖B₁B₂‖ = 0, so ‖B₁B₂‖ ≤
‖B₁‖ · ‖B₂‖, obviously. Thus, the operator norm is submultiplicative.
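A quick numerical check of submultiplicativity (a sketch, not part of the notes); for the Euclidean norm, the operator norm is the largest singular value, which numpy exposes as the matrix 2-norm:

```python
import numpy as np

rng = np.random.default_rng(0)
B1 = rng.standard_normal((4, 4))
B2 = rng.standard_normal((4, 4))

op = lambda B: np.linalg.norm(B, 2)   # operator (spectral) norm
assert op(B1 @ B2) <= op(B1) * op(B2) + 1e-12
print(op(B1 @ B2), op(B1) * op(B2))
```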

Using this property, we have

‖Σ_{k=ℓ}^{m} B^k/k!‖ ≤ Σ_{k=ℓ}^{m} ‖B^k‖/k! ≤ Σ_{k=ℓ}^{m} ‖B‖^k/k!.

Since the regular exponential series (for real arguments) has an infinite
radius of convergence, we know that the last quantity in this estimate
goes to zero as ℓ, m ↑ ∞.
Thus, e^B makes sense, and, in particular, e^{tA} makes sense for each
fixed t ∈ R and each A ∈ L(Rⁿ, Rⁿ). But does x(t) := e^{tA}x₀ solve
(2.2)? To check that, we’ll need the following important property.

Lemma If B₁, B₂ ∈ L(Rⁿ, Rⁿ) and B₁B₂ = B₂B₁, then e^{B₁+B₂} = e^{B₁}e^{B₂}.

Proof. Using commutativity, we have

e^{B₁}e^{B₂} = (Σ_{j=0}^{∞} B₁^j/j!)(Σ_{k=0}^{∞} B₂^k/k!) = Σ_{j=0}^{∞} Σ_{k=0}^{∞} B₁^j B₂^k/(j! k!)
          = Σ_{i=0}^{∞} Σ_{j+k=i} B₁^j B₂^k/(j! k!) = Σ_{i=0}^{∞} Σ_{j=0}^{i} B₁^j B₂^{i−j}/(j! (i−j)!)
          = Σ_{i=0}^{∞} (1/i!) Σ_{j=0}^{i} (i choose j) B₁^j B₂^{i−j} = Σ_{i=0}^{∞} (B₁ + B₂)^i/i! = e^{B₁+B₂}.

Now, if x : R → Rⁿ is defined by x(t) := e^{tA}x₀, then

(d/dt)x(t) = lim_{h→0} (x(t + h) − x(t))/h = lim_{h→0} (e^{(t+h)A}x₀ − e^{tA}x₀)/h
           = lim_{h→0} ((e^{(t+h)A} − e^{tA})/h) x₀ = lim_{h→0} ((e^{hA} − I)/h) e^{tA}x₀
           = (lim_{h→0} Σ_{k=1}^{∞} h^{k−1}A^k/k!) e^{tA}x₀ = Ae^{tA}x₀ = Ax(t),

so x(t) = e^{tA}x₀ really does solve (2.2).
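A sketch comparing the truncated power series with a library routine (illustrative; scipy.linalg.expm uses scaling-and-squaring internally, not this series):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # generator of rotations
t = 1.3

# Partial sums of sum_{k>=0} (tA)^k / k!
S = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (t * A) / k
    S = S + term

print(np.allclose(S, expm(t * A)))        # True
```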



2.2 Understanding the Matrix Exponential

Transformations

Now that we have a representation of the solution of linear constant-


coefficient initial-value problems, we should ask ourselves: “What good
is it?” Does the power series formula for the matrix exponential pro-
vide an efficient means for calculating exact solutions? Not usually.
Is it an efficient way to compute accurate numerical approximations
to the matrix exponential? Not according to Matrix Computations by
Golub and Van Loan. Does it provide insight into how solutions be-
have? It is not clear that it does. There are, however, transformations
that may help us handle these problems.
Suppose that B, P ∈ L(Rⁿ, Rⁿ) are related by a similarity transfor-
mation; i.e., B = QPQ⁻¹ for some invertible Q. Calculating, we find
that

e^B = Σ_{k=0}^{∞} B^k/k! = Σ_{k=0}^{∞} (QPQ⁻¹)^k/k! = Σ_{k=0}^{∞} QP^kQ⁻¹/k!
    = Q (Σ_{k=0}^{∞} P^k/k!) Q⁻¹ = Qe^PQ⁻¹.

It would be nice if, given B, we could choose Q so that P were a


diagonal matrix, since (as can easily be checked)

e^{diag{p₁, p₂, ..., pₙ}} = diag{e^{p₁}, e^{p₂}, . . . , e^{pₙ}}.

Unfortunately, this cannot always be done. Over the next few sections,
we will show that what can be done, in general, is to pick Q so that
P = S + N, where S is a semisimple matrix with a fairly simple form,
N is a nilpotent matrix with a fairly simple form, and S and N com-
mute. (Recall that a matrix is semisimple if it is diagonalizable over
the complex numbers and that a matrix is nilpotent if some power of
the matrix is 0.) The forms of S and N are simple enough that we
can calculate their exponentials fairly easily, and then we can multiply
them to get the exponential of S + N.
We will spend a significant amount of time carrying out the project
described in the previous paragraph, even though it is linear algebra
that some of you have probably seen before. Since understanding the

behavior of constant coefficient systems plays a vital role in helping us


understand more complicated systems, I feel that the time investment
is worth it. The particular approach we will take follows chapters 3, 4,
5, and 6, and appendix 3 of Hirsch and Smale fairly closely.

Eigensystems

Given B ∈ L(Rⁿ, Rⁿ), recall that λ ∈ C is an eigenvalue of B if


Bx = λx for some nonzero x ∈ Rn or if B̃x = λx for some nonzero
x ∈ Cn , where B̃ is the complexification of B; i.e., the element of
L(Cn , Cn ) which agrees with B on Rn . (Just as we often identify a linear
operator with a matrix representation of it, we will usually not make
a distinction between an operator on a real vector space and its com-
plexification.) A nonzero vector x for which Bx = λx for some scalar
λ is an eigenvector. An eigenvalue λ with corresponding eigenvector x
form an eigenpair (λ, x).
If an operator A ∈ L(Rⁿ, Rⁿ) were chosen at random, it would al-
most surely have n distinct eigenvalues {λ₁, . . . , λₙ} and n correspond-
ing linearly independent eigenvectors {x₁, . . . , xₙ}. If this is the case,
then A is similar to the (possibly complex) diagonal matrix
diag(λ₁, . . . , λₙ). More specifically,

A = [x₁ ⋯ xₙ] · diag(λ₁, . . . , λₙ) · [x₁ ⋯ xₙ]⁻¹,

where [x₁ ⋯ xₙ] denotes the matrix whose columns are x₁, . . . , xₙ. If
the eigenvalues of A are real and distinct, then this means that

tA = [x₁ ⋯ xₙ] · diag(tλ₁, . . . , tλₙ) · [x₁ ⋯ xₙ]⁻¹,

and the formula for the matrix exponential then yields

e^{tA} = [x₁ ⋯ xₙ] · diag(e^{tλ₁}, . . . , e^{tλₙ}) · [x₁ ⋯ xₙ]⁻¹.

This formula should make clear how the projections of etA x0 grow or
decay as t → ±∞.
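A sketch of this diagonalization route to e^{tA} (illustrative, not from the notes), using numpy's eigendecomposition:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0], [0.0, -3.0]])    # real, distinct eigenvalues
t = 0.7

lam, Q = np.linalg.eig(A)                  # A = Q diag(lam) Q^{-1}
etA = Q @ np.diag(np.exp(t * lam)) @ np.linalg.inv(Q)
print(np.allclose(etA, expm(t * A)))       # True
```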
The same sort of analysis works when the eigenvalues are (non-
trivially) complex, but the resulting formula is not as enlightening. In
addition to the difficulty of a complex change of basis, the behavior of
e^{tλₖ} is less clear when λₖ is not real.
One way around this is the following. Sort the eigenvalues (and
eigenvectors) of A so that complex conjugate eigenvalues {λ1 , λ1 , . . . ,
λm , λm } come first and are grouped together and so that real eigen-
values {λm+1 , . . . , λr } come last. For k ≤ m, set ak = Re λk ∈ R,
bₖ = Im λₖ ∈ R, yₖ = Re xₖ ∈ Rⁿ, and zₖ = Im xₖ ∈ Rⁿ. Then

Ayₖ = A Re xₖ = Re Axₖ = Re λₖxₖ = (Re λₖ)(Re xₖ) − (Im λₖ)(Im xₖ) = aₖyₖ − bₖzₖ,

and

Azₖ = A Im xₖ = Im Axₖ = Im λₖxₖ = (Im λₖ)(Re xₖ) + (Re λₖ)(Im xₖ) = bₖyₖ + aₖzₖ.
Using these facts, we have A = QPQ⁻¹, where

Q = [z₁ y₁ ⋯ zₘ yₘ x_{m+1} ⋯ x_r]

and P is the (m + r) × (m + r) block diagonal matrix whose first
m diagonal blocks are the 2 × 2 matrices

Aₖ = [ aₖ  −bₖ ]
     [ bₖ   aₖ ]

for k = 1, . . . , m, and whose last r − m diagonal blocks are the 1 × 1
matrices [λₖ] for k = m + 1, . . . , r.

In order to compute e^{tA} from this formula, we’ll need to know how
to compute e^{tAₖ}. This can be done using the power series formula. An
alternative approach is to realize that

(x(t), y(t))ᵀ := e^{tAₖ}(c, d)ᵀ

is supposed to solve the IVP

ẋ = aₖx − bₖy,   ẏ = bₖx + aₖy,   x(0) = c,   y(0) = d.        (2.3)

Since we can check that the solution of (2.3) is

x(t) = e^{aₖt}(c cos bₖt − d sin bₖt),   y(t) = e^{aₖt}(d cos bₖt + c sin bₖt),

we can conclude that

e^{tAₖ} = e^{aₖt} [ cos bₖt   −sin bₖt ]
                 [ sin bₖt    cos bₖt ].

Putting this all together and using the form of P, we see that e^{tA} =
Qe^{tP}Q⁻¹, where e^{tP} is the (m + r) × (m + r) block diagonal matrix
whose first m diagonal blocks are the 2 × 2 matrices

e^{aₖt} [ cos bₖt   −sin bₖt ]
        [ sin bₖt    cos bₖt ]

for k = 1, . . . , m, and the last r − m diagonal blocks are the 1 × 1
matrices [e^{λₖt}] for k = m + 1, . . . , r.
This representation of e^{tA} shows that not only may the projec-
tions of e^{tA}x₀ grow or decay exponentially, they may also exhibit si-
nusoidally oscillatory behavior.
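A numerical check of the 2 × 2 block formula (a sketch, not from the notes): e^{tAₖ} is an exponential factor times a rotation:

```python
import numpy as np
from scipy.linalg import expm

a, b, t = -0.2, 3.0, 1.1
Ak = np.array([[a, -b], [b, a]])
R = np.exp(a * t) * np.array([[np.cos(b * t), -np.sin(b * t)],
                              [np.sin(b * t),  np.cos(b * t)]])
print(np.allclose(expm(t * Ak), R))  # True: decay/growth times rotation
```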

2.3 Generalized Eigenspace Decomposition

Eigenvalues don’t have to be distinct for the analysis of the matrix


exponential that was done last time to work. There just needs to be

a basis of eigenvectors for Rn (or Cn ). Unfortunately, we don’t always


have such a basis. For this reason, we need to generalize the notion of
an eigenvector.
First, some definitions:

Definition The algebraic multiplicity of an eigenvalue λ of an operator


A is the multiplicity of λ as a zero of the characteristic polynomial
det(A − xI).

Definition The geometric multiplicity of an eigenvalue λ of an operator


A is the dimension of the corresponding eigenspace, i.e., the dimension
of the space of all the eigenvectors of A corresponding to λ.

It is not hard to show (e.g., through a change-of-basis argument)


that the geometric multiplicity of an eigenvalue is always less than or
equal to its algebraic multiplicity.
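For example (an illustration, not from the notes), the Jordan block [[2, 1], [0, 2]] has eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])   # defective: only one eigenvector
lam = 2.0
# geometric multiplicity = dim null(A - lam*I) = n - rank(A - lam*I)
geo = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geo)  # 1, while the algebraic multiplicity is 2
```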

Definition A generalized eigenvector of A is a vector x such that (A −
λI)^k x = 0 for some scalar λ and some k ∈ N.
λI)k x = 0 for some scalar λ and some k ∈ N.


Definition If λ is an eigenvalue of A, then the generalized eigenspace


of A belonging to λ is the space of all generalized eigenvectors of A
corresponding to λ.

Definition We say that a vector space V is the direct sum of subspaces


V1 , . . . , Vm of V and write

V = V1 ⊕ · · · ⊕ Vm

if for each v ∈ V there is a unique (v1 , . . . , vm ) ∈ V1 × · · · × Vm such


that v = v1 + · · · + vm .

Theorem (Primary Decomposition Theorem) Let B be an operator on E,


where E is a complex vector space, or else E is real and B has real
eigenvalues. Then E is the direct sum of the generalized eigenspaces
of B. The dimension of each generalized eigenspace is the algebraic
multiplicity of the corresponding eigenvalue.

Before proving this theorem, we introduce some notation and state
and prove two lemmas.
Given T : V → V, let

N(T) = {x ∈ V : T^k x = 0 for some k > 0},

and let

R(T) = {x ∈ V : T^k u = x has a solution u for every k > 0}.

Note that N(T ) is the union of the null spaces of the positive powers of
T and R(T ) is the intersection of the ranges of the positive powers of
T . This union and intersection are each nested, and that implies that
there is a number m ∈ N such that R(T ) is the range of T m and N(T )
is the nullspace of T m .

Lemma If T : V → V , then V = N(T ) ⊕ R(T ).


 



Proof. Pick m such that R(T) is the range of T^m and N(T) is the
nullspace of T^m. Note that T|_{R(T)} : R(T) → R(T) is invertible. Given
x ∈ V, let y = (T|_{R(T)})^{−m} T^m x and z = x − y. Clearly, x = y + z,
y ∈ R(T), and T^m z = T^m x − T^m y = 0, so z ∈ N(T). If x = ỹ + z̃ for
some other ỹ ∈ R(T) and z̃ ∈ N(T) then T^m ỹ = T^m x − T^m z̃ = T^m x,
so ỹ = y and z̃ = z.

Lemma If λj , λk are distinct eigenvalues of T : V → V , then

N(T − λj I) ⊆ R(T − λk I).

Proof. Note first that (T − λk I)N(T − λj I) ⊆ N(T − λj I). We claim that,


in fact, (T − λk I)N(T − λj I) = N(T − λj I); i.e., that

(T − λk I)|N(T −λj I) : N(T − λj I) → N(T − λj I)

is invertible. Suppose it isn’t; then we can pick a nonzero x ∈ N(T −
λⱼI) such that (T − λₖI)x = 0. But if x ∈ N(T − λⱼI) then (T − λⱼI)^{mⱼ}x =

0 for some mⱼ ≥ 0. Calculating,

(T − λⱼI)x = Tx − λⱼx = λₖx − λⱼx = (λₖ − λⱼ)x,
(T − λⱼI)²x = T(λₖ − λⱼ)x − λⱼ(λₖ − λⱼ)x = (λₖ − λⱼ)²x,
⋮
(T − λⱼI)^{mⱼ}x = · · · = (λₖ − λⱼ)^{mⱼ}x ≠ 0,

contrary to assumption. Hence, the claim holds.


Note that this implies not only that

(T − λk I)N(T − λj I) = N(T − λj I)

but also that


(T − λₖI)^m N(T − λⱼI) = N(T − λⱼI)

for every m ∈ N. This means that

 N(T − λj I) ⊆ R(T − λk I). 

Proof of the Primary Decomposition Theorem. The claim is obviously


true if the dimension of E is 0 or 1. We prove it for dim E > 1 by
induction on dim E. Suppose it holds on all spaces of smaller dimen-
sion than E. Let λ1 , λ2 , . . . , λq be the eigenvalues of B with algebraic
multiplicities n1 , n2 , . . . , nq . By the first lemma,

E = N(B − λq I) ⊕ R(B − λq I).

Note that dim R(B − λq I) < dim E, and R(B − λq I) is (positively) invari-
ant under B. Applying our assumption to B|R(B−λq I) : R(B − λq I) →
R(B − λq I), we get a decomposition of R(B − λq I) into the generalized
eigenspaces of B|R(B−λq I) . By the second lemma, these are just

N(B − λ1 I), N(B − λ2 I), . . . , N(B − λq−1 I),

so

E = N(B − λ1 I) ⊕ N(B − λ2 I) ⊕ · · · ⊕ N(B − λq−1 I) ⊕ N(B − λq I).



Now, by the second lemma, we know that B|_{N(B−λₖI)} has λₖ as its
only eigenvalue, so dim N(B − λₖI) ≤ nₖ. Since

Σ_{k=1}^{q} nₖ = dim E = Σ_{k=1}^{q} dim N(B − λₖI),

we actually have dim N(B − λₖI) = nₖ.

2.4 Operators on Generalized Eigenspaces

We’ve seen that the space on which a linear operator acts can be decom-
posed into the direct sum of generalized eigenspaces of that operator.
The operator maps each of these generalized eigenspaces into itself,
and, consequently, solutions of the differential equation starting in a
generalized eigenspace stay in that generalized eigenspace for all time.
Now we will see how the solutions within such a subspace behave by
seeing how the operator behaves on this subspace.
It may seem like nothing much can be said in general since, given a
finite-dimensional vector space V, we can define a nilpotent operator
S on V by

 1. picking a basis {v1, . . . , vm} for V;

 2. creating a graph by connecting the nodes {v1, . . . , vm, 0} with
    directed edges in such a way that from each node there is a unique
    directed path to 0;

 3. defining S(vj) to be the unique node vk such that there is a
    directed edge from vj to vk;

 4. extending S linearly to all of V.

By adding any multiple of I to S we have an operator for which V is
a generalized eigenspace. It turns out, however, that there are really
only a small number of different possible structures that may arise
from this seemingly general process.
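For concreteness, here is a minimal instance of this construction (an
illustration of mine, not an example from the text): on V = R³ with
basis {v1, v2, v3}, take the edges v1 → v2, v2 → 0, and v3 → 0. Then
Sv1 = v2, Sv2 = 0, and Sv3 = 0, so S² = 0, and for any λ the operator
T = S + λI satisfies (T − λI)² = 0, making all of V the generalized
eigenspace of T for the eigenvalue λ. In the notation introduced
below, nil(v1) = 2 while nil(v3) = 1, and V = Z(v1) ⊕ Z(v3).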
To make this more precise, we first need a definition, some new
notation, and a lemma.
Definition A subspace Z of a vector space V is a cyclic subspace of S
on V if SZ ⊆ Z and there is some x ∈ Z such that Z is spanned by
{x, Sx, S²x, . . .}.

Given S, note that every vector x ∈ V generates a cyclic subspace.
Call it Z(x) or Z(x, S). If S is nilpotent, write nil(x) or nil(x, S) for the
smallest nonnegative integer k such that S^k x = 0.

Lemma The set {x, Sx, . . . , S^{nil(x)−1} x} is a basis for Z(x).

Proof. Obviously these vectors span Z(x); the question is whether they
are linearly independent. If they were not, we could write down a linear
combination α1 S^{p1} x + ··· + αk S^{pk} x, with αj ≠ 0 and 0 ≤ p1 < p2 <
··· < pk ≤ nil(x) − 1, that added up to zero. Applying S^{nil(x)−p1−1} to
this linear combination would yield α1 S^{nil(x)−1} x = 0, contradicting the
definition of nil(x).

Theorem If S : V → V is nilpotent then V can be written as the direct
sum of cyclic subspaces of S on V. The dimensions of these subspaces
are determined by the operator S.

Proof. The proof is inductive on the dimension of V. It is clearly true
if dim V = 0 or 1. Assume it is true for all operators on spaces of
dimension less than dim V.

Step 1: The dimension of SV is less than the dimension of V .


If this weren’t the case, then S would be invertible and could not pos-
sibly be nilpotent.

Step 2: For some k ∈ N and for some nonzero yj ∈ SV , j = 1, . . . , k,

SV = Z(y1 ) ⊕ · · · ⊕ Z(yk ). (2.4)

This is a consequence of Step 1 and the induction hypothesis.

Pick xj ∈ V such that Sxj = yj , for j = 1, . . . , k. Suppose that



zj ∈ Z(xj ) for each j and

z1 + · · · + zk = 0. (2.5)

We will show that zj = 0 for each j. This will mean that the direct sum
Z(x1 ) ⊕ · · · ⊕ Z(xk ) exists.

Step 3: Sz1 + · · · + Szk = 0.


This follows from applying S to both sides of (2.5).

Step 4: For each j, Szj ∈ Z(yj ).


The fact that zj ∈ Z(xj ) implies that

    zj = α0 xj + α1 Sxj + ··· + α_{nil(xj)−1} S^{nil(xj)−1} xj    (2.6)

for some αi . Applying S to both sides of (2.6) gives

    Szj = α0 yj + α1 Syj + ··· + α_{nil(xj)−2} S^{nil(xj)−2} yj ∈ Z(yj).

 



Step 5: For each j, Szj = 0.
This is a consequence of Step 3, Step 4, and (2.4).

Step 6: For each j, zj ∈ Z(yj ).


If
    zj = α0 xj + α1 Sxj + ··· + α_{nil(xj)−1} S^{nil(xj)−1} xj

then by Step 5

    0 = Szj = α0 yj + α1 Syj + ··· + α_{nil(xj)−2} S^{nil(xj)−2} yj.

Since nil(xj ) − 2 = nil(yj ) − 1, the vectors in this linear combination


are linearly independent; thus, αi = 0 for i = 0, . . . , nil(xj ) − 2. In
particular, α0 = 0, so

    zj = α1 yj + ··· + α_{nil(xj)−1} S^{nil(xj)−2} yj ∈ Z(yj).


Step 7: For each j, zj = 0.


This is a consequence of Step 6, (2.4), and (2.5).

We now know that Z(x1) ⊕ ··· ⊕ Z(xk) =: Ṽ exists, but it is not
necessarily all of V. Choose a subspace W of Null(S) such that Null(S) =
(Ṽ ∩ Null(S)) ⊕ W. Choose a basis {w1, . . . , wℓ} for W and note that
W = Z(w1) ⊕ ··· ⊕ Z(wℓ).

Step 8: The direct sum Z(x1 ) ⊕ · · · ⊕ Z(xk ) ⊕ Z(w1 ) ⊕ · · · ⊕ Z(wℓ )


exists.
This is a consequence of the fact that the direct sums Z(x1 ) ⊕ · · · ⊕
Z(xk ) and Z(w1 ) ⊕ · · · ⊕ Z(wℓ ) exist and that Ṽ ∩ W = {0}.

Step 9: V = Z(x1 ) ⊕ · · · ⊕ Z(xk ) ⊕ Z(w1 ) ⊕ · · · ⊕ Z(wℓ ).


Let x ∈ V be given. Recall that Sx ∈ SV = Z(y1 ) ⊕ · · · ⊕ Z(yk ). Write
Sx = s1 + · · · + sk with sj ∈ Z(yj ). If

    sj = α0 yj + α1 Syj + ··· + α_{nil(yj)−1} S^{nil(yj)−1} yj,


 



let

    uj = α0 xj + α1 Sxj + ··· + α_{nil(yj)−1} S^{nil(yj)−1} xj,

and note that Suj = sj and that uj ∈ Z(xj ). Setting u = u1 + · · · + uk ,


we have

S(x − u) = Sx − Su = (s1 + · · · + sk ) − (s1 + · · · + sk ) = 0,

so x − u ∈ Null(S). By definition of W , that means that

x − u ∈ Z(x1 ) ⊕ · · · ⊕ Z(xk ) ⊕ Z(w1 ) ⊕ · · · ⊕ Z(wℓ ).

Since u ∈ Z(x1 ) ⊕ · · · ⊕ Z(xk ), we have

x ∈ Z(x1 ) ⊕ · · · ⊕ Z(xk ) ⊕ Z(w1 ) ⊕ · · · ⊕ Z(wℓ ).

This completes the proof of the first sentence in the theorem. The
second sentence follows similarly by induction.
2.5 Real Canonical Form

We now use the information contained in the previous theorems to


find simple matrices representing linear operators. Clearly, a nilpotent
operator S on a cyclic space Z(x) can be represented by the matrix

 
    [ 0   0   ···  0   0 ]
    [ 1   0   ···  0   0 ]
    [ 0   1    ⋱       ⋮ ]
    [ ⋮        ⋱   ⋱   ⋮ ]
    [ 0  ···   0   1   0 ],

with the corresponding basis being {x, Sx, . . . , S^{nil(x)−1} x}. Thus, if λ
is an eigenvalue of an operator T, then the restriction of T to a cyclic
subspace of T − λI on the generalized eigenspace N(T − λI) can be
 

represented by a matrix of the form

    [ λ   0   ···  0   0 ]
    [ 1   λ    ⋱       ⋮ ]
    [ 0   1    ⋱   ⋱   ⋮ ]                                      (2.7)
    [ ⋮        ⋱   ⋱   0 ]
    [ 0  ···   0   1   λ ].

If λ = a + bi ∈ C \ R is an eigenvalue of an operator T ∈ L(Rn, Rn),
and Z(x, T − λI) is one of the cyclic subspaces whose direct sum is
N(T − λI), then Z(x̄, T − λ̄I) can be taken to be one of the cyclic
subspaces whose direct sum is N(T − λ̄I). If we set k = nil(x, T − λI) − 1
and yj = Re((T − λI)^j x) and zj = Im((T − λI)^j x) for j = 0, . . . , k, then
we have Tyj = ayj − bzj + yj+1 and Tzj = byj + azj + zj+1 for j =
0, . . . , k − 1, and Tyk = ayk − bzk and Tzk = byk + azk. The 2k + 2
real vectors {z0, y0, . . . , zk, yk} span Z(x, T − λI) ⊕ Z(x̄, T − λ̄I) over
C and also span a (2k + 2)-dimensional space over R that is invariant
under T. On this real vector space, the action of T can be represented
by the matrix

 
    [ D    0   ···   0 ]
    [ I2   D    ⋱    ⋮ ]
    [ 0   I2    ⋱    0 ]                                        (2.8)
    [ ⋮         ⋱    0 ]
    [ 0   ···  I2    D ],

in block form, where I2 is the 2 × 2 identity matrix and D is the
2 × 2 block

    [ a  −b ]
    [ b   a ].

The restriction of an operator to one of its generalized eigenspaces
has a matrix representation like

    [ B1              ]
    [     B2          ]
    [         ⋱       ]                                         (2.9)
    [             Bm  ]

if the eigenvalue λ is real, with blocks of the form (2.7) running down
the diagonal. If the eigenvalue is complex, then the matrix representa-
tion is similar to (2.9) but with blocks of the form (2.8) instead of the
form (2.7) on the diagonal.
Finally, the matrix representation of the entire operator is block
diagonal, with blocks of the form (2.9) (or its counterpart for complex
eigenvalues). This is called the real canonical form. If we specify the
order in which blocks should appear, then matrices are similar if and
only if they have the same real canonical form.

2.6 Solving Linear Systems

Exercise 7 Classify all the real canonical forms for operators
on R4. In other words, find a collection of 4 × 4 matrices, possibly
with (real) variable entries and possibly with constraints on those
variables, such that

 1. Only matrices in real canonical form match one of the
    matrices in your collection.

 2. Each operator on R4 has a matrix representation matching
    one of the matrices in your collection.

 3. No matrix matching one of your matrices is similar to a
    matrix matching one of your other matrices.

For example, a suitable collection of matrices for operators on R2
would be:

    [ λ  0 ]    [ λ  0 ]    [ a  −b ]
    [ 1  λ ] ;  [ 0  µ ] ;  [ b   a ]   (b ≠ 0).

Computing e^{tA}

Given an operator A ∈ L(Rn, Rn), let M be its real canonical form.
Write M = S + N, where S has M's diagonal elements λk and diagonal
blocks

    [ a  −b ]
    [ b   a ]

and 0's elsewhere, and N has M's off-diagonal 1's and 2 × 2 identity
matrices. If you consider the restrictions of S and N to each of the
cyclic subspaces of A − λI into which the generalized eigenspace
N(A − λI) of A is decomposed, you'll probably be able to see that
these restrictions commute. As a consequence of this fact (and the way
Rn can be represented in terms of these cyclic subspaces), S and N
commute. Thus e^{tM} = e^{tS} e^{tN}.

Now, e^{tS} has e^{λk t} where S has λk, and has

    [ e^{ak t} cos bk t   −e^{ak t} sin bk t ]
    [ e^{ak t} sin bk t    e^{ak t} cos bk t ]

where S has

    [ ak  −bk ]
    [ bk   ak ].

The series definition can be used to compute e^{tN}, since the fact
that N is nilpotent implies that the series is actually a finite sum. The
entries of e^{tN} will be polynomials in t. For example,

    [ 0             ]        [ 1                   ]
    [ 1   ⋱         ]        [ t      1            ]
    [     ⋱   ⋱     ]   ↦    [ ⋮      ⋱    ⋱       ]
    [         1  0  ]        [ t^m/m!  ···  t    1 ]

and, in 2 × 2 block form,

    [ 0               ]        [ I2                         ]
    [ I2   0          ]        [ tI2         I2             ]
    [      I2   ⋱     ]   ↦    [ ⋮            ⋱     ⋱       ]
    [           I2  0 ]        [ (t^m/m!)I2  ···   tI2   I2 ]

(where 0 and I2 denote the 2 × 2 zero and identity matrices).

Identifying A with its matrix representation with respect to the
standard basis, we have A = P M P^{-1} for some invertible matrix P.
Consequently, e^{tA} = P e^{tM} P^{-1}. Thus, the entries of e^{tA} will be linear
combinations of polynomials times exponentials or polynomials times
exponentials times trigonometric functions.
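As a quick numerical illustration of the S + N splitting (a sketch of
mine using NumPy/SciPy; the matrix is a single 2 × 2 Jordan block,
chosen for simplicity):

    import numpy as np
    from scipy.linalg import expm

    # M = S + N for a single 2x2 Jordan block with eigenvalue -1:
    # S = -I is the diagonal part, N carries the off-diagonal 1.
    S = np.array([[-1.0,  0.0],
                  [ 0.0, -1.0]])
    N = np.array([[0.0, 0.0],
                  [1.0, 0.0]])
    t = 0.7
    # S and N commute, and N^2 = 0, so e^{tN} = I + tN (a finite sum).
    lhs = expm(t * (S + N))
    rhs = expm(t * S) @ (np.eye(2) + t * N)
    print(np.allclose(lhs, rhs))   # True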

Exercise 8 Compute e^{tA} (and justify your computations) if

 1. A = [ 0   0  0  0 ]
        [ 1   0  0  1 ]
        [ 1   0  0  1 ]
        [ 0  −1  1  0 ]

 2. A = [ 1  1  1  1 ]
        [ 2  2  2  2 ]
        [ 3  3  3  3 ]
        [ 4  4  4  4 ]

Linear Planar Systems

A thorough understanding of constant coefficient linear systems ẋ =
Ax in the plane is very helpful in understanding systems that are
nonlinear and/or higher-dimensional.



There are 3 main categories of real canonical forms for an operator
A in L(R2, R2):

  • [ λ  0 ]
    [ 0  µ ]

  • [ λ  0 ]
    [ 1  λ ]

  • [ a  −b ]
    [ b   a ]   (b ≠ 0)

We will subdivide these 3 categories further into a total of 14
categories and consider the corresponding phase portraits, i.e., sketches
of some of the trajectories or parametric curves traced out by
solutions in phase space.
(Phase portrait sketches omitted; the fourteen cases and their
classifications are as follows.)

  1. A = [ λ 0; 0 µ ], λ < 0 < µ: saddle

  2. A = [ λ 0; 0 µ ], λ < µ < 0: stable node

  3. A = [ λ 0; 0 µ ], λ = µ < 0: stable node

  4. A = [ λ 0; 0 µ ], 0 < µ < λ: unstable node

  5. A = [ λ 0; 0 µ ], 0 < λ = µ: unstable node

  6. A = [ λ 0; 0 µ ], λ < µ = 0: degenerate

  7. A = [ λ 0; 0 µ ], 0 = µ < λ: degenerate

  8. A = [ λ 0; 0 µ ], 0 = µ = λ: degenerate

  9. A = [ λ 0; 1 λ ], λ < 0: stable node

 10. A = [ λ 0; 1 λ ], 0 < λ: unstable node

 11. A = [ λ 0; 1 λ ], λ = 0: degenerate

 12. A = [ a −b; b a ], a < 0 < b: stable spiral

 13. A = [ a −b; b a ], b < 0 < a: unstable spiral

 14. A = [ a −b; b a ], a = 0, b > 0: center

If A is not in real canonical form, then the phase portrait should


look (topologically) similar but may be rotated, flipped, skewed, and/or
stretched.

2.7 Qualitative Behavior of Linear Systems

Parameter Plane

Some of the information from the preceding phase portraits can be
summarized in a parameter diagram. In particular, let τ = trace A and
let δ = det A, so the characteristic polynomial is λ² − τλ + δ. Then
the behavior of the trivial solution x(t) ≡ 0 is given by locating the
corresponding point in the (τ, δ)-plane:

(Parameter-plane diagram omitted. In summary: for δ < 0 the origin
is a saddle; the line δ = 0 gives the degenerate cases; for δ > 0 the
parabola τ² = 4δ separates nodes (τ² ≥ 4δ) from spirals (τ² < 4δ),
each stable when τ < 0 and unstable when τ > 0, with a center on the
segment τ = 0, δ > 0.)
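This classification is easy to sketch in code (my own illustration;
borderline cases are lumped together, and exact comparisons with 0
only make sense for exact arithmetic):

    import numpy as np

    def classify_origin(A):
        # Classify the origin of x' = Ax for a real 2x2 matrix A,
        # using the (trace, determinant) diagram above.
        tau, delta = np.trace(A), np.linalg.det(A)
        if delta < 0:
            return "saddle"
        if delta == 0:
            return "degenerate"
        if tau ** 2 >= 4 * delta:
            return "stable node" if tau < 0 else "unstable node"
        if tau < 0:
            return "stable spiral"
        return "unstable spiral" if tau > 0 else "center"

    print(classify_origin(np.array([[0.0, -1.0], [1.0, 0.0]])))  # center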

Growth and Decay Rates


 

Given A ∈ L(Rn, Rn), let

    E^u = ( ⊕_{λ>0} N(A − λI) )
          ⊕ ( ⊕_{Re λ>0, Im λ≠0} { Re u : u ∈ N(A − λI) } )
          ⊕ ( ⊕_{Re λ>0, Im λ≠0} { Im u : u ∈ N(A − λI) } ),

    E^s = ( ⊕_{λ<0} N(A − λI) )
          ⊕ ( ⊕_{Re λ<0, Im λ≠0} { Re u : u ∈ N(A − λI) } )
          ⊕ ( ⊕_{Re λ<0, Im λ≠0} { Im u : u ∈ N(A − λI) } ),

and

    E^c = N(A)
          ⊕ ( ⊕_{Re λ=0, Im λ≠0} { Re u : u ∈ N(A − λI) } )
          ⊕ ( ⊕_{Re λ=0, Im λ≠0} { Im u : u ∈ N(A − λI) } ),

where each direct sum runs over the eigenvalues λ of A of the
indicated type.
From our previous study of the real canonical form, we know that

    Rn = E^u ⊕ E^s ⊕ E^c.

We call E^u the unstable space of A, E^s the stable space of A, and E^c
the center space of A.
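For instance (a simple diagonal illustration of mine, not from the
text): if A = diag(2, −3, 0) acting on R3, then E^u = span{e1},
E^s = span{e2}, and E^c = span{e3}; solutions of ẋ = Ax starting on
these three axes grow like e^{2t}, decay like e^{−3t}, and remain
constant, respectively.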
Each of these subspaces of Rn is invariant under the differential
equation

    ẋ = Ax.                                                     (2.10)

In other words, if x : R → Rn is a solution of (2.10) and x(0) is in E^u,
E^s, or E^c, then x(t) is in E^u, E^s, or E^c, respectively, for all t ∈ R. We
shall see that each of these spaces is characterized by the growth or
decay rates of the solutions it contains. Before doing so, we state and
prove a basic fact about finite-dimensional normed vector spaces.

Theorem All norms on Rn are equivalent.

Proof. Since equivalence of norms is transitive, it suffices to prove that
every norm N : Rn → R is equivalent to the standard Euclidean norm
| · |.

Given an arbitrary norm N, and letting xi be the projection of x ∈
Rn onto the ith standard basis vector ei, note that

    N(x) = N( ∑_{i=1}^n xi ei ) ≤ ∑_{i=1}^n |xi| N(ei) ≤ ( ∑_{i=1}^n N(ei) ) |x|.

This shows half of equivalence; it also shows that N is continuous,
since, by the triangle inequality,

    |N(x) − N(y)| ≤ N(x − y) ≤ ( ∑_{i=1}^n N(ei) ) |x − y|.

The set S := { x ∈ Rn : |x| = 1 } is clearly closed and bounded and,
therefore, compact, so by the extreme value theorem, N must achieve
a minimum on S. Since N is a norm (and is, therefore, positive definite),
this minimum must be strictly positive; call it k. Then for any x ≠ 0,

    N(x) = N( (x/|x|) |x| ) = |x| N( x/|x| ) ≥ k|x|,
and the estimate N(x) ≥ k|x| obviously holds if x = 0, as well.

Theorem Given A ∈ L(Rn, Rn) and the corresponding decomposition
Rn = E^u ⊕ E^s ⊕ E^c, we have

    E^u = ∪_{c>0} { x ∈ Rn : lim_{t↓−∞} |e^{−ct} e^{tA} x| = 0 },      (2.11)

    E^s = ∪_{c>0} { x ∈ Rn : lim_{t↑∞} |e^{ct} e^{tA} x| = 0 },        (2.12)

and

    E^c = ∩_{c>0} { x ∈ Rn : lim_{t↓−∞} |e^{ct} e^{tA} x| = lim_{t↑∞} |e^{−ct} e^{tA} x| = 0 }.   (2.13)

Proof. By equivalence of norms, instead of using the standard
Euclidean norm on Rn we can use the norm

    ‖x‖ := sup{ |P1 x|, . . . , |Pn x| },

where Pi : Rn → R represents projection onto the ith basis vector
corresponding to the real canonical form. Because of our knowledge of
the structure of the real canonical form, we know that Pi e^{tA} x is either
of the form

    p(t)e^{λt},                                                  (2.14)

where p(t) is a polynomial in t and λ ∈ R is an eigenvalue of A, or of
the form

    p(t)e^{at}(α cos bt + β sin bt),                             (2.15)

where p(t) is a polynomial in t, a + bi ∈ C \ R is an eigenvalue of A, and
α and β are real constants. Furthermore, we know that the constant
λ or a is positive if Pi corresponds to a vector in E^u, is negative if Pi
corresponds to a vector in E^s, and is zero if Pi corresponds to a vector
in E^c.
Now, let x ∈ Rn be given. Suppose first that x ∈ E^s. Then each
Pi e^{tA} x is either identically zero or has as a factor a negative
exponential whose constant is the real part of an eigenvalue of A that is to
the left of the imaginary axis in the complex plane. Let σ(A) be the set
of eigenvalues of A, and set

    c = −(1/2) max{ Re λ : λ ∈ σ(A) and Re λ < 0 }

(note that c > 0, since the maximum is over negative numbers).

Then e^{ct} Pi e^{tA} x is either identically zero or decays exponentially to
zero as t ↑ ∞.
    Conversely, suppose x ∉ E^s. Then Pi x ≠ 0 for some Pi
corresponding to a real canonical basis vector in E^u or in E^c. In either case,
Pi e^{tA} x is not identically zero and is of the form (2.14) where λ ≥ 0 or
of the form (2.15) where a ≥ 0. Thus, if c > 0 then

    lim sup_{t↑∞} |e^{ct} Pi e^{tA} x| = ∞,

so

    lim sup_{t↑∞} ‖e^{ct} e^{tA} x‖ = ∞.
The preceding two paragraphs showed that (2.12) is correct. By
applying this fact to the time-reversed problem ẋ = −Ax, we find that
(2.11) is correct, as well. We now consider (2.13).
    If x ∈ E^c, then for each i, Pi e^{tA} x is either a polynomial or the
product of a polynomial and a periodic function. If c > 0 and we
multiply such a function of t by e^{ct} and let t ↓ −∞, or we multiply it by
e^{−ct} and let t ↑ ∞, then the result converges to zero.
    If, on the other hand, x ∉ E^c then for some i, Pi e^{tA} x contains a
nontrivial exponential term. If c > 0 is sufficiently small then either
e^{ct} Pi e^{tA} x diverges as t ↓ −∞ or e^{−ct} Pi e^{tA} x diverges as t ↑ ∞. This
completes the verification of (2.13).

2.8 Exponential Decay

Definition If E^u = Rn, we say that the origin is a source and e^{tA} is an
expansion.

Definition If E^s = Rn, we say that the origin is a sink and e^{tA} is a
contraction.

Theorem

(a) The origin is a source for the equation ẋ = Ax if and only if for a
    given norm ‖·‖ there are constants k, b > 0 such that

        ‖e^{tA} x‖ ≤ k e^{tb} ‖x‖

    for every t ≤ 0 and x ∈ Rn.

(b) The origin is a sink for the equation ẋ = Ax if and only if for a
    given norm ‖·‖ there are constants k, b > 0 such that

        ‖e^{tA} x‖ ≤ k e^{−tb} ‖x‖

    for every t ≥ 0 and x ∈ Rn.

Proof. The "if" parts are a consequence of the previous theorem. The
"only if" parts follow from the proof of the previous theorem.

Note that a contraction does not always "contract" things
immediately; i.e., it is not true, in general, that |e^{tA} x| ≤ |x| for all t ≥ 0.
For example, consider

    A = [ −1/4     0  ]
        [    1   −1/4 ].

If

    x(t) = ( x1(t), x2(t) )^T

is a solution of ẋ = Ax, then

    (d/dt)|x(t)|² = 2⟨x, ẋ⟩ = 2 (x1, x2) A (x1, x2)^T
                  = −(1/2)x1² + 2x1x2 − (1/2)x2²
                  = x1x2 − (1/2)(x1 − x2)²,

which is greater than zero if, for example, x1 = x2 > 0.
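A quick numerical look at this example (my own sketch; it tracks
|e^{tA} x0| for x0 = (1, 1)^T):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-0.25,  0.0],
                  [ 1.0,  -0.25]])
    x0 = np.array([1.0, 1.0])
    for t in (0.0, 0.5, 5.0, 20.0):
        print(t, np.linalg.norm(expm(t * A) @ x0))
    # The norm first rises above |x0| = sqrt(2); the overall e^{-t/4}
    # decay only wins for larger t.

However, we have the following: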

Theorem

(a) If e^{tA} is an expansion then there is some norm ‖·‖ and some
    constant b > 0 such that

        ‖e^{tA} x‖ ≤ e^{tb} ‖x‖

    for every t ≤ 0 and x ∈ Rn.

(b) If e^{tA} is a contraction then there is some norm ‖·‖ and some
    constant b > 0 such that

        ‖e^{tA} x‖ ≤ e^{−tb} ‖x‖

    for every t ≥ 0 and x ∈ Rn.



Proof. The idea of the proof is to pick a basis with respect to which A
is represented by a matrix like the real canonical form but with some
small constant ε > 0 in place of the off-diagonal 1’s. (This can be done
by rescaling.) If the Euclidean norm with respect to this basis is used,
the desired estimates hold. The details of the proof may be found in
Chapter 7, §1, of Hirsch and Smale.

Exercise 9

(a) Show that if e^{tA} and e^{tB} are both contractions on Rn, and BA =
    AB, then e^{t(A+B)} is a contraction.

(b) Give a concrete example that shows that (a) can fail if the
    assumption that AB = BA is dropped.

Exercise 10 Problem 5 on page 137 of Hirsch and Smale reads:
“For any solution to ẋ = Ax, A ∈ L(Rn, Rn), show that exactly
one of the following alternatives holds:

(a) lim_{t↑∞} x(t) = 0 and lim_{t↓−∞} |x(t)| = ∞;

(b) lim_{t↑∞} |x(t)| = ∞ and lim_{t↓−∞} x(t) = 0;

(c) there exist constants M, N > 0 such that M < |x(t)| < N for
    all t ∈ R.”

Is what they ask you to prove true? If so, prove it. If not,
determine what other possible alternatives exist, and prove that
you have accounted for all possibilities.

2.9 Nonautonomous Linear Systems

We now move from the constant coefficient equation ẋ = Ax to the


nonautonomous equation

ẋ = A(t)x. (2.16)

For simplicity we will assume that the domain of A is R.

Solution Formulas

In the scalar, or one-dimensional, version of (2.16)

ẋ = a(t)x (2.17)

we can separate variables and arrive at the formula

    x(t) = x0 e^{∫_{t0}^{t} a(τ) dτ}

for the solution of (2.17) that satisfies the initial condition x(t0) = x0.
It seems like the analogous formula for the solution of (2.16) with
initial condition x(t0) = x0 should be

    x(t) = e^{∫_{t0}^{t} A(τ) dτ} x0.                            (2.18)

Certainly, the right-hand side of (2.18) makes sense (assuming that A
is continuous). But does it give the correct answer?
Let's consider a specific example. Let

    A(t) = [ 0  0 ]
           [ 1  t ]

and t0 = 0. Note that

    ∫_0^t A(τ) dτ = [ 0    0   ]  =  (t²/2) [  0   0 ]
                    [ t  t²/2  ]            [ 2/t  1 ],

and, since the matrix [ 0 0; 2/t 1 ] squares to itself, the exponential
series collapses:

    e^{∫_0^t A(τ) dτ} = [ 1 0 ] + (t²/2) [ 0  0 ] + ((t²/2)²/2!) [ 0  0 ] + ···
                        [ 0 1 ]          [ 2/t 1 ]               [ 2/t 1 ]

                      = [ 1 0 ] + ( e^{t²/2} − 1 ) [ 0  0 ]  =  [        1             0       ]
                        [ 0 1 ]                    [ 2/t 1 ]    [ (2/t)(e^{t²/2} − 1)  e^{t²/2} ].

On the other hand, we can solve the corresponding system

    ẋ1 = 0
    ẋ2 = x1 + t x2

directly. Clearly x1(t) = α for some constant α. Plugging this into
the equation for x2, we have a first-order scalar equation which can be
solved by finding an integrating factor. This yields

    x2(t) = β e^{t²/2} + α e^{t²/2} ∫_0^t e^{−s²/2} ds

for some constant β. Since x1(0) = α and x2(0) = β, the solution of
(2.16) is

    [ x1(t) ]   [            1                 0       ] [ x1(0) ]
    [ x2(t) ] = [ e^{t²/2} ∫_0^t e^{−s²/2} ds  e^{t²/2} ] [ x2(0) ].

Since

    e^{t²/2} ∫_0^t e^{−s²/2} ds ≠ (2/t)( e^{t²/2} − 1 ),

(2.18) doesn't work. What went wrong? The answer is that

    (d/dt) e^{∫_0^t A(τ) dτ} = lim_{h→0} ( e^{∫_0^{t+h} A(τ) dτ} − e^{∫_0^t A(τ) dτ} ) / h

                             ≠ lim_{h→0} e^{∫_0^t A(τ) dτ} ( e^{∫_t^{t+h} A(τ) dτ} − I ) / h,

in general, because of possible noncommutativity.
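The discrepancy is easy to exhibit numerically (a sketch of mine; it
compares the second component of the candidate formula (2.18) with
that of the true solution, both for x(0) = (1, 0)^T and t = 1):

    import numpy as np
    from scipy.integrate import quad

    t = 1.0
    # Second component of e^{int_0^t A} (1, 0)^T, from (2.18):
    candidate = (2.0 / t) * (np.exp(t**2 / 2) - 1.0)
    # Second component of the true solution:
    integral, _ = quad(lambda s: np.exp(-s**2 / 2), 0.0, t)
    true_value = np.exp(t**2 / 2) * integral
    print(candidate, true_value)   # approx. 1.297 vs. 1.411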


Structure of Solution Set

We abandon attempts to find a general formula for solving (2.16), and


instead analyze the general structure of the solution set.

Definition If x^{(1)}, x^{(2)}, . . . , x^{(n)} are linearly independent solutions of
(2.16) (i.e., no nontrivial linear combination gives the zero function)
then the matrix

    X(t) := [ x^{(1)}(t)  ···  x^{(n)}(t) ]

is called a fundamental matrix for (2.16).

Theorem The dimension of the vector space of solutions of (2.16) is n.

Proof. Pick n linearly independent vectors v^{(k)} ∈ Rn, k = 1, . . . , n,
and let x^{(k)} be the solution of (2.16) that satisfies the initial
condition x^{(k)}(0) = v^{(k)}. Then these n solutions are linearly independent.

Furthermore, we claim that any solution x of (2.16) is a linear
combination of these n solutions. To see why this is so, note that x(0)
must be expressible as a linear combination of {v^{(1)}, . . . , v^{(n)}}. The
corresponding linear combination of {x^{(1)}, . . . , x^{(n)}} is, by linearity, a
solution of (2.16) that agrees with x at t = 0. Since A is continuous,
the Picard-Lindelöf Theorem applies to (2.16) to tell us that solutions
of IVPs are unique, so this linear combination of {x^{(1)}, . . . , x^{(n)}} must
be identical to x.

Definition If X(t) is a fundamental matrix and X(0) = I, then it is called


the principal fundamental matrix. (Uniqueness of solutions implies
that there is only one such matrix.)

Definition Given n functions (in some order) from R to Rn , their Wron-


skian is the determinant of the matrix that has these functions as its
columns (in the corresponding order).

 Theorem The Wronskian of n solutions of (2.16) is identically zero if and 

only if the solutions are linearly dependent.


Proof. Suppose x^{(1)}, . . . , x^{(n)} are linearly dependent solutions; i.e.,

    ∑_{k=1}^n αk x^{(k)} = 0

for some constants α1, . . . , αn with ∑_{k=1}^n αk² ≠ 0. Then ∑_{k=1}^n αk x^{(k)}(t)
is 0 for every t, so the columns of the Wronskian W(t) are linearly
dependent for every t. This means W ≡ 0.
    Conversely, suppose that the Wronskian W of n solutions x^{(1)}, . . . ,
x^{(n)} is identically zero. In particular, W(0) = 0, so x^{(1)}(0), . . . , x^{(n)}(0)
are linearly dependent vectors. Pick constants α1, . . . , αn, with ∑_{k=1}^n αk²
nonzero, such that ∑_{k=1}^n αk x^{(k)}(0) = 0. The function ∑_{k=1}^n αk x^{(k)} is a
solution of (2.16) that is 0 when t = 0, but so is the function that
is identically zero. By uniqueness of solutions, ∑_{k=1}^n αk x^{(k)} = 0; i.e.,
x^{(1)}, . . . , x^{(n)} are linearly dependent.

Note that this proof also shows that if the Wronskian of n solutions
of (2.16) is zero for some t, then it is zero for all t.

What if we're dealing with n arbitrary vector-valued functions (that
are not necessarily solutions of (2.16))? If they are linearly dependent
then their Wronskian is identically zero, but the converse is not true.
For example,

    [ 1 ]        [ t ]
    [ 0 ]  and   [ 0 ]

have a Wronskian that is identically zero, but they are not linearly
dependent. Also, n functions can have a Wronskian that is zero for
some t and nonzero for other t. Consider, for example,

    [ 1 ]        [ 0 ]
    [ 0 ]  and   [ t ].

Initial-Value Problems

Given a fundamental matrix X(t) for (2.16), define G(t, t0) to be the
quantity X(t)[X(t0)]^{-1}. We claim that x(t) := G(t, t0)v solves the IVP

    ẋ = A(t)x,    x(t0) = v.



To verify this, note that

    (d/dt)x = (d/dt)(X(t)[X(t0)]^{-1} v) = A(t)X(t)[X(t0)]^{-1} v = A(t)x,

and

    x(t0) = G(t0, t0)v = X(t0)[X(t0)]^{-1} v = v.

Inhomogeneous Equations

Consider the IVP

    ẋ = A(t)x + f(t),    x(t0) = x0.                            (2.19)

In light of the results from the previous section when f was identi-
cally zero, it’s reasonable to look for a solution x of (2.19) of the form
x(t) := G(t, t0 )y(t), where G is as before, and y is some vector-valued
function.
Note that

ẋ(t) = A(t)X(t)[X(t0 )]−1 y(t)+G(t, t0 )ẏ(t) = A(t)x(t)+G(t, t0 )ẏ(t);



therefore, we need G(t, t0)ẏ(t) = f(t). Isolating ẏ(t), we need

    ẏ(t) = X(t0)[X(t)]^{-1} f(t) = G(t0, t)f(t).                 (2.20)

Integrating both sides of (2.20), we see that y should satisfy

    y(t) − y(t0) = ∫_{t0}^{t} G(t0, s)f(s) ds.

If x(t0) is to be x0, then, since G(t0, t0) = I, we need y(t0) = x0, so
y(t) should be

    x0 + ∫_{t0}^{t} G(t0, s)f(s) ds,

or, equivalently, x(t) should be

    G(t, t0)x0 + ∫_{t0}^{t} G(t, s)f(s) ds,

since G(t, t0)G(t0, s) = G(t, s). This is called the Variation of
Constants formula or the Variation of Parameters formula.
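Here is a small numerical sketch of the formula (my own example;
A is constant so that G(t, s) = e^{(t−s)A}, and quad_vec and solve_ivp
are SciPy routines):

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec, solve_ivp

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    f = lambda t: np.array([0.0, np.sin(2.0 * t)])
    x0 = np.array([1.0, 0.0])
    t0, t1 = 0.0, 3.0

    # x(t1) = G(t1,t0) x0 + int_{t0}^{t1} G(t1,s) f(s) ds
    integral, _ = quad_vec(lambda s: expm((t1 - s) * A) @ f(s), t0, t1)
    x_voc = expm((t1 - t0) * A) @ x0 + integral

    # Direct numerical integration of the IVP, for comparison
    sol = solve_ivp(lambda t, x: A @ x + f(t), (t0, t1), x0,
                    rtol=1e-10, atol=1e-12)
    print(x_voc, sol.y[:, -1])   # the two should agree closely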
 



2.10 Nearly Autonomous Linear Systems

Suppose A(t) is, in some sense, close to a constant matrix A. The ques-
tion we wish to address in this section is the extent to which solutions
of the nonautonomous system

ẋ = A(t)x (2.21)

behave like solutions of the autonomous system

ẋ = Ax. (2.22)

Before getting to our main results, we present a pair of lemmas.

Lemma The following are equivalent:

 1. Each solution of (2.22) is bounded as t ↑ ∞.

 2. The function t ↦ ‖e^{tA}‖ is bounded as t ↑ ∞ (where ‖·‖ is the
    usual operator norm).

 3. Re λ ≤ 0 for every eigenvalue λ of A and the algebraic multiplicity
    of each purely imaginary eigenvalue matches its geometric
    multiplicity.

Proof. That statement 2 implies statement 1 is a consequence of the


definition of the operator norm, since, for each solution x of (2.22),

    |x(t)| = |e^{tA} x(0)| ≤ ‖e^{tA}‖ · |x(0)|.

That statement 1 implies statement 3, and statement 3 implies
statement 2, are consequences of what we have learned about the real
canonical form of A, along with the equivalence of norms on Rn.

Lemma (Generalized Gronwall Inequality) Suppose X and Φ are
nonnegative, continuous, real-valued functions on [t0, T] for which there is
a nonnegative constant C such that

    X(t) ≤ C + ∫_{t0}^{t} Φ(s)X(s) ds

for every t ∈ [t0, T]. Then

    X(t) ≤ C e^{∫_{t0}^{t} Φ(s) ds}.

Proof. The proof is very similar to the proof of the standard Gronwall
inequality. The details are left to the reader.

The first main result deals with the case when A(t) converges to A
sufficiently quickly as t ↑ ∞.

Theorem Suppose that each solution of (2.22) remains bounded as t ↑ ∞
and that, for some t0 ∈ R,

    ∫_{t0}^{∞} ‖A(t) − A‖ dt < ∞,                                (2.23)

where ‖·‖ is the standard operator norm. Then each solution of (2.21)
remains bounded as t ↑ ∞.

Proof. Let t0 be such that (2.23) holds. Given a solution x of (2.21), let
f(t) = (A(t) − A)x(t), and note that x satisfies the constant-coefficient

inhomogeneous problem

    ẋ = Ax + f(t).                                              (2.24)

Since the matrix exponential provides a fundamental matrix solution to
constant-coefficient linear systems, applying the variation of constants
formula to (2.24) yields

    x(t) = e^{(t−t0)A} x(t0) + ∫_{t0}^{t} e^{(t−s)A} (A(s) − A)x(s) ds.   (2.25)

Now, by the first lemma, the boundedness of solutions of (2.22) in
forward time tells us that there is a constant M > 0 such that ‖e^{tA}‖ ≤
M for every t ≥ 0. Taking norms and estimating gives (for t ≥ t0)

    |x(t)| ≤ ‖e^{(t−t0)A}‖ · |x(t0)| + ∫_{t0}^{t} ‖e^{(t−s)A}‖ · ‖A(s) − A‖ · |x(s)| ds

           ≤ M|x(t0)| + ∫_{t0}^{t} M‖A(s) − A‖ · |x(s)| ds.

Setting X(t) = |x(t)|, Φ(t) = M‖A(t) − A‖, and C = M|x(t0)|, and
applying the generalized Gronwall inequality, we find that

    |x(t)| ≤ M|x(t0)| e^{M ∫_{t0}^{t} ‖A(s)−A‖ ds}.

By (2.23), the right-hand side of this inequality is bounded on [t0, ∞),
so x(t) is bounded as t ↑ ∞.

The next result deals with the case when the origin is a sink for
(2.22). Will the solutions of (2.21) also all converge to the origin as
t ↑ ∞? Yes, if ‖A(t) − A‖ is sufficiently small.

Theorem Suppose all the eigenvalues of A have negative real part. Then
there is a constant ε > 0 such that if ‖A(t) − A‖ ≤ ε for all t sufficiently
large then every solution of (2.21) converges to 0 as t ↑ ∞.

Proof. Since the origin is a sink, we know that we can choose constants
k, b > 0 such that ‖e^{tA}‖ ≤ k e^{−bt} for all t ≥ 0. Pick a constant ε ∈
(0, b/k), and assume that there is a time t0 ∈ R such that ‖A(t) − A‖ ≤
ε for every t ≥ t0.

Now, given a solution x of (2.21) we can conclude, as in the proof
of the previous theorem, that

    |x(t)| ≤ ‖e^{(t−t0)A}‖ · |x(t0)| + ∫_{t0}^{t} ‖e^{(t−s)A}‖ · ‖A(s) − A‖ · |x(s)| ds

for all t ≥ t0. This implies that

    |x(t)| ≤ k e^{−b(t−t0)} |x(t0)| + ∫_{t0}^{t} k e^{−b(t−s)} ε |x(s)| ds

for all t ≥ t0. Multiplying through by e^{b(t−t0)} and setting y(t) :=
e^{b(t−t0)} |x(t)| yield

    y(t) ≤ k|x(t0)| + kε ∫_{t0}^{t} y(s) ds

for all t ≥ t0. The standard Gronwall inequality applied to this estimate
gives

    y(t) ≤ k|x(t0)| e^{kε(t−t0)}

for all t ≥ t0, or, equivalently,

    |x(t)| ≤ k|x(t0)| e^{(kε−b)(t−t0)}

for all t ≥ t0. Since ε < b/k, this inequality implies that x(t) → 0 as
t ↑ ∞.

Thus, the origin remains a “sink” even when we perturb A by a small


time-dependent quantity. Can we perhaps just look at the (possibly
time-dependent) eigenvalues of A(t) itself and conclude, for example,
that if all of those eigenvalues have negative real part for all t then
all solutions of (2.21) converge to the origin as t ↑ ∞? The following
example of Markus and Yamabe shows that the answer is “No”.

Exercise 11 Show that if

    A(t) = [ −1 + (3/2)cos² t        1 − (3/2) sin t cos t ]
           [ −1 − (3/2) sin t cos t  −1 + (3/2)sin² t      ]

then the eigenvalues of A(t) both have negative real part for every
t ∈ R, but

    x(t) := e^{t/2} [ −cos t ]
                    [  sin t ],

which becomes unbounded as t → ∞, is a solution to (2.21).

2.11 Periodic Linear Systems

We now consider

ẋ = A(t)x (2.26)

when A is a continuous periodic n × n matrix function of t; i.e., when


there is a constant T > 0 such that A(t + T ) = A(t) for every t ∈ R.
When that condition is satisfied, we say, more precisely, that A is T -
periodic. If T is the smallest positive number for which this condition
 holds, we say that T is the minimal period of A. (Every continuous, 

nonconstant periodic function has a minimal period).



Let A be T -periodic, and let X(t) be a fundamental matrix for (2.26).
Define X̃ : R → L(Rn , Rn ) by X̃(t) = X(t + T ). Clearly, the columns of
X̃ are linearly independent functions of t. Also,

    (d/dt)X̃(t) = (d/dt)X(t + T) = X′(t + T) = A(t + T)X(t + T) = A(t)X̃(t),

so X̃ solves the matrix equivalent of (2.26). Hence, X̃ is a fundamental


matrix for (2.26).
Because the dimension of the solution space of (2.26) is n, this
means that there is a nonsingular (constant) matrix C such that X(t +
T ) = X(t)C for every t ∈ R. C is called a monodromy matrix.

Lemma There exists B ∈ L(Cn, Cn) such that C = e^{TB}.

Proof. Without loss of generality, we assume that T = 1, since if it isn't
we can just rescale B by a scalar constant. We also assume, without
loss of generality, that C is in Jordan canonical form. (If it isn't, then
use the fact that P^{-1}CP = e^B implies that C = e^{PBP^{-1}}.) Furthermore,

because of the way the matrix exponential acts on a block diagonal
matrix, it suffices to show that for each p × p Jordan block

          [ λ   0   ···  0   0 ]
          [ 1   λ    ⋱       ⋮ ]
    C̃ := [ 0   1    ⋱   ⋱   ⋮ ],
          [ ⋮        ⋱   ⋱   0 ]
          [ 0  ···   0   1   λ ]

C̃ = e^{B̃} for some B̃ ∈ L(Cp, Cp).


Now, an obvious candidate for B̃ is the natural logarithm of C̃, de-
fined in some reasonable way. Since the matrix exponential was de-
fined by a power series, it seems reasonable to use a similar definition
for a matrix logarithm. Note that C̃ = λI + N = λI(I + λ−1 N), where N
is nilpotent. (Since C is invertible, we know that all of the eigenvalues
λ are nonzero.) We guess

 B̃ = (log λ)I + log(I + λ−1 N), (2.27) 



where
X∞
(−M)k
log(I + M) := − ,
k=1
k

in analogy to the Maclaurin series for log(1 + x). Since N is nilpotent,


this series terminates in our application of it to (2.27). Direct substitu-
tion shows that eB̃ = C̃, as desired.
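A quick sanity check on this lemma (a sketch of mine; it uses SciPy's
built-in matrix logarithm rather than the series (2.27)):

    import numpy as np
    from scipy.linalg import expm, logm

    # An invertible 2x2 Jordan-type block, as in the proof
    C = np.array([[2.0, 0.0],
                  [1.0, 2.0]])
    B = logm(C)                       # a matrix logarithm of C
    print(np.allclose(expm(B), C))    # True: e^B recovers C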

The eigenvalues ρ of C are called the Floquet multipliers (or
characteristic multipliers) of (2.26). The corresponding numbers λ satisfying
ρ = e^{λT} are called the Floquet exponents (or characteristic exponents)
of (2.26). Note that the Floquet exponents are only determined up to a
multiple of (2πi)/T. Given B for which C = e^{TB}, the exponents can be
chosen to be the eigenvalues of B.

Theorem There exists a T-periodic function P : R → L(Rn, Rn) such that
X(t) = P(t)e^{tB}.

Proof. Let P(t) = X(t)e^{−tB}. Then

    P(t + T) = X(t + T)e^{−(t+T)B} = X(t + T)e^{−TB} e^{−tB} = X(t)C e^{−TB} e^{−tB}
             = X(t)e^{TB} e^{−TB} e^{−tB} = X(t)e^{−tB} = P(t).

The decomposition of X(t) given in this theorem shows that the
behavior of solutions can be broken down into the composition of a
part that is periodic in time and a part that is exponential in time.
Recall, however, that B may have entries that are not real numbers,
so P(t) may be complex, also. If we want to decompose X(t) into a
real periodic matrix times a matrix of the form e^{tB} where B is real, we
observe that X(t + 2T) = X(t)C², where C is the same monodromy
matrix as before. It can be shown that the square of a real matrix can
be written as the exponential of a real matrix. Write C² = e^{2TB} with B
real, and let P(t) = X(t)e^{−tB} as before. Then X(t) = P(t)e^{tB} where P
is now 2T-periodic, and everything is real.
The Floquet multipliers and exponents do not depend on the
particular fundamental matrix chosen, even though the monodromy matrix
does. They depend only on A(t). To see this, let X(t) and Y(t) be
fundamental matrices with corresponding monodromy matrices C and
D. Because X(t) and Y(t) are fundamental matrices, there is a
nonsingular constant matrix S such that Y(t) = X(t)S for all t ∈ R. In
particular, Y(0) = X(0)S and Y(T) = X(T)S. Thus,

    C = [X(0)]^{-1} X(T) = S[Y(0)]^{-1} Y(T) S^{-1} = S[Y(0)]^{-1} Y(0)D S^{-1} = SDS^{-1}.

This means that the monodromy matrices are similar and, therefore,
have the same eigenvalues.

Interpreting Floquet Multipliers and Exponents

Theorem If ρ is a Floquet multiplier of (2.26) and λ is a corresponding
Floquet exponent, then there is a nontrivial solution x of (2.26) such
that x(t + T) = ρx(t) for every t ∈ R and x(t) = e^{λt} p(t) for some
T-periodic vector function p.

Proof. Pick x0 to be an eigenvector of B corresponding to the
eigenvalue λ, where X(t) = P(t)e^{tB} is the decomposition of a fundamental

matrix X(t). Let x(t) = X(t)x0. Then, clearly, x solves (2.26). The
power series formula for the matrix exponential implies that x0 is an
eigenvector of e^{tB} with eigenvalue e^{λt}. Hence,

    x(t) = X(t)x0 = P(t)e^{tB} x0 = P(t)e^{λt} x0 = e^{λt} p(t),

where p(t) = P(t)x0. Also,

    x(t + T) = e^{λT} e^{λt} p(t + T) = ρ e^{λt} p(t) = ρ x(t).

Time-dependent Change of Variables

Let x solve (2.26), and let y(t) = [P(t)]^{-1} x(t), where P is as defined
previously. Then

    (d/dt)[P(t)y(t)] = (d/dt)x(t) = A(t)x(t) = A(t)P(t)y(t)
                     = A(t)X(t)e^{−tB} y(t).

But

    (d/dt)[P(t)y(t)] = P′(t)y(t) + P(t)y′(t)
                     = [X′(t)e^{−tB} − X(t)e^{−tB} B]y(t) + X(t)e^{−tB} y′(t)
                     = A(t)X(t)e^{−tB} y(t) − X(t)e^{−tB} By(t) + X(t)e^{−tB} y′(t),

so

    X(t)e^{−tB} y′(t) = X(t)e^{−tB} By(t),

which implies that y′(t) = By(t); i.e., y solves a constant coefficient
linear equation. Since P is periodic and, therefore, bounded, the growth
and decay of x and y are closely related. Furthermore, the growth or
decay of y is determined by the eigenvalues of B, i.e., by the Floquet
exponents of (2.26). For example, we have the following results.

Theorem If all the Floquet exponents of (2.26) have negative real parts
then all solutions of (2.26) converge to 0 as t ↑ ∞.

Theorem If there is a nontrivial T -periodic solution of (2.26) then there


must be a Floquet multiplier of modulus 1.

Computing Floquet Multipliers and Exponents

Although Floquet multipliers and exponents are determined by A(t),
it is not obvious how to calculate them. As a previous exercise
illustrated, the eigenvalues of A(t) don't seem to be extremely relevant.
The following result helps a little bit.

Theorem If (2.26) has Floquet multipliers ρ1, . . . , ρn and corresponding
Floquet exponents λ1, . . . , λn, then

    ρ1 ··· ρn = exp( ∫_0^T trace A(t) dt )                      (2.28)

and

    λ1 + ··· + λn ≡ (1/T) ∫_0^T trace A(t) dt   mod (2πi)/T.    (2.29)

Proof. We focus on (2.28). The formula (2.29) will follow immediately
from (2.28).
    Let W(t) be the determinant of the principal fundamental matrix
X(t). Let Sn be the set of permutations of {1, 2, . . . , n} and let ε : Sn →
{−1, 1} be the parity map. Then

    W(t) = ∑_{σ∈Sn} ε(σ) ∏_{i=1}^n X_{i,σ(i)},

where X_{i,j} is the (i, j)-th entry of X(t).


Differentiating yields

    dW(t)/dt = ∑_{σ∈Sn} ε(σ) (d/dt) ∏_{i=1}^n X_{i,σ(i)}

             = ∑_{j=1}^n ∑_{σ∈Sn} ε(σ) ( (d/dt) X_{j,σ(j)} ) ∏_{i≠j} X_{i,σ(i)}

             = ∑_{j=1}^n ∑_{σ∈Sn} ε(σ) ( ∑_{k=1}^n A_{j,k}(t) X_{k,σ(j)} ) ∏_{i≠j} X_{i,σ(i)}

             = ∑_{j=1}^n ∑_{k=1}^n A_{j,k}(t) ( ∑_{σ∈Sn} ε(σ) X_{k,σ(j)} ∏_{i≠j} X_{i,σ(i)} ).

If j ≠ k, the inner sum is the determinant of the matrix obtained by
replacing the jth row of X(t) by its kth row. This new matrix, having

two identical rows, must necessarily have determinant 0. Hence,

    dW(t)/dt = ∑_{j=1}^n A_{j,j}(t) det X(t) = [trace A(t)]W(t).

Thus,

    W(t) = e^{∫_0^t trace A(s) ds} W(0) = e^{∫_0^t trace A(s) ds}.

In particular,

    e^{∫_0^T trace A(s) ds} = W(T) = det X(T) = det(P(T)e^{TB}) = det(P(0)e^{TB})
                            = det e^{TB} = det C = ρ1 ρ2 ··· ρn.
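In practice, the monodromy matrix is usually computed numerically
by integrating the matrix version of (2.26) over one period. The
sketch below (mine, using SciPy) does this for the Markus and Yamabe
matrix of Exercise 11, whose entries are π-periodic and whose trace
is identically −1/2, and checks the product of the multipliers
against (2.28):

    import numpy as np
    from scipy.integrate import solve_ivp

    T = np.pi   # the entries of A below are pi-periodic

    def A(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[-1 + 1.5 * c * c,  1 - 1.5 * s * c],
                         [-1 - 1.5 * s * c, -1 + 1.5 * s * s]])

    # Principal fundamental matrix: X' = A(t) X, X(0) = I; C = X(T).
    rhs = lambda t, X: (A(t) @ X.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(),
                    rtol=1e-10, atol=1e-12)
    C = sol.y[:, -1].reshape(2, 2)

    rho = np.linalg.eigvals(C)            # Floquet multipliers
    print(rho)                            # one has modulus e^{pi/2} > 1
    print(np.prod(rho), np.exp(-T / 2))   # both sides of (2.28)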

Exercise 12 Consider (2.26) where

    A(t) = [ 1/2 − cos t       b      ]
           [      a       3/2 + sin t ]

and a and b are constants. Show that there is a solution of (2.26)
that becomes unbounded as t ↑ ∞.

3 Topological Dynamics

3.1 Invariant Sets and Limit Sets

We will now begin an intensive study of the continuously differentiable


autonomous system
ẋ = f (x)

or, equivalently, of the corresponding dynamical system ϕ(t, x). We


will denote the phase space Ω and assume that it is an open (not nec-
 

essarily proper) subset of Rn .


Orbits

Definition Given x ∈ Ω, the (complete) orbit through x is the set

    γ(x) := { ϕ(t, x) : t ∈ R },

the positive semiorbit through x is the set

    γ⁺(x) := { ϕ(t, x) : t ≥ 0 },

and the negative semiorbit through x is the set

    γ⁻(x) := { ϕ(t, x) : t ≤ 0 }.

Invariant Sets

Definition A set M ⊆ Ω is invariant under ϕ if it contains the complete


orbit of every point of M. In other words, for every x ∈ M and every
t ∈ R, ϕ(t, x) ∈ M.

Definition A set M ⊆ Ω is positively invariant under ϕ if it contains


the positive semiorbit of every point of M. In other words, for every
x ∈ M and every t ≥ 0, ϕ(t, x) ∈ M.

Definition A set M ⊆ Ω is negatively invariant under ϕ if it contains


the negative semiorbit of every point of M. In other words, for every
x ∈ M and every t ≤ 0, ϕ(t, x) ∈ M.

Limit Sets

Definition Given x ∈ Ω, the ω-limit set of x, denoted ω(x), is the set

    { y ∈ Ω : lim inf_{t↑∞} |ϕ(t, x) − y| = 0 }
        = { y ∈ Ω : ∃ t1, t2, . . . → ∞ s.t. ϕ(tk, x) → y as k ↑ ∞ }.

Definition Given x ∈ Ω, the α-limit set of x, denoted α(x), is the set

    { y ∈ Ω : lim inf_{t↓−∞} |ϕ(t, x) − y| = 0 }
        = { y ∈ Ω : ∃ t1, t2, . . . → −∞ s.t. ϕ(tk, x) → y as k ↑ ∞ }.



Lemma If, for each A ⊆ Ω, we write cl A for the topological closure of
A in Ω, then

    ω(x) = ∩_{τ∈R} cl γ⁺(ϕ(τ, x))                               (3.1)

and

    α(x) = ∩_{τ∈R} cl γ⁻(ϕ(τ, x)).                              (3.2)

Proof. It suffices to prove (3.1); (3.2) can then be established by time
reversal.
    Let y ∈ ω(x) be given. Pick a sequence t1, t2, . . . → ∞ such that
ϕ(tk, x) → y as k ↑ ∞. Let τ ∈ R be given. Pick K ∈ N such that tk ≥ τ
for all k ≥ K. Note that ϕ(tk, x) ∈ γ⁺(ϕ(τ, x)) for all k ≥ K, so

    y ∈ cl γ⁺(ϕ(τ, x)).

Since this holds for all τ ∈ R, we know that

    y ∈ ∩_{τ∈R} cl γ⁺(ϕ(τ, x)).                                 (3.3)

Since (3.3) holds for each y ∈ ω(x), we know that

    ω(x) ⊆ ∩_{τ∈R} cl γ⁺(ϕ(τ, x)).                              (3.4)

Now, we prove the reverse inclusion. Let

    y ∈ ∩_{τ∈R} cl γ⁺(ϕ(τ, x))

be given. This implies, in particular, that

    y ∈ ∩_{τ∈N} cl γ⁺(ϕ(τ, x)).

For each k ∈ N, we have

    y ∈ cl γ⁺(ϕ(k, x)),

so we can pick zk ∈ γ⁺(ϕ(k, x)) such that |zk − y| < 1/k. Since
zk ∈ γ⁺(ϕ(k, x)), we can pick sk ≥ 0 such that zk = ϕ(sk, ϕ(k, x)). If
we set tk = k + sk, we see that tk ≥ k, so the sequence t1, t2, . . . goes to
infinity. Also, since

    |ϕ(tk, x) − y| = |ϕ(sk + k, x) − y| = |ϕ(sk, ϕ(k, x)) − y| = |zk − y| < 1/k,

we know that ϕ(tk, x) → y as k ↑ ∞. Hence, y ∈ ω(x). Since this
holds for every

    y ∈ ∩_{τ∈R} cl γ⁺(ϕ(τ, x)),

we know that

    ∩_{τ∈R} cl γ⁺(ϕ(τ, x)) ⊆ ω(x).

Combining this with (3.4) gives (3.1).

We now describe some properties of limit sets.

Theorem Given x ∈ Ω, ω(x) and α(x) are closed (relative to Ω) and
invariant. If γ⁺(x) is contained in some compact subset of Ω, then ω(x)
is nonempty, compact, and connected. If γ⁻(x) is contained in some
compact subset of Ω, then α(x) is nonempty, compact, and connected.

Proof. Again, time-reversal arguments tell us that it is only necessary


to prove the statements about ω(x).
Step 1: ω(x) is closed.
This is a consequence of the lemma and the fact that the intersection
of closed sets is closed.

Step 2: ω(x) is invariant.


Let y ∈ ω(x) and t ∈ R be given. Choose a sequence of times (tk )
converging to infinity such that ϕ(tk , x) → y as k ↑ ∞. For each k ∈ N,
let sk = tk + t, and note that (sk ) converges to infinity and

ϕ(sk , x) = ϕ(tk + t, x) = ϕ(t, ϕ(tk , x)) → ϕ(t, y)

as k ↑ ∞ (by the continuity of ϕ(t, ·)). Hence, ϕ(t, y) ∈ ω(x). Since


t ∈ R and y ∈ ω(x) were arbitrary, we know that ω(x) is invariant.

Now, suppose that γ + (x) is contained in a compact subset K of Ω.

 

Step 3: ω(x) is nonempty.



The sequence ϕ(1, x), ϕ(2, x), . . . is contained in γ + (x) ⊆ K, so by the
Bolzano-Weierstrass Theorem, some subsequence ϕ(t1, x), ϕ(t2, x), . . .
converges to a point y ∈ K. By definition, y ∈ ω(x).

Step 4: ω(x) is compact.


By the Heine-Borel Theorem, K is closed (relative to Rn ), so, by the
choice of K, ω(x) ⊆ K. Since, by Step 1, ω(x) is closed relative to Ω,
it is also closed relative to K. Since K is compact, this means ω(x) is
closed (relative to Rn ). Also, by the Heine-Borel Theorem, K is bounded
so its subset ω(x) is bounded, too. Thus, ω(x) is closed (relative to
Rn ) and bounded and, therefore, compact.

Step 5: ω(x) is connected.


Suppose ω(x) were disconnected. Then there would be disjoint open
subsets G and H of Ω such that G∩ω(x) and H ∩ω(x) are nonempty,
and ω(x) is contained in G ∪ H . Then there would have to be a se-
quence s1 , s2 , . . . → ∞ and a sequence t1 , t2 , . . . → ∞ such that ϕ(sk , x)
∈ G, ϕ(tk , x) ∈ H , and sk < tk < sk+1 for each k ∈ N. Because (for

each fixed k ∈ N)

    { ϕ(t, x) : t ∈ [sk, tk] }

is a (connected) curve going from a point in G to a point in H, there
must be a time τk ∈ (sk, tk) such that ϕ(τk, x) ∈ K \ (G ∪ H). Pick
such a τk for each k ∈ N and note that τ1, τ2, . . . → ∞ and, by the
Bolzano-Weierstrass Theorem, some subsequence of (ϕ(τk, x)) must
converge to a point y in K \ (G ∪ H). Note that y, being outside of
G ∪ H, cannot be in ω(x), which is a contradiction.

Examples of empty ω-limit sets are easy to find. Consider, for
example, the one-dimensional dynamical system ϕ(t, x) := x + t
(generated by the differential equation ẋ = 1).
    Examples of dynamical systems with nonempty, noncompact,
disconnected ω-limit sets are a little harder to find. Consider the planar
autonomous system

    ẋ = −y(1 − x²)
    ẏ = x + y(1 − x²).

After rescaling time, this differential equation generates a dynamical
system on R2 with

    ω(x) = { (−1, y) : y ∈ R } ∪ { (1, y) : y ∈ R }

for every x in the punctured strip

    { (x, y) ∈ R2 : |x| < 1 and x² + y² > 0 }.

3.2 Regular and Singular Points

Consider the differential equation ẋ = f (x) and its associated dynam-


ical system ϕ(t, x) on a phase space Ω.

Definition We say that a point x ∈ Ω is an equilibrium point or a singu-


lar point or a critical point if f (x) = 0. For such a point, ϕ(t, x) = x
for all t ∈ R.

Definition A point x ∈ Ω that is not a singular point is called a regular


point.

We shall show that all of the interesting local behavior of a contin-


uous dynamical system takes place close to singular points. We shall
do this by showing that in the neighborhood of each regular point, the
flow is very similar to unidirectional, constant-velocity flow.
One way of making the notion of similarity of flows precise is the
following.

Definition Two dynamical systems ϕ : R × Ω → Ω and ψ : R × Θ →


Θ are topologically conjugate if there exists a homeomorphism (i.e., a
continuous bijection with continuous inverse) h : Ω → Θ such that

h(ϕ(t, x)) = ψ(t, h(x)) (3.5)

for every t ∈ R and every x ∈ Ω. In other words, ψ(t, ·) = h ∘ ϕ(t, ·) ∘ h^{-1},
or, equivalently, the diagram

              ϕ(t,·)
          Ω ────────→ Ω
          │           │
         h│           │h
          ↓   ψ(t,·)  ↓
          Θ ────────→ Θ

commutes for each t ∈ R. The function h is called a topological
conjugacy. If, in addition, h and h^{-1} are r-times continuously
differentiable, we say that ϕ and ψ are C^r-conjugate.

A weaker type of similarity is the following.

Definition Two dynamical systems ϕ : R × Ω → Ω and ψ : R × Θ → Θ


are topologically equivalent if there exists a homeomorphism h : Ω → Θ
and a time reparametrization function α : R ×Ω → R such that, for each
x ∈ Ω, α(·, x) : R → R is an increasing surjection and

h(ϕ(α(t, x), x)) = ψ(t, h(x))

for every t ∈ R and every x ∈ Ω. If, in addition, h is r-times
continuously differentiable, we say that ϕ and ψ are C^r-equivalent.

A topological equivalence maps orbits to orbits and preserves the
orientation of time but may reparametrize time differently on each
individual orbit.
    As an example of the difference between these two concepts,
consider the two planar dynamical systems

    ϕ(t, x) = [ cos t  −sin t ] x
              [ sin t   cos t ]

and

    ψ(t, y) = [ cos 2t  −sin 2t ] y,
              [ sin 2t   cos 2t ]

generated, respectively, by the differential equations

    ẋ = [ 0  −1 ] x
        [ 1   0 ]

and

    ẏ = [ 0  −2 ] y.
        [ 2   0 ]

The functions h(x) = x and α(t, x) = 2t show that these two flows
are topologically equivalent. But these two flows are not topologically
conjugate, since, by setting t = π, we see that any function h : R2 → R2
satisfying (3.5) would have to satisfy h(x) = h(−x) for all x, which
would mean that h is not invertible.
Because of examples like this, topological equivalence seems to be
the preferred concept when dealing with flows. The following theorem,
however, shows that in a neighborhood of a regular point, a smooth
flow satisfies a local version of C^r-conjugacy with respect to a
unidirectional, constant-velocity flow.

Theorem (C^r Rectification Theorem) Suppose f : Ω → Rn is r-times
continuously differentiable (with r ≥ 1) and x0 is a regular point of the
flow generated by

    ẋ = f(x).                                                   (3.6)

Then there is a neighborhood V of x0, a neighborhood W of the origin
in Rn, and a C^r map g : V → W with a C^r inverse such that, for each

solution x of (3.6) in V, y(t) := g(x(t)) satisfies the equation

    ẏ = (1, 0, . . . , 0)^T                                     (3.7)

in W.

Proof. Without loss of generality, we shall assume that x0 = 0 and
f(x0) = f(0) = αe1 for some α > 0. Let W be a small ball centered at 0
in Rn, and define G(y) := G((y1, . . . , yn)^T) = ϕ(y1, (0, y2, . . . , yn)^T),
where ϕ is the flow generated by (3.6). (While ϕ might not be a genuine
dynamical system because it might not be defined for all time, we know
that it is at least defined long enough that G is well-defined if W is
sufficiently small.)
In words, G(y) is the solution obtained by projecting y onto the
plane through the origin perpendicular to f (0) and locating the solu-
 tion of (3.6) that starts at this projected point after y1 units of time 

have elapsed.


Step 1: ϕ(·, p) is C r +1 .
We know that
d
ϕ(t, p) = f (ϕ(t, p)). (3.8)
dt
If f is continuous then, since ϕ(·, p) is continuous, (3.8) implies that
ϕ(·, p) is C 1 . If f is C 1 , then the previous observation implies that
d
ϕ(·, p) is C 1 . Then (3.8) implies that dt ϕ(t, p) is the composition of
C 1 functions and is, therefore, C 1 ; this means that ϕ(·, p) is C 2 . Con-
tinuing inductively, we see that, since f is C r , ϕ(·, p) is C r +1 .

Step 2: ϕ(t, ·) is C^r.
This is a consequence of applying differentiability with respect to
parameters inductively.

Step 3: G is C^r.
This is a consequence of Steps 1 and 2 and the formula for G in terms
of ϕ.


Step 4: DG(0) is an invertible matrix.
Since

    ∂G(y)/∂y1 |_{y=0} = (∂/∂t) ϕ(t, 0) |_{t=0} = f(0) = αe1

and

    ∂G(y)/∂yk |_{y=0} = (∂/∂p) ϕ(0, p) |_{p=0} · ek = ek

for k ≠ 1, we have

    DG(0) = [ αe1  e2  ···  en ],

which is invertible since α ≠ 0.

Step 5: If W is sufficiently small, then G is invertible.


This is a consequence of Step 4 and the Inverse Function Theorem.
 

Set g equal to the (locally defined) inverse of G. Since G is C^r, so
is g. The only thing remaining to check is that if x satisfies (3.6) then
g ∘ x satisfies (3.7). Equivalently, we can check that if y satisfies (3.7)
then G ∘ y satisfies (3.6).

Step 6: If y satisfies (3.7) then G ∘ y satisfies (3.6).
By the chain rule,

    (d/dt)G(y(t)) = (∂/∂s) ϕ(s, (0, y2, . . . , yn)) |_{s=y1} · ẏ1
                      + (∂/∂p) ϕ(y1, p) |_{p=(0,y2,...,yn)} · (0, ẏ2, . . . , ẏn)^T
                  = f(ϕ(y1, (0, y2, . . . , yn))) = f(G(y)).


3.3 Definitions of Stability

In the previous section, we saw that all the "interesting" local behavior
of flows occurs near equilibrium points. One important aspect of the
behavior of flows has to do with whether solutions that start near a
given solution stay near it for all time and/or move closer to it as time
elapses. This question, which is the subject of stability theory, is not
just of interest when the given solution corresponds to an equilibrium
solution, so we study it, initially at least, in a fairly broad context.

Definitions

First, we define some types of stability for solutions of the (possibly)


nonautonomous equation

ẋ = f (t, x). (3.9)

Definition A solution x(t) of (3.9) is (Lyapunov) stable if for each ε > 0
and t0 ∈ R there exists δ = δ(ε, t0) > 0 such that if x̃(t) is a solution
of (3.9) and |x̃(t0) − x(t0)| < δ then |x̃(t) − x(t)| < ε for all t ≥ t0.

Definition A solution x(t) of (3.9) is asymptotically stable if it is
(Lyapunov) stable and if for every t0 ∈ R there exists δ = δ(t0) > 0
such that if x̃(t) is a solution of (3.9) and |x̃(t0) − x(t0)| < δ then
|x̃(t) − x(t)| → 0 as t ↑ ∞.

Definition A solution x(t) of (3.9) is uniformly stable if for each ε > 0
there exists δ = δ(ε) > 0 such that if x̃(t) is a solution of (3.9) and
|x̃(t0) − x(t0)| < δ for some t0 ∈ R then |x̃(t) − x(t)| < ε for all t ≥ t0.

Some authors use a weaker definition of uniform stability that turns


out to be equivalent to Lyapunov stability for autonomous equations.
Since our main interest is in autonomous equations and this alterna-
tive definition is somewhat more complicated than the definition given
above, we will not use it here.

Definition A solution x(t) of (3.9) is orbitally stable if for every ε > 0
there exists δ = δ(ε) > 0 such that if x̃(t) is a solution of (3.9) and
|x̃(t1) − x(t0)| < δ for some t0, t1 ∈ R then

    { x̃(t) : t ≥ t1 } ⊆ ∪_{t≥t0} B(x(t), ε).

Next, we present a couple of definitions of stability for subsets of


the (open) phase space Ω ⊆ Rn of a dynamical system ϕ(t, x). (In these
definitions, a neighborhood of a set A ⊆ Ω is an open subset of Ω that
contains A.)

Definition The set A is stable if every neighborhood of A contains a


positively invariant neighborhood of A.

Note that the definition implies that stable sets are positively in-
variant.

Definition The set A is asymptotically stable if it is stable and there is


some neighborhood V of A such that ω(x) ⊆ A for every x ∈ V .
 (If V can be chosen to be the entire phase space, then A is globally 

asymptotically stable.)

Examples

We now consider a few examples that clarify some properties of these


definitions.
1.  ẋ = −y/2
    ẏ = 2x.

Orbits are ellipses with major axis along the y-axis. The equilibrium
solution at the origin is Lyapunov stable even though nearby orbits

sometimes move away from it.


2.  ṙ = 0
    θ̇ = r²,

or, equivalently,

    ẋ = −(x² + y²)y
    ẏ = (x² + y²)x.

The solution moving around the unit circle is not Lyapunov stable,
since nearby solutions move with different angular velocities. It is,
however, orbitally stable. Also, the set consisting of the unit circle is
stable.
3.  ṙ = r(1 − r)
    θ̇ = sin²(θ/2).

The constant solution (x, y) = (1, 0) is not Lyapunov stable and the
set {(1, 0)} is not stable. However, every solution beginning near (1, 0)
converges to (1, 0) as t ↑ ∞. This shows that it is not redundant to
require Lyapunov stability (or stability) in the definition of asymptotic
stability of a solution (or a set).

Stability in Autonomous Equations

When we are dealing with a smooth autonomous differential equation

ẋ = f (x) (3.10)

on an open set Ω ⊆ Rn, all of the varieties of stability can be applied
to essentially the same object. In particular, let x be a function that
solves (3.10), and let

    A(x) := { x(t) : t ∈ R }

be the corresponding orbit. Then it makes sense to talk about the
Lyapunov, asymptotic, orbital, or uniform stability of x, and it makes
sense to talk about the stability or asymptotic stability of A(x).
    In this context, certain relationships between the various types of
stability follow from the definitions without too much difficulty.

Theorem Let x be a function that solves (3.10), and let A(x) be the
corresponding orbit. Then:

1. If x is asymptotically stable then x is Lyapunov stable;

2. If x is uniformly stable then x is Lyapunov stable;

3. If x is uniformly stable then x is orbitally stable;


 

4. If A(x) is asymptotically stable then A(x) is stable;


5. If A(x) contains only a single point, then Lyapunov stability of x,


orbital stability of x, uniform stability of x, and stability of A(x)
are equivalent.

We will not prove this theorem, but we will note that parts 1 and 2
are immediate results of the definitions (even if we were dealing with
a nonautonomous equation) and part 4 is also an immediate result of
the definitions (even if A were an arbitrary set).

Exercise 13 In items 1–18, an autonomous differential equa-


tion, a phase space Ω (that is an open subset of Rn ), and a par-
ticular solution x of the equation are specified. For each of these
items, state which of the following statements is/are true:

(a) x is Lyapunov stable;

(b) x is asymptotically stable;

83




theoryofodes July 4, 2007 13:20 Page 84 

3. Topological Dynamics

(c) x is orbitally stable;

(d) x is uniformly stable;

(e) A(x) is stable;

(f) A(x) is asymptotically stable.

You do not need to justify your answers or show your work. It


may convenient to express your answers in a concise form (e.g., in
a table of some sort). Use of variables r and θ signifies that the
equation (as well as the particular solution) is to be interpreted as
in polar form.

1. ẋ = x, Ω = R, x(t) := 0

2. ẋ = x, Ω = R, x(t) := et

3. {ẋ1 = 1 + x22 , ẋ2 = 0}, Ω = R2 , x(t) := (t, 0)

 4. {ṙ = 0, θ̇ = r 2 }, Ω = R2 , x(t) := (1, t) 



5. ẋ = x, Ω = (0, ∞), x(t) := et

6. {ẋ1 = 1, ẋ2 = −x1 x2 }, Ω = R2 , x(t) := (t, 0)

7. ẋ = tanh x, Ω = R, x(t) := sinh−1 (et )

8. {ẋ1 = tanh x1 , ẋ2 = 0}, Ω = (0, ∞) × R, x(t) :=


−1 t
(sinh (e ), 0)

9. ẋ = tanh x, Ω = (0, ∞), x(t) := sinh−1 (et )

10. {ẋ1 = sech x1 , ẋ2 = −x1 x2 sech x1 }, Ω = R2 ,


x(t) := (sinh−1 (t), 0)

11. ẋ = x 2 /(1 + x 2 ), Ω = R, x(t) := −2/(t + t 2 + 4)

12. {ẋ1 = sech x1 , ẋ2 = −x2 }, Ω = R2 , x(t) := (sinh−1 (t), 0)

13. ẋ = sech x, Ω = R, x(t) := sinh−1 (t)

14. {ẋ1 = 1, ẋ2 = 0}, Ω = R2 , x(t) := (t, 0)

84




theoryofodes July 4, 2007 13:20 Page 85 

Principle of Linearized Stability

15. ẋ = 0, Ω = R, x(t) := 0

16. ẋ = 1, Ω = R, x(t) := t

17. {ẋ1 = −x1 , ẋ2 = −x2 }, Ω = R2 , x(t) := (e−t , 0)

18. ẋ = −x, Ω = R, x(t) := 0

3.4 Principle of Linearized Stability

Suppose f is a continuously differentiable function such that

ẋ = f (x) (3.11)

generates a continuous dynamical system ϕ on Ω ⊆ Rn . Suppose,


moreover, that x0 ∈ Ω is a singular point of ϕ. If x solves (3.11) and
we set u := x − x0 and A := Df (x0 ), we see that, by the definition of
 derivative, 



u̇ = f (u + x0 ) = f (x0 ) + Df (x0 )u + R(u) = Au + R(u), (3.12)

where R(u)/|u| → 0 as |u| ↓ 0. Because R(u) is small when u is small,


it is reasonable to believe that solutions of (3.12) behave similarly to
solutions of
u̇ = Au (3.13)

for u near 0. Equivalently, it is reasonable to believe that solutions of


(3.11) behave like solutions of

ẋ = A(x − x0 ) (3.14)

for x near x0 . Equation (3.13) (or sometimes (3.14)) is called the lin-
earization of (3.11) at x0 .
Now, we’ve defined (several types of) stability for equilibrium solu-
tions of (3.11) (as well as for other types of solutions and sets), but we
haven’t really given any tools for determining stability. In this lecture
we present one such tool, using the linearized equation(s) discussed
above.
85




theoryofodes July 4, 2007 13:20 Page 86 

3. Topological Dynamics

Definition An equilibrium point x0 of (3.11) is hyperbolic if none of the


eigenvalues of Df (x0 ) have zero real part.

If x0 is hyperbolic, then either all the eigenvalues of A := Df (x0 )


have negative real part or at least one has positive real part. In the
former case, we know that 0 is an asymptotically stable equilibrium
solution of (3.13); in the latter case, we know that 0 is an unstable
solution of (3.13). The following theorem says that similar things can
be said about the nonlinear equation (3.11).

Theorem (Principle of Linearized Stability) If x0 is a hyperbolic equi-


librium solution of (3.11), then x0 is either unstable or asymptotically
stable, and its stability type (with respect to (3.11)) matches the stability
type of 0 as an equilibrium solution of (3.13) (where A := Df (x0 )).

This theorem is an immediate consequence of the following two


propositions.
 

Proposition (Asymptotic Stability) If x0 is an equilibrium point of (3.11)



and all the eigenvalues of A := Df (x0 ) have negative real part, then x0
is asymptotically stable.

Proposition (Instability) If x0 is an equilibrium point of (3.11) and some


eigenvalue of A := Df (x0 ) has positive real part, then x0 is unstable.

Before we prove these propositions, we state and prove a lemma to


which we have referred before in passing.

Lemma Let V be a finite-dimensional real vector space and let L ∈


L(V , V ). If all the eigenvalues of L have real part larger than c, then
there is an inner product h·, ·i and an induced norm k · k on V such
that
hv, Lvi ≥ ckvk2

for every v ∈ V .

Proof. Let n = dim V , and pick ε > 0 so small that all the eigenvalues
86




theoryofodes July 4, 2007 13:20 Page 87 

Principle of Linearized Stability

of L have real part greater than c + nε. Choose a basis {v1 , . . . , vn } for
V that puts L in “modified” real canonical form with the off-diagonal
1’s replaced by ε’s, and let h·, ·i be the inner product associated with
this basis (i.e. hvi , vj i = δij ) and let k · k be the induced norm on V .
Pn
Given v = i=1 αi vi ∈ V , note that (if L = (ℓij ))
 
n
X n X
X n
X n X
X α2i + α2j
hv, Lvi = ℓii α2i + ℓij αi αj ≥ ℓii α2i − ε 
i=1 i=1 j≠i i=1 i=1 j≠i
2
n
X n
X n
X n
X
≥ ℓii α2i − nεα2i = (ℓii − nε)α2i ≥ cα2i = ckvk2 .
i=1 i=1 i=1 i=1

Note that applying this theorem to −L also tells us that, for some
inner product,
hv, Lvi ≤ ckvk2 (3.15)

if all the eigenvalues of L have real part less than c.


 

Proof of Proposition on Asymptotic Stability. Without loss of generality,



assume that x0 = 0. Pick c < 0 such that all the eigenvalues of A have
real part strictly less than c. Because of equivalence of norms and be-
cause of the lemma, we can work with a norm k ·k and a corresponding
inner product h·, ·i for which (3.15) holds, with L = A. Let r > 0 be
small enough that kR(x)k ≤ −c/2kxk for all x satisfying kxk ≤ r , and
let

Br := x ∈ Ω kxk < r }.

If x(t) is a solution of (3.11) that starts in Br at time t = 0, then as


long as x(t) remains in Br

d
kx(t)k2 = 2hx(t), ẋ(t)i = 2hx(t), f (x(t))i
dt
= 2hx(t), Ax(t)i + 2hx(t), R(x(t))i
≤ 2ckx(t)k2 + 2kx(t)k · kR(x(t))k
≤ 2ckx(t)k2 − ckx(t)k2 = ckx(t)k2 .

This means that x(t) ∈ Br for all t ≥ 0, and x(t) converges to 0


(exponentially quickly) as t ↑ ∞.
87




theoryofodes July 4, 2007 13:20 Page 88 

3. Topological Dynamics

The proof of the second proposition will be geometric and will con-
tain ideas that will be used to prove stronger results later in this text.

Proof of Proposition on Instability. We assume again that x0 = 0. If


E u ,E s , and E c are, respectively, the unstable, stable, and center spaces
corresponding to (3.13), set E − := E s ⊕ E c and E + := E u . Then
n + − +
R = E ⊕ E , all of the eigenvalues of A := A|E + have positive real
part, and all of the eigenvalues of A− := A|E − have nonpositive real
part. Pick constants a > b > 0 such that all of the eigenvalues of A+
have real part larger than a, and note that all of the eigenvalues of A−
have real part less than b. Define an inner product h·, ·i+ (and induced
norm k · k+ ) on E + such that

hv, Avi+ ≥ akvk2+

for all v ∈ E + , and define an inner product h·, ·i− (and induced norm
k · k− ) on E − such that

hw, Awi− ≤ bkwk2−


 

for all w ∈ E − . Define h·, ·i on E + ⊕ E − to be the direct sum of h·, ·i+



and h·, ·i− ; i.e., let

hv1 + w1 , v2 + w2 i := hv1 , v2 i+ + hw1 , w2 i−

for all (v1 , w1 ), (v2 , w2 ) ∈ E + × E − . Let k · k be the induced norm, and


note that
kv + wk2 = kvk2+ + kwk2− = kvk2 + kwk2

for all (v, w) ∈ E + × E − .


Now, take (3.11) and project it onto E + and E − to get the corre-
sponding system for (v, w) ∈ E + × E −


v̇ = A+ v + R + (v, w)
(3.16)

ẇ = A− w + R − (v, w),

with kR ± (v, w)k/kv + wk converging to 0 as kv + wk ↓ 0. Pick ε > 0



small enough that a − b − 2 2ε > 0, and pick r > 0 small enough that
kR ± (v, w)k ≤ εkv + wk whenever

v + w ∈ Br := v + w ∈ E + ⊕ E − kv + wk < r .
88




theoryofodes July 4, 2007 13:20 Page 89 

Principle of Linearized Stability

Br

Kr
b v

Figure 3.1: The truncated cone.

Consider the truncated cone



 Kr := v + w ∈ E + ⊕ E − kvk > kwk ∩ Br . 



(See Figure 1.) Suppose x = v + w is a solution of (3.16) that starts in
Kr at time t = 0. For as long as the solution remains in Kr ,

d
kvk2 = 2hv, v̇i = 2hv, A+ vi + 2hv, R + (v, w)i
dt
≥ 2akvk2 − 2kvk · kR + (v, w)k ≥ 2akvk2 − 2εkvk · kv + wk
 1/2 √
= 2akvk2 − 2εkvk kvk2 + kwk2 ≥ 2akvk2 − 2 2εkvk2

= 2(a − 2ε)kvk2 ,

and
d
kwk2 = 2hw, ẇi = 2hw, A− wi + 2hw, R − (v, w)i
dt
≤ 2bkwk2 + 2kwk · kR − (v, w)k
≤ 2bkwk2 + 2εkwk · kv + wk
 1/2
= 2bkwk2 + 2εkwk kvk2 + kwk2

≤ 2bkvk2 + 2 2εkvk2

= 2(b + 2ε)kvk2 .
89




theoryofodes July 4, 2007 13:20 Page 90 

3. Topological Dynamics

The first estimate says that as long as the solution stays in Kr , kvk
grows exponentially, which means that the solution must eventually
leave Kr . Combining the first and second estimates, we have

d √
(kvk2 − kwk2 ) ≥ 2(a − b − 2 2ε)kvk2 > 0,
dt

so g(v + w) := kvk2 − kwk2 increases as t increases. But g is 0 on


the lateral surface of Kr and is strictly positive in Kr , so the solution
cannot leave Kr through its lateral surface. Thus, the solution leaves
Kr by leaving Br . Since this holds for all solutions starting in Kr , we
know that x0 must be an unstable equilibrium point for (3.11).

3.5 Lyapunov’s Direct Method

Another tool for determining stability of solutions is Lyapunov’s direct


method. While this method may actually seem rather indirect, it does
work directly on the equation in question instead of on its lineariza-
tion.
 
We will consider this method for equilibrium solutions of (possibly)



nonautonomous equations. Let Ω ⊆ Rn be open and contain the ori-
gin, and suppose that f : R × Ω → Rn is a continuously differentiable
function. Suppose, furthermore, that f (t, 0) = 0 for every t ∈ R, so
x(t) := 0 is a solution of the equation

ẋ = f (t, x). (3.17)

(The results we obtain in this narrow context can be applied to deter-


mine the stability of other constant solutions of (3.17) by translation.)
In this section, a subset of Ω that contains the origin in its interior
will be called a neighborhood of 0.

Definition Suppose that D is a neighborhood of 0 and that W : D → R


is continuous and satisfies W (0) = 0. Then:

• If W (x) ≥ 0 for every x ∈ D, then W is positive semidefinite.

• If W (x) > 0 for every x ∈ D \ {0}, then W is positive definite.

• If W (x) ≤ 0 for every x ∈ D, then W is negative semidefinite.


90




theoryofodes July 4, 2007 13:20 Page 91 

Lyapunov’s Direct Method

• If W (x) < 0 for every x ∈ D \ {0}, then W is negative definite.

Definition Suppose that D is a neighborhood of 0 and that V : R × D →


R is continuous and satisfies V (t, 0) = 0 for every t ∈ R. Then:

• If there is a positive semidefinite function W : D → R such that


V (t, x) ≥ W (x) for every (t, x) ∈ R × D, then V is positive
semidefinite.

• If there is a positive definite function W : D → R such that


V (t, x) ≥ W (x) for every (t, x) ∈ R ×D, then V is positive definite.

• If there is a negative semidefinite function W : D → R such that


V (t, x) ≤ W (x) for every (t, x) ∈ R × D, then V is negative
semidefinite.

• If there is a negative definite function W : D → R such that


V (t, x) ≤ W (x) for every (t, x) ∈ R × D, then V is negative defi-
nite.
 



Definition If V : R × D is continuously differentiable then its orbital
derivative (with respect to (3.17)) is the function V̇ : R × D → R given
by the formula

∂V ∂V
V̇ (t, x) := (t, x) + (t, x) · f (t, x).
∂t ∂x

(Here “∂V (t, x)/∂x” represents the gradient of the function V (t, ·).)

Note that if x(t) is a solution of (3.17) then, by the chain rule,

d
V (t, x(t)) = V̇ (t, x(t)).
dt
A function whose orbital derivative is always nonpositive is sometimes
called a Lyapunov function.

Theorem (Lyapunov Stability) If there is a neighborhood D of 0 and a


continuously differentiable positive definite function V : R × D → R
whose orbital derivative V̇ is negative semidefinite, then 0 is a Lyapunov
stable solution of (3.17).
91




theoryofodes July 4, 2007 13:20 Page 92 

3. Topological Dynamics

Proof. Let ε > 0 and t0 ∈ R be given. Assume, without loss of gener-


ality, that B(0, ε) is contained in D. Pick a positive definite function
W : D → R such that V (t, x) ≥ W (x) for every (t, x) ∈ R × D. Let

m := min W (x) |x| = ε .

Since W is continuous and positive definite, m is well-defined and pos-


itive. Pick δ > 0 small enough that δ < ε and

max V (t0 , x) |x| ≤ δ < m.

(Since V is positive definite and continuous, this is possible.)


Now, if x(t) solves (3.17) and |x(t0 )| < δ then V (t0 , x(t0 )) < m,
and
d
V (t, x(t)) = V̇ (t, x(t)) ≤ 0,
dt
for all t, so V (t, x(t)) < m for every t ≥ t0 . Thus, W (x(t)) < m for
every t ≥ t0 , so, for every t ≥ t0 , |x(t)| ≠ ε. Since |x(t0 )| < ε, this tells
us that |x(t)| < ε for every t ≥ t0 .

 Theorem (Asymptotic Stability) Suppose that there is a neighborhood D 

of 0 and a continuously differentiable positive definite function V : R ×



D → R whose orbital derivative V̇ is negative definite, and suppose that
there is a positive definite function W : D → R such that V (t, x) ≤ W (x)
for every (t, x) ∈ R × D. Then 0 is an asymptotically stable solution of
(3.17).

Proof. By the previous theorem, 0 is a Lyapunov stable solution of


(3.17). Let t0 ∈ R be given. Assume, without loss of generality, that
D is compact. By Lyapunov stability, we know that we can choose
a neighborhood U of 0 such that if x(t) is a solution of (3.17) and
x(t0 ) ∈ U, then x(t) ∈ D for every t ≥ t0 . We claim that, in fact, if
x(t) is a solution of (3.17) and x(t0 ) ∈ U, then x(t) → 0 as t ↑ ∞.
Verifying this claim will prove the theorem.
Suppose that V (t, x(t)) does not converge to 0 as t ↑ ∞. The nega-
tive definiteness of V̇ implies that V (·, x(·)) is nonincreasing, so, since
V ≥ 0, there must be a number c > 0 such that V (t, x(t)) ≥ c for every
t ≥ t0 . Then W (x(t)) ≥ c > 0 for every t ≥ t0 . Since W (0) = 0 and W
is continuous,

inf |x(t)| t ≥ t0 ≥ ε (3.18)
92




theoryofodes July 4, 2007 13:20 Page 93 

Lyapunov’s Direct Method

for some constant ε > 0. Pick a negative definite function Y : D → R


such that V̇ (t, x) ≤ Y (x) for every (t, x) ∈ R × D. The compactness of
D \ B(0, ε), along with (3.18), implies that


Y (x(t)) t ≥ t0

is bounded away from 0. This, in turn, implies that


V̇ (t, x(t)) t ≥ t0

is bounded away from 0. In other words,

d
V (t, x(t)) = V̇ (t, x(t)) ≤ −δ (3.19)
dt

for some constant δ > 0. Clearly, (3.19) contradicts the nonnegativity


of V for large t.
That contradiction implies that V (t, x(t)) → 0 as t ↑ ∞. Pick a
positive definite function W : D → R such that V (t, x) ≥ W (x) for
 every (t, x) ∈ R × D, and note that W (x(t)) → 0 as t ↑ ∞. 



Let r > 0 be given, and let


wr = min W (p) p ∈ D \ B(0, r ) ,

which is defined and positive by the compactness of D and the conti-


nuity and positive definiteness of W . Since W (x(t)) → 0 as t ↑ ∞, there
exists T such that W (x(t)) < wr for every t > T . Thus, for t > T ,
it must be the case that x(t) ∈ B(0, r ). Hence, 0 is asymptotically
stable.

It may seem strange that we ned to bound V by a time-independent,


positive definite function W from above. Indeed, some textbooks (see,
e.g., Theorem 2.20 in Stability, Instability, and Chaos by Glendinning)
contain asymptotic stability theorems omitting this hypothesis. There
is a counterexample by Massera that demonstrates the necessity of the
hypothesis.

93




theoryofodes July 4, 2007 13:20 Page 94 

3. Topological Dynamics

Exercise 14 Show, by means of a counterexample, that the the-


orem on asymptotic stability via Lyapunov’s direct method fails if
the hypothesis about W is dropped.
(You may, but do not have to, proceed as follows. Let g : R → R
be a function that is twice continuously differentiable and satisfies
g(t) ≥ e−t for every t ∈ R, g(t) ≤ 1 for every t ≥ 0, g(t) = e−t for
every
[
t∉ (n − 2−n , n + 2−n ),
n∈N

and g(n) = 1 for every n ∈ N. Let f : R × R → R be the function


defined by the formula

g ′ (t)
f (t, x) := x,
g(t)

and let V : R × R → R be the function defined by the formula


" Zt #
x2 2
V (t, x) := 3 − [g(τ)] dτ .
[g(t)]2 0
 



Show that, for x near 0, V (t, x) is positive definite and V̇ (t, x) is
negative definite, but the solution 0 of (3.17) is not asymptotically
stable.)

3.6 LaSalle’s Invariance Principle

Linearization versus Lyapunov Functions

In the previous two lectures, we have talked about two different tools
that can be used to prove that an equilibrium point x0 of an autono-
mous system
ẋ = f (x) (3.20)

is asymptotically stable: linearization and Lyapunov’s direct method.


One might ask which of these methods is better. Certainly, lineariza-
tion seems easier to apply because of its straightforward nature: Com-
pute the eigenvalues of Df (x0 ). The direct method requires you to find
an appropriate Lyapunov function, which doesn’t seem so straightfor-
94




theoryofodes July 4, 2007 13:20 Page 95 

LaSalle’s Invariance Principle

ward. But, in fact, anytime linearization works, a simple Lyapunov


function works, as well.
To be more precise, suppose x0 = 0 and all the eigenvalues of A :=
Df (0) have negative real part. Pick an inner product h·, ·i and induced
norm k · k such that, for some c > 0,

hx, Axi ≤ −ckxk2

for all x ∈ Rn . Pick r > 0 small enough that kf (x) − Axk ≤ (c/2)kxk
whenever kxk ≤ r , let

D = x ∈ Rn kxk ≤ r ,

and define V : R × D → R by the formula V (t, x) = kxk2 . Since k · k is


a norm, V is positive definite. Also

V̇ (t, x) = 2hx, f (x)i = 2(hx, Axi + hx, f (x) − Axi)


≤ 2(−ckxk2 + kxkkf (x) − Axk) ≤ −ckxk2 ,

 so V̇ is negative definite. 



On the other hand, there are very simple examples to illustrate that
the direct method works in some cases where linearization doesn’t. For
example, consider ẋ = −x 3 on R. The equilibrium point at the origin
is not hyperbolic, so linearization fails to determine stability, but it is
easy to check that x 2 is positive definite and has a negative definite
orbital derivative, thus ensuring the asymptotic stability of 0.

A More Complicated Example

The previous example is so simple that it might make one question


whether the direct method is of any use on problems where stability
cannot be determined by linearization or by inspection. Thus, let’s
consider something more complicated. Consider the planar system


ẋ = −y − x 3

ẏ = x 5 .

The origin is a nonhyperbolic equilibrium point, with 0 being the only


eigenvalue, so the principle of linearized stability is of no use. A sketch
95




theoryofodes July 4, 2007 13:20 Page 96 

3. Topological Dynamics

of the phase portrait indicates that orbits circle the origin in the coun-
terclockwise direction, but it is not obvious whether they spiral in, spi-
ral out, or move on closed curves.
The simplest potential Lyapunov function that often turns out to be
useful is the square of the standard Euclidean norm, which in this case
is V := x 2 + y 2 . The orbital derivative is

V̇ = 2x ẋ + 2y ẏ = 2x 5 y − 2xy − 2x 4 . (3.21)

For some points (x, y) near the origin (e.g., (δ, δ)) V̇ < 0, while for
other points near the origin (e.g., (δ, −δ)) V̇ > 0, so this function
doesn’t seem to be of much use.
Sometimes when the square of the standard Euclidean norm doesn’t
work, some other homogeneous quadratic function does. Suppose we
try V := x 2 + αxy + βy 2 , with α and β to be determined. Then

V̇ = (2x + αy)ẋ + (αx + 2βy)ẏ


= −(2x + αy)(y + x 3 ) + (αx + 2βy)x 5

 = −2x 4 + αx 6 − 2xy − αx 3 y + 2βx 5 y − αy 2 . 



Setting (x, y) = (δ, −δ2 ) for δ positive and small, we see that V̇ is not
going to be negative semidefinite, no matter what we pick α and β to
be.
If these quadratic functions don’t work, maybe something custom-
ized for the particular equation might. Note that the right-hand side
of the first equation in (3.21) sort of suggests that x 3 and y should be
treated as quantities of the same order of magnitude. Let’s try V :=
x 6 + αy 2 , for some α > 0 to be determined. Clearly, V is positive
definite, and

V̇ = 6x 5 ẋ + 2αy ẏ = (2α − 6)x 5 y − 6x 8 .

If α ≠ 3, then V̇ is of opposite signs for (x, y) = (δ, δ) and for (x, y) =


(δ, −δ) when δ is small. Hence, we should set α = 3, yielding V̇ =
−6x 8 ≤ 0. Thus V is positive definite and V̇ is negative semidefinite,
implying that the origin is Lyapunov stable.
Is the origin asymptotically stable? Perhaps we can make a minor
modification to the preceding formula for V so as to make V̇ strictly
negative in a deleted neighborhood of the origin without destroying
96




theoryofodes July 4, 2007 13:20 Page 97 

LaSalle’s Invariance Principle

the positive definiteness of V . If we added a small quantity whose


orbital derivative was strictly negative when x = 0 and |y| is small and
positive, this might work. Experimentation suggests that a positive
multiple of xy 3 might work, since this quantity changes from positive
to negative as we cross the y-axis in the counterclockwise direction.
Also, it is at least of higher order than 3y 2 near the origin, so it has
the potential of preserving the positive definiteness of V .
In fact, we claim that V := x 6 + xy 3 + 3y 2 is positive definite with
negative definite orbital derivative near 0. A handy inequality, some-
times called Young’s inequality, that can be used in verifying this claim
(and in other circumstances, as well) is given in the following lemma.

Lemma (Young’s Inequality) If a, b ≥ 0, then

ap bq
ab ≤ + , (3.22)
p q

for every pair of numbers p, q ∈ (1, ∞) satisfying

 1 1 
+ = 1. (3.23)

p q

Proof. Assume that (3.23) holds. Clearly (3.22) holds if b = 0, so as-


sume that b > 0, and fix it. Define g : [0, ∞) by the formula

xp bq
g(x) := + − xb.
p q

Note that g is continuous, and g ′ (x) = x p−1 − b for every x ∈ (0, ∞).
Since limx↓0 g ′ (x) = −b < 0, limx↑∞ g ′ (x) = ∞, and g ′ is increasing on
(0, ∞), we know that g has a unique minimizer at x0 = b 1/(p−1) . Thus,
for every x ∈ [0, ∞) we see, using (3.23), that
!
b p/(p−1) bq 1 1
g(x) ≥ g(b 1/(p−1)
)= + − b p/(p−1) = + − 1 b q = 0.
p q p q

In particular, g(a) ≥ 0, so (3.22) holds.

Now, let V = x 6 + xy 3 + 3y 2 . Applying Young’s inequality with


a = |x|, b = |y|3 , p = 6, and q = 6/5, we see that

|x|6 5|y|18/5 1 5
|xy 3 | = |x||y|3 ≤ + ≤ x6 + y 2
6 6 6 6
97




theoryofodes July 4, 2007 13:20 Page 98 

3. Topological Dynamics

if |y| ≤ 1, so
5 6 13 2
V ≥ x + y
6 6
if |y| ≤ 1. Also,

V̇ = −6x 8 + y 3 ẋ + 3xy 2 ẏ = −6x 8 − y 3 (y + x 3 ) + 3x 6 y 2


= −6x 8 − x 3 y 3 + 3x 6 y 2 − y 4 .

Applying Young’s inequality to the two mixed terms in this orbital


derivative, we have

3|x|8 5|y|24/5 3 5
| − x 3 y 3 | = |x|3 |y|3 ≤ + ≤ x8 + y 4
8 8 8 8
if |y| ≤ 1, and
" #
6 2 6 3|x|8
2 |y|8 9 8 3 8 9 3 4
|3x y | = 3|x| |y| ≤ 3 + = x + y ≤ x8 + y
4 4 4 4 4 64

if |y| ≤ 1/2. Thus,


27 8 21 4
 V̇ ≤ − x − y 
8 64



if |y| ≤ 1/2, so, in a neighborhood of 0, V is positive definite and V̇ is
negative definite, which implies that 0 is asymptotically stable.

LaSalle’s Invariance Principle

We would have saved ourselves a lot of work on the previous example if


we could have just stuck with the moderately simple function V = x 6 +
3y 2 , even though its orbital derivative was only negative semidefinite.
Notice that the set of points where V̇ was 0 was small (the y-axis) and
at most of those points the vector field was not parallel to the set.
LaSalle’s Invariance Principle, which we shall state and prove for the
autonomous system
ẋ = f (x), (3.24)

allows us to use such a V to prove asymptotic stability.

Theorem (LaSalle’s Invariance Principle) Suppose there is a neighbor-


hood D of 0 and a continuously differentiable (time-independent) posi-
tive definite function V : D → R whose orbital derivative V̇ (with respect
98




theoryofodes July 4, 2007 13:20 Page 99 

LaSalle’s Invariance Principle

to (3.24)) is negative semidefinite. Let I be the union of all complete


orbits contained in

x ∈ D V̇ (x) = 0 .

Then there is a neighborhood U of 0 such that for every x0 ∈ U,


ω(x0 ) ⊆ I.

Before proving this, we note that when applying it to V = x 6 + 3y 2


in the previous example, the set I is a singleton containing the origin
and, since D can be assumed to be compact, each solution beginning
in U actually converges to 0 as t ↑ ∞.

Proof of LaSalle’s Invariance Principle. Let ϕ be the flow generated by


(3.24). By a previous theorem, 0 must be Lyapunov stable, so we can
pick a neighborhood U of 0 such that ϕ(t, x) ∈ D for every x0 ∈ U
and every t ≥ 0.
Let x0 ∈ U and y ∈ ω(x0 ) be given. By the negative semidefinite-
ness of V̇ , we know that V (ϕ(t, x0 )) is a nonincreasing function of t.

 By the positive definiteness of V , we know that V (ϕ(t, x0 )) remains 

nonnegative, so it must approach some constant c ≥ 0 as t ↑ ∞. By



continuity of V , V (z) = c for every z ∈ ω(x0 ). Since ω(x0 ) is invari-
ant, V (ϕ(t, y)) = c for every t ∈ R. The definition of orbital derivative
then implies that V̇ (ϕ(t, y)) = 0 for every t ∈ R. Hence, y ∈ I.

Exercise 15 Show that (x(t), y(t)) = (0, 0) is an asymptotically


stable solution of 

ẋ = −x 3 + 2y 3
ẏ = −2xy 2 .

99




theoryofodes July 4, 2007 13:20 Page 100 

 




theoryofodes July 4, 2007 13:20 Page 101 

4
Conjugacies

4.1 Hartman-Grobman Theorem: Part 1

The Principle of Linearized Stability indicates one way in which the flow
near a singular point of an autonomous ODE resembles the flow of its
linearization. The Hartman-Grobman Theorem gives further insight
into the extent of the resemblance; namely, there is a local topological
conjugacy between the two. We will spend the next 5 sections talking
 

about the various forms of this theorem and their proofs. This amount

of attention is justified not only by the significance of the theorem but
the general applicability of the techniques used to prove it.
Let Ω ⊆ Rn be open and let f : Ω → Rn be continuously differen-
tiable. Suppose that x0 ∈ Ω is a hyperbolic equilibrium point of the
autonomous equation

ẋ = f (x). (4.1)

Let B = Df (x0 ), and let ϕ be the (local) flow generated by (4.1). The
version of the Hartman-Grobman Theorem we’re primarily interested
in is the following.

Theorem (Local Hartman-Grobman Theorem for Flows) Let Ω, f , x0 , B,


and ϕ be as described above. Then there are neighborhoods U and V
of x0 and a homeomorphism h : U → V such that

ϕ(t, h(x)) = h(x0 + etB (x − x0 ))

whenever x ∈ U and x0 + etB (x − x0 ) ∈ U.


101




theoryofodes July 4, 2007 13:20 Page 102 

4. Conjugacies

It will be easier to derive this theorem as a consequence of a global


theorem for maps than to prove it directly. In order to state that ver-
sion of the theorem, we will need to introduce a couple of function
spaces and a definition.
Let

Cb0 (Rn ) = w ∈ C(Rn , Rn ) sup |w(x)| < ∞ .
x∈Rn
When equipped with the norm

kwk0 := sup kw(x)k,


x∈Rn

where k · k is some norm on R , Cb0 (Rn ) is a Banach space. (We shall


n

pick a particular norm k · k later.)


Let

Cb1 (Rn ) = w ∈ C 1 (Rn , Rn ) ∩ Cb0 (Rn ) sup kDw(x)k < ∞ ,
x∈Rn

where k · k is the operator norm corresponding to some norm on Rn .


Note that the functional
kw(x1 ) − w(x2 )k
Lip(w) := sup
 x1 ,x2 ∈R n kx1 − x2 k 
x1 ≠x2



is defined on Cb1 (Rn ). We will not define a norm on Cb1 (Rn ), but will
often use Lip, which is not a norm, to describe the size of elements of
Cb1 (Rn ).

Definition If A ∈ L(Rn , Rn ) and none of the eigenvalues of A lie on the


unit circle, then A is hyperbolic.

Note that if x0 is a hyperbolic equilibrium point of (4.1) and A =


eDf (x0 ) , then A is hyperbolic.

Theorem (Global Hartman-Grobman Theorem for Maps) Suppose that


the map A ∈ L(Rn , Rn ) is hyperbolic and invertible. Then there exists
a number ε > 0 such that for every g ∈ Cb1 (Rn ) satisfying Lip(g) < ε
there exists a unique function v ∈ Cb0 (Rn ) such that

F (h(x)) = h(Ax)

for every x ∈ Rn , where F = A + g and h = I + v. Furthermore,


h : Rn → Rn is a homeomorphism.
102




theoryofodes July 4, 2007 13:20 Page 103 

Hartman-Grobman Theorem: Part 2

4.2 Hartman-Grobman Theorem: Part 2

Subspaces and Norms

We start off with a lemma that is analogous to the lemma in Lecture


21, except this one will deal with the magnitude, rather than the real
part, of eigenvalues.

Lemma Let V be a finite-dimensional real vector space and let L ∈


L(V , V ). If all the eigenvalues of L have magnitude less than c, then
there is a norm k · k on V such that

kLvk ≤ ckvk

for every v ∈ V .

Proof. As in the previous lemma, the norm will be the Euclidean norm
corresponding to the modification of the real canonical basis that gives
a matrix representation of L that has ε’s in place of the off-diagonal 1’s.
 
With respect to this basis, it can be checked that



LT L = D + R(ε),

where D is a diagonal matrix, each of whose diagonal entries is less


than c 2 , and R(ε) is a matrix whose entries converge to 0 as ε ↓ 0.
Hence, as in the proof of the earlier lemma, we can conclude that if ε
is sufficiently small then

kLvk2 = hv, LT Lvi ≤ c 2 kvk2

for every v ∈ V (where h·, ·i is the inner product that induces k·k).

Note that if L is a linear operator, all of whose eigenvalues have


magnitude greater than c, then we get the reverse inequality

kLvk ≥ ckvk

for some norm k · k. This follows trivially in the case when c ≤ 0, and
when c > 0 it follows by applying the lemma to L−1 (which exists, since
0 is not an eigenvalue of L).
103




theoryofodes July 4, 2007 13:20 Page 104 

4. Conjugacies

Now, suppose that A ∈ L(Rn , Rn ) is hyperbolic. Then, since A has


only finitely many eigenvalues, there is a number a ∈ (0, 1) such that
none of the eigenvalues of A are in the closed annulus

B(0, a−1 ) \ B(0, a).

Using the notation developed when we were deriving the real canonical
form, let
 
 M 

E = N(A − λI) ⊕
 
λ∈(−a,a)
   

 
 
 

 M    M  
Re u u ∈ N(A − λI) ⊕ Im u u ∈ N(A − λI) ,

 
  
 |λ|<a  
 |λ|<a 

Im λ≠0 Im λ≠0

and let
 
 M 
E+ = N(A − λI) ⊕
 
λ∈(−∞,−a−1 )∪(a−1 ,∞)
     

 
 
 

 M
   

   M  

Re u u ∈ N(A − λI) ⊕
Im u u ∈ N(A − λI) .

 
 
 


|λ|>a−1 
  |λ|>a−1 

Im λ≠0 Im λ≠0

Then Rn = E − ⊕ E + , and E − and E + are both invariant under A. Define


P − ∈ L(Rn , E − ) and P + ∈ L(Rn , E + ) to be the linear operators that
map each x ∈ Rn to the unique elements P − x ∈ E − and P + x ∈ E +
such that P − x + P + x = x.
Let A− ∈ L(E − , E − ) and A+ ∈ L(E + , E + ) be the restrictions of A to
E − and E + , respectively. By the lemma and the discussion immediately
thereafter, we can find a norm k · k− for E − and a norm k · k+ for E +
such that
kA− xk− ≤ akxk−

for every x ∈ E − , and

kA+ xk+ ≥ a−1 kxk+

for every x ∈ E + . Define a norm k · k on Rn by the formula

kxk = max{kP − xk− , kP + xk+ }. (4.2)


104




theoryofodes July 4, 2007 13:20 Page 105 

Hartman-Grobman Theorem: Part 3

This is the norm on Rn that we will use throughout our proof of the
(global) Hartman-Grobman Theorem (for maps). Note that kxk = kxk−
if x ∈ E − , and kxk = kxk+ if x ∈ E + .
Recall that we equipped Cb0 (Rn ) with the norm k · k0 defined by the
formula
kwk0 := sup kw(x)k.
x∈Rn
The norm on Rn on the right-hand side of this formula is that given
in (4.2). Recall also that we will use the functional Lip defined by the
formula
kw(x1 ) − w(x2 )k
Lip(w) := sup
x1 ,x2 ∈Rn kx1 − x2 k
x1 ≠x2

The norm on Rn on the right-hand side of this formula is also that


given in (4.2).
Let

Cb0 (E − ) = w ∈ C(Rn , E − ) sup kw(x)k− < ∞ ,
x∈Rn
and let
  
Cb0 (E + ) = w ∈ C(Rn , E + ) sup kw(x)k+ < ∞ .

x∈Rn

Since Rn = E − ⊕ E + , it follows that

Cb0 (Rn ) = Cb0 (E − ) ⊕ Cb0 (E + ),

and the corresponding decomposition of an element w ∈ Cb0 (Rn ) is

w = P − ◦ w + P + ◦ w.

We equip Cb0 (E − ) and Cb0 (E + ) with the same norm k · k0 that we


used on Cb0 (Rn ), thereby making each of these two spaces a Banach
space. It is not hard to see that

kwk0 = max{kP − ◦ wk0 , kP + ◦ wk0 }.

4.3 Hartman-Grobman Theorem: Part 3

Linear and Nonlinear Maps

Now, assume that A is invertible, so that


kAxk
inf > 0.
x≠0 kxk
105




theoryofodes July 4, 2007 13:20 Page 106 

4. Conjugacies

Choose, and fix, a positive constant


 
kAxk
ε < min 1 − a, inf .
x≠0 kxk

Choose, and fix, a function g ∈ C1b (Rn ) for which Lip(g) < ε. The
(global) Hartman-Grobman Theorem (for maps) will be proved by con-
structing a map Θ from Cb0 (Rn ) to Cb0 (Rn ) whose fixed points would be
precisely those objects v which, when added to the identity I, would
yield solutions h to the conjugacy equation

(A + g) ◦ h = h ◦ A, (4.3)

and then showing that Θ is a contraction (and that h is a homeomor-


phism).
Plugging h = I + v into (4.3) and manipulating the result, we can
see that that equation is equivalent to the equation

Lv = Ψ (v), (4.4)

 where Ψ (v) := g ◦ (I + v) ◦ A−1 and 



Lv = v − A ◦ v ◦ A−1 =: (id −A)v.

Since the composition of continuous functions is continuous, and the


composition of functions is bounded if the outer function in the com-
position is bounded, it is clear that Ψ is a (nonlinear) map from Cb0 (Rn )
to Cb0 (Rn ). Similarly, A and, therefore, L are linear maps from Cb0 (Rn )
to Cb0 (Rn ). We will show that L can be inverted and then apply L−1 to
both sides of (4.4) to get

v = L−1 (Ψ (v)) =: Θ(v) (4.5)

as our fixed point equation.

Inverting L

Since A behaves significantly differently on E − than it does on E + , A


and, therefore, L behave significantly differently on Cb0 (E − ) than they
do on Cb0 (E + ). For this reason, we will analyze L by looking at its
restrictions to Cb0 (E − ) and to Cb0 (E + ). Note that Cb0 (E − ) and Cb0 (E + )
106




theoryofodes July 4, 2007 13:20 Page 107 

Hartman-Grobman Theorem: Part 3

are invariant under A and, therefore, under L. Therefore, it makes


sense to let A− ∈ L(Cb0 (E − ), Cb0 (E − )) and A+ ∈ L(Cb0 (E + ), Cb0 (E + ))
be the restrictions of A to Cb0 (E − ) and Cb0 (E + ), respectively, and let
L− ∈ L(Cb0 (E + ), Cb0 (E + )) and L+ ∈ L(Cb0 (E + ), Cb0 (E + )) be the corre-
sponding restrictions of L. Then L will be invertible if and only if L−
and L+ are each invertible. To invert L− and L+ we use the following
general result about the invertibility of operators on Banach spaces.

Lemma Let X be a Banach space with norm k · kX and corresponding


operator norm k · kL(X,X) . Let G be a linear map from X to X, and let
c < 1 be a constant. Then:

(a) If kGkL(X,X) ≤ c, then id −G is invertible and

1
k(id −G)−1 kL(X,X) ≤ .
1−c

(b) If G is invertible and kG−1 kL(X,X) ≤ c, then id −G is invertible and


 c 

k(id −G)−1 kL(X,X) ≤ .



1−c

Proof. The space of bounded linear maps from X to X is a Banach


space using the operator norm. In case (a), the bound on kGkL(X,X) ,
along with the Cauchy convergence criterion, implies that the series

X
Gk
k=0

converges to a bounded linear map from X to X; call it H. In fact, we


see that (by the formula for the sum of a geometric series)

1
kHkL(X,X) ≤ .
1−c

It is not hard to check that H ◦ (id −G) = (id −G) ◦ H = id, so H =


(id −G)−1 .
In case (b), we can apply the results of (a) with G−1 in place of G to
deduce that id −G−1 is invertible and that

1
k(id −G−1 )−1 kL(X,X) ≤ .
1−c
107




theoryofodes July 4, 2007 13:20 Page 108 

4. Conjugacies

Since id −G = −G(id −G−1 ) = −(id −G−1 )G, it is not hard to check that
−(id −G−1 )−1 G−1 is the inverse of id −G and that

c
k − (id −G−1 )−1 G−1 kL(X,X) ≤ .
1−c

The first half of this lemma is useful for inverting small perturba-
tions of the identity, while the second half is useful for inverting large
perturbations of the identity. It should, therefore, not be too surprising
that we will apply the first half with G = A− and the second half with
G = A+ (since A compresses things in the E − direction and stretches
things in the E + direction).
First, consider A− . If w ∈ Cb0 (E − ), then

kA− wk0 = kA ◦ w ◦ A−1 k0 = sup kAw(A−1 x)k = sup kAw(y)k


x∈Rn y∈Rn

≤ a sup kw(y)k = akwk0 ,


 y∈Rn 



so the operator norm of A− is bounded by a. Applying the first half
of the lemma with X = Cb0 (E − ), G = A− , and c = a, we find that L− is
invertible, and its inverse has operator norm bounded by (1 − a)−1 .
Next, consider A+ . It is not hard to see that A+ is invertible, and
+ −1
(A ) w = A−1 ◦ w ◦ A. If w ∈ Cb0 (E + ), then (because the eigenvalues
of the restriction of A−1 to E + all have magnitude less than a)

k(A+ )−1 wk0 = kA−1 ◦ w ◦ Ak0 = sup kA−1 w(Ax)k


x∈Rn
−1
= sup kA w(y)k ≤ a sup kw(y)k = akwk0 ,
y∈Rn y∈Rn

so the operator norm of (A+ )−1 is bounded by a. Applying the second


half of the lemma with X = Cb0 (E + ), G = A+ , and c = a, we find
that L+ is invertible, and its inverse has operator norm bounded by
a(1 − a)−1 .
Putting these two facts together, we see that L is invertible, and, in
fact,
L−1 = (L− )−1 ◦ P − + (L+ )−1 ◦ P + .
108




theoryofodes July 4, 2007 13:20 Page 109 

Hartman-Grobman Theorem: Part 4

If w ∈ Cb0 (Rn ), then

kL−1 wk0 = sup kL−1 w(x)k


x∈Rn

= sup max{kP − L−1 w(x)k, kP + L−1 w(x)k}


x∈Rn

= sup max{k(L− )−1 P − w(x)k, k(L+ )−1 P + w(x)k}


x∈Rn
 
1 a
≤ sup max kw(x)k, kw(x)k
x∈Rn 1−a 1−a
1 1
= sup kw(x)k = kwk0 ,
1 − a x∈Rn 1−a

so the operator norm of L−1 is bounded by (1 − a)−1 .

4.4 Hartman-Grobman Theorem: Part 4

The Contraction Map

Recall that we are looking for fixed points v of the map Θ := L−1 ◦ Ψ ,
 where Ψ (v) := g ◦ (I + v) ◦ A−1 . We have established that L−1 is a well- 



defined linear map from Cb0 (Rn ) to Cb0 (Rn ) and that its operator norm
is bounded by (1−a)−1 . This means that Θ is a well-defined (nonlinear)
map from Cb0 (Rn ) to Cb0 (Rn ); furthermore, if v1 , v2 ∈ Cb0 (Rn ), then

1
kΘ(v1 )−Θ(v2 )k0 = kL−1 (Ψ (v1 )−Ψ (v2))k0 ≤ kΨ (v1 )−Ψ (v2 )k0
1−a
1
= kg ◦ (I + v1 ) ◦ A−1 − g ◦ (I + v2 ) ◦ A−1 k0
1−a
1
= sup kg(A−1 x + v1 (A−1 x)) − g(A−1 x + v2 (A−1 x))k
1 − a x∈Rn
ε
≤ sup k(A−1 x + v1 (A−1 x)) − (A−1 x + v2 (A−1 x))k
1 − a x∈Rn
ε
= sup kv1 (A−1 x) − v2 (A−1 x)k
1 − a x∈Rn
ε ε
= sup kv1 (y) − v2 (y)k = kv1 − v2 k0 .
1 − a y∈Rn 1−a

This shows that Θ is a contraction, since ε was chosen to be less than


1 − a. By the contraction mapping theorem, we know that Θ has a
109




theoryofodes July 4, 2007 13:20 Page 110 

4. Conjugacies

unique fixed point v ∈ Cb0 (Rn ); the function h := I + v satisfies F ◦ h =


h◦A, where F := A+g. It remains to show that h is a homeomorphism.

Injectivity

Before we show that h itself is injective, we show that F is injective.


Suppose it weren’t. Then we could choose x1 , x2 ∈ Rn such that x1 ≠
x2 but F (x1 ) = F (x2 ). This would mean that Ax1 + g(x1 ) = Ax2 +
g(x2 ), so
kA(x1 − x2 )k kAx1 − Ax2 k kg(x1 ) − g(x2 )k
= = ≤ Lip(g)
kx1 − x2 k kx1 − x2 k kx1 − x2 k
kAxk
< ε inf ,
x≠0 kxk

which would be a contradiction.


Now we show that h is injective. Let x1 , x2 ∈ Rn satisfying h(x1 ) =
h(x2 ) be given. Then

h(Ax1 ) = F (h(x1 )) = F (h(x2 )) = h(Ax2 ),


 

and, by induction, we have h(An x1 ) = h(An x2 ) for every n ∈ N. Also,


F (h(A−1 x1 )) = h(AA−1 x1 ) = h(x1 ) = h(x2 ) = h(AA−1 x2 )


= F (h(A−1 x2 )),

so the injectivity of F implies that h(A−1 x1 ) = h(A−1 x2 ); by induction,


h(A−n x1 ) = h(A−n x2 ) for every n ∈ N. Set z = x1 −x2 . Since I = h−v,
we know that for any n ∈ Z

kAn zk = kAn x1 − An x2 k
= k(h(An x1 ) − v(An x1 )) − (h(An x2 ) − v(An x2 ))k
= kv(An x1 ) − v(An x2 )k ≤ 2kvk0 .

Because of the way the norm was chosen, we then know that for n ≥ 0

kP + zk ≤ an kAn P + zk ≤ an kAn zk ≤ 2an kvk0 → 0,

as n ↑ ∞, and we know that for n ≤ 0

kP − zk ≤ a−n kAn P − zk ≤ a−n kAn zk ≤ 2a−n kvk0 → 0,

as n ↓ −∞. Hence, z = P − z + P + z = 0, so x1 = x2 .
110




theoryofodes July 4, 2007 13:20 Page 111 

Hartman-Grobman Theorem: Part 5

Surjectivity

It may seem intuitive that a map like h that is a bounded perturbation


of the identity is surjective. Unfortunately, there does not appear to
be a way of proving this that is simultaneously elementary, short, and
complete. We will therefore rely on the following topological theorem
without proving it.

Theorem (Invariance of Domain) Every continuous injective map from


Rn to Rn maps open sets to open sets.

In particular, this theorem implies that h(Rn ) is open. If we can


show that h(Rn ) is closed, then (since h(Rn ) is clearly nonempty) this
will mean that h(Rn ) = Rn , i.e., h is surjective.
So, suppose we have a sequence (h(xk )) of points in h(Rn ) that
converges to a point y ∈ Rn . Without loss of generality, assume that

kh(xk ) − yk ≤ 1
 

for every k. This implies that kh(xk )k ≤ kyk + 1, which in turn implies

that kxk k ≤ kyk + kvk0 + 1. Thus, the sequence (xk ) is bounded
and therefore has a subsequence (xkℓ ) converging to some point x0 ∈
Rn . By continuity of h, (h(xkℓ )) converges to h(x0 ), which means that
h(x0 ) = y. Hence, h(Rn ) is closed.

Continuity of the Inverse

The bijectivity of h implies that h−1 is defined. To complete the ver-


ification that h is a homeomorphism, we need to confirm that h−1 is
continuous. But this is an immediate consequence of the the Invariance
of Domain Theorem.

4.5 Hartman-Grobman Theorem: Part 5

Modifying the Vector Field

Consider the continuously differentiable autonomous ODE

ẋ = f (x) (4.6)
111




theoryofodes July 4, 2007 13:20 Page 112 

4. Conjugacies

with an equilibrium point that, without loss of generality, is located at


the origin. For x near 0, f (x) ≈ Bx, where B = Df (0). Our goal is
to come up with a modification f˜ of f such that f˜(x) = f (x) for x
near 0 and f˜(x) ≈ Bx for all x. If we accomplish this goal, whatever
information we obtain about the relationship between the equations

ẋ = f˜(x) (4.7)

and
ẋ = Bx (4.8)
will also hold between (4.6) and (4.8) for x small.
Pick β : [0, ∞) → [0, 1] to be a C ∞ function satisfying


1 if s ≤ 1
β(s) =

0 if s ≥ 2,

and let C = sups∈[0,∞) |β′ (s)|. Given ε > 0, pick r > 0 so small that
ε
kDf (x) − Bk <
2C + 1
 

whenever kxk ≤ 2r . (We can do this since Df (0) = B and Df is



continuous.) Define f˜ by the formula
 
kxk
f˜(x) = Bx + β (f (x) − Bx).
r
Note that f˜ is continuously differentiable, agrees with f for kxk ≤ r ,
and agrees with B for kxk ≥ 2r . We claim that f˜ − B has Lipschitz
constant less than ε. Assuming, without loss of generality, that kxk
and kyk are less than or equal to 2r , we have (using the Mean Value
Theorem)

k(f˜(x) − Bx) − (f˜(y) − By)k


   
kxk kyk
= β (f (x) − Bx) − β (f (y) − By)

r r
 
kxk
≤β k(f (x) − Bx) − (f (y) − By)k
r
   
kxk kyk

+ β −β kf (y) − Byk
r r
ε |kxk − kyk| ε
≤ kx − yk + C kyk
2C + 1 r 2C + 1
≤ εkx − yk.
112




theoryofodes July 4, 2007 13:20 Page 113 

Hartman-Grobman Theorem: Part 5

Now, consider the difference between eB and ϕ(1, ·), where ϕ is


the flow generated by f˜. Let g(x) = ϕ(1, x) − eB x. Then, since f˜(x) =
B(x) for all large x, g(x) = 0 for all large x. Also, g is continuously
differentiable, so g ∈ Cb1 (Rn ). If we apply the variation of constants
formula to (4.7) rewritten as

ẋ = Bx + (f˜(x) − Bx),

we find that
Z1
g(x) = e(1−s)B [f˜(ϕ(s, x)) − Bϕ(s, x)] ds,
0

so

kg(x) − g(y)k
Z1
≤ ke(1−s)B kk(f˜(ϕ(s, x)) − Bϕ(s, x)) − (f˜(ϕ(s, y)) − Bϕ(s, y))k ds
0
Z1
≤ε ke(1−s)B kkϕ(s, x) − ϕ(s, y)k ds
 0 

Z1

≤ kx − ykε ke(1−s)B kke(kBk+ε)s − 1k ds,
0

by continuous dependence on initial conditions. Since


Z1
ε ke(1−s)B kke(kBk+ε)s − 1k ds → 0
0

as ε ↓ 0, we can make the Lipschitz constant of g as small as we want


by making ε small (through shrinking the neighborhood of the origin
on which f˜ and f agree).

Conjugacy for t = 1

If 0 is a hyperbolic equilibrium point of (4.6) (and therefore of (4.7))


then none of the eigenvalues of B are imaginary. Setting A = eB , it
is not hard to show that the eigenvalues of A are the exponentials of
the eigenvalues of B, so none of the eigenvalues of A have modulus 1;
i.e., A is hyperbolic. Also, A is invertible (since A−1 = e−B ), so we can
113




theoryofodes July 4, 2007 13:20 Page 114 

4. Conjugacies

apply the global Hartman-Grobman Theorem for maps and conclude


that there is a homeomorphism h : Rn → Rn such that

ϕ(1, h(x)) = h(eB x) (4.9)

for every x ∈ Rn (where ϕ is the flow corresponding to (4.7)).

Conjugacy for t ≠ 1

For the Hartman-Grobman Theorem for flows, we need

ϕ(t, h(x)) = h(etB x)

for every x ∈ Rn and every t ∈ R. Fix t ∈ R, and consider the function


h̃ defined by the formula

h̃(x) = ϕ(t, h(e−tB x)). (4.10)

As the composition of homeomorphisms, h̃ is a homeomorphism. Fur-


thermore, the fact that h satisfies (4.9) implies that
 
ϕ(1, h̃(x)) = ϕ(1, ϕ(t, h(e−tB x))) = ϕ(t, ϕ(1, h(e−tB x)))



= ϕ(t, h(eB e−tB x)) = ϕ(t, h(e−tB eB x))) = h̃(eB x),

so (4.9) holds if h is replaced by h̃.


Now,

h̃ − I = ϕ(t, ·) ◦ h ◦ e−tB − I
= (ϕ(t, ·) − etB ) ◦ h ◦ e−tB + etB ◦ (h − I) ◦ e−tB =: v1 + v2 .

The fact that ϕ(t, x) and etB x agree for large x implies that ϕ(t, ·) −
etB is bounded, so v1 is bounded, as well. The fact that h−I is bounded
implies that v2 is bounded. Hence, h̃ − I is bounded.
The uniqueness part of the global Hartman-Grobman Theorem for
maps now implies that h and h̃ must be the same function. Using this
fact and substituting y = e−tB x in (4.10) yields

h(etB y) = ϕ(t, h(y))

for every y ∈ Rn and every t ∈ Rn . This means that the flows gen-
erated by (4.8) and (4.7) are globally topologically conjugate, and the
flows generated by (4.8) and (4.6) are locally topologically conjugate.
114




theoryofodes July 4, 2007 13:20 Page 115 

Constructing Conjugacies

4.6 Constructing Conjugacies

The Hartman-Grobman Theorem gives us conditions under which a


conjugacy between certain maps or between certain flows may exist.
Some limitations of the theorem are:

• The conditions it gives are sufficient, but certainly not necessary,


for a conjugacy to exist.

• It doesn’t give a simple way to construct a conjugacy (in closed


form, at least).

• It doesn’t indicate how smooth the conjugacy might be.

These shortcomings can be addressed in a number of different ways,


but we won’t really go into those here. We will, however, consider some
aspects of conjugacies.

Differentiable Conjugacies of Flows

 Consider the autonomous differential equations 



ẋ = f (x) (4.11)

and
ẋ = g(x), (4.12)

generating, respectively, the flows ϕ and ψ. Recall that the conjugacy


equation for ϕ and ψ is

ϕ(t, h(x)) = h(ψ(t, x)) (4.13)

for every x and t. Not only is (4.13) somewhat complicated, it appears


to require you to solve (4.11) and (4.12) before you can look for a con-
jugacy h. Suppose, however, that h is a differentiable conjugacy. Then,
we can differentiate both sides of (4.13) with respect to t to get

f (ϕ(t, h(x))) = Dh(ψ(t, x))g(ψ(t, x)). (4.14)

Substituting (4.13) into the left-hand side of (4.14) and then replacing
ψ(t, x) by x, we get the equivalent equation

f (h(x)) = Dh(x)g(x). (4.15)


115




theoryofodes July 4, 2007 13:20 Page 116 

4. Conjugacies

Note that (4.15) involves the functions appearing in the differential


equations, rather than the formulas for the solutions of those equa-
tions. Note, also, that (4.15) is the same equation you would get if you
took a solution x of (4.12) and required the function h ◦ x to satisfy
(4.11).

An Example for Flows

As the simplest nontrivial example, let a, b ∈ R be distinct constants


and consider the equations
ẋ = ax (4.16)

and
ẋ = bx (4.17)

for x ∈ R. Under what conditions on a and b does there exist a topo-


logical conjugacy h taking solutions of (4.17) to solutions of (4.16)?
Equation (4.15) tells us that if h is differentiable then

ah(x) = h′ (x)bx. (4.18)


 

If b ≠ 0, then separating variables in (4.18) implies that on intervals



avoiding the origin h must be given by the formula

h(x) = C|x|a/b (4.19)

for some constant C. Clearly, (4.19) does not define a topological con-
jugacy for a single constant C, because it fails to be injective on R;
however, the formula


x|x|a/b−1 if x ≠ 0
h(x) = (4.20)

0 if x = 0,

which is obtained from (4.19) by taking C = 1 for positive x and C = −1


for negative x, defines a homeomorphism if ab > 0. Even though the
function defined in (4.20) may fail to be differentiable at 0, substitution
of it into
eta h(x) = h(etb x), (4.21)

which is (4.13) for this example, shows that it does, in fact, define a
topological conjugacy when ab > 0. (Note that in no case is this a
C 1 -conjugacy, since either h′ (0) or (h−1 )′ (0) does not exist.)
116




theoryofodes July 4, 2007 13:20 Page 117 

Constructing Conjugacies

Now, suppose that ab ≤ 0. Does a topological (possibly nondif-


ferentiable) conjugacy exist? If ab = 0, then (4.21) implies that h is
constant, which violates injectivity, so suppose that ab < 0. In this
case, substituting x = 0 and t = 1 into (4.21) implies that h(0) = 0.
Fixing x ≠ 0 and letting t sgn b ↓ −∞ in (4.21), we see that the continu-
ity of h implies that h(x) = 0, also, which again violates injectivity.
Summarizing, for a ≠ b there is a topological conjugacy of (4.16)
and (4.17) if and only if ab > 0, and these are not C 1 -conjugacies.

An Example for Maps

Let’s try a similar analysis for maps. Let a, b ∈ R be distinct constants,


and consider the maps F (x) = ax and G(x) = bx (for x ∈ R). For what
(a, b)-combinations does there exist a homeomorphism h : R → R such
that
F (h(x)) = h(G(x)) (4.22)

for every x ∈ R? Can h and h−1 be chosen to be differentiable?


For these specific maps, the general equation (4.22) becomes
 

ah(x) = h(bx). (4.23)


If a = 0 or b = 0 or a = 1 or b = 1, then injectivity is immediately


violated. Note that, by induction, (4.23) gives

an h(x) = h(b n x) (4.24)

for every n ∈ Z. In particular, a2 h(x) = h(b 2 x), so the cases when


a = −1 or b = −1 cause the same problems as when a = 1 or b = 1.
So, from now on, assume that a, b ∉ {−1, 0, 1}. Observe that:

• Setting x = 0 in (4.23) yields h(0) = 0.

• If |b| < 1, then fixing x ≠ 0 in (4.24) and letting n ↑ ∞, we have


|a| < 1.

• If |b| > 1, we can, similarly, let n ↓ −∞ to conclude that |a| > 1.

• If b > 0 and a < 0, then (4.23) implies that h(1) and h(b) have
opposite signs even though 1 and b have the same sign; conse-
quently, the Intermediate Value Theorem yields a contradiction
to injectivity.
117




theoryofodes July 4, 2007 13:20 Page 118 

4. Conjugacies

• If b < 0 and a > 0, then (4.23) gives a similar contradiction.

Thus, the only cases where we could possibly have conjugacy is if


a and b are both in the same component of

(−∞, −1) ∪ (−1, 0) ∪ (0, 1) ∪ (1, ∞).

When this condition is met, experimentation (or experience) suggests


trying h of the form h(x) = x|x|p−1 for some constant p > 0 (with
h(0) = 0). This is a homeomorphism from R to R, and plugging it into
(4.23) shows that it provides a conjugacy if a = b|b|p−1 or, in other
words, if
log |a|
p= .
log |b|

Since a ≠ b, either h or h−1 fails to be differentiable at 0. Is there


some other formula that provides a C 1 -conjugacy? No, because if there
were we could differentiate both sides of (4.23) with respect to x and
evaluate at x = 0 to get h′ (0) = 0, which would mean that (h−1 )′ (0) is
 undefined.


Exercise 16 Define F : R2 → R2 by the formula


" #! " #
x −x/2
F = ,
y 2y + x 2

and let A = DF (0).

(a) Show that the maps F and A are topologically conjugate.

(b) Show that the flows generated by the differential equations

ż = F (z)

and
ż = Az

are topologically conjugate.

(Hint: Try quadratic conjugacy functions.)

118




theoryofodes July 4, 2007 13:20 Page 119 

Smooth Conjugacies

4.7 Smooth Conjugacies

The examples we looked at last time showing that topological conjuga-


cies often cannot be chosen to be differentiable all involved two maps
or vector fields with different linearizations at the origin. What about
when, as in the Hartman-Grobman Theorem, we are looking for a con-
jugacy between a map (or flow) and its linearization? An example of
Hartman shows that the conjugacy cannot always be chosen to be C 1 .

Hartman’s Example

Consider the system




 ẋ = αx



 ẏ = (α − γ)y + εxz



ż = −γz,

 where α > γ > 0 and ε ≠ 0. We will not cut off this vector field but will 

instead confine our attention to x, y, z small. A calculation shows that



the time-1 map F = ϕ(1, ·) of this system is given by
   
x ax
   
F    
y  = ac(y + εxz) ,
z cz

where a = eα and c = e−γ . Note that a > ac > 1 > c > 0. The time-1
map B of the linearization of the differential equation is given by
   
x ax
   
B   
y  = acy  .
y cz

A local conjugacy H = (f , g, h) of B with F must satisfy

af (x, y, z) = f (ax, acy, cz)


ac[g(x, y, z) + εf (x, y, z)h(x, y, z)] = g(ax, acy, cz)
ch(x, y, z) = h(ax, acy, cz)
119




theoryofodes July 4, 2007 13:20 Page 120 

4. Conjugacies

for every x, y, z near 0. Writing k(x, z) for k(x, 0, z), where k ∈


{f , g, h}, we have

af (x, z) = f (ax, cz) (4.25)


ac[g(x, z) + εf (x, z)h(x, z)] = g(ax, cz) (4.26)
ch(x, z) = h(ax, cz) (4.27)

for every x, z near 0.


Before proceeding further, we state and prove a lemma.

Lemma Suppose that j is a continuous real-valued function of a real


variable, defined on an open interval U centered at the origin. Suppose
that there are constants α, β ∈ R such that

αj(u) = j(βu) (4.28)

whenever u, βu ∈ U. Then if |β| < 1 < |α| or |α| < 1 < |β|, j(u) = 0
for every u ∈ U.
 

Proof. If |β| < 1 < |α|, fix u ∈ U and apply (4.28) inductively to get

αn j(u) = j(βn u) (4.29)

for every n ∈ N. Letting n ↑ ∞ in (4.29), we see that j(u) must be zero.


If |α| < 1 < |β|, substitute v = βu into (4.28) to get

αj(β−1 v) = j(v) (4.30)

for every v, β−1 v ∈ U. Fix v ∈ U, and iterate (4.30) to get

αn j(β−n v) = j(v) (4.31)

for every n ∈ N. Letting n ↑ N in (4.31), we get j(v) = 0.

Setting x = 0 in (4.25) and applying the Lemma gives

f (0, z) = 0 (4.32)

for every z near zero. Setting z = 0 in (4.27) and applying the Lemma
gives
h(x, 0) = 0 (4.33)
120




theoryofodes July 4, 2007 13:20 Page 121 

Smooth Conjugacies

for every x near zero. Setting x = 0 in (4.26), using (4.32), and applying
the Lemma gives
g(0, z) = 0 (4.34)

for every z near zero. If we set z = 0 in (4.26), use (4.33), and then dif-
ferentiate both sides with respect to x, we get cgx (x, 0) = gx (ax, 0);
applying the Lemma yields

gx (x, 0) = 0 (4.35)

for every x near zero. Setting z = 0 in (4.34) and using (4.35), we get

g(x, 0) = 0 (4.36)

for every x near zero.


Now, using (4.26) and mathematical induction, it can be verified
that

an c n [g(x, z) + nεf (x, z)h(x, z)] = g(an x, c n z) (4.37)


 

for every n ∈ N. Similarly, mathematical induction applied to (4.25)



gives
f (x, z) = a−n f (an x, c n z) (4.38)

for every n ∈ N. If we substitute (4.38) into (4.37), divide through by


c n , and replace x by a−n x we get

an g(a−n x, z) + nεf (x, c n z)h(a−n x, z) = c −n g(x, c n z) (4.39)

for every n ∈ N.
The existence of gx (0, z) and gz (x, 0) along with equations (4.34)
and (4.36) imply that an g(a−n x, z) and c −n g(x, c n z) stay bounded as
n ↑ ∞. Using this fact, and letting n ↑ ∞ in (4.39), we get

f (x, 0)h(0, z) = 0,

so f (x, 0) = 0 or h(0, z) = 0. If f (x, 0) = 0, then, in combination with


(4.33) and (4.36), this tells us that H is not injective in a neighborhood
of the origin. Similarly, if h(0, z) = 0 then, in combination with (4.32)
and (4.34), this implies a violation of injectivity, as well.
121




theoryofodes July 4, 2007 13:20 Page 122 

4. Conjugacies

Poincaré’s Linearization Theorem

Suppose that f : Rn → Rn is analytic and satisfies f (0) = 0. It is possi-


ble to establish conditions under which there is an analytic change of
variables that will turn the nonlinear equation

ẋ = f (x) (4.40)

into its linearization


u̇ = Df (0)u. (4.41)

Definition  Let λ_1, λ_2, . . . , λ_n be the eigenvalues of Df(0), listed according to multiplicity. We say that Df(0) is resonant if there are nonnegative integers m_1, m_2, . . . , m_n and a number s ∈ {1, 2, . . . , n} such that

Σ_{k=1}^n m_k ≥ 2

and

λ_s = Σ_{k=1}^n m_k λ_k.

If Df(0) is not resonant, we say that it is nonresonant.
Note that in Hartman's example there is resonance. A study of normal forms reveals that nonresonance permits us to make changes of variable that remove nonlinear terms up to any specified order in the right-hand side of the differential equation. In order to be able to guarantee that all nonlinear terms may be removed, some extra condition beyond nonresonance is required.
Definition  We say that (λ_1, λ_2, . . . , λ_n) ∈ C^n satisfy a Siegel condition if there are constants C > 0 and ν > 1 such that

|λ_s − Σ_{k=1}^n m_k λ_k| ≥ C / (Σ_{k=1}^n m_k)^ν

for each s ∈ {1, 2, . . . , n} and all nonnegative integers m_1, m_2, . . . , m_n satisfying Σ_{k=1}^n m_k ≥ 2.
Theorem (Poincaré's Linearization Theorem)  Suppose that f is analytic, that Df(0) is nonresonant, and that the eigenvalues of Df(0) either all lie in the open left half-plane, all lie in the open right half-plane, or satisfy a Siegel condition. Then there is a change of variables u = g(x) that is analytic near 0 and that turns (4.40) into (4.41) near 0.
5  Invariant Manifolds

5.1 Stable Manifold Theorem: Part 1

The Hartman-Grobman Theorem states that the flow generated by a smooth vector field in a neighborhood of a hyperbolic equilibrium point is topologically conjugate with the flow generated by its linearization. Hartman's counterexample shows that, in general, the conjugacy cannot be taken to be C^1. However, the Stable Manifold Theorem will tell us that there are important structures for the two flows that can be matched up by smooth changes of variable. In this section, we will discuss the Stable Manifold Theorem on an informal level and discuss two different approaches to proving it.
Let f : Ω ⊆ Rn → Rn be C 1 , and let ϕ : R × Ω → Ω be the flow
generated by the differential equation

ẋ = f (x). (5.1)

Suppose that x0 is a hyperbolic equilibrium point of (5.1).

Definition  The (global) stable manifold of x_0 is the set

W^s(x_0) := { x ∈ Ω | lim_{t↑∞} ϕ(t, x) = x_0 }.

Definition  The (global) unstable manifold of x_0 is the set

W^u(x_0) := { x ∈ Ω | lim_{t↓−∞} ϕ(t, x) = x_0 }.

Definition  Given a neighborhood U of x_0, the local stable manifold of x_0 (relative to U) is the set

W^s_loc(x_0) := { x ∈ U | γ⁺(x) ⊂ U and lim_{t↑∞} ϕ(t, x) = x_0 }.

Definition  Given a neighborhood U of x_0, the local unstable manifold of x_0 (relative to U) is the set

W^u_loc(x_0) := { x ∈ U | γ⁻(x) ⊂ U and lim_{t↓−∞} ϕ(t, x) = x_0 }.

Note that:

• W^s_loc(x_0) ⊆ W^s(x_0), and W^u_loc(x_0) ⊆ W^u(x_0).

• W^s_loc(x_0) and W^u_loc(x_0) are both nonempty, since they each contain x_0.

• W^s(x_0) and W^u(x_0) are invariant sets.

• W^s_loc(x_0) is positively invariant, and W^u_loc(x_0) is negatively invariant.

• W^s_loc(x_0) is not necessarily W^s(x_0) ∩ U, and W^u_loc(x_0) is not necessarily W^u(x_0) ∩ U.

W^s_loc(x_0) is not necessarily invariant, since it might not be negatively invariant, and W^u_loc(x_0) is not necessarily invariant, since it might not be positively invariant. They do, however, possess what is known as relative invariance.

Definition  A subset A of a set B is positively invariant relative to B if for every x ∈ A and every t ≥ 0, ϕ(t, x) ∈ A whenever ϕ([0, t], x) ⊆ B.

Definition  A subset A of a set B is negatively invariant relative to B if for every x ∈ A and every t ≤ 0, ϕ(t, x) ∈ A whenever ϕ([t, 0], x) ⊆ B.

Definition  A subset A of a set B is invariant relative to B if it is negatively invariant relative to B and positively invariant relative to B.
W^s_loc(x_0) is negatively invariant relative to U and is therefore invariant relative to U. W^u_loc(x_0) is positively invariant relative to U and is therefore invariant relative to U.
Recall that a (k-)manifold is a set that is locally homeomorphic to an open subset of R^k. Although the word "manifold" appeared in the names of W^s_loc(x_0), W^u_loc(x_0), W^s(x_0), and W^u(x_0), it is not obvious from the definitions of these sets that they are, indeed, manifolds. One of the consequences of the Stable Manifold Theorem is that, if U is sufficiently small, W^s_loc(x_0) and W^u_loc(x_0) are manifolds. (W^s(x_0) and W^u(x_0) are what are known as immersed manifolds.)
For simplicity, let’s now assume that x0 = 0. Let E s be the stable
subspace of Df (0), and let E u be the unstable subspace of Df (0). If
f is linear, then W s (0) = E s and W u (0) = E u . The Stable Manifold
Theorem says that in the nonlinear case not only are the Stable and
Unstable Manifolds indeed manifolds, but they are tangent to E s and
E u , respectively, at the origin. This is information that the Hartman-
Grobman Theorem does not provide.

More precisely, there are neighborhoods U_s of the origin in E^s and U_u of the origin in E^u and smooth maps h_s : U_s → U_u and h_u : U_u → U_s such that h_s(0) = h_u(0) = 0 and Dh_s(0) = Dh_u(0) = 0, and the local stable and unstable manifolds of 0 relative to U_s ⊕ U_u satisfy

W^s_loc(0) = { x + h_s(x) | x ∈ U_s }

and

W^u_loc(0) = { x + h_u(x) | x ∈ U_u }.

Furthermore, not only do solutions of (5.1) in the stable manifold converge to 0 as t ↑ ∞, they do so exponentially quickly. (A similar statement can be made about the unstable manifold.)

Liapunov-Perron Approach

This approach to proving the Stable Manifold Theorem rewrites (5.1) as

ẋ = Ax + g(x),    (5.2)

where A = Df(0). The Variation of Parameters formula gives

x(t_2) = e^{(t_2−t_1)A} x(t_1) + ∫_{t_1}^{t_2} e^{(t_2−s)A} g(x(s)) ds,    (5.3)
for every t_1, t_2 ∈ R. Setting t_1 = 0 and t_2 = t, and projecting (5.3) onto E^s yields

x_s(t) = e^{tA_s} x_s(0) + ∫_0^t e^{(t−s)A_s} g_s(x(s)) ds,

where the subscript s attached to a quantity denotes the projection of that quantity onto E^s. If we assume that the solution x(t) lies on W^s(0), set t_2 = t, let t_1 ↑ ∞, and project (5.3) onto E^u, we get

x_u(t) = − ∫_t^∞ e^{(t−s)A_u} g_u(x(s)) ds.

Hence, solutions of (5.2) in W^s(0) satisfy the integral equation

x(t) = e^{tA_s} x_s(0) + ∫_0^t e^{(t−s)A_s} g_s(x(s)) ds − ∫_t^∞ e^{(t−s)A_u} g_u(x(s)) ds.

Now, fix a_s ∈ E^s, and define a functional T by

(T x)(t) = e^{tA_s} a_s + ∫_0^t e^{(t−s)A_s} g_s(x(s)) ds − ∫_t^∞ e^{(t−s)A_u} g_u(x(s)) ds.

A fixed point x of this functional will solve (5.2), will have a range contained in the stable manifold, and will satisfy x_s(0) = a_s. If we set h_s(a_s) = x_u(0) and define h_s similarly for other inputs, the graph of h_s will be the stable manifold.
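To make the idea concrete, here is a minimal numerical sketch (not from the text) for the illustrative system ẋ = −x, ẏ = y + x², whose stable manifold is known exactly to be y = −x²/3. Here A_s = −1, A_u = 1, g_s = 0, and g_u(x, y) = x²; the grid, truncated time horizon, and iteration count are arbitrary assumptions.

    import numpy as np

    a_s = 0.5                          # prescribed stable coordinate x_s(0)
    t = np.linspace(0.0, 20.0, 2001)   # truncated horizon standing in for [0, inf)
    dt = t[1] - t[0]
    x = np.zeros((2, t.size))          # current guess for (x_s(t), x_u(t))

    for _ in range(20):                # iterate x -> Tx toward the fixed point
        xs = np.exp(-t) * a_s          # e^{t A_s} a_s  (g_s = 0 here)
        gu = x[0] ** 2                 # g_u evaluated along the current guess
        xu = np.empty_like(t)
        for i in range(t.size):        # x_u(t) = -int_t^inf e^{(t-s)A_u} g_u(x(s)) ds
            w = np.exp(t[i] - t[i:]) * gu[i:]
            xu[i] = -np.sum(w[1:] + w[:-1]) * 0.5 * dt   # trapezoid rule
        x = np.vstack([xs, xu])

    print(x[1, 0], "vs exact", -a_s**2 / 3)   # h_s(0.5) ~ -1/12

The printed value x_u(0) is h_s(a_s), one point on the graph of the stable manifold; sweeping a_s over a grid traces out the whole local graph.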

Hadamard Approach

The Hadamard approach uses what is known as a graph transform. Here we define a functional not by an integral but by letting the graph of the input function move with the flow ϕ and selecting the output function to be the function whose graph is the image of the original graph after, say, 1 unit of time has elapsed.
More precisely, suppose h is a function from E^s to E^u. Define its graph transform F[h] to be the function whose graph is the set

{ ϕ(1, ξ + h(ξ)) | ξ ∈ E^s }.    (5.4)

(That (5.4) is the graph of a function from E^s to E^u (if we identify E^s × E^u with E^s ⊕ E^u) is, of course, something that needs to be shown.) Another way of putting this is that for each ξ ∈ E^s,

F[h]((ϕ(1, ξ + h(ξ)))_s) = (ϕ(1, ξ + h(ξ)))_u;
in other words,

F[h] ∘ π_s ∘ ϕ(1, ·) ∘ (id + h) = π_u ∘ ϕ(1, ·) ∘ (id + h),

where π_s and π_u are projections onto E^s and E^u, respectively. A fixed point of the graph transform functional F will be an invariant manifold, and it can be shown that it is, in fact, the stable manifold.
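As a concrete (and deliberately simple) illustration, consider again the system ẋ = −x, ẏ = y + x², whose time-(−1) map can be written in closed form. The sketch below is an assumption-laden toy, not the construction used in the proof: it iterates "push the graph forward, reread it as a graph", with backward time used purely so that the discretized iteration contracts numerically toward the invariant graph, which is the same fixed point.

    import numpy as np

    e = np.e
    xi = np.linspace(-1.0, 1.0, 201)       # grid on E^s
    h = np.zeros_like(xi)                  # initial graph: h = 0

    for _ in range(40):
        # time-(-1) map of (x' = -x, y' = y + x^2) applied to the graph of h:
        img_x = e * xi
        img_y = h / e - e * xi**2 * (e - e**-2) / 3
        h = np.interp(xi, img_x, img_y)    # reread the image as a graph over E^s

    print(np.max(np.abs(h + xi**2 / 3)))   # small: exact graph is h(x) = -x^2/3

The residual printed at the end is limited only by the linear interpolation on the grid.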

5.2 Stable Manifold Theorem: Part 2

Statements

Given a normed vector space X and a positive number r , we let X(r )


stand for the closed ball of radius r centered at 0 in X.
The first theorem refers to the differential equation

ẋ = f (x). (5.5)

Theorem (Stable Manifold Theorem)  Suppose that Ω is an open neighborhood of the origin in R^n, and f : Ω → R^n is a C^k function (k ≥ 1) such that 0 is a hyperbolic equilibrium point of (5.5). Let E^s ⊕ E^u be the decomposition of R^n corresponding to the matrix Df(0). Then there is a norm ‖·‖ on R^n, a number r > 0, and a C^k function h : E^s(r) → E^u(r) such that h(0) = 0 and Dh(0) = 0 and such that the local stable manifold W^s_loc(0) of 0 relative to B(r) := E^s(r) ⊕ E^u(r) is the set

{ v_s + h(v_s) | v_s ∈ E^s(r) }.

Moreover, there is a constant c > 0 such that

W^s_loc(0) = { v ∈ B(r) | γ⁺(v) ⊂ B(r) and lim_{t↑∞} e^{ct} ϕ(t, v) = 0 }.

Two immediate and obvious corollaries, which we will not state ex-
plicitly, describe the stable manifolds of other equilibrium points (via
translation) and describe unstable manifolds (by time reversal).
We will actually prove this theorem by first proving an analogous theorem for maps (much as we did with the Hartman-Grobman Theorem). Given a neighborhood U of a fixed point p of a map F, we can define the local stable manifold of p (relative to U) as

W^s_loc(p) := { x ∈ U | F^j(x) ∈ U for every j ∈ N and lim_{j↑∞} F^j(x) = p }.
Theorem (Stable Manifold Theorem for Maps)  Suppose that Ω is an open neighborhood of the origin in R^n, and F : Ω → Ω is an invertible C^k function (k ≥ 1) for which F(0) = 0 and the matrix DF(0) is hyperbolic and invertible. Let E^s ⊕ E^u (= E^− ⊕ E^+) be the decomposition of R^n corresponding to the matrix DF(0). Then there is a norm ‖·‖ on R^n, a number r > 0, a number µ̃ ∈ (0, 1), and a C^k function h : E^s(r) → E^u(r) such that h(0) = 0 and Dh(0) = 0 and such that the local stable manifold W^s_loc(0) of 0 relative to B(r) := E^s(r) ⊕ E^u(r) satisfies

W^s_loc(0) = { v_s + h(v_s) | v_s ∈ E^s(r) }
           = { v ∈ B(r) | F^j(v) ∈ B(r) for every j ∈ N }
           = { v ∈ B(r) | F^j(v) ∈ B(r) and ‖F^j(v)‖ ≤ µ̃^j ‖v‖ for all j ∈ N }.

Preliminaries

The proof of the Stable Manifold Theorem for Maps will be broken up into a series of lemmas. Before stating and proving those lemmas, we need to lay a foundation by introducing some terminology and notation and by choosing some constants.
We know that F(0) = 0 and DF(0) is hyperbolic. Then R^n = E^s ⊕ E^u, π_s and π_u are the corresponding projection operators, E^s and E^u are invariant under DF(0), and there are constants µ < 1 and λ > 1 such that all of the eigenvalues of DF(0)|_{E^s} have magnitude less than µ and all of the eigenvalues of DF(0)|_{E^u} have magnitude greater than λ.
When we deal with a matrix representation of DF(q), it will be with respect to a basis that consists of a basis for E^s followed by a basis for E^u. Thus, in block form,

DF(q) = [ A_ss(q)  A_su(q) ; A_us(q)  A_uu(q) ],

where, for example, A_su(q) is a matrix representation of π_s DF(q)|_{E^u} in terms of the basis for E^u and the basis for E^s. Note that, by invariance, A_su(0) = A_us(0) = 0. Furthermore, we can pick our basis vectors so that, with ‖·‖ being the corresponding Euclidean norm of a vector in E^s or in E^u,

‖A_ss(0)‖ := sup_{v_s ≠ 0} ‖A_ss(0)v_s‖/‖v_s‖ < µ
and

m(A_uu(0)) := inf_{v_u ≠ 0} ‖A_uu(0)v_u‖/‖v_u‖ > λ.

(The functional m(·) defined implicitly in the last formula is sometimes called the minimum norm, even though it is not a norm.) For a vector v ∈ R^n, let ‖v‖ = max{‖π_s v‖, ‖π_u v‖}. This will be the norm on R^n that will be used throughout the proof. Note that B(r) := E^s(r) ⊕ E^u(r) is the closed ball of radius r in R^n under this norm.
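For the Euclidean norm, the minimum norm of an invertible matrix is simply its smallest singular value, which equals 1/‖A^{−1}‖. A quick numerical aside (not from the text; the matrix is an arbitrary example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    svals = np.linalg.svd(A, compute_uv=False)
    print(svals[-1])                                  # m(A): smallest singular value
    print(1 / np.linalg.norm(np.linalg.inv(A), 2))    # equivalently 1/||A^{-1}||

With respect to other norms, such as the max-type norm used in this proof, m(·) is defined by the same infimum but need not equal a singular value.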
Next, we choose r. Fix α > 0. Pick ε > 0 small enough that

µ + εα + ε < 1 < λ − ε/α − 2ε.

Pick r > 0 small enough that if q ∈ B(r) then

‖A_ss(q)‖ < µ,
m(A_uu(q)) > λ,
‖A_su(q)‖ < ε,
‖A_us(q)‖ < ε,
‖DF(q) − DF(0)‖ < ε,

and DF(q) is invertible. (We can do this since F is C^1, so DF(·) is continuous.)
Now, define

W_r^s := ∩_{j=0}^∞ F^{−j}(B(r)),

and note that W_r^s is the set of all points in B(r) that produce forward semiorbits (under the discrete dynamical system generated by F) that stay in B(r) for all forward iterates. By definition, W^s_loc(0) ⊆ W_r^s; we will show that these two sets are, in fact, equal.
Two other types of geometric sets play vital roles in the proof: cones and disks. The cones are of two types: stable and unstable. The stable cone (of "slope" α) is

C^s(α) := { v ∈ R^n | ‖π_u v‖ ≤ α‖π_s v‖ },

and the unstable cone (of "slope" α) is

C^u(α) := { v ∈ R^n | ‖π_u v‖ ≥ α‖π_s v‖ }.
An unstable disk is a set of the form

{ v_u + ψ(v_u) | v_u ∈ E^u(r) }

for some Lipschitz continuous function ψ : E^u(r) → E^s(r) with Lipschitz constant (less than or equal to) α^{−1}.

5.3 Stable Manifold Theorem: Part 3

The Action of DF (p) on the Unstable Cone

The first lemma shows that if the derivative of the map is applied to a
point in the unstable cone, the image is also in the unstable cone.

Lemma (Linear Invariance of the Unstable Cone)  If p ∈ B(r), then DF(p)C^u(α) ⊆ C^u(α).

Proof. Let p ∈ B(r) and v ∈ C^u(α). Then, if we let v_s = π_s v and v_u = π_u v, we have ‖v_u‖ ≥ α‖v_s‖, so

‖π_u DF(p)v‖ = ‖A_us(p)v_s + A_uu(p)v_u‖
             ≥ ‖A_uu(p)v_u‖ − ‖A_us(p)v_s‖
             ≥ m(A_uu(p))‖v_u‖ − ‖A_us(p)‖‖v_s‖ ≥ λ‖v_u‖ − ε‖v_s‖
             ≥ (λ − ε/α)‖v_u‖,

and

‖π_s DF(p)v‖ = ‖A_ss(p)v_s + A_su(p)v_u‖ ≤ ‖A_ss(p)v_s‖ + ‖A_su(p)v_u‖
             ≤ ‖A_ss(p)‖‖v_s‖ + ‖A_su(p)‖‖v_u‖ ≤ µ‖v_s‖ + ε‖v_u‖
             ≤ (µ/α + ε)‖v_u‖.

Since λ − ε/α ≥ α(µ/α + ε),

‖π_u DF(p)v‖ ≥ α‖π_s DF(p)v‖,

so DF(p)v ∈ C^u(α).
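A quick numerical sanity check of this lemma (not from the text; the matrix, constants, and sampling are illustrative) for a sample block matrix satisfying the estimates with µ = 0.5, λ = 2, ε = 0.1, and α = 1:

    import numpy as np

    alpha = 1.0
    DF = np.array([[0.5, 0.1],    # [[A_ss, A_su],
                   [0.1, 2.0]])   #  [A_us, A_uu]]  (one-dimensional blocks)
    rng = np.random.default_rng(0)
    for _ in range(1000):
        v = rng.uniform(-1, 1, 2)
        if abs(v[1]) < alpha * abs(v[0]):       # keep only v in C^u(alpha)
            continue
        w = DF @ v
        assert abs(w[1]) >= alpha * abs(w[0])   # image stays in C^u(alpha)
    print("unstable cone preserved on all samples")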

The Action of F on Moving Unstable Cones

The main part of the second lemma is that moving unstable cones are positively invariant. More precisely, if two points are in B(r) and one
of the two points is in a translate of the unstable cone that is centered at the second point, then their images under F satisfy the same relationship. The lemma also provides estimates on the rates at which the stable and unstable parts of the difference between the two points contract or expand, respectively.
In this lemma (and later) we use the convention that if X and Y are subsets of a vector space, then

X + Y := { x + y | x ∈ X and y ∈ Y }.

Lemma (Moving Unstable Cones)  If p, q ∈ B(r) and q ∈ {p} + C^u(α), then:

(a) ‖π_s(F(q) − F(p))‖ ≤ (µ/α + ε)‖π_u(q − p)‖;

(b) ‖π_u(F(q) − F(p))‖ ≥ (λ − ε/α − ε)‖π_u(q − p)‖;

(c) F(q) ∈ {F(p)} + C^u(α).

Proof. We will write differences as integrals (using the Fundamental Theorem of Calculus) and use our estimates on DF(v), for v ∈ B(r), to estimate these integrals.
Since B(r) is convex,

‖π_s(F(q) − F(p))‖ = ‖∫_0^1 (d/dt) π_s F(tq + (1−t)p) dt‖
  = ‖∫_0^1 π_s DF(tq + (1−t)p)(q − p) dt‖
  = ‖∫_0^1 A_ss(tq + (1−t)p)π_s(q − p) dt + ∫_0^1 A_su(tq + (1−t)p)π_u(q − p) dt‖
  ≤ ∫_0^1 ‖A_ss(tq + (1−t)p)‖‖π_s(q − p)‖ dt + ∫_0^1 ‖A_su(tq + (1−t)p)‖‖π_u(q − p)‖ dt
  ≤ ∫_0^1 [µ‖π_s(q − p)‖ + ε‖π_u(q − p)‖] dt ≤ (µ/α + ε)‖π_u(q − p)‖.

This gives (a).
Similarly,

‖π_u(F(q) − F(p))‖
  = ‖∫_0^1 A_us(tq + (1−t)p)π_s(q − p) dt + ∫_0^1 A_uu(tq + (1−t)p)π_u(q − p) dt‖
  ≥ ‖∫_0^1 A_uu(0)π_u(q − p) dt‖ − ‖∫_0^1 A_us(tq + (1−t)p)π_s(q − p) dt‖
    − ‖∫_0^1 (A_uu(tq + (1−t)p) − A_uu(0))π_u(q − p) dt‖
  ≥ m(A_uu(0))‖π_u(q − p)‖ − ∫_0^1 ‖A_us(tq + (1−t)p)‖‖π_s(q − p)‖ dt
    − ∫_0^1 ‖A_uu(tq + (1−t)p) − A_uu(0)‖‖π_u(q − p)‖ dt
  ≥ λ‖π_u(q − p)‖ − ε‖π_s(q − p)‖ − ε‖π_u(q − p)‖
  ≥ (λ − ε/α − ε)‖π_u(q − p)‖.

This gives (b).
This gives (b).



From (a), (b), and the choice of ε, we have

‖π_u(F(q) − F(p))‖ ≥ (λ − ε/α − ε)‖π_u(q − p)‖
                   ≥ (µ + εα)‖π_u(q − p)‖
                   ≥ α‖π_s(F(q) − F(p))‖,

so F(q) − F(p) ∈ C^u(α), which means that (c) holds.

5.4 Stable Manifold Theorem: Part 4

Stretching of C 1 Unstable Disks

The next lemma shows that if F is applied to a C^1 unstable disk (i.e., an unstable disk that is the graph of a C^1 function), then part of the image gets stretched out of B(r), but the part that remains in B(r) is again a C^1 unstable disk.
Lemma (Unstable Disks)  Let D_0 be a C^1 unstable disk, and recursively define

D_j = F(D_{j−1}) ∩ B(r)

for each j ∈ N. Then each D_j is a C^1 unstable disk, and

diam(π_u(∩_{i=0}^{j} F^{−i}(D_i))) ≤ 2(λ − ε/α − ε)^{−j} r    (5.6)

for each j ∈ N.

Proof. Because of induction, we only need to handle the case j = 1. The estimate on the diameter of the π_u projection of the preimage of D_1 under F is a consequence of part (b) of the lemma on moving unstable cones. That D_1 is the graph of an α^{−1}-Lipschitz function ψ_1 from a subset of E^u(r) to E^s(r) is a consequence of part (c) of that same lemma. Thus, all we need to show is that dom(ψ_1) = E^u(r) and that ψ_1 is C^1.
Let ψ_0 : E^u(r) → E^s(r) be the C^1 function (with Lipschitz constant less than or equal to α^{−1}) such that

D_0 = { v_u + ψ_0(v_u) | v_u ∈ E^u(r) }.

Define g : E^u(r) → E^u by the formula g(v_u) = π_u F(v_u + ψ_0(v_u)). If we can show that for each y ∈ E^u(r) there exists x ∈ E^u(r) such that

g(x) = y,    (5.7)

then we will know that dom(ψ_1) = E^u(r).


Let y ∈ E^u(r) be given. Let L = A_uu(0). Since m(L) > λ, we know that L^{−1} ∈ L(E^u, E^u) exists and that ‖L^{−1}‖ ≤ 1/λ. Define G : E^u(r) → E^u by the formula G(x) = x − L^{−1}(g(x) − y), and note that fixed points of G are solutions of (5.7), and vice versa. We shall show that G is a contraction and takes the compact set E^u(r) into itself and that, therefore, (5.7) has a solution x ∈ E^u(r).
Note that

Dg(x) = π_u DF(x + ψ_0(x))(I + Dψ_0(x))
      = A_uu(x + ψ_0(x)) + A_us(x + ψ_0(x))Dψ_0(x),
so

‖DG(x)‖ = ‖I − L^{−1}Dg(x)‖ ≤ ‖L^{−1}‖‖L − Dg(x)‖
        ≤ (1/λ)(‖A_uu(x + ψ_0(x)) − A_uu(0)‖ + ‖A_us(x + ψ_0(x))‖‖Dψ_0(x)‖)
        ≤ (ε + ε/α)/λ < 1.
The Mean Value Theorem then implies that G is a contraction.
Now, suppose that x ∈ E^u(r). Then

‖G(x)‖ ≤ ‖G(0)‖ + ‖G(x) − G(0)‖
       ≤ ‖L^{−1}‖(‖g(0)‖ + ‖y‖) + ((ε + ε/α)/λ)‖x‖
       ≤ (1/λ)(‖g(0)‖ + r + (ε + ε/α)r).
Let ρ : E^s(r) → E^u(r) be defined by the formula ρ(v_s) = π_u F(v_s). Since ρ(0) = 0 and, for any v_s ∈ E^s(r), ‖Dρ(v_s)‖ = ‖A_us(v_s)‖ ≤ ε, the Mean Value Theorem tells us that

‖g(0)‖ = ‖π_u F(ψ_0(0))‖ = ‖ρ(ψ_0(0))‖ ≤ ε‖ψ_0(0)‖ ≤ εr.    (5.8)

Plugging (5.8) into the previous estimate, we see that

‖G(x)‖ ≤ (1/λ)(εr + r + (ε + ε/α)r) = ((1 + ε/α + 2ε)/λ)r < r,

so G(x) ∈ E^u(r).
so G(x) ∈ E u (r ).
That completes the verification that (5.7) has a solution for each y ∈ E^u(r) and, therefore, that dom(ψ_1) = E^u(r). To finish the proof, we need to show that ψ_1 is C^1. Let g̃ be the restriction of g to g^{−1}(D_1), and observe that

ψ_1 ∘ g̃ = π_s ∘ F ∘ (I + ψ_0).    (5.9)

We have shown that g̃ is a bijection of g^{−1}(D_1) with D_1 and, by the Inverse Function Theorem, g̃^{−1} is C^1. Thus, if we rewrite (5.9) as

ψ_1 = π_s ∘ F ∘ (I + ψ_0) ∘ g̃^{−1},

we can see that ψ_1, as the composition of C^1 functions, is indeed C^1.
W_r^s is a Lipschitz Manifold

Recall that W_r^s was defined to be all points in the box B(r) that produce forward orbits that remain confined within B(r). The next lemma shows that this set is a manifold.

Lemma (Nature of W_r^s)  W_r^s is the graph of a function h : E^s(r) → E^u(r) that satisfies h(0) = 0 and that has a Lipschitz constant less than or equal to α.
Proof. For each v_s ∈ E^s(r), consider the set

D := {v_s} + E^u(r).

D is a C^1 unstable disk, so by the lemma on unstable disks, the subset S_j of D that stays in B(r) for at least j iterations of F has a diameter less than or equal to 2(λ − ε/α − ε)^{−j} r. By the continuity of F, S_j is closed. Hence, the subset S_∞ of D that stays in B(r) for an unlimited number of iterations of F is the intersection of a nested collection of closed sets whose diameters approach 0. This means that S_∞ is a singleton. Call the single point in S_∞ h(v_s).
It should be clear that W_r^s is the graph of h. That h(0) = 0 follows from the fact that 0 ∈ W_r^s, since F(0) = 0. If h weren't α-Lipschitz, then there would be two points p, q ∈ W_r^s such that p ∈ {q} + C^u(α). Repeated application of parts (b) and (c) of the lemma on moving unstable cones would imply that either F^j(p) or F^j(q) is outside of B(r) for some j ∈ N, contrary to definition.

W^s_loc(0) is a Lipschitz Manifold

Our next lemma shows that W^s_loc(0) = W_r^s and that, in fact, orbits in this set converge to 0 exponentially. (The constant µ̃ in the statement of the theorem can be chosen to be µ + ε if α ≤ 1.)

Lemma (Exponential Decay)  If α ≤ 1, then for each p ∈ W_r^s,

‖F^j(p)‖ ≤ (µ + ε)^j ‖p‖.    (5.10)

In particular, W_r^s = W^s_loc(0).
Proof. Suppose that α ≤ 1 and p ∈ W_r^s. By mathematical induction (and the positive invariance of W_r^s), it suffices to verify (5.10) for j = 1. Since α ≤ 1, the last lemma implies that W_r^s is the graph of a 1-Lipschitz function. Since F(p) ∈ W_r^s, we therefore know that

‖F(p)‖ = max{‖π_s F(p)‖, ‖π_u F(p)‖} = ‖π_s F(p)‖.

Using this and estimating, we find that

‖F(p)‖ = ‖∫_0^1 (d/dt) π_s F(tp) dt‖ = ‖∫_0^1 π_s DF(tp)p dt‖
       = ‖∫_0^1 [A_ss(tp)π_s p + A_su(tp)π_u p] dt‖
       ≤ ∫_0^1 [‖A_ss(tp)‖‖π_s p‖ + ‖A_su(tp)‖‖π_u p‖] dt
       ≤ µ‖π_s p‖ + ε‖π_u p‖ ≤ (µ + ε)‖p‖.

5.5 Stable Manifold Theorem: Part 5

W^s_loc(0) is C^1

Lemma (Differentiability)  The function h : E^s(r) → E^u(r) for which

W^s_loc(0) = { v_s + h(v_s) | v_s ∈ E^s(r) }

is C^1, and Dh(0) = 0.

Proof. Let q ∈ W_r^s be given. We will first come up with a candidate for a plane that is tangent to W_r^s at q, and then we will show that it really works.
For each j ∈ N and each p ∈ W_r^s, define

C^{s,j}(p) := [D(F^j)(p)]^{−1} C^s(α),

and let

C^{s,0}(p) := C^s(α).

By definition (and by the invertibility of DF(v) for all v ∈ B(r)), C^{s,j}(p) is the image of the stable cone under an invertible linear transformation. Note that

C^{s,1}(p) = [DF(p)]^{−1} C^s(α) ⊂ C^s(α) = C^{s,0}(p)
by the (proof of the) lemma on linear invariance of the unstable cone. Similarly,

C^{s,2}(p) = [D(F^2)(p)]^{−1} C^s(α) = [DF(F(p))DF(p)]^{−1} C^s(α)
           = [DF(p)]^{−1}[DF(F(p))]^{−1} C^s(α) = [DF(p)]^{−1} C^{s,1}(F(p))
           ⊂ [DF(p)]^{−1} C^s(α) = C^{s,1}(p)

and

C^{s,3}(p) = [D(F^3)(p)]^{−1} C^s(α) = [DF(F^2(p))DF(F(p))DF(p)]^{−1} C^s(α)
           = [DF(p)]^{−1}[DF(F(p))]^{−1}[DF(F^2(p))]^{−1} C^s(α)
           = [DF(p)]^{−1}[DF(F(p))]^{−1} C^{s,1}(F^2(p))
           ⊂ [DF(p)]^{−1}[DF(F(p))]^{−1} C^s(α) = C^{s,2}(p).

Recursively, we find that, in particular,

C^{s,0}(q) ⊃ C^{s,1}(q) ⊃ C^{s,2}(q) ⊃ C^{s,3}(q) ⊃ · · · .

The plane that we will show is the tangent plane to W_r^s at q is the intersection

C^{s,∞}(q) := ∩_{j=0}^∞ C^{s,j}(q)

of this nested sequence of "cones".


First, we need to show that this intersection is a plane. Suppose that x ∈ C^{s,j}(q). Then x ∈ C^s(α), so

‖π_s DF(q)x‖ = ‖A_ss(q)π_s x + A_su(q)π_u x‖
             ≤ ‖A_ss(q)‖‖π_s x‖ + ‖A_su(q)‖‖π_u x‖ ≤ (µ + εα)‖π_s x‖.

Repeating this sort of estimate, we find that

‖π_s D(F^j)(q)x‖ = ‖π_s DF(F^{j−1}(q))DF(F^{j−2}(q)) · · · DF(q)x‖ ≤ (µ + εα)^j ‖π_s x‖.

On the other hand, if y is also in C^{s,j}(q) and π_s x = π_s y, then repeated applications of the estimates in the lemma on linear invariance of the unstable cone yield

‖π_u D(F^j)(q)x − π_u D(F^j)(q)y‖ ≥ (λ − ε/α)^j ‖π_u x − π_u y‖.
Since D(F^j)(q)C^{s,j}(q) = C^s(α), it must, therefore, be the case that

(λ − ε/α)^j ‖π_u x − π_u y‖ / ((µ + εα)^j ‖π_s x‖) ≤ 2α.

This implies that

‖π_u x − π_u y‖ ≤ 2α((µ + εα)/(λ − ε/α))^j ‖π_s x‖.    (5.11)

Letting j ↑ ∞ in (5.11), we see that for each v_s ∈ E^s there can be no more than one point x in C^{s,∞}(q) satisfying π_s x = v_s. On the other hand, each C^{s,j}(q) contains a plane of dimension dim(E^s) (namely, the preimage of E^s under D(F^j)(q)), so (since the set of planes of that dimension passing through the origin is a compact set in the natural topology) C^{s,∞}(q) contains a plane, as well. This means that C^{s,∞}(q) is a plane P_q that is the graph of a linear function L_q : E^s → E^u.
Before we show that Lq = Dh(q), we make a few remarks.

(a) Because E s ⊂ C s,j (0) for every j ∈ N, P0 = E s and L0 = 0.


 

(b) The estimate (5.11) shows that the size of the largest angle between two vectors in C^{s,j}(q) having the same projection onto E^s goes to zero as j ↑ ∞.

(c) Also, the estimates in the proof of the lemma on linear invariance of the unstable cone show that the size of the minimal angle between a vector in C^{s,1}(F^j(q)) and a vector outside of C^{s,0}(F^j(q)) is bounded away from zero. Since

C^{s,j}(q) = [D(F^j)(q)]^{−1} C^s(α) = [D(F^j)(q)]^{−1} C^{s,0}(F^j(q))

and

C^{s,j+1}(q) = [D(F^{j+1})(q)]^{−1} C^s(α)
             = [D(F^j)(q)]^{−1}[DF(F^j(q))]^{−1} C^s(α)
             = [D(F^j)(q)]^{−1} C^{s,1}(F^j(q)),

this means that the size of the minimal angle between a vector in C^{s,j+1}(q) and a vector outside of C^{s,j}(q) is also bounded away from zero (for fixed j).
(d) Thus, since C^{s,j+1}(q) depends continuously on q,

P_{q′} ⊂ C^{s,j+1}(q′) ⊂ C^{s,j}(q)

for a given j if q′ is sufficiently close to q. This means that P_q depends continuously on q.
Now, we show that Dh(q) = L_q. Let ε₀ > 0 be given. By remark (b) above, we can choose j ∈ N such that

‖π_u v − L_q π_s v‖ ≤ ε₀‖π_s v‖    (5.12)

whenever v ∈ C^{s,j}(q). By remark (c) above, we know that we can


choose ε′ > 0 such that if w ∈ C^{s,j+1}(q) and ‖r‖ ≤ ε′‖w‖, then w + r ∈ C^{s,j}(q). Because of the differentiability of F^{−j−1}, we can choose η > 0 such that

‖F^{−j−1}(F^{j+1}(q) + v) − q − [D(F^{−j−1})(F^{j+1}(q))]v‖ ≤ (ε′/‖D(F^{j+1})(q)‖)‖v‖    (5.13)

whenever ‖v‖ ≤ η. Define the truncated stable cone

C^s(α, η) := C^s(α) ∩ π_s^{−1}E^s(η).
From the continuity of F and the α-Lipschitz continuity of h, we know that we can pick δ > 0 such that

F^{j+1}(v_s + h(v_s)) ∈ {F^{j+1}(q)} + C^s(α, η)    (5.14)

whenever ‖v_s − π_s q‖ < δ.


Now, suppose that v ∈ C^s(α, η). Then (assuming α ≤ 1) we know that ‖v‖ ≤ η, so (5.13) tells us that

F^{−j−1}(F^{j+1}(q) + v) = q + [D(F^{−j−1})(F^{j+1}(q))]v + r
                         = q + [D(F^{j+1})(q)]^{−1} v + r    (5.15)

for some r satisfying

‖r‖ ≤ (ε′/‖D(F^{j+1})(q)‖)‖v‖.
Let w = [D(F^{j+1})(q)]^{−1} v. Since v ∈ C^s(α), w ∈ C^{s,j+1}(q). Also,

‖w‖ = ‖[D(F^{j+1})(q)]^{−1} v‖ ≥ m([D(F^{j+1})(q)]^{−1})‖v‖ = ‖v‖/‖D(F^{j+1})(q)‖,
so ‖r‖ ≤ ε′‖w‖. Thus, by the choice of ε′, w + r ∈ C^{s,j}(q). Consequently, (5.15) implies that

F^{−j−1}(F^{j+1}(q) + v) ∈ {q} + C^{s,j}(q).

Since v was an arbitrary element of C^s(α, η), we have

F^{−j−1}({F^{j+1}(q)} + C^s(α, η)) ⊆ {q} + C^{s,j}(q).    (5.16)

Set q_s := π_s q, and suppose that v_s ∈ E^s(r) satisfies ‖v_s − q_s‖ ≤ δ. By (5.14),

F^{j+1}(v_s + h(v_s)) ∈ {F^{j+1}(q)} + C^s(α, η).

This, the invertibility of F, and (5.16) imply

v_s + h(v_s) ∈ {q} + C^{s,j}(q),

or, in other words,

v_s + h(v_s) − q_s − h(q_s) ∈ C^{s,j}(q).


 

The estimate (5.12) then tells us that

‖h(v_s) − h(q_s) − L_q(v_s − q_s)‖ ≤ ε₀‖v_s − q_s‖,

which proves that Dh(q) = L_q (since ε₀ was arbitrary).
Remark (d) above implies that Dh(q) depends continuously on q, so h ∈ C^1. Remark (a) above implies that Dh(0) = 0.

5.6 Stable Manifold Theorem: Part 6

Higher Differentiability

Lemma (Higher Differentiability)  If F is C^k, then h is C^k.

Proof. We've already seen that this holds for k = 1. We show that it is true for all k by induction. Let k ≥ 2, and assume that the lemma works for k − 1. Define a new map H : R^n × R^n → R^n × R^n by the formula

H(p, v) := (F(p), DF(p)v).
Since F is C^k, H is C^{k−1}. Note that

H^2(p, v) = (F(F(p)), DF(F(p))DF(p)v) = (F^2(p), D(F^2)(p)v),

H^3(p, v) = (F(F^2(p)), DF(F^2(p))D(F^2)(p)v) = (F^3(p), D(F^3)(p)v),

and, in general,

H^j(p, v) = (F^j(p), D(F^j)(p)v).

Also, in block form,

DH(p, v) = [ DF(p)  0 ; D²F(p)v  DF(p) ],

so

DH(0, 0) = [ DF(0)  0 ; 0  DF(0) ],

which is hyperbolic and invertible, since DF(0) is. Applying the induction hypothesis, we can conclude that the fixed point of H at the origin (in R^n × R^n) has a local stable manifold W that is C^{k−1}.
Fix q ∈ W_r^s, and note that F^j(q) → 0 as j ↑ ∞ and

P_q = { v ∈ R^n | lim_{j↑∞} D(F^j)(q)v = 0 }.

This means that

P_q = { v ∈ R^n | (q, v) ∈ W }.

Since W has a C^{k−1} dependence on q, so does P_q. Hence, h is C^k.

Flows

Now we discuss how the Stable Manifold Theorem for maps implies the Stable Manifold Theorem for flows. Given f : Ω → R^n satisfying f(0) = 0, let F = ϕ(1, ·), where ϕ is the flow generated by the differential equation

ẋ = f(x).    (5.17)
If f is C^k, so is F. Clearly, F is invertible and F(0) = 0. Our earlier discussion on differentiation with respect to initial conditions tells us that

(d/dt) D_x ϕ(t, x) = Df(ϕ(t, x)) D_x ϕ(t, x)

and D_x ϕ(0, x) = I, where D_x represents differentiation with respect to x. Setting

g(t) = D_x ϕ(t, x)|_{x=0},

this implies, in particular, that

g′(t) = Df(0)g(t)

and g(0) = I, so

g(t) = e^{tDf(0)}.

Setting t = 1, we see that

e^{Df(0)} = g(1) = D_x ϕ(1, x)|_{x=0} = D_x F(x)|_{x=0} = DF(0).

Thus, DF(0) is invertible, and if (5.17) has a hyperbolic equilibrium at the origin then DF(0) is hyperbolic.
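A hedged numerical check of the identity DF(0) = e^{Df(0)} in the linear case, where the time-1 map is exactly the matrix exponential (the matrix here is an arbitrary hyperbolic choice):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import expm

    A = np.array([[-1.0, 2.0],
                  [0.0, 1.5]])

    def time_one(v):              # one column of DF(0): flow e_i for 1 time unit
        sol = solve_ivp(lambda t, x: A @ x, (0.0, 1.0), v,
                        rtol=1e-10, atol=1e-12)
        return sol.y[:, -1]

    DF0 = np.column_stack([time_one(col) for col in np.eye(2)])
    print(np.max(np.abs(DF0 - expm(A))))   # agreement to integration tolerance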
Since F satisfies the hypotheses of the Stable Manifold Theorem for maps, we know that F has a local stable manifold W_r^s on some box B(r). Assume that α < 1 and that r is small enough that the vector field of (5.17) points into B(r) on C^s(α) ∩ ∂B(r). (See the estimates in Section 3.4.) The requirements for a point to be in W_r^s are no more restrictive than the requirements to be in the local stable manifold 𝒲_r^s of the origin with respect to the flow, so 𝒲_r^s ⊆ W_r^s.
We claim that, in fact, these two sets are equal. Suppose they are not. Then there is a point q ∈ W_r^s \ 𝒲_r^s. Let x(t) be the solution of (5.17) satisfying x(0) = q. Since lim_{j↑∞} F^j(q) = 0 and, in a neighborhood of the origin, there is a bound on the factor by which x(t) can grow in 1 unit of time, we know that x(t) → 0 as t ↑ ∞. Among other things, this implies that

(a) x(t) ∉ 𝒲_r^s for some t > 0, and

(b) x(t) ∈ 𝒲_r^s for all t sufficiently large.
Since 𝒲_r^s is a closed set and x is continuous, (a) and (b) say that we can pick t_0 to be the earliest time such that x(t) ∈ 𝒲_r^s for every t ≥ t_0.
Now, consider the location of x(t) for t in the interval [t_0 − 1, t_0). Since x(0) ∈ W_r^s, we know that x(j) ∈ W_r^s for every j ∈ N. In particular, we can choose t_1 ∈ [t_0 − 1, t_0) such that x(t_1) ∈ W_r^s. By definition of t_0, we can choose t_2 ∈ (t_1, t_0) such that x(t_2) ∉ W_r^s. By the continuity of x and the closedness of W_r^s, we can pick t_3 to be the last time before t_2 such that x(t_3) ∈ W_r^s. By definition of W_r^s, if t ∈ [t_0 − 1, t_0) and x(t) ∉ W_r^s, then x(t) ∉ B(r); hence, x(t) must leave B(r) at time t = t_3. But this contradicts the fact that the vector field points into B(r) at x(t_3), since x(t_3) ∈ C^s(α) ∩ ∂B(r). This contradiction implies that no point q ∈ W_r^s \ 𝒲_r^s exists; i.e., W_r^s = 𝒲_r^s.
The exponential decay of solutions of the flow on the local stable manifold is a consequence of the similar decay estimate for the map, along with the observation that, near 0, there is a bound on the factor by which a solution can grow in 1 unit of time.

 5.7 Center Manifolds 



Definition

Recall that for the linear differential equation

ẋ = Ax    (5.18)

the corresponding invariant subspaces E^u, E^s, and E^c had the characterizations

E^u = ∪_{c>0} { x ∈ R^n | lim_{t↓−∞} |e^{−ct}e^{tA}x| = 0 },

E^s = ∪_{c>0} { x ∈ R^n | lim_{t↑∞} |e^{ct}e^{tA}x| = 0 },

and

E^c = ∩_{c>0} { x ∈ R^n | lim_{t↓−∞} |e^{ct}e^{tA}x| = lim_{t↑∞} |e^{−ct}e^{tA}x| = 0 }.

The Stable Manifold Theorem tells us that for the nonlinear differential equation

ẋ = f(x),    (5.19)

with f(0) = 0, the stable manifold W^s(0) and the unstable manifold W^u(0) have characterizations similar to E^s and E^u, respectively:

W^s(0) = ∪_{c>0} { x ∈ R^n | lim_{t↑∞} |e^{ct}ϕ(t, x)| = 0 }

and

W^u(0) = ∪_{c>0} { x ∈ R^n | lim_{t↓−∞} |e^{−ct}ϕ(t, x)| = 0 },

where ϕ is the flow generated by (5.19). (This was only verified when the equilibrium point at the origin was hyperbolic, but a similar result holds in general.)
Is there a useful way to modify the characterization of E^c similarly to get a characterization of a center manifold W^c(0)? Not really. The main problem is that the characterizations of E^s and E^u only depend on the local behavior of solutions when they are near the origin, but the characterization of E^c depends on the behavior of solutions that are, possibly, far from 0.
Still, the idea of a center manifold as some sort of nonlinear analogue of E^c is useful. Here's one widely-used definition:

Definition  Let A = Df(0). A center manifold W^c(0) of the equilibrium point 0 of (5.19) is an invariant manifold whose dimension equals the dimension of the invariant subspace E^c of (5.18) and which is tangent to E^c at the origin.

Nonuniqueness

While the fact that stable and unstable manifolds are really manifolds is a theorem (namely, the Stable Manifold Theorem), a center manifold is a manifold by definition. Also, note that we refer to the stable manifold and the unstable manifold, but we refer to a center manifold. This is because center manifolds are not necessarily unique. An extremely simple example of nonuniqueness (commonly credited to Kelley) is the planar system

ẋ = x²
ẏ = −y.

Clearly, E^c is the x-axis, and solving the system explicitly reveals that for any constant c ∈ R the curve

{ (x, y) ∈ R² | x < 0 and y = ce^{1/x} } ∪ { (x, 0) ∈ R² | x ≥ 0 }

is a center manifold.
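A short symbolic verification of this family (a sketch using sympy; the computation is elementary but reassuring): along solutions with x < 0, dy/dx = ẏ/ẋ = −y/x², whose general solution is y = ce^{1/x}, and every such curve is flat to all orders at the origin.

    import sympy as sp

    x = sp.symbols("x", negative=True)
    c = sp.symbols("c")
    y = c * sp.exp(1 / x)
    # The curve solves the orbit equation dy/dx = -y/x**2:
    print(sp.simplify(sp.diff(y, x) + y / x**2))   # -> 0
    # and is tangent to E^c (the x-axis) as x -> 0-:
    print(sp.limit(sp.diff(y, x), x, 0, dir="-"))  # -> 0

Since all derivatives of ce^{1/x} vanish as x ↑ 0, these center manifolds agree to infinite order at the origin even though they differ as sets.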

Existence

There is a Center Manifold Theorem just like there was a Stable Manifold Theorem. However, the goal of the Center Manifold Theorem is not to characterize a center manifold; that is done by the definition. The Center Manifold Theorem asserts the existence of a center manifold.
We will not state this theorem precisely nor prove it, but we can give some indication how the proof of existence of a center manifold might go. Suppose that none of the eigenvalues of Df(0) have real part equal to α, where α is a given real number. Then we can split the eigenvalues up into two sets: those with real part less than α and those with real part greater than α. Let E^− be the vector space spanned by the generalized eigenvectors corresponding to the first set of eigenvalues, and let E^+ be the vector space spanned by the generalized eigenvectors corresponding to the second set of eigenvalues. If we cut off f so that it stays nearly linear throughout R^n, then an analysis very much like that in the proof of the Stable Manifold Theorem can be done to conclude that there are invariant manifolds called the pseudo-stable manifold and the pseudo-unstable manifold that are tangent, respectively, to E^− and E^+ at the origin. Solutions x(t) in the first manifold satisfy e^{−αt}x(t) → 0 as t ↑ ∞, and solutions in the second manifold satisfy e^{−αt}x(t) → 0 as t ↓ −∞.
Now, suppose that α is chosen to be negative but larger than the
real part of the eigenvalues with negative real part. The corresponding
pseudo-unstable manifold is called a center-unstable manifold and is
written W cu (0). If, on the other hand, we choose α to be between
zero and all the positive real parts of eigenvalues, then the resulting
pseudo-stable manifold is called a center-stable manifold and is written
W cs (0). It turns out that

W c (0) := W cs (0) ∩ W cu (0)
is a center manifold.

Center Manifold as a Graph

Since a center manifold W^c(0) is tangent to E^c at the origin it can, at least locally, be represented as the graph of a function h : E^c → E^s ⊕ E^u. Suppose, for simplicity, that (5.19) can be rewritten in the form

ẋ = Ax + F(x, y)
ẏ = By + G(x, y),    (5.20)

where x ∈ E^c, y ∈ E^s ⊕ E^u, the eigenvalues of A all have zero real part, all of the eigenvalues of B have nonzero real part, and F and G are higher order terms. Then, for points x + y lying on W^c(0), y = h(x). Inserting that into (5.20) and using the chain rule, we get

Dh(x)[Ax + F(x, h(x))] = Dh(x)ẋ = ẏ = Bh(x) + G(x, h(x)).

Thus, if we define an operator M by the formula

(Mφ)(x) := Dφ(x)[Ax + F(x, φ(x))] − Bφ(x) − G(x, φ(x)),

the function h whose graph is the center manifold is a solution of the equation Mh = 0.
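For simple systems, Mh = 0 can be attacked directly with power series. A hedged sympy sketch (the system below is an illustrative assumption, not one from the text): for ẋ = xy, ẏ = −y − x², so that A = 0, F(x, y) = xy, B = −1, and G(x, y) = −x², solving for the low-order coefficients gives h(x) = −x² − 2x⁴ + O(x⁶).

    import sympy as sp

    x, a2, a4 = sp.symbols("x a2 a4")
    phi = a2 * x**2 + a4 * x**4
    # (M phi)(x) = phi'(x)*[0*x + F(x, phi)] - B*phi - G(x, phi)
    M = sp.expand(sp.diff(phi, x) * (x * phi) + phi + x**2)
    eqs = [sp.Poly(M, x).coeff_monomial(x**k) for k in (2, 4)]
    print(sp.solve(eqs, [a2, a4]))   # -> {a2: -1, a4: -2}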

5.8 Computing and Using Center Manifolds

Approximation

Recall that we projected our equation onto E^c and onto E^s ⊕ E^u to get the system

ẋ = Ax + F(x, y)
ẏ = By + G(x, y),    (5.21)

and that we were looking for a function h : E^c → E^s ⊕ E^u satisfying Mh ≡ 0, where

(Mφ)(x) := Dφ(x)[Ax + F(x, φ(x))] − Bφ(x) − G(x, φ(x)).

Except in the simplest of cases we have no hope of trying to get an explicit formula for h, but because of the following theorem of Carr we can approximate h to arbitrarily high orders.
Theorem (Carr)  Let φ be a C^1 mapping of a neighborhood of the origin in R^n into R^n that satisfies φ(0) = 0 and Dφ(0) = 0. Suppose that

(Mφ)(x) = O(|x|^q)

as x → 0 for some constant q > 1. Then

|h(x) − φ(x)| = O(|x|^q)

as x → 0.

Stability

If we put y = h(x) in the first equation in (5.20), we get the reduced equation

ẋ = Ax + F(x, h(x)),    (5.22)

which describes the evolution of the E^c coordinate of solutions on the center manifold. Another theorem of Carr's states that if all the eigenvalues of Df(0) are in the closed left half-plane, then the stability type of the origin as an equilibrium solution of (5.21) (Lyapunov stable, asymptotically stable, or unstable) matches the stability type of the origin as an equilibrium solution of (5.22).
These results of Carr are sometimes useful in computing the stability type of the origin. Consider, for example, the following system:

ẋ = xy + ax³ + by²x
ẏ = −y + cx² + dx²y,

where x and y are real variables and a, b, c, and d are real parameters. We know that there is a center manifold, tangent to the x-axis at the origin, that is (locally) of the form y = h(x). The reduced equation on the center manifold is

ẋ = xh(x) + ax³ + b[h(x)]²x.    (5.23)

To determine the stability of the origin in (5.23) (and, therefore, in the original system) we need to approximate h. Therefore, we consider the operator M defined by

(Mφ)(x) = φ′(x)[xφ(x) + ax³ + b(φ(x))²x] + φ(x) − cx² − dx²φ(x),
and seek a polynomial φ (satisfying φ(0) = φ′(0) = 0) for which the quantity (Mφ)(x) is of high order in x. By inspection, if φ(x) = cx² then (Mφ)(x) = O(x⁴), so h(x) = cx² + O(x⁴), and (5.23) becomes

ẋ = (a + c)x³ + O(x⁵).

Hence, the origin is asymptotically stable if a + c < 0 and is unstable if a + c > 0. What about the borderline case when a + c = 0? Suppose that a + c = 0 and let's go back and try a different φ, namely, one of the form φ(x) = cx² + kx⁴. Plugging this in, we find that (Mφ)(x) = (k − cd)x⁴ + O(x⁶), so if we choose k = cd then (Mφ)(x) = O(x⁶); thus, h(x) = cx² + cdx⁴ + O(x⁶). Inserting this in (5.23), we get

ẋ = (cd + bc²)x⁵ + O(x⁷),

so the origin is asymptotically stable if cd + bc² < 0 (and a + c = 0) and is unstable if cd + bc² > 0 (and a + c = 0).
What if a + c = 0 and cd + bc² = 0? Suppose that these two conditions hold, and consider φ of the form φ(x) = cx² + cdx⁴ + kx⁶ for some k ∈ R yet to be determined. Calculating, we discover that (Mφ)(x) = (k − b²c³)x⁶ + O(x⁸), so by choosing k = b²c³, we see that h(x) = cx² + cdx⁴ + b²c³x⁶ + O(x⁸). Inserting this in (5.23), we see that (if a + c = 0 and cd + bc² = 0)

ẋ = −b²c³x⁷ + O(x⁹).

Hence, if a + c = cd + bc² = 0 and b²c³ > 0 then the origin is asymptotically stable, and if a + c = cd + bc² = 0 and b²c³ < 0 then the origin is unstable.
It can be checked that in the remaining borderline case a + c = cd + bc² = b²c³ = 0, h(x) ≡ cx² and the reduced equation is simply ẋ = 0. Hence, in this case, the origin is Lyapunov stable, but not asymptotically stable.
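These expansions are easy to confirm (or extend) with a computer algebra system. A hedged sympy check of the last step, under the stated borderline conditions:

    import sympy as sp

    x, a, b, c, d, k = sp.symbols("x a b c d k")
    phi = c*x**2 + c*d*x**4 + k*x**6
    M = sp.expand(sp.diff(phi, x)*(x*phi + a*x**3 + b*phi**2*x)
                  + phi - c*x**2 - d*x**2*phi)
    M = M.subs(a, -c)                  # impose a + c = 0
    M = sp.expand(M.subs(d, -b*c))     # impose c*d + b*c**2 = 0 (assuming c != 0)
    print(sp.Poly(M, x).coeff_monomial(x**6))   # -> k - b**2*c**3

The x⁶ coefficient vanishes exactly for k = b²c³, as claimed above.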

Bifurcation Theory

Bifurcation theory studies fundamental changes in the structure of the solutions of a differential equation or a dynamical system in response to changes in a parameter. Consider the parametrized equation

ẋ = F(x, ε),    (5.24)

where x ∈ R^n is a variable and ε ∈ R^p is a parameter. Suppose that F(0, ε) = 0 for every ε, that the equilibrium solution at x = 0 is stable when ε = 0, and that we are interested in the possibility of persistent structures (e.g., equilibria or periodic orbits) bifurcating out of the origin as ε is made nonzero. This means that all the eigenvalues of D_x F(0, 0) have nonpositive real part, so we can project (5.24) onto complementary subspaces of R^n and get the equivalent system

u̇ = Au + f(u, v, ε)
v̇ = Bv + g(u, v, ε),

with the eigenvalues of A lying on the imaginary axis and the eigenvalues of B lying in the open left half-plane. Since the parameter ε does not depend on time, we can append the equation ε̇ = 0 to get the expanded system

u̇ = Au + f(u, v, ε)
v̇ = Bv + g(u, v, ε)    (5.25)
ε̇ = 0.
 

The Center Manifold Theorem asserts the existence of a center manifold for the origin that is locally given by points (u, v, ε) satisfying an equation of the form

v = h(u, ε).

Furthermore, a theorem of Carr says that every solution (u(t), v(t), ε) of (5.25) for which (u(0), v(0), ε) is sufficiently close to zero converges exponentially quickly to a solution on the center manifold as t ↑ ∞. In particular, no persistent structure near the origin lies off the center manifold of this expanded system. Hence, it suffices to consider persistent structures for the lower-dimensional equation

u̇ = Au + f(u, h(u, ε), ε).
6  Periodic Orbits

6.1 Poincaré-Bendixson Theorem

Definition  A periodic orbit of a continuous dynamical system ϕ is a set of the form

{ ϕ(t, p) | t ∈ [0, T] }

for some time T > 0 and point p satisfying ϕ(T, p) = p. If this set is a singleton, we say that the periodic orbit is degenerate.

Theorem (Poincaré-Bendixson)  Every nonempty, compact ω-limit set of a C^1 planar flow that does not contain an equilibrium point is a nondegenerate periodic orbit.

We will prove this theorem by means of four lemmas. Throughout our discussion, we will be referring to a C^1 planar flow ϕ and the corresponding vector field f.

Definition  If S is a line segment in R² and p_1, p_2, . . . is a (possibly finite) sequence of points lying on S, then we say that this sequence is monotone on S if (p_j − p_{j−1}) · (p_2 − p_1) ≥ 0 for every j ≥ 2.

Definition  A (possibly finite) sequence p_1, p_2, . . . of points on a trajectory (i.e., an orbit) T of ϕ is said to be monotone on T if we can choose a point p and times t_1 ≤ t_2 ≤ · · · such that ϕ(t_j, p) = p_j for each j.

Definition  A transversal of ϕ is a line segment S such that f is not tangent to S at any point of S.
Lemma  If a (possibly finite) sequence of points p_1, p_2, . . . lies on the intersection of a transversal S and a trajectory T, and the sequence is monotone on T, then it is monotone on S.

Proof. Let p be a point on T. Since S is closed and f is nowhere tangent to S, the times t at which ϕ(t, p) ∈ S form an increasing sequence (possibly biinfinite). Consequently, if the lemma fails then there are times t_1 < t_2 < t_3 and distinct points p_i = ϕ(t_i, p) ∈ S, i ∈ {1, 2, 3}, such that

{p_1, p_2, p_3} = ϕ([t_1, t_3], p) ∩ S

and p_3 is between p_1 and p_2. Note that the union of the line segment p_1p_2 from p_1 to p_2 with the curve ϕ([t_1, t_2], p) is a simple closed curve in the plane, so by the Jordan Curve Theorem it has an "inside" I and an "outside" O. Assuming, without loss of generality, that f points into I all along the "interior" of p_1p_2, we get a picture something like:
[Figure: the simple closed curve formed by the orbit arc ϕ([t_1, t_2], p) together with the segment p_1p_2, with f pointing into the inside I along p_1p_2.]
Note that

I ∪ p_1p_2 ∪ ϕ([t_1, t_2], p)

is a positively invariant set, so, in particular, it contains ϕ([t_2, t_3], p). But the fact that p_3 is between p_1 and p_2 implies that f(p_3) points
into I, so ϕ(t3 − ε, p) ∈ O for ε small and positive. This contradiction


implies that the lemma holds.

The proof of the next lemma uses something called a flow box. A flow box is a (topological) box such that f points into the box along one side, points out of the box along the opposite side, and is tangent to the other two sides, and the restriction of ϕ to the box is conjugate to a unidirectional, constant-velocity flow. The existence of a flow box around any regular point of ϕ is a consequence of the C^r-rectification Theorem.

Lemma  No ω-limit set intersects a transversal in more than one point.

Proof. Suppose that for some point x and some transversal S, ω(x) intersects S at two distinct points p_1 and p_2. Since p_1 and p_2 are on a transversal, they are regular points, so we can choose disjoint subintervals S_1 and S_2 of S containing, respectively, p_1 and p_2, and, for some ε > 0, define flow boxes B_1 and B_2 by

B_i := { ϕ(t, x) | t ∈ [−ε, ε], x ∈ S_i }.

Now, the fact that p_1, p_2 ∈ ω(x) means that we can pick an increasing sequence of times t_1, t_2, . . . such that ϕ(t_j, x) ∈ B_1 if j is odd and ϕ(t_j, x) ∈ B_2 if j is even. In fact, because of the nature of the flow in B_1 and B_2, we can assume that ϕ(t_j, x) ∈ S for each j. Although the sequence ϕ(t_1, x), ϕ(t_2, x), . . . is monotone on the trajectory T := γ(x), it is not monotone on S, contradicting the previous lemma.

Definition  An ω-limit point of a point p is an element of ω(p).

Lemma  Every ω-limit point of an ω-limit point lies on a periodic orbit.

Proof. Suppose that p ∈ ω(q) and q ∈ ω(r). If p is a singular point, then it obviously lies on a (degenerate) periodic orbit, so suppose that p is a regular point. Pick S to be a transversal containing p in its "interior". By putting a suitable flow box around p, we see that, since p ∈ ω(q), the solution beginning at q must repeatedly cross S. But
q ∈ ω(r ) and ω-limit sets are invariant, so the solution beginning at


q remains confined within ω(r ). Since ω(r ) ∩ S contains at most one
point, the solution beginning at q must repeatedly cross S at the same
point; i.e., q lies on a periodic orbit. Since p ∈ ω(q), p must lie on this
same periodic orbit.

Lemma  If an ω-limit set ω(x) contains a nondegenerate periodic orbit P, then ω(x) = P.

Proof. Fix q ∈ P. Pick T > 0 such that ϕ(T, q) = q. Let ε > 0 be given. By continuous dependence, we can pick δ > 0 such that |ϕ(t, y) − ϕ(t, q)| < ε whenever t ∈ [0, 3T/2] and |y − q| < δ. Pick a transversal S of length less than δ with q in its "interior", and create a flow box

B := { ϕ(t, u) | u ∈ S, t ∈ [−ρ, ρ] }

for some ρ ∈ (0, T/2]. By continuity of ϕ(T, ·), we know that we can pick a subinterval S′ of S that contains q and that satisfies ϕ(T, S′) ⊂ B. Let t_j be the jth smallest element of

{ t ≥ 0 | ϕ(t, x) ∈ S′ }.

Because S′ is a transversal and q ∈ ω(x), the t_j are well-defined and increase to infinity as j ↑ ∞. Also, by the lemma on monotonicity, |ϕ(t_j, x) − q| is a decreasing function of j.
Note that for each j ∈ N, ϕ(T, ϕ(t_j, x)) ∈ B, so, by construction of S and B, ϕ(t, ϕ(T, ϕ(t_j, x))) ∈ S for some t ∈ [−T/2, T/2]. Pick such a t. The lemma on monotonicity implies that

ϕ(t, ϕ(T, ϕ(t_j, x))) ∈ S′.

This, in turn, implies that t + T + t_j ∈ {t_1, t_2, . . .}, so

t_{j+1} − t_j ≤ t + T ≤ 3T/2.    (6.1)

Now, let t ≥ t_1 be given. Then t ∈ [t_j, t_{j+1}) for some j ≥ 1. For this j,

|ϕ(t, x) − ϕ(t − t_j, q)| = |ϕ(t − t_j, ϕ(t_j, x)) − ϕ(t − t_j, q)| < ε,

since, by (6.1), |t − t_j| < |t_{j+1} − t_j| ≤ 3T/2 and since, because ϕ(t_j, x) ∈ S′ ⊆ S, |q − ϕ(t_j, x)| < δ.
Since ε was arbitrary, we have shown that

lim_{t↑∞} d(ϕ(t, x), P) = 0.

Thus, P = ω(x), as was claimed.

Now, we get to the proof of the Poincaré-Bendixson Theorem itself. Suppose ω(x) is compact and nonempty. Pick p ∈ ω(x). Since γ⁺(p) is contained in the compact set ω(x), we know ω(p) is nonempty, so we can pick q ∈ ω(p). Note that q is an ω-limit point of an ω-limit point, so, by the third lemma, q lies on a periodic orbit P. Since ω(p) is invariant, P ⊆ ω(p) ⊆ ω(x). If ω(x) contains no equilibrium point, then P is nondegenerate, so, by the fourth lemma, ω(x) = P.

6.2 Lienard’s Equation

Suppose we have a simple electrical circuit with a resistor, an inductor, and a capacitor as shown.

[Figure: a circuit in which an inductor L and a resistor R in series are connected in parallel with a capacitor C; the branch currents are i_L, i_R, and i_C.]
Kirchhoff's current law tells us that

i_L = i_R = −i_C,    (6.2)
and Kirchhoff's voltage law tells us that the corresponding voltage drops satisfy

V_C = V_L + V_R.    (6.3)

By definition of the capacitance C,

C (dV_C/dt) = i_C,    (6.4)

and by Faraday's Law

L (di_L/dt) = V_L,    (6.5)

where L is the inductance of the inductor. We assume that the resistor behaves nonlinearly and satisfies the generalized form of Ohm's Law:

V_R = F(i_R).    (6.6)

Let x = i_L and f(u) := F′(u). By (6.5),

ẋ = (1/L)V_L,

so by (6.3), (6.4), (6.6), and (6.2)

ẍ = (1/L)(dV_L/dt) = (1/L)(V̇_C − V̇_R) = (1/L)((1/C)i_C − F′(i_R)(di_R/dt))
  = (1/L)((1/C)(−x) − f(x)ẋ).

Hence,

ẍ + (1/L)f(x)ẋ + (1/(LC))x = 0.
By rescaling f and t (or, equivalently, by choosing units judiciously), we get Lienard's Equation:

ẍ + f(x)ẋ + x = 0.

We will study Lienard's Equation under the following assumptions on F and f:

(i) F(0) = 0;

(ii) f is Lipschitz continuous;

(iii) F is odd;
(iv) F(x) → ∞ as x ↑ ∞;

(v) for some β > 0, F(β) = 0 and F is positive and increasing on (β, ∞);

(vi) for some α > 0, F(α) = 0 and F is negative on (0, α).

Assumption (vi) corresponds to the existence of a region of negative resistance. Apparently, there are semiconductors called "tunnel diodes" that behave this way.
By setting y = ẋ + F(x), we can rewrite Lienard's Equation as the first-order system

ẋ = y − F(x)
ẏ = −x.    (6.7)

Definition  A limit cycle for a flow is a nondegenerate periodic orbit P that is the ω-limit set or the α-limit set of some point q ∉ P.

Theorem (Lienard's Theorem)  The flow generated by (6.7) has at least one limit cycle. If α = β then this limit cycle is the only nondegenerate periodic orbit, and it is the ω-limit set of all points other than the origin.

The significance of Lienard's Theorem can be seen by comparing Lienard's Equation with the linear equation that would have resulted if we had assumed a linear resistor. Such linear RCL circuits can have oscillations with arbitrary amplitude. Lienard's Theorem says that, under suitable hypotheses, a nonlinear resistor selects oscillations of one particular amplitude.
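A numerical illustration (a sketch, not part of the proof): taking F(x) = x³/3 − x, so that f(x) = x² − 1 and α = β = √3, system (6.7) is the classical Van der Pol oscillator. Trajectories started inside and outside the limit cycle settle onto oscillations of the same amplitude:

    import numpy as np
    from scipy.integrate import solve_ivp

    def lienard(t, u):
        x, y = u
        return [y - (x**3 / 3 - x), -x]

    for u0 in ([0.1, 0.0], [4.0, 4.0]):             # one inner and one outer start
        sol = solve_ivp(lienard, (0.0, 60.0), u0, rtol=1e-9, max_step=0.05)
        amp = np.max(np.abs(sol.y[0][sol.t > 40]))  # x-amplitude after transients
        print(u0, "->", amp)                        # both approach ~2.0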
We will prove the first half of Lienard's Theorem by finding a compact, positively invariant region that does not contain an equilibrium point and then using the Poincaré-Bendixson Theorem. Note that the origin is the only equilibrium point of (6.7). Since

(d/dt)(x² + y²) = 2(xẋ + yẏ) = −2xF(x),

assumption (vi) implies that for ε small, R² \ B(0, ε) is positively invariant.
The nullclines x = 0 and y = F(x) of (6.7) (i.e., curves along which the flow is either vertical or horizontal) separate the plane into four regions A, B, C, and D, and the general direction of flow in those regions is as shown below. Note that away from the origin, the speed of trajectories is bounded below, so every solution of (6.7) except (x, y) = (0, 0) passes through A, B, C, and D in succession an infinite number of times as it circles around the origin in a clockwise direction.

[Figure: the nullclines divide the plane into the regions D and A (above) and C and B (below); the flow moves clockwise through A, B, C, and D in succession.]

We claim that if a solution starts at a point (0, y_0) that is high enough up on the positive y-axis, then the first point (0, ỹ_0) it hits on the negative y-axis is closer to the origin than (0, y_0) was. Assume, for the moment, that this claim is true. Let S_1 be the orbit segment connecting (0, y_0) to (0, ỹ_0). Because of the symmetry in (6.7) implied by Assumption (iii), the set

S_2 := { (x, y) ∈ R² | (−x, −y) ∈ S_1 }

is also an orbit segment. Let

S_3 := { (0, y) ∈ R² | −ỹ_0 < y < y_0 },
S_4 := { (0, y) ∈ R² | −y_0 < y < ỹ_0 },

and let

S_5 := { (x, y) ∈ R² | x² + y² = ε² },
for some small ε. Then it is not hard to see that ∪_{i=1}^5 S_i is the boundary of a compact, positively invariant region that does not contain an equilibrium point.

[Figure: the annular region bounded by the orbit segments S_1 and S_2, the y-axis segments S_3 and S_4 (with endpoints y_0, −ỹ_0, ỹ_0, and −y_0), and the small circle S_5 around the origin.]

To verify the claim, we will use the function R(x, y) := (x² + y²)/2, and show that if y_0 is large enough (and ỹ_0 is as defined above) then

R(0, y_0) > R(0, ỹ_0).

6.3 Lienard’s Theorem

Recall that we're going to estimate the change of R(x, y) := (x² + y²)/2 along the orbit segment connecting (0, y_0) to (0, ỹ_0). Notice that if the point (a, b) and the point (c, d) lie on the same trajectory then

R(c, d) − R(a, b) = ∫_{(a,b)}^{(c,d)} dR.

(The integral is a line integral.) Since Ṙ = −xF(x), if y is a function of x along the orbit segment connecting (a, b) to (c, d), then

R(c, d) − R(a, b) = ∫_a^c (Ṙ/ẋ) dx = ∫_a^c (−xF(x)/(y(x) − F(x))) dx.    (6.8)
If, on the other hand, x is a function of y along the orbit segment connecting (a, b) to (c, d), then

R(c, d) − R(a, b) = ∫_b^d (Ṙ/ẏ) dy = ∫_b^d (−x(y)F(x(y))/(−x(y))) dy
                  = ∫_b^d F(x(y)) dy.    (6.9)

We will chop the orbit segment connecting (0, y_0) to (0, ỹ_0) up into pieces and use (6.8) and (6.9) to estimate the change ∆R in R along each piece and, therefore, along the whole orbit segment.
Let σ = β + 1, and let

B = sup_{0≤x≤σ} |F(x)|.

Consider the region

R := { (x, y) ∈ R² | x ∈ [0, σ], y ∈ [B + σ, ∞) }.

In R,

|dy/dx| = |x/(y − F(x))| ≤ σ/σ = 1;

hence, if y_0 > B + 2σ, then the corresponding trajectory must exit R through its right boundary, say, at the point (σ, y_σ). Similarly, if ỹ_0 < −B − 2σ, then the trajectory it lies on must have last previously hit the line x = σ at a point (σ, ỹ_σ). Now, assume that as y_0 → ∞, ỹ_0 → −∞. (If not, then the claim clearly holds.) Based on this assumption we know that we can pick a value for y_0 and a corresponding value for ỹ_0 that are both larger than B + 2σ in absolute value, and conclude that the orbit segment connecting them looks qualitatively like:
[Figure: the orbit segment descending from (0, y_0) through (σ, y_σ), around the right of the graph of F, and back through (σ, ỹ_σ) to (0, ỹ_0).]

We will estimate ∆R on the entire orbit segment from (0, y_0) to (0, ỹ_0) by considering separately the orbit segment from (0, y_0) to (σ, y_σ), the segment from (σ, y_σ) to (σ, ỹ_σ), and the segment from (σ, ỹ_σ) to (0, ỹ_0).
First, consider the first segment. On this segment, the y-coordinate is a function y(x) of the x-coordinate. Thus,

|R(σ, y_σ) − R(0, y_0)| = |∫_0^σ (−xF(x)/(y(x) − F(x))) dx|
  ≤ ∫_0^σ |xF(x)/(y(x) − F(x))| dx ≤ ∫_0^σ (σB/(y_0 − B − σ)) dx
  = σ²B/(y_0 − B − σ) → 0

as y_0 → ∞. A similar estimate shows that |R(0, ỹ_0) − R(σ, ỹ_σ)| → 0 as y_0 → ∞.
On the middle segment, the x-coordinate is a function x(y) of the y-coordinate y. Hence,

R(σ, ỹ_σ) − R(σ, y_σ) = ∫_{y_σ}^{ỹ_σ} F(x(y)) dy ≤ −|y_σ − ỹ_σ|F(σ) → −∞

as y_0 → ∞.
Putting these three estimates together, we see that

R(0, ỹ_0) − R(0, y_0) → −∞


as y_0 → ∞, so |ỹ_0| < |y_0| if y_0 is sufficiently large. This shows that the orbit connecting these two points forms part of the boundary of a compact, positively invariant set that surrounds (but omits) the origin. By the Poincaré-Bendixson Theorem, there must be a limit cycle in this set.
Now for the second half of Lienard's Theorem. We need to show that if α = β (i.e., if F has a unique positive zero) then the limit cycle whose existence we've deduced is the only nondegenerate periodic orbit and it attracts all points other than the origin. If we can show the uniqueness of the limit cycle, then the fact that we can make our compact, positively invariant set as large as we want and make the hole cut out of its center as small as we want will imply that it attracts all points other than the origin. Note also that our observations on the general direction of the flow imply that any nondegenerate periodic orbit must circle the origin in the clockwise direction.
So, suppose that α = β and consider, as before, orbit segments that start on the positive y-axis at a point (0, y_0) and end on the negative y-axis at a point (0, ỹ_0). Such orbit segments are "nested" and fill up the open right half-plane. We need to show that only one of them satisfies ỹ_0 = −y_0. In other words, we claim that there is only one segment that gives

R(0, ỹ_0) − R(0, y_0) = 0.

Now, if such a segment hits the x-axis on [0, β], then x ≤ β all along that segment, and F(x) ≤ 0 with equality only if (x, y) = (β, 0). Let x(y) be the x-coordinate as a function of y and observe that

R(0, ỹ_0) − R(0, y_0) = ∫_{y_0}^{ỹ_0} F(x(y)) dy > 0.    (6.10)

We claim that for values of y_0 generating orbits intersecting the x-axis in (β, ∞), R(0, ỹ_0) − R(0, y_0) is a strictly decreasing function of y_0. In combination with (6.10) (and the fact that R(0, ỹ_0) − R(0, y_0) < 0 if y_0 is sufficiently large), this will finish the proof.
Consider two orbits (whose coordinates we denote (x, y) and (X, Y)) that intersect the x-axis in (β, ∞) and contain selected points as shown in the following diagram.

[Figure: two nested orbit segments. The inner orbit runs from (0, y_0) through (β, y_β) and (β, ỹ_β) to (0, ỹ_0); the outer orbit runs from (0, Y_0) through (β, Y_β), (λ, y_β), (µ, ỹ_β), and (β, Ỹ_β) to (0, Ỹ_0).]

Note that

R(0, Ỹ_0) − R(0, Y_0) = [R(0, Ỹ_0) − R(β, Ỹ_β)]
                      + [R(β, Ỹ_β) − R(µ, ỹ_β)]
                      + [R(µ, ỹ_β) − R(λ, y_β)]
                      + [R(λ, y_β) − R(β, Y_β)]
                      + [R(β, Y_β) − R(0, Y_0)]
                      =: ∆_1 + ∆_2 + ∆_3 + ∆_4 + ∆_5.    (6.11)

Let X(Y) and x(y) give, respectively, the first coordinate of a point on the outer and inner orbit segments as a function of the second coordinate. Similarly, let Y(X) and y(x) give the second coordinates as functions of the first coordinates (on the segments where that's possible). Estimating, we find that

∆_1 = ∫_β^0 (−XF(X)/(Y(X) − F(X))) dX < ∫_β^0 (−xF(x)/(y(x) − F(x))) dx = R(0, ỹ_0) − R(β, ỹ_β),    (6.12)

∆_2 = ∫_{ỹ_β}^{Ỹ_β} F(X(Y)) dY < 0,    (6.13)

∆_3 = ∫_{y_β}^{ỹ_β} F(X(Y)) dY < ∫_{y_β}^{ỹ_β} F(x(y)) dy = R(β, ỹ_β) − R(β, y_β),    (6.14)
∆_4 = ∫_{Y_β}^{y_β} F(X(Y)) dY < 0,    (6.15)

and

∆_5 = ∫_0^β (−XF(X)/(Y(X) − F(X))) dX < ∫_0^β (−xF(x)/(y(x) − F(x))) dx = R(β, y_β) − R(0, y_0).    (6.16)
By plugging (6.12), (6.13), (6.14), (6.15), and (6.16) into (6.11), we see that

R(0, Ỹ_0) − R(0, Y_0) < [R(0, ỹ_0) − R(β, ỹ_β)] + 0
                      + [R(β, ỹ_β) − R(β, y_β)] + 0
                      + [R(β, y_β) − R(0, y_0)]
                      = R(0, ỹ_0) − R(0, y_0).

This gives the claimed monotonicity and completes the proof.