
Dynamical Systems

Class Lecture Notes

Dynamical Systems
Math 637
A gentle introduction to dynamical systems and flows

R. Vandervorst ∗

May 5, 2021

VU Lecture Notes

∗ For exercises and examples, see L. Perko [1].


Dynamical Systems

Disclaimer
You can edit this page to suit your needs. For instance, here we have a no copyright state-
ment, a colophon and some other information. This page is based on the corresponding
page of Ken Arroyo Ohori’s thesis, with minimal changes.
No copyright
This book is released into the public domain using the CC0 code. To the extent
possible under law, I waive all copyright and related or neighbouring rights to this work.
To view a copy of the CC0 code, visit:
http://creativecommons.org/publicdomain/zero/1.0/

Colophon
This document was typeset with the help of KOMA-Script and LATEX using the kaobook
class.
The source code of this book is available at:
https://github.com/fmarotta/kaobook
(You are welcome to contribute!)
Publisher
First printed in May 2019 by VU Lecture Notes
I believe in intuitions and inspirations. I sometimes feel that
I am right. I do not know that I am.

– Albert Einstein
Contents

Contents v

Class material 1
1 Linear systems
Lectures 1 and 2 2
1.1 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Exponentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Classifying eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Inhomogeneous equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 The existence and uniqueness theorem


Lecture 3 8
2.1 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Continuous dependence on initial values . . . . . . . . . . . . . . . . . . . 12

3 Flows, invariance and linearization


Lecture 4 14
3.1 Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Invariant sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.3 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4 The (un)stable manifold theorem


Lectures 5 and 6 18
4.1 Local stable and unstable manifolds . . . . . . . . . . . . . . . . . . . . . . 18
4.2 Center manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.3 The Grobman-Hartman theorem . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Hamiltonian and gradient systems . . . . . . . . . . . . . . . . . . . . . . . 24

5 Limit sets
Lectures 7 and 8 27
5.1 Global flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 𝜔 -limit sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3 Attractors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6 Poincaré sections
Lecture 8 33
6.1 Poincaré maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2 The flow box theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

7 The Poincaré-Bendixson Theorem


Lectures 9 and 10 36
7.1 Three formulations of the Poincaré-Bendixson Theorem . . . . . . . . . . 36
7.2 Proof of the Poincaré-Bendixson Theorem . . . . . . . . . . . . . . . . . . 37

8 Index theory
Lecture 11 40
8.1 Winding numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
8.2 The index of a regular Jordan curve . . . . . . . . . . . . . . . . . . . . . . 42
8.3 The fixed point index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
8.4 * Homotopy properties of the index . . . . . . . . . . . . . . . . . . . . . . 46

9 Compactification
Lectures 12 and 13 47
9.1 Stereographic and central projections . . . . . . . . . . . . . . . . . . . . . 47
9.2 Extending the vector field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A 1-dimensional system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A planar system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A 3-dimensional vector field . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
9.4 Blowing up fixed points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

10 Bifurcations
Lectures 14 and 15 57

Appendix 58
A Elementary calculus 59
A.1 Implicit Function Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

B Elementary topology 60

Bibliography 61

Notation 62

Alphabetical Index 63
Class material

1 Linear systems
Lectures 1 and 2

In this chapter we consider linear systems of differential equations with constant coefficients. We describe a systematic approach to determine the general solution. Most of this chapter is dedicated to the linear algebra of exponentials of square matrices.

1.1 Diagonalization

Consider a system of two linear differential equations of the form:

    ẋ = ax + by
    ẏ = cx + dy

Shorthand notation is given by ẋ = Ax, where x = (x, y) ∈ ℝ^2 and A is the 2 × 2 matrix given by

    A = ( a  b
          c  d ).

In the case that b = c = 0 the system decouples and the general solution is given by x(t) = (x_0 e^{at}, y_0 e^{dt}). In general a system is not decoupled, and we describe a method to determine the general solution.
A linear system of differential equations in n variables x = (x_1, ..., x_n) ∈ ℝ^n is given as follows:

    ẋ = Ax,   A = ( a_11 ... a_1n
                     ...       ...
                    a_n1 ... a_nn ).   (1.1)

The first step in solving System (1.1) is to determine the eigenvalues of A, i.e. solve the equation det(A − λI) = 0.¹ Consider the case that all eigenvalues are real and distinct: λ_1 < ... < λ_n, λ_i ∈ ℝ. Next compute the associated eigenvectors v_1, ..., v_n. These satisfy A v_i = λ_i v_i. In matrix form we have:

    AE = ED,  with E = (v_1 ... v_n) and D = diag(λ_1, ..., λ_n).

[1: The determinant gives the characteristic equation in λ.]

The eigenvectors form a basis for ℝ^n and therefore form linearly independent columns of the square matrix E, which is therefore invertible. This yields the following matrix decompositions:

    A = E D E^{-1},  and  D = E^{-1} A E.   (1.2)



Define y = E^{-1} x. The vector y gives the coordinates of x expressed in the new basis {v_1, ..., v_n}. Then,

    ẏ = E^{-1} ẋ = E^{-1} A x = E^{-1} A E y = D y,

which reveals the linear system ẏ = Dy with diagonal matrix D. This system is decoupled and each equation ẏ_i = λ_i y_i can be solved explicitly: y_i(t) = y_i(0) e^{λ_i t}. We can write

    y(t) = D(t) y(0),  with D(t) = diag(e^{λ_1 t}, ..., e^{λ_n t}).

We can now transform back to the vector x:

    x(t) = E y(t) = E D(t) y(0) = E D(t) E^{-1} x(0).

Remark 1.1.1 The above expression gives the general solution with the initial vector x(0) as input. If we regard y(0) as an arbitrary vector y(0) = (c_1, ..., c_n), then the general solution can be written as:

    x(t) = c_1 v_1 e^{λ_1 t} + ... + c_n v_n e^{λ_n t},

which suffices in some cases.

For examples of computing x(t), see [1], pp. 7-8. Unfortunately, square matrices A are not always diagonalizable. In the next section we begin a systematic approach that also deals with non-diagonalizable matrices.
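The recipe x(t) = E D(t) E^{-1} x(0) is easy to sketch numerically. Below is a minimal pure-Python illustration on a hypothetical 2 × 2 example (not taken from the notes): A = [[1, 2], [2, 1]] has eigenvalues 3 and −1 with eigenvectors (1, 1) and (1, −1).

```python
import math

# Hypothetical example (not from the notes): A = [[1, 2], [2, 1]] has
# eigenvalues 3 and -1 with eigenvectors (1, 1) and (1, -1).
lam = [3.0, -1.0]
E = [[1.0, 1.0], [1.0, -1.0]]        # columns are the eigenvectors v1, v2
E_inv = [[0.5, 0.5], [0.5, -0.5]]    # inverse of E, computed by hand

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def solve(t, x0):
    """General solution x(t) = E D(t) E^{-1} x(0) with D(t) = diag(e^{lam_i t})."""
    y0 = matvec(E_inv, x0)                               # coordinates in eigenbasis
    yt = [math.exp(lam[i] * t) * y0[i] for i in range(2)]
    return matvec(E, yt)                                 # transform back

print(solve(0.0, [2.0, 0.0]))   # at t = 0 the initial value is returned: [2.0, 0.0]
```

A quick consistency check is that the numerical derivative of solve(t, x0) agrees with A x(t).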

1.2 Exponentials

For a linear differential equation in dimension one the solution can be obtained via integration:

    ẋ = ax,  a ∈ ℝ.

The solution is given by x(t) = e^{ta} x(0).² For a linear system of differential equations in n variables x = (x_1, ..., x_n) ∈ ℝ^n,

    ẋ = Ax,   A = ( a_11 ... a_1n
                     ...       ...
                    a_n1 ... a_nn ),

we can postulate a similar formula. If in the one-dimensional case we regard a as a 1 × 1 matrix A = (a), then x(t) = e^{tA} x(0). However, if A is an n × n matrix we need to know whether e^{tA} is a well-defined matrix.³

[2: Separation of variables and integrating both sides.]
[3: An informal Taylor expansion gives x(t) = Σ_{k=0}^∞ (t^k/k!) x^{(k)}(0). Moreover, x^{(k)}(0) = A^k x(0). Upon substitution this yields x(t) = Σ_{k=0}^∞ (t^k/k!) A^k x(0).]

Definition 1.2.1 Let A be a real n × n matrix. Define the exponential of A as:

    e^A x := Σ_{k=0}^∞ (1/k!) A^k x.   (1.3)

The next step is to argue that e^A is a well-defined matrix. For the vector-valued series we have that

    ‖ Σ_{k=m+1}^n (1/k!) A^k x ‖ ≤ Σ_{k=m+1}^n (1/k!) ‖A^k x‖ ≤ Σ_{k=m+1}^n (‖A‖^k/k!) ‖x‖ → 0,  as n, m → ∞,

since ‖A‖ ≤ c. If we set⁴ y_n = Σ_{k=0}^n (1/k!) A^k x, then {y_n} is a Cauchy sequence by the above estimate: ‖y_n − y_m‖ → 0 as n, m → ∞. The limit is denoted e^A x. Furthermore, x ↦ e^A x is a linear map and

    e^A A x = A e^A x.   (1.4)

[4: ‖A‖ is the matrix norm defined as ‖A‖ = max_{x≠0} ‖Ax‖/‖x‖, and ‖Ax‖ ≤ ‖A‖‖x‖ for all x.]

We conclude that e^{tA} x(0) is well-defined for all t ∈ ℝ and all x(0) ∈ ℝ^n. We now propose x(t) = e^{tA} x(0) as a solution of the system ẋ = Ax. The vector-valued power series e^{tA} x(0) is absolutely convergent and therefore differentiable in t.

Lemma 1.2.1

    (d/dt) e^{tA} x(0) = A e^{tA} x(0),

for all t ∈ ℝ and all x(0) ∈ ℝ^n.

Proof. Since e^{tA} x(0) is differentiable in t we have that

    (d/dt) e^{tA} x(0) = Σ_{k=1}^∞ (t^{k-1}/(k-1)!) A^k x(0) = Σ_{k=1}^∞ (t^{k-1}/(k-1)!) A^{k-1} A x(0) = e^{tA} A x(0).

By the commutativity in (1.4) we then have e^{tA} A x(0) = A e^{tA} x(0), which completes the proof.

We have established x(t) = e^{tA} x(0) as a solution of the linear system in (1.1). This raises two questions: (i) is it the only solution, and (ii) how do we compute e^{tA}? The first question will be answered in the next chapter, where we prove that solutions to initial value problems are unique; the second question is considered next.
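The series (1.3) can also be summed directly. A sketch using the standard rotation example (an assumption of this illustration, not an example from the notes): for tA = [[0, −t], [t, 0]] the exponential e^{tA} is rotation by the angle t, so the partial sums applied to x = (1, 0) approach (cos t, sin t).

```python
import math

def expA_times_x(A, x, n_terms=30):
    """Partial sums of e^A x = sum_k A^k x / k! (Definition 1.2.1)."""
    term = list(x)          # A^0 x / 0!
    total = list(x)
    for k in range(1, n_terms):
        # term <- A * term / k, so after k steps term == A^k x / k!
        term = [sum(A[i][j] * term[j] for j in range(len(x))) for i in range(len(x))]
        term = [v / k for v in term]
        total = [s + v for s, v in zip(total, term)]
    return total

# Rotation generator (standard example, assumed here): this matrix is tA with t = 1.
t = 1.0
A = [[0.0, -t], [t, 0.0]]
x = expA_times_x(A, [1.0, 0.0])
print(x)   # approximately [cos(1), sin(1)]
```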

Algorithm for computing e^{tA}

Let λ_1, ..., λ_n be the eigenvalues of A (repeated according to multiplicity). Determine the following quantities:

▸ let a_1(t) := e^{λ_1 t} and

    a_k(t) := ∫_0^t e^{λ_k (t−s)} a_{k−1}(s) ds;

▸ let A_1 = I and

    A_k = (A − λ_{k−1} I) A_{k−1}.

Then,

    e^{tA} = a_1(t) A_1 + ... + a_n(t) A_n.   (1.5)

We now show why this algorithm indeed establishes the exponential e^{tA}. Starting from the expression for a_k we observe that

    ȧ_k(t) = a_{k−1}(t) + λ_k a_k(t).



Consider the expression (1.5) for e^{tA}, differentiate with respect to t and substitute the above expression for ȧ_k:⁵

    (d/dt) Σ_{k=1}^n a_k A_k = Σ_{k=1}^n ȧ_k A_k = Σ_{k=1}^n λ_k a_k A_k + Σ_{k=1}^n a_{k−1} A_k.   (1.6)

[5: Use a_0(t) = 0.]

From the definition of A_k we have that

    a_{k−1} A_k = a_{k−1} (A − λ_{k−1} I) A_{k−1} = a_{k−1} A A_{k−1} − λ_{k−1} a_{k−1} A_{k−1}.

Substitute the latter into (1.6):

    (d/dt) Σ_{k=1}^n a_k A_k = Σ_{k=1}^n λ_k a_k A_k + A Σ_{k=1}^n a_{k−1} A_{k−1} − Σ_{k=1}^n λ_{k−1} a_{k−1} A_{k−1}
                             = Σ_{k=1}^n λ_k a_k A_k + A Σ_{k=1}^{n−1} a_k A_k − Σ_{k=1}^{n−1} λ_k a_k A_k
                             = λ_n a_n A_n + A Σ_{k=1}^{n−1} a_k A_k
                             = λ_n a_n A_n − a_n A A_n + A Σ_{k=1}^n a_k A_k = A Σ_{k=1}^n a_k A_k,   (1.7)

where we use the fact that (λ_n I − A) A_n = 0. This follows from the Cayley-Hamilton theorem.⁶ Indeed,

    (A − λ_n I) A_n = (A − λ_n I)(A − λ_{n−1} I) A_{n−1} = ... = p(A) A_1 = 0.

[6: Define p(x) = Π_{k=1}^n (x − λ_k). Then p(A) = 0.]

The expression (1.7) shows that Σ_k a_k A_k x(0) satisfies the equation, i.e.

    (d/dt) Σ_{k=1}^n a_k A_k x(0) = A Σ_{k=1}^n a_k A_k x(0).

If we combine this with the uniqueness result in the next chapter we may conclude that indeed

    e^{tA} = Σ_{k=1}^n a_k A_k.

1.3 Classifying eigenvalues

If the matrix A is invertible then the eigenvalues have the property that λ ≠ 0. In this case x* = 0 is the only zero of the right-hand side of ẋ = Ax. The point x* = 0 is called an equilibrium point for the system. Only when det A = 0 are there more equilibrium points. For equilibrium points we use the following classification based on the real part Re λ of the eigenvalues:

Classification of eigenvalues

▸ Stable eigenvalue: Re λ < 0;
▸ Center eigenvalue: Re λ = 0;
▸ Unstable eigenvalue: Re λ > 0.

We will use this characterization of eigenvalues at a later stage to classify equilibrium points. The eigenvalues also account for invariant subspaces, as explained in the examples: the stable (linear) subspace, the unstable subspace and the center subspace. In the case that all eigenvalues are real and distinct these spaces are easily recognizable. Order the eigenvalues as:

    λ_1 < λ_2 < ... < λ_k ≤ 0 < λ_{k+1} < ... < λ_n,

which leaves the possibility of λ_k being zero or negative. Let v_1, ..., v_n be the associated eigenvectors. If λ_k = 0 we define

    E^s = span{v_1, ..., v_{k−1}};
    E^c = span{v_k};
    E^u = span{v_{k+1}, ..., v_n}.

These subspaces are invariant, i.e. if an initial value x(0) is contained in such a linear subspace, then e^{tA} x(0) is in the same subspace for all t ∈ ℝ. Moreover,⁷

    ℝ^n = E^s ⊕ E^c ⊕ E^u.

[7: These properties follow from Remark 1.1.1.]

When λ_k < 0 the center subspace is trivial. This definition is more involved in the case that A is not diagonalizable. In order to find the stable, unstable and center subspaces in general we consider the following example. For the matrix

    A = (  1   0   0
           1   1   0
          −1  −1  −2 ),

we consider the differential equation ẋ = Ax. Direct computation shows that⁸

    e^{tA} = (  e^t                          0              0
                t e^t                        e^t            0
                t e^t + 2e^t − 2e^{−2t}      e^t − e^{−2t}  e^{−2t} ).

[8: See examples in class.]

We split the matrix into a sum with equal exponential growth:

    e^{tA} = e^t A_1 + e^{−2t} A_2,  with

    A_1 = (  1    0  0           A_2 = (  0   0  0
             t    1  0                    0   0  0
            t+2   1  0 ),                −2  −1  1 ).

The eigenvalues λ_1 = λ_2 = 1 have one eigenvector v_1 = (0, 1, 1). The eigenvalue λ_3 = −2 has eigenvector v_3 = (0, 0, 1). Note that ker A_1 = {(0, 0, z) | z ∈ ℝ} and ker A_2 = {(x, y, z) | 2x + y − z = 0}. We conclude that E^s = span{v_3} and E^u = ker A_2 = span{v_1, v_2}, where v_2 = (1, 0, 2).

1.4 Inhomogeneous equations

Consider the inhomogeneous system of differential equations:

    ẋ = Ax + f,

where f = f(t) = (f_1(t), ..., f_n(t)). For a given initial value x(0) we can represent the solution with the following formula.

Theorem 1.4.1 The solution for initial value x(0) is given by

    x(t) = e^{tA} x(0) + ∫_0^t e^{(t−s)A} f(s) ds,

which is referred to as the 'variation of constants' formula.

 
Proof. Consider the expression (d/dt)(e^{−tA} x(t)). Upon differentiating, substituting the equation and using Property (1.4) we obtain:

    (d/dt)(e^{−tA} x(t)) = −A e^{−tA} x(t) + e^{−tA} ẋ(t)
                         = −A e^{−tA} x(t) + e^{−tA} A x(t) + e^{−tA} f(t) = e^{−tA} f(t).

Integrating both sides yields

    e^{−tA} x(t) − x(0) = ∫_0^t e^{−sA} f(s) ds,

and multiplying by e^{tA} completes the proof.
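The variation of constants formula can also be evaluated numerically when the integral has no closed form. A sketch on a scalar toy problem (an assumption of this example, not from the notes): ẋ = −x + 1 with x(0) = 0 has the exact solution x(t) = 1 − e^{−t}, which the formula reproduces when the integral is approximated by the trapezoid rule.

```python
import math

def var_of_constants(a, f, x0, t, n=2000):
    """Scalar variation of constants: x(t) = e^{ta} x0 + int_0^t e^{(t-s)a} f(s) ds,
    with the integral approximated by the trapezoid rule."""
    h = t / n
    integrand = lambda s: math.exp((t - s) * a) * f(s)
    integral = (integrand(0.0) + integrand(t)) / 2.0
    integral += sum(integrand(i * h) for i in range(1, n))
    integral *= h
    return math.exp(t * a) * x0 + integral

# Toy problem (assumed, not from the notes): dx/dt = -x + 1, x(0) = 0.
t = 2.0
approx = var_of_constants(-1.0, lambda s: 1.0, 0.0, t)
exact = 1.0 - math.exp(-t)
print(approx, exact)   # the two values agree to quadrature accuracy
```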


2 The existence and uniqueness theorem
Lecture 3

In this chapter we start off with the fundamental problem of the theory of dynamical systems and flows. Let f : ℝ^n → ℝ^n be a vector function which is continuously differentiable (class C^1), and consider the differential equation

    ẋ = f(x),  x(0) = x_0 ∈ ℝ^n.   (2.1)

We refer to f as a vector field on ℝ^n. The obvious question to ask is: does a solution exist, and is the solution unique? The question of existence was answered in Chapter 1 for linear systems. In this chapter we investigate the general case.

The fundamental existence and uniqueness theorem for (2.1) addresses the question of short-time existence and uniqueness. For simplicity of exposition we formulate the theorem for globally defined functions f.¹

[1: Notation for a continuously differentiable function on ℝ^n is: f ∈ C^1(ℝ^n; ℝ^n).]

Theorem 2.0.1 Let f ∈ C^1(ℝ^n; ℝ^n) and let x_0 ∈ ℝ^n. Then there exists a real τ = τ(x_0) > 0 and a unique C^1-function x : [−τ, τ] → ℝ^n which satisfies the system of differential equations in (2.1).

2.1 Existence

We prove Theorem 2.0.1 via Picard iteration.² To set up the problem for Picard iteration we first reformulate the equation. Let x : I ⊂ ℝ → ℝ^n be a solution of (2.1). Upon integration of the equation we obtain:

    x(t) = x_0 + ∫_0^t f(x(s)) ds.   (2.2)

[2: We follow the approach in [2].]

With Picard iteration the idea is to start with the function x_0(t) = x_0 and compute the next function via the scheme

    x_{k+1}(t) = x_0 + ∫_0^t f(x_k(s)) ds.   (2.3)

The idea is to construct a sequence of functions x_k : I → ℝ^n which 'converge' to a limit function x(t) that satisfies the integral equation in (2.2). Such a function x(t) is also a solution of (2.1).
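The scheme (2.3) can be carried out exactly for ẋ = x with x_0 = 1, because every iterate is a polynomial: integrating c t^k gives c t^{k+1}/(k+1), and the k-th iterate is the k-th Taylor partial sum of e^t. A small sketch (a standard illustration, not from the notes):

```python
import math

def picard_polys(n_iter):
    """Picard iterates for dx/dt = x, x(0) = 1, as polynomial coefficient lists.
    Each iterate x_{k+1}(t) = 1 + int_0^t x_k(s) ds is a Taylor partial sum of e^t."""
    coeffs = [1.0]                       # x_0(t) = 1
    for _ in range(n_iter):
        integ = [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]   # antiderivative
        coeffs = [1.0] + integ[1:]       # x_{k+1}(t) = 1 + integral
    return coeffs

def poly_eval(coeffs, t):
    return sum(c * t ** k for k, c in enumerate(coeffs))

x5 = picard_polys(5)
print(x5)                    # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120
print(poly_eval(x5, 1.0))    # partial sum, approaching e = 2.71828...
```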
Before we start the proof we show that a C^1-function is locally Lipschitz.³

[3: A function f : ℝ^n → ℝ^n is locally Lipschitz if for every ball B_ε(x_0) there exists a constant K = K(x_0, ε) such that ‖f(x) − f(y)‖ ≤ K‖x − y‖ for all x, y ∈ B_ε(x_0).]

Lemma 2.1.1 A C^1-function f : ℝ^n → ℝ^n is locally Lipschitz.

Proof. From Taylor's formula we have that

    f(x + h) − f(x) = R_0(h, θ),

where R_0(h, θ) = Df(x + θh)h for some θ ∈ (0, 1). Fix x_0 ∈ ℝ^n and define⁴

    K = max_{x ∈ B̄_ε(x_0)} ‖Df(x)‖.   (2.4)

[4: Define B_ε(x_0) = {x | ‖x − x_0‖ < ε} and B̄_ε(x_0) = {x | ‖x − x_0‖ ≤ ε}.]

Let x, y ∈ B_ε(x_0) with x + h = y. Since B_ε(x_0) is convex we also have that x + θh ∈ B_ε(x_0) for all θ ∈ (0, 1). Consequently,

    ‖f(x) − f(y)‖ = ‖Df(x + θh)(y − x)‖ ≤ K‖x − y‖,

for all x, y ∈ B_ε(x_0), which proves the lemma.

Proof of Theorem 2.0.1 (Existence part). Choose a ball B_ε(x_0), which by Lemma 2.1.1 yields a constant K = K(x_0, ε) > 0 such that ‖f(x) − f(y)‖ ≤ K‖x − y‖ for all x, y ∈ B_ε(x_0).⁵ Define

    M = max_{x ∈ B̄_ε(x_0)} ‖f(x)‖.

[5: The constant ε > 0 is fixed for the remainder of the proof.]

Start with x_0(t) = x_0 and suppose the iteration scheme yields a continuous function x_k(t) such that

    x_k(t) ∈ B_ε(x_0),  ∀t ∈ [−τ, τ],

with⁶

    0 < τ < min{ε/M, 1/K}.

For all practical purposes we choose τ = τ(x_0) := min{ε/M, 1/K}/2.

[6: We make this a priori choice, which is motivated in the remainder of the proof.]

The next iterate x_{k+1}(t) is then also defined by the continuity of f, and⁷ ⁸

    ‖x_{k+1}(t) − x_0‖ ≤ ∫_0^t ‖f(x_k(s))‖ ds ≤ Mτ < ε,

for all t ∈ [−τ, τ]. For the iteration scheme we may therefore assume that

    x_k(t) ∈ B_ε(x_0),  ∀t ∈ [−τ, τ],  ∀k ∈ ℕ.   (2.5)

[7: Uses the condition on τ.]
[8: If K = M = 0, then τ can be any positive number. Indeed, B_ε(x_0) consists of zeroes of f, which are constant solutions that exist for all time t. We therefore assume without loss of generality that K ≠ 0 and M ≠ 0.]


The next step is to prove that the sequence converges. Observe that

    ‖x_2(t) − x_1(t)‖ ≤ ∫_0^t ‖f(x_1(s)) − f(x_0(s))‖ ds ≤ Kτ max_{s∈[0,t]} ‖x_1(s) − x_0‖ < Kτε,

by (2.5) for k = 1. Suppose ‖x_k(t) − x_{k−1}(t)‖ ≤ (Kτ)^{k−1} ε. Then,

    ‖x_{k+1}(t) − x_k(t)‖ ≤ ∫_0^t ‖f(x_k(s)) − f(x_{k−1}(s))‖ ds ≤ Kτ max_{s∈[0,t]} ‖x_k(s) − x_{k−1}(s)‖ < (Kτ)^k ε,

and thus

    ‖x_{k+1}(t) − x_k(t)‖ ≤ γ^k ε,  ∀t ∈ [−τ, τ],  ∀k ∈ ℕ,   (2.6)

where γ = Kτ < 1 by the assumption on τ.⁹ Consider integers m > m_0. Then, using the triangle inequality, we have

    ‖x_m(t) − x_{m_0}(t)‖ ≤ Σ_{k=m_0}^{m−1} ‖x_{k+1}(t) − x_k(t)‖ ≤ ε Σ_{k=m_0}^{m−1} γ^k ≤ ε Σ_{k=m_0}^∞ γ^k = ε γ^{m_0}/(1 − γ) → 0,

as m_0 → ∞, and thus ‖x_m − x_{m_0}‖_{C^0} → 0 as m, m_0 → ∞. This proves that {x_k(t)} is a Cauchy sequence in the Banach space C^0([−τ, τ]; ℝ^n)¹⁰ of continuous functions on the compact interval [−τ, τ]. Since C^0([−τ, τ]; ℝ^n) is a Banach space there exists a limit

    x(t) = lim_{k→∞} x_k(t),

and the convergence is uniform¹¹ on [−τ, τ].

[9: Uses the condition on τ.]
[10: The space of continuous functions on the compact interval [−τ, τ] is a Banach space with norm ‖x‖_{C^0} := max_{t∈[−τ,τ]} ‖x(t)‖. This makes C^0([−τ, τ]; ℝ^n) a normed linear space. Since the interval [−τ, τ] is compact this space is complete, i.e. every Cauchy sequence has a (unique) limit in C^0([−τ, τ]; ℝ^n).]
[11: This implies that for every ε > 0 there exists a k_ε > 0 such that ‖x_k(t) − x(t)‖ < ε for all k ≥ k_ε and all t ∈ [−τ, τ].]

We can now take the limit in (2.3):

    x(t) = lim_{k→∞} x_{k+1}(t) = x_0 + lim_{k→∞} ∫_0^t f(x_k(s)) ds.

Since x_k → x uniformly on [0, t] for t ∈ [−τ, τ], and since f is locally Lipschitz continuous, we have that f(x_k(s)) → f(x(s)) uniformly on [0, t].¹² This allows us to interchange limit and integral:

    lim_{k→∞} ∫_0^t f(x_k(s)) ds = ∫_0^t lim_{k→∞} f(x_k(s)) ds = ∫_0^t f(x(s)) ds.

[12: Indeed, x_k(t) ∈ B_ε(x_0) and f is locally Lipschitz continuous with Lipschitz constant K on B_ε(x_0), and thus ‖f(x_k(t)) − f(x(t))‖ ≤ K‖x_k(t) − x(t)‖, which proves uniform convergence of f(x_k(t)) with limit f(x(t)).]

We conclude that the limit function x(t) satisfies Equation (2.2), and clearly ẋ(t) = f(x(t)) with x(0) = x_0: a local solution for the initial value problem. This completes the existence part of the proof.

The above proof only deals with the existence of a time interval for a given initial point x_0. However, the proof also reveals how the constant τ = τ(x_0) behaves for x̃ in a neighborhood of x_0. Indeed, if we consider balls B_{ε/2}(x̃) with x̃ ∈ B_{ε/2}(x_0), then B_{ε/2}(x̃) ⊂ B_ε(x_0) for all x̃ ∈ B_{ε/2}(x_0) by the triangle inequality. If we repeat the proof for x̃ as initial point we obtain

    K̃ = max_{x ∈ B̄_{ε/2}(x̃)} ‖Df(x)‖ ≤ max_{x ∈ B̄_ε(x_0)} ‖Df(x)‖ = K,

and similarly

    M̃ = max_{x ∈ B̄_{ε/2}(x̃)} ‖f(x)‖ ≤ max_{x ∈ B̄_ε(x_0)} ‖f(x)‖ = M.

For the associated constant τ(x̃) we choose

    τ(x̃) = min{ε/(4M̃), 1/K̃} ≥ min{ε/(4M), 1/K} ≥ τ(x_0)/2,

which implies that a unique solution

    x : [−τ/2, τ/2] → ℝ^n,   (2.7)

to the initial value problem exists for all initial values x̃(0) = x̃ ∈ B_{ε/2}(x_0). This establishes a uniform time interval for all x̃ ∈ B_{ε/2}(x_0).

Remark 2.1.1 Since Equation (2.1) is invariant under translation in time, i.e. t ↦ t + t_0, the theorem also applies to ẋ = f(x) with x(t_0) = x_0.

A consequence of the proof above is that the function x is C^1 and that the x_k converge to x in C^1([−τ, τ]; ℝ^n).

Remark 2.1.2 We proved the existence of local solutions for C^1 vector fields f defined on all of ℝ^n. The proof can easily be adjusted to vector fields defined on an open subset of ℝ^n.

The existence proof given above also works for vector fields that are locally Lipschitz, which is a larger class of functions than C^1 vector fields. We can even prove a theorem for continuous vector fields.

Remark 2.1.3 (Peano's theorem) The existence result for differential equations can also be obtained by assuming only that the vector field is continuous. The above proof does not work without adjustments. In the continuous case we do not have uniqueness of solutions in general.

2.2 Uniqueness

Consider the differential equation ẋ = √x, x ≥ 0, with x(0) = 0. Direct verification shows that

    x(t) = { 0                 for t ≤ t_0,
           { (t − t_0)^2 / 4   for t ≥ t_0,

are solutions of the initial value problem for all t_0 ≥ 0. This gives infinitely many solutions. Note that the vector field f(x) = √x is not differentiable at x = 0. The vector field is continuous, however.
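The two branches can be checked numerically: away from the switching point, the numerical derivative of x(t) = (t − t_0)^2/4 agrees with √(x(t)), so each choice of t_0 ≥ 0 really does give a solution (a quick sanity check, not part of the notes):

```python
import math

# Check that x(t) = (t - t0)^2 / 4 for t >= t0 (and x = 0 before t0)
# satisfies dx/dt = sqrt(x), illustrating the loss of uniqueness at x = 0.
def x_sol(t, t0):
    return 0.0 if t <= t0 else (t - t0) ** 2 / 4.0

t0, t, h = 1.0, 3.0, 1e-6
deriv = (x_sol(t + h, t0) - x_sol(t - h, t0)) / (2 * h)   # central difference
print(deriv, math.sqrt(x_sol(t, t0)))   # both approximately 1.0
```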

Proof of Theorem 2.0.1 (Uniqueness part). Suppose there exist two solutions x, y : [−τ, τ] → ℝ^n satisfying (2.1) with x(0) = y(0) = x_0. For the difference we have, using the representation in (2.2):

    ‖x(t) − y(t)‖ ≤ ∫_0^t ‖f(x(s)) − f(y(s))‖ ds ≤ Kτ max_{s∈[0,t]} ‖x(s) − y(s)‖ ≤ Kτ ‖x − y‖_{C^0}.

Taking the maximum over t ∈ [−τ, τ] gives ‖x − y‖_{C^0} ≤ Kτ ‖x − y‖_{C^0} with Kτ < 1, which forces ‖x − y‖_{C^0} = 0, i.e. x(t) = y(t) for all t ∈ [−τ, τ].

The uniqueness follows from the local Lipschitz property of the vector field f. It is clear that such a condition is not satisfied in the example above. If we stay away from x = 0, the existence and uniqueness theorem again provides a unique solution.

The uniqueness part of Theorem 2.0.1 has an immediate consequence for solutions x(t) when regarded as curves in ℝ^n. Let x : I → ℝ^n and y : J → ℝ^n be solutions. Suppose that x(t_1) = y(t_2) and define x̃(t) := x(t + t_1) and ỹ(t) := y(t + t_2). Then, from the uniqueness part of Theorem 2.0.1 we conclude that z(t) := x̃(t) − ỹ(t) ≡ 0 in a neighborhood of t = 0. Since the function z is defined on the interval Ĩ ∩ J̃, where Ĩ and J̃ are the shifted time intervals, the conclusion holds on all of Ĩ ∩ J̃.

Two solutions x and y cross at x_0 = x(t_1) = y(t_2) if z(t) ≢ 0 on Ĩ ∩ J̃. From the above argument it follows that two solutions cannot cross. In particular, solutions cannot self-intersect. The latter leaves open the possibility of periodic solutions, i.e. x(t + T) = x(t) for all t ∈ ℝ and some period T > 0.

2.3 Continuous dependence on initial values

Dependence of solutions x of (2.1) on the initial value x_0 is very important for the theory of dynamical systems. The next theorem provides a crucial estimate to achieve continuous dependence.

Theorem 2.3.1 Consider the ball B_ε(x_0) and solutions y(t), ỹ(t) ∈ B_ε(x_0) for t ∈ [t_0, t_1]. Then,

    ‖y(t) − ỹ(t)‖ ≤ ‖y(t_0) − ỹ(t_0)‖ e^{K(t−t_0)},  ∀t ∈ [t_0, t_1],

where K is the Lipschitz constant of f on B_ε(x_0), cf. (2.4).

Before proving the theorem we start with an important lemma for comparing solutions.

Lemma 2.3.2 (Gronwall's inequality) Let z : [0, τ] → ℝ be a continuous function with z(t) ≥ 0 for all t ∈ [0, τ]. Suppose that

    z(t) ≤ c_0 + c_1 ∫_0^t z(s) ds,  ∀t ∈ [0, τ],

for some c_0, c_1 ≥ 0. Then z(t) ≤ c_0 e^{c_1 t} for all t ∈ [0, τ].

Proof. First consider the case c_0 > 0 and let w(t) = c_0 + c_1 ∫_0^t z(s) ds > 0. By assumption z(t) ≤ w(t) and ẇ(t) = c_1 z(t). This yields

    ẇ(t)/w(t) = c_1 z(t)/w(t) ≤ c_1,

and hence (d/dt) log w(t) ≤ c_1. Upon integration this gives

    log w(t) ≤ log w(0) + c_1 t.

Since w(0) = c_0 we have z(t) ≤ w(t) ≤ c_0 e^{c_1 t}. The case c_0 = 0 follows from a limit argument.
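A quick numerical sanity check (not part of the notes) that the bound is sharp: z(t) = c_0 e^{c_1 t} satisfies the integral relation with equality, so both sides of z(t) = c_0 + c_1 ∫_0^t z(s) ds agree.

```python
import math

# Toy verification that Gronwall's bound is attained by z(t) = c0 e^{c1 t}:
# the integral relation then holds with equality.
c0, c1, t, n = 2.0, 0.5, 1.5, 4000
h = t / n
z = lambda s: c0 * math.exp(c1 * s)
# Trapezoid approximation of int_0^t z(s) ds.
integral = h * ((z(0.0) + z(t)) / 2.0 + sum(z(i * h) for i in range(1, n)))
print(c0 + c1 * integral, z(t))   # the two sides agree to quadrature accuracy
```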

Proof of Theorem 2.3.1. As before,

    ‖y(t) − ỹ(t)‖ ≤ ‖y(t_0) − ỹ(t_0)‖ + ∫_{t_0}^t ‖f(y(s)) − f(ỹ(s))‖ ds
                  ≤ ‖y(t_0) − ỹ(t_0)‖ + ∫_{t_0}^t K ‖y(s) − ỹ(s)‖ ds.

Define z(t) := ‖y(t_0 + t) − ỹ(t_0 + t)‖. Then,

    z(t) ≤ z(0) + K ∫_0^t z(s) ds.

We now use Gronwall's inequality with z(t) ≥ 0, c_0 = z(0) and c_1 = K, which gives

    z(t) ≤ z(0) e^{Kt},  t ∈ [0, t_1 − t_0],

which proves the theorem.

We denote solutions of the initial value problem by x(t; x_0).

Theorem 2.3.3 The solution x(t; x_0) is a continuous function of t and x_0 on its domain of definition.

Proof. Let y(t) = x(t; x_0), with y(t_0) = x_0, and ỹ(t) = x(t; x_1), with ỹ(t_0) = x_1. By the continuity of x(t; x_0) as a function of t we have that for every ε > 0 there exists a δ¹_ε > 0 such that 0 < |t − t_0| < δ¹_ε implies ‖x(t; x_0) − x_0‖ < ε/2. Moreover, by Theorem 2.3.1 we have that

    ‖x(t; x_1) − x(t_0; x_0)‖ = ‖x(t; x_1) − x_0‖
                              ≤ ‖x(t; x_1) − x(t; x_0)‖ + ‖x(t; x_0) − x_0‖
                              ≤ ‖x_0 − x_1‖ e^{K(t−t_0)} + ‖x(t; x_0) − x_0‖
                              ≤ ‖x_1 − x_0‖ e^{Kδ¹_ε} + ε/2.

Choose δ_ε = min{ε e^{−Kδ¹_ε}/2, δ¹_ε}. Then 0 < ‖x_1 − x_0‖ + |t − t_0| < δ_ε implies ‖x_1 − x_0‖ e^{Kδ¹_ε} < ε/2 and thus ‖x(t; x_1) − x(t_0; x_0)‖ < ε, which proves that x(t; x_0) is a continuous function of the arguments t and x_0.


3 Flows, invariance and linearization
Lecture 4

Solutions of (autonomous) differential equations satisfy certain universal properties, such as the group property, which leads to the concept of a (local) flow. In this chapter we make a first start towards studying differential equations as abstract flows on topological spaces, in this case flows on ℝ^n or subsets of ℝ^n. We discuss a number of general principles and characteristics. This allows us to study (flow) dynamics from a more abstract point of view.

3.1 Flows

In the previous chapter we discussed the existence and uniqueness of the initial value problem

    ẋ = f(x),  x(0) = x_0 ∈ ℝ^n.

The solution function is given as x : [−τ, τ] × ℝ^n → ℝ^n, where τ = τ(x_0) > 0. The next lemma reveals an important property of solutions, which is due to the t-independent vector field f and the uniqueness of solutions.

Lemma 3.1.1 For t ∈ [−τ, τ] and s ∈ [−σ, σ], with τ = τ(x_0) > 0 and σ = σ(x(t; x_0)), it holds that

    x(s + t; x_0) = x(s; x(t; x_0)).   (3.1)

Proof. Let s > 0 and define the function

    y(r) = { x(r; x_0)             for −τ < r ≤ t < τ,
           { x(r − t; x(t; x_0))   for t ≤ r ≤ t + s, 0 < s < σ.

The function y(r) is a C^1-function on (−τ, t + s], s < σ, and satisfies the differential equation ẏ = f(y)¹ with y(0) = x_0. Since solutions are unique by Theorem 2.0.1 we conclude that

    x(t + s; x_0) = y(t + s) = x(s; x(t; x_0)),

which proves the statement for s > 0. For s = 0 the statement is trivial, and for s < 0 we argue in the same way.

[1: Here we use the fact that f does not depend on t explicitly.]

We refer to (3.1) as the (local) group property of the solution function.
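For the scalar linear flow φ_t(x) = e^{at} x, the solution of ẋ = ax, the group property can be checked directly (a toy illustration with an assumed value a = −0.3, not from the notes):

```python
import math

# Scalar linear flow phi_t(x) = e^{at} x solving dx/dt = a x.
# The group property phi_{s+t}(x) = phi_s(phi_t(x)) holds exactly.
a = -0.3
phi = lambda t, x: math.exp(a * t) * x

s, t, x0 = 0.4, 1.1, 5.0
print(phi(s + t, x0), phi(s, phi(t, x0)))   # equal up to rounding
```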

Remark 3.1.1 The fact that the group property extends the domain of definition of x at x_0 does not contradict the finite-time blow-up of, for example, the equation ẋ = x².

From this point on we denote the solution of (2.1) by

    φ_t(x_0) := x(t; x_0),  t ∈ [−τ, τ].

We refer to φ_t as the local flow of the vector field f. A local flow depends in a continuously differentiable way on t, i.e.

    t ↦ φ_t(x),  x ∈ ℝ^n,  t ∈ [−τ, τ],  τ = τ(x) > 0,

is a C^1-function of t, with x fixed. In particular φ_{−t} ∘ φ_t = φ_t ∘ φ_{−t} = id. If U ⊂ ℝ^n is a compact subset then φ_t : U ⊂ ℝ^n → ℝ^n is well-defined for all t ∈ [−τ, τ] for some τ = τ(U) > 0.² We regard φ_t : U ⊂ ℝ^n → ℝ^n, t ∈ [−τ, τ], as a 1-parameter family of continuous maps on U satisfying the group property:

    φ_{s+t}(x) = φ_s(φ_t(x)),  ∀x ∈ U.   (3.2)

[2: The uniformity of the constant τ(U) follows from the compactness of U. Consider an open covering with ε-balls given in Theorem 2.0.1. For each ε-ball centered at x there exists a constant τ(x) > 0 by (2.7). By the compactness of U there exists a finite subcovering, which implies the uniform constant τ.]

For t fixed the continuity in Theorem 2.3.3 implies that V = φ_t(U) is compact, and since ℝ^n is a Hausdorff space so is V.³ Bijectivity of φ_t : U → V follows from the group property, and therefore φ_t is a homeomorphism⁴ for fixed t.

[3: cf. [3, Thm. 17.11].]
[4: cf. [3, Thm. 26.6].]

The fact that flows are not defined globally in t ∈ ℝ is not much of a restriction. We will see later on that one can always study vector fields via a normalization that yields a flow defined for all t ∈ ℝ. Also, when we consider invariant sets the time variable will be global in most cases, as we will see in the next section.

If we consider a given local flow φ_t then there exists an associated vector field: f(x) = (d/dt) φ_t(x)|_{t=0}. The vector field is defined on all of ℝ^n and is the velocity field of the flow lines. A given C^1 vector field on ℝ^n yields a local flow on ℝ^n as described above. Conversely, a local flow φ_t : ℝ^n → ℝ^n with the smoothness properties described above yields a vector field, and its flow lines are solutions of the differential equation in (2.1).

3.2 Invariant sets

Sets of special interest in the study of dynamical systems are sets on


which one can restrict the dynamics. In this section we assume that
𝜙𝑡 : ℝ 𝑛 → ℝ 𝑛 is a local flow.

Definition 3.2.1 A subset 𝑆 ⊂ ℝ^𝑛 is called an invariant set for 𝜙𝑡 if for
every x ∈ 𝑆 there exists a 𝜏 = 𝜏(x) > 0 such that 𝜙𝑡 (x) ∈ 𝑆 for all
𝑡 ∈ [−𝜏, 𝜏]. If the above property holds for all 𝑡 ∈ [0, 𝜏] the set 𝑆 is
called forward invariant, and if the above property holds for all 𝑡 ∈ [−𝜏, 0]
the set 𝑆 is called backward invariant.

If we restrict the flow 𝜙𝑡 to 𝑆 we obtain a local flow defined on 𝑆 . Since 𝑆


is not necessarily compact such a restriction of 𝜙𝑡 may still not be a flow
defined for all 𝑡 ∈ ℝ.

Lemma 3.2.1 A compact subset 𝑆 ⊂ ℝ^𝑛 is invariant if and only if 𝜙𝑡 (𝑆) = 𝑆
for all 𝑡 ∈ ℝ.
3 Flows, invariance and linearization
Lecture 4

Proof. We start with the observation that there exists a 𝜏 > 0 such that
𝜙𝑡 |𝑆 is defined for all 𝑡 ∈ [−𝜏, 𝜏].5 Invariance of 𝑆 implies that 𝜙𝑡 (𝑆) ⊂ 𝑆
for all 𝑡 ∈ [−𝜏, 𝜏]. In particular,

𝜙_{𝑘𝑡} (𝑆) = 𝜙𝑡 ◦ · · · ◦ 𝜙𝑡 (𝑆) ⊂ 𝑆 (𝑘 times), 𝑘 ∈ ℤ,

which proves that 𝜙𝑡 (𝑆) ⊂ 𝑆 for all 𝑡 ∈ ℝ. Using the fact that 𝜙𝑡 is
invertible gives that 𝑆 ⊂ 𝜙_{−𝑡} (𝑆) for all 𝑡 ∈ ℝ and thus 𝜙𝑡 (𝑆) = 𝑆 for all
𝑡 ∈ ℝ. Conversely, if 𝜙𝑡 (𝑆) = 𝑆 for all 𝑡 ∈ ℝ, then for every x ∈ 𝑆 it holds
that 𝜙𝑡 (x) ∈ 𝑆 for all 𝑡 ∈ ℝ and thus for all 𝑡 ∈ [−𝜏, 𝜏], which proves
invariance.

5: Due to the compactness of 𝑆. Indeed, let {𝐵_𝜖 (x)}_{x∈𝑆} be an open covering for 𝑆 with 𝜖 > 0 fixed. By the compactness of 𝑆 there exists a finite sub-covering {𝐵_𝜖 (x_𝑘 )}_{𝑘=1}^{𝑛} of 𝑆 and a finite set of times 𝜏(x_𝑘 ) > 0. Choose 𝜏 = min_𝑘 𝜏(x_𝑘 ).

The same result can be proved for forward and backward invariant sets,
i.e. a compact set 𝑆 ⊂ ℝ 𝑛 is forward invariant if and only if 𝜙𝑡 (𝑆) ⊂ 𝑆 for
all 𝑡 ≥ 0 and a compact set 𝑆 ⊂ ℝ 𝑛 is backward invariant if and only if
𝜙𝑡 (𝑆) ⊂ 𝑆 for all 𝑡 ≤ 0.
The simplest example of an invariant set is an equilibrium point, i.e. a
point x∗ such that f(x∗ ) = 0. Equilibrium points are also referred to as
fixed points6 since they satisfy 𝜙𝑡 (x∗ ) = x∗ for all 𝑡 ∈ ℝ. The latter shows
that equilibrium points are invariant sets for the flow 𝜙𝑡 . An equilibrium
point is clearly a compact invariant set.

Another example of a compact invariant set is a periodic orbit.

6: We alternate between the terminology equilibrium point and fixed point.

Definition 3.2.2 An invariant set 𝛾 ⊂ ℝ^𝑛 is called a periodic orbit for 𝜙𝑡
if there exist a point x• ∈ ℝ^𝑛 and a time 𝑇 > 0a such that

𝛾 = {𝜙𝑡 (x• ) | 𝑡 ∈ [0, 𝑇]}, and

(i) 𝜙_𝑇 (x• ) = x• ;
(ii) 𝜙𝑡 (x• ) ≠ x• , ∀𝑡 ∈ (0, 𝑇).
The periodic orbit is also denoted by 𝛾(𝑡) = 𝜙𝑡 (x• ).
a: The number 𝑇 in this definition is called the (minimal) period of 𝛾 .

The invariance of 𝛾 is immediately clear from the definition. Indeed,
𝜙_{𝑡+𝑇} (x• ) = 𝜙𝑡 (x• ) for all 𝑡 ∈ ℝ and therefore 𝜙𝑡 (𝛾) = 𝛾 for all 𝑡 ∈ ℝ.7 A
point x• is called a periodic point if it is not an equilibrium point8 and there
exists a 𝑇 > 0 such that 𝜙_𝑇 (x• ) = x• . Since x• is not an equilibrium point
it follows that 𝜙𝑡 (x• ) ≠ x• for some 𝑡 . Moreover, 𝜙_{𝑡+𝑇} (x• ) = 𝜙𝑡 (𝜙_𝑇 (x• )) =
𝜙𝑡 (x• ) for all 𝑡 . By defining 𝑇0 = inf {𝑡 > 0 | 𝜙𝑡 (x• ) = x• } we obtain the
minimal period, which satisfies Defn. 3.2.2(ii) and shows that periodic points
generate periodic orbits. Conversely, every point x• ∈ 𝛾 of a periodic orbit
is a periodic point.

7: This definition excludes equilibrium points from being periodic orbits.
8: This implies that f(x• ) ≠ 0.
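A periodic orbit can be exhibited numerically. The sketch below is an illustrative example, not part of the notes: for the harmonic oscillator ẋ = y, ẏ = −x every non-zero point is a periodic point with minimal period 𝑇 = 2π, which we verify by integrating one full period with a classical Runge-Kutta scheme (the integrator is our own choice, not the notes').

```python
import math

# Illustrative example (not from the notes): harmonic oscillator x' = y,
# y' = -x. We check phi_T(x.) = x. for T = 2*pi and phi_{T/2}(x.) != x.

def f(v):
    x, y = v
    return (y, -x)

def rk4_step(v, h):
    k1 = f(v)
    k2 = f((v[0] + 0.5*h*k1[0], v[1] + 0.5*h*k1[1]))
    k3 = f((v[0] + 0.5*h*k2[0], v[1] + 0.5*h*k2[1]))
    k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
    return (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def flow(v, t, n=20000):
    """Approximate phi_t(v) with n RK4 steps."""
    h = t / n
    for _ in range(n):
        v = rk4_step(v, h)
    return v

v0 = (1.0, 0.0)
T = 2 * math.pi
vT = flow(v0, T)           # one full period: should return to v0
vhalf = flow(v0, T / 2)    # half a period: (-1, 0), so (ii) is not violated
period_err = math.hypot(vT[0] - v0[0], vT[1] - v0[1])
```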

3.3 Linearization

Important for the study of dynamical systems is understanding how flows


behave near equilibrium points and periodic orbits. In this section we

discuss the first step for analyzing equilibrium points. For an equilibrium
point x∗ we have the following Taylor expansion:

f(x) = f(x∗ ) + 𝐷f(x∗ )(x − x∗ ) + 𝑜(‖x − x∗ ‖)
     = 𝐷f(x∗ )(x − x∗ ) + 𝑜(‖x − x∗ ‖),
for x in a neighborhood of x∗ . If we ignore the higher order terms we
obtain the linear system

𝝃̇ = 𝐷f(x∗ )𝝃, (3.3)

with 𝝃 = x − x∗ . The right-hand side of (3.3) is called the linearization of f at the
equilibrium point x∗ and (3.3) is called the linearized system at x∗ .
The linearized system describes the behavior of the flow 𝜙𝑡 near x = x∗ .
This last statement is not very precise. In the next chapter we will give
more precise statements concerning the behavior of 𝜙𝑡 based on the
linearized system at an equilibrium point.
To investigate the matrix 𝐴 = 𝐷f(x∗ ) in (3.3) we start with its eigenvalues.
These allow us to classify the equilibrium points based on the
eigenvalues of 𝐴.

Definition 3.3.1 An equilibrium point x∗ is hyperbolic if the eigenvalues
𝜆_𝑖 of 𝐴 = 𝐷f(x∗ ) satisfy

Re 𝜆_𝑖 ≠ 0, ∀𝑖 = 1, · · · , 𝑛.

If the latter condition is not satisfied the equilibrium point is called
non-hyperbolic.
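In practice Definition 3.3.1 is checked by computing the eigenvalues of the Jacobian. The sketch below is an illustrative example, not part of the notes: the damped pendulum ẋ = y, ẏ = −sin x − y has equilibria at (0, 0) and (π, 0), and both turn out to be hyperbolic.

```python
import numpy as np

# Illustrative example (not from the notes): damped pendulum
# x' = y, y' = -sin(x) - y. An equilibrium is hyperbolic iff no
# eigenvalue of A = Df(x*) has zero real part (Definition 3.3.1).

def jacobian(x):
    """Jacobian Df at an equilibrium (x, 0) of the damped pendulum."""
    return np.array([[0.0, 1.0],
                     [-np.cos(x), -1.0]])

eigs_origin = np.linalg.eigvals(jacobian(0.0))    # equilibrium (0, 0)
eigs_top = np.linalg.eigvals(jacobian(np.pi))     # equilibrium (pi, 0)

hyperbolic_origin = bool(np.all(np.abs(eigs_origin.real) > 1e-12))
hyperbolic_top = bool(np.all(np.abs(eigs_top.real) > 1e-12))
stable_origin = bool(np.all(eigs_origin.real < 0))               # all Re < 0
saddle_top = bool(eigs_top.real.min() < 0 < eigs_top.real.max())  # mixed signs
```

At the origin all eigenvalues have negative real part (𝑘 = 𝑛), while at (π, 0) the real parts have mixed signs, the situation studied in the next chapter.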
4 The (un)stable manifold theorem
Lectures 5 and 6

4.1 Local stable and unstable manifolds . . . 18
4.2 Center manifolds . . . 23
4.3 The Grobman-Hartman theorem . . . 24
4.4 Hamiltonian and gradient systems . . . 24

In the previous chapter we discussed the idea of linearizing at equilibrium
points. The behavior of the associated linear system contains information
about the behavior of the flow near an equilibrium point if the
linearization satisfies certain conditions. In this chapter we prove the
stable and unstable manifold theorem for hyperbolic equilibrium points.
At a later stage we briefly discuss the center manifold theorem as well
as the Grobman-Hartman theorem. The proof of the (un)stable manifold
theorem is elementary and in the spirit of the earlier chapters. The proof of the
Grobman-Hartman theorem is beyond the scope of these notes.

4.1 Local stable and unstable manifolds

The differential equation

ẋ = f(x), x(0) = x0 ∈ ℝ^𝑛 ,

with f ∈ 𝐶^1 (ℝ^𝑛 ; ℝ^𝑛 ) gives rise to a local flow 𝜙𝑡 . Suppose x∗ ∈ ℝ^𝑛 is an
equilibrium point for f, i.e. f(x∗ ) = 0.1 The linearization of the system at
x∗ is given by

𝝃̇ = 𝐷f(x∗ )𝝃.

1: Equivalently x∗ is called a fixed point since 𝜙𝑡 (x∗ ) = x∗ for all 𝑡 ∈ ℝ.
Suppose the fixed point x∗ is hyperbolic, i.e. the eigenvalues 𝜆_𝑖 of 𝐷f(x∗ )
satisfy Re 𝜆_𝑖 ≠ 0 for all 𝑖 . Suppose moreover that 𝐷f(x∗ ) has precisely 𝑘 eigenvalues
with Re 𝜆_𝑖 < 0 (stable eigenvalues) and 𝑛 − 𝑘 eigenvalues with Re 𝜆_𝑖 > 0
(unstable eigenvalues). From Section 1.3 this yields the decomposition of
ℝ 𝑛 in terms of stable and unstable invariant subspaces:

ℝ𝑛 = 𝐸 𝑠 ⊕ 𝐸𝑢 ,

with dim 𝐸^𝑠 = 𝑘 and dim 𝐸^𝑢 = 𝑛 − 𝑘 . Near x∗ we have the following
theorem concerning stability and invariance. We state the theorem with
the above conditions:2

Theorem 4.1.1 (The stable manifold theorem) There exists a locally
defined 𝑘 -dimensional surface 𝑊^𝑠_loc (x∗ ) ⊂ ℝ^𝑛 containing x∗ with tangent
space parallel to 𝐸^𝑠 at x∗ such that
(i) (forward invariance) 𝜙𝑡 (𝑊^𝑠_loc (x∗ )) ⊂ 𝑊^𝑠_loc (x∗ ) for 𝑡 ≥ 0;
(ii) (stability) lim_{𝑡→∞} 𝜙𝑡 (x) = x∗ for all x ∈ 𝑊^𝑠_loc (x∗ ).
The 𝑘 -dimensional surface 𝑊^𝑠_loc (x∗ ) is given as a graph over an open subset in
𝐸^𝑠 and is called the local stable manifold at x∗ .

2: We prove that the stable manifold exists as a graph of a continuous function. At a later stage we will also address differentiability and the tangent space at x∗ .

Before we start the proof of the stable manifold theorem we consider some
properties of the linearization. The Taylor expansion about x∗ reads

f(x) = 𝐷f(x∗ )(x − x∗ ) + g(x),

where g(x) := f(x) − 𝐷f(x∗ )(x − x∗ ), with g(x∗ ) = 0 and 𝐷g(x∗ ) = 0. Since
g ∈ 𝐶^1 (ℝ^𝑛 ; ℝ^𝑛 ) we have from Taylor's theorem that

g(x′) = g(x) + 𝐷g(x + 𝜃(x′ − x))(x′ − x), 𝜃 ∈ (0, 1).

By the assumptions on g we have that 𝐷g is continuous with 𝐷g(x∗ ) = 0
and thus for every 𝜖 > 0 there exists a 𝛿_𝜖 > 0 such that ‖𝐷g(x)‖ < 𝜖
for all x ∈ 𝐵_{𝛿𝜖} (x∗ ). If x, x′ ∈ 𝐵_{𝛿𝜖} (x∗ ), then also x + 𝜃(x′ − x) ∈ 𝐵_{𝛿𝜖} (x∗ ),
𝜃 ∈ (0, 1), and thus ‖𝐷g(x + 𝜃(x′ − x))‖ < 𝜖 for all x, x′ ∈ 𝐵_{𝛿𝜖} (x∗ ).
Therefore, for every 𝜖 > 0,

‖g(x′) − g(x)‖ ≤ ‖𝐷g(x + 𝜃(x′ − x))‖ ‖x′ − x‖ ≤ 𝜖‖x′ − x‖, (4.1)

for x, x′ ∈ 𝐵_{𝛿𝜖} (x∗ ).
Let 𝐴 = 𝐷f(x∗ ). For the linear flow 𝑒^{𝑡𝐴} it holds that 𝑒^{𝑡𝐴} 𝐸^𝑠 = 𝐸^𝑠 and
𝑒^{𝑡𝐴} 𝐸^𝑢 = 𝐸^𝑢 for all 𝑡 ∈ ℝ. From the expression

lim_{ℎ→0} (𝑒^{ℎ𝐴} x − x)/ℎ = 𝐴x,

we derive that 𝐴𝐸^𝑠 = 𝐸^𝑠 and 𝐴𝐸^𝑢 = 𝐸^𝑢 . The linear subspaces 𝐸^𝑠 and
𝐸^𝑢 are invariant for the linear flow 𝑒^{𝑡𝐴} and for the matrix 𝐴. From
the decomposition ℝ^𝑛 = 𝐸^𝑠 ⊕ 𝐸^𝑢 we choose a basis for both invariant
subspaces and denote the associated basis matrix by 𝑉 . The matrix for the linear
map x ↦ 𝐴x with respect to the basis 𝑉 is then given by 𝐷 = 𝑉^{−1} 𝐴𝑉,
and since 𝐸^𝑠 and 𝐸^𝑢 are invariant subspaces the matrix 𝐷 has the block form

𝐷 = 𝑉^{−1} 𝐴𝑉 = ( 𝑃  0 ; 0  𝑄 ), (4.2)

where 𝑃 is a 𝑘 × 𝑘 matrix and 𝑄 is an (𝑛 − 𝑘) × (𝑛 − 𝑘) matrix. We use
𝑉 to define new coordinates. Define

y := 𝑉^{−1} (x − x∗ ),

and in terms of the new coordinates the differential equation is:

ẏ = 𝐷y + h(y), (4.3)

where h(y) = 𝑉^{−1} g(𝑉y + x∗ ). Indeed, ẏ = 𝑉^{−1} ẋ and thus

ẏ = 𝑉^{−1} 𝐴(x − x∗ ) + 𝑉^{−1} g(x) = 𝑉^{−1} 𝐴𝑉y + 𝑉^{−1} g(𝑉y + x∗ ).

Since h ∈ 𝐶^1 (ℝ^𝑛 ; ℝ^𝑛 ) with h(0) = 0 and 𝐷h(0) = 0 we have, as
before, that for every 𝜖 > 0 there exists a 𝛿_𝜖 > 0 such that

‖h(y′) − h(y)‖ ≤ 𝜖‖y′ − y‖, (4.4)

for all y, y′ ∈ 𝐵_{𝛿𝜖} (0).3 The local flow associated with Equation (4.3) is
denoted by 𝜓𝑡 .

3: The Lipschitz estimate can also be obtained from the Lipschitz estimate for the function g.
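The coordinate change behind (4.2) can be carried out numerically. The sketch below is an illustrative example, not part of the notes: for a matrix 𝐴 with one stable and one unstable eigenvalue, a basis 𝑉 of eigenvectors (stable directions first) brings 𝐴 into the block form 𝐷 = 𝑉^{−1}𝐴𝑉 with a stable block 𝑃 and an unstable block 𝑄.

```python
import numpy as np

# Illustrative example (not from the notes): A has eigenvalues -1 (stable)
# and 3 (unstable); V built from eigenvectors yields the block form (4.2).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

lam, vecs = np.linalg.eig(A)
order = np.argsort(lam.real)          # stable directions first
V = vecs[:, order].real               # basis matrix for E^s (+) E^u
D = np.linalg.inv(V) @ A @ V

P = D[0, 0]                           # stable block (here 1x1)
Q = D[1, 1]                           # unstable block (here 1x1)
off_diag = abs(D[0, 1]) + abs(D[1, 0])
```

For complex eigenvalue pairs one would use a real block basis instead of single eigenvectors; the real-eigenvalue case above keeps the sketch minimal.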

Proof of Theorem 4.1.1. To prove the theorem we prove the existence of a
local stable manifold for Equation (4.3) at the equilibrium point y = 0.
The eigenvalues of 𝐷 coincide with the eigenvalues of 𝐴. We order the
eigenvalues as follows:

Re 𝜆_1 ≤ · · · ≤ Re 𝜆_𝑘 < 0 < Re 𝜆_{𝑘+1} ≤ · · · ≤ Re 𝜆_𝑛 .

From the representation of 𝑒^{𝑡𝐷} :

𝑒^{𝑡𝐷} = ( 𝑒^{𝑡𝑃}  0 ; 0  𝑒^{𝑡𝑄} ) = ( 𝑒^{𝑡𝑃}  0 ; 0  0 ) + ( 0  0 ; 0  𝑒^{𝑡𝑄} ) = 𝑈(𝑡) + 𝑉(𝑡).

The unbounded terms in 𝑒^{𝑡𝑃} are of the form 𝑡^𝑛 𝑒^{Re 𝜆_𝑖 𝑡} , 𝑖 = 1, · · · , 𝑘 , and
the unbounded terms in 𝑒^{𝑡𝑄} are of the form 𝑡^𝑛 𝑒^{Re 𝜆_𝑖 𝑡} , 𝑖 = 𝑘 + 1, · · · , 𝑛 .
Choose 𝛼, 𝜎 > 0 such that

Re 𝜆_1 ≤ · · · ≤ Re 𝜆_𝑘 < −(𝛼 + 𝜎) < 0 < 𝜎 < Re 𝜆_{𝑘+1} ≤ · · · ≤ Re 𝜆_𝑛 .

This gives 𝑡^𝑛 𝑒^{Re 𝜆_𝑖 𝑡} ≤ 𝑐𝑒^{−(𝛼+𝜎)𝑡} , 𝑖 = 1, · · · , 𝑘 , for all 𝑡 ≥ 0 and 𝑡^𝑛 𝑒^{Re 𝜆_𝑖 𝑡} ≤
𝑐𝑒^{𝜎𝑡} , 𝑖 = 𝑘 + 1, · · · , 𝑛 , for all 𝑡 ≤ 0. For the exponentials we obtain

‖𝑈(𝑡)‖ ≤ 𝑐0 𝑒^{−(𝛼+𝜎)𝑡} , 𝑡 ≥ 0, ‖𝑉(𝑡)‖ ≤ 𝑐0 𝑒^{𝜎𝑡} , 𝑡 ≤ 0,

for some constant 𝑐0 > 0.
for some constant 𝑐 0 > 0. For vector a a ∈ ℝ 𝑛 we consider the integral


equation4 4: One can verify that solutions y(𝑡 ; a)
satisfy the differential equation in (4.3).
∫ 𝑡 ∫ ∞ To derive the equations directly we write
y(𝑡 ; a) = 𝑈(𝑡)a + 𝑈(𝑡 − 𝑠)h y(𝑠 ; a) 𝑑𝑠 − 𝑉(𝑡 − 𝑠)h y(𝑠 ; a) 𝑑𝑠.
 
y = y𝑠 + y𝑢 which yields the system
0 𝑡
(4.5) y¤ 𝑠 = 𝑃 y𝑠 + 𝜋 𝑠 h(y𝑠 + y𝑢 );
As in the proof of Theorem 2.0.1 we use Picard iteration for construct a y¤ 𝑢 = 𝑄 y𝑢 + 𝜋𝑢 h(y𝑠 + y𝑢 ).
local stable manifold. Consider Integrating over time the first equation
over [0 , 𝑡] with initial value a𝑠 = 𝜋 𝑠 a and
y0 (𝑡 ; a) = 0; the second equation over [𝑡, ∞) yields the
∫ 𝑡 integral equation in (4.5). Note that the
yℓ +1 (𝑡 ; a) = 𝑈(𝑡)a + 𝑈(𝑡 − 𝑠)h yℓ (𝑠 ; a) 𝑑𝑠

equation is independent of the initial value
0 a𝑢 = 𝜋𝑢 a = (𝑎 𝑘+1 , · · · , 𝑎 𝑛 ).
∫ ∞
𝑉(𝑡 − 𝑠)h yℓ (𝑠 ; a) 𝑑𝑠, ∀ℓ ≥ 0.


𝑡

Note that, since h(0) = 0, y^1 (𝑡 ; a) = 𝑈(𝑡)a and

‖y^1 (𝑡 ; a)‖ ≤ ‖𝑈(𝑡)‖ ‖a‖ ≤ 𝑐0 𝑒^{−(𝛼+𝜎)𝑡} ‖a‖ ≤ 𝑐0 𝑒^{−𝛼𝑡} ‖a‖,

for all 𝑡 ≥ 0. Assume the following induction hypothesis

‖y^ℓ (𝑡 ; a) − y^{ℓ−1} (𝑡 ; a)‖ ≤ 𝑐0 𝑒^{−𝛼𝑡} ‖a‖ / 2^{ℓ−1} , ∀𝑡 ≥ 0, (4.6)

and for all ℓ = 1, · · · , 𝑚 . Now consider ‖y^{𝑚+1} (𝑡 ; a) − y^𝑚 (𝑡 ; a)‖. Using the
Lipschitz condition for h in (4.4) we have

‖y^{𝑚+1} (𝑡 ; a) − y^𝑚 (𝑡 ; a)‖ ≤ 𝜖 ∫_0^𝑡 ‖𝑈(𝑡 − 𝑠)‖ ‖y^𝑚 (𝑠 ; a) − y^{𝑚−1} (𝑠 ; a)‖ 𝑑𝑠
+ 𝜖 ∫_𝑡^∞ ‖𝑉(𝑡 − 𝑠)‖ ‖y^𝑚 (𝑠 ; a) − y^{𝑚−1} (𝑠 ; a)‖ 𝑑𝑠, (4.7)

provided ‖y^𝑚 (𝑡 ; a) − y^{𝑚−1} (𝑡 ; a)‖ < 𝛿_𝜖 for all 𝑡 ≥ 0.5

5: We need to choose the bounds on ‖a‖ and 𝜖 to justify the estimate.

If we use the

induction hypothesis in (4.6) then (4.7) yields

‖y^{𝑚+1} (𝑡 ; a) − y^𝑚 (𝑡 ; a)‖ ≤ 𝜖𝑐0 ∫_0^𝑡 𝑒^{−(𝛼+𝜎)(𝑡−𝑠)} 𝑐0 (𝑒^{−𝛼𝑠} ‖a‖ / 2^{𝑚−1}) 𝑑𝑠
+ 𝜖𝑐0 ∫_𝑡^∞ 𝑒^{𝜎(𝑡−𝑠)} 𝑐0 (𝑒^{−𝛼𝑠} ‖a‖ / 2^{𝑚−1}) 𝑑𝑠. (4.8)

Before continuing the estimate we evaluate the integrals:

∫_0^𝑡 𝑒^{−(𝛼+𝜎)(𝑡−𝑠)} 𝑒^{−𝛼𝑠} 𝑑𝑠 = 𝑒^{−(𝛼+𝜎)𝑡} ∫_0^𝑡 𝑒^{𝜎𝑠} 𝑑𝑠 ≤ 𝑒^{−(𝛼+𝜎)𝑡} ∫_{−∞}^𝑡 𝑒^{𝜎𝑠} 𝑑𝑠 = 𝑒^{−𝛼𝑡} / 𝜎,

∫_𝑡^∞ 𝑒^{𝜎(𝑡−𝑠)} 𝑒^{−𝛼𝑠} 𝑑𝑠 = 𝑒^{𝜎𝑡} ∫_𝑡^∞ 𝑒^{−(𝛼+𝜎)𝑠} 𝑑𝑠 = 𝑒^{−𝛼𝑡} / (𝛼 + 𝜎) ≤ 𝑒^{−𝛼𝑡} / 𝜎.

Combining these integral estimates with (4.8) we obtain

‖y^{𝑚+1} (𝑡 ; a) − y^𝑚 (𝑡 ; a)‖ ≤ 2𝜖𝑐0^2 (𝑒^{−𝛼𝑡} ‖a‖ / 𝜎) (1 / 2^{𝑚−1}) = (2𝜖𝑐0 / 𝜎) 𝑐0 (𝑒^{−𝛼𝑡} ‖a‖ / 2^{𝑚−1}) ≤ 𝑐0 (𝑒^{−𝛼𝑡} ‖a‖ / 2^𝑚), (4.9)

provided 2𝜖𝑐0 /𝜎 ≤ 1/2. In order to justify the estimate in (4.7) we have,
for all 𝑡 ≥ 0 and 𝑚 ≥ 0, 𝑐0 ‖a‖𝑒^{−𝛼𝑡} / 2^{𝑚−1} ≤ 2𝑐0 ‖a‖ / 2^𝑚 ≤ 2𝑐0 ‖a‖ < 𝛿_𝜖
and therefore we choose

𝜖 ≤ 𝜎/4𝑐0 , ‖a‖ < 𝛿_𝜖 /2𝑐0 , (4.10)

which completes the induction step and therefore

‖y^{ℓ+1} (𝑡 ; a) − y^ℓ (𝑡 ; a)‖ ≤ 𝑐0 𝑒^{−𝛼𝑡} ‖a‖ / 2^ℓ , ∀𝑡 ≥ 0, (4.11)
and for all ℓ ≥ 0. As in the proof of Theorem 2.0.1 consider integers
𝑚, 𝑚′ ≥ 𝑚0 . Then, using the triangle inequality we have

‖y^𝑚 (𝑡 ; a) − y^{𝑚′} (𝑡 ; a)‖𝑒^{𝛼𝑡} ≤ Σ_{ℓ=𝑚0}^{𝑚−1} ‖y^{ℓ+1} (𝑡 ; a) − y^ℓ (𝑡 ; a)‖𝑒^{𝛼𝑡} ≤ Σ_{ℓ=𝑚0}^{𝑚−1} 𝑐0 ‖a‖ (1/2^ℓ)
≤ 𝑐0 ‖a‖ Σ_{ℓ=𝑚0}^{∞} 1/2^ℓ = 𝑐0 ‖a‖ (1/2^{𝑚0−1}) → 0,

as 𝑚0 → ∞ and thus ‖y^𝑚 − y^{𝑚′} ‖_{𝐶^0_𝛼} → 0 as 𝑚, 𝑚′ → ∞. This proves that
{y^ℓ (𝑡)} is a Cauchy sequence in the Banach space 𝐶^0_𝛼 ([0, ∞); ℝ^𝑛 ).6 Since
𝐶^0_𝛼 ([0, ∞); ℝ^𝑛 ) is a Banach space there exists a limit

y(𝑡 ; a) = lim_{ℓ→∞} y^ℓ (𝑡 ; a),

and the convergence is uniform on [0, ∞).

6: The space of continuous functions on the interval [0, ∞) with the norm ‖y‖_{𝐶^0_𝛼} := max_{𝑡∈[0,∞)} ‖y(𝑡)‖𝑒^{𝛼𝑡} is a normed linear space. Due to the weight factor 𝑒^{𝛼𝑡} this space is complete, i.e. every Cauchy sequence has a (unique) limit in 𝐶^0_𝛼 ([0, ∞); ℝ^𝑛 ).

For the limit function we have

the following estimate:

‖y(𝑡 ; a)‖ = ‖y(𝑡 ; a) − y^0 (𝑡 ; a)‖ ≤ Σ_{ℓ=0}^{∞} ‖y^{ℓ+1} (𝑡 ; a) − y^ℓ (𝑡 ; a)‖
≤ ‖a‖𝑐0 𝑒^{−𝛼𝑡} Σ_{ℓ=0}^{∞} 1/2^ℓ = 2𝑐0 ‖a‖𝑒^{−𝛼𝑡} , ∀𝑡 ≥ 0. (4.12)

Since y(𝑡 ; a) satisfies the integral equation in (4.5) the function is a
𝐶^1 -function on [0, ∞).
Since y(𝑡 ; a) is independent of 𝑎_{𝑘+1} , · · · , 𝑎_𝑛 we set these parameters equal
to zero. Note that7

𝑦_𝑖 (0; a) = 𝑎_𝑖 , 𝑖 = 1, · · · , 𝑘 ;
𝑦_𝑖 (0; a) = −𝜋_𝑖 ∫_0^∞ 𝑉(−𝑠)h(y(𝑠 ; a)) 𝑑𝑠, 𝑖 = 𝑘 + 1, · · · , 𝑛.

This way we express the initial values for the coordinates 𝑖 = 𝑘 + 1, · · · , 𝑛
in terms of the first coordinates 𝑖 = 1, · · · , 𝑘 , i.e. define

𝑦_𝑖 (0; a) = 𝜎_𝑖 (𝑎_1 , · · · , 𝑎_𝑘 ), 𝑖 = 𝑘 + 1, · · · , 𝑛,

where 𝜎_𝑖 (𝑎_1 , · · · , 𝑎_𝑘 ) = −𝜋_𝑖 ∫_0^∞ 𝑉(−𝑠)h(y(𝑠 ; a)) 𝑑𝑠 , for 𝑖 = 𝑘 + 1, · · · , 𝑛 ,
which we denote by 𝝈(𝜋^𝑠 a). As in the proof of Theorem 2.0.1 we have
that y(𝑡 ; a) depends continuously on 𝑡 and a and therefore the functions 𝜎_𝑖 depend
continuously on a.8 This parametrization defines a parametrized 𝑘 -
dimensional surface (manifold)9

𝑆 := {(𝜋^𝑠 a, 𝝈(𝜋^𝑠 a)) | 𝜋^𝑠 a ∈ 𝐵^𝑘_{𝛿𝜖 /2𝑐0} (0)}.

7: 𝜋_𝑖 denotes the projection onto the 𝑖 th coordinate.
8: The continuity follows from the formula for 𝜎_𝑖 using the continuity of h and the exponential decay of 𝑉 .
9: The parameter 𝜋^𝑠 a := (𝑎_1 , · · · , 𝑎_𝑘 ) is a point in ℝ^𝑘 .

This surface has the following properties with respect to 𝜓𝑡 . Let a =
(𝜋^𝑠 a, 𝝈(𝜋^𝑠 a)) ∈ 𝑆 and consider the solution y(𝑡 ; a) = y(𝑡 ; 𝜋^𝑠 a). By
the group property we have

y(𝑡 ; a) = y(0; 𝜓𝑡 (a)) = y(0; 𝜋^𝑠 𝜓𝑡 (a)),

which proves that

𝜓𝑡 (a) = (𝜋^𝑠 𝜓𝑡 (a), 𝝈(𝜋^𝑠 𝜓𝑡 (a))) ∈ 𝑆, ∀𝑡 ≥ 0,

and thus 𝜓𝑡 (𝑆) ⊂ 𝑆 for all 𝑡 ≥ 0, i.e. 𝑆 is forward invariant. Moreover, by
the estimate in (4.12) we have that 𝜓𝑡 (a) → 0 as 𝑡 → ∞.

In a similar way there exists a local unstable manifold.
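The graph description of the local stable manifold can be seen in a computable example. The following sketch is illustrative and not from the notes: for ẋ = −x, ẏ = y + x² the origin is hyperbolic and substituting y = σ(x) into the invariance condition −x σ′(x) = σ(x) + x² gives the explicit local stable manifold σ(x) = −x²/3. Orbits starting on this graph decay to 0, while nearby orbits off the graph are carried away by the unstable direction (the RK4 integrator is our own choice).

```python
import math

# Illustrative example (not from the notes): x' = -x, y' = y + x^2 with
# local stable manifold y = -x^2/3 at the hyperbolic equilibrium (0, 0).

def f(v):
    x, y = v
    return (-x, y + x*x)

def flow(v, t, n=4000):
    """Approximate phi_t(v) with n classical RK4 steps."""
    h = t / n
    for _ in range(n):
        k1 = f(v)
        k2 = f((v[0] + 0.5*h*k1[0], v[1] + 0.5*h*k1[1]))
        k3 = f((v[0] + 0.5*h*k2[0], v[1] + 0.5*h*k2[1]))
        k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
        v = (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return v

x0 = 0.5
on_manifold = flow((x0, -x0*x0/3), 8.0)          # starts on y = -x^2/3
off_manifold = flow((x0, -x0*x0/3 + 0.01), 8.0)  # starts slightly off it

decay = math.hypot(on_manifold[0], on_manifold[1])  # should be near 0
growth = abs(off_manifold[1])                       # dominated by 0.01*e^t
```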

Theorem 4.1.2 (The unstable manifold theorem) There exists a locally
defined (𝑛 − 𝑘)-dimensional surface 𝑊^𝑢_loc (x∗ ) ⊂ ℝ^𝑛 containing x∗ with tangent
space parallel to 𝐸^𝑢 at x∗ such that
(i) (backward invariance) 𝜙𝑡 (𝑊^𝑢_loc (x∗ )) ⊂ 𝑊^𝑢_loc (x∗ ) for 𝑡 ≤ 0;
(ii) (instability) lim_{𝑡→−∞} 𝜙𝑡 (x) = x∗ for all x ∈ 𝑊^𝑢_loc (x∗ ).
The (𝑛 − 𝑘)-dimensional surface 𝑊^𝑢_loc (x∗ ) is given as a graph over an open subset
in 𝐸^𝑢 and is called the local unstable manifold at x∗ .

Proof. Observe that by the transformation 𝑡 ↦ −𝑡 the local stable manifold
of the differential equation ẋ = −f(x) yields the local unstable
manifold 𝑊^𝑢_loc (x∗ ) for ẋ = f(x).

Remark 4.1.1 If we impose only Property (ii) in Theorems 4.1.1 and
4.1.2 we obtain the stable and unstable manifolds 𝑊^𝑠 (x∗ ) and 𝑊^𝑢 (x∗ ),
which are embedded manifolds and invariant sets for the local flow.

4.2 Center manifolds

So far we considered local stable and unstable manifolds related to
hyperbolic fixed points, which are characterized by the property that all
eigenvalues have non-zero real part. In this section we briefly discuss the
case of non-hyperbolic fixed points. For a non-hyperbolic fixed point the
situation is:10

Re 𝜆_1 ≤ · · · ≤ Re 𝜆_𝑘 < 0 < Re 𝜆_{𝑘+𝑚+1} ≤ · · · ≤ Re 𝜆_𝑛 ;
Re 𝜆_{𝑘+1} = · · · = Re 𝜆_{𝑘+𝑚} = 0,

with decomposition
ℝ^𝑛 = 𝐸^𝑢 ⊕ 𝐸^𝑐 ⊕ 𝐸^𝑠 .

10: Non-hyperbolic fixed points need not be isolated fixed points. For example, if a flow has a line of fixed points then they are necessarily non-hyperbolic.

Theorem 4.2.1 (The center manifold theorem, I) There exists a locally
defined (𝑘 + 𝑚)-dimensional surface 𝑊^{𝑐𝑠}_loc (x∗ ) ⊂ ℝ^𝑛 containing x∗ with tangent
space parallel to 𝐸^𝑐 ⊕ 𝐸^𝑠 at x∗ such that
(i) (forward invariance) 𝜙𝑡 (𝑊^{𝑐𝑠}_loc (x∗ )) ⊂ 𝑊^{𝑐𝑠}_loc (x∗ ) for 𝑡 ≥ 0;
(ii) (stability) lim_{𝑡→∞} 𝜙𝑡 (x) = x∗ for all x ∈ 𝑊^{𝑐𝑠}_loc (x∗ ).
The (𝑘 + 𝑚)-dimensional surface 𝑊^{𝑐𝑠}_loc (x∗ ) is given as a graph over an open subset
in 𝐸^𝑐 ⊕ 𝐸^𝑠 and is called a local center-stable manifold at x∗ .

We emphasize that a local center-stable manifold is not unique in general,
in contrast to local stable and unstable manifolds. Due to the lack of
hyperbolicity, which is used in the proofs of the stable and unstable
manifold theorems, we will not provide a proof of the center manifold
theorems in these notes.

Theorem 4.2.2 (The center manifold theorem, II) There exists a locally
defined (𝑛 − 𝑘)-dimensional surface 𝑊^{𝑐𝑢}_loc (x∗ ) ⊂ ℝ^𝑛 containing x∗ with tangent
space parallel to 𝐸^𝑐 ⊕ 𝐸^𝑢 at x∗ such that
(i) (backward invariance) 𝜙𝑡 (𝑊^{𝑐𝑢}_loc (x∗ )) ⊂ 𝑊^{𝑐𝑢}_loc (x∗ ) for 𝑡 ≤ 0;
(ii) (instability) lim_{𝑡→−∞} 𝜙𝑡 (x) = x∗ for all x ∈ 𝑊^{𝑐𝑢}_loc (x∗ ).
The (𝑛 − 𝑘)-dimensional surface 𝑊^{𝑐𝑢}_loc (x∗ ) is given as a graph over an open subset
in 𝐸^𝑐 ⊕ 𝐸^𝑢 and is called a local center-unstable manifold at x∗ .

Also center-unstable manifolds are not unique in general. If we consider
only center eigenvalues then directionality is lost in general and we define
local center manifolds:

𝑊^𝑐_loc (x∗ ) := 𝑊^{𝑐𝑠}_loc (x∗ ) ∩ 𝑊^{𝑐𝑢}_loc (x∗ ). (4.13)

Remark 4.2.1 As for stable and unstable manifolds we can also define
global versions that are invariant sets for the flow.

4.3 The Grobman-Hartman theorem

The Grobman-Hartman theorem11 is a result for hyperbolic fixed points.
With the Grobman-Hartman theorem we establish local conjugations of a
flow. A local conjugation for a (local) flow 𝜙𝑡 at a hyperbolic fixed point x∗
to the linear flow 𝑒^{𝑡𝐴} , 𝐴 = 𝐷f(x∗ ), is a homeomorphism ℎ : 𝑈 ⊂ ℝ^𝑛 →
ℎ(𝑈) ⊂ ℝ^𝑛 , with ℎ(x∗ ) = 0, such that

ℎ(𝜙𝑡 (x)) = 𝑒^{𝑡𝐴} ℎ(x), ∀x ∈ 𝑈 , ∀𝑡 ∈ [−𝜏, 𝜏], (4.14)

where 𝑈 is a bounded, open neighborhood of x∗ and 𝑉 = ℎ(𝑈) is a
neighborhood of 0, and 𝜏 = 𝜏(x). Via a local conjugation we obtain the
local description of 𝜙𝑡 in terms of a linear flow: 𝜙𝑡 (x) = ℎ^{−1} (𝑒^{𝑡𝐴} ℎ(x)).

Theorem 4.3.1 Let x∗ be a hyperbolic fixed point of (2.1). Then, there exists a
local conjugacy for the associated local flow 𝜙𝑡 as described in (4.14).

We will not treat the proof of the Grobman-Hartman theorem in this text.
It was proved by Hartman that if the vector field is of class 𝐶^2 then a local
𝐶^1 -conjugacy exists. In this case we easily retrieve the stable and unstable
manifolds from the local conjugacy.

11: In many text books this theorem is referred to as the Hartman-Grobman theorem.

4.4 Hamiltonian and gradient systems

In this section we consider two special classes of dynamical systems that
are important in various applications. A Hamiltonian system is given by a
system of differential equations of the form

ṗ = 𝜕_q 𝐻(p, q);
q̇ = −𝜕_p 𝐻(p, q), (4.15)

where x = (p, q) ∈ ℝ^{2𝑛} , p, q ∈ ℝ^𝑛 . The function 𝐻 : ℝ^{2𝑛} → ℝ, of class
𝐶^2 , is called the Hamiltonian. Since 𝐻 is of class 𝐶^2 the vector field

𝑋_𝐻 (x) := (𝜕_q 𝐻(p, q), −𝜕_p 𝐻(p, q))

is a 𝐶^1 -vector field which induces a well-defined local flow, the Hamiltonian
flow 𝜙^𝐻_𝑡 .

Lemma 4.4.1 𝐻(𝜙^𝐻_𝑡 (x)) = 𝐸 = constant in 𝑡 , for all x ∈ ℝ^{2𝑛} .

Proof. By definition 𝜙̇^𝐻_𝑡 = 𝑋_𝐻 (𝜙^𝐻_𝑡 ). Then, the time-derivative of 𝐻 gives

𝑑/𝑑𝑡 𝐻(𝜙^𝐻_𝑡 ) = ∇𝐻(𝜙^𝐻_𝑡 ) · 𝑋_𝐻 (𝜙^𝐻_𝑡 ) = 0,

which implies that 𝐻(𝜙^𝐻_𝑡 ) is a constant function in time.
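Lemma 4.4.1 can be observed numerically. The sketch below is an illustrative example, not part of the notes: we integrate the pendulum Hamiltonian H(p, q) = p²/2 − cos q with the sign convention of (4.15), i.e. ṗ = ∂_q H, q̇ = −∂_p H, and check that H stays constant along the computed orbit (the RK4 integrator is our own choice).

```python
import math

# Illustrative example (not from the notes): pendulum Hamiltonian
# H(p, q) = p^2/2 - cos(q) with the convention of (4.15).

def H(p, q):
    return 0.5*p*p - math.cos(q)

def f(v):
    p, q = v
    return (math.sin(q), -p)    # (dH/dq, -dH/dp)

def rk4(v, t, n=20000):
    """Approximate the Hamiltonian flow with n RK4 steps."""
    h = t / n
    for _ in range(n):
        k1 = f(v)
        k2 = f((v[0] + 0.5*h*k1[0], v[1] + 0.5*h*k1[1]))
        k3 = f((v[0] + 0.5*h*k2[0], v[1] + 0.5*h*k2[1]))
        k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
        v = (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return v

v0 = (0.5, 1.0)
E0 = H(*v0)
vT = rk4(v0, 20.0)
energy_drift = abs(H(*vT) - E0)   # should be tiny: H is a first integral
```

The drift is not exactly zero because RK4 is not energy-preserving, but it stays at discretization-error size over the integration interval.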

The level sets 𝑀_𝐸 := {x | 𝐻(x) = 𝐸} ⊂ ℝ^{2𝑛} are called energy surfaces
and foliate the phase space. The dimension of a regular energy surface
is 2𝑛 − 1. A Hamiltonian system is an example of a conservative system.12
Every motion is restricted to a given energy surface. The number 𝑛 in the
dimension of the phase space ℝ^{2𝑛} counts the degrees of freedom of
the Hamiltonian system. In general Hamiltonian systems with 𝑛 ≥ 2 are
very complicated dynamical systems. However, for 𝑛 = 1 the systems are
what we call integrable. Let us consider 1-degree-of-freedom Hamiltonian
systems. In this case we obtain the system

𝑝̇ = 𝜕_𝑞 𝐻(𝑝, 𝑞);
𝑞̇ = −𝜕_𝑝 𝐻(𝑝, 𝑞), (4.16)

12: A conservative system is a system of differential equations which allows a function 𝐸(x) which is constant along orbits. Note that a conservative system is not necessarily Hamiltonian.

with x = (𝑝, 𝑞) ∈ ℝ^2 and 𝐻 : ℝ^2 → ℝ of class 𝐶^2 . By Lemma 4.4.1 the flow-lines of
the associated flow 𝜙^𝐻_𝑡 are curves given by the equation 𝐻(𝑝, 𝑞) = 𝐸 . If
we locally solve this equation at a regular point of an energy curve, say
𝑝 = 𝑝(𝑞), then the dynamics is given by the equation

𝑞̇ = −𝜕_𝑝 𝐻(𝑝(𝑞), 𝑞),

which can be solved by integration. This explains the name integrable
system. The Hamiltonian structure forces critical points to be of a
certain type. Consider 𝐿 = 𝐷𝑋_𝐻 (x∗ ) at a fixed point x∗ :13

𝐿 = ( 𝜕^2_{𝑝𝑞} 𝐻(x∗ )   𝜕^2_{𝑞𝑞} 𝐻(x∗ ) ;
     −𝜕^2_{𝑝𝑝} 𝐻(x∗ )  −𝜕^2_{𝑝𝑞} 𝐻(x∗ ) ).

Since trace 𝐿 = 0 we have that the eigenvalues are given by the characteristic
equation 𝜆^2 + det 𝐿 = 0.

13: Observe that x∗ is an equilibrium point for 𝑋_𝐻 if and only if x∗ is a critical point of 𝐻 .

Theorem 4.4.2 Let x∗ be a fixed point that is a non-degenerate critical point
of 𝐻 , i.e. det 𝐿 ≠ 0. Then,
(i) x∗ is a saddle point if and only if it is a saddle for 𝐻 ;
(ii) x∗ is a center if and only if it is a local minimum, or maximum for 𝐻 .

Proof. It holds that

det 𝐿 = 𝜕^2_{𝑝𝑝} 𝐻(x∗ ) 𝜕^2_{𝑞𝑞} 𝐻(x∗ ) − (𝜕^2_{𝑝𝑞} 𝐻(x∗ ))^2 .

Observe that det 𝐿 = det 𝑑^2 𝐻(x∗ ).14 A fixed point is a non-degenerate
local minimum of 𝐻 if and only if 𝜕^2_{𝑝𝑝} 𝐻(x∗ ) > 0 and det 𝑑^2 𝐻(x∗ ) = det 𝐿 >
0, and a non-degenerate local maximum if and only if 𝜕^2_{𝑝𝑝} 𝐻(x∗ ) < 0 and
det 𝑑^2 𝐻(x∗ ) = det 𝐿 > 0. A fixed point is a saddle for 𝐻 if and only if
det 𝑑^2 𝐻(x∗ ) = det 𝐿 < 0. In the latter case 𝜆^2 = −det 𝐿 > 0 and
𝜆_1 < 0 < 𝜆_2 , which proves (i), and in the former cases 𝜆_{1,2} = ±𝑖 √(det 𝐿),
which proves (ii).

14: The matrix 𝑑^2 𝐻(x∗ ) is the Hessian of 𝐻 and is the matrix of second derivatives of 𝐻 .

In a Hamiltonian system stable/unstable nodes are excluded.
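The classification of Theorem 4.4.2 can be checked for a concrete Hamiltonian. The sketch below is an illustrative example, not part of the notes: for the pendulum Hamiltonian H(p, q) = p²/2 − cos q the fixed points of X_H are (0, kπ), the matrix L = DX_H has det L = cos q there, and the eigenvalues are purely imaginary (center) at q = 0 and real of opposite sign (saddle) at q = π.

```python
import numpy as np

# Illustrative example (not from the notes): pendulum H(p,q) = p^2/2 - cos(q).
# At a fixed point (0, q) with sin(q) = 0 the linearization is
# L = [[H_pq, H_qq], [-H_pp, -H_pq]] with trace L = 0 and lambda^2 = -det L.

def L(q):
    return np.array([[0.0, np.cos(q)],
                     [-1.0, 0.0]])

eigs_center = np.linalg.eigvals(L(0.0))      # det L =  1 > 0: center
eigs_saddle = np.linalg.eigvals(L(np.pi))    # det L = -1 < 0: saddle

is_center = bool(np.allclose(eigs_center.real, 0.0)
                 and np.all(np.abs(eigs_center.imag) > 0.5))
is_saddle = bool(np.allclose(eigs_saddle.imag, 0.0)
                 and eigs_saddle.real.min() < 0 < eigs_saddle.real.max())
```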



Complementary to a Hamiltonian system in the plane is a gradient system
defined by the system of differential equations:

𝑝̇ = −𝜕_𝑝 𝐻(𝑝, 𝑞);
𝑞̇ = −𝜕_𝑞 𝐻(𝑝, 𝑞). (4.17)

The vector field −∇𝐻 is orthogonal to the Hamiltonian vector field
𝑋_𝐻 . In contrast to Hamiltonian systems, gradient systems are far from
conservative. Indeed, if 𝜙𝑡 is the local gradient flow then

𝑑/𝑑𝑡 𝐻(𝜙𝑡 (x)) = ∇𝐻(𝜙𝑡 (x)) · (−∇𝐻(𝜙𝑡 (x))) = −‖∇𝐻(𝜙𝑡 (x))‖^2 ≤ 0,

with strict inequality whenever 𝜙𝑡 (x) ≠ x∗ , x∗ a critical point of 𝐻 . This
implies that 𝐻(𝜙𝑡 (x)) is strictly decreasing if we start at a point x that is
not an equilibrium point.
Gradient systems can be defined in any dimension 𝑛 and are not necessarily
complementary to Hamiltonian systems. A gradient system is defined
via a (smooth) potential function 𝑉 : ℝ^𝑛 → ℝ. Consider the system of
differential equations:

ẋ = −∇𝑉(x). (4.18)

For a gradient system the nature of a fixed point x∗ is determined by the
eigenvalues of the Hessian −𝑑^2 𝑉(x∗ ). Since −𝑑^2 𝑉(x∗ ) is always symmetric,
non-degenerate fixed points are either stable/unstable nodes, or saddle points.
Centers or spiral points, or combinations are excluded.
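Both observations about gradient systems are easy to test. The sketch below is an illustrative example, not part of the notes: for the potential V(x, y) = x² − y² the symmetric Hessian gives a linearization with real eigenvalues only, and V decreases strictly along a (forward Euler) orbit of ẋ = −∇V(x) away from the fixed point.

```python
import numpy as np

# Illustrative example (not from the notes): gradient system for the
# saddle potential V(x, y) = x^2 - y^2.

def V(v):
    x, y = v
    return x*x - y*y

def grad_V(v):
    x, y = v
    return np.array([2*x, -2*y])

# Linearization -d^2 V at the fixed point (0, 0): symmetric, real spectrum.
neg_hess = -np.array([[2.0, 0.0],
                      [0.0, -2.0]])
eigs = np.linalg.eigvals(neg_hess)
all_real = bool(np.all(np.isreal(eigs)))
is_saddle = bool(eigs.min() < 0 < eigs.max())

# V decreases strictly along a discretized orbit off the fixed point.
v = np.array([1.0, 0.3])
h = 0.01
vals = [V(v)]
for _ in range(200):
    v = v - h * grad_V(v)      # forward Euler step of x' = -grad V(x)
    vals.append(V(v))
monotone = all(b < a for a, b in zip(vals, vals[1:]))
```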
5 Limit sets
Lecture 7 and 8

5.1 Global flows . . . 27
5.2 𝜔 -limit sets . . . 28
5.3 Attractors . . . 32

In this chapter we study the asymptotic behavior of flows and in particular
the notion of limit point. The most convenient way to address asymptotic
behavior is to describe dynamics by global flows. We start off with a
discussion of global versus local flows.

5.1 Global flows

We showed in Lemma 3.2.1 that a local flow 𝜙𝑡 restricted to a compact
invariant set 𝑆 is a global flow on 𝑆 , i.e. 𝜙𝑡 is defined for all 𝑡 ∈ ℝ and
for all x ∈ 𝑆 . We explain now that every local flow can be understood
as a global flow if we reparametrize the time variable 𝑡 in an appropriate
manner. For completeness we state the definition of a global flow:1

Definition 5.1.1 A global flow, or flow is a continuous function 𝜙 : ℝ ×
ℝ^𝑛 → ℝ^𝑛 , denoted as a 1-parameter family 𝜙𝑡 , 𝑡 ∈ ℝ, which satisfies
(i) 𝜙_0 (x) = x for all x ∈ ℝ^𝑛 ;
(ii) 𝜙𝑡 (𝜙_𝑠 (x)) = 𝜙_{𝑡+𝑠} (x) for all 𝑠, 𝑡 ∈ ℝ and for all x ∈ ℝ^𝑛 .

1: The notion of a global flow can also be defined on arbitrary topological spaces such as metric spaces and smooth manifolds.

In practice we will assume that flows are 𝐶^1 -functions, although for
many of the definitions in this chapter differentiability is not needed.
A first example of a global flow is the exponential flow of a linear system of
differential equations, i.e. given ẋ = 𝐴x and the associated solution with
initial value x(0) = x0 given by x(𝑡) = 𝜙𝑡 (x0 ) = 𝑒^{𝑡𝐴} x0 , which is defined
for all 𝑡 ∈ ℝ and for all x0 ∈ ℝ^𝑛 . Consider the system of differential
equations given by

ẋ = f(x) / (1 + ‖f(x)‖), x(0) = x0 ∈ ℝ^𝑛 . (5.1)

Lemma 5.1.1 Let x0 ∈ ℝ 𝑛 . Then, the unique solution x(𝑡 ; x0 ) is defined for
all 𝑡 ∈ ℝ.

Proof. Define g(x) := f(x) / (1 + ‖f(x)‖). Since f ∈ 𝐶^1 (ℝ^𝑛 ; ℝ^𝑛 ) also g ∈ 𝐶^1 (ℝ^𝑛 ; ℝ^𝑛 ).
Therefore, given x0 ∈ ℝ^𝑛 , there exists a time 𝜏(x0 ) > 0 and a function
x(·; x0 ) : [−𝜏(x0 ), 𝜏(x0 )] → ℝ^𝑛 which solves the system, cf. Theorem 2.0.1.
Since ‖g(x)‖ < 1 for all x ∈ ℝ^𝑛 we have that

x(𝑡 ; x0 ) − x0 = ∫_0^𝑡 g(x(𝑠 ; x0 )) 𝑑𝑠,

and thus
‖x(𝑡 ; x0 )‖ < ‖x0 ‖ + 𝑡. (5.2)

Since this works for positive and negative times we have that ‖x(𝑡 ; x0 )‖ <
‖x0 ‖ + |𝑡 | . Given x0 and 𝜏(x0 ) inductively define

x_{𝑘+1} = x(𝜏(x_𝑘 ); x_𝑘 ), 𝑘 = 0, 1, · · · ,

where 𝜏(x_𝑘 ) > 0.2 Suppose ‖x_𝑘 ‖ → ∞ as 𝑘 → ∞ and 𝜏 := Σ_𝑘 𝜏(x_𝑘 ) <
∞. From (5.2) and the group property we obtain ‖x_{𝑛+1} ‖ < ‖x0 ‖ +
Σ_{𝑘=0}^{𝑛} 𝜏(x_𝑘 ) < ‖x0 ‖ + 𝜏 < ∞, which is a contradiction. This implies that
‖x(𝑡 ; x0 )‖ → ∞ only if 𝑡 → ∞, or 𝑡 → −∞, and thus x(𝑡 ; x0 ) exists for all
𝑡 ∈ ℝ.

2: The times 𝜏(x_𝑘 ) > 0 can be chosen by virtue of Theorem 2.0.1.

Let us now compare the systems

x̃′ = f(x̃), and ẋ = f(x) / (1 + ‖f(x)‖),

where x̃′ = 𝑑x̃/𝑑𝑠 , and define

𝑠(𝑡) := ∫_0^𝑡 1 / (1 + ‖f(x(𝜎; x0 ))‖) 𝑑𝜎, 𝑡 ∈ ℝ. (5.3)

The latter is well-defined for every 𝑡 ∈ ℝ since x(𝜎; x0 ) is defined for all
𝜎 ∈ ℝ by Lemma 5.1.1. Moreover, 𝑠(𝑡) is strictly increasing, hence invertible,
with inverse 𝑡(𝑠). Now consider the function x† (𝑠 ; x0 ) := x(𝑡(𝑠); x0 ) and compute

𝑑x† /𝑑𝑠 = (𝑑x/𝑑𝑡)(𝑑𝑡/𝑑𝑠) = ẋ (1 + ‖f(x(𝑡 ; x0 ))‖) = f(x(𝑡(𝑠); x0 )) = f(x† (𝑠 ; x0 )),

which shows that x† = x̃ by uniqueness of the initial value problem.
Therefore, the local flow 𝜙̃_𝑠 of x̃′ = f(x̃) can be obtained from the global
flow 𝜙𝑡 of ẋ = f(x)/(1 + ‖f(x)‖). In this case we say that 𝜙̃_𝑠 and 𝜙𝑡 are
topologically equivalent via time-reparametrization.3

3: In the general definition of topological equivalence we also allow a homeomorphism in the x-variable, cf. Sect. 4.3 on conjugations.

Theorem 5.1.2 The equations

x̃′ = f(x̃), and ẋ = f(x) / (1 + ‖f(x)‖),

are topologically equivalent, i.e. the associated local flow 𝜙̃_𝑠 and associated
global flow 𝜙𝑡 are topologically equivalent via time-reparametrization.

The method described here is not the only way to link the equation
x¤ = f(x) to a global flow. Since the orbits of the latter are obtained from
(5.1) via time-reparametrization the orbit structure of both equations is
exactly the same. We therefore, without loss of generality, may study the
global flow of (5.1).
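The normalization in (5.1) can be illustrated on a scalar field with finite-time blow-up. The sketch below is an example of our own, not from the notes: for f(x) = x² the original equation blows up at t = 1/x₀, while the normalized equation has right-hand side bounded by 1, so its solution satisfies the a-priori bound ‖x(t)‖ < ‖x₀‖ + t of (5.2) and exists for all time, tracing the same (monotone) orbit.

```python
import math

# Illustrative example (not from the notes): f(x) = x^2. The equation
# x' = x^2 from x0 = 1 blows up at t = 1; the normalized equation
# x' = f(x)/(1 + |f(x)|) of (5.1) is globally defined.

def g(x):
    fx = x * x
    return fx / (1.0 + abs(fx))

def rk4(x, t, n=50000):
    """Integrate the normalized scalar equation with n RK4 steps."""
    h = t / n
    for _ in range(n):
        k1 = g(x)
        k2 = g(x + 0.5*h*k1)
        k3 = g(x + 0.5*h*k2)
        k4 = g(x + h*k3)
        x += h/6*(k1 + 2*k2 + 2*k3 + k4)
    return x

x0, T = 1.0, 50.0
xT = rk4(x0, T)
still_finite = math.isfinite(xT)
bound_ok = abs(xT) < abs(x0) + T    # the estimate (5.2)
```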

5.2 𝜔 -limit sets

Let 𝜙𝑡 be a flow on ℝ^𝑛 and let x ∈ ℝ^𝑛 . Let 𝛾_x := {𝜙𝑡 (x) | 𝑡 ∈ ℝ} be the
orbit through x. Then 𝜙𝑡 (x) need not converge to a point x0 ∈ ℝ^𝑛 as
𝑡 → ±∞. This yields the following definition of limit point for flows.

Definition 5.2.1 A point x0 ∈ ℝ 𝑛 is called an omega limit point, or


𝜔 -limit point of x under a flow 𝜙𝑡 if there exists a sequence 𝑡 𝑛 → ∞
such that
lim 𝜙𝑡𝑛 (x) = x0 .
𝑛→∞

The set of all 𝜔 -limit points of x is denoted 𝜔(x; 𝜙𝑡 ) and is called the
omega limit set, or 𝜔 -limit set. If there is no ambiguity with respect to
the flow 𝜙𝑡 we write 𝜔(x).

An alpha limit point, or 𝛼 -limit point is defined by letting 𝑡 𝑛 → −∞. The


set of all alpha limit points is called the alpha limit set, or 𝛼 -limit set. We
have the relation 𝛼(x; 𝜙𝑡 ) = 𝜔(x; 𝜙−𝑡 ).

Definition 5.2.2 A point x0 ∈ ℝ 𝑛 is an omega limit point of a set 𝑈 ⊂ ℝ 𝑛


if there exist x𝑛 ∈ 𝑈 and 𝑡 𝑛 → ∞ such that

lim 𝜙𝑡𝑛 (x𝑛 ) = x0 .


𝑛→∞

The set of all 𝜔 -limit points of 𝑈 is denoted 𝜔(𝑈) and is called the
omega limit set, or 𝜔 -limit set of 𝑈 .

As before we can also define 𝛼(𝑈), the alpha limit set of 𝑈 , and
𝛼(𝑈 ; 𝜙𝑡 ) = 𝜔(𝑈 ; 𝜙_{−𝑡} ). The following lemma gives a useful characterization
of omega limit sets, and therefore also of alpha limit sets.

Lemma 5.2.1 Let 𝑈 ⊂ ℝ^𝑛 . Then,

𝜔(𝑈) = ∩_{𝑡≥0} cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈). (5.4)

Proof. Let x0 ∈ 𝜔(𝑈), with x_𝑛 ∈ 𝑈 and 𝑡_𝑛 → ∞ as in Definition 5.2.2,
and fix 𝑡 > 0. Then, 𝜙_{𝑡_𝑛} (x_𝑛 ) ∈ ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈) for all 𝑛 such that 𝑡_𝑛 ≥ 𝑡 .
Therefore, x0 ∈ cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈). Since this holds for every 𝑡 ≥ 0 we
obtain x0 ∈ ∩_{𝑡≥0} cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈).

Now let x0 ∈ ∩_{𝑡≥0} cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈). Then, x0 ∈ cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈) for all 𝑡 ≥ 0.
We can choose 𝑡_𝑛 → ∞ and x_𝑛 ∈ 𝑈 such that ‖𝜙_{𝑡_𝑛} (x_𝑛 ) − x0 ‖ < 1/𝑛,
which proves that x0 ∈ 𝜔(𝑈).
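The definition of an 𝜔-limit point can be probed numerically. The sketch below is an illustrative example, not part of the notes: for the damped oscillator ẋ = y, ẏ = −x − y/2 every orbit spirals into the origin, so 𝜔(x₀) = {0} and 𝜙_{t_n}(x₀) → 0 along any sequence t_n → ∞ (the RK4 integrator is our own choice).

```python
import math

# Illustrative example (not from the notes): damped oscillator
# x' = y, y' = -x - y/2, whose omega limit set is {0} for every point.

def f(v):
    x, y = v
    return (y, -x - 0.5*y)

def flow(v, t, n=20000):
    """Approximate phi_t(v) with n RK4 steps."""
    h = t / n
    for _ in range(n):
        k1 = f(v)
        k2 = f((v[0] + 0.5*h*k1[0], v[1] + 0.5*h*k1[1]))
        k3 = f((v[0] + 0.5*h*k2[0], v[1] + 0.5*h*k2[1]))
        k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
        v = (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return v

v0 = (3.0, -1.0)
samples = [flow(v0, t) for t in (20.0, 40.0, 60.0)]  # phi_{t_n}(v0), t_n -> inf
dists = [math.hypot(v[0], v[1]) for v in samples]    # distance to the limit point 0
```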

Next we give some properties of omega limit sets (and alpha limit sets)
that are crucial in the application to flows.

Proposition 5.2.2 The omega limit set 𝜔(𝑈) is closed, invariant, and contained
in cl Γ^+ (𝑈).a If 𝑈 ⊂ ℝ^𝑛 is forward invariant, then

𝜔(𝑈) = ∩_{𝑡≥0} cl 𝜙𝑡 (𝑈) = ∩_{𝑡≥0} 𝜙𝑡 (cl 𝑈),b (5.5)

and 𝜔(𝑈) ⊂ cl 𝑈 (equality when 𝑈 is invariant).

a: The 𝜏-forward image of a set 𝑈 is defined as Γ^+_𝜏 (𝑈) := ∪_{𝑠≥𝜏} 𝜙_𝑠 (𝑈). The 𝜏-backward image is defined similarly.
b: We emphasize that the latter equality holds since 𝜙𝑡 (·) are homeomorphisms for all 𝑡 .

Proof. Observe that 𝜔(𝑈) = ∩_{𝑡≥0} cl ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈) = ∩_{𝑡≥0} cl Γ^+_𝑡 (𝑈) ⊂
cl Γ^+ (𝑈), and Γ^+_𝑡 (𝑈) ⊂ Γ^+_{𝑡′} (𝑈) for all 𝑡′ ≤ 𝑡 . Since the forward images are
nested we have that 𝜔(𝑈) = ∩_{𝑡≥𝑠} cl Γ^+_𝑡 (𝑈).4 Then, for 𝑠 ∈ ℝ,

𝜙_𝑠 (𝜔(𝑈)) = 𝜙_𝑠 (∩_{𝑡≥0} cl Γ^+_𝑡 (𝑈)) = ∩_{𝑡≥0} 𝜙_𝑠 (cl Γ^+_𝑡 (𝑈))
= ∩_{𝑡≥0} cl 𝜙_𝑠 (Γ^+_𝑡 (𝑈)) = ∩_{𝑡≥0} cl Γ^+_{𝑡+𝑠} (𝑈) = ∩_{𝑡≥𝑠} cl Γ^+_𝑡 (𝑈)
= 𝜔(𝑈),

which establishes the invariance of 𝜔(𝑈).

In the case 𝑈 ⊂ ℝ^𝑛 is forward invariant, the group property implies
𝜙_{𝑡+𝑠} (𝑈) = 𝜙𝑡 (𝜙_𝑠 (𝑈)) ⊂ 𝜙𝑡 (𝑈) for all 𝑠, 𝑡 ≥ 0. Therefore, ∪_{𝑠≥𝑡} 𝜙_𝑠 (𝑈) =
𝜙𝑡 (𝑈), which completes the proof.

4: Since forward images are nested it holds that ∩_{𝑡≥𝑠} cl Γ^+_𝑡 (𝑈) = ∩_{𝑡≥0} cl Γ^+_𝑡 (𝑈).

The same properties apply to alpha limit sets by reversing the time:
𝑡 ↦ −𝑡 . In most applications we use the omega limit set of a compact, or
pre-compact5 set 𝑈 ⊂ ℝ^𝑛 . The following proposition provides a list of
properties in the case a compactness assumption is made.

5: Recall that a set 𝑈 is pre-compact if cl 𝑈 is a compact set in ℝ^𝑛 .

Proposition 5.2.3 Suppose Γ⁺_𝜏(𝑈) is pre-compact for some 𝜏 ≥ 0. Then,

(i) 𝜔(𝑈) is compact;
(ii) 𝑈 ≠ ∅ implies 𝜔(𝑈) ≠ ∅;
(iii) 𝑈 connected implies that 𝜔(𝑈) is connected;
(iv) for all x ∈ 𝑈, 𝑑(𝜙_𝑡(x), 𝜔(𝑈)) → 0 as 𝑡 → ∞.

Proof. For 𝑡 ≥ 𝜏, the sets cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈) ⊂ cl Γ⁺_𝜏(𝑈) are compact, and thus 𝜔(𝑈) = ⋂_{𝑡≥𝜏} cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈) ⊂ cl Γ⁺_𝜏(𝑈) is compact. Since the latter is an intersection of nested non-empty compact sets it is non-empty, which establishes (i) and (ii).

Since 𝑈 is connected, 𝜙_𝑡(Γ⁺(𝑈)) is connected. Using the pre-compactness of Γ⁺(𝑈) we derive that cl 𝜙_𝑡(Γ⁺(𝑈)) is a nested sequence of compact and connected sets. Therefore, ⋂_{𝑡≥0} cl 𝜙_𝑡(Γ⁺(𝑈)) is connected, which proves (iii).

Suppose⁶ 𝑑(𝜙_𝑡(x), 𝜔(𝑈)) ↛ 0 as 𝑡 → ∞. Then 𝑑(𝜙_{𝑡_𝑛}(x), 𝜔(𝑈)) ≥ 𝛿 > 0 for some sequence 𝑡_𝑛 → ∞. Since Γ⁺_𝜏(𝑈) is pre-compact, the sequence {𝜙_{𝑡_𝑛}(x)} has a limit point y with 𝑑(y, 𝜔(𝑈)) > 0, which is a contradiction and therefore proves Property (iv). ∎

6: Recall that the distance between a point x ∈ ℝⁿ and a set 𝑈 ⊂ ℝⁿ is defined as 𝑑(x, 𝑈) := inf_{x′∈𝑈} ‖x − x′‖.
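Property (iv) can be probed numerically: the tail of a long forward trajectory approximates 𝜔(x). The following is a minimal sketch in Python; the linear sink ẋ = −x (for which 𝜔(x) = {0} for every x), the step size and the integration horizon are illustrative choices, not part of the notes.

```python
import math

def rk4_step(f, x, h):
    """One classical Runge-Kutta step for x' = f(x) in the plane."""
    k1 = f(x)
    k2 = f([x[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([x[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([x[i] + h * k3[i] for i in range(2)])
    return [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

# Illustrative vector field: a linear sink, so omega(x) = {0} for every x.
def f(x):
    return [-x[0], -x[1]]

def omega_tail(f, x0, t_end=20.0, h=0.01, tail=100):
    """Integrate forward and return the last `tail` points as a crude
    approximation of the omega limit set of x0."""
    x, pts = list(x0), []
    for _ in range(int(t_end / h)):
        x = rk4_step(f, x, h)
        pts.append(tuple(x))
    return pts[-tail:]

tail = omega_tail(f, [3.0, -2.0])
# All tail points cluster near the equilibrium at the origin.
max_dist = max(math.hypot(px, py) for px, py in tail)
```

Here `omega_tail` only witnesses convergence toward 𝜔(x); it cannot certify the limit set, which is what the propositions above do rigorously.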

The next proposition lists a number of properties of omega limit sets that do not require any compactness conditions.

Proposition 5.2.4 (Set theoretic properties) Let 𝑈, 𝑉 ⊂ ℝⁿ. Then the omega limit sets satisfy the following properties:
(i) if 𝑉 ⊂ 𝑈, then 𝜔(𝑉) ⊂ 𝜔(𝑈);
(ii) 𝜔(𝑈 ∪ 𝑉) = 𝜔(𝑈) ∪ 𝜔(𝑉) — additive property;
(iii) 𝜔(𝑈 ∩ 𝑉) ⊂ 𝜔(𝑈) ∩ 𝜔(𝑉) — sub-multiplicative property;
(iv) if 𝑉 ⊂ 𝜔(𝑈), then 𝜔(𝑉) ⊂ 𝜔(𝑈);
(v) 𝜔(𝑈) = 𝜔(cl 𝑈), i.e. cl 𝜔(𝑈) = 𝜔(cl 𝑈);
(vi) 𝜔(𝜔(𝑈)) = 𝜔(𝑈) — idempotent property;
(vii) 𝜔(𝑈) = 𝜔(𝜙_𝑡(𝑈)) for all 𝑡 ∈ ℝ;
(viii) if there exists a backward orbit 𝛾⁻_x ⊂ 𝑈, then x ∈ 𝜔(𝑈).

Proof. Property (i) follows from the characterization in Lemma 5.2.1. As for Property (ii) we argue as follows:

    cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈 ∪ 𝑉) = cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈) ∪ cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑉),

and since the sets are nested the property follows. As for the intersection we argue as follows. Note that 𝑈 ∩ 𝑉 ⊂ 𝑈 and 𝑈 ∩ 𝑉 ⊂ 𝑉, and by (i) 𝜔(𝑈 ∩ 𝑉) ⊂ 𝜔(𝑈) and 𝜔(𝑈 ∩ 𝑉) ⊂ 𝜔(𝑉). Combining these gives 𝜔(𝑈 ∩ 𝑉) ⊂ 𝜔(𝑈) ∩ 𝜔(𝑉), which proves (iii).

By the invariance of 𝜔(𝑈), 𝜙(𝑡, 𝑉) ⊂ 𝜙(𝑡, 𝜔(𝑈)) ⊂ 𝜔(𝑈). Since the latter is closed it follows that 𝜔(𝑉) ⊂ 𝜔(𝑈), proving (iv).

Since 𝑈 ⊂ cl 𝑈 it follows that 𝜔(𝑈) ⊂ 𝜔(cl 𝑈). As for the reversed inclusion we argue as follows. Since

    ⋃_{𝑠≥𝑡} 𝜙(𝑠, cl 𝑈) ⊂ ⋃_{𝑠≥𝑡} cl 𝜙(𝑠, 𝑈) ⊂ cl ⋃_{𝑠≥𝑡} 𝜙(𝑠, 𝑈),

it follows that ⋃_{𝑠≥𝑡} 𝜙_𝑠(cl 𝑈) ⊂ cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈), and therefore

    cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(cl 𝑈) ⊂ cl ⋃_{𝑠≥𝑡} 𝜙_𝑠(𝑈).

For the omega limit sets this implies that 𝜔(cl 𝑈) ⊂ 𝜔(𝑈), which proves (v).

For the idempotency property we argue as follows. Since 𝜔(𝑈) is closed and invariant we have

    𝜔(𝜔(𝑈)) = cl 𝜔(𝑈) = 𝜔(𝑈),

which proves (vi).

Since 𝜔(Γ⁺(𝑈)) = 𝜔(Γ⁺(𝜙_𝑡(𝑈))) and Γ⁺(𝜙_𝑡(𝑈)) = 𝜙_𝑡(Γ⁺(𝑈)), Property (vii) follows.

If there exists a backward orbit 𝛾⁻_x ⊂ 𝑈, then y_𝑛 = 𝛾⁻_x(−𝑡_𝑛) ∈ 𝑈, 𝑡_𝑛 → ∞, has the property that x = 𝜙_{𝑡_𝑛}(y_𝑛) for all 𝑛, which shows that x ∈ 𝜔(𝑈) and proves Property (viii). ∎

Remark 5.2.1 The infinite additive property is not true in general, i.e. in general only the inclusion ⋃_{x∈𝑈} 𝜔(x) ⊂ 𝜔(𝑈) holds. In case the additive property holds for arbitrary unions we say that the operator is completely additive. As a matter of fact, complete super-additivity does hold for 𝜔. With respect to intersections we have complete sub-multiplicativity.

5.3 Attractors

In order to understand the structure of a flow on ℝ 𝑛 we consider special


forward invariant sets.

Definition 5.3.1 A trapping region is a forward invariant set 𝑈 ⊂ ℝ 𝑛 ,


such that 𝜙𝜏 (cl 𝑈) ⊂ int (𝑈) for some 𝜏 > 0.

Due to forward invariance we have, for 𝑡 ≥ 0,

    𝜙_𝑡(𝜙_𝜏(cl 𝑈)) = 𝜙_𝜏(𝜙_𝑡(cl 𝑈)) ⊂ 𝜙_𝜏(cl 𝜙_𝑡(𝑈)) ⊂ 𝜙_𝜏(cl 𝑈) ⊂ int 𝑈.    (5.6)

Consequently, 𝜙_𝑡(cl 𝑈) ⊂ int 𝑈 for all 𝑡 ≥ 𝜏 > 0.

Definition 5.3.2 A set 𝐴 ⊂ ℝⁿ is called an attractor if there exists a trapping region 𝑈 ⊂ ℝⁿ such that 𝐴 = 𝜔(𝑈).

By the properties of omega limit sets we conclude that an attractor is an invariant set contained in 𝑈. The definition however provides more properties. To start with, a trapping region may yield an empty attractor! The latter cannot happen if some compactness properties are added. We start with a list of the most apparent properties.
▶ If 𝑈 ⊂ ℝⁿ is bounded and non-empty, then 𝐴 = 𝜔(𝑈) is non-empty, compact and invariant. Indeed, by the forward invariance of 𝑈, Γ⁺_𝜏(𝑈) ⊂ 𝑈 ⊂ ℝⁿ is bounded and therefore pre-compact. Proposition 5.2.3(i)-(ii) then implies the desired properties.

▶ As a consequence of compactness and invariance, 𝐴 = 𝜔(𝐴); a direct consequence of Proposition 5.2.2.

▶ An attractor is contained in a neighborhood of itself, i.e. 𝐴 ⊂ int 𝑈. Indeed, by invariance of 𝐴 and the trapping property of 𝑈 we have 𝐴 = 𝜙_𝜏(𝐴) ⊂ 𝜙_𝜏(cl 𝑈) ⊂ int 𝑈.

▶ An attractor is the maximal invariant set⁷ in a neighborhood of itself, i.e. 𝐴 = Inv 𝑈 = Inv(cl 𝑈). Let 𝑈 ⊂ ℝⁿ be a trapping region such that 𝐴 = 𝜔(𝑈) and define 𝑆 = Inv(cl 𝑈). Consequently,

    𝑆 = 𝜙_𝜏(𝑆) ⊂ 𝜙_𝜏(cl 𝑆) ⊂ int 𝑈 ⊂ 𝑈,

and therefore 𝑆 ⊂ Inv 𝑈 ⊂ Inv(cl 𝑈) = 𝑆, which proves that Inv 𝑈 = Inv(cl 𝑈). Now let 𝑆′ ⊂ 𝑈 be an invariant set. Then,

    𝑆′ = ⋂_{𝑡≥0} 𝜙_𝑡(𝑆′) ⊂ ⋂_{𝑡≥0} 𝜙_𝑡(cl 𝑆′) = 𝜔(𝑆′) ⊂ 𝜔(𝑈),

which shows that every invariant set in 𝑈 is contained in 𝜔(𝑈), and thus also Inv 𝑈 ⊂ 𝜔(𝑈). This implies Inv 𝑈 = 𝜔(𝑈) = 𝐴.

7: The maximal invariant set inside a set 𝑈 is defined as the union of all invariant sets contained in 𝑈. Notation: Inv 𝑈.
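A candidate trapping region can be screened numerically by checking that the vector field points into the region along its boundary. A minimal sketch, assuming the illustrative planar field f(x, y) = (x − y − x(x² + y²), x + y − y(x² + y²)) (not taken from the notes), whose radial component is ṙ = r(1 − r²); the annulus 1/2 ≤ |x| ≤ 2 should then be a trapping region, and by the first bullet above its omega limit set is a non-empty compact attractor:

```python
import math

def f(x, y):
    """Illustrative planar vector field with radial component r*(1 - r^2)."""
    r2 = x * x + y * y
    return (x - y - x * r2, x + y - y * r2)

def radial_speed(x, y):
    """d|x|/dt = (x . f(x)) / |x|: positive means the flow moves outward."""
    fx, fy = f(x, y)
    return (x * fx + y * fy) / math.hypot(x, y)

# Sample both boundary circles of the annulus 1/2 <= |x| <= 2.
angles = [2 * math.pi * k / 100 for k in range(100)]
inward_on_inner = all(radial_speed(0.5 * math.cos(t), 0.5 * math.sin(t)) > 0
                      for t in angles)
inward_on_outer = all(radial_speed(2.0 * math.cos(t), 2.0 * math.sin(t)) < 0
                      for t in angles)
```

A finite boundary sample is of course only evidence, not a proof; for this field the sign of ṙ = r(1 − r²) on the two circles can be checked by hand.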
Attractors reveal information about the global structure of flows. In
Chapter 7 we will use trapping regions and attractors to study global
properties of flows in ℝ2 .
6 Poincaré sections
Lecture 8

6.1 Poincaré maps
6.2 The flow box theorem

In this chapter we introduce the notion of a transverse section, or Poincaré section. Poincaré sections are a useful tool for analyzing the linearized behavior of periodic orbits. They also play a crucial role in the proof of the Poincaré-Bendixson theorem.
6.1 Poincaré maps

Recall that the velocity of a flow 𝜙_𝑡 at a point x₀ is given by the vector field: 𝜙̇_𝑡(x₀)|_{𝑡=0} = f(x₀).

Definition 6.1.1 For a choice of x₀ and n ∈ ℝⁿ the hyperplane Σ ⊂ ℝⁿ defined by

    Σ := {x | (x − x₀) · n = 0},

is called a local Poincaré section for 𝜙_𝑡 at x₀ if f(x₀) · n ≠ 0. We say that the vector field f is transverse to Σ at x₀. Notation: f ⋔ Σ.

Transversality of f and Σ at x₀ implies transversality at points x in a neighborhood of x₀, since f(x) · n is a continuous function of x. Transversality need not hold for all x ∈ ℝⁿ! This motivates the terminology 'local' Poincaré section.

Note that a Poincaré section exists at every point x₀ which is not an equilibrium point for 𝜙_𝑡, i.e. f(x₀) ≠ 0.¹

1: One can define a Poincaré section more generally as a hypersurface. Since we use Poincaré sections as a local tool (locally near x₀) hyperplanes suffice.

Let 𝛾(𝑡) = 𝜙_𝑡(x₀) be a periodic orbit for 𝜙_𝑡 with period 𝑇 > 0, cf. Defn. 3.2.2. A Poincaré section Σ at x₀ is referred to as a Poincaré section for 𝛾. The flow 𝜙_𝑡 'carries' the point x₀ back to the section Σ. Points x near x₀ are expected to display the same behavior. The next lemma quantifies this behavior.
Lemma 6.1.1 Let 𝜙_𝑡 be a 𝐶¹-flow on ℝⁿ and let 𝛾(𝑡) be a periodic orbit for 𝜙_𝑡 with period 𝑇 > 0, with a Poincaré section Σ for 𝛾 at a point x₀ ∈ 𝛾. Then, there exist a 𝛿 > 0 and a unique 𝐶¹-function 𝜏 : 𝐵_𝛿(x₀) → ℝ such that
(i) 𝜏(x₀) = 𝑇;
(ii) 𝜙_{𝜏(x)}(x) ∈ Σ for all x ∈ 𝐵_𝛿(x₀).
The map 𝜏 is called the first return-time.

Recall that the open ball 𝐵_𝛿(x₀) is given by 𝐵_𝛿(x₀) = {x ∈ ℝⁿ | ‖x − x₀‖ < 𝛿}.

Proof. The flow 𝜙𝑡 (x) is a 𝐶 1 -function in both 𝑡 and x, cf. proper ref.
Define the map

𝐹 : ℝ𝑛 × ℝ → ℝ , 𝐹(x , 𝑡) = (𝜙𝑡 (x) − x0 ) · n,

which by the properties of 𝜙𝑡 (x) is a 𝐶 1 -map on ℝ 𝑛 × ℝ. Note that


𝐹(x0 , 𝑇) = (𝜙𝑇 (x0 ) − x0 ) · n = (x0 − x0 ) · n = 0 and thus (x0 , 𝑇) is a zero
of 𝐹 . Moreover,

𝜕𝑡 𝐹(x , 𝑡) = 𝜙¤ 𝑡 (x) · n = f(𝜙𝑡 (x)) · n ,



and by the assumption that Σ is a Poincaré section at x0 we have


𝜕𝑡 𝐹(x0 , 𝑇) = f(x0 ) · n ≠ 0. By the Implicit Function Theorem, cf. Thm.
A.1.1, there exist a neighborhood N(x₀) ⊂ ℝⁿ and a 𝐶¹-function 𝜏 : N(x₀) → ℝ such that 𝜏(x₀) = 𝑇 and 𝐹(x, 𝜏(x)) = 0 for all x ∈ N(x₀).² The latter implies that 𝜙_{𝜏(x)}(x) ∈ Σ for all x ∈ N(x₀). Now choose 𝛿 > 0 sufficiently small such that 𝐵_𝛿(x₀) ⊂ N(x₀). ∎

2: The derivative of 𝜏 is given by 𝐷_x𝜏(x) = −𝐷_x𝐹(x, 𝜏(x))/𝜕_𝑡𝐹(x, 𝜏(x)).

Using the above lemma we can define the first return map, or Poincaré map, at Σ:³

    𝑃 : Σ ∩ 𝐵_𝛿(x₀) → Σ,  x ↦ 𝑃(x) := 𝜙_{𝜏(x)}(x).

3: The idea of a return map can also be developed in the absence of a periodic orbit. In the case a point x₀ is mapped to Σ under the flow, one can go through the steps of Lemma 6.1.1 in order to construct a first return-time and thus a Poincaré map.

For Poincaré maps of periodic orbits we have the following properties:

Theorem 6.1.2 Let 𝑃 : Σ ∩ 𝐵 𝛿 (x0 ) → Σ be a Poincaré map for a periodic


orbit 𝛾 as defined above. Then, 𝑃 is a diffeomorphism and 𝑃(x0 ) = x0 .

Proof. By Lemma 6.1.1 the composition 𝜙_{𝜏(x)}(x) is a 𝐶¹-map in x ∈ 𝐵_𝛿(x₀). The fixed point property follows from the fact that 𝛾 is a periodic orbit through x₀ with period 𝑇 > 0. As for the invertibility we argue as follows. At a point y = 𝑃(x) we reverse the flow, i.e. 𝜓_𝑡 := 𝜙_{−𝑡}, and apply Lemma 6.1.1 to 𝜓_𝑡 with first return-time 𝜎. This yields

    x = 𝜓_{𝜎(y)}(y) = 𝜙_{−𝜎(y)}(y),

and thus the inverse is given by y ↦ 𝑃⁻¹(y) = 𝜙_{−𝜎(y)}(y), which is a 𝐶¹-function on a neighborhood of x₀, proving that 𝑃 is a diffeomorphism. ∎
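The first return-time and Poincaré map can also be computed numerically: integrate until the orbit re-crosses the section transversally, then refine the crossing time by bisection (a crude stand-in for the implicit-function-theorem step in the proof). A sketch under illustrative assumptions: the field f(x, y) = (x − y − x(x² + y²), x + y − y(x² + y²)), for which the unit circle is a periodic orbit of period 2π, and the section Σ = {y = 0, x > 0}; neither choice comes from the notes.

```python
def f(p):
    """Illustrative planar field: unit circle is a period-2*pi orbit."""
    x, y = p
    r2 = x * x + y * y
    return (x - y - x * r2, x + y - y * r2)

def rk4(p, h):
    """One RK4 step of size h for p' = f(p)."""
    add = lambda p, k, c: (p[0] + c * k[0], p[1] + c * k[1])
    k1 = f(p)
    k2 = f(add(p, k1, h / 2))
    k3 = f(add(p, k2, h / 2))
    k4 = f(add(p, k3, h))
    return (p[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            p[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def poincare_map(x0, h=1e-3):
    """First return P(x0) to the section {y = 0, x > 0}, starting at (x0, 0)."""
    p = (x0, 0.0)
    while True:
        prev, p = p, rk4(p, h)
        # Detect a crossing of the section from y < 0 to y >= 0 with x > 0.
        if prev[1] < 0.0 <= p[1] and p[0] > 0.0:
            break
    # Refine the crossing time within the last step by bisection.
    lo, hi = 0.0, h
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rk4(prev, mid)[1] < 0.0:
            lo = mid
        else:
            hi = mid
    return rk4(prev, 0.5 * (lo + hi))[0]
```

For this field P contracts toward its fixed point x₀ = 1, consistent with Theorem 6.1.2's fixed point property 𝑃(x₀) = x₀ on the periodic orbit.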

The following variation on the first return map describes the flow near a Poincaré section.

Lemma 6.1.3 Let Σ be a Poincaré section at x₀ and suppose 𝜙_𝑇(y) = x₀ for some 𝑇 ≥ 0 and some y ∈ ℝⁿ. Then, there exist a neighborhood 𝐵_𝛿(y) and a 𝐶¹-function 𝜏 : 𝐵_𝛿(y) → ℝ such that
(i) 𝜏(y) = 𝑇;
(ii) 𝜙_{𝜏(x)}(x) ∈ Σ for all x ∈ 𝐵_𝛿(y).

One can apply Lemma 6.1.3 to points y ∈ Σ with 𝑇 = 0.

Proof. As in the proof of Lemma 6.1.1 define the map

    𝐹 : ℝⁿ × ℝ → ℝ,  𝐹(x, 𝑡) = (𝜙_𝑡(x) − x₀) · n,

which by the properties of 𝜙_𝑡(x) is a 𝐶¹-map on ℝⁿ × ℝ. Note that 𝐹(y, 𝑇) = (𝜙_𝑇(y) − x₀) · n = (x₀ − x₀) · n = 0 and thus (y, 𝑇) is a zero of 𝐹. Moreover,

    𝜕_𝑡𝐹(x, 𝑡) = 𝜙̇_𝑡(x) · n = f(𝜙_𝑡(x)) · n,

and by the assumption that Σ is a Poincaré section at x₀ we have 𝜕_𝑡𝐹(y, 𝑇) = f(x₀) · n ≠ 0. As before, by the Implicit Function Theorem, cf. Thm. A.1.1, there exist a neighborhood N(y) ⊂ ℝⁿ and a 𝐶¹-function 𝜏 : N(y) → ℝ such that 𝜏(y) = 𝑇 and 𝐹(x, 𝜏(x)) = 0 for all x ∈ N(y). The latter implies that 𝜙_{𝜏(x)}(x) ∈ Σ for all x ∈ N(y). Now choose 𝛿 > 0 sufficiently small such that 𝐵_𝛿(y) ⊂ N(y). ∎

6.2 The flow box theorem

With the stable/unstable manifold theorems and the Grobman-Hartman theorem one describes the local behavior of a flow near a hyperbolic equilibrium point via a local conjugacy. In the case that x₀ is a non-equilibrium point it is easier to describe the local behavior. To simplify matters, assume without loss of generality that x₀ = 0 and apply a change of variables such that f(0) = 𝑒₁,⁴ which is possible since f(0) ≠ 0. Let Σ be a local Poincaré section at 0 defined by Σ = {x | x · 𝑒₁ = 0}, i.e. points of the form (0, 𝑥₂, ⋯, 𝑥ₙ), cf. Definition 6.1.1. We define the map

    ℎ : ℝ × Σ → ℝⁿ,  (𝑠, x) ↦ ℎ(𝑠, x) := 𝜙_𝑠(x).    (6.1)

4: The vectors 𝑒₁ = (1, 0, ⋯, 0), 𝑒₂ = (0, 1, 0, ⋯, 0), etc. denote the unit basis vectors of ℝⁿ.

For a 𝐶¹-flow, ℎ is a 𝐶¹-function of 𝑠 and x. The idea is now to apply the inverse function theorem at (0, 0). Note that 𝐷ℎ(0, 0) = id on ℝ × Σ and therefore ℎ is a diffeomorphism on a neighborhood (−𝜏, 𝜏) × Σ_𝛿, with Σ_𝛿 = Σ ∩ 𝐵_𝛿(0) and 𝜏 and 𝛿 sufficiently small.
Consider a point (𝑠, x) ∈ (−𝜏, 𝜏) × Σ_𝛿. Then ℎ(𝑠, x) ∈ ℝⁿ, and for 𝑡 + 𝑠 ∈ (−𝜏, 𝜏) we have that

    𝜙_𝑡(ℎ(𝑠, x)) = 𝜙_𝑡(𝜙_𝑠(x)) = 𝜙_{𝑡+𝑠}(x) = ℎ(𝑡 + 𝑠, x),

and therefore

    ℎ⁻¹(𝜙_𝑡(ℎ(𝑠, x))) = (𝑡 + 𝑠, x),

which is the linear flow generated by the constant vector field g(x) = 𝑒₁. This shows that near non-equilibrium points 𝜙_𝑡 is locally conjugate to a linear flow.

Theorem 6.2.1 Let x₀ be a non-equilibrium point for 𝜙_𝑡. Then, there exists a local diffeomorphism ℎ defined in a neighborhood of x₀ with ℎ(x₀) = 0 such that

    𝜙_𝑡(ℎ(𝑠, x)) = ℎ(𝜓_𝑡(𝑠, x)),

where 𝜓_𝑡 is the linear flow generated by the constant vector field g(x) = 𝑒₁.


7 The Poincaré-Bendixson Theorem
Lecture 9 and 10

7.1 Three formulations of the Poincaré-Bendixson Theorem
7.2 Proof of the Poincaré-Bendixson Theorem

In this chapter we consider flows on ℝ² and the implications for the structure of their omega limit sets. We prove the well-known Poincaré-Bendixson Theorem for planar flows and discuss some variations.

7.1 Three formulations of the Poincaré-Bendixson Theorem

Flows 𝜙_𝑡 on ℝ² have more 'room' than 1-dimensional flows, which are easy to classify via their equilibrium points. In dimension 2 there is more manoeuvrability, but still the global behavior is well-behaved. One example of this is the celebrated Poincaré-Bendixson Theorem. As before we assume that 𝜙_𝑡 is a (globally defined) flow in the plane which satisfies the differential equation

    ẋ = f(x),

where f(x) = 𝜙̇_𝑡(x)|_{𝑡=0} and thus f : ℝ² → ℝ² is 𝐶¹.

Theorem 7.1.1 Let 𝜙_𝑡 be a 𝐶¹-flow on ℝ². Let x ∈ ℝ² and suppose that
(i) 𝛾⁺_x is a forward orbit contained in a compact set 𝑈 ⊂ ℝ²;
(ii) 𝜔(x) = 𝜔(𝛾⁺_x) contains no equilibrium points.
Then, 𝜔(x) ⊂ 𝑈 is a periodic orbit for 𝜙_𝑡.

A direct consequence of this result in the case of trapping regions yields the existence of non-trivial attractors containing periodic orbits.

Corollary 7.1.2 Let 𝜙_𝑡 be a 𝐶¹-flow on ℝ² and let 𝑈 ⊂ ℝ² be a non-empty, bounded trapping region. If 𝜙_𝑡 has no equilibrium points in 𝑈, then for every x ∈ 𝑈 the omega limit set 𝜔(x) ⊂ int 𝑈 is a periodic orbit.

The periodic orbit is a subset of the attractor 𝐴 = 𝜔(𝑈). For points x ∈ 𝑈 it holds that 𝛾_x is a periodic orbit, or x 'converges' to a periodic orbit.
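The corollary can be observed numerically. For the illustrative field f(x, y) = (x − y − x(x² + y²), x + y − y(x² + y²)) (an assumption of this sketch, not an example from the notes) the radial equation ṙ = r(1 − r²) makes the unit circle a periodic orbit without equilibria nearby, and orbits starting inside and outside both accumulate on it:

```python
import math

def f(x, y):
    """Illustrative planar field: unit circle is an attracting periodic orbit."""
    r2 = x * x + y * y
    return x - y - x * r2, x + y - y * r2

def flow(x, y, t_end, h=1e-3):
    """Crude RK4 integration of the field from (x, y) up to time t_end."""
    for _ in range(int(t_end / h)):
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

# Both orbits accumulate on the unit circle, the predicted omega limit set.
r_inside = math.hypot(*flow(0.1, 0.0, 20.0))
r_outside = math.hypot(*flow(3.0, 0.0, 20.0))
```

Starting at radius 0.1 and radius 3, both trajectories end up at radius ≈ 1, illustrating that 𝜔(x) is the periodic orbit for every non-equilibrium starting point.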

The final version of the Poincaré-Bendixson Theorem we state includes the case that 𝜔(x) may contain equilibrium points.

Theorem 7.1.3 Let 𝜙_𝑡 be a 𝐶¹-flow on ℝ². Let x ∈ ℝ² and suppose that 𝛾⁺_x is a forward orbit contained in a compact set 𝑈 ⊂ ℝ². Then, either
(i) 𝜔(x) ⊂ 𝑈 is a periodic orbit for 𝜙_𝑡;
(ii) 𝜔(x) is an equilibrium point, or;
(iii) 𝜔(x) consists of finitely many equilibrium points {x_𝑖} and a countable set of orbits {𝛾_𝑗} such that for every x₀ ∈ 𝛾_𝑗, 𝜔(x₀), 𝛼(x₀) ∈ {x_𝑖}.
The orbits in (iii) are called heteroclinic orbits between the equilibrium points in {x_𝑖}.

7.2 Proof of the Poincaré-Bendixson Theorem

We prove the Poincaré-Bendixson Theorem 7.1.1 via a series of lemmas about orbits and their limiting behavior in ℝ². The arguments strongly use the topology of the plane.

A regular parametrized curve is a piecewise smooth map¹ x : [0, 2𝜋] → ℝ² for which the velocity vector is non-zero at every point where the derivative is defined.² We denote the image in the plane by 𝛾 = x([0, 2𝜋]). A parametrized curve is closed if x(0) = x(2𝜋).

1: Piecewise smooth indicates that the parametrization is continuous and has at most finitely many points on the curve where the derivative is not defined. At all other points the parametrization is differentiable.
2: The parametrized curve 𝑡 ↦ (𝑡², 𝑡³) is regular; the velocity is not defined at 𝑡 = 0.

If a regular parametrized curve x : [0, 2𝜋] → ℝ² satisfies
(i) x(𝑠) ≠ x(𝑡) for all 𝑠 ≠ 𝑡, 𝑠, 𝑡 ∈ [0, 2𝜋);
(ii) x(0) = x(2𝜋),
it is called a (regular) simple closed curve, or regular Jordan curve. The image 𝛾 ⊂ ℝ² is also referred to as a regular Jordan curve.³

3: Jordan curves here are understood to be smooth, regular and oriented via the chosen parametrization. If necessary we refer to these as oriented regular Jordan curves.

Regular Jordan curves in the plane have the special property that they 'divide up' the plane into two regions.

Proposition 7.2.1 (Jordan curve Theorem) Let 𝛾 be a regular Jordan curve.


Then, ℝ2 \ 𝛾 consists of two connected components 𝑅 − (𝛾) and 𝑅 + (𝛾), where
𝑅 − (𝛾) is unbounded, called the exterior, and 𝑅 + (𝛾) is bounded, called the
interior.

Let x₀, x₁ ∈ ℝ² and consider the line-segment 𝐿 = {(1 − 𝜇)x₀ + 𝜇x₁ | 𝜇 ∈ [0, 1]} ⊂ ℝ². We say that a line-segment 𝐿 is a Poincaré section for f if f ⋔ 𝐿, cf. Definition 6.1.1. Let n be the unit normal to 𝐿; after possibly replacing n by −n we have

    0 < f(x) · n,  ∀x ∈ 𝐿.

Points on 𝐿 are parametrized via their 𝜇-coordinate.

Lemma 7.2.2 Suppose 𝐿 is a Poincaré section for 𝜙_𝑡. Let x ∈ 𝐿 and let 𝛾⁺_x = {𝜙_𝑡(x) | 𝑡 ≥ 0}. If 𝛾⁺_x intersects 𝐿 at consecutive points 𝜙_{𝑡_𝑖}(x) with 0 = 𝑡₀ < 𝑡₁ < ⋯ < 𝑡_𝑛, then either
(i) {𝜙_{𝑡_𝑖}(x)} ⊂ 𝐿 is a constant sequence, in which case 𝛾⁺_x defines a periodic orbit, or,
(ii) the 𝜇-coordinates of the intersection points 𝜙_{𝑡_𝑖}(x) are monotone, i.e. either 𝜇₀ < ⋯ < 𝜇_𝑛, or 𝜇₀ > ⋯ > 𝜇_𝑛.

Two intersections 𝜙_{𝑡_𝑘}(x) and 𝜙_{𝑡_{𝑘+1}}(x) in 𝐿 are consecutive if 𝜙_𝑡(x) ∉ 𝐿 for all 𝑡_𝑘 < 𝑡 < 𝑡_{𝑘+1}.

Proof. Let 𝜇₀ = 𝜇(x), i.e. (1 − 𝜇₀)x₀ + 𝜇₀x₁ = x, and let 𝜇₁ be the associated 𝜇-coordinate of 𝜙_{𝑡₁}(x). If 𝜇₀ = 𝜇₁, then all 𝜇_𝑖 coincide by the uniqueness of the initial value problem. This proves case (i). Suppose 𝜇₀ ≠ 𝜇₁. Define the Jordan curve

    𝛾 := 𝐽₁ ∪ 𝐽₂,

with 𝐽₁ = {𝜙_𝑡(x) | 0 ≤ 𝑡 ≤ 𝑡₁} and 𝐽₂ = {(1 − 𝜇)x₀ + 𝜇x₁ | 𝜇₀ ≤ 𝜇 ≤ 𝜇₁, or 𝜇₁ ≤ 𝜇 ≤ 𝜇₀}. By Proposition 7.2.1 we have that ℝ² \ 𝛾 = 𝑅⁺(𝛾) ∪ 𝑅⁻(𝛾). By the fact that f is transverse to 𝐽₂ it follows that 𝜙_{𝑡₁+𝜖}(x) ∈ 𝑅⁻(𝛾), or 𝜙_{𝑡₁+𝜖}(x) ∈ 𝑅⁺(𝛾), for 𝜖 > 0 sufficiently small.

Suppose 𝜙_{𝑡₁+𝜖}(x) ∈ 𝑅⁺(𝛾). We claim that 𝜙_𝑡(x) ∈ 𝑅⁺(𝛾) for all 𝑡 > 𝑡₁. If not, there exists a time 𝑡′ > 𝑡₁ such that 𝜙_{𝑡′}(x) ∈ 𝛾 = 𝐽₁ ∪ 𝐽₂. The case 𝜙_{𝑡′}(x) ∈ 𝐽₁ contradicts the uniqueness of the initial value problem, and 𝜙_{𝑡′}(x) ∈ 𝐽₂ contradicts the direction of f at 𝐽₂, since f is pointing into 𝑅⁺(𝛾). By construction 𝜙_{𝑡₂}(x) ∈ 𝐿 ∩ 𝑅⁺(𝛾), which proves that either 𝜇₀ < 𝜇₁ < 𝜇₂ or 𝜇₀ > 𝜇₁ > 𝜇₂. Repeating the above argument starting at 𝑡 = 𝑡₁ yields the monotonicity of the sequence {𝜇_𝑖}. The same applies in the other case 𝜙_{𝑡₁+𝜖}(x) ∈ 𝑅⁻(𝛾) by reversing the time direction. ∎

Lemma 7.2.3 Let x ∈ ℝ² and let y ∈ 𝜔(x) ∩ 𝐿, where 𝐿 is a Poincaré section as before. Then, there exist times 𝑡_𝑛 → ∞ such that y_𝑛 := 𝜙_{𝑡_𝑛}(x) ∈ 𝐿 and y_𝑛 → y as 𝑛 → ∞.

Proof. By the definition of the omega limit set there exists a sequence of times 𝑠_𝑛 → ∞ such that z_𝑛 := 𝜙_{𝑠_𝑛}(x) → y ∈ 𝐿. By Lemma 6.1.3 there exists a continuous function 𝜏 defined in a neighborhood of y with 𝜏(y) = 0, such that y_𝑛 := 𝜙_{𝜏(z_𝑛)}(z_𝑛) ∈ 𝐿 for 𝑛 sufficiently large. Since z_𝑛 → y we have that 𝜏(z_𝑛) → 0 and thus

    𝐿 ∋ y_𝑛 = 𝜙_{𝜏(z_𝑛)}(z_𝑛) = 𝜙_{𝜏(z_𝑛)+𝑠_𝑛}(x) = 𝜙_{𝑡_𝑛}(x) → y,

as 𝑛 → ∞, which proves the lemma. ∎

Lemma 7.2.4 Let x ∈ ℝ² and let 𝐿 be a Poincaré section as before. Then, 𝜔(x) intersects 𝐿 in at most one point.

Proof. Suppose there are two distinct intersection points y and y′. By Lemma 7.2.3 there exist sequences of times {𝑡_𝑛} and {𝑡′_𝑛}, with 𝑡_𝑛, 𝑡′_𝑛 → ∞, such that

    𝐿 ∋ y_𝑛 = 𝜙_{𝑡_𝑛}(x) → y,  𝐿 ∋ y′_𝑛 = 𝜙_{𝑡′_𝑛}(x) → y′,  𝑛 → ∞.

By Lemma 7.2.2 the associated sequences of 𝜇-coordinates {𝜇_𝑛} and {𝜇′_𝑛} are monotone, and 𝜇_𝑛 → 𝜇(y) and 𝜇′_𝑛 → 𝜇(y′). Since both sequences of times 𝑡_𝑛, 𝑡′_𝑛 → ∞ we can choose subsequences 𝑡_{𝑛_𝑘} and 𝑡′_{𝑛_𝑘} such that

    𝑡_{𝑛₁} < 𝑡′_{𝑛₂} < 𝑡_{𝑛₃} < 𝑡′_{𝑛₄} < ⋯,

and by Lemma 7.2.2 the associated sequence of 𝜇-coordinates 𝜇_{𝑛₁}, 𝜇′_{𝑛₂}, 𝜇_{𝑛₃}, 𝜇′_{𝑛₄}, ⋯ is monotone. Such a sequence has at most one limit, which contradicts the fact that 𝜇_{𝑛_𝑘} → 𝜇(y) and 𝜇′_{𝑛_𝑘} → 𝜇(y′), with 𝜇(y) ≠ 𝜇(y′). ∎

Lemma 7.2.5 Suppose 𝜔(x) is non-empty and bounded. Moreover, assume that 𝜔(x) does not contain equilibrium points of 𝜙_𝑡. Then, there exists a periodic orbit 𝛾 ⊂ 𝜔(x) for the flow 𝜙_𝑡.

Proof. Since 𝜔(x) is non-empty we can choose a point y ∈ 𝜔(x). The omega limit set is closed by definition⁴ and bounded by assumption, and therefore compact. By the invariance⁵ of 𝜔(x) we have that 𝛾_y = {𝜙_𝑡(y) | 𝑡 ∈ ℝ} ⊂ 𝜔(x). The orbit 𝛾_y is pre-compact since it is contained in the compact set 𝜔(x), and therefore 𝜔(y) is non-empty and compact.⁶ By Proposition 5.2.4(i) and (vi) we have that 𝜔(y) ⊂ 𝜔(𝜔(x)) = 𝜔(x).

4: cf. Proposition 5.2.4(v)
5: cf. Proposition 5.2.2
6: cf. Proposition 5.2.3(i)-(ii).

Because 𝜔(y) is non-empty we can choose a point z ∈ 𝜔(y). By assumption 𝜔(x) does not contain equilibrium points, and therefore z is not an equilibrium point and f(z) ≠ 0. Choose a line-segment 𝐿 at z which is a Poincaré section for 𝜙_𝑡, i.e. z ∈ 𝜔(y) ∩ 𝐿. By Lemma 7.2.3 there exist times 𝑡_𝑛 → ∞ such that

    𝐿 ∋ z_𝑛 = 𝜙_{𝑡_𝑛}(y) → z.

Since 𝛾_y ⊂ 𝜔(x) we have that z_𝑛 ∈ 𝜔(x) ∩ 𝐿 for all 𝑛. Lemma 7.2.4 then implies that z_𝑛 = z_{𝑛′} for all 𝑛 ≠ 𝑛′. From the times 𝑡_𝑛 → ∞ we can choose times 𝑡_𝑛 ≠ 𝑡_{𝑛′}. This implies 𝜙_{𝑡_𝑛}(y) = 𝜙_{𝑡_{𝑛′}}(y), which proves that 𝛾_y ⊂ 𝜔(x) is a periodic orbit. ∎

Lemma 7.2.6 Assume 𝜔(x) is connected and does not contain equilibrium points of 𝜙_𝑡. Suppose 𝛾 ⊂ 𝜔(x) is a periodic orbit; then 𝜔(x) = 𝛾.

Proof. Let y ∈ 𝛾 be any point on 𝛾 and let 𝐿 be a line-segment at y which is a Poincaré section for 𝜙_𝑡. By construction y ∈ 𝜔(x) ∩ 𝐿, and by Lemma 7.2.4 we have that

    𝜔(x) ∩ 𝐿 = {y}.

Suppose 𝜔(x) \ 𝛾 ≠ ∅. By the connectedness of 𝜔(x) we can choose y ∈ 𝛾 and y′ ∈ 𝜔(x) \ 𝛾 arbitrarily close. By Lemma 6.1.3 there exists a time 𝜏(y′) such that 𝜙_{𝜏(y′)}(y′) ∈ 𝐿, and therefore 𝜙_{𝜏(y′)}(y′) ∈ 𝜔(x) ∩ 𝐿. This implies that 𝜙_{𝜏(y′)}(y′) = y and consequently y′ ∈ 𝛾, which contradicts the assumption that y′ ∈ 𝜔(x) \ 𝛾. Therefore 𝜔(x) \ 𝛾 = ∅, which completes the proof. ∎

Proof of Theorem 7.1.1. By Condition (i) 𝛾⁺_x is pre-compact. By Proposition 5.2.3(i)-(iii) we have that 𝜔(x) is non-empty, compact and connected. If we combine this with Condition (ii), then Lemma 7.2.5 yields the existence of a periodic orbit 𝛾 ⊂ 𝜔(x), and Lemma 7.2.6 implies 𝛾 = 𝜔(x). By Proposition 5.2.2, 𝛾 = 𝜔(x) ⊂ cl 𝛾⁺_x ⊂ 𝑈. ∎

Proof of Corollary 7.1.2. Since 𝑈 ⊂ ℝ² is a non-empty, bounded trapping region, every forward orbit 𝛾⁺_x is bounded and therefore pre-compact, and 𝜔(x) ⊂ int 𝑈 for all x ∈ 𝑈. Since 𝜙_𝑡 has no equilibrium points in 𝑈, Conditions (i) and (ii) of Theorem 7.1.1 are met and 𝜔(x) ⊂ 𝑈 is a periodic orbit for every x ∈ 𝑈. ∎
8 Index theory
Lecture 11

8.1 Winding numbers
8.2 The index of a regular Jordan curve
8.3 The fixed point index
8.4 * Homotopy properties of the index

In this chapter we introduce yet another topological tool for planar flows: the index of a regular Jordan curve. Associated with the index of a regular Jordan curve is the fixed point index.

8.1 Winding numbers

Let O = (0, 0) be a designated point and let x : [0, 2𝜋] → ℝ² \ O be a smooth parametrized closed curve¹ avoiding the origin O. The image is denoted by 𝛾. Consider the vector field

    𝑋(x) = ( −𝑦/(𝑥² + 𝑦²), 𝑥/(𝑥² + 𝑦²) ),

which is a smooth vector field on ℝ² \ O.² The vector field 𝑋 is irrotational: curl 𝑋 = 0.³

1: It holds that x(0) = x(2𝜋).
2: Note that 𝑋 does not extend to a smooth vector field on ℝ².
3: Let 𝑋(x) = (𝑎(𝑥, 𝑦), 𝑏(𝑥, 𝑦)). Then, curl 𝑋 := 𝜕_𝑥𝑏 − 𝜕_𝑦𝑎.

Definition 8.1.1 Let x : [0, 2𝜋] → ℝ² \ O be a smooth parametrized closed curve avoiding the origin O. Then, the line integral

    𝑊(𝛾, O) := (1/2𝜋) ∮_𝛾 𝑋 · 𝑑x,    (8.1)

is called the winding number of 𝛾 about the origin O.

The winding number has special properties. The first step is to write the parametrization x(𝑡) in appropriate polar coordinates. For the point x(0) choose 𝜃₀ ∈ ℝ such that x(0) = (𝑟(0) cos 𝜃₀, 𝑟(0) sin 𝜃₀).⁴ Define the angular function

    𝜃(𝑡) := 𝜃₀ + ∫₀ᵗ 𝑋(x(𝑠)) · ẋ(𝑠) 𝑑𝑠 = 𝜃₀ + ∫₀ᵗ (−𝑦(𝑠)𝑥̇(𝑠) + 𝑥(𝑠)𝑦̇(𝑠)) / (𝑥²(𝑠) + 𝑦²(𝑠)) 𝑑𝑠.    (8.2)

4: The angle 𝜃₀ is unique up to a multiple of 2𝜋. The winding number can be defined with respect to any point x₀ ∉ 𝛾.
This yields the following representation for x(𝑡).

Lemma 8.1.1 The function 𝜃(𝑡) is the unique angular function such that

    x(𝑡) = (𝑟(𝑡) cos 𝜃(𝑡), 𝑟(𝑡) sin 𝜃(𝑡)),    (8.3)

where 𝑟(𝑡) = |x(𝑡)|.

Proof. By definition the functions 𝑟(𝑡) and 𝜃(𝑡) are smooth functions of 𝑡. The vectors u(𝑡) = (cos 𝜃(𝑡), sin 𝜃(𝑡)) and u⊥(𝑡) = (−sin 𝜃(𝑡), cos 𝜃(𝑡)) form an orthonormal frame for all 𝑡 ∈ ℝ, and every vector function x(𝑡) is uniquely represented by x(𝑡) = (x(𝑡) · u(𝑡))u(𝑡) + (x(𝑡) · u⊥(𝑡))u⊥(𝑡). Note that x(0) · u(0) = 𝑟(0) > 0 and x(0) · u⊥(0) = 0. We now show that x(𝑡) · u(𝑡) = 𝑟(𝑡) and x(𝑡) · u⊥(𝑡) = 0 for all 𝑡 ∈ ℝ by computing the derivatives of the expressions (x(𝑡) · u(𝑡))/𝑟(𝑡) and x(𝑡) · u⊥(𝑡). We have

    𝑑/𝑑𝑡 [ (x(𝑡) · u(𝑡))/𝑟(𝑡) ]
        = 𝑑/𝑑𝑡[𝑥(𝑡)/𝑟(𝑡)] cos 𝜃(𝑡) − (𝑥(𝑡)/𝑟(𝑡)) 𝜃̇(𝑡) sin 𝜃(𝑡)
          + 𝑑/𝑑𝑡[𝑦(𝑡)/𝑟(𝑡)] sin 𝜃(𝑡) + (𝑦(𝑡)/𝑟(𝑡)) 𝜃̇(𝑡) cos 𝜃(𝑡)
        = [(𝑥̇𝑟² − 𝑥𝑟𝑟̇ + 𝑦𝑟²𝜃̇)/𝑟³] cos 𝜃(𝑡) + [(𝑦̇𝑟² − 𝑦𝑟𝑟̇ − 𝑥𝑟²𝜃̇)/𝑟³] sin 𝜃(𝑡)
        = 0,

by using the facts that 𝑟𝑟̇ = 𝑥𝑥̇ + 𝑦𝑦̇ and 𝑟²𝜃̇ = −𝑦𝑥̇ + 𝑥𝑦̇. This implies that (x(𝑡) · u(𝑡))/𝑟(𝑡) is constant in 𝑡, and thus (x(𝑡) · u(𝑡))/𝑟(𝑡) = (x(0) · u(0))/𝑟(0) = 1. A similar calculation shows that x(𝑡) · u⊥(𝑡) = 0 for all 𝑡. We conclude that x(𝑡) = 𝑟(𝑡)u(𝑡), which proves (8.3). As for uniqueness we argue as follows. Consider the expression in (8.3) with a different angular function 𝜃̃(𝑡). Then 𝑟²𝜃̃̇ = −𝑦𝑥̇ + 𝑥𝑦̇ = 𝑟²𝜃̇, which proves the uniqueness of 𝜃. ∎
2

The above representation allows us to express the winding number in terms of the angular function 𝜃(𝑡).

Lemma 8.1.2

    𝑊(𝛾, O) = (𝜃(2𝜋) − 𝜃(0)) / 2𝜋 ∈ ℤ.

Proof. By definition 𝜃(0) = 𝜃₀ and 𝜃(2𝜋) = 𝜃₀ + ∫₀^{2𝜋} 𝑋(x(𝑠)) · ẋ(𝑠) 𝑑𝑠 = 𝜃₀ + 2𝜋𝑊(𝛾, O), which proves the above expression. Since x(0) = x(2𝜋) there exists a 𝑘 ∈ ℤ such that 𝜃(2𝜋) = 𝜃(0) + 2𝜋𝑘 = 𝜃₀ + 2𝜋𝑘, which establishes the property that 𝑊(𝛾, O) ∈ ℤ. ∎
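Lemma 8.1.2 also suggests a robust way to evaluate W(𝛾, O) numerically: accumulate wrapped increments of the polar angle along a fine polygonal sampling of 𝛾, instead of integrating 𝑋. A minimal sketch (the sampling resolution and test curves are illustrative choices):

```python
import math

def winding_number(points):
    """Accumulated polar angle of a closed polygonal curve about the origin,
    divided by 2*pi; the samples must avoid the origin."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        # Wrapped increment of the polar angle between consecutive samples.
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        total += math.atan2(math.sin(d), math.cos(d))
    return round(total / (2 * math.pi))

N = 400
# Unit circle about the origin, the same circle shifted away from the origin,
# and a curve traversing the circle twice.
circle = [(math.cos(2 * math.pi * k / N), math.sin(2 * math.pi * k / N)) for k in range(N)]
shifted = [(3 + x, y) for x, y in circle]
double = [(math.cos(4 * math.pi * k / N), math.sin(4 * math.pi * k / N)) for k in range(N)]
```

The wrapping via atan2(sin d, cos d) keeps each increment in (−𝜋, 𝜋], so the sum is insensitive to the branch of the angle, mirroring the role of the angular function 𝜃(𝑡).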

Another crucial property of the winding number is its invariance under homotopies. A homotopy of closed curves⁵ 𝛾_𝜆, 𝜆 ∈ [0, 1], is defined via a smooth 2-parameter parametrization h : [0, 1] × [0, 2𝜋] → ℝ² \ O such that x_𝜆 := h(𝜆, ·) is a smooth parametrization⁶ of 𝛾_𝜆.⁷ In order to repeat the proof of Lemma 8.1.1 we first need to deal with the initial angles 𝜃₀(𝜆). Consider the path x_𝜆(0) := h(𝜆, 0) and choose 𝜃₀(0) ∈ ℝ such that x₀(0) = (𝑟(0, 0) cos 𝜃₀(0), 𝑟(0, 0) sin 𝜃₀(0)). By Lemma 8.1.1 there exists a unique function 𝜃₀(𝜆) such that

    x_𝜆(0) = (𝑟(𝜆, 0) cos 𝜃₀(𝜆), 𝑟(𝜆, 0) sin 𝜃₀(𝜆)).

5: It holds that h(𝜆, 0) = h(𝜆, 2𝜋) for all 𝜆 ∈ [0, 1].
6: A smooth parametrization is a piecewise smooth map from [0, 2𝜋] to ℝ².
7: It is crucial for the homotopy principle that 𝛾_𝜆 is a subset of ℝ² \ O for all 𝜆 ∈ [0, 1].

Now define

    𝜃(𝜆, 𝑡) := 𝜃₀(𝜆) + ∫₀ᵗ 𝑋(x_𝜆(𝑠)) · ẋ_𝜆(𝑠) 𝑑𝑠,

which by Lemma 8.1.1 yields a representation

    x_𝜆(𝑡) = (𝑟(𝜆, 𝑡) cos 𝜃(𝜆, 𝑡), 𝑟(𝜆, 𝑡) sin 𝜃(𝜆, 𝑡)).    (8.4)

The function 𝑊(𝛾_𝜆, O) = (𝜃(𝜆, 2𝜋) − 𝜃(𝜆, 0))/2𝜋 = 𝑘(𝜆) is a continuous integer-valued function of 𝜆 ∈ [0, 1] and therefore constant on [0, 1]. This proves the following homotopy invariance principle.

Lemma 8.1.3 Let 𝛾_𝜆 ⊂ ℝ² \ O, 𝜆 ∈ [0, 1], be a smooth homotopy of closed curves. Then,

    𝑊(𝛾₀, O) = 𝑊(𝛾₁, O).

Proof. As in the proof of Lemma 8.1.2, using the representation in (8.4).⁸ ∎

8: As a matter of fact 𝑊(𝛾_𝜆, O) is constant in 𝜆 ∈ [0, 1].

Remark 8.1.1 Lemma 8.1.3 states that homotopic parametrized curves


in ℝ2 \ O have the same winding number. The converse is also true:
two parametrized closed curves in ℝ2 \ O that have the same winding
number are homotopic.

Let 𝛾 be a regular Jordan curve in ℝ². By the Jordan curve Theorem, cf. Prop. 7.2.1, 𝛾 divides the plane into two regions: 𝑅⁺(𝛾) (bounded) and 𝑅⁻(𝛾) (unbounded). If O ∈ 𝑅⁻(𝛾), then 𝑋 is a smooth vector field on 𝑅⁺(𝛾) and we may apply Green's Theorem:⁹

    𝑊(𝛾, O) := (1/2𝜋) ∮_𝛾 𝑋 · 𝑑x = (1/2𝜋) ∬_{𝑅⁺(𝛾)} curl 𝑋 𝑑𝐴 = 0.

9: In the application of Green's Theorem 𝛾 is positively oriented: the region 𝑅⁺(𝛾) is positively oriented and the induced orientation for 𝛾 implies that the curve is traversed counter-clockwise.

In the case that O ∈ 𝑅⁺(𝛾) we argue as follows. Define 𝑅̃⁺(𝛾) = 𝑅⁺(𝛾) \ 𝐵_𝜖(O);¹⁰ then 𝑋 is a smooth vector field on 𝑅̃⁺(𝛾). Again apply Green's Theorem:

    (1/2𝜋) ∮_{𝜕𝑅̃⁺(𝛾)} 𝑋 · 𝑑x = (1/2𝜋) ∬_{𝑅̃⁺(𝛾)} curl 𝑋 𝑑𝐴 = 0.

10: 𝐵_𝜖(O) is a small open ball centered at O.

The boundary of 𝑅̃⁺(𝛾) consists of two disjoint regular Jordan curves: 𝜕𝑅̃⁺(𝛾) = 𝛾 ∪ 𝜕𝐵_𝜖(O). The winding number of the latter can be computed as follows. Choose x(𝑡) = (𝜖 cos 𝑡, −𝜖 sin 𝑡),¹¹ which yields (1/2𝜋) ∮_{𝜕𝐵_𝜖(O)} 𝑋 · 𝑑x = −1. For (1/2𝜋) ∮_𝛾 𝑋 · 𝑑x this implies:

    0 = (1/2𝜋) ∮_{𝜕𝑅̃⁺(𝛾)} 𝑋 · 𝑑x = (1/2𝜋) ∮_𝛾 𝑋 · 𝑑x + (1/2𝜋) ∮_{𝜕𝐵_𝜖(O)} 𝑋 · 𝑑x = 𝑊(𝛾, O) − 1,

which yields that 𝑊(𝛾, O) = 1. Here 𝛾 is positively oriented; the winding number for 𝛾 with the reversed orientation is −1.

11: The induced orientation for 𝜕𝐵_𝜖(O) is negative, i.e. the circle is traversed clockwise.

8.2 The index of a regular Jordan curve

Let 𝜙_𝑡(x) be a 𝐶¹-flow on ℝ². The associated vector field is given by f(x) = 𝜙̇_𝑡(x)|_{𝑡=0}, and flow lines satisfy the equation ẋ(𝑡) = f(x(𝑡)), where x(𝑡) = 𝜙_𝑡(x). Let 𝛾 be a positively oriented regular Jordan curve in ℝ² such that f|_𝛾 ≠ 0, i.e. there are no fixed points of 𝜙_𝑡 on 𝛾. For the sake of exposition we will treat the index theory in this section for regular Jordan curves.

Let x : [0, 2𝜋] → ℝ² be a (positively oriented) regular parametrization of 𝛾 and consider the composition

    y = f ∘ x : [0, 2𝜋] → ℝ²,

which may again be regarded as a smooth parametrized curve in ℝ² by identifying vectors with points in ℝ². By the assumption that f has no zeros on 𝛾 we have that y : [0, 2𝜋] → ℝ² \ O is a parametrized curve in the plane that avoids the origin.¹² We denote the composed curve by f ∘ 𝛾.

12: Observe that f ∘ 𝛾, the image under y, is not a regular Jordan curve in general, nor is its orientation necessarily the same as that of 𝛾!

Definition 8.2.1 Let 𝛾 be a positively oriented regular Jordan curve in ℝ² such that f|_𝛾 ≠ 0. Then,

    𝜄_f(𝛾) := 𝑊(f ∘ 𝛾, O) ∈ ℤ,    (8.5)

is called the index of 𝛾 with respect to the vector field f.

If we express the vector field f by f(x) = (𝑝(x), 𝑞(x)), then the index is given by the integral

    𝜄_f(𝛾) = (1/2𝜋) ∮_𝛾 (−𝑞 𝑑𝑝 + 𝑝 𝑑𝑞) / (𝑝² + 𝑞²).    (8.6)
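Formula (8.6), equivalently Definition 8.2.1, can be evaluated numerically by pushing a sampled parametrization of 𝛾 through f and accumulating the winding of the image curve about the origin. A sketch with two illustrative fields: the source f(x, y) = (x, y), which has index +1, and the saddle f(x, y) = (x, −y), which has index −1 (standard facts; the sampling scheme itself is an arbitrary choice):

```python
import math

def index(f, n=1000, radius=1.0):
    """Winding number of f composed with a positively oriented circle about
    the origin, i.e. the index iota_f of that circle with respect to f."""
    total = 0.0
    prev = None
    for k in range(n + 1):
        t = 2 * math.pi * (k % n) / n  # k = n closes the curve at t = 0
        fx, fy = f(radius * math.cos(t), radius * math.sin(t))
        ang = math.atan2(fy, fx)
        if prev is not None:
            d = ang - prev
            # Wrap the angle increment to (-pi, pi].
            total += math.atan2(math.sin(d), math.cos(d))
        prev = ang
    return round(total / (2 * math.pi))

source_index = index(lambda x, y: (x, y))    # source: image curve winds once
saddle_index = index(lambda x, y: (x, -y))   # saddle: image curve winds backwards
```

This is the discrete counterpart of (8.6): each wrapped increment approximates the integrand (−q dp + p dq)/(p² + q²) over one sampling interval.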

Remark 8.2.1 Note that the expression for the index in (8.6) makes sense for any vector field defined on 𝛾, or on any neighborhood of 𝛾. Therefore, vector fields need not be related to global flows, nor do they need to be normalized.

We now list a number of basic properties of the index.

Lemma 8.2.1 Let 𝛾 be a positively oriented regular Jordan curve in ℝ² such that f|_{𝑅⁺(𝛾)} ≠ 0. Then,

    𝜄_f(𝛾) = 0.    (8.7)

Proof. Since we consider smooth regular Jordan curves we may invoke Green's Theorem. By assumption 𝑝² + 𝑞² > 0 on 𝑅⁺(𝛾). Then,

    𝜄_f(𝛾) = (1/2𝜋) ∮_𝛾 (−𝑞 𝑑𝑝 + 𝑝 𝑑𝑞)/(𝑝² + 𝑞²)
           = (1/2𝜋) ∮_𝛾 [ (−𝑞𝜕_𝑥𝑝 + 𝑝𝜕_𝑥𝑞) 𝑑𝑥 + (−𝑞𝜕_𝑦𝑝 + 𝑝𝜕_𝑦𝑞) 𝑑𝑦 ] / (𝑝² + 𝑞²)
           = (1/2𝜋) ∬_{𝑅⁺(𝛾)} ( 𝜕_𝑥[(−𝑞𝜕_𝑦𝑝 + 𝑝𝜕_𝑦𝑞)/(𝑝² + 𝑞²)] − 𝜕_𝑦[(−𝑞𝜕_𝑥𝑝 + 𝑝𝜕_𝑥𝑞)/(𝑝² + 𝑞²)] ) 𝑑𝐴
           = 0,

which follows by carrying out the partial derivatives under the integral. ∎

Lemma 8.2.2 Let 𝛾 be a positively oriented regular Jordan curve in ℝ² such that f|_𝛾 ≠ 0. Under time reversal 𝑡 ↦ −𝑡, or equivalently f ↦ −f, the index is invariant, i.e. 𝜄_{−f}(𝛾) = 𝜄_f(𝛾).

Proof. The expression in (8.6) for the index is invariant under the transformation f ↦ −f, which completes the proof. ∎

Lemma 8.2.3 Let 𝛾 and 𝛾′ be two positively oriented regular Jordan curves such that 𝛾′ ⊂ int 𝑅⁺(𝛾) and 𝑆 = 𝑅⁺(𝛾) \ int 𝑅⁺(𝛾′) contains no equilibrium points. Then,

    𝜄_f(𝛾) = 𝜄_f(𝛾′).

Proof. By assumption f ≠ 0 on 𝑆, and we apply Green's Theorem to the set 𝑆, whose oriented boundary is given by 𝛾 − 𝛾′.¹³ As in the proof of Lemma 8.2.1 we have

    𝜄_f(𝛾) − 𝜄_f(𝛾′) = (1/2𝜋) ∮_𝛾 (−𝑞 𝑑𝑝 + 𝑝 𝑑𝑞)/(𝑝² + 𝑞²) + (1/2𝜋) ∮_{−𝛾′} (−𝑞 𝑑𝑝 + 𝑝 𝑑𝑞)/(𝑝² + 𝑞²)
                     = (1/2𝜋) ∮_{𝛾−𝛾′} [ (−𝑞𝜕_𝑥𝑝 + 𝑝𝜕_𝑥𝑞) 𝑑𝑥 + (−𝑞𝜕_𝑦𝑝 + 𝑝𝜕_𝑦𝑞) 𝑑𝑦 ] / (𝑝² + 𝑞²)
                     = (1/2𝜋) ∬_𝑆 ( 𝜕_𝑥[(−𝑞𝜕_𝑦𝑝 + 𝑝𝜕_𝑦𝑞)/(𝑝² + 𝑞²)] − 𝜕_𝑦[(−𝑞𝜕_𝑥𝑝 + 𝑝𝜕_𝑥𝑞)/(𝑝² + 𝑞²)] ) 𝑑𝐴
                     = 0,

which completes the proof. ∎

13: By 𝛾 − 𝛾′ we denote the two boundary curves of 𝑆, where −𝛾′ indicates that the orientation is opposite to 𝛾′, i.e. negatively oriented.

The last property of the index we mention is the case where 𝛾 is a periodic orbit of the flow 𝜙𝑡 .14 A sketch of a proof can be found in Sect. 8.4.

14: See for instance [1] for a proof. In Definition 8.2.1 the Jordan curve is traversed counter-clockwise; the lemma holds independently of the orientation of the periodic orbit.

Lemma 8.2.4 Let 𝛾 be a periodic orbit (positively or negatively oriented) of the flow 𝜙𝑡 . Then,

    𝜄 𝑓 (𝛾) = 1.

As a consequence of Lemma 8.2.4 we have:

Corollary 8.2.5 Let 𝛾 be a periodic orbit (positively or negatively oriented) of the flow 𝜙𝑡 . Then, 𝑅+ (𝛾) must contain at least one fixed point.

Proof. Suppose not, then 𝜄 𝑓 (𝛾) = 0 by Lemma 8.2.1, which contradicts Lemma 8.2.4.

In the next section we will see some additional properties of periodic orbits.

8.3 The fixed point index

The index of a regular Jordan curve with respect to a vector field can be used to assign an index to the isolated fixed points of a planar flow 𝜙𝑡 .15

15: The index for a fixed point can also be defined for flows on ℝ𝑛 by using the Brouwer mapping degree as a generalization of the winding number.

A fixed point x0 of 𝜙𝑡 is isolated if there exists an 𝜖 > 0 such that x0 is the only fixed point in 𝐵𝜖 (x0 ). Let 𝛾 be a positively oriented regular Jordan curve such that x0 is the only fixed point in 𝑅+ (𝛾). Such regular Jordan curves always exist since x0 is isolated: just take 𝛾𝜖 = 𝜕𝐵𝜖 (x0 ) as regular Jordan curve. Now consider the index 𝜄 𝑓 (𝛾). We argue that if 𝛾0 is another positively oriented regular Jordan curve such that x0 is the only fixed point in 𝑅+ (𝛾0), then the index remains unchanged. Choose 𝜖 > 0 sufficiently small such that cl 𝐵𝜖 (x0 ) ⊂ int 𝑅+ (𝛾) ∩ int 𝑅+ (𝛾0). Then, by Lemma 8.2.3 we have

    𝜄 𝑓 (𝛾) = 𝜄 𝑓 (𝛾𝜖 ) = 𝜄 𝑓 (𝛾0),

which motivates the following definition.

Definition 8.3.1 Let x0 be an isolated fixed point of 𝜙𝑡 . The fixed point index of x0 is defined as

    𝜄 𝑓 (x0 ) := 𝜄 𝑓 (𝛾),    (8.8)

where 𝛾 is a positively oriented regular Jordan curve such that x0 is the only fixed point in 𝑅+ (𝛾).

The fixed point index can be linked to the index of a regular Jordan curve via the index sum formula.

Theorem 8.3.1 Let 𝛾 be a positively oriented regular Jordan curve in ℝ2 such that 𝑓 |𝛾 ≠ 0. Suppose that 𝑅+ (𝛾) contains only finitely many fixed points {x𝑖 }. Then,

    𝜄 𝑓 (𝛾) = Σ𝑖 𝜄 𝑓 (x𝑖 ).    (8.9)

Proof. Choose 𝜖 > 0 sufficiently small such that cl 𝐵𝜖 (x𝑖 ) ⊂ int 𝑅+ (𝛾) for all 𝑖 . Consider the set

    𝑆 = 𝑅+ (𝛾) \ ∪𝑖 𝐵𝜖 (x𝑖 ).

As in the proofs of Lemmas 8.2.1 and 8.2.3 we apply Green's Theorem to the set 𝑆 , which yields

    𝜄 𝑓 (𝛾) − Σ𝑖 𝜄 𝑓 (x𝑖 ) = 𝜄 𝑓 (𝛾) − Σ𝑖 𝜄 𝑓 (𝜕𝐵𝜖 (x𝑖 )) = 0 ,

proving the above sum formula.

In Corollary 8.2.5 we established that a periodic orbit 𝛾 contains at least one fixed point inside 𝑅+ (𝛾). By invoking the sum formula in (8.9) we obtain the following statement.

Corollary 8.3.2 Let 𝛾 be a periodic orbit (positively or negatively oriented) of the flow 𝜙𝑡 . If all fixed points in 𝑅+ (𝛾) are isolated, then

    1 = Σ𝑖 𝜄 𝑓 (x𝑖 ).

As a consequence a saddle point can never be the only fixed point enclosed by a periodic orbit. The latter can be used in some cases to disprove the existence of periodic orbits.
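The sum formula (8.9) can be tested numerically. A sketch (the helper `index_on_circle` is our name, not from the notes): for 𝑓 (𝑥, 𝑦) = (𝑥² − 1, 𝑦), with a source at (1, 0) and a saddle at (−1, 0), a large circle picks up the sum 1 + (−1) = 0.

```python
import math

def index_on_circle(f, cx, cy, r, n=8192):
    """Index of the circle of radius r about (cx, cy) with respect to f,
    computed as the winding number of f along the circle."""
    total, prev = 0.0, None
    for i in range(n + 1):
        t = 2.0 * math.pi * i / n
        p, q = f(cx + r * math.cos(t), cy + r * math.sin(t))
        ang = math.atan2(q, p)
        if prev is not None:
            d = ang - prev
            d -= 2.0 * math.pi * round(d / (2.0 * math.pi))  # unwrap the jump
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

f = lambda x, y: (x * x - 1.0, y)              # fixed points at (±1, 0)
i_source = index_on_circle(f, 1.0, 0.0, 0.5)   # around the source
i_saddle = index_on_circle(f, -1.0, 0.0, 0.5)  # around the saddle
i_total = index_on_circle(f, 0.0, 0.0, 3.0)    # encloses both fixed points
print(i_source, i_saddle, i_total)             # 1 -1 0
```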
8 Index theory
46
Lecture 11

8.4 * Homotopy properties of the index

For some applications of the index, homotopy principles for the Jordan curves and for the vector fields are useful tools.

Lemma 8.4.1 (Homotopy invariance, I) Let 𝛾𝜆 , 𝜆 ∈ [0, 1] be a homotopy of regular Jordan curves such that 𝑓 |𝛾𝜆 ≠ 0 for all 𝜆 ∈ [0, 1]. Then,

    𝜄 𝑓 (𝛾0 ) = 𝜄 𝑓 (𝛾1 ).

Proof. Consider the homotopy parametrization x𝜆 (𝑡) and the composition y𝜆 (𝑡) = 𝑓 (x𝜆 (𝑡)). Since 𝑓 |𝛾𝜆 ≠ 0 it follows that y𝜆 (𝑡) defines a homotopy of closed curves in ℝ2 \ O. By Lemma 8.1.3 we obtain

    𝜄 𝑓 (𝛾0 ) = 𝑊( 𝑓 ◦ 𝛾0 , O) = 𝑊( 𝑓 ◦ 𝛾1 , O) = 𝜄 𝑓 (𝛾1 ),

which completes the proof.16

16: As a matter of fact 𝜄 𝑓 (𝛾𝜆 ) is constant in 𝜆 ∈ [0, 1].

A homotopy of vector fields is defined via a 𝐶 1 -function f : [0, 1] × ℝ2 → ℝ2 and 𝑓𝜆 = f(𝜆, ·). If we have a 1-parameter family of flows, i.e. a 𝐶 1 -function 𝜑 : [0, 1] × ℝ × ℝ2 → ℝ2 such that 𝜑(𝜆, ·, ·) is a planar flow for every 𝜆 ∈ [0, 1], we write 𝜙𝜆𝑡 (x) := 𝜑(𝜆, 𝑡, x). Then 𝑓𝜆 := 𝜙̇𝜆𝑡 (x)|𝑡=0 is a homotopy of vector fields.17

17: The homotopy principle below also applies to vector fields that do not necessarily induce global flows.
Lemma 8.4.2 (Homotopy invariance, II) Let 𝑓𝜆 , 𝜆 ∈ [0, 1] be a homotopy of vector fields such that 𝑓𝜆 |𝛾 ≠ 0 for all 𝜆 ∈ [0, 1]. Then,

    𝜄 𝑓0 (𝛾) = 𝜄 𝑓1 (𝛾).

Proof. As in the proof of Lemma 8.4.1, y𝜆 (𝑡) = 𝑓𝜆 (x(𝑡)) is a homotopy of closed curves in ℝ2 \ O. This implies

    𝜄 𝑓0 (𝛾) = 𝑊( 𝑓0 ◦ 𝛾, O) = 𝑊( 𝑓1 ◦ 𝛾, O) = 𝜄 𝑓1 (𝛾),

which completes the proof.18

18: As before 𝜄 𝑓𝜆 (𝛾) is constant in 𝜆 ∈ [0, 1].
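Lemma 8.4.2 can be illustrated numerically: interpolate 𝑓𝜆 = (1 − 𝜆) 𝑓0 + 𝜆 𝑓1 between the radial field 𝑓0 (𝑥, 𝑦) = (𝑥, 𝑦) and the rotation field 𝑓1 (𝑥, 𝑦) = (−𝑦, 𝑥); the interpolant never vanishes on the unit circle, so the index stays 1 for every 𝜆. A sketch (helper name ours):

```python
import math

def vf_index(f, n=4096):
    """Winding number of t -> f(cos t, sin t) about the origin."""
    total, prev = 0.0, None
    for i in range(n + 1):
        t = 2.0 * math.pi * i / n
        p, q = f(math.cos(t), math.sin(t))
        ang = math.atan2(q, p)
        if prev is not None:
            d = ang - prev
            d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

indices = []
for k in range(11):
    lam = k / 10.0
    # f_lam = (1 - lam) f0 + lam f1; nonzero on the circle for all lam
    f_lam = lambda x, y, l=lam: ((1 - l) * x - l * y, (1 - l) * y + l * x)
    indices.append(vf_index(f_lam))
print(indices)   # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```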

We end this section with a sketch of a proof of Lemma 8.2.4. The index is the same for positively and negatively oriented periodic orbits by Lemma 8.2.2; we assume 𝛾 is positively oriented. Choose an 𝜖 -tubular neighborhood 𝑁𝜖 (𝛾) of the periodic orbit 𝛾 . Using Lemma 8.1.1 we obtain a diffeomorphism between 𝑁𝜖 (𝛾) and the annulus 𝐴𝜖 := {x | 1 − 𝜖 ≤ |x| ≤ 1 + 𝜖} such that 𝛾 maps to the unit circle 𝛾̃ in 𝐴𝜖 . The transformed vector field on 𝐴𝜖 is denoted by 𝑓̃ . Use the homotopy principle in Lemma 8.4.2 to normalize the vector field such that 𝑓̃ has unit length at the unit circle 𝛾̃ . Then 𝑓̃ |𝛾̃ = (−𝑦, 𝑥). Summarizing we have

    𝜄 𝑓 (𝛾) = 𝜄 𝑓̃ (𝛾̃ ) = (1/2𝜋) ∮𝛾̃ −𝑞 𝑑𝑝 + 𝑝 𝑑𝑞 = (1/2𝜋) ∫₀²𝜋 (𝑥𝑦′ − 𝑦𝑥′) 𝑑𝑡 = 1,

by using the parametrization (cos(𝑡), sin(𝑡)).
9 Compactification
Lecture 12 and 13

9.1 Stereographic and central projections . . . 47
9.2 Extending the vector field . . . 47
9.3 Examples . . . 49
    A 1-dimensional system . . . 49
    A planar system . . . 49
    A 3-dimensional vector field . . . 50
    Linearization . . . 51
9.4 Blowing up fixed points . . . 52

We describe a method to extend smooth vector fields on ℝ𝑛 to smooth vector fields on 𝑆𝑛 ⊂ ℝ𝑛+1 . This allows us to understand the asymptotic behavior of vector fields and also provides an interplay between the dynamics of a vector field and its asymptotics.
9.1 Stereographic and central projections Linearization . . . . . . . . . . . 51
9.4 Blowing up fixed points . . . 52
The stereographic projection is a standard method to obtain a diffeomorphism between the 𝑛 -sphere 𝑆𝑛 \ {∞} ⊂ ℝ𝑛+1 and ℝ𝑛 , where ∞ denotes the north pole. The stereographic projection is well-known and convenient to use in many situations. The drawback is that all asymptotic behavior is mapped to one point that models infinity, the north pole.1 A different, but similar, technique makes the equator of 𝑆𝑛 play the role of infinity. This is called the central projection.2 Let us describe the central projection.

1: Compactification of ℝ𝑛 using the stereographic projection and the 𝑛 -sphere is also referred to as the Bendixson sphere.
2: Compactification of ℝ𝑛 using the central projection and the upper 𝑛 -sphere is also referred to as the Poincaré sphere.

Consider coordinates (𝝃, 𝜁) ∈ ℝ𝑛+1 , where 𝝃 = (𝜉1 , · · · , 𝜉𝑛 ), and let

    𝑆𝑛 := {(𝝃, 𝜁) ∈ ℝ𝑛+1 | |𝝃|² + 𝜁² = 1}.

Denote the upper hemisphere by 𝑆+𝑛 , i.e. all points on 𝑆𝑛 with 𝜁 > 0. Consider ℝ𝑛 with coordinates x = (𝑥1 , · · · , 𝑥 𝑛 ). For points (𝝃, 𝜁) ∈ 𝑆+𝑛 define the map

    𝑆+𝑛 → ℝ𝑛 , (𝝃, 𝜁) ↦→ x = 𝝃/𝜁 ∈ ℝ𝑛 ,    (9.1)

which is well-defined since 𝜁 > 0.3 Conversely, write 𝝃 = 𝜁 x. Then, |𝝃|² + 𝜁² = 1 implies (1 + |x|²)𝜁² = 1 and thus 𝜁 = 1/√(1 + |x|²). This yields the map

    ℝ𝑛 → 𝑆+𝑛 , x ↦→ (𝝃, 𝜁) = ( x/√(1 + |x|²) , 1/√(1 + |x|²) ).    (9.2)

3: We can think of (𝝃, 𝜁) as homogeneous coordinates.

This establishes the diffeomorphism between 𝑆+𝑛 and ℝ𝑛 . The same can be carried out for 𝑆−𝑛 . The equator 𝑆eq𝑛−1 = {(𝝃, 0) | |𝝃| = 1} plays the role of infinity. The next step is to map a vector field on ℝ𝑛 to a vector field on 𝑆+𝑛 .
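The maps (9.1) and (9.2) are easy to implement and to check against each other. A minimal sketch (function names ours, not from the notes):

```python
import math

def to_sphere(x):
    """Central projection R^n -> S^n_+, the map (9.2)."""
    s = math.sqrt(1.0 + sum(v * v for v in x))
    return [v / s for v in x] + [1.0 / s]          # (xi, zeta)

def to_plane(p):
    """Inverse map (9.1): x = xi / zeta on the upper hemisphere."""
    *xi, zeta = p
    return [v / zeta for v in xi]

x = [2.0, -0.5]
p = to_sphere(x)
print(sum(v * v for v in p))   # ≈ 1: the image lies on the sphere
print(to_plane(p))             # recovers the original point, up to rounding
```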

9.2 Extending the vector field

As before we used the ratio x : 1 = 𝝃 : 𝜁 to define the central projection. We do the same now for the vector field 𝑓 , i.e.

    ( 𝑓 (x), 0) ↦→ (𝜁 𝑓 (𝝃/𝜁), 0) , at (𝝃, 𝜁).
9 Compactification
48
Lecture 12 and 13

This vector field is not tangent to 𝑆𝑛 and thus we need to correct it with the normal direction: (𝜁 𝑓 (𝝃/𝜁), 0) − 𝑐 (𝝃, 𝜁). To be tangent to 𝑆𝑛 we take the inner product with the normal, i.e.

    [ (𝜁 𝑓 (𝝃/𝜁), 0) − 𝑐 (𝝃, 𝜁) ] · (𝝃, 𝜁) = 0 ,

which yields 𝑐 = 𝜁 𝝃 · 𝑓 (𝝃/𝜁). The vector field in ℝ𝑛+1 tangent to 𝑆𝑛 is4

    ( 𝜁 𝑓 (𝝃/𝜁) − 𝜁 𝝃 (𝝃 · 𝑓 (𝝃/𝜁)) , −𝜁² 𝝃 · 𝑓 (𝝃/𝜁) ).

4: The same expression can be obtained by differentiating 𝝃 = 𝜁 x and 𝜁 .

Choose 𝑚 ≥ 1 such that 𝑓 ∗ (𝝃, 𝜁) := 𝜁 𝑚 𝑓 (𝝃/𝜁) is a smooth function of its variables.5 We multiply the above expression for the vector field with 𝜁 𝑚−1 , which yields the normalized vector field

    ( 𝑓 ∗ (𝝃, 𝜁) − 𝝃 (𝝃 · 𝑓 ∗ (𝝃, 𝜁)) , −𝜁 (𝝃 · 𝑓 ∗ (𝝃, 𝜁)) ),    (9.3)

which extends to all of ℝ𝑛+1 and thus 𝑆𝑛 by allowing 𝜁 ∈ ℝ. Another way to write the vector field is:

    ( 𝑓 ∗ (𝝃, 𝜁), 0) − (𝝃 · 𝑓 ∗ (𝝃, 𝜁)) n,    (9.4)

5: The integer 𝑚 has to be chosen wisely — in principle as small as possible. If it is chosen too large the behavior of the vector field at the equator loses all fine structure.

where n = (𝝃, 𝜁) is the outward unit normal to 𝑆𝑛 . In order to integrate this vector field we write the differential equations:

The compactified system (homogeneous coordinates)

    𝝃′ = 𝑓 ∗ (𝝃, 𝜁) − (𝝃 · 𝑓 ∗ (𝝃, 𝜁)) 𝝃
    𝜁′ = −(𝝃 · 𝑓 ∗ (𝝃, 𝜁)) 𝜁.

For the latter system the time variable is 𝑠 . The relation to the time variable 𝑡 of the system ẋ = 𝑓 (x) is

    𝜁 𝑚−1 𝑑𝑠 = 𝑑𝑡.

Since by construction the vector field in (9.3) is tangent to 𝑆𝑛 we have that 𝑆𝑛 is an invariant set of the compactified system. The differential equation for 𝜁 yields that both the hyperplane {𝜁 = 0} ≅ ℝ𝑛 and the equator 𝑆eq𝑛−1 are invariant.
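The tangency of the normalized field (9.3) can be checked numerically: its inner product with the unit normal n = (𝝃, 𝜁) must vanish on 𝑆𝑛 . A sketch (names ours), using 𝑓 ∗ (𝝃, 𝜁) = (−𝜉1 , 𝜉2 ), which arises from the planar example of Sect. 9.3:

```python
import math

def compactified(f_star, xi, zeta):
    """RHS (9.3): ( f* - (xi . f*) xi , -(xi . f*) zeta )."""
    fs = f_star(xi, zeta)
    c = sum(a * b for a, b in zip(xi, fs))
    return [a - c * b for a, b in zip(fs, xi)] + [-c * zeta]

f_star = lambda xi, zeta: [-xi[0], xi[1]]   # f(x, y) = (-x, y), m = 1
# sample point on S^2: |xi|^2 + zeta^2 = 1
xi = [0.6, 0.48]
zeta = math.sqrt(1.0 - 0.6 ** 2 - 0.48 ** 2)
F = compactified(f_star, xi, zeta)
tangency = sum(a * b for a, b in zip(F, xi + [zeta]))
print(abs(tangency))   # ≈ 0: the field is tangent to the sphere
```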

Remark 9.2.1 If we consider a renormalized vector field ẋ = 𝑓̃ (x) = 𝑓 (x)/(1 + | 𝑓 (x)|), then

    𝑓̃ ∗ (𝝃, 𝜁) = 𝜁 𝑚 𝑓 (𝝃/𝜁) / (𝜁 𝑚 + |𝜁 𝑚 𝑓 (𝝃/𝜁)|) = 𝑓 ∗ (𝝃, 𝜁) / (𝜁 𝑚 + | 𝑓 ∗ (𝝃, 𝜁)|),

in which case we can take 𝜁 𝑚 + | 𝑓 ∗ (𝝃, 𝜁)| as the time-rescale factor: (𝜁 𝑚−1 + 𝜁 −1 | 𝑓 ∗ (𝝃, 𝜁)|) 𝑑𝑠 = 𝑑𝑡 . In this case it is more convenient to use 𝑓 instead of 𝑓̃ .
9 Compactification
49
Lecture 12 and 13

9.3 Examples

In this section we work out a number of examples describing the method given in the previous section.

A 1-dimensional system

Let us consider the 1-dimensional system ẋ = 𝑥³. The explicit solution is given by 𝑥(𝑡) = 𝑥0 /√(1 − 2𝑡𝑥0²). Via the above method we obtain the compactified system by choosing 𝑚 = 3: 𝑓 (𝑥) = 𝑥³ and 𝑓 ∗ (𝜉, 𝜁) = 𝜁³ 𝑓 (𝜉/𝜁) = 𝜉³. The compactified system, which is defined on all of ℝ2 , is:

    𝜉′ = 𝜉³ − 𝜉⁵
    𝜁′ = −𝜉⁴ 𝜁.

This system can be solved explicitly but we restrict to asymptotic analysis. In the original variable 𝑥 an orbit starting at 𝑥0 > 0 blows up to infinity in finite time as 𝑡 → 1/(2𝑥0²). We show now that we can link the asymptotic behavior of both systems. In the compactified system this translates to (𝜉(𝑠), 𝜁(𝑠)) → (1, 0) as 𝑠 → ∞. From the linear analysis of the compactified system we have that the linearized system at (1, 0) is determined by the Jacobian

    [ 3𝜉² − 5𝜉⁴    0   ]                        [ −2   0 ]
    [ −4𝜁𝜉³       −𝜉⁴ ] , which at (1, 0) is    [  0  −1 ] ,

and thus (1, 0) is a stable node in the 2-dimensional system. The linearized equation for 𝜁 yields 𝜁(𝑠) ∼ 𝑐𝑒−𝑠 , which gives the following relation between 𝑠 and 𝑡 :6

    𝜁² 𝑑𝑠 = 𝑑𝑡 =⇒ 𝑐² 𝑒−2𝑠 𝑑𝑠 ∼ 𝑑𝑡 =⇒ −½ 𝑐² 𝑒−2𝑠 ∼ 𝑡 − 1/(2𝑐0²),

6: As 𝑠 → ∞, then 𝑡 → 1/(2𝑐0²).

and therefore 𝜁² ∼ 1/𝑐0² − 2𝑡 . Since 𝜉 ∼ 1 we obtain

    𝑥(𝑡) ∼ 1/𝜁(𝑠) ∼ 𝑐0 /√(1 − 2𝑡𝑐0²).

These considerations show that the asymptotics in the original system can be retrieved from the compactification.
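The convergence (𝜉(𝑠), 𝜁(𝑠)) → (1, 0) can be confirmed by integrating the compactified system with a standard RK4 step. A sketch (names ours; the initial point 𝑥0 = 1 maps to (𝜉, 𝜁) = (1/√2, 1/√2) under (9.2)):

```python
import math

def rhs(u):
    xi, ze = u
    return (xi ** 3 - xi ** 5, -(xi ** 4) * ze)   # the compactified system

def rk4_step(u, h):
    def add(a, b, c):                 # a + c * b, componentwise
        return tuple(x + c * y for x, y in zip(a, b))
    k1 = rhs(u)
    k2 = rhs(add(u, k1, 0.5 * h))
    k3 = rhs(add(u, k2, 0.5 * h))
    k4 = rhs(add(u, k3, h))
    return tuple(u[i] + h * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6.0
                 for i in range(2))

state = (1.0 / math.sqrt(2), 1.0 / math.sqrt(2))  # x0 = 1 on the sphere
h = 0.01
for _ in range(int(30.0 / h)):
    state = rk4_step(state, h)
print(state)   # close to the stable node (1, 0)
```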

A planar system

In the next example we consider a planar system of linear differential equations:

    ẋ = −𝑥,
    ẏ = 𝑦,

and the vector field is given by 𝑓 (𝑥, 𝑦) = (−𝑥, 𝑦). In order to transform to the compactified system we consider 𝑓 ∗ (𝝃, 𝜁) = 𝜁 𝑚 𝑓 (𝝃/𝜁) by choosing 𝑚 = 1, which gives 𝑓 ∗ (𝝃, 𝜁) = (−𝜉1 , 𝜉2 ). The compactified system then reads7

    𝜉1′ = −𝜉1 − [−𝜉1² + 𝜉2²]𝜉1 ,
    𝜉2′ = 𝜉2 − [−𝜉1² + 𝜉2²]𝜉2 ,
    𝜁′ = −[−𝜉1² + 𝜉2²]𝜁.

7: We could write 𝜉̇𝑖 instead of 𝜉𝑖′ since 𝑑𝑠 = 𝑑𝑡 and one can choose 𝑠 = 𝑡 .

To analyze all fixed points of this system one can argue as follows. At 𝜁 = 0 we have 𝜉1² + 𝜉2² = 1. For the right hand sides in the above system this gives −2𝜉1 (1 − 𝜉1²) and 2𝜉2 (1 − 𝜉2²), which implies: if 𝜉1 = 0, then 𝜉2 = ±1, and if 𝜉2 = 0, then 𝜉1 = ±1. For 𝜁 ≠ 0 we obtain the extra fixed point (0, 0). In order to easily analyze this system we convert the system to cylindrical coordinates (𝑟, 𝜃, 𝜁) via 𝜉1 = 𝑟 cos 𝜃 and 𝜉2 = 𝑟 sin 𝜃 with 𝑟² + 𝜁² = 1. As before 𝜉1 𝜉1′ + 𝜉2 𝜉2′ = 𝑟𝑟′ and −𝜉2 𝜉1′ + 𝜉1 𝜉2′ = 𝑟² 𝜃′. Substitution in the above system yields

    𝑟′ = (𝑟³ − 𝑟) cos 2𝜃,
    𝜃′ = sin 2𝜃,
    𝜁′ = 𝑟² 𝜁 cos 2𝜃.

At 𝜁 = 0 we observe from the equations sin 2𝜃 = 0 and 𝑟 = 1 the fixed points indicated above. From the angular equation 𝜃′ = sin 2𝜃 we find the flow on the equator. This equation also allows us to carry out the stability analysis of the fixed points. To know the nature of the fixed points in the entire system we can incorporate the equation for 𝑟′. In [1] there are pictures of the flow on the disc.
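The fixed-point analysis at 𝜁 = 0 can be verified directly on the compactified right-hand side. A sketch (assuming the system as written above; names ours):

```python
def rhs(xi1, xi2, ze):
    lam = -xi1 * xi1 + xi2 * xi2          # xi . f*, with f* = (-xi1, xi2)
    return (-xi1 - lam * xi1, xi2 - lam * xi2, -lam * ze)

fixed = [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0),
         (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
         (0.0, 0.0, 1.0)]                  # four equator points + the pole
for p in fixed:
    print(rhs(*p))   # each right-hand side vanishes
```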

A 3-dimensional vector field

The third example in this section is a linear system in ℝ3 .8 Consider the system:

    ẋ = 𝑥 − 𝑦,
    ẏ = 𝑥 + 𝑦,
    ż = −𝑧,

8: We choose a linear system to keep the technicalities down to a minimum.

and the vector field is given by 𝑓 (𝑥, 𝑦, 𝑧) = (𝑥 − 𝑦, 𝑥 + 𝑦, −𝑧). In order to transform to the compactified system we take, as before, 𝑚 = 1, which gives the following system

    𝜉1′ = 𝜉1 − 𝜉2 − [2𝜉1² + 2𝜉2² − 1]𝜉1 ,
    𝜉2′ = 𝜉1 + 𝜉2 − [2𝜉1² + 2𝜉2² − 1]𝜉2 ,
    𝜉3′ = −𝜉3 − [1 − 2𝜉3²]𝜉3 ,
    𝜁′ = −[𝜉1² + 𝜉2² − 𝜉3²]𝜁,

where we used the relation 𝜉1² + 𝜉2² + 𝜉3² = 1, which defines the ‘equator’ on 𝑆3 ; we call it the 2-sphere at infinity. We use spherical-cylindrical coordinates in ℝ4 : 𝜉1 = cos 𝜃 sin 𝜙 , 𝜉2 = sin 𝜃 sin 𝜙 , 𝜉3 = cos 𝜙 , with 𝜁 ∈ ℝ+ , 𝜃 ∈ [0, 2𝜋] and 𝜙 ∈ [0, 𝜋]. In terms of derivatives we have: −𝜉2 𝜉1′ + 𝜉1 𝜉2′ = (sin²𝜙)𝜃′ and 𝜉3′ = −(sin 𝜙)𝜙′. In polar coordinates the 𝝃 -component of the vector field is given by:

    ( (2 cos 𝜃 − sin 𝜃) sin 𝜙 − 2 sin³𝜙 cos 𝜃 ,
      (cos 𝜃 + 2 sin 𝜃) sin 𝜙 − 2 sin³𝜙 sin 𝜃 ,
      −2 cos 𝜙 sin²𝜙 ).

For 𝜃′ we obtain

    (sin²𝜙)𝜃′ = −(2 cos 𝜃 − sin 𝜃) sin 𝜃 sin²𝜙 + (cos 𝜃 + 2 sin 𝜃) cos 𝜃 sin²𝜙,

which gives 𝜃′ = 1. For 𝜙′ we have:

    −(sin 𝜙)𝜙′ = −2 cos 𝜙 sin²𝜙,

and therefore 𝜙′ = sin 2𝜙 . The angles 𝜙 = 0, 𝜋 correspond to the north and south poles on the sphere at infinity. The angle 𝜙 = 𝜋/2 corresponds to a periodic orbit at infinity. It follows from the equation for 𝜙 that the north and south pole are unstable for the flow on 𝑆2 and the periodic orbit is stable. In order to examine the stability for the flow on 𝑆3 we include the equation for 𝜁 , which is:

    𝜁′ = (cos 2𝜙)𝜁.

For 𝜁 ≈ 0 we conclude that north and south pole are unstable for the whole system and the periodic orbit is stable for the 3-dimensional flow.
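On the 2-sphere at infinity the reduction 𝜃′ = 1, 𝜙′ = sin 2𝜙 can be checked pointwise against the compactified right-hand side. A sketch (names ours):

```python
import math

def rhs(x1, x2, x3):
    lam = 2 * x1 * x1 + 2 * x2 * x2 - 1    # equals xi . f* when |xi| = 1
    return (x1 - x2 - lam * x1,
            x1 + x2 - lam * x2,
            -x3 - (1 - 2 * x3 * x3) * x3)

worst = 0.0
for (th, ph) in [(0.3, 0.7), (2.1, 1.2), (4.0, 2.5)]:
    x1 = math.cos(th) * math.sin(ph)
    x2 = math.sin(th) * math.sin(ph)
    x3 = math.cos(ph)
    d1, d2, d3 = rhs(x1, x2, x3)
    th_dot = (-x2 * d1 + x1 * d2) / math.sin(ph) ** 2
    ph_dot = -d3 / math.sin(ph)
    worst = max(worst, abs(th_dot - 1.0), abs(ph_dot - math.sin(2 * ph)))
print(worst)   # ≈ 0: theta' = 1 and phi' = sin(2 phi)
```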

Linearization

Linearizing at fixed points of f is described in Sect. 3.3. What if we want to linearize at fixed points at ‘infinity’? In principle the compactified system is defined on ℝ𝑛+1 and can therefore be linearized as before. However, since we consider the flow restricted to 𝑆𝑛 it is more convenient to derive an appropriate system in ℝ𝑛 . To explain the idea let us consider the system

    ẋ = −𝑥,
    ẏ = 𝑦,

as discussed above. In this example we find the fixed points (0, ±1, 0) and (±1, 0, 0) at infinity. Let us zoom in on the point (1, 0, 0). The idea is to invert the construction of the central projection, but now to the plane {(1, 𝑦, 𝑧)} which contains the point (1, 0, 0). The inverse central projection then provides a description of the flow on the ‘hemisphere’ 𝜉1 > 0. To simplify matters we carry out the inverse projection for a general 2-dimensional system onto the plane 𝑥 = 1. Given ẋ = f(x), f = ( 𝑓 , 𝑔), the associated compactified system is:

    𝜉1′ = 𝑓 ∗ (𝜉1 , 𝜉2 , 𝜁) − (𝝃 · f ∗ (𝝃, 𝜁)) 𝜉1 ,
    𝜉2′ = 𝑔 ∗ (𝜉1 , 𝜉2 , 𝜁) − (𝝃 · f ∗ (𝝃, 𝜁)) 𝜉2 ,
    𝜁′ = −(𝝃 · f ∗ (𝝃, 𝜁)) 𝜁.

This system is related to the 2-dimensional system via the correspondence (𝑥, 𝑦, 1) ∼ (𝜉1 , 𝜉2 , 𝜁). To build a chart at 𝑥 = 1 we use the correspondence (1, 𝑦, 𝑧) ∼ (𝜉1 , 𝜉2 , 𝜁). This gives:

    𝑦 = 𝜉2 /𝜉1 ,  𝑧 = 𝜁/𝜉1 ,  and  𝑦/𝑧 = 𝜉2 /𝜁.
𝜉1 𝜉1 𝑧 𝜁

Via the relations 𝑦′ = (𝜉2′ 𝜉1 − 𝜉2 𝜉1′)/𝜉1² and 𝑧′ = (𝜁′ 𝜉1 − 𝜁 𝜉1′)/𝜉1² we obtain the following equations for 𝑦 and 𝑧 :

    𝑦′ = [ 𝜁 𝑚 𝑔(𝜉1 /𝜁, 𝜉2 /𝜁) 𝜉1 − 𝜁 𝑚 𝑓 (𝜉1 /𝜁, 𝜉2 /𝜁) 𝜉2 ] / 𝜉1²  and  𝑧′ = −𝜁 𝑚+1 𝑓 (𝜉1 /𝜁, 𝜉2 /𝜁) / 𝜉1² .

If we write 𝜁 = 𝑧𝜉1 , 𝜉1 /𝜁 = 1/𝑧 and 𝜉2 /𝜁 = 𝑦/𝑧 we obtain

    𝑦′ = 𝜉1𝑚−1 [ 𝑧 𝑚 𝑔(1/𝑧, 𝑦/𝑧) − 𝑧 𝑚 𝑓 (1/𝑧, 𝑦/𝑧) 𝑦 ]  and  𝑧′ = −𝜉1𝑚−1 𝑧 𝑚+1 𝑓 (1/𝑧, 𝑦/𝑧).

After reparametrizing time via 𝜉1𝑚−1 𝑑𝑠 = 𝑑𝑡 we obtain:

    ẏ = 𝑧 𝑚 𝑔(1/𝑧, 𝑦/𝑧) − 𝑧 𝑚 𝑓 (1/𝑧, 𝑦/𝑧) 𝑦,
    ż = −𝑧 𝑚+1 𝑓 (1/𝑧, 𝑦/𝑧).

For this system in the (1, 𝑦, 𝑧)-plane the equator corresponds to {𝑧 = 0} , which plays the role of ‘infinity’. Let us now go back to the above example and transform the equations to the (1, 𝑦, 𝑧)-plane. Here 𝑓 (𝑥, 𝑦) = −𝑥 , 𝑔(𝑥, 𝑦) = 𝑦 and 𝑚 = 1, and upon substitution we obtain the system:

    ẏ = 2𝑦,
    ż = 𝑧.
This proves that the point (1, 0, 0) is a repelling fixed point. If we carry out the above procedure for (−1, 0, 0) we use the correspondence (−1, 𝑦, 𝑧) ∼ (𝜉1 , 𝜉2 , 𝜁) and therefore:

    𝑦 = −𝜉2 /𝜉1 ,  𝑧 = −𝜁/𝜉1 ,  and  𝑦/𝑧 = 𝜉2 /𝜁.

This yields the equations for the (−1, 𝑦, 𝑧)-plane:

    ẏ = (−1)𝑚+1 𝑧 𝑚 𝑔(−1/𝑧, 𝑦/𝑧) − (−1)𝑚 𝑧 𝑚 𝑓 (−1/𝑧, 𝑦/𝑧) 𝑦,
    ż = −(−1)𝑚 𝑧 𝑚+1 𝑓 (−1/𝑧, 𝑦/𝑧).

For the point (−1, 0, 0) we obtain the same (𝑦, 𝑧)-system as before, and thus (−1, 0, 0) is also a repelling fixed point. The same procedure can be carried out for the points (0, ±1, 0).
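The chart equations can be sanity-checked by plugging 𝑓 (𝑥, 𝑦) = −𝑥 , 𝑔(𝑥, 𝑦) = 𝑦 , 𝑚 = 1 into the general (𝑦, 𝑧)-formulas and comparing with ẏ = 2𝑦 , ż = 𝑧 . A sketch (names ours):

```python
def chart_rhs(y, z, f, g, m):
    """(y, z)-equations in the chart x = 1 after time rescaling."""
    fv = f(1.0 / z, y / z)
    gv = g(1.0 / z, y / z)
    return (z ** m * gv - z ** m * fv * y, -z ** (m + 1) * fv)

f = lambda x, y: -x
g = lambda x, y: y
worst = 0.0
for (y, z) in [(0.3, 0.2), (-1.0, 0.5), (2.0, 0.1)]:
    dy, dz = chart_rhs(y, z, f, g, 1)
    worst = max(worst, abs(dy - 2 * y), abs(dz - z))
print(worst)   # ≈ 0: the chart system is (2y, z)
```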

9.4 Blowing up fixed points

We alter the compactification via the central projection in two ways: (i) we map to a cylinder, as generalized polar coordinates, and (ii) we allow a generalization of homogeneous coordinates. In this construction we have two parameters. The first one is a smooth convex function ℎ : ℝ𝑛 → ℝ+ such that

(i) ℎ −1 (1) is diffeomorphic to 𝑆𝑛−1 ;
(ii) ∇ℎ(𝝃) · Ω𝝃 = 𝜆ℎ(𝝃), for some 𝜆 > 0,9

9: The function ℎ is quasi-homogeneous of degree 𝜆.

where Ω = diag(𝜔1 , · · · , 𝜔𝑛 ) with 𝜔 𝑖 ≥ 1, which is the second parameter, to be chosen appropriately with respect to the vector field 𝑓 . Define

    S𝑛−1 := {𝝃 ∈ ℝ𝑛 | ℎ(𝝃) = 1}.

Next define the map

    S𝑛−1 × ℝ → ℝ𝑛 \ O , (𝝃, 𝜏) ↦→ x = 𝑒−𝜏Ω 𝝃 = ( 𝜉1 /𝑒 𝜏𝜔1 , · · · , 𝜉𝑛 /𝑒 𝜏𝜔𝑛 ) ∈ ℝ𝑛 ,    (9.5)

with 𝜔 𝑖 > 0, which is well-defined for all 𝜏 ∈ ℝ. Conversely, write 𝝃 = 𝑒 𝜏Ω x. Then, ℎ(𝑒 𝜏Ω x) = 1 yields the map10

    ℝ𝑛 \ O → S𝑛−1 × ℝ , x ↦→ (𝝃, 𝜏).    (9.6)

10: Use the Implicit Function Theorem to show that for every x ≠ O there is a unique 𝜏 ∈ ℝ.

This establishes the diffeomorphism between S𝑛−1 × ℝ and ℝ𝑛 \ O. We refer to coordinates as defined in (9.5) as quasi-homogeneous coordinates if 𝝎 ≠ (𝜔, · · · , 𝜔) for some integer 𝜔. The next step is to choose Ω in an optimal way with respect to the vector field 𝑓 . Choose Ω and 𝑚 ≥ 1 and define the smooth function

    𝑓 † (𝝃, 𝜏) := 𝑒 (𝑚−1)𝜏 𝑒 𝜏Ω 𝑓 (𝑒−𝜏Ω 𝝃)    (9.7)

of the variables (𝝃, 𝜏). At a later stage we will impose conditions on Ω and 𝑚 . To determine the vector field on the cylinder we differentiate the expression x = 𝑒−𝜏Ω 𝝃 and use the equation ẋ = 𝑓 (x):

    𝑓 † (𝝃, 𝜏) = 𝑒 (𝑚−1)𝜏 𝑒 𝜏Ω ẋ = −𝑒 (𝑚−1)𝜏 𝜏̇ Ω𝝃 + 𝑒 (𝑚−1)𝜏 𝝃̇ .

First take the inner product with ∇ℎ(𝝃), which gives (note that ∇ℎ(𝝃) · 𝝃̇ = 0 since ℎ(𝝃) = 1 along orbits):

    ∇ℎ(𝝃) · 𝑓 † (𝝃, 𝜏) = −𝑒 (𝑚−1)𝜏 𝜏̇ ∇ℎ(𝝃) · Ω𝝃 = −𝑒 (𝑚−1)𝜏 𝜏̇ 𝜆ℎ(𝝃) = −𝜆 𝑒 (𝑚−1)𝜏 𝜏̇ ,

and thus 𝑒 (𝑚−1)𝜏 𝜏̇ = −(1/𝜆) ∇ℎ(𝝃) · 𝑓 † (𝝃, 𝜏), and for 𝝃̇ this yields

    𝑒 (𝑚−1)𝜏 𝝃̇ = 𝑓 † (𝝃, 𝜏) − (1/𝜆) (∇ℎ(𝝃) · 𝑓 † (𝝃, 𝜏)) Ω𝝃 .

As before we rescale time via 𝑒 (𝑚−1)𝜏 𝑑𝑠 = 𝑑𝑡 . Summarizing we have:

Quasi-homogeneous polar coordinates

Let (𝝃, 𝜏) ∈ S𝑛−1 × ℝ:

    𝝃′ = 𝑓 † (𝝃, 𝜏) − (1/𝜆) (∇ℎ(𝝃) · 𝑓 † (𝝃, 𝜏)) Ω𝝃
    𝜏′ = −(1/𝜆) ∇ℎ(𝝃) · 𝑓 † (𝝃, 𝜏) .

For the latter system the time variable is 𝑠 . The relation to the time variable 𝑡 of the system ẋ = 𝑓 (x) is

    𝑒 (𝑚−1)𝜏 𝑑𝑠 = 𝑑𝑡.

We can now choose which ‘ends’, i.e. +∞ or −∞, we want to compactify. Let us start with 𝜏 = −∞. Set 𝜏 = ln 𝜁 ; then 𝜏′ = 𝜁′/𝜁 . The ‘end’ 𝜏 = −∞ corresponds to 𝜁 = 0. In order for the above system to be valid at 𝜁 = 0 we need the following condition:

    𝑓 ∗ (𝝃, 𝜁) := 𝑓 † (𝝃, ln 𝜁)    (9.8)

is smooth on S𝑛−1 × ℝ+ .11 If we consider the ‘end’ 𝜏 = +∞ we set 𝜏 = − ln 𝜁 , and 𝜁 = 0 corresponds to 𝜏 = +∞, which yields the condition:

    𝑓 ∗ (𝝃, 𝜁) := 𝑓 † (𝝃, − ln 𝜁)    (9.9)

is smooth on S𝑛−1 × ℝ+ . In specific examples these conditions may be satisfied by choosing Ω and 𝑚 appropriately. This way we can study the behavior of vector fields at ∞ and at x = O as a fixed point of 𝑓 ; we illustrate this with the example below. The equations are:12

    𝝃′ = 𝑓 ∗ (𝝃, 𝜁) − (1/𝜆) (∇ℎ(𝝃) · 𝑓 ∗ (𝝃, 𝜁)) Ω𝝃
    𝜁′ = ± (1/𝜆) (∇ℎ(𝝃) · 𝑓 ∗ (𝝃, 𝜁)) 𝜁.

11: We use the convention: ℝ+ = [0, ∞).
12: The − sign corresponds to 𝜏 = −∞ and the + to 𝜏 = +∞.

Remark 9.4.1 The choice of quasi-homogeneous convex function in (i)-(ii) is dictated by the scaling 𝝃 = 𝑒 𝜏Ω x. Due to this construction we obtain conjugate flows on the balls ℎ(𝝃) + 𝜇𝜁 𝜆 = 1 in ℝ𝑛+1 . If we do not require this additional symmetry we can also consider ℎ with Condition (ii) replaced by

(ii)’ ∇ℎ(𝝃) · Ω𝝃 = 𝜆(𝝃) > 0 on ℎ −1 (1).

In this case the differential equations for 𝝃 and 𝜁 remain unchanged except for letting 𝜆 depend on 𝝃 .

The transformation 𝜁 ↦→ 1/𝜁 interchanges the roles of the origin O and ∞, as the following example shows. Consider the system, cf. [1]:

    ẋ = 𝑥³ − 3𝑥𝑦² + 𝑥⁴𝑦,
    ẏ = 3𝑥²𝑦 − 𝑦³ − 𝑥⁴ + 𝑦⁵.

We use this system as an example to illustrate the blow-up method for a degenerate fixed point. Note that x = (0, 0) is a fixed point of the system and linearization does not provide information since 𝐷 𝑓 (0, 0) is the zero matrix. All lowest order terms are third order polynomials and thus the fixed point is highly degenerate. The method described in this section will allow us to obtain information about its local behavior.

Apply the above homogeneous polar coordinates with ℎ(𝝃) = 𝜉1² + 𝜉2² and Ω = id. From (9.7) we have, choosing 𝑚 = 3, that13

    𝑓 † (𝝃, 𝜏) := 𝑒 2𝜏 𝑒 𝜏 id 𝑓 (𝑒−𝜏 id 𝝃) = ( 𝜉1³ − 3𝜉1 𝜉2² + 𝑒−2𝜏 𝜉1⁴𝜉2 ,  3𝜉1²𝜉2 − 𝜉2³ − 𝑒−𝜏 𝜉1⁴ + 𝑒−2𝜏 𝜉2⁵ ).    (9.10)

13: The vector field 𝑓 † is defined on all of S𝑛−1 × ℝ. Whether the vector field extends to its ‘ends’ 𝜏 = ±∞ depends on the choice of 𝑚 and the type of scaling we use.

In order to analyze the origin we consider the ‘end’ 𝜏 = +∞ and set 𝜏 = − ln 𝜁 . This gives:14

    𝑓 ∗ (𝝃, 𝜁) := 𝑓 † (𝝃, − ln 𝜁) = ( 𝜉1³ − 3𝜉1 𝜉2² + 𝜁²𝜉1⁴𝜉2 ,  3𝜉1²𝜉2 − 𝜉2³ − 𝜁𝜉1⁴ + 𝜁²𝜉2⁵ ).    (9.11)

14: This is a smooth vector field on all of S𝑛−1 × ℝ+ .
The extension to 𝜁 = 0 is given by

    𝑓 ∗ (𝝃, 0) = ( 𝜉1³ − 3𝜉1 𝜉2² ,  3𝜉1²𝜉2 − 𝜉2³ ).    (9.12)

The correction factor on the limiting circle S1 at 𝜁 = 0 for the vector field in the quasi-homogeneous coordinates is:

    𝝃 · 𝑓 ∗ (𝝃, 0) = 𝜉1⁴ − 𝜉2⁴ = (𝜉1² − 𝜉2²)(𝜉1² + 𝜉2²) = 𝜉1² − 𝜉2² ,

since 𝜉1² + 𝜉2² = 1. The flow on the circle is described by the equations:

    𝜉1′ = 𝜉1³ − 3𝜉1 𝜉2² − (𝜉1² − 𝜉2²)𝜉1 = −2𝜉1 𝜉2² ,
    𝜉2′ = 3𝜉1²𝜉2 − 𝜉2³ − (𝜉1² − 𝜉2²)𝜉2 = 2𝜉2 𝜉1² .

In polar coordinates we have 𝜉1 = cos 𝜃 and 𝜉2 = sin 𝜃 and therefore 𝜃′ = −𝜉2 𝜉1′ + 𝜉1 𝜉2′ . If we substitute the polar coordinates in the above equations we obtain:

    𝜃′ = sin 2𝜃.

The equation for 𝜁 near 𝜁 = 0 becomes15

    𝜁′ = +[𝜉1² − 𝜉2²]𝜁 = +(cos 2𝜃)𝜁, 𝜁 ≈ 0.

15: Note the + sign, since we use the transformation 𝜏 = − ln 𝜁 .

From the 𝜃 -equation we have four fixed points (±1, 0) and (0, ±1), where the first two are unstable and the latter two are stable. From the 𝜁 -equation we derive that the first two fixed points are unstable for the whole system and the latter two fixed points are stable for the whole system.16 The above equations for 𝜃 and 𝜁 describe the local behavior near the degenerate fixed point (0, 0); this is called the blow-up of the fixed point.

16: For an illustration of the blown-up fixed point see [1, Sect. 3.10].
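The reduction to 𝜃′ = sin 2𝜃 on the blow-up circle can be checked pointwise from the circle equations. A sketch (names ours):

```python
import math

def theta_dot(th):
    x1, x2 = math.cos(th), math.sin(th)
    d1 = -2.0 * x1 * x2 * x2    # xi1' on the circle zeta = 0
    d2 = 2.0 * x2 * x1 * x1     # xi2'
    return -x2 * d1 + x1 * d2   # theta' = -xi2 xi1' + xi1 xi2'

worst = max(abs(theta_dot(2 * math.pi * k / 17) - math.sin(4 * math.pi * k / 17))
            for k in range(17))
print(worst)   # ≈ 0: theta' = sin(2 theta)
```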


If we consider 𝑓 † for 𝜏 → −∞ to describe ∞ we substitute 𝜏 = ln 𝜁 which
yields:

𝜉13 − 3𝜉1 𝜉22 + 𝜁 −2 𝜉14 𝜉2


 
𝑓 ∗ (𝝃, 𝜁) := 𝑓 † (𝝃, ln 𝜁) = . (9.13)
3𝜉1 𝜉2 − 𝜉23 − 𝜁 −1 𝜉14 + 𝜁 −2 𝜉25
2

Observe that the extension to 𝜁 = 0 does not exist with the present
choices. In order to describe ∞ we choose 𝑚 = 5. This gives:

𝜉13 𝜁 2 − 3𝜉1 𝜉22 𝜁 2 + 𝜉14 𝜉2


 
𝑓 ∗ (𝝃, 𝜁) := 𝑓 † (𝝃, ln 𝜁) = . (9.14)
3𝜉12 𝜉2 𝜁 2 − 𝜉23 𝜁 2 − 𝜁𝜉14 + 𝜉25

The extension at 𝜁 = 0 is given by

𝜉14 𝜉2
 

𝑓 (𝝃, 0) = . (9.15)
𝜉25

The correction factor on the limiting circle S1 at 𝜁 = 0 for the vector field
in the quasi-homogeneous coordinates is:

𝝃 · 𝑓 ∗ (𝝃, 0) = 𝜉15 𝜉2 + 𝜉26 = 𝜉2 (𝜉15 + 𝜉25 ).



The flow on the circle is described by the equations:

    𝜉1′ = 𝜉1⁴𝜉2 − 𝜉2 (𝜉1⁵ + 𝜉2⁵)𝜉1 ,
    𝜉2′ = 𝜉2⁵ − 𝜉2 (𝜉1⁵ + 𝜉2⁵)𝜉2 .

In polar coordinates we have 𝜉1 = cos 𝜃 and 𝜉2 = sin 𝜃 and therefore 𝜃′ = −𝜉2 𝜉1′ + 𝜉1 𝜉2′ . For 𝜉1′ and 𝜉2′ we have: 𝜉1′ = sin³𝜃 cos 𝜃 (cos³𝜃 − sin³𝜃) and 𝜉2′ = − sin²𝜃 cos²𝜃 (cos³𝜃 − sin³𝜃). For 𝜃′ we obtain

    𝜃′ = −(sin⁴𝜃 cos 𝜃 + sin²𝜃 cos³𝜃)(cos³𝜃 − sin³𝜃) = − sin²𝜃 cos 𝜃 (cos³𝜃 − sin³𝜃).

The right hand side is 𝜋 -periodic and we find equilibria for 𝜃 equal to 0, 𝜋/4, 𝜋/2, 𝜋, 5𝜋/4, and 3𝜋/2. Without carrying out a detailed analysis, the right hand side of the 𝜃 -equation reveals that the angles 𝜋/2 and 3𝜋/2 are stable, the angles 𝜋/4 and 5𝜋/4 are unstable, and the angles 0 and 𝜋 are mixed stable/unstable for the flow on the circle. In order to examine the stability in the overall system we invoke the 𝜁 -equation:

    𝜁′ = − sin 𝜃 (cos⁵𝜃 + sin⁵𝜃) 𝜁.

Except for 𝜃 = 0, 𝜋 , stability in the 𝜁 -direction follows immediately. Further analysis for 0 and 𝜋 reveals stability also for these points. Summarizing, if 𝜁 is near zero, solutions converge to the circle at infinity.
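The six equilibria on the circle at infinity, and e.g. the attracting nature of 𝜃 = 𝜋/2, can be read off numerically from 𝜃′ = − sin²𝜃 cos 𝜃 (cos³𝜃 − sin³𝜃). A sketch:

```python
import math

def theta_dot(th):
    s, c = math.sin(th), math.cos(th)
    return -s * s * c * (c ** 3 - s ** 3)

equilibria = [0.0, math.pi / 4, math.pi / 2, math.pi,
              5 * math.pi / 4, 3 * math.pi / 2]
print([abs(theta_dot(th)) < 1e-12 for th in equilibria])  # all True
# pi/2 is attracting: theta' > 0 just below it and theta' < 0 just above it
print(theta_dot(math.pi / 2 - 0.1) > 0, theta_dot(math.pi / 2 + 0.1) < 0)
```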
The example shows that by working at different scales of homogeneous
coordinates we can analyze the behavior of a flow near fixed points and
infinity. Especially for systems in higher dimensions the behavior can be
quite complicated.
10 Bifurcations
Lecture 14 and 15
Appendix

A Elementary calculus

A.1 Implicit Function Theorem . . . 59

In this chapter we outline a number of basic tools used in the manuscript, such as basic topology, analysis and geometry.

A.1 The Implicit Function Theorem

We formulate the 𝐶 1 -version of the Implicit Function Theorem in Euclidean space.

Theorem A.1.1 Let 𝐹 : ℝ𝑛 × ℝ𝑚 → ℝ𝑚 be a 𝐶 1 -function in the variables (x, y) ∈ ℝ𝑛 × ℝ𝑚 . Suppose 𝐹(x0 , y0 ) = 0 and the Jacobian 𝐷y 𝐹(x0 , y0 ) is an invertible 𝑚 × 𝑚 matrix. Then, there exist a neighborhood N(x0 ) ⊂ ℝ𝑛 of x0 and a 𝐶 1 -function 𝑔 : N(x0 ) ⊂ ℝ𝑛 → ℝ𝑚 such that

(i) 𝑔(x0 ) = y0 ;
(ii) 𝐹(x, 𝑔(x)) = 0 for all x ∈ N(x0 ).

The Jacobian of 𝑔 at x ∈ N(x0 ) is given by

    𝐷x 𝑔(x) = −𝐷y 𝐹(x, 𝑔(x))−1 𝐷x 𝐹(x, 𝑔(x)).
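The Jacobian formula can be checked on a simple example, 𝐹(𝑥, 𝑦) = 𝑥² + 𝑦² − 1 near (0, 1), where 𝑔(𝑥) = √(1 − 𝑥²) and the formula gives 𝑔′(𝑥) = −(2𝑦)⁻¹ (2𝑥) = −𝑥/𝑔(𝑥). A sketch comparing it with a finite difference:

```python
import math

def g(x):
    # implicit solution of F(x, y) = x^2 + y^2 - 1 = 0 near (0, 1)
    return math.sqrt(1.0 - x * x)

x = 0.3
formula = -x / g(x)                              # -Dy F^{-1} Dx F = -x / y
numeric = (g(x + 1e-6) - g(x - 1e-6)) / 2e-6     # central difference
print(formula, numeric)   # both ≈ -0.3145
```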


B Elementary topology
Bibliography

Here are the references in citation order.

[1] Lawrence Perko. Differential Equations and Dynamical Systems, Third Edition. (Feb. 2010), pp. 1–571 (cited on pages i, 3, 44, 50, 54, 55).
[2] M. Hirsch and S. Smale. Differential Equations, Dynamical Systems, and Linear Algebra (Pure and Applied Mathematics, Vol. 60). (1974) (cited on page 8).
[3] James Munkres. Topology. (Jan. 2015), pp. 1–507 (cited on page 15).
Notation

Greek Letters with Pronunciation

Character Name Character Name


𝛼 alpha AL-fuh 𝜈 nu NEW
𝛽 beta BAY-tuh 𝜉, Ξ xi KSIGH
𝛾, Γ gamma GAM-muh o omicron OM-uh-CRON
𝛿, Δ delta DEL-tuh 𝜋, Π pi PIE
𝜖 epsilon EP-suh-lon 𝜌 rho ROW
𝜁 zeta ZAY-tuh 𝜎, Σ sigma SIG-muh
𝜂 eta AY-tuh 𝜏 tau TOW (as in cow)
𝜃, Θ theta THAY-tuh 𝜐, Υ upsilon OOP-suh-LON
𝜄 iota eye-OH-tuh 𝜙, Φ phi FEE, or FI (as in hi)
𝜅 kappa KAP-uh 𝜒 chi KI (as in hi)
𝜆, Λ lambda LAM-duh 𝜓, Ψ psi SIGH, or PSIGH
𝜇 mu MEW 𝜔, Ω omega oh-MAY-guh

Capitals shown are the ones that differ from Roman capitals.
Alphabetical Index

𝛼-limit set, 29
𝜔-limit point, 28, 29
𝜔-limit set, 28
Additive operator, 31
Alpha limit set, 29, 36
Alpha limit set of a set, 29
Angular function, 40
Attractor, 32
Attractors, 32
Backward invariant set, 15
Basis, 3
Bendixson sphere, 47
Blow-up, 55
Blow-up of a fixed point, 55
Center manifold, 23
Center-stable manifold
    local, 23
Center-unstable manifold
    local, 23
Central projection, 47
Closed curve, 37
Compactified system, 48
Completely additive, 31
Completely sub-multiplicative, 31
Completely super-additive, 31
Conjugation
    local, 24
Conservative system, 25
Curl, 40
Degrees of freedom, 25
Diagonalization, 2
Eigenvalue, 2
Eigenvector, 2
Energy surface, 25
Equilibrium point, 5, 16, 18
    hyperbolic, 17
    non-hyperbolic, 17
Existence and uniqueness of initial value problems, 8
Exponential, 3
    algorithm for, 4
    of 𝐴, 3
First return-time, 33
Fixed point, 16, 18
    hyperbolic, 23
    non-hyperbolic, 23
Fixed point index, 44, 45
Flow, 14, 15, 27
    global, 27
    local, 15
    of a vector field, 14, 15
Forward invariant sets, 15
Generalized polar coordinates, 52
Global flow, 27
Gradient system, 26
Green's Theorem, 42
Group property, 14, 15
Hamiltonian, 24
Hamiltonian flow, 24
Hamiltonian system, 24
    1-degree of freedom, 25
    integrable, 25
Hessian, 25
Heteroclinic orbit, 36
Homogeneous coordinates, 47
Homotopy of closed curves, 41
Hyperbolic equilibrium, 17
Implicit Function Theorem, 34, 59
Index of a periodic orbit, 44
Index of a regular Jordan curve, 42
Initial value problem, 8
Invariant, 6, 15
    backward, 15
    forward, 15
Invariant set, 15
Irrotational vector field, 40
Isolated equilibrium point, 44
Isolated fixed point, 44
Limit point, 28
Linear system, 2
Linearization, 17
Linearized system, 17
Local center manifold, 23
Local center-stable manifold, 23
Local center-unstable manifold, 23
Local conjugation, 24
Local flow, 15
Local stable manifold, 18
Local unstable manifold, 22
Matrix decomposition, 2
Non-hyperbolic equilibrium, 17
Non-hyperbolic fixed point, 23
Omega limit point, 29
Omega limit set, 28, 29, 36
Omega limit set of a set, 29
Orbit, 28
    through x, 28
Oriented regular Jordan curve, 37
Orthonormal frame, 40
Periodic orbit, 16, 33, 36, 44
Periodic point, 16
Poincaré map, 34
Poincaré section, 33
Poincaré sphere, 47
Poincaré-Bendixson Theorem, 36
Potential function, 26
Quasi-homogeneous coordinates, 53
Quasi-homogeneous polar coordinates, 53
Regular Jordan curve, 37
    Theorem, 37
Regular parametrized curve, 37
Simple closed curve, 37
Smooth parametrized curve, 37
Stable manifold, 18
    local, 18
Stereographic projection, 47
Sub-multiplicative operator, 31
Subspace
    invariant, 6
Super-additive operator, 31
Topologically equivalent flows, 28
Trapping region, 32, 36
Two-sphere at infinity, 50
Unstable manifold, 22
    local, 22
Variation of constants formula, 7
Vector field, 8, 15
Winding number, 40
