Lecture-Notes-Flows On The Line

Uploaded by Latelo Mosea

Flows on the line

In this chapter we will study one-dimensional systems of differential


equations. Some of these systems depend on a continuous time variable
𝑡 ∈ ℝ, while others depend on a discrete time variable 𝑛 ∈ ℤ. In this
course, however, we will develop techniques for continuous systems only.

Continuous time systems: flows

Consider the first-order differential equation


𝑥 ′ = 𝑓(𝑥, 𝑡) (1)
where 𝑥 ∈ ℝ^𝑁 and 𝑓: 𝐷 × 𝐼 → ℝ^𝑁, with 𝐷 ⊆ ℝ^𝑁 the state space and 𝐼 ⊆ ℝ
an interval of time.

Here 𝑡 is the independent variable and 𝑥 is the dependent variable (also
called the state variable; if, for instance, 𝑥 represents the position of
a particle, then 𝑥′ denotes its velocity). If there are 𝑁 state variables,
then 𝑥 is a vector with 𝑁 components, i.e. 𝑥 = (𝑥1, 𝑥2, 𝑥3, … , 𝑥𝑁). Note that 𝑥 itself
is a function of 𝑡.
When 𝑁 = 1 we are dealing with a one-dimensional system, which is
to say that there is one state variable. Later on we will look at the case
𝑁 = 2, a two-dimensional system of differential equations.

Equation (1) is the most general form of a nonautonomous continuous-time


system, i.e. the right-hand side depends explicitly on 𝑡, the time. If the
vector function 𝑓(𝑥, 𝑡) does not explicitly depend on 𝑡, then the system is
called autonomous. In this chapter we deal with autonomous systems. A
main result of the standard theory of differential equations for systems
such as the above is the Cauchy theorem, which establishes the existence and
uniqueness of a solution satisfying the initial conditions:
𝑥(𝑡0 ) = 𝑥0 , 𝑡0 ∈ 𝐼, 𝑥0 ∈ 𝐷.

Theorem 1 (Cauchy theorem)


Let 𝑓(𝑥, 𝑡) ∈ 𝐶^𝑟(𝐷 × 𝐼) (meaning 𝑓(𝑥, 𝑡) is 𝑟 times continuously differentiable
on 𝐷 × 𝐼). Then ∃ a unique solution, which we denote by 𝜑(𝑡0, 𝑥0, 𝑡), of the initial
value problem (I.V.P):

𝑥 ′ = 𝑓(𝑥, 𝑡)
{ (2)
𝑥(𝑡0 ) = 𝑥0

for |𝑡 − 𝑡0| sufficiently small. This solution is a continuous function of
𝑥0, 𝑡0, and 𝑡.

The solution 𝜑(𝑡0, 𝑥0, 𝑡) of the system (1) is referred to as the flow of the
system.
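The flow 𝜑 rarely has a closed form, so in practice it is approximated numerically. The sketch below is my own illustration, not part of the notes: the function name `flow`, the forward-Euler scheme, and the step count are all arbitrary choices. It approximates the flow of equation (1) and compares it against the exact flow of the test system 𝑥′ = 𝑥.

```python
import math

def flow(f, t0, x0, t, steps=10_000):
    """Approximate the flow phi(t0, x0, t) of x' = f(x, t) with the
    forward Euler method (a crude but simple one-step scheme)."""
    h = (t - t0) / steps          # step size
    x, s = x0, t0
    for _ in range(steps):
        x = x + h * f(x, s)       # Euler update: x_{k+1} = x_k + h f(x_k, t_k)
        s = s + h
    return x

# For x' = x the exact flow is phi(0, x0, t) = x0 * exp(t),
# so flow(..., t=1.0) should land close to e = 2.71828...
print(flow(lambda x, t: x, 0.0, 1.0, 1.0))
print(math.exp(1.0))
```

A more accurate scheme (e.g. Runge-Kutta) would converge faster, but Euler already illustrates the idea: the flow is a function of 𝑡0, 𝑥0, and 𝑡, exactly as the Cauchy theorem states.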
There is a fundamental difference between the theory of differential
equations and dynamical systems. In the theory of differential equations,
the objective is to examine the behaviour of individual solutions of
differential equations as a function of the independent variables. In
contrast, the goal of the theory of dynamical systems is to understand the
behaviour of the whole family of solutions of the given dynamical system,
as a function of either initial conditions, or as a function of parameters
arising in the system. So in this chapter and subsequent chapters we will
consider entire sets of solutions, not just a single solution of a system
of differential equations. The modern theory of dynamical systems uses a
lot of geometry; we will encounter this as we go along the course.

Equilibrium Solutions and their stability

In order to start to understand the behaviour of solutions of a dynamical


system, we start by considering the simplest ones: the equilibrium
solutions. An equilibrium solution of a system is a constant
solution of its differential equation(s). There are several reasons why
equilibrium solutions receive the amount of attention they do. The most
important of these reasons is that they are relatively easy to determine.
Perhaps more technical reasons are that they are
singular points of the direction field of the system, meaning they play a
special role in the geometry of the phase space, and that, as we will discuss
later, depending on their stability, they are candidates for the long-time
asymptotics of nearby solutions. But formally, what are equilibrium
solutions of a dynamical system?

Consider an autonomous system:


𝑥 ′ = 𝑓(𝑥)
An equilibrium solution of this system, which we denote by 𝑥̂, is a constant
solution of the system of differential equations. By definition 𝑥̂′ = 0;
thus equilibrium solutions are determined by
𝑓(𝑥̂) = 0
The solutions of the system can be studied near the equilibrium to learn
about what we call the stability of equilibrium solutions.

Definition 1 (Lyapunov stability): Suppose 𝑥̅ is a solution of the autonomous
system. The solution 𝑥̅ is stable if ∀𝜖 > 0, ∃ 𝛿(𝜖) > 0 such that if
|𝑥̅(𝑡0) − 𝑦(𝑡0)| < 𝛿 then |𝑥̅(𝑡) − 𝑦(𝑡)| < 𝜖
for all 𝑡 > 𝑡0 and all solutions 𝑦.

On the face of it, this definition may seem difficult to understand, but it
is not.
➢ Here is what is happening here: someone picks an 𝜖. We have to find a
𝛿 (which will depend on 𝜖) such that if we start within a distance of
𝛿 of the given solution, we never wander more than 𝜖 away. Clearly,
the smaller 𝜖 is chosen, the smaller we will have to pick 𝛿.
➢ According to this definition, the neighbouring solution 𝑦 can wander, but
not too far. In other words, the solution is stable: nearby solutions
never move far away from it.
Definition 2 (Asymptotic stability): Suppose 𝑥̅ is a solution of the
autonomous system. The solution 𝑥̅ is asymptotically stable if it
satisfies two conditions:
i) 𝑥̅ is Lyapunov stable, and
ii) ∃ 𝛿 > 0 such that if |𝑥̅(𝑡0) − 𝑦(𝑡0)| < 𝛿 then |𝑥̅(𝑡) − 𝑦(𝑡)| → 0 as 𝑡 → ∞.

In other words, once a neighbouring solution is close enough to the
solution under investigation, it is trapped and ultimately converges to
our solution.
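To make this concrete, here is a small simulation, my own illustrative sketch rather than part of the notes, of 𝑥′ = −𝑥, whose zero solution is asymptotically stable: every solution starting near 0 decays like 𝑥0 𝑒^(−𝑡) and is ultimately trapped. The helper name `euler` and the step count are arbitrary choices.

```python
def euler(f, x0, t_end, steps=10_000):
    """Forward-Euler endpoint x(t_end) for the autonomous system x' = f(x)."""
    h = t_end / steps
    x = x0
    for _ in range(steps):
        x += h * f(x)
    return x

# x' = -x: the zero solution is asymptotically stable, since every
# nearby solution behaves like x0 * exp(-t) and falls into it.
for x0 in (0.5, -0.3, 0.1):
    print(x0, "->", euler(lambda x: -x, x0, 10.0))   # all end very near 0
```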

Fixed Point/Equilibrium Point


Recall that a constant solution of the system 𝑥′ = 𝑓(𝑥) is an
equilibrium solution. Any point 𝑥∗ where 𝑥′ = 𝑓(𝑥∗) = 0 is known as a
fixed point (or equilibrium point).

Example
Find the fixed point(s) of the system 𝑥 ′ = 𝑥.

Solution
The fixed point of this system occurs when 𝑥′ = 𝑥 = 0, which gives 𝑥∗ = 0.
That is, 𝑓(𝑥∗) = 0.

Stability of fixed points.

Stable fixed point: A fixed point is stable if all sufficiently small


disturbances/changes away from it damp out in time.

Unstable fixed point: A fixed point is unstable if sufficiently small

disturbances away from it grow in time.

Example
Find the fixed points of the system 𝑥′ = 𝑓(𝑥) = 𝑥² − 1, and classify them in
terms of their stability.

Solution
𝑥′ = 𝑓(𝑥) = 0 means 𝑥² − 1 = 0, so the fixed points are 𝑥∗ = ±1.
To classify these points, we first sketch the graph of 𝑥² − 1 and then sketch
the vector field (the direction of the trajectories) near the fixed points. To
do this, we examine the sign of 𝑥² − 1 near the fixed points 𝑥∗ = ±1.

We have 𝑥² − 1 > 0 to the left of 𝑥∗ = −1 (i.e. for 𝑥 < −1), and 𝑥² − 1 < 0
just to the right of 𝑥∗ = −1 (more precisely, on the whole interval −1 < 𝑥 < 1).
So for 𝑥 < −1 we have 𝑥′ > 0 and the trajectories move to the right, towards −1,
while for −1 < 𝑥 < 1 we have 𝑥′ < 0 and the trajectories move to the left, again
towards −1. Hence the trajectories approach 𝑥∗ = −1 from both sides, and
the point 𝑥∗ = −1 is a stable equilibrium point (check the
definition of stability above) of 𝑥′ = 𝑥² − 1.

Likewise, 𝑥² − 1 > 0 to the right of 𝑥∗ = 1 (i.e. for 𝑥 > 1), and 𝑥² − 1 < 0
just to the left of 𝑥∗ = 1 (again, on the interval −1 < 𝑥 < 1). So for 𝑥 > 1
we have 𝑥′ > 0 and the trajectories move further to the right, while for
−1 < 𝑥 < 1 we have 𝑥′ < 0 and the trajectories move to the left. Hence
the trajectories move away from 𝑥∗ = 1 on both sides, and the
point 𝑥∗ = 1 is an unstable equilibrium point of 𝑥′ = 𝑥² − 1. This is shown
in the figure below. A figure of this kind is known as the phase diagram or
phase portrait of the system.
Note: The definition of a stable equilibrium is based on small
disturbances/changes. We thus say a fixed point is locally stable: for
large disturbances, the system may not return to it.
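The sign-checking argument above can be automated. The following sketch is an illustration of the graphical method, not something prescribed by the notes: the helper name `classify` and the sampling offset `h` are my own choices. It samples the sign of 𝑓 just to either side of a fixed point and reads off the direction of the trajectories.

```python
def classify(f, x_star, h=1e-3):
    """Classify a fixed point of x' = f(x) by sampling the sign of f
    just to the left and just to the right of x_star."""
    left, right = f(x_star - h), f(x_star + h)
    if left > 0 and right < 0:
        return "stable"      # trajectories approach from both sides
    if left < 0 and right > 0:
        return "unstable"    # trajectories leave on both sides
    return "half-stable or inconclusive"

f = lambda x: x**2 - 1
print(classify(f, -1.0))   # stable
print(classify(f, 1.0))    # unstable
```

Note that the offset `h` must be small enough to stay between neighbouring fixed points, in keeping with the local nature of the definition above.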

Let us work out another example


Example
Find the fixed point of the system
𝑥 ′ = 𝑓(𝑥) = 𝑥 − 𝑐𝑜𝑠𝑥
And classify its stability.
Solution
To find the fixed point we must set 𝑥′ = 0, i.e. 𝑥 − cos 𝑥 = 0. Therefore, the
fixed point corresponds to the intersection of the straight line 𝑦 = 𝑥 and
the graph of cos 𝑥, i.e. the point where 𝑥∗ = cos 𝑥∗. As before, we investigate
the sign of 𝑥 − cos 𝑥 around the fixed point 𝑥∗:
𝑥 − cos 𝑥 < 0 when 𝑥 < cos 𝑥, i.e. when the line 𝑦 = 𝑥 lies below the cosine
curve. There 𝑥′ = 𝑓(𝑥) is negative, so the trajectories move
to the left, away from the fixed point 𝑥∗. And
𝑥 − cos 𝑥 > 0 when cos 𝑥 < 𝑥, i.e. when the line 𝑦 = 𝑥 lies above the cosine
curve. There 𝑥′ = 𝑓(𝑥) is positive, so the trajectories move
to the right, away from the fixed point 𝑥∗. This observation implies that at the
fixed point 𝑥∗ the trajectories behave as in the figure below (for clarity
the fixed point has been drawn at a different position, but the
intuition is the same regardless of its position):

[Figure: phase line near the fixed point 𝑥∗; the arrows on both sides point away from 𝑥∗.]
So, we conclude that since the directions are moving away from the fixed
point 𝑥 ∗ , the fixed point is unstable.
Note: We have been able to classify this fixed point without even knowing
its value.
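Although the value of the fixed point is not needed for the classification, it can be computed numerically. The sketch below is my own: `bisect` is a hypothetical helper, not something referenced in the notes. It finds the root of 𝑥 − cos 𝑥 by bisection and re-checks the signs on either side.

```python
import math

def bisect(g, a, b, tol=1e-12):
    """Locate a root of g on [a, b] by bisection; g(a) and g(b)
    must have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:   # root lies in the left half
            b = m
        else:                  # root lies in the right half
            a = m
    return 0.5 * (a + b)

g = lambda x: x - math.cos(x)     # f(x) for the system x' = x - cos x
x_star = bisect(g, 0.0, 1.0)      # the unique fixed point
print(x_star)                     # approximately 0.739085...
# Sign of f on either side confirms that trajectories leave x*:
print(g(x_star - 0.01) < 0, g(x_star + 0.01) > 0)   # True True
```

The value 0.739085… is the unique solution of 𝑥 = cos 𝑥, and, as found graphically above, it is an unstable fixed point.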

Practice Problems
Use the graphical method above to find and classify the fixed points of the following systems.
1) 𝑥 ′ = −𝑥 3
2) 𝑥 ′ = 𝑥 3
3) 𝑥 ′ = 𝑥 2

Linear Stability
You may have learned about Taylor series in Calculus. The Taylor series
is an expansion of a real-valued function 𝑓(𝑥) about a certain point,
let us call it 𝑥 = 𝑎, and is given by

𝑓(𝑥) = 𝑓(𝑎) + 𝑓′(𝑎)(𝑥 − 𝑎) + 𝑓′′(𝑎)(𝑥 − 𝑎)²/2! + 𝑓′′′(𝑎)(𝑥 − 𝑎)³/3! + ⋯ + 𝑓^(𝑛)(𝑎)(𝑥 − 𝑎)^𝑛/𝑛! + ⋯

We will not go into details about the proof of this expansion (it is
studied in detail in Advanced Calculus), but we are going to apply it in
our introduction of linear stability for a one-dimensional system of
differential equations.

Let 𝑥 ∗ be a fixed point of the system 𝑥 ′ = 𝑓(𝑥), and let

𝜂(𝑡) = 𝑥(𝑡) − 𝑥 ∗ (3)

be a small change/perturbation away from 𝑥 ∗ . To see whether a perturbation


grows or decays, we derive a differential equation for 𝜂. Differentiating
equation (3) with respect to time we have

𝜂′ = d/d𝑡 (𝑥 − 𝑥∗) = d𝑥/d𝑡 − d𝑥∗/d𝑡.

But 𝑥∗ is a constant (a fixed point), therefore d𝑥∗/d𝑡 = 0, meaning

𝜂′ = d/d𝑡 (𝑥 − 𝑥∗) = d𝑥/d𝑡 = 𝑥′.

But, 𝑥 ′ = 𝑓(𝑥), therefore

𝜂′ = 𝑥 ′ = 𝑓(𝑥)

And from equation (3), we have


𝜂′ = 𝑥 ′ = 𝑓(𝑥) = 𝑓(𝜂 + 𝑥 ∗ ) (4)

Applying the Taylor expansion to 𝑓(𝜂 + 𝑥∗) about the point 𝑥∗ we have

𝑓(𝜂 + 𝑥∗) = 𝑓(𝑥∗) + 𝜂𝑓′(𝑥∗) + 𝑂(𝜂²)

where 𝑂(𝜂²) denotes terms of order 𝜂² and higher. The term 𝑓(𝑥∗) = 0
since 𝑥∗ is a fixed point. Hence,

𝑓(𝜂 + 𝑥∗) = 𝜂𝑓′(𝑥∗) + 𝑂(𝜂²)
Now, from equation (4) we have

𝜂′ = 𝑓(𝜂 + 𝑥∗) = 𝜂𝑓′(𝑥∗) + 𝑂(𝜂²) (5)

If 𝑓′(𝑥∗) is not zero, the term 𝜂𝑓′(𝑥∗) dominates the right-hand side of
equation (5), because the 𝑂(𝜂²) terms are very small: they are higher
powers of 𝜂, which denotes a very small perturbation. Therefore, we can
neglect the 𝑂(𝜂²) terms in equation (5) and get

𝜂′ = 𝜂𝑓 ′ (𝑥 ∗ ) (6)

The result in equation (6) is known as the linearization about 𝑥∗. It shows
that the perturbation 𝜂(𝑡) grows exponentially when 𝑓′(𝑥∗) > 0 and decays
exponentially when 𝑓′(𝑥∗) < 0. This result is very important because it
tells us that we can classify the stability of a fixed point by knowing the
slope 𝑓′(𝑥) of the system at that point. In particular, if the slope
𝑓′(𝑥∗) > 0 we have an unstable fixed point, and if the slope 𝑓′(𝑥∗) < 0 we
have a stable fixed point. We shall apply this result in the examples
below.
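Equation (6) suggests an immediate numerical classifier: estimate the slope 𝑓′(𝑥∗) and read off its sign. Below is a minimal sketch of my own; the helper name `linear_stability` and the central-difference step `h` are arbitrary choices, not from the notes.

```python
def linear_stability(f, x_star, h=1e-6):
    """Classify a fixed point of x' = f(x) via the sign of f'(x_star),
    estimated with a central finite difference (cf. equation (6))."""
    slope = (f(x_star + h) - f(x_star - h)) / (2 * h)
    if slope < 0:
        return "stable"
    if slope > 0:
        return "unstable"
    return "inconclusive (f'(x*) = 0)"

f = lambda x: x**2 - 1
print(linear_stability(f, -1.0))   # stable   (slope f'(-1) = -2)
print(linear_stability(f, 1.0))    # unstable (slope f'(1)  =  2)
```

This reproduces the classification obtained graphically for 𝑥′ = 𝑥² − 1 earlier in the notes.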

But first: what if 𝑓′(𝑥∗) is zero? Then the linearization is inconclusive, and
a different kind of analysis is needed (we call it nonlinear
stability), because in the absence of the 𝜂𝑓′(𝑥∗) term
in equation (5), the 𝑂(𝜂²) terms play the dominant role.
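To see the 𝑂(𝜂²) terms at work, consider 𝑥′ = −𝑥⁵, an illustrative system of my own choosing rather than one of the examples in the notes. Here 𝑓′(0) = 0, so linearization says nothing; yet a crude simulation (the helper `euler` is a hypothetical name) suggests small perturbations still decay, i.e. the origin is stable by nonlinear effects alone.

```python
def euler(f, x0, t_end, steps=100_000):
    """Forward-Euler endpoint x(t_end) for x' = f(x)."""
    h = t_end / steps
    x = x0
    for _ in range(steps):
        x += h * f(x)
    return x

# x' = -x**5 has the fixed point x* = 0 with f'(0) = 0: the linear
# test is silent, but the O(eta^2) terms pull the perturbation back.
x0 = 0.5
xT = euler(lambda x: -x**5, x0, 50.0)
print(abs(xT) < abs(x0))   # True: the perturbation has shrunk
```

The decay here is only algebraic in 𝑡, not exponential, which is exactly the slower behaviour one expects when the linear term is absent.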

The essence of this section was to develop a way of classifying stability
using linearization. So, let us apply the result we obtained to the
following example problems:

Example
Use linear stability analysis to determine the stability of the fixed points
of the system 𝑥′ = sin 𝑥.

Solution
The fixed points occur when 𝑥′ = 𝑓(𝑥) = sin 𝑥 = 0, which means the fixed
points are the values 𝑥∗ = 𝑛𝜋, for 𝑛 ∈ ℤ. Therefore,

𝑓′(𝑥∗) = cos(𝑛𝜋) = { 1, if 𝑛 is even
                    −1, if 𝑛 is odd

This implies that 𝑥∗ is stable when 𝑛 is odd (because the slope is
negative there), and unstable when 𝑛 is even (because the slope is
positive there).
The phase diagram for this system, with its classified fixed points, is shown
below. As we can see, for even values of 𝑛 the trajectories move away from
the fixed points, and for odd values of 𝑛 the trajectories move towards the
fixed points.
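The even/odd pattern can be verified directly with a short sketch (my own illustration, not part of the notes): evaluate the slope cos(𝑛𝜋) at each fixed point and read off the sign.

```python
import math

# Fixed points of x' = sin x are x* = n*pi, with slope
# f'(n*pi) = cos(n*pi) = (-1)**n: positive for even n, negative for odd n.
for n in range(-2, 3):
    slope = math.cos(n * math.pi)
    print(n, "unstable" if slope > 0 else "stable")
```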

Practice Problems

Use the linear stability analysis we developed above to classify the fixed
points of the following systems.
1) 𝑥 ′ = −𝑥 3
2) 𝑥 ′ = 𝑥 3
3) 𝑥 ′ = 𝑥 2
