Ma2aPracLectures 10-12

Lecture 10

Linear Systems of First-Order ODEs


We’ve looked at x′ = f (t, x), for x ∈ R depending on an independent
variable t.
Recall: If f and ∂f/∂x are continuous, then we can find a solution (through
Picard's iteration), which is unique once we fix an initial value x0 at t = 0.
For special classes of f , one can say a lot. A particular case of interest
is when the ODE is linear, which is the case when f (t, x) is linear in x, i.e.,
f (t, x) = a(t)x + b(t), for suitable functions a(t) and b(t). It is said to be
homogeneous iff the following happens: When x is a solution, any scalar
multiple cx is again a solution; in particular, 0 is a solution. In the linear
case we just considered, it is homogeneous exactly when b(t) = 0.
We say that the linear ODE has constant coefficients if a(t) and b(t) are
both independent of t. Hence we have homogeneity and constant coefficients
iff a(t) is independent of t and b(t) = 0; in other words, the ODE is of the
form x′ = ax, for some scalar a.
Recall: If x′ = ax, with a constant, the set of all solutions is given by

{x = B e^{at} | B any constant}.

Indeed,

x′ = ax =⇒ (dx/dt = 0 ⇐⇒ x = 0 : equilibrium point),

and, when x ≠ 0,

(1/x)(dx/dt) = a
=⇒ ∫ ((dx/dt)/x) dt = ∫ a dt
=⇒ log |x| = at + c
=⇒ |x| = e^{at+c}
=⇒ x = B e^{at}, with B = ±e^c ≠ 0.

But B = 0 is also possible and corresponds to the equilibrium solution x = 0.


Hence the claim above, that the general solution of x′ = ax is x = B e^{at}, where
B is any constant.
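As a quick numerical sanity check, here is a minimal Python sketch (assuming numpy and scipy; the constants a = −0.5, B = 2 and the tolerances are illustrative choices, not from the notes) comparing the closed form B e^{at} with a direct numerical integration of x′ = ax:

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative constants: any a and B work
a, B = -0.5, 2.0
t = np.linspace(0.0, 5.0, 101)

# integrate x' = a*x numerically, starting from x(0) = B
sol = solve_ivp(lambda s, x: a * x, (0.0, 5.0), [B],
                t_eval=t, rtol=1e-9, atol=1e-9)

# compare with the closed-form solution x = B*e^{at}
assert np.allclose(sol.y[0], B * np.exp(a * t), atol=1e-6)
```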
We may think of x as a vector in a space of dimension 1. The set of
all solutions is also a 1-dimensional vector space, with B as the coordinate.

(Brush up on the basics of vector spaces, linear maps, and properties of
matrices - including eigenvalues, eigenvectors, and diagonalization - from
Ma1b; it will also be good if you know about the Jordan decomposition,
which we will discuss later.)
Generalization

Let t be an independent variable (as before), and let x be a vector in R^n,
i.e., x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, whose derivative is another vector in R^n:

x′ = \frac{dx}{dt} = \begin{pmatrix} dx_1/dt \\ dx_2/dt \\ \vdots \\ dx_n/dt \end{pmatrix}

We can look at a linear ODE in vector form:

x′ = A(t)x    (*)

where A(t) is an n × n matrix, A(t) = (a_{ij}(t)), 1 ≤ i, j ≤ n. Explicitly, (∗)
means we have n ODEs:

dx_1/dt = a_{11}(t)x_1 + a_{12}(t)x_2 + · · · + a_{1n}(t)x_n
dx_2/dt = a_{21}(t)x_1 + a_{22}(t)x_2 + · · · + a_{2n}(t)x_n
...
dx_n/dt = a_{n1}(t)x_1 + a_{n2}(t)x_2 + · · · + a_{nn}(t)x_n

called an n × n linear system of ODEs.
We can say that this system has constant coefficients iff A is independent
of t, i.e., each aij is a constant. From now on, assume that we are in the case
of constant coefficients, and look for solutions x(t) of x′ = Ax. It will be
of interest to consider the set of all solutions of such a homogeneous linear
system of ODE's:

V = \{ x = (x_1, . . . , x_n)^t ∈ R^n \mid x′ = Ax \}, \quad A = (a_{ij}).

Using the known fact(s) that differentiation and matrix multiplication are
linear operations, we get the following

Properties of V:
(i) (existence of origin) 0 ∈ V
(ii) (additivity) If x, y are both in V , then x + y ∈ V
(iii) (homogeneity) If x ∈ V , then so is αx for any scalar α
Reason for (ii):

x′ = Ax
y′ = Ay
x′ + y′ = A(x + y)
(x + y)′ = A(x + y)

Reason for (iii):

αx = \begin{pmatrix} αx_1 \\ \vdots \\ αx_n \end{pmatrix}, \quad (αx)′ = \begin{pmatrix} αx_1′ \\ \vdots \\ αx_n′ \end{pmatrix} = αx′ = αAx = A(αx),

since αA = Aα.
Conclusion
The solution set V of a linear, homogeneous system x′ = Ax is a vector
space. It is natural to expect V has dimension n.
Basic Questions: Can we guess a non-zero solution of x′ = Ax, for any
n × n constant matrix A? If so, can we find all the solutions, i.e., write down
a general solution like in the n = 1 case?
Here's a clever idea for any n (in many cases, but not all, this furnishes
all the solutions). For x′ = Ax, with x = (x_1, . . . , x_n)^t, try:

x = v e^{λt}, where v ∈ R^n, v ≠ 0, λ ∈ R.

Here v is independent of t. Then

x′ = λ v e^{λt}.

But x′ = Ax, so we must have

Ax = λ v e^{λt}
⇒ Av e^{λt} = λ v e^{λt}, with e^{λt} ≠ 0,
⇒ Av = λv.

Hence λ must be an eigenvalue of A (since v is a non-zero vector).


Conversely, if λ is an eigenvalue of A, i.e., Av = λv for some non-zero
vector v, then

A(v e^{λt}) = λ v e^{λt} = \frac{d}{dt}(v e^{λt}).

So x = v e^{λt} is a solution of x′ = Ax.
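To see the equivalence concretely, here is a small numpy sketch (the test matrix and sample times are illustrative) verifying that for each computed eigenpair (λ, v), the trial solution x = v e^{λt} satisfies Ax = λx at every t:

```python
import numpy as np

# illustrative matrix (the same as example (i) below)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
lam, V = np.linalg.eig(A)        # eigenvalues lam[j], eigenvectors V[:, j]

for j in range(len(lam)):
    v = V[:, j]
    for t in (0.0, 0.7, 2.3):    # a few sample times
        x = v * np.exp(lam[j] * t)   # the trial solution x = v e^{lam t}
        # x' = lam * x, so x solves x' = Ax exactly when A x = lam x
        assert np.allclose(A @ x, lam[j] * x)
```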
Recall from Basic Linear Algebra (Ma1b):
Given any n × n matrix A, we can always find all of its eigenvalues in
C. So we get an added complexity (no pun intended) as to whether there are
real eigenvalues. To elaborate further, the eigenvalues λ are solutions of the
characteristic equation
det(λIn − A) = 0,
which is a polynomial equation in λ of degree n. There are n complex roots
but not necessarily all distinct. Even when A is a real matrix, some of the
eigenvalues may be non-real. However, when A is a real matrix, if a complex,
i.e., non-real, eigenvalue λ occurs, then its complex conjugate λ̄ will also be
an eigenvalue of A, which is evident from applying complex conjugation to
the characteristic equation. Consequently, the complex eigenvalues come in
conjugate pairs, and when n is odd, this forces the existence of at least one
real eigenvalue. One of the basic results of Linear Algebra (Ma1b), which we
will use at various places, is this:
If A is a real symmetric matrix, then all of its eigenvalues are real.
Recall that A = (a_{ij}) is symmetric iff a_{ji} = a_{ij} for all i, j ≤ n, i.e., iff A equals
its transpose A^t = (a_{ji}). More generally, we say that a complex matrix A is
hermitian iff A equals its conjugate transpose, i.e., A = \bar{A}^t. The general fact
(hopefully discussed in Ma1b) is that the eigenvalues of a complex hermitian
matrix are all real. Of course, a real matrix is hermitian iff it is symmetric,
since \bar{A} = A for real A.
Examples:

(i) A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

det(λI_2 − A) = det \begin{pmatrix} λ & −1 \\ −1 & λ \end{pmatrix} = λ^2 − 1

Eigenvalues: λ = 1, −1, since ±1 are the roots of λ^2 − 1 = 0.

Eigenvectors:

For λ = 1: Av = v, i.e., \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} ⇒ x = y; take v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.

For λ = −1: Av = −v, i.e., \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} −x \\ −y \end{pmatrix} ⇒ x = −y; take v = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

(ii) A = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} → det(λI_2 − A) = λ^2 + 1

Eigenvalues: λ_± = ±i

Eigenvectors: v^{±} = \begin{pmatrix} 1 \\ ±i \end{pmatrix} (no real eigenvector)
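Both examples can be reproduced numerically; a minimal numpy sketch (the printed values are indicated in comments, up to ordering and scaling of the eigenvectors):

```python
import numpy as np

# example (i): real symmetric matrix, hence real eigenvalues
lam, V = np.linalg.eig(np.array([[0.0, 1.0], [1.0, 0.0]]))
print(lam)    # 1 and -1 (order may differ)
print(V)      # columns proportional to (1, 1) and (1, -1)

# example (ii): real but not symmetric; the eigenvalues come out as +/- i
lam2, V2 = np.linalg.eig(np.array([[0.0, 1.0], [-1.0, 0.0]]))
print(lam2)   # 0+1j and 0-1j (a conjugate pair, no real eigenvector)
```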

Suppose λ_1 ≠ λ_2 are two real eigenvalues of A. Then we get two distinct
solutions to x′ = Ax, namely

x^{(1)} := v^{(1)} e^{λ_1 t} and x^{(2)} := v^{(2)} e^{λ_2 t},

with v^{(1)} an eigenvector for λ_1 and v^{(2)} an eigenvector for λ_2. Indeed, for j = 1, 2,
since Av^{(j)} = λ_j v^{(j)},

A x^{(j)} = λ_j v^{(j)} e^{λ_j t} = \frac{d}{dt}\left(v^{(j)} e^{λ_j t}\right) = \frac{d}{dt} x^{(j)}.
Claim: x(1) and x(2) are linearly independent solutions.
Proof: Suppose c_1 x^{(1)} + c_2 x^{(2)} = 0, for scalars c_1, c_2, not both zero. Putting
t = 0, and noting that, by definition, x^{(j)}(0) = v^{(j)} for j ∈ {1, 2}, we get the
linear dependence relation

(1) c_1 v^{(1)} + c_2 v^{(2)} = 0,

not both constants c_1, c_2 being zero. So it suffices to check that v^{(1)}, v^{(2)}
are linearly independent. This should be clear from the material covered in
Ma1b, since these eigenvectors correspond to different eigenvalues, but in any
case, here is the argument: To begin, since the eigenvectors are non-zero, if
one constant, say c_1, is zero, then so is the other. So we may assume that
both c_1 and c_2 are non-zero. Applying the matrix A to the relation (1), and
using the fact that Av^{(j)} = λ_j v^{(j)}, we obtain

(2) c_1 λ_1 v^{(1)} + c_2 λ_2 v^{(2)} = 0.

Multiplying (1) by λ_2 and subtracting it from (2),

c_1 (λ_1 − λ_2) v^{(1)} = 0,

which is impossible since c_1, λ_1 − λ_2, and v^{(1)} are all non-zero. This gives
the necessary contradiction, and the Claim follows.

Lecture 11
Linear, homogeneous system with constant coefficients:

x′ = Ax, \quad x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad x′ = \frac{dx}{dt}, \quad A = (a_{ij})_{1≤i,j≤n}    (*)
During the first 3 weeks we studied this for n = 1. In general, try to under-
stand well the n = 2 and n = 3 cases. For n = 2, you should know how to
draw various pictures, often called portraits, in the (x1 , x2 )-plane.
Equilibrium points are the solutions x for which x′ = 0, i.e., where Ax = 0.
Important special case: when A is an invertible matrix, i.e., when the deter-
minant of A, denoted det(A) or just |A|, is nonzero. Then there exists an
inverse matrix to A. Applying A^{−1} (in this case) to Ax = 0 on both sides, we
see that x = 0 = (0, . . . , 0)^t is the only equilibrium point (when A is invertible).
Check that in general, a matrix A is singular, i.e., not invertible, if and
only if 0 is an eigenvalue of A.
General principle/Theorem:

Consider x′ = Ax, x = (x_1, . . . , x_n)^t.

(a) If λ is an eigenvalue of A with eigenvector v (≠ 0), then the function
of t given by x = v e^{λt} is a non-zero solution of (∗).

(b) Suppose A has n distinct eigenvalues, say λ_1, λ_2, . . . , λ_n (with λ_i ≠ λ_j if
i ≠ j), with eigenvectors v^{(1)}, v^{(2)}, . . . , v^{(n)}, i.e.,

Av^{(j)} = λ_j v^{(j)};

then every solution of x′ = Ax is a linear combination

x = c_1 x^{(1)} + c_2 x^{(2)} + · · · + c_n x^{(n)},

where the c_j are scalars and

x^{(j)} = v^{(j)} e^{λ_j t},

for each j = 1, 2, . . . , n.
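Part (b) can be turned into a few lines of code. The helper below is a hypothetical sketch (numpy assumed; the scalars c_j and the finite-difference tolerance are illustrative) that assembles the general solution from an eigen-decomposition and checks x′ ≈ Ax:

```python
import numpy as np

def general_solution(A, c):
    """x(t) = sum_j c_j v^(j) e^{lam_j t}, assuming A has n distinct eigenvalues."""
    lam, V = np.linalg.eig(A)            # columns of V are the eigenvectors v^(j)
    return lambda t: V @ (c * np.exp(lam * t))

# sanity check on the example matrix: x'(t) ~ A x(t) via a finite difference
A = np.array([[0.0, 1.0], [1.0, 0.0]])
x = general_solution(A, np.array([2.0, -1.0]))   # illustrative scalars c_j
h = 1e-6
assert np.allclose((x(1.0 + h) - x(1.0)) / h, A @ x(1.0), atol=1e-4)
```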

Look at the case n = 2: When the eigenvalues are (real and) distinct,
i.e., λ_1 ≠ λ_2,

x^{(1)} = v^{(1)} e^{λ_1 t}, \quad x^{(2)} = v^{(2)} e^{λ_2 t}.

Writing v^{(1)} = \begin{pmatrix} a \\ b \end{pmatrix} and v^{(2)} = \begin{pmatrix} c \\ d \end{pmatrix},

W(x^{(1)}, x^{(2)}) = det(v^{(1)} e^{λ_1 t}, v^{(2)} e^{λ_2 t})
= det \begin{pmatrix} a e^{λ_1 t} & c e^{λ_2 t} \\ b e^{λ_1 t} & d e^{λ_2 t} \end{pmatrix}
= det \begin{pmatrix} a & c \\ b & d \end{pmatrix} e^{(λ_1 + λ_2)t}

Remarks:

(a) It is more subtle if the eigenvalues are not real or not all distinct. Here,
when λ_1, . . . , λ_n are all real and distinct, the fundamental solutions x^{(1)}, . . . , x^{(n)}
are all real vectors, i.e., in R^n.

(b) A key point to remember (from Ma1b) is that eigenvectors correspond-
ing to distinct eigenvalues are linearly independent.

(c) The matrix Ψ = (x^{(1)} x^{(2)} . . . x^{(n)}) is called a fundamental matrix.

(d) If y^{(1)}, . . . , y^{(n)} are n arbitrary solutions of (∗), one defines their
Wronskian determinant to be

W(y^{(1)}, . . . , y^{(n)}) = det(y^{(1)}, . . . , y^{(n)}).

These y^{(j)}'s give a fundamental set of solutions when

W(y^{(1)}, . . . , y^{(n)}) ≠ 0.

Clearly, there are at most n independent solutions.

Example:

(1) n = 2, x′ = Ax, A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}

We saw last time that A has 2 eigenvalues λ_1 = 1 and λ_2 = −1, with
corresponding eigenvectors v^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v^{(2)} = \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

The two basic solutions of the linear system are

x^{(1)} = v^{(1)} e^{t} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t}, \quad x^{(2)} = v^{(2)} e^{−t} = \begin{pmatrix} 1 \\ −1 \end{pmatrix} e^{−t},

and the Wronskian is

W(x^{(1)}, x^{(2)}) = det \begin{pmatrix} e^{t} & e^{−t} \\ e^{t} & −e^{−t} \end{pmatrix} = det \begin{pmatrix} 1 & 1 \\ 1 & −1 \end{pmatrix} = −2 ≠ 0.
Thus x(1) , x(2) are independent solutions. Of course we knew this already,
because they correspond to distinct eigenvalues.
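A quick numerical confirmation of this Wronskian (numpy sketch; the sample times are arbitrary):

```python
import numpy as np

# W(x1, x2) should equal -2 for every t (here e^{(lam1+lam2)t} = e^0 = 1)
for t in (0.0, 1.0, -2.5):
    Psi = np.array([[np.exp(t),  np.exp(-t)],
                    [np.exp(t), -np.exp(-t)]])   # columns x^(1)(t), x^(2)(t)
    assert np.isclose(np.linalg.det(Psi), -2.0)
```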
Slope field:
This is a plot in the (x_1, x_2)-plane, called the phase plane, where one
chooses a grid and draws, at each point on the grid, a short arrow in the
direction of the vector connecting the origin to the point determined by
A \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
Note that since A is a constant matrix, x′ (t), given by Ax, is independent
of t, which is what allows us to draw the slope field on the phase plane (at
all times t).
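A slope field like this takes only a few lines to draw; here is an illustrative matplotlib sketch (the grid size and range are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[0.0, 1.0], [1.0, 0.0]])
x1, x2 = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))

# the arrow at each grid point (x1, x2) points along A(x1, x2)^t
u = A[0, 0] * x1 + A[0, 1] * x2
v = A[1, 0] * x1 + A[1, 1] * x2

plt.quiver(x1, x2, u, v)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```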
Asymptotics:
Suppose A is an n × n-matrix with distinct (real) eigenvalues λ_1, . . . , λ_n
and corresponding eigenvectors v^{(1)}, . . . , v^{(n)}. Then the general solution of
x′ = Ax is given by

x = c_1 v^{(1)} e^{λ_1 t} + · · · + c_n v^{(n)} e^{λ_n t}.

Note that when λ_j > 0, e^{λ_j t} goes to ∞ as t → ∞ and goes to 0 as t → −∞.
It follows that the term corresponding to the largest (positive) λ_j dominates
the other terms as t → ∞, while the term with the most negative λ_j dominates
as t → −∞. This is because the eigenvectors v^{(1)}, v^{(2)}, . . . , v^{(n)} of A are static,
i.e., do not vary with t (in our “constant coefficients” context). However, for
each j with λ_j ≠ 0, the corresponding basic solution x^{(j)} := v^{(j)} e^{λ_j t} evolves
as t varies.
Note: If A has a non-zero eigenvalue, then the equilibrium solution
x = 0 is not asymptotically stable (or even stable), since any solution near 0
grows without bound (a non-zero vector times e^{λt}, λ ≠ 0) either as t → ∞
or as t → −∞. If the only eigenvalue is 0, x = 0 is stable, as solutions near it
will stay nearby for larger |t|, but it is not asymptotically stable.

Trajectory:
To fix ideas, look at the example above with eigenvalues ±1, and with
general solution

x = φ(t) = c_1 v^{(1)} e^{t} + c_2 v^{(2)} e^{−t},

with initial value

x_0 = φ(0) = c_1 v^{(1)} + c_2 v^{(2)} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ −1 \end{pmatrix}.

If we sketch the evolution of ϕ(t) for any particular choice of c1 , c2 , we can


represent it by a curve, called a trajectory, in the phase plane.
A phase portrait is just a sampling of different types of trajectories in
the phase plane.
Trajectory of x^{(1)}(t):

x^{(1)}(t) = \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t}

Choose t_1, t_2, . . . , t_m and plot x^{(1)}(t_j) for each j, and then join them:

x^{(1)}(0) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad x^{(1)}(1) = \begin{pmatrix} e \\ e \end{pmatrix}, \quad x^{(1)}(a) = \begin{pmatrix} e^a \\ e^a \end{pmatrix}, . . .
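This plotting recipe is exactly what one would do in code; a short matplotlib sketch (the sample interval is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(-2.0, 1.5, 100)      # sample times t_1, ..., t_m
x1 = np.exp(t)                       # first coordinate of x^(1)(t)
x2 = np.exp(t)                       # second coordinate: the same, so x2 = x1

plt.plot(x1, x2)                     # the trajectory is the ray x2 = x1 > 0
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```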

Lecture 12
Last time we discussed the example, in the plane:

x′ = Ax, \quad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
A three-dimensional example

A = \begin{pmatrix} 7 & −18 & 0 \\ 3 & −8 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}

Solve x′ = Ax subject to the initial condition: x(0) = \begin{pmatrix} 7 \\ 3 \\ −1 \end{pmatrix}.

Eigenvalues of A: Solve det(λI_3 − A) = 0.

det \begin{pmatrix} λ−7 & 18 & 0 \\ −3 & λ+8 & 0 \\ 0 & 0 & λ−3 \end{pmatrix} = (λ − 3) det \begin{pmatrix} λ−7 & 18 \\ −3 & λ+8 \end{pmatrix}
= [(λ − 7)(λ + 8) + 54](λ − 3)
= (λ^2 + λ − 56 + 54)(λ − 3)
= (λ − 1)(λ + 2)(λ − 3)

Thus λ_1 = 1, λ_2 = −2, λ_3 = 3.
 
Eigenvectors: Look for v = \begin{pmatrix} a \\ b \\ c \end{pmatrix} ≠ 0 such that Av = λv. Note that

Av = \begin{pmatrix} 7a − 18b \\ 3a − 8b \\ 3c \end{pmatrix}.

λ_3 = 3: Want Av^{(3)} = 3v^{(3)}. The first two rows force 7a − 18b = 3a and
3a − 8b = 3b, whence a = b = 0, while the third row 3c = 3c holds for any c.

We may take v^{(3)} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.

λ_1 = 1: Want Av^{(1)} = v^{(1)}. Put v^{(1)} = \begin{pmatrix} a \\ b \\ c \end{pmatrix}, so that

7a − 18b = a
3a − 8b = b
3c = c ⇒ c = 0

Both of the first two equations give a = 3b. We may take v^{(1)} = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix}.

λ_2 = −2: Check: v^{(2)} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} works!
Note: Put

M = (v^{(1)} v^{(2)} v^{(3)}), the matrix of eigenvectors

= \begin{pmatrix} 3 & 2 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

Check: M^{−1} = \begin{pmatrix} 1 & −2 & 0 \\ −1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix},

using \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{−1} = \frac{1}{ad − bc} \begin{pmatrix} d & −b \\ −c & a \end{pmatrix}.

Then (conjugation of A by M):

M^{−1} A M = \begin{pmatrix} 1 & −2 & 0 \\ −1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 7 & −18 & 0 \\ 3 & −8 & 0 \\ 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 3 & 2 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & −2 & 0 \\ 0 & 0 & 3 \end{pmatrix},

the diagonal matrix of eigenvalues.
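This conjugation is easy to verify numerically; a minimal numpy sketch:

```python
import numpy as np

A = np.array([[7.0, -18.0, 0.0],
              [3.0,  -8.0, 0.0],
              [0.0,   0.0, 3.0]])
M = np.array([[3.0, 2.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # columns v^(1), v^(2), v^(3)

D = np.linalg.inv(M) @ A @ M             # conjugation of A by M
assert np.allclose(D, np.diag([1.0, -2.0, 3.0]))
```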

Since the eigenvalues of A are all (real and) distinct, we know that a
fundamental set (basis) of solutions of x′ = Ax is given by:

x^{(1)}, x^{(2)}, x^{(3)}, with x^{(j)} = v^{(j)} e^{λ_j t}, for each j ∈ {1, 2, 3}.

Explicitly,

x^{(1)} = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix} e^{t}, \quad x^{(2)} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} e^{−2t}, \quad x^{(3)} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{3t}.

The associated fundamental matrix is

Ψ = (x^{(1)} x^{(2)} x^{(3)}) = \begin{pmatrix} 3e^{t} & 2e^{−2t} & 0 \\ e^{t} & e^{−2t} & 0 \\ 0 & 0 & e^{3t} \end{pmatrix},

whose Wronskian is

W(x^{(1)}, x^{(2)}, x^{(3)}) = (3e^{−t} − 2e^{−t}) e^{3t} = e^{2t} ≠ 0.

Asymptotics of the fundamental solutions

x^{(1)}(0) = v^{(1)}: starting point at t = 0.

x^{(1)}(t) = v^{(1)} e^{t} → ∞ as t → ∞; the first two coordinates go to +∞
while the third one stays at 0. Also, as t → −∞, x^{(1)}(t) → \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. Similarly,

x^{(2)}(t) = v^{(2)} e^{−2t} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} e^{−2t} → \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, as t → ∞,

x^{(3)}(t) = v^{(3)} e^{3t} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{3t} → \begin{pmatrix} 0 \\ 0 \\ ∞ \end{pmatrix}, as t → ∞.

Note: No non-zero linear combination of x^{(1)}(t), x^{(3)}(t) goes to the equi-
librium solution \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} as t → ∞, while x^{(2)}, with

x^{(2)}(0) = v^{(2)} = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix},

does approach the equilibrium solution as t → ∞.
General Solution:

x = c_1 x^{(1)} + c_2 x^{(2)} + c_3 x^{(3)},

where c_1, c_2, c_3 are scalars.

The given initial condition requires that x(0) = \begin{pmatrix} 7 \\ 3 \\ −1 \end{pmatrix}. This gives a
system of equations for c_1, c_2, c_3:

c_1 \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix} + c_2 \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 7 \\ 3 \\ −1 \end{pmatrix}    (*)

Write C = \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} as a vector of constants.

Consider the matrix of eigenvectors M = \begin{pmatrix} 3 & 2 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, which diagonalizes
the matrix A defining the linear system. It follows that

M C = x(0).

We know that M is invertible, and so

C = M^{−1} x(0) = \begin{pmatrix} 1 & −2 & 0 \\ −1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 7 \\ 3 \\ −1 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ −1 \end{pmatrix}.

So the unique solution satisfying the Initial Condition is:

x = x^{(1)} + 2x^{(2)} − x^{(3)} = \begin{pmatrix} 3 \\ 1 \\ 0 \end{pmatrix} e^{t} + \begin{pmatrix} 4 \\ 2 \\ 0 \end{pmatrix} e^{−2t} − \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} e^{3t}.
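The same computation in code: solve MC = x(0) for C, then cross-check the resulting eigenbasis solution against a direct numerical integration (numpy/scipy sketch; the integration tolerances are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[7.0, -18.0, 0.0], [3.0, -8.0, 0.0], [0.0, 0.0, 3.0]])
M = np.array([[3.0, 2.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
lam = np.array([1.0, -2.0, 3.0])
x0 = np.array([7.0, 3.0, -1.0])

C = np.linalg.solve(M, x0)                 # solves M C = x(0); gives (1, 2, -1)
x = lambda t: M @ (C * np.exp(lam * t))    # eigenbasis form of the solution

# cross-check against direct numerical integration of x' = Ax
sol = solve_ivp(lambda s, y: A @ y, (0.0, 1.0), x0, rtol=1e-10, atol=1e-10)
assert np.allclose(sol.y[:, -1], x(1.0), atol=1e-6)
```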

Terminology

Let x′ = Ax be a homogeneous, linear system, with A an n × n-matrix
with constant coefficients.
If we have n solutions, say y^{(1)}, y^{(2)}, . . . , y^{(n)}, of this system of ODE's, then
the Wronskian of {y^{(1)}, . . . , y^{(n)}} is

W(y^{(1)}, y^{(2)}, . . . , y^{(n)}) = det(y^{(1)}, y^{(2)}, . . . , y^{(n)}).

The y^{(j)} give a basis of solutions in an interval (−a, a), for some a > 0, iff
W(y^{(1)}, . . . , y^{(n)}) ≠ 0 on (−a, a). This is the same as saying: the y^{(j)} are
linearly independent on the interval.
If x^{(1)}, x^{(2)}, . . . , x^{(n)} is a fundamental set of solutions, the associated fun-
damental matrix is

Ψ = (x^{(1)}, x^{(2)}, . . . , x^{(n)}).

A simple example where A is real, but has non-real eigenvalues, in the plane

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad x′ = Ax, \quad A = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix}

As we saw earlier, the eigenvalues of A are λ_1 = i, λ_2 = −i, with associated
eigenvectors

v^{(1)} = \begin{pmatrix} 1 \\ i \end{pmatrix}, \quad v^{(2)} = \begin{pmatrix} 1 \\ −i \end{pmatrix}.

We get distinct solutions:

z^{(1)} = v^{(1)} e^{λ_1 t} = \begin{pmatrix} 1 \\ i \end{pmatrix} e^{it} = \begin{pmatrix} e^{it} \\ ie^{it} \end{pmatrix}

z^{(2)} = v^{(2)} e^{λ_2 t} = \begin{pmatrix} 1 \\ −i \end{pmatrix} e^{−it} = \begin{pmatrix} e^{−it} \\ −ie^{−it} \end{pmatrix}

The only catch is that the solutions are complex, not real!

If z is a complex solution, i.e., if z′ = Az, then x = Re(z) and y = Im(z)
are also solutions, since A is real, Re(z′ ) = x′ , and Im(z′ ) = y′ . The nice
thing is that x, y are real solutions. Since

e^{±it} = cos t ± i sin t, \quad ±i e^{±it} = −sin t ± i cos t,

the real solutions in the example above are

x^{(1)}(t) = \begin{pmatrix} cos t \\ −sin t \end{pmatrix}, \quad y^{(1)}(t) = \begin{pmatrix} sin t \\ cos t \end{pmatrix},

and

x^{(2)}(t) = \begin{pmatrix} cos t \\ −sin t \end{pmatrix}, \quad y^{(2)}(t) = \begin{pmatrix} −sin t \\ −cos t \end{pmatrix}.

Note that

x^{(1)}(t) = x^{(2)}(t), \quad y^{(1)}(t) = −y^{(2)}(t).
So it suffices to consider just the solutions x^{(1)} and y^{(1)}. Moreover, their
Wronskian is

W(x^{(1)}, y^{(1)}) = det \begin{pmatrix} cos t & sin t \\ −sin t & cos t \end{pmatrix} = cos^2 t + sin^2 t = 1 ≠ 0.

So these two real solutions are linearly independent (over R), and the corre-
sponding fundamental matrix

Ψ = \begin{pmatrix} cos t & sin t \\ −sin t & cos t \end{pmatrix}

is a rotation matrix, representing the rotation (with center 0) of the points


in the plane through the angle t in the counterclockwise
( ) direction.
0 1
Finally, the general real solution of u′ = u is given by
−1 0
( ) ( )
cos t sin t
u(t) = b1 + b2 ,
− sin t cos t

where b1 , b2 are arbitrary real constants.
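Since Ψ(0) = I and Ψ′ = AΨ, this fundamental matrix coincides with the matrix exponential e^{At}; a short scipy sketch verifying this numerically (the sample times are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
for t in (0.3, 1.0, 2.0):
    Psi = np.array([[ np.cos(t), np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    # the fundamental matrix with Psi(0) = I equals the matrix exponential e^{At}
    assert np.allclose(Psi, expm(A * t))
```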

