
Department of Mathematical Sciences

University of South Africa


Pretoria

Ordinary Differential Equations

Only Study guide for MAT3706

Author
Prof M Grobbelaar

Revised by
Dr SA van Aardt
© 2013 University of South Africa

All rights reserved

Printed and published by the
University of South Africa
Muckleneuk, Pretoria

Page-layout by the Department of Mathematical Sciences

MAT3706/1/2014

Contents

PREFACE

CHAPTER 1   Linear Systems of Differential Equations
            Solution of Systems of Differential Equations by the Method of Elimination

CHAPTER 2   Eigenvalues and Eigenvectors and Systems of Linear Equations with Constant Coefficients

CHAPTER 3   Generalised Eigenvectors (Root Vectors) and Systems of Linear Differential Equations

CHAPTER 4   Fundamental Matrices
            Non–homogeneous Systems
            The Inequality of Gronwall

CHAPTER 5   Higher Order One–dimensional Equations as Systems of First Order Equations

CHAPTER 6   Analytic Matrices and Power Series Solutions of Systems of Differential Equations

CHAPTER 7   Nonlinear Systems
            Existence and Uniqueness Theorem for Linear Systems

CHAPTER 8   Qualitative Theory of Differential Equations
            Stability of Solutions of Linear Systems
            Linearization of Nonlinear Systems

APPENDIX A  Symmetric Matrices

APPENDIX B  Refresher Notes on Methods of Solution of One-dimensional Differential Equations

Preface

The module MAT3706 is a continuation of the module APM2611 in which one–dimensional differential
equations are treated. Obviously you should have a good knowledge of the contents of this module before
starting to study MAT3706.

As you will notice, Chapter 8 of the study guide refers you to Chapter 10 of the prescribed book and there
are no additional notes on this work in the study guide. Many exercises and examples in Chapters 1 – 7 of
the study guide also refer to the prescribed book. Therefore, although the study guide is to a great extent
complete, it is essential that you use the prescribed book together with the study guide. (Since the same
book is also prescribed for APM2611, you should already have this book.)

You will notice that the study guide contains a large number of exercises. Many of these were taken from
the prescribed book while others come from Goldberg and Schwartz (see References below). Do as many
as you find necessary to master the tutorial matter.

The prescribed book by Zill & Wright is simply referred to as Z&W in the study guide.

In compiling Chapters 1 to 7, we have, apart from the prescribed book, made use of the following books:

FINIZIO, N & LADAS, G, (1989), Ordinary Differential Equations with Modern Applications, Wadsworth Publishing Company, Belmont, California.

GOLDBERG, J L & SCHWARTZ, A J, (1972), Systems of Ordinary Differential Equations: An Introduction, Harper & Row, New York.

RICE, B J & STRANGE, J D, (1989), Differential Equations with Applications, The Benjamin/Cummings Publishing Company, Inc, Redwood City, California.

RAINVILLE, E D, (1981), Elementary Differential Equations, Collier Macmillan Publishers, London.

TENENBAUM, M & POLLARD, H, (1964), Ordinary Differential Equations, Harper & Row, New York.

CHAPTER 1

Linear Systems of Differential Equations


Solution of Systems of Differential Equations by the Method of
Elimination

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding systems
of linear differential equations (DE’s):

• polynomial differential operators;

• matrix notation of linear systems of DE’s;

• degeneracy of systems;

• homogeneous and non–homogeneous systems;

• conditions for the existence of a solution of a system of DE’s;

• conditions for the uniqueness of the solution of initial value problems;

• linear independence of solutions of systems of DE’s.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• write a system of DE’s in operator notation;

• determine whether a system is degenerate or not;

• solve a system of DE’s using the operator method (method of elimination);

• solve a system of DE’s using the method of triangularization;

• determine the linear independence of the solutions;

• apply the solution techniques for systems of DE’s in solving relevant physical problems.

1.1 INTRODUCTION

Mathematical models of real–life situations often lead to linear systems of differential equations. Linear
systems of differential equations arise for instance in the theory of electric circuits, in Economics and in
Ecology, the branch of science in which the interaction between different species is investigated. Systems
of differential equations have been used by, among others, L.F. Richardson¹ as early as 1939 in devising
a mathematical model of arms races, and by E. Ackerman¹ and colleagues in the study of a mathematical
model for the detection of diabetes. As we shall see later on (see Chapter 5) systems of differential equations
also appear when an n–th order linear differential equation is reduced to a linear system of differential
equations.

1.2 DEFINITIONS AND BASIC CONCEPTS

Definition 1.1 By a system of n first–order linear differential equations we mean a set of n simultaneous
equations of the form

ẋ1 (t) = a11 (t) x1 + a12 (t) x2 + . . . + a1n (t) xn + Q1 (t)
ẋ2 (t) = a21 (t) x1 + a22 (t) x2 + . . . + a2n (t) xn + Q2 (t)
   ⋮                                                                  (1.1)
ẋn (t) = an1 (t) x1 + an2 (t) x2 + . . . + ann (t) xn + Qn (t)

where the functions aij , i, j = 1, 2, . . . , n and the functions Qk , k = 1, 2, . . . , n, are all given functions of t
on some interval J. We assume that all the functions aij and all the functions Qk are continuous on J.

The functions aij (t) and Qk (t) are known functions, while x1 (t) , x2 (t) , . . . , xn (t) are the unknowns.
The functions aij (t) are called the coefficients of the linear system (1.1). We say that the system (1.1)
has constant coefficients when each aij is a constant function. The terms Qk (t) are called forcing terms.
The system (1.1) is called homogeneous when Qk (t) is zero for all k = 1, 2, . . . , n and non–homogeneous
(inhomogeneous) if at least one Qk (t) is not zero.

Remark
The homogeneous system obtained by putting Qk (t) = 0 for all k = 1, 2, . . . , n,

ẋ1 (t) = a11 (t) x1 + a12 (t) x2 + . . . + a1n (t) xn
ẋ2 (t) = a21 (t) x1 + a22 (t) x2 + . . . + a2n (t) xn
   ⋮                                                                  (1.2)
ẋn (t) = an1 (t) x1 + an2 (t) x2 + . . . + ann (t) xn

may be expressed in the form


Ẋ = A (t) X (1.3)

with A (t) = [aij (t)] , i, j = 1, 2, . . . , n, X = [x1 , x2 , . . . , xn ]T , the transpose of [x1 , x2 , . . . , xn ] and


Ẋ = [ẋ1 , ẋ2 , . . . , ẋn ]T . If all the functions aij are constants, (1.3) becomes Ẋ = AX.
¹ See pp. 168–169 of Ordinary Differential Equations with Modern Applications by N. Finizio and G. Ladas, Wadsworth Publishing Company, Belmont, California, 1989, for full references.

It is clear that a solution of this system (see Definition 1.3) should be an n–dimensional vector. Use your
knowledge of matrix multiplication to show that the product of the matrix A (t) and the column vector X
actually produces the vector

[a11 (t) x1 + a12 (t) x2 + . . . + a1n (t) xn , a21 (t) x1 + a22 (t) x2 + . . . + a2n (t) xn , . . . , an1 (t) x1 + an2 (t) x2 + . . . + ann (t) xn ]T .

Definition 1.2 A higher–order system with constant coefficients is a system of the form

P11 (D) x1 + P12 (D) x2 + . . . + P1n (D) xn = h1 (t)
P21 (D) x1 + P22 (D) x2 + . . . + P2n (D) xn = h2 (t)
   ⋮                                                                  (1.4)
Pn1 (D) x1 + Pn2 (D) x2 + . . . + Pnn (D) xn = hn (t)

where all the Pij , i, j = 1, 2, . . . , n are polynomial differential operators and all the functions hi (t),
i = 1, 2, . . . , n are defined on an interval J. The determinant

det [ P11 (D) · · · P1n (D) ]
    [ P21 (D) · · · P2n (D) ]
    [    ⋮                  ]
    [ Pn1 (D) · · · Pnn (D) ]

is called the determinant of the system (1.4). The system (1.4) is non–degenerate if its determinant
is non–zero, otherwise it is degenerate.

The determinant of a system of differential equations is of great importance, since it provides a method of
determining the correct number of arbitrary constants in a general solution of the system (see Theorem 1.9).

Definition 1.3 A solution of the system (1.1) of differential equations is an n–dimensional vector function

[x1 (t) , x2 (t) , . . . , xn (t)]T ,

of which each component is defined and differentiable on the interval J and which, when substituted into
the equations in (1.1), satisfies the equations for all t in J.

As in the case of ordinary differential equations, we have

Theorem 1.4 (The Superposition Principle) Any linear combination of solutions of a system of dif-
ferential equations is also a solution of the system.
For example, each of the two vectors [−2e2t , e2t ]T and [e3t , −e3t ]T is a solution of the system

ẋ1 = x1 − 2x2
ẋ2 = x1 + 4x2 .

By the superposition principle, the linear combination [−2c1 e2t + c2 e3t , c1 e2t − c2 e3t ]T , with c1 , c2 arbitrary
constants, is also a solution of the system.

Before defining the concept of a general solution of a system of differential equations, we need the con-
cept of linear independence of vector functions, which is obviously an extension of the concept of linearly
independent functions.

Definition 1.5 Suppose the vector functions

X1 (t) = [x11 (t) , x21 (t) , . . . , xn1 (t)]T , X2 (t) = [x12 (t) , x22 (t) , . . . , xn2 (t)]T , . . . , Xn (t) = [x1n (t) , x2n (t) , . . . , xnn (t)]T

are solutions, defined on an interval J, of a system of n first order linear differential equations in n
unknowns. We say that X1 (t) , X2 (t) , . . . , Xn (t) are linearly independent if it follows from

c1 X1 + c2 X2 + . . . + cn Xn = 0 for all t in J

that
c1 = c2 = . . . = cn = 0.

The following theorem provides a criterion which can be used to check the linear dependence of solutions
of a system of differential equations:

Theorem 1.6 The n solutions

[x11 (t) , x21 (t) , . . . , xn1 (t)]T , [x12 (t) , x22 (t) , . . . , xn2 (t)]T , . . . , [x1n (t) , x2n (t) , . . . , xnn (t)]T

of the homogeneous system (1.2) are linearly independent on the interval J if and only if the Wronskian
determinant

det [ x11 (t) . . . x1n (t) ]
    [ x21 (t) . . . x2n (t) ]
    [    ⋮                  ]
    [ xn1 (t) . . . xnn (t) ]

is never zero on J.
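For instance (our own sketch, assuming sympy), the Wronskian of the two solutions of the 2 × 2 system above is computed in one line:

import sympy as sp

t = sp.symbols('t')
W = sp.Matrix([[-2*sp.exp(2*t), sp.exp(3*t)],
               [sp.exp(2*t), -sp.exp(3*t)]]).det()
print(sp.simplify(W))  # exp(5*t), never zero, so the solutions are linearly independent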

We state without proof the following basic existence theorems for linear systems of differential equations:

Theorem 1.7 (Existence of a solution of a homogeneous system) There exist n linearly independent solutions of the system (1.2).
Furthermore, if the n column vectors

[x11 (t) , x21 (t) , . . . , xn1 (t)]T , [x12 (t) , x22 (t) , . . . , xn2 (t)]T , . . . , [x1n (t) , x2n (t) , . . . , xnn (t)]T

are linearly independent solutions of the linear system (1.2), a general solution is given by

[x1 (t) , x2 (t) , . . . , xn (t)]T = c1 [x11 (t) , x21 (t) , . . . , xn1 (t)]T + c2 [x12 (t) , x22 (t) , . . . , xn2 (t)]T + . . . + cn [x1n (t) , x2n (t) , . . . , xnn (t)]T ,

i.e.
x1 (t) = c1 x11 (t) + c2 x12 (t) + . . . + cn x1n (t)
x2 (t) = c1 x21 (t) + c2 x22 (t) + . . . + cn x2n (t)
   ⋮
xn (t) = c1 xn1 (t) + c2 xn2 (t) + . . . + cn xnn (t) ,

where c1 , c2 , . . . , cn are arbitrary constants.

Theorem 1.8 (Existence of a solution of a non–homogeneous system) If the n solutions

[x11 (t) , x21 (t) , . . . , xn1 (t)]T , [x12 (t) , x22 (t) , . . . , xn2 (t)]T , . . . , [x1n (t) , x2n (t) , . . . , xnn (t)]T

of the system (1.2) are linearly independent and if [x1p (t) , x2p (t) , . . . , xnp (t)]T is a particular solution
of the inhomogeneous system (1.1), then a general solution of (1.1) is given by

[x1 (t) , x2 (t) , . . . , xn (t)]T = c1 [x11 (t) , . . . , xn1 (t)]T + c2 [x12 (t) , . . . , xn2 (t)]T + . . . + cn [x1n (t) , . . . , xnn (t)]T + [x1p (t) , . . . , xnp (t)]T ,

i.e.
x1 (t) = c1 x11 (t) + c2 x12 (t) + . . . + cn x1n (t) + x1p (t)
x2 (t) = c1 x21 (t) + c2 x22 (t) + . . . + cn x2n (t) + x2p (t)
   ⋮
xn (t) = c1 xn1 (t) + c2 xn2 (t) + . . . + cn xnn (t) + xnp (t)

where c1 , c2 , . . . , cn are arbitrary constants.

Remark
In Theorem 1.7 we learn that a general solution of a system of n first–order linear differential equations
in n unknown functions contains n arbitrary constants. As we shall see in the next section the method
of elimination sometimes introduces redundant constants. The redundant constants can be eliminated by
substituting the solutions into the system and equating coefficients of similar terms.

In general the following theorem determines the correct number of arbitrary constants in a general solution
of a system of differential equations (which may be a higher–order system):

Theorem 1.9 The correct number of arbitrary constants in a general solution of a system of differential
equations is equal to the order of the determinant of the system, provided this determinant has non–zero
value.

In particular: The number of arbitrary constants in a general solution of the higher–order system

P11 (D) x1 + P12 (D) x2 + . . . + P1n (D) xn = h1 (t)
P21 (D) x1 + P22 (D) x2 + . . . + P2n (D) xn = h2 (t)
   ⋮                                                                  (1.5)
Pn1 (D) x1 + Pn2 (D) x2 + . . . + Pnn (D) xn = hn (t)

is equal to the degree of

∆ = det [ P11 (D) . . . P1n (D) ]
        [ P21 (D) . . . P2n (D) ]                                     (1.6)
        [    ⋮                  ]
        [ Pn1 (D) . . . Pnn (D) ]

If ∆ is identically zero, then the system has either infinitely many solutions, or no solutions.

We will return to the proof of the latter theorem in Section 1.4, where we will also discuss the degenerate
case when ∆ = 0 in order to be able to decide when a degenerate system has an infinite number of solutions
or no solution. We end this section with the Existence and Uniqueness Theorem for a linear system with
initial conditions.

Theorem 1.10 (Existence and Uniqueness Theorem for Linear Initial Value Problems) Assume
that the coefficients aij , i, j = 1, 2, . . . , n and the functions Qk (t), k = 1, 2, . . . , n of the system (1.1) are
all continuous on the interval J. Let t0 be a point in J and let x10 , x20 , . . . , xn0 , be n given constants. Then
the initial value problem (IVP ) consisting of the system (1.1) and the initial conditions

x1 (t0 ) = x10 , x2 (t0 ) = x20 , . . . , xn (t0 ) = xn0

has a unique solution [x1 (t) , x2 (t) , . . . , xn (t)]T . Furthermore this unique solution is valid throughout the
interval J.

We are now ready to treat the method of elimination (operator method).

1.3 THE METHOD OF ELIMINATION

In this section we show a method for solving non–degenerate systems of differential equations which
resembles the method of elimination of a variable used for solving systems of algebraic equations. Since the
method entails the application of differential operators to differential equations, the method is also known
as the operator method.

Before carrying on with this section, you should revise Sections 4.1 – 4.5 of Z&W which were done in
APM2611, as well as Section 4.9 of Z&W where the notion of differential operators is introduced and
discussed. Also read through Appendix B at the end of this study guide.

1.3.1 POLYNOMIAL OPERATORS


First we define:
Definition 1.11 An nth–order polynomial operator P (D) is a linear combination of differential operators of the form

an Dn + an−1 Dn−1 + . . . + a1 D + a0

where a0 , a1 , . . . , an are constants.
The symbol P (D) y indicates that P (D) is applied to an n–times differentiable function y. To emphasize
the fact that P (D) is an operator, it is actually preferable (but not compulsory) to write P (D) [y].

Definition 1.12 (Linearity property) The polynomial differential operator P (D) is a linear operator
in view of the fact that the following linearity properties are satisfied:
P (D) [y1 + y2 ] = P (D) [y1 ] + P (D) [y2 ] ,
P (D) [cy] = cP (D) [y] .
Definition 1.13 (Sum and Product of polynomial differential operators) The sum P1 (D)+P2 (D)
of two operators, P1 (D) and P2 (D) is obtained by writing P1 and P2 as linear combinations of the D
operator and adding coefficients of equal powers of D.
The product P1 (D) P2 (D) of two operators P1 (D) and P2 (D) is obtained by applying the operator P2 (D)
and then applying the operator P1 (D).
This is interpreted as follows:

(P1 (D) P2 (D)) [y] = P1 (D) [P2 (D) [y]] .

Remark
Polynomial operators can be shown to satisfy all the laws of elementary Algebra with regard to the basic
operations like addition and multiplication. They may therefore be handled in the same way as algebraic
polynomials. We can, for instance, factorize polynomial differential operators by using methods similar to
those used in factorizing algebraic polynomials.

Example 1.14

(a) If P1 (D) = D3 + D − 8 and P2 (D) = 3D2 − 5D + 1, then P1 (D) + P2 (D) = D3 + 3D2 − 4D − 7.

(b) The product operator (4D − 1) (D2 + 2) may be expanded to yield 4D3 − D2 + 8D − 2.

(c) The operator D3 − 2D2 − 15D may be factorized into the factors D (D − 5) (D + 3).

(d) Any linear system of differential equations may be written in the form (1.5). For example, the system

d2 x/dt2 − 4x + dy/dt = 0
−4 dx/dt + d2 y/dt2 + 2y = 0                                          (1.7)

can be written in operator notation as

(D2 − 4) [x] + D [y] = 0                                              (1)
−4D [x] + (D2 + 2) [y] = 0.                                           (2)

We are now ready to do:

Example 1.15 Solve the system (1.7) by using the elimination method (operator method). Eliminate x
first.

Solution
To eliminate x first, we apply the operator 4D to the first equation and the operator D2 − 4 to the second
equation of the system. For brevity’s sake, we denote these operations by 4D [(1)] and (D2 − 4) [(2)]. We
thus have

4 (D3 − 4D) [x] + 4D2 [y] = 0                                         (3)
−4 (D3 − 4D) [x] + (D4 − 2D2 − 8) [y] = 0.                            (4)

We now add the last two equations to obtain

(D4 + 2D2 − 8) [y] = 0.                                               (5)

From the auxiliary equation

m4 + 2m2 − 8 = (m2 + 4)(m2 − 2) = 0

we find
m2 = −4 or m2 = 2
so that
m = ±2i or m = ±√2.

The solution of equation (5) is therefore

y (t) = c1 e√2t + c2 e−√2t + c3 cos 2t + c4 sin 2t,

where c1 , . . . , c4 are arbitrary constants.

Substitution into (2) yields

−4D [x] + 2c1 e√2t + 2c2 e−√2t − 4c3 cos 2t − 4c4 sin 2t + 2c1 e√2t + 2c2 e−√2t + 2c3 cos 2t + 2c4 sin 2t = 0

whence
D [x] = c1 e√2t + c2 e−√2t − (1/2) c3 cos 2t − (1/2) c4 sin 2t.

Integration yields

x (t) = (√2/2) c1 e√2t − (√2/2) c2 e−√2t − (1/4) c3 sin 2t + (1/4) c4 cos 2t + c5 ,

with ci , i = 1, 2, . . . , 5 arbitrary constants.

The solution, therefore, contains five arbitrary constants whereas the determinant

det [ D2 − 4   D      ]  = (D2 − 4)(D2 + 2) + 4D2
    [ −4D      D2 + 2 ]

is of the fourth order. Substitution of the solution (x (t) , y (t)) into (1) yields c5 = 0. Consequently, a
general solution of the system is given by

x (t) = (√2/2) c1 e√2t − (√2/2) c2 e−√2t − (1/4) c3 sin 2t + (1/4) c4 cos 2t
y (t) = c1 e√2t + c2 e−√2t + c3 cos 2t + c4 sin 2t.
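As a check (our own sketch, not part of the guide, assuming sympy), the general solution can be substituted back into system (1.7); both residuals simplify to zero:

import sympy as sp

t, c1, c2, c3, c4 = sp.symbols('t c1 c2 c3 c4')
r2 = sp.sqrt(2)
x = r2/2*c1*sp.exp(r2*t) - r2/2*c2*sp.exp(-r2*t) - c3/4*sp.sin(2*t) + c4/4*sp.cos(2*t)
y = c1*sp.exp(r2*t) + c2*sp.exp(-r2*t) + c3*sp.cos(2*t) + c4*sp.sin(2*t)
print(sp.simplify(x.diff(t, 2) - 4*x + y.diff(t)))     # 0
print(sp.simplify(-4*x.diff(t) + y.diff(t, 2) + 2*y))  # 0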

Example 1.16 Solve the system

(D + 1) [x] − (D + 1) [y] = et (1)


(D − 1) [x] + (2D + 1) [y] = 5 (2)

by using the elimination method (operator method). Eliminate y first.

Solution
Eliminating y first we find:

(2D + 1) [(1)] : (2D2 + 3D + 1) [x] − (2D2 + 3D + 1) [y] = (2D + 1) [et ] = 3et    (3)
(D + 1) [(2)] : (D2 − 1) [x] + (2D2 + 3D + 1) [y] = (D + 1) [5] = 5                (4)
(3) + (4) : 3D (D + 1) [x] = 3et + 5.                                              (5)

From the auxiliary equation


m (m + 1) = 0

we have m = 0 or m = −1 so that the complementary function is given by

xC.F. (t) = c1 + c2 e−t

with c1 and c2 arbitrary constants. We find the particular integral xP.I. (t) by using the method of unde-
termined coefficients:
Let
xP.I. (t) = Aet + Bt

so that

ẋP.I. (t) = Aet + B
ẍP.I. (t) = Aet .

Substituting these equations in (5) we find

6Aet + 3B = 3et + 5.

Comparing coefficients we find

A = 1/2 and B = 5/3

so that
xP.I. (t) = (1/2) et + (5/3) t.

Therefore

x (t) = xC.F. (t) + xP.I. (t)
      = c1 + c2 e−t + (1/2) et + (5/3) t.
Substituting x(t) in (1) yields

(D + 1) [y] = (D + 1) [x] − et
            = c1 + (5/3) t + 5/3.                                     (6)

From the auxiliary equation m + 1 = 0 it follows that the complementary function yC.F. (t) is given by

yC.F. (t) = c3 e−t .

Using the method of undetermined coefficients we solve the particular integral yP.I. (t) as follows:
Let
yP.I. (t) = At + B.

Then
ẏP.I. (t) = A.

Substitution of these equations in (6) yields

At + (A + B) = c1 + (5/3) t + 5/3.

Comparing coefficients we find that

A = 5/3 and B = c1

so that
yP.I. (t) = (5/3) t + c1 .

Therefore

y (t) = yC.F. (t) + yP.I. (t)
      = c3 e−t + c1 + (5/3) t.
The solution therefore contains three arbitrary constants c1 , c2 and c3 whereas the determinant

det [ D + 1   −D − 1 ]
    [ D − 1   2D + 1 ]

is of the second order. Substitution of the solution (x (t) , y (t)) into (2) yields c3 = −2c2 . Consequently a
general solution of the system is given by

x (t) = c1 + c2 e−t + (1/2) et + (5/3) t
y (t) = c1 − 2c2 e−t + (5/3) t.

Exercise 1.17

(1) Find a general solution of each of the following systems:

(a) ẋ = 5x − 6y + 1
ẏ = 6x − 7y + 1

(b) ẋ = 3x − 2y + 2t2
ẏ = 5x + y − 1

(c) ẍ − ẏ = t
ẋ + 3x + ẏ + 3y = 2.

(2) Solve the following initial value problem:

ÿ = −y + ẏ + z
ż = y − z − 1
y (0) = 1, ẏ (0) = 0, z (0) = 1.

(3) Use the operator method (method of elimination) to find a general solution of each of the following
systems of ordinary differential equations:

(a) (3D + 2)[x] + (D − 6)[y] = 5et


(4D + 2)[x] + (D − 8)[y] = 5et + 2t − 3
NB: Eliminate y first.
(b) ẋ = 4x − ÿ + t2
ẏ = −x − ẋ
NB: Eliminate x first.

1.3.2 EQUIVALENT TRIANGULAR SYSTEMS

As noted previously, the method of elimination sometimes introduces redundant arbitrary constants into a
general solution of a system of differential equations, the reason being, as we have in fact seen in the above
example, that a polynomial operator in D operates on each equation. Since the elimination of such constants
is generally a tedious and time consuming process, we shall develop a method which immediately yields the
correct number of constants in a general solution of a (non–degenerate) system of differential equations.
The method amounts to a reduction of a system of differential equations to an equivalent triangular system.

Definition 1.18 If the system

P11 (D) x1 + P12 (D) x2 + . . . + P1n (D) xn = h1 (t)
P21 (D) x1 + P22 (D) x2 + . . . + P2n (D) xn = h2 (t)
   ⋮                                                                  (1.8)
Pn1 (D) x1 + Pn2 (D) x2 + . . . + Pnn (D) xn = hn (t)

is reducible by a method analogous to that of row operations in Matrix Algebra to the system

P11 (D) x1 = H1 (t)
P21 (D) x1 + P22 (D) x2 = H2 (t)
   ⋮                                                                  (1.9)
Pn1 (D) x1 + Pn2 (D) x2 + . . . + Pnn (D) xn = Hn (t) ,

then the system (1.9) is known as a triangular system equivalent to (1.8).

A linear system of differential equations is thus in triangular form if each succeeding equation in the system
has at least one unknown function more (or less) than the previous equation.

The concept of triangular system thus corresponds to that of triangular matrix.

From the definition and from the properties of determinants, the following conclusion can immediately be
drawn:

If a system of differential equations is reduced to an equivalent triangular system, the determinant and the
order of the determinant of the system remain unchanged.

In Theorem 1.22 we will show that a solution of an equivalent triangular system is also a solution of the
original system of differential equations. The proof contains the proof of Theorem 1.9 for n = 2.

In the next example we show how a linear system of differential equations can be reduced to an equivalent
triangular system.

Example 1.19 Reduce the following system of equations to an equivalent triangular system

(D3 + 2D2 − D + 1) [x] + (D4 − 2D2 + 2) [y] = 0                       (1)
(D − 3) [x] + (D3 + 1) [y] = 0                                        (2)

of the form

P1 (D)[y] = 0                                                         (A)
P2 (D)[x] + P3 (D)[y] = 0.                                            (B)

Solution
Consider the polynomials in D, i.e. the polynomials P (D); retain the equation in which the order of P (D)
is the lowest — in this case (2). Our aim is to reduce the order of the coefficient polynomial P (D) of x in
(1) step by step: first we get rid of the coefficient D3 of x. The notation (1) → (1) + k (D) [(2)] denotes that
(1) is replaced by (1) + k (D) [(2)] with k (D) a polynomial in D. By executing the operations as indicated,
we obtain the following systems, of which the last is the required triangular system.

(1) → (1) − D2 [(2)] : (5D2 − D + 1) [x] + (−D5 + D4 − 3D2 + 2) [y] = 0    (3)
                       (D − 3) [x] + (D3 + 1) [y] = 0.                     (2)

Note that the order of the coefficient polynomial P (D) of x in (3) has been reduced by one; it is no longer
a cubic polynomial, but a quadratic polynomial. This is the basic difference between the operator method
and the method of triangularization. (If we were applying the operator method, we would have executed
the operation

(D3 + 1) [(1)] − (D4 − 2D2 + 2) [(2)]

to eliminate y, or the operation

(D − 3) [(1)] − (D3 + 2D2 − D + 1) [(2)]

to eliminate x.)

Instead, we did not eliminate x, but only got rid of the coefficient D3 of x. The next step would be to
reduce the order of the coefficient polynomial P (D) of x in (3) (i.e. 5D2 − D + 1) by one, which means that
we have to get rid of the coefficient 5D2 :

(3) → (3) − 5D [(2)] : (14D + 1) [x] + (−D5 − 4D4 − 3D2 − 5D + 2) [y] = 0    (4)
                       (D − 3) [x] + (D3 + 1) [y] = 0.                       (2)

The order of the coefficient polynomial P (D) of x in the two equations is now the same. Retain either of
the two equations.

(4) − 14 [(2)] : 43x − (D5 + 4D4 + 14D3 + 3D2 + 5D + 12) [y] = 0    (5)
                 (D − 3) [x] + (D3 + 1) [y] = 0.                    (2)

From (5) we get

x = (1/43) (D5 + 4D4 + 14D3 + 3D2 + 5D + 12) [y] .

Substitute into (2). This yields the system

43x − (D5 + 4D4 + 14D3 + 3D2 + 5D + 12) [y] = 0                           (5)
((D − 3)/43) (D5 + 4D4 + 14D3 + 3D2 + 5D + 12) [y] + (D3 + 1) [y] = 0.    (6)

Simplification of (6) yields the triangular system

(D6 + D5 + 2D4 + 4D3 − 4D2 − 3D + 7) [y] = 0                          (7)
43x − (D5 + 4D4 + 14D3 + 3D2 + 5D + 12) [y] = 0.                      (5)

Note that in the triangular system above, the polynomial coefficient of x in (5) is a constant.
Remark
An equivalent triangular system is not unique. The system

(D − 2) [y1 ] + 2D [y2 ] = 2 − 4e2x
(2D − 3) [y1 ] + (3D − 1) [y2 ] = 0

can be reduced to two equivalent systems, namely

(D2 + D − 2) [y1 ] = 2 + 20e2x
D [y1 ] − 2y2 = −6 + 12e2x

and

(D2 + D − 2) [y2 ] = −6 − 4e2x
y1 − (D + 1) [y2 ] = −4 + 8e2x .

Exercise 1.20 Derive the two triangular systems mentioned in the above remark. Use the method of
elimination to solve these two systems as well as the original system of differential equations. What
conclusion can you draw with regard to the solutions?

Example 1.21 Reduce the following system of equations to an equivalent triangular system

ẍ + x + 4ẏ − 4y = 4et
ẋ − x + ẏ + 9y = 0

of the form
P1 (D)[y] = f1 (t)
P2 (D)[x] + P3 (D)[y] = f2 (t)
and SOLVE.

Solution
Note that there are two triangular forms to which you can reduce the system: either

P1 (D)[y] = f1 (t)
P2 (D)[x] + P3 (D)[y] = f2 (t)                                        (A)

or

P1 (D)[x] = f1 (t)
P2 (D)[x] + P3 (D)[y] = f2 (t)                                        (B)

In this case you are asked to reduce the system to the first triangular form.
Also note that the polynomial P2 (D) should be a constant in the triangular form A. This implies that once
you have solved for y(t) from the first equation in A, you will be able to solve for x(t) immediately from
the second equation in A. Therefore there won’t be any redundant constants as in the case of the operator
method. (In the case of the triangular form B, the polynomial P3 (D) should be a constant so that y(t) can
be solved immediately once you have obtained x(t) from the first equation.)
In operator notation the system becomes

(D2 + 1)[x] + 4(D − 1)[y] = 4et                                       (1)
(D − 1)[x] + (D + 9)[y] = 0.                                          (2)

We retain the equation in which the order of P (D) is the lowest, in this case (2). Furthermore, we execute
the following operations in order to obtain the required triangular system:

(1) → (1) − D[(2)] :

(D2 + 1 − D(D − 1))[x] + (4(D − 1) − D(D + 9))[y] = (D + 1)[x] − (D2 + 5D + 4)[y] = 4et    (3)
(D − 1)[x] + (D + 9)[y] = 0                                                                (2)

(3) → (3) − (2) :

(D + 1 − (D − 1))[x] + (−D2 − 5D − 4 − (D + 9))[y] = 2x − (D2 + 6D + 13)[y] = 4et    (4)
(D − 1)[x] + (D + 9)[y] = 0.                                                         (2)

Substituting

x = 2et + (1/2)(D2 + 6D + 13)[y]   (from (4))

into (2) we find

(D − 1) [2et + (1/2)(D2 + 6D + 13)[y]] + (D + 9)[y] = 0.

The triangular system is therefore given by

(D3 + 5D2 + 9D + 5)[y] = 0                                            (5)
2x − (D2 + 6D + 13)[y] = 4et                                          (4)

We now solve for y using equation (5):
From the auxiliary equation

m3 + 5m2 + 9m + 5 = (m + 1)(m2 + 4m + 5) = 0,

we get
m = −1 and m = −2 ± i,
so that
y(t) = e−2t (A cos t + B sin t) + Ce−t .

Substitution of y(t) in (4) then gives

x(t) = 2et + e−2t ((2A + B) cos t + (2B − A) sin t) + 4Ce−t .
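As a check (our own sketch, not from the guide, assuming sympy), the pair (x(t), y(t)) can be substituted back into the original system of Example 1.21; both residuals vanish:

import sympy as sp

t, A, B, C = sp.symbols('t A B C')
y = sp.exp(-2*t)*(A*sp.cos(t) + B*sp.sin(t)) + C*sp.exp(-t)
x = 2*sp.exp(t) + sp.exp(-2*t)*((2*A + B)*sp.cos(t) + (2*B - A)*sp.sin(t)) + 4*C*sp.exp(-t)
print(sp.simplify(x.diff(t, 2) + x + 4*y.diff(t) - 4*y - 4*sp.exp(t)))  # 0
print(sp.simplify(x.diff(t) - x + y.diff(t) + 9*y))                     # 0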

We now proceed to the proof of Theorem 1.22. We formulate and prove the theorem only for a linear
system consisting of two equations, since the proof is completely analogous for systems consisting of more
equations.

Theorem 1.22 Suppose that the system

f1 (D) [x] + g1 (D) [y] = h1 (t)                                      (1)
f2 (D) [x] + g2 (D) [y] = h2 (t)                                      (2)    (1.10)

has been reduced to the equivalent system

F1 (D) [x] = H1 (t)                                                   (3)
F2 (D) [x] + G2 (D) [y] = H2 (t) .                                    (4)    (1.11)

A general solution of (1.11) will then be a general solution of (1.10).

Proof. The proof is in the form of a discussion:


Suppose we solve equation (3) for x. The correct number of arbitrary constants in the solution is equal to
the order of F1 (D). Substitute this solution into equation (4), which is then reduced to an equation in y of
which the order equals that of G2 (D). The number of arbitrary constants in the solution of (4) equals the
order of G2 (D) . Consequently the total number of constants in the solution (x, y) of (1.11) is equal to the
sum of the orders of F1 (D) and G2 (D). This is, however, precisely the order of the determinant of (1.11),
i.e. the order of F1 (D) G2 (D). We have thus shown that, if we solve the system as described, the solution
contains the correct number of constants, (since the pair satisfies each equation identically) and that this
number equals the order of the determinant of (1.11). But the order of the determinant of the reduced
system (1.11) is the same as the order of the determinant of the original system (1.10). Thus we have not
only proved that a solution of (1.11) will also be a solution of (1.10) and contain the correct number of
arbitrary constants, but have at the same time proved Theorem 1.9. 

Exercise 1.23 Triangularize and solve the following system

(D2 + 1)[x] − 2D[y] = 2t


(2D − 1)[x] + (D − 2)[y] = 7.

NB: Reduce to the form

y = f1 (t)
P2 (D)[x] + P3 (D)[y] = f2 (t)

and SOLVE.

In all the examples that we have studied, the systems of differential equations were non–degenerate. We
now study:

1.4 DEGENERATE SYSTEMS OF DIFFERENTIAL EQUATIONS


We recall that a system of differential equations is degenerate when its determinant is zero. As in the case
of systems of algebraic equations, a degenerate system of differential equations has either no solution or
otherwise an infinite number of solutions.

By systematizing the elimination procedure and letting it assume the form of Cramer’s rule, we will be able
to decide when a degenerate system of differential equations has no solution or infinitely many. For
simplicity’s sake, we consider the system

P11 (D) [y1 ] + P12 (D) [y2 ] = f1 (t)
P21 (D) [y1 ] + P22 (D) [y2 ] = f2 (t) .                              (1.12)
By applying P22 (D) to the first equation of (1.12) and P12 (D) to the second equation of (1.12), and by
subtracting the resulting equations, we obtain

(P11 (D) P22 (D) − P12 (D) P21 (D)) [y1 ] = P22 (D) [f1 (t)] − P12 (D) [f2 (t)] .

This can be written in the form

det [ P11   P12 ] [y1 ] = det [ f1   P12 ] ,                          (1.13)
    [ P21   P22 ]             [ f2   P22 ]

where the determinant on the right hand side is interpreted to mean P22 [f1 ] − P12 [f2 ].
Similarly we have

det [ P11   P12 ] [y2 ] = det [ P11   f1 ] .                          (1.14)
    [ P21   P22 ]             [ P21   f2 ]
By using equations (1.13) and (1.14), we can make the last part of Theorem 1.9 precise for the case n = 2,
by stating:

If ∆ = P11 P22 − P12 P21 = 0, then the system (1.12) has infinitely many solutions if the determinants on
the right hand sides of equations (1.13) and (1.14) vanish; otherwise there is no solution.

The theorem can easily be generalized to hold for the case n ≥ 3.

We illustrate the validity of the theorem with a few examples:

Example 1.24 Find a general solution, if it exists, of the system

D [x] − D [y] = et
3D [x] − 3D [y] = 3et .

Solution
The system is clearly degenerate since

∆ = det [ D    −D  ] = D (−3D) − (−D) (3D) = 0.
        [ 3D   −3D ]

Furthermore,

det [ D    et  ] = D [3et ] − 3D [et ] = 3et − 3et = 0
    [ 3D   3et ]

and similarly

det [ et    D  ] = 3D [et ] − D [3et ] = 0
    [ 3et   3D ]

so that an infinite number of solutions exist (note that the system is equivalent to a single equation in two
unknowns). We could for instance choose

x (t) = et + c, y (t) = c, or x (t) = t + et + c, y (t) = t + c.

More generally, let y(t) be any arbitrary function of t. Then

x(t) = y(t) + et + c

where c is an arbitrary constant.

Example 1.25 Find a general solution, if it exists, of the system

D [x] + 2D [y] = et
D [x] + 2D [y] = t.

Solution
The system is degenerate since

∆ = det [ D   2D ] = D (2D) − 2D (D) = 0.
        [ D   2D ]

However, since

det [ D   et ] = D [t] − D [et ] = 1 − et ̸= 0,
    [ D   t  ]

the right hand side of the system does not become zero when x is eliminated. (In this case, the same is true
when y is eliminated.) Hence the system has no solution. (This could also quite easily be seen by noting
that, since the left hand sides of the two equations are identical, we must have that the right hand sides
should also be equal, i.e. et = t for all values of t!)
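The degeneracy test is easy to mechanize. Below is our own sketch (not from the guide, assuming sympy) for Example 1.25, with the operator D implemented as differentiation; since ∆ vanishes identically, only the right hand side determinant decides the matter:

import sympy as sp

t = sp.symbols('t')
D = lambda f: sp.diff(f, t)   # the operator D acting on a function of t
f1, f2 = sp.exp(t), t
# Delta = P11*P22 - P12*P21 is identically zero here (P11 = P21 = D, P12 = P22 = 2D);
# the right hand side determinant of (1.13) is P22[f1] - P12[f2]:
print(sp.simplify(2*D(f1) - 2*D(f2)))  # 2*exp(t) - 2, not identically zero: no solution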

Exercise 1.26 In the following exercises determine if the given systems have no solutions or infinitely many
solutions. If a system has infinitely many solutions, indicate how the set of solutions may be obtained.

(1) ẏ1 + ẏ2 = 3


ÿ1 − ẏ1 + ÿ2 − ẏ2 = x

(2) ẏ1 − y1 + ẏ2 = x


ÿ1 − ẏ1 + ÿ2 = 1

We conclude this chapter with a few applications.

1.5 APPLICATIONS²

As stated in Section 1.1, linear systems of differential equations are encountered in a large variety of physical
problems. We will treat a problem which occurs in the study of electrical circuits, as well as a mixture
problem, and finally an example in which the love and hate between two lovers are modelled (believe it if
you can!) Many other applications of linear differential equations may be found in books on differential
equations, a few of which will be mentioned at the end of this section.

Section 1.5.1 as well as Section 1.5.2 are adapted from Section 9–8 of Bernard J. Rice and Jerry D. Strange’s
book Ordinary Differential Equations with Applications, Second Edition, Brooks/Cole Publishing Company,
Pacific Grove, California, 1989, since these sections effectively illustrate the use of systems of differential
equations in modelling physical processes. Only numbers of equations and units (we use metric units) have
been changed.

1.5.1 ELECTRICAL CIRCUITS

The current in the electrical circuit shown in Figure 1.1 can be found by applying the elements of circuit
analysis (which you are, for the purpose of this module, not expected to be familiar with). Essential to the
analysis is Kirchhoff’s law, which states that the sum of the voltage drops around any closed loop is zero. In
applying Kirchhoff’s law, we use the fact that the voltage across an inductance is

vL = L dI/dt

and the voltage across a resistance is vR = IR.

Figure 1.1: An electrical circuit


² Students are not expected to be able to derive the mathematical models encountered in this section.

Let the current in the left loop of the circuit be I1 , and the current in the right loop be I2 . From the figure
we conclude that the current in the resistor R1 is I1 − I2 relative to the left loop, and I2 − I1 relative to
the right loop. Applying Kirchhoff’s law to the left loop, we get

L1 İ1 + R1 (I1 − I2 ) = v (t)                                         (1.15)

where İ1 = dI1 /dt. Similarly, the sum of the voltage drops around the right loop yields

L2 İ2 + R2 I2 + R1 (I2 − I1 ) = 0.                                    (1.16)

If the components of the circuit are given, the values of I1 and I2 can be found by solving this system of
differential equations.

Example 1.27 Consider the circuit shown in Figure 1.1. Determine I1 and I2 when the switch is closed if

L1 = L2 = 2 henrys,
R1 = 3 ohms,
R2 = 8 ohms, and
v (t) = 6 volts.

Assume the initial current in the circuit is zero.

Solution
As previously noted, the circuit is described by the system

L1 I˙1 + R1 (I1 − I2 ) = v (t)


L2 I˙2 + R2 I2 + R1 (I2 − I1 ) = 0.

Substituting the given values yields the initial value problem

2İ1 + 3I1 − 3I2 = 6
2İ2 + 11I2 − 3I1 = 0
I1 (0) = I2 (0) = 0.

Writing the system in operator form, we have

(2D + 3) [I1 ] − 3I2 = 6
−3I1 + (2D + 11) [I2 ] = 0.

Multiplying the first equation by 3 and applying the operator 2D + 3 to the second and then adding the
two equations, we obtain

(4D2 + 28D + 24) [I2 ] = 18,

or, dividing by 4,

(D2 + 7D + 6) [I2 ] = 9/2.

The solution of this second–order non–homogeneous linear differential equation consists of the sum of a
general solution of the corresponding homogeneous equation and a particular solution of the given non–
homogeneous equation. The solution of (D2 + 7D + 6) [I2 ] = 0 is c1 e−t + c2 e−6t .
The method of undetermined coefficients can now be used to show that

Ip2 = 3/4

is a particular solution of the non–homogeneous equation. Thus the expression for I2 is

I2 = c1 e−t + c2 e−6t + 3/4.

To find an expression for I1 , we substitute I2 into the second equation of the system:

2 (−c1 e−t − 6c2 e−6t ) + 11 (c1 e−t + c2 e−6t + 3/4) − 3I1 = 0.

Solving this equation for I1 yields

I1 = 3c1 e−t − (1/3) c2 e−6t + 11/4.

Finally, using I1 (0) = I2 (0) = 0 in the expressions for I1 and I2 , we get

0 = 3c1 − (1/3) c2 + 11/4,                                            (I1)
0 = c1 + c2 + 3/4.                                                    (I2)

Solving this system, we get c1 = −9/10 and c2 = 3/20, so the desired currents are

I1 = −(27/10) e−t − (1/20) e−6t + 11/4,
I2 = −(9/10) e−t + (3/20) e−6t + 3/4.
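Example 1.27 can also be cross-checked numerically (our own sketch, not part of the guide, assuming numpy and scipy) by integrating the circuit equations and comparing with the closed form:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, I):
    I1, I2 = I
    dI1 = (6 - 3*(I1 - I2)) / 2        # from 2*I1' + 3*(I1 - I2) = 6
    dI2 = (-8*I2 - 3*(I2 - I1)) / 2    # from 2*I2' + 8*I2 + 3*(I2 - I1) = 0
    return [dI1, dI2]

sol = solve_ivp(rhs, (0, 5), [0, 0], dense_output=True, rtol=1e-9)
t = 2.0
print(sol.sol(t)[0])                                 # numerical I1(2)
print(-27/10*np.exp(-t) - 1/20*np.exp(-6*t) + 11/4)  # closed-form I1(2)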
Exercise 1.28 This exercise is Exercise 3 from Section 9–8 of the quoted book by Rice and Strange.
Referring to Figure 1.1, we assume that

L1 = L2 = 1, R1 = 6, R = 1, and R2 = 3.

Prior to t = 0, the generator has an output of 6v which at t = 0 is increased to 12v. Determine the initial
current and the current for t > 0.

1.5.2 MIXTURE PROBLEMS


Consider the two tanks shown in Figure 1.2, in which a salt solution of concentration ci kg/litre is pumped
into tank 1 at a rate gi litre/min. A feedback loop interconnects both tanks so that solution is pumped from
tank 1 to tank 2 at a rate of g1 litre/min and from tank 2 to tank 1 at a rate of g2 litre/min. Simultaneously,
the solution in tank 2 is draining out at a rate of g0 litre/min. Find the amount of salt in each tank.
Let y1 and y2 represent the amount of salt (in kilograms) in tanks 1 and 2 at any time and G1 and G2
represent the amount of liquid in the tanks at any time t. Then the concentration in each tank is given by

c1 = y1 /G1 = concentration of salt in tank 1
c2 = y2 /G2 = concentration of salt in tank 2.

Figure 1.2: Flow between two tanks

The rate at which the salt is changing in tank 1 is

dy1 /dt = ci gi − c1 g1 + c2 g2
        = ci gi − (y1 /G1 ) g1 + (y2 /G2 ) g2 .                       (1.17)

Also, in tank 2, we have

dy2 /dt = (y1 /G1 ) g1 − (y2 /G2 ) g2 − (y2 /G2 ) g0 .                (1.18)

Equations (1.17) and (1.18) form a system in y1 and y2 . If G10 and G20 are the initial volumes of the two
tanks, then

G1 = G10 + gi t − g1 t + g2 t
G2 = G20 − g0 t − g2 t + g1 t.

Comment
In many mixture problems the rates are assumed to be balanced; that is

gi = g0 , gi + g2 = g1 ; and g0 + g2 = g1 .

In this problem we assume the volumes are constant, that is G1 = G10 and G2 = G20 .

Example 1.29 Assume both tanks in Figure 1.2 are filled with 100 litres of salt solutions of concentrations
c1 and c2 , respectively. Pure water is pumped into tank 1 at the rate of 5 litres/min. The solution is
thoroughly mixed and pumped into and out of tank 2 at the rate of 5 litres/min. Assume no solution
is pumped from tank 2 to tank 1. The system of equations describing this process is obtained by using
equations (1.17) and (1.18) with

ci = 0, gi = g1 = g0 = 5, and g2 = 0.

Thus

dy1 /dt = −5 (y1 /100)
dy2 /dt = 5 (y1 /100) − 5 (y2 /100) .
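For a system this simple, a computer algebra system gives the closed form directly (our own sketch, not part of the guide, assuming sympy; the constant names in the output may differ):

import sympy as sp

t = sp.symbols('t')
y1, y2 = sp.symbols('y1 y2', cls=sp.Function)
eqs = [sp.Eq(y1(t).diff(t), -y1(t)/20),
       sp.Eq(y2(t).diff(t), y1(t)/20 - y2(t)/20)]
print(sp.dsolve(eqs))  # y1 = C1*exp(-t/20), y2 = (C2 + C1*t/20)*exp(-t/20)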

Exercise 1.30 The mixture problems below are Exercises 11–13 from Section 9–8, again from the cited
book by Rice and Strange.

(1) Solve for the amount of salt at any time in two 200–litre tanks if the input is pure water and

gi = g0 = 5, g1 = 7, and g2 = 2.

Assume y1 (0) = y2 (0) = 0.

(2) Solve for the amount of salt at any time in two 200–litre tanks if the input is a salt solution with
a concentration of 0.5 kilogram/litre and gi = g1 = g0 = 12 litres/minute. Assume g2 = 0 and
y1 (0) = y2 (0) = 0.

(3) Solve for the amount of salt at any time in two 400–litre tanks if the input is a salt solution with a
concentration of 0.5 kilogram/litre,

gi = g0 = 5, g1 = 7, and g2 = 2.

Assume y1 (0) = y2 (0) = 0.

Our final application on linear systems of differential equations is a rather humoristic one, just in case none
of the above applications appeal to you.

1.5.3 LOVE AFFAIRS

This application is due to Steven H. Strogatz, “Love Affairs and Differential Equations”, Math Magazine 61
(1988): 35.

Two lovers, Romeo and Juliet, are such that the more Juliet loves Romeo, the more he begins to dislike her.
On the other hand, when Juliet’s love for Romeo begins to taper off, the more his affection for her begins
to grow. For Juliet’s part, her love for Romeo grows when he loves her and dissipates when he dislikes her.
Introduce r (t) to represent Romeo’s love/hate for Juliet at time t and j (t) to represent Juliet’s love/hate
for Romeo at time t. For either function a positive value represents love and a negative value represents
hate. If a and b are positive constants, then this love affair is modelled by the following system:
dr/dt = −aj,
dj/dt = br.
Solve this system. Show that their love affair is a never–ending cycle (ellipse) of love and hate. What
percentage of the time do they achieve simultaneous love?
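Before proving anything, you can experiment numerically. The sketch below is our own (not part of the guide), with the particular choice a = b = 1 and initial state r(0) = 1, j(0) = 0, assuming numpy and scipy; it estimates the fraction of time both functions are positive:

import numpy as np
from scipy.integrate import solve_ivp

a = b = 1.0
sol = solve_ivp(lambda t, u: [-a*u[1], b*u[0]], (0, 20*np.pi), [1.0, 0.0],
                t_eval=np.linspace(0, 20*np.pi, 20001))
r, j = sol.y
print(np.mean((r > 0) & (j > 0)))  # approximately 0.25 for this choice of parameters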

For more applications please read Section 3.3 of Z&W.



CHAPTER 2

Eigenvalues and Eigenvectors and Systems of Linear Equations with Constant Coefficients

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding the
eigenvalue–eigenvector method of solving linear systems of DE’s:

• the characteristic equation C (λ) = 0 of the matrix A;

• finding eigenvalues and their corresponding eigenvectors;

• the linear independence of eigenvectors;

• complex eigenvalues and their corresponding eigenvectors;

• multiple roots of C (λ) = 0 – finding a second linearly independent eigenvector corresponding to a double root of C (λ) = 0;

• solving initial value problems using the eigenvalue–eigenvector method.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• find the eigenvalues of a matrix A;

• determine the corresponding eigenvectors;

• use the eigenvalue–eigenvector method to find solutions for a system of DE’s;

• determine the linear dependence/independence of the solutions;

• find a general solution for the system of DE’s;

• find real solutions for a system of DE’s which has complex eigenvalues and eigenvectors;

• find two linearly independent eigenvectors which correspond to a double (repeated) root of C (λ) = 0;

• solve initial value problems using the eigenvalue–eigenvector method.



2.1 INTRODUCTION

We have already pointed out that a system of n linear equations in functions x1 (t) , x2 (t) , . . . , xn (t) and
with constant coefficients aij may be expressed in the form Ẋ = AX, with A the n × n coefficient matrix
and X the vector [x1 , . . . , xn ]T . Thus the system

ẋ − y = x
ẏ + y = 0
ż − 2z = 0

may be expressed in the form

Ẋ = [ 1    1   0 ] X with X = [ x ] .
    [ 0   −1   0 ]            [ y ]
    [ 0    0   2 ]            [ z ]

In this chapter it will be shown that the solutions of Ẋ = AX may be found by studying the eigenvalues
and eigenvectors of A. This approach is known as the eigenvalue–eigenvector method. If this method yields
n linearly independent solutions of a system of differential equations, then a general solution is a linear
combination of these solutions — the same as that obtained by applying the method of elimination which
was described in the previous chapter.

In what follows, A is an n × n matrix with real entries. For the sake of completeness we give the following
definitions and theorem with which you should be familiar:

Definition 2.1 If a vector U ̸= 0 and a real number λ exist such that

AU = λU,

i.e.
(A − λI) U = 0,

where I is the identity matrix, then λ is said to be an eigenvalue of A. The vector U is called the
associated or corresponding eigenvector.

Theorem 2.2 The number λ is an eigenvalue of A if and only if

det (A − λI) ≡ |A − λI| = 0. (2.1)

Equation (2.1) is called the characteristic equation of A.

Definition 2.3

C (λ) = |A − λI| = det [ a11 − λ   a12       . . .   a1n     ]
                       [ a21       a22 − λ   . . .   a2n     ]
                       [    ⋮         ⋮       ⋱        ⋮     ]
                       [ an1       an2       . . .   ann − λ ]

is known as the characteristic polynomial of A.
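Numerically, eigenvalues and eigenvectors are obtained in one call (our own sketch, not part of the guide, assuming numpy; the matrix is the one of Exercise 2.4(b) below):

import numpy as np

A = np.array([[-1.0, 2.0, 2.0],
              [2.0, 2.0, 2.0],
              [-3.0, -6.0, -6.0]])
lam, U = np.linalg.eig(A)
print(np.round(lam, 6))  # 0, -2 and -3 (in some order; compare the answer to Exercise 2.6(2))
print(np.round(U, 3))    # the columns are corresponding eigenvectors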

Exercise 2.4 Find the eigenvalues and eigenvectors of A where

(a) A = [ 1   1 ]
        [ 0   1 ]

(b) A = [ −1    2    2 ]
        [  2    2    2 ]
        [ −3   −6   −6 ]

2.2 THE EIGENVALUE–EIGENVECTOR METHOD

As far as solutions of Ẋ = AX are concerned, the analogy between ẋ = ax (a a constant) and Ẋ = AX
gives rise to the conjecture that exponentials should satisfy the equation Ẋ = AX. Let us assume that
X (t) = eλt U is a solution. Since Ẋ = λeλt U, U and λ must satisfy (from Ẋ = AX)

λeλt U = Aeλt U = eλt AU,

i.e.
eλt (A − λI) U = 0,

i.e.
(A − λI) U = 0,

since eλt ̸= 0. We note that this equation expresses precisely the connection between an eigenvalue λ of
A and the associated eigenvector U. If λ is real, then U ̸= 0 is the solution of a homogeneous system of
algebraic equations with real coefficients and may, therefore, be chosen as a real vector.
The following theorem can now be formulated:

Theorem 2.5 If U1 is a real eigenvector of A corresponding to the real eigenvalue λ1 , then X1 (t) = eλ1 t U1
is a solution of Ẋ = AX for all t.

Exercise 2.6

(1) Prove Theorems 2.2 and 2.5.

(2) Find solutions of Ẋ = AX with

        A = [ −1    2    2 ] .
            [  2    2    2 ]
            [ −3   −6   −6 ]

    Answer:
    X1 = k1 [0, 1, −1]T , X2 = e−2t k2 [−2, 1, 0]T , X3 = e−3t k3 [1, 0, −1]T .

    (In general we will not provide answers, as solutions can be checked by substituting into the original
    equation. Do this as an exercise.)
As in Chapter 1 we have

Theorem 2.7 (The Superposition Principle) Assume that X1 (t) , . . . , Xr (t) are solutions of Ẋ = AX.
Then
X (t) = k1 X1 (t) + . . . + kr Xr (t)
with ki , i = 1, 2, . . . , r arbitrary constants, is also a solution.

Exercise 2.8 Prove Theorem 2.7.



Remark 2.9

(1) In the proof use is made of the fact that for any n × n matrix A

    A {k1 X1 (t) + k2 X2 (t) + . . . + kr Xr (t)} = k1 AX1 (t) + . . . + kr AXr (t) ,

    i.e.
    A (∑ i=1..r ki Xi (t)) = ∑ i=1..r ki AXi (t) .

(2) It follows from Theorem 2.7 that the set of solutions of Ẋ = AX forms a vector space (choose the
    trivial solution X ≡ 0 as the zero element). We may, therefore, now use the expression “the space of
    solutions” or “the solution space”.
    It will be shown later on that the dimension of the solution space corresponds with that of A, i.e. if
    A is an n × n matrix, the space of solutions of Ẋ = AX will be n–dimensional.

2.3 LINEAR INDEPENDENCE OF EIGENVECTORS

We recall

Definition 2.10 The vectors X1 , . . . , Xn are linearly independent if it follows from

c1 X1 + . . . + cn Xn = 0

that ci = 0 with ci , i = 1, . . . , n arbitrary constants.

The following theorem supplies a condition for eigenvectors U1 , U2 , . . . , Un , corresponding to eigenvalues


λ1 , . . . , λn of A, to be linearly independent, in other words, a condition for solutions

X1 (t) , . . . , Xn (t) with Xi (t) = eλi t Ui , i = 1, . . . , n

to be linearly independent.

Theorem 2.11 Suppose the n × n matrix A has different (or distinct) eigenvalues λ1 , . . . , λn corre-
sponding to eigenvectors U1 , . . . Un . Then U1 , . . . , Un are linearly independent.

Proof. We prove the statement by means of the induction principle. We start with n = 2. Suppose

c1 U1 + c2 U2 = 0. (1)

If A is applied to both sides of (1), we find

0 = A(c1 U1 + c2 U2 ) = c1 AU1 + c2 AU2

or
c1 λ1 U1 + c2 λ2 U2 = 0, since AUi = λi Ui . (2)

We now multiply (1) by λ1 and subtract from (2). This gives

(c1 λ1 U1 + c2 λ2 U2 ) − (c1 λ1 U1 + c2 λ1 U2 ) = 0

or
c2 (λ2 − λ1 )U2 = 0.

Since U2 ̸= 0 (from the definition of an eigenvector) and since λ2 ̸= λ1 , we derive that c2 = 0. By


substituting c2 = 0 in (1), we find that c1 = 0, which proves the validity of the theorem for n = 2. Now
suppose the theorem holds for n = k, i.e. assume that any k eigenvectors which correspond to k distinct
eigenvalues are linearly independent. We now prove that the theorem holds for n = k + 1. Hence, we
assume that
c1 U1 + c2 U2 + . . . + ck Uk + ck+1 Uk+1 = 0. (3)

By applying A to both sides of (3) and making use of the fact that AUi = λi Ui , we find that

c1 λ1 U1 + c2 λ2 U2 + . . . + ck λk Uk + ck+1 λk+1 Uk+1 = 0. (4)

Next, multiply both sides of (3) by λk+1 and subtract from (4):

c1 (λ1 − λk+1 )U1 + c2 (λ2 − λk+1 )U2 + . . . + ck (λk − λk+1 )Uk = 0.

From the induction assumption we have that U1 , U2 , . . . , Uk are linearly independent. Thus

c1 (λ1 − λk+1 ) = c2 (λ2 − λk+1 ) = . . . = ck (λk − λk+1 ) = 0,

and since λi ̸= λk+1 for i = 1, 2, . . . , k, we derive that c1 = c2 = . . . = ck = 0. Thus from (3) it follows that
ck+1 = 0. The statement therefore holds for n = k + 1, thereby completing the proof.

Exercise 2.12

Find an n × n matrix A such that although A does not have n different eigenvalues, n linearly independent
corresponding eigenvectors still exist.

A natural question to ask is: which n × n matrices have n linearly independent eigenvectors? An important
class of matrices with this property, is the class of symmetric matrices. These matrices appear frequently
in applications. We will return to this class of matrices in Chapter 3.

Exercise 2.13

(1) Study Appendix A at the end of the study guide which deals with symmetric matrices.

(2) For each of the following matrices, find three linearly independent eigenvectors.

            [ 2  0  0 ]          [ 1   0    0  ]          [ 1  1  1 ]
    (a) A = [ 0  1  0 ]  (b) A = [ 0   3    √2 ]  (c) A = [ 1  1  1 ] .
            [ 0  0  1 ]          [ 0   √2   2  ]          [ 1  1  1 ]

(3) Write out the details which prove that the complex conjugate of Au equals A applied to the complex conjugate of u, i.e. that conjugation commutes with multiplication by A, if A is real.

(4) Solve ẋ = Ax, x(0) = x0 for the matrix A in 2(c) above, when

    (a) x0 = [1, 1, 1]T   (b) x0 = [1, 0, 0]T .

(5) If A and B are symmetric and AB = BA, prove that AB is symmetric.

(6) A is antisymmetric if AT = −A. Show that if A is antisymmetric, then uT Av = −vT Au.

(7) Show that the eigenvalues of a real, antisymmetric matrix are pure imaginary (0 may be considered
a pure imaginary number).

(8) Prove that if A is a real, (2n + 1) × (2n + 1) antisymmetric matrix, then λ = 0 is an eigenvalue of A.

We recall that a convenient method of determining whether solutions

X1 (t) = [x11 (t) , x21 (t) , . . . , xn1 (t)]T , X2 (t) = [x12 (t) , x22 (t) , . . . , xn2 (t)]T , . . . , Xn (t) = [x1n (t) , x2n (t) , . . . , xnn (t)]T

of a system of linear equations are linearly independent, is to find the Wronskian of the solutions. We recall
that the Wronskian W (t) of the solutions X1 (t) , . . . , Xn (t) is the determinant

W (t) = |X1 (t) , X2 (t) , . . . , Xn (t)| = det [ x11 (t) . . . x1n (t) ]
                                                 [    ⋮      ⋱      ⋮   ]
                                                 [ xn1 (t) . . . xnn (t) ]

By recalling that the value of a determinant in which one of the rows or columns is a linear combination
of another row or column is zero, it follows immediately that W = 0 if and only if X1 , . . . , Xn are linearly
dependent. Therefore, if W ̸= 0, the solutions X1 , . . . , Xn are linearly independent.
Since every Xi , i = 1, . . . , n is a function of t, we must investigate the question whether it is possible that
on an interval I ⊂ R, W could be zero at some points and not at others. The following theorem, to which
we shall return in Chapter 4, settles the question.

Theorem 2.14 If W , the Wronskian of the solutions of a system of differential equations, is zero at a
point t0 in an interval I, it is zero for all t ∈ I.

The statement of the theorem is equivalent to the following statement: Either

W (t) = 0 ∀t ∈ I

or
W (t) ̸= 0 ∀t ∈ I.

Example 2.15 Solve the initial value problem

Ẋ = [ 6    8 ] X ≡ AX, with X (0) = [ 1 ] .
    [ −3  −4 ]                      [ 2 ]

Solution
We solve the characteristic equation

C (λ) = det [ 6 − λ    8      ] = 0
            [ −3      −4 − λ  ]

to find the eigenvalues of the matrix. This gives

C (λ) = (6 − λ) (−4 − λ) + 24 = λ (λ − 2) = 0

so that λ1 = 0 and λ2 = 2.
To find the eigenvector U1 corresponding to λ1 = 0 we solve (using the equation (A − λI) U = 0 with
λ = λ1 = 0)

[ 6    8 ] [ u1 ]   [ 0 ]
[ −3  −4 ] [ u2 ] = [ 0 ]

which is equivalent to the equation

3u1 + 4u2 = 0.

Choosing, for example, u1 = 4c1 where c1 is an arbitrary constant, it follows that

u2 = −(3/4) u1 = −(3/4) (4c1 ) = −3c1

so that
U1 = [u1 , u2 ]T = c1 [4, −3]T

and consequently
X1 (t) = c1 e0t [4, −3]T = c1 [4, −3]T

is a solution of Ẋ = AX corresponding to λ = 0.
Similarly, for λ2 = 2 we solve (again using the equation (A − λI) U = 0 with λ = λ2 = 2)

[ 4    8 ] [ u1 ]   [ 0 ]
[ −3  −6 ] [ u2 ] = [ 0 ]

to find
U2 = [u1 , u2 ]T = c2 [−2, 1]T

so that
X2 (t) = c2 e2t [−2, 1]T

is a solution of Ẋ = AX corresponding to λ = 2.
The general solution is therefore given by

X (t) = c1 [4, −3]T + c2 e2t [−2, 1]T .

From the initial value X (0) = [1, 2]T , we now solve for c1 and c2 :

1 = 4c1 − 2c2
2 = −3c1 + c2

which yields c1 = −5/2 and c2 = −11/2.

The solution of the initial value problem is therefore given by

X = [x1 (t) , x2 (t)]T = −(5/2) [4, −3]T − (11/2) e2t [−2, 1]T
  = [−10 + 11e2t , (15/2) − (11/2) e2t ]T .
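The same computation can be organised numerically (our own sketch, not part of the guide, assuming numpy): write X(0) in the eigenvector basis and propagate each coordinate with its exponential.

import numpy as np

A = np.array([[6.0, 8.0], [-3.0, -4.0]])
lam, U = np.linalg.eig(A)           # eigenvalues 0 and 2 (in some order)
c = np.linalg.solve(U, [1.0, 2.0])  # coefficients from X(0) = [1, 2]^T
t = 1.0
print(U @ (c*np.exp(lam*t)))        # matches [-10 + 11e^2, 15/2 - (11/2)e^2]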
Exercise 2.16

(1) Study Sections 8.1 and 8.2 up to p. 337 of Z&W.

(2) Do problems 1 – 14 of Exercises 8.2.1 on p. 346 of Z&W.

By using techniques to be developed in the next section, n linearly independent solutions X1 (t) , X2 (t) , . . . ,
Xn (t) of the equation Ẋ = AX, can always be found, even if some of, or all, the roots of the characteristic
equation are coincident. This implies that the space of solutions of Ẋ = AX is n–dimensional: the solution
space is namely a subspace of an n–dimensional space, so that its dimension cannot exceed n. That its
dimension is exactly n, now follows from the existence of n linearly independent solutions.

From the above it follows that any solution of Ẋ = AX is of the form X (t) = k1 X1 (t) + . . . + kn Xn (t)
with Xi (t) , i = 1, . . . , n, the above solutions.

Since any other set of n linearly independent solutions (known as a fundamental set of solutions) also forms
a base for the solution space, it is sufficient, in order to determine a general solution of Ẋ = AX, to find in
any manner any set of n linearly independent solutions.

If one verifies, by differentiation and by evaluating the Wronskian, that they are actually linearly indepen-
dent solutions, no theoretical justification for one’s method is needed.

From the above, it is clear that if the equation C (λ) = 0 has m–fold roots (m < n), a method will have to be found for determining the (n − m) solutions of Ẋ = AX that may still be lacking and which, together with the solutions already found, constitute a set of n linearly independent solutions. The technique for finding these solutions will be described in Section 2.5 as well as in Chapter 3.

2.4 COMPLEX EIGENVALUES AND EIGENVECTORS

Even if all the entries of A are real, it often happens that the eigenvalues of A are complex. In order to
extend Theorem 2.5 to the case where λ is complex, we must define eλt for complex λ. Let λ = a + ib with
a and b real numbers. In view of Euler’s equation we have

e(a+ib)t = eat (cos bt + i sin bt)

– a complex number for every real value of t.

Let λ be a complex eigenvalue with associated eigenvector U. We now have

d ( λt )
e U = λeλt U = eλt λU = eλt AU = Aeλt U.
dt

Thus eλt U is a solution of Ẋ = AX as in the real case.

Note that e^{λt} ̸= 0 in view of

|e^{λt}|^2 = |e^{(a+ib)t}|^2 = |e^{at}|^2 |e^{ibt}|^2 = e^{2at} (cos^2 bt + sin^2 bt) = e^{2at} > 0

for all values of t.

Exercise 2.17 Prove that

(d/dt) (e^{(a+ib)t}) = (a + ib) e^{(a+ib)t},

i.e.

(d/dt) (e^{λt}) = λ e^{λt}

for λ complex.

Consider again the complex vector e^{λt} U. If U = V + iW with V, W ∈ Rn and λ = a + ib, then

e^{λt} U = e^{at} (cos bt + i sin bt) (V + iW)
         = e^{at} (V cos bt − W sin bt) + i e^{at} (V sin bt + W cos bt).

Consequently

Re (e^{λt} U) = e^{at} (V cos bt − W sin bt)

and

Im (e^{λt} U) = e^{at} (V sin bt + W cos bt).

The following theorem shows that complex eigenvalues of A may lead to real solutions of Ẋ = AX.

Theorem 2.18 If X (t) is a solution of Ẋ = AX, A real, then Re X (t) and Im X (t) are solutions.

Proof. We must show that

(d/dt) {Re X (t)} = A {Re X (t)}

and similarly for Im X (t).
From our assumption we have

(d/dt) {X (t)} = AX (t) .

Therefore

(d/dt) {Re X (t)} = Re ((d/dt) {X (t)}) = Re {AX (t)} = A {Re X (t)}

since the elements of A are real.

The proof for Im X (t) is similar. 

If λ = a + ib and U = V+iW, the theorem may be reformulated:



Theorem 2.19 If λ = a + ib is an eigenvalue of A and U = V + iW the corresponding eigenvector, then two solutions of Ẋ = AX are

X1 (t) = e^{at} (V cos bt − W sin bt)
X2 (t) = e^{at} (V sin bt + W cos bt).                               (2.2)

Example 2.20 Find real–valued fundamental solutions of

Ẋ = [2  3; −3  2] X.

Solution
The characteristic equation is λ2 −4λ+13 = 0. The roots are λ1 = 2+3i and λ2 = 2−3i. Choose λ = 2+3i.
The eigenvector equation is now
[−3i  3; −3  −3i] [v1; v2] = [−3iv1 + 3v2; −3v1 − 3iv2] = [0; 0].

A solution of this linear system is v2 = iv1 , where v1 is any nonzero constant (note that the two equations
are equivalent). We take v1 = 1 so that V = (1, i)T is an eigenvector. A complex–valued solution is
X1 (t) = e^{(2+3i)t} [1; i]
       = e^{2t} (cos 3t + i sin 3t) {[1; 0] + i [0; 1]}
       = e^{2t} {[cos 3t; − sin 3t] + i [sin 3t; cos 3t]}.

Two linearly independent real–valued solutions are

e^{2t} [cos 3t; − sin 3t]   and   e^{2t} [sin 3t; cos 3t].

As for solutions corresponding to λ = 2 − 3i : This eigenvalue yields identical solutions up to a constant, i.e.
solutions which can be obtained by multiplying the solutions already found by a constant. This happens
whenever eigenvalues occur in complex conjugate pairs. The implication of this is that you need
not construct solutions corresponding to the second complex eigenvalue as this yields nothing new.

A general solution of the given problem is

c1 e^{2t} [cos 3t; − sin 3t] + c2 e^{2t} [sin 3t; cos 3t].
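As a quick check (again outside the prescribed material), the following Python sketch, assuming NumPy, confirms that both real–valued solutions of Example 2.20 satisfy Ẋ = AX at a sample point:

    # Verify the real solutions obtained from the complex eigenvalue 2 + 3i.
    import numpy as np

    A = np.array([[2.0, 3.0], [-3.0, 2.0]])

    def X1(t):
        return np.exp(2 * t) * np.array([np.cos(3 * t), -np.sin(3 * t)])

    def X2(t):
        return np.exp(2 * t) * np.array([np.sin(3 * t), np.cos(3 * t)])

    t, h = 0.4, 1e-6
    for X in (X1, X2):
        Xdot = (X(t + h) - X(t - h)) / (2 * h)           # numerical derivative
        print(np.allclose(Xdot, A @ X(t), atol=1e-4))    # True, True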

Exercise 2.21 Study Section 8.2.3 on pp. 342 –346 of Z&W.

Exercise 2.22 Do Problems 33 – 46 of Exercise 8.2.3 on pp. 347 – 348 of Z&W.



2.5 NEW SOLUTIONS OF Ẋ = AX

Assume that the equation C (λ) = 0 has multiple roots. In order to find the general solution of Ẋ = AX,
we seek a method for determining the eigenvectors which may be lacking.

Consider the system

Ẋ = [1  1; 0  1] X.

Then

C (λ) = det [1 − λ  1; 0  1 − λ] = (1 − λ)^2

so that λ = 1 is a double root of C (λ) = 0. An associated eigenvector is [1; 0]. Thus X1 (t) = e^t [1; 0] is a solution.

Suppose that X̂ (t) = [x1; x2] is a second solution. Then

(d/dt) [x1; x2] = [1  1; 0  1] [x1; x2] = [x1 + x2; x2].

This yields the system

ẋ1 = x1 + x2 (1)
ẋ2 = x2 (2)

for which we find (from (2)) that

x2 (t) = et . (3)

Substitution of (3) in (1) gives


ẋ1 = x1 + et

which has the complementary function


x1C.F. (t) = et

and the particular integral

x1P.I. (t) = tet (from the shift property – see Appendix B),

so that x1 (t) = et + tet together with x2 (t) = et is a solution of the system. Consequently
X̂ (t) = [e^t (t + 1); e^t] = te^t [1; 0] + e^t [1; 1].

By differentiation we see that X̂ (t) is indeed a solution. We check for linear independence:

det [X1 (t)  X̂ (t)] = det [e^t  e^t (t + 1); 0  e^t] = (e^t)^2 ̸= 0.

The general solution of Ẋ = AX is therefore

X (t) = k1 X1 (t) + k2 X̂ (t),

with k1 , k2 arbitrary constants. If we compare X1 (t) and X̂ (t), we see that X̂ (t) is of the form

e^t (B + tC)  with  C = [1; 0]

an eigenvector of A, whereas B has to be determined. If λ is a triple root of C (λ) = 0, it can be deduced in a similar way that if X (t) = e^{λt} C is a solution of Ẋ = AX, then

(Ct + B) e^{λt}   and   ((t^2/2) C + Bt + D) e^{λt}
are also solutions. We return to this method for finding new solutions in Chapter 3.

Example 2.23 Find a general solution of

Ẋ = [0  1  0; 0  0  1; −2  −5  −4] X ≡ AX.

Solution
Put C (λ) = 0, i.e.

det [−λ  1  0; 0  −λ  1; −2  −5  −4 − λ] = 0.

Then −λ^3 − 4λ^2 − 5λ − 2 = 0 from which we obtain λ = −1, −1, or −2. We first determine an eigenvector

U = [u1; u2; u3]

for λ = −2. Put

[2  1  0; 0  2  1; −2  −5  −2] [u1; u2; u3] = 0.

Hence

2u1 + u2 = 0
2u2 + u3 = 0
−2u1 − 5u2 − 2u3 = 0.

This system is satisfied by u1 = k, u2 = −2k, u3 = 4k. Consequently

X1 (t) = k1 e^{−2t} [1; −2; 4]

is a solution of Ẋ = AX. For λ = −1,

[1; −1; 1]

is an eigenvector (check this!) so that

X2 (t) = k2 e^{−t} [1; −1; 1]

is a second solution of Ẋ = AX. From the preceding, a third solution is

X3 (t) = e^{−t} B + te^{−t} [1; −1; 1] ≡ e^{−t} B + te^{−t} C.

In order to determine B, substitute X3 (t) in Ẋ = AX. This yields

−e^{−t} B + e^{−t} C − te^{−t} C = A (e^{−t} B + te^{−t} C)
                                 = e^{−t} AB + te^{−t} AC.

By equating coefficients of like powers of t, we obtain

−C = AC                                                              (2.3)
−B + C = AB.                                                         (2.4)

This yields

−[b1; b2; b3] + [1; −1; 1] = [0  1  0; 0  0  1; −2  −5  −4] [b1; b2; b3]  where  B = [b1; b2; b3],

i.e.

−b1 + 1 = b2
−b2 − 1 = b3
−b3 + 1 = −2b1 − 5b2 − 4b3 .

A solution of this system is b1 = 1, b2 = 0, b3 = −1 (by eliminating either b1 or b3 , it appears that the third equation yields nothing new). Therefore

X3 (t) = e^{−t} [1; 0; −1] + te^{−t} [1; −1; 1]

and the general solution is


X (t) = c1 X1 (t) + c2 X2 (t) + c3 X3 (t)

with c1 , c2 , c3 arbitrary constants. That X1 (t) , X2 (t) and X3 (t) are linearly independent, can be checked
by calculating their Wronskian.

 
Note that in this example B = [1; 0; −1] is not an eigenvector associated with the eigenvalue λ = −1 of

A = [0  1  0; 0  0  1; −2  −5  −4],

since

(A − λI) B = [1  1  0; 0  1  1; −2  −5  −3] [1; 0; −1] = C ̸= 0.
B is known as a generalised eigenvector or root vector of A. This subject is dealt with in Chapter 3.
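The defining properties of B can be confirmed in a few lines of Python (a sketch only, assuming NumPy): (A + I) B equals the eigenvector C, while (A + I)^2 B = 0.

    # Check that B = [1, 0, -1]^T is a root vector of order 2 for lambda = -1.
    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [-2.0, -5.0, -4.0]])
    I = np.eye(3)
    B = np.array([1.0, 0.0, -1.0])

    C = (A + I) @ B        # (A - lambda*I)B with lambda = -1
    print(C)               # [ 1. -1.  1.] -- the eigenvector C, nonzero
    print((A + I) @ C)     # [0. 0. 0.]    -- hence (A + I)^2 B = 0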

Exercise 2.24
Find a general solution of Ẋ = AX if

(a) A = [−3  4; −1  1],   (b) A = [2  0  0; 1  2  0; −1  0  2],   (c) A = [2  1  0; 0  2  1; 0  0  2].

2.6 INITIAL VALUE PROBLEMS; SOLUTION OF INITIAL VALUE PROBLEMS BY


THE EIGENVALUE–EIGENVECTOR METHOD

Definition 2.25 The system Ẋ = AX together with an (initial ) condition X (t0 ) = X0 is known as an
initial value problem. A solution to the problem is a differentiable function X (t) that is a solution of
Ẋ = AX and such that X (t0 ) = X0 .

For the sake of convenience, t0 is often (but not always) taken as 0.

Example 2.26 Solve

ẋ = [1  1; 0  1] x,   x (0) = [1; 1].

Solution
From C (λ) = (1 − λ)^2 = 0 it follows that λ = 1 (twice). Thus

(A − I) u = [0  1; 0  0] [u1; u2] = [u2; 0] = [0; 0]

from which we get u2 = 0 while u1 is arbitrary. Hence u = c [1; 0] is an eigenvector and therefore x (t) = ce^t [1; 0] is the corresponding solution. But x (0) = [c; 0] ̸= [1; 1]. This implies that we cannot yet find a solution that satisfies the given initial condition with what we know so far.

This example illustrates that application of the eigenvalue–eigenvector method does not always lead to a
solution of Ẋ = AX which satisfies the initial condition. This problem, as can be expected, is experienced
when the characteristic equation has multiple roots. The following theorems contain sufficient conditions
under which a solution for the initial value problem Ẋ = AX, X (t0 ) = X0 can be found by means of the
eigenvalue–eigenvector method.

Theorem 2.27 If n linearly independent eigenvectors exist with corresponding real (not necessarily differ-
ent) eigenvalues of the n × n matrix A, then the initial value problem Ẋ = AX, X (t0 ) = X0 has a solution
for every X0 ∈ Rn .

Exercise 2.28 Prove Theorem 2.27.

From Theorem 2.27 we can derive

Theorem 2.29 If A has n real different eigenvalues, then Ẋ = AX, X (t0 ) = X0 , has a solution for every
X0 ∈ Rn .

Exercise 2.30

(1) Study Example 3 on p. 338 of Z&W.

(2) Solve

Ẋ = [0  1  0; 0  0  1; 2  1  −2] X,   X (0) = [1; 0; 1].
Why is there a unique solution to the above system?

Example 2.31 Find a solution of

Ẋ = [1  1  0; 0  1  0; 0  0  2] [x1; x2; x3] ≡ AX,   X0 = [2; 0; −2].

Solution
The characteristic equation has roots λ = 1, 1, 2. If we put λ = 1 into (A − λI) U = 0, we have

[0  1  0; 0  0  0; 0  0  1] [u1; u2; u3] = [0; 0; 0].

Now

U = [u1; u2; u3] = k1 [1; 0; 0]

satisfies this equation and

X1 (t) = e^t k1 [1; 0; 0]

is a solution of Ẋ = AX which, however, does not satisfy the initial condition, since

X1 (t) = e^t k1 [1; 0; 0]

can, for no values of t and k1 , yield the vector [2; 0; −2]. Next, put λ = 2 into (A − λI) V = 0. We then have

[−1  1  0; 0  −1  0; 0  0  0] [v1; v2; v3] = [0; 0; 0]

from which follows

−v1 + v2 = 0
−v2 = 0.

Therefore v1 = 0, v2 = 0, while, since v3 does not appear in the equations, v3 may be chosen arbitrarily. Put v3 = k2 . Then

V = k2 e^{2t} [0; 0; 1]

is a solution of Ẋ = AX which, however, does not satisfy the initial condition. Consequently we must check
whether X0 belongs to the span of the eigenvectors U and V.

Put mU + nV = X0 where m and n are scalars. By putting t = 0 we have

m [1; 0; 0] + n [0; 0; 1] = [2; 0; −2]                               (2.5)

whence

m = 2,   n = −2.

Consequently

X (t) = 2e^t [1; 0; 0] − 2e^{2t} [0; 0; 1]
is a solution of Ẋ = AX which also satisfies the initial condition.
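A short numerical verification (not part of the study material; assuming NumPy) of this solution:

    # Check Example 2.31: X(t) = 2e^t U - 2e^{2t} V solves X' = AX, X(0) = X0.
    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 2.0]])

    def X(t):
        return (2 * np.exp(t) * np.array([1.0, 0.0, 0.0])
                - 2 * np.exp(2 * t) * np.array([0.0, 0.0, 1.0]))

    print(X(0.0))                                   # [ 2.  0. -2.]
    t, h = 1.3, 1e-6
    Xdot = (X(t + h) - X(t - h)) / (2 * h)
    print(np.allclose(Xdot, A @ X(t), atol=1e-3))   # True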



Exercise 2.32 The n × n matrix A in each of the following initial value problems ẋ = Ax, x(t0 ) = x0 has
n distinct, real eigenvalues. Find a general solution containing n arbitrary constants and then determine
the values of the constants using the initial conditions x(t0 ) = x0 .
(a) A = [1  1; 3  −1],  x(−1) = [1; 2]        (b) A = [2  1; 2  3],  x(0) = [1; −1]

(c) A = [4  −3  −2; 2  −1  −2; 3  −3  −1],  x(1) = [0; 0; 1]

(d) A = [1  1  1; 1  −1  1; 0  0  0],  x(0) = [1; 0; 1]

CHAPTER 3

Generalised Eigenvectors (Root Vectors) and Systems of Linear


Differential Equations

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding root
vectors and systems of linear DE’s:

• a root vector of order k;

• finding root vectors of order k corresponding to a multiple root of C (λ) = 0;

• linear independence of root vectors;

• existence of a solution of an initial value problem when A has constant entries.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• determine a root vector of order of k of A;

• use root vectors to find linear independent solutions of Ẋ = AX;

• solve initial value problems Ẋ = AX, X (t0 ) = X0 where the matrix A has multiple roots;

• know why real symmetric matrices have no root vectors of order ≥ 2.

3.1 GENERALISED EIGENVECTORS OR ROOT VECTORS

Suppose A is a square matrix and λ a multiple root of the characteristic equation with corresponding
eigenvector C ̸= 0, i.e. eλt C is a solution of Ẋ = AX. In Section 5 of Chapter 2 we derived a second
solution, (B + Ct) eλt where B is the solution of the equation

(A−λI) B = C ̸= 0. (3.1)

As has been previously observed, B is not an eigenvector corresponding to λ. This is clear from (3.1), the defining equation for B. Equations of the form (3.1), in which B has to be determined and C is an eigenvector, lead to the concept of a generalised eigenvector or root vector.

Before formally introducing the concept of a generalised eigenvector or root vector, we observe that equation
(3.1) may be expressed in a form in which C does not appear at all and in which the right hand side equals
zero, viz
(A − λI)2 B = (A − λI) C = 0 (3.2)

by applying A−λI to (3.1). If λ is at least a triple root of the characteristic equation, then
((1/2) Ct^2 + Bt + D) e^{λt}

is, as in Section 2.5, a third solution, provided that the equations

(A−λI) C = 0, C ̸= 0
(A−λI) B = C
(A−λI) D = B

are satisfied. The last two equations may, by applying (A − λI) and (A − λI)2 , be written as

(A − λI)2 B = 0 (3.3)
3
(A−λI) D = 0, (3.4)

because

(A − λI)^2 B = (A − λI) (A − λI) B = (A − λI) C = 0
(A − λI)^3 D = (A − λI)^2 (A − λI) D = (A − λI)^2 B = 0.

Note that, on the other hand, we have (A − λI) B ̸= 0 and (A − λI)^2 D ̸= 0, because B and D are not
eigenvectors corresponding to λ.

Generally, we now consider equations of the form (A − λ∗ I)^k U = 0, U ̸= 0, with k the smallest number for which this relation holds; in other words, for i = 1, . . . , k, the relationship

(A − λ∗ I)^{k−i} U ̸= 0

holds.

Definition 3.1 The non–zero vector V is said to be a root vector of order k, k ≥ 1, of the matrix A, if a number λ∗ exists such that

(A − λ∗ I)^k V = 0                                                   (3.5)
(A − λ∗ I)^{k−1} V ̸= 0.                                             (3.6)

If we bear in mind that V is said to be annihilated by (A − λ∗ I)k if (A − λ∗ I)k V = 0, Definition 3.1 can
be reformulated as:

Definition 3.2 The vector V is a root vector of order k, k ≥ 1, of A, if a number λ∗ exists such that
V is annihilated by (A − λ∗ I)k , but not by (A − λ∗ I)k−1 .

Note that unless (A − λ∗ I) is singular (non–invertible), (A − λ∗ I)^k V would be a nonzero vector for every k. Hence the existence of k, V ̸= 0 and λ∗ satisfying (3.5) – (3.6) implies that A − λ∗ I is singular, which means that λ∗ must be an eigenvalue of A. It is now clear, by putting k = 1, that an eigenvector is a root vector of order 1 while the vectors B and D above are root vectors of order 2 and 3 respectively.

Example 3.3 Verify that [2, 0, 1]^T is a root vector of order 3 corresponding to the eigenvalue λ = 0 of

A = [0  1  0; 0  0  1; 0  0  0].

Solution Let v3 = [2, 0, 1]^T . Since λ = 0, we have A − λI = A, and we get

A v3 = [0  1  0; 0  0  1; 0  0  0] [2; 0; 1] = [0; 1; 0] = v2

from which

A v2 = [0  1  0; 0  0  1; 0  0  0] [0; 1; 0] = [1; 0; 0] = v1

and finally

A v1 = [0  1  0; 0  0  1; 0  0  0] [1; 0; 0] = [0; 0; 0].

This implies that v3 , v2 and v1 are root vectors of order 3, 2 and 1, respectively.

The following theorem provides a condition under which a vector V is a root vector of order k, while it
also supplies a method for constructing a sequence of root vectors such that the order of each root vector
is one less than that of its predecessor. This theorem will, after a few more formalities, supply us with an
easy method for finding the general solution of Ẋ = AX in the case where some or all of the eigenvalues of
A are coincident.

Theorem 3.4 Suppose λ∗ is an eigenvalue of A. Let the sequence of vectors {Vk−j }, j = 0, . . . , k − 1, be


defined by
(A − λ∗ I) Vk−j = Vk−(j+1) , j = 0, . . . , k − 2. (3.7)
Then Vk , Vk−1 , . . . , V1 are root vectors of order k, k−1, . . . , 1 respectively if and only if V1 is an eigenvector
of A corresponding to λ∗ .

If the root vectors correspond to different eigenvalues, one must distinguish between the different sequences
of root vectors.

Definition 3.5 If Ui,k is a root vector of order k, corresponding to the eigenvalue λi of A, the sequence
of root vectors {Ui,k−j } , j = 0, . . . , k − 1, defined by

(A − λi I) Ui,k−j = Ui,k−(j+1) , j = 0, . . . , k − 2

is known as a chain of length k.



Example 3.6 Find a chain of length 3 for the matrix

A = [0  1  0; 0  0  1; 8  −12  6].

Solution From

C (λ) = det [−λ  1  0; 0  −λ  1; 8  −12  6 − λ] = −λ^3 + 6λ^2 − 12λ + 8 = −(λ − 2)^3 = 0

we see that λ = 2 is the only eigenvalue of A. From (A − λI) U = 0 we get

[−2  1  0; 0  −2  1; 8  −12  4] [u1; u2; u3] = [0; 0; 0]

which easily reduces to

[1  −1/2  0; 0  1  −1/2; 0  0  0] [u1; u2; u3] = [0; 0; 0].
Hence [1, 2, 4]^T is an eigenvector associated with λ = 2. Now from (A − 2I) V = U:

[−2  1  0; 0  −2  1; 8  −12  4] [v1; v2; v3] = [1; 2; 4]

we get

[1  −1/2  0; 0  1  −1/2; 0  0  0] [v1; v2; v3] = [−1/2; −1; 0]

so that V = [−1, −1, 0]^T .
Finally, from (A − λI) W = V we get

[−2  1  0; 0  −2  1; 8  −12  4] [w1; w2; w3] = [−1; −1; 0]

which can be reduced to

[1  −1/2  0; 0  1  −1/2; 0  0  0] [w1; w2; w3] = [1/2; 1/2; 0]

so that W = [3/4, 1/2, 0]^T .
It is easy to check that (A − 2I)W = V, (A − 2I)V = U and (A − 2I)U = 0, yielding the required chain.
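Chains like this one can also be produced numerically. The sketch below (assuming NumPy) solves each singular system (A − 2I) v = u with a least–squares routine; the particular representatives returned need not coincide with those chosen above, but they satisfy the same chain relations:

    # Reproduce a chain of length 3 for the matrix of Example 3.6.
    import numpy as np

    A = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [8.0, -12.0, 6.0]])
    N = A - 2 * np.eye(3)                       # A - lambda*I with lambda = 2

    U = np.array([1.0, 2.0, 4.0])               # eigenvector (order 1)
    V = np.linalg.lstsq(N, U, rcond=None)[0]    # a root vector of order 2
    W = np.linalg.lstsq(N, V, rcond=None)[0]    # a root vector of order 3

    print(np.allclose(N @ U, 0))                # True
    print(np.allclose(N @ V, U))                # True
    print(np.allclose(N @ W, V))                # True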

Exercise 3.7
1. Show that the matrices below have no root vectors of order 2.

(a) [3  1; 2  2]   (b) [0  1; 1  0]   (c) [1  0  0; 0  1  0; 0  0  1]

2. For each of the following matrices, find at least one chain of length 2.

(a) [1  1  0; 0  1  0; 1  2  −2]   (b) [1  1  0; 0  1  0; 0  0  1]   (c) [0  1  0; 0  0  1; 4  −8  5]

3. For each of the following matrices, find at least one chain of length 3.

(a) [1  1  0; 0  1  1; 0  0  1]   (b) [0  1  0; 0  0  1; 1  −3  3]

3.2 ROOT VECTORS AND SOLUTIONS OF Ẋ = AX


We have, from the previous section, that if λ is a triple root of the characteristic equation, with C the corresponding eigenvector and B and D root vectors of order 2 and 3 respectively, then

X1 (t) = e^{λt} C,   X2 (t) = e^{λt} (B + Ct)   and   X3 (t) = e^{λt} ((1/2) Ct^2 + Bt + D),

are solutions of Ẋ = AX.


The following theorem contains the general result:

Theorem 3.8 Suppose Uk is a root vector of order k, corresponding to the eigenvalue λ of A. Then

X (t) = e^{λt} (Uk + tU_{k−1} + . . . + (t^{k−1}/(k − 1)!) U1)

is a solution of Ẋ = AX.

Proof. We have to prove that Ẋ (t) = AX (t) for all t.

Apply A − λI to X (t). Since (A − λI) Ui = U_{i−1} , we have

(A − λI) X (t) = e^{λt} (U_{k−1} + tU_{k−2} + . . . + (t^{k−2}/(k − 2)!) U1).

(Note that (A − λI) U1 = 0.)
Hence

Ẋ (t) = λe^{λt} (Uk + tU_{k−1} + . . . + (t^{k−1}/(k − 1)!) U1)
        + e^{λt} (U_{k−1} + tU_{k−2} + . . . + (t^{k−2}/(k − 2)!) U1)
      = λX (t) + (A − λI) X (t)
      = AX (t).



Exercise 3.9 Prove that if U1 , . . . , Uk are root vectors of order 1, . . . , k, corresponding to the eigenvalue λ of A, then

X1 (t) = e^{λt} U1
X2 (t) = e^{λt} (U2 + tU1)
  ...
X_{k−1} (t) = e^{λt} (U_{k−1} + tU_{k−2} + . . . + (t^{k−2}/(k − 2)!) U1)

are solutions of Ẋ = AX.

From this, and from Theorem 3.8 (where we proved that

Xk (t) = e^{λt} (Uk + tU_{k−1} + . . . + (t^{k−1}/(k − 1)!) U1)

is a solution of Ẋ = AX), we have that k solutions can be found from k root vectors of orders 1, . . . , k.
If we can prove that these solutions are linearly independent, we can find the general solution of Ẋ = AX when the zeros of the characteristic polynomial are coincident. We then have at our disposal sufficient methods for finding the solution of Ẋ = AX whether the roots of the characteristic equation C (λ) = 0 are distinct or coincident.

We first prove

Lemma 3.10 If U1 , . . . , Uk are root vectors of order 1, . . . , k corresponding to the eigenvalue λ of A, then
U1 , . . . , Uk are linearly independent.

Proof. Put c1 U1 + . . . + ck Uk = 0. We must show that ci = 0, i = 1, . . . , k. From (3.7) we have

c1 (A − λI)^{k−1} Uk + c2 (A − λI)^{k−2} Uk + . . . + c_{k−2} (A − λI)^2 Uk + c_{k−1} (A − λI) Uk + ck Uk = 0.     (3.8)

From Definition 3.1 we have that (A − λI)^{k−i} Uk ̸= 0 for i ≥ 1. Apply (A − λI)^{k−1} to (3.8). Then

0 = ck (A − λI)^{k−1} Uk                                             (3.9)

since all the other terms vanish in view of the index of A − λI being equal to or greater than k. Since
(A − λI)k−1 Uk ̸= 0, (3.9) can only hold if ck = 0.

Next apply (A − λI)^{k−2} to (3.8). This yields

0 = c_{k−1} (A − λI)^{k−1} Uk + ck (A − λI)^{k−2} Uk
  = c_{k−1} (A − λI)^{k−1} Uk                                        (3.10)

since ck = 0. As before, equation (3.10) is valid only if ck−1 = 0.

By repeating the process, we obtain


ck−2 = 0, . . . , c1 = 0. 
Next we prove

Lemma 3.11 The solutions X1 (t) , . . . , Xk (t) of Ẋ = AX, with

Xk (t) = e^{λt} (Uk + tU_{k−1} + . . . + (t^{k−1}/(k − 1)!) U1)

(compare Theorem 3.8 and Exercise 3.9) are linearly independent.

Proof. Put

c1 X1 (t) + . . . + ck Xk (t) = 0,

i.e.

e^{λt} {c1 U1 + c2 (U2 + tU1) + c3 (U3 + tU2 + (t^2/2!) U1) + . . .
        + c_{k−1} (U_{k−1} + tU_{k−2} + . . . + (t^{k−2}/(k − 2)!) U1)
        + ck (Uk + tU_{k−1} + . . . + (t^{k−1}/(k − 1)!) U1)} = 0.

Rearrangement of the terms yields

e^{λt} {U1 (c1 + c2 t + . . . + ck (t^{k−1}/(k − 1)!)) + U2 (c2 + c3 t + . . . + ck (t^{k−2}/(k − 2)!)) + . . .
        + U_{k−2} (c_{k−2} + c_{k−1} t + ck (t^2/2!)) + U_{k−1} (c_{k−1} + ck t) + Uk ck} = 0.

This yields

c1 + c2 t + . . . + ck (t^{k−1}/(k − 1)!) = 0 ∀t,
c2 + c3 t + . . . + ck (t^{k−2}/(k − 2)!) = 0 ∀t,
  ...
c_{k−1} + ck t = 0 ∀t,
ck = 0,

since e^{λt} ̸= 0 and U1 , . . . , Uk are linearly independent. Consequently ci = 0, i = 1, . . . , k. 

Example 3.12 Find a general solution of

Ẋ = [0  1  0; 0  0  1; 8  −12  6] X ≡ AX

by using root vectors.

Solution
Put

det [−λ  1  0; 0  −λ  1; 8  −12  6 − λ] = 0,

i.e.

−λ {(6 − λ)(−λ) + 12} + 8 = 0.

Then −λ^3 + 6λ^2 − 12λ + 8 = 0 from which we obtain λ = 2, 2, 2. Next we determine the root vectors
U, V, W of order 1, 2, 3 respectively.

From (A − λI) U = 0, we have

[−2  1  0; 0  −2  1; 8  −12  4] [u1; u2; u3] = [0; 0; 0].

Then

−2u1 + u2 = 0                           (1)
−2u2 + u3 = 0                           (2)
8u1 − 12u2 + 4u3 = 0                    (3)
(3) → (3) + 4(1) :  −8u2 + 4u3 = 0      (4)

Note that (4) is equivalent to (2).

A solution of this system is

U = [u1; u2; u3] = [1; 2; 4].

Therefore

X1 (t) = e^{2t} [1; 2; 4]

is a solution of Ẋ = AX.

From (A − λI) V = U, we have

[−2  1  0; 0  −2  1; 8  −12  4] [v1; v2; v3] = [1; 2; 4],

from which we obtain

V = [v1; v2; v3] = [−1; −1; 0],

so that according to Exercise 3.9

X2 (t) = ([−1; −1; 0] + t [1; 2; 4]) e^{2t}

is a second solution of Ẋ = AX.



Finally, we have from (A − λI) W = V,

[−2  1  0; 0  −2  1; 8  −12  4] [w1; w2; w3] = [−1; −1; 0],

from which

W = [w1; w2; w3] = [3/4; 1/2; 0].

A third solution of Ẋ = AX is therefore

X3 (t) = e^{2t} ([3/4; 1/2; 0] + t [−1; −1; 0] + (t^2/2) [1; 2; 4]).

It is easy to verify that we have three linearly independent solutions, by determining their Wronskian.

A general solution is then


X (t) = k1 X1 (t) + k2 X2 (t) + k3 X3 (t)

with k1 , k2 and k3 arbitrary constants.
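For instance, the Wronskian at t = 0 is the determinant of the matrix with columns X1 (0), X2 (0) and X3 (0); the following Python sketch (assuming NumPy) computes it:

    # Linear independence check for the solutions of Example 3.12.
    import numpy as np

    U = np.array([1.0, 2.0, 4.0])       # eigenvector
    V = np.array([-1.0, -1.0, 0.0])     # root vector of order 2
    W = np.array([0.75, 0.5, 0.0])      # root vector of order 3

    def X1(t): return np.exp(2 * t) * U
    def X2(t): return np.exp(2 * t) * (V + t * U)
    def X3(t): return np.exp(2 * t) * (W + t * V + 0.5 * t**2 * U)

    Phi0 = np.column_stack([X1(0.0), X2(0.0), X3(0.0)])
    print(np.linalg.det(Phi0))   # 1.0 (nonzero): the solutions are independent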

Exercise 3.13 Do problems 19 – 31 of Exercise 8.2.2 on p. 347 of Z&W.

Remark
In the above example we have constructed a general solution of a system of the form Ẋ = AX in which A has constant entries, i.e. the entries are (real) numbers.

3.3 TWO IMPORTANT RESULTS (AND KEEPING OUR PROMISES!)

The aim of this section is to present a theorem on the solvability of the initial value problem Ẋ = AX,
X (0) = X0 and to “revisit” symmetric matrices.

The result of the next theorem is stronger than the results in Theorems 2.27 and 2.29 in the sense that the
conditions “If ....” are now no longer necessary.

We state without proof

Theorem 3.14 (Existence Theorem for Linear Systems with Constant Coefficients) For any choice of X0 ∈ Rn and any n × n matrix A with constant entries, a vector function X (t) exists such that Ẋ (t) = AX (t) , X (0) = X0 .

Remark

(i) Note that the only condition is that the entries of A must be real — by virtue of the solution
techniques developed in the preceding sections, the conditions imposed in Theorems 2.27 and 2.29
can be removed.

(ii) The solution whose existence is guaranteed by this theorem, is unique. This will be proved in the
next chapter. In Chapter 7 we will prove a more general result in the sense that the matrix A can be
dependent on t, i.e. the entries of A may be functions of t. Students should take care to distinguish
between the two cases.

In case the reader doubts the value of existence and uniqueness theorems, he should read an article by AD
Snider on this subject: “Motivating Existence–Uniqueness Theory for Applications Oriented Student”1 .
In this article the writer applies the fact that the one–dimensional wave problem

∂^2 u/∂t^2 = c^2 ∂^2 u/∂x^2 ,  x > 0, t > 0,

u (x, 0) = f (x)

(∂u/∂t) (x, 0) = g (x)

u (0, t) = 0

has one and only one solution, to answer the following question: if a moving wave meets the boundary, does
the reflected wave travel below or above the position of equilibrium? The uniqueness of the solution of the
wave equation enables one to deduce that the wave travels below the position of equilibrium.

The last result of this chapter pertains to symmetric matrices. We already know that symmetric matrices
have only real eigenvalues. In view of our knowledge of root vectors, we can prove the following result:

Theorem 3.15 If A is a real symmetric matrix, then A has no root vectors of order k ≥ 2.

Proof Suppose to the contrary that there exists a vector b ̸= 0 such that

(A − λI)b = a ̸= 0
(A − λI)a = 0.

Hence
(A − λI)2 b = (A − λI)[(A − λI)b] = (A − λI)a = 0

and
(A − λI)b ̸= 0.

Since a ̸= 0, the matrix

a^T a = [ ∑_{i=1}^{n} ai ai ]

is a non-zero 1 × 1 matrix, i.e. a^T a ̸= [0].


From Lemma A.3 of Appendix A of the study guide, we know that λ is real and since A and λI are real,
it follows that (A − λI)b = (A − λI)b.
1
American Mathematical Monthly, 83(1976), 805–807.

By the symmetry of A − λI and Lemma A.1 of Appendix A of the study guide we have that

a^T a = [{(A − λI)b}^T {(A − λI)b}]
      = [b^T (A − λI)(A − λI)b]        (by Lemma A.1)
      = [b^T (A − λI)^2 b]
      = [b^T 0]                        (since (A − λI)^2 b = 0)
      = 0

which yields the contradiction. 

Exercise 3.16 Prove that if (A − µI)^k u = 0 for some vector u ̸= 0, then µ is an eigenvalue of A.

Exercise 3.17 Solve the following initial value problems:

(a) Ẋ = [5  4  2; 4  5  2; 2  2  2] X,   X(0) = [4; 5; −9]

(b) Ẋ = [4  6  6; 1  3  2; −1  −5  −2] X,   X(0) = [11; 3; −7].

CHAPTER 4

Fundamental Matrices
Non–homogeneous Systems
The Inequality of Gronwall

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding funda-
mental matrices and non–homogeneous systems:

• a fundamental matrix of Ẋ = AX at t0 ;

• the normalized fundamental matrix of Ẋ = AX at t0 ;

• the uniqueness of normalized fundamental matrices;

• using a (normalized) fundamental matrix to solve the IVP Ẋ = AX, X (t0 ) = X0 ;

• the method of variation of parameters;

• the norm of a matrix;

• the inequality of Gronwall.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• determine a fundamental matrix for Ẋ = AX;

• normalize the fundamental matrix;

• find a general solution for Ẋ = AX using fundamental matrices;

• solve the IVP Ẋ = AX, X (t0 ) = X0 using fundamental matrices;

• use the method of variation of parameters to solve the non–homogeneous problem Ẋ = AX + F (with
or without the initial condition X (t0 ) = X0 );

• apply the inequality of Gronwall to estimate the growth of solutions.



4.1 FUNDAMENTAL MATRICES

The concept of the fundamental matrix of a system of differential equations arises when the solutions (which
are vectors) of the system are grouped together as the columns of a matrix. Fundamental matrices provide
a convenient method of formulating the behaviour of the system Ẋ = AX in an elegant way.

Important results in this chapter are the uniqueness theorems for the homogeneous initial value problem
Ẋ = AX, X (t0 ) = X0 and the non–homogeneous initial value problem Ẋ = AX + F (t), X (t0 ) = X0 .

Let us now define:

Definition 4.1 An n×n matrix Φ with the property that its columns are solutions of Ẋ = AX, and linearly
independent at t0 , is said to be a fundamental matrix of Ẋ = AX at t0 .

Example 4.2 Find a fundamental matrix for the system ẋ = Ax at t0 = 0, where

A = [1  0; −1  3].

Solution
It is easy to see that λ = 1 and λ = 3 are the two eigenvalues of A and that [2, 1]^T and [0, 1]^T are two linearly independent eigenvectors which correspond to them respectively. Hence

x1 (t) = e^t [2; 1]   and   x2 (t) = e^{3t} [0; 1]

are two solutions of the given system. Thus

[2e^t  0; e^t  e^{3t}]

is a fundamental matrix. However,

x3 (t) = e^t [2; 1] + e^{3t} [0; 1]   and   x4 (t) = e^{3t} [0; 4]

are also solutions and are linearly independent at t = 0. Hence

[2e^t  0; e^t + e^{3t}  4e^{3t}]
is also a fundamental matrix at t0 = 0.


This example illustrates the fact that a system may have more than one fundamental matrix at a point t0 , i.e. the fundamental matrix of a system at a point t0 is not unique.

In order to construct fundamental matrices, it is necessary to find n solutions of Ẋ = AX which are linearly
independent at t0 . From the contents of the two previous chapters, we know that this is always possible.
Therefore the following theorem (stated without proof) holds.

Theorem 4.3 Every system Ẋ = AX has a fundamental matrix at t0 .



For matrices the equation χ̇ = Aχ, with χ a matrix function of t, is analogous to the equation Ẋ = AX
with X a vector function of t. A fundamental matrix of Ẋ = AX at t0 may be formulated in terms of this
equation. To begin with, we define:

Definition 4.4 If Φ (t) is the matrix [aij (t)], then Φ̇ (t) = (d/dt) [Φ (t)] is the matrix [ȧij (t)], i.e. the matrix of which the entries are the derivatives of the corresponding entries of Φ (t).

Theorem 4.5 Φ is a fundamental matrix of Ẋ = AX at t0 iff

(a) Φ is a solution of χ̇ = Aχ, i.e. Φ̇ (t) = AΦ (t) ;

(b) det Φ (t0 ) ̸= 0.

Remark
Condition (b) is equivalent to the condition: (b′ ) Φ−1 (t0 ) exists.

Exercise 4.6

(1) Prove Theorem 4.5.


(2) Verify Theorem 4.5 for the system with A = [0  1; −1  −2].

On the strength of Theorem 4.5 it is possible to give a representation of the solutions of Ẋ = AX in terms
of fundamental matrices.

Theorem 4.7 If Φ is a fundamental matrix of Ẋ = AX at t0 , then X (t) = Φ (t) K is a solution of


Ẋ = AX for every constant vector K.

It will turn out that any solution of Ẋ = AX is of this form.

Corollary 4.8 If Φ is a fundamental matrix of Ẋ = AX at t0 , then X (t) = Φ (t) K with K = Φ−1 (t0 ) X0 ,
is a solution of the initial value problem Ẋ = AX, X (t0 ) = X0 .

In the expression Φ (t) Φ−1 (t0 ) X0 , we are dealing with the product of two matrices of which the second is
constant. From Theorem 4.5 we deduce

Corollary 4.9 If B is a non–singular (invertible) constant matrix and Φ a fundamental matrix of Ẋ = AX at t0 , then ΦB is also a fundamental matrix of Ẋ = AX at t0 .

If t = t0 , then Φ (t) Φ−1 (t0 ) is the identity matrix, so that at t = t0 we have Φ (t) Φ−1 (t0 ) X0 = X0 . This
leads to the concept of normalized fundamental matrix at t0 .

Definition 4.10 A fundamental matrix Φ (t) of Ẋ = AX at t0 with the property that Φ (t0 ) = I, the
identity matrix, is said to be a normalized fundamental matrix at t0 .

Remark
A fundamental matrix Φ (t) of Ẋ = AX at t0 can therefore be normalized by multiplying by Φ−1 (t0 ) from
the right as follows:
Ψ (t) = Φ (t) Φ−1 (t0 ) (4.1)

where Ψ (t) is then the normalized fundamental matrix of Ẋ = AX at t0 . (Clearly, from Corollary 4.9, the
matrix Ψ (t) is a fundamental matrix and furthermore, at t = t0 , we have

Ψ (t0 ) = Φ (t0 ) Φ−1 (t0 ) = I,

so that it is also normalized.)

Corollary 4.11 can now be formulated:

Corollary 4.11 If Ψ (t) is a normalized fundamental matrix of Ẋ = AX at t0 , then X (t) = Ψ (t) X0 is a


solution of the initial value problem Ẋ = AX, X (t0 ) = X0 .

The initial value problem Ẋ = AX, X (t0 ) = X0 , can therefore be solved simply by finding a normalized
fundamental matrix at t0 .

Exercise 4.12 Prove Theorem 4.7 and Corollaries 4.8, 4.9.

Example 4.13 Find the solution of the initial value problem


Ẋ = [0  1; −3  4] X,   X (0) = [2; 4].

Solution
The vector functions

X1 (t) = e^t [1; 1]   and   X2 (t) = e^{3t} [1; 3]

are two linearly independent solutions of the given system. Therefore

Φ (t) = [e^t  e^{3t}; e^t  3e^{3t}]

is a fundamental matrix. Now

Φ^{−1} (0) = [3/2  −1/2; −1/2  1/2],

so that

Φ (t) Φ^{−1} (0) = (1/2) [3e^t − e^{3t}   e^{3t} − e^t; 3e^t − 3e^{3t}   3e^{3t} − e^t] = [1  0; 0  1]  if t = 0.

From (4.1) we have that Φ (t) Φ^{−1} (0) is a normalized fundamental matrix at t = 0. Consequently, from Corollary 4.11,

X (t) = (1/2) [3e^t − e^{3t}   e^{3t} − e^t; 3e^t − 3e^{3t}   3e^{3t} − e^t] [2; 4] = [e^t + e^{3t}; e^t + 3e^{3t}]

is a solution to the given problem.
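For a constant matrix A, the normalized fundamental matrix at t0 = 0 coincides with the matrix exponential e^{tA}. This fact is not needed in the guide, but it gives a convenient numerical check (a sketch assuming NumPy and SciPy are available):

    # Check Example 4.13 with the matrix exponential.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-3.0, 4.0]])
    X0 = np.array([2.0, 4.0])

    t = 0.5
    Psi = expm(t * A)            # normalized fundamental matrix at t0 = 0
    print(Psi @ X0)              # numerical solution at t = 0.5

    # Closed form obtained above: X(t) = (e^t + e^{3t}, e^t + 3e^{3t})^T
    print(np.exp(t) + np.exp(3 * t), np.exp(t) + 3 * np.exp(3 * t))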

Exercise 4.14 Find a normalized fundamental matrix of the system ẋ = Ax at t0 = 0 where A is given by

(a) [1  3; 1  −1]   (b) [1  1; 3  −1]   (c) [2  1; 2  3]

(d) [1  1  1; 1  1  1; 1  1  1]   (e) [1  1  0; 0  1  0; 0  1  1]   (f) [2  0  0; 0  2  1; 0  0  2]

4.2 THE UNIQUENESS THEOREM FOR LINEAR SYSTEMS WITH CONSTANT


COEFFICIENTS
In the proof of this theorem the fact that a fundamental matrix of Ẋ = AX at t0 is invertible for every t,
is used. We first prove the result for a normalized fundamental matrix.

Suppose that Φ is a normalized fundamental matrix of Ẋ = AX at t0 . A matrix B (t) has to be determined such that

B (t) Φ (t) = I ∀t.                                                  (4.2)

In the next theorem it is shown that if we take B (t) as the transpose of a normalized fundamental matrix of

Ẋ = −A^T X                                                          (4.3)

at t0 , then (4.2) is satisfied. Equation (4.3) is known as the conjugate of Ẋ = AX.

Theorem 4.15 Assume that Φ and Ψ are normalized fundamental matrices of Ẋ = AX and Ẋ = −A^T X at t0 respectively. Then

Ψ^T (t) Φ (t) = I ∀t,

i.e.

(Φ (t))^{−1} = Ψ^T (t) .

In words the theorem reads: The inverse of a normalized fundamental matrix Φ (t) of Ẋ = AX at t0 is equal to the transpose of the normalized fundamental matrix of the conjugate equation Ẋ = −A^T X at t0 .

Proof. Since Ψ is differentiable, Ψ^T and Ψ^T Φ are likewise differentiable. From the rules for differentiating matrix functions we have

(d/dt) [Ψ^T Φ] = ((d/dt) [Ψ^T]) Φ + Ψ^T (dΦ/dt)
              = {dΨ/dt}^T Φ + Ψ^T AΦ
              = {−A^T Ψ}^T Φ + Ψ^T AΦ
              = −Ψ^T AΦ + Ψ^T AΦ
              = 0,

where in the second last step we have made use of (AB)^T = B^T A^T and (A^T)^T = A for any matrices A and B.

Consequently ΨT Φ = C, a constant matrix, i.e. ΨT Φ (t) = C for all t. In particular, ΨT (t0 ) Φ (t0 ) = C
holds. Since Ψ and Φ are normalized fundamental matrices, we have I = ΨT (t0 ) Φ (t0 ) = C, so that C = I
holds. This completes the proof. 

Corollary 4.16 Fundamental matrices are non–singular for all t.

Proof. Suppose Φ is a fundamental matrix at t0 ; then Φ is non–singular at t0 , i.e. Φ^{−1} (t0 ) exists. From the definition, Φ (t) Φ^{−1} (t0 ) is then a normalized fundamental matrix at t0 , so that, by Theorem 4.15, Φ (t) Φ^{−1} (t0 ) is non–singular for every t, i.e. a matrix B (t) exists so that Φ (t) Φ^{−1} (t0 ) B (t) = I for every t. This implies that Φ (t) itself is non–singular for every t. 

Corollary 4.17 Solutions of Ẋ = AX which are linearly independent at t = t0 , are linearly independent
for all t.

Proof. The proof follows immediately from the preceding corollary by recalling that solutions of Ẋ = AX are linearly independent at t0 if det Φ (t0 ) ̸= 0, where Φ is a fundamental matrix of Ẋ = AX at t0 . This requirement is equivalent to that of the non–singularity of Φ at t0 . 

Corollary 4.18 Fundamental matrices at t0 are fundamental for all t.

Proof. From Definition 4.1, the result is merely a formulation of the preceding result. 

From the above discussion, it follows that vector functions satisfying Ẋ = AX, cannot be linearly indepen-
dent at one point and linearly dependent at another.

Exercise 4.19 Show by means of an example that arbitrary chosen vectors, linearly independent at a
point t0 , are not necessarily linearly independent for all t. The property of Corollary 4.17 is, therefore,
a specific property of solutions of Ẋ = AX. The converse, however, is not true, i.e. a system of vector
functions, linearly independent for all t, does not necessarily constitute a solution of Ẋ = AX with A some
or other matrix.

Theorem 4.20 Any solution of Ẋ = AX is of the form X (t) = Φ (t) X (t0 ) with Φ a normalized funda-
mental matrix of Ẋ = AX at t0 .

Proof. We show that Φ^{−1} (t) X (t) = Z (t) is a constant vector by showing that Ż (t) = 0. From Theorem 4.15 we have Z (t) = Ψ^T (t) X (t) with Ψ (t) a solution of χ̇ = −A^T χ. Hence

Ż = (Ψ̇)^T X + Ψ^T Ẋ
  = (−A^T Ψ)^T X + Ψ^T AX
  = −Ψ^T AX + Ψ^T AX
  = 0.

Consequently, Z (t) = Φ^{−1} (t) X (t) = C, with C a constant vector. If we put t = t0 , it follows that C = X (t0 ), so that Φ^{−1} (t) X (t) = X (t0 ), i.e. X (t) = Φ (t) X (t0 ) . 

We recall that if Φ is a fundamental matrix of Ẋ = AX at t0 , then Φ (t) K is a solution of Ẋ = AX for


every choice of K (Theorem 4.7). From Theorem 4.20 we have that the system Φ (t) K contains all the
solutions, i.e. each solution of Ẋ = AX can be obtained from Φ (t) K by a suitable choice of K. If we choose
Φ a normalized fundamental matrix, we can deduce from Theorem 4.20 that every solution of Ẋ = AX is
some or other linear combination of a set of fundamental solutions. This leads to

Definition 4.21 The vector function Φ (t) K is said to be a general solution of Ẋ = AX if Φ is any
fundamental matrix of the system Ẋ = AX.

From Theorem 4.20 we also have

Corollary 4.22 Any solution of Ẋ = AX that vanishes at t0 , is identically zero. In particular, the problem
Ẋ = AX, X (0) = 0, has the unique solution X (t) ≡ 0.

We now have all the tools for proving the uniqueness theorem for linear systems with constant coefficients.

Theorem 4.23 (Uniqueness Theorem for Linear Systems with Constant Coefficients)¹ The initial value problem Ẋ = AX, X (t0 ) = X0 , in which A is a constant matrix, has a unique solution.

Proof. We have to show that if X (t) and Y (t) are both solutions, then X (t) ≡ Y (t). Put Z (t) = X (t) − Y (t). Then

Ż (t) = Ẋ (t) − Ẏ (t)  and  AZ (t) = AX (t) − AY (t) = A (X (t) − Y (t)) .

Therefore Ż (t) = AZ (t). Also

Z (t0 ) = X (t0 ) − Y (t0 ) = 0.

Therefore Z (t) is a solution of Ẋ = AX which vanishes at t0 . By Corollary 4.22 we conclude that Z (t) =
0 ∀t, i.e. X (t) ≡ Y (t). 

Corollary 4.24 Let Φ be a normalized fundamental matrix for the system Ẋ = AX at t0 . Then χ (t) =
Φ (t) χ0 is the unique solution of the matrix initial value problem χ̇ = Aχ, χ (t0 ) = χ0 .

The uniqueness theorem for the initial value problem Ẋ = AX, X (t0 ) = X0 , may be reformulated in terms
of fundamental matrices.

Theorem 4.25 There exists one and only one normalized fundamental matrix of Ẋ = AX at t0 .

Exercise 4.26 Prove Corollary 4.24 and Theorem 4.25.

¹ This theorem must not be confused with the Existence and Uniqueness Theorem for linear systems of the form Ẋ = A (t) X, i.e. the case where the entries of A vary with time t (Chapter 7).

4.3 APPLICATIONS OF THE UNIQUENESS THEOREM. SOLUTION OF THE NON–


HOMOGENEOUS PROBLEM

The following theorems, which deal with the properties of fundamental matrices, which may be normalized,
are proved by using the Uniqueness Theorem 4.23.

Theorem 4.27 If Φ is a normalized fundamental matrix of Ẋ = AX at t0 = 0, then

Φ (s + t) = Φ (t) Φ (s) (4.4)

for all real numbers s and t.

Proof. Let s be fixed, say s = s0 . We show that both sides of (4.4) are solutions of the matrix initial value
problem
χ̇ = Aχ, χ (0) = Φ (s0 ) . (4.5)

From Corollary 4.24, we have that χ (t) = Φ (t) Φ (s0 ) satisfies (4.5), i.e.

(d/dt) [Φ (t) Φ (s0 )] = A [Φ (t) Φ (s0 )]  and  χ (0) = Φ (s0 ) .

Since Φ is a normalized fundamental matrix of Ẋ = AX at t0 = 0, we have, from Theorem 4.5,

(d/dt) [Φ (t)] = AΦ (t)  ∀t,

so that in particular

(d/dt) [Φ (s0 + t)] = AΦ (s0 + t)  ∀t.
Moreover Φ (s0 + t) = Φ (s0 ) at t = 0. Consequently χ (t) = Φ (s0 + t) is a solution of (4.5). Since the
solution of (4.5) is unique, Φ (s0 + t) = Φ (t) Φ (s0 ) holds for arbitrary s0 and all real numbers t. Therefore

Φ (s + t) = Φ (t) Φ (s) for all real numbers s and t.

Exercise 4.28

1. Show by means of an example that the hypothesis that Φ is a normalized fundamental matrix at
t0 = 0, cannot be omitted.

2. Prove that under the hypothesis of Theorem 4.27,

Φ (t) Φ (s) = Φ (s) Φ (t) for all real numbers s and t.

Note that Φ (0) = I follows from the definition, so that Φ indeed displays the properties of the
exponential function.

Corollary 4.29 If Φ is a normalized fundamental matrix at t0 = 0, then Φ−1 (t) = Φ (−t) for all t.

Corollary 4.30 For any fundamental matrix Φ of Ẋ = AX

Φ−1 (t) = Φ−1 (0) Φ (−t) Φ−1 (0)

holds for all t.

Remark
This formula provides a method for determining the inverse of a fundamental matrix of a system of differ-
ential equations.

Exercise 4.31

1. Prove Corollaries 4.29 and 4.30.


2. Verify Corollary 4.30 for the fundamental matrix Φ (t) = [2e^t  0; e^t  e^{3t}].

Theorem 4.32 (Transition Property) Suppose the system Ẋ = AX has the fundamental matrices Φ0
and Φ1 . Suppose Φ0 and Φ1 are respectively normalized at t0 and t1 . Then

Φ1 (t) Φ0 (t1 ) = Φ0 (t) for each value of t.

Corollary 4.33 With the notation the same as in the previous theorem we have

Φ1 (t0 ) Φ0 (t1 ) = I.

Theorem 4.34 If Φ is a normalized fundamental matrix of Ẋ = AX at t0 , then

AΦ (t) = Φ (t) A.

Exercise 4.35

1. Show by means of an example that the hypothesis that Φ is a normalized fundamental matrix, is
indispensable.

2. Let y (t) = x (−t). Show that ẏ (t) = Ay (t) if ẋ (t) = −Ax (t).

3. Suppose A is symmetric and Φ is a normalized fundamental matrix of ẋ = Ax at t0 = 0. Prove that Φ^T (t) = Φ (t). (Hint: Consider the system ẋ = −Ax and use Problem 2 above, Ψ^T Φ = I and Φ^{−1} (t) = Φ (−t).)

4.4 THE NON–HOMOGENEOUS PROBLEM Ẋ = AX + F (t): VARIATION OF PARAMETERS

In this section, we want to derive a formula which, under suitable conditions on F, yields a general solution
of the non–homogeneous system Ẋ = AX + F (t), when a general solution of Ẋ = AX (known as the
complementary homogeneous system), is known. The fundamental matrix of Ẋ = AX plays an important
role in this formula. The method used is known as the Method of Variation of Parameters. (See also pp 351 – 354 of Z&W. On pp 156 – 162 of Z&W see also how a one–dimensional non–homogeneous differential
equation can be solved by the method of variation of parameters.)

In the case of the n–dimensional problem Ẋ = AX + F (t), the part played by the exponential function in
the one–dimensional problem, is played by a fundamental matrix of the complementary system Ẋ = AX.
We assume the existence of a vector U such that

Xp (t) = Φ (t) U (t) . (4.6)

is a solution of
Ẋ = AX + F (t) .
Then

Ẋp (t) = Φ̇ (t) U (t) + Φ (t) U̇ (t)


= AΦ (t) U (t) + Φ (t) U̇ (t)

since Φ is a fundamental matrix of Ẋ = AX.


Also
Ẋp (t) = A [Φ (t) U (t)] + F (t) .
Consequently
Φ (t) U̇ (t) = F (t) ,
so that
U̇ (t) = Φ−1 (t) F (t)
since Φ−1 exists. We therefore have
Xp (t) = Φ (t) U (t) = Φ (t) {∫_{t0}^{t} Φ^{−1} (s) F (s) ds}.

The above gives rise to

Theorem 4.36 If F (t) is continuous and Φ is a fundamental matrix of Ẋ = AX, then the inhomogeneous system Ẋ = AX + F (t) has solution

Xp (t) = Φ (t) ∫_{t0}^{t} Φ^{−1} (s) F (s) ds.                       (4.7)

This solution is called a particular solution.


Proof. We have to show that Ẋp (t) = AXp (t) + F (t). Since Φ is a fundamental matrix of Ẋ = AX, we have from Theorem 4.5 that Φ̇ (t) = AΦ (t). We therefore have

Ẋp (t) = AΦ (t) ∫_{t0}^{t} Φ^{−1} (s) F (s) ds + Φ (t) (d/dt) (∫_{t0}^{t} Φ^{−1} (s) F (s) ds)
       = AΦ (t) ∫_{t0}^{t} Φ^{−1} (s) F (s) ds + Φ (t) Φ^{−1} (t) F (t)
       = AXp (t) + F (t)

by making use of the differentiation formula

(∂/∂t) ∫_{H(t)}^{F(t)} G (t, s) ds = G (t, F (t)) F′ (t) − G (t, H (t)) H′ (t) + ∫_{H(t)}^{F(t)} (∂/∂t) G (t, s) ds     (4.8)

where F, G and H satisfy suitable conditions. 

Example 4.37 Determine a solution of


Ẋ = [1  0; −1  3] X + [e^t; 1] ≡ AX + F.

Solution
From (4.6), Xp (t) = Φ (t) U (t) is a solution of Ẋ = AX + F, where Φ (t) U̇ (t) = F (t) with Φ a fundamental matrix of Ẋ = AX. A fundamental matrix is

[2e^t  0; e^t  e^{3t}]

so that we must solve for U from

[2e^t  0; e^t  e^{3t}] U̇ = [e^t; 1].

If U̇ = [u̇1; u̇2], we have

2e^t u̇1 = e^t                                                        (1)
e^t u̇1 + e^{3t} u̇2 = 1.                                             (2)

Now (1) is satisfied by u1 = t/2, while (2), which reduces to

u̇2 = (1 − e^t/2) e^{−3t} = e^{−3t} − (1/2) e^{−2t},

is satisfied by

u2 = −(1/3) e^{−3t} + (1/4) e^{−2t}.

Consequently

U = [t/2; −(1/3) e^{−3t} + (1/4) e^{−2t}]

and

Φ (t) U (t) = [2e^t  0; e^t  e^{3t}] [t/2; −(1/3) e^{−3t} + (1/4) e^{−2t}] = [te^t; (t/2 + 1/4) e^t − 1/3]

is a solution of the given problem.
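A quick residual check (a sketch, assuming NumPy) that this particular solution satisfies Ẋ = AX + F:

    # Verify the particular solution of Example 4.37.
    import numpy as np

    A = np.array([[1.0, 0.0], [-1.0, 3.0]])

    def F(t):
        return np.array([np.exp(t), 1.0])

    def Xp(t):
        return np.array([t * np.exp(t),
                         (t / 2 + 0.25) * np.exp(t) - 1.0 / 3.0])

    t, h = 0.8, 1e-6
    Xdot = (Xp(t + h) - Xp(t - h)) / (2 * h)
    print(np.allclose(Xdot, A @ Xp(t) + F(t), atol=1e-4))   # True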



Example 4.38 Use Theorem 4.36 to find the general solution of the inhomogeneous system

Ẋ = [−1  1; −2  1] X + [1; cot t]

where t0 = π/2.
Solution
The characteristic equation C (λ) = 0 has roots λ = ±i. An eigenvector corresponding to λ = i is

U = [1; 1 + i].

From this we obtain, with the aid of Theorem 2.19, the two real solutions

[cos t; cos t − sin t]   and   [sin t; cos t + sin t].

A fundamental matrix is therefore

Φ (t) = [cos t  sin t; cos t − sin t  cos t + sin t].

Now

Φ^{−1} (t) = [cos t + sin t  − sin t; sin t − cos t  cos t].

Therefore, by choosing t0 = π/2, we have from Theorem 4.36

Xp (t) = Φ (t) ∫_{π/2}^{t} [cos s + sin s  − sin s; sin s − cos s  cos s] [1; cot s] ds = Φ (t) J

where

J = ∫_{π/2}^{t} [sin s; − cos s + cosec s] ds

(recall that cos^2 s / sin s = (1 − sin^2 s) / sin s = cosec s − sin s)

  = [− cos s; − sin s − ln |cosec s + cot s|]_{π/2}^{t}

  = [− cos t; − sin t − ln |cosec t + cot t| + 1].

This yields

Xp (t) = Φ (t) J = [−1 + sin t (1 − ln |cosec t + cot t|); −1 + (cos t + sin t) (1 − ln |cosec t + cot t|)].

Remark
The inverse Φ^{−1} (t) of the matrix

Φ (t) = [cos t  sin t; cos t − sin t  cos t + sin t]

could quite easily be determined by recalling that the inverse of the matrix

S = [A  B; C  D]

is given by

S^{−1} = (1/(AD − BC)) [D  −B; −C  A],  provided that AD − BC ̸= 0.

If Φ is a normalized fundamental matrix of Ẋ = AX at t = 0, the formula in Theorem 4.36 can be changed


slightly by applying Corollary 4.29.

Corollary 4.39 If Φ is a normalized fundamental matrix of Ẋ = AX at t = 0, then

Xp (t) = ∫_{0}^{t} Φ (t − s) F (s) ds                                (4.9)

is a solution of Ẋ = AX + F.

Remark
Note that the solution Xp (t) of Ẋ = AX + F given by (4.7) or (4.9) vanishes at t0 (with t0 = 0 in (4.9)), since the integral is zero there.

Example 4.40 Use Corollary 4.39 to solve the inhomogeneous system

Ẋ = [4  5; −2  −2] X + [4e^t cos t; 0]

by taking t0 = 0.

Solution
The corresponding homogeneous system has the characteristic equation

C (λ) = λ2 − 2λ + 2 = 0

which yields the roots λ = 1 ± i.

1 + i : Solve

[3 − i  5; −2  −3 − i] [v1; v2] = [0; 0]

or equivalently

(3 − i) v1 + 5v2 = 0                                                 (1)
−2v1 − (3 + i) v2 = 0.                                               (2)

(Note that the second equation is equivalent to the first one, i.e. −(3 − i)/2 × (2) = (1).) Choose, for example, v1 = 5; then v2 = −3 + i, so that an eigenvector corresponding to 1 + i is

V = [5; −3 + i] = [5; −3] + i [0; 1].

Thus we get the two real solutions of the corresponding homogeneous problem

X1 (t) = e^t ([5; −3] cos t − [0; 1] sin t) = e^t [5 cos t; −3 cos t − sin t]

and

X2 (t) = e^t ([5; −3] sin t + [0; 1] cos t) = e^t [5 sin t; −3 sin t + cos t].

A fundamental matrix is therefore

Φ (t) = e^t [5 cos t  5 sin t; −3 cos t − sin t  −3 sin t + cos t].

Hence

Φ (0) = [5  0; −3  1]

and thus

Φ^{−1} (0) = (1/5) [1  0; 3  5]

so that the normalized fundamental matrix is given by

Ψ (t) = Φ (t) Φ^{−1} (0) = e^t [cos t + 3 sin t  5 sin t; −2 sin t  −3 sin t + cos t].

From Corollary 4.39 it therefore follows that

Xp (t) = ∫_{0}^{t} Ψ (t − s) F (s) ds

       = ∫_{0}^{t} e^{t−s} [cos (t − s) + 3 sin (t − s)  5 sin (t − s); −2 sin (t − s)  −3 sin (t − s) + cos (t − s)] [4e^s cos s; 0] ds

       = 4e^t ∫_{0}^{t} [cos (t − s) cos s + 3 sin (t − s) cos s; −2 sin (t − s) cos s] ds

       = 2e^t ∫_{0}^{t} [cos t + cos (t − 2s) + 3 sin t + 3 sin (t − 2s); −2 sin t − 2 sin (t − 2s)] ds

       = 2e^t [s cos t + 3s sin t − (1/2) sin (t − 2s) + (3/2) cos (t − 2s); −2s sin t − cos (t − 2s)]_{0}^{t}

       = 2e^t [t cos t + 3t sin t + sin t; −2t sin t].

Remark
Note that the fundamental matrix Φ (t) in Corollary 4.39 is normalized while the one in Theorem 4.36 is
not necessarily normalized. Also, in Theorem 4.36 the point t0 is arbitrary, while in Corollary 4.39, t0 = 0.

Exercise 4.41

1. Find a particular solution of the problem in Example 4.37 by using formula (4.7). Show that the answer obtained and the solution already found differ by [0, e^{3t}/12]^T — a solution of the complementary system Ẋ = AX.

2. Prove that the difference between any two particular solutions of a non–homogeneous system, is a
solution of the complementary homogeneous system.

From this it follows that the solutions of the non–homogeneous system Ẋ = AX + F may be expressed
as X (t) = Xc (t) + Xp (t) with Xp (t) a particular solution of Ẋ = AX + F and Xc (t) a general solution
of Ẋ = AX. As Xc (t) assumes all solutions of Ẋ = AX, it follows that X (t) assumes all solutions of
Ẋ = AX + F. As a result of Definition 4.21, we define a general solution of Ẋ = AX + F as follows:

Definition 4.42 Suppose that Xp (t) is any particular solution of the non–homogeneous system Ẋ = AX + F
and Φ a fundamental matrix of the complementary system Ẋ = AX. Then the vector function

X (t) = Xp (t) + Φ (t) K (4.10)

with K a constant vector, is called a general solution of Ẋ = AX + F.

Theorem 4.43 (Uniqueness Theorem for the Non–Homogeneous Initial Value Problem
Ẋ = AX + F, X (t0 ) = X0 ) The problem

Ẋ = AX + F, X (t0 ) = X0 (4.11)

has one and only one solution

X (t) = Xp (t) + Φ (t) X0                                            (4.12)

with Φ a normalized fundamental matrix of the complementary system Ẋ = AX at t0 and

Xp (t) = Φ (t) ∫_{t0}^{t} Φ^{−1} (s) F (s) ds.

Proof. By differentiation it follows that the function given by equation (4.12), is a solution of Ẋ = AX + F,
which also satisfies the initial condition. Suppose Y (t) is any solution of (4.11). We then have to prove
that X (t) ≡ Y (t). Define Z (t) = X (t) − Y (t). Then

Ż (t) = Ẋ (t) − Ẏ (t)


= AX (t) + F (t) − AY (t) − F (t)
= A (X (t) − Y (t))
= AZ (t)

and
Z (t0 ) = X (t0 ) − Y (t0 ) = 0.
The vector function Z (t) is, therefore, a solution of a homogeneous system which vanishes at t0 . From
Corollary 4.22, Z (t) is identically zero, so that X (t) ≡ Y (t). This completes the proof. 

Example 4.44 Find a general solution of Ẋ = AX + F, where

A = [1  0; −1  3],   F (t) = [e^t; 1].

Solution
Take t0 = 0.
A general solution of Ẋ = AX is

X (t) = c1 e^t [2; 1] + c2 e^{3t} [0; 1],

with c1 , c2 arbitrary constants. A fundamental matrix is therefore given by

Φ (t) = [2e^t  0; e^t  e^{3t}].

We find a particular solution by using Corollary 4.39. A normalized fundamental matrix at t0 = 0 for the homogeneous system is given by

Ψ (t) = Φ (t) Φ^{−1} (0) = [2e^t  0; e^t  e^{3t}] (1/2) [1  0; −1  2] = [e^t  0; (e^t − e^{3t})/2  e^{3t}].

From equation (4.9), a particular solution is

Xp (t) = ∫_{0}^{t} Ψ (t − s) F (s) ds

       = ∫_{0}^{t} [e^{t−s}  0; (e^{t−s} − e^{3(t−s)})/2  e^{3(t−s)}] [e^s; 1] ds

       = ∫_{0}^{t} [e^t; e^t/2 − (1/2) e^{3t−2s} + e^{3(t−s)}] ds

       = [se^t; (s/2) e^t + (1/4) e^{3t−2s} − (1/3) e^{3(t−s)}]_{0}^{t}.

We therefore have the particular solution

Xp (t) = [te^t; (t/2 + 1/4) e^t + (1/12) e^{3t} − 1/3].

A general solution is, therefore,

X (t) = [te^t; (t/2 + 1/4) e^t + (1/12) e^{3t} − 1/3] + c1 e^t [2; 1] + c2 e^{3t} [0; 1],

with c1 , c2 arbitrary constants.
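The following sketch (assuming NumPy) verifies the particular solution just found, including the property Xp (0) = 0 guaranteed by Corollary 4.39:

    # Verify the particular solution of Example 4.44.
    import numpy as np

    A = np.array([[1.0, 0.0], [-1.0, 3.0]])

    def F(t):
        return np.array([np.exp(t), 1.0])

    def Xp(t):
        return np.array([t * np.exp(t),
                         (t / 2 + 0.25) * np.exp(t)
                         + np.exp(3 * t) / 12 - 1.0 / 3.0])

    print(Xp(0.0))                                           # [0. 0.]
    t, h = 0.6, 1e-6
    Xdot = (Xp(t + h) - Xp(t - h)) / (2 * h)
    print(np.allclose(Xdot, A @ Xp(t) + F(t), atol=1e-4))    # True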

Exercise 4.45 Do problems 11 – 33 of Exercise 8.3.2 on pp. 354 – 355 of Z&W.

Exercise 4.46

1. Use Theorem 4.36 to find a general solution of the inhomogeneous problem

Ẋ = [0  1; −1  0] X + [0; 2 cos t].   (Take t0 = 0.)

2. Use Corollary 4.39 to solve the inhomogeneous initial value problem

Ẋ = [2  1; −4  2] X + [3; t] e^{2t},   X (0) = [3; 2].

4.5 THE INEQUALITY OF GRONWALL

In applications of differential equations, it is not always possible to obtain an exact solution of the differential
equation. In such cases it is sometimes necessary to find, instead, an estimate for, or an upper bound of,
the so–called norm of the solution. The latter quantity, as we will see below, measures the “size” of the
solution.

In this section we treat the inequality of Gronwall, which is a useful tool in obtaining estimates for the
norm of a solution of a differential equation of the form Ẋ = AX.

Firstly we recall that if X (t) is a vector function, e.g.

X (t) = (x1 (t) , x2 (t) , . . . , xn (t))^T ,

then

||X (t)|| = (x1^2 (t) + x2^2 (t) + . . . + xn^2 (t))^{1/2} = (∑_{i=1}^{n} xi^2 (t))^{1/2}.

Next we recall that for an n × n matrix A = [aij ] the number ||A|| is defined² by

||A|| = (∑_{i,j=1}^{n} aij^2)^{1/2}.

We show that ||A|| so defined is indeed a norm. The system of all n × n matrices is then a normed linear space. If A denotes [aij ] and B = [bij ], we have

(i) ||A|| = 0 ⇐⇒ aij = 0 ∀i = 1, . . . , n; j = 1, . . . , n ⇐⇒ A is the zero matrix.

(ii) ||λA|| = (∑_{i,j=1}^{n} (λaij)^2)^{1/2} = |λ| ||A|| for any scalar λ.

(iii) ||A + B|| = (∑_{i,j=1}^{n} (aij + bij)^2)^{1/2} ≤ (∑_{i,j=1}^{n} aij^2)^{1/2} + (∑_{i,j=1}^{n} bij^2)^{1/2} = ||A|| + ||B||,

where in the last step we used the inequality of Minkowski.

Before formulating the inequality of Gronwall, we prove the following auxiliary result, which will be needed
in proving Gronwall’s inequality.
2
The norm of A may also be defined in another way, as long as the properties of a norm are satisfied — see (i)–(iii) further
on.

Lemma 4.47 Suppose that f and g are continuous, real–valued functions in a ≤ t ≤ b and that f˙ exists in this interval. If, in addition, f˙ (t) ≤ f (t) g (t) in a ≤ t ≤ b, then

f (t) ≤ f (a) exp [∫_{a}^{t} g (s) ds]  ∀t ∈ [a, b].                 (4.13)

Proof Let p (t) = exp (−∫_{a}^{t} g (s) ds). Then

ṗ (t) = exp (−∫_{a}^{t} g (s) ds) · (d/dt) (−∫_{a}^{t} g (s) ds) = exp (−∫_{a}^{t} g (s) ds) [−g (t)] = −p (t) g (t)

and p (a) = 1. Now let F (t) = f (t) p (t). Then

Ḟ = f˙ p + f ṗ = f˙ p − f pg = p (f˙ − gf ) ≤ 0

since p > 0 and f˙ − f g ≤ 0. Hence F is nonincreasing for a ≤ t ≤ b and therefore F (a) ≥ F (t). But F (a) = f (a) since p (a) = 1. Hence we have

f (a) = F (a) ≥ F (t) = f (t) p (t) = f (t) exp [−∫_{a}^{t} g (s) ds]

which yields the desired result. 


We now proceed to

Theorem 4.48 (The Inequality of Gronwall) If f and g (g ≥ 0) are continuous, real–valued functions in a ≤ t ≤ b, and K a real constant such that

f (t) ≤ K + ∫_{a}^{t} f (s) g (s) ds,  a ≤ t ≤ b,                    (4.14)

then

f (t) ≤ K exp [∫_{a}^{t} g (s) ds],  a ≤ t ≤ b.                      (4.15)

Proof. Put

G (t) = K + ∫_{a}^{t} f (s) g (s) ds.

Then f (t) ≤ G (t) in [a, b]. G is now differentiable in [a, b]; indeed Ġ (t) = f (t) g (t). Since g ≥ 0 and f ≤ G, we have f g ≤ Gg; therefore Ġ ≤ Gg. All the requirements of Lemma 4.47 are satisfied with the role of f assumed by G. Consequently we have, according to inequality (4.13),

G (t) ≤ G (a) exp (∫_{a}^{t} g (s) ds),  a ≤ t ≤ b.

Now

G (a) = K + ∫_{a}^{a} f (s) g (s) ds = K.

Therefore

G (t) ≤ K exp [∫_{a}^{t} g (s) ds],  a ≤ t ≤ b.

The result now follows since f (t) ≤ G (t) in [a, b]. 

Example 4.49

(a) Show that the only continuous function f satisfying

0 ≤ f (t) ≤ ∫_{t0}^{t} f (s) ds,  t0 ≤ t,

is the identically zero function.

Solution Set g (s) = 1 and K = 0 in equation (4.14) (with a = t0 ). Since all the conditions of Theorem 4.48 are satisfied, the inequality in equation (4.15) yields the desired result.

(b) Use the Gronwall inequality to prove that only the identically zero function is a solution of the initial value problem

ẋ = g (t) x,  x (t0 ) = 0

with g continuous in t ≥ t0 .
Solution By integrating the above equation we get

x (t) = ∫_{t0}^{t} g (s) x (s) ds

for any function x (t) which satisfies the initial condition. Hence we get

0 ≤ |x (t)| = |∫_{t0}^{t} g (s) x (s) ds| ≤ ∫_{t0}^{t} |g (s)| |x (s)| ds.

Now take f (t) = |x (t)|, replace g (t) by |g (t)| and set K = 0 in Theorem 4.48. Then the inequality in equation (4.15) implies that x (t) = 0.

The technique used in these examples is applied to systems of differential equations in the following section.
As in the case of the one–dimensional equation, the procedure is to replace the differentiable equation by
an equation containing an integral, and then to take the norm.

4.6 THE GROWTH OF SOLUTIONS

The Gronwall Inequality may be used to obtain an estimate of the norm of solutions of the system Ẋ = AX,
where A is a constant matrix.

Theorem 4.50 If X (t) is any solution of Ẋ = AX, then the following inequality holds for all t and t0 :

||X (t)|| ≤ ||X (t0 )|| exp (||A|| |t − t0 |) (4.16)

with equality for t = t0 .



Proof If t = t_0, the assertion holds. If t > t_0, we have |t − t_0| = t − t_0, so that the inequality (4.16) reduces to

    ||X(t)|| ≤ ||X(t_0)|| exp[ ||A|| (t − t_0) ].

By integrating Ẋ(t) = AX(t), we obtain

    X(t) − X(t_0) = ∫_{t_0}^t AX(s) ds.

From the properties of norms we have

    ||X(t)|| ≤ ||X(t_0)|| + ∫_{t_0}^t ||A|| ||X(s)|| ds.   (4.17)

By putting f(t) = ||X(t)||, g(t) = ||A|| and K = ||X(t_0)||, it follows from Theorem 4.48 that

    ||X(t)|| ≤ ||X(t_0)|| exp( ∫_{t_0}^t ||A|| ds ) = ||X(t_0)|| exp[ ||A|| (t − t_0) ].

We still have to prove the theorem for t < t_0. In this case, put Y(t) = X(−t). If we assume that X(t) is a solution of Ẋ = AX, it follows by the chain rule that

    Ẏ(t) = −Ẋ(−t) = −AX(−t) = −AY(t).

Therefore Y(t) is a solution of Ẏ = −AY. Moreover −t > −t_0. Therefore, by applying the result just proved to the system Ẏ = −AY, we obtain

    ||X(t)|| = ||Y(−t)||
             ≤ ||Y(−t_0)|| exp[ ||−A|| (−t + t_0) ]
             = ||X(t_0)|| exp[ ||A|| (−t + t_0) ]
             = ||X(t_0)|| exp[ ||A|| |t − t_0| ],

since if t < t_0, then |t − t_0| = −t + t_0. This completes the proof. □


Remark
If t − t0 > 0, the inequality (4.16) shows that ||X (t)|| does not increase more rapidly than a certain
exponential function. Such functions X (t) are known as functions of exponential type.

Example 4.51

The norms of a vector

    X(t) = (x_1(t), x_2(t), . . . , x_n(t))^T

and an n × n matrix A = [a_ij] were previously defined by

    ||X|| = ( Σ_{i=1}^n x_i² )^{1/2}  and  ||A|| = ( Σ_{i,j} a_ij² )^{1/2}.

Alternatively we can define

    ||X|| = Σ_{i=1}^n |x_i|  and  ||A|| = max_{j=1,...,n} Σ_{i=1}^n |a_ij|.

The latter definition means that the norm of a matrix A is defined as the maximum of the sums obtained by addition of the absolute values of the entries in each column of A.

Now consider the system

    ẋ_1 = −(sin t)x_2 + 4
    ẋ_2 = −x_1 + 2tx_2 − x_3 + e^t
    ẋ_3 = 3(cos t)x_1 + x_2 + (1/t)x_3 − 5t².

Assume that there exist solutions X and Y on the interval (1, 3) with

    X(2) = [7, 3, −2]^T  and  Y(2) = [6.7, 3.2, −1.9]^T.

Use the alternative definitions given above and Theorem 4.50 to estimate the error

    ||X(t) − Y(t)||  for 1 < t < 3.

Solution
For the given system we have

    A(t) = [  0        −sin t    0   ]
           [ −1         2t      −1   ]
           [  3 cos t   1        1/t ]

so that

    ||A(t)|| = max{ 1 + 3|cos t|, |sin t| + 2t + 1, 1 + 1/t }
             ≤ max{ 4, 2 + 2t, 2 }
             ≤ 8,  since 1 < t < 3.

By Theorem 4.50 we have

    ||X(t) − Y(t)|| ≤ ||X(2) − Y(2)|| exp(||A|| |t − 2|) < (0.3 + 0.2 + 0.1)e⁸ = 0.6e⁸.
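The arithmetic above is easily checked by machine. In the Python sketch below (our own informal check; the function and variable names are not from the guide) the column–sum norm of A(t) is sampled over (1, 3) and the resulting error bound is evaluated:

```python
import numpy as np

# Companion computation for Example 4.51 (an informal check of the estimates).
def A(t):
    return np.array([[0.0,            -np.sin(t), 0.0],
                     [-1.0,            2.0 * t,  -1.0],
                     [3.0 * np.cos(t), 1.0,       1.0 / t]])

ts = np.linspace(1.001, 2.999, 2001)
sup_norm = max(np.abs(A(t)).sum(axis=0).max() for t in ts)  # max column sum
print("sup ||A(t)|| on (1,3) ≈", round(sup_norm, 3))        # comfortably below 8

X2 = np.array([7.0, 3.0, -2.0])
Y2 = np.array([6.7, 3.2, -1.9])
err0 = np.abs(X2 - Y2).sum()                                # ||X(2) - Y(2)|| = 0.6
print("error bound 0.6 e^8 ≈", err0 * np.exp(8.0))          # since |t - 2| < 1 on (1,3)
```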

CHAPTER 5

Higher Order One–dimensional Equations as Systems of First Order Equations

Objectives of this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding higher
order one–dimensional equations:

• the link between higher order one–dimensional equations and linear systems of first order differential
equations;

• companion matrix;

• companion system for the n–th order equation with constant coefficients as given by (5.1).

Outcomes of this Chapter


After studying this chapter the learner should be able to

• rewrite the n–th order equation (5.1) as a linear system of first order differential equations;

• solve the corresponding linear system of first order differential equations and interpret the solution in
terms of the original higher order one dimensional equation.

5.1 INTRODUCTION

In Chapter 4 of Z&W the link that exists between higher order one–dimensional linear differential equations and linear systems of first order differential equations is explored. The student can revise Sections 4.1 through 4.6 of Z&W as they were studied in APM2611.

5.2 COMPANION SYSTEMS FOR HIGHER–ORDER ONE–DIMENSIONAL


DIFFERENTIAL EQUATIONS

The n–th order equation with constant coefficients

    y^(n) + a_{n−1}y^(n−1) + . . . + a_1y^(1) + a_0y = f(t)   (5.1)

has been treated in APM2611. This equation can be reduced to a system of n first order equations. We illustrate by means of an example:
illustrate by means of an example:

Consider the equation

    y^(3) − 6y^(2) + 5y^(1) = sin t   (5.2)

with initial conditions y(0) = y′(0) = y′′(0) = 0.

If y(t) represents the distance of a body from a fixed point on a line, this equation describes the movement of that body along the line, subject to different forces. Since y measures distance, y^(1) measures velocity and y^(2) acceleration. By putting y^(1) = v and y^(2) = a, equation (5.2) may be expressed as

    y^(1) = v
    y^(2) = a   (5.3)
    y^(3) = −5v + 6a + sin t.

This, in turn, may be expressed as

    Ẋ = [ ẏ ]   [ 0   1  0 ] [ y ]   [ 0     ]
        [ v̇ ] = [ 0   0  1 ] [ v ] + [ 0     ]   (5.4)
        [ ȧ ]   [ 0  −5  6 ] [ a ]   [ sin t ]

           [ 0   1  0 ]     [ 0     ]
         = [ 0   0  1 ] X + [ 0     ].   (5.5)
           [ 0  −5  6 ]     [ sin t ]

Equation (5.5) is now merely a special case of Ẋ = AX + F(t) with the initial condition

    X(0) = [ y(0) ]   [ y(0)     ]
           [ v(0) ] = [ y^(1)(0) ] = 0.
           [ a(0) ]   [ y^(2)(0) ]

More generally, the third order equation

    y^(3) + a_2y^(2) + a_1y^(1) + a_0y = f(t),   (5.6)

from which we obtain

    y^(3) = −a_0y − a_1y^(1) − a_2y^(2) + f(t),

can be expressed in matrix form as three first order equations as follows:

Let

    X = [ y, y^(1), y^(2) ]^T.

Then

    Ẋ = [ y^(1), y^(2), y^(3) ]^T

so that

    Ẋ = [ y^(1) ]   [ y^(1)                     ]   [ 0    ]
        [ y^(2) ] = [ y^(2)                     ] + [ 0    ]
        [ y^(3) ]   [ −a_0y − a_1y^(1) − a_2y^(2) ]   [ f(t) ]

           [  0     1     0  ]          [ 0 ]
         = [  0     0     1  ] X + f(t) [ 0 ]   (5.7)
           [ −a_0  −a_1  −a_2 ]          [ 1 ]

         = AX + F(t)

where

    A = [  0     1     0  ]
        [  0     0     1  ]
        [ −a_0  −a_1  −a_2 ]

and

    F(t) = f(t)[ 0, 0, 1 ]^T.

The system (5.7) is known as the companion system for the third order equation (5.6). The companion system for the n–th order equation (5.1) may be defined in a similar way.

Definition 5.1 A matrix of the form

    [  0     1     0    . . .   0        ]
    [  0     0     1    . . .   0        ]
    [  .     .     .            .        ]
    [  0     0     0    . . .   1        ]
    [ −a_0  −a_1  −a_2  . . .  −a_{n−1}  ]

is known as a companion matrix.

Definition 5.2 The system

    Ẋ = [  0     1     0    . . .   0        ]          [ 0 ]
        [  0     0     1    . . .   0        ]          [ 0 ]
        [  .     .     .            .        ] X + f(t) [ . ]   (5.8)
        [  0     0     0    . . .   1        ]          [ 0 ]
        [ −a_0  −a_1  −a_2  . . .  −a_{n−1}  ]          [ 1 ]

is known as the companion system for the n–th order equation (5.1).

Theorem 5.3 If y is a solution of equation (5.1), then the vector

    X = [ y, y^(1), . . . , y^(n−1) ]^T

is a solution of the companion system (5.8). Conversely: if

    X = [ x_1, . . . , x_n ]^T

is a solution of the system (5.8), then x_1, the first component of X, is a solution of (5.1) and

    X = [ x_1, x_1^(1), . . . , x_1^(n−1) ]^T,

i.e.

    x_2 = x_1^(1),  x_3 = x_1^(2),  . . . ,  x_n = x_1^(n−1).

Exercise 5.4

1. Prove Theorem 5.3 for the case n = 3.

2. Prove, by applying Theorem 5.3, that the initial value problem

       y^(n) + a_{n−1}y^(n−1) + . . . + a_1y^(1) + a_0y = f(t),
       y(t_0) = y_1,  y^(1)(t_0) = y_2,  . . . ,  y^(n−1)(t_0) = y_n,

   has a unique solution.

We summarize: Equations of order n in one dimension may be solved by determining a general solution

    X(t) = [ x_1(t), . . . , x_n(t) ]^T

of the companion system. The function x_1(t) is then a solution of the given n–th order equation.

As the companion system is but a special case of the non–homogeneous linear system with constant coeffi-
cients, one needs no new techniques.
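Because the construction of the companion matrix is purely mechanical, it is also easily automated. The following Python sketch is our own helper (the name companion is illustrative, not notation from the guide); it builds the matrix of Definition 5.1 from the coefficients a_0, . . . , a_{n−1} of equation (5.1):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix for y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = f(t),
    given coeffs = [a0, a1, ..., a_{n-1}]  (a sketch, not prescribed code)."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)        # ones on the superdiagonal
    C[-1, :] = -np.asarray(coeffs)    # last row: -a0, -a1, ..., -a_{n-1}
    return C

# For y''' - 6y'' + 5y' = sin t we have a0 = 0, a1 = 5, a2 = -6, and the
# matrix below reproduces the one appearing in (5.4):
print(companion([0.0, 5.0, -6.0]))
```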

Example 5.5 Determine the general solution of

    d²y/dt² + 4y = sin 3t   (5.9)

by using the companion system.

Solution
The companion system is found as follows:

Rewrite (5.9) as

    y^(2) = −4y + sin 3t.

Let

    X = [ y, y^(1) ]^T;

then

    Ẋ = [ y^(1), y^(2) ]^T

so that

    Ẋ = [ y^(1) ]   [  0  1 ] [ y      ]   [ 0      ]
        [ y^(2) ] = [ −4  0 ] [ y^(1)  ] + [ sin 3t ]

                    [  0  1 ]            [ 0 ]
                  = [ −4  0 ] X + sin 3t [ 1 ]

                  = AX + F(t).   (5.10)

We first determine a general solution of Ẋ = AX. Now

    det [ −λ   1 ] = 0  ⇒  λ² + 4 = 0  ⇒  λ = ±2i.
        [ −4  −λ ]

Choose λ = 2i. (Remember λ = −2i yields identical solutions up to a constant.) Now put

    [ −2i   1  ] [ u_1 ]   [ 0 ]
    [ −4   −2i ] [ u_2 ] = [ 0 ].

Then

    −2iu_1 + u_2 = 0
    −4u_1 − 2iu_2 = 0.

These two equations are identical. Choose u_1 = 1; then u_2 = 2i. The vector

    U ≡ [ u_1, u_2 ]^T = [ 1, 2i ]^T

is then an eigenvector corresponding to the eigenvalue λ = 2i. Consequently

    X_1(t) = cos 2t [ 1 ] − sin 2t [ 0 ] = [  cos 2t   ]
                    [ 0 ]          [ 2 ]   [ −2 sin 2t ]

and

    X_2(t) = sin 2t [ 1 ] + cos 2t [ 0 ] = [  sin 2t  ]
                    [ 0 ]          [ 2 ]   [ 2 cos 2t ]

are solutions of Ẋ = AX and a general solution is

    X(t) = c_1X_1(t) + c_2X_2(t)

with c_1 and c_2 arbitrary constants.



A fundamental matrix of Ẋ = AX is therefore

    Φ(t) = [  cos 2t    sin 2t  ]
           [ −2 sin 2t  2 cos 2t ]

and a normalized fundamental matrix of Ẋ = AX at t_0 = 0 is

    Ψ(t) = Φ(t)Φ⁻¹(0) = (1/2) [  2 cos 2t   sin 2t  ]
                              [ −4 sin 2t  2 cos 2t ].

Therefore, a particular solution of the system (5.10) is

    X_p(t) = (1/2) ∫_0^t [  2 cos 2(t−s)   sin 2(t−s)  ] [ 0      ] ds
                         [ −4 sin 2(t−s)  2 cos 2(t−s) ] [ sin 3s ]

           = (1/2) ∫_0^t [ sin 2(t−s) sin 3s   ] ds
                         [ 2 cos 2(t−s) sin 3s ]

           = (1/2) ∫_0^t [ (1/2) cos(2t−5s) − (1/2) cos(2t+s) ] ds
                         [ sin(2t+s) + sin(5s−2t)             ]

           = (1/2) [ −(1/10) sin(2t−5s) − (1/2) sin(2t+s) ]  evaluated from s = 0 to s = t
                   [ −cos(2t+s) − (1/5) cos(5s−2t)        ]

           = (1/2) [ −(2/5) sin 3t + (3/5) sin 2t ]
                   [ −(6/5) cos 3t + (6/5) cos 2t ].

Consequently the general solution of Ẋ = AX + F(t) is

    X(t) = c_1 [  cos 2t   ] + c_2 [  sin 2t  ] + (1/10) [ −2 sin 3t + 3 sin 2t ]
               [ −2 sin 2t ]       [ 2 cos 2t ]          [ −6 cos 3t + 6 cos 2t ].

According to Theorem 5.3, a general solution of (5.9) is given by the first component of X(t). Therefore

    y(t) = c_1 cos 2t + c_2 sin 2t − (sin 3t)/5 + (3 sin 2t)/10.   (5.11)

Note that the second component of X(t) is the derivative of y(t).

Remark
This solution can be obtained more readily by applying the standard techniques for solving higher order one–dimensional equations: For the equation (D² + 4)y = sin 3t, we have the auxiliary equation m² + 4 = 0 with roots ±2i. Hence

    y_C.F.(t) = d_1 cos 2t + d_2 sin 2t.

Furthermore

    y_P.I.(t) = sin 3t/(−3² + 4) = −(sin 3t)/5.

Hence

    y(t) = d_1 cos 2t + d_2 sin 2t − (sin 3t)/5.

This is equivalent to the solution (5.11).
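As a further sanity check (ours, not the guide's, and assuming the SciPy library is available), one may integrate the companion system (5.10) numerically and compare the first component with the closed form (5.11). Taking X(0) = (1, 0)^T forces c_1 = 1 and c_2 = 0 in (5.11):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, X):                       # the companion system (5.10)
    y, v = X
    return [v, -4.0 * y + np.sin(3.0 * t)]

t = np.linspace(0.0, 10.0, 400)
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t, rtol=1e-10, atol=1e-12)

# closed form (5.11) with c1 = 1, c2 = 0:
y_exact = np.cos(2 * t) - np.sin(3 * t) / 5 + 3 * np.sin(2 * t) / 10
print("max |numerical - closed form|:", np.max(np.abs(sol.y[0] - y_exact)))
```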

Exercise 5.6

1. Write the companion systems for the equations given below.

   (a) y^(2) − y^(1) = 0   (b) y^(2) − y = 0   (c) y^(2) = 0   (d) y^(4) − y^(3) = 0
   (e) y^(4) − y = 0   (f) y^(3) − (sin t)y = f(t)

2. Each of the matrices below is the system matrix for the companion system of an n-th order equation. Determine the n-th order equation in each case.

   (a) [ 0  1 ]   (b) [ 0  1 ]   (c) [  0  1 ]   (d) [  0  1  0 ]   (e) [  0  1  0 ]
       [ 0  0 ]       [ 0  1 ]       [ −1  0 ]       [  0  0  1 ]       [  0  0  1 ]
                                                     [ −1  0  1 ]       [ −1  1  0 ]

CHAPTER 6

Analytic Matrices and Power Series Solutions of Systems of Differential Equations

Objectives for this Chapter


The main objectives of this chapter are to gain an understanding of the following concepts regarding power series solutions of differential equations:

• analytic matrices;

• power series expansions of analytic functions;

• power series solutions for Ẋ = A (t) X;

• the exponential of a matrix.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• determine whether a matrix is analytic or not;

• solve the system Ẋ = A (t) X using power series methods.

6.1 INTRODUCTION

The system Ẋ = AX, with A an n × n matrix with constant entries, is a special case of the more general system

    Ẋ = A(t)X,  a < t < b,   (6.1)

with A(t) = [a_ij(t)] a matrix whose entries are functions of time and which we will call continuous and real–valued if all the functions a_ij are continuous and real–valued. Everything that holds for the system Ẋ = A(t)X is therefore applicable to the case where A is constant. The converse is, however, not easy. Whereas we have at our disposal standard techniques for finding n linearly independent solutions of Ẋ = AX (each of which is usually a linear combination of known functions), and we know that exactly n linearly independent solutions exist, the construction of even a single solution of Ẋ = A(t)X usually entails an enormous amount of work. Moreover, it requires advanced mathematics to prove theoretically that n linearly independent solutions always exist.

A discussion of this theory, which is outside the scope of this course, is found, amongst others, in “Lectures on Ordinary Differential Equations” by W. Hurewicz¹. It is sufficient to say that, as in the case of the equation Ẋ = AX, fundamental and normalized fundamental matrices Φ(t) of Ẋ = A(t)X are used in the development of this theory.

Whereas the solutions of Ẋ = AX are usually linear combinations of known functions such as sin t and cos t, such linear combinations are rarely solutions of Ẋ = A(t)X. Indeed, systems Ẋ = A(t)X are often used to define and to study new functions. For instance, the functions of Bessel, Legendre and Laguerre and the Hermite functions (some of which you have encountered in APM2611) are examples of functions defined as solutions of certain differential equations.

The main result of this chapter is that, under certain conditions on the matrix A(t), viz that it is possible to write the entries of A(t) as power series, a solution of Ẋ = A(t)X can be expressed as a power series

    X(t) = U_0 + U_1(t − t_0) + . . . + U_k(t − t_0)^k + . . .   (6.2)

about t_0. If we wish to find a power series solution of Ẋ = A(t)X, the vectors U_0, U_1, . . . have to be determined, while we also have to show that the infinite sum (6.2) converges to a solution of Ẋ = A(t)X. The property which A(t) must satisfy is that of analyticity, a powerful property, since continuity and even the existence of derivatives of all orders are not sufficient to ensure it. However, in most cases, the coefficient matrices of differential equations, as well as of systems that are important for all practical purposes, do have the property of analyticity.

6.2 POWER SERIES EXPANSIONS OF ANALYTIC FUNCTIONS


Although the property of analyticity is usually associated with functions of a complex variable, we can
define the property of analyticity for real functions, in view of the fact that a function of a real variable is
only a special case of a function of a complex variable.

Definition 6.1 A function F of a real variable is analytic at a real number t_0 if a positive number r and a sequence of numbers a_0, a_1, a_2, . . . exist such that

    F(t) = lim_{N→∞} Σ_{k=0}^N a_k(t − t_0)^k = Σ_{k=0}^∞ a_k(t − t_0)^k  for |t − t_0| < r.   (6.3)

The following theorem deals with the differentiation of power series. This is no trivial matter, since infinite
sums are involved.

Theorem 6.2 (Taylor’s Theorem) If F is analytic at t_0, and has series expansion

    F(t) = Σ_{k=0}^∞ a_k(t − t_0)^k  for |t − t_0| < r,

then

    F^(j)(t) = Σ_{k=0}^∞ ((k + j)!/k!) a_{k+j}(t − t_0)^k  for |t − t_0| < r,

for every positive integer j.
1
W. Hurewicz, Lectures on Ordinary Differential Equations, The Technology Press, Cambridge, Mass., 1958.

From the above it is clear that the j–th derivative F^(j) is also analytic at t_0 and F^(j)(t_0) = j!a_j. The constants

    a_k = (1/k!) F^(k)(t_0)

are known as the Taylor coefficients of F at t_0, and the expansion

    Σ_{k=0}^∞ a_k(t − t_0)^k = Σ_{k=0}^∞ (1/k!) F^(k)(t_0)(t − t_0)^k

is the Taylor series of F about t_0.

Remark

(1) It is worthwhile selecting analytic functions to form a class of functions for the following reasons:

    (a) It is possible that a function may not possess derivatives of all orders. (Can you think of a good example?)

    (b) Functions F exist which have derivatives of all orders, but of which the Taylor series

            Σ_{k=0}^∞ (1/k!) F^(k)(t_0)(t − t_0)^k

        converges only at t = t_0. It may also happen that a function F has a Taylor series expansion

            Σ_{k=0}^∞ (1/k!) F^(k)(t_0)(t − t_0)^k

        converging for |t − t_0| < r for some r > 0, but not to F(t), i.e.

            F(t) ≠ Σ_{k=0}^∞ (1/k!) F^(k)(t_0)(t − t_0)^k,

        unless t = t_0.

(2) If F is a function of the complex variable z and F is analytic at z = z_0 (with the concept of analyticity defined as in Complex Analysis), a power series expansion of the form

        Σ_{k=0}^∞ a_k(z − z_0)^k

    exists for F(z), viz the Laurent series

        Σ_{k=0}^∞ a_k(z − z_0)^k + Σ_{k=1}^∞ b_k(z − z_0)^{−k},

    with all the coefficients b_k of the “principal part”

        Σ_{k=1}^∞ b_k(z − z_0)^{−k}

    equal to zero.

    In this case the coefficients a_k (k = 0, 1, 2, . . .) are given in terms of contour integrals by means of the formula

        a_k = (1/(2πi)) ∮_C F(z)/(z − z_0)^{k+1} dz

    with C a circle with centre z_0. This is, of course, a result from Complex Analysis, quoted here only for the sake of interest (as this result is dealt with in the module on Complex Analysis).

A list of Taylor series for some elementary functions is given below.

(1) e^t = Σ_{k=0}^∞ t^k/k!   for all t

(2) sin t = Σ_{k=0}^∞ (−1)^k t^{2k+1}/(2k + 1)!   for all t

(3) cos t = Σ_{k=0}^∞ (−1)^k t^{2k}/(2k)!   for all t

(4) 1/(1 − t) = Σ_{k=0}^∞ t^k   for −1 < t < 1

(5) 1/(1 + t) = Σ_{k=0}^∞ (−1)^k t^k   for −1 < t < 1

(6) ln(1 + t) = Σ_{k=0}^∞ (−1)^k t^{k+1}/(k + 1)   for −1 < t ≤ 1

(7) arctan t = Σ_{k=0}^∞ (−1)^k t^{2k+1}/(2k + 1)   for −1 ≤ t ≤ 1

The following theorem deals with addition, subtraction and multiplication of power series:

Theorem 6.3 If

    F(t) = Σ_{k=0}^∞ F_k(t − t_0)^k,  −r < t − t_0 < r,
    G(t) = Σ_{k=0}^∞ G_k(t − t_0)^k,  −r < t − t_0 < r,

with F_k and G_k the Taylor coefficients of F and G, then

    F(t) ± G(t) = Σ_{k=0}^∞ (F_k ± G_k)(t − t_0)^k

and

    F(t)G(t) = Σ_{k=0}^∞ ( Σ_{i=0}^k F_i G_{k−i} ) (t − t_0)^k  for |t − t_0| < r.
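The product formula is just the Cauchy product of the coefficient sequences, which is easy to compute. A minimal Python sketch (our own illustration, with arbitrary truncation order) multiplies truncated coefficient lists of e^t and 1/(1 − t) about t_0 = 0:

```python
from math import factorial

# Cauchy product of Taylor coefficients (cf. Theorem 6.3); our own illustration.
N = 8
F = [1.0 / factorial(k) for k in range(N)]   # coefficients of e^t
G = [1.0] * N                                # coefficients of 1/(1 - t)
FG = [sum(F[i] * G[k - i] for i in range(k + 1)) for k in range(N)]
print(FG)   # k-th entry equals sum_{i<=k} 1/i!, the k-th coefficient of e^t/(1-t)
```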

The following theorem is often used.

Theorem 6.4 If

    F(t) = Σ_{k=0}^∞ F_k(t − t_0)^k ≡ 0,  −r < t − t_0 < r,

then F_0 = F_1 = . . . = 0, with F_k the Taylor coefficients of F.

Proof The result follows immediately by recalling that if

    F(t) ≡ 0, i.e. F(t) = 0 for all t,

then the derivatives of all orders are also zero. □

Remark
F (0) = 0 does not imply F ′ (0) = 0! For example

F (x) = x implies F (0) = 0, but F ′ (0) = 1.

We now define analytic matrices:

Definition 6.5 The n × m matrix A (t) = [aij (t)] is said to be analytic at t = t0 if every entry

aij (t) , i = 1, . . . , n, j = 1, . . . , m

is analytic at t = t0 .

In this definition it may happen that the intervals of convergence of the expansions for the separate entries
differ from one another. The interval of convergence of A (t) is then taken as that interval in which all the
entries converge.

The preceding theorems may now be reformulated for analytic A (t) by simply applying the relevant con-
ditions to each entry of A (t) and then making the conclusion in a similar way.

The notation

    A(t) = Σ_{k=0}^∞ A_k(t − t_0)^k,  with  A_k = (1/k!) A^(k)(t_0),  |t − t_0| < r,

is, therefore, just an abbreviated notation for n × m expansions, one for each entry of A(t).

Example 6.6

Determine A_k such that

    [ cos t   arctan t  ]
    [ e^t     1/(1 − t) ] = Σ_{k=0}^∞ A_k t^k ≡ A(t).

Solution
Note that t_0 = 0. Since cos t is an even function and arctan t is an odd function, we rewrite the series expansion

    Σ_{k=0}^∞ A_k t^k

as the sum of its even and odd parts as follows:

    Σ_{k=0}^∞ A_k t^k = Σ_{i=0}^∞ A_{2i} t^{2i} + Σ_{i=0}^∞ A_{2i+1} t^{2i+1}.

Then, by using the expansions for cos t, arctan t, e^t and 1/(1 − t),

    A_{2i} = [ (−1)^i/(2i)!   0 ]
             [ 1/(2i)!        1 ]

and

    A_{2i+1} = [ 0             (−1)^i/(2i + 1) ]
               [ 1/(2i + 1)!   1               ]

for i = 0, 1, 2, . . . . The common interval of convergence is −1 < t < 1, so that A(t) is analytic at t_0 = 0.
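Such coefficient computations can be cross-checked with a computer algebra system. The sketch below is our own check and assumes the SymPy library; it expands each entry of A(t) about t_0 = 0 and reads off A_0, . . . , A_3:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.cos(t), sp.atan(t)],
               [sp.exp(t), 1 / (1 - t)]])

# Taylor coefficient matrices A_k of Example 6.6 (an informal check):
for k in range(4):
    Ak = A.applyfunc(lambda e: sp.series(e, t, 0, 6).removeO().coeff(t, k))
    print(f"A_{k} =", Ak.tolist())
```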

Exercise 6.7

Find X_k such that

    X(t) = [ e^t    ]
           [ e^{2t} ] = Σ_{k=0}^∞ X_k t^k.

6.3 SERIES SOLUTIONS FOR Ẋ = A(t)X

Consider the initial value problem

    Ẋ = A(t)X,
    X(t_0) = X_0.   (6.4)

Assume that A(t) is analytic at t = t_0.

According to a result which we do not prove, a unique solution X(t), analytic at t_0, exists for the initial value problem (6.4), i.e. X(t) has series expansion

    X(t) = Σ_{k=0}^∞ X_k(t − t_0)^k.   (6.5)

If we want to find a solution for (6.4) in the form of the expansion (6.5), we have to determine formulas for X_k. Since A(t) and X(t_0) = X_0 are known, we therefore determine the relation between the constant vectors X_k, and A(t) and X_0.

Since

    X_k = X^(k)(t_0)/k!,  k = 0, 1, 2, . . . ,   (6.6)

we must differentiate (6.5) repeatedly, and determine the value of the derivative at t = t_0 in each case. In order to do this, we use Leibniz’s Rule, according to which

    (d^k/dt^k)[A(t)X(t)] = Σ_{i=0}^k C(k, i) A^(i)(t) X^(k−i)(t)   (6.7)

with the binomial coefficient

    C(k, i) = k!/(i!(k − i)!).

From (6.6) we have

    X_{k+1} = X^(k+1)(t_0)/(k + 1)!
            = (1/(k + 1)!) [ (d^k/dt^k) Ẋ(t) ]_{t=t_0}
            = (1/(k + 1)!) [ (d^k/dt^k)(A(t)X(t)) ]_{t=t_0}
            = (1/(k + 1)!) Σ_{i=0}^k C(k, i) A^(i)(t_0) X^(k−i)(t_0)

from (6.7).

Since

    A(t) = Σ_{k=0}^∞ (A^(k)(t_0)/k!)(t − t_0)^k

from Taylor’s Theorem, we can, by putting

    A_k = A^(k)(t_0)/k!

and using (6.6), write

    X_{k+1} = (k!/(k + 1)!) Σ_{i=0}^k A_i X_{k−i},

i.e.

    X_{k+1} = (1/(k + 1)) Σ_{i=0}^k A_i X_{k−i}   (6.8)

for k = 0, 1, . . . and X_0 = X(t_0).

Since a unique solution of (6.4), analytic at t_0, exists (see Chapter 7), we have the following theorem:

Theorem 6.8 Suppose that A(t) is analytic at t_0 with a power series expansion

    A(t) = Σ_{k=0}^∞ A_k(t − t_0)^k,  |t − t_0| < r.

Let X_0 be an arbitrary vector. Then

    X(t) = Σ_{k=0}^∞ X_k(t − t_0)^k,  |t − t_0| < r,

is a solution of (6.4), analytic at t_0, with

    X_{k+1} = (1/(k + 1)) Σ_{i=0}^k A_i X_{k−i},

i.e.

    X_1 = A_0X_0
    X_2 = (1/2){A_0X_1 + A_1X_0}
    ⋮
    X_{k+1} = (1/(k + 1)){A_0X_k + A_1X_{k−1} + . . . + A_kX_0}.   (6.9)
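The recursion (6.9) translates directly into a short program. The following Python sketch is our own implementation (the names are illustrative); it sums the series for a constant matrix, in which case A_0 = A and all other A_k vanish, and compares the result with the closed form obtained in Example 6.9 below:

```python
import numpy as np

def series_solution(A_list, X0, t, N=30):
    """X(t) = sum_k X_k t^k with X_{k+1} = (1/(k+1)) sum_i A_i X_{k-i},
    i.e. the recursion (6.9).  A_list holds the coefficient matrices
    A_0, A_1, ...  (a sketch, not prescribed code)."""
    Xs = [np.asarray(X0, dtype=float)]
    for k in range(N - 1):
        acc = sum(A_list[i] @ Xs[k - i] for i in range(min(k + 1, len(A_list))))
        Xs.append(acc / (k + 1))
    return sum(Xk * t ** k for k, Xk in enumerate(Xs))

A = np.array([[0.0, 1.0], [0.0, 2.0]])      # constant matrix: A_0 = A, A_k = 0 otherwise
a, b, tt = 1.0, 1.0, 0.5
print(series_solution([A], [a, b], tt))
print(np.array([a - b/2 + (b/2) * np.exp(2*tt),   # closed form of Example 6.9 below
                b * np.exp(2*tt)]))
```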
Example 6.9 Determine a series solution of

    Ẋ = [ 0  1 ] X ≡ AX.
        [ 0  2 ]

Compare your answer with the solution obtained by the eigenvalue–eigenvector method.

Solution
Since all entries are constants, the matrix A is analytic at t = 0. The sum

    Σ_{k=0}^∞ A_k(t − t_0)^k

can yield the constant matrix

    [ 0  1 ]
    [ 0  2 ]

only if A_1, A_2, A_3, . . . are all the zero matrix. Hence A_0 = A. From (6.9) we have

    X_n = (1/n) [ 0  1 ] X_{n−1}
                [ 0  2 ]

        = (1/(n(n − 1))) [ 0  1 ]² X_{n−2}
                         [ 0  2 ]

        = . . .

        = (1/n!) [ 0  1 ]^n X_0.
                 [ 0  2 ]

Now

    A² = [ 0  1 ][ 0  1 ] = [ 0  2 ],
         [ 0  2 ][ 0  2 ]   [ 0  4 ]

    A³ = [ 0  1 ][ 0  2 ] = [ 0  4 ],
         [ 0  2 ][ 0  4 ]   [ 0  8 ]

    A⁴ = [ 0  1 ][ 0  4 ] = [ 0   8 ],
         [ 0  2 ][ 0  8 ]   [ 0  16 ]

so it is easy to see that

    [ 0  1 ]^n = [ 0  2^{n−1} ]
    [ 0  2 ]     [ 0  2^n     ].

By putting

    X_0 = [ a ]
          [ b ],

we have

    X_n = (1/n!) [ 0  2^{n−1} ] [ a ]
                 [ 0  2^n     ] [ b ]

        = (1/n!) [ 2^{n−1}b ]
                 [ 2^n b    ]

        = (2^n/n!) [ b/2 ],  n ≥ 1.
                   [ b   ]

Therefore

    X(t) = [ a ] + Σ_{n=1}^∞ ((2t)^n/n!) [ b/2 ]
           [ b ]                         [ b   ]

         = [ a ] + [ b/2 ] (e^{2t} − 1)
           [ b ]   [ b   ]

         = [ a − b/2 ] + [ b/2 ] e^{2t}.
           [ 0       ]   [ b   ]

(Recall that e^{2t} − 1 = Σ_{n=1}^∞ (2t)^n/n!.)

By putting a − b/2 = c_1 and b/2 = c_2, we obtain

    X(t) = c_1 [ 1 ] + c_2 [ 1 ] e^{2t}.
               [ 0 ]       [ 2 ]

This solution is the same as that obtained by the method of eigenvalues and eigenvectors.

Example 6.10 Find a series solution for the second order initial value problem

    ẍ + x = 0,  x(0) = 1,  ẋ(0) = 0.

Solution
Let ẋ = y. Then ẏ = ẍ = −x, so the second order initial value problem becomes a system of two first order equations

    ẋ = y
    ẏ = −x

with x(0) = 1, y(0) = 0.
By putting X = [ x, y ]^T the problem can be rewritten as

    Ẋ = [  0  1 ] X,  X(0) = [ 1 ].
        [ −1  0 ]            [ 0 ]

Since all entries are constants, the matrix A is analytic at t = 0. Thus the sum

    Σ_{k=0}^∞ A_k(t − t_0)^k

can yield the constant matrix

    [  0  1 ]
    [ −1  0 ]

if and only if A_1, A_2, A_3, . . . are all the zero matrix. Hence A_0 = A, so that

    X_n = (1/n) [  0  1 ] X_{n−1} = (1/n!) [  0  1 ]^n X_0 = (1/n!) A^n X_0.   (1)
                [ −1  0 ]                  [ −1  0 ]

Now from

    AX_0 = [  0  1 ][ 1 ] = [  0 ],    A²X_0 = [  0  1 ][  0 ] = [ −1 ],
           [ −1  0 ][ 0 ]   [ −1 ]             [ −1  0 ][ −1 ]   [  0 ]

    A³X_0 = [  0  1 ][ −1 ] = [ 0 ],   A⁴X_0 = [  0  1 ][ 0 ] = [ 1 ]
            [ −1  0 ][  0 ]   [ 1 ]            [ −1  0 ][ 1 ]   [ 0 ]

we notice the pattern

    A^{2k+1}X_0 = [ 0          ],  k = 0, 1, . . . ,    A^{2k}X_0 = [ (−1)^k ],  k = 0, 1, . . . ,
                  [ (−1)^{k+1} ]                                    [ 0      ]

so that

    X_{2k+1} = (1/(2k + 1)!) A^{2k+1}X_0 = (1/(2k + 1)!) [ 0          ],  k = 0, 1, . . . ,
                                                         [ (−1)^{k+1} ]

and

    X_{2k} = (1/(2k)!) A^{2k}X_0 = (1/(2k)!) [ (−1)^k ],  k = 0, 1, . . . .
                                             [ 0      ]

Hence we have

    X(t) = Σ_{k=0}^∞ X_k t^k = Σ_{k=0}^∞ X_{2k} t^{2k} + Σ_{k=0}^∞ X_{2k+1} t^{2k+1}

         = [ 1 ] Σ_{k=0}^∞ ((−1)^k/(2k)!) t^{2k} + [ 0 ] Σ_{k=0}^∞ ((−1)^{k+1}/(2k + 1)!) t^{2k+1}.   (2)
           [ 0 ]                                   [ 1 ]

Remark
If we used the eigenvalue–eigenvector method to solve this problem, we would have found the exact solution

    x(t) = cos t,  y(t) = −sin t.

This result can easily be obtained from equation (2) by making use of the standard series expansions of sin t and cos t; we find

    X(t) = [ cos t ] + [ 0      ] = [ cos t  ].
           [ 0     ]   [ −sin t ]   [ −sin t ]

Exercise 6.11

1. Show that

       X(t) = Σ_{k=0}^∞ ((t − t_0)^k/k!) A^k X_0

   is the solution of

       Ẋ = AX,  X(t_0) = X_0.

2. Use the power series method to find a series solution for

   (a) Ẋ = [ 0   1  0 ] X,  X(0) = [ 0 ].
           [ 0   0  1 ]            [ 1 ]
           [ 1  −3  3 ]            [ 2 ]

       Write your final answer in terms of the appropriate exponential function.

(b) ẍ + 4x = 0, x(0) = 1, ẋ(0) = 0.


First write the equation above as a system of two first order differential equations and then
find the series solution using the power series method. Write your final answer in terms of the
appropriate trigonometric functions.

Remark
We have restricted ourselves to series solutions about “ordinary” points, i.e. points where A (t) is analytic.
Series solutions about certain types of singular points are also possible, but for the purpose of this course,
we shall not deal with them.

6.4 THE EXPONENTIAL OF A MATRIX

Our objective in this section is to generalize the exponential function e^{ta}, where a is a constant. We wish to attach meaning to the symbol e^{tA} when A is a constant matrix, say, an n × n matrix. That this has significance in the study of the system Ẋ = AX, and the initial value problem Ẋ = AX, X(t_0) = X_0, can be “suspected” from the fact that e^{ta}y is the (unique) solution of ẋ = ax, x(0) = y.
In view of our previous work, it is possible to define e^{tA} in either of the following two ways:

Definition 6.12 (1) By keeping in mind that

    X(t) = Σ_{k=0}^∞ (t^k/k!) A^k X_0

is the unique solution of Ẋ = AX, X(0) = X_0, it makes sense to define

    e^{tA}U = Σ_{k=0}^∞ (t^k/k!) A^k U   (6.10)

with U a constant n–dimensional vector. With this definition

    e^{tA}X_0 = Σ_{k=0}^∞ (t^k/k!) A^k X_0 = X(t)

so that the “function” e^{tA} corresponds to e^{ta}, the solution of ẋ = ax.

(2) By recalling that the solution X(t) of Ẋ = AX, X(0) = X_0, may also be expressed in terms of the normalized fundamental matrix Φ(t) of A at t = 0, viz as X(t) = Φ(t)X_0, we can also define

    e^{tA} = Φ(t).   (6.11)

Equation (6.11) is interpreted as

    e^{tA}U = Φ(t)U

for every vector U with n entries.

By using the latter definition, and the properties of normalized fundamental matrices, it is easy to show that e^{tA} has all the properties of the exponential function.
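Definition 6.12(1) also suggests an immediate computation: truncate the series. The sketch below is our own illustration (it assumes the NumPy and SciPy libraries) and compares the truncated series with SciPy's matrix exponential for the matrix of Example 6.10:

```python
import numpy as np
from scipy.linalg import expm

def exp_tA_U(A, U, t, N=40):
    """Truncated series e^{tA} U ≈ sum_{k<N} (t^k/k!) A^k U, cf. (6.10)."""
    term = np.asarray(U, dtype=float)       # current term (t^k/k!) A^k U
    total = np.zeros_like(term)
    for k in range(N):
        total = total + term
        term = (t / (k + 1)) * (A @ term)   # next term of the series
    return total

A = np.array([[0.0, 1.0], [-1.0, 0.0]])     # the matrix of Example 6.10
U = np.array([1.0, 0.0])
print(exp_tA_U(A, U, 1.3))                  # ≈ (cos 1.3, -sin 1.3)
print(expm(1.3 * A) @ U)                    # SciPy agrees
```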

Exercise 6.13 Show that

    e^{(t+s)A} = e^{tA}e^{sA}  for s, t > 0,
    e^{tA} = I  for t = 0.

Remark
The generalization of the exponential function has been extended to the case where A, which may be dependent on time, is a bounded or unbounded operator with domain and range in a Banach space. A system {T_t : t > 0} = {e^{tA} : t > 0} of operators is then known as a semigroup of bounded operators with infinitesimal generator A. The concept of semigroups of bounded operators² is of cardinal importance in the study of abstract differential equations and has wide application in the field of the qualitative theory of partial differential equations. A good knowledge of ordinary differential equations, as well as a course in Functional Analysis and Distribution Theory, will make this exciting field accessible to you.

2
The concept of semigroup was named in 1904 and after 1930 the theory of semigroups developed rapidly. One of the most
authentic books on this subject is Functional Analysis and Semigroups, Amer. Math. Soc. Coll. Publ. Vol. 31, Providence
R.I., 1957, by E. Hille and R.S. Phillips. A more recent addition to the literature is A Concise Guide to Semigroups and
Evolution Equations by Aldo Belleni–Morante, World Scientific, Singapore, 1994.

CHAPTER 7

Nonlinear Systems
Existence and Uniqueness Theorem for Linear Systems

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding nonlinear
systems:

• autonomous and non–autonomous systems;

• conditions for existence and uniqueness for solutions of differential equations.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• express a non–autonomous system in autonomous form;

• determine under which conditions the initial value problem Ẋ = A (t) X, X (t0 ) = X0 has a unique
solution.

7.1 NONLINEAR EQUATIONS AND SYSTEMS

We showed in Chapter 1 that the equation

    Ẋ = A(t)X + G(t)   (7.1)

with A an n × n matrix [f_ij],

    G(t) = [ g_1(t), . . . , g_n(t) ]^T  and  X = [ x_1, . . . , x_n ]^T,

is equivalent to the linear first order system

    ẋ_1 = f_11(t)x_1 + f_12(t)x_2 + . . . + f_1n(t)x_n + g_1(t)
    ẋ_2 = f_21(t)x_1 + f_22(t)x_2 + . . . + f_2n(t)x_n + g_2(t)
    ⋮
    ẋ_n = f_n1(t)x_1 + f_n2(t)x_2 + . . . + f_nn(t)x_n + g_n(t).

If for some or other i, ẋ_i cannot be expressed as a linear combination of the components of X (the coefficients of these components may be either functions of t or else constants), we have a nonlinear equation

    ẋ_i = f_i(t, x_1, . . . , x_n) ≡ f_i(t, X).

Systems containing nonlinear equations, i.e. nonlinear systems, are written in vector form as

    Ẋ = F(t, X).   (7.2)

(The matrix notation is obviously not possible.)

From the above it is clear that the system

    Ẋ = F(t, X)

is linear iff F(t, X) = A(t)X + G(t) for some matrix A(t) and some vector function G(t).

A special case of nonlinear systems is the class of autonomous systems.

Definition 7.1 The system Ẋ = F(t, X) is said to be autonomous if F is independent of t.

An autonomous system is, therefore, of the form Ẋ = F(X). Autonomous systems have certain properties which general (non–autonomous) systems do not have.
which are not generally valid.

Example 7.2

(i) The system Ẋ = AX, with A a matrix with constant entries, is autonomous.

(ii) The system

        ẋ_1 = x_1x_2
        ẋ_2 = tx_1 + x_2

    is a non–autonomous system.

It is always possible to express a non–autonomous system in autonomous form by introducing a spurious variable.

Example 7.3
Consider the non–autonomous equation

    ẋ = t²x − e^t.

Put x_1 = t and x_2 = x. Then ẋ_1 = 1 and ẋ_2 = ẋ = t²x − e^t, i.e. ẋ_2 = x_1²x_2 − e^{x_1}.
This yields the system

    Ẋ = [ ẋ_1 ] = [ 1                 ],
        [ ẋ_2 ]   [ x_1²x_2 − e^{x_1} ]

the right–hand side of which is independent of t, so the system is autonomous.



More generally we consider the non–autonomous system

    Ẋ = F(t, X)  with  X = [ x_1, . . . , x_n ]^T,  F = [ f_1, . . . , f_n ]^T.

Put

    Y = [ t, x_1, . . . , x_n ]^T,  G = [ 1, f_1, . . . , f_n ]^T.

Then

    Ẏ = [ 1, ẋ_1, . . . , ẋ_n ]^T = [ 1, f_1, . . . , f_n ]^T = G(Y)

and the system Ẏ = G(Y) is autonomous.

Example 7.4
Write the second-order equation

    ÿ − 2tẏ + y = 0

as an autonomous system of first-order equations.
Solution Let x_1 = t, x_2 = y, x_3 = ẏ. Then ẋ_1 = 1, ẋ_2 = ẏ = x_3 and ẋ_3 = ÿ = 2tẏ − y = 2x_1x_3 − x_2. The autonomous system is therefore

    ẋ_1 = 1
    ẋ_2 = x_3
    ẋ_3 = 2x_1x_3 − x_2.
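A numerical check (ours, not the guide's, and assuming SciPy) confirms that the autonomous form reproduces the original equation: the component x_2 of the enlarged system agrees with y from a direct integration of ÿ = 2tẏ − y.

```python
import numpy as np
from scipy.integrate import solve_ivp

def autonomous(s, x):                 # note: the right-hand side never refers to s
    x1, x2, x3 = x
    return [1.0, x3, 2.0 * x1 * x3 - x2]

def direct(t, y):                     # the original equation y'' = 2 t y' - y
    return [y[1], 2.0 * t * y[1] - y[0]]

a = solve_ivp(autonomous, (0.0, 1.0), [0.0, 1.0, 0.0], rtol=1e-10, atol=1e-12)
d = solve_ivp(direct, (0.0, 1.0), [1.0, 0.0], rtol=1e-10, atol=1e-12, t_eval=a.t)
print("max |x2 - y|:", np.max(np.abs(a.y[1] - d.y[0])))   # agreement to roundoff level
```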
Exercise 7.5

(1) Which of the following are linear? autonomous?

    (a) ẋ = sin tx   (b) ẋ = 1/x   (c) ẍ − x = t   (d) ẋ_1 = t, ẋ_2 = x_1 + x_2
    (e) ẋ_1 = x_1, ẋ_2 = x_2   (f) ẋ = t²x + t   (g) ẋ_1 = 1/t, ẋ_2 = x_1

(2) Write each nonautonomous system of problem 1 as an autonomous system.

(3) Write

        d³y/dt³ + aÿ + bẏ + cy = αt

    as an autonomous system.

It is seldom possible to solve nonlinear equations explicitly¹. Certain principles applicable to linear systems are also not valid in the case of nonlinear equations, for instance the principle of uniqueness of solutions and the principle of superposition of solutions. The following counterexamples illustrate the truth of these statements.

¹One can, however, obtain important information on certain characteristics of the solution of a nonlinear differential equation without actually solving the equation. One method is the method of phase portraits in the phase space. Nonlinear first–order autonomous systems of differential equations are also studied by investigating the points of equilibrium of the so–called linearized system. This subject is dealt with in Chapter 8.

Exercise 7.6

Show that the initial value problem ẏ = (3/2)y^{1/3}, y(0) = 0 has an infinite number of solutions.
Example 7.7
The principle of superposition of solutions is not valid for the equation ẏ = (t/2)y^{−1}.

Differentiation confirms that y(t) = (t²/2 + c)^{1/2} satisfies, for all c, the equation 2yy′ = t, which is equivalent to y′ = t/(2y). However,

    (t²/2 + c_0)^{1/2} + (t²/2 + c_1)^{1/2}

is in no case a solution, in view of

    2{ (t²/2 + c_0)^{1/2} + (t²/2 + c_1)^{1/2} } { t/(2(t²/2 + c_0)^{1/2}) + t/(2(t²/2 + c_1)^{1/2}) }

        = t + t + t( (t²/2 + c_0)/(t²/2 + c_1) )^{1/2} + t( (t²/2 + c_1)/(t²/2 + c_0) )^{1/2}

        ≠ t  for all c_0, c_1.

It is, indeed, true that the validity of the principle of superposition would imply linearity, thus leading to a contradiction. We prove this:

Suppose that F (t, X) is a function such that every initial value problem Ẋ = F (t, X) has a solution.
Suppose further that the principle of superposition is valid, i.e. if Xi (t) is a solution of Ẋ = F (t, X) for
i = 1, 2, then Z = aX1 + bX2 is a solution of

Ż = F (t, Z) , Z (t0 ) = aX01 + bX02 ,

where we use the notation X0i ≡ Xi (t0 ) .

We show that F (t, Z) is a linear function of Z, i.e.

F (t, aX1 + bX2 ) = aF (t, X1 ) + bF (t, X2 ) .

From our assumption we have

F (t, X1 ) = Ẋ1 (t) ,


F (t, X2 ) = Ẋ2 (t) ,

and

    F(t, aX_1 + bX_2) = (d/dt)(aX_1(t) + bX_2(t)) = aẊ_1(t) + bẊ_2(t) = aF(t, X_1) + bF(t, X_2).

As far as the solution of nonlinear equations is concerned, we note that the Method of Separation of Variables can, in the case of simple one–dimensional equations, often be applied successfully.

Example 7.8 Determine the solution of

    ẏ = −(1 − t²)^{1/2}/(5 + y)^{1/2},  −1 ≤ t ≤ 1,  y > −5.   (7.3)

Solution
By separation of variables, equation (7.3) is equivalent to

    (1 − t²)^{1/2} dt + (5 + y)^{1/2} dy = 0.

By integration we obtain that the solution y(t) is given implicitly by

    (1/2)t(1 − t²)^{1/2} + (1/2) arcsin t + (2/3)(5 + y)^{3/2} = c,  −1 ≤ t ≤ 1.   (7.4)

Note that in order to obtain a unique solution, we consider only principal values of arcsin, i.e. those values between −π/2 and π/2, otherwise (7.4) would define a multi–valued function.
Exercise 7.9

(a) Show that ẏ = −t + (2y + t²)^{1/2}, y(1) = −1/2, is solved by y(t) = −t²/2 and by y_1(t) = −t + 1/2.

(b) Separate variables to find solutions of the following equations:

    (i) tẏ = y   (ii) ẏ = e^{t+y}   (iii) ẏ − f(t)y = 0.

7.2 NUMERICAL SOLUTIONS OF DIFFERENTIAL EQUATIONS


If a differential equation cannot be solved by means of a standard technique, a numerical solution can still
be obtained by means of techniques of approximation. By a numerical solution of a differential equation
in x, a function of t, is meant a table of values such that for every value of t, a corresponding value of
x (t) is given. Such a table always has a column giving the magnitude of the error involved. The theory
of numerical solutions covers a wide field and is outside the scope of this course. Should the reader be
interested, an easily readable chapter on numerical solutions is found in Z&W, Chapter 9, as well as in
Ordinary Differential Equations by M. Tenenbaum and H. Pollard (see references).
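The simplest such technique is Euler's method, in which the table is generated step by step from x_{k+1} = x_k + h f(t_k, x_k). Although the theory is outside the scope of this course, the flavour of such a table is easy to convey: the Python sketch below (our own illustration, applied to ẋ = x, x(0) = 1, whose exact solution is e^t) produces exactly such a table, error column included.

```python
import numpy as np

def euler(f, t0, x0, h, steps):
    """Tabulate an Euler approximation of x' = f(t, x), x(t0) = x0 (a sketch)."""
    t, x, rows = t0, x0, []
    for _ in range(steps + 1):
        rows.append((t, x))
        x, t = x + h * f(t, x), t + h    # one Euler step
    return rows

for t, x in euler(lambda t, x: x, 0.0, 1.0, 0.1, 10):
    print(f"t = {t:4.1f}   x ≈ {x:8.5f}   error = {abs(np.exp(t) - x):8.5f}")
```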

It is interesting to note that a computer can yield a “solution” for a differential equation even when
theoretically no solution exists. An example is the boundary value problem

f¨ (x) + f (x) = sin x, 0 < x < π, f (0) = f (π) = 0.



For this reason it is necessary to pay particular attention to those conditions under which a unique solution
of a system of differential equations exists. We shall confine ourselves to the case of linear systems of
differential equations.

7.3 EXISTENCE AND UNIQUENESS THEOREM FOR LINEAR SYSTEMS OF


DIFFERENTIAL EQUATIONS

By putting A (t) X = F (t, X) , the problem

Ẋ = A (t) X,
X (t0 ) = X0 ,

reduces to the problem

Ẋ = F (t, X) ,
X (t0 ) = X0 .

The theorem will, therefore, be proved by applying the Existence and Uniqueness Theorem for the differ-
ential equation
ẋ = f (t, x) , x (t0 ) = x0 .

Theorem 7.10 If F is continuous in the region |t − t_0| ≤ T, ||X − X_0|| ≤ R, and F satisfies a Lipschitz condition, i.e. a constant K > 0 exists such that

    ||F(t, U) − F(t, V)|| ≤ K||U − V||   (7.5)

for |t − t_0| ≤ T and U, V in ||X − X_0|| ≤ R, then the initial value problem

    Ẋ = F(t, X)
    X(t_0) = X_0

has one and only one solution in |t − t_0| < δ.

Remark
(1) In the above theorem the constant δ is defined as δ = min(T, R/M), with the constant M an upper bound of ||F||. (See the note later on.)

(2) If the range of F is the Banach space Eⁿ, i.e. the n–dimensional Euclidean space with the Euclidean metric (recall that a Banach space is a normed space which is complete), then ||F(t, X)|| is precisely |F(t, X)|.

(3) If A(t) is an n × n matrix with real entries, and X an n–tuple of real numbers, then A(t)X is an n–dimensional vector with real components, so that A(t)X is an element of the Banach space Rⁿ. As in Chapter 4, Section 5, we define

    ||A(t)|| = ( Σ_{i,j=1}^n (f_ij(t))² )^{1/2}.   (7.6)

Since the role of F(t, X) is played by A(t)X, we must have A(t)X continuous for all t in |t − t_0| ≤ T and every X in ||X − X_0|| ≤ R, with R an arbitrary constant.

It is sufficient to require that A(t) should be continuous in |t − t_0| ≤ T, i.e. that the f_ij(t) should be continuous for i = 1, . . . , n, j = 1, . . . , n. The function X ↦ X is, indeed, continuous on every “interval” ||X − X_0|| ≤ R. Since the product of continuous functions is continuous, it follows that A(t)X is continuous in |t − t_0| ≤ T and ||X − X_0|| ≤ R.

For the proof of Theorem 7.10 we need

Lemma 7.11 If

    A(t) = [f_ij(t)],  X = [ x_1, . . . , x_n ]^T,

then we have the inequality

    ||A(t)X|| ≤ ||A(t)|| ||X||.   (7.7)

Proof By putting

    A(t) = [ f_11 . . . f_1n ]   [ f_1 ]
           [ f_21 . . . f_2n ] = [ f_2 ]
           [  .          .  ]    [  .  ]
           [ f_n1 . . . f_nn ]   [ f_n ],

i.e. f_i = (f_i1, f_i2, . . . , f_in), i = 1, . . . , n, we have

    AX = [ f_1 · X ]
         [ f_2 · X ]
         [   .     ]
         [ f_n · X ]

where · indicates the ordinary scalar product. Then, by definition,

    ||A(t)X||² = (f_1 · X)² + . . . + (f_n · X)²
               ≤ ||f_1||²||X||² + . . . + ||f_n||²||X||²   (Schwarz’ Inequality)
               = ( ||f_1||² + ||f_2||² + . . . + ||f_n||² ) ||X||².

Since

    ||f_i||² = f_i1² + f_i2² + . . . + f_in² = Σ_{j=1}^n (f_ij)²,

we have

    ||f_1||² + . . . + ||f_n||² = Σ_{j=1}^n (f_1j(t))² + Σ_{j=1}^n (f_2j(t))² + . . . + Σ_{j=1}^n (f_nj(t))²
                                = Σ_{i,j=1}^n (f_ij(t))²
                                = ||A(t)||².

It follows that

    ||A(t)X|| ≤ ||A(t)|| ||X||.  □
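A quick empirical check of the lemma (ours, with randomly generated data) is given below; here ||A|| is the Frobenius-type norm of (7.6) and ||X|| the Euclidean norm:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):                         # random trials of Lemma 7.11
    A = rng.standard_normal((4, 4))
    X = rng.standard_normal(4)
    lhs = np.linalg.norm(A @ X)            # ||A X||
    rhs = np.linalg.norm(A, 'fro') * np.linalg.norm(X)
    print(f"{lhs:8.4f} <= {rhs:8.4f}: {lhs <= rhs}")
```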

In proving Theorem 7.10, we finally make use of the fact that if a function f is continuous in |t − t_0| ≤ T, ||X − X_0|| ≤ R, then a constant M, independent of t and X, exists such that ||f|| ≤ M; indeed, a function which is continuous on a closed, bounded set is bounded on that set. For the proof, the reader may refer to, among others, “Mathematical Analysis” by Apostol, p. 83².

Theorem 7.12 If A (t) is continuous in |t − t0 | ≤ T, the initial value problem

Ẋ = A (t) X
X (t0 ) = X0

has a unique solution in |t − t0 | < δ.

Proof. From the assumption that A (t) is continuous in |t − t0 | ≤ T we have, according to our previous
remarks, that A (t) X is continuous in |t − t0 | ≤ T and ||X − X0 || ≤ R for every R. We also have, from a
previous remark, that ||A (t)| ≤ M | in |t − t0 | ≤ T.
Therefore

||A (t) U − A (t) V|| = ||A (t) (U − V)||


≤ ||A (t)|| ||U − V||
≤ M ||U − V|| .

Consequently A (t) X = F (t, X) satisfies the Lipschitz condition (7.5) with K = M. All the conditions of
Theorem 7.10 are now satisfied. We can conclude that the problem

Ẋ = A (t) X
X (t0 ) = X0

has a unique solution in |t − t0 | < δ. This completes the proof of the theorem. 

2
Mathematical Analysis, T.M. Apostol, Addison–Wesley Publishing Co., London.

CHAPTER 8

Qualitative Theory of Differential Equations


Stability of Solutions of Linear Systems
Linearization of Nonlinear Systems

Objectives for this Chapter


The main objective of this chapter is to gain an understanding of the following concepts regarding the
stability of solutions of linear systems and the linearization of nonlinear systems:

• autonomous systems;

• critical point;

• periodic solutions;

• classification of a critical point: stable/unstable node, saddle, center, stable/unstable spiral point,
degenerate node;

• stability of critical point;

• linearization and local stability;

• Jacobian matrix;

• phase–plane method.

Outcomes of this Chapter


After studying this chapter the learner should be able to:

• find the critical points of plane autonomous systems;

• solve certain nonlinear systems by changing to polar coordinates;

• apply the stability criteria to determine whether a critical point is locally stable or unstable;

• classify the critical points using the Jacobian matrix.



The final chapter of this module is devoted to the qualitative theory of differential equations. In this branch of the theory of differential equations, techniques are developed which will enable us to obtain important information about the solutions of differential equations without actually solving them. We will, for instance, be able to decide whether a solution is stable, or what the long time behaviour of the solution is, without even knowing the form of the solution. This is extremely useful in view of the fact that it is often very difficult, or even impossible, to determine an exact solution of a differential equation.

Study Chapter 10 of Z&W, work through all the examples and do the exercises at the end of
each section.

Appendix A
Symmetric Matrices

(Taken from Goldberg & Schwartz - see Preface for full reference.)
You should by now be familiar with the following basic properties of matrices:

(1) A is symmetric if A^T = A, where A^T is the transpose of A;

(2) (A^T)^T = A;

(3) (AB)^T = B^T A^T.

Suppose A is an n × n matrix with only real-valued entries. For any given vector u = [u_1, u_2, . . . , u_n]^T, the vector ū is the vector whose entries are the complex conjugates of the entries of u, that is ū = [ū_1, ū_2, . . . , ū_n]^T. Obviously ū = u if u has only real entries.
Lemma A.1 For any vectors u and v in Cⁿ and any real symmetric matrix A,

    u^T Av = v^T Au.   (A.1)

Proof The product Av is a column matrix. Since u^T is a row matrix, u^T Av is a matrix with one entry. Therefore (u^T Av)^T = u^T Av. But (u^T Av)^T = v^T A^T (u^T)^T = v^T Au, since A^T = A. Hence the result follows. □

Lemma A.2 If λ is an eigenvalue of A with corresponding eigenvector u, then ū is an eigenvector of A corresponding to the eigenvalue λ̄.

Proof By hypothesis, Au = λu. Taking complex conjugates on both sides, and using the fact that A has real entries, gives Aū = λ̄ū. Hence ū is an eigenvector of A corresponding to the eigenvalue λ̄. □

Lemma A.3 A symmetric matrix has only real eigenvalues.

Proof Suppose A is a symmetric matrix and let u be an eigenvector of A which corresponds to the eigenvalue λ. Let v = ū in (A.1). From the hypothesis Au = λu we have

    ū^T Au = ū^T(λu) = λū^T u,

while from Lemma A.2,

    u^T Aū = u^T(λ̄ū) = λ̄u^T ū,

so that from Lemma A.1 it follows that

    λū^T u = λ̄u^T ū.

Since u^T ū and ū^T u are 1 × 1 matrices with the single entry |u_1|² + |u_2|² + . . . + |u_n|², and since u ≠ 0, this entry is positive. Hence λū^T u = λ̄u^T ū implies λ = λ̄, which means that λ is real. □
Theorem A.4 If A is a real n × n symmetric matrix then there exist n linearly independent eigenvectors of A, each belonging to Rⁿ (and hence spanning Rⁿ).

Theorem A.5 If A is symmetric then the initial value problem

    ẋ = Ax,  x(t_0) = x_0

has a solution for every x_0 ∈ Rⁿ.



Appendix B
Refresher notes on Methods of Solution of One-dimensional Differential Equations

I. First order Differential Equations

A few important first order differential equations are Linear equations, Homogeneous equations, Exact equations and the Bernoulli equation.

A. Linear Equations
General form:

    dy/dx + p(x)y = q(x)

where p(x) and q(x) are continuous functions in some or other interval.
Method of Solution:
Multiply by the integrating factor (I.F.) e^{∫p(x)dx}. This yields

    e^{∫p(x)dx} ( dy/dx + p(x)y ) = e^{∫p(x)dx} q(x),

i.e.

    (d/dx)( y e^{∫p(x)dx} ) = e^{∫p(x)dx} q(x).

By integration we obtain

    y e^{∫p(x)dx} = ∫ { e^{∫p(x)dx} q(x) } dx + k.

Example
Solve

    dx/dt − x/t = 1.   (1)

The integrating factor is

    I.F. = e^{−∫dt/t} = e^{−ln t} = 1/t.

Multiplying (1) by 1/t gives

    (1/t) dx/dt − x/t² = 1/t,

i.e.

    (d/dt)(x/t) = 1/t.

Therefore

    x/t = ln(tc),  c an arbitrary constant,

i.e.

    x = t ln(tc).
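Computer algebra systems carry out this procedure automatically. A SymPy sketch (our own check, assuming the library is available) confirms the answer:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.Function('x')
# dx/dt - x/t = 1; dsolve applies the integrating-factor method internally
print(sp.dsolve(sp.Eq(x(t).diff(t) - x(t) / t, 1), x(t)))
# output is equivalent to x = t (ln t + C), i.e. x = t ln(tc)
```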

B. The Bernoulli equation
Definition
The Bernoulli equation is the differential equation

    dy/dx + p(x)y = q(x)yⁿ,  n ≠ 0, 1,

where p(x) and q(x) are continuous functions in some or other interval.
Method of Solution:
By putting z = y^{1−n} the equation is reduced to a linear D.E. in z.
Example
Find the solution of

    x dy/dx − y/(2 ln x) = y².

In this case

    p(x) = −1/(2x ln x),  q(x) = 1/x,  n = 2.

By putting

    z = y^{−1},

we obtain the linear D.E.

    dz/dx + z/(2x ln x) = −1/x.

Multiplication by

    e^{∫dx/(2x ln x)} = e^{(1/2) ln(ln x)} = (ln x)^{1/2}

yields

    (d/dx)( z(ln x)^{1/2} ) = −(ln x)^{1/2}/x.

Therefore

    z(ln x)^{1/2} = −(2/3)(ln x)^{3/2} + c

or

    z = −(2/3) ln x + c(ln x)^{−1/2},

i.e.

    y = 1/( −(2/3) ln x + c(ln x)^{−1/2} ).
− 32

C. Homogeneous equations
Definition
A homogeneous differential equation is an equation of the form

    dy/dx = g(x, y)/h(x, y)   (1)

where the functions g and h are homogeneous of the same degree. We recall that a function g(x, y) is homogeneous of degree n if g(αx, αy) = αⁿg(x, y).

Method of Solution:
By putting y = vx, (1) is reduced to a D.E. in which the variables are separable.

Example
Find the solution of

    dy/dx = (x³ + y³)/(xy²).   (2)

Here g(x, y) = x³ + y³ and h(x, y) = xy² are homogeneous of degree 3. Put y = vx. Then (2) becomes

    v + x dv/dx = 1/v² + v,

whence

    v² dv = (1/x) dx.

Integration yields

    v³/3 = ln |x| + c,

i.e.

    y³/(3x³) = ln |x| + c.

Therefore

    y = x(3 ln |x| + c)^{1/3}.
D. Exact Differential Equations

Definition
A D.E. of the form

    M(x, y)dx + N(x, y)dy = 0   (1)

is called exact if there exists a function f(x, y) of which the total differential

    df = (∂f/∂x)dx + (∂f/∂y)dy

satisfies

    df = M(x, y)dx + N(x, y)dy.

Theorem 1 below provides a method to check whether a D.E. is exact, while Theorem 2 contains a formula for the solution.

Theorem 1
The D.E. M(x, y)dx + N(x, y)dy = 0 is exact if and only if ∂M/∂y = ∂N/∂x.

Theorem 2
The solution of (1) is implicitly given by

    f(x, y) = ∫_{x_0}^x M(t, y_0)dt + ∫_{y_0}^y N(x, s)ds = C,

where (x_0, y_0) is any point in the region in which the functions M, N, ∂M/∂y and ∂N/∂x are continuous. If these functions are continuous in (0, 0), it is convenient to choose (x_0, y_0) = (0, 0).
Example
Solve

    (3x² + 4xy²)dx + (2y − 3y² + 4x²y)dy = 0.

Here

    M(x, y) = 3x² + 4xy²  and  N(x, y) = 2y − 3y² + 4x²y.

Therefore

    ∂M/∂y = 8xy = ∂N/∂x,

so that the D.E. is exact. The solution is implicitly given by

    ∫_0^x 3t² dt + ∫_0^y (2s − 3s² + 4x²s)ds = c,

i.e.

    x³ + y² − y³ + 2x²y² = c.
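The exactness test and the solution formula of Theorem 2 can both be verified with SymPy (our own check, assuming the library is available):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 3*x**2 + 4*x*y**2
N = 2*y - 3*y**2 + 4*x**2*y
print(sp.diff(M, y) == sp.diff(N, x))      # True: the equation is exact

# Theorem 2 with (x0, y0) = (0, 0):
f = sp.integrate(M.subs(y, 0), (x, 0, x)) + sp.integrate(N, (y, 0, y))
print(sp.expand(f))                         # x**3 + 2*x**2*y**2 - y**3 + y**2
```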

II. Higher order differential equations with constant coefficients

General Form:

    a_n dⁿy/dxⁿ + a_{n−1} d^{n−1}y/dx^{n−1} + . . . + a_1 dy/dx + a_0 y = F(x)   (1)

where the a_k are constants for k = 0, . . . , n.
By putting

    d/dx = D,  . . . ,  dⁿ/dxⁿ = Dⁿ,

(1) is equivalent to

    (a_n Dⁿ + a_{n−1} D^{n−1} + . . . + a_0)y = F(x),

i.e.

    f(D)y = F(x),   (2)

with f(D) a polynomial in D of degree n. If F(x) = 0 ∀ x, we have the homogeneous equation f(D)y = 0. Otherwise (2) is non–homogeneous.

A. Properties of the Differential Operator D, applied to some Standard Functions

(1) f(D)[e^{ax}] = f(a)e^{ax}.

(2) f(D)[e^{ax}V(x)] = e^{ax}f(D + a)[V(x)]   (Shift Property).

(3) f(D²)[sin ax] = f(−a²)[sin ax]  and  f(D²)[cos ax] = f(−a²)[cos ax].

Proof
We prove (3) only, for f(D²) = Σ_{k=0}^2 c_k D^{2k}, where the c_k, k = 0, 1, 2, are constants:

    f(D²)[sin ax] = Σ_{k=0}^2 c_k D^{2k}[sin ax]
                  = Σ_{k=0}^2 c_k(−a²)^k sin ax   (since D²(sin ax) = D(a cos ax) = −a² sin ax)
                  = f(−a²) sin ax   (by the definition of f(D²)).

B. Properties of the Inverse Operator 1/f(D) applied to Standard Functions

(1) (1/f(D))[e^{ax}] = e^{ax}/f(a), provided f(a) ≠ 0.

(2) (1/f(D))[e^{ax}V(x)] = e^{ax}(1/f(D + a))[V(x)].
    This property is known as the Shift Property and is particularly useful whenever f(a) = 0.

(3) (1/f(D², D))[cos ax] = (1/f(−a², D))[cos ax]  and  (1/f(D², D))[sin ax] = (1/f(−a², D))[sin ax],
    provided f(−a², D) ≠ 0.

    Example

        (1/(D² + a²))[sin bx] = sin bx/(a² − b²), provided b ≠ a.

(4) (1/(D² + a²))[sin ax] = −(x/2a) cos ax
    (1/(D² + a²))[cos ax] = +(x/2a) sin ax.

(5) (1/(λD + µ))[sin ax] = −(1/(λ²a² + µ²))(λa cos ax − µ sin ax)
    (1/(λD + µ))[cos ax] = (1/(λ²a² + µ²))(λa sin ax + µ cos ax),  λ, µ real.

Proof
We prove only (1) and (5).

(1) We have to show that

        f(D){ e^{ax}/f(a) } = e^{ax}.

    From f(D)[e^{ax}] = f(a)e^{ax}, it follows that

        f(D){ e^{ax}/f(a) } = (1/f(a)) f(D)[e^{ax}] = e^{ax}.

(5) By multiplying by (λD − µ)/(λD − µ), it follows that

        (1/(λD + µ))[sin ax] = ((λD − µ)/(λ²D² − µ²))[sin ax]
                             = ((λD − µ)/(−λ²a² − µ²))[sin ax]   (by (3))
                             = −(λa cos ax − µ sin ax)/(λ²a² + µ²)   (by differentiation).

Solution of f(D)y = F(x)

The solution is given by y(x) = y_C.F. + y_P.I., where y_C.F. denotes the complementary function, which solves f(D)y = 0, and y_P.I. a particular solution, the so–called Particular Integral.
On finding the Complementary Function
The auxiliary equation f(m) = 0 is formed from f(D); solve this for m. Suppose f(m) = 0 has

(i) n different real roots m_1, . . . , m_n. Then

        y_C.F.(x) = A_1e^{m_1x} + A_2e^{m_2x} + . . . + A_ne^{m_nx}

    with A_i (i = 1, . . . , n) arbitrary constants.

(ii) n coincident real roots m. Then

        y_C.F.(x) = (A_1 + A_2x + . . . + A_nx^{n−1})e^{mx}.

(iii) We first assume that f(D) is of degree 2 (i.e. f(m) = 0 has only 2 roots). Suppose the roots are complex conjugates ξ ± ηi (ξ, η real). Then

        y_C.F.(x) = e^{ξx}(A_1 cos ηx + A_2 sin ηx).

    Suppose for f(D) of degree 2n the roots of f(m) = 0 occur in complex conjugate pairs α_r ± iβ_r, r = 1, . . . , n. Then

        y_C.F.(x) = Σ_{r=1}^n e^{α_rx}(A_r cos β_rx + B_r sin β_rx),

    with A_r, B_r for r = 1, . . . , n arbitrary constants. If f(m) = 0 has a k–fold pair of complex roots, i.e.

        (m − (α + iβ))^k (m − (α − iβ))^k = 0,

    then

        y_C.F.(x) = e^{αx}[A_1 cos βx + B_1 sin βx + x(A_2 cos βx + B_2 sin βx) + . . . + x^{k−1}(A_k cos βx + B_k sin βx)].

Note that y_C.F.(x) is the solution of f(D)y = 0.

Example 1

(i) Solve

        (D² − 7D + 12)y = 0.

    Here f(m) = m² − 7m + 12. Put f(m) = 0. Then m² − 7m + 12 = 0, i.e.

        (m − 3)(m − 4) = 0.

    Therefore

        y = y_C.F. = Ae^{3x} + Be^{4x}.

(ii) Solve

        (8D³ − 12D² + 6D − 1)y = 0.

    Put f(m) = 0. Now

        8m³ − 12m² + 6m − 1 = 0
        ⟺ 8m³ − 1 − 6m(2m − 1) = 0
        ⟺ (2m − 1)(4m² + 2m + 1) − 6m(2m − 1) = 0
        ⟺ (2m − 1)(4m² − 4m + 1) = 0
        ⟺ (2m − 1)³ = 0.

    Therefore

        y = y_C.F. = (A + Bx + Cx²)e^{x/2}.

(iii) Solve

        (D² − 2D + 5)y = 0.

    Now

        m² − 2m + 5 = 0 ⟺ (m − 1)² − 4i² = 0 ⟺ m − 1 = ±2i, i.e. m = 1 ± 2i.

    Therefore

        y = y_C.F. = e^x(A cos 2x + B sin 2x),

    with A, B arbitrary constants.

(iv) Solve

        (D − 4)³(D² + 9)²y = 0.

    The roots of the auxiliary equation are

        m = 4, 4, 4,  m = ±3i, ±3i.

    Therefore

        y = y_C.F. = (A_1 + A_2x + A_3x²)e^{4x} + (C_4 cos 3x + C_5 sin 3x) + x(C_6 cos 3x + C_7 sin 3x),

    with A_i (i = 1, 2, 3), C_j (j = 4, 5, 6, 7) arbitrary constants.

On finding the Particular Integral

For the D.E. f(D)y = F(x), y_P.I.(x) is given by y_P.I.(x) = (1/f(D))F(x). Now apply B.

Example 2

(i) Determine y_P.I. for the D.E. (D² + 6D + 9)y = 50e^{−3x}. In this case we have

        f(D) = D² + 6D + 9 = (D + 3)²,  so that f(−3) = 0.

    It now follows from B(2), by writing

        (1/(D + 3)²)[50e^{−3x}]  as  (1/(D + 3)²)[e^{−3x} · 50e^{0x}]

    (i.e. V(x) = 50e^{0x} = 50), that

        y_P.I. = 50e^{−3x}(1/((D − 3) + 3)²)[1] = 50e^{−3x}(1/D²)[1] = 50e^{−3x} · x²/2.

    Note that 1/D^k is interpreted as integration k times.

(ii) Find y_P.I. for

        (D² + 9)y = 40 sin 4x.

    By B(3), we have

        y_P.I. = 40 sin 4x/(−16 + 9) = −(40 sin 4x)/7.

(iii) Find y_P.I. for

        (D² + 9)y = sin 3x.

    By B(4) we have

        y_P.I. = −(x/6) cos 3x.

Find y_P.I. for

    (D³ − 2D² + 2D + 7)y = sin 2x.

By B(3) we have

    y_P.I. = (1/(D(−4) − 2(−4) + 2D + 7))[sin 2x] = (1/(−2D + 15))[sin 2x]

           = ((−2D − 15)/(4D² − 225))[sin 2x]

           = −(2D + 15)(1/(−16 − 225))[sin 2x]   (by B(3))

           = (−4 cos 2x − 15 sin 2x)/(−241)   (by differentiation).

Therefore

    y_P.I. = (4 cos 2x + 15 sin 2x)/241.
If F(x) is not one of the standard functions in B (or even if it is), then y_P.I. may be obtained with the aid of

The Method of Undetermined Coefficients

Example 1
Find y_P.I. for

    (D² + 1)y = 2 + x².   (3)

Suppose

    y_P.I. = a + bx + cx².

Then

    Dy = y′ = b + 2cx,  D²y = 2c.

Substitute in (3). We then have

    2c + a + bx + cx² = 2 + x².   (4)

Compare coefficients of powers of x in (4):

    x²: c = 1
    x : b = 0
    x⁰: 2c + a = 2 ⇒ a = 0.

Therefore

    y_P.I.(x) = x².
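The result is confirmed by SymPy's dsolve (our own check, assuming the library is available), which returns the complementary function together with the particular integral x²:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(y(x).diff(x, 2) + y(x) - 2 - x**2, y(x)))
# Eq(y(x), C1*sin(x) + C2*cos(x) + x**2)
```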

Example 2
Solve

    (3D² + 3D)x = e^t cos t.   (5)

The auxiliary equation 3m² + 3m = 0 gives m = 0, m = −1. Therefore

    x_C.F.(t) = Ae^{−t} + B.

Suppose

    x_P.I. = ae^t cos t + be^t sin t.

Then

    Dx_P.I. = ae^t(cos t − sin t) + be^t(sin t + cos t),
    D²x_P.I. = −2ae^t sin t + 2be^t cos t.

Substitute in (5):

    (3D² + 3D)x_P.I. = (−9a + 3b)e^t sin t + (3a + 9b)e^t cos t.

Compare coefficients of e^t cos t and e^t sin t:

    cos t: 3a + 9b = 1
    sin t: −9a + 3b = 0.

This yields

    a = 1/30,  b = 1/10.

Therefore

    x(t) = x_C.F.(t) + x_P.I.(t) = Ae^{−t} + B + (1/30)e^t cos t + (1/10)e^t sin t.
