
First Order Homogeneous Linear Systems with Constant Coefficients

Department of Mathematics
IIT Guwahati
SHB/SU

SHB/SU MA-102 (2020)


Homogeneous linear systems with constant coefficients

Consider the homogeneous system

    X'(t) = AX(t),                                   (1)

where A is a real n × n matrix.

Goal: To find a fundamental solution set for (1).

We seek solutions of the form X(t) = e^{λt} v, where λ ∈ R and v ∈ R^n \ {0}. Then

    X'(t) = λ e^{λt} v   and   AX(t) = e^{λt} Av,

so that X'(t) = AX(t) ⇔ Av = λv. Thus X(t) = e^{λt} v solves X'(t) = AX(t) if and only if λ is an eigenvalue of A with v a corresponding eigenvector.

Q. Can we obtain n linearly independent solutions to X'(t) = AX(t) by finding all the eigenvalues and eigenvectors of A?
Finding the general solution to X'(t) = AX(t)

Theorem: Suppose A = (a_ij) is an n × n matrix. Let λ_i ∈ R be eigenvalues with corresponding eigenvectors v_i ∈ R^n, i = 1, . . . , n. Then

    {e^{λ1 t} v1, e^{λ2 t} v2, . . . , e^{λn t} vn}

is a fundamental solution set on R for X'(t) = AX(t) if and only if {v1, . . . , vn} is a linearly independent set. In that case the general solution (GS) of X'(t) = AX(t) is

    X(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2 + · · · + cn e^{λn t} vn,

where c1, . . . , cn are arbitrary constants.

Proof. The Wronskian is

    W(t) = det[e^{λ1 t} v1, . . . , e^{λn t} vn] = e^{(λ1 + ··· + λn)t} det[v1, . . . , vn] ≠ 0.

Thus {e^{λ1 t} v1, e^{λ2 t} v2, . . . , e^{λn t} vn} is a fundamental solution set, and hence the GS is

    X(t) = c1 e^{λ1 t} v1 + c2 e^{λ2 t} v2 + · · · + cn e^{λn t} vn.
Example: Find the GS of X'(t) = AX(t), where

    A = [ 2  −3 ]
        [ 1  −2 ].

The eigenvalues are λ1 = 1 and λ2 = −1. The corresponding eigenvectors (with free parameter r = 1) are

    v1 = (3, 1)^T   and   v2 = (1, 1)^T.

The GS is

    X(t) = c1 e^{t} (3, 1)^T + c2 e^{−t} (1, 1)^T.
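As a quick numerical check of this example (a NumPy sketch, not part of the original slides), we can recompute the eigenpairs and verify that the resulting GS satisfies X'(t) = AX(t) via a finite difference:

```python
import numpy as np

# The example matrix from the slides.
A = np.array([[2.0, -3.0],
              [1.0, -2.0]])

# Eigenvalues and eigenvectors; NumPy normalizes eigenvectors to unit
# length, so the columns of V are scalar multiples of (3,1) and (1,1).
lam, V = np.linalg.eig(A)

def X(t, c=(1.0, 1.0)):
    # GS with a particular choice of constants c1, c2.
    return sum(ci * np.exp(li * t) * V[:, i]
               for i, (ci, li) in enumerate(zip(c, lam)))

# Centered finite difference should match A X(t).
t, h = 0.7, 1e-6
dX = (X(t + h) - X(t - h)) / (2 * h)
assert np.allclose(dX, A @ X(t), atol=1e-5)
```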



The Matrix Exponential

We need a unified approach for solving the system X'(t) = AX(t) that works in all cases. For this we extend techniques for scalar differential equations to systems.

For example, a GS to x'(t) = ax(t), where a is a constant, is x(t) = c e^{at}. Analogously, we shall show that a GS to the system

    X'(t) = AX(t),

where A is a constant matrix, is

    X(t) = e^{At} C.

Task: To define the matrix exponential e^{At}.

Given an n × n matrix A and t ∈ R, consider the matrix-valued power series

    ∑_{k=0}^∞ A^k t^k / k!.
The Matrix Exponential

Let M be an n × n matrix. The function

    ‖M‖ = max_{|X| ≤ 1} |MX|,

where |X| = sqrt(∑_{i=1}^n x_i^2), is called a norm on such matrices.

Exercise: Prove that

• ‖M‖ ≥ 0, and ‖M‖ = 0 ⇔ M = 0.
• ‖αM‖ = |α| ‖M‖ for all α ∈ R and all n × n matrices M.
• ‖M + N‖ ≤ ‖M‖ + ‖N‖ for all n × n matrices M and N.
• ‖MN‖ ≤ ‖M‖ ‖N‖ for any n × n matrix M and n × p matrix N.
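For intuition, the exercise can be spot-checked numerically (a NumPy sketch, not in the original slides). The operator norm defined above coincides with the largest singular value of M, which NumPy exposes as np.linalg.norm(M, 2); the matrices below are arbitrary random test cases:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))

# Operator norm max_{|X|<=1} |MX| = largest singular value of M.
def op(M):
    return np.linalg.norm(M, 2)

assert op(M) >= 0                                   # nonnegativity
assert np.isclose(op(3.0 * M), 3.0 * op(M))         # homogeneity
assert op(M + N) <= op(M) + op(N) + 1e-12           # triangle inequality
assert op(M @ N) <= op(M) * op(N) + 1e-12           # submultiplicativity
```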

The series ∑_{k=0}^∞ A^k t^k / k! is said to be absolutely convergent if ∑_{k=0}^∞ ‖A^k t^k / k!‖ is convergent. The radius of convergence is R if the series is absolutely convergent for all |t| < R and divergent for |t| > R.
The Matrix Exponential

Theorem: The series ∑_{k=0}^∞ A^k t^k / k! is absolutely convergent with infinite radius of convergence.

Proof. Let a = ‖A‖ and let t0 > 0 be arbitrary. Then for |t| ≤ t0,

    ‖A^k t^k / k!‖ ≤ ‖A‖^k |t|^k / k! ≤ a^k t0^k / k!.

But ∑_{k=0}^∞ a^k t0^k / k! = e^{a t0}. By the Weierstrass M-test, the series ∑_{k=0}^∞ A^k t^k / k! is absolutely convergent for all |t| ≤ t0. Since t0 > 0 is arbitrary, the series is absolutely convergent with an infinite radius of convergence.

Definition: Let A be an n × n matrix. Then for t ∈ R,

    e^{At} = ∑_{k=0}^∞ A^k t^k / k!.
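The defining series can be evaluated directly by truncation. Below is a minimal NumPy sketch (not from the slides; the test matrix and the number of terms are illustrative choices). For A = [[0, −1], [1, 0]], the exponential e^{At} is the rotation matrix through angle t, which gives a known value to check against:

```python
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate e^{At} by the partial sum of (At)^k / k!, k < terms."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)          # current term (At)^k / k!, starting at k = 0
    for k in range(1, terms):
        term = term @ (A * t) / k   # build (At)^k / k! incrementally
        result = result + term
    return result

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
E = expm_series(A, np.pi / 2)

# e^{At} here is rotation by t; at t = pi/2 that is [[0,-1],[1,0]].
assert np.allclose(E, [[0.0, -1.0], [1.0, 0.0]], atol=1e-8)
```

For small ‖At‖ the factorial in the denominator makes the truncated series converge very quickly; production code would instead use a scaling-and-squaring method.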



Computing e^{At}

If A is a diagonal matrix, then the computation of e^{At} is simple.

Example: Let

    A = [ −1  0 ]
        [  0  2 ].

Then

    A^2 = [ 1  0 ],   A^3 = [ −1  0 ],   . . . ,   A^n = [ (−1)^n   0  ]
          [ 0  4 ]          [  0  8 ]                    [   0     2^n ].

Therefore,

    e^{At} = ∑_{k=0}^∞ A^k t^k / k!
           = [ ∑_{k=0}^∞ (−1)^k t^k / k!              0             ]
             [            0                ∑_{k=0}^∞ 2^k t^k / k!   ]
           = [ e^{−t}    0    ]
             [   0     e^{2t} ].



Computing e^{At}

Theorem: Let A and B be n × n matrices independent of t, and let r, s, t ∈ R (or ∈ C). Then
• e^{A·0} = e^{0} = I.
• e^{A(t+s)} = e^{At} e^{As}.
• (e^{At})^{−1} = e^{−At}.
• e^{(A+B)t} = e^{At} e^{Bt}, provided that AB = BA.
• e^{rIt} = e^{rt} I.
• A e^{At} = e^{At} A.
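These identities are easy to probe numerically (a NumPy sketch, not in the slides; the truncated-series exponential and the random test matrices are illustrative assumptions). Note in particular that the product rule genuinely needs AB = BA:

```python
import numpy as np

def expm(M, terms=40):
    # Truncated defining series; adequate for the small matrices here.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
t, s = 0.4, 0.9

# e^{A(t+s)} = e^{At} e^{As}
assert np.allclose(expm(A * (t + s)), expm(A * t) @ expm(A * s))
# (e^{At})^{-1} = e^{-At}
assert np.allclose(np.linalg.inv(expm(A * t)), expm(-A * t))
# A e^{At} = e^{At} A
assert np.allclose(A @ expm(A * t), expm(A * t) @ A)
# e^{(A+B)t} = e^{At} e^{Bt} generally fails when AB != BA:
assert not np.allclose(expm((A + B) * t), expm(A * t) @ expm(B * t))
```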

Theorem: If P and A are n × n matrices and B = PAP^{−1}, then

    e^{Bt} = P e^{At} P^{−1}.

Proof. Using the definition of e^{At},

    e^{Bt} = lim_{n→∞} ∑_{k=0}^n (PAP^{−1})^k t^k / k!
           = P ( lim_{n→∞} ∑_{k=0}^n (At)^k / k! ) P^{−1} = P e^{At} P^{−1}.
Computing e^{At}

Corollary: If P^{−1}AP = diag[λ_j], then e^{At} = P diag[e^{λ_j t}] P^{−1}.

Corollary: If

    A = [ a  −b ]
        [ b   a ],

then

    e^{At} = e^{at} [ cos bt  −sin bt ]
                    [ sin bt   cos bt ].

Proof. If λ = a + ib, then

    [ a  −b ] = V [ λ  0 ] V^{−1},   where   V = [ i  −i ]
    [ b   a ]     [ 0  λ̄ ]                       [ 1   1 ].

Thus,

    e^{At} = V [ e^{λt}    0     ] V^{−1} = e^{at} V [ e^{ibt}     0     ] V^{−1}
               [   0     e^{λ̄t} ]                    [    0     e^{−ibt} ]

           = e^{at} [ cos bt  −sin bt ]
                    [ sin bt   cos bt ].

In particular, if a = 0 then e^{A} is a rotation by b radians.
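The closed form in this corollary can be checked against the defining series (a NumPy sketch, not in the slides; the values of a, b, t are arbitrary test choices):

```python
import numpy as np

a, b, t = 0.5, 2.0, 1.3
A = np.array([[a, -b],
              [b,  a]])

# e^{At} via the truncated defining series.
E, term = np.eye(2), np.eye(2)
for k in range(1, 40):
    term = term @ (A * t) / k
    E = E + term

# Closed form from the corollary: e^{at} times rotation by angle bt.
R = np.exp(a * t) * np.array([[np.cos(b * t), -np.sin(b * t)],
                              [np.sin(b * t),  np.cos(b * t)]])
assert np.allclose(E, R)
```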


e^{At} is a fundamental matrix of X'(t) = AX(t)

It is well known that if the power series ∑_{k=0}^∞ a_k x^k is absolutely convergent with radius of convergence R > 0, and f(x) = ∑_{k=0}^∞ a_k x^k, then f'(x) = ∑_{k=1}^∞ k a_k x^{k−1} for all x ∈ (−R, R). This fact extends to the case when the a_k are replaced by matrices.

Lemma: Let A be a square matrix. Then for all t ∈ R,

    d/dt e^{At} = A e^{At}.

Proof. As ∑_{k=0}^∞ t^k A^k / k! has infinite radius of convergence, for all t ∈ R,

    d/dt e^{At} = ∑_{k=0}^∞ d/dt (t^k A^k / k!) = ∑_{k=1}^∞ k t^{k−1} A^k / k!
                = A ∑_{k=1}^∞ (tA)^{k−1} / (k−1)! = A e^{At}.



e^{At} is a fundamental matrix of X'(t) = AX(t)

Note that

    d/dt (e^{At}) = A e^{At}

implies that each column of e^{At} is a solution to the system X'(t) = AX(t). Since e^{At} is invertible, it follows that the columns of e^{At} form a fundamental solution set for X'(t) = AX(t).

Theorem: If A is an n × n constant matrix, then the columns of e^{At} form a fundamental solution set for X'(t) = AX(t). Therefore, e^{At} is a fundamental matrix for the system, and a GS is

    X(t) = e^{At} C = Φ(t) C.
A comparison of two fundamental matrices

Consider the n × n system X'(t) = AX(t). Suppose A has n linearly independent eigenvectors {v1, . . . , vn} with A v_i = λ_i v_i. Setting

    V = [v1 · · · vn],

one fundamental matrix is

    Φ1(t) = [e^{λ1 t} v1 · · · e^{λn t} vn] = V diag[e^{λ1 t}, . . . , e^{λn t}],

giving the GS X(t) = Φ1(t) C, where C ∈ R^n is arbitrary.



A comparison of two fundamental matrices

As A = V diag[λ1, . . . , λn] V^{−1}, another fundamental matrix is

    Φ2(t) = e^{At} = V diag[e^{λ1 t}, . . . , e^{λn t}] V^{−1},

giving the GS Y(t) = Φ2(t) C, where C ∈ R^n is arbitrary. Observe that Φ2(t) = Φ1(t) V^{−1}.

The columns of Φ1(t) and Φ2(t) are two different bases of the solution space of the system, and hence of the kernel of L := d/dt − A.
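The relation Φ2(t) = Φ1(t) V^{−1}, and the fact that Φ2(t) equals e^{At}, can be verified on the earlier 2 × 2 example (a NumPy sketch, not in the slides; the truncated-series exponential is an illustrative stand-in for e^{At}):

```python
import numpy as np

def expm(M, terms=40):
    # Truncated defining series for e^{M}.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[2.0, -3.0],
              [1.0, -2.0]])       # eigenvalues +1 and -1
lam, V = np.linalg.eig(A)
t = 0.8

D = np.diag(np.exp(lam * t))
Phi1 = V @ D                       # columns are e^{λi t} v_i
Phi2 = V @ D @ np.linalg.inv(V)    # diagonalized form of e^{At}

assert np.allclose(Phi2, Phi1 @ np.linalg.inv(V))
assert np.allclose(Phi2, expm(A * t))   # Phi2 really is e^{At}
```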



Theorem (The fundamental theorem for linear systems): Let A be an n × n matrix. Then for a given X0 ∈ R^n, the IVP

    X'(t) = AX(t),   X(t0) = X0,

has a unique solution, given by

    X(t) = e^{A(t−t0)} X0.

Proof. If X(t) = e^{A(t−t0)} X0, then

    X'(t) = d/dt (e^{At} e^{−At0} X0) = A e^{A(t−t0)} X0 = AX(t),   t ∈ R.

Also, X(t0) = I X0 = X0.

Uniqueness: Let X(t) be any solution of the IVP and set Y(t) = e^{−At} X(t). Then Y(t) is a constant vector, since

    Y'(t) = −A e^{−At} X(t) + e^{−At} X'(t) = −A e^{−At} X(t) + e^{−At} AX(t) = 0

for all t ∈ R, where the last equality holds because A e^{−At} = e^{−At} A. As Y(t0) = e^{−At0} X0, we have Y(t) = e^{−At0} X0 for all t ∈ R, and so any solution of the IVP is

    X(t) = e^{At} Y(t) = e^{A(t−t0)} X0.
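The formula X(t) = e^{A(t−t0)} X0 can be sanity-checked numerically (a NumPy sketch, not in the slides; the matrix, t0, and X0 are arbitrary test choices, and the truncated series stands in for e^{At}):

```python
import numpy as np

def expm(M, terms=40):
    # Truncated defining series for e^{M}.
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[2.0, -3.0],
              [1.0, -2.0]])
t0 = 0.5
X0 = np.array([1.0, 2.0])

def X(t):
    return expm(A * (t - t0)) @ X0

# Initial condition holds...
assert np.allclose(X(t0), X0)

# ...and X satisfies X' = AX (centered difference at a sample point).
t, h = 1.2, 1e-6
assert np.allclose((X(t + h) - X(t - h)) / (2 * h), A @ X(t), atol=1e-5)
```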
