
EE263 Autumn 2015    S. Boyd and S. Lall

Jordan canonical form

▶ Jordan canonical form

▶ generalized modes

▶ Cayley-Hamilton theorem

1
Jordan canonical form

any matrix A ∈ R^{n×n} can be put in Jordan canonical form by a similarity transformation, i.e.,

$$T^{-1} A T = J = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_q \end{bmatrix}$$

where

$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix} \in \mathbf{C}^{n_i \times n_i}$$

is called a Jordan block of size n_i with eigenvalue λ_i (so n = n_1 + ··· + n_q)

▶ J is upper bidiagonal

▶ J diagonal is the special case of n Jordan blocks of size n_i = 1

▶ Jordan form is unique (up to permutations of the blocks)

▶ can have multiple blocks with same eigenvalue


2
Jordan canonical form

note: JCF is a conceptual tool, never used in numerical computations!


$$X(s) = \det(sI - A) = (s - \lambda_1)^{n_1} \cdots (s - \lambda_q)^{n_q}$$

hence distinct eigenvalues ⇒ n_i = 1 ⇒ A diagonalizable

dim null(λI − A) is the number of Jordan blocks with eigenvalue λ

more generally,

$$\dim \operatorname{null}(\lambda I - A)^k = \sum_{\lambda_i = \lambda} \min\{k, n_i\}$$

so from dim null(λI − A)^k for k = 1, 2, ... we can determine the sizes of the Jordan blocks associated with λ
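the slides stress that JCF is not for numerical computation, but the block-size recovery above can be illustrated in exact arithmetic. a minimal sketch using sympy (an illustration, not part of the course material); the test matrix, with one size-3 and one size-2 block at λ = 2, is a hypothetical example:

```python
import sympy as sp

def jordan_block(lam, k):
    # k x k Jordan block: lam on the diagonal, ones on the superdiagonal
    M = lam * sp.eye(k)
    for i in range(k - 1):
        M[i, i + 1] = 1
    return M

# hypothetical example: one size-3 and one size-2 block, both at lam = 2
A = sp.diag(jordan_block(2, 3), jordan_block(2, 2))
lam, n = 2, A.rows

# dim null((lam*I - A)^k) = n - rank((lam*I - A)^k)
dims = [n - ((lam * sp.eye(n) - A) ** k).rank() for k in (1, 2, 3)]
print(dims)   # [2, 4, 5]: two blocks at k = 1, then min(k,3) + min(k,2)
```

from dims = [2, 4, 5] one reads off exactly two blocks (nullity at k = 1), of sizes 3 and 2.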

3
Jordan canonical form

▶ factor out T and T^{-1}: λI − A = T(λI − J)T^{-1}

▶ for, say, a block of size 3:

$$\lambda_i I - J_i = \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad (\lambda_i I - J_i)^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad (\lambda_i I - J_i)^3 = 0$$

▶ for other blocks (say, of size 3, for k ≥ 2)

$$(\lambda_i I - J_j)^k = \begin{bmatrix} (\lambda_i - \lambda_j)^k & -k(\lambda_i - \lambda_j)^{k-1} & \tfrac{k(k-1)}{2}(\lambda_i - \lambda_j)^{k-2} \\ 0 & (\lambda_i - \lambda_j)^k & -k(\lambda_i - \lambda_j)^{k-1} \\ 0 & 0 & (\lambda_i - \lambda_j)^k \end{bmatrix}$$
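both facts — nilpotency of λ_i I − J_i for the block's own eigenvalue, and the binomial closed form for a block with a different eigenvalue — can be spot-checked numerically. a numpy sketch; the eigenvalues 2 and 5, block size 3, and power k = 4 are arbitrary choices, not from the slides:

```python
import numpy as np

def jordan_block(lam, k):
    # k x k Jordan block: lam on the diagonal, ones on the superdiagonal
    return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

li, lj = 2.0, 5.0                     # hypothetical distinct eigenvalues
N = li * np.eye(3) - jordan_block(li, 3)

# same eigenvalue: (li*I - Ji) is nilpotent, its 3rd power vanishes
assert np.allclose(np.linalg.matrix_power(N, 3), 0)

# different eigenvalue: powers match the binomial closed form
k, d = 4, li - lj
M = np.linalg.matrix_power(li * np.eye(3) - jordan_block(lj, 3), k)
assert np.allclose(np.diag(M), d ** k)
assert np.allclose(np.diag(M, 1), -k * d ** (k - 1))
assert np.allclose(np.diag(M, 2), (k * (k - 1) / 2) * d ** (k - 2))
```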

4
Generalized eigenvectors

suppose T^{-1} A T = J = diag(J_1, ..., J_q)

express T as
$$T = [\,T_1 \;\; T_2 \;\; \cdots \;\; T_q\,]$$
where T_i ∈ C^{n×n_i} are the columns of T associated with the ith Jordan block J_i

we have A T_i = T_i J_i

let T_i = [v_{i1} v_{i2} ··· v_{in_i}]; then we have:
$$A v_{i1} = \lambda_i v_{i1},$$
i.e., the first column of each T_i is an eigenvector associated with eigenvalue λ_i

for j = 2, ..., n_i,
$$A v_{ij} = v_{i,j-1} + \lambda_i v_{ij}$$

the vectors v_{i1}, ..., v_{in_i} are sometimes called generalized eigenvectors
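the chain relations follow directly from A T_i = T_i J_i, which a small exact example can confirm. a sympy sketch; the invertible matrix T below is an arbitrary hypothetical choice, and A is built from it so that its Jordan form is known:

```python
import sympy as sp

lam = 3
J = sp.Matrix([[lam, 1],
               [0, lam]])             # one size-2 Jordan block
T = sp.Matrix([[1, 1],
               [2, 5]])               # arbitrary invertible T (hypothetical)
A = T * J * T.inv()                   # A has Jordan form J via T

v1, v2 = T.col(0), T.col(1)
assert A * v1 == lam * v1             # first column: ordinary eigenvector
assert A * v2 == v1 + lam * v2        # second column: A v2 = v1 + lam*v2
```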

5
Jordan form LDS

consider the LDS ẋ = Ax

by change of coordinates x = T x̃, can put into form x̃˙ = J x̃

system is decomposed into independent 'Jordan block systems' x̃˙_i = J_i x̃_i

(block diagram: a chain of integrators 1/s, each with scalar feedback λ; the state x̃_{n_i} feeds x̃_{n_i−1}, and so on down to x̃_1)

Jordan blocks are sometimes called Jordan chains (the block diagram shows why)

6
Resolvent, exponential of Jordan block

resolvent of the k × k Jordan block with eigenvalue λ:

$$(sI - J_\lambda)^{-1} = \begin{bmatrix} s-\lambda & -1 & & \\ & s-\lambda & \ddots & \\ & & \ddots & -1 \\ & & & s-\lambda \end{bmatrix}^{-1} = \begin{bmatrix} (s-\lambda)^{-1} & (s-\lambda)^{-2} & \cdots & (s-\lambda)^{-k} \\ & (s-\lambda)^{-1} & \cdots & (s-\lambda)^{-k+1} \\ & & \ddots & \vdots \\ & & & (s-\lambda)^{-1} \end{bmatrix}$$

$$= (s-\lambda)^{-1} I + (s-\lambda)^{-2} F_1 + \cdots + (s-\lambda)^{-k} F_{k-1}$$

where F_i is the matrix with ones on the ith upper diagonal

7
Resolvent, exponential of Jordan block

by inverse Laplace transform, the exponential is:

$$e^{tJ_\lambda} = e^{t\lambda}\left(I + tF_1 + \cdots + \frac{t^{k-1}}{(k-1)!}F_{k-1}\right) = e^{t\lambda}\begin{bmatrix} 1 & t & \cdots & t^{k-1}/(k-1)! \\ & 1 & \cdots & t^{k-2}/(k-2)! \\ & & \ddots & \vdots \\ & & & 1 \end{bmatrix}$$

Jordan blocks yield:

▶ repeated poles in the resolvent

▶ terms of the form t^p e^{tλ} in e^{tA}

8
Generalized modes
consider ẋ = Ax, with
$$x(0) = a_1 v_{i1} + \cdots + a_{n_i} v_{in_i} = T_i a$$

then x(t) = T e^{tJ} x̃(0) = T_i e^{tJ_i} a

▶ trajectory stays in span of generalized eigenvectors

▶ coefficients have form p(t)e^{λt}, where p is polynomial

▶ such solutions are called generalized modes of the system

with general x(0) we can write

$$x(t) = e^{tA} x(0) = T e^{tJ} T^{-1} x(0) = \sum_{i=1}^{q} T_i e^{tJ_i} \left(S_i^T x(0)\right)$$

where
$$T^{-1} = \begin{bmatrix} S_1^T \\ \vdots \\ S_q^T \end{bmatrix}$$

hence: all solutions of ẋ = Ax are linear combinations of (generalized) modes


9
Cayley-Hamilton theorem

if p(s) = a_0 + a_1 s + ··· + a_k s^k is a polynomial and A ∈ R^{n×n}, we define

$$p(A) = a_0 I + a_1 A + \cdots + a_k A^k$$

Cayley-Hamilton theorem: for any A ∈ R^{n×n} we have X(A) = 0, where X(s) = det(sI − A)

example: with
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
we have X(s) = s^2 − 5s − 2, so

$$X(A) = A^2 - 5A - 2I = \begin{bmatrix} 7 & 10 \\ 15 & 22 \end{bmatrix} - 5\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} - 2I = 0$$
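the worked example can be reproduced in a few lines of numpy (an illustration, not part of the slides); np.poly recovers the characteristic coefficients from the eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# X(A) = A^2 - 5A - 2I should vanish
XA = A @ A - 5 * A - 2 * np.eye(2)
assert np.allclose(XA, 0)

# the same coefficients from numpy's characteristic polynomial
coeffs = np.poly(A)                  # approximately [1, -5, -2]
assert np.allclose(coeffs, [1, -5, -2])
```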

10
Cayley-Hamilton theorem

corollary: for every p ∈ Z_+, we have

$$A^p \in \operatorname{span}\{I, A, A^2, \ldots, A^{n-1}\}$$

(and if A is invertible, also for p ∈ Z)

i.e., every power of A can be expressed as a linear combination of I, A, ..., A^{n−1}

proof: divide X(s) into s^p to get s^p = q(s)X(s) + r(s)

r(s) = α_0 + α_1 s + ··· + α_{n−1} s^{n−1} is the remainder polynomial

then

$$A^p = q(A)X(A) + r(A) = r(A) = \alpha_0 I + \alpha_1 A + \cdots + \alpha_{n-1} A^{n-1}$$
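the division step of the proof can be carried out explicitly with sympy, reusing the 2×2 example from the previous slide; the power p = 5 is an arbitrary choice:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[1, 2],
               [3, 4]])
X = sp.expand((s * sp.eye(2) - A).det())   # s**2 - 5*s - 2

p = 5
q, r = sp.div(s**p, X, s)                  # s^p = q(s)*X(s) + r(s)

# evaluate the remainder polynomial r(s) at A
coeffs = sp.Poly(r, s).all_coeffs()[::-1]  # ascending: [alpha_0, alpha_1, ...]
rA = sum((c * A**i for i, c in enumerate(coeffs)), sp.zeros(2, 2))
assert A**p == rA                          # A^5 lies in span{I, A}
```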

11
Cayley-Hamilton theorem

for p = −1: rewrite the C-H theorem

$$X(A) = A^n + a_{n-1}A^{n-1} + \cdots + a_0 I = 0$$

as
$$I = A\left(-(a_1/a_0)I - (a_2/a_0)A - \cdots - (1/a_0)A^{n-1}\right)$$

(A is invertible ⇔ a_0 ≠ 0) so

$$A^{-1} = -(a_1/a_0)I - (a_2/a_0)A - \cdots - (1/a_0)A^{n-1}$$

i.e., the inverse is a linear combination of A^k, k = 0, ..., n − 1

for p = −2, −3, ..., use induction: multiplying the expression for A^{-1} by A^p gives

$$A^{p-1} = -(a_1/a_0)A^p - (a_2/a_0)A^{p+1} - \cdots - (1/a_0)A^{p+n-1}$$

if A^p, ..., A^{p+n−1} are linear combinations of A^k, k = 0, ..., n − 1, so is A^{p−1}
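the p = −1 formula gives A^{-1} as a polynomial in A. a numpy sketch using the earlier 2×2 example (a demonstration of the identity, not a recommended way to invert matrices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = len(A)
coeffs = np.poly(A)                  # [1, a_{n-1}, ..., a_1, a_0] = [1, -5, -2]
a0 = coeffs[-1]

# A^{-1} = -(1/a0) * (A^{n-1} + a_{n-1} A^{n-2} + ... + a_1 I)
Ainv = -(1 / a0) * sum(coeffs[i] * np.linalg.matrix_power(A, n - 1 - i)
                       for i in range(n))
assert np.allclose(Ainv, np.linalg.inv(A))
```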

12
Proof of C-H theorem

first assume A is diagonalizable: T^{-1}AT = Λ

$$X(s) = (s - \lambda_1) \cdots (s - \lambda_n)$$

since X(A) = X(TΛT^{-1}) = T X(Λ) T^{-1}, it suffices to show X(Λ) = 0

$$X(\Lambda) = (\Lambda - \lambda_1 I)\cdots(\Lambda - \lambda_n I) = \operatorname{diag}(0, \lambda_2 - \lambda_1, \ldots, \lambda_n - \lambda_1) \cdots \operatorname{diag}(\lambda_1 - \lambda_n, \ldots, \lambda_{n-1} - \lambda_n, 0) = 0$$

(each factor is diagonal with a zero in a different position, so the product of the diagonals vanishes entrywise)

13
Proof of C-H theorem

now let's do the general case: T^{-1}AT = J

$$X(s) = (s - \lambda_1)^{n_1} \cdots (s - \lambda_q)^{n_q}$$

it suffices to show X(J_i) = 0:

$$X(J_i) = (J_i - \lambda_1 I)^{n_1} \cdots \underbrace{\begin{bmatrix} 0 & 1 & & \\ & 0 & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{bmatrix}^{n_i}}_{(J_i - \lambda_i I)^{n_i}} \cdots (J_i - \lambda_q I)^{n_q} = 0$$

since the factor (J_i − λ_i I)^{n_i} is the n_i-th power of an n_i × n_i nilpotent matrix, which is zero

14
