MatrixTheory-20120906

The lecture notes cover concepts related to diagonalization and eigenvalue multiplicity in matrix theory. Key topics include conditions for diagonalization, the relationship between algebraic and geometric multiplicity, and the simultaneous diagonalizability of matrices. The notes also provide proofs and lemmas regarding the properties of diagonalizable matrices and their commutativity under polynomial functions.

Uploaded by

freemanchen115
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
4 views4 pages

MatrixTheory-20120906

The lecture notes cover concepts related to diagonalization and eigenvalue multiplicity in matrix theory. Key topics include conditions for diagonalization, the relationship between algebraic and geometric multiplicity, and the simultaneous diagonalizability of matrices. The notes also provide proofs and lemmas regarding the properties of diagonalizable matrices and their commutativity under polynomial functions.

Uploaded by

freemanchen115
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 4

Matrix Theory, Math6304

Lecture Notes from September 6, 2012


taken by Nathaniel Hammen

Last Time (9/4/12)


- Diagonalization: conditions for diagonalization
- Eigenvalue Multiplicity: algebraic and geometric multiplicity

1 Further Review
1.1 Warm-up questions
1.1.1 Question. If $A \in M_n$ has only one eigenvalue $\lambda$ (of multiplicity $n$) and is diagonalizable, what is $A$?

Answer. Because $A$ is diagonalizable, there exists an invertible $S \in M_n$ such that $S^{-1}AS$ is diagonal. In fact, because all eigenvalues are $\lambda$, we have
$$A = S(S^{-1}AS)S^{-1} = S(\lambda I)S^{-1} = \lambda SS^{-1} = \lambda I.$$
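To see numerically why diagonalizability is essential here, consider a matrix with a single repeated eigenvalue that is not $\lambda I$: it must fail to be diagonalizable. A minimal sketch in NumPy (the Jordan block `J` is our own illustrative example, not from the notes):

```python
import numpy as np

# Hypothetical example: the Jordan block J has the single eigenvalue 2
# with algebraic multiplicity 2, but J != 2I, so by the argument above
# it cannot be diagonalizable.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# Geometric multiplicity of the eigenvalue 2 is dim null(J - 2I).
geo_mult = 2 - np.linalg.matrix_rank(J - 2 * np.eye(2))
print(geo_mult)  # 1, strictly less than 2: no basis of eigenvectors exists
```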
1.1.2 Question. Let $A \in M_n$ and $f(t) = \det(I + tA)$. What is $f'(t)$ in terms of $A$?

Answer. The $(i,j)$ entry of $I + tA$ is $\delta_{i,j} + t a_{i,j}$, so
$$f(t) = \det(I + tA) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{j=1}^{n} \left( \delta_{\sigma(j),j} + t\, a_{\sigma(j),j} \right)$$
$$= \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \left( \prod_{j=1}^{n} \delta_{\sigma(j),j} + t \sum_{j=1}^{n} a_{\sigma(j),j} \prod_{i \neq j} \delta_{\sigma(i),i} \right) + o(t^2),$$
where $S_n$ is the set of permutations of $n$ elements and $\mathrm{sgn}(\sigma)$ is $+1$ if $\sigma$ is an even permutation and $-1$ if $\sigma$ is an odd permutation. Differentiating $f(t)$ gives
$$f'(t) = \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \sum_{j=1}^{n} a_{\sigma(j),j} \prod_{i \neq j} \delta_{\sigma(i),i} + o(t).$$
The product of Kronecker deltas vanishes unless $\sigma(i) = i$ for every $i \neq j$, which forces $\sigma$ to be the identity, so
$$f'(t) = \sum_{j=1}^{n} a_{j,j} + o(t) = \mathrm{tr}(A) + o(t).$$
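The identity $f'(t) = \mathrm{tr}(A) + o(t)$ predicts $f'(0) = \mathrm{tr}(A)$, which can be sanity-checked with a finite-difference approximation; a quick sketch (the test matrix `A` is an arbitrary choice of ours):

```python
import numpy as np

# Arbitrary 3x3 test matrix; the claim holds for any square A.
A = np.array([[2.0, 1.0, 0.0],
              [3.0, -1.0, 4.0],
              [0.0, 5.0, 2.0]])

def f(t):
    """f(t) = det(I + tA)."""
    return np.linalg.det(np.eye(3) + t * A)

# Central finite-difference approximation of f'(0).
h = 1e-6
f_prime_0 = (f(h) - f(-h)) / (2 * h)

print(f_prime_0, np.trace(A))  # both close to tr(A) = 3
```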

1.5.31 Lemma. Two matrices $A \in M_n$ and $B \in M_m$ are diagonalizable iff
$$C = \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \in M_{n+m}$$
is diagonalizable.
Proof. First we assume $A \in M_n$ and $B \in M_m$ are diagonalizable. Then there exist invertible $S_1 \in M_n$ and $S_2 \in M_m$ such that $S_1^{-1}AS_1$ and $S_2^{-1}BS_2$ are diagonal matrices. Let
$$S = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix}. \quad \text{Then} \quad S^{-1} = \begin{pmatrix} S_1^{-1} & 0 \\ 0 & S_2^{-1} \end{pmatrix}$$
and
$$S^{-1} \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} S = \begin{pmatrix} S_1^{-1}AS_1 & 0 \\ 0 & S_2^{-1}BS_2 \end{pmatrix},$$
which is a diagonal matrix, so $C$ is diagonalizable.

Conversely, assume that $C$ is diagonalizable. Then there exists an invertible $S \in M_{n+m}$ such that $D = S^{-1}CS$ is a diagonal matrix, say with diagonal entries $\lambda_1, \dots, \lambda_{n+m}$. Write $S = [s_1\, s_2\, \dots\, s_{n+m}]$ with each $s_j \in \mathbb{C}^{n+m}$. Then, because $SD = CS$, each $s_j$ is an eigenvector for $C$. Each $s_j$ may then be written as
$$s_j = \begin{pmatrix} x_j \\ y_j \end{pmatrix} \quad \text{with } x_j \in \mathbb{C}^n \text{ and } y_j \in \mathbb{C}^m.$$
Then the block form of $C$ and the fact that $Cs_j = \lambda_j s_j$ imply that $Ax_j = \lambda_j x_j$ and $By_j = \lambda_j y_j$. If we let $X = [x_1\, x_2\, \dots\, x_{n+m}]$ and $Y = [y_1\, y_2\, \dots\, y_{n+m}]$, then $S = \begin{pmatrix} X \\ Y \end{pmatrix}$. The matrix $S$ is invertible, so $\mathrm{rank}(S) = n+m$. By the dimensions of $X$ and $Y$, we also have $\mathrm{rank}(X) \le n$ and $\mathrm{rank}(Y) \le m$. Looking at row rank, we see that
$$n + m = \mathrm{rank}(S) \le \mathrm{rank}(X) + \mathrm{rank}(Y) \le n + m,$$
so $\mathrm{rank}(X) + \mathrm{rank}(Y) = n + m$. This can only occur if $\mathrm{rank}(X) = n$ and $\mathrm{rank}(Y) = m$. Thus $X$ contains $n$ linearly independent columns, each of which is an eigenvector for $A$, and $Y$ contains $m$ linearly independent columns, each of which is an eigenvector for $B$. Thus we have bases of eigenvectors of $A$ and $B$, and we conclude that $A$ and $B$ are diagonalizable.
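The forward direction of the lemma is easy to check numerically; a sketch assuming our own small diagonalizable blocks `A` and `B` (distinct eigenvalues guarantee diagonalizability):

```python
import numpy as np

# Our own example blocks: distinct eigenvalues make A and B diagonalizable.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # eigenvalues 1, 3
B = np.array([[5.0]])        # eigenvalue 5

# Assemble C = [[A, 0], [0, B]] in block-diagonal form.
n, m = A.shape[0], B.shape[0]
C = np.zeros((n + m, n + m))
C[:n, :n] = A
C[n:, n:] = B

# C is diagonalizable iff its eigenvector matrix has full rank n + m.
eigvals, S = np.linalg.eig(C)
rank_S = np.linalg.matrix_rank(S)
print(np.sort(eigvals), rank_S)  # eigenvalues 1, 3, 5 and rank 3
```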

1.5.32 Theorem. Let $A, B \in M_n$ be diagonalizable. Then $AB = BA$ if and only if $A$ and $B$ are simultaneously diagonalizable.

Proof. We have already shown that if $A$ and $B$ are simultaneously diagonalizable, then $AB = BA$. All that remains is to show the converse. Assume that $AB = BA$. Because $A$ is diagonalizable, there exists an invertible $S \in M_n$ such that $D = S^{-1}AS$ is diagonal. We may multiply $S$ by an invertible matrix to permute the elements on the diagonal of $D$, so we may assume without loss of generality that
$$D = \begin{pmatrix} \lambda_1 I_{m_1} & 0 & \dots & 0 \\ 0 & \lambda_2 I_{m_2} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & \lambda_r I_{m_r} \end{pmatrix} \quad \text{with } \lambda_j \neq \lambda_k \text{ for all } j \neq k,$$
where $I_m$ is the $m \times m$ identity and $m_j$ is the multiplicity of $\lambda_j$. Because $AB = BA$, we have
$$(S^{-1}AS)(S^{-1}BS) = S^{-1}ABS = S^{-1}BAS = (S^{-1}BS)(S^{-1}AS).$$
If we let $C = S^{-1}BS$, then the above gives $DC = CD$. If we denote $C = [c_{i,j}]_{i,j=1}^{n}$ and $D = [d_{i,j}]_{i,j=1}^{n}$, then by the diagonal structure of $D$ we have $d_{i,i} c_{i,j} = c_{i,j} d_{j,j}$. Then $(d_{i,i} - d_{j,j}) c_{i,j} = 0$ implies that if $d_{i,i} \neq d_{j,j}$, then $c_{i,j} = 0$. By the block structure of $D$, this implies that $C$ is block diagonal with blocks of the same sizes as those of $D$. That is,
$$C = \begin{pmatrix} C_1 & 0 & \dots & 0 \\ 0 & C_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & C_r \end{pmatrix} \quad \text{with } C_j \in M_{m_j} \text{ for all } j.$$
Because $B$ is diagonalizable, there exists an invertible $R \in M_n$ such that $R^{-1}BR$ is diagonal. Then
$$R^{-1}SCS^{-1}R = R^{-1}SS^{-1}BSS^{-1}R = R^{-1}BR,$$
so $C$ is diagonalizable. From the previous lemma, we deduce that each block $C_j$ is diagonalizable. Thus, for each $j$, there exists an invertible $T_j \in M_{m_j}$ such that $T_j^{-1} C_j T_j$ is diagonal. Let
$$T = \begin{pmatrix} T_1 & 0 & \dots & 0 \\ 0 & T_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & T_r \end{pmatrix}. \quad \text{Then} \quad T^{-1} = \begin{pmatrix} T_1^{-1} & 0 & \dots & 0 \\ 0 & T_2^{-1} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & T_r^{-1} \end{pmatrix}$$
and
$$T^{-1}S^{-1}BST = T^{-1}CT = \begin{pmatrix} T_1^{-1}C_1T_1 & 0 & \dots & 0 \\ 0 & T_2^{-1}C_2T_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & T_r^{-1}C_rT_r \end{pmatrix},$$
which is a diagonal matrix. Also,
$$T^{-1}S^{-1}AST = T^{-1}DT = \begin{pmatrix} T_1^{-1}\lambda_1 I_{m_1}T_1 & 0 & \dots & 0 \\ 0 & T_2^{-1}\lambda_2 I_{m_2}T_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & T_r^{-1}\lambda_r I_{m_r}T_r \end{pmatrix} = \begin{pmatrix} \lambda_1 I_{m_1} & 0 & \dots & 0 \\ 0 & \lambda_2 I_{m_2} & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \dots & 0 & \lambda_r I_{m_r} \end{pmatrix} = D.$$
Thus $T^{-1}S^{-1}AST$ and $T^{-1}S^{-1}BST$ are both diagonal matrices, so $A$ and $B$ are simultaneously diagonalized by $ST \in M_n$.
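The theorem can be illustrated numerically. In this sketch we manufacture commuting diagonalizable matrices by conjugating two diagonal matrices with the same invertible `P` (our own construction), then check that the eigenvector matrix of $A$ diagonalizes both:

```python
import numpy as np

# Conjugating two diagonal matrices by the same P makes them commute.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
P_inv = np.linalg.inv(P)
A = P @ np.diag([2.0, 5.0]) @ P_inv
B = P @ np.diag([-1.0, 4.0]) @ P_inv

commute = np.allclose(A @ B, B @ A)

# Since A has distinct eigenvalues, its eigenvector matrix S is forced
# (up to scaling), and it must diagonalize B as well.
_, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)
DA = S_inv @ A @ S
DB = S_inv @ B @ S

def off(M):
    """Off-diagonal part of M."""
    return M - np.diag(np.diag(M))

print(commute, np.allclose(off(DA), 0), np.allclose(off(DB), 0))
```

When $A$ has repeated eigenvalues its eigenvector matrix is no longer forced, which is exactly why the proof above needs the block-diagonal argument for $C$.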

We want to make this equivalence more general.

1.5.33 Remark. Let $A \in M_n$ and consider polynomials
$$p(A) = p_0 I + p_1 A + \dots + p_r A^r \quad \text{and} \quad q(A) = q_0 I + q_1 A + \dots + q_s A^s \quad \text{with } r, s \in \mathbb{N}.$$
Then $p(A)q(A) = q(A)p(A)$. Thus the family of all polynomials in $A$ is commuting. If $S \in M_n$ is invertible, then $S^{-1}A^kS = (S^{-1}AS)^k$ for any nonnegative integer $k$, so
$$S^{-1}p(A)S = S^{-1}(p_0 I + p_1 A + \dots + p_r A^r)S = p_0 S^{-1}IS + p_1 S^{-1}AS + \dots + p_r S^{-1}A^rS$$
$$= p_0 I + p_1 (S^{-1}AS) + \dots + p_r (S^{-1}AS)^r = p(S^{-1}AS).$$
Thus, if $S$ diagonalizes $A$, then $p(S^{-1}AS)$ is a polynomial of a diagonal matrix, which is diagonal. Hence $S^{-1}p(A)S$ is diagonal, so $S$ diagonalizes all polynomials of $A$ simultaneously.
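The identity $S^{-1}p(A)S = p(S^{-1}AS)$ and the resulting diagonality can be checked numerically; a sketch with an arbitrary quadratic $p$ and matrix $A$ (both our own choices):

```python
import numpy as np

# Arbitrary diagonalizable matrix (distinct eigenvalues 1 and 3).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

def p(M):
    """Example polynomial p(M) = 2I + 3M + M^2."""
    I = np.eye(M.shape[0])
    return 2 * I + 3 * M + M @ M

# Columns of S are eigenvectors of A, so S diagonalizes A.
_, S = np.linalg.eig(A)
S_inv = np.linalg.inv(S)

lhs = S_inv @ p(A) @ S   # S^{-1} p(A) S
rhs = p(S_inv @ A @ S)   # p(S^{-1} A S)

# Both should agree and be diagonal, with entries p(1) = 6 and p(3) = 20
# (up to the eigenvalue ordering returned by eig).
print(np.allclose(lhs, rhs))
```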
