
MAT067

University of California, Davis

Winter 2007

The Spectral Theorem for normal linear maps


Isaiah Lankham, Bruno Nachtergaele, Anne Schilling
(March 14, 2007)

In this section we come back to the question of when an operator on an inner product space V is diagonalizable. We first introduce the notion of the adjoint or hermitian conjugate of an operator and use it to define normal operators, which are those for which the operator and its adjoint commute with each other. The main result of this section is the Spectral Theorem, which states that normal operators are diagonal with respect to an orthonormal basis. We use this to show that normal operators are unitarily diagonalizable and generalize this notion to find the singular-value decomposition of an operator.

Self-adjoint or hermitian operators

Let V be a finite-dimensional inner product space over C with inner product ⟨·, ·⟩. A linear operator T ∈ L(V) is uniquely determined by the values of

   ⟨Tv, w⟩   for all v, w ∈ V.

This means in particular that if T, S ∈ L(V) and

   ⟨Tv, w⟩ = ⟨Sv, w⟩   for all v, w ∈ V,

then T = S. To see this, take w, for example, to be the elements of an orthonormal basis of V.

Definition 1. Given T ∈ L(V), the adjoint (sometimes called the hermitian conjugate) of T is the operator T* ∈ L(V) such that

   ⟨Tv, w⟩ = ⟨v, T*w⟩   for all v, w ∈ V.

Moreover, we call T self-adjoint or hermitian if T = T*.

The uniqueness of T* is clear by the previous observation.
Copyright © 2007 by the authors. These lecture notes may be reproduced in their entirety for non-commercial purposes.


Example 1. Let V = C³ and let T ∈ L(C³) be defined as T(z1, z2, z3) = (2z2 + iz3, iz1, z2). Then

   ⟨(y1, y2, y3), T*(z1, z2, z3)⟩ = ⟨T(y1, y2, y3), (z1, z2, z3)⟩
                                  = ⟨(2y2 + iy3, iy1, y2), (z1, z2, z3)⟩
                                  = 2y2 z̄1 + iy3 z̄1 + iy1 z̄2 + y2 z̄3
                                  = ⟨(y1, y2, y3), (−iz2, 2z1 + z3, −iz1)⟩,

so that T*(z1, z2, z3) = (−iz2, 2z1 + z3, −iz1). Writing the matrices for T and T* in terms of the canonical basis, we see that

   M(T) = [ 0  2  i ]            M(T*) = [  0  −i  0 ]
          [ i  0  0 ]    and             [  2   0  1 ]
          [ 0  1  0 ]                    [ −i   0  0 ].

Note that M(T*) can be obtained from M(T) by taking the complex conjugate of each element and transposing.
Elementary properties that you should prove as exercises are that for all S, T ∈ L(V) and a ∈ F we have

   (S + T)* = S* + T*,
   (aT)* = āT*,
   (T*)* = T,
   I* = I,
   (ST)* = T*S*,
   M(T*) = M(T)*,

where A* = (āji) for A = (aij), with i, j = 1, . . . , n. The matrix A* is called the conjugate transpose of A.

For n = 1, the conjugate transpose of the 1 × 1 matrix A is just the complex conjugate of its single entry. Hence requiring A to be self-adjoint (A = A*) amounts to saying that this entry is real. Because of the transpose, for n > 1 self-adjointness is not the same as having real entries, but the analogy with real numbers does carry over to the eigenvalues of self-adjoint operators, as the next proposition shows.
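For readers who like to check such identities numerically, here is a small sketch in Python/numpy: the adjoint of a matrix is its conjugate transpose, and the properties above can be tested on the operator of Example 1 (the second operator S, the scalar a, and the test vectors below are arbitrary choices made only for illustration).

    import numpy as np

    # Matrix of T from Example 1 with respect to the canonical basis of C^3
    T = np.array([[0, 2, 1j],
                  [1j, 0, 0],
                  [0, 1, 0]], dtype=complex)
    # An arbitrary second operator and scalar, used only to exercise the identities
    S = np.array([[1, 1j, 0],
                  [0, 2, 0],
                  [3, 0, 1j]], dtype=complex)
    a = 2 - 3j

    adj = lambda A: A.conj().T                 # adjoint = conjugate transpose
    inner = lambda x, y: np.vdot(y, x)         # <x, y> = sum_i x_i * conj(y_i)

    rng = np.random.default_rng(0)
    v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

    assert np.isclose(inner(T @ v, w), inner(v, adj(T) @ w))   # <Tv, w> = <v, T*w>
    assert np.allclose(adj(S + T), adj(S) + adj(T))            # (S + T)* = S* + T*
    assert np.allclose(adj(a * T), np.conj(a) * adj(T))        # (aT)* = conj(a) T*
    assert np.allclose(adj(adj(T)), T)                         # (T*)* = T
    assert np.allclose(adj(S @ T), adj(T) @ adj(S))            # (ST)* = T* S*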
Proposition 1. Every eigenvalue of a self-adjoint operator is real.
Proof. Suppose λ ∈ C is an eigenvalue of T and 0 ≠ v ∈ V is a corresponding eigenvector such that Tv = λv. Then

   λ‖v‖² = ⟨λv, v⟩ = ⟨Tv, v⟩ = ⟨v, T*v⟩
         = ⟨v, Tv⟩ = ⟨v, λv⟩ = λ̄⟨v, v⟩ = λ̄‖v‖².

This implies that λ = λ̄.


Example 2. The operator T ∈ L(C²) defined by

   T(v) = [ 2    1+i ] v
          [ 1−i  3   ]

is self-adjoint (or hermitian), and it can be checked that the eigenvalues are λ = 1, 4 by determining the zeroes of the polynomial p(λ) = (2 − λ)(3 − λ) − (1 + i)(1 − i) = λ² − 5λ + 4.
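This is easy to confirm numerically; for instance, with numpy:

    import numpy as np

    A = np.array([[2, 1 + 1j],
                  [1 - 1j, 3]], dtype=complex)

    assert np.allclose(A, A.conj().T)   # A equals its conjugate transpose, i.e. A is hermitian
    print(np.linalg.eigvalsh(A))        # eigenvalues of a hermitian matrix: [1. 4.]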

Normal operators

Normal operators are those that commute with their adjoint.


Definition 2. We call T ∈ L(V) normal if TT* = T*T.

In general TT* ≠ T*T. Note that TT* and T*T are both self-adjoint for all T ∈ L(V). Also, any self-adjoint operator T is normal. We now give a different characterization of normal operators in terms of norms. In order to prove this result, we first need the following proposition.
Proposition 2. Let V be a complex inner product space and T ∈ L(V) such that

   ⟨Tv, v⟩ = 0   for all v ∈ V.

Then T = 0.

Proof. Verify that

   ⟨Tu, w⟩ = (1/4) { ⟨T(u + w), u + w⟩ − ⟨T(u − w), u − w⟩
                     + i⟨T(u + iw), u + iw⟩ − i⟨T(u − iw), u − iw⟩ }.

Since each term on the right is of the form ⟨Tv, v⟩, we obtain ⟨Tu, w⟩ = 0 for all u, w ∈ V. Hence T = 0.
Proposition 3. Let T ∈ L(V). Then T is normal if and only if

   ‖Tv‖ = ‖T*v‖   for all v ∈ V.

Proof. Note that

   T is normal ⟺ TT* − T*T = 0
               ⟺ ⟨(TT* − T*T)v, v⟩ = 0 for all v ∈ V   (by Proposition 2, since TT* − T*T is self-adjoint)
               ⟺ ⟨TT*v, v⟩ = ⟨T*Tv, v⟩ for all v ∈ V
               ⟺ ‖T*v‖² = ‖Tv‖² for all v ∈ V.
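Proposition 3 is easy to illustrate numerically: for a normal matrix the two norms agree on every vector, while for a non-normal matrix they generally differ. In the sketch below, N is the normal operator that reappears after Definition 3; the Jordan block J and the test vector are arbitrary illustrative choices.

    import numpy as np

    adj = lambda A: A.conj().T

    N = 1j * np.array([[1, 1],
                       [1, 1]])              # normal, but neither hermitian nor unitary
    J = np.array([[0, 1],
                  [0, 0]], dtype=complex)    # a Jordan block, which is not normal
    v = np.array([1.0, 2.0 + 1j])

    print(np.allclose(N @ adj(N), adj(N) @ N))                            # True: N is normal
    print(np.isclose(np.linalg.norm(N @ v), np.linalg.norm(adj(N) @ v)))  # True, as Proposition 3 predicts
    print(np.isclose(np.linalg.norm(J @ v), np.linalg.norm(adj(J) @ v)))  # False: ||Jv|| = sqrt(5), ||J*v|| = 1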


Corollary 4. Let T ∈ L(V) be a normal operator. Then

1. null T = null T*.
2. If λ ∈ C is an eigenvalue of T, then λ̄ is an eigenvalue of T* with the same eigenvector.
3. If λ, μ ∈ C are distinct eigenvalues of T with associated eigenvectors v, w ∈ V respectively, then ⟨v, w⟩ = 0.

Proof. Note that part 1 follows from Proposition 3 and the positive definiteness of the norm.

To prove part 2, first verify that if T is normal, then T − λI is also normal and (T − λI)* = T* − λ̄I. Therefore, by Proposition 3 we have

   0 = ‖(T − λI)v‖ = ‖(T − λI)*v‖ = ‖(T* − λ̄I)v‖,

so that v is an eigenvector of T* with eigenvalue λ̄.

Using part 2, note that

   (λ − μ)⟨v, w⟩ = ⟨λv, w⟩ − ⟨v, μ̄w⟩ = ⟨Tv, w⟩ − ⟨v, T*w⟩ = 0.

Since λ − μ ≠ 0, it follows that ⟨v, w⟩ = 0, proving part 3.

Normal operators and the spectral decomposition

Recall that an operator is diagonalizable if there exists a basis of V consisting of eigenvectors of T. The nicest operators on V are those that are diagonalizable with respect to an orthonormal basis of V, that is, those for which there is an orthonormal basis of V consisting of eigenvectors of T. The spectral theorem for complex inner product spaces shows that these are precisely the normal operators.
Theorem 5 (Spectral Theorem). Let V be a finite-dimensional inner product space over C and T ∈ L(V). Then T is normal if and only if there exists an orthonormal basis for V consisting of eigenvectors for T.
Proof.
(⟹) Suppose that T is normal. We proved before that for any operator T on a complex inner product space V of dimension n, there exists an orthonormal basis e = (e1, . . . , en) for which the matrix M(T) is upper triangular,

   M(T) = [ a11  ⋯  a1n ]
          [      ⋱   ⋮  ]
          [  0      ann ].

We will show that M(T) is in fact diagonal, which implies that e1, . . . , en are eigenvectors of T. Since M(T) = (aij) with aij = 0 for i > j, we have Te1 = a11 e1 and T*e1 = Σ_{k=1}^n ā1k ek. Thus, by the Pythagorean Theorem and Proposition 3,

   |a11|² = ‖a11 e1‖² = ‖Te1‖² = ‖T*e1‖² = ‖ Σ_{k=1}^n ā1k ek ‖² = Σ_{k=1}^n |a1k|²,

from which it follows that |a12| = · · · = |a1n| = 0. One can repeat this argument, calculating ‖Tej‖² = |ajj|² and ‖T*ej‖² = Σ_{k=j}^n |ajk|², to find aij = 0 for all 2 ≤ i < j ≤ n. Hence T is diagonal with respect to the basis e, and e1, . . . , en are eigenvectors of T.

(⟸) Suppose there exists an orthonormal basis (e1, . . . , en) of V consisting of eigenvectors for T. Then the matrix M(T) with respect to this basis is diagonal. Moreover, M(T*) = M(T)* with respect to this basis is also a diagonal matrix. Any two diagonal matrices commute. It follows that TT* = T*T, since their corresponding matrices commute:

   M(TT*) = M(T)M(T*) = M(T*)M(T) = M(T*T).
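The forward direction has a convenient computational counterpart: the complex Schur decomposition writes any matrix as an upper-triangular matrix with respect to an orthonormal basis, and for a normal matrix that triangular factor comes out diagonal, exactly as in the argument above. A sketch using scipy (the matrix T below is an arbitrary normal, non-self-adjoint choice):

    import numpy as np
    from scipy.linalg import schur

    adj = lambda A: A.conj().T

    T = np.array([[1j, 1],
                  [-1, 1j]], dtype=complex)
    assert np.allclose(T @ adj(T), adj(T) @ T)    # T is normal

    D, Z = schur(T, output='complex')             # T = Z D Z*, with Z unitary and D upper triangular
    print(np.allclose(D, np.diag(np.diag(D))))    # True: D is in fact diagonal
    print(np.allclose(adj(Z) @ Z, np.eye(2)))     # True: the columns of Z form an orthonormal eigenbasis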

The next corollary gives the best possible decomposition of the complex vector space V into subspaces that are invariant under a normal operator T. On each subspace null(T − λi I), the operator T acts just by multiplication by λi.

Corollary 6. Let T ∈ L(V) be a normal operator. Then

1. Denoting by λ1, . . . , λm the distinct eigenvalues of T,

   V = null(T − λ1 I) ⊕ · · · ⊕ null(T − λm I).

2. If λi ≠ λj, then null(T − λi I) ⊥ null(T − λj I).

As we will see next, the canonical matrix for T admits a unitary diagonalization.

Applications of the spectral theorem: Diagonalization

We already discussed that if e = (e1, . . . , en) is a basis of a vector space V of dimension n and T ∈ L(V), then we can associate a matrix M(T) to T. To remember the dependency on the basis e, let us now denote this matrix by [T]e. That is,

   [Tv]e = [T]e [v]e   for all v ∈ V,

where

   [v]e = [ v1 ]
          [ ⋮  ]
          [ vn ]

is the coordinate vector for v = v1 e1 + · · · + vn en with vi ∈ F.


The operator T is diagonalizable if there exists a basis e such that [T]e is diagonal, that is, if there exist λ1, . . . , λn ∈ F such that

   [T]e = [ λ1      0 ]
          [    ⋱      ]
          [ 0      λn ].

The scalars λ1, . . . , λn are necessarily eigenvalues of T, and e1, . . . , en are the corresponding eigenvectors. Therefore:

Proposition 7. T ∈ L(V) is diagonalizable if and only if there exists a basis (e1, . . . , en) consisting entirely of eigenvectors of T.
We can reformulate this proposition using the change of basis transformations as follows. Suppose that e and f are bases of V such that [T]e is diagonal, and let S be the change of basis transformation such that [v]e = S[v]f. Then S[T]f S⁻¹ = [T]e is diagonal.

Proposition 8. T ∈ L(V) is diagonalizable if and only if there exists an invertible matrix S ∈ F^{n×n} such that

   S[T]f S⁻¹ = [ λ1      0 ]
               [    ⋱      ]
               [ 0      λn ],

where [T]f is the matrix of T with respect to a given arbitrary basis f = (f1, . . . , fn).
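In numerical practice this is what a general eigendecomposition routine produces. A short sketch (the matrix A is an arbitrary diagonalizable, non-normal example; np.linalg.eig returns the eigenvector matrix E, which plays the role of S⁻¹ in Proposition 8 when f is the standard basis):

    import numpy as np

    A = np.array([[2., 1.],
                  [0., 3.]])
    w, E = np.linalg.eig(A)         # columns of E are eigenvectors of A; A E = E diag(w)
    print(w)                        # eigenvalues 2 and 3 (in some order)
    print(np.allclose(np.linalg.inv(E) @ A @ E, np.diag(w)))   # True: E^{-1} A E is diagonal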
On the other hand, the spectral theorem tells us that T is diagonalizable with respect to an orthonormal basis if and only if T is normal. Recall that

   [T*]f = ([T]f)*

for any orthonormal basis f of V. Here

   A* = (āji)   for A = (aij), i, j = 1, . . . , n,

is the complex conjugate transpose of the matrix A. When F = R, then A* = Aᵗ is just the transpose of the matrix, where Aᵗ = (aji).
The change of basis transformation between two orthonormal bases is called unitary in the complex case and orthogonal in the real case. Let e = (e1, . . . , en) and f = (f1, . . . , fn) be two orthonormal bases of V, and let U be the change of basis matrix with [v]f = U[v]e for all v ∈ V. Then

   ⟨ei, ej⟩ = δij = ⟨fi, fj⟩ = ⟨Uei, Uej⟩.

Since this holds on the basis e, we in fact have that U is unitary if and only if

   ⟨Uv, Uw⟩ = ⟨v, w⟩   for all v, w ∈ V.                                   (1)

This means that unitary matrices preserve the inner product. Operators that preserve the inner product are also called isometries. Similar conditions hold for orthogonal matrices. Since, by the definition of the adjoint, ⟨Uv, Uw⟩ = ⟨v, U*Uw⟩, equation (1) also shows that unitary matrices are characterized by the property

   U*U = I   for the unitary case,
   OᵗO = I   for the orthogonal case.

The equation U*U = I implies that U⁻¹ = U*. For finite-dimensional inner product spaces V, the left inverse of an operator is also the right inverse, so that

   UU* = I if and only if U*U = I,
   OOᵗ = I if and only if OᵗO = I.                                        (2)

It is easy to see that the columns of a unitary matrix are the coefficients of the elements of an orthonormal basis with respect to another orthonormal basis. Therefore, the columns are orthonormal vectors in Cⁿ (or in Rⁿ in the real case). By (2), the same is true for the rows of the matrix.
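For a quick numerical illustration, one can manufacture a unitary matrix from the QR factorization of an arbitrary (here random) complex matrix and check both characterizations:

    import numpy as np

    n = 4
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    U, _ = np.linalg.qr(M)                      # the Q factor of an invertible matrix is unitary

    adj = lambda A: A.conj().T
    print(np.allclose(adj(U) @ U, np.eye(n)))   # U*U = I: the columns of U are orthonormal
    print(np.allclose(U @ adj(U), np.eye(n)))   # UU* = I: by (2), the rows are orthonormal as well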
The spectral theorem shows that T is normal if and only if [T]e is diagonal with respect to an orthonormal basis e, that is, if and only if there exists a unitary matrix U such that

   UTU* = [ λ1      0 ]
          [    ⋱      ]
          [ 0      λn ].

Conversely, if a unitary matrix U exists such that UTU* = D is diagonal, then

   TT* − T*T = U*(DD* − D*D)U = 0,

since diagonal matrices commute, and hence T is normal.

Let us summarize all of the definitions so far.


Definition 3.
A is hermitian if A* = A.
A is symmetric if Aᵗ = A.
U is unitary if UU* = I.
O is orthogonal if OOᵗ = I.

Note that all cases of Definition 3 are examples of normal operators. An example of a normal operator N that is none of the above is

   N = i [ 1  1 ]
         [ 1  1 ].

You can easily verify that NN* = N*N. Note that iN is symmetric.
Example 3. Take the matrix

   A = [ 2    1+i ]
       [ 1−i  3   ]

of Example 2. To unitarily diagonalize A, we need to find a unitary matrix U and a diagonal matrix D such that A = UDU⁻¹. To do this, we want to change basis to one composed of orthonormal eigenvectors for T ∈ L(C²) defined by Tv = Av for all v ∈ C².

To find such an orthonormal basis, we start by finding the eigenspaces of T. We already determined that the eigenvalues of T are λ1 = 1 and λ2 = 4, so that

   D = [ 1  0 ]
       [ 0  4 ].

Hence

   C² = null(T − I) ⊕ null(T − 4I) = span((−1−i, 1)) ⊕ span((1+i, 2)).

Now apply the Gram-Schmidt procedure to each eigenspace to obtain the columns of U. Here

   A = UDU⁻¹ = [ −(1+i)/√3   (1+i)/√6 ] [ 1  0 ] [ −(1−i)/√3   1/√3 ]
               [    1/√3       2/√6   ] [ 0  4 ] [  (1−i)/√6   2/√6 ].
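This decomposition can be checked mechanically; the following numpy sketch rebuilds U and D and confirms that U is unitary and that UDU* reproduces A:

    import numpy as np

    A = np.array([[2, 1 + 1j],
                  [1 - 1j, 3]], dtype=complex)
    U = np.array([[-(1 + 1j) / np.sqrt(3), (1 + 1j) / np.sqrt(6)],
                  [1 / np.sqrt(3),         2 / np.sqrt(6)]])
    D = np.diag([1.0, 4.0])

    adj = lambda M: M.conj().T
    print(np.allclose(adj(U) @ U, np.eye(2)))   # U is unitary, so U^{-1} = U*
    print(np.allclose(U @ D @ adj(U), A))       # A = U D U^{-1}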
Note that the diagonal decomposition allows us to compute powers and the exponential of matrices. Namely, if A = UDU⁻¹ where D is diagonal, we have

   Aⁿ = (UDU⁻¹)ⁿ = UDⁿU⁻¹,

   exp(A) = Σ_{k=0}^∞ (1/k!) Aᵏ = U ( Σ_{k=0}^∞ (1/k!) Dᵏ ) U⁻¹ = U exp(D) U⁻¹.

Example 4. Continuing the previous example,

   A² = (UDU⁻¹)² = UD²U⁻¹ = U [ 1   0 ] U⁻¹ = [ 6     5+5i ]
                              [ 0  16 ]        [ 5−5i  11   ],

   Aⁿ = (UDU⁻¹)ⁿ = UDⁿU⁻¹ = U [ 1  0      ] U⁻¹ = [ (2/3)(1 + 2^{2n−1})      ((1+i)/3)(−1 + 2^{2n}) ]
                              [ 0  2^{2n} ]        [ ((1−i)/3)(−1 + 2^{2n})   (1/3)(1 + 2^{2n+1})   ],

   exp(A) = U exp(D) U⁻¹ = U [ e  0  ] U⁻¹ = (1/3) [ 2e + e⁴                (e⁴ − e) + i(e⁴ − e) ]
                             [ 0  e⁴ ]              [ (e⁴ − e) + i(e − e⁴)   e + 2e⁴             ].
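These closed forms are easy to sanity-check; the sketch below compares them against direct matrix computations, using scipy.linalg.expm for the matrix exponential (the particular exponent n = 5 is an arbitrary choice):

    import numpy as np
    from numpy.linalg import matrix_power
    from scipy.linalg import expm

    A = np.array([[2, 1 + 1j],
                  [1 - 1j, 3]], dtype=complex)

    n = 5
    An = np.array([[(2/3) * (1 + 2**(2*n - 1)),     ((1 + 1j)/3) * (-1 + 2**(2*n))],
                   [((1 - 1j)/3) * (-1 + 2**(2*n)), (1/3) * (1 + 2**(2*n + 1))]])
    print(np.allclose(matrix_power(A, n), An))      # True: the closed form for A^n is correct

    e, e4 = np.e, np.e**4
    expA = (1/3) * np.array([[2*e + e4,                 (e4 - e) + 1j*(e4 - e)],
                             [(e4 - e) + 1j*(e - e4),   e + 2*e4]])
    print(np.allclose(expm(A), expA))               # True: the closed form for exp(A) is correct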

Positive operators

Recall that self-adjoint operators are the operator analogue of real numbers. Let us now define the operator analogue of positive (or, more precisely, nonnegative) real numbers.

Definition 4. An operator T ∈ L(V) is called positive (in symbols: T ≥ 0) if T = T* and ⟨Tv, v⟩ ≥ 0 for all v ∈ V.

(If V is a complex vector space, the condition of self-adjointness follows from the condition ⟨Tv, v⟩ ≥ 0 and can hence be dropped.)
Example 5. Note that for all T ∈ L(V) we have T*T ≥ 0, since T*T is self-adjoint and ⟨T*Tv, v⟩ = ⟨Tv, Tv⟩ ≥ 0.
Example 6. Let U ⊆ V be a subspace of V and PU the orthogonal projection onto U. Then PU ≥ 0. To see this, write V = U ⊕ U⊥ and decompose each v ∈ V as v = uv + u⊥v, where uv ∈ U and u⊥v ∈ U⊥. Then

   ⟨PU v, w⟩ = ⟨uv, uw + u⊥w⟩ = ⟨uv, uw⟩ = ⟨uv + u⊥v, uw⟩ = ⟨v, PU w⟩,

so that PU* = PU. Also, setting v = w in the above string of equations, we obtain ⟨PU v, v⟩ = ⟨uv, uv⟩ ≥ 0 for all v ∈ V. Hence PU ≥ 0.
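In coordinates, the orthogonal projection onto U can be written as QQ*, where the columns of Q form an orthonormal basis of U. A brief numerical check that such a projection is positive (the subspace below is spanned by two random vectors in C⁴, an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))   # spans a 2-dimensional subspace U of C^4
    Q, _ = np.linalg.qr(B)                       # orthonormal basis of U
    P = Q @ Q.conj().T                           # orthogonal projection onto U

    print(np.allclose(P, P.conj().T))            # P is self-adjoint
    print(np.all(np.linalg.eigvalsh(P) >= -1e-12))   # its eigenvalues (here 0 and 1) are nonnegative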
If λ is an eigenvalue of a positive operator T and v ∈ V is an associated eigenvector, then ⟨Tv, v⟩ = ⟨λv, v⟩ = λ⟨v, v⟩ ≥ 0. Since ⟨v, v⟩ ≥ 0 for all vectors v ∈ V, it follows that λ ≥ 0. This fact can be used to define √T by setting

   √T ei = √λi ei,

where λi are the eigenvalues of T with respect to the orthonormal basis e = (e1, . . . , en). We know that these exist by the spectral theorem.
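In coordinates this construction amounts to diagonalizing and taking square roots of the eigenvalues. A sketch, reusing the positive matrix of Example 2:

    import numpy as np

    T = np.array([[2, 1 + 1j],
                  [1 - 1j, 3]], dtype=complex)     # hermitian with eigenvalues 1 and 4, hence positive

    lam, E = np.linalg.eigh(T)                     # columns of E: an orthonormal eigenbasis e_1, ..., e_n
    sqrtT = E @ np.diag(np.sqrt(lam)) @ E.conj().T # sqrt(T) e_i = sqrt(lambda_i) e_i

    print(np.allclose(sqrtT @ sqrtT, T))           # True: (sqrt T)^2 = T
    print(np.allclose(sqrtT, sqrtT.conj().T))      # True: sqrt T is again self-adjoint (and positive)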

Polar decomposition

Continuing the analogy between C and L(V), recall the polar form of a complex number z = |z|e^{iθ}, where |z| is the absolute value or length of z and e^{iθ} is an element of the unit circle. In terms of an operator T ∈ L(V), where V is a complex inner product space, a unitary operator U takes the role of e^{iθ} and |T| takes the role of the length. As we discussed above, T*T ≥ 0, so that |T| := √(T*T) exists, and |T| ≥ 0 as well.

Theorem 9. For all T ∈ L(V) there exists a unitary U such that

   T = U|T|.

This is called the polar decomposition of T.
Sketch of proof. We start by noting that

   ‖Tv‖² = ‖ |T|v ‖²,

since ⟨Tv, Tv⟩ = ⟨v, T*Tv⟩ = ⟨√(T*T) v, √(T*T) v⟩. This implies that null T = null |T|. Because of the dimension formula dim null T + dim range T = dim V, this also means that dim range T = dim range |T|. Moreover, we can hence define an isometry S : range |T| → range T by setting

   S(|T|v) = Tv.

The trick is now to define a unitary operator U on all of V such that the restriction of U to the range of |T| is S:

   U|_{range |T|} = S.

Note that null |T| ⊥ range |T|, because for v ∈ null |T| and w = |T|u ∈ range |T| we have

   ⟨w, v⟩ = ⟨|T|u, v⟩ = ⟨u, |T|v⟩ = ⟨u, 0⟩ = 0,

since |T| is self-adjoint.

Pick an orthonormal basis e = (e1, . . . , em) of null |T| and an orthonormal basis f = (f1, . . . , fm) of (range T)⊥. Set S̃ei = fi and extend S̃ to all of null |T| by linearity. Since null |T| ⊥ range |T|, any v ∈ V can be uniquely written as v = v1 + v2, where v1 ∈ null |T| and v2 ∈ range |T|. Now define U : V → V by Uv = S̃v1 + Sv2. Then U is an isometry, and hence unitary, as shown by the following calculation using the Pythagorean theorem:

   ‖Uv‖² = ‖S̃v1 + Sv2‖² = ‖S̃v1‖² + ‖Sv2‖²
         = ‖v1‖² + ‖v2‖² = ‖v‖².

Also, by construction U|T| = T, since the definition of U on null |T| does not matter: |T|v always lies in range |T|, where U agrees with S.
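Numerically, the factors |T| and U can be obtained, for example, with scipy.linalg.polar; a sketch reusing the matrix of Example 1 as test data:

    import numpy as np
    from scipy.linalg import polar

    T = np.array([[0, 2, 1j],
                  [1j, 0, 0],
                  [0, 1, 0]], dtype=complex)

    U, P = polar(T)                                    # right polar decomposition: T = U P with P = |T|
    print(np.allclose(U.conj().T @ U, np.eye(3)))      # U is unitary
    print(np.allclose(P, P.conj().T))                  # P is self-adjoint ...
    print(np.all(np.linalg.eigvalsh(P) >= -1e-12))     # ... with nonnegative eigenvalues, i.e. P >= 0
    print(np.allclose(U @ P, T))                       # T = U |T|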

Singular-value decomposition

The singular-value decomposition generalizes the notion of diagonalization. To unitarily diagonalize T ∈ L(V) means to find an orthonormal basis e such that T is diagonal with respect to this basis,

   M(T; e, e) = [T]e = [ λ1      0 ]
                       [    ⋱      ]
                       [ 0      λn ],

where the notation M(T; e, e) indicates that the basis e is used both for the domain and the codomain of T. The spectral theorem tells us that unitary diagonalization can only be done for normal operators. In general, however, we can find two orthonormal bases e and f such that

   M(T; e, f) = [ s1      0 ]
                [    ⋱      ]
                [ 0      sn ],

which means that Tei = si fi. The scalars si are called the singular values of T. If T is normal, they are the absolute values of the eigenvalues.
Theorem 10. Every T ∈ L(V) has a singular-value decomposition. That is, there exist orthonormal bases e = (e1, . . . , en) and f = (f1, . . . , fn) such that

   Tv = s1⟨v, e1⟩f1 + · · · + sn⟨v, en⟩fn,

where the si are the singular values of T.

Proof. Since |T| ≥ 0 and hence also self-adjoint, there is, by the spectral theorem, an orthonormal basis e = (e1, . . . , en) such that |T|ei = si ei. Let U be the unitary operator from the polar decomposition, so that T = U|T|. Since e is orthonormal, we can write any vector v ∈ V as

   v = ⟨v, e1⟩e1 + · · · + ⟨v, en⟩en,

and hence

   Tv = U|T|v = s1⟨v, e1⟩Ue1 + · · · + sn⟨v, en⟩Uen.

Now set fi = Uei for all 1 ≤ i ≤ n. Since U is unitary, (f1, . . . , fn) is also an orthonormal basis, proving the theorem.
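In matrix terms this is the familiar SVD. A numpy sketch, again reusing the matrix of Example 1, showing that T maps the orthonormal basis e to scaled elements of the orthonormal basis f:

    import numpy as np

    T = np.array([[0, 2, 1j],
                  [1j, 0, 0],
                  [0, 1, 0]], dtype=complex)

    W, s, Vh = np.linalg.svd(T)      # T = W diag(s) Vh, with W and Vh unitary and s >= 0
    E = Vh.conj().T                  # columns e_i: orthonormal basis of the domain
    F = W                            # columns f_i: orthonormal basis of the codomain

    for i in range(3):
        assert np.allclose(T @ E[:, i], s[i] * F[:, i])   # T e_i = s_i f_i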
