Basic Classes of Linear Operators
Israel Gohberg
Seymour Goldberg
Marinus A. Kaashoek
Springer Basel AG
Israel Gohberg
School of Mathematical Sciences
Raymond and Beverly Sackler Faculty of Exact Sciences
Tel Aviv University
Ramat Aviv 69978
Israel
e-mail: [email protected]

Marinus A. Kaashoek
Department of Mathematics and Computer Science
Vrije Universiteit Amsterdam
De Boelelaan 1081a
NL-1081 HV Amsterdam
The Netherlands
e-mail: [email protected]
Seymour Goldberg
Department of Mathematics
University of Maryland
College Park, MD 20742-4015
USA
e-mail: [email protected]
This work is subject to copyright. All rights are reserved, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation,
broadcasting, reproduction on microfilms or in other ways, and storage in data banks. For any
kind of use permission of the copyright owner must be obtained.
ISBN 978-3-7643-6930-9
www.birkhauser-science.com
Dedicated to our grandchildren
Table of Contents
Preface xiii
Introduction xv
Chapter I Hilbert Spaces 1
1.1 Complex n-Space 1
1.2 The Hilbert Space ℓ2 3
1.3 Definition of Hilbert Space and its Elementary Properties 5
1.4 Distance from a Point to a Finite Dimensional Space 8
1.5 The Gram Determinant 10
1.6 Incompatible Systems of Equations 13
1.7 Least Square Fit 15
1.8 Distance to a Convex Set and Projections onto Subspaces 16
1.9 Orthonormal Systems 18
1.10 Szego Polynomials 19
1.11 Legendre Polynomials 24
1.12 Orthonormal Bases 26
1.13 Fourier Series 29
1.14 Completeness of the Legendre Polynomials 31
1.15 Bases for the Hilbert Space of Functions on a Square 32
1.16 Stability of Orthonormal Bases 34
1.17 Separable Spaces 35
1.18 Isometry of Hilbert Spaces 36
1.19 Example of a Non Separable Space 38
Exercises 38
Index 421
Preface
The present book is an expanded and enriched version of the text Basic Operator
Theory, written by the first two authors more than twenty years ago. Since then
the three of us have used the basic operator theory text in various courses. This
experience motivated us to update and improve the old text by including a wider
variety of basic classes of operators and their applications. The present book has
also been written in such a way that it can serve as an introduction to our previous
books Classes of Linear Operators, Volumes I and II. We view the three books as
a unit.
We gratefully acknowledge the support of the mathematical departments of
Tel-Aviv University, the University of Maryland at College Park, and the Vrije
Universiteit at Amsterdam. The generous support of the Silver Family Foundation
is highly appreciated.
A theme which also appears throughout this book is the use of one sided
invertible operators. For example, they appear in the theory of Fredholm, Toeplitz
and singular integral operators.
This book consists, basically, of two unequal parts. The major portion of the text
is devoted to the theory and applications of linear operators on a Hilbert space.
We begin with a chapter on the geometry of Hilbert space and then proceed to
the theory of linear operators on these spaces. The theory is richly illustrated by
numerous examples.
In the next chapter we study Laurent and Toeplitz operators which provide
further illustrations and applications of the general theory. Here the projection
methods and one sided invertibility play an important role.
Spectral theory of compact self adjoint operators appears in the next chapter,
followed by a spectral theory of integral operators. Operational calculus is then
presented as a natural outgrowth of spectral theory. The reader is also shown the
basic theory of unbounded operators on Hilbert space.
The second part of the text concentrates on Banach spaces and linear operators,
both bounded and unbounded, acting on these spaces. It includes, for example, the
three basic principles of linear analysis and the Riesz-Fredholm theory of compact
operators which is based on the theory of one sided invertible operators.
In later chapters we deal with a class of operators, in an infinite dimensional
setting, on which a trace and determinant are defined as extensions of the trace
and determinant of finite matrices. Further developments of Toeplitz operators
and singular integral operators are presented. Projection methods and one sided
invertible operators continue to play an important role. Both parts of the book
contain plenty of applications and examples. All chapters deal exclusively with
linear problems except for the last chapter which is an introduction to the theory
of non linear operators.
In general, in writing this book, the authors were strongly influenced by recent
developments in operator theory which affected the choice of topics, proofs and
exercises.
One of the main features of this book is the large number of new exercises chosen
to expand the reader's comprehension of the material, and to train him or her in
the use of it. In the beginning portion of the book we offer a large selection of
computational exercises; later, the proportion of exercises dealing with theoretical
questions increases. We have, however, omitted exercises after Chapters VII, IX
and XVII due to the specialized nature of the subject matter.
To reach as large an audience as possible, the material is self-contained. Beginning
with finite dimensional vector spaces and matrices, the theory is gradually
developed.
Since the book contains more material than is needed for a one semester course,
we leave the choice of subject matter to the instructor and the interests of the
students.
It is assumed that the reader is familiar with linear algebra and advanced calculus;
some exposure to Lebesgue integration is also helpful. However, any lack
of knowledge of this last topic can be compensated for by referring to Appendix
2 and the references therein.
It has been our experience that exposure to this material stimulated our students
to expand their knowledge of operator theory. We hope that this will continue to be
the case. Included in the book is a list of suggested reading material which should
prove helpful.
Chapter I
Hilbert Spaces
In this chapter we review the main properties of the complex n-dimensional space
ℂⁿ and then we study the Hilbert space which is its most natural infinite dimensional
generalization. Many applications to classical problems are included (least
squares, Fourier series and others).
\[
\|x\| = \langle x, x\rangle^{1/2} = \Bigl(\sum_{i=1}^{n}|x_i|^2\Bigr)^{1/2}.
\]
If x = (x₁, x₂) and y = (y₁, y₂) are vectors in the plane, then ⟨x, y⟩ = ‖x‖‖y‖ cos θ,
where θ is the angle between x and y, 0 ≤ θ ≤ π. Thus |⟨x, y⟩| ≤ ‖x‖‖y‖.
Based only on the properties of the inner product, we shall prove this inequality
in ℂⁿ.

Cauchy-Schwarz Inequality: If x and y are in ℂⁿ, then

\[
|\langle x, y\rangle| \le \|x\|\,\|y\|. \tag{1.1}
\]
\[
\sum_{i=1}^{n}|x_i y_i| \le \Bigl(\sum_{i=1}^{n}|x_i|^2\Bigr)^{1/2}\Bigl(\sum_{i=1}^{n}|y_i|^2\Bigr)^{1/2}. \tag{1.3}
\]
The following inequality, when confined to vectors in the plane, states that the
sum of the lengths of two sides of a triangle exceeds the length of the third side.
□

Triangle inequality: For x and y in ℂⁿ,

\[
\|x + y\| \le \|x\| + \|y\|. \tag{1.4}
\]

□
1.2 The Hilbert Space ℓ2
The passage from ℂⁿ, the space of n-tuples (ξ₁, …, ξ_n) of complex numbers, to
the space consisting of certain infinite sequences (ξ₁, ξ₂, …) of complex numbers
is a very natural one.

If x = (ξ₁, ξ₂, …) and y = (η₁, η₂, …), define x + y = (ξ₁ + η₁, ξ₂ + η₂, …)
and αx = (αξ₁, αξ₂, …) for α ∈ ℂ. The inner product ⟨x, y⟩ = Σ_{i=1}^∞ ξ_i η̄_i and
the corresponding norm ‖x‖ = ⟨x, x⟩^{1/2} = (Σ_{i=1}^∞ |ξ_i|²)^{1/2} make sense provided
Σ_{i=1}^∞ |ξ_i|² < ∞ and Σ_{i=1}^∞ |η_i|² < ∞. Indeed, by (1.3) in Section 1,
\[
\Bigl(\sum_{i=1}^{\infty}|\xi_i + \eta_i|^2\Bigr)^{1/2} \le \Bigl(\sum_{i=1}^{\infty}|\xi_i|^2\Bigr)^{1/2} + \Bigl(\sum_{i=1}^{\infty}|\eta_i|^2\Bigr)^{1/2}.
\]
Thus

\[
|\langle x, y\rangle| \le \|x\|\,\|y\| \qquad\text{and}\qquad \|x + y\| \le \|x\| + \|y\|.
\]
Definition of ℓ2: The vector space ℓ2 consists of the set of those sequences
(ξ₁, ξ₂, …) of complex numbers for which Σ_{i=1}^∞ |ξ_i|² < ∞, together with the
operations of addition and scalar multiplication defined above.

The complex valued function ⟨·,·⟩ defined on ℓ2 × ℓ2 by ⟨x, y⟩ = Σ_{i=1}^∞ ξ_i η̄_i,
where x = (ξ_i), y = (η_i), is called the inner product on ℓ2. Clearly, the inner
product satisfies (i)-(iv) in Section 1.

The norm ‖x‖ on ℓ2 is defined by

\[
\|x\| = \langle x, x\rangle^{1/2} = \Bigl(\sum_{i=1}^{\infty}|\xi_i|^2\Bigr)^{1/2}.
\]

The distance between the vectors x = (ξ₁, ξ₂, …) and y = (η₁, η₂, …) is
‖x − y‖ = (Σ_{i=1}^∞ |ξ_i − η_i|²)^{1/2}.

The extension of ℂⁿ to ℓ2 is not just an interesting exercise. As we shall see,
ℓ2 arises from the study of Fourier series, integral equations, infinite systems of
linear equations, and many applied problems.
In a treatment of advanced calculus it is shown that every Cauchy sequence in
ℂⁿ converges, i.e., if {x_k} is a sequence of vectors in ℂⁿ such that ‖x_m − x_n‖ → 0
as m, n → ∞, then there exists an x ∈ ℂⁿ such that ‖x_n − x‖ → 0. This enables
one to prove, for example, the convergence of certain series.

The space ℓ2 has the same important property, which is called the completeness
property of ℓ2.
Proof: Suppose x_n = (ξ₁^{(n)}, ξ₂^{(n)}, …) ∈ ℓ2. The idea of the proof is to show that
for each fixed k, the sequence of components ξ_k^{(1)}, ξ_k^{(2)}, … converges to some
number ξ_k and that x = (ξ₁, ξ₂, …) is the desired limit of {x_n}.

For k fixed,

\[
|\xi_k^{(n)} - \xi_k^{(m)}| \le \Bigl(\sum_{i=1}^{\infty}|\xi_i^{(n)} - \xi_i^{(m)}|^2\Bigr)^{1/2} = \|x_n - x_m\| \to 0 \quad (m, n \to \infty).
\]

Thus for each k, {ξ_k^{(n)}}_{n=1}^∞ is a Cauchy sequence of complex numbers which therefore
converges. Take ξ_k = lim_{n→∞} ξ_k^{(n)} and x = (ξ₁, ξ₂, …). First we show that
x is in ℓ2.

For any positive integer j,

\[
\sum_{k=1}^{j}|\xi_k|^2 = \lim_{n\to\infty}\sum_{k=1}^{j}|\xi_k^{(n)}|^2 \tag{2.2}
\]

and

\[
\sum_{k=1}^{j}|\xi_k^{(n)}|^2 \le \|x_n\|^2. \tag{2.3}
\]

Now sup_n ‖x_n‖ = M < ∞ since |‖x_n‖ − ‖x_m‖| ≤ ‖x_n − x_m‖ → 0 as m,
n → ∞. It therefore follows from (2.2) and (2.3) that Σ_{k=1}^∞ |ξ_k|² ≤ M², i.e., x is
in ℓ2.

Finally, we prove that ‖x_n − x‖ → 0. Let ε > 0 be given. There exists an
integer N such that if m, n ≥ N, and p is any positive integer,

\[
\sum_{k=1}^{p}|\xi_k^{(n)} - \xi_k^{(m)}|^2 \le \|x_n - x_m\|^2 \le \varepsilon. \tag{2.4}
\]

Letting m → ∞ and then p → ∞ in (2.4), we get, for n ≥ N,

\[
\|x_n - x\|^2 = \sum_{k=1}^{\infty}|\xi_k^{(n)} - \xi_k|^2 \le \varepsilon. \qquad\square
\]
Sometimes we shall also deal with ℓ2(ℤ), the two-sided version of ℓ2. The
vector space ℓ2(ℤ) consists of all doubly infinite sequences (…, ξ₋₁, ξ₀, ξ₁, …) of
complex numbers that are square summable, that is, Σ_{j=−∞}^∞ |ξ_j|² < ∞. As with its
one-sided version ℓ2, the space ℓ2(ℤ) is a vector space with addition and scalar
multiplication being defined entrywise. Also, ℓ2(ℤ) has a natural inner product,
namely ⟨x, y⟩ = Σ_{j=−∞}^∞ ξ_j η̄_j, where x = (ξ_j)_{j∈ℤ} and y = (η_j)_{j∈ℤ}. The
corresponding norm is given by

\[
\|x\| = \Bigl(\sum_{j=-\infty}^{\infty}|\xi_j|^2\Bigr)^{1/2},
\]

and Theorem 1.2.1 above remains valid with ℓ2(ℤ) in place of ℓ2. In other words,
the space ℓ2(ℤ) has the completeness property. □
1.3 Definition of Hilbert Space and its Elementary Properties

For all x, y and z in E,

(a) x + y = y + x;
(b) (x + y) + z = x + (y + z);
(c) There exists a vector 0 ∈ E such that x + 0 = x for all x ∈ E.
(d) For each x ∈ E there exists a vector −x ∈ E such that x + (−x) = 0.

For every α and β in ℂ,

(e) α(x + y) = αx + αy;
(f) (α + β)x = αx + βx;
(g) (αβ)x = α(βx);
(h) 1x = x.
Definition: A vector space over ℂ is a set E, whose elements are called vectors,
together with two rules called addition and scalar multiplication. These rules
associate with any pair of vectors x and y in E and any α ∈ ℂ, unique vectors in E,
denoted by x + y and αx, such that (a)-(h) hold.
\[
\langle f, g\rangle = \int_a^b f(x)\,\overline{g(x)}\,dx
\]
Proof: The proofs of (i) and (ii) are exactly the same as the proofs of the
inequalities (1.1) and (1.4) in Section 1. Also,

\[
\begin{aligned}
\|x+y\|^2 + \|x-y\|^2 &= \langle x+y, x+y\rangle + \langle x-y, x-y\rangle\\
&= \|x\|^2 + 2\operatorname{Re}\langle x, y\rangle + \|y\|^2 + \|x\|^2 - 2\operatorname{Re}\langle x, y\rangle + \|y\|^2\\
&= 2\bigl(\|x\|^2 + \|y\|^2\bigr).
\end{aligned}
\]
3. L2([a, b]) is a Hilbert space (cf. Appendix 2). This space is infinite
dimensional since the functions 1, x, x², … are linearly independent. For
if Σ_{k=0}^n a_k x^k is the zero function, then a_k = 0, 0 ≤ k ≤ n, since any
polynomial of degree n has at most n zeros.
4. The space ℓ_f of sequences with only finitely many non-zero entries is not
complete. Indeed, let x_n = (1/2, 1/2², …, 1/2ⁿ, 0, 0, …) ∈ ℓ_f. Then
for n > m,

\[
\|x_n - x_m\|^2 < \sum_{k=m+1}^{\infty}\frac{1}{2^{k}} = \frac{1}{2^{m}} \to 0.
\]
Then for g(x) = 1/(1 − x/7), 0 ≤ x ≤ 1, we have p_n → g in L2([0, 1]). Thus, {p_n}
is a Cauchy sequence in P which does not converge to a vector in P since
g ∉ P ⊂ L2([0, 1]).
We shall see later that every finite dimensional inner product space is complete.

1.4 Distance from a Point to a Finite Dimensional Space

Definition: The distance d(v, S) from a point v ∈ E to a set S ⊂ E is defined
by

\[
d(v, S) = \inf\{\|v - s\| : s \in S\}.
\]

We shall show that if M is a finite dimensional subspace of E, then for each
v ∈ E there exists a unique w ∈ M such that d(v, M) = ‖v − w‖. Hence
‖v − w‖ < ‖v − z‖ for all z ∈ M, z ≠ w.
The following preliminary results are used.
Every finite dimensional inner product space has an orthonormal basis. This result
is a special case of Theorem 9.1.
The following simple theorem is useful for calculations.
Proof: To find w so that v − w ⊥ M, let φ₁, …, φ_n be an orthonormal basis for M.
Then w = Σ_{j=1}^n α_jφ_j and for each k,

\[
0 = \langle v - w, \varphi_k\rangle = \langle v, \varphi_k\rangle - \alpha_k.
\]

Thus

\[
w = \sum_{j=1}^{n}\langle v, \varphi_j\rangle\varphi_j. \tag{4.2}
\]
Theorem 4.2 Let M be a finite dimensional subspace of E and let {φ₁, …, φ_n}
be an orthonormal basis for M. For each v ∈ E, the vector w = Σ_{j=1}^n ⟨v, φ_j⟩φ_j
is the unique vector in M with the property that ‖v − w‖ = d(v, M).
The equivalence of a closest vector and orthogonality is given in the next theorem.
Hence

\[
2\operatorname{Re}\lambda\langle z, v - w\rangle \le |\lambda|^2\|z\|^2.
\]

Taking λ = r\overline{⟨z, v − w⟩}, where r is a real number, we get
1.5 The Gram Determinant

Throughout this section, E denotes an inner product space. Let y₁, …, y_n be
a basis (not necessarily orthogonal) for a subspace M of E. We know from
Theorem 4.2 that there exists a unique w = Σ_{i=1}^n α_i y_i in M such that ‖y − w‖ =
d(y, M) or, equivalently, y − w ⊥ M. Our aim in this section is to find each α_i
and d(y, M) in terms of the basis.

Now

\[
0 = \langle y - w, y_j\rangle = \langle y, y_j\rangle - \sum_{i=1}^{n}\alpha_i\langle y_i, y_j\rangle, \qquad 1 \le j \le n,
\]

or

\[
\begin{aligned}
\alpha_1\langle y_1, y_1\rangle + \alpha_2\langle y_2, y_1\rangle + \cdots + \alpha_n\langle y_n, y_1\rangle &= \langle y, y_1\rangle\\
&\;\;\vdots \tag{5.1}\\
\alpha_1\langle y_1, y_n\rangle + \alpha_2\langle y_2, y_n\rangle + \cdots + \alpha_n\langle y_n, y_n\rangle &= \langle y, y_n\rangle.
\end{aligned}
\]

Since the set of α_i is unique, we have by Cramer's rule that

\[
\alpha_j = \frac{D_j}{g(y_1, \ldots, y_n)}, \tag{5.2}
\]

where g(y₁, …, y_n) is the (non-zero) determinant of the coefficient matrix
\[
d^2 = \|y - w\|^2 = \langle y - w, y - w\rangle = \langle y, y\rangle - \langle w, y\rangle
= \langle y, y\rangle - \sum_{i=1}^{n}\alpha_i\langle y_i, y\rangle,
\]

or, setting

\[
x_i = \alpha_i, \quad 1 \le i \le n, \qquad x_{n+1} = 1,
\]

and letting C_k denote the cofactor of the k-th entry of the last row of the matrix
below,

\[
d^2\,g(y_1, \ldots, y_n) = \langle y, y\rangle C_{n+1} + \sum_{k=1}^{n}\langle y_k, y\rangle C_k
= \det\begin{pmatrix}
\langle y_1, y_1\rangle & \cdots & \langle y_n, y_1\rangle & \langle y, y_1\rangle\\
\vdots & & \vdots & \vdots\\
\langle y_1, y_n\rangle & \cdots & \langle y_n, y_n\rangle & \langle y, y_n\rangle\\
\langle y_1, y\rangle & \cdots & \langle y_n, y\rangle & \langle y, y\rangle
\end{pmatrix}
= g(y_1, \ldots, y_n, y). \tag{5.5}
\]
Now g(y₁) = ⟨y₁, y₁⟩ > 0. Applying (5.5) to M_k = sp{y₁, …, y_k} and y = y_{k+1},
and noting that d(y_{k+1}, M_k) ≤ ‖y_{k+1}‖, it follows by induction that

\[
d(y, M) = \biggl(\frac{g(y_1, \ldots, y_n, y)}{g(y_1, \ldots, y_n)}\biggr)^{1/2}
\]

and

\[
0 < g(y_1, \ldots, y_n) \le \|y_1\|^2\|y_2\|^2\cdots\|y_n\|^2,
\]

where g(x₁, …, x_j) = det(⟨x_i, x_j⟩) is the Gram determinant corresponding to
(x₁, x₂, …, x_j).

The vector w ∈ M for which d(y, M) = ‖y − w‖ is given by (4.2) and (5.2).
Figure 1

Let A be the matrix whose i-th row is the coordinate vector of y_i. Then

\[
\|y_1\|^2\|y_2\|^2\cdots\|y_n\|^2 \ge g(y_1, \ldots, y_n) = \det\bigl(\langle y_i, y_j\rangle\bigr) = \det AA^{*}
= \det A\,\overline{\det A} = |\det A|^2,
\]

which establishes the desired inequality. □
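Both formulas of this section, the Gram-determinant expression for d(y, M) and the bound 0 < g(y₁, …, y_n) ≤ ∏‖y_i‖², can be verified numerically. A sketch with illustrative vectors in ℂ⁴ (the data are hypothetical):

```python
import numpy as np

y1 = np.array([1.0, 0.0, 1.0, 0.0])
y2 = np.array([1.0, 2.0, 0.0, 1.0])
y  = np.array([0.0, 1.0, 2.0, 3.0])

def gram(*vs):
    # g(x1, ..., xj) = det(<x_i, x_j>)
    return np.linalg.det(np.array([[np.vdot(b, a) for a in vs] for b in vs]))

# d(y, M) = ( g(y1, ..., yn, y) / g(y1, ..., yn) )**(1/2)
d_gram = np.sqrt(gram(y1, y2, y) / gram(y1, y2))

# Compare with the distance computed by ordinary least squares projection.
A = np.column_stack([y1, y2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
d_lsq = np.linalg.norm(y - A @ coeffs)
assert abs(d_gram - d_lsq) < 1e-10

# Hadamard-type bound: 0 < g(y1, y2) <= ||y1||^2 ||y2||^2
g = gram(y1, y2)
assert 0 < g <= np.linalg.norm(y1) ** 2 * np.linalg.norm(y2) ** 2
```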
1.6 Incompatible Systems of Equations
where in the i-th experiment, a_{ij} is the measurement of x_j and y_i is the measurement
of y.

Even if this system of equations could be solved, the solutions would only
approximate the desired λ_i since measurements are not exact. Moreover, if m > n,
there might not be a solution, in which case the system of equations is called
incompatible.
As a compromise, we seek to minimize the deviation between the right and left
sides of each equation. To be specific, it is desired to find α₁, α₂, …, α_n so that

\[
f(\lambda_1, \ldots, \lambda_n) = \Bigl|y_1 - \sum_{i=1}^{n}a_{1i}\lambda_i\Bigr|^2 + \cdots + \Bigl|y_m - \sum_{i=1}^{n}a_{mi}\lambda_i\Bigr|^2
\]

attains its minimum at (α₁, …, α_n).

The solution to this minimization problem is obtained directly from Theorem 5.1
as follows.
Let

\[
A_i = (a_{1i}, \ldots, a_{mi}) \in \mathbb{C}^m, \quad 1 \le i \le n, \qquad y = (y_1, \ldots, y_m) \in \mathbb{C}^m.
\]

The components of A_i and y correspond to the columns appearing in (6.1). Let
M = sp{A_i} ⊂ ℂ^m. Since, in practice, there are usually more experiments than
there are variables, i.e., m > n, we assume that A₁, …, A_n are linearly independent
in ℂ^m. In this setting we seek (α₁, …, α_n) so that for w = Σ_{i=1}^n α_i A_i and
any λ₁, …, λ_n in ℂ,

\[
\|y - w\| \le \Bigl\|y - \sum_{i=1}^{n}\lambda_i A_i\Bigr\|.
\]
1.7 Least Square Fit

Given points t₁, …, t_k in the interval [a, b] and given complex numbers y₁, …, y_k,
let us first consider the problem to find a polynomial P of degree at most k − 1
such that P(t_j) = y_j, 1 ≤ j ≤ k. This problem has a solution which is given by
the Lagrange interpolation formula:

\[
P(t) = \sum_{j=1}^{k} y_j\,p_j(t), \qquad p_j(t) = \prod_{\substack{i=1\\ i\ne j}}^{k}\frac{t - t_i}{t_j - t_i}.
\]

We now seek the polynomial P of degree at most n for which

\[
S(P) = \sum_{i=1}^{k}|y_i - P(t_i)|^2
\]

has the smallest value among all polynomials of degree at most n.
To solve this problem of "least squares," let E be the vector space of all complex
valued functions defined on {t₁, …, t_k}. Define ⟨g, h⟩ = Σ_{i=1}^k g(t_i)\overline{h(t_i)}, g, h ∈ E.
Clearly, ⟨·,·⟩ is an inner product on E. Let M be the subspace of E consisting of
all polynomials of degree at most n. The polynomials {1, t, …, tⁿ} are linearly
independent in M; for if P(t) = Σ_{j=0}^n a_j t^j = 0, i.e., P(t_j) = 0, 1 ≤ j ≤ k,
then P is a polynomial of degree at most n < k which has at least n + 1 zeros. Thus
a_j = 0, 0 ≤ j ≤ n.
For 0 ≤ i, j ≤ n, let

\[
c_{ij} = \langle t^{j}, t^{i}\rangle = \sum_{m=1}^{k} t_m^{\,i+j}, \qquad
B_i = \sum_{m=1}^{k} y_m t_m^{\,i}, \qquad
B_{n+1} = \sum_{m=1}^{k} |y_m|^2.
\]

It follows from Theorem 5.1 that the desired polynomial P is given by

\[
P(t) = -\frac{1}{\det(c_{ij})}\,\det\begin{pmatrix}
c_{00} & \cdots & c_{0n} & B_0\\
\vdots & & \vdots & \vdots\\
c_{n0} & \cdots & c_{nn} & B_n\\
1 & \cdots & t^{n} & 0
\end{pmatrix}
\]

and

\[
d^{2}(y, M) = \frac{1}{\det(c_{ij})}\,\det\begin{pmatrix}
c_{00} & \cdots & c_{0n} & B_0\\
\vdots & & \vdots & \vdots\\
c_{n0} & \cdots & c_{nn} & B_n\\
\overline{B_0} & \cdots & \overline{B_n} & B_{n+1}
\end{pmatrix},
\]

where y denotes the function on {t₁, …, t_k} with y(t_i) = y_i.
In practice, some of the data points y₁, …, y_k are, for various reasons, more
reliable than others. Therefore, certain "weights" ρ_i are chosen and the corresponding
least squares problem is to find the polynomial of degree at most n which
minimizes

\[
S(P) = \sum_{i=1}^{k}|y_i - P(t_i)|^2\rho_i
\]

among all polynomials of degree at most n. If we apply the above results to E with
the inner product ⟨g, h⟩ = Σ_{i=1}^k g(t_i)\overline{h(t_i)}ρ_i, the desired polynomial is obtained.
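The weighted least squares recipe above amounts to solving the normal equations (5.1) for the weighted inner product. A minimal sketch with hypothetical data points and weights (not from the text):

```python
import numpy as np

# Fit a polynomial of degree n to data (t_i, y_i) with weights rho_i by solving
# the normal equations for the inner product <g, h> = sum_i g(t_i) conj(h(t_i)) rho_i.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.0, 1.8, 3.1, 4.2, 4.9])
rho = np.array([1.0, 1.0, 2.0, 1.0, 1.0])   # third point deemed more reliable
n = 1                                        # fit a line

powers = np.vander(t, n + 1, increasing=True)   # columns 1, t, ..., t^n
G = powers.T @ (rho[:, None] * powers)          # Gram matrix c_ij
b = powers.T @ (rho * y)                        # right-hand sides B_i
alpha = np.linalg.solve(G, b)                   # coefficients a_0, ..., a_n of P

# The residual y - P(t) is rho-orthogonal to every basis polynomial t^j.
resid = y - powers @ alpha
for j in range(n + 1):
    assert abs(np.sum(resid * t ** j * rho)) < 1e-10
```

The orthogonality check is exactly the condition y − w ⊥ M of Section 1.5, restated for the weighted inner product.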
1.8 Distance to a Convex Set and Projections onto Subspaces

Throughout this chapter, H denotes a Hilbert space. It was shown in Theorem 4.2
that the distance from a vector to a finite dimensional subspace is always attained.
This result can be extended to closed convex sets. However, a very different proof
is needed.

Definition: A set C ⊂ H is convex if for any two vectors x and y in C, the set
{λx + (1 − λ)y : 0 ≤ λ ≤ 1} is contained in C.

By the parallelogram law,

\[
2\bigl(\|y - z_n\|^2 + \|y - z_m\|^2\bigr) = \|2y - (z_n + z_m)\|^2 + \|z_n - z_m\|^2 \tag{8.1}
\]
as n, m → ∞. Since H is complete and M is closed, there exists a w ∈ M such
that z_n → w. Thus d = lim_{n→∞} ‖y − z_n‖ = ‖y − w‖. Finally, suppose z ∈ M
and d = ‖y − z‖. Computing the distance between y and the midpoint of the
segment joining z to w, namely ½(z + w) ∈ M, we obtain

\[
d^2 \le \bigl\|y - \tfrac{1}{2}(z + w)\bigr\|^2 = \bigl\|\tfrac{1}{2}(y - z) + \tfrac{1}{2}(y - w)\bigr\|^2.
\]

Hence, by the parallelogram law applied to ½(y − z) and ½(y − w),

\[
d^2 \le \bigl\|\tfrac{1}{2}(y - z) + \tfrac{1}{2}(y - w)\bigr\|^2
= 2\Bigl(\bigl\|\tfrac{1}{2}(y - z)\bigr\|^2 + \bigl\|\tfrac{1}{2}(y - w)\bigr\|^2\Bigr) - \bigl\|\tfrac{1}{2}(z - w)\bigr\|^2.
\]
Since any r-ball S_r in H is closed and convex, we can apply the above theorem
to S_r.
Proof: By Theorems 8.1 and 4.3, there exists a unique w ∈ M such that v =
y − w ∈ M⊥ and y = w + v. Suppose y = w₁ + v₁, w₁ ∈ M, v₁ ∈ M⊥. Then
y − w₁ ∈ M⊥. Hence by the uniqueness of w, w = w₁ and therefore v = v₁. □
1.9 Orthonormal Systems

We recall from linear algebra that given linearly independent vectors y₁, y₂, …, y_n
in H, there exists an orthonormal set of vectors φ₁, φ₂, …, φ_n in H such that
sp{φ_i}_{i=1}^k = sp{y_i}_{i=1}^k, 1 ≤ k ≤ n. The φ_i are defined inductively as follows:

\[
\varphi_1 = \frac{y_1}{\|y_1\|}, \qquad \varphi_k = \frac{y_k - w_{k-1}}{\|y_k - w_{k-1}\|},
\]

where

\[
w_{k-1} = \sum_{i=1}^{k-1}\langle y_k, \varphi_i\rangle\varphi_i
\]

and

\[
\|y_k - w_{k-1}\| = d(y_k, \mathrm{sp}\{y_1, \ldots, y_{k-1}\}).
\]
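The Gram-Schmidt recursion can be transcribed almost literally into code. A sketch (assuming, as in the text, that the input vectors are linearly independent; the sample vectors are illustrative):

```python
import numpy as np

def gram_schmidt(ys):
    # phi_k = (y_k - w_{k-1}) / ||y_k - w_{k-1}||, where w_{k-1} is the
    # projection sum_{i<k} <y_k, phi_i> phi_i onto span{phi_1, ..., phi_{k-1}}.
    phis = []
    for yk in ys:
        w = sum(np.vdot(phi, yk) * phi for phi in phis)
        v = yk - w
        phis.append(v / np.linalg.norm(v))   # nonzero by linear independence
    return phis

ys = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
phis = gram_schmidt(ys)

# The output is an orthonormal system.
for i, p in enumerate(phis):
    for j, q in enumerate(phis):
        assert abs(np.vdot(p, q) - (1.0 if i == j else 0.0)) < 1e-12
```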
The following result enables one to find φ_n without first determining φ₁, φ₂, …,
φ_{n−1}.

Theorem 9.1 Let y₁, y₂, … be linearly independent in an inner product space E
and let φ₁, φ₂, … be the corresponding orthonormal set obtained by applying the
Gram-Schmidt orthogonalization procedure to y₁, y₂, …. Then

\[
\varphi_1 = \frac{y_1}{\|y_1\|}, \qquad
\varphi_{n+1} = \frac{y_{n+1} - w}{\|y_{n+1} - w\|} = \Bigl(\frac{d_n}{d_{n+1}}\Bigr)^{1/2}(y_{n+1} - w),
\]

where w is the projection of y_{n+1} onto sp{y₁, …, y_n}, d_k = g(y₁, …, y_k), and

\[
y_{n+1} - w = \frac{1}{d_n}\det\begin{pmatrix}
\langle y_1, y_1\rangle & \langle y_2, y_1\rangle & \cdots & \langle y_{n+1}, y_1\rangle\\
\vdots & & & \vdots\\
\langle y_1, y_n\rangle & \langle y_2, y_n\rangle & \cdots & \langle y_{n+1}, y_n\rangle\\
y_1 & y_2 & \cdots & y_{n+1}
\end{pmatrix}.
\]
It is obvious from the theorem above that every infinite dimensional Hilbert space
contains an infinite orthonormal set.
1.10 Szego Polynomials

Let w be a strictly positive Lebesgue integrable function defined on the unit circle.
Define on the vector space P of polynomials with complex coefficients an inner
product by

\[
\langle p, q\rangle_w = \int_{-\pi}^{\pi} p(e^{i\theta})\,\overline{q(e^{i\theta})}\,w(e^{i\theta})\,d\theta.
\]

Applying the Gram-Schmidt procedure to 1, z, z², … yields the Szego polynomials
φ₀, φ₁, φ₂, … for the weight w. If w is the constant function 1/2π, then the
corresponding Szego polynomials are 1, z, z², ….
In this section we prove three theorems. The first two describe in two different
ways the Szego polynomials. The third is Szego's theorem which states that all
the zeros of the Szego polynomials lie inside the unit circle.
Theorem 10.1 The Szego polynomials φ₀, φ₁, … corresponding to the strictly
positive weight w are given by

\[
\varphi_n(z) = \frac{1}{\sqrt{d_{n-1}d_n}}\,\det\begin{pmatrix}
a_0 & a_{-1} & \cdots & a_{-n}\\
a_1 & a_0 & \cdots & a_{-n+1}\\
\vdots & & & \vdots\\
a_{n-1} & a_{n-2} & \cdots & a_{-1}\\
1 & z & \cdots & z^{n}
\end{pmatrix}, \qquad n \ge 1, \tag{10.2}
\]

where

\[
a_m = \int_{-\pi}^{\pi} e^{-im\theta}w(e^{i\theta})\,d\theta, \qquad m \in \mathbb{Z}, \tag{10.3}
\]

and d_n is the Toeplitz determinant

\[
d_n = \det\begin{pmatrix}
a_0 & a_{-1} & \cdots & a_{-n}\\
a_1 & a_0 & \cdots & a_{-n+1}\\
\vdots & & & \vdots\\
a_n & a_{n-1} & \cdots & a_0
\end{pmatrix}, \qquad n \ge 0, \tag{10.4}
\]

with

\[
\varphi_0(z) = a_0^{-1/2}. \tag{10.5}
\]
Proof: Since

\[
\langle z^{j}, z^{k}\rangle_w = \int_{-\pi}^{\pi} e^{-i(k-j)\theta}w(e^{i\theta})\,d\theta = a_{k-j}, \qquad k, j = 0, 1, 2, \ldots, \tag{10.6}
\]

we have

\[
g(1, z, \ldots, z^{n}) = d_n. \tag{10.7}
\]

Equality (10.2) now follows directly from (10.6), (10.7) and Theorem 9.1 applied
to the polynomials 1, z, …, zⁿ. □
Let β₀, β₁, …, β_n be the solution of the system

\[
\begin{pmatrix}
a_0 & a_{-1} & \cdots & a_{-n}\\
a_1 & a_0 & \cdots & a_{-n+1}\\
\vdots & & & \vdots\\
a_n & a_{n-1} & \cdots & a_0
\end{pmatrix}
\begin{pmatrix}\beta_0\\ \beta_1\\ \vdots\\ \beta_n\end{pmatrix}
= \begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix}. \tag{10.8}
\]

These numbers can be used to give an alternative description of the n-th Szego
polynomial.

Theorem 10.2 Let β₀, β₁, …, β_n be numbers such that (10.8) holds, and put

\[
p_n(z) = \beta_0 + \beta_1 z + \cdots + \beta_n z^{n}. \tag{10.9}
\]

Then the n-th Szego polynomial φ_n is given by

\[
\varphi_n(z) = \Bigl(\frac{d_n}{d_{n-1}}\Bigr)^{1/2} p_n(z), \qquad n = 0, 1, 2, \ldots, \tag{10.10}
\]

where d_k is defined by (10.4) and d₋₁ = 1.
Proof: Notice that for n = 0 the system (10.8) reduces to 1 = T₀β₀ = a₀β₀.
Therefore β₀ = a₀^{−1} and thus

\[
d_0^{1/2}p_0(z) = a_0^{1/2}a_0^{-1} = a_0^{-1/2} = \varphi_0(z),
\]

which proves (10.10) for n = 0.

For n ≥ 1, we find the j-th coefficient β_j of p_n(z) by applying Cramer's rule to
the system of equations corresponding to the matrix equation (10.8). We get

\[
\beta_j = (-1)^{n+j}\,\frac{\Delta_j}{d_n}, \tag{10.11}
\]

where Δ_j is the determinant of the matrix obtained by deleting the last row and
the (j + 1)-st column of the matrix in (10.8).

Now from (10.2) and (10.9) we have that the coefficient of z^j of φ_n(z) is

\[
\frac{(-1)^{n+j}\Delta_j}{\sqrt{d_{n-1}d_n}} = \Bigl(\frac{d_n}{d_{n-1}}\Bigr)^{1/2}\beta_j,
\]

by (10.11), which proves (10.10). □
Theorem 10.3 (Szego's Theorem). All the zeros of the Szego polynomials for the
positive weight w are contained inside the unit circle.

Proof: From equality (10.10) we see that it suffices to prove the theorem for p_n(z),
where p_n(z) is defined by (10.8) and (10.9). Let λ be any zero of p_n and write
p_n(z) = (z − λ)q(z), where q has degree n − 1. Since p_n is orthogonal to all
polynomials of degree less than n, ⟨p_n, q⟩_w = 0, and hence ⟨zq, q⟩_w = λ⟨q, q⟩_w.
Since |z| = 1 on the circle, ⟨zq, zq⟩_w = ⟨q, q⟩_w, and therefore

\[
\langle p_n, p_n\rangle_w = \langle zq, p_n\rangle_w = \langle zq, zq\rangle_w - \bar\lambda\langle zq, q\rangle_w
= (1 - |\lambda|^2)\langle q, q\rangle_w.
\]

Thus

\[
(1 - |\lambda|^2)\langle q, q\rangle_w = \langle p_n, p_n\rangle_w > 0,
\]

which implies |λ| < 1. □
Examples: 1) Let w₁ be the weight given by

\[
w_1(z) = \frac{-3z}{2z^2 - 5z + 2} \qquad (|z| = 1).
\]

Since z̄ = 1/z for |z| = 1, we have

\[
w_1(z) = \frac{3}{(2z - 1)(2\bar z - 1)} = \frac{3}{|2z - 1|^2}, \qquad |z| = 1.
\]

Thus the weight w₁ is strictly positive on the unit circle. Also, it is easily seen that

\[
w_1(z) = \frac{1}{1 - \tfrac{1}{2}z} + \frac{1}{1 - \tfrac{1}{2}\bar z} - 1
= \sum_{j=-\infty}^{\infty} 2^{-|j|}z^{j} \qquad (|z| = 1).
\]
Now let us use Theorem 10.2 to find the corresponding n-th Szego polynomial
φ_n. By (10.3), a_m = 2π·2^{−|m|}; dropping the common positive factor 2π (which
merely multiplies φ_n by a positive constant and does not affect its zeros), this
requires solving the following system of equations:

\[
\begin{pmatrix}
1 & \tfrac12 & \cdots & \tfrac1{2^{n}}\\
\tfrac12 & 1 & \cdots & \tfrac1{2^{n-1}}\\
\vdots & & & \vdots\\
\tfrac1{2^{n}} & \tfrac1{2^{n-1}} & \cdots & 1
\end{pmatrix}
\begin{pmatrix}\beta_0\\ \beta_1\\ \vdots\\ \beta_n\end{pmatrix}
= \begin{pmatrix}0\\ \vdots\\ 0\\ 1\end{pmatrix}. \tag{10.13}
\]

Solving this system (for instance by elimination) we obtain the following solution:

\[
\beta_0 = \cdots = \beta_{n-2} = 0, \qquad \beta_{n-1} = -\tfrac{2}{3}, \qquad \beta_n = \tfrac{4}{3}.
\]

From Theorem 10.2 we conclude that the n-th Szego polynomial for the weight
w₁ is given by

\[
\varphi_n(z) = \Bigl(\frac{d_n}{d_{n-1}}\Bigr)^{1/2}\Bigl(\tfrac{4}{3}z^{n} - \tfrac{2}{3}z^{n-1}\Bigr),
\]

where d₋₁ = 1 and d_n is the determinant of the coefficient matrix of (10.13).
Notice that the zeros of φ_n are given by z = 0 and z = ½, which is in agreement
with Theorem 10.3.
2) In this example we take as a weight the function w₂(z) = |2z − 1|² = 5 − 2z − 2z̄
(|z| = 1), for which (again dropping the common factor 2π from the a_m) the
coefficient matrix of (10.8) is the tridiagonal matrix
\[
T_n = \begin{pmatrix}
5 & -2 & 0 & \cdots & 0 & 0\\
-2 & 5 & -2 & \cdots & 0 & 0\\
0 & -2 & 5 & \cdots & 0 & 0\\
\vdots & & & \ddots & & \vdots\\
0 & 0 & 0 & \cdots & 5 & -2\\
0 & 0 & 0 & \cdots & -2 & 5
\end{pmatrix}. \tag{10.14}
\]
Put d_n = det T_n, n = 0, 1, 2, …, and set d₋₁ = 1. From the tridiagonal structure
of the matrix it follows that the determinants d_n satisfy the following recurrence
relation:

\[
d_k = 5d_{k-1} - 4d_{k-2}, \qquad k = 1, 2, \ldots.
\]

This together with d₋₁ = 1 and d₀ = 5 yields

\[
d_k = \sum_{j=0}^{\gamma}(-1)^{j}\binom{k+1-j}{j}4^{j}\,5^{\,k+1-2j},
\]

where γ = ⌊(k + 1)/2⌋ denotes the largest integer which does not exceed (k + 1)/2.
The above formula for d_k is proved by induction using the above recurrence
relation.
Using Cramer's rule it is straightforward to check that for this example the
solution β₀, …, β_n of equation (10.8) is given by

\[
\beta_k = 2^{\,n-k}\,\frac{d_{k-1}}{d_n}, \qquad k = 0, 1, 2, \ldots, n.
\]

Thus the n-th Szego polynomial for the weight w₂ has the form

\[
\varphi_n(z) = \frac{1}{\sqrt{d_{n-1}d_n}}\sum_{k=0}^{n} 2^{\,n-k}\,d_{k-1}\,z^{k}.
\]
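The claims of this example are easy to check numerically: the recurrence for d_k, the closed form (4^{k+2} − 1)/3 that the recurrence implies, and the solution formula for β_k. A sketch (assuming the convention T_nβ = (0, …, 0, 1)ᵀ for (10.8)):

```python
import numpy as np

def T(n):
    # Tridiagonal Toeplitz matrix of (10.14), size (n+1) x (n+1).
    m = 5.0 * np.eye(n + 1)
    idx = np.arange(n)
    m[idx, idx + 1] = -2.0
    m[idx + 1, idx] = -2.0
    return m

d = {-1: 1.0}
for k in range(0, 8):
    d[k] = np.linalg.det(T(k))
for k in range(1, 8):
    assert abs(d[k] - (5 * d[k - 1] - 4 * d[k - 2])) < 1e-6 * d[k]   # recurrence
    assert abs(d[k] - (4 ** (k + 2) - 1) / 3) < 1e-6 * d[k]          # closed form

n = 5
e_n = np.zeros(n + 1)
e_n[-1] = 1.0
beta = np.linalg.solve(T(n), e_n)
formula = np.array([2.0 ** (n - k) * d[k - 1] / d[n] for k in range(n + 1)])
assert np.allclose(beta, formula)    # beta_k = 2**(n-k) d_{k-1} / d_n
```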
1.11 Legendre Polynomials

Let H = L2([−1, 1]) and let u_n(x) = xⁿ, n = 0, 1, …. Using the Gram-Schmidt
orthogonalization procedure, we shall exhibit an orthonormal system whose span
coincides with the span of u₀, u₁, u₂, ….

Let

\[
\varphi_0 = \frac{u_0}{\|u_0\|}; \qquad \varphi_0(x) = \frac{1}{\sqrt{2}}.
\]

Next,

\[
v_1 = u_1 - \langle u_1, \varphi_0\rangle\varphi_0 = u_1; \qquad \varphi_1 = \frac{v_1}{\|v_1\|}; \qquad \varphi_1(x) = \sqrt{\tfrac{3}{2}}\,x,
\]

and

\[
\varphi_2 = \frac{v_2}{\|v_2\|}; \qquad \varphi_2(x) = \sqrt{\tfrac{5}{2}}\cdot\tfrac{1}{2}(3x^2 - 1).
\]
In general,

\[
\varphi_n(x) = \sqrt{\frac{2n+1}{2}}\;\frac{1}{2^{n}n!}\,\frac{d^{n}}{dx^{n}}(x^{2}-1)^{n}, \qquad n = 0, 1, \ldots, \tag{11.1}
\]

and

\[
x^{k} = \sum_{j=0}^{k} b_{kj}\,\varphi_j(x), \qquad b_{kk} > 0. \tag{11.3}
\]
Let Q_n(x) = (x² − 1)ⁿ and P_n = Q_n^{(n)}. Since Q_n^{(j)}(±1) = 0 for 0 ≤ j < n,
repeated integration by parts gives, for 0 ≤ k < n,

\[
\langle P_n, x^{k}\rangle = \int_{-1}^{1} Q_n^{(n)}(x)\,x^{k}\,dx = (-1)^{k}k!\int_{-1}^{1} Q_n^{(n-k)}(x)\,dx = 0.
\]

Since the leading coefficients of P_n and φ_n are positive, c_n > 0. Thus

\[
\varphi_n = \frac{P_n}{\|P_n\|}. \tag{11.5}
\]

Next, integrating by parts again,

\[
\|P_n\|^2 = \int_{-1}^{1} Q_n^{(n)}(x)\,Q_n^{(n)}(x)\,dx = -\int_{-1}^{1} Q_n^{(n-1)}(x)\,Q_n^{(n+1)}(x)\,dx
= \cdots = (-1)^{n}\int_{-1}^{1} Q_n(x)\,Q_n^{(2n)}(x)\,dx,
\]

and since Q_n^{(2n)}(x) = (2n)! and (−1)ⁿQ_n(x) = (1 − x)ⁿ(1 + x)ⁿ,

\[
\|P_n\|^2 = (2n)!\int_{-1}^{1}(1-x)^{n}(1+x)^{n}\,dx \tag{11.6}
\]
and

\[
\int_{-1}^{1}(1-x)^{n}(1+x)^{n}\,dx = \frac{n}{n+1}\int_{-1}^{1}(1-x)^{n-1}(1+x)^{n+1}\,dx
= \cdots = \frac{n!}{(n+1)(n+2)\cdots 2n}\int_{-1}^{1}(1+x)^{2n}\,dx
= \frac{(n!)^{2}\,2^{2n+1}}{(2n)!\,(2n+1)}.
\]

Thus by (11.4), (11.5) and (11.6),

\[
\varphi_n = \frac{P_n}{\|P_n\|} = \sqrt{\frac{2n+1}{2}}\;\frac{1}{2^{n}n!}\,\frac{d^{n}}{dx^{n}}(x^{2}-1)^{n}.
\]
n
The polynomial -2}, 2
n . ddx n (x - l)n is called the Legendre polynomial of degree
n, We shall refer to ({In as the normalized Legendre polynomial of degree n , This
polynomial ({In has the following interesting property.
Let an E C be chosen so that IPn = an({Jn has a leading coefficient 1, i.e.,
an = J
2;J~W 2n~I ' For any polynomial of degree n with leading coefficient 1,
(11.3) implies that for some ak E C, 0 ::s k ::s n, Q = L~=o ak({Jk. Hence
1.12 Orthonormal Bases

Now that we know that every vector in a Hilbert space has a closest vector w
in a closed subspace M, it remains to find a representation of w. It turns out that
w = Σ_k⟨w, φ_k⟩φ_k, where {φ_k} is a certain orthonormal system in M. It is therefore
necessary to concern ourselves first with the convergence of such series; in
particular, we use that

\[
\sum_{k=n+1}^{\infty}|a_k|^2 \to 0 \quad\text{as } n \to \infty
\]

whenever Σ_k |a_k|² < ∞.
Lemma 12.1 The inner product is continuous on H × H, i.e., if x_n → x and
y_n → y in H, then

\[
\langle x_n, y_n\rangle \to \langle x, y\rangle.
\]

Thus {s_n} converges since H is complete, i.e., Σ_k⟨x, φ_k⟩φ_k converges.

(iii) Let s_n = Σ_{k=1}^n a_kφ_k, S_n = Σ_{k=1}^n |a_k|². Then for n > m,

\[
\|s_n - s_m\|^2 = \Bigl\langle\sum_{k=m+1}^{n}a_k\varphi_k, \sum_{k=m+1}^{n}a_k\varphi_k\Bigr\rangle = \sum_{k=m+1}^{n}|a_k|^2 = S_n - S_m.
\]

Thus {s_n} is a Cauchy sequence if and only if {S_n} is a Cauchy sequence.
Therefore {s_n} converges if and only if {S_n} converges, which implies (iii).

(iv) Suppose y = Σ_k a_kφ_k. Then by Lemma 12.1,

\[
\langle y, \varphi_i\rangle = \lim_{n\to\infty}\Bigl\langle\sum_{k=1}^{n}a_k\varphi_k, \varphi_i\Bigr\rangle = a_i. \qquad\square
\]
Examples: 1. The standard basis {e_k} is an orthonormal basis for ℓ2. Also
ℓ2(ℤ) has a natural orthonormal basis. Indeed, put e_k = (δ_{jk})_{j∈ℤ}, where δ_{jk}
is the Kronecker delta. Then e_k ∈ ℓ2(ℤ), and

\[
\langle e_k, e_\ell\rangle = \begin{cases} 0 & \text{if } k \ne \ell,\\ 1 & \text{if } k = \ell.\end{cases}
\]

The vectors e_k, k ∈ ℤ, have norm one, are mutually orthogonal, and each
x = (ξ_j)_{j∈ℤ} can be written as x = Σ_{j=−∞}^∞ ξ_je_j with the series converging
in the norm of ℓ2(ℤ). Thus …, e₋₁, e₀, e₁, … is an orthonormal basis of
ℓ2(ℤ). We shall refer to this basis as the standard orthonormal basis of
ℓ2(ℤ).
We shall prove the next assertions 2, 3 and 4 in Sections 13 and 14.

2. {(1/√2π)e^{inx}}, n = 0, ±1, …, is an orthonormal basis for L2([−π, π]).

3. {1/√2π} ∪ {cos nx/√π, sin nx/√π}_{n=1}^∞ is an orthonormal basis for L2([−π, π]).

4. The normalized Legendre polynomials

\[
\varphi_n(x) = \sqrt{\frac{2n+1}{2}}\,\frac{1}{2^{n}n!}\,\frac{d^{n}}{dx^{n}}(x^{2}-1)^{n}, \qquad n = 0, 1, \ldots,
\]

form an orthonormal basis for L2([−1, 1]).
\[
\langle x, y\rangle = \lim_{n\to\infty}\langle s_n, t_n\rangle = \lim_{n\to\infty}\sum_{k=1}^{n}\langle x, \varphi_k\rangle\overline{\langle y, \varphi_k\rangle}.
\]
(iii) implies (ii). If ⟨x, φ_k⟩ = 0, k = 1, 2, …, then clearly x ⊥ sp{φ_k}.
Therefore, by the continuity of the inner product, x is orthogonal to the closure of
sp{φ_k}, which is H. In particular, x ⊥ x and x = 0.

(ii) implies (i). For any z ∈ H, w = Σ_k⟨z, φ_k⟩φ_k converges by Theorem 12.2.
Thus for each j,
1.13 Fourier Series

The proofs of the assertions 2, 3, and 4 preceding Theorem 12.3 rely on the
following two approximation theorems ([R], pp. 174-5).

Weierstrass Approximation Theorem. If f is a complex valued function which is
continuous on [a, b], then for every ε > 0 there exists a polynomial P such that
|f(x) − P(x)| < ε for all x ∈ [a, b].

Second Weierstrass Approximation Theorem. If f is continuous on [−π, π] and
f(−π) = f(π), then for every ε > 0 there exists a trigonometric polynomial T such
that |f(x) − T(x)| < ε for all x ∈ [−π, π].

To show that

\[
S = \Bigl\{\frac{1}{\sqrt{2\pi}},\ \frac{\cos nx}{\sqrt{\pi}},\ \frac{\sin nx}{\sqrt{\pi}} : n = 1, 2, \ldots\Bigr\}
\]

is an orthonormal basis for L2([−π, π]), it suffices, by Theorem 12.3, to show
that the span of S is dense in the space.
Suppose f is a real valued function in L2([−π, π]). Given ε > 0, there exists
a real valued function g which is continuous on [−π, π] such that

\[
\|f - g\| < \varepsilon/3 \tag{13.1}
\]

(cf. [12], p. 90). Next we choose a function h which is continuous on [−π, π] so
that h(−π) = h(π) and

\[
\|g - h\| < \varepsilon/3. \tag{13.2}
\]

By the second Weierstrass approximation theorem, there exists a trigonometric
polynomial T_n such that

\[
|h(x) - T_n(x)| < \frac{\varepsilon}{3\sqrt{2\pi}}, \qquad x \in [-\pi, \pi].
\]

Thus

\[
\|h - T_n\|^2 = \int_{-\pi}^{\pi}|h(x) - T_n(x)|^2\,dx < \frac{\varepsilon^2}{9}. \tag{13.3}
\]

Combining (13.1), (13.2) and (13.3) we have ‖f − T_n‖ < ε.
1. Given f ∈ L2([−π, π]), its expansion with respect to the orthonormal basis S
takes the form

\[
\frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl(a_n\cos nx + b_n\sin nx\bigr),
\]

where

\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\,dx, \qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\,dx.
\]

The series is called the Fourier series of f; a_n, b_n are called the Fourier coefficients
of f.
By Parseval's equality (Theorem 12.3),

\[
\frac{1}{\pi}\int_{-\pi}^{\pi}|f(x)|^2\,dx = \frac{|a_0|^2}{2} + \sum_{n=1}^{\infty}\bigl(|a_n|^2 + |b_n|^2\bigr).
\]

2. Since cos nx = (e^{inx} + e^{−inx})/2 and sin nx = (e^{inx} − e^{−inx})/2i, it follows from
the above result that sp{e^{inx} : n = 0, ±1, …} is dense in L2([−π, π]). Hence
{(1/√2π)e^{inx} : n = 0, ±1, …} is an orthonormal basis for L2([−π, π]). Therefore,
given f ∈ L2([−π, π]), the Fourier series Σ_{n=−∞}^∞ c_ne^{inx}, where

\[
c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)e^{-inx}\,dx, \qquad n = 0, \pm 1, \ldots,
\]

converges to f in the norm of L2([−π, π]).
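As a numerical illustration of the last formula (with the hypothetical test function f(x) = x), the coefficients c_n can be approximated by Riemann sums, and the L2 error of the partial sums is seen to decrease as terms are added:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
dx = x[1] - x[0]
f = x.copy()                            # test function f(x) = x

def c(n):
    # c_n = (1/2pi) * integral of f(x) e^{-inx} over [-pi, pi] (Riemann sum)
    return np.sum(f * np.exp(-1j * n * x)) * dx / (2 * np.pi)

def l2err(N):
    # L2 norm of f minus the partial sum over |n| <= N
    s = sum(c(n) * np.exp(1j * n * x) for n in range(-N, N + 1))
    return np.sqrt(np.sum(np.abs(f - s) ** 2) * dx)

# Norm convergence: the error shrinks as more terms are taken.
assert l2err(20) < l2err(5) < l2err(1)

# Bessel's inequality: the coefficient energy never exceeds ||f||^2.
energy = np.sum(f ** 2) * dx
coeff = 2 * np.pi * sum(abs(c(n)) ** 2 for n in range(-20, 21))
assert coeff <= energy + 1e-6
```

Note that the convergence asserted by the theorem is in the L2 norm, not pointwise; the partial sums of f(x) = x oscillate near the endpoints while the norm error still goes to zero.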
1.14 Completeness of the Legendre Polynomials

It was shown in Section 11 that the set of normalized Legendre polynomials

\[
\varphi_n(x) = \sqrt{\frac{2n+1}{2}}\,\frac{1}{2^{n}n!}\,\frac{d^{n}}{dx^{n}}(x^{2}-1)^{n}, \qquad n = 0, 1, \ldots,
\]

is an orthonormal system in L2([−1, 1]). We now proceed to prove that {φ₀, φ₁, …}
is an orthonormal basis for L2([−1, 1]) by showing that (iii) of Theorem 12.3 holds.
Since the set of all complex valued functions which are continuous on [−1, 1]
is dense in L2([−1, 1]), it follows from the Weierstrass approximation theorem
that the set of all polynomials is also dense in L2([−1, 1]). Thus from (14.1), the
span of φ₀, φ₁, … is dense in L2([−1, 1]). This proves that {φ₀, φ₁, …} is an
orthonormal basis. Consequently

\[
\lim_{k\to\infty}\int_{-1}^{1}\Bigl|f(x) - \sum_{n=0}^{k}a_n\varphi_n(x)\Bigr|^2\,dx = 0,
\]

where

\[
a_n = \langle f, \varphi_n\rangle = \int_{-1}^{1}f(x)\varphi_n(x)\,dx.
\]

By Parseval's equality,

\[
\sum_{n=0}^{\infty}|a_n|^2 = \int_{-1}^{1}|f(x)|^2\,dx.
\]
1.15 Bases for the Hilbert Space of Functions on a Square

Let H₀ = L2([a, b] × [a, b]) be the vector space of all those complex valued
functions f which are Lebesgue measurable on the square [a, b] × [a, b] and for
which ∫_a^b ∫_a^b |f(t, s)|² ds dt < ∞;

\[
\langle h, g\rangle = \int_a^b\!\!\int_a^b h(t, s)\,\overline{g(t, s)}\,ds\,dt
\]

defines the inner product. With this inner product, H₀ is a Hilbert space.
The reader is referred to [R] for a treatment of H₀.

An orthonormal basis for H₀ can be constructed from an orthonormal basis for
L2([a, b]) as follows.

Theorem 15.1 If φ₁, φ₂, … is an orthonormal basis for L2([a, b]), then Φ_{ij}(s, t) =
φ_i(s)φ_j(t), 1 ≤ i, j, forms an orthonormal basis for L2([a, b] × [a, b]).
Proof: First,

\[
\langle\Phi_{jk}, \Phi_{mn}\rangle = \int_a^b\varphi_j(s)\overline{\varphi_m(s)}\,ds\int_a^b\varphi_k(t)\overline{\varphi_n(t)}\,dt = \delta_{jm}\,\delta_{kn},
\]

so {Φ_{jk}} is an orthonormal system. Now suppose g ∈ L2([a, b] × [a, b]) satisfies,
for all j and k,

\[
0 = \langle g, \Phi_{jk}\rangle = \int_a^b\!\!\int_a^b g(s, t)\,\overline{\varphi_j(s)}\,\overline{\varphi_k(t)}\,ds\,dt
= \int_a^b\overline{\varphi_j(s)}\Bigl\{\int_a^b g(s, t)\overline{\varphi_k(t)}\,dt\Bigr\}ds.
\]

Thus, for each k, the function

\[
h_k(s) = \int_a^b g(s, t)\overline{\varphi_k(t)}\,dt
\]

is orthogonal to every φ_j.
I' I'
-Jr -Jr
f (t. s) - i: t a~e;("+·')
n= P m= j
2 ds dt
a nm = -1 jJr jJr .
f(t, s)e-1(nt+ms) ds dt.
2rr - Jr - Jr
o
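A discrete analogue of Theorem 15.1 can be checked directly; the sketch below (my own illustration, not from the book) replaces L_2([a, b]) by C^3, where an orthonormal basis is the set of columns of a unitary matrix and the product basis becomes a Kronecker product:

```python
import numpy as np

# Sketch: if the columns of a unitary U form an orthonormal basis, then the
# Kronecker products u_i ⊗ u_j form an orthonormal basis of the "square"
# space (here C^9 built from C^3).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # orthonormal columns u_0, u_1, u_2
P = np.column_stack([np.kron(U[:, i], U[:, j])
                     for i in range(3) for j in range(3)])
print(np.allclose(P.T @ P, np.eye(9)))             # True: orthonormal basis of C^9
```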
The following theorem shows that an orthonormal system which is "close enough"
to an orthonormal basis is also an orthonormal basis .
Lemma 16.1 If M and N are subspaces of an inner product space and dim M < dim N, then M^⊥ ∩ N ≠ (0).

Proof: Let φ_1, ..., φ_n be a basis for M and let ψ_1, ..., ψ_{n+1} be linearly independent in N. The lemma is proved once it is established that there exist numbers β_1, ..., β_{n+1}, not all zero, such that Σ_{i=1}^{n+1} β_i ψ_i ⊥ φ_k, k = 1, 2, ..., n, or

  Σ_{i=1}^{n+1} β_i (ψ_i, φ_k) = 0,  k = 1, 2, ..., n.

Since this is a homogeneous system of n linear equations in the n + 1 unknowns β_1, ..., β_{n+1}, a nontrivial solution exists.  □
Theorem 16.2 Let φ_1, φ_2, ... be an orthonormal basis for a Hilbert space H. If ψ_1, ψ_2, ... is an orthonormal system in H such that Σ_{n=1}^{∞} ‖φ_n − ψ_n‖² < ∞, then ψ_1, ψ_2, ... is an orthonormal basis for H.

Proof: Assume {ψ_n}_{n=1}^{∞} is not an orthonormal basis for H. Then by Theorem 12.3, there exists a ψ_0 ∈ H such that ‖ψ_0‖ = 1 and (ψ_0, ψ_j) = 0, j = 1, 2, .... Choose an integer N so that

  Σ_{n=N+1}^{∞} ‖φ_n − ψ_n‖² < 1.  (16.1)

Lemma 16.1 guarantees the existence of a w ≠ 0 in sp{ψ_0, ..., ψ_N} such that w ⊥ φ_j, 1 ≤ j ≤ N. Since {φ_n}_{n=1}^{∞} is an orthonormal basis for H and w ⊥ ψ_n, n > N,

  ‖w‖² = Σ_{n=N+1}^{∞} |(w, φ_n)|² = Σ_{n=N+1}^{∞} |(w, φ_n − ψ_n)|² ≤ ‖w‖² Σ_{n=N+1}^{∞} ‖φ_n − ψ_n‖² < ‖w‖²,

a contradiction.  □
Now that we have studied orthonormal bases in considerable detail, it is natural to determine which Hilbert spaces possess an orthonormal basis.

Definition: A Hilbert space H is called separable if there exist vectors v_1, v_2, ... which span a subspace dense in H.

An equivalent definition appears in Appendix 1.

Every Hilbert space H which has an orthonormal basis is separable, since the span of the basis is dense in H by Theorem 12.3(iii). In particular, ℓ_2 and L_2([a, b]) are separable.
It turns out that only separable Hilbert spaces have an orthonormal basis .
Theorem 17.1 A Hilbert space contains an orthonormal basis if and only if it is separable.

Proof: Let sp{v_1, v_2, ...} be dense in the Hilbert space H and let M be a closed subspace of H. For each k there exists a unique vector w_k ∈ M such that v_k − w_k ⊥ M. We shall show that sp{w_1, w_2, ...} is dense in M, which proves that M is separable.

Given w ∈ M and given ε > 0, there exists a vector Σ_{j=1}^{n} β_j v_j such that

  ‖w − Σ_{j=1}^{n} β_j v_j‖ < ε.  (17.1)

Since Σ_{j=1}^{n} β_j (v_j − w_j) is orthogonal to M, (17.1) and Theorem 4.3 imply

  ‖w − Σ_{j=1}^{n} β_j w_j‖ < ε.  (17.2)
Proof: Since M is closed and H is complete, it follows that M, with the inner product inherited from H, is a separable Hilbert space. Hence M contains an orthonormal basis {φ_1, φ_2, ...}. Now w = Σ_k (y, φ_k) φ_k converges by Theorem 12.3, and

  (y − w, φ_i) = 0 for each i.

Thus (y − w) ⊥ sp{φ_1, φ_2, ...}, and since sp{φ_1, φ_2, ...} is dense in M, the continuity of the inner product implies (y − w) ⊥ M.  □
The existence of orthonormal bases for separable Hilbert spaces enables one to identify all separable Hilbert spaces as follows.

Definition: Inner product spaces E and F are linearly isometric if there exists a function A which maps E onto F such that for all u, v in E and α, β ∈ C,

  A(αu + βv) = αAu + βAv and (Au, Av) = (u, v).

Theorem 18.1 Any two infinite dimensional separable Hilbert spaces (over C) are linearly isometric.

Proof: Suppose H_1 and H_2 are infinite dimensional separable Hilbert spaces with orthonormal bases {φ_1, φ_2, ...} and {ψ_1, ψ_2, ...}, respectively. For each u ∈ H_1,
1.18 Isometry of Hilbert Spaces

define Au = Σ_k (u, φ_k) ψ_k. By Parseval's equality,

  ‖Au‖² = Σ_k |(u, φ_k)|² = ‖u‖².
The same proof shows that every n-dimensional Hilbert space (over C) is linearly isometric to C^n. Thus every finite dimensional subspace of a Hilbert space is closed.  □

It is easy to verify that the operator A defined in the proof of Theorem 18.1 has the property that for all u, v in H_1,

  (Au, Av) = (u, v),

which means that the spaces H_1 and H_2, from the point of view of Hilbert space theory, are essentially indistinguishable.
Often one can give an explicit description of the operator that establishes the linear isometry between two Hilbert spaces. To illustrate this, let us consider the spaces ℓ_2(Z) and L_2([−π, π]). Both are infinite dimensional separable Hilbert spaces, and hence by Theorem 18.1 they are linearly isometric. To make this connection explicit, let F be the map that assigns to each function f in L_2([−π, π]) the sequence of the Fourier coefficients of f with respect to the orthonormal basis φ_n(t) = (1/√(2π)) e^{int}, n = 0, ±1, ±2, .... Thus F f = (f_n)_{n∈Z}, where

  f_n = (1/√(2π)) ∫_{−π}^{π} f(t) e^{−int} dt.

By Parseval's equality, the sequence F f is square summable, that is, F f ∈ ℓ_2(Z), and

  ‖F f‖ = ‖f‖.
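The norm-preserving property of F can be observed numerically; the following sketch (mine, not from the book) uses the smooth sample function f(t) = t², whose coefficients decay fast enough that a short partial sum captures the ℓ_2 norm:

```python
import numpy as np

# Sketch: the map F f = (f_n), f_n = (1/√(2π)) ∫ f(t) e^{-int} dt, sends
# L2([-π, π]) isometrically into l2(Z).  Checked here for f(t) = t².
M = 20000
h = 2 * np.pi / M
t = -np.pi + (np.arange(M) + 0.5) * h
f = t ** 2

def fn(n):
    return h * np.sum(f * np.exp(-1j * n * t)) / np.sqrt(2 * np.pi)

norm_l2 = np.sqrt(sum(abs(fn(n)) ** 2 for n in range(-50, 51)))  # ‖Ff‖ in l2(Z)
norm_L2 = np.sqrt(h * np.sum(f ** 2))                            # ‖f‖ in L2
print(norm_l2, norm_L2)   # the two norms agree closely (Parseval)
```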
The Hilbert spaces which are usually encountered in applications are separable.
However, there is a very important non-separable inner product space which has
been a source of active research for more than fifty years. This is the space of
almost periodic functions .
Exercises I
(a) √(t² + 1)
(b) e^{at}, a ∈ R
(c) t^a, a ∈ R
(d) t^n e^{at}, a ∈ R, n ∈ N (the set of positive integers).
8. Let C[a, b] be the vector space of all continuous complex valued functions on [a, b]. Introduce a norm ‖·‖ on C[a, b] by ‖f‖ = max_{t∈[a,b]} |f(t)|. Show that it is impossible to define an inner product on C[a, b] such that the norm it induces is the same as the given norm.

9. Find the intersection of the ball ‖z − ξ_0‖ ≤ 1 with the line z = λξ_1, where ξ_0 = (1, 1/2, 1/3, 1/4, ...), ξ_1 = (1, 0, ...).
10. Find the intersection of the line z = λφ with the sphere ‖z − φ_0‖ = 1 in L_2[0, 1], where φ_0(t) = 2t², φ(t) = t.

11. Find the intersection of the ball ‖z − ξ_0‖ ≤ 1 with the plane z = αξ_1 + βξ_2, where
12. Find the intersection of the plane z = αφ_1 + βφ_2 with the sphere ‖z − φ_0‖ = 1 in L_2[0, 1], where

  φ_0(t) ≡ γ ∈ C,
  φ_1(t) = { 1, 0 ≤ t < 1/2;  −1, 1/2 ≤ t ≤ 1 },
  φ_2(t) = { 1/2, 0 ≤ t ≤ 1/4 and 1/2 ≤ t ≤ 3/4;  −1/2, 1/4 < t < 1/2 and 3/4 < t ≤ 1 }.
13. Is the intersection of the two balls ‖ξ_1 − z‖ ≤ R_1 and ‖ξ_2 − z‖ ≤ R_2 empty?
(c) ~I = (1,0,0,0, 8\ ' 2~3 ' 719' " .), RI = t; ~2 = (0, t,! ,-f.;, 0,
O, . . .) , R2 = 3-v'2
2v'2 '
14. (a) Show that the system {z_j}_1^∞ is linearly independent in L_2[0, 1], where

  z_j(t) = { 0, 0 ≤ t < 1/(j+1);  1, 1/(j+1) ≤ t ≤ 1 }.

(b) Let z_j(t) = t^j e^{−at}, j = 1, 2, .... Show that the system is linearly independent in L_2[0, ∞).
15. Let w = (w_1, w_2, ...), where w_j > 0. Define ℓ_2(w) to be the set of all sequences ξ = (ξ_1, ξ_2, ...) of complex numbers with Σ_{j=1}^{∞} w_j |ξ_j|² < ∞. Define an inner product on ℓ_2(w) by

  (ξ, η) = Σ_{j=1}^{∞} w_j ξ_j η̄_j.
  ‖f‖ = (∫_a^b g(t) |f(t)|² dt)^{1/2}
18. Let H be a Hilbert space. Denote by ℓ_2(H) the space of all sequences (x_j)_{j∈N} of vectors in H with Σ_{j=1}^{∞} ‖x_j‖² < ∞. Define the inner product (x, y) = Σ_{j=1}^{∞} (x_j, y_j). Show that ℓ_2(H) is a Hilbert space.

19. Let z^(1) = (z_1^(1), z_2^(1), ...), z^(2) = (z_1^(2), z_2^(2), ...), ..., z^(n) = (z_1^(n), z_2^(n), ...) be any n vectors in ℓ_2. Take z^(i)(k) = (z_1^(i), ..., z_k^(i), 0, 0, ...), for i = 1, 2, ..., n.

(a) If for some k the vectors {z^(1)(k), ..., z^(n)(k)} are linearly independent, show that {z^(1), ..., z^(n)} are linearly independent too.

(b) Check if the converse statement is true.
21. Find the pairwise intersections between the subspaces defined in problem 20.
22. Let r_1, r_2, r_3 ∈ C be such that |r_i| < 1, i = 1, 2, 3, and r_j ≠ r_i for j ≠ i. Let x_i = (1, r_i, r_i², ...). Prove that ξ = (ξ_1, ξ_2, ξ_3, ...) is in the subspace spanned by x_1, x_2 and x_3 if and only if

  ξ_{n+3} − (r_1 + r_2 + r_3) ξ_{n+2} + (r_1 r_2 + r_1 r_3 + r_2 r_3) ξ_{n+1} − r_1 r_2 r_3 ξ_n = 0.

23. Let x = (1, 1/2, 1/4, 1/8, 1/16, ...), y = (1, 2·(1/2), 3·(1/4), 4·(1/8), ...), z = (2, 3·2·(1/2), 4·3·(1/4), 5·4·(1/8), ...) be vectors in ℓ_2. Prove that ξ = (ξ_1, ξ_2, ...) is in the subspace spanned by x, y and z if and only if

  ξ_{n+3} − (3/2) ξ_{n+2} + (3/4) ξ_{n+1} − (1/8) ξ_n = 0.
24. Let f_1(t) = e^t, f_2(t) = e^{it} and f_3(t) = e^{−it}. Prove that a vector y ∈ L_2[a, b] is in the subspace spanned by f_1, f_2 and f_3 if and only if y satisfies the differential equation

  y^(3) − y^(2) + y′ − y = 0.

25. (a) Prove that a vector y ∈ L_2[a, b] is in the subspace spanned by {e^t, t e^t, e^{−t}} if and only if y satisfies the differential equation y^(3) − y^(2) − y′ + y = 0.

(b) What is the dimension of this subspace?

26. (a) Let y_1 and y_2 be twice continuously differentiable functions in L_2[a, b] with

  det [ y_1  y_2 ; y_1′  y_2′ ] ≠ 0.

Prove that y is an element of the subspace spanned by y_1 and y_2 if and only if
31. Let w = (1, a, a², ...) for a > 1. Denote by ℓ_2(w) the Hilbert space consisting of all sequences ξ = (ξ_1, ξ_2, ...) such that Σ_{j=1}^{∞} a^{j−1} |ξ_j|² < ∞. Prove that s = (1, 1/a, 1/a², ...) ∈ ℓ_2(w) and find the orthogonal complement in ℓ_2(w) of sp{s}.
32. Let x_1 = (1, 2, 0, 0, ...), y_1 = (1, 0, 0, ...);
x_2 = (0, 1, 2, 0, ...), y_2 = (1, 1, 0, 0, ...);
x_3 = (0, 0, 1, 2, 0, ...), y_3 = (1, 1, 1, 0, 0, ...);
... be two given systems in ℓ_2. Prove that for all j, y_j ∉ sp{x_1, x_2, ...}.

33. (a) Prove that the orthogonal complement in ℓ_2 of sp{y}, where y = (1, −1/3, 1/9, −1/27, ...), is sp{x_1, x_2, ...}, where

  x_j = (0, ..., 0, 1, 3, 0, ...),

with j − 1 zeros preceding the 1.

(b) Decompose each vector with respect to sp{y} and its orthogonal complement.

34. Prove that ξ ∈ sp{x_1, x_2, ...}, where x_1, x_2, ... are as in problem 32, if and only if ξ_1 = −Σ_{j=1}^{∞} (−1/2)^j ξ_{j+1}. Therefore ξ is of the form ((1/2)ξ_2 − (1/4)ξ_3 + (1/8)ξ_4 − ..., ξ_2, ξ_3, ...).
35. Let x_1 = (α, β, 0, 0, ...), x_2 = (0, α, β, 0, ...), x_3 = (0, 0, α, β, ...), ..., where |β/α| > 1. Prove that for all j, the vectors y_j from problem 32 are not in the subspace sp{x_1, x_2, ...} of ℓ_2.

36. Let x_1, x_2, x_3, ... be as in problem 35. Prove that the orthogonal complement in ℓ_2 to sp{(1, −α/β, (α/β)², −(α/β)³, ...)} is sp{x_1, x_2, ...}.

37. Let x_1, x_2, x_3, ... be the vectors in ℓ_2 as in problem 35. Prove that ξ ∈ sp{x_1, x_2, ...} if and only if ξ_1 = −Σ_{j=1}^{∞} (−α/β)^j ξ_{j+1}, so that ξ is of the form ((α/β)ξ_2 − (α/β)²ξ_3 + ..., ξ_2, ξ_3, ...).

38. Let x_1 = (α, β, 0, 0, ...), x_2 = (0, α, β, 0, ...), x_3 = (0, 0, α, β, 0, ...), ..., where |α/β| ≥ 1. Check that sp{x_1, x_2, ...} = ℓ_2.

39. Let x_1 = (1, 0, 0, ...), x_2 = (α, β, 0, ...), x_3 = (0, α, β, 0, ...), ..., where |α/β| > 1.

(a) Check that sp{x_1, x_2, ...} = ℓ_2.
(b) Show that any finite system of these vectors is linearly independent.
(c) Find a_1, a_2, ... in C, not all zero, such that Σ_{j=1}^{∞} a_j x_j converges to zero.
41. Let

  χ_r(t) = { 1 for 0 ≤ t < r;  0 for r ≤ t ≤ 1 }.

42. Let {x_i} be the vectors in problem 32. For every ξ ∈ ℓ_2, find the projection of ξ into sp{x_1, x_2, ...} and find the distance from ξ to this subspace.
43. Does the line z = λx intersect the ball ‖z − a_0‖ ≤ R_0?
(a) X = (1, ~, ~, .In,...), ao = (0, 1,0, . ..), Ro = ~ .
(b) X = (1, 0, ~,O, i, 0, ~' " .), ao = (0, 1, 0, ~, 0, i, ...), Ro = ~.
44. Suppose φ_1, ..., φ_n is an orthonormal system in a Hilbert space H. Let S be the ball {z : ‖z − z_0‖ ≤ R_0}, and let M = sp{φ_1, ..., φ_n}. The distance d between S and M is defined by d = inf{‖u − v‖ : u ∈ S, v ∈ M}. Prove that
46. Let ‖z − z_1‖ = R and ‖z − z_2‖ = R be two spheres with ‖z_1‖ = ‖z_2‖. Show that the intersection of the two spheres is a sphere in the subspace orthogonal to the vector z_2 − z_1. Find its radius and its center.

47. Prove that |det(a_jk)|² = Π_{j=1}^{n} Σ_{k=1}^{n} |a_jk|² if and only if the vectors y_1 = (a_11, a_12, ..., a_1n), y_2 = (a_21, a_22, ..., a_2n), ..., y_n = (a_n1, a_n2, ..., a_nn) are orthogonal to each other.
48. Consider the incompatible system of equations:

  0 = x_1 + x_2    −1 = 2x_1 + x_2
  2 = 3x_1 − x_2   −3/2 = x_1 + 2x_2

Minimize the deviation between the right and left sides.

49. Let the points (0, 0), (1, 1), (2, 1) be given. Find the polynomial P(t) of degree 1 with the least squares fit to these three points.
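For computations of this kind, a minimal numerical sketch (my own, not part of the exercise set) is the normal-equations solve for the data of Exercise 49:

```python
import numpy as np

# Sketch: the least squares line P(t) = a + b t through (0,0), (1,1), (2,1),
# found by solving the normal equations X'X c = X'y.
T = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 1.0])
X = np.column_stack([np.ones_like(T), T])   # design matrix for a + b t
a, b = np.linalg.solve(X.T @ X, X.T @ y)    # normal equations
print(a, b)   # P(t) = 1/6 + t/2
```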
50. Let z_1, ..., z_n be vectors in a Hilbert space H.

(a) Prove that if z_1, ..., z_n are linearly independent, then there exists an ε > 0 such that any vectors y_1, ..., y_n in H which satisfy

  ‖z_i − y_i‖ < ε,  i = 1, ..., n,

are linearly independent.

51. Let A = {a_1, ..., a_n} be a system of vectors in a Hilbert space H. For any y ∈ H, let y_A denote the projection of y into the subspace sp{a_1, ..., a_n}. Prove that for any ε > 0, there exists a δ > 0 such that for any system of vectors B = {b_1, ..., b_n} with the property ‖a_j − b_j‖ < δ, 1 ≤ j ≤ n, the inequality

  ‖y_A − y_B‖ ≤ ε‖y‖

holds for any y ∈ H. Hint: Use the formula for y_A.
52. Let

  1 ≤ k ≤ m, ‖a_ij − b_ij‖ < δ, 1 ≤ i ≤ m, 1 ≤ j ≤ n; then the solution μ_B = (μ_1, ..., μ_n) of the incompatible system

  z_1 = b_11 λ_1 + ... + b_1n λ_n

satisfies

  (Σ_{i=1}^{n} |α_i − μ_i|²)^{1/2} < ε.
53. Let t_1, ..., t_k be k points in [0, 1] and let y_1, ..., y_k be in C. Let P(t) be their least square fit polynomial of degree n < k. Prove that for any ε > 0, there exists a δ > 0 such that if s_1, ..., s_k in [0, 1] and z_1, ..., z_k ∈ C satisfy |s_i − t_i| < δ, i = 1, ..., k, and |z_i − y_i| < δ, i = 1, ..., k, then their least square fit polynomial Q(t) (degree n) satisfies

  (∫_0^1 |P(t) − Q(t)|² dt)^{1/2} < ε.
  a_k = (k + 1/2) (1/(2^k k!)) ∫_{−1}^{1} f^{(k)}(t) (1 − t²)^k dt.
61. (a) In L_2[−1, 1], find the projection of x^n into sp{x^{n−1}, x^{n−2}, ..., 1}.

(b) Express x^n = ξ(x) + ψ(x), where ξ(x) ∈ sp{x^{n−1}, ..., 1} and ψ(x) ∈ sp{x^{n−1}, ..., 1}^⊥.

62. Consider the two vectors cos t and cos t + sin t in L_2[−π, π]. Change the inner product on L_2[−π, π] in such a way that it remains a Hilbert space and these two vectors become orthogonal.
63. Consider the vectors (1, 2, 0, 0, ...) and (1, 1, 1, 0, 0, ...) in ℓ_2. Change the inner product on ℓ_2 such that it remains a Hilbert space and these two vectors become orthogonal.

64. In general, given linearly independent vectors φ_1, ..., φ_n in a Hilbert space H, change the inner product on H such that it remains a Hilbert space and φ_1, ..., φ_n become orthogonal.

65. Let L be a closed subspace of a Hilbert space H. Given g ∈ L and f ∈ H, denote by P_L f the projection of f into L. Prove that g ⊥ P_L f if and only if g ⊥ f.

66. Prove that for any two subspaces of a Hilbert space H,

(a) (L_1 + L_2)^⊥ = L_1^⊥ ∩ L_2^⊥
(b) (L_1 ∩ L_2)^⊥ = L_1^⊥ + L_2^⊥
70. Define ℓ_2(N × N) to be the set of all double sequences {ξ_jk} with Σ_{j,k=1}^{∞} |ξ_jk|² < ∞, and an inner product defined by

  (ξ, η) = Σ_{j,k=1}^{∞} ξ_jk η̄_jk.

71. Determine which of the following systems are orthogonal bases in ℓ_2 and which are not.

(a) (1, 2, 0, 0, ...), (0, 0, 1, 2, 0, ...), (0, 0, 0, 0, 1, 2, 0, ...), ...
72. Given vectors φ_1, ..., φ_n in a Hilbert space H, when is it possible to find a vector x_0 ∈ H such that (x_0, φ_1) = 1 and (x_0, φ_j) = 0 for j > 1? If it exists, find such an x_0.

73. Given vectors φ_1, ..., φ_n in a Hilbert space H,

(a) prove that there exist vectors x_1, ..., x_n with (x_j, φ_k) = δ_jk if and only if φ_1, ..., φ_n are linearly independent. Such a system x_1, ..., x_n is called a biorthogonal system.

(b) Let φ̃_j be the projection of φ_j into sp{φ_1, ..., φ_{j−1}, φ_{j+1}, ..., φ_n}. Prove that the system {x_j}_{j=1}^{n}, where

  x_j = (1/(‖φ_j‖² − ‖φ̃_j‖²)) (φ_j − φ̃_j),

is a biorthogonal system.

(c) How many biorthogonal systems corresponding to φ_1, ..., φ_n exist?
where c_k ∈ C and
77. Let the matrix A_n in the previous exercise be replaced by the positive definite Toeplitz matrix T_n = (t_{j−k})_{j,k=0}^{n}. Prove that the polynomial with the coefficients given by (*) in the previous exercise has all its zeros inside the unit disc.

78. Let T_n be as in the previous exercise, and

  w(λ) = (1 − |a|²) / (1 + |a|² − aλ − āλ^{−1}).
(a) Prove that w(λ) > 0 for each λ ∈ T and that

(b) Construct the Szegő polynomials for the weight w (hint: use the previous exercise).
  w(λ) = |Σ_{j=0}^{n} a_j λ^j|².

Compute up to a constant the Szegő polynomials for this weight.
Chapter II
Bounded Linear Operators on Hilbert Spaces
where

  (a_ij)(α_1, ..., α_n)^T = (β_1, ..., β_n)^T,  i.e.,  β_i = Σ_{j=1}^{n} a_ij α_j.

A linear operator A : H_1 → H_2 is bounded if

  sup_{‖x‖≤1} ‖Ax‖ < ∞.
When there is no cause for confusion, we use the same notation for the norms and inner products on H_1 and H_2.

An operator A is bounded if and only if it takes the 1-ball S_1 with center zero in H_1 into some r-ball in H_2. The smallest ball in H_2 which contains AS_1 has radius ‖A‖.

The identity operator I : H_1 → H_1 defined by Ix = x is obviously a bounded linear operator of norm 1.

The operator A : C^n → C^n defined above is bounded; for if x = (α_1, ..., α_n), then by (1.3) in Section 1.1,

  ‖Ax‖² = Σ_{i=1}^{n} |β_i|² ≤ Σ_{i=1}^{n} (Σ_{j=1}^{n} |a_ij|²)(Σ_{j=1}^{n} |α_j|²) = ‖x‖² Σ_{i,j=1}^{n} |a_ij|²,

from which it follows that ‖A‖ ≤ (Σ_{i,j=1}^{n} |a_ij|²)^{1/2}.
It is easy to verify the next three properties of ‖A‖.

(a) ‖A‖ = sup_{x≠0} ‖Ax‖/‖x‖ = sup_{‖x‖=1} ‖Ax‖.
(b) ‖A‖ = sup_{‖x‖=‖y‖=1} |(Ax, y)| = sup_{‖x‖≤1, ‖y‖≤1} |(Ax, y)|.  (1.1)
(c) If ‖Ax‖ ≤ C‖x‖ for all x ∈ H_1, then A is bounded and ‖A‖ ≤ C.

The set of bounded linear operators which map H_1 into H_2 is denoted by L(H_1, H_2). If H_1 = H_2, we write L(H_1) instead of L(H_1, H_1).

If A and B are in L(H_1, H_2), it is a simple matter to verify (i)–(v).

(i) αA + βB ∈ L(H_1, H_2); α, β ∈ C.
(ii) ‖αA‖ = |α| ‖A‖; α ∈ C.
(iii) ‖A + B‖ ≤ ‖A‖ + ‖B‖.
(iv) ‖A − B‖ ≥ | ‖A‖ − ‖B‖ |.
(v) If C ∈ L(H_2, H_3), where H_3 is a Hilbert space, then defining CA by CAx = C(Ax), it follows that CA is in L(H_1, H_3) and ‖CA‖ ≤ ‖C‖ ‖A‖.
While it is often easy to verify that a linear operator is bounded, it may be very difficult to determine its norm. A few examples are discussed in this section.

1. Every linear operator which maps a finite dimensional Hilbert space into a Hilbert space is bounded. Indeed, suppose A is a linear map from H_1 into H_2, and dim H_1 < ∞. Let φ_1, ..., φ_n be an orthonormal basis for H_1. Then for any x ∈ H_1,

  x = Σ_{k=1}^{n} (x, φ_k) φ_k and Ax = Σ_{k=1}^{n} (x, φ_k) Aφ_k.
2.2 Examples of Bounded Linear Operators with Estimates of Norms
Hence, if Aφ_k = λ_k φ_k, k = 1, ..., n, then

  ‖Ax‖² = Σ_{k=1}^{n} |λ_k|² |(x, φ_k)|² ≤ M² ‖x‖²,

where M = max_k |λ_k|. Thus

  ‖A‖ = max_k |λ_k|.
2. Let {φ_1, φ_2, ...} be an orthonormal basis for the Hilbert space H and let {λ_1, λ_2, ...} be a bounded sequence of complex numbers. For each x ∈ H, define

  Ax = Σ_k λ_k (x, φ_k) φ_k.

Then

  ‖Ax‖² = Σ_k |λ_k|² |(x, φ_k)|² ≤ m² ‖x‖²,

where m = sup_k |λ_k|. Thus ‖A‖ ≤ m. On the other hand, given ε > 0, there exists a λ_j such that |λ_j| > m − ε. Hence ‖A‖ ≥ ‖Aφ_j‖ = |λ_j| > m − ε, and therefore ‖A‖ = m.

We shall see in Chapter IV that there is a large class of operators A of the form given above.
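The finite-dimensional analogue of this example is a diagonal matrix, whose operator norm can be checked against max|λ_k| directly; a minimal sketch of mine:

```python
import numpy as np

# Sketch: for the diagonal operator Ax = Σ λ_k (x, φ_k) φ_k, truncated to
# finitely many terms, the operator norm equals max |λ_k|.
lam = np.array([0.5, -2.0, 1.5, 0.25])
A = np.diag(lam)                      # matrix of A in the basis φ_1, ..., φ_4
opnorm = np.linalg.norm(A, 2)         # spectral norm = sup ||Ax|| / ||x||
print(opnorm, np.max(np.abs(lam)))    # both equal 2.0
```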
3. Certain infinite matrices give rise to bounded linear operators on ℓ_2 as follows. Given an infinite matrix (a_ij)_{i,j=1}^{∞}, where

  Σ_{i=1}^{∞} Σ_{j=1}^{∞} |a_ij|² < ∞,  (2.1)

define A : ℓ_2 → ℓ_2 by A(α_1, α_2, ...) = (β_1, β_2, ...), β_i = Σ_{j=1}^{∞} a_ij α_j. Then A is a bounded linear operator with

  ‖A‖² ≤ Σ_{i=1}^{∞} Σ_{j=1}^{∞} |a_ij|²,

since, by Schwarz's inequality,

  Σ_i |β_i|² ≤ Σ_i (Σ_j |a_ij|²)(Σ_j |α_j|²).

Condition (2.1) is not a necessary condition for A to be bounded, since the identity matrix (a_ij) = (δ_ij) does not satisfy (2.1), yet A = I. No conditions on the matrix entries a_ij which are necessary and sufficient for A to be bounded have been found, nor has ‖A‖ been determined in the general case.
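In matrix terms the estimate above says the operator norm is bounded by the Frobenius (Hilbert–Schmidt) norm; a quick sketch of mine illustrating that the bound holds and is generally strict:

```python
import numpy as np

# Sketch: the operator norm of a matrix never exceeds its Hilbert–Schmidt
# (Frobenius) norm (ΣΣ|a_ij|²)^{1/2}.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
op = np.linalg.norm(A, 2)        # operator norm ‖A‖
hs = np.linalg.norm(A, 'fro')    # (ΣΣ|a_ij|²)^{1/2}
print(op <= hs + 1e-12)          # True
```

For the 5×5 identity the two norms are 1 and √5, mirroring the remark that (2.1) is sufficient but not necessary.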
4. Let H = L_2([a, b]) and let a(t) be a complex valued function which is continuous on [a, b]. Define A : H → H by

  (Af)(t) = a(t) f(t).

With M = max_{t∈[a,b]} |a(t)|,

  ‖Af‖² = ∫_a^b |a(t) f(t)|² dt ≤ M² ‖f‖².

Thus ‖A‖ ≤ M. To show that ‖A‖ = M, suppose M = |a(t_0)|. Define a sequence {φ_n} in H by φ_n = √(n/2) on [t_0 − 1/n, t_0 + 1/n] and zero elsewhere. Now

  ‖φ_n‖² = ∫_{t_0−1/n}^{t_0+1/n} (n/2) dt = 1,

and by the continuity of a(t),

  ‖Aφ_n‖² = ∫_{t_0−1/n}^{t_0+1/n} |a(t)|² (n/2) dt → |a(t_0)|².

Hence

  ‖A‖ ≥ |a(t_0)| = M.
5. Let k be a complex valued Lebesgue measurable function on [a, b] × [a, b] such that

  ∫_a^b ∫_a^b |k(t, s)|² ds dt < ∞.

Define K : L_2([a, b]) → L_2([a, b]) by

  (Kf)(t) = ∫_a^b k(t, s) f(s) ds.

The integral exists since for almost every t, k(t, s) f(s) is Lebesgue measurable on [a, b] and, by Schwarz's inequality,

  ∫_a^b |k(t, s) f(s)| ds ≤ (∫_a^b |k(t, s)|² ds)^{1/2} (∫_a^b |f(s)|² ds)^{1/2}.

Thus

  ‖Kf‖² ≤ ‖f‖² ∫_a^b ∫_a^b |k(t, s)|² ds dt.

Hence

  ‖K‖ ≤ (∫_a^b ∫_a^b |k(t, s)|² ds dt)^{1/2}.

Clearly, K is linear. The operator K is called an integral operator and k(t, s) is called the kernel function corresponding to K.
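The bound ‖K‖ ≤ ‖k‖ can be observed after discretization; the sketch below (mine, with a sample kernel chosen for illustration) replaces the integral by a midpoint-rule sum:

```python
import numpy as np

# Sketch: discretize (Kf)(t) = ∫_0^1 k(t,s) f(s) ds on a uniform grid; the
# operator norm of the discretized K is bounded by the L2 norm of the kernel.
n = 400
h = 1.0 / n
s = (np.arange(n) + 0.5) * h
k = np.exp(np.outer(s, s))                   # sample kernel k(t, s) = e^{ts}
K = h * k                                    # (Kf)(t_i) ≈ h Σ_j k(t_i, s_j) f(s_j)
op = np.linalg.norm(K, 2)                    # ≈ ‖K‖
kernel_norm = h * np.linalg.norm(k, 'fro')   # ≈ (∫∫ |k|² ds dt)^{1/2}
print(op <= kernel_norm + 1e-12)             # True: ‖K‖ ≤ ‖k‖
```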
6. Let S_r : ℓ_2 → ℓ_2 be defined by

However, A is unbounded on D(A); for if φ_n(x) = (1/√(2π)) e^{inx}, n = 1, 2, ..., then ‖φ_n‖ = 1 but ‖Aφ_n‖ = n‖φ_n‖ = n.

Fortunately, there is a well developed theory of unbounded linear operators which enables one to treat large classes of differential operators. The reader is referred to [DS2], [G] and [GGK1] for an extensive treatment of the subject.

Whereas a bounded real valued function can be discontinuous at every point, the situation for bounded linear operators is vastly different, as the next theorem shows.
Theorem 3.1 Let A be a linear operator which maps H_1 into H_2. The following statements are equivalent.

Proof: (i) implies (ii). Suppose A is continuous at x_0. Given ε > 0, there exists a δ > 0 such that ‖x − x_0‖ < δ implies ‖Ax − Ax_0‖ < ε. Thus if ‖w − z‖ < δ, w, z in H_1, then ‖x_0 − (x_0 + z − w)‖ < δ. Therefore,

  1 ≥ ‖A(δx)‖ = δ‖Ax‖.  □
2.4 Matrix Representations of Bounded Linear Operators

We have seen in Section 2, Example 3, how certain matrices give rise to bounded linear operators. In this section it is shown how to associate a matrix with a given bounded linear operator on a separable Hilbert space.

Suppose A ∈ L(H), where H is a Hilbert space with orthonormal basis φ_1, φ_2, .... Then for x ∈ H, x = Σ_j (x, φ_j) φ_j. It follows from the linearity and continuity of A, applied to the sequence of partial sums of this series, that

  Ax = Σ_j (x, φ_j) Aφ_j.

Now

  Aφ_j = Σ_k (Aφ_j, φ_k) φ_k.  (4.2)

Definition: Let φ_1, φ_2, ... be an orthonormal basis for a Hilbert space H and let A be in L(H). The matrix (a_ij) corresponding to A and φ_1, φ_2, ... is defined by a_ij = (Aφ_j, φ_i).
2. Let H = L_2([−π, π]) and let a(t) be a bounded complex valued Lebesgue measurable function on [−π, π]. Define A ∈ L(H) by

  (Af)(t) = a(t) f(t).

The "doubly infinite" matrix {a_jk}_{j,k=−∞}^{∞} corresponding to A and the orthonormal basis φ_n(t) = (1/√(2π)) e^{int}, n = 0, ±1, ±2, ..., is obtained as follows. Let

  â_n = (1/2π) ∫_{−π}^{π} a(t) e^{−int} dt.

Then a_jk = (Aφ_k, φ_j) = â_{j−k}, so the matrix has constant diagonals; such a matrix is called a Toeplitz matrix. The entry â_0 is located in row zero, column zero.
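A finite section of this Toeplitz matrix can be computed directly; the sketch below (my own, for the sample symbol a(t) = 2 + e^{it}) evaluates the Fourier coefficients â_n by an exact full-period midpoint rule:

```python
import numpy as np

# Sketch: the matrix of multiplication by a(t) in the basis e^{int}/√(2π) is
# the Toeplitz matrix (â_{j-k}); demonstrated for a(t) = 2 + e^{it}.
M = 4096
h = 2 * np.pi / M
t = -np.pi + (np.arange(M) + 0.5) * h
a = 2 + np.exp(1j * t)

def ahat(n):   # â_n = (1/2π) ∫ a(t) e^{-int} dt
    return h * np.sum(a * np.exp(-1j * n * t)) / (2 * np.pi)

# finite section of the doubly infinite Toeplitz matrix, rows/cols n = -2..2
T = np.array([[ahat(j - k) for k in range(-2, 3)] for j in range(-2, 3)])
print(np.round(T.real, 6))   # 2's on the diagonal, 1's on the first subdiagonal
```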
3. Let K be the integral operator on L_2([a, b]) with kernel function k ∈ L_2([a, b] × [a, b]) as in Section 2, Example 5. Given an orthonormal basis {φ_j} for L_2([a, b]), it follows from the remark at the end of the proof of Theorem 1.15.1 that Φ_ij(s, t) = φ_i(s) φ_j(t), 1 ≤ i, j, forms an orthonormal basis for L_2([a, b] × [a, b]). Now the matrix (a_ij) corresponding to K and {φ_i} is given by

  a_ij = (Kφ_j, φ_i) = ∫_a^b ∫_a^b k(t, s) φ_j(s) φ̄_i(t) ds dt.

Thus we see that each a_ij is a Fourier coefficient of k with respect to {Φ_ij}. Consequently, Parseval's equality gives

  Σ_{i,j} |a_ij|² = ∫_a^b ∫_a^b |k(t, s)|² ds dt.
So far we started with a bounded linear operator and considered its matrix representation relative to an orthonormal basis. In some problems the reverse direction is important: an infinite matrix is given, and one looks for a bounded linear operator for which the given matrix is the matrix of the operator relative to some orthonormal basis. In the sequel we shall use the following definition.

Let (a_jk)_{j,k=1}^{∞} be an infinite matrix. We say that this matrix induces a bounded linear operator A on ℓ_2 if (a_jk)_{j,k=1}^{∞} is the matrix of A with respect to the standard orthonormal basis e_1, e_2, ... of ℓ_2, that is, a_jk = (Ae_k, e_j). In this case we simply write

  A = [ a_11  a_12  a_13  ⋯
        a_21  a_22  a_23  ⋯
        a_31  a_32  a_33  ⋯
        ⋮     ⋮     ⋮       ].  (4.5)
Here δ_jk is the Kronecker delta and the diagonal entries are w_1, w_2, w_3, .... This matrix induces a bounded linear operator V on ℓ_2 if and only if sup_{j≥1} |w_j| < ∞, and in this case

  ‖V‖ = sup_{j≥1} |w_j|.  (4.6)

Indeed, if the matrix diag(w_j)_{j=1}^{∞} induces a bounded linear operator V on ℓ_2, then ‖Ve_j‖ = |w_j| ≤ ‖V‖ for each j, and hence sup_{j≥1} |w_j| ≤ ‖V‖ < ∞. Conversely, if sup_{j≥1} |w_j| < ∞, then

  V(α_1, α_2, ...) = (w_1 α_1, w_2 α_2, ...)

is a bounded linear operator on ℓ_2, and the matrix of V with respect to the standard orthonormal basis of ℓ_2 is the diagonal matrix diag(w_j)_{j=1}^{∞}. Moreover, (4.6) holds.

If the entries satisfy Σ_{i,j=1}^{∞} |a_ij|² < ∞, then the matrix with respect to the standard orthonormal basis of ℓ_2 of the operator A defined in Part 3 of Section II.2 is precisely equal to (a_ij)_{i,j=1}^{∞}.
In this section we prove the very useful result that every bounded linear functional on H is an f_y.

The motivation for the proof is based on the following observations. Suppose f is a bounded linear functional on H and suppose there exists a y ∈ H such that f(x) = (x, y) for all x ∈ H. In order to find this y, let Ker f = {x : f(x) = 0}. Then 0 = f(x) = (x, y) for all x ∈ Ker f, i.e., y ⊥ Ker f. It follows readily from the assumption that f is bounded and linear that Ker f is a closed subspace of H. The first step in finding y is to choose v ≠ 0 which is orthogonal to Ker f (assuming f ≠ 0). The existence of such a v is assured by Theorem 1.8.2. But for any α ∈ C, y = αv is also orthogonal to Ker f. In order to determine which α to choose, we note that

  α f(v) = f(αv) = f(y) = (y, y) = αᾱ ‖v‖².

  x = βy + z,  β ∈ C,  z ∈ Ker f.

  f(x) = (x, y).

(i) y ⊥ Ker f.
(ii) f(y) = (y, y).
Given x ∈ H, we know from Lemma 5.1 that there exists a β ∈ C and a z ∈ Ker f such that x = βy + z. From (i) and (ii) we get

  f(x) = f(βy) = β f(y) = β(y, y) = (βy + z, y) = (x, y).

To show that y is unique, suppose there exists a w ∈ H such that f(x) = (x, w) for all x ∈ H. Then

  0 = f(x) − f(x) = (x, y − w)

for all x ∈ H. In particular, (y − w, y − w) = 0. Hence y = w. Equation (5.1) establishes that ‖f‖ = ‖y‖.  □

If φ_1, φ_2, ... is an orthonormal basis for H, then the vector y corresponding to the functional f in the Riesz representation theorem is given by

  y = Σ_k \overline{f(φ_k)} φ_k.
  F(f) = ∫_a^b f(t) ḡ(t) dt

  f(x) = Σ_{k=1} a_k β̄_k

  w = Σ_{k=1} λ_k a_k β̄_k
Proof: Let φ_1, ..., φ_n be an orthonormal basis for Im K. Then for every x ∈ H_1,

  Kx = Σ_{i=1}^{n} (Kx, φ_i) φ_i.  (6.2)

Now for each i, f_i(x) = (Kx, φ_i) is a bounded linear functional on H_1. Therefore the Riesz representation theorem guarantees the existence of a v_i ∈ H_1 such that for all x ∈ H_1, f_i(x) = (x, v_i).

  (Kf)(t) = Σ_{j=1}^{n} w_j(t) ∫_a^b f(s) v̄_j(s) ds,
and

  A A^{−1} y = y for every y ∈ H_2.

The operator A^{−1} is called the inverse of A. Clearly, A has at most one inverse. If A is invertible, then for each y ∈ H_2 there exists one and only one x ∈ H_1 such that Ax = y, namely x = A^{−1} y. Conversely, if for each y ∈ H_2 there exists a unique x ∈ H_1 such that Ax = y, then A is invertible. The proof that A^{−1} is bounded is given in Theorem XII.4.1.

It is very useful to know that A^{−1} is continuous. For example, suppose that the equation Ax = y has a unique solution in H_1 for every y ∈ H_2. It might very well be that this equation is too difficult to solve, whereas Ax̂ = ŷ can be solved rather easily for some ŷ "close" to y. In this case the solution x̂ of this equation is "close" to the solution x of the original equation if A^{−1} is bounded, since

  ‖x − x̂‖ = ‖A^{−1}(y − ŷ)‖ ≤ ‖A^{−1}‖ ‖y − ŷ‖.
Definition: The kernel of A ∈ L(H_1, H_2), written Ker A, is the closed subspace {x ∈ H_1 : Ax = 0}.

A is called injective or one-one if Ker A = (0). A is injective if and only if Ax ≠ Az whenever x ≠ z. Thus, if A is injective, then the equation Ax = y has at most one solution.

Suppose A is in L(C^n) and (a_ij) is the n × n matrix corresponding to A and the standard basis for C^n. Then A is invertible if and only if det(a_ij) ≠ 0, in which case the matrix corresponding to A^{−1} and the standard basis is

  (1/det(a_ij)) (A_ij)^T,

where A_ij is the cofactor of a_ij.
In this case, for every y ∈ H,

  (I − K)^{−1} y = y − (1/det(a_ij)) det [ a_11  ⋯  a_1n  (y, φ_1)
                                            a_21  ⋯  a_2n  (y, φ_2)
                                            ⋮          ⋮       ⋮
                                            a_n1  ⋯  a_nn  (y, φ_n)
                                            ψ_1   ⋯  ψ_n      0    ].  (7.1)

Set

  x = y + Σ_{j=1}^{n} a_j ψ_j.  (7.2)

Then

  (x, φ_k) = (y, φ_k) + Σ_{j=1}^{n} a_j (ψ_j, φ_k) = a_k.

Hence

  (I − K)x = x − Σ_{k=1}^{n} (x, φ_k) ψ_k = x − Σ_{k=1}^{n} a_k ψ_k = y.
we shall

(a) determine those λ such that for each g ∈ L_2([0, 2π]), there exists a solution in L_2([0, 2π]) to Equation (7.7);

(b) find the solutions.

Define K : L_2([0, 2π]) → L_2([0, 2π]) by

  (Kf)(t) = ∫_0^{2π} f(s) sin(s + t) ds.

Example 5 in Section 2 shows that K is bounded and linear. From the trigonometric identity sin(α + β) = sin α cos β + cos α sin β, it follows that

  λKf = (f, φ_1) ψ_1 + (f, φ_2) ψ_2,

where

  ψ_1(t) = λ cos t, ψ_2(t) = λ sin t, φ_1(t) = sin t, φ_2(t) = cos t.

Thus λK has rank 2 if λ ≠ 0. Using the same notation as in Theorem 7.1, we get

  a_11 = 1 − (ψ_1, φ_1) = 1,  a_12 = −(ψ_2, φ_1) = −λπ,
  a_21 = −(ψ_1, φ_2) = −λπ,  a_22 = 1 − (ψ_2, φ_2) = 1,
  det(a_ij) = 1 − λ²π².
where

  b_1 = (g, φ_1) = ∫_0^{2π} g(t) sin t dt,
  b_2 = (g, φ_2) = ∫_0^{2π} g(t) cos t dt.

When λ = ±1/π, Ker(I − λK) consists of vectors of the form α_1 ψ_1 + α_2 ψ_2, where

  ( 1    −λπ ) (α_1)   (0)
  ( −λπ   1  ) (α_2) = (0),

or

  α_1 − λπ α_2 = 0
  −λπ α_1 + α_2 = 0.
The general solution to the homogeneous equation

  f(t) − (1/π) ∫_0^{2π} f(s) sin(s + t) ds = 0

is f(t) = c(cos t + sin t), c arbitrary in C, and the general solution to the homogeneous equation

  f(t) + (1/π) ∫_0^{2π} f(s) sin(s + t) ds = 0

is

  f(t) = c(cos t − sin t), c arbitrary in C.
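These homogeneous solutions can be checked numerically; a sketch of mine, discretizing the kernel sin(s + t) with a full-period midpoint rule (which is exact for trigonometric polynomials):

```python
import numpy as np

# Sketch: check that f(t) = cos t + sin t solves the homogeneous equation
# f(t) - (1/π) ∫_0^{2π} sin(s+t) f(s) ds = 0.
n = 2000
h = 2 * np.pi / n
s = (np.arange(n) + 0.5) * h
t = s
K = np.sin(np.add.outer(t, s)) * h        # discretized kernel sin(s + t)
f = np.cos(t) + np.sin(t)
residual = f - (1.0 / np.pi) * (K @ f)
print(np.max(np.abs(residual)))           # ≈ 0
```

Replacing f by cos t − sin t and the minus sign by a plus verifies the second homogeneous equation in the same way.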
2. Let a(t) be a continuous complex valued function defined on the closed interval [a, b]. Define a bounded linear operator A on H = L_2([a, b]) by

  (Af)(t) = a(t) f(t).

If a(t) ≠ 0 for all t ∈ [a, b], then A is invertible and

  ‖A^{−1}‖ = max_{t∈[a,b]} 1/|a(t)|.  (7.8)

Conversely, suppose A is invertible. We show that a(t) ≠ 0 for all t ∈ [a, b]. For f ∈ H,

  ‖Af‖ ≥ (1/‖A^{−1}‖) ‖f‖.  (7.9)

Suppose a(t_0) = 0 for some t_0 ∈ [a, b]. Then given ε > 0, there exists a δ > 0 such that |a(t)| < ε if |t − t_0| < δ. Define

  f(t) = { 1, |t − t_0| ≤ δ;  0, |t − t_0| > δ }.

Then

  ‖Af‖² = ∫_{t_0−δ}^{t_0+δ} |a(t)|² |f(t)|² dt ≤ ε² ‖f‖²,

which contradicts (7.9) when ε < 1/‖A^{−1}‖.
The next theorem, which is rather easy to prove, is nevertheless very useful. The formula for (I − A)^{−1} is suggested by the geometric series

  1/(1 − z) = Σ_{k=0}^{∞} z^k,  |z| < 1,

i.e.,

  (I − A)^{−1} = Σ_{k=0}^{∞} A^k,

and

  ‖(I − A)^{−1}‖ ≤ 1/(1 − ‖A‖).

Proof: Given y ∈ H, the series Σ_{k=0}^{∞} A^k y converges. Indeed, let s_n = Σ_{k=0}^{n} A^k y. Then for n > m,

  ‖s_n − s_m‖ ≤ Σ_{k=m+1}^{n} ‖A^k y‖ ≤ ‖y‖ Σ_{k=m+1}^{n} ‖A‖^k → 0

as m, n → ∞, since Σ_{k=0}^{∞} ‖A‖^k = 1/(1 − ‖A‖). The completeness of H ensures the convergence of {s_n}, i.e., Σ_{k=0}^{∞} A^k y converges. Define B : H → H by By = Σ_{k=0}^{∞} A^k y. B is linear and

  (I − A)By = Σ_{k=0}^{∞} A^k y − Σ_{k=0}^{∞} A^{k+1} y = y.  □
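The Neumann series is easy to watch converge in finite dimensions; a minimal sketch of mine with a matrix rescaled so that ‖A‖ = 1/2:

```python
import numpy as np

# Sketch: when ‖A‖ < 1, the partial sums of Σ A^k converge to (I - A)^{-1}.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A *= 0.5 / np.linalg.norm(A, 2)          # rescale so that ‖A‖ = 1/2
S = np.zeros_like(A)
P = np.eye(4)
for _ in range(60):                      # partial sums of Σ A^k
    S += P
    P = P @ A
exact = np.linalg.inv(np.eye(4) - A)
print(np.max(np.abs(S - exact)))         # ≈ 0; remainder ≤ (1/2)^60 / (1 - 1/2)
```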
and

  ‖A^{−1} − B^{−1}‖ ≤ ‖A^{−1}‖² ‖A − B‖ / (1 − ‖A^{−1}‖ ‖A − B‖).

Proof: Since B = A − (A − B) = A[I − A^{−1}(A − B)] and ‖A^{−1}(A − B)‖ ≤ ‖A^{−1}‖ ‖A − B‖ < 1, it follows from Theorem 8.1 that B is invertible and

  B^{−1} = Σ_{k=0}^{∞} [A^{−1}(A − B)]^k A^{−1}.  □
Corollary 8.3 Let {A_n} be a sequence of operators in L(H_1, H_2) which converges in norm to A ∈ L(H_1, H_2). Then A is invertible if and only if there exists an integer N such that A_n is invertible for n ≥ N and sup_{n≥N} ‖A_n^{−1}‖ < ∞. In this case, ‖A_n^{−1} − A^{−1}‖ → 0.

Proof: Suppose A is invertible; then it follows from Corollary 8.2 that there exists an N such that A_n is invertible for all n ≥ N and ‖A_n^{−1} − A^{−1}‖ → 0. Thus sup_{n≥N} ‖A_n^{−1}‖ < ∞.

Conversely, suppose that A_n is invertible for n ≥ N and M = sup_{n≥N} ‖A_n^{−1}‖ < ∞. Then for n ≥ N,
  Σ_{j=1}^{∞} a_ij x_j = y_i,  i = 1, 2, ...,  (9.1)

Theorem 9.1 Given the matrix (a_ij)_{i,j=1}^{∞}, where Σ_{i,j=1}^{∞} |a_ij|² < 1, the system of equations

  x_i − Σ_{j=1}^{∞} a_ij x_j = y_i,  i = 1, 2, ...,  (9.2)

has a unique solution ξ = (ξ_1, ξ_2, ...) ∈ ℓ_2 for every y = (y_1, y_2, ...) ∈ ℓ_2. The truncated system of equations

  x_i − Σ_{j=1}^{n} a_ij x_j = y_i,  1 ≤ i ≤ n,  (9.3)

has a unique solution x_1^{(n)}, ..., x_n^{(n)}, and z_n = (x_1^{(n)}, ..., x_n^{(n)}, 0, 0, ...) converges in ℓ_2 to ξ as n → ∞.
Proof: Define A ∈ L(ℓ_2) by A(α_1, α_2, ...) = (β_1, β_2, ...), where β_i = Σ_{j=1}^{∞} a_ij α_j. Let A_n be the operator defined by A_n(α_1, α_2, ...) = (β_1, β_2, ...), whose matrix is the truncation

  [ a_11  ⋯  a_1n | 0
    ⋮          ⋮   |
    a_n1  ⋯  a_nn | 0
    ---------------+---
    0     ⋯   0   | 0 ].

Then

  ‖A − A_n‖² ≤ Σ_{i=n+1}^{∞} Σ_{j=1}^{∞} |a_ij|² + Σ_{j=n+1}^{∞} Σ_{i=1}^{∞} |a_ij|² → 0 as n → ∞.
2.10 Integral Equations of the Second Kind
Hence

  ‖(I − A_n)^{−1} − (I − A)^{−1}‖ → 0  (9.4)

by Corollary 8.3. Taking (x_1^{(n)}, x_2^{(n)}, ...) = (I − A_n)^{−1}(y_1, y_2, ...), it follows from the definition of A_n that (x_1^{(n)}, ..., x_n^{(n)}) is the unique solution to (9.3) and x_j^{(n)} = y_j for j > n. Since Σ_{j=1}^{∞} |y_j|² < ∞, (9.4) implies that

  lim_{n→∞} (x_1^{(n)}, ..., x_n^{(n)}, 0, 0, ...) = lim_{n→∞} (x_1^{(n)}, ..., x_n^{(n)}, y_{n+1}, y_{n+2}, ...) = (I − A)^{−1}(y_1, y_2, ...).  (9.5)

In fact, (9.4) shows that as (y_1, y_2, ...) ranges over the 1-ball of ℓ_2, the corresponding sequences in (9.5) converge uniformly.  □
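The truncation scheme of Theorem 9.1 can be tried on a concrete matrix; the sketch below (my own, with sample entries a_ij = 1/2^{i+j+1}, for which ΣΣ|a_ij|² = 1/36 < 1) compares the truncated solutions against a large reference system:

```python
import numpy as np

# Sketch of Theorem 9.1: solve the truncated systems (I - A_n)x = y and watch
# the solutions converge to the solution of the full (here large) system.
N = 400
i = np.arange(1, N + 1)
A = 0.5 / (2.0 ** np.add.outer(i, i))     # a_ij = 1/2^{i+j+1}, ΣΣ|a_ij|² < 1
y = 1.0 / i                               # a fixed right-hand side in l2
xi = np.linalg.solve(np.eye(N) - A, y)    # reference ("full") solution
errs = []
for n in (2, 4, 8, 16):
    zn = y.copy()                         # as in the proof: x_j^{(n)} = y_j, j > n
    zn[:n] = np.linalg.solve(np.eye(n) - A[:n, :n], y[:n])
    errs.append(np.linalg.norm(zn - xi))
print(errs)    # decreasing toward 0
```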
  (Kf)(t) = ∫_a^b k(t, s) f(s) ds

is a bounded linear operator from L_2([a, b]) into L_2([a, b]) with ‖K‖ ≤ ‖k‖. Suppose ‖k‖ < 1. By Theorem 8.1, I − K is invertible. Thus for each g ∈ L_2([a, b]), there exists a unique f ∈ L_2([a, b]) such that

  f(t) − ∫_a^b k(t, s) f(s) ds = g(t).  (10.1)
Moreover, by Theorem 8.1,

    f = (I − K)⁻¹ g = Σ_{n=0}^∞ Kⁿ g.

Now

    (K²g)(t) = ∫_a^b k(t, x)(Kg)(x) dx
             = ∫_a^b k(t, x) { ∫_a^b k(x, s) g(s) ds } dx
             = ∫_a^b g(s) { ∫_a^b k(t, x) k(x, s) dx } ds
             = ∫_a^b k₂(t, s) g(s) ds,

where

    k₂(t, s) = ∫_a^b k(t, x) k(x, s) dx.

By Schwarz's inequality,

    |k₂(t, s)|² ≤ { ∫_a^b |k(t, x)|² dx } { ∫_a^b |k(x, s)|² dx }.
Therefore,

    ∫_a^b ∫_a^b |k₂(t, s)|² ds dt ≤ { ∫_a^b ∫_a^b |k(t, x)|² dx dt } { ∫_a^b ∫_a^b |k(x, s)|² dx ds },

and hence ‖k₂‖ ≤ ‖k‖².
By induction one finds that Kⁿ is the integral operator

    (Kⁿg)(t) = ∫_a^b k_n(t, s) g(s) ds,   n = 1, 2, …,   (10.3)

where k_{n+1}(t, s) = ∫_a^b k(t, x) k_n(x, s) dx and ‖k_n‖ ≤ ‖k‖ⁿ. Since ‖k‖ < 1, the series Σ_{n=1}^∞ k_n converges in L²([a, b] × [a, b]) to some k̃. Let K̃ be the integral operator with kernel function k̃. It follows from (10.3) and Schwarz's inequality that Σ_{n=1}^p Kⁿ → K̃ in norm as p → ∞. Hence

    (I − K)⁻¹ = I + K̃.
If, in addition,

    c = sup_t ∫_a^b |k(t, s)|² ds < ∞,

then for every h ∈ L²([a, b]),

    |(Kh)(t)| ≤ ‖h‖ ( ∫_a^b |k(t, s)|² ds )^{1/2} ≤ √c ‖h‖.
Replacing h by K^{n−1}g gives

    |(Kⁿg)(t)| ≤ √c ‖K^{n−1}g‖ ≤ √c ‖k‖^{n−1} ‖g‖.

Hence the series Σ_{n=1}^∞ (Kⁿg)(t) converges absolutely and uniformly on [a, b].

As an example, consider the integral equation

    f(t) − λ ∫_0^1 e^{t−s} f(s) ds = g(t),   g ∈ L²([0, 1]).   (10.6)

Here k(t, s) = λe^{t−s}, and since ∫_0^1 e^{t−x} e^{x−s} dx = e^{t−s}, we get k_n(t, s) = λⁿ e^{t−s}. Thus for |λ| < 1,

    k̃(t, s) = Σ_{n=1}^∞ λⁿ e^{t−s} = (λ/(1−λ)) e^{t−s},

and the unique solution of (10.6) is

    f(t) = g(t) + (λ/(1−λ)) ∫_0^1 e^{t−s} g(s) ds.   (10.7)
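Formula (10.7) can be checked by quadrature. In the sketch below (the grid, the trapezoid weights and the choice of g are mine) the candidate solution from (10.7) is substituted back into equation (10.6), and the residual is confirmed to be at the level of the quadrature error.

```python
import numpy as np

lam = 0.4
n = 2001
t = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))          # trapezoid weights on [0, 1]
w[0] *= 0.5
w[-1] *= 0.5
g = np.cos(3.0 * t)                    # an arbitrary right-hand side

# f(t) = g(t) + lam/(1-lam) * int_0^1 e^(t-s) g(s) ds, formula (10.7);
# the kernel factors as e^t * e^(-s), so one scalar integral suffices.
f = g + lam / (1.0 - lam) * np.exp(t) * np.sum(w * np.exp(-t) * g)

# plug f back into the left-hand side of (10.6)
residual = f - lam * np.exp(t) * np.sum(w * np.exp(-t) * f) - g
assert np.max(np.abs(residual)) < 1e-5
```

The rank-one structure of e^{t−s} is what makes the closed form (10.7) possible; the same check with a generic kernel would need a full Nyström discretization.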
where y* = (0, y₁, y₂, …). Thus S* is the forward shift operator.

In a similar way one shows that the adjoint of the bilateral forward shift V on ℓ²(Z) is equal to the bilateral backward shift on ℓ²(Z). Thus in this case V* = V⁻¹.

4. Let K ∈ L(H₁, H₂) be the operator of finite rank defined by Kx = Σ_{j=1}^n (x, u_j)v_j, with u_j ∈ H₁, v_j ∈ H₂. Then for all x ∈ H₁ and y ∈ H₂,

    (Kx, y) = Σ_{j=1}^n (x, u_j)(v_j, y) = (x, Σ_{j=1}^n (y, v_j) u_j),

so that K*y = Σ_{j=1}^n (y, v_j) u_j; in particular, K* also has finite rank.
    (Af, g) = ∫_a^b a(t) f(t) \overline{g(t)} dt = (f, āg).

The matrix corresponding to A* and φ₁, φ₂, … is the conjugate transpose (\overline{a_{ji}}) of the matrix (a_ij).
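In finite dimensions the statement above is the familiar fact that the matrix of A* with respect to an orthonormal basis is the conjugate transpose. A quick numerical sanity check (the matrix and vectors are my own random example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

# <u, v> = sum u_k * conj(v_k); np.vdot conjugates its FIRST argument
inner = lambda u, v: np.vdot(v, u)

lhs = inner(A @ x, y)              # <Ax, y>
rhs = inner(x, A.conj().T @ y)     # <x, A*y> with A* the conjugate transpose
assert abs(lhs - rhs) < 1e-10
```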
Theorem 11.2 Let K : L²([a, b]) → L²([a, b]) be the bounded linear operator defined by

    (Kf)(t) = ∫_a^b k(t, s) f(s) ds.

Then its adjoint is given by

    (K*g)(t) = ∫_a^b \overline{k(s, t)} g(s) ds.

Proof:

    (Kf, g) = ∫_a^b ( ∫_a^b k(t, s) f(s) ds ) \overline{g(t)} dt
            = ∫_a^b f(s) ( ∫_a^b k(t, s) \overline{g(t)} dt ) ds = (f, g*),

where

    g*(s) = ∫_a^b \overline{k(t, s)} g(t) dt.   □
The next theorem gives some useful relationships between the kernels and ranges of an operator and its adjoint.

Theorem 11.4 For A ∈ L(H₁, H₂), the following dual properties hold.

(i) Ker A = (Im A*)^⊥.
(ii) Ker A* = (Im A)^⊥.
(iii) \overline{Im A} = (Ker A*)^⊥.
(iv) \overline{Im A*} = (Ker A)^⊥.

For (iii), note that Corollary 1.8.3, the continuity of the inner product and (ii) imply the result.

Theorem 12.1 If A ∈ L(H) is self adjoint, then (Ker A)^⊥ = \overline{Im A}. Thus H = Ker A ⊕ \overline{Im A}.
Writing x = u + v with u ∈ M and v ∈ M^⊥, we have

    ‖Px‖² = ‖u‖² ≤ ‖u‖² + ‖v‖² = ‖u + v‖² = ‖x‖².

Thus ‖P‖ ≤ 1. But for m ≠ 0 in M, ‖P‖‖m‖ ≥ ‖Pm‖ = ‖m‖. Hence ‖P‖ ≥ 1, so that ‖P‖ = 1.

From Theorems 1.4.3 and 1.8.1 we know that if M is a closed subspace of H, then for each y ∈ H there exists a unique w ∈ M such that (y − w) ⊥ M or, equivalently, ‖y − w‖ = d(y, M). If we define Py = w, then P is the orthogonal projection of y onto M since y = w + (y − w), with w ∈ M and y − w ∈ M^⊥. For x = u + v as above,

    P(u + v) = Pu + Pv = Pu = u,

and with m = Px ∈ M,

    P(Px) = Pm = m = Px.
The following two theorems, which play a major role in functional analysis, will be used often throughout the remaining chapters of the text. The proof of Theorem 14.1, which is given in the more general setting of Banach spaces, appears in Chapter XII.

Theorem 14.1 Suppose A ∈ L(H₁, H₂) has the properties that Ker A = {0} and Im A = H₂. Then its inverse A⁻¹ is bounded.

Corollary 14.2 An operator A ∈ L(H₁, H₂) is one-one and has a closed range if and only if there exists an m > 0 such that

    ‖Ax‖ ≥ m‖x‖,   x ∈ H₁.

The proof of the next theorem is elementary. It differs from the usual proofs, which depend on the closed graph theorem or the Baire category theorem.
Theorem 14.3 Suppose {A_n} is a sequence in L(H₁, H₂) with the property that sup_n ‖A_n x‖ < ∞ for each x ∈ H₁. Then sup_n ‖A_n‖ < ∞.

(14.2)

and

(14.3)

Let x = Σ_{j=1}^∞ x_j. Note that the series converges since H₁ is complete and Σ_{j=1}^∞ ‖x_j‖ = Σ_{j=1}^∞ 4^{−j} < ∞. Since

where
Corollary 14.4 Let {A_n} and {B_n} be sequences in L(H₁, H₂) and L(H₂, H₃), respectively, where H_i is a Hilbert space, i = 1, 2, 3. Suppose Ax = lim_{n→∞} A_n x and By = lim_{n→∞} B_n y exist for each x ∈ H₁ and y ∈ H₂. Then A and B are bounded linear operators and

    BAx = lim_{n→∞} B_n A_n x,   x ∈ H₁.

Proof: Since the sequence {B_n y} converges for each y ∈ H₂, sup_n ‖B_n y‖ < ∞. Hence M = sup_n ‖B_n‖ < ∞ by Theorem 14.3. Thus ‖By‖ = lim_{n→∞} ‖B_n y‖ ≤ M‖y‖. □
2.15 Projections and One-Sided Invertibility of Operators

Definition: A Hilbert space H is said to be the direct sum of subspaces M and N, written H = M ⊕ N, if every vector v ∈ H has a unique representation of the form v = x + y, where x ∈ M and y ∈ N.

The subspace M is said to be complemented in H if there exists a closed subspace N of H such that H = M ⊕ N. Note that the representation v = x + y, x ∈ M, y ∈ N, is unique if and only if M ∩ N = {0}.

We have seen in Theorem 15.1 that if P is a projection on H, then

The projections P in examples (i), (ii) and (iv) are orthogonal projections since Im P is orthogonal to Ker P. This is not the case for the projection in example (iii) since ((α_k), (β_k)) = Σ_{k=1}^∞ α_k β̄_k need not be zero for (α_k) ∈ Im P and (β_k) ∈ Ker P.

Let H₁ and H₂ be Hilbert spaces. An operator A ∈ L(H₁, H₂) is said to be left invertible if there exists an operator L ∈ L(H₂, H₁) such that LA = I. The operator L is called a left inverse of A.
Examples: 1. Let A be the forward shift operator on ℓ², i.e., A(α₁, α₂, …) = (0, α₁, α₂, …). Then A is left invertible and every left inverse L has the form

    L((α_k)) = (α₂ + α₁β₁, α₃ + α₁β₂, …),   (15.2)

where (β_k) is an arbitrary vector in ℓ². Indeed, let Le₁ = (β_k)_{k=1}^∞. Since Le_k = LAe_{k−1} = e_{k−1} for k > 1, equality (15.2) follows.
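Formula (15.2) can be exercised on finite truncations. In the sketch below (the truncation length and the vectors are my choices) L is applied to Ax; since Ax has first coordinate 0, the arbitrary vector (β_k) drops out and LAx reproduces x, up to the last coordinate lost to truncation.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
b = rng.normal(size=N)          # the arbitrary (beta_k) in (15.2)
x = rng.normal(size=N)

def shift(a):
    """Forward shift A: (a1, a2, ...) -> (0, a1, a2, ...), truncated to length N."""
    return np.concatenate([[0.0], a[:-1]])

def left_inverse(a):
    """L from (15.2): k-th coordinate is a_{k+1} + a_1 * beta_k."""
    return np.concatenate([a[1:], [0.0]]) + a[0] * b

# LA = I on the truncation (the final coordinate falls off the end)
assert np.allclose(left_inverse(shift(x))[:N - 1], x[:N - 1])
```

Any choice of b gives the same result here, which is exactly why the left inverse is not unique: all choices agree on the range of A.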
2. Let A₁ be the bounded linear operator defined on ℓ² by

    A₁ = S − ½S²,

where S is the forward shift on ℓ². Then A₁ is left invertible, and every left inverse L₁ of A₁ has the form

    L₁((α_k)) = (r_k),  where  r_k = α₁β_k + α_{k+1} + Σ_{j=1}^{k} (1/2^{k+1−j}) α_j,   (15.3)

and (β_k) is an arbitrary vector in ℓ². This can be shown as follows. First we note that A₁ = S − ½S² = S(I − ½S). If L₁ is a left inverse of A₁, then

    L₁ S (I − ½S) = I.   (15.4)

Since S is a forward shift, we have from (15.4) and the above example that L₁(I − ½S) = L, where L is given in (15.2). Now ‖½S‖ = ½. Hence I − ½S is invertible, L₁ = L(I − ½S)⁻¹, and a simple computation shows that

    (I − ½S)⁻¹((α_k)) = (η_k),  where  η₁ = α₁,  η_k = α_k + Σ_{j=1}^{k−1} (1/2^{k−j}) α_j,  k > 1.   (15.5)
3. Let H = L²([0, 1]). For 0 < a < 1, define the operators R_a and R_{1/a} on H by

    (R_a f)(t) = f(at),   0 ≤ t ≤ 1,   (15.6)

and

    (R_{1/a} f)(t) = { f(t/a),  0 ≤ t ≤ a,
                       0,        a < t ≤ 1.   (15.7)
Now

    ‖R_a f‖² = ∫_0^1 |f(at)|² dt = (1/a) ∫_0^a |f(s)|² ds ≤ (1/a)‖f‖²,   (15.8)

with equality when f = φ, where

    φ(t) = { 1/√a,  0 ≤ t ≤ a,
             0,      a < t ≤ 1.

Hence

    ‖R_a‖ = 1/√a.   (15.9)

Similarly,

    ‖R_{1/a}‖ = √a.   (15.10)

Now for every φ ∈ H,

    (R_a R_{1/a} φ)(t) = (R_{1/a} φ)(at) = φ(t),   0 ≤ t ≤ 1,   (15.11)

and Im R_a = H. Thus R_a has a right inverse but is not invertible. Also, from (15.7) and (15.11) we have

    Ker R_{1/a} = {0}

and

    Im R_{1/a} = {g ∈ H | g = 0 on [a, 1]}.   (15.12)
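The relations (15.6)–(15.11) are easy to see by sampling. The following sketch (grid and test function are mine) confirms that R_a R_{1/a} is the identity, while R_{1/a} R_a kills everything on (a, 1] — so R_a is right invertible but not invertible.

```python
import numpy as np

a = 0.3
t = np.linspace(0.0, 1.0, 1001)
f = lambda s: np.sin(2 * np.pi * s) + s ** 2    # an arbitrary test function

# (R_a g)(t) = g(a t); (R_{1/a} g)(t) = g(t/a) on [0, a], 0 on (a, 1]
R_a   = lambda g: (lambda s: g(a * s))
R_inv = lambda g: (lambda s: np.where(s <= a, g(np.minimum(s, a) / a), 0.0))

assert np.allclose(R_a(R_inv(f))(t), f(t))         # R_a R_{1/a} = I, as in (15.11)
assert np.allclose(R_inv(R_a(f))(t)[t > a], 0.0)   # but R_{1/a} R_a is not I
```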
Theorem 15.2 Let the operator A ∈ L(H₁, H₂) have a left inverse L. Then Im A is closed and complemented in H₂ by Ker L. The operator AL is a projection onto Im A and Ker AL = Ker L.

Proof: Suppose A has a left inverse L. Clearly, Ker A ⊆ Ker LA = Ker I = {0}. The bounded operator AL is a projection since (AL)² = A(LA)L = AL. From Im AL ⊆ Im A and Im A = Im (AL)A ⊆ Im AL, we have Im AL = Im A. Also, Ker AL ⊆ Ker LAL = Ker L and Ker L ⊆ Ker AL. Thus Ker AL = Ker L and

    H₂ = Im A ⊕ Ker L.
Definition: Let M and N be subspaces (not necessarily closed) and let H = M ⊕ N. The codimension of M, written codim M, is defined to be dim N (finite or infinite).

The number codim M is independent of the choice of the subspace N. Indeed, suppose H = M ⊕ Z. Define φ : N → Z as follows: for each v ∈ N, there exists a unique u ∈ M and a unique z ∈ Z such that v = u + z. Let φ(v) = z. The linear operator φ is a one-one map from N onto Z. Hence dim Z = dim N.
Theorem 15.3 Let the operator A ∈ L(H₁, H₂) have a left inverse L ∈ L(H₂, H₁). If B is an operator in L(H₁, H₂) and ‖A − B‖ < ‖L‖⁻¹, then B has a left inverse L₁ given by

    L₁ = L (I − (A − B)L)⁻¹.   (15.14)

Moreover,

    codim Im B = codim Im A.   (15.15)

Proof: Since ‖A − B‖ < ‖L‖⁻¹, the operator I − (A − B)L is invertible and (I − (A − B)L)⁻¹ = Σ_{k=0}^∞ [(A − B)L]^k by Theorem 8.1. Hence, using the identity L(I − (A − B)L)⁻¹ = (I − L(A − B))⁻¹L,

    L₁B = (I − L(A − B))⁻¹ L (A − (A − B)) = (I − L(A − B))⁻¹ (I − L(A − B)) = I.
Example: Let A be the backward shift operator on ℓ², i.e., A(η₁, η₂, …) = (η₂, η₃, …). Then A is right invertible and every right inverse R of A has the form

    R((α_k)) = ( Σ_{k=1}^∞ α_k γ_k, α₁, α₂, … ),   (15.16)

where (γ_k) is in ℓ² and {e_k} is the standard basis of ℓ². To see this, let Re_k = (β₁, β₂, …). Then

    e_k = ARe_k = (β₂, β₃, …),

so Re_k = γ_k e₁ + e_{k+1} for some scalar γ_k. The first coordinate of R((c_k)) therefore defines a bounded linear functional f_R, and by the Riesz representation theorem there is a vector (z_k) ∈ ℓ² with

    Σ_{k=1}^∞ c_k γ_k = f_R((c_k)) = ((c_k), (z_k)) = Σ_{k=1}^∞ c_k z̄_k,

so that (γ_k) = (z̄_k) ∈ ℓ².
Theorem 15.4 Let A ∈ L(H₁, H₂) have a right inverse R. Then Im A = H₂ and the operator I − RA is a projection from H₁ onto Ker A with kernel Im R. Thus Im R is closed and

    H₁ = Ker A ⊕ Im R.   (15.17)
Theorem 15.5 Let the operator A ∈ L(H₁, H₂) have a right inverse R ∈ L(H₂, H₁). If B is an operator in L(H₁, H₂) and ‖A − B‖ < ‖R‖⁻¹, then B has a right inverse R₁ given by

    R₁ = (I − R(A − B))⁻¹ R.

Moreover,

    dim Ker A = dim Ker B.

Proof: Since ‖R(A − B)‖ < 1, the operator I − R(A − B) is invertible and (I − R(A − B))⁻¹ = Σ_{k=0}^∞ [R(A − B)]^k by Theorem 8.1. Hence

    BR₁ = B(I − R(A − B))⁻¹ R = A[I − R(A − B)] (I − R(A − B))⁻¹ R = AR = I.

Let C = I − R(A − B). From the identity B = A[I − R(A − B)] = AC and the invertibility of C, we have Ker B = C⁻¹(Ker A), and hence dim Ker A = dim Ker B.
A very important class of bounded linear operators which arises in the study of integral equations is the class of compact operators. This section presents some properties and examples of compact operators which are used in subsequent chapters.

Theorem 16.1 Suppose K and L are compact operators in L(H₁, H₂). Then

(i) K + L is compact.
(ii) If A ∈ L(H₃, H₁) and B ∈ L(H₂, H₃), where H₃ is a Hilbert space, then KA and BK are compact.

Theorem 16.2 An operator A ∈ L(H₁, H₂) is compact if and only if its adjoint A* is compact.

Then ‖A − A_n‖ → 0. Hence A is compact.
3. Let K : L²([a, b]) → L²([a, b]) be the integral operator

    (Kf)(t) = ∫_a^b k(t, s) f(s) ds,

where

    ‖K‖ ≤ ( ∫_a^b ∫_a^b |k(t, s)|² ds dt )^{1/2} = ‖k‖.   (16.2)

Choose kernels k_n of finite rank such that

    ‖k − k_n‖ → 0.   (16.3)

Let K_n be the integral operator defined on L²([a, b]) by

    (K_n f)(t) = ∫_a^b k_n(t, s) f(s) ds.
Theorem 16.4 In order that a band matrix (a_jk)_{j,k=0}^∞ (with a_jk = 0 for |j − k| > N) induce a bounded linear operator A on ℓ², it is necessary and sufficient that

    sup_{j,k} |a_jk| < ∞,   (16.5)

and in this case

    ‖A‖ ≤ Σ_{v=−N}^{N} sup_{j−k=v} |a_jk|.   (16.6)

Proof: If A is bounded, then |a_jk| = |(Ae_k, e_j)| ≤ ‖A‖ for each j and k. Hence (16.5) holds. To prove the reverse implication, assume (16.5) holds. For v ≥ 0 let D_v and D_{−v} be the operators on ℓ² defined by the diagonals j − k = v and j − k = −v of (a_jk), respectively. Condition (16.5) implies that the operator D_n is bounded for each integer n and

    ‖D_n‖ = sup_{j−k=n} |a_jk|.   (16.8)

Let A = Σ_{n=−N}^{N} D_n. Then A is a bounded linear operator on ℓ², and the matrix of A with respect to the standard basis of ℓ² is precisely the band matrix (a_jk)_{j,k=0}^∞. Thus (16.5) implies that the matrix (a_jk)_{j,k=0}^∞ induces a bounded linear operator on ℓ². Furthermore, ‖A‖ ≤ Σ_{n=−N}^{N} ‖D_n‖. By combining this with (16.8) we obtain (16.6). Notice that (16.6) implies that

and hence A − Q_nAQ_n is of finite rank for each n. From (16.9) it follows that

and hence (16.7) implies that ‖Q_nAQ_n‖ → 0 for n → ∞. Thus in the operator norm, A is the limit of a sequence of finite rank operators, and hence A is compact.
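The bound (16.6) can be tested on a finite band matrix, where the operators D_v are the individual diagonals and each has norm equal to the largest modulus on its band. The example below is my own:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 60, 2                                  # matrix size and band half-width
A = np.zeros((n, n))
for v in range(-N, N + 1):                    # fill the band j - k = v
    j = np.arange(max(0, v), min(n, n + v))
    A[j, j - v] = rng.uniform(-1, 1, size=len(j))

# sup of |a_jk| over each band j - k = v (offset -v in NumPy's convention)
band_sup = [np.max(np.abs(np.diagonal(A, offset=-v))) for v in range(-N, N + 1)]

# spectral norm is dominated by the sum of the band suprema, as in (16.6)
assert np.linalg.norm(A, 2) <= sum(band_sup) + 1e-12
```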
Consider the infinite system of equations

    c₁₁x₁ + c₁₂x₂ + ··· = y₁
    c₂₁x₁ + c₂₂x₂ + ··· = y₂   (i)
    ···

where (y₁, y₂, …) is in ℓ². As in Theorem 9.1, the natural approach to approximating a solution to (i) is the following: for all n sufficiently large, find a solution (x₁⁽ⁿ⁾, …, x_n⁽ⁿ⁾) to the finite system

    c₁₁x₁ + ··· + c₁ₙxₙ = y₁
    c₂₁x₁ + ··· + c₂ₙxₙ = y₂   (ii)
    ···

We shall assume that Σ_{j=1}^∞ Σ_{k=1}^∞ |c_jk|² < ∞, so that C is in L(ℓ²) (Example 3 in Section 2). Define P_n ∈ L(ℓ²) by

    P_n(x₁, x₂, …) = (x₁, …, x_n, 0, 0, …).

Note that

    P_n² = P_n,  ‖P_n‖ = 1  and  P_n x → x for every x ∈ ℓ².
This projection method for A is said to converge if there exists an integer N such that for each y ∈ H and n ≥ N, there exists a unique solution x_n to equation (17.1) and, in addition, the sequence {x_n} converges to A⁻¹y.

We denote by Π(P_n) the set of invertible operators A for which the above projection method for A converges.

Unless a specific sequence of projections is used, we shall always assume that we have the general sequence {P_n} with the properties above. Since P_n x → x for each x ∈ H, sup_n ‖P_n x‖ < ∞ for each x ∈ H. Hence

Proof: Suppose there exists an integer N such that for n ≥ N and y ∈ H, the equation P_nAP_n x = P_n y has a unique solution x_n ∈ Im P_n. In addition, assume (17.3) holds. By (17.2) and (17.3),

Hence

    x_n = x_n − P_n x + P_n x → x = A⁻¹y,

which shows that A ∈ Π(P_n). Conversely, assume A ∈ Π(P_n). Then there exists an integer N such that for n ≥ N, the operator P_nAP_n restricted to Im P_n is a one-one map from the Hilbert space Im P_n onto Im P_n. Hence (P_nAP_n)⁻¹ is a bounded linear operator on Im P_n by Theorem 14.1. Since x_n = (P_nAP_n)⁻¹P_n y is the unique solution to equation (17.1) for n ≥ N and since A ∈ Π(P_n), we have that

Hence

    m = sup_{n≥N} ‖(P_nAP_n)⁻¹P_n‖ < ∞.   □
Corollary 17.2 Let {P_n} be a sequence of projections on a Hilbert space H. Suppose ‖P_n‖ = 1, n = 1, 2, …, and P_n x → x for every x ∈ H. If A ∈ L(H) with ‖I − A‖ < 1, then A ∈ Π(P_n).

Proof: Let B = I − A. Since ‖P_nBP_n‖ ≤ ‖B‖ < 1, both A = I − B and I − P_nBP_n are invertible. It is easy to see that (I − P_nBP_n)⁻¹ maps Im P_n onto Im P_n. Thus it follows from (17.4) that P_nAP_n is invertible on Im P_n and, by Theorem 8.1,

    ‖(I − P_n(I − A)P_n)⁻¹‖ ≤ 1 / (1 − ‖P_n(I − A)P_n‖) ≤ (1 − ‖I − A‖)⁻¹.   (17.6)

The corollary now follows from (17.5), (17.6) and Theorem 17.1. □
Let P_n be the orthogonal projection of ℓ² onto the first n coordinates, i.e.,

    P_n(x₁, x₂, …) = (x₁, …, x_n, 0, 0, …),

and hence P_n x → x for each x ∈ ℓ². Now, let A be invertible on ℓ². We say that the finite section method converges for A if the projection method for A relative to the projections {P_n}_{n=0}^∞ converges, that is, A ∈ Π(P_n). In this setting we identify Im P_n with Cⁿ, and we refer to P_nAP_n as the nth section of A.

Similarly, for an invertible operator A on ℓ²(Z) we say that the finite section method converges for A if A ∈ Π(P_n), where now P_n is the orthogonal projection on ℓ²(Z) given by

    P_n(…, x_{−1}, x₀, x₁, …) = (…, 0, x_{−n}, …, x₀, …, x_n, 0, …).

Example: Consider the operator A on ℓ²(Z) with the two-diagonal matrix

    A = ( ⋱   ⋱
          −c  [1]  0
           0  −c   1
                ⋱  ⋱ ).

Here |c| < 1, and, as usual, the box indicates the entry in the zero-zero position. We shall show that the finite section method converges for A.
Notice that A = I − cV, where V is the forward shift on ℓ²(Z). Since ‖V‖ = 1, we have ‖cV‖ < 1, and hence A is invertible with

    A⁻¹ = Σ_{j=0}^∞ c^j V^j.

It follows that the matrix of A⁻¹ relative to the standard basis of ℓ²(Z) is given by

    A⁻¹ = ( ⋱
            … 1   0   0   0  0
            … c   1   0   0  0
            … c²  c  [1]  0  0
            … c³  c²  c   1  0
            … c⁴  c³  c²  c  1
                              ⋱ ).
The nth section A_n = P_nAP_n|Im P_n has the matrix

    A_n = ( 1   0  …  0
            −c  1  …  0
            0  −c  1
            ⋮       ⋱
            0   0  …  1 ).

A straightforward calculation shows that A_n is invertible and

    A_n⁻¹ = ( 1   0  …  0
              c   1  …  0
              c²  c   1
              ⋮        ⋱
                        1 ),

so that sup_n ‖A_n⁻¹‖ ≤ Σ_{j=0}^∞ |c|^j < ∞, and hence for the operator A the finite section method converges.

The above conclusion can also be derived directly from Corollary 17.2. Indeed, I − A = cV, and thus ‖I − A‖ < 1.
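A small computation confirms the picture for A = I − cV (the sizes and the value of c are my choices): every section is invertible, and the inverses are uniformly bounded by Σ|c|^j = 1/(1 − |c|).

```python
import numpy as np

c = 0.6
norms = []
for n in (5, 10, 20, 40):
    # n-th section: 1 on the diagonal, -c on the first subdiagonal
    An = np.eye(n) - c * np.diag(np.ones(n - 1), -1)
    norms.append(np.linalg.norm(np.linalg.inv(An), 2))

# uniform bound sup_n ||A_n^{-1}|| <= 1/(1-|c|), so the method converges
assert all(nm <= 1.0 / (1.0 - c) + 1e-9 for nm in norms)
```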
Example: Consider next the operator B on ℓ²(Z) with the matrix

    B = ( −c   0    0
           1  [−c]  0
           0   1   −c
                ⋱   ⋱ ).

Again the matrix is two-diagonal and |c| < 1. Notice that B differs from A in the previous example only in the fact that the order of the two diagonals with nonzero entries is interchanged.
Recall that V is invertible, and that V⁻¹ is the backward shift on ℓ²(Z). Thus

    B = −cI + V = (I − cV⁻¹)V.

Since ‖cV⁻¹‖ = |c| < 1, we conclude that B is invertible, and

    B⁻¹x = V⁻¹(I − cV⁻¹)⁻¹ x = Σ_{j=0}^∞ c^j V^{−j−1} x,

for each x ∈ ℓ²(Z). Thus

    B⁻¹ = ( 0  1   c   c²  c³
            0  0   1   c   c²
            0  0  [0]  1   c
            0  0   0   0   1
            0  0   0   0   0
                            ⋱ ).
The nth section B_n of B has the matrix

    B_n = ( −c   0  …   0
             1  −c  …   0
             ⋮       ⋱
             0   …   1  −c ).

A straightforward calculation shows that

    B_n⁻¹ = −(1/c) ( 1     0    …  0
                     1/c   1       0
                     1/c²  1/c  1
                     ⋮             ⋱ ).

By comparing the matrix of B⁻¹ with those of B₁⁻¹, B₂⁻¹, … we see that for B the finite section method does not converge. This conclusion also follows from Theorem 17.1 because in this case

    ‖(P_nBP_n)⁻¹P_n‖ → ∞   (n → ∞).
Let us remark that in this example the operator B is invertible and the equation P_nBP_n x = P_n y has a unique solution x_n ∈ Im P_n for each y ∈ ℓ²(Z), but the sequence x₁, x₂, … need not converge. For instance, if y = (δ_{j0})_{j∈Z}, then ‖x_n‖ → ∞ as n → ∞.
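The failure for B is just as easy to see numerically (again c and the sizes are my choices): the sections B_n are all invertible, but ‖B_n⁻¹‖ grows like |c|^{−n}, so no uniform bound exists.

```python
import numpy as np

c = 0.6
norms = []
for n in (5, 10, 20):
    # n-th section: -c on the diagonal, 1 on the first subdiagonal
    Bn = -c * np.eye(n) + np.diag(np.ones(n - 1), -1)
    norms.append(np.linalg.norm(np.linalg.inv(Bn), 2))

assert norms[0] < norms[1] < norms[2]   # the inverses blow up with n
assert norms[2] > 100.0                 # already huge at n = 20
```

This is the numerical signature of Theorem 17.1: invertibility of B alone is not enough; the section inverses must stay uniformly bounded.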
Lemma 17.3 Let H be a Hilbert space. Given A ∈ L(H), suppose there exists a number c > 0 such that

    |(Ax, x)| ≥ c‖x‖²   (17.7)

for all x ∈ H. Then A is invertible and ‖A⁻¹‖ ≤ 1/c.

Proof: Since

    ‖Ax‖‖x‖ ≥ |(Ax, x)| ≥ c‖x‖²,

we have

    ‖Ax‖ ≥ c‖x‖.   (17.8)

Thus A is injective and its range Im A is closed by Corollary 14.2. We show that Im A = H. Let P be the orthogonal projection onto Im A. Given y ∈ H, we know that y − Py ∈ Ker P = (Im A)^⊥. Hence

    c‖y − Py‖² ≤ |(A(y − Py), y − Py)| = 0,

since A(y − Py) ∈ Im A. Therefore, y = Py ∈ Im A. The inequality ‖A⁻¹‖ ≤ 1/c follows from inequality (17.8). □
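A finite-dimensional illustration of Lemma 17.3 (my own example): for A = I + N with N skew-symmetric, (Ax, x) = ‖x‖², so the lemma applies with c = 1 and predicts ‖A⁻¹‖ ≤ 1.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(6, 6))
N = M - M.T                    # skew-symmetric, so (Nx, x) = 0 for real x
A = np.eye(6) + N

x = rng.normal(size=6)
# coercivity (17.7) with c = 1: |(Ax, x)| = ||x||^2
assert abs(x @ (A @ x)) >= (x @ x) - 1e-12

# the conclusion ||A^{-1}|| <= 1/c = 1
assert np.linalg.norm(np.linalg.inv(A), 2) <= 1.0 + 1e-9
```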
Theorem 17.4 Let H be a Hilbert space. Given A ∈ L(H), suppose there exists a number c > 0 such that

    |(Ax, x)| ≥ c‖x‖²

for all x ∈ H. Let {P_n} be a sequence of orthogonal projections on H with the property that P_n x → x for all x ∈ H. Then A ∈ Π(P_n).
Next we show that the projection method is stable under certain perturbations.

and

    sup_{n≥N} ‖(P_n(A + B)P_n)⁻¹‖ ≤ 2M.   (17.14)
Lemma 17.8 Suppose {T_n} is a sequence of operators in L(H₂) such that T_n y → Ty for all y ∈ H₂. If K is a compact operator in L(H₁, H₂), then

    ‖(T_n − T)K‖ → 0.   (17.15)

Proof: It follows from Theorem 14.3 that M = sup_n ‖T_n‖ < ∞. Suppose (17.15) does not hold. Then there exist an ε > 0, a subsequence {n′} of {n}, and vectors {x_{n′}} such that ‖x_{n′}‖ = 1 and

    ‖(T_{n′} − T)K x_{n′}‖ ≥ ε.   (17.16)

Since K is compact, there exists a subsequence {x_{n″}} of {x_{n′}} such that Kx_{n″} → y ∈ H₂. Hence
    (17.18)

moreover,

    ‖(P_nAP_n)⁻¹P_nK − A⁻¹K‖ → 0.   (17.19)

Indeed, from the definition of convergence of the projection method for A, we know that

    (P_nAP_n)⁻¹P_n K x → A⁻¹K x,   x ∈ H.

Hence (17.19) follows from Lemma 17.8. Since I + A⁻¹K = A⁻¹(A + K) is invertible, we have from (17.19) and Corollary 1.8.3 that I + (P_nAP_n)⁻¹P_nK is invertible for n ≥ N₁ ≥ N and

Since (I + (P_nAP_n)⁻¹P_nK)⁻¹ maps Im P_n onto Im P_n, it follows from (17.17), (17.18) and (17.20) that
then A is one-one and has a closed range. Indeed, suppose M = sup_n ‖(P_nAP_n)⁻¹‖ < ∞. Then for all x ∈ H,

    ‖P_nAP_n x‖ ≥ (1/M) ‖P_n x‖.

Hence

    ‖Ax‖ = lim_{n→∞} ‖P_nAP_n x‖ ≥ (1/M) lim_{n→∞} ‖P_n x‖ = (1/M)‖x‖.

Thus

    ‖Ax‖ ≥ (1/M)‖x‖ for all x ∈ H.
Let {P_n} be a sequence of projections in L(H) with the property that P_n x → x for all x ∈ H. In the ordinary projection method for {P_n}, the solution x of an equation Ax = y, where A is an invertible operator on H, is approximated by solutions x_n ∈ Im P_n of the equations

    P_nAP_n x_n = P_n y.   (18.1)

In the modified projection method these equations are replaced by

    F_n x_n = P_n y,   (18.2)

where F_n ∈ L(Im P_n). We refer to this way of inverting A as the modified projection method. One of the reasons for replacing (18.1) by (18.2) is that it is sometimes easier to invert the operators F_n than the operators P_nA|Im P_n.

The next result is the analogue of Theorem 17.1 for the modified projection method.
Theorem 18.1 Let {P_n} be a sequence of projections in L(H) such that P_n x → x for each x ∈ H, and let A be an invertible operator on H. Also, let F_n ∈ L(Im P_n), n = 1, 2, …, be a sequence of operators satisfying (18.3), and assume that for n ≥ N the operator F_n is invertible. Then

    x_n = F_n⁻¹P_n y → A⁻¹y for each y ∈ H   (18.4)

if and only if

    sup_{n≥N} ‖F_n⁻¹‖ < ∞.   (18.5)

The second term in the right hand side converges to zero. The first can be evaluated as follows: recall that P_n y → y. Using (18.3) with x = A⁻¹y and (18.5) we see that the preceding inequality implies that the first term in the right hand side of (18.6) converges to zero when n → ∞. Thus (18.4) is proved. □
We shall use the modified projection method to show that A is invertible and to find its inverse. For each n we let P_n be the usual orthogonal projection onto the first n + 1 coordinates:

    P_n(x₀, x₁, x₂, …) = (x₀, …, x_n, 0, 0, …).

Here we identify Im P_n with C^{n+1}. Notice that F_n differs from P_nAP_n only in the entry in the lower right corner. It follows that F_n − P_nAP_n tends to zero in norm, and hence (18.3) holds. A direct computation shows that
    ( 1      2⁻¹  2⁻²  …  2⁻ⁿ
      3⁻¹    1    2⁻¹  …  2⁻ⁿ⁺²
      ⋮                ⋱   ⋮    )   (18.7)

Thus we can apply Theorem 18.1 to show that for this example the modified projection method converges, that is, x_n → A⁻¹y for each y ∈ H.
2. If, in Example 1, (a_ij) is lower triangular, i.e., a_ij = 0 if i < j, then for each n, sp{φ_{n+1}, φ_{n+2}, …} is A-invariant.

3. Given k ∈ L²([a, b] × [a, b]), define K on L²([a, b]) by

    (Kf)(t) = ∫_a^b k(t, s) f(s) ds.

is invariant under K.

Theorem 19.2 A closed subspace M ⊆ H is A-invariant if and only if AP = PAP, where P is the orthogonal projection onto M.
Proof: If AM ⊆ M, then for each u ∈ H, APu ∈ AM ⊆ M. Therefore, PAPu = APu. Conversely, if AP = PAP, then for v ∈ M,

    Av = APv = PAPv ∈ M.   □

Given A ∈ L(H) and given a closed subspace M ⊆ H, let P be the orthogonal projection onto M. Then Q = I − P is the orthogonal projection onto M^⊥ and x = Px + Qx. If we identify each x ∈ H with the column vector (Px, Qx)ᵀ, then the operator A can be represented as a block matrix

    A = ( A₁₁  A₁₂
          A₂₁  A₂₂ ),

corresponding to the decomposition

    A = PAP + QAP + PAQ + QAQ.
For a linear operator on a finite dimensional Hilbert space H it is well known from linear algebra that the equation Ax − λx = y has a unique solution for every y ∈ H if and only if det(λδ_ij − a_ij) ≠ 0, where δ_ij is the Kronecker delta and (a_ij) is the matrix corresponding to A and some orthonormal basis of H. Therefore, in this case λI − A is invertible for all but a finite number of λ.

If H is infinite dimensional, then the set σ(A) of those λ for which λI − A is not invertible is more complex. In this section we touch upon some properties of σ(A).
Theorem 20.1 The resolvent set ρ(A) of A ∈ L(H) is an open set containing {λ : |λ| > ‖A‖}. Hence σ(A) is a closed bounded set contained in {λ : |λ| ≤ ‖A‖}. Furthermore, for λ ∈ ∂σ(A), the boundary of σ(A), the operator λI − A is neither left nor right invertible.

Proof: If |λ| > ‖A‖, then λI − A = λ(I − A/λ) is invertible since ‖A/λ‖ < 1. Thus λ ∈ ρ(A). Suppose λ₀ ∈ ρ(A). Since λ₀I − A is invertible, we have by Corollary 8.2 that λI − A is invertible if |λ − λ₀| < ‖(λ₀I − A)⁻¹‖⁻¹. Hence ρ(A) is an open set.

Next, take λ ∈ ∂σ(A), and assume λI − A is left invertible. Since σ(A) is closed, λ belongs to σ(A), and hence λI − A is not two-sided invertible. It follows that codim Im(λI − A) ≠ 0. By Theorem 15.3 we have

    codim Im(λ′I − A) = codim Im(λI − A)

whenever |λ − λ′| is sufficiently small. But each open neighborhood of λ contains points of the resolvent set of A. In particular, each open neighborhood of λ contains points λ′ such that codim Im(λ′I − A) = 0. We have reached a contradiction. Thus λI − A is not left invertible. In a similar way, using Theorem 15.5, one proves that λI − A cannot be right invertible. □
Examples: 1. Let K be the finite rank operator on the Hilbert space H given by

    Kx = Σ_{j=1}^n (x, φ_j) ψ_j.

2. Let A be the operator on ℓ² given by

    Ax = Σ_k λ_k (x, φ_k) φ_k,

where {φ_k} is an orthonormal basis; the matrix of A with respect to {φ_k} is the infinite diagonal matrix with diagonal entries λ₁, λ₂, …. Then

    σ(A) = {λ ∈ C : inf_j |λ − λ_j| = 0}.
Indeed, if inf_j |λ − λ_j| > 0, then λI − A has a bounded inverse B, the diagonal operator with diagonal entries (λ − λ_j)⁻¹, and

    ‖B‖ = sup_j |1/(λ − λ_j)| = ( inf_j |λ − λ_j| )⁻¹ < ∞,

and thus ‖(λI − A)φ_{k′}‖ ≥ ‖(λI − A)⁻¹‖⁻¹. Thus λ ∈ ρ(A) implies (20.1).
3. Let A be the backward shift on ℓ², i.e., A(ξ₁, ξ₂, …) = (ξ₂, ξ₃, …). Since |λ| ≤ 1 if and only if |λ̄| ≤ 1, we get that σ(S) = σ(A*) is the closed unit disc.

4. Let a(·) be a continuous complex valued function defined on the closed bounded interval [c, d]. Define a bounded linear operator A on H = L²([c, d]) by

    (Af)(t) = a(t) f(t),   c ≤ t ≤ d.
If |λ| < 1/√a, then ‖λR_{1/a}‖ < 1 and therefore I − λR_{1/a} is invertible. Since R_a is not invertible and

    λI − R_a = R_a(λR_{1/a} − I),

the operator λI − R_a is not invertible for |λ| < 1/√a.
If ρ₁(A) ⊆ σ(A₀), then for all λ ∈ ρ₁(A), the operator λI − A₀ is strictly left invertible and 0 < codim Im(λI − A₀) is constant on ρ₁(A).

Proof: Obviously,

The set ρ₁(A) ∩ ρ(A₀) is open. Also, ρ₁(A) ∩ σ(A₀) is open. To see this, suppose λ ∈ ρ₁(A) ∩ σ(A₀). Let P be the orthogonal projection onto H₀. Then (λI − A)⁻¹|H₀ is a left inverse for λI − A₀ on H₀ since
Proof: The open set ρ(A) is not contained in σ(A₀) since ρ(A) is unbounded. Hence ρ(A) ⊆ ρ(A₀) by Theorem 19.2. Thus σ(A₀) ⊆ σ(A). □

Proof: Suppose λ ∈ ρ(A). Then there exists an open disc D such that λ ∈ D ⊆ ρ(A). Since λ ∈ ρ(A₀), D ∩ ρ(A₀) ≠ ∅. Also, λ ∈ D ∩ σ(A₀). This contradicts Theorem 19.2, as D is an open connected subset of ρ(A).
An operator U ∈ L(H) is called an isometry if ‖Ux‖ = ‖x‖ for each x ∈ H. If, in addition, Im U = H, the operator U is said to be unitary. Notice that a unitary operator U is invertible, and both U and U⁻¹ have norm 1. This implies that for a unitary operator U the spectrum belongs to the unit circle, i.e.,

    σ(U) ⊆ {λ ∈ C : |λ| = 1}.   (20.4)

Indeed, for |λ| > 1 we have λ ∈ ρ(U) by Theorem 20.1, and for |λ| < 1 we have ‖λU⁻¹‖ < 1, and thus λI − U = −U(I − λU⁻¹) is again invertible. In general, the inclusion in (20.4) can be strict. For instance, if U = I, where I is the identity operator on H, then σ(U) = {1}. □
    (R_α φ)(t) = φ(αt).   (20.5)

Also Im(√α R_α) = H, for given ψ ∈ H, take φ(t) = ψ(t/α), 0 ≤ t < ∞. Then (R_α φ)(t) = ψ(t). Hence √α R_α is unitary, and by (20.4) we have σ(√α R_α) ⊆ {λ : |λ| = 1}, which implies

    σ(R_α) ⊆ {λ ∈ C : |λ| = 1/√α}.   (20.6)
    (20.7)

and

    (R⁰_{1/α} φ)(t) = { 0,        0 ≤ t ≤ 1,
                        φ(t/α),  t > 1.

Then

    ‖R⁰_{1/α}‖ = √α.   (20.9)

Indeed, define φ(t) = 1 on an interval of length 1/α and φ(t) = 0 otherwise. Then φ ∈ H₀, and from (20.8) we have ‖R⁰_{1/α}φ‖ = 1 and

    ‖φ‖² = ∫ 1 dt = 1/α.

Hence

    ‖R⁰_{1/α}‖ (1/√α) = ‖R⁰_{1/α}‖ ‖φ‖ ≥ ‖R⁰_{1/α} φ‖ = 1,

so that ‖R⁰_{1/α}‖ ≥ √α.
Moreover,

    R⁰_{1/α} R⁰_α φ = φ,   φ ∈ H₀,   (20.10)

since

    (R⁰_{1/α} R⁰_α φ)(t) = { 0,                         0 ≤ t ≤ 1,
                             R⁰_{1/α}(φ(αt)) = φ(t),   t > 1.

The operator R⁰_α is not invertible since Im R⁰_α ≠ H₀. To see this, define

From (20.5) we have ‖R⁰_α‖ = 1/√α. Hence if |λ| > 1/√α, then λ ∈ ρ(R⁰_α). For |λ| < 1/√α we have ‖λR⁰_{1/α}‖ < (1/√α)·√α = 1 by (20.9). Thus I − λR⁰_{1/α} is invertible. Equality (20.10) shows that
where a₋₁, a₀ and a₁ are real numbers. On H we consider the operator A of multiplication by a, that is,

    (Af)(t) = a(t) f(t).

Example 4 shows that the spectrum σ(A) consists of those λ ∈ C for which

    λ = a₋₁ e^{−it} + a₀ + a₁ e^{it}   (20.13)

for some t. Taking the real and imaginary parts in (20.13) and using e^{it} = cos t + i sin t gives

    Re λ = a₀ + (a₋₁ + a₁) cos t,   (20.14)
    Im λ = (a₁ − a₋₁) sin t.   (20.15)

Hence

    ( (Re λ − a₀)/(a₋₁ + a₁) )² + ( Im λ/(a₁ − a₋₁) )² = 1.   (20.17)
Thus λ lies on the ellipse given by (20.17). Conversely, if λ lies on the above ellipse, then there exists a t ∈ [−π, π] such that (20.16) holds. Hence for this choice of t, equation (20.13) is valid. So, if |a₁| ≠ |a₋₁|, then σ(A) is precisely the ellipse given by (20.17).

Case 2: Assume a₁ = a₋₁. Then Im λ = 0, and from (20.14) and (20.15) we get λ = a₀ + 2a₁ cos t. Thus σ(A) is a real line segment, namely

    σ(A) = {λ = a₀ + r : −2|a₁| ≤ r ≤ 2|a₁|}.

Case 3: Finally, assume a₁ = −a₋₁. Then we see from (20.14) and (20.15) that

    Re λ = a₀,   Im λ = 2a₁ sin t.

Thus

    σ(A) = {λ = a₀ + ir : −2|a₁| ≤ r ≤ 2|a₁|}.
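The ellipse (20.17) can be confirmed by direct evaluation of the symbol; the coefficient values below are my own choices (with |a₁| ≠ |a₋₁| so that Case 1 applies).

```python
import numpy as np

a_m1, a0, a1 = 0.5, 0.2, 1.0       # a_{-1}, a_0, a_1 with |a_1| != |a_{-1}|
t = np.linspace(-np.pi, np.pi, 1000)

# every spectrum point lambda = a_{-1} e^{-it} + a_0 + a_1 e^{it}, as in (20.13)
lam = a_m1 * np.exp(-1j * t) + a0 + a1 * np.exp(1j * t)

# check the ellipse equation (20.17)
lhs = ((lam.real - a0) / (a_m1 + a1)) ** 2 + (lam.imag / (a1 - a_m1)) ** 2
assert np.allclose(lhs, 1.0)
```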
We conclude with a remark about the operator A. Recall (see Section 1.13) that the functions

    φ_n(t) = (1/√(2π)) e^{int},   n ∈ Z,   (20.18)

form an orthonormal basis of H = L²([−π, π]). The matrix of A with respect to this basis is the doubly infinite tridiagonal matrix

    ( ⋱
      a₋₁  a₀    a₁    0    0
      0    a₋₁  [a₀]   a₁   0
      0    0    a₋₁   a₀   a₁
                            ⋱ ).

Here the box indicates the entry in the (0, 0)-position. Operators defined by matrices of the above type are examples of Laurent operators. The latter operators will be studied in the first section of the next chapter.
From Example 7 above we derive the following result.

Corollary 20.5 The spectrum of the forward shift V on ℓ²(Z) is equal to the unit circle.

Proof: Let F be the map which assigns to each f in L²([−π, π)) the sequence of Fourier coefficients of f relative to the orthonormal basis (20.18). Then F is an invertible operator from L²([−π, π)) onto ℓ²(Z). Put A = F⁻¹VF. Then σ(V) = σ(A). But A is the operator considered in Example 7 with a₁ = 1 and a₀ = a₋₁ = 0. The spectrum of A is given by (20.17), which in this case is the unit circle. □
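The conjugation used in the proof — F turning multiplication by e^{it} into the shift V — has a discrete analogue that can be checked with the FFT (the grid size and the test function are mine):

```python
import numpy as np

m = 256
t = 2 * np.pi * np.arange(m) / m           # uniform grid on [0, 2*pi)
f = np.exp(np.cos(t)) + 1j * np.sin(2 * t) # an arbitrary periodic test function

coef = lambda g: np.fft.fft(g) / m         # discrete Fourier coefficients
c_f = coef(f)
c_ef = coef(np.exp(1j * t) * f)            # coefficients of e^{it} f

# multiplication by e^{it} shifts the coefficient sequence forward by one
assert np.allclose(c_ef, np.roll(c_f, 1))
```

This is exactly the identity A = F⁻¹VF in finite form: multiplication by e^{it} acts on Fourier coefficients as the forward shift.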
Exercises II
2. Let D_w be as in problem 1 and let inf_j |w_j| > 0 and sup_j |w_j| < ∞. Which of the following equalities or inequalities hold for any w?

    ‖A‖ ≤ Σ_{k=−∞}^∞ sup_j |a_{j,j−k}|.
    A = ( A₁  0
          0   A₂ ),

i.e., A(u₁, u₂) = (A₁u₁, A₂u₂). Prove that A is in L(H) and ‖A‖ = max(‖A₁‖, ‖A₂‖).
6. For each n ∈ N, let U_n be the operator on ℓ² given by U_nξ = (ξ_n, ξ_{n−1}, …, ξ₁, 0, 0, …). Find Im U_n, Ker U_n, ‖U_n‖ and a matrix representation for U_n with respect to the standard basis.
7. Let a(t) be a continuous complex valued function on [a, b]. Define A : L²[a, b] → L²[a, b] by (Af)(t) = a(t)f(t). Find Ker A and Im A.

8. Given the operator A in problem 7, prove that Im A is closed if and only if a(t) ≠ 0 for all t ∈ [a, b] or a(t) is identically zero on [a, b].

9. Does the statement of problem 8 remain true if a(t) is allowed to be discontinuous?
10. Let D_w be an operator on ℓ² as defined in problem 1. Prove that Im D_w is closed if and only if inf{|w_j| : w_j ≠ 0} > 0.
13. For the following operators K of finite rank, find an orthonormal basis for which K has a diagonal matrix representation.

(c) (Kφ)(t) = t ∫_{−π}^{π} φ(x) cos x dx + cos t ∫_{−π}^{π} x φ(x) dx, K : L²[−π, π] → L²[−π, π].
15. Given vectors φ₁, …, φ_n, ψ₁, …, ψ_n in a Hilbert space H, let N = span{φ₁, …, φ_n, ψ₁, …, ψ_n}. Define K ∈ L(H) by Kv = Σ_{j=1}^n (v, φ_j) ψ_j. Prove that KN ⊆ N and that K N^⊥ = {0}.
17. Let H be a Hilbert space and let K ∈ L(H) be an operator of finite rank given by

    Kv = Σ_{j=1}^n (v, φ_j) ψ_j,   φ_j, ψ_j ∈ H, j = 1, …, n.
Which of the following statements are true and which are not?
18. Let H be a Hilbert space and let K₁, K₂ ∈ L(H) be two operators of finite rank. Prove that rank(K₁ + K₂) ≤ rank K₁ + rank K₂.
19. Let H be a Hilbert space and let B, C, D ∈ L(H). On H⁽³⁾ = H ⊕ H ⊕ H define A by the matrix

Prove

(a) A ∈ L(H⁽³⁾),
(b) A³ = 0.
    ξ₁ + μ₁ξ₂ = η₁

23. Let K : L²[−π, π] → L²[−π, π] be given by

    (Kφ)(t) = ∫_{−π}^{π} k(t − s) φ(s) ds.

Find the matrix of K with respect to the basis {e^{int}}_{n∈Z} in each of the following cases:

(a) k(t) = |t|
(b) k(t) = sin t

    φ(t) = sin t + ∫_0^1 φ(s) ds.
25. Let K : L²[0, ∞) → L²[0, ∞) be given by (Kφ)(t) = ∫_0^∞ k(t + s) φ(s) ds.
28. Which of the following operators are self adjoint and which are not?

(a) The operator K : L²[−π, π] → L²[−π, π] defined by (Kφ)(t) = ∫_{−π}^{π} e^{i(t−s)} φ(s) ds.
(b) The operator K : L²[−π, π] → L²[−π, π] defined by (Kφ)(t) = ∫_{−π}^{π} cos(t − s) φ(s) ds.
(c) The operator K : L²[0, 1] → L²[0, 1] defined by (Kφ)(t) = ∫_0^t φ(s) ds.
29. Let A be in L(H), where H is a Hilbert space. Define on the direct sum H⁽²⁾ = H ⊕ H (cf. exercise 7) the operator B by

    B = ( 0     iA
          −iA*  0 ).
    A = ( a₀  0   0   …
          a₁  a₀  0   …
          a₂  a₁  a₀
          ⋮           ⋱ ).

(a) Prove that A is bounded and that ‖A‖ ≤ Σ_{j=0}^∞ |a_j|.
(b) Find the matrix representation of A*.
35. Let w = (λ₁, λ₂, …), where λ_j > 0, j = 1, 2, …, be such that sup_j λ_j < ∞ and inf_j λ_j > 0. Define ℓ²(w) as in Exercise 1-15. For ξ = (ξ₁, ξ₂, ξ₃, …) ∈ ℓ²(w), define

    Sξ = S(ξ₁, ξ₂, …) = (0, ξ₁, ξ₂, …).

Prove that S* is given by
36. Describe all selfadjoint operators K of finite rank for which K¹⁹⁸⁰ = 0.
37. Define an operator U on ℓ² by Uξ = (ξ_n, ξ_{n−1}, …, ξ₁, ξ_{n+1}, ξ_{n+2}, …). Prove that

(a) ‖Uξ‖ = ‖ξ‖ for all ξ ∈ ℓ²,
(b) U⁻¹ = U = U* and U² = I,
(c) if α ≠ ±1, α ∈ C, then I − αU is invertible and (I − αU)⁻¹ = (1/(1 − α²))(I + αU),
(d) give a matrix representation for U.
38. For 0 ≤ r ≤ 1, define the operators S_ℓ^{(r)} and S_r^{(r)} on L²[0, 1] by

    (S_ℓ^{(r)} φ)(t) = { φ(t + r)  for 0 ≤ t ≤ 1 − r,
                         0          for 1 − r < t ≤ 1,

and (S_r^{(r)} φ)(t) = 0 for 0 ≤ t < r, (S_r^{(r)} φ)(t) = φ(t − r) for r ≤ t ≤ 1. Prove that

(a) [S_ℓ^{(r)}]* = S_r^{(r)},
(b) S_ℓ^{(r₁)} S_ℓ^{(r₂)} = S_ℓ^{(r₁+r₂)},
(c) S_r^{(r₁)} S_r^{(r₂)} = S_r^{(r₁+r₂)},
(d) S_r^{(r₂)} S_ℓ^{(r₁)} = P_{r₂} S_ℓ^{(r₁−r₂)} for r₁ ≥ r₂ and S_r^{(r₂)} S_ℓ^{(r₁)} = P_{r₂} S_r^{(r₂−r₁)} for r₂ ≥ r₁, where P_{r₂} is the projection onto {φ ∈ L²[0, 1] | φ(t) = 0, t ≤ r₂},
(e) ‖S_ℓ^{(r)}‖ = ‖S_r^{(r)}‖ = 1 for r ∈ [0, 1].
39. Let H be a Hilbert space. Prove that for any A₁, A₂ ∈ L(H),

(a) (Im A₁ + Im A₂)^⊥ = Ker A₁* ∩ Ker A₂*,
(b) \overline{Im A₁} ∩ \overline{Im A₂} = (Ker A₁* + Ker A₂*)^⊥.
40. Give a formula for the orthogonal projection P onto sp{φ₁, φ₂, φ₃}, where

(a) φ₁ = (1, 0, 0, 1, 0, …), φ₂ = (1, 0, 1, 0, …), φ₃ = (1, 1, 0, …) are in ℓ²,
(b) φ₁(t) = cos t, φ₂(t) = e^t, φ₃(t) = t are in L²[−π, π].
41. Let L_O be the subspace of all odd functions in L2[-π, π] and let L_E be the subspace of all even functions. Denote by P_O and P_E the orthogonal projections onto L_O and L_E, respectively. Give a formula for P_O and P_E.
42. Let N1 = {(ξ1, ξ1, ξ1, ξ2, ξ2, ξ2, ...)},
44. Find the orthogonal projection onto the intersection of the following pair
of subspaces in £2:
45. Given A_jk ∈ L(H), j, k = 1, 2, define on H(2) = H ⊕ H an operator A by

    A = ( A11  A12
          A21  A22 ).
48. Let (a_j)_{j=1}^∞ be a sequence of complex numbers with Σ_{j=1}^∞ |a_j| < ∞. Define an operator on ℓ2 by the matrix

    A = ( a1  a2  a3  ...
          a2  a3  ...
          a3  ...
          ...           ).
53. What can one say about the A-invariance of L and L^⊥ for each of the properties (a)-(f) in Problem 52?
54. Given A ∈ L(H), let M be a closed A-invariant subspace. Denote by A_M the restriction of A to M. Show that (A_M)* = P_M A*|M, where A*|M is the restriction of A* to M and P_M is the orthogonal projection onto M.
57. Let H1, H2 and H3 be closed subspaces of the Hilbert space H such that H = H1 ⊕ H2 ⊕ H3. Let A, B, C and D ∈ L(H) be given by the matrices

    A = ( A11  A12  A13          B = ( B11   0    0
           0   A22  A23                B21  B22   0
           0    0   A33 ),             B31  B32  B33 ),

    C = ( C11  C12  C13          D = ( D11   0    0
           0   C22  C23                 0   D22   0
           0    0    0  ),              0   D32  D33 ).

List all the obvious invariant subspaces for A, B, C and D.
58. List some A-invariant subspaces different from sp{e_k, e_{k+1}, ...}, k = 1, 2, ..., where the operator A: ℓ2 → ℓ2 is given by the matrix

    A = ( 2
          1  2
          0  1  2
          ...       ).
M = Im A, N = Ker A?
69. For i = 1, 2 let M_i and N_i be closed subspaces of the separable Hilbert space H_i such that H_i = M_i ⊕ N_i. Find (if possible) an operator A ∈ L(H1, H2) with the property that

    N1 = Ker A,  N2 = Ker A*,
    M1 = Im A*,  M2 = Im A.
71. Let H = H1 ⊕ H2, where H1 and H2 are closed subspaces of the Hilbert space H, and let T ∈ L(H) have the following operator matrix representation
T = (~ ~)
relative to the given decomposition of H. Assume T is invertible. Determine which of the following statements are true:
(a) A is invertible,
(b) A is left invertible,
(c) A is right invertible,
(d) B is invertible,
(e) B is right invertible,
(f) B is left invertible.
72. As in the previous exercise, let H = H1 ⊕ H2 and T ∈ L(H) be given by the same operator matrix. Let A and B both be left (right) invertible. Is the operator T left (right) invertible?
73. Do Exercise 71 with the operator matrix representation of T being replaced
by
74. Do Exercise 72 with the operator matrix representation ofT being replaced
by
75. (a) Give an example of a product of two orthogonal projections that is not an orthogonal projection.
(b) When is the product of two orthogonal projections again an orthogonal projection?
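The failure in part (a) can already be seen with two projections in the plane. A small numerical sketch (not part of the text) in Python with NumPy:

```python
import numpy as np

# P projects onto span{(1,0)}, Q onto span{(1,1)}.  Both are orthogonal
# projections, but their product PQ is neither idempotent nor selfadjoint.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
Q = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])

PQ = P @ Q
print(np.allclose(PQ @ PQ, PQ), np.allclose(PQ, PQ.T))   # False False

# If the two projections commute, their product is again an
# orthogonal projection.
R = np.diag([1.0, 0.0])
PR = P @ R
print(np.allclose(PR @ PR, PR) and np.allclose(PR, PR.T))  # True
```

The commuting case above suggests the answer to part (b).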
76. Let ω = (1, a, a², ...), where 0 < a ∈ R, and consider the corresponding Hilbert space ℓ2(ω); see Exercise I-15. Define S_f to be the forward shift on ℓ2(ω) and S_b the backward shift.
(a) Prove that S_f and S_b are bounded linear operators on ℓ2(ω) and compute their norms.
(b) Find the spectra of S_f and S_b.
77. Let ω be as in the previous exercise, and fix 0 ≠ q ∈ C. Let S_{f,q} and S_{b,q} be the weighted shift operators on ℓ2(ω) given by

Solve the problems (a) and (b) in the previous exercise with S_{f,q} and S_{b,q} in place of S_f and S_b, respectively.

78. Do Exercise 76 with the sequence ω = (1, a, a², ...) replaced by ω = (w0, w1, w2, ...), where w_j > 0 for j = 0, 1, 2, ....

79. Do Exercise 77 with ω = (w0, w1, w2, ...) and w_j > 0 for j = 0, 1, 2, ....

80. Do Exercise 77 with ω = (w0, w1, w2, ...) as in the previous exercise and with S_{f,q} and S_{b,q} being given by
83. Let H = H1 ⊕ H2 ⊕ H3 be the direct sum of the Hilbert spaces H1, H2 and H3 (cf. Exercise 5). Relative to this decomposition of H let the operator A ∈ L(H) be given by the lower triangular operator matrix

What can we conclude about the operators A11, A22, A33 when the operator A is invertible? Illustrate the answers with examples.
84. Do the previous exercise for the case when the operator A is given by

    A = ( A11  A12  A13
           0   A22  A23
           0    0   A33 ).
    (Wf)(x) = { f(x - t)  for x > t,
                0         for 0 < x ≤ t.

(c) Show that the spectrum of W is equal to the closed unit disc.

    (Vf)(x) = f(x + t),  x ∈ R.
87. Let A ∈ L(ℓ2), and let P_n be the orthogonal projection onto span{e0, ..., e_n}, where e0, e1, ... is the standard basis of ℓ2. Show that the operator A is a band operator if and only if for some nonnegative integer m the following equalities hold:
(i) P_k A (I - P_{m+k+1}) = 0, k = 0, 1, 2, ...,
(ii) (I - P_{m+k+1}) A P_k = 0, k = 0, 1, 2, ....
    AL0 ⊂ L0,  A*L0 ⊂ L0.

    AL0 ⊂ L0^⊥.
91. Do the previous exercise with A * in place of A .
92. Is the inverse of a lower triangular invertible band operator always given
by a lower triangular matrix?
93. Let T be the operator on ℓ2 given by the matrix

    T = ( 1    a    a²   a³   a⁴  ...
          b    1    a    a²   a³  ...
          b²   b    1    a    a²  ...
          b³   b²   b    1    a   ...
          b⁴   b³   b²   b    1   ...
          ...                       )

where a and b are complex numbers with |a| < 1 and |b| < 1.
(a) Prove that T is indeed a bounded operator and find an estimate for the norm of T.
(b) Use the finite section method to find conditions for the invertibility of T and to invert T.
95. Consider on ℓ2(Z) the operator T given by the two-diagonal lower triangular matrix

    T = ( ...
          b   0   0
          a  [b]  0
          0   a   b
                  ... )

where [b] marks the entry in the zero row, zero column position. Use the finite section method to show that T is invertible and to determine its inverse in the following cases:
(i) a = 1 and b = -4;
(ii) a = -4 and b = 1;
(iii) a, b ∈ C and |a| < |b|;
(iv) a, b ∈ C and |b| < |a|.
96. Solve the problems from the previous exercise in an alternative way by writing

    T = bI + aV,

where V is the forward shift on ℓ2(Z).
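As a numerical sketch of the idea behind Exercise 96 (not part of the text): in case (i) we have |a| < |b|, so the Neumann series for (bI + aV)^{-1} converges. Here the forward shift on ℓ2(Z) is modeled by the cyclic shift on C^n:

```python
import numpy as np

# Case (i): a = 1, b = -4, so |a| < |b| and the geometric series
# T^{-1} = (1/b) * sum_k (-a/b)^k V^k converges.
n, a, b = 128, 1.0, -4.0
V = np.roll(np.eye(n), 1, axis=0)                  # cyclic forward shift
T = b * np.eye(n) + a * V

Tinv = sum((1 / b) * (-a / b) ** k * np.linalg.matrix_power(V, k)
           for k in range(50))
print(np.allclose(Tinv @ T, np.eye(n)))            # True up to (1/4)^50
```

In case (iv), with |b| < |a|, one instead expands in powers of (b/a)V^{-1}.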
    T = ( ...
           7   -3    0
          -2   [7]  -3
           0   -2    7
                     ... )
99. Use the finite section method to find conditions for the invertibility of the operator T on ℓ2 and to construct its inverse when T is given by the following tridiagonal matrix

    T = ( 1+|a|²   ā        0        0       ...
          a        1+|a|²   ā        0       ...
          0        a        1+|a|²   ā       ...
          0        0        a        1+|a|²  ...
          ...                                  ).
100. Do the previous exercise with £2 replaced by £2 (Z) and with the matrix
replaced by the corresponding doubly infinite tridiagonal analog .
101. Let A ∈ L(H1, H2) and B ∈ L(H2, H1). Assume that the product AB is invertible. Which of the following statements is true?
(a) A is invertible.
(b) B is invertible.
(c) A and B are both invertible.
(d) A is right invertible and B is left invertible.
(e) A is left invertible and B is right invertible.
    Ax = Σ_{j=1}^{n} (x, φ_j) ψ_j,
104. Find the connection between the spectra of the operators A1 and A2 when

(b) A1 = (  0   B2   0
            0    0   B1
           B3    0   0  ),   A2 = B1 B2 B3.

(c) What can you say additionally if the operators B_j are compact?
X-AXB=Y,
This chapter deals with operators on £2 (Z) and £2 with the property that the matrix
relative to the standard basis in these spaces has a special structure, namely the
elements on diagonals parallel to the main diagonal are the same, i.e., the matrix
entries ajk depend on the difference j - k only. On £2 (Z) these operators are
called Laurent operators (and in that case the matrix is doubly infinite); on £2 they
are called Toeplitz operators. These operators form important classes of operators
and they appear in many applications. They also have remarkable properties.
For instance, there are different methods to invert these operators explicitly and to compute their spectra. This chapter reviews these results, starting from the simplest class.
A Laurent operator A is a bounded linear operator on ℓ2(Z) with the property that the matrix of A with respect to the standard orthonormal basis {e_j}_{j=-∞}^∞ of ℓ2(Z) is of the form

    ( ...
      a_0   a_{-1}  a_{-2}
      a_1  [a_0]    a_{-1}          (1.1)
      a_2   a_1     a_0
                    ...     ).

Here [a_0] denotes the entry a_0 located in the zero row, zero column position. In other words, a bounded linear operator A on ℓ2(Z) is a Laurent operator if and only if (Ae_k, e_j) depends on the difference j - k only.
Proof: Let {e_j}_{j=-∞}^∞ be the standard orthonormal basis of ℓ2(Z), and put a_{jk} = (Ae_k, e_j). Recall that the bilateral shift V on ℓ2(Z) is given by Ve_j = e_{j+1}, j ∈ Z. Then (VAe_k, e_j) = (Ae_k, V*e_j) = (Ae_k, e_{j-1}) = a_{j-1,k}. Also, (AVe_k, e_j) = (Ae_{k+1}, e_j) = a_{j,k+1}. Next, observe that VA = AV if and only if (VAe_k, e_j) = (AVe_k, e_j) for all j, k ∈ Z. Thus A and V commute if and only if a_{j-1,k} = a_{j,k+1} for each j, k ∈ Z, that is, if and only if A is a Laurent operator. □
In a somewhat different form we have already met Laurent operators in Example 2 of Section II.4. Indeed, let a be a bounded complex valued Lebesgue measurable function on [-π, π], and let M be the corresponding operator of multiplication by a on L2([-π, π]), that is, (Mf)(t) = a(t)f(t). Example 4 in Section II.2 shows that M is a bounded linear operator on L2([-π, π]) and the matrix of M with respect to the orthonormal basis φ_n(t) = (1/√(2π)) e^{int}, n = 0, ±1, ±2, ..., is given by (1.1), where

    a_n = (1/√(2π)) ∫_{-π}^{π} a(t) e^{-int} dt,  n ∈ Z.   (1.2)
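As an illustration (a sketch, not part of the text), the coefficients a_n can be approximated by quadrature. The snippet below uses the plain 1/(2π) Fourier normalization, which differs from (1.2) only by a constant factor, together with a sample trigonometric symbol:

```python
import numpy as np

# Approximate the n-th Fourier coefficient of a symbol a(t) on [-pi, pi],
# using the 1/(2*pi) normalization: a_n is the coefficient of e^{int}.
def fourier_coeff(a, n, m=4096):
    t = np.linspace(-np.pi, np.pi, m, endpoint=False)
    return np.mean(a(t) * np.exp(-1j * n * t))

# sample symbol a(t) = -(2/5) e^{it} + 7/5 - (3/5) e^{-it}
a = lambda t: -0.4 * np.exp(1j * t) + 1.4 - 0.6 * np.exp(-1j * t)
print(np.round(fourier_coeff(a, 1).real, 6),
      np.round(fourier_coeff(a, 0).real, 6))   # -0.4 1.4
```

For a trigonometric polynomial the equispaced mean recovers the coefficients exactly (up to rounding), since the quadrature is exact for low frequencies.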
The relation A = FMF^{-1} also yields the following inversion theorem.

Theorem 1.2 Let A be the Laurent operator defined by the continuous function a. Then A is invertible if and only if a(t) ≠ 0 for each -π ≤ t ≤ π, and in this case A^{-1} is the Laurent operator defined by the function 1/a, that is,

    A^{-1} = ( ...
               b_0   b_{-1}  b_{-2}
               b_1  [b_0]    b_{-1}
               b_2   b_1     b_0
                             ...     ),

where

    b_n = (1/√(2π)) ∫_{-π}^{π} (1/a(t)) e^{-int} dt,  n ∈ Z.

Proof: Let M be the operator of multiplication by a on L2([-π, π]). From Example 2 in Section II.7 we know that M is invertible if and only if a(t) ≠ 0 for -π ≤ t ≤ π, and in that case M^{-1} is the operator of multiplication by b = 1/a. Since A = FMF^{-1}, the operator A is invertible if and only if M is invertible, and then A^{-1} = FM^{-1}F^{-1}. By combining these results the theorem follows. □
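A finite-dimensional sketch of this inversion theorem (not from the text): circulant matrices, the cyclic analogue of Laurent operators, are diagonalized by the discrete Fourier transform, so the circulant with symbol 1/a inverts the one with symbol a:

```python
import numpy as np

# Circulant model of Theorem 1.2 on C^n: A = F diag(a) F*, so
# A^{-1} = F diag(1/a) F* provided the symbol never vanishes.
n = 128
t = 2 * np.pi * np.arange(n) / n
a_vals = -0.4 * np.exp(1j * t) + 1.4 - 0.6 * np.exp(-1j * t)  # nonvanishing

k = np.arange(n)
F = np.exp(2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)      # unitary DFT
A = F @ np.diag(a_vals) @ F.conj().T
B = F @ np.diag(1 / a_vals) @ F.conj().T
print(np.allclose(A @ B, np.eye(n)))    # True
```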
Corollary 1.3 The spectrum of the Laurent operator A defined by the continuous function a consists of all points on the curve parametrized by a, that is,

    σ(A) = {a(t) | -π ≤ t ≤ π}.

The set A of all Laurent operators with a continuous defining function is closed under addition and multiplication. Moreover, multiplication in A is commutative. These facts follow immediately from the fact that FAF^{-1} consists of all operators of multiplication by a continuous function. Thus A has the same algebraic structure as the ring of all complex valued continuous functions on [-π, π].
138 Chapter III. Laurent and Toeplitz Operators
    A = ( ...
          7/5    -3/5    0      0      0
          -2/5    7/5   -3/5    0      0
          0      -2/5  [7/5]   -3/5    0
          0       0     -2/5    7/5   -3/5
          0       0      0     -2/5    7/5
                                       ... ).

The function

    a(t) = -(2/5) e^{it} + 7/5 - (3/5) e^{-it}

is the defining function of A. Writing w(λ) for the symbol, with λ = e^{it}, we have

    w(λ) = -(2/5)λ + 7/5 - (3/5)λ^{-1} = -(2/(5λ)) (λ - 1/2)(λ - 3).
Here a(t) ≠ 0 for each t, and we can apply Theorem 1.2 to show that A is invertible. To compute A^{-1}, notice that

    1/w(λ) = ((1/2)λ^{-1})/(1 - (1/2)λ^{-1}) + 1/(1 - (1/3)λ),  |λ| = 1,

and hence

    b(t) = 1/a(t) = Σ_{j=1}^∞ (1/2^j) e^{-ijt} + Σ_{j=0}^∞ (1/3^j) e^{ijt}.

Since e^{it} = cos t + i sin t, we see that the defining function a is also given by

    a(t) = (7/5 - cos t) + (1/5) i sin t.   (1.5)
3.1 Laurent Operators 139
Thus cos t = 7/5 - Re a(t) and sin t = 5 Im a(t). Using Corollary 1.3 we see that the spectrum of A is precisely given by the ellipse

    { x + iy | (x - 7/5)² + 25 y² = 1 }.
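The ellipse can be checked numerically (a sketch, not from the text), by sampling the curve a(t) of this example and testing the ellipse equation:

```python
import numpy as np

# Sample a(t) = -(2/5)e^{it} + 7/5 - (3/5)e^{-it} on the circle and
# verify (Re z - 7/5)^2 + 25 (Im z)^2 = 1 along the whole curve.
t = np.linspace(-np.pi, np.pi, 1001)
z = -0.4 * np.exp(1j * t) + 1.4 - 0.6 * np.exp(-1j * t)
print(np.allclose((z.real - 1.4) ** 2 + 25 * z.imag ** 2, 1.0))   # True
```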
    A = ( ...
          1      1/2    1/2²
          1/2   [1]     1/2
          1/2²   1/2    1
                        ...   ),

with defining function

    a(t) = Σ_{j=1}^∞ (1/2^j) e^{-ijt} + Σ_{j=0}^∞ (1/2^j) e^{ijt}.
Then

    w(λ) = (1/2)λ^{-1} (1 - (1/2)λ^{-1})^{-1} + (1 - (1/2)λ)^{-1} = (5/3 - (2/3)λ - (2/3)λ^{-1})^{-1}.

Since w(λ) ≠ 0 for |λ| = 1, we see that a(t) ≠ 0 for each t, and hence the operator A is invertible by Theorem 1.2, and the defining function b of A^{-1} is given by

    b(t) = 1/a(t) = -(2/3) e^{-it} + 5/3 - (2/3) e^{it}.

Thus

    A^{-1} = ( ...
               5/3    -2/3    0      0      0
               -2/3    5/3   -2/3    0      0
               0      -2/3  [5/3]   -2/3    0
               0       0     -2/3    5/3   -2/3
               0       0      0     -2/3    5/3
                                            ... ).
    w(λ) = (10/9) (-(1/3)λ + 8/9 + (1/3)λ^{-1}) = (10/9) (1 + (1/3)λ^{-1}) (1 - (1/3)λ).

Notice that Re a(t) = 80/81 and Im a(t) = -(20/27) sin t. Thus by Corollary 1.3, we have

    σ(A) = { z = 80/81 + ir | -20/27 ≤ r ≤ 20/27 }.
3.2 Toeplitz Operators
The notation

    T = ( a11  a12  a13  ...
          a21  a22  a23  ...
          a31  a32  a33  ...
          ...                )   (2.1)

means that the matrix in the right hand side of (2.1) is the matrix corresponding to T and the standard orthonormal basis e1, e2, ... in ℓ2. We say that T is a Toeplitz operator if

    T = ( a_0  a_{-1}  a_{-2}  ...
          a_1  a_0     a_{-1}  ...
          a_2  a_1     a_0     ...
          ...                      )   (2.2)

In this case we refer to the right hand side of (2.2) as the standard matrix representation of T. Recall that e_j = (0, ..., 0, 1, 0, ...), where the one appears in the j-th entry. Thus T is a Toeplitz operator if and only if (Te_k, e_j) depends only on the difference j - k. This remark yields the following theorem.
Therefore, T = S*TS if and only if a_{kj} = a_{k+1,j+1} for all j, k = 1, 2, .... Thus T = S*TS if and only if T is Toeplitz. □
With each continuous complex valued function on the unit circle T in C we can associate a Toeplitz operator on ℓ2. Indeed, let w be such a function, and put a(t) = w(e^{it}). Notice that a is continuous on [-π, π] and a(-π) = a(π). Now let A be the Laurent operator on ℓ2(Z) defined by a, i.e.,

    A = ( ...
          a_0   a_{-1}  a_{-2}
          a_1  [a_0]    a_{-1}          (2.3)
          a_2   a_1     a_0
                        ...     ),

where

    a_n = (1/√(2π)) ∫_{-π}^{π} w(e^{it}) e^{-int} dt,  n ∈ Z.   (2.4)
In the sequel we identify the space ℓ2 with the closed subspace of ℓ2(Z) consisting of all sequences (ξ_j)_{j∈Z} such that ξ_j = 0 for j ≤ -1. Put

    T = P_{ℓ2} A|_{ℓ2},   (2.5)

where P_{ℓ2} is the orthogonal projection of ℓ2(Z) onto ℓ2. Then T is a bounded linear operator on ℓ2, and from (2.3) we see that (2.2) holds with a_n given by (2.4). Thus T is a Toeplitz operator. We shall refer to T as the Toeplitz operator with continuous symbol w.
Also, if ..., e_{-1}, e_0, e_1, ... is the standard basis of ℓ2(Z), then e_{-N}, e_{-N+1}, e_{-N+2}, ... is an orthonormal basis of H_N, and from (2.5) it follows that with respect to this basis the matrix of P_{H_N} A|H_N is given by

    ( a_0  a_{-1}  a_{-2}  ...
      a_1  a_0     a_{-1}  ...
      a_2  a_1     a_0     ...
      ...                      ).

It follows that ||P_{H_N} A|H_N|| = ||T|| for each N.

Now, fix ε > 0, and choose x ∈ ℓ2(Z) such that ||x|| = 1 and ||Ax|| > ||A|| - ε. By (2.6), we have P_{H_N} x → x and A P_{H_N} x → Ax for N → ∞. Thus we can find a positive integer N′ such that x′ = P_{H_{N′}} x satisfies ||x′|| ≤ 1 + ε and ||Ax′|| > ||A|| - ε.

According to (2.6), we also have P_{H_N} Ax′ → Ax′ for N → ∞. Thus we can choose N″ ≥ N′ such that ||P_{H_{N″}} Ax′|| > ||A|| - ε. Since x′ ∈ H_{N′} and N″ ≥ N′, we have x′ ∈ H_{N″}, and thus

    ||P_{H_{N″}} A|H_{N″}|| ≥ ||P_{H_{N″}} Ax′|| / ||x′|| > (||A|| - ε)/(1 + ε).
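As a numerical sketch (not part of the text): the norms of the finite sections of a Toeplitz operator increase toward sup |w| on the circle. Here this is checked for the symbol w(e^{it}) = 7/5 - cos t + (i/5) sin t, whose maximum modulus is 12/5, attained at t = π:

```python
import numpy as np

# Finite sections of the tridiagonal Toeplitz operator with the sample
# symbol above; their operator norms are nondecreasing in n and bounded
# by sup |w| = 12/5.
def section(n):
    return 1.4 * np.eye(n) - 0.4 * np.eye(n, k=-1) - 0.6 * np.eye(n, k=1)

norms = [np.linalg.norm(section(n), 2) for n in (5, 20, 80)]
print(norms[0] <= norms[1] <= norms[2] <= 2.4)    # True
```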
In this section we study the invertibility of a Toeplitz operator that has the additional property that in its standard matrix representation all diagonals are zero with the exception of a finite number. In that case all the non-zero entries in the standard matrix representation are located in a band, and for that reason we refer to such an operator as a band Toeplitz operator (cf. the last part of Section II.16). Notice that a band Toeplitz operator has a continuous symbol of the form

    w(λ) = Σ_{k=-p}^{q} a_k λ^k.   (3.1)
    κ = (1/(2π)) [arg w(e^{it})]_{t=-π}^{t=π},
Theorem 3.1 Let T be the band Toeplitz operator with symbol w given by (3.2). Then T is two-sided invertible if and only if w(e^{it}) ≠ 0 for -π ≤ t ≤ π and κ = ℓ - r = 0. In that case

    T^{-1} = (1/c) Π_{j=1}^{m} (S - β_j I)^{-1} Π_{i=1}^{ℓ} (I - α_i S*)^{-1}.   (3.3)
Theorem 3.2 Let T be the band Toeplitz operator with symbol w given by (3.1), and put κ = ℓ - r. Then T is left or right invertible if and only if w(e^{it}) ≠ 0 for -π ≤ t ≤ π. Furthermore, if this condition is satisfied, we have

(i) T is left invertible if and only if κ ≥ 0, and in that case codim Im T = κ and a left inverse of T is given by

    T^{(-1)} = (1/c) Π_{j=1}^{m} (S - β_j I)^{-1} (S*)^{κ} Π_{i=1}^{ℓ} (I - α_i S*)^{-1}.   (3.4)

(ii) T is right invertible if and only if κ ≤ 0, and in that case dim Ker T = -κ and a right inverse of T is given by

    T^{(-1)} = (1/c) Π_{j=1}^{m} (S - β_j I)^{-1} S^{-κ} Π_{i=1}^{ℓ} (I - α_i S*)^{-1}.   (3.5)
Thus SS* ≠ S*S (and thus the Toeplitz operators S and S* do not commute), and the product SS* is not Toeplitz (but a projection).

Notice that for n ≥ 0 and m ≥ 0 we have

    (S*)^m S^n = { (S*)^{m-n},  m ≥ n,     (3.6)
                   S^{n-m},     n ≥ m.

These identities will turn out to be very useful in the proofs.
In the proofs of Theorems 3.1 and 3.2 we will also need the following lemma.
3.3 Band Toeplitz Operators 145
Lemma 3.3 For |α| < 1 and |β| > 1 the operators I - αS* and S - βI are invertible, and the respective inverses are given by

    (I - αS*)^{-1} = ( 1  α  α²  α³  ...
                       0  1  α   α²  ...
                       0  0  1   α   ...
                       0  0  0   1   ...
                       ...             ),
                                                          (3.7)
    (S - βI)^{-1} = -(1/β) ( 1      0      0      0  ...
                             β^{-1} 1      0      0  ...
                             β^{-2} β^{-1} 1      0  ...
                             β^{-3} β^{-2} β^{-1} 1  ...
                             ...                       ).

For |α| = |β| = 1 the operators I - αS* and S - βI are neither left nor right invertible.

Proof: For |α| < 1 we have ||αS*|| < 1, and hence, by Theorem II.8.1, the operator I - αS* is invertible and its inverse is given by the first part of (3.7). Since S - βI = -β(I - β^{-1}S), a similar argument shows that S - βI is invertible, and that its inverse is given by the second part of (3.7). From the results in Section II.20 we know that the spectra of S and S* are both equal to the closed unit disc. Thus the perturbation results of Theorems II.15.3 and II.15.5 imply that the operators I - αS* and S - βI are neither left nor right invertible when |α| = |β| = 1. □
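The first formula in (3.7) can be checked on finite sections (a sketch, not from the text; since all matrices involved here are triangular, the finite sections satisfy the identity exactly):

```python
import numpy as np

# S* has ones on the superdiagonal, so I - alpha*S* is upper bidiagonal
# and its inverse has entries alpha^(k-j) for k >= j, as displayed in (3.7).
n, alpha = 8, 0.3
M = np.eye(n) - alpha * np.eye(n, k=1)
expected = np.array([[alpha ** (k - j) if k >= j else 0.0
                      for k in range(n)] for j in range(n)])
print(np.allclose(np.linalg.inv(M), expected))    # True
```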
Now, let R be a band Toeplitz operator, and let its symbol w be given by (3.1). Since we allow a_{-p} and a_q to be zero, we may assume that p and q are positive integers, and in that case (3.1) means that

Part 1. Assume w(e^{it}) ≠ 0 for -π ≤ t ≤ π. Here w is given by (3.2). It follows that in (3.2) the numbers β_1, ..., β_m are outside the closed unit disc, i.e., |β_j| > 1 for j = 1, ..., m.

Assume κ = ℓ - r ≥ 0. Then from the results proved in the paragraph preceding the present proof we see that

    (3.8)

Indeed, S^κ is the band Toeplitz operator with symbol λ^κ, and thus by repeatedly applying the above mentioned multiplication rules, we see that the right hand side of (3.8) is the band Toeplitz operator with symbol

    (3.9)
are invertible operators. Also, the operator T^{(-1)} in (3.4) is well-defined, and, using (S*)^κ S^κ = I, we see that T^{(-1)} T = I. Thus T is left invertible and T^{(-1)} is a left inverse of T. From (3.8) and the invertibility of the operators in (3.9) we obtain that

    codim Im T = codim Im S^κ = κ,

which completes the proof of statement (i).

Next, assume κ ≤ 0, and hence ℓ ≤ r. Then, by repeated application of the product rules proved in the paragraph preceding the present proof, we have

    (3.10)

Since the operators I - α_i S* and S - β_j I are invertible for each i and j, the operator T^{(-1)} in (3.5) is well-defined. From (S*)^{-κ} S^{-κ} = I we conclude that T T^{(-1)} = I, and hence T^{(-1)} is a right inverse of T. Furthermore, using the invertibility of the operators in (3.9) we see that
Then w1 and w2 are functions of the same type as w. Let T1 and T2 be the band Toeplitz operators with symbols w1 and w2, respectively. Since

    w(λ) = w1(λ)(λ - β_k),   w(λ) = (1 - β_k λ^{-1}) w2(λ),

the product rules from the paragraph preceding the present proof yield

    T = T1 (S - β_k I),   T = (I - β_k S*) T2.   (3.11)

By Lemma 3.3 the operator S - β_k I is not left invertible, because |β_k| = 1. Hence the first equality in (3.11) shows that T cannot be left invertible. It follows that T must be right invertible, but then the second equality in (3.11) implies that I - β_k S* is right invertible. Thus I - β̄_k S is left invertible. However, the latter is impossible, again by Lemma 3.3 and the fact that |β_k| = 1.
Proof of Theorem 3.1: By Theorem 3.2 (i) and (ii) the operator T is both left and right invertible if and only if w(e^{it}) ≠ 0 for -π ≤ t ≤ π and κ = ℓ - r = 0. Moreover, in that case formula (3.4) with κ = 0 yields the formula for the inverse in (3.3). □
Corollary 3.4 Let T be the band Toeplitz operator with symbol w. Then the spectrum σ(T) of T is given by

    σ(T) = C\Ω,   (3.12)

where Ω is the set of all points λ in C such that λ ≠ w(e^{it}) for all -π ≤ t ≤ π and the winding number of w(·) - λ relative to zero is equal to zero. In particular, the spectral radius of T is equal to the norm of T.

Proof: Notice that for each λ in C the operator T - λI is the band Toeplitz operator with symbol w(·) - λ. Thus Theorem 3.1 shows that λ ∈ σ(T) if and only if λ ∉ Ω, which proves (3.12).

Since w(e^{it}) ∈ σ(T) for each -π ≤ t ≤ π, the spectral radius r(T) = sup{|λ| | λ ∈ σ(T)} is larger than or equal to max_{-π ≤ t ≤ π} |w(e^{it})|. On the other hand, the latter quantity is equal to ||T||, by Theorem 2.2. Since ||T|| ≥ r(T), we conclude that r(T) = ||T||. □
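A numerical sketch of the winding-number criterion (not from the text): the winding number of a discretized symbol curve can be computed by tracking its argument. For the symbol of the example below, the point 0 lies outside the ellipse traced by w, while its center 7/5 is encircled once, clockwise:

```python
import numpy as np

# Winding number of a closed discretized curve about zero: total change
# of the (unwrapped) argument divided by 2*pi.
def winding_number(values):
    return int(round(np.sum(np.diff(np.unwrap(np.angle(values)))) / (2 * np.pi)))

t = np.linspace(-np.pi, np.pi, 4001)
w = -0.4 * np.exp(1j * t) + 1.4 - 0.6 * np.exp(-1j * t)

print(winding_number(w))          # 0: zero lies outside the curve
print(winding_number(w - 1.4))    # -1: the center is encircled clockwise
```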
Example: In this example we consider the Toeplitz operator T on ℓ2 defined by the tridiagonal matrix

    T = ( 7/5    -3/5    0     ...
          -2/5    7/5   -3/5   ...
          0      -2/5    7/5   ...
          ...                     ).

The corresponding Laurent operator has been considered in Example 1 of the first section. The operator T is bounded, and its symbol w is given by

    w(λ) = -(2/5)λ + 7/5 - (3/5)λ^{-1} = -(2/5)λ^{-1} (λ - 1/2)(λ - 3).   (3.13)
The aim is to study the invertibility properties of zI - T for each z in C, and to compute an inverse (one-sided or two-sided) whenever it exists. We split the text into five parts.

Part 1. Notice that w(λ) ≠ 0 for each |λ| = 1. Moreover, for this w the numbers ℓ and r in (3.2) are given by ℓ = 1 and r = 1. Thus κ = 0. By Theorem 3.1 the operator T is invertible, and

    T^{-1} = -(5/2) (S - 3I)^{-1} (I - (1/2)S*)^{-1}.   (3.14)

By Lemma 3.3,

    (I - (1/2)S*)^{-1} = Σ_{j=0}^∞ (1/2^j) (S*)^j,   (3.15)

and

    -(S - 3I)^{-1} = (3I - S)^{-1} = (1/3) Σ_{j=0}^∞ (1/3^j) S^j,   (3.16)
and the formula for T^{-1} can be rewritten in matrix form as follows:

    T^{-1} = (5/6) ( 1     0    0  ...   ( 1  1/2  1/2²  ...
                     1/3   1    0  ...     0  1    1/2   ...
                     1/3²  1/3  1  ...     0  0    1     ...
                     ...                   ...             ).

Let us examine the matrix entries γ_{jk} of T^{-1} in more detail. From (3.14)-(3.16) we see that

    γ_{jk} = (5/6) Σ_{v=0}^{min(j,k)} (1/3)^{j-v} (1/2)^{k-v}.

An easy calculation shows that

    γ_{jk} = (6^{j+1} - 1) / (3^{j+1} 2^{k+1})   for j ≤ k,

and

    γ_{jk} = (6^{k+1} - 1) / (3^{j+1} 2^{k+1})   for j ≥ k.
Equivalently,

    γ_{jk} = g_{jk} - l_{jk},  j, k = 0, 1, ...,

where

    g_{jk} = { (1/2)^{k-j}  for k ≥ j,
               (1/3)^{j-k}  for k ≤ j,

and

    l_{jk} = 1/(3^{j+1} 2^{k+1}),  j, k = 0, 1, ....

We proved that

    T^{-1} = P A^{-1} P|_{Im P} + F,

where A is the corresponding Laurent operator, P is the orthogonal projection of ℓ2(Z) onto ℓ2, and F is the operator with matrix (-l_{jk}). The latter equality can be derived for other Toeplitz operators T too and can be used for the computation of T^{-1}.
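The closed-form entries γ_{jk} can be compared with a numerically inverted finite section of T (a sketch, not from the text; the corner entries of the section inverse approximate those of T^{-1} up to an exponentially small truncation error):

```python
import numpy as np

# Invert a large finite section of the tridiagonal T and compare the
# corner entries with gamma_{jk} = (6^{min(j,k)+1} - 1)/(3^{j+1} 2^{k+1}).
n = 60
T = 1.4 * np.eye(n) - 0.4 * np.eye(n, k=-1) - 0.6 * np.eye(n, k=1)
Tinv = np.linalg.inv(T)

def gamma(j, k):
    m = min(j, k)
    return (6 ** (m + 1) - 1) / (3 ** (j + 1) * 2 ** (k + 1))

ok = all(abs(Tinv[j, k] - gamma(j, k)) < 1e-8
         for j in range(6) for k in range(6))
print(ok)    # True
```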
Part 2. Let us determine the spectrum σ(T) of the Toeplitz operator T with symbol w given by (3.13). As we have seen in Example 1 of the first section, the curve t ↦ w(e^{it}) is precisely equal to the ellipse

    (Re λ - 7/5)² + 25 (Im λ)² = 1.   (3.17)

Thus we can apply Corollary 3.4 to show that σ(T) consists of all points λ ∈ C that lie on or inside the ellipse (3.17), that is,

    σ(T) = { λ ∈ C | (Re λ - 7/5)² + 25 (Im λ)² ≤ 1 }.
On the other hand, as we have seen in Example 1 of the first section, the spectrum of the Laurent operator A with symbol w given by (3.13) is precisely the ellipse (3.17). Thus in this case σ(A) is just the boundary ∂σ(T) of the spectrum of T.
Part 3. We continue with an analysis of z0I - T for the case when z0 is a point strictly inside the ellipse (3.17). From the result in the previous part we know that z0I - T is not two-sided invertible. We claim that z0I - T is right invertible with dim Ker(z0I - T) = 1. To see this, notice that w0(λ) = w(λ) - z0 is the symbol of the Toeplitz operator T0 = T - z0I, and λ ∈ σ(T0) if and only if λ + z0 ∈ σ(T). Now, apply Theorem 3.2 to T0. Since z0 is inside the ellipse (3.17), we have w(e^{it}) - z0 ≠ 0 for each t, and hence w0(e^{it}) ≠ 0 for each t. Furthermore, the winding number of the curve t ↦ w0(e^{it}) with respect to zero is precisely equal to the winding number of the curve t ↦ w(e^{it}) with respect to z0. Since (cf. formula (1.5) in the first section)

    w(e^{it}) = 7/5 - cos t + (1/5) i sin t,

the orientation on the curve t ↦ w(e^{it}) is clockwise, and hence the winding number is -1. Thus we can apply Theorem 3.2 to show that T0 = T - z0I is right invertible and dim Ker T0 = 1.
Let us specify further the above for the case when z0 = 7/5 (the center of the ellipse (3.17)). Then the symbol w0(λ) is given by

    w0(λ) = -(2/5)λ^{-1} (λ² + 3/2).

Notice that the roots ±i√(3/2) are outside the unit circle. Thus in this case the number κ = ℓ - r corresponding to (3.2) is equal to -1 (which we already know from the previous paragraph), and by Theorem 3.2 (ii) the operator T - (7/5)I is right invertible.

More generally, for an arbitrary z0 strictly inside the ellipse we have

    w0(λ) = -(2/5)λ^{-1} [λ² - (5/2)(7/5 - z0)λ + 3/2],

and the roots α1, α2 of the quadratic polynomial λ² - (5/2)(7/5 - z0)λ + 3/2 are outside the unit circle. To see this, recall that w0(e^{it}) ≠ 0 for each t and each choice of z0 strictly inside the ellipse (3.17). Next, notice that the roots α1 and α2 depend continuously on the point z0, and for z0 = 7/5 they are outside the unit circle. Thus, if for i = 1 or i = 2 the root α_i were not outside the unit circle, then the root α_i would have to cross the unit circle, which contradicts the fact that w0(e^{it}) ≠ 0 for each t. Thus

    w0(λ) = -(2/5)λ^{-1} (λ - α1)(λ - α2)

with |α1| > 1, |α2| > 1, and by Theorem 3.2 (ii) the operator T - z0I is right invertible and a right inverse is given by

    (T - z0I)^{(-1)} = -(5/2) (S - α1 I)^{-1} (S - α2 I)^{-1} S.
By using the second part of formula (3.7) in Lemma 3.3 we obtain that

    (z0I - T)^{(-1)} = (5/(2 α1 α2)) ( 0   0   0   0  ...
                                       b1  0   0   0  ...
                                       b2  b1  0   0  ...
                                       b3  b2  b1  0  ...
                                       ...              ),

where b_j = Σ_{v=0}^{j-1} α1^{-v} α2^{-(j-1-v)}, j = 1, 2, ....
Part 4. Next let z0 be a point strictly outside the ellipse (3.17). We already know from Part 2 that in that case z0 ∉ σ(T). Thus z0I - T is invertible. To compute its inverse, we consider again the corresponding symbol w0(λ) = w(λ) - z0. In this case one root of the quadratic polynomial λ² - (5/2)(7/5 - z0)λ + 3/2, say α1, is inside the open unit disc, and one root, α2 say, is outside the unit circle. For z0 = 0 we know this from Example 1. Indeed, for z0 = 0 we have α1 = 1/2 and α2 = 3. For an arbitrary z0 it follows by using a continuity argument similar to the one used in the third paragraph of the previous part. Thus Theorem 3.1 yields that
    (z0I - T)^{-1} = -(5/(2 α2)) ( 1       0       0  ...   ( 1  α1  α1²  ...
                                   α2^{-1} 1       0  ...     0  1   α1   ...
                                   α2^{-2} α2^{-1} 1  ...     0  0   1    ...
                                   ...                        ...           ).

This product we can compute in the same way as we computed T^{-1} at the end of Part 1; the answer involves the coefficients

    g_j = { α2^j  for j = -1, -2, ...,
            α1^j  for j = 0, 1, 2, ...,

together with a correction term l_{jk}, j, k = 0, 1, 2, ....
Part 5. Finally, let z0 be a point on the ellipse (3.17). As we have already seen in Part 2, this implies that z0 ∈ ∂σ(T). We claim that in this case z0I - T is neither left nor right invertible. Indeed, suppose z0I - T were left or right invertible. Since z0 belongs to the spectrum of T, the operator z0I - T is not two-sided invertible. Thus by the stability results of Section II.15 there is an open neighborhood of z0 such that for each point z0′ in this neighborhood the operator z0′I - T is one-sided invertible but not two-sided invertible. Since z0 ∈ ∂σ(T), this is impossible.
(4.1)
then the two curves t ↦ w(e^{it}) and t ↦ w̃(e^{it}), with -π ≤ t ≤ π, do not pass through zero and the corresponding winding numbers are equal.
The following theorem is the main result of this section.
Theorem 4.1 Let T be the Toeplitz operator with continuous symbol w. Then T is left or right invertible if and only if w(e^{it}) ≠ 0 for -π ≤ t ≤ π. Furthermore, if this condition is satisfied, then

(i) T is left invertible if and only if κ ≥ 0, and in that case

    codim Im T = κ,   (4.2)
Proof: We split the proof into two parts. In the first part we assume that w has no zero on the unit circle and prove (i) and (ii). The second part concerns the necessity of this condition.

Part 1. Assume w(e^{it}) ≠ 0 for -π ≤ t ≤ π. We prove (i) and (ii) by an approximation argument using Theorem 3.2. By the second Weierstrass approximation theorem (see Section 1.13) there exists a sequence w1(e^{it}), w2(e^{it}), ... of trigonometric polynomials such that

From (4.4) it follows that for n sufficiently large w_n(λ) ≠ 0 for |λ| = 1 and w_n(λ)^{-1} w(λ) → 1 uniformly on |λ| = 1 when n → ∞. Thus we can find a trigonometric polynomial w̃ such that w̃(e^{it}) ≠ 0 for -π ≤ t ≤ π and

    (4.5)

    (4.6)
These operators are also invertible. This follows from Lemma 3.3 and the fact that |α_i| < 1 and |β_j| > 1 for each i and j. We claim that

    (4.8)

or

    (4.9)

Since w̃ is given by (4.6) and all scalar functions commute, we see that w# = w, and hence (4.8) holds. The identity (4.9) is proved in a similar way.

From (4.8) and the invertibility of the operators T_-, T_+ and I + C we conclude that T is left invertible if κ ≥ 0. In fact, in that case a left inverse T^{(-1)} of T is given by

    (4.10)

Moreover,

    codim Im T = codim Im S^κ = κ.

Similarly, (4.9) yields that T is right invertible for κ ≤ 0; a right inverse T^{(-1)} of T is given by

    (4.11)

and

    dim Ker T = dim Ker (S*)^{-κ} = -κ.
Part 2. In this part T is left or right invertible, and we prove that w(e^{it}) ≠ 0 for -π ≤ t ≤ π. The proof is by contradiction. We assume that w(λ0) = 0 for some |λ0| = 1. Since T is left or right invertible, the perturbation results of Section II.15 show that there exists ε > 0 such that the operator T̃ on ℓ2 is left or right invertible whenever

    ||T - T̃|| < ε.   (4.12)

By the second Weierstrass approximation theorem (see Section 1.13) there exists a trigonometric polynomial w̃(e^{it}) such that

    (4.13)

Now let T̃ be the Toeplitz operator with symbol w̃(λ) - w̃(λ0). Since w(λ0) = 0, we see from (4.13) that |w̃(λ0)| < ε/2. Hence, again using (4.13), we have

    |(w̃(e^{it}) - w̃(λ0)) - w(e^{it})| < ε,  -π ≤ t ≤ π.

But then we can use Theorem 2.2 to show that T̃ satisfies (4.12), and hence T̃ is left or right invertible. However, the symbol of T̃ vanishes at λ0 by construction, and thus Theorem 3.2 tells us that T̃ cannot be left or right invertible. So we have reached a contradiction. We conclude that w(e^{it}) ≠ 0 for -π ≤ t ≤ π. □
Corollary 4.2 Let T be the Toeplitz operator with continuous symbol w. Then the spectrum of T is given by

    σ(T) = C\Ω,   (4.14)

where Ω is the set of all points λ in C such that λ ≠ w(e^{it}) for all -π ≤ t ≤ π and the winding number of w(·) - λ relative to zero is equal to zero. In particular, the spectral radius of T is equal to the norm of T.

The proof of the above corollary is the same as that of Corollary 3.4, except that the reference to Theorem 3.1 has to be replaced by a reference to the final part of Theorem 4.1. We omit further details. □
We conclude this section with the case when the symbol w of the Toeplitz operator T is a rational function. In that case the process of inverting T can be carried out without the approximation step described in the first part of the proof of Theorem 4.1. Indeed, let

    w(λ) = p1(λ)/p2(λ),  |λ| = 1,

where p1 and p2 are polynomials without zeros on the unit circle T, and write

    p1(λ) = c1 Π_{j=1}^{k⁺} (λ - t_j⁺) Π_{j=1}^{k⁻} (λ - t_j⁻),

    p2(λ) = c2 Π_{j=1}^{ℓ⁺} (λ - r_j⁺) Π_{j=1}^{ℓ⁻} (λ - r_j⁻),

where the points t_j⁺ and r_j⁺ are inside the unit circle and the points t_j⁻ and r_j⁻ are outside T. It follows that w can be represented in the form

    w(λ) = ω_-(λ) λ^κ ω_+(λ),   (4.15)

where κ = k⁺ - ℓ⁺, and the factors ω_- and ω_+ are rational functions which have their zeros and poles inside T and outside T, respectively. In fact,

    T_- = (c1/c2) Π_{j=1}^{k⁺} (I - t_j⁺ S*) Π_{j=1}^{ℓ⁺} (I - r_j⁺ S*)^{-1},

    T_+ = Π_{j=1}^{k⁻} (S - t_j⁻ I) Π_{j=1}^{ℓ⁻} (S - r_j⁻ I)^{-1},

and

    T = T_- S^{(κ)} T_+,   (4.16)

where

    S^{(n)} = { S^n        for n = 0, 1, 2, ...,
                (S*)^{-n}  for n = -1, -2, ....

Notice that because of commutativity the order of the factors in the formulas for T_- and T_+ is not important. However, in (4.16) the order is essential.
3.4 Toeplitz Operators with Continuous Symbols 157
    T_-^{-1} = (c2/c1) Π_{j=1}^{ℓ⁺} (I - r_j⁺ S*) Π_{j=1}^{k⁺} (I - t_j⁺ S*)^{-1},

    T_+^{-1} = Π_{j=1}^{ℓ⁻} (S - r_j⁻ I) Π_{j=1}^{k⁻} (S - t_j⁻ I)^{-1}.

In other words, T_-^{-1} and T_+^{-1} are the Toeplitz operators with symbols 1/ω_- and 1/ω_+, respectively. Furthermore, we see that T_-^{-1} and T_+^{-1} are of the form

    T_-^{-1} = ( γ0⁻  γ_{-1}⁻  γ_{-2}⁻  ...
                 0    γ0⁻      γ_{-1}⁻  ...
                 0    0        γ0⁻      ...
                 ...                        )   (4.17)

    T_+^{-1} = ( γ0⁺  0    0    ...
                 γ1⁺  γ0⁺  0    ...
                 γ2⁺  γ1⁺  γ0⁺  ...
                 ...                )   (4.18)

where

    1/ω_-(λ) = Σ_{j=-∞}^{0} γ_j⁻ λ^j,  |λ| ≥ 1,   (4.19)

    1/ω_+(λ) = Σ_{j=0}^{∞} γ_j⁺ λ^j,  |λ| ≤ 1.   (4.20)

In particular, if κ = 0, then T^{-1} = T_+^{-1} T_-^{-1}, and the entries of T^{-1} are given by

    (T^{-1})_{jk} = Σ_{v=0}^{min(j,k)} γ_{j-v}⁺ γ_{v-k}⁻,  j, k = 0, 1, 2, ....
Furthermore, if κ < 0, then the operator T is right invertible (but not left invertible) and a right inverse is given by

    T^{(-1)} = T_+^{-1} S^{-κ} T_-^{-1}.   (4.21)

If κ > 0, then the operator T is left invertible (but not right invertible), and a left inverse is given by

    T^{(-1)} = T_+^{-1} (S*)^{κ} T_-^{-1}.   (4.22)

The formulas (4.21) and (4.22) are the analogues of formulas (3.5) and (3.4) in Theorem 3.2. As in the case κ = 0, formulas (4.17) and (4.18) can be used to obtain the matrix entries in the matrix representations of the operators T^{(-1)} in (4.21) and (4.22).
Example: Consider the Toeplitz operator on ℓ2 given by

    T = ( 1      1/2    1/4  ...
          1/3    1      1/2  ...
          1/3²   1/3    1    ...
          ...                   ).

Its symbol is given by

    w(λ) = Σ_{j=0}^∞ (1/2)^j λ^{-j} + Σ_{k=1}^∞ (1/3)^k λ^k
         = 1/(1 - (1/2)λ^{-1}) + ((1/3)λ)/(1 - (1/3)λ)
         = (5/6) (1 - (1/2)λ^{-1})^{-1} (1 - (1/3)λ)^{-1}.

Thus the symbol is a rational function, and we have the representation (4.15) with κ = 0 and

    ω_-(λ) = (5/6) (1 - (1/2)λ^{-1})^{-1},   ω_+(λ) = (1 - (1/3)λ)^{-1}.

It follows that

    T^{-1} = ( 6/5    -3/5    0     ...
               -2/5    7/5   -3/5   ...
               0      -2/5    7/5   ...
               ...                     ),
3.5 Finit e Section Method 159
Thus T^{−1} differs from a tridiagonal Toeplitz matrix only in the (0, 0) entry: T^{−1} = G + F, where

  G = [ 7/5   −3/5  0    ⋯ ]
      [ −2/5  7/5   −3/5 ⋯ ]
      [ 0     −2/5  7/5  ⋱ ]
      [ ⋮           ⋱    ⋱ ]

and

  F = [ −1/5  0  0  ⋯ ]
      [ 0     0  0  ⋯ ]
      [ ⋮          ⋱  ]

is of finite rank.
Let ω be a continuous function on 𝕋, and let T be the Toeplitz operator on ℓ² with symbol ω, so that

  T = [ a₀  a₋₁  a₋₂  ⋯ ]
      [ a₁  a₀   a₋₁  ⋯ ]   (5.1)
      [ a₂  a₁   a₀   ⋱ ]
      [ ⋮          ⋱  ⋱ ]

where

  aₙ = (1/2π) ∫_{−π}^{π} ω(e^{it}) e^{−int} dt,  n ∈ ℤ.   (5.2)
Given a non-negative integer N, the N-th section T_N of T is the linear operator on ℂ^{N+1} given by

  T_N = [ a₀   a₋₁    ⋯  a₋N   ]
        [ a₁   a₀     ⋯  a₋N₊₁ ]   (5.3)
        [ ⋮           ⋱  ⋮     ]
        [ a_N  a_{N−1} ⋯  a₀   ]

To solve the equation Tx = y approximately, one replaces it by the truncated system

  T_N (x₀, …, x_N)ᵀ = (y₀, …, y_N)ᵀ.   (5.4)
Chapter III Laurent and Toeplitz Operators

More precisely, we say that the finite section method for T converges if T_N is invertible for N sufficiently large, N ≥ N₀ say, and for each y = (y₀, y₁, y₂, …) in ℓ² the vector x(N) = (x₀(N), …, x_N(N), 0, 0, …), where (x₀(N), …, x_N(N)) is the solution of (5.4) for N ≥ N₀, converges in the norm of ℓ² to the solution x of Tx = y.
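The definition can be watched in action numerically. As an illustration (not taken from the text), take the tridiagonal Toeplitz operator with symbol ω(λ) = −2λ + 7 − 3λ⁻¹, which does not vanish on the unit circle and has winding number zero; the truncated solutions of (5.4) stabilize as N grows. A sketch:

```python
import numpy as np

def t_section(N):
    # (N+1) x (N+1) section: 7 on the diagonal, -2 below, -3 above
    return (7 * np.eye(N + 1) - 2 * np.eye(N + 1, k=-1)
            - 3 * np.eye(N + 1, k=1))

y = np.zeros(61)
y[0] = 1.0  # y = e_0, padded with zeros

x_small = np.linalg.solve(t_section(30), y[:31])
x_large = np.linalg.solve(t_section(60), y[:61])

# Leading coordinates of the sectioned solutions converge geometrically.
diff = np.abs(x_small[:10] - x_large[:10]).max()
print(diff, x_large[0])
```

For this symbol the factorization ω = 6(1 − (1/2)λ⁻¹)(1 − λ/3) gives (T⁻¹)₀₀ = 1/6, which the computed x_large[0] matches.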
In the sequel we identify the space ℂ^{N+1} with the subspace H_N of ℓ² consisting of all x = (x_j)_{j=0}^{∞} in ℓ² such that x_j = 0 for j > N, and we let P_N be the orthogonal projection of ℓ² onto H_N = ℂ^{N+1}. It then follows that the finite section method for T converges if and only if the projection method relative to {P_N}_{N=0}^{∞} is applicable to T, that is, using the notation of Section II.17, the operator T belongs to Π{P_N}.
The following theorem is the main result of this section.
Before we prove the theorem it will be convenient to make some preparations. The
first is a general proposition and the second a lemma.
where P_N is the orthogonal projection of ℓ² onto the space spanned by the first N + 1 vectors in the standard basis. Furthermore, for each y in ℓ²,

  A_N^{−1} P_N y → A^{−1} y  (N → ∞).   (5.7)

Proof: First consider the case when the standard matrices of A and A^{−1} are both lower triangular. So

  A = [ a₀₀  0    ⋯ ]        A^{−1} = [ b₀₀  0    ⋯ ]
      [ a₁₀  a₁₁     ]                 [ b₁₀  b₁₁     ]
      [ ⋮         ⋱  ]                 [ ⋮         ⋱  ]

with zero elements in the strictly upper triangular part. Then the N-th sections of A and A^{−1} are given by

  A_N = [ a₀₀       0   ]        (A^{−1})_N = [ b₀₀       0   ]
        [ ⋮     ⋱       ]                     [ ⋮     ⋱       ]
        [ a_{N0} ⋯ a_{NN} ]                   [ b_{N0} ⋯ b_{NN} ]
which proves (5.5). Moreover, (5.7) follows by using that ‖A^{−1}y − P_N A^{−1}y‖ → 0 if N → ∞. This completes the proof for the case when the standard matrices of A and A^{−1} are both lower triangular. The upper triangular case is proved in a similar way. □
Lemma 5.3 Let T be an arbitrary Toeplitz operator, and let R = ∏_{j=1}^{m} (S − β_j I), where S is the forward shift on ℓ² and β₁, …, β_m are complex numbers. Then the operator RT − TR has finite rank.

Proof: First notice that ST − TS has rank at most one. Indeed, let e₀, e₁, e₂, … be the standard orthonormal basis of ℓ², and let

  T = [ a₀  a₋₁  a₋₂  ⋯ ]
      [ a₁  a₀   a₋₁  ⋯ ]
      [ a₂  a₁   a₀   ⋱ ]
      [ ⋮          ⋱  ⋱ ]

Then for each p = 1, 2, … we have

  S^p T − T S^p = Σ_{j=1}^{p} S^{p−j} (ST − TS) S^{j−1},

and thus

  rank (S^p T − T S^p) ≤ p,  p = 1, 2, … .   (5.9)

Now notice that R is a polynomial in S of degree m, and hence (5.9) yields that the rank of RT − TR is at most 1 + 2 + ⋯ + m. □
Proof of Theorem 5.1: Since T is assumed to be invertible, we know from Theorem 4.1 that ω(e^{it}) ≠ 0 for −π ≤ t ≤ π and the winding number κ of ω relative to zero is equal to zero. But then we can use the first part of the proof of Theorem 4.1 to conclude that T can be represented in the form

  T = T₋ (I + C) T₊,  ‖C‖ < 1,   (5.10)
where

  T₋ = ∏_{i=1}^{k} (I − αᵢ S*),  T₊ = ∏_{j=1}^{m} (S − β_j I),   (5.11)

where S is the forward shift on ℓ², and |αᵢ| < 1, |β_j| > 1 for each i and j. As we have seen before, the fact that |αᵢ| < 1 and |β_j| > 1 for each i and j implies that the operators T₋ and T₊ are invertible (by Lemma 3.3). In fact,

  T₋^{−1} = ∏_{i=1}^{k} (I − αᵢ S*)^{−1},  T₊^{−1} = ∏_{j=1}^{m} (S − β_j I)^{−1}.   (5.12)

From (5.11) and (5.12) we also see that with respect to the standard basis of ℓ² the matrices of T₋ and T₋^{−1} are upper triangular, and those of T₊ and T₊^{−1} are lower triangular.
Consider the operator

  F = T₊ (I + C) T₋.   (5.13)

By comparing (5.10) and (5.13) we see that F is obtained from T by interchanging the factors T₋ and T₊. According to Lemma 5.3 the operators T₊C − CT₊ and T₊T₋ − T₋T₊ are of finite rank. It follows that T − T₊T₋(I + C) is an operator of finite rank. Also, by taking adjoints in Lemma 5.3 we see that T₋C − CT₋ is of finite rank. So we have proved that the operator T − F is of finite rank.

Notice that the three factors in the right hand side of (5.13) are invertible, and hence F is invertible. But then, by Theorem II.17.7, the result of the previous paragraph shows that T ∈ Π{P_N} if and only if F ∈ Π{P_N}. Moreover, since ‖C‖ < 1 we also know (see Theorem II.17.6) that I + C ∈ Π{P_N}.
Let T₊,N and T₋,N be the N-th sections of T₊ and T₋, respectively, and let F_N and C_N be those of F and C, respectively. Since T₊ is lower triangular and T₋ upper triangular with respect to the standard basis of ℓ², we have

  P_N T₊ = P_N T₊ P_N,  T₋ P_N = P_N T₋ P_N,

and thus

  F_N = T₊,N (I + C_N) T₋,N.   (5.14)

From the triangular properties of the operators in (5.11) and (5.13) it follows that we can apply Proposition 5.2 to show that T₊,N and T₋,N are invertible. Also, ‖C_N‖ ≤ ‖C‖ < 1, and therefore I + C_N is invertible. Using (5.14) we see that for each N the N-th section F_N is invertible, and

  F_N^{−1} = T₋,N^{−1} (I + C_N)^{−1} T₊,N^{−1}.

We conclude that sup_N ‖F_N^{−1}‖ < ∞, hence F ∈ Π{P_N}, and therefore T ∈ Π{P_N}, which completes the proof. □
In this section we consider the finite section method for Laurent operators. Let
L be an invertible Laurent operator on ℓ²(ℤ). Recall (see Section II.17) that the finite section method is said to converge for L if L ∈ Π{Q_n}, where Q_n is the orthogonal projection of ℓ²(ℤ) onto span{e₋ₙ, …, eₙ}. Here …, e₋₁, e₀, e₁, … is the standard orthonormal basis of ℓ²(ℤ).

In general, in contrast to Toeplitz operators, from the invertibility of L it does not follow that the finite section method converges for L. For instance, assume that L = V, where V is the bilateral forward shift on ℓ²(ℤ). Then L is invertible, but

  Q_n V Q_n |_{Im Q_n} = [ 0  0  ⋯  0  0 ]
                         [ 1  0  ⋯  0  0 ]
                         [ 0  1  ⋯  0  0 ]
                         [ ⋮      ⋱     ⋮ ]
                         [ 0  0  ⋯  1  0 ]

is not invertible for any n, so that V ∉ Π{Q_n}.
Theorem 6.1 Let L be a Laurent operator on ℓ²(ℤ) with continuous symbol ω satisfying ω(e^{it}) ≠ 0 for each t. Then the finite section method converges for L if and only if the winding number of ω relative to zero is equal to zero.
Proof: The sufficiency follows from Theorems 4.1 and 5.1. Indeed, assume the winding number κ of ω relative to zero is equal to zero. Let T be the Toeplitz operator on ℓ² with symbol ω. Since κ = 0, Theorem 4.1 implies that T is invertible. But then we can use Theorem 5.1 to show that the finite section method converges for T. Thus for n sufficiently large, n ≥ n₀ say, the operator P_n T P_n is invertible on Im P_n and

  sup_{n ≥ n₀} ‖(P_n T P_n)^{−1} |_{Im P_n}‖ < ∞.

Here P_n is the orthogonal projection onto the space spanned by the first n vectors in the standard orthonormal basis of ℓ². Notice that we may identify both Im P_{2n+1} and Im Q_n with ℂ^{2n+1}, and in that case we have P_{2n+1} T P_{2n+1} = Q_n L Q_n. We conclude that Q_n L Q_n is invertible on Im Q_n for n ≥ n₀ and sup_{n ≥ n₀} ‖(Q_n L Q_n)^{−1} |_{Im Q_n}‖ < ∞. By Theorem II.17.1 this implies that the finite section method converges for L.
For the proof of the necessity of the winding number condition we refer to the proof of Theorem XVI.5.2, where the necessity is established in a somewhat different setting. The proof given there carries over to the case considered here. □
To understand Theorem 6.1 better, consider the Laurent operators

  L  = [ ⋱                     ]        L₁ = [ ⋱                              ]
       [ ⋯  a₀  a₋₁   a₋₂  ⋯  ]             [ ⋯  a_κ     a_{κ−1}  a_{κ−2}  ⋯ ]
       [ ⋯  a₁  [a₀]  a₋₁  ⋯  ]             [ ⋯  a_{κ+1} [a_κ]    a_{κ−1}  ⋯ ]
       [                    ⋱  ]             [                             ⋱  ]

where the bracketed entry marks the zero-zero position. The difference between the matrices for L and L₁ is only in the location of the entry in the zero-zero position.
3.6 The Finite Section Method for Laurent Operators
The symbol of L₁ is equal to ω₁(λ) = λ^{−κ} ω(λ). We have ω₁(e^{it}) ≠ 0 for each t and the corresponding winding number is equal to zero. Thus L₁ is invertible, and Theorem 6.1 tells us that the finite section method converges for L₁. This means that the matrices

  Q_n L₁ Q_n |_{Im Q_n} = (a_{κ+j−r})_{j,r=−n}^{n}

are invertible for n large enough and

  sup_n ‖(Q_n L₁ Q_n)^{−1} |_{Im Q_n}‖ < ∞.

Notice that

  V^κ Q_n V^{−κ} = Q_{n,κ},

where Q_{n,κ} is the orthogonal projection of ℓ²(ℤ) onto span{e_{−n+κ}, …, e_{n+κ}}, with …, e₋₁, e₀, e₁, … being the standard orthonormal basis of ℓ²(ℤ). We arrive at the conclusion that for the operator L the projection method with respect to the two sequences of projections {Q_{n,κ}} and {Q_n} is convergent. In other words, to have the finite section method converging for an invertible Laurent operator L one has to take the sections around the κ-th diagonal, where κ is the winding number of the symbol of L.
As we have seen in Section II.18, it is sometimes convenient to consider a modified finite section method. This is also true for Laurent operators.
Example: Consider the Laurent operator L with symbol ω(λ) = −2λ + 7 − 3λ^{−1}. Since

  ω(λ) = −2λ^{−1} (λ − 1/2)(λ − 3),

we see that ω(e^{it}) ≠ 0 for each t and relative to zero the winding number of the corresponding curve is equal to zero. It follows that the finite section method converges for L.

To find the inverse of L it is convenient to use a modified finite section method. Notice that the n-th section of L is the following (2n + 1) × (2n + 1) matrix:

  L_n = [ 7   −3   0   ⋯  0   0  ]
        [ −2  7    −3  ⋯  0   0  ]
        [ 0   −2   7   ⋱  0   0  ]
        [ ⋮            ⋱  ⋱   ⋮  ]
        [ 0   0    0   ⋯  7   −3 ]
        [ 0   0    0   ⋯  −2  7  ]
Now replace the entries in the left upper corner and in the right lower corner by 6, and let F_n be the resulting matrix. As we have seen in Section II.18 the inverse of F_n is given by

  F_n^{−1} = (1/5) [ 1        2^{−1}    2^{−2}    ⋯  2^{−2n}   ]
                   [ 3^{−1}   1         2^{−1}    ⋯  2^{−2n+1} ]
                   [ 3^{−2}   3^{−1}    1         ⋯  2^{−2n+2} ]
                   [ ⋮                            ⋱  ⋮         ]
                   [ 3^{−2n}  3^{−2n+1} 3^{−2n+2} ⋯  1         ]

and hence

  L^{−1} = (1/5) [ ⋱                                        ]
                 [ ⋯  1       2^{−1}  2^{−2}  2^{−3}  2^{−4} ⋯ ]
                 [ ⋯  3^{−1}  1       2^{−1}  2^{−2}  2^{−3} ⋯ ]
                 [ ⋯  3^{−2}  3^{−1}  [1]     2^{−1}  2^{−2} ⋯ ]
                 [ ⋯  3^{−3}  3^{−2}  3^{−1}  1       2^{−1} ⋯ ]
                 [ ⋯  3^{−4}  3^{−3}  3^{−2}  3^{−1}  1      ⋯ ]
                 [                                        ⋱  ]

where the bracketed entry is in the zero-zero position.
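The closed-form inverse of the modified section F_n can be verified directly. A small NumPy check (the choice n = 4 is arbitrary):

```python
import numpy as np

# F_n: the (2n+1)x(2n+1) tridiagonal section with 7 on the diagonal,
# -2 below and -3 above, and the two corner 7's replaced by 6.
n = 4
m = 2 * n + 1
F = 7 * np.eye(m) - 2 * np.eye(m, k=-1) - 3 * np.eye(m, k=1)
F[0, 0] = F[-1, -1] = 6

# Claimed inverse: (1/5) * (2^{j-k} above the diagonal, 3^{k-j} below).
claimed = np.array([[2.0 ** (j - k) if k >= j else 3.0 ** (k - j)
                     for k in range(m)] for j in range(m)]) / 5

err = np.abs(np.linalg.inv(F) - claimed).max()
print(err)
```

The agreement is exact up to rounding, which illustrates why the modified sections give the entries of L⁻¹ directly.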
Exercises III
(a) Prove that the operator A is strictly positive, that is, for some ε > 0,

  (Ax, x) ≥ ε (x, x),  x ∈ ℓ²(ℤ).

(b) Show that A is invertible, and find the matrix of A^{−1} with respect to the standard basis of ℓ²(ℤ).

(c) Find the spectra of the operators A and A^{−1}.
2. Let |a| ≠ 1. Do the previous exercise for the Laurent operator A on ℓ²(ℤ) defined by
  A = [ ⋱                    ]
      [ ⋯  1   a   a²  a³  a⁴ ⋯ ]
      [ ⋯  b   1   a   a²  a³ ⋯ ]
      [ ⋯  b²  b   [1] a   a² ⋯ ]
      [ ⋯  b³  b²  b   1   a  ⋯ ]
      [ ⋯  b⁴  b³  b²  b   1  ⋯ ]
      [                    ⋱  ]
4. Let |a| ≠ 1, and let A be the Laurent operator on ℓ²(ℤ) with symbol

  ω(λ) = (1 − |a|²) / (1 + |a|² − āλ − aλ^{−1}),  λ ∈ 𝕋.
5. Can it happen that the inverse of a lower triangular Laurent operator is upper triangular?

6. Prove or disprove the statement: a left invertible Laurent operator is always two-sided invertible.
7. Fix t ∈ ℝ, and let U be the operator on L₂(−∞, ∞) given by

  (Uf)(x) = f(x + t),  x ∈ ℝ,

and set

  A = −2U + 7I − 3U^{−1}.
(a) Prove that the operator T is strictly positive, that is, for some ε > 0,

  (Tx, x) ≥ ε (x, x),  x ∈ ℓ².

(b) Show that T is invertible, and find the matrix of T^{−1} with respect to the standard basis of ℓ².

(c) Find the spectra of T and T^{−1}.
10. Let |a| ≠ 1. Do the previous exercise for the Toeplitz operator T on ℓ² with symbol

  ω(λ) = aλ^{−1} + 1 + |a|² + āλ,  λ ∈ 𝕋.
  T = [ 1   a   a²  ⋯ ]
      [ b   1   a   ⋯ ]
      [ b²  b   1   ⋱ ]
      [ ⋮          ⋱  ]

where |a| < 1 and |b| < 1.
12. Let |a| ≠ 1, and let T be the Toeplitz operator on ℓ² with symbol

  ω(λ) = (1 − |a|²) / (1 + |a|² − āλ − aλ^{−1}),  λ ∈ 𝕋.
13. Let T be the Toeplitz operator on ℓ² with the following tridiagonal matrix representation:

  T = [ 0  a  0  0  ⋯ ]
      [ b  0  a  0  ⋯ ]
      [ 0  b  0  a  ⋱ ]
      [ 0  0  b  0  ⋱ ]
      [ ⋮        ⋱  ⋱ ]
(a) For which values of a and b in ℂ is the operator T left, right or two-sided invertible? For these values determine the one- or two-sided inverse of T.

(b) Solve the equation Tx = y, where y = (1, q, q², …) with |q| < 1.
T = -2 W + 7I - 3W*
15. Fix 0 < a < 1. Let M_a be the operator on L₂([0, 1]) defined by

  (M_a f)(t) = a^{−1/2} f(t/a) for 0 ≤ t ≤ a,  and  (M_a f)(t) = 0 for a < t ≤ 1.

Show that the adjoint of M_a is given by

  (M_a* f)(t) = a^{1/2} f(at).
18. Fix 0 < a < 1. Find the spectrum of T, where T is the operator on L₂([0, 1]) given by

  T = Σ_{j=1}^{∞} a_j M_a^j + I + Σ_{j=1}^{∞} b_j (M_a*)^j.

In particular, for the operator T prove the analogs of Theorems 3.1 and 3.2, and determine its spectrum.
Chapter IV
Spectral Theory of Compact Self Adjoint Operators

One of the fundamental results in linear algebra is the spectral theorem which states that if H is a finite dimensional Hilbert space and A ∈ L(H) is self adjoint, then there exists an orthonormal basis φ₁, …, φ_n for H and real numbers λ₁, …, λ_n such that

  Aφᵢ = λᵢφᵢ,  1 ≤ i ≤ n.

The matrix (a_{ij}) = ((Aφ_j, φ_i)) corresponding to A and φ₁, …, φ_n is the diagonal matrix

  diag(λ₁, λ₂, …, λ_n).

The natural infinite dimensional analogue concerns a self adjoint operator A on a Hilbert space H for which there exist an orthonormal basis φ₁, φ₂, … of H and real numbers λ₁, λ₂, … such that

  Aφᵢ = λᵢφᵢ,  i ≥ 1.

This means that the matrix corresponding to A and φ₁, φ₂, … is an infinite diagonal matrix.
In this chapter it is shown that the spectral theorem admits an important generalization to compact self adjoint operators.
Let us first consider an example which indicates the possibility of a generalization. Let h be a continuous function on ℝ which is periodic with period 2π. The integral operator K defined on L₂([−π, π]) by

  (Kf)(t) = ∫_{−π}^{π} h(t − s) f(s) ds

is a bounded linear operator with range in L₂([−π, π]). Taking φ_n(t) = (1/√(2π)) e^{int}, n = 0, ±1, …, as the orthonormal basis for L₂([−π, π]), it follows from the periodicity of h and φ_n that

  (Kφ_n)(t) = ∫_{−π}^{π} h(t − s) φ_n(s) ds = ∫_{−π}^{π} h(s) φ_n(t − s) ds
            = (1/√(2π)) e^{int} ∫_{−π}^{π} h(s) e^{−ins} ds = λ_n φ_n(t),

where

  λ_n = ∫_{−π}^{π} h(s) e^{−ins} ds,  n = 0, ±1, … .

The matrix corresponding to the operator K and {φ_n}_{n=−∞}^{∞} is the doubly infinite diagonal matrix diag(…, λ₋₁, λ₀, λ₁, …).
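The discrete analogue of this example can be run numerically (an illustration, not from the book): a circulant matrix, the finite model of convolution by a periodic kernel, is diagonal in the sampled Fourier basis, with eigenvalues given by the DFT of the kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
c = rng.standard_normal(n)
# Circulant "convolution" matrix: C[j, k] = c[(j - k) mod n]
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

lam = np.fft.fft(c)  # discrete analogue of the Fourier coefficients of h
t = np.arange(n)
worst = 0.0
for m in range(n):
    phi = np.exp(2j * np.pi * m * t / n) / np.sqrt(n)  # sampled e^{imt}
    worst = max(worst, np.abs(C @ phi - lam[m] * phi).max())
print(worst)
```

Each sampled exponential is an exact eigenvector, mirroring (Kφ_n)(t) = λ_n φ_n(t) above.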
The examples described above show that the spectral representation theory starts with the problem of the existence of eigenvalues and eigenvectors. We shall see later the significance of eigenvalues and eigenvectors which appear in various problems in mathematical analysis and mechanics.
  (Kφ)(t) = i ∫₀^t φ(s) ds − i ∫_t^1 φ(s) ds = λφ(t) a.e.   (2.1)
4.2 The Problem of Existence of Eigenvalues and Eigenvectors
  φ(t) = c e^{(2i/λ)t} a.e.,  c ≠ 0.   (2.3)

By identifying functions which are equal a.e., we may assume (2.3) holds for all t. Now (2.1) implies that

  0 = φ(0) + φ(1) = c(1 + e^{2i/λ}),  c ≠ 0.

Hence

  2/λ = (2k + 1)π,  k = 0, ±1, …,

so that

  λ_k = 2 / ((2k + 1)π),  k = 0, ±1, …

are the eigenvalues of K and e^{i(2k+1)πt} are eigenvectors corresponding to λ_k.
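The kernel of this operator is k(t, s) = i·sign(t − s), so the eigenvalues λ_k = 2/((2k + 1)π) can be checked by discretizing the kernel (a numerical sketch; the grid size 500 is an arbitrary choice):

```python
import numpy as np

n = 500
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
# Hermitian discretization of the kernel k(t, s) = i * sign(t - s)
K = 1j * np.sign(t[:, None] - t[None, :]) * h

eig = np.linalg.eigvalsh(K)  # ascending order
# Extreme eigenvalues should approximate k = 0 and k = -1: +/- 2/pi
err = max(abs(eig[-1] - 2 / np.pi), abs(eig[0] + 2 / np.pi))
print(eig[-1], 2 / np.pi)
```

The midpoint discretization converges at rate O(1/n), so the agreement improves as the grid is refined.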
Every linear operator on a finite dimensional Hilbert space over ℂ has an eigenvalue. However, even a self adjoint operator on an infinite dimensional Hilbert space need not have an eigenvalue. For example, let A : L₂([a, b]) → L₂([a, b]) be the operator defined by

  (Af)(t) = t f(t).

2.1 (a) Any eigenvalue of a self adjoint operator is real. For if A is self adjoint and Ax = λx, x ≠ 0, then

  λ(x, x) = (Ax, x) = (x, Ax) = λ̄(x, x),

whence λ = λ̄.
  Ax = λx,  Ay = μy,  y ≠ 0,

and
  λ(x, φ_k) − Σ_{j=1}^{n} (x, φ_j)(ψ_j, φ_k) = 0,  1 ≤ k ≤ n.   (3.2)

Now (x, φ_j) ≠ 0 for some j, otherwise x = 0 by (3.1). Thus {(x, φ_j)}_{j=1}^{n} is a non trivial solution to the system of equations

  λ c_k − Σ_{j=1}^{n} (ψ_j, φ_k) c_j = 0,  1 ≤ k ≤ n.   (3.3)

Hence

  det( λ δ_{kj} − (ψ_j, φ_k) )_{k,j=1}^{n} = 0.   (3.4)

Conversely, if (3.4) holds, then there exist c₁, …, c_n, not all zero, which satisfy (3.3). Guided by (3.1), we take x = Σ_{j=1}^{n} c_j ψ_j and get Kx = λx.
4.4 Existence of Eigenvalues
In this section it is shown that every compact self adjoint operator has an
eigenvalue.
Let us start with the following theorem.
Therefore,
Combining this with the definition of m and the parallelogram law (Theorem I.3.1),
we get
Now (Ax, y) = |(Ax, y)| e^{iθ} for some real number θ. Substituting e^{−iθ}x for x in (4.1) yields

  |(Ax, y)| ≤ (m/2)(‖x‖² + ‖y‖²).   (4.2)
Now that we know that the least upper bound of the set {|(Ax, x)| : ‖x‖ = 1} is ‖A‖ (A self adjoint), the next problem is to determine if ‖A‖ is attained, i.e., whether ‖A‖ = |(Ax₀, x₀)| for some x₀ with ‖x₀‖ = 1. The next theorem shows that if this is the case, then at least one of the numbers ‖A‖ or −‖A‖ is an eigenvalue of A. Thus if A is a self adjoint operator which does not have an eigenvalue, as in the example in Section 2, then ‖A‖ is not attained.

Theorem 4.3 Let A ∈ L(H) be self adjoint, and suppose that ‖x₀‖ = 1 and ‖A‖ = |(Ax₀, x₀)|. Then

  λ = (Ax₀, x₀)

is an eigenvalue of A with Ax₀ = λx₀.
Proof: For every α ∈ ℂ and every v ∈ H, it follows from the definition of λ that

  |(A(x₀ + αv), x₀ + αv)| ≤ |λ| ‖x₀ + αv‖².

Expanding the inner products and setting λ = (Ax₀, x₀), we obtain an inequality which, since α is arbitrary, yields

  (v, (A − λ)x₀) = 0.
Theorem 4.4 If A ∈ L(H) is compact and self adjoint, H ≠ {0}, then at least one of the numbers ‖A‖ or −‖A‖ is an eigenvalue of A.

Proof: The theorem is trivial if A = 0. Assume A ≠ 0. It follows from Theorem 4.1 that there exists a sequence {x_n} in H, ‖x_n‖ = 1, and a real number λ such that |λ| = ‖A‖ ≠ 0 and (Ax_n, x_n) → λ. To prove that λ is an eigenvalue of A, we first note that

  ‖Ax_n − λx_n‖² = ‖Ax_n‖² − 2λ(Ax_n, x_n) + λ² ≤ 2λ² − 2λ(Ax_n, x_n).   (4.3)

Thus

  Ax_n − λx_n → 0  (n → ∞).   (4.4)

Since A is compact, there exists a subsequence {Ax_{n′}} of {Ax_n} which converges to some y ∈ H. Consequently, (4.4) implies that x_{n′} → (1/λ)y, and by the continuity of A,

  y = lim_{n′→∞} Ax_{n′} = (1/λ) Ay.

Hence Ay = λy and y ≠ 0 since ‖y‖ = lim_{n′→∞} ‖Ax_{n′}‖ = |λ| = ‖A‖. Thus λ is an eigenvalue of A. □
Since

  (Kf)(t) = ∫₀^1 k(t, s) f(s) ds,

the eigenvalue equation Kφ = λφ reads

  ∫₀^1 k(t, s) φ(s) ds = λφ(t) a.e.   (4.5)
We shall now prove the spectral theorem for compact self adjoint operators. The
proof depends on successive applications of Theorem 4.4.
  Ax = Σ_k λ_k (x, φ_k) φ_k.
k
But this is impossible since {Aφ_n} has a convergent subsequence due to the compactness of A.
We are now ready to prove the representation of A as asserted in the theorem.
Let x ∈ H be given.
  Ax = Σ_k λ_k (x, φ_k) φ_k.  □
  Ax = Σ_k λ_k (x, φ_k) φ_k,   (6.1)

and

  closure sp{φ_n} = closure Im A = (Ker A)^⊥.
Thus {φ_n} is an orthonormal basis for (Ker A)^⊥. Hence, given x ∈ H, there exists a v ∈ (Ker A)^⊥ such that

For if this is not the case, then there exists a v ≠ 0 in Ker(λ_j I − A) such that v ⊥ φ_{n_i}, 1 ≤ i ≤ p. But for k ≠ n_i, 1 ≤ i ≤ p, v is also orthogonal to φ_k since λ_j ≠ λ_k. Hence

  λ_j v = Av = Σ_k λ_k (v, φ_k) φ_k = 0,

which contradicts v ≠ 0.
  Ax = Σ_k λ_k (x, φ_k) φ_k

and

  A_n x = Σ_{k=1}^{n} λ_k (x, φ_k) φ_k.
Since

  ‖A − A_n‖² = sup_{‖x‖=1} ‖ Σ_{k=n+1}^{∞} λ_k (x, φ_k) φ_k ‖² ≤ sup_{k>n} |λ_k|² → 0
Theorem 7.1 Given a compact self adjoint operator A ∈ L(H), let {μ_j} be the set of non-zero eigenvalues of A and let P_j be the orthogonal projection onto Ker(μ_j I − A). Then

(i) P_j P_k = 0, j ≠ k;

(ii) A = Σ_j μ_j P_j,

where convergence of the series is with respect to the norm on L(H).

Proof: Let {φ_n}, {λ_n} be a basic system of eigenvectors and eigenvalues of A. For each k, a subset of {φ_n} is an orthonormal basis for Ker(μ_k I − A), say φ_{n_i}, 1 ≤ i ≤ p. Then P_k x = Σ_{i=1}^{p} (x, φ_{n_i}) φ_{n_i}, and since each λ_n is a μ_k, it follows from (6.2) that
  ‖ A − Σ_{k=1}^{n} μ_k P_k ‖² = sup_{‖x‖=1} ‖ Ax − Σ_{k=1}^{n} μ_k P_k x ‖² → 0

as n → ∞. □
Theorem 8.1 Let K be a compact self adjoint operator in L(H) with a basic system of eigenvectors {φ_n} and eigenvalues {λ_n}. If λ ≠ 0 and λ ≠ λ_n, n ≥ 1, then λI − K is invertible. Indeed, if x satisfies (λI − K)x = y, then taking inner products with φ_j gives

  λ(x, φ_j) − λ_j (x, φ_j) = (y, φ_j),   (8.1)

or

  (λ − λ_j)(x, φ_j) = (y, φ_j),  j ≥ 1.   (8.2)

Thus, from (8.1) and (8.2),

  x = (1/λ) y + (1/λ) Σ_j (λ_j / (λ − λ_j)) (y, φ_j) φ_j.   (8.4)
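In finite dimensions the resolvent expansion for a self adjoint operator — namely (λI − K)⁻¹y = y/λ + (1/λ) Σ_j λ_j/(λ − λ_j) (y, φ_j)φ_j — can be checked directly. A NumPy sketch (the matrix and λ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
K = (M + M.T) / 2                      # self adjoint
lam_j, Phi = np.linalg.eigh(K)         # columns of Phi are orthonormal phi_j

lam = 5.0 + np.abs(lam_j).max()        # safely outside the spectrum
y = rng.standard_normal(6)

x_direct = np.linalg.solve(lam * np.eye(6) - K, y)
coeffs = Phi.T @ y                     # <y, phi_j>
x_series = y / lam + Phi @ (lam_j / (lam * (lam - lam_j)) * coeffs)

err = np.abs(x_direct - x_series).max()
print(err)
```

The two coefficient sums agree because 1/λ + λ_j/(λ(λ − λ_j)) = 1/(λ − λ_j).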
Theorem 8.2 Let K ∈ L(H) be a compact self adjoint operator with a basic system of eigenvectors {φ_n} and eigenvalues {λ_n}. Given y ∈ H, the equation Kx = y has a solution if and only if

(i) y ⊥ Ker K, and

(ii) Σ_n (1/λ_n²) |(y, φ_n)|² < ∞,

in which case the general solution is

  x = u + Σ_n (1/λ_n) (y, φ_n) φ_n,  u ∈ Ker K.

Proof: Suppose (i) and (ii) hold. Then x₀ = Σ_n (1/λ_n)(y, φ_n) φ_n is in H and it follows from (2) in Section 6 that
  = Σ_k λ_k |(x, φ_k)|² ≥ 0.
Let A be compact and non-negative. In the proof of the spectral theorem, a basic system {φ_n}, {λ_n} of eigenvectors and eigenvalues of A was obtained by taking

  λ₁ = max_{‖x‖=1} (Ax, x) = (Aφ₁, φ₁),

  λ₂ = max_{‖x‖=1, x ⊥ sp{φ₁}} (Ax, x) = (Aφ₂, φ₂).
Proof: We note that max{(Ax, x) : ‖x‖ = 1, x ⊥ M} is attained. This can be seen from Corollary 4.5 applied to the restriction of PA to M^⊥, where P is the orthogonal projection onto M^⊥. For n = 1, the only subspace of dimension zero is {0}. Thus the formula for λ₁ reduces to λ₁ = max_{‖x‖=1} (Ax, x), which we already know.
Let {φ_n} be a basic system of eigenvectors of A corresponding to {λ_n}. Given any subspace M of dimension n − 1, Lemma I.16.1 implies that there exists an x₀ ∈ sp{φ₁, …, φ_n} such that x₀ ⊥ M and ‖x₀‖ = 1. Suppose x₀ = Σ_{k=1}^{n} a_k φ_k. Since λ_k ≥ λ_n, 1 ≤ k ≤ n,

  max_{‖x‖=1, x⊥M} (Ax, x) ≥ (Ax₀, x₀) = ( Σ_{k=1}^{n} λ_k a_k φ_k, Σ_{k=1}^{n} a_k φ_k )
    = Σ_{k=1}^{n} λ_k |a_k|² ≥ λ_n ‖x₀‖² = λ_n.   (9.2)
But

  λ_n = max_{‖x‖=1, x ⊥ sp{φ₁,…,φ_{n−1}}} (Ax, x).   (9.3)
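Formula (9.3) can be checked numerically for a finite non-negative matrix: restricting A to the orthogonal complement of the top n − 1 eigenvectors and maximizing the quadratic form recovers λ_n. A sketch (the 7×7 matrix and n = 3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((7, 7))
A = M @ M.T                       # compact (finite rank), non-negative
w, V = np.linalg.eigh(A)
w, V = w[::-1], V[:, ::-1]        # sort so that lambda_1 >= lambda_2 >= ...

n = 3
Q = V[:, :n - 1]                  # phi_1, ..., phi_{n-1}
P = np.eye(7) - Q @ Q.T           # projection onto their orthogonal complement
top = np.linalg.eigvalsh(P @ A @ P).max()  # max of (Ax, x) on that subspace
print(top, w[n - 1])
```

The maximum over the complement equals the n-th eigenvalue, as (9.3) asserts.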
Now for some simple applications of the min-max theorem. Suppose A and B are compact non-negative operators in L(H). Let φ₁, φ₂, … and λ₁(A) ≥ λ₂(A) ≥ ⋯ be a basic system of eigenvectors and eigenvalues of A. Let ψ₁, ψ₂, … and λ₁(B) ≥ λ₂(B) ≥ ⋯ be a basic system of eigenvectors and eigenvalues of B. Hence

  λ_n(A) = max_{‖x‖=1, x ⊥ sp{φ₁,…,φ_{n−1}}} (Ax, x),   (9.6)

  λ_m(B) = max_{‖x‖=1, x ⊥ sp{ψ₁,…,ψ_{m−1}}} (Bx, x).   (9.8)

Taking M = sp{φ₁, …, φ_{n−1}, ψ₁, …, ψ_{m−1}}, we get

  λ_n(A) + λ_m(B) ≥ max_{‖x‖=1, x⊥M} ((A + B)x, x) ≥ λ_{n+m−1}(A + B).
Exercises IV
10. Show that every λ with |λ| < 1 is an eigenvalue of the backward shift operator on ℓ².
11. Given that (αI − A)^{−1} exists, find x, where

(a) (αI − A)x = e₁ and A : ℓ² → ℓ² is given by A(ξ₁, ξ₂, ξ₃, …) = (3ξ₂, 0, 0, …).
Is it always correct that dim Ker(AB − λI) = dim Ker(BA − λI) if λ ≠ 0 is an eigenvalue of AB?

14. What conclusions can one draw in problems 12 and 13 if λ = 0?

15. Given λ ≠ 0, prove that if one of the two operators (AB − λI) and (BA − λI) is invertible, then so is the other and

  (BA − λI)^{−1} = (1/λ)[ B (AB − λI)^{−1} A − I ].
  A = [ 0  B  0 ]
      [ 0  0  C ]
      [ 0  0  0 ]
(a) max over subspaces L with dim L ≤ j − 1 of min{ (x, x) : x ⊥ L, (Ax, x) = 1 } = 1/λ_j;

(b) min over subspaces L with dim L ≤ j of max{ (x, x) : x ∈ L, (Ax, x) = 1 } = 1/λ_j.
21. Let P be an orthogonal projection on a Hilbert space H. Prove that for any A ∈ L(H), A*PA ≤ A*A, i.e., A*A − A*PA is positive.

22. Define K ∈ L(H) by Kh = Σ_{j=1}^{n} λ_j (h, φ_j) φ_j, where {φ_j}_{j=1}^{n} is an orthonormal system and λ_j ≥ 0. Let P be an orthogonal projection on H. Define K_P h = Σ_{j=1}^{n} λ_j (h, Pφ_j) Pφ_j. Prove that λ_j(K_P) ≤ λ_j(K).

23. Let P be an orthogonal projection on a Hilbert space H. Prove that if A ∈ L(H) is compact, then

  g(Aφ₁, …, Aφ_n) ≤ λ₁(A*A) ⋯ λ_n(A*A),
Using the theory developed in Chapter IV, we now present some fundamental theorems concerning the spectral theory of compact self adjoint integral operators. In general, the spectral series representations of these operators converge in the L₂-norm, which is not strong enough for many applications. Therefore we prove the Hilbert-Schmidt theorem and Mercer's theorem, since each of these theorems gives conditions for a uniform convergence of the spectral decomposition of the integral operators. As a corollary of Mercer's theorem we obtain the trace formula for positive integral operators with continuous kernel function.

We recall that if k is in L₂([a, b] × [a, b]) and k(t, s) = k̄(s, t) a.e., then the integral operator K defined by

  (Kf)(t) = ∫_a^b k(t, s) f(s) ds

is a compact self adjoint operator on L₂([a, b]). Consequently, there exists a basic system of eigenvectors {φ_k} and eigenvalues {λ_k} of K (K ≠ 0) such that

  Kf = Σ_k λ_k (f, φ_k) φ_k,  f ∈ L₂([a, b]).   (1.1)

For many purposes this type of convergence is too weak, whereas uniform convergence in (1.1) is most desirable. The problem is to give sufficient, but reasonable, conditions so that for each f ∈ L₂([a, b]), the series in (1.1) converges uniformly on [a, b]. The Hilbert-Schmidt theorem which follows now is very useful in this regard.
Chapter V. Spectral Theory of Integral Operators
  ∫_a^b k(t, s) f(s) ds = Σ_k λ_k ( ∫_a^b f(s) φ̄_k(s) ds ) φ_k(t) a.e.   (1.2)

Now

  λ_j φ_j(t) = (Kφ_j)(t) = ∫_a^b k(t, s) φ_j(s) ds = (k_t, φ̄_j),

where k_t(s) = k(t, s). Therefore, since k_t is in L₂([a, b]), it follows from Bessel's inequality and the hypotheses that

  Σ_j |λ_j φ_j(t)|² = Σ_j |(k_t, φ̄_j)|² ≤ ‖k_t‖² = ∫_a^b |k(t, s)|² ds ≤ sup_t ∫_a^b |k(t, s)|² ds = C² < ∞.   (1.3)
Let ε > 0 be given. Since Σ_j |(f, φ_j)|² ≤ ‖f‖², there exists an integer N such that if n > m ≥ N, then

  Σ_{j=m}^{n} |(f, φ_j)|² ≤ ε².   (1.4)

Thus from (1.2), (1.3), and (1.4),

  Σ_{j=m}^{n} |λ_j (f, φ_j) φ_j(t)| ≤ Cε,  n > m ≥ N,  t ∈ [a, b].
5.1 Hilbert-Schmidt Theorem
  ∫_a^b k(t, s) f(s) ds = Σ_k ∫_a^b λ_k φ_k(t) φ̄_k(s) f(s) ds = ∫_a^b Σ_k λ_k φ_k(t) φ̄_k(s) f(s) ds.
The result we shall prove in Section 3 is Mercer's theorem which states that if k is continuous and each λ_k is non-negative, then the series (1.5) converges uniformly and absolutely on [a, b] × [a, b] to k(t, s).

Now let us prove another fact. We know from IV.6.1(b) that L₂([a, b]) has an orthonormal basis φ₁, φ₂, … consisting of eigenvectors of K. Since the functions Φ_{ij}(t, s) = φ_i(t) φ̄_j(s) form an orthonormal basis for L₂([a, b] × [a, b]),
  k = Σ_{i,j=1}^{∞} (k, Φ_{ij}) Φ_{ij},

and

  (k, Φ_{ij}) = ∫_a^b ∫_a^b k(t, s) φ̄_i(t) φ_j(s) ds dt = ∫_a^b [ φ̄_i(t) ∫_a^b k(t, s) φ_j(s) ds ] dt = λ_j (φ_j, φ_i) = λ_j δ_{ij}.
Theorem 1.2 Suppose k ∈ L₂([a, b] × [a, b]) and k(t, s) = k̄(s, t) a.e. If {λ_k} is the basic system of eigenvalues of K, where K is the integral operator with kernel function k, then

  ∫_a^b ∫_a^b |k(t, s)|² ds dt = Σ_k λ_k².

The formula is the continuous analogue of the fact that if (a_{ij}) is an n × n self adjoint matrix with eigenvalues λ₁, …, λ_n, counted according to multiplicity, then

  Σ_{i,j=1}^{n} |a_{ij}|² = Σ_{i=1}^{n} λ_i².

Here k(t, s) is replaced by k(i, j) = a_{ij} and the integral is replaced by a sum.
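The matrix identity behind Theorem 1.2 is easy to verify numerically, since both sides equal the squared Hilbert-Schmidt norm. A sketch (the random 5×5 Hermitian matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2               # self adjoint matrix

lhs = (np.abs(A) ** 2).sum()           # sum of |a_ij|^2
rhs = (np.linalg.eigvalsh(A) ** 2).sum()  # sum of lambda_i^2
print(lhs, rhs)
```

The agreement reflects the unitary invariance of the Hilbert-Schmidt norm.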
  ∫_a^b ∫_a^b k(t, s) f(s) f̄(t) ds dt ≥ 0

Proof: The function k(t, t) is real valued. Indeed, the integral operator with kernel function k is positive and therefore self adjoint. Hence k(t, t) = k̄(t, t). Suppose k(t₀, t₀) < 0 for some t₀ ∈ [a, b]. It follows from the continuity of k that ℜ k(t, s) < 0 for all (t, s) in some square [c, d] × [c, d] containing (t₀, t₀). But then for g(s) = 1 if s ∈ [c, d] and zero otherwise,

  0 ≤ ∫_a^b ∫_a^b k(t, s) g(s) ḡ(t) ds dt = ℜ ∫_c^d ∫_c^d k(t, s) ds dt < 0,

a contradiction.
  h(t) = ∫_a^b k(t, s) φ(s) ds

satisfies

  |h(t) − h(t₀)| ≤ ∫_a^b |k(t, s) − k(t₀, s)| |φ(s)| ds ≤ ‖φ‖ ( ∫_a^b |k(t, s) − k(t₀, s)|² ds )^{1/2}.
Dini's Theorem 2.3 Let {f_n} be a sequence of real valued continuous functions on [a, b]. Suppose f₁(t) ≤ f₂(t) ≤ ⋯ for all t ∈ [a, b] and f(t) = lim_{n→∞} f_n(t) is continuous on [a, b]. Then {f_n} converges uniformly to f on [a, b].

  F_n = { t : f(t) − f_n(t) ≥ ε },  n = 1, 2, … .

  ∩_{n=1}^{N} F_n = F_N = ∅.
We are now ready to prove the series expansion for k which was discussed in Section 1.

Theorem 3.1 Let k be continuous on [a, b] × [a, b]. Suppose that for all f ∈ L₂([a, b]),

  ∫_a^b ∫_a^b k(t, s) f(s) f̄(t) ds dt ≥ 0.
If {φ_n}, {λ_n} is a basic system of eigenvectors and eigenvalues of the integral operator with kernel function k, then for all t and s in [a, b],

  k(t, s) = Σ_j λ_j φ_j(t) φ̄_j(s).

Proof: Let K be the integral operator with kernel function k. It follows from the hypotheses that K is compact, positive and λ_j = (Kφ_j, φ_j) ≥ 0. Schwarz's inequality applied to the sequences {√λ_j φ_j(t)} and {√λ_j φ_j(s)} yields

  Σ_{j=m}^{n} λ_j |φ_j(t) φ̄_j(s)| ≤ ( Σ_{j=m}^{n} λ_j |φ_j(t)|² )^{1/2} ( Σ_{j=m}^{n} λ_j |φ_j(s)|² )^{1/2}.   (3.1)

Moreover,

  Σ_j λ_j |φ_j(t)|² ≤ max_s k(s, s).   (3.2)

Let

  k_n(t, s) = k(t, s) − Σ_{j=1}^{n} λ_j φ_j(t) φ̄_j(s).

Since each φ_j is an eigenvector of K, it follows from Lemma 2.2 that φ_j is continuous, which implies that k_n is continuous. A straightforward computation verifies that

  ∫_a^b ∫_a^b k_n(t, s) f(s) f̄(t) ds dt = Σ_{j>n} λ_j |(f, φ_j)|² ≥ 0.

Since n was arbitrary, inequality (3.2) follows. For fixed t and ε > 0, (3.1) and (3.2) imply the existence of an integer N such that for n > m ≥ N,

  Σ_{j=m}^{n} λ_j |φ_j(t) φ̄_j(s)| ≤ εC,  s ∈ [a, b],
5.3 Mercer's Theorem
For f ∈ L₂([a, b]) and t fixed, the uniform convergence of the series in s and the continuity of each φ̄_j imply that

  k̃(t, s) = Σ_j λ_j φ_j(t) φ̄_j(s)

is continuous as a function of s and

  ∫_a^b [ k(t, s) − k̃(t, s) ] f(s) ds = (Kf)(t) − Σ_j λ_j (f, φ_j) φ_j(t).   (3.3)

If f ∈ Ker K = (Im K)^⊥, then since φ_j = (1/λ_j) Kφ_j ∈ Im K, we have that Kf = 0 and (f, φ_j) = 0. Thus the right side of (3.3) is zero. If f = φ_i, then the right side of (3.3) is λ_i φ_i(t) − λ_i φ_i(t) = 0. Thus for each t, k(t, ·) − k̃(t, ·) is orthogonal to L₂([a, b]) by IV.6.1(a). Hence k(t, s) = k̃(t, s) for each t and almost every s. But then k(t, s) = k̃(t, s) for every t and s since k(t, ·) and k̃(t, ·) are continuous. We have shown that

  k(t, s) = Σ_j λ_j φ_j(t) φ̄_j(s).

In particular,

  k(t, t) = Σ_j λ_j |φ_j(t)|².
j
The partial sums of this series form an increasing sequen ce of continuous func-
tions which converges pointwise to the continuous function k (t , t ) . Dini's theorem
asserts that this series converges uniforml y to k (t , t). Thus given e > 0, there
exists an integer N such that for n > m 2: N ,
n
L Ajl({Jj(t)1 2
< e
2
for all t E [a, b] .
j=m
This observation, together with (3.1) and (3.2), imply that for all n > m 2: Nand
all (t, s) E [a, b] x [a, b],
n
L Ajl({Jj(t)i{Jj(s) 1 ::::: «:
j =m
Hence L j Aj({Jj (t) i{Jj (s) converges absolutely and uniformly on [a , b] x [a , b].
The trace formula for finite matrices states that if (a_{ij}) is an n × n matrix with eigenvalues λ₁, …, λ_n, counted according to multiplicity, then Σ_{j=1}^{n} a_{jj} = Σ_{j=1}^{n} λ_j. The following theorem is the continuous analogue of the trace formula.

Theorem 4.1 Let k be continuous on [a, b] × [a, b]. Suppose that for all f ∈ L₂([a, b]),

  ∫_a^b ∫_a^b k(t, s) f(s) f̄(t) ds dt ≥ 0.

If K is the integral operator with kernel function k and {λ_j} is the basic system of eigenvalues of K, then

  Σ_j λ_j = ∫_a^b k(t, t) dt.

Indeed,

  ∫_a^b k(t, t) dt = Σ_j λ_j ‖φ_j‖² = Σ_j λ_j.  □

If, more generally, k is in L₂([a, b] × [a, b]) and k(t, s) = k̄(s, t) a.e., then Σ_j λ_j may diverge whereas Σ_j λ_j² < ∞ by Theorem 1.2.
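The trace formula has a transparent discrete counterpart: for a Nyström discretization of a positive continuous kernel, the sum of the matrix eigenvalues equals the quadrature approximation of ∫ k(t, t) dt. A sketch for k(t, s) = min(t, s) on [0, 1], where ∫₀¹ k(t, t) dt = 1/2 (the grid size is an arbitrary choice):

```python
import numpy as np

n = 400
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
# Nystrom discretization of the positive kernel k(t, s) = min(t, s)
K = np.minimum(t[:, None], t[None, :]) * h

eig_sum = np.linalg.eigvalsh(K).sum()   # equals trace(K) up to rounding
integral = (t * h).sum()                # midpoint rule for Int k(t, t) dt
print(eig_sum, integral)
```

For this kernel the midpoint sum gives exactly 1/2, matching the known eigenvalues 1/((j − 1/2)²π²), whose sum is 1/2.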
Exercises V
  k(t, s) = ∫_a^b g(x, t) ḡ(x, s) dx
  Σ_j λ_j(K) = ∫_a^b ∫_a^b |g(s, t)|² ds dt.
10. Generalize the results in exercises 8 and 9 to the case where g(t, s) is a Hilbert-Schmidt kernel which is L₂-continuous. This means that for any ε > 0, there exists a δ > 0 such that
form

  (Kφ)(t) = ∫_a^b A(t) B(s) φ(s) ds,
  k(x, y) = Σ_{k=0}^{∞} (1/(k+1)²) [ cos((k+1)x) sin(ky) − sin((k+1)x) cos(ky) ].
It is indeed fortunate that essentially all the important differential operators form a class of operators for which an extensive theory is developed. These are the closed operators which we now define.

Definition: Let H₁ and H₂ be Hilbert spaces. An operator A with domain D(A) ⊆ H₁ and range in H₂ is called a closed operator if it has the property that whenever {x_n} is a sequence in D(A) satisfying x_n → x in H₁ and Ax_n → y in H₂, then x ∈ D(A) and Ax = y.

Clearly, a bounded linear operator on a Hilbert space is closed.
The operator A is closed. To see this, we first note that Ker A = {0} and Im A = H. Indeed, given g ∈ H, take f(t) = ∫₀^t g(s) ds. (Recall that L₂([0, 1]) ⊆ L₁([0, 1]).) Then f ∈ D(A) and Af = g. Define A^{−1}g = f, g ∈ H.
Chapter VI Unbounded Operators on Hilbert Space
The operator A^{−1} is a bounded linear operator on H with range D(A), since the Schwarz inequality gives

  |(A^{−1}g)(t)| = | ∫₀^t g(s) ds | ≤ ‖g‖,  0 ≤ t ≤ 1.

Therefore, if f_n ∈ D(A), f_n → f and Af_n → h, then f_n = A^{−1}Af_n → A^{−1}h. Hence f = A^{−1}h ∈ D(A) and Af = h, which shows that A is closed.
Definition: Let H₁ and H₂ be Hilbert spaces and let A be a linear operator with domain D(A) ⊆ H₁ and range Im A ⊆ H₂. Suppose there exists a bounded linear operator A^{−1} mapping H₂ into H₁ with the properties AA^{−1}y = y for all y ∈ H₂ and A^{−1}Ax = x for all x ∈ D(A). In this case we say that A is invertible and A^{−1} is the inverse of A. Note that if A is invertible, then Ker A = {0} and Im A = H₂. Clearly, A cannot have more than one inverse.

The differential operator in the above example is invertible and its inverse is given by (A^{−1}g)(t) = ∫₀^t g(s) ds, g ∈ L₂([0, 1]).
The notation A(H₁ → H₂) signifies that A is a linear operator with domain in Hilbert space H₁ and range in Hilbert space H₂.

Theorem 1.1 Let A(H₁ → H₂) be invertible. Then A is a closed operator.

We shall show in Theorem XII.4.2 that the converse theorem also holds, i.e., if the operator A is closed with Ker A = {0} and Im A = H₂, then A is invertible.
One of the main motivations for the development of a theory of integral operators is that certain differential equations with prescribed boundary conditions can be reduced to problems involving integral operators.
6.2 The Second Derivative as an Operator
  y(x) = −∫₀^x ∫₀^t f(s) ds dt + C₁x + C₂,   (2.3)

where C₁ and C₂ are determined by the boundary conditions y(0) = y(1) = 0. Thus C₂ = 0 and

  y(x) = −∫₀^x ∫_s^x f(s) dt ds + x ∫₀^1 ∫_s^1 f(s) dt ds
       = ∫₀^x (s − x) f(s) ds + ∫₀^1 x(1 − s) f(s) ds.   (2.5)

Hence

  y(x) = ∫₀^1 g(x, s) f(s) ds,   (2.6)

where

  g(x, s) = s(1 − x) for 0 ≤ s ≤ x,  and  g(x, s) = x(1 − s) for x ≤ s ≤ 1.   (2.7)
are absolutely continuous on [0, 1] and have second order derivatives which are in L₂([0, 1]). Note that y″(x) exists for almost every x since y′ is absolutely continuous. Define Ly = −y″. It is clear from (2.2) that L is injective.

Take G to be the integral operator with kernel function g defined in (2.7). Since g is continuous on [0, 1] × [0, 1] and g(x, s) = g(s, x), G is a compact self adjoint operator on L₂([0, 1]). From the discussion above, y = Gf satisfies (2.1) a.e. for every f ∈ L₂([0, 1]). Thus

  LGf = f.   (2.8)

Since L is also injective, we have that L is invertible with L^{−1} = G. Thus L is a closed linear operator by Theorem 1.1. It follows that φ is an eigenvector of L with eigenvalue λ if and only if φ is an eigenvector of G with eigenvalue 1/λ.
Thus,
since G is compact self adjoint and Ker G = (0), L2 ([0, 1]) has an orthonormal
basis consisting of eigenvectors of L. The eigenvalues of L are those real A for
which
y" +Ay = 0 (2.9)
and
y(O) = y(l) = 0 (2.10)
has a non-trivial solution. Since the general solution to (2.9) is
y = ax + b, \qquad \lambda = 0
y = a\cos\sqrt{\lambda}\,x + b\sin\sqrt{\lambda}\,x, \qquad \lambda > 0
y = ae^{\sqrt{-\lambda}\,x} + be^{-\sqrt{-\lambda}\,x}, \qquad \lambda < 0,
it follows from the boundary conditions (2.10) that the eigenvalues are \lambda_n = n^2\pi^2, n = 1, 2, \ldots, with b_n \sin n\pi x, b_n \ne 0, the corresponding eigenvectors. The eigenvectors \sqrt{2}\sin n\pi x, n = 1, 2, \ldots, therefore form an orthonormal basis for L_2([0, 1]).
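The eigenvalue claim can be checked numerically (my own sketch, using a standard second-difference discretization of −y″ rather than anything from the text): the smallest eigenvalues of the discrete operator should approach n²π².

```python
import numpy as np

# Dirichlet discretization of -y'' on [0, 1] with n interior steps
n = 400
h = 1.0 / n
main = 2.0 * np.ones(n - 1)
off = -1.0 * np.ones(n - 2)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
mu = np.sort(np.linalg.eigvalsh(L))

for k in range(1, 4):                       # compare with n^2 pi^2
    exact = (k * np.pi) ** 2
    assert abs(mu[k - 1] - exact) / exact < 1e-3
```

The discrete eigenvalues are (4/h²)sin²(kπh/2), so the relative error shrinks like h².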
Similarly, if we change the domain of L by replacing the boundary conditions
(2.10) by y(-\pi) = y(\pi), y'(-\pi) = y'(\pi), then the eigenvalues of L are those \lambda for which the boundary value problem
y'' + \lambda y = 0, \qquad y(-\pi) = y(\pi), \quad y'(-\pi) = y'(\pi)
has a non-trivial solution. It follows that \lambda = n^2, n = 0, 1, \ldots, are the eigenvalues of L with a_n\cos nt + b_n\sin nt (|a_n|^2 + |b_n|^2 \ne 0) the corresponding eigenvectors. Thus \left\{\frac{1}{\sqrt{2\pi}}\right\} \cup \left\{\frac{\cos nt}{\sqrt{\pi}}, \frac{\sin nt}{\sqrt{\pi}}\right\}_{n=1}^{\infty} is an orthonormal system of eigenvectors of L which we know forms an orthonormal basis for L_2([-\pi, \pi]).
norm on its domain. We need the following preliminary results. Let H_1 and H_2 be Hilbert spaces. The product space H_1 \times H_2 is the set of all ordered pairs (x, y) with x \in H_1 and y \in H_2. The operations of addition and scalar multiplication are defined in the usual way by
(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2), \qquad \alpha(x, y) = (\alpha x, \alpha y).
Under these operations, H_1 \times H_2 is a vector space. It is easy to see that H_1 \times H_2 is a Hilbert space with respect to the inner product
\langle (x_1, y_1), (x_2, y_2) \rangle = \langle x_1, x_2 \rangle + \langle y_1, y_2 \rangle. \qquad (3.1)
Given an operator A(H_1 \to H_2), the graph G(A) of A is the subspace of H_1 \times H_2 consisting of the ordered pairs (x, Ax), x \in D(A). It follows readily from the definition of the norm on H_1 \times H_2 that A is a closed operator if and only if its graph G(A) is a closed subspace of H_1 \times H_2.
On the domain D(A) we define the inner product \langle\,\cdot\,,\,\cdot\,\rangle_A as follows:
\langle u, v \rangle_A = \langle u, v \rangle + \langle Au, Av \rangle, \qquad u, v \in D(A).
The corresponding norm is
\|u\|_A = (\|u\|^2 + \|Au\|^2)^{1/2},
which is exactly the norm of the pair (u, Au) \in G(A). We call the norm \|\cdot\|_A the graph norm on D(A). If A is closed, then the inner product space D(A) with norm \|\cdot\|_A is complete. Indeed, suppose \|x_n - x_m\|_A \to 0 as n, m \to \infty. Then \|x_n - x_m\| \to 0 and \|Ax_n - Ax_m\| \to 0 as n, m \to \infty. Hence there exist x \in H_1 and y \in H_2 such that x_n \to x and Ax_n \to y. Since A is closed, x \in D(A) and Ax = y. Therefore
\|x_n - x\|_A \to 0.
This graph norm allows one to reduce theorems and problems for closed operators to corresponding results for bounded linear operators. For example, let the closed operator A have the property that it is invertible when considered as a (bounded)
208 Chapter VI. Unbounded Operators on Hilbert Space
map from the Hilbert space (D(A) : \|\cdot\|_A) onto H_2. Suppose B(H_1 \to H_2) has domain D(B) \supseteq D(A) and satisfies
\|f\|_L = \left(\int_0^1 |f(t)|^2\,dt\right)^{1/2} + \left(\int_0^1 |f''(t)|^2\,dt\right)^{1/2}, \qquad f \in D(L).
The operator L is a bounded linear operator on the Hilbert space (D(L) : \|\cdot\|_L) with range L_2([0, 1]).
In this section we extend the concept of the adjoint of a bounded operator to the
adjoint of an unbounded operator.
Definition: Let A(HI -+ H 2) have domain V(A) dense in HI, i.e., the closure
V(A) = HI . We say that A is densely defined. The adjoint A*(H2 -+ HI) is
defined as follows :
V(A*) = {y E H21 there exists z E HI such that (Ax , y) = (x, z)
for all x E V(A)}
Obviously 0 \in D(A^*). The vector z with the above property is unique, since \langle x, z \rangle = \langle x, u \rangle for all x \in D(A) implies (z - u) \perp \overline{D(A)} = H_1. Hence z - u = 0. We define A^*y = z. Thus
\langle Ax, y \rangle = \langle x, z \rangle = \langle x, A^*y \rangle, \qquad x \in D(A), \; y \in D(A^*).
Note that if A is bounded on HI, then A * is the adjoint defined in Section II.11.
It is easy to see that A * is a linear operator.
The space C_0^\infty([0, 1]) of infinitely differentiable functions which vanish outside a closed subinterval of the open interval (0, 1) is well known to be dense in L_2([0, 1]) with respect to the L_2-norm. Since C_0^\infty([0, 1]) \subseteq D(A), we have that A is densely defined. We shall show that A^* = S, where
6.4 Adjoint Operators 209
\langle Af, g \rangle = \int_0^1 f'(t)\overline{g(t)}\,dt = \langle f, A^*g \rangle = \int_0^1 f(t)\overline{h(t)}\,dt. \qquad (4.3)
Integration by parts gives
\int_0^1 f(t)\overline{h(t)}\,dt = -\int_0^1 \overline{(H(t) + C)}\,f'(t)\,dt, \qquad (4.4)
where C is an arbitrary constant and H(t) = \int_0^t h(s)\,ds. Then by (4.3) and (4.4),
0 = \int_0^1 \overline{(g(t) + H(t) + C)}\,f'(t)\,dt, \qquad f \in D(A). \qquad (4.5)
Let
f_0(t) = \int_0^t (g(s) + H(s) + C_0)\,ds, \qquad (4.6)
so that
0 = \int_0^1 |g(t) + H(t) + C_0|^2\,dt.
For u \in D(A) and v absolutely continuous with v' \in L_2([0, 1]),
\langle Au, v \rangle = \int_0^1 u'(t)\overline{v(t)}\,dt = -\int_0^1 u(t)\overline{v'(t)}\,dt = \langle u, Sv \rangle.
Therefore v \in D(A^*) and A^*v = Sv = -v'.
\langle Ax, y \rangle = \lim_n \langle Ax, y_n \rangle = \lim_n \langle x, A^*y_n \rangle = \langle x, v \rangle.
Since the operator S in the above example is A *, Theorem 4.1 shows that S is a
closed operator.
Theorem 4.2 Let A(H_1 \to H_2) be a densely defined invertible operator. Then A^* is invertible and
(A^*)^{-1} = (A^{-1})^*.
Proof: Given v \in D(A^*) and y \in H_2, it follows from the boundedness of A^{-1} that
\langle (A^{-1})^*A^*v, y \rangle = \langle A^*v, A^{-1}y \rangle = \langle v, AA^{-1}y \rangle = \langle v, y \rangle.
Since y is arbitrary in H_2,
(A^{-1})^*A^*v = v, \qquad v \in D(A^*). \qquad (4.7)
Now for x \in D(A) and w \in H_1, it follows from the boundedness of (A^{-1})^* on H_1 that
\langle Ax, (A^{-1})^*w \rangle = \langle A^{-1}Ax, w \rangle = \langle x, w \rangle.
Thus, by definition of A^*, (A^{-1})^*w \in D(A^*) and
A^*(A^{-1})^*w = w. \qquad (4.8)
Since (A^{-1})^* \in \mathcal{L}(H_1, H_2), we have from (4.7) and (4.8) that A^* is invertible and (A^*)^{-1} = (A^{-1})^* \in \mathcal{L}(H_1, H_2). \square
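A finite-dimensional illustration of Theorem 4.2 (my own sketch; for matrices the adjoint is the conjugate transpose and everything is bounded):

```python
import numpy as np

# For an invertible matrix A, (A*)^{-1} = (A^{-1})*.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

lhs = np.linalg.inv(A.conj().T)        # (A*)^{-1}
rhs = np.linalg.inv(A).conj().T        # (A^{-1})*
assert np.allclose(lhs, rhs)
```

A random complex matrix is invertible with probability one, and the seed makes the check deterministic.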
Proof: Suppose A is self adjoint. Then by Theorem 4.2, (A^{-1})^* = (A^*)^{-1} = A^{-1}, i.e., A^{-1} is self adjoint. Conversely, if A^{-1} is self adjoint, then D(A^*) = Im (A^*)^{-1} = Im (A^{-1})^* = Im A^{-1} = D(A). Given y \in D(A^*) = D(A),
(A^{-1})^*A^*y = y
and
(A^{-1})^*Ay = (A^{-1})Ay = y.
Since (A^{-1})^* is 1-1, Ay = A^*y. \square
It was shown that the second order differential operator L discussed in Section 2
is invertible with inverse L -I, a compact self adjoint operator.
Hence L is self adjoint by Theorem 4.4.
6.5 Sturm-Liouville Operators 211
\frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + q(x)y = f(x) \qquad (i)
Ly = \frac{d}{dx}\left(p(x)\frac{dy}{dx}\right) + q(x)y.
Assume that zero is not an eigenvalue of L, i.e., L is injective. Now for f = 0, there exist real valued functions y_1 \not\equiv 0 and y_2 \not\equiv 0 such that y_1, y_2 satisfy (i), y_1' and y_2' are continuous, y_1 satisfies the first condition in (ii), and y_2 satisfies the second condition in (ii) ([DS2], XIII.2.32). Let
y(x) = \int_a^b g(x, s)f(s)\,ds, \qquad (5.3)
where, with g as in (5.1),
y(x) = y_2(x)\int_a^x c\,y_1(s)f(s)\,ds + y_1(x)\int_x^b c\,y_2(s)f(s)\,ds.
Actually, (5.4) holds for all x. To see this, let h = y_1'Y_2 + y_2'Y_1, where Y_1(x) = \int_a^x c\,y_1(s)f(s)\,ds and Y_2(x) = \int_x^b c\,y_2(s)f(s)\,ds, and let
\tilde{y}(x) = y(a) + \int_a^x h(t)\,dt.
Now y' = \tilde{y}' a.e., and y and \tilde{y} are absolutely continuous. This follows from the absolute continuity of y_i and Y_i, i = 1, 2. Hence
\int_a^x (y'(t) - \tilde{y}'(t))\,dt = 0,
or y = \tilde{y}. Thus (5.4) holds for all x, which implies that y' is absolutely continuous. Moreover,
\alpha_1 y(a) + \alpha_2 y'(a) = (\alpha_1 y_2(a) + \alpha_2 y_2'(a))Y_1(a) + (\alpha_1 y_1(a) + \alpha_2 y_1'(a))Y_2(a) = 0.
Similarly,
\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + \left(q(x) - \frac{1}{\lambda}\right)y = 0. \qquad (5.5)
Theorem 5.1 Let L be the Sturm-Liouville operator with Ker L = \{0\}. Then L is a self adjoint invertible operator with compact inverse L^{-1} = G, where G is the integral operator on L_2([a, b]) with kernel function g defined in (5.1). In addition, dim Ker(\lambda I - L) \le 1, \lambda \in \mathbb{C}, and L has infinitely many eigenvalues.
If \varphi_j, \|\varphi_j\| = 1, is an eigenvector of L corresponding to \mu_j, then \varphi_1, \varphi_2, \ldots is an orthonormal basis for L_2([a, b]).
A vector y \in L_2([a, b]) is in the domain of L if and only if
\sum_{j=1}^{\infty} |\mu_j|^2 |\langle y, \varphi_j \rangle|^2 < \infty.
For such vectors,
Ly = \sum_{j=1}^{\infty} \mu_j \langle y, \varphi_j \rangle \varphi_j.
Proof: If KerL = (0), then the corollary follows from the theorem above and
Theorems V.l.l, IV.8.2.
Suppose zero is an eigenvalue of L. Choose a real number r such that r is not
an eigenvalue of L. The existence of such an r may be seen as follows.
Integrating by parts and making use of(ii), a straight forward computation yields
(a) there exists an orthonormal basis \{\varphi_1, \varphi_2, \ldots\} of H consisting of eigenvectors of A. If \mu_1, \mu_2, \ldots are the corresponding eigenvalues, then each \mu_j is real and |\mu_j| \to \infty. The number of repetitions of \mu_j in the sequence \mu_1, \mu_2, \ldots is finite and equals dim Ker(\mu_j I - A).
(b) D(A) = \{x \in H \mid \sum_j |\mu_j|^2 |\langle x, \varphi_j \rangle|^2 < \infty\}
(c) Ax = \sum_j \mu_j \langle x, \varphi_j \rangle \varphi_j, \qquad x \in D(A).
Then the series \sum_j \mu_j \langle x, \varphi_j \rangle \varphi_j converges. Now A^{-1} is bounded and A^{-1}\varphi_j = \frac{1}{\mu_j}\varphi_j. Hence
On the other hand, if x \in D(A), then since K = A^{-1} is self adjoint and K\varphi_j = \lambda_j\varphi_j, we have
\square
Exercises VI
T*y = y, Y E V(T*).
T_1 f = if',
D(T_2) = D(T_1) \cap \{f \mid f(0) = f(1)\},
T_2 f = if',
D(T_3) = D(T_1) \cap \{f \mid f(0) = f(1) = 0\},
T_3 f = if'.
Prove that
a) T_1^* = T_3
b) T_2^* = T_2
10. Find the eigenvalues, eigenvectors and the Green's function for the
following Sturm-Liouville operators.
(a) Ly = -y''; \quad y'(0) = 0, \; y(1) = 0
(b) Ly = -y''; \quad y(0) = 0, \; y(1) + y'(1) = 0
(c) Ly = -y''; \quad y(0) = 0, \; y'(1) = 0
(d) Ly = -y'' - y; \quad y'(0) = 0, \; y(1) = 0.
11. Given the Sturm-Liouville operator
equation
\varphi(x) = f(x) + \int_0^x k(x, s)\varphi(s)\,ds,
where
k(x, s) = -\sum_{j=0}^{n-1} \frac{a_j(x)}{j!}(x - s)^j.
given by
y(t) = \int_0^t g(t, s)f(s)\,ds,
where
g(t, s) = \frac{y_1(s)y_2(t) - y_1(t)y_2(s)}{y_1(s)y_2'(s) - y_1'(s)y_2(s)},
with y_1, y_2 any basis for Ker L.
Chapter VII
Oscillations of an Elastic String
The aim of this chapter is to describe the motion of a vibrating string in terms of
the eigenvalues and eigenvectors of an integral operator.
Let us consider an elastic string whose ends are fastened at 0 and \ell. We assume that the length \ell of the segment is much greater than the natural length of the string. This means that the string is always under large tension.
When a vertical force F is applied to the string, each point x on the string has
a displacement u (x ) from the segment. We shall determine u (x ), first when F is
concentrated at a point, then when F is continuously distributed. Finally, we study
the motion of the string by regarding acceleration as an inertial force.
A vertical force F (y) is appl ied at the point y causing a displacement u (x) at
each point x of the string. Let d = u (y) (Figure 1). The relative displacement of
a point on the string from a fixed end is independent of the point. Thus
Figure 1
\frac{u(x)}{x} = \frac{d}{y}, \qquad 0 \le x \le y
\frac{u(x)}{\ell - x} = \frac{d}{\ell - y}, \qquad y \le x \le \ell. \qquad (1.1)
Let us suppose that the string is subject to small displacements and that the tension T is constant. Since the sum of F and the vertical components of T is zero (the tension acts in the directions of C0 and C\ell), we have
F = T\sin\theta + T\sin\varphi.
From the assumption that the displacements are small, it follows that F is approximately
T\tan\theta + T\tan\varphi = T\,\frac{d\ell}{y(\ell - y)}.
Solving for d in (1.1) and substituting this value of d in the above equation yields
u(x) = k(x, y)F(y), \qquad (1.2)
where
k(x, y) = \begin{cases} \dfrac{x(\ell - y)}{T\ell}, & 0 \le x \le y \\ \dfrac{y(\ell - x)}{T\ell}, & y \le x \le \ell. \end{cases} \qquad (1.3)
Recall that for T = \ell = 1, k(x, y) is the Green's function corresponding to -\frac{d^2}{dx^2} (cf. (2.7) in Section VI.2). We note that the displacement of the point x arising
from the action of a force applied at the point y is equal to the displacement of
the point y arising from the same force applied at the point x. In other words , if
F(x) = F(y), then u(x) = u(y) . This can be seen from (1.2) and (1.3) since
k(x, y) = k(y, x) and therefore
Let us consider the more general situation where the string is subjected to a con-
tinuously distributed vertical force with density distribution f (y). The force acting
on the segment between y and y + ~y is approximately f(y)~y (~y is "small").
Thus if we take points 0 < Yl < Y2 < . . . < Yn = e, with ~y = Yi+l - Yi,
then the force F(Yi) at Yi is approximately f(Yi)~Y. Now each F(Yi) gives rise
to a displacement at x approximately equal to k(x, Yi)f(Yi)~y (Equation (1.2)) .
The displacement u(x) is approximately the sum L::7=1 k(x, Yi )f(Yi)~y of these
displacements. Ifwe let ~y -+ 0, we arrive at the formula
u(x) = 1 f
k(x , y)f(y) dy. (1.4)
Let us start with the assumption that there is no external force acting on the string. If \rho^2(x) is the density of the string, then by Newton's second law of motion (force
7.2 Basic Harmonic Oscillations 221
is the product of mass and acceleration), the inertial force acting on the section of
the string between y and y + /:!,. y is
(2.2)
It follows from (2.1) and (2.2) that the density distribution f ey ) is given by
(2.3)
U(x , t ) = u (x ) sinwt
(2.4)
or
(2.5)
where K is the integral operator on L 2[0, e] with kernel function k defined in (1.3)
of Section 1.
In view of (2.5), our immediate goal is to determine the eigenvalues and eigenvectors of K. It follows from the discussion in VI.2 that \varphi is an eigenvector of K with eigenvalue \lambda if and only if \lambda \ne 0 and
\varphi'' + \frac{1}{\lambda T}\varphi = 0, \qquad \varphi(0) = \varphi(\ell) = 0. \qquad (2.6)
The corresponding eigenvectors are
b_n \sin\frac{n\pi x}{\ell}, \qquad b_n \ne 0. \qquad (2.10)
In particular,
(2.11 )
Let us assume that the density \rho^2 of the string is constant. Then by (2.3) in Section 2 and (3.1), the density distribution f(y) of the sum of the external force and inertial force is given by
where
g(x) = \int_0^{\ell} k(x, y)h(y)\,dy.
Hence
(3.2)
7.3 Harmonic Oscillations with an External Force 223
u(x) = g(x) + \sum_{n=1}^{\infty} \frac{\lambda_n}{(\rho\omega)^{-2} - \lambda_n}\langle g, \varphi_n \rangle \varphi_n(x),
where \lambda_n and \varphi_n are defined in (2.9) and (2.11) of Section 2. The series converges uniformly and absolutely on [0, \ell]. This follows from the uniform and absolute convergence of \sum_{n=1}^{\infty} \lambda_n \langle g, \varphi_n \rangle \varphi_n(x), which is guaranteed by Theorem V.1.1 and the boundedness of the sequence \{((\rho\omega)^{-2} - \lambda_n)^{-1}\}.
Thus
u(x) = g(x) + \sum_{n=1}^{\infty} a_n \sin\frac{n\pi x}{\ell}, \qquad (3.3)
where
a_n = \frac{2}{\ell}\,\frac{\lambda_n}{(\rho\omega)^{-2} - \lambda_n}\int_0^{\ell} g(y)\sin\frac{n\pi y}{\ell}\,dy.
We have seen that harmonic oscillations which arise from an external force with a density distribution of the form h(y)\sin\omega t are decomposable into basic harmonic oscillations.
Since the operator K with kernel function k defined in (1.3) of Section 1 was shown to be positive, it follows from Theorem V.3.1 that
k(x, y) = \sum_{n=1}^{\infty} \lambda_n \varphi_n(x)\varphi_n(y) = \frac{2\ell}{T\pi^2}\sum_{n=1}^{\infty} \frac{1}{n^2}\sin\frac{n\pi x}{\ell}\sin\frac{n\pi y}{\ell}. \qquad (3.4)
The series converges uniformly and absolutely on [0, \ell] \times [0, \ell].
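A numerical check of the expansion (3.4) (my own sketch, with T = \ell = 1, where the kernel reduces to x(1−y) for x ≤ y and y(1−x) for y ≤ x):

```python
import numpy as np

def k(x, y):
    # kernel (1.3) with T = l = 1
    return x * (1 - y) if x <= y else y * (1 - x)

def k_series(x, y, terms=2000):
    # partial sum of (3.4): (2/pi^2) * sum n^{-2} sin(n pi x) sin(n pi y)
    n = np.arange(1, terms + 1)
    return (2 / np.pi**2) * np.sum(
        np.sin(n * np.pi * x) * np.sin(n * np.pi * y) / n**2)

for (x, y) in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.3)]:
    assert abs(k(x, y) - k_series(x, y)) < 1e-3
```

The tail of the series is bounded by (2/π²)·Σ_{n>N} n⁻² ≈ (2/π²)/N, so 2000 terms are comfortably enough for the tolerance used.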
For a thorough and rigorous treatment of the equations describing large vibra-
tions of strings, as well as additional references, we refer the reader to [A].
Chapter VIII
Operational Calculus with Applications
The spectral theory which was studied in the preceding chapters provides a means
for the development of a theory of functions of a compact selfadjoint operator. We
now present this theory with applications to a variety of problems in differential
equations.
Ax = \sum_k \lambda_k \langle x, \varphi_k \rangle \varphi_k, \qquad (1.1)
A^2x = \sum_k \lambda_k^2 \langle x, \varphi_k \rangle \varphi_k.
For any polynomial p(z) = \sum_{k=0}^{n} a_k z^k, it is natural to define p(A) = \sum_{k=0}^{n} a_k A^k. Therefore it follows from (1.1) that
Let \sigma(A) be the subset of \mathbb{C} consisting of zero and the eigenvalues of A. Let f be a complex valued function which is bounded on \sigma(A). Guided by (1.2), we define the operator f(A) on H by
f(A)x = f(0)P_0x + \sum_k f(\lambda_k)\langle x, \varphi_k \rangle \varphi_k,
where P_0 is the orthogonal projection onto Ker A.
The operator f(A) does not depend on the choice of the eigenvectors \{\varphi_k\}. Indeed, let \mu_1, \mu_2, \ldots be the distinct non-zero eigenvalues of A and let P_n be the orthogonal projection onto Ker(\mu_n I - A). Suppose \varphi_j, \varphi_{j+1}, \ldots, \varphi_k is a basis for Ker(\mu_n I - A). Then
P_nx = \sum_{i=j}^{k} \langle x, \varphi_i \rangle \varphi_i
and
\|f(A)x\|^2 = |f(0)|^2\|P_0x\|^2 + \sum_k |f(\lambda_k)|^2 |\langle x, \varphi_k \rangle|^2
and
Theorem 1.1 Suppose f and g are complex valued functions which are bounded on \sigma(A). Then
(i) (\alpha f + \beta g)(A) = \alpha f(A) + \beta g(A), \quad \alpha, \beta \in \mathbb{C}.
(ii) (fg)(A) = f(A)g(A).
(iii) If \{f_n\} is a sequence of bounded complex valued functions which converges uniformly on \sigma(A) to the bounded function f, then
\|f_n(A) - f(A)\| \to 0.
8.1 Functions ofa Compact SelfAdjoint Operator 227
and
Property (ii) follows immediately from (1.8), (1.9) and (1.10). (iii) By (i) and Equation (1.5),
(1.13)
(\mu I - A)^{-1}x = g(A)x = \frac{1}{\mu}P_0x + \sum_k \frac{1}{\mu - \lambda_k}\langle x, \varphi_k \rangle \varphi_k.
By Equation (1.5),
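The functional calculus is easy to exercise in finite dimensions (a sketch of mine, not from the text): defining f(A) through the eigenvalues of a Hermitian matrix, the choice g(λ) = 1/(μ − λ) should reproduce the resolvent (μI − A)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                      # finite-dimensional self-adjoint A
lam, phi = np.linalg.eigh(A)           # columns of phi are eigenvectors

def f_of_A(f, x):
    # f(A)x = sum_k f(lam_k) <x, phi_k> phi_k
    coeffs = phi.T @ x
    return phi @ (f(lam) * coeffs)

mu = np.max(np.abs(lam)) + 1.0         # mu is not an eigenvalue of A
x = rng.standard_normal(5)
resolvent = np.linalg.solve(mu * np.eye(5) - A, x)
assert np.allclose(f_of_A(lambda l: 1.0 / (mu - l), x), resolvent)
assert np.allclose(f_of_A(lambda l: np.ones_like(l), x), x)   # f = 1 gives I
```

In finite dimensions every vector lies in the span of the eigenvectors, so the P₀ term of the general formula is absorbed into the sum.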
\|B\| = \sup\{|\beta + \lambda| : \lambda \text{ an eigenvalue of } A\} = \sup\{|\mu| : \mu \text{ an eigenvalue of } B\}. \qquad (1.16)
B - \beta P_0 - \sum_{j=1}^{N} \eta_j Q_j,
where
(1.18)
convergence is with respect to the norm on L_2([a, b] \times [a, b]). We shall prove the following more general result.
Let \{\psi_n\} be an orthonormal basis for H = L_2([a, b]). If A \in \mathcal{L}(H) and \sum_{i,j} |\langle A\psi_i, \psi_j \rangle|^2 < \infty, then A is the integral operator with kernel function
\langle Ku, v \rangle = \langle Au, v \rangle,
\lambda_n = \int_{-\pi}^{\pi} h(s)e^{-ins}\,ds, \qquad n = 0, \pm 1, \ldots
Since \{\varphi_n\} is an orthonormal basis for L_2([-\pi, \pi]), \{\varphi_n\}, \{\lambda_n\} is a basic system of eigenvectors and eigenvalues of K and Ker K = \{0\}. Hence for any positive integer p, equations (1.17) and (1.18) applied to f(\lambda) = \lambda^p give
(K^pv)(t) = \int_{-\pi}^{\pi} k_p(t - s)v(s)\,ds \quad \text{a.e.},
where
k_p(t) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} \lambda_n^p e^{int} \qquad (\text{convergence in } L_2([-\pi, \pi])).
y'(t) = \lim_{h \to 0}\frac{y(t + h) - y(t)}{h} \in H.
Though the main results in this section are valid for an arbitrary operator A E
£(H), we shall assume that A is compact and self adjoint. This enables us to
8.2 Differential Equations in Hilbert Spa ce 231
Theorem 2.1 Let A \in \mathcal{L}(H) be compact and self adjoint with a basic system of eigenvectors \{\varphi_k\} and eigenvalues \{\lambda_k\}. For each x \in H, there exists a unique solution to
y'(t) = Ay(t), \quad -\infty < t < \infty, \qquad y(0) = x.
The solution is
y(t) = e^{tA}x = P_0x + \sum_k e^{\lambda_k t}\langle x, \varphi_k \rangle \varphi_k.
Proof: From Theorem 1.1 and Equation (1.13) in Section 1, it follows that if y(t) = e^{tA}x, then y'(t) = Ay(t) and y(0) = x. If v(t) is another solution, put w(t) = y(t) - v(t). Then
w'(t) = Aw(t), \qquad w(0) = 0,
and
w_k(0) = \langle w(0), \varphi_k \rangle = \langle 0, \varphi_k \rangle = 0.
But this implies that y(t) - v(t) = w(t) = 0. Indeed, for any z \in H,
\frac{d}{dt}\langle w(t), z \rangle = \langle w'(t), z \rangle = 0.
y(t) = P_0x + \sum_k e^{\lambda_k t}\langle x, \varphi_k \rangle \varphi_k
satisfies
y'(t) = Ay(t), \quad -\infty < t < \infty,
y(0) = x.
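A finite-dimensional sanity check of the spectral formula for e^{tA}x (my own sketch; the derivative is checked by a central difference rather than symbolically):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # self-adjoint A
lam, phi = np.linalg.eigh(A)
x = rng.standard_normal(4)

def y(t):
    # y(t) = sum_k e^{lam_k t} <x, phi_k> phi_k  (full eigenbasis, so no P0 term)
    return phi @ (np.exp(lam * t) * (phi.T @ x))

t, h = 0.7, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)   # numerical y'(t)
assert np.allclose(deriv, A @ y(t), atol=1e-5)
assert np.allclose(y(0.0), x)
```

The central difference agrees with Ay(t) to within the truncation error of order h², confirming y′ = Ay and y(0) = x for this sketch.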
\frac{\partial\varphi}{\partial t}(t, x) = \int_a^b k(x, s)\varphi(t, s)\,ds, \quad -\infty < t < \infty, \; a \le x \le b, \qquad (4.1)
y(t) = \varphi(t, \cdot).
That is to say, for each t, y(t) is the function \varphi(t, x) considered as a function of x. Let K be the integral operator on L_2([a, b]) with kernel function k. Then (4.1) can be written in the form
y'(t) = Ky(t). \qquad (4.2)
y(0) = g. \qquad (4.3)
\varphi(t, x) = (e^{tK}g)(x) = (P_0g)(x) + \sum_{k=1}^{\infty} e^{\lambda_k t}\langle g, \varphi_k \rangle \varphi_k(x)
= g(x) + \sum_{k=1}^{\infty}(e^{\lambda_k t} - 1)\langle g, \varphi_k \rangle \varphi_k(x), \qquad (4.4)
converges uniformly and absolutely on [a, b]. Thus, for any finite interval J, the series in (4.4) converges uniformly on J \times [a, b] since the mean value theorem implies
|e^{\lambda_k t} - 1| \le |\lambda_k|C, \qquad 1 \le k,
for some constant C and all t \in J. If we differentiate the series in (4.4) termwise with respect to t, we get the series \sum_{k=1}^{\infty} \lambda_k e^{\lambda_k t}\langle g, \varphi_k \rangle \varphi_k(x), which converges uniformly on J \times [a, b]. Hence \frac{\partial\varphi}{\partial t}(t, x) exists and
\frac{\partial\varphi}{\partial t}(t, x) = \sum_{k=1}^{\infty} \lambda_k e^{\lambda_k t}\langle g, \varphi_k \rangle \varphi_k(x),
which is easily seen to equal the right side of (4.1). Obviously, \varphi(0, x) = g(x).
Exercises VIII
(b) A : L_2[-\pi, \pi] \to L_2[-\pi, \pi];
(c) A : L_2[-\pi, \pi] \to L_2[-\pi, \pi]; \quad (A\varphi)(t) = \int_{-\pi}^{\pi}(t - s)^2\varphi(s)\,ds
(d) A : L_2[-\pi, \pi] \to L_2[-\pi, \pi]; \quad (A\varphi)(t) = \int_{-\pi}^{\pi}\cos t(t - s)\varphi(s)\,ds.
2. For each of the operators A in problem 1, check that e^{iA} = \cos A + i\sin A and \cos^2 A + \sin^2 A = I. Which is true, \sin 2A = 2\cos A\sin A or \sin 2A = 2\sin A\cos A?
3. Solve the following integro-differential equations with the general boundary
condition
y(t) = \cos(t\sqrt{A})\,y_0 + \frac{\sin t\sqrt{A}}{\sqrt{A}}\,x_0,
where \frac{\sin t\sqrt{A}}{\sqrt{A}} is the operator f(A) with f(\lambda) = \frac{\sin t\sqrt{\lambda}}{\sqrt{\lambda}} for \lambda > 0 and f(0) = t.
5. Solve the following equations with boundary conditions
\varphi(0, x) = 0, \qquad \varphi'(0, x) = f(x) \in L_2[-\pi, \pi]
x - Ax = y. (1.2)
x_1 = y + Ax_0
x_2 = y + Ax_1 = y + Ay + A^2x_0
\lambda x + Kx = y \qquad (1.3)
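The successive approximations x₁, x₂, … above can be tried out directly (a sketch of mine, in finite dimensions): when ‖A‖ < 1 the iteration x_{n+1} = y + Ax_n converges geometrically to the unique solution of x − Ax = y.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5))
A = M / (2 * np.linalg.norm(M, 2))   # scaled so the operator norm is 1/2
y = rng.standard_normal(5)

x = np.zeros(5)
for _ in range(120):                 # x_{n+1} = y + A x_n
    x = y + A @ x

assert np.allclose(x - A @ x, y)     # limit solves x - Ax = y
```

The error after n steps is bounded by ‖A‖ⁿ‖x₀ − x‖, here (1/2)ⁿ, which is why so few iterations suffice.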
238 Chapter IX Solving Linear Equations by Iterative Methods
Lemma 2.1 Given A E £(11.) and y E 11., suppose that the equation x - Ax = y
has a solution . Given XQ E 11., define
Suppose \|A\| \le 1 and Ker(I + A) = \{0\}. Given x \in H, there exists a unique u \in Ker(I - A) and a unique v \in Ker(I - A)^{\perp} such that x = u + v. Let P be the orthogonal projection onto Ker(I - A). Then for n = 1, 2, \ldots,
and
\|A^nx - Px\| = \|A^nv\| \le \|A_M^n\|\,\|v\| \le \|A_M^n\|\,\|x\|, \qquad (2.2)
\sum_{k>N} |\langle v, \varphi_k \rangle|^2 < \varepsilon/2. \qquad (2.4)
We have shown above that the modulus of any eigenvalue of A_M is less than 1. Hence, for all n sufficiently large,
\sum_{k=1}^{N} |\mu_k|^{2n}|\langle v, \varphi_k \rangle|^2 < \varepsilon/2. \qquad (2.5)
We are now prepared to prove Theorem 1.1. Let A = I - \alpha B^*B. The first step in the proof is to verify that \{A^nx\} converges for each x by showing that A satisfies the conditions in Theorem 2.2. It is easy to see that
-1 < \langle Ax, x \rangle \le 1.
x - Ax = \alpha B^*y \qquad (3.2)
has a solution. Indeed, by hypothesis, there exists a w \in H such that Bw = y and
w - Aw = \alpha B^*Bw = \alpha B^*y.
Therefore, we have from Theorem 2.2 and Lemma 2.1 that \{x_n\} converges to a solution v of Equation (3.2). The convergence is uniform as x_0 ranges over any bounded set if H is finite dimensional or \alpha satisfies the requirements of the theorem. Now
\alpha B^*Bv = v - Av = \alpha B^*y,
hence
Bv - y \in Ker B^* = (Im B)^{\perp}.
Since Bw = y,
or
\lambda v + Kv = Bv = y.
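The iteration underlying Theorem 1.1 can be sketched numerically (my own finite-dimensional illustration): with x_{n+1} = x_n − αB*Bx_n + αB*y and 0 < α < 2/‖B‖², the iterates converge to a solution of B*Bx = B*y.

```python
import numpy as np

rng = np.random.default_rng(4)
# build B with known singular values so ||B|| = 1 and convergence is fast
U, _ = np.linalg.qr(rng.standard_normal((6, 4)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B = U @ np.diag([1.0, 0.8, 0.6, 0.5]) @ V.T
y = rng.standard_normal(6)

a = 1.0                                  # 0 < a < 2/||B||^2 = 2
x = np.zeros(4)
for _ in range(300):
    x = x - a * (B.T @ (B @ x)) + a * (B.T @ y)

assert np.allclose(B.T @ (B @ x), B.T @ y)   # normal equations hold
```

Each error mode shrinks by the factor |1 − a·s²| per step, where s is a singular value of B, so the smallest singular value governs the convergence rate.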
The iterative method in Theorem 1.1 does not have any specific relation to the
spectral theorem, although the proof is strongly based on the theorem.
Even if Equation (1.3) in Theorem 1.1 does not have a solution, \{x_n\} nevertheless converges for an appropriate \alpha when \lambda \ne 0.
Lemma 3.1* If B = \mu I + K, where K \in \mathcal{L}(H) is compact and \mu \ne 0, then for any y \in H, the equation B^*Bx = B^*y has a solution.
Proof: It is easy to see that A = BB^* - |\mu|^2I is compact and self adjoint. Let \{\lambda_j\}, \{\varphi_j\} be a basic system of eigenvalues and eigenvectors of A. For each y \in H, there exists a v \in Ker A such that
y = v + \sum_j \langle y, \varphi_j \rangle \varphi_j.
A solution is given by
x = \frac{B^*v}{|\mu|^2} + \sum_j \frac{1}{|\mu|^2 + \lambda_j}\langle y, \varphi_j \rangle B^*\varphi_j. \qquad \square
Theorem 3.2 Let \{x_n\} be the sequence defined in Theorem 1.1. If \lambda \ne 0, then for 0 < \alpha < 2\|B\|^{-2} the sequence \{x_n\} converges. For 0 < \alpha < \min(2\|B\|^{-2}, 2|\lambda|^{-2}), \{x_n\} converges uniformly as x_0 ranges over any bounded set.
If v = \lim x_n, then (\lambda I + K)v = Py, where P is the orthogonal projection from H onto Im(\lambda I + K). Thus
\inf_{x \in H} \|\lambda x + Kx - y\| = \|\lambda v + Kv - y\|.
Proof: The proof of the main theorem relied on the fact that \alpha B^*Bx = x - Ax = \alpha B^*y has a solution. But this we know from Lemma 3.1. An inspection of the proof of the main Theorem 1.1 shows that \{x_n\} converges to some v and Bv - y \in (Im B)^{\perp}. Thus
(\lambda I + K)v = Bv = Py.
The iteration procedure in Theorem 1.1 can be readily applied to compact integral operators as follows:
Let K be an integral operator with kernel function k \in L_2([a, b] \times [a, b]). Given g \in L_2([a, b]), suppose that for some complex number \lambda, the equation
\lambda f(t) + \int_a^b k(t, s)f(s)\,ds = g(t) \quad \text{a.e.} \qquad (4.1)
has a solution. With B = \lambda I + K,
(K^*Kf)(t) = \int_a^b \overline{k(s, t)}\left[\int_a^b k(s, \xi)f(\xi)\,d\xi\right]ds = \int_a^b\int_a^b \overline{k(s, t)}k(s, \xi)f(\xi)\,d\xi\,ds.
Given f_0 \in L_2([a, b]), define, for n = 0, 1, \ldots,
f_{n+1}(t) = f_n(t) - \alpha\left[|\lambda|^2 f_n(t) + \overline{\lambda}\int_a^b k(t, s)f_n(s)\,ds + \lambda\int_a^b \overline{k(s, t)}f_n(s)\,ds + \int_a^b\int_a^b \overline{k(s, t)}k(s, \xi)f_n(\xi)\,d\xi\,ds\right] + \alpha\overline{\lambda}g(t) + \alpha\int_a^b \overline{k(s, t)}g(s)\,ds,
where
0 < \alpha < 2\|\lambda I + K\|^{-2}. \qquad (4.2)
Since K is compact, it follows from Theorem 1.1 that \{f_n\} converges in L_2([a, b]) to a solution of (4.1).
Let Lf_0 = \lim_{n\to\infty} f_n, i.e., \|Lf_0 - f_n\| \to 0. Suppose \lambda \ne 0. If \alpha satisfies (4.2) and \alpha < 2|\lambda|^{-2}, then for each \varepsilon > 0, there exists an integer N such that for all f_0 in the r-ball of L_2([a, b]),
Clearly, AB = BA.
An important result is that the converse holds if A and B are compact and self
adjoint.
Theorem 1.1 Suppose A and B are compact self adjoint operators in .c(H). If
AB = B A, then A and Bare diagonalizable simultaneously.
Proof: Let \{\lambda_n\} be the basic system of eigenvalues of A. It was shown in the proof of the Spectral Theorem IV.5.1 that if \varphi_1 \in Ker(\lambda_1 I - A), \|\varphi_1\| = 1, and
Suppose B\,Ker A \ne \{0\}. Since Ker A is invariant under B, there exists an orthonormal system \{\psi_j\} \subset Ker A and a sequence of eigenvalues \{\eta_j\} of B such that for all u \in Ker A,
Bu = \sum_j \eta_j \langle u, \psi_j \rangle \psi_j. \qquad (1.4)
It is clear from (1.2), (1.3) and (1.4) that either \{\varphi_n\} or \{\psi_1, \varphi_1, \psi_2, \varphi_2, \ldots\} diagonalizes both A and B.
The theorem above shows that there exists an orthonormal basis \{\varphi_j\} for M = \overline{Im\,A} such that the matrices corresponding to the restrictions A_M, B_M and \{\varphi_j\} are diagonal matrices. \square
Analogous to the complex numbers, every A \in \mathcal{L}(H) can be expressed in the form
A = A_1 + iA_2,
where
A_1 = \frac{1}{2}(A + A^*), \qquad A_2 = \frac{1}{2i}(A - A^*).
The operators A_1 and A_2 are self adjoint and are called the real and imaginary parts of A, respectively.
If A is compact, then A_1 and A_2 are compact self adjoint. Therefore, A_1 and A_2 are each diagonalizable, but, in general, A is not. However, if A_1A_2 = A_2A_1, then by Theorem 1.1, A_1 and A_2 are simultaneously diagonalizable and therefore A is diagonalizable.
A simple computation verifies that A_1A_2 = A_2A_1 if and only if AA^* = A^*A. This leads us to the following definition.
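The "simple computation" can be spelled out numerically (my own sketch): with A₁ = (A + A*)/2 and A₂ = (A − A*)/(2i), one has AA* − A*A = −2i(A₁A₂ − A₂A₁), so the two commutators vanish together.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

A1 = (A + A.conj().T) / 2            # real part
A2 = (A - A.conj().T) / (2j)         # imaginary part
assert np.allclose(A, A1 + 1j * A2)
assert np.allclose(A1, A1.conj().T) and np.allclose(A2, A2.conj().T)

comm = A1 @ A2 - A2 @ A1             # [A1, A2]
normality = A @ A.conj().T - A.conj().T @ A
assert np.allclose(normality, -2j * comm)
```

In particular A is normal exactly when its real and imaginary parts commute.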
Theorem 2.1 An operator A E £('H) is compact and normal if and only if it has
a basic system ofeigenvectors and eigenvalues, where the sequence ofeigenvalues
ofA converges to zero if it is infinite.
Proof: Suppose A is compact and normal. Let A_1 and A_2 be the real and imaginary parts of A. Then by the above discussion, there exists an orthonormal system \{\varphi_n\} and corresponding real eigenvalues \{\lambda_n\} and \{\mu_n\} of A_1 and A_2, respectively, such that for each x \in H,
since
Therefore,
A^*Ax = \sum_k |\eta_k|^2\langle x, \varphi_k \rangle \varphi_k = AA^*x
for all x \in H, which means that A is normal. \square
Application. Let K be an integral operator with kernel function k \in L_2([a, b] \times [a, b]). Then
(K^*Kf)(t) = \int_a^b\int_a^b \overline{k(s, t)}k(s, \xi)f(\xi)\,d\xi\,ds, \qquad (KK^*f)(t) = \int_a^b\int_a^b k(t, s)\overline{k(\xi, s)}f(\xi)\,d\xi\,ds.
Hence a necessary and sufficient condition that K be normal is that for almost every (t, \xi) \in [a, b] \times [a, b],
\int_a^b \overline{k(s, t)}k(s, \xi)\,ds = \int_a^b k(t, s)\overline{k(\xi, s)}\,ds. \qquad (2.2)
Since K is compact, we know from Theorem 2.1 that if (2.2) holds, then K has a basic system of eigenvectors \{\varphi_n\} and eigenvalues \{\lambda_n\}. Thus
\lim_{n\to\infty}\int_a^b \left|\int_a^b k(t, s)f(s)\,ds - \sum_{j=1}^{n} a_j\varphi_j(t)\right|^2 dt = 0,
where
a_j = \lambda_j\int_a^b f(s)\overline{\varphi_j(s)}\,ds.
It was shown in Theorem 1.18.1 that any two complex Hilbert spaces HI, H2 of
the same dimension are equivalent in the sense that there exists a U E £(H I , H2)
such that U is surjective and II U x II = IIx II for all x E HI .
An operator U which has the property that II U x II = IIx II for all x is called an
isometry.
Isometries have some very special properties as seen in the next theorem.
(i) U is an isometry.
(ii) U^*U = I_{H_1}, the identity operator on H_1.
(iii) (Ux, Uy) = (x, y) for all x and y E HI .
Since y is arbitrary, U U* x = x. o
Definition: A linear isometry which maps H onto H is called a unitary operator.
The above theorem shows that an operator U E £(H) is unitary if and only if it
is invertible and U- I = U* .
Thus a unitary operator is normal.
10.3 Unitary Operators 247
We have seen in (20.4) of Section II.20 that if U \in \mathcal{L}(H) is unitary, then the spectrum
\sigma(U) \subset \{\lambda : |\lambda| = 1\}. \qquad (3.1)
(Uf)(t) = a(t)f(t)
is unitary.
2. Given a real number r, let f_r(t) = f(t + r) for all t \in (-\infty, \infty). The operator defined on L_2((-\infty, \infty)) by Uf = f_r is unitary.
3. The forward shift operator on \ell_2 is an isometry which is not unitary since it is not surjective.
Unitary operators can be used to identify compact self adjoint operators with
each other provided they have certain properties in common. The following theorem
is an illustration of this assertion.
Theorem 3.2 Let A and B be compact self adjoint operators on a separable Hilbert space H. There exists a unitary operator U on H such that U^*BU = A
if and only if
dim Ker (AI - A) = dim Ker (AI - B)
for all A E C.
Proof: Suppose dim Ker(\lambda I - A) = dim Ker(\lambda I - B) for all \lambda \in \mathbb{C}. It follows from IV.6.1(d) that A and B have the same basic system of eigenvalues \{\lambda_n\}. Let \{\varphi_n\} and \{\psi_n\} be basic systems of eigenvectors of A and B, respectively, corresponding to \{\lambda_n\}. Since dim Ker A = dim Ker B, there exists, by Theorem I.18.1, a linear isometry U_0 which maps Ker A onto Ker B. Define U \in \mathcal{L}(H) as follows. Given x \in H, there exists a unique u \in Ker A such that
x = u + \sum_k \langle x, \varphi_k \rangle \varphi_k.
Let
Ux = U_0u + \sum_k \langle x, \varphi_k \rangle \psi_k.
z = w + \sum_k \langle z, \psi_k \rangle \psi_k.
248 Chapter X Further Developments ofthe Spectral Theorem
Clearly
Given x E H,
BUx = \sum_k \langle x, \varphi_k \rangle B\psi_k = \sum_k \lambda_k \langle x, \varphi_k \rangle \psi_k
U*(AI - B)U = AI - A, A E C,
which implies
dim Ker (AI - B) = dim Ker (AI - A) .
The operators A and B in the above theorem are said to be unitarily equivalent.
o
10.4 Singular Values
Given Hilbert spaces HI and Hz, let A be a compact operator in £(HI, Hz). The
operator A * A is compact and positive since
(4.2)
note that s_j(A) \ge s_{j+1}(A) and s_j(A) \ge 0. The importance of singular values is
seen in the following characterization of compact operators.
Theorem 4.1 If A \in \mathcal{L}(H_1, H_2) is compact, then there exist orthonormal systems \{\varphi_j\} \subseteq H_1 and \{\psi_j\} \subseteq H_2 such that for all x \in H_1,
Ax = \sum_{j=1}^{\nu(A)} s_j(A)\langle x, \varphi_j \rangle \psi_j.
Corollary 4.3 Let H_1 and H_2 be Hilbert spaces and let A be a compact operator in \mathcal{L}(H_1, H_2). Then the singular values satisfy
s_j(A^*) = s_j(A). \qquad \square
Example: Let K : L_2([0, 1]) \to L_2([0, 1]) be the integral operator
(Kf)(t) = 2i\int_0^t f(s)\,ds.
By taking
k(t, s) = \begin{cases} 1, & 0 \le s \le t \le 1 \\ 0, & s > t, \end{cases}
it follows from example 3 in II.16 that K is a compact operator. Since the kernel function corresponding to K^* is \overline{k(s, t)}, we have that
(K^*Kf)(t) = 4\int_t^1\int_0^s f(u)\,du\,ds.
Hence the eigenvalue problem K^*Kf = \lambda f is equivalent to
\lambda f'' + 4f = 0, \qquad f(1) = 0, \quad f'(0) = 0. \qquad (i)
From the general solution f(t) = ae^{2it/\lambda^{1/2}} + be^{-2it/\lambda^{1/2}} we have
0 = f'(0) = \frac{2i}{\lambda^{1/2}}(a - b).
Thus
f(t) = a(e^{2it/\lambda^{1/2}} + e^{-2it/\lambda^{1/2}}) = 2a\cos(2t/\lambda^{1/2})
and
0 = f(1) = 2a\cos(2/\lambda^{1/2}).
Hence
2/\lambda^{1/2} = (2k + 1)\pi, \qquad k = 0, 1, 2, \ldots,
or
s_k(K) = \lambda^{1/2} = \frac{2}{(2k + 1)\pi}, \qquad k = 0, 1, 2, \ldots
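This style of computation can be cross-checked numerically (my own sketch, using the plain antiderivative operator (Vf)(t) = ∫₀ᵗ f(s) ds on L₂([0, 1]), whose singular values are 2/((2k+1)π) by the same V*V eigenvalue argument): the singular values of a fine discretization should match.

```python
import numpy as np

n = 1000
h = 1.0 / n
# midpoint discretization of (Vf)(t) = ∫_0^t f(s) ds:
# strict lower triangle gets weight h, the diagonal cell half a weight
L = h * (np.tril(np.ones((n, n)), -1) + 0.5 * np.eye(n))
s = np.linalg.svd(L, compute_uv=False)

for k in range(3):
    exact = 2.0 / ((2 * k + 1) * np.pi)
    assert abs(s[k] - exact) < 5e-3
```

The discretization error decays like 1/n, so the first few singular values agree with 2/((2k+1)π) to the stated tolerance.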
It follows from equality (4.1) and the min-max Theorem IV.9.1 applied to A^*A that
s_j(A) = \min_{\substack{M \subseteq H_1 \\ \dim M = j - 1}}\ \max_{\substack{\|x\| = 1 \\ x \perp M}} \|Ax\|. \qquad (4.6)
Theorem 4.4 Let H be a Hilbert space and let A \in \mathcal{L}(H) be compact. Then for n = 1, 2, \ldots,
s_n(A) = \inf\{\|A - F\| : F \in \mathcal{L}(H),\ \mathrm{rank}\,F \le n - 1\}. \qquad (4.7)
It remains to prove that the infimum above is attained and equals s_n(A). By Theorem 4.1 the operator Ax = \sum_{j=1}^{\nu(A)} s_j(A)\langle x, \varphi_j \rangle \psi_j, where \{\varphi_j\} and \{\psi_j\} are orthonormal systems. For n < \nu(A) + 1, define for n > 1 the operator F_nx = \sum_{j=1}^{n-1} s_j(A)\langle x, \varphi_j \rangle \psi_j, and F_1 = 0. Since rank F_n = n - 1,
\|(A - F_n)x\|^2 = \left\|\sum_{j=n}^{\nu(A)} s_j(A)\langle x, \varphi_j \rangle \psi_j\right\|^2 \le s_n(A)^2\|x\|^2.
Thus \|A - F_n\| \le s_n(A). Hence formula (4.7) holds for n < \nu(A) + 1. Thus if n \ge \nu(A) + 1, then s_n = 0 and rank A \le n - 1. Therefore the infimum is attained at F = A. \square
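In finite dimensions formula (4.7) is the Eckart–Young theorem, which is easy to verify directly (a sketch of mine): the best approximation of A by matrices of rank ≤ n − 1 in the operator norm has error s_n(A), attained by the truncated singular value decomposition.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A)

n = 3                                            # approximate by rank n-1 = 2
F = U[:, :n - 1] @ np.diag(s[:n - 1]) @ Vt[:n - 1, :]
assert np.isclose(np.linalg.norm(A - F, 2), s[n - 1])
```

Dropping all but the largest n − 1 singular values leaves A − F with top singular value s_n, exactly as (4.7) predicts.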
The next result shows that the singular values are continuous functions of com-
pact operators in £(H).
(4.8)
Hence
Theorem 5.1 Let k be continuous on [a, b] \times [a, b]. Suppose that for all f \in L_2([a, b]),
\int_a^b\int_a^b k(t, s)f(s)\overline{f(t)}\,ds\,dt \ge 0.
Then
\sum_{j=1}^{\infty} \lambda_j = \int_a^b k(t, t)\,dt,
where \lambda_1, \lambda_2, \ldots are the eigenvalues of the integral operator with kernel function k.
Theorem 5.2 Suppose k \in L_2([a, b] \times [a, b]) and k(t, s) = \overline{k(s, t)}. Then the operator K defined on L_2([a, b]) by
(Kf)(t) = \int_a^b k(t, s)f(s)\,ds \qquad (5.3)
is Hilbert-Schmidt and
\sum_{j=1}^{\infty} s_j(K)^2 = \int_a^b\int_a^b |k(t, s)|^2\,ds\,dt. \qquad (5.4)
Proof: The operator K is compact and self adjoint. Hence s_j(K)^2 = \lambda_j(K)^2 and formula (5.4) is a consequence of Theorem V.1.2. \square
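The discrete analogue of (5.4) is the familiar Frobenius-norm identity, easy to confirm (my own sketch): the sum of the squared singular values of a matrix equals the sum of the squared moduli of its entries.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((5, 5))
K = (M + M.T) / 2                        # Hermitian "kernel": k(t,s) = conj(k(s,t))
s = np.linalg.svd(K, compute_uv=False)

# sum of squared singular values = squared Frobenius norm (discrete ∬|k|^2)
assert np.isclose(np.sum(s**2), np.sum(np.abs(K)**2))
```

For a self-adjoint matrix the singular values are the absolute values of the eigenvalues, mirroring s_j(K)² = λ_j(K)² in the proof above.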
The compact operator K defined in (5.3) is not of trace class but is a Hilbert-Schmidt operator since
\sum_{j=1}^{\infty} s_j(K)^2 = \sum_{k=0}^{\infty}\left(\frac{2}{(2k + 1)\pi}\right)^2 < \infty.
Exercises X
1. Which of the following operators are normal and which are not?
(a) K : L_2[0, 1] \to L_2[0, 1]; \quad (K\varphi)(t) = \int_0^1 (t - s)^2\varphi(s)\,ds.
(b) V : L_2[0, 1] \to L_2[0, 1]; \quad (Vf)(t) = \int_0^t f(s)\,ds.
(c) S_r : \ell_2 \to \ell_2; \quad S_r\xi = (0, \xi_1, \xi_2, \ldots).
(d) A : L_2[0, 1] \to L_2[0, 1]; \quad (Af)(t) = a(t)f(t), where a is bounded and Lebesgue measurable on [0, 1].
4. Let A be a self adjoint operator on a Hilbert space H. Define on H \oplus H \oplus H an operator
B = \begin{pmatrix} 0 & 0 & A \\ A & 0 & 0 \\ 0 & A & 0 \end{pmatrix}.
Prove that B is normal.
B = \begin{pmatrix} 0 & 0 & \cdots & 0 & A \\ A & 0 & \cdots & 0 & 0 \\ 0 & A & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & A & 0 \end{pmatrix}.
f(A)g(B) = g(B)f(A).
13. Find the real and imaginary parts of the following operators.
(a) (K\varphi)(t) = \int_{-\pi}^{\pi} k(t - s)\varphi(s)\,ds; \quad K : L_2[-\pi, \pi] \to L_2[-\pi, \pi], \; k \in L_2[-2\pi, 2\pi].
(b) (Af)(t) = \int_0^t f(s)\,ds; \quad A : L_2[0, 1] \to L_2[0, 1].
(c) (Af)(t) = 2\int_0^t k(t, s)f(s)\,ds; \quad A : L_2[0, 1] \to L_2[0, 1], \; k \in L_2([0, 1] \times [0, 1]).
(d) A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad A_{ij} \in \mathcal{L}(H).
14. Let G be a self adjoint invertible operator on a Hilbert space H and let B = G^2. Define on H a new scalar product \langle\cdot,\cdot\rangle_B by \langle h, g \rangle_B = \langle Bh, g \rangle. Prove that H is a Hilbert space with respect to the new scalar product and that the new norm is equivalent to the old one.
15. Let B and \langle\cdot,\cdot\rangle_B be defined as in exercise 14. For any A \in \mathcal{L}(H), define A^\times to be the adjoint operator of A with respect to the new scalar product \langle\cdot,\cdot\rangle_B. Prove that A^\times = B^{-1}A^*B and that A is self adjoint with respect to
Ah = \sum_j \lambda_j \langle h, \varphi_j \rangle \varphi_j,
where \{\lambda_j\} are the eigenvalues and the \varphi_j form a system for which
\langle B\varphi_j, \varphi_k \rangle = \delta_{jk}.
(d) Conversely, if A has the representation given in (c), then BA = A^*B.
32. Give an example ofa compact operator which has only zero as its eigenvalue
and is not Hilbert-Schmidt.
Chapter XI
Banach Spaces
A norm can be defined on Cⁿ in many different ways. Some of them stem from an inner product. An example of a norm on Cⁿ which does not arise from an inner product is ‖(a₁, a₂, ..., aₙ)‖ = max_{1≤i≤n} |a_i|. This is due, for example, to the fact that the vectors (1, 0, ..., 0) and (0, 1, 0, ..., 0) do not satisfy the parallelogram law (cf. I.3.1) in that norm. As we shall see, all norms on Cⁿ are equivalent. This need not be the case for infinite dimensional spaces, where the situation is more complicated.
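This failure of the parallelogram law can be checked directly. A minimal Python sketch, evaluating both sides for exactly the vectors and norm mentioned above:

```python
# The max norm on C^n: ||(a_1,...,a_n)|| = max_i |a_i|.
def max_norm(v):
    return max(abs(a) for a in v)

x = [1, 0, 0, 0]   # (1, 0, ..., 0)
y = [0, 1, 0, 0]   # (0, 1, 0, ..., 0)

plus  = [a + b for a, b in zip(x, y)]
minus = [a - b for a, b in zip(x, y)]

# Parallelogram law: ||x+y||^2 + ||x-y||^2 should equal 2||x||^2 + 2||y||^2.
lhs = max_norm(plus) ** 2 + max_norm(minus) ** 2     # 1 + 1 = 2
rhs = 2 * max_norm(x) ** 2 + 2 * max_norm(y) ** 2    # 2 + 2 = 4
print(lhs, rhs)  # the law fails, so no inner product induces this norm
```

Since the two sides disagree, the max norm cannot come from an inner product.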
Many problems are cast in an infinite dimensional setting with norms which do not have some of the nice geometrical properties of a Hilbert space norm. The choice of the norm plays a vital role in applications and in solutions of certain problems. This leads us to study, in this chapter, Banach spaces, which are more general than Hilbert spaces. In subsequent chapters we extend the theory of linear operators to Banach spaces.
‖f‖ = max_{t∈[a,b]} |f(t)|,    ‖ξ‖ = ( ∑_{n=1}^∞ |ξₙ|^p )^{1/p}.
Theorem 1.1 If X and Y are Banach spaces, then L(X, Y) is a Banach space.

‖A − Aₙ‖ → 0 as n → ∞. □
Excluding ℓ₂ and L₂([a, b]), none of the Banach spaces in Examples 2–8 is a Hilbert space, i.e., there does not exist an inner product (·, ·) on the space such that ‖x‖ = (x, x)^{1/2} (Exercise 8).
There are some very basic properties which Hilbert spaces possess and arbitrary Banach spaces lack. For example, not every closed subspace of a Banach (non-Hilbert) space is complemented. Indeed, the closed subspace c₀ of ℓ∞ is not complemented in ℓ∞ (cf. [W]). These differences, which we shall point out in subsequent sections, are due to the fact that a Banach space has less structure than a Hilbert space.
Given a finite dimensional vector space X over C, there are infinitely many norms which can be defined on X. For example, suppose x₁, ..., xₙ is a basis for X. Then for x = ∑_{i=1}^n a_i x_i,

‖x‖₁ = ∑_{i=1}^n |a_i|,    ‖x‖_(r) = r‖x‖₁,  r > 0,    (2.1)
all define norms on X. However, we shall now show that all norms on X are
equivalent in the following sense.
Definition: Two norms ‖·‖ and ‖·‖₁ on a vector space X are called equivalent if there exist numbers C ≥ m > 0 such that for all x ∈ X,

m‖x‖ ≤ ‖x‖₁ ≤ C‖x‖.

It is clear that if ‖·‖ and ‖·‖₁ are equivalent norms on X, then X₀ = (X, ‖·‖) is complete if and only if X₁ = (X, ‖·‖₁) is complete. Also, a sequence converges in X₀ if and only if it converges in X₁.
Theorem 2.1 Any two norms on a finite dimensional vector space are equivalent.

Proof: Let x₁, ..., xₙ be a basis for the vector space X. For any norm ‖·‖ on X and any a_k ∈ C, 1 ≤ k ≤ n,

‖ ∑_{k=1}^n a_k x_k ‖ ≤ C ∑_{k=1}^n |a_k|,  where C = max_k ‖x_k‖.    (2.2)

We shall show that there exists an m > 0 such that for a_k ∈ C, 1 ≤ k ≤ n,

m ∑_{k=1}^n |a_k| ≤ ‖ ∑_{k=1}^n a_k x_k ‖.    (2.3)

Since (2.2) and (2.3) are valid for any norm on X, it follows that any two norms on X are equivalent. □
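In Cⁿ the equivalence constants can be exhibited concretely. A small Python check for the pair ‖·‖∞, ‖·‖₁, using the standard bounds ‖x‖∞ ≤ ‖x‖₁ ≤ n‖x‖∞ (the random vectors are just for illustration):

```python
import random

def norm1(v):
    return sum(abs(a) for a in v)

def norm_inf(v):
    return max(abs(a) for a in v)

random.seed(0)
n = 6
checks = []
for _ in range(100):
    v = [random.uniform(-5, 5) for _ in range(n)]
    # m = 1 and C = n work for the pair (||.||_inf, ||.||_1):
    checks.append(norm_inf(v) <= norm1(v) <= n * norm_inf(v))
all_ok = all(checks)
```

The constants 1 and n here are the m and C of the definition above for this particular pair of norms.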
From the properties of Cⁿ and Theorem 2.1, we obtain the following result: if X is a finite dimensional normed linear space, then
(a) X is complete;
(b) every bounded sequence in X has a convergent subsequence.
Property (a) implies that every finite dimensional subspace of a normed linear space Y is closed in Y.
Property (b) is false if X is infinite dimensional. In order to prove this we use
the following lemma.
Lemma 2.3 Let M be a finite dimensional proper subspace of a normed linear space X. Then there exists an x ∈ X with

1 = ‖x‖ = d(x, M),

where d(x, M) is the distance from x to M.
Proof: Let z be in X but not in M. There exists a sequence {m_k} in M such that ‖z − m_k‖ → d(z, M) > 0. Since M is finite dimensional and {m_k} is bounded, there exists a subsequence {m_{k′}} of {m_k} and an m ∈ M such that m_{k′} → m. Hence ‖z − m‖ = d(z, M) = d(z − m, M). Thus for x = (z − m)/‖z − m‖,

1 = ‖x‖ = d(z − m, M)/‖z − m‖ = d(x, M). □
Theorem 2.4 If every sequence in the unit sphere of a normed linear space X has a convergent subsequence, then X is finite dimensional.
After {x₁, ..., x_k} has been obtained, choose x_{k+1} ∈ X such that ‖x_{k+1}‖ = 1 and d(x_{k+1}, M_k) = 1, where M_k = sp{x₁, ..., x_k}. Now {xₙ} is in the unit sphere of X but the sequence does not have a convergent subsequence since ‖xₙ − x_m‖ ≥ 1 for n ≠ m.
The sequence {a_k} is bounded; otherwise there exists a subsequence {a_{k′}} such that 0 < |a_{k′}| → ∞. Hence
Examples: 1. C([a, b]) and L_p([a, b]), 1 ≤ p < ∞, are separable Banach spaces, since it follows from the Weierstrass approximation theorem that the countable set of polynomials with complex rational coefficients is dense in C([a, b]) which, in turn, is dense in L_p([a, b]).
2. ℓ_p, 1 ≤ p < ∞, is separable since the countable set of all sequences {a_k} of complex rationals, where a_k = 0 for all k sufficiently large, is dense in ℓ_p.
3. ℓ∞ is not separable. For given any countable set {xₙ} in ℓ∞, we can always find an x ∈ ℓ∞ such that ‖x − xₙ‖ ≥ 1. Indeed, suppose xₙ = (a₁⁽ⁿ⁾, a₂⁽ⁿ⁾, ...). Let aₙ = 0 if |aₙ⁽ⁿ⁾| > 1 and aₙ = 1 + |aₙ⁽ⁿ⁾| otherwise. Then x = (a₁, a₂, ...) is in ℓ∞ and ‖x − xₙ‖ ≥ |aₙ − aₙ⁽ⁿ⁾| ≥ 1.
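The diagonal construction of Example 3 can be run on finite truncations. A sketch in Python (the particular rows below are arbitrary illustrations):

```python
# Given sequences x_1, x_2, ... (rows), build x with |x_n - a_n^{(n)}| >= 1,
# so that ||x - x_n||_infinity >= 1 for every n.
def far_point(rows):
    x = []
    for n, row in enumerate(rows):
        a = row[n]                      # the diagonal entry a_n^{(n)}
        x.append(0.0 if abs(a) > 1 else 1 + abs(a))
    return x

rows = [[0.5, 0.5, 0.5, 0.5, 0.5],
        [2.0, -3.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.2, 1.0, 1.0],
        [1.0, 1.0, 1.0, 1.0, 1.0],
        [0.0, 0.0, 0.0, 0.0, -0.4]]
x = far_point(rows)
# sup-norm distance to each row (a lower bound for the l^infinity distance):
gaps = [max(abs(xi - ri) for xi, ri in zip(x, row)) for row in rows]
```

Each distance is at least 1, as the text's argument guarantees, so no countable set can be dense in ℓ∞.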
We have seen that every separable Hilbert space H has an orthonormal basis {φₙ}. Thus every x ∈ H can be represented by x = ∑_k a_k φ_k and the representation is unique, i.e., if x = ∑_k β_k φ_k, then a_k = β_k.
Theorem 4.1 Given f ∈ ℓ_p′ (the conjugate space of ℓ_p), 1 ≤ p < ∞, there exists a unique β = (β₁, β₂, ...) ∈ ℓ_q, 1/p + 1/q = 1, such that f(ξ) = ∑_{k=1}^∞ β_k ξ_k for every ξ = (ξ₁, ξ₂, ...) ∈ ℓ_p, and ‖f‖ = ‖β‖_q.

Proof: We shall only prove the theorem for p = 1. The proof for 1 < p < ∞ appears in [TL], p. 143. Given f ∈ ℓ₁′, let β_k = f(e_k). For ξ = (ξ₁, ξ₂, ...) ∈ ℓ₁,
Theorem 4.2 For each F ∈ L_p([a, b])′, 1 ≤ p < ∞, there corresponds a unique (up to sets of measure zero) g ∈ L_q([a, b]), 1/p + 1/q = 1, such that for all f ∈ L_p([a, b]),

F(f) = ∫ₐᵇ f(t)g(t) dt.    (4.5)
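The boundedness half of this duality is Hölder's inequality, which is easy to probe in a discrete analogue. A sketch (the conjugate pair p = 3, q = 3/2 and the random vectors are arbitrary choices):

```python
import random

def p_norm(v, p):
    return sum(abs(a) ** p for a in v) ** (1.0 / p)

random.seed(1)
p, q = 3.0, 1.5          # conjugate exponents: 1/3 + 1/1.5 = 1
ok = True
for _ in range(200):
    f = [random.uniform(-2, 2) for _ in range(8)]
    g = [random.uniform(-2, 2) for _ in range(8)]
    pairing = sum(a * b for a, b in zip(f, g))    # discrete analogue of int f g
    # Hoelder: |sum f g| <= ||f||_p * ||g||_q
    ok = ok and abs(pairing) <= p_norm(f, p) * p_norm(g, q) + 1e-12
```

This is only the easy inequality |F(f)| ≤ ‖f‖_p ‖g‖_q; the substance of Theorem 4.2 is that every bounded functional arises this way.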
One of the fundamental results in the theory of Banach spaces is the Hahn-Banach theorem. This section contains the proof of the theorem together with some very important applications.
Hahn-Banach Theorem 5.1 If f is a bounded linear functional which is defined on a subspace M of a normed linear space X, then f can be extended to a bounded linear functional F defined on X such that ‖f‖ = ‖F‖.
The theorem is clear if X is a Hilbert space (see Exercise XII-5).
In order to prove the Hahn-Banach theorem, we need the following preliminary
results.
A set E is called partially ordered if there exists a binary relation ≤, defined for certain pairs (x, y) ∈ E × E, such that
(i) x ≤ x,
(ii) if x ≤ y and y ≤ z, then x ≤ z.
Theorem 5.2 Suppose X is a vector space over the real or complex numbers. Let p be a real-valued function defined on X such that for all x, y in X and numbers a,
(a) p(x + y) ≤ p(x) + p(y),
(b) p(ax) = |a|p(x).
If f is a linear functional defined on a subspace M of X such that |f(x)| ≤ p(x), x ∈ M, then f has a linear extension F to X satisfying |F(x)| ≤ p(x), x ∈ X.
Proof: Let E be the set of all linear functionals g such that the domain D(g) of g is contained in X, g = f on M, and |g(x)| ≤ p(x), x ∈ D(g). Obviously, f is in E. We partially order E by letting g ≤ h mean that h is an extension of g. The idea of the proof is to use Zorn's lemma to show that E has a maximal element and that this element is defined on all of X.

Let J be a totally ordered subset of E. Define the linear functional H by

D(H) = ∪_{g∈J} D(g),    H(x) = g(x) if x ∈ D(g).
In order to determine G(x), let us first assume that X is a vector space over the reals. If (5.2) is to hold, then in particular,

G(x + y) ≤ p(x + y),  y ∈ D(F),    (5.3)

and

G(−x − y) ≤ p(−x − y) = p(x + y),  y ∈ D(F).    (5.4)

Since G(x + y) = G(x) + F(y) and G(−x − y) = −G(x) − F(y), these become

G(x) ≤ p(x + y) − F(y),  y ∈ D(F),    (5.5)

and

G(x) ≥ −p(x + y) − F(y),  y ∈ D(F).    (5.6)
It is easy to check that the right side of (5.5) is greater than or equal to the right side of (5.6) for all z and y in D(F). Thus, if we define

G(x) = inf_{y∈D(F)} [ p(x + y) − F(y) ],

then both (5.5) and (5.6) are satisfied. Suppose now that X is a vector space over the complex numbers. Write

f(z) = ℜf(z) + iℑf(z),

where ℜf(z) and ℑf(z) denote the real and imaginary parts of f(z), respectively. Since ℑf(z) = −ℜf(iz), we have

f(z) = ℜf(z) − iℜf(iz).    (5.8)

Hence, by the result we proved for real vector spaces, there exists a linear extension G of ℜf to all of X such that |G(x)| ≤ p(x), x ∈ X. Guided by (5.8), we define F on X by

F(x) = G(x) − iG(ix).

Now F is a linear extension of f to all of X. Writing F(x) = |F(x)|e^{iθ}, we get
which implies that 1 ≤ ‖g‖. Therefore, ‖g‖ = 1. The theorem now follows by extending g to f ∈ X′ such that 1 = ‖g‖ = ‖f‖. □
Corollary 5.4 Given x ∈ X, there exists an f ∈ X′ such that ‖f‖ = 1 and f(x) = ‖x‖.

Indeed, there exists a g ∈ X′ such that ‖g‖ = 1 and g(x) = ‖x‖. Thus for all f ∈ X′ with ‖f‖ = 1,

|f(x)| ≤ ‖x‖ = g(x).
Corollary 5.6 Given a linearly independent set {x₁, ..., xₙ} ⊂ X, there exist f₁, f₂, ..., fₙ in X′ such that

f_j(x_k) = δ_{jk},  1 ≤ j, k ≤ n.
(i) X = M + N = {u + v : u ∈ M, v ∈ N} and
(ii) M ∩ N = {0}.
X is called the direct sum of M and N and is written X = M ⊕ N. It is clear that X = M ⊕ N if and only if for each x ∈ X there exists a unique u ∈ M and a unique v ∈ N such that x = u + v.
A closed subspace M of a Hilbert space is complemented by M⊥. However, as we pointed out in Section 1, a closed subspace of a Banach space need not be complemented, unless the subspace is finite dimensional.

Hence X = M ⊕ N. □
Exercises XI
Let 1 ≤ p₁ ≤ p₂ < ∞. Prove that
(a) ℓ_{p₁} ⊆ ℓ_{p₂},
(b) L_{p₁}[a, b] ⊇ L_{p₂}[a, b],
(c) ‖ξ‖_{p₂} ≤ ‖ξ‖_{p₁} for ξ ∈ ℓ_{p₁},
(d) ‖φ‖_{p₁} ≤ ‖φ‖_{p₂} for φ ∈ L_{p₂}[a, b].
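Part (c) is easy to test numerically: for a fixed vector the ℓ_p norm is nonincreasing in p. A quick random check (the exponents 2 and 4 are arbitrary representatives of p₁ ≤ p₂):

```python
import random

def p_norm(v, p):
    return sum(abs(a) ** p for a in v) ** (1.0 / p)

random.seed(2)
ok = True
for _ in range(200):
    xi = [random.uniform(-3, 3) for _ in range(7)]
    # p1 = 2 <= p2 = 4, so ||xi||_4 <= ||xi||_2:
    ok = ok and p_norm(xi, 4.0) <= p_norm(xi, 2.0) + 1e-12
```

Note the direction reverses for function spaces on a finite interval, which is the content of parts (b) and (d).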
4. Show that the intersection of the unit ball in ℓ∞ with the plane
is the square
5. Find the intersection of the unit ball in ℓ_p, 1 ≤ p ≤ ∞, with the plane
Define ‖·‖ : X → ℝ₊ by ‖x‖ = min{|λ| : x ∈ λS}. Prove that ‖·‖ is a norm and (X, ‖·‖) is a Banach space in which S is the unit ball.
9. Prove that a Banach space X is a Hilbert space if and only if for all x, y ∈ X,

‖x + y‖² + ‖x − y‖² = 2‖x‖² + 2‖y‖².
13. Given two vectors (1, 1, 0, 0, ...) and (ξ₁, ξ₂, ξ₃, ξ₄, 0, 0, ...) in ℓ∞, where 0 ≤ |ξ_j| ≤ 1, j = 1, 2, 3, 4, prove that the distance in ℓ∞ between those vectors does not exceed 2 and that there exist ξ₁, ξ₂, ξ₃, ξ₄ such that the distance is exactly 2.
14. Show that there exist two vectors x and y in ℓ∞ such that they are linearly independent, ‖x‖ = ‖y‖ = 1 and ‖x + y‖ = 2.
15. Prove that if the unit sphere of a normed linear space contains a line segment, then there exist vectors x and y such that ‖x + y‖ = ‖x‖ + ‖y‖ and x, y are linearly independent (a line segment is a set of the form {αu + (1 − α)v : 0 ≤ α ≤ 1}).
16. Prove that if a normed linear space X contains linearly independent vectors x and y such that ‖x + y‖ = ‖x‖ + ‖y‖, then there is a line segment contained in the unit sphere of X.
17. Prove that there are no line segments contained in the unit sphere of a normed linear space if and only if the closest element in a subspace to a given vector is unique.
18. Prove that in ℓ_p, 1 < p < ∞, there are no line segments contained in the unit sphere.
19. Find the intersection of the unit ball in C[0, 1] with the following subspaces:
(a) sp{t}
(b) sp{1, t}
(c) sp{1, t, t²}
(d) sp{1 − t, t}.
20. Given two spheres ‖x − y₀‖ = ‖y₀‖ and ‖x + y₀‖ = ‖y₀‖ in a normed linear space, how many points can these spheres have in common?
21. Let ξ = (ξ₁, ξ₂, ...) be a vector in c₀ (the subspace of ℓ∞ consisting of sequences which converge to zero). By renumbering the entries of ξ we obtain ξ* with |ξ*₁| ≥ |ξ*₂| ≥ ··· . Let X₁ = {ξ ∈ c₀ : ∑_{j=1}^∞ (1/j)|ξ*_j| < ∞}. Define a norm on X₁ by ‖ξ‖₁ = ∑_{j=1}^∞ (1/j)|ξ*_j|. Prove that X₁ is a Banach space with respect to this norm. Let

X^(1) = { ξ ∈ c₀ : sup_n ( ∑_{j=1}^n |ξ*_j| ) / ( ∑_{j=1}^n 1/j ) < ∞ }

and define a norm on X^(1) by the corresponding supremum. Prove that X^(1) is a Banach space with respect to this norm. Prove that X₁ and X^(1) are not Hilbert spaces.
22. Let M be a closed subspace of X. Define the quotient space X/M to be the vector space consisting of the cosets [x] = {x + z : z ∈ M} with vector addition and scalar multiplication defined by [x] + [y] = [x + y], a[x] = [ax]. Define ‖[x]‖₁ = d(x, M), the distance from x to M. Prove that ‖·‖₁ is a norm on X/M and that (X/M, ‖·‖₁) is complete. We shall refer to this space in Exercise XII.17.
23. A Banach space is called uniformly convex if ‖xₙ‖ = ‖yₙ‖ = 1 and ‖½(xₙ + yₙ)‖ → 1 imply xₙ − yₙ → 0.
(a) Prove that every Hilbert space is uniformly convex.
(b) It can be shown that the following inequalities hold for every f and g in L_p[a, b], where 1/p + 1/q = 1.
27. Prove that a linear functional on X is bounded if and only if its kernel is closed.
28. Prove that if X is separable, then there exists a countable set f₁, f₂, ... in X′ with the property that if x ≠ 0, then there exists an f_j such that f_j(x) ≠ 0.
29. Let x₁ = (α, β, 0, ...), x₂ = (0, α, β, 0, ...), x₃ = (0, 0, α, β, 0, ...), ..., where |α/β| > 1. Show that {x_j} is a Schauder basis for ℓ₂.
30. Let x₁ = (α, 0, ...), x₂ = (α, β, 0, ...), x₃ = (0, α, β, 0, ...), ..., where |α/β| < 1. Show that {x_j} is a Schauder basis for ℓ₂.
Many of the definitions, theorems and proofs concerning operators in L(H₁, H₂) carry over verbatim to operators in L(X, Y), where X and Y are Banach spaces. Chapter II, Sections 1, 3, 8 and Theorems 16.1, 16.3 are illustrations of this assertion. We shall refer to these results within the framework of Banach spaces even though they were stated for Hilbert spaces.

There are also very substantial differences between operators on Banach and Hilbert spaces. For instance, in general, there is no notion of a selfadjoint operator on a Banach space which is not a Hilbert space.
The following result characterizes operators in L(ℓ₁) which stem from matrices, where

m = sup_k ∑_{j=1}^∞ |a_{jk}|.

Proof: If A is in L(ℓ₁) and {e_k} is the standard basis for ℓ₁, then

On the other hand, if m < ∞, then given ξ = (ξ₁, ξ₂, ...) ∈ ℓ₁,
(Kf)(t) = ∫ₐᵇ k(t, s)f(s) ds,    ‖K‖ = max_{t∈[a,b]} ∫ₐᵇ |k(t, s)| ds.
Proof: It follows from the uniform continuity of k that K is a linear map from X into X. Let

m = max_t ∫ₐᵇ |k(t, s)| ds.

Since

|(Kf)(t)| ≤ ∫ₐᵇ |k(t, s)||f(s)| ds ≤ m‖f‖,

we have ‖K‖ ≤ m. Now
y(t) = ∫ₐᵇ |k(t, s)| ds.

Therefore,

‖K‖ ≥ ∫ₐᵇ k(t₀, s)g(s) ds = ∫ₐᵇ |k(t₀, s)| ds = m.
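The formula ‖K‖ = max_t ∫ₐᵇ |k(t, s)| ds lends itself to a discretized check. A sketch with the kernel k(t, s) = t − s on [0, 1], chosen because the maximum can be computed by hand (it is 1/2, attained at t = 0 and t = 1):

```python
# For k(t,s) = t - s on [0,1]:  int_0^1 |t - s| ds = t^2/2 + (1-t)^2/2,
# which is maximized at t = 0 and t = 1 with value 1/2.
n = 2000
h = 1.0 / n

def row_integral(t):
    # midpoint rule for int_0^1 |t - s| ds
    return sum(abs(t - (j + 0.5) * h) * h for j in range(n))

norm_est = max(row_integral(i / 20.0) for i in range(21))
```

The discretized maximum agrees with the exact operator norm 1/2 up to quadrature error.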
If

C = sup_t ( ∫ₐᵇ |k(t, s)|² ds )^{1/2} < ∞,

then

(Kf)(t) = ∑_j λ_j (f, φ_j)φ_j(t)  a.e.,    (1.1)

and the series converges uniformly and absolutely on [a, b]. We shall give the following simple proof of this result.
Since

|(Kf)(t)| ≤ ( ∫ₐᵇ |k(t, s)|² ds )^{1/2} ‖f‖₂ ≤ C‖f‖₂,

it follows that K is a bounded linear map from L₂([a, b]) into L∞([a, b]). Now we know that

sₙ(t) = (P₀f)(t) + ∑_{j=1}^n (f, φ_j)φ_j(t)

converges in L∞([a, b]) to Kf. The uniform convergence of the series in (1.1) follows from the definition of the norm on L∞([a, b]). The absolute convergence follows from the fact that the argument does not depend on the arrangement of the terms of the series. □
In Chapter VI we introduced closed linear operators and proved some results in the setting of Hilbert spaces. These definitions and the proofs of the corresponding results carry over verbatim to Banach spaces. For example, let X and Y be Banach spaces. A linear operator A with domain D(A) ⊆ X and range Im A ⊆ Y, denoted by A(X → Y), is called a closed operator if it has the property that whenever {xₙ} is a sequence in D(A) satisfying xₙ → x in X and Axₙ → y ∈ Y, then x ∈ D(A) and Ax = y.
To see that A is closed, suppose that fₙ → f and Afₙ → g. Then it follows from the definition of the norm on C([0, 1]) that {fₙ} and {fₙ′} converge uniformly on [0, 1] to f and g, respectively. By considering ∫₀ᵗ fₙ′(s) ds, it is easy to verify that f is differentiable on [0, 1] and Af = f′ = g. Thus A is closed.
2. If A is invertible, then A is closed. The proof of this assertion is exactly the same as the proof of Theorem VI.1.1.
3. Let X = Y = L_p([0, 1]), 1 ≤ p < ∞, and let A be defined by D(A) = {f ∈ X : f absolutely continuous on [0, 1], f′ ∈ X, f(0) = 0}, Af = f′. A is obviously one-one and Im A = X since given g ∈ X,

(A⁻¹g)(t) = ∫₀ᵗ g(s) ds.
If A is closed, then the argument used in Section VI.3 shows that D(A) with norm ‖·‖_A is a Banach space.
The operator A is closed if and only if its graph G(A) = {(x, Ax) : x ∈ D(A)} is a closed subspace of X × Y with norm ‖(x, y)‖ = ‖x‖ + ‖y‖.
Proof: Let the Banach space X = ∪ₙ Cₙ, where each Cₙ is a closed set. Suppose that none of the Cₙ has an interior point. Choose x₁ ∈ C₁. Since S(x₁, 1) ⊄ C₁ and C₁ is closed, there exist x₂ and r₂, 0 < r₂ < 1/2, such that

S(x₂, r₂) ⊂ S(x₁, 1)  and  S(x₂, r₂) ∩ C₁ = ∅.

Since S(x₂, r₂) ⊄ C₂ and C₂ is closed, there exist x₃ and r₃, 0 < r₃ < 1/3, such that

S(x₃, r₃) ⊂ S(x₂, r₂)  and  S(x₃, r₃) ∩ C₂ = ∅.

Continuing in this manner, we obtain sequences {xₙ} and {rₙ}, 0 < rₙ < 1/n, such that

S(xₙ₊₁, rₙ₊₁) ⊂ S(xₙ, rₙ)  and  S(xₙ₊₁, rₙ₊₁) ∩ Cₙ = ∅.    (3.1)

The sequence {xₙ} is a Cauchy sequence. For if n > m, then we have from (3.1) that xₙ ∈ S(x_m, r_m), i.e.,

‖xₙ − x_m‖ < r_m < 1/m.    (3.2)
Lemma 3.3 Suppose C is a convex set in X and C = (−1)C. If C has an interior point, then zero is also an interior point of C.

Since C is convex, we have S(0, 2r) ⊂ 2C, which implies that S(0, r) ⊂ C. □
Proof of the Closed Graph Theorem: Let T be a closed linear operator which maps the Banach space X into the Banach space Y. Define Z = {x : ‖Tx‖ < 1}. First we prove that the closure Z̄ of Z has an interior point.

Since D(T) = X and T is linear, X = ∪_{n=1}^∞ nZ. It follows from the Baire Category Theorem that there exists a positive integer k such that kZ̄ = (kZ)⁻ has an interior point. Therefore, Z̄ has an interior point. It is easy to verify that Z̄ is convex and Z̄ = (−1)Z̄. By Lemma 3.3, S(0, r) ⊂ Z̄ for some r > 0, which implies

S(0, ar) ⊂ aZ̄ = (aZ)⁻,  a > 0.    (3.3)

Given 0 < ε < 1 and ‖x‖ < r, we have from (3.3) that x is in Z̄. Therefore, there exists an x₁ ∈ Z such that ‖x − x₁‖ < εr. Since x − x₁ ∈ S(0, εr) ⊂ εZ̄, there exists an x₂ ∈ εZ such that

‖x − x₁ − x₂‖ < ε²r.    (3.4)
Now {Tsₙ} is a Cauchy sequence since (3.5) implies that for n > m,

‖Tsₙ − Ts_m‖ ≤ ∑_{k=m+1}^n ‖Tx_k‖ ≤ ∑_{k=m}^∞ ε^k = ε^m/(1 − ε) → 0

as m → ∞. Hence, by the completeness of Y, Tsₙ → y for some y ∈ Y. So, we have sₙ → x and Tsₙ → y. Since T is closed, Tx = y. Thus

‖Tx‖ = ‖y‖ = lim_{n→∞} ‖Tsₙ‖ ≤ ∑_{k=1}^∞ ‖Tx_k‖ ≤ 1/(1 − ε).
Thus T is bounded. □
Theorem 4.1 Suppose X and Y are Banach spaces. If A(X → Y) is closed with properties Ker A = {0} and Im A = Y, then A⁻¹ is bounded on Y.
Corollary 4.2 Suppose ‖·‖₁ and ‖·‖₂ are norms on the vector space X such that (X, ‖·‖₁) and (X, ‖·‖₂) are complete. If there exists a number C such that

‖x‖₁ ≤ C‖x‖₂,  x ∈ X,

then ‖·‖₁ and ‖·‖₂ are equivalent, i.e., there exists C₁ > 0 such that C₁‖x‖₂ ≤ ‖x‖₁ ≤ C‖x‖₂.
Proof: Let I be the identity map on X. Now I is a bounded linear map from (X, ‖·‖₂) onto (X, ‖·‖₁) since

‖Ix‖₁ = ‖x‖₁ ≤ C‖x‖₂.

Hence I⁻¹ is bounded by Theorem 4.1, and

‖x‖₂ = ‖I⁻¹x‖₂ ≤ ‖I⁻¹‖ ‖x‖₁,

so that we may take C₁ = 1/‖I⁻¹‖. □
We now give two proofs of the following fundamental result. One proof relies on the closed graph theorem.

Proof: For each x ∈ X, the sequence {Aₙx} is bounded since it converges. Thus by the uniform boundedness principle, supₙ ‖Aₙ‖ = m < ∞ and
Theorem 4.5 Suppose that S is a subset of a Banach space X such that for each f ∈ X′,

sup_{x∈S} |f(x)| < ∞.

Then S is bounded.

Proof: For each x ∈ S, define the linear functional F_x on the conjugate space X′ by F_x f = f(x). Clearly, F_x is linear and, by Corollary XI.5.5, ‖F_x‖ = ‖x‖. □
Another application of the uniform boundedness principle gives the following result. If β = (β₁, β₂, ...) is a sequence of complex numbers such that the series ∑_{j=1}^∞ β_j ξ_j converges for every {ξ_j} ∈ ℓ_p, 1 ≤ p < ∞, then β is in ℓ_q, 1/p + 1/q = 1. To prove this, define f on ℓ_p by f({ξ_j}) = ∑_{j=1}^∞ β_j ξ_j. Let fₙ({ξ_j}) = ∑_{j=1}^n β_j ξ_j, n = 1, 2, .... Clearly, fₙ is in the conjugate space ℓ_p′ and fₙ(x) → f(x) for each x ∈ ℓ_p. Hence f is in ℓ_p′ by Corollary 4.4 and, by Theorem XI.4.1, β is in ℓ_q with ‖β‖_q = ‖f‖.
Theorem 4.6 An operator A ∈ L(X, Y) is one-one and has a closed range if and only if there exists an m > 0 such that

m‖x‖ ≤ ‖Ax‖,  x ∈ X.
Just as every closed subspace of a Hilbert space has a projection associated with it, so does every closed, complemented subspace of a Banach space.

Definition: Let M be a subspace of X. An operator P is called a projection from X onto M if it is a bounded linear map from X onto M and P² = P.

If x is in M, then Px = x. Indeed, there exists a z ∈ X such that x = Pz. Hence Px = P²z = Pz = x.

It is easy to see that if P is a projection then Q = I − P is also a projection and Im P = Ker Q, Ker P = Im Q. Hence Im P is closed.
0 = P(x − y) = Px − Py = Px − y.

and

(L₁ ⊕ L₂)⁻ = H,    L₁ ⊕ L₂ ≠ H,

where the left side means the closure of L₁ ⊕ L₂.
Π₀(x + y) = x,  x ∈ L₁, y ∈ L₂.

It is obvious that

Π₀² = Π₀.

It is clear that the first set of vectors is an orthonormal basis for L₁, while the second set is an orthogonal basis for L₂.
The intersection L₁ ∩ L₂ consists of zero only. Indeed, let f ∈ L₁ ∩ L₂. Then f can be represented as

f = ∑_{j=1}^∞ α_j φ_{2j−1},    f = ∑_{j=1}^∞ β_j ( φ_{2j−1} + (1/2^j) φ_{2j} ),
with the two series converging in the norm of H. From these equalities it follows that

∑_{j=1}^∞ (α_j − β_j) φ_{2j−1} = ∑_{j=1}^∞ β_j (1/2^j) φ_{2j}.

Notice that in this equality the left hand side is in L₁ while the right hand side is orthogonal to L₁. This leads to α_j = β_j and β_j = 0 for j = 1, 2, .... Thus f = 0, and hence L₁ ∩ L₂ = {0}.
Also, (L₁ ⊕ L₂)⁻ = H. For suppose this is not the case. Then there exists a u ≠ 0 in H such that u ⊥ L₁ and u ⊥ L₂. But then u ⊥ φ₁, φ₂, ..., which implies that u = 0. Now assume L₁ ⊕ L₂ = H. Take

xₙ = φ_{2n−1},  n = 1, 2, ...,

and

yₙ = φ_{2n−1} + (1/2ⁿ) φ_{2n},  n = 1, 2, ....

It is obvious that ‖xₙ‖ = 1, ‖yₙ‖ = √(1 + 1/2^{2n}) and

lim_{n→∞} ‖xₙ − yₙ‖ = lim_{n→∞} 1/2ⁿ = 0.

But this is impossible since by Theorem 5.1, there exists a c > 0 such that

‖u + v‖ ≥ c‖u‖,  u ∈ L₁, v ∈ L₂.
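The finite-dimensional shadow of this example already shows the phenomenon: with φ_k the standard basis vectors, xₙ and yₙ become arbitrarily close while both stay normalized away from zero. A numeric sketch:

```python
import math

data = []
for n in range(1, 12):
    norm_x = 1.0                          # ||x_n|| = ||phi_{2n-1}|| = 1
    norm_y = math.sqrt(1 + 4.0 ** (-n))   # ||y_n|| = sqrt(1 + 1/2^{2n})
    diff = 2.0 ** (-n)                    # ||x_n - y_n|| = 1/2^n
    data.append((norm_x, norm_y, diff))

# The ratio ||x_n - y_n|| / ||x_n|| tends to 0, so no constant c > 0 can
# satisfy ||u + v|| >= c||u|| for all u in L_1, v in L_2 (take u = x_n, v = -y_n).
ratios = [d / nx for nx, ny, d in data]
```

This is exactly the obstruction used in the text: the "angle" between L₁ and L₂ degenerates, so the algebraic direct sum cannot be all of H.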
In this section we extend the results of Section II.15 to Banach spaces X and Y. The definition of one-sided invertibility for an operator on a Banach space is the same as for an operator on a Hilbert space (cf. II.15).

Proof: The proof of the necessity is exactly the same as the proof of Theorem II.15.2. Suppose Ker A = {0} and Y = Im A ⊕ M, where Im A and M are closed subspaces of Y. Let P be the projection from Y onto Im A with Ker P = M and let A₁ be the operator A considered as a map from X onto the Banach space Im A. Then A₁ is invertible by Theorem 4.1 and A₁⁻¹P is a bounded left inverse of A since A₁⁻¹PAx = A₁⁻¹Ax = x. □
Proof: The proof of the necessity is exactly the same as the proof of Theorem II.15.4. Suppose now that Im A = Y and X = Ker A ⊕ N, where N is a closed subspace of X. Let P be the projection of X onto N with Ker P = Ker A. Let A₁ be the operator A restricted to N. Then A₁ is injective and A₁N = AX = Y. Hence A₁ is invertible and AA₁⁻¹y = y.

The proofs of the following theorems are exactly the same as the proofs of the corresponding Theorems II.15.3 and II.15.5. □
Moreover,

codim Im B = codim Im A.    (6.2)
The projection method for A is said to converge if there is an integer N such that for each y ∈ Y and n ≥ N, there exists a unique solution xₙ to equation (7.1) and, in addition, the sequence {xₙ} converges to A⁻¹y. We denote by Π(Pₙ, Qₙ) the set of invertible operators for which the projection method converges.

The projection method for A was presented in Section II.17 for X = Y a Hilbert space and Qₙ = Pₙ.
Theorem 7.1 Let A ∈ L(X, Y) be invertible. Then A ∈ Π(Pₙ, Qₙ) if and only if there exists an integer N such that for n ≥ N,

The proof of the corollary is exactly the same as the proof of Corollary II.17.2. The proof of the next theorem is analogous to the proof of the corresponding Theorem II.17.6 with PₙAPₙ replaced by QₙAPₙ.

Theorem 7.3 Suppose A ∈ Π(Pₙ, Qₙ). There exists a γ > 0 such that if B ∈ L(X, Y) and ‖B‖ < γ, then A + B ∈ Π(Pₙ, Qₙ).
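A minimal finite-section sketch of the projection method for A = I − K, with Pₙ, Qₙ the coordinate projections. The decaying kernel matrix and right-hand side below are arbitrary illustrations, chosen so that ‖K‖ < 1 and every finite section is solvable:

```python
def solve(M, b):
    # Gaussian elimination without pivoting (fine here: A is diagonally dominant)
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for i in range(n):
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def K(i, j):            # rapidly decaying "infinite matrix" (illustrative)
    return 0.3 ** (i + j + 1)

def y(i):               # right-hand side
    return 1.0 / (i + 1) ** 2

def finite_section(n):
    # Q_n A P_n x_n = Q_n y with A = I - K and coordinate projections
    A = [[(1.0 if i == j else 0.0) - K(i, j) for j in range(n)] for i in range(n)]
    return solve(A, [y(i) for i in range(n)])

x8, x12 = finite_section(8), finite_section(12)
drift = max(abs(a - b) for a, b in zip(x8, x12))   # early coordinates stabilize
```

As the theory predicts for this well-behaved A, the truncated solutions stabilize quickly as n grows.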
In this section we extend the notion of the spectrum of an operator (which was introduced in Section II.20 for Hilbert space operators) to a Banach space setting. Throughout this section X is a Banach space.

Definition: Given A ∈ L(X), a point λ ∈ C is called a regular point of A if λI − A is invertible. The set ρ(A) of regular points is called the resolvent set of A. The spectrum σ(A) of A is the complement of ρ(A).
Theorem 8.1 The resolvent set of any A ∈ L(X) is an open set in C containing {λ : |λ| > ‖A‖}. Hence σ(A) is a closed bounded set contained in {λ : |λ| ≤ ‖A‖}. Furthermore, for λ ∈ ∂σ(A), the boundary of σ(A), the operator λI − A is neither left nor right invertible.

Proof: The proof is exactly the same as the proof of Theorem II.20.1. □
where a ∈ X. For λ ≠ a(t), t ∈ [c, d], it is clear that λ ∈ ρ(A) and

((λI − A)⁻¹g)(t) = g(t)/(λ − a(t)),

so that

σ(A) = {a(t) : c ≤ t ≤ d}.
3. For X = ℓ_p, 1 ≤ p < ∞, let A ∈ L(X) be the backward shift operator. Clearly, ‖A‖ = 1. Thus if |λ| > 1, then λ ∈ ρ(A) by Theorem 8.1. If |λ| < 1, then x = (1, λ, λ², ...) is in Ker(λI − A). Therefore, λ ∈ σ(A). Since σ(A) is a closed set, it follows that σ(A) = {λ : |λ| ≤ 1}.

If |λ| > 1, then

(λI − A)⁻¹ = (1/λ)( I − A/λ )⁻¹ = ∑_{k=0}^∞ A^k / λ^{k+1}.

Thus given y = (β₁, β₂, ...) ∈ X, (λI − A)⁻¹y = (α₁, α₂, ...), where

α_j = ∑_{k=0}^∞ β_{j+k} / λ^{k+1}.
The spectrum of the backward shift operator on ℓ_p, 1 ≤ p < ∞, was shown above to be independent of p. The following example shows that the spectrum can vary as the space varies.
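The resolvent formula for the backward shift can be verified coordinatewise: for a finitely supported y the sums α_j = ∑_k β_{j+k}/λ^{k+1} are finite, so (λI − A)x = y can be checked exactly. A sketch (λ = 2 and the choice of y are arbitrary, with |λ| > 1):

```python
# Backward shift A(x_1, x_2, ...) = (x_2, x_3, ...); for |lambda| > 1,
# (lambda I - A)^{-1} y has coordinates alpha_j = sum_k beta_{j+k} / lambda^{k+1}.
lam = 2.0
N = 60
beta = [1.0, -2.0, 0.5] + [0.0] * (N - 3)          # finitely supported y

alpha = [sum(beta[j + k] / lam ** (k + 1) for k in range(N - j))
         for j in range(N)]

# Check (lambda I - A) alpha = beta coordinatewise:
residual = max(abs(lam * alpha[j] - alpha[j + 1] - beta[j]) for j in range(N - 1))
```

The residual vanishes (up to rounding), confirming the coordinate formula for the resolvent.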
4. Given a number q > 0, let ℓ₁(q) be the set of those sequences x = (α₁, α₂, ...) of complex numbers such that

‖x‖_q = ∑_{n=1}^∞ |αₙ| q^{−n} < ∞.

With the usual definitions of addition and scalar multiplication, ℓ₁(q), together with the norm ‖·‖_q, is a Banach space.

Let A be the backward shift operator on ℓ₁(q). We shall show that σ(A) = {λ : |λ| ≤ q}. If |λ| < q, then x = (1, λ, λ², ...) is in Ker(λI − A). Thus
In particular,

αₙ₊₁ = λⁿ α₁ − ∑_{k=0}^{n−1} λᵏ β_{n−k}    (8.1)

and

α₁ = λ^{−n} αₙ₊₁ + ∑_{k=0}^{n−1} λ^{k−n} β_{n−k}.    (8.2)

Since x is to be in ℓ₁(q) and |λ| > q,

and αₙ, n > 1, as defined in (8.1). This argument also shows that for each y ∈ ℓ₁(q) there exists a unique x ∈ ℓ₁(q) such that (λI − A)x = y, provided |λ| > q. Formulas (8.1) and (8.5) give (λI − A)⁻¹y. Thus, since σ(A) is closed and contains {λ : |λ| < q},

σ(A) = {λ : |λ| ≤ q}.
Theorem 8.2 Suppose A ∈ L(X) and ‖A‖ < 1. Then ∑_{k=0}^∞ A^k converges in norm, I − A is invertible, and (I − A)⁻¹ = ∑_{k=0}^∞ A^k.

Proof: Let Sₙ = ∑_{k=0}^n A^k and S = lim_{n→∞} Sₙ. Then (I − A)S = lim_{n→∞} (I − A)Sₙ = lim_{n→∞} Sₙ(I − A) = S(I − A). Since (I − A)Sₙ = I − A^{n+1} → I,

(I − A)S = S(I − A) = I.

Thus

∑_{k=0}^∞ A^k = S = (I − A)⁻¹.
Corollary 8.3 Suppose A ∈ L(X) and ‖A^p‖ < 1 for some positive integer p. Then ∑_{k=0}^∞ A^k converges, I − A is invertible and (I − A)⁻¹ = ∑_{k=0}^∞ A^k.

Proof: Write each j ≥ 0 as j = qp + r, 0 ≤ r < p. Then ‖A^j‖ ≤ ‖A^p‖^q max_{0≤r<p} ‖A^r‖, so ∑_{j=0}^∞ ‖A^j‖ < ∞. Hence

‖ ∑_{j=m}^n A^j ‖ ≤ ∑_{j=m}^n ‖A^j‖ → 0  as m, n → ∞.
Suppose k(t, s) is continuous on [0, 1] × [0, 1]. We shall show that the equation

λf(t) − ∫₀ᵗ k(t, s)f(s) ds = g(t)

has a unique solution in C([0, 1]) for every g ∈ C([0, 1]) if λ ≠ 0. In other words, if V is the Volterra integral operator on C([0, 1]) defined by

(Vf)(t) = ∫₀ᵗ k(t, s)f(s) ds,
then λI − V is invertible for λ ≠ 0. Let M = max_{0≤s≤t≤1} |k(t, s)|. Then

|(Vf)(t)| ≤ ∫₀ᵗ |k(t, s)||f(s)| ds ≤ Mt‖f‖.

By induction,

|(V^k f)(t)| ≤ (M^k t^k / k!)‖f‖.

Hence

‖V^k‖ ≤ M^k / k! → 0,

and therefore, by Corollary 8.3, for λ ≠ 0,

(λI − V)⁻¹ = (1/λ)( I − V/λ )⁻¹ = ∑_{k=0}^∞ V^k / λ^{k+1}.
To find a formula for V^k, let k₁(t, s) = k(t, s) if 0 ≤ s ≤ t and zero otherwise. Then

(Vf)(t) = ∫₀¹ k₁(t, s)f(s) ds.
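The Neumann series for the Volterra equation can be summed numerically. A sketch for the simplest case k(t, s) = 1, λ = 1, g ≡ 1, chosen because then V^k g = t^k/k! and the series sums to the known solution f(t) = e^t:

```python
import math

# Solve f - Vf = g with (Vf)(t) = int_0^t f(s) ds and g = 1,
# via the Neumann series f = sum_k V^k g.  Here V^k g = t^k / k!,
# so the exact solution is f(t) = e^t.
n = 1000
h = 1.0 / n
ts = [(i + 0.5) * h for i in range(n)]

f = [1.0] * n          # running sum of the series, starts with g = 1
term = [1.0] * n       # current term V^k g, starts with k = 0
for _ in range(30):
    new = []
    acc = 0.0
    for v in term:     # (V term)(t_i) ~ sum_{j<i} h*term_j + (h/2)*term_i
        new.append(acc + 0.5 * h * v)
        acc += h * v
    term = new
    f = [a + b for a, b in zip(f, term)]

err = max(abs(fv - math.exp(t)) for fv, t in zip(f, ts))
```

Thirty terms of the series reproduce e^t up to quadrature error, reflecting the factorial decay ‖V^k‖ ≤ 1/k! established above.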
Definition: An operator valued function A(λ) which maps a subset of C into L(X) is analytic at λ₀ if

A(λ) = ∑_{k=0}^∞ (λ − λ₀)^k A_k,

where each A_k is in L(X) and the series converges for each λ in some neighborhood of λ₀.
Theorem 10.1 The function A(λ) = (λI − A)⁻¹ is analytic at each point in the open set ρ(A).

Since ρ(A) is open, we may choose ε > 0 so that |λ − λ₀| < ε implies λ ∈ ρ(A) and ‖(λ − λ₀)A(λ₀)‖ < 1. In this case, it follows from (10.1) that

d/dλ (λI − A)⁻¹ = −(λI − A)⁻².
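The differentiation formula can be checked numerically on a matrix, comparing a central difference of the resolvent with −(λI − A)⁻². The 2×2 nilpotent A and the point λ = 2 below are arbitrary test choices:

```python
# Check d/d(lambda) (lambda I - A)^{-1} = -(lambda I - A)^{-2} numerically.
A = [[0.0, 1.0], [0.0, 0.0]]

def res(lam):
    # inverse of lam*I - A via the 2x2 cofactor formula
    M = [[lam - A[0][0], -A[0][1]], [-A[1][0], lam - A[1][1]]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

lam, h = 2.0, 1e-5
Rp, Rm, R = res(lam + h), res(lam - h), res(lam)
numeric = [[(Rp[i][j] - Rm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
exact = [[-v for v in row] for row in matmul(R, R)]
err = max(abs(numeric[i][j] - exact[i][j]) for i in range(2) for j in range(2))
```

The finite-difference derivative matches −(λI − A)⁻² entrywise, as the theorem asserts.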
By Corollary XI.5.4, there exist x′ ∈ X′ and x ∈ X such that x′x ≠ 0. From (10.3) it follows that
Exercises XII
1. Let B be a Banach space with a given norm ‖·‖. Check that the following norms ‖·‖₁ are equivalent to ‖·‖.
(a) B = ℓ₂; ‖ξ‖₁² = ∑_{j=1}^∞ (1 + 1/j)|ξ_j|².
(b) B = L₂[0, 1]; ‖f‖₁ = ‖(2I + V)f‖, where (Vf)(t) = ∫₀ᵗ f(s) ds.
(c) B = ℓ_p(Z), 1 ≤ p ≤ ∞; U : ℓ_p(Z) → ℓ_p(Z) is given by
15. Let A(λ) and B(λ) be analytic operator valued functions defined on a set S ⊂ C. Find the derivatives of A(λ)⁻¹, A(λ)B(λ) and A(λ)⁻¹B(λ) at the points where the derivatives exist.
16. Suppose A ∈ L(X) is injective. Prove that the range of A is closed if and only if there exists a γ > 0 such that ‖Ax‖ ≥ γ‖x‖ for all x ∈ X.
‖K − Kₙ‖ ≤ sup_{k>n} |β_k| → 0  as n → ∞.

Hence K is compact.
3. Suppose k is continuous on [a, b] × [a, b]. Let K : C([a, b]) → C([a, b]) be the linear integral operator defined by

(Kf)(t) = ∫ₐᵇ k(t, s)f(s) ds.

Choose polynomials Pₙ(t, s) converging uniformly on [a, b] × [a, b] to k(t, s), and define Kₙ : C([a, b]) → C([a, b]) by

Kₙf = ∫ₐᵇ Pₙ(t, s)f(s) ds.

It is easy to see that Kₙ is a bounded linear operator of finite rank on C([a, b]). Hence Kₙ is compact. Replacing k by k − Pₙ in (1.1) gives ‖K − Kₙ‖ → 0, and hence K is compact.
4. For 1 < p, q < ∞, let k(t, s) be in L_r([0, 1] × [0, 1]), where r = max(p′, q), 1/p + 1/p′ = 1. We shall now show that the integral operator K defined by

(Kf)(t) = ∫₀¹ k(t, s)f(s) ds
is a compact operator from L_p([0, 1]) into L_q([0, 1]). By Hölder's inequality,

‖Kf‖_q^q = ∫₀¹ | ∫₀¹ k(t, s)f(s) ds |^q dt ≤ ‖f‖_{r′}^q ∫₀¹ ( ∫₀¹ |k(t, s)|^r ds )^{q/r} dt.    (1.2)

Let

g(t) = ∫₀¹ |k(t, s)|^r ds.    (1.3)

Since q/r ≤ 1,

∫₀¹ |g(t)|^{q/r} dt ≤ ( ∫₀¹ |g(t)| dt )^{q/r}.    (1.4)

From (1.2), (1.3), (1.4), and the observation ‖f‖_{r′} ≤ ‖f‖_p, we obtain

‖Kf‖_q ≤ ‖f‖_p ( ∫₀¹ ∫₀¹ |k(t, s)|^r ds dt )^{1/r}.    (1.5)

Thus K is a bounded linear map from L_p([0, 1]) into L_q([0, 1]).
The set of polynomials in s and t is dense in L_r([0, 1] × [0, 1]). Thus there exists a sequence {Pₙ(t, s)} of polynomials which converges in L_r([0, 1] × [0, 1]) to k(t, s). Define Kₙ : L_p([0, 1]) → L_q([0, 1]) by

(Kₙf)(t) = ∫₀¹ Pₙ(t, s)f(s) ds.

If we replace k in (1.5), first by Pₙ and then by k − Pₙ, we see that Kₙ is a bounded linear operator of finite rank and ‖K − Kₙ‖ → 0. Hence K is compact.
It follows from the example that the Volterra operator V defined in XII.9 is a compact linear map from L_p([0, 1]) into L_q([0, 1]), 1 < p, q < ∞.

If the restriction 1 < p, q < ∞ is removed, then the result in Example 4 is false. In [G], p. 90, a bounded Lebesgue measurable function defined on [0, 1] × [0, 1] is constructed such that the corresponding integral operator is not compact on L₁([0, 1]).
5. The Lebesgue integral operator

(Kf)(t) = ∫ₐᵇ ( B(t, s)/|t − s|^α ) f(s) ds

is compact on C([a, b]), where B(t, s) is continuous on [a, b] × [a, b] and α < 1. Also,

‖K‖ = max_{t∈[a,b]} ∫ₐᵇ |B(t, s)|/|t − s|^α ds.    (1.6)
First we show that Kf is in C([a, b]) for each f ∈ C[a, b]. Let M = max_{s,t∈[a,b]} |B(t, s)| and ‖f‖∞ = max_{t∈[a,b]} |f(t)|. Then

|(Kf)(t)| ≤ M‖f‖∞ ∫ₐᵇ |t − s|^{−α} ds ≤ (2M/(1 − α)) (b − a)^{1−α} ‖f‖∞.
Next we show that K is compact on C([a, b]). Let k(t, s) = B(t, s)/|t − s|^α and define for n = 1, 2, ...

kₙ(t, s) = { 0,                         |t − s| ≤ 1/(2n),
             (2n|t − s| − 1)k(t, s),    1/(2n) ≤ |t − s| ≤ 1/n,    (1.7)
             k(t, s),                   |t − s| ≥ 1/n. }
(Kₙf)(t) = ∫ₐᵇ kₙ(t, s)f(s) ds

and

|((K − Kₙ)f)(t)| ≤ ∫ₐᵇ |k(t, s) − kₙ(t, s)||f(s)| ds ≤ M‖f‖∞ ∫_{|t−s|≤1/n} |t − s|^{−α} ds ≤ M‖f‖∞ (2/(1 − α)) (2/n)^{1−α}.    (1.8)

Thus K is compact on C([a, b]). Finally, we prove equality (1.6). The function

F(t) = ∫ₐᵇ |B(t, s)|/|t − s|^α ds

is in C([a, b]). This follows from the above result with |B(t, s)| in place of B(t, s) and f identically 1 on [a, b]. Thus
max_t ∫ₐᵇ |k(t, s)| ds = ∫ₐᵇ |k(t₀, s)| ds    (1.9)

for some t₀ ∈ [a, b]. Since |kₙ(t, s)| ≤ |k(t, s)| for each n and s ≠ t, we have from (1.9) and Theorem XII.1.2 that

‖K‖ = limₙ ‖Kₙ‖ = limₙ max_t ∫ₐᵇ |kₙ(t, s)| ds ≤ max_t ∫ₐᵇ |k(t, s)| ds.
The following theorem is very useful when dealing with operators of finite rank.

Theorem 2.1 If K is an operator in L(X) of finite rank, then there exist subspaces N and Z of X such that N is finite dimensional, Z is closed,

X = N ⊕ Z,    KN ⊂ N  and  KZ = {0}.
Proof: Let {Kv₁, ..., Kvₙ} be a basis for Im K. By Corollary XI.5.6, there exist f₁, ..., fₙ in X′ such that f_i(Kv_j) = δ_{ij}, 1 ≤ i, j ≤ n, and for each x ∈ X,

Kx = ∑_{i=1}^n f_i(Kx) Kv_i.    (2.1)

Indeed, from (2.1) in the proof of Theorem 2.1, we see that we can choose g_i(x) = f_i(Kx) and u_i = Kv_i.
It was shown in Theorem X.4.2 that every compact linear operator which maps a Hilbert space into a Hilbert space is the limit, in norm, of a sequence of operators of finite rank. While this result need not hold for Banach spaces, we do have the following results.
Theorem 3.1 Suppose \{T_n\} is a sequence of operators in \mathcal{L}(Y) such that T_n y \to T y for all y \in Y. If K is a compact operator in \mathcal{L}(X, Y), then

\|T_n K - TK\| \to 0.

The proof is exactly the same as the proof of Lemma II.17.8.

Corollary 3.2 Suppose there exists a sequence \{P_n\} of operators of finite rank in \mathcal{L}(Y) with the property P_n y \to y for every y \in Y. Then every compact operator K \in \mathcal{L}(X, Y) is the limit, in norm, of operators of finite rank. In fact,

\|P_n K - K\| \to 0.

is in Y'. Hence, defining P_n y = \sum_{j=1}^n f_j(y) y_j, it is clear that \{P_n\} satisfies the hypotheses of Corollary 3.2. Thus we have the following result.

Corollary 3.3 If the Banach space Y has a Schauder basis, then every compact operator in \mathcal{L}(X, Y) is the limit, in norm, of a sequence of operators of finite rank.
is a compact operator in \mathcal{L}(\ell_1) if and only if for each \varepsilon > 0, there exists an integer N such that

\sup_k \sum_{j > N} |a_{jk}| < \varepsilon. \quad (3.1)

Indeed, suppose (3.1) holds. For each positive integer n define P_n \in \mathcal{L}(\ell_1) by P_n(a_1, a_2, \ldots) = (a_1, \ldots, a_n, 0, 0, \ldots). It follows from Theorem XII.1.1 and
Conversely, assume A is compact. Then by Corollary 3.2, there exists, for each \varepsilon > 0, an integer N such that
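Criterion (3.1) can be seen at work on a concrete matrix. In the sketch below (my own example, with the assumed entries a_{jk} = 2^{-j}/k), the quantity \sup_k \sum_{j>N} |a_{jk}| is exactly the \ell_1-operator norm of A - P_N A, and it decays geometrically.

```python
import numpy as np

# Sketch (my own illustration, not from the book): on l^1 the operator norm of
# a matrix A = (a_jk) is the supremum of its column sums, so condition (3.1)
# says exactly that the truncations P_N A converge to A in norm.
# Assumed example matrix: a_jk = 2**(-j) / k  (row j, column k, 1-based).

J, K = 60, 200                               # finite truncation of the matrix
j = np.arange(1, J + 1)[:, None]
k = np.arange(1, K + 1)[None, :]
A = 2.0 ** (-j) / k

def l1_norm(M):
    return np.abs(M).sum(axis=0).max()       # sup over columns of column sums

for N in (2, 5, 10, 20):
    PNA = A.copy()
    PNA[N:, :] = 0.0                         # P_N keeps the first N coordinates
    tail = l1_norm(A - PNA)                  # = sup_k sum_{j>N} |a_jk|
    print(f"N={N:2d}  sup_k sum_(j>N) |a_jk| = {tail:.6f}")
    assert tail <= 2.0 ** (-N)               # geometric tail: sum_{j>N} 2^-j = 2^-N
```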
Theorem 4.1 Given K \in \mathcal{L}(X), suppose there exists a finite rank operator K_0 \in \mathcal{L}(X) such that \|K - K_0\| < 1. Then I - K has a closed range and

Proof: First, let us assume that K is of finite rank. By Theorem 2.1, there exist closed subspaces N and Z of X such that N is finite dimensional, X = N \oplus Z, KN \subset N and KZ = \{0\}. Let (I - K)_N be the restriction of I - K to N. Then
Also, it follows from Theorem XI.2.5 and (4.3) that Im(I - K) is closed.
More generally, assume \|K - K_0\| < 1, where K_0 is of finite rank. Now B = I - (K - K_0) is invertible and

I - K = B - K_0 = (I - K_0 B^{-1}) B.

Hence
Since K_0 B^{-1} is of finite rank, it follows from what we have shown, together with (4.4) and (4.5), that Im(I - K) is closed and
\Box

Corollary 4.2 If K \in \mathcal{L}(X) is the limit, in norm, of a sequence of operators of finite rank, then I - K has a closed range and
Definition: Given A \in \mathcal{L}(X, Y), the conjugate A' : Y' \to X' of A is the operator defined by A' f = f \circ A, f \in Y'.
It is clear that A' is linear. Furthermore, \|A'\| = \|A\|. The proof of this assertion is as follows.

\|A' f\| = \|f \circ A\| \le \|f\|\, \|A\|.

Hence \|A'\| \le \|A\|. On the other hand, by Corollary XI.5.5,

\|Ax\| = \max_{\|f\|=1} |f(Ax)| = \max_{\|f\|=1} |(A' f)x| \le \|A'\|\, \|x\|.

Thus \|A\| \le \|A'\|.
It is easy to verify that if A and B are in \mathcal{L}(X, Y), then
2. Let K be the integral operator in Section 1, Example 4. If we identify L_p([a, b])' with L_{p'}([a, b]), \frac{1}{p} + \frac{1}{p'} = 1, as in Theorem XI.4.2, then for F \in L_{p'} and g \in L_p([a, b]),

(K' F)g = F(Kg) = \int_a^b F(t) \Big\{ \int_a^b k(t,s) g(s)\, ds \Big\}\, dt = \int_a^b g(s) \Big\{ \int_a^b k(t,s) F(t)\, dt \Big\}\, ds.

Hence

(K' F)(s) = \int_a^b k(t,s) F(t)\, dt. \quad (5.1)
and

{}^\perp N = \{x \in X : g(x) = 0 \text{ for all } g \in N\}.

Proof: We shall only prove (ii). The proofs of the remaining relationships are similar. These we leave to the reader.
Suppose f \in Ker A'. For any x \in X,

Theorem 5.2 Given A \in \mathcal{L}(X, Y), suppose Im A is closed and codim Im A < \infty. Then

dim Ker A' = codim Im A.

Hence

f_i(y_j) = \delta_{ij} \quad and \quad f_i \in (Im A)^\perp = Ker A', \quad 1 \le i \le n. \quad (5.3)

We need only show that \{f_1, \ldots, f_n\} is a basis for Ker A'. It follows readily from (5.3) that the f_i are linearly independent. Given f \in Ker A' and y \in Y, there exist u \in Im A and v \in N such that y = u + v. Since f is in (Im A)^\perp and \{y_1, \ldots, y_n\} is a basis for N, (5.3) implies that
Theorem 5.3 Given K \in \mathcal{L}(X), suppose there exists a finite rank operator K_0 \in \mathcal{L}(X) such that \|K - K_0\| < 1. Then

\infty > dim Ker(I - K) = codim Im(I - K) = dim Ker(I - K').

We shall now prove the main result in this section, namely, that the conjugate of a compact operator is compact. The proof relies on the following theorem.
such that v_1, \ldots, v_{k-1} are linearly independent and v_k \in sp\{v_1, \ldots, v_{k-1}\}, say v_k = \sum_{j=1}^{k-1} \alpha_j v_j. Since

0 = \lambda_k v_k - T v_k = \sum_{j=1}^{k-1} \alpha_j (\lambda_k - \lambda_j) v_j, \quad (6.1)
13.7 Applications
In Section 1 it was shown that the operators appearing below are limits in norm of
a sequence of operators of finite rank. Hence the conclusions below follow from
Corollary 4.2 and Theorem 6.1.
1. Suppose k is continuous on [a, b] \times [a, b]. The equation

\lambda f(t) - \int_a^b k(t,s) f(s)\, ds = g(t), \quad \lambda \ne 0, \quad (7.1)

has a unique solution in C([a, b]) for each g \in C([a, b]) if and only if the homogeneous equation

\lambda f(t) - \int_a^b k(t,s) f(s)\, ds = 0, \quad \lambda \ne 0, \quad (7.2)

has only the solution f = 0.
Exercises XIII
13. Let \varphi_1, \varphi_2, \ldots be an orthonormal basis for a Hilbert space \mathcal{H}. Define T \in \mathcal{L}(\mathcal{H}) by T x = \langle x, \varphi_3 \rangle \varphi_2 + \langle x, \varphi_5 \rangle \varphi_4 + \langle x, \varphi_7 \rangle \varphi_6 + \cdots. Show that T is not compact but T^2 = 0.
14. Suppose A \in \mathcal{L}(X) and A^p is compact for some positive integer p. Prove that the Fredholm Theorems XIII.4.1, 5.3 and 6.1 hold for A. Hint: Choose n roots of unity 1, \zeta_1, \ldots, \zeta_{n-1} for some n \ge p so that \zeta_i \in \rho(A), 1 \le i \le n - 1 (why do such \zeta_i exist?). Then I - A^n = (I - A)(\zeta_1 - A) \cdots (\zeta_{n-1} - A).
Chapter XIV
Poincaré Operators: Determinant and Trace

\sum_{j,k=1}^\infty |p_{jk}| < \infty

we call a Poincaré operator. H. Poincaré [P] introduced and studied this class in connection with problems in celestial mechanics. One of the important advantages of this class is that for these operators P it is possible to introduce a trace, and for operators I - P a determinant, that are natural generalizations of the trace and determinant of finite matrices. To keep the presentation of the material in this chapter simpler we restrict ourselves to the Hilbert space \ell_2. However, it can be generalized to many Banach spaces, in particular to \ell_p (1 \le p < \infty).
Denote by \mathcal{P} the set of all Poincaré operators on \ell_2. Recall that these operators P are defined in the standard basis of \ell_2 by the matrix

P = \sum_{j,k=1}^\infty p_{jk} U_{jk},

where
and e_1, e_2, \ldots is the standard basis in \ell_2. Taking into account that

\|P\| \le \sum_{j,k=1}^\infty |p_{jk}|\, \|U_{jk}\|

and \|U_{jk}\| = 1, we obtain that P is a bounded operator and \|P\| \le \|P\|_{\mathcal{P}}, where

\|P\|_{\mathcal{P}} = \sum_{j,k=1}^\infty |p_{jk}|.
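A quick numerical sanity check (my own illustration, not from the book): since \|P\| \le \|P\|_F \le \|P\|_{\mathcal{P}}, the \ell_2-operator norm of any finite matrix is dominated by the sum of the absolute values of its entries.

```python
import numpy as np

# Check ||P|| <= ||P||_P = sum |p_jk| on random finite matrices (illustration).
rng = np.random.default_rng(0)
for _ in range(5):
    P = rng.normal(size=(8, 8)) * 2.0 ** (-rng.integers(0, 6, size=(8, 8)))
    op_norm = np.linalg.norm(P, 2)           # largest singular value
    poincare_norm = np.abs(P).sum()          # entry-sum norm ||P||_P
    assert op_norm <= poincare_norm + 1e-12
```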
Denote by P^{(n)} \in \mathcal{P} the operator with entries p^{(n)}_{jk} = p_{jk} for j, k = 1, 2, \ldots, n and p^{(n)}_{jk} = 0 for the rest of the indices j and k. We call this operator the n-th section of P. Note that the following relations hold
where

\Pi(P_n) = \prod_{j=1}^n \Big( 1 + \sum_{k=1}^n |p_{jk}| \Big).

Indeed, let V = (v_{jk})_{j,k=1}^n be a matrix with entries v_{jk} \in \mathbb{C}. According to the definition of the determinant,
where the coefficient \gamma_{j_1, j_2, \ldots, j_n} is equal to one of the three values +1, -1 and 0.
14.1 Determinant and Trace 319
It is clear that

|det V| \le \Pi(V),

where

\Pi(V) = \prod_{j=1}^n (1 + |v_{j1}| + |v_{j2}| + \cdots + |v_{jn}|).

Let some set L of entries v_{jk} in the matrix V be replaced by zeros and the rest be left unchanged. Denote the new matrix by W. It is easy to see that
From this inequality follows inequality (1.1). Indeed, introduce L_n as the set of entries v_{jk} for which at least one of the indices j or k is greater than n. Replace v_{jk} by 0 for (j, k) \in L_n; then the inequality (1.1) follows from (1.2).
We are now in a position to define the determinant det_{\mathcal{P}}(I - P) for any P \in \mathcal{P}. Let P_n (n = 1, 2, \ldots) be the n-th finite section of P:
From the relation (1.1) it follows that the sequence of complex numbers det(I - P_n) converges; its limit is denoted by det_{\mathcal{P}}(I - P), and

|det_{\mathcal{P}}(I - P)| \le \prod_{j=1}^\infty (1 + |p_{1j}| + |p_{2j}| + \cdots).
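For a concrete Poincaré operator the finite-section determinants and the product bound can be computed directly. The sketch below uses an assumed rank-one example, p_{jk} = 2^{-j-k}, which is not from the book.

```python
import numpy as np

# Sketch (assumed example): for the Poincare operator with entries
# p_jk = 2**(-j-k), the section determinants det(I - P_n) converge, and each
# obeys the bound (1.1) with row sums  sum_k 2^-(j+k) = 2^-j.

def section(n):
    j = np.arange(1, n + 1)
    return 2.0 ** (-(j[:, None] + j[None, :]))

dets = [np.linalg.det(np.eye(n) - section(n)) for n in range(1, 12)]
bound = np.prod([1 + 2.0 ** (-jj) for jj in range(1, 60)])  # infinite product, truncated
for d in dets:
    assert abs(d) <= bound
# the sections stabilize quickly (the limit is 1 - sum_j 4^-j = 2/3):
assert abs(dets[-1] - dets[-2]) < 1e-6
```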
tr_{\mathcal{P}} P = \sum_{j=1}^\infty p_{jj},

V = \begin{pmatrix} 0 & u_1 & u_2 & \cdots & u_{n-1} & \cdots \\ -v_1 & 0 & 0 & \cdots & & \\ 0 & -v_2 & 0 & \cdots & & \\ 0 & 0 & -v_3 & 0 & \cdots & \\ \vdots & & & \ddots & & \end{pmatrix}

Hence

det(I - V_n) = (-1)^{n+1} u_n v_1 v_2 \cdots v_n + (-1)^n u_{n-1} v_1 v_2 \cdots v_{n-1} + \cdots + u_1 v_1 + 1.

From here it follows that

det_{\mathcal{P}}(I - V) = 1 + \sum_{n=1}^\infty (-1)^{n+1} u_n v_1 v_2 \cdots v_n

and

tr_{\mathcal{P}} V = 0.

Note also that the operator V is, generally speaking, not of finite rank.
Let \mathcal{F}(\ell_2) denote the space of bounded operators of finite rank on \ell_2. Due to Theorem XIII.2.1, for any operator F \in \mathcal{F}(\ell_2) there exist closed subspaces M_F and N_F of \ell_2 with the following properties:
a) M_F \oplus N_F = \ell_2,
b) F M_F \subseteq M_F, \quad N_F \subseteq Ker F,
c) dim M_F < \infty.

I - F = \begin{pmatrix} I_1 - F_1 & 0 \\ 0 & I_2 \end{pmatrix}
Note that these definitions do not depend on the particular choice of the subspace M_F. To see this we recall from linear algebra that

tr F_1 = \sum_{j=1}^m \lambda_j(F_1) \quad and \quad det(I_1 - F_1) = \prod_{j=1}^m (1 - \lambda_j(F_1)),

where m = dim M_F and \lambda_1(F_1), \lambda_2(F_1), \ldots, \lambda_m(F_1) are the eigenvalues of F_1 counted according to their algebraic multiplicities. In the two identities the zero eigenvalues do not give any contribution. It is also easy to see that the operators F and F_1 have the same nonzero eigenvalues, counting multiplicities. Thus

tr F = \sum_j \lambda_j(F) \quad and \quad det(I - F) = \prod_j (1 - \lambda_j(F)). \quad (2.1)
Now we shall prove the following four properties of the trace and determinant. For any two operators F and K \in \mathcal{F}(\ell_2) the following equalities hold
We set
and

N = M^\perp.

Then

F(M) \subset M, \quad K(M) \subset M

and

N \subset Ker F, \quad N \subset Ker K.

Now let
The subspace M has a finite dimension and hence all four properties for F_M and K_M follow from linear algebra. This statement is proved in the same way as the first statement in this section. The next theorem gives a useful representation of the trace and determinant.

Fx = \sum_{k=1}^m \langle x, \varphi_k \rangle f_k, \quad x \in H.
14.2 Finite Rank Operators, Determinants and Traces 323
Then

tr F = \sum_{k=1}^m \langle f_k, \varphi_k \rangle \quad (2.2)

and

det(I - F) = det\big( (\delta_{jk} - \langle f_k, \varphi_j \rangle)_{j,k=1}^m \big). \quad (2.3)

Proof: Since tr F is a linear functional, it is enough to prove the first equality for the case m = 1. In this case the operator is

Fx = \langle x, \varphi_1 \rangle f_1.

Let Fx = \lambda x with \lambda \ne 0. Then

x = \frac{y}{\lambda} f_1, \quad where \quad y = \langle x, \varphi_1 \rangle,

and

y = \frac{y}{\lambda} \langle f_1, \varphi_1 \rangle.

Hence the only possible nonzero eigenvalue is \lambda = \langle f_1, \varphi_1 \rangle, so tr F = \langle f_1, \varphi_1 \rangle.
Now let us prove the second equality. Consider two operators X \in \mathcal{L}(\ell_2, \mathbb{C}^m) and Y \in \mathcal{L}(\mathbb{C}^m, \ell_2) defined by the equalities:
Note that
Hence XY = (\langle f_k, \varphi_j \rangle)_{j,k=1}^m.
and
Theorem 2.2 Let the operator F \in \mathcal{F}(\ell_2) be defined in the standard basis e_1, e_2, \ldots of \ell_2 by the following matrix
Then

tr F = \sum_{k=1}^\infty a_{kk}, \quad (2.4)

where the series converges absolutely, and

Fx = \sum_{r=1}^m \langle x, \varphi_r \rangle f_r.

Hence

a_{jk} = \langle F e_k, e_j \rangle = \sum_{r=1}^m \langle e_k, \varphi_r \rangle \langle f_r, e_j \rangle, \quad k, j = 1, 2, \ldots. \quad (2.6)

Taking into consideration that

\sum_{k=1}^\infty |\langle e_k, \varphi_r \rangle|\, |\langle f_r, e_k \rangle| \le \|\varphi_r\|\, \|f_r\| < \infty, \quad r = 1, \ldots, m,

we obtain

tr F = \sum_{k=1}^\infty a_{kk}.
The first part of the theorem is proved. Now we proceed with the second part .
Denote

b^{(\nu)}_{rs} = \sum_{k=1}^\nu f_{rk} \overline{\varphi_{sk}},

where

f_{rk} = \langle f_r, e_k \rangle \quad and \quad \varphi_{sk} = \langle \varphi_s, e_k \rangle.

When \nu tends to infinity, b^{(\nu)}_{rs} tends to \langle f_r, \varphi_s \rangle:
and

Y_{\varphi,\nu} = (\varphi_{sj}).

In the proof of Theorem 2.1 it was proved that for the two finite matrices X_{f,\nu} and Y_{\varphi,\nu},

det(I - X_{f,\nu} Y^*_{\varphi,\nu}) = det(I - Y^*_{\varphi,\nu} X_{f,\nu}).

The matrix Y^*_{\varphi,\nu} X_{f,\nu} has the size \nu \times \nu. Denote
Note that so far we have introduced for Poincaré operators F \in \mathcal{F}(\ell_2) two definitions of determinants for I - F and two definitions of trace for F. Namely, in the first section we defined

det_{\mathcal{P}}(I - F) = \lim_{n\to\infty} det(I - F_n)

and

tr_{\mathcal{P}} F = \lim_{n\to\infty} tr F_n,

where F_n is the n-th section of F. In this section the determinant and the trace were defined via the formulas

det(I - F) = \prod_{j=1}^m (1 - \lambda_j(F)) \quad and \quad tr F = \sum_{j=1}^m \lambda_j(F).

As a corollary of Theorem 2.2 we obtain that these two definitions coincide:

det_{\mathcal{P}}(I - F) = \prod_{j=1}^m (1 - \lambda_j(F)), \quad tr_{\mathcal{P}} F = \sum_{j=1}^m \lambda_j(F).

In this connection we will drop, in subsequent sections, the subscript \mathcal{P} in the Poincaré determinant.
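The coincidence of the two definitions is easy to verify numerically for a concrete finite rank operator. The sketch below uses an assumed rank-one example, not from the book.

```python
import numpy as np

# Numerical check (illustration): for a finite rank operator represented by a
# large section, the matrix determinant and trace agree with the eigenvalue
# formulas det(I - F) = prod(1 - lambda_j) and tr F = sum(lambda_j).
# Assumed example: F = column * row, rank one with decaying entries.

n = 40
col = 2.0 ** -np.arange(1, n + 1)
row = 3.0 ** -np.arange(1, n + 1)
F = np.outer(col, row)                       # rank one, sum |f_jk| < infinity

eigs = np.linalg.eigvals(F)
det_by_sections = np.linalg.det(np.eye(n) - F)
det_by_eigs = np.prod(1 - eigs)
tr_by_sections = np.trace(F)
tr_by_eigs = eigs.sum()

assert abs(det_by_sections - det_by_eigs.real) < 1e-10
assert abs(tr_by_sections - tr_by_eigs.real) < 1e-10
```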
In conclusion we mention that it can happen that an operator has finite rank and
is not a Poincare operator; see exercise 7.
Proof: Let us first prove the theorem for operators X, Y of finite rank. Introduce the function

g(\lambda) = det\Big( I - \frac{1}{2}(X + Y) + \lambda (X - Y) \Big).

It is obvious that g(\lambda) is an entire function and
Therefore
where \rho is a positive number that will be chosen later and -\frac{1}{2} < t < \frac{1}{2}. Hence

|g'(t)| \le \frac{1}{\rho} \sup_{|\zeta| = \rho} |g(t + \zeta)|

and

|det(I - X) - det(I - Y)| \le \frac{1}{\rho} \sup_{|\lambda| \le \rho + \frac{1}{2}} |g(\lambda)|.

Put \rho = \|X - Y\|_{\mathcal{P}}^{-1} and let |\lambda| \le \rho + \frac{1}{2}. Taking into account that

\Big\| \frac{1}{2}(X + Y) + \lambda (X - Y) \Big\|_{\mathcal{P}} \le \frac{1}{2} \big( \|X + Y\|_{\mathcal{P}} + \|X - Y\|_{\mathcal{P}} \big) + 1,
we obtain
Let us remark that the functions tr X and det(I - X) are not continuous on \mathcal{P} in the operator norm. Indeed, define diagonal operators F_n with diagonal entries

\alpha^{(n)}_j = -\frac{1}{\sqrt{n}}, \quad j = 1, 2, \ldots, n,

and

\alpha^{(n)}_j = 0, \quad j = n + 1, n + 2, \ldots.

It is obvious that F_n \in \mathcal{P} and

\lim_{n\to\infty} \|F_n\| = 0,

while

det(I - F_n) = \Big( 1 + \frac{1}{\sqrt{n}} \Big)^n \to \infty

and

tr F_n = -\sqrt{n} \to -\infty

for n \to \infty.
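The blow-up in this example is easy to reproduce numerically (an illustration of the computation above, nothing more):

```python
import numpy as np

# The diagonal operators F_n with n entries -1/sqrt(n) tend to 0 in operator
# norm, while their traces and determinants diverge.
for n in (4, 16, 64, 256):
    d = -np.ones(n) / np.sqrt(n)
    op_norm = np.abs(d).max()                # norm of a diagonal operator
    tr = d.sum()                             # = -sqrt(n)
    det = np.prod(1 - d)                     # = (1 + 1/sqrt(n))**n
    print(f"n={n:4d}  ||F_n||={op_norm:.4f}  tr={tr:7.1f}  det={det:14.2f}")
    assert np.isclose(tr, -np.sqrt(n))
    assert np.isclose(det, (1 + 1 / np.sqrt(n)) ** n)
```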
In subsequent sections we will need the following properties of the determinant.
Theorem 3.3 The set \mathcal{P} has the following properties: If G^{(1)} and G^{(2)} \in \mathcal{P}, then G = G^{(1)} G^{(2)} \in \mathcal{P} and
Indeed,

g_{jk} = \sum_{r=1}^\infty g^{(1)}_{jr} g^{(2)}_{rk}

and

|g_{jk}| \le \sum_{r=1}^\infty |g^{(1)}_{jr}|\, |g^{(2)}_{rk}|.

Hence

\|G\|_{\mathcal{P}} \le \|G^{(1)}\|_{\mathcal{P}}\, \|G^{(2)}\|_{\mathcal{P}}.

det(I - G^{(1)}_n)\, det(I - G^{(2)}_n) = det(I - G^{(1)}_n - G^{(2)}_n + G^{(1)}_n G^{(2)}_n).

Here G^{(i)}_n is the n-th section of G^{(i)}, i = 1, 2. Now let us take the limit as n \to \infty on both sides of this equality. The limit of the left hand side is equal to

det_{\mathcal{P}}(I - G^{(1)})\, det_{\mathcal{P}}(I - G^{(2)}),

and the operator G^{(3)}(n) = G^{(1)}_n + G^{(2)}_n - G^{(1)}_n G^{(2)}_n converges to G^{(3)} = G^{(1)} + G^{(2)} - G^{(1)} G^{(2)} in the following sense
In this section we discuss the role of the determinant in the inversion of operators of the form I - G, G \in \mathcal{P}.
14.4 Determinants and Inversion of Operators 331

(I - R_n)^{-1} = I + \sum_{j=1}^\infty R_n^j.

The series converges in the norm of \mathcal{P}:

\Big\| R - \sum_{j=1}^m R_n^j \Big\|_{\mathcal{P}} \to 0, \quad m \to \infty.
The operator
Since A^{(n)} is of finite rank, the operator (I + R) A^{(n)} is of finite rank and the operator I - (I + R) A^{(n)} is invertible if and only if det(I - (I + R) A^{(n)}) \ne 0. Now it is easy to conclude that the operator I - A is invertible if and only if det(I - A) \ne 0. Note also that if the last condition holds, then in the decomposition \ell_2 = Im P_n \oplus Im(I - P_n), where P_n is the orthogonal projection onto the subspace spanned by the first basis vectors e_1, \ldots, e_n, the following equality holds
Hence
where

\begin{pmatrix} Z_{11} & 0 \\ Z_{21} & I \end{pmatrix}.

This implies

(I - A)^{-1} - I = \Big( \begin{pmatrix} Z_{11}^{-1} - I & 0 \\ Z_{21} Z_{11}^{-1} & 0 \end{pmatrix} + I \Big)(I - R_n)^{-1} - I.

Since

\begin{pmatrix} Z_{11}^{-1} - I & 0 \\ Z_{21} Z_{11}^{-1} & 0 \end{pmatrix} \in \mathcal{P} \quad and \quad R_n \in \mathcal{P},

we obtain (I - A)^{-1} - I \in \mathcal{P}. The theorem is proved. \Box
Corollary 4.2 Let A \in \mathcal{P} and \Delta(\nu) = det(I - \nu A). The function \Delta(\nu) is entire in \mathbb{C}, and \Delta(\nu_0) = 0 if and only if \nu_0 \ne 0 and \lambda_0 = 1/\nu_0 is an eigenvalue of A. Moreover, if \kappa is the multiplicity of the zero \nu_0 of the analytic function \Delta(\nu), then \kappa is the algebraic multiplicity of the eigenvalue \lambda_0 = 1/\nu_0 of the operator A.
Proof: All claims except the last follow from the previous theorem. For the last statement we need knowledge about the multiplicity of an eigenvalue. This material is outside the scope of this book. The proof of this statement can be found in [GGKr], Theorem II.6.2. \Box
Now we shall use the determinant det(I - G), G \in \mathcal{P}, to obtain formulas for the inverse operator (I - G)^{-1}. Let us assume that G \in \mathcal{P} and det(I - G) \ne 0. Then by Theorem 4.1 the operator I - G is invertible and (I - G)^{-1} - I \in \mathcal{P}. Represent G^{(n)} on the entire space \ell_2 = Im P_n \oplus Im(I - P_n) by the matrix

\begin{pmatrix} G_n & 0 \\ 0 & 0 \end{pmatrix},

where P_n x = (x_1, x_2, \ldots, x_n, 0, 0, \ldots) and G_n is the n-th section of G. It is clear that I - G^{(n)} is invertible for n sufficiently large. We shall now prove that
which is in \mathcal{P} and

A_n^{-1} = \frac{1}{det A_n} R_n,

where

R_n = (r^{(n)}_{jk})_{j,k=1}^\infty.

(det A_n) A_n^{-1} = R_n.

\lim_{n\to\infty} \|R - R_n\| = 0.

In particular,

\lim_{n\to\infty} r^{(n)}_{jk} = r_{jk},

where R = (r_{jk})_{j,k=1}^\infty. It is obvious that

A^{-1} = \frac{1}{det A} R.
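Behind this formula, in finite dimensions, is the classical adjugate identity (det A)\,A^{-1} = adj A. A small self-contained check (my own illustration, not from the book):

```python
import numpy as np

# (det A) * A^{-1} equals the adjugate: the transposed matrix of cofactors.
# Illustration with a 4x4 matrix close to the identity (so it is invertible).
rng = np.random.default_rng(1)
A = np.eye(4) + 0.2 * rng.normal(size=(4, 4))

def adjugate(M):
    n = M.shape[0]
    C = np.empty_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T                               # adj(M) = cofactor matrix, transposed

R = adjugate(A)
assert np.allclose(np.linalg.det(A) * np.linalg.inv(A), R)
```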
This equality generalizes, for operators of the form A = I - G, G \in \mathcal{P}, Cramer's rule of inversion of finite matrices. Now we shall give another representation of the inverse (I - \mu G)^{-1} in terms of a series.

D(G; \mu) = \sum_{m=0}^\infty d_m(G) \mu^m,

of which the coefficients d_m = d_m(G) are operators from \mathcal{P} which are uniquely determined by the recurrence relations

det(I - \mu G) = \sum_{j=0}^\infty \Delta_j \mu^j.
Proof: There exists a \delta > 0 such that for |\mu| < \delta the operator I - \mu G is invertible:
where the series converges in the norm of \mathcal{P}. Consider the operator I - \mu G^{(n)}, where the sequence \{G^{(n)}\} converges to G:
There exists a number \delta_1 > 0 (\delta_1 < \delta) such that for |\mu| \le \delta_1,
Using the computations preceding this theorem we come to the conclusion that

\lim_{n\to\infty} \| (I - \mu G^{(n)})^{-1} - (I - \mu G)^{-1} \|_{\mathcal{P}} = 0,

uniformly in the disc |\mu| \le \delta_1. We start now with the formula

(I - \mu G^{(n)})^{-1} = \frac{1}{det(I - \mu G^{(n)})} R_n(\mu),

where R_n(\mu) is the matrix of algebraic complements for I - \mu G^{(n)} (see the definition preceding Theorem 4.3). It is clear that R_n(\mu) is a polynomial in \mu with operator valued coefficients. After using the formula

(I - \mu G^{(n)})^{-1} = I + \mu G^{(n)} (I - \mu G^{(n)})^{-1}

we have

(I - \mu G^{(n)})^{-1} = I + \frac{\mu}{det(I - \mu G^{(n)})}\, G^{(n)} R_n(\mu).

Denote G^{(n)} R_n(\mu) by D^{(n)}(\mu). Hence

D^{(n)}(\mu) = (I - \mu G^{(n)})^{-1} G^{(n)}\, det(I - \mu G^{(n)}).
Therefore, comparing the coefficients of these power series, we obtain the recurrence relations stated in the theorem.
We note that the last recurrence relation can be further simplified. Moreover, it can be solved in closed form as a difference equation. For these results we refer to [GGK1], Ch. VII, Section 7 and [GGKr], Ch. VIII, Section 1.
14.5 Trace and Determinant Formulas for Poincaré Operators 337

The main theorem in this section gives formulas for the trace and determinant of a Poincaré operator in terms of its eigenvalues. Here we rely, in part, on complex function theory. In particular, we shall use the following theorem, of which the proof can be found in [Ah], Chapter 5, Sec. 2.3 and [GGK1], Theorem VII.5.1. It is a special case of Hadamard's factorization theorem.

Theorem 5.1 Let a_1, a_2, \ldots be the zeros of an entire function f, ordered according to increasing absolute values, with multiplicities taken into account. Assume that

\sum_j \frac{1}{|a_j|} < \infty.

Suppose f(0) = 1 and for each \varepsilon > 0 there exists a constant C(\varepsilon) such that

|f(\lambda)| \le C(\varepsilon) e^{\varepsilon |\lambda|}, \quad \lambda \in \mathbb{C}.

Then f admits the representation

f(\lambda) = \prod_j \Big( 1 - \frac{\lambda}{a_j} \Big).
For the proof of the main theorem about trace and determinant we will need the
following properties of the eigenvalues of Poincare operators.
Lemma 5.3 Let A = (a_{jk})_{j,k=1}^n be an n \times n matrix and let \lambda_1, \lambda_2, \ldots, \lambda_n be the eigenvalues of A, counting multiplicities. Then

\sum_{j=1}^n |\lambda_j| \le \sum_{j,k=1}^n |a_{jk}|. \quad (5.1)

Proof: Let U = (u_{jk})_{j,k=1}^n be a unitary matrix such that U A U^* is upper triangular with \lambda_1, \lambda_2, \ldots, \lambda_n on the diagonal. Take B to be the diagonal matrix diag(t_1, t_2, \ldots, t_n), where \lambda_j t_j = |\lambda_j|, |t_j| = 1, 1 \le j \le n. Then for S = B U A U^* = (s_{jk})_{j,k=1}^n it is clear that

\sum_{j=1}^n |\lambda_j| = \sum_{j=1}^n s_{jj}. \quad (5.2)

Furthermore,

\sum_{j=1}^n s_{jj} = \sum_{j=1}^n t_j \sum_{q=1}^n \sum_{p=1}^n u_{jp} a_{pq} \overline{u_{jq}}.
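Inequality (5.1) is easy to test numerically; the check below (my own illustration, not from the book) runs it over random matrices of various sizes.

```python
import numpy as np

# Check (5.1): the sum of the absolute values of the eigenvalues is dominated
# by the sum of the absolute values of the matrix entries.
rng = np.random.default_rng(2)
for _ in range(20):
    n = int(rng.integers(2, 9))
    A = rng.normal(size=(n, n))
    lhs = np.abs(np.linalg.eigvals(A)).sum()
    rhs = np.abs(A).sum()
    assert lhs <= rhs + 1e-9
```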
Theorem 5.4 Let A \in \mathcal{P} and let \lambda_1(A), \lambda_2(A), \ldots be the non-zero eigenvalues of A counted according to their algebraic multiplicities. Then

\sum_j |\lambda_j(A)| \le \|A\|_{\mathcal{P}}.

For the application of Theorem 5.1 to the function det(I - \lambda A) we need the following theorem.
Theorem 5.5 Let A \in \mathcal{P}. Given \varepsilon > 0, there exists a constant C(\varepsilon) such that

|det(I - \lambda A)| \le C(\varepsilon) e^{\varepsilon |\lambda|}, \quad \lambda \in \mathbb{C}.

Proof: Since A \in \mathcal{P}, we have from Section 1 that
where \sum_{j=1}^\infty \sum_{k=1}^\infty |a_{jk}| < \infty. Given \varepsilon > 0, there exists a positive integer N such that \sum_{j=N+1}^\infty \sum_{k=1}^\infty |a_{jk}| < \varepsilon/2. Let c_j = \sum_{k=1}^\infty |a_{jk}|, j = 1, 2, \ldots. Since 1 + x \le e^x, x \ge 0, it follows from (5.4) that

|det(I - \lambda A)| \le \prod_{j=1}^N (1 + |\lambda| c_j) \prod_{j=N+1}^\infty (1 + |\lambda| c_j) \quad (5.5)

\le \prod_{j=1}^N (1 + |\lambda| c_j) \exp\Big( |\lambda| \sum_{j=N+1}^\infty c_j \Big) \le \prod_{j=1}^N (1 + |\lambda| c_j)\, e^{\varepsilon |\lambda| / 2}. \quad (5.6)
Theorem 5.6 Let A = (a_{jk})_{j,k=1}^\infty \in \mathcal{P} and let \lambda_1(A), \lambda_2(A), \ldots be the sequence of the non-zero eigenvalues of A counted according to their algebraic multiplicities. Then

(i) det(I - \lambda A) = \prod_j (1 - \lambda \lambda_j(A));

(ii) tr A = \sum_j \lambda_j(A).
Proof: (i) From the remark preceding Theorem 5.2, we know that the zeros of the entire function det(I - \lambda A) are the points 1/\lambda_j(A), of order, by definition, the algebraic multiplicity of \lambda_j(A) for A. Statement (i) now follows from Theorem 5.1, applied to f(\lambda) = det(I - \lambda A). The conditions of this theorem are met by Theorems 5.4 and 5.5.
(ii) Since the function det(I - \lambda A) is entire, det(I - \lambda A) = \sum_{n=0}^\infty c_n \lambda^n, \lambda \in \mathbb{C}. Let A_n be the n-th finite section of A. Now

det(I - \lambda A_n) = \prod_{j=1}^n (1 - \lambda \lambda_j(A_n)) = \sum_{k=0}^n c_{kn} \lambda^k,

where

c_{1n} = -\sum_{k=1}^n \lambda_k(A_n) = -\sum_{k=1}^n a_{kk}.

Since det(I - \lambda A_n) converges uniformly on compact subsets of \mathbb{C} to det(I - \lambda A), it follows that the sequence \{c_{1n}\} converges to c_1, i.e.,

c_1 = -\sum_{k=1}^\infty a_{kk}. \quad (5.7)

On the other hand, we have from (i) that the sequence of polynomials \prod_{j=1}^n (1 - \lambda \lambda_j(A)), denoted by \sum_{j=0}^n d_{jn} \lambda^j, converges uniformly on compact sets to det(I - \lambda A). Hence the sequence \{d_{1n}\} converges to c_1. Since

d_{1n} = -\sum_{j=1}^n \lambda_j(A),

we obtain

c_1 = -\sum_j \lambda_j(A).

By (5.7),

\sum_j \lambda_j(A) = \sum_{k=1}^\infty a_{kk} = tr A. \quad \Box
Much of the material in this chapter has its origin in the book [GGKr], Chapters I–III, as well as in the books [GGK1] and [GKre]. Further developments of the theory of traces, determinants and inversion can be found in these books, along with a list of other sources on this subject.
Exercises XIV
1. Let X and Y be any operators in \mathcal{L}(\mathbb{C}^n), and suppose C_1(n) and C_2(n) are positive constants such that
(a) |tr X - tr Y| \le C_1(n) \|X - Y\|
(b) |det(I - X) - det(I - Y)| \le C_2(n) \|X - Y\|
Prove that

\lim_{n\to\infty} C_1(n) = \infty \quad and \quad \lim_{n\to\infty} C_2(n) = \infty.
2. Prove that there exists a number r > 0 such that for F, G \in \mathcal{F}(\ell_2) with
3. Prove that the formula

det(I - \lambda F) = \exp\Big( -\sum_{n=1}^\infty \frac{1}{n}\, tr(F^n)\, \lambda^n \Big)

holds, where F \in \mathcal{P}.
4. Prove that for any F \in \mathcal{F}(\ell_2) and \lambda \in \mathbb{C},

det(I - \lambda F) = 1 + \sum_{n=1}^{rank F} (-1)^n \frac{C_n(F)}{n!} \lambda^n,

where

C_n(F) = det \begin{pmatrix} tr F & n-1 & 0 & \cdots & 0 \\ tr F^2 & tr F & n-2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ tr F^n & tr F^{n-1} & \cdots & & tr F \end{pmatrix}
and
1 0 . . .)
tr ( ~ ~ : : : =1.
for X, YEP.
7. Construct a finite rank operator F \in \mathcal{F}(H) such that F \notin \mathcal{P}.
(K\varphi)(t) = \int_a^b k(t,s)\, \varphi(s)\, ds.

Prove that

\sum_j \lambda_j(K) = \int_a^b k(s,s)\, ds.
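A numerical version of this trace formula can be sketched with a Nyström discretization (my own construction, with an assumed smooth kernel e^{ts}): the eigenvalues of the matrix (h\, k(t_i, t_j)) sum to the quadrature value of \int_a^b k(s,s)\, ds.

```python
import numpy as np

# Rough numerical check (my discretization, not from the book). The eigenvalue
# sum of the Nystrom matrix equals its trace, which is the midpoint-rule value
# of the diagonal integral  int_0^1 k(s,s) ds = int_0^1 exp(s^2) ds ~ 1.46265.

a, b, m = 0.0, 1.0, 400
h = (b - a) / m
t = a + (np.arange(m) + 0.5) * h             # midpoint nodes

k = lambda t_, s_: np.exp(t_ * s_)           # assumed example kernel
Kmat = h * k(t[:, None], t[None, :])

eig_sum = np.linalg.eigvals(Kmat).sum().real
diag_int = np.sum(k(t, t)) * h               # quadrature of int k(s,s) ds

assert abs(eig_sum - diag_int) < 1e-8        # both equal tr(Kmat)
```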
10. Denote by HS_\varphi the set of all operators A in \mathcal{L}(H) such that their matrices (a_{jk})_{j,k=1}^\infty in the orthonormal basis \{\varphi_j\}_1^\infty satisfy the condition

\sum_{j,k=1}^\infty |a_{jk}|^2 < \infty.

Let \{\psi_j\}_1^\infty be another orthonormal basis in H. Find an operator X \in HS_\varphi such that X \notin HS_\psi.
under the conditions that
and

\sum_{j=0}^\infty |x_j - 1| < \infty.

(b) The same problem under the same conditions for the special case when x_j = 1 - a_j b_j. In this case the last condition follows from the previous ones.
12. Let \{e_j\}_{j=-\infty}^\infty be the standard orthonormal basis of \ell_2(\mathbb{Z}) and denote by \mathcal{P}(\mathbb{Z}) the set of all operators A \in \mathcal{L}(\ell_2(\mathbb{Z})) such that their matrices (a_{jk})_{j,k=-\infty}^\infty in the basis \{e_j\}_{j=-\infty}^\infty satisfy the condition

\sum_{j,k=-\infty}^\infty |a_{jk}| < \infty.

Define for such operators the trace tr A and the determinant det(I - A).
13. Let F \in \mathcal{F}(\ell_2(\mathbb{Z})). We assume that (f_{jk})_{j,k=-\infty}^\infty is the matrix of F in the standard basis \{e_j\}_{j=-\infty}^\infty of \ell_2(\mathbb{Z}). Denote by F^{(-n,m)} the operator defined by the section

(f_{jk})_{j,k=-n}^m.

Define tr F and det(I - F) in the same way as these quantities are defined for F \in \mathcal{F}(\ell_2); see Section 2.
(a) Prove that
14. Compute the determinant det(I - V), where the operator V is defined in the standard basis of \ell_2(\mathbb{Z}) by one of the following matrices:
(a)

V = \begin{pmatrix} b_{-4} & 0 & 0 & 0 & 0 & 0 \\ 0 & b_{-3} & 0 & 0 & 0 & 0 \\ 0 & 0 & b_{-2} & 0 & 0 & 0 \\ a_{-3} & a_{-2} & a_{-1} & b_{-1} & 0 & 0 \\ 0 & 0 & 0 & a_0 & a_1 & a_2 \\ 0 & 0 & 0 & b_1 & 0 & 0 \\ 0 & 0 & 0 & 0 & b_2 & 0 \end{pmatrix}
a_{jj} = x_j, \quad j = 0, \pm 1, \ldots,

and

\sum_{j=-\infty}^\infty |x_j| < \infty,

exists.
16. Let \mathcal{P}(\mathbb{Z}) be the set of operators defined in problem 12. Suppose A = (a_{jk})_{j,k=-\infty}^\infty \in \mathcal{P}(\mathbb{Z}) with its sequence of eigenvalues \lambda_1(A), \lambda_2(A), \ldots counted according to their algebraic multiplicities. Prove that

tr A = \sum_j \lambda_j(A).
Chapter XV
Fredholm Operators
Fredholm operators are operators that have a finite dimensional kernel and an image of finite codimension. This class includes all operators acting between finite dimensional spaces and all operators of the form "two-sided invertible plus compact". Fredholm operators appear in a natural way in the theory of Toeplitz operators. The main properties of Fredholm operators, the perturbation theorems and the stability of the index, are presented in this chapter. The proofs are based on the fact that these operators can be represented as finite rank perturbations of one-sided invertible operators.
Then Ker A = sp\{e_1, \ldots, e_k\} and \ell_2 = Im A \oplus sp\{e_1, \ldots, e_r\}, where e_1, e_2, \ldots is the standard orthonormal basis in \ell_2. Thus

n(A) = k, \quad d(A) = r, \quad ind A = k - r.
Proof: Since n(A) < \infty and d(A) < \infty, there exist, by Theorem XI.5.7, closed subspaces M and N such that

X = M \oplus Ker A, \quad Y = Im A \oplus N.

Let M \times N be the Banach space with norm \|(u, v)\| = \|u\| + \|v\|. Define the operator \hat{A} on M \times N by \hat{A}(u, v) = Au + v. Since \hat{A} is an injective bounded linear map from M \times N onto Y, we have that \hat{A} is invertible by Theorem XII.4.1. Thus for any u \in M,
(2.1)
It is easy to verify that the restriction of A to M is a one to one map from M onto Im A \cap Ker B. Hence
Thus by (2.2),
Since N_1 is a subspace of the finite dimensional space Ker B, there exists a subspace N_2 of Ker B such that

Ker B = N_1 \oplus N_2. \quad (2.4)

Then

n(B) = n_1 + n_2, \quad n_2 = dim N_2. \quad (2.5)

Since d(A) and dim N_2 are finite and Im A \cap N_2 = \{0\}, there exists a finite dimensional subspace N_3 such that

Y = Im A \oplus N_2 \oplus N_3 \quad (2.6)

and

d(A) = n_2 + n_3 < \infty, \quad n_3 = dim N_3. \quad (2.7)
15.2 First Properties 349
From

Ker B = N_1 \oplus N_2 \subset Im A \oplus N_2

and equalities (2.4) and (2.6), it follows that B is one to one on N_3 and

Im B = Im BA \oplus B N_3. \quad (2.8)
From (2.3), (2.5), (2.7) and (2.9), we have that the operator BA is a Fredholm operator with
Proof: Let A have the representation (2.10). Assume that D^{(-1)} is a right inverse of D. Then

A = D(I + D^{(-1)} F).

Notice that by Theorem XIII.4.1 the operator I + D^{(-1)} F is Fredholm and has index zero. Hence A is Fredholm and ind A = ind D by Theorem 2.2. A similar argument shows that A is Fredholm if D has a left inverse.
Suppose that A is Fredholm. There exists a linearly independent set \{y_1, y_2, \ldots, y_n\} and a closed subspace M such that
and

X = M \oplus Ker A.

Let \{x_1, x_2, \ldots, x_m\} be a basis for Ker A. By Corollary XI.5.6, linear functionals f_1, f_2, \ldots, f_m, which are bounded on X, may be chosen so that f_j(x_k) = \delta_{jk}, j, k = 1, 2, \ldots, m, where \delta_{jk} is the Kronecker delta, and for every x \in Ker A,

x = \sum_{j=1}^m f_j(x) x_j.
F(x) = \sum_{j=1}^{\min(m,n)} f_j(x)\, y_j. \quad (2.12)

Setting D = A - F, we have

Im D = Im A \oplus Im F. \quad (2.13)

Y = Im D, \quad n \ge m.

It is easy to see that Ker D = sp\{x_{n+1}, \ldots, x_m\} for n < m and Ker D = \{0\} for n \ge m.
Thus Ker D is complemented in X. These results combined with Theorem XII.6.2 show that D is right invertible when n \le m, i.e., d(A) \le n(A). For n \ge m, i.e., d(A) \ge n(A), we have from (2.13), (2.14) and Theorem XII.6.1 that n(D) = 0 and d(D) \le d(A) < \infty, i.e., D is a left invertible Fredholm operator. Thus (2.10) follows. \Box
Lemma 2.5 Given A \in \mathcal{L}(X, Y) and B \in \mathcal{L}(Y, Z), where X, Y and Z are Banach spaces, suppose that AB = F and BA = G are Fredholm operators. Then A and B are Fredholm operators and
Since A is one to one on X_0 and A X_0 is closed, it follows from Theorem XII.4.6 that there exists m > 0 such that \|A x_0\| \ge m \|x_0\| for x_0 \in X_0. Thus

|f(A x_0)| \le \|f\|\, \|x_0\| \le \frac{1}{m} \|f\|\, \|A x_0\|.

g = u + v, \quad u \in (Ker A)^\perp, \quad v \in W.

d(A') = dim W = dim \varphi(W) = dim (Ker A)' = dim Ker A = n(A).
Theorem 3.1 Given a Fredholm operator A \in \mathcal{L}(X, Y), there exists a \gamma > 0 such that if B \in \mathcal{L}(X, Y) and \|B\| < \gamma, then A + B is a Fredholm operator with

ind(A + B) = ind A.

Furthermore,

n(A + B) \le n(A), \quad d(A + B) \le d(A).
15.3 Perturbations Small in Norm 353
Proof: Since A is Fredholm, we have from Theorem 2.3 that A has the representation A = D + F, where D is a one-sided invertible Fredholm operator and F has finite rank. Let D^{(-1)} denote a one-sided inverse of D. We know from Theorems XII.6.3 and XII.6.4 that for \|B\| < \|D^{(-1)}\|^{-1}, D + B is invertible from the same side as D and
We have from Corollary 2.4 that the operator D is left invertible and, as noticed above, D + B is also left invertible. Thus Ker(D + B) = \{0\} and
Since Ker(I + F(D + B)^{(-1)}) \subset Im(F(D + B)^{(-1)}), it follows from (3.4) and (3.5) that

n(A + B) = dim Ker(A + B) \le dim Ker(I + F(D + B)^{(-1)}) + dim Ker(D + B)
= dim Ker(I + F(D + B)^{(-1)}) \le dim Im(F(D + B)^{(-1)}) \le dim Im F = n(A). \quad (3.6)
Corollary 3.2 The spectrum of the forward shift S on \ell_p, 1 \le p \le \infty, is equal to the closed unit disc. Furthermore, for |\lambda| < 1 the operator \lambda I - S is Fredholm with

n(\lambda I - S) = 0, \quad d(\lambda I - S) = 1, \quad ind(\lambda I - S) = -1, \quad (3.8)

and \lambda I - S is not Fredholm for |\lambda| = 1.
But this is impossible because ind(\beta I - S) = -1 for |\beta| < 1 and ind(\beta I - S) = 0 for |\beta| > 1. Thus \lambda I - S is not Fredholm for |\lambda| = 1. \Box
If |\mu| < 1 and \mu is "close" to \lambda, then it is easy to see that n(A - \mu I) = 1 and d(A - \mu I) = 0. Hence
(I - K)u = \lim_{n\to\infty} (I - K) u_n = 0.
But this is impossible since I - K is one to one on M and u \ne 0.
Next we show that d(I - K) < \infty. Assume this is not the case. Then for any positive integer n, there exist linearly independent vectors y_1, y_2, \ldots, y_n such that Im(I - K) \cap sp\{y_1, y_2, \ldots, y_n\} = \{0\}. As we saw in the proof of Theorem XI.5.3, there exist functionals f_1, f_2, \ldots, f_n in Im(I - K)^\perp such that f_i(y_j) = \delta_{ij}. Now the set \{f_1, f_2, \ldots, f_n\} is linearly independent, since 0 = \sum_{j=1}^n \alpha_j f_j implies 0 = (\sum_{j=1}^n \alpha_j f_j)(y_k) = \alpha_k, k = 1, 2, \ldots, n. Hence n \le dim Im(I - K)^\perp = dim Ker(I - K') by Theorem XIII.5.1. Since n was arbitrary, dim Ker(I - K') = \infty. But this is impossible, for K' is compact and, as we observed above, Ker(I - K') is finite dimensional. Hence d(I - K) < \infty.
We have shown that I - K is Fredholm. Finally we prove that ind(I - K) = 0. For each \lambda \in \mathbb{C}, we have seen that I - \lambda K is a Fredholm operator. Define the mapping \varphi on the interval [0, 1] by \varphi(\lambda) = ind(I - \lambda K). It follows from Theorem 3.1 that \varphi is continuous. Since \varphi is integer valued, \varphi must be a constant function. Hence

0 = \varphi(0) = \varphi(1) = ind(I - K).

This completes the proof. \Box
Using Theorem 4.2, the proof of the following theorem is the same as the proof of Theorem XIII.6.1.
Theorems 3.1 and 4.1 can be extended to closed operators in the following manner.
First we extend the definition of a Fredholm operator.
A linear operator T(X \to Y) is called a Fredholm operator if the numbers n(T) = dim Ker T and d(T) = codim Im T are both finite. In this case the number ind T = n(T) - d(T) is called the index of T. The Sturm–Liouville operator discussed in Section VI.5 is a closed Fredholm operator.
15.5 Unbounded Fredholm Operators 357
Theorem 5.1 Let T(X \to Y) be a closed Fredholm operator. There exists a \gamma > 0 such that for any operator B(X \to Y) with \mathcal{D}(B) \supset \mathcal{D}(T) and
and

\|(T + B)x\| \ge \|Tx\| - \|Bx\| \ge (1 - \gamma) \|Tx\| - \gamma \|x\|.

Hence

\|Tx\| \le \frac{1}{1 - \gamma} \big( \gamma \|x\| + \|(T + B)x\| \big). \quad (5.3)
Theorem 5.2 Let T(X \to Y) be a closed Fredholm operator, and let K(X \to Y) be a compact operator on \mathcal{D}(T). Then T + K is a closed Fredholm operator with

ind(T + K) = ind T.
Let b be a bounded Lebesgue measurable function on [0, 1]. Define B \in \mathcal{L}(\mathcal{H}) by

(Bg)(t) = b(t) g(t),

with inverse

(A^{-1} g)(t) = \int_0^t g(s)\, ds.
Thus A^{-1} is a compact operator on \mathcal{H}. To see that B is A-compact, suppose \{g_n\} is an A-bounded sequence, i.e., \|g_n\| + \|A g_n\| \le M, n = 1, 2, \ldots. Then \{A g_n\} is a bounded sequence in \mathcal{H}, and since A^{-1} is compact, \{g_n\} = \{A^{-1} A g_n\} has a convergent subsequence. Hence \{B g_n\} has a convergent subsequence, which shows that B is A-compact. From Theorem 5.2 we get that A + B is a closed Fredholm operator with

ind(A + B) = ind A = 0.
For a thorough treatment of perturbation theory, the reader is referred to the paper
[GKrel], and the books [G], [GGKl] and [K]. Applications of Theorems 5.1 and
5.2 to differential operators appear in [G].
Exercises XV
(b) If the latter condition on a holds, show that ind A = (k - r)m, where m is the winding number relative to zero of the oriented curve t \mapsto a(e^{it}), with t running from -\pi to \pi.
Determine the Fredholm properties of the operator A when k = r.
3. Find the spectrum of the operator A defined in the previous exercise (with k \ne r).
4. Let U and U^+ be as in the first exercise with k \ne r, and consider the operator

B = \sum_{j=1}^N a_{-j} (U^+)^j + a_0 I + \sum_{j=1}^M a_j U^j,

\beta(\lambda) = \sum_{j=-N}^M a_j \lambda^j \ne 0 \quad (|\lambda| = 1).

(b) If the latter condition on \beta holds, show that ind B = (k - r)m, where m is the winding number relative to zero of the oriented curve t \mapsto \beta(e^{it}), with t running from -\pi to \pi.
where A_{ij} is a bounded linear operator from X_j into X_i (i, j = 1, 2). In fact, (*) means that for x₁ ∈ X₁ and x₂ ∈ X₂ we have A(x₁ + x₂) = y₁ + y₂, where

y_i = A_{i1}x₁ + A_{i2}x₂ ∈ X_i  (i = 1, 2).

(a) If A is invertible and the operators A₁₂ and A₂₁ are compact, prove that A₁₁ and A₂₂ are Fredholm and
In this chapter we develop the theory of Laurent and Toeplitz operators for the case when the underlying space is ℓ_p with 1 ≤ p < ∞. To keep the presentation as simple as possible we have chosen a special class of symbols, namely those that are analytic in an annulus around the unit circle. For this class of symbols the results are independent of p. We prove the theorems about left and right invertibility and derive the Fredholm properties. Also, the convergence of the finite section method is analyzed. In the proofs factorization is used systematically. The chapter also contains extensions of the theory to pair operators and to a simple class of singular integral operators on the unit circle.
In this section we study the invertibility of Laurent operators on the Banach space ℓ_p(Z), where 1 ≤ p < ∞. Such an operator L assigns to an element x = (x_j)_{j∈Z} in ℓ_p(Z) an element y = (y_j)_{j∈Z} via the rule

Lx = y,  y_n = Σ_{k=−∞}^{∞} a_{n−k} x_k  (n ∈ Z). (1.1)

In what follows we assume that there exist constants c ≥ 0 and 0 ≤ ρ < 1 such that

|a_n| ≤ cρ^{|n|}  (n ∈ Z). (1.2)
From (1.2) it follows that the operator L in (1.1) is well-defined and bounded. To see this, let V be the bilateral (forward) shift on ℓ_p(Z), that is, V is the operator given by

(Vx)_n = x_{n−1}  (n ∈ Z). (1.3)

Notice that V is invertible and ‖Vⁿ‖ = 1 for each n ∈ Z. We claim that the action of the operator L in (1.1) is also given by

Lx = Σ_{n=−∞}^{∞} a_n Vⁿx. (1.4)

Indeed, the series in the right hand side of (1.4) converges in ℓ_p(Z) to a vector which we shall denote by L̃x. Since the map (x_j)_{j∈Z} ↦ x_n is a continuous linear functional on ℓ_p(Z), we have

(L̃x)_n = Σ_{k=−∞}^{∞} a_k (Vᵏx)_n = Σ_{k=−∞}^{∞} a_k x_{n−k} = y_n.

This holds for each n, and thus L̃x = Lx, and (1.4) is proved. Now,

‖L‖ ≤ Σ_{n=−∞}^{∞} |a_n| ‖Vⁿ‖ = Σ_{n=−∞}^{∞} |a_n| < ∞. (1.5)

The function

a(λ) = Σ_{n=−∞}^{∞} a_n λⁿ,  λ ∈ T, (1.6)

is called the symbol of L; its coefficients are recovered from

a_n = (1/2π) ∫_{−π}^{π} a(e^{it}) e^{−int} dt,  n ∈ Z. (1.7)
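As a quick numerical illustration (not from the text), the rule (1.1) is a bilateral convolution of the coefficient sequence (a_n) with x, and for finitely supported data it can be checked directly. The decay rate ρ and the sample values below are arbitrary choices:

```python
import numpy as np

# Truncated symbol coefficients a_n = rho^{|n|} for |n| <= N (decay as in (1.2)).
rho, N = 0.5, 20
n = np.arange(-N, N + 1)
a = rho ** np.abs(n)

# A finitely supported input sequence x, indexed j = -M..M.
M = 5
x = np.zeros(2 * M + 1)
x[M] = 1.0      # x_0 = 1
x[M + 1] = 2.0  # x_1 = 2

# (Lx)_n = sum_k a_{n-k} x_k is a discrete convolution of the coefficient arrays.
y = np.convolve(a, x)  # indexed n = -(N+M) .. N+M

# Check one entry directly: (Lx)_0 = a_0 * x_0 + a_{-1} * x_1 = 1 + 2*rho.
n0 = N + M  # position of index n = 0 in y
assert np.isclose(y[n0], a[N] * 1.0 + a[N - 1] * 2.0)
```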
Theorem 1.1 Let L_a be the Laurent operator on ℓ_p(Z) with symbol a ∈ A. Then L_a is invertible if and only if a does not vanish on the unit circle, and in that case

L_a⁻¹ = L_{a⁻¹}. (1.8)

Proof: We split the proof into two parts. In the first part we assume that a does not vanish on T and we prove (1.8). The second part concerns the necessity of the condition on a.

Part 1. It will be convenient to show first that for each α and β in A we have

L_αL_β = L_{αβ}. (1.9)
16.1 Laurent Operators on ℓ_p(Z)
To prove (1.9) it suffices to show that L_αL_βe_j = L_{αβ}e_j for each vector e_j = (δ_{jn})_{n∈Z}, where δ_{jn} is the Kronecker delta. Notice that L_βe_j = (b_{n−j})_{n∈Z}, where

b_n = (1/2π) ∫_{−π}^{π} β(e^{it}) e^{−int} dt  (n ∈ Z).

Hence

(L_αL_βe_j)_n = Σ_{k=−∞}^{∞} a_{n−k} b_{k−j}  (n ∈ Z). (1.10)

The right hand side of (1.10) is equal to the coefficient γ_{n−j} of λ^{n−j} in the Laurent series expansion of γ(λ) = α(λ)β(λ). Thus the right hand side of (1.10) is also equal to (L_{αβ}e_j)_n. This holds for each n, and thus (1.9) is proved.
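The coefficient identity behind (1.9), namely that the Laurent coefficients of a product αβ are the convolution of the coefficients of α and β, can be sanity-checked numerically. The two trigonometric polynomials below are arbitrary toy symbols (this example is an illustration, not from the text):

```python
import numpy as np

# Laurent coefficients of two trigonometric polynomials on |lambda| = 1,
# stored for exponents -N..N (a toy stand-in for symbols in the class A).
N = 2
alpha = np.array([0.0, 1.0, 2.0, 3.0, 0.0])   # alpha(l) = l^{-1} + 2 + 3l
beta  = np.array([0.0, 0.0, 1.0, -1.0, 0.0])  # beta(l)  = 1 - l

# Coefficients of alpha*beta are the convolution of the coefficient arrays.
gamma = np.convolve(alpha, beta)  # exponents -2N..2N

# Cross-check by evaluating both sides at a point on the unit circle.
lam = np.exp(1j * 0.7)
evaluate = lambda coef, k0: sum(coef[i] * lam ** (i - k0) for i in range(len(coef)))
lhs = evaluate(alpha, N) * evaluate(beta, N)
rhs = evaluate(gamma, 2 * N)
assert np.isclose(lhs, rhs)
```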
Now assume that α does not vanish on T, and put β = α⁻¹. Then (1.9) yields L_αL_{α⁻¹} = L_{α⁻¹}L_α = L₁ = I, which proves (1.8).

Part 2. Assume L_a is invertible, and suppose a(λ₀) = 0 for some λ₀ ∈ T. By the stability theorems there exists ε > 0 such that an operator L̃ on ℓ_p(Z) is invertible whenever ‖L_a − L̃‖ < ε. Condition (1.2) implies that Σ_{n=−∞}^{∞} |a_n| is convergent. We choose a positive integer N so that Σ_{|n|>N} |a_n| < ε/2. Put

a_N(λ) = Σ_{n=−N}^{N} a_n λⁿ,

and let L_{a_N} be the Laurent operator with symbol a_N. Then

‖L_a − L_{a_N}‖ ≤ Σ_{|n|>N} |a_n| < ε/2, (1.12)

and, for each λ ∈ T,

|a(λ) − a_N(λ)| ≤ Σ_{|n|>N} |a_n| < ε/2. (1.13)

Put L̃ = L_{a_N} − a_N(λ₀)I. From (1.13) and a(λ₀) = 0 we see that |a_N(λ₀)| < ε/2. By combining this with (1.12) we get that ‖L_a − L̃‖ < ε, and hence L̃ is invertible. Now, put W = L_ω, where ω ∈ A is the trigonometric polynomial determined by a_N(λ) − a_N(λ₀) = (λ − λ₀)ω(λ). By (1.9),

L̃ = W(V − λ₀I) = (V − λ₀I)W. (1.14)

Since L̃ is invertible, it follows that V − λ₀I is invertible, with inverse WL̃⁻¹ = L̃⁻¹W. This is impossible, because λ₀ ∈ T and the spectrum of the bilateral shift V is the unit circle. Thus a does not vanish on T. □
Chapter XVI. Toeplitz and Singular Integral Operators
Theorem 1.2 Let L be the Laurent operator on ℓ_p(Z) with symbol a ∈ A. Then L is invertible if and only if L is Fredholm.

Proof: We only have to establish the "if part" of the theorem. So assume L is Fredholm. In view of Theorem 1.1 it suffices to show that a(λ) ≠ 0 for each λ ∈ T. Assume not. So a(λ₀) = 0 for some λ₀ ∈ T. By the stability theorems for Fredholm operators there exists ε > 0 such that an operator L̃ on ℓ_p(Z) will be Fredholm whenever

‖L − L̃‖ < ε.

Now, using this ε, construct L̃ and W as in the second part of the proof of Theorem 1.1. Then L̃ is Fredholm and (1.14) holds. This implies that the operator V − λ₀I is Fredholm, which is impossible because |λ₀| = 1. Thus a(λ) ≠ 0 for λ ∈ T, and hence L is invertible. □
In this section we study the invertibility of Toeplitz operators on the Banach space ℓ_p, where 1 ≤ p < ∞. Such an operator T assigns to an element x = (x_j)_{j=0}^{∞} in ℓ_p an element y = (y_j)_{j=0}^{∞} via the rule

Tx = y,  y_n = Σ_{k=0}^{∞} a_{n−k} x_k  (n = 0, 1, 2, ...). (2.1)

Here

a_n = (1/2π) ∫_{−π}^{π} a(e^{it}) e^{−int} dt,  n ∈ Z, (2.2)

where a ∈ A, that is, a is analytic in an annulus containing the unit circle T, or, equivalently, for some c ≥ 0 and 0 ≤ ρ < 1 we have

|a_n| ≤ cρ^{|n|}  (n ∈ Z). (2.3)

The function a is called the symbol of T.
16.2 Toeplitz Operators on ℓ_p
In what follows P denotes the projection of ℓ_p(Z) onto the subspace of all vectors (x_j)_{j∈Z} with x_j = 0 for j < 0, that is,

(Px)_j = x_j (j ≥ 0),  (Px)_j = 0 (j < 0). (2.4)

As in (1.6), the symbol admits the expansion

a(λ) = Σ_{n=−∞}^{∞} a_n λⁿ,  λ ∈ T. (2.5)

Let e₀, e₁, e₂, ... be the standard basis of ℓ_p. Then (2.1) means that T acts on the column (x₀, x₁, x₂, ...)ᵀ as the infinite matrix (a_{n−k})_{n,k=0}^{∞}, with the usual matrix column multiplication. Therefore, instead of (2.1) we sometimes simply write

T = [ a₀  a₋₁  a₋₂  ⋯
      a₁  a₀   a₋₁  ⋯
      a₂  a₁   a₀   ⋯
      ⋮   ⋮    ⋮    ⋱ ].
To study the invertibility of Toeplitz operators we shall use the notion of a Wiener-Hopf factorization for a function from A. So let a ∈ A. A factorization

a(λ) = a₋(λ) λ^κ a₊(λ),  λ ∈ T, (2.6)

is called a Wiener-Hopf factorization of a if κ is an integer, the functions a₋ and a₊ both belong to A and do not vanish on T, and

(i) a₊(·)^{±1} extends to a function which is analytic on the unit disk,

(ii) a₋(·)^{±1} extends to a function which is analytic outside the unit disk including infinity.

Since a ∈ A, the conditions (i) and (ii) imply that the functions a₊(·)^{±1} and a₋(·)^{±1} are also in A. Furthermore, from (i) and (ii) we see that a₊(·)^{±1} and a₋(·)^{±1} admit series expansions of the following form:

a₊(λ) = Σ_{j=0}^{∞} a⁺_j λʲ,  a₊(λ)⁻¹ = Σ_{j=0}^{∞} γ⁺_j λʲ  (|λ| ≤ 1),

a₋(λ) = Σ_{j=0}^{∞} a⁻_{−j} λ^{−j},  a₋(λ)⁻¹ = Σ_{j=0}^{∞} γ⁻_{−j} λ^{−j}  (|λ| ≥ 1).
Thus the Toeplitz operators with symbols a+O and a+O- 1 are represented by
lower triangular matrices, and those with symbols a-O and a-O- I by upper
triangular matrices.
For a Wiener-Hopf factorization to exist it is necessary that a does not vanish on T. The next theorem shows that this condition is also sufficient.

Theorem 2.1 A function a ∈ A admits a Wiener-Hopf factorization (2.6) if and only if a does not vanish on T. In that case the exponent κ is uniquely determined by a; in fact, κ is the winding number of a relative to zero.

Proof: We have already seen that a(λ) ≠ 0 for λ ∈ T is necessary. To prove that this condition is also sufficient, assume that it is satisfied, and let κ be the winding number of a relative to zero. Put ω(λ) = λ^{−κ}a(λ). Then ω is analytic in an annulus containing the unit circle, ω does not vanish on T, and the winding number of ω relative to zero is equal to zero. It follows from complex function theory that ω(λ) = exp f(λ) for some function f that is analytic on an annulus containing T. In fact, we can take f = log ω for a suitable branch of the logarithm.
Thus for some δ > 0 the function f admits the Laurent series expansion f(λ) = Σ_{n=−∞}^{∞} c_n λⁿ on 1 − δ < |λ| < 1 + δ. Now put

a₊(λ) = exp(Σ_{n=0}^{∞} c_n λⁿ),  a₋(λ) = exp(Σ_{n=−∞}^{−1} c_n λⁿ). (2.7)

Then a₋ and a₊ have the properties (i) and (ii), and a(λ) = a₋(λ)λ^κa₊(λ) is a Wiener-Hopf factorization of a.

To prove the uniqueness of κ, suppose a(λ) = ã₋(λ)λ^μã₊(λ) is a second Wiener-Hopf factorization, and assume μ > κ. Comparing the two factorizations gives

λ^{κ−μ} a₋(λ) ã₋(λ)⁻¹ = ã₊(λ) a₊(λ)⁻¹,  λ ∈ T. (2.8)
The right hand side of (2.8) extends to a function which is analytic on |λ| < 1 + δ₊ for some δ₊ > 0, and the left hand side of (2.8) extends to a function which is analytic on |λ| > 1 − δ₋ for some δ₋ > 0. Moreover, the left hand side of (2.8) is analytic at infinity and its value at infinity is zero. By Liouville's theorem (from complex function theory), both sides of (2.8) are identically zero, which is impossible. Thus μ cannot be strictly larger than κ. In a similar way one shows that κ cannot be strictly larger than μ. Thus κ = μ, and κ is uniquely determined by a. □
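The construction in this proof (split off λ^κ, take a continuous logarithm f = log ω, and separate nonnegative from negative Fourier coefficients) can be imitated numerically. The sketch below is an illustration, not from the text; the rational symbol with winding number 1 is an arbitrary choice:

```python
import numpy as np

M = 256
t = 2 * np.pi * np.arange(M) / M
lam = np.exp(1j * t)

# A sample symbol a(l) = l * (2 + l) * (1 - 0.3/l): winding number 1 about 0.
a = lam * (2 + lam) * (1 - 0.3 / lam)

# Winding number from the total change of argument along the curve.
kappa = int(round(np.sum(np.angle(a / np.roll(a, 1))) / (2 * np.pi)))

# Remove lambda^kappa; omega has winding number 0, so the principal
# logarithm is continuous on the sampled circle for this symbol.
omega = a / lam ** kappa
f = np.log(omega)

# Split the Fourier coefficients of f into nonnegative (f_plus) and
# negative (f_minus) index parts, as in (2.7).
c = np.fft.fft(f) / M
idx = np.fft.fftfreq(M, d=1.0 / M).astype(int)
f_plus = sum(c[i] * lam ** k for i, k in enumerate(idx) if k >= 0)
f_minus = sum(c[i] * lam ** k for i, k in enumerate(idx) if k < 0)

a_plus, a_minus = np.exp(f_plus), np.exp(f_minus)

# a = a_minus * lambda^kappa * a_plus on the sampled circle.
assert kappa == 1
assert np.allclose(a_minus * lam ** kappa * a_plus, a)
```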
Theorem 2.2 Let T be the Toeplitz operator on ℓ_p, 1 ≤ p < ∞, with symbol a ∈ A. Then T is left or right invertible if and only if a(e^{it}) ≠ 0 for −π ≤ t ≤ π. Assume the latter condition is satisfied and let

a(λ) = a₋(λ) λ^κ a₊(λ) (2.9)

be a Wiener-Hopf factorization of a. Then

(i) T is left invertible if and only if κ ≥ 0, and in that case codim Im T = κ,

(ii) T is right invertible if and only if κ ≤ 0, and in that case dim Ker T = −κ,

and in both cases a one-sided inverse of T is given by

T^{(−1)} = T_{a₊⁻¹} S^{(−κ)} T_{a₋⁻¹}, (2.10)

where

S^{(n)} = { Sⁿ,  n = 0, 1, 2, ...,
           (S^{(−1)})^{−n},  n = −1, −2, ..., (2.11)

with S the forward shift and S^{(−1)} the backward shift on ℓ_p.
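A small finite-dimensional sketch (not from the text) of the shifts appearing in (2.11): the backward shift is a left inverse of the forward shift S but not a right inverse, which is exactly the one-sided invertibility exploited above:

```python
import numpy as np

N = 6
S = np.diag(np.ones(N - 1), -1)      # finite section of the forward shift S
S_back = np.diag(np.ones(N - 1), 1)  # finite section of the backward shift S^{(-1)}

# S S^{(-1)} = I - (projection onto the 0-th coordinate): S is not right invertible.
E00 = np.zeros((N, N)); E00[0, 0] = 1.0
assert np.allclose(S @ S_back, np.eye(N) - E00)

# S^{(-1)} S = I on the infinite space; the finite section only misses the
# last coordinate (a truncation artifact).
assert np.allclose((S_back @ S)[:N - 1, :N - 1], np.eye(N - 1))
```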
Formula (1.9) in the previous section for the product of two Laurent operators
does not hold for Toeplitz operators. This fact complicates the study of Toeplitz
operators. However, we do have the following intermediate result.
Theorem 2.3 Let T_α and T_β be the Toeplitz operators on ℓ_p, 1 ≤ p < ∞, with symbols α and β from A. If α extends to a function which is analytic outside the unit disk including infinity, or β extends to a function which is analytic on the unit disk, then

T_αT_β = T_{αβ}. (2.12)
Proof: Let L_α and L_β be the Laurent operators on ℓ_p(Z) with symbols α and β, and let P be the projection of ℓ_p(Z) defined by (2.4). By identifying Im P with ℓ_p we see that

T_α = PL_α|Im P,  T_β = PL_β|Im P.

Put Q = I − P. Since L_{αβ} = L_αL_β, we have

T_{αβ} = PL_αL_βP|Im P = (PL_αP)(PL_βP)|Im P + (PL_αQ)(QL_βP)|Im P.

If α is analytic outside the unit disk including infinity, then a_n = 0 for n ≥ 1, so PL_αQ = 0; if β is analytic on the unit disk, then the coefficients of β with negative index vanish, so QL_βP = 0. In either case the second term vanishes, and T_αT_β = T_{αβ}. □
The order of the factors in (2.12) is important, because in general T_αT_β ≠ T_βT_α. However, as we shall see later (Theorem 2.5 below), for α and β in A the difference T_αT_β − T_βT_α is always compact. We are now ready to prove Theorem 2.2.
Proof of Theorem 2.2: We split the proof into two parts. In the first part we prove the necessity of the condition a(λ) ≠ 0 for λ ∈ T. The second part concerns the reverse implication and the proof of (2.10).

Part 1. Assume T is left or right invertible, and let a(λ₀) = 0 for some λ₀ ∈ T. We want to show that these assumptions are contradictory. To do this we use the same line of reasoning as in the second part of the proof of Theorem 1.1. Since T is left or right invertible, we know from Section XII.6 that there exists ε > 0 such that T̃ on ℓ_p is left or right invertible whenever ‖T − T̃‖ < ε. Choose a positive integer N such that

Σ_{|n|>N} |a_n| < ε/2, (2.13)

and set

T_{a_N} = Σ_{n=−N}^{N} a_n S^{(n)},  a_N(λ) = Σ_{n=−N}^{N} a_n λⁿ. (2.14)
Put T̃ = T_{a_N} − a_N(λ₀)I. Then ‖T − T̃‖ < ε, and hence T̃ is left or right invertible, depending on T being left or right invertible. Next, write

a_N(λ) − a_N(λ₀) = ω₁(λ)(λ − λ₀) = (1 − λ₀λ⁻¹)ω₂(λ). (2.15)

Both ω₁ and ω₂ are trigonometric polynomials, and hence ω₁ and ω₂ belong to A. Let T_{ω₁} and T_{ω₂} be the corresponding Toeplitz operators. Notice that T̃ is the Toeplitz operator with symbol a_N(·) − a_N(λ₀). Thus (2.15) and Theorem 2.3 yield

T̃ = T_{ω₁}(S − λ₀I),  T̃ = (I − λ₀S^{(−1)})T_{ω₂}. (2.16)
We have already proved that T̃ is left or right invertible. Thus (2.16) shows that S − λ₀I is left invertible or I − λ₀S^{(−1)} is right invertible. Both are impossible because λ₀ ∈ T. Indeed, the spectra of S and S^{(−1)} are equal to the closed unit disc, and according to the stability results of Section XII.6 the operators λI − S and λI − S^{(−1)} are not one-sided invertible for any λ in the boundary of the spectrum, that is, for any λ ∈ T. We conclude that for T to be left or right invertible it is necessary that a(λ) ≠ 0 for each λ ∈ T.
Part 2. We assume that a does not vanish on T. So, by Theorem 2.1, the function a admits a Wiener-Hopf factorization. Let this factorization be given by (2.9). From the properties of the factors a₋ and a₊ in the right hand side of (2.9) we may conclude (using Theorem 2.3) that

T = T_{a₋} S^{(κ)} T_{a₊}. (2.17)

Thus T_{a₋} is invertible and (T_{a₋})⁻¹ = T_{a₋⁻¹}. Similarly, T_{a₊} is invertible and (T_{a₊})⁻¹ = T_{a₊⁻¹}. The fact that in the right hand side of (2.17) the factors T_{a₋} and T_{a₊} are invertible allows us to obtain the invertibility properties of T from those of S^{(κ)}.

From (2.11) we see that for κ ≥ 0

codim Im T = codim Im S^{(κ)} = κ. (2.18)

This proves item (i) of the theorem; item (ii) is proved in a similar way. Finally, from (2.17) and (2.18) it follows that the operator T^{(−1)} defined by (2.10) is a left or right inverse of T. □
The following theorem is the second main result of this section.
Theorem 2.4 Let T be the Toeplitz operator on ℓ_p, 1 ≤ p < ∞, with symbol a ∈ A. Then T is Fredholm on ℓ_p if and only if its symbol a does not vanish on the unit circle, and in that case ind T is equal to the negative of the winding number of a relative to zero.
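As a numerical companion to Theorem 2.4 (an illustration, not part of the text), the winding number of a sampled symbol, and hence the Fredholm index −κ, can be computed from argument increments along the unit circle; the three sample symbols are arbitrary choices:

```python
import numpy as np

def winding_number(symbol, M=2048):
    """Winding number about 0 of t -> symbol(e^{it}), via argument increments."""
    lam = np.exp(2j * np.pi * np.arange(M) / M)
    vals = symbol(lam)
    assert np.all(np.abs(vals) > 0), "symbol must not vanish on the unit circle"
    return int(round(np.sum(np.angle(vals / np.roll(vals, 1))) / (2 * np.pi)))

# a(l) = l - 0.5 winds once about 0, so T_a is Fredholm with ind T_a = -1.
assert winding_number(lambda l: l - 0.5) == 1
# a(l) = 2 + l does not wind about 0, so T_a is invertible (kappa = 0).
assert winding_number(lambda l: 2 + l) == 0
# a(l) = l^{-2}(1 - 0.3 l): winding number -2, so ind T_a = 2.
assert winding_number(lambda l: (1 - 0.3 * l) / l ** 2) == -2
```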
For the proof of Theorem 2.4 we need the following addition to Theorem 2.3.

Theorem 2.5 Let T_α and T_β be the Toeplitz operators on ℓ_p with symbols α and β from A. Then the commutator T_αT_β − T_βT_α is compact.
Proof: Let L_α and L_β be the Laurent operators on ℓ_p(Z) with symbols α and β, respectively, and let P be the projection of ℓ_p(Z) defined by (2.4). Put Q = I − P. We have already seen in the proof of Theorem 2.3 that

(PL_αP)(PL_βP) − (PL_βP)(PL_αP) = (PL_βQ)(QL_αP) − (PL_αQ)(QL_βP). (2.19)

Thus in order to prove that T_αT_β − T_βT_α is compact it suffices to show that the right hand side of (2.19) is compact. In fact, to complete the proof it suffices to show that for α ∈ A the operators PL_αQ and QL_αP are compact.

Put α_N(λ) = Σ_{n=−N}^{N} a_n λⁿ, where a_n is given by (2.2), and let L_{α_N} be the Laurent operator on ℓ_p(Z) with symbol α_N. Then

‖L_α − L_{α_N}‖ ≤ Σ_{|n|>N} |a_n| → 0  (N → ∞).

Thus, in order to prove that PL_αQ and QL_αP are compact, it is sufficient to show that PL_{α_N}Q and QL_{α_N}P are finite rank operators. But the latter is a consequence of the fact that α_N is a trigonometric polynomial. Indeed, for each x = (x_j)_{j∈Z} in ℓ_p(Z) we have

(PL_{α_N}Qx)_n = Σ_{k=1}^{N−n} a_{n+k} x_{−k},  n = 0, ..., N − 1,

and (PL_{α_N}Qx)_n = 0 for n ≥ N, so PL_{α_N}Q has finite rank; the operator QL_{α_N}P is treated similarly. □
Proof of Theorem 2.4: Assume a does not vanish on the unit circle. Then we can use Theorem 2.2 (i) and (ii) to show that T is Fredholm, and that

ind T = dim Ker T − codim Im T = −κ, (2.20)

where κ is the winding number of a relative to zero.

Conversely, assume T is Fredholm, and suppose a(λ₀) = 0 for some λ₀ ∈ T. As in the first part of the proof of Theorem 2.2, choose a Fredholm operator T̃ = T_{a_N} − a_N(λ₀)I with ‖T − T̃‖ small, and let T_ω be the Toeplitz operator with symbol ω given by a_N(λ) − a_N(λ₀) = ω(λ)(λ − λ₀). From Theorem 2.3 and (2.15) we see that T̃ = T_ω(S − λ₀I), where S is the forward shift on ℓ_p. Thus T_ω(S − λ₀I) is Fredholm. Theorem 2.5 tells us that (S − λ₀I)T_ω is a compact perturbation of T_ω(S − λ₀I). According to Theorem 4.1 in the previous chapter this yields that (S − λ₀I)T_ω is also Fredholm. Thus both T_ω(S − λ₀I) and (S − λ₀I)T_ω are Fredholm. But then we can use Lemma XV.2.5 to show that S − λ₀I is Fredholm. However, by Corollary XV.3.2, the latter is impossible because λ₀ ∈ T. So we reached a contradiction, and hence a(λ) ≠ 0 for each λ ∈ T whenever T is Fredholm. □
The following theorems show that Theorem 2.4 remains true for Toeplitz oper-
ators with continuous symbols provided we take p = 2. The result is a further
addition to Theorem III.4.1.
In this section we use the theory developed in the previous section to solve in ℓ_p the following infinite system of equations:

Σ_{k=0}^{∞} c^{|j−k|} x_k − λx_j = qʲ,  j = 0, 1, 2, .... (3.1)

The coefficients c^{|j−k|} form the Toeplitz matrix

T = [ 1   c   c²  ⋯
      c   1   c   ⋯
      c²  c   1   ⋯
      ⋮   ⋮   ⋮   ⋱ ]. (3.2)
16.3 An Illustrative Example
The symbol of T is

a(ζ) = Σ_{n=−∞}^{∞} c^{|n|} ζⁿ,  ζ ∈ T.

Notice that a is analytic in an annulus around the unit circle T, and hence the operator T is well-defined and bounded on ℓ_p (1 ≤ p < ∞).
Using the operator T the equation (3.1) can be rewritten as

Tx − λx = y,  y = (1, q, q², ...). (3.3)

Theorem 3.1 Let λ = 0. Then equation (3.1) has a unique solution in ℓ_p (1 ≤ p < ∞), namely,

x₀ = (e^{−ip} − q)/(e^{−ip} − e^{ip}),  x_j = x₀(q − e^{ip})q^{j−1},  j = 1, 2, .... (3.4)
Proof: First let us prove that the operator T is invertible. To do this, notice that its symbol a can be rewritten as

a(ζ) = c/(ζ − c) − c⁻¹/(ζ − c⁻¹) = (1 − c²)(1 − cζ⁻¹)⁻¹(1 − cζ)⁻¹,  ζ ∈ T. (3.5)

Hence a = a₋a₊ is a Wiener-Hopf factorization of a with κ = 0, where

a₋(ζ) = (1 − c²)/(1 − cζ⁻¹), (3.6)

a₊(ζ) = 1/(1 − cζ). (3.7)

In particular, a does not vanish on T, and by Theorem 2.2 the operator T is invertible with

T⁻¹ = T_{a₊}⁻¹ T_{a₋}⁻¹. (3.8)

Now, let us use (3.8) to compute the solution of (3.1). From (3.6) and (3.7) we see that

T_{a₋}⁻¹ = (1/(1 − c²))(I − cS′),  T_{a₊}⁻¹ = I − cS,
where S′ and S are the backward shift and the forward shift on ℓ_p, respectively. It follows that

T⁻¹ = (1/(1 − c²))(I − cS − cS′ + c²SS′),

and hence the matrix of T⁻¹ with respect to the standard basis of ℓ_p is tridiagonal:

T⁻¹ = (1/(1 − c²)) [ 1    −c    0     0    ⋯
                     −c   1+c²  −c    0    ⋯
                     0    −c    1+c²  −c   ⋯
                     ⋮     ⋱     ⋱     ⋱      ].
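For a real sample value c = 0.4 (in the text c may be complex), the tridiagonal formula for T⁻¹ can be verified on a large finite section; this check is an illustration, not from the text. Only the last row of the finite product is affected by truncation:

```python
import numpy as np

c, N = 0.4, 50

# Finite section of T: entries c^{|i-j|}.
i = np.arange(N)
T = c ** np.abs(i[:, None] - i[None, :])

# Finite section of the claimed tridiagonal inverse (note the 1 in the corner).
B = (np.diag(np.r_[1.0, (1 + c**2) * np.ones(N - 1)])
     - c * np.diag(np.ones(N - 1), 1)
     - c * np.diag(np.ones(N - 1), -1)) / (1 - c**2)

# Each row of B touches only three rows of T, so B @ T reproduces the identity
# exactly except in the last row, where the truncation cuts off one term.
assert np.allclose((B @ T)[:-1], np.eye(N)[:-1])
```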
Put y = (1, q, q², ...), and let x = (x₀, x₁, x₂, ...) be T⁻¹y. Then

x₀ = (1/(1 − c²))(1 − cq) = (c⁻¹ − q)/(c⁻¹ − c),

and for k ≥ 1 we have

x_k = (1/(1 − c²))(−cq^{k−1} + (1 + c²)qᵏ − cq^{k+1}) = (1/(1 − c²))(q − c)(1 − cq)q^{k−1},

which, with c = e^{ip}, is precisely (3.4). This proves Theorem 3.1. □

Next we consider the case λ ≠ 0. The symbol of T − λI is

ω(ζ) = a(ζ) − λ = (c − c⁻¹)ζ/((ζ − c)(ζ − c⁻¹)) − λ = −(λζ² − 2(λ cos p + i sin p)ζ + λ)/((ζ − c)(ζ − c⁻¹)),  ζ ∈ T,

where we have used that c = e^{ip}, so that c + c⁻¹ = 2 cos p and c − c⁻¹ = 2i sin p.
In the above expression for ω(ζ) the numerator is a quadratic polynomial and the product of its two roots z₁ and z₂ is equal to one. So z₂ = z₁⁻¹, and we can assume that |z₁| ≤ 1. Thus for λ ≠ 0 the symbol ω of T − λI is of the form

ω(ζ) = ω₋(ζ)ω₊(ζ),  ζ ∈ T, (3.11)

where

ω₋(ζ) = −λ(ζ − z₁)/(ζ − c) = −λ(1 − z₁ζ⁻¹)/(1 − cζ⁻¹), (3.12)

ω₊(ζ) = (ζ − z₁⁻¹)/(ζ − c⁻¹) = z₁⁻¹c(1 − z₁ζ)/(1 − cζ). (3.13)

Recall that z₁ and c are in the open unit disc. Thus ω₊(·)^{±1} extends to a function which is analytic on the unit disc, and ω₋(·)^{±1} extends to a function which is analytic outside the unit disc including infinity. Thus (3.11) is a Wiener-Hopf factorization with κ = 0, and we can apply Theorem 2.2 to show that T − λI is invertible. Moreover,

(T − λI)⁻¹ = T_{ω₊}⁻¹ T_{ω₋}⁻¹, (3.14)

where T_{ω₊}⁻¹ and T_{ω₋}⁻¹ are the Toeplitz operators with symbols 1/ω₊ and 1/ω₋, respectively.
Theorem 3.2 Let λ ≠ 0, and assume |z₁| < 1. Then the equation (3.1) has a unique solution in ℓ_p (1 ≤ p < ∞), and for q = e^{ip} this solution is given by

x_j = (2i sin p/(λ(1 − e^{ip}z₁))) z₁^{j+1}  (j = 0, 1, 2, ...). (3.15)

Proof: It remains to prove (3.15). Write y = (1, q, q², ...), where q = e^{ip}. Then, according to (3.14), the unique solution x = (x₀, x₁, x₂, ...) in ℓ_p (1 ≤ p < ∞) of (3.1) is given by

x = T_{ω₊}⁻¹ T_{ω₋}⁻¹ y. (3.16)
Recall that T_{ω₋}⁻¹ is an upper triangular Toeplitz operator. Hence, using (3.12), we have

T_{ω₋}⁻¹ y = ω₋(1/q)⁻¹ y = −(1/λ)((1 − cq)/(1 − z₁q)) y. (3.17)

Applying the lower triangular operator T_{ω₊}⁻¹ to this vector gives

x_j = −(1/λ)((1 − cq)/(1 − z₁q)) a_j  (j = 0, 1, 2, ...), (3.18)

where a₀, a₁, a₂, ... are the Taylor coefficients at zero of the function ω₊(ζ)⁻¹(1 − qζ)⁻¹.

Now, take q = e^{ip} (= c). Then we see from (3.13) that ω₊(ζ)⁻¹(1 − cζ)⁻¹ = z₁c⁻¹/(1 − z₁ζ), so that a_j = c⁻¹z₁^{j+1}, and hence

x_j = −(1/λ)((1 − c²)/(1 − z₁c)) c⁻¹z₁^{j+1} = (1/λ)((c − c⁻¹)/(1 − z₁c)) z₁^{j+1},  j = 0, 1, 2, ....

Since c − c⁻¹ = 2i sin p, this is (3.15). □
More precisely, σ(T) is an arc on a circle; the endpoints of the arc are

λ₋₁ = i sin p/(1 + cos p),  λ₊₁ = −i sin p/(1 − cos p).
Proof: From the proof of Theorem 3.1 we know that T is invertible. Thus λ ∈ σ(T) if and only if λ ≠ 0 and the roots z₁ = z₁(λ) and z₂ = z₁(λ)⁻¹ of the quadratic polynomial

λζ² − 2(λ cos p + i sin p)ζ + λ

lie on the unit circle. Since the sum of the roots equals (c + c⁻¹) + (c − c⁻¹)/λ, we have

τ(λ) = (z₁(λ) + z₁(λ)⁻¹)/2 = (1/2)(c + c⁻¹ + (c − c⁻¹)/λ).
16.4 Applications to Pair Operators
Notice that a nonzero complex number z lies on the unit circle if and only if (z + z⁻¹)/2 ∈ [−1, 1]. We conclude that λ ∈ σ(T) if and only if λ ≠ 0 and τ(λ) ∈ [−1, 1], and in that case

λ = (c − c⁻¹)/(2τ(λ) − c − c⁻¹).
Throughout this section (except in some of the remarks) we assume that α and β are functions from the class A, that is, α and β are analytic in an annulus containing the unit circle T. With these two functions we associate an operator M_{α,β} on ℓ_p(Z), where 1 ≤ p ≤ ∞ is fixed. The definition is as follows. Given x = (x_j)_{j∈Z} the element y = M_{α,β}x is the vector y = (y_j)_{j∈Z} with

y_j = Σ_{k=0}^{∞} a_{j−k} x_k + Σ_{k=−∞}^{−1} b_{j−k} x_k  (j ∈ Z), (4.1)

where for each n

a_n = (1/2π) ∫_{−π}^{π} α(e^{it}) e^{−int} dt,  b_n = (1/2π) ∫_{−π}^{π} β(e^{it}) e^{−int} dt. (4.2)

In other words,

M_{α,β} = L_αP + L_βQ, (4.3)

where L_α and L_β are the Laurent operators with symbols α and β, P is the projection of ℓ_p(Z) defined by (2.4), and Q = I − P. We call M_{α,β} the pair operator defined by α and β. We shall prove the following theorem.

Theorem 4.1 Let α, β be in A. The operator M_{α,β} on ℓ_p(Z) is Fredholm if and only if the functions α and β do not vanish on T, and in that case ind M_{α,β} is the negative of the winding number of γ = α/β relative to zero.
Before we prove the theorem it will be convenient first to analyze the operator M_{α,β} a bit better. Let P and Q be the complementary projections appearing in (4.3). Then ℓ_p(Z) = Im Q ⊕ Im P, and relative to this decomposition we can write M_{α,β} as a 2 × 2 operator matrix,

M_{α,β} = [ QL_β|Im Q   QL_α|Im P
            PL_β|Im Q   PL_α|Im P ] : Im Q ⊕ Im P → Im Q ⊕ Im P. (4.4)
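The pair operator is built from two Laurent operators and the complementary projections P and Q appearing in (4.3). A finite-section sketch with toy symbols (an illustration, not from the text) shows that the compression of M_{α,β} to the coordinates j ≥ 0 is just the matrix of L_α there, as in the Toeplitz identification (4.5):

```python
import numpy as np

N = 8
idx = np.arange(-N, N + 1)

def laurent(coeffs):
    """Finite section of a Laurent (bilateral Toeplitz) matrix: entry (i, j) = c_{i-j}."""
    return np.array([[coeffs.get(i - j, 0.0) for j in idx] for i in idx])

# Toy symbols alpha(l) = 2 + l and beta(l) = 1 - 0.5 l^{-1}, given by coefficients.
L_a = laurent({0: 2.0, 1: 1.0})
L_b = laurent({0: 1.0, -1: -0.5})

# P projects onto coordinates j >= 0, and Q = I - P.
P = np.diag((idx >= 0).astype(float))
Q = np.eye(2 * N + 1) - P

# Finite section of the pair operator M = L_alpha P + L_beta Q.
M = L_a @ P + L_b @ Q

# Its (j >= 0, k >= 0) block agrees with the Toeplitz matrix of alpha.
pos = idx >= 0
assert np.allclose(M[np.ix_(pos, pos)], L_a[np.ix_(pos, pos)])
```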
Identifying Im P with ℓ_p in the natural way, we have

PL_α|Im P = T_α, (4.5)

where T_α is the Toeplitz operator on ℓ_p with symbol α. In Section 2 (see the proof of Theorem 2.5) we have shown that the operator QL_αP is compact, and hence QL_α|Im P is a compact operator from Im P into Im Q. Similarly, the operator PL_β|Im Q from Im Q into Im P is compact. It remains to analyze QL_β|Im Q. Let J be the operator from ℓ_p onto Im Q that transforms the vector (x_j)_{j=0}^{∞} in ℓ_p into the vector (y_j)_{j∈Z}, where

y_j = x_{−j−1}  (j ≤ −1),  y_j = 0  (j ≥ 0). (4.6)

Then

J⁻¹(QL_β|Im Q)J = T_{β#}, (4.7)

where

β#(λ) = β(λ⁻¹),  λ ∈ T. (4.8)
Indeed, let b_ν be the coefficient of λ^ν in the Laurent series expansion of β. Then for x = (x_j)_{j=0}^{∞} in ℓ_p and Jx = y = (y_j)_{j∈Z}, where y_j is given by (4.6), we have for n ≥ 0

(J⁻¹(QL_β)Jx)_n = (L_βy)_{−n−1} = Σ_{k=−∞}^{−1} b_{−n−1−k} y_k = Σ_{k=−∞}^{−1} b_{−n−1−k} x_{−k−1}

= Σ_{ℓ=0}^{∞} b_{ℓ−n} x_ℓ = Σ_{ℓ=0}^{∞} b#_{n−ℓ} x_ℓ,

where b#_ν is the coefficient of λ^ν in the Laurent series expansion of β#. Notice that β# is also analytic in an annulus containing the unit circle, that is, β# ∈ A. We are now ready to prove Theorem 4.1.
Proof of Theorem 4.1: Consider the representation (4.4). Since the operators QL_α|Im P and PL_β|Im Q are compact, M_{α,β} and

[ QL_β|Im Q      0
  0        PL_α|Im P ] (4.9)

differ by a compact operator. Thus, by Theorem 3.1 in the previous chapter, the operator M_{α,β} is Fredholm if and only if the operator (4.9) is Fredholm. But the latter happens if and only if PL_α|Im P and QL_β|Im Q are Fredholm. Now use (4.5) and (4.7). It follows that M_{α,β} is Fredholm if and only if T_α and T_{β#} are Fredholm. Now apply Theorem 2.4 to T_α and T_{β#}. Notice that β# does not vanish on T if and only if the same holds true for β. We conclude that M_{α,β} is Fredholm if and only if α and β do not vanish on T.

Now, assume α and β do not vanish on T, and put γ = α/β. By applying Theorem 1.1 to L_β we see that L_β is invertible, and thus, using formula (1.9) in Section 1, we have

M_{α,β} = L_αP + L_βQ = L_β(L_γP + Q).
Here we used formula (1.3) of Section III.1. Thus QL_αP is the limit in the operator norm of a sequence of finite rank operators, and therefore QL_αP is compact. In a similar way one shows that PL_βQ is compact. We can now repeat in precisely the same way all the arguments in the proof of Theorem 4.1 with the exception of three. The first two concern the two references to Theorem 2.4, which have to be replaced by references to Theorem 2.6. The third concerns the reference to Theorem 1.1, which has to be replaced by a reference to Theorem III.1.2. With these changes the proof of Theorem 4.1 also works for the case when α and β are continuous on T provided p = 2.
Theorem 4.2 Let M_{α,β} be the pair operator on ℓ_p(Z) with α and β in A. Then M_{α,β} is left or right invertible if and only if α and β do not vanish on T. Assume the latter condition holds, and let κ be the winding number of γ = α/β relative to zero. Then

(i) M_{α,β} is left invertible if and only if κ ≥ 0, and in that case codim Im M_{α,β} = κ,

(ii) M_{α,β} is right invertible if and only if κ ≤ 0, and in that case dim Ker M_{α,β} = −κ.

In both cases, if

γ(λ) = γ₋(λ) λ^κ γ₊(λ),  λ ∈ T, (4.10)

is a Wiener-Hopf factorization of γ, then a left or right inverse of M_{α,β} is given by

M_{α,β}^{(−1)} = (L_{γ₊}⁻¹P + L_{γ₋}Q)(V^{−κ}P + Q)L_{γ₋}⁻¹L_β⁻¹. (4.11)

Here L_ω denotes the Laurent operator on ℓ_p(Z) with symbol ω, and V is the bilateral forward shift on ℓ_p(Z).
Proof: We split the proof in three parts. The first part contains some general facts
on the function space A.
Indeed, if the coefficients of α satisfy |a_n| ≤ cρ^{|n|} as in (1.2), then for ρ < r < 1

Σ_{n=−∞}^{∞} |a_n − a_n rⁿ| ≤ c Σ_{n=0}^{∞} (ρⁿ − ρⁿrⁿ) + c Σ_{n=1}^{∞} ((ρ/r)ⁿ − ρⁿ),

and the right hand side tends to zero as r ↑ 1.
The left or right invertibility of M_{α,β} also implies that the functions α and β are not identically equal to zero. This together with the fact that α and β are analytic in an annulus containing T shows that the zeros of α and β are isolated. So we can find 0 < r′ < 1 such that for r′ < r < 1 the functions α_r and β_r are in A and do not vanish on T. By definition, α_r(λ) = α(rλ) and β_r(λ) = β(rλ), and by the estimate above

‖M_{α,β} − M_{α_r,β_r}‖ → 0  (r ↑ 1). (4.13)

But then we can use (4.13) to show that for r′ < r < 1 and 1 − r sufficiently small we have

dim Ker M_{α_r,β_r} = dim Ker M_{α,β},  codim Im M_{α_r,β_r} = codim Im M_{α,β}. (4.14)
However, since α_r and β_r do not vanish on T for r′ < r < 1, the operator M_{α_r,β_r} is Fredholm (by Theorem 4.1), and hence the identities in (4.14) show that M_{α,β} is Fredholm too. But then, again using Theorem 4.1, we can conclude that α and β do not vanish on T.
Part 3. In this part we assume that α and β do not vanish on T, and we prove the statements (i) and (ii) and the inversion formula (4.11). Our assumption on α and β implies that

M_{α,β} = L_β(L_γP + Q), (4.15)

where γ = α/β. The function γ does not vanish on T, and hence it admits a Wiener-Hopf factorization as in (4.10), where κ is the winding number of γ relative to zero. Let L_{γ₋} and L_{γ₊} be the Laurent operators with symbols γ₋ and γ₊. The fact that γ₋ and γ₊ do not vanish on T yields

L_{γ₋}⁻¹ = L_{γ₋⁻¹},  L_{γ₊}⁻¹ = L_{γ₊⁻¹}. (4.16)
Also, since γ(λ) = γ₋(λ)λ^κγ₊(λ),

L_γ = L_{γ₋} V^κ L_{γ₊}, (4.17)

where V is the bilateral forward shift on ℓ_p(Z). Hence, by (4.16) and (4.17),

M_{α,β} = L_βL_{γ₋}(V^κP + Q)(L_{γ₊}P + L_{γ₋}⁻¹Q). (4.18)
Moreover, all factors in the right hand side of (4.18) are invertible with the possible exception of V^κP + Q. Now, notice that V^κP + Q is left invertible if κ ≥ 0 and right invertible if κ ≤ 0, and in both cases a left or right inverse of V^κP + Q is given by

(V^κP + Q)^{(−1)} = V^{−κ}P + Q. (4.19)

Also,

codim Im (V^κP + Q) = codim Im T_{λ^κ} = κ  (κ ≥ 0), (4.20)

dim Ker (V^κP + Q) = dim Ker T_{λ^κ} = −κ  (κ ≤ 0). (4.21)

From (4.18)-(4.21) and the invertibility of all factors in the right hand side of (4.18) different from V^κP + Q we see that statements (i) and (ii) hold true, and a left or right inverse of M_{α,β} is given by

M_{α,β}^{(−1)} = (L_{γ₊}P + L_{γ₋}⁻¹Q)⁻¹(V^{−κ}P + Q)L_{γ₋}⁻¹L_β⁻¹. (4.22)

The latter formula together with (4.16) and (4.17) yields (4.11), which completes the proof. □
We conclude this section with some further information about the pair operator M_{α,β} for the case when this operator acts on ℓ₂(Z) and the functions α and β are not required to belong to the class A but are merely continuous. In that case the first part of Theorem 4.2 still holds true, that is, the following theorem holds.

Theorem 4.3 Let M_{α,β} be the pair operator on ℓ₂(Z), with α and β being continuous on the unit circle. Then M_{α,β} is left or right invertible if and only if α and β do not vanish on T. Assume the latter condition holds, and let κ be the winding number of γ = α/β relative to zero. Then

(i) M_{α,β} is left invertible if and only if κ ≥ 0, and in that case codim Im M_{α,β} = κ,

(ii) M_{α,β} is right invertible if and only if κ ≤ 0, and in that case dim Ker M_{α,β} = −κ.

In particular, M_{α,β} is two-sided invertible if and only if κ = 0.
Proof: We split the proof into two parts. In the first part we prove the necessity of the condition on α and β. The second part concerns the reverse implication and statements (i) and (ii).

Part 1. Let M_{α,β} have a left or right inverse, and assume α(λ₀) = 0 for some λ₀ ∈ T. By the results of Section XII.6 there exists ε > 0 such that the operator M̃ on ℓ₂(Z) is left or right invertible whenever ‖M̃ − M_{α,β}‖ < ε. Now, pick a trigonometric polynomial ã such that |α(e^{it}) − ã(e^{it})| < ε/4 for each −π ≤ t ≤ π. Put α̃ = ã − ã(λ₀). Then α̃(λ₀) = 0 and

|α(e^{it}) − α̃(e^{it})| < ε/2,  −π ≤ t ≤ π. (4.22)

Similarly, pick a trigonometric polynomial β̃ with |β(e^{it}) − β̃(e^{it})| < ε/2 for −π ≤ t ≤ π. Then ‖M_{α,β} − M_{α̃,β̃}‖ < ε, and thus M_{α̃,β̃} is left or right invertible. Both α̃ and β̃ belong to the class A. Thus Theorem 4.2 implies that α̃ does not vanish on T, which contradicts the fact that α̃(λ₀) = 0. Therefore, if M_{α,β} is left or right invertible, then α does not vanish on T. The analogous result for β is proved in the same way.
Part 2. In this part we assume that α and β do not vanish on T. Let κ be the winding number of γ = α/β relative to zero. Notice that

M_{α,β} = L_β(PL_γP + Q)(I + QL_γP). (4.24)

The operator L_β is invertible by Theorem III.1.2. Since (QL_γP)² is the zero operator, we have that I + QL_γP is invertible too. Next, recall (see formula (4.5)) that we can identify the operator PL_γ|Im P with the Toeplitz operator T_γ. But then we can use Theorem III.4.1 to show that PL_γP + Q is left or right invertible. The invertibility of the factors L_β and I + QL_γP in (4.24) now yields that M_{α,β} is also left or right invertible. Furthermore, by applying (i) and (ii) in Theorem III.4.1 to T = T_γ, formula (4.24) also proves the statements (i) and (ii) of the present theorem. □
We conclude this section with a remark about operators that can be considered as conjugates of pair operators. Fix 1 ≤ p ≤ ∞, and let α and β belong to the class A. Given x = (x_k)_{k∈Z} in ℓ_p(Z) we define

K_{α,β}x = PL_αx + QL_βx, (4.25)

where L_α and L_β are the Laurent operators with symbols α and β, the operator P is the orthogonal projection of ℓ_p(Z) onto the subspace consisting of all vectors (x_j)_{j∈Z} with x_j = 0 for j < 0, and Q = I − P. We shall refer to K_{α,β} as the associate pair operator defined by α and β.

From the representations (4.25) and (4.3) it follows that the conjugate of K_{α,β} is the operator M_{α#,β#}, where α#(λ) = α(λ⁻¹) and β#(λ) = β(λ⁻¹). This connection allows us to conclude that, with appropriate modifications, Theorems 4.1-4.3 remain true if the pair operator M_{α,β} is replaced by the associate pair operator K_{α,β}. For instance, K_{α,β} is Fredholm if and only if the functions α and β do not vanish. Moreover, in that case ind K_{α,β} = −ind M_{α#,β#}, and hence ind K_{α,β} is again equal to the negative of the winding number of α/β relative to zero. In a similar way, one derives the invertibility properties (one- or two-sided) of K_{α,β}.
In this section we study the convergence of the finite section method for the Laurent operators, Toeplitz operators and pair operators considered in the previous sections. We begin with the Toeplitz operators on ℓ_p. Throughout, 1 ≤ p < ∞. For each n ≥ 1 let P_n be the projection of ℓ_p defined by

P_n(x₀, x₁, x₂, ...) = (x₀, ..., x_{n−1}, 0, 0, ...). (5.1)

Notice that ‖P_n‖ = 1, and P_nx → x (n → ∞) for each x ∈ ℓ_p. Here we use that p < ∞. Indeed, for x = (x₀, x₁, x₂, ...) we have

‖x − P_nx‖^p = Σ_{j=n}^{∞} |x_j|^p → 0  (n → ∞).

Now let T be an invertible operator on ℓ_p. We say that the finite section method converges for T if for n sufficiently large, n ≥ n₀ say, the operator P_nTP_n on Im P_n is invertible and for each y ∈ ℓ_p the vector x^{(n)} = (P_nTP_n)⁻¹P_ny, n ≥ n₀, converges to a solution x of Tx = y.

Theorem 5.1 For an invertible Toeplitz operator on ℓ_p, 1 ≤ p < ∞, with symbol from the class A, the finite section method converges.
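As an illustration of Theorem 5.1 (not from the text), take the sample symbol a(λ) = 0.5λ⁻¹ + 2 + λ; its real part on the unit circle is at least 0.5, so a does not vanish there and has winding number zero, and one can watch the finite-section solutions converge:

```python
import numpy as np

# Finite sections of the Toeplitz operator with symbol a(l) = 0.5 l^{-1} + 2 + l:
# subdiagonal a_1 = 1, diagonal a_0 = 2, superdiagonal a_{-1} = 0.5.
def section(n):
    return (2 * np.eye(n) + np.diag(np.ones(n - 1), -1)
            + 0.5 * np.diag(np.ones(n - 1), 1))

def rhs(n):
    return np.eye(n)[:, 0]  # right-hand side y = e_0

# High-order reference solution standing in for the true solution of T x = e_0.
ref = np.linalg.solve(section(400), rhs(400))

def err(n):
    x_n = np.linalg.solve(section(n), rhs(n))
    return np.linalg.norm(x_n - ref[:n]) + np.linalg.norm(ref[n:])

# The finite-section solutions converge (geometrically, in this example).
assert err(40) < err(20) < err(10)
assert err(40) < 1e-6
```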
Proof: Since T is invertible, Theorem 2.2 shows that the symbol a admits a Wiener-Hopf factorization a = a₋a₊ with κ = 0, so that T = T_{a₋}T_{a₊} with

(T_{a₋})⁻¹ = T_{a₋⁻¹},  (T_{a₊})⁻¹ = T_{a₊⁻¹}.

Put F = T_{a₊}T_{a₋}. Then F is invertible, and by Theorem 2.5 the operators T and F differ by a compact operator. Therefore, by Theorem XIII.3.5, it suffices to prove that the finite section method converges for F.

From the analyticity properties of the functions a₋^{±1} and a₊^{±1} it follows that for each n we have

T_{a₋}P_n = P_nT_{a₋}P_n,  T_{a₋⁻¹}P_n = P_nT_{a₋⁻¹}P_n, (5.2)

P_nT_{a₊} = P_nT_{a₊}P_n,  P_nT_{a₊⁻¹} = P_nT_{a₊⁻¹}P_n. (5.3)

Indeed, since P_n² = P_n, the first identities in (5.2) and (5.3) yield

P_nFP_n = (P_nT_{a₊}P_n)(P_nT_{a₋}P_n),

and the second identities show that P_nT_{a₊}P_n and P_nT_{a₋}P_n are invertible on Im P_n with inverses P_nT_{a₊⁻¹}P_n and P_nT_{a₋⁻¹}P_n, respectively. Hence P_nFP_n is invertible on Im P_n with sup_n ‖(P_nFP_n)⁻¹‖ < ∞, and the finite section method converges for F. □
Here Q_n denotes the projection of ℓ_p(Z) given by Q_n(x_j)_{j∈Z} = (y_j)_{j∈Z}, where y_j = x_j for |j| ≤ n and y_j = 0 for |j| > n. The projections Q_n all have norm one, and Q_nx → x (n → ∞) for each x ∈ ℓ_p(Z) (because 1 ≤ p < ∞). We say that the finite section method converges for an invertible operator L on ℓ_p(Z) if for n sufficiently large, n ≥ n₀ say, the operator Q_nLQ_n on Im Q_n is invertible and for each y ∈ ℓ_p(Z) the vector x^{(n)} = (Q_nLQ_n)⁻¹Q_ny, n ≥ n₀, converges to a solution x of Lx = y. In other words, the finite section method converges for L if and only if L ∈ Π{Q_n}. We shall prove the following theorem.

Theorem 5.2 Let L_a be an invertible Laurent operator on ℓ_p(Z), 1 ≤ p < ∞, with symbol a ∈ A. Then the finite section method converges for L_a if and only if the winding number of a relative to zero is equal to zero.
Proof: The fact that a does not vanish on the circle follows from the invertibility of L_a (use Theorem 1.1). Let P_n be the projection of ℓ_p defined by (5.1), and define J_n to be the map from Im Q_n to Im P_{2n+1} given by

J_n(..., 0, x_{−n}, ..., x_n, 0, ...) = (x_{−n}, ..., x_n, 0, 0, ...). (5.7)

From the definitions of Q_n and P_n it follows that J_n is a linear operator which maps Im Q_n in a one to one way onto Im P_{2n+1}. Moreover, ‖J_nx‖ = ‖x‖ for each x ∈ Im Q_n. A straightforward calculation shows that

J_n(Q_nL_aQ_n)J_n⁻¹ = P_{2n+1}T_aP_{2n+1}|Im P_{2n+1}, (5.8)

where T_a is the Toeplitz operator with symbol a. Since J_n is one to one and onto, the operator Q_nL_aQ_n is invertible on Im Q_n if and only if P_{2n+1}T_aP_{2n+1} is invertible on Im P_{2n+1}, and in that case

(Q_nL_aQ_n)⁻¹ = J_n⁻¹(P_{2n+1}T_aP_{2n+1})⁻¹J_n. (5.9)
Next, define J_n′ from Im Q_n to Im P_{2n+1} by

J_n′(..., 0, x_{−n}, ..., x_n, 0, ...) = (x_n, x_{n−1}, ..., x_{−n}, 0, 0, ...). (5.10)

Notice that the right hand side of (5.10) may be obtained from the right hand side of (5.7) by reversing the order of the first 2n + 1 entries. Again J_n′ is a linear operator which maps Im Q_n in a one to one way onto Im P_{2n+1}, and J_n′ is norm preserving. Moreover, a computation like the one for (5.8) gives

J_n′(Q_nL_aQ_n)(J_n′)⁻¹ = P_{2n+1}T_{a#}P_{2n+1}|Im P_{2n+1}, (5.11)

where

a#(λ) = a(λ⁻¹),  λ ∈ T. (5.12)

As mentioned in the previous section (see the paragraph before the proof of Theorem 4.1), the function a# ∈ A. From (5.11) it follows that Q_nL_aQ_n is invertible on Im Q_n if and only if the same holds true for P_{2n+1}T_{a#}P_{2n+1} on Im P_{2n+1}, and in that case

(Q_nL_aQ_n)⁻¹ = (J_n′)⁻¹(P_{2n+1}T_{a#}P_{2n+1})⁻¹J_n′. (5.13)
Now, assume that the finite section method converges for La . Then, by Theorem
XI!. 7.1, the operator QnLa Qn is invertible on Im Q for n sufficiently large, n 2: N
say, and sUPn ::':N II(QnLaQn)-11l < 00 . Using (5.9) and (5.13) it follows that
sup II (P2n+lTa P2n+l)-1 II < 00, sup II (P2n+l Ta#P2n+j)- 11l < 00. (5.14)
n::':N n::,:N
388 Chapter XVI Toeplitz and Singular Integral Operators
By the remark made at the end of Section 11.17 (which carries over to a Banach
space setting) it follows that both Tcx and Ta# are one to one. Let K be the winding
number of a with respect to zero. Since Ta is one to one, Theorem 2.2 implies
that K 2: O. Notice that a# does not vanish on T and its winding number relative
to zero equals to -K. Then Theorem 2.2 applied to Ta # yields -K 2: O. Hence
K = 0 as desired.
Finally, let us assume that the winding number of α relative to zero is equal to zero. Then T_α is invertible (by Theorem 2.2), and according to Theorem 5.1 we have T_α ∈ Π{P_n}. But then we can apply Theorem XII.7.1 to show that the first inequality in (5.14) holds for some positive integer N, and hence (5.9) yields sup_{n≥N} ‖(Q_n L_α Q_n)^{-1}‖ < ∞. So, using Theorem XII.7.1 again, we can conclude that L_α ∈ Π{Q_n}. □
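The convergence established above can be watched numerically: truncate the doubly infinite Laurent matrix to the sections Q_n L_α Q_n and solve. The sketch below is illustrative only — the symbol α(λ) = λ^{−1} − 5/2 (which does not vanish on the unit circle and has winding number zero about the origin) and the elementary solver are choices made here, not taken from the text.

```python
# Finite section method for the Laurent operator L_alpha on l2(Z) with the
# illustrative symbol alpha(lambda) = lambda^{-1} - 5/2; solve the truncated
# system Q_n L_alpha Q_n x = e_0 on span{e_-n, ..., e_n}.

def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def finite_section(n):
    size = 2 * n + 1
    # Laurent matrix entries a_{j-k}: a_{-1} = 1, a_0 = -5/2, all others 0.
    A = [[1.0 if j - k == -1 else (-2.5 if j == k else 0.0)
          for k in range(size)] for j in range(size)]
    b = [1.0 if j == n else 0.0 for j in range(size)]  # e_0 sits at index n
    return solve(A, b)

x = finite_section(20)
# The inverse has symbol 1/alpha, whose Laurent coefficients give
# (L_alpha^{-1} e_0)_j = -(2/5)^{1-j} for j <= 0 and 0 for j > 0.
print(x[20], x[19], x[21])   # approx -0.4 -0.16 0.0
```

For this particular symbol the truncation error vanishes at the center entries; in general the sections converge entrywise, which is the content of the theorem above.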
We conclude this section with an analysis of the convergence of the finite section method for the pair operator M_{α,β} introduced in the previous section. Here α and β belong to A. Recall that for an invertible operator L on ℓ_p(Z) the convergence of the finite section method is defined in the paragraph preceding Theorem 5.2.

Theorem 5.3 Assume the operator M_{α,β} on ℓ_p(Z), 1 ≤ p < ∞, is invertible (i.e., α and β do not vanish on the unit circle). In order that the finite section method converge for M_{α,β} it is necessary and sufficient that the winding numbers of α and β relative to zero are equal to zero.
Proof: Since M_{α,β} is invertible, we know from Theorem 4.2 that α and β do not vanish on T, and the winding number of α relative to zero is equal to the winding number of β relative to zero. We denote this winding number by κ. It remains to prove the statement about the convergence of the finite section method. We split the proof in two parts.

Part 1. In this part we assume that the finite section method converges for M_{α,β}. We have to show that the number κ introduced in the previous paragraph is equal to zero. Assume κ > 0. Let P̂ be the projection of ℓ_p(Z) defined by

  P̂((x_j)_{j∈Z}) = (y_j)_{j∈Z},   y_j = x_j for j ≥ κ, and y_j = 0 otherwise.

Put Q̂ = I − P̂, and let H_{α,β} = M_{α,β} − (P̂ L_α P̂ + Q̂ L_β Q̂). The fact that α and β belong to A implies that Q L_α P and P L_β Q are compact (as we have seen in the proof of Theorem 2.5). Since P − P̂ is an operator of finite rank, the operators (P − P̂) L_α P̂ and (P − P̂) L_β Q̂ are also compact. Hence H_{α,β} is compact.
16.5 The Finite Section Method Revisited 389
Next, we show that the operator M̂_{α,β} = M_{α,β} − H_{α,β} is invertible. Notice that M̂_{α,β} = P̂ L_α P̂ + Q̂ L_β Q̂. Thus it suffices to show that P̂ L_α P̂ from Im P̂ to Im P̂ and Q̂ L_β Q̂ from Im Q̂ to Im Q̂ are invertible. To see that P̂ L_α P̂ is invertible from Im P̂ to Im P̂, let J : Im P̂ → ℓ_p and Ĵ : Im P̂ → ℓ_p be defined by

  (5.15)

Notice that the function Φ(λ) = λ^{−κ} α(λ) does not vanish on T and has winding number zero relative to zero. By Theorem 2.2 this implies that the Toeplitz operator T_{λ^{−κ}α} is invertible. Next, observe that both J and Ĵ are invertible. Hence (5.15) shows that P̂ L_α P̂ is an invertible operator from Im P̂ onto Im P̂. In a similar way one shows that up to invertible factors the operator Q̂ L_β Q̂ from Im Q̂ to Im Q̂ is equal to the Toeplitz operator T_{λ^{κ}β^#}, where β^#(λ) = β(1/λ). We know that λ^{κ}β^# does not vanish on T and its winding number relative to the origin is equal to zero. So T_{λ^{κ}β^#} is invertible, and hence Q̂ L_β Q̂ : Im Q̂ → Im Q̂ is invertible.
Recall that the finite section method converges for M_{α,β}. By Theorem XIV.3.5, the compactness of H_{α,β} and the invertibility of M̂_{α,β} = M_{α,β} − H_{α,β} imply that the finite section method converges for M̂_{α,β}. Thus for n sufficiently large, the operator Q_n M̂_{α,β} Q_n on Im Q_n is invertible. We shall show that this is impossible. Indeed, let e_j = (δ_{ij})_{i∈Z}, where δ_{ij} is the Kronecker delta, and take n > κ. Notice that the vectors e_{−n}, e_{−n+1}, …, e_{n−1}, e_n form a basis of Im Q_n, and the matrix of the operator Q_n M̂_{α,β} Q_n on Im Q_n relative to this basis has the form:
  ⎡ b_0        …  b_{−n+1}    0     …    0       ⎤
  ⎢   ⋮             ⋮         ⋮          ⋮       ⎥
  ⎢ b_{n−1}    …  b_0         0     …    0       ⎥
  ⎢ b_n        …  b_1         0     …    0       ⎥
  ⎢   ⋮             ⋮         ⋮          ⋮       ⎥
  ⎢ b_{n+κ−1}  …  b_κ         0     …    0       ⎥    (5.16)
  ⎢ 0          …  0          a_κ    …   a_{κ−n}  ⎥
  ⎢   ⋮             ⋮         ⋮          ⋮       ⎥
  ⎣ 0          …  0          a_n    …   a_0      ⎦

Here a_ν and b_ν are the coefficients of λ^ν in the Laurent series expansions of α and β, respectively. The matrix (5.16) is a square matrix of order 2n + 1, and it partitions as a 2 × 2 block diagonal matrix

  ⎡ B  0 ⎤
  ⎣ 0  A ⎦ ,

where B is the (n + κ) × n matrix formed by the b's and A is the (n + 1 − κ) × (n + 1) matrix formed by the a's. Since B has only n columns and A has only n + 1 − κ rows, the rank of (5.16) is at most n + (n + 1 − κ) = 2n + 1 − κ < 2n + 1, and hence (5.16) cannot be invertible, a contradiction.
Part 2. In this part we assume that κ = 0, and we prove that the finite section method converges for M_{α,β}. Let P be the projection of ℓ_p(Z) defined by

  P((x_j)_{j∈Z}) = (y_j)_{j∈Z},   y_j = x_j for j = 0, 1, 2, …, and y_j = 0 otherwise.

Put Q = I − P. Then

  M_{α,β} = P L_α P + Q L_β Q + Q L_α P + P L_β Q.
As we have seen in the previous section, the operator Q L_α P + P L_β Q is compact and up to invertible factors the operators P L_α P on Im P and Q L_β Q on Im Q are equal to the Toeplitz operators T_α and T_{β^#}, respectively. By our assumptions on α and β the Toeplitz operators T_α and T_{β^#} are invertible, and hence the operator P L_α P + Q L_β Q is invertible too. By Theorem 5.1 the invertibility of T_α and T_{β^#} also implies that these operators belong to Π{P_n}, where P_n is the projection of ℓ_p defined by (5.1). Thus for n sufficiently large, n ≥ N say, the operators P_n T_α P_n and P_n T_{β^#} P_n on Im P_n are invertible and

  sup_{n≥N} ‖(P_n T_α P_n)^{-1}‖ < ∞,   sup_{n≥N} ‖(P_n T_{β^#} P_n)^{-1}‖ < ∞.

But this implies (use formulas (5.5) and (5.7) in the previous section) that for n ≥ N + 1 the operator Q_n (P L_α P + Q L_β Q) Q_n is invertible and

  sup_{n≥N+1} ‖{Q_n (P L_α P + Q L_β Q) Q_n}^{-1}‖ < ∞.
Since P L_α P + Q L_β Q is invertible, we can use Theorem XII.7.1 to show that the finite section method converges for this operator. Our assumptions on α and β also imply that the operator M_{α,β} is invertible (by Theorem 4.2). Thus M_{α,β} is an invertible operator which is a compact perturbation of an invertible operator for which the finite section method converges. But then Theorem XIV.3.5 implies that the finite section method converges for M_{α,β}. □
In this section we study the inversion of the following singular integral equation on the unit circle T:

  ω(λ) f(λ) + (1/πi) ∮_T k(λ, z)/(z − λ) f(z) dz = g(λ),   λ ∈ T.   (6.1)

The integral in (6.1) has to be understood as a principal value integral. The precise meaning of the left hand side of (6.1) will be explained by the next lemma and its proof.
  ∮_T 1/(z − 1) dz = lim_{ε↓0} ( ∫_{−π}^{−ε} ie^{it}/(e^{it} − 1) dt + ∫_{ε}^{π} ie^{it}/(e^{it} − 1) dt ) = πi.   (6.4)

To see this, use the identity

  ie^{it}(e^{it} − 1)^{−1} = ½ i + ½ sin t (1 − cos t)^{−1},

and the fact that the second term in the right hand side of this identity is an odd function of t. The integral in the left hand side of (6.4) does not change if we replace z by λ^{−1}z, where λ ∈ T is fixed. Thus

  ∮_T 1/(z − λ) dz = πi,   λ ∈ T.   (6.5)

Moreover, for each λ ∈ T,

  (1/πi) ∮_T z^m/(z − λ) dz = λ^m for m ≥ 0, and −λ^m for m < 0.   (6.6)

Next, notice that for m ≥ 0

  z^m = λ^m + (z − λ)(z^{m−1} + λ z^{m−2} + ⋯ + λ^{m−1}).   (6.7)

Since

  ∮_T z^k dz = ∫_{−π}^{π} i e^{ikt} e^{it} dt = 2πi for k = −1, and 0 otherwise,   (6.8)

we see from (6.7) and (6.5) that (6.6) holds for m ≥ 0. For m < 0 we get the desired equality by replacing (6.7) by

  z^m = λ^m − (z − λ)(λ^m z^{−1} + λ^{m+1} z^{−2} + ⋯ + λ^{−1} z^{m}).
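The principal value (6.4) can be checked numerically. In the sketch below (an illustrative quadrature, not part of the text) the excluded arc is symmetric about the singularity, so the odd part of the integrand cancels exactly as in the argument above:

```python
import cmath, math

def pv_integral(f, eps=1e-3, n=20000):
    """Principal value of the contour integral of f over the unit circle,
    for a singularity at z = 1 (t = 0): integrate over [-pi, -eps] and
    [eps, pi] with the midpoint rule, eps small."""
    h = (math.pi - eps) / n
    total = 0.0 + 0.0j
    for k in range(n):
        t = eps + (k + 0.5) * h
        for s in (t, -t):                  # nodes symmetric about t = 0
            z = cmath.exp(1j * s)
            total += f(z) * 1j * z * h     # dz = i e^{it} dt
    return total

# (6.4): p.v. integral of 1/(z-1) over the unit circle equals pi*i.
val = pv_integral(lambda z: 1.0 / (z - 1.0))
print(val)                                 # approx pi*i

# (6.6) with m = -1, lambda = 1: (1/(pi i)) p.v. integral of z^-1/(z-1) = -1.
val2 = pv_integral(lambda z: 1.0 / (z * (z - 1.0)))
print(val2 / (cmath.pi * 1j))              # approx -1
```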
The functions ε_n(e^{it}) = e^{int}, n ∈ Z, form an orthonormal basis of L_2(T). We let ℙ be the orthogonal projection of L_2(T) defined by

  ℙε_n = ε_n for n = 0, 1, 2, …, and ℙε_n = 0 otherwise.   (6.10)

Put ℚ = I − ℙ. Formulas (6.5) and (6.6) show that for each trigonometric polynomial φ we have

  (1/πi) ∮_T φ(z)/(z − λ) dz = ((ℙ − ℚ)φ)(λ),   λ ∈ T.   (6.11)

In the sequel, for any φ in L_2(T) the function given by the left hand side of (6.11) will by definition be equal to (ℙ − ℚ)φ. Thus we define for each φ ∈ L_2(T)

  (1/πi) ∮_T φ(z)/(z − λ) dz := ((ℙ − ℚ)φ)(λ),   λ ∈ T a.e.   (6.12)
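By (6.12), on a trigonometric polynomial the operator ℙ − ℚ simply flips the sign of the negatively indexed Fourier coefficients. A minimal sketch (the coefficient-dictionary representation is an illustrative choice):

```python
# The operator of singular integration acting on a trigonometric polynomial
# phi(z) = sum_n c_n z^n: by (6.11)-(6.12) it maps c_n to c_n for n >= 0
# and to -c_n for n < 0, i.e. S phi = (P - Q) phi.

def singular_integration(coeffs):
    """coeffs: dict n -> c_n. Return the coefficients of (P - Q) phi."""
    return {n: (c if n >= 0 else -c) for n, c in coeffs.items()}

def evaluate(coeffs, z):
    return sum(c * z ** n for n, c in coeffs.items())

phi = {-2: 5.0, -1: 2.0, 0: 3.0, 1: 4.0}   # phi(z) = 5z^-2 + 2z^-1 + 3 + 4z
s_phi = singular_integration(phi)
print(s_phi)                                # negative indices change sign
# (P - Q)^2 = I, so singular integration is an involution:
print(singular_integration(s_phi) == phi)   # True
print(evaluate(s_phi, 1.0))                 # -5 - 2 + 3 + 4 = 0.0
```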
  (Bf)(λ) = πi (Cf)(λ) + k(λ, λ) ∮_T f(z)/(z − λ) dz,   λ ∈ T a.e.   (6.14)

In this way B is a well-defined bounded linear operator on L_2(T). □
The operator in the left hand side of (6.12) is usually referred to as the operator of singular integration, and it is denoted by S_T. Notice that equation (6.1) can now be rewritten in the following equivalent form:

  ω(λ) f(λ) + k(λ, λ)(S_T f)(λ) + (Cf)(λ) = g(λ),   λ ∈ T.   (6.15)

Put

  α(λ) = ω(λ) + k(λ, λ),   β(λ) = ω(λ) − k(λ, λ),   λ ∈ T.   (6.16)

Then both α and β are continuous functions on T, and using S_T = ℙ − ℚ, we can rewrite (6.15) as

  α(·)ℙf + β(·)ℚf + Cf = g.

Now put

  Af = α(·)ℙf + β(·)ℚf + Cf.   (6.17)

In what follows we refer to A as the operator on L_2(T) associated with (6.1).
The next theorem is the first main result of this section.

Theorem 6.2 The operator A associated with the singular integral equation (6.1) is Fredholm if and only if the functions α and β defined by (6.16) do not vanish on the unit circle, and in that case ind A is the negative of the winding number of γ = α/β relative to zero.

Proof: Since the operator C defined by (6.13) is compact, formula (6.17) shows that A is a compact perturbation of the operator A₀ = α(·)ℙ + β(·)ℚ. But then, by Theorem 4.1 in the previous chapter, it suffices to prove the theorem for A₀ in place of A.
Next, let U be the operator on L_2(T) that assigns to each f ∈ L_2(T) its sequence of Fourier coefficients (c_n(f))_{n∈Z} with respect to the orthonormal basis …, ε_{−1}, ε_0, ε_1, …, where ε_n(e^{it}) = e^{int} for n ∈ Z. It follows that Uf ∈ ℓ_2(Z), and U is a unitary operator from L_2(T) onto ℓ_2(Z). From the definition of ℙ in (6.10) and ℚ = I − ℙ we see that UℙU^{−1} = P and UℚU^{−1} = Q, where P is the orthogonal projection of ℓ_2(Z) defined by

  P((x_j)_{j∈Z}) = (y_j)_{j∈Z},   y_j = x_j for j ≥ 0, and y_j = 0 otherwise.   (6.18)

Moreover, U A₀ U^{−1} = L_α P + L_β Q = M_{α,β}, where L_α and L_β are the Laurent operators on ℓ_2(Z) with symbols α and β (or, in the terminology of Section III.1, the Laurent operators on ℓ_2(Z) defined by the
functions a(t) = α(e^{it}) and b(t) = β(e^{it}) on −π ≤ t ≤ π). We conclude that the operator A₀ = α(·)ℙ + β(·)ℚ is unitarily equivalent to the pair operator M_{α,β} on ℓ_2(Z).

From the remark made after the proof of Theorem 3.1 we know that for α and β continuous on T the pair operator M_{α,β} is Fredholm on ℓ_2(Z) if and only if α and β do not vanish on T, and in that case ind M_{α,β} = −κ, where κ is the winding number of α/β relative to zero. Since A₀ and M_{α,β} are unitarily equivalent, the same holds true for A₀, which completes the proof. □
where v(λ) = k(λ, λ), the function ω is as in (6.1), and S_T is the operator of singular integration. As we have seen, Theorem 6.2 remains true if A is replaced by A₀. The next two theorems concern invertibility of the operator A₀.

Theorem 6.3 Let A₀ be the main part of the operator associated with the singular integral equation (6.1), and let α and β be given by (6.16). Then A₀ is invertible if and only if α and β do not vanish on the unit circle T and the winding number of γ = α/β relative to zero is equal to zero. Let these conditions be satisfied, and assume in addition that α and β are analytic in an annulus containing T. Then γ admits a Wiener-Hopf factorization γ = γ₋γ₊ and the inverse of A₀ is given by
Theorem 6.4 Let A₀ be the main part of the operator associated with the singular integral equation (6.1), and let α and β be the functions defined by (6.16). Then A₀ is left or right invertible if and only if α and β do not vanish on the unit circle T. Assume the latter condition holds, and let κ be the winding number of γ = α/β relative to zero. Then

(i) A₀ is left invertible if and only if κ ≥ 0, and in that case codim Im A₀ = κ;

(ii) A₀ is right invertible if and only if κ ≤ 0, and in that case dim Ker A₀ = −κ.

If, in addition, α and β are analytic in an annulus containing the unit circle, then γ admits a Wiener-Hopf factorization

  γ(λ) = γ₋(λ) λ^κ γ₊(λ),   λ ∈ T.

Here ℙ is the orthogonal projection defined by (6.10), the operator ℚ = I − ℙ, and ε(λ) = λ.
Proof: Let U be the unitary operator from L_2(T) onto ℓ_2(Z) introduced in the second paragraph of the proof of Theorem 6.2. Then U A₀ U^{−1} = M_{α,β}, where M_{α,β} = L_α P + L_β Q. Here P is the projection defined by (6.18), the operator Q = I − P, and L_α and L_β are the Laurent operators on ℓ_2(Z) with symbols α and β, respectively. Since UℙU^{−1} = P, UℚU^{−1} = Q, and Uω(·)U^{−1} = L_ω for any continuous function ω on T, Theorem 6.4 immediately follows from Theorem 4.2 when the functions α and β belong to the class A (that is, are analytic in an annulus containing the unit circle). To complete the proof for the general case, we use the same unitary equivalence and apply Theorem 2.2. □
As was noted earlier, this chapter is a continuation of the material in Chapter III. Further developments can be found in the paper [Kre], and in the monographs [GF], [BS], [GGKa2]. The requirement that p is finite is not essential in the first four sections. All results in these sections remain true for p = ∞ with some minor modifications in the proofs.
Exercises XVI
1. Let T be the Toeplitz operator on ℓ_p (1 ≤ p < ∞) with symbol

  α(λ) = (5a/2) λ^{−1} − (a + 5/2) + λ.

(b) For which values of the parameter a is the operator T invertible? Find the inverse of T if it exists.

(c) What is the spectrum of T? Does it depend on p?
2. Let T be the Toeplitz operator on ℓ_p (1 ≤ p < ∞) with symbol

  Tx = e₀,

where e₀ = (1, 0, 0, …).

(c) Solve the same problem as in (b) with y = (1, q, q², …), |q| < 1, in place of e₀.
3. Let T be the Toeplitz operator on ℓ_p (1 ≤ p < ∞) with symbol α ∈ A. Assume α admits a Wiener-Hopf factorization

  (0, 0, …, 0, γ₀, γ₁, …)   (κ leading zeros)

  Tx = e₀,   x ∈ ℓ_p,

where e₀ = (1, 0, 0, …). Express the solution(s) in terms of the functions α_±(·)^{−1}.
(d) Solve the same problem as in (c) with y = (1, q, q², …), |q| < 1, in place of e₀.

4. Let T be the Toeplitz operator on ℓ_p (1 ≤ p < ∞) with symbol α ∈ A, and let T′ be the Toeplitz operator with symbol β(λ) = α(λ^{−1}).

(a) What is the relation between the matrices of T and T′ relative to the standard basis of ℓ_p?

(b) Assume α admits a Wiener-Hopf factorization, and let e₀ = (1, 0, 0, …). Show that T is invertible if and only if the equations

  Tx = e₀,   T′x′ = e₀

have solutions in ℓ_p.

(c) Let α be as in (b), and assume the two equations in (b) are solvable in ℓ_p. Express the inverse of T in terms of the solutions x and x′.

5. Let T be a Toeplitz operator with symbol α ∈ A. Assume that the two equations in (b) of the previous exercise have solutions in ℓ_1. Show that α(λ) ≠ 0 for each λ ∈ T and that relative to zero the winding number of the oriented curve t ↦ α(e^{it}), with t running from −π to π, is equal to zero.
6. Let U and U₊ be the operators on ℓ_p (1 ≤ p < ∞) defined in the first exercise to Chapter XV, i.e.,

where a_j (j ∈ Z) are complex numbers such that |a_n| ≤ cρ^{|n|}, n ∈ Z, for some c ≥ 0 and 0 ≤ ρ < 1. Assume k ≠ r.

(a) Show that A is Fredholm if and only if

  α(e^{it}) = Σ_{j=−∞}^{∞} a_j e^{ijt} ≠ 0,   −π ≤ t ≤ π.

(b) If the latter condition on α holds, show that ind A = (k − r)m, where m is the winding number relative to zero of the oriented curve t ↦ α(e^{it}) with t running from −π to π.
7. Fix 0 ≤ t < ∞. Let V be the operator on L_p[0, ∞), 1 ≤ p < ∞, defined by

  (Vf)(x) = f(x + t),   x ≥ 0.

Also consider the operator V₊ on L_p(0, ∞) given by

  (V₊f)(x) = f(x − t) for x ≥ t, and (V₊f)(x) = 0 for 0 ≤ x < t,

and put

  T = (5a/2) V₊ − (a + 5/2) I + V.
10. Let M = M_{α,β} be the pair operator on ℓ_p(Z), 1 ≤ p < ∞, with symbols

  α(λ) = λ^{−1} − η,   β(λ) = λ − 5/2.

Here η is a complex parameter.
(a) For which values of η is the operator M invertible? Find the inverse if it exists.

(b) Determine the spectrum of M.

(c) When is M left or right invertible?

(d) Let |η| ≠ 1. Solve in ℓ_p(Z) the equation Mx = 0. Also, solve Mx = e₀, with e₀ = (1, 0, 0, …).
  α(λ) = λ^{−1} − 2/5,   β(λ) = λ − η.
12. Let K = PL_α + QL_β be the associate pair operator on ℓ_p(Z), 1 ≤ p < ∞, with α and β as in Exercise 10.

(a) For which values of η is the operator K invertible? Find the inverse if it exists.

(b) Determine the spectrum of K.

(c) When is K left or right invertible?

(d) Let |η| ≠ 1. Solve in ℓ_p(Z) the equation Kx = 0. Also, solve Kx = e₀, with e₀ = (1, 0, 0, …).
  η f(λ) + (1/πi) ∮_T (λz/(z − λ)) f(z) dz = g(λ),   λ ∈ T.
Linear operators are the simplest operators. In many problems one has to consider more complicated nonlinear operators. As in the case of linear operators, again the main problem is to solve equations Ax = y for a nonlinear A in a Hilbert or Banach space. Geometrically, this problem means that a certain map or operator B leaves fixed at least one vector x, i.e.,

  Bx = x,

and we have to find this vector. Theorems which establish the existence of such fixed vectors are called fixed point theorems. There are a number of very important fixed point theorems. In this chapter we present one of the simplest: the Contraction Mapping Theorem. This theorem is very powerful in that it allows one to prove the existence of solutions to nonlinear integral, differential and functional equations, and it gives a procedure for numerical approximations to the solution. Some of the applications are also included in this chapter.
A function f which maps a set S into S is said to have a fixed point if there exists an s ∈ S such that f(s) = s.

Contraction Mapping Theorem 1.1 Let S be a closed subset of a Banach space and let T map S into S. Suppose there exists a number a < 1 such that for all x, y in S,

  ‖Tx − Ty‖ ≤ a‖x − y‖.   (1.1)

Then T has a unique fixed point in S.
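The fixed point in Theorem 1.1 is the limit of the successive approximations x, Tx, T²x, …. A minimal numerical sketch (the map cos on the closed set [0, 1] is an illustrative contraction, with constant a = sin 1 < 1 by the mean value theorem):

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Successive approximations x_{k+1} = T(x_k); for a contraction this
    converges to the unique fixed point guaranteed by Theorem 1.1."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# cos maps [0, 1] into [cos 1, 1], a subset of [0, 1], and |cos'| <= sin 1 < 1.
x = fixed_point(math.cos, 0.5)
print(x)               # approx 0.739085..., the unique solution of cos x = x
```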
In this section the contraction mapping theorem is used to prove the existence and uniqueness of solutions to certain nonlinear integral and differential equations. In addition, we give a proof of the implicit function theorem.

  f(t) − λ ∫_a^b k(t, s, f(s)) ds = g(t)   (2.1)

  (Tf)(t) = λ ∫_a^b k(t, s, f(s)) ds + g(t).

It is easy to see that Im T ⊂ C([a, b]). T is a contraction. Indeed, for all f, h in C([a, b]) we have

  ‖Tf − Th‖ ≤ |λ| m (b − a) ‖f − h‖,

and |λ| m (b − a) < 1. Hence T has a unique fixed point f₀ ∈ C([a, b]) by Theorem 1.1. Obviously, f₀ is the solution to (2.1).
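The successive approximations f, Tf, T²f, … used in this argument can be carried out numerically. The following sketch treats an illustrative linear equation of the second kind (kernel, data, grid and λ are choices made here): f(t) = ½ ∫₀¹ ts f(s) ds + 1, for which |λ|·m·(b − a) = ½ < 1 and the exact solution is f(t) = 1 + 0.3t.

```python
# Successive approximations for f(t) = (1/2) * int_0^1 t*s*f(s) ds + 1,
# discretized with the composite trapezoidal rule.

N = 2000
h = 1.0 / N
ts = [k * h for k in range(N + 1)]

def trapezoid(vals):
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def T(f):
    # (Tf)(t) = (1/2) * t * int_0^1 s f(s) ds + 1
    integral = trapezoid([s * f_s for s, f_s in zip(ts, f)])
    return [0.5 * t * integral + 1.0 for t in ts]

f = [0.0] * (N + 1)          # start from the zero function
for _ in range(60):
    f = T(f)

print(f[-1])                 # f(1), approx 1.3
```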
Equation (2.1) includes linear integral equations of the second kind. For given k₀(t, s), let k(t, s, ξ) = k₀(t, s)ξ. Then

  ∫_a^b k(t, s, f(s)) ds = ∫_a^b k₀(t, s) f(s) ds.
for all (x, y), (x, z) in O. Then for any (x₀, y₀) ∈ O, the differential equation

  dy/dx = f(x, y(x))   (2.3)

  y(x) = y₀ + ∫_{x₀}^x f(t, y(t)) dt.   (2.5)
We now show that (2.5) has a unique solution on some interval containing x₀. To do this, we define a contraction map as follows. Let M = sup_{(x,y)∈O} |f(x, y)| < ∞. Choose ρ > 0 such that

Let S be the set of complex valued functions y which are continuous on the interval J = {x : |x − x₀| ≤ ρ} and have the property that

  (Tg)(x) = y₀ + ∫_{x₀}^x f(s, g(s)) ds.
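The iteration g, Tg, T²g, … (Picard iteration) applied to (2.5) can be sketched numerically; here the problem is the illustrative choice dy/dx = y, y(0) = 1 on [0, 1], with exact solution eˣ:

```python
import math

# Picard iteration (Tg)(x) = 1 + int_0^x g(t) dt for dy/dx = y, y(0) = 1,
# discretized with the cumulative trapezoidal rule on [0, 1].

N = 1000
h = 1.0 / N

def T(g):
    y = [1.0]
    acc = 0.0
    for k in range(N):
        acc += 0.5 * h * (g[k] + g[k + 1])
        y.append(1.0 + acc)
    return y

g = [1.0] * (N + 1)          # start from the constant function 1
for _ in range(30):
    g = T(g)

print(g[-1])                 # approx e = 2.71828...
```

Each Picard step adds roughly one more term of the Taylor series of eˣ, which is why only a few dozen iterations are needed on [0, 1].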
404 Chapter XVII. Non Linear Operators
whence

and

  f(x₀, y₀) = 0,

then there exists a rectangle R : [x₀ − δ, x₀ + δ] × [y₀ − ε, y₀ + ε] contained in Ω and a unique y(x) ∈ C([x₀ − δ, x₀ + δ]) such that (x, y(x)) ∈ R, f(x, y(x)) = 0 for all x ∈ [x₀ − δ, x₀ + δ], and y(x₀) = y₀.

Proof: We shall show that for a suitable set of functions, the operator

  (Tφ)(x) = φ(x) − (1/m) f(x, φ(x)),   m = (∂f/∂y)(x₀, y₀),   (2.6)

has a fixed point.

It follows from the conditions on f that there exist ε > 0 and δ > 0 such that the rectangle R = {(x, y) : |x − x₀| ≤ δ, |y − y₀| ≤ ε} is contained in Ω,

  |1 − (1/m)(∂f/∂y)(x, y)| < 1/2,   (x, y) ∈ R,   (i)

and

  (ii)
  |(∂/∂y)(y − (1/m) f(x, y))| = |1 − (1/m)(∂f/∂y)(x, y)| < 1/2.

Hence it follows from the Mean Value Theorem applied to the function y − (1/m) f(x, y) that for g and h in S,

  |(Tg)(x) − (Th)(x)| ≤ ½ |g(x) − h(x)|.   (iii)

Thus

  ‖Tg − Th‖ ≤ ½ ‖g − h‖.
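The operator (2.6) can be iterated numerically. In the sketch below the illustrative choice is f(x, y) = y² − x near (x₀, y₀) = (1, 1), so m = ∂f/∂y(1, 1) = 2 and the implicit function is y(x) = √x:

```python
# Iterating (T phi)(x) = phi(x) - f(x, phi(x))/m pointwise at a fixed x,
# for f(x, y) = y^2 - x, (x0, y0) = (1, 1), m = 2.

def f(x, y):
    return y * y - x

m = 2.0
x = 1.1
phi = 1.0                      # start at y0
for _ in range(100):
    phi = phi - f(x, phi) / m

print(phi)                     # approx sqrt(1.1) = 1.04880...
```

Near the fixed point the iteration map has derivative 1 − (1/m)∂f/∂y ≈ −0.05, well inside the bound (i), so convergence is rapid.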
17.3 Generalizations

Theorem 3.1 Suppose S is a complete metric space and T : S → S has the property that Tⁿ is a contraction for some positive integer n. Then T has a unique fixed point.

Proof: Since Tⁿ is a contraction, it has a unique fixed point x ∈ S. Now Tⁿ(Tx) = T(Tⁿx) = Tx, so Tx is also a fixed point of Tⁿ. By uniqueness, Tx = x. Since every fixed point of T is also a fixed point of Tⁿ, x is the unique fixed point of T. □
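A small illustration of Theorem 3.1 (the map below is an illustrative choice): an affine map T on ℝ² that is not a contraction, although T² is.

```python
# T v = A v + b with A = [[0, 2], [0, 0]] and b = (1, 1). A is nilpotent
# (A^2 = 0), so T^2 is a constant map, hence a contraction with constant 0.

def T(v):
    x, y = v
    return (2 * y + 1, 1)

# T itself is not a contraction: it stretches the y-direction by a factor 2.
a, b = T((0.0, 0.0)), T((0.0, 1.0))
print(abs(a[0] - b[0]))                    # 2.0

# T^2 sends every point to (3, 1), which is also the unique fixed point of T.
print(T(T((17.0, -4.0))), T((3, 1)))
```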
Theorem 3.2 Let S be a compact subset of a normed linear space and let T map S into S. If

  ‖Tx − Ty‖ < ‖x − y‖ for all x, y in S, x ≠ y,   (3.1)

then T has a unique fixed point in S.

Proof: Let m = inf_{x∈S} ‖Tx − x‖. There exists a sequence {x_n} in S such that ‖Tx_n − x_n‖ → m. Since S is compact, {x_n} has a subsequence {x_{n′}} which converges to some x ∈ S. Hence ‖Tx − x‖ = m. Therefore Tx = x, otherwise

  ‖T(Tx) − Tx‖ < ‖Tx − x‖ = m,

which contradicts the definition of m.

The proof shows that the theorem can be extended to compact metric spaces.
If the set S in Theorem 3.2 is also convex, then we can weaken the inequality (3.1) and obtain the following result.

Theorem 3.3 Let S be a compact convex subset of a normed linear space and let T map S into S. If
(i) If S is a countable set, then the set consisting of all finite sequences of members of S is also countable.

Indeed, since S can be put into one to one correspondence with a subset of the positive integers, it suffices to prove that the set Z which consists of all finite sequences of positive integers is countable.

For each positive integer n, let Z_n be the set of those (m₁, …, m_k) ∈ Z for which Σ_{i=1}^k m_i = n. Since Z_n is a finite set, the members of Z_n may be listed as we please. Thus we can list all the members of Z = ∪_{n=1}^∞ Z_n by listing the members of Z₁, then Z₂, etc.

(ii) The set of all finite sequences of rational complex numbers (the real and imaginary parts of the number are rational) is countable. Indeed, by identifying the complex number p₁/q₁ + i p₂/q₂ with (p₁, q₁, p₂, q₂), where p_j and q_j are integers, it follows from (i) that the set of finite sequences of rational complex numbers is a subset of a countable set and therefore is countable.

(iii) The set of all polynomials with rational complex coefficients is countable, since we can identify the polynomial a_n xⁿ + ⋯ + a₁x + a₀ with (a_n, …, a₁, a₀).
(iv) The set of real numbers is uncountable. Indeed, it suffices to show that for any sequence of real numbers in [0, 1] there is some real number in [0, 1] which does not appear in the sequence.

Suppose s₁, s₂, … is a sequence of real numbers. Each s_i has the decimal expansion s_i = .a₁^{(i)} a₂^{(i)} …, where infinitely many of the integers a₁^{(i)}, a₂^{(i)}, … are not 9. Let s = .a₁a₂…, where a_k = 0 if a_k^{(k)} ≠ 0 and a_k = 1 if a_k^{(k)} = 0. Then s is in [0, 1] but s ≠ s_i for every i.
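The diagonal construction in (iv) can be illustrated on finite truncations (the digit lists below are arbitrary illustrative data):

```python
# Given a list of equal-length digit sequences, build a sequence that differs
# from the i-th one in its i-th digit, exactly as in the proof of (iv).

def diagonal(seqs):
    """Return a digit list that differs from seqs[i] at position i:
    0 if that digit is nonzero, else 1."""
    return [0 if s[i] != 0 else 1 for i, s in enumerate(seqs)]

seqs = [[3, 1, 4, 1], [0, 0, 0, 0], [9, 9, 9, 9], [2, 7, 1, 8]]
d = diagonal(seqs)
print(d)                                   # [0, 1, 0, 0]
print(all(d != s for s in seqs))           # True: d is not in the list
```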
410 Appendix 1. Separable Hilbert Spaces
Theorem: A Hilbert space H is separable if and only if H contains a countable set which is dense in H.
In this section we give a very brief introduction to the Lebesgue integral, which is necessary for the description and understanding of the spaces L_p.

A subset Z of the real line is said to have Lebesgue measure zero if for each ε > 0, there exists a countable set of intervals I₁, I₂, … such that Z ⊂ ∪_j I_j and Σ_j μ(I_j) < ε, where μ(I_j) is the length of I_j.

Every countable subset {x₁, x₂, …} of the line has Lebesgue measure zero. Indeed, given ε > 0, take I_j = [x_j, x_j + ε/2^{j+1}).
A real valued function f which is defined on an interval J is called a step function if f(x) = Σ_{k=1}^n α_k χ_{I_k}(x), where α_k is a real number, I₁, …, I_n are mutually disjoint subintervals, and χ_{I_k}(x) = 1 if x ∈ I_k and zero otherwise. The step function f is Lebesgue integrable if Σ_{k=1}^n α_k μ(I_k) < ∞ (here 0 · ∞ = 0). In this case, we define the integral

  ∫_J f(x) dx = Σ_{k=1}^n α_k μ(I_k).
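The definition above translates directly into code (a minimal sketch; the intervals are taken half-open, [a_k, b_k), as an illustrative convention):

```python
# The Lebesgue integral of a step function f = sum_k alpha_k * chi_{I_k}
# is sum_k alpha_k * mu(I_k), where mu(I_k) is the length of I_k.

def step_integral(pieces):
    """pieces: list of (alpha_k, a_k, b_k) with mutually disjoint
    intervals I_k = [a_k, b_k)."""
    return sum(alpha * (b - a) for alpha, a, b in pieces)

# f = 2 on [0, 1), -1 on [1, 3), 5 on [3, 3.5)
val = step_integral([(2.0, 0.0, 1.0), (-1.0, 1.0, 3.0), (5.0, 3.0, 3.5)])
print(val)     # 2*1 - 1*2 + 5*0.5 = 2.5
```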
A non negative real valued function f defined on the interval J is Lebesgue measurable if there exists a nondecreasing sequence {f_n} of step functions defined on J such that f_n(x) → f(x) for all x ∈ J which lie outside some set of Lebesgue measure zero (possibly the empty set). We then say that {f_n} converges to f almost everywhere (a.e.). If each step function f_n is Lebesgue integrable, then ∫_J f₁(x) dx ≤ ∫_J f₂(x) dx ≤ ⋯. The function f is Lebesgue integrable if lim_{n→∞} ∫_J f_n(x) dx < ∞. The Lebesgue integral of f is defined by

  ∫_J f(x) dx = lim_{n→∞} ∫_J f_n(x) dx.

It turns out that the integral is independent of the choice of the increasing sequence {f_n}.
412 Appendix 2. The Lebesgue Integral and L p Spaces
  ∫_J f_n(x) dx → ∫_J f(x) dx.

  ∫_a^b { ∫_c^d f(x, y) dy } dx = ∬ f(x, y) dx dy.
A.2.2 t. , Spaces 413
Similarly, for almost every y ∈ [c, d], the function f(·, y) is Lebesgue integrable on [a, b] and ∫_a^b f(x, y) dx is Lebesgue integrable on [c, d]. Moreover,
A.2.2 L_p Spaces
  [f] = {g ∈ L_p(J) : f = g a.e.}.

Then either [f] = [h] or [f] ∩ [h] = ∅. The set of equivalence classes becomes a vector space under the operations

Moreover,

  ‖[f]‖_p = ( ∫_J |g(x)|^p dx )^{1/p},

where g is arbitrary in [f], defines a norm on this vector space. For the sake of simplicity, we usually do not distinguish between functions and equivalence classes.

It is a very important fact that L_p(J) is complete. For a detailed proof we refer the reader to [R].
A complex valued Lebesgue measurable function f defined on J is called essentially bounded if there exists a number M such that |f(x)| ≤ M a.e. The greatest lower bound of all such M is denoted by ‖f‖_∞. If functions which are equal almost everywhere are identified as above, then ‖·‖_∞ is a norm on the vector space L_∞(J) of essentially bounded functions, and L_∞(J) is a Banach space.

The following inequality is essential for our treatment of integral operators on L_p spaces.
414 Appendix 2. The Lebesgue Integral and L p Spaces
Equality holds if and only if there exist non-zero α, β in ℂ such that α|f|^p = β|g|^q a.e.
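The inequality referred to here is Hölder's inequality. A quick numerical check of its discrete analogue (the vectors and exponents below are arbitrary illustrative choices):

```python
# Holder's inequality for sequences: sum |f_k g_k| <= ||f||_p * ||g||_q
# whenever 1/p + 1/q = 1.

def norm(v, p):
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

f = [1.0, -2.0, 3.0, 0.5]
g = [0.3, 1.0, -1.0, 2.0]
p, q = 3.0, 1.5                          # 1/3 + 2/3 = 1
lhs = sum(abs(a * b) for a, b in zip(f, g))
print(lhs, norm(f, p) * norm(g, q))      # the first is never larger
```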
Suggested Reading
16. Sz.-Nagy, B., and Foias, C., Harmonic Analysis of Operators on Hilbert Space, North-Holland Publ. Co., Amsterdam-Budapest, 1970.
17. Taylor, A.E., and Lay, D.C., Introduction to Functional Analysis, 2nd ed., Wiley, New York, 1980.
18. Weidmann, J., Linear Operators in Hilbert Spaces, Springer Verlag, New York, 1980.
19. Zaanen, A.C., Linear Analysis, North-Holland Publ. Co., Amsterdam, 1956.
References
[A] S.S. Antman, The equations for large vibrations of strings, Amer. Math. Monthly 87 (1980), 359-370.
[Ah] L.V. Ahlfors, Complex Analysis, McGraw-Hill, 1966.
[BS] A. Böttcher and B. Silbermann, Analysis of Toeplitz Operators, Springer-Verlag, Berlin, 1990.
[CH] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. I, Interscience, New York, 1953.
[DS1] N. Dunford and J.T. Schwartz, Linear Operators, Part I: General
Theory, Interscience, New York, 1958.
[DS2] N. Dunford and J.T. Schwartz, Linear Operators, Part II: Spectral Theory, Interscience, New York, 1963.
[E] P. Enflo, A counterexample to the approximation problem in Banach
spaces, Acta Math. 130 (1973), 309-317.
[F] I.A. Fel'dman, Some remarks on convergence of the iterative method, Izvestia Akad. Nauk Mold. SSR 4 (1966), 94-96.
[G] S. Goldberg, Unbounded Linear Operators, McGraw-Hill,
New York, 1966.
[GF] I. Gohberg and I. Feldman, Convolution Equations and Projection Methods for their Solution, Transl. Math. Monographs, vol. 41, Amer. Math. Soc., Providence, R.I., 1974.
[GG] I. Gohberg and S. Goldberg, Basic Operator Theory, Birkhauser,
Basel, 1981.
[GGKa1] I. Gohberg, S. Goldberg and M.A. Kaashoek, Classes of Linear
Operators , vol. I, Birkhauser, Basel, 1990.
[GGKa2] I. Gohberg, S. Goldberg and M.A. Kaashoek, Classes of Linear
Operators, vol. II, Birkhauser, Basel, 1993.
[GGKr] I. Gohberg, S. Goldberg and N. Krupnik, Traces and determinants of
linear operators, Birkhauser, Basel, 2000.
[GKre] I. Gohberg and M.G. Krein, Introduction to the Theory of Linear Nonselfadjoint Operators, Transl. Math. Monographs, vol. 18, Amer. Math. Soc., Providence, R.I., 1969.
[GKre1] I. Gohberg and M.G. Krein, The basic propositions on defect numbers, root numbers and indices of linear operators, Uspekhi Mat. Nauk 12, 2(74) (1957), 43-118 (Russian); English Transl., Amer. Math. Soc. Transl. (Series 2) 13 (1960), 185-265.
[H] P.R. Halmos, Finite-dimensional Vector Spaces, 2nd ed., Van
Nostrand, Princeton, 1958.
[K] T. Kato, Perturbation Theory for Linear Operators, 2nd ed., Springer
Verlag, New York, 1976.
[Kra] M.A. Krasnosel'skii, Solving linear equations with selfadjoint operators by iterative method, Uspehi Matem. Nauk 15, 3 (1960), 161-165.
[Kre] M.G. Krein, Integral equations on a half-line with kernel depending upon the difference of the arguments, Uspekhi Mat. Nauk 13 (5) (1958), 3-120 (Russian); English Transl., Amer. Math. Soc. Transl. (Series 2) 22 (1962), 163-288.
[P] H. Poincare, Sur les determinants d'ordre infini, Bull. Soc. Math.
France 14 (1886), 77-90.
[PS] C. Pearcy and A.L. Shields, A survey of the Lomonosov technique in the theory of invariant subspaces, Topics in Operator Theory, Mathematical Surveys, No. 13, American Mathematical Society, Providence, 1974.
[R] H.L. Royden, Real Analysis, 2nd ed., Macmillan, New York, 1968.
[S] G. Strang, Linear Algebra and Its Applications, 2nd ed., Academic
Press, New York, 1980.
[Sc] M. Schechter, Principles of Functional Analysis, Amer. Math. Soc., Providence, R.I., 2002.
[TL] A.E. Taylor and D.C. Lay, Introduction to Functional Analysis, 2nd ed., Wiley, New York, 1980.
[W] R. Whitley, Projecting m onto c₀, Amer. Math. Monthly 73 (1966), 285-286.
List of Symbols
Ker A    kernel of A, 65
ℓ₂, ℓ₂(Z)    the Hilbert spaces of square summable sequences with entries in ℂ, 3, 5
ℓ₂(w)    weighted ℓ₂ space, 40
ℓ_p, 1 ≤ p ≤ ∞, 260
L_p([a, b]), 1 ≤ p ≤ ∞, 261, 413
L(H)    space of bounded linear operators on H, 52
L(H₁, H₂)    space of bounded linear operators from H₁ into H₂, 52
M_{α,β}    pair operator, 377
n(A)    dimension of the kernel of A, 347
n(P_n), n(P_n, Q_n), 97, 289
vector space 5
Volterra integral operator 293, 294
OT series

Edited by
Israel Gohberg, School of Mathematical Sciences, Tel Aviv University, Ramat Aviv, Israel

This series is devoted to the publication of current research in operator theory, with particular emphasis on applications to classical analysis and the theory of integral equations, as well as to numerical analysis, mathematical physics and mathematical methods in electrical engineering.