
Introduction to
Integral Equations
with Applications

Second Edition

ABDUL J. JERRI
Clarkson University

A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York · Chichester · Weinheim · Brisbane · Singapore · Toronto

This text is printed on acid-free paper.


Copyright © 1999 by John Wiley & Sons, Inc. All rights reserved.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system or


transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, scanning or otherwise, except as permitted under
Sections 107 or 108 of the 1976 United States Copyright Act, without either
the prior written permission of the Publisher, or authorization through
payment of the appropriate per-copy fee to the Copyright Clearance Center,
222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744.
Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New
York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ
@ WILEY.COM.

For ordering and customer service, call 1-800-CALL-WILEY.

Library of Congress Cataloging-in-Publication Data:

Jerri, Abdul J., 1932–
   Introduction to integral equations with applications / Abdul J.
 Jerri. — 2nd ed.
      p. cm.
   "A Wiley-Interscience publication."
   Includes bibliographical references and index.
   ISBN 0-471-31734-9 (alk. paper)
   1. Integral equations—Numerical solutions.  I. Title.
 QA431 .J47 1999
 515'.45—dc21                                           99-14638

Printed in the United States of America

In memory of my father and mother
Contents

Preface
Acknowledgments

1  Integral Equations, Origin, and Basic Tools
   1.1  Various Problems as Integral Equations
        Exercises 1.1
   1.2  Classification of Integral Equations
        Exercises 1.2
   1.3  Some Important Identities and Basic Definitions
        1.3.1  Multiple Integrals Reduced to Single Integrals
        1.3.2  Generalized Leibnitz Formula
        1.3.3  Convergence of Integrals and Basic Definitions
        Exercises 1.3
   1.4  Laplace, Fourier, and Other Transforms
        1.4.1  The Laplace Transform
        1.4.2  Fourier Transforms
        1.4.3  Other Transforms
        Exercises 1.4
   1.5  Basic Numerical Integration Formulas
        1.5.1  Basic (Elementary) Integration Formulas
        1.5.2  The Smoothing Effect of Integration
        1.5.3  Interpolation of the Numerical Solutions of Integral Equations
        1.5.4  Review of Cramer's Rule
        Exercises 1.5

2  Modeling of Problems as Integral Equations
   2.1  Population Dynamics
        2.1.1  Human Population
        2.1.2  Biological Species Living Together
        Exercises 2.1
   2.2  Control and Other Problems
        2.2.1  Mortality of Equipment and Rate of Replacement
        Exercises 2.2
   2.3  Mechanics Problems
        2.3.1  Hanging Chain
        2.3.2  Sliding a Bead Along a Wire: Abel's Problem
        Exercises 2.3
   2.4  Initial Value Problems Reduced to Volterra Integral Equations
        Exercises 2.4
   2.5  Boundary Value Problems Reduced to Fredholm Integral Equations
        Exercises 2.5
   2.6  Mixed Boundary Conditions: Dual Integral Equations
        2.6.1  Electrified Infinite Plane
        2.6.2  Electrified Disc
        Exercises 2.6
   2.7  Integral Equations in Higher Dimensions
        2.7.1  Schrödinger Equation as an Integral Equation in the
               Three-Dimensional Momentum Space

3  Volterra Integral Equations
   3.1  Volterra Equations of the Second Kind
        3.1.1  Resolvent Kernel Method: Neumann Series
        3.1.2  Method of Successive Approximations (Iterations)
        3.1.3  Laplace Transform Method: Difference Kernel
        Exercises 3.1
   3.2  Volterra Integral Equations of the First Kind
        3.2.1  Volterra Integral Equation of the First Kind with a
               Difference Kernel—Laplace Transform Method
        Exercises 3.2
   3.3  Numerical Solution of Volterra Integral Equations
        3.3.1  Numerical Approximation Setting of Volterra Equations
        Exercises 3.3

4  The Green's Function
   4.1  Construction of the Green's Function
        4.1.1  Nonhomogeneous Differential Equations
        4.1.2  Construction of the Green's Function—Variation of Parameters Method
        4.1.3  Orthogonal Series Representation of Green's Function
        4.1.4  Green's Function in Two Dimensions
        Exercises 4.1
   4.2  Fredholm Integral Equations and the Green's Function
        Exercises 4.2

5  Fredholm Integral Equations
   5.1  Fredholm Integral Equations with Degenerate Kernel
        5.1.1  Nonhomogeneous Fredholm Equations with Degenerate Kernel
        5.1.2  Fredholm Alternative
        5.1.3  Approximating a Kernel by a Degenerate One
        Exercises 5.1
   5.2  Fredholm Integral Equations with Symmetric Kernel
        5.2.1  Homogeneous Fredholm Equations with Symmetric Kernel
        5.2.2  Solution of Fredholm Equations of the Second Kind with Symmetric Kernel
        Exercises 5.2
   5.3  Fredholm Integral Equations of the Second Kind
        5.3.1  Method of Fredholm Resolvent Kernel
        5.3.2  Method of Iterated Kernels
        5.3.3  Some Basic Approximate Methods
        Exercises 5.3
   5.4  Fredholm Integral Equations of the First Kind
        5.4.1  Fredholm Equations of the First Kind with Symmetric Kernels
        5.4.2  Ill-Posed Problems and the Fredholm Equation of the First Kind
        Exercises 5.4
   5.5  Numerical Solution of Fredholm Integral Equations
        5.5.1  Numerical Approximation Setting of Fredholm Integral Equations
        5.5.2  Homogeneous Fredholm Equations
        Exercises 5.5

6  Existence of the Solutions: Basic Fixed Point Theorems
   6.1  Preliminaries: Toward a Contractive Mapping
        6.1.1  Basic Definitions: Complete Metric Spaces
        6.1.2  Contractive Mapping for Linear Fredholm Equations
        6.1.3  Contractive Mapping for Linear Volterra Equations
   6.2  Fixed Point Theorem of Banach
        6.2.1  Existence of the Solution for Linear Integral Equations
        6.2.2  Existence of the Solution for Nonlinear Integral Equations
        6.2.3  Existence of the Solution for Nonlinear Differential Equations

7  Higher Quadrature Rules for the Numerical Solutions
   7.1  Higher Quadrature Rules of Integration with Tables
        Exercises 7.1
   7.2  Higher Quadrature Rules for Volterra Equations
        Exercises 7.2
   7.3  Higher Quadrature Rules for Fredholm Equations
        7.3.1  Comments on Higher Quadrature Rules for Some Singular Fredholm Equations
        Exercises 7.3

Appendix A  The Hankel Transforms
   A.1  The Hankel Transform for the Electrified Disc
   A.2  The Finite Hankel Transform
   Exercises: Appendix A

Appendix B  Green's Function for Various Boundary Value Problems
   B.1  Green's Functions in Terms of Simple Functions
   B.2  Green's Function in Terms of Special Functions

Answers to Exercises
   Chapter 1
   Chapter 2
   Chapter 3
   Chapter 4
   Chapter 5
   Chapter 7
   Appendix A

References

Index
Preface

The goal of this present second edition is still the same as that of the first edition.
It is to present the subject of integral equations, their varied applications, and basic
methods of solutions on a level close to that of a first (sophomore) course in ordinary
differential equations. This is not such an easy task, especially when we assume
only the basic calculus and differential equations as prerequisites. The main thrust
here is that a variety of applied problems have their natural mathematical setting
as integral equations, thus they have the advantage of usually simpler methods of
solution. In addition, a large class of initial and boundary value problems, associated
with differential equations, can be reduced to integral equations, whence enjoy the
advantage of the above integral representation. Such topics also bring to light the
unity of differentiation and integration. It may be said that such a basic integral
equations course would complement the elementary differential equations course,
especially when the actual coverage in the latter is (most often) limited to initial value
problems, and for obvious historical reasons. This being that differential equations
began following the work of Leibnitz and Newton, with the flavor of applications in
dynamics, which had occurred a long time before integral equations started to get
attention at the very beginning of this century.
We should point out here that for this elementary presentation of integral equations
— assuming only calculus and differential equations preparation — the treatment in all
chapters, except for (the optional) Chapter 6, is formal. This is in the sense that clear
procedures and steps for arriving at the solution or some basic results are emphasized,
without necessarily stopping to give their complete mathematical justification. The
latter most often requires more advanced mathematics preparation. Thus we shall be


limited to giving those justifications that would not require us to go beyond the level of
this basic applicable undergraduate text.
In this second edition all comments, suggestions, and corrections relayed by
students, colleagues from around the world, and the expert reviewers of the journals of
mathematics and other concerned professions, were addressed. They all deserve my
sincere thanks and appreciation. Such suggestions, it is hoped, will help this edition
attain even more fully the goal set in the first edition for an undergraduate
text focusing on integral equations, to serve the students of science, engineering, and
mathematics. To stay with this important goal, and keep the required text material
to a comparable size to that of the first edition, we decided to have a new (optional)
Chapter 7 for the detailed numerical methods. This includes using higher quadrature
rules for the numerical approximation of the integrals. The main changes made for
this second edition, in light of the suggestions received, are:

1. Discussion of the basic theory and illustration of solutions to Fredholm integral
equations of the first kind, as done in Section 5.4.

2. Detailed discussion and illustration of numerical integration with the basic
higher quadrature rules, and numerical methods of solving Fredholm and
Volterra integral equations. The use of the high quadrature rules is covered in
a new (optional) Chapter 7.

3. More exercises for each section including some challenging ones.

4. More emphasis on clear statements of the basic theorems for the existence and
uniqueness of the solutions of integral equations. The brief introduction of
basic theory in Chapter 6 can be considered optional, when seen in light of the
main goal of this elementary text.

5. Conditions for the existence of integral transforms, their inverses, and important
operations are spelled out. A more detailed treatment is found in the author’s
undergraduate-graduate book on the subject (Marcel Dekker, 1992) entitled
“Integral and Discrete Transforms with Applications and Error Analysis."

6. Very clear examples of singular integral equations with general discussions of
their solutions. Such discussions must be taken in light of the (undergraduate)
level set for this book, in which no complex analysis preparation is assumed
for the reader.

7. More applications to update, replace, and complement the already ample vari-
ety of applied problems as recognized by all reviews of the first edition. These
now include some relevant problems in higher dimensions.

8. More emphasis is placed on the interrelation between the integral equations
and the differential equations representations of boundary and initial value
problems. This is also to emphasize that differentiation and integration are
inseparable.
9. All detected and reported typographical as well as other errors are corrected,
and some examples are deleted and replaced by more appropriate ones. Almost
all the suggestions made by the expert reviewers of the journals of our and other
concerned professions have been very seriously addressed, keeping in mind
the main goal of an undergraduate book for scientists, engineers, as well as
mathematicians. This includes the reviews of three critical experts for the first
draft of this new edition, and another three reviewers of the final draft.

10. For this edition we now have a “Student’s Solution Manual" to accompany this
book. It contains very detailed solutions to all the odd numbered problems in
the text (see the end of the preface for details).
With these changes and additions, the first chapter still starts with the statements
of a number of problems from different subjects, to illustrate their integral
equation representation. Although the reader is warned against expecting a full
understanding of some of these problems from such a brief presentation, a very
detailed formulation of them is given in Chapter 2. This is followed by the usual
classification of integral equations and a clear derivation and illustration of
some important integral and differential identities needed for the formulations
in Chapter 2 and later chapters. Such identities are essential for showing how we
can go from the integral equations representation to the differential equations
representation and vice versa. We have also improved upon the self-contained
(short but simple) presentation of the Laplace and Fourier transforms with
clear statements for the existence of the transforms. Chapter 1 is concluded by
a section on simple elements of numerical integration which represents only
the essentials necessary for the numerical solutions of Fredholm and Volterra
integral equations that are discussed in Chapters 5 and 3, respectively. The
higher quadrature numerical integration rules along with their needed tables
are covered in a new (optional) Chapter 7. They are well illustrated for the
numerical integration, setting up the numerical approximation of Volterra and
Fredholm equations, and the numerical solution of these integral equations.
Chapter 2 involves very detailed modeling of problems as integral equations
with a new section on integral equations in higher dimensions illustrated with
the Schrödinger equation integral representation in the momentum space. This
includes population dynamics, control, mechanics, radiation transport, and
boundary and initial value problems. Chapter 3 deals with methods of solving
Volterra integral equations, including approximate and numerical methods,
which are presented in detail. Chapter 4 is devoted to the construction and
properties of Green’s functions, which is very important for reducing boundary
value problems to Fredholm integral equations. Chapter 5 deals with basic
theory and detailed methods of solving Fredholm integral equations including
the use of the Green’s functions, and a detailed presentation of the familiar
approximate and numerical methods of solutions. Methods of estimating the
eigenvalues of homogeneous Fredholm integral equations are also presented.
In this edition a new special section (Section 5.4) is added for a very elementary
theory and a method of solving Fredholm integral equations of the first kind.

Also, more varied numerical methods are used in the new Chapter 7 for solving
the different integral equations, compared to the very basic ones in Chapters
3 and 5 as it was the case in the first edition. In Chapter 6 we have a brief
and descriptive discussion of the theory regarding the convergence of the
methods of solving linear as well as nonlinear integral equations. For the basic
introductory undergraduate course, this chapter is clearly optional.
In each chapter we have attempted to present many clear examples in every
section, followed by a good number of related exercises at the end of each
section with hints to (almost) all exercises and answers to all the exercises.

Suggestions for Course Adoption


To use this text for an elementary one-semester or one-quarter course in integral
equations, we suggest that from Section 1.4 of Chapter 1 only the very essential ele-
ments, that are necessary for the convolution theorems, of the Laplace and the Fourier
transforms be included, a selected number of mathematical modeling problems from
Chapter 2 be covered, depending on the students’ interest, and some selected sub-
sections of the relatively long Chapter 5 be omitted. An exposure to the very basic
numerical methods of solution in Chapters 3 and 5, with their exercises and detailed
answers, is very beneficial. Another possibility is to present the most basic material
for a one-semester course in integral equations with boundary value problems as part
of the senior or first-year graduate course in methods of applied mathematics for
scientists and engineers. For this purpose, we have included the added Chapter 7 on
the numerical methods using higher quadrature rules.

The Student’s Solutions Manual

A Student’s Solutions Manual to accompany this book with Additional Solved


Problems is now available from the author directly. Student’s Solutions Manual to
Accompany “Introduction to Integral Equations with Applications — Second Edition,
Wiley & Sons, Inc., 1999 by Abdul J. Jerri" with Additional Solved Problems
(Sampling Publishing, 1999, by Abdul J. Jerri, ISBN 0-9673301-0-6).
To order: Telephone (315)265-2755 and (315)265-1005, Fax (315)265-2755, e-
mail: [email protected] and [email protected]. Also see website: http://
www.clarkson.edu/~jerria/solnman. Send $29.95 plus $2.95 for shipping and han-
dling in the United States and Canada, and $4.95 abroad (all in US currency) to:
Attention S.A. Jerri, 69 Leroy Street, Potsdam, NY 13676, USA.
Acknowledgments

I would like to acknowledge many helpful suggestions from colleagues and students
during the preparation and use of the first edition of this book as well as using the
manuscript of the present second edition. I would also like to thank all of those
colleagues and students who read the first draft of this second edition, and made
valuable remarks and corrections. Professors C.A. Roberts and A. Aluthge read the
prefinal draft of this edition and made very valuable suggestions that helped a great
deal in steering this book towards its main stated goal of serving as the very first
introduction to the subject for undergraduate students in science and engineering
and I owe them my deep gratitude. Professor A. Bastys made the most thorough
reading of the prefinal draft, attending to the very details of the text with very candid
suggestions and corrections, and I owe him my deepest gratitude.
I would like to thank especially the reviewers here and abroad who made very
constructive criticisms and detailed suggestions, which I have attempted to address
very seriously, and which I hope will contribute to the desired quality and purpose
of this book. In particular, Prof. J. Chochran made the most detailed critical
evaluation with constructive suggestions. Indeed I have also requested him to review
this manuscript, which he did with suggestions that have contributed to the further
focusing of this book toward being the first introductory book on the subject for
undergraduates in the applied fields. Professor Chochran deserves my gratitude.
Also Professors I. Fenyő and M. Putinar made very detailed and constructive reviews
of the first edition, and I owe them my sincere thanks.
All along the process of developing this second edition, Prof. M.Z. Nashed has
been very generous in his constructive suggestions and valuable criticisms, thus I am


deeply indebted to him. Mr. J. Craparo read the first draft of the manuscript, made
suggestions and supplied detailed numerical solutions, and he deserves my thanks.
The staff of Wiley and Sons, especially Ms. J. Downey, Ms. A. Loredo, and Ms.
S. Liu, deserve my thanks for their effective cooperation. I am grateful to Ms. C.
Smith for typing the prefinal draft of the first edition and the final camera-ready form
of this edition, and for typing the changes and additions to this new edition. Mr. J.
Hruska, Jr. deserves thanks for making the drawings with patience and care.
I owe my deepest thanks to my wife Suad and my daughter Huda for their continued
support and patience during the long hours of preparing this edition.
Integral Equations, Origin,
and Basic Tools

An integral equation is an equation in which the unknown function u(x) appears


under an integral sign. A general example of an integral equation in u(x) is

u(x) = f(x) + ∫ K(x, t) u(t) dt   (1.1)

where K(x, t) is a function of two variables called the kernel or nucleus of the integral
equation. According to Bôcher [1914], the name integral equations was suggested
in 1888 by du Bois-Reymond, although the first appearance of integral equations is
accredited to Abel for his thesis work on the Tautochrone, which was published in
1823 and 1826, and which we shall present shortly. There is also the opinion that
such first appearance was in Laplace's work in 1782, as it shall make sense when we
speak of the inverse Laplace transform in Section 1.4.1. For example, the Laplace
transform of the given (known) function f(t), 0 ≤ t < ∞, is

L{f(t)} = F(s) = ∫_0^∞ e^{−st} f(t) dt,   s > a   (1.2)

provided that the integral converges for s > a. So, if we are now given F(s), say
F(s) = 1/s^2, s > 0, and we are to find the original function (now as unknown) f(t),
or the inverse Laplace transform of F(s), i.e., f(t) = L^{−1}{F(s)},

1/s^2 = ∫_0^∞ e^{−st} f(t) dt,   (1.2a)

then we are faced with solving an integral equation (1.2) in (the unknown) f(t). So it
does make sense that integral equations started with Laplace, since he was, in the
final analysis, after recovering the original function f(t) from knowing F(s) in (1.2).
In our above example, f(t) = t.
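As a quick numerical illustration of this example (added here; it is not part of the
original text), the Laplace integral in (1.2) for f(t) = t can be evaluated with a standard
quadrature routine and compared with F(s) = 1/s^2. The sketch below assumes NumPy
and SciPy are available; the function name is only illustrative.

    import numpy as np
    from scipy.integrate import quad

    def laplace_transform_num(f, s):
        # Approximate F(s) = integral_0^infinity e^(-s*t) f(t) dt numerically.
        value, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, np.inf)
        return value

    for s in (0.5, 1.0, 2.0):
        # The two printed columns agree to quadrature accuracy, confirming L{t} = 1/s^2.
        print(s, laplace_transform_num(lambda t: t, s), 1.0 / s**2)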
In the same vein, Fourier in 1820 solved for the inverse f(x) of the following
Fourier transform F(λ) of f(x), −∞ < x < ∞,

F(λ) = ∫_{−∞}^{∞} e^{−iλx} f(x) dx   (1.3)

as

f(x) = (1/2π) ∫_{−∞}^{∞} e^{iλx} F(λ) dλ.   (1.4)

Hence in finding the (unknown) f(x) in equation (1.3), he solved an integral equation
in f(x) with the solution given in (1.4). With such an explicit solution f(x) for (1.3),
it is not surprising that some historians consider this Fourier (inverse transform) result
of (1.4) as the first very clear and reachable solution of an integral equation [Bell,
1945, p. 525]. We note that the formula for the inverse Laplace transform [as shown
in (1.65)] is not as accessible as the above (1.4) of the inverse Fourier transform,
where the mere statement of the former in (1.65) needs familiarity with complex
contour integration (that we shall not pursue at the level of this introductory text).
Some problems have their mathematical representation appear directly, and in
a very natural way, in terms of integral equations. Other problems, whose direct
representation is in terms of differential equations and their auxiliary conditions, may
also be reduced to integral equations. Problems of a “hereditary” nature fall under the
first category, since the state of the system u(t) at any time t depends by definition on
all the previous states u(t − τ) at the previous times t − τ, which means that we must
sum over them, hence involve them under the integral sign in an integral equation.
We may then say that such problems, among others, have integral equations as their
natural mathematical representation. The examples, which we illustrate next, are
from population dynamics, the surge in birth rates, the mortality of equipment and
their rate of replacement, biological species living together, population of fish and
game, the torsion of a wire or rod, the control of a rotating shaft, the shape of a
hanging chain, the deflection of a rotating rod, and the shape of a wire that allows a
bead to descend on it in a predetermined time (Abel’s problem). More problems are
included in this edition, and in particular, those in higher dimensions. This includes
the potential distribution in a disc and the Schrödinger equation in the three-dimensional
momentum space. We will present almost all of these problems with their basic
clear statements as they are represented by integral equations, leaving the detailed
derivation of a good selection of them for Chapter 2. The rest of the examples are
problems that are formulated in terms of ordinary or partial differential equations
with initial and/or boundary conditions that are reduced to an integral equation or
equations. The advantage here is that the auxiliary conditions are automatically
satisfied, since they are incorporated in the process of formulating the resulting
integral equation. The other advantage of the integral equation form is in the case
when both differential equations as well as integral equations forms do not have exact,
closed-form solutions in terms of elementary known functions. In this case we must
resort to numerical or approximate computations, where the integral representation
is more suitable.
As mentioned above the detailed formulations of many of the problems presented
in the next section are given in Chapter 2, but first we need to familiarize ourselves
with the various types of integral equations and acquire some basic mathematical
tools necessary for facilitating such formulations.
For future reference the integral equation,

u(x) = f(x) + ∫ K(x, t) u(t) dt   (1.1)

may be written in the operational (abbreviated) form or notation as

u(x) = f(x) + (Ku)(x)   (1.5)

or

u = f + Ku

where K is an integral operator for the integral in (1.1) that maps the function u, as an
input, to an output (Ku)(x) = ∫ K(x, t) u(t) dt in the range of the integral operator
K. As seen in (1.1), the image of the function u under such mapping becomes
u − f = Ku. Such a mapping Ku of the function u by the (integral) operator K is
similar to the usual mapping done by a function F(x) on the variable x. The difference
here is that the domain of the operator K is a class of functions u(x) instead of the
numbers x for the operation of the function F(x). This operator notation in (1.5)
does help in describing, in a very brief and elegant way, the more general transformation
of functions instead of just numbers, and thus the topic of the analysis of functions as
functional analysis, a subject that we shall not pursue further in this introductory
book, except for, possibly, a brief mention in the (optional) Chapter 6.
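For readers who like to see the operator idea concretely, the following short sketch
(added here; not from the text) represents the action u ↦ Ku of (1.5) by replacing the
integral with a quadrature sum, so that the operator becomes an ordinary matrix acting
on samples of u. The kernel and input function below are arbitrary placeholders.

    import numpy as np

    # Sample the interval [0, 1] and form trapezoidal quadrature weights.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5

    K = np.exp(-np.abs(x[:, None] - x[None, :]))   # placeholder kernel K(x, t)
    u = np.sin(np.pi * x)                          # samples of an input function u(t)

    # (Ku)(x_i) is approximated by sum_j K(x_i, t_j) u(t_j) w_j, a matrix-vector product.
    Ku = K @ (w * u)
    print(Ku[:5])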
We may mention that some situations may also involve a rate of change, besides
the cumulative nature, where a derivative is used. Hence the unknown u(x) is
involved inside an integration as well as a differentiation operation. The result of
such a mathematical model is a hybrid equation termed an integro-differential equation.
For example,

a_0(x) u(x) + a_1(x) du/dx = f(x) + ∫ K(x, t) u(t) dt   (1.6)

is an integro-differential equation in (the unknown) u(x). See also equations (1.13)
and (1.14) for the two species living together.

1.1 VARIOUS PROBLEMS AS INTEGRAL EQUATIONS

In this section we present statements of a number of problems from many different


fields which will be classified, primarily, according to whether they are formulated
directly in terms of integral equations, or are represented in terms of differential

equations with auxiliary conditions that can be reduced to integral equations. We


must emphasize again that what we are about to present are clear statements of the
problems and their integral equation representations, which is for the main purpose of
familiarizing the reader with the various applications of integral equations. We must
caution against the temptation of expecting the formulation of all these problems on
the spot! A good number are simple enough to avail themselves to such expectations.
Others, which are placed here because of their interesting and representative nature,
need more detailed derivation, which we shall cover in Chapter 2. Our purpose here
is to convey a feeling for modeling in terms of integral equations. For example, the
integral equation in u(t),

u(t) = ∫_0^t K(t, τ) u(τ) dτ   (1.7)

relates the present state u(t) to the accumulation (integral) of what had happened to
all its previous values u(τ) from τ = 0 to the present time τ = t. The formulation
of most of these problems will be given in detail in Chapter 2, others are left for
exercises, and for the remainder, which are not as suitable for solution here, we refer
the interested reader elsewhere for detailed treatment. For such problems we will
give the appropriate references, which are included in the bibliography at the end of
the text.
This section consists of four parts: part A covers applied problems of hereditary
nature that have integral equations as their natural settings, part B addresses finding
the inverse of integral transforms, such as the Laplace transform for an example, as
solving integral equations (of the first kind), part C deals with ordinary differential
equations associated with initial or boundary conditions that reduce to (Volterra or
Fredholm) integral equations, and part D presents an example of an integral equation
in two dimensions, which is associated with a partial differential equation and bound-
ary conditions that reduces to a (Fredholm) integral equation in two dimensions.

A. Integral Equations as the Problems’ Natural Setting

Human population
The problem of forecasting human population may be one of the clearest examples
formulated as an integral equation since the population n(t) at time t depends on the
number of the initial population n(0) = n_0 surviving to time t, and, more importantly,
all children born during the time interval 0 < τ < t who survive to time t. The
dependency of the population n(t) on the initial population n_0 and the previous
populations n(τ), 0 < τ < t, is represented by the integral equation

n(t) = n_0 f(t) + k ∫_0^t f(t − τ) n(τ) dτ   (1.8)

where n_0 is the number of people present at time t = 0 and f(t) is the survival
function (Figure 1.1), which is the fraction of the number of people that survive to
age t. With regard to the integral in (1.8) we may remark that k f(t − τ) n(τ) Δτ
represents the number of children born in the time interval Δτ around time τ that
survive to time t. It is clear that their number is proportional to n(τ), the population
present at time τ, and that their survival function at time t is f(t − τ) since they are
then of age t − τ. The detailed formulation of (1.8) is presented in (2.3) to (2.8) of
Section 2.1.

Fig. 1.1 The survival function. From Jerri [1982, 1986], courtesy of COMAP, Inc.
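Since the upper limit of integration in (1.8) is the variable t, its numerical solution can
be marched forward in time, a theme developed in Chapters 3 and 7. The sketch below
(added here, not from the text) uses the trapezoidal rule on a uniform grid with an
illustrative survival function f(t) = e^{−t} and k = 1; at each step everything on the
right is already known except n(t_i), which is then solved for. The function names are
illustrative only.

    import numpy as np

    def solve_renewal(f, n0, k, T=5.0, m=500):
        # March n(t) = n0*f(t) + k * int_0^t f(t - tau) n(tau) d(tau) with the trapezoidal rule.
        t = np.linspace(0.0, T, m + 1)
        h = t[1] - t[0]
        n = np.empty(m + 1)
        n[0] = n0 * f(0.0)
        for i in range(1, m + 1):
            # Trapezoidal sum over the already-computed values n(t_0), ..., n(t_{i-1}).
            s = 0.5 * f(t[i]) * n[0] + np.sum(f(t[i] - t[1:i]) * n[1:i])
            # The unknown n(t_i) appears in the last trapezoid with weight h/2; solve for it.
            n[i] = (n0 * f(t[i]) + k * h * s) / (1.0 - 0.5 * k * h * f(0.0))
        return t, n

    t, n = solve_renewal(lambda t: np.exp(-t), n0=100.0, k=1.0)
    print(n[-1])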

In Section 1.2 we will discuss the classification of integral equations with their two
main classes, namely, Volterra and Fredholm integral equations. These two (differ-
ent) equations are characterized by having a variable and a fixed limit of integration,
respectively. Hence, equation (1.8) is a Volterra integral equation, since the (upper)
limit of integration is the variable 7 = t. As we shall see soon, equation (1.19) that
describes the small deflection y(x) of a rotating shaft is with fixed integration limits
€ = 0,1 hence it is a Fredholm integral equation.

Periodicity in the surge of birthrates


The study of population dynamics includes determination of the surge in the
birthrate b(t) at any time ¢ to allow for future necessary planning. The dependence
of the birthrate b(t) on previous birthrates b(t − τ), for women in the childbearing
age range α ≤ τ ≤ β, is given by Lotka's integral equation,

b(t) = g(t) + ∫_α^β b(t − τ) p(τ) m(τ) dτ   (1.9)

where p(τ) is the probability that a female lives to age τ and m(τ)Δτ is the probability
that she will give birth to a female in the time interval Δτ. g(t) is a term added to
allow for girls already born before the oldest childbearing woman (of age τ = β) was
born. The formulation of (1.9) is the subject of an exercise in Section 2.1 (Exercise
4), which is supported by detailed leading hints.

Mortality of equipment and its rate of replacement


Equipment wears out, so to maintain a fixed number of items f(t) at any time t,
it must be replaced at a certain (unknown) rate r(t), which requires knowing (from
past experience) the survival function s(t − τ) of all equipment bought at time τ
previous to time t. This problem is represented by the integral equation

f(t) = f(0) s(t) + ∫_0^t s(t − τ) r(τ) dτ   (1.10)

where s(0) = 1 to make all newly bought equipment present at t = 0 [i.e., f(0) =
f(0)s(0)] and r(t) is the rate at which the equipment must be replaced. We may
note here the common "hereditary" nature of both (1.8) and (1.10). We may add
that r(τ)Δτ in the integral of (1.10) represents the number of new machines added
during the time interval Δτ about time τ. These will be of age t − τ at time t;
hence their survival function is s(t − τ) and their surviving number at time t is about
r(τ) s(t − τ) Δτ. So we have to integrate these increments of the newly added machines
to find their total from the initial time τ = 0 to time τ = t as the integral of the second
term in (1.10). The first term f(0)s(t) in (1.10) represents the number f(0)s(t) of
the initial equipment f(0) that stayed operational until time t and the ones that did
not need replacement.

Propagation of stocked fish in a new lake


A very closely related problem¹ to the above problem of the equipment rate of
replacement is the problem of the propagation rate of fish in a new lake. New artificial
lakes are stocked with fish for recreation purposes. This is the same situation with
game in a protected park. From known data of similar lakes, the fish are stocked at
a given (known) rate of supply s(t) per year. Add to this that the fish multiply (or
propagate) at an unknown rate of r(t) per year. This means that in the time interval
Δτ we have (s(τ) + r(τ))Δτ fish added to the lake. But such a population, naturally,
declines, where we assume the simple exponential decay e^{−λt}. For the future time
t, such added fish during the time interval Δτ around time τ are of age t − τ. So
their survival function is e^{−λ(t−τ)}. The result is that at future time t we will have
only (s(τ) + r(τ))Δτ e^{−λ(t−τ)}, of the total accumulated fish in the time interval Δτ
around τ, survive to time t. So, to find the total number of fish N_w(t) due to
the (known) supply rate s(t) and the natural multiplying (propagation) rate r(t), we
integrate from the initial time τ = 0 to τ = t to have

N_w(t) = ∫_0^t e^{−λ(t−τ)} [s(τ) + r(τ)] dτ.   (1.11)

It is, of course, desired to keep the number of fish in the lake at a certain (given)
level N(t). This level is kept and watched by sampling the fish in the lake by
selective netting. Before we supply the stocked fish, which will multiply to give the
total number of fish N_w(t) in the integral of (1.11), we assume that the lake had an
initial number of fish N(0) = N_0. But these fish will decline to N(0)e^{−λt} at time t.
So if we add this number to the supplied and propagated number of the integral in
(1.11), we have the total number of fish N(t), which we would like to keep at this
(given, known) level,

N(t) = N_0 e^{−λt} + ∫_0^t [s(τ) + r(τ)] e^{−λ(t−τ)} dτ.   (1.12)

This is a (Volterra) integral equation in the unknown rate of propagation function
r(τ), where s(τ) and N(t) are assumed as known functions.

¹Wing [1991], courtesy of SIAM.
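An added remark, not part of the original text: this particular equation can in fact be
solved explicitly. Differentiating (1.12) with respect to t (using the generalized Leibnitz
formula of Section 1.3.2) gives

dN/dt = −λ N_0 e^{−λt} + [s(t) + r(t)] − λ ∫_0^t [s(τ) + r(τ)] e^{−λ(t−τ)} dτ = −λ N(t) + s(t) + r(t),

so the required propagation rate is simply r(t) = dN/dt + λ N(t) − s(t), provided the
prescribed level N(t) is differentiable.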

Biological species living together


Consider two separate species with numbers n_1(t) and n_2(t) at time t, where the
first species increases and the second decreases. If they are put together, assuming
that the second species will feed on the first, there will be an increase in the rate of
the second species dn_2/dt which depends not only on the present population n_1(t)
but also on all previous values of the first species. When a steady-state condition or
equilibrium is reached between these two species, it is described by the following
pair of integro-differential equations (as n_1 and n_2 appear under both integration and
differentiation operations):

dn_1/dt = n_1(t) [k_1 − γ_1 n_2(t) − ∫_{t−T_0}^{t} F_1(t − τ) n_2(τ) dτ],   k_1 > 0   (1.13)

dn_2/dt = n_2(t) [−k_2 + γ_2 n_1(t) + ∫_{t−T_0}^{t} F_2(t − τ) n_1(τ) dτ],   k_2 > 0   (1.14)

where k_1 and −k_2 are the coefficients of increase and decrease of the first and second
species (had they stayed separate), respectively. The parameters γ_1, F_1 and γ_2, F_2 are
dependent on the respective species. T_0 is assumed to be the finite heredity duration
of both species. The detailed formulation of (1.13) and (1.14) is given in Section
2.1.2.
There are many other problems that are modeled as integral equations that include
some examples, which we covered briefly in this section of the first edition, but shall
only mention here. We give their original references, and leave for the interested
reader their, somewhat, lengthy derivation. These include the propagation of nervous
impulse,² the smoke filtration in a cigarette,³ and the chance to find a time gap T in
order to cross dense traffic.⁴ We have also attempted in this second edition to add
a few new examples with emphasis on their more simple and clear derivation. The
other new ones were placed as exercises for Chapter 2, which is designated for the

²Rashevsky [1960, p. 426].
³Noble [1967, p. 153].
⁴Green [1969, p. 139].

detailed mathematical modeling of problems as integral equations. Such exercises


are supported with detailed hints.

Torsion of a wire
Many physical problems are also of a hereditary nature; for example, if we apply
a torque m(t) to twist a wire or bar, the torsion w(t) will depend on this present
torque in the form m(t) = h w(t) as well as on all torques applied in times (−∞, t)
previous to t. Such accumulation of twists changes the physical properties of the
wire, thus introducing a hereditary (cumulative) effect. We will assume that we have
some data that tell us at time t how the proportionality factor φ(t, τ), instead of the
usual constant proportionality factor h, had been affected by the continuous previous
torques m(τ), −∞ < τ < t. If we add the first torque h w(t) to the accumulation of
previous torques as the integral ∫_{−∞}^{t} φ(t, τ) w(τ) dτ, we have, for static equilibrium,
this problem represented by the integral equation

m(t) = h w(t) + ∫_{−∞}^{t} φ(t, τ) w(τ) dτ.   (1.15)

As mentioned above, h is a constant and φ(t, τ) is a function that takes care of
how previous torques had affected the physical properties of the wire, and hence the
present torsion. We may note that the function φ depends on t and the previous times
τ in a more general way φ(t, τ) than the dependence on their difference φ(t − τ)
in most of the foregoing problems. We may note how the (positive) accumulation
integral in (1.15) effectively reduces the proportionality constant h of the first term
h w(t).
We may also note that this integral equation in (1.15) has a new feature, which
is that one of the limits of the integration is unbounded, namely, the lower limit of
the integral is −∞. As we shall see when we classify integral equations in the next
section (Section 1.2), such equations are called singular integral equations. Another
property that renders an integral equation singular is when the kernel [see K(x, t) in
(1.1)] becomes unbounded at a point or points in the domain of the integral equation
(or, more precisely, is when the kernel is not integrable on the domain of the integral).
We must note here that the methods of solving such singular integral equations are
more involved for the purpose of this book, where they require basic knowledge of
complex analysis. We shall not pursue a discussion of such methods in this book, but
we will illustrate solving a number of singular integral equations that are tractable
via some basic tools that we shall present in Section 1.4.

Automatic control of a rotating shaft


The problem of correcting for the deviation φ(t) between θ_0(t), the angle of
steering (or rotating) a shaft, and θ_i(t), the angle of the direction indicator to be
followed, is similar to that of the above twisted wire problem.

To correct for this deviation φ(t), a torque proportional to such deviation, aφ(t),
and in the opposite direction, must be applied to overcome the instantaneous deviation
at time t. Another torque, b(dφ/dt), proportional to the rate of change dφ/dt of the
deviation, must also be applied to dampen such a change. To do even better, a torque
c ∫_0^t φ(τ) dτ, proportional to the accumulation of the deviation from the starting time
t = 0 to the present time t, must also be applied to take care of all previous deviations.
If we let I be the moment of inertia of the rotating shaft, then according to Newton's
second law of motion, the torque m_0(t) applied by the shaft to rotate with angle θ_0(t)
is

m_0(t) = I d^2θ_0/dt^2.

This torque must be equal in magnitude, but opposite in direction, to the sum of the
three correction torques,

I d^2θ_0/dt^2 = −a φ(t) − b dφ/dt − c ∫_0^t φ(τ) dτ.   (1.16)

We note here that the unknown φ(t) of (1.16) is being differentiated in the term
−b dφ/dt as well as integrated in the last term −c ∫_0^t φ(τ) dτ. Such an equation is called an
integro-differential equation in φ(t).
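A brief added observation (not in the original): differentiating (1.16) once with respect
to t removes the integral term, since d/dt ∫_0^t φ(τ) dτ = φ(t), and leaves the ordinary
differential equation

I d^3θ_0/dt^3 = −a dφ/dt − b d^2φ/dt^2 − c φ(t),

at the price of raising the order of differentiation; moving back and forth between such
differential and integral forms is a recurring theme of this book.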

Shape of an elastic thread (The hanging chain)


An example of a physical problem that results naturally in an integral equation
is to find how a variable density ρ(x) must be distributed along an elastic thread in
order that the thread assumes a given shape f(x). For an elastic thread of length l
under a horizontal tension T_0, the resulting integral equation in ρ(x) is

f(x) = g ∫_0^l G(x, ξ) ρ(ξ) dξ,   (1.17)

where

G(x, ξ) = x(l − ξ)/(T_0 l),   0 ≤ x ≤ ξ,
G(x, ξ) = ξ(l − x)/(T_0 l),   ξ ≤ x ≤ l,   (1.18)

and g is the acceleration of gravity. The formulation of this problem is rather simple,
but a bit more detailed. The least we need is a figure of the above function G(x, ξ)
with its "two branches" on (0, ξ) and (ξ, l), as shown in Figure 1.2, which we shall
derive in Section 2.3.1. In Figure 1.2 we note that y(x) is taken to be positive in the
(downward) direction of gravity.

We will show then that this G(x, ξ) is the resulting shape of the elastic string, as
a function of x, that is due to a vertical (point) force of unit magnitude located at
x = ξ. So, the vertical force ΔF(ξ) = g ρ(ξ) Δξ, due to the weight of the increment
Δξ of the length of the string, would cause the displacement of the string in the form
Δy(x, ξ) = G(x, ξ) g ρ(ξ) Δξ. What remains is to find the total displacement y(x),
0 ≤ x ≤ l, due to the gravity force along the whole string, which is obtained by
superimposing all these displacements Δy(x, ξ) of the elements of the string. This
is accomplished by integrating over dy(x, ξ) from ξ = 0 to ξ = l to obtain the final
shape of the elastic string f(x), which is what we have in (1.17). We note that (1.17)
is a Fredholm integral equation in the (unknown) density distribution function ρ(x),
Fig. 1.2 Displacement due to a single vertical force F at ξ. From Jerri [1986], courtesy of
COMAP, Inc.

since the limits of integration are fixed as ξ = 0 and ξ = l. Also, as we shall discuss
the classification of integral equations in Section 1.2, when the unknown function is not
present in a term outside the integral of an integral equation, the equation is called of
the first kind. Hence in (1.17) we have a Fredholm integral equation of the first kind.
The very detailed derivation of (1.17) and (1.18) is presented in Section 2.3.1.
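As a plausibility check on (1.17) and (1.18) (an added aside, assuming the branch form
of G(x, ξ) reconstructed above), taking a constant density ρ and evaluating the integral
in (1.17) numerically reproduces the familiar parabolic sag g ρ x(l − x)/(2T_0) of a
uniformly loaded string.

    import numpy as np
    from scipy.integrate import quad

    l, T0, g, rho = 1.0, 2.0, 9.81, 0.3

    def G(x, xi):
        # Two-branch influence function of (1.18): displacement at x due to a unit force at xi.
        return x * (l - xi) / (T0 * l) if x <= xi else xi * (l - x) / (T0 * l)

    for x in (0.25, 0.5, 0.75):
        integral, _ = quad(lambda xi: G(x, xi) * rho, 0.0, l, points=[x])
        f_exact = g * rho * x * (l - x) / (2.0 * T0)
        print(x, g * integral, f_exact)   # f(x) = g * integral matches the parabola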

Small deflection of a rotating shaft


Consider a shaft of length l rotating with angular velocity ω about the x axis. When
it is disturbed a little, there results a deflection from the original position along the
x axis, as shown in Figure 1.3. To formulate the problem for the deflection y(x) at
x of the bar from its original rotating position along the x axis, we will assume that
we know the function F(x, ξ), which gives the displacement in the y direction at the
point x due to a unit force applied at another point x = ξ. The details of deriving
such a function are similar to those of the above hanging chain problem (which
shall be derived in Section 2.3.1). To find how the force of the segment of length
Δξ of the bar affects the displacement y(x), we must find the centrifugal force of
this rotating segment whose mass is Δm = ρ(ξ)Δξ, radius r = y(ξ), and angular
velocity ω. Hence the centrifugal force is Δm ω^2 r = ρ(ξ)Δξ ω^2 y(ξ), where ρ(ξ) is the
linear density. According to the definition of F(x, ξ), this centrifugal force of Δξ
around ξ will affect a displacement Δy(x) = F(x, ξ) ρ(ξ) Δξ ω^2 y(ξ). Now, if we
sum up all the contributions to the y(x) displacement at x from all the segments Δξ
along the length of the bar (0, l), we obtain

y(x) = ω^2 ∫_0^l F(x, ξ) ρ(ξ) y(ξ) dξ   (1.19)

which is a Fredholm integral equation in y(x), the deflection of the bar at x. Since
there is no external term outside the integral of (1.19) that is independent of the
unknown function y(x), this equation is termed homogeneous.

Fig. 1.3 Small deflection of a rotating bar.
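A numerical aside (not in the original text): discretizing (1.19) with a quadrature rule
turns this homogeneous Fredholm equation into a matrix eigenvalue problem, so the
angular velocities ω at which a nontrivial deflection can persist (the critical speeds)
correspond to eigenvalues of the discretized kernel. The influence function and density
below are placeholders only.

    import numpy as np

    l, T0, rho = 1.0, 2.0, 0.3
    n = 200
    x = np.linspace(0.0, l, n + 2)[1:-1]     # interior quadrature nodes
    h = x[1] - x[0]

    # Placeholder influence function F(x, xi); here the taut-string form is reused.
    X, XI = np.meshgrid(x, x, indexing="ij")
    F = np.where(X <= XI, X * (l - XI), XI * (l - X)) / (T0 * l)

    A = F * rho * h                          # A_ij ~ F(x_i, xi_j) * rho(xi_j) * weight
    mu = np.sort(np.linalg.eigvals(A).real)[::-1][:3]
    # y = omega^2 A y means 1/omega^2 must be an eigenvalue mu of A.
    print(np.sqrt(1.0 / mu))                 # first few critical angular velocities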

Sliding a bead along a wire: Abel’s problem


One of the earliest problems formulated as an integral equation was Abel’s problem.
It describes the shape of a wire φ(y) in a vertical plane (Figure 1.4)

Fig. 1.4 The sliding bead on a wire—Abel’s problem.

along which a bead must descend (under the influence of gravity) a distance y in a
predetermined time f(y). This is represented by Abel's integral equation in φ(y),

√(2g) f(y) = ∫_0^y φ(η) dη / √(y − η)   (1.20)



where φ(y) = 1/sin α, the angle α is shown in Figure 1.4, and g is the acceleration of
gravity. The detailed formulation of this problem is presented in Section 2.3.2. There
we shall see that for s(y), the length of the path as a function of the vertical distance
y, we have dy/ds = −sin α. The unknown φ(y) in (1.20) is defined as φ(y) = −ds/dy = 1/sin α.
Here, we have resorted to using η for y of φ(y) as the dummy variable of integration
so that we can write the upper limit of integration as y. Most references use y for
the variable of integration, and designate the upper limit of integration as y_0, which
may be confused with a constant limit y_0. We had to stay with the variables y and
η, since y is the vertical distance traveled. Abel in 1823–1826 formulated and solved
this and more general problems. This was followed, independently, by Liouville's
work in 1832–1839.

Example 1
Verify that φ(y) = 1/2 is a solution of the following special case of Abel's problem:

y^{1/2} = ∫_0^y φ(η) dη / √(y − η)   (E.1)

We substitute φ(η) = 1/2 in the integral of (E.1) to obtain

∫_0^y (1/2) dη / √(y − η) = (1/2) [−2√(y − η)]_{η=0}^{η=y} = −(0 − y^{1/2}) = y^{1/2}   (E.2)

which is the left side of (E.1); hence φ(y) = 1/2 is a solution of Abel's integral equation
(E.1).
We note that this case of Example 1 corresponds to φ(y) = 1 in (1.20) for a body
falling a—not so interesting path!—of direct vertical fall of distance y. This is the
case since for such a fall we have y = (1/2) g t^2, where y = 0 corresponds to
t = 0, so in our case we write t = f(y) = √(2y/g). If we substitute this value in the
left side of (1.20), we have √(2g) · √(2y/g) = 2√y. With this factor of 2, the solution to
(E.1) is φ(y) = (2)(1/2) = 1 = 1/sin α,
which results in α = π/2, the direct vertical fall!
This is not such an interesting, if not dangerous, special case of a path of descent for
(1.20). The following Tautochrone problem is a much more interesting special case of
(1.20), and is also the first integral equation problem that started Abel's interest in
the subject.
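The special case of Example 1 is also easy to confirm numerically (an added check, not
in the book). The substitution η = y − u^2 removes the integrable singularity of the
kernel 1/√(y − η) at η = y, since ∫_0^y φ(η) dη/√(y − η) = 2 ∫_0^{√y} φ(y − u^2) du,
and for φ = 1/2 the result is indeed √y.

    import numpy as np
    from scipy.integrate import quad

    def abel_integral(phi, y):
        # Evaluate int_0^y phi(eta)/sqrt(y - eta) d(eta) via the substitution eta = y - u**2.
        value, _ = quad(lambda u: 2.0 * phi(y - u * u), 0.0, np.sqrt(y))
        return value

    for y in (0.25, 1.0, 4.0):
        print(y, abel_integral(lambda eta: 0.5, y), np.sqrt(y))   # last two columns agree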

The Tautochrone
This is Abel’s original problem that he later generalized to the integral equation
(1.20). As a special case of (1.20) it deals with finding the path where the time
required for descent along such a path as shown in Figure 1.5 from any point (x, y) to
the origin is a constant T, i.e., f(y) = T is independent of the starting point. For
Fig. 1.5 Abel’s Tautochrone problem.

this f(y) = T, (1.20) becomes (noting here that y corresponds to t = 0, and y = 0
corresponds to t = T)

√(2g) T = ∫_0^y [−F′(η)] dη / √(y − η),   F′(η) = dF/dη   (1.21)

Here F(y) = s(y), the length of the path as a function of the vertical distance y, and
dF/dy = ds/dy = −φ(y) in (1.20), where (ds)^2 = (dx)^2 + (dy)^2. Historically, Abel is
credited with the first conscientious effort in stating problems as integral equations
as in (1.20) and (1.21) and offering their solutions. For example, he gave the solution
to the Tautochrone problem (1.21) as

F(y) = s(y) = (2√(2g) T / π) y^{1/2}

which is the equation of a cycloid. He also solved (1.20) and its generalization,
which we shall return to in Section 1.4.1 when we cover the Laplace transform, that
will facilitate obtaining all such solutions to Abel's problems.
We may note that in the above Abel problems, we see first that the kernel K(y, η) =
1/√(y − η) is unbounded at the point η = y, hence they are singular (Volterra) integral
equations. Second, that the unknown term is absent outside the integral, when
compared with the general integral equation in (1.1),

u(x) = f(x) + ∫ K(x, ξ) u(ξ) dξ.   (1.1)

As we mentioned earlier, such integral equations are termed of the first kind as
opposed to those of the second kind when the unknown term is present outside
the integral as in (1.1). So Abel's integral equations (1.20) and (1.21) are singular
Volterra equations of the first kind.
As we shall see in Section 5.4 for the Fredholm integral equations of the first kind,
such equations, very often, represent some major difficulties. For Abel problems, this
is complicated even more where the problem is also singular, which is another difficult
situation for integral equations. These difficulties were documented theoretically long
after Abel’s time. So, we may say that Abel not knowing of such, often, formidable
difficulties, he took such problems by stride, as they turned out to be among the few
without major apparent difficulties! We shall return in Section 2.3.2 to derive Abel’s
integral equation (1.20), with its solution accomplished by the use of the Laplace
transform (see Example 8 in Section 3.2.1 and Exercise 5 of Section 1.4.)
The Tautochrone problem will be the subject of Exercise 5 in Section 2.3, where
the derivation is supplied with detailed leading hints.

Example 2 Bernoulli’s Problem


One of the simplest integral equations arises from a problem in geometry called
Bernoulli’s problem, which deals with finding the shape of a curve y = f(x) for
which the area A under the curve, ∫_0^x f(ξ) dξ, on the interval (0, x) is only a fraction
k of the area of the rectangle circumscribing it, which is k x f(x) (Figure 1.6). Thus,
this problem is easily represented by the integral equation

k x f(x) = ∫_0^x f(ξ) dξ.   (E.1)

Fig. 1.6 The Bernoulli problem.

For a simple demonstration of (E.1) we can easily verify that the area under the
parabola y = x^2 is one-third of the rectangle circumscribing it. So, if we substitute
k = 1/3 and f(x) = x^2 in the Bernoulli equation (E.1), we have

(1/3) x · x^2 = x^3/3 = ∫_0^x ξ^2 dξ   (E.2)

which is the case when we perform the integration of (E.1):

∫_0^x ξ^2 dξ = [ξ^3/3]_{ξ=0}^{ξ=x} = (1/3) x^3.
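An added sketch of how (E.1) can be solved in general (this step is not taken from the
text): differentiating both sides of (E.1) with respect to x, using the Leibnitz formula of
Section 1.3.2, gives

k f(x) + k x f′(x) = f(x),   i.e.,   f′(x)/f(x) = (1 − k)/(k x),

whose solution is f(x) = C x^{(1−k)/k} for an arbitrary constant C. For k = 1/3 the
exponent is 2, recovering the parabola f(x) = C x^2 of the demonstration above.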
Radiation transport — Determining the energy spectrum of neutrons
We shall present here a simple example of the absorption of radiation⁵ (say
neutrons) in a slab with fixed thickness as illustrated in Figure 1.7. The measured

Fig. 1.7 Simple experiment for determining the energy spectrum of particles.

neutrons g(x) on the other side of the slab (after the absorption) for a finite number
of different thicknesses x of the same material can be used to determine the neutron
spectrum f(E). Here f(E) is the number of neutrons at the energy level E. The
result is a Fredholm integral equation of the first kind in the neutron spectrum function
f(E). Before we start the derivation of such an equation, we need to define what we
mean by the cross section σ of the nuclei of the material to the incoming radiation
(or neutrons) with energy E. Simply speaking, it is "the effective area" that the
neutrons see of the nucleus as a target. Of course, the cross section σ depends on the
material, and, principally, on the energy distribution (spectrum) f(E) of the colliding
neutrons. Also when the particles (or neutrons) collide with the nucleus, they may
be absorbed, scattered, or create new neutrons (by fission). So it is important, first,
to know the probability of the collision. It can be shown easily (see Exercise 14) that
⁵Wing [1991, p. 9], courtesy of SIAM.


the probability of a neutron traveling a distance x with no collision is

p(x) = e^{−σx}.   (1.22)
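The reasoning behind (1.22), left to Exercise 14, can be sketched in a line (an added
aside): if σ dx is the probability of a collision within a thin layer of thickness dx, then
the no-collision probability satisfies p(x + dx) = p(x)(1 − σ dx), i.e., dp/dx = −σ p with
p(0) = 1, whose solution is the exponential of (1.22).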

Now let us consider a beam of neutrons transmitted through a slab of uniform
thickness x as shown in Figure 1.7. We shall assume in such an experiment that the
neutrons are only absorbed (and none are scattered). Hence the cross section is that
of absorption, and, of course, it depends on the neutrons' energy spectrum f(E),
E_min ≤ E ≤ E_max, and we shall denote it by σ(E). We are to determine this
spectrum f(E) from knowing the "measured" neutrons g(x) that cross the slab of
thickness x. It is understood that doing such a measurement for a slab gives only one
sample value of the function g(x). So the thickness is varied for a finite number of
different thicknesses to have a good idea about "the measured" output g(x).

Now in the small range of energy ΔE around E, we have f(E)ΔE neutrons
which have a distance x to cross without being absorbed. The probability for this to
happen, according to (1.22), is e^{−xσ(E)}; hence the number of these neutrons that have
crossed the slab is e^{−xσ(E)} f(E)ΔE. If we sum this contribution over the whole
energy spectrum of the neutrons from E = E_min to E = E_max, we have the total
number of the escaped (and to be measured) neutrons as

g(x) = ∫_{E_min}^{E_max} e^{−xσ(E)} f(E) dE.   (1.23)

This is a Fredholm integral equation of the first kind in the neutron spectrum f(£).
In Exercise 15 we will present discussion concerning the difficulty in solving such
Fredholm integral equations of the first kind, especially when the known function
g(a) in (1.23) has the inaccuracy of the measurement.
Also, very much related problem to this neutron transport one is the subject of
(a detailed) Exercise 4 in Section 2.2. It deals with determining the strength of the
neutrons source in a uniform rod, where it results in Fredholm integral equation of
the first kind in the unknown function of the neutrons source strength.°

Electric potential on the rim of a unit disc


Consider the potential u(r, @) in a unit disc, which is due to the given potential
u(1,@) = g(@), —m < 6 < 7 onthe rim of the unit disc. The solution u(r,8) to this
Dirichlet boundary value problem due to the given g(@) is the well known Poisson
integral formula,

line” 1-r?
HO Oa Qn [ 1 — 2r cos(@ — d) + pa 19)A8. a)
However, if we ask for the solution of the inverse problem, namely, to find a potential
distribution g(@) on the rim of the unit disc that would result in a given desired

®For this problem and other interesting problems, modeled as Fredholm integral equations of the first kind,
see Wing [1991, p. 18], courtesy of SIAM.
1.1 VARIOUS PROBLEMS AS INTEGRAL EQUATIONS 17

potential distribution u(r, @) inside the disc, then we face the above equation (1.24) as
an integral equation in the unknown function g(@). We note here that ¢ in the integral
of (1.24) is a dummy variable, and just like 9, it is the polar angle, —1 < ¢ < 7.
Further discussion of this problem, including the derivation of the result in (1.24) is
found in Section 4.1.4 [with the above result as (4.69)].

B. Inverse Problems

The Laplace and other integral transforms


As we have indicated at the beginning of this section, the most familiar example
of an integral equation in u(x) comes from searching for a function u(x) whose
Laplace transform U(s) is known:

Ci(s) = is ey u(x az, (1.25)

Another example of an integral equation arises when we have U(X) the Fourier
transform of u(x),
[o.@)

U(A) =| e 7 u(ax)dz. (1.26)


—0co

Here the solution u(z) of the integral equation (1.26) is given in Section 1.4.2 as

1 ee aN

u(z) = — os
Ae iL e’7U(\)dr : fe
C27)

This Fourier integral inversion (1.27) of (1.26) is considered historically to be the


earliest in the direction of solving integral equations such as (1.26) in u(x). Indeed,
one of the most general integral transforms of f(z),
b

F(A) = i AEC IONE (1.28)


can be considered as an integral equation in f(x). Here p(x) in (1.28) is a (known)
weight function.
In Section 1.4 we present the Laplace, Fourier, and few other transforms and
discuss some of their basic properties, which will be used to solve certain types of
integral equations. In Appendix A we present the Hankel transform with Bessel
function kernel K(z,t) = J,(at)’, [and weight function p(x) = & in (1.28)).

7Here Jn (a) is the Bessel function, of the first kind of order n, which is one of the two solutions of Bessel
differential equation x2u" + xu! + (x? — n”)u = 0 which is bounded at z = 0,

Se bubs
In(a) = ki(n + ky!
18 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

It is needed in Chapter 2 for formulating and solving the dual integral equations
representation of the electrified disc problem. In Section 2.6 we will derive the
simpler problem of the electrified plate, where the Fourier transform of Section 1.4.2
is employed. The electrified disc dual integral equations setting and their solution
are done in Example | of Appendix A.

Example 3
Verify that
ANT elt
(V4Go) ef PA) {0, a ta (E.1)

is a solution of the integral equation


G (oe)

gS as i eu (x)da . (E.2)
r co
which is a special case of(1.26).
We substitute u(x) from (E.1) in the integral of (E.2) to obtain

i eu(a)de = | ede + | eMe(Ayde + | e A= (0)dax


10,9) =nle-2} eat?

a sph a

= af e "dr = ‘Ac =
oe =tN|.

= 5A wh(cb
ir, Thyeres
ers) 2A sin Aa
= EE (E.3)
which is the left side of (E.2), after using the identity
i era iP eta
sin Aa = -
21

C. Differential Equations with Auxiliary Conditions Reduced to Integral


Equations
The above variety of problems in part A represent varied applications, where
integral equations is the natural setting for the mathematical model of such problems.
They are characterized by their “hereditary” or accumulative nature. In this section
we shall present examples that illustrate the reduction of differential equations with
auxiliary (initial or boundary) conditions to (Volterra or Fredholm) integral equations.

Initial value problems


In Section 2.4 we will use repeated integration and some integral identities from
Section 1.3 to show that, for example, the following initial value problem associated
with a second-order differential equation,

d?u
<5 = du(z) + g(x), 2 >0 (1.29)
1.1 VARIOUS PROBLEMS AS INTEGRAL EQUATIONS 19

u(0) =1 (1.30)
u'(0) =0 (1.31)
reduces to an integral equation in u(z),

WO eS) i eae ifeaeyors ee)


which represents the first main class of integral equations, the Volterra integral
equations.

Boundary value problems


In Section 2.5 we will show, for example, that the following two-point or boundary
value problem,
du
7 Ne AEA) (1-33)
u(a) = 0 (1.34)
u(b) =0 (1.35)
reduces to an integral equation in u(z),
b
u(a) =A f K(2,8)u(e)dé (1.36)
where b
eau a<€<a2<b
K(z,€) = ras (1.37)
SONG St <g<t sh
b-a
This represents the other main class of integral equations, the Fredholm integral
equations.
We may mention again that the integral equation (1.32) for the initial value problem
has a variable limit of integration z, while (1.36) for the boundary value problem
has fixed limits a and b. This points out a main distinction in classifying integral
equations and hence the nature of their different methods of solution. This leads
us to classify integral equations along these two lines in the following Section 1.2,
where, as indicated above, (1.32) and (1.36) are special cases of the two main classes,
Volterra and Fredholm integral equations, respectively.

D. Integral Equations in Higher Dimensions


All our examples of integral equations up till now are done for functions of one
variable, or in one dimension. The following problem of the electric potential on a
unit disc is a representative of integral equations in two dimensions. Examples of
problems in three dimensions are left for Chapter 2. They include the electrified plate
and disc problems in Section 2.6, and Schrédinger equation in the three-dimensional
momentum space in Section 2.7.
20 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Charge density for a potential on a unit disc


The general problem of solving for the potential in a disc is discussed in details
in Section 4.1.4, and where partial differential equations are used along with some
boundary conditions. For example, the potential u(r,@) inside a unit disc with
grounded rim, which is charged with charge density distribution f(r, @), is given as
1 27

u(r, 8) = [ dp i G(r,6; 0,4) f(0, #)dd. (1.38)


What concerns us here is the inverse problem, i.e., to find the charge distribution
f(r,9), which results in a desired potential distribution inside (and outside) the disc.
This means that the above equation (1.38) represents an integral equation in the
two-dimensional (unknown) function f(r, 4).
We may conclude that in this section we presented a good variety of basic applied
problems as integral equations. For the reader who is interested in the more detailed
and realistic, though somewhat involved!, applications of integral equations, we refer
to the specialized references (and the references therein) at the end of the book. In
particular, Kanwal [1971, 2nd ed., 1997] covers applications in fluid dynamics, Wing
[1991] has applications for radiation transport among other applications, Porter and
Stirling [1980] covers spectral theory, while Pogorzelski [1966] covers the detailed
theory and applications.

Exercises 1.1

1. Verify that u(x) = a is a solution of the integral equation (a Laplace transform


of u(zx))

2. (a) Reduce the integral equation (E.1) of the Bernoulli problem in Example 2
to a differential equation. Hint: Differentiate both sides of (E.1) using
the fundamental theorem of calculus: =f i (Side =i F(a)
= a

(b) Solve the resulting differential equation in part (a) for f(x), the solution
of the Bernoulli problem (E.1).
For the following Exercises 3 to 5 verify that the given function u(x) is a
solution to the indicated (Volterra) integral equation.

SAU (ors,

ie [ee @-orunae
zu

Aur)
= 1)— a,
EXERCISES 1.1 21

Hint: Take the factor e* outside the integral, then there is a simple integration,
part of which involves integration by parts.

1
u(x) =1— | sin ctu(t)dt
0
u(z) = 2 —2°/6,

0) = iLsinh(a — t)u(t)dt

For the following Exercises 6 to 10 show whether or not the given function
u(x) is a solution to the indicated (Fredholm) integral equation of that particular
Exercise.

iio) sin ne) 2),

u(x) = : + (n?/4) / K (a,t)u(t)dt, (E.1)


where the kernel A(z, t) is defined (with two branches) on the interval (0, 1)
as

K(z,t) = 257% (E.2)

Hint: In substituting the kernel K (x, t), with its two different branches, in the
integral of (E.1), you should write the integral as the sum of two integrals on
0<t<azandz <t < 1, where the second and the first branches of K (z, t)
in (E.2) are used for these two subintervals, respectively.

u(x) =e* —2— [ a(e** — 1)u(t)dt

. Show that u(x) = (sinaz)/7z is a solution of the integral equation

eae Te Alice
shake Mu(adr = veld) = 46 Bisie
10)

Hint: Use (1.27) with U(X) = pa(A), or consult Example 3.

10. Verify that u(x) = e~*, x > 0, is a solution of the integral equation (Fourier
cosine transform)
1
— = cos Axu(x)da.
1+? [
22 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

i Verify that
ws feo see? ceee-|ar <a
Wee fogs Lal soa
is a solution of the integral equation (Fourier transform)
[o-@)

OA cos \ — sinA) = il e 2 y(x)dz.


0-9)

Hint: Use the identity et** = cosz F isin.

12. (a) Reduce the following integral equation

b(ax)
(aaa yi f(a) +/ K (a, t)u(t)dt, h@ye0

to the following form:

Kia)
where k(x, t) =
h(t)h(a)
Hint: Divide both sides of (E.1) by the function ,\/h(z) on (a, d).
(b) Show that if A(2,t) is symmetric in (E.1), then the resulting modified
kernel k(z, t) in (E.1) is also symmetric.

18: Give the equations that describe the rate of change of the two biological species
living together of (1.13) and (1.14) when they are separate (independent). Hint:
See (2.9)-(2.11).

14. Use the following hints to derive the probability expression p(x) = e~?” for
a neutron to travel a distance x without being absorbed. o is the cross section
(of the nuclei) of the material as it appears to the neutron.
Here we shall assume only that the probability function p(x) satisfies

p(z1 + £2) = p(x1)p(z2). (E.1)

The following steps are aimed at generating a differential equation in p(z)


whose solution is p(x) = e~ 7”.
Let p(x) be the probability that the neutron (particle) moves with no collision,
then consider the particle traveling an extra small distance Az, whereby we
have p(z + Ax) at z + Az. But according to (E.1) we have

p(x + Ax) = p(x)p(Az). (E.2)


EXERCISES 1.1 23

Also, with p(0) = 1, the (decreasing) p(Az) can very well be approximated
by p(Azx) = 1—oAz.
DAs) — lor, o-> 0. (E.3)
Use (E.2) with the above approximation of p(Az) in (E.3) to generate a first-
order differential equation in p(x), then solve it to find p(x) = e~?". Of
course we have the boundary condition p(0) = 1 for determining the arbitrary
constant in the solution of the first order differential equation.
i>: Consider the Fredholm integral equation of the first kind (1.23) in f(E), the
neutron spectrum, where the output g(x) is a measured data.

(a) What is the major difficulty in obtaining an accurate value of f(£).


Hint: Recall that g(a) is measured data for a fixed number values of
zx, and most likely a (numerical) differentiation of this data (with its
inaccuracy due to the measurement) may be needed to find f(F), as it is
often the case for equations of the first kind.
(b) Assume that the cross section y = o(E) is a monotonically increasing
function of the energy. This is to allow writing it’s inverse E = a1 (y)
as a function of y. Show that the integral equation (1.23) will reduce to
finding F'(y) in the following equation,
b

g(x) = / e “YF (y)dy (E.1)


where d

F(y)= Ko" w)a,7 ©) (E.2)


and
OilManin ae On O(a = Os, (E.3)
(c) Give the special case of o(£) that would make the integral (E.1) as the
(typical) Laplace transform of F'(y).
16. Reduce the initial value problem

d?u
AED) +u=0, lige!)

Hee ee aa
to a Volterra integral equation. Hint: See (1.29)—(1.31) and use (1.32).

17. Reduce the boundary value problem


du
ABD) + Au = 0,

u(0) = 0, u(Z) =0
to a Fredholm integral equation. Hint: See (1.33)—(1.35) and use (1.36) and
C37)!
24 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

1.2 CLASSIFICATION OF INTEGRAL EQUATIONS

As we remarked in the preceding section, it seems that most of the integral equations
we have presented fall under two main categories: those with variable limits of
integration, such as (1.7), (1.8), (1.10), (1.12), (1.15), (1.20), and (1.32), and those
with fixed limits of integration, such as (1.17), (1.19), (1.23), (1.24) and (1.36).
These two classes of integral equations are called Volterra® and Fredholm? integral
equations, respectively. As we shall see in Chapter 2, these two classes represent
two different sets of problems and require different methods of solution, which we
present in Chapters 3 and 5, respectively. In the following we present a more detailed
classification of integral equations in order to become familiar with the conditions
and the terminology that soon will be used in the formulation of the problems or the
construction of their respective solutions.
The most general linear integral equation in u(x) can be presented as!°
b(x)
h(xu(z) = fle) +f K(w,€)u(e)ag (1.39)
or in operational notation, similar to what we wrote for (1.1) in (1.5),

h(x)u(x) — f(z) = (Ku)(z), asa (1.39a)


where K defines the above integration operation on the function u in (1.39). The
equation (1.39) is called a Volterra integral equation when b(x) = 2,

na)u(e) = s(o) + [ROSE (1.40)


When h = O it is called a Volterra equation of the first kind,

Foe /" K (a,€)u(€)dé (1.41)


and is called a Volterra equation of the second kind when h(x) = 1,

u(a) = f(a) + Wejaens (1.42)


It is clear that Abel’s equation (1.20) is a Volterra equation of the first kind, whereas
the equation for torsion of a wire (1.15) is a Volterra equation of the second kind.
The initial value problem equation (1.32) is also a Volterra equation of the second
kind, with

fGyeaniee iSeo eee (1.43)


8Volterra’s important work in this area was done in 1884-1896.
*Fredholm’s important contribution was made in 1900-1903.
!ONote that for the rest of the text we will be using other variables of integration, such as ¢ and y, in
addition to €, in (1.39). Also, we may refer to an integral equation only as “equation.”
1.2 CLASSIFICATION OF INTEGRAL EQUATIONS 25

after comparing (1.32) with (1.42).


The integral equation (1.39) is called a Fredholm integral equation when b(x) = b,
a constant,
6

h(au(a) = fle) +fK(e,guleae. (1.44)


It is also called a Fredholm equation of the first and second kind when h(x) = 0 and
h(x) = 1, respectively,
b

SFG) = i K (a, €)u(O)dé, (1.45)


b

ule) = f(a) + / K (a,€)u(é)aé. (1.46)


Examples of Fredholm equations of the first kind are the hanging chain equation
(1.17), the Laplace transform (1.25) in u(x), the Fourier transform (1.26) in u(z),
and the inverse Fourier transform (1.27) in U(A\). Examples of Fredholm equations
of the second kind are the equation of the deflection of a rotating shaft in y(x) (1.19),
which corresponds to f(a) = 0 in (1.46), and (the boundary value problem) equation
(1.36) (with f(a) = 0 in (1.46)).
The Volterra and Fredholm integral equations look very similar except for the limits
b(z) = x and b(x) = b in (1.40) and (1.44), respectively. However, as we remarked
before, they have a different origin and will require different methods of solution.
We illustrate in Section 2.4 and Section 2.5 how the Volterra and Fredholm integral
equations are representations of initial and boundary value problems, respectively.
In the case of either Volterra equation (1.40) or the Fredholm equation (1.44), the
integral equation is termed homogeneous when f(x) = 0,

NOI GWE /Cast ae (1.40h)


b
h(x)u(z) =i) K (a, €)u(€)d€. (1.44h)

An example of a homogeneous Volterra equation is the Bernoulli equation [(E.1) of


Example 2 in the last section] in f(z)

karf(a) =f0 s(6ag (1.47)


while the deflection of a rotating shaft (1.19) in y(z),
l
ya) =? |0 p(€)F(a,)u(6)ae (1.19)
is a homogeneous Fredholm equation.
An integral equation, like other equations, is termed linear in u(x) if when u1(z)
and u2(z) are solutions to its associated homogeneous case in u(x), then their linear
26 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

combination c,u1(x) + C2U2(x) is also a solution to that homogeneous integral


equation. For example, the integral equation (1.15) for the torsion of a wire,

m(t) = +f o(t, T)w (1.15)

is a linear equation in w(t), as we show in the next example, while the following
integral equation

Tene ‘:” Kile, thw (Oat


is nonlinear in u(t). We will illustrate the rather obvious nonlinearity of this integral
equation in part b) of the next example.
We may mention here that this book covers only linear integral equations. This
is with the exception of a brief general discussion of nonlinear integral equations of
the, somewhat, theoretical treatment, in the (optional) Chapter 6.

Example 4 a) Linear Integral Equations.


The integral equation (1.15), of the torsion of a wire, is linear in w(t), since when
w(t) and w2(t) are solutions to its associated homogeneous case (1.15h),

0 = huw(t +f p(t, T)w (1.15h)

we have
0= hay(t+f P(t, T)w, (r)dr (F£.1)

0 = hwe(t y+ feQ(t, T)we(r)dr (E.2)

Hence if we multiply (E.1) by c; and (E.2) by co and add we obtain

0 = A[cjw1(t) + cowe(t)] + fh b(t, T)[c1w1(T) + cowe(r)]dr (E.3)

which says now that w(t) = cjw4(t) + cow2(t) satisfies (1.15h) and hence this linear
combination of w(t) and w(t) is also a solution of the homogeneous equation
(1.15h). We may note here that the most general integral equation (1.39) of this
section,
b(a)

h(x)u(x) = f(a) + fl K (a, €)u(Od£, (1.39)


is linear in u(x) and hence almost all the integral equations in this section and this
book (Chapters 3 to 5) are linear (see Exercises 3 for a few examples of nonlinear
integral equations). Also, in Chapter 6 we have a brief introduction to the basic
theory of linear as well as nonlinear integral equations.
1.2 CLASSIFICATION OF INTEGRAL EQUATIONS 27

b) Nonlinear Integral Equations.


In contrast to the linear integral equation in part a), we will show that the following
(homogeneous) integral equation is nonlinear in u(z),
b
u(x) ik k(a, t)u?(t)dt. (E..4)

Of course, our familiarity with linear systems like linear algebraic equations and
linear differential equations tells us that the presence of the quadratic term u?(t)
inside the above integral results in the equation being nonlinear in u(t). However
to further illustrate the way of proving linearity we will show here that if u;(t) and
u(t) are solutions to (E.4), then their linear combination c; u(t) + c2u2(t) is not a
solution to (E.4). We proceed as we did in part a) by assuming u(t) and w2(t) as
solutions of (E.4) to have

b
uy (t) =) k(a, t)u? (t)dt, (E.5)

b
u2(t) =| k(a, t)u3(t)dt. (E.6)

If we multiply (E.5) by c; and (E.6) by cz and add we have

b
C1 u(t) + cgue(t) = / k(z, t)[e,u?(t) + cgu3(t)]dt (E.7)

where we see clearly that the linear combination c)u;(t) + c2u2(t) [of the two
solutions u;(t) and w(t) of (E.4)] is not a solution to (E.4), as required if it is to be
a linear integral equation. Hence (E.4) is a nonlinear integral equation in u(t).
The function K(x,€) in (1.40) is called the kernel or nucleus of the integral
equation. An integral equation is termed singular if the range of integration is infinite
or the kernel K (x, €) becomes infinite in the range of integration. The Fourier integral
in u(x) of (1.26) is singular, because the range of integration is infinite (—o0, oo),
while Abel’s equation (1.20) is singular because the kernel 1/./y— 7 becomes infinite
in the range of integration (0, y) at 7 = y.
For the unbounded kernel singular integral equations, there are two important
classes that we should differentiate between, since their methods of solution are
completely different. An integral equation with kernel K (x,t) = k(x, t)/|x — t|°,
0 < a < 1, where k(z,t) is bounded, is termed a weakly singular equation, or
that its kernel K(x, t) is with weak singularity. The other class of singular integral
equations is that of strong singularity with kernel K (x,t) = k(x, t)/(a — t), where
k(a, t) is bounded. These are called kernels with strong singularity or with Cauchy
singular kernel.
The generalized Abel integral equation

T(z) dh AESga
= [ Ge Orar= | (1.48)
28 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

is singular since its kernel eae is singular at € = 2, and it is also with weak
singularity corresponding to a = 5.
In the case of K(z,€) = K(x — €) in (1.40), that is, when the kernel depends
on the difference x — €, which is what we will call a difference kernel, this Volterra
equation of the first kind assumes a Laplace type of convolution product, that we
shall discuss in detail in Section 1.4.1. Such type equations yield themselves to the
Laplace transform method of solution that we shall illustrate in Sections 1.4.1 and
3.2.1. The following singular Fredholm equation of the first kind with difference
kernel,

=| K(e-au(eag (1.49)
(oe)

=,9)

assumes the Fourier type of convolution product, which will be discussed in Section
1.4.2, thus suggests a Fourier transform method of solutign. These two examples
may illustrate the different methods used for solving Volterra and Fredholm integral
equations. In Section 1.4 we present the Laplace and Fourier integral transforms and
illustrate their methods of solving integral equations with difference kernels.

Exercises 1.2

1. Classify each of the following integral equations according to whether it is


a Fredholm or a Volterra integral equation. Also determine whether it is
homogeneous, singular, and so on.

(a) u(x) = a —sineg +e*(x—-1)+ [ine — e* (x — t)|u(t)dt


0

(b) u(x saa flG(x, €)u(é)dé,


NS),
OS rae
Sapidie:
=| then, ogee
Iz zr—€

De f(a)ads icesfreee)ait
(@)
i

Coa ey
(0) Z—t
EXERCISES 1.2 29

2. Classify each integral equation of Problems 9 to 11 and 12 in Exercises 1.1


according to whether it is a Fredholm or a Volterra equation. Also determine
whether it is homogeneous, singular, and so on.

3. Show whether or not (the homogeneous parts) of the following integral equa-
tions are linear.
Hint: For the proof of linearity, according to the definition and paralleling
Example 4 a), we consider only the homogeneous version of the integral
equation [i.e., f(z) = 0 in (1.42) or (1.46)].

(a) The integral equation (1.19) in y(a) of the small deflection of a rotating
shaft.
(b) The integral equation in u(x),

u(a)= fle)+ f°K(e,w?(@ag


(c) The most general integral equation (1.39) in u(x) of this section.
(d) The integral equation in s(x),

s(e)= a(0)+ | Kegs Gas


4. Consider the following integral equation representation of what is termed the
“Dirichlet problem," then classify it as an integral equation in the charge density
distribution function p(t).
This problem deals with finding the linear charge density distribution p(t)
along a (more general) smooth closed contour C’ that causes a given, or desired,
potential distribution f (£) in the interior of the closed curve C.. It is represented
as the following integral equation in p(t),

(7) cos 67 =
TOI OF fee T

where ¢ and 7 represent the position vectors of points in the interior and on the
curve C, respectively, 7 = 7 — t is the vector distance between such points,
r = ||r||, 7 is the unit exterior normal vector to C' at 7, and ds is an arc length
increment of C.

5. Determine the class of singularity for the kernel of each of the following integral
equations

(a) The Abel problem in (1.20).


(b) The generalized Abel problem (1.48).
(c) The problems in Exercises 1(c), (d), and (e).
30 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

6. Classify the following integral equation in u(t)


1
[ esi - 2, 0<2<1 (£.1)

7. (a) Show that the kernel

Ie, t= hiet) ina —t)

where k(x, t) is bounded, is with weak singularity.


Hint: Write
|2 — t|? In |x — ¢|
Ogg Beall (£.1)
&

and use L’Hospital rule on |x — ¢|? In |x — t| as t > z to show that it is


bounded and tends to zero ast > a.
(b) Determine the type of singularity of the integral equation,

f(a) == f a(t) n |= at, O<2<1 (E.2)


1 cee

Hint: See part (a).


(c) Show that from the integral equation (E.2) of part (b) we can generate an
integral equation of the first kind with Cauchy kernel.
Hint: Differentiate the equation with respect to x, and allow the inter-
change of differentiation and integration.

8. Consider the integral equation of the second kind in u(x)

f(a) =a df K(e,ju(tar (£.1)

Show that to the linear combination c;u1(x) + c2u2(ax) of the solutions to


(E.1),
(ie) = (a) = df Ke, t)u;(t)dt, p= (E.2)

there corresponds the nonhomogeneous term c; f(x) + C2 f(z) in (E.1).


Hint: Substitute uw; (¢), u2(t) as solutions corresponding to f(x) and f(z) in
(E.1), then multiply the resulting first and second equations (for u; and uz) by
c; and C2, respectively, and add.

9. (a) Show that if {¢;(t)}?_, is a set of solutions to the (linear) homogeneous


Fredholm integral equation,
b
O(a) = af K (a, t)p(t)dt (E.1)
1.3. SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS 31

corresponding to ,, i.e.,
b

ile) / K (cr,t)di(t)at (B.2)


then the linear combination
nm

S- cidi(t) (E.3)
i=1
of such solutions is also a solution to (E.1).
Hint: Multiply both sides of (E.2) by c;, sum fromi = 1 to n, and invoke
(E.2) for the integral to give ae

1.3. SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS

In this section we will derive and illustrate very basic identities that are needed to
facilitate the analysis of reducing an important class of initial value problems and
boundary value problems to Volterra and Fredholm integral equations, respectively,
and vice versa. The latter topics are presented in Sections 2.4 and 2.5, respectively.
This includes a basic identity that reduces the repeated integrations, necessary for
integrating higher order derivatives, to a single integral. The other identity is the gen-
eralized Leibnitz rule for differentiating integrals (with variable limits of integration),
which is needed for reducing an integral equation to a differential equation. The rest
of the section is devoted to few very basic definitions.
In Chapter 2 we present initial and boundary value problems associated with linear
differential equations and, usually, homogeneous auxiliary conditions, to show how
they can be represented by Volterra and Fredholm integral equations, respectively. In
doing so we need to perform a number of integrations. For example, in the second-
order differential equation of the form d?y/dz? = F(a), we can integrate twice to
obtain

v= f4 Fed +e
d zx

dz
ze
ae) = / F(t)dtdé + cx + co. (1.50)
a a

Note how we had to change the variable of integration (€ to ¢ in the inner integral) to
keep z, the independent variable, as the limit of the last integration.

1.3.1 Multiple Integrals Reduced to Single Integrals

The double integral (1.50) can be reduced to a single integral:

i,[ Feosatas= Le — t)F (t)dt. (1:51)


32 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

This can be proved by two methods. The first is integrating by parts, letting dv = dé
and u(é) = f° F(t)d¢ in (1.51), and knowing that du/d€ = F(€).

[ [ro
(t)dtdé = [Fwd - [ ere@ag
aa i F(t)dt — 0 — i éF (E)dé (1.51)
= [w-orwa = |“(a — t)F(t)dt
where we replaced € by t in the last integral since € and t ne only dummy variables
of integration (with the same limits a to 2).
The second method involves exchanging the two integrals. We will also illustrate
this method since it is often used. The domain of the double integral (1.50) is shown
in Figure 1.8, where the integration over ¢ first, then €, is indicated by the solid
arrows. When the integration is interchanged (i.e., when we integrate with respect to
€ then t), as indicated by the dashed arrows, we obtain

Fig. 1.8 Domains for performing the integration in (1.51) with respect to ¢ first (solid lines)
or with respect to € first (dashed lines).

[Ff Poaeae -[ Fe Lf as|dt


(1.51)
= [ie — t)F(t)dt
1.3 SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS 33

after evaluating the simple integral / dot,


t
Example 5
Reduce the differential equation

a = Au(a) (E.1)
to an integral equation.
We let d?y/dz? = F(z) and integrate once with respect to x to obtain

awa / jena lear (B.2)


and if we integrate again as in (1.50), we have

x € iy
y(a) = | i P(bjdtdé + o.2+0 = | (x —t)F(t)dt+cqr+c2 =

ea ‘r(a — €)F(€)dE+ e123 + c2 (E.3)

after using (1.51). But from (E.1),

d*y
F(x) = 73 = dy(2) (£4)
which we can substitute in (E.3) to obtain the integral equation

y(a) =» |Cee es (B.5)


The identity (1.51) serves for second-order differential equations, but for nth-order
derivatives we need the following generalization of (1.51) for reducing the resulting
n repeated integrations to a single integral,

[ jai lhe FC Pade = mop f @-e Feae,


(1.52)
which can be proved in the same way (see Exercise 6).

1.3.2 Generalized Leibnitz Formula

Once we obtain an integral equation for the initial or boundary value problem, it
becomes natural to inquire whether this integral equation indeed satisfies the original
34 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

differential equation. For differentiating an integral equation (1.39) with variable


limit of integration b(x), we need the following generalized Leibnitz formula

qd a2) B(2) oF
ae ope Feewdy = f Bg or wey

+F (2, Bla) E — F(a, a2)L. (1.53)


-
It is valid if both F(a, y) and its partial derivative oe are continuous functions of
x and y, and if both fe and a are continuous. with such conditions, one should
be very cautious when it comes to differentiating integral equations with singular
kernels. For example, the above conditions are clearly not satisfied in the case of
Abel’s integral equation,

shea) i* ul) dé, Omar I 1.48)


o (—§)% (
since here F(x,€) = u(€)/(a — €)® is unbounded at x = €.
The rule in (1.53) is a generalization of the fundamental theorem of integral
calculus,

azi Fedye Ey. (1.54)


To prove (1.53), we let

A(z)
#(a,8,2) =f P(a,y)dy
a(x)
(1.55)
and 3

5p(au) = Fay) (1.56)


where then

$(0, 8,2) je
a,),0) My By Tay1 =FB) ~Flea).
= Ta NS = ’ at ’ c (1.87)
So we will use the chain rule on ¢(a, 3; x) as function of three variables a(x), B(x)
and z,

dp _ oe Og dB we
Og da
dx OB dx dadz
(1.58)
and allow partial differentiation with respect to z inside the integral of (1.55), giving
us
do 9 B(a) B(t) OF
Ox is ores F(a, y)dy =f. Dey or yay (1.59)
1.3. SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS 35

since Of/Ox here means keeping a and ( as constants. Also, if we use ¢(a, 3,2) =
f(x, 8) — f(x, a) from (1.57), we have

a6 8 _ Of(a,B) _ AF (w,a)
dB dB Je) Op
and

Oo O Of (x,

So if we combine (1.59), (1.60), and (1.61) in (1.58), we have

aap
a=
ue
ie Dg y)dy + F(z, ee — F(a, Oh
d
(1.62)

which is (1.53).

Example 6
Verify that the solution of the Volterra integral equation

u(x)= af (x — €)d€ + ce,2 + C2 (£.1)

of Example 5 is the solution of the differential equation


d2
= CAL (E.2)

To do this we differentiate y(z) in (E.1) once to obtain

Ha,
dz
|“HO Re (B.3)
after using (1.53) on the integral in (E.1) with a(x) = a, B(x) = x, and K(x, €) =
x — €. If we now differentiate (E.3) using (1.53) again, or its special case (1.54), we
obtain (E.2):
2
= = Ay(x): (E.2)

Another similarly important case where we will need the generalized Leibnitz rule
(1.53) is when we reduce a Fredholm integral equation to its equivalent boundary
value problem associated with a differential equation, which we hope is a familiar one
to solve. This is attained by differentiating the integral equation, as we did in Example
6, until we reduce it to a differential equation and then seek the boundary conditions
needed from the integral equation. For example, the homogeneous Fredholm integral
equation

See [ K (a,t)u(t)dt (E.4)


36 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

with the kernel err ee


z(1— 46
eee fees fe reel cy
= ; Fr x IRS

can be reduced to the boundary value problem in u,

me + Au =0 (E£.6)

u(0) = 0 (E.7)
Au) 0 (E.8)
which we will leave as an exercise. [See Exercise 5 of this section or Example 6 of
Section 2.5, equations (E.1)-(E.11).]
In looking at the final result of the generalized Leibnitz formula (1.53) of the last
section Hey eS
d
Sead z *) OF
a fa F(a,y)dy = ie —Da (x, d y)dy

dB da

we can basically interpret it in the direction of a rule that resulted in allowing us to enter
the differentiation operation inside the integral on the left side of (1.53), as a partial
differentiation, as seen in the integration term on the right side of (1.53). This, we may
term now, as some type of interchange of the two basic operations of differentiation
and integration. In mathematical analysis, and especially its applications, one faces
many situations of such interchange of many very basic mathematical operations. A
summary and illustration of the main theorems that allow such an interchange are
found in Jerri [1992, pp. 99-104, pp. 377-382.]

1.3.3 Convergence of Integrals and Basic Definitions

As is expected, when we deal with improper integrals, we must assure the convergence
(and, sometimes a certain type) for the individual integrals. Very familiar situations
are when we deal with the Laplace transform of f(x) on (0, 00),

dN) Peat Oa Si i! = f(x)\dz (1.63)

and the Fourier transform of f(x) on (—oo, co).

FO)=Fff} = if e F(x)dx (1.64)


since they are defined as improper integrals. In the case of the Laplace transform,
especially, there is a very reasonable and applicable class of functions f(a) which
guarantees the existence of its Laplace transform as the improper integrals in (1.63).
This class of functions is described as (i) sectionally continuous on each bounded
1.3. SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS 37

interval 0 < x < R, and (ii) of exponential order as x — 00, ie., |f (x)| does not
grow faster than Me®*, where M and aq are constants (or that there exist positive
numbers M and A such that |f(x)| < Me®* forall x > A.)

Definition 1 f(x) is called sectionally (or piecewise) continuous on an interval


a < x < bif this interval can be subdivided by a finite number of points a = x9 <
Tuts 7 <2, = anton. subintervals 2) < 2 <A 7;,1 rl, 273; 5-", 2),

(i) the function f(x) is continuous on each of the subintervals: 2; < © < 2%,
(aoe,
et and

(ii) f(x) approaches a finite limit as x approaches the limits of the subinterval, 7;_1
and z;, from the interior.

Figure 1.9 illustrates a function f(x) which is sectionally continuous on the interval
(a, b); that is, it is continuous on each of the open subintervals (a, 21), (21, Z2), and
(x2, b). Note, for example, that the left- and right-hand limits f(22—) and f(z2+),
as © approaches 2, are not equal, and we say that f(x) has a jump discontinuity at
Z2 of magnitude J = f(z2+) — f(xe2-).

a X Xo c b X
Fig. 1.9 A sectionally continuous function f(x) on (a, 6) with two jump discontinuities at
xz, and x2. From Jerri [1992], courtesy of Marcel Dekker Inc.

For the theory of Fourier series and integrals, we shall need a more restricted class
of functions than the above piecewise continuous functions, namely the piecewise
smooth functions.

Definition 2 A piecewise continuous function in an interval a < x < b is termed


piecewise (or sectionally) smooth if, in addition to being piecewise continuous,
(1) its first derivative df /dz is continuous on each of the subintervals 7;_1 < Z < i,
tm 1g2,35-c,apd
38 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

(2) df /dx approaches a finite limit as x approaches the limits of the subinterval
z;_1 and 2x; from the interior; i.e., there exists f'(a;-1+), f’(zi—) for each
(Zi21, 42), = L200 aes

For example, the function in Figure 1.9 is piecewise continuous on (a, 5) but it is
not piecewise smooth because of condition (2) above in the subinterval (c, b), where
its derivative df /dx does not approach a limit f'(c+) as x approaches the end point
c from the right. For completeness, we may mention that the function is sectionally
smooth on (a, c), and it is smooth on (a, 21).

Definition 3 f(x) is termed of exponential growth, or of exponential order, as


x —» oo if there exists a constant @ such that e °*| f(z)| is bounded for all x greater
than a finite number A. In other words, |f(x)| < Me°* for z > A, with M, a
constants, and we say f(z) is O(e®%*). For example, the function f(x) = e3” sin x
is of exponential order O(e3”) since with M = 1, |f(x)| = |e3” sinz| < e?* for all
x. However, functions like f(x) = e2” are not of exponential order. Polynomial
functions like f(x) = x” are clearly of exponential order with a > 0.
In the following Example 7, we will illustrate how easy it is to prove the existence
of the Laplace transform F'(s) in (1.63) for the class of functions f(x), which are
sectionally continuous and of exponential order. This Example will constitute the
proof of Theorem 1 in Section 1.4.1.

Example 7 The Existence of Laplace Transform!!


If f(x) is
(i) sectionally continuous on the interval 0 < 2 < A, and

(ii) of exponential order e°”, that is, |f(x)| < Me°%* fora > A,
then the Laplace transform F'(s) of f(x) in (1.63) exists for s > a.

Proof
foe) A love)
Eis) i, ea f(a)da= / celta lds i Cia h(a) dre (E.1)

The first integral on the finite interval (0, A) clearly converges since e~ ** f (x)
is bounded for the sectionally continuous f(x). For the convergence of the second
integral, we will use the result of comparison of improper integrals, along with the
exponential growth of f(x), to show that it converges provided that s > a.

[ee sleyae| x [te steiae <fe 1f(@)lae


S ares A (E.2)
< M | e S-Vtdy < oo for s>a
A

'! Optional
1.3. SOME IMPORTANT IDENTITIES AND BASIC DEFINITIONS 39

where in the last integral we used |f(x)| < Me®*. The last improper integral clearly
converges for s > a, which concludes our proof.
In this fashion, we have shown not only that the Laplace transform exists by
proving that afsee ** f(x)dx < oo, for s > a but also that e~ ** f(x) is absolutely
integrable, i.e., [5° |e~** f(x) |da < 00, for s > a.
We must also note that the above two conditions (i) and (ii), that f(2) be sectionally
continuous on (0, A) and of exponential growth as x — oo, are sufficient but not
necessary. An example of f(x) not sectionally continuous on (0, 00) is f(x) = oF
which is infinite as x — 0. This means that the first of the above two sufficient
. . . . . 7

conditions (i) is not satisfied. However it can be shown, with the aid of using the
' ; i 1
gamma function, that the Laplace transform of 1/x2 does exist as £L {= }=a
GPP Ss
s > 0. [See the definition of the gamma function in (1.74) and the Laplace transform
pair for v = —t in (1.79), and Exercise 1(b) (and 4(b)) of Section 1.4.]

Example 8 An Important Necessary Condition of Laplace Transform


A very important necessary condition for the existence of the Laplace transform
F(s), of the above class of functions of Example 7, is that F'(s) must vanish as s
approaches infinity. This can be easily shown when we write

lim F(s) = lim Ca Ufates c. (E£.1)


SCO s—0o 0

Then if we “formally" allow the interchange of the limit process with the integration,
an operation that is valid!” for f(t) in the class of the functions in Example 7,
|f(t)| < Me**, we have

lim |F(s)| < M | lim e~@-)*dt = 0, s>a.


s—0o 0 S—0o

since, clearly, lims_,o, e~ 8-©! = 0 for s > a.


We have this example to illustrate one of the difficulties with solving integral
equations of the first kind. Here we can look at finding the inverse of the Laplace
transform f(t) = £~'{F(s)} of F(s) as attempting to solve the following integral
equation of the first kind in f(t).
Co

Fis)= i er Fic jal, Seen


0
So, to search for a reasonable solution f(t) such as that of the (large) class of
functions of Example 7, we must be very careful (or fussy) about what is assigned
above as F'(s). Well, F'\(s) must at least vanish as s approaches infinity to satisfy the
necessary condition shown in the above Example 8. So if we are given F's) = sth

!2The condition for allowing the above interchange of the two operations is very close to Lebesgue
convergence theorem. See Jerri [1992, p. 99, Theorem 2.10].
40 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

we know that there exists no solution f(t) for the above integral equation in the
class of functions described in Example 7. In other words, there is no such function
f(t) in the domain of the Laplace transform operator that is mapped to the given
F(s) = ***! So, if we write the general integral equation of the first kind in u(t)

g(x) = / K(a, tu(t)dt,


we must be aware not to have g(x) assigned arbitrarily. This means that the theory
of such equations must be checked thoroughly, for such and other difficulties, before
going after a solution. In Section 5.4 we present some essentials of this subject and
its several, possibly chronic, difficulties for the Fredholm integral equations of the
first kind. In Section 3.2 we only present possible methods of solution for Volterra
integral equations of the first kind. ;

Exercises 1.3

1. (a) Verify that


1 x

Wa) = ;f sin b(a — t)f(t)dt (E£.1)


0
is a solution of the following initial value problem

d?y
aS + by = (2), (E.2)

y(0) = y'(0) =0. (E.3)


Hint: See Example 6.
(b) Reduce the initial value problem (E.2) and (E.3) to a Volterra integral
equation. Note that in (E.1) we have the solution to the problem (E.2)
and (E.3), and not the integral equation representation of it that we are
seeking here.
Hint: See Example 5 then invoke the initial conditions of (E.3) to deter-
mine the arbitrary constants c; and co.
(c) Verify your answer in part (b) by reducing it to the initial value problem
(E.2) and (E.3). Hint: The use of the generalized Leibnitz rule (1.53) is
very helpful in differentiating the integral of the integral equation as the
answer of part (b).

2. (a) Verify that the Volterra integral equation

y(x) =cosx —x—-1- [e — t)y(t)dt (B.1)


0
EXERCISES 1.3 41

reduces to the initial value problem

d?y
ro = COS LZ, (E.2)

yO)=0, y'(0)=—1. (E.3)


Hint: See Example 6.
(b) Reduce the initial value problem in (E.2) and (E.3) to the Volterra integral
equation in (E.1). Hint: See Example 5 and problem 1(b).

3. (a) Use F(x) = d?u/dz? to reduce the differential equation

Citiew u(x) + g(z),


dx? = g ’ x >0 (E.1)
e

of an initial value problem to an integral equation.


Hint: See Example 5.
(b) Verify your answer in part (a) by showing that u(x) in the integral equation
satisfies the differential equation (E. 1).

4. Reduce the integral equation


Co

ea) | ety (t)dt


0
to a differential equation.
Hint: write

and then differentiate twice.

5. (a) Differentiate the Fredholm integral equation


1
We | K (a, t)u(t)dt, (£.1)
0

Al{va(l=o)) 0s aiscessl
eee ee Ota aa
(E.2)
to reduce it to a differential equation. Hint: Write the integral equation
with its explicit kernel (E.2) as

1
u(c) =a fa—ajeu(oat+ af ens ae
a \(le ve) i.tu(t)dt + rz | (1 — t)u(t)dt
x
42 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

and differentiate, realizing that each term in the right side of (E.3) is
a product of two functions of x, and use (1.53) for differentiating the
integrals.

(b) Use the form (E.3) to find the two boundary conditions for u(x) at z = 0
and x = las u(0) = Oand u(1) = 0.

(c) Solve the resulting boundary value problem associated with the differential
equation of part (a) and the boundary conditions in part (b).
Hint: You have the two linearly independent solutions sin Vda and
cos Vz, for the differential equation, to use in a linear combination to
satisfy the two boundary conditions u(0) = 0, u(1) = 0. Here you will
end up with an “infinite” set of solutions (called the “eigenfunctions”
or “characteristic” functions of the boundary value problem). These
solutions are associated with the discrete values A,,,n = 1,2,---,nofthe
parameter A in (E.1), (which are called the eigenvalues or “characteristic”
values of the boundary value problem). (See also Example 6 of Section
Dede)

6. Prove the result (1.52) for n = 3 of the triple integration.

Hint: consider the triple integral f” fs Cs F(y)dydtdé, and let G(t) =


th F(y)dy, use (1.51) for the double integral ie (i G(t)dtdé, then use the
same integration by parts (with respect to €) on the resulting double integral
Jo SE(@ -— ©F (y)dydé =f"(x— O[f* F(y)dyldé with u(€) = f* F(y)dy
and du(€) = (x — €)d€.

1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS

In this section we present a brief summary of some important properties of Laplace


and Fourier transforms. These transforms are very useful for solving certain initial and
boundary value problems associated with differential equations and partial differential
equations with constant coefficients. Also, as we shall soon show, the Laplace and
Fourier transforms are used for solving Volterra and (singular) Fredholm integral
equations with difference kernels, respectively. These transforms may also be applied
to integro-differential equations. For detailed treatment of these and other transforms,
see Jerri [1992] and Sneddon [1972].
Other singular Fredholm integral equations, as results of finding the inverse of
integral transforms such as the Hilbert and Mellin transforms, will be studied and
illustrated with the help of the following analysis of Fourier and Laplace transforms.
As was mentioned in the preface, the treatment here is elementary as for the first
undergraduate course. However, for the interest of the reader who wants to go to
some reasonable mathematical rigor, we have in this edition spelled out the basic
results clearly and carefully as theorems.
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 43

1.4.1. The Laplace Transform

The Laplace transform of the function f(a) defined on (0, 00) is

F(s)=L{f}= i Cn ay (eax: (1.63)

We have already defined in Section 1.3 the (usual) class of functions f(x) for which
the above improper Laplace integral exists. This is being the class of sectionally
continuous (Definition 2) and of exponential order (Definition 4). We then proved
such existence in Example 7, and which we shall repeat only the statement here as
Theorem | on the existence of Laplace transform for such class of functions.

Theorem 7 The Existence of the Laplace Transform


In the following statement we will use f(t) instead of f(x), since, as it shall
become very clear shortly, we reserve x for the real part of the complex number
z= + ty for F(z) (the extension of F'(s) to F(z)). If f(t) is (as in Example 7)

(i) sectionally continuous on the interval 0 < t < A and

(ii) of exponential order e®, i-e., |f(t)| < Me* fort > A,

then, the Laplace transform F'(s) of f(t) in (1.63) exists for s > a. The proof is
done in details in Example 7. It is clear that the equation defining Laplace transform
in (1.61) represents a Fredholm integral equation of the first kind in f (x) with kernel
K(s,t) = e~*, which is singular since the integral is with an infinite limit. To speak
about the inverse ofthe Laplace transform in (1.63), that is f(t) = C~!{F(s)}, inour
present notion of integral equations, is to embark on the attempt to solve the singular
integral equation of the first kind (1.63) in f(t). As is the case for solving most
singular integral equations of this type, the tools of complex analysis are employed.
For our purpose in this book, where we don’t require formal preparation in functions
of complex variables, we will be satisfied with the following clear statement of the
result. We will, however, follow this by a more appropriate formula at the level of
this book, but, possibly due to its impracticability as shown in (1.67), it is not much
referred to in the discussion of the Laplace transform in almost all textbooks. Such
formula (1.67) uses only differentiation, and without resorting to complex variables.
The solution f(t) to the singular integral equation (1.63) is given as the inverse
Laplace transform of F's), which we shall state the conditions for its existence in
Theorem 2,

y+iL
IG) = Let} = lim ie e"'F(a)do, y> Real {z;}, (1.65)
= Loo

where {z;} are the singularities of F'(z). The above integral is a complex line integral
of F(z), z = x + iy, taken along a vertical line in the complex plane at x = y where
z= o0= y+ iy, and where the location z = 7¥ is to the right of all singularities
{z;} of F(z). For example the Laplace transform of f(t) = et, 0 < t < oo is the
44 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

real valued function F'(s) = +: For the inversion formula (1.65) we extend F'(s)
analytically to the complex plane as F(z) = -+5 = oe = See note that
it has only one singularity at z; = 2, thus we take the vertical complex line integral
to the right of the real part of z; which is z} = 2, y > 2. The derivation of the
Laplace transform inversion formula (1.65) involves relating the Laplace transform
to the Fourier transform of causal functions (f(t) = 0, t < 0), where the definition
of both transforms is extended to complex variables. In this introductory book we
don’t assume preparation in complex variables, and in our next very brief discussion,
and a statement of an important theorem, we will only use something like the above
basic elements of complex numbers.
In Theorem | we stated conditions for the existence of the Laplace transform
(1.63),
FACS ee fe ex UfHdt Life (1.63)

as an improper integral, which is of utmost importance to the theory of Laplace


transform as an integral transform. What concerns us here in the topic of integral
equations is to see the Laplace transform (1.63) as a singular integral equation in
f(t), 0 < t < oo. Thus, before we mention the solution to such an equation as
in the Laplace inversion formula (1.65), we should assert the existence of such a
solution. This is covered in the following Theorem 2 on the existence of the inverse
Laplace transform in (1.65) as the solution to the singular integral equation (1.63)
in f(t). Note that we are using here t instead of x as the variable for f(t) in (1.65)
because we need to use z as the real part of the complex variable z = x + iy, since
the theorem needs the extension of the definition of the Laplace transform F'(s) to
complex numbers F'(z) = F(x + iy) i.e. the variable x now stands for the s in F’(s)
of (1.63).

Theorem 2. The Existence of the Inverse Laplace Transform (as a solution to the
singular integral equation (1.63)
If F(z), z = x + ty is the Laplace transform of any function f(t) of exponential
order O(e*°'), where f(t) and f’(t) are sectionally continuous in each interval
0 <t < T, then the (inversion) integral of F'(z) in (1.65) along any line x = y,
where y > Zo, exists and represents f(t),

ome Hi) efit mates 0: (1.66)


At any point to, where f(t) is discontinuous, the inversion integral represents the
mean value 3[f(to+) + f(to—)]; when t = 0 it has the value 5f(0+), and when
t < Oit has the zero value. We should note here that the conditions of this theorem
on f(t) as a solution of the singular integral equation

F(s) = ie e*' f(t)dt (1.63)

are not so stringent, since in the applications, for example, f(t) may be the dis-
placement of mechanical vibrations or electrical current, and we can easily impose
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 45

“sectional continuity" and “of exponential order" on the displacement f(t) and its
derivative g
This version of the theorem for the existence of the solution f(t) of the singular
integral equation (1.63) is what we considered the appropriate one for this book from
among other theorems, whose statements involve complex analysis. However for
our purpose of solving the integral equation (1.63) in f(t), we would like to have the
conditions to be put on the given function F'(s), and not its analytic extension F'(z).
This is exactly the advantage of the other, not well quoted in texts, form of solution
to (1.63) as we shall present in (1.67). What remains about the solution of (1.63) is
its uniqueness. As it is the case for all integral transforms, their inverse is not unique
in the sense that two solutions f;(t) and f2(t) of (1.63) could differ at any finite set
of points t;, t2,---,tn, or even at an infinite set of points t,, t2,---, and still give the
same F'(s) (see Exercise 3(b)). For the proof of Theorem 2, and the other theorems,
see Churchill [1972].

Another Formula for the Inverse Laplace Transform


A not much seen formula (in textbooks) for the inverse Laplace transform f(t) =
L-!{F(s)} is the following,”

r= mm (EEO)EO} om
We may remark here that though this formula is appealing on first sight, it is very
demanding, the possible reason that it is scarcely mentioned in texts compared to
(1.65). It is obvious that this formula puts the burden on the given function F’(s),
which is more suitable for us as we look for an inverse Laplace transform f(t) as the
solution of the integral equation of the first kind

iG) ie e * f(t)dt.

First it requires F'(s) to have very high order derivatives in eA and for large
arguments & as k becomes very large. This may also illustrate another difficulty for
finding the solution of integral equations of the first kind. What makes this problem
worse, is that we often have the known output F’(s) as a finite number of data, along
with these points inaccuracy of their measurements. So, the derivative of this data
with its inaccuracy will have its own error, and for higher derivatives such errors will
be compounded to render the result useless!
This, as we planned it, should illustrate again the “inherited" difficulties of the
integral equations of the first kind. So it is not the problem of formulas (1.65) or
(1.67), where they “innocently" require the best of (the input) functions F’(s): as
infinitely differentiable (analytic) functions as seen in (1.67), or what is hidden in
(1.65) as the requirement of F(z) being analytic except for an “isolated” finite or
infinite number of points in the complex plane.

'3From Wing [1991, p. 8], which is attributed to Post and Widder [Widder, 1946].
46 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Properties, Pairs of the Laplace Transform

The most important property of the Laplace transform is known for solving dif-
ferential equations, where it transforms the differential operation ae on f(z) to
an algebraic operation sF(s) — f(0) on its Laplace transform F'(s). This can be
shown next by using (1.63), performing one integration by parts and assuming that
limzyo0 € ** f(x) = 0 [i-e., f(x) is with exponential growth e%” as x — oo and
s >a):

CO

+s ie ey of (x)da
0 0
= —f(0) + sF(s) = sF(s) — f(0). (1.68)
(We may note that f(0) here is f(0+) since f(t) is defined on (0, 00), and we have
lim f(t) = f(0).) A more precise statement of this “formal” result is given as the
~—

following Theorem 3.

Theorem 3 The Laplace Transform of Derivatives


Let f(a) be a real function which is

(i) continuous for x > 0 and of exponential order e“*, and let

(ii) df /dx be sectionally (piecewise) continuous in every finite closed interval 0 <
xz < A. Then L{df /dx} exists for s > a and (1.68) results,

cio} = sF(s) — f(0), sa: (1.68)

By the same method it can be shown that

L (52 = s-F(s) —s7(0)— fF(0) (1.69)

which we shall leave as an exercise [See Exercise 6(a)]. These results can be extended
to higher derivatives, and as seen in (1.68) and (1.69) we must supply the proper initial
conditions on f(z).
The above results, starting with Theorem 3, show the advantage of Laplace
transform in reducing differential equations (with constant coefficients) in f(z),
0 < x < o, and its given appropriate initial conditions to an algebraic equation
in the Laplace transform F’(s). These are of general interest in methods of applied
mathematics, but what concerns us here, when dealing with integral equations, should
be the result of Laplace transforming an integral of the unknown function. A result
in this direction is

Igi sea} will?ie acraraly, (1.70)


0 s
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 47

This pair complements our very important Laplace transform pair of the derivative
ie as given in Theorem 3,

af
ae iz} = sF(s) — f(0). (1.68)

The result in (1.70) can be proved easily by letting g(z = fo f €)d€, with its
Laplace transform G(s), and clearly g(0)= 0. From A bees theorem of
calculus we have a == f(x), and if we use (1.68) above for £L {#2
2} we have

F(s)
=£{f(@)} = £42}=sG(s)~ (0) = 8G),
G(s) = ae s>0

In terms of integral equations, these two results (1.68) and (1.70) should prove useful
when dealing with some integro-differential equations, where the sought unknown
function f(a) is operated on by integration as well as differentiation.
A more general result concerning the Laplace transform of integral operations
is that of the Laplace convolution theorem. This is an extremely useful tool to the
important class of Volterra integral equations with difference kernel. But before
stating the convolution theorem as Theorem 4, we should point out the difficulty
facing the Laplace transform (or other similar integral transforms) method when
we have to deal with variable coefficients differential (or integral) equations. An
important result in this direction 1s

£{2" f(a)}
= (-1)" Fs), (1.71)
which simply says, that it may be a wees to work with the Laplace trans-
form when dealing with variable coefficient terms in the differential equation to be
transformed. This is so, since, for n, a nonnegative integer, a polynomial coefficient
of order n in the original equation will result in an nth order differential equation
in the Laplace transform space. This result (1.71) can be derived easily when we
differentiate the Laplace integral in (1.63)

ad” Gee co

ate) vik ee hy
ee —ST. d

00 (ani2))
ole ——e"**d
r= [ay seer
as SAN —$@ dy,

giving the desired result

Lin” f(a)} = (-1)" SF), (1.71)


after allowing the interchange of differentiation with integration.
48 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Again, and before introducing the important convolution theorem of Laplace trans-
form in (1.84) as Theorem 4, it is instructive at this point to have a few illustrations.
In solving initial value problems associated with differential or integral equations
we may need to Laplace-transform some familiar functions, for example,

—(s—a)x 1
fe} = [ ee de = e

SEO) 6
= =:
Sah
Ser02 (1273)

Also, after a problem is transformed to an algebraic equation in F’\(s) and then solved
for F(s), we need to transform Fs) back to f(x), the solution of the original
problem, which is called the inverse Laplace transform of F'(s) and is denoted by
f(x) = L7'{F(s)}. As was discussed earlier then presented in equation (1.65),
the direct Laplace transform inversion formula involves complex integration, a topic
which is not assumed as prerequisite for the level of thisstext. Thus, on this level
of preparation, and as it is done in all elementary differential equations books where
the Laplace transform is used, we will depend on the tabulated values of the Laplace
transform (Table 1.1) for the few illustrations in this book and refer the interested
reader to books of extensive tables'* for Laplace transform. For example, if the
solution of the Laplace-transformed problem is F'(s) = 1/(s — 5), then from (1.73),
the inverse Laplace transform of 1/(s — 5) is f(x) = e°*, which is the solution of
the original problem.
In Table 1.1, [(v) is the gamma function, which is defined as

T(v) =| a’ e-* dg, Def Oe) eae (1.74)


0
and where it can be shown that [(v + 1) = vI'(v), [(n + 1) = n! (for n positive
integer), [(1) = 0! = 1, and (5) = \/m (see Exercise 3(b)). Also, the error
function erf(x) is defined as

erf(z) = ~sife§ dé (1.75)


Tv

and the error function complementary erfc(«) is defined as

erfe(x) = 1 — erf(z) = = ihee€ dé (1.76)


where
co

[ e © dé = wilh,
0 Z
Jo(x) in Table 1.1 is a Bessel function of the first kind of order 0, which is a special
case of J;,(x) that is one of the two solutions of the Bessel differential equation,

@u d
a shi =+ (2? — n?)u =0 (1.77)

'4See Roberts and Kaufman [1966], Ditkin and Prudnikov [1965], Erdelyi et al. [1954].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 49

Table 1.1 Laplace Transform Pairs

c{ste)} =F) = fo ee sflu)ae


Pairs

Operations

Sacha)

9. fi(x)
+ fo(z)
10. f(ax)
n—1
ar 8" F(s) La 3 Ome) (O)
"det k=0
O O
ae y)

sree (x) (-1)" Fs)


a ; d.

were
sf(a)

16. iBepi(en EVs CT Sf


0
50 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

which is bounded at x = 0. The series representation of J, (x) is

= Grid ae
——_——__—__— 1.78
In(a) =D kl(n + k)!
k=0
ce
The first Laplace transform pair in Table 1.1,

pea)
is very important and can be proved easily with the aid of (1.74). The pair

L{e~™ f(xz)} = F(s +4) (1.80)

is easily proved since

~ ihee (8+)? F(x)dz = F(s + a)


0

after using the definition of the Laplace transform (1.63). The two Laplace transform
pairs
a
{sin ax} = ——
L{si MEE (1.81)
1.81

8
(e{cos ax} re
See 1.82
(1.82)

can be proved by direct integration if we use the Euler identities,


(0) AT tax —tax
Sian oe 200 eC OS ie a, (1.83)

Example 9 Find the inverse Laplace transform of

1 1
s?+9 s(s+1)

In Table 1.1 we note that the Laplace transform and its inverse are linear operations,

Li fiz) + fo(x)} = Fi(s) + Fa(s). (E.2)


Hence

fe) = £-(F()}= 01 {s?+9


s+ woot}
s(s+1)
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 51

From (1.81) it is clear that

1 il
Lt 4 —— 4 = =i
lass} 3 sin3z E
(E.4)

but it is not so clear how to find

aires):
However, if we write the partial fraction of 1/s(s + 1),
i 1 1
s(s +1) Sa a eae oe
we Can again use the linearity property of the inverse Laplace transform to write

1 1 1 1
Cts ———__ tale i Cae em
{s(s +1) \ 8s ¢e+1 S

=i 1 —£
—£ =l-e (E.6)

after consulting (1.79) with vy= 0 and (1.80) with a = 1. Hence, from (E.4) and
(E.6), the final solution to (E.3) is

f(z) era teers) (E.7)


x
= 3—sin3gzx+1l—e-
sindz

The (Laplace) Convolution Product


The most important Laplace transform pair used for solving Volterra integral
equation with difference kernel is

c {/wae s)fal6)de Fe). (1.84)


The integral P

/ file — foldé= (fi * fo) (2) (1.85)


is called the Laplace convolution product of the two functions f;(x) and f(x) and
is denoted by f; * fa. The result in (1.84) is the convolution theorem for the Laplace
transform.
Even though the * in (1.85) represents different convolution multiplication for the
different integral transforms, we shall nevertheless use the same symbol, as most
books do, without a fear of confusion.
The following Example 10 illustrates the use of (1.84) for solving a Volterra
integral equation with difference kernel. This Example is followed by a precise
statement of the result in (1.84) that constitutes the convolution theorem for Laplace
transform as Theorem 4.
52 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Example 10 Use the Laplace transform to find the solution of the following Volterra
integral equation of the first kind with difference kernel K (2 —t)=e7-*

sine = [ e” tu (t)dt. (E.1)


0
Before we Laplace-transform both sides of (E.1), we recognize that the right-
hand side is in the Laplace convolution product form (1.84) with f;(z) = e” and
fo(x) = u(x). Now we Laplace-transform both sides of (E.1) to obtain

Li sin: aa
1
a ye of: :
a—t
u(oat} (8.2)

mae ewe aia) tae U(s)


s—1 A

after using (1.81) for :


Eon¢
s* +1?
the convolution theorem (1.84), (1.73) for L{e”} = 1/(s—1), and letting C{u(x)} =
U(s). Hence, from (E.2), the Laplace transform of (E.1) is

1 U(s)
= ; E.3
62-41 wig a1 CRS)

From this we find

s—l S 1
U(s) = ——- = =—- - =—
(s) Sao liay 1S- hee

rae edeees Starr


Fie edauie
aged eal ETravbermases)
hain aise eS
after consulting (1.82) and (1.81) for the Laplace transform of cosz and sinz,
respectively.

The Convolution Theorem for Laplace Transform


Now, we shall elaborate more on the important convolution theorem for Laplace
transform as was briefly introduced in (1.84) and (1.85) since it is used for solving
the type of Volterra integral equations whose integral is in the form of the Laplace
convolution product as illustrated in the above Example 10. First we introduce again
a bit more refined definition of the convolution product f; * fo of (1.85) associated
with the Laplace transform, which is essential for the statement of the convolution
theorem.

Definition 4 Let f;(x) and f2(x) be causal (vanish identically for z < 0) and
defined on (0, oo); then their (Laplace) convolution product is defined as

(fi * fo)(a) = AiG INAOne (1.85)


1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 53

It is easy to show that this convolution product is commutative, that is,

fi * fe sale fay
(he file) = [fle eyfoleas (1.86)
bsikfilm) fole — n)dn = (fo * fa)(2)
after letting 2 — € = n, and where we used the fact that f(z) is a causal function
where fo(x — 7) =O forn > z.
We are now in a position to state more precisely the convolution theorem for the
Laplace transform as the result in (1.84) and (1.85).

Theorem 4 The Convolution Theorem for Laplace Transform


If F\(s) and F2(s) are, respectively, the Laplace transform of f(x) and fo(z),
which are in the class of functions we considered in Theorem 1, by being sectionally
continuous on each interval 0 < x < A, and of exponential order e®” as x — oo
(i.e. |f(x)| < Me°* for z > B). Then, the Laplace transform of the convolution
product (f; * f2)(x) exists as F)(s)F2(s) fors > aie.,

L{(fi * fo)(2)} = Fi(s)Fa(s), 8 > a. (1.84)


Example 11 Laplace Transform Inverse — Use of the Convolution Theorem
Find ,
i | eel i

ize +1) \;
We have here a product of two Laplace transform F\(s) = 1/s and Fo(s) =
1/(s? + 1), where according to (1.73) and (1.81) their corresponding inverse Laplace
transform are f(x) = 1 and f2(x) = sin. Hence, according to (1.84) the result is
the convolution product of | and sin z, that is,

oi {5} Z iN1 sin(a — €)d€ = cos(z — €) =1-cosz zr

0
(E.1)
—=1 ‘1
en Fieeee - = :
iG tsorep} 1 —cosz

1.4.2 Fourier Transforms

The Fourier exponential transform of the function f(a) defined on (—oo, 00) is
Co

ON ie / en (a) de. (1.87)


— Co

It is clear that (1.87) represents a (singular) Fredholm integral equation of the first
kind in f(a) with kernel K (A, 2) = e~*, and so we should be interested in solving
54 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

for f(x) as the inverse Fourier transform f(z) = F~'{F}. Fortunately, and in
contrast with the Laplace transform, the Fourier exponential transform has a simple
and symmetric formula for its inverse,

{a_j=F 1h = ie er F(X). (1.88)

We should note that there are a number of variations,!*> with minor modifications,
for the definition of the Fourier transform and its inverse. This usually depends on
the field of the text or research reference, where such notation, usually, is the most
convenient for that particular subject. For example in physics books, we see the
Fourier transform and its inverse written as

F(p) e 'P® f(x)dz, * (1.89)


ee
=f.

f(2)= “ieif_ = F(p)dp (1.90)


where z is the spatial variable, and p is the variable in the momentum (Fourier) space.
Indeed we shall need in Section 2.7 to extend the above definition (1.89) and (1.90)
to functions of three variables in the spatial (vector) variable r° = i+ jy + kz (or
— ix + jx2 + kx3) for f (7), and the momentum (or frequency) vector variable
X= 0A, + fro + kg for F()). This will be done for the mathematical modeling of
Schrodinger equation of quantum physics in three dimensions as a Fredholm integral
equation in the wave function V(X) of the three-dimensional momentum space. The
complete details are left for Section 2.7. Here, we note that as we rely on a source or
reference for Schrddinger equation which is in physics texts, it is most convenient for
us to quote the results in that particular notation, which is based on (1.89) and (1.90)
for physics books. So in this notation, the Fourier transform of the wave function
w(7) in three dimensional space is F (X) in the three dimensional momentum space,
which is

aa I i ie f(7)dx,dr2d273, (1.91)
2

al eee e F(X)d\ydodd3 (1.92)

where7 = i271 + 729 + kx and \ = id S55.N5 + RX3.


Now, we may even see either of (1.87) or (1.88) as singular Fredholm integral
equation in f(x) for (1.87) and in F(A) for (1.88). Since the two integrals of (1.87)
and (1.88) are symmetric in f(x) and F(X), we may have exact similar theorems
for the existence of the Fourier transform F(A) of (1.87) with condition on f(a) and
for the existence of f(x) of (1.88) with condition on F(A). We will state the first
theorem on the existence of Fourier transform F(A) in (1.87).

'> For such varied notations in books and references of different fields, see Jerri [1992, pp. 129, 156].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 55

Theorem 5 Existence of the Fourier Transform of Absolutely Integrable


Functions
Co

If f(x) in (1.87) is absolutely integrable, i-e., ib |f(x)|dx < ov, its Fourier
transform F(X) exists and, moreover, it is continuous.
According to this theorem and the symmetry between the Fourier transform and
its inverse, one may think of the same type theorem for the existence of f(z) in (1.88)
as the solution to the singular Fredholm integral equation (1.87) in f(x). What is
needed here, of course, is that F(A) in (1.88) is absolutely integrable, thus Theorem 5
is satisfied for the convergence of the integral of (1.88) for its f(z) to exist. However,
it turned out that, in general, this is not the case for F(A) of the absolutely integrable
f(x), since the Fourier transform F'(A), of the absolutely integrable function f(z), is
not necessarily absolutely integrable. To give an example, we consider the function

peed mall)
fla) =4 0, <0
which is absolutely integrable on (—0oo, 00)
CO (oe)

/ |f(x) |dax =| Cmte oll


—0©o 0

with Fourier transform

F(A) = / e ®e-2dr = [ e Ut+A)zr dy os : e Ate ae 1


0 0 1+2A

after knowing that e~*** = cos \x — isin Az is bounded at = oo. F(X) here is a
complex-valued function whose absolute value |F'(A)| = 4/ F(A) F(A) where F'(\)
is the complex conjugate of F(A), which is obtained from F(A) by replacing 7 by

/ ae | 1 1
F(A)| = F(A)F(A) = (+ Day) = Maan

This F(A) is not absolutely integrable, since

fag OUD )|dA = (fe SS


[rears Sains)
A=—0o
= 00 — (—0o)
= co

So, if we look at the symmetric form of the inverse Fourier transform (1.88), we
cannot be sure of the existence (of f(x)) of this integral for such an input F(A),
which is the output of (1.87) as the Fourier transform of an absolutely integrable
function f(z). The reason is that F(X) here is, in general, not necessarily absolutely
integrable to guarantee the integral in (1.88) to exist, according to Theorem 5, and
define f(x). But, in practice, we would like to Fourier-transform back and forth
from f(z) to F(A) in (1.87) and then in a very symmetric way from F(A) to f(<)
56 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

via (1.88). Indeed, when we speak of signals, f(z) is the representation in the time
space while F(A) is its representation in the frequency, or Fourier, space. In quantum
mechanics f(z) is in the coordinate space, while F(A) is the representation in the
momentum (A) space.
To be able to utilize the Fourier transform (1.87) and its inverse (1.88) as convergent
integrals, we must restrict our class of transformed functions f(x) to more than just
absolutely integrable. One of the simplest versions of such restrictions, which satisfies
our needs here and which we shall adopt in this book, is that f (a) must be sectionally
smooth in addition to being absolutely integrable on (—0co, co). This is the statement
of the Fourier integral Theorem 6 that we shall state next after recalling Definition 2
of sectionally (or piecewise) smooth functions.

Definition 2 A piecewise continuous function in an interval a < x < 6 is termed


piecewise (or sectionally) smooth, if in addition to being piecewise continuous,
i) its first derivative df /dx is continuous on each of the subintervals x;_1 < © <
d,
Dale Oe ie, IL) ue approaches a finite limit as x approaches the limits of
the subinterval x;_, and x; from the interior; i.e., there exists f'(z;-1+), f’(xi—)
LOmeach(a age. nt ben ie

Theorem 6 The Fourier Integral Theorem (the Fourier Transform Inversion For-
mula)
If f(x) is a piecewise smooth function on every finite interval of the real line
(—co, co) and is absolutely integrable on (—oo, oo), then
1 co

f(z) —peal |lips der re” 4(Qe-ae


(1.93)
sf af ser—ag
converges to S(f(x+) + f(x—)]. It converges to f (x) at x where f(x) is continuous.
The detailed proof is found (among other references) in the author’s book on integral
and discrete transforms [Jerri, 1992].
We should note that the above double integral in (1.93) is equivalent to the integral

fim = [anf f@cosMe-gde = sift) + se-)] (1.94)


1 A fore) i!

which is what we usually see in most statements of the Fourier integral formula.
Such an equivalence is shown easily when we use the Euler identity e(*—-§) =
cos A(x — €) + asin A(x — €) in the inner integral of (1.93), and recognize the zero
contribution of the (odd function) sin A(x — €) to the outer integral.
The above analysis including the two Theorems 5 and 6 should make clear, we
hope, that it is one thing to have conditions on f(a) to guarantee the convergence
of its Fourier integral to represent F(A) in (1.87) and another to have conditions on
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 57

f(x) or F(A) for the existence of the solution f(x) of the singular Fredholm integral
equation of the first kind (1.87) in f(x). The essence here is that we are given the
general class of functions for which the given function F'(A) of (1.87) and its sought
solution f(z) belong. The question that still remains is how the form of the solution
to (1.87) was constructed or derived as another Fourier integral in (1.88) or in the
Fourier integral formula (1.93). The rigorous proof for the construction of (1.88) or
(1.93) is somewhat long, and can be found with necessary details in the author’s book
on the subject of transforms (see also the reference at the end of Chapter 2 therein).
There are, however, other methods that if presented in a fast or simple way, would
need either more of a mathematical background like generalized functions or lack the
rigor in justifying a number of assumed limiting processes.
The familiar method found in most books on the undergraduate level, does not
need sophisticated concepts, but it does slide over some very important justifications
of passing to the limits. Such justifications, if done very properly, may even make
such proof longer than the above mentioned detailed one.

Fourier Sine and Cosine Transforms


Next, we will turn to the two very special cases of Fourier transforms, namely
those for f(x) being odd or even functions, which will result in the Fourier sine
and cosine transforms, respectively. We present these transforms to illustrate again,
that their inverses represent solutions of very familiar singular Fredholm integral
equations of the first kind in f(z). We will state the existence of their solutions (the
inverse transform) as simple Corollaries | and 2 to the Fourier integral Theorem 6.
The fact that the Fourier exponential transform of an odd (even) function will reduce
to Fourier sine (cosine) transform is left for an exercise.
We define the Fourier sine integral for f (x) on (0, 00) as

ir ilsf(x) sin Avda (1.95)

and we see it as a singular Fredholm integral equation of the first kind with kernel
K(x, X) = sin Az. The solution f(x) to (1.95) is the inverse Fourier sine transform

HS ig ve 2[- F.C) sin wd) (1.96)


7 Jo

whose existence and form can be justified as a special case of the Fourier integral
Theorem 6 as its following Corollary 1. The proof, is easy to establish from Theorem
6.

Corollary 1 (to Theorem 6) The Inverse Fourier Sine Transform


If f(x) is an absolutely integrable function on (0,00), and is piecewise smooth
on every finite interval of (0, 00), then
CO
1 id eas ‘
5lf(z+) + f(x—)] = =| sin And) | f (€) sin AEd€. (1.97)
58 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Next, we define the Fourier cosine integral of f(x) on (0, 00) as

PA a ee i HOE (1.98)
0
which is another singular Fredholm integral equation of the first kind in f(x). As
was done for the Fourier sine integral (1.95) the existence and the form of the solution
f (x) to (1.98) (as the inverse Fourier cosine transform),

Cd eae fifatF(X) cos \xdX (1.99)


7 JO
can be established easily with the aid of Theorem 6 as its following second corollary.

Corollary 2 (to Theorem 6) The Inverse Fourier Cosine Transform


If f(a) is absolutely integrable on (0, 00), and is piecewise smooth on every finite
interval of (0, oo), then

1 Danie Se
glf (zt) + f(x—)] = = [ cos Avdd | f (€) cos AEdE. (1.100)
0
We remark here that f(z) in (1.88) represents the solution of the Fredholm integral
equation (1.87) in f(x). Although (1.88) offers a very direct way of evaluating the
inverse Fourier transform, the integration may still be an involved one. Hence, for the
few illustrations that we have here, we will depend on the tabulated values of Fourier
transform presented in Table 1.2 and refer the interested reader to more extensive
tables.'° Some of these tabulated pairs can be obtained by simple integration. For
example:

F{e'* f(x)} = iP en? fae 7? dx


— CO
love)

= / e A-a)e t(2)dx = F(\ —a)


—0oo
which is the 8th entry in Table 1.2.

Example 12
The Fourier transform of the function

f(0) ee= {fomee aise


se
(B.1)
1
BON eee ‘i e ? f(x)dr = fo e Ady

Acie’ Sly lite (et — e) = 2A sin Aa eae)


—iA |_, iA are,

'6See Ditkin and Prudnikov [1965], Jerri [1992].


1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 59

Table 1.2 Fourier Transform Pairs

F(A)

2Asin \a
r

Wh 8 Saye
~eit/46 id“ /4a
a
2(1 cd Ne ae
|X <a
Poe
On 1)
0 |A| >1

us el b>0
b

Operations

a” f (x,y)
60 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

which is the first entry in Table 1.2.


We may remark again that the inversion (1.88) of the Fourier transform (1.87)
may not be easily accessible from the existing Fourier transform tables, or that the
given transform F'(A) of (1.87) is a set of data. In such cases we have to resort to the
numerical approximation of the integral in (1.88). Fortunately, such computations
have been greatly reduced with the establishment of the very fast numerical algorithm
called the fast Fourier transform (FFT).!7

Example 13
Find the Fourier exponential transform of

_ sinax
g(x) =. xt
(B.1)
Here we note that from Example 10, we have

FAG) = Mou
: z
eee
sin aX
(E.2)
OF zea r

In this problem we have f(a) = (sin ax) /z, but from the symmetry of the Fourier
transform (1.87) and its inverse (1.88) we have (as indicated in the last entry of Table
1.2)

F{F(a)} = 2nf(—A). (E.3)


Hence, if we use

F(z) = sin ax
and f(x)=4 9°
z
OF
WIS
|A| <a
Alea

in (E.3) we obtain

r sin ax ese tT, |Al<a EBA


BR a = Vee ite. (24)
We note here that if we consider the precise statement of convergence in (1.94), the
; 7
Fourier transform here converges to 2 at |\| = a, where there is a jump discontinuity
with the result in (E.4) becoming a

a, \Al<aa@
Ff : i Gen risa (E.5)
s1n Q@Zx

Tv
>? |A| =a

'7See Brigham [1974, 1988], Briggs and Henson [1995] and Jerri [1992].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 61

The tabulated Fourier transforms are used for evaluating many basic improper inte-
grals.
As we mentioned earlier, the most important property of the Fourier transform,
for solving (singular) Fredholm integral equations with a difference kernel, is the
convolution theorem, which states that

FY[pte e)falde} = FOF) (1.101)


where Fi (A) = F (fi) and F(A) = F(f2). It is clear that the Fourier convolution
product of f(x) and fo(z)

fi fola) = ffTO OEE (1.102)


has limits of integration which are suitable for the (singular) Fredholm integral
equation with difference kernel (1.49)

fle) = fHaeeysneae (1.49)


and that f;(a — €) stands for K (x — €), the difference kernel.
We shall state the convolution theorem for the Fourier exponential transform (1.87)
with precise conditions for its validity, as Theorem 7.
It is easy to show that the convolution product of (1.102) is commutative, distribu-
tive over addition, and associative, as stated in the following lemma.

Lemma For fi (x), fo(x), and f3(x) in the class of functions that are bounded,
absolutely integrable on (—oo, 00), and sectionally continuous on each bounded
interval,
(1) (fi * fo)(x) = (fa * fi) (2) (1.103)
(2) fi(x) * [fo(x) + fa(x)] = (fi * fo)(x) + (fi * fs) (2) (1.104)
(3) fi * (fo * fa) = (fi * fo) * fs (1.105)
We leave the (formal) proof as a simple exercise.

Theorem 7 The Fourier Convolution Theorem


If f:(z) and f2(x) are in the class of functions that are bounded, absolutely
integrable on (—oo, oo), and sectionally continuous over each bounded interval-then
the Fourier transform of the convolution product exists, and its Fourier transform is
the product Fy (A) F(A), as in (1.101):

F (fi * fo)(a) = FLA) F2Q) (1.101)


where F(A) and F2(A) are the Fourier transforms of f; (x) and f2(x), respectively.
62 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Example 14 Inverse Fourier Transform — Application of the Convolution


Theorem
Find the inverse Fourier transform of

UOC = F(Ner** = F{u(a,t)},u(z,0) = f(x), -co <a@<co. (E.1)


2
e-2 /4t
We note that this function is a product of F(A) = F{f} and et =F \
VArt
t > 0(see entry 5 in Table 1.2). So, according to the result (1.101) of the convolution
Theorem 7, its inverse Fourier transform u(z, t) is the convolution product of f(z)
—x?/4t
(Z

V4rt i
LY

5 enn [At
u(c, en) 4 = tree) | |
ge re ee
=(@=3)) (E.2)

u(z, t) == i ee eet > 0.

Parseval’s Equality
Another useful property of the Fourier transform is Parseval’s equality,

51 | AORWA= | A@h@ar (1.106)


where F is the complex conjugate of F,, which as we stated earlier is obtained from
F(A) by replacing i by —7. The important special case of (1.106) for f; = fo = f,
1 CO CO

— |F(A)/?dA 2) |f (x) |?da (1.107)


an a, 0} 46S)

can be derived from the convolution theorem (1.101), which we shall leave for
Exercise 10 with a clear supporting hint.

Singular Fredholm Integral Equations—Fourier Convolution Product Type


As we had done with the convolution theorem for the Laplace transform (1.84)
for solving Volterra integral equations with difference kernels, we will illustrate here
the use of the above Fourier convolution theorem (1.101) to solve singular Fredholm
integral equations with difference kernels, and where the integral, as required by
(1.101) is defined on (—oo, 00). To satisfy the theory we shall consider the given
functions in the equation as well as the solution to satisfy the existence of the Fourier
transform and its inverse as spelled out in Theorems 5 and 6. We should also add
here that we must check first a theorem for the existence of the solution of the given
integral equation, which we shall discuss in Chapter 5 (as done in Theorems 1-4
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 63

in Section 5.1 and Theorem 6 in Section 5.3 for Fredholm equations of the second
kind, and in Theorem 5 of Section 5.2 and Theorem 7 of Section 5.4 for Fredholm
equations of the first kind). So, the illustration here may be considered as a formal
one in the absence of the above required checks for justifying our steps.
Consider the following singular Fredholm integral equation of the first kind in
u(x) with the given function f(a) and the difference kernel k(x — €),

f(e) = /HO asiCe


00)
(1.108)
where the integral is in the Fourier convolution product form (1.102), So if we let
F(X), K(A) and U(A) be the Fourier transforms of f(x), k(x) and u(x) respec-
tively, then Fourier-transform both sides of the integral equation (1.108), using the
convolution theorem (1.101) on the integral we have an algebraic equation in U(A),

(1.109)

provided that F(A) does not vanish. What remains is to find the solution u(z) as the
inverse Fourier transform of U(X), which we will illustrate in the next example.

Example 15 Fourier Transform for Solving a Singular Fredholm Integral


Equation
Consider the singular Fredholm integral equation of the second kind in u(z),

u(x) =e! taf” e|®~§lu(é)dé. (E.1)


[e-@)

If we take the Fourier transform of both sides of this equation, using the convolution
theorem on the integral in (E. 1) and realizing in this special problem that f (a) = e~ I2|
and the kernel k(x) = e~!*!, where from the fourth entry in Table 1.2, we have

{er}= ree 2
the Fourier transform of (E.1) becomes

2 2)
———— ——-~ U(r
OD ere arareprie OV)
and
De
U(A) ee, AT 12 0. E.3
64 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

The denominator vanishes for real values of AXwhen pp > ‘ and some real value of
A, so for uw < 5 we can take the inverse Fourier transform of U(A) in (E.3) to have
* 1 co Jerr

Me) em =e]! @=lneeca


(E.4)
Syeny
Si 77
after using again the fourth entry in Table 1.2 for e ll with a = /1 — 2p.
For another illustration of using the Fourier transform for solving homogeneous
(singular) Fredholm integral equations (with difference kernel!), we present the next
Example 16.

Example 16 Singular Homogeneous Fredholm Equations!®


In this example we shall consider two homogeneous singular Fredholm equations
to further illustrate the use of the Fourier transform, and to point out some main
features that differentiate the singular from the nonsingular integral equations.
a) Consider the first equation in u(z)
co

Ue | ems u(t )de. (E£.1)


Xe, o)

We note that the integral is in the form of the Fourier convolution product, moreover
its kernel is the same as that of the equation of the second kind (E.1) in Example 15.
So it may be appealing to try the convolution theorem in the same exact way like
what we did for Example 15, and we have

U(A) = 4 PEA U(A), (E.2)

This equation (E.2) gives a nontrivial solution U(A) only if the parameter pz is
restricted such that w = BS, What remains now is to find the function, (or
. 2 . . .

feeNe
functions) u(x) that satisfies the integral equation (E.1) with zp = Sew
the knowledge of the basic properties of Fourier transform, one may arrive at such
solutions, sometimes, by a kind of inspection. We observe in (E.1) that our solution
2
u(t) is an input that results in an output u(x) which is within the pape constant

factor. If we also recall that the kernel e~!*~*l is shifted by t, where according to the
Fourier pair from Table 1.2, we have

F{ f(z —20)} =e7* F(X). (E.3)

'8Optional
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 65

Then if we nominate u(t) = e~™* for a solution, the integral on the right of (E.1)
will represent no more than the Fourier transform of e~!*~*!, which according to
(E.3) should give us eee R(X) a Thus (E.1) results in
1+ 2
2
e —ixr (E.4)
WO 3
as solution u(x) = e~** to (E.1) provided that the parameter ps (which we shall call
2
an eigenvalue later) is restricted to uw = . We should note in this example of
homogeneous singular Fredholm integral equation (E.1) that for every uw > 5 (1:7,
infinity of jz values) we have solution e~**” to the singular equation (E.1).
b) Here we consider another homogeneous singular Fredholm integral equation in
u(x),
Us) = uf u(t) sin xtdt (E.5)

where we note that the integral on the right is a Fourier sine integral. So the equation
(E.5) puts a Fourier sine integral U(a) as in (1.95)

Oia) i sin xtu(t)dt (E.6)


0
and its inverse u(t) as in (1.96)

Ubi -[ sin xtU (x)dx (E.7)


0
within a multiple constant, i.e., (E.5) can be obtained from (E.6) if we let U(x) = Ea
But if this is used in (E.7) we have
») (oe)

u(t) = — | sin ctu(x)dx (E.8)


TH Jo
an integral equation in the same form as our original equation (E.5), and they will
& 2 2
be compatible if their coefficients are equal, i.e., 4.= —, [i = Or f= 4).
Tp T 7
Hence the integral equation (E.6) will have nontrivial solutions for these two values
of 1. To find the possible solutions corresponding to these values may become a long
search, and of course a check of the detailed tables of Fourier transforms!® should be
the best route for a first search. It turned out that for any positive constant value of a,
the following functions

z
yi (x) = ze
TW —ar
Pape tee (E.9)

19See [Ditkin and Prudnikov [1965] and Erdelyi et al. [1954].


66 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

and
Seaton x
y2(x) = 5° ees aa (peal) (E.10)

[2 2
are solutions of the integral equation (E.5) for 4 = = and pz = =e , respec-
2
tively. This should illustrate that for one value of the parameter pp = 41 = a for
example, the integral equation (E.5) has infinity of solutions y; (x) in (E.9) for all the
positive values of the constant a involved in yj (z).
The above-illustrated two features of i) infinite values of the parameter yz in the
singular equation (E. 1) and ii) the infinity of solutions corresponding to one value of
2 ; :
the parameter . = ,/ — for the other singular equation (E.5) do represent important
T
characteristics of singular Fredholm integral equations.

Fourier Transform of Derivatives

For completeness we present the following result (1.110), which represents one
of the most important properties of the Fourier transforms for solving differential
equations (with, usually, constant coefficients). This is the transforming of the
2
derivative —> (or higher derivatives) in the x-space to the algebraic —)\? F()) in
the Fourier A—space. Of course, as we mentioned for the Laplace transform, the
combination of such results like (1.110) for algebraizing differential operators, and
the convolution theorem (1.101), for algebraizing the convolution integral operator,
can be used in the attempt of solving integro-differential equations on (—oo, 00). We
will next state and prove this result for Fourier-transforming the first derivative —
2
as Theorem 8. The cases of higher derivatives like that of (1.110) for “s (or for
bi
d
— will follow easily as a corollary to this theorem.
As in the case of the Laplace transform, the Fourier transform also algebraizes dif-
ferential operators with constant coefficients, for example, F {d? f /dx?} = —\? F())
with the conditions of lim f(x) =Oand lim f'(x) = 0. This can be shown
L—+=ECO xr—> +00
after two repeated integrations by parts and by assuming that lim f (ey = Wand
T—=+rCo

I Seseglll
Re sme CAG
das 7 Jag de, pe Of i . ~ ~ira of
Ficah= fe a2 = ° =| tafe din’

=0+%A [e-* f(a) |" + id ir es


fa yan)
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 67

r {ot = —)*F()) (1.110)


dx?
A more precise statement in this direction is the following Theorem 8.

Theorem 8 The Fourier Transform of #


v
Let f(x) be continuous and absolutely integrable on (—o0o, 00), and let - be
by
piecewise smooth and absolutely integrable on (—oo, co); then

df
Fie} = iF (A): (1.111)
Now we state the corresponding result to (1.111) in the Fourier space,

{-iaf(a)} are
Ue leU =dF= (1.112)
or
dF
Fi= {5
ee
\
ae
ix f(x). (1.113)

Just as was done formally in (1.72) for the similar Laplace transform pair (1.71), this
result is obtained by differentiating the Fourier integral

MO ag he afede,
5dF =pd fore Jae =ftelKe f(a)dr (1.114)
esyeeaeee = F{-irf(2)}.
This step of interchanging differentiation, with respect to A, with integration is
justified if the middle integral above resulting from such interchange is uniformly
convergent to allow such an operation.

Finite(-Limit) Fourier Sine and Cosine Transforms—Fourier Coefficients


The Laplace and Fourier transforms, we introduced up till now, were used on
functions defined on an infinite interval (0, 00) or (—00, 00). We will now introduce
the finite sine and cosine transforms of functions defined on the finite interval (0, 7)
and with the real variable limited to integer values.
The finite sine transform of f(x), defined on (0, 7), is

F,(n) =| sinna f(x)dz, n integer (1.115)


0
whereas its inverse is the Fourier sine series of f(z),0 <a <1,

(2) = 3 Rln)sinne
9 co

T
(1.116)
68 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

The finite cosine transform of f(x), defined on (0, 7) is


Tv

F(n)= / cosnaf(x)dz, n integer (17)


0
and its inverse is the Fourier cosine series of f(x),0< @ < 7,

jeez ~F.(0) af=)|]Ts) 3 Pap econ (1.118)


a

We note here that the finite sine transform in (1.115) is related to the Fourier sine
coefficients b,, of the Fourier sine series in (1.116) as F(n) = ae. while the finite
cosine transform (1.117) is related to the Fourier cosine coefficients Qn, of the Fourier
cosine series in (1.118), as F.(n) = Fan: Tiel acon
Since it is our main purpose here to bring examples of integral equations, we can
also look at the finite-limit Fourier sine and cosine transforms (1.115), (1.117) as
nonsingular Fredholm integral equations of the first kind in f(a), as compared to the
singular equations of the (infinite-limit) Fourier sine and cosine transforms in (1.95),
(1.98) with their infinite limit of integration on (0,00). The other point noticed is
that the nonsingular Fredholm equation of the first kind of the finite sine transform
(1.115), for example, is solvable for f(x), 0 < a < 7, in (1.116) as an infinite series
in terms of the discrete values of the given function F’,(n), while the singular integral
equation of the Fourier sine transform (1.95) is solvable (for f(z), 0 < x < oo) as
an infinite integral in terms of the continuous values of the given function F(A) in
(1.96). The same analogy can be drawn for the finite cosine transform (1.118) versus
the infinite-limit one in (1.98). The singular property of the integral equation and
the continuum of A values are important characteristics of such (not so easy to treat)
equations.
The following finite exponential Fourier transform F'(n) can also be defined in a
similar way, where its inverse f(a) is expressed as an infinite (Fourier) series, and
similar conclusions regarding the discreteness of \ are reached.

F(n) = ik e '® F(a) dz, (1.119)

fey — = = F(n)e’"”, SR UT: (1.120)

We should mention that the above finite transforms are also used in an operational
way, similar to the Laplace and Fourier transforms, for algebraizing derivatives
defined on the finite domain. For example the Fourier sine and cosine transform
algebraize even order derivatives,

2
[ sin nat de = n{f(0) — (-1)"f(7)} —n?F,(n), (L121)
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 69

[f°
p cosna Ta
La Er= (-1)"f'(r) mw) —— f f'(0 ssi
(0) —n*Fi(n). 122
(P4122)

These can be derived by simple twice integration by parts. More importantly, that
we will need their above algebraization properties in Section 4.1 for modeling the
potential distribution in a
charged square plate as Fredholm integral equation in two
dimensions (see Exercise 24 of Section 4.1).
2
We may note that the finite Fourier sine transform of a in (1.121) requires the
values of the function f (0) and f(z) at the two ends of the interval (0, 7), while the
finite cosine transform requires the derivatives f'(0) and f(z) at the end points of
(Ona):
The finite exponential transform also has the same operational property for al-
gebraizing all order derivatives of f(x) on the finite interval of its definition, for
example

‘a ening Fa, = (-1)"{f(a) — f(—a)} + inF(n) (12123)


ae dx

which can be accomplished easily by using one integration by parts.


To summarize, in this section we showed how integral transforms especially the
Laplace and Fourier transforms help in solving differential equations (with constant
coefficients), and Volterra and some singular Fredholm integral equations with special
(difference) kernels. This is done where each of the differential equations as well as
the integral equations in f(x) become simple algebraic equations in their transforms
F(). All our illustrations and applications emphasized this fact. However, and at the
same time, a particular important integral transform like the Fourier transform owes
its simple applicability to the theory of integral equations, that solved for its inverse
f(z) = F~'{F(A)} (or the return to the z-space from the transform -space). This
is a problem of singular Fredholm integral equation of the first kind in f(z),

vO) a e >? f(x)dzx (1.87)


—Co

and whose solution (luckily) was derived to be

f(z) eo ~ f e'* F(\)dz.


Mes (1.88)

So for any integral transform no matter how effective it is, at the end we have to solve
for its integral equation that is associated with finding its inverse. For an integral
transform to be useful, there must be a balance between the difficulty in finding
its inverse and its special properties in efficiently solving differential, integral or
integro-differential equations among others.
70 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

1.4.3 Other Transforms

Here”? we introduce a number of other familiar integral (or finite) transforms, for the
main purpose of pointing out to that finding the inverse of any of these transforms,
is a very clear example of solving an, often, singular Fredholm integral equation
of the first kind in the transformed function f(a). Of course, this is besides their
use, in parallel to Laplace and Fourier transforms as shown in (1.127) for the case
of Hankel transform, and for facilitating the mathematical modeling and solution of
some integral equations. We will not cover the latter here, but we will revisit it briefly
in Section 2.6.2 to illustrate its use in the integral representation of the electrified disc
problem. The detailed modeling and solution of this problem is done in Example |
of Appendix A.

The Hankel Transform ‘


The Hankel transform F;,(A) of f(r) is defined as

1
EOS TS Ga he finlAr) is(rar (1.124)

where J,,(x) is the Bessel function of the first kind of order n, which is (the bounded
solution at z = 0) of the two solutions of the Bessel differential equation,

A d ; :
Ea ae ie + (2? — n”)u =0, (1.125)

j i 09 (—1)*(2)n+?s
n(z) = » ek! Geille (1.126)
(=0
In a similar operational way to that of the Laplace and Fourier transforms, it can be
shown, using (a rather lengthy!) integration by parts”! (with appropriate boundary
conditions), that the Hankel transform algebraizes the Bessel differential equation
with variable coefficients (or its following variable coefficient operator), that is,
.
Hn 4§f + bts }= —\*F,,(X). (1.127)
dr? sordr_ r?

With such an advantage, it remains to transform back F;,(A) in the transform space
to the original function f(r) in the physical r-space. This would mean solving the
singular integral equation (1.124) in f(r) with the Bessel function kernel, which,
fortunately, has been established as the following inverse Hankel transform,

nACe etree net he AIn (Ar) Fn(A)dA (1.128)

20 Optional
*! See Jerri [1992, pp. 16-18, Example 1.6].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 71

where we note a perfect symmetry between the Hankel transform (1.124) and its
inverse (1.128). We may mention that the derivation of the inverse Hankel transform is
done with the help of the Fourier transform of functions (er a in two dimensions
with circular symmetry, i.e., f(z,y) = f(\/z2+y?) = f(r). This leads us to
the simple extension of the Fourier exponential transform of pees of multiple
variables as we did in (1.91) and (1.92) for the Fourier transform in three dimensions.
The double Fourier transform of f(z, y)

Fi {f(z,y)} = FO, A2) = iL Te e Ait —t2y F(a y)dady (1.129)

can be seen as an example of higher (two) dimensional integral equation in f(z, y).
Its inverse, as the solution to such a two-dimensional (singular) integral equation of
the first kind, is

hs i a) ee :
For, r2)} = f(x,y) = =
Ae tA) EY
eet *\29 HN, , Ap) d\rdAo
(1.130)
which can be obtained from the Fourier integral formula (1.93) with a rather simple
extension to two dimensions. Of course, the higher dimensional Fourier transforms
would be used for solving partial differential equations by, usually, algebraizing their
derivatives with respect to the spatial variables.
Next we present the Hilbert and Mellin transforms, for the main purpose of
showing that finding their inverses is a matter of solving singular Fredholm equations
of the first kind.

The Hilbert Transform


The Hilbert transform of the function f(x) defined on (—o00, oo) is

FO)=H{f}=—P
= =—pP f
1
=e
a (ada
(1.131)
erst

where P refers to the Cauchy principal value of the improper integral

SSCA
i relim oho)
[o:———dz + Saha)
LO al (12132)
eh MES e404 A —xX ear
A— co

in case there is a value \e(—A, A) for which the integrand in (1.131) above becomes
infinite. When both limits on the right side of (1.132) exist as € > 0, the integral is
convergent and we drop the P to use f instead of P J.
The inverse of this Hilbert transform F'(A) is

#(e) =H FO)} = <p f eT


eee
(1.133)
72 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

We may mention that this inverse Hilbert transform, as a solution to the singular
Fredholm integral equation of the first kind (1.131) in f(x), can be derived with the
help of the Fourier integral formula.”*

Example 17 A Hilbert Transform-Type Integral Equation


A very direct simple example of, at least the use of the available Hilbert transform
tables, for solving integral equations in the form (1.131) is
Co

=P f a = sin 2d (E.1)
7 =ie,9}
A =

The general method of solving such singular integral equation would involve “heavy”
use of complex variables. So, here, and in parallel to what we do for the inverse
Laplace transform in the sophomore course on differential equations, we appeal to
the available tables of the Hilbert transforms to find the pair

1 au bad
H {cos br} = =P [ pe ain ONS (E.2)
TOV Soak ea eX
Hence a simple comparison of (E.1) and (E.2) shows that (E.1) is a special case of
(E.2) with b = 2, whence the solution of (E.1) is u(a~) = — cos 22.

The Mellin Transform


The Mellin transform of the function f(x) defined on (0, 00) is
CO

F(X) =Mis}= | a f(x)dx (1.134)


0
The inverse Mellin transform is
y+iL
1h) — = jim ie a *F(r)dA, y> Real {zi}, z=a+iy (1.135)

which can be seen as the solution to the singular integral equation (1.134). Here in
(1.135), as it was the case for the Laplace inversion formula (1.65), 7 is taken to be
larger than Real {z;}, the real part of any singularity 2; of F(z) inside the integral of
(1.135) (where z = x + iy).
In Exercise 19 we will present the convolution product and its associated Mellin
transform convolution theorem,

Se Cave
Mifi * fo}=M fill) fe ee Fy (A) F2(A) (1.136)
0
where F(A) and F(A) are the Mellin transforms of f,(x) and f(z), respectively.
Then this theorem is used, in parallel to the Laplace transform, for solving special
class of integral equations of the first kind, that are in the form of such convolution
product.

*2See Erdelyi et al. [1954].


EXERCISES 1.4 73

Exercises 1.4

The Laplace Transform

1. Find the Laplace transform of the following functions (see Table 1.1).

(a) gil? 0<2< 0


(b) «/27,0<2<0
(c) iee2(t—t)dt
0
Hint: See the (Laplace-type) convolution theorem in (1.84).

(d) x— [e — t)?u(t)dt
0

(e) i Ce oe u(0) = 0
0 dt
Hint: See (1.84) and (1.68).
(f) xsinz

2. Find the inverse Laplace transform f(z) = £~!{F(s)} of the following func-
tions F’(s).

1
(a)
s—(A+1)
G(s)
(b) s—(A4+1)
1
(c)
(s —3)?+5
1 1 1

Hint: Use the (Laplace-type) convolution theorem (1.84).

(f) V/sF(s)
Hint: Let \/sF(s) = s[F(s)//s] = sH(s) and use (1.68) and the result
of part (e) for h(t) = £~'{H(s)}, noting that h(0) = 0 in part (e).

3. (a) Show that , § > ais the Laplace transform of the following two
s-

functions.

@ fi) =e"
and
s Cemit< 9, 40F toss
(ii) f2(t) 24 . poe
74 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

(b) Use the information in part (a) to conclude the nonunique nature of the
solution f(t) of the (singular) integral equation of the first kind,

pe ibeen f(i)dt.
Sta 0

1
.
OLE(a) Findene testa
the inverse Laplace transform of
CieF'(s) = —————.
maar,
1
Hint: Use the Laplace convolution theorem (1.84) with F\(s) = = and
=H
F(s) = = where f; (x) = 1, and" fo(x)#= ¢ remembering
Jaa
the shifting property for F2(s).
(b) Use the following identity,

r(y)rd —v) =
Il
to show that T° (5) =i) i

. (a) Find the Laplace transform of Abel’s integral equation (a Volterra equation
of the first kind with difference kernel).

if [ Feueoae
1

Hint: See the convolution theorem in (1.84) and Exercise 1(b).


(b) Solve for u(z).

. (a) Prove the result in (1.69).


d
Hint: Let g(a) = a sO

c{oth =£1 3} = sL{g} — 9(0) = 3612 ~ f°)

(b) Attempt to put conditions on f(a) and we to establish a precise statement


of a theorem for the formal result in (1.69).
(c) Use the convolution theorem in (1.84) to prove the result in (1.70).
Hint: Consider theintegral in (1.84) as a convolution product
of f;(t) = 1
and fo(t) = f(t).
. Find the inverse Laplace transform of
EXERCISES 1.4 75

by two methods.

(a) By using partial fractions.


1 A B,+C
Hint: —.=——-
5241) = —a + ——__ ey and A,B, and C.

(b) By using the convolution theorem.


1 1
Hint: F; eae = >—.
abiacHuhs) Os 2(8) s?+1
8. (a) Show that the following initial value problem of the harmonic oscillator in
TC LU eae OTeep

oY uta = ; £20 (£.1)

(Oj 0 (E.2)

u/(0)=1 (E.3)
is equivalent to the Volterra integral equation

u(t) =t+w? i)(a — t)u(x)da. (E.4)

Hint: See Example 6 and Exercise 2(a) of Section 1.3.


(b) Verify that
sin wt
u(t) =
Ww

is a solution to both the initial value problem (E.1)—-(E.3) and its equivalent
Volterra integral equation (E.4).
Hint: You may substitute directly, or use the Laplace transform to solve
the initial value problem (E.1)-(E.3); or the convolution theorem (1.84)
to solve the integral equation (E.4).

Fourier Transforms
9. (a) Prove the following shifting properties of the Fourier transform,
(i) Shifting in the physical x-space

Fif@=—a)\= er, or), (E.1)

(ii) Shifting in the Fourier \-space

ee fe f(x) } =r CAC) (E.2)


76 Chapter 1. INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

(b) Most of the functions we deal with are assumed to be real-valued functions
f(x) on (—00, 00). When f(x) is complex-valued, prove the following
result, which we need for the use of the Parseval equality in (1.106),

F{f(-2)}=FQ),
where f(x) stands for the complex conjugate of f(z), i.e., where each
= /—1in f(z) is replaced by —i.
Hint: Take the complex conjugate of the Fourier transform of f(—), and
note that the complex conjugation operation is distributive over addition
and multiplication, ie., fi + fo = fi + fo and fifo =fi fe.
10. Use the (Fourier-type) convolution Theorem 7 as in (1.101), which can be
written as :

1
Be _ RAV yan= fo fila — t)fo(t)dt (E.1)

to derive the very important equality for Fourier analysis, namely Parseval’s
equality in (1.107).
Hint: Let x = 0 in (E.1). Then consider the special case of fo(t) = fi(—t) m
(E.1) with the use of the result in part (b) of Exercise 9 to have F{ f,(—t)}=
FAfoo) eee)
IWbe Consider f(x), —co < z < o.

(a) Show that the exponential Fourier transform F(A) of f(a) reduces to a
Fourier sine transform like that of (1.95) when f(a) is an odd function
fo(z).
Hint: In the integral of (1.87) write e~** = cos Ax — i sinAz, then
recognize that the integrand fo(x) cos Ax is an odd function where its
integral on (—oo, oo) vanishes.
(b) Show that the exponential Fourier transform F(A) of the even function
f(x) reduces to a Fourier cosine transform like that of (1.98).
Hint: See the hint for part (a), where in the present case f,(x) sin Azx is
an odd function, thus its integral on (—oo, 00) vanishes.
12: Consider the function of two variables u(z, y) and the Laplacian of this function

Ou Oru
Vu=— + —.
72 oF Dy? :
(£.1)

Show that the double Fourier transform, as given in (1.129), of this Laplacian
of u(x,y) is the following algebraic form in U(Aj,2) the double Fourier
transform of u(z, y), i.e

iE 0?u (id 2+iA2y)


(iA d O7u Oru
(2) (gat st
3 = a ase Sat ail dxdy
EXERCISES 1.4 77

SOP) ORAS): (E.2)


2
Hint:
Start with the Fourier transform of Ay2 oY) as a function of z,
ae hy ;
0?u Aue
Fl a} = EA a }= iA,U (A1, y) wher U(\1,y) = F {Suh An-
e

other integration gives F {5 = —?U (Ay, y), where T(1,y) = Vues)

The second Fourier integral of (E.2) with respect to y gives F {U Ona =


U(Aq, A2) with the final result of (E.2) as

0? :
F (2) ‘Fat = —ATU(A1, Az).

For the second term in (E.2) do the same starting with the Fourier transform of
07 u(z, y)
as a function of y.
Oy?
13. (a) Show that f(a) in its Fourier sine series representation (1.116) is, indeed, a
solution of the (nonsingular) Fredholm integral equation of the first kind
(12415),in fz).
Hint: Substitute the series (1.116) of f(a) inside the integral of (1.115),
interchange the operations of summation and integration, then use tne
orthogonality property of {sinmz}°°_, on the interval (0, 7), ie.,

m Oem
‘i sinmazsinndz = ¢ 7 (E£.1)
0 x n=m™m.

The interchange of the operation of integration with the infinite summa-


tion of (1.116) is allowed when f(a) is square integrable on (0,7), i.e.,
Nsf?(x)dx < oo.
0
(b) By the same method in part (a), show that f(a) of (1.118) is a solution
of the Fredholm integral equation (1.117). You will need to use the
following, a bit different from (E.1), integration result that relates to the
orthogonality of {cos mz}?°_, on (0, 7).

- On in ae
i cosmzcosnz dr = 7 n= Tse 0 (E.2)
: kt, v) n=m=0

14. Consider F(A) and F(A) as, respectively, the Laplace and Fourier trans-
forms of the causal function f(x), i-e., f(z) = 0 forxz < 0.
Show that these transforms are related as

Frp(A) = Fc(iA).
78 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Hint: Note that for the Fourier transform of f(x), its integral on (—oo, 0) is
zero since the causal function f(a) vanishes identically there.

Ii (a) Find the Fourier exponential transform of the (singular) Fredholm integral
equation with difference kernel assuming that the solution does exist.

f(x) = g(x) + [kaw - ef as


(oe)

—co

Hint: Note that the (improper) integral is in the Fourier convolution


product form (1.102), so use the (Fourier) convolution theorem in (1.101).
(b) Solve for F(A) in part (a) and then find f(2) = F~!{F(A)}, the solution
to part (a).
&

16. Solve the following (singular) Fredholm integral equation with difference ker-
nel

NO / k(x — €)b()dé, (E.1)


[eo.e)

—e5)

assuming that the solution does exist.


Hint: Note that the integral is in the Fourier convolution product form (1.102),
so use the (Fourier) convolution theorem (1.101).

iy. Solve the following (singular) indexsingular!Fredholm equation Fredholm


equation in ¢(z),

o(x) = e7!#! + es ec Wendt a (E.1)


OO

Hint: Note that the (improper) integral is in the Fourier convolution product
form, where the Fourier convolution Theorem 7 as in (1.101) can be used to
algebraize the integration operation. Also remember that

F {el} = (E.2)
18. Solve the following (singular) integral equation in (2),

o@)—pf - SPS
t)dt
= nla), (E.1)
where p(x) is the gate function

pat) eg (£2)
ford —akr
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 79

Hint: Note that the (improper) integral is in the Fourier convolution product
form (1.101), and recall the two Fourier pairs

—alx B 2a
Fie } @ +X ee)
and
2 sina

Other Transforms

19. (a) Find the Mellin transform of

f(s) Seo": a> 0.

Hint: Use (1.134) and let ax = z, then appeal to the definition of the
gamma function as given in (1.74).
(b) The Mellin transform-type convolution product of f;(x) and fo(x), 0 <
x < oo; is defined as

The associated convolution theorem is given in (1.136) as

M {iivdaye (=) ?| =AOARO), “yD


where F(X) and F2(A) are the Mellin transforms of f(x) and fo(z),
respectively.
Now let U()) be the Mellin transform of u(z), then use this convolution
theorem in (E.1) (and the Mellin transform pair in part (a)) to find the
Mellin transform of the following singular integral equation of the second
kind in u(z),

ee [e etuo? (B.2)
to show that you will have an algebraic equation in U(A).

1.5 BASIC NUMERICAL INTEGRATION FORMULAS

In the preceding section we introduced the Laplace, Fourier, and other integral
transforms and illustrated their suitability for solving only special cases of integral
equations. In particular, the Laplace and Fourier transforms are compatible with the
80 Chapter 1_ INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

following Volterra and Fredholm (singular) integral equations with difference kernels
K(x — t), respectively:

u(x) = f(x) + iSAG apm (1.37)

u(x) = f(x) + o K (a — t)u(t)dt. (1.38)

Hence, as expected, the methods we introduce in Chapters 3 and 5 for solving the
Volterra integral equations and (mostly nonsingular) Fredholm integral equations
will depend on the type of equation, in particular its kernel. But in general these and
other special analytical methods may fail; then we must resort to other approximate
or numerical methods of solution. The approximate method of solution involves
approximating the integral equation by another with a known solution which can be
made close to the exact solution of the original problem. We must stress that all
the methods being it analytic, approximate or numerical must be preceded by some
assurance of the “existence” of the sought (unknown) solution, in general. Also,
if possible, we shall have some idea about the stability of the solution for integral
equations of the first kind in particular, a topic that we shall touch upon briefly in
Section 5.4. Such existence theorems for the variety of integral equations covered
here, including the singular ones and those of the first kind, are usually stated most
precisely in a general abstract mathematical setting, which requires more of the
abstract analysis preparation than assumed for this book. However, we will attempt
to give a good intuitive explanation and as precise statements as possible for such an
important topic of the existence of the solution to a given integral equation. A brief
presentation of this “theoretical” topic is given in the (optional) Chapter 6.
In this section we will review the very basic numerical integration formulas. They
will be used for the numerical setting of Volterra integral equations in Section 3.3, and
Fredholm integral equations in Section 5.5. These formulas include the trapezoidal
rule, Simpson’s rule, and the midpoint formula. For the level of this elementary book,
we present the higher quadrature rules [see (1.140)] only for the interested reader,
thus we decided to have them covered in a new (optional) Chapter 7 along with
their necessary tables. There we will give a good number of illustrations for the use
of these higher quadrature rules for more accurate numerical integration, and their
use in the numerical setting and solving linear Volterra as well as Fredholm integral
equations. This is done to support the numerical methods of Sections 3.3 and 5.5,
where only the basic rules of this section are used.

1.5.1 Basic (Elementary) Integration Formulas

Numerical methods of solutions for integral equations approximate the integral in-
volved. For example, the integral ie f (x)dz is approximated by a finite sum
b n
/ f(a)de © Sp(x) = So f(a) Ae (1.139)
@ 17=0
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 81

wheve usually the sample values f(x;) are equally spaced with the increment
DANE py JANG 3 oe a for n equal increments of the interval (a,b). In general A;x
may be variable, but usually for the very basic formulas of elementary numeri-
cal methods of integrations, to be discussed soon, the increment is taken as equal
/SNay oe However, and depending on the particular formula, the ordinates f(z;)
in the approximate sum above may be given a weight that is indicated by D, (instead
of just A;x) to be written for (1.139) as D; f(z;) for fixed Ar = pmo instead of the
simplest version f(x;)A;2,

b n

flfa)dr Sn(a— >) Dif (a,): (1.140)


@ 0)

Such weights D;, (or quadratures) are equivalent to approximating the function f(z)
on the subinterval A; by a simple curve. Such a curve is a simple straight line for
the trapezoidal rule and a parabola for Simpson’s rule to be discussed next. Other
quadratures, where higher degree polynomials or other functions are used, are also
available in the literature and are used in books on numerical methods of solving
integral equations, which we shall present and illustrate in our detailed discussion
of the numerical solution of Volterra and Fredholm integral equations in Sections
7.2 and 7.3, respectively. Numerical analysis is the subject that deals with such an
approximation in the most accurate and efficient way. However, for our purpose in
this section we shall be satisfied with the very basic formulas used for the numerical
integration above. So, we will first present the most familiar formulas of numerical
integration: the trapezoidal rule, Simpson’s rule, and the midpoint formula. In the
next section we will illustrate primarily the use of the trapezoidal rule for evaluating
integrals. We will also give an exercise to illustrate the use of Simpson’s rule. In
Sections 3.3 and 5.5 (also in Sections 7.2 and 7.3) we will show how the linear
Volterra and Fredholm integral equations are reduced, respectively, to a triangular
and a square system of n + 1 linear algebraic equation in the n + 1 unknowns of the
solution (approximate!) samples u(a;), i = 0,1,2,---,n. We should, actually, use
another symbol t(;) to indicate the solution of the linear system as an approximation
to the samples u(x;) of the solution of the corresponding integral equation. But, for
simplicity, and in accordance with the usual notation, we will adhere to u(x;) without
much fear of confusion!
The basic formulas we present here to approximate the integral ifyf(x)dx on
the interval (a,b) by a partial sum use n equal increments, Ax = (b— a)/n. It
is clear from Figure 1.10 and (1.139) and (1.140) that the form of the particular
summation formula will depend on how the increment of area A;A under the curve
is approximated. Such an area has constant width Az = (b — a)/n and hence, its
value will depend on how we approximate the height or the value of the function
f (x) on the subinterval (x;_1, x;), as illustrated in Figure 1.10a. This choice, as we
shall see next with the following basic integration formulas of the trapezoidal rule
and the Simpson rule, will translate in terms of the weights D; of (1.140).
a) The trapezoidal rule approximates the function on the subinterval (2;_1, 7;) by
a straight line passing through the points (x;-1, f(ai—1)) and (a;, f(a;)), and uses
82 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

(a)

(x)
f (xi)

f(x;-\)

(c) (d)

Fig. 1.10 The trapezoidal, Simpson’s, and the midpoint rule for numerical approximation
of integrals.

the area of a trapezoid with height (1/2)[f(z:-1) +f(x;)] as shown in Figure 1.10b
to approximate the area A; A, which gives

. =
~ —* sf(0) + f(r) + faa) +
iif(z)dx
1 (1.141)
tt EA Dalid ameectee(Dna)ar af (an) .

If we compare this formula with (1.140) we note that the weights Do, D,, Do,---,
D,y-1, Dn given to the ordinates f(zo0), f(x1),---, f(Zn) are, respectively, 5 11.
-++,1,5 multiples of Az = =.
We may mention here that the higher quadrature rules, of numerical integration,
that we shall present in Section 7.1 for the very interested reader, are with more
elaborate (or fancier!) weights than the above simple ones of halves and ones.
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 83

Hence they require their own tables, which we supply in Section 7.1 along with the
discussion and illustration of each particular rule.
For completeness, the error E-r(f), involved in the above trapezoidal rule (1.141)
for approximating the integral if Po) Gars

Br(f) = f sa)de~ =") 5pao) + Hes) + flea) +--+ flea)


b _

“4 flen-1) + 54(e0) (1.142)


for a twice differentiable function f(x) on [a, b], can be estimated by

1 1 (b—a)3
IEr(f)| < 5 W(b-a)M = = = M (1.143)
where h = =, and M is the maximum value of |f’(x)| on [a, 8].
b) Simpson’s rule approximates the function on (x;-1,2;41) by a parabola
that passes through three points—the left point (z;_1, f(a;-1)), the middle point
(xi, f(a;)), and the right point (2;41, f(2i41)) as shown in Figure 1.10c and which
results in

[sora ~ Elo) + Af (ar) + 24


e0) + Af) + 24
(9)
6 =

+--+ 4F(tn-1) + f(n)] (1.144)


where n is restricted to be an even integer. Here we note that the choice for the
weights of the ordinates D; in (1.144) are 1, 4, 2, 4,---,4, 1 multiples of the (smaller)
constant increments Az = tt forn = 4,6, 8,---, and for the special case of n = 2,
the weights are 1,4, and 1 multiples of ar as can be checked easily from Figure
1.10c. The detailed derivation of (1.144) is not as clear as that of (1.142), but can be
found in most basic numerical analysis and some calculus texts’. The error E's(f)
in the Simpson’s rule approximation of the integral Re fear,

2 b-—a
Hslf = / f(z)dx — = [f (ro) + 4f (21) + 2f (x2) + 4f (x3) + 2f (4)

+++ 4f(tn1) + f(n)) (1.145)


for a four times differentiable function f(x) on [a, b], can be estimated by
1 (b-a)°
1 4 peepee:
M, (1.146)

23 Anton [1995, pp. 473-477]


84 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

where h = * and M is the maximum value of [a on the interval {a, 6]. If


we compare the errors E7(f) in (1.143) and Es(f) in (1.146) for the trapezoidal
and Simpson’s rules, we see that they depend respectively on ay and = which
says, roughly, that for large number of increments n, the Simpson rule may become
superior to the trapezoidal rule. Of course, the function considered and, in particular,
its differentiability that determines M in both errors, will play a major role. There are
examples, however, where the trapezoidal rule becomes preferable! The illustrations
of the above trapezoidal rule and Simpson’s rule along with estimates for their error
bounds, are the subject of Exercises 1, 2, and 3.
c) The midpoint formula approximates the height of the area A;A by (a fixed)
value of the function f(“=**+) at the midpoint fi=1** that results in

[sete ~ =" 1p(BS) +7 (AFB) +

+f (4 *) onsen (Gt=) (1.147)


which should be clear from Figure 1.10d.
We may interject here again that in Section 7.1 we will present the other more
powerful (or higher order) quadrature rules. There, we will concentrate on a few
representatives of the two main groups of quadrature rules, in particular, the Newton-
Cotes rules, the repeated Newton-Cotes rules in part A of Section 7.1, and the Gauss
quadrature rules in part B of Section 7.1. The Newton-Cotes rules use equidistant
samples. The simple trapezoidal rule (1.141) and Simpson’s rule (1.144) that we
presented in this section are no more than two special cases of repeated Newton-Cotes
rules of the lowest degrees | and 2, respectively. The Gauss quadrature rules, on the
other hand, use nonequidistant samples, and which, in general, are more efficient as
we shall illustrate in part B of Section 7.1 for approximating integrals, and in Section
7.3 for the numerical solution of Fredholm integral equations. Newton-Cotes rules,
or a combination of their repeated versions, are more suitable for Volterra integral
equations as we shall illustrate in Section 7.2.
The detailed derivation of (1.144) is not as clear as that of (1.141). Still we will
have a chance in Section 7.1 to see (1.141) and (1.144) as two special cases of a
repeated Newton-Cotes rule of degree | and 2, respectively.
In comparing the above three basic numerical integration formulas, we see that
the midpoint, the trapezoidal, and Simpson’s rule approximate the function on a
subinterval, respectively, by polynomials of degree zero (flat top), first degree (straight
line), and second degree (parabola). Other quadrature formulas involving higher
order polynomials or other special functions are considered, along with their error
analysis and illustration in books on numerical methods of integral equations.*4 As
we mentioned before, our emphasis in this book is to introduce integral equations

*4The reader is advised to consult detailed references like Delves and Mohammed [1988], and Baker and
Miller [1977] (also Kondo [1991]).
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 85

to the student for the first time, with their varied aspects including their numerical
approximations, but without necessarily requiring much of abstract analysis. For
this purpose, and to clarify our illustrations in Sections 3.3 and 5.5 for the numerical
solutions of Volterra and Fredholm equations, respectively, we will stay with the
above trapezoidal and Simpson’s rule. For particular problems, especially in Section
5.5 on numerical methods of Fredholm integral equations, it is tempting to draw upon
higher order quadrature rules but keeping in mind the raathematical level of this book,
we have relegated this discussion to Section 7.3 for the very interested reader. We shall
also back this treatment with a clear reference to the specific detailed source of such
analysis. Sections 3.3 and 5.5 will be devoted to the numerical solution of Volterra
and Fredholm integral equations respectively, where only the above basic numerical
integration rules are used. In Section 7.1 we will present the higher order quadrature
of integration with a number of illustration for numerically evaluating integrals. This
is followed by using these rules in Sections 7.2 and 7.3 for the numerical solution of
Volterra and Fredholm integral equations, respectively. In Section 7.1 we supply the
very necessary tables of the above higher quadrature formulas that are needed for our
illustrations. A specific reference is given there for the more detailed tables

A Prelude to the Numerical Approximation Setting of Fredholm Equations


With the above simple introduction of the basic elementary formulas of numerical
integration, we may state here that the numerical methods of solution for integral
equations involve, for example, that for the general Fredholm integral equation of the
second kind,

b
(UG) sap (e) +f K (a, t)u(t)dt (1.148)

we start by approximating the integral numerically by a partial sum of the form

Sp(z) = SeK (a, t;)u(ts)Ajt. (1.149)


70)

Here, as we mentioned above, the points t; are equally spaced, but can be chosen
at one’s convenience, and as it may be required by the chosen quadrature formula
beyond the two simple ones discussed above. Also At may stand for the usual equal
increment, but in general, the index 7 in A;t may indicate a weight D; assigned to
the ordinates K (x, t;)u(t,;) (of the integrand) by the particular numerical integration
formula used as we discussed for (1.140) and illustrated for the trapezoidal rule
(1.141) and Simpson’s rule (1.144). In this general sense of different weights D; for
the n + 1 values u(t;) in the sum of (1.149), we rewrite it as

Bis (a,6,)D,u(t). (1.150)


j=0
We shall continue this in Sections 3.3 and 5.5 for the Volterra and Fredholm integral
equations respectively, where we use the trapezoidal and Simpson’s rule; and in
Sections 7.2 and 7.3 where we use the higher quadrature rules.
86 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

1.5.2 The Smoothing Effect of Integration

In the future, and in particular with integral equations of the first kind,

pas [ Keapuyae (1.151)


we note that while the sought solution u(z) may not be of good quality, for a
reasonably behaved kernel K (x,t), the result g(x) of the above integration is of
better quality. For example, u(x) may have a jump discontinuity in the domain of
the integration, while the result g(x) of the integration in (1.151) is continuous. In
general, we describe such result as the “smoothing effect" of the integration process,
which we shall illustrate next with a simple example. So it goes without saying that
while the integration process may cover the bad quality of the integrated function;
its inverse operation, namely, the differentiation process will definitely uncover such
bad quality. We are also most concerned if the solution of an integral equation is in
terms of derivatives of a known function or some integral of it. This happened to be
the case with some integral equations of the first kind, for example the famous Abel’s
problem (1.20)

—/2gf(y) = * uit
o(n)dn
pass! 1.20
OTN el ( )
has its solution found, via the use of Laplace transform, in Example 8 of Section 3.2
as $(a) in (3.41) in terms of a derivative of an integral of the known function f(z)
weighed by ==,

oa)=-Ef 0a (3.41)
Example 16
To give a simple example, let us consider the very special case of the integral
equation of the first kind with K(z,t)=1,0<2;0<t<za,

Negue /Py eae (E.1)


Let us assume that we are given h() as the following (continuous) roof function on
Di ies Pa

Soak (nes OSG


hie) = {a-#, a<
2 < 2a ce)
as illustrated in Figure 1.11 and we are to find the solution u(x) of (E.1) on (0, 2a).
Of course, it is obvious that u(x) can be obtained from simply differentiating the
given function h(z),
dh
Te
u(x) = — (E.4)
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 87

Fig. 1.11 The roof function h(z) of (E.2) and its discontinuous derivative 44
» of (E.5).

: Clits)
Now a simple look at = in Figure 1.11, and we see the uncovered difficulty due
to this differentiation, namely, that the derivative of h(x) does not exist at rz= a.
Instead, and in contrast the continuous h(z), oe is continuous only on (0,a) and
(a, 2a), and it has a clear jump discontinuity of size 2 at r = a,

dh ie 0 <a"a
= { ; (E.5)
dx lin Ort Tila:

The matter is even more serious when h(z) is given as data, where, of course, it
is within the eee of the measurement of the me So, if we are, in principle,
after u(x) = az!ae we must approximate ae by 24 eee one) But this
computation for Be
22 will compound the final error in
1 vig we Start with the
“aa,
inaccurate data of h(z).

1.5.3 Interpolation of the Numerical Solutions of Integral Equations

Lagrange Interpolation Formula


As the result of solving an integral equation numerically, with n increments for the
integral involved, we obtain N = n+ 1 approximate sample values {&; = &(2;)}%,
for the solution u(x). Then it is desirable to interpolate between these discrete values
to obtain a continuous function w(z) as an approximation to the solution u(x). There
are many different formulas for such interpolation. We present here the well known
Lagrange interpolation formula, which is based on the use of a polynomial of degree
n=N-1,

Delay an tier Qn-10" 1}+-+:+ajz+agn, n=N-1 (1.152)

to interpolate between the N discrete values {u(z;)}, that result in the continuous
approximation &(z) to the approximated function u(z).
88 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

The Lagrange interpolation formula is used for not necessarily equidistant sam-
ples {f(2;)}%, of f(x), we will use f(a), instead of f(x), for the (approximate)
interpolated function to distinguish it from the exact f (Gs

LD= SOD (1.153)


j=l
where the function 1;(x) is defined as the following quotient of two products

(fee oo) (oe 1) eee oa See ON)


1;(x)
(a; —21)(zj — a2) -<-(@; — 24-1) (25 — 2541) > -4(2p— EN)
4

my, (x — 24)
eae
TIN, (x; — zi)
ey.
(1.154)
(We note here that the factors (x — x;) and (x; — xj) are missing, respectively, in
the numerator and denominator of 1;(a), which is indicated in the product notation
by 2 # 7 to say excluding the 7th factor in both products.)
An interpolation formula should first give us the sampling points, thus requires
from (1.54) that

tee)
itm) = |es (1.55)
which is clearly the case. This is so because form # j we havea factor (Im —®m) =
0 in the numerator but no such factor in the denominator, and all other factors there
are nonzero. In the case of |;(x;) all factors in both numerator and denominator are
the same, and without the factor (2; — x;), since it is missing in both places, and the
resultas.l;(a7) 11.

Example 17 Interpolating the Numerical Solution of a Volterra Equation


Consider the following Volterra integral equation of the second kind in u(z),

Ur) =o— fo — t)u(t)dt (£.1)

We note that it is with difference kernel and the integral is in the form of the Laplace
convolution product, where we can use the Laplace transform to solve it, similar to
what we illustrated in Example 10 of Section 1.4. The exact solution can be easily
obtained as u(x) = sinx using the Laplace transform, and can be quickly verified,
after a simple integration by parts for le t sin tdt (and remembering that x in x — t
of the integral is considered as constant, since the integration is done with respect to
t.)
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 89

In Example 9 of Section 3.3, we use only four increments on the interval (0, 4] to try
to find the five numerically approximated values of the solution &(x;), 7 = 1,2,3,4
and 5 at the indicated locations x = 0, 1,3, 4 and 4 as shown in Table 1.3.

Table 1.3 Numerical and Exact Solutions of Volterra Integral Equation (E.1) of
Example 17

Xx 0 1 2 3 4
Numerical value of u(z) Ome I 0 -1
Exact value ofu(z) =sinz O 0.8415 0.9093 0.1411 -0.7568

The table also includes the corresponding exact values u(z;) = sinz;,j = 1,2,3,4,
and 5. Figure 1.12 illustrates the comparison between these exact and approximate
five values of the solution to (E.1). Note that we graphed the exact values, then
purposely connected them with (an exact) graph as a solid line. This is done because
we know that the solution is u(x) = sin z forall > 0. However, for the approximate
numerical values &(x;) we only graphed, what we are sure of, as the approximated
five values. So, it is left for some interpolation formula to use these few values and
fill between them to give an idea of a continuous approximate solution u(r). Here
we appeal to the Lagrange interpolation formula (1.153) and (1.154) to do this job.

U(x) x--Numerical
o— Exact
x

Fig. 1.12 Numerical and exact solutions of Volterra equation (E.1) of Example 17. (Also
Example 9 and Table 3.1 in Section 3.3.)

To use the Lagrange interpolation of (1.153) and (1.154), we first prepare L;(x),
4 = 1325-75 of (1.154)
90 Chapter 1. INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

(x — 1)(x — 2)(x — 3)(a — 4) (x — 1)(x — 2)(ax — 3)(a — 4)


> C= )0= 703024) 7

Hee Walle lesa) Sele ae) =6


Je Baile ees) —
hie) e=-4 _OC=“VYEmn -3e=4
— Oe=YVe=)3)(
oa 2—4) —
(2)(2—1)(2-—

itr, (2?) er a) ed %ioe)


i mre CaOen) =
Rey
Be)
2 DOS
Ca
aa aaa e - e it)
GS iee |
(E.2)
So, using the above data and these functions {1 (x) }9_y of (E.2) in (1.153) we obtain
the interpolated approximate solution ti(z),

fi(e) = (0) (x —1)(a - ae — 3)(a — 4) asyee = Ae 3)(x@ — 4),

2 - oe 3)(x — 4) i (0) 2% ~ ES 2)(x — 3)

penx(xt@a— 1)(a
Wea Be)
— 2)(x — atton ee stare teeS ie
This interpolated approximate solution t(a) is computed and shown in Figure 1.13,
where it connects the five numerical sample values ti(xz;), j = 1, 2,3, 4,5, and u(z)
is compared with the exact solution u(x) = sina. Samples of the comparison are
u(0.5) = 0.4794, &(0.5) = 0.5859, u(2.5) = 0.5985, u(2.5) = 0.5859, u(3.4) =
—0.2555, u(3.4) = —0.4896.
Next we illustrate the use of the Lagrange interpolation formula for two approxi-
mate numerical solutions of a Fredholm integral equation of the second kind.

Example 18 Interpolating Two Numerical Solutions of a Fredholm Integral Equa-


tion
In Section 5.5 we will cover a simple detailed treatment for the numerical solution
of Fredholm integral equations. This is illustrated in Example 20 of Section 5.5.1 for
the numerical setting and solution of the following Fredholm integral equation of the
second kind

1
u(@) = sing + / (1 — xcos xt)u(t)dt. (£.1)
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 91

weno Interpolated values f(x) of (E.3)


See u(x)=sinx

Fig. 1.13 The interpolated approximate solution %(z) of (E.3) and the exact solution u(x) =
sin z of (E.1) in Example 17.

In this Example 20, we consider only three (approximate values i(x;), 7 = 1, 2,3 at
x; = 0, 2 = 0.5 and x3 = 1.0. The result is a 3 x 3 system of algebraic equations
in u(z;), 7 = 1,2, 3, since we have these three values for the input &(t;), 7 = 1,2,3
inside the approximating sum of the integral, for each u(z;), 7 = 1, 2,3 of the output
u(x) of (E.1).
The solution of this 3 x 3 system of algebraic equations is the subject of Exercise
2(a) and 2(b) of Section 5.5, where the trapezoidal rule and the Simpson’s rule are
used, respectively, for approximating the integral in (E.1). These results are shown
in Table 1.4, and are to be compared with the exact solution u(z) = 1,0 < x < lof
(E.1). For such square system of algebraic equations, we leave it (in Section 5.5) for
the preparation of the reader to deal with the solution using matrix methods. For the
most basic method, we present a review of the Cramer’s rule in the next section.
Now we will use the Lagrange interpolation formula (1.153) and (1.154) for both
sets of the three samples, then compare with the exact solution u(x) = 1.
From Example 20 and Exercise 2 of Section 5.5, we have in Table 1.4 the two sets
of approximate sample values w(z;) of the solution to (E.1),
We first prepare 1;(x), l2(x) and I3(x) of (1.154) to be used in the Lagrange
interpolation formula (1.153), then we use the two sets of data to find their respective
interpolations.

i = Ct)
ae
-2 (2-5)
2
@-0
Ho (CC oe
lg = (0.5)(0.5 —1) = —4zr (x = 1) (E£.2)

x(x — 0.5)

If we use the approximate samples values of Exercise l(a) in Section 5.5 and the
above functions ; (a) in (1.153) we have
92 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Table 1.4 Numerical Solutions of Fredholm Equation (E.1) of Example 18 (as given in the
answer to Exercise 2(a), (b) of Section 5.5), using a) the Trapezoidal Rule and b) the Simpson’s
Rule

j a; t(z;): Exer. 2(a) ti(2;): Exer 2(b)

1 0.0 1.013 0.99987


20'S 1.009 0.99992
a eekO 1.021 0.99967

ii(a) = (1.013)2 (<= 5)(Gy (1.009) ey

+(1.021)2z («is5) (B.3)


which can be easily computed to find that this &(a) is very close to the exact value
of u(x) = 1. An example are the few values %(0.3) = 1.0087, (0.7) = 1.012 and
a(0.9) = 1.0173.
When we use the approximate samples values from Exercise 2(b), with the func-
tions (x) of (E.2) in (1.153), we obtain

ii(x) = (0.99987)2 («= 5)(x — 1) — (0.99992)42(ax — 1)

+(0.99967)2« (<= 5) (E.4)

which interpolates similar to (E.3). An example are the few values &(0.3) = 0.9999,
u(0.7) = 0.9999, «(0.9) = 0.9997.
As we mentioned earlier, the numerical methods of solving Volterra and Fredholm
integral equations will require be the subjects of Sections 3.3 and 5.5, respectively.
There we will illustrate the numerical setting with the aim at a numerical solution
for these equations using the simplest numerical integration formulas, namely, the
trapezoidal rule (1.141) and Simpson’s rule (1.144). In Chapter 7 we will follow
such treatment by concentrating on the use of the higher order quadrature rules. In
particular, the Newton-Cotes rules and the Maclaurin rule will be used for Volterra
equations in Section 7.2, while the Gauss quadrature rules are used for Fredholm
equations in Section 7.3. The Gauss quadratures will also be consulted and illustrated
for their use in finding an approximate solution of one type of singular Fredholm
integral equations, namely, those whose singularity is due to (only) the limit (or
limits) of integration being infinite.
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 93

1.5.4 Review of Cramer’s Rule

As we mentioned in the last section, the resulting square system of linear algebraic
equations, for numerically approximating the Fredholm integral equation, will require
some basic knowledge of matrix analysis. We present here a review of the very basic
such needed computations, namely, Cramer’s rule for solving N x N system of linear
algebraic equations.
Consider the square N x N system of linear equations in 21, £2,---,2N,

44121 + 4,2%2 +--+: + a;nrn = 0,


Q2121 + Gg2%2 +--+: + Qanztn = bo
(1.156)
Qni21 + ano%2 +---+annzyn = bn

with its matrix form


AXe— 6B (1.157)

where A = la; bjt Ni = (2; et oe Ns and B = (Oto ON: Of course,


we must remember that the existence of a unique solution to this system in (1.156)
[or (1.157)] is strongly dependent on |A| = detA, the determinant of the coefficients
matrix A. This can be stated for the nonhomogeneous system (b # 0), (here 0 is the
zero column matrix) that when |A| 4 0, then the system has a unique solution (for
any values of 6), but if |A| = 0, then there exist values of b for which there is no
solution, and other values of b for which there are infinitely many solutions. On the
other hand, in the case of the homogeneous system (b = 0), if |A| 4 0, the system
has only the trivial solution X = 0, but when |A| = 0, the system has infinitely many
solutions.
So for |A| 4 0 the unique solution of the nonhomogeneous system (b ¥ 0) in
(1.156) or (1.157) is given by Cramer’s rule as

by @i2z ***" aiy,


bg @o2 °°: Gan
1
TZ raRTaT
|A|
bn Qn2 °** GAnn

Git 03 2" Gin


Goi bo) = ** Gan
1
L)
= —-=
|A|
Qni Un *"" Onan
94 Chapter 1_ INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS

Gj eee et
G21 +" (Gentz be
l : :
(1.158)
|A|
In =

Qni *** Gnn-1 bn

We also remind that |.A|, the determinant of the matrix A is computed as


N

|A] = 50(-1)*aug|Mayl (1.159)


j=1
for the expansion about the ith row of A(i—fixed), and

[A] = $0(-1)?**ai3|
Mis (1.160)
e—"

for the expansion about the jth column of A(j—fixed).


|M;;| is the minor of the entry a;;, which is the determinant of the (remaining)
(N — 1) x (N — 1) matrix after deleting the ith row and the jth column of A. Also
(—1)'t3|M;,| is called the cofactor of aj;.

Exercises 1.5

1. (a) Find the exact value of the following definite integral

[ dx
Ohl
(E.1)
Hint:
(E.2)
(b) Use the trapezoidal rule with n = 4 to approximate the integral in (E.1).
(c) Estimate the error in the trapezoidal rule approximation of the integral in
(EY):
Hint: For the maximum value M of |E4| of f(x) = z42 on (0, 1) to be used
for the error estimate in (1.47), we must find ae = ee in the search for

a critical point of f(x). Note 5 in this case Lf


das 5 Qon (0, 1), and the minimum

value of f(x) = ae occurs as f’’(0) = —2 at the end point x = 0,


£
whereM = |f"(0)| = 2.
(d) Compare the actual error of this numerical approximation with its estimate
(or upper bound) of part (c).
EXERCISES 1.5 95

. (a) Find the exact value of the following definite integral

1 dx
[ io (E.1)

(b) Use Simpson’s rule with n = 4 to approximate the integral in (E.1).


(c) Estimate the error in the Simpson’s rule approximation of the integral in
(E.1).
Hint: Here we need the maximum value of ee = lagasl of f(z) = es
on (0, 1), which clearly occurs at z = 0, M = |f4)(0)| = 24.
(d) Compare the actual error of this numerical approximation with its estimate
(or upper bound) of part (c).

. Use six increments to approximate the integral

| jaa
S adxr

using
(a) the trapezoidal rule.
(b) Simpson’s rule.
(c) Find the exact value of the integral, and compare it with the two approximate
values in parts (a) and (b).

. In Exercise 1 of Section 7.2 we solve the following Volterra integral equation


x

u(x) = 1—2¢ +42? + i [3 + 6(@ — t) — 4(a — t)?}u(t)dt (E.1)


0
numerically.
Use the Lagrange interpolation formula (1.153)—(1.154) to interpolate those
numerical values of the solution that are found in the answer to Exercise | of
Section 7.2. Compare the interpolated result with the exact solution u(x) = e*®
of (E.1). (This is the same as Exercise 5 of Section 7.2.)

. Consider the Fredholm integral equation,


1
UD) = exe -{ ze'u(t)dt (E£.1)
0
and the three samples ofits numerical solution u; = u(0) = 1, uz = u(0.5) =
0.3674966, and uz = u(1) = —0.11089 as done in Exercise 3a(i) of Section
bss
(a) Use the Lagrange interpolation formula (1.153)—(1.154) to find the (ap-
proximate) interpolation u(x) of these approximate samples.
(b) Do the same as in part (a) for the eleven samples of the numerical solution
of (E.1) as given in the answer to Exercise 3a(ii) of Section 5.5.
7 a a a

(ht) 7 e..
_~—~ a

iZ Si i= 2 .

_ i) 5 Siete)“4 Gh 1 if i —ekoscuaiingn
pote “T= Glut- Wake
eae Pe.) oo
(1s i ‘er. 7
oe
a ne wa ie ee
> aS
— _
24 homies | ib mol lot Gm Sabla
4 @ rey j - 4? Py |& — > — als doidve At a bi
MOTE 01) Via orn LaaeetnOa0 & qe beerea
_— —~ 69S aed! a> pana (ie20 ovtegfine
cha me ——

: ncepae 06 scenes i sith omertsel 828



as ; eo en“?, _ 7 <5
wht

i 7 -
- , jae et]ta nbd af
4 > om a (ie CrOri),
y 6S wo ) Camas ee ou aatg
- at we nag ie GB 00gvs
ssh ileal, 4
a eal, PE ag atin # neeeprsd Ate
S=O-1ME iol SESE 1 me svy shen Seiad ass yl bere ie
- wT ; . “tol [is dear 3
PNP) ote, = — eet Wi) ne eS Tee Bhp manta —_ 4
7 (x Sir iGas od (96e a ‘ he eae a f
! anyy Bsiiny : (4 — a ohh \.43.-
2-16 eer
¢ * re oe
- = 7 a a) -, '
| : a yi f = - _ dim S “iy
<

SO Pele a LL 0 05) aera ea aeons a


eos bee @ CAE Se pond) Ceuitcainy =O ter Cade fad ~— mf :
; >= | ™
en Suc daar ae, eG nayrni? PAT re: :

es 7 Pe sone tee ane


- me pt 3
L¢-serras Soaees
- = — oe é ‘Se Owe o~\

ands nae al Eiaian gle

Sette ce
(4 Ooi mae —_1 STF ae.
— ak mie In| a s RAPE
< .(ote ie
| tee eee ai
<< =~ atende Sanam
>> sek heey
ra te; Oohae
- oya
: = ei ew 1
oe st a
~— ra oe
Modeling of Problems as
Integral Equations

In this chapter we formulate, in as much detail as possible, a number of the problems


that we presented in Section 1.1 in terms of their natural representation as integral or
integro-differential equations. We also show in detail how initial and boundary value
problems, associated with linear differential equations, and some specified auxiliary
conditions are reduced to Volterra and Fredholm integral equations, respectively.
Finally, we illustrate the formulation of a mixed boundary condition for the electrified
plate in terms of dual integral equations.
The formulation of the first group of problems with integral equations as their
natural representation will depend on the realization, demonstrated in Section 1.1,
that an integral equation like

u(a) = i K(x, €)u(é)ae (2.1)


relates the present state u(x) to the accumulation (integral) of thechanges K (a, €)u(&)
of all its other values u(€) fora < € < b. This is in contrast with the differential
equation representation, which gives a local relation. For example,

u(z) = 4!
dz
(2.2)
relates the state u(z) to its instantaneous rate of change du/dz, and hence the relation
to its values only in the very immediate neighborhood. Indeed, the formulation
in terms of derivatives is the essence of the mathematical modeling for the basic
natural law of causality which made differential equations the important subject
it is today. In comparison, we say that the importance of integral equations stems
97
98 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

from representing accumulative or hereditary situations, where, for example, a state


u(t) is affected by the accumulation of changes in all its previous values such as
the population-type problems as in (1.8). Of course, the mathematical modeling
for problems involving both hereditary and causal principles is quite familiar and is
given in terms of integro-differential equations, as in the case of the biological special
living together (1.13) and (1.14).
We may remark that what was said here about a state as a function of time and its
previous values can be extended to functions of space variables such as the hanging
chain, (1.17) and (1.18), and to more than one variable, such as the case of the charge
density for potential on a unit disc in (1.38). This accumulation effect for the hanging
chain in (1.17) is clearly seen where the shape f(z) at the point re(0, 1) is affected
by all the forces on the interval of its definition (0,1).
In this chapter we first present the formulation of some problems of population,
control, mechanics, and radiation transport, then follow with initial and boundary
value problems, dual integral equations, and Schrédinger equations in the three-
dimensional momentum space.

2.1 POPULATION DYNAMICS

As mentioned earlier the study of population growth includes the forecasting of any
future surge in birthrates, which is of great importance for future planning throughout
the world. In this section we formulate the problem of human population growth, the
problem of the surge in birthrates, and the problem of two biological species living
together.

2.1.1 Human Population

Let the number of people present at time t = 0 be ng. If we look at survival or


insurance tables, we find that there is some sort of a survival function f(t) similar to
that shown in Figure 2.1, which gives the fraction of people surviving to age t. It is
assumed that these people are either male or female. The surviving population n,(t)
at time ¢ is then

n(t) = no f(t) (2.3)


where n5(0) = no f (0) = no.
Under normal circumstances there is a continuous addition to the population
through new births. If children are born at an average rate r(t), then in a particular
time interval A;7 about the time 7;, there are r(7;)A;7 children added who, if they
survive, will be of age t — 7; at time t. But according to Figure 2.1, only a fraction
f(t — 7%) of these children will survive to age t — 7;, so the final addition to the
population at time ¢, from the children born in the interval A;7 about time 7;, is

f(t — r)r (Ta) Ait.


2.1 POPULATION DYNAMICS 99

Fig. 2.1 The survival function.

Now if this process is repeated for all the m subintervals of the time interval (0, t),
we obtain the partial sum

AO) =) Seimei era (2.4)


=

as the number of people added through new births which, if passed to the limit (as
m —> oo), becomes the integral

ONE / OME (2.5)


If this is added to n,(t) in (2.3) (the survivors of the initial population), we obtain
the total population at time ¢ as

n(t) = n,(t) + b(t) = nof(t) + [ f(t — r)r(r)dr. (2.6)

, dn.
It is reasonable now to assume that the rate of birthrate r(t) = are is proportional to
n(t), the number of the population present at time t,

c(t) = Kn(C). OT0)

From (2.6) and (2.7) it follows that


t
n(t) = nof(t) + k | f(t — 7)n(r)dr (2.8)
0
which is a Volterra integral equation of the second kind in n(t) with a difference
kernel kf(t — 7). Such a kernel reminds us of the Laplace method of solution
discussed in Section 1.4.1, which we will illustrate in Section 3.1.3.
100 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

2.1.2 Biological Species Living Together

Consider the two species with number n,(t) and n2(t), respectively, present at time
t that we presented in (1.13) and (1.14) of Section 1.1. If the two species are left
separate, it is reasonable to assume that their rate of change dn/dt is proportional to
n(t), the number found at time ¢,
dn
riage an(t). (2.9)

Here a is the proportionality constant or the coefficient of increase (a > 0) or


decrease (a < 0), depending on whether there is a growth or decline in the population,
respectively. Let us assume that the first species is increasing with coefficient of
increase k,(k, > 0) and the second species is decreasing with coefficient of decrease
—ko(k2 > 0). Then if the two species are left separate, the growth of the first is
represented by

si =kni(t), kh >0 (2.10)


and the decline of the second species is represented by

d
= 2 Oy te Say (2.11)
Now we are to formulate the state of equilibrium of these two species, when put
together, with the assumption that the second species (a predator) will feed on the
first (the prey). This will, of course, affect the rate of change of both species; the
first (prey) will have a slower rate of growth, while the second (predator) will have a
slower rate of decline.
To formulate this situation in terms of a reasonable mathematical model, we start
with the model of the separate species (2.10) and (2.11). We then attempt to modify
it in two steps to allow for the new situation, where we must introduce factors to
decrease k, in (2.10) and increase —kg in (2.11). To start with kj, the rate of increase
of the first species (prey), it is reasonable to assume that its decrease is proportional
to n2(t), the number of the second species (predator) present. Hence k, should be
modified to kj.

ki = ky — Y1N2(t) (212)

where 7; is a proportionality constant which depends on the first species. The actual
decrease in ky is due not only to the presence (feeding) of the second species n(t) at
the present time ¢ but also to all previous presences (feedings) of n2(7) for the whole
time interval t —- Ty < rT < t, where To is the finite heredity duration of both species.
If in addition to the present 7, factor we have the record of its rate of decrease as
fi (7) in previous times t — Ty < 7 < t, then the decrease in k, at time t, due to the
decrease in the time interval Ar, is —f\(t — T)n2(r)Ar. Here we used fi (t — 7),
since a species at the present time t had a chance to resist n2(7) at time t — 7 and
hence a factor f;(t — 7). The total decrease in k, in the whole time interval Ty is
2.1 POPULATION DYNAMICS 101

t
Ok, = fi (t = T)N2 (r)dr. (2:13)
t—To

If we combine this previous decrease (2.13) with the present decrease —y,n2(t) [as
in (2.12)], we obtain

t
Kesf = ky = ¥1N2(t) = bs fi (t = T)n2(r)dr (2.14)
t—To

as the final effectively reduced coefficient of increase of the first species.


By reasoning similar to the above, the coefficient of decrease —k2 of the second
species (predator) will increase by 271 (t) due to present feeding on the first species,
and hence —kz should be replaced by the higher value

—ks = —k2 + yoni (t). (2.15)

Again it is not the present good feeding alone which causes the increase in —k», but
also all previous feedings that depended on the presence of the first species n; (rT).
Hence in the time interval A7, there is an increase of f2(t — 7)ni(7)Ar and the total
increase in the same period To is

dk = fo(t — 7)ni(r)dr. (2.16)


t—Tp

The final (effectively increased) value of —k2 is obtained by adding dk in (2.16) to


that of (2.15):

t
—koesf = —ke + yor (t) + fo(t — T)n4(7)dr. (251%)
227,

Now we modify k, in (2.10) to its effective reduced value (2.14) and —kz in (2.11)
to its effective increased value (2.17), to obtain the model for the equilibrium state of
the two species living together:

t
on = ny(t) i — yN2(t) — fi(t - r)na(r)dr] hankipStO (2.18)
t tT

t
dn2 = no(t) | + y2n1(t) + fo(t - r)na (r)dr] reoks 0. (2:19)
dt 27

This system of two (nonlinear) integro-differential equations is as we presented them


in (1.13) and (1.14).
102 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

Exercises 2.1

1. Consider the integral equation (2.8) in n(t); the number of the human popula-
tion at time f.

(a) Let N(s) and F'(s) be the Laplace transform of n(t) and f(t), respectively.
Find the Laplace transform of (2.8) and hence find N(s).
(b) Assume an exponential-type survival function f(t) = e~“, c > k > 0.
Find N(s), then solve for n(t) of (2.8) as the inverse Laplace transform
of N(s).
2. Consider the problem of radioactive decay where the rate of decrease of the
number of atoms is proportional to the number of atoms n(t) present at time t.
Hint: See (2.9)-(2.11).

(a) Write the differential equation and its initial condition assuming that no is
the initial number of atoms present at time t = 0, and remembering that
the proportionality constant for the decay being a negative number —k,
Ket:
(b) Reduce the initial value problem in part (a) to an integral equation.
(c) Solve the initial value problem in part (a) for n(t).
(d) Verify that the solution of the initial value problem in part (c) is also a
solution of the integral equation in part (b).

3. Consider the human population problem (1.8), and assume that the survival
function is described roughly by the exponential function f(t) = e~ 7, where
T is the average life span of a typical person.

(a) Write the resulting integral equation and find its Laplace transform and
hence N(s) = L{n(t)}.
(b) Use the inverse Laplace transform to find the solution n(t), the population
at time f.
(c) In (2.7) we assumed that the birthrate is proportional to the number present
in the population,

ar = knit) (2.7)
where we can take k as the rate of population variation per capita. Use
the result in part (b) to show that
(i) The population increases in an exponential fashion when T > 1/k
(i.e., when the average life span of the typical person is larger than
the reciprocal of the per capita rate of change of the population).
(ii) The population decreases in an exponential fashion when T < 1/k.

4. Consider the problem of birthrate and the possibility of asurge in the birthrate,
which we considered in (1.9) of Section 1.1.
EXERCISES 2.1 103

(a) Let us assume that there are initially b) women, and that they will give
birth to a female child at a rate h(t) per year. Find their contribution to
the female birthrate at time t.
(b) To find the birthrate b(t) at time ¢, we must add to this birthrate of part (a)
the contribution to birthrates of girls born at time 7 > 0 when they are at
age 7 in the range of childbearing age a < rT < (. Girls bornat time t —7
will at future time ¢ belong to the birthrate b(t — 7). If the probability
of a girl living to age 7 is p(7) and the probability of the girl at this age
giving birth to a female child in a time interval Av is m(r)Ar7, find the
contribution to the birthrate b(t) from women in the subinterval Ar of
the range of childbearing age 7 [born at t — 7 with birth rate b(t — r)].
(c) Find the contribution to birthrate b(t) at time ¢ from all women in the
childbearing age rangea <T < £.
Hint: Pass to the limit for the sum in part (b) to become an integral.
(d) Find the expression for the total birthrate b(t) that includes the contribution
of the women present at the initial time t = 0.

. Consider the problem of the surge in birthrate of problem 4, where its birthrate
b(t) is governed by the integral equation
B
b(t) = boh(t) +f b(t — T)p(r)m(r)dr. (E£.1)

The integral above represents the contribution to birthrate of girls born at time
t > 0 when they are at the childbearing age T, a < T < £3.
Use the following detailed steps in parts (a) and (b) to show that the above inte-
gral equation can be expressed as the following Volterra-type integral equation
with difference kernel

boh(t) tof b(t —7)p(r)m(r)dr, t<B


HO toned (E.2)
i!b(t — r)p(r)m(7)dr, LG:

(a) Fort < 6 you can write the integral (E.1) as

b(t) = boh(t) + ibeb(t — r)p(r)m(r)dr + : b(t — 7)p(r)m(7r)dr


B
+f b(t — 7)p(r)m(r)dr.
t
(E.3)
The first integral goes to zero since m(r)Ar, the probability of a girl
bearing a female child at age t < a which is outside the childbearing
age, is zero. It is added to help having integration from rT = 0 to T = ¢ to
104 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

fit the Laplace convolution product-type integral as shown in the integral


of the first branch in (E.2). Also, the third integral of (E.3) is of zero
contribution since b(t) takes care of birthrates for t > 0 and it is assumed
zero for t < 0. In other words b(t — T) is zero in the third integral, since
t—7r <Ofort <7 < GB. Hence the first term plus the second integral of
(E.3) give the first branch of (E.2).

(b) For t > £ note that since we are taking the origin of time as the birth of
the oldest childbearing woman, then boh(t) = 0 fort > G in (E.1), since
this term now takes care of birthrates to women at age t > 3, which is
outside the childbearing range (a, 3). With boh(t) = 0, we now rewrite
(E.1) as

Qa B :
b(t), == i b(t — r)p(r)m(r)dr + / b(t — Tr)p(r)m(7)dr
0 t a

i / Rye ame emt se:


B
(E.4)
by adding the first and third integrals, which are of zero contribution,
since m(r)Ar = 0 fort < aand7t > £. Hence (E.4) gives the second
branch of (E.2).

2.2 CONTROL AND OTHER PROBLEMS

In this section we present the formulation, in terms of integral equations, of two


control problems: the problem of how to keep a specified number of machines
always available in operating condition and that of controlling the deviation of a
steering shaft from its indicated direction.

2.2.1 Mortality of Equipment and Rate of Replacement

The problem of finding the rate dr /dt at which equipment should be replaced, to keep
a specified number f(t) in operating condition at any time t, is formulated similar to
that of the human population problem of Section 2.1.1. We first assume that we have
s(t), the function that determines the number of pieces of new equipment bought at
t = O that survives to time t. If we start with f (0) as the number of new pieces bought
at time t = 0, then, due to loss or wear, only the fraction f (0)s(t) will survive to time
t. To keep a specified number larger than f(0)s(t) at time ¢ we must continuously
add equipment at the desired rate from time t = 0 to time t. If the desired rate of
replacement at which we must add new equipment at time 7 is dr(r) /dr, then at time
t this equipment will be of age t— 7 with a survival function s(t — T) that is dependent
EXERCISES 2.2 105

: dr
on their age t — r. From (F) Ar, what we replace in the time interval Ar, only
a fraction s(t — r)(dr/dt)Ar will survive to time t. Hence, if these survivals of the
continuous replacements are added along the time interval (0, t), we obtain

t
He) = / s(t — ila t>.0 (2.20)
0 dt
the number of pieces of equipment surviving to time t, which were purchased as
replacements during the time 0 < 7 < t. If we add this to f(0)s(t), the surviving
number of pieces of original equipment (new at time t = 0), we obtain the (desired)
total number of pieces of equipment in operating condition at time t,

t d
7) —fO)s@) +f s(t — 1) dr, (2:21)
0 dr

which is a Volterra integral equation of the first kind in the unknown rate of replace-
dr
eS ae

Exercises 2.2

1. Assume an exponential-type survival function s(t) = e~° for the integral


d
equation (2.21) in = the rate at which the equipment must be replaced.

(a) Use the Laplace transform to solve for the necessary rate of replacement
dr. :
—., in order that we keep a constant number of machines f(t) = A at all
times ft.
(b) Verify your answer.

2. Consider the problem (1.16) of the deviation ¢(t) of the steering angle 6,(t)
of the rotating shaft from a constant direction indicator angle 0;(t) = 1,¢ > 0.
Only a correction torque proportional to the deviation ¢(t) and another one
proportional to the rate of change of the deviation are applied. The rotating
shaft starts from rest at a zero angle 6,(0) = 0, 64 (0) = 0.

(a) Write the mathematical model for the problem.

(b) Find the Laplace transform for the equation in part (a).
(c) Solve for the deviation ¢(t) and hence for 6,(t) of the rotating shaft angle.
For simplicity let J = 1,6 = 2,anda = 1.

(d) Use the same method and conditions above to solve for the deviation $(t)
for the complete problem (1.16) when the accumulation (integral) torque
correction term is included. For simplicity let J = c = 1 anda = 6b =3.
106 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

3. Electric potential in a disc. The electric potential u(r, @) at a point (r, @) inside
a disc of radius a, which is free of charge and where the boundary of the disc
(r = a) is kept at a potential u(a,@) = f(8), is given by the Poisson integral

ee ey f(g)de
(== |) ar a ee
(a) State a problem that makes the above an integral equation in f(¢).
Hint: Be cautious about lim u(r, @). For more details, see the derivation
of this Poisson formula in (4.69) at the end of Section 4.1 in Chapter 4.
(b) Show that the potential f(@) on the boundary must be distributed in such
a way that its average is equal to the value of the potential u(0, 4) at the
center of the disc. ‘
1 b
Hint: ean i}f (x)dz is the average of f(z) on the interval (a, b).

. Determining a source of neutrons in an absorbing uniform rod. This exercise


relates to our discussion of determining the energy spectrum f(£) of radiation
such as neutrons, which we derived in Section 1.1 as the following Fredholm
integral equation of the first kind (1.23) in f(£),
Emaz
g(x) = if | e 77) F(E)dE. (1.23)

Here g(x) is the number of emerging neutrons on the other side of the used
slab of uniform thickness x, and a(£) is the absorption cross section of the
material of the slab as it appears to the incoming neutrons of energy E.

(a) Consider now a uniform bar of length b with source of neutrons f(y),
0 < y < b to be determined. It is assumed, for a simple model, that the
neutrons move only in two directions — right and left — along the rod
with constant absorption cross section a. With the input as f(y), assume
that we can measure the output at position x as h(a). (Of course h(z) is
a set of measurements for finite number of location points x.) Show that
f(y) satisfies the following Fredholm integral equation of the first kind

A(z) = i e-7l2-ul F(y)dy. (E.1)


Hint: For the neutrons moving — right and left — you must consider the
two cases x > y and x < y, and for both cases the neutrons must travel
distance |x — y].
(b) Show that an analytic solution to (E.1) can be obtained as

Flu) = 5 {anty) - 20}. (B2)


2.3. MECHANICS PROBLEMS 107

Hint: Write the integral in (E.1) as


£ b

(a) = fee gydy+ [ ee) fydy


a 4
(B38)
then differentiate twice using the generalized Leibnitz rule (1.53) to have
an expression for h’’(y) in terms of h(y) as required in (E.2).
(c) The above analytical solution in (E.2) for the Fredholm equation of the
first kind (E.1) is in terms of the measured data h(y) and its second order
derivative h’’(y). Explain how such an analytic solution in (E.2) of the
Fredholm equation of the first kind, as may be anticipated, gives a bad
numerical solution for (E.1).
Hint: Recall the inaccuracy in the measured data h(y), and how the
numerical differentiation (= = se | iS SO sensitive
y
to even small errors in the differentiated function.

2.3 MECHANICS PROBLEMS

In this section we formulate problems dealing with the shape of the hanging chain
and Abel’s problem.

2.3.1 Hanging Chain

Here we consider the problem of how a variable density p(x) must be distributed in
the form of a chain or a rope, in order that it may assume a given shape f(z).
First we consider an elastic string under an initial constant tension Jo, and a
vertical force F' acting at one point. Then we derive the equation for the case of
distributed forces along the string, for example, the variable gravitational force due
to a variable linear density of the string.
(a) Displacement Due to a Single Vertical Force: Consider the string AB of length
l in Figure 2.2 under initial constant tension Jo. (Recall that in Figure 2.2 we take
y(x) to be positive in the downward direction of gravity, and that the force of the
point mass m is its weight w = mg.) Let F' be a constant vertical force acting on the
string at x = € to displace it by a small vertical
distance y(£) which is very small compared to €. If we equate the vertical forces,
assuming that the tension is constant (Zo) along the string, we have

To sind + To sind = F. (2.22)


tang = y(&)/é and sind ~ tané =
But for small ¢ and 6 we have sing ~
y(E)/(l — €); hence (2.22) becomes
y(§) () _ op
osead
108 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

Fig. 2.2 Displacement due toa single vertical force F' at €. From Jerri [1982, 1986], courtesy
of COMAP, Inc.

We) = i (eee (2.23)


To find the displacement y(z) at any point z we consider the similar triangles
ACD and AHE for x < € where y(x)/y(€) = z/€, which when combined with y(€)
from (2.23) gives

ula) = Zu(6) = aH a eee (2.24)


For € < x < 1 we use the similar triangles CBD and H'BE’, where y(x)/y(€) =
(1 — x)/(l — €) and (2.23) to give

ya) = —2y(Q)= el-2), E<a<l. (2.25)


So from (2.24) and (2.25) the displacement y(x) for 0 < x < I, due to the single
vertical force F at x = €, is

AUS a ee é
y(x) = FG(z,f) =F ies wohiqy (2.26)
as E<2sl.

It is important to note the two branches of the function G(z, €) (2.26) where the
first branch satisfies the boundary condition y(0) = 0, for the first end of the elastic
string at x = 0 to be fixed; while the second branch satisfies the boundary condition
y(1) = 0 for a fixed second end at x = I. This is a very familiar occurrence when
finding the integral representation of boundary value problem as Fredholm integral
equations. We will see a function similar to G(x, y) of (2.26) appearing as the kernel
in the integral equation to satisfy, the already incorporated, boundary conditions.
2.3. MECHANICS PROBLEMS 109

Such function is termed Green’s function, which is discussed in details in Chapter 4,


in preparation for the Fredholm integral equations of Chapter 5.
(b) Displacement Due to Distributed Vertical Force: We now consider the vertical
force not at one point x = € only, but distributed continuously along the string; for
example, the gravitational force due to the variable linear density p(€) of a string.
For such a string the gravitational force acting on the element A€ of the string is
AF(€) = gp(€)A€E. According to (2.26), the resulting displacement due to this
single force on A€ is

Ay(xr) = AF(£)G(z,€) = G(a, €)gp(€) AE (2.27)


where G(z, €) is given by (2.26).
The total displacement due to the gravity force along the whole string is obtained
by superimposing all these displacements (2.27) of the elements of the string, or in
other words, integrating from € = 0 to€ = 1,

yGl=4 / G(x, &)p(O)dé (2.28)


where G(z, €) is given by (2.26). This is a Fredholm integral equation of the first
kind in p(x) that relates how the linear density p(€) must be distributed along the
string so that the string may assume the prescribed shape y(z).

Example 1
We illustrate here how the simple case of constant density p(£) = c determines
the expected (parabolic) shape for the string.
If we use (2.28) for y(x), (2.26) for G(x, €), and let p(€) = c, we obtain

cherie ae
Aap / cil aaa, i eae. (B.1)
We note here how the second and first branches of (2.26) are used for the first and
second integrals of (E.1), respectively. Evaluating the two integrals in (E.1) gives
110 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

2.3.2 Sliding a Bead Along a Wire: Abel’s Problem

As we mentioned in Section 1.1, Abel’s problem is one of the earliest problems


modeled as an integral equation. It deals with finding the path y(z) in the vertical
zy-plane (Figure 2.3) along which a particle, under the influence of gravity and
starting from rest at yo, must move in order that it descends a distance yo in a
prescribed time t= f (yo).
To simplify the problem, we consider the path of the particle to be known when
we know a, the angle that the tangent to the path makes with the z axis. In this case
dy/dz = tana, so dy/ds = — sina,' where v = ds/dt.
For a particle starting from rest at y = yo, under gravity, the velocity v at y is
governed by
v? = 29(yo — y), ‘
r ds
v= 7 = V29(yo — y) (2.29)
where g is the acceleration of gravity. To have the desired expression for dt, we write

Fig. 2.3. Abel’s problem.

dy dyds ,
eae share —V/29(yo — y) sina,

—dy
Fe een eee 2.30
29(yo — y) sina ae

'Note that dy/dx < 0, tana < 0; dy/ds < 0,sina>0,47/2<a<n.


2.3 MECHANICS PROBLEMS 117

Realizing that a depends on y, we let 1/sina = (y) in (2.30); then

at — —_ Pw ay
29(yo — y) a
and integrate from the initial time of descent t(yo) = f (yo) to the final time t(y =
0) =0.
yt, = — ° _ bly)dy
ne vo V29(yo — y)’
° g(y)dy
O= t(yo) = —f(yo) = (2.32)
yo V2g9(Yo — ¥)
Hence (2.32) is the final integral equation in ¢(y) that relates the form of the path
$(y) to the predetermined time of descent f (yo) of the particle,

Ne hgo ie: heoy (2.33)


To avoid having the variable yo looks like a constant, we replace the two variables yo
and y by y and , respectively, to write (2.33) in the form of Abel’s integral equation
(1.20)

Leg ae (1.20)
We note that taking the final time t(y = . = 0, we are making a negative initial
time t(yo) = f(yo) < 0.
Example 2
For illustration we will consider the simple case of finding the path in a vertical
plane along which the particle must move from rest at y = Yo so that it reaches the
ground in (the usual free-falling body) time

anes (E.1)
: 9g
where we expect a vertical path for the fall, i.e., a = 90° in Figure 2.3.
Let us note that this is a very special case with t = f(yo) = —.\/2yo/g, which
can be solved by using the simple laws of motion since yo = 1/2gt?, t = /2yo/g.
If we substitute t = —,/2yo/g for f(yo) in (2.33), noting that the final time is ¢ = 0,
which necessitates a negative initial time, we have

(ee Pie Ngee


This Abel integral equation may be solved for ¢(y) by using the Laplace transform
(see Exercise 4); but for simplicity, we will use the result of Example 1, Section 1.1,
112 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

which gives u(x) = 1/2 as a solution of Abel’s equation,

pl ae i u()dg
o va-€
Hence we may use ¢(y) = 2/2 in (E.2) or d(y) = 1 = 1/(sina). This gives
a = 90°, which says that the path is vertical.

Exercises 2.3

1. Let the deflection of an elastic string of length / at point x; due to a unit force
(load) at x be K (xj, 22).

(a) Give the equation that represents the total deflection D;2(z) at a point
x due to a load L,(x), applied at the middle of the elastic string, and
another load L2(x), applied at 2 = 2.
(b) Give the equation that represents the total deflection D(a) due to a contin-
uous load L(x) = p(x), as a result of the string’s variable linear density
p(z).
2. Use (2.27) and (2.28) to find the approximate shape of the string when two thin
beads with a constant density of 1 and length 1/20 and //12 are placed along
(1/5,1/4) and (21/3, 31/4) of the string, respectively.”
Hint: Use the weight of the bead as the force at the bead’s center of gravity.

3. Determine y(x), the shape of the string in (2.28), when the linear density is
given by
Pa) ex (la):

4. Use the Laplace transform to solve for the path (y) in Abel’s integral equation
(2.33), to verify that the path is vertical.

5. Rederive Abel’s Tautochrone integral equation (1.21) in d(y) = -< which


governs the path along which a particle, starting from (zo, yo), must slide
(Figure 2.4) to reach the origin in constant time T = f (yo) that is independent
of the starting point (xo, yo). Hint: Use conservation of energy:

15 |(4)
a =—mg(yo — y)

d
and note that ~ = Siete Ny
y dy

The very detailed solution of this problem in five pages is found in “The Student’s Solution Manual" to
accompany this book [Jerri, 1999]. See the end of the preface for more information.
2.4 INITIAL VALUE PROBLEMS REDUCED TO VOLTERRA INTEGRAL EQUATIONS 113

er ee (X:; Yo)

Fig. 2.4 Abel’s Tautochrone problem.

6. Consider the Tautochrone problem (1.21) that was presented in Section 1.1.
Use conservation of energy,

Le dae | 1. Sao

where m is the mass and g the gravity acceleration, to drive the integral equation
(1.21)

201
Pama)
we (1.21)
Hi Q WHO
= Y Q
where s = Fy), noting that

(3)=)
Hint: Note that t = 0 tot = T correspond to y = yo to y = 0, respectively,
2
e < 0 that requires a minus sign, and gs =,4/1+ ee
dt a em dy dy} —

2.4 INITIAL VALUE PROBLEMS REDUCED TO VOLTERRA INTEGRAL


EQUATIONS

To illustrate in detail how an initial value problem associated with a linear differential
equation and, usually, homogeneous initial conditions reduces to a Volterra integral
equation, we consider the following example, which we have already presented in
(1.29) and (1.32). This will be followed by the initial value problem associated with
the general second-order differential equation.
114 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

Example 3 :

ot = ry(2) + 9(e)," (1.29)


y(0) = 1 (1.30)
y’(0) = 0 (1.31)
First we note that we have a nonhomogeneous initial condition in (1.30). However,
the whole initial value problem (1.29)-(1.31) can be easily reduced to one with
homogeneous initial conditions, by making a simple change of variables u(x) =
ita) er te
Now we start with the same steps used in Example 5 of Section 1.3. So we let

d’y .
= F(e) (B.1)
and integrate once with respect to x to obtain

=f
dy
—=
=
F Pd +e : (E.2)
E.2

Integrating again gives

ze
Ca eo atjiF(t)dtdé + cu + c2. (E.3)
0 Jo
To reduce the double integral of (E.3) to a single integral, we use the identity (1.51)

x € x x
/ ifP(tydtag = f (@- F(a = | (Goomede (1.51)
in (E.3) to give

ule)=| @- OF Odk +aiz + on (B.A)


To find the arbitrary constants c; and c2, we apply the initial condition (1.30) on
(E.4),
y(0) =1=0+¢e, Coa
and the initial condition (1.31) on (E.2),

y'(0) =0=0+«a, ep
= 0:

Hence (E.4) becomes

ioe i)"(@ - OF (Od. (B.5)


>For a more general initial value problem, see the first edition of this book, p. 61.
2.4 INITIAL VALUE PROBLEMS REDUCED TO VOLTERRA INTEGRAL EQUATIONS 115

From (E.1) and (1.29) we have

az
dy
= F(t) = ule) + 9(@). HE)
If we substitute this value for F(x) in (E.5), we obtain

ee /ee oven th“(e—€g(€)dé (1.32), (2.34)


which is a Volterra integral equation of the second kind in y(z).
Another Volterra integral equation in F(x) = d?y/dz? is obtained when we
substitute y(x) from (E.5) in the original differential equation (1.29).

F(e)=) f+ [ @-oF@as| + ofa),


F(x) =2 ["(a ~ )F(é)dé ++ 9(2). (E.7), (2.35)
The two integral equations (1.32) and (E.7), in y(x) and F(x) = d?y/dz?,
respectively, represent the same initial value problem in y(x). Concerning the method
of solution, there seems to be a choice between solving (1.32) for the direct value
of y(z) or solving (E.7) for F(x) = d?y/dz*, then integrating this twice to obtain
y(z).
The above method and the resulting two forms of the Volterra integral equation in
(1.32) and (E.7) are illustrated in the following example.

Example 4 Reduce the initial value problem in y(z)

CY 4 y = cose (E.1)

y(0) =0 (E.2)
y'(0) =0 (E.3)
to a Volterra integral equation in: (a) u(x) = d?y/dz? and (b) y(z).
First we let u(x) = d?y/dz? then integrate once to have
HH

gy = i.u(t)dt + cy (E.4)
dx 0

with c; = 0 after using the initial condition (E.3). We integrate this result again,
using the identity (1.51) for the double integration to have

OT Gs) ie — t)u(t)dt + ce (E.5)


116 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

where we also have c2 = 0 from using the initial condition (E.2),

Wo ‘A(CEI HOR: (E.6)


HH

0
For the final result (E.6) to be an integral equation, we have two choices:

(a) To make this result as an integral equation in u(x). In this case we use (E.1)
to have y(x) = cosa — d*y/dx? = cosx — u(z) to substitute for the y(x)
term outside the integral for (E.6) to become a Volterra integral equation of the
second kind in u(x) = d?y/dx?

cosz — u(x) = slit — t)u(t)dt,


, ‘

u(x) = cosz — [oe — t)u(t)dt. (E.7)

(b) To have (E.6) as an integral equation in y(x), we substitute for u(t) inside the
integral of (E.6) in terms ofy(t) via (E.1), where u(t) = y’’(t) = —y(t)+cost,

ue) = [(e-d[-u(0) + coss)jat,


y(z) =| (x — t) cos tats [ (t — x)y(t)dt

= | cos tdt — / t cos tdt + fe — x)y(t)dt,

y(z) =1-—cosx+ fe — x)y(t)dt

after evaluating the first two integrals.

Exercises 2.4

1. Reduce the initial value problem

dy
ee — COS.
ar

y(0)=0, y(0)=-1
(of Exercise 2 in Section 1.3) to the Volterra integral equation in y(x),

y(z) =cosx -—a-—1-— [e — t)y(t)dt.


EXERCISES 2.4 117

2. Reduce the initial value problem in y(z)

at +y=0 (E.1)

y(0) =0 (E.2)
y'(0) =1 (E.3)
(a) to Volterra integral equation in u(x) = d?y/dz?.
(b) to Volterra integral equation in y(z).
Hint: (a) let u(x) = d?y/dzx?, integrate it twice, using the identity (1.51)
and the initial conditions (E.3) and (E.2), then substitute for y(x) outside in
terms of d?y/dx? = u(x) from (E.1). (b) In the integral for y(z) in part (a),
substitute for u(x) = d*y/dz? in terms of y(x) from (E.1).
3. Reduce the initial value problem

dy
sue = —si
—=—
dy
ap ee + e"y
i = 2, (E.1)
Eu

y(0) = 1, (E.2)

y'(0) = -1 (E.3)
dy

u(z) = 2 —sing-+ e*(¢ — 1)" [bine —e*(x—t)lu(t)dt (E.4)

Hint: See the derivation of (E.7) in Example 3 and (E.7) in Example 4.


(b) to the Volterra integral equation in y(z),
3
ye) = = —“£+ [sine + (t — a)(e' + cost)]y(t)dt. (E.5)

4. Reduce the initial value problem in y(z),

d*yie
pan
| tA
dy
peed = Oe ae> 0 Vel
(E£.1)

y(0) =1 (E.2)
y'(0) =0 (E.3)
fake
to Volterra integral equation in u(x) = —
118 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

a? ee me
Hint Leva) s= a integrate it once using the initial condition (E.3) to

obtain , then integrate again using the identity (1.51) and the initial condition
fe
d :
(E.2) to obtain y(x). Last substitute these = and y(a), in (E.1), with their
d*y
resulting dependence (inside their corresponding integrals) on u(x) = ae

2.5 BOUNDARY VALUE PROBLEMS REDUCED TO FREDHOLM


INTEGRAL EQUATIONS

To illustrate how a boundary value problem associated with a differential equation


may be represented by a Fredholm integral equation, we consider the following
example of(1.33)-(1.35).*

Example 5
SEE
dx2 — i] ’ OMEE<AU (1333)
©

y(a) =0 (1.34)
y(b) =0 (1.35)
We proceed to integrate (1.33), in the same manner as that followed in Example
4, which gives
teal
BE
= , y (t)dt + Cj. (E.1)
5

Integrating again and using the integral identity (1.51) gives


z é ae
Ue) | i y(t)dtdé +c,2+c2 = | (x —t)y(t)dt ++cj2 +c. (E.2)

For simplicity, we leave the variable of the final integration in (E.2) as t instead of €.
To evaluate the arbitrary constants c; and cp, we employ the boundary condition
(1.34),
y(a) =0=0+4+cja+ ce, co = —c,a (E£.3)
and the boundary condition (1.35),
b
YD) es af (b— t)y(t)dt + c1b — qua. (E.4)

Hence, from (E.3) and (E.4), we have


‘ b

C= 7c ai}(b — t)y(t)dt (E.5)

4For a more general boundary value problem, see the first edition of this book, p. 66.
2.5 BOUNDARY VALUE PROBLEMS REDUCED TO FREDHOLM EQUATIONS 119

and

Co — CG = hae 1 — t)y(t)dt. (E.6)

So, if we use these values of c; and c2 in (E.2), the final integral representation of
the boundary value problem (1.33)—(1.35) is

y(a)=f (e-dujae +rF—" [e-oyna. — B)


x ae b

This integral equation in y(x) can now be rearranged to result in the form of the
Fredholm integral equation (1.36) with its kernel K(x, t) defined in (1.37). This is
done by writing the second integral as two parts on the intervals [a, x] and [2, 6],
where the first part will then be combined with the first integral of (E.7),

“¥(2)= Af (ema ah (t — b)y(t)dt

trek t — b)y aif’Soe Se


®) w(t)dt (8)
of (hae)
=e=)ab eyes
If we now define the kernel K (z, t) as

es SO), ee Pe
K(at)=4 ie Dinaaictves (E.9)
b-a * tee
then the last two integrals in (E.8) can be combined as

Ky b
»/ K (a, t)y(t)dt + ae K (a, t)y(t)dt = rf K (a, t)y(t)dt. (E£.10)

Hence (E.8), and in turn (E.7), reduce to the homogeneous Fredholm integral equation

ae |"Ke. Duet (1.36), (2.36)


where K(a, t) is given by (E.9), which is (1.37)

KGS
abn ue Bee (1.37), (2.37)
se eres ins gu
We want to stress again the equivalence of the homogeneous boundary value
problem (1.33)-(1.35) with the homogeneous Fredholm integral equation (1.36)
and its kernel in (1.37), since we may sometimes resort to solving the boundary
120 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

value problem to obtain the solutions for its equivalent homogeneous Fredholm
equation. As we mentioned at the end of Example 6 and in Exercise 5 of Section
1.3, this will require us to differentiate the integral equation in order to find its
corresponding differential equation (with its boundary conditions), which, hopefully,
is easier to solve. This development is illustrated in detail in the following Example
6. The need for such development will become evident when we study the methods
of solving nonhomogeneous Fredholm integral equations in Chapter 5, where the
solutions of the homogeneous equation are essential for the development. A list of
homogeneous boundary value problems with their equivalent homogeneous Fredholm
integral equations and, of course, their respective kernels (Green’s functions) is given
in Appendix B.

Example 6 Reduce the following homogeneous integral equation to a boundary


value problem:

ua) = K (a, t)y(t)dt, (E.1)

Keo = iene ea (E.2)

As we remarked on the function G(z, €) of (2.26) for the hanging chain, we can
again see very clearly the appearance of this kernel K(x, t) in (1.37) with its first
and second branches that satisfy the two boundary conditions at the two boundary
points z = a and x = b, respectively. This K(x, t) is the Green’s function of the
boundary value problem (1.33)—(1.35) that, effectively, reduced it to the Fredholm
integral equation in (1.37). The Green’s function is the subject of Chapter 4.
Here we have two ways of doing the problem. The first is to recognize the kernel
(E.2) as a special case of (E.9) or (2.37) in Example 5, with a = 0 and b = 1,
and hence the integral equation is a special case of (2.37) which is equivalent to the
boundary value problem (1.33)—(1.35), with a = 0, b = 1,

say
ee
dx2
Ss
hy(x),
y
Oy,
) O<2<1x (E.3 a )

y(0)=0, =y(1) =0. (E.4)


There is an infinity of solutions for this boundary value problem, as we shall show
soon, which are yn (x) = sinnaa, n = 0,1, 2,---, and in turn they are the solutions
of the homogeneous Fredholm equation (E.1) and (E.2).
The second method is that in the absence of an integral equation like (2.37) with
which to compare (E.1), we may have to keep differentiating the integral equation,
hoping that it will finally reduce to a familiar differential equation. In this case, we
write (E.1) as

ute)=A [ea —nyinae sr [20 —ou(oae (E.5)


2.5 BOUNDARY VALUE PROBLEMS REDUCED TO FREDHOLM EQUATIONS 121

and note how we used the second branch of K (z, t) in the first integral of (E.5) on
the interval (0, 2) and the first branch in the second integral of (E.5) on the interval
(x, 1). Next we must realize that both integrals in (E.5) have variable limits and that
their integrands are functions of 7; hence, in general, we should use the Leibnitz rule
(1.53) in differentiating them. However, in the special case at hand, we can factor
the x dependence out of the integrals

y(x) = A(1 - 2) ty(t)dt + re | (1 — t)y(t)dt. (E.6)


x

Now each term is a product of two functions of x and we can use the fundamental
theorem of calculus on the integrals. If we differentiate (E.6) once, we obtain

ts) i= mA ft
ty(t)dt
+ \(1 — x)ry(z)eh (1 — t)y(t)dt— Ax(1 — x)y(z)

Sie (1 — t)y(t)dt—a [rah


ty (t
(E.7)
and if we differentiate again (using the fundamental theorem of calculus), we obtain

y"(x) = —A(1 — 2)y(x) — Axy(z) (E.8)


= —Ay(x) + Aty(z) — Ary(x) = —Ay(z) ;
or
y" +Ay(z) =0 (E.9)
which is the desired differential equation.
To obtain the boundary conditions, we let x = 0 and x = 1 in (E.5) to obtain,
respectively,
y(0) =0 (£.10)
y(1) = 0, (£.11)
or we can see this easily by substituting 2 = 0 and x = 1 in (E.1) to obtain (E.10)
and (E.11) with the help of the definition of the kernel (Green’s function) K (z, t) in
(E.2), which vanishes at x = 0 due to its first branch and at z = 1 from its second
branch.
It is now instructive to solve the resulting simple boundary value problem (E.9)—
(E.11), where the general solution of the differential equation (E.9) is

y(x) = c, cos VAz + cp sin VAz. (E.12)


For this solution to satisfy the boundary conditions (E.10) and (E.11) we must have

u(O) =ou-.0 =0, c1—0


y(1) = c)sinVA = 0, Vi =n, n = 0, +1, $2, $3,:::

Hence the (nontrivial) solutions to the boundary value problems (E.9)—-(E.11) are

Yn(t) =cosinnaz, An=n’n?, n=1,2,--- (E.13)


122 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

which are also the nontrivial solutions (eigenfunctions) of the Fredholm integral
equation (E.1) and (E.2). The subject of eigenfunctions and eigenvalues is very
important for the development of the solution to Fredholm integral equations of the
second kind, especially with regard to the existence of such solutions as we shall
discuss and illustrate in Section 5.1.2. The preliminaries of the eigenfunctions as
solutions to Sturm-Liouville problem,> and their use as the orthogonal functions for
the Fourier series expansion is done in Section 4.1.2. Such expansion facilitates the
representation of the Green’s function, which we shall use in Section 4.2 to reduce a
boundary value problem to a Fredholm integral equation, which is (the more direct)
equivalent way to what we are doing in this section. For other boundary problems of
interest with their eigenfunctions and eigenvalues, see Appendix B.
For now, the set of the nontrivial solutions of the homogeneous problem (E.9)—
(E.11) {sin na} are called the characteristic functions or eigenfunctions and vn» =
n?n are the characteristic values or eigenvalues of the kernel K(a,t) of the
(equivalent) homogeneous Fredholm integral equation (E. 1) and (E.2).
As mentioned before, we shall see in Chapter 5 that the solution of a nonhomo-
geneous Fredholm integral equation will, as expected, depend on the solutions of
its associated homogeneous equation. The solutions of the homogeneous equation
are the classical solutions of the homogeneous case of the boundary value problem
(1.33)—(1.35). Example 6 is a special case, and for quick reference we tabulate in Ap-
pendix B a number of familiar homogeneous boundary value problems, their Green’s
functions, and their corresponding homogeneous Fredholm integral equations with
the Green’s function as their kernel. The verification of these results is the subject of
a number of examples and exercises in this chapter and in Chapters 4 and 5.
In this section, our illustrations covered only homogeneous differential equations.
The nonhomogeneous differential equations case should follow easily, as was done in
part (b) of Example 4 for the initial value problem and its resulting nonhomogeneous
Volterra integral equation. (Also, see Example 8 of Section 4.2 and many of the
exercises in Sections 4.1 and 4.2.)

Exercises 2.5

1. Reduce the boundary value problem

dy
dg? ~ AY(2), Ocir<sob

y(0) = 0,
y(b) =0
to a Fredholm integral equation.

5 : ‘ : : ae
A very important general boundary value problem, whose solutions, with some regularity conditions, are
orthogonal functions.
EXERCISES 2.5 123

Hint: See Example 5.

2. Consider the kernel K (a, t) in (E.9) of Example 5, as a function of z, for a


fixed value of t.

(a) Show that K (a, t) is continuous at x = t.


(b) Show that 0K /0z is discontinuous at x = t, indeed it has a jump discon-
tinuity of 1,ie., 0K /Oz|z>4 — OK /Oz|,,-,
u<t
= 1 as x increases through
i

(c) Show that on the two subintervals a < 2 < t andt < zx < b, the kernel
K (a, t) satisfies the differential equation 07K /0x? = 0, also K(z, t)
satisfies the boundary conditions by vanishing at the end points x = a
and x = b.

3. (a) Reduce the homogeneous Fredholm integral equation

il

u(2)=A f K(e,t)u(tde, ay
0

st
a, 0<2r<
sin
K(z,t) = sinh
t sinh(x — 1)
—a ee ee
ibn
sinh1
to a boundary value problem.
Hint: See Example 6.
(b) Solve the resulting boundary value problem to find the characteristic func-
tions (eigenfunctions) and the characteristic values (eigenvalues) of the
kernel in (E.2) of part (a).
(c) Verify that the solutions in part (b) satisfy the integral equation in part (a).

4. (a) Verify that the nonlinear integral equation in u(x),

ula) = [ K(x, €)o(€,u(é))a€


0
(E.1)
with the kernel

s@(1 - 2)°(Bn -€- 208), OSESz


Kage) = (E.2)

=o (1— £)?(3
—# -22f), r<€<1
is equivalent to the boundary value problem associated with the displacement
u(x) of a vibrating beam.
124 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

me =O 2, Ux))), Ore (E.3)

OY 0; u'(0) =0 (E.A4), (E.5)


Oya), ui(1)=0 (E.6), (E.7)
Hint: See Exercise 3(b) in Section 1.3.
(b) Attempt to reduce the nonlinear boundary value problem (E.3)-(E.7) to the
nonlinear Fredholm integral equation (E.1) and (E.2).
Hint: Follow the steps of Example 5 where you do four integrations utilizing
the (general) identity (1.52), and using the four boundary conditions in (E.4)—
(E.7) for determining the four arbitrary constants of the four integrations.
.’

2.6 MIXED BOUNDARY CONDITIONS: DUAL INTEGRAL EQUATIONS

In this section we illustrate the formulation of a mixed boundary condition, where


the function is given on one part of the boundary and its integral equation derivative
is given on the other part, which will result in a dual integral equation representation.
We consider the problem of an electrified plate in the half plane, where we will use
the Fourier transform of Section 1.4.2 to reduce its mixed boundary condition to a
condition of solving dual integral equations.

2.6.1 Electrified Infinite Plane

We consider here the potential distribution u(x, y) in the half plane (2 > 0), due
to the presence of a plate of width 2a (see Fig. 2.5) placed along the y axis with
center at the origin and extending in the z direction. The plate is kept at a potential
u(0,y) = g(y), -a < y < aand where the rest of the yz plane is assumed to be
insulated (i.e., Ou(0, y)/Ox = 0, |y| > a).
The potential distribution in free space here is independent of z, thus as u(z, y) ii
is governed by the following Laplace equation in two dimensions

Sali LOPATI
De? wage hae a> Os —0o <y < © (2.38)

and the mixed boundary condition is represented as

u(0,y)=9(y), —a<y<a (2.39)


du(0,y)
pp
_ re (2.40)
Since the domain of the problem in y is —-oo < y < oo, we will use the Fourier
exponential transform (1.87), which we discussed in Section 1.4.2. We let

U(e,d) = Ffulay)}= f ule, yey


[o.@)

Ae 9)
(2.41)
2.6 MIXED BOUNDARY CONDITIONS: DUAL INTEGRAL EQUATIONS 125

Fig. 2.5 The electrified plate.

and Fourier-transform the Laplace equation (2.38), using the Fourier transform pair
in (1.110) on the second partial derivative with respect to y in (2.38), to obtain

d’?U
ey, RUE i=0; (2.42)

which is a second-order differential equation whose general solution can easily be


obtained as
U(az, A) = A(A)e7I? + B(A)e!. (2.43)
Here the inverse Fourier transform (1.88) of e!4!* (as a function of y) does not exist
to guarantee a finite solution u(z, y). Hence we must let B(A) = 0 in (2.43),

U(z,) = A(A)e"*, (2.44)

To find the arbitrary function A(A) in (2.44), we must have a condition on U(z, A)
at z = 0, which should come naturally from the Fourier transform of the condition
on u(x, y) at z = O through (2.41). But our mixed condition (2.39)—(2.40) at x = 0
is not suitable for the required substitution in (2.41), since it is given as a function
for |y| < a and as a derivative of the function for |y| > a. Instead, we leave A(A) in
(2.44) for the moment and we proceed to find u(z, y), the solution of our problem,
as the inverse Fourier transform (1.88) of (2.44),

U2) x fu (x, A)e4%dd


(2.45)
xf A
A(AAe lt e4d).
126 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

Now, we can apply the mixed boundary conditions (2.39) and (2.40) on u(x, y) above
to obtain
u(0,u)
=9) =51 [Ae
ee a
"ad,
Sin be

2rg(y) = ie A(A)edA, ly] <<a (2.46)


— oo

a
55(0,9)=0=
0U
— =0 5;—1 |es —|A|A(A)e’*4
PAM id dy,

oe i IMA)eMZaA, — [yl >a. (2.47)


(2.46) and (2.47) are dual integral equations in A(X), where they must be solved
for A(A) of (2.45) to obtain the solution u(z,y) of the electrified plate problem
(2.38)-(2.40).
We remark here that the boundary value problem (2.38)—(2.40) also describes
the steady-state temperature distribution u(z,y) in the zy plane, due to a given
temperature on the segment |y| < a of the y axis and where the rest of the y axis is
completely insulated. Another physical problem that is represented by the boundary
value problem above is that of the steady irrotational flow of a perfect fluid through
the opening |y| < a of an infinite wall along the y axis.

2.6.2 Electrified Disc

As we have seen in the problem above, the mixed boundary condition (2.39) and
(2.40) for the potential distribution in two dimensions u(x, y) was reduced to dual
integral equations (2.46) and (2.47). Here we consider the same type of mixed
boundary condition for the potential distribution in three-dimensions. This is due
to a given potential on a disk of unit radius in the zy plane, with center at the
origin and with the rest of the zy plane being completely insulated (Fig. 2.6). This
problem also fits the steady-state temperature distribution in three-dimensional space
due to a given temperature on the unit disk and where the rest of the xy plane is
completely insulated. To describe such boundary conditions, it is important that we
use cylindrical coordinates (r, #, z) for the potential distribution u(r, 6, z), and hence
we write the Laplace equation in cylindrical coordinates,

iu 10n,10%"
Or?
Ou_y
or Or r2 002 @z2
=
(2.48)
The special case we consider here is when the potential on the disk is cylindrically
symmetric and hence the potential should be independent of the angle u(r, 6, z) =
u(r, z); so the Laplace equation (2.48) in u(r, z) becomes

O*u 10u O2u


Dr? an Ay ru< op (ieee ea. (2.49)
EXERCISES 2.6 127

In the very special case of constant potential up on the unit disk with the rest of the
ry piane being insulated (see. Fig. 2.6), this mixed boundary condition becomes

u(r, 0) = uo, OS ral (2.50)

Ou
a7" O20; I< t < Co. (2.51)

Fig. 2.6 The electrified disc.

We mention again that the boundary value problem (2.49)-(2.51) represents two
other physical problems: that of steady-state temperature distribution due to a given
constant temperature on the unit disk with the rest of the zy plane completely
insulated, and the steady irrotational flow of a perfect fluid through a circular aperture
in a rigid wall. In comparison with (2.38), where we used the Fourier transform to
algebraize its partial derivative with respect to y, and reduce it to the ordinary
differential equation (2.42), equation (2.49) needs the Hankel transform to algebraize
the differentiation with respect to r. The subject of the Hankel transform was
presented very briefly in Section 1.4.3, and its use will be illustrated in Appendix
A, where the problem of the electrified disk is discussed and the corresponding dual
integral equations are derived as (E.8) and (E.9) in Example | (of Appendix A),
where a final solution u(r, z) is found in (E.11) of the example.

Exercises 2.6

1. Use the following two integrals involving the Bessel function Jo (Ar):
We»
[FP
RANT aoraa =F,
2 O<r<l
128 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

i) sin AJo(Ar)dA = 0, Por OO


0
to solve for A(A) in the following dual-integral equations:

Up = / AJo(Ar) A(A)dd, O0<r<l


0

Us / —)? Jo(Ar)A(A)dA, 1 <7 < oo.


0

Hint: Compare the respective integrals.

2. Fluid flow through an aperture in a rigid plane wall.


(a) Develop the problem of the steady flow u(x,y) of a perfect fluid in an
aperture of width 2a with center at 0 in a rigid plane wall. See the first problem
(2.38)—(2.40) of this section.

(b) Develop the same problem in u(r, z) for a circular operature of radius a.

3. Write the dual integral equations

i AB(A) sin ArdX = 0, toa (E£.1)


0

iinBY) sin Aga =f (2), 0a <1 (E£.2)


0
in B(A) as one (singular) integral equation of the first kind.

2.7 INTEGRAL EQUATIONS IN HIGHER DIMENSIONS

In Section 1.4.2 we indicated that the Fourier transform can be extended to three-
dimensions, where we gave such Fourier transforms pair in (1.91) and (1.92) (with the
notation used in physics texts). We mentioned there that we shall need such extension
to model the Schrédinger (partial differential) equation in the three-dimensional
physical space as a Fredholm integral equation in the three-dimensional momentum
space. Of course, we are also after the Fourier transform’s most important operational
property, namely, that it algebraizes (linear) derivatives with constant coefficients,
which generalizes to the three-dimensional case as we shall see in the derivation of
(2.60). Next we present the detailed analysis of using the three-dimensional Fourier
transform to give the integral representation of Schrédinger equation as a Fredholm
integral equation in the three-dimensional momentum space.
2.7 INTEGRAL EQUATIONS IN HIGHER DIMENSIONS 129

2.7.1 Schrodinger Equation as an Integral Equation in the


Three-Dimensional Momentum Space

In quantum mechanics the wave function (7), 7 = iz + jy + kz = (22s


governed by the Schrddinger equation,
h?

= @) + U(r)p(r) = Ev(r), (2.52)


as a partial differential equation with a spatial differential operator, namely, the
ee 2 Ove OU ou
Laplacian V~, where V“~y(7') = ma pi Ale fatale mt and f arethe
Ox Oy Oz
mass, the Planck constant, the potential energy, and the total energy, respectively.
2
With a? = — oe and u(r") =— “7 U(r), Schrédinger equation is written as

— V7(7) + a? (F) = o(FY(F) (2.53)


which is for one body system with interaction potential v(7’). If we consider a
many body system with interaction potential u(7,7) over the whole physical space,
Schrédinger equation generalizes to®

-vu@+aun= fff warweee,


BF = dédydé (2.54)
which can be seen as an integro-(partial) differential equation in three-dimensions.
As we Shall see soon, and as mentioned in Section 1.4.2, the Fourier transform in
three-dimensions is used to reduce the differentiation operation to an algebraic one,
and the above problem (2.54) becomes an integral equation in the Fourier (frequency,
or momentum) space. This is usually done because in the momentum space we can
easily describe the quantum states, and besides, we have the possibility of a better
computational and analytical hold on a variety of such problems when represented as
integral equations. The Fourier transform in one dimension was discussed in Section
1.4.2. For the present three-dimensional problem, we need the three-dimensional
Fourier transform, which represents a simple extension that we presented in (1.91)
and (1.92). Of course, in this section our interest is to merely present clear statements
of many examples as integral equations, leaving pursuing the details for the interested
reader as we continue covering such details in the rest of the book.
With \ = -= (Aj, 2,A3) as the wave number (multiplied by 27), or the
>
momentum in three-dimensions, let Y(A) be the (triple) Fourier transform of the
wave function ~(7) in the physical space. With such Fourier transformation, the
Schrodinger equation of~)(7’) in the three-dimensional r-physical space will reduce to

For more details, see Arfken [1970, p. 722].


130 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

the following (homogeneous) Fredholm integral equation of W(X) in the momentum


(or wave number) A-space,

(02 + a2) (2) =| fiesih V(X, A) U(A)aPX, (2.55)

N? = |Al? =A? +2 +22


where the ee is done over the whole three-dimensional momentum space,
—2m
Bi.=Daddrdig, a? = Paes and ? = |A|?= A? + A2 + AZ. Vi) is the
three-dimensional Fourier transform of the interaction energy v(7, f).
Now that we have presented a detailed treatment of the Fourier transform in one
dimension in Section 1.4.2, which can be extended easily to higher dimensions, we
are in a position to establish the above Fredholm integral equation representation
(1.52) of the Schrédinger equation, which is the subject of the following example.

Schrédinger Equation in Momentum Space


In this example we will return to the Schrédinger (partial differential) equation
(1.49) in the wave function ~(7) of the physical space

— 5 V74(F) + UU) = BVO), (2.52)


and use the three-dimensional Fourier transform’ to reduce this partial differential
equation to its integral representation as the (homogeneous) Fredholm integral equa-
tion (2.55) in the wave function &(A) of the momentum space

(02 + a2) (2) =f iN ileV(X, A)U(A) AaB. (2.55)


Since the Schrédinger equation is found mostly in physics books, we will adopt the
usual notation used there for the Fourier transform and its inverse,

F{f} =F ie e*” F(a)dax (2.56)

—1
(F}=f(@)= =i. ert
Fads (2.57)
where we notice more symmetry in distributing the multiplicative constant —— for
V20
the Fourier transform and its inverse. Also to use this with our above notation for
the three-dimensional Fourier transform, we use \ = 1A, + jA2 + kA3 for the wave

7More on the higher dimensional Fourier transform can be found in Jerri [1992] and Sneddon [1972].
2.7 INTEGRAL EQUATIONS IN HIGHER DIMENSIONS 131

number vector instead of the k so as not to confuse the latter with the unit vector & in
i ix, + jx + kv,

rota gy ff fe f(r) da dr2dx3 (2.58)

Fy ihe Oe

= ae (2.59)
If we now let W(X) = F (3) {(r)} as the wave function in the momentum )-space,
and apply the three-dimensional Fourier transform to the Schrédinger equation (2.54),
we have

3
oH oe oe THO ge
itll
)2

dx, dx2dx3

af3
)2
oH) 06 ) u(re™ "dx', dxydx',dzx,dr2dz3.

For the left-hand side we employ a simple extension of the Fourier transform impor-
tant property in (1.110) to three-dimensions to algebraize the three partial differenti-
ation terms to —(A? + A? + A23)¥Q) = —)?W(X), and where the fourth term a?2))(7)
will simply be transformed to a? W(X),
Z 1 CS ie VO a aces
(A? + a?) ¥(X) = a ae u(F,rp(Aje*".--
-- dr drydxr,dx,dx2dz3. (2.60)

To have an integral equation in W(A) we must write (1) inside the above integral as
an inverse Fourier transform of w(A),

weenie LLL LLL


eNFAF Y(R)d\y digdigde!.da!,dx',dx,dr2dzs. (2.61)
~

Now we can see the six-dimensional integration over 7’ and7? as the three-dimensional
Fourier transform V(X , A) of the interaction energy v(7, F) (as a function of the two
spatial vectors 7’ and 7), and where ize \) stands for the interaction energy in the
momentum space,
132 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS

+ & 1 me se hc:
VOX) = cae 6 ie UE niesBee "dx,nant a
dxr,dx,dx,dx2dz3.
(2.62)
With this definition of V(X, ), the equation (2.61) becomes the desired (homo-
geneous) Fredholm integral equation in the wave function W(X) of the momentum
space

(a? + a2)¥(3) = | i, / V (3, A)W(A)dhy hohe, (2.63)


>7

which is what we were set to derive as (2.55).


Volterra Integral Equations

In Section 1.2 we presented the Volterra integral equation (1.39) in u(x),

hz)u(a) = f(a) + [ K (2, €)u(e)ag (1.39)


which we termed the second kind when h(x) = 1,

ua) = fle) + f K(x, §)u(e)ds (1.42)


and the first kind when h(x) = 0,

-s(a) = [" K(x, €)u(6)dé. (1.41)


It is interesting to note that Volterra started working on solving such integral equations
in 1884, which is before the name integral equations was given to them by du Bois-
Reymond in 1888. Volterra’s serious study began in 1896.’
In Chapter 2 we formulated a number of problems that resulted in Volterra integral
equations. This included the human population problem (2.8), the mortality of
equipment problem (2.21), and Abel’s problem (2.33).
In Example 3 of Section 2.4 we showed how an initial value problem,

2
oY = dy(2) + 9(c) (1.29)
! Anderson and DeHoog [1980]. See Bocher [1914] for Volterra’s collected work in 1884-1896.

133
134 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

y(0) =1 (1.30)
y'(0) =0 (1.31)
is reduced to Volterra integral equation of the second kind

yia)=A [0 w-Ou@de+i+ | @-eaQae (1.82)


x x

0
with the nonhomogeneous term

fone [eS eee


In this chapter we present the basic methods and techniques for solving Volterra
integral equations, with primary emphasis on illustrating ‘such methods rather than
proving them. We may recall that a number of the aforementioned Volterra integral
equations had difference kernels and hence were suitable for the Laplace transform
method of solution, which we illustrated in a number of examples and exercises in
Chapters | and 2. The Laplace transform method and other methods are discussed
and illustrated in the following section. The basic theorems are stated precisely, and
few of them are proved.

3.1 VOLTERRA EQUATIONS OF THE SECOND KIND

3.1.1 Resolvent Kernel Method: Neumann Series

The solution of the Volterra integral equation of the second kind,

ule) = f(a) +2 i!” K(x, €)u(€)dé (3.1)


may often appear as an integral,
x

ula) = f(a) +2 / P(a,&;d)f(€)dé (3.2)


in terms of the given function f(z), where ['(z, €; ) is called the resolvent kernel of
the integral equation (3.1).
When K(x, €) and f(x) in (3.1) are both continuous, it is easy to construct the
resolvent kernel I(x, €; \) for (3.1) in terms of the following Neumann series:

T(x, €; 2) =k n+1(Z, €) (3.3)

2This (successive substitution) method of C. Neumann came about 30 years later after the work of Abel
and Liouville in 1823-1839.
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 135

where K,41(x, €), the iterated kernel, is evaluated as follows:

Kuala.) = | Ke Kaly, Ody (3.4)


xr

and Ki(z,y) = K(2,y).


This is easily shown by assuming the following infinite series form for the solution
u(x):
u(x) = uo(z) + Aus (x) + A? u(x) + --- (3.5)
and substituting it in (3.1) to obtain

uo(x) + ru (x) + 7 u2(x) teogd

= f(x) +a |ewarutavte oe /ei te a)


with the assumption of good enough convergence of the infinite series in (3.5) that
allows the exchange of its summation with the integration operation of (3.1) that led
to (3.6).
Now we equate the coefficients of each of the same power on both with the
assumption of good enough convergence of the infinite series in (3.5) that allows the
exchange of its summation with the integration operation of (3.1) that led to (3.6),
sides of (3.6); for example, the coefficients of \°, A1, A? in (3.6) are equated to give
(3.7), (3.8), and (3.9), respectively

tio(#) = f(c) (3.7)


tu(2) = i;K(x, )uo(€)aé (3.8)
uaa) = f” K(a,€)us (€)aé (3.9)

Uy (0) = ieK (a, €)tn-1(€)d€. (3.10)

So if we substitute uo(x) = f(x) from (3.7) in (3.8),

(a) =f K(w€)4(@)ae (3.11)


then use this resulting value of u1(z) in (3.9),

u2(xr) = [ Ke, &) i K (€, t) f(t)dtdé (3.12)

and interchange the integrals in (3.12), keeping in mind the change in the limits of
integration as illustrated in Figure 1.8, we obtain
136 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

wl) =f10 |[e@oxccoae| at (3.13a)


THe) = i)f(t)Ko(2,
t)dt
fei” Ko(a,)f(Odé (3.130)
after defining the iterated kernel K2(x, €) according to (3.4).

Fig. 1.8 Domains for performing the integration in (1.51) with respect to t first (solid lines) or
with respect to € first (dashed lines).

Just as the K (x, t) in (3.11) is taken as Ky(z, t) to give u; (zx), the inside integral
in (3.13a) defines K2(z, t), the iterated kernel,

Ka(e,t) = f" K(2,€)K (E,t)dé


; (3.14)
= i” K (a, Ki (6, tae
to give u2(x) in (3.13b), since Ky (€,t) = K(€,t). In general, following the same
steps in (3.11)-(3.14), we can derive the general term for the iterated kernel,

Keane iL" K(0,€)Kn(é,t)dé, (3.15)


that gives the general term un+1 (x) of (3.5),

unsa(e) = ffbierneorariGrs (3.16)


3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 137

The final solution is then (formally) obtained from u(z) in (3.5) with uo(x) = f(z)
as in (3.7), wi (x) in (3.11), w2(z) in (3.13b) and so on for up (x) as in (3.16),

u(a) = f(a) +a f” Ky (2,€)f (Ode + /" Ko(a,€)f(OdE


pegtf Kyle, ge
f(Ode +
= fle) +f [Ki (e,§) + AK(0,8)+ (3.17)
on 6G EAS) aves )F(E)dg

=f) +f p»ele 8)fEdg


=F / P(x, €d)f(Oa€
a

which is what was sought in (3.2) and (3.3) to constitute the solution to the Volterra
integral equation (3.1) via the present method of the iterated kernels (3.15) for
constructing the resolvent kernel ['(a, €; A) in (3.3).
Of course, as we mentioned above, these are only the formal steps of the method,
which lack the mathematical justification for the convergence of the resolvent kernel
infinite series (3.3). This includes proving the general case of the iterated kernel
Kn41(2, t) in (3.15). The first part concerning the form of K,41(, t) in (3.15) can
be accomplished by mathematical induction, which we leave for an exercise. Before
offering the major part of a proof of the main result, for u(x) of (3.1) and (3.2) to be
the unique solution of (3.1), we shall give a precise statement for it as the following
Theorem 1, then illustrate the above method with Example |. This is followed by a
very similar iterated kernels method that we will pursue to prove Theorem | in detail.
Last we show that u(z) in (3.2) and (3.3) or (3.17) does indeed satisfy the Volterra
equation (3.1) to qualify as its solution.

Theorem 1 “The Volterra integral equation of the second kind (3.1) in u(z),

u(x) = f(a) +f K(e,eul6ag (3.1)


with f(x) integrable on [a,b] and the kernel K(x,€) integrable in the triangle,
a<2z<b,a< € < a, has a unique bounded solution given by u(x) in (3.2)
with the resolvent kernel I'(z, €; X) represented by the infinite series (3.3), which is
convergent for all values of \."

Example 1 Find the resolvent kernel to solve the Volterra integral equation of the
second kind

eet [* ef tu(t)adt. (E.1)


138 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

Here we have
eg (eet i= (at tee ae (E.2)
So if we use K(x,€)= e~§ and K,(£,t) = K(E,t) = e&~$ in (3.4), we obtain

mene /ROR Dae / e*-fef-tae = ieet tade


(E.3)
= el d&= (c= te.
t
Similarly, from (3.4) with n = 2, we have

Ka(a,t) = [t K(@,8)Kalé, tag


=ft ot ie-vettde= fe -e* tae
2
as eae (€ a t)dé = ez—t (5 = ts) (E.4)

t t

2) 2 wiry 2
Se F 5 t -1(e-1)| = (x ; ) ett

These calculations can be continued to find that

Ct)
Kn+i(2,t) = eas 5 (E.5)

Hence from (3.3) and (E.5) the resolvent kernel for (E.1) is

I'(a,t;A) = Ki(z,t) + \Ko(x,t) + A? K3(a,t)+ --- + A°Kyii(a,t) +---


=e" *4+X(x-the™* + fe Sts sicher shred,
a
eee

=e EEiG=ye ee re UE
= ef teMe-t) — p(1+d)(2-#)
(E.6)
after realizing that the series in brackets is the Maclaurin series of e4(*-*),
So from (3.2) and the resolvent kernel (E.6), the solution to (E.1) is

u(x) = f(z) +f ctnen9 (pat (E.7)

It is not often that the series representation of I(x, t;) will converge to an
expression in closed form [such as e('+*)(*—) of (E.6)]. We presented such a
special case to illustrate the basic method. In practice, we may have to evaluate
numerically a finite number of terms of the Neumann series (3.3), which gives only
an approximation of the resolvent kernel ['(z, t; A).
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 139

An Iterative Approach
Another very similar method that depends on the above iterated kernels (3.4) and
generates the same resolvent kernel (3.3) is discussed next. The advantage seems to
be its transparency for the need of proving its convergence and that it is an iterative
process from the first step. If we look at the left side u(z) of (3.1) as the output of
the integral equation while u(€) inside the integral as the input, the method would
take the whole right-hand side of (3.1) with its two terrns, which is the output, and
use it as an input inside the integral, i.e., it starts as an iterative process to give

nG) =se+a f xeoo [re


raf K(E,t) Ho dé
= f(a) +) iEK,(0,€)f(6)dé + lai K(2,€): (3.18)
K(E,thu(tatde
Ge i Ky(0,€)f(@dé + / Ko(c,t)u(t)dt
a

after using the definition of the iterated kernel K(x, t) as it was generated in (3.4).
The difference between this method and the above former one, associated with the
infinite series (3.5) as a starting point, is that in (3.18) we still see clearly the unknown
function u(t) involved in the integral of the third term in (3.18). If we input this u(z),
of the right-hand side of (3.18), inside the integral of (3.1) again, we easily obtain

ie) SIFY a /" Kyo,t)f(tdt + /" Kola,t)f(t)dt


(3.19)
+3 ihK3(a, t)u(t)dt,
a

remembering again the definition of the iterated kernels K2(z,t) and K3(z,t) of
(3.4). If this process is repeated n times we have

u(x) = f(z) raft Ky (a,t)f (t)dt


+ i Ko(x,
t)f(t)dt

ewe is
” Ka(a,t) f(t)dt +" il
ima enyuldi (3.20)

where the iterated kernel K,,+41(z, t) is defined in the same way as in (3.4), and where
we can still see the unknown function u(t) involved inside the above last integral of
(3.20), which clearly hinders this expression from being a solution to (3.1). The way
to show (3.20) becoming a solution is to show that such a series converges as n — 00
(see Exercise 10 for the detailed steps of the proof). This, of course, means that the
n + lst term, as the integral involving u(t), will vanish as an (obvious) necessary
condition of the convergence of the series. Hence we have the solution
140 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

[o)

u(z) =sea)+a foox Kn4i(£ils


n=0 (3.21)
= f(x) + af Pet \) (tat
which is the same as (3.17) (or (3.2)) of the first method (that started with the
Neumann series (3.5)).
We will prove here that the resulting function u(x), as represented by the conver-
gent series (3.21) with its resolvent kernel, does indeed satisfy the Volterra equation
(3.1) to qualify as its solution. We do this by substituting the expression for u(x) of
(3.21) in the integral on the right hand side of (3.1),

u(a) = f(a) +2 f" K(a,€)u(é)dé (3.1)


to have

x S
#) +) i K(2,€) Lr +A / rieanstoat dé
x x is

a+r] K@erod+» ff Kw.ore,trF(ardg. (6.22)


Now, this result must be shown to reduce to the same expression of u(x) in (3.21).
If we interchange the two integrals in the above double integral term [as was done in
(3.12) to (3.13a)], we have

z ré
i i K(w, OU(E,t;A)f (tdtde
= 3 f“F(t il"K(a,
OU(Et Aas] dt
ax fgof emo dr wala]
f=

(3.23)
af i0d yn ilaK(0,8)Knya(E, a]

=) ibf(t)13 a eer OF, dt

[ ne Knete) f(E)dé
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 141

after using the property of the iterated integral as in (3.12)—-(3.13a) to arrive at


Kyn42(2,t) above, then substituting for the dummy variable of integration by t = €
and that of the summation by m = n + 1. If we substitute this result (3.23) for the
double integral term in (3.22) we have

f(a) + / K (a, €)f(€)dé + i,j p NOE es, o|f(é)dé


= f(x) +r ‘i“|e@o+d i Knale8) f(€)dé
[e2)
i (3.21)
= f(x) + | SS Knee) F(§)dg
@ m=0

= f(o) +a |“P(e,6)f(Edé
which is the same expression of (3.21) that we started with as a (nominated) solution
inside the integral on the right-hand side of the Volterra equation of the second kind
(al):

3.1.2 Method of Successive Approximations (Iterations)

Another very well known method of solving the Volterra integral equation of the
second kind (3.1) is to start with substituting a zeroth approximation u(x) in the
integral [of (3.1) with A = 1] to obtain a first approximation wu (z),

ui(x) = f(x) + ieK (a, t)uo(t)dt. (3.24)

Then this wu; (x) is substituted again for u(z) in the integral of (3.1) to obtain a second
approximation u2(z),
x

163 ofa) +f K (a,t)uy(t)dt.


0
This process can be continued to obtain the nth approximation,
xz

OA ee) +f K (03 t )ttn—1 (t) at. (3.25)


0
Then we have to determine whether u,,(x) approaches the solution u(x) as n in-
creases.
This successive approximation (or iterative) method applies just as well to Fred-
holm integral equations of the second kind. It is even tried for the nonlinear Volterra
and Fredholm equations of the second kind.
It turns out that if f(z) is continuous for 0 < a < a and if K(z,t) is also
continuous for 0 < x < aand 0 < t < @, then it can be proved that the sequence
142 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

Un(x) in (3.25) will converge to the solution u(z) of (3.1). We state this result as the
following Theorem 2, which we shall follow immediately with a detailed illustrative
Example 2.

Theorem 2 “If for the Volterra integral equation of the second kind (3.1) we have
f(x) continuous for 0 < x < a and K(z,t) is also continuous in the triangle
0<a<a,0<t< z, then the successive approximations sequence up (x) in (3.25)
converges to the solution u(z) of 3.1."

Example 2 Successive Approximations. Use the method of successive approxi-


mations to solve the Volterra integral equation of the second kind,

u(x) = x — [e — t)u(t)dt. . (E£.1)

We first note that the above Theorem 2 applies to this problem, since f(x) = a is
continuous for any z, and the kernel K (x,t) = —(a — t) is also continuous in x and
t for all their values.
We may remark here that there is always an advantage in making a reasonable
zeroth approximation, a matter that becomes clearer after solving a number of prob-
lems. In this case we may start with wo(t) = 0 in the integral of (E.1) to obtain wu, (x)
according to (3.25),

ui(z) = 2-0, (E.2)


so if we substitute this u(t) = u;(t) = t inside the integral of (E.1), we obtain

2) 3 z
Zz

0 Z 3] \o
ae
= ge
3!
Now

ug(t) =2- (2 —t)us(t)at = 2 — | (x — t) (+-5) dt


Ne t? +5 x
way

=e eres al
go gy? pia 8H augsh Sugk
age aiden | acme BE a aga ae ee
(4 i) + (5 =) Os 54 a RG
Crees Sip his
Sopa eS Mati. «Pe ge ee oe
6 120 el
(E.4)
From (E.2), (E.3), and (E.4) it looks clear now that if we continue this process, we
obtain the n + 1st approximation u,y41(x) as
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 143

zr r - gent

ata ob) hE ert (0) Qn+1)!’ n= Os eden.) “(E.0)

which is obviously the nth partial sum of the Maclaurin series of sin x,

= gent
sing = ) (—1)”———_. (£6)
ae (2n + 1)!
Hence the solution to (E.1) is

woe r= lim" 17,441 (a) sina. (E.7)


n—- oo

As in Example 1, we remark again that it is not very often that a general sequence
Un(zx) will converge to such a simple function as sin z, and we may have to resort
to approximate numerical methods for evaluating the resulting partial sum (E.5) or
in general its integral representation (3.25). We leave it as an exercise to verify that
sin z is a solution to (E.1) by performing the direct integration (Exercise 8).

3.1.3 Laplace Transform Method: Difference Kernel

When the kernel K(x,£) depends only on the difference x — €, it is termed the
difference kernel, K(x,&) = K(a — €). The following Volterra equation of the
second kind with difference kernel K(x — &),

NO eee /aeouea (3.26)


now has an integral in the form of the Laplace convolution product (1.85)

iEK(a — €)u(€)dé = K * u.

Hence, as we presented and illustrated in Section 1.4.1, the Volterra equation with
difference kernel (3.26) lends itself to the Laplace method of solution. So if we
Laplace-transform (3.26), letting U(s), F(s), and K(s) be the Laplace transform of
u(x), f(x), and K(z), respectively, and realize from the convolution theorem (1.84)
that L{K * u} = KU, we obtain

U(s) = F(s) + AK(s)U(s) (3-20)


aS)

The solution u(z) of (3.26) is then the inverse Laplace transform of U(s) in (3.28),
Bee F(s)

which can be evaluated with the aid of Laplace transform pairs (Table 1.1), as we
illustrate in the following example.
144 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

Example 3 Use the Laplace transform to solve the problem of Example 1,


zx

u(z) = f(x) + | e” ‘u(t)dt. (£.1)


0

The kernel K (x,t) = e*~' is a difference kernel; if we Laplace-transform (E.1)


using K(s) = L{K(x)} = Lf{e*} = 1/(s — 1) in (3.28), we obtain

U(s) =F(s)+ A mea


im F(s) 2 srl
DS ee
Es Fate) eae = F(s) reeks
(B2)
eT EA —1- 2

So the solution u(x) of (E.1) is the inverse Laplace transform of (E.2),

at 2)
id ay {F() +A;ean) }

—sigoil i (6Tenia ‘aaa


me r(s)} E.
(E.3)

= f(z) + AL- ee

If we use the convolution theorem (1.84) on the last term in (E.3) with L{eQ+))7} =
1/[s — (A + 1)] from (1.80) (or the second entry in Table 1.1) we obtain

u(2) = f(a) + AO*D* + f(2) (E.4)


=e) raf eAt1la—t) F(t) dt
0

which is the same result (E.7) of Example 1, where the solution was obtained by
using the resolvent kernel-Neumann series method.

Example 4 Use the Laplace transform to solve the problem of Example 2,

Ie) =o = fe — t)u(t)dt. (£.1)

The kernel K (x,t) = x — t is a difference kernel; we use Laplace transform on


(E.1) to obtain

UG) =—s2 =U)


2
(E.2)
[using the convolution theorem (1.84) and L{x} = 1/s? from (1.79) (or the first
entry in Table 1.1) with y = 1]. From (E.2) we have
3.1. VOLTERRA EQUATIONS OF THE SECOND KIND 145

Lis? 1
Oeics groomer ey eS)
and if we use the Laplace transform pair C{sin ax} = ae of (1.81), we obtain
a

SVCog es a { : “}= sing (E.4)


s2 +

which is the solution (E.6) of Example 2.

Integro-differential equations
When the function u(x) in an equation is involved in a differentiation as well as
an integration operations, the equation is called an integro-differential equation. For
example,

ae = f(z) + /: K (a, t)u(t)dt. (3.30)


We note here that when K (x,t) is a difference kernel, the right-hand side of
(3.30) is amenable to the Laplace transformation, while the second derivative on the
left-hand side, according to (1.69), needs the initial conditions u(0) and wu’'(0) for its
Laplace transformation.

Example 5 Integro-differential Equation. Solve the following initial value problem


associated with the integro-differential equation

du Ors
AED =e = / e
2(a—t) ae
du (E£.1)

u(0) 20 (E.2)
u'(0) = 0. (E.3)
We apply the Laplace transform on (E.1) using (1.69)

yi aie = s?U(s) — su(0) — u’(0) (E.4)


ax?
on the left side, the convolution theorem (1.84) for the integral, (1.73) for the term
e2* and (1.68)

ie{=} = sU(s) — u(0) (E.5)


dx
for the derivative inside the integral, to obtain

#?U(s)
2 —su(0) ——w'(0)
ms a) = —51 - —p[sU(s)-w(0).
Sa
1
= i (B86)
E.

Now we use the initial conditions (E.2) and (E.3) in (E.6) to have
146 Chapter 3. VOLTERRA INTEGRAL EQUATIONS

nl sU(s)
SU (8) scien Gann (E.7)
ee, Bis arias
WAC eas era VP? (SBP oie

after using partial fractions for the last line. Hence the solution to (E.1)—(E.3) is the
inverse Laplace transform of (E.7),

oe
1 i
/2 A) pe es Ney ee ete
1
se
eae {ap +3}
= re* —e* + 1
where for the first term we used (1.80) [or (1.71)].

Exercises 3.1

1. Find the resolvent kernel I'(z, t; ) to solve the Volterra equation of the second
kind
u(x) = f(x) + af u(t)dt.
0
Hint: The kernel here is K(z,t) = 1.

2. Find the resolvent kernel to solve the Volterra integral equation of the second
kind
u(x) = g(x) + » [oe — t)u(t)dt.

3. Use the Laplace transform method to solve Exercises 1 and 2. Hint: For
Exercise | note that ps u(t)dt is (a very special case) of the Laplace convo-
lution product type with f;(¢) = 1, fo(t) = u(t); you can also use the pair
H
1
ct f fe) t= 5f (8) in the fourteenth entry of Table 1.1. For Exercise 2,
0
. oe ; A
note that sinh z = —isiniz,ie., £{sinh Ar} = zoe

4. Use the method of successive approximations [let uo(x) = 0] to solve Exercise


1 for
(a) f(a) le
(Dif) = aN

5. Use the method of successive approximations to solve Exercise 2 for g(x) = 1


and A? = 1. Let up(x) = 1.
6. Use Laplace transform to solve the integro-differential equation in u(x) with
the given initial conditions
EXERCISES 3.1 147

x 2
d*u oe du
—, —2— + u(z) =cosa-2 | cos(xz
— t)ae}, sin(x — t) dt
0
u(0) =0, it(Oy =O!
. Verify your answers for Exercises 4, 5, and 6.

. Verify that u(x) = sin z is a solution of the Volterra integral equation (E.1) of
Example 2.

. Solve the following Volterra integral yn pea in ae ) by reducing it to a


first-order differential equation in F'(x =e €ul(€

zr

(a) =e +f xv€u(€)d€. (£.1)


0
Hint: Note that u(z) = x + xF (zx),

dF
ER =<, [ewe
CulClde— cus lala ok (7)|. (E.2)

Solve (E.2) for F(a), and for the constant of integration substitute in the
original integral equation (E.1).

10. (a) Prove that the last term in (3.20) vanishes as n — oo by showing that
the infinite series (3.21) converges absolutely and uniformly. Do the proof by
justifying the following steps (i) to (iii) with detailed hints:
(i) Show that

rl byasdd
[Forced ast) <q M oe (E£.1)

where M is the upper bound of |K (a, t)].


Hint: See

|K(x, t)| =| [ K@OK( (Ha

< i;|K (2, Q)||K(E,t)|dt< M?|x —¢|


t

and

|Ka(z,t)| = [ K@9Ka(60) yar} < [IK (x, 8)| [Ko(E,t)|a


<m | M?\€ — t\|dé < M nels
aie:
t
148 Chapter 3. VOLTERRA INTEGRAL EQUATIONS

(ii) With the result (E.1), show that the n + 1st term in (3.20) is bounded as
follows

ynt i Hoe (omayatl etn [M|Al| — al]"**


(n+ 1)! cee)
where m is the upper bound of |u(t)|.
(iii) With the result (E.2), show that the sequence in (3.20) converges, and as
such its n + lst term tends to zero as n — oo.
Hint: The n + 1st term is dominated by an nth term of the following series

oo n
M|X(a—a)| _ [M|A||z — al]
ee ES eens
n=0

which is absolutely and uniformly convergent for all |\(z — a)|, as it is clear
from a simple ratio test, remembering the presence of the n! in the denominator
of the above sequence.
(b) What is the essential property in this method that helped the most in easing
the proof of the convergence.

11. Write a Volterra integral equation of the second kind in the function of the two
variables u(x, y) in analogy to that of (3.1) in u(z).
Hint: See (1.38) for a parallel, and note that it is a Fredholm integral equation
in two dimensions.

3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND

We should mention at the outset that integral equations of the first kind present their
own difficulties as we shall allude to toward the end of this section. In Section 5.4
we will discuss with some details the topic of Fredholm integral equations of the first
kind.
For the special case of a Volterra equation of the first kind,

fo af K (a, t)u(t)dt (S730)

with kernel K(x, t) such that K(z,x) # 0 (and is differentiable with respect to 2),
we will show next that it can be reduced to Volterra equation of the second kind
(whose solution, in general, is much more tractable!) If we differentiate both sides
of (3.31) with respect to x using the Leibnitz rule (1.53) on the integral, we have

of _ Af OK (z,t)
Aa u(t)dt + AK (a, x)u(z). (3.32)
0
3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND 149

This can easily be rewritten as


1 eon
OK (az,
ar t)
A Gp le Tie +E - Ar) aa w(t \at, (ara) 20 = (3.33)

as a Volterra integral equation - the second kind with the nonhomogeneous term

1 df
UGS eoyeaten mre
and the (new) kernel
—1 dOK(z,t)
Hc, b=
Cae ae) ARO:
Thus
a+ f Hie, tyucae. (3.34)
So when K (zx, x) # 0 in (3.31), we can reduce it to a Volterra equation of the second
kind and solve it using one of the methods that we discussed in Section 3.1.

Example 6 Solve the following Volterra equation of the first kind after reducing it
to a Volterra equation of the second kind:
x

sine = [ e* ‘u(t)dt. (£.1)


0
The kernel K (x, t) = e*~* does not vanish when z = t [i.e., K(z, 2) = ]
and hence according to (3.33) with f(z) = sinz, A = 1, we have ie i)
6)
——(e**) =e etn,
Ox

u(x) = cosz — shie”‘u(t)dt. (E.2)


0
This is a Volterra equation of the second kind which happens to be a special case of
the problem (E.1) in Example | with f(z) = cosz and X = —1. According to the
result (E.7) of Example | or (E.4) of Example 3,

u(x) = f(x) +r [* eltay(e—t) f (t)dt (E.3)


0
we have .
u(x) = cose [ cos tdt (E.4)
a
= cosxz — [sint]} = cosa — sina.
When K (x, x) = 0, (3.32) is still a Volterra equation of the first kind. However, if
—(z,x) # 0 in (3.32), differentiating again will result in a Volterra equation of the
second kind. If these attempts fail, the methods of solution become more involved,
which we shall allude to briefly toward the end of this section. An exception to this is
the special case when (3.32) has a difference kernel and hence the method of Laplace
transform can be employed, as we shall discuss and illustrate next.
150 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

3.2.1 Volterra Integral Equation of the First Kind with a Difference


Kernel—Laplace Transform Method

When the Volterra integral equation of the first kind (3.31) has a difference kernel
K @t) =);

yCnea itee OES (3.35)


M1

0
we apply the Laplace transform on (3.35) as we did for (3.26), to obtain

F(s) = AK(s)U(s),

Vie :a | (3.36)
and the solution to (3.35) is

data sual (acss


Ula i= xf zo }: (37)

Example 7 Volterra Equation of First Kind with a Difference Kernel. Solve the
integral equation
SVU af e”‘u(t)dt (E.1)
0
by using Laplace transform.
This Volterra equation of the first kind (E.1) is with difference kernel K (x,t) =
e*—'; if we Laplace-transform it, recalling the (Laplace) convolution theorem in
(1.84) for the integral of (E.1), and using the Laplace transform pairs in (1.73) and
(1.81),

1 1
L{K(2)}=L{e*}=—T, —L{sinz} = —5
for (E.1) we have
Se1 s\ 1 U(s)
s?+1 s—1
BC cee ee ee 8 1
~~ AIf(s-1) As?+1° ~ A\s?41~— 8241) °
(E.2)
To obtain the solution u(x) of (E.1), we find the inverse Laplace transform of (E.2),

u(a) = +L = ‘ani
Ss il
aa} (E.3)
and with the use of the two Laplace transform pairs in (1.82) and (1.81) we obtain

1 (ae) = (cos. — sin x) (E.4)

which is the result of Example 6 when A = 1.


3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND 151

A Main Difficulty of the Integral Equations of the First Kind


The above Example 7 seems to go very well for a solution via the Laplace
transformation. However if we change the problem a little, we will uncover the first
main difficulty with solving integral equations of the first kind. This is described in
that we are not sure that there is a solution (input) u(z) for (3.31) that corresponds to
any given function (output) f(a). In other words, when we write (3.31) with operator
notation f = K{u}, there may not be a function u(z) in the domain of the (integral)
operator K that is mapped to the given function f(x) of (3.31). We will show here
that in the above example, the function f(x) = sinx must have been selected very
carefully (according to guidelines from advanced theory) to guarantee the existence
of the (obtained) solution (E.4) of (E.1). It is after this guarantee that the Laplace
transform method of solution got a smooth sailing for the above Volterra integral
equation of the first kind in (E.1) with (Laplace) convolution product integral type.
So let us assume that, instead of f(x) = sinx in (E.1) (with \ = 1) of the above
example, we were given f(x) = 1. Thenif we don’t know about the above difficulty,
we can proceed, formally, with the same above Laplace transform method, where we
only have the left side C{sinz} = ae of the first equation (E.2) (with A = 1) in
the above Example 7 changed to C{1} = + to result in

Cie ae ==
However, for such result of U(s) = *=+ = 1 — 4, there exists no Laplace transform
inverse for the first term 1 of U(s) = 1— q. This is an obvious accepted conclusion,
since the important necessary condition for the Laplace transform F'(s) (of a large
class of functions as described in Theorem 1.1 of Section 1.4) is that it must vanish
as s approaches infinity, and the above F'(s) = 1 does not. This important necessary
condition was shown in Example 8 of Section 1.3.

Abel's Generalized Integral Equation


The Abel’s integral equation (1.20)

-J%f(a) = [ FOS a —_—


a
(3.38)
is a Volterra equation of the first kind with difference kernel K(z,t) = 1//x —t;
hence we may use the Laplace transform to solve it.
The generalized Abel integral equation

f(z) = ifet 0<a<1 (3.39)


is also of the convolution type and, for an appropriate f(z)!, can be solved by using
Laplace transform (see our comment following Example 7 concerning conditions on
152 Chapter 3. VOLTERRA INTEGRAL EQUATIONS

f(z) in (3.31) to guarantee a solution to Volterra equations of the first kind (3.31).) In
the next example we use the Laplace transform to solve (3.38) and leave the solution
of (3.39), which is

Gites SED eee == (earl OF eed (3.40)


nm dx Jo
as an exercise (see Exercise 3).

Example 8 Abel’s Integral Equation. Use the Laplace transform to solve Abel’s
equation (3.38).
We let F'(s) and (s) be the Laplace transform of f(x) and ¢(x), respectively,
and use C{K(x)} = L{1//x} = V7/s from (1.79) with vy = <9 noting that
[\(1/2) = 7, to obtain ;

- %9F(s) = [2465 (F.1)


(s) = ~[2verte) (E.2)

Hence the solution to Abel’s equation (3.38) is

(x) = ~ [2-1 VaF (a). (B.3)


In trying to use the convolution theorem for C~!{./sF(s)} in (E.3) we have
difficulty (impossibility!) in finding £~!{,/s}, which actually does not exist, since
as we explained in Example 8 of Section 1.3, an important necessary condition
for a “legitimate" Laplace transform F'(s) is that it must vanish as s + oo, i.e.,
lims—yoo f(s) = 0. This is also seen from the condition vy> —1 on the Laplace
transform pair (1.79), which prevents us from letting v = —3/2 to use the pair for
il
s!/2_ On the other hand, (1.79) with vy = — 5 gives the inverse Laplace transform of
s—1/2 as 1/.\/7z; we may multiply and divide the right-hand side of (E.2) by ,/s to
obtain

F(s) (E.2)

= -/Bs(), H(s) = ae (E.3)


and we can easily use the convolution theorem to write

Lai
£{H(s)} a's
=£ 0 iar
fg )}-— feC
dt=h(a). (BA)
3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND 153

We are still after £~'{(s)}, so according to (1.68),

C{sH(s)—no} = 2x a (E.5)
= £{sH(6)}- £-{n(0)} = 2
ses H,(s).\e= o (E.6)

[since from (E.4) (or on physical grounds) we have h(0) = 0].


Finally, we use h(x) from (E.4) in (E.6) to obtain ¢(z) [the inverse Laplace
transform of ®(s) = —,/2g/7[sH(s)] in (E.2)],

(3.41)
Mem ine x/ Cie

which is the solution to (3.38).

Another Difficulty for Integral Equations of the First Kind


Again this method and the solution in (3.41) for Abel’s problem (3.38), as Volterra
equation of the first kind, seems to flow very well. However, we may still point out
to the “hidden” difficulties in trying to solve integral equations of the first kind. For
example, does the integral in (3.41) exist for an arbitrary f(x). Abel’s first problem
of the Tautochrone (1.21) is for the case of f(x) being constant T,, which is very safe,
since the integral of (3.41) does exist for f(x) = 1,
x

/ (x — t)~2dt = —2(a — t)?


0 t=0
This result of the integration is a nice continuous function for z > 0, but what
is needed for the solution (3.41) is to differentiate this result of the integration.
To prepare for this second type of difficulty for equations of the first kind, we
remind how integration is a “smoothing process," while differentiation, as the inverse
operation of integration, would uncover the “rough spots" (discontinuities, that were
smoothened by the integration process). This smoothing of the integration operation
was discussed then illustrated in Example 16 of Section 1.5.2 for the (continuous) roof
function h(x), and how the differentiation operation uncovers the jump discontinuity
in in illustrated in Figure 1.11. In our example with f(z) = T
hy
wo" ie e/g aly By
= degen as 2”
which is unbounded at x = 0. So, solutions of the Volterra integral equations of the
first kind that involve related differentiation of the given function f(z) would inherit
154 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

what the differentiation operation may uncover. This is even more serious if f(x) is
given as data, which of course has the inaccuracy of the measurement. So, for the
above result in (3.41) we have to do numerical integration with its own added error
of approximating the integral. On the top of that, this inaccurate numerical result has
to be numerically differentiated, which compounds the error for the desired practical
solution ¢(z) of (3.38), instead of writing the formal solution as in (3.41). This
difficulty of the equations of the first kind is considered to be very serious, because
even if we know that a solution exists, we may only get useless inaccurate data for it.
Such situations are described by ‘“‘a small change in the input data f(x) may produce
a very large change in the sought output (solution) u(«)" of the integral equation of
the first kind,

f(a) = ilK(«, t)u(t)dt,


whereby the solution is termed “unstable.” Such difficulties and warning were, of
course, not available to Abel around the year 1826. This is taken historically to be
a good luck that Abel was not aware of such deterrents, and went ahead to get the
solutions, while “not knowing enough to be daunted".
We shall return to this subject of the troubles with integral equations of the first
kind when we discuss the Fredholm integral equations of the first kind in Section 5.4.

Exercises 3.2

1. (a) Reduce the Volterra integral equation of the first kind

fi i cos(x — t)u(t)dt
0
to a Volterra integral equation of the second kind.
(b) Use the Laplace transform to solve the resulting integral equation in part
(a).
(c) Use the Laplace transform to solve the problem in part (a) directly, i.e,
without having to reduce it to Volterra equation of the second kind.
(d) Use the result in part (b) to verify it as a solution to the integral equation in
part (a).
(e) Do parts (a)—-(c) above for the Volterra integral equations of the first kind
in Exercises 3 and 4 of Section 1.1.

2. Solve the Volterra integral equation of the first kind after reducing it to an
equation of the second kind,

tay 37—*u(t)dt.
0

3See Lonseth [1977] and Anderson and DeHoog [1980].


EXERCISES 3.2 155

Hint? Write 3° Sten c(3°yS1/(s= In 3):


. Use the Laplace transform to solve the generalized Abel integral equation
(3:39),

ee fee ult at
f(a) = f rene: 0<a<l

Hint: To simplify the answer, use the relation [(x)C(1 — x) = m/(sin 72).

. Solve each of the following Volterra integral equations of the first kind after
reducing them to Volterra equations of the second kind.
x? &
(a) — = i (1— a? + t?)u(t)dt
2 0
(b) e? /2 -1= flsin(a — t)u(t)dt
10)
ae

Hint: For part (b) avoid the Laplace transform (because L{e = } does not exist,
and note that A(z, x) = sin0 = 0.

. Consider the following examples of Volterra and Fredholm integral equations


with the same kernel,

f(t) = [SEAM: (E.1)


fla) = [ sin(a + u(eat. (E.2)
Show how these two equations illustrate that the existence for a solution to the
Fredholm integral equation of the first kind (E.2) has much harder restriction
on the class of the given function f(z), for (E.2) to have a solution, than that
of the Volterra equation of the first kind (E.1).
Hint: Expand sin(x + €) in its two terms (sin(# + €) = sin x cos€ +cos z sin
€), then examine the integrals over € in both equations to see that (E.2) requires
the very restrictive condition that its f(a) on the left must be of the form
Asinz + Bcosz, A,B constants; while (E.1) allows f(x) to be in the more
relaxed form f(x) = g(x) sinxz + h(x) cosa. The reason for the difference is
the fixed limits of integration in (E.2) versus a variable limit z in (E.1).

. Consider the Volterra integral equation of the first kind (3.31) in u(x), which
after differentiation (with K(x,x) 4 0) had resulted in the Volterra integral
equation of the second kind (3.34) in u(a). Assuming that one) exists and
is continuous and that K(x, x) # 0 for all ze[a, b], use an integration method
by letting ¢(z) = if u(t)dt to also reduce the equation of the first kind to
another one of the second kind in ¢(2),
156 Chapter 3. VOLTERRA INTEGRAL EQUATIONS

Ot
p(x) — aE)
* aK (2,4) p(t) ¥ f(z)
\K(a,2)’ rea, db].

Hint: Use integration by parts on (3.31) letting U(t) = K(a,t) and dV(t) =
u(t)dt, whereby V(t) = f u(t)dt = d(t).

3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS

In the preceding two sections we presented exact and approximate methods for
solving Volterra integral equations. We must recognize that the illustrations we
presented there were of a very special form to suit the method and are simple enough
to result in a familiar form of solution without very lengthy,computations. For more
general types of problems, we often resort to approximate methods where the integral
equation is replaced or approximated by another one which is closely related and can
be handled by the usual methods, and hopefully, with solutions close to those of
the original equations. When such methods are not feasible, we have to resort to
numerical methods which are also approximate methods and where the integral in the
equation is approximated by a sum of N terms. As a result, the integral equation may
be reduced to a set ofN equations in the N unknowns u(z;),7 = 0,1,2,---,N —1,
the samples of the approximate solution. To illustrate this method clearly we present
simple examples, some of which were solved by the exact methods so that we have
a chance to compare them with the numerically evaluated (approximate) results.
For the same reason we will, at this stage, use one of the simplest and most familiar
methods of numerical integration, the trapezoidal rule (1.141), which we have already
presented in Section 1.5 along with Simpson’s rule (1.144) and the midpoint formula
(1.147). For the level of this introductory text, we leave the higher order quadrature
rules for the interest of the reader. They are discussed and illustrated with a good
number of examples and exercises in Chapter 7. There, we also include the tables of
the quadrature rules that are necessary for their use in the examples and the exercises.

3.3.1 Numerical Approximation Setting of Volterra Equations

Here we consider the Volterra integral equation of the second kind,

u(x) = f(x) + ikK (a, t)u(t)dt (3.42)

with its noted variable upper limit of integration x as compared to the fixed upper
limit 6 of the Fredholm equation (1.148),
b
(aia) +f K (a, t)u(t)dt. (1.148)

Indeed, this is the major difference in classifying these two main (different) classes
of integral equations. So, as expected, this will affect their theories and methods of
3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS 157

solutions, and in particular the present numerical approximation by a linear system


of equations. As expected, and shall soon become very clear, the numerical setting
and the approximation of the integral for the Volterra equation (3.42) will result in
the coefficient matrix of the linear system of equations being a (lower) triangular
one, which is exactly due to the variable upper limit z of the integration in (3.42).
This should be clear, since in (3.42) the kernel K (x,t) = 0 fort > a as the integrand
can be considered identically zero above its upper limit of integration x in (3.42).
So for the discrete case we have K(2z;,t;) = Ki; = 0 for 7 > %. Of course, a
system of linear equations with such a natural triangular coefficient matrix is easy to
solve, as we shall illustrate in Example 9. This is especially when compared with the
Square system of equations for the Fredholm integral equation, as we shall discuss
and illustrate in Section 5.5.1 [see (5.118) and Example 20].
We will subdivide the interval of integration (a,x) into n (= N — 1) equal
subintervals of width At = (x, — a)/n,n > 1, where x, is the end point we choose
for x; we shall set tp) = aandt; =a+JjAt = to + jAt. Since we will be using
either ¢ or x as the independent variable for the solution uw, we will call a9 = to = a,
© = 2, —1t,and 7; =2@p +1At =a+iAt = ¢;, or in shott, x; = t;. We will refer
to the value of the function f(z) at 2; as f(x;) = f;, the value of kernel K(z, t) at
(x;,t;) as K(x;,t;) = Kj, and the (approximate) value of the solution u(x) at 2;
or t; as u(t;) = u(a;) = uj. K(ax;,t;) clearly vanishes for t; > x; as the integration
ends at t; < z;. Note that the particular value u(x) = f(a) according to (3.42). So
if we use the trapezoidal rule with n subintervals to approximate the integral in the
Volterra integral equation (3.42), we have

i K(a, t)u(t)dt ~ At 5K(e, to)u(to) + K(x, ti)u(ti) +---

+K (a, tp—1)u(tn—1) + 5K (0, tn)ultn) ;

and the integral equation (3.42) is then approximated by the sum

Wey (ONE 5K (0, to)u(to) + K(a,t))u(ti) +---

1
+-+-+ K(z,tn—1)u(tn—1) + 5K (a,tn)u(tn)|

t;<@, g21, ©=2n =tn. (3.44)

The integration in (3.43) is over t, a < t < a; thus fort > x we take K(z,t) = 0,
KiCac;,¢;) = Viton barre
Of course, we realize here that the solution desired in (3.43) is an approximate
solution of (3.42) since there is an error involved in replacing the integral in (3.42) by
the N = n+ 1 terms of the trapezoidal rule (1.141). If we consider (the same) n + 1
158 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

sample values of u(x), u(xi) = ui, 1 = 0,1, 2,---,m, equation (3.44) will become
a set of N = n + 1 equations in u(a;) (or u;) [note that u(zo) = f (xo) since the
integral in (3.42) vanishes for = rp = a],

u(zo) = f (zo)

u(zi) = f(z) +At resi. + K(a;,ti)ui +-:-


: (3.45)
il
eco pak I5 Corgi spe) Ties ae 5K (i, ta) us ’

1=1,2,---,N, ti <j

where we note again that K(a;,t;) = 0 for t; > a; since the integration in (3.43)
stops att; = x;. The system of equations in (3.45) can be written in a more compact
form as

uo = fo
if 1
Ui= fi + At 5 Kiow ar Kyu, ee Kyj-1Uj-1 ae 5ghist ; (3.46)

t= 1 2ereyn, Ky; = K (zi, t;), TS

which are N = n + 1 equations in u;, the approximation to the solution u(«) of


(G47) ata; = ar (Zt f0ry = U0, Lo. 1.
If we transfer all the terms involving the solution u,; to the left side of (3.46),
leaving only the nonhomogeneous part f; on the right side, then write all the n + 1
equations for u;, 7 = 0,1,2,---,n, we have the following triangular system of
equations:

uo = fo
At At
—5 Kioto + (1- Fx) U1 = fi
At At
— 9 A200 — AtKoiu; + ¢ — Ko } u2 = fo
At At
——_ Ksoto = AtkK3,u, = AtK32u2 ae (1
oH 5 Kea)U3 = fs

REO te At
——> Knouo — AtKniui — AtKpgue — +--+ (1= Kon) Un = Fn
2 2
(3.47)
as a system of n + 1 equations in the n + 1 desired unknowns ug, u1,---,tUn. We
note that the form of this set of equations is a very special and desired one since the
solutions can be obtained by repeated substitution, starting with uo = fo from the
first equation of (3.47), which can be substituted in the second equation to obtain u;,
3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS 159

At IN) At At
— a Aiotlo ae (1= sku) w=fi= —5 Kioto oP (1= sku) U1,

a fi + (At/2)Ky0fo (3.48)
1 — (At/2) Ky,

Then this value u, is substituted in the third equation of (3.47) to obtain u2, and
this substitution process can be continued until we obtain u,. With this particular
triangular system for the Volterra equation, we will be encouraged in the following
Example 9 to find the solution. As we remarked earlier, this is in contrast to the square
system of equations of a Fredholm equation of Section 5.5 as illustrated in Example
20 of that section, where the solution of the system is not as easy and straightforward
as the above one.

Example 9 Numerical Solution of Volterra Equation. Use the numerical method


described above to find the approximate values of the solution for the following
Volterra equation at zr= 0,1, 2,3, and 4; then compare these values with the exact
solution u(x) = sina,

Gee [oe ~ t)u(t)dt

a ae i (t — x)u(t)dt.

Here we have f(x) = 2, K(z,t) = t— 2 fort <.¢ = 0,1,2,3,4 and is zero


fort > x = 0,1,2,3,4, and a = 0 with u(0) = 0. We also have n = 4 and hence
At = (4—0)/4 = 1. So from (3.47) the five equations in uo, U1, U2, U3, and u4 are

uo = jo — 0) (E22)

1 i
— 5 Ai0t0 ate (1= 5Ku) U1 =f al ene)

1 1
— 5 K20%0 — Koi + (1= 3K) U2 =fe=2 (E4)
1
~5 Koto — K31u1 — K32u2 + (1= Kea) U3 =fs—3 (E-5)
1
— 5Kou — Kyu, — Kaque — K43u3 + ¢ = =u) gi fs 4. (E.6)
Hence if we substitute in (E.3)-(E.6) the values for Kio = K(1,0) =0 -1= —-1,
Ki, = 0, Koo = —2, Ko, = -1, Koo = 0, K30 = —3, K3i = —2, Kz = —1,
Kiss = 0p kao SA0hkig==3) Ka = —2, K43 = —1, and K44 = 0, we obtain
160 Chapter 3 VOLTERRA INTEGRAL EQUATIONS

Uy SO
sup + = 1, i= luo 1 0
Uo + Uy + U2 = 2, ug =2—up —u, = 2-0-1=2-1=1
Sup + 2u; + U2 + uz = 3, ug = 3 — up — 2u) —-u2 =3-2-1=0
Quo + 3u, + 2ug +u3+ug =4, ug = 4— 2uq — 3u — 2u2 — UZ
=4—(0—3—2—0=-—!1

after substituting uo from (E.2) for obtaining wu; in (E.3), and so on. So the numerical
approximation to the sample values of the solution are up = 0, uy = 1, v2 = 1,
u3 = 0, and ug = —1, which are compared to the exact values u(x) = sinz as
4to.=. sin. O:= 05-4, = sine =—.0.8415; to—.sin 2.= 0.9093, 44. Sit du .07L ae
and u4 = sin4 = —0.7568 as illustrated in Table 3.1 and Figure 3.1.

Table 3.1 Numerical and Exact Solutions of Volterra Integral Equation

x OF Fl 2 3 4

Numerical value of u(z) OR 1 0 -1


Exact value of u(z) =sinz 0 0.8415 0.9093 0.1411 —0.7568

U(x) x-- Numerical


o— Exact
a)

Fig. 3.1 Numerical and exact solutions of Volterra equation (E.1) of Example 9.

In Example 17 of Section 1.5, we used the Lagrange interpolation formula to


interpolate between the above four approximate numerical values to have a continuous
3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS 161

curve that connects them, which was illustrated in Figure 1.13, and which resembles
the dotted line in Figure 3.1.
We may return to (3.42) and (3.47) and emphasize again how the numerical method
reduced, or more precisely approximated, the Volterra equation of the second kind
(3.42) to a (lower triangular) system of N = n+1 equations in N = n+1 unknowns
as in (3.47), where N is the number of approximated sample values u, of the solution
desired. Now we recognize that the set of equations (3.47) can be written in a matrix
notation form

KU=F (3.49)
where K = (K;;) is the (n + 1) x (n + 1) matrix of the coefficients of the system
of equations (3.47), U = (u,;) is the column matrix of the sample solutions, and
F = (fj) is the column matrix of the sample values of the nonhomogeneous part
f (x) in (3.47). The symbolic matrix form (3.49) can be written explicitly as

1 0 0 0

At At
—9 Kio 1— 5 Ku 0 0 0

At At
——Ko —AtKo, 1 — —Ko2 0
2 2 y
At
Be Kap —AtK3 —AtkK32 1— 5 Ks 0

At
Fi —AtKni —AtKn2 vee 1- 9 Ann

Uo fo
U4 fi
U2 fo
ish) =} fs (3.50)

Un tn
which can be verified as (3.47) by performing the simple matrix multiplication. This
would mean that the essence of the numerical method is to reduce the Volterra integral
equation to a matrix equation. Familiarity with the powerful tools of matrix theory
would give us a more efficient way of solving the integral equation numerically.
We should note here again that the numerical method for solving Fredholm inte-
gral equations, which we shall discuss in Section 5.5, will follow in the same way,
however, it results in a square rather than the present triangular system of simul-
taneous equations. Even more important is how the theory regarding the existence
162 Chapter 3. VOLTERRA INTEGRAL EQUATIONS

of solutions for the system of equations will shed light on the existence of solutions
for the Fredholm integral equation. Since we intend to keep this text on the under-
graduate level by assuming mainly a basic calculus preparation, we will keep our
exercises on this level and leave it for each reader to obtain the result in an efficient
way depending on his or her preparation in matrix calculus.
In Chapter 7 (towards the end of Section 7.2), and with the help of its higher
quadrature rules, we will also have the chance to make a brief comment and illustrate
the numerical solution for a particular class of singular Volterra integral equations.
They are those equations whose singularity is due to the infinite limit of integration.
An example is the integral equation of the torsion of a wire (1.15) in the torsion
function w(t),

m(t) = hw + Ha b(t, T)w(r)dt, (1.15)

Exercises 3.3

1. Consider the Volterra equation in Example 9

u(z) = x — | (x — t)u(t)dt
0

(a) Use the trapezoidal rule* to solve for u(z) in the interval (0, 4) with enough
sample values to compare with the exact solution u(x) = sin a.
Hints Usen = 8:

(b) Tabulate or graph the numerical and exact solutions for comparison and
note if there is improvement over those in Example 9.

2. Consider the Volterra equation of Example 3 with \ = 2 and f(x) = a,

u(r) =427+2 [ e” ‘*u(t)dt (£.1)

(a) Find the exact solution. Hint: See Example | for \ = 2 and f(x) = 2, and
in particular (E.7) for the solution.

(b) Solve (E.1) numerically for 0 < x < 5. Hint: Find five or nine sample
values with n = 4 or 8, respectively.
(c) Compare the approximate numerical solution in part (b) with the exact one
in part (a). Graph both results.

4As was done for the development (3.43)-(3.47), the trapezoidal rule is used for the exercises of this
section.
EXERCISES 3.3 163

3. Consider the Volterra equation of the first kind of Example 6,

sine = ife”'u(t)dt. (E£.1)


0

(a) Find the solution numerically for 0 < 2 < 27. Hint: Reduce it to a Volterra
integral equation of the second kind as in (E.2) of Example 6,

AG ene :* e?—tu(t)dt (B.2)


0

then use the method of this section as illustrated in (3.46) or (3.47). Find nine
sample points with n = 8.
(b) Compare the numerical values of part (a) with the exact values of Example
6.
(c) Attempt to find the numerical solution of the Volterra equation of the first
kind (E.1) for 3 sample values (n = 2) directly [i.e., without reducing it to that
of the second kind (E.2)]. Hint: Follow the steps from (3.43) to (3.47) for

0 = f(x) + [OCOTON, (E.3)


instead of (3.42).
i

Of 6rSon= Ge g ye
i «

Rpg iron!by asec panda


Gr te 0% ft) : Sue ia 2 Ss

e i i & = 7

TMirte Pua © lout! @: ra ie «@

Ou) at = ae 9egie

> @liiptiet noo Saige ei

(aay Vraw = o fess ay ets ew ay? asthins dlkp


go ¢ ‘ ae 4 = ¢ Uva fiin Oe
i
sin het itsWO Ati ns Raven i areata wl ot bp vou ott ieey
= ditt cation ora
alqminKT Yo mnie’ ‘nena wae OY dk igssree vi sahaug ia
Ge
a, = 2 :
Boas=

a a § » ean »
asi ont} ts iden icin aM ben vue
.

oh
t
-

ee iyWen Aalto te iw 4 Yiteed (f = 9) wena slegeines # vo (0) etl ©


7 ~c asree ‘> ey Had= Te = ipl ht —
: AS-3) Bik tency —_2

: en a):
~~:
<7 a fafvin yer
jx AP
pale weary,

‘ 4 ore vealed rryl i) oNss fjtis


ig teleaee fae
tie
mie
mr A 7 ree po 2
a
Pas
_ nid t
ow) i(2)] = Gage. “a 7

<r
fine (y é :
=
¢° q &) Yeas —2 tus? ow ; Ooms ate
_ au) ae
wr) a or £ Sse > Vem:
om”

S. —_aioae as = $ oa A Oa ons yer

a] —s — one »@ s@ ae .
_ - -
SS . 5 a

- a) fag tincot le, Ge Bp Gy shi td Ripicd ih al


=

. - ‘(AWS A hela
7
f
x. 7
= _ v
De Sten by cananpellly bela on © Wee Pied.
Oe rh) eo.» © rc tee 7

> i; am ra a aged - aly =s “


a ie Prat 1, OnGh how ey i ov <

—— a A ;
The Green’s Function

4.1 CONSTRUCTION OF THE GREEN’S FUNCTION

In Example 5 of Section 2.5 we illustrated how a boundary value problem

d?y

y(a) =0 (1.34)
y(o= 0 (1.35)
reduced to a Fredholm integral equation
b
Teg Be | K (a, t)y(t)dt (1.36) (2.36)

with the kernel K(x, t) as

(OO age
K(a,th=9 (@- nee b) (1.37) (2.37)
' Getty SU
(b— a)
where we referred to it as Green’s function. Also, in Example 6 of Section 2.5,
we showed how a similar Fredholm integral equation, with kernel as a special case
of the above one in (2.37), reduces to a boundary value problem such as the above
one in (1.33)-(1.35). In this chapter we will consider a more general boundary
value problem, which can be reduced more readily to a Fredholm integral equation
165
166 Chapter 4 THE GREEN’S FUNCTION

with the help of the Green’s function associated with such a problem. Next we
shall familiarize ourselves with the Green’s function and the very basic (elementary)
methods of constructing it.!
The Green’s function method is one of the most important methods for solving
boundary value problems associated with nonhomogeneous ordinary or partial dif-
ferential equations. In this chapter we use the Green’s function (in Section 4.2) to
show again how a boundary value problem is reduced to a Fredholm integral equation
with the Green’s function as its kernel. First we present methods for constructing the
Green’s function for boundary value problems associated with nonhomogeneous dif-
ferential equations. Of central importance to this development is the study of a very
important special type of boundary value problem, the Sturm-Liouville problem. We
will give a brief presentation of this problem and show how its solutions are used in
an infinite series for another way of constructing the Green’s function. In Section 4.2
we will use the Green’s functions to reduce boundary value problems to a Fredholm
integral equation with the Green’s function as its kernel. A brief discussion with an
illustration, of reducing boundary value problems associated with partial differential
equations to two-dimensional Fredholm integral equations is given at the end of the
this section (in Section 4.1.4). The illustration involves the potential distribution in a
charged unit disc with grounded rim (see (4.62)-(4.69). Another very related illus-
tration is that of the potential distribution in a charged square with grounded edges,
which is the subject of Exercise 24 of this section. In this exercise we have very
detailed instructions for using the finite Fourier sine transform (see (1.115), (1.116),
and (1.121)) to reduce the partial differential equation with boundary conditions (in
two variables) to a nonhomogeneous ordinary differential equation with its bound-
ary conditions. Then the Green’s function of this section is used to solve the latter
problem.

4.1.1 Nonhomogeneous Differential Equations

Consider the boundary value problem associated with a nonhomogeneous ordinary


differential equation of second-order,

2
Ao (2) 5%+ r(x) + Aa(e)y = Ly = f(e),? G5 <b (4.1)

aiy(a) + azy'(a) = 0 (4.2)


Biy(b) + Boy'(b) = 0 (4.3)
where L stands for the differential operator (with A; (z) as real-valued functions with
continuous derivatives up to the order 2 — j, 7 = 0,1,2 on [a, b], and Ag(x) 4 0 on
[a, b]), and ay, a2, 3; and By are constants.

'For a complete treatment of the Green’s function, the interested reader may consult Stakgold [1979].
*In some books — f(a) instead of f(z) is written for the nonhomogeneous term of (4.1), which will bring
a (+) sign instead of the (—) in the final solution (4.5) of (4.1)(4.3).
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 167

To solve this boundary value problem we usually attempt to find a particu-


lar solution yp(z) for (4.1) and the general solution y;(z) for its corresponding
homogeneous equation,

Ao(t)
d*y
5
dy
+ Ai (zt)7 + Ao(x)y = 0. (4.4)

The general solution y,(x) of (4.1),

Yo =Yp tT Yh

is the superposition of the two solutions of the linear equations (4.1) and (4.4). In
general, it is difficult to find the particular solution for any arbitrary nonhomogeneous
term f(x) of (4.1).
The Green’s function method represents a general method of solving the boundary
value problem (4.1)—(4.3) where the solution is given as

b
OS / G(x,t)f(t)dt (4.5)
an integral in terms of the given nonhomogeneous term f(z) and the Green’s function
G(a,t). Note that some texts use —G(z, t) instead of G(z, t) in (4.5), it is to make
up for using — f(x) instead of f(x) for the nonhomogeneous term of (4.1). The basic
reason is convenience, which will become clear in the examples.
Before we illustrate the construction of the Green’s function, there is an important
particular case of the differential operator L in (4.1) with consequences that will
shed more light on the Green’s function of (4.5), such as its symmetric property
G(x,t) = G(t,x), and which will aid a great deal in the method of constructing it.
Here G is the complex conjugate of G, which we shall take as G' since we will often
work with G(z, t) as a real-valued function. The particular form of the differential
operator L in (4.1) is that of the self-adjoint form, which means that (uLu — uLv)dz
must be an exact differential dg = (vLu — uLv)dz for any two functions u(x) and
v(x) operated on by L. When the differential operator L of (4.1) is a self-adjoint one,
we will show that its associated Green’s function G(z, t) in (4.5), for the boundary
value problem (4.1)—(4.3), is symmetric [i.e. G(x, t) = G(t, x)] (see (4.25)).
We should point out now that while second-order differential operators can be
written in a self adjoint form; in general, this is not the case for differential operators
of order n > 2.
A very important example in applied mathematics of a self-adjoint differential
operator is the following second-order one?

roe —q(x)u(x), r(x) >0 (4.6)

3Here we should use L* instead of the same L of (4.1), but since we are going to work mainly with the
above L*, we shall designate it, for simplicity, as L.
168 Chapter 4 THE GREEN’S FUNCTION

which is used with the following well-known Sturm-Liouville problem (4.7)-(4.9),

Lut Ap(z)u(z) = < ror] + [-¢(z) + Ap(z)Ju(z) = 0, p(x) > 0 (4.7)

a,u(a) + agu'(a) = 0 (4.8)


G,u(b) + Bou'(b) = 0. (4.9)
with some (usual) regularity conditions on the coefficients in (4.6) [or (4.1)] as spelled
out at the beginning of Section 4.1.3 [and following (4.1)]. We may mention here
that in (4.7) is called “the eigenparameter", and when solutions u,(x) are found for
the problem (4.7)-(4.9), they are termed “the eigenfunctions" that correspond to the
eigenvalues \ = A, in (4.7). In this sense the boundary Value problem (4.7)-(4.9)
represents an important example of “an eigenvalue problem" that we shall return to
in Section 4.1.3. To be on the careful side we should use L, instead of L for the
above self-adjoint differential operator to differentiate it from the operator L used in
(4.1). However, since we are going to work mainly with the above one in (4.6) we
shall, for simplicity, use L.
In the next example we illustrate that the differential operator L in (4.6) is self-
adjoint, and follow it by a method of constructing the Green’s function.

Example 1 Self-Adjoint Operator


To show that the Sturm-Liouville differential operator L in (4.6) is in the self-
adjoint form, we must show that (vLu — uLv)dz is an exact differential [i.e., (vuLu —
uLv)dxz = dg).
If we use L from (4.6) above, we have

vLu-uLlv = vE[r(a)u!] + v[-q(x)Ju — u< (r(a)e'] — u[—q(x)]v

= oS ir(e)ul]' — uLfr(e)o'
het d ' (EY)
=vru" +r'u’ — ure" — ur'v'
=r{vu" —uv"} +r'{vu' — uv}.
But

<[r(a){vu! — we") r(a){v'u' + vu" — uv’ — w"} +r! (x){uu' — uv'}


=r{vu" — uv"} +r’ {vu' — uv'}
(E.2)
Hence from (E.1) and (E.2) we conclude that

d
Lu —— uLv
vLu uLv = qa— ln(e){vu =euv }],

(vLu — uLv)dz = d[r(z){vu' — uv'}] = dg (E.3)


4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 169

which means that (vLu — uLv)dz is an exact differential with g(x) = r(x){vu' —
uv’ }, and hence can be integrated to give

b b

/ (vLu — uLv)dz = [r(x)(vu' — uv')] (E.4)


a

From now on we will assume the self-adjoint form of the second-order differential
operator L of (4.6) instead of that in (4.1).
In this example we merely showed that the form in (4.6) is the self-adjoint form.
Indeed it can be shown that any second-order differential operator such as L in (4.1)
can be made self-adjoint by multiplying it by

rit: le. A,(a)


ple) TENE TES exp (/Ap(a) ar); Ao(x) #0 (E.5)

which we shall leave for an exercise with a detailed hint [see Exercise 25(a)]. More-
over, in general, differential operators of order n > 2 are not necessarily self-adjoint.
To illustrate this point we state that the simple third-order differential operator L of
u
LTu= = + wis not self-adjoint, while the fourth-order operator L in Lu = —> +u
is self ise We will show the first case here, and leave the case of the fourth- ae
differential operator for an exercise (see Exercise 25(b)).
To show that L in Lu = u’” + w is not self-adjoint, we write vLu — uLv, then
add and subtract terms, which result in only the major parts of (vLu — uLv)dz as a
sum of exact differentials,

vLu—-—uLvy = vu" — uw"


Sout =u! =e" ul bela!
+0" u = oa — uy"
= (vu)! — (v'u')! + (v"u')!= 20"
d
= —[vu" — v'u' + 0"u']— Qua.
dx

Hence (vLu — uLv)dz is not an exact differential because of the last term .

4.1.2 Construction of the Green’s Function — Variation of Parameters


Method

In this section we use the method of variation of parameters to construct the Green’s
function G(z, t) for the integral representation (4.5) of the solution of the nonhomo-
geneous boundary value problem (4.1)—(4.3) with L as in (4.6). The result will show
clearly that the Green’s function associated with the self-adjoint differential operator
L of (4.6) is symmetric. With the aid of this and other basic properties of the Green’s
function we are often able to construct the Green’s function without having to go
through the full details of the analytic method. We will illustrate both methods with
simple examples.
170 Chapter 4 THE GREEN’S FUNCTION

The method of constructing the Green’s function [and hence solving the nonho-
mogeneous problem 4.1 (with L as in (4.6)), (4.2) and (4.3)] depends primarily on
the solutions of the associated homogeneous problem (4.7)—(4.9) in the sense that
they both have the same differential operator L.
Let v;(z) and v2(z) be two linearly independent solutions of the associated
homogeneous equation (4.7). The variation of parameters method assumes the form

u(x) = wy(x)v;
(x) + wo(x)v2(z) (4.10)
for the solution u(x) of the nonhomogeneous problem (4.1), where the unknown
variable coefficients (parameters) w(x) and w2(x) are to be determined via this
method.
For now we will assume that neither of the solutions v;(x) or v(x) of (4.7)
satisfies both boundary conditions (4.2) at = a and (4.3) at f = b. A simple
intuitive reason for this assumption can be found by looking at the shape y(z) of
the hanging chain in Figure 2.2, where the solution of the nonhomogeneous problem
with its nonhomogeneous external force F(x) consists of two different straight lines,
the one for 0 < x < € satisfying (only) the boundary condition y(0) = 0 at x = 0,
and the one for € < x < I satisfying the boundary condition y(l) = 0 at the other
Cla al
The analytical reason behind this assumption—of not allowing either of v;(x) or
v2(zx) to satisfy both boundary conditions—is that if one of them does say v2 (x), then
by following the same method of construction of the solution as that we are about to
use, we can show that we end up with an extra condition,

b
i vo(x)f(x)dx = 0 (4.11)

that contributes to the nonuniqueness of the final solution u(x). To stay with our
aim, of constructing the Green’s function, we would rather not deal with (4.11) for
the present, and we leave it as an exercise [see exercise 20(a) with it’s very detailed
leading steps].
To prepare for making u(x) of (4.10) a solution to the nonhomogeneous equation
Lu = f with the self-adjoint differential operator L as in (4.6), we first find the
derivative of u(x) in (4.10),

u' (x) = wi (x)v;4 (x) + we(x)vy (x) + wi (z)v1 (x) + wh (ax)v2(z). (4.12)
A very important step in the method of variation of parameters is to reduce the
expected second-order differentiation of the operator L, on the unknown functions
(parameters) w;(x) and w2(2) to a first-order differentiation. This is accomplished
by assigning to zero the last two terms involving w} (x) and w}(a) in (4.12),

wy (x)v; (x) + wo(x)v2(x) = 0 (4.13)


leaving u'(x) of (4.12) free of the first derivatives wi (x) and w}(z),

u(x) = wy(x)v; (2) + we(x)vy (2). (4.14)


4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 171

When this u‘(x) is substituted in the nonhomogeneous second-order differential


equation Lu = f of (4.1) with L as in (4.6), the result is a linear equation (4.15) in
w(x) and w}(x), which we can combine with the first equation (4.13) to solve for
w; (x) and w(z),

= w1(z) {+ roy | — q(x)v1 )}

ta(a) {5 [r(0) G2] — ala)en(a)}

= 040+ r(x)[w; (x)v}


(x)+ w2(x)v3(x)] = f(a),

(4.15)
after using the fact that v; (xz) and v2(z) are solutions of the homogeneous equation
(4.7) to make the foregoing coefficients of both w; (2) and w2(z) vanish.
In (4.13) and (4.15) we have the main result of the variation of parameters method
as two simultaneous equations in the first derivatives w} (x) and w4 (a) of the unknown
variable parameters w(x) and w2(z),

w;(x)v; (x) + w9(x)v2(x) = 0 (4.13)

w} (204 (2) + wh(a)v4(a) = 52). (4.15)


The solutions to these equations are

Oe oe (4.16)
f (z)u1(2 (4.17)
2
172 Chapter 4. THE GREEN’S FUNCTION

Before we integrate to find w;(zx) and w2(z) we will take advantage of the fact that
the differential operator L of (4.7) is self-adjoint to show that the denominator in
(4.16) and (4.17) is a constant.
In the preceding section (Example 1) we showed that L is a self-adjoint operator,
which means that for any two twice-differentiable functions u and v, we have

(vLu — uLv)dx = d[r(x){v(z)u' (x) — u(x)v' (x) }]


as an exact differential.
If we use v(x) and v2(x) here, they are also solutions of Lu = 0, which will
make the left side vanish,

0 = vyLv2 — v2Lv = “Ir(a){v1 (v3 (a) ~ v2(x)v; (2)}]


where upon integration we have the desired result,

r(x) {vj (x)v5 (x) — ve(x)v;,(2)} = B= const (4.18)


for the denominator in (4.16) and (4.17),

w(x) = —Ff(a)v2(2) (4.16")

wh(2) = FF(2)e (2. (4.17"


If we integrate these two equations, we obtain the variable coefficinets w,(a) and
w2(x) of the solution in (4.10),

n=-5 [Heme (4.19)


wl) = 5| s@m (Ode (4.20)
where c; and c2 are arbitrary constants to be determined from the implication of the
boundary conditions (4.8), (4.9) on w; (x), and w2(a), respectively, as seen in (4.22),
(4.23). The result is that these arbitrary constants are chosen as c; = a and c2 = b.
To find such appropriate conditions will depend on the earlier basic assumption
that neither of the solutions v; (x) and v2(x) satisfies both boundary conditions (4.8)
at x = a and (4.9) at x = b. These boundary conditions are, of course, to be satisfied
by the final desired solution u(x) in (4.10) of the problem 4.1 (with L as in (4.6)),
(4.2) and (4.3). For the boundary condition (4.8) on u(x) at z = a, we have

ayu(a) + agu'(a) = a;[wi(a)v; (a) ‘w2(a)v2(a)}

)
+a2v5(a)] = 0
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 173

where we used (4.14) for u’(a).


If we assume in (4.21) that v2(x) satisfies the boundary condition (4.8) at z = a
[1.€., @1V2(a) + a2v5(a) = 0] while v1 (x) does not [i-e., a1v1 (a) + av} (a) F 0),
then (4.21) gives

w1(a)[a1
v1 (a) + a2v;(a)] = 0

which forces the boundary condition on w,(z) to be w;(a) = 0, which is what


we need for determining the arbitrary constant c; involved in the solution w; (x) of
(4.19). If we apply this condition on (4.19), we have

ula) =0=-5 |s@m(de =0


which is satisfied if we choose the arbitrary constant c; = a,

wi(a) =F | Fleen(Ode (4.22)


1 az

If, on the other hand, we assume that only v;(z) satisfies the boundary condition
(4.9) at x = b, steps similar to those of (4.21) yield

Byu(b) + Bou'(b) = Bi [wi (b)v1 (6) + w2(b)v2(b)] + B2[wi (b)v4


(d)
+w(b)v(d)]
= w1(b)[B101(b) + Bor; (6)] + we (b)[G1v2(b) + B2v9(d)]
= 0 + w2(b) [G1 v2(b) + Bev5(b)] = 0,
w2(b) = 0

since 3 v2(b) FP Bov5 (b) oe 0.


If we apply this boundary condition on w2(z) in (4.20), we have

wold) = | F(ealé)ag =0
1 b

c2

which is satisfied if we choose the arbitrary constant cp = 6,

wla)= Gf HOuleag=—F | fOmOde. (4.28)


x b

With the variable coefficients w; (x) in (4.22) and w2() in (4.23), the final solution
(4.10), of the nonhomogeneous differential equation 4.1 (with L as in (4.6)) with its
associated homogeneous boundary conditions (4.2) and (4.3), becomes
174 Chapter 4 THE GREEN’S FUNCTION

Ne) =oe ) + we(x)v2(z)

= zu) (ae
| £6)va(E pote) [$0 v1 (E)dé

~- avonpoke
coo |
GEN (4.24)

--[ G(a, €)f

where G(z, €) is defined as the Green’s function with its two branches,

1
Bir (a)v2(6), E<a<b
G(x,€) = (4.25)
sur(ti(g), asasé
Basic Properties of the Green’s Function
From this expression for G(x, €) of (4.25) with the constant B in (4.18), we will
show the following basic properties of the Green’s function:

(a) It is clear that the Green’s function in (4.25) is symmetric, that is,

G(x,€)= G(,2)
This, of course, is dependent on B of (4.18) being a constant, which is a direct
consequence of the differential operator L being self-adjoint.

(b) The Green’s function satisfies the boundary conditions (4.8) and (4.9) since vj (x)
of its first branch in (4.25) satisfies the condition (4.9) at x = b, and v2(x) in
the second branch satisfies the boundary condition (4.8) at x = a.

(c) G(z, €) is clearly continuous on the interval a < x < b; however, its derivative
OG(zx, €)/Ox has a jump discontinuity at z = € which is

OG
aoe
Dz ety 2 8)| mylene: (4.26)

where r(x) is the coefficient of u’’(z) in (4.7).


This can be proved by the use of the first branch (x > €) and the second branch
(x < €) of the Green’s function (4.25) for the above right-hand and left-hand
derivatives of G(z, €),

o=§_
(x<€)
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 175

E J[v1 (€)en(6) — v5(€)rr (€)]


inal | B | 1
Bl r(é) r(g)
after using (4.18) for the B constant value of the factor in brackets above.
Property (c) warns against expecting a second derivative for G(x, €) at x = €;
however, the second derivative does exit away from this point, as v(x) and
v2(x) of (4.25) are indeed the solutions of Lu = 0. Hence we can conclude
that

(d) G(z, €) as a function of x satisfies the homogeneous equation, except at x = €,

LG(z,f)=0, «r#€ (4.27)


At z = &, we have LG(z,y) = 6(a — y), where 6(x — y) is defined as the
Dirac delta function satisfying:
i) d(@—€) =0,2 Fy
i) [ He— de =1 Kh, le = El <e
Re

a iE6( — €)F(6)dé = F(a)


for arbitrary continuous function F(z) in the region R: a < x < b. The first
two properties i) and ii) of the Diract delta function d(x — €) may allow the
simple “popular" (not exact!) interpretation that 6(2—y) is some “distribution,”
unlike the usual function, which is zero every where, but spikes at x = € ina
very narrow neighborhood of width 2€ around z. Property iii) says that this
very narrow spike, effectively selects the single value F(x) of the integrated
function F'(€) besides it inside the integral, where F(x) is seen as the output
of the integral J,, d(x — €)F(€)dé.
We will start our illustrations by using the direct method of arriving at G(z, €)
in (4.25), then follow it by an example of using the four important properties (a)—
(d) of the Green’s function for a faster way of constructing it. But, before we do
that we would like to discuss how this treatment of the Green’s function, which is
primarily aimed at solving boundary value problems, can be modified to treat initial
value problems. This is followed by the discussion and illustration of boundary value
problems associated with differential operators of order n > 2, and those with mixed
boundary conditions that are not covered in the boundary conditions (4.8) and (4.9) of
the present Sturm-Liouville problem, associated with the second-order (self-adjoint)
differential operator L in (4.6).
176 Chapter 4 THE GREEN’S FUNCTION

Initial Value Problems

Our present treatment of finding the Green’s function of the homogeneous bound-
ary value problem (4.7)—(4.9), is usually aimed at solving the same boundary value
problem with nonhomogeneous differential equation (4.1)—(4.3), with the particular
solution as given in (4.5). In parallel to this treatment we may inquire about the
initial value problem with the conditions

u'(a) = 0 (4.29)
and the second-order nonhomogeneous differential equation (4.6).

d du
Iu = — rors] — q(x)u(x) = f(z) (4.30)

with the same differential operator L as in (4.6). We will show, with few modifications
of the above method that the function R(z, €), similar to the Green’s function, is also
used in an integral similar to that of (4.24) to give the solution of this initial value
problem as

u(a) = [” R(x, €)f(Odé, (4.31)


- Flv:
1 (x)v2(€) — ve(x)u1 (€)]
R(x,€) \|
(4.32)
Jlva(a)o1 (€) — v1 (x)v2(
€)]

where B is a constant as given in (4.18).


For the solution u(x) in (4.10), the first initial condition (4.28) gives

u(a) = wi(a)v1 (a) + we(a)v2(a) = 0, (4.33)

and the second initial condition (4.29) on u'(«), after employing (4.14) gives

u'(a) = wi(a)v} (a) + w2(a)v}(a) = 0, (4.34)

With v1 v4 — v, v2 F O, this system of equations (4.33) and (4.34) in w; (a) and we(a)
gives the trivial solution w;(a) = 0, w2(a) = 0. The first result w,(a) = 0 gives
w(x) as we had already in (4.22),

wi(a) =— Ff F(@)valeae (4.35)


The second result w2(a) = 0, which is what matters here for the initial value problem
is satisfied if we choose the arbitrary constant cp = a in (4.20),
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 177

w(a) =H f fm (ede =o. sf


1 a

4.36

wa) = Ff Heri(eae
So, if we use w;(x) from (4.35) and we(x) from (4.36) in (4.10), we obtain the
solution in (4.31) and (4.32)

(2) i= untavene) + we(x)v2(x)

=-5 [ n@m©-n@nOsOde — gn
it” R(,
€)f(dé
where we have R(x, €), the Green’s function (-like) for the initial value problem
(4.30), (4.28), (4.29), as we stated it in (4.32).

Example 2 Green’s function (-like) for an initial value problem.


Consider the following initial value problem in u(z),

ay f
lu= aa +X u= f(z), ip al (£.1)

u(0) =0 (B.2)
u'(0) =0 (E.3)
The two linearly independent solutions to the homogeneous equation

du
qa tr u=0, (E.4)

are v; (xz) = sin Az and v(x) = cos Az. Here r(x) = 1, and from (4.18) we have

Bi =r(x)[vi (x)vy(x) — v2(x)r (2)]


lI 1[—A sin Az sin Ax — A cos Az cos Az] (E.5)
=—-,).
From (4.32), the Green’s function (-like) R(«, €) for this initial value problem (E.1)—
(E.3) is

R(x, 6) = - pl (2)n(©) - w@)n()]


Dye [sin Ax cos Ag — cos Aa sin Ag] (E.6)
sin A(x — €)
aN
So from (4.37) the final solution to the initial value problem (E.1)-(E.3) is
178 Chapter4 THE GREEN’S FUNCTION

ue) = f° PAE! peas (E.7)


To verify that this u(z) is the solution to (E.1)—(E.3) we first need to prepare UAGe)
for (E.3) and u"(z) for (E.1). For the integrand g(x,€) = sin MoS) F(E) in (E.7), it
is best that we appeal to the generalized Leibnitz rule of (1.53),

wa) =f ZF)
A
a
sin A(x — 2) dx sinA(x
— 0) do
ne x ae ee ee es
=| Aces MG= 8)Fede + 0-0 (E.8)

= lacos A(x — €) f(€)d€.

uli) = [OZleosr(e
-§)F(6))
;
+ cos A(x — f(a) — cos A(x — 0) f(0)—
dO (E.9)

=-A [sin(w— 6)F(@)de + le) -


So if we substitute this u’(a) in (E.1) we have
x

u'+\2u =-A] sinrA(z- ah€)dé + f(x)


=
Nye
ee ga

=f
—X si
sin N(xA(x ——€) £)df(€)dE
+f(2)
+A / sinX(x - €)d€é
o

f(z)
where (E.1) is satisfied. To show the first initial condition (E.2), we use u’(x) of
(E.8) for zc = 0 to have

0
u'(0) = [cos
0
r(0 ~ €)f(6)ag = 0
(E.1) is clearly satisfied as we substitute z = 0 in (E.7),

a ° sin X(0 — €) ra
u(0) = ifeae 0s
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 179

Higher Order Differential Equations


Before we start illustrating the method of constructing the Green’s function for
boundary value problems, we will make two remarks concerning very important
points. The first is the question of the existence of a unique Green’s function for the
boundary value problem, a result that we shall state in the next Theorem | without
proof. The second question regards the possible generalization of this treatment to
boundary value problems associated with nth order differential equations.* Indeed
the existence theorem for the Green’s function applies to homogeneous boundary
value problems associated with nth order differential operator, where our above
problem of second-order becomes a special case. Let us, then, consider the nth order
differential operator Lp,

d 12) d™—ly
eA ala)d=
Lay = Ao (a) = + Aj(2) ay 0) 4(4538)
dr”™—1
dx
instead of the second-order differential operator L in (4.1), and, instead of the two
homogeneous boundary conditions in (4.2) and (4.3) we need the following n (linear,
independent) homogeneous boundary conditions, as applied to the solution y(z) and
its first nm— 1 derivatives atx = a and z = b, ie.,

n—1)
Bey = axy(a) + ajy'(a) +--- Fatty Ya)
+ Buy(b) + Buy’(b)+--+ BP yD (b) = 0, (4.39)
| som Ui 5c af)

where B,, k = 1,2,---,n stands for the operators of these n boundary conditions.
In (4.38) the coefficients A;(x), j = 0,1,2,---,n are real-valued functions with
continuous derivatives up to the order n — j on (a, b], and Ao(x) 4 0 on {a, 8].
We will soon list the four basic properties of the Green’s function G(, €) asso-
ciated with the homogeneous boundary value problem (4.38) and (4.39). With such
Green’s function we are able to obtain a Fredholm integral equation representation
(4.41) in u(x) for the following nonhomogeneous problem associated with the nth
order differential operator L,, of (4.38),

Lyru + Ap(x)u = f(z), QE 01<b (4.40)


and (the homogeneous) boundary conditions (4.39),

b b
ua) =f Gtees@de +a f Glas)oQuode.
a a
(4.41)
We may mention that a good sign for guarranteeing the existence of a unique Green’s
function for (4.41), is that the homogeneous problem (4.38) and (4.39) should have
no solution but the trivial one. So we will assume that the general homogeneous

4 Optional
180 Chapter 4 THE GREEN’S FUNCTION

boundary value problem (4.38) and (4.39) has only the trivial solution in order to
guarantee the existence of its unique Green’s function. As to the basic properties
of this Green’s function, we must bear in mind that, although the second-order
differential operator can always be put in a self-adjoint form, by using a form of an
integrating factor [as shown in (E.5) of Example 1], it is, in general, not the case
for differential operators of order n larger than two. This is, possibly, the reason for
not seeing the symmetry property of the Green’s function at the top of the next list.
The rest of the properties follow in parallel to those of the second order differential
operator. The ordering of the following properties is influenced by bringing to focus
the jump discontinuity property of the Green’s function in (4.41).

(i) G(x, €) is continuous, and so are all its derivatives with respect to x up to the
order (n — 2) on the interval a < x < b. This leaves the expected jump
discontinuity for its (n — 1)th derivative as follows: *

(ii) The (n — 1)th derivative of G(x, &) with respect to x at the point x = € has a
jump discontinuity of magnitude 1/Ao(z), i.e.,

0" G(z, 0" 1G(z, 1


ser eee(=>8)
eet es (2<é)
Rt pe rere
with Ag(z) as in (4.38).

(iii) The Green’s function satisfies the n homogeneous boundary conditions

By.G = 0; ki =e, een (4.39a)


with (the boundary operators) By, as in (4.39).

(iv) In each of the two subintervals a < x < € and € < x < b the Green’s function,
as a function of 2, satisfies the nth order homogeneous differential equation
(4.38).

EGlx,&) = 0, Depae (4.38a)


For the lack of space, we will present in Example 5 only the final result of a
boundary value problem associated with a third-order differential equation.

Example 3 The Hanging Chain or the Shape of Elastic Thread


Consider the problem of the hanging chain under the influence of the external
force F(x). This is a static problem where we can show that its displacement y(z)
satisfies the differential equation

—~ = —-Fi(a) (£.1)
with boundary conditions
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 181

y(0) =0 (E.2)
y(l) =0. (E.3)
The justification for the differential equation (E.1) can be taken from the special
static (time-independent) case of the well-known wave equation of the vibrating
string in its small vertical displacement u(z, t),

du 1 du = —F(z)
Ox? = c?.-: OF.
(E.4)
where for the time-independent static case u(x, t) = y(x) it becomes

dy
= —F(z) (F.1)
dx?
c in (E.4) is the velocity of the wave. The differential operator L = d*/dz? is
self-adjoint as a very special case of L in (4.6) with r(x) = 1, q(x) = 0, and the two
linearly independent solutions of the associated homogeneous differential equation

dy
Dy oe 0 (E£.5)

are | and x. Note how we avoided for the moment calling these solutions v (zx)
and v(x). The reason is that, in addition to v;(x) and v2(x) being two linearly
independent solutions of (E.5), they are also committed [in (4.25)] to satisfying the
boundary condition v2(0) = 0 at = a = 0 and vy (l) = Oatx =b=1. We
note here that we can have v2(z) = x, which satisfies the first boundary condition;
however, v; (xz) = 1, as is cannot satisfy the second boundary condition v; (1) = 0.
So for vi (x) we may consider a linear combination v1 (x) = c1x + C2 of the two
linearly independent solutions x and | and choose the arbitrary constants to satisfy
the boundary condition v; (J) = 0. An obvious choice is to let cp = | and c; = —1
for v;(z) = | — x, which is still a solution of (E.5), but now it also satisfies the
boundary condition v; (J) =! —1=0.
With vo(x) = x and v; (x) = 1 — x we will find the constant B of (4.18) for the
Green’s function in (4.25),

B =r(a)[vi(z)v9(x)
— v2(x)v; (z)]
my; ne OP ea AS, earl)
so that the Green’s function of (4.25),

+», (x)v2(€), 026525)


G(z,6é)=% B (4.25)
ula), 0S e<€<I
182 Chapter 4 THE GREEN’S FUNCTION

becomes :
jél—2), 0Sé<es!
G(x, €) = (E.7)
Fal), OS2SESI.
This is the same form of G(z, €) in (2.26) and Figure 2.2 for the shape of an elastic
thread (under constant horizontal tension Ty = 1), which is due to a single vertical
force of unit magnitude F' = 1 that is placed at x = €,0 < x < I. The derivation
of G(x, €) in (2.26) was based on simple balance of vertical and horizontal forces,
and the geometrical shape as seen in Figure 2.2. We also saw in (2.37) of Example
5 in Chapter 2, a similar Green’s function to that of G(a, €) in the above equation
(E.7). There we showed that the boundary value problem (1.33)—(1.35) reduces to
the Fredholm integral equation (2.36) with its kernel as K (a, t) in (2.37). Now we
recognize this kernel K (x, t) as the Green’s function of the boundary value problem
(1.33)—(1.35).
Finally, the solution to the boundary value problem (E.1)-(E.3) is obtained from
(4.24) with f(~) = —F(ax) and G(z, €) as in (E.7),

u(2) = [ Ge, )F (dé. (E.8)


As we indicated in the above Example 3, the expression for the Green’s function
(4.25) is not explicit enough; it is still left for us to make sure that v2(z) satisfies the
boundary conditions (4.8) at z = a only and v, (2) satisfies the boundary condition
(4.9) at x = b, only. For the special boundary conditions

1 (@) =. 0 (4.43)
u(b) = 0 (4.44)
we give the more explicit result for the Green’s function and leave its derivation for
an exercise [see exercise 11(b)]

1
Fp lv2(a)e1 (a) — v1 (202 (a)]fo2(€)v1(6)— v1 (€)r2(d)),a< 2 <
G(z,€) = ;
Bp 2 (8)u1 (a)— v1(§)v2(a)][v2(w)o1 (6) = v1(w)v2(b)],€ Sa <b
(4.45)
where D = v2(a)v1(b) — v1 (a)v2(b) # 0, and B as in (4.18).
Again, the condition D # 0 also guards against v;(x) and or v(x) satisfying
both of the boundary conditions (E.9) and (E.10). With this more explicit formula of
Green’s function in (4.45) we can now solve the problem in Example 3 more directly
with v; (2%) = 1, ve(2) = 2, where B = 1 since
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 183

and

D = w(0)m (1) — 1 (0)vo(1) =0-1—1-1


=.
If we substitute these values of v1 (x) = 1, ve(x) = x, B = 1 and D = —1 in (4.45)
we have

—Fle-1-1(Oe-1-1-1, O<a<e
G(x,€) II

-F[€-1-1@)fe-1-1-0, Exes

G(z,€) II

which is the same answer (E.7) as in Example 3.


As we mentioned earlier, we will illustrate next the use of the four basic properties
of the Green’s function, instead of the explicit formulas (4.25) and (4.45). This
is a very useful and familiar method which is often resorted to in the absence of
explicit formulas (an example is the case of more complicated higher dimensional
geometries). This method will be followed by another more general method of using
infinite (orthogonal) series expansion to represent the Green’s function.

Example 4 Construction of the Green’s Function—Using its Basic Properties


Construct the Green’s function for the boundary value problem

oY 4 by = F(a), CA0M Oar (E.1)


y(0) =0 (E.2)
y(1) = 0. (£.3)
In this example we consider the case of (E.1) with b # 0; in Example 3 we
illustrated the construction of the Green’s function for the important special case of
b = 0 in (E.1). First we note that the differential operator L = (d?/dz?) + 6? is
avid
self-adjoint since it is in the form ae (#4) + by of (4.7) with r(x) = 1,q = —b?.
Hence the Green’s function for (E.1) is symmetric.
184 Chapter 4 THE GREEN’S FUNCTION

In an attempt to construct the Green’s function for the boundary value problem
above, we first investigate its corresponding homogeneous boundary value problem,

d’y
Apa 2 =O Olea el (E.4)

y(0) =0 (E£.2)

We shall first use property (b) (following (4.25)) of the Green’s function satisfying
the (homogeneous) boundary conditions (4.8) and (4.9). Clearly, sin bx and cos bx
are the two linearly independent solutions of (E.4) with b # 0. We know that
sin bx and sin b(1 — 2) are also two linearly independent solutions of (E.4) with the
added advantage that sin b(1 — 2), instead of cos bz, satisfies.the boundary condition
(E.3). Hence we may use either v;(x) = sin bz, vo(x) = cos bx or vi (x) = sin ba,
v2(a“) = sin (1 — x) ina linear combination to construct the Green’s function. Here,
for convenience, we adopt the latter choice, and use it in (4.10) to write

G(a,&) = w1(€) sin bx + w(€) sin b(1 — 2). (£.5)


We leave it for the reader to show that to satisfy (E.3), the first choice will give the
same result for the Green’s function [see Exercise 2(b)].
To apply the boundary conditions (E.2) and (E.3) on (E.5) we must consider two
cases:
(a) The case of 0 < x < €, where we let x = 0 in (E.5) to satisfy (E.2):

(b) The case of € < x < 1, where we let z = 1 in (E.5) to satisfy (E.3):

G(1,€) =wi(€) sind + we(€)sin0 = w(€) sinb = 0, ols) =U),


G(v,€) =w2(é)sinb(l—2), €<2<1.
(E.7)
The results (E.6) and (E.7) exemplify the two branches of the Green’s function [i.e.,
(E.6) to satisfy the boundary condition at x = 0 and (E.7) to satisfy the boundary
condition at z = 1]. Hence from (E.6) and (E.7) we have

w,(€) sin be,


Cpe {wo(é)sinb(l—z), 0<aK<eé
€<a<1. (E.8)

Now we use the symmetry property (a): G(z,é) if = G(€,x) of the Green’s
function; a clear choice for the arbitrary functions w (€) and w2(£), to make G(s, €)
in (E.8) symmetric in x and €, is w;(€) = C sin b(1— €) and w2(€) = C sin bé; (E.8)
becomes
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 185

Csinb(1—€)sinbz, 0<a2<€
OSs {Csinb€sinb(l—2z), €<a< 1. ees)
To evaluate the arbitrary constant C in (E.9) we use property (c) for the jump condition
(4.26) of the derivative 0G/Oz,

OG(a, €) _ OG(z,€) ol
Sy woes Ci Pia r(€)

= (C sin bf)(—bcos b(1 — x) |2<¢, — (C sin b(1 — €))(bcos ba) |g

= —Cbsin
bé cos b(1 — €) — Cbsin b(1 — €) cosbé = —1,

Cosin
b(€ + 1 — €) = Cbsinb= 1, = — (£.10)

Note how we used the second and first branches of G(x, €) in (E.9) for x = €4 and
Qos, Tespectively, since @ = C4. 4>-€ is inthe domain é = 7 <A.anda == < €
is in the domain 0 < x < €. From (E.9) and (E.10) the final form for the Green’s
function is

sin b(1 — €) sin bx


SUES OE
Gee) — bsinb (E.11)
sin bé sin b(1 — z) es
bsinb Ce ae ones
We will leave it as a simple exercise to show that G(z, €) of (E.11) satisfies (the
initial part of) property (c) of being continuous, and satisfies condition (d) in (4.27)
by satisfying the homogeneous boundary value problem (E.4) for z # €, (E.2), and
(E.3) (see exercise 2).

Higher Order Differential Equations—An Illustration


To summarize, our above detailed treatment and illustration was concentrated,
mainly, on the construction of the Green’s functions associated with second-order
differential equations on (a,b) and general boundary conditions at x = a and b
as given in (4.1)-(4.3). The basic method of variations of parameters was used to
generate the Green’s function in (4.25) and was illustrated in Example 3 for the
simple boundary conditions y(0) = 0, y(/) = 0. From such analysis we developed
four basic properties (a)—(d) of the Green’s function, which were also used as a
more efficient way of constructing the Green’s function as illustrated in Example
4 for another second-order differential operator and the same boundary conditions.
In the following Example 5 we will present the final result for a simple third-order
differential equation with a different set of boundary conditions, and where Theorem
1 is used to first test for the existence of the unique Green’s function.
186 Chapter 4 THE GREEN’S FUNCTION

Example 5 Green’s Function—A Boundary Value Problem Associated with Third


Order Differential Equation
Consider the following boundary value problems, associated with a third-order

differential
ifferential equation,
equation, (L (Ly == =),
1%)
3
OE We oy esa <a (E.1)
dae
or p
Y
aes AU ee Ole
y(0) =0 (E£.2)

y'(0) = y'(1). (E.4)


The method of Section 2.5, with multiple integration (using the identity (1.52)) and
the application of the boundary conditions, is used to reduce this problem to the
Fredholm integral equation of the second kind

Wa) = | G(a, t)y(t)dt + ale —1)(a@? + 2—1) (E.5)

with the Green’s function as

G(z,t) = 2 =) ie (E.6)
=Ht= 2) et) $v le
With the statement made regarding the existence of a unique Green’s function,
we have the chance now to test it for the problem associated with the third-order
differential operator L = d?/dzx? in the equation
_ ay =0 (E.7)
dz
and its homogeneous boundary conditions in (E.2)-(E.4). All we have to do is to
show that the problem (E.7), (E.2)-(E.4) has only the trivial solution y(x) = 0. The
solution to (E.7), after three integrations is

a2
ViGi) ae > ter + cs. (E.8)
If we use the boundary condition (E.2) we have y(0) = cz = 0, y(x) = (c,/2)x? +
cox and from (E.3) we have

il
yie= ie + co = 0, Ci = —2¢2,

1 1
y (x) = san = poly
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 187

1
If we use (E.4) on y'(x) = cy2 — 5c1 we obtain
j 1
y (0) SOS cr, Cac = 0.
So the three conditions result in c) = c2 = cz = 0. Hence the solution to the
homogeneous boundary value problem (E.7), (E.2)-(E.4) is y(z) = 0, a trivial
solution. Therefore our problem will have a unique Green’s function that we shall
leave its construction for an exercise.

4.1.3 Orthogonal Series Representation of Green’s Function

Next we will develop the method of series representation of the Green’s function.
Such series expansion of G'(z, ) is in terms of the solutions {u,,(x) }°2, of the asso-
ciated homogeneous problem (4.7)-(4.9). We will first discuss the basic properties of
these functions and their series expansion, which are very necessary for developing
this method of constructing the Green’s function.
An extremely important result of the Sturm-Liouville problem (4.7)-(4.9) and
its self-adjoint operator (as an eigenvalue problem) is that under the conditions, on
the (regular) differential operator L of (4.7), that r(x), r'(x), q(x) and p(x) are
continuous on the closed interval a < x < b, and that r(x) > 0, p(x) > 0 on
{a, b], the solutions {u,,(x)} (or eigenfunctions) of the Sturm-Liouville problem are
orthogonal. By orthogonality of {u,(x)} on the interval (a,b) we mean that for
any two different solutions u,(x) and u(x) (of (4.7)-(4.9)) the following integral
vanishes:

ifPOU ALC, (eda —10, n#m (4.46)

where um (x) is the complex conjugate of un (zx).> Here p(zx) is the (weight) function
appearing in (4.7) and a and Db are the limits of the interval on which the problem
(4.7)-(4.9) is defined. Next we illustrate how the orthogonality property of the
solutions {u,(xz)} can be employed in expanding given functions in an infinite
series of these orthogonal functions—hence the name orthogonal or Fourier series
expansion—which we will use in determining the solutions of certain Fredholm
equations in Chapter 5 (in particular, Fredholm integral equations with symmetric
kernel in Section 5.2.)
The importance of the orthogonality of functions is not limited to series expansion
but, as we will see in Chapter 5, will be used as a condition for some of the theorems
proved concerning the Fredholm equation. For example, we may need to investigate
whether the nonhomogeneous part f(z) in the following Fredholm equation
b
u(x) = | K (a, t)u(t)dt + f(z)

5In most cases we will have real-valued functions wm (x), where Um(x) = Um(Z).
188 Chapter 4 THE GREEN’S FUNCTION

is orthogonal to the solutions up,(a) of the associated homogeneous equation

Un(2) = An iSe Os
Also, sometimes we may speak of whether A(z, t) is orthogonal to a function or
even to itself.

Sturm-Liouville Problem and the Orthogonal (Fourier) Series Expansion


The orthogonality (4.46) of the solutions of the Sturm-Liouville problem can be
easily proved, which we leave as an exercise. We concentrate instead on its very
important role in applied mathematics for determining the coefficients c, of the
orthogonal or Fourier series expansion

iG) ye crune) (4.47)


Qa

of (usually sectionally continuous) or square integrable functions f(x), defined on


the interval (a, b), in terms of the orthogonal set of functions {un (x) }°2).
In the next example we show how the orthogonality (4.46) is used in determining
the form of the coefficients of the orthogonal series expansion (4.47) as
b
Sq
=
P(t) f(z) un (x) da (4.48)
Jaq P(a)uz, (x)dx
We mention here for future reference that the integral in the denominator of (4.48) is
called the norm square of the function u,(x) and is denoted by
b
unl? =f ole)ud eae, (4.49)
When the orthogonal functions u,,(x) are divided by their norm ||up||,

Un (x)
bn(z) =
||un||
the resulting functions ¢, (x) are called orthonormal functions; it is easy to show that
their norm is 1,

[: o(aei(e)az
mavieg
= / Ae) Teale
Pe. : uz (x)

=Heal
= heffolaink(e)de ~ [Iuall?
1
=1

after employing (4.49).


Also, as we mentioned earlier, the solutions u,(x) to (4.7)-(4.9) are called the
characteristic functions or eigenfunctions of the differential operator, for example,
the operator L, in (4.7),
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 189

Tyup(x) = < Po + q(x)Un (x) = —Anp(x)Un(z) (4.50)


has the characteristic functions or eigenfunctions u,(z) with A, as their corre-
sponding characteristic values or eigenvalues. We will often refer to the orthogonal
expansion (4.47) and (4.48) as the eigenfunctions expansion.

Example 6 Orthogonal (Fourier) Series Expansion


To show that the coefficient c, of the orthogonal expansion (4.47) of f(z) is given
by (4.48), we first write

(acne) (E.1)
n=l

then we multiply both sides of (E.1) by p(x)u (x) and integrate from a to b to obtain

J oleium(ayt (ade =f pl2)um(1) 3° entin(oa


6 b lore)

, co iSgeys) deci’ (£.2)


= Yen |o(z)um(2)un(z)de
after allowing the interchange of integration with the infinite summation on the right-
hand side which is permissible when f(z) is square integrable on (a, b),, where p(x)
is the weight function as in (4.48) [i.e., f. p(x) f?(x)dz is finite].
Now the integral inside the infinite series is that which describes the orthogonality
property of {u,,(x)}°2, and vanishes according to (4.46) when n # m. This leaves
the series with only one term, cp, (3 p(x)u2, (x)dz, and hence (E.2) becomes

b b
i De) tn 2) f(eda Cr, ( p(x)u2, (x)dz,
a a

b
.

Sy p(a)u2,
(a)dex
(Lx ae

which is the Fourier coefficient (4.48) of the orthogonal or Fourier series expansion
(4.47) of f(x) on (a, 6), in terms of the orthogonal functions {um(x)}>_,. It is
important to note the simpler form for c,, when {u,(x)} are chosen orthonormal as
{¢n(x)}, whence

ae / o(2)bn (2) f(2) dr (E.4)


190 Chapter 4 THE GREEN’S FUNCTION

Convergence in the Mean of the Orthogonal Series Expansion


Of great importance to the Fourier series representation of square integrable func-
tions f(z), is the convergence in the mean of the general orthogonal series of (4.47)
to f(x) on the interval (a, b)p, where p(z) is the weight function used in the integrals
of the Fourier coefficients in (4.48).
Let S~(z) be the Nth partial sum of the general orthogonal expansion in (4.47),

N
feyig) = SS Crit (a) (4.51)

The series in (4.47) is said to converge to f(x) in the mean on the interval (a, b)p (with
respect to the weight function p(x)) if

b N
(4.52)
sim,f p(x)| F(x) — s Cnn(x)|’dax = 0.

If the orthogonal series (4.47) converges in the mean for every piecewise continuous
function f(x) (or square integrable function) on (a,b)p, then the orthogonal set
{un(x)}°2, is called a complete orthogonal set on (a,b)p. It turns out that the
completeness of the orthogonal set {w,,(xz)}°2, in the series (4.47) is equivalent to
allowing integrating such series term by term, a fact that we used in (E.2) of the above
example.

Eigenfunction Expansion of the Green’s Function


Besides the direct method of using variation of parameters that resulted in (4.25)
or (4.45), we will present here another method (for constructing the Green’s function
for the nonhomogeneous problem)

Lu+ Au = f(x) (4.53)

where L is a self-adjoint operator. For the present method we expand u(x) and f(z)
of (4.53) in a Fourier series of the orthonormal® eigenfunctions {uj (zx)}:
co b
tea We arte), ‘py = u(x)up(x)dax (4.54)
k=1 ¢
love) b

Fo he bpur(z), DaA= i f (x)ug(ax)dx (4.55)


k=1 Me

Lup = —Agur (4.56)


of the operator L [see (4.50) with p(x) = 1].

b
“ipuj (x)da = 1. Also p(«) of (4.49) can be introduced with simple modification.
a
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 191

If we substitute the expansions (4.54) and (4.55) and use (4.56) in (4.53), we,
formally, obtain

D2 aK (A ~ Az )ua (2) = D7 ewe (2). (4.57)


k=1 k=1

But since the eigenfunctions u,(x) are linearly independent, we may equate the
coefficients in (4.57) to obtain

an(A — Ax) = be, ak me he


(4.58)
Hence from (4.54) and (4.58) the solution u(x) to the nonhomogeneous equation
(4.53) is

sarah slQue(ar= b fF) bs


@ up (2) pal
oe Ja
O°. Up (x)Ug (E) sane)

after using
by = (t)up(t)dt (4.60)
from (4.55) and exchanging summation with integration.
The solution (4.59) can be written in the form (4.5)
b
(x)= -| G(a, t) f (t)dt (4.61a)

where G(z, t) is the Green’s function

G(z,t) _= »Ue one:


(@)ug(t) (4.61b)

Example 7 .Nonhomogeneous Boundary Value Problem


Solve the following boundary value problem by using the Green’s function.

d*y
— + dry = f(z), Ot gal (E.1)
dx?
y(0) =0 (£.2)
y(1) =0 (£.3)
The orthonormal eigenfunctions of the corresponding homogeneous problem,
d2
eee
dx?
On” 0. ei (E.4)
192 Chapter 4 THE GREEN’S FUNCTION

are uz(z) = V2sin kaa and the eigenvalues are k?71”. Hence from (4.61b) the
Green’s function for (E.1) is

Cane ee
sinaesin
ae = (E5)

and from (4.61a) the solution to the aries valueae (E.1)-(E.3) is

= sinkra : ;
i) Ze=a / f(t) sin krtdt. (E.6)

4.1.4 Green’s Function in Two Dimensions

In this section our discussion was centered, mainly, around the Green’s function for
boundary value problems associated with ordinary differential equations. In the next
section, the Green’s function will be used to reduce such boundary value problems to
Fredholm integral equations in one variable. In higher dimensional problems, we will
encounter boundary value problems associated with partial differential equations.
One of the methods of constructing the Green’s function for such boundary value
problems is illustrated next for the potential distribution in a unit disc. The resulting
Fredholm integral equation, as expected, will involve the unknown in two variables
inside a double integral. Integral equations in three dimensions are also illustrated
in Section 2.7 for the Schrodinger equation in the (three-dimensional) momentum
space, where the three dimensional Fourier transform is used.
There are a variety of methods for constructing the Green’s function for boundary
value problems in two and three dimensions. However, for the level of this introduc-
tory text, we will limit our very brief presentation here to the following illustration.
This involves the potential distribution in a charged unit disc with grounded rim, as
we shall discuss next. The same type problem of the potential distribution in a square
is left for an exercise (Exercise 24) with very detailed leading hints.

Potential Distribution in a Charged Unit Disc (-Poisson Equation)


The potential function u(r, @) due to a charge distribution f(r, 6) ona unit disc, is
governed by the following Poisson equation (4.62). We use here polar coordinates
(r,@) so that the boundary of the disc as a unit circle is represented simply by one
of the coordinates being a constant, namely, r = 1. In polar coordinates the Poisson
equation in u(r, 8) is
V?u(r,0) = —f(r,0)

=1
=
[2
bo
(-(sou
) vaOy
(sino)
en Ou
=-f(r,6)
a
(4.62)
where f(r, @) is the charge density. Also, with the help of complex analysis methods ,

the Green’s function of this problem takes the form:

1 | 1-2rpcos(@ — ¢) + r?p?
G(r,(r,6;9;p,
p,¢) 6) =
= —1 BuO ORS Be (4.63)
5
EXERCISES 4.1 193

So with a grounded rim of the unit disc, we have the boundary condition at r = 1,

u(l,e)"= 0; —7 <O0< 7. (4.64)

The solution to the boundary value problem of the Poisson equation (4.62) and the
boundary condition (4.64) via the Green’s function in (4.63), is

1 Qn

HONS [ dp i) OHIO (4.65)


The same Green’s function in (4.63) is also used to solve another very related and
familiar problem, namely the Dirichlet problem on a unit disc. This is a boundary
value problem governed by the Laplace equation.

V7u(r,0) =0 (4.66)
inside the disc (with no charge, f(r,@) = 0), and where the potential on the rim is
given as
u(1.e) = G(@), —™1<O0< 7. (4.67)
The solution to this Dirichlet boundary value problem (4.66) and (4.67) is of the form

27 0G
u(r, 0) = =) Op g(¢)d (4.68)
p=1

Jif
=5, | [=o
al =e d, 4.69
2)
which is what we presented in (1.24).
Of course equation (4.65) can be made as a Fredholm integral equation in two
dimensions, when a charge distribution f(r, 0) is to be found on the disc to affect the
given desired potential distribution u(r, @) there. In the same way we can say that
equation (4.69) is a Fredholm integral equation in one dimension in the unknown
potential function f(@) on the rim of the unit disc, that would produce the given
desired potential distribution u(r, @) in the interior of the unit disc. The integral in
(4.69) is the well known Poisson integral.
Another example of a Fredholm integral equation in two dimensions can be made
of the answer of Exercise 24 (parts e, f in (E.10)), when we are to ask about the
required charge distribution f(x, y) on a square that would result in the given desired
potential distribution u(x, y) inside the charged square.

Exercises 4.1

1. Solve the following boundary value problems associated with second-order


differential equations on the indicated domain and the particular boundary
conditions. Note that for infinite domain we must consider the boundedness
of the solution as a condition. A is assumed real in all the following problems.
194 Chapter 4 THE GREEN’S FUNCTION

(a) y"” + A7y = 0, Oral


y(0) = y() =0
(b) y” + A?y = 0, Ore <a
y'(0) =y'() =0
(c) y! + A’y = 0, O<2<!l
y=, y()=0
(d) y"” + A*y =0, Or -<-60
y(0)=0, — |y(x)| < c
(e) y"” + Ay =0, —0 <2 <0o
ly(z)| < 00
(f) y” — A?y = 0, 0O:< 2 ico ,
y(0)=1, — |y(co)| < co
(g) y” — Ay = 0, O<zx<l

2. (a) Verify all the properties of the Green’s function

sin b(1 — €) sin br


ie
G(e8) =) sisi
bsi
vnB(e1~2)
bsinb ek
of (E.11) in Example 4.
(b) Consider the choice of the two linearly independent solutions v; (x) =sin bz
and v2(x) = cos ba of (E.4) of Example 4. Follow the same steps in Example
4 to show that you obtain the same result for the Green’s function in (E.11).

Construct the Green’s function associated with the following boundary value
problems.

dy 1
3. aa = F(z), US

y0)=0, y(F) =0
Hint: See (E.6)-(E.9) of Example 8 in Section 4.2.

4. y" — b’y = f(z), Oral


y(0)=0, (1) =0
5. Use the Green’s function to solve the boundary value problem

dy
Tye ane Diagn oa
EXERCISES 4.1 195

y(0) = 0, y(1) =0
Hint: Use (4.5) and the Green’s function of Exercise 4 with b = 1. Fora series
solution form see Example 7 with A = —1 and f(z) = 2.

Use the Green’s function to solve the following boundary value problems:

d*y :
: a2 ¥ = 2sinchl, VS Kil (E.1)

y(0) = 0, y(1) =0 (E.2)


Hint: See Exercise 6.

. Reduce the following boundary value problem, associated with nonlinear dif-
ferential equation, to an integral equation.

Hint: Use the Green’s function of Exercise 3 with 7/2 replaced by 1.

. Use the Green’s function—like approach in (4.31) and (4.32), (4.28)-(4.30) to


find the solution of the following initial value problem,

a + d?y(x) = cosz, cal) (£.1)


y(0) =0 (E.2)
y'(0) =0 (E.3)
. Let v, (x) and v2(zx) be the two linearly independent solutions of the homoge-
neous equation Lu = 0 of (4.7).

(a) Verify that the following u(x) is a solution to the associated (same L)
nonhomogeneous equation (4.1),

u(x) = Civ, (x) + Cove(x) + ikRare yflejds (E.1)

R(z,6) =- Thala -w@n@] (62)


and B is as in (4.18).
(b) Use the result in part (a) to derive the explicit form (4.45) of the Green’s
function associated with this operator L and the particular boundary
conditions u(a) = 0 and u(b) = 0 of (4.43) and (4.44). Hint: Apply
196 Chapter 4 THE GREEN’S FUNCTION

the two boundary conditions to u(x) in (E.1) and (E.2) to have two
simultaneous equations in C, and C2 to be determined.

Reduce the following boundary value problems to Fredholm integral


equations. Hint: For all these problems take the term Ay to the right side
of the differential equation, and consider the resulting —Ay(z) + f(z) as
if it is a nonhomogeneous term associated with the (new) homogeneous
d
equation — = 0. Also see problems 14 and 15 and Example 5 (for the
mixed boundary conditions).

da
10. Lae y==e De tae ; x
Dien :
(E.1)

y(0) = y'(1) (E.2)


y'(0) = y(1) (E.3)
dy
Hae FR) ae Ora (E.1)

y(0) = y'(0) (E.2)


y(1) = y'(1) (E.3)
WA CM
at TidgtGedy ee fora fh. ed 1 gH eS 1 (Bet)

y(—1) = (1) (E.2)


y'(-1) =y'(1). (E.3)

Use the Green’s function to solve the following boundary value problems.

d*y .
13: Ape os = — De
(E.1)

y(0) = y'(0) (E.2)


y(1) = —y'(1). (E.3)
Hint: See Example 5 and the above problem 11 for the mixed boundary
conditions.
Find the Green’s function associated with the following boundary value prob-
lems. Hint: See Example 5 for the mixed boundary conditions.
da
14. aa = fla), O<2<1 (E.1)
y(0) = y'(1) (E.2)
y'(0) = y(1). (E.3)
d
15; UTE) 6 Hal eal (E.1)
EXERCISES 4.1 197

y(0) = y'(0) (E.2)


y(1) = y'(1) (E.3)
d2 2
16. a GO -l<z<1 (E.1)
y(—1) = y(1) (E.2)
y (=1)i=y'() (E.3)
We (a) For Exericses 3 to 6, verify Theorem 1 concerning the existence of a unique
Green’s function.
(b) Determine whether or not a unique Green’s function exists for the following
boundary value problems, and if it does, construct it.
2

) £2 = Fa, O<2r<1 (E.1)


y(0) = y'(1) (E.2)
y'(0) = y(1) (E.3)
d*y
ti) > = Fle); O<— 2 <1 (E.4)
“W(0) = y(1) (E.5)
vO) = y'(1) (E.6)
(iii) oY y= s(e yi Ora ar (E.7)

0) = y(n) (E.8)
18. In our discussion regarding the construction of the Green’s function (4.25),
we assumed that neither of the solutions v1 (x) and v2(xz) of the homogeneous
problem satisfies both boundary conditions at z = a andz = b.

(a) Assume now that v2(x) does satisfy both conditions, while v(x) satisfies
‘neither; follow the same steps as those used in reaching (4.25), with a
solution of the form

ula) = zle) | FeOdé - Gla) [ Fu Oae (B.)


as in (4.10), (4.19), and (4.20) to show that this would require that v2(z)
and f(x) must satisfy another (consistency) condition (4.11),
b
/ vo(x)f(x)de = 0 (4.11)(E.2)
Note that the final solution u(x) does, of course, satisfy both boundary
conditions. So apply these conditions on u() in (E.1) to arrive at two
vanishing integrals that add up to that of (E.2).
198 Chapter 4. THE GREEN’S FUNCTION

(b) With the results in part (a), show that the solution is not unique; that is,
show that the solution becomes

ieaeaGua)s ipGo, €)f (dé (E.3)


where C is still arbitrary and G(x,€) is the same as in (4.25). Hint:
The cy in (E.1), which we could not determine because of the assumed
boundary conditions for v2(x) only, is equivalent to the usual arbitrary
constant of integration, so write the second integral of (E.1) on (0, 2)
with the rest of it on (cz, b) as constant to give you the term C'v2(z),
remembering that c,; = a in (E.1) according to (4.22).

Construct the Green’s function associated with the following boundary value
problems. P

19;
yoy fix) O< rig
y(0)=yQ), —-y"(0) = y’()
Hint: Consider the form A cos(z — € + c) for G(x, €) where A and c are to be
determined.

20. Use the Green’s function method to solve the boundary value problem

d?y :
—, tn y=cosmz, 0<e<1
dx
y(0)=y(1), = y' (0) = y'(1)
Hint: See Exercise 19.

mis Consider the following boundary value problems, which is obviously a Sturm-
Liouville problem [see (4.7)-(4.9)]:

d2
app we O<2<1 (E.1)
u(0) = 0 (E.2)
u(1) = 0 (E.3)
(a) Without solving for the explicit solutions, prove that any two eigenfunc-
tions up(x) and um(x) of (E.1)-(E.3) corresponding to two different
eigenvalues A, and A,, are orthogonal on the interval (0, 1]; that is,
1
q Und lun eae = 0. EEA aeVes
0
EXERCISES 4.1 199

Hint: Write the differential equation (4.7) for u,(z) and for u(x) and
attempt to arrive at

(27 xn) f yb)


(12)im(x)dx
ia ade=0

(b) Solve the boundary value problem.


(c) Verify by direct integration that the solutions in part (b) are othogonal.
Hint: To show that ik sinn7zsinmazdx = 0 forn # m, use the
trigonometric identity sin ax sin br = (1/2)[cos(a — b)x — cos(a + 6)z]
to simplify the integration.
(d) Write the Fourier series expansion in terms of these functions [solutions
of part (b)] for the function f(x) = 2,0 < 2 < 1. Hint: In Example 6
use (E.1) for the Fourier series with u,;(x) = sin nz and use (E.3) for
evaluating the present Fourier sine series coefficients c, for (E.1).

225 Orthogonal kernels. The orthogonal property is not limited to solutions of


equations only but extends to other functions; for example, two kernels K
(z, t)
and L(z,t) are termed orthogonal on {(z,t):a<x2<b,a<t< }} if the
following two integrals vanish:

ipICE
T LAT Lar = 0

b
/ D(a,
7) (et) dr =0
a

(a) Prove that the kernel K(a,t) = x°t? and the kernel L(z,t) = x?t? are
orthogonal on {(z,t):0<2<1,0<t< Il}.
(b) Prove that the kernel K(x, t) = sin(x — 2t) is orthogonal to itself on the
square {(z,t) : 0 <a < 2n,0 < t < 27}; that is, show that je sin
(a — 27) sin(r — 2t)dr = 0.

2s The Legendre polynomial P,,(x) of degree n is defined by the Rodriques


formu la as 4 a
Ps (2) (2? — 1)”. (E.1)
~ ann! dan
(a) Use (E.1) to show that the first four Legendre polynomials are

1 1
Pi@)y= 5 (32° — 1), P3(x) = 5 (5a — 3z)

(b) Verify that the Legendre polynomials in part (a) are orthogonal on (-1,1)
with respect to a weighting function p(x) = 1. Hint: See (4.46).
200 Chapter 4 THE GREEN’S FUNCTION

(c) Write the first three terms of the Fourier series expansion in terms of the
Legendre polynomials in part (a) to approximate the function f(x) = e4
on (-1,1). Hint: See (4.47) and (4.48) with p(x) = 1, un(z) = Pr(z),
fie Ws N22
(d) Tabulate or graph the approximate series expansion to compare with its
exact value f(r) = e?*.
24. In reference to our discussion of the potential distribution in a charged unit
disc of (4.62)-(4.65), consider now the potential distribution in a square of
side length 7 with charge density f(x, y), and where the edges are grounded.
The boundary value problem for the potential distribution u(z, y) in a charged
square with side length 7 as in Figure 4.1 due to a charge density f(z, y), and
with all sides being grounded is
A

O72 2074
DE tebyee oe Wien icc gs O<y<T (£.1)

u(0,y) = 0, O<y<t (E.2)

aly) — 0; Oy ior (E£.3)


we.0) =u (aaa) 0 (£4)

U(O,y)=O

O u(x,0)#0 * X
Fig. 4.1 Electric potential in a square plate.

(a) Let U(n,y) and F(n, y) be the finite Fourier sine transforms, as defined
m(ib15);
U(n,y) = [wey sinnadx (E.4)

4ACGY) pe bef(x,y) sinnzdz (£.5)


EXERCISES 4.1 201

We are aiming at algebraizing ee the second-order partial derivative


with respect to x, to render (E.1) as a second-order (nonhomogeneous)
ordinary differential equation in the Fourier sine transform U (n, y). Then
we can appeal to the Green’s function of the problem in Example 4 to
construct U(n,y). Finally we use the inverse Fourier sine transform
(Fourier sine series) of (1.116) to obtain our original function u(z, y) for
the potential on the square.
To accomplish the first part, and in parallel to the use of the Laplace
transform for algebraizing differential equations (with initial conditions
though!), use the following (operational) property of the finite Fourier
sine transform (1.121)

iINsinng oedz — n°2 F,(n)


= n{f(0) — (—1)"f(m)} (E.6)
0
to transform the partial differential equation of (E.1) in u(z,y) to the
following ordinary differential equation in U(n, y),’

ute) —n?U(n,y) = —F(n,y) (E.7)

(b) Show that the boundary conditions (E.2), (E.3) are easily transformed to

U(n,0) =0 (E.8)

Un,a) = 0 (E.9)

(c) Construct the Green’s function for the boundary value problem (E.7)-(E.9)
in U(n,y). Hint: Note that in (E.1)-(E.3) of Example 3 we have the
same problem as the above (E.7)-(E.9) except for b= nein (E..),
and the boundary points are 0 and | instead of 0 and 7 in the present
boundary conditions of (E.8) and (E.9).
(d) With the help of the Green’s function in part (c), find the solution U(n, y)
to the boundary value problem (E.7) and (E.8). Hint: See (4.1)—(4.3) and
(4.5).
(e) Find the solution for the potential u(z, y) in the original boundary value
problem (E.1)—(E.3). Hint: Use the inverse (finite) Fourier sine transform
(Fourier sine series) of (1.116) on U(n, y) of part (d) to find u(z, y).
(f) Attempt to find an expression for the Green’s function in two dimensions
of the original problem (E.1)-(E.3) as G(z, y; €,). Hint: In the formal
answer of part (d), substitute for F'(n, y) (inside the integral representing

7For detailed treatment of integral and finite transforms and how to find the proper (compatible) transform
for a given boundary value problem, see Jerri [1992].
202 Chapter 4 THE GREEN’S FUNCTION

U(n, y)) in terms of its sine integral as in (E.5), then exchange the two
integrations with the Fourier series summation to write

sea) = i /Mey Neem A(E10)


hence the resulting infinite sine series expression for G(z, y; €, 7).
(g) Derive (E.6) the important property of the finite sine transform. Hint: Use
two integrations by parts, and recall the definition of F’;(n) in (1.115) as
the finite Fourier transform of f(z),0 <a < 7.
25. (a) Show that the second-order differential operator L in (4.1) can always
be written in the self-adjoint form of L in (4.6). Hint: Divide Lu of (4.1)
by Ao(z) # 0, then multiply z-7qy
Lu by r(x) = exp(f 7dr).
(b) Show that the fourth-order differential operator L in Lu = oe + u is
self-adjoint. Hint: Write vLu—uLv as we did in Example 1, then add and
subtract appropriate terms: (v/u!” — vu” — vu" +0" u" +0'"u! -v'"'u')
to end up with the sum of four exact differentials that can then be written
as one exact differential 4 [vu!” — v'u" + vu! -v!u] = vLu - uLv.

4.2 FREDHOLM INTEGRAL EQUATIONS AND THE GREEN’S


FUNCTION

In Example 5 of Section 2.5 we used repeated integration to show how a boundary


value problem
a = TE ONE RE ENG (1.33)
y(a) = (1.34)
y(b) = (1.35)
reduces to a Fredholm integral equation (2.36) and (2.37). In this section we use
the Green’s function method to show that the general boundary value problem with
parameter 4X,
2
Ao(z) 54+ Ai (a) + Ao(x)y + Ap(z)y = h(x), (4.70)

y(a) =0 (4.71)
y(b) =0 (4.72)
reduces to a Fredholm integral equation with the Green’s function as its kernel.
To do this we write (4.70) in the form of (4.1) and purposely shift the term \p(a)y
to the right side, in anticipation of involving it inside the final integral to have an
integral equation

Ao(2)3 + Ai(z)—— + Ao(a)y = Ly = A(z) — Ap(a)y = f(x). (4.73)


4.2 FREDHOLM INTEGRAL EQUATIONS AND THE GREEN’S FUNCTION 203

We note that having a symmetric Green’s function G(z, €) for (an equivalent problem
to that of) the problem (4.73), (4.71) and (4.72) can be easily justified since the
differential operator

d2 d
L= Ao(2) 73 45 Ai(z)—- AP A2(x)

in (4.73) can be reduced to a self-adjoint form as in (4.6), via multiplying it by a


factor p(x) = aes of (E.5) in Example | (see Exercise 25(a) in Section 4.1).
0
If we assume that such an operation has already been done to make the above L
in (4.73) as a self-adjoint operator, (i.e., L now is in the self adjoint form as given in
(4.30)), then according to (4.5), the solution to such equation (4.73) with the boundary
conditions (4.71) and (4.72) is

ya) =~ f"Gla, €)fae


=-f "G2, 6(h©) — AolOu(O)\aé (4.74)
--f "Gla, QM(E)dE +A ["Gla, €)(@ul6dé.
If we let ;

ka) = — f Gla e)n(eag (4.75)


then (4.74) becomes

y(2) = Ke) +A |" Gle,€)olOuledé (4.76)


which is a Fredholm integral equation of the second kind with kernel K(x,€) =
G(z, €)p(€). and a nonhomogeneous term k(z).
This Fredholm equation can be written in a more symmetric form if we multiply
both sides of (4.76) by \/p(z),

Vateuta) = Vataynte) +a | SpleyoW


ate.)Vou eae
and let u(x) = \/p(a)y(x) and g(x) = \/p(x)k(z),

u(x) = g(x) +r /* mOG(e,


Vala é)u(eae (4.77)
As we mentioned in Section 4.1, the Green’s function G(x, €) is symmetric for such
(already made) self-adjoint problem (4.70)—(4.72) as seen in (4.25). It follows that
the kernel K(x, €) = \/p(x)p(€)G(z, €) of the resulting Fredholm equation (4.77)
is obviously symmetric.
204 Chapter 4 THE GREEN’S FUNCTION

Example 8 Green’s Function and Fredholm Equation


Use the Green’s function to reduce the boundary value problem

d?
ao tA =e, (<eae (E.1)

y (5) =0 (E.3)
to a Fredholm integral equation.
If we compare this problem with (4.70)-(4.72) we have h(x) = a, p(x) = 1,
and L = d?/dz?, which is self-adjoint and hence the Green’s function is symmetric.
From (4.76) the integral equation representation is
m /2 n/2 :
oe if G(w,£)édé +2 / G(x, €)y(€)aé (E.4)
where we used
m /2
ke) == if Ge, €)EdE.
0
(E.5)
It remains to construct the Green’s function from the corresponding homogeneous
boundary value problem

FY 9, Ugias (E.6)
y(0) =0 (E.7)
v (5) 0: (E.8)
The solution that satisfies (E.6) and (E.7) is y(x) = a, and that which satisfies
(E.6) and (E.8) is y(x) = (1/2) — a; therefore, the symmetric Green’s function may
be written as two branches,

G(x,f) = re (£.9)
Ce(5-2), tsa<8
where the first and second branches satisfy (E.7) and (E.8) in the variable x, respec-
tively.
From the jump discontinuity property (4.26) of OG(a, €)/Ox we have

0G 0G 1
By 68) T= De (x, €) ale ~ r(€) =-—l

=-0€-C (5-6) ~ =n mlb C= =.


(E.10)
4.2 FREDHOLM INTEGRAL EQUATIONS AND THE GREEN’S FUNCTION 205

Note how we used the second branch and the first branch of G(z,€) in (E.9) for
xz = €, and x = E_, respectively, since = £4 > € is in the domain < x < 1/2
and z = €_ < € isin the domain0 < z < €. From (E.9) and (E.10) we have

Gye ee)
G(z,€)= (£.11)
HES mtg BS
OAT A

and the Fredholm integral equation of the second kind (4.76) is

m /2
y(t) = (a) +A i G(e, é)y(é)aé (E.12)
where the kernel K(x, €) = G(z, €) as given in (E.11) is symmetric and k(x) is

k(x) = Hof G(x, €)h peaf Crees


7 n/2
=| “6ae “5(es) gd€ (E.13)

“ECD
é

EEL
G-*)),
3 ty 2

Mo) =F
The final Fredholm equation that is equivalent to the boundary value problem (E.1)—
(E.3) is

yl) ==6 Teal


24
Tx
0
G(e,e)uleasa2
(B.14)
where the Green’s function G(z, €) is given in (E.11).

Fredholm Integral Equations in Two Dimensions


For an illustration of reducing boundary value problems associated with partial
differential equations, we may consult (4.62)—(4.65) for the potential distribution in
a charged unit disc with its rim being grounded. As we had done above, all what we
needed there is to construct the Green’s function for this two-dimensional boundary
value problem. Another very similar illustration is that of the potential distribution
on a charged square with its edges being grounded. This was the subject of Exercise
24 of the last section, which is supplied with very detailed leading hints.
For the above two boundary value problems, we have already presented the integral
representation of the potential distribution due to a given charge distribution with the
help of the Green’s function in two dimensions. These results are shown in (4.65)
for the charged disc, and in the answer to Exercise 24(e),(f) of the last section for the
charged square. So, all what we need to have these problem as Fredholm integral
206 Chapter 4 THE GREEN’S FUNCTION

equations in two dimensions, is to ask about the inverse problem, namely to find
the charge density f(x,y) (inside the integral) that would affect the given desired
potential distribution u(z, y) on the charged square, for example.

Exercises 4.2

1. Reduce the boundary value problem

d’y
Oe = er, Oriel

NO) 0 erly 0
to an integral equation by first finding the Green’s function. Hint: See Example
8.

. Reduce the boundary value problem

to an integral equation. Hint: See Example 8.

. Reduce the boundary value problem

d3
BEE. ncaa Ny Or <a
dx3

y(0)=0, y(l)=0, y'(0) =y'(1)


to an integral equation by:
(a) Using the Green’s function of Example 5.
(b) First constructing the Green’s function starting with the form a,(&) +
ra2(€) + 27a3(€) for G(z, €).
For the following boundary value problems, use the Green’s function to reduce
them to Fredholm integral equations. Hint: You may consult the answers of
Exercises 4.1 for the appropriate Green’s function.

dy
- Gpz ty = 2a +1, Oper

y(0) = y'(1)
y'(0) = y(1)
Hint: See problem 17(b)i of Exercises 4.1 for the mixed boundary conditions.
EXERCISES 4.2 207

5 ‘ eee+ Ay = e”

y(0) = y'(0)
y(1) = y'(1)
Hint: See problem 15 of Exercises 4.1.

a qT? TL
y = Ay + cos ce

y'(-1) = y’(1) Hint: See problem 16 of Exercises 4.1. (for its boundary
conditions only.)

. Write the Fredholm integral equation in the charge density function f (p, ¢) that
would produce a given potential distribution on a unit disc u(r,@),0 <r <1,
0 < @ < 2m, where the rim of the disc is being grounded. Hint: Consult (4.65)
and its derivation in (4.62)—(4.65) at the end of the last section.
= are Gare oe
es %

Os= ite Jas Se;


the (ice é - @ : > ae *%
bad

Kierw <4 eee me OTF


4 . > Fo

-
@ 7 a A

| a2 es Pebee
Cie trare? a aell 8 ued bat Aig ee rye tiie = (eee
=—
ae la

a /} 4} elite oe vet hyge an « ws Stee

(ie
eee eaiigenet)
i re
* Pe
saath oh e
plete _
OM jeter ¢ : each ni <2? erm
: _ = edsb =e
a ‘ Ame co~ ae = fal Try
: ‘# ai On jfar-U
ha 2 BY 1 Te a oi? me ~

_* =~
a eB
i, _ ~~
—» m, @ - @ ‘=
~— aa
“v
r
ee
J 7
a
os, ‘ ; : 7
SO Oe
SS i

® > oe pha 7 ; 7 a

>7
> — as a Se)
_
o<,
7 eames
: . —
<n <€
- i
="
ow ;
iN @
~~
4 S = mie

- =
a > 7
: / CBee OS) eg, Nieage) 2 se
a Oe Maw és <b wilthdie
a ——
a ae: 7
~ 2-2 eae
jp1men
=nma
albrae
est.

- ~ rm

———— Sa
ve. #1
~~? pit «
pid |
Ffredholm Integral
Equations

In Section 1.2 we presented the Fredholm integral equation,!

b
n(ayu(z) = f(a) + f K(e,6)u(éag (5.1)
which we termed the second kind when h(x) = 1,
b
u(x) = f(a) + | K(e,guledg (5.2)
and the first kind when h(x) = 0,
b
-f(a) = f K(@,s)uleas, (5.3)
When f(z) = 0 in (5.2) it becomes the homogeneous Fredholm equation,

b
u(x) = f K(2,8)ulé)dé. (5.4)
We note that the limits a and b of the integrals may be finite or infinite, where the
infinite limit makes it a singular equation.

'In 1900-1903, Fredholm developed the theory of these integral equations as a limit to the linear system
of equations. In 1904 and later, Hilbert established the theory in a rigorous fashion [see BOcher, 1914].

209
210 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

In Chapters 1 and 2 we formulated a number of problems which resulted in


Fredholm integral equations. This included the shape of a hanging chain (2.28) in
Section 2.3 and the small deflection of a rotating shaft (1.19) in Section 1.1. In
Example 5 of Section 2.5 we illustrated how a boundary value problem,

d’y
PE = Ay(z), a<z<b (1.33)

ya) =0 (1.34)
y(b) =0 (1.35)
reduces to a Fredholm integral equation of the second kind

ya) =e fh tieas,
tha (edt; (1.36)

BS Grit c=
“be
G alte b) eee (1337)

In this chapter we discuss and illustrate a number of exact, approximate, and


numerical methods of solving Fredholm integral equations. An important condition,
termed the regularity condition, for the development of the solutions of Fredholm
integral equations, is that the kernel K (x,t) be square integrable in both z and t in
the square {(z,t):a<a2<b,a<t< }}; that is,

isi.|K (a, t)|?drdt = B* (5.5)

is finite. We must also stress here the relations in the theory of solving the nonhomo-
geneous Fredholm integral equation of the second kind (5.2) and its corresponding
homogeneous equation (5.4) with the (added) important parameter A,

u(x) = /"Ke, tyu(tyde (5.4h)


Indeed, the theory very closely parallels that of the theory of systems of nonhomoge-
neous and homogeneous linear equations. Such a relation is the essence of Fredholm
alternative, which we state at the end of Section 5.1 and which deals with conditions
for the existence of solutions of Fredholm equations. In the next section we present
a method for solving the Fredholm integral equation (5.2) when the kernel is of a
special form,

K (x,t) =) ag(x)bx(t) (5.6)


k=1

a finite sum of products of a, (x), a function of x only, and b;,(t), a function of t only.
Such kernels defined by (5.6) are called degenerate kernels or separable kernels. In
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 211

Section 5.2 we introduce methods of solving another special but very important case
of the Fredholm equation with symmetric kernel [i.e., K(x,t) = K(t,x)]. Section
5.3 is devoted to Fredholm equations of the second kind; Section 5.4 is a new section
for this edition to cover in more detail the Fredholm integral equations of the first
kind; and Section 5.5 covers basic elements of the numerical (approximate) method
of solution. The higher quadratures numerical methods are covered in the added
(optional) Chapter 7 in this edition. In each of these sections we illustrate, when
appropriate, some approximate methods of solution and methods for determining the
eigenvalues of the homogeneous Fredholm equation.
Here we will again be concentrating on the various, mostly, successive approxi-
mation (iterative) methods, for constructing a solution. Accurate statements for such
results will be stated without the complete proof.
We will start this chapter on Fredholm integral equations with the very special
degenerate kernel, as it is easy to illustrate without the need for new tools except
for the familiar theory of system of linear equations. This is very important as it
represents the fundamental and historical relation of such theory and the theory for
Fredholm integral equations, as was done by Fredholm in 1900-1903.

5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE


KERNEL

5.1.1 Nonhomogeneous Fredhoim Equations with Degenerate Kernel

Consider the nonhomogeneous Fredholm equation of the second kind with degenerate
kernel K (x,t) = >>,_, @n(z)bx(E),

b
ula) =f (@) 4 | K (a, t)u(t)dt (5.7a)

6 n
= f(x) + a/ S > ag (x2)bx(t)u(t)dt (5.75)
Oe

n b
= f(r) + Yale) | by,(t)u(t)dt (5.7c)
k= C

after using K(z, t) of (5.6) and exchanging summation with integration. In the fol-
lowing we show how the solution of this Fredholm integral equation with degenerate
kernel reduces to solving a system of linear equations. If we define cx as the integral
in(S. 3c),

b
Ck =i} b;,(t)u(t)dt (5.8)
212 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

then (5.7c) becomes

u(x) = f(x) + A SeCpag (x). (5.9)


k=1
If we multiply both sides of (5.9) by b,,(a) and integrate from a to b, we produce Cm
on the left side,

n b
frTe MN bi— [ bm (x) f(x)dx + 2 ye Ck / bm(2)a,(xz)dx. (5.10)
a a k=1 a

If we define the (new) integrals in (5.10) as

b
ae / Dm (x) f (a) dx (5.11)
and
b
ae / Me covap aids (5.12)
a

then (5.10) becomes

Cr ft DE Gm ECE ie = We 20 Been (5:13)


a=al

which is a set of n linear equations in Cincz, 3,9 Si cnoaHere tf, and ta, pare
considered known since we are given b,,(x), f(x), and ax(z).
So the solution to the Fredholm equation of the second kind (5.2) with degenerate
kernel (5.6) reduces to solving for c,, from the system of the n linear equations (5.13)
(in the n unknowns c,,, m = 1,2,---,n), since c,, will then be used in the series
(5.9) to obtain u(z), the solution of (5.2).
If we use matrix notation, the system of n linear equations (5.13) can be written
in the form

Cj fi Ct 1 es e ily Cy
C2 fo Qa21 422 a2n C2
(Gig ; = ; + : : . , =F+XAC

Cn ign Qni Qn2 *** Ann Cn


(5.14)
or as
(I — AJC = F. (5.15)
From the theory of linear systems of equations we know that (5.15) has a unique
solution if the determinant |J — \A| # 0 and has either infinite or no solution when
| —AA| =0.
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 213

Example 1 Solve the Fredholm integral equation

1
Uwe) =o + af (at? + a2t)u(t)dt. (E£.1)
0
This Fredholm integral equation has a degenerate kernel of the form (5.6),

K@,t) Scot? att 55 ak(2)be(t) (E.2)


[rail

where ai (x) = 2, ao(x) = x”, b(t) = t?, and bo(t) = t. To solve for cm in (5.13),
and hence u(z) of (E.1), we must prepare f,,fo from (5.11) and a11, @12, G21, G22
from (5.12). From (E.1) we have f(x) = x; hence according to (5.11),

1 1
fi =| bo s(ode = | Pat = 7

1 pl
fa= ffvolysinar= [Pat = 5
and the column matrix F’ of (5.14) becomes

r=[4]=
Ble
Wl

To prepare the matrix A in (5.14) we use (5.12) to evaluate the elements a,,,%, with
ax(z) and b;(t) as in (E.2) for k, m = 1, 2,
1 1 i 1
ait =) bs (tay (tat = f Prat = [ bedi i
0, 0, 0 1 1

a= | bu (t)aa(t)at = [ erat | tdi 7


i 0, 0, 40 1

a2} el ba(t)an (tat = | at = | dt = 3


0, ) o 1 1
aoa = | bo(t)as(t)at = war = | PO if
0 0 0

Hence C' = F' + AAC of (5.14) becomes

ip)ne Cy
+X
C2
Wl
Bile &] Ble
Ol ole
214 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

and if we transfer the matrix product to the left side, we obtain C — AAC =
(I —rA)C = F,
sl , 1
24; ea 4
lee|a | (E.3)
A ’ x C2 1
geet 3
In general, before solving (E.3) we must evaluate the determinant of the matrix
I-AA,

A d
Povg= MALS 5 = (1-3)
KNBia
5 . 7ig Ries Si
‘aces
|
240 — 120—2?
eae (es (E.4)

If 240 — 120 — \? # 0, the problem (E.3) has a unique solution for c; and cp which
we can evaluate by finding the inverse, (I — \A)~+. As we have only two equations
in (E.3),

1
(i ee*)ara
=, ee
eter (£.5)
r aN 1

we can solve them immediately by eliminating one of the unknowns to find


60 + A
Bo Oi? (ED.
eeTSA)
ae= TION
E8
ee
Once we know c and c2 we can obtain u(x), the solution to the integral equation
(E.1), from (5‘9) with n= 2, f(2) =: a) (a)"—"aF and @ (a) =a

u(a) = f(t) +AY— cear(a) (5.9)


k=1
2
Ua) =3 4X ey Chap (2)
k=1
= 2+ A[cia1(x) + coaa(a)]
= bh ee 80a? (E.9)
ae 240 — 120A— 2-240 — 120 — d?
_ (240 — 60A)a + 80Ax? ‘
= Sa? 240 - 1200-7 #0
5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 215

As this Example | illustrates, this is a clear and simple method for constructing
the solution of the nonhomogeneous Fredholm integral equation (5.7a). Moreover
the condition for the existence of such a solution seems to also be very transparent as
|I — AA| 4 0, as we showed in Example |. This amounts to restricting the parameter
A in (5.7a) not to be a zero of the equation

[I — AA]= 240 — 120A — \?__ 0


240 oe:
i.e., the condition 240 — 120 — \? # 0 is sufficient to guarantee the unique solution
u(x) as constructed in (E.9) of Example 1. However, this still may raise an immediate
question concerning equation (5.7a) when its parameter \ happens to be one of
the possible two zeros of the quadratic equation A? + 120A — 240 = 0, which
violates the sufficient condition that guaranteed us the unique solution of (E.9) in
our Example 1. Such an important question concerning the existence of solution
(or solutions) to nonhomogeneous Fredholm integral equation is the subject of the
Fredholm alternative as given in Theorems | and 2, and illustrated clearly in Examples
3 and 4. The thrust of these theorems depends on our knowledge of solving the
associated homogeneous equation of (5.4h),

oe. /ei esiyuli ye (5.4h)


which is the subject of the next section. In that regard the present method will be
instrumental in constructing the solutions of the homogeneous equation as illustrated
in Example 2. Moreover if the analysis allows a unique solution, or infinity of
solutions, for the nonhomogeneous problem in (5.7a), the present method will be
used for their construction as illustrated in Example 4.
Indeed, the analysis in the Fredholm alternative applies to more general kernels
than the present simple degenerate one of (5.6), but for a simple presentation we will
be satisfied with the theory for symmetric kernels, i.e., K(xz,t) = K(t,x), which
is discussed in Section 5.2. To summarize the purpose of our next illustrations, in
regards to the existence of the solution to Fredholm equations of the second kind,
the Fredholm alternative will be illustrated for degenerate but nonsymmetric kernel
in Example 3, degenerate and symmetric kernel in Example 4 (and symmetric but
nondegenerate kernel in Examples 7, 9, and 10 of Section 5.2.)

5.1.2 Fredholm Alternative

Homogeneous Fredholm Equations


We consider here the homogeneous case [i.e., when f(x) = 0 in (5.7a)] of the
Fredholm integral equation with degenerate kernel

Ne) syleK (a, t)u(t)dt


nt b (5.16)
= Ss ax(2) / by (t)u(t)dt.
216 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

We will follow the same steps as those we used for the nonhomogeneous equation
(5.7a), to reduce (5.16) to

He 5 Chag (2) (nh)


k=1
and then to a system of n homogeneous equations in Cm,

Ge OS Grater n= Les (5.18)


k=!

or in matrix notation,
(I —-AA)C = 0 (5.19)

instead of the nonhomogeneous system of linear equations in (5.13) and (5.14). Here
A and C are defined as in (5.14).
From the theory of systems of linear equations we can conclude that if |[—A.A| ¥ 0,
then the only solution to the homogeneous equation (5.19) is the trivial solution
C = 0. By using (5.17), the solution to the homogeneous Fredholm equation (5.16)
is the trivial solution u(a) = 0. On the other hand, when |J — A| = 0, then (5.19)
has nontrivial solution. This leads us to discuss next the subject of eigenvalues and
eigenfunctions of the homogeneous problem.

Eigenvalues and eigenfunctions


For the homogeneous Fredholm integral equation with the kernel K (z, t),

u(x), = sf (e,nutoae (5.20)

the parameter \ # 0 for which (5.20) does have a nontrivial solution (i.e., u(a) 4 0)
is called the eigenvalue or characteristic value of the homogeneous equation (5.20)
or, in short, the eigenvalue of the kernel K (a, t) in (5.20). The nontrivial solutions
u;(x) # 0 corresponding to the eigenvalues \; are called the eigenfunctions or
characteristic functions of (5.20), or in short, the eigenfunctions of the kernel
i (2-2):
In this sense, then, the eigenvalues of (5.20) are the solutions of |J — AA| = 0,
since if A is not the solution ofthis equation, then |J — \A| 4 0, and hence (5.18) and
in turn (5.20) have the trivial solution. There may exist more than one eigenfunction
~j(a) corresponding to a specific eigenvalue \;. The number p of such (linearly
independent) eigenfunctions W;+41(x), }j+2(Z), Yj43(Z), -:>, %j4p(@) is called the
degeneracy (or index) of Aj. In case an eigenvalue A, is a multiple root with degree
m in the equation |J — A.A| = 0, i.e., a factor (A — Ax.) appears in this equation, then
m is called the multiplicity of the eigenvalue. For the typically well behaved square
integrable kernels, it can be shown that the index p never exceeds the multiplicity m
of the root or eigenvalue of the kernel, i.e., 0 < p < m, and for symmetric kernels
p =m. For p = 1, the eigenvalue 2, is termed “simple”.
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 217

Example 2 Homogeneous Fredholm Equation with Degenerate Kernel. Solve the


integral equation

U(2)\i= af (cos? x cos 2t + cos 32 cos* t)u(t)dt. (£.1)


0
This is a homogeneous Fredholm equation with degenerate kernel

K (a, t) = cos” # cos 2t + cos 3z cos? t


2
=) ax (0)Du(t) (E.2)
k="

hence ai(x) = cos? z, a2(x) = cos3z, b;(t) = cos2t, and be(t) = cos*t. We
follow the method of Example | to find c, and cz from

2
Cm = aNMe AmkCk (5.18)
k=1
or the matrix equation
(I — rAA)C = 0. (5.19)
The solution u(z) of (E.1) is
2
TG) aa ye Cpax (x) = Ac, cos? x + Ace cos 3z. (E.3)
k=

To evaluate c; and co from (5.18) we must evaluate aj1, @12, G21, and ag2, the
elements of the matrix A:
Tv Tv

Cite ifby (t)a;(t)dt = i cos 2t cos? tdt = ~


0 0 4

a2 = i bi (t)a2(t)dt = i cos 2t cos 3tdt = 0


0 0

a2) = | bo(t)ai(t)dt = itcos® t cos” tdt = 0


0 0

AT
(ee
8

and if we multiply the matrices in (E.4), we obtain


218 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

(12 7) aes (E.5)

(1a d=) oat (E.6)


For

C=
Pe

to be the nontrivial solution, we must have a zero determinant for J — 4A in (E.4),

1- — 0
4 ~- ys
=| kes Ao)
eee ea A=)) ==0 (E.7)

which is a quadratic equation in \ whose solutions are 4; = 4/7 and Az = 8/7,


which in turn are the eigenvalues of (E.1). As a solution to (E.1), we have two
eigenfunctions u(x) and u2(x), corresponding to the two eigenvalues Ay = 4/7
and A2 = 8/7, respectively. Next we consider the two eigenvalues separately and
find their corresponding eigenfunctions.
(a) Ai = —
If we substitute 4; = 4/7 in (E.5) and (E.6) to solve for c; and cp, we have

4
(1-5. Fe =0e =0, Cy = Cy
nm 4

from (E.5) and

4 7 1 1
(1-5 F)a=(1-5)a=sa=0 ex =10

from (E.6). Hence with c) = c; and cg = 0 in (E.3) the eigenfunction u(x)


corresponding to A; = 4/7 is

4 ‘ 4 4
11 (Ee C1 cos? x + ~ (0) cos 3x2 = po cos? x.

This means that the eigenfunction is known except for the arbitrary constant c,, which
determines its amplitude; we may arbitrarily let (4/7)c, = 1 to have

ui(z) = cos? 2.

Now we consider the case of the second eigenvalue:


8

We again substitute 42 = 8/7 in (E.5) and (E.6) to obtain


5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 219

Hence from c; = 0 and cy = cp in (E.3) we obtain u(x), the eigenfunction


corresponding to the eigenvalue A2 = 8/7,
8 8
ia) = —(0) cos? x + ~ C2 COS So oy cos 32.

Now we may let (8/7)c2 = 1 to have

U(z) = COssz.

Fredholm alternative
The statements regarding the existence of the solutions of the nonhomogeneous and
homogeneous system of n linear equations (5.15) and (5.19), respectively, and how
they relate to the existence of solutions of the nonhomogeneous and homogeneous
Fredholm integral equations are valid even when the kernel is not degenerate, and
are summarized in the following statement (Theorems 1 and 2) of the Fredholm
alternative.

Theorem 1 Fredholm Alternative—The Main Part


If the homogeneous integral equation (5.20)

b
u(t)2= »/ K (a, €)u(€)d&é (5.20)

has only the trivial solution u(x) = 0, then the corresponding nonhomogeneous
equation,
b

Hevea sey sea / K(x, €)u(é)d€ (5.21)


always has one and only one solution. On the contrary, if the homogeneous equation
has some nontrivial solutions, then the nonhomogeneous integral equation has either
no solution or an infinity of solutions depending on the given function f(x) (see
Theorem 2).
As mentioned before, nontrivial solutions {u, (2) } of the homogeneous equation
(5.20) are called the eigenfunctions u,, (x) of (5.20) corresponding to the eigenvalues
{An},
b
uae = rn ffK (a, t)un(t)dt. (5.22)

To complement the second part of the Fredholm alternative, when the homogeneous
equation (5.20) has a nontrivial solution, we state the following theorem without
proof, which gives us the necessary and sufficient condition for the existence of
220 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

solutions of its associated nonhomogeneous equation (5.21) for the important special
case of symmetric kernels, i.e., K(x,t) = K(t,x). We make this choice of special
kernels to facilitate a more clear initial presentation of the main idea behind conditions
for the existence of the solution.

Theorem 2 If the homogeneous equation with symmetric kernel,


b
ens af K (a, é)u(é)dé, K(a,t) = K(t,z) (5.20)

has a nontrivial solution or solutions, {u;(x)} (corresponding to \ = Aj in (5.20)),


then the associated nonhomogeneous equation (with the same fixed parameter A)

b
u(x) = f(x) + af Kz, ues i(ay i= Kies) (5.21s)

will have a solution if and only if the nonhomogeneous term f(x) in (5.21s) is or-
thogonal to every solution u; (x) (corresponding to A;)of the homogeneous equation
(20):
Of course, other theorems are available to accommodate equations with nonsym-
metric kernels, but we chose the above very important special case of symmetric
kernels to simplify a clear presentation of the main features of Fredholm alternative
for the existence of solutions to Fredholm integral equations of the second kind.
For completeness, we present such theorems after this initial discussion, in Theo-
rems 3 and 4, and illustrate them in detail in Example 5.
We may mention here that in comparison to the conditions of the above two
theorems, for the existence of the solutions of Fredholm equations of the second
kind, the theory for the equations of the first kind is much more restrictive, as we
shall discuss and illustrate in Section 5.4.
We should note that in contrast to the last Theorem | of the Fredholm alternative,
Theorem 2 is for symmetric kernels, and its statement also assumes the same value
for A in both (5.20) and (5.21). This means that we are considering the usually
special case when the fixed parameter X of the nonhomogeneous equation (5.21) is
equal to ,,, the eigenvalue of the homogeneous equation (5.20). In Section 5.2.2 we
will consider the general solution in (5.47) for A of (5.21) not equal to A, of (5.20),
then treat the problem of A = A, as a special case in (5.57). The next Example 3
illustrates Theorem | when A # X,, for the nonsymmetric kernel of Example 2, while
Example 4 illustrates both Theorems | and 2 as it covers the two cases of A # An
and A = ,, for a symmetric kernel.

Example 3 Existence of the Fredholm Equation Solution-Nonsymmetric Kernel


In light of the Fredholm alternative, let us discuss the possibility of a solution to
the nonhomogeneous Fredholm equation

u(x) = f(z) +A ip(cos* « cos 2t + cos 3x cos? t) u(t)dt (E.1)


5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 221

which is associated with the homogeneous equation of Example 2 with its nonsym-
metric kernel. Hence, with Theorems | and 2 at our disposal, we can only use
Theorem | of the Fredholm alternative for such a nonsymmetric kernel.
(a) From Example 2, the homogeneous equation associated with (E.1),

ve es | (cos? x cos 2t + cos 32 cos® t) u(t)dt (E.2)


0
has two nontrivial solutions (eigenfunctions),

ui(z) = cos*x and u2(x) = cos 3a

corresponding to the eigenvalue A; = 4/7 and Ay = 8/7, respectively. So according


to Fredholm alternative in Theorem 1, the Fredholm integral equation in (E.1) will
definitely have a unique solution if the parameter \ in (E.1) is not equal to either one of
the above two eigenvalues of its kernel, i.e., when 4 4/7, 8/7. For the special two
cases of \ = 4/7, 8/7 in (E.1), the equation may have either no solution or an infinity
of solutions depending on f(z) as it is clearly stated in Theorem | of the Fredholm
alternative. However, since the kernel is not symmetric in (E.1), we cannot appeal to
the only tool available to us in this book (at this stage without having Theorems 3 and
4), i.e., Theorem 2 to do a follow-up and determine which of these two possibilities
may occur. For A # 4/7 or 8/7 we can also construct the unique solution, since,
fortunately, the kernel in (E.1) is degenerate with only two terms, where the method
of Section 5.1.1 can be followed as illustrated in Example | (see also Example 2).
The Fredholm alternative for nonsymmetric kernels is complemented in Theorems 3
and 4, and fully illustrated in Example 5.
In the following Example 4 we will consider a Fredholm integral equation of
the second kind with symmetric kernel, where we can employ Theorem | and
Theorem 2 as the complete statement of Fredholm alternative. We choose a symmetric
degenerate kernel to allow us the method of Section 5.1.1 for constructing the unique
solution or an infinity of solutions when they exist according to Fredholm alternative.
The case of symmetric but nondegenerate kernels is discussed in Section 5.2, with
detailed illustration for the construction of the solution, which is done in Example 7,
and where the conditions of the Fredholm alternative in Theorems | and 2 will be
very evident in Examples 9 and 10 of the same section.

Example 4 Existence of Fredholm Equation’s Solution-Symmetric Kernel


Consider the Fredholm integral equation of the second kind

u(x) = f(z) + af é sin(x + t)u(t)dt. (E£.1)

We shall discuss the possible existence of the solution (or solutions) for the three
particular cases:
MA=3: fcy=s
(ii) \ = 1/7, f(x) = sin 2x
(iil) A=al /ayf (zc) = sine:
222 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Here we have a symmetric kernel, K (x,t) = sin(z + t) = sin(t + x)= K(t,2),


so we can rely on Theorems | and 2 for the Fredholm alternative to determine the
possible existence of a unique solution, an infinity of solutions, or no solution. In
case a unique solution does exist, we can appeal to the method of Section 5.1.1 for
constructing such solution, since the kernel here is also degenerate, sin(s +t) =
sin x cost+cos x sin t. Moreover, at this stage, and as the Fredholm alternative states,
we must know whether or not the parameter ) in (E. 1) is one of the eigenvalues {Ax}
of the kernel K(z,t) = sin(z + t). This says that before doing anything, we must
resort to the method of Section 5.1.1 to determine the eigenvalues and eigenfunctions
of the (associated) homogeneous case of (E.1),

20
Op (Z) = Ak / sin(x + t)d,(t)dt (£2)
Ly

as was done in Example 2 of Section 5.1.1.


In order not to lose track of the analysis of Fredholm alternative for the three
above cases, we shall first quote the results of the eigenvalues and eigenfunctions
as shall be determined by the method of Section 5.1.1 and delay the details of the
method to the end of this example. Such details will also be very much needed for
the construction of any possible solution of (E.1). The method described gives two
distinct eigenvalues Ay = 1/7 and Az = —1/7 corresponding to their eigenfunctions
¢1(x) = sinxz + cosz and ¢2(x) = sin x — cosa, respectively.
With these results we have for case (i) \ = 3 # 1/7, i.e., the parameter 2 in
(E.1) is not equal to any of the eigenvalues of the symmetric kernel. Thus according
to Theorem | of the Fredholm alternative, the nonhomogeneous Fredholm equation
(E.1) has a unique solution for arbitrary f(z), and, of course, this includes the
f(x) = & given in case (i).
For case (ii) we have A = 1/a = Aj, i.e., the parameter A in (E.1) is equal to
one of the eigenvalues of the symmetric kernel, which should be a sign of some
alarm. However, since the kernel K (z,t)= sin(x + t) is symmetric, we can appeal
to Theorem 2, which states that the existence of (infinite—not unique!) number of
solutions depends on whether the given nonhomogeneous term f(x) = sin 2z in
(E.1) is orthogonal to the eigenfunction ¢;(x) = sinz + cosa, corresponding to
the eigenvalue A; = 1/7, on the interval (0, 27). To show this we will evaluate the
following integral, whose vanishing proves the orthogonality of these two functions
on (0, 27),

20 20
i Geer (dae J sin 2a[sin z + cos z]dx
0 1
0 27

mer / [cos z — cos 3x + sin x + sin 3z]dz


1 z 27
= glsing — sin 32/3 — cos x — cos 32/3]
i 0
= 5((-4/3) - (-4/3)] = 0,
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 223

(after using trigonometric identities for the first integral) which says that f(x) =
sin 2x is orthogonal to ¢;(x) = sinz + cosz on (0, 27), whence the problem (E.1)
for case (ii) has infinite number of solutions. These solutions will be constructed at
the end of this analysis, where the method of Section 5.1.1 is employed.
For case (iii) with A = 1/m = 2, and f(x) = sin z, we will show next that f(x) =
sin z is not orthogonal to ¢;(x) = sin x + cos z, the eigenfunction corresponding to
1
M=-,
T
27 20
[ f(x)¢i(z)dz = i sin z[sin x + cos z]dz
0 01 20

=- i [1 — cos 2x + sin 2z]dxr


2 Jo

= ;[x — sin 22/2 — cos 2x/2]|6”

= 5[{2n -0-(1/2)}
-(0-0-(/2}J=7 40
as the integral does not vanish. Hence, according to Theorem 2 there exists no
solution for (E.1) in case (iii) of A = + and f(x) = sina.
Now we follow the method described in Section 5.1.1 to find the eigenvalues A,
and eigenfunctions ¢,,(x) of the (associated) homogeneous problem in (E.2),
22
Ole) = af sin(x + t)n(t)dt (E.3)

as was illustrated in Example 2. Then we use the same method, that was illustrated
in Example 1, for constructing the unique solution of the nonhomogeneous equation
(E.1) for case (i), and the infinite number of solutions of (E.1) for case (ii).
Here we have a degenerate kernel,
2
sin(z +t) =sinzcost+coszsint = > Ap (x)
bz(t)
k=)

with a,(x) = sin z, a2(x) = cosa, bi(t) = cost and be(t) = sint.
We will follow the procedure from (5.7) to (5.14) with all the necessary details
except for evaluating the simple integrations involved. We let
20 27
Co by (t) P(t) dt =| cos td(t)dt,
0
27 21
ae , bo(t)4(t)dt = i semi a
0

which are to be determined from solving the homogeneous case of the matrix equation
(5.14), i.e., with F' = 0. We now need to compute @11, @12, 421, and Q22, the entries
for the matrix A,
224 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

27 27
ant / by (t)a;(t)dt = [ cost sin tdt = 0,
0 0
20 27
Qi2 = i bi (t)a2(t)dt = i} cost costdt = 7,
0 0
20 27
a2) = il be(t)a;(t)dt = / sintsintdt = 7, (E.4)
0 0

Also for the solution of case (i) we will need, as shown in (5.14),

20 27
Hie / b(t) f(t)dt = i tcosdt = 0,
0 0

fo = iesbo(t) f(t)dt = sa tsindt = —27. (E.5)

So the homogeneous case of the matrix equation (5.14) now becomes

v-aoce[s $]f2]-a[2 s}[2]-[8]


M -e 1 — Ar Cy ~ 0
=(-aajo=|1), we Spee (E.6)

which will have solution if the following determinant vanishes,

i il —)r ar mty 2) oge8


y-aal=| 1, 1 = PA— 0;

. 1 1 on : ‘
hence A; = — and Ay = —~— as the two distinct eigenvalues of the symmetric (and
degenerate) egal sin(x + t)of (E.2), as we quoted them at the beginning of this
example.
To find the eigenfunctions ¢;(x) and $2(x) corresponding to the eigenvalues
A, = 1/m and Ay = —1/7 we substitute for each case in (E.6) to find c, and co,
which are to be substituted in (5.9) with f(x) = 0,

u(z) =A Yo ena®) (5.9h)


k=

to have the corresponding eigenfunction ¢, (x).


For A; = 1/7 in (E.6) we have

aoa |
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 225

Gi Cy = Uy

—cj
+e =0

Hence c; = C2, and from (5.9h) above,

no) — te crore) (5.9h), (E.7)


Ka

we have
2
gi(xz) =1/x a Chay (x) = 1/m[c) sin + c, cosz] = = (sing +cosz). (E£.8)
k=1

If we normalize by choosing c; = 7, we have ¢; = sinz + cosz, which is what


we quoted at the beginning of this example. However, the more common way is
to normalize with a unit norm, i.e., ||¢1()|| = Lf.” (sin x + cosx)?dz]2 = V/2n,
whence the normalized eigenfunction becomes $1 (x) = (1/27) (sin x + cos z).
For completeness we determine the second eigenfunction ¢2(x) corresponding to
the second eigenvalue Az = —1/7, where (E.4) with \ = —1/7 becomes

Ea] ]=[o]
iE C2 ORs

qt co — UP C2 = —C1,
€; +.6> =0, C2 = —C}.

Hence cz = —c;, and from (5.9) with f(x) = 0 (or (5.9h)) we have

2
2(xr) = -= eS:Chae) -=le sin x — c; cos Z| (E.9)
k=1 ;
Ci;
lI — 7[sine — cos z],

and if we normalize by choosing (—c;/7) = 1, we have ¢2(xz)= sin x — cos x which


is what we quoted at the beginning ofthis example. Again, if we normalize with a unit
norm, i.e.; ||¢2||= Lf," (sin a —cosx)*dx]? = V2n, we have the more commonly
normalized eigenfunction ¢2(x) = (1/V27)(sin x — cos 2).
We will leave it as an exercise to verify that the above ¢; (a) and $2(z) are the
eigenfunctions corresponding to the eigenvalues A; = 1/z and Ay = —1/n7, ie.,
each of the corresponding pairs satisfies the homogeneous Fredholm equation (E.2)
(see Exercise 4).
il
To construct the unique solution of (E.1) for case (i) of AX= 3 (F +7) and
f(x) = 2, we use (5.14) or (5.15) with the entries of the known matrices A and F as
computed above in (E.4) and (E.5) to find c; and cz of the unknown matrix C, that
are needed for the solution of (E.1) as given by (5.9),
226 Chapter5 FREDHOLM INTEGRAL EQUATIONS

u(z) = f(x) +A 5 Crag (Z) (5.9)


k=1
u(x) = 2+ 3[c; sing + c2 cosa]. (£.10)
Of course, these c;, C2 are not to be confused with those of the homogeneous matrix
equation (E.6). If we substitute the known values of \ = 3 and the matrices A from
(E.4) and F from (E.5) in (5.15) we have

Cc, — 3mc2 = 0,

—3rc, + co = —27,

cy = 61? /(9n? — 1), Co = 2n/(9n? — 1).


If we now substitute these values in (E.10) we obtain the wnique solution to the
problem in (E.1) with A = 3,

187? sinx + 67 cosx


u(x) = x+ 3c, sing
+ 3cpcosx = x2+ 4 (E.11)
To construct the infinite number of solutions for case (ii) with A = 1/7 in (E.1) (for
1
A = A, = —), wecan use (5.15) with the same matrix A, but the matrix F is different,
T
since here we have a different function f(x) = sin 2x from that of f(x) = x in case
(i). So we need to evaluate
20
i i} sin 2t costdt = 0,
0
20
i [ sin 2t sin tdt = 0,
0
to be substituted with those entries of matrix A in (5.15) to have

ist oletaal iy
Ci aa Omer CT in 0

1 —1 Ci = 0
-1 1 €9 ma 0 ,
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 227

With such result of the arbitrary c; = cy we have no unique solution, as when we


substitute these values in (5.9) we have

u(x) = sin 27 + c;(sinx + cos 2) (£12)

which represents an in finite number of solutions because of the solution dependence


on the arbitrary value c, in (E.12).

Fredholm Alternative-Nonsymmetric Kernels


For completeness, we may now return to the Fredholm alternative for the case
of nonsymmetric kernels, state Theorem 4 that complements Theorem | for such
kernels, then illustrate it with a simple nonsymmetric kernel in Example 5. In
Theorem 2 we stated the complement to the Fredholm alternative (Theorem 1) for
the case of symmetric kernels, i.e., K(z,t) = K(t,x) in equation (5.21s). Here,
we state a more general Theorem that complements Theorem | for the case of not
necessarily symmetric kernels. Before we state such a theorem we present a simple
statement of Theorem 3 that relates to a Fredholm integral equation of the second
kind in w(x), associated with (5.21), but with kernel A(t, x) instead of K(x, t) of
21),
b
w(x) = g(x) + i K (t,x)v(t)at (5.23)
with its homogeneous case as
b
Sr weain / K (t,x)w(t)de. (5.23h)
Theorem 3 Fredholm Equation with Kernel K(t, x) instead of K (2, t)
“Consider the Fredholm integral equation of the second kind (5.21)
b
u(x) = f(a) + »/ K (a, t)u(t)dt (oz)

and its homogeneous case (5.20)


b
sea Nas flACH aaa (5.20)
If the Fredholm alternative of Theorem | holds for equation (5.21) in u(x) with kernel
K(a,t) for a given fixed A, then it also holds for its associated equation (5.23) in
w(x) with kernel K(t, 2). Moreover, the homogeneous equation (5.20) (with kernel
K (a, t)) and its associated equation (5.23h) (with kernel K(t,x)) have one and the
same finite number of linearly independent solutions."
Now, that we introduced the associated equations to (5.21) and (5.20) with kernel
K(t,x) as in (5.23) and (5.23h), we will present Theorem 4 that complements the
Fredholm alternative of Theorem | for nonsymmetric kernels instead of the special
case of Theorem 2 of the symmetric kernels.
228 Chapter5 FREDHOLM INTEGRAL EQUATIONS

Theorem 4 Complement to Fredholm Alternative-Nonsymmetric Kernels


“If the associated homogeneous equation (5.23h) (with not necessarily symmetric
kernel) has a nontrivial solution or solutions {~;(x)},
b
w;(z) = | K (t,x); (t)dt (5.24)

then the nonhomogeneous Fredholm equation (5.21)


b
(KG) )= IAC hee rf K (a, t)u(t)dt (5.21)

(with the same fixed parameters X = Aj) will have a solution if and only if the
nonhomogeneous term f(z) in (5.21) is orthogonal to every solution (eigenfunction)
~;(a) of the associated equation (5.24) (with kernel K(t, x)), i-e.,
b

[ sobslaz = 02
Clearly this theorem becomes Theorem 2 when K (x,t) is symmetric, since the
associated equations (5.23h) and (5.24) become (5.20) and (5.22), respectively, and
the above orthogonality condition a f (x); (x)dx = 0 becomes if ip Cag selCe at—
0 of Theorem 2, where {¢;(x)} are the eigenfunctions of (5.20) (or (5.22)) for
symmetric kernels.
To illustrate Theorem 4, as it complements the Fredholm alternative for nonsym-
metric kernels, we choose the following example with a very simple (nonsymmetric
kernel) to avoid lengthy details. The main emphasis will be directed towards solving
the two homogeneous equations (5.20) with K (a, t) and (5.23h) with K (t, x) as they
supply the basic ingredients to our analysis of both parts of the Fredholm alternative
for nonsymmetric kernels (Theorems | and 4).

Example 5 Fredholm Alternative-Nonsymmetric Kernels


Consider the following problem with its very simple nonsymmetric kernel K (x, t) =
sin(In z), where K(t, x) = sin(Int),
1
(x)= 29 + af sin(In x)u(t)dt. (F£.1)
0
This kernel is also degenerate with one term where a; (x) = sin(In x) and b;(t) = 1,
so we use the method of this section to first find the eigenvalue ,, of the homogeneous
equation (5.20) (with K (az, t) = sin(Inz)),

O10) = a» | sin(In
2) p; (t)dt (E.2)

and in case A of (E.1) is not equal to this eigenvalue \1, we construct the unique
solution of (E.1) as guaranteed by the first part of the Fredholm alternative (Theorem
1). If weletc, = fh u(t)dt in (E.1), we have
5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 229

u(x) = 27 + cyAsin(Inz), (£3)


and if we substitute this value of u(x) in the above integral defining c;, we obtain
1 1
OS i u(t)dt = i [c,A sin(In t) + 2¢]dt,
0 0 1 (E.4)
= aa | sin(In t)dt + ¢?
0 0

The integral iissin(In t)dt can be done with one substitution u = Int, which will
reduce it to ee e” sin udu, that can be evaluated by two (careful) integrations by
parts to give a value of —},

1
cq =—-=qA4+1, a (1+3) = il,
5 2 (E.5)
Cy = X42? aN ee —2.

So with this result of c; and the (important) condition A # —2 in (E.1), the unique
solution to (E.1) becomes

u(x) = 2a + x5 sin(In 2), A # -2. (E.6)

Of course, the condition \ #4 —2 for the above unique solution in (E.6) should, in the
spirit of the Fredholm alternative (Theorem 1), be transparent to us as \ # Ay = —2,
where A; = —2 should be the eigenvalue of the homogeneous equation (E.2). This
can be verified easily from (E.4) or (E.5) without the nonhomogeneous term vale in
(E.4) (as if we do (E.3) without the 2x7 term, which amounts to doing (E.2)),

1
cy = — 501A, Ci (A == 2) = (0). (E.7)

So unless \ = —2, the arbitrary constant c, in (E.7) must be zero, which results in
a trivial solution u(x) = O for the homogeneous equation (E.2) as can be obtained
from u(x) in (E.3) without the nonhomogeneous term 2x. This means that for (E.2)
to have an eigenvalue A; = —2 in (E.7), the constant c; is allowed to be arbitrary.
Thus, the corresponding nontrivial solution, i.e., the eigenfunction corresponding to
A, = —2 is obtained from (E.3) (without the term 22) as

$1 (x) = (A; sin(Inz) = —2c; sin(Inz) = C'sin(In 2). (E.8)

The second part of this illustration is to address that case for the nonhomogeneous
problem (E.1) (with its nonsymmetric kernel) when its parameters A is equal to
dy = —2 of this kernel. Since the kernel K(z,t) = sin(In x) is not symmetric,
we must resort to Theorem 4 for the second part of the Fredholm alternative, which
230 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

means we have to find the eigenvalue and eigenfunction 7 (z) of the homogeneous
equation (5.23h), associated with the kernel K (t,x) = sin(In ¢),
1
“iy cpa / nn (Fae (E.9)
We use here j1; instead of A; just to emphasize solving a homogeneous problem as
new because of its different kernel K (t,x) = sin(Int). So we set out to find the
eigenfunction ~, (x) ina similar way to what we did for (E.2) and (E.1), except here
in (E.9) we have the kernel sin(In t) instead of sin(In x) in (E.2). In this case we
have a degenerate kernel K (x,t) = sin(Int) with one term, where a;(x) = 1 and
b(t) = sin(Int). As was done before, we let

Cy =| b(ou(ode = | sin(In t)p(t)dt, (£.10)

which when substituted in (E.9) we obtain ~(r) = s41c1, and if this yi (x) is
substituted inside the integral of (E.10), we obtain

1 1
Cie ey sin(In t)dt = — 51s
2
0 (E.11)
ai ( 5 |=o.

For (x) = f41c, not to be the trivial solution ~(2) = 0, i.e., to have it as an
eigenfunction,
we cannot assign c,; = 0. Thus from (E.1 1), the eigenvalue to equation
(E.9) is w; = —2 corresponding to the eigenfunction y(%) = ic, = —2c, = ¢,
an arbitrary constant. This ~,(x) = c represents an infinity of solutions, but it can
be normalized, and we have the eigenfunction needed for the following important
orthogonality condition of Theorem 4:

b
‘ Raye (ajdt = 0toreally: (£.12)
a

So according to Theorem 4, when A = —2 in the nonhomogeneous equation (E. 1),


where A; = —2 is the eigenvalue of its (nonsymmetric) kernel K (x,t) = sin(In 2),
the only way for it to have a solution is for its nonhomogeneous term f(x) = 22 to
be orthogonal to 7; (x) = c of (E.9) as spelled out in (E.12), which, unfortunately is
not the case since,

b 1
i f (x)u1 (x)dz = a PL) rem pee al) (E.13)
a 0

and c is assumed not to be zero for #1 (x) = c to be an eigenfunction (i.e., not a trivial
solution).
To summarize, the final result of this example is that the Fredholm integral equation
(E.1) has a unique solution when \ # —2 and no solution when \ = —2.
5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 231

5.1.3 Approximating a Kernel by a Degenerate One

In many cases a nondegenerate kernel K (x, t) may be approximated by a degenerate


kernel as a partial sum of the Maclaurin (or other) series expansion of K (x,t). For
example, the nondegenerate kernel K (x,t) = coszt may be approximated by a
degenerate kernel

a2 t2 4 t4

which consists of the first three terms of the Maclaurin series expansion of cos zt.
Let us consider the Fredholm equation with kernel K (z, t),
b
u(x) = f(x) +2 ‘|K (a,t)u(t)dt (5.21)
and its associated equation,

6
se STN / M(s, t)v(t)dt (5.26)
with kernel / (z, t) as the degenerate kernel approximation of K (a, t). In principle,
we may use this section method to solve for v(x), which is considered as an approx-
imate to the solution u(x) of (5.21). Of course, there will be an error involved in
such an approximation, which is defined as e€ = |u(x) — v(x)|, and we may attempt
to estimate this error to give us a measure of how good this approximation is.

Example 6 Approximating a One by a Degenerate Kernel Find the approximate


solutions to the Fredholm integral equation
1
u(x) = sina + [ (1 — xcos azt)u(t)dt (E.1)

by considering its associated Fredholm equation with degenerate kernel.


We note here that the kernel 1 — x cos zt is not degenerate, but a finite number of
terms of its Maclaurin series

Cte art Ct 2s
1-2 (1-5 45F -...) <1-24 5 -Se (E.2)

is degenerate or, in other words, separable in x and t. So if we consider only three


terms of the series in (E.2), we have a degenerate kernel

342
Dat
M(a,t) Sy ag (E.3)
as an approximation to K (x,t) = 1 — cost of (E.1). The associated equation in
u(x),
232 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

oa) =sinet )
zit
[ (1-245 u(t)dt (E.4)

has degenerate kernel and can be solved by the method we discussed in this section
and illustrated as in Example 1. Hence from (5.6) we have

a
M(z,t) = (1-2) = ap (x (E.5)

where a;(z) = (1 — 2), a(x) = x? and bi (t) = 1, bo(t) = t?/2. Now we can
employ the method of solving nonhomogeneous Fredholm equations with degenerate
kernel as illustrated in Example | (leaving the detailed steps for Exercise 11(a)) to
find the elements of the matrix C’ in (5.15) are .Y

c, = 1.00308, C2 = 0.16736.

Therefore, the solution to (E.4), according to (5.9), is

v(x) =sing +c) a;(r) + cpao(z)


= sine + 1.00308(1 — x) + 0.167362° (E.6)
which is the approximate solution to (E.1). In this special case it happens that we
also know the exact solution of (E.1) as u(a) = 1, which can be verified easily.

1 1
sin z + / (1-—acosat)(1)dt =sinzg+1-— xf cos xtdt
0 0; 1
sin zt ; :
=sinz+1l-2z =sinz+1-sinz
0
= 1.

The approximate solution u(x) in (E.6) and the exact solution u(x) = 1 of (E.1)
are presented in Table 5.1 to show how v(x) corresponding to the degenerate kernel
(E.5) approximates u(x) = 1 with nondegenerate kernel 1 — x cos zt. It is of interest
to observe how close v(x) will be to u(a) when we consider more terms of the
Maclaurin series expansion for M (x, t) (see Exercise 11).

Exercises 5.1

1. Solve the following Fredholm equations in u(z), then verify your answer.
mw/2
(a) u(x) = sina + af sin
x cos tu(t)dt
0
nm/2
Hint: Write C = / cos tu(t)dt, where the above equation becomes:
0
u(x) = sinx + AC'sinz, use this u(x) in the integral of C’, then solve for
EXERCISES 5.1 233

Table 5.1 Approximate (Kernel Replaced by a Degenerate One) and Exact Solutions of
Fredholm Equation (E.1)

Oy 0 0.25 0.5 0.75 1.0


Approximate values,
v(x) = sing + 1.00308(1— 2) 1.0031 1.0023 1.0019 1.0030 1.0088
+0.16736x3
Exact values, u(r) =1 1.0000 1.0000 1.0000 1.0000 1.0000

C’. Aso, note that all the Fredhoim equations in this Exercise 1 are nonhomo-
geneous with degenerate kernel.

(b) u(x) = a+ »/ (cos zsint + xcost + t? sin x)u(t)dt


7) §

Hint: Here we will end up with a rather long 3 x 3 system of equations, however,
many of the entries of the coefficient matrix A vanish due to integrating odd
functions on the symmetric interval (—7, 77).

(c) u(x) = 22 —7 +4 i,
Si sin? ru(t)dt

(d) u(r) = cos + [: sin(a — t)u(t)dt


(e) u(x) = e* +A f. (5a? — 3)t?u(t)de
(f) u(x) = 22 —6- 2 | u(t)dt

(g) u(x) = 201x” — 802 + 52 + /“(dot? — 327t — t?)u(t)dt

2. Solve the following homogeneous Fredholm equations by finding the eigen-


values and eigenfunctions of the equations.
20
(a) u(x) = a [ sin
x sin tu(t)dt
0
Hint: See the hint to problem I(a).
nm/2
(b) u(x) = »f sin
x cos tu(t)dt
0
(c) ula) =r dycos(x + t)u(t)dt
0
1
(d) u(x) = 2a | x(t — 2x)u(t)dt
0

(e) u(x) = | (5a? — 3)t?u(t)dt


234 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

. (a) Discuss the existence of the solution to the integral equation

u(x) = f(z) + | : sin(a + t)u(t)dt (£.1)


0
for the two cases.

(i) X= 1/VF, f(z) = 2


Gi) \ =" V/7e i (coy sito’
Hint: Consult the Fredholm alternative in Theorems | and 2, and its detailed
illustration in Example 4 for the present symmetric kernel.
(b) Find the solution in the case (or cases) where it exists.
Hint: Consult part (a) and Example 4 for the existence of a unique solution or
infinity of solutions, then the method of Section 5.1.1 and its illustration at the
end of Example 4 for constructing the solution (or solutions).

. In Example 4, verify that g(x) = (sinz + cos) and ¢2(x) = sinx — cosz
are the two eigenfunctions corresponding, respectively, to the two eigenvalues
Ay = 1/m and Ap = —1/7 of the kernel K(z,t) = sin(z + ft), i.e., show that
each pair of an eigenfunction and its corresponding eigenvalue satisfies the
homogeneous Fredholm equation (E.2) in Example 4.

. In problem 1(a), and its associated homogeneous case in 2(b), use their results
to illustrate the Fredholm alternative.
Hint: Compare the parameter \ in the Fredholm equation of problem 1(a) with
the one eigenvalue A; in problem 2(b).

. Solve the following homogeneous Fredholm integral equations

(a) u(x) = 6 | (2at — x”)u(t)dt (F.1)


0

Kes 2f oe (E.2)
Hint: See the hint to Exercise 1(a), but watch for u(x) = C # 0, since this
will give a divergent integral on the right side of (E.2).
2
(c) u(x) = ae |x|u(t)dt. (£3)

Hint: See the hint to Exercise 1(a).

. In light of the Fredholm alternative, how do you explain the validity of the
solution to problem 1(d) for all real values of its parameter \?

. Compare the results of problem 1(e) and its associated homogeneous case in
problem 2(e), how do you explain the validity of the solution in l(e) for all
values of its parameter \?
EXERCISES 5.1 235

9: Consider the following problem with degenerate kernel K (x, t)= (1 — 3zt)
and a general nonhomogeneous term f(z),

Te) i a af (1 — 3xt)u(t)dt (E.1)

(a) Find the (two) eigenvalues of the kernel in (E.1).

(b) Consider the resulting system of equations from (E.1), what is the condition
for a unique solution? Also what about the existence of a solution to (E.1)?

(c) Consider the cases when A = Ay = 2 and \ = Ay = —2 in the answer of


part (a), what can be said about the system in part (b) for the given function
f(x)?
(d) Find the corresponding eigenfunctions to the eigenvalues of the kernel in
part (a).
(e) Write the general solution to (E.1) when \ = 2.

10. (a) Use the method of degenerate kernels to solve the nonlinear integral equa-
tion

u(x) = b+ valeu(t)dt
0
Hmnceisayi r= 1 jet C = le u”(t)dt, where the above equation becomes:
u(z) = b+ AC; then use this u(x) in the integral of C’, which results in a
single equation in C.
(b) Use the same method to solve the homogeneous equation

C2 af u?(t)dt

1 Consider the Fredholm integral equation of Example 6

u(x) = sing + fo — xcos zt)u(t)dt. (£.1)


0

(a) As in Example 6, approximate the kernel by a degenerate one by considering


only the first two terms of the Maclaurin series expansion of cos zt. Use the
method of this section to find the approximate solution u(x) as given for
Example 6.
(b) Compare the result of part (a) with the exact solution u(x) = 1 and another
approximate of Example 6 [by considering the first three terms of the Maclaurin
series expansion of cos zt in (E.1)].
236 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

i. (a) Use the first three terms of the Maclaurin series of the kernel in the integral
equation

u(z) =e” —x— i a(e*’ — 1)u(t)dt (E.1)

to reduce it to another approximate equation with degenerate kernel, then solve


this problem.
(b) Verify that u(x) = 1 is the exact solution of (E.1).
(c) Tabulate the approximate solution in part (a) and compare with the exact
solution in part (b).

Nk (a) Reduce the Fredholm equation

1 4

GD ety +/ e*'u(t)dt
=1

to an approximate one (in v(z)) with degenerate kernel.


(b) Consider only the first two terms of the Maclaurin series of e** in part (a)
and solve the resulting integral equation, using the method of Example | for a
degenerate kernel.

14. Assume that the kernel K(z,t) is not symmetric, i.e., K(xz,y) # K(y,2),
show that the following two kernels, associated with K (a, t)

Ace iK(s,2)K(s,y)ds

ejesn= ifK(x, s)K(y,s)ds


are symmetric.
Hint: Take the complex conjugate Ki(z,y) of K(z,y), noting that
J fila) fo(z)de = f fil) fo(x)dz, and f,(x) = fi(z).
15: Consider the problem of Example 5 for the Fredholm integral equation with
nonsymmetric kernel (as in Example 5) but with more general nonhomogeneous
term f(z).

u(x) = f(x) + | sin(In x) u(t)dt. (E.1)

(a) In light of the Fredholm alternative for Fredholm integral equation with
nonsymmetric kernels, as stated in Theorems | and 4, discuss the existence of
the solution (or solutions) to (E.1) when
G8 aia
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 237

(ii) A= 2, f(s) = cos ar


(b) Find the unique solution of (E.1) in part (a) when it exists.
(c) Find an infinity of solutions of (E.1) in part (a) when they exist.

5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL

In the preceding section we discussed methods of solving nonhomogeneous and


homogeneous Fredholm equations for the special case of degenerate kernels and
showed how the method reduced essentially to solving systems of linear equations.
In this section we consider the Fredholm equation for another important special case
of the symmetric kernel? [i.e., K(x, t) = K(t, 2) in (5.21)]:

u(x) = f(x) + le TOE


UG Cale iit) Ni Coy: (5.21s)(5.27)

Similar to the case of (3.1), the Volterra integral equation, it turns out that the
resolvent kernel (a, t; ), for (5.27), can be expressed as an infinite series in terms
of the orthonormal eigenfunctions of the homogeneous equation with symmetric
kernel,

az) sailKiet oh (ab) = 1 (t,2) (5.208) (5.28)

which is discussed next.


As in the case of (3.2) for the solution to the Volterra equation in terms of its
resolvent kernel in (3.3), the resolvent kernel ['(z, t; A) of (5.27) will be shown, in
the analysis leading to (5.55), to give
b
u(x) = f(x) + | T(z,t; A) f (t)dt (5.55)
a

as the solution of (5.27), where ['(z,t; A) is given in (5.46). This will be derived
in Section 5.2.2. We stress here the difference between X, the eigenvalue of the
homogeneous Fredholm equation (5.28) and the parameter X of the nonhomogeneous
Fredholm equation (5.27). In most of our treatment we will assume that the parameter
of (5.27) is different from all the eigenvalues {,, }of the homogeneous Fredholm
equation (5.28).

5.2.1 Homogeneous Fredholm Equations with Symmetric Kernel

There are many interesting results concerning the eigenvalues {\,,} and the eigen-
functions {un(x)} of the symmetric kernel of (5.28),

2If the kernel K (x,t) is a complex-valued function, then the definition of the symmetric kernel is
K(a,t) = K(t,x), where K is the complex conjugate of K.
238 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Una An ifK (a, t)un(t)dt, Kiet = ier) (5.29)

where K(x, t) is expanded in terms of the eigenfunctions.


In the following we state, then illustrate or prove, the most important of the results
needed for the development of the series expansion of the symmetric kernel in (5.28)
and the resolvent kernel of the nonhomogeneous equation (5.27).
(a) The eigenvalues of the symmetric kernel in (5.28) are real.
This can be proved easily but one needs to consider complex-valued functions;
that is left for an exercise.
(b) The eigenfunctions u,(x) and u(x) of the symmetric kernel corresponding
to two distinct eigenvalues \,, # Am are orthogonal [i.e., ie tn
(Dithen Eat = 0.
An eH Ami ;
To simplify the orthogonal expansion or the Fourier series in terms of the orthog-
onal eigenfunctions u,,(x) of the symmetric kernel,

Kio. t) = Se Calle) (5.30)

iz i K (a, t)un(x)dax
n (5.31)
i ie u2 (x)dzx
we will normalize them by redefining them as an orthonormal eigenfunction (as we
did in Section 4.1),

$,(2) > ———, 2(ade =)1, (5.32)

(c) The degeneracy or multiplicity p of the eigenvalue A # 0 is finite for every


symmetric kernel that is square integrable on the square {(z,t) : a < x < b,
a <t< 5d}, that is,
b b
i!/ K?(z, t)dxdt = B? < 00. (5.33)
This simply means that for such a kernel there can be only a finite number
p of eigenfunctions uj41,Uj;4+2,°°*,Uj+p (sometimes, for clarity, are written as
ae hen res eee) that corresponds to an eigenvalue A; of K(z, t).
From the above and other results, a very important theorem, the Hilbert-Schmidt
theorem, can be developed. This theorem expands f(z) of the Fredholm equation of
the first kind with symmetric kernel

b
b= af K (a, t)u(t)dt, BAB.) elta) (5.34)

in a Fourier series in terms of the orthonormal eigenfunctions {¢,(x)} of the sym-


metric kernel,
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 239

b
One) = wf K (a, t) bx (t)dt, Kit) = k(t2) (5.35)

Before we state the Hilbert-Schmidt theorem we must note that there is a limitation
on the class of functions f(a) that can be expressed as in (5.34), since thinking of a
solution u(x) for (5.34) means the existence of such solution u(x) for the Fredholm
integral equation of the first kind (5.34) for the given function f (x). However this, in
general, as was illustrated earlier with a very basic problem in Example 8 of Section
1.3, cannot be (easily) assured. Indeed the conditions for the existence of a solution to
Fredholm integral equation of the first kind is much more restrictive when compared
with those of the second kind. Such an important topic will be discussed briefly after
the next Example 7, illustrated in Example 8, then it will be discussed in more detail
in Section 5.4.
The following is a simple version of Hilbert-Schmidt theorem.

Hilbert-Schmidt Theorem-Fredholm Equation of the First Kind Let f(x) be


expressed as in the form of the Fredholm equation of the first kind (5.34), let K (a, t)
be symmetric and square integrable in the square {(z,t): a< x<b,a<t< b)}
for the square integrable u(x), then f(x) can be expanded in a Fourier series,

x) = > ards (2) (5.36)


k=1

a / f(a)oe(o)de (5.37)
in terms of the orthonormal eigenfunctions {¢, (2)}of the symmetric kernel K (2, t)
and the series (5.36) converges to f(a) in the mean (as defined in (4.52) (see (4.51)).
The series is also convergent absolutely and uniformly.
As we shall see in the next section this theorem is essential for developing the
resolvent for the nonhomogeneous Fredholm equation with symmetric kernel (5.27).
The following Mercer’s theorem is of importance as it expresses the symmetric kernel
as an infinite series of a product of its orthonormal eigenfunctions.

Mercer's Theorem
If the kernel K(x, t) is symmetric and square integrable on the square {(z,t) :
a<a2z<b,a<t < bd}, continuous, and has only positive eigenvalues (or at most a
finite number of negative eigenvalues), then the series

Saharan
mane
br (

converges absolutely and uniformly and gives the following bilinear form for the
symmetric kernel:
240 Chapter5 FREDHOLM INTEGRAL EQUATIONS

K(ae) =) HE) (5.38)


k=!

The conditions and results of Mercer’s theorem and Hilbert-Schmidt theorem are
illustrated in detail in the following example.

Example 7 Eigenfunctions for a Homogeneous Equation with a (Nondegenerate)


Symmetric Kernel
The eigenfunctions {ux (zx)},
1
Uk (a) IVs iiK (a, t)ug (t)dt (5.39)
0
of the symmetric kernel

Qdots Unda ed
Kai quae ean os (5.40)
can be obtained by reducing (E.1) to its equivalent eigenvalue problem of (E.3) and
(E.4) as was done in Example 6 of Section 2.5:

au
— + ru =0, Orr <1 (E.3), (5.41)
dx?
11(O) == ets(10): (E..4), (5.42)
The eigenfunctions of(E.3) and (E.4) are clearly u,(x) = sin ka and the eigenvalues
are A, = k?r?. These eigenvalues A, = 1k? of the symmetric kernel in (5.40)
are real, and the eigenfunctions {sin k7x} are orthogonal. From the definition of the
norm square in (4.49) we have

b 1
: 1
[eee|? =a uz (x)dx = i sin* kradz = =.
a 0 2
Hence the orthonormal eigenfunctions are

sink sink
OnE) = see Ba EIS! = V2sinkrz.
|x || i
2

The symmetric kernel in (5.40) is square integrable on the square {(z,t) : 0 <
x < 1,0 <t < 1} since it is bounded in z and t there. The non-zero eigenvalues
here are simple since for every \y, = k?x? there corresponds only one eigenfunction
sin kmz. Therefore, the conditions of the Hilbert-Schmidt theorem are satisfied, for
the square integrable u(x) of (5.34), as K(a,t) in (E.1) is symmetric and square
integrable. Also the conditions of Mercer’s theorem are clearly met, thus K(z, t) of
(5.40) can be expressed in the bilinear series (5.38) with $4 (x) = /2sin ka and
Nk = k?n?.
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 241

Next we will present a discussion concerning the difficulty in securing the existence
of a solution to Fredholm integral equation of the first kind which is concluded by
a detailed illustration in Example 8. The more detailed treatment with precise
(practical) theorems is done in Section 5.4. This topic was touched upon very briefly
in Section 1.3 and was illustrated with Example 8 there.

On the Existence of a Solution to Fredholm Equation of the First Kind

Before presenting any illustration for the Hilbert-Schmidt theorem in relation to


the Fredholm integral equation of the first kind (5.34) we may first inquire into
whether a solution, at all, does exist (let alone be unique) for the equation of such
an illustration. We emphasize this point, since the theory is rather restrictive for
such existence of solutions if compared to that of Fredholm equations of the second
kind. Indeed, in Example 8 that follows shortly with a Fredholm equation of the first
kind, we will illustrate this point in detail. In essence, the comparison will lead us to
think of the generous theory for the existence of the solution to Fredholm equation
of the second kind (5.21) and its associated homogeneous one (5.20), as given by
Theorems | to 4 of Fredholm alternative in Section 5.1. In contrast, the theory for
the existence of the solution for Fredholm integral equation of the first kind (5.34)
(even with symmetric kernel),

f@)= le Ke(ayhult)\di, -K(z,t= KGa) (5.34)

is much more limited.


This difficulty of finding a solution to the Fredholm integral equation (5.34) stems
from the fact that the integration operation over the input as the sought solution u(t),
is a smoothing process, especially, when combined with a nicely behaved kernel
K(a,t). This means, for example, that if the solution u(x) is piecewise continuous,
then the above integration operation on the right hand side of (5.34), with continuous
kernel K (x,t), would result in a smoother, i.e., continuous output f(x) of (5.34)
as the given function at hand. So, if we are given a continuous function f(z) in
(5.34), we cannot, in general guarantee an answer in the search for a solution u(z)
among the class of continuous functions! In other words, if we look at the right
hand side of (5.34) as an integral transform of piecewise continuous functions, then
this transform maps such class of functions to a more restrictive one, in this case
continuous functions. Indeed, for more smooth kernels, i.e., K (x,t) differentiable,
the class of piecewise or even integrable functions u(t) is mapped into differentiable
functions f(x). Hence, for a continuous kernel A (x,t) (in both x and ¢) and a
continuous output f(x), the integral equation of the first kind (5.34) cannot, in
general, be solved by a continuous function u(t). Of course, if K(x, t) is not very
regular, then it is possible that this irregularity is combined with the smoothness of
the integration operation and a continuous solution u(t), to produce a continuous
output f(a) in (5.34).
A very basic theorem, which addresses searching for a solution u(x) among
continuous functions, would even be more demanding on f(z). Such demand of
242 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

the theory translates in requiring that the given function f(x) must be expressible in
a Fourier series of the eigenfunctions of the continuous, real and symmetric kernel
K (a, t) of (5.34). It states that:

Theorem 5 “For the continuous real and symmetric kernel, and continuous f(z),
a solution to (5.34) exists only if the given function f(z) can be expressed in a series
of the eigenfunctions of the kernel ’(z, t), i.e., only if

=> agdr(a (5.36)

fe du(a (5.37)
where we are using here the orthonormal set of eigenfunctions {$4 (x) }@2., on (a, b)."
With the condition of this theorem, the solution takes a similar form

x) = > bebe (x) = D> Ananda (cx), (5.43)


=1 k=1

by = ARQk- (5.43a)

This form satisfies the condition (5.36) as we substitute the expression (5.43) for u(t)
inside the integral of (5.34)

b oe)
si
= K(z, t) Ppsaxon dt

ea (5.36)
= arene = Soren
k=1

after using the fact that @;(x) and A, are the eigenfunctions and eigenvalues, as seen
in (5.35), of the symmetric kernel K (a, t) of the integral inside the sum.
We may note here that, although we are guaranteed the existence of the (contin-
uous) solution of (5.34) in the form of the series (5.43), it is by no means a unique
solution. This is the case, since if we add to the series in (5.43) a function V(a) that
is orthogonal to the kernel K (z, t), i.e
b

/ K (a,t)W(t)dt = 0, (5.44)
and substitute in (5.34), we obtain the same output f(a) as in (5.36). So, for a
unique solution u(x) in (5.43), we must insist that there are no functions (a) that
are orthogonal to the symmetric kernel K (z, t).
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 243

Perhaps, at this level of discussion, the safest way to come up with an example
which has a solution for the Fredholm equation of the first kind, (5.34), is to assume
a form of (continuous) solution u(t) and find the resulting f(z). This f(x) may then
be used as a given function in (5.34) to safely illustrate the Hilbert-Schmidt theorem,
and the impor’ at condition (5.36) and (5.37) for the existence of the solution to
(5.34). Understandably, we can start with the simplest form u(t) = 1 on (0, 1) in the
integral of the special case of (5.34) with the symmetric kernel,

et cdot) 0 <a
K(e,t)={ SOE een eee)
and after paying attention to the two branches of the kernel K (x, t) in (5.45), we can
A ;
easily integrate to have the result as f(x) = —(x — x”), which we have as a simple
exercise (see Exercise 4). In the following example we illustrate the conditions for
the existence of a solution to such resulting integral equation. We will then illustrate
the related aspects discussed above.

Example 8

(a) Now wecan aril a reasonable Fredholm integral equation of the first kind (5.34)
with f(x) = $(a — x”), X = 1 where we know for sure that the solution does
Xistas w(t) 1,0 <7 <1:

(x = 2? elt K(a,t)u (£.1)

where K (z, t) is the symmetric kernel

K(a,t) =! et ere (B.2)


From Example 7 we have the orthonormal eigenfunctions of the kernel K (z, t)
2sinkmz. So we will express f(x) and u(x) in a Fourier sine series
(1.116), (1.115) of these functions. The Fourier (sine) series for these two
functions (f(z) = =(a — x”) and u(x) = 1 on (0, 1)) can be easily written,
respectively as (see Exercise 4(b))

1 9 2/2 - V2sin(2k
+ 1)r2x
Oris 1ACE:3)

SS 2V/2- /2sin(2k
+ 1)r2x
HV lig) a imi Uk a ae (E.4)
sz m(2k + 1)
k=0
Now if we compare the Fourier coefficients a, = aes Oe AS
(a — x) and b, = Bey for u(x) = 1, we find that the condition (5.43a)
244 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

for the existence of the solution to the specific (and well prepared in advance!)
Fredholm integral equation of the first kind (E.1), is satisfied,

2/2 2/2
by = Ark410K = 7°(2k + 1)? - m(2k+1)3 1(2k+1)' (Ee)
It is clear that this given f(x) = 5(a — x”) in(E.1) and K (z, t) in (E.2) satisfy
Theorem 5, which we shall leave for an exercise (see Exercise 4). So the series
expansion (5.36) of f(a), as required by Theorem 5, is justified, thus in turn
the existence of a solution to the special case (E.6) of the Fredholm equation
of the first kind with symmetric kernel (5.34).

(b) With these words of caution about the rather restrictive conditions for the exis-
tence of the solution of Fredholm integral equation of the first kind, we leave
this important subject for now, and we shall return to it in Section 5.4 with a
more general theorem and a rather relaxed condition on the solution u(t) of
(5.34). It may be instructive to give here the spirit of such a theorem compared
to the above rather restrictive Theorem 5.
As we had explained following (5.45) and in Exercise 4(a) of this section, that
the simple continuous function u(z) = 1,0 < x < 1 is a solution to the
Fredholm integral equation of the first kind

1 1
s(t — 27) = / K (a,t)u(t)dt, (E.6)
0
Fy 5) pall ean ea
K(at) = ili), tree1 Bot)
which can be verified here easily. On the other hand, the equation

1
Ci iiK (a, t)u(t)dt (£.8)
0
with the same kernel as in (E.7) has no solution. These results will be supported
by a limited version of Picard’s Theorem 7 with necessary and sufficient con-
ditions for the existence of a not necessarily continuous, but square integrable
solutions.
An important dividend of the more relaxed existence Theorem 7, that we shall
present in Section 5.4, is that as it assures us of the solution, it also offers a
method for constructing such a solution. This is a great relief when we know
that integral equations of the first kind are denied the usual simple iterative
method. The latter difficulty, of course, is due to the absence of the unknown
function u(x) as a separate term outside the integral of the Fredholm integral
equation of the first kind in (5.34) as compared to that of the second kind in
(S21):
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 245

(c) To also illustrate Mercer’s Theorem for the series expansion (5.38) of the above
kernel K(x, t) in (5.45) , we see that the theorem is satisfied since the kernel
is continuous and all its eigenvalues {\,} = {k?7?} are positive; therefore
such a kernel can be expanded in terms of the (orthonormal) eigenfunctions
{dx(x)} = {V2sin kz} as

sin kra sin krt


K (a,t)1 Se
aie (E.9)

In the following section we will develop the resolvent kernel (x,t; A) for the
nonhomogeneous Fredholm equation (5.27) with symmetric kernel.

5.2.2 Solution of Fredholm Equations of the Second Kind with


Symmetric Kernel

With the aid of the foregoing important development of the Fredholm homoge-
neous equation with symmetric kernel (5.28), we show here, at least formally, that the
resolvent kernel ['(z, t; A) of the nonhomogeneous equation with symmetric kernel
(5.27) is expressed as an infinite series of the orthonormal eigenfunctions {o,(x)}
of K(z,t),

Ltd) = 3 GNEAG) DN Fire (5.46)

and hence the solution to (5.27) is

u(x)= f(x) +2 Si
cael Weay. (5.47)

b
res / EMD (5.37)
To prove (5.47), we write (5.27) in the form

b
h(z) = u(z) — f(z) = | K (a, t)u(t)dt (5.48)

which is suitable for the Hilbert-Schmidt theorem with the function h(x) = u(x) —
f(a) in (5.48) instead of f(a) in (5.34). According to the Hilbert-Schmidt theorem,
remembering its important conditions here on u(x)(= A(x) + f(x)) being square
integrable ona < x < band K(z,t) symmetric and square integrable on the square
{(x,t):a<2<b,a<t < bd}, wecan expand h(z) in a Fourier series (5.36) and
(5.37) of the orthonormal eigenfunctions {¢;(z)} of the symmetric kernel K(z, t),
246 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

h(a) = u(a) — f(z) =D) bade (2) (5.49)


p=

b b .
a i h(x)ox(2)dx = i u(x) px (x)da — i,f(x)dx(x)dt (550)
a
=a — ak.

Here d, is the Fourier coefficient of the unknown function u(z),

b
pe / VUE (5.51)
a

and a, is the Fourier coefficient of the given function f(a). In (5.50) we have now
a relation between b;, dy, and ax. It is clear that we need to express b, of (5.49) in
terms of a, to arrive at the final solution (5.47). To do this we need another relation,
by = Ady /Ax, which we can easily show, since

b
Toe=a [u(x) — f(2)]bx(a)de
=f af K (a, t)u(t)dto,(x) dx (5.52)

=4/ ud)[Ki 2)on(0 )dxdt

after using the integral of (5.48) for h(a), interchanging the two integrals, and using
the fact that the kernel is symmetric [i.e., A (2,t) = K(t,x)]. Now according to
(5.35), the inside integral is
Ou (t)
Ak
f DN
a | u(t) 5 (\diz— —
2 fu t) dr (i)dt = —d, (5.53)
a Ak

after using the definition of d; in (5.51). If we substitute from (5.53) for dx in (5.50),
we obtain b, in terms of az,

Arn=— = ak, br =
r
bx yon
(5.54)

and if we now substitute b; from (5.54) in (5.49) , we obtain

u(z) ee ees
coer Ney (5.47)

which is (5.47), the solution of (5.27) with symmetric kernel. This solution (5.47)
can be rewritten using a, from (5.37) as
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 247

ule) = Fe) + SEP pn gana


co b

k=1 m

Pye af 10 yseet) a (5.55)


k=1
b
u(x) = f(x) + a T(x,t; A) f(t)dt, NENG

after exchanging the infinite summation with the integration and defining I'(z, t; A),
the resolvent as in (5.46),

Px(et
T'(a, t; ) ee a he (5.46)
The very clear condition A # A, in (5.47) on the parameter \, in the Fredholm
integral equation of the second kind (5.27), not equal to any of the eigenvalues {A, }
of its symmetric kernel is consistent with the Fredholm alternative in Theorem 1. In
case A = Ax, as we shall illustrate in the next Example 9 for a symmetric kernel, we
will use the second part of the Fredholm alternative as stated in Theorem 2.

The Gibbs Phenomenon in (the Truncated) Fourier Series (Eigenfunctions)


Expansion
In the above development we had the Hilbert-Schmidt conditions for h(x) =
u(x) — f(x) represented as in f(x) of (5.34), whence h(x) was expressed in terms
of the Fourier series (5.49) of the eigenfunctions {¢,(x)} of the symmetric kernel.
We may remark here that in the practical applications, we can compute only a
finite sum of N terms to approximate the infinite series in (5.49), which will, of
course, incur a truncation error. With a fast convergence of the above series, such an
error may be reduced by increasing N. There is also another “stubborn” error that
may appear, which cannot be reduced by the mere increase of the series’ V terms.
Such an error will appear when we write the solution as in (5.49) where f(a) may
be sectionally continuous. For such input function f(x) with a jump discontinuity
we must watch for the very well known error, namely, the Gibbs phenomenon,
which manifests itself as overshoots and undershoots in the neighborhood of the
jump discontinuity of the approximated function h(x) = u(a) — f(x) in (5.49).
Indeed the size of the first overshoot (in the truncated Fourier series) is about 8.95%
of the size of the jump discontinuity. The next is an undershoot of about 4.86% of
the size of the jump discontinuity. This phenomenon also appears in the general
orthogonal (eigenfunctions) expansion, such as the Fourier-Bessel series and the
Fourier-Legendre polynomial series, to give two familiar examples. The sizes of the
first overshoot and undershoot are about the same as that of the above (truncated)
Fourier-trigonometric series.

3See Jerri [1998] for the first comprehensive book treatment of the Gibbs phenomenon that covers the
basic elements of the subject and its research development since its discovery in 1848.
248 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Example 9 Nonhomogeneous Fredholm Equation with Symmetric Kernel


Solve the Fredholm equation of the second kind
1

u(a) =a+ af K (a, t)u(t)dt (E£.1)


0
with the symmetric kernel given as in Example 7:

K(e,t) = {Or ee a (Gea)


In Example 7 we showed that this symmetric kernel satisfies all the conditions
required for deriving the solution (5.47) for (E.1), and that the eigenvalues and
the orthonormal eigenfunctions of K (x,t) in (E.2) are \xy = 17k? and ¢,(x) =
/2sin ka, respectively. If we use these results, the solution to (E.1) according to
(5.47) is

a,V2sin kr
u(z) ee ee dA 17k?

bats £3
2X S (—1)**! sin kaa BS)
k(m2k2 — 2)
since
1 ns) k+1 9
ip, = ihzv2sinkradz = Cah v2
0 kr

The resolvent kernel of (E.1) according to (5.46) is

T(x, t;d) =2 3 ent


aes emt yo aR. (E.5)
The foregoing treatment for constructing the solution (5.47) of the nonhomoge-
neous equation (5.27) was based on the fact that the parameter A of (5.27) is not equal
to any of the eigenvalues {A;} of the symmetric kernel K (z, t) in (5.35). When A
is equal to one eigenvalue \j;41, with degeneracy p, then for X = Ax = Aj+1, the
coefficient ax/(A — Ax) in (5.47) is not defined unless a, = 0, which makes the
ax, /(A — Ax) indeterminate and hence arbitrary. From the definition of a,x,
b

oo / Heme
the condition a, = 0 would mean that f(x) must be orthogonal to #4 (x), and hence
a solution to the integral equation (5.27) in the form (5.47) does not exist unless f (a)
is orthogonal to all the eigenfunctions $;+41, $j+42,---,j+p that correspond to the
(degenerate) eigenvalue Aj41 = Ajzo = ++: = Aj4p-
b

an = | f(a)du(x)dx = 0, k=jtjes,j+p. (656)


5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 249

We may remark here that this condition on the nonhomogeneous term f(z) of (5.27)
is consistent with Theorem 2, the second part of the Fredholm alternative (Theorem
1) for symmetric kernels.
In the case that this condition (5.56) is satisfied, the series will include arbitrary
constants B,, Bj,---, By resulting from the p indeterminate forms

ak
=R OAR ay = 0, k=j+1,j+2,-*:,9
+p
A — Xz

and (5.47) becomes

u(z) =f(z)+rA65« pa Poa (5:50)

+ Bi b541(2) + Bodj+2(t) + --> + Bpdj+p(z)


which represents infinity of solutions because of the arbitrary constants B,, Bo,
-++, By. (For the degeneracy in (5.57), its last line may be (for more clarity) written
as +B, 9), a Bog), S i Be oea) Thus, and according to Theorem 2, for
X = Xx the Fredholm integral equation with symmetric kernel (E.1) has an infinity
of solutions in (5.57) provided that condition (5.56), of f(x) being orthogonal to all
the eigenfunctions corresponding to the (degenerate, index p) eigenvalue A;41 = A,
where 4 is the given parameter in (E. 1).

Example 10
The Fredholm integral equation

u(x) = 2 + 4n? [ K (a, t)u(t)dt (E.1)


0
with the symmetric kernel of Example 9,

POS Kare (E.2)


nearer iis oe
is not solvable since here we have that the parameter X of (E.1) is equal to Az = Ar?
the eigenvalue of K (z, t) in (E.2) of Example 7, and f(x) = z is not orthogonal to
its corresponding eigenfunction ¢2(x) = V2 sin 27a since

1
=]
—_

ihzV2sin2radzr = =v2 = — £0.


0 27 ra

However, the integral equation with f(z) = sin 3z instead of the above f(x) = x
in(E? 2b);

u(x) = sin3ra + 41? ifK (a, t)u(t)dt (E.3)


0
250 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

(with K (a, t) as in (E.2)) does have solutions even though its \ = An? = Xo, since
f(x) = sin 3rz here is orthogonal to ¢2(x) = V2 sin 272,
1
v3| sin 37z sin 27xdzx = 0.
0

The solution is obtained from (5.57) after computing a, for f(z) = sin 37x, where
we note that this f(z) is a very special case, as it is a member of the orthogonal set
{sinkrax}. Thus a, = 0 except for ag = es sin? 3radxz = 1/ V2, where the
sum in (5.57) becomes only one term, and we have

u(z) =sin3aa + 4r?a3


ae eee shah (E.4)
= 2 sin 37a + Bo sin 272. ‘

This represents an infinite number of solutions for (E.3) because of the aribtrary
constant Bz in (E.4). We note here that the multiplicity is p = 1 for the eigenvalue
A2 = Ar?,

Comments on the Numerical Evaluation of the Eigenvalues: Rayleigh-Ritz


Method
In solving the homogeneous Fredholm equation (5.20),

(0 ie af K@,nunat (5.20)

in order to find its (nontrivial solutions) eigenfunctions and eigenvalues, we had up


till now the chance of the special case of degenerate kernels in Section 5.1.2, where
the method of Section 5.1.1 was employed as it was illustrated in Example 2 and
at the end of Example 4. From the start of Section 5.2 we needed the eigenvalues
and eigenfunctions of the homogeneous equation (5.20) with nondegenerate (but
symmetric) kernel. This was essential for the statement of the Fredholm alternative
(Theorems 1, 2) concerning the existence of a solution (or solutions) to the Fredholm
integral equation of the second kind (5.27) (with the same kernel). To find such
eigenvlaues and eigenfunctions we resorted to reducing the homogeneous Fredholm
equation (5.20) to its equivalent boundary value problem, associated with a differen-
tial equation and boundary conditions, as discussed and illustrated in Section 2.5. We
used this method in Example 7, and we even repeated using the same kernel and its
eigenvalues and eigenfunctions in the rest of the Examples 8, 9 and 10 concerning the
existence of a solution to the Fredholm integral equation of the first kind in Example
8, and the construction of the solution to the Fredholm equation of the second kind
(5.27) in Examples 9 and 10, with the same nondegenerate (but symmetric) kernel.
We emphasize here that in most of the illustrations that we have presented, the
resulting boundary vlaue problem was a very familiar one and hence its eigenfunc-
tions and eigenvalues were obtained with minimum effort. However, in general the
EXERCISES 5.2 251

resulting boundary value problem may be a general Sturm-Liouville problem, and


hence we cannot expect such easily obtained familiar solutions, so we may resort to
the approximate or numerical methods to find the eigenfunctions and eigenvalues.
One of the most familiar numerical methods for estimating the eigenvalues is called
the Rayleigh-Ritz method, whose derivation is based on variational principles, which
we shall not pursue here, and be satisfied with another method that we will cover
briefly at the end of Section 5.3.2. For the Rayleigh-Ritz method simple presentation
and detailed illustration, we refer the reader to the first edition of this book.4 Also
a summary of the method with a number of detailed illustrations are found in the
“Student’s Solutions Manual" to accompany this book® (see the end of the preface
for more information).

Exercises 5.2

1. Consider the homogeneous Fredholm equation with symmetric kernel

1) = | cos(x + t)u(t)dt (E.1)


0
of Exercise 2(c), Section 5.1.

(a) Use the results of Exercise 2(c), Section 5.1, to verify that for this sym-
metric kernel (cos(az + t) = cos(t + x)) the eigenvalues are real and the
corresponding eigenfunctions are orthogonal.
(b) Use differentiation to reduce the integral equation to an ordinary differ-
ential equation from which you determine the eigenfunctions, then the
eigenvalues. Compare those with the results of Exercise 2(c), Section
=a
(c) Find the orthonormal eigenfunctions.
(d) Use (E.1) to find the eigenvalues. Hint: Substitute each eigenfunction of
part (c) in (E.1) to find their corresponding eigenvalues.
(e) Show that the symmetric kernel is square integrable on {(z,t):0 <a <7,
Ore <n }8
(f) Determine whether Mercer’s theorem applies to this problem and if so,
write the Kernel’s bilinear expansion of (5.38).

2. Use the results in problem | to solve the nonhomogeneous integral equation

u(z) =x+A r cos(xz + t)u(t)dt (E.2)


0

4Jerri [1985, pp. 146-151]. See also Kanwal [1971, 1997 (2nd ed.)] and Green [1969].
SJerri [1999].
252 Chapter5 FREDHOLM INTEGRAL EQUATIONS

by finding the resolvent kernel.

. Consider the nonhomogeneous Fredholm equation


;

u(x) = cos2z + 2 | K (x, t)u(t)at (B£.1)


0

with the kernel

sinxcost, O<2<
re<
=my i]

K(z,t) = sinvcosz, €< 2 -~
w|
a>

(a) Verify that the kernel is symmetric and is square integrable on the square
{(@.2) 50 Se < 2/2 0 tas 2}.
(b) Reduce the homogeneous equation
a/2
u(e)= | K (a, t)u(t)dt (2.3)
0
with K(2, t) as in (E.2) to a differential equation to obtain the eigenvalues
and eigenfunctions.
(c) Use the information in part (b) to solve the nonhomogeneous equation
(ESD):
(d) Just as in Exercise 2, use the Fredholm alternative (Theorems |, 2) to show
that the Fredholm integral equation in (E.1) above does indeed have a
unique solution.

. Consider the Fredholm integral equation of the first kind in (5.34) with A = 1,
(with the given particular f(a) and the symmetric kernel) as it was considered
in Example 8. (This is the same problem as Exercise 2 in Section 5.4.)

(a) Show that the solution u(t) = 1 corresponds to the nonhomogeneous term
f(z) = $(@ —2*). Hint: Watch for the two branches of the kernel
K (a, t), write the integral on the two subintervals (0, 2) and (x, 1).
(b) As needed for (E.3) and (E.4) of Example 8, write the Fourier sine series
for both the solution u(x) = 1, and the nonhomogeneous term f(x) =
+(x — x”) on the interval (0,1).
(c) Show that the nonhomogeneous term f(x) = $(2 — 2”) in (E.6) and
K(a,t) in (E.7) of Example 8 satisfy Theorem 5. Hint: Note that
f(x) = =(x—2”) is continuous on (0, 1), and that the clearly symmetric
kernel A(x,t) in (E.2) is square integrable on the square {2e(0, 1),
te(0,1)}. (See the hint to part (a).)

5. For Example 10, verify that u(x) in (E.4) satisfies the Fredholm equation in
(B.1).
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 253

5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND

5.3.1 Method of Fredholm Resolvent Kernel

One of the methods of solving the general Fredholm integral equations of the second
kind (5.21),
b
u(x) = f(x) + a K (a, t)u(t)dt (5:21)

is the method of evaluating the Fredholm resolvent kernel I'(z, t; A),


b
u(x) = f(z) + | I(x,t; A) f(t)dt (5.58)

y — D(z, tA)

where ['(a,t; A), D(x, t;), and D(A) are called the Fredholm resolvent kernel
of (5.21), the Fredholm minor, and the Fredholm determinant, respectively. The
D(a, t; X) is defined as

Dee) ee7
es Bn(a;t), (5.60)
n=0
where Bo(z,t) = K (x,t), and

Bike,t) = Cak (2,2) = nf USGS)


iB ei(Sab)\ dS.) ait lez ee (0.01)

where A
Ch / Brealt,t) dé, ars? Cy =o (5.62)

and D(,) is defined as ;

D(A) = S9 See. (5.63)


n=0
Note also that from (5.60) D(z, t;0) = Bo(x,t) = K(z,t) as seen in (5.61) with
Co = Lin 262).
Before we present an illustration for this method in the following Example 11 we
may comment on the importance of the above results (5.58) to (5.63) with D(A) # 0
as they encompass the following Fredholm’s first theorem (Theorem 6). Without
assuming complex analysis we may give a simple version of this important theorem
with our attention being fixed, primarily, on the basic conditions for the existence
of the unique solution to Fredholm integral equation of the second kind (5.21) as
presented in (5.58). This is besides good qualities of the Fredholm resolvent kernel
T(z, t; 4) in (5.59), and the convergence for all A of the series (5.60) and (5.63) for
the Fredholm minor D(z, t; 4) and the Fredholm determinant D(A), respectively.
254 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Theorem 6 Fredholm’s First Theorem: A Simple Version


“The Fredholm integral equation of the second kind (5.21) with f(x) and K(z, t)
integrable, has for D(A) # 0, a unique solution of the form given in (5.58) via the
Fredholm resolvent kernel ['(x, t;). Moreover, this resolvent kernel, as seen in
(5.59) is a ratio of two infinitely differentiable functions of A, namely D(z, t; A) and
D(A)."

Example 11 (The Fredholm Resolvent Kernel Method)


Solve the Fredholm integral equation of the second kind

(a) = f (2) ck af xe'u(t)dt (£.1)

According to (5.58), the solution to this equation is



u(x) = f(x) + | I(a,t; A) f(t)dt.

To evaluate the resolvent kernel I(x, t; 4) we should start evaluating the functions
required for it in (5.59) which are found in (5.60)—(5.63).
Here Bo(z,t) = K(zx,t) = ze’, Co = 1, and hence from (5.62) we have
1 1
Cr ifBo(t, t)dt = / te di= a1. (E.2)
0 0
For C2 we need Bj (t, t), which we can evaluate from (5.61),

1
ari Gata ies Gre
GlCrag be K (a, s)Bo(s, t)ds
1 0 1
Sn = / ze’se'ds = xe’ — cet | se*ds ee)
0 0
= ze! — re' = 0.

From (5.62) we have

If we use Cp = 0 and B, = 0 in (5.61) for Ba(x, t) we obtain B2(z,t) = 0 and


this can be used again in (5.62) for C3 = 0. This can be continued to obtain

Creat OL = 0) (Dy PS OSD

pepsi
It is clear from (5.63) and the values of Co = C, = 1, C, = 0, n = 2,3,--- above
that

DOY28C} CORSO se0 (E.5)


5.3. FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 255

and from (5.60) and the values of By = ze’, B, = 0,n = 1,2,---, we have

D(a, t;A) = ze’ —0 = ze’. (E.6)


From (5.59), (E.5), and (E.6) the (Fredholm) resolvent kernel becomes

DE tN) ace
Les) = Doe To (E.7)

and the solution to (E.1) is


t
HGS f (t)dt. (E.8)

We may remark here that the kernel K (x,t) = ze! of (E.1) is degenerate with one
term, and hence it is much easier to solve (E. 1) using the method for solving Fredholm
equations of the second kind with degenerate kernel which we discussed in Section
5.1 and illustrated in Example | for a degenerate kernel with two terms,

K(a,t) = at? + t?z.


Another way of expressing the repeated integral expressions for B,, (x, t) in (5.61)
and C’,, in (5.62) is in the form of n repeated integrals whose integrand is a determinant
of order n and whose entries are determined by the kernel K(z,t). These new
expressions are

K(a5%) a K(z, tn)


ree, Key ty)0
(z; t) =f{[- salt dt, dtz---dtn

K (ta, t) =a FG ))
(5.64)

as K (ty, te) ee K (t1, tn)


K (ta, ti) es ike

On= fi[- nae dete Pag


KiGasti). Pe K (tn; tn)
(5.65)
These kinds of expressions for B, (x,t) and C;, may have a great advantage for
those who are familiar with determinants and their properties and manipulations,
which are used for more efficient computations. For example, we needed two steps
of substitution to compute B2(z,t) from (5.61) in Example 11, whereas if we use
(5.64) we can immediately write

1 pl} ve xe’?
B2(a, t) =k is be. te! t,e!? dt, dty. (5.66)

0 0 t ef e! toe”?
256 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

It is advisable here to exhaust the properties of determinants, which may produce a


simple result for the integrand, before embarking on doing the double integration.
For example, the result of Cz = 0 in (E.4) of Example 11, can be obtained easily,
since from (5.65) we have

ef’ te"

tye”

where it is obvious that the determinant is zero,


oe

ef?
dt, dt (5.67)

ty t te
|t1e 1€ ee Fine titte _ titee =
toe toe??
and hence Cy = 0. Also the integral in (5.66) can be shown to vanish after noting
that the first and second columns of the determinant are proportional, which results
in the vanishing of the determinant [see Exercise 2(a)].

5.3.2 Method of Iterated Kernels

Another method of solving Fredholm integral equation of the second kind


b
u(x) = f(x) +A / K (a,t)u(t)dt (5.21)
is the method of iterated kernels. This method starts, as in the case of the Volterra
equation of the second kind (of Section 3.1) by the zeroth approximation uo(z) =
f(a) for the solution u(x) in the integral of (5.21), to obtain the first approximation
u(x),
b

= Fee / K(x,t)f(t)dt = f(e) + Adi(z) (5.68)


where 5

(x) = i K (x,t) f(t)dt. (5.69)


This ui (x) of (5.68) is substituted again in the integral of (5.21) to obtain the
second aproximation u2(z),

u2(z) flo) +a f K(a,0u

“fae re a]
flo) +a foK (a,
( t)f(t)
wary [ i Ke OK Ga f(y)dy

= f(x) + doi (c) +? feKo(x,y) (yay


(5.70)
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 257

after using (5.69) for the first integral and defining the iterated kernel
b

PACH i K (m, Rae ye (5.71)


with Ky (t,y) = K(t,y).
If we define
6

$o(x) = / Ko(2,y) f(y)dy (5.72)


then u2(x) in (5.70) becomes

U2(x) = f(a) + Adi (x) + A*G2(z). (5.73)


This second approximation is then substituted in (5.21) following the same steps as
those used above to obtain u3(z):
b
u3(z) = f(r) + Adi (x) + A?ho(x) + »° | K3(z,y)f(y)dy (5.74)
= f(x) + Adi (x) + A2ho(2) + A343 (zx)
where
b
K3(z,y) =| K (a, t)Ko(t,
y)dt

$3(x) = i Ks(2,y)$(y)dy (5.75)


and K2(t, y) is given by (5.71).
If this process is continued n times, we obtain up,(x), the nth approximation for
the solution of (5.21), as

Un (ey =f (2) + 19% (x) + \2go(x) +--+ +A" bn(z)

= f(x) + > X9i(2), C2)


b

$(2) = / K,(2,) f(u)dy (5.77)


b
K;(2, y) = i Kia, Kae ydt, He Pie Bo Ah (5.78)

K;(z, y) is called the ith iterated kernel. It remains to find under what condition the
series (5.76) converges to u(z), the solution of (5.21). It turns out that the series
(5.76) converges for |AB| < 1,° |A| < 1/B, where

®See Pogorzelski [1966].


258 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

He a / i” 2m, t)dedt. (5.79)


The convergent series

u(x) = f(x) + s AC) (5.80)

is called the Neumann series and can be rewritten, after substituting for ¢;(2) from
(5.77), as
os) b
u(x) = ce i Ki(«,t)f(t)dt

=se)+ fo Sarin] f (t)dt (5.81)

ere) +a D(a,t;)f(t)at
a

and hence we find a new resolvent kernel for (5.21), which is

Pia, td) = se1 K;(z, t) (5.82)


Goal

in addition to the Fredholm resolvent kernel of (5.59). In Example 13 we will present


a simple proof for showing that the above Fredholm resolvent kernel I(x; t; A) is
unique.
We may remark here that assuming the uniqueness of the solution to (5.21), we can
show that this resolvent kernel (5.82) associated with the Neumann series solution
is unique, a result that we will relegate its’ proof as a simple illustration in Example
13. Next we illustrate the foregoing iterated kernel-Neumann series method for the
same problem as that of Example 11, where we used the Fredholm resolvent kernel
method of (5.58)—(5.63) to solve it. We may remind of the other (special) resolvent
kernel of (5.46) that we used in (5.55) of Seciton 5.1 for the solution of the Fredholm
integral equation with symmetric kernel (5.21s).

Example 12 Iterated Kernels: Neumann Series Method


Solve the integral equation of Example 11,

u(x) = f(x) + | ze'u(t)dt. (E.1)

To arrive at the Neumann series solution (5.81) for this problem we must prepare
K;(z,t), the ith iterate of the kernel K(x,t) = xe’. Here we have K,(zx,t) =
K (x,t) = ae‘. For i= 2 we obtain the second iterate K(x, y) from (5.78),
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 259

1 1
Ko2(2,y) =) Ka) Ki(t,ydt = [ re'te¥dt
ita y (£.2)
= ze? te'dt = xe”.
0
Now we use this result again in (5.78) for 1 = 3 to obtain

1 1
K3(a2,y) =| K(a,t)Ka(t,y)at = [ ze'tedt
arth 0 (E.3)
Se te'dt = ze’.
0
and it is obvious from (E.2), (E.3) and (5.78), that if these calculations are repeated,
we obtain the general expression for the 2th iterate of the kernel as

TS(a4) = wee. (E.4)


This is now substituted in the Neumann series (5.81) to obtain the final solution to
(ET);

u(x) = f(a) +2 ON [ e! f(y)dy. (E.5)


We note that this series converges for |A| < BI = ,/6/(e? —1) & 0.97 since
according to (5.79) with K (z,t) = ze’, we have

= ff wren =f°tere [et [te


1 e? pick RP e
eee ‘(j)a= 3 2 =
(E.6)
So B = ,/(e? — 1)/6, and for the series in (E.5)toconverge we must have |AB| < 1,
which means that |A| < 1/|B| = \/6/(E? — 1) = 0.97. If we write (E.5) in terms
of resolvent kernel, we obtain the same answer as in Example 11,

u(x) La) +f (xsoe) Hay

=f(2 of (>:“)e” f (y)dy (E.7)


te
= Fay +0f FS rendy
after recognizing that the geometric series }>7° ,\‘~' converges to 1/(1 — A).
We note here that Example 12 was carefully chosen with simple kernel to facilitate
the illustration of the method of iterated kernels by keeping the effort of performing
260 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

the repeated integrations at a minimum. As a consequence is the special feature of


Example 12 of the simple form of the 7th iterate K;(z,y) = re¥ of (E.4), which in
more general problems will be of a more complicated form.
A special class of kernels that may result in a finite (instead of an infinite) Neumann
series is that of the orthogonal kernels. Two kernels K (x,t) and L(a, t) are called
orthogonal on {(z,t):a<a2<b,a<t < dD}if the following two integrals vanish
(see Exercise 25, Section 4.1):

iOL AD (5.83)

ijOLN ie (5.84)
As a special case, if it turns out that the kernel K (z, t) is orthogonal to all kernel
iterates K;(x,t),i =n+1,n+4 2,---, then according to (5.78), all the iterates with
order above n will vanish and the Neumann series (5.80) will have n terms only. In
the very special case when the kernel K (z, t) is orthogonal to itself, then according
to (5.78), we have
ha 1) (5.85)
and the Neumann series (5.80) becomes a one-term series with the resolvent kernel
of (5.82) as a A multiple of the kernel itself.
In the next example we prove that the Fredholm resolvent kernel T(z; t; A) of
(5.82) is unique.

Example 13 Proof of Uniqueness of the Resolvent Kernel I'(z, t; A) in (5.82)


For a fixed \ = Xo let there be two resolvent kernels Ij (x, t; Ao) and ['2(z, t; Ao)
for the solution in (5.81). Substituting these two values of the resolvent kernel in
(5.81) (assuming it is a unique solution to (5.21)), we obtain

b b
f(a) + do | en Ao)S(Odt = f(z) +20 | T(x, t;do)f(t)dt (E.1)
a

b b

i Pi(2,t; do) f(t)dt = i)Po(2, ty do) f (Cat (E.2)


which can be written as

b b
/ (T(x, t; Xo) f(t)dt — / T'2(a, t; Xo) f (t)dt = 0. (E.3)
We note that (E.3) is valid for arbitrary function f(t); hence if we set Cy (2, t; Ao) —
P2(x,t; Ao) = ®(a,t;Ao) and let f(t) = &(z,t;Ao) in (E.3), we obtain

b
} |@(zx, t; Ao) |?dt = 0
5.3. FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 261

which implies that ®(z, t; Ao) = 0; hence

Tr, (a, i Xo) = T2(z, Up Ao) =0

and
Ty (a, UR Ao) = L(g, Us Ao)

which says that the resolvent kernel of (5.81) is unique.

Numerical Evaluation of the Eigenvalues: Method of Traces

In Section 5.2.3 we mentioned the Rayleigh-Ritz method for estimating the eigen-
values A for the homogeneous Fredholm integral equation

HO) = iy K (a, t)u(t)dt. (5.86)

We present here another method for estimating the eigenvalues since it essentially
makes use of the iterated kernels AK’;(z, t) in (5.78) of this section. Here we will only
state the results of the method and illustrate it with a detailed example. This method
gives the following formula for estimating the smallest eigenvalue .,:

Adj
Ay ~ (5.87)
Agi+2
where A; is defined in terms of Kj (x,t), the jth iterate of the kernel A(z, t); as

Aj = /; K,;(t, t)dt (5.88)


which is called the jth trace of the kernel K (x, t) and hence the name for the method
of traces. For the symmetric kernel K (x, t) we can show that the even-indexed trace
Ao; can be-expressed in terms of K?(z, t) as
b b
Ao; = / / K? (a, t)dxdt. (5.89)

Example 14 Method of Traces for Estimating Eigenvalues


In this example we consider the following problem
1
u(e) — | xtu(t)dt. (E.1)
=i
Here we have K (x,t) = Ki (x,t) = xt and for simplicity we will seek an estimate
for the lowest eigenvalue A; from (5.87) as

Ag
oT (E.2)
Ai~
262 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

which corresponds to i = 1. According to (5.89), since we have a symmetric


kernel K(a,t) = rt here we must evaluate the first and second iterated kernels of
K(a,t) = xt for Ay and Ag, respectively, to be used for a (rough!) approximation
of A, in (5.87) with 7 = 1. So we start with K,(z,t) = K(a,t) = xt, to obtain

2 =f f xite.pazdt = ff teed
Es

afta al
dt = Salles Baa bir 2 i
“ops
bares) wobec
For A4 we must have K2(x, t), which can be evaluated from (5.78) with

EG(Ge0) = =a eance
1 A

Ko(z,t)
; = |e K(z,y)Kily,t)dy
(y)Ki (E.4)
1 1 ye
= / zyytdy = at | y?dy =azt —
= A 8} —1

Now we substitute K2(x,t) = (2/3)zxt from (E.4) in (5.89) to obtain the value for
Aa,

eae iPeox, t)dzdt= ie{3~2" t*daxdt

: 2 4 2

=e :
=
sel;
== LS = t“dt
va:
-*f
27
ea=(5D7) mithsj SMUD
mesT5 (AAC)0 29) dame
LSSle
When we substitute in (E.2) the values of Ag = 4/9 from (E.3) and A, = 16/81
from (E.5), we obtain the estimate for the lowest eigenvalue,

Te I by
Anes (2 © \1G/8t a2 See
and hence A; ~ 3/2.

5.3.3 Some Basic Approximate Methods

The approximate methods that we will present here for solving the Fredholm equation
of the second kind

e / K (a, t)u(t)dt (5.90)


are based on approximating the solution u(z) of (5.90) by a partial sum.
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 263

N
Sn (2) = >— cede (zx) (5.91)
k=1

of N linearly independent functions $1, ¢2,---, Nn on the interval (a, b). Of course,
if this approximate solution (5.91) is to be substituted in (5.90) for u(z), there will
be an error €(z,c1,C2,---,¢n) involved, which depends on z and on the way the
coefficients c,, k = 1,2,---, N are chosen,

b
Syi(z) = fle) +f K(a,t)Sw(t)dt + €(z,c1,c2,°-+, cn). (5.92)

The main point here is how we can find or impose N conditions to give us the N
equations required for determining the N coefficients c, c2,---,cn of the approxi-
mate solution (5.91). The methods employed will differ by the way these conditions
are set, and of course the better method will be the one that keeps the error in (5.92)
to a minimum.

Collocation Method
This method presents the NV conditions by insisting that the error in (5.92) vanishes
at N points 71, 22,---,xn. This reduces (5.92) to the N equations

b
Sn (ai) = f(z;) +f K (2;,t)Sn(t)dt, em Le 2 ee (5.93)
a

for determining the coefficients c,,c2,---, Cn of the approximate solution Sy(x) in


(5.91). To determine these coefficients in (5.93) we first substitute for Sj(a) from
(5.91) in terms of the given N linearly independent functions $1, ¢2,---,@n(z),
perform the integration, then substitute z = 2,,%2,:--,@yn for which the error
€(X, C1, C2,*°*, Cn) vanishes.

Example 15 The Collocation Approximate Method


We illustrate this approximate method with the following simple Fredholm equa-
tion of the second kind:

Ue) = oer ibxtu(t)dt (E.1)

which of course can be solved by using any of the exact methods discussed in
the preceding sections as the kernel K(z,t) = rt is degenerate and symmetric.
We choose here three linearly independent functions ¢1(z) = 1, @2(x) = a, and
¢3(x) = x”, and so the approximate solution from (5.91) is

3
S3(e) = S- cebe (x) = cy + cou + €32”. (E.2)
k=1
264 Chapter 5. FREDHOLM INTEGRAL EQUATIONS

If we substitute this in (5.92), we obtain

1
53 (@) C1 i Liat cya? = xt i xt(c, + cot + e3t”)dt + €(x, C1, C2, C3)
el
1
=( C1, C2,C3)
+e@r+co2% =a2+ ail (cit + Cot? + cgt®)dt + €(z,
=i
(E.3)
and after performing the integration,

1 ‘ 42 3 44}
He (cit ecrte + c3t )dt = Cy ate az + C3 om +,

1 1 1 1 1 1 EA
= 30 AF 3° ar 4° = (Fe _ 3° + ic) ( )

}
= -C

(E.3) becomes

, 2
Cy +cgrt+c3t° =2“2+e 3} + €(X, C1, C2, 3)
(E.5)
D2
= 2 (1- 50) + €(@,
C1, C2, C3).

To find c,, C2, and cz we need three equations, which we provide (via the colloca-
tion method) by insisting that the error €(@, c1, C2, C3) in (E.5) vanishes at three points
(among other choices) x; = 1, 2 = 0, and x3 = —1, which gives, respectively,

2 1
Clips CO Cia Clie aac! wae (E£.6)

c, +0+0=0, @ = (E.7)

Gu Ont G3 ceil Cp, e1 — en +3 = — 1. (E.8)

It is simple to solve for c; , c2, and cz from (E.6)-(E.8), which gives cy; = cz = 0 and
C2 = 3. The approximate solution to (E.1) is $3(2) = 3a. For this example it happens
that we can easily verify that the exact solution to (E.1) is also u(x) = 3x2. However,
the perfect agreement between the approximate and exact solutions should not be
surprising since the particular form of the approximate solution S3(x) = c, + cox +
cz included the exact solution as a very special case, u(x) = coxa = 3z. It should
be clear, however, that such agreement is not possible when we consider another
form for the approximate solution of (E.1), say, S3(x) = cy + cosinz + c3.cosz
in terms of the three linearly independent functions 1, sin, and cosz, which we
leave as an exercise. We may remark again that we have chosen this very particular
problem to minimize the detailed computations in favor of clarifying the main steps
of the method. In the following example we consider a more general problem with a
known exact solution with which to compare our approximate solution.
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 265

Example 16 The Collocation Approximate Method


The Fredholm equation

eet fsze'u(t)dt (E.1)

is a special case of problem (E.1) of Example 12 with f(z) = e~* and \ = —1;
hence its exact solution is easily obtained as

(E.2)
using (E.7) of Example 12.
Now we will use the collocation method to find an approximate solution to (E.1)
which we will compare with the exact solution (E.2). We again choose the three
linearly independent simple functions 1, x, 2”, so the approximate solution is $3(x) =
c, + cor + c32”. If we substitute in (5.92) with f(z) = e~* and K(z,t) = —ze’,
we obtain
1
@) =P One or C3" =e X= sf e'(cy ap Onis ar c3t”)dt ar €(2, C1, C2; C3). (E£.3)
0

We perform the integration on the right side of (E.3) to obtain

C1 + cot + cg? = e * — x(cye — cy + Cp + c3€ — 2c3) + €(2,


C1, C2,C3). (E.4)

To determine c;, C2, and c3, we insist that the error (x, C1, C2, c3) in (E.4) vanishes
at three points. In this case we take x = 0, 1/2, and 1, which gives us the desired
three equations in c;, C2, and c3.

Oe cq +0+0=1-0, Cja=al (E.5)

1 il 1
= siert 52+ 76s =e 1/2 _ “(ce — cy + cp + c3€ — 23)

mgt ace =ale— lta tee — 2c)


(£6)
g=1l:cqt+e+e; =e! — (cje
— cy + C2 + C3€ — 2c3)
=1l+o+c3 =e! —(e—1+c2 4+c3e — 2c3).
(E.7)
From (E.5)-(E.7) it is easy to solve for c,,c2, and cz as cy = 1, cp = —1.441, and
cz = 0.310, which makes the approximate solution

S3(x) = 1— 1.4412 + 0.3102”. (E.8)

In Table 5.2 and Figure 5.1 we present a comparison between this approximate
solution (E.8) and the exact solution (E.2) of the problem (E.1).
266 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Table 5.2 Comparison of Approximate (Collocation Method) and Exact Solutions of Fred-
holm Integral Equation (E.1)

AY 0 0.25 0.5 0.75 1.0

Approximate values
u(x) ~ S3(x) =1—1.4412 + 0.3102? 1 0.6590 0.3568 0.0933 -0.1310
Exact values, u(z) = e~* — § 1 0.6538 0.3565 0.0974 -0.1321

Fig. 5.1 Comparison of approximate (collocation method) and exact solutions of Fredholm
equation (E.1) of Example 16.

Galerkin (or the Weighted Functions) Approximate Method


This method establishes the NV conditions necessary for the determination of the NV
coefficients in (5.91) by making the error €(2, c1, C2,°--, cn) of (5.92), as a function
of z, orthogonal to N given linearly independent functions 7 (x), wo(z),---, Wn (z)
on the interval (a,b). We will use the definition of orthogonality in (4.46) on the
error €(Z, C,, C2,°**,Cn) in (5.92), where these NV conditions become

w;(z)e(z, C1, C2, ny, -,cn)dz

a
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 267

which can be rewritten as the following N equations in the N unknowns cy,


»CN;

b b b

ifb;(2) a5ih— de = / bj(a)f(e)de, jf=1,2,---,N


or

(5.95)

after substituting for Sjy(x) from (5.91). We remark here that in general the linearly
independent functions ~;(zx) are different from ¢;(x) used for the approximation,
but sometimes it is convenient to use the same functions.

Example 17 The Galerkin Approximate Method


For simplicity we will illustrate this method for the same problem of Example 15,
1
CE +/ xtu(t)dt (£.1)
=

and we choose the same linearly independent functions ¢)(z) = 1, ¢2(x) = a, and
$3(x) = x” to approximate the solution u(x) by

S3(x) = cy + co +32”. (E.2)

If we substitute this in (E.1), then according to (5.92), the error is


1
€(@,C1,C2,€3) = C, +eoa +32” — 2 — ih at(cy + cat + c3t?)dt. (E.3)
-1
To find the three equations necessary for determining c,, C2, and c3, the Galerkin
method requires this error to be orthogonal to three linearly independent functions
w1(x), W2(x), and 3(zx), which again for simplicity we choose as 1, z, and 2”,
respectively. The orthogonality condition (5.95) gives the three desired equations in
C1, C2, and c3:

1 1
[fe + Cot + C32 > | gt(cy + cot + cat?) i= fl l(x)dx (E.4)
1 1

1 1
[ee
a Cy, + cov + 327 — sh wie; cot + cat) dg
i} a(a)dx (E.5)
—] = 1

1 1
lx? a + cot + c327 — i! at(cy + Cot + cal di = / x? (x)dzx
Z = =1
j (E.6)
268 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

We note from (E.4) in Example 15 that the inside integral in the equations above,

: 2
ii t(cy + cot + c3t”)dt i 32°
=f
We use this result and perform the rest of the simple integrations to obtain the three
equations in ¢;, C2, and c3:

1 1 1 2 1
i cq t+=cot +¢327) dz = ik zdx = —| =0
—1 3 = 2 -1 (E 7)
1 1 : 2
=cqart 6 Cou? + 30 te = 2c, + 303 = (0)

i 1 3 sl 2

if (c1z Bed + c32° } dx =| ede = ==


Si 3 Whe Suey? 23 B8
1 1 g|' 2 2 C28)
= sae + 9 cox? + c3n- - = 9° Sy cy =13

/ (cz? +=e92* + e324 ) dx = / z*-edg=—| =0


ra 3 “ 43a E9
1 1 1 : 2 Cee
=a + — ct <r <c32° = Cit Cy = OU
3 12 5 wat 3 5
From (E.7), (E.8), and (E.9) we solve for c,, c2, and c3, to find that c; = 0, co = 3,
and cz = 0, which gives S3(a) = 3a as the approximate solution. But as we pointed
out in Example 15, this is also the exact solution. This is because of our choice of
the three linearly independent functions 1, x, x? for Sj (x), where the exact solution
happened to be 3z, only a constant multiple of one of them, namely x. We leave it as
an exercise [9(a, 7)] to illustrate this Galerkin method with the problem of Example
15 where we take the three linearly independent functions 1, sin z, and cos x instead
of 1, z, and z?.
Other approximate methods for solving Fredholm integral equations include that
of the least squares method, which in summary insists on the integral of the square
of the error,

b
if€7(a,¢1,C2,°**,cn)dx = minimum (5.96)
a

on the interval (a, b) being a minimum. We shall not discuss this or other approximate
methods here due to their somewhat lengthy computations; we refer the reader to their
more complete treatment in other texts that cover approximate methods of solving
integral equations.’

7See Green [1969, p-96], Baker and Miller [1977], Delves and Mohammed [1988].
EXERCISES 5.3 269

Exercises 5.3

1. Use the method of the Fredholm resolvent kernel (5.58) and (5.59) to solve the
following Fredholm equations of the second kind, then verify your answer.
1
(a) u(x) = 2? + | (a — 2t)u(t)dt
0
Hint: We have Cp = 1, Bo = K(a,t) = x — 2t, so start with C; from
(5.62), then B,(z, t) from (5.61), and we continue as in Example 11 to
obtain the resolvent kernel I(x, t; ) for the solution u(z) in (5.58).

(b) u(x) = e* — Hee*—'u(t)dt


:1
Coie Hh(Ane ey aleyd:
0
(d) u(x) =1+ a sin(x + t)u(t)dt.
0
2. (a) Use the properties of determinants to show that the double integral in (5.66)
vanishes. Hint: See that the first column is proportional to the second
column in the determinant of (5.66) after writing it as

ze! xe!
fre. = effi tie"

toe! toe!

(We may note that all the three columns in (5.66) are proportional to each
other, so are the three rows!)
(b) Solve the problem of Example 11, by using (5.64) and (5.65) for B,, (a, t)
and C’,, instead of (5.61) and (5.62), respectively.

3. Use the iterated kernels method to solve the integral equation

20
| eae ae® | sin(a — 2t)u(t)dt.
0

Hint: Note that the kernel K (x,t) = sin(x — 2t) is orthogonal to itself (see
Exercise 22, Section 4.1).

4. Solve the Fredholm equation

u(z) =2+ af sin(x + t)u(t)dt.


0

Hint: Use the Neumann series (5.81). (Also, you can use the result of problem
5 with very minor changes!)
270 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

. Use the iterated kernels-Neumann series method to solve the following integral
equation. Verify your answer.

u(x) = 3+ af sin(a + t)u(t)dt.


0
. Use the method of traces (Section 5.3.2) to find an estimate for \;, the smallest
eigenvalue of the kernels.
(a) K(x, t) = x???

(b) K(«,t) 24 @,OeSe


of the homogeneous Fredholm equation

Vili af K (a, t)u(t)dt

Hint: See Example 14.


In the following problems 7 and 8, where the collocation method is to be used,
you may choose your own convenient collocation points. (In problem 8, for
example, you may try the collocation points 2; = 0, 2 = , 23 = — and
Clr 3
w4 = 1G)

. (a) Use the collocation method to find an approximate solution for the equation
of Example 15
1
Ria) =e +f xtu(t)dt
=i
in terms of
(i) The three linearly independent functions ¢;(x) = 1, ¢o(x) = sina,
and $3(x) = cosa.
(ii) The eight linearly independent functions 1, sinz, cosz, sin 2z,
cos 22, sin 3z, cos 3a, and sin 4a.
handle the lengthy computations of solving the linear equations.
(b) Tabulate the two approximate solutions in part (a) and compare them with
the exact solution u(x) = 3x of Example 15.

. Use the collocation method to find an approximate solution for the equation of
Example 16,
1
ula) =e? -{ ze‘'u(t)dt
0
in terms of the linearly independent functions

(a) $i (2) = 1, b2(a) = 2, b3(2) = 2, a(x) = 2°


(b) $:(x) = sina, do(x%) = cosa
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 271

(c)) @1(z) = e7*, d2(@) = &


and compare the results of part (a), (b), and (c) with the approximate and
exact solutions in Example 16.

9. (a) Use the Galerkin method to find an approximate solution for the equation
of Example 1S,

u(z) = a+ ibxtu(t)dt
1
in terms of
(i) The three linearly independent functions ¢,(z) = 1, ¢2(x) = sina,
and $3(xz) = cosa.
(ii) The eight linearly independent functions 1, sinz, cosz, sin 2z,
cos 2a, sin 3x, cos 3x, and sin 4x. You may use w(x) = ¢;(2).
(b) Tabulate the two approximate solutions in part (a) and compare them with
the exact solution u(x) = 3a and the approximate solution obtained by
the collocation method in exercise 7(a,i,1i).
(c) Use the least squares criterion (5.96) to compare how good the approxi-
mations in exercises 7(a,i) and 9(a,i) are.
(d) Do part (c) for exercises 7(a,ii) and 9(a,ii) and show how they in turn
compare with 7(a,i) and 9(a,i), respectively.

10. Do Exercise 8 using the Galerkin method instead of the collocation method
and compare your results.

5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND

Towards the end of Section 5.2.1, and in relation to the Hilbert-Schmidt theorem, we
discussed then illustrated in Example 8 the difficulty of insuring the existence of the
solution u(x) to Fredholm integral equations of the first kind,
b
TAZ) =) K (a, t)u(t)dt (5.97)

and how the given function f (x) must be restricted to have such a solution. Moreover,
even when, perhaps on other grounds, we know that there is a solution, we lack the
usual iterative method to construct it. This is due to the absence of the solution u(z)
outside the integral of (5.97), which is in contrast to integral equations of the second
kind, where the iterative (or successive approximations) method plays an important
role, as we had discussed in Sections 5.3.2 and 3.1, respectively, for Fredholm and
Volterra equations of the second kind.
At the level of this book, the simplest statement on the existence of a unique
solution for the Fredholm integral equation of the first kind (5.97) is found in (the
following) Theorem 7, which is limited to a special class of symmetric kernels
272 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

(K(a,t) = K(t,x)) that we shall describe in the following simple Definition 1. This
Theorem 7 is a restricted version of Picard’s theorem. For the general theory, the
kernel K (az, t) can be complex-valued, the reason for using the complex conjugation
in the definition of the symmetric kernel as K (x,t) = K(t, x); itis dropped when we
deal with only real-valued kernels, and we write, K(z,t) = K(t,x) for symmetric
real kernels as we did in (5.27). For the definitions needed for Theorem 7, we shall
rely on the basic elements of Fourier series, that we have introduced and used for
the theory of homogeneous Fredholm integral equations with symmetric kernels in
Section 5.2.1. So, here we will limit ourselves to symmetric kernels, but we may
have the chance later (or in the exercises) to briefly discuss cases or examples of
non-symmetric kernels.

5.4.1. Fredholm Equations of the First Kind with Symmetric Kernels

In Example 8 of Section 5.2, and the first basic Theorem 5 for the existence of a
solution to Fredholm integral equation of the first kind with symmetric kernel (5.34),
we showed how conditions for such an existence are rather demanding on the given
function f(a) in (5.34). We now present another very basic theorem, which is aimed
at the existence of not necessarily continuous solutions to (5.34), namely, square
integrable solutions. Also the condition of this theorem guarantees a unique solution
to (5.34). For this theorem, we need to present a few definitions, which describe the
particular symmetric kernel that allows the existence of a unique solution to Fredholm
integral equation of the first kind (5.34). Such special symmetric kernels are called
closed symmetric kernels, which we shall describe in the following two definitions.
This will enable us to give a precise statement of the simplest possible theorem on
the existence of the solutions without the need for more abstract development that is
necessary for most of the other theorems. The theorem will be illustrated very clearly
in Example 18.
While the first part of this section deals with the rather demanding conditions
for the existence of the solution, the second part of the section deals with another
difficulty that such a solution may have. Briefly, Fredholm integral equations of the
first kind are termed ill-posed, a rather advanced subject which we shall attempt to
explain on the level of this book, and where we complement our discussion with a
number of examples for various applied problems.

Definition 1: A function f(t) is termed orthogonal to a symmetric kernel K (z, t)


on (a, b), if

/ SOUPS: (5.98)
We will need the following basic result, where it can be shown that “‘a square integrable
function f(x) on (a, b) is orthogonal to a symmetric kernel K (a, t) if and only if it
is orthogonal to all eigenfunctions {¢,, (a) } of the kernel as defined in (5.35),
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 273

Oe [ Ke. noa(0ee, L5Gined waeaes IA I (5.99)

Also we may repeat the definition of the null function n(x), which is the function that
has its (square) norm vanish on the indicated interval (a, b),

i.n*(x)dx = 0. (5.100)
Now we define the special class of symmetric kernels that would allow the simple
statement of Theorem 7 for the existence of the unique solution of Fredholm integral
equations of the first kind (5.97). This is the class of closed symmetric kernels.

Definition 2: Closed Symmetric Kernels


The symmetric kernel (K (x,t) = K(t, x)) that is orthogonal to no other function
but the null function n(z), is called closed. With the result following (5.98), this
definition says that, for a closed symmetric kernel, there is no function other than the
null function which can be orthogonal to all the eigenfunctions of such a kernel. This
statement is another definition for such a set of eigenfunctions to be called complete,
and we remind that we are using the square norm, as we did in (5.32) of Section
5.2.1. The following is a limited version of Picard’s theorem, which is stated without
proof, but will be clearly illustrated in Example 18.

Theorem 7: Existence of a Unique Solution to Fredholm Equation of the First


Kind — with Closed Symmetric Kernel
“The Fredholm integral equation of the first kind (5.97) with a closed symmetric
kernel has a unique (square integrable) solution if and only if the following series

epee (5.101)
n=1

converges, where {\,,} are the eigenvalues of the kernel A (a,t) (as indicated in
(5.99), and the a, are the Fourier coefficients of the given function f(x) on the
interval (a,b) in terms of the orthonormal eigenfunctions of the kernel as given in
(S:31),(.32) and (5-29),

(5.102)
b

an =ff(e)bn(@)de,
fo) a.0,(2)? (5.103)
Also, as it shall become clear from the illustration in Example 18, the important
condition of the convergence of the series in (5.101) is necessary for the class of
square integrable solutions u(x) of (5.97) to have the Fourier series representation

a= Sy Xnan One) (5.104)


274 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

in terms of the orthonormal eigenfunctions of the kernel of (5.97). If we compare


(5.104) and (5.103) for the Fourier coefficients b) = An@n and a, of u(x) and f(z),
respectively, we find the condition by, = An@n, which is what we used in (5.43) for
illustrating the theorem (Theorem 5) presented in Section 5.2 for the existence of the
solution to the Fredholm integral equation of the first kind, which was illustrated in
Example 8. Thus, while we have to know \,, and a, to check the condition (5.101)
for the existence of the solution u(x), the same product A,@n provides us with a
method of constructing such solution in a form of the Fourier series (5.104) for the
Fredholm integral equation ofthe first kind (5.97). This is a relief for having a method
of solving (5.97), and it is a main advantage, of having the complete orthonormal
set of eigenfunctions of the (special) closed symmetric kernel, for constructing such
solution (5.104). This is especially true when we know that, in general, integral
equations of the first kind are denied the well known simple iterative method used
for the equations of the second kind. Next, we will make some general comments
and give an illustration of the theorem in Example 18. As we remarked earlier,
having a closed symmetric kernel in (5.97) means that we can work with its complete
orthonormal set of eigenfunctions {¢,,(a)} on (a,b). With such completeness, we
feel at ease writing a Fourier series expansion in terms of such functions for any
square integrable function on the interval (a, b). In that vein of constructing the square
integrable solution (5.104) in terms of such Fourier series (with special condition on
its coefficients), we need the following Riesz-Fisher theorem, which we state without
a proof.

Riesz-Fisher Theorem: “If {u,(x)} is a given orthonormal set of functions that


are defined and integrable along with their square |u,,(x)|? on (a,b), and if {cp}
is a given sequence such that )>°~_, |cn|? converges. Then there exists a unique
function f(x), integrable together with its square |f(x)|? on (a,b) for which {c,}
are the Fourier coefficients of its Fourier series in terms of the complete set of
eigenfunctions {un(x)},

=> Cnn (a (5.105)

Cna
= f(z)Un(x (5.106)
and to which the Fourier series (5.105) converges in the mean, i.e.

N 2

lim
Noo a
- Ss Cr tln (©) dx = 0." (5.107)

A very relevant comment on the present discussion is how condition (5.101)


restricts the class of functions f(x) (for a given symmetric kernel) in (5.97) for this
equation of the first kind to have a solution. This is especially when we know that
the eigenvalues A, are increasing, as was illustrated in Example 7 with ,, = n?7°.
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 275

So, f(x) must have coefficients a, that are decaying fast enough to make the series
in (5.101) with its nth term \,,a,, converge. Such restriction should be borne in the
mind of anyone that wants to give a simple example of a Fredholm integral equation
of the first kind. This is so true, since for a casually given function f (x) in (5.97) the
solution u(x) may not exist! This will be illustrated in the following Example 18 for
the two simple functions used in Example 8, namely, f (x) = x and f(x) = $(a—2?)
on the interval (0, 1). We will show that, according to the condition (5.101), a solution
to (5.97) does not exist for the first case with the function f(z) = x, while it does
exist for the second case with the function $(r — x”). Moreover we can construct
this latter solution via its Fourier series as in (5.104).

Example 18 On the Existence of a Unique Solution to Fredholm Equation of the


First Kind
(a) To illustrate how difficult it is to satisfy the (necessary and sufficient) condition
(5.101) we may consider the simple example with f(z) = z,0<2 <1,

=
fiK (a, t)u(t)dt (£.1)
0
where K (x, t) is a symmetric kernel which we used in Example 8,
Sh a Nada nal ee aS
K(e,t) = {t{i-z), t<2<1 ee
and where we can secure from that example its orthonormal eigenfunctions
{bx(z)}9_, = {V2sin krz} and note its (clearly increasing!) eigenvalues {,}?
={k?x?}% |. We also note that this set of eigenfunctions is complete on the interval
(0, 1) of the closed (symmetric) kernel K (a, t) of (E.2).
We will show here that a solution to (E.1) does not exist. This is so, since as we
force it on (E.1), we may write the Fourier series for f(z) = x on (0, 1) in terms of
the above eigenfunctions, according to (5.104), (5.103), as
00
dines SD a,V2sin kre, OK acl (E.3)
k=1
i V2
ak =) aV2sinknadx = (—1)**! (E.4)
0 kn
where the above integral for the Fourier coefficients a, is done with one simple
integration by parts;



sinkra, Oar uke (E.5)

We note here that (E.3) to (E.5) are all fine since the eigenfunctions are complete® on
(0,1), and f(x) = x is square integrable on (0, 1), i-e., if x*dx = $,80 this function

8See (4.47), (4.48), and (4.52) and the discussion immediately following (4.52).
276 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

is entitled to its Fourier sine series representation in (E.5), which does converge in
the mean to f(x) = 2 on (0,1). The problem arises as soon as we look at (E.1),
where we see clearly that we are forcing a solution u(z) for it, which does not exist,
as the violation of condition (5.101) will indicate.
For (5.101), we have now ax = eee and \, = k?1?, so

Sswaa? => k=1


=
ker oi Gans
Dehae
Hal”
2
mn? k? (E.6)

which is a divergent series. But since (5.101) is a necessary and sufficient condition
for the existence of the solution to (5.97), we easily conclude the non-existence of
such solution to (E.1). Another way of showing this negative result for (E.1) is to
force a Fourier series representation for the (assumed) solution u(x), then find that
(E.1) implies that such series diverges, which we leave for an exercise (see Exercise
1).
From this illustration for Theorem 7, we should learn that before embarking on
solving a Fredholm integral equation of the first kind we must first have the eigen-
values {A,,} of its symmetric kernel, then we proceed to find the Fourier coefficients
{a,,} of the Fourier series expansion of the given function f(a) in terms of the or-
thonormal eigenfunctions of the kernel. Then it is a matter of the condition on the
product

1 1 1
PRE a@ (=) i.e., of the order me ki 5 (E.7)

for the series (5.101) to converge. In the above example we can see that it is not the
case since

|An@n| = V2an = O(n) (E.8)


where in this case k = —1, and its corresponding series (5.101) clearly diverges.
(b) In the following we will consider the Fredholm integral equation of the first
kind (5.97) for the above problem (E.1) with the same symmetric kernel (E.2) of
(E.1), except that we have here f(x) = $(x — 2”).

(n= 2? = [Ke ound (E.9)

As we did in Example 8, we first write the Fourier sine series for f(x) = $(x — x”),
on (0, 1),

1 1 2/2
a er ee ee Ota eral (£.10)

where the Fourier coefficients are easily computed, using integration by parts, from
its Fourier coefficients integral as given in (5.103) with dn (x) = /2sinnra,
5.4. FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 277

HAD ER ib 1
sparen
es [ 3 (2 — 2”) V2sin(2n + 1)radz = (Qn
+1 2v2
azn = 0
(£.11)
Recalling that the eigenvalues are ,, = n?72?, we have for condition (5.101)

SA ene 2/2
|eeenen ree ons NG 2 2 = 1
bs
dont on+1| 73 (2n i D3 TT (2n ae 1) O @ (£12)

and the series in (5.101) converges since k = 1 > 5 in (E.7). Indeed, the sought
solution to (E.9) is u(x) = 1,0 < x < 1, as can be verified after simple integration
(see Exercise 2(a)). As a matter of fact, and as we did for Example 8, a practical way
of making an example, for a Fredholm integral equation of the first kind that does
have a solution, is to plug in a known function as a solution u(z) inside the integral,
and find the result of the integral as f(a) to be used for the example as a sure thing
to guarantee the solution to the problem. On the other hand, once we have ,, for
the kernel and a,, for f(x) of the equation of the first kind (5.97), we first use An@n
in (5.101) to see whether a solution does exist, and if so we use the same A,,a,, in
(5.104) to construct that solution as a Fourier series in terms of the eigenfunctions
{¢n(x)} of the kernel with coefficients bp) = anAn.

5.4.2 . |Ill-Posed Problems and the Fredholm Equation of the First Kind

As we remarked in Section 1.5, in the practical applications we often resort to


approximate or numerical methods for solving linear systems, and in particular
integral equations. For such systems, it is desirable that a small error in the given
data of the system causes a correspondingly small error in the output as the desired
solution. In other words we would like to see that the solution (output) depends in
a continuous way on the input (given data), and such a system is termed a stable
system. So it is of utmost importance for a stated problem to represent a stable
system, especially when very complex (expensive) computations are to be involved.
In most introductory treatments with some theoretical touch, we usually emphasize
the fact that we should insure the existence of the solution before we go after it,
moreover such a solution better be unique for us to focus on it as the only useful
one. These two concepts, of existence and uniqueness of the solution, were the most
important to guarantee the classical solution, via very well known theorems, where
powerful analytical methods are used. With the advent of contemporary complex
problems and the urgency for their solutions, powerful approximate and numerical
methods had to be utilized, of course, with an awareness for the inevitable practical
error in the data. As a consequence, the above mentioned stability condition is now
added to the previous existence and uniqueness of the solution. Indeed such three
conditions were postulated by Hadamard for initial and boundary value problems.
The stability condition is motivated by the fact that in a physical system, the input is
278 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

a measured data, and we want to make sure that a small inaccuracy in this data (error
in the input) will cause only a small error in the output as the solution of the problem.
A problem stated with the assurance of the existence, uniqueness and stability
of its solution is termed a well-posed problem, otherwise it is ill-posed. A typical
example of a well-posed problem is that of the potential distribution u in a disc due
to given input potential w = f on its rim that we presented in (1.24), where we
can prove the existence and uniqueness of the solution (potential) wu in the interior
of the disc. For now our physical intuition suggests that such solution wu depends
continuously on the data f at the boundary, i.e., it is a stable problem. This example
is to be differentiated from the one that we shall present in Example 19, which is due
to Hadamard, where we give the potential as well as its gradient on the boundary,
and which illustrates the earliest analytical example of an ill-posed problem.
Another example is the solution of the temperature distribution in a bar with
given initial temperature (data), and boundary conditions. Again it can be proved
that a solution in the interior (temperature u(z,t) for, t > 0; 0 < x < J), exists,
and it is unique. Also, on physical grounds we can see that a small change in the
initial temperature causes only a small change in the temperature in the interior.
Definitions and theorems are introduced to prove these results but they are beyond
the scope of this book, the interested reader may consult the available references on
the subject? For us, we may look at the input-output problem symbolically as with
operator notation, without going in depth to the theorems in the above references.
However, we may give a descriptive, though not so precise, notion of some of their
results, which will be followed by a specific clear illustration in Example 20, and a
discussion of the ill-posedness of Fredholm integral equations of the first kind. For
example, consider the operator equation,

Ag = f (5.108)
where A is an operator, say the integral operator in the Fredholm equation of the first
kind,
b
re / cameo (5.109)
mapping the desired solution ¢ as an element of (an acceptable) space of functions
X into f, as an element of another space Y of the same type functions,

Ae OGY: (5.110)
The idea of well-posedness will depend on the existence of an inverse operator A~!
that will return feY to deX,
Aho, (5.111)
and thus obtaining the solution of the integral equation (5.109).
For a small change in the input f to cause only a small change in the output
(solution) ¢, or in other words, the continuous dependence of ¢ on the data f, means

°See Kress [1989] and Weinberger [1965].


5.4. FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 279

that this inverse operator should be continuous. Unfortunately, in general, and for
a large class of such operators, this may not be the case. This would mean that,
according to (5.111), a small change in f may cause a very large change in ¢, and the
problem becomes ill-posed. Such a situation is familiar to us, where we may have
the linear system of n equations in the n unknowns of the (column) matrix U,

AU =F (5.112)
where A is the known n by n matrix of the coefficients, and F' is the known column
matrix. However, when it comes to solving for U in (5.112) A may not have an
inverse, whose simple check is when its determinant |A| vanishes. Another situation
is that of the heat equation (see Exercise 2) which is stable as the heat is a diffusive
process, and a small change in the initial temperature will not cause big a change
in the (diffusing) output, which can be described as “forgetting its past". However,
the inverse heat problem of knowing the temperature now, and we are to find the
initial temperature, which is called the inverse, or backward heat equation is an ill-
posed problem. In physical terms this means that the heat diffusion is an irreversible
physical process. In the following Example 19 we will use a very well known
example due to Hadamard to illustrate what we mean by an ill-posed problem. It
will be followed by a discussion of the ill-posedness of Fredholm integral equations
of the first kind.

Example 19 An Ill-Posed Problem-Hadamard’s Example!®


Here we will illustrate that the following potential distribution problem is ill-posed
in the sense of Hadamard. Consider the boundary value problem for the potential
distribution u(z,y) in two dimensional space (upper half plane), which is free of
charge, it is governed by the Laplace equation in the interior,

2 O*u
an rs Se ray =ae Ou
NCES =r2 <0 60, yi>.0: (E.1)

The given boundary conditions are a grounded lower edge y = 0,

wa :0),=-0, —0co <£< 00 (E.2)

and where the gradient “ of the potential is given at the same edge y = 0,

Ou(z, 0)
= f(a), =O CO (E.3)
Oy
where f(z) is a continuous function. Hadamard’s example is for the choice of the
data f(a) as the particular sequence

fale Suu ’ —0o0 < Z <0. (E.4)


n

!0Optional
280 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

We can easily show that the sequence

1 ;
Un(2,y) = —> Sinnaz sinh ny (E.5)
n
is a solution to the boundary value problem (E. 1), (E.2) and (E.3) with f(x) = f(x)
as in (E.4). Also the input f,(2) = mae of (E.4) is convergent to zero as n — 00,
i.e., for large n there could be only small changes in the input data of (E.4). However,
the solution (output) in (E.5) (with its factor sinh ny = ae y > 0) will sustain
a very large change due to the eӴ term for the same large n. Hence the solution
Un(2, y) in (E.5) to the boundary value problem (E.1)—(E.3) and (E.4) is not stable,
and the problem is ill-posed. To show that the inverse, or backward heat equation is
also ill-posed, we refer the reader to Kress (1989).
The treatments and methods for a stable approximate solution of ill-posed prob-
lems are called regularization methods. Briefly, and to use operator notation, the
operator A of the ill-posed problem Ag¢ = f is replaced by one (or a family) of
a bounded operator R, such that for the perturbed data f? = f + Of of f witha
knownerror | f* — f| < 6, the (resulting perturbed) solution ¢*, corresponding to this
perturbed data, is a reasonable approximation of the actual solution ¢ i.e. ¢° depends
continuously on f*. A detailed treatment with powerful theorems, that describe such
regularization methods, is found in Kress (1990).

lll-Posedness of Fredholm Integral Equations of the First Kind


What concerns us in this section is that Fredholm integral equations of the first
kind can easily show the signs of ill-posedness. In Theorem 7, as a special case of
Picard’s theorem, we established a unique solution for Fredholm equations of the
first kind,
b

Hone i MOUs Uae mee (5.970)


for the closed symmetric kernel K (x, t) as
CoO

UD) =o Gs Ane (@) (5.104)


n=1

where A, and ¢, are the eigenvalues and eigenfunctions, respectively, of the sym-
metric kernel, and a,, is the Fourier coefficients of f(a) in terms of such a (complete)
set of orthonormal eigenfunctions,
b

6,62 ifFo) dalahdes (5.102)

f(x) = oe AnGn(2). (5.103)

The necessary and sufficient condition of Theorem 7 for the existence of such a
solution is that the series }>”°_, |An@n|? converges. This sounds very fine as far
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 281

as the two desired qualities of existence and uniqueness of the solution to our
problem of the Fredholm equation of the first kind. What remains, for the present
discussion, is the third quality of the stability of the solution for the problem to be the
desired and acceptable well-posed problem. Unfortunately, from the solution u(«)
in (5.104) we can show that the problem is not stable. This is the case, since if we
perturb the given data f(a) by a small df (x), the solution u(x) in its Fourier series
representation in (5.104) will not be perturbed by what we wish, a Fourier series
representation of 6 f(z) (or a constant multiple of it), but some magnification, i.e., a
much larger corresponding change du in u(x). This, as we shall see shortly, is due to
the eigenvalues A,, factor in (5.104), where they are increasing. If we write (5.104),
using a, as in (5.102), we have

ce b
zs Yo ndn(a) | f(y) on(y)dy. (5.113)

Now if we perturb f(x) by 6f(x), we substitute f(x) + 6f(z) inside the integral of
(5.113) to have u(x) + du(z) on the left hand side,

a) ule vdn(2) / fF) + of@endy


b

vd) ‘\FO venaney (5.114)


LSS owt) / OF OROVay,
6 = SS,

lee) b

1) =o rntndn(z), en= f Sf W)PW)dy (5.115)


where €,, is the above Fourier coefficient of the small perturbation 6 f(x), and where
we recognize the first series in (5.114) to represent u(x) as in (5.104) to cancel u(z)
from both sides of the equation (5.114). In (5.115) we see that while €,, is the Fourier
coefficient of the small perturbation of the input 6 f (x), the Fourier coefficients of the
corresponding perturbation for the output du(z), is magnified by the multiplicative
increasing factor A, which clearly will cause du(x) to be a larger change compared
to the given small change 6 f(x). Hence, there seems to be no continuous dependence
of the solution u(z) on the given data f(z) in the Fredholm equation of the first kind
(5.97). This possible ill-posedness of the Fredholm equation of the first kind adds to
other difficulties for the existence of solutions for problems with more general kernel
than the above symmetric one that we limited ourselves to in all our discussions up
to now. Then, it is no wonder that Fredholm integral equations of the first kind are in
the forefront with regard to the research priority in integral equations.
282 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Exercises 5.4

1. Consider the Fredholm integral equation of the first kind

ts i" K (0, tyu(t)dt (E.1)


0

with a symmetric kernel as was considered in Example 18. Follow steps (i)—(iii)
to show, as in Example 18, that a solution does not exist for this equation.

(i) Assume a Fourier series representation for the (not so sure!) solution u(z)
in terms of the eigenfunctions of the kernel,

G2 SSbp V2 sin krax. (E.2)


k=

(ii) Substitute this u(a) in the integral of (E.1), interchange the summation
with integration as though the quality of the convergence of the series
(E.2) allows that.
Hint: For the integration inside the series involving the kernel, use the
fact that \/2 sin ka are the eigenfunctions of the kernel as described in
(5.35)ion (5:29).
(iii) Write a similar Fourier series for f(x) = x on (0,1) and use in (E. 1), then
compare coefficients, where you find that bk = m/2(—1)**1k which
makes the (assumed) Fourier series for the solution in (E.2) divergent.
Thus, there exists no solution to (E.1).

2. Consider the Fredholm integral equation of the first kind (E.1) of Example 8
in Section 5.2. (This is the same problem as in Exercise 4 of Section 5.2.)

(a) Verify that u(z) = 1,0 < a < 1 isa solution to this problem. Hint:
Watch for the two branches of the kernel K(x, t); write the integral on
the two subintervals (0, 7) and (a, 1).
(b) Write the Fourier series for the solution u(x) = 1,0 < x < 1 (of part (a))
and the given function f(x) = $(a — x”), 0 < x < 1 in terms of the
eigenfunctions of the kernel, to verify by = Axa, in (5.43) (and (5.43a)).
(c) Verify that for the function f(x) = $(a — a”),0 < x < 1in(E.10), the
Hilbert-Schmidt theorem is satisfied.
Hint: Note that f(z) = $(x — x?) is continuous on (0,1), and that
the clearly symmetric kernel K (a, t) in (E.2) is square integrable on the
square {xe(0, 1), te(0, 1)}. (See the hint to part (a).)

3: (a) Show that a Fredholm integral equation of the first kind,


b
deen / K (a,t)u(t)dt (E.1)
EXERCISES 5.4 283

with degenerate kernel,

K(z,t)= yal )by (t) (E.2)

does not have a solution unless the given function f(z) is restricted to a
linear combination of the functions a,(z),

f(z) = >/ cea (a)


(yaa

(b) Consider a Fredholm integral equation of the first kind,

= [ [email protected] (E.1)

with continuous kernel A(x,t) and continuous f(a). Would we neces-


sarily search for a continuous solution u(t)?
(c) Assume that f(a) and the solution u(x) of (E.1) have each a Fourier series
expansion in terms of {¢,(z)}, the set of eigenfunctions of the kernel
K(a,t) in (E.1). What restriction on the Fourier coefficients of f(z),
and hence f(z), would that entail?
4. Consider the Fredholm integral equation of the first kind,
b
= / K (a, t)u(t)dt (E.1)
a

where K (, t) is continuous, real and symmetric.


Assume that K (zr, t) has only finite number of eigenfunctions, {¢1(zx), ¢2(z),
-+,dn(x)}, show that the equation (E.1) then becomes solvable only for a
restricted class of functions regardless of u(x). (See Exercise 3, which is very
similar).
Hint: Substitute K (x,t) = )\"_, ci(t)@;(t) in (E.1) and integrate with respect
to t.

5. (a) Illustrate problem 4 for the example,

20
(aay [ sin(a + t)u(t)dt (E£.1)

according to the following instructions in (1) - (ii).


(i) Solve for the eigenvalues and eigenfunctions of K (x,t) = sin(z +t),
and
(ii) Show that any functions f(a) of the form in (E.1) is restricted to the
linear combination of the (two) eigenfunctions found in part (a).
284 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

(b) Show that the solution to (E.1) is also not unique.


Hint: If you add to the solution a function g(x), which is orthogonal to
the eigenfunctions ¢; (x) and ¢2(a) of (E.1), it will still be a solution.
(c) In general, when can the solution of a Fredholm integral equation of the
first kind with symmetric kernel, be unique?

6. (a) Consider the integral equation of the first kind in K (a, x),

‘i K (a,x) sinwrdr = F(w) (£.1)


0

and let K (a, x) be square integrable on (0, 7) for particular values of a.


Assume the very well known Riemann-Lebesgue lemma

lim K(a, x) sinwxdz = 0 (E.2)


Ww CO 0

and use it to show that this result (E.2) illustrates the 211 — posedness of
the equation of the first kind (E. 1).
Hint: See that for large values of w, F'(w) and so is its change 6 F'(w) will
be small, however the solution K (a, xz) maybe piecewise continuous in
x with large jump discontinuities.
(b) Consider the Laplace transform, of the piecewise continuous and of expo-
nential order f(t), (f(t) = o(e%*)),

(C= i ent (dt es a. (E.3)

Show that lim,_,.. F'(s) = 0, and use this result to comment about the
well-posedness of the singular Fredholm equation of the first kind (E.3)
in f(t). Hint: See part (a).

7. Show that for the integral transform f(x) of u(t) (or the integral equation of
the first kind in u(t)), with continuous kernel K (z, t),
b
f(z) =i K (a, t)u(t)dt. (E.1)

(a) If u(t) is piecewise continuous, then f(z) is continuous.


(b) Based on the result in (a), can we guess at only a continuous solution u(t)
for the problem of the first kind (E.1), when f(a) is continuous?
(c) If the solution u(t) in (E.1) is considered as the output corresponding to
the input (or data) f(x) of the system represented by the equation (E. 1),
what is the consequence of the result in part (b) on the well-posedness of
the solution for such Fredholm integral equation of the first kind?
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 285

8. Consider the Green’s function of the loaded string G(z, t) as in the hanging
chain problem (2.28). On physical grounds show that

l
[ GG,
tf (tdi = 0
0

has only the trivial solution f(t) = 0.

5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS

In the preceding section we illustrated the many different exact and approximate
methods for solving integral equations using special examples that needed moderate
amounts of work. For more general cases we sometimes resorted to approximate
methods where one integral equation is approximated by another which can be
handled by the usual methods illustrated. When both approaches do not apply,
we may have to resort to the numerical method of approximating the integral by a
finite sum, and hence the integral equation is approximated by a set of simultaneous
equations whose number is determined by the number of values or samples of the
approximate solution u(xz;) on the desired interval.
In this section we will first remind of the most basic numerical integration formulas
such as the trapezoidal and Simpson’s rule that we have already discussed in Section
1.5 in (1.141) and (1.144), respectively. Then we will prepare for the numerical
approximation setting of Fredholm integral equations of the second kind, and where
both the trapezoidal rule and Simpson’s rule will be used for approximating the
integration term in the equation. Such an approximation setting becomes a (square)
set of n + 1 linear equations in the n + 1 (approximate) samples of the solution
u(x;),7 = 0,1,2,---,n. This preparation will be concluded by an example where
the approximate numerical values are compared with the exact solution of a simple
Fredholm integral equation (see Example 20 and Exercises 1,2 and 3.) In this section
we will concentrate on using only the very basic integration formulas such as the
trapezoidal rule and the Simpson’s rule. As we emphasized in Section 1.5, the
higher quadrature rules, and their use in approximating the integral, for the numerical
solutions of Fredholm integral equations, is relegated to Section 7.3 of Chapter 7.
The treatment there is supported with the necessary tables, and a good number of
very detailed examples and exercises.
We will also have a chance to make some comments concerning the numerical
solution of a particular class of singular Fredholm integral equations. These are the
ones characterized by their infinite limit (or limits) of integration.
286 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

5.5.1 Numerical Approximation Setting of Fredholm Integral


Equations

After introducing the basic numerical integration rules in Section 1.5, we are now in
a position to discuss the numerical setting of Fredholm integral equations. We will
first consider the Fredholm equation of the second kind.
Consider the Fredholm integral equation of the second kind, as we used it in
(1.148) in Section 1.5.1,

(a) = fe) ikK (a, t)u(t)dt, (5.116), (1.148)

where we approximated its integral by a sum as in (1.149),

Sr Sy ae AG (1.149)
g=0

As was indicated in Section 1.5, we usually use equal increment At instead of the
above more general A;t. Here j as the index in A;t may indicate a weight D;
assigned to the ordinates K (x, t;)u(t;) (of the integrand) by the particular numerical
integration rule that we discussed and illustrated for the trapezoidal rule (1.141) and
Simpson’s rule (1.144).
With the approximation to the integral in (1.149), we have the approximate result
to the Fredholm integral equation (1.148)

u(x) © f(z) + S >K(a, t;)u(t;) gt. (5.117)


j=0

Now, it becomes clear that if we are to solve for approximate sample values u(z;)
of the solution u(x), we may require (5.117) to be an equality at the n + 1 locations
ri, 1 = 0,1,2,---,n of the (approximate) sample values u(zx;)(= u(t;)), i =
OL ese.

u(zi) = f(wi) + >) K(ai,t;)u(ts)Ajt, i= 0,1,2,3,--+,n. (5.118)


y=0

With such “forcing" of the approximation (5.117) to the equality (5.118), it should be
clear that the {u(x;)} in (5.117) are only approximations to the solution u(«) of the
integral equation (5.116) at {2;}, and they should really be designated differently. In
(5.118) we see that the (linear) Fredholm integral equation (5.116) is approximated
by a system of n + 1 linear equations in the (approximate) samples of its solution
uj = u(x;), 7 = 0,1,2,3,---,n. This should definitely remind us ofa matrix equa-
tion, whereby we can rely on our knowledge of solving systems of linear algebraic
equations with the help of matrix analysis, and more importantly our dependence on
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 287

its theory for the existence of such sought solution. Indeed, the strong relation be-
tween matrix theory and the theory of linear Fredholm integral equations goes a long
way to Fredholm’s original work on linear integral equations, as it became abundantly
clear in the first few sections of this Chapter, where such theory is developed. If we
use the notation u; = u(2z;), fi = f(zi), Ki; = K(ai,t;), where clearly U = [ui],
F = [2;] are column matrices while K = [K;;] is ann + 1 by n + 1 square matrix,
we can rewrite (5.118) as a matrix equation,

U =F + DKU. (5.119)
where D = [Dj6,;] is a diagonal matrix of order n + 1, and 6;; is the Kronecker
delta. So in matrix notation we are after the unknown column matrix U,

LUGS DU =

[I -DK]U =F (5.120)
where I = [6,;] is the unit (square) matrix of order n + 1. If the inverse [I — DK]~!
of the matrix [J — DK] on the left of (5.120) exists, we have

US DikVE (51121)

as the solution of the approximate sample values u(z;), 1 = 0,1,2,3,---,n of the


Fredholm integral equation (5.116). From matrix theory we know that the inverse of
a square matrix A exists if its determinant |A| does not vanish. So a unique solution
to our system of equations in (5.120) exists if |J — DK| # 0. On the other hand if
|I — DK| = 0, the system in (5.120) has infinite solutions or no solutions. To be
more explicit, we will attempt in the following illustrations to set up the numerical
approximation of the Fredholm equation of the second kind, the Volterra equation of
the second kind case was covered in Section 3.3. For the convenience of the reader
we have included in Section 1.5.4 a brief review of Cramer’s rule to be used for
solving the above system of equations in (5.120).

Nonhomogeneous Fredholm Equations of the Second Kind


Let us consider again the Fredholm equation of the second kind

b
(a f(a) +f K (a, t)u(t)dt. (5.116)

We subdivide the interval (a,b) into n equal increments At = (b — a)/n and we


call tp = a, t; = a+JjAt = to + jAt; since we will be using either ¢ or x as our
variable, we will call zo = to = a, tn = tn = 5, and x; = Xp + 7At (or in short
x; = t;). We will refer to the known function values at x; as f(z;) = fj, the value
of the kernel K (a, t) at (x;,t;) as K(2;,t;) = Ki, and the (approximate!) values
of the unknown function u(x) at x; or t; as u(aj) = u; or u(tj) = Uj.
288 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

Numerical Integration with the Trapezoidal Rule


So if we use the trapezoidal rule (1.141) to approximate the integral of (5.116),
we have

} 1
(23a (a) +/ K (a, t)u(t)dt = f(x) + At 5K (2,to)u(to)

4 Kiaetyults) oe We te teen ae 5K (2, tn)ultn)


(5.122)
or

u(x) & f(x) +At [5K(esto) + K(a,t)u1


} | (6:123)
1
+-+++ K(x, tp—1)Un—1 + 5K (a,ta)
tin

where the solutions of (5.123) are approximate solutions of (5.116) since there is an
error involved in replacing the integral in (5.116) by the n + 1 sum of the trapezoidal
rule. With this note, we shall from now on use the equal = sign instead of the
approximate & sign in (5.123).
If we consider n + 1 values of u; = u(z;) = u(t;), 7 = 0,1,2,3,---,n, then
(5.123) becomes

1 il
i= fat At 3 Kioto apes PC: Gonos ayeee ae = a Kintn ;

i=0,1,2,---,n (5.124)

which are n + 1 equations in u;, the approximate solution to u(x) atz = x; = a+iAt,
(el
UR Oc
If we transform all the terms involving the solution wu; to the left side of (5.124)
leaving only the nonhomogeneous part f; on the right side, then write all the n + 1
equations for u;,7 = 0,1, 2,---,n explicitly, we have the following n + 1 system of
equations in ug, U1,°-*, Un to be solved:

At
1— “7 Koo Wh) = AtKoiu4 —AtKo2u2 Se AN Gree ees)

JX}:
— = Ko,ntn — fo
At
— =a Fito AF (1 = Atky1)uy —AtKi2u2 pet SG hare Atom aye

At
Sta Ys = fi
Z
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 289

At
— a Kn-1,0U0 — Athy sar) =—Athp21,2u0 +>
+(1 — ISG KGcats, ey es — SEK i nUn = Jat

Fae ato POLI ait oi ALK 2s — 0 — ALI


yan Uno

aP (1= 3 Kon) UO RSS


(5.125)
which can be written in a matrix form as

[l-DrkK]U =F (5.126)

where I = [6;;], the identity matrix, D7 = [D,;6,;], the diagonal matrix representing
the weights of the quadrature rule used, which is here the trapezoidal rule, as they
1 il
appear in (1.141) (with Do = ght, Dig TANte Dota a) Ao). K is
the matrix for the kernel K = [.;;], thus the matrix of the coefficients of the linear
system in (5.125) or (5.126) is

A= tel p=

At
l= o*Koo —AtKo1 ——y Kon
At
ee. 1—Atky, = MGs
7 Z

At At
— a n-1,0 me (LS Athy an) ——y Kn-10
At
ere AtKn 1— Kan
(5.127)
U is the matrix of the solutions,

uo
U1
= (5.128)

Un

and F is the matrix of the nonhomogeneous part,

fo
fi
= ; (5.129)

iA
290 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

So now we may summarize that in approximating the integral in the (linear) Fredholm
integral equation by the n + 1 terms of the trapezoidal rule, we have reduced the
integral equation to a set of n + 1 (linear) equations (5.125) in uo, U1, ---, Un, Or to
the matrix equation (5.126) to be solved for the unknown matrix U whose elements
Ug, U1,°**,Un are the n + 1 approximate samples of the solution to the integral
equation (5.116) (or (1.148)). As we mentioned earlier, an obvious result from the
theory of linear systems of equations regarding the solution of the matrix equation
(5.126) is that there is a unique solution U to (5.126) when |A| = |J — DrK/, the
determinant of the coefficients matrix J — DK, does not vanish, and that (5.126)
has infinite solutions or no solution when the determinant |J — DrK| vanishes. To
this end, then, it is a matter of how efficient we are in solving matrix equations and
how prepared in choosing a more suitable method of numerical integration instead
of the trapezoidal rule. Since this book assumes preparation only in elementary
calculus and differential equations, we will not attempt td seek efficiency in our
present illustrations, as our main purpose here is to introduce the subject in the
clearest way possible. It is left to the readers to choose their own method of solving
the resulting system of linear equations (5.125). This, however, does not prevent
us from noting some special features, such as the symmetry of the kernel, which
will simplify the computations. For our illustrations, and for the purpose of a more
self-contained treatment, we felt it helpful to have a brief presentation in Section
1.5.4 of Cramer’s rule for solving system of linear equations. Of course, one may
consult other efficient methods, for example, the Gauss elimination method. The
illustration of the numerical approximation of the Fredholm integral equation (5.116)
when Simpson’s rule (1.144) is used for approximating its integral, (and where,
m is an even number) is left for an exercise (see Exercise 5.). In Section 7.3, of
the (optional) Chapter 7, we will use higher quadrature rules for approximating the
integral in (5.116), the trapezoidal rule and the Simpson’s rule, used here, are only
two special cases of such rules.

Example 20
Use the trapezoidal rule with n = 2 to set up the approximate numerical repre-
sentation of a 3 x 3 system of linear equations in (the approximate values) u(z;),
2 = 0, 1, 2 of the following Fredholm integral equation,

u(x) = sing + fo — x cos zt)u(t)dt. (£.1)


0
With n = 2 we have At = (1 — 0)/2 = 1/2, so #; = 1At = (1/2)3 = a;. If
we use the trapezoidal rule for the integral in (E.1) (with the weights D; of (5.118)
corresponding to the trapezoidal rule as in (1.141)) we have

l/l 1
Ui eels: (5Kou + Kua + 5Kaus) Bae
Thaiaw (E.2)

or in matrix form.
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 291

1
1— gio —5Ko1 — 7 kor uo sin 0

f
ake
4 10 il
aS 91 fu
1
sone
gine U4
= a
sin 5)
1
(
E.3 )

1 1 1
— 720 —5 Kai 1- qh2 U2 sin 1

Now if we substitute for f; = f(z;) = sin(¢/2) and K,; = K(a;,t;) = 1-


(t/2) cos(ij /4) in (E.3), we obtain

0.754" “0.5 © =0:25 Uo 0


—0.125 0.741 —0.140 | | «, | =| 0.479 (E.4)
0 0.061 0.885 U2 0.842
as a simple matrix equation AU = F to be solved for the approximate value ug,
u; and ug of the integral equation (E.1). It should be on our mind to check that
the determinant |A| does not vanish for the system (E.4) to have a unique solution.
We shall leave (the rather lengthy!) details of finding the final numerical solution
of (E.4) to Exercise 2. The result of such approximate values are up = 1.013,
u, = 1.009, u2 = 1.021, which compare very well with the exact solution u(x) = 1.
Also they compare well with the approximate values of Example 6 in Section 5.1
as shown in Table 5.1, wo = 1.003, u; = 1.002, w2 = 1.009. We shall return to
the numerical methods of solving Fredholm integral equations in Section 7.3 of the
(optional) Chapter 7 where the more efficient Gauss quadrature rules are used.
In the next section we will discuss and illustrate the numerical approximate solution
of the very important case of the homogeneous Fredholm integral equations. We
should remind here of the special feature of these equations, where they are associated
with an eigenvalue problem, as we had discussed in Section 5.1.2.

5.5.2 Homogeneous Fredholm Equations

In Section 5.5.1, we considered the nonhomogeneous Fredholm integral equation of


the second kind
b
u(x) = f(x) + / K (a, t)u(t)dt (5.116)

then used the trapezoidal rule for approximately the integral that resulted in a set of
n + 1 nonhomogeneous algebraic equations in n + 1 unknowns {u;}7_9, and which
we wrote in the following matrix form (5.126) as follows from (5.125)

[I —DrK]U =F (5.126)
where I, Dr, K and F are clearly defined after (5.126) as in (5.127)-(5.129).
In this section, we consider the numerical method of solving a homogeneous
Fredholm equation

wal af Kte, Qua (5.130)


292 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

which can be developed in the same way as we did for the nonhomogeneous Fredholm
integral equation (5.116). We will again use the trapezoidal rule, with n subintervals
to approximate the integral above, and reduce (5.130) to n + 1 linear homogeneous
equations, in the n + 1 (approximate) unknowns u;, 7 = 0,1,---,n.

1 1
uj = AAt 9 Kioto Git cee e yh ed a Kintn ;

0p 2 een: (5.131)
Here it looks that such numerical approximation setting, as a system of homogeneous
linear equations for the homogeneous Fredholm integral equation (5.130), should
follow as a special case of (5.116) with f(x) = 0. However, any discussion of the
results will need what might be new concepts of the eigenvalues and eigenfunctions,
which we have already discussed in detail in Section 5.1.2, (and earlier at the end of
Section 4.1.3). So, attention should be made to the parameter A of the homogeneous
integral equation (5.130) and its numerical approximation (5.131). In summary, the
values of this A in (5.130) (or (5.131)) that results in nontrivial solutions for these
equations are called the eigenvalues, while the corresponding (nontrivial) solutions
are called the eigenfunctions.
If we bring all the terms to the left side of (5.131) and write the n + 1 homogeneous
equations for 7= 0,1, 2,---,n, we have

AAt
(1rs S Koo)uo AAtKoi uy = AAtKo2ue2 Bs he Oa oe = (0

At
—AZ iota ae (1 = AAtK11)uy — AAtKyqug -— +--+: -— Siig =

~~
At Knotlo = AAtK yi uy or AAtK yn2uU2 SP OOO (1= <i Kn Un = 0.

(5.132)
There is one simplification that can be attained by letting \ = 1/p and hence p will
appear only in one term of each equation instead of appearing in every term; that is,
(5.132) reduces to

At At
(u= 5 Keo uo — AtKoiu1 — AtKo2u2 —--- — “9 Ko,ntin =0
t At
— 3 Fioto ta (uu = Atky,)uy — AtKy2u2 —-+-— I lin = (0
2 (5.183)


At7 Knouo = AtKni U4 ap O28 o SE (ua
At
= Kun Un =
.
0)

which is in the same form now as (5.125) except for that f; = 0,i = 0,1, 2,---,n,
and the | in parentheses on the diagonal (of (5.125) is replaced by ju. So if we write
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 293

this set of n + 1 homogeneous equations in matrix form, we have


ka =0

where
0
O=]| :
0
is the zero matrix, U is the same matrix as in (5.128).

uo
Ui
= (5.134)

Un

and Ky signifies the coefficient matrix for the homogeneous equation (5.133):

At At
pb “7 Koo =Ation © Se 5 Kon

—AtkK4o p— Athy, sere =i

KG

At At
— 5 Kn-1,0 pS ic) aoe ee OTIS yet 1 =~ Ka-1n
At At
=a Kno —AtKn aes (UE “9 Ann

(5.135)
We must recall here that a nontrivial solution to this system of n + 1 linear homoge-
neous equations exists if and only if the determinant | | of the coefficients matrix
Ky in (5.135) vanishes. This condition is used to find the (approximate) eigenvalues
A of (5.130) through finding = 1/2 as the zeros of |Ky| = 0.
We may recall that while | A| = |[— DrK| # 0 guarantees a unique (approximate)
solution for (the nonhomogeneous equation) (5.127), the foregoing condition |K y| =
0 guarantees a nontrivial but not a unique solution to the homogeneous system (5.135),
which means that we may have to determine the values uo, U1,-*-, Un in terms of
one of them as an arbitrary value. Such an arbitrary constant can be evaluated in
practice when we normalize the approximated solution. This will become clear in
the following illustration.

Example 21 Numerical Solution of Homogeneous Fredholm Equations


For illustrating the numerical method of solving homogeneous Fredholm equa-
tions, we consider the following equation of Example 7:

aL) ah K (a, t)u(t)dt (E.1)


294 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

with the symmetric kernel

2(1-t), O<a<t
euiey Gao eee ee
where we found that the normalized eigenfunctions are

ug(z) = V2sinkra, k = 1,2,3,--- (E.3)


corresponding to the eigenvalues A, = 77k”. To simplify the computations we will
attempt to find an approximate solution at z = 0, 1/2, and 1, hence n = 2 and
Atz= 5: So we proceed to evaluate K,; = K(t/2,j/2), 7,7 = 0,1,2, where we
have Kop = Ko, = Koz = Ki2 = Oand Ky, = t from the first branch of K (z,t)
in (E.1) and Kyo = Koo = Ko; = K22 = 0 from the second branch of K (az, t) in
(E.1). Hence if we substitute these values in (5.133), we obtain

pup +0+0=0
1
0+ (u-5)m+0=0 (E.4)
OF 0 tts = 0:

For this system of homogeneous equations to have a nontrivial solution, the determi-
nant of the coefficients must vanish,

Lb 0 0
0 i 1 0 |=2(n-=)
3 ee LU 8 =0
9)

0 0 i,

1
ju— 05 DS re (E£.5)

If we consider pw = 1/8, this will give X = 1/p = 8 and if we substitute this value of
. in (E.4), we obtain up = uz = 0 and wu; = wu; as an arbitrary constant. Hence we
have the two zero values at x = 0, | but an arbitrary value wu; at x = 1/2. What we
did here is, of course, a very rough approximation to the integral in (E.1), where we
used only three points, but it can be improved by considering more points. It remains
to find the arbitrary value u;. For this we may approximate the solution function by
two straight lines connecting the three points (0,0), (1/2, u;), and (1,0) as

A
2 (E.6)
1
EXERCISES 5.5 295

to find u; and then compare u(x) with an orthonormal solution from (E.3). If we
substitute u(az) from (E.6) in (E.7), we obtain
il
Poa 1 1 1 2
su; | a dx +4up | (a — 1)*daz = gui a5 gui ns =1, u=Vv3~1.73.
2
So the approximate numerical values are
1 ,

u(0) & uo = 0, u (5) tin Le 3y u(1) 3 ue =0

corresponding to an approximate eigenvalue of \ = 8. Now if we want to compare


these values to an exact orthonormal eigenfunction from (E.3), we must choose
u(x) = V2sin zz, since this corresponds to the eigenvalue A; = 7? ~ 10, which
is the closest to AX = 8. This exact solution gives

uo = u(0) = V2sin0=0, w =«(5) = v2sin Fdra Lctros—


wl N= 0:

As we have indicated at the beginning of this section, we have included here only the
most basic numerical integration rules to approximate the integral of the Fredholm
integral equations. The higher order quadrature rules of approximating the integral,
their tables, and the numerical setting of the Fredholm integral equations using such
rules, are covered in Section 7.3. There we support the use of such different rules
with a good number of detailed examples and exercises.

Exercises 5.5

1. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the Fredholm equation of Example 20

u(x) = sing + [oo — xcos xt)u(t)dt

at
12
i) r=0,5,551
a ee 3
oe ee, ee
(ii) x 0,59 Teae |

Hint: Note that for (5.125) with a 3 x 3 system in Exercise 1, a 4 x 4


system of part (i) and an 11 x 11 system of part (ii), you need to
use a computer to handle the lengthy computations for solving the
resulting linear equations.
(b) Tabulate the two approximate results in part (a) and compare them with
the approximate solution

v(x) = sinz + 1.003(1 — z) + 0.16742"


296 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

of Example 6 in Section 5.1.

2. (a) In problem 1(a)(i) use Simpson’s rule instead of the trapezoidal rule.
(b) Compare the approximate results of part (a) with the exact answer (ii a
ke

3. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the equation of Example 16 in Section 5.3

WN Ser 4 ze‘u(t)dt

at
1
i x ==
(i) 9 ,

(ii) c= 0 ae A 1. See the hint for Exercise 1(a)


Y ev p02 LOR ek
(b) Compare the two approximate results in part (a) with the exact and ap-
proximate results of Example 16 as presented in Table 5.2 and Figure 5.1
(of Section 5.3).

4. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the Fredholm equation
1
Feat -{ esi (B.1)
ate — EOL:
(b) Attempt to verify such a crude approximate solution.
Hint: Try to integrate numerically with the three approximate values
of u(x) and see how the two sides of (E.1) compare for each value of
x = —1,0,and 1.
(c) Repeat parts (a) and (b) for the approximate values of the solution at z =
—1, —9/10, —8/10, ---,0,1/10,2/10,---,1, then graph and compare
with the results in part (a). See the hint for Exercise 1(a,ii).

5. (a) Use Simpson’s rule of integration (1.144) instead of the trapezoidal rule
to reduce the Fredholm integral equation (5.116) to a system of 2n + 1
linear equations similar to that of (5.124).
Hint: Note that n must be even in (1.144) of the Simpson’s rule.
(b) Use the result in part (a) to solve for the equation of Exercise 1(a).
1
u(x) = sina + / (1 — xcos zt)u(t)dt (E£.1)
0
at x= 0, 5, and 1.
EXERCISES 5.5 297

(c) Compare the results in part (b) with those of Exercise 1(a) and the approx-
imate solution of Example 6,

v(“) = sinz + 1.0031(1 — x) + 0.16742°.

6. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values at x = 0, 1/2, and 1 of the homogeneous Fredholm equation

He) = 7 K (a, t)u(t)dt (E£.1)

_ f t(l-2)(Q2-t?-27), 0<t<z
a= at ee a<t<1 (22)
This problem represents the deflection u(x) of a rotating shaft (1.19)
with unit length and constant density, where \ combines most of the shaft
physical properties.
Hint: Note that the kernel is symmetric.
(b) Repeat part (a) for approximate eigenvalues and the solution values at
xz = 0, 1/4, 1/2, 3/4, and 1. See the hint for Exercise 6(c).

7. (a) Use a numerical method (trapezoidal rule) to solve for the approximate val-
ues at x = 0, 4, 2, 1 of the homogeneous Fredholm equation of Example
Dae
Ue) = | K (a, t)u(t)dt (£.1)

ale list). 0 Bost


HiCofREAi ed are (E.2)
Hint: Use n = 3, search for the approximate solution that corresponds
to the largest finite eigenvalue, and follow Exmaple 21.
(b) Compare the results of part (a) with an exact eigenfunction of u,(z) =
V2 sin kr2x corresponding to the exact eigenvalue \, = 17k?.
Hint: Try to approximate the function by three straight lines between
the four approximate values, then make it with a norm of | as we did in
Example 21.

8. Use a numerical method (trapezoidal rule) to solve the Fredholm equation

1
HES HhK (a,t)u(t)dt (B.1)
0

i= {: oes (E.2)
atz = 0,4,5, 4, and 1.
298 Chapter 5 FREDHOLM INTEGRAL EQUATIONS

9. For the three samples uj, w2, and u3 of problem 3a(i), use the Lagrang
e inter-
polation formula (1.153) and (1.154) to interpolate the approximate
solution,
then compare with the exact answer of TA a) ae
2
A es 2° 0 < x < 1 and the

answeroff probproblem ee
lem 3a(i) at x ¢—= 0 Oy
3a(ii)at —,1
10’ 10
Existence of the Solutions:
Basic Fixed Point Theorems

With the main emphasis of this edition on a simple introductory and applicable course
in integral equations, this chapter must definitely be considered as an optional one.
Indeed we could have relegated it to an appendix, but since its simple and descriptive
presentation! relates to basic topics in Chapters 3 and 5, we opted to retain it in this
edition. Of course the introductory course depends, primarily, on good parts of the
first five chapters as we described it in our “suggestions for course adoption" at the
end of the preface. For a more advanced applied course, parts of this chapter may
prove helpful to the reader with a desire to look into more basic theory, besides the
methods of solutions in Chapters 3 and 5.

6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING

Our treatment in Chapters 3 and 5 for the Volterra and Fredholm integral equations
centered mainly on illustrations of the known methods of finding exact, approximate,
or numerical solutions. In so doing we either had to assume the existence of a unique
solution or stated some conditions to secure it.
In this chapter we present and prove a few basic theorems that are necessary
for establishing the existence and uniqueness of the solutions of integral equations.
We start with a descriptive presentation to motivate the basic mathematical concepts

'For more information on the existence of solutions to linear as well as nonlinear integral equations, see
Kress [1989], Hochstadt [1973], Pogorzelski [1966] (greater depth), Cochran [1972], and Collatz [1966]
(numerical methods).

299
300 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

needed for an accurate and clear statement of the principal theorem: the fixed point
theorem of Banach. It is our intention first to give a clear presentation of several
applications of the fixed point theorem, which have been selected with the goal of
keeping this chapter at the same level as, and in harmony with, the remainder of the
text.
The very basic iterative method that we employed in Chapters 3 and 5,

esl ejto ae iMen arnOut (6.1)


was instrumental in constructing the solutions, and in many instances we even showed
the convergence of the sequence uy (x) to u(x), the solution of the original integral
equation
A

u(cy= f(r) Ar [ha aed. (6.2)

Even when we accept such practical constructive proofs, we still should inquire about
their applicability to other, more general problems that cannot be solved in closed
forms. In particular, all our treatment in this text has been directed toward solving
only linear integral equations as in (6.2), with no method or illustration given of how
to proceed when we have nonlinear integral equations. The reason for this is that
while the existence of a unique solution may be assumed or established by direct
computations for the linear problem (6.2), it is a very different matter to tackle that
of the much more complicated nonlinear integral equation

Tee [ Petula (6.3)

whose successive approximations (iterations) are

ea) iiF(x, t, un(t))dt. (6.4)


In this section we motivate the preparations necessary for accurate statements and
proofs of the few very basic theorems on the existence and uniqueness of the solutions
for such general problems. The iterative method, which we have used so extensively
in this text, will be a principal vehicle for the proofs of these theorems.
Compared to the constructive-type proofs that we have employed until now, the
theorems of the present section and their proofs will have more of a geometric
approach. For example, the integral equation (6.3) is looked at in the following way:
The right-hand side is considered as a mapping or transformation T on u denoted
by T(u), while the left-hand side indicates that such transformation had left this one
element u unchanged,

em IED) (6.5)
This means that the solution u which we seek for the integral equation (6.3) represents
a very special element in the domain of the operator 7’, namely, that which remains
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 301

unaltered or fixed under the T transformation. Such an element wu as in (6.5) is called


a fixed point of the transformation or mapping T’, which is the solution sought for
the integral equation (6.3). In this sense the successive approximations (iterations)
of (6.4) can be written as
Un+1 = T(uUn). (6.6)
The question still remains as to whether the general mapping T has a fixed point,
and if so whether such a point is unique. This, as we expect, will depend on the
function F(z, t, u(t)) in (6.3) or K (2, t) in the linear case in (6.2). However, there are
other very important factors that enter into play, including the nature of the iterative
process, the measure we use for the distance (metric) in determining how close the
members of the sequence up, are clustering together toward a limit point, and most
important, the quality of the set or space from which we select such sequences. A
very familiar space to us is R, the set of real numbers, with the distance between uy,
and u (Euclidean distance) defined by

d(un,u) = |un — ul. (6.7)

This, as we shall see, is but one of a variety of measures of distance (or metric) that
we may choose to adopt in order to facilitate the proofs of the desired theorems.
For the n-dimensional Euclidean space R” = {x = (21, %2,::-,2n); 2; € R},
the distance above is easily generalized to

Another different measure of distance between two elements z and y of R” is


defined as

dy (zr, y) =" max |a;— yi|, x,y € R”. (6.8)


QA 2 eT)
Sere)

This type of distance d, of (6.8) proves very useful when modified to give a measure
of the difference between continuous functions. For f(x) and g(x) as two elements
of the set C[a, b] of continuous functions on the closed interval [a, 6], we define the
distance between them as

d(f(2),9(2)) = max |f(@)- ale), f9 €Cfa,o] (69)


which is graphically the largest distance between the two functions on the closed
interval [a,b], as in Figure 6.1. A simple example of this type of distance is that
between f(x) = cosz and g(x) = sin on [0, 7/2], which is the maximum distance
of 1 occurring at x = O and x = 77/2.
In practice, if g(x) is the approximation to the solution u(x) we are seeking, then
the (maximum) metric d(u, g) in (6.9) measures the maximum deviation ofg(x) from
the desired solution u(x). So if we are to require an accuracy of 10~°, for example,
the way to express it is via the maximum of the metric as € = d(u, g) = 1O=e
302 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

We shall soon present the formal definition for the distance or metric d(x,y)
between two elements 2, y of a given set X, but first we would like to motivate the
type of convergence that is more suitable for describing the clustering or closeness of
the members of the sequence up. In our construction of the solution via the iterative
process we were after the sequence un approaching the limit u as n approaches
infinity, which is the usual type of convergence

d(f (x), g(x))

Fig. 6.1 The distance d(f, g) of (6.9) between two continuous functions.

lim |un — ul = 0 (6.10)


n—-co

that one encounters in the basic calculus course. However, in practice we very often
do not have a way of knowing the limit point u, but instead we know merely that
as n increases, Un+1 gets closer to Up (i.e., the sequence is clustering). It is even a
better sign when not only the consecutive members un+1, Un but the members of the
SEQUENCE Un+p, Un become close, that is, when their distance |un+p — Un| becomes
very small as n, the number of iterations, increases, that is,

Jim leat plieste lO: (6.11a)

This would be a very good sign for the convergence of the sequence, but without
specifying the particular limit point. We should note that in (6.1la) we may use m
instead of n + p, and write

lim > tna etn = 0. (6.116)


n,m—oo

Such a practical concept of convergence is called Cauchy convergence as opposed to


the usual convergence in (6.10), which we will refer to as “convergence to the limit
point u." There seems to be a drawback to Cauchy-type convergence in that it does
not specify the limit point that we are after. In other words, we are concerned about
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 303

whether Cauchy convergence (6.11) would ever imply the convergence (6.10), which
spells out the limit point. To answer this question in the affirmative will depend on
the particular space that contains the sequence and on the type of metric we use to
measure the distance between the elements of this space. A space with its assigned
metric (distance) is called a metric space. We will soon show that in a metric space,
convergence (6.10) to a limit u always implies Cauchy convergence (6.11), but the
converse, which is what we are after, is not always true.
A metric space in which Cauchy convergence implies convergence to a limit is
a very special one termed complete metric space. This is the metric space we shall
work with and in which we state and prove the fixed point theorem.
Before we begin the formal definitions necessary for the accurate statements
of the fixed point theorems, there is still an extremely desirable property of the
transformation or mapping T'(w) of (6.5). This property can be described as a kind
offocusing effect ofT as it maps the input estimate uw, to its output un+1 as in (6.6).
By this we mean that the distance between the images u’ = T’(u) and v' = T(v)
would be closer than the distance between their objects wu and v in the domain of T’,
which can be expressed as

d(u',v') = d(T(u),T(v)) < ad(u,v), Oma <1 (6.12)


and is illustrated in Figure 6.2. A transformation T with this property (6.12) is called
contractive, as indeed it results in a contracted or closer distance between its outputs
(images). It is just this contractive property which is responsible for clustering the
sequence {u,,} of the iterative process (6.6),

Un+1 = T(Un) (6.6)


toward a limit point.
It has been our attempt to give, in a very descriptive way, an idea of the main
concepts that are needed for the statement of a fixed point theorem, namely, a
complete metric space and a contractive mapping. With this informal introduction, a
very basic fixed point theorem of Banach states that “for a contractive mapping T on
a complete metric space, there exists a unique solution u to u = T(u)." The detailed
statement and proof of this important theorem are given in Section 6.2.
For our purposes this would ensure the existence of a unique solution to the linear
integral equation used in earlier chapters,

u(x) = f(z) + » |K(e,t)u(ode (6.2)

as well as the (generally) nonlinear integral equation

ule) [Fe t, u(t))dt (6.3)

as special cases of a contractive mapping u = T'(w).


Although our principal concern in this text is with solutions of integral equations,
this should not distract us from other possible applications of the fixed point theorem
304 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

u'=T(u)
an
tee ny)
yer » v‘'=T(y)
u
<. ip
fh.

Fig. 6.2 Contractive mapping. *

in proving the existence of solutions for various types of equations that can be
described by the mapping

i L(t) (6.5)
For example, instead of T’ being the integral operator in the integral equations
above, it can represent a differential operator in the case of a differential equation
like

u=~—> =T(u). (6.13)

Another example is that of algebraic equations, in particular the matrix equation

DAD, edi (L1, Lon ee) 6.14


A = lela ( )

Up to this point in the discussion we have not mentioned that the mapping T
is linear-hence the role of the fixed point theorem in assuring the existence and
uniqueness of solutions to a class of “usually” intractable nonlinear integral and
differential equations. We refer here to a certain class, as it remains for us to show
that the particular equation has a contraction operator T’.
Even though the fixed point theorem can be applied to integral equations as well as
differential equations, the successive approximations (iterative) process (6.1) favors
the integral equation representation of the problem, since in practice we watch the
approach of the sequence un+1 of (6.6) toward the desired solution of the integral
equation (6.3). This means that in order to apply the fixed point theorem to differential
equations, we may first change the differential equation to an integral equation to
make it suitable for the iterative process (6.6). We will illustrate this application for
initial value problems associated with differential equations after reducing them to
Volterra integral equations.
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 305

With the foregoing intuitive and very descriptive introduction of the basic concepts
necessary for stating the fixed point theorem, we turn now to the formal definitions
of these concepts. It is our intention to keep the treatment brief, but clear and mostly
self-contained.

6.1.1 Basic Definitions: Complete Metric Spaces

Metric Space
A metric space, designated as (M, d), is aset M with a mapping (d: M x M >
R) that associates a real number (distance) d(x,y) = reR to every ordered pair
(x, y) in the domain of d and such that this distance (or metric) d(x, y) satisfies the
following three conditions:

(a) d(z,y) > O for z,yeM, andd(z,y) =Oex=y (6.15)


This means that the distance between any two elements is always nonnegative,
and the distance being zero is equivalent to the two elements being identical.

(b) d(z,y) = d(y,x) for x, yeM. (6.16)


That is, the distance is symmetric in x and y.

(c) d(x,z) < d(z,y) + d(y, z) for z,y,zeM. (6.17)


This means that the present general definition of distance still satisfies an
inequality that parallels the usual triangle inequality,

Iz —2| <|a—yl+ly-2| (6.18)


see (Figure 6.3).

The triangle inequality (6.17) will be used very often in proofs of the basic
theorems. We note that the present mapping which defines the distance d(z, y) is to
be distinguished from T (uw) in (6.5).
A familiar example of a metric space is (R, d), the set of real numbers R with the
distance (metric) d(x, y) = |x — y|, which can easily be shown to satisfy the three
properties of a metric listed above. The set Cla, b] of continuous functions on the
closed interval [a, b], together with the metric

d(f(x),g(z)) = ve @)a@). .f,geCla; b| (6.19)


also constitutes an important metric space, especially for the present development,
where we are dealing with continuous functions of the integral equation (6.3), such
as F(x, t, u(t)).
It is instructive to show that this new type ofdistance d( f(x), g(a)) in (6.19) does
satisfy the triangle inequality (6.17),

d(f,g) < d(f,h) + d(h, 9).


306 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

Fig. 6.3 Triangle for (6.18).

To prove this we observe

d(f,g) = max|f(¢) — g(x) = max|f() — h(x) + h(x) — 9(2)|


< max{| f(x) — h(2)| + |a(2) — 9(2))}
< max|f(2) — h(e)| + max|h() — 9(=)|
< d(f,h) + d(h, 9)
where the maximum is taken over all ze[a, 6].

Limit of a Sequence in Metric Space


Let {un}?2, be a sequence of points uneM. The point u is called a limit point
of the sequence, that is,

ti) limiting (6.20)


n—- Co

if for each € > 0 there is a number no = no(e) such that forn > no(e) the element
Un is within the distance € from u [i.e., d(u, un) < e€]. In this case we say that the
sequence up converges to u.
As we mentioned earlier, especially for the iterative process (6.4) or (6.6), it is
sometimes the case that the elements u,, of the sequence get very close to each other
but no limit u is known [i.e., d(un, Um) — 0 as n,m -+ oo]. This brings us to the
Cauchy-type convergence. The sequence {un }°2, in M is called Cauchy if for each
€ > 0 there is no = no(e) such that for n,m > no(e) we have d(un, Um) < €.
We will prove here that in a metric space every convergent sequence (6.20) is
Cauchy convergent. From the definition of the sequence {u,,} being convergent we
have

d(u,un)<e' for n>no(e’). (6.21)


We want to show that this implies indexCauchy convergence Cauchy convergence,
that is,
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 307

Ans Ure OP ne nive> INCE). (6.22)


From the definition of the metric we have the triangle inequality (6.18),

d(Un,Um) < d(un,u) + d(u, Um) (6.23)


where for the right side we can use the assumed convergence to have
d(un,u)<e' for n> no(e') (6.24)
and in the same way,
d(u,un) <e- for? -m > mo(e). (6.25)
So if we take n > max(no, mo) = No(e'), we can use this No(e’) to satisfy both
(6.24) and (6.25), which are then used in (6.23) to yield

d(un,Um) <e' +, n,m > No(e’). (6.26)


If we let e’ = €/2, we have

dun;tm)<¢€ for n,m > No ( )= N(e) (6.27)
2
which, according to (6.22), constitutes the Cauchy convergence.
Although convergence to a limit implies Cauchy convergence, we should bear in
mind that the converse is not always true. As mentioned earlier, only in very special
metric spaces called complete metric spaces do we also have Cauchy convergence
implying convergence to a limit. For example, in the metric space (M,d) with
M = Qas the set of rational numbers, a sequence of rational numbers may converge
to a limit point u = V2, but this limit point is not a member of the set of rational
numbers.
It can be shown that the metric space (M, d), with M = R the set of real numbers
and d(z,y) = |x — y| for x, yeR, is complete. Also, the set C[a, b] of continuous
functions on the closed interval [a, b], together with the metric

d( f(x), g(x)) = x€[a,b]


max |f(x)—g(x)|, — f(x), g(#) € Cla, }] (6.9)
can be shown as another very important complete metric space. Because of space
limitations we will not pursue proofs of these results; instead, we concentrate our
efforts on proving the main result, the fixed point theorem.

Fixed Point of a Mapping


Let (M,d) be a metric space and M’ a subset of M with the mapping
T:M >~M, a1)
—0
An element up € M' is a fixed point of the mapping T if T(uo) = uo. For
all elements u € M’ we are looking for those particular elements uo that remain
unchanged under the transformation 7, that is,
uo = T(uo). (6.28)
308 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

Contractive Mapping
The mapping T in (6.28) is called contractive if there is a nonnegative real number
a less than 1,0 < @ < 1, such that for each wu, u2 € M' we have

d(T (u1), T(u2)) < ad(ur, uz). (6.29a)

In other words, a contractive mapping brings the images T'(u;) and T'(uz) closer in
the range of the operator T than their corresponding objects u2 and u, in the domain,
as illustrated in Figure 6.2. In terms of our iterative process

Un+l = T(t) (6.6)

we have, for example, uz as the image of u; and uz as the image of u2, so with a
contractive mapping d(T(u2), T(ui)) < ad(ug, ui), but T(u2) = uz, T (ui) = ue,
hence

d(u3, U2) < ad(ug, uy). (6.296)


In the same way we can show that

d(u4, U3) = d(T (u3), T'(u2)) < ad(uz3, U2) < a? d(uz, U1) (6.30)

after using (6.29a) and (6.29b).


This process can be continued to obtain the general case,

a(tnay, tn) < ad(un, Wet) < a’ d(Un—1, Un—2) Fas < a”—"d(ue, U1) (6.31)

which says that the sequence is clustering since the outputs u,+4 1 and up, are closer
than the inputs wz and u; by a geometric factor of a”—!,0<a< 1.

6.1.2 Contractive Mapping for Linear Fredholm Equations

In the following example we illustrate conditions for the linear Fredholm integral
equations of Chapter 5 to represent a contractive mapping.
Consider the Fredholm integral equation of the second kind (5.7a),

b
u(x) = g(x) + af K (a, t)u(t)dt = T(u). (6.32), (5.7a)

We assume that g(x) is continuous on the interval [a,b] and K(z, t) is continuous
on the square D = {(z,t) : € [a,b], t € [a,b]}, as indicated in Figure 6.4. For
such functions we shall work with the complete metric space C[{a, b] of continuous
functions and its metric d(z, y) as in (6.9).
To find a sufficient condition for the mapping T(u) of (6.32) to be contractive,
we first indicate that the kernel K (x, t) here is bounded [i.e., |K (a, t)| < M] since
it is continuous on the bounded domain of the square in Figure 6.4. To show the
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 309

Fig. 6.4 Domain D of (6.32).

contraction property of 7’, we use the metric of (6.9) on the images T(3(z)) and
T(7(z)) of the two continuous functions (x), y(z) in C[a, ],

d(T(B(z)),T(y(z))) = aah
ote) +a foKG,08(t)dt— [g(x

nf K (a, t)y(t)dt]

= max [afe,
0180 ~ roe
< max ikIAK (2, t)(8(¢)—v()]lat
<INM max, [late -role
< |AIM max |8(2)~(2) a dt
< |NM(b— a)d(8(z),-7(z)) = ad(B(2),1(2))
(6.33)
, t)|. Hence with
after using the upper bound M for |K(z

Comma eg) (6.34)


or
RES reer (6.35)
the mapping of the linear Fredholm equation (6.32) becomes contractive, since this
ensures d(T'(3(z),T(y(x)) < d(G(x), y(x)) in (6.33).
310 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

6.1.3 Contractive Mapping for Linear Volterra Equations

Next we illustrate that to ensure a contractive mapping J(u) for the linear Volterra
integral equation (3.1),

u(x) = f(z) +2 [TROOHOR (3.1)

= T(u) (6.36)
we need much less restrictive conditions than those for the Fredholm equation (6.32).
In Section 3.1.2 we considered the successive approximation method (3.25) of
solving (3.1),

(Vip a) daa BlG2) a= iPK (a, t)un—1(t)dt (3.25)

which we write here as

Unt (ae) =p (2) | Ka) (b ate (6.37)


0

To assure the convergence of this approximation, we stated the result (without proof)
that “if f(x) is continuous on [0, a] and K(z,t) is also continuous for 0 < x < a,
0 <t < a, then the sequence u,,(x) converges to the solution u(z) of (3.1)." In terms
of our present development, where we are working in Ca, 6], the space of continuous
functions, we should be able to reach a conclusion of convergence without any extra
conditions.
This indeed is possible but needs a number of preliminary results. The most
important of these results is to show that for large enough n, the nth-order mapping
T”(u) of the Volterra equation is a contractive one. We will limit our efforts in the
following example to showing this result, which we feel captures the main idea of
the contraction for T'(w), and we leave it to Example 3 at the end of Section 6.2.1,
after we already have the fixed point theorem, to show that if T”(w) is contractive,
then T'(u) = u has a unique solution.
T”(u) is easily illustrated when applied in (6.37),

Unti = T(un) = T(T(un-1)) = T?(Un-1) = T?(T(un—2)) = T?(un—2)


peed (TG) — a

Example 1 A Contractive Mapping for the Volterra Equation


We will show here that J(u) of (3.1) is contractive when n is large. Following
what we did for the successive approximations and the iterated kernels method in
Section 3.1, we write

Tus) = uate) = f(e) + f” Ke(a, €)us(€)dé (B.1)


6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 311

Tu) = g(t) = T(ua(a)) = fle) +a f" K(, €)ua (Ede


. re
= f(z) +r / K(#,)Uf(6) +A / K(E, tu (t)dt}dé
x DT pé
Se ‘)K(e,€)f(€)dé +»? i / K (#,€)K (E,t)ua (t)dtde
= f(@) +) / K(x, €)f(€)dé +? i” Kola, €)uy(€)dé
=7(f) + |" Ka(&)un a, (6)dé
(E.2)
since the last double integral reduces to the single integral / K(az,y)K(y, €)dy
i
that defines the iterated kernel K2(x, €) as defined in (3.4). Note how the first two
terms in (E.2) are known operations on the known function f(x), where we consider
them fixed Tf) as far as the mapping of the (variable) estimate u; is concerned,
and where we are seeking a contractive mapping on w,. If we repeat this successive
process to Un4, = T (un) = T”(u1), we have

T"(us) = Tun) = (2) +A f" K (2, €)f(@de +» ii" Kola,


€)f(Odé
began f” Ky-a(a,8)f (Ode +2" il“Kpla,ui(Odé — (E.3)
where K,,(z, €) is the nth iterated kernel of (3.4),

ene [ ke.oKulteut Ky(2,6)=K(2,€) —(3.4)(B.4)


and we note that all the terms, except the last one with u;, are considered known.
We emphasize this point since as we write |J’(u,) — T’”(v1)| next in preparation to
show T"(u) as a contractive mapping, all these known terms will cancel out, leaving
us with only the last term, which will involve the desired |u; — v;| difference of the
first estimates,

zx

|T” (ui) — T"(r1)| = |[T(un) — T(vn)| S arf |An(z, €)||ui (E) — v1 (€)|dé
(E.5)
where, of course, T’”(v;) is obtained as in (E.3) of T”(u1)..
Before we use the metric (6.9) on (E.5) in (E.9), we should prepare for an upper
bound of the iterated kernel K,,(z,£) on the square indicated. Since K(x,€) =
K,(a, €) is assumed continuous on this bounded square domain, we can conclude
312 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

that Ky (2, €) is bounded by some positive number M, |Ki(z, )| < M. It remains


to show the following bound for the iterated kernel K, (a, €):
M”

|Kn(a,€)| < Cee = 6)%) a Sor (£.6)


which can be established by mathematical induction. We illustrate the cases for
Ko(z, €) and K3(a, €):

|Ko(x, €)| =|[Keon (neat <[enue eras <u J

IK(t.8)lat
<M Ltmax at]< M?(a
- €) iy

(E.7)

Fegtne= |
i; K (1,1) Ka(t,Sa <M |"| Ka(t, €)|dt
<M
<M fVe 9a
—C\dt s aM” [ @-9a0
= f\d. (E.8)
E.8

oy CIS
2!
after using the result of (E.7) for the bound on |Ko2(t, €)| in (E.8). With this result
(E.6) and the result (E.5), we write

a(T"(u1),T"(r4)) = max|T"(u) - T(r)


= max |A”"| [Kole = ov(e)i
0
4,
< |AI" max HfKale, €)llun() — vi @lae
< JAP max cane ) — v1 ()|dg

< |A|"M™ max |u1((z)— v1 (a if —


cite of

< |A|" MM" d( U1, v1) eal


G aae
=a

“rant
=a d(w,n) < parm eaa”
<a" dl),
d(T”(ui),T"(v1)) < ad(uy, v1)

where Nake

AS hare
n\
Oe (E.10)
6.2 FIXED POINT THEOREM OF BANACH 313

Hence T”(u) is contractive if a < 1, which, with the help of the n! in the denominator,
is the case when n is sufficiently large (i.e., if we wait for more iterations). Of course,
if we have our problem on the unit square, 0 < x < 1, then |z — a| = |z| < 1 inthe
factor |x — a|” of (E.10) will help even more in speeding a of (E.10) toward being
less than 1.
If we consult Example | of Chapter 3,

u(x) = f(x) + af e”'u(t)dt (£.11)

we had a closed form for the iterated kernel,

K,,(2,t) = ean (E.12)


So on the unit square we have
gril

|Kn(z,t)| < Gane M=e (E.13)

which is within the (more conservative) bound we obtain from (E.6),

Mzr-1 grt
CMG 9B) oe
ee ee,
PORES aa rie akeremeni ee)
In this case we use (E.13) in the second line above that of (E.9) to obtain
gn

C— |AIresF (£.15)

instead of .
ne
a=) bAlre a (£.16)

which corresponds to using (E.14) in (E.9) and (E.10).

6.2 FIXED POINT THEOREM OF BANACH

With the definitions of metric space, fixed point of the mapping, and contractive
mapping, we are now in a position to state and prove a very basic fixed point
theorem, the Banach (or Banach-Cacciopoli) theorem.

Fixed Point Theorem


Let (M,d) be a complete metric space and let the mapping T.: M — M bea
contraction; then T has exactly one fixed point.
Proof: We are to prove the following for the mapping

u=T(u) (6.39)
314 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

(a) The uniqueness of the fixed point when it exists.

(b) The existence of the fixed point, where we show first that the sequence of the
successive approximations

Unt = T (Un) (6.40)


is Cauchy convergent, hence convergent since it is in a complete metric space.
More important, we show that the limit point for this convergent sequence
u = limn_soo Un is indeed the fixed point of the actual problem u = T(u).

(a) To prove the uniqueness of the fixed point, suppose that there are two distinct
fixed points u and v, u # v [i.e., u = T(u) and v = T(v), u F vIJ. Since u F v, the
distance between them is not zero: d(u, v) # 0. Because u and v are fixed points of
T, we also have i

d(T (u),T(v)) = d(u,v) £0. (6.41)


But since the mapping T is also contractive, we have, according to (6.12),

d(T(u),T(v)) < ad(u, v), Oar: (6.42)

If we combine (6.41) and (6.42), we see clearly that there is a contradiction

d(u,v) = d(T(u),T(v)) < ad(u, v),

(1 — a)d(u,v) <0
where since d(u, uv) > 0 by assumption, then 1 — a < 0, a > 1, which contradicts
the assumption of contractive mapping whose a is strictly less than 1. Hence the
distance d(u, v) must be identically zero, which is equivalent to u being equal to v,
and which proves the uniqueness of the fixed point when it exists.
(b) To prove the existence of a limit point as a fixed point for u = T(u), we will
first prove that the sequence u,, of the iterative process

Un+1 = T'(Un) (6.40)


is a Cauchy sequence.
With the help of the contraction property, we will find the distance d(un, Un+1)
between two consecutive approximations in terms of the distance d(u2, u;) between
the first two approximations (input estimates) wu; and uz. The next step is to find
the distance d(un,Un+p), that we need to use in proving the Cauchy convergence
(6.11b).
From (6.4) and the assumption of contractive mapping we have

d(uz,u3) = d(T (ui), T(u2)) < ad(uy, u2). (6.43)


By the same reasoning

d(uz,u4) = d(T(u2),T(us)) < ad(u2, u3) (6.44)


6.2 FIXED POINT THEOREM OF BANACH 315

and if we invoke on the right side the previous result for d(u2,u3), we have

d(us, U4) KS ad(u2, U3) < a’ d(uy, u2).

If we continue this to wu, and un+1, we have, as we did for (6.30) and (6.31),

d(tn,Unti) < a”! d(uy, uz) (6.45)

where clearly the higher order consecutive iterates un,Un4i(n >> 1) are much
closer together than the first ones, u; and uz, due to the geometric factor a”~!;
0 <a <1. Still we have to show the Cauchy convergence, which will entail the use
of the important result (6.45) and the triangle inequality of the metric d(un, Un+p).
Observe that
d(un, Untp) << d(un, Un+1) ae d(un+1 ) Unt2)+

+d(Un+2,Un+43) + +++ + d(Un+p-1,Un+p) (6.46)


after repeated use of the triangle inequality (6.17). Now we use property (6.45) on
each of the terms on the right side, which are distances for consecutive sequences.
Therefore,

d(uUn, n+p) < d(un, Un+1) a d(Un+1, Un+2) AG d(uUnyo, Un+3)


+--+ + d(Untp-1, Untp)
< a"! d(uy, u2) + a"d(uz,
uz) + at d(u1, u2)
aR oe Se at P~2d(uy, u2) (6 47)

a tar +arth 4... 4atP~?) d(uy, uz) ;


a "T1+ata?+---+a?~")d(u1, uz)
es
a aa i = d(ui, v2),
after realizing that we have a geometric series in the parentheses above. Since
0 < a < 1, the right side would clearly go to zero as n — oo, which makes
d(Un,Un+p) — 0 as n — oo (i.e., the sequence converges in the Cauchy sense).
Since this sequence u,, is an element of a complete metric space, it will converge
to a limit w in this space (i.e., limp—oo Un = U).
What remains is to show that this limit point wu is indeed the fixed point of our
equation; that is, it must satisfy u = T(u), or in other words, d(u, T(u)) = 0. From
(6.6) we have

Uric Lay)
and from the proof of the existence of the limit point above we can say that

lim @44 = lim vu, =u


n—- co n—- Co

or
d(w,T (tn)) = du, nti) + 9, d(u, Un) 0
as n — oo. With these results we will use the triangle inequality to have
316 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

d(u,T(u)) <d(u,T(un)) + d(T (un), T(u)) < du, T(un)) + od(un,u)


< d(u, Un41) + ad(un,u)
(6.48)
after using the contraction property of the operator T in the last term.
As n — oo each of the two terms on the right would approach zero, which makes
d(u,T(u)) < 0; but since the metric d is nonnegative by definition, we must have
d(u,T(u)) = 0, which means that u = T(u), the desired result of the fixed point
theorem.
Next we will illustrate this important Banach fixed point theorem to prove the
existence of unique solutions to linear and nonlinear Fredholm and Volterra integral
equations that exhibit contraction.
As we mentioned earlier, the importance of the foregoing type of proof for the
fixed point theorem is that it presents us with a method of constructing the solution.
Moreover, it gives an upper bound on the error €, = |u — u,| or in general d(u, un),
incurred in approximating the solution u by the nth successive approximation up, in
terms of the difference between the first two estimates, u; and wo,

an}
n= t,t.) = 5 du» U2). (6.49a)

This is obtained easily from the last line in (6.47); we take the limit as p > oo,
where limpy_,.. a? = 0 for 0 < a < 1 on the right side and limp_,o Unip = U
(sincen + p = Mm > &, limm-.oo Um = U) On the left side, to give

qn}
limnd (ntpp hd a eS = a (ut ua) (6.496)
poo

6.2.1 Existence of the Solution for Linear Integral Equations

Linear Fredholm Equations


For our illustration of the linear Fredholm integral equation

SESE} flK (a,t)u(t)dt (6.32), (5.7a)


we found in (6.34) that a = AM(b — a), which gives a contractive mapping if we
insist that a < 1 [i.e., \ < 1/M(b—a), where M is the upper bound of |K (a, t)| on
the square of Figure 6.4]. In this case the up estimate as an input would produce an
output Uyp+1 that has a maximum error bounded as in (6.49a),

En = max |u—u Hy Se ae |ju2—ui|, |AM(b-a)|


<1
z€[a,b] mr = 1=|AM(b= a)| ee[a,5) ‘. ae
(6.50)
in terms of the maximum difference between the first two estimates, uw. and wy.
6.2 FIXED POINT THEOREM OF BANACH 317

Next we consider Example 12 of Chapter 5, where we solved the Fredholm integral


equation

u(x) = f(x) + | re'u(t)dt

by the successive approximation (iterated kernels-Neumann series) method. We will


compare the contraction condition for the existence of the solution, which we have
established here, with the one we merely stated above equation (5.79) in Chapter 5
for the convergence of the Neumann series solution (5.80).

Example 2 Existence of the Unique Solution for Fredholm Linear Equations


Consider the following Fredholm integral equation; let us find a condition that
assures the existence of the solution.

u(x) = f(x) + af ze‘u(t)dt. (£.1)

Assume that f(x) is continuous on (0, 1]. A(z, t) = xe! is obviously continuous
on the square x € [0,1], t € [0, 1] (see Figure 6.4); hence it is bounded there and we
can easily see that a bound / is e, that is,

M = max |re’| =e. (E.2)


zx€(0,1]
teE[0,1]

So according to (6.35), this Fredholm equation represents a contractive mapping, and


hence assures the existence of a unique solution if

a = A\M(b—a)
= Ae(1—-0) =Ae< 1

that is, if |A]| < 1/e ~ 0.37.


Now we would like to compare this condition with that which we gave for the
linear Fredholm integral equation
b
(2) = f(a) 4+ | K (a, t)u(t)dt (5.21)

to be |A| < 1/B, where B is given in (5.79) as

pe if [ K2(a, t)dndt (5.79)


B was calculated in Example 12 of Chapter 5 to be B = \/(e? — 1)/6, where the
condition
1 1 6
|A| < B becomes |A| < nae Cai O19 7,

2See Pogorzelski [1966].


318 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

This |A| < 0.97, when compared with ours of \ < 0.37, makes the contraction
approach to convergence appear more conservative. The reason lies in the nature of
a special complete metric space of square integrable functions on (a, b) in which the
condition |A| < 1/B was obtained. These square integrable functions f(z) on (a, b)
were discussed in Section 4.1.3 in relation to their Fourier series representation in
b
(4.47) and (4.48). They are defined such that / |f(a)|?dz < oo. For the space
a

b
of these functions we define the metric d(f(x),g(x)) = / |f(x) — g(x) |?dz,

whence they constitute a complete metric space. (See (4.52) and some of the refer-
ences given in the first page of this Chapter.)

Linear Volterra Equations ‘


The following example illustrates the existence of a unique solution for Volterra
equations.

Example 3 Existence of the Unique Solution for Linear Volterra Integral Equation

In Example | we showed that the nth-order mapping T”(u) for the Volterra
equation

u(x) = f(x) + jaeK (a, t)u(t)dt (3.1)

= 1 (as) (£.1)

is contractive. We show here that this implies the existence of a solution wu to


u = T(u) above, and that this solution u is unique.
Let T”(u) = S(u) = u. Since T”(u) is contractive, then by the Banach fixed
point theorem it has a unique solution u,

TE(Gv) wo (0) = (£.2)

This means that with the first estimate u;, we have the sequence uz+1 = S(ug) =
S*(u1) converging to u, that is,

u enajim, Uk+1 =a,jim, S* k (ur) — jim


iF (T nykCn) 26 jim
98 T nk (ut) s. we (CE)

Recall from Example | that T”(u) was proved contractive for large enough n, so
with the unique solution u for 7 (u) = u, it should be clear that for the even larger
kn, T*"(u) = u has the same solution u of T”(u) = u, n large.
In (E.3) we have the first estimate u;, being arbitrary, so we may choose it to be
Uy = da Qui)s
6.2 FIXED POINT THEOREM OF BANACH 319

gta jim T™ (u1) = jim T™ (T(u))


= Jim,TEEN(ap) = lim,T(T*"(u)| = jim T(u) =T(u), (E.4)
heed) ahs

after using T"*(w) = wu in the second line for large n.


Hence (E.4) represents the existence of the solution u to T(u) = u. To prove that
u is unique, let y, 3, y £ 3, be two different solutions to u = T(u) [i.e., y = T(y),
B = T(8)). But since y = T(7), then

Pe tee
(1 y)\ ty) eee = (yy ay,
T"(y) =7- (E.5)
The same can be shown for (,
T"(8) = B. (E.6)
But since J” is known to be contractive, it must have a unique solution which forces
y = B. Hence T(u) = u has a unique solution.
The following section represents our only (brief) discussion on analysis of nonlin-
ear integral equations. It deals first with applying the fixed point theorem to nonlinear
integral equations. This is followed by a simple initial value problem associated with
a first-order nonlinear differential equation to illustrate the importance of the integral
representation of differential equations.

6.2.2 Existence of the Solution for Nonlinear Integral Equations

In the preceding section we limited our illustrations of the Banach fixed point theorem
to the existence of unique solutions of the linear Fredholm and Volterra integral
equations. In this section we apply the fixed point theorem to nonlinear Fredholm
and Volterra equations. This is followed in Section 6.2.3 by an initial value problem
associated with first-order nonlinear differential equation. The latter problem is
added to indicate the importance of having to change to the integral representation
in order to enjoy the method of successive approximations, and where proving the
contraction property is greatly facilitated when working with an integral operator, as
we illustrated for linear integral equations in Section 6.2.1.

Nonlinear Fredholm Equations


Consider the nonlinear Fredholm integral equation
b
u(x) = f(z) + | E(axt,ult))dt=T7 @) (6.51)

where we assume f(z) continuous on [a, 6] and that F(a, t,u(t)) is continuous,
hence bounded on the square of Figure 6.4, |F'(z, t, u(t))| < M for bounded u(t) :
c < u(t) < d. Consider also the successive approximation of (6.51),
320 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

b
ipa (2) =f eee »/ F(a, t, Un(t))dt (6.52)

and the metric (6.9) with the metric space of continuous functions Ca, 6].
To show whether the mapping in (6.51) is a contractive one, we must first look at
the distance between the images T'(3(ax)) and T(y(x)) of the inputs G(a) and (x)
in C[a, b],

a(T (8), T(y))


6 b
= ae f(x) +r] F(z,
t,B(t)) — Lr +a Pestova

= max |. i |F(c,t, A(t)—F(e,


t,y(t)|dts
(6.53)
To relate this distance d(T (3), T (y)) of the outputs to that of the inputs d(G(t), y(t))
we need to have the maximum operation of the last line taken on |G(t) — y(t)],
which clearly is not available in this form, as seen inside the integral of (6.53). To
have |G(t) — y(t)| freed from the operation of the function F’ inside this integral, we
impose a well-known condition on F' that would satisfy our goal, called the Lipschitz
condition.
The function F(z, t, u(t)) is called Lipschitz with respect to the variable u(t) if
there is a positive constant L such that

|F(x,
t,B(t)) — F(a, t,y(t))| < LIB(t) — v(t)| (6.54)
for (x,t, B(t)) and (a, t, y(t)) in the domain of F’.
If we impose this Lipschitz condition on F inside the integral of (6.53), we have

d(T(8),T(7)) <|AlLmax | \p(t) - (Oat <[AIL(6—a)max |A(2) — (2)


b

< |A|L(b — a)d(8, 7)


(6.55)
where we have a = |A|L(b — a). Hence the mapping T(u) of (6.51) becomes
contractive when a = |A|L(b— a) <1,

|A| < ei
L(b—a)
(6.56)
where L is the Lipschitz constant of F(x, t, u(t)) as in (6.54).
We note that if F is linear in u(t), as in our illustrations in Section 6.2.1, then F
is always Lipschitz, since for F(z, t, u(t)) = K (a, t)u(t) we have

|F'(z,t, 8) — F(a,
t,y(t))| = |K(a,t)B(t) — K(a
6.2 FIXED POINT THEOREM OF BANACH 321

where M, the upper bound of |A(z, t)|, can stand for L, the Lipschitz constant.
We also note that from the start we assumed that F(z, t, u(t)) is continuous in
all three variables, but clearly the continuity of F in u(t) does not imply that it is
Lipschitz in u(t). A simple counterexample is that of F(2,t,u(t)) = xt,/u(t),
which is continuous but not Lipschitz in u(t). However, if F(a,t,u(t)) has a
continuous partial derivative 0F'/Ou in the domain D of F, then F is Lipschitz in u,
as we will show next, and

OF
= max,
i (6.58)
Ou
Since F(z, t, u(t)) is assumed to have continuous partial derivative OF'/Ou in D,
we can use the mean value theorem, which states that for any u;(t) and u2(t) in D
there is an 7(t) between them, u1(t) < 7(t) < u2(t), such that

P(a,0,
Ui (t) — Fast, Ualt)) OF
ce
Ces By (ob nt). (6.59)
From this result we have

F(et,an(0))—F(astyua()| = [Fe t.000) ui (t)—ue(t)| < Lluis (t)—ue(t)|


(6.60)
where L = max z,t)ep |OF'/Oul, as given in (6.58).
To take a simple example consider F(z, t, u(t)) = ¢? sin u(t) on the rectangle
0<2<3,0<t< 1. OF/0u = t? cosu(t), where according to (6.58), L is

sm; sei = max |t? cosu(t)| < 1


Doe Ou 0<t<1
since both ¢? and cos u(t) are bounded from above by 1.
Besides this important Lipschitz condition on F(z, t, u(t)) for ensuring the con-
tractive property of T(u) in (6.51) when |A| < 1/L(b — a) in (6.56), we should
also ensure that the output 7'(u) is bounded. This is accomplished by assuming that
F(a, t,u(t)) is bounded by a constant k,

|F(a,
t, u(t))| <k. (6.61)
With this condition on F' in (6.51) we have

b
lu(x) — f(x)| = af [email protected] S if |F(x,t, u(t))|dt < |A|k(b — a).
(6.62)
This means that if our input estimates w,,(t) in (6.52) are bounded [i.e., c < un(t) <
dj, the outputs un+1(t) can also be bounded within the same range by limiting the
value of \ in (6.62) and taking into consideration the bounds on f, m1 < f < mz.
This amounts to choosing A as
322 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

|A| < min (Pisa a | (6.63) |


k(b — a)’ k(b—a)
So to have this condition of boundedness on the successive approximations as well
as the important contractive property condition (6.56) on A, we require that

: (Mi) SE d— ms 1
Al < min (RS = ) (6.64)

when m, < f(x) < m2,c < T(un) < d, and L is the Lipschitz constant ofF’ as in
(6.54), (6.58), or (6.60).
In regards to the iterative process (6.52) and its mapping T’, there can be different
variations on it that may result in a better contraction property for its associated
modified mapping T,, (see Jerri [1991] Jerri et. al. [1987], Jerri and Herman
[1996]).

Nonlinear Volterra Equations


Consider the nonlinear Volterra integral equation

u(x) = f(x) +2 /SEGRAG d= BO). (6.65)

As for the nonlinear Fredholm equation (6.51), we will assume that f(a) is continuous
on [a, b] and bounded: m, < f(r) < m2; F(z,t, u(t)) is continuous with respect
to the three variables z, t, and u(t) on the domain D: a< x<ba<t<z, u(t)
unbounded: c < u(t) < d; and F(z,t,u(t)) is Lipschitz with respect to u(t). To
ensure that the outputs un+1(2),
x

Until’) =i (£) + | E(a.6 unlt) dt= Tz


(ay) (6.66)

are always bounded within the range c < u,(t) < d of the inputs, we follow the
same steps for the Fredholm equation in getting the condition (6.63) on A to come up
with similar condition on of (6.66),

; m,—-ce d—mz
|A| < min (a. was) (6.67)

where k is the upper bound of F' (i.e., |F| < k). As we have shown for the
linear Volterra integral equations, the proof of a contractive mapping for the Volterra
equations does not require an extra condition on as long as we take large enough
n for un(x). This is, of course, a welcome nicety of the Volterra equations which
stems from the nature of its origin as an initial value problem.
For the linear Volterra equations, we showed in Example | that T”(w) is contrac-
tive, then concluded from that in Example 3 that T(u) = u of the Volterra equation
has a unique solution. This was accomplished with the aid of the iterated kernels,
which are clearly exclusive for the linear case only. Here we will follow a slightly
6.2 FIXED POINT THEOREM OF BANACH 323

different procedure to get to the contraction of T(w) in (6.65) and the convergence of
(t) of (6.66) to the unique solution u(t).
the sequence up,
Since F(z, t, u(t)) in (6.65) and (6.66) is assumed Lipschitz, we can follow what
we did in (6.54) and (6.55), for the nonlinear Fredholm equation, and write

Jun+i() — Un(x)| < LIA| is|Un(t) — Uns (t)|dt (6.68)


where u(x) is given in (6.66).
Shortly we will show that [see (6.73)]

jun41(2) — um(2)| < Je ~ allAL|"-? ije-a[?


SS (6.69)
This will allow us to conclude that the series

[o-@)

S-[unti (2) — un(2)] (6.70)


nT

is absolutely and uniformly convergent since it is dominated by an infinite series [of


the sequence on the right of the inequality in (6.69)] which obviously does converge
uniformly with no restriction on A.
To show the convergence of the successive approximation (6.66) sequence up, (x)
to the solution u(x) of (6.65), we first note that u,(x) can be written, with the help
of the telescoping terms, as

Un(x) = uy(x) + ue(x) — u(x) + us (x) — U2(%) +--+ + Un(Z) — Un—1(2)


m—1

=u (2) + > |ujsi(z) — w,(2)].


j=l
(6.71)
So with the uniform convergence of the series (6.70) which we have here on the right
side of (6.71), we can take the limit of both sides of (6.71) as n — o to conclude
that limp_.oo Un(x) exists for all x € [a,b]. But this is exactly the sequence up,(zx)
in (6.66), and since we assumed that F(x, t, un(t)) is continuous in u,(t), we can
take the limit as n — oo on both sides of (6.66), allowing Jim F(a tug) i=
F(z, t,u(t)), to conclude that limn+oo Un(z) = u(t), the solution to the nonlinear
Volterra equation (6.65).
What remains is to prove the result in (6.69). From (6.68) we have, for n = 2,

|us(z) — u2(z)| < LIAl ifnju2(t) — ur (t)|dt < LAlle—d||z—al (6.72)

after using the bounds set on u(t) : c < u(t) < d. In the same way we show that
324 Chapter6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS

lus(x) — ug(z)| < ual folug(t) — ua(t)|dt < Ld f°zalle~ alle= alt
a
| 2
< |LA)?|c - par
; 7 It —al?
Css) et) < La f |ua(t) — us(t)|dt < ILAP|e~ al [ | 5 tat
a 3
a

< [LA|*|e - ayo


t= a

(6.73)
where a simple mathematical induction establishes (6.69).

6.2.3 Existence of the Solution for Nonlinear Differential Equations

Here we consider an initial value problem associated with a first-order nonlinear


differential equation in y(z) on the domain D(z, y) of a rectangle as indicated in
Figure 6.5.

—=f(z,y), |tc-—zol<a, ly—yol <b (6.74)


y(zo) = Yo (6.75)

Fig. 6.5 Domain D of (6.74).

We will assume that f(x,y) is continuous, and in anticipation of the use of the
fixed point theorem for the integral representation of this problem we assume that
f(x, y) is also Lipschitz in y(z),

[f(z, B(z)) — F(z, (2) < LIB(z) — y(z)| (6.76)


If we integrate (6.74) from zo to z, and involve the initial condition y(29) = yo, we
have the nonlinear (special) Volterra integral equation
6.2 FIXED POINT THEOREM OF BANACH 325

uie)=w+ |" f(tyy(t))at. (6.77)


In light of the foregoing development, this represents a very special case of (6.65) with
F(a,t,u(t)) = f(t,u(t)). To keep the outputs un+1(x) bounded, we also assume
that f is bounded by k [i-e., |f(¢, u(t))| < kJ. Thus the integral representation (6.77)
would have a unique solution, which is obtained via the successive approximation of
this representation,

insi(t) = v0 + f” f(t yn(t))at. (6.78)


Hence we observe the clear advantage of using this iterative process (6.78) once
the initial value problem associated with the differential equations is changed to its
equivalent Volterra integral equation representation.
wey | : apie
cn and<6
geal) © »y* a 7 =
ib} - int
De newt o 'gtep«d acer. <2
wma
ssa eds ~~ ie mas WT) 28 Regie Lal’) cel =
a ; = amy 4g
sg
aianeswet Ie
~
> ithe ave
tilakindenpgietis eth ie eins la on
de
ee oi . ‘Raion.
iS. = Cer
ba
By} ba eq ise’

7. oe ra a
Smal 8 Ss ee ah osoa creer
s
ny set)—
-
w eooakt,
ir i Yeh « wei iS wb me Grate elton idle ine
ant of
6| Cs ayy sp ee O. eau m Grea iertanseesee yale os Ye —_

ee
(2.

ee 2 ee
- Pa,

| BE yar et i _ ~ @ dl =
hie) ace ea out igtes val a v4

- » Panay De aw eo
| mmgis th te

_
\ SSba
2 et lrg hae Grapes ; “ Si : ~ >

7 . i omy 7
— i
a
7 -

eo
:

: =
7 ss i 7
a Pe*o ally hae

_ :

a
- vif > —«

7
7
Cah °
tt &
,
.
osomG
ee
r--
tee
BSaei — >
_"
ee
GY @ -hammes
|
7
. -
,
7s
; —. a

=.
pe

~~
e& =a >

-
cat all
= =

: raetidiveto
= * @i)| @hiune ow : eu
ty Hal Sin te ves és(a faa bien =eaa
Sts. 9% iorLope at gel - :
/ : — i > =~
Seas % ye he ale -
0 @ptapes iis 1% 34am isa =i pu

Ov ef weal ar
v se is S eo Wi
Higher Quadrature Rules
for the Numerical Solutions

As we emphasized at the end of Section 1.5, besides the very basic numerical
integrations formulas of the trapezoidal rule (1.141) and Simpson’s rule (1.144),
there are many other numerical integration formulas (or numerical quadrature rules).
They are, of course, used for more accurate way of approximating the integral as
compared to their special cases (lower order quadrature rules) of the trapezoidal and
Simpson’s rules. These two rules correspond to using, respectively, first and second
degree polynomials, while the higher quadrature rules, to be discussed here, use high
degree polynomials.

7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES

To give a brief discussion of the quadrature rules we return to our first numerical
approximation of integrals using weights (or quadratures) D; in (1.140) with a minor
modification

N
[sede = Dulles) te= Sy + (7.1)

where € is the error of the approximation, and the summation here is over 7 = 1 to
Ly;
N
ox (e) (7.2)
a

327
328 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Here N denotes the number of (in general) not necessarily equidistant sample loca-
tions {a;}/_,. With this note, we should mention that there are numerical methods
that use the end points of the interval z = a and b as x, and xy, respectively, which
are termed closed rules. Those rules which avoid the end points and use xz} = a+ €
and xy = b— eg as first and last sample locations in the interior of the interval (a, b)
are called open rules. The latter rules are useful in case there are singularities at the
end points. Also since b — a is constant, we may write the weight D; = (b — a)wi,
where we list the values of the weights w; in Tables 7.1 to 7.6 (in this Section) for
the representative quadrature rules of interest in this presentation.
In (7.2) we observe that the approximation sum Sj has two variables; namely,
the locations {x;} and the weights D;. As we mentioned earlier, there are basically
two groups of numerical integration methods, where the main difference depends on
their use of these two variables. For lack of space, and instead of just presenting the
formulas without derivation, we shall be satisfied with sketching the outline of the
essential steps of such derivation. The details are left to the exercises with ample
guiding hints, and can be found in the already cited references in Section 1.5. These
two groups of methods start by expressing the function to be integrated f(a) in terms
of P known functions (basis) h;(x), 7 = 1,2,---,P,
a
f(z) = YE,ajh;(2). (7.3)
This is to be substituted in the integral of (7.1), and the criterion here is to have the error
€ [as in (7.1)] of the approximation vanish for all coefficients a;, 2 = 1,2,---,P.
After evaluating the P integrations involved of (Rehja@)dz, t= 1.2. es this
amounts to equating coefficients for each a; in the resulting (7.1) with the condition
that the error € = 0. The result is a system of P linear equations in the 2N unknowns
of the locations {a;}§, and the weight {D;}‘_,. The two different groups of
numerical quadrature rules differ basically in their dealing with such 2N variables.
We shall discuss the (closed) Newton-Cotes rule, and the (open) Gauss (or Gauss-
Legendre) rule as representatives of the two groups.

A. The Newton-Cotes (NC) (closed)! Quadrature Type Rules


The first group of quadrature rules fixes the locations by eu roU si equidistant
samples at x; = a+ (¢ — 1)h,i = 1,2,---,N, where h = at“» and takes P =
N. This results in a convenient N BY N square system of linear equations in the
N unknown weights D;, 1 = 1,2,---,N of (7.2) to be determined. So, if the
determinant of the coefficients matrix ofthese D; is not zero, and if all the above
computed integrals i h;(a)da did exist, then there is a unique solution for the
weights D;.
As a representative of this group, the Newton-Cotes (NC) rule uses a simple
monomial h;(x) = x*~' for the basis in the expansion (7.3) of f(x). This results in

'Newton-Cotes rule of the open type are found in Table 7.1(b), while a list of most of the present closed
type rules are found in Table 7.1(a).
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 329

simplifying the above needed integrations, and gives the N weights D; for what is
termed the N — point rule of degree N — 1. These weights are listed in Table 7.1(a)
for approximating the integral ihef(@)dx,

b IN N
‘ifade i, F(a)de = hw f(2s) (7.4)
for the cases N = 2,3,4 and 5. The weights are tabulated as w,; for D; = hui,
where h = an
For this table, it is important to note that if we write this rule for approximating
the integral jectedx)dz as

$x(0.1) =oSe aed


a i-1
(G4) (7.5)
then for theintegral ip f (x)dz, the sample locations are scaled by b—a and translated
by a as x; = a+ (i — 1)h, and that the weights are also scaled by (b — a) as
C—O <a

Sy(a,at(N -1)h) = S_ hw; f(a + (i - 1h). (7.6)


4=1

For illustration we will write the first three cases explicitly, for completeness we will
also give the error estimate of such approximations. For N = 2, we have 2; = a,
22 = b, and

[Hear = Sse) + Hed BIO, m<é<m (77)


h3

where h = a “, which we recognize as a two-point trapezoidal rule. The trape-


zoidal rule for the N points is called the extended, composite, or repeated trapezoidal
rule, which is clearly based on this two-point rule, where its first degree polynomial
(straight line) is used for each two adjacent points to result in (1.141). Attention
should be paid to the difference between the error bound given in (1.143) for the
(extended) trapezoidal rule (1.141), and the error estimate given in (7.7) for the
two-point (basic trapezoidal) rule.
a+b
[FoyeIN = 3) WHEN, P= Oh ip = 5 88 = b, and

2For the rest of the related quadrature formulas, and the more detailed tables with high accuracy, see
Abramowitz and Stegun [1965, pp. 885-890 and pp. 916-924, respectively].
330 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Table 7.1 Newton-Cotes Rules

(a) Newton-Cotes Rules of the Closed Type

Trapezoidal Rule

[see =ha+m-Fr'eo
Me — 2 1 2 12 )

Lis § < 22,


i= beg 4 bil
Extended (composite) Trapezoidal Rule
a N —1)h3
/ fila \dan eh 2 + fot::-+fn-it+ = ~ Se ee

rms iene Bike nee POS


N-1
Simpson’s Rule
ie i : a) h ' h® (4)
‘ a)dz = gift +4 fockfs| = 907 (f),
fhe — Ab
TS hee Geli p= .

Extended (composite) Simpson’s Rule

[OO sede = Fl + Ata + fa + fon)


nh®
+2(fs + fs+-::+ fon—1) + fonsi] — paula n
T2n+1 — V1
h = ———_—
2n
(Simpson’s 3 Rule)
v4 3h 3 f(D (e)A5
ih f(z)dz = gp it + 3fo+3f3+ fa) — one

(n-point Newton-Cotes Rules, n = 5, 6,7, 8, 9)

if f(a)dz = (Th TEED SUBD TE ERYRIVE SUa lee Sf (E)h7


1
945

(b) Newton-Cotes Rules of the Open Type

eS (2) (¢) 73
f(x)dx = Sh + fa) + —
. (4)
f(z)dx = (2h — fg +2f4) + Bane
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 331

[ floiae = Ft (er) + 49 (02) + Fe)


1

San re ae ee
(7.8)
where we use the notation f(")(r) = ot. This is the three-point and degree 2
rule, which is the basic Simpson’s rule for three points. Again this represents the
backbone for the repeated (or extended) Simpson’s rule (1.144) with N points, where
its derivation uses the first three samples f(zo), f(z1), and f (a2) and fits them to a
polynomial of degree 2 (parabola), which in other words uses the above basic three-
point rule (7.8) to approximate the integral on (xo, x2). This process is repeated for
the three samples f (x2), f(x3), and f(x4) using the same rule in (7.8) and so on -- -
to result in (1.144), where n is even, N = n+1. For the basic Simpson’s rule (7.8) of
three-point (and degree m = 2), if it is repeated M times, the total number of points
is N=mM+1=2M+1=n +1, and it is termed Simpson’s of M panels or
the familiar composite (or repeated) Simpson’s rule (1.144). The same is said about
the composite trapezoidal rule (1.141) as the basic two-point Newton-Cotes rule of
degree | with M panels, N = M+1=2n+1. After we present the other higher
degree Newton-Cotes rules next, we will see that they can be extended in the same
way as high degree NC rules with repeated M/ panels.
For N = 4, we have the four-point, third degree (m = 3) (closed) rule,

[Po sloide = fas) + 302) + 3f cs) + Sle)


3h?fA(6) “a <aer ey (7.9)
80
which is termed Simpson’s 3/8-rule.
In addition to the tabulation of the rest of these (closed) Newton-Cotes rules
in Table 7.1(a) for N up to 5, we refer the reader to the most basic reference of
Abramowitz and Stegun (1965, pp. 885-924) for the rest of the above explicit high
quadrature rules with N up to 11, along with estimates for their errors. This important
reference of a “Handbook of Mathematical Functions with Formulas, Graphs and
Tables" is a very valuable reference, which, for our purpose here, has all the different
quadrature rules, along with their very accurate weights w; and locations 2; for
reasonable values of N, and very high N for the Gauss-Legendre quadrature rule.

The Newton-Cotes Repeated Rules: NC(m, M) of Panel M and Degree m

As was discussed above, the familiar Simpson’s rule (1.144) is a repeated three-
point, degree m = 2 Newton-Cotes rule (7.8) with size panel M, where N =
mM +1=2M+4+1=n+1. The size of the panel is obtained from b — a = Mh,
where h is the size of the subinterval used repeatedly with the three-point rule
(7.8). The same repeated process can be done for the higher order Newton-Cotes
332 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

method with panel size M, and are called the repeated Newton-Cotes (here closed)
rules. For an indication of the importance of such extension, one needs only to
look at the basic three-point Simpson’s rule (7.8), and how inefficient it would be
for approximating integrals on [a,b] with its mere three samples of the integrand.
The same unsatisfactory approximation property is observed about the high degree
Newton-Cotes N-point rules, even when the integrated function is well behaved.
The derivation of the repeated Newton-Cotes (closed) rules with M panels parallels
exactly what we described, and is well known in calculus texts, for Simpson’s
(composite or repeated) rule (1.144) and the (composite or repeated) trapezoidal rule
(1.141). Here we present the final result

i f(x)dax ory a+(k—1)h+(i—1)h) (7.10)


k=1ee—ai :

and remind of the allowed translation and scaling that we discussed in (7.6) versus
(7.5) for each subinterval of integration in the above summation, and where the weight
for the rule, according to (7.6), is hw.

Example 1 Newton-Cotes Rule

(a) Asan illustration we use the integral ifsnot whose exact value is pec
~ 0.785398.
We first use the three-point (second degree) Newton-Cotes rule [the nonre-
peated Simpson’s rule (7.8)] with N = 3, h = 45% = } to have

[ 1+ = 2? he Lene = ;L700)+47 |) +10)


1 4 eta a7 (E.1)
6 |f 5)i | 610 ) 60
= 4|— -| = -—_—- =

= 0.78333
(b) Now we use the HOMO (degree 3) Newton-Cotes (or the2
3 Simpson’s) rule
of (7.9) with h = =5 to have

S4(0,1) =3 (3) 700)+37 ; +3t(F) +70]

which shows a very good improvement over the above (nonrepeated) Simpson’s
rule in (E.1). We will leave the details to an Exercise for comparing these results
with the results of the other more accurate rules such as the 2-panels Simpson’s
rule, the six-point, degree 5 Newton-Cotes rule, and the repeated six-point,
degree 5 Newton-Cotes rule with M = 2 panels. In Example 2(a) we will use
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 333

a four-point Gauss-Legendre rule with its better approximation compared to


the present result of the four-point Newton-Cotes rule.

The Maclaurin Rule


There is another (hybrid!) method, which sets equidistant samples locations 2;,
but varies the weight w; in order to minimize the error € of Sy in (7.1), namely, the
Maclaurin Method. We list its fixed uniform locations x; and variable weights w; in
Table 7.2. This method, as seen from its locations in Table 7.2? (with c = 1) starting
i al
at £; = —% fora four-point formula on the interval (— > 3) is of the open type, i.e.,
it does not involve the end points of the interval as samples points. It is also known
to be good for Volterra integral equations, which will be illustrated in Example 6 of
Section 7.2 for its use in the numerical solution of Volterra integral equations of the
first kind.

Table 7.2 Locations x; and Weights w; for the Maclaurin Rule (equidistant samples)

< N

f(a)dax = aL wif (xi)


<< 7
2 Gaal

where +2; are the samples’ locations and w; are the weight factors?

Ea, Wi ai Wi

= N =15
1/2 1/2 0 402/1152
Nees 2/10 100/1152
0 2/8 4/10 DiI TAS?
1/3 3/8 Net
N=4 1/12 254/1280
1/8 11/48 3/12 139/1280
3/8 13/48 5/12 247/1280

Integrals with Infinite Limits (associated with singular equations)


At this point we may inquire about integrals with infinite limits, for example,
Ae a , whose exact value is $ © 1.5708. Here we must truncate the upper limit
; : is
of integration to L that results in an approximate integral fe a , which happens
to have the exact value of tan~! L to compare our numerical approximation with.
Since the function aes is slowly varying, we must take large enough value of L,

3From Kondo [1991, p. 148], courtesy of Oxford University Press (Clarendon Press).
334 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

for example L = 200. We will illustrate neat the use of a high degree four-point
Newton-Cotes rule (7.9), where
N = 4,h = aan = ae and

f° hyenas(2) ($8) +7
“— [1 + 6.75 x 10-4 + 1.69 x 107* + 2.50 x 107°]
= 25,0217,
which is a bad approximation when compared with the exact value tan 200.—
1.56580. This shows how inefficient such methods may be, and thus the need for
more efficient rules like the following Gauss quadrature rules of the next Saeco
One may think that the Newton-Cotes rules can do well for the pet pore =
sin 2xdz where the function e~* sin 2” decays much faster than Ga of the above
example, but still with taking a limit L = 200, and using the four-point Newton-Cotes
rule (7.9), we have

ie os ee (=) [700ye Sie (+) oF (=) z #(200)


Saas
——
—— [3.29
x 10779 + 1.343 x 107°8 — 1.178x 10787]
= 18.22. al0n-
which is not good when compared to the exact value 2 of the infinite integral. As
we shall see in the illustrations [Example 2(b)] of the next Gauss-Legendre rule,
we will have a much more accurate result for the integral ie e-* sin 2xdzx with
only L = 10, and an eight-point ES ee rule. However, for the first integral
ee yds ofthe slowly varying f(x) = ioe , the eight-point Gauss-Legendre rule
with L = 100 will not be anywhere close in accuracy!. Last we will try in Example
3 the Gauss-Laguerre quadrature rule, which is (natural) for integration over the
infinite interval (0,00), and compare its (better) results with the above methods for
the same two integrals.

B. Gauss-Legendre and Other Quadrature Rules


We present here the Gauss (or Gauss-Legendre) rule as a representative of the other
principal group of quadrature rules. This, as was mentioned earlier, is characterized
by nonequidistant locations of the samples x; as well as variable weights D; in
(7.1). Thus, for the N-point rule of (7.2), and after substituting the expansion (7.3)
for f(z) in (7.1), as we did at the beginning of this section, we will end up with a
P system of linear equations in the 2N variables of the locations x; and the weights
D;. In this case we set P = 2N, but the assurance of a unique solution for the N
weights is not so obvious! The analysis here needs some familiarity with the topic
of orthonormal polynomials, which we have already discussed and used in Chapter
5 (see Section 5.2.1 for the orthonormal eigenfunctions and Section 4.1.3 and its
related exercise 23 for the Legendre polynomials). For now we will supply, in a
7.1. HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 335

simple manner, a few very basic elements of this topic to allow us a general sketch
of what is behind the Gauss quadrature rules. First, this second group of quadrature
rules uses orthonormal polynomials qn(x) of degree n,n = 1,2,---,.N — 1 instead
of the simple monomial x*~! of the first group of Newton-Cotes rules. Also, the
locations of the samples {z;} will be the zeros of the polynomial of the highest degree
(of such polynomials) gy (zx), i.e., gn (2) = 0,2 = 1,2,3,---,N.
The special property of the polynomials is that they are orthonormal on the interval
(a, b) of the integration considered, which means that

(2)an(e)de = { (7.11)
b

| PCe)am
where p(x) is called a weight function, p(x) > 0.
For the present case of the Gauss-Legendre polynomials, gn(x) =(74+*) 2 P(x),
n = 1,2,3,---,N —1 are used on the interval [—1, 1], where P,, (x) is the Legendre
polynomial of degree n. To give a few examples of the Legendre polynomials P,, (2),

Pa ray Sn aS 5(32 Go
1 1
P3(x) = 5 (5a" =32)) P,(z) = g (35a" — 30x” +3).
More Legendre polynomials can be generated via the Rodrigues formula,

P,,(z) i aia? 1)" ie, eas (7.12)


— 29! dx”
It is easy to show that for p(x) = 1 in (7.11), the integral vanishes for qi(x) =
(241)3 P, (2) ee (30. q2(x) = (441)? Pa(a) =1/5(32 — 5) where forn = 17
m = 2, we have .

ve f ($2? -1) dr =0
DP afte; 2
since the integrand is an odd function over the symmetric interval [—1, 1]. However,
when n = m = 1 for qi (rz) = 30. the integral in (7.11) gives the value of 1,,
1
oy ake oae
dx = a c dz 53
=-—_ — 1.

=i
-—1

Also, it is easy to show that P(x) has two real zeros of Fa in the interval [—1, 1],
3 1 sles!
when we look at the equation P2(x) = (52° — —~) = 0. The generalization of this
result is that P(x) has k real distinct zeros in the interval [—1, 1]. As seen in Table
7.3 these zeros are symmetric around the origin, so only half of them on (0, 1) are
tabulated with + signs. For example with N = 2 we are using P2(x) with its two
1
zeros on (—1, 1) as = = —0.57735 and WE = 0.57735.
336 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

The derivation of the Gauss-Legendre rule

(7.13)
1 N

[ fae = YSwisleo
takes advantage of the orthonormality (7.11) of the polynomials on [—1, 1], and the
special locations {z;}/¥_, as the zeros of Py(z), i-e., Py(a;) = 0,7 = 1,2,3,---,N
to determine the N weights D; of (7.2) from the 2N equations. The details may
take us more away from our main line of sketching the idea, but the net result is that
we end up with an N-point rule of degree 2N — 1. We must note that since these
(orthonormal polynomials) rules result in an approximation of degree 2NV — 1, then
they clearly will give an exact approximation to the integral of any function which
happens to be a polynomial of degree < 2N — 1. The weights w,, and the locations
(zeros of Py (x)) are listed in the following Table 7.3, for N up to 8. For higher
values of N, and extremely accurate value of w; and x; for this Gauss-Legendre rule
as well as other orthonormal polynomials rules, see Abramowitz and Stegun [1965,
pp. 916-924].

Table 7.3 Locations x; and Weights w; for the Gauss-Legendre Rule

1 N

/Sade = Y wif a)
+2; (zeros of Legendre polynomials Px (x)), w; weight factors
Seri Wi =, Wi

N12 E10
0.577350 1.000000 0.238619 0.467914
0.661209 0.360762
Ns='3 0.932469 Ot 1325
0.000000 0.888889
0.774597 0.555556 Ni
0.000000 0.417959
N=4 0.405845 0.381830
0.339981 0.652145 0.741531 0.279705
0.861136 0.347855 0.949108 0.129485
Nao N=8
0.000000 0.568888 0.183435 0.362684
0.538469 0.478629 0.525532 0.313707
0.906180 0.236927 0.796666 0.222381
0.960290 0.101229
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 337

From Table 7.3 we see that for N = 2 we neve cate weights w; = we = 1, and
we have already found the two zeros 7; = or = —0.577350 and rg = 72 ~
0.577350 of P2 (x)as the two (symmetric) locations in the interval [—1, 1]. Hence, we
have two-point Gauss-Legendre rule of degree 2N — 1 = 3, i.e., we use a Legendre
polynomial of degree 3 to have the Gauss-Legendre approximation for S2 of (7.2),

sai(-) (4)
To give a simple example, we consider the integral [oe e* dz with its exact value
e— A = 2.350402. The two-point Gauss rule of degree 3 gives the following
approximate value Sy = e V3 +ev% = 0.561384 + 1.781312 = 2.342696, while
the two-point Newton-Cotes rule of degree | in (7.7) gives Sp = t + e =0.367879 +
2.718282 = 3.086161, which is a bad approximation compared to the Gauss rule
as it is evident from comparing these two results with the exact value of 2.350402.
Although, this is a very simple example, it demonstrates what is well known about
the power of the Gauss type quadrature rules. This is especially true in the treatment
of the numerical solution of Fredholm integral equations, where we end up with an
N x N system of linear equations, and for large N, the cost for such numerical
computations becomes prohibitive! This is unless we have efficient methods like the
Gauss quadrature rules. In contrast, the Newton-Cotes type N-point rules of degree
N — 1 are known to be inefficient in this situation, and the accuracy is in much doubt
for large N. However, they are adequate for the more simple triangular system of
the resulting linear equations in the case of Volterra integral equations.
The setting up of the numerical approximation of Fredholm integral equations
parallels that which we have illustrated already with the help of the simple (repeated)
trapezoidal rule in (5.124) and (5.125) of Section 5.5.1, except for looking up the
sample locations and weights from Tables 7.1-7.6, which we will return to after
presenting another illustration, and few more useful and efficient quadrature rules.

Example 2 Gauss-Legendre Rule


(a) In this example we return to the integral fe i737de of Example 1, and use
a four-point Gauss-Legendre rule to be compared with the result of the four-point
Newton-Cotes rule of Example 1. For lack of space we shall present only the final
results for the comparison.
We first note that the Gauss-Legendre rule, with its above Table 7.3 of locations and
weights, is done for the integral fe f (x)dzx on the symmetric interval (—1,1). So
for the above integral on (0,1), we use a scaling and translation via the transformation
y = 2% — 1, dy = 2dz,

1 1 1 i
SS ih =
‘f ie lie ae ave?
if! =5 | so
~) 5[0.662145 f(- 0. Sema + 0.347855f (—0.861136)
338 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

1
+0.652145
f(0.339981) + 0.347855
f (0.861136)] = 3 [0-588098
+0.346186 + 0.450101 + 0.186422] = 0.785403.

This is a much better approximation to the exact value of 7 ~ 0.785398 than that of
0.78461, obtained with the use of the four-point Newton-Cotes rule that was done at
the end of Example 1.
(b) Next we return to our two examples of integration over the infinite interval
(0, co), namely, {he e* sin 2xdz (with its exact value of 0.4) and that of the slowly
varying function {)~ ;+.rdz (with its exact value of 7 ~ 1.5708). We will truncate
the infinite limit of integration in the first integral to L = 10, and use an eight-
point Gauss-Legendre rule which results in a much better value of 0.40041 than that
of 8.22 x 10-28 when using L = 200 with a four-point Newton-Cotes rule in the
computations for integrals with infinite limits (following Example 1). The exact value
of the infinite integral is 0.4. In the second integral, with its slowly varying function
jG = as , this eight-point Gauss rule with L = 100 gives a good approximation
of 1.17915 (compared to the exact value of 1.5608 of tan—! 100 of the truncated
integral on (0,100)) which is far better than the very bad approximation of 25.0217
of the four-point Newton-Cotes rule with L = 200, that we did following Example
bi
For the Gauss-Legendre rule of approximating the truncated integral

10
| e "sin 2zxdz,
0

we must use the transformation y = Fx — | in order to have an integral over the


symmetric integral —1 < y < 1,

10 1
/ e "sin 2adz = 5 | e + sin(10(y + 1))dy.
0 -1
We must mention here that the above are illustrations to show some indication for
the approximation of the two groups of numerical integration rules, namely, the
Newton-Cotes rules versus the Gauss type rules. These illustrations are by no means
exhaustive, since much analysis must be done regarding a suitable truncation limit L
for the infinite integral. For example, with the choice L = 25 for the above integral,
the eight-point Gauss-Legendre rule gave a much better result of 1.578364 to the
exact result of tan! 25 = 1.5308176 of the truncated integral fees es. This may
be explained in terms of investigating the eight points in the region0 < x < 25
where the integrated function ee counts the most instead of spreading those eight
points in the region 0 < x < 100, where beyond x = 25, the function changes little
from zero.
In Example 3, we will use the Gauss-Laguerre (polynomial) quadrature rule,
which is designed for integrals on the semi-infinite interval (0, 00), to show better
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 339

results with L = 10 for the first integral ik e * sin 2xdz and L = 25 for the second
; 25 ; : ;
integral ifs reas Ws when an eight-point Gauss-Laguerre rule is used. These results
will be compared with those of another efficient rule, also designed for integrals with
infinite limits of slowly varying functions, namely, the Gauss-rational rule.

Gauss-Tchebychev and Tchebychev Quadrature Rules


For other known orthonormal polynomials used for the Gauss quadrature rules,
we list their locations and weights in Tables 7.4—7.6. To illustrate the power of using
such orthogonal polynomials we will first discuss the Gauss-Tchebychev N-point
rule of degree 2N — 1 for approximating the integral fe aa (x)dx where we
use the orthonormal Tchebychev polynomials of the first kind,

lala), (Wil
IN |
Qn(x) = B (7.144)
270, 0)

on the interval (-1,1) with respect to the weight function p(x) = ee [see(7-11)].
The very special property that makes such polynomials extremely useful, is that they
can be related, after a simple change of variable r = cos@ for T,(x), to a cosine
function,

1
Taye aa cosn(arccosz), n#0 (7.14b)
ilo) ‘=

This enables us to arrive at the zeros of T(x) in the N-point rule from the following
very simple formula,

Ty (a7) =0=cosiN (arccosiz,),


N(arccosz;) = (2i— 1) 5.
(7.15)
21-1 :
Li = cos (A), ee he 2k IV

The above change of variable x = cos@ is the reason for the weight p(x) =
echt = =, where the integral [,° f(@)d0 becomes ie Taf (cos x)dx
1—cos
since d9 = d(cos~! x) = ++
V1—«2
-sdz. The Tchebychev
: ;
polynomials also have the
simple property of constant weights w; for their N-point Gauss quadrature rule,

1 : ew (ies
1
‘ 2)dx & woud (Sa). (7.16)
la rer

where the above-mentioned change of variable in g(x) = f(cos~’ x) is very apparent


in the integral to be approximated by such a simple rule. Of course, if g(x) =
f(cos~! x) is a polynomial of degree < 2N — 1, then the approximation (7.16),
340 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

as it is for Gauss quadrature rules in general, is exact. For example, the integral
is ’ wae z2dz is approximated exactly by the N =two-point Gauss-Tchebychev
rule (7.16) of degree 2N — 1 = 3, since g(x)= x” is of degree 2 < 2N —1 = 3. The
exact yas of this integral can be obtained by simple trigonometric substitution for its
value as $. The above Tchebychev rule with N = 2 gives the same Valu Since with
m1=- cs Ja t2 = C08 = Yq we have Sp = 3[f(— wa) + Gy 5)] =
Acree ; l== §. We may note that for the rule (7.16) with vfequal an
ay
we eel noie for the locations x;,since they can be obtained very easily from
(WAgley's
The above discussion makes use of the important relationship (7.14b) between
the Tchebychev polynomial and the trigonometric cosine function. The result is a
constant weight quadrature formula (7.16), but it is for a weighted integral,
&

[or f(cos-! x)dz = ibsT=


zlye

i
of g(x) on (—1,1) with weight function w(x) = —=——. There is, however, a
VP 1

simpler Tchebychev quadrature rule for the integral i f(x)dzx (without weight),
=
D
and with equal weights W in the sum,

[tear = 5 DH) (7.17)


1 N

However, the locations of the samples x; are different from those used in (7.16) [as
obtained from the simple formula in (7.15)]. The locations x; for (7.17) are listed in
Table 7.4, and they are, more difficult to obtain, as zeros of the polynomial part of*

G(x) = 2% exp Gx ee peat 1

Note that the x; here in the Tchebychev rule (E.1) are different from those for the
Gauss-Tchebychev rule.
We may also note that both sums, of the Gauss-Tchebychev rule (7.16) and
the Tchebychev rule (7.17), are with equal weights (w; = W and Z, respectively),
compared to the rest of the Gauss quadrature rules used here, such as that of Legendre
in (7.13) and what will follow as the Laguerre and the Hermite quadrature rules in
(7.18) and (7.26), respectively.
We may remind again that in Tables 7.2, 7.3, and 7.4 for the Maclaurin rule, the
Gauss-Legendre rule, and the Tchebychev rule (with equal weights), respectively, the

4 Abramowitz and Stegun [1965, p. 887]


7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 341

Table 7.4 Locations 2; for the Tchebychev Rule with Equal Weights (7.17)

1 2 N
[ f(a)da x = > f (zi) (E.1)

+2; (zeros of Tchebychev polynomials Ty ())


N Ey, N EEa

20577350" ©) 0.832497
0.374541
3 0.707107 0.000000
0.000000
6 0.866247
4 0.794654 0.422519
0.187592 0.266635

locations of the samples x; (and the weights for Tables 7.2 and 7.3) are given for the
a

integral f(x)dz on the symmetric interval (—a, a). Hence the integral at hand
—a

i f(a)dz must be adjusted with a change of variable to define it on (—a, a) as we


did for Example 2 where the integral was defined on (0,1). Sometimes the data of
the tables is adjusted accordingly to suit the limits of the given integral, as we shall
do in preparing for (7.41) and in (E.5) of Example 7.

Gauss-Laguerre Quadrature Rule


Other very useful polynomials are the Laguerre polynomials L;(x) which are
orthonormal on the semi-infinite interval (0,00) with respect to the weight p(x) =
e-* in (7.11). The locations x;, which are the zeros of the Laguerre polynomial
Lyn(z), and the weights w, of this N-point Gauss rule are listed in Table 7.5 for the
approximation of the following integral,
aS N
i e*f(a)dz~ wif (zi) (7.18)
0 i=l
or its equivalent
es N
[sade = Yowet ale), o(2) =e *F(e) (7.19)
0 i=1
In Table 7.5, the locations x;, weights w; for (7.18) as well as weights we”: for
(7.19) are tabulated for N up to 8, the rest are found in Abramowitz and Stegun
[1965, p. 923].
342 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

The negative numbers in parenthesis (—7) to the left of the numerical values of w;
in Table 7.5 are the negative exponent of the factor 10~” used for the indicated small
numbers. For integrals of the form fie e 6 f (x)dx, we can simply make the change
of variable
€=G(x — a) where this integral becomes 5e~°* Vf mens (s ~ a)dé,
which then can be approximated by (7.18)

pee
ile f(x)dx oF I ts (Le
B [ enuf. rs d€

oh
e fa oe ag
NOUS
e Ba
ree RAGS)
a
(7.20)

Pym (Gre)
where, as seen in the middle sum for h(€;), the locations €; are the zeros of the
Laguerre polynomial Ljy(x). This generalized version (7.20) of the Gauss-Laguerre
rule (7.19) is called the shifted Gauss-Laguerre rule.
A good illustration of this efficient rule for integration over the semi-infinite
interval (0, co) is to return to our examples of fee e * sin 2xdz and ifs Tero
which we have already approximated in Example 2, using an eight-point Gauss-
Legendre rule with truncation limit L = 10 for the first integral and L = 25 (and
100) for the second integral, and also right after Example 1, where Newton-Cotes
rule was used with even larger L = 200 (see also Exercise 2).

Table 7.5 Locations x; and Weights w; for the Gauss-Laguerre Rule

[ve
ef (a)dae Ywste
Xi) (£.1)

Se N
[ g(o)de = >wie™ gle) (B.2)
x; (zeros of Laguerre polynomials Dj (x)), w; weight factors

Li Wi wie”?

ING—22
0.585786 (-1)8.535534 1.533326
3.414214 (-1)1.464466 4.450957
N=3
0.415775 (-1)7.110930 ~=—-1.077693
2.294280 (-1)2.785177 =—-2.762143
6.289945 = (-2)1.038926 5.601095
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 343

Table 7.5 continued Locations x; and Weights w; for the Gauss-Laguerre Rule

Zi Wi wie!

N=4
0.322548 (-1)6.031541 0.832739
1.745761 (-1)3.574187 2.048102
4.536620 (-2)3.888790 3.631146
9.395071 (-4)5.392947 6.487145
N=5
0.263560 (-1)5.217556 0.679094
1.413403 (-1)3.986668 1.638488
3.596426 (-2)7.594245 2.769443
7.085810 (-3)3.611759 4.315657
12.640801 (-5)2.336997 7.219184
N=6
0.222847 (-1)4.589647 0.573536
1.188932 (-1)4.170008 1.369253
2.992736 (-1) 1.133734 2.260685
5.775144 (-2)1.039920 3.350525
9.837462 (-4)2.610172 4.886827
15.982874 (-7)8.985479 7.849016
N=7
0.193044 (-1)4.093190 0.496478
1.026665 (-1)4.218313 1.177643
2.567877 (-1)1.471263 1.918250
4.900353 (-2)2.063351 2.771850
8.182153 (-3)1.074010 3.841249
12.734180 (-5) 1.586546 5.380678
19.395728 (-8)3.170315 8.405432
N=8
0.170280 (-1)3.691886 0.437723
0.903702 (-1)4.187868 1.033869
2.251087 (-1)1.757950 1.669710
4.266700 (-2)3.334349 2.376925
7.045905 (-3)2.794536 3.208541
10.758516 (-5)9.076509 4.268576
15.740679 (-7)8.485748 5.818083
22.863 132 (-9)1.048001 8.906226

Example 3
Here we will use the four-point and eight-point Gauss-Laguerre rule (IV = 4 and
8 in Table 7.5) for both integrals with the limit of integration being truncated to about
10 and 23, respectively. For the integral ifse* f(x)dx with L = 10, we note from
Table 7.5 that the closest value to 10 is x4 = 9.395071 for the four-point Laguerre
rule. For N = 8, we can go as far as xg = 22.863132.
344 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

co

The four-point Gauss-Laguerre rule approximation of the first integral i Cau


0
sin2zdz is

~~ 0.603154 sin (2(0.322548)) + 0.357419 sin (2(1.745761))


+0.0388790 sin (2(4.536620) + 0.000539295 (2(9.395071))
— 0.362662 — 0.12253 + 0.013388 — 3.2 x 107° = 0.253486
which is not such a good approximation to the exact value 0.4 of the infinite integral.
However an eight-point Gauss-Laguerre rule gave a better result of 0.3872805 as we
shall illustrate in Example 4(b).
Now if we use an eight-point Gauss-Laguerre rule to approximate the integral
So” a2g2
dz, which, effectively, truncates it to L = xg = 22.863132, we find that
a

co

/ Soe, & 0.425389 + 0.569099 + 0.275194


0 1+ 2?
+ 0.123768 + 0.063354 + 0.036563 + 0.023387 + 0.017006
= IL sia

which is a good approximation to the exact result of tan! 22.863132 =1.527085.


It is also a better approximation than 1.578364 of the eight-point Gauss-Legendre
rule used in Example 2 for the truncated integral fe ws (see also Newton-Cotes
approximation right after Example 1).

The Gauss-Rational Rule


Our treatment for the two integrals with infinite limits was based on simply
truncating the infinite limit to a finite limit L, then using the Gauss-Legendre rule for
the interval (0, L) as was done in Example 2. The second method for accomplishing
such truncation of the integral was done through the use of the (finite sum) Laguerre
quadrature rule on the interval (71,7), where {x;} are the zeros of the Laguerre
polynomial Ly (x), as we have illustrated for the same two integrals in Example 3.
Another way around the infinite interval (a, co) of the integral,

I= jhef(x)dex (7.21)

is to use a change of variable (or mapping) x = vu(€). This reduces the limits of
integration to finite ones in the new variable €. For example, the following change of
variable

p= 0(6) =
2(a + (3)
ae (7.22)

(as a rational function of €) reduces the integral in (7.21) with xe(a, oo) to the
following integral with finite limits for €€(—1, 1),
7.1. HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 345

r= |" fede = -2ta+) [ —


= 2(a + B)yf oF
Le
ti (7.23)
i d

where F'(€)= f(v(€)), and the weight at in the above integral is the result of
dx= — At de after using x = “ee — B. So, a Gauss-Legendre rule can now
be used for approximating the integral in (7.23) on the symmetric interval (—1, 1),

ie f(x)de =2(a+) fe a
(7.24)
“\ wiF (&)
CEAOS reer
with F(€) = f (ae — ) , and w3;, &; are, respectively, the weights and locations
1+&:
for the Gauss-Legendre rule as given in Table 7.3. Of course, in the (very special)
case of the integrand Tote in (7.23) being a polynomial of degree < 2N — 1, the
Gauss-Legendre rule of approximating its integral in (7.24) would give the exact
value of the integral.
This method is called the ““Gauss-rational rule," which we will illustrate in the
next Example 4 for the two integrals of Examples 2 and 3, and then compare the
results for the different methods.
This rule reduces integrals with infinite limits of integration to those of finite
limits. So, for our purpose of this book, it will help us in reducing some singular
integral equations, those with infinite limits of integration, to non-singular integral
equations, i.e., with finite limits of integration.
We emphasize here the different numerical integration methods, since they are
essential for the accurate numerical setting of such singular integral equations. This
is so, especially when at the level of this book we are not covering the theory behind
and the analytic methods for solving such singular equations.
We may stress the point that while other methods deal first with truncating the
infinite limit of the integral, the present Gauss-rational rule ends up dealing with finite
limit integral as in (7.24). So, there is no surprise if it suceeeds in approximating
the integral ies Te ae, with its slowly varying integrand at which was a source
of trouble for the former methods (such as the Maclaurin method and Newton-Cotes
method that we discussed following Example 1) that must start with truncating the
infinite limit.
The success of the Gauss-Laguerre rule in approximating integrals of the form
Nee ® sin 2adz, is also understood because of the inherent decaying factor e~* in
the integrand, that acts, effectively, to truncate the infinite limit of integration.
346 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

There remains the role of the parameter ( in (7.22)-(7.24) for the Gauss-rational
rule. To compare this rule with the Gauss-Laguerre rule for the integral Ip ede,
we must consider this integral in the form of the integral in (7.20)

ee co 1 fore)

ee
eht

(7.25)
0 14+ 22 0 1+2

a 1X wet
=
i e °* f(x)dz& —
B+ —_—-

after using the shifted Gauss-Laguerre rule (7.20), and where €; are the zeros of the
Laguerre polynomial L(x), and w; are the weights of the Laguerre rule as given in
Table 7.5.
The observation that the Gauss-rational rule may benefit from different (Larger!
in this example) values of 3 than the Gauss-Laguerre rule, which uses smaller values
of 3, will appear in the illustration of the two methods in the following example.

Example 4
We consider again the same two integrals with infinite limits that we used in
Examples 2 and 3.
(a)
ile1 ag = * = 1,5707963 (E.1)
9 1+2? a Se x i

i en SiN 2raa = Be (E.2)


f 5
We will first use an eight-point Gauss-rational rule as in (7.22)-(7.24) for the integral
in (E.1) with @ = 10 and from (7.22) with a = 0 we have x = v(€) = 78 — Bto
use in (7.23),

au i Gulls)
where F'(€) = aaa If we use 8 = 10 and aneight-point Gauss-Legendre rule
€+1
on the second integral in (E.3), using Table 7.3, we have the following approximation,

ee |
i ae dx © 20[0.000263 + 0.000689 + 0.001347 + 0.002577
0
+0.005327 + 0.012629 + 0.030205 + 0.025304 = 1.56685
Next we use an eight-point Gauss-Laguerre rule as in (7.25) with (a small) G = 0.2
to have
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 347

= 5[0.25377 + 0.048273 + 0.013077 + 0.005211 + 0.002583


+0.001475 + 0.000939 + 0.000681] = 1.63005
which is not as good approximation, to the exact value 5 =1.5707963 of the integral,
as that of the above Gauss-rational rule that gave 1.56409. In this case it may be
verified that pushing N to larger values is not enough to catch up with the better
approximation of the Gauss-rational rule.
(b) For the second integral with its rapidly decaying integrand the numerical results
are a bit different from that of the first integral.
First we use the Gauss-rational rule as in (7.22)-(7.24) to have

a
/ e sin dade = 26 merit
aad (E..4)

where F'(€) = e7 eri +8 gin Zt aan — ). Then we use an eight-point Gauss-Legendre


rule on the second integral with the help of Table 7.3 to have

[o-e)

[ e-*sin2adz 20[0.0+ 1.62 x 10-9% +. 1.51 x 10-4


0
—1.8 x 10~’ + 0.000246 — 3.77 x 10~* + 0.017096
+0.008479] = 0.50888.
Next we use an eight-point (shifted) Gauss-Laguerre rule (7.20) in parallel to (7.25)
with G = 1, and consult Table 7.5 to have
a 8
/ e* sin 2adz = = w; sin(2€;)
0 tl

~ (0.123315 + 0.407119 — 0.17193 + 0.025939 + 0.002792


TA TAC 10 25155 <105°'+:1.03 * 10-2]. 0.387281,
which is more accurate approximation of the exact value 0.4 of the integral, than the
above Gauss-rational rule result of 0.51507.

Gauss-Hermite Quadrature Rule


For integrals x f (x)dz over the infinite interval (—0o, 00), the (orthog-
—OO
onal) Hermite polynomials H,,(x) are used, which are orthogonal on the interval
(—0oo, 00) with respect to the weight p(x) = e-* in (7.11), to give the N-point
Gauss — Hermite rule as
oO N
i ev f(a)dx = > wif (a) (7.26)
ao 21

where, of course, x; are the zeros of Hy (a), i.e., Hy (xi) = 0,7 = 1,2,---, N.
348 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

These samples locations x; and the weights w, are listed in Table 7.6 for the above
integral (7.26) and its equivalent

ore) N -
[oleae
=o] =
wie g(a) (7.27)
for N up to 4. The same note regarding the negative numbers in parenthesis (—7)
for Table 7.6 applies here, where it is for (the exponent of) the factor 10~” used for
representing small numbers w; to be employed in (7.26).

Table 7.6 Locations x; and Weights w; for Gauss-Hermite Rule

es N
/ e-® f(x)dz © wif (ai) . (E.1)

00 N ; ;
/ g(x)dx & s wie" g(x;) (£.1)
al

+z; (zeros of Hermite polynomials Hy (x)), w; weight factors

Set iA Wi wer

IN ss
0.707108 (-1)8.862269 1.461141
IN = 3
0.000000 (0)1.181636 1.181636
1.224745 (-1)2.954091 1.323931
Nao
0.524648 (-1)8.049141 1.059965
1.650680 (-2)8.131284 1.240226

Exercises 7.1

1. Consider the integral


as 1
if 100 + x? sia
which is very similar to the first integral used in Examples 3 and 4. In parallel
to what we did in Example 4 use an eight-point Gauss-Laguerre rule with
6 = 0.2, then an eight-point Gauss-rational rule with @ = 10 to approximate
the first integral in (E.1). Compare your answer with the exact value of
30 = 0.1570796.
EXERCISES 7.1 349

2. Do the same as in problem | for the following integral, except that 8 = 1 and
10 for the Gauss-Laguerre rule and the Gauss-rational rule, respectively.
[o-@)

/ e “sinazdz.
0

Compare your answer with the exact value of 0.5.

3. Use the Gauss-Laguerre rule to compute the exact value of the integral

e 16
f ead = —. (E.1)
1 €

[o.@)

Hint: In order to use the Gauss-Laguerre rule for the Cai (aor. we
0
make a change of variable y = x — 1 in the given integral of (E.1) to reduce it
to [5~ e Ye! (y + 1)3dy, then apply the rule of (7.18) to this last integral (you
can also use (7.20) on (E.1) with a = 1 as the shifted Gauss-Laguerre rule),

co 1 [o@)

/ e"s*dzr = -| e ¥%(y + 1)3dy


1 € 0
i
ru — > wily: + 1)?
-s

4. Consider the integral ie e~*a‘4dz and its exact value of 24.


Use a three-point Gauss-Laguerre rule to show that we obtain, aside from a
round-off error, etc., the exact value (since the integrated function f(x) = 2‘,
with respect to the weight function p(x) = e~* on (0, oo), is a polynomial of
degree 4 < 2N —1= 2(3) —1=5).

5. Consider the integral


oS 2
‘ e” x'dx
(6.9)

whose exact value is We,

(a) Use a two-point Gauss-Hermite rule to approximate the integral and com-
pare with the exact value. This approximation is not exact! Why?, see part
(b).
(b) Use a three-point Gauss-Hermite rule to obtain the exact value of Dis for
the integral, to show that this polynomial rule gives an exact value since the
integrated function f(x) = x‘, with respect to the weight function p(x) = er
on (—0o, 00), is a polynomial of degree 4 < 2(N) — 1 = 2(3) -1=5.
350 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS

As we mentioned earlier, the numerical setting of Volterra integral equations is


usually done with Newton-Cotes rules, or their repeated versions. The simplest is
the trapezoidal rule (1.141) that was used for the Volterra equation,

u(x) = +f K(x,t)u (3.42)


to result inthe N x N, N = n +1, triangular system of equations in (3.45) or (3.46)
(in Section 3.3). We may note that if we rush to suggest the repeated (high order)
Simpson’s rule we see that it cannot be started easily, since the second equation with
7 = 1, in (3.46) has only two samples uo and wu, instead of the three required for
Simpson’s rule (since n must be even). This suggests staying with the trapezoidal
rule for 2 = 1, but go to higher degree Newton-Cotes rules for few values of 2 > 2,
then follow the latter by a repeated Newton-Cotes rule. One such successful version
is to use the two-point Newton-Cotes rule, i.e., the trapezoidal rule (7.7) followed
by the three-point Newton-Cotes rule, i.e., Simpson’s rule (7.8) for 2 = 2, followed
by a 2 (or the four-point Newton-Cotes) rule (7.9) for 1 = 3, then return to the
repeated Simpson’s rule (1.144) fori > 4. For this combination, the approximation
for Volterra integral equation (3.1) with increment At becomes

uo= fo

t= fi +At 5Kou + 5K : Trapezoidal

2At ;
u2= fet Eee [K20uo + 4Ko1u1 + K22u2]: Simpson's
3At 3
ug = f3t os [K30U0 + 3K'31u1 + 3K32u2 + K33us] : aaa rule (7.28)

2At
U4 = fa af We [K40U0 ar 4K 4,u1 =P 2K 42uU2

+4K43u3 +K44us]: repeated Simpson’s.

While Simpson’s rule (1.144) and the 3-rule are high-degree efficient rules, this
method will still suffer from the inefficiency of the lower degree trapezoidal rule,
used in determining wu; above at the starting point, which is then used for the more
accurate rules in determining uz, u3, u4 and so on --- in the following equations
of (7.28). Such inaccurate value of the input u;, would, of course, ruin the good
accuracy of the high degree rules used for Wa U3, U4,**:. One way around this
difficulty is to use smaller At, say Me or =, from t = 0 to At to have a more
accurate value for the starting value of U1, then follow it by the other rules for uo,
ug, and u4, which is illustrated in the following example.
7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS 351

Example 5 We will return to Example 9 of Section 3.3, where the trapezoidal rule
was used,

aoe se / (eo puleds


0

u(x) = a+ ix — £)u(t)dt. (E.1)

We first try (7.28) with At = 5, and to improve the accuracy of the Eepsze dal rule
for determining wu, in (7.28), weuse a = ; fot = 0. todea Ab = 2°where
we have three points at t = 0, 4| and 3} with Rane values uo, u, and us to use
the trapezoidal rule on as a “mini" problem. The result wu, can now stand for a
more accurate value wu; att = 4 to be used as wu in the third equation of (7.28) for
determining uz via Sage s rule.
First we choose At = 5, A’t = 3, and Pe the trapezoidal rule in the second
equation
of (7.28) with f(0 Ne
=O fi = ve ) = 4, Kio = -—7, Kun = (§ — 5) = 0,
to find ui, = u(4),

Uo =fo= f(0)=0, w=0


1 1
U4 =fit+
+ Alt —Kjouo + =~Kyu}

a 1 17 1 ei ; eee!
a jiE (-3) (0) + 5(0)x Ay =F
nes

Then
we use up = 0, ui,= u(¢) = F and fo = f(§ 1;
fe 2 in a three-point trapezoidal
rule with A’t = } to find the more refined uy = u (5)>tO be used later for Simpson’s
rule,

i 1
Us = fo+A't =Kaouo + Kau + 5 Kean]

ih yal 1 1 1 il Lat 1 j

SeWe ley hegre OlaSiegia tale >) |


; < =

Now we return to our original (over all) problem with the larger t increment of
At = 3, Uo = 0, and the more fannes i Us) — t= 2* to be used for
Simpson’s rule in (7.28) with At = ==} to find u2 = u(1),

2At
tly ford ae [Koouo + 4Koiu1 + Ku] ,

1 1 31
ow

dat/. S18) tes 31_ 161


32 )° opm oD ae
352 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Next we use the 2-rule in (7.28) with At = aoe 2 et = + gus tte —


u(4) = 21, and uz = u(1) = 71% to find uz = u($),

= fs =Ee
ayer + 3K31u1 + 3K32 + K33us3],

U3 75
om
+75 [- Gv +9(5-3) (a) *8(%-3) (i)
(5-53
“te
2 U3|;

| 3 —
s 93 483 3 3
|-— - — |] = = — = (2.71094) = 1.5 — 0.5083
Ms 2 iie |64 al 2 6! )
U3 =U: 991

These numerical (approximate) results up = u(0),uy = u(Z) = 0.25, uw =


ul = u(d) = 0.4844, up = u(1) = 0.8385, and ne u(2)= 0.9917 are to
be ue! with the values of the exact solution u(x) = sina, where u(0) = 0,
u(¢)= ms =0.2474, uy = ui = u(s) = 0.4794, ue = u(1) = 0.8415, and
Us u(3 ) = 0.9975, as shown with Te corresponding errors in the following Table
ane

Table 7.7 Numerical and Exact Solutions of a Volterra Equation of the Second Kind (Ex-
ample 5)

a x; ‘t(num.) w; =sinaz;(exact) errore; = u; — u;

0 0.0 0.0000 0.0000 0.00000


1’ d 0.2500 0.2474 2.0>< 10a

tede 0.4844 0.4794 SxlOme


2 1 0.8385 0.8415 -3 x10-3
5) : 0.9917 0.9975 58x tO ae

In Example 9 of Section 3.3, we used only the typical (extended) trapezoidal rule
with At = 1, where the results are reported in Table 3.1 and Figure 3.1 of Section
3.3. A look at this data shows the better accuracy of the present computations.
For example, we now have uz = u(1) = 0.8385, while in Example 9, we have
ug = u(1) = 1.0, where the latter is much further away from the exact answer
of u(1) = sinl = 0.8415, ie., 16% error compared to 3.3% error of the present
computations.
For another illustration of this method, see Exercise 4 with its very detailed answer.

Volterra Equations of the First Kind


For Volterra integral equations of the first kind
7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS 353

ja) i K (a, t)u(t)dt (7.29)


a

we write its numerical setting, in parallel to (3.45), with N =n +1 as

jSt
Hes) aK Ge Duma, 3,---N, tp < ae:
ea
For this equation we will use the Maclaurin rule (see also Exercise 2).
From Table 7.2, we note that the Maclaurin rule on (—4,+)
oP) uses fixed odd
number of increments for approximating the integration on (— T 5). For example,
with N = 4, itusest; = —} + § = -3,t2 =-$ +2 =-},t3 =-$+2 =i,
angi7n— —$+ é = 3. So with Az = h= ; we must use t] = a +h,tz = a+ 3h,
t; = a+ 5h, ty = a+ Th, i.e., we use an odd number of increments h. But
as in (3.45), the (variable) upper limit for the integral of Volterra integral equation
requires t; < x;, while the Maclaurin method, as an open method, does not use the
end point a and the upper one of the considered interval. Thus to avoid the upper
limit of the integration we must have t; < x;. This means that we may take 7;
with even increments of h, x; = a + 2th, and (lower) odd increments of h for t;,
t; = a+ (2j-—1)h. Hence, we take r2 = a+ 2h, 24 = a + 4h, re = a+ 6h,
and zg = a+ 8h,--- for the variable x of the term f(x) on the left side of (7.29)
and also for the x of the kernel K(x, t) inside the integral of (7.29). In this way, we
have f(x;) = f(a+ 2th) = foi, u(t;) = u(a + (27 — 1)h) S uoj-1, and K (aj, t;)
=K(a + 2ih, a + (27 — 1)h) = Koiaj-1; i,j = 1,2,---,N. With this notation,
the Maclaurin setting of the Volterra integral equation of the first kind (7.29) (for
t= 182, -e-),) BECOMES

2hKiu1 = fe
il 1
Ah [5Kau + 5 Keaus| = fa
3 2 3
6h [pKa oF g Nests + 5 Kass = fe

11 11 13
8h Laken a5 7g 83us cL 7g Mass EE ken |= fe (7.30)

and so on. We note again that in this example with N = 5, the equations stop at the
sample ug before the one at the end point, namely wo.
354 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

The following example illustrates this use of the Maclaurin rule for solving Volterra
integral equations of the first kind (see also Exercise 2 and its detailed answers).

Example 6 Maclaurin Rule for Volterra Equation of the First Kind


Here we consider the Volterra integral equation of the first kind
zr

sine = [ e”—'u(t)dt
0
to illustrate the above Maclaurin method (rule) with N = 4. The numerical results
are compared with the exact solution u(x) = cosx — sin, which can be obtained
from the answer of Example 7 of Section 3.2 (with A = 1),which we arrived at easily
via the use of Laplace transform.
Here we take h = 0.05, and with N = 4 we have ro; = 2th, to;-1 =(27 — 1)h,
A$ =A, 2eOP4 S06, 25 = OA en 10 25 a67 = 003223, — 0 A(the end jpoint)..and
ty = 0.05, t3 = 0.15, ts = 0.25 and t7 = 0.35. Hence our unknowns u(t;) are
labeled as u; = u(ti) = u(0.05), ug = u(t3) = u(0.15), us = u(ts) = u(0.25),
and u7 = u(t7) = u(0.35).
If we substitute the values f(z2) = sinz2 = sinQ.1 and Ko; = K(a2,t1) =
e9-1—0-05 in the first equation of (7.30), we find u; = u(0.05),
0.1e9!-9
yu, =sin0.1 = 0.09983
= (0.1)(1.0513)u; = 0.09983, u, = 0.9496.
Next we substitute this value of u;, along with the values of f4 = f(a4) = sin 0.2,
Ka, = €9:?-9-, K43 = e9:?-°-15, in the second equation of (7.30) to find us =
u(0.15),

1 1
0.2 5002-99(0, 9496) + er7 Ou, =sin0.2 = 0.1987,
uz = 0.8393.
The same can be done for us = u(0.25) and u7 = u(0.35) in the third and the fourth
equations of (7.30), respectively, to have
3 2
0.3 5°-005(0, 9496) - 3° (0.8403) + seo ='sin 0.3,
us = 0.7197
13
0.5 [sper 2-9(0.9496) 11
+70" °° (0.8403) a ent 09(0.7197)

13
igo | =sin0.4, wu7 = 0.5960.
48
These approximate values of i; are compared in the following Table 7.8 with the
exact values of the solution u(z;) = cosx; — sinz; = uj, along with the error
€; = Uj; — u;. We used u; for the above approximate values to distinguish them from
the exact values u(x;) = uj.
7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS 355

Table 7.8 The Numerical Solution of a Volterra Equation of the First Kind—The Maclaurin
Method (Example 6)

a ibs 4u,;(num.) wu; = 1cos2z; —sin2z; (exact) errore; = &; — u;

Le 0.05 0.9496 0.9488 S104


ZO 0.8403 0.8393 10x 10-4
310.25 0.7197 0.7215 1S cl03-
A e035 0.5960 0.5965 5x 104

Comments Regarding Some Singular Volterra Integral Equations


As we did in Section 1.2 in the simple classification of integral equations, we
termed an integral equation singular because of the kernel being singular in the
domain of the integration, or that the integral involves an unbounded limit (or limits)
of oo or —oo. The earliest example of an integral equation, and the very familiar
example in most integral equations books, is that of the Abel’s problem (1.20) in
p(y),

—V/29f(y) = [ Sa (1.20)
where the kernel K(y,7) = rer is singular at the end point 7 = y.
Fortunately, this singular equation (1.20) is in the special form of Laplace trans-
form convolution product integral, where the Laplace transform can be used to solve
it, which was done with complete details in Example 8 in Section 3.2. At the level of
this introductory text, this example represents the only singular Volterra equation—
with its singularity due to its kernel—that we have covered and solved analytically.
Another example of a singular Volterra integral equation, due to its infinite (lower)
limit of integration, is that of the torsion of a wire (1.15) in the torsion w(t),

m(t)= hw(t +f o(t, T)w(7)dr, (1.15)

where the integral represents the dependence of the torsion on all torques applied to
the wire in time (—oo, t) besides the present torque m(t).
At the level of this book we are not covering singular integral equations. However,
we feel that a brief comment regarding the numerical approach to the class of singular
Volterra integral equations, such as that of (1.15) above, is in order.
We limit our very brief remark to the one type of singular Volterra integral equa-
tions that are characterized by having an infinite limit for its integration as in (1.15)
of the torsion of the wire,

+f f(t, T)w(r)dr (1.15)


356 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Here we consider its following general case


xz

te +f TG aiert as Gt, a OO ea nen (Zab)


(oe)

The solution u(x) is defined for ze(—oo, co), however the integration is done on
(—oo, x), which can be considered as due to the assumption that the kernel (a0)
vanishes for t > x, or

= TG ) p) Co <a on . 2

NE) an Di bs: (7-22)

According to this definition of K(x, t), if we write K(t, x),

esi —o <t<g
es {ISAS in KI EMCO) ey)
instead of K(x, t) in (7.31), the limits of integration become from t = x to t = on,
and (7.31) reduces to

ie/Or ayae (7.34)


Either of the forms (7.31) with kernel K(az,t) in (7.32) and (7.34) with kernel
K (t,x) in (7.33) is a singular Volterra integral equation of the second kind, and they
are singular because of the presence of an infinite limit for their integrals.
In the following we present simple transformations that reduce the above Volterra
integral equations with infinite limit for their integral, to ones with finite limits.
However, in the new resulting equations, the kernel now becomes singular.

Mapping onto a Finite Interval


For the singularity of the equation due to the infinite limit of negation for
voles (as well as Fredholm) equations, we can use the simple mappings € = 4= and
Tee + to map the infinite limit to 0. In the case of (7.31), this map EINE reduces it to
the followin integral equation in the new unknown ¢(€)= u(2) ona finite (0, €)
interval instead of u(t) on (—oo, x), (x < 0)

@+f H(e. a) Oar 30)

where F(£) = f(5),H(£,7) =K(¢,


=).
We note here that even for a nice continuous kernel K (z, t) in (7.31), the above
integral equation (7.35) may reduce to one with a singular kernel with singularity at
T = 0. For example, the equation in u(x),

u(z) = xe" + [ e 2(—t)(¢ — t)u(t)dt (7.36)


—e,9)

becomes the following in ¢(€)


EXERCISES 7.2 357

1 de g ay fees ie ey a eae | 1
ete Weak (;-+)«(+) ae
u(2) =e
& g if Te Nee ip (7.37)

o@) =F@- | E98) oS ayar


where the kernel is singular at 7 = O_ due to the factor +. (The factore ~ &* is
bounded since € < tT < 0.)
Since we are not covering the analysis of singular integral equations due to their
kernel being singular, we shall leave this subject for the interest of the reader. (See
the general references given towards the end of Section 5.5 on the numerical methods
of integral equations including the singular ones Delves and Mohammed [1988],
Baker and Miller [1977].)

Exercises 7.2

1. Consider the Volterra integral equation of the second kind


xz

u(z) = 1—27 +42? + i [3 + 6(@ — t) — 4(a@ — t)?]u(t)dt (£.1)


0
(a) Letty = At = 0.05; t = 0.1, ts = 0.15, and tg = 0:2, u(¢;) =u; Write
its numerical Newton-Cotes “gradual" or “combination" rules in the sense
of using the trapezoidal rule for determining u; of u;, Simpson’s rule for that
of w2, the 2-rule for u3, then returning to the repeated Simpson’s rule for uz,
us,--: Hint: See (7.28) and Example 5.
(b) Since we have a simple triangular 5 x 5 system, solve it successively to find
the approximate values of the solution uj, U2, U3, U4, and us, then compare
with those of the exact solution u(x) = e”.

2. Consider the Volterra integral equation of the first kind,


H1}

a i (1-2 + t)u(t)dt. (£.1)


ny)

(a) Use the Maclaurin method (that we used for (7.29) to obtain its numerical
setting (7.30)) to write the numerical setting of (E.1), using Nt 0.05,
uj = u(z;) = u(0.052), 2= 1,3,5,7,9. (See Example 6.)
(b) With such simple triangular system of equations, solve it successively to
find the samples of the approximate solution u;, 2 = 1,3,---,9. Compare
your results with the samples of the exact solution u(z) = e”.

3. For the Volterra integral equation of the second kind in Exercise 1, repeat the
problem with At = 0.02 and compare with the approximate and exact answers.
358 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

4. Consider the Volterra integral equation of the second kind of Exercise 1,

u(a) = 1-22 + 4a? + [re + 6(a — t) — 4(x — t)*]u(t)dt


0
and its numerical setting with At = 0.05 in part (a) of that exercise starting
with the trapezoidal rule for u;, Simpson’s rule for ue, the 3-rule for u3, then
returning to the repeated Simpson rule for w4, us,---, aS was done in (7.28)
and illustrated in Example 5.
(a) In the present exercise use larger At = 0.5 instead of At = 0.05 in the
above exercise, to solve the problem, and compare your results with its exact
answer u(x) = e”.
(b) The discussion following (7.28) suggests that the value u;, obtained above
with the (not so accurate low degree) trapezoidal rule would be the most
inaccurate among the other approximate values. Use your results to comment
on this discussion. Also, why is it that such inaccuracy is not so apparent in
Exercise 1 when a smaller At = 0.05 was used?
For additional detailed computations of this problem see Exercise A.1 of Sec-
tion 7.2.A in “The Student’s Solution Manual" to accompany this text.°

5. For Exercise 1, use the Lagrange interpolation formula (1.153) and (1.154) to
interpolate its numerical values of the solution, then compare this approximate
interpolated solution u(x) with the exact solution u(x) = e”.

6. Consider the singular Volterra integral equation in u(x),

Hee *-[ 3 € a :)u(t)dt


(a) Use the change of variable £ = +, r = + to reduce the integral to that of
finite limits 7 = 0 to 7 = € in the new unknown ¢(€) = u(+).
(b) Solve the resulting equation in #(€), thus find u(z) = (z). Hint: The
resulting equation in ¢(€) is non-singular with a Laplace convolution product
type integral.

7.3. HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS

In this section we consider the numerical approximation setting for solving Fredholm
integral equations using the more efficient high quadrature rules of Section 7.1 such as
the Gauss-Legendre and other Gauss quadrature rules. The results will be compared
with the, relatively inefficient, trapezoidal rule that we have already used in (5.124)

5 Jerri [1999]. See the end of the preface for more information.
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 359

of Section 5.5.1. As we discussed in Section 7.1, all such quadrature rules will differ
in the weights D; and the sample locations t; of the numerical approximation setting
(5.118) with N =n +1,

u(xi) = f(ai) + Y_ K(ai,tj)Dju(ts), i=1,2,---,N (7.38), (5.118)

for the Fredholm integral equation (of the second kind) (1.148), (5.116)

u(x) = f(x) + ( K (a,t)u(t)dt. (1.148), (5.116)


Here, the need for efficient numerical methods is clear, since we have an N x N square
system of linear equations, and the cost is very high if we stay with the trapezoidal
rule. For the case of Volterra integral equations, with its much simpler triangular
system of linear equations, Newton-Cotes rules, or a combination of them, can be
used to solve such systems, successively, or what is termed as the marching method,
which was discussed in the previous Section 7.2.
We will illustrate (7.38) (or (5.117)) using a four-point Gauss-Legendre rule for
approximating the integral on the interval (0,1), which we have already presented in
Section 7.3. We must first note that in Table 7.3, we have the locations of the samples
x; and the weights w; given on the (symmetric) interval (-1,1), and where both values
of x; and w; are symmetric. Here, we use four sample points on (0,1). So, according
to (7.5) and (7.6), we must translate the locations adding 1, then scale them by a
factor of h = $s since our interval (0,1) has half the width of (-1,1) in Table 7.3. So
the needed locations are x; = 5(1 +2;),7 = 1,2,3,4. Also the weights w; of Table
7.3 must be scaled by a factor of 4 as indicated in (7.6) and w} = $w;. Thus, we
obtain the four locations on (0,1) from the following (symmetric) locations (of Table
183):

£1 = —0.861136,rz = —0.339981,x3 = 0.339981,x4 = 0.861136

as

i, = 5 — 0.861136) = 0.069432, x, = 0.330009, x = 0.669991,


r4 = 0.930568.

The new weights wi = 5Wj of the (symmetric) values w; of Table 7.1 are:

= 0.173927,
wi, = 0.326073,
w, = 0.326073,
w, = 0.173927.

Now we can write the numerical setting of the Fredholm equation,

1
ula) =4 (x) + af K (a, t)u(t)dt (7.39)
360 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

with the help of a four-point Gauss-Legendre rule (in parallel to using the trapezoidal
rule in setting (5.124) for (5.116)) as

U1 — fi + X[w} Kivu + wi Kyqu2 + w3Ki3u3 + w),K14uU4]

Hy = fi + X[w} Kiiuw + wo Ki2u2 + w) Ki3u3 + w;,Ki4ua]


uy = fi + A(0.173927K
111 + 0.326073.K
12u2 + 0.326073.K13u3

The same is done for i = 2,3, and 4 to have the system of four equations,

ui = fit A[w) Kir + wy Kigue + wo Kigug + w, Kigua),


eS) 3548 (7.40)
and where it is understood that

i; = U(0,),fs= f(o,)a aud Hy, = Ke eee el,


Next we illustrate the same problem with the help of a four-point Tchebychev quadra-
ture rule with its equal weights w; =a = 10 = i and with its locations 7; on
(-1,1) as given in Table 7.4. Again, since our problem is defined on (0,1) we translate
the locations x; of Table 7.4 by 1, then scale them by a factor of $,i.e.,2; = $(1+24),
1 = 1,2,3,4 (as we did with the above values for the Gauss-Legendre quadrature
rule) to have
1
al = 5(l+) = 5(1 — 0.794654) = 0.102673,
x, = 0.406204, x}, = 0.593796, x’, = 0.897327.
With the equal weights of w; = i, 1 = 1, 2,3, 4, the numerical setting of (7.39) with
the Tchebychev rule becomes
il
ui = fit Aj Kam + Ki2u2 + Kigu3 + Kigue], 1 = 1,2,3,4, (7.41)
where t, = x; are as determined above.
We recognize here that in order to have better accuracy for the solution of such
system of linear equations, we often need a large number of points N and a high
quadrature rule. Even with such very efficient rules of Section 7.1, the main task still
remains on the shoulders of the user, where a good familiarity with matrix analysis is
of utmost benefit. In this book, and for the sake of having the material self contained,
we have included only a brief review of Cramer’s rule in Section 1.5.4 for solving a
system of linear equations.
Next, we will use Cramer’s rule for solving the system of four linear equations as
the result of the numerical approximation of Fredholm integral equation in Example
20 of Section 5.5.1,
1
ti(2) sine +/ (1 — xcos zt)u(t)dt (7.42)
0
where this equation will now be approximated with the help of a four-point Gauss-
Legendre quadrature rule as in (7.40), then a Tchebychev rule as in (7.41).
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 361

Example 7 Fredholm Equation of the Second Kind


(a) The Gauss-Legendre Rule
First we approximate the Fredholm integral equation of the second kind

u(x) = sine + fo — x cos xt)u(t)dt (E.1)


0
using a four-point Gauss-Legendre rule as in (7.40), to have the system of four
equations in the four unknown sample values uj, uz, u3 and u4.
Of course, we first use the change of variable € = 2t—1, t = so dii=
dé, to have the integral on the symmetric interval -1 < € < 1, ready for its
numerical approximation by the Gauss-Legendre rule and its Table 7.3. However we
still have xe(0, 1), where we shall adjust x;, w;, given for the symmetric interval
; 1
(-1,1) in Table 7.3 for N = 4, to give us x, = 5 (ti +1), w;= 5 wi for u(x;,) = ui,
x,€(0, 1).

u(x) = sing + ;iE f— £COST (>) u (S) dé. (£.2)

Now we use the four-point Gauss-Legendre rule for the integral with the help of Table
7.3, and evaluate u, atz =z} = ae = a = (1.069432 to generate the first
linear equation of (7.40) corresponding to z = 1,

u, =sin(0.06943) + 5[(0.34786) {1 — (0.06943) cos(0.06943)?} uj


+(0.65215){1 — (0.06943) cos(0.06943) (0.330001) }us
+(0.65215){1 — (0.06943) cos(0.06943) (0.66999) }u3
+(0.34786) {1 — (0.06943) cos(0.06943) (0.93057) }14}.
In a similar way we obtain the results at x5, 73, and x to have the four linear
equations in the four unknowns wy, U2, ug, and u4; ui = U(z')

u, = 0.06938 + 0.16185u; + 0.30345u2 + 0.30346u3


+0.16188u4
ug = 0.32405 + 0.11655u; + 0.21910u2 + 0.22109u3
+0.11922u4 (B.3)
uz = 0.62098 + 0.05752u; + 0.11292u2 + 0.12925u3 ;
+0.07932u4
ug = 0.80196 + 0.01241u, + 0.03684u2 + 0.07973u3
+0.06906u4.
Then we rearrange to have the system of linear equations ready for a matrix equation
form, with its resulting coefficient matrix A (in AU = B of (E.3)) as

0.83815 —0.30344 —0.30346 —0.16188


Biles —0.11655 0.78090 —0.22109 —0.11922 (E.4)
~ | —0.05752 -—0.11292 0.87075 —0.07932 ,
—0.01241 —0.03684 —0.07932 0.93094
362 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Here we must find the determinant of the (square) matrix A, which is essential to the
use of Cramer’s rule for obtaining the final solution, then we report the final solution
(u1, U2, U3, U4). The result of evaluating the determinant of the square matrix A in
(E.4) is |A| = 0.450361.
Now we use Cramer’s rule (as in Section 1.5.4) to solve the system of equations
in (E.3) where we find that the values @1, ti2, 3 and %4 are almost equal to the exact
value of u(x) = 1,0 < x < 1 (within an error of about order 107~!°).
In Table 7.9 we present a comparison of these approximate (numerical) solutions
a;, with the exact solution u; = u(x;) = 1, along with the error €; = U; — Ui.

Table 7.9 Numerical (Gauss-Legendre) and Exact Solutions of a Fredholm Equation (Ex-
ample 7)

1 Li u;(num.) Oy = Ih GAG) error = Ua;

1 0.06943 as eal 1.0 wie


2 0.33001 bee onal (et 1.0 ~ 10-19
3 0.66999 ew 105 1.0 ri 1°
Ae 50:93057 lwo 100” 1.0 ~3x 1071

We note here that the absolute error of about 10~!° represents an improvement
for the Legendre rule, when compared with the trapezoidal rule that we used for the
same problem in Example 20 of Section 5.5.1, though with N = 3, where the error
ranged between 10~? and 2 x 107?.
(b) The Tchebychev (Equal Weight) Rule
Here we use a four-point Tchebychev rule, with the help of Table 7.4, as in (7.41)
to solve the same Fredholm integral equation (E.1); and to have the integral on the
symmetric interval, we use its transformed version in (E.2). We also adjust the 2;
given on the symmetric interval (-1,1) in Table 7.4 to give us 2, = 5 (ti + 1) for
u(zi) = uj, rie(0, 1), since our integral of (E.1) is defined on (0,1).

u, = sin(0.102673) + “la — (0.102673) cos(0.102673)2)u;


+(1 — (0.102673) cos{ (0.102673) (0.406204)}) uz (E.5)
+(1 — (0.102673) cos{ (0.102673) (0.593796)})us
+(1 — (0.102673) cos{ (0.102673) (0.897327) }) ua].
In a similar way we obtain the results at x2, 73, and x4 to have four linear equations
iN U1, Ug, UZ and U4,

uy = 0.102493 + 0.224333u; + 0.224354u2 + 0.224379u3


+0.224441u4
ug = 0.395125 + 0.148537u; + 0.149828u2 + 0.151389u3
+0.155121u4
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 363

U3 = 0.599511 + 0.101827u; + 0.105848u2 + 0.110684u3


+0.122130u4
ug = 0.781663 + 0.0266197u; + 0.040406u2 + 0.056767u3
(E.6)
+0.094545u4.

These equations are then written in a matrix form, as we did in (E.3), (E.4) in part (a),
where the determinant of the coefficient matrix A is computed, and Cramer’s rule of
Section 1.5.4 is used to find the solution (ui, uz, u3, ua) of the system of equations
in (E.6).
Since all these computations were done in detail in part (a), it is sufficient here to
report in the following Table 7.10 such numerical solution @;, 1 = 1, 2,3, 4, which
are almost equal to the exact solution u; = 1 (within an error €; = &; — v1; of about
order 10~".)

Table 7.10 Numerical (Gauss-Tchebychev) and Exact Solutions of a Fredholm Equation


(Example 7)

a Zs u,;(num.) Up = (Gee) Giire, = =U

1 0.102673 1+~ 1077 1.0 ~ —107"


2 0.406204 14+~107-7 1.0 ~ —1077
3 0.593796 14+~10-" 1.0 ~ =10-"
4 0.897327 1+~10~’ 1.0 wi=107!

We note that the accuracy of the computations is reasonable compared to that of the
Gauss-Legendre rule in part (a). However, it is known that the latter can do very well
as we increase the number of points NV. We shall leave such observations for the
exercises.

Accuracy of the Numerical Methods


For the family of the quadrature rules suggested for the numerical solution of
Fredholm integral equation,

b
u(x) = f(z) +2 fiK (a,t)u(t)dt (7.39)
we, of course, must first make sure that the equation has a unique solution u(z) before
embarking on using one or more quadrature rules to find its approximate samples
{u(a;)}§_,. Next we must also have an estimate of the error for the quadrature
rule (or rules). It turns out that the bound on the error for such numerical methods
depends on two factors. The first is independent of the method used, and it depends
on the integral equation itself as characterized by the integral operator with its kernel
K (a, t) in (7.39), and its nonhomogeneous term f(x). As was discussed briefly in
Section 5.4 for the Fredholm integral of the first kind, the first factor is described in
364 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

terms of the (possibly high) sensitivity of the solution u(x) to small changes in the
above two characteristic parameters of the problem, namely, f(x) and K (a, t).
The second factor of the error bound is more related to what we are doing here,
where it is a measure of the error of the rule used in approximating the integral of the
equation (7.39). With the high degree quadrature rules, such error can be minimized
easily if K(x, t)u(t) is well behaved. However there is a catch here, since u(x)
is still the unknown to be found. A very helpful result, for directing our efforts
in using the above two groups of Newton-Cotes and the Gauss quadrature rules, is
that, “If the kernel K(x,t) is continuous in the square; {a < x < b;a < t < D},
f(x) is continuous on a < x < 6, and if we also know that the equation (7.39) has
a unique continuous solution, then there is a family of quadrature rules for which
the error, in approximating the Fredholm integral equation (7.39), tends to zero as
N ~ oo". Such family of rules includes, (i) the M/-panel Newton-Cotes rules of
degree P for any fixed P and increasing M; and, (ii) the Gauss-Legendre, and open
and closed Tchebychev N-point rules for increasing N. It does not, however include
the Newton-Cotes method for fixed panel M and increasing degree P.
For example, in the problem of the above Example 7, and also Example 20 of
Section 5.5.1

u(x) = sing + [o — «cos xt)u(t)dt (7.42)

we know that the solution of this equation is u(a) = 1, and that the kernel K (az, t) =
(1 — xcost) is also continuous (and differentiable) on the square; {0 < x < 1;
0 < t <}. Hence, according to the above result, there is no surprise when simple
quadrature rules would work as illustrated in Exercise 1(a) of Section 5.5 (with the
very low value of N = 3) for the trapezoidal rule and with better results for Simpson’s
rule. However, this is not very much the case for the integral equations with kernels
such as where the solutions are smooth, but the kernel K (z, t)

elt), Oteri<t
Kat) = {(1-2), 0<tSa (is)
which, although it is continuous in both z and t, it does have the problem of a jump
discontinuity of size | in its first derivative oR (at) as shown in (4.26) for the more
general case of Green’s function G(z, t) in (4.25). An indication of the convergence
of the numerical method for the problem (7.42) may be seen in the result of Example
20 in Section 5.5 where the trapezoidal rule was used with N = 3 (see also Exercise 5
in Section 5.5). For an illustration with kernel K («, t) that has a jump discontinuity
in its derivative, see Exercises 8 and 6 in Section 5.5 for a nonhomogeneous and
homogeneous Fredholm equations, respectively. Of course, these examples with
limited small N are not enough, and we shall have a chance to illustrate such error
analysis for large N in the exercises.
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 365

7.3.1 Comments on Higher Quadrature Rules for Some Singular


Fredholm Equations

Some Analytical Methods


As we mentioned in the preface, the presentation of integral equations in this book
assumes only basic differential equations and calculus. Thus it is not possible to touch
upon the desired theory or methods of solutions of singular integral equations, which
requires some preparation of at least a course in complex variables and advanced
calculus. As such, our treatment in this section will be mainly illustrative in nature,
where we depend on the tools that have already been developed in this book with the
necessary details. This includes the use of integral transforms in Section 1.4 and the
numerical methods with its basic support in Section 5.5.1 and this Section 7.3. The
first operational calculus method covers the use of Fourier and other transforms for
solving a special class of singular Fredholm integral equations. Such class includes
the following singular Fredholm equation, where its integration part is in the form of
a Fourier convolution product

ula) = flo) + f” K(a— 6ulédé (7.44)


as we presented it in Section 1.4.2. This method was then illustrated in Example
15 (of the same Section 1.4.2) for solving the following singular Fredholm integral
equation of the second kind,

u(x) =e ll + wf e l*Slu(€)d€, (7.45)


—oo

and in Example 16 of the same Section 1.4.2 for the singular homogeneous Fredholm
integral equation,

u(x) = ps /- e thu (t)dt. (7.46)


In Section 1.2 we classified integral equations, where we defined an integral equation
as singular, when

(i) Either one of the two limits of the integral in the equation is infinite,
or

(ii) The kernel of the integral equation becomes unbounded, i.e., equations with
singular kernels.

The first type of singularity is seen in the above two examples of (7.45) and (7.46). For
lack of space we shall limit our general brief discussion and the numerical method, of
using higher quadrature rules, to such singular Fredholm equations whose singularity
is only due to the limit (or limits) of integration being infinite.°

6The interested reader may consult Baker and Miller [1977] or Delves and Mohammed [1988].
366 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

Singularity Due to Infinite Limits of the Integral


In the examples of the Fourier transforms (1.87), (1.95), (1.98) in Section 1.4.2,
and the Laplace transform (1.63) in Section 1.4.1 we looked at them as Fredholm
integral equations of the first kind that are singular because of the infinite limit (or
limits in the case of (1.87)) of integration, while the kernels are extremely well
behaved (very smooth) exponential functions. In the case of the Hilbert transform
(1.131) we have both (strong) singularity at z = 2 for the kernel K(x, A) = se
and infinite limits of integration. The method of solving such (strongly) singular
integral equations needs complex analysis, and falls at the center of the treatment of
singular integral equations, which is termed the “Hilbert problem", and which we are
not going to pursue in this book.

Transforming to Finite Limits


Integral equations with singularity due to the infinite limits of integration, but with
well behaved kernels may be reduced to ones that are much easier to handle. For
example,
[o-@)

ula) = fe) + K (a, t)u(t)dt, —00 <Z<0O (7.47)


—0o

may, formally, be reduced to that of finite limits via the change of variables

f= tana: w= tare (7.48)

where the limits of integration in both of the new variables 7 and € are from — } to 3.
Such change of variables (or mappings) is often used in the numerical approximation
of this type of singular equations, where we use a quadrature rule on (— 5, >) for the
following transformed equation (7.49) instead of dealing with the infinite limits of
(7.47)) (see Exercise 9 for an illustration in this general direction),
&

U(g)=F(E)+ | H(E,7)U(r)dr (7.49)


5

where U(£) = u(x) = u(tan€), F(¢) = f(x) = f(tang), and H(€,r) = K(z,t)
=K (tan €,tan 7) - sec? r. Note that the new kernel in (7.49) may have singularities
ati +5-
Another mapping that is used for the same purpose is

=
2a 2a
= IL. = — — :
é r+a i t+a : Ka)
which reduces the equation

(2) f(z) + BUGS


uh dts Orme <100 (7.51)
0
to one with the finite limits 7 = —1 to 1,
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 367

1
Ue=FeQ+ |=<) GENUMdr,-1<€<1 (7.52)
Hee U(€) = u(x), F(€) = f(x), and G(é,r) = VAG = 22, K(2% —a,
eo
Of course, here we notice that the new kernel has a singularity at the lower limit of
integration tT= —1. For the numerical methods, such singularity at an end point may
be avoided by the use of an “open" quadrature rule for approximating the integral,
i.e., a rule that does not use the end points for samples such as the Newton-Cotes
rules of the open type (Table 7.2(b)), Maclaurin rule (Table 7.3) or the Tchebychev
(open) rule (Table 7.4), as we discussed in Section 7.1.
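A similar sketch for the second mapping (again my own illustration; a = 1 and the integrand are arbitrary choices, and the direction of the map follows the reconstruction of (7.50) above): the substitution carries (0, ∞) onto (−1, 1), and since Gauss-Legendre nodes are strictly interior, the troublesome endpoint τ = −1 is never sampled.

import numpy as np

# Map (0, inf) onto (-1, 1) by tau = 2a/(t + a) - 1, i.e. t = 2a/(1 + tau) - a,
# dt = 2a/(1 + tau)^2 dtau.  The Jacobian blows up at tau = -1 (t -> infinity),
# but the Gauss-Legendre nodes never touch that endpoint.
a = 1.0
f = lambda t: 1.0 / (1.0 + t**2)                 # slowly varying sample integrand

nodes, weights = np.polynomial.legendre.leggauss(16)
t = 2 * a / (1 + nodes) - a
jac = 2 * a / (1 + nodes) ** 2

print(np.sum(weights * jac * f(t)), np.pi / 2)   # exact value is pi/2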
As to the analytical treatment of singular equations of this infinite limits type
(7.47), the simplest class that we can have a good hold on here is the special case
where the integral is in a Fourier convolution product form,
u(x) = f(x) + μ ∫_{-∞}^{∞} k(x − t) u(t) dt.   (7.53)

If we let U(λ), F(λ) and K(λ) be the Fourier transforms of u(x), f(x) and k(x), re-
spectively, then the Fourier transformation of the singular Fredholm integral equation
(7.53) reduces it to an algebraic equation in U(λ),

U(λ) = F(λ) + μ K(λ) U(λ)   (7.54)

after using the Fourier convolution theorem (1.101) on the above integral as a con-
volution product integral in (7.53).
Now we solve for U(λ),

U(λ) = F(λ)/(1 − μK(λ)),   1 − μK(λ) ≠ 0.   (7.55)

So, provided that 1 − μK(λ) ≠ 0, we can use the inverse Fourier transform (1.88)
on this U(λ) of (7.55) to find the solution u(x) of the singular equation (7.53).

As we mentioned at the beginning of this section, this method was illustrated for
the singular nonhomogeneous Fredholm equation (7.45), and its associated homoge-
neous one (7.46) in Examples 15 and 16 of Section 1.4.2, respectively.
Another illustration with instructive hints is done in Exercise 17 of the same section.
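As a small check of this recipe (my own illustration, not the book's Example 15; the value of μ is arbitrary and the transform convention F(λ) = ∫ f(x) e^{-iλx} dx is assumed): for an equation of the form (7.45) the kernel transform is K(λ) = 2/(1 + λ²), and (7.55) then inverts in closed form to u(x) = e^{-a|x|}/a with a = √(1 − 2μ), which a direct quadrature confirms.

import numpy as np
from scipy.integrate import quad

# For u(x) = e^{-|x|} + mu * int_{-inf}^{inf} e^{-|x-t|} u(t) dt, K(lam) = 2/(1+lam^2)
# and (7.55) gives U(lam) = 2/(lam^2 + a^2), a^2 = 1 - 2*mu, i.e. u(x) = e^{-a|x|}/a.
mu = 0.2
a = np.sqrt(1.0 - 2.0 * mu)
u = lambda t: np.exp(-a * np.abs(t)) / a

for x in (0.0, 0.7, 2.5):
    integral, _ = quad(lambda t: np.exp(-np.abs(x - t)) * u(t), -np.inf, np.inf)
    print(u(x), np.exp(-np.abs(x)) + mu * integral)   # the two columns agree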

The Numerical Method


As we mentioned earlier, our main discussion and illustration will center around
Fredholm integral equations that are singular due to (only) infinite limit (or limits) of
the integral.

The main issue here, compared to the numerical methods of nonsingular Fredholm
integral equations in the previous section, is how to deal with the infinite limit or
limits of integration? This, we have prepared for in (7.18)-(7.24), (7.26) and (7.27)
of Section 7.1, and illustrated in Examples 3 and 4, and Exercises 2 to 5 in Section
7.1.
The basic idea of the numerical methods starts with approximating the integral in
an integral equation, a subject that we covered in Section 7.1 concerning quadrature
rules for approximating integrals including those with infinite limits. All those
preparations can be summed in the following three basic attempts:
(1) To truncate the infinite limit of integration to a finite limit L, then use a high
order quadrature rule, which is, often, not good enough.
(2) To use the Gauss-Laguerre rule for integrals on (0, ∞).
(3) To use the Gauss-rational rule as in (7.24), which, effectively, maps the infinite
interval (a, ∞) into a finite interval (−1, 1) in a new variable U(ξ), whence a Gauss-
Legendre rule can be easily applied for this finite interval. This was used in reducing
the singular Fredholm equation (7.51) with infinite domain (0, ∞) for u(x) to that of
(7.52) with finite domain (−1, 1) for U(ξ). (A short computational sketch of these three attempts follows.)
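The following lines (a sketch of mine, not the book's program; the integrand, the truncation length L, the Laguerre parameter β and the map parameter a are all illustrative) apply the three attempts to a single slowly varying integral on (0, ∞), namely ∫_0^∞ dt/(4 + t²) = π/4.

import numpy as np
from scipy.special import roots_laguerre

g = lambda t: 1.0 / (4.0 + t**2)                 # slowly varying sample integrand
xg, wg = np.polynomial.legendre.leggauss(8)

# (1) truncate to (0, L), then an 8-point Gauss-Legendre rule
L = 64.0
I1 = 0.5 * L * np.sum(wg * g(0.5 * L * (xg + 1.0)))

# (2) Gauss-Laguerre with a scaling parameter beta (u = beta*t):
#     int_0^inf g(t) dt = (1/beta) int_0^inf e^{-u} [ e^{u} g(u/beta) ] du
xl, wl = roots_laguerre(8)
beta = 0.25
I2 = np.sum(wl * np.exp(xl) * g(xl / beta)) / beta

# (3) rational map tau = 2a/(t + a) - 1, then Gauss-Legendre on (-1, 1)
a = 2.0
I3 = np.sum(wg * (2 * a / (1 + xg) ** 2) * g(2 * a / (1 + xg) - a))

print(np.pi / 4, I1, I2, I3)    # exact value, then the three approximations

For this particular integrand the plain truncation in (1) is the weakest of the three, echoing the remark above.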
We shall illustrate an outline of these (simple) schemes in the following example.

Example 8
Consider the following singular integral equation,

u(x) = f(x) + ∫_0^∞ K(x, t) u(t) dt   (E.1)
with the nonhomogeneous term

f(x) = (1/√(4 + x²)) [1 − (π/4) e^{-2x}]   (E.2)

and a smooth kernel

K(x, t) = cos xt / (√(4 + x²) √(4 + t²)),   (E.3)

where, clearly, the form of f(x) is picked to have an exact solution u(x) = 1/√(4 + x²)
for comparing our numerical results.
The exact solution for the above equation, with the cos xt factor in the kernel, is
obtained with the help of the Fourier cosine transform tables. From such tables we
have

∫_0^∞ (cos xt)/(4 + t²) dt = (π/4) e^{-2x}.   (E.4)

Since we know the solution u(x) = 1/√(4 + x²) as a slowly varying function (some-
thing that may only happen for such “made up" illustration!) and combined with
the kernel as another slowly varying function, we have an extra (artificial!) advan-
tage that the user of a method does not have when looking for the solution! Again,

we (purposely) made up the exact solution for comparing the results of the above
different methods.
We will present here only a brief discussion of the different methods, and leave the
discussion of some analysis and the numerical illustrations for a very similar problem
in Exercises 8-10.
(1) The first method is straight-forward and may start with truncating the infinite
limit of integration to L, then using N-point Gauss-Legendre rule on (0, L). The
computations may be limited to L = 4, 8,64, and N = 4,8.
(2) For a second method we can use an N-point Gauss-Laguerre rule for N = 4, 8,
and with the parameter β in (7.20) varying towards smaller values of β = 1, 1/4, 1/16.
The choice of small values of β in this example is the result of our experience with
the numerical integration in Example 4(a) of Section 7.1 for a slowly varying function,
where a very small β value gave fair results. Here, knowing our solution
in advance as u(t) = 1/√(4 + t²), along with the factor 1/√(4 + t²) in the kernel,
makes the integrand, aside from the oscillating cos xt factor, a slowly varying one.
This is only an illustration, which suggests the importance of having some idea about
the asymptotic behavior of the solution in steering our efforts towards using the most
suitable (or efficient) quadrature rule.
(3) In a third method we may use the Gauss-rational rule for N = 4, 8, and the
parameter β ranging towards larger values β = 1, 4, 16, as was suggested by our
experience in Example 4(a) of Section 7.1 for approximating integrals on (0, ∞)
with slowly varying integrands.
The numerical results in this example may be fine for the three methods. Now we
must inquire about the possible analytic reason from the theory that will help guide
us for other examples. The general “regularity” or “well-behaving" conditions fall
along the lines of assuming that, for the (singular) Fredholm equation (E.1) above, its
kernel K(x, t) is square integrable on the first quadrant (0 < x < ∞, 0 < t < ∞),
and that its nonhomogeneous term f(x) as well as the solution u(x) (if we can
estimate its behavior!) are also square integrable on (0, ∞). In our present example
the (made up in advance) solution u(x) = 1/√(4 + x²), and the nonhomogeneous
term f(x) = (1/√(4 + x²))[1 − (π/4)e^{-2x}], are clearly square integrable on (0, ∞). The
kernel K(x, t) = cos xt/(√(4 + x²)√(4 + t²)) is also square integrable in both x and
t on the first quadrant. Hence, it should not be very surprising if we see convergence
for the approximate numerical methods, as they are led by the comfort of satisfying
the essential conditions of the underlying theorem.
To give an illustration where we don’t have the protection of the above underlying
theory, we give the following example

1 T ie
“(eo TER a 7 +f cos rtu(t)dt (7.57)

with a known solution u(z) = oa and a nonhomogeneous term f(z) = a =


Fe , which are both square integrable on (0, oo). However, the square integrability
370 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS

of K (a, t) = cos zt on the first quadrant is clearly in doubt,

/ if | cos xt|?dxdt = / Ht (5= COS 221)dxdt = oo


0 Jo 0 Jo 2 2
which we shall leave for a simple exercise.
We leave the attempt of using the Gauss-Laguerre rule (with small β values for
the slowly varying (almost constant) integrand!) and the Gauss-rational rule (with
large β values) for an exercise to show that even with large N there is no sign of
convergence! (see Exercise 8).
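To make the above outline concrete, here is a minimal Nystrom-type sketch (my own, not the book's program) for an equation of the form (E.1), using the kernel, forcing term and exact solution as reconstructed above; the quadrature is the β-scaled Gauss-Laguerre rule of attempt (2), and N and β are illustrative choices whose effect the reader can explore as in Exercises 8-10.

import numpy as np
from scipy.special import roots_laguerre

# Nystrom set-up for u(x) = f(x) + int_0^inf K(x,t) u(t) dt, with
# K(x,t) = cos(xt)/(sqrt(4+x^2) sqrt(4+t^2)) and exact solution 1/sqrt(4+x^2).
K = lambda x, t: np.cos(x * t) / (np.sqrt(4 + x**2) * np.sqrt(4 + t**2))
f = lambda x: (1 - (np.pi / 4) * np.exp(-2 * x)) / np.sqrt(4 + x**2)

N, beta = 8, 0.25                              # illustrative choices
xl, wl = roots_laguerre(N)
t = xl / beta                                  # quadrature nodes on (0, inf)
w = wl * np.exp(xl) / beta                     # weights for an integral without e^{-t}

A = np.eye(N) - w[None, :] * K(t[:, None], t[None, :])
u = np.linalg.solve(A, f(t))

for ti, ui in zip(t, u):
    print(ti, ui, 1 / np.sqrt(4 + ti**2))      # numerical vs. exact samples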

Exercises 7.3

1. Consider the Fredholm integral equation of the second kind,


u(x) = x² − 2 ∫_0^1 (1 + xt) u(t) dt.   (E.1)

(a) Use a four-point Tchebychev rule (7.17) for approximating the integral,
and set up the system of 4 × 4 equations in the four unknowns u_1, u_2, u_3, and
u_4. For the locations of the samples x_i, consult (7.16), and for the weights see
Table 7.4.
(b) Use a four-point Gauss-Legendre rule to write the 4 × 4 linear equations
(see Table 7.3 in Section 7.1 for the locations of the samples x_i and the weights
w_i).

2. Consider the homogeneous Fredholm integral equation


u(x) = λ ∫_0^1 (x + t) u(t) dt.

Use a four-point Gauss-Legendre rule to set up the 4 × 4 homogeneous linear
equations in u_1, u_2, u_3, and u_4.
Hint: See Example 7 and Table 7.3.

3. Consider the Fredholm integral equation of the first kind in u(x),

3x + 1 = ∫_0^1 (1 + xt) u(t) dt.

Use a four-point Tchebychev rule (7.17) for approximating the integral, and
set up the resulting four linear equations in the four unknowns u_1, u_2, u_3, and
u_4 at x_1 = 0.108, x_2 = 0.406, x_3 = 0.594 and x_4 = 0.897 (see (7.16) and
Table 7.4). Note that we have already adjusted the locations x_i of Table 7.4 to
suit the interval (0,1) of the above integral.

4. Consider the Fredholm integral equation with degenerate kernel


u(x) = x/2 − 1/3 + ∫_0^1 (x + t) u(t) dt.   (E.1)

(a) Attempt a Gauss-Legendre rule with N = 2,3, and 4 and make your
conclusion concerning reaching an exact answer.
(b) Solve the integral equation by the method of Section 5.1 for the exact
answer u(x) = x, compare with the answer in part (a), and show why part (a)
gives an exact answer.
(c) Use the exact answer of part (b) to verify the integral equation.
(d) Try part (a) with Simpson’s rule (with the knowledge of the exact solution
u(x) = x).

5. Repeat the computations and error analysis of Exercise 4 for the following
Fredholm integral equation of the second kind (with degenerate kernel of one
term),
u(x) = sin x − x + ∫_0^{π/2} xt u(t) dt   (E.1)
with the following steps.
(a) Use the method of Section 5.1 for degenerate kernels to show that the exact
solution is u(x) = sin a.
(b) Why is it that a finite-degree (polynomial) quadrature rule cannot result in an
exact numerical solution for this equation (E.1)?
(c) Repeat the steps of part (a) in Exercise 4 by attempting a Gauss-Legendre
rule with N = 2,3, and 4 points, and compare your results with the above
exact answer.
(d) Noting part (b), where the integral cannot be approximated by any finite
degree (polynomial) quadrature rule, explain the reason for the good results in
part (c).

6. Consider the Fredholm integral equation of the first kind,


3x + 1 = ∫_0^1 (1 + xt) u(t) dt

and its numerical setting in Exercise 3 in u_1, u_2, u_3, and u_4, where a four-point
Gauss-Tchebychev rule was used. Find the determinant of the coefficient
matrix, to see if Cramer’s rule can be applied to find a solution.

7. Consider the homogeneous Fredholm integral equation,

u(x) = λ ∫_0^1 (x + t) u(t) dt   (E.1)

and its resulting 4 x 4 system of homogeneous equations in u;, U2, U3, and U4
after using the Gauss-Legendre rule in Exercise 2.
(a) Write the system in the matrix form U = λAU, (I − λA)U = 0, and set
det(I − λA) = |I − λA| = 0 to find the approximate (two) eigenvalues λ_1 and
λ_2 to be compared with their exact values λ_1 = −6 − √48 and λ_2 = −6 + √48
(see part (b)).
(b) Find the four samples w;, wz, u3, and wu, of the two approximate eigenfunc-
tions of (E.1) corresponding to the two (approximate) eigenvalues found in part
(a). Hint: Note that the four sample values of the approximate eigenfunctions
are very sensitive to the accuracy of the approximate eigenvalues. Indeed we
had to go to ten places accuracy to get the good agreement with the exact
eigenfunctions as shown in part (d).
(c) The homogeneous Fredholm integral equation (E.1) is with degenerate
kernel of two terms. Use the method discussed in Section 5.1 and illustrated
as in Example 2 of that section, to find the above two exact eigenvalues A; and
A. Continue the same method to find the corresponding eigenfunctions Uj (x)
and U2(x). Compare these exact results with the approximate ones in part (a).
(d) Use the Lagrange interpolation formula (1.153) and (1.154) to interpolate
the four approximate sample values of the eigenfunctions in part (a) to find
their continuous approximations U_1(x) and U_2(x). Graph these two functions
and compare them with the graph of the exact one in part (b).
Hint: See Example 7 and Table 7.3 (with its nonequidistant locations x_1, x_2,
..., x_N).

8. Consider the singular Fredholm equation

u(x) = 1/(1 + x²) + μ ∫_1^∞ u(t)/(x² + t²) dt.

(a) Use the rational transformation ξ = 1/t to reduce this equation
to one with finite limits of integration ξ = 0 to 1 in the new solution U(ξ) = u(1/ξ).
(b) Use a four-point Tchebychev rule on the interval 0 < ξ < 1 for the
approximate numerical set up, as a 4 × 4 system of linear equations in the four
approximate samples of the new solution U(ξ), 0 < ξ < 1.
(c) Solve the system of equations in part (b) to find U(ξ_i), i = 1, 2, 3, 4, then
use U(ξ) = u(1/ξ) to report the approximate values of the actual solution u(x_i),
x_i = 1/ξ_i.
Appendix A
The Hankel Transforms

To further support our presentation in Sections 1.4.2 and 1.4.3 of the Fourier and
other integral transforms, we will present the Hankel transforms.¹

A.1 THE HANKEL TRANSFORM FOR THE ELECTRIFIED DISC

As was presented in (1.124)-(1.128) in Section 1.4.3, the Hankel transform F_n(λ)
of f(r), 0 ≤ r < ∞, is defined as

F_n(λ) = H_n{f(r)} = ∫_0^∞ r J_n(λr) f(r) dr   (A.1)

where J_n(λr) is the Bessel function of the first kind of order n.


As we mentioned in Section 1.4.3, we can show using (a rather lengthy!) integra-
tion by parts that the Hankel transform algebraizes the following variable coefficient
part of the Bessel differential equation as in (1.127), that is,

H_n{ d²f/dr² + (1/r) df/dr − (n²/r²) f } = −λ² F_n(λ).   (A.2)

¹For more detailed references, see Jerri [1992] and Sneddon [1972].

The inverse Hankel transform is

f(r) = ∫_0^∞ λ J_n(λr) F_n(λ) dλ.   (A.3)
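As a quick numerical illustration of the pair (A.1) and (A.3) (my own check, not part of the text), the known transform H_0{e^{-r}}(λ) = 1/(1 + λ²)^{3/2} can be verified by direct quadrature:

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Hankel transform of order zero:  F_0(lam) = int_0^inf r J_0(lam*r) f(r) dr,
# checked for f(r) = e^{-r}, whose transform is (1 + lam^2)^(-3/2).
f = lambda r: np.exp(-r)

for lam in (0.5, 1.0, 2.0):
    F, _ = quad(lambda r: r * jv(0, lam * r) * f(r), 0, np.inf, limit=200)
    print(lam, F, (1 + lam**2) ** (-1.5))      # the last two columns agree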
As we indicated in (2.49)-(2.51) and Fig. 2.6 of Section 2.6.2, we will use the
Hankel transform in the following example to solve for the potential distribution in
three dimensions due to an electrified disk.

Example 1 The Electrified Disc: Dual Integral Equations


To illustrate the application of the Hankel transform to a boundary value prob-
lem and its resulting dual integral equations, we choose to solve for the potential
distribution u(r, z) due to a constant potential u_0 on a unit disk in the xy plane,
where the potential is symmetric with respect to this plane outside the disk. Hence we use
the Laplace equation for the circularly symmetric potential u(r, z) [see Figure 2.6 and
(2.49)-(2.51)],

∂²u/∂r² + (1/r) ∂u/∂r + ∂²u/∂z² = 0,   0 < r < ∞,  0 < z < ∞   (E.1)

and express the mixed boundary conditions at the zy plane as

u(r, 0) = u_0,   0 ≤ r < 1,   (E.2)

∂u/∂z (r, 0) = 0,   1 < r < ∞,   (E.3)

where the second part (E.3) of the (mixed) condition represents the symmetry of
the potential with respect to the xy-plane. We note here that the radial part of the
Laplacian in (E.1) is a special case of what is in (A.2) with n = 0; hence we use the
H_0 Hankel transform of u(r, z), which is associated with J_0, the Bessel function of
order zero,

U(λ, z) = ∫_0^∞ r J_0(λr) u(r, z) dr.   (E.4)
So if we Hankel-transform (E.1) and use (A.2), we obtain

−λ² U(λ, z) + d²U(λ, z)/dz² = 0   (E.5)

whose bounded solution is

U(λ, z) = A(λ) e^{-λz}.   (E.6)


To find the arbitrary function A(λ) we need to Hankel-transform a condition on the
original function u(r, z) at z = 0, but unfortunately, this condition is given partly as
u(r, 0) = u_0 in (E.2) and partly as ∂u(r, 0)/∂z = 0 as in (E.3), which is not suitable
for the Hankel transformation. So instead of finding A(λ) for U(λ, z), we will now
find the inverse Hankel transform of U(λ, z) in (E.6) to obtain the original function

u(r, z) = ∫_0^∞ λ J_0(λr) A(λ) e^{-λz} dλ,   (E.7)

then apply the mixed condition (E.2) and (E.3) on u(r, z) in (E.7) to obtain

au PO) te / AJo(Ar)A(A)dX = uo, Orsi cali;


0
e (E.8)
fi AJo(Ar) A(A)dA = Uo, OSS il
0
Ou
an” Oj = i —\ Jo(Ar)A(A)dA = 0, 1<r<oo,
od 0
(E.9)
Co

‘| —\? Jo(Ar)A(A)dA =0, 1<r<oco.


0
Here we notice that the arbitrary function A(A) is now involved in dual integral
equations (E.8) and (E.9) whose solution can be obtained, with the aid of integrals of
Bessel functions (see exercise 1, Section 2.6), as
A(λ) = (2u_0/π) (sin λ)/λ².   (E.10)

So if we substitute this in (E.7), we obtain the final solution to the electrified disk
problem,

u(r, z) = (2u_0/π) ∫_0^∞ J_0(λr) e^{-λz} (sin λ)/λ dλ.   (E.11)
We leave it as an exercise to show that (E.11) satisfies (E.1), (E.2), and (E.3) (see
Exercise 1).
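The result (E.11) is itself an integral with an infinite limit, of the kind treated numerically in Chapter 7. As a small illustration (my own sketch, with u_0 = 1), it can be evaluated by ordinary quadrature; on the axis r = 0 the integral is elementary, u(0, z) = (2u_0/π) arctan(1/z), which gives a convenient check.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# u(r,z) = (2*u0/pi) * int_0^inf J_0(lam*r) e^{-lam*z} (sin lam / lam) dlam
u0 = 1.0
def u(r, z):
    g = lambda lam: jv(0, lam * r) * np.exp(-lam * z) * np.sinc(lam / np.pi)
    val, _ = quad(g, 0, np.inf, limit=400)
    return 2 * u0 / np.pi * val

# On the axis (r = 0) the integral reduces to (2*u0/pi)*arctan(1/z).
for z in (0.5, 1.0, 2.0):
    print(z, u(0.0, z), 2 * u0 / np.pi * np.arctan(1.0 / z))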

A.2 THE FINITE HANKEL TRANSFORM

In Section 1.4 we presented the finite Fourier sine and cosine transforms, and here we
present the finite Hankel transform of a function f(r), defined on the finite interval
(0, a), as

F_n(λ_k) = ∫_0^a r J_n(λ_k r) f(r) dr   (A.4)

where {λ_k a} are the zeros of J_n,

J_n(λ_k a) = 0,   k = 1, 2, 3, ...   (A.5)

With the aid of Fourier-Bessel series we can easily see (see Exercise 3) that the
inverse finite Hankel transform is

f(r) = (2/a²) Σ_k F_n(λ_k) J_n(λ_k r)/J²_{n+1}(λ_k a)   (A.6)

where the sum is over the index k of the zeros λ_k a = j_{n,k} of J_n in (A.5).
We note that to find the inverse Finite Hankel transform f(r) in (A.4), we are
asking for solving (A.4) as an integral equation in f(r). This solution f(r) is found
in terms of the Fourier-Bessel series (A.6).
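A small numerical illustration of the pair (A.4) and (A.6) (my own sketch, which assumes the inversion formula in the form reconstructed above, with a = 1 and n = 0): take f(r) = 1, compute its finite Hankel transform at the first zeros of J_0, and sum the Fourier-Bessel series back.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

# Finite Hankel transform (a = 1, n = 0) of f(r) = 1, then the inversion
# f(r) ~ (2/a^2) * sum_k F_0(lam_k) J_0(lam_k r) / J_1(lam_k a)^2.
a = 1.0
lam = jn_zeros(0, 40)                              # first 40 zeros of J_0

F = np.array([quad(lambda r: r * jv(0, lk * r), 0, a)[0] for lk in lam])

r = np.array([0.2, 0.5, 0.8])
series = (2 / a**2) * np.sum(F[:, None] * jv(0, np.outer(lam, r))
                             / jv(1, lam * a)[:, None] ** 2, axis=0)
print(series)                                      # each entry should be close to 1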

Exercises: Appendix A

1. Verify that u(r, z) in (E.11) of Example 1 is the solution to the problem (E.1)-
(E.3).
2. The orthogonal expansion (Fourier-Bessel series) of a function f(r) defined
on (0, a) in terms of the Bessel functions J_n(λ_k r) is

f(r) = Σ_{k=1}^∞ c_k J_n(λ_k r)   (E.1)

where λ_k a is the kth zero of the Bessel function J_n (usually written as j_{n,k})
and the sum is over the index k of these zeros. The Fourier-Bessel coefficients
are given by

c_k = [∫_0^a r J_n(λ_k r) f(r) dr] / [∫_0^a r J²_n(λ_k r) dr].   (E.2)
(a) Relate the Fourier coefficient c_k to the finite Hankel transform F_n(λ_k) in
(A.4) (of Section A.2).
(b) Use (E.1) and the result in part (a) to verify the Fourier-Bessel series in
(A.6) (the inverse finite Hankel transform) of f(r) = 1, 0 < r < 1. Hint:
Use (A.6) and (A.4).

3. Fluid flow through circular aperture in a wall. It is known that the velocity
potential v of the flow of a jet of perfect fluid satisfies Laplace equation (2.48)
(see also (E.1) of Example 1 here).

(a) Formulate the problem of this steady jet flow through a circular aperture
of unit radius 1 in a plane rigid wall where the velocity distribution in
the hole (at the wall z = 0 when we take z as the direction of the flow
perpendicular to the wall) is given by v(r, 0) = f(r) and that the slope
of the velocity potential is zero on the wall (outside the hole).
(b) Use the Hankel transform to solve the problem and reduce the mixed boundary
conditions to dual integral equations.
(c) Solve the dual integral equations for the case of constant velocity potential
f(r) = 1 at the entrance of the aperture. Hint: See Exercise 1, Section
2.6.

4. Find the Laplace transform of the following initial and boundary value problem
in u(r, θ, t), the displacement of a membrane with zero initial displacement
u(r, θ, 0) = 0, 0 < r < ∞, 0 < θ < 2π, and a given initial velocity of
∂u/∂t (r, θ, 0) = g(r, θ), 0 < r < ∞, 0 < θ < 2π,

∂²u/∂t² = c²∇²u = c²[(1/r) ∂/∂r (r ∂u/∂r) + (1/r²) ∂²u/∂θ²],   0 < r < ∞,  0 < θ < 2π,  t > 0,   (E.1)

u(r, 0, t) = u(r, 2π, t),   (E.2)

u(r, θ, 0) = 0,   (E.3)

∂u/∂t (r, θ, 0) = g(r, θ).   (E.4)

Hint: From (1.69), (E.3), and (E.4) it is clear that only ∂²u/∂t² in (E.1) is suitable
for Laplace transformation. So let U(r, θ, s) = L{u(r, θ, t)} and Laplace-
transform both sides of (E.1), realizing that the differentiation with respect to
r and θ on the right side of (E.1) can be exchanged with the integration with
respect to t of the Laplace transformation.
Appendix B
Green’s Function for
Various Boundary Value
Problems

The following is only a collection of very useful results that center around familiar
boundary value problems, associated with differential equations, along with their
Green’s functions that facilitate giving the Fredholm integral equation representation
of the boundary value problem. In addition, and when possible, we supply the
eigenvalues and eigenfunctions, which are of great value for the construction of the
Green’s function (Section 4.1) and the analysis and construction of the solutions to
Fredholm integral equations of Chapter 5. The theory behind the derivation of some
of these results may not be found detailed in this book.

B.1 GREEN’S FUNCTIONS IN TERMS OF SIMPLE FUNCTIONS

(a) u(x) = λ ∫_0^a K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = x(a − t)/a,  0 ≤ x ≤ t
                  = t(a − x)/a,  t ≤ x ≤ a   (B.2)

d²u/dx² + λu = 0   (B.3)
u(0) = 0   (B.4)
u(a) = 0   (B.5)
u_n = sin(nπx/a),   λ_n = (nπ/a)²   (B.6)
(See Example 6, Section 2.5 and Example 7, Section 5.2.)
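A quick numerical check of the correspondence in entry (a) (my own sketch, with a = 1 and n = 1 as sample choices): applying a quadrature rule to λ ∫_0^a G(x, t) sin(nπt/a) dt with λ = (nπ/a)² should reproduce sin(nπx/a).

import numpy as np

# Entry (a):  G(x,t) = x(a-t)/a for x <= t,  t(a-x)/a for t <= x,  on (0, a).
a, n = 1.0, 1
lam = (n * np.pi / a) ** 2
G = lambda x, t: np.where(x <= t, x * (a - t) / a, t * (a - x) / a)

tq, wq = np.polynomial.legendre.leggauss(60)
t = 0.5 * a * (tq + 1)                  # Gauss-Legendre nodes mapped to (0, a)
w = 0.5 * a * wq

for x in (0.25, 0.5, 0.75):
    lhs = lam * np.sum(w * G(x, t) * np.sin(n * np.pi * t / a))
    print(lhs, np.sin(n * np.pi * x / a))   # agree up to the quadrature error
                                            # caused by the kink of G at t = x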

(b) u(x) = λ ∫_0^1 K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = sinh x sinh(t − 1)/sinh 1,  0 ≤ x ≤ t
                  = sinh t sinh(x − 1)/sinh 1,  t ≤ x ≤ 1   (B.2)

d²u/dx² − (λ + 1)u = 0   (B.3)
u(0) = 0   (B.4)
u(1) = 0   (B.5)
u_n(x) = sin nπx,   λ_n = −(n²π² + 1)   (B.6)
VR) Seen yt ee il (B.6)
(See Exercise 3, Section 2.5.)

(c) u(x) = −λ ∫_0^1 K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = (1 + x)t,  0 ≤ x ≤ t
                  = (1 + t)x,  t ≤ x ≤ 1   (B.2)

d²u/dx² + λu = 0   (B.3)
u(0) = u′(0)   (B.4)
u(1) = u′(1)   (B.5)

(d) u(x) = λ ∫_0^1 K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = t(x − 1) − 1,  0 ≤ x ≤ t
                  = x(t − 1) − 1,  t ≤ x ≤ 1   (B.2)

d²u/dx² + λu = 0   (B.3)
u(0) = u(1)   (B.4)
u′(0) = u′(1)   (B.5)


(e) u(x) = λ ∫_{-1}^{1} K(x, t) u(t) dt   (B.1)

ain (62), lear, <t


K(z,t) = G(a,t) = (B.2)
= ain -(@ — 1), te <1

@ui x

u(—1) = u(1) (B.4)


u'(—1) = u'(1) (B.5)

(f) u(x) = −λ ∫_0^1 K(x, t) u(t) dt   (B.1)
1
Err eth 1 Oa ot
K(x,t) = G(z,t) = ae 4 ) (B.2)
aa bin Mae NY), eS oy al

du

u(0) = 0   (B.4)
u(1) = 0   (B.5)
u′(0) = u′(1)   (B.6)

(g) u(x) = λ ∫_0^1 K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = (x²/6)(3t − x),  0 ≤ x ≤ t
                  = (t²/6)(3x − t),  t ≤ x ≤ 1   (B.2)

d⁴u/dx⁴ − λu = 0   (B.3)
u(0) = 0   (B.4)
u′(0) = 0   (B.5)
u″(1) = 0   (B.6)
u‴(1) = 0   (B.7)

B.2 GREEN’S FUNCTION IN TERMS OF SPECIAL FUNCTIONS

(h) u(x) = λ² ∫_0^1 t K(x, t) u(t) dt   (B.1)
K(x, t) = G(x, t) = (1/2ν)(x/t)^ν (1 − t^{2ν}),  0 ≤ x ≤ t
                  = (1/2ν)(t/x)^ν (1 − x^{2ν}),  t ≤ x ≤ 1,   ν ≠ 0   (B.2)

x² d²u/dx² + x du/dx + (λ²x² − ν²)u = 0,   ν ≠ 0   (B.3)
u(1) = 0,  u(x) bounded as x → 0   (B.4)
u_n(x) = J_ν(α_n x),   λ_n² = α_n²,  where J_ν(α_n) = 0   (B.5)

Here J_ν(x) is the Bessel function of the first kind defined by the series

J_ν(x) = Σ_{k=0}^∞ [(−1)^k / (k! Γ(ν + k + 1))] (x/2)^{ν + 2k},

and λ² = α_n², where {α_n} are the zeros of the Bessel function, J_ν(α_n) = 0.
For the special case of ν = 0 for J_0(x),

G(x, t) = −ln t,  0 ≤ x ≤ t
        = −ln x,  t ≤ x ≤ 1.
(i) u(x) = λ ∫_{-1}^{1} K(x, t) u(t) dt   (B.1)

K(x, t) = G(x, t) = (1/2) ln[(1 − x)(1 + t)],  x ≤ t
                  = (1/2) ln[(1 − t)(1 + x)],  t ≤ x   (B.2)

(1 − x²) d²u/dx² − 2x du/dx + λu = 0   (B.3)
|u(x)| < ∞,   −1 ≤ x ≤ 1   (B.4)
u_n(x) = P_n(x),   λ_n = n(n + 1),  n an integer   (B.5)

Here P_n(x) is the Legendre polynomial of degree n (n an integer) defined by
the Rodrigues formula,

P_n(x) = [1/(2^n n!)] dⁿ/dxⁿ (x² − 1)ⁿ.   (B.6)
Answers to Exercises

Chapter 1

Exercises 1.1, p. 20

arial & [r@ + oa


lek
(b) f(z) =Cx k_ ,C anarbitrary constant.
5. No

7. Yes

8. Yes

12. MG
h(a )ala = at pe Renee (E.2)

where
Jikajule) = oe) o(e) = (B3)
and
fea (B.4)

D3: — =kin(t), nar = —kpno(t)

d ; Rese a
14. The resulting differential equation = —gp with the initial condition p(0) =
1 give pia) ee

15. a) The possibly needed differentiation of the known measured data g(x) is very
sensitive to the inaccuracy of the measured data g(x)
€). 0 (Emin = 0: 0am) co:

16. u(x) =1+ fe — r)u(t)dt

nm/2
Wig u(a) =) i K(z,é)u(é\dé, K(2,€)=

Exercises 1.2, p. 28

1: (a) Volterra integral equation of the second kind, nonhomogeneous.


(b) Fredholm integral equation of the second kind, nonhomogeneous.
(c) Fredholm integral equation of the first kind, singular (infinite range of
integration and the kernel becomes infinite at x = A), with difference kernel
1/(a — X) [Fourier convolution type, see (1.72)].
(d) Volterra integral equation of the first kind, singular (the kernel becomes
infinite in the range of integration at \= x). (e) Volterra integral equation of the
first kind, singular, with difference kernel 1/./x — t [Laplace convolution type,
see (1.67)]. (f) Fredholm integral equation of the second kind, homogeneous,
singular, with difference kernel (Fourier convolution type). (g) Fredholm
integral equation of the second kind, nonhomogeneous.

. For Exercises 1.1, #9, 10, 11, and 12.

9. Fredholm integral equation of the first kind, singular [infinite range of


integration (—oo, 0o)].
10. Fredholm integral equation of the first kind, singular.
11. Fredholm integral equation of the first kind, singular.
12. Volterra with b(x), Fredholm when b(x) = b, a constant.

. (a) Linear (e) Nonlinear


(b) Nonlinear (f) Nonlinear
(c) Linear (g) Linear

(d) Nonlinear (h) Nonlinear

4. A linear nonhomogeneous Fredholm integral equation (in three dimensions).


1
5. (a) Weak singularity, a = 5 oat) 1k

(b) Weak singularity,0 <a <1, k(z,t) = 1.


(c) Strong singularity or Cauchy singular for 1(c), weak singularity for 1(d),
(e).

6. A Cauchy singular Fredholm integral equation of the first kind.

7. (b) Weak singularity, see part (a).

@ae)=F=2f
(lee de ye
(+
1 1
Ja
Exercises 1.3, p. 40

WOH ee iace tice


foe il“(e — OF dé.
3. (a) u(x =r fc
(x —€ yutede + fe ~ eale)de + 12 +e

d2
4. 3,3 + (2A-1)u=0
2
5. (a) a + Au(xz) = 0
(b) u(Q). = 0, u(1) = 0
(Cee sin nit. = 1-5 1 — 1, 2, 3,---

Exercises 1.4, p. 73

(a) oes

(b) xo
1
() s?(s — 2)

() 5— 5, Uf) = C{uld)}
1 _ sU(s)
(€) 5 [sU(s)
—uO] = <=

2s
Ori?
mayen.

co) [teat
0
(c) e3* sin /5a
V5
(d) ze** + e27 +1
Lets wt)
ON Gan
Orel tent
. (a) fi(t) # fo(t) when t = 3 as in (i) and (ii) of part (a). Indeed f(t) and
f2(t) could differ on a set of points t;, t2,---, tn on (0, 00).

-(@) f(a) = erly)


@) 2= 20). 016) = Clue)
bud) =61{ b=
d,
. (b) If f(x) is such that e is continuous on (0, 00), and

(i) Jim f(zjer** =0, (E.1)


(ii) jim faa eae cen) (E.2)
then, (1.69) follows.

. f@e)=bScosx

» Keta7—0nni(Es))toihave

ef BORO = [7 fh-oplod. (62)


Then, we consider the special case with f;(—t) = f(t), which can be writ-
ten as fo(t) = fo(t) = fi(—t). Also we have from Exercise 9(b) that
F{fi(—t)} = Fi(A), and for our case this becomes F { f; (—t)} =F {fo(t)} =
F(A). If we use this fo(¢) = f;(—t) in (E.2), we obtain the Parseval equality,

51 | ORO = [:fi(-t)fi
ase
(—bat

=| AWwhwau,
oo) nr

—0o

x | lnePa=f |acPae (B.3)


1 (oe) 5 Co

—oo —oo

after a simple change of variable u = —t in the right integral, and the use of
ff=\f/.
15. (a) F(A) = G(A) + KQ)FQ), FO) = F{f(2)}, GA) = F{f(x)},
K(A) = F{k(a)}
b) F = G(A) Ny
patie ie aes
ixXr G(A)
De) Saaryayeay ae heme’
1620(2).—= nb =
a ee F(t)
ToOuKa
MK. itt
dt (EZ)

where F(t) and K (t) are the Fourier transforms of f(x) and k(x), respectively.
Lie ; ‘A .
= a1 =|2| 11b*—-a —b|2|
Coe coor SF
18. s
Biel sf ea sin » 1
oO)= 5 fimeHE(Fm).
TA)
19 (a) F(A) = —y a>0O

(b) U(A) = _ TO)


oO +T(A)U(A), Sa)
U(A) = @ TSPO)

Exercises 1.5, p. 94

1. @ |ae
ge = 71 70-7854
(b) 0.7828

Ua? - 96F — o.o10s


(©|Er(f)l< (12)(16)
=<

(d) The absolute value of the actual error from the exact value of the integral
in part (a) and the numerical approximation in part (b) is

|0.7854 − 0.7828| = 0.0026


as compared to the (larger!) estimate of its upper bound in part (c) as 0.0104.

. (a) In2 = 0.69315


(b) 0.69325

(c) |Es(f)| < eS 0.00052


ae (180)(64)
(d) The absolute value of the actual error is |0.69315 − 0.69325| = 0.0001,
while the estimate of an upper bound of such error is 0.00052 as in part (c), the
latter, of course, is conservative.

3. (a) Trapezoidal rule: 0.562167


(b) Simpson’s rule: 0.561956. Exact: 0.561965

4. See the answer to Exercise 5 of Section 7.2.

5. (a) u(x) = 2(x — 0.5)


(a — 1) — 1.4699862(2 — 1) — 0.2203782x(x — 0.5).
(b) You may consult (the rather long formula) as the answer in “The Student’s
Solution Manual"! to accompany this book (see the end of the preface for more
information). :

Chapter 2

Exercises 2.1, p. 102

i (a) N(s) =noF(s) + kF(s)N(s), N(s) = a

(b) N(s) = :
Ss nb) =nee «*) ne > k > 0

OEdt AO AD) tn
t
(b) n(t) = no — a n(7)dr
0
(c) n(t) = noe *#

if no
t
_ (a) n(t) = noew*/T a —(1/T)(t-7)
eae i MT ENS) Sea SIE:
N(s) = L{n(t)}
(b) n(@) noe 1/7) Ht
. (a) g(t) = boh(t)
(b) b(t — 7) p(r)m(r)
Ar
B
© | b(t — r)p(r)m(r)dr

B
(d) b(t) = boh(t) + fsb(t — r)p(r)m(r)dr
a

‘Jerri, 1999

Exercises 2.2, p. 105

d
ie (a) = = 6

2. (ay
#6, do

(b) Is?O,(s) = —a®(s) — bs®(s), where @,(s) = L{O,(t)}, ®(s) =


L{O(t)}.
(c) ot) = — Ne, 6,(t) = 1+ (¢-1)e
4
(d) ¢(t) = e~* (-5 +,2¢ — i),9.(t) =1+e-* (-5 ot = 1)

3. (a) What should be the potential distribution on the boundary of the disk such
that the potential u(r, 6) at a given point (r,0) takes a predetermined value
u(r, 8) = g(r,6)?
Exercises 2.3, p. 112

1. (a) Di o(z) = K («,5)I, (5) +K («,=) Lo @

l
(b) D(z) i.K (a, €)p(€)d€, where K (a, €) is as given in (2.26).
0

2. Here we consider two cases:”

(i) a close to an exact one, where we look at the density distribution to be zero
l i} DL BY
outside the beads on (0 ; 5) (7 3) (—,/), and one for the two intervals of
l l Dimou
the lengths of the beads on (=, D and (> q” The answer here is

(a) =9 f°Gla,g)ae +9 f° G(ee)ae,

2The detailed solution of this problem (in over five pages), with figures is found in “The Student’s Solution
Manual" to accompany this book [Jerri, 1999]. See the end of the preface for more information.

3a | Te _ 227
800T) 288% 36007) ’
“1 4002? + 16/? — 1912! (2
800 To 288To
1 1800x? + 721? — 94721

ee
9i(i — x) i
aah eae
cr aye
Bt 8
Wcv= 9
800T> 800T) — 7200 To 4 3
gi(i—a) 1 1442? + 641? — 19921
800T> 288 To
_ __1 15191? — 489421 + 36002” ee 3
77200 To Ree ot!
Qii=2x) 17l(l=2) . 2531 —2) 3l wees
800To 288T> «36007 Ace tox
with its above five branches.

(ii) An approximation to the problem, where we take the weights of the two
beads as two point forces located at their respective centers of masses €; =
10 1a 1 aio Re eye ea .
De aiea many eh “7h ying ait a faa
eetLala,ea 9!
Pee lg
oe 171
ead
Via) = (3 (=.55) + 5K (= a)
Aijig sil, ig ta) _ légs
Tel |205 0m 12) 240) eS Toe
0<a<H

y(x) =4 nls
Tot [20° 40pl- +e12lg Boe
1 E 91 yetle 271 + 8x
24 120Tp
<a< tt

ey)1 [pe
[lg Reh
9l lg Sey
ee 171 fl 141
= yee — 142
eee
To E; 400 Donia © 2) iy
tl <a<l.

eae
y(x) = (x°
gcr
oT, , 3 ae— 22°1 2 + 3 I’)

5. —f(yo)
Va =-y2er =| Aw
yo

0 OS

Exercises 2.4, p. 116

2. (a) u(x) = —r + fe — x)u(t)dt


0

O)y(a)=2+ f(ta)ylta 0

Khile\ =s -{ (Qn — t)u(t)dt


(0)

Exercises 2.5, p. 122

b
ya) = | KG, tyyttyat, ho(2,t) =
0

0 re OK
2. (b) ae
>
craves a<t

0 (tah ©GSD 20b=a


— 9, l(t — a)(t — 6)/(b — a)] = (b—a) ni (b —a) - ho. =

(c) Away from x = t, the kernel K(z,t) is a linear function in z, hence it


; 2
OK
satisfies eel =x (QO) 2b seit.

- @) TS =
Lee A+ 1u(a), w(0)
0)==0,u(t)==0
0,u(1)
(b) un(x) = sinnraz, An = —n?n? - 1

Exercises 2.6, p. 127


2uo sin A
1. A(A) = =a =
2. For a rectangular aperture of width 2a, the boundary value problem in v(z, y)

Bu | By
1S

==) OO! OOOO


<< YY <.005
Or? OF”
v(0,y) = fly), -a<y<a
55
Ov
(0:4) = Olul > a
For a circular aperture of radius a, the boundary value problem in v(r, z) is

TTD LDTt LIAS heh Ma am


Or2 r Or Oz2 i
v(r,0)
= f(r),0<r<a
CLAS SS venee
Oz

Ba ciC8) [O K@ Boa. zr>0



Asim, woes A .0
where K(,3)= 4288 Ozer <> 0

0, 4g > Il
F(z) = {He) Meee”

Chapter 3

Exercises 3.1, p. 146

lL. (et; A) = TAG)


Meni
T Spe
Ae =" @ veo,
u(x) = f@)+rAf, YF Od ;

2. [(z,t;A) =sinhA(z -—t), u(x) = g(x) + a0 sinh A(x — t)g(t)dt


0
3. See answers for Exercises | and 2.
x 0° n ‘
4. @u(t)=lt+etyt+ =) So =€

(b) u(x) = e* — 1

5 o) = coshx

Cua) 5sine

9. F(x) =-1+Ce*,
23 ae
a(g\—Cress=ress C= 1

10. (b) The inequalities generated for the bounds (E.1) of the iterated kernels, in
particular the n! in the denominator of the bound.
r y
11. u(ax,y) = f(x,y) +a i K (x,y; &,n)u(&, n)d€dn
where f(z, y) is a given function.

Exercises 3.2, p. 154

1. (a) u(z) = 1+ ipsin(x — t)u(t)dt


0
ae
(b) u(x) =1+ 5,
(e) The answers are given with the questions of Exercises 3 and 4 of Section
Hel

— [ (x — t)*~'
f(t)dt [This is equation (3.40).]
0

(b) u(x) = eF /2( 4? +2)-1

5. f(x) in the Fredholm integral equation (E.2) is restricted to only the lin-
ear combination f(x) = Asinaz + Bcosz of sinz and cosz, where A =
1
ifu(€) cos dé, and B = rheu(€) sin €d€ are constants. On the other hand
0
f (x) of the Volterra integral equation of the first kind (E.1) can be the more gen-
xz

eral function f(x) = g(x) sinxz + h(x) cos x where g(x) = [ u(€) cos €dé
0
and h(x) =i u(€) sin Ed€.
0
Exercises 3.3, p. 162

1. (a), (b)

ay Exact u(x) = sinz Numerical (approximate) u(z),n = 8

0 0.0 0.0
0.5 0.4794 0.50
1.0 0.8415 0.875
ES 0.9975 1.03125
2.0 0.9093 0.92969
Ded 0.5985 0.5957
3.0 0.1411 0.1128
3.5 -0.3508 -0.3983
4.0 -0.7568 -0.8098

2. (a) ula) = 2 + se*[l — (3x + 1)]


(b), (c)
£ | Exact u(x) | Numerical (approximate) u(x), n = 8
0.0 0.0 0.0
0.625 1.4352 1.6667
125 9.6435 13.7124
1.875 62.0188 109.784
2.5 402.398 886.095
3.125 |2,620.801 7,169.5

Table continued.

x | Exact u(z) | Numerical (approximate) u(x), n = 8


345 17,085.455 58,036.9
4.375 111,405.7 469,843.97
5.0 726,449.75 3,803,722.4

3. (a), (b)

2 Exact u(x) = cosx — sing | Numerical u(x),n = 8

0 1.0 1.0
~ [oo -0.1107
Fg |2120 -1.2195
3
4 -1.414 - 1.6740

T -1.0 -1.2055

= 0.0 -0.0859

at 1.0 1.0314
tr
7 1.414 1.4942

2m, 0 1:0 1.0335

(c) Uo is arbitrary, find u; and uz in terms of up.

Chapter 4

Exercises 4.1, p. 193

1. (a) A set of solutions y,(%) = Asin(naaz/l), n = 1, 2, 3,---, where A is an


arbitrary constant.
(D) rn (@ A cos (yar
1) rns ed poe
1
(c) yn(x) = Acos(n + 5)ra/I, Cp Os NG Wie
(d)y(x) = Asin Az, \ > 0
(e) y(x) = AcosAx + Bsin Xz, real
CuCl i= cmon ee 0)
inh A(] —
(@) y(2) = A)

sinb(1 — €) sinbr
wee jy ASSES
~ (b) G(z,
€) =
sin bé sin b(1
(1-2)
— x)
ae
pepe ye”

“2(
7
G(cre) =
2a én<}
7

sinh b(1 — €) sinh bz


<p
bsinh b TUES ees
oS - (vb) G(x,£)=
sinh b€é sinh b(1 — x)
aS
bsinh b
SS ESSE
» €sest
< aps

Kksin =
ONE = =ee
+ (kr)?

y(x) = 2[sinhz — sinh(z — 1) — sinh 1]

a(1—€), O<z<é
- y(t) = =} G(x, €)f(E y(§))d&, G(@,€) = |

Zz sin Ax
- y(t) = ay

_ ula) = £(22° +30? - 172-5) -A f (ale, ula,


woe| (ee
(€-1)r—1],
lsat Wes tS
or <i

bee ari 2 Re :
Syay= 7 sin > + a? 8 + af G(a, €)y()d€,
2

1
— sin 5 (¢ — 2); Sc ieee

G(z,£) =
— sin 5(a - €), Exar

13: y(x) = sinha + (1—2z)e”

(6-2) ace al ee Se
14. G(z,€) =
(¢ Lad, Gsyee

(Eee)Ge Uren
15: G(z,€) =
(Litt Gam meee a1

—sin~(€-—z), -l<ar<&
16. G(x, €)=
= sin —(ti=-4)) See 1

We (b) (i) A unique Green’s function does exist, since the homogeneous boundary
value problems has only the trivial solution y(x) = 0,

€-1+(€-2)2, 0<a2<é
G(x,
€) =
(Coa (see ES

(ii) The homogeneous boundary value has an infinite number of solutions


u(x) = Cas C is arbitrary, thus the Green’s function does not exist.
(iii) The Green’s function does not exist, since this problem has infinity of
solutions y(x) = Asin x. See the answer to part (ii).

cos(x — € + 3)
en(1/2\e ee
SD
potas 8icos(é —2+1/2 ee
Qsn(1/2) 7 ° 27S!
20. Uc )e— ion — 1)sinre
4 Tv
21. (b) Un(Z) = Gina, i = 2a

1
ey
eo)
; n+1
ee sinnnrz, ao | 2(-1)"*1
|. sno ngrdr. = —, c=
(0) 2, nm
n=

23 2i(C) f(a)= Seca Bp


=5 (322 —1),e?* ~ 2.6392? + 2.9232 + 0.934

(d)
xz e* (exact) Three-term Legendre series approximation of e2*

-0.75 0.22 0.226


-0.50 0.37 0.132
-0.25 0.61 0.368
0.00 1.00 0.934
0.25 1.65 1.830
0.50 DAL 3.055
O75 4.48 4.610

1 sinhn(m —y)sinhnn, O<n<y


24. (c) gn(y,n) = nsinh nt
sinhnysinhn(t—7), y<n<a

(d) U(n,y) = ipgn(y, n)F(n,n)dn

(cpu) t= -SS U(n,y) sin nz


it

=== Dosinns f"9n(y,n)F(n,n)dn


9) co ; T T ;

(e,f) = 5 sane [ 9n(y, 7) ff f (En) sinned] d

=f iP EY>an(yn)sinnesinng f(E,n)d&dn

when? F(§, n)d€dn

(f) G(x, y; €,) 28 a y,7) sin nz sin n€


fen

Exercises 4.2, p. 206

tees / Gey eee 1) Fe


(Le) a ORSeg

ctl 2), es rail



nm/2 x! re
Dy (a) y(x) =-/F G(x, dae 1 ee
(x, E)y(€)dé 06”

e(1-2¢), 0<a<é
vie

G(a,€)

O<ar<

-5&(E-a)(1-2), €<a<1
ule)az a=g(2x*a +32 ? —172-5)
172 —5)— -d f(a z (e,e)u()lds,

(6¢-2)e+€—1, OS e<€
G(x,
f) =
(Gaede:

y(a)=e" =) ifG(o, €y(Odé,

IRE Ses
~ sin 5 (2 — &), Ga a

7. See equation (4.65). Given u(r, @) find f(p, ¢) in (4.65).

Chapter 5

Exercises 5.1, p. 232



2
1. (a) u(x) = y_ Sina,»
i 24)
27
(b)b u(a) == a2 + ee
14 272 (Ana Be ; x + cos2)
— 4Ar sin
2
(c) u(x) = 22
—4 + =y sin?x

» ‘
(d) u(a) = ————(2 cosz + mA sin 2z)
4+ 12)?
(e) u(x) = e* + A(e — 2)(5a? — 3)

(f) u(x) = 2x — ;

(g) u(x) = 60x? + 60x + 24

. (a)Ay = 1/7, ui (x) = sin, u(x) = Asin, where A is an arbitrary constant.


(b) Ai = 2, u(x) = sing, u(x) = Asin, where A is an arbitrary constant.
(c) Ay = 2/7, Ag = —2/7, u(x) = Acosz, u2(x) = Bsinz, where A and
B are arbitrary constants.
(d) Ay = Aq = —3, u(x) = u(x) = 2 — 22?
(e) u(x) = 0

. (a) The eigenvalues of the kernel are Ay} = 1/7 and Az = —(1/7) with their
corresponding (normalized) eigenfunctions ¢,(r%) = 1/V2z(sinx + cosz)
and ¢2(x) = 1/V2zn(sin x — cos z), respectively.
(i) A = 1/V2rz is not an eigenvalue, i.e., \ = 1//2a 4 F1/7, hence the
problem in (E.1) has a unique solution for arbitrary f(x), which, of course
includes the given function f(z) = 2”.
(ii) X = 1/7 is one of the eigenvalues A; = 1/7, which corresponds to the
eigenfunction ¢;(x) = 1/V2z(sinz + cosz).
Since the kernel is symmetric, Theorem 2 requires that in order for (E.1) to
have an infinity of solutions, the given nonhomogeneous term f(z) = sin 3x
musi be orthogonal to the eigenfunction ¢;(x) corresponding to A; = 1/7,
which happened to be the case since Je" (sin x +cosz) sin 3rdz = 0.
Oi@QA= if feg@) =a?
Sal EW, An? yk 1 A i1 a 1
1a) Sa ee (2 rt) sing + 7? (1 rt) cos]

(b) (ii) u(x) = sin 3x + “isin x + cos a], cis an arbitrary constant (an infinity
T
of solutions!).

. The eigenvalue in problem 2(b) is A; = 2, and the answer in I(a) requires


\ # 2. This is consistent with Fredholm alternative (Theorem 1), which
insists that 1 # A, = 2 for the nonhomogeneous equation in 2(a) to have a

unique solution. When A = A; = 2 in the equation of problem l(a), then


according to the Fredholm alternative of Theorem 2, this equation will have a
solution only if its nonhomogeneous term (here f(x) = sin x) is orthogonal
to the eigenfunction ¢;(z) = sinz (found in problem 2(b)) corresponding
27

to its eigenvalue A; = 2, which is not the case since if sinzsinzdr =


)
20
sin? xdx = 7. Hence for \ = 2, the equaiton in problem 1(a) does not
0
have a solution.

. (a) u(x) = C(x — x”), C arbitrary


(bya (a= G0:

(c) u(x) = Cla], C arbitrary

. The kernel K (x,t) = sin(x — t) of 1(d) must not have real eigenvalues, as it
is clear form the answer of problem 2(d) with its denominator of 4 + 1?.? not
vanishing for all real values of A! (Indeed, following the steps around (E.6) of
2
Example 4 we find the two eigenvalues as pure imaginary ones A = eee)
T

. u(x) = 0 in problem 2(a).

O(a) Ae NS =e
(b)
\ # £2
(c) The two systems corresponding to A; and Ay become either incompatible or
redundant. So there exists either no solution or not a unique solution depending
on f(z).
@MA =A = 2, ¢1(¢) =A — Zz); N=. = —2,
do(2) = BW = 32)
(e) u(x) = f(x) + A(1 — 2) (an infinity of solutions!)

1+ vV1-—-4bA
10. (a) u(x) = ed 5 an

1
(b) u(x) = X

Li (a), (b) The error for both cases is of about the order 10~?, so it appears that
we have to include many more terms of the Maclaurin series of cos xt than the
above two or three!

12. (a) v(x) = e® — x — 0.501022? — 0.167x? — 0.041824

(c)

a 0.0 0.1 0.25 0.5 0.75 1.0

Approximate
Solution 1.0 0.999989 0.999937 0.999962 1.00144 1.00833

Exact solution,
Teor Oe leO 1.0 1.0 1.0 1.0

1 ertn
13. (a) v(x) = 2+ ( u(t)dt
a a!
(b) v(x) = 3x

14.

Kiley) = | K(s,2)K yds


= [oD K(s,y)ds

= |K(o,)
KC Bas= Kiya)
after using K = K. The same is done for Ko(z,y).

15. The eigenvalue of the kernel is 43 = —2 corresponding to the eigenfunction


gi(x) = sin(Inz).
In case (i), A = 3 # —Aj, hence according to Theorem | of the Fredholm
alternative, the integral equation (E.1) has a unique solution
2X
u(x) = 2° + aD sin(la2).2 p= 32 Ap =o

(see steps (E.3)—-(E.6) in Example 5).


For case (ii) we have the eigenvalue 4 = —2 in (E.1), which is equal to
A, = —2 the eigenvalue of the kernel. So we appeal to Theorem 4 besides
Theorem 1, which requires the eigenfunction ¢;(x) = c (found in Example
6) to be orthogonal to f(x) (here f(x) = cos7x), which is the case. Hence
we follow the steps (E.3)-(E.6) in Example 5 to find a solution or (solutions)
to (E.1) with f(z) = cosmz. The final answer is an infinity of solutions
u(x) = cos mx—2c, sin(In x) for (E.1) with A = Ay = —2and f(x) = cosmz.

Exercises 5.2, p. 251


d?
1. (b) = +u=0, u(x) = Acosz+ Bsinz

(c) di (x) = 20 Ly Oe) \/2sin x



© f iEcos? (x + t)dxdt = =

(f) K(z,t) = cosxcost


— sinzsint = cos(x
+ t)

—4 cosz sin x )
ula) =2-a( S524 A+ 2/7
zt
. (a) The kernel is square integrable since i / K?(a, t)dxdt < oo, due to
0 Jo
|sin
xzcost|? < 1 and|coszsint|? < 1.
d?u 1
Org On es CaO Oa
2az sin /1 + Agu ; “pee
(c) u(x2) = conde + AD) ESET where VIF Hag = wh. and

be Cis) Glaeser
ee
oe Rie D(BR) 2 (QED ne? CERO em ORS:2)
z g k even
= fn k? —1?
Ok odd,

(d)\ =24 Ay = 4n? —1,n = 1,2,---. Hence, according to the Fredholm


alternative (as stated in Theorem 1), the integral equation (E.1) has a unique
solution.

4. (b) See (E.3) and (E.4) of Example 8 for u(x) = 1 and f(x) =
respectively.

Exercises 5.3, p. 269


1 2
La(a). Cie — 57 Pilz, t) =—-xz-—t+22t+ 3

D(ayt;d) = (#- 21) + (24-201 2)

Nae e\c
(BAO. valk
ute DUO —
g—2t+X(e+t
— 2at — 2/3)
Resolvent kernel: I(x, t; A) =
1+ 2/2 +2/6
uC) ae ees Ot
1+A/2+)A7/6|3 2 6\6

ee
a) = 2°
a r,t; eeeBe
ze u(x)
fees

Beles De iz
ee
ne ey oe Ee1—r+ )
2/18
— 3Azd
_ 34(2 + 6)
MONS = am ris
sin(x + t) + (1A/2) cos(x — t) 4
(d) t; A)=
qd) D(z,tN a /41-2
ee SS
mF 2 abet

1
u(x) = 1+ 1 —7?r?/4 (?rsinzg +2dcosz), =
D

ecb) u(r)
=f (2)+rAfo ze f(t)dt
. E(w, tA) =sin(a:— 2t),0(2)i=2

. See answers to problems 1(d) and S.

u(x) = 2+ ers (A’rsina


+ 2Acosz), 7 4=
4
1— 72/4 ra

Z
u(x) =3+ : (rsing +2drcosz), A? és
= 1 — 7?\2/4 i me
. (a) M1 ~d

(b) Ai ~ 2.58
. (a) (i) $3(x) = 4.1818 sinx
(ii) Sg(x) = 4.91689 sin x — 1.32080 sin 27 + 0.28337 sin 3x — 0.03132 sin 4z
(b)
ay -| -0.5 0.0 0.5 1.0

Bxact u(z)= 37 9=3:0 -1.5 O10) 25 3.0


S3(z) -3.519 = -2.005 0.0 2.005 35119.
Sg (zx) -3.0001 -1.50005 0.0 1.50005 3.0001

. (a) S4(x) = 1 — 1.403832 + 0.2499852? + 0.02184132%. For the collocation


LS =,1; S4(x)
oy
= 1 — 1.4937882 +0.464096x? — 0.1025009z°.

(b) So(x) = —0.8861 sinz + cosx


(c)So (open a1

OF (ayG) S3(a)i=3:-2997 sina

10. (a) S4(x) = 1 — 1.52 + 0.52? — 0.12? [very close to the collocation method
of exercise 8(a)]
(b) S2(x) = —0.8162 sinz + 0.8997 cosx
(C)iS3 (a) =sen 5(exact solution)

Exercises 5.4, p. 282

2e(b) For f(s) ae — a”) and u(x) = 1 0n (0,1), see their Fourier coefficients
in (E.3) and (E.4) of Example 8, respectively.

3. (a)
b n

as / poeta) u(t)dt
eal

n b
= SCO) ifbj,(t)u(t)dt
k= cS

n b

= S > crax(2), Ck =| b, (t)u(t)dt


k=1 a
where c, is defined by the above integral as given in (5.8).
(b) No, since a piecewise continuous function u(t) in the integral of (E.1) can
still give a continuous output f(z) to (E.1), ie., there is the smoothing of the
integration ooaaon on u(t)!

(c) f(x => CnOn(z

oS Cn AnOn(x

For this solution a to exist, its series must be convergent, but nous that
the eigenvalues \,, increase with n (see Example 7 with its \,, = n?77), then
Cn must die out (relatively) very fast which puts a lot of strain (or restriction) on
the class of functions f(z) that allow its associated Fredholm integral equation
of the first kind (E.1) to have a solution.

4. f(z) is restricted to a linear combination of the eigenfunctions of K (a, t), i.e

=) _bidi(z)
i=l

where

b; = i u(t)c;(t)dt

1 :
5.(a) i) Ap= - corresponding to ¢;(x) = sinz+cosz, Az = oe correspond-
T
ing to ¢o(x) = sinz — cosa.
20
(ii) i sin(x + t)u(t)dt =
0
20 27
= sina [ u(t) cos tdt + cos a u(t) sin tdt
0 0
= b; sinz + by cosz
(c) If there are infinite eigenfunctions for the kernel, and they are complete,
then there can be no nontrivial function g(x) that is orthogonal to all of the
eigenfunctions (see a version of Picard’s theorem in Theorem 7).

7. (b) No, we cannot limit ourselves to search for only continuous input u(x) (see
part (a).)
(c) Here u(t) can have a large jump discontinuity, for example, which cor-
responds to practically no change in the continuous f(z), i.e., the input and
output are not related in a continuous way. Thus the problem is called ill-posed,
i.e., not stable, since a very small change in the input may cause a very large
change in the output.

8. If f(t) 4 0, this would mean that such a static load of density distribution
p(x) = f(x) in (2.28) gives no deflection, i.e., y(z) = 0 at any point, but we
expect some deflection y(x) as in (2.28) and Fig. 2.2.

Exercises 5.5, p. 295


1.. (a)
(i) 1.005133, 1.00380, 1.004589, 1.00888
(ii) 1.00042, 1.00038, 1.00034, 1.00032, 1.00031, 1.00032, 1.00035, 1.00042,
1.00051, 1.00063, 1.00078
(b) Example 20: See the answer for 1(a)i, 11.
Example 6: 1.003, 1.0027, 1.0024, 1.0021, 1.002, 1.002, 1.002, 1.0025,
1.0037, 1.0056, 1.009

2. (a), (b)
“iy (a) Simpson’s rule (b) Exact Example 20 (Trapezoidal rule)

0 0.99987 1 1.013
0.5 0.99992 1 1.009
1.0 0.99967 1 1021

s (@)

(i) u(0) = 1.0, u(0.5) = 0.367, u(1.0) = —0.11019 [see part (b)]

(ii) up = 1.0, uy = 0.85493, ue = 0.7189, ug = 0.59109, ug = 0.47069,


us = 0.35699, ug = 0.249, uz = 0.147213, ug = 0.05007, ug = —0.04260,
ujo = —0.1320 [see part (b)]

(b) (a,1)

x 0.0) 0:5 1.0


Collocation (Example 16),
S3(a) =1—1.441¢+0.310z? 1 0.3568 -0.1310
Numerical, trapezoidal
rule 1 0.367. * -0.1102
Exact solution,
Ue) = ea 0/2 1 0.3565 -0.1321

(a,il)

z OS! P. ss) 4 28)

Approximate
Part (a,i) 1
Part (a,1i) o*ioe)NnNn onl| ee)
— aeen=) owdss—~ o is*)Nn~

Exact, .(ii(@)=
e~* — g/2) = oS ooNnGN o ~ — \o oS NnKe)— S aS—~ o OWNnONN

Approximate u(x) =
S3(z) = 1—- 1.4412
+0.3102? 1 0.859 0.7242 0.5956 0.4732 0.3568

- ~ — - -0.11
0.249 0.147 0.05 -0.043 -0.132
0.249 0.147 0.05 -0.043 -0.132
0.247 0.143 0.046 -0.046 -0.131

4. @) u(y S115 (0) Sse Gets


(c) Uo =1.51134,. uw = 1.47727, we = 1.44503, \ ug = 1.41543,
ug = 1.38912, us = 1.36656, ug = 1.34798, wuz = 1.33355,
Ug = 1.32326, wtp = 1.31709, wip = 1.31504, 1, = 1.31504,
U12 = Pot so: U13 = ome U14 = 1.61142, U15 = oz
wie = LOLI, Ui7 = 1.5998, wig = 1.49989, wig = 1.50131,
uU20 = 1.51134

Sat yee aD
n

At At At At
¢ = 5 Koo) uo —4— Koti —- -—4 Ko 2n-1U2n-1—-
— Koontn = fo
3 3 3 3

At ING At At At
pea (1
ia 1K) Ui —2—, iota sit 4-2 Ki 2n-1U2n-1 = i:

“Ki onlin — fi

At At At
=a no a 4-7 Kian—1U2n-1 ste (1of = Kon Un = fn

(b)

0) 0.9987
0.5 0.9992
1.0 0.99967

(c) Comparison of results:


ae Exact Example 6 (Exercise 1(b)) Simpson rule

0 1.0 1.003 0.99987


0.5 1.0 1.002 0.99992
1.0 1.0 1.009 0.99967

6. (a) u(0) = 0.0, u(1/2) = 1.73 (corresponds to A = 16), u(1) = 0.0


7. (a) up = 0,u1 = 1.73,ug = —1.73,u3 = 0, largest finite eigenvalue is
Naat
(b) The closest exact eigenvalue A2 = (27)? & 39, which corresponds to
n = 2 for u2(xz) = V2sin 2rz.

z | (Oe V2 eye be
Numericals n-— 3) OL0F 21h73) = 17390:0
Exact, u2(z)
=V2sin2xx 0.0 1.23 -1.23 0.0

8. Numerical method: u(0) = 0.0, u(1/4) = 0.46, u(1/2) = 0.90, u(3/4) =


1.28, u(1) = 1.58
nea a(n) =(1)2(< a 5)(e—1)—(0.367)4a(2-1)-+(~0.1102)(2)(2) ( : >)

lo s(x ~ 1) —1.468a(2 — 1) — 0.22042 («7 5)


See the following table for comparison of the results.

| 2; |u(x) exact |a(x) (interp.) |u(2;) (num.exerc.3a(ii)) |


0.000 1.00000 1.00000 1.00000
0.100 0.85484 0.86094 0.85493
0.200 0.71873 0.72810 0.7189
0.300 0.59082 0.60150 0.59109
0.400 0.47032 0.48114 0.47069
0.500 0.35653 0.36700 0.35699
0.600 0.24881 0.25910 0.249
0.700 0.14659 0.15742 0.147213
0.800 0.04933 0.06198 0.05007
0.900 | —0.04343 —0.02722 —0.0426
1.000 | —0.13212 —0.11020 —0.1320

Chapter7

Exercises 7.1, p. 348


∫_0^∞ dx/(100 + x²) = ∫_0^∞ e^{-βx} [e^{βx}/(100 + x²)] dx;

let u = βx, hence du = β dx and the above equation becomes

(1/β) ∫_0^∞ e^{-u} [e^{u}/(100 + (u/β)²)] du.

(i) If we use the Gauss-Laguerre rule with N = 8, β = 0.2, we have an
approximate answer of 0.14999. The exact answer is π/20 = 0.1570796.

(ii) For the Gauss-rational rule, we have x = v(ξ) as in (7.24). For N = 8, β = 10, we
prepare the data for the weights w_i (which carry the factor 1/(1 + ξ)² from the map) and
use the Gauss-Laguerre quadrature to have the approximate value of the integral 0.15707944.
The exact answer is π/20 = 0.15707963.
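The Gauss-Laguerre value quoted in (i) can be reproduced with a few lines (a sketch of mine, not the book's code); the β-scaled rule is exactly the substitution u = βx used above.

import numpy as np
from scipy.special import roots_laguerre

# int_0^inf dx/(100 + x^2) = pi/20, via the beta-scaled Gauss-Laguerre rule:
# with u = beta*x,  I = (1/beta) int_0^inf e^{-u} [ e^{u} g(u/beta) ] du.
g = lambda x: 1.0 / (100.0 + x**2)
beta, N = 0.2, 8
xl, wl = roots_laguerre(N)
print(np.sum(wl * np.exp(xl) * g(xl / beta)) / beta, np.pi / 20)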

CO [e-@)

e *sinzdz = ‘i e 8% 8% e—* sin ada


0 0
let u = Bax, the above equation becomes,

jas sin tds, = sf sin


) sin (4)du.

(i) If we use 3 = 1, we have

aioe (5) a= f
ae U eo)
(B) sin (= du = | e “sinu du

and an eight-point Gauss-Laguerre rule gives a value of 0.499988. The exact


answer is 0.5.

(ii) For the Gauss-rational rule, let s = ———


26 — #, then
62P1

eer
fe eesinade == 28 EI)
| 1S ds

where

F(g) =i) — fits sin (3


= Wi 6).

For Nx=6, ae
= 0 we prepare the data in the same way as in part (a) to have

268 ye noeke
‘= = 0.4955. The exact answer is 0.5.

Since the integrated function in (E.2), with respect to ρ(y) = e^{-y} on (0, ∞),
is f(y) = (y + 1)³, a polynomial of degree m = 3, then a two-point Laguerre
rule will give the exact result because m = 3 ≤ 2N − 1 = 2(2) − 1 = 3.
With x_1 = 2 + √2, x_2 = 2 − √2, w_1 = (1/4)(2 − √2), and w_2 = (1/4)(2 + √2),
the two-point shifted Laguerre rule in (E.2) gives

∫_1^∞ e^{-x} x³ dx = 16/e.

(a) The result is not exact for the two-point Gauss-Hermite rule because the func-
tion f(x) = x⁴ is of degree 4 > 2N − 1 = 2(2) − 1 = 3.

Exercises 7.2, p. 357

1. @ eo: =10(0)=1>0:-0=1
Using the trapezoidal rule (for u;) with At = 0.05, to = 0,t, = 0.05,

u, = 0.89 + oF [3.29u0 AF 3ui]|, u, = 1.051081



0.1
Simpson’s Rule , At = 0.05, tg = 0.10,h = >

al
ua = 0.76 + “(3.5610 + 13.16u; + 3u2], ue = 1.105127

Simpson’s-~ Rule At =" 0.055t3 = 015;

uz =0.61+ =(0.05)(3.81U0 + 10.68u; + 9.87u2 + 3us}],


uz = 1.161784385

Extended Simpson’s Rule, At = 0.05, t, = 0.20,


0.05
ug =0.44+ 3 [4.04u0 + 15.24u, + 7.12u2 + 13.16u3 + 3ua],

(Sy — PANoe

(b) The comparison of these results with the exact solution u(x) = e” are
presented in the following table.

1 ie O(N) Wy See (Cee) Cire, =H = WH

0 0.00 1.000000 1.000000 0.000000


1 0.05 1.051081 1.051271 =1:9001- <10--
2) 5010-91. 105197 1.105171 -4.4128 x 10-4
3 0.15 1.161784 1.161834 -4.9858 x107°
A2n0.202 81221334 1.221403 -6.8356 x 10-5

See also the answer to Exercise 4(e) for a bit more accurate results.

2. (a)% = 1.05263
uz = 1.16344
tig = 1.28201
u7 = 1.41863
ug = 1.56959
(b) The comparison of the numerical results with the exact solution u(x) = e*
is given in the following table.

1 Lj a; (num.) uw; =e"! (exact) errore; =; vu;

b 0.05" 5 1:052632 10s 1271 1.361 x10~3


86 hOMS4 112163435 1.161834 1.601 x10~3
3S 0.25 1.282014 1.284025 -2.011 x1073
7 0.35 1.418631 1.419068 -4.364 x10~4
9 0.45 1.569590 1.568312 L227 8ecl0ne

3 Shor A= 002725 = (O)—F 15

02
0.97u,; = 0.9584 + “5 [8.1189u),
u, = 1.020190
and if we proceed parallel to Exercise 1,

u2 = 1.040810.

In the same way we obtain

uz = 1.06183,
ua = 1.083286.

The comparison of these numerical results with the exact solution


u(x) = e® is given in the following table.

1 505 ab; (NUM) Pet, =e" (EXact)o celroré; = i; — U;

0 0.00 1.000000 1.000000 0.000000


1 0.02 1.020190 1.020201 -1.165 x1075
2 0.04 1.040809 1.040811 -9.911 x1077
3 0.06 1.061836 1.068365 -9.465 x1077
4 0.08 1.083286 1.083287 -1.1910 x10-®
. (a) Using (the coarse!) At = 0.5, with the trapezoidal rule-Simpson’s rule
combination as in Exercise 1, we have up = 1.0

i = 1—2(0'5) — 4(0.5)? + ;3+6 (5) =4 (=) uo


1
+3 (=)Uy

5
0.25u, =-l+ A’ Uy = 1.0

In the same way we obtain

us. = —1.60667

(b) With the exact answer u(x) = e” > 0, this negative value for
ug = —1.66667 suggests a break down of the numerical method due to the
inaccuracy inherent in u; = 1.0, which is very far from its exact value of
e295 = 1.6487. In Exercise 1 with At = 0.05, we had wu; = 1.051081
compared to its exact value of e?-% = 1.05127, i.e., a good enough resolution
of At = 0.05 that preserves the characteristics of the solution u(x) = e” inside
the integral.

5. ti(z) = —1333.333333(2 — 0.05)(« — 0.01)(a — 0.15) + 4204.3242(4 —


0.1)(a—0.15) —4420.5082(x—0.05)(2—0.15)+1549.045333z (x—0.05)(a—
0.1). The comparison of this approxiamte interpolated result u(x) of the
numerical values and the exact answer u(x) = e”, is given in the following
table.

Approximate (interpolated) u(a) and exact


u(x) = e” solutions of (E.1)

0.00 1.00000 1.00000


0.04 1.04062 1.04081
0.10 1.10513 1.10517
0.14 1.15026 1.15027
0.20 1.22070 1.22140
0.24 1.26922 1.27125
0.30 1.34388 1.34986
0.40 1.47184 1.49182
0.50 1.60175 1.64872
0.60 1.73078 1.82212
0.70 1.85609 2.01375
0.80 1.97486 2.22554
0.90 2.08424 2.45960
1.00 2.18141 2.71828

tS
IOS ee / Gernaee 0

Oyee = sinks,
Meso (=) inn (=)
Exercises 7.3, p. 370
1. (a) First we use the transformation € = 2t — 1, d€ = 2dt to have an integral
on the symmetric interval —1 < € < 1 ready for using the Tchebychev rule
(7.17),
u(t) = 2? — a ¢ + ees) u (SS) dé. (E.1)
1
1.505271u; +0.52085309u2 + 0.5304834u3 + 0.54606562u4
= 0.01054174 EB)
0.52085309u; +1.5825008u2 + 0.62060115u3 + 0.68224891u,4
= 0.16500169
(E.3)

0.5304834u; +0.62060115u2 + 1.6762968u3 + 0.76641459u,4


= 0.35259369
(E.4)
0.54606562u; +0.68224891u2 + 0.76641459u3 + 1.90259787u,4
= 0.805195744
(E.5)
(b) For the use of the Gauss-Legendre rule, we employ the same transformation
€ = 2t—1, dé = 2dt to have an integral on the symmetric interval —1 < € < 1
to be ready for Table 7.3.

u1 = (0.069432)? — [(0.347855)(1 + (0.069432)2)u;


+(0.652145)(1 + (0.069432) (0.3300095))us
+(0.652145)(1 + (0.069432) (0.6699905))us
+(0.347855)(1 + (0.069432) (0.930568))u4,
1.34953194u,; +0.66708774u2 + 0.682482u3 + 0.37033033u4
= 4.8208 x 107°
(E.6)
and in the same way we obtain the following results for v2, 73, and x4,

0.35582548u; +1.72316768u2 + 0.79633636u3 + 0.45467998u4


= 0.108906272
(E.7)
0.3640368u; +0.79633636u2 + 1.94488459u3 + 0.56473275u4
= 0.448887272
(£.8)
0.37033033u; +0.8524163u2 + 1.05873896u3 + 1.6490824u,4 (B.9)
= 0.86595682 ;

UE af (x + t)u(t)dt.

For the four-point Gauss-Legendre rule, we let € = 2t — 1, d€ = 2dt, to have


the above integral defined on the symmetric interval (-1,1) to be ready for Table

(Ge af [e+| u (S) dé,

then we substitute for z; = 0.069432, x2 = 0.330010, x3 = 0.593796 and


x4 = 0.897327 to have the first linear equation (E.1). The four resulting linear
homogeneous equations in u1, U2, U3, and U4 are:

uy = d(0.024152u, + 0.130247u2 + 0.241105us


(E.1)
+0.173928u4]
uy = 0.069474u; + 0.215214u2 + 0.326072us
(E.2)
+0.219249u4]

uz = d(0.128606u; + 0.326072u2 + 0.436931u3


(E.3)
+0.278381u4]
us = A(0.173928u; + 0.41104u2 + 0.521898u3
(E.4)
+0.323703u4].
3. Here we use the same four points z7;, 1 = 1,2,3,4 of the Tchebychev rule
(7.17) after making the change of variable € = 2¢ — 1 to have the following
integral of (E.1)
1
32 +1 =| (1 + xt)u(t)dt (E.1)
0

ready for Table 7.4 that needs symmetric limits of integration,


1

2(3¢ +1) = / (14 ost) “ (SS) de:


art
The resulting four linear homogeneous equations in 71, Z2, £3 and 4 are:

5.232 = 1.0105u₁ + 1.041706u₂ + 1.060967u₃ + 1.092131u₄   (E.2)

8.874 = 1.041706u₁ + 1.1650u₂ + 1.24120u₃ + 1.3645u₄   (E.3)

11.125 = 1.060967u₁ + 1.24120u₂ + 1.352594u₃ + 1.532829u₄   (E.4)

14.7679 = 1.092131u₁ + 1.3645u₂ + 1.532829u₃ + 1.805196u₄   (E.5)
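
Note that this coefficient matrix is (numerically) singular, a situation of the kind discussed in Exercise 6 below: the kernel 1 + xt is degenerate of rank 2, so the 4 × 4 matrix with entries 1 + xᵢxⱼ also has rank 2. A quick check (Python/NumPy assumed; the node values are the four Tchebychev points mapped to [0, 1], inferred from the coefficients above):

    import numpy as np

    x = np.array([0.102673, 0.406204, 0.593796, 0.897327])   # Tchebychev nodes on [0, 1]
    A = 1.0 + np.outer(x, x)                                  # entries 1 + x_i x_j, as in (E.2)-(E.5)
    b = 4.0 * (3.0 * x + 1.0)                                 # right-hand sides 5.232, 8.874, ...

    print(np.linalg.det(A), np.linalg.matrix_rank(A))         # det ~ 0, rank 2: Cramer's rule fails
    print(np.linalg.lstsq(A, b, rcond=None)[0])               # one (minimum-norm) solution of many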

4. (a) N = 2 yields the exact answer, see part (b).


u(x) = x/2 − 1/3 + ∫₀¹ (x + t)u(t)dt.   (E.1)

Let ξ = 2t − 1, dξ = 2dt:

u(x) = x/2 − 1/3 + (1/2) ∫₋₁¹ (x + (ξ + 1)/2) u((ξ + 1)/2) dξ.
N = 2:
u₁ = 0.211325, u₂ = 0.788675

N = 3:
u₁ = 0.112701, u₂ = 0.50, u₃ = 0.887298

N = 4:
u₁ = 0.069432, u₂ = 0.330010, u₃ = 0.669991, u₄ = 0.930568

Also, see part (b).

(b) The computations, using the method in Section 5.1 for the present equation
with degenerate kernel, show that the exact solution is u(x) = x. Part (a)
gives an exact answer with N = 2, since with the exact solution u(x) = x the
integrand in the integral above is of degree 2 ≤ 2N − 1 = 2(2) − 1 = 3; hence
it is approximated exactly by the two-point Gauss-Legendre rule. Of course,
what concerns the numerical rule is that the error at the sample locations, in
this case x₁ and x₂, vanishes.

(d) Simpson's rule also gives the exact answer, since its degree of precision (3)
is not less than the degree 2 of the polynomial (x + t)u(t) = (x + t)t = xt + t²
integrated with respect to t in (E.1).

Simpson's rule with:

(i) N = 2, h = 1/2:   u₁ = 0, u₂ = 1/2, u₃ = 1.0

(ii) N = 3, h = 1/3:   u₁ = 0, u₂ = 1/3, u₃ = 2/3, u₄ = 1.0

(iii) N = 4, h = 1/4:   u₁ = 0, u₂ = 1/4, u₃ = 1/2, u₄ = 3/4, u₅ = 1.0
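
The sketch below reproduces these Simpson-rule samples; Python/NumPy and the forcing term x/2 − 1/3 (reconstructed above from the stated exact solution u(x) = x) are my assumptions, not the text's. Because the integrand (x + t)t is a quadratic in t, the rule integrates it exactly and the computed samples come out as uᵢ = xᵢ.

    import numpy as np

    def simpson(N, a=0.0, b=1.0):               # composite Simpson rule: N even panels, N+1 points
        h = (b - a) / N
        w = np.full(N + 1, 2.0)
        w[1::2] = 4.0
        w[0] = w[-1] = 1.0
        return a + h * np.arange(N + 1), w * h / 3.0

    for N in (2, 4):
        t, w = simpson(N)
        A = np.eye(N + 1) - w * np.add.outer(t, t)   # delta_ij - w_j (x_i + t_j)
        f = t / 2.0 - 1.0 / 3.0                      # assumed forcing term x/2 - 1/3
        print(N, np.linalg.solve(A, f))              # 0, 1/2, 1   and   0, 1/4, 1/2, 3/4, 1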

5. (a) We have a degenerate kernel, where the method of Section 5.1 results in the
exact solution u(x) = sin x.

(b) Since the exact solution is u(x) = sin x, the integrand in (E.1) is (xt) sin t,
which is not a polynomial in t. Hence no (polynomial) quadrature rule of finite
degree can approximate it exactly.

(c) Gauss-Legendre rule:

(i) N = 2
u₁ = 0.320388, u₂ = 0.924892

(ii) N = 3
u₁ = 0.176134, u₂ = 0.707221, u₃ = 0.984574

(iii) N = 4
u₁ = 0.108847, u₂ = 0.495471, u₃ = 0.868624, u₄ = 0.994058

See the following table for N = 2,3, and 4,

Gauss-Legendre, N = 2, 3, and 4

i    xᵢ          uᵢ (num.)    uᵢ (exact)    εᵢ = uᵢ(num.) − uᵢ(exact)
N = 2
1    0.331948    0.320388     0.325886      −0.00549755
2    1.238848    0.924892     0.945409      −0.02051700
N = 3
1    0.177031    0.176134     0.176108      2.58 × 10⁻⁵
2    0.785398    0.707221     0.707107      1.14 × 10⁻⁴
3    1.393765    0.984574     0.984371      2.03 × 10⁻⁴
N = 4
1    0.109064    0.108847     0.108847      −4.8 × 10⁻⁸
2    0.518378    0.495471     0.495472      −2.3 × 10⁻⁶
3    1.052419    0.868624     0.868624      −4.7 × 10⁻⁷
4    1.461733    0.994058     0.994058      −6.5 × 10⁻⁷

(d) The reason for the good results is the smoothness of the integrand t sin t of
K(x, t)u(t) = xt sin t in (E.1).
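
This can be illustrated directly on the t-integral itself: ∫₀^{π/2} t sin t dt = 1 exactly, and the N-point Gauss-Legendre rule, while never exact for this non-polynomial integrand, closes in on it very quickly. (A Python/NumPy sketch; the interval 0 to π/2 is inferred from the sample points xᵢ quoted in the table above.)

    import numpy as np

    a, b = 0.0, np.pi / 2.0
    exact = 1.0                                   # int_0^{pi/2} t sin t dt = 1
    for N in (2, 3, 4):
        xi, w = np.polynomial.legendre.leggauss(N)
        t = (b - a) / 2.0 * xi + (a + b) / 2.0    # map the nodes from [-1, 1] to [0, pi/2]
        approx = (b - a) / 2.0 * np.sum(w * t * np.sin(t))
        print(N, approx, approx - exact)          # the error drops rapidly with N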
6. The determinant is zero, since the kernel is symmetric, and when the Tchebychev
rule is used this symmetry is preserved in the coefficient matrix, as seen in
the answer to Exercise 2. With this symmetry, the determinant vanishes,
which excludes the use of Cramer's rule. So we must first check the theory
of Fredholm integral equations of the first kind in Section 5.4 to ensure the
existence of a solution of the above integral equation before we embark on a
numerical solution. See Section 5.4, where the condition for the existence of a
unique solution of Fredholm integral equations of the first kind is, in general,
(much) more restrictive than that for Fredholm integral equations of the second
kind (Sections 5.1-5.3).

7. (a)

[u₁]       [0.024152   0.130247   0.241105   0.173927] [u₁]
[u₂]  = λ  [0.069474   0.215214   0.326073   0.219249] [u₂]
[u₃]       [0.128606   0.326073   0.436931   0.278381] [u₃]
[u₄]       [0.173927   0.411040   0.521898   0.323703] [u₄]
                                                             (E.1)

|I − λA| = 0 results in λ₁ = −12.92820323, λ₂ = 0.9282, to be compared
with the given exact eigenvalues of λ₁ = −12.928 and λ₂ = 0.92820, which
shows no difference (see part (b)).
(b) (i) λ₁ = −12.92820323, u₁ = U₁(0.0694320) = −0.8797402479, for normalizing
to the exact eigenfunction U₁(x) at x = 0.0694320; u₂ = −0.4284098556,
u₃ = 0.1604561724, u₄ = 0.61179665745.

(ii) λ₂ = 0.928203232, u₁ = U₂(0.0694320) = 1.1202560, for normalizing to the
exact eigenfunction U₂(x) at x = 0.0694320; u₂ = 1.5715944, u₃ = 2.160457,
u₄ = 2.611795.

(c) u(x) = λ ∫₀¹ (x + t)u(t)dt.

The two eigenvalues are

λ₁ = −6 − √48,   λ₂ = −6 + √48.

The first eigenfunction, corresponding to λ₁ = −6 − √48, is

u₁(x) = −1 + √3 x,

and the second eigenfunction, corresponding to λ₂ = −6 + √48, is

u₂(x) = 1 + √3 x.
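
As a consistency check on this eigenpair (written here in the normalization used in the tables below), one can verify numerically that u₁ reproduces itself under λ₁ times the integral operator; the sketch assumes Python/SciPy, which is my choice and not the text's.

    import numpy as np
    from scipy.integrate import quad

    lam1 = -6.0 - np.sqrt(48.0)                  # about -12.928
    u = lambda t: -1.0 + np.sqrt(3.0) * t        # first eigenfunction as written above

    for x in (0.0, 0.5, 1.0):
        integral = quad(lambda t: (x + t) * u(t), 0.0, 1.0)[0]
        print(x, lam1 * integral, u(x))          # the last two columns agree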

(d) The interpolations of the four sample values of the two approximate eigen-
functions of (E.1) are

(i)
U₁(x) = 6.528178475(x − 0.330010)(x − 0.669990)(x − 0.930570)
  − 8.052147011(x − 0.0694320)(x − 0.669990)(x − 0.930570)
  − 3.015830589(x − 0.0694320)(x − 0.330010)(x − 0.930570)
  + 4.539799149(x − 0.0694320)(x − 0.330010)(x − 0.669990)

(ii)
U₂(x) = −8.313307847(x − 0.330010)(x − 0.669990)(x − 0.930570)
  + 29.54016446(x − 0.0694320)(x − 0.669990)(x − 0.930570)
  − 40.60839619(x − 0.0694320)(x − 0.330010)(x − 0.930570)
  + 19.38153958(x − 0.0694320)(x − 0.330010)(x − 0.669990)

corresponding to the two approximate eigenvalues λ₁ = −12.92820323 and
λ₂ = 0.928203232, respectively.

The comparison of these two continuous approximations U₁(x) and U₂(x) with
the exact eigenfunctions U₁(x) and U₂(x) of (E.1) is given in the following two
tables, respectively.

The interpolated approximation U₁(x) and the exact first eigenfunction
U₁(x) of (E.1); λ₁ = −12.92820323 (approximate), λ₁ = −12.928 (exact)

x      Approx. U₁(x)            Exact U₁(x)
       (λ₁ = −12.92820323)      (λ₁ = −12.928)
0.0 —1.000007 —1.000000
0.1 —0.826801 —0.826795
0.2 —0.653595 —0.653590
0.3 —0.480389 —0.480385
0.4 —0.307183 —0.307180
0.5 —0.133977 —0.133975
0.6 0.039229 0.039230
0.7 0.212435 0.212436
0.8 0.385641 0.385641
0.9 0.558847 0.558846
1.0 0.732053 0.732051

The interpolated approximation U₂(x) and the exact second eigenfunction
U₂(x) of (E.1); λ₂ = 0.928203232 (approximate), λ₂ = 0.92820 (exact)

x      Approx. U₂(x)          Exact U₂(x)
       (λ₂ = 0.928203232)     (λ₂ = 0.92820)
0.0 1.000048 1.000000
0.1 1.173260 1.173205
0.2 1.346473 1.346410
0.3 1.519686 1.519615
0.4 1.692898 1.692820
0.5 1.866111 1.866025
0.6 2.039323 2.039230
0.7 2.212536 2.212436
0.8 2.385748 2.385641
0.9 2.558961 2.558846
1.0 2.732173 2.732051
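
The interpolants of part (d) are simply the cubic Lagrange polynomials through the four normalized samples of part (b); the following sketch (Python/SciPy assumed, not the text's tooling) rebuilds U₁ that way and reproduces the entries of the first table above.

    import numpy as np
    from scipy.interpolate import lagrange

    x = np.array([0.0694320, 0.330010, 0.669990, 0.930568])   # Gauss-Legendre nodes on [0, 1]
    u = np.array([-0.8797402479, -0.4284098556, 0.1604561724, 0.61179665745])

    U1 = lagrange(x, u)                            # cubic through the four normalized samples
    for s in (0.0, 0.5, 1.0):
        print(s, U1(s), -1.0 + np.sqrt(3.0) * s)   # approximate vs. exact values, as tabulated above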

8. UG) = eet f etttt. Suter


1

0
4.7u, + 3.4u2 + 1.5u3 + 1.2u4 = 0.9 x 101°,
11.1u; + 7.lug + 3.2ug + 2.4u4 = 0.3 x 104,
(b) 4.6u; + 3.4u2 + 1.5u3 + 1.lu, = 0.4 x 103,
14.4u; + 10.5u2 + 5.4ug + 3.6u4 = 0.5 x 102,

(c)

€ aS g u(z)

O89738 Visas =1.79 «10-29


0.5938 1.6841 -7.87 x10~9
0.4062 2.4615 -4.13 x107!°
0.1029 9.7371 1.04 x107!°

Appendix A

Exercises Appendix A, p. 376


(a) f(r) = Σₖ₌₁^∞ [2 Fₙ(λₖ) / (a² J_{n+1}²(aλₖ))] Jₙ(λₖ r).

See the Bessel function integrals for the integral in the denominator:

∫₀^a Jₙ²(λₖ r) r dr = (a²/2) J_{n+1}²(aλₖ).

(b) f(r) = Σₖ₌₁^∞ [2 F(λₖ) / (a² J₁²(aλₖ))] J₀(λₖ r),  where  F(λₖ) = ∫₀^a r f(r) J₀(λₖ r) dr.
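
A small numerical check of this zero-order finite Hankel (Fourier-Bessel) pair, as reconstructed in (b): expand a test function over the roots of J₀ and sum the series back. Python/SciPy and the test function 1 − r² are my own choices, not the text's.

    import numpy as np
    from scipy.special import j0, j1, jn_zeros
    from scipy.integrate import quad

    a = 1.0
    f = lambda r: 1.0 - r**2            # test function on 0 <= r <= a
    lam = jn_zeros(0, 20) / a           # lambda_k: the first 20 positive roots of J0(lambda*a) = 0

    F = np.array([quad(lambda r: r * f(r) * j0(lk * r), 0.0, a)[0] for lk in lam])

    def f_series(r):                    # f(r) ~ sum_k 2 F(lambda_k) J0(lambda_k r) / (a^2 J1^2(a lambda_k))
        return np.sum(2.0 * F * j0(lam * r) / (a**2 * j1(a * lam)**2))

    for r in (0.0, 0.3, 0.6, 0.9):
        print(r, f_series(r), f(r))     # the truncated series tracks 1 - r^2 closely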

(a) ∂²v/∂r² + (1/r) ∂v/∂r + ∂²v/∂z² = 0,

v(r, 0) = f(r),   0 ≤ r < a,

∂v/∂z (r, 0) = 0,   r > a.

(b) d²V(λ, z)/dz² − λ²V(λ, z) = 0,   V(λ, z) = A(λ)e^{−λz},

v(r, z) = ∫₀^∞ λ J₀(λr) A(λ) e^{−λz} dλ,

where A(λ) is the solution of the dual integral equations

f(r) = ∫₀^∞ λ J₀(λr) A(λ) dλ,   0 ≤ r < a,

0 = ∫₀^∞ λ² J₀(λr) A(λ) dλ,   r > a.
(c) See the answer to Exercise 1, Section 2.6.

4. ∇²U(r, θ, s) − s²U(r, θ, s) = −g(r, θ).


References

Abramowitz, M., and I.E. Stegun, Handbook of Mathematical Functions, Dover,


New York, 1965.

Atkinson, K.E., The Numerical Solution of Integral Equations of the Second Kind,
Cambridge University Press, New York, 1997.

Anderson, R.S., and F.R. DeHoog, Application and Numerical Solutions of Integral
Equations, Sijthoff en Noordhoff, Alphen aan den Rijn, The Netherlands, 1980.

Anton, H., Calculus with Analytic Geometry, 5th ed., John Wiley and Sons, Inc.,
New York, 1995.

Arfken, G., Mathematical Methods for Physicists, Academic Press, New York,
1970.

Baker, C.T., and G.F. Miller, Treatment of Integral Equations by Numerical Meth-
ods, Oxford University Press, London, 1977.

Bell, E.T., The Development of Mathematics, McGraw-Hill, New York, 1945, pp.
524-30.

Bocher, M., Integral Equations, Cambridge University Press, Cambridge, 1914.

Briggs, W.L., and V.E. Henson, The DFT-An Owner’s Manual for the Discrete
Fourier Transforms, SIAM, 1995.


Brigham, E.O., Fast Fourier Transform and its Applications, Prentice-Hall, Engle-
wood Cliffs, N.J., 1988.

Churchill, R.V., Operational Mathematics, McGraw-Hill, New York, 1972.

Cochran, J.A., The Analysis of Linear Integral Equations, McGraw-Hill, New York,
1972.

Coddington, E.A., and N. Levinson, Theory of Ordinary Differential Equations,


McGraw-Hill, New York, 1964.

Collatz, L., Functional Analysis and Numerical Methods, Academic Press, New
York, 1966.

Corduneanu, C., Principles of Differential and Integral Equations, Allyn and


Bacon, Boston, 1971.

Courant, R., and D. Hilbert, Methods of Mathematical Physics, Vol. 1, Chapter 3,


Interscience, New York, 1953.

Davis, P.J., Interpolation and Approximation, 2nd ed. Blaisdell, New York, 1965.

deHoog, F.R., Review of Fredholm Integral Equations of the First Kind, in The
Application and Numerical Solution of Integral Equations, Andersen, R.S., et.
al., eds. Sijthoff and Noordhoff, the Netherlands, 1980.

Delves, L.M., and J.L., Mohammed, Computational Methods for Integral Equa-
tions, Cambridge University Press, London, 1988.

Ditkin, V.A., and A.P. Prudnikov, Integral Transforms and Operational Calculus,
Pergamon Press, Elmsford, N.Y., 1965.

Erdelyi, A., W. Magnus, F. Oberhettinger, and F.G. Tricomi, Tables of Integral
Transforms, Vol. 1, McGraw-Hill, New York, 1954.

Golberg, A.M., Solution Methods for Integral Equations: Theory and Applications,
Plenum Press, New York, 1979.

Green, C.D., Integral Equation Methods, Barnes & Noble, New York, 1969.

Hildebrand, F.B., Integral Equations, Chapter 3 in Methods of Applied Mathematics,


2nd ed., Prentice-Hall, Englewood Cliffs, N.J., 1965.

Hochstadt, H., Integral Equations, Wiley, New York, 1973.

Jerri, A.J., Introduction to Integral Equations with Applications, Marcel-Dekker


Inc., New York, 1985. (The first edition of the present book.)

Jerri, A.J., Elements and Applications of Integral Equations, UMAP J., Vol. 7, pp.
45-80, 1986 (Also UMAP Module #609, 1982.)

Jerri, A.J., Student’s Solution Manual to Accompany “Introduction to Integral


Equations with Applications — Second Edition, Wiley & Sons", with Additional
Solved Problems, Sampling Publishing, 1999 (see the end of the preface for more
information).

Jerri, A.J., Integral and Discrete Transforms with Applications and Error Analysis,
Marcel Dekker Inc., New York, 1992.

Jerri, A.J., The Gibbs Phenomenon in Fourier Analysis, Splines and Wavelet Ap-
proximations, Kluwer Academic Publishers, Boston, 1998.

Jerri, A.J., A Recent Modification of Iterative Methods for Solving Nonlinear


Problems. In the Mathematical Heritage of C.F. Gauss (G.M. Rassias, ed.), an
invited paper, pp. 379-404. World Scientific Publishing, Singapore, 1991.

Jerri, A.J. and R.L. Herman, The solution of Poisson-Boltzmann equation between
two spheres — Modified iterative methods, J. Sci. Comp., 11, pp. 127-153, 1996.

Jerri, A.J., R.L. Herman, and R.H. Weiland, The modified iterative method for non-
linear concentration in cylindrical and spherical pellets, J. Chem. Eng. Commun.,
52, pp. 173-193, 1987.

Kanwal, R.P., Linear Integral Equations, Theory and Technique, Academic Press,
New York, 1971.

Kanwal, R.P., Linear Integral Equations, Second Edition, Birkhauser, Boston,


1997.

Keyfitz, N., Population Waves, Chapter 1 in Population Dynamics (T.N.E. Greville,
ed.), Academic Press, New York, 1972.

Kondo, J., Integral Equations, Clarendon Press, Oxford, 1991.

Kress, R., Linear Integral Equations, Springer-Verlag, New York, 1989.

Lonseth, A., Sources and Applications of Integral Equations, SIAM Review, 19,
pp. 241-278, 1977.

Miller, R.K., Nonlinear Volterra Integral Equations, W.A. Benjamin, Menlo Park,
CA, 1971.

Noble, B., Applications of Undergraduate Mathematics in Engineering, Macmillan,


New York, 1967.

Papoulis, A., Systems and Transforms with Applications in Optics, McGraw-Hill,


New York, 1968.

Petrovskii, I.G., Integral Equations and Their Applications, Vol. 1, Pergamon Press,
Oxford, 1960.

Pogorzelski, W., Integral Equations and Their Applications, Macmillan (Perga-


mon), New York, 1966.

Porter, D. and D.G. Stirling, Integral Equations - A Practical Treatment, from


Spectral Theory to Applications, Cambridge University Press, Cambridge, 1991.

Rashevsky, N., Mathematical Biophysics: Physico-Mathematical Foundations of


Biology, Vol. 1, Dover, New York, 1960.

Roberts, G.E., and H. Kaufman, Tables of Laplace Transform, W.B. Saunders,


Philadelphia, 1966.

Sneddon, I.N., The Use of Integral Transforms, McGraw-Hill, New York, 1972.

Srivastava, H.M., and R.G. Buschman, Convolution Integral Equations, Wiley,


New York, 1977.
Stakgold, I., Green’s Functions and Boundary Value Problems, Wiley-Interscience,
New York, 1979.

Tricomi, F.G., Integral Equations, Dover, New York, 1985.

Volterra, V., Theory of Functionals and of Integral and Integro-Differential Equa-


tions, Dover, New York, 1959.

Weinberger, H.E., A First Course in Partial Differential Equations, Blaisdell, New


York, 1965.

Widder, D.V., The Laplace Transform, Princeton University Press, Princeton, 1946.

Wing, G.M., A Primer on Integral Equations of the First Kind - The Problem of
Deconvolution and Unfolding, SIAM, Philadelphia, 1991.

Wolf, K.B., Integral Transforms in Science and Engineering, Plenum, New York,
1978.
Index

M panels Banach (or Banach-Cacciopoli) theo-


of Simpson’s, 331 rem, 313
jth iterate of the kernel, 261 Banach fixed point theorem, 319
n approximation, 257 Bell, 2
n-dimensional Euclidean space, 301 Bernoulli’s problem, 14
nth order differential operator, 179 Bessel
differential equation, 17
Abel, 1, 134 Bessel function, 17, 374, 376
Abel’s integral equation, 74, 152, 155 first kind, 48, 70
generalized, 27, 151 of the first kind, 382
Abel’s problem, 2, 11, 29, 86, 110, biological species, 2, 7
F595 35)) living together, 22, 100
generalized, 29 birthrates
Abramowitz, 329, 331, 336 surge of, 5
absolutely integrable functions, 55 boundary conditions
Anderson, 133, 154 mixed, 196
Anton, 83 boundary value problems, 19, 123, 179,
approximate methods, 262 193, 194, 196, 206, 210, 379
approximating a kernel nonhomogeneous, 191
by a degenerate one, 231 reduced to Fredholm integral equa-
Arfken, 129 tions, 118
associated homogeneous, 170 Briggs, 60
Brigham, 60
Bocher, 1, 133, 237
Baker, 84, 268, 357 Cauchy

convergence, 302, 306, 307 for Laplace transform, 52


convergent, 314 Fourier, 61, 76
principal value, 71 Cramer’s rule, 91, 93
sequence, 314 cross section, 106
causality, 97 of the nuclei, 22
characteristic functions, 122, 188,216
characteristic value, 216 decay problem, 102
characteristic values, 122, 189 deflection of a rotating shaft, 29
charge density, 29, 207 degeneracy, 216, 238, 248
for a potential on a unit disc, 20 degenerate, 221
Churchill, 45 kernel, 211
closed, 273 degenerate kernel, 223, 231, 233, 236
symmetric kernel, 272—275, 280 degenerate kernels
closed rules, 328 or separable kernels, 210
Cochran, 299 DeHoog, 133, 154
coefficient matrix Delves, 84, 268, 357
for the homogeneous equations, determinant, 93, 94, 212
293 |Ky |of the coefficients matrix,
cofactor, 94 293
Collatz, 299 of the coefficients matrix, 290
collocation method, 263, 265, 266, difference kernel, 28, 61, 80, 143, 149
270 differential equations
complete, 273, 274 higher order, 179, 185
metric space, 303, 305 integral equations, 18
complex conjugate, 167, 237 nonhomogeneous, 166
continuous functions, 305 differential operator, 304
contractive mapping, 299, 308, 313 Dirac delta function, 175
for linear Fredholm equations, 308 Dirichlet problem, 29
for linear Volterra equations, 310 distribution, 175
control, 104 Ditkin, 48, 58, 65
of a rotating shaft, 2, 8 double Fourier transforms, 71, 76
convergence, 307 du Bois-Reymond, 1, 133
Cauchy, 302 dual integral equations, 97, 124, 374
of integrals, 36 dynamics, 2
to a limit, 307
uniform, 323 eigenfunctions, 122, 168, 188, 198,
convergent 216,219 22 22292
absolutely, 323 expansion, 189
converges for a homogeneous equation
in the mean, 274 with a (nondegenerate) sym-
convolution product metric kernel, 240
Fourier, 61, 62, 78 of the kernel, 282
Laplace, 51 eigenvalue, 65, 237
Mellin transform, 72, 79 eigenvalues, 122, 168, 189, 216, 273,
convolution theorem 292

approximate, 297 inversion formula, 56


numerical evaluation of of derivatives, 66
Rayleigh-Ritz method, 250 pairs, 59
electric potential three-dimensional, 54
in a disc, 106 Fredholm, 24, 211, 237
on the rim of a unit disc, 16 alternative, 210, 215, 219, 221,
electrified disc, 126, 127, 373, 374 234, 249, 250, 252
electrified infinite plane, 124 complement to, 228
electrified plate, 19, 125 nonsymmetric kernels, 227, 228
Erdelyi, 48, 65, 72 the main part, 219
error function, 48 determinant, 253
error function complementary, 48 equations, 202
Euclidean distance, 301
linear equations
exact differential, 168
existence of the unique solu-
existence, 277, 278, 281
tion, 317
of the solution, 234
minor, 253
of the solutions, 299
resolvent kernel, 253, 255, 258,
exponential growth, 38
269
exponential order, 37, 43
method, 253, 258
Fredholm equation, 19, 21, 25, 35, 41,
finite Fourier sine transform, 201
O35 no), ULos 123 1925
fixed point, 301, 313
193,206, 207,209, 232,379
of a mapping, 307
and the Green’s function, 202
fixed point theorem, 300, 313
existence of solution
Banach, 313
nonsymmetric kernel, 220
fixed point theorems, 299
symmetric kernels, 221
Fourier, 2
-Bessel series, 247, 376
first kind, 25, 40, 43, 154, 209,
-Legendre polynomial series, 247 243,250, 292;211,2/4,278,
coefficients, 67, 246, 274 282-284
cosine transform existence of a unique solution,
inverse, 58 215
exponential transforms, 124 existence of solution, 241
integral theorem, 56 nonsingular, 68
series, 187, 200, 245, 274, 282 with closed symmetric kernel,
sine and cosine transforms, 57 213
finite(-limit), 67 with symmetric kernel, 244,
sine series, 77, 201 272
sine transform homogeneous, 25, 119, 209
finite, 200 in three-dimensional momentum
transform, 17, 53, 75 space, 128
existence of, 55 in two dimensions, 205
exponential, 53 interpolating
in two dimensions, 71 numerical solutions, 90
inverse, 2, 25 nonhomogeneous, 215

numerical approximation setting, and Fredholm equation, 204


85 basic properties, 174
numerical solution, 285 boundary value problems, 198
second kind, 25, 209, 210, 212, construction of, 165, 183
250, 253, 308 variation of parameters method,
with symmetric kernel, 245 169
singular, 28, 61 eigenfunction expansion of, 190
singular homogeneous, 64 existence of, 197
three-dimensional, 54 for an initial value problem (-
with degenerate kernel, 211 like), 177
with symmetric kernel, 211, 237, in two dimensions, 192
249 orthogonal series, 187
Fredholm’s first theorem, 253 property of, 180
a simple version, 254 unique, 197
Green’s functions, 120, 122, 379
Galerkin method, 266, 267, 271 for various boundary value prob-
gamma function, 48 lems, 382
Gauss in terms of simple functions, 379
-Hermite quadrature rule, 347 in terms of special functions, 382
-Hermite rule, 349 various boundary value problems,
-Laguerre quadrature rule, 341 379
-Laguerre rule, 339, 342, 344-
346, 349 Hadamard, 279
eight-point, 346 Hadamard’s example, 279
shifted, 342, 346 hanging chain, 9, 107, 120, 180
-Legendre Hankel transform, 70, 127, 373
and other quadrature rules, 334 finite, 375
-Legendre quadrature rule, 331 inverse, 70
-Legendre rule, 328, 333, 336- Henson, 60
338, 346, 347 hereditary, 2, 18
-Tchebychev, 339 Herman, 322
-Tchebychev rule, 340 Hermite quadrature, 340
-rational rule, 344, 346, 348 Hilbert, 237
elimination method, 290 Hilbert transform, 71
quadrature rules, 84, 334 Hilbert-Schmidt theorem, 238, 239,
rational rule, 339 243, 245
generalized, 27 Hochstadt, 299
generalized Leibnitz formula, 33 homogeneous, 29
Gibbs phenomenon, 247 Fredholm equation, 120,211,215,
in (the truncated) Fourier series, 233, 234, 250, 291, 292
247 with degenerate kernel, 217
Green, 7, 251, 268 with symmetric kernel, 237
Green’s function, 109, 120, 165, 166, linear equations, 292
175, 179, 180, 182, 194— homogeneous equation, 167, 196
196, 203, 206, 285 with symmetric kernel, 220

human population, 4, 98, 102 Laplace transform, 43, 48


another formula, 45
identities existence of, 44
basic, 31 problem, 17, 20
ill-posed, 272, 278, 284 iterated" kemelyeisoe 166.6257, 31 1.
Fredholm equations 313
first kind, 280 nth, 311
ill-posed problems, 280 method of, 256
Fredholm equation Neumann series method, 258
first kind, 277 iterations, 141
Hadamard’s example, 279 iterative, 139
index, 216 approach, 139
initial value problems, 18, 41, 116, method, 141
TPIS ATE 195 process, 139
reduced to Volterra integral equa-
tions, 113 Jerri, 5, 36, 39, 42, 54, 56, 58, 60,
integral equations, 1 70, 112, 130, 201, 247, 251,
Abel’s, 14 22,998; 3 19
classification, 24
first kind, 4 Kanwal, 20, 251
another difficulty, 153 Kaufman, 48
main difficulty, 151 kernel, 1, 21
Fredholm, 4, 5, 29 kernels, 120
homogeneous, 25 iterated, 139
in higher dimensions, 19, 128 Kondo, 84, 333
linear, 25, 27 Kress, 299
Lotka’s, 5
modeling of problems, 97 Lagrange
nonlinear, 26 interpolation, 358
to a boundary value problem, 120 interpolation formula, 87, 95
various problems, 3 Laguerre, 340
Volterra, 4, 5, 28 polynomials, 344
integral operator, 3 zeros of, 342
integration Laplace, |
smoothing effect, 86 transform, 36
integro-(partial) differential equation existence of, 43
in three-dimensions, 129 necessary condition of, 39
integro-differential equation, 3, 9, 47, properties, pairs of, 46
97, 145, 146 convolution product, 51, 104
interpolation equation, 124, 193, 279
Lagrange formula, 87 in cylindrical coordinates, 126
of the numerical solutions, 87 transform, 1, 17, 20, 25, 43, 73,
inverse 152, 154, 355, 376
Fourier sine transform, 201 convolution theorem, 53
Fourier transform, 62, 125 existence of, 38

inverse, 1, 73 pair, 79
method, 146, 150 Mercer’s theorem, 239, 251
of derivatives, 46 method of traces
pairs, 49 for estimating eigenvalues, 261
transform method metric, 303
difference kernel, 143 space, 303, 313
Laplacian, 76 midpoint formula, 80, 84
least squares Miller, 84, 357
criterion, 271 minor, 94
method, 268 mixed boundary conditions
Lebesgue convergence theorem, 39 dual integral equations, 124
Legendre polynomials, 199, 335, 336, modified mapping, 322
382 Mohammed, 84, 268, 357
Leibnitz rule momentum (Fourier) space, 54
generalized, 31 mortality of equipment, 2, 6, 104
limit of a sequence multiple integrals
in metric space, 306 reduced to single integrals, 31
limit point, 302 multiplicity, 238
linear
combination, 30 Neumann, 134
equations, 211 series, 134, 138, 260, 269
Fredholm equations, 316 solution, 258
integral equations, 26, 300 neutrons
existence of the solution, 316 energy spectrum, 15
Volterra equations, 318 source of, 106
existence of a unique solution, Newton-Cotes, 357
318 (NC) rule, 328
linearly independent, 263 (closed) rules, 332
Liouville, 134 repeated rules, 331
Lipschitz, 320 rule, 84, 328, 330, 332, 337, 338,
condition, 320 350
constant, 320 n point, 330
Lonseth, 154 of the open type, 330
two-point, 331, 350
Maclaurin, 231 type N-point rules, 337
method, 333, 357 Noble, 7
rule, 333, 354 nonhomogeneous
series, 138, 143, 235 Fredholm equations
mapping, 322 second kind, 287
a finite interval, 356 with degenerate kernel, 211
mathematical induction, 312 ordinary differential equations
mechanics problems, 107 second-order, 166
Mellin nonlinear
transform, 72, 79 boundary value problems, 124
inverse, 72 differential equations

existence of the solution, 324 continuous functions, 56


Fredholm equations, 319 smooth, 37
integral equations, 26, 27, 123, Pogorzelski, 20, 299, 317
300 Poisson
existence of the solution, 319 equation, 192
Volterra equations, 322 integral, 106, 193
nonsymmetric kernel, 221, 228 integral formula, 16
nontrivial, 292 polynomial, 87
solutions, 122, 221 population, 2
norm square, 240 dynamics, 98
nucleus, | Porter, 20
null function, 273 Post, 45
numerical potential distribution, 2
approximation setting in a charged unit disc, 192, 205
of Fredholm integral equations, Prudnikov, 48, 58, 65
286
evaluation of the eigenvalues quadrature, 81
method of traces, 261 formulas, 84
integration rules, 328
basic formulas, 79 for the numerical solution, 327
with the trapezoidal rule, 288 of integration with tables, 327
solution
of homogeneous Fredholm equa- radiation transport, 15
tions, 293 Rashevsky, 7
of Volterra equation, 159 Rayleigh-Ritz method, 251, 261
regularity condition, 210
open rules, 328 regularization methods, 280
orthogonal, 187, 198, 222, 238 repeated Newton-Cotes rule, 84
kernels, 199, 260 repeated Simpson’s, 350
series expansion, 188 resolvent kernel, 137, 140, 245, 248,
convergence in the mean, 190 253, 260
Fourier, 189 method, 134
orthogonality property, 77 uniqueness of, 260
orthonormal, 190 Roberts, 48
eigenfunctions, 191, 238, 240, Rodrigues formula, 335
245,273,215, 295 rotating shaft, 10, 105
of the kernel, 276
polynomials, 334, 336 Schrodinger equation, 2, 19, 54, 192
set, 274 (partial differential), 128
as an integral equation, 129
parameter, 237 in momentum space, 130
Parseval’s equality, 62 sectionally continuous, 36
partial differential equations, 71, 192 (or piecewise), 37
Picard’s theorem, 273 self-adjoint, 167, 202
piecewise differential operator, 167

second-order, 175 kernel, 187, 220, 237, 238, 242,


operator, 168, 190 261
simple
eigenvalues, 240 Tautochrone, 12, 112
Simpson’s rule, 80, 81, 83, 95, 285, Tchebychev
290, 296, 327, 330, 358 polynomials, 339
composite or repeated rule, 332 quadrature rules, 339
extended (composite), 330 torsion of a wire, 8, 355
repeated, 350 transforms
three-point, 332 Fourier, 42
Simpson’s-3 rule, 330 Hilbert and Mellin, 42, 71
singular, 29 Laplace, 17, 42
Fredholm equation, 63 other, 17, 70
with difference kernel, 78 trapezoidal, 350
equations, 27
rulep80%815294 295:a15 78 162;
285,290, 291,296, 297, 327,
Fredholm equation, 62
33033515358
first kind, 57, 63, 72, 284
extended (composite), 330
integral equations
two-points, 329
first kind, 43
triangle inquality, 305
kernel
triangular, 157
Cauchy, 27
truncation error, 247
Volterra equation, 355
singularity
unique
strong, 27
Green’s function
weak, 27
existence of, 179
Sneddon, 42, 130, 373
solution, 303
square integrable, 238, 252, 272, 318 existence of, 93
square system, 157 uniqueness, 277, 278, 281
stability, 277, 278, 281 unstable, 154
stable, 277
Stakgold, 166 variation of parameters, 169
Stegun, 329, 331, 336 vibrating string, 181
Stirling, 20 Volterra, 133
stocked fish Volterra equation, 19, 20, 24, 35, 41,
propagation of, 6 52, 95,005.61 Lopes
Sturm-Liouville problem, 122, 168, and Fredholm integral equation,
LS 7254 97
and the Orthogonal (Fourier) se- first kind, 24, 133, 148, 149, 154,
ries expansion, 188 WSS 21629952 53549357
successive approximations, 141, 142, Maclaurin rule, 354
146, 314 the Maclaurin method, 355
surge in birthrate, 102 with a difference kernel, 150
survival function, 4, 98, 104 higher quadrature rules, 350
symmetric, 205, 252 homogeneous, 25

interpolating, 88
numerical approximation setting,
156
numerical solution, 88, 156, 160
second kind, 24, 99, 133, 134,
137, 142
singular, 162, 355, 358
with difference kernel, 143

wave equation, 181


well-posed problem, 278
well-posedness, 284
Widder, 45
Wing, 16, 20, 45

“Extremely clear, self-contained text . . . offers to a wide class of
readers the theoretical foundations and the modern numerical
methods of the theory of linear integral equations.”

— Revue Roumaine de Mathématiques Pures et Appliquées

Abdul Jerri has revised his highly applied book to make it even more useful for
scientists and engineers, as well as mathematicians. Covering the fundamental
ideas and techniques at a level accessible to anyone with a solid undergraduate
background in calculus and differential equations, Dr. Jerri clearly demon-
strates how to use integral equations to solve real-world engineering and
physics problems. This edition provides precise guidelines to the basic methods
of solutions, details more varied numerical methods, and substantially boosts
the total of practical examples and exercises. Plus, it features added emphasis
on the basic theorems for the existence and uniqueness of solutions of integral
equations and points out the interrelation between differentiation and integra-
tion. Other features include:

A new section on integral equations in higher dimensions


An improved presentation of the Laplace and Fourier transforms
A new detailed section for Fredholm integral equations of the first kind
A new chapter covering the basic higher quadrature numerical
integration rules
A concise introduction to linear and nonlinear integral equations
Clear examples of singular integral equations and their solutions
A student’s solutions manual available directly from the author

ABDUL J. JERRI is Professor of Mathematics at Clarkson University, Potsdam,
New York.

Cover Design: Lynn Cole

WILEY-INTERSCIENCE
John Wiley & Sons, Inc.
Scientific, Technical, and Medical Division
605 Third Avenue, New York, N.Y. 10158-0012
New York • Chichester • Weinheim
Brisbane • Singapore • Toronto
