Introduction to
Integral Equations
with Applications
Second Edition
ABDUL J. JERRI
Clarkson University
A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York · Chichester · Weinheim · Brisbane · Singapore · Toronto
Jerri, Abdul J., 1932–
Introduction to integral equations with applications / Abdul J.
Jerri. — 2nd ed.
p. cm.
"A Wiley-Interscience publication."
Includes bibliographical references and index.
ISBN 0-471-31734-9 (alk. paper)
1. Integral equations—Numerical solutions. I. Title.
QA431.J47 1999
515'.45—dc21 99-14638
In memory of my father and mother
Contents

Preface xiii
Acknowledgments xvii
References 421
Index
Preface
The goal of this present second edition is still the same as that of the first edition.
It is to present the subject of integral equations, their varied applications, and basic
methods of solutions on a level close to that of a first (sophomore) course in ordinary
differential equations. This is not such an easy task, especially when we assume
only the basic calculus and differential equations as prerequisites. The main thrust
here is that a variety of applied problems have their natural mathematical setting
as integral equations, thus they have the advantage of usually simpler methods of
solution. In addition, a large class of initial and boundary value problems, associated
with differential equations, can be reduced to integral equations, whence enjoy the
advantage of the above integral representation. Such topics also bring to light the
unity of differentiation and integration. It may be said that such a basic integral
equations course would complement the elementary differential equations course,
especially when the actual coverage in the latter is (most often) limited to initial value
problems, and for obvious historical reasons. This being that differential equations
began following the work of Leibnitz and Newton, with the flavor of applications in
dynamics, which had occurred a long time before integral equations started to get
attention at the very beginning of this century.
We should point out here that for this elementary presentation of integral equations
— assuming only calculus and differential equations preparation — the treatment in all
chapters, except for (the optional) Chapter 6, is formal. This is in the sense that clear
procedures and steps for arriving at the solution or some basic results are emphasized,
without necessarily stopping to give their complete mathematical justification. The
latter most often requires more advanced mathematics preparation. Thus we shall be
limited to giving those justifications that would not require us to go beyond the level of
this basic applicable undergraduate text.
In this second edition all comments, suggestions, and corrections relayed by
students, colleagues from around the world, and the expert reviewers of the journals of
mathematics and other concerned professions, were addressed. They all deserve my
sincere thanks and appreciation. Such suggestions, it is hoped, will help this edition
attain even better the goal set for the first edition: an undergraduate text focusing on
integral equations, to serve the students of science, engineering, and
mathematics. To stay with this important goal, and to keep the required text material
at a size comparable to that of the first edition, we decided to have a new (optional)
Chapter 7 for the detailed numerical methods. This includes using higher quadrature
rules for the numerical approximation of the integrals. The main changes made for
this second edition, in light of the suggestions received, are:
4. More emphasis on clear statements of the basic theorems for the existence and
uniqueness of the solutions of integral equations. The brief introduction of
basic theory in Chapter 6 can be considered optional, when seen in light of the
main goal of this elementary text.
5. Conditions for the existence of integral transforms, their inverses, and important
operations are spelled out. A more detailed treatment is found in the author’s
undergraduate-graduate book on the subject (Marcel Dekker, 1992) entitled
“Integral and Discrete Transforms with Applications and Error Analysis."
7. More applications to update, replace, and complement the already ample vari-
ety of applied problems as recognized by all reviews of the first edition. These
now include some relevant problems in higher dimensions.
8. More emphasis is placed on the interrelation between the integral equations
and the differential equations representations of boundary and initial value
problems. This is also to emphasize that differentiation and integration are
inseparable.
9. All detected and reported typographical as well as other errors are corrected,
and some examples are deleted and replaced by more appropriate ones. Almost
all the suggestions made by the expert reviewers of the journals of our and other
concerned professions have been very seriously addressed, keeping in mind
the main goal of an undergraduate book for scientists, engineers, as well as
mathematicians. This includes the reviews of three critical experts for the first
draft of this new edition, and another three reviewers of the final draft.
10. For this edition we now have a “Student’s Solution Manual" to accompany this
book. It contains very detailed solutions to all the odd numbered problems in
the text (see the end of the preface for details).
With these changes and additions, the first chapter still starts with the statements
of a number of problems from different subjects, to illustrate their integral
equation representation. Although the reader is warned against expecting a full
understanding of some of these problems from such a brief presentation, a very
detailed formulation of them is given in Chapter 2. This is followed by the usual
classification of integral equations and a clear derivation and illustration of
some important integral and differential identities needed for the formulations
in Chapter 2 and later chapters. Such identities are essential for showing how we
can go from the integral equations representation to the differential equations
representation and vice versa. We have also improved upon the self-contained
(short but simple) presentation of the Laplace and Fourier transforms with
clear statements for the existence of the transforms. Chapter 1 is concluded by
a section on simple elements of numerical integration which represents only
the essentials necessary for the numerical solutions of Fredholm and Volterra
integral equations that are discussed in Chapters 5 and 3, respectively. The
higher quadrature numerical integration rules along with their needed tables
are covered in a new (optional) Chapter 7. They are well illustrated for the
numerical integration, setting up the numerical approximation of Volterra and
Fredholm equations, and the numerical solution of these integral equations.
Chapter 2 involves very detailed modeling of problems as integral equations
with a new section on integral equations in higher dimensions illustrated with
the Schrödinger equation integral representation in the momentum space. This
includes population dynamics, control, mechanics, radiation transport, and
boundary and initial value problems. Chapter 3 deals with methods of solving
Volterra integral equations, including approximate and numerical methods,
which are presented in detail. Chapter 4 is devoted to the construction and
properties of Green’s functions, which is very important for reducing boundary
value problems to Fredholm integral equations. Chapter 5 deals with basic
theory and detailed methods of solving Fredholm integral equations including
the use of the Green’s functions, and a detailed presentation of the familiar
approximate and numerical methods of solutions. Methods of estimating the
eigenvalues of homogeneous Fredholm integral equations are also presented.
In this edition a new special section (Section 5.4) is added for a very elementary
theory and a method of solving Fredholm integral equations of the first kind.
Also, more varied numerical methods are used in the new Chapter 7 for solving
the different integral equations, compared to the very basic ones in Chapters
3 and 5, as was the case in the first edition. In Chapter 6 we have a brief
and descriptive discussion of the theory regarding the convergence of the
methods of solving linear as well as nonlinear integral equations. For the basic
introductory undergraduate course, this chapter is clearly optional.
In each chapter we have attempted to present many clear examples in every
section, followed by a good number of related exercises at the end of each
section with hints to (almost) all exercises and answers to all the exercises.
Acknowledgments

I would like to acknowledge many helpful suggestions from colleagues and students
during the preparation and use of the first edition of this book as well as using the
manuscript of the present second edition. I would also like to thank all of those
colleagues and students who read the first draft of this second edition, and made
valuable remarks and corrections. Professors C.A. Roberts and A. Aluthge read the
prefinal draft of this edition and made very valuable suggestions that helped a great
deal in steering this book towards its main stated goal of serving as the very first
introduction to the subject for undergraduate students in science and engineering
and I owe them my deep gratitude. Professor A. Bastys made the most thorough
reading of the prefinal draft, attending to the very details of the text with very candid
suggestions and corrections, and I owe him my deepest gratitude.
I would like to thank especially the reviewers here and abroad who made very
constructive criticisms and detailed suggestions, which I have attempted to address
very seriously, and which I hope will contribute to the desired quality and purpose
of this book. In particular, Prof. J. Chochran made the most detailed critical
evaluation with constructive suggestions. Indeed I have also requested him to review
this manuscript, which he did with suggestions that have contributed to the further
focusing of this book toward being the first introductory book on the subject for
undergraduates in the applied fields. Professor Chochran deserves my gratitude.
Also Professors I. Fenyő and M. Putinar made very detailed and constructive reviews
of the first edition, and I owe them my sincere thanks.
All along the process of developing this second edition, Prof. M.Z. Nashed has
been very generous in his constructive suggestions and valuable criticisms, thus I am
deeply indebted to him. Mr. J. Craparo read the first draft of the manuscript, made
suggestions and supplied detailed numerical solutions, and he deserves my thanks.
The staff of Wiley and Sons, especially Ms. J. Downey, Ms. A. Loredo, and Ms.
S. Liu, deserve my thanks for their effective cooperation. I am grateful to Ms. C.
Smith for typing the prefinal draft of the first edition and the final camera-ready form
of this edition, and for typing the changes and additions to this new edition. Mr. J.
Hruska, Jr. deserves thanks for making the drawings with patience and care.
I owe my deepest thanks to my wife Suad and my daughter Huda for their continued
support and patience during the long hours of preparing this edition.
Integral Equations, Origin,
and Basic Tools
where K(x, t) is a function of two variables called the kernel or nucleus of the integral
equation. According to Bôcher [1914], the name integral equations was suggested
in 1888 by du Bois-Reymond, although the first appearance of integral equations is
credited to Abel for his thesis work on the Tautochrone, which was published in
1823 and 1826, and which we shall present shortly. There is also the opinion that
such first appearance was in Laplace's work in 1782, as will make sense when we
speak of the inverse Laplace transform in Section 1.4.1. For example, the Laplace
transform of the given (known) function f(t), 0 ≤ t < ∞, is

F(s) = L{f(t)} = ∫_0^∞ e^{−st}f(t)dt,   (1.2)

provided that the integral converges for s > a. So, if we are now given F(s), say
F(s) = 1/s², s > 0, and we are to find the original function (now as unknown) f(t),
or the inverse Laplace transform of F(s), i.e., f(t) = L⁻¹{F(s)}, then we are, in the
final analysis, facing an integral equation in f(t), whose solution recovers the original function f(t) from knowing F(s) in (1.2).
In our above example, f(t) = t.
In the same vein, Fourier in 1820 solved for the inverse f(x) of the following
Fourier transform F(λ) of f(x), −∞ < x < ∞.

a₀(x)u(x) + a₁(x) du/dx = f(x) + ∫ K(x, ξ)u(ξ)dξ   (1.6)
Human population
The problem of forecasting human population may be one of the clearest examples
formulated as an integral equation, since the population n(t) at time t depends on the
number of the initial population n(0) = n₀ surviving to time t, and, more importantly,
all children born during the time interval 0 ≤ τ ≤ t who survive to time t. The
dependency of the population n(t) on the initial population n₀ and the previous
populations n(τ), 0 ≤ τ ≤ t, is represented by the integral equation

n(t) = n₀f(t) + k∫_0^t f(t − τ)n(τ)dτ,   (1.8)

where n₀ is the number of people present at time t = 0 and f(t) is the survival
function (Figure 1.1), which is the fraction of the number of people that survive to
age t. With regard to the integral in (1.8) we may remark that kf(t − τ)n(τ)Δτ
represents the number of children born in the time interval Δτ around time τ that
survive to time t. It is clear that their number is proportional to n(τ), the population
present at time τ, and that their survival function at time t is f(t − τ) since they are
then of age t − τ. The detailed formulation of (1.8) is presented in (2.3) to (2.8) of
Section 2.1.
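For readers who wish to experiment numerically, the following short Python sketch (an illustration only; the survival function f(t) = e^{−t}, the birth factor k, and the grid size are assumed purely for demonstration) marches a discretized form of (1.8) forward in time with the trapezoidal rule.

    import numpy as np

    def solve_population(n0=100.0, k=1.5, T=5.0, m=500):
        # March n(t) = n0*f(t) + k * integral_0^t f(t - tau) n(tau) dtau
        h = T / m
        t = np.linspace(0.0, T, m + 1)
        f = np.exp(-t)                      # assumed survival function f(t)
        n = np.zeros(m + 1)
        n[0] = n0                           # since f(0) = 1
        for i in range(1, m + 1):
            # trapezoidal rule for the memory integral; the unknown n[i]
            # appears in the last (half-weight) term and is solved for.
            s = 0.5 * f[i] * n[0] + np.dot(f[i - 1:0:-1], n[1:i])
            n[i] = (n0 * f[i] + k * h * s) / (1.0 - 0.5 * k * h)
        return t, n

    t, n = solve_population()
    print(n[-1])

Because the upper limit of the integral is the variable t, each new value n[i] depends only on previously computed values, which is why such Volterra equations can be solved by simple forward marching.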
Fig. 1.1 The survival function. From Jerri [1982, 1986], courtesy of COMAP, Inc.
In Section 1.2 we will discuss the classification of integral equations with their two
main classes, namely, Volterra and Fredholm integral equations. These two (differ-
ent) equations are characterized by having a variable and a fixed limit of integration,
respectively. Hence, equation (1.8) is a Volterra integral equation, since the (upper)
limit of integration is the variable τ = t. As we shall see soon, equation (1.19) that
describes the small deflection y(x) of a rotating shaft has the fixed integration limits
ξ = 0 and ξ = l; hence it is a Fredholm integral equation.
b(t) = g(t) + ∫_0^β b(t − τ)p(τ)m(τ)dτ   (1.9)
where p(τ) is the probability that a female lives to age τ and m(τ)Δτ is the probability
that she will give birth to a female in the time interval Δτ. g(t) is a term added to
allow for girls already born before the oldest childbearing woman (of age τ = β) was
born. The formulation of (1.9) is the subject of an exercise in Section 2.1 (Exercise
4), which is supported by detailed leading hints.
N_sr(t) = ∫_0^t e^{−k(t−τ)}[s(τ) + r(τ)]dτ.   (1.11)
It is, of course, desired to keep the number of fish in the lake at a certain (given)
level N(t). This level is kept and watched by sampling the fish in the lake by
selective netting. Before we supply the stocked fish, which will multiply to give the
total number of fish N_sr(t) in the integral of (1.11), we assume that the lake had an
initial number of fish N(0) = N₀. But these fish will decline to N(0)e^{−kt} at time t.
So if we add this number to the supplied and propagated number of the integral in
(1.11), we have the total number of fish N(t), which we would like to keep at this
(given-known) level,
Torsion of a wire
Many physical problems are also of a hereditary nature; for example, if we apply
a torque m(t) to twist a wire or bar, the torsion w(t) will depend on this present
torque in the form m(t) = hw(t) as well as on all torques applied in times (—oo, t)
previous to t. Such accumulation of twists changes the physical properties of the
wire, thus introducing a hereditary (cumulative) effect. We will assume that we have
some data that tell us at time t how the proportionality factor φ(t, τ), instead of the
usual constant proportionality factor h, had been affected by the continuous previous
torques m(τ), −∞ < τ < t. If we add the first torque hw(t) to the accumulation of
previous torques as the integral ∫_{−∞}^t φ(t, τ)w(τ)dτ, we have, for static equilibrium,
this problem represented by the integral equation

m(t) = hw(t) + ∫_{−∞}^t φ(t, τ)w(τ)dτ.   (1.15)
c∫_0^t φ(τ)dτ, proportional to the accumulation of the deviation from the starting time
t = 0 to the present time t, must also be applied to take care of all previous deviations.
If we let I be the moment of inertia of the rotating shaft, then according to Newton's
second law of motion, the torque m₁(t) applied by the shaft to rotate with angle θ₁(t)
is

m₁(t) = I d²θ₁/dt².

This torque must be equal in magnitude, but opposite in direction, to the sum of the
three correction torques,

I d²θ₁/dt² = −aφ(t) − b dφ/dt − c∫_0^t φ(τ)dτ.   (1.16)
We note here that the unknown φ(t) of (1.16) is being differentiated in the term
−b dφ/dt as well as integrated in the last term −c∫_0^t φ(τ)dτ. Such an equation is called an
integro-differential equation in φ(t).
Fig. 1.2 Displacement due to a single vertical force F at ξ. From Jerri [1986], courtesy of
COMAP, Inc.
since the limits of integration are fixed at ξ = 0 and ξ = l. Also, as we shall discuss in
the classification of integral equations in Section 1.2, when the unknown y(x) is not
present in a term outside the integral of an integral equation, the equation is called of
the first kind. Hence in (1.17) we have a Fredholm integral equation of the first kind.
The very detailed derivation of (1.17) and (1.18) is presented in Section 2.3.1.
which is a Fredholm integral equation in y(x), the deflection of the bar at x. Since
there is no external term outside the integral of (1.19) that is independent of the
unknown function y(x), this equation is termed homogeneous.
along which a bead must descend (under the influence of gravity) a distance y in a
predetermined time f(y). This is represented by Abel's integral equation in φ(y),

√(2g) f(y) = ∫_0^y φ(η)/√(y − η) dη,   (1.20)

where φ(y) = 1/sin α, the angle α is shown in Figure 1.4, and g is the acceleration of
gravity. The detailed formulation of this problem is presented in Section 2.3.2. There
we shall see that for s(y), the length of the path as a function of the vertical distance
y, we have dy/ds = −sin α. The unknown φ(y) in (1.20) is defined as φ(y) = −ds/dy.
Here, we have resorted to using η for y of φ(y) as the dummy variable of integration
so that we can write the upper limit of integration as y. Most references use y for
the variable of integration, and designate the upper limit of integration as y₀, which
may be confused with a constant limit y₀. We had to stay with the variables y and
η, since y is the vertical distance traveled. Abel in 1823–1826 formulated and solved
this and more general problems. This was followed, independently, by Liouville's
work in 1832–1839.
Example 1
Verify that φ(y) = 1/2 is a solution of the following special case of Abel's problem:

√y = ∫_0^y φ(η)/√(y − η) dη.   (E.1)

We substitute φ(η) = 1/2 in the integral of (E.1) to obtain

∫_0^y (1/2) dη/√(y − η) = [−√(y − η)]_{η=0}^{y} = −(0 − y^{1/2}) = y^{1/2},   (E.2)

which is the left side of (E.1); hence φ(y) = 1/2 is a solution of Abel's integral equation
(E.1).
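The computation in (E.2) is easily checked by machine; the following short Python (sympy) sketch, given purely as an illustration, evaluates the integral of Example 1 symbolically.

    import sympy as sp

    y, eta = sp.symbols('y eta', positive=True)
    result = sp.integrate(sp.Rational(1, 2) / sp.sqrt(y - eta), (eta, 0, y))
    print(sp.simplify(result))   # prints sqrt(y), the left side of (E.1)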
We note that this case of Example 1 corresponds to φ(y) = 1 in (1.20) for a body
falling along a (not so interesting!) path of direct vertical fall of distance y. This is the
case since for such a fall we have y = (1/2)gt², i.e., f(y) = t = √(2y/g), where y = 0
corresponds to t = 0. So, for the
left side of (1.20), we have √(2g)·√(2y/g) = 2√y. With this factor of 2, the solution to
(E.1) is φ(y) = (2)(1/2) = 1 = 1/sin α,
which results in α = π/2, the direct vertical fall!
This is not such an interesting, if not dangerous, special case of a path of descent for
(1.20). The following Tautochrone problem is a much more interesting special case of
(1.20), and is also the first integral equation problem that started Abel's interest in
the subject.
The Tautochrone
This is Abel’s original problem that he later generalized to the integral equation
(1.20). As a special case of (1.20) it deals with finding the path where the time
required for descent along such a path, as shown in Figure 1.5, from any point (x, y) to
the origin is a constant T, i.e., f(y) = T is independent of the starting point. For
is complicated even more when the problem is also singular, which is another difficult
situation for integral equations. These difficulties were documented theoretically long
after Abel's time. So, we may say that Abel, not knowing of such often formidable
difficulties, took such problems in stride, as they turned out to be among the few
without major apparent difficulties! We shall return in Section 2.3.2 to derive Abel's
integral equation (1.20), with its solution accomplished by the use of the Laplace
transform (see Example 8 in Section 3.2.1 and Exercise 5 of Section 1.4).
The Tautochrone problem will be the subject of Exercise 5 in Section 2.3, where
the derivation is supplied with detailed leading hints.
For a simple demonstration of (E.1) we can easily verify that the area under the
parabola y = x² is one-third of the rectangle circumscribing it. So, if we substitute
k = 1/3 and f(x) = x² in the Bernoulli equation (E.1), we have

(1/3)·x·x² = (1/3)x³ = ∫_0^x ξ²dξ,   (E.2)

since

∫_0^x ξ²dξ = (1/3)ξ³ |_{ξ=0}^{x} = (1/3)x³.
Radiation transport — Determining the energy spectrum of neutrons
We shall present here a simple example of the absorption of radiation* (say
neutrons) in a slab with fixed thickness as illustrated in Figure 1.7.

Fig. 1.7 Simple experiment for determining the energy spectrum of particles: an incident
particle beam passes through the slab and the transmitted particle beam is registered by a detector.

The measured
neutrons g(x) on the other side of the slab (after the absorption), for a finite number
of different thicknesses x of the same material, can be used to determine the neutron
spectrum f(E). Here f(E) is the number of neutrons at the energy level E. The
result is a Fredholm integral equation of the first kind in the neutron spectrum function
f(E). Before we start the derivation of such an equation, we need to define what we
mean by the cross section σ of the nuclei of the material to the incoming radiation
(or neutrons) with energy E. Simply speaking, it is "the effective area" that the
neutrons see of the nucleus as a target. Of course, the cross section σ depends on the
material, and, principally, on the energy distribution (spectrum) f(E) of the colliding
neutrons. Also, when the particles (or neutrons) collide with the nucleus, they may
be absorbed, scattered, or create new neutrons (by fission). So it is important, first,
to know the probability of the collision. It can be shown easily (see Exercise 14) that

g(x) = ∫_{E_min}^{E_max} e^{−σ(E)x} f(E)dE.   (1.23)
This is a Fredholm integral equation of the first kind in the neutron spectrum f(E).
In Exercise 15 we will present a discussion concerning the difficulty in solving such
Fredholm integral equations of the first kind, especially when the known function
g(x) in (1.23) carries the inaccuracy of the measurement.
Also, a problem very much related to this neutron transport one is the subject of
(a detailed) Exercise 4 in Section 2.2. It deals with determining the strength of the
neutron source in a uniform rod, where it results in a Fredholm integral equation of
the first kind in the unknown function of the neutron source strength.
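The following Python sketch (purely illustrative; the cross-section model σ(E) = 1/E, the "true" spectrum, the grids, and the noise level are all assumed for demonstration) discretizes (1.23) and shows why such first-kind equations are delicate: the kernel matrix is severely ill-conditioned, so a naive solve amplifies even tiny measurement noise.

    import numpy as np

    m = 20
    x = np.linspace(0.5, 5.0, m)            # slab thicknesses (hypothetical units)
    E = np.linspace(1.0, 10.0, m)           # energy grid (hypothetical units)
    dE = E[1] - E[0]
    sigma = 1.0 / E                          # assumed cross-section model
    A = np.exp(-np.outer(x, sigma)) * dE     # quadrature matrix for (1.23)

    f_true = np.exp(-(E - 5.0) ** 2)         # assumed "true" spectrum f(E)
    g = A @ f_true                           # simulated measurements g(x)

    print("condition number of A:", np.linalg.cond(A))
    f_naive = np.linalg.solve(A, g + 1e-8 * np.random.randn(m))  # tiny noise added
    print("max error in naive solution:", np.abs(f_naive - f_true).max())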
u(r, θ) = (1/2π) ∫_{−π}^{π} [(1 − r²)/(1 − 2r cos(θ − φ) + r²)] g(φ)dφ.   (1.24)
However, if we ask for the solution of the inverse problem, namely, to find a potential
distribution g(φ) on the rim of the unit disc that would result in a given desired
*For this problem and other interesting problems, modeled as Fredholm integral equations of the first kind,
see Wing [1991, p. 18], courtesy of SIAM.
potential distribution u(r, θ) inside the disc, then we face the above equation (1.24) as
an integral equation in the unknown function g(φ). We note here that φ in the integral
of (1.24) is a dummy variable, and just like θ, it is the polar angle, −π < φ ≤ π.
Further discussion of this problem, including the derivation of the result in (1.24) is
found in Section 4.1.4 [with the above result as (4.69)].
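The direct problem in (1.24), by contrast, is straightforward; the following Python sketch (illustrative only, with an assumed boundary potential g(φ) = cos φ, for which the exact interior potential is r cos θ) evaluates the integral by a simple quadrature.

    import numpy as np

    def poisson_u(r, theta, g, m=2000):
        # Evaluate (1.24) by the midpoint rule on (-pi, pi)
        phi = np.linspace(-np.pi, np.pi, m, endpoint=False)
        kernel = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - phi) + r**2)
        return np.mean(kernel * g(phi))      # equals (1/2pi) * integral

    g = np.cos                                # assumed rim data g(phi) = cos(phi)
    print(poisson_u(0.5, 0.0, g))             # exact value is r*cos(theta) = 0.5

Recovering g(φ) from given interior values u(r, θ), on the other hand, is a first-kind problem with the same practical difficulties noted for (1.23).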
B. Inverse Problems
Another example of an integral equation arises when we have U(λ), the Fourier
transform of u(x),

U(λ) = ∫_{−∞}^{∞} e^{−iλx}u(x)dx.   (1.26)

Here the solution u(x) of the integral equation (1.26) is given in Section 1.4.2 as

u(x) = (1/2π) ∫_{−∞}^{∞} e^{iλx}U(λ)dλ.   (1.27)
⁷Here J_n(x) is the Bessel function of the first kind of order n, which is one of the two solutions of the
Bessel differential equation x²u″ + xu′ + (x² − n²)u = 0 that is bounded at x = 0,

J_n(x) = Σ_{k=0}^{∞} (−1)^k (x/2)^{n+2k} / (k!(n + k)!).
It is needed in Chapter 2 for formulating and solving the dual integral equations
representation of the electrified disc problem. In Section 2.6 we will derive the
simpler problem of the electrified plate, where the Fourier transform of Section 1.4.2
is employed. The electrified disc dual integral equations setting and their solution
are done in Example 1 of Appendix A.
Example 3
Verify that

u(x) = A for |x| < a,  u(x) = 0 for |x| > a,   (E.1)

is a solution of the integral equation

(2A sin λa)/λ = ∫_{−∞}^{∞} e^{−iλx}u(x)dx,   (E.2)

which is a special case of (1.26).
We substitute u(x) from (E.1) in the integral of (E.2) to obtain

∫_{−∞}^{∞} e^{−iλx}u(x)dx = A∫_{−a}^{a} e^{−iλx}dx = (A/(−iλ)) e^{−iλx} |_{x=−a}^{a}
   = (A/(iλ))(e^{iλa} − e^{−iλa}) = (2A sin λa)/λ,   (E.3)

which is the left side of (E.2), after using the identity

sin λa = (e^{iλa} − e^{−iλa})/(2i).
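A quick numerical spot check of (E.3) (illustrative only, with arbitrarily chosen values of A, a, and λ) can be done in Python:

    import numpy as np

    A, a, lam = 2.0, 1.5, 0.7
    x = np.linspace(-a, a, 200001)
    numeric = np.trapz(A * np.exp(-1j * lam * x), x)   # the integral in (E.2)
    exact = 2.0 * A * np.sin(lam * a) / lam            # the left side of (E.2)
    print(abs(numeric - exact))                        # near zero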
d²u/dx² = λu(x) + g(x),  x > 0,   (1.29)
u(0) = 1,   (1.30)
u′(0) = 0,   (1.31)

reduces to an integral equation in u(x),
Exercises 1.1
2. (a) Reduce the integral equation (E.1) of the Bernoulli problem in Example 2
to a differential equation. Hint: Differentiate both sides of (E.1) using
the fundamental theorem of calculus: d/dx ∫_a^x F(ξ)dξ = F(x).
(b) Solve the resulting differential equation in part (a) for f(x), the solution
of the Bernoulli problem (E.1).
For the following Exercises 3 to 5 verify that the given function u(x) is a
solution to the indicated (Volterra) integral equation.
SAU (ors,
ie [ee @-orunae
zu
Aur)
= 1)— a,
Hint: Take the factor e^x outside the integral, then there is a simple integration,
part of which involves integration by parts.
1
u(x) =1— | sin ctu(t)dt
0
u(x) = x − x³/6,
u(x) = x − ∫_0^x sinh(x − t)u(t)dt.
For the following Exercises 6 to 10 show whether or not the given function
u(x) is a solution to the indicated (Fredholm) integral equation of that particular
Exercise.
Hint: In substituting the kernel K(x, t), with its two different branches, in the
integral of (E.1), you should write the integral as the sum of two integrals on
0 ≤ t ≤ x and x ≤ t ≤ 1, where the second and the first branches of K(x, t)
in (E.2) are used for these two subintervals, respectively.
eae Te Alice
shake Mu(adr = veld) = 46 Bisie
10)
10. Verify that u(x) = e^{−x}, x > 0, is a solution of the integral equation (Fourier
cosine transform)

1/(1 + λ²) = ∫_0^∞ cos(λx)u(x)dx.
i Verify that
ws feo see? ceee-|ar <a
Wee fogs Lal soa
is a solution of the integral equation (Fourier transform)
[o-@)
12. (a) Show that the integral equation

h(x)u(x) = f(x) + ∫_a^b K(x, t)u(t)dt,  h(x) > 0,   (E.1)

can be put in a form with the modified kernel

k(x, t) = K(x, t)/√(h(x)h(t)).

Hint: Divide both sides of (E.1) by the function √h(x) on (a, b).
(b) Show that if K(x, t) is symmetric in (E.1), then the resulting modified
kernel k(x, t) is also symmetric.
13. Give the equations that describe the rate of change of the two biological species
living together of (1.13) and (1.14) when they are separate (independent). Hint:
See (2.9)–(2.11).
14. Use the following hints to derive the probability expression p(x) = e^{−σx} for
a neutron to travel a distance x without being absorbed. σ is the cross section
(of the nuclei) of the material as it appears to the neutron.
Here we shall assume only that the probability function p(x) satisfies

p(x + Δx) = p(x)p(Δx).   (E.2)

Also, with p(0) = 1, the (decreasing) p(Δx) can very well be approximated by

p(Δx) ≈ 1 − σΔx,  Δx → 0.   (E.3)

Use (E.2) with the above approximation of p(Δx) in (E.3) to generate a first-
order differential equation in p(x), then solve it to find p(x) = e^{−σx}. Of
course we have the boundary condition p(0) = 1 for determining the arbitrary
constant in the solution of the first-order differential equation.
15. Consider the Fredholm integral equation of the first kind (1.23) in f(E), the
neutron spectrum, where the output g(x) is measured data.
16. Reduce the initial value problem

d²u/dx² + u = 0,  x > 0,

with its initial conditions to a Volterra integral equation. Hint: See (1.29)–(1.31) and use (1.32).
u(0) = 0,  u(L) = 0

to a Fredholm integral equation. Hint: See (1.33)–(1.35) and use (1.36) and
(1.37).
As we remarked in the preceding section, it seems that most of the integral equations
we have presented fall under two main categories: those with variable limits of
integration, such as (1.7), (1.8), (1.10), (1.12), (1.15), (1.20), and (1.32), and those
with fixed limits of integration, such as (1.17), (1.19), (1.23), (1.24) and (1.36).
These two classes of integral equations are called Volterra and Fredholm integral
equations, respectively. As we shall see in Chapter 2, these two classes represent
two different sets of problems and require different methods of solution, which we
present in Chapters 3 and 5, respectively. In the following we present a more detailed
classification of integral equations in order to become familiar with the conditions
and the terminology that soon will be used in the formulation of the problems or the
construction of their respective solutions.
The most general linear integral equation in u(x) can be presented as¹⁰

h(x)u(x) = f(x) + ∫_a^{b(x)} K(x, ξ)u(ξ)dξ,   (1.39)

or in operational notation, similar to what we wrote for (1.1) in (1.5),
is a linear equation in w(t), as we show in the next example, while the following
integral equation
we have

0 = hw₁(t) + ∫_{−∞}^t φ(t, τ)w₁(τ)dτ,   (E.1)

which says now that w(t) = c₁w₁(t) + c₂w₂(t) satisfies (1.15h), and hence this linear
combination of w₁(t) and w₂(t) is also a solution of the homogeneous equation
(1.15h).
section,
b(a)
Of course, our familiarity with linear systems like linear algebraic equations and
linear differential equations tells us that the presence of the quadratic term u²(t)
inside the above integral results in the equation being nonlinear in u(t). However,
to further illustrate the way of proving linearity, we will show here that if u₁(t) and
u₂(t) are solutions to (E.4), then their linear combination c₁u₁(t) + c₂u₂(t) is not a
solution to (E.4). We proceed as we did in part a) by assuming u₁(t) and u₂(t) as
solutions of (E.4) to have

u₁(x) = ∫_a^b k(x, t)u₁²(t)dt,   (E.5)

u₂(x) = ∫_a^b k(x, t)u₂²(t)dt.   (E.6)

Multiplying (E.5) by c₁ and (E.6) by c₂ and adding, we have

c₁u₁(x) + c₂u₂(x) = ∫_a^b k(x, t)[c₁u₁²(t) + c₂u₂²(t)]dt,   (E.7)

where we see clearly that the linear combination c₁u₁(x) + c₂u₂(x) [of the two
solutions u₁(t) and u₂(t) of (E.4)] is not a solution to (E.4), since the integrand in
(E.7) involves c₁u₁² + c₂u₂² rather than the square of the combination, as would be
required. Hence (E.4) is a nonlinear integral equation in u(t).
The function K(x, ξ) in (1.40) is called the kernel or nucleus of the integral
equation. An integral equation is termed singular if the range of integration is infinite
or if the kernel K(x, ξ) becomes infinite in the range of integration. The Fourier integral
in u(x) of (1.26) is singular, because the range of integration is infinite (−∞, ∞),
while Abel's equation (1.20) is singular because the kernel 1/√(y − η) becomes infinite
in the range of integration (0, y) at η = y.
For the unbounded-kernel singular integral equations, there are two important
classes that we should differentiate between, since their methods of solution are
completely different. An integral equation with kernel K(x, t) = k(x, t)/|x − t|^α,
0 < α < 1, where k(x, t) is bounded, is termed a weakly singular equation, or we say
that its kernel K(x, t) is with weak singularity. The other class of singular integral
equations is that of strong singularity, with kernel K(x, t) = k(x, t)/(x − t), where
k(x, t) is bounded. These are called kernels with strong singularity or with Cauchy
singular kernel.
The generalized Abel integral equation

f(x) = ∫_0^x u(ξ)/(x − ξ)^α dξ,  0 < α < 1,   (1.48)

is singular since its kernel 1/(x − ξ)^α is singular at ξ = x, and it is also with weak
singularity, the Abel equation (1.20) corresponding to α = 1/2.
In the case of K(x, ξ) = K(x − ξ) in (1.40), that is, when the kernel depends
on the difference x − ξ, which is what we will call a difference kernel, the Volterra
equation of the first kind assumes a Laplace type of convolution product, which we
shall discuss in detail in Section 1.4.1. Such equations lend themselves to the
Laplace transform method of solution that we shall illustrate in Sections 1.4.1 and
3.2.1. The following singular Fredholm equation of the first kind with difference
kernel,

f(x) = ∫_{−∞}^{∞} K(x − ξ)u(ξ)dξ,   (1.49)

assumes the Fourier type of convolution product, which will be discussed in Section
1.4.2, and thus suggests a Fourier transform method of solution. These two examples
may illustrate the different methods used for solving Volterra and Fredholm integral
equations. In Section 1.4 we present the Laplace and Fourier integral transforms and
illustrate their methods of solving integral equations with difference kernels.
Exercises 1.2
De f(a)ads icesfreee)ait
(@)
i
Coa ey
(0) Z—t
3. Show whether or not (the homogeneous parts) of the following integral equa-
tions are linear.
Hint: For the proof of linearity, according to the definition and paralleling
Example 4 a), we consider only the homogeneous version of the integral
equation [i.e., f(x) = 0 in (1.42) or (1.46)].
(a) The integral equation (1.19) in y(x) of the small deflection of a rotating
shaft.
(b) The integral equation in u(x),
(7) cos 67 =
TOI OF fee T
where t and τ represent the position vectors of points in the interior and on the
curve C, respectively, r = τ − t is the vector distance between such points,
r = ||r||, n is the unit exterior normal vector to C at τ, and ds is an arc-length
increment of C.
5. Determine the class of singularity for the kernel of each of the following integral
equations
corresponding to ,, i.e.,
Σ_{i=1}^{n} c_iφ_i(t)   (E.3)
of such solutions is also a solution to (E.1).
Hint: Multiply both sides of (E.2) by c_i, sum from i = 1 to n, and invoke
(E.2) for the integral to give the result.
In this section we will derive and illustrate very basic identities that are needed to
facilitate the analysis of reducing an important class of initial value problems and
boundary value problems to Volterra and Fredholm integral equations, respectively,
and vice versa. The latter topics are presented in Sections 2.4 and 2.5, respectively.
This includes a basic identity that reduces the repeated integrations, necessary for
integrating higher order derivatives, to a single integral. The other identity is the gen-
eralized Leibnitz rule for differentiating integrals (with variable limits of integration),
which is needed for reducing an integral equation to a differential equation. The rest
of the section is devoted to a few very basic definitions.
In Chapter 2 we present initial and boundary value problems associated with linear
differential equations and, usually, homogeneous auxiliary conditions, to show how
they can be represented by Volterra and Fredholm integral equations, respectively. In
doing so we need to perform a number of integrations. For example, in the second-
order differential equation of the form d²y/dx² = F(x), we can integrate twice to
obtain

dy/dx = ∫_a^x F(ξ)dξ + c₁,

y(x) = ∫_a^x ∫_a^ξ F(t)dt dξ + c₁x + c₂.   (1.50)

Note how we had to change the variable of integration (ξ to t in the inner integral) to
keep x, the independent variable, as the limit of the last integration.
This can be proved by two methods. The first is integrating by parts, letting dv = dξ
and u(ξ) = ∫_a^ξ F(t)dt in (1.51), and knowing that du/dξ = F(ξ):

∫_a^x ∫_a^ξ F(t)dt dξ = [ξ ∫_a^ξ F(t)dt]_a^x − ∫_a^x ξF(ξ)dξ
   = x∫_a^x F(t)dt − 0 − ∫_a^x ξF(ξ)dξ   (1.51)
   = ∫_a^x (x − t)F(t)dt,

where we replaced ξ by t in the last integral since ξ and t are only dummy variables
of integration (with the same limits a to x).
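A symbolic spot check of the identity (1.51), offered purely as an illustration for a sample integrand F(t) = t² chosen arbitrarily, can be done in Python with sympy:

    import sympy as sp

    x, xi, t, a = sp.symbols('x xi t a', real=True)
    F = t ** 2                                 # sample integrand
    double = sp.integrate(sp.integrate(F, (t, a, xi)), (xi, a, x))
    single = sp.integrate((x - t) * F, (t, a, x))
    print(sp.simplify(double - single))        # prints 0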
The second method involves exchanging the two integrals. We will also illustrate
this method since it is often used. The domain of the double integral (1.50) is shown
in Figure 1.8, where the integration over t first, then ξ, is indicated by the solid
arrows. When the integration is interchanged (i.e., when we integrate with respect to
ξ first, then t), as indicated by the dashed arrows, we obtain the same single-integral
result ∫_a^x (x − t)F(t)dt.

Fig. 1.8 Domains for performing the integration in (1.51) with respect to t first (solid lines)
or with respect to ξ first (dashed lines).
d²y/dx² = λy(x)   (E.1)

to an integral equation.
We let d²y/dx² = F(x) and integrate twice with respect to x to obtain

y(x) = ∫_a^x ∫_a^ξ F(t)dt dξ + c₁x + c₂ = ∫_a^x (x − t)F(t)dt + c₁x + c₂,   (E.3)

after using the identity (1.51). From (E.1) we have

F(x) = d²y/dx² = λy(x),   (E.4)

which we can substitute in (E.3) to obtain the integral equation
Once we obtain an integral equation for the initial or boundary value problem, it
becomes natural to inquire whether this integral equation indeed satisfies the original
d/dx ∫_{α(x)}^{β(x)} F(x, y)dy = ∫_{α(x)}^{β(x)} ∂F/∂x (x, y)dy + F(x, β)dβ/dx − F(x, α)dα/dx.   (1.53)

To establish (1.53) we let

φ(α, β, x) = ∫_{α(x)}^{β(x)} F(x, y)dy   (1.55)

and

φ(α, β, x) = f(x, β) − f(x, α),   (1.57)

where f(x, y) is an antiderivative of F(x, y) with respect to y.
So we will use the chain rule on φ(α, β; x) as a function of the three variables α(x), β(x),
and x,

dφ/dx = ∂φ/∂x + (∂φ/∂β)(dβ/dx) + (∂φ/∂α)(dα/dx),   (1.58)

and allow partial differentiation with respect to x inside the integral of (1.55), giving
us

∂φ/∂x = ∂/∂x ∫_{α(x)}^{β(x)} F(x, y)dy = ∫_{α(x)}^{β(x)} ∂F/∂x (x, y)dy,   (1.59)
since ∂φ/∂x here means keeping α and β as constants. Also, if we use φ(α, β, x) =
f(x, β) − f(x, α) from (1.57), we have

∂φ/∂β = ∂f(x, β)/∂β = F(x, β)   (1.60)

and

∂φ/∂α = −∂f(x, α)/∂α = −F(x, α).   (1.61)

Substituting (1.59)–(1.61) in (1.58), we have

dφ/dx = ∫_{α(x)}^{β(x)} ∂F/∂x (x, y)dy + F(x, β)dβ/dx − F(x, α)dα/dx,   (1.62)
which is (1.53).
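A symbolic spot check of the generalized Leibnitz rule (1.53), given purely as an illustration for the arbitrarily chosen sample F(x, y) = x·y², α(x) = x, and β(x) = x², can be done in Python:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    F = x * y ** 2
    alpha, beta = x, x ** 2
    lhs = sp.diff(sp.integrate(F, (y, alpha, beta)), x)
    rhs = (sp.integrate(sp.diff(F, x), (y, alpha, beta))
           + F.subs(y, beta) * sp.diff(beta, x)
           - F.subs(y, alpha) * sp.diff(alpha, x))
    print(sp.simplify(lhs - rhs))              # prints 0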
Example 6
Verify that the solution y(x) of the Volterra integral equation

y(x) = λ∫_a^x (x − ξ)y(ξ)dξ + c₁x + c₂   (E.1)

satisfies the differential equation d²y/dx² = λy(x) of (E.2).
Differentiating (E.1) once, we have

dy/dx = λ∫_a^x y(ξ)dξ + c₁,   (E.3)

after using (1.53) on the integral in (E.1) with α(x) = a, β(x) = x, and K(x, ξ) =
x − ξ. If we now differentiate (E.3) using (1.53) again, or its special case (1.54), we
obtain (E.2):

d²y/dx² = λy(x).   (E.2)
Another similarly important case where we will need the generalized Leibnitz rule
(1.53) is when we reduce a Fredholm integral equation to its equivalent boundary
value problem associated with a differential equation, which we hope is a familiar one
to solve. This is attained by differentiating the integral equation, as we did in Example
6, until we reduce it to a differential equation and then seek the boundary conditions
needed from the integral equation. For example, the homogeneous Fredholm integral
equation of Exercise 5 below reduces to the boundary value problem

d²u/dx² + λu = 0,   (E.6)
u(0) = 0,   (E.7)
u(1) = 0,   (E.8)

which we will leave as an exercise. [See Exercise 5 of this section or Example 6 of
Section 2.5, equations (E.1)–(E.11).]
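This correspondence can also be seen numerically. The following Python sketch (illustrative only) discretizes the integral operator with the triangular kernel of Exercise 5 below, K(x, t) = t(1 − x) for t ≤ x and x(1 − t) for t ≥ x on (0, 1), and recovers eigenvalues close to λ_n = (nπ)², which are exactly the eigenvalues of the boundary value problem (E.6)–(E.8).

    import numpy as np

    m = 400
    h = 1.0 / m
    x = (np.arange(m) + 0.5) * h               # midpoint grid on (0, 1)
    T, X = np.meshgrid(x, x)
    K = np.where(T <= X, T * (1.0 - X), X * (1.0 - T))   # symmetric kernel
    mu = np.linalg.eigvalsh(K * h)             # eigenvalues of the integral operator
    lam = np.sort(1.0 / mu[mu > 1e-8])[:3]     # lambda = 1/mu for u = lambda*K[u]
    print(lam)                                  # close to pi**2, 4*pi**2, 9*pi**2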
In looking at the final result of the generalized Leibnitz formula (1.53) of the last
section,

d/dx ∫_{α(x)}^{β(x)} F(x, y)dy = ∫_{α(x)}^{β(x)} ∂F/∂x (x, y)dy + F(x, β)dβ/dx − F(x, α)dα/dx,
we can basically interpret it as a rule that allows us to take the differentiation operation
inside the integral on the left side of (1.53), as a partial differentiation, as seen in the
integral term on the right side of (1.53). This we may now term as some type of
interchange of the two basic operations of differentiation
and integration. In mathematical analysis, and especially its applications, one faces
many situations of such interchange of many very basic mathematical operations. A
summary and illustration of the main theorems that allow such an interchange are
found in Jerri [1992, pp. 99-104, pp. 377-382.]
As is expected, when we deal with improper integrals, we must assure the convergence
(and, sometimes a certain type) for the individual integrals. Very familiar situations
are when we deal with the Laplace transform of f(x) on (0, 00),
interval 0 ≤ x ≤ R, and (ii) of exponential order as x → ∞, i.e., |f(x)| does not
grow faster than Me^{ax}, where M and a are constants (or, there exist positive
numbers M and A such that |f(x)| ≤ Me^{ax} for all x > A).
(i) the function f(x) is continuous on each of the subintervals x_{i−1} < x < x_i,
i = 1, 2, ⋯, n, and
(ii) f(x) approaches a finite limit as x approaches the limits of the subinterval, x_{i−1}
and x_i, from the interior.
Figure 1.9 illustrates a function f(x) which is sectionally continuous on the interval
(a, b); that is, it is continuous on each of the open subintervals (a, x₁), (x₁, x₂), and
(x₂, b). Note, for example, that the left- and right-hand limits f(x₂−) and f(x₂+),
as x approaches x₂, are not equal, and we say that f(x) has a jump discontinuity at
x₂ of magnitude J = f(x₂+) − f(x₂−).

Fig. 1.9 A sectionally continuous function f(x) on (a, b) with two jump discontinuities at
x₁ and x₂. From Jerri [1992], courtesy of Marcel Dekker Inc.
For the theory of Fourier series and integrals, we shall need a more restricted class
of functions than the above piecewise continuous functions, namely the piecewise
smooth functions.
(2) df/dx approaches a finite limit as x approaches the limits of the subinterval,
x_{i−1} and x_i, from the interior; i.e., there exist f′(x_{i−1}+), f′(x_i−) for each
(x_{i−1}, x_i), i = 1, 2, ⋯, n.
For example, the function in Figure 1.9 is piecewise continuous on (a, b) but it is
not piecewise smooth because of condition (2) above in the subinterval (c, b), where
its derivative df/dx does not approach a limit f′(c+) as x approaches the end point
c from the right. For completeness, we may mention that the function is sectionally
smooth on (a, c), and it is smooth on (a, x₁).
(ii) of exponential order e^{ax}, that is, |f(x)| ≤ Me^{ax} for x > A,
then the Laplace transform F(s) of f(x) in (1.63) exists for s > a.
Proof

F(s) = ∫_0^∞ e^{−sx}f(x)dx = ∫_0^A e^{−sx}f(x)dx + ∫_A^∞ e^{−sx}f(x)dx.   (E.1)

The first integral on the finite interval (0, A) clearly converges since e^{−sx}f(x)
is bounded for the sectionally continuous f(x). For the convergence of the second
integral, we will use the result of comparison of improper integrals, along with the
exponential growth of f(x), to show that it converges provided that s > a.
¹¹Optional.
where in the last integral we used |f(x)| ≤ Me^{ax}. The last improper integral clearly
converges for s > a, which concludes our proof.
In this fashion, we have shown not only that the Laplace transform exists, by
proving that ∫_0^∞ e^{−sx}f(x)dx < ∞ for s > a, but also that e^{−sx}f(x) is absolutely
integrable, i.e., ∫_0^∞ |e^{−sx}f(x)|dx < ∞, for s > a.
We must also note that the above two conditions (i) and (ii), that f(x) be sectionally
continuous on (0, A) and of exponential growth as x → ∞, are sufficient but not
necessary. An example of f(x) not sectionally continuous on (0, ∞) is f(x) = 1/√x,
which is infinite as x → 0. This means that the first of the above two sufficient
conditions (i) is not satisfied. However, it can be shown, with the aid of the
gamma function, that the Laplace transform of 1/√x does exist as L{1/√x} = √(π/s),
s > 0. [See the definition of the gamma function in (1.74) and the Laplace transform
pair for ν = −1/2 in (1.79), and Exercise 1(b) (and 4(b)) of Section 1.4.]
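A quick numerical check of this last result (offered purely as an illustration, for the arbitrarily chosen value s = 2) can be made in Python; the substitution x = u² removes the endpoint singularity before the quadrature.

    import numpy as np
    from scipy.integrate import quad

    s = 2.0
    # integral_0^inf e^{-s x} x^{-1/2} dx, with x = u^2 so the integrand is 2 e^{-s u^2}
    value, _ = quad(lambda u: 2.0 * np.exp(-s * u * u), 0.0, np.inf)
    print(value, np.sqrt(np.pi / s))           # the two numbers agree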
Then if we "formally" allow the interchange of the limit process with the integration,
an operation that is valid¹² for f(t) in the class of the functions in Example 7,
|f(t)| ≤ Me^{at}, we have
¹²The condition for allowing the above interchange of the two operations is very close to the Lebesgue
convergence theorem. See Jerri [1992, p. 99, Theorem 2.10].
we know that there exists no solution f(t) for the above integral equation in the
class of functions described in Example 7. In other words, there is no such function
f(t) in the domain of the Laplace transform operator that is mapped to the given
F(s) above. So, if we write the general integral equation of the first kind in u(t)
Exercises 1.3
d²y/dx² + by = f(x),   (E.2)
d²y/dx² = cos x,   (E.2)
K(x, t) = t(1 − x) for 0 ≤ t ≤ x,  K(x, t) = x(1 − t) for x ≤ t ≤ 1,   (E.2)
to reduce it to a differential equation. Hint: Write the integral equation
with its explicit kernel (E.2) as
u(x) = λ∫_0^x t(1 − x)u(t)dt + λ∫_x^1 x(1 − t)u(t)dt
     = λ(1 − x)∫_0^x t u(t)dt + λx∫_x^1 (1 − t)u(t)dt   (E.3)
and differentiate, realizing that each term in the right side of (E.3) is
a product of two functions of x, and use (1.53) for differentiating the
integrals.
(b) Use the form (E.3) to find the two boundary conditions for u(x) at x = 0
and x = 1 as u(0) = 0 and u(1) = 0.
(c) Solve the resulting boundary value problem associated with the differential
equation of part (a) and the boundary conditions in part (b).
Hint: You have the two linearly independent solutions sin(√λ x) and
cos(√λ x) for the differential equation, to use in a linear combination to
satisfy the two boundary conditions u(0) = 0, u(1) = 0. Here you will
end up with an "infinite" set of solutions (called the "eigenfunctions"
or "characteristic" functions of the boundary value problem). These
solutions are associated with the discrete values λ_n, n = 1, 2, ⋯, of the
parameter λ in (E.1) (which are called the eigenvalues or "characteristic"
values of the boundary value problem). (See also Example 6 of Section
2.5.)
We have already defined in Section 1.3 the (usual) class of functions f(x) for which
the above improper Laplace integral exists. This is the class of sectionally
continuous (Definition 2) and of exponential order (Definition 4) functions. We then proved
such existence in Example 7, and we shall repeat only the statement here as
Theorem 1 on the existence of the Laplace transform for such a class of functions.
(ii) of exponential order e^{at}, i.e., |f(t)| ≤ Me^{at} for t > A,
then the Laplace transform F(s) of f(t) in (1.63) exists for s > a. The proof is
done in detail in Example 7. It is clear that the equation defining the Laplace transform
in (1.63) represents a Fredholm integral equation of the first kind in f(x) with kernel
K(s, t) = e^{−st}, which is singular since the integral is with an infinite limit. To speak
about the inverse of the Laplace transform in (1.63), that is f(t) = L⁻¹{F(s)}, in our
present notion of integral equations, is to embark on the attempt to solve the singular
integral equation of the first kind (1.63) in f(t). As is the case for solving most
singular integral equations of this type, the tools of complex analysis are employed.
For our purpose in this book, where we don't require formal preparation in functions
of complex variables, we will be satisfied with the following clear statement of the
result. We will, however, follow this by a more appropriate formula at the level of
this book, but, possibly due to its impracticability as shown in (1.67), it is not much
referred to in the discussion of the Laplace transform in almost all textbooks. Such
formula (1.67) uses only differentiation, without resorting to complex variables.
The solution f(t) to the singular integral equation (1.63) is given as the inverse
Laplace transform of F(s), for which we shall state the conditions of existence in
Theorem 2,
f(t) = L⁻¹{F} = (1/2πi) lim_{L→∞} ∫_{γ−iL}^{γ+iL} e^{zt}F(z)dz,  γ > Re{z_j},   (1.65)

where {z_j} are the singularities of F(z). The above integral is a complex line integral
of F(z), z = x + iy, taken along a vertical line in the complex plane at x = γ, where
z = σ = γ + iy, and where the location x = γ is to the right of all singularities
{z_j} of F(z). For example, the Laplace transform of f(t) = e^{2t}, 0 ≤ t < ∞, is the
real-valued function F(s) = 1/(s − 2). For the inversion formula (1.65) we extend F(s)
analytically to the complex plane as F(z) = 1/(z − 2), and we note that
it has only one singularity at z₁ = 2; thus we take the vertical complex line integral
to the right of the real part of z₁, which is x₁ = 2, i.e., γ > 2. The derivation of the
Laplace transform inversion formula (1.65) involves relating the Laplace transform
to the Fourier transform of causal functions (f(t) = 0, t < 0), where the definition
of both transforms is extended to complex variables. In this introductory book we
don't assume preparation in complex variables, and in our next very brief discussion,
and a statement of an important theorem, we will only use something like the above
basic elements of complex numbers.
In Theorem 1 we stated conditions for the existence of the Laplace transform
(1.63),

F(s) = L{f(t)} = ∫_0^∞ e^{−st}f(t)dt.   (1.63)

Theorem 2. The Existence of the Inverse Laplace Transform (as a solution to the
singular integral equation (1.63))
If F(z), z = x + iy, is the Laplace transform of any function f(t) of exponential
order O(e^{x₀t}), where f(t) and f′(t) are sectionally continuous in each interval
0 ≤ t ≤ T, then the (inversion) integral of F(z) in (1.65) along any line x = γ,
where γ > x₀, exists and represents f(t).
are not so stringent, since in the applications, for example, f(t) may be the dis-
placement of mechanical vibrations or an electrical current, and we can easily impose
"sectional continuity" and "of exponential order" on the displacement f(t) and its
derivative df/dt.
This version of the theorem for the existence of the solution f(t) of the singular
integral equation (1.63) is what we considered the appropriate one for this book from
among other theorems, whose statements involve complex analysis. However, for
our purpose of solving the integral equation (1.63) in f(t), we would like to have the
conditions put on the given function F(s), and not on its analytic extension F(z).
This is exactly the advantage of the other, not well quoted in texts, form of solution
to (1.63), as we shall present in (1.67). What remains about the solution of (1.63) is
its uniqueness. As is the case for all integral transforms, the inverse is not unique
in the sense that two solutions f₁(t) and f₂(t) of (1.63) could differ at any finite set
of points t₁, t₂, ⋯, t_n, or even at an infinite set of points t₁, t₂, ⋯, and still give the
same F(s) (see Exercise 3(b)). For the proof of Theorem 2, and the other theorems,
see Churchill [1972].
f(t) = lim_{k→∞} [(−1)^k / k!] (k/t)^{k+1} F^{(k)}(k/t).   (1.67)
We may remark here that though this formula is appealing on first sight, it is very
demanding, which is possibly the reason that it is scarcely mentioned in texts compared to
(1.65). It is obvious that this formula puts the burden on the given function F(s),
which is more suitable for us as we look for an inverse Laplace transform f(t) as the
solution of the integral equation of the first kind

F(s) = ∫_0^∞ e^{−st}f(t)dt.

First, it requires F(s) to have derivatives F^{(k)} of very high order, evaluated at the large
arguments k/t as k becomes very large. This may also illustrate another difficulty in
finding the solution of integral equations of the first kind. What makes this problem
worse is that we often have the known output F(s) only as a finite number of data points, along
with the inaccuracy of their measurements. So, the derivative of these data,
with their inaccuracy, will have its own error, and for higher derivatives such errors will
be compounded to render the result useless!
This, as we planned it, should illustrate again the "inherent" difficulties of the
integral equations of the first kind. So it is not the problem of formulas (1.65) or
(1.67), where they "innocently" require the best of (the input) functions F(s): as
infinitely differentiable (analytic) functions as seen in (1.67), or, what is hidden in
(1.65), the requirement that F(z) be analytic except for an "isolated" finite or
infinite number of points in the complex plane.
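The following Python sketch (an illustration only, applying the differentiation-only formula (1.67) as written above to the assumed transform F(s) = 1/(s + 1), whose exact inverse is e^{−t}) shows both how the formula works and how slowly it converges, since ever higher-order derivatives of F are needed.

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)
    F = 1 / (s + 1)                            # assumed F(s); exact inverse is exp(-t)
    t0 = 1.5
    for k in (5, 20, 60):
        approx = ((-1) ** k / sp.factorial(k)
                  * (k / t) ** (k + 1) * sp.diff(F, s, k).subs(s, k / t))
        print(k, float(approx.subs(t, t0)), float(sp.exp(-t0)))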
¹³From Wing [1991, p. 8], which is attributed to Post and Widder [Widder, 1946].
The most important property of the Laplace transform is known for solving dif-
ferential equations, where it transforms the differential operation df/dx on f(x) to
an algebraic operation sF(s) − f(0) on its Laplace transform F(s). This can be
shown next by using (1.63), performing one integration by parts, and assuming that
lim_{x→∞} e^{−sx}f(x) = 0 [i.e., f(x) is with exponential growth e^{ax} as x → ∞ and
s > a]:

L{df/dx} = ∫_0^∞ e^{−sx} (df/dx) dx = e^{−sx}f(x) |_0^∞ + s∫_0^∞ e^{−sx}f(x)dx
   = −f(0) + sF(s) = sF(s) − f(0).   (1.68)

(We may note that f(0) here is f(0+) since f(t) is defined on (0, ∞), and we have
lim_{t→0+} f(t) = f(0+).) A more precise statement of this "formal" result is given as the
following Theorem 3.
following Theorem 3.
(i) continuous for x ≥ 0 and of exponential order e^{ax}, and let
(ii) df/dx be sectionally (piecewise) continuous in every finite closed interval 0 ≤
x ≤ A. Then L{df/dx} exists for s > a and (1.68) results,
which we shall leave as an exercise [see Exercise 6(a)]. These results can be extended
to higher derivatives, and as seen in (1.68) and (1.69) we must supply the proper initial
conditions on f(x).
The above results, starting with Theorem 3, show the advantage of the Laplace
transform in reducing differential equations (with constant coefficients) in f(x),
0 ≤ x < ∞, with given appropriate initial conditions, to an algebraic equation
in the Laplace transform F(s). These are of general interest in methods of applied
mathematics, but what concerns us here, when dealing with integral equations, should
be the result of Laplace transforming an integral of the unknown function. A result
in this direction is

L{∫_0^x f(ξ)dξ} = F(s)/s.   (1.70)
This pair complements our very important Laplace transform pair of the derivative
df/dx as given in Theorem 3,

L{df/dx} = sF(s) − f(0).   (1.68)
The result in (1.70) can be proved easily by letting g(x) = ∫_0^x f(ξ)dξ, with its
Laplace transform G(s), and clearly g(0) = 0. From the fundamental theorem of
calculus we have dg/dx = f(x), and if we use (1.68) above for L{dg/dx} we have

F(s) = L{f(x)} = L{dg/dx} = sG(s) − g(0) = sG(s),

G(s) = F(s)/s,  s > 0.
In terms of integral equations, these two results (1.68) and (1.70) should prove useful
when dealing with some integro-differential equations, where the sought unknown
function f(x) is operated on by integration as well as differentiation.
A more general result concerning the Laplace transform of integral operations
is that of the Laplace convolution theorem. This is an extremely useful tool to the
important class of Volterra integral equations with difference kernel. But before
stating the convolution theorem as Theorem 4, we should point out the difficulty
facing the Laplace transform (or other similar integral transforms) method when
we have to deal with variable coefficients differential (or integral) equations. An
important result in this direction is

L{xⁿf(x)} = (−1)ⁿ dⁿF(s)/dsⁿ,   (1.71)
which simply says that it may be a disadvantage to work with the Laplace trans-
form when dealing with variable-coefficient terms in the differential equation to be
transformed. This is so since, for n a nonnegative integer, a polynomial coefficient
of order n in the original equation will result in an nth-order differential equation
in the Laplace transform space. This result (1.71) can be derived easily when we
differentiate the Laplace integral in (1.63) n times with respect to s:

dⁿF/dsⁿ = dⁿ/dsⁿ ∫_0^∞ e^{−sx}f(x)dx = ∫_0^∞ ∂ⁿ/∂sⁿ (e^{−sx}) f(x)dx
   = ∫_0^∞ (−x)ⁿ e^{−sx}f(x)dx = (−1)ⁿ ∫_0^∞ xⁿe^{−sx}f(x)dx = (−1)ⁿ L{xⁿf(x)}.
Again, and before introducing the important convolution theorem of Laplace trans-
form in (1.84) as Theorem 4, it is instructive at this point to have a few illustrations.
In solving initial value problems associated with differential or integral equations
we may need to Laplace-transform some familiar functions, for example,
—(s—a)x 1
fe} = [ ee de = e
SEO) 6
= =:
Sah
Ser02 (1273)
Also, after a problem is transformed to an algebraic equation in F’\(s) and then solved
for F(s), we need to transform Fs) back to f(x), the solution of the original
problem, which is called the inverse Laplace transform of F'(s) and is denoted by
f(x) = L7'{F(s)}. As was discussed earlier then presented in equation (1.65),
the direct Laplace transform inversion formula involves complex integration, a topic
which is not assumed as prerequisite for the level of thisstext. Thus, on this level
of preparation, and as it is done in all elementary differential equations books where
the Laplace transform is used, we will depend on the tabulated values of the Laplace
transform (Table 1.1) for the few illustrations in this book and refer the interested
reader to books of extensive tables'* for Laplace transform. For example, if the
solution of the Laplace-transformed problem is F'(s) = 1/(s — 5), then from (1.73),
the inverse Laplace transform of 1/(s — 5) is f(x) = e°*, which is the solution of
the original problem.
In Table 1.1, [(v) is the gamma function, which is defined as
[ e © dé = wilh,
0 Z
Jo(x) in Table 1.1 is a Bessel function of the first kind of order 0, which is a special
case of J;,(x) that is one of the two solutions of the Bessel differential equation,
@u d
a shi =+ (2? — n?)u =0 (1.77)
'4See Roberts and Kaufman [1966], Ditkin and Prudnikov [1965], Erdelyi et al. [1954].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 49
Operations
Sacha)
9. fi(x)
+ fo(z)
10. f(ax)
n—1
ar 8" F(s) La 3 Ome) (O)
"det k=0
O O
ae y)
were
sf(a)
= Grid ae
——_——__—__— 1.78
In(a) =D kl(n + k)!
k=0
ce
The first Laplace transform pair in Table 1.1,
pea)
is very important and can be proved easily with the aid of (1.74). The pair
after using the definition of the Laplace transform (1.63). The two Laplace transform
pairs
a
{sin ax} = ——
L{si MEE (1.81)
1.81
8
(e{cos ax} re
See 1.82
(1.82)
1 1
s?+9 s(s+1)
In Table 1.1 we note that the Laplace transform and its inverse are linear operations,
1 il
Lt 4 —— 4 = =i
lass} 3 sin3z E
(E.4)
aires):
However, if we write the partial fraction of 1/s(s + 1),
i 1 1
s(s +1) Sa a eae oe
we Can again use the linearity property of the inverse Laplace transform to write
1 1 1 1
Cts ———__ tale i Cae em
{s(s +1) \ 8s ¢e+1 S
=i 1 —£
—£ =l-e (E.6)
after consulting (1.79) with vy= 0 and (1.80) with a = 1. Hence, from (E.4) and
(E.6), the final solution to (E.3) is
Example 10 Use the Laplace transform to find the solution of the following Volterra
integral equation of the first kind with difference kernel K (2 —t)=e7-*
Li sin: aa
1
a ye of: :
a—t
u(oat} (8.2)
1 U(s)
= ; E.3
62-41 wig a1 CRS)
s—l S 1
U(s) = ——- = =—- - =—
(s) Sao liay 1S- hee
Definition 4 Let f;(x) and f2(x) be causal (vanish identically for z < 0) and
defined on (0, oo); then their (Laplace) convolution product is defined as
fi * fe sale fay
(he file) = [fle eyfoleas (1.86)
bsikfilm) fole — n)dn = (fo * fa)(2)
after letting 2 — € = n, and where we used the fact that f(z) is a causal function
where fo(x — 7) =O forn > z.
We are now in a position to state more precisely the convolution theorem for the
Laplace transform as the result in (1.84) and (1.85).
ize +1) \;
We have here a product of two Laplace transform F\(s) = 1/s and Fo(s) =
1/(s? + 1), where according to (1.73) and (1.81) their corresponding inverse Laplace
transform are f(x) = 1 and f2(x) = sin. Hence, according to (1.84) the result is
the convolution product of | and sin z, that is,
0
(E.1)
—=1 ‘1
en Fieeee - = :
iG tsorep} 1 —cosz
The Fourier exponential transform of the function f(a) defined on (—oo, 00) is
Co
It is clear that (1.87) represents a (singular) Fredholm integral equation of the first
kind in f(a) with kernel K (A, 2) = e~*, and so we should be interested in solving
54 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
for f(x) as the inverse Fourier transform f(z) = F~'{F}. Fortunately, and in
contrast with the Laplace transform, the Fourier exponential transform has a simple
and symmetric formula for its inverse,
We should note that there are a number of variations,!*> with minor modifications,
for the definition of the Fourier transform and its inverse. This usually depends on
the field of the text or research reference, where such notation, usually, is the most
convenient for that particular subject. For example in physics books, we see the
Fourier transform and its inverse written as
aa I i ie f(7)dx,dr2d273, (1.91)
2
'> For such varied notations in books and references of different fields, see Jerri [1992, pp. 129, 156].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 55
If f(x) in (1.87) is absolutely integrable, i-e., ib |f(x)|dx < ov, its Fourier
transform F(X) exists and, moreover, it is continuous.
According to this theorem and the symmetry between the Fourier transform and
its inverse, one may think of the same type theorem for the existence of f(z) in (1.88)
as the solution to the singular Fredholm integral equation (1.87) in f(x). What is
needed here, of course, is that F(A) in (1.88) is absolutely integrable, thus Theorem 5
is satisfied for the convergence of the integral of (1.88) for its f(z) to exist. However,
it turned out that, in general, this is not the case for F(A) of the absolutely integrable
f(x), since the Fourier transform F'(A), of the absolutely integrable function f(z), is
not necessarily absolutely integrable. To give an example, we consider the function
peed mall)
fla) =4 0, <0
which is absolutely integrable on (—0oo, 00)
CO (oe)
after knowing that e~*** = cos \x — isin Az is bounded at = oo. F(X) here is a
complex-valued function whose absolute value |F'(A)| = 4/ F(A) F(A) where F'(\)
is the complex conjugate of F(A), which is obtained from F(A) by replacing 7 by
/ ae | 1 1
F(A)| = F(A)F(A) = (+ Day) = Maan
So, if we look at the symmetric form of the inverse Fourier transform (1.88), we
cannot be sure of the existence (of f(x)) of this integral for such an input F(A),
which is the output of (1.87) as the Fourier transform of an absolutely integrable
function f(z). The reason is that F(X) here is, in general, not necessarily absolutely
integrable to guarantee the integral in (1.88) to exist, according to Theorem 5, and
define f(x). But, in practice, we would like to Fourier-transform back and forth
from f(z) to F(A) in (1.87) and then in a very symmetric way from F(A) to f(<)
56 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
via (1.88). Indeed, when we speak of signals, f(z) is the representation in the time
space while F(A) is its representation in the frequency, or Fourier, space. In quantum
mechanics f(z) is in the coordinate space, while F(A) is the representation in the
momentum (A) space.
To be able to utilize the Fourier transform (1.87) and its inverse (1.88) as convergent
integrals, we must restrict our class of transformed functions f(x) to more than just
absolutely integrable. One of the simplest versions of such restrictions, which satisfies
our needs here and which we shall adopt in this book, is that f (a) must be sectionally
smooth in addition to being absolutely integrable on (—0co, co). This is the statement
of the Fourier integral Theorem 6 that we shall state next after recalling Definition 2
of sectionally (or piecewise) smooth functions.
Theorem 6 The Fourier Integral Theorem (the Fourier Transform Inversion For-
mula)
If f(x) is a piecewise smooth function on every finite interval of the real line
(—co, co) and is absolutely integrable on (—oo, oo), then
1 co
which is what we usually see in most statements of the Fourier integral formula.
Such an equivalence is shown easily when we use the Euler identity e(*—-§) =
cos A(x — €) + asin A(x — €) in the inner integral of (1.93), and recognize the zero
contribution of the (odd function) sin A(x — €) to the outer integral.
The above analysis including the two Theorems 5 and 6 should make clear, we
hope, that it is one thing to have conditions on f(a) to guarantee the convergence
of its Fourier integral to represent F(A) in (1.87) and another to have conditions on
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 57
f(x) or F(A) for the existence of the solution f(x) of the singular Fredholm integral
equation of the first kind (1.87) in f(x). The essence here is that we are given the
general class of functions for which the given function F'(A) of (1.87) and its sought
solution f(z) belong. The question that still remains is how the form of the solution
to (1.87) was constructed or derived as another Fourier integral in (1.88) or in the
Fourier integral formula (1.93). The rigorous proof for the construction of (1.88) or
(1.93) is somewhat long, and can be found with necessary details in the author’s book
on the subject of transforms (see also the reference at the end of Chapter 2 therein).
There are, however, other methods that if presented in a fast or simple way, would
need either more of a mathematical background like generalized functions or lack the
rigor in justifying a number of assumed limiting processes.
The familiar method found in most books on the undergraduate level, does not
need sophisticated concepts, but it does slide over some very important justifications
of passing to the limits. Such justifications, if done very properly, may even make
such proof longer than the above mentioned detailed one.
and we see it as a singular Fredholm integral equation of the first kind with kernel
K(x, X) = sin Az. The solution f(x) to (1.95) is the inverse Fourier sine transform
whose existence and form can be justified as a special case of the Fourier integral
Theorem 6 as its following Corollary 1. The proof, is easy to establish from Theorem
6.
PA a ee i HOE (1.98)
0
which is another singular Fredholm integral equation of the first kind in f(x). As
was done for the Fourier sine integral (1.95) the existence and the form of the solution
f (x) to (1.98) (as the inverse Fourier cosine transform),
1 Danie Se
glf (zt) + f(x—)] = = [ cos Avdd | f (€) cos AEdE. (1.100)
0
We remark here that f(z) in (1.88) represents the solution of the Fredholm integral
equation (1.87) in f(x). Although (1.88) offers a very direct way of evaluating the
inverse Fourier transform, the integration may still be an involved one. Hence, for the
few illustrations that we have here, we will depend on the tabulated values of Fourier
transform presented in Table 1.2 and refer the interested reader to more extensive
tables.'° Some of these tabulated pairs can be obtained by simple integration. For
example:
Example 12
The Fourier transform of the function
F(A)
2Asin \a
r
Wh 8 Saye
~eit/46 id“ /4a
a
2(1 cd Ne ae
|X <a
Poe
On 1)
0 |A| >1
us el b>0
b
Operations
a” f (x,y)
60 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
Example 13
Find the Fourier exponential transform of
_ sinax
g(x) =. xt
(B.1)
Here we note that from Example 10, we have
FAG) = Mou
: z
eee
sin aX
(E.2)
OF zea r
In this problem we have f(a) = (sin ax) /z, but from the symmetry of the Fourier
transform (1.87) and its inverse (1.88) we have (as indicated in the last entry of Table
1.2)
F(z) = sin ax
and f(x)=4 9°
z
OF
WIS
|A| <a
Alea
in (E.3) we obtain
a, \Al<aa@
Ff : i Gen risa (E.5)
s1n Q@Zx
Tv
>? |A| =a
'7See Brigham [1974, 1988], Briggs and Henson [1995] and Jerri [1992].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 61
The tabulated Fourier transforms are used for evaluating many basic improper inte-
grals.
As we mentioned earlier, the most important property of the Fourier transform,
for solving (singular) Fredholm integral equations with a difference kernel, is the
convolution theorem, which states that
Lemma For fi (x), fo(x), and f3(x) in the class of functions that are bounded,
absolutely integrable on (—oo, 00), and sectionally continuous on each bounded
interval,
(1) (fi * fo)(x) = (fa * fi) (2) (1.103)
(2) fi(x) * [fo(x) + fa(x)] = (fi * fo)(x) + (fi * fs) (2) (1.104)
(3) fi * (fo * fa) = (fi * fo) * fs (1.105)
We leave the (formal) proof as a simple exercise.
V4rt i
LY
5 enn [At
u(c, en) 4 = tree) | |
ge re ee
=(@=3)) (E.2)
Parseval’s Equality
Another useful property of the Fourier transform is Parseval’s equality,
can be derived from the convolution theorem (1.101), which we shall leave for
Exercise 10 with a clear supporting hint.
in Section 5.1 and Theorem 6 in Section 5.3 for Fredholm equations of the second
kind, and in Theorem 5 of Section 5.2 and Theorem 7 of Section 5.4 for Fredholm
equations of the first kind). So, the illustration here may be considered as a formal
one in the absence of the above required checks for justifying our steps.
Consider the following singular Fredholm integral equation of the first kind in
u(x) with the given function f(a) and the difference kernel k(x — €),
(1.109)
provided that F(A) does not vanish. What remains is to find the solution u(z) as the
inverse Fourier transform of U(X), which we will illustrate in the next example.
If we take the Fourier transform of both sides of this equation, using the convolution
theorem on the integral in (E. 1) and realizing in this special problem that f (a) = e~ I2|
and the kernel k(x) = e~!*!, where from the fourth entry in Table 1.2, we have
{er}= ree 2
the Fourier transform of (E.1) becomes
2 2)
———— ——-~ U(r
OD ere arareprie OV)
and
De
U(A) ee, AT 12 0. E.3
64 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
The denominator vanishes for real values of AXwhen pp > ‘ and some real value of
A, so for uw < 5 we can take the inverse Fourier transform of U(A) in (E.3) to have
* 1 co Jerr
We note that the integral is in the form of the Fourier convolution product, moreover
its kernel is the same as that of the equation of the second kind (E.1) in Example 15.
So it may be appealing to try the convolution theorem in the same exact way like
what we did for Example 15, and we have
This equation (E.2) gives a nontrivial solution U(A) only if the parameter pz is
restricted such that w = BS, What remains now is to find the function, (or
. 2 . . .
feeNe
functions) u(x) that satisfies the integral equation (E.1) with zp = Sew
the knowledge of the basic properties of Fourier transform, one may arrive at such
solutions, sometimes, by a kind of inspection. We observe in (E.1) that our solution
2
u(t) is an input that results in an output u(x) which is within the pape constant
factor. If we also recall that the kernel e~!*~*l is shifted by t, where according to the
Fourier pair from Table 1.2, we have
'8Optional
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 65
Then if we nominate u(t) = e~™* for a solution, the integral on the right of (E.1)
will represent no more than the Fourier transform of e~!*~*!, which according to
(E.3) should give us eee R(X) a Thus (E.1) results in
1+ 2
2
e —ixr (E.4)
WO 3
as solution u(x) = e~** to (E.1) provided that the parameter ps (which we shall call
2
an eigenvalue later) is restricted to uw = . We should note in this example of
homogeneous singular Fredholm integral equation (E.1) that for every uw > 5 (1:7,
infinity of jz values) we have solution e~**” to the singular equation (E.1).
b) Here we consider another homogeneous singular Fredholm integral equation in
u(x),
Us) = uf u(t) sin xtdt (E.5)
where we note that the integral on the right is a Fourier sine integral. So the equation
(E.5) puts a Fourier sine integral U(a) as in (1.95)
z
yi (x) = ze
TW —ar
Pape tee (E.9)
and
Seaton x
y2(x) = 5° ees aa (peal) (E.10)
[2 2
are solutions of the integral equation (E.5) for 4 = = and pz = =e , respec-
2
tively. This should illustrate that for one value of the parameter pp = 41 = a for
example, the integral equation (E.5) has infinity of solutions y; (x) in (E.9) for all the
positive values of the constant a involved in yj (z).
The above-illustrated two features of i) infinite values of the parameter yz in the
singular equation (E. 1) and ii) the infinity of solutions corresponding to one value of
2 ; :
the parameter . = ,/ — for the other singular equation (E.5) do represent important
T
characteristics of singular Fredholm integral equations.
For completeness we present the following result (1.110), which represents one
of the most important properties of the Fourier transforms for solving differential
equations (with, usually, constant coefficients). This is the transforming of the
2
derivative —> (or higher derivatives) in the x-space to the algebraic —)\? F()) in
the Fourier A—space. Of course, as we mentioned for the Laplace transform, the
combination of such results like (1.110) for algebraizing differential operators, and
the convolution theorem (1.101), for algebraizing the convolution integral operator,
can be used in the attempt of solving integro-differential equations on (—oo, 00). We
will next state and prove this result for Fourier-transforming the first derivative —
2
as Theorem 8. The cases of higher derivatives like that of (1.110) for “s (or for
bi
d
— will follow easily as a corollary to this theorem.
As in the case of the Laplace transform, the Fourier transform also algebraizes dif-
ferential operators with constant coefficients, for example, F {d? f /dx?} = —\? F())
with the conditions of lim f(x) =Oand lim f'(x) = 0. This can be shown
L—+=ECO xr—> +00
after two repeated integrations by parts and by assuming that lim f (ey = Wand
T—=+rCo
I Seseglll
Re sme CAG
das 7 Jag de, pe Of i . ~ ~ira of
Ficah= fe a2 = ° =| tafe din’
df
Fie} = iF (A): (1.111)
Now we state the corresponding result to (1.111) in the Fourier space,
{-iaf(a)} are
Ue leU =dF= (1.112)
or
dF
Fi= {5
ee
\
ae
ix f(x). (1.113)
Just as was done formally in (1.72) for the similar Laplace transform pair (1.71), this
result is obtained by differentiating the Fourier integral
MO ag he afede,
5dF =pd fore Jae =ftelKe f(a)dr (1.114)
esyeeaeee = F{-irf(2)}.
This step of interchanging differentiation, with respect to A, with integration is
justified if the middle integral above resulting from such interchange is uniformly
convergent to allow such an operation.
(2) = 3 Rln)sinne
9 co
T
(1.116)
68 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
We note here that the finite sine transform in (1.115) is related to the Fourier sine
coefficients b,, of the Fourier sine series in (1.116) as F(n) = ae. while the finite
cosine transform (1.117) is related to the Fourier cosine coefficients Qn, of the Fourier
cosine series in (1.118), as F.(n) = Fan: Tiel acon
Since it is our main purpose here to bring examples of integral equations, we can
also look at the finite-limit Fourier sine and cosine transforms (1.115), (1.117) as
nonsingular Fredholm integral equations of the first kind in f(a), as compared to the
singular equations of the (infinite-limit) Fourier sine and cosine transforms in (1.95),
(1.98) with their infinite limit of integration on (0,00). The other point noticed is
that the nonsingular Fredholm equation of the first kind of the finite sine transform
(1.115), for example, is solvable for f(x), 0 < a < 7, in (1.116) as an infinite series
in terms of the discrete values of the given function F’,(n), while the singular integral
equation of the Fourier sine transform (1.95) is solvable (for f(z), 0 < x < oo) as
an infinite integral in terms of the continuous values of the given function F(A) in
(1.96). The same analogy can be drawn for the finite cosine transform (1.118) versus
the infinite-limit one in (1.98). The singular property of the integral equation and
the continuum of A values are important characteristics of such (not so easy to treat)
equations.
The following finite exponential Fourier transform F'(n) can also be defined in a
similar way, where its inverse f(a) is expressed as an infinite (Fourier) series, and
similar conclusions regarding the discreteness of \ are reached.
We should mention that the above finite transforms are also used in an operational
way, similar to the Laplace and Fourier transforms, for algebraizing derivatives
defined on the finite domain. For example the Fourier sine and cosine transform
algebraize even order derivatives,
2
[ sin nat de = n{f(0) — (-1)"f(7)} —n?F,(n), (L121)
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 69
[f°
p cosna Ta
La Er= (-1)"f'(r) mw) —— f f'(0 ssi
(0) —n*Fi(n). 122
(P4122)
These can be derived by simple twice integration by parts. More importantly, that
we will need their above algebraization properties in Section 4.1 for modeling the
potential distribution in a
charged square plate as Fredholm integral equation in two
dimensions (see Exercise 24 of Section 4.1).
2
We may note that the finite Fourier sine transform of a in (1.121) requires the
values of the function f (0) and f(z) at the two ends of the interval (0, 7), while the
finite cosine transform requires the derivatives f'(0) and f(z) at the end points of
(Ona):
The finite exponential transform also has the same operational property for al-
gebraizing all order derivatives of f(x) on the finite interval of its definition, for
example
So for any integral transform no matter how effective it is, at the end we have to solve
for its integral equation that is associated with finding its inverse. For an integral
transform to be useful, there must be a balance between the difficulty in finding
its inverse and its special properties in efficiently solving differential, integral or
integro-differential equations among others.
70 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
Here”? we introduce a number of other familiar integral (or finite) transforms, for the
main purpose of pointing out to that finding the inverse of any of these transforms,
is a very clear example of solving an, often, singular Fredholm integral equation
of the first kind in the transformed function f(a). Of course, this is besides their
use, in parallel to Laplace and Fourier transforms as shown in (1.127) for the case
of Hankel transform, and for facilitating the mathematical modeling and solution of
some integral equations. We will not cover the latter here, but we will revisit it briefly
in Section 2.6.2 to illustrate its use in the integral representation of the electrified disc
problem. The detailed modeling and solution of this problem is done in Example |
of Appendix A.
1
EOS TS Ga he finlAr) is(rar (1.124)
where J,,(x) is the Bessel function of the first kind of order n, which is (the bounded
solution at z = 0) of the two solutions of the Bessel differential equation,
A d ; :
Ea ae ie + (2? — n”)u =0, (1.125)
j i 09 (—1)*(2)n+?s
n(z) = » ek! Geille (1.126)
(=0
In a similar operational way to that of the Laplace and Fourier transforms, it can be
shown, using (a rather lengthy!) integration by parts”! (with appropriate boundary
conditions), that the Hankel transform algebraizes the Bessel differential equation
with variable coefficients (or its following variable coefficient operator), that is,
.
Hn 4§f + bts }= —\*F,,(X). (1.127)
dr? sordr_ r?
With such an advantage, it remains to transform back F;,(A) in the transform space
to the original function f(r) in the physical r-space. This would mean solving the
singular integral equation (1.124) in f(r) with the Bessel function kernel, which,
fortunately, has been established as the following inverse Hankel transform,
20 Optional
*! See Jerri [1992, pp. 16-18, Example 1.6].
1.4 LAPLACE, FOURIER, AND OTHER TRANSFORMS 71
where we note a perfect symmetry between the Hankel transform (1.124) and its
inverse (1.128). We may mention that the derivation of the inverse Hankel transform is
done with the help of the Fourier transform of functions (er a in two dimensions
with circular symmetry, i.e., f(z,y) = f(\/z2+y?) = f(r). This leads us to
the simple extension of the Fourier exponential transform of pees of multiple
variables as we did in (1.91) and (1.92) for the Fourier transform in three dimensions.
The double Fourier transform of f(z, y)
can be seen as an example of higher (two) dimensional integral equation in f(z, y).
Its inverse, as the solution to such a two-dimensional (singular) integral equation of
the first kind, is
hs i a) ee :
For, r2)} = f(x,y) = =
Ae tA) EY
eet *\29 HN, , Ap) d\rdAo
(1.130)
which can be obtained from the Fourier integral formula (1.93) with a rather simple
extension to two dimensions. Of course, the higher dimensional Fourier transforms
would be used for solving partial differential equations by, usually, algebraizing their
derivatives with respect to the spatial variables.
Next we present the Hilbert and Mellin transforms, for the main purpose of
showing that finding their inverses is a matter of solving singular Fredholm equations
of the first kind.
FO)=H{f}=—P
= =—pP f
1
=e
a (ada
(1.131)
erst
SSCA
i relim oho)
[o:———dz + Saha)
LO al (12132)
eh MES e404 A —xX ear
A— co
in case there is a value \e(—A, A) for which the integrand in (1.131) above becomes
infinite. When both limits on the right side of (1.132) exist as € > 0, the integral is
convergent and we drop the P to use f instead of P J.
The inverse of this Hilbert transform F'(A) is
We may mention that this inverse Hilbert transform, as a solution to the singular
Fredholm integral equation of the first kind (1.131) in f(x), can be derived with the
help of the Fourier integral formula.”*
=P f a = sin 2d (E.1)
7 =ie,9}
A =
The general method of solving such singular integral equation would involve “heavy”
use of complex variables. So, here, and in parallel to what we do for the inverse
Laplace transform in the sophomore course on differential equations, we appeal to
the available tables of the Hilbert transforms to find the pair
1 au bad
H {cos br} = =P [ pe ain ONS (E.2)
TOV Soak ea eX
Hence a simple comparison of (E.1) and (E.2) shows that (E.1) is a special case of
(E.2) with b = 2, whence the solution of (E.1) is u(a~) = — cos 22.
which can be seen as the solution to the singular integral equation (1.134). Here in
(1.135), as it was the case for the Laplace inversion formula (1.65), 7 is taken to be
larger than Real {z;}, the real part of any singularity 2; of F(z) inside the integral of
(1.135) (where z = x + iy).
In Exercise 19 we will present the convolution product and its associated Mellin
transform convolution theorem,
Se Cave
Mifi * fo}=M fill) fe ee Fy (A) F2(A) (1.136)
0
where F(A) and F(A) are the Mellin transforms of f,(x) and f(z), respectively.
Then this theorem is used, in parallel to the Laplace transform, for solving special
class of integral equations of the first kind, that are in the form of such convolution
product.
Exercises 1.4
1. Find the Laplace transform of the following functions (see Table 1.1).
(d) x— [e — t)?u(t)dt
0
(e) i Ce oe u(0) = 0
0 dt
Hint: See (1.84) and (1.68).
(f) xsinz
2. Find the inverse Laplace transform f(z) = £~!{F(s)} of the following func-
tions F’(s).
1
(a)
s—(A+1)
G(s)
(b) s—(A4+1)
1
(c)
(s —3)?+5
1 1 1
(f) V/sF(s)
Hint: Let \/sF(s) = s[F(s)//s] = sH(s) and use (1.68) and the result
of part (e) for h(t) = £~'{H(s)}, noting that h(0) = 0 in part (e).
3. (a) Show that , § > ais the Laplace transform of the following two
s-
functions.
@ fi) =e"
and
s Cemit< 9, 40F toss
(ii) f2(t) 24 . poe
74 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
(b) Use the information in part (a) to conclude the nonunique nature of the
solution f(t) of the (singular) integral equation of the first kind,
pe ibeen f(i)dt.
Sta 0
1
.
OLE(a) Findene testa
the inverse Laplace transform of
CieF'(s) = —————.
maar,
1
Hint: Use the Laplace convolution theorem (1.84) with F\(s) = = and
=H
F(s) = = where f; (x) = 1, and" fo(x)#= ¢ remembering
Jaa
the shifting property for F2(s).
(b) Use the following identity,
r(y)rd —v) =
Il
to show that T° (5) =i) i
. (a) Find the Laplace transform of Abel’s integral equation (a Volterra equation
of the first kind with difference kernel).
if [ Feueoae
1
by two methods.
(Oj 0 (E.2)
u/(0)=1 (E.3)
is equivalent to the Volterra integral equation
is a solution to both the initial value problem (E.1)—-(E.3) and its equivalent
Volterra integral equation (E.4).
Hint: You may substitute directly, or use the Laplace transform to solve
the initial value problem (E.1)-(E.3); or the convolution theorem (1.84)
to solve the integral equation (E.4).
Fourier Transforms
9. (a) Prove the following shifting properties of the Fourier transform,
(i) Shifting in the physical x-space
(b) Most of the functions we deal with are assumed to be real-valued functions
f(x) on (—00, 00). When f(x) is complex-valued, prove the following
result, which we need for the use of the Parseval equality in (1.106),
F{f(-2)}=FQ),
where f(x) stands for the complex conjugate of f(z), i.e., where each
= /—1in f(z) is replaced by —i.
Hint: Take the complex conjugate of the Fourier transform of f(—), and
note that the complex conjugation operation is distributive over addition
and multiplication, ie., fi + fo = fi + fo and fifo =fi fe.
10. Use the (Fourier-type) convolution Theorem 7 as in (1.101), which can be
written as :
1
Be _ RAV yan= fo fila — t)fo(t)dt (E.1)
to derive the very important equality for Fourier analysis, namely Parseval’s
equality in (1.107).
Hint: Let x = 0 in (E.1). Then consider the special case of fo(t) = fi(—t) m
(E.1) with the use of the result in part (b) of Exercise 9 to have F{ f,(—t)}=
FAfoo) eee)
IWbe Consider f(x), —co < z < o.
(a) Show that the exponential Fourier transform F(A) of f(a) reduces to a
Fourier sine transform like that of (1.95) when f(a) is an odd function
fo(z).
Hint: In the integral of (1.87) write e~** = cos Ax — i sinAz, then
recognize that the integrand fo(x) cos Ax is an odd function where its
integral on (—oo, oo) vanishes.
(b) Show that the exponential Fourier transform F(A) of the even function
f(x) reduces to a Fourier cosine transform like that of (1.98).
Hint: See the hint for part (a), where in the present case f,(x) sin Azx is
an odd function, thus its integral on (—oo, 00) vanishes.
12: Consider the function of two variables u(z, y) and the Laplacian of this function
Ou Oru
Vu=— + —.
72 oF Dy? :
(£.1)
Show that the double Fourier transform, as given in (1.129), of this Laplacian
of u(x,y) is the following algebraic form in U(Aj,2) the double Fourier
transform of u(z, y), i.e
0? :
F (2) ‘Fat = —ATU(A1, Az).
For the second term in (E.2) do the same starting with the Fourier transform of
07 u(z, y)
as a function of y.
Oy?
13. (a) Show that f(a) in its Fourier sine series representation (1.116) is, indeed, a
solution of the (nonsingular) Fredholm integral equation of the first kind
(12415),in fz).
Hint: Substitute the series (1.116) of f(a) inside the integral of (1.115),
interchange the operations of summation and integration, then use tne
orthogonality property of {sinmz}°°_, on the interval (0, 7), ie.,
m Oem
‘i sinmazsinndz = ¢ 7 (E£.1)
0 x n=m™m.
- On in ae
i cosmzcosnz dr = 7 n= Tse 0 (E.2)
: kt, v) n=m=0
14. Consider F(A) and F(A) as, respectively, the Laplace and Fourier trans-
forms of the causal function f(x), i-e., f(z) = 0 forxz < 0.
Show that these transforms are related as
Frp(A) = Fc(iA).
78 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
Hint: Note that for the Fourier transform of f(x), its integral on (—oo, 0) is
zero since the causal function f(a) vanishes identically there.
Ii (a) Find the Fourier exponential transform of the (singular) Fredholm integral
equation with difference kernel assuming that the solution does exist.
—co
16. Solve the following (singular) Fredholm integral equation with difference ker-
nel
—e5)
Hint: Note that the (improper) integral is in the Fourier convolution product
form, where the Fourier convolution Theorem 7 as in (1.101) can be used to
algebraize the integration operation. Also remember that
F {el} = (E.2)
18. Solve the following (singular) integral equation in (2),
o@)—pf - SPS
t)dt
= nla), (E.1)
where p(x) is the gate function
pat) eg (£2)
ford —akr
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 79
Hint: Note that the (improper) integral is in the Fourier convolution product
form (1.101), and recall the two Fourier pairs
—alx B 2a
Fie } @ +X ee)
and
2 sina
Other Transforms
Hint: Use (1.134) and let ax = z, then appeal to the definition of the
gamma function as given in (1.74).
(b) The Mellin transform-type convolution product of f;(x) and fo(x), 0 <
x < oo; is defined as
ee [e etuo? (B.2)
to show that you will have an algebraic equation in U(A).
In the preceding section we introduced the Laplace, Fourier, and other integral
transforms and illustrated their suitability for solving only special cases of integral
equations. In particular, the Laplace and Fourier transforms are compatible with the
80 Chapter 1_ INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
following Volterra and Fredholm (singular) integral equations with difference kernels
K(x — t), respectively:
Hence, as expected, the methods we introduce in Chapters 3 and 5 for solving the
Volterra integral equations and (mostly nonsingular) Fredholm integral equations
will depend on the type of equation, in particular its kernel. But in general these and
other special analytical methods may fail; then we must resort to other approximate
or numerical methods of solution. The approximate method of solution involves
approximating the integral equation by another with a known solution which can be
made close to the exact solution of the original problem. We must stress that all
the methods being it analytic, approximate or numerical must be preceded by some
assurance of the “existence” of the sought (unknown) solution, in general. Also,
if possible, we shall have some idea about the stability of the solution for integral
equations of the first kind in particular, a topic that we shall touch upon briefly in
Section 5.4. Such existence theorems for the variety of integral equations covered
here, including the singular ones and those of the first kind, are usually stated most
precisely in a general abstract mathematical setting, which requires more of the
abstract analysis preparation than assumed for this book. However, we will attempt
to give a good intuitive explanation and as precise statements as possible for such an
important topic of the existence of the solution to a given integral equation. A brief
presentation of this “theoretical” topic is given in the (optional) Chapter 6.
In this section we will review the very basic numerical integration formulas. They
will be used for the numerical setting of Volterra integral equations in Section 3.3, and
Fredholm integral equations in Section 5.5. These formulas include the trapezoidal
rule, Simpson’s rule, and the midpoint formula. For the level of this elementary book,
we present the higher quadrature rules [see (1.140)] only for the interested reader,
thus we decided to have them covered in a new (optional) Chapter 7 along with
their necessary tables. There we will give a good number of illustrations for the use
of these higher quadrature rules for more accurate numerical integration, and their
use in the numerical setting and solving linear Volterra as well as Fredholm integral
equations. This is done to support the numerical methods of Sections 3.3 and 5.5,
where only the basic rules of this section are used.
Numerical methods of solutions for integral equations approximate the integral in-
volved. For example, the integral ie f (x)dz is approximated by a finite sum
b n
/ f(a)de © Sp(x) = So f(a) Ae (1.139)
@ 17=0
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 81
wheve usually the sample values f(x;) are equally spaced with the increment
DANE py JANG 3 oe a for n equal increments of the interval (a,b). In general A;x
may be variable, but usually for the very basic formulas of elementary numeri-
cal methods of integrations, to be discussed soon, the increment is taken as equal
/SNay oe However, and depending on the particular formula, the ordinates f(z;)
in the approximate sum above may be given a weight that is indicated by D, (instead
of just A;x) to be written for (1.139) as D; f(z;) for fixed Ar = pmo instead of the
simplest version f(x;)A;2,
b n
Such weights D;, (or quadratures) are equivalent to approximating the function f(z)
on the subinterval A; by a simple curve. Such a curve is a simple straight line for
the trapezoidal rule and a parabola for Simpson’s rule to be discussed next. Other
quadratures, where higher degree polynomials or other functions are used, are also
available in the literature and are used in books on numerical methods of solving
integral equations, which we shall present and illustrate in our detailed discussion
of the numerical solution of Volterra and Fredholm integral equations in Sections
7.2 and 7.3, respectively. Numerical analysis is the subject that deals with such an
approximation in the most accurate and efficient way. However, for our purpose in
this section we shall be satisfied with the very basic formulas used for the numerical
integration above. So, we will first present the most familiar formulas of numerical
integration: the trapezoidal rule, Simpson’s rule, and the midpoint formula. In the
next section we will illustrate primarily the use of the trapezoidal rule for evaluating
integrals. We will also give an exercise to illustrate the use of Simpson’s rule. In
Sections 3.3 and 5.5 (also in Sections 7.2 and 7.3) we will show how the linear
Volterra and Fredholm integral equations are reduced, respectively, to a triangular
and a square system of n + 1 linear algebraic equation in the n + 1 unknowns of the
solution (approximate!) samples u(a;), i = 0,1,2,---,n. We should, actually, use
another symbol t(;) to indicate the solution of the linear system as an approximation
to the samples u(x;) of the solution of the corresponding integral equation. But, for
simplicity, and in accordance with the usual notation, we will adhere to u(x;) without
much fear of confusion!
The basic formulas we present here to approximate the integral ifyf(x)dx on
the interval (a,b) by a partial sum use n equal increments, Ax = (b— a)/n. It
is clear from Figure 1.10 and (1.139) and (1.140) that the form of the particular
summation formula will depend on how the increment of area A;A under the curve
is approximated. Such an area has constant width Az = (b — a)/n and hence, its
value will depend on how we approximate the height or the value of the function
f (x) on the subinterval (x;_1, x;), as illustrated in Figure 1.10a. This choice, as we
shall see next with the following basic integration formulas of the trapezoidal rule
and the Simpson rule, will translate in terms of the weights D; of (1.140).
a) The trapezoidal rule approximates the function on the subinterval (2;_1, 7;) by
a straight line passing through the points (x;-1, f(ai—1)) and (a;, f(a;)), and uses
82 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
(a)
(x)
f (xi)
f(x;-\)
(c) (d)
Fig. 1.10 The trapezoidal, Simpson’s, and the midpoint rule for numerical approximation
of integrals.
the area of a trapezoid with height (1/2)[f(z:-1) +f(x;)] as shown in Figure 1.10b
to approximate the area A; A, which gives
. =
~ —* sf(0) + f(r) + faa) +
iif(z)dx
1 (1.141)
tt EA Dalid ameectee(Dna)ar af (an) .
If we compare this formula with (1.140) we note that the weights Do, D,, Do,---,
D,y-1, Dn given to the ordinates f(zo0), f(x1),---, f(Zn) are, respectively, 5 11.
-++,1,5 multiples of Az = =.
We may mention here that the higher quadrature rules, of numerical integration,
that we shall present in Section 7.1 for the very interested reader, are with more
elaborate (or fancier!) weights than the above simple ones of halves and ones.
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 83
Hence they require their own tables, which we supply in Section 7.1 along with the
discussion and illustration of each particular rule.
For completeness, the error E-r(f), involved in the above trapezoidal rule (1.141)
for approximating the integral if Po) Gars
1 1 (b—a)3
IEr(f)| < 5 W(b-a)M = = = M (1.143)
where h = =, and M is the maximum value of |f’(x)| on [a, 8].
b) Simpson’s rule approximates the function on (x;-1,2;41) by a parabola
that passes through three points—the left point (z;_1, f(a;-1)), the middle point
(xi, f(a;)), and the right point (2;41, f(2i41)) as shown in Figure 1.10c and which
results in
2 b-—a
Hslf = / f(z)dx — = [f (ro) + 4f (21) + 2f (x2) + 4f (x3) + 2f (4)
*4The reader is advised to consult detailed references like Delves and Mohammed [1988], and Baker and
Miller [1977] (also Kondo [1991]).
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 85
to the student for the first time, with their varied aspects including their numerical
approximations, but without necessarily requiring much of abstract analysis. For
this purpose, and to clarify our illustrations in Sections 3.3 and 5.5 for the numerical
solutions of Volterra and Fredholm equations, respectively, we will stay with the
above trapezoidal and Simpson’s rule. For particular problems, especially in Section
5.5 on numerical methods of Fredholm integral equations, it is tempting to draw upon
higher order quadrature rules but keeping in mind the raathematical level of this book,
we have relegated this discussion to Section 7.3 for the very interested reader. We shall
also back this treatment with a clear reference to the specific detailed source of such
analysis. Sections 3.3 and 5.5 will be devoted to the numerical solution of Volterra
and Fredholm integral equations respectively, where only the above basic numerical
integration rules are used. In Section 7.1 we will present the higher order quadrature
of integration with a number of illustration for numerically evaluating integrals. This
is followed by using these rules in Sections 7.2 and 7.3 for the numerical solution of
Volterra and Fredholm integral equations, respectively. In Section 7.1 we supply the
very necessary tables of the above higher quadrature formulas that are needed for our
illustrations. A specific reference is given there for the more detailed tables
b
(UG) sap (e) +f K (a, t)u(t)dt (1.148)
Here, as we mentioned above, the points t; are equally spaced, but can be chosen
at one’s convenience, and as it may be required by the chosen quadrature formula
beyond the two simple ones discussed above. Also At may stand for the usual equal
increment, but in general, the index 7 in A;t may indicate a weight D; assigned to
the ordinates K (x, t;)u(t,;) (of the integrand) by the particular numerical integration
formula used as we discussed for (1.140) and illustrated for the trapezoidal rule
(1.141) and Simpson’s rule (1.144). In this general sense of different weights D; for
the n + 1 values u(t;) in the sum of (1.149), we rewrite it as
In the future, and in particular with integral equations of the first kind,
—/2gf(y) = * uit
o(n)dn
pass! 1.20
OTN el ( )
has its solution found, via the use of Laplace transform, in Example 8 of Section 3.2
as $(a) in (3.41) in terms of a derivative of an integral of the known function f(z)
weighed by ==,
oa)=-Ef 0a (3.41)
Example 16
To give a simple example, let us consider the very special case of the integral
equation of the first kind with K(z,t)=1,0<2;0<t<za,
Fig. 1.11 The roof function h(z) of (E.2) and its discontinuous derivative 44
» of (E.5).
: Clits)
Now a simple look at = in Figure 1.11, and we see the uncovered difficulty due
to this differentiation, namely, that the derivative of h(x) does not exist at rz= a.
Instead, and in contrast the continuous h(z), oe is continuous only on (0,a) and
(a, 2a), and it has a clear jump discontinuity of size 2 at r = a,
dh ie 0 <a"a
= { ; (E.5)
dx lin Ort Tila:
The matter is even more serious when h(z) is given as data, where, of course, it
is within the eee of the measurement of the me So, if we are, in principle,
after u(x) = az!ae we must approximate ae by 24 eee one) But this
computation for Be
22 will compound the final error in
1 vig we Start with the
“aa,
inaccurate data of h(z).
to interpolate between the N discrete values {u(z;)}, that result in the continuous
approximation &(z) to the approximated function u(z).
88 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
The Lagrange interpolation formula is used for not necessarily equidistant sam-
ples {f(2;)}%, of f(x), we will use f(a), instead of f(x), for the (approximate)
interpolated function to distinguish it from the exact f (Gs
my, (x — 24)
eae
TIN, (x; — zi)
ey.
(1.154)
(We note here that the factors (x — x;) and (x; — xj) are missing, respectively, in
the numerator and denominator of 1;(a), which is indicated in the product notation
by 2 # 7 to say excluding the 7th factor in both products.)
An interpolation formula should first give us the sampling points, thus requires
from (1.54) that
tee)
itm) = |es (1.55)
which is clearly the case. This is so because form # j we havea factor (Im —®m) =
0 in the numerator but no such factor in the denominator, and all other factors there
are nonzero. In the case of |;(x;) all factors in both numerator and denominator are
the same, and without the factor (2; — x;), since it is missing in both places, and the
resultas.l;(a7) 11.
We note that it is with difference kernel and the integral is in the form of the Laplace
convolution product, where we can use the Laplace transform to solve it, similar to
what we illustrated in Example 10 of Section 1.4. The exact solution can be easily
obtained as u(x) = sinx using the Laplace transform, and can be quickly verified,
after a simple integration by parts for le t sin tdt (and remembering that x in x — t
of the integral is considered as constant, since the integration is done with respect to
t.)
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 89
In Example 9 of Section 3.3, we use only four increments on the interval (0, 4] to try
to find the five numerically approximated values of the solution &(x;), 7 = 1,2,3,4
and 5 at the indicated locations x = 0, 1,3, 4 and 4 as shown in Table 1.3.
Table 1.3 Numerical and Exact Solutions of Volterra Integral Equation (E.1) of
Example 17
Xx 0 1 2 3 4
Numerical value of u(z) Ome I 0 -1
Exact value ofu(z) =sinz O 0.8415 0.9093 0.1411 -0.7568
The table also includes the corresponding exact values u(z;) = sinz;,j = 1,2,3,4,
and 5. Figure 1.12 illustrates the comparison between these exact and approximate
five values of the solution to (E.1). Note that we graphed the exact values, then
purposely connected them with (an exact) graph as a solid line. This is done because
we know that the solution is u(x) = sin z forall > 0. However, for the approximate
numerical values &(x;) we only graphed, what we are sure of, as the approximated
five values. So, it is left for some interpolation formula to use these few values and
fill between them to give an idea of a continuous approximate solution u(r). Here
we appeal to the Lagrange interpolation formula (1.153) and (1.154) to do this job.
U(x) x--Numerical
o— Exact
x
Fig. 1.12 Numerical and exact solutions of Volterra equation (E.1) of Example 17. (Also
Example 9 and Table 3.1 in Section 3.3.)
To use the Lagrange interpolation of (1.153) and (1.154), we first prepare L;(x),
4 = 1325-75 of (1.154)
90 Chapter 1. INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
penx(xt@a— 1)(a
Wea Be)
— 2)(x — atton ee stare teeS ie
This interpolated approximate solution t(a) is computed and shown in Figure 1.13,
where it connects the five numerical sample values ti(xz;), j = 1, 2,3, 4,5, and u(z)
is compared with the exact solution u(x) = sina. Samples of the comparison are
u(0.5) = 0.4794, &(0.5) = 0.5859, u(2.5) = 0.5985, u(2.5) = 0.5859, u(3.4) =
—0.2555, u(3.4) = —0.4896.
Next we illustrate the use of the Lagrange interpolation formula for two approxi-
mate numerical solutions of a Fredholm integral equation of the second kind.
1
u(@) = sing + / (1 — xcos xt)u(t)dt. (£.1)
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 91
Fig. 1.13 The interpolated approximate solution %(z) of (E.3) and the exact solution u(x) =
sin z of (E.1) in Example 17.
In this Example 20, we consider only three (approximate values i(x;), 7 = 1, 2,3 at
x; = 0, 2 = 0.5 and x3 = 1.0. The result is a 3 x 3 system of algebraic equations
in u(z;), 7 = 1,2, 3, since we have these three values for the input &(t;), 7 = 1,2,3
inside the approximating sum of the integral, for each u(z;), 7 = 1, 2,3 of the output
u(x) of (E.1).
The solution of this 3 x 3 system of algebraic equations is the subject of Exercise
2(a) and 2(b) of Section 5.5, where the trapezoidal rule and the Simpson’s rule are
used, respectively, for approximating the integral in (E.1). These results are shown
in Table 1.4, and are to be compared with the exact solution u(z) = 1,0 < x < lof
(E.1). For such square system of algebraic equations, we leave it (in Section 5.5) for
the preparation of the reader to deal with the solution using matrix methods. For the
most basic method, we present a review of the Cramer’s rule in the next section.
Now we will use the Lagrange interpolation formula (1.153) and (1.154) for both
sets of the three samples, then compare with the exact solution u(x) = 1.
From Example 20 and Exercise 2 of Section 5.5, we have in Table 1.4 the two sets
of approximate sample values w(z;) of the solution to (E.1),
We first prepare 1;(x), l2(x) and I3(x) of (1.154) to be used in the Lagrange
interpolation formula (1.153), then we use the two sets of data to find their respective
interpolations.
i = Ct)
ae
-2 (2-5)
2
@-0
Ho (CC oe
lg = (0.5)(0.5 —1) = —4zr (x = 1) (E£.2)
x(x — 0.5)
If we use the approximate samples values of Exercise l(a) in Section 5.5 and the
above functions ; (a) in (1.153) we have
92 Chapter 1 INTEGRAL EQUATIONS, ORIGIN, AND BASIC TOOLS
Table 1.4 Numerical Solutions of Fredholm Equation (E.1) of Example 18 (as given in the
answer to Exercise 2(a), (b) of Section 5.5), using a) the Trapezoidal Rule and b) the Simpson’s
Rule
which interpolates similar to (E.3). An example are the few values &(0.3) = 0.9999,
u(0.7) = 0.9999, «(0.9) = 0.9997.
As we mentioned earlier, the numerical methods of solving Volterra and Fredholm
integral equations will require be the subjects of Sections 3.3 and 5.5, respectively.
There we will illustrate the numerical setting with the aim at a numerical solution
for these equations using the simplest numerical integration formulas, namely, the
trapezoidal rule (1.141) and Simpson’s rule (1.144). In Chapter 7 we will follow
such treatment by concentrating on the use of the higher order quadrature rules. In
particular, the Newton-Cotes rules and the Maclaurin rule will be used for Volterra
equations in Section 7.2, while the Gauss quadrature rules are used for Fredholm
equations in Section 7.3. The Gauss quadratures will also be consulted and illustrated
for their use in finding an approximate solution of one type of singular Fredholm
integral equations, namely, those whose singularity is due to (only) the limit (or
limits) of integration being infinite.
1.5 BASIC NUMERICAL INTEGRATION FORMULAS 93
As we mentioned in the last section, the resulting square system of linear algebraic
equations, for numerically approximating the Fredholm integral equation, will require
some basic knowledge of matrix analysis. We present here a review of the very basic
such needed computations, namely, Cramer’s rule for solving N x N system of linear
algebraic equations.
Consider the square N x N system of linear equations in 21, £2,---,2N,
Gj eee et
G21 +" (Gentz be
l : :
(1.158)
|A|
In =
[A] = $0(-1)?**ai3|
Mis (1.160)
e—"
Exercises 1.5
[ dx
Ohl
(E.1)
Hint:
(E.2)
(b) Use the trapezoidal rule with n = 4 to approximate the integral in (E.1).
(c) Estimate the error in the trapezoidal rule approximation of the integral in
(EY):
Hint: For the maximum value M of |E4| of f(x) = z42 on (0, 1) to be used
for the error estimate in (1.47), we must find ae = ee in the search for
1 dx
[ io (E.1)
| jaa
S adxr
using
(a) the trapezoidal rule.
(b) Simpson’s rule.
(c) Find the exact value of the integral, and compare it with the two approximate
values in parts (a) and (b).
(ht) 7 e..
_~—~ a
iZ Si i= 2 .
_ i) 5 Siete)“4 Gh 1 if i —ekoscuaiingn
pote “T= Glut- Wake
eae Pe.) oo
(1s i ‘er. 7
oe
a ne wa ie ee
> aS
— _
24 homies | ib mol lot Gm Sabla
4 @ rey j - 4? Py |& — > — als doidve At a bi
MOTE 01) Via orn LaaeetnOa0 & qe beerea
_— —~ 69S aed! a> pana (ie20 ovtegfine
cha me ——
i 7 -
- , jae et]ta nbd af
4 > om a (ie CrOri),
y 6S wo ) Camas ee ou aatg
- at we nag ie GB 00gvs
ssh ileal, 4
a eal, PE ag atin # neeeprsd Ate
S=O-1ME iol SESE 1 me svy shen Seiad ass yl bere ie
- wT ; . “tol [is dear 3
PNP) ote, = — eet Wi) ne eS Tee Bhp manta —_ 4
7 (x Sir iGas od (96e a ‘ he eae a f
! anyy Bsiiny : (4 — a ohh \.43.-
2-16 eer
¢ * re oe
- = 7 a a) -, '
| : a yi f = - _ dim S “iy
<
Sette ce
(4 Ooi mae —_1 STF ae.
— ak mie In| a s RAPE
< .(ote ie
| tee eee ai
<< =~ atende Sanam
>> sek heey
ra te; Oohae
- oya
: = ei ew 1
oe st a
~— ra oe
Modeling of Problems as
Integral Equations
u(z) = 4!
dz
(2.2)
relates the state u(z) to its instantaneous rate of change du/dz, and hence the relation
to its values only in the very immediate neighborhood. Indeed, the formulation
in terms of derivatives is the essence of the mathematical modeling for the basic
natural law of causality which made differential equations the important subject
it is today. In comparison, we say that the importance of integral equations stems
97
98 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
As mentioned earlier the study of population growth includes the forecasting of any
future surge in birthrates, which is of great importance for future planning throughout
the world. In this section we formulate the problem of human population growth, the
problem of the surge in birthrates, and the problem of two biological species living
together.
Now if this process is repeated for all the m subintervals of the time interval (0, t),
we obtain the partial sum
as the number of people added through new births which, if passed to the limit (as
m —> oo), becomes the integral
, dn.
It is reasonable now to assume that the rate of birthrate r(t) = are is proportional to
n(t), the number of the population present at time t,
Consider the two species with number n,(t) and n2(t), respectively, present at time
t that we presented in (1.13) and (1.14) of Section 1.1. If the two species are left
separate, it is reasonable to assume that their rate of change dn/dt is proportional to
n(t), the number found at time ¢,
dn
riage an(t). (2.9)
d
= 2 Oy te Say (2.11)
Now we are to formulate the state of equilibrium of these two species, when put
together, with the assumption that the second species (a predator) will feed on the
first (the prey). This will, of course, affect the rate of change of both species; the
first (prey) will have a slower rate of growth, while the second (predator) will have a
slower rate of decline.
To formulate this situation in terms of a reasonable mathematical model, we start
with the model of the separate species (2.10) and (2.11). We then attempt to modify
it in two steps to allow for the new situation, where we must introduce factors to
decrease k, in (2.10) and increase —kg in (2.11). To start with kj, the rate of increase
of the first species (prey), it is reasonable to assume that its decrease is proportional
to n2(t), the number of the second species (predator) present. Hence k, should be
modified to kj.
ki = ky — Y1N2(t) (212)
where 7; is a proportionality constant which depends on the first species. The actual
decrease in ky is due not only to the presence (feeding) of the second species n(t) at
the present time ¢ but also to all previous presences (feedings) of n2(7) for the whole
time interval t —- Ty < rT < t, where To is the finite heredity duration of both species.
If in addition to the present 7, factor we have the record of its rate of decrease as
fi (7) in previous times t — Ty < 7 < t, then the decrease in k, at time t, due to the
decrease in the time interval Ar, is —f\(t — T)n2(r)Ar. Here we used fi (t — 7),
since a species at the present time t had a chance to resist n2(7) at time t — 7 and
hence a factor f;(t — 7). The total decrease in k, in the whole time interval Ty is
2.1 POPULATION DYNAMICS 101
t
Ok, = fi (t = T)N2 (r)dr. (2:13)
t—To
If we combine this previous decrease (2.13) with the present decrease —y,n2(t) [as
in (2.12)], we obtain
t
Kesf = ky = ¥1N2(t) = bs fi (t = T)n2(r)dr (2.14)
t—To
Again it is not the present good feeding alone which causes the increase in —k», but
also all previous feedings that depended on the presence of the first species n; (rT).
Hence in the time interval A7, there is an increase of f2(t — 7)ni(7)Ar and the total
increase in the same period To is
t
—koesf = —ke + yor (t) + fo(t — T)n4(7)dr. (251%)
227,
Now we modify k, in (2.10) to its effective reduced value (2.14) and —kz in (2.11)
to its effective increased value (2.17), to obtain the model for the equilibrium state of
the two species living together:
t
on = ny(t) i — yN2(t) — fi(t - r)na(r)dr] hankipStO (2.18)
t tT
t
dn2 = no(t) | + y2n1(t) + fo(t - r)na (r)dr] reoks 0. (2:19)
dt 27
Exercises 2.1
1. Consider the integral equation (2.8) in n(t); the number of the human popula-
tion at time f.
(a) Let N(s) and F'(s) be the Laplace transform of n(t) and f(t), respectively.
Find the Laplace transform of (2.8) and hence find N(s).
(b) Assume an exponential-type survival function f(t) = e~“, c > k > 0.
Find N(s), then solve for n(t) of (2.8) as the inverse Laplace transform
of N(s).
2. Consider the problem of radioactive decay where the rate of decrease of the
number of atoms is proportional to the number of atoms n(t) present at time t.
Hint: See (2.9)-(2.11).
(a) Write the differential equation and its initial condition assuming that no is
the initial number of atoms present at time t = 0, and remembering that
the proportionality constant for the decay being a negative number —k,
Ket:
(b) Reduce the initial value problem in part (a) to an integral equation.
(c) Solve the initial value problem in part (a) for n(t).
(d) Verify that the solution of the initial value problem in part (c) is also a
solution of the integral equation in part (b).
3. Consider the human population problem (1.8), and assume that the survival
function is described roughly by the exponential function f(t) = e~ 7, where
T is the average life span of a typical person.
(a) Write the resulting integral equation and find its Laplace transform and
hence N(s) = L{n(t)}.
(b) Use the inverse Laplace transform to find the solution n(t), the population
at time f.
(c) In (2.7) we assumed that the birthrate is proportional to the number present
in the population,
ar = knit) (2.7)
where we can take k as the rate of population variation per capita. Use
the result in part (b) to show that
(i) The population increases in an exponential fashion when T > 1/k
(i.e., when the average life span of the typical person is larger than
the reciprocal of the per capita rate of change of the population).
(ii) The population decreases in an exponential fashion when T < 1/k.
4. Consider the problem of birthrate and the possibility of asurge in the birthrate,
which we considered in (1.9) of Section 1.1.
EXERCISES 2.1 103
(a) Let us assume that there are initially b) women, and that they will give
birth to a female child at a rate h(t) per year. Find their contribution to
the female birthrate at time t.
(b) To find the birthrate b(t) at time ¢, we must add to this birthrate of part (a)
the contribution to birthrates of girls born at time 7 > 0 when they are at
age 7 in the range of childbearing age a < rT < (. Girls bornat time t —7
will at future time ¢ belong to the birthrate b(t — 7). If the probability
of a girl living to age 7 is p(7) and the probability of the girl at this age
giving birth to a female child in a time interval Av is m(r)Ar7, find the
contribution to the birthrate b(t) from women in the subinterval Ar of
the range of childbearing age 7 [born at t — 7 with birth rate b(t — r)].
(c) Find the contribution to birthrate b(t) at time ¢ from all women in the
childbearing age rangea <T < £.
Hint: Pass to the limit for the sum in part (b) to become an integral.
(d) Find the expression for the total birthrate b(t) that includes the contribution
of the women present at the initial time t = 0.
. Consider the problem of the surge in birthrate of problem 4, where its birthrate
b(t) is governed by the integral equation
B
b(t) = boh(t) +f b(t — T)p(r)m(r)dr. (E£.1)
The integral above represents the contribution to birthrate of girls born at time
t > 0 when they are at the childbearing age T, a < T < £3.
Use the following detailed steps in parts (a) and (b) to show that the above inte-
gral equation can be expressed as the following Volterra-type integral equation
with difference kernel
(b) For t > £ note that since we are taking the origin of time as the birth of
the oldest childbearing woman, then boh(t) = 0 fort > G in (E.1), since
this term now takes care of birthrates to women at age t > 3, which is
outside the childbearing range (a, 3). With boh(t) = 0, we now rewrite
(E.1) as
Qa B :
b(t), == i b(t — r)p(r)m(r)dr + / b(t — Tr)p(r)m(7)dr
0 t a
The problem of finding the rate dr /dt at which equipment should be replaced, to keep
a specified number f(t) in operating condition at any time t, is formulated similar to
that of the human population problem of Section 2.1.1. We first assume that we have
s(t), the function that determines the number of pieces of new equipment bought at
t = O that survives to time t. If we start with f (0) as the number of new pieces bought
at time t = 0, then, due to loss or wear, only the fraction f (0)s(t) will survive to time
t. To keep a specified number larger than f(0)s(t) at time ¢ we must continuously
add equipment at the desired rate from time t = 0 to time t. If the desired rate of
replacement at which we must add new equipment at time 7 is dr(r) /dr, then at time
t this equipment will be of age t— 7 with a survival function s(t — T) that is dependent
EXERCISES 2.2 105
: dr
on their age t — r. From (F) Ar, what we replace in the time interval Ar, only
a fraction s(t — r)(dr/dt)Ar will survive to time t. Hence, if these survivals of the
continuous replacements are added along the time interval (0, t), we obtain
t
He) = / s(t — ila t>.0 (2.20)
0 dt
the number of pieces of equipment surviving to time t, which were purchased as
replacements during the time 0 < 7 < t. If we add this to f(0)s(t), the surviving
number of pieces of original equipment (new at time t = 0), we obtain the (desired)
total number of pieces of equipment in operating condition at time t,
t d
7) —fO)s@) +f s(t — 1) dr, (2:21)
0 dr
which is a Volterra integral equation of the first kind in the unknown rate of replace-
dr
eS ae
Exercises 2.2
(a) Use the Laplace transform to solve for the necessary rate of replacement
dr. :
—., in order that we keep a constant number of machines f(t) = A at all
times ft.
(b) Verify your answer.
2. Consider the problem (1.16) of the deviation ¢(t) of the steering angle 6,(t)
of the rotating shaft from a constant direction indicator angle 0;(t) = 1,¢ > 0.
Only a correction torque proportional to the deviation ¢(t) and another one
proportional to the rate of change of the deviation are applied. The rotating
shaft starts from rest at a zero angle 6,(0) = 0, 64 (0) = 0.
(b) Find the Laplace transform for the equation in part (a).
(c) Solve for the deviation ¢(t) and hence for 6,(t) of the rotating shaft angle.
For simplicity let J = 1,6 = 2,anda = 1.
(d) Use the same method and conditions above to solve for the deviation $(t)
for the complete problem (1.16) when the accumulation (integral) torque
correction term is included. For simplicity let J = c = 1 anda = 6b =3.
106 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
3. Electric potential in a disc. The electric potential u(r, @) at a point (r, @) inside
a disc of radius a, which is free of charge and where the boundary of the disc
(r = a) is kept at a potential u(a,@) = f(8), is given by the Poisson integral
ee ey f(g)de
(== |) ar a ee
(a) State a problem that makes the above an integral equation in f(¢).
Hint: Be cautious about lim u(r, @). For more details, see the derivation
of this Poisson formula in (4.69) at the end of Section 4.1 in Chapter 4.
(b) Show that the potential f(@) on the boundary must be distributed in such
a way that its average is equal to the value of the potential u(0, 4) at the
center of the disc. ‘
1 b
Hint: ean i}f (x)dz is the average of f(z) on the interval (a, b).
Here g(x) is the number of emerging neutrons on the other side of the used
slab of uniform thickness x, and a(£) is the absorption cross section of the
material of the slab as it appears to the incoming neutrons of energy E.
(a) Consider now a uniform bar of length b with source of neutrons f(y),
0 < y < b to be determined. It is assumed, for a simple model, that the
neutrons move only in two directions — right and left — along the rod
with constant absorption cross section a. With the input as f(y), assume
that we can measure the output at position x as h(a). (Of course h(z) is
a set of measurements for finite number of location points x.) Show that
f(y) satisfies the following Fredholm integral equation of the first kind
In this section we formulate problems dealing with the shape of the hanging chain
and Abel’s problem.
Here we consider the problem of how a variable density p(x) must be distributed in
the form of a chain or a rope, in order that it may assume a given shape f(z).
First we consider an elastic string under an initial constant tension Jo, and a
vertical force F' acting at one point. Then we derive the equation for the case of
distributed forces along the string, for example, the variable gravitational force due
to a variable linear density of the string.
(a) Displacement Due to a Single Vertical Force: Consider the string AB of length
l in Figure 2.2 under initial constant tension Jo. (Recall that in Figure 2.2 we take
y(x) to be positive in the downward direction of gravity, and that the force of the
point mass m is its weight w = mg.) Let F' be a constant vertical force acting on the
string at x = € to displace it by a small vertical
distance y(£) which is very small compared to €. If we equate the vertical forces,
assuming that the tension is constant (Zo) along the string, we have
Fig. 2.2 Displacement due toa single vertical force F' at €. From Jerri [1982, 1986], courtesy
of COMAP, Inc.
AUS a ee é
y(x) = FG(z,f) =F ies wohiqy (2.26)
as E<2sl.
It is important to note the two branches of the function G(z, €) (2.26) where the
first branch satisfies the boundary condition y(0) = 0, for the first end of the elastic
string at x = 0 to be fixed; while the second branch satisfies the boundary condition
y(1) = 0 for a fixed second end at x = I. This is a very familiar occurrence when
finding the integral representation of boundary value problem as Fredholm integral
equations. We will see a function similar to G(x, y) of (2.26) appearing as the kernel
in the integral equation to satisfy, the already incorporated, boundary conditions.
2.3. MECHANICS PROBLEMS 109
Example 1
We illustrate here how the simple case of constant density p(£) = c determines
the expected (parabolic) shape for the string.
If we use (2.28) for y(x), (2.26) for G(x, €), and let p(€) = c, we obtain
cherie ae
Aap / cil aaa, i eae. (B.1)
We note here how the second and first branches of (2.26) are used for the first and
second integrals of (E.1), respectively. Evaluating the two integrals in (E.1) gives
110 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
dy dyds ,
eae share —V/29(yo — y) sina,
—dy
Fe een eee 2.30
29(yo — y) sina ae
at — —_ Pw ay
29(yo — y) a
and integrate from the initial time of descent t(yo) = f (yo) to the final time t(y =
0) =0.
yt, = — ° _ bly)dy
ne vo V29(yo — y)’
° g(y)dy
O= t(yo) = —f(yo) = (2.32)
yo V2g9(Yo — ¥)
Hence (2.32) is the final integral equation in ¢(y) that relates the form of the path
$(y) to the predetermined time of descent f (yo) of the particle,
Leg ae (1.20)
We note that taking the final time t(y = . = 0, we are making a negative initial
time t(yo) = f(yo) < 0.
Example 2
For illustration we will consider the simple case of finding the path in a vertical
plane along which the particle must move from rest at y = Yo so that it reaches the
ground in (the usual free-falling body) time
anes (E.1)
: 9g
where we expect a vertical path for the fall, i.e., a = 90° in Figure 2.3.
Let us note that this is a very special case with t = f(yo) = —.\/2yo/g, which
can be solved by using the simple laws of motion since yo = 1/2gt?, t = /2yo/g.
If we substitute t = —,/2yo/g for f(yo) in (2.33), noting that the final time is ¢ = 0,
which necessitates a negative initial time, we have
pl ae i u()dg
o va-€
Hence we may use ¢(y) = 2/2 in (E.2) or d(y) = 1 = 1/(sina). This gives
a = 90°, which says that the path is vertical.
Exercises 2.3
1. Let the deflection of an elastic string of length / at point x; due to a unit force
(load) at x be K (xj, 22).
(a) Give the equation that represents the total deflection D;2(z) at a point
x due to a load L,(x), applied at the middle of the elastic string, and
another load L2(x), applied at 2 = 2.
(b) Give the equation that represents the total deflection D(a) due to a contin-
uous load L(x) = p(x), as a result of the string’s variable linear density
p(z).
2. Use (2.27) and (2.28) to find the approximate shape of the string when two thin
beads with a constant density of 1 and length 1/20 and //12 are placed along
(1/5,1/4) and (21/3, 31/4) of the string, respectively.”
Hint: Use the weight of the bead as the force at the bead’s center of gravity.
3. Determine y(x), the shape of the string in (2.28), when the linear density is
given by
Pa) ex (la):
4. Use the Laplace transform to solve for the path (y) in Abel’s integral equation
(2.33), to verify that the path is vertical.
15 |(4)
a =—mg(yo — y)
d
and note that ~ = Siete Ny
y dy
The very detailed solution of this problem in five pages is found in “The Student’s Solution Manual" to
accompany this book [Jerri, 1999]. See the end of the preface for more information.
2.4 INITIAL VALUE PROBLEMS REDUCED TO VOLTERRA INTEGRAL EQUATIONS 113
er ee (X:; Yo)
6. Consider the Tautochrone problem (1.21) that was presented in Section 1.1.
Use conservation of energy,
Le dae | 1. Sao
where m is the mass and g the gravity acceleration, to drive the integral equation
(1.21)
201
Pama)
we (1.21)
Hi Q WHO
= Y Q
where s = Fy), noting that
(3)=)
Hint: Note that t = 0 tot = T correspond to y = yo to y = 0, respectively,
2
e < 0 that requires a minus sign, and gs =,4/1+ ee
dt a em dy dy} —
To illustrate in detail how an initial value problem associated with a linear differential
equation and, usually, homogeneous initial conditions reduces to a Volterra integral
equation, we consider the following example, which we have already presented in
(1.29) and (1.32). This will be followed by the initial value problem associated with
the general second-order differential equation.
114 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
Example 3 :
d’y .
= F(e) (B.1)
and integrate once with respect to x to obtain
=f
dy
—=
=
F Pd +e : (E.2)
E.2
ze
Ca eo atjiF(t)dtdé + cu + c2. (E.3)
0 Jo
To reduce the double integral of (E.3) to a single integral, we use the identity (1.51)
x € x x
/ ifP(tydtag = f (@- F(a = | (Goomede (1.51)
in (E.3) to give
y'(0) =0=0+«a, ep
= 0:
az
dy
= F(t) = ule) + 9(@). HE)
If we substitute this value for F(x) in (E.5), we obtain
CY 4 y = cose (E.1)
y(0) =0 (E.2)
y'(0) =0 (E.3)
to a Volterra integral equation in: (a) u(x) = d?y/dz? and (b) y(z).
First we let u(x) = d?y/dz? then integrate once to have
HH
gy = i.u(t)dt + cy (E.4)
dx 0
with c; = 0 after using the initial condition (E.3). We integrate this result again,
using the identity (1.51) for the double integration to have
0
For the final result (E.6) to be an integral equation, we have two choices:
(a) To make this result as an integral equation in u(x). In this case we use (E.1)
to have y(x) = cosa — d*y/dx? = cosx — u(z) to substitute for the y(x)
term outside the integral for (E.6) to become a Volterra integral equation of the
second kind in u(x) = d?y/dx?
(b) To have (E.6) as an integral equation in y(x), we substitute for u(t) inside the
integral of (E.6) in terms ofy(t) via (E.1), where u(t) = y’’(t) = —y(t)+cost,
Exercises 2.4
dy
ee — COS.
ar
y(0)=0, y(0)=-1
(of Exercise 2 in Section 1.3) to the Volterra integral equation in y(x),
at +y=0 (E.1)
y(0) =0 (E.2)
y'(0) =1 (E.3)
(a) to Volterra integral equation in u(x) = d?y/dz?.
(b) to Volterra integral equation in y(z).
Hint: (a) let u(x) = d?y/dzx?, integrate it twice, using the identity (1.51)
and the initial conditions (E.3) and (E.2), then substitute for y(x) outside in
terms of d?y/dx? = u(x) from (E.1). (b) In the integral for y(z) in part (a),
substitute for u(x) = d*y/dz? in terms of y(x) from (E.1).
3. Reduce the initial value problem
dy
sue = —si
—=—
dy
ap ee + e"y
i = 2, (E.1)
Eu
y(0) = 1, (E.2)
y'(0) = -1 (E.3)
dy
d*yie
pan
| tA
dy
peed = Oe ae> 0 Vel
(E£.1)
y(0) =1 (E.2)
y'(0) =0 (E.3)
fake
to Volterra integral equation in u(x) = —
118 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
a? ee me
Hint Leva) s= a integrate it once using the initial condition (E.3) to
obtain , then integrate again using the identity (1.51) and the initial condition
fe
d :
(E.2) to obtain y(x). Last substitute these = and y(a), in (E.1), with their
d*y
resulting dependence (inside their corresponding integrals) on u(x) = ae
Example 5
SEE
dx2 — i] ’ OMEE<AU (1333)
©
y(a) =0 (1.34)
y(b) =0 (1.35)
We proceed to integrate (1.33), in the same manner as that followed in Example
4, which gives
teal
BE
= , y (t)dt + Cj. (E.1)
5
For simplicity, we leave the variable of the final integration in (E.2) as t instead of €.
To evaluate the arbitrary constants c; and cp, we employ the boundary condition
(1.34),
y(a) =0=0+4+cja+ ce, co = —c,a (E£.3)
and the boundary condition (1.35),
b
YD) es af (b— t)y(t)dt + c1b — qua. (E.4)
4For a more general boundary value problem, see the first edition of this book, p. 66.
2.5 BOUNDARY VALUE PROBLEMS REDUCED TO FREDHOLM EQUATIONS 119
and
So, if we use these values of c; and c2 in (E.2), the final integral representation of
the boundary value problem (1.33)—(1.35) is
This integral equation in y(x) can now be rearranged to result in the form of the
Fredholm integral equation (1.36) with its kernel K(x, t) defined in (1.37). This is
done by writing the second integral as two parts on the intervals [a, x] and [2, 6],
where the first part will then be combined with the first integral of (E.7),
es SO), ee Pe
K(at)=4 ie Dinaaictves (E.9)
b-a * tee
then the last two integrals in (E.8) can be combined as
Ky b
»/ K (a, t)y(t)dt + ae K (a, t)y(t)dt = rf K (a, t)y(t)dt. (E£.10)
Hence (E.8), and in turn (E.7), reduce to the homogeneous Fredholm integral equation
KGS
abn ue Bee (1.37), (2.37)
se eres ins gu
We want to stress again the equivalence of the homogeneous boundary value
problem (1.33)-(1.35) with the homogeneous Fredholm integral equation (1.36)
and its kernel in (1.37), since we may sometimes resort to solving the boundary
120 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
value problem to obtain the solutions for its equivalent homogeneous Fredholm
equation. As we mentioned at the end of Example 6 and in Exercise 5 of Section
1.3, this will require us to differentiate the integral equation in order to find its
corresponding differential equation (with its boundary conditions), which, hopefully,
is easier to solve. This development is illustrated in detail in the following Example
6. The need for such development will become evident when we study the methods
of solving nonhomogeneous Fredholm integral equations in Chapter 5, where the
solutions of the homogeneous equation are essential for the development. A list of
homogeneous boundary value problems with their equivalent homogeneous Fredholm
integral equations and, of course, their respective kernels (Green’s functions) is given
in Appendix B.
As we remarked on the function G(z, €) of (2.26) for the hanging chain, we can
again see very clearly the appearance of this kernel K(x, t) in (1.37) with its first
and second branches that satisfy the two boundary conditions at the two boundary
points z = a and x = b, respectively. This K(x, t) is the Green’s function of the
boundary value problem (1.33)—(1.35) that, effectively, reduced it to the Fredholm
integral equation in (1.37). The Green’s function is the subject of Chapter 4.
Here we have two ways of doing the problem. The first is to recognize the kernel
(E.2) as a special case of (E.9) or (2.37) in Example 5, with a = 0 and b = 1,
and hence the integral equation is a special case of (2.37) which is equivalent to the
boundary value problem (1.33)—(1.35), with a = 0, b = 1,
say
ee
dx2
Ss
hy(x),
y
Oy,
) O<2<1x (E.3 a )
and note how we used the second branch of K (z, t) in the first integral of (E.5) on
the interval (0, 2) and the first branch in the second integral of (E.5) on the interval
(x, 1). Next we must realize that both integrals in (E.5) have variable limits and that
their integrands are functions of 7; hence, in general, we should use the Leibnitz rule
(1.53) in differentiating them. However, in the special case at hand, we can factor
the x dependence out of the integrals
Now each term is a product of two functions of x and we can use the fundamental
theorem of calculus on the integrals. If we differentiate (E.6) once, we obtain
ts) i= mA ft
ty(t)dt
+ \(1 — x)ry(z)eh (1 — t)y(t)dt— Ax(1 — x)y(z)
Hence the (nontrivial) solutions to the boundary value problems (E.9)—-(E.11) are
which are also the nontrivial solutions (eigenfunctions) of the Fredholm integral
equation (E.1) and (E.2). The subject of eigenfunctions and eigenvalues is very
important for the development of the solution to Fredholm integral equations of the
second kind, especially with regard to the existence of such solutions as we shall
discuss and illustrate in Section 5.1.2. The preliminaries of the eigenfunctions as
solutions to Sturm-Liouville problem,> and their use as the orthogonal functions for
the Fourier series expansion is done in Section 4.1.2. Such expansion facilitates the
representation of the Green’s function, which we shall use in Section 4.2 to reduce a
boundary value problem to a Fredholm integral equation, which is (the more direct)
equivalent way to what we are doing in this section. For other boundary problems of
interest with their eigenfunctions and eigenvalues, see Appendix B.
For now, the set of the nontrivial solutions of the homogeneous problem (E.9)—
(E.11) {sin na} are called the characteristic functions or eigenfunctions and vn» =
n?n are the characteristic values or eigenvalues of the kernel K(a,t) of the
(equivalent) homogeneous Fredholm integral equation (E. 1) and (E.2).
As mentioned before, we shall see in Chapter 5 that the solution of a nonhomo-
geneous Fredholm integral equation will, as expected, depend on the solutions of
its associated homogeneous equation. The solutions of the homogeneous equation
are the classical solutions of the homogeneous case of the boundary value problem
(1.33)—(1.35). Example 6 is a special case, and for quick reference we tabulate in Ap-
pendix B a number of familiar homogeneous boundary value problems, their Green’s
functions, and their corresponding homogeneous Fredholm integral equations with
the Green’s function as their kernel. The verification of these results is the subject of
a number of examples and exercises in this chapter and in Chapters 4 and 5.
In this section, our illustrations covered only homogeneous differential equations.
The nonhomogeneous differential equations case should follow easily, as was done in
part (b) of Example 4 for the initial value problem and its resulting nonhomogeneous
Volterra integral equation. (Also, see Example 8 of Section 4.2 and many of the
exercises in Sections 4.1 and 4.2.)
Exercises 2.5
dy
dg? ~ AY(2), Ocir<sob
y(0) = 0,
y(b) =0
to a Fredholm integral equation.
5 : ‘ : : ae
A very important general boundary value problem, whose solutions, with some regularity conditions, are
orthogonal functions.
EXERCISES 2.5 123
(c) Show that on the two subintervals a < 2 < t andt < zx < b, the kernel
K (a, t) satisfies the differential equation 07K /0x? = 0, also K(z, t)
satisfies the boundary conditions by vanishing at the end points x = a
and x = b.
il
u(2)=A f K(e,t)u(tde, ay
0
st
a, 0<2r<
sin
K(z,t) = sinh
t sinh(x — 1)
—a ee ee
ibn
sinh1
to a boundary value problem.
Hint: See Example 6.
(b) Solve the resulting boundary value problem to find the characteristic func-
tions (eigenfunctions) and the characteristic values (eigenvalues) of the
kernel in (E.2) of part (a).
(c) Verify that the solutions in part (b) satisfy the integral equation in part (a).
=o (1— £)?(3
—# -22f), r<€<1
is equivalent to the boundary value problem associated with the displacement
u(x) of a vibrating beam.
124 Chapter 2 MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
We consider here the potential distribution u(x, y) in the half plane (2 > 0), due
to the presence of a plate of width 2a (see Fig. 2.5) placed along the y axis with
center at the origin and extending in the z direction. The plate is kept at a potential
u(0,y) = g(y), -a < y < aand where the rest of the yz plane is assumed to be
insulated (i.e., Ou(0, y)/Ox = 0, |y| > a).
The potential distribution in free space here is independent of z, thus as u(z, y) ii
is governed by the following Laplace equation in two dimensions
Sali LOPATI
De? wage hae a> Os —0o <y < © (2.38)
Ae 9)
(2.41)
2.6 MIXED BOUNDARY CONDITIONS: DUAL INTEGRAL EQUATIONS 125
and Fourier-transform the Laplace equation (2.38), using the Fourier transform pair
in (1.110) on the second partial derivative with respect to y in (2.38), to obtain
d’?U
ey, RUE i=0; (2.42)
To find the arbitrary function A(A) in (2.44), we must have a condition on U(z, A)
at z = 0, which should come naturally from the Fourier transform of the condition
on u(x, y) at z = O through (2.41). But our mixed condition (2.39)—(2.40) at x = 0
is not suitable for the required substitution in (2.41), since it is given as a function
for |y| < a and as a derivative of the function for |y| > a. Instead, we leave A(A) in
(2.44) for the moment and we proceed to find u(z, y), the solution of our problem,
as the inverse Fourier transform (1.88) of (2.44),
Now, we can apply the mixed boundary conditions (2.39) and (2.40) on u(x, y) above
to obtain
u(0,u)
=9) =51 [Ae
ee a
"ad,
Sin be
a
55(0,9)=0=
0U
— =0 5;—1 |es —|A|A(A)e’*4
PAM id dy,
As we have seen in the problem above, the mixed boundary condition (2.39) and
(2.40) for the potential distribution in two dimensions u(x, y) was reduced to dual
integral equations (2.46) and (2.47). Here we consider the same type of mixed
boundary condition for the potential distribution in three-dimensions. This is due
to a given potential on a disk of unit radius in the zy plane, with center at the
origin and with the rest of the zy plane being completely insulated (Fig. 2.6). This
problem also fits the steady-state temperature distribution in three-dimensional space
due to a given temperature on the unit disk and where the rest of the xy plane is
completely insulated. To describe such boundary conditions, it is important that we
use cylindrical coordinates (r, #, z) for the potential distribution u(r, 6, z), and hence
we write the Laplace equation in cylindrical coordinates,
iu 10n,10%"
Or?
Ou_y
or Or r2 002 @z2
=
(2.48)
The special case we consider here is when the potential on the disk is cylindrically
symmetric and hence the potential should be independent of the angle u(r, 6, z) =
u(r, z); so the Laplace equation (2.48) in u(r, z) becomes
In the very special case of constant potential up on the unit disk with the rest of the
ry piane being insulated (see. Fig. 2.6), this mixed boundary condition becomes
Ou
a7" O20; I< t < Co. (2.51)
We mention again that the boundary value problem (2.49)-(2.51) represents two
other physical problems: that of steady-state temperature distribution due to a given
constant temperature on the unit disk with the rest of the zy plane completely
insulated, and the steady irrotational flow of a perfect fluid through a circular aperture
in a rigid wall. In comparison with (2.38), where we used the Fourier transform to
algebraize its partial derivative with respect to y, and reduce it to the ordinary
differential equation (2.42), equation (2.49) needs the Hankel transform to algebraize
the differentiation with respect to r. The subject of the Hankel transform was
presented very briefly in Section 1.4.3, and its use will be illustrated in Appendix
A, where the problem of the electrified disk is discussed and the corresponding dual
integral equations are derived as (E.8) and (E.9) in Example | (of Appendix A),
where a final solution u(r, z) is found in (E.11) of the example.
Exercises 2.6
1. Use the following two integrals involving the Bessel function Jo (Ar):
We»
[FP
RANT aoraa =F,
2 O<r<l
128 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
(b) Develop the same problem in u(r, z) for a circular operature of radius a.
In Section 1.4.2 we indicated that the Fourier transform can be extended to three-
dimensions, where we gave such Fourier transforms pair in (1.91) and (1.92) (with the
notation used in physics texts). We mentioned there that we shall need such extension
to model the Schrédinger (partial differential) equation in the three-dimensional
physical space as a Fredholm integral equation in the three-dimensional momentum
space. Of course, we are also after the Fourier transform’s most important operational
property, namely, that it algebraizes (linear) derivatives with constant coefficients,
which generalizes to the three-dimensional case as we shall see in the derivation of
(2.60). Next we present the detailed analysis of using the three-dimensional Fourier
transform to give the integral representation of Schrédinger equation as a Fredholm
integral equation in the three-dimensional momentum space.
2.7 INTEGRAL EQUATIONS IN HIGHER DIMENSIONS 129
—1
(F}=f(@)= =i. ert
Fads (2.57)
where we notice more symmetry in distributing the multiplicative constant —— for
V20
the Fourier transform and its inverse. Also to use this with our above notation for
the three-dimensional Fourier transform, we use \ = 1A, + jA2 + kA3 for the wave
7More on the higher dimensional Fourier transform can be found in Jerri [1992] and Sneddon [1972].
2.7 INTEGRAL EQUATIONS IN HIGHER DIMENSIONS 131
number vector instead of the k so as not to confuse the latter with the unit vector & in
i ix, + jx + kv,
Fy ihe Oe
= ae (2.59)
If we now let W(X) = F (3) {(r)} as the wave function in the momentum )-space,
and apply the three-dimensional Fourier transform to the Schrédinger equation (2.54),
we have
3
oH oe oe THO ge
itll
)2
dx, dx2dx3
af3
)2
oH) 06 ) u(re™ "dx', dxydx',dzx,dr2dz3.
For the left-hand side we employ a simple extension of the Fourier transform impor-
tant property in (1.110) to three-dimensions to algebraize the three partial differenti-
ation terms to —(A? + A? + A23)¥Q) = —)?W(X), and where the fourth term a?2))(7)
will simply be transformed to a? W(X),
Z 1 CS ie VO a aces
(A? + a?) ¥(X) = a ae u(F,rp(Aje*".--
-- dr drydxr,dx,dx2dz3. (2.60)
To have an integral equation in W(A) we must write (1) inside the above integral as
an inverse Fourier transform of w(A),
Now we can see the six-dimensional integration over 7’ and7? as the three-dimensional
Fourier transform V(X , A) of the interaction energy v(7, F) (as a function of the two
spatial vectors 7’ and 7), and where ize \) stands for the interaction energy in the
momentum space,
132 Chapter 2. MODELING OF PROBLEMS AS INTEGRAL EQUATIONS
+ & 1 me se hc:
VOX) = cae 6 ie UE niesBee "dx,nant a
dxr,dx,dx,dx2dz3.
(2.62)
With this definition of V(X, ), the equation (2.61) becomes the desired (homo-
geneous) Fredholm integral equation in the wave function W(X) of the momentum
space
2
oY = dy(2) + 9(c) (1.29)
! Anderson and DeHoog [1980]. See Bocher [1914] for Volterra’s collected work in 1884-1896.
133
134 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
y(0) =1 (1.30)
y'(0) =0 (1.31)
is reduced to Volterra integral equation of the second kind
0
with the nonhomogeneous term
2This (successive substitution) method of C. Neumann came about 30 years later after the work of Abel
and Liouville in 1823-1839.
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 135
and interchange the integrals in (3.12), keeping in mind the change in the limits of
integration as illustrated in Figure 1.8, we obtain
136 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
Fig. 1.8 Domains for performing the integration in (1.51) with respect to t first (solid lines) or
with respect to € first (dashed lines).
Just as the K (x, t) in (3.11) is taken as Ky(z, t) to give u; (zx), the inside integral
in (3.13a) defines K2(z, t), the iterated kernel,
The final solution is then (formally) obtained from u(z) in (3.5) with uo(x) = f(z)
as in (3.7), wi (x) in (3.11), w2(z) in (3.13b) and so on for up (x) as in (3.16),
which is what was sought in (3.2) and (3.3) to constitute the solution to the Volterra
integral equation (3.1) via the present method of the iterated kernels (3.15) for
constructing the resolvent kernel ['(a, €; A) in (3.3).
Of course, as we mentioned above, these are only the formal steps of the method,
which lack the mathematical justification for the convergence of the resolvent kernel
infinite series (3.3). This includes proving the general case of the iterated kernel
Kn41(2, t) in (3.15). The first part concerning the form of K,41(, t) in (3.15) can
be accomplished by mathematical induction, which we leave for an exercise. Before
offering the major part of a proof of the main result, for u(x) of (3.1) and (3.2) to be
the unique solution of (3.1), we shall give a precise statement for it as the following
Theorem 1, then illustrate the above method with Example |. This is followed by a
very similar iterated kernels method that we will pursue to prove Theorem | in detail.
Last we show that u(z) in (3.2) and (3.3) or (3.17) does indeed satisfy the Volterra
equation (3.1) to qualify as its solution.
Theorem 1 “The Volterra integral equation of the second kind (3.1) in u(z),
Example 1 Find the resolvent kernel to solve the Volterra integral equation of the
second kind
Here we have
eg (eet i= (at tee ae (E.2)
So if we use K(x,€)= e~§ and K,(£,t) = K(E,t) = e&~$ in (3.4), we obtain
t t
2) 2 wiry 2
Se F 5 t -1(e-1)| = (x ; ) ett
Ct)
Kn+i(2,t) = eas 5 (E.5)
Hence from (3.3) and (E.5) the resolvent kernel for (E.1) is
=e EEiG=ye ee re UE
= ef teMe-t) — p(1+d)(2-#)
(E.6)
after realizing that the series in brackets is the Maclaurin series of e4(*-*),
So from (3.2) and the resolvent kernel (E.6), the solution to (E.1) is
It is not often that the series representation of I(x, t;) will converge to an
expression in closed form [such as e('+*)(*—) of (E.6)]. We presented such a
special case to illustrate the basic method. In practice, we may have to evaluate
numerically a finite number of terms of the Neumann series (3.3), which gives only
an approximation of the resolvent kernel ['(z, t; A).
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 139
An Iterative Approach
Another very similar method that depends on the above iterated kernels (3.4) and
generates the same resolvent kernel (3.3) is discussed next. The advantage seems to
be its transparency for the need of proving its convergence and that it is an iterative
process from the first step. If we look at the left side u(z) of (3.1) as the output of
the integral equation while u(€) inside the integral as the input, the method would
take the whole right-hand side of (3.1) with its two terrns, which is the output, and
use it as an input inside the integral, i.e., it starts as an iterative process to give
after using the definition of the iterated kernel K(x, t) as it was generated in (3.4).
The difference between this method and the above former one, associated with the
infinite series (3.5) as a starting point, is that in (3.18) we still see clearly the unknown
function u(t) involved in the integral of the third term in (3.18). If we input this u(z),
of the right-hand side of (3.18), inside the integral of (3.1) again, we easily obtain
remembering again the definition of the iterated kernels K2(z,t) and K3(z,t) of
(3.4). If this process is repeated n times we have
ewe is
” Ka(a,t) f(t)dt +" il
ima enyuldi (3.20)
where the iterated kernel K,,+41(z, t) is defined in the same way as in (3.4), and where
we can still see the unknown function u(t) involved inside the above last integral of
(3.20), which clearly hinders this expression from being a solution to (3.1). The way
to show (3.20) becoming a solution is to show that such a series converges as n — 00
(see Exercise 10 for the detailed steps of the proof). This, of course, means that the
n + lst term, as the integral involving u(t), will vanish as an (obvious) necessary
condition of the convergence of the series. Hence we have the solution
140 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
[o)
x S
#) +) i K(2,€) Lr +A / rieanstoat dé
x x is
z ré
i i K(w, OU(E,t;A)f (tdtde
= 3 f“F(t il"K(a,
OU(Et Aas] dt
ax fgof emo dr wala]
f=
(3.23)
af i0d yn ilaK(0,8)Knya(E, a]
[ ne Knete) f(E)dé
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 141
= f(o) +a |“P(e,6)f(Edé
which is the same expression of (3.21) that we started with as a (nominated) solution
inside the integral on the right-hand side of the Volterra equation of the second kind
(al):
Another very well known method of solving the Volterra integral equation of the
second kind (3.1) is to start with substituting a zeroth approximation u(x) in the
integral [of (3.1) with A = 1] to obtain a first approximation wu (z),
Then this wu; (x) is substituted again for u(z) in the integral of (3.1) to obtain a second
approximation u2(z),
x
Un(x) in (3.25) will converge to the solution u(z) of (3.1). We state this result as the
following Theorem 2, which we shall follow immediately with a detailed illustrative
Example 2.
Theorem 2 “If for the Volterra integral equation of the second kind (3.1) we have
f(x) continuous for 0 < x < a and K(z,t) is also continuous in the triangle
0<a<a,0<t< z, then the successive approximations sequence up (x) in (3.25)
converges to the solution u(z) of 3.1."
We first note that the above Theorem 2 applies to this problem, since f(x) = a is
continuous for any z, and the kernel K (x,t) = —(a — t) is also continuous in x and
t for all their values.
We may remark here that there is always an advantage in making a reasonable
zeroth approximation, a matter that becomes clearer after solving a number of prob-
lems. In this case we may start with wo(t) = 0 in the integral of (E.1) to obtain wu, (x)
according to (3.25),
2) 3 z
Zz
0 Z 3] \o
ae
= ge
3!
Now
=e eres al
go gy? pia 8H augsh Sugk
age aiden | acme BE a aga ae ee
(4 i) + (5 =) Os 54 a RG
Crees Sip his
Sopa eS Mati. «Pe ge ee oe
6 120 el
(E.4)
From (E.2), (E.3), and (E.4) it looks clear now that if we continue this process, we
obtain the n + 1st approximation u,y41(x) as
3.1 VOLTERRA EQUATIONS OF THE SECOND KIND 143
zr r - gent
which is obviously the nth partial sum of the Maclaurin series of sin x,
= gent
sing = ) (—1)”———_. (£6)
ae (2n + 1)!
Hence the solution to (E.1) is
As in Example 1, we remark again that it is not very often that a general sequence
Un(zx) will converge to such a simple function as sin z, and we may have to resort
to approximate numerical methods for evaluating the resulting partial sum (E.5) or
in general its integral representation (3.25). We leave it as an exercise to verify that
sin z is a solution to (E.1) by performing the direct integration (Exercise 8).
When the kernel K(x,£) depends only on the difference x — €, it is termed the
difference kernel, K(x,&) = K(a — €). The following Volterra equation of the
second kind with difference kernel K(x — &),
iEK(a — €)u(€)dé = K * u.
Hence, as we presented and illustrated in Section 1.4.1, the Volterra equation with
difference kernel (3.26) lends itself to the Laplace method of solution. So if we
Laplace-transform (3.26), letting U(s), F(s), and K(s) be the Laplace transform of
u(x), f(x), and K(z), respectively, and realize from the convolution theorem (1.84)
that L{K * u} = KU, we obtain
The solution u(z) of (3.26) is then the inverse Laplace transform of U(s) in (3.28),
Bee F(s)
which can be evaluated with the aid of Laplace transform pairs (Table 1.1), as we
illustrate in the following example.
144 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
at 2)
id ay {F() +A;ean) }
= f(z) + AL- ee
If we use the convolution theorem (1.84) on the last term in (E.3) with L{eQ+))7} =
1/[s — (A + 1)] from (1.80) (or the second entry in Table 1.1) we obtain
which is the same result (E.7) of Example 1, where the solution was obtained by
using the resolvent kernel-Neumann series method.
Lis? 1
Oeics groomer ey eS)
and if we use the Laplace transform pair C{sin ax} = ae of (1.81), we obtain
a
Integro-differential equations
When the function u(x) in an equation is involved in a differentiation as well as
an integration operations, the equation is called an integro-differential equation. For
example,
du Ors
AED =e = / e
2(a—t) ae
du (E£.1)
u(0) 20 (E.2)
u'(0) = 0. (E.3)
We apply the Laplace transform on (E.1) using (1.69)
#?U(s)
2 —su(0) ——w'(0)
ms a) = —51 - —p[sU(s)-w(0).
Sa
1
= i (B86)
E.
Now we use the initial conditions (E.2) and (E.3) in (E.6) to have
146 Chapter 3. VOLTERRA INTEGRAL EQUATIONS
nl sU(s)
SU (8) scien Gann (E.7)
ee, Bis arias
WAC eas era VP? (SBP oie
after using partial fractions for the last line. Hence the solution to (E.1)—(E.3) is the
inverse Laplace transform of (E.7),
oe
1 i
/2 A) pe es Ney ee ete
1
se
eae {ap +3}
= re* —e* + 1
where for the first term we used (1.80) [or (1.71)].
Exercises 3.1
1. Find the resolvent kernel I'(z, t; ) to solve the Volterra equation of the second
kind
u(x) = f(x) + af u(t)dt.
0
Hint: The kernel here is K(z,t) = 1.
2. Find the resolvent kernel to solve the Volterra integral equation of the second
kind
u(x) = g(x) + » [oe — t)u(t)dt.
3. Use the Laplace transform method to solve Exercises 1 and 2. Hint: For
Exercise | note that ps u(t)dt is (a very special case) of the Laplace convo-
lution product type with f;(¢) = 1, fo(t) = u(t); you can also use the pair
H
1
ct f fe) t= 5f (8) in the fourteenth entry of Table 1.1. For Exercise 2,
0
. oe ; A
note that sinh z = —isiniz,ie., £{sinh Ar} = zoe
x 2
d*u oe du
—, —2— + u(z) =cosa-2 | cos(xz
— t)ae}, sin(x — t) dt
0
u(0) =0, it(Oy =O!
. Verify your answers for Exercises 4, 5, and 6.
. Verify that u(x) = sin z is a solution of the Volterra integral equation (E.1) of
Example 2.
zr
dF
ER =<, [ewe
CulClde— cus lala ok (7)|. (E.2)
Solve (E.2) for F(a), and for the constant of integration substitute in the
original integral equation (E.1).
10. (a) Prove that the last term in (3.20) vanishes as n — oo by showing that
the infinite series (3.21) converges absolutely and uniformly. Do the proof by
justifying the following steps (i) to (iii) with detailed hints:
(i) Show that
rl byasdd
[Forced ast) <q M oe (E£.1)
and
(ii) With the result (E.1), show that the n + 1st term in (3.20) is bounded as
follows
oo n
M|X(a—a)| _ [M|A||z — al]
ee ES eens
n=0
which is absolutely and uniformly convergent for all |\(z — a)|, as it is clear
from a simple ratio test, remembering the presence of the n! in the denominator
of the above sequence.
(b) What is the essential property in this method that helped the most in easing
the proof of the convergence.
11. Write a Volterra integral equation of the second kind in the function of the two
variables u(x, y) in analogy to that of (3.1) in u(z).
Hint: See (1.38) for a parallel, and note that it is a Fredholm integral equation
in two dimensions.
We should mention at the outset that integral equations of the first kind present their
own difficulties as we shall allude to toward the end of this section. In Section 5.4
we will discuss with some details the topic of Fredholm integral equations of the first
kind.
For the special case of a Volterra equation of the first kind,
with kernel K(x, t) such that K(z,x) # 0 (and is differentiable with respect to 2),
we will show next that it can be reduced to Volterra equation of the second kind
(whose solution, in general, is much more tractable!) If we differentiate both sides
of (3.31) with respect to x using the Leibnitz rule (1.53) on the integral, we have
of _ Af OK (z,t)
Aa u(t)dt + AK (a, x)u(z). (3.32)
0
3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND 149
as a Volterra integral equation - the second kind with the nonhomogeneous term
1 df
UGS eoyeaten mre
and the (new) kernel
—1 dOK(z,t)
Hc, b=
Cae ae) ARO:
Thus
a+ f Hie, tyucae. (3.34)
So when K (zx, x) # 0 in (3.31), we can reduce it to a Volterra equation of the second
kind and solve it using one of the methods that we discussed in Section 3.1.
Example 6 Solve the following Volterra equation of the first kind after reducing it
to a Volterra equation of the second kind:
x
When the Volterra integral equation of the first kind (3.31) has a difference kernel
K @t) =);
0
we apply the Laplace transform on (3.35) as we did for (3.26), to obtain
F(s) = AK(s)U(s),
Vie :a | (3.36)
and the solution to (3.35) is
Example 7 Volterra Equation of First Kind with a Difference Kernel. Solve the
integral equation
SVU af e”‘u(t)dt (E.1)
0
by using Laplace transform.
This Volterra equation of the first kind (E.1) is with difference kernel K (x,t) =
e*—'; if we Laplace-transform it, recalling the (Laplace) convolution theorem in
(1.84) for the integral of (E.1), and using the Laplace transform pairs in (1.73) and
(1.81),
1 1
L{K(2)}=L{e*}=—T, —L{sinz} = —5
for (E.1) we have
Se1 s\ 1 U(s)
s?+1 s—1
BC cee ee ee 8 1
~~ AIf(s-1) As?+1° ~ A\s?41~— 8241) °
(E.2)
To obtain the solution u(x) of (E.1), we find the inverse Laplace transform of (E.2),
u(a) = +L = ‘ani
Ss il
aa} (E.3)
and with the use of the two Laplace transform pairs in (1.82) and (1.81) we obtain
Cie ae ==
However, for such result of U(s) = *=+ = 1 — 4, there exists no Laplace transform
inverse for the first term 1 of U(s) = 1— q. This is an obvious accepted conclusion,
since the important necessary condition for the Laplace transform F'(s) (of a large
class of functions as described in Theorem 1.1 of Section 1.4) is that it must vanish
as s approaches infinity, and the above F'(s) = 1 does not. This important necessary
condition was shown in Example 8 of Section 1.3.
f(z) in (3.31) to guarantee a solution to Volterra equations of the first kind (3.31).) In
the next example we use the Laplace transform to solve (3.38) and leave the solution
of (3.39), which is
Example 8 Abel’s Integral Equation. Use the Laplace transform to solve Abel’s
equation (3.38).
We let F'(s) and (s) be the Laplace transform of f(x) and ¢(x), respectively,
and use C{K(x)} = L{1//x} = V7/s from (1.79) with vy = <9 noting that
[\(1/2) = 7, to obtain ;
F(s) (E.2)
Lai
£{H(s)} a's
=£ 0 iar
fg )}-— feC
dt=h(a). (BA)
3.2 VOLTERRA INTEGRAL EQUATIONS OF THE FIRST KIND 153
C{sH(s)—no} = 2x a (E.5)
= £{sH(6)}- £-{n(0)} = 2
ses H,(s).\e= o (E.6)
(3.41)
Mem ine x/ Cie
what the differentiation operation may uncover. This is even more serious if f(x) is
given as data, which of course has the inaccuracy of the measurement. So, for the
above result in (3.41) we have to do numerical integration with its own added error
of approximating the integral. On the top of that, this inaccurate numerical result has
to be numerically differentiated, which compounds the error for the desired practical
solution ¢(z) of (3.38), instead of writing the formal solution as in (3.41). This
difficulty of the equations of the first kind is considered to be very serious, because
even if we know that a solution exists, we may only get useless inaccurate data for it.
Such situations are described by ‘“‘a small change in the input data f(x) may produce
a very large change in the sought output (solution) u(«)" of the integral equation of
the first kind,
Exercises 3.2
fi i cos(x — t)u(t)dt
0
to a Volterra integral equation of the second kind.
(b) Use the Laplace transform to solve the resulting integral equation in part
(a).
(c) Use the Laplace transform to solve the problem in part (a) directly, i.e,
without having to reduce it to Volterra equation of the second kind.
(d) Use the result in part (b) to verify it as a solution to the integral equation in
part (a).
(e) Do parts (a)—-(c) above for the Volterra integral equations of the first kind
in Exercises 3 and 4 of Section 1.1.
2. Solve the Volterra integral equation of the first kind after reducing it to an
equation of the second kind,
tay 37—*u(t)dt.
0
ee fee ult at
f(a) = f rene: 0<a<l
Hint: To simplify the answer, use the relation [(x)C(1 — x) = m/(sin 72).
. Solve each of the following Volterra integral equations of the first kind after
reducing them to Volterra equations of the second kind.
x? &
(a) — = i (1— a? + t?)u(t)dt
2 0
(b) e? /2 -1= flsin(a — t)u(t)dt
10)
ae
Hint: For part (b) avoid the Laplace transform (because L{e = } does not exist,
and note that A(z, x) = sin0 = 0.
. Consider the Volterra integral equation of the first kind (3.31) in u(x), which
after differentiation (with K(x,x) 4 0) had resulted in the Volterra integral
equation of the second kind (3.34) in u(a). Assuming that one) exists and
is continuous and that K(x, x) # 0 for all ze[a, b], use an integration method
by letting ¢(z) = if u(t)dt to also reduce the equation of the first kind to
another one of the second kind in ¢(2),
156 Chapter 3. VOLTERRA INTEGRAL EQUATIONS
Ot
p(x) — aE)
* aK (2,4) p(t) ¥ f(z)
\K(a,2)’ rea, db].
Hint: Use integration by parts on (3.31) letting U(t) = K(a,t) and dV(t) =
u(t)dt, whereby V(t) = f u(t)dt = d(t).
In the preceding two sections we presented exact and approximate methods for
solving Volterra integral equations. We must recognize that the illustrations we
presented there were of a very special form to suit the method and are simple enough
to result in a familiar form of solution without very lengthy,computations. For more
general types of problems, we often resort to approximate methods where the integral
equation is replaced or approximated by another one which is closely related and can
be handled by the usual methods, and hopefully, with solutions close to those of
the original equations. When such methods are not feasible, we have to resort to
numerical methods which are also approximate methods and where the integral in the
equation is approximated by a sum of N terms. As a result, the integral equation may
be reduced to a set ofN equations in the N unknowns u(z;),7 = 0,1,2,---,N —1,
the samples of the approximate solution. To illustrate this method clearly we present
simple examples, some of which were solved by the exact methods so that we have
a chance to compare them with the numerically evaluated (approximate) results.
For the same reason we will, at this stage, use one of the simplest and most familiar
methods of numerical integration, the trapezoidal rule (1.141), which we have already
presented in Section 1.5 along with Simpson’s rule (1.144) and the midpoint formula
(1.147). For the level of this introductory text, we leave the higher order quadrature
rules for the interest of the reader. They are discussed and illustrated with a good
number of examples and exercises in Chapter 7. There, we also include the tables of
the quadrature rules that are necessary for their use in the examples and the exercises.
with its noted variable upper limit of integration x as compared to the fixed upper
limit 6 of the Fredholm equation (1.148),
b
(aia) +f K (a, t)u(t)dt. (1.148)
Indeed, this is the major difference in classifying these two main (different) classes
of integral equations. So, as expected, this will affect their theories and methods of
3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS 157
1
+-+-+ K(z,tn—1)u(tn—1) + 5K (a,tn)u(tn)|
The integration in (3.43) is over t, a < t < a; thus fort > x we take K(z,t) = 0,
KiCac;,¢;) = Viton barre
Of course, we realize here that the solution desired in (3.43) is an approximate
solution of (3.42) since there is an error involved in replacing the integral in (3.42) by
the N = n+ 1 terms of the trapezoidal rule (1.141). If we consider (the same) n + 1
158 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
sample values of u(x), u(xi) = ui, 1 = 0,1, 2,---,m, equation (3.44) will become
a set of N = n + 1 equations in u(a;) (or u;) [note that u(zo) = f (xo) since the
integral in (3.42) vanishes for = rp = a],
u(zo) = f (zo)
1=1,2,---,N, ti <j
where we note again that K(a;,t;) = 0 for t; > a; since the integration in (3.43)
stops att; = x;. The system of equations in (3.45) can be written in a more compact
form as
uo = fo
if 1
Ui= fi + At 5 Kiow ar Kyu, ee Kyj-1Uj-1 ae 5ghist ; (3.46)
uo = fo
At At
—5 Kioto + (1- Fx) U1 = fi
At At
— 9 A200 — AtKoiu; + ¢ — Ko } u2 = fo
At At
——_ Ksoto = AtkK3,u, = AtK32u2 ae (1
oH 5 Kea)U3 = fs
REO te At
——> Knouo — AtKniui — AtKpgue — +--+ (1= Kon) Un = Fn
2 2
(3.47)
as a system of n + 1 equations in the n + 1 desired unknowns ug, u1,---,tUn. We
note that the form of this set of equations is a very special and desired one since the
solutions can be obtained by repeated substitution, starting with uo = fo from the
first equation of (3.47), which can be substituted in the second equation to obtain u;,
3.3 NUMERICAL SOLUTION OF VOLTERRA INTEGRAL EQUATIONS 159
At IN) At At
— a Aiotlo ae (1= sku) w=fi= —5 Kioto oP (1= sku) U1,
a fi + (At/2)Ky0fo (3.48)
1 — (At/2) Ky,
Then this value u, is substituted in the third equation of (3.47) to obtain u2, and
this substitution process can be continued until we obtain u,. With this particular
triangular system for the Volterra equation, we will be encouraged in the following
Example 9 to find the solution. As we remarked earlier, this is in contrast to the square
system of equations of a Fredholm equation of Section 5.5 as illustrated in Example
20 of that section, where the solution of the system is not as easy and straightforward
as the above one.
a ae i (t — x)u(t)dt.
uo = jo — 0) (E22)
1 i
— 5 Ai0t0 ate (1= 5Ku) U1 =f al ene)
1 1
— 5 K20%0 — Koi + (1= 3K) U2 =fe=2 (E4)
1
~5 Koto — K31u1 — K32u2 + (1= Kea) U3 =fs—3 (E-5)
1
— 5Kou — Kyu, — Kaque — K43u3 + ¢ = =u) gi fs 4. (E.6)
Hence if we substitute in (E.3)-(E.6) the values for Kio = K(1,0) =0 -1= —-1,
Ki, = 0, Koo = —2, Ko, = -1, Koo = 0, K30 = —3, K3i = —2, Kz = —1,
Kiss = 0p kao SA0hkig==3) Ka = —2, K43 = —1, and K44 = 0, we obtain
160 Chapter 3 VOLTERRA INTEGRAL EQUATIONS
Uy SO
sup + = 1, i= luo 1 0
Uo + Uy + U2 = 2, ug =2—up —u, = 2-0-1=2-1=1
Sup + 2u; + U2 + uz = 3, ug = 3 — up — 2u) —-u2 =3-2-1=0
Quo + 3u, + 2ug +u3+ug =4, ug = 4— 2uq — 3u — 2u2 — UZ
=4—(0—3—2—0=-—!1
after substituting uo from (E.2) for obtaining wu; in (E.3), and so on. So the numerical
approximation to the sample values of the solution are up = 0, uy = 1, v2 = 1,
u3 = 0, and ug = —1, which are compared to the exact values u(x) = sinz as
4to.=. sin. O:= 05-4, = sine =—.0.8415; to—.sin 2.= 0.9093, 44. Sit du .07L ae
and u4 = sin4 = —0.7568 as illustrated in Table 3.1 and Figure 3.1.
x OF Fl 2 3 4
Fig. 3.1 Numerical and exact solutions of Volterra equation (E.1) of Example 9.
curve that connects them, which was illustrated in Figure 1.13, and which resembles
the dotted line in Figure 3.1.
We may return to (3.42) and (3.47) and emphasize again how the numerical method
reduced, or more precisely approximated, the Volterra equation of the second kind
(3.42) to a (lower triangular) system of N = n+1 equations in N = n+1 unknowns
as in (3.47), where N is the number of approximated sample values u, of the solution
desired. Now we recognize that the set of equations (3.47) can be written in a matrix
notation form
KU=F (3.49)
where K = (K;;) is the (n + 1) x (n + 1) matrix of the coefficients of the system
of equations (3.47), U = (u,;) is the column matrix of the sample solutions, and
F = (fj) is the column matrix of the sample values of the nonhomogeneous part
f (x) in (3.47). The symbolic matrix form (3.49) can be written explicitly as
1 0 0 0
At At
—9 Kio 1— 5 Ku 0 0 0
At At
——Ko —AtKo, 1 — —Ko2 0
2 2 y
At
Be Kap —AtK3 —AtkK32 1— 5 Ks 0
At
Fi —AtKni —AtKn2 vee 1- 9 Ann
Uo fo
U4 fi
U2 fo
ish) =} fs (3.50)
Un tn
which can be verified as (3.47) by performing the simple matrix multiplication. This
would mean that the essence of the numerical method is to reduce the Volterra integral
equation to a matrix equation. Familiarity with the powerful tools of matrix theory
would give us a more efficient way of solving the integral equation numerically.
We should note here again that the numerical method for solving Fredholm inte-
gral equations, which we shall discuss in Section 5.5, will follow in the same way,
however, it results in a square rather than the present triangular system of simul-
taneous equations. Even more important is how the theory regarding the existence
162 Chapter 3. VOLTERRA INTEGRAL EQUATIONS
of solutions for the system of equations will shed light on the existence of solutions
for the Fredholm integral equation. Since we intend to keep this text on the under-
graduate level by assuming mainly a basic calculus preparation, we will keep our
exercises on this level and leave it for each reader to obtain the result in an efficient
way depending on his or her preparation in matrix calculus.
In Chapter 7 (towards the end of Section 7.2), and with the help of its higher
quadrature rules, we will also have the chance to make a brief comment and illustrate
the numerical solution for a particular class of singular Volterra integral equations.
They are those equations whose singularity is due to the infinite limit of integration.
An example is the integral equation of the torsion of a wire (1.15) in the torsion
function w(t),
Exercises 3.3
u(z) = x — | (x — t)u(t)dt
0
(a) Use the trapezoidal rule* to solve for u(z) in the interval (0, 4) with enough
sample values to compare with the exact solution u(x) = sin a.
Hints Usen = 8:
(b) Tabulate or graph the numerical and exact solutions for comparison and
note if there is improvement over those in Example 9.
(a) Find the exact solution. Hint: See Example | for \ = 2 and f(x) = 2, and
in particular (E.7) for the solution.
(b) Solve (E.1) numerically for 0 < x < 5. Hint: Find five or nine sample
values with n = 4 or 8, respectively.
(c) Compare the approximate numerical solution in part (b) with the exact one
in part (a). Graph both results.
4As was done for the development (3.43)-(3.47), the trapezoidal rule is used for the exercises of this
section.
EXERCISES 3.3 163
(a) Find the solution numerically for 0 < 2 < 27. Hint: Reduce it to a Volterra
integral equation of the second kind as in (E.2) of Example 6,
then use the method of this section as illustrated in (3.46) or (3.47). Find nine
sample points with n = 8.
(b) Compare the numerical values of part (a) with the exact values of Example
6.
(c) Attempt to find the numerical solution of the Volterra equation of the first
kind (E.1) for 3 sample values (n = 2) directly [i.e., without reducing it to that
of the second kind (E.2)]. Hint: Follow the steps from (3.43) to (3.47) for
Of 6rSon= Ge g ye
i «
e i i & = 7
Ou) at = ae 9egie
a a § » ean »
asi ont} ts iden icin aM ben vue
.
oh
t
-
: en a):
~~:
<7 a fafvin yer
jx AP
pale weary,
<r
fine (y é :
=
¢° q &) Yeas —2 tus? ow ; Ooms ate
_ au) ae
wr) a or £ Sse > Vem:
om”
a] —s — one »@ s@ ae .
_ - -
SS . 5 a
. - ‘(AWS A hela
7
f
x. 7
= _ v
De Sten by cananpellly bela on © Wee Pied.
Oe rh) eo.» © rc tee 7
—— a A ;
The Green’s Function
d?y
y(a) =0 (1.34)
y(o= 0 (1.35)
reduced to a Fredholm integral equation
b
Teg Be | K (a, t)y(t)dt (1.36) (2.36)
(OO age
K(a,th=9 (@- nee b) (1.37) (2.37)
' Getty SU
(b— a)
where we referred to it as Green’s function. Also, in Example 6 of Section 2.5,
we showed how a similar Fredholm integral equation, with kernel as a special case
of the above one in (2.37), reduces to a boundary value problem such as the above
one in (1.33)-(1.35). In this chapter we will consider a more general boundary
value problem, which can be reduced more readily to a Fredholm integral equation
165
166 Chapter 4 THE GREEN’S FUNCTION
with the help of the Green’s function associated with such a problem. Next we
shall familiarize ourselves with the Green’s function and the very basic (elementary)
methods of constructing it.!
The Green’s function method is one of the most important methods for solving
boundary value problems associated with nonhomogeneous ordinary or partial dif-
ferential equations. In this chapter we use the Green’s function (in Section 4.2) to
show again how a boundary value problem is reduced to a Fredholm integral equation
with the Green’s function as its kernel. First we present methods for constructing the
Green’s function for boundary value problems associated with nonhomogeneous dif-
ferential equations. Of central importance to this development is the study of a very
important special type of boundary value problem, the Sturm-Liouville problem. We
will give a brief presentation of this problem and show how its solutions are used in
an infinite series for another way of constructing the Green’s function. In Section 4.2
we will use the Green’s functions to reduce boundary value problems to a Fredholm
integral equation with the Green’s function as its kernel. A brief discussion with an
illustration, of reducing boundary value problems associated with partial differential
equations to two-dimensional Fredholm integral equations is given at the end of the
this section (in Section 4.1.4). The illustration involves the potential distribution in a
charged unit disc with grounded rim (see (4.62)-(4.69). Another very related illus-
tration is that of the potential distribution in a charged square with grounded edges,
which is the subject of Exercise 24 of this section. In this exercise we have very
detailed instructions for using the finite Fourier sine transform (see (1.115), (1.116),
and (1.121)) to reduce the partial differential equation with boundary conditions (in
two variables) to a nonhomogeneous ordinary differential equation with its bound-
ary conditions. Then the Green’s function of this section is used to solve the latter
problem.
2
Ao (2) 5%+ r(x) + Aa(e)y = Ly = f(e),? G5 <b (4.1)
'For a complete treatment of the Green’s function, the interested reader may consult Stakgold [1979].
*In some books — f(a) instead of f(z) is written for the nonhomogeneous term of (4.1), which will bring
a (+) sign instead of the (—) in the final solution (4.5) of (4.1)(4.3).
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 167
Ao(t)
d*y
5
dy
+ Ai (zt)7 + Ao(x)y = 0. (4.4)
Yo =Yp tT Yh
is the superposition of the two solutions of the linear equations (4.1) and (4.4). In
general, it is difficult to find the particular solution for any arbitrary nonhomogeneous
term f(x) of (4.1).
The Green’s function method represents a general method of solving the boundary
value problem (4.1)—(4.3) where the solution is given as
b
OS / G(x,t)f(t)dt (4.5)
an integral in terms of the given nonhomogeneous term f(z) and the Green’s function
G(a,t). Note that some texts use —G(z, t) instead of G(z, t) in (4.5), it is to make
up for using — f(x) instead of f(x) for the nonhomogeneous term of (4.1). The basic
reason is convenience, which will become clear in the examples.
Before we illustrate the construction of the Green’s function, there is an important
particular case of the differential operator L in (4.1) with consequences that will
shed more light on the Green’s function of (4.5), such as its symmetric property
G(x,t) = G(t,x), and which will aid a great deal in the method of constructing it.
Here G is the complex conjugate of G, which we shall take as G' since we will often
work with G(z, t) as a real-valued function. The particular form of the differential
operator L in (4.1) is that of the self-adjoint form, which means that (uLu — uLv)dz
must be an exact differential dg = (vLu — uLv)dz for any two functions u(x) and
v(x) operated on by L. When the differential operator L of (4.1) is a self-adjoint one,
we will show that its associated Green’s function G(z, t) in (4.5), for the boundary
value problem (4.1)—(4.3), is symmetric [i.e. G(x, t) = G(t, x)] (see (4.25)).
We should point out now that while second-order differential operators can be
written in a self adjoint form; in general, this is not the case for differential operators
of order n > 2.
A very important example in applied mathematics of a self-adjoint differential
operator is the following second-order one?
3Here we should use L* instead of the same L of (4.1), but since we are going to work mainly with the
above L*, we shall designate it, for simplicity, as L.
168 Chapter 4 THE GREEN’S FUNCTION
= oS ir(e)ul]' — uLfr(e)o'
het d ' (EY)
=vru" +r'u’ — ure" — ur'v'
=r{vu" —uv"} +r'{vu' — uv}.
But
d
Lu —— uLv
vLu uLv = qa— ln(e){vu =euv }],
which means that (vLu — uLv)dz is an exact differential with g(x) = r(x){vu' —
uv’ }, and hence can be integrated to give
b b
From now on we will assume the self-adjoint form of the second-order differential
operator L of (4.6) instead of that in (4.1).
In this example we merely showed that the form in (4.6) is the self-adjoint form.
Indeed it can be shown that any second-order differential operator such as L in (4.1)
can be made self-adjoint by multiplying it by
which we shall leave for an exercise with a detailed hint [see Exercise 25(a)]. More-
over, in general, differential operators of order n > 2 are not necessarily self-adjoint.
To illustrate this point we state that the simple third-order differential operator L of
u
LTu= = + wis not self-adjoint, while the fourth-order operator L in Lu = —> +u
is self ise We will show the first case here, and leave the case of the fourth- ae
differential operator for an exercise (see Exercise 25(b)).
To show that L in Lu = u’” + w is not self-adjoint, we write vLu — uLv, then
add and subtract terms, which result in only the major parts of (vLu — uLv)dz as a
sum of exact differentials,
Hence (vLu — uLv)dz is not an exact differential because of the last term .
In this section we use the method of variation of parameters to construct the Green’s
function G(z, t) for the integral representation (4.5) of the solution of the nonhomo-
geneous boundary value problem (4.1)—(4.3) with L as in (4.6). The result will show
clearly that the Green’s function associated with the self-adjoint differential operator
L of (4.6) is symmetric. With the aid of this and other basic properties of the Green’s
function we are often able to construct the Green’s function without having to go
through the full details of the analytic method. We will illustrate both methods with
simple examples.
170 Chapter 4 THE GREEN’S FUNCTION
The method of constructing the Green’s function [and hence solving the nonho-
mogeneous problem 4.1 (with L as in (4.6)), (4.2) and (4.3)] depends primarily on
the solutions of the associated homogeneous problem (4.7)—(4.9) in the sense that
they both have the same differential operator L.
Let v;(z) and v2(z) be two linearly independent solutions of the associated
homogeneous equation (4.7). The variation of parameters method assumes the form
u(x) = wy(x)v;
(x) + wo(x)v2(z) (4.10)
for the solution u(x) of the nonhomogeneous problem (4.1), where the unknown
variable coefficients (parameters) w(x) and w2(x) are to be determined via this
method.
For now we will assume that neither of the solutions v;(x) or v(x) of (4.7)
satisfies both boundary conditions (4.2) at = a and (4.3) at f = b. A simple
intuitive reason for this assumption can be found by looking at the shape y(z) of
the hanging chain in Figure 2.2, where the solution of the nonhomogeneous problem
with its nonhomogeneous external force F(x) consists of two different straight lines,
the one for 0 < x < € satisfying (only) the boundary condition y(0) = 0 at x = 0,
and the one for € < x < I satisfying the boundary condition y(l) = 0 at the other
Cla al
The analytical reason behind this assumption—of not allowing either of v;(x) or
v2(zx) to satisfy both boundary conditions—is that if one of them does say v2 (x), then
by following the same method of construction of the solution as that we are about to
use, we can show that we end up with an extra condition,
b
i vo(x)f(x)dx = 0 (4.11)
that contributes to the nonuniqueness of the final solution u(x). To stay with our
aim, of constructing the Green’s function, we would rather not deal with (4.11) for
the present, and we leave it as an exercise [see exercise 20(a) with it’s very detailed
leading steps].
To prepare for making u(x) of (4.10) a solution to the nonhomogeneous equation
Lu = f with the self-adjoint differential operator L as in (4.6), we first find the
derivative of u(x) in (4.10),
u' (x) = wi (x)v;4 (x) + we(x)vy (x) + wi (z)v1 (x) + wh (ax)v2(z). (4.12)
A very important step in the method of variation of parameters is to reduce the
expected second-order differentiation of the operator L, on the unknown functions
(parameters) w;(x) and w2(2) to a first-order differentiation. This is accomplished
by assigning to zero the last two terms involving w} (x) and w}(a) in (4.12),
(4.15)
after using the fact that v; (xz) and v2(z) are solutions of the homogeneous equation
(4.7) to make the foregoing coefficients of both w; (2) and w2(z) vanish.
In (4.13) and (4.15) we have the main result of the variation of parameters method
as two simultaneous equations in the first derivatives w} (x) and w4 (a) of the unknown
variable parameters w(x) and w2(z),
Oe oe (4.16)
f (z)u1(2 (4.17)
2
172 Chapter 4. THE GREEN’S FUNCTION
Before we integrate to find w;(zx) and w2(z) we will take advantage of the fact that
the differential operator L of (4.7) is self-adjoint to show that the denominator in
(4.16) and (4.17) is a constant.
In the preceding section (Example 1) we showed that L is a self-adjoint operator,
which means that for any two twice-differentiable functions u and v, we have
)
+a2v5(a)] = 0
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 173
w1(a)[a1
v1 (a) + a2v;(a)] = 0
If, on the other hand, we assume that only v;(z) satisfies the boundary condition
(4.9) at x = b, steps similar to those of (4.21) yield
wold) = | F(ealé)ag =0
1 b
c2
With the variable coefficients w; (x) in (4.22) and w2() in (4.23), the final solution
(4.10), of the nonhomogeneous differential equation 4.1 (with L as in (4.6)) with its
associated homogeneous boundary conditions (4.2) and (4.3), becomes
174 Chapter 4 THE GREEN’S FUNCTION
= zu) (ae
| £6)va(E pote) [$0 v1 (E)dé
~- avonpoke
coo |
GEN (4.24)
where G(z, €) is defined as the Green’s function with its two branches,
1
Bir (a)v2(6), E<a<b
G(x,€) = (4.25)
sur(ti(g), asasé
Basic Properties of the Green’s Function
From this expression for G(x, €) of (4.25) with the constant B in (4.18), we will
show the following basic properties of the Green’s function:
(a) It is clear that the Green’s function in (4.25) is symmetric, that is,
G(x,€)= G(,2)
This, of course, is dependent on B of (4.18) being a constant, which is a direct
consequence of the differential operator L being self-adjoint.
(b) The Green’s function satisfies the boundary conditions (4.8) and (4.9) since vj (x)
of its first branch in (4.25) satisfies the condition (4.9) at x = b, and v2(x) in
the second branch satisfies the boundary condition (4.8) at x = a.
(c) G(z, €) is clearly continuous on the interval a < x < b; however, its derivative
OG(zx, €)/Ox has a jump discontinuity at z = € which is
OG
aoe
Dz ety 2 8)| mylene: (4.26)
o=§_
(x<€)
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 175
Our present treatment of finding the Green’s function of the homogeneous bound-
ary value problem (4.7)—(4.9), is usually aimed at solving the same boundary value
problem with nonhomogeneous differential equation (4.1)—(4.3), with the particular
solution as given in (4.5). In parallel to this treatment we may inquire about the
initial value problem with the conditions
u'(a) = 0 (4.29)
and the second-order nonhomogeneous differential equation (4.6).
d du
Iu = — rors] — q(x)u(x) = f(z) (4.30)
with the same differential operator L as in (4.6). We will show, with few modifications
of the above method that the function R(z, €), similar to the Green’s function, is also
used in an integral similar to that of (4.24) to give the solution of this initial value
problem as
and the second initial condition (4.29) on u'(«), after employing (4.14) gives
With v1 v4 — v, v2 F O, this system of equations (4.33) and (4.34) in w; (a) and we(a)
gives the trivial solution w;(a) = 0, w2(a) = 0. The first result w,(a) = 0 gives
w(x) as we had already in (4.22),
4.36
wa) = Ff Heri(eae
So, if we use w;(x) from (4.35) and we(x) from (4.36) in (4.10), we obtain the
solution in (4.31) and (4.32)
=-5 [ n@m©-n@nOsOde — gn
it” R(,
€)f(dé
where we have R(x, €), the Green’s function (-like) for the initial value problem
(4.30), (4.28), (4.29), as we stated it in (4.32).
ay f
lu= aa +X u= f(z), ip al (£.1)
u(0) =0 (B.2)
u'(0) =0 (E.3)
The two linearly independent solutions to the homogeneous equation
du
qa tr u=0, (E.4)
are v; (xz) = sin Az and v(x) = cos Az. Here r(x) = 1, and from (4.18) we have
wa) =f ZF)
A
a
sin A(x — 2) dx sinA(x
— 0) do
ne x ae ee ee es
=| Aces MG= 8)Fede + 0-0 (E.8)
uli) = [OZleosr(e
-§)F(6))
;
+ cos A(x — f(a) — cos A(x — 0) f(0)—
dO (E.9)
=f
—X si
sin N(xA(x ——€) £)df(€)dE
+f(2)
+A / sinX(x - €)d€é
o
f(z)
where (E.1) is satisfied. To show the first initial condition (E.2), we use u’(x) of
(E.8) for zc = 0 to have
0
u'(0) = [cos
0
r(0 ~ €)f(6)ag = 0
(E.1) is clearly satisfied as we substitute z = 0 in (E.7),
a ° sin X(0 — €) ra
u(0) = ifeae 0s
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 179
d 12) d™—ly
eA ala)d=
Lay = Ao (a) = + Aj(2) ay 0) 4(4538)
dr”™—1
dx
instead of the second-order differential operator L in (4.1), and, instead of the two
homogeneous boundary conditions in (4.2) and (4.3) we need the following n (linear,
independent) homogeneous boundary conditions, as applied to the solution y(z) and
its first nm— 1 derivatives atx = a and z = b, ie.,
n—1)
Bey = axy(a) + ajy'(a) +--- Fatty Ya)
+ Buy(b) + Buy’(b)+--+ BP yD (b) = 0, (4.39)
| som Ui 5c af)
where B,, k = 1,2,---,n stands for the operators of these n boundary conditions.
In (4.38) the coefficients A;(x), j = 0,1,2,---,n are real-valued functions with
continuous derivatives up to the order n — j on (a, b], and Ao(x) 4 0 on {a, 8].
We will soon list the four basic properties of the Green’s function G(, €) asso-
ciated with the homogeneous boundary value problem (4.38) and (4.39). With such
Green’s function we are able to obtain a Fredholm integral equation representation
(4.41) in u(x) for the following nonhomogeneous problem associated with the nth
order differential operator L,, of (4.38),
b b
ua) =f Gtees@de +a f Glas)oQuode.
a a
(4.41)
We may mention that a good sign for guarranteeing the existence of a unique Green’s
function for (4.41), is that the homogeneous problem (4.38) and (4.39) should have
no solution but the trivial one. So we will assume that the general homogeneous
4 Optional
180 Chapter 4 THE GREEN’S FUNCTION
boundary value problem (4.38) and (4.39) has only the trivial solution in order to
guarantee the existence of its unique Green’s function. As to the basic properties
of this Green’s function, we must bear in mind that, although the second-order
differential operator can always be put in a self-adjoint form, by using a form of an
integrating factor [as shown in (E.5) of Example 1], it is, in general, not the case
for differential operators of order n larger than two. This is, possibly, the reason for
not seeing the symmetry property of the Green’s function at the top of the next list.
The rest of the properties follow in parallel to those of the second order differential
operator. The ordering of the following properties is influenced by bringing to focus
the jump discontinuity property of the Green’s function in (4.41).
(i) G(x, €) is continuous, and so are all its derivatives with respect to x up to the
order (n — 2) on the interval a < x < b. This leaves the expected jump
discontinuity for its (n — 1)th derivative as follows: *
(ii) The (n — 1)th derivative of G(x, &) with respect to x at the point x = € has a
jump discontinuity of magnitude 1/Ao(z), i.e.,
(iv) In each of the two subintervals a < x < € and € < x < b the Green’s function,
as a function of 2, satisfies the nth order homogeneous differential equation
(4.38).
—~ = —-Fi(a) (£.1)
with boundary conditions
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 181
y(0) =0 (E.2)
y(l) =0. (E.3)
The justification for the differential equation (E.1) can be taken from the special
static (time-independent) case of the well-known wave equation of the vibrating
string in its small vertical displacement u(z, t),
du 1 du = —F(z)
Ox? = c?.-: OF.
(E.4)
where for the time-independent static case u(x, t) = y(x) it becomes
dy
= —F(z) (F.1)
dx?
c in (E.4) is the velocity of the wave. The differential operator L = d*/dz? is
self-adjoint as a very special case of L in (4.6) with r(x) = 1, q(x) = 0, and the two
linearly independent solutions of the associated homogeneous differential equation
dy
Dy oe 0 (E£.5)
are | and x. Note how we avoided for the moment calling these solutions v (zx)
and v(x). The reason is that, in addition to v;(x) and v2(x) being two linearly
independent solutions of (E.5), they are also committed [in (4.25)] to satisfying the
boundary condition v2(0) = 0 at = a = 0 and vy (l) = Oatx =b=1. We
note here that we can have v2(z) = x, which satisfies the first boundary condition;
however, v; (xz) = 1, as is cannot satisfy the second boundary condition v; (1) = 0.
So for vi (x) we may consider a linear combination v1 (x) = c1x + C2 of the two
linearly independent solutions x and | and choose the arbitrary constants to satisfy
the boundary condition v; (J) = 0. An obvious choice is to let cp = | and c; = —1
for v;(z) = | — x, which is still a solution of (E.5), but now it also satisfies the
boundary condition v; (J) =! —1=0.
With vo(x) = x and v; (x) = 1 — x we will find the constant B of (4.18) for the
Green’s function in (4.25),
B =r(a)[vi(z)v9(x)
— v2(x)v; (z)]
my; ne OP ea AS, earl)
so that the Green’s function of (4.25),
becomes :
jél—2), 0Sé<es!
G(x, €) = (E.7)
Fal), OS2SESI.
This is the same form of G(z, €) in (2.26) and Figure 2.2 for the shape of an elastic
thread (under constant horizontal tension Ty = 1), which is due to a single vertical
force of unit magnitude F' = 1 that is placed at x = €,0 < x < I. The derivation
of G(x, €) in (2.26) was based on simple balance of vertical and horizontal forces,
and the geometrical shape as seen in Figure 2.2. We also saw in (2.37) of Example
5 in Chapter 2, a similar Green’s function to that of G(a, €) in the above equation
(E.7). There we showed that the boundary value problem (1.33)—(1.35) reduces to
the Fredholm integral equation (2.36) with its kernel as K (a, t) in (2.37). Now we
recognize this kernel K (x, t) as the Green’s function of the boundary value problem
(1.33)—(1.35).
Finally, the solution to the boundary value problem (E.1)-(E.3) is obtained from
(4.24) with f(~) = —F(ax) and G(z, €) as in (E.7),
1 (@) =. 0 (4.43)
u(b) = 0 (4.44)
we give the more explicit result for the Green’s function and leave its derivation for
an exercise [see exercise 11(b)]
1
Fp lv2(a)e1 (a) — v1 (202 (a)]fo2(€)v1(6)— v1 (€)r2(d)),a< 2 <
G(z,€) = ;
Bp 2 (8)u1 (a)— v1(§)v2(a)][v2(w)o1 (6) = v1(w)v2(b)],€ Sa <b
(4.45)
where D = v2(a)v1(b) — v1 (a)v2(b) # 0, and B as in (4.18).
Again, the condition D # 0 also guards against v;(x) and or v(x) satisfying
both of the boundary conditions (E.9) and (E.10). With this more explicit formula of
Green’s function in (4.45) we can now solve the problem in Example 3 more directly
with v; (2%) = 1, ve(2) = 2, where B = 1 since
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 183
and
—Fle-1-1(Oe-1-1-1, O<a<e
G(x,€) II
-F[€-1-1@)fe-1-1-0, Exes
G(z,€) II
In an attempt to construct the Green’s function for the boundary value problem
above, we first investigate its corresponding homogeneous boundary value problem,
d’y
Apa 2 =O Olea el (E.4)
y(0) =0 (E£.2)
We shall first use property (b) (following (4.25)) of the Green’s function satisfying
the (homogeneous) boundary conditions (4.8) and (4.9). Clearly, sin bx and cos bx
are the two linearly independent solutions of (E.4) with b # 0. We know that
sin bx and sin b(1 — 2) are also two linearly independent solutions of (E.4) with the
added advantage that sin b(1 — 2), instead of cos bz, satisfies.the boundary condition
(E.3). Hence we may use either v;(x) = sin bz, vo(x) = cos bx or vi (x) = sin ba,
v2(a“) = sin (1 — x) ina linear combination to construct the Green’s function. Here,
for convenience, we adopt the latter choice, and use it in (4.10) to write
(b) The case of € < x < 1, where we let z = 1 in (E.5) to satisfy (E.3):
Now we use the symmetry property (a): G(z,é) if = G(€,x) of the Green’s
function; a clear choice for the arbitrary functions w (€) and w2(£), to make G(s, €)
in (E.8) symmetric in x and €, is w;(€) = C sin b(1— €) and w2(€) = C sin bé; (E.8)
becomes
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 185
Csinb(1—€)sinbz, 0<a2<€
OSs {Csinb€sinb(l—2z), €<a< 1. ees)
To evaluate the arbitrary constant C in (E.9) we use property (c) for the jump condition
(4.26) of the derivative 0G/Oz,
OG(a, €) _ OG(z,€) ol
Sy woes Ci Pia r(€)
= —Cbsin
bé cos b(1 — €) — Cbsin b(1 — €) cosbé = —1,
Cosin
b(€ + 1 — €) = Cbsinb= 1, = — (£.10)
Note how we used the second and first branches of G(x, €) in (E.9) for x = €4 and
Qos, Tespectively, since @ = C4. 4>-€ is inthe domain é = 7 <A.anda == < €
is in the domain 0 < x < €. From (E.9) and (E.10) the final form for the Green’s
function is
differential
ifferential equation,
equation, (L (Ly == =),
1%)
3
OE We oy esa <a (E.1)
dae
or p
Y
aes AU ee Ole
y(0) =0 (E£.2)
G(z,t) = 2 =) ie (E.6)
=Ht= 2) et) $v le
With the statement made regarding the existence of a unique Green’s function,
we have the chance now to test it for the problem associated with the third-order
differential operator L = d?/dzx? in the equation
_ ay =0 (E.7)
dz
and its homogeneous boundary conditions in (E.2)-(E.4). All we have to do is to
show that the problem (E.7), (E.2)-(E.4) has only the trivial solution y(x) = 0. The
solution to (E.7), after three integrations is
a2
ViGi) ae > ter + cs. (E.8)
If we use the boundary condition (E.2) we have y(0) = cz = 0, y(x) = (c,/2)x? +
cox and from (E.3) we have
il
yie= ie + co = 0, Ci = —2¢2,
1 1
y (x) = san = poly
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 187
1
If we use (E.4) on y'(x) = cy2 — 5c1 we obtain
j 1
y (0) SOS cr, Cac = 0.
So the three conditions result in c) = c2 = cz = 0. Hence the solution to the
homogeneous boundary value problem (E.7), (E.2)-(E.4) is y(z) = 0, a trivial
solution. Therefore our problem will have a unique Green’s function that we shall
leave its construction for an exercise.
Next we will develop the method of series representation of the Green’s function.
Such series expansion of G'(z, ) is in terms of the solutions {u,,(x) }°2, of the asso-
ciated homogeneous problem (4.7)-(4.9). We will first discuss the basic properties of
these functions and their series expansion, which are very necessary for developing
this method of constructing the Green’s function.
An extremely important result of the Sturm-Liouville problem (4.7)-(4.9) and
its self-adjoint operator (as an eigenvalue problem) is that under the conditions, on
the (regular) differential operator L of (4.7), that r(x), r'(x), q(x) and p(x) are
continuous on the closed interval a < x < b, and that r(x) > 0, p(x) > 0 on
{a, b], the solutions {u,,(x)} (or eigenfunctions) of the Sturm-Liouville problem are
orthogonal. By orthogonality of {u,(x)} on the interval (a,b) we mean that for
any two different solutions u,(x) and u(x) (of (4.7)-(4.9)) the following integral
vanishes:
where um (x) is the complex conjugate of un (zx).> Here p(zx) is the (weight) function
appearing in (4.7) and a and Db are the limits of the interval on which the problem
(4.7)-(4.9) is defined. Next we illustrate how the orthogonality property of the
solutions {u,(xz)} can be employed in expanding given functions in an infinite
series of these orthogonal functions—hence the name orthogonal or Fourier series
expansion—which we will use in determining the solutions of certain Fredholm
equations in Chapter 5 (in particular, Fredholm integral equations with symmetric
kernel in Section 5.2.)
The importance of the orthogonality of functions is not limited to series expansion
but, as we will see in Chapter 5, will be used as a condition for some of the theorems
proved concerning the Fredholm equation. For example, we may need to investigate
whether the nonhomogeneous part f(z) in the following Fredholm equation
b
u(x) = | K (a, t)u(t)dt + f(z)
5In most cases we will have real-valued functions wm (x), where Um(x) = Um(Z).
188 Chapter 4 THE GREEN’S FUNCTION
Un(2) = An iSe Os
Also, sometimes we may speak of whether A(z, t) is orthogonal to a function or
even to itself.
Un (x)
bn(z) =
||un||
the resulting functions ¢, (x) are called orthonormal functions; it is easy to show that
their norm is 1,
[: o(aei(e)az
mavieg
= / Ae) Teale
Pe. : uz (x)
=Heal
= heffolaink(e)de ~ [Iuall?
1
=1
(acne) (E.1)
n=l
then we multiply both sides of (E.1) by p(x)u (x) and integrate from a to b to obtain
b b
i De) tn 2) f(eda Cr, ( p(x)u2, (x)dz,
a a
b
.
Sy p(a)u2,
(a)dex
(Lx ae
which is the Fourier coefficient (4.48) of the orthogonal or Fourier series expansion
(4.47) of f(x) on (a, 6), in terms of the orthogonal functions {um(x)}>_,. It is
important to note the simpler form for c,, when {u,(x)} are chosen orthonormal as
{¢n(x)}, whence
N
feyig) = SS Crit (a) (4.51)
The series in (4.47) is said to converge to f(x) in the mean on the interval (a, b)p (with
respect to the weight function p(x)) if
b N
(4.52)
sim,f p(x)| F(x) — s Cnn(x)|’dax = 0.
If the orthogonal series (4.47) converges in the mean for every piecewise continuous
function f(x) (or square integrable function) on (a,b)p, then the orthogonal set
{un(x)}°2, is called a complete orthogonal set on (a,b)p. It turns out that the
completeness of the orthogonal set {w,,(xz)}°2, in the series (4.47) is equivalent to
allowing integrating such series term by term, a fact that we used in (E.2) of the above
example.
where L is a self-adjoint operator. For the present method we expand u(x) and f(z)
of (4.53) in a Fourier series of the orthonormal® eigenfunctions {uj (zx)}:
co b
tea We arte), ‘py = u(x)up(x)dax (4.54)
k=1 ¢
love) b
b
“ipuj (x)da = 1. Also p(«) of (4.49) can be introduced with simple modification.
a
4.1 CONSTRUCTION OF THE GREEN’S FUNCTION 191
If we substitute the expansions (4.54) and (4.55) and use (4.56) in (4.53), we,
formally, obtain
But since the eigenfunctions u,(x) are linearly independent, we may equate the
coefficients in (4.57) to obtain
after using
by = (t)up(t)dt (4.60)
from (4.55) and exchanging summation with integration.
The solution (4.59) can be written in the form (4.5)
b
(x)= -| G(a, t) f (t)dt (4.61a)
d*y
— + dry = f(z), Ot gal (E.1)
dx?
y(0) =0 (£.2)
y(1) =0 (£.3)
The orthonormal eigenfunctions of the corresponding homogeneous problem,
d2
eee
dx?
On” 0. ei (E.4)
192 Chapter 4 THE GREEN’S FUNCTION
are uz(z) = V2sin kaa and the eigenvalues are k?71”. Hence from (4.61b) the
Green’s function for (E.1) is
Cane ee
sinaesin
ae = (E5)
= sinkra : ;
i) Ze=a / f(t) sin krtdt. (E.6)
In this section our discussion was centered, mainly, around the Green’s function for
boundary value problems associated with ordinary differential equations. In the next
section, the Green’s function will be used to reduce such boundary value problems to
Fredholm integral equations in one variable. In higher dimensional problems, we will
encounter boundary value problems associated with partial differential equations.
One of the methods of constructing the Green’s function for such boundary value
problems is illustrated next for the potential distribution in a unit disc. The resulting
Fredholm integral equation, as expected, will involve the unknown in two variables
inside a double integral. Integral equations in three dimensions are also illustrated
in Section 2.7 for the Schrodinger equation in the (three-dimensional) momentum
space, where the three dimensional Fourier transform is used.
There are a variety of methods for constructing the Green’s function for boundary
value problems in two and three dimensions. However, for the level of this introduc-
tory text, we will limit our very brief presentation here to the following illustration.
This involves the potential distribution in a charged unit disc with grounded rim, as
we shall discuss next. The same type problem of the potential distribution in a square
is left for an exercise (Exercise 24) with very detailed leading hints.
=1
=
[2
bo
(-(sou
) vaOy
(sino)
en Ou
=-f(r,6)
a
(4.62)
where f(r, @) is the charge density. Also, with the help of complex analysis methods ,
1 | 1-2rpcos(@ — ¢) + r?p?
G(r,(r,6;9;p,
p,¢) 6) =
= —1 BuO ORS Be (4.63)
5
EXERCISES 4.1 193
So with a grounded rim of the unit disc, we have the boundary condition at r = 1,
The solution to the boundary value problem of the Poisson equation (4.62) and the
boundary condition (4.64) via the Green’s function in (4.63), is
1 Qn
V7u(r,0) =0 (4.66)
inside the disc (with no charge, f(r,@) = 0), and where the potential on the rim is
given as
u(1.e) = G(@), —™1<O0< 7. (4.67)
The solution to this Dirichlet boundary value problem (4.66) and (4.67) is of the form
27 0G
u(r, 0) = =) Op g(¢)d (4.68)
p=1
Jif
=5, | [=o
al =e d, 4.69
2)
which is what we presented in (1.24).
Of course equation (4.65) can be made as a Fredholm integral equation in two
dimensions, when a charge distribution f(r, 0) is to be found on the disc to affect the
given desired potential distribution u(r, @) there. In the same way we can say that
equation (4.69) is a Fredholm integral equation in one dimension in the unknown
potential function f(@) on the rim of the unit disc, that would produce the given
desired potential distribution u(r, @) in the interior of the unit disc. The integral in
(4.69) is the well known Poisson integral.
Another example of a Fredholm integral equation in two dimensions can be made
of the answer of Exercise 24 (parts e, f in (E.10)), when we are to ask about the
required charge distribution f(x, y) on a square that would result in the given desired
potential distribution u(x, y) inside the charged square.
Exercises 4.1
Construct the Green’s function associated with the following boundary value
problems.
dy 1
3. aa = F(z), US
y0)=0, y(F) =0
Hint: See (E.6)-(E.9) of Example 8 in Section 4.2.
dy
Tye ane Diagn oa
EXERCISES 4.1 195
y(0) = 0, y(1) =0
Hint: Use (4.5) and the Green’s function of Exercise 4 with b = 1. Fora series
solution form see Example 7 with A = —1 and f(z) = 2.
Use the Green’s function to solve the following boundary value problems:
d*y :
: a2 ¥ = 2sinchl, VS Kil (E.1)
. Reduce the following boundary value problem, associated with nonlinear dif-
ferential equation, to an integral equation.
(a) Verify that the following u(x) is a solution to the associated (same L)
nonhomogeneous equation (4.1),
the two boundary conditions to u(x) in (E.1) and (E.2) to have two
simultaneous equations in C, and C2 to be determined.
da
10. Lae y==e De tae ; x
Dien :
(E.1)
Use the Green’s function to solve the following boundary value problems.
d*y .
13: Ape os = — De
(E.1)
0) = y(n) (E.8)
18. In our discussion regarding the construction of the Green’s function (4.25),
we assumed that neither of the solutions v1 (x) and v2(xz) of the homogeneous
problem satisfies both boundary conditions at z = a andz = b.
(a) Assume now that v2(x) does satisfy both conditions, while v(x) satisfies
‘neither; follow the same steps as those used in reaching (4.25), with a
solution of the form
(b) With the results in part (a), show that the solution is not unique; that is,
show that the solution becomes
Construct the Green’s function associated with the following boundary value
problems. P
19;
yoy fix) O< rig
y(0)=yQ), —-y"(0) = y’()
Hint: Consider the form A cos(z — € + c) for G(x, €) where A and c are to be
determined.
20. Use the Green’s function method to solve the boundary value problem
d?y :
—, tn y=cosmz, 0<e<1
dx
y(0)=y(1), = y' (0) = y'(1)
Hint: See Exercise 19.
mis Consider the following boundary value problems, which is obviously a Sturm-
Liouville problem [see (4.7)-(4.9)]:
d2
app we O<2<1 (E.1)
u(0) = 0 (E.2)
u(1) = 0 (E.3)
(a) Without solving for the explicit solutions, prove that any two eigenfunc-
tions up(x) and um(x) of (E.1)-(E.3) corresponding to two different
eigenvalues A, and A,, are orthogonal on the interval (0, 1]; that is,
1
q Und lun eae = 0. EEA aeVes
0
EXERCISES 4.1 199
Hint: Write the differential equation (4.7) for u,(z) and for u(x) and
attempt to arrive at
ipICE
T LAT Lar = 0
b
/ D(a,
7) (et) dr =0
a
(a) Prove that the kernel K(a,t) = x°t? and the kernel L(z,t) = x?t? are
orthogonal on {(z,t):0<2<1,0<t< Il}.
(b) Prove that the kernel K(x, t) = sin(x — 2t) is orthogonal to itself on the
square {(z,t) : 0 <a < 2n,0 < t < 27}; that is, show that je sin
(a — 27) sin(r — 2t)dr = 0.
1 1
Pi@)y= 5 (32° — 1), P3(x) = 5 (5a — 3z)
(b) Verify that the Legendre polynomials in part (a) are orthogonal on (-1,1)
with respect to a weighting function p(x) = 1. Hint: See (4.46).
200 Chapter 4 THE GREEN’S FUNCTION
(c) Write the first three terms of the Fourier series expansion in terms of the
Legendre polynomials in part (a) to approximate the function f(x) = e4
on (-1,1). Hint: See (4.47) and (4.48) with p(x) = 1, un(z) = Pr(z),
fie Ws N22
(d) Tabulate or graph the approximate series expansion to compare with its
exact value f(r) = e?*.
24. In reference to our discussion of the potential distribution in a charged unit
disc of (4.62)-(4.65), consider now the potential distribution in a square of
side length 7 with charge density f(x, y), and where the edges are grounded.
The boundary value problem for the potential distribution u(z, y) in a charged
square with side length 7 as in Figure 4.1 due to a charge density f(z, y), and
with all sides being grounded is
A
O72 2074
DE tebyee oe Wien icc gs O<y<T (£.1)
U(O,y)=O
O u(x,0)#0 * X
Fig. 4.1 Electric potential in a square plate.
(a) Let U(n,y) and F(n, y) be the finite Fourier sine transforms, as defined
m(ib15);
U(n,y) = [wey sinnadx (E.4)
(b) Show that the boundary conditions (E.2), (E.3) are easily transformed to
U(n,0) =0 (E.8)
Un,a) = 0 (E.9)
(c) Construct the Green’s function for the boundary value problem (E.7)-(E.9)
in U(n,y). Hint: Note that in (E.1)-(E.3) of Example 3 we have the
same problem as the above (E.7)-(E.9) except for b= nein (E..),
and the boundary points are 0 and | instead of 0 and 7 in the present
boundary conditions of (E.8) and (E.9).
(d) With the help of the Green’s function in part (c), find the solution U(n, y)
to the boundary value problem (E.7) and (E.8). Hint: See (4.1)—(4.3) and
(4.5).
(e) Find the solution for the potential u(z, y) in the original boundary value
problem (E.1)—(E.3). Hint: Use the inverse (finite) Fourier sine transform
(Fourier sine series) of (1.116) on U(n, y) of part (d) to find u(z, y).
(f) Attempt to find an expression for the Green’s function in two dimensions
of the original problem (E.1)-(E.3) as G(z, y; €,). Hint: In the formal
answer of part (d), substitute for F'(n, y) (inside the integral representing
7For detailed treatment of integral and finite transforms and how to find the proper (compatible) transform
for a given boundary value problem, see Jerri [1992].
202 Chapter 4 THE GREEN’S FUNCTION
U(n, y)) in terms of its sine integral as in (E.5), then exchange the two
integrations with the Fourier series summation to write
y(a) =0 (4.71)
y(b) =0 (4.72)
reduces to a Fredholm integral equation with the Green’s function as its kernel.
To do this we write (4.70) in the form of (4.1) and purposely shift the term \p(a)y
to the right side, in anticipation of involving it inside the final integral to have an
integral equation
We note that having a symmetric Green’s function G(z, €) for (an equivalent problem
to that of) the problem (4.73), (4.71) and (4.72) can be easily justified since the
differential operator
d2 d
L= Ao(2) 73 45 Ai(z)—- AP A2(x)
d?
ao tA =e, (<eae (E.1)
y (5) =0 (E.3)
to a Fredholm integral equation.
If we compare this problem with (4.70)-(4.72) we have h(x) = a, p(x) = 1,
and L = d?/dz?, which is self-adjoint and hence the Green’s function is symmetric.
From (4.76) the integral equation representation is
m /2 n/2 :
oe if G(w,£)édé +2 / G(x, €)y(€)aé (E.4)
where we used
m /2
ke) == if Ge, €)EdE.
0
(E.5)
It remains to construct the Green’s function from the corresponding homogeneous
boundary value problem
FY 9, Ugias (E.6)
y(0) =0 (E.7)
v (5) 0: (E.8)
The solution that satisfies (E.6) and (E.7) is y(x) = a, and that which satisfies
(E.6) and (E.8) is y(x) = (1/2) — a; therefore, the symmetric Green’s function may
be written as two branches,
G(x,f) = re (£.9)
Ce(5-2), tsa<8
where the first and second branches satisfy (E.7) and (E.8) in the variable x, respec-
tively.
From the jump discontinuity property (4.26) of OG(a, €)/Ox we have
0G 0G 1
By 68) T= De (x, €) ale ~ r(€) =-—l
Note how we used the second branch and the first branch of G(z,€) in (E.9) for
xz = €, and x = E_, respectively, since = £4 > € is in the domain < x < 1/2
and z = €_ < € isin the domain0 < z < €. From (E.9) and (E.10) we have
Gye ee)
G(z,€)= (£.11)
HES mtg BS
OAT A
m /2
y(t) = (a) +A i G(e, é)y(é)aé (E.12)
where the kernel K(x, €) = G(z, €) as given in (E.11) is symmetric and k(x) is
“ECD
é
EEL
G-*)),
3 ty 2
Mo) =F
The final Fredholm equation that is equivalent to the boundary value problem (E.1)—
(E.3) is
equations in two dimensions, is to ask about the inverse problem, namely to find
the charge density f(x,y) (inside the integral) that would affect the given desired
potential distribution u(z, y) on the charged square, for example.
Exercises 4.2
d’y
Oe = er, Oriel
NO) 0 erly 0
to an integral equation by first finding the Green’s function. Hint: See Example
8.
d3
BEE. ncaa Ny Or <a
dx3
dy
- Gpz ty = 2a +1, Oper
y(0) = y'(1)
y'(0) = y(1)
Hint: See problem 17(b)i of Exercises 4.1 for the mixed boundary conditions.
EXERCISES 4.2 207
5 ‘ eee+ Ay = e”
y(0) = y'(0)
y(1) = y'(1)
Hint: See problem 15 of Exercises 4.1.
a qT? TL
y = Ay + cos ce
y'(-1) = y’(1) Hint: See problem 16 of Exercises 4.1. (for its boundary
conditions only.)
. Write the Fredholm integral equation in the charge density function f (p, ¢) that
would produce a given potential distribution on a unit disc u(r,@),0 <r <1,
0 < @ < 2m, where the rim of the disc is being grounded. Hint: Consult (4.65)
and its derivation in (4.62)—(4.65) at the end of the last section.
= are Gare oe
es %
4 . > Fo
—
-
@ 7 a A
| a2 es Pebee
Cie trare? a aell 8 ued bat Aig ee rye tiie = (eee
=—
ae la
(ie
eee eaiigenet)
i re
* Pe
saath oh e
plete _
OM jeter ¢ : each ni <2? erm
: _ = edsb =e
a ‘ Ame co~ ae = fal Try
: ‘# ai On jfar-U
ha 2 BY 1 Te a oi? me ~
_* =~
a eB
i, _ ~~
—» m, @ - @ ‘=
~— aa
“v
r
ee
J 7
a
os, ‘ ; : 7
SO Oe
SS i
® > oe pha 7 ; 7 a
>7
> — as a Se)
_
o<,
7 eames
: . —
<n <€
- i
="
ow ;
iN @
~~
4 S = mie
- =
a > 7
: / CBee OS) eg, Nieage) 2 se
a Oe Maw és <b wilthdie
a ——
a ae: 7
~ 2-2 eae
jp1men
=nma
albrae
est.
- ~ rm
———— Sa
ve. #1
~~? pit «
pid |
Ffredholm Integral
Equations
b
n(ayu(z) = f(a) + f K(e,6)u(éag (5.1)
which we termed the second kind when h(x) = 1,
b
u(x) = f(a) + | K(e,guledg (5.2)
and the first kind when h(x) = 0,
b
-f(a) = f K(@,s)uleas, (5.3)
When f(z) = 0 in (5.2) it becomes the homogeneous Fredholm equation,
b
u(x) = f K(2,8)ulé)dé. (5.4)
We note that the limits a and b of the integrals may be finite or infinite, where the
infinite limit makes it a singular equation.
'In 1900-1903, Fredholm developed the theory of these integral equations as a limit to the linear system
of equations. In 1904 and later, Hilbert established the theory in a rigorous fashion [see BOcher, 1914].
209
210 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
d’y
PE = Ay(z), a<z<b (1.33)
ya) =0 (1.34)
y(b) =0 (1.35)
reduces to a Fredholm integral equation of the second kind
ya) =e fh tieas,
tha (edt; (1.36)
BS Grit c=
“be
G alte b) eee (1337)
is finite. We must also stress here the relations in the theory of solving the nonhomo-
geneous Fredholm integral equation of the second kind (5.2) and its corresponding
homogeneous equation (5.4) with the (added) important parameter A,
a finite sum of products of a, (x), a function of x only, and b;,(t), a function of t only.
Such kernels defined by (5.6) are called degenerate kernels or separable kernels. In
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 211
Section 5.2 we introduce methods of solving another special but very important case
of the Fredholm equation with symmetric kernel [i.e., K(x,t) = K(t,x)]. Section
5.3 is devoted to Fredholm equations of the second kind; Section 5.4 is a new section
for this edition to cover in more detail the Fredholm integral equations of the first
kind; and Section 5.5 covers basic elements of the numerical (approximate) method
of solution. The higher quadratures numerical methods are covered in the added
(optional) Chapter 7 in this edition. In each of these sections we illustrate, when
appropriate, some approximate methods of solution and methods for determining the
eigenvalues of the homogeneous Fredholm equation.
Here we will again be concentrating on the various, mostly, successive approxi-
mation (iterative) methods, for constructing a solution. Accurate statements for such
results will be stated without the complete proof.
We will start this chapter on Fredholm integral equations with the very special
degenerate kernel, as it is easy to illustrate without the need for new tools except
for the familiar theory of system of linear equations. This is very important as it
represents the fundamental and historical relation of such theory and the theory for
Fredholm integral equations, as was done by Fredholm in 1900-1903.
Consider the nonhomogeneous Fredholm equation of the second kind with degenerate
kernel K (x,t) = >>,_, @n(z)bx(E),
b
ula) =f (@) 4 | K (a, t)u(t)dt (5.7a)
6 n
= f(x) + a/ S > ag (x2)bx(t)u(t)dt (5.75)
Oe
n b
= f(r) + Yale) | by,(t)u(t)dt (5.7c)
k= C
after using K(z, t) of (5.6) and exchanging summation with integration. In the fol-
lowing we show how the solution of this Fredholm integral equation with degenerate
kernel reduces to solving a system of linear equations. If we define cx as the integral
in(S. 3c),
b
Ck =i} b;,(t)u(t)dt (5.8)
212 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
n b
frTe MN bi— [ bm (x) f(x)dx + 2 ye Ck / bm(2)a,(xz)dx. (5.10)
a a k=1 a
b
ae / Dm (x) f (a) dx (5.11)
and
b
ae / Me covap aids (5.12)
a
which is a set of n linear equations in Cincz, 3,9 Si cnoaHere tf, and ta, pare
considered known since we are given b,,(x), f(x), and ax(z).
So the solution to the Fredholm equation of the second kind (5.2) with degenerate
kernel (5.6) reduces to solving for c,, from the system of the n linear equations (5.13)
(in the n unknowns c,,, m = 1,2,---,n), since c,, will then be used in the series
(5.9) to obtain u(z), the solution of (5.2).
If we use matrix notation, the system of n linear equations (5.13) can be written
in the form
Cj fi Ct 1 es e ily Cy
C2 fo Qa21 422 a2n C2
(Gig ; = ; + : : . , =F+XAC
1
Uwe) =o + af (at? + a2t)u(t)dt. (E£.1)
0
This Fredholm integral equation has a degenerate kernel of the form (5.6),
where ai (x) = 2, ao(x) = x”, b(t) = t?, and bo(t) = t. To solve for cm in (5.13),
and hence u(z) of (E.1), we must prepare f,,fo from (5.11) and a11, @12, G21, G22
from (5.12). From (E.1) we have f(x) = x; hence according to (5.11),
1 1
fi =| bo s(ode = | Pat = 7
1 pl
fa= ffvolysinar= [Pat = 5
and the column matrix F’ of (5.14) becomes
r=[4]=
Ble
Wl
To prepare the matrix A in (5.14) we use (5.12) to evaluate the elements a,,,%, with
ax(z) and b;(t) as in (E.2) for k, m = 1, 2,
1 1 i 1
ait =) bs (tay (tat = f Prat = [ bedi i
0, 0, 0 1 1
ip)ne Cy
+X
C2
Wl
Bile &] Ble
Ol ole
214 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
and if we transfer the matrix product to the left side, we obtain C — AAC =
(I —rA)C = F,
sl , 1
24; ea 4
lee|a | (E.3)
A ’ x C2 1
geet 3
In general, before solving (E.3) we must evaluate the determinant of the matrix
I-AA,
A d
Povg= MALS 5 = (1-3)
KNBia
5 . 7ig Ries Si
‘aces
|
240 — 120—2?
eae (es (E.4)
If 240 — 120 — \? # 0, the problem (E.3) has a unique solution for c; and cp which
we can evaluate by finding the inverse, (I — \A)~+. As we have only two equations
in (E.3),
1
(i ee*)ara
=, ee
eter (£.5)
r aN 1
As this Example | illustrates, this is a clear and simple method for constructing
the solution of the nonhomogeneous Fredholm integral equation (5.7a). Moreover
the condition for the existence of such a solution seems to also be very transparent as
|I — AA| 4 0, as we showed in Example |. This amounts to restricting the parameter
A in (5.7a) not to be a zero of the equation
We will follow the same steps as those we used for the nonhomogeneous equation
(5.7a), to reduce (5.16) to
or in matrix notation,
(I —-AA)C = 0 (5.19)
instead of the nonhomogeneous system of linear equations in (5.13) and (5.14). Here
A and C are defined as in (5.14).
From the theory of systems of linear equations we can conclude that if |[—A.A| ¥ 0,
then the only solution to the homogeneous equation (5.19) is the trivial solution
C = 0. By using (5.17), the solution to the homogeneous Fredholm equation (5.16)
is the trivial solution u(a) = 0. On the other hand, when |J — A| = 0, then (5.19)
has nontrivial solution. This leads us to discuss next the subject of eigenvalues and
eigenfunctions of the homogeneous problem.
the parameter \ # 0 for which (5.20) does have a nontrivial solution (i.e., u(a) 4 0)
is called the eigenvalue or characteristic value of the homogeneous equation (5.20)
or, in short, the eigenvalue of the kernel K (a, t) in (5.20). The nontrivial solutions
u;(x) # 0 corresponding to the eigenvalues \; are called the eigenfunctions or
characteristic functions of (5.20), or in short, the eigenfunctions of the kernel
i (2-2):
In this sense, then, the eigenvalues of (5.20) are the solutions of |J — AA| = 0,
since if A is not the solution ofthis equation, then |J — \A| 4 0, and hence (5.18) and
in turn (5.20) have the trivial solution. There may exist more than one eigenfunction
~j(a) corresponding to a specific eigenvalue \;. The number p of such (linearly
independent) eigenfunctions W;+41(x), }j+2(Z), Yj43(Z), -:>, %j4p(@) is called the
degeneracy (or index) of Aj. In case an eigenvalue A, is a multiple root with degree
m in the equation |J — A.A| = 0, i.e., a factor (A — Ax.) appears in this equation, then
m is called the multiplicity of the eigenvalue. For the typically well behaved square
integrable kernels, it can be shown that the index p never exceeds the multiplicity m
of the root or eigenvalue of the kernel, i.e., 0 < p < m, and for symmetric kernels
p =m. For p = 1, the eigenvalue 2, is termed “simple”.
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 217
hence ai(x) = cos? z, a2(x) = cos3z, b;(t) = cos2t, and be(t) = cos*t. We
follow the method of Example | to find c, and cz from
2
Cm = aNMe AmkCk (5.18)
k=1
or the matrix equation
(I — rAA)C = 0. (5.19)
The solution u(z) of (E.1) is
2
TG) aa ye Cpax (x) = Ac, cos? x + Ace cos 3z. (E.3)
k=
To evaluate c; and co from (5.18) we must evaluate aj1, @12, G21, and ag2, the
elements of the matrix A:
Tv Tv
AT
(ee
8
C=
Pe
1- — 0
4 ~- ys
=| kes Ao)
eee ea A=)) ==0 (E.7)
4
(1-5. Fe =0e =0, Cy = Cy
nm 4
4 7 1 1
(1-5 F)a=(1-5)a=sa=0 ex =10
4 ‘ 4 4
11 (Ee C1 cos? x + ~ (0) cos 3x2 = po cos? x.
This means that the eigenfunction is known except for the arbitrary constant c,, which
determines its amplitude; we may arbitrarily let (4/7)c, = 1 to have
ui(z) = cos? 2.
U(z) = COssz.
Fredholm alternative
The statements regarding the existence of the solutions of the nonhomogeneous and
homogeneous system of n linear equations (5.15) and (5.19), respectively, and how
they relate to the existence of solutions of the nonhomogeneous and homogeneous
Fredholm integral equations are valid even when the kernel is not degenerate, and
are summarized in the following statement (Theorems 1 and 2) of the Fredholm
alternative.
b
u(t)2= »/ K (a, €)u(€)d&é (5.20)
has only the trivial solution u(x) = 0, then the corresponding nonhomogeneous
equation,
b
To complement the second part of the Fredholm alternative, when the homogeneous
equation (5.20) has a nontrivial solution, we state the following theorem without
proof, which gives us the necessary and sufficient condition for the existence of
220 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
solutions of its associated nonhomogeneous equation (5.21) for the important special
case of symmetric kernels, i.e., K(x,t) = K(t,x). We make this choice of special
kernels to facilitate a more clear initial presentation of the main idea behind conditions
for the existence of the solution.
b
u(x) = f(x) + af Kz, ues i(ay i= Kies) (5.21s)
will have a solution if and only if the nonhomogeneous term f(x) in (5.21s) is or-
thogonal to every solution u; (x) (corresponding to A;)of the homogeneous equation
(20):
Of course, other theorems are available to accommodate equations with nonsym-
metric kernels, but we chose the above very important special case of symmetric
kernels to simplify a clear presentation of the main features of Fredholm alternative
for the existence of solutions to Fredholm integral equations of the second kind.
For completeness, we present such theorems after this initial discussion, in Theo-
rems 3 and 4, and illustrate them in detail in Example 5.
We may mention here that in comparison to the conditions of the above two
theorems, for the existence of the solutions of Fredholm equations of the second
kind, the theory for the equations of the first kind is much more restrictive, as we
shall discuss and illustrate in Section 5.4.
We should note that in contrast to the last Theorem | of the Fredholm alternative,
Theorem 2 is for symmetric kernels, and its statement also assumes the same value
for A in both (5.20) and (5.21). This means that we are considering the usually
special case when the fixed parameter X of the nonhomogeneous equation (5.21) is
equal to ,,, the eigenvalue of the homogeneous equation (5.20). In Section 5.2.2 we
will consider the general solution in (5.47) for A of (5.21) not equal to A, of (5.20),
then treat the problem of A = A, as a special case in (5.57). The next Example 3
illustrates Theorem | when A # X,, for the nonsymmetric kernel of Example 2, while
Example 4 illustrates both Theorems | and 2 as it covers the two cases of A # An
and A = ,, for a symmetric kernel.
which is associated with the homogeneous equation of Example 2 with its nonsym-
metric kernel. Hence, with Theorems | and 2 at our disposal, we can only use
Theorem | of the Fredholm alternative for such a nonsymmetric kernel.
(a) From Example 2, the homogeneous equation associated with (E.1),
We shall discuss the possible existence of the solution (or solutions) for the three
particular cases:
MA=3: fcy=s
(ii) \ = 1/7, f(x) = sin 2x
(iil) A=al /ayf (zc) = sine:
222 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
20
Op (Z) = Ak / sin(x + t)d,(t)dt (£2)
Ly
20 20
i Geer (dae J sin 2a[sin z + cos z]dx
0 1
0 27
(after using trigonometric identities for the first integral) which says that f(x) =
sin 2x is orthogonal to ¢;(x) = sinz + cosz on (0, 27), whence the problem (E.1)
for case (ii) has infinite number of solutions. These solutions will be constructed at
the end of this analysis, where the method of Section 5.1.1 is employed.
For case (iii) with A = 1/m = 2, and f(x) = sin z, we will show next that f(x) =
sin z is not orthogonal to ¢;(x) = sin x + cos z, the eigenfunction corresponding to
1
M=-,
T
27 20
[ f(x)¢i(z)dz = i sin z[sin x + cos z]dz
0 01 20
= 5[{2n -0-(1/2)}
-(0-0-(/2}J=7 40
as the integral does not vanish. Hence, according to Theorem 2 there exists no
solution for (E.1) in case (iii) of A = + and f(x) = sina.
Now we follow the method described in Section 5.1.1 to find the eigenvalues A,
and eigenfunctions ¢,,(x) of the (associated) homogeneous problem in (E.2),
22
Ole) = af sin(x + t)n(t)dt (E.3)
as was illustrated in Example 2. Then we use the same method, that was illustrated
in Example 1, for constructing the unique solution of the nonhomogeneous equation
(E.1) for case (i), and the infinite number of solutions of (E.1) for case (ii).
Here we have a degenerate kernel,
2
sin(z +t) =sinzcost+coszsint = > Ap (x)
bz(t)
k=)
with a,(x) = sin z, a2(x) = cosa, bi(t) = cost and be(t) = sint.
We will follow the procedure from (5.7) to (5.14) with all the necessary details
except for evaluating the simple integrations involved. We let
20 27
Co by (t) P(t) dt =| cos td(t)dt,
0
27 21
ae , bo(t)4(t)dt = i semi a
0
which are to be determined from solving the homogeneous case of the matrix equation
(5.14), i.e., with F' = 0. We now need to compute @11, @12, 421, and Q22, the entries
for the matrix A,
224 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
27 27
ant / by (t)a;(t)dt = [ cost sin tdt = 0,
0 0
20 27
Qi2 = i bi (t)a2(t)dt = i} cost costdt = 7,
0 0
20 27
a2) = il be(t)a;(t)dt = / sintsintdt = 7, (E.4)
0 0
Also for the solution of case (i) we will need, as shown in (5.14),
20 27
Hie / b(t) f(t)dt = i tcosdt = 0,
0 0
. 1 1 on : ‘
hence A; = — and Ay = —~— as the two distinct eigenvalues of the symmetric (and
degenerate) egal sin(x + t)of (E.2), as we quoted them at the beginning of this
example.
To find the eigenfunctions ¢;(x) and $2(x) corresponding to the eigenvalues
A, = 1/m and Ay = —1/7 we substitute for each case in (E.6) to find c, and co,
which are to be substituted in (5.9) with f(x) = 0,
aoa |
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 225
Gi Cy = Uy
—cj
+e =0
we have
2
gi(xz) =1/x a Chay (x) = 1/m[c) sin + c, cosz] = = (sing +cosz). (E£.8)
k=1
Ea] ]=[o]
iE C2 ORs
qt co — UP C2 = —C1,
€; +.6> =0, C2 = —C}.
Hence cz = —c;, and from (5.9) with f(x) = 0 (or (5.9h)) we have
2
2(xr) = -= eS:Chae) -=le sin x — c; cos Z| (E.9)
k=1 ;
Ci;
lI — 7[sine — cos z],
Cc, — 3mc2 = 0,
—3rc, + co = —27,
ist oletaal iy
Ci aa Omer CT in 0
1 —1 Ci = 0
-1 1 €9 ma 0 ,
5.1 FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 227
(with the same fixed parameters X = Aj) will have a solution if and only if the
nonhomogeneous term f(z) in (5.21) is orthogonal to every solution (eigenfunction)
~;(a) of the associated equation (5.24) (with kernel K(t, x)), i-e.,
b
[ sobslaz = 02
Clearly this theorem becomes Theorem 2 when K (x,t) is symmetric, since the
associated equations (5.23h) and (5.24) become (5.20) and (5.22), respectively, and
the above orthogonality condition a f (x); (x)dx = 0 becomes if ip Cag selCe at—
0 of Theorem 2, where {¢;(x)} are the eigenfunctions of (5.20) (or (5.22)) for
symmetric kernels.
To illustrate Theorem 4, as it complements the Fredholm alternative for nonsym-
metric kernels, we choose the following example with a very simple (nonsymmetric
kernel) to avoid lengthy details. The main emphasis will be directed towards solving
the two homogeneous equations (5.20) with K (a, t) and (5.23h) with K (t, x) as they
supply the basic ingredients to our analysis of both parts of the Fredholm alternative
for nonsymmetric kernels (Theorems | and 4).
O10) = a» | sin(In
2) p; (t)dt (E.2)
and in case A of (E.1) is not equal to this eigenvalue \1, we construct the unique
solution of (E.1) as guaranteed by the first part of the Fredholm alternative (Theorem
1). If weletc, = fh u(t)dt in (E.1), we have
5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 229
The integral iissin(In t)dt can be done with one substitution u = Int, which will
reduce it to ee e” sin udu, that can be evaluated by two (careful) integrations by
parts to give a value of —},
1
cq =—-=qA4+1, a (1+3) = il,
5 2 (E.5)
Cy = X42? aN ee —2.
So with this result of c; and the (important) condition A # —2 in (E.1), the unique
solution to (E.1) becomes
Of course, the condition \ #4 —2 for the above unique solution in (E.6) should, in the
spirit of the Fredholm alternative (Theorem 1), be transparent to us as \ # Ay = —2,
where A; = —2 should be the eigenvalue of the homogeneous equation (E.2). This
can be verified easily from (E.4) or (E.5) without the nonhomogeneous term vale in
(E.4) (as if we do (E.3) without the 2x7 term, which amounts to doing (E.2)),
1
cy = — 501A, Ci (A == 2) = (0). (E.7)
So unless \ = —2, the arbitrary constant c, in (E.7) must be zero, which results in
a trivial solution u(x) = O for the homogeneous equation (E.2) as can be obtained
from u(x) in (E.3) without the nonhomogeneous term 2x. This means that for (E.2)
to have an eigenvalue A; = —2 in (E.7), the constant c; is allowed to be arbitrary.
Thus, the corresponding nontrivial solution, i.e., the eigenfunction corresponding to
A, = —2 is obtained from (E.3) (without the term 22) as
The second part of this illustration is to address that case for the nonhomogeneous
problem (E.1) (with its nonsymmetric kernel) when its parameters A is equal to
dy = —2 of this kernel. Since the kernel K(z,t) = sin(In x) is not symmetric,
we must resort to Theorem 4 for the second part of the Fredholm alternative, which
230 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
means we have to find the eigenvalue and eigenfunction 7 (z) of the homogeneous
equation (5.23h), associated with the kernel K (t,x) = sin(In ¢),
1
“iy cpa / nn (Fae (E.9)
We use here j1; instead of A; just to emphasize solving a homogeneous problem as
new because of its different kernel K (t,x) = sin(Int). So we set out to find the
eigenfunction ~, (x) ina similar way to what we did for (E.2) and (E.1), except here
in (E.9) we have the kernel sin(In t) instead of sin(In x) in (E.2). In this case we
have a degenerate kernel K (x,t) = sin(Int) with one term, where a;(x) = 1 and
b(t) = sin(Int). As was done before, we let
which when substituted in (E.9) we obtain ~(r) = s41c1, and if this yi (x) is
substituted inside the integral of (E.10), we obtain
1 1
Cie ey sin(In t)dt = — 51s
2
0 (E.11)
ai ( 5 |=o.
For (x) = f41c, not to be the trivial solution ~(2) = 0, i.e., to have it as an
eigenfunction,
we cannot assign c,; = 0. Thus from (E.1 1), the eigenvalue to equation
(E.9) is w; = —2 corresponding to the eigenfunction y(%) = ic, = —2c, = ¢,
an arbitrary constant. This ~,(x) = c represents an infinity of solutions, but it can
be normalized, and we have the eigenfunction needed for the following important
orthogonality condition of Theorem 4:
b
‘ Raye (ajdt = 0toreally: (£.12)
a
b 1
i f (x)u1 (x)dz = a PL) rem pee al) (E.13)
a 0
and c is assumed not to be zero for #1 (x) = c to be an eigenfunction (i.e., not a trivial
solution).
To summarize, the final result of this example is that the Fredholm integral equation
(E.1) has a unique solution when \ # —2 and no solution when \ = —2.
5.1. FREDHOLM INTEGRAL EQUATIONS WITH DEGENERATE KERNEL 231
a2 t2 4 t4
which consists of the first three terms of the Maclaurin series expansion of cos zt.
Let us consider the Fredholm equation with kernel K (z, t),
b
u(x) = f(x) +2 ‘|K (a,t)u(t)dt (5.21)
and its associated equation,
6
se STN / M(s, t)v(t)dt (5.26)
with kernel / (z, t) as the degenerate kernel approximation of K (a, t). In principle,
we may use this section method to solve for v(x), which is considered as an approx-
imate to the solution u(x) of (5.21). Of course, there will be an error involved in
such an approximation, which is defined as e€ = |u(x) — v(x)|, and we may attempt
to estimate this error to give us a measure of how good this approximation is.
Cte art Ct 2s
1-2 (1-5 45F -...) <1-24 5 -Se (E.2)
342
Dat
M(a,t) Sy ag (E.3)
as an approximation to K (x,t) = 1 — cost of (E.1). The associated equation in
u(x),
232 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
oa) =sinet )
zit
[ (1-245 u(t)dt (E.4)
has degenerate kernel and can be solved by the method we discussed in this section
and illustrated as in Example 1. Hence from (5.6) we have
a
M(z,t) = (1-2) = ap (x (E.5)
where a;(z) = (1 — 2), a(x) = x? and bi (t) = 1, bo(t) = t?/2. Now we can
employ the method of solving nonhomogeneous Fredholm equations with degenerate
kernel as illustrated in Example | (leaving the detailed steps for Exercise 11(a)) to
find the elements of the matrix C’ in (5.15) are .Y
c, = 1.00308, C2 = 0.16736.
1 1
sin z + / (1-—acosat)(1)dt =sinzg+1-— xf cos xtdt
0 0; 1
sin zt ; :
=sinz+1l-2z =sinz+1-sinz
0
= 1.
The approximate solution u(x) in (E.6) and the exact solution u(x) = 1 of (E.1)
are presented in Table 5.1 to show how v(x) corresponding to the degenerate kernel
(E.5) approximates u(x) = 1 with nondegenerate kernel 1 — x cos zt. It is of interest
to observe how close v(x) will be to u(a) when we consider more terms of the
Maclaurin series expansion for M (x, t) (see Exercise 11).
Exercises 5.1
1. Solve the following Fredholm equations in u(z), then verify your answer.
mw/2
(a) u(x) = sina + af sin
x cos tu(t)dt
0
nm/2
Hint: Write C = / cos tu(t)dt, where the above equation becomes:
0
u(x) = sinx + AC'sinz, use this u(x) in the integral of C’, then solve for
EXERCISES 5.1 233
Table 5.1 Approximate (Kernel Replaced by a Degenerate One) and Exact Solutions of
Fredholm Equation (E.1)
C’. Aso, note that all the Fredhoim equations in this Exercise 1 are nonhomo-
geneous with degenerate kernel.
Hint: Here we will end up with a rather long 3 x 3 system of equations, however,
many of the entries of the coefficient matrix A vanish due to integrating odd
functions on the symmetric interval (—7, 77).
(c) u(x) = 22 —7 +4 i,
Si sin? ru(t)dt
. In Example 4, verify that g(x) = (sinz + cos) and ¢2(x) = sinx — cosz
are the two eigenfunctions corresponding, respectively, to the two eigenvalues
Ay = 1/m and Ap = —1/7 of the kernel K(z,t) = sin(z + ft), i.e., show that
each pair of an eigenfunction and its corresponding eigenvalue satisfies the
homogeneous Fredholm equation (E.2) in Example 4.
. In problem 1(a), and its associated homogeneous case in 2(b), use their results
to illustrate the Fredholm alternative.
Hint: Compare the parameter \ in the Fredholm equation of problem 1(a) with
the one eigenvalue A; in problem 2(b).
Kes 2f oe (E.2)
Hint: See the hint to Exercise 1(a), but watch for u(x) = C # 0, since this
will give a divergent integral on the right side of (E.2).
2
(c) u(x) = ae |x|u(t)dt. (£3)
. In light of the Fredholm alternative, how do you explain the validity of the
solution to problem 1(d) for all real values of its parameter \?
. Compare the results of problem 1(e) and its associated homogeneous case in
problem 2(e), how do you explain the validity of the solution in l(e) for all
values of its parameter \?
EXERCISES 5.1 235
9: Consider the following problem with degenerate kernel K (x, t)= (1 — 3zt)
and a general nonhomogeneous term f(z),
(b) Consider the resulting system of equations from (E.1), what is the condition
for a unique solution? Also what about the existence of a solution to (E.1)?
10. (a) Use the method of degenerate kernels to solve the nonlinear integral equa-
tion
u(x) = b+ valeu(t)dt
0
Hmnceisayi r= 1 jet C = le u”(t)dt, where the above equation becomes:
u(z) = b+ AC; then use this u(x) in the integral of C’, which results in a
single equation in C.
(b) Use the same method to solve the homogeneous equation
C2 af u?(t)dt
i. (a) Use the first three terms of the Maclaurin series of the kernel in the integral
equation
1 4
GD ety +/ e*'u(t)dt
=1
14. Assume that the kernel K(z,t) is not symmetric, i.e., K(xz,y) # K(y,2),
show that the following two kernels, associated with K (a, t)
Ace iK(s,2)K(s,y)ds
(a) In light of the Fredholm alternative for Fredholm integral equation with
nonsymmetric kernels, as stated in Theorems | and 4, discuss the existence of
the solution (or solutions) to (E.1) when
G8 aia
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 237
Similar to the case of (3.1), the Volterra integral equation, it turns out that the
resolvent kernel (a, t; ), for (5.27), can be expressed as an infinite series in terms
of the orthonormal eigenfunctions of the homogeneous equation with symmetric
kernel,
as the solution of (5.27), where ['(z,t; A) is given in (5.46). This will be derived
in Section 5.2.2. We stress here the difference between X, the eigenvalue of the
homogeneous Fredholm equation (5.28) and the parameter X of the nonhomogeneous
Fredholm equation (5.27). In most of our treatment we will assume that the parameter
of (5.27) is different from all the eigenvalues {,, }of the homogeneous Fredholm
equation (5.28).
There are many interesting results concerning the eigenvalues {\,,} and the eigen-
functions {un(x)} of the symmetric kernel of (5.28),
2If the kernel K (x,t) is a complex-valued function, then the definition of the symmetric kernel is
K(a,t) = K(t,x), where K is the complex conjugate of K.
238 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
iz i K (a, t)un(x)dax
n (5.31)
i ie u2 (x)dzx
we will normalize them by redefining them as an orthonormal eigenfunction (as we
did in Section 4.1),
b
b= af K (a, t)u(t)dt, BAB.) elta) (5.34)
b
One) = wf K (a, t) bx (t)dt, Kit) = k(t2) (5.35)
Before we state the Hilbert-Schmidt theorem we must note that there is a limitation
on the class of functions f(a) that can be expressed as in (5.34), since thinking of a
solution u(x) for (5.34) means the existence of such solution u(x) for the Fredholm
integral equation of the first kind (5.34) for the given function f (x). However this, in
general, as was illustrated earlier with a very basic problem in Example 8 of Section
1.3, cannot be (easily) assured. Indeed the conditions for the existence of a solution to
Fredholm integral equation of the first kind is much more restrictive when compared
with those of the second kind. Such an important topic will be discussed briefly after
the next Example 7, illustrated in Example 8, then it will be discussed in more detail
in Section 5.4.
The following is a simple version of Hilbert-Schmidt theorem.
a / f(a)oe(o)de (5.37)
in terms of the orthonormal eigenfunctions {¢, (2)}of the symmetric kernel K (2, t)
and the series (5.36) converges to f(a) in the mean (as defined in (4.52) (see (4.51)).
The series is also convergent absolutely and uniformly.
As we shall see in the next section this theorem is essential for developing the
resolvent for the nonhomogeneous Fredholm equation with symmetric kernel (5.27).
The following Mercer’s theorem is of importance as it expresses the symmetric kernel
as an infinite series of a product of its orthonormal eigenfunctions.
Mercer's Theorem
If the kernel K(x, t) is symmetric and square integrable on the square {(z,t) :
a<a2z<b,a<t < bd}, continuous, and has only positive eigenvalues (or at most a
finite number of negative eigenvalues), then the series
Saharan
mane
br (
converges absolutely and uniformly and gives the following bilinear form for the
symmetric kernel:
240 Chapter5 FREDHOLM INTEGRAL EQUATIONS
The conditions and results of Mercer’s theorem and Hilbert-Schmidt theorem are
illustrated in detail in the following example.
Qdots Unda ed
Kai quae ean os (5.40)
can be obtained by reducing (E.1) to its equivalent eigenvalue problem of (E.3) and
(E.4) as was done in Example 6 of Section 2.5:
au
— + ru =0, Orr <1 (E.3), (5.41)
dx?
11(O) == ets(10): (E..4), (5.42)
The eigenfunctions of(E.3) and (E.4) are clearly u,(x) = sin ka and the eigenvalues
are A, = k?r?. These eigenvalues A, = 1k? of the symmetric kernel in (5.40)
are real, and the eigenfunctions {sin k7x} are orthogonal. From the definition of the
norm square in (4.49) we have
b 1
: 1
[eee|? =a uz (x)dx = i sin* kradz = =.
a 0 2
Hence the orthonormal eigenfunctions are
sink sink
OnE) = see Ba EIS! = V2sinkrz.
|x || i
2
The symmetric kernel in (5.40) is square integrable on the square {(z,t) : 0 <
x < 1,0 <t < 1} since it is bounded in z and t there. The non-zero eigenvalues
here are simple since for every \y, = k?x? there corresponds only one eigenfunction
sin kmz. Therefore, the conditions of the Hilbert-Schmidt theorem are satisfied, for
the square integrable u(x) of (5.34), as K(a,t) in (E.1) is symmetric and square
integrable. Also the conditions of Mercer’s theorem are clearly met, thus K(z, t) of
(5.40) can be expressed in the bilinear series (5.38) with $4 (x) = /2sin ka and
Nk = k?n?.
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 241
Next we will present a discussion concerning the difficulty in securing the existence
of a solution to Fredholm integral equation of the first kind which is concluded by
a detailed illustration in Example 8. The more detailed treatment with precise
(practical) theorems is done in Section 5.4. This topic was touched upon very briefly
in Section 1.3 and was illustrated with Example 8 there.
the theory translates in requiring that the given function f(x) must be expressible in
a Fourier series of the eigenfunctions of the continuous, real and symmetric kernel
K (a, t) of (5.34). It states that:
Theorem 5 “For the continuous real and symmetric kernel, and continuous f(z),
a solution to (5.34) exists only if the given function f(z) can be expressed in a series
of the eigenfunctions of the kernel ’(z, t), i.e., only if
fe du(a (5.37)
where we are using here the orthonormal set of eigenfunctions {$4 (x) }@2., on (a, b)."
With the condition of this theorem, the solution takes a similar form
by = ARQk- (5.43a)
This form satisfies the condition (5.36) as we substitute the expression (5.43) for u(t)
inside the integral of (5.34)
b oe)
si
= K(z, t) Ppsaxon dt
ea (5.36)
= arene = Soren
k=1
after using the fact that @;(x) and A, are the eigenfunctions and eigenvalues, as seen
in (5.35), of the symmetric kernel K (a, t) of the integral inside the sum.
We may note here that, although we are guaranteed the existence of the (contin-
uous) solution of (5.34) in the form of the series (5.43), it is by no means a unique
solution. This is the case, since if we add to the series in (5.43) a function V(a) that
is orthogonal to the kernel K (z, t), i.e
b
/ K (a,t)W(t)dt = 0, (5.44)
and substitute in (5.34), we obtain the same output f(a) as in (5.36). So, for a
unique solution u(x) in (5.43), we must insist that there are no functions (a) that
are orthogonal to the symmetric kernel K (z, t).
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 243
Perhaps, at this level of discussion, the safest way to come up with an example
which has a solution for the Fredholm equation of the first kind, (5.34), is to assume
a form of (continuous) solution u(t) and find the resulting f(z). This f(x) may then
be used as a given function in (5.34) to safely illustrate the Hilbert-Schmidt theorem,
and the impor’ at condition (5.36) and (5.37) for the existence of the solution to
(5.34). Understandably, we can start with the simplest form u(t) = 1 on (0, 1) in the
integral of the special case of (5.34) with the symmetric kernel,
et cdot) 0 <a
K(e,t)={ SOE een eee)
and after paying attention to the two branches of the kernel K (x, t) in (5.45), we can
A ;
easily integrate to have the result as f(x) = —(x — x”), which we have as a simple
exercise (see Exercise 4). In the following example we illustrate the conditions for
the existence of a solution to such resulting integral equation. We will then illustrate
the related aspects discussed above.
Example 8
(a) Now wecan aril a reasonable Fredholm integral equation of the first kind (5.34)
with f(x) = $(a — x”), X = 1 where we know for sure that the solution does
Xistas w(t) 1,0 <7 <1:
1 9 2/2 - V2sin(2k
+ 1)r2x
Oris 1ACE:3)
SS 2V/2- /2sin(2k
+ 1)r2x
HV lig) a imi Uk a ae (E.4)
sz m(2k + 1)
k=0
Now if we compare the Fourier coefficients a, = aes Oe AS
(a — x) and b, = Bey for u(x) = 1, we find that the condition (5.43a)
244 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
for the existence of the solution to the specific (and well prepared in advance!)
Fredholm integral equation of the first kind (E.1), is satisfied,
2/2 2/2
by = Ark410K = 7°(2k + 1)? - m(2k+1)3 1(2k+1)' (Ee)
It is clear that this given f(x) = 5(a — x”) in(E.1) and K (z, t) in (E.2) satisfy
Theorem 5, which we shall leave for an exercise (see Exercise 4). So the series
expansion (5.36) of f(a), as required by Theorem 5, is justified, thus in turn
the existence of a solution to the special case (E.6) of the Fredholm equation
of the first kind with symmetric kernel (5.34).
(b) With these words of caution about the rather restrictive conditions for the exis-
tence of the solution of Fredholm integral equation of the first kind, we leave
this important subject for now, and we shall return to it in Section 5.4 with a
more general theorem and a rather relaxed condition on the solution u(t) of
(5.34). It may be instructive to give here the spirit of such a theorem compared
to the above rather restrictive Theorem 5.
As we had explained following (5.45) and in Exercise 4(a) of this section, that
the simple continuous function u(z) = 1,0 < x < 1 is a solution to the
Fredholm integral equation of the first kind
1 1
s(t — 27) = / K (a,t)u(t)dt, (E.6)
0
Fy 5) pall ean ea
K(at) = ili), tree1 Bot)
which can be verified here easily. On the other hand, the equation
1
Ci iiK (a, t)u(t)dt (£.8)
0
with the same kernel as in (E.7) has no solution. These results will be supported
by a limited version of Picard’s Theorem 7 with necessary and sufficient con-
ditions for the existence of a not necessarily continuous, but square integrable
solutions.
An important dividend of the more relaxed existence Theorem 7, that we shall
present in Section 5.4, is that as it assures us of the solution, it also offers a
method for constructing such a solution. This is a great relief when we know
that integral equations of the first kind are denied the usual simple iterative
method. The latter difficulty, of course, is due to the absence of the unknown
function u(x) as a separate term outside the integral of the Fredholm integral
equation of the first kind in (5.34) as compared to that of the second kind in
(S21):
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 245
(c) To also illustrate Mercer’s Theorem for the series expansion (5.38) of the above
kernel K(x, t) in (5.45) , we see that the theorem is satisfied since the kernel
is continuous and all its eigenvalues {\,} = {k?7?} are positive; therefore
such a kernel can be expanded in terms of the (orthonormal) eigenfunctions
{dx(x)} = {V2sin kz} as
In the following section we will develop the resolvent kernel (x,t; A) for the
nonhomogeneous Fredholm equation (5.27) with symmetric kernel.
With the aid of the foregoing important development of the Fredholm homoge-
neous equation with symmetric kernel (5.28), we show here, at least formally, that the
resolvent kernel ['(z, t; A) of the nonhomogeneous equation with symmetric kernel
(5.27) is expressed as an infinite series of the orthonormal eigenfunctions {o,(x)}
of K(z,t),
u(x)= f(x) +2 Si
cael Weay. (5.47)
b
res / EMD (5.37)
To prove (5.47), we write (5.27) in the form
b
h(z) = u(z) — f(z) = | K (a, t)u(t)dt (5.48)
which is suitable for the Hilbert-Schmidt theorem with the function h(x) = u(x) —
f(a) in (5.48) instead of f(a) in (5.34). According to the Hilbert-Schmidt theorem,
remembering its important conditions here on u(x)(= A(x) + f(x)) being square
integrable ona < x < band K(z,t) symmetric and square integrable on the square
{(x,t):a<2<b,a<t < bd}, wecan expand h(z) in a Fourier series (5.36) and
(5.37) of the orthonormal eigenfunctions {¢;(z)} of the symmetric kernel K(z, t),
246 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
b b .
a i h(x)ox(2)dx = i u(x) px (x)da — i,f(x)dx(x)dt (550)
a
=a — ak.
b
pe / VUE (5.51)
a
and a, is the Fourier coefficient of the given function f(a). In (5.50) we have now
a relation between b;, dy, and ax. It is clear that we need to express b, of (5.49) in
terms of a, to arrive at the final solution (5.47). To do this we need another relation,
by = Ady /Ax, which we can easily show, since
b
Toe=a [u(x) — f(2)]bx(a)de
=f af K (a, t)u(t)dto,(x) dx (5.52)
after using the integral of (5.48) for h(a), interchanging the two integrals, and using
the fact that the kernel is symmetric [i.e., A (2,t) = K(t,x)]. Now according to
(5.35), the inside integral is
Ou (t)
Ak
f DN
a | u(t) 5 (\diz— —
2 fu t) dr (i)dt = —d, (5.53)
a Ak
after using the definition of d; in (5.51). If we substitute from (5.53) for dx in (5.50),
we obtain b, in terms of az,
Arn=— = ak, br =
r
bx yon
(5.54)
u(z) ee ees
coer Ney (5.47)
which is (5.47), the solution of (5.27) with symmetric kernel. This solution (5.47)
can be rewritten using a, from (5.37) as
5.2 FREDHOLM INTEGRAL EQUATIONS WITH SYMMETRIC KERNEL 247
k=1 m
after exchanging the infinite summation with the integration and defining I'(z, t; A),
the resolvent as in (5.46),
Px(et
T'(a, t; ) ee a he (5.46)
The very clear condition A # A, in (5.47) on the parameter \, in the Fredholm
integral equation of the second kind (5.27), not equal to any of the eigenvalues {A, }
of its symmetric kernel is consistent with the Fredholm alternative in Theorem 1. In
case A = Ax, as we shall illustrate in the next Example 9 for a symmetric kernel, we
will use the second part of the Fredholm alternative as stated in Theorem 2.
3See Jerri [1998] for the first comprehensive book treatment of the Gibbs phenomenon that covers the
basic elements of the subject and its research development since its discovery in 1848.
248 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
a,V2sin kr
u(z) ee ee dA 17k?
bats £3
2X S (—1)**! sin kaa BS)
k(m2k2 — 2)
since
1 ns) k+1 9
ip, = ihzv2sinkradz = Cah v2
0 kr
oo / Heme
the condition a, = 0 would mean that f(x) must be orthogonal to #4 (x), and hence
a solution to the integral equation (5.27) in the form (5.47) does not exist unless f (a)
is orthogonal to all the eigenfunctions $;+41, $j+42,---,j+p that correspond to the
(degenerate) eigenvalue Aj41 = Ajzo = ++: = Aj4p-
b
We may remark here that this condition on the nonhomogeneous term f(z) of (5.27)
is consistent with Theorem 2, the second part of the Fredholm alternative (Theorem
1) for symmetric kernels.
In the case that this condition (5.56) is satisfied, the series will include arbitrary
constants B,, Bj,---, By resulting from the p indeterminate forms
ak
=R OAR ay = 0, k=j+1,j+2,-*:,9
+p
A — Xz
Example 10
The Fredholm integral equation
1
=]
—_
However, the integral equation with f(z) = sin 3z instead of the above f(x) = x
in(E? 2b);
(with K (a, t) as in (E.2)) does have solutions even though its \ = An? = Xo, since
f(x) = sin 3rz here is orthogonal to ¢2(x) = V2 sin 272,
1
v3| sin 37z sin 27xdzx = 0.
0
The solution is obtained from (5.57) after computing a, for f(z) = sin 37x, where
we note that this f(z) is a very special case, as it is a member of the orthogonal set
{sinkrax}. Thus a, = 0 except for ag = es sin? 3radxz = 1/ V2, where the
sum in (5.57) becomes only one term, and we have
This represents an infinite number of solutions for (E.3) because of the aribtrary
constant Bz in (E.4). We note here that the multiplicity is p = 1 for the eigenvalue
A2 = Ar?,
(0 ie af K@,nunat (5.20)
Exercises 5.2
(a) Use the results of Exercise 2(c), Section 5.1, to verify that for this sym-
metric kernel (cos(az + t) = cos(t + x)) the eigenvalues are real and the
corresponding eigenfunctions are orthogonal.
(b) Use differentiation to reduce the integral equation to an ordinary differ-
ential equation from which you determine the eigenfunctions, then the
eigenvalues. Compare those with the results of Exercise 2(c), Section
=a
(c) Find the orthonormal eigenfunctions.
(d) Use (E.1) to find the eigenvalues. Hint: Substitute each eigenfunction of
part (c) in (E.1) to find their corresponding eigenvalues.
(e) Show that the symmetric kernel is square integrable on {(z,t):0 <a <7,
Ore <n }8
(f) Determine whether Mercer’s theorem applies to this problem and if so,
write the Kernel’s bilinear expansion of (5.38).
4Jerri [1985, pp. 146-151]. See also Kanwal [1971, 1997 (2nd ed.)] and Green [1969].
SJerri [1999].
252 Chapter5 FREDHOLM INTEGRAL EQUATIONS
sinxcost, O<2<
re<
=my i]
—
K(z,t) = sinvcosz, €< 2 -~
w|
a>
(a) Verify that the kernel is symmetric and is square integrable on the square
{(@.2) 50 Se < 2/2 0 tas 2}.
(b) Reduce the homogeneous equation
a/2
u(e)= | K (a, t)u(t)dt (2.3)
0
with K(2, t) as in (E.2) to a differential equation to obtain the eigenvalues
and eigenfunctions.
(c) Use the information in part (b) to solve the nonhomogeneous equation
(ESD):
(d) Just as in Exercise 2, use the Fredholm alternative (Theorems |, 2) to show
that the Fredholm integral equation in (E.1) above does indeed have a
unique solution.
. Consider the Fredholm integral equation of the first kind in (5.34) with A = 1,
(with the given particular f(a) and the symmetric kernel) as it was considered
in Example 8. (This is the same problem as Exercise 2 in Section 5.4.)
(a) Show that the solution u(t) = 1 corresponds to the nonhomogeneous term
f(z) = $(@ —2*). Hint: Watch for the two branches of the kernel
K (a, t), write the integral on the two subintervals (0, 2) and (x, 1).
(b) As needed for (E.3) and (E.4) of Example 8, write the Fourier sine series
for both the solution u(x) = 1, and the nonhomogeneous term f(x) =
+(x — x”) on the interval (0,1).
(c) Show that the nonhomogeneous term f(x) = $(2 — 2”) in (E.6) and
K(a,t) in (E.7) of Example 8 satisfy Theorem 5. Hint: Note that
f(x) = =(x—2”) is continuous on (0, 1), and that the clearly symmetric
kernel A(x,t) in (E.2) is square integrable on the square {2e(0, 1),
te(0,1)}. (See the hint to part (a).)
5. For Example 10, verify that u(x) in (E.4) satisfies the Fredholm equation in
(B.1).
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 253
One of the methods of solving the general Fredholm integral equations of the second
kind (5.21),
b
u(x) = f(x) + a K (a, t)u(t)dt (5:21)
y — D(z, tA)
where ['(a,t; A), D(x, t;), and D(A) are called the Fredholm resolvent kernel
of (5.21), the Fredholm minor, and the Fredholm determinant, respectively. The
D(a, t; X) is defined as
Dee) ee7
es Bn(a;t), (5.60)
n=0
where Bo(z,t) = K (x,t), and
where A
Ch / Brealt,t) dé, ars? Cy =o (5.62)
To evaluate the resolvent kernel I(x, t; 4) we should start evaluating the functions
required for it in (5.59) which are found in (5.60)—(5.63).
Here Bo(z,t) = K(zx,t) = ze’, Co = 1, and hence from (5.62) we have
1 1
Cr ifBo(t, t)dt = / te di= a1. (E.2)
0 0
For C2 we need Bj (t, t), which we can evaluate from (5.61),
1
ari Gata ies Gre
GlCrag be K (a, s)Bo(s, t)ds
1 0 1
Sn = / ze’se'ds = xe’ — cet | se*ds ee)
0 0
= ze! — re' = 0.
pepsi
It is clear from (5.63) and the values of Co = C, = 1, C, = 0, n = 2,3,--- above
that
and from (5.60) and the values of By = ze’, B, = 0,n = 1,2,---, we have
DE tN) ace
Les) = Doe To (E.7)
We may remark here that the kernel K (x,t) = ze! of (E.1) is degenerate with one
term, and hence it is much easier to solve (E. 1) using the method for solving Fredholm
equations of the second kind with degenerate kernel which we discussed in Section
5.1 and illustrated in Example | for a degenerate kernel with two terms,
K (ta, t) =a FG ))
(5.64)
1 pl} ve xe’?
B2(a, t) =k is be. te! t,e!? dt, dty. (5.66)
0 0 t ef e! toe”?
256 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
ef’ te"
tye”
ef?
dt, dt (5.67)
ty t te
|t1e 1€ ee Fine titte _ titee =
toe toe??
and hence Cy = 0. Also the integral in (5.66) can be shown to vanish after noting
that the first and second columns of the determinant are proportional, which results
in the vanishing of the determinant [see Exercise 2(a)].
“fae re a]
flo) +a foK (a,
( t)f(t)
wary [ i Ke OK Ga f(y)dy
after using (5.69) for the first integral and defining the iterated kernel
b
K;(z, y) is called the ith iterated kernel. It remains to find under what condition the
series (5.76) converges to u(z), the solution of (5.21). It turns out that the series
(5.76) converges for |AB| < 1,° |A| < 1/B, where
is called the Neumann series and can be rewritten, after substituting for ¢;(2) from
(5.77), as
os) b
u(x) = ce i Ki(«,t)f(t)dt
ere) +a D(a,t;)f(t)at
a
To arrive at the Neumann series solution (5.81) for this problem we must prepare
K;(z,t), the ith iterate of the kernel K(x,t) = xe’. Here we have K,(zx,t) =
K (x,t) = ae‘. For i= 2 we obtain the second iterate K(x, y) from (5.78),
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 259
1 1
Ko2(2,y) =) Ka) Ki(t,ydt = [ re'te¥dt
ita y (£.2)
= ze? te'dt = xe”.
0
Now we use this result again in (5.78) for 1 = 3 to obtain
1 1
K3(a2,y) =| K(a,t)Ka(t,y)at = [ ze'tedt
arth 0 (E.3)
Se te'dt = ze’.
0
and it is obvious from (E.2), (E.3) and (5.78), that if these calculations are repeated,
we obtain the general expression for the 2th iterate of the kernel as
iOL AD (5.83)
ijOLN ie (5.84)
As a special case, if it turns out that the kernel K (z, t) is orthogonal to all kernel
iterates K;(x,t),i =n+1,n+4 2,---, then according to (5.78), all the iterates with
order above n will vanish and the Neumann series (5.80) will have n terms only. In
the very special case when the kernel K (z, t) is orthogonal to itself, then according
to (5.78), we have
ha 1) (5.85)
and the Neumann series (5.80) becomes a one-term series with the resolvent kernel
of (5.82) as a A multiple of the kernel itself.
In the next example we prove that the Fredholm resolvent kernel T(z; t; A) of
(5.82) is unique.
b b
f(a) + do | en Ao)S(Odt = f(z) +20 | T(x, t;do)f(t)dt (E.1)
a
b b
b b
/ (T(x, t; Xo) f(t)dt — / T'2(a, t; Xo) f (t)dt = 0. (E.3)
We note that (E.3) is valid for arbitrary function f(t); hence if we set Cy (2, t; Ao) —
P2(x,t; Ao) = ®(a,t;Ao) and let f(t) = &(z,t;Ao) in (E.3), we obtain
b
} |@(zx, t; Ao) |?dt = 0
5.3. FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 261
and
Ty (a, UR Ao) = L(g, Us Ao)
In Section 5.2.3 we mentioned the Rayleigh-Ritz method for estimating the eigen-
values A for the homogeneous Fredholm integral equation
We present here another method for estimating the eigenvalues since it essentially
makes use of the iterated kernels AK’;(z, t) in (5.78) of this section. Here we will only
state the results of the method and illustrate it with a detailed example. This method
gives the following formula for estimating the smallest eigenvalue .,:
Adj
Ay ~ (5.87)
Agi+2
where A; is defined in terms of Kj (x,t), the jth iterate of the kernel A(z, t); as
Ag
oT (E.2)
Ai~
262 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
2 =f f xite.pazdt = ff teed
Es
afta al
dt = Salles Baa bir 2 i
“ops
bares) wobec
For A4 we must have K2(x, t), which can be evaluated from (5.78) with
EG(Ge0) = =a eance
1 A
Ko(z,t)
; = |e K(z,y)Kily,t)dy
(y)Ki (E.4)
1 1 ye
= / zyytdy = at | y?dy =azt —
= A 8} —1
Now we substitute K2(x,t) = (2/3)zxt from (E.4) in (5.89) to obtain the value for
Aa,
: 2 4 2
=e :
=
sel;
== LS = t“dt
va:
-*f
27
ea=(5D7) mithsj SMUD
mesT5 (AAC)0 29) dame
LSSle
When we substitute in (E.2) the values of Ag = 4/9 from (E.3) and A, = 16/81
from (E.5), we obtain the estimate for the lowest eigenvalue,
Te I by
Anes (2 © \1G/8t a2 See
and hence A; ~ 3/2.
The approximate methods that we will present here for solving the Fredholm equation
of the second kind
N
Sn (2) = >— cede (zx) (5.91)
k=1
of N linearly independent functions $1, ¢2,---, Nn on the interval (a, b). Of course,
if this approximate solution (5.91) is to be substituted in (5.90) for u(z), there will
be an error €(z,c1,C2,---,¢n) involved, which depends on z and on the way the
coefficients c,, k = 1,2,---, N are chosen,
b
Syi(z) = fle) +f K(a,t)Sw(t)dt + €(z,c1,c2,°-+, cn). (5.92)
The main point here is how we can find or impose N conditions to give us the N
equations required for determining the N coefficients c, c2,---,cn of the approxi-
mate solution (5.91). The methods employed will differ by the way these conditions
are set, and of course the better method will be the one that keeps the error in (5.92)
to a minimum.
Collocation Method
This method presents the NV conditions by insisting that the error in (5.92) vanishes
at N points 71, 22,---,xn. This reduces (5.92) to the N equations
b
Sn (ai) = f(z;) +f K (2;,t)Sn(t)dt, em Le 2 ee (5.93)
a
which of course can be solved by using any of the exact methods discussed in
the preceding sections as the kernel K(z,t) = rt is degenerate and symmetric.
We choose here three linearly independent functions ¢1(z) = 1, @2(x) = a, and
¢3(x) = x”, and so the approximate solution from (5.91) is
3
S3(e) = S- cebe (x) = cy + cou + €32”. (E.2)
k=1
264 Chapter 5. FREDHOLM INTEGRAL EQUATIONS
1
53 (@) C1 i Liat cya? = xt i xt(c, + cot + e3t”)dt + €(x, C1, C2, C3)
el
1
=( C1, C2,C3)
+e@r+co2% =a2+ ail (cit + Cot? + cgt®)dt + €(z,
=i
(E.3)
and after performing the integration,
1 ‘ 42 3 44}
He (cit ecrte + c3t )dt = Cy ate az + C3 om +,
1 1 1 1 1 1 EA
= 30 AF 3° ar 4° = (Fe _ 3° + ic) ( )
}
= -C
3°
(E.3) becomes
, 2
Cy +cgrt+c3t° =2“2+e 3} + €(X, C1, C2, 3)
(E.5)
D2
= 2 (1- 50) + €(@,
C1, C2, C3).
To find c,, C2, and cz we need three equations, which we provide (via the colloca-
tion method) by insisting that the error €(@, c1, C2, C3) in (E.5) vanishes at three points
(among other choices) x; = 1, 2 = 0, and x3 = —1, which gives, respectively,
2 1
Clips CO Cia Clie aac! wae (E£.6)
c, +0+0=0, @ = (E.7)
It is simple to solve for c; , c2, and cz from (E.6)-(E.8), which gives cy; = cz = 0 and
C2 = 3. The approximate solution to (E.1) is $3(2) = 3a. For this example it happens
that we can easily verify that the exact solution to (E.1) is also u(x) = 3x2. However,
the perfect agreement between the approximate and exact solutions should not be
surprising since the particular form of the approximate solution S3(x) = c, + cox +
cz included the exact solution as a very special case, u(x) = coxa = 3z. It should
be clear, however, that such agreement is not possible when we consider another
form for the approximate solution of (E.1), say, S3(x) = cy + cosinz + c3.cosz
in terms of the three linearly independent functions 1, sin, and cosz, which we
leave as an exercise. We may remark again that we have chosen this very particular
problem to minimize the detailed computations in favor of clarifying the main steps
of the method. In the following example we consider a more general problem with a
known exact solution with which to compare our approximate solution.
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 265
is a special case of problem (E.1) of Example 12 with f(z) = e~* and \ = —1;
hence its exact solution is easily obtained as
(E.2)
using (E.7) of Example 12.
Now we will use the collocation method to find an approximate solution to (E.1)
which we will compare with the exact solution (E.2). We again choose the three
linearly independent simple functions 1, x, 2”, so the approximate solution is $3(x) =
c, + cor + c32”. If we substitute in (5.92) with f(z) = e~* and K(z,t) = —ze’,
we obtain
1
@) =P One or C3" =e X= sf e'(cy ap Onis ar c3t”)dt ar €(2, C1, C2; C3). (E£.3)
0
To determine c;, C2, and c3, we insist that the error (x, C1, C2, c3) in (E.4) vanishes
at three points. In this case we take x = 0, 1/2, and 1, which gives us the desired
three equations in c;, C2, and c3.
1 il 1
= siert 52+ 76s =e 1/2 _ “(ce — cy + cp + c3€ — 23)
In Table 5.2 and Figure 5.1 we present a comparison between this approximate
solution (E.8) and the exact solution (E.2) of the problem (E.1).
266 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
Table 5.2 Comparison of Approximate (Collocation Method) and Exact Solutions of Fred-
holm Integral Equation (E.1)
Approximate values
u(x) ~ S3(x) =1—1.4412 + 0.3102? 1 0.6590 0.3568 0.0933 -0.1310
Exact values, u(z) = e~* — § 1 0.6538 0.3565 0.0974 -0.1321
Fig. 5.1 Comparison of approximate (collocation method) and exact solutions of Fredholm
equation (E.1) of Example 16.
a
5.3 FREDHOLM INTEGRAL EQUATIONS OF THE SECOND KIND 267
b b b
(5.95)
after substituting for Sjy(x) from (5.91). We remark here that in general the linearly
independent functions ~;(zx) are different from ¢;(x) used for the approximation,
but sometimes it is convenient to use the same functions.
and we choose the same linearly independent functions ¢)(z) = 1, ¢2(x) = a, and
$3(x) = x” to approximate the solution u(x) by
1 1
[fe + Cot + C32 > | gt(cy + cot + cat?) i= fl l(x)dx (E.4)
1 1
1 1
[ee
a Cy, + cov + 327 — sh wie; cot + cat) dg
i} a(a)dx (E.5)
—] = 1
1 1
lx? a + cot + c327 — i! at(cy + Cot + cal di = / x? (x)dzx
Z = =1
j (E.6)
268 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
We note from (E.4) in Example 15 that the inside integral in the equations above,
: 2
ii t(cy + cot + c3t”)dt i 32°
=f
We use this result and perform the rest of the simple integrations to obtain the three
equations in ¢;, C2, and c3:
1 1 1 2 1
i cq t+=cot +¢327) dz = ik zdx = —| =0
—1 3 = 2 -1 (E 7)
1 1 : 2
=cqart 6 Cou? + 30 te = 2c, + 303 = (0)
i 1 3 sl 2
b
if€7(a,¢1,C2,°**,cn)dx = minimum (5.96)
a
on the interval (a, b) being a minimum. We shall not discuss this or other approximate
methods here due to their somewhat lengthy computations; we refer the reader to their
more complete treatment in other texts that cover approximate methods of solving
integral equations.’
7See Green [1969, p-96], Baker and Miller [1977], Delves and Mohammed [1988].
EXERCISES 5.3 269
Exercises 5.3
1. Use the method of the Fredholm resolvent kernel (5.58) and (5.59) to solve the
following Fredholm equations of the second kind, then verify your answer.
1
(a) u(x) = 2? + | (a — 2t)u(t)dt
0
Hint: We have Cp = 1, Bo = K(a,t) = x — 2t, so start with C; from
(5.62), then B,(z, t) from (5.61), and we continue as in Example 11 to
obtain the resolvent kernel I(x, t; ) for the solution u(z) in (5.58).
ze! xe!
fre. = effi tie"
toe! toe!
(We may note that all the three columns in (5.66) are proportional to each
other, so are the three rows!)
(b) Solve the problem of Example 11, by using (5.64) and (5.65) for B,, (a, t)
and C’,, instead of (5.61) and (5.62), respectively.
20
| eae ae® | sin(a — 2t)u(t)dt.
0
Hint: Note that the kernel K (x,t) = sin(x — 2t) is orthogonal to itself (see
Exercise 22, Section 4.1).
Hint: Use the Neumann series (5.81). (Also, you can use the result of problem
5 with very minor changes!)
270 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
. Use the iterated kernels-Neumann series method to solve the following integral
equation. Verify your answer.
. (a) Use the collocation method to find an approximate solution for the equation
of Example 15
1
Ria) =e +f xtu(t)dt
=i
in terms of
(i) The three linearly independent functions ¢;(x) = 1, ¢o(x) = sina,
and $3(x) = cosa.
(ii) The eight linearly independent functions 1, sinz, cosz, sin 2z,
cos 22, sin 3z, cos 3a, and sin 4a.
handle the lengthy computations of solving the linear equations.
(b) Tabulate the two approximate solutions in part (a) and compare them with
the exact solution u(x) = 3x of Example 15.
. Use the collocation method to find an approximate solution for the equation of
Example 16,
1
ula) =e? -{ ze‘'u(t)dt
0
in terms of the linearly independent functions
9. (a) Use the Galerkin method to find an approximate solution for the equation
of Example 1S,
u(z) = a+ ibxtu(t)dt
1
in terms of
(i) The three linearly independent functions ¢,(z) = 1, ¢2(x) = sina,
and $3(xz) = cosa.
(ii) The eight linearly independent functions 1, sinz, cosz, sin 2z,
cos 2a, sin 3x, cos 3x, and sin 4x. You may use w(x) = ¢;(2).
(b) Tabulate the two approximate solutions in part (a) and compare them with
the exact solution u(x) = 3a and the approximate solution obtained by
the collocation method in exercise 7(a,i,1i).
(c) Use the least squares criterion (5.96) to compare how good the approxi-
mations in exercises 7(a,i) and 9(a,i) are.
(d) Do part (c) for exercises 7(a,ii) and 9(a,ii) and show how they in turn
compare with 7(a,i) and 9(a,i), respectively.
10. Do Exercise 8 using the Galerkin method instead of the collocation method
and compare your results.
Towards the end of Section 5.2.1, and in relation to the Hilbert-Schmidt theorem, we
discussed then illustrated in Example 8 the difficulty of insuring the existence of the
solution u(x) to Fredholm integral equations of the first kind,
b
TAZ) =) K (a, t)u(t)dt (5.97)
and how the given function f (x) must be restricted to have such a solution. Moreover,
even when, perhaps on other grounds, we know that there is a solution, we lack the
usual iterative method to construct it. This is due to the absence of the solution u(z)
outside the integral of (5.97), which is in contrast to integral equations of the second
kind, where the iterative (or successive approximations) method plays an important
role, as we had discussed in Sections 5.3.2 and 3.1, respectively, for Fredholm and
Volterra equations of the second kind.
At the level of this book, the simplest statement on the existence of a unique
solution for the Fredholm integral equation of the first kind (5.97) is found in (the
following) Theorem 7, which is limited to a special class of symmetric kernels
272 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
(K(a,t) = K(t,x)) that we shall describe in the following simple Definition 1. This
Theorem 7 is a restricted version of Picard’s theorem. For the general theory, the
kernel K (az, t) can be complex-valued, the reason for using the complex conjugation
in the definition of the symmetric kernel as K (x,t) = K(t, x); itis dropped when we
deal with only real-valued kernels, and we write, K(z,t) = K(t,x) for symmetric
real kernels as we did in (5.27). For the definitions needed for Theorem 7, we shall
rely on the basic elements of Fourier series, that we have introduced and used for
the theory of homogeneous Fredholm integral equations with symmetric kernels in
Section 5.2.1. So, here we will limit ourselves to symmetric kernels, but we may
have the chance later (or in the exercises) to briefly discuss cases or examples of
non-symmetric kernels.
In Example 8 of Section 5.2, and the first basic Theorem 5 for the existence of a
solution to Fredholm integral equation of the first kind with symmetric kernel (5.34),
we showed how conditions for such an existence are rather demanding on the given
function f(a) in (5.34). We now present another very basic theorem, which is aimed
at the existence of not necessarily continuous solutions to (5.34), namely, square
integrable solutions. Also the condition of this theorem guarantees a unique solution
to (5.34). For this theorem, we need to present a few definitions, which describe the
particular symmetric kernel that allows the existence of a unique solution to Fredholm
integral equation of the first kind (5.34). Such special symmetric kernels are called
closed symmetric kernels, which we shall describe in the following two definitions.
This will enable us to give a precise statement of the simplest possible theorem on
the existence of the solutions without the need for more abstract development that is
necessary for most of the other theorems. The theorem will be illustrated very clearly
in Example 18.
While the first part of this section deals with the rather demanding conditions
for the existence of the solution, the second part of the section deals with another
difficulty that such a solution may have. Briefly, Fredholm integral equations of the
first kind are termed ill-posed, a rather advanced subject which we shall attempt to
explain on the level of this book, and where we complement our discussion with a
number of examples for various applied problems.
/ SOUPS: (5.98)
We will need the following basic result, where it can be shown that “‘a square integrable
function f(x) on (a, b) is orthogonal to a symmetric kernel K (a, t) if and only if it
is orthogonal to all eigenfunctions {¢,, (a) } of the kernel as defined in (5.35),
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 273
Also we may repeat the definition of the null function n(x), which is the function that
has its (square) norm vanish on the indicated interval (a, b),
i.n*(x)dx = 0. (5.100)
Now we define the special class of symmetric kernels that would allow the simple
statement of Theorem 7 for the existence of the unique solution of Fredholm integral
equations of the first kind (5.97). This is the class of closed symmetric kernels.
epee (5.101)
n=1
converges, where {\,,} are the eigenvalues of the kernel A (a,t) (as indicated in
(5.99), and the a, are the Fourier coefficients of the given function f(x) on the
interval (a,b) in terms of the orthonormal eigenfunctions of the kernel as given in
(S:31),(.32) and (5-29),
(5.102)
b
an =ff(e)bn(@)de,
fo) a.0,(2)? (5.103)
Also, as it shall become clear from the illustration in Example 18, the important
condition of the convergence of the series in (5.101) is necessary for the class of
square integrable solutions u(x) of (5.97) to have the Fourier series representation
Cna
= f(z)Un(x (5.106)
and to which the Fourier series (5.105) converges in the mean, i.e.
N 2
lim
Noo a
- Ss Cr tln (©) dx = 0." (5.107)
So, f(x) must have coefficients a, that are decaying fast enough to make the series
in (5.101) with its nth term \,,a,, converge. Such restriction should be borne in the
mind of anyone that wants to give a simple example of a Fredholm integral equation
of the first kind. This is so true, since for a casually given function f (x) in (5.97) the
solution u(x) may not exist! This will be illustrated in the following Example 18 for
the two simple functions used in Example 8, namely, f (x) = x and f(x) = $(a—2?)
on the interval (0, 1). We will show that, according to the condition (5.101), a solution
to (5.97) does not exist for the first case with the function f(z) = x, while it does
exist for the second case with the function $(r — x”). Moreover we can construct
this latter solution via its Fourier series as in (5.104).
=
fiK (a, t)u(t)dt (£.1)
0
where K (x, t) is a symmetric kernel which we used in Example 8,
Sh a Nada nal ee aS
K(e,t) = {t{i-z), t<2<1 ee
and where we can secure from that example its orthonormal eigenfunctions
{bx(z)}9_, = {V2sin krz} and note its (clearly increasing!) eigenvalues {,}?
={k?x?}% |. We also note that this set of eigenfunctions is complete on the interval
(0, 1) of the closed (symmetric) kernel K (a, t) of (E.2).
We will show here that a solution to (E.1) does not exist. This is so, since as we
force it on (E.1), we may write the Fourier series for f(z) = x on (0, 1) in terms of
the above eigenfunctions, according to (5.104), (5.103), as
00
dines SD a,V2sin kre, OK acl (E.3)
k=1
i V2
ak =) aV2sinknadx = (—1)**! (E.4)
0 kn
where the above integral for the Fourier coefficients a, is done with one simple
integration by parts;
—
—
sinkra, Oar uke (E.5)
We note here that (E.3) to (E.5) are all fine since the eigenfunctions are complete® on
(0,1), and f(x) = x is square integrable on (0, 1), i-e., if x*dx = $,80 this function
8See (4.47), (4.48), and (4.52) and the discussion immediately following (4.52).
276 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
is entitled to its Fourier sine series representation in (E.5), which does converge in
the mean to f(x) = 2 on (0,1). The problem arises as soon as we look at (E.1),
where we see clearly that we are forcing a solution u(z) for it, which does not exist,
as the violation of condition (5.101) will indicate.
For (5.101), we have now ax = eee and \, = k?1?, so
which is a divergent series. But since (5.101) is a necessary and sufficient condition
for the existence of the solution to (5.97), we easily conclude the non-existence of
such solution to (E.1). Another way of showing this negative result for (E.1) is to
force a Fourier series representation for the (assumed) solution u(x), then find that
(E.1) implies that such series diverges, which we leave for an exercise (see Exercise
1).
From this illustration for Theorem 7, we should learn that before embarking on
solving a Fredholm integral equation of the first kind we must first have the eigen-
values {A,,} of its symmetric kernel, then we proceed to find the Fourier coefficients
{a,,} of the Fourier series expansion of the given function f(a) in terms of the or-
thonormal eigenfunctions of the kernel. Then it is a matter of the condition on the
product
1 1 1
PRE a@ (=) i.e., of the order me ki 5 (E.7)
for the series (5.101) to converge. In the above example we can see that it is not the
case since
As we did in Example 8, we first write the Fourier sine series for f(x) = $(x — x”),
on (0, 1),
1 1 2/2
a er ee ee Ota eral (£.10)
where the Fourier coefficients are easily computed, using integration by parts, from
its Fourier coefficients integral as given in (5.103) with dn (x) = /2sinnra,
5.4. FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 277
HAD ER ib 1
sparen
es [ 3 (2 — 2”) V2sin(2n + 1)radz = (Qn
+1 2v2
azn = 0
(£.11)
Recalling that the eigenvalues are ,, = n?72?, we have for condition (5.101)
SA ene 2/2
|eeenen ree ons NG 2 2 = 1
bs
dont on+1| 73 (2n i D3 TT (2n ae 1) O @ (£12)
and the series in (5.101) converges since k = 1 > 5 in (E.7). Indeed, the sought
solution to (E.9) is u(x) = 1,0 < x < 1, as can be verified after simple integration
(see Exercise 2(a)). As a matter of fact, and as we did for Example 8, a practical way
of making an example, for a Fredholm integral equation of the first kind that does
have a solution, is to plug in a known function as a solution u(z) inside the integral,
and find the result of the integral as f(a) to be used for the example as a sure thing
to guarantee the solution to the problem. On the other hand, once we have ,, for
the kernel and a,, for f(x) of the equation of the first kind (5.97), we first use An@n
in (5.101) to see whether a solution does exist, and if so we use the same A,,a,, in
(5.104) to construct that solution as a Fourier series in terms of the eigenfunctions
{¢n(x)} of the kernel with coefficients bp) = anAn.
5.4.2 . |Ill-Posed Problems and the Fredholm Equation of the First Kind
a measured data, and we want to make sure that a small inaccuracy in this data (error
in the input) will cause only a small error in the output as the solution of the problem.
A problem stated with the assurance of the existence, uniqueness and stability
of its solution is termed a well-posed problem, otherwise it is ill-posed. A typical
example of a well-posed problem is that of the potential distribution u in a disc due
to given input potential w = f on its rim that we presented in (1.24), where we
can prove the existence and uniqueness of the solution (potential) wu in the interior
of the disc. For now our physical intuition suggests that such solution wu depends
continuously on the data f at the boundary, i.e., it is a stable problem. This example
is to be differentiated from the one that we shall present in Example 19, which is due
to Hadamard, where we give the potential as well as its gradient on the boundary,
and which illustrates the earliest analytical example of an ill-posed problem.
Another example is the solution of the temperature distribution in a bar with
given initial temperature (data), and boundary conditions. Again it can be proved
that a solution in the interior (temperature u(z,t) for, t > 0; 0 < x < J), exists,
and it is unique. Also, on physical grounds we can see that a small change in the
initial temperature causes only a small change in the temperature in the interior.
Definitions and theorems are introduced to prove these results but they are beyond
the scope of this book, the interested reader may consult the available references on
the subject? For us, we may look at the input-output problem symbolically as with
operator notation, without going in depth to the theorems in the above references.
However, we may give a descriptive, though not so precise, notion of some of their
results, which will be followed by a specific clear illustration in Example 20, and a
discussion of the ill-posedness of Fredholm integral equations of the first kind. For
example, consider the operator equation,
Ag = f (5.108)
where A is an operator, say the integral operator in the Fredholm equation of the first
kind,
b
re / cameo (5.109)
mapping the desired solution ¢ as an element of (an acceptable) space of functions
X into f, as an element of another space Y of the same type functions,
Ae OGY: (5.110)
The idea of well-posedness will depend on the existence of an inverse operator A~!
that will return feY to deX,
Aho, (5.111)
and thus obtaining the solution of the integral equation (5.109).
For a small change in the input f to cause only a small change in the output
(solution) ¢, or in other words, the continuous dependence of ¢ on the data f, means
that this inverse operator should be continuous. Unfortunately, in general, and for
a large class of such operators, this may not be the case. This would mean that,
according to (5.111), a small change in f may cause a very large change in ¢, and the
problem becomes ill-posed. Such a situation is familiar to us, where we may have
the linear system of n equations in the n unknowns of the (column) matrix U,
AU =F (5.112)
where A is the known n by n matrix of the coefficients, and F' is the known column
matrix. However, when it comes to solving for U in (5.112) A may not have an
inverse, whose simple check is when its determinant |A| vanishes. Another situation
is that of the heat equation (see Exercise 2) which is stable as the heat is a diffusive
process, and a small change in the initial temperature will not cause big a change
in the (diffusing) output, which can be described as “forgetting its past". However,
the inverse heat problem of knowing the temperature now, and we are to find the
initial temperature, which is called the inverse, or backward heat equation is an ill-
posed problem. In physical terms this means that the heat diffusion is an irreversible
physical process. In the following Example 19 we will use a very well known
example due to Hadamard to illustrate what we mean by an ill-posed problem. It
will be followed by a discussion of the ill-posedness of Fredholm integral equations
of the first kind.
2 O*u
an rs Se ray =ae Ou
NCES =r2 <0 60, yi>.0: (E.1)
and where the gradient “ of the potential is given at the same edge y = 0,
Ou(z, 0)
= f(a), =O CO (E.3)
Oy
where f(z) is a continuous function. Hadamard’s example is for the choice of the
data f(a) as the particular sequence
!0Optional
280 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
1 ;
Un(2,y) = —> Sinnaz sinh ny (E.5)
n
is a solution to the boundary value problem (E. 1), (E.2) and (E.3) with f(x) = f(x)
as in (E.4). Also the input f,(2) = mae of (E.4) is convergent to zero as n — 00,
i.e., for large n there could be only small changes in the input data of (E.4). However,
the solution (output) in (E.5) (with its factor sinh ny = ae y > 0) will sustain
a very large change due to the eӴ term for the same large n. Hence the solution
Un(2, y) in (E.5) to the boundary value problem (E.1)—(E.3) and (E.4) is not stable,
and the problem is ill-posed. To show that the inverse, or backward heat equation is
also ill-posed, we refer the reader to Kress (1989).
The treatments and methods for a stable approximate solution of ill-posed prob-
lems are called regularization methods. Briefly, and to use operator notation, the
operator A of the ill-posed problem Ag¢ = f is replaced by one (or a family) of
a bounded operator R, such that for the perturbed data f? = f + Of of f witha
knownerror | f* — f| < 6, the (resulting perturbed) solution ¢*, corresponding to this
perturbed data, is a reasonable approximation of the actual solution ¢ i.e. ¢° depends
continuously on f*. A detailed treatment with powerful theorems, that describe such
regularization methods, is found in Kress (1990).
where A, and ¢, are the eigenvalues and eigenfunctions, respectively, of the sym-
metric kernel, and a,, is the Fourier coefficients of f(a) in terms of such a (complete)
set of orthonormal eigenfunctions,
b
The necessary and sufficient condition of Theorem 7 for the existence of such a
solution is that the series }>”°_, |An@n|? converges. This sounds very fine as far
5.4 FREDHOLM INTEGRAL EQUATIONS OF THE FIRST KIND 281
as the two desired qualities of existence and uniqueness of the solution to our
problem of the Fredholm equation of the first kind. What remains, for the present
discussion, is the third quality of the stability of the solution for the problem to be the
desired and acceptable well-posed problem. Unfortunately, from the solution u(«)
in (5.104) we can show that the problem is not stable. This is the case, since if we
perturb the given data f(a) by a small df (x), the solution u(x) in its Fourier series
representation in (5.104) will not be perturbed by what we wish, a Fourier series
representation of 6 f(z) (or a constant multiple of it), but some magnification, i.e., a
much larger corresponding change du in u(x). This, as we shall see shortly, is due to
the eigenvalues A,, factor in (5.104), where they are increasing. If we write (5.104),
using a, as in (5.102), we have
ce b
zs Yo ndn(a) | f(y) on(y)dy. (5.113)
Now if we perturb f(x) by 6f(x), we substitute f(x) + 6f(z) inside the integral of
(5.113) to have u(x) + du(z) on the left hand side,
lee) b
Exercises 5.4
with a symmetric kernel as was considered in Example 18. Follow steps (i)—(iii)
to show, as in Example 18, that a solution does not exist for this equation.
(i) Assume a Fourier series representation for the (not so sure!) solution u(z)
in terms of the eigenfunctions of the kernel,
(ii) Substitute this u(a) in the integral of (E.1), interchange the summation
with integration as though the quality of the convergence of the series
(E.2) allows that.
Hint: For the integration inside the series involving the kernel, use the
fact that \/2 sin ka are the eigenfunctions of the kernel as described in
(5.35)ion (5:29).
(iii) Write a similar Fourier series for f(x) = x on (0,1) and use in (E. 1), then
compare coefficients, where you find that bk = m/2(—1)**1k which
makes the (assumed) Fourier series for the solution in (E.2) divergent.
Thus, there exists no solution to (E.1).
2. Consider the Fredholm integral equation of the first kind (E.1) of Example 8
in Section 5.2. (This is the same problem as in Exercise 4 of Section 5.2.)
(a) Verify that u(z) = 1,0 < a < 1 isa solution to this problem. Hint:
Watch for the two branches of the kernel K(x, t); write the integral on
the two subintervals (0, 7) and (a, 1).
(b) Write the Fourier series for the solution u(x) = 1,0 < x < 1 (of part (a))
and the given function f(x) = $(a — x”), 0 < x < 1 in terms of the
eigenfunctions of the kernel, to verify by = Axa, in (5.43) (and (5.43a)).
(c) Verify that for the function f(x) = $(a — a”),0 < x < 1in(E.10), the
Hilbert-Schmidt theorem is satisfied.
Hint: Note that f(z) = $(x — x?) is continuous on (0,1), and that
the clearly symmetric kernel K (a, t) in (E.2) is square integrable on the
square {xe(0, 1), te(0, 1)}. (See the hint to part (a).)
does not have a solution unless the given function f(z) is restricted to a
linear combination of the functions a,(z),
= [ [email protected] (E.1)
20
(aay [ sin(a + t)u(t)dt (E£.1)
6. (a) Consider the integral equation of the first kind in K (a, x),
and use it to show that this result (E.2) illustrates the 211 — posedness of
the equation of the first kind (E. 1).
Hint: See that for large values of w, F'(w) and so is its change 6 F'(w) will
be small, however the solution K (a, xz) maybe piecewise continuous in
x with large jump discontinuities.
(b) Consider the Laplace transform, of the piecewise continuous and of expo-
nential order f(t), (f(t) = o(e%*)),
Show that lim,_,.. F'(s) = 0, and use this result to comment about the
well-posedness of the singular Fredholm equation of the first kind (E.3)
in f(t). Hint: See part (a).
7. Show that for the integral transform f(x) of u(t) (or the integral equation of
the first kind in u(t)), with continuous kernel K (z, t),
b
f(z) =i K (a, t)u(t)dt. (E.1)
8. Consider the Green’s function of the loaded string G(z, t) as in the hanging
chain problem (2.28). On physical grounds show that
l
[ GG,
tf (tdi = 0
0
In the preceding section we illustrated the many different exact and approximate
methods for solving integral equations using special examples that needed moderate
amounts of work. For more general cases we sometimes resorted to approximate
methods where one integral equation is approximated by another which can be
handled by the usual methods illustrated. When both approaches do not apply,
we may have to resort to the numerical method of approximating the integral by a
finite sum, and hence the integral equation is approximated by a set of simultaneous
equations whose number is determined by the number of values or samples of the
approximate solution u(xz;) on the desired interval.
In this section we will first remind of the most basic numerical integration formulas
such as the trapezoidal and Simpson’s rule that we have already discussed in Section
1.5 in (1.141) and (1.144), respectively. Then we will prepare for the numerical
approximation setting of Fredholm integral equations of the second kind, and where
both the trapezoidal rule and Simpson’s rule will be used for approximating the
integration term in the equation. Such an approximation setting becomes a (square)
set of n + 1 linear equations in the n + 1 (approximate) samples of the solution
u(x;),7 = 0,1,2,---,n. This preparation will be concluded by an example where
the approximate numerical values are compared with the exact solution of a simple
Fredholm integral equation (see Example 20 and Exercises 1,2 and 3.) In this section
we will concentrate on using only the very basic integration formulas such as the
trapezoidal rule and the Simpson’s rule. As we emphasized in Section 1.5, the
higher quadrature rules, and their use in approximating the integral, for the numerical
solutions of Fredholm integral equations, is relegated to Section 7.3 of Chapter 7.
The treatment there is supported with the necessary tables, and a good number of
very detailed examples and exercises.
We will also have a chance to make some comments concerning the numerical
solution of a particular class of singular Fredholm integral equations. These are the
ones characterized by their infinite limit (or limits) of integration.
286 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
After introducing the basic numerical integration rules in Section 1.5, we are now in
a position to discuss the numerical setting of Fredholm integral equations. We will
first consider the Fredholm equation of the second kind.
Consider the Fredholm integral equation of the second kind, as we used it in
(1.148) in Section 1.5.1,
Sr Sy ae AG (1.149)
g=0
As was indicated in Section 1.5, we usually use equal increment At instead of the
above more general A;t. Here j as the index in A;t may indicate a weight D;
assigned to the ordinates K (x, t;)u(t;) (of the integrand) by the particular numerical
integration rule that we discussed and illustrated for the trapezoidal rule (1.141) and
Simpson’s rule (1.144).
With the approximation to the integral in (1.149), we have the approximate result
to the Fredholm integral equation (1.148)
Now, it becomes clear that if we are to solve for approximate sample values u(z;)
of the solution u(x), we may require (5.117) to be an equality at the n + 1 locations
ri, 1 = 0,1,2,---,n of the (approximate) sample values u(zx;)(= u(t;)), i =
OL ese.
With such “forcing" of the approximation (5.117) to the equality (5.118), it should be
clear that the {u(x;)} in (5.117) are only approximations to the solution u(«) of the
integral equation (5.116) at {2;}, and they should really be designated differently. In
(5.118) we see that the (linear) Fredholm integral equation (5.116) is approximated
by a system of n + 1 linear equations in the (approximate) samples of its solution
uj = u(x;), 7 = 0,1,2,3,---,n. This should definitely remind us ofa matrix equa-
tion, whereby we can rely on our knowledge of solving systems of linear algebraic
equations with the help of matrix analysis, and more importantly our dependence on
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 287
its theory for the existence of such sought solution. Indeed, the strong relation be-
tween matrix theory and the theory of linear Fredholm integral equations goes a long
way to Fredholm’s original work on linear integral equations, as it became abundantly
clear in the first few sections of this Chapter, where such theory is developed. If we
use the notation u; = u(2z;), fi = f(zi), Ki; = K(ai,t;), where clearly U = [ui],
F = [2;] are column matrices while K = [K;;] is ann + 1 by n + 1 square matrix,
we can rewrite (5.118) as a matrix equation,
U =F + DKU. (5.119)
where D = [Dj6,;] is a diagonal matrix of order n + 1, and 6;; is the Kronecker
delta. So in matrix notation we are after the unknown column matrix U,
LUGS DU =
[I -DK]U =F (5.120)
where I = [6,;] is the unit (square) matrix of order n + 1. If the inverse [I — DK]~!
of the matrix [J — DK] on the left of (5.120) exists, we have
US DikVE (51121)
b
(a f(a) +f K (a, t)u(t)dt. (5.116)
} 1
(23a (a) +/ K (a, t)u(t)dt = f(x) + At 5K (2,to)u(to)
where the solutions of (5.123) are approximate solutions of (5.116) since there is an
error involved in replacing the integral in (5.116) by the n + 1 sum of the trapezoidal
rule. With this note, we shall from now on use the equal = sign instead of the
approximate & sign in (5.123).
If we consider n + 1 values of u; = u(z;) = u(t;), 7 = 0,1,2,3,---,n, then
(5.123) becomes
1 il
i= fat At 3 Kioto apes PC: Gonos ayeee ae = a Kintn ;
i=0,1,2,---,n (5.124)
which are n + 1 equations in u;, the approximate solution to u(x) atz = x; = a+iAt,
(el
UR Oc
If we transform all the terms involving the solution wu; to the left side of (5.124)
leaving only the nonhomogeneous part f; on the right side, then write all the n + 1
equations for u;,7 = 0,1, 2,---,n explicitly, we have the following n + 1 system of
equations in ug, U1,°-*, Un to be solved:
At
1— “7 Koo Wh) = AtKoiu4 —AtKo2u2 Se AN Gree ees)
JX}:
— = Ko,ntn — fo
At
— =a Fito AF (1 = Atky1)uy —AtKi2u2 pet SG hare Atom aye
At
Sta Ys = fi
Z
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 289
At
— a Kn-1,0U0 — Athy sar) =—Athp21,2u0 +>
+(1 — ISG KGcats, ey es — SEK i nUn = Jat
[l-DrkK]U =F (5.126)
where I = [6;;], the identity matrix, D7 = [D,;6,;], the diagonal matrix representing
the weights of the quadrature rule used, which is here the trapezoidal rule, as they
1 il
appear in (1.141) (with Do = ght, Dig TANte Dota a) Ao). K is
the matrix for the kernel K = [.;;], thus the matrix of the coefficients of the linear
system in (5.125) or (5.126) is
A= tel p=
At
l= o*Koo —AtKo1 ——y Kon
At
ee. 1—Atky, = MGs
7 Z
At At
— a n-1,0 me (LS Athy an) ——y Kn-10
At
ere AtKn 1— Kan
(5.127)
U is the matrix of the solutions,
uo
U1
= (5.128)
Un
fo
fi
= ; (5.129)
iA
290 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
So now we may summarize that in approximating the integral in the (linear) Fredholm
integral equation by the n + 1 terms of the trapezoidal rule, we have reduced the
integral equation to a set of n + 1 (linear) equations (5.125) in uo, U1, ---, Un, Or to
the matrix equation (5.126) to be solved for the unknown matrix U whose elements
Ug, U1,°**,Un are the n + 1 approximate samples of the solution to the integral
equation (5.116) (or (1.148)). As we mentioned earlier, an obvious result from the
theory of linear systems of equations regarding the solution of the matrix equation
(5.126) is that there is a unique solution U to (5.126) when |A| = |J — DrK/, the
determinant of the coefficients matrix J — DK, does not vanish, and that (5.126)
has infinite solutions or no solution when the determinant |J — DrK| vanishes. To
this end, then, it is a matter of how efficient we are in solving matrix equations and
how prepared in choosing a more suitable method of numerical integration instead
of the trapezoidal rule. Since this book assumes preparation only in elementary
calculus and differential equations, we will not attempt td seek efficiency in our
present illustrations, as our main purpose here is to introduce the subject in the
clearest way possible. It is left to the readers to choose their own method of solving
the resulting system of linear equations (5.125). This, however, does not prevent
us from noting some special features, such as the symmetry of the kernel, which
will simplify the computations. For our illustrations, and for the purpose of a more
self-contained treatment, we felt it helpful to have a brief presentation in Section
1.5.4 of Cramer’s rule for solving system of linear equations. Of course, one may
consult other efficient methods, for example, the Gauss elimination method. The
illustration of the numerical approximation of the Fredholm integral equation (5.116)
when Simpson’s rule (1.144) is used for approximating its integral, (and where,
m is an even number) is left for an exercise (see Exercise 5.). In Section 7.3, of
the (optional) Chapter 7, we will use higher quadrature rules for approximating the
integral in (5.116), the trapezoidal rule and the Simpson’s rule, used here, are only
two special cases of such rules.
Example 20
Use the trapezoidal rule with n = 2 to set up the approximate numerical repre-
sentation of a 3 x 3 system of linear equations in (the approximate values) u(z;),
2 = 0, 1, 2 of the following Fredholm integral equation,
l/l 1
Ui eels: (5Kou + Kua + 5Kaus) Bae
Thaiaw (E.2)
or in matrix form.
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 291
1
1— gio —5Ko1 — 7 kor uo sin 0
f
ake
4 10 il
aS 91 fu
1
sone
gine U4
= a
sin 5)
1
(
E.3 )
1 1 1
— 720 —5 Kai 1- qh2 U2 sin 1
then used the trapezoidal rule for approximately the integral that resulted in a set of
n + 1 nonhomogeneous algebraic equations in n + 1 unknowns {u;}7_9, and which
we wrote in the following matrix form (5.126) as follows from (5.125)
[I —DrK]U =F (5.126)
where I, Dr, K and F are clearly defined after (5.126) as in (5.127)-(5.129).
In this section, we consider the numerical method of solving a homogeneous
Fredholm equation
which can be developed in the same way as we did for the nonhomogeneous Fredholm
integral equation (5.116). We will again use the trapezoidal rule, with n subintervals
to approximate the integral above, and reduce (5.130) to n + 1 linear homogeneous
equations, in the n + 1 (approximate) unknowns u;, 7 = 0,1,---,n.
1 1
uj = AAt 9 Kioto Git cee e yh ed a Kintn ;
0p 2 een: (5.131)
Here it looks that such numerical approximation setting, as a system of homogeneous
linear equations for the homogeneous Fredholm integral equation (5.130), should
follow as a special case of (5.116) with f(x) = 0. However, any discussion of the
results will need what might be new concepts of the eigenvalues and eigenfunctions,
which we have already discussed in detail in Section 5.1.2, (and earlier at the end of
Section 4.1.3). So, attention should be made to the parameter A of the homogeneous
integral equation (5.130) and its numerical approximation (5.131). In summary, the
values of this A in (5.130) (or (5.131)) that results in nontrivial solutions for these
equations are called the eigenvalues, while the corresponding (nontrivial) solutions
are called the eigenfunctions.
If we bring all the terms to the left side of (5.131) and write the n + 1 homogeneous
equations for 7= 0,1, 2,---,n, we have
AAt
(1rs S Koo)uo AAtKoi uy = AAtKo2ue2 Bs he Oa oe = (0
At
—AZ iota ae (1 = AAtK11)uy — AAtKyqug -— +--+: -— Siig =
~~
At Knotlo = AAtK yi uy or AAtK yn2uU2 SP OOO (1= <i Kn Un = 0.
(5.132)
There is one simplification that can be attained by letting \ = 1/p and hence p will
appear only in one term of each equation instead of appearing in every term; that is,
(5.132) reduces to
At At
(u= 5 Keo uo — AtKoiu1 — AtKo2u2 —--- — “9 Ko,ntin =0
t At
— 3 Fioto ta (uu = Atky,)uy — AtKy2u2 —-+-— I lin = (0
2 (5.183)
—
At7 Knouo = AtKni U4 ap O28 o SE (ua
At
= Kun Un =
.
0)
which is in the same form now as (5.125) except for that f; = 0,i = 0,1, 2,---,n,
and the | in parentheses on the diagonal (of (5.125) is replaced by ju. So if we write
5.5 NUMERICAL SOLUTION OF FREDHOLM INTEGRAL EQUATIONS 293
where
0
O=]| :
0
is the zero matrix, U is the same matrix as in (5.128).
uo
Ui
= (5.134)
Un
and Ky signifies the coefficient matrix for the homogeneous equation (5.133):
At At
pb “7 Koo =Ation © Se 5 Kon
KG
At At
— 5 Kn-1,0 pS ic) aoe ee OTIS yet 1 =~ Ka-1n
At At
=a Kno —AtKn aes (UE “9 Ann
(5.135)
We must recall here that a nontrivial solution to this system of n + 1 linear homoge-
neous equations exists if and only if the determinant | | of the coefficients matrix
Ky in (5.135) vanishes. This condition is used to find the (approximate) eigenvalues
A of (5.130) through finding = 1/2 as the zeros of |Ky| = 0.
We may recall that while | A| = |[— DrK| # 0 guarantees a unique (approximate)
solution for (the nonhomogeneous equation) (5.127), the foregoing condition |K y| =
0 guarantees a nontrivial but not a unique solution to the homogeneous system (5.135),
which means that we may have to determine the values uo, U1,-*-, Un in terms of
one of them as an arbitrary value. Such an arbitrary constant can be evaluated in
practice when we normalize the approximated solution. This will become clear in
the following illustration.
2(1-t), O<a<t
euiey Gao eee ee
where we found that the normalized eigenfunctions are
pup +0+0=0
1
0+ (u-5)m+0=0 (E.4)
OF 0 tts = 0:
For this system of homogeneous equations to have a nontrivial solution, the determi-
nant of the coefficients must vanish,
Lb 0 0
0 i 1 0 |=2(n-=)
3 ee LU 8 =0
9)
0 0 i,
1
ju— 05 DS re (E£.5)
If we consider pw = 1/8, this will give X = 1/p = 8 and if we substitute this value of
. in (E.4), we obtain up = uz = 0 and wu; = wu; as an arbitrary constant. Hence we
have the two zero values at x = 0, | but an arbitrary value wu; at x = 1/2. What we
did here is, of course, a very rough approximation to the integral in (E.1), where we
used only three points, but it can be improved by considering more points. It remains
to find the arbitrary value u;. For this we may approximate the solution function by
two straight lines connecting the three points (0,0), (1/2, u;), and (1,0) as
A
2 (E.6)
1
EXERCISES 5.5 295
to find u; and then compare u(x) with an orthonormal solution from (E.3). If we
substitute u(az) from (E.6) in (E.7), we obtain
il
Poa 1 1 1 2
su; | a dx +4up | (a — 1)*daz = gui a5 gui ns =1, u=Vv3~1.73.
2
So the approximate numerical values are
1 ,
As we have indicated at the beginning of this section, we have included here only the
most basic numerical integration rules to approximate the integral of the Fredholm
integral equations. The higher order quadrature rules of approximating the integral,
their tables, and the numerical setting of the Fredholm integral equations using such
rules, are covered in Section 7.3. There we support the use of such different rules
with a good number of detailed examples and exercises.
Exercises 5.5
1. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the Fredholm equation of Example 20
at
12
i) r=0,5,551
a ee 3
oe ee, ee
(ii) x 0,59 Teae |
2. (a) In problem 1(a)(i) use Simpson’s rule instead of the trapezoidal rule.
(b) Compare the approximate results of part (a) with the exact answer (ii a
ke
3. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the equation of Example 16 in Section 5.3
WN Ser 4 ze‘u(t)dt
at
1
i x ==
(i) 9 ,
4. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values of the solution of the Fredholm equation
1
Feat -{ esi (B.1)
ate — EOL:
(b) Attempt to verify such a crude approximate solution.
Hint: Try to integrate numerically with the three approximate values
of u(x) and see how the two sides of (E.1) compare for each value of
x = —1,0,and 1.
(c) Repeat parts (a) and (b) for the approximate values of the solution at z =
—1, —9/10, —8/10, ---,0,1/10,2/10,---,1, then graph and compare
with the results in part (a). See the hint for Exercise 1(a,ii).
5. (a) Use Simpson’s rule of integration (1.144) instead of the trapezoidal rule
to reduce the Fredholm integral equation (5.116) to a system of 2n + 1
linear equations similar to that of (5.124).
Hint: Note that n must be even in (1.144) of the Simpson’s rule.
(b) Use the result in part (a) to solve for the equation of Exercise 1(a).
1
u(x) = sina + / (1 — xcos zt)u(t)dt (E£.1)
0
at x= 0, 5, and 1.
EXERCISES 5.5 297
(c) Compare the results in part (b) with those of Exercise 1(a) and the approx-
imate solution of Example 6,
6. (a) Use a numerical method (trapezoidal rule) to solve for the approximate
values at x = 0, 1/2, and 1 of the homogeneous Fredholm equation
_ f t(l-2)(Q2-t?-27), 0<t<z
a= at ee a<t<1 (22)
This problem represents the deflection u(x) of a rotating shaft (1.19)
with unit length and constant density, where \ combines most of the shaft
physical properties.
Hint: Note that the kernel is symmetric.
(b) Repeat part (a) for approximate eigenvalues and the solution values at
xz = 0, 1/4, 1/2, 3/4, and 1. See the hint for Exercise 6(c).
7. (a) Use a numerical method (trapezoidal rule) to solve for the approximate val-
ues at x = 0, 4, 2, 1 of the homogeneous Fredholm equation of Example
Dae
Ue) = | K (a, t)u(t)dt (£.1)
1
HES HhK (a,t)u(t)dt (B.1)
0
i= {: oes (E.2)
atz = 0,4,5, 4, and 1.
298 Chapter 5 FREDHOLM INTEGRAL EQUATIONS
9. For the three samples uj, w2, and u3 of problem 3a(i), use the Lagrang
e inter-
polation formula (1.153) and (1.154) to interpolate the approximate
solution,
then compare with the exact answer of TA a) ae
2
A es 2° 0 < x < 1 and the
answeroff probproblem ee
lem 3a(i) at x ¢—= 0 Oy
3a(ii)at —,1
10’ 10
Existence of the Solutions:
Basic Fixed Point Theorems
With the main emphasis of this edition on a simple introductory and applicable course
in integral equations, this chapter must definitely be considered as an optional one.
Indeed we could have relegated it to an appendix, but since its simple and descriptive
presentation! relates to basic topics in Chapters 3 and 5, we opted to retain it in this
edition. Of course the introductory course depends, primarily, on good parts of the
first five chapters as we described it in our “suggestions for course adoption" at the
end of the preface. For a more advanced applied course, parts of this chapter may
prove helpful to the reader with a desire to look into more basic theory, besides the
methods of solutions in Chapters 3 and 5.
Our treatment in Chapters 3 and 5 for the Volterra and Fredholm integral equations
centered mainly on illustrations of the known methods of finding exact, approximate,
or numerical solutions. In so doing we either had to assume the existence of a unique
solution or stated some conditions to secure it.
In this chapter we present and prove a few basic theorems that are necessary
for establishing the existence and uniqueness of the solutions of integral equations.
We start with a descriptive presentation to motivate the basic mathematical concepts
'For more information on the existence of solutions to linear as well as nonlinear integral equations, see
Kress [1989], Hochstadt [1973], Pogorzelski [1966] (greater depth), Cochran [1972], and Collatz [1966]
(numerical methods).
299
300 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
needed for an accurate and clear statement of the principal theorem: the fixed point
theorem of Banach. It is our intention first to give a clear presentation of several
applications of the fixed point theorem, which have been selected with the goal of
keeping this chapter at the same level as, and in harmony with, the remainder of the
text.
The very basic iterative method that we employed in Chapters 3 and 5,
Even when we accept such practical constructive proofs, we still should inquire about
their applicability to other, more general problems that cannot be solved in closed
forms. In particular, all our treatment in this text has been directed toward solving
only linear integral equations as in (6.2), with no method or illustration given of how
to proceed when we have nonlinear integral equations. The reason for this is that
while the existence of a unique solution may be assumed or established by direct
computations for the linear problem (6.2), it is a very different matter to tackle that
of the much more complicated nonlinear integral equation
em IED) (6.5)
This means that the solution u which we seek for the integral equation (6.3) represents
a very special element in the domain of the operator 7’, namely, that which remains
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 301
This, as we shall see, is but one of a variety of measures of distance (or metric) that
we may choose to adopt in order to facilitate the proofs of the desired theorems.
For the n-dimensional Euclidean space R” = {x = (21, %2,::-,2n); 2; € R},
the distance above is easily generalized to
This type of distance d, of (6.8) proves very useful when modified to give a measure
of the difference between continuous functions. For f(x) and g(x) as two elements
of the set C[a, b] of continuous functions on the closed interval [a, 6], we define the
distance between them as
We shall soon present the formal definition for the distance or metric d(x,y)
between two elements 2, y of a given set X, but first we would like to motivate the
type of convergence that is more suitable for describing the clustering or closeness of
the members of the sequence up. In our construction of the solution via the iterative
process we were after the sequence un approaching the limit u as n approaches
infinity, which is the usual type of convergence
Fig. 6.1 The distance d(f, g) of (6.9) between two continuous functions.
that one encounters in the basic calculus course. However, in practice we very often
do not have a way of knowing the limit point u, but instead we know merely that
as n increases, Un+1 gets closer to Up (i.e., the sequence is clustering). It is even a
better sign when not only the consecutive members un+1, Un but the members of the
SEQUENCE Un+p, Un become close, that is, when their distance |un+p — Un| becomes
very small as n, the number of iterations, increases, that is,
This would be a very good sign for the convergence of the sequence, but without
specifying the particular limit point. We should note that in (6.1la) we may use m
instead of n + p, and write
whether Cauchy convergence (6.11) would ever imply the convergence (6.10), which
spells out the limit point. To answer this question in the affirmative will depend on
the particular space that contains the sequence and on the type of metric we use to
measure the distance between the elements of this space. A space with its assigned
metric (distance) is called a metric space. We will soon show that in a metric space,
convergence (6.10) to a limit u always implies Cauchy convergence (6.11), but the
converse, which is what we are after, is not always true.
A metric space in which Cauchy convergence implies convergence to a limit is
a very special one termed complete metric space. This is the metric space we shall
work with and in which we state and prove the fixed point theorem.
Before we begin the formal definitions necessary for the accurate statements
of the fixed point theorems, there is still an extremely desirable property of the
transformation or mapping T'(w) of (6.5). This property can be described as a kind
offocusing effect ofT as it maps the input estimate uw, to its output un+1 as in (6.6).
By this we mean that the distance between the images u’ = T’(u) and v' = T(v)
would be closer than the distance between their objects wu and v in the domain of T’,
which can be expressed as
u'=T(u)
an
tee ny)
yer » v‘'=T(y)
u
<. ip
fh.
in proving the existence of solutions for various types of equations that can be
described by the mapping
i L(t) (6.5)
For example, instead of T’ being the integral operator in the integral equations
above, it can represent a differential operator in the case of a differential equation
like
Up to this point in the discussion we have not mentioned that the mapping T
is linear-hence the role of the fixed point theorem in assuring the existence and
uniqueness of solutions to a class of “usually” intractable nonlinear integral and
differential equations. We refer here to a certain class, as it remains for us to show
that the particular equation has a contraction operator T’.
Even though the fixed point theorem can be applied to integral equations as well as
differential equations, the successive approximations (iterative) process (6.1) favors
the integral equation representation of the problem, since in practice we watch the
approach of the sequence un+1 of (6.6) toward the desired solution of the integral
equation (6.3). This means that in order to apply the fixed point theorem to differential
equations, we may first change the differential equation to an integral equation to
make it suitable for the iterative process (6.6). We will illustrate this application for
initial value problems associated with differential equations after reducing them to
Volterra integral equations.
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 305
With the foregoing intuitive and very descriptive introduction of the basic concepts
necessary for stating the fixed point theorem, we turn now to the formal definitions
of these concepts. It is our intention to keep the treatment brief, but clear and mostly
self-contained.
Metric Space
A metric space, designated as (M, d), is aset M with a mapping (d: M x M >
R) that associates a real number (distance) d(x,y) = reR to every ordered pair
(x, y) in the domain of d and such that this distance (or metric) d(x, y) satisfies the
following three conditions:
The triangle inequality (6.17) will be used very often in proofs of the basic
theorems. We note that the present mapping which defines the distance d(z, y) is to
be distinguished from T (uw) in (6.5).
A familiar example of a metric space is (R, d), the set of real numbers R with the
distance (metric) d(x, y) = |x — y|, which can easily be shown to satisfy the three
properties of a metric listed above. The set Cla, b] of continuous functions on the
closed interval [a, b], together with the metric
if for each € > 0 there is a number no = no(e) such that forn > no(e) the element
Un is within the distance € from u [i.e., d(u, un) < e€]. In this case we say that the
sequence up converges to u.
As we mentioned earlier, especially for the iterative process (6.4) or (6.6), it is
sometimes the case that the elements u,, of the sequence get very close to each other
but no limit u is known [i.e., d(un, Um) — 0 as n,m -+ oo]. This brings us to the
Cauchy-type convergence. The sequence {un }°2, in M is called Cauchy if for each
€ > 0 there is no = no(e) such that for n,m > no(e) we have d(un, Um) < €.
We will prove here that in a metric space every convergent sequence (6.20) is
Cauchy convergent. From the definition of the sequence {u,,} being convergent we
have
Contractive Mapping
The mapping T in (6.28) is called contractive if there is a nonnegative real number
a less than 1,0 < @ < 1, such that for each wu, u2 € M' we have
In other words, a contractive mapping brings the images T'(u;) and T'(uz) closer in
the range of the operator T than their corresponding objects u2 and u, in the domain,
as illustrated in Figure 6.2. In terms of our iterative process
we have, for example, uz as the image of u; and uz as the image of u2, so with a
contractive mapping d(T(u2), T(ui)) < ad(ug, ui), but T(u2) = uz, T (ui) = ue,
hence
d(u4, U3) = d(T (u3), T'(u2)) < ad(uz3, U2) < a? d(uz, U1) (6.30)
a(tnay, tn) < ad(un, Wet) < a’ d(Un—1, Un—2) Fas < a”—"d(ue, U1) (6.31)
which says that the sequence is clustering since the outputs u,+4 1 and up, are closer
than the inputs wz and u; by a geometric factor of a”—!,0<a< 1.
In the following example we illustrate conditions for the linear Fredholm integral
equations of Chapter 5 to represent a contractive mapping.
Consider the Fredholm integral equation of the second kind (5.7a),
b
u(x) = g(x) + af K (a, t)u(t)dt = T(u). (6.32), (5.7a)
We assume that g(x) is continuous on the interval [a,b] and K(z, t) is continuous
on the square D = {(z,t) : € [a,b], t € [a,b]}, as indicated in Figure 6.4. For
such functions we shall work with the complete metric space C[{a, b] of continuous
functions and its metric d(z, y) as in (6.9).
To find a sufficient condition for the mapping T(u) of (6.32) to be contractive,
we first indicate that the kernel K (x, t) here is bounded [i.e., |K (a, t)| < M] since
it is continuous on the bounded domain of the square in Figure 6.4. To show the
6.1 PRELIMINARIES: TOWARD A CONTRACTIVE MAPPING 309
contraction property of 7’, we use the metric of (6.9) on the images T(3(z)) and
T(7(z)) of the two continuous functions (x), y(z) in C[a, ],
d(T(B(z)),T(y(z))) = aah
ote) +a foKG,08(t)dt— [g(x
nf K (a, t)y(t)dt]
= max [afe,
0180 ~ roe
< max ikIAK (2, t)(8(¢)—v()]lat
<INM max, [late -role
< |AIM max |8(2)~(2) a dt
< |NM(b— a)d(8(z),-7(z)) = ad(B(2),1(2))
(6.33)
, t)|. Hence with
after using the upper bound M for |K(z
Next we illustrate that to ensure a contractive mapping J(u) for the linear Volterra
integral equation (3.1),
= T(u) (6.36)
we need much less restrictive conditions than those for the Fredholm equation (6.32).
In Section 3.1.2 we considered the successive approximation method (3.25) of
solving (3.1),
To assure the convergence of this approximation, we stated the result (without proof)
that “if f(x) is continuous on [0, a] and K(z,t) is also continuous for 0 < x < a,
0 <t < a, then the sequence u,,(x) converges to the solution u(z) of (3.1)." In terms
of our present development, where we are working in Ca, 6], the space of continuous
functions, we should be able to reach a conclusion of convergence without any extra
conditions.
This indeed is possible but needs a number of preliminary results. The most
important of these results is to show that for large enough n, the nth-order mapping
T”(u) of the Volterra equation is a contractive one. We will limit our efforts in the
following example to showing this result, which we feel captures the main idea of
the contraction for T'(w), and we leave it to Example 3 at the end of Section 6.2.1,
after we already have the fixed point theorem, to show that if T”(w) is contractive,
then T'(u) = u has a unique solution.
T”(u) is easily illustrated when applied in (6.37),
zx
|T” (ui) — T"(r1)| = |[T(un) — T(vn)| S arf |An(z, €)||ui (E) — v1 (€)|dé
(E.5)
where, of course, T’”(v;) is obtained as in (E.3) of T”(u1)..
Before we use the metric (6.9) on (E.5) in (E.9), we should prepare for an upper
bound of the iterated kernel K,,(z,£) on the square indicated. Since K(x,€) =
K,(a, €) is assumed continuous on this bounded square domain, we can conclude
312 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
IK(t.8)lat
<M Ltmax at]< M?(a
- €) iy
(E.7)
Fegtne= |
i; K (1,1) Ka(t,Sa <M |"| Ka(t, €)|dt
<M
<M fVe 9a
—C\dt s aM” [ @-9a0
= f\d. (E.8)
E.8
oy CIS
2!
after using the result of (E.7) for the bound on |Ko2(t, €)| in (E.8). With this result
(E.6) and the result (E.5), we write
“rant
=a d(w,n) < parm eaa”
<a" dl),
d(T”(ui),T"(v1)) < ad(uy, v1)
where Nake
AS hare
n\
Oe (E.10)
6.2 FIXED POINT THEOREM OF BANACH 313
Hence T”(u) is contractive if a < 1, which, with the help of the n! in the denominator,
is the case when n is sufficiently large (i.e., if we wait for more iterations). Of course,
if we have our problem on the unit square, 0 < x < 1, then |z — a| = |z| < 1 inthe
factor |x — a|” of (E.10) will help even more in speeding a of (E.10) toward being
less than 1.
If we consult Example | of Chapter 3,
Mzr-1 grt
CMG 9B) oe
ee ee,
PORES aa rie akeremeni ee)
In this case we use (E.13) in the second line above that of (E.9) to obtain
gn
C— |AIresF (£.15)
instead of .
ne
a=) bAlre a (£.16)
With the definitions of metric space, fixed point of the mapping, and contractive
mapping, we are now in a position to state and prove a very basic fixed point
theorem, the Banach (or Banach-Cacciopoli) theorem.
u=T(u) (6.39)
314 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
(b) The existence of the fixed point, where we show first that the sequence of the
successive approximations
(a) To prove the uniqueness of the fixed point, suppose that there are two distinct
fixed points u and v, u # v [i.e., u = T(u) and v = T(v), u F vIJ. Since u F v, the
distance between them is not zero: d(u, v) # 0. Because u and v are fixed points of
T, we also have i
(1 — a)d(u,v) <0
where since d(u, uv) > 0 by assumption, then 1 — a < 0, a > 1, which contradicts
the assumption of contractive mapping whose a is strictly less than 1. Hence the
distance d(u, v) must be identically zero, which is equivalent to u being equal to v,
and which proves the uniqueness of the fixed point when it exists.
(b) To prove the existence of a limit point as a fixed point for u = T(u), we will
first prove that the sequence u,, of the iterative process
and if we invoke on the right side the previous result for d(u2,u3), we have
If we continue this to wu, and un+1, we have, as we did for (6.30) and (6.31),
where clearly the higher order consecutive iterates un,Un4i(n >> 1) are much
closer together than the first ones, u; and uz, due to the geometric factor a”~!;
0 <a <1. Still we have to show the Cauchy convergence, which will entail the use
of the important result (6.45) and the triangle inequality of the metric d(un, Un+p).
Observe that
d(un, Untp) << d(un, Un+1) ae d(un+1 ) Unt2)+
Uric Lay)
and from the proof of the existence of the limit point above we can say that
or
d(w,T (tn)) = du, nti) + 9, d(u, Un) 0
as n — oo. With these results we will use the triangle inequality to have
316 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
an}
n= t,t.) = 5 du» U2). (6.49a)
This is obtained easily from the last line in (6.47); we take the limit as p > oo,
where limpy_,.. a? = 0 for 0 < a < 1 on the right side and limp_,o Unip = U
(sincen + p = Mm > &, limm-.oo Um = U) On the left side, to give
qn}
limnd (ntpp hd a eS = a (ut ua) (6.496)
poo
Assume that f(x) is continuous on (0, 1]. A(z, t) = xe! is obviously continuous
on the square x € [0,1], t € [0, 1] (see Figure 6.4); hence it is bounded there and we
can easily see that a bound / is e, that is,
a = A\M(b—a)
= Ae(1—-0) =Ae< 1
This |A| < 0.97, when compared with ours of \ < 0.37, makes the contraction
approach to convergence appear more conservative. The reason lies in the nature of
a special complete metric space of square integrable functions on (a, b) in which the
condition |A| < 1/B was obtained. These square integrable functions f(z) on (a, b)
were discussed in Section 4.1.3 in relation to their Fourier series representation in
b
(4.47) and (4.48). They are defined such that / |f(a)|?dz < oo. For the space
a
b
of these functions we define the metric d(f(x),g(x)) = / |f(x) — g(x) |?dz,
whence they constitute a complete metric space. (See (4.52) and some of the refer-
ences given in the first page of this Chapter.)
Example 3 Existence of the Unique Solution for Linear Volterra Integral Equation
In Example | we showed that the nth-order mapping T”(u) for the Volterra
equation
= 1 (as) (£.1)
This means that with the first estimate u;, we have the sequence uz+1 = S(ug) =
S*(u1) converging to u, that is,
Recall from Example | that T”(u) was proved contractive for large enough n, so
with the unique solution u for 7 (u) = u, it should be clear that for the even larger
kn, T*"(u) = u has the same solution u of T”(u) = u, n large.
In (E.3) we have the first estimate u;, being arbitrary, so we may choose it to be
Uy = da Qui)s
6.2 FIXED POINT THEOREM OF BANACH 319
Pe tee
(1 y)\ ty) eee = (yy ay,
T"(y) =7- (E.5)
The same can be shown for (,
T"(8) = B. (E.6)
But since J” is known to be contractive, it must have a unique solution which forces
y = B. Hence T(u) = u has a unique solution.
The following section represents our only (brief) discussion on analysis of nonlin-
ear integral equations. It deals first with applying the fixed point theorem to nonlinear
integral equations. This is followed by a simple initial value problem associated with
a first-order nonlinear differential equation to illustrate the importance of the integral
representation of differential equations.
In the preceding section we limited our illustrations of the Banach fixed point theorem
to the existence of unique solutions of the linear Fredholm and Volterra integral
equations. In this section we apply the fixed point theorem to nonlinear Fredholm
and Volterra equations. This is followed in Section 6.2.3 by an initial value problem
associated with first-order nonlinear differential equation. The latter problem is
added to indicate the importance of having to change to the integral representation
in order to enjoy the method of successive approximations, and where proving the
contraction property is greatly facilitated when working with an integral operator, as
we illustrated for linear integral equations in Section 6.2.1.
where we assume f(z) continuous on [a, 6] and that F(a, t,u(t)) is continuous,
hence bounded on the square of Figure 6.4, |F'(z, t, u(t))| < M for bounded u(t) :
c < u(t) < d. Consider also the successive approximation of (6.51),
320 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
b
ipa (2) =f eee »/ F(a, t, Un(t))dt (6.52)
and the metric (6.9) with the metric space of continuous functions Ca, 6].
To show whether the mapping in (6.51) is a contractive one, we must first look at
the distance between the images T'(3(ax)) and T(y(x)) of the inputs G(a) and (x)
in C[a, b],
|F(x,
t,B(t)) — F(a, t,y(t))| < LIB(t) — v(t)| (6.54)
for (x,t, B(t)) and (a, t, y(t)) in the domain of F’.
If we impose this Lipschitz condition on F inside the integral of (6.53), we have
|A| < ei
L(b—a)
(6.56)
where L is the Lipschitz constant of F(x, t, u(t)) as in (6.54).
We note that if F is linear in u(t), as in our illustrations in Section 6.2.1, then F
is always Lipschitz, since for F(z, t, u(t)) = K (a, t)u(t) we have
|F'(z,t, 8) — F(a,
t,y(t))| = |K(a,t)B(t) — K(a
6.2 FIXED POINT THEOREM OF BANACH 321
where M, the upper bound of |A(z, t)|, can stand for L, the Lipschitz constant.
We also note that from the start we assumed that F(z, t, u(t)) is continuous in
all three variables, but clearly the continuity of F in u(t) does not imply that it is
Lipschitz in u(t). A simple counterexample is that of F(2,t,u(t)) = xt,/u(t),
which is continuous but not Lipschitz in u(t). However, if F(a,t,u(t)) has a
continuous partial derivative 0F'/Ou in the domain D of F, then F is Lipschitz in u,
as we will show next, and
OF
= max,
i (6.58)
Ou
Since F(z, t, u(t)) is assumed to have continuous partial derivative OF'/Ou in D,
we can use the mean value theorem, which states that for any u;(t) and u2(t) in D
there is an 7(t) between them, u1(t) < 7(t) < u2(t), such that
P(a,0,
Ui (t) — Fast, Ualt)) OF
ce
Ces By (ob nt). (6.59)
From this result we have
|F(a,
t, u(t))| <k. (6.61)
With this condition on F' in (6.51) we have
b
lu(x) — f(x)| = af [email protected] S if |F(x,t, u(t))|dt < |A|k(b — a).
(6.62)
This means that if our input estimates w,,(t) in (6.52) are bounded [i.e., c < un(t) <
dj, the outputs un+1(t) can also be bounded within the same range by limiting the
value of \ in (6.62) and taking into consideration the bounds on f, m1 < f < mz.
This amounts to choosing A as
322 Chapter 6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
: (Mi) SE d— ms 1
Al < min (RS = ) (6.64)
when m, < f(x) < m2,c < T(un) < d, and L is the Lipschitz constant ofF’ as in
(6.54), (6.58), or (6.60).
In regards to the iterative process (6.52) and its mapping T’, there can be different
variations on it that may result in a better contraction property for its associated
modified mapping T,, (see Jerri [1991] Jerri et. al. [1987], Jerri and Herman
[1996]).
As for the nonlinear Fredholm equation (6.51), we will assume that f(a) is continuous
on [a, b] and bounded: m, < f(r) < m2; F(z,t, u(t)) is continuous with respect
to the three variables z, t, and u(t) on the domain D: a< x<ba<t<z, u(t)
unbounded: c < u(t) < d; and F(z,t,u(t)) is Lipschitz with respect to u(t). To
ensure that the outputs un+1(2),
x
are always bounded within the range c < u,(t) < d of the inputs, we follow the
same steps for the Fredholm equation in getting the condition (6.63) on A to come up
with similar condition on of (6.66),
; m,—-ce d—mz
|A| < min (a. was) (6.67)
where k is the upper bound of F' (i.e., |F| < k). As we have shown for the
linear Volterra integral equations, the proof of a contractive mapping for the Volterra
equations does not require an extra condition on as long as we take large enough
n for un(x). This is, of course, a welcome nicety of the Volterra equations which
stems from the nature of its origin as an initial value problem.
For the linear Volterra equations, we showed in Example | that T”(w) is contrac-
tive, then concluded from that in Example 3 that T(u) = u of the Volterra equation
has a unique solution. This was accomplished with the aid of the iterated kernels,
which are clearly exclusive for the linear case only. Here we will follow a slightly
6.2 FIXED POINT THEOREM OF BANACH 323
different procedure to get to the contraction of T(w) in (6.65) and the convergence of
(t) of (6.66) to the unique solution u(t).
the sequence up,
Since F(z, t, u(t)) in (6.65) and (6.66) is assumed Lipschitz, we can follow what
we did in (6.54) and (6.55), for the nonlinear Fredholm equation, and write
[o-@)
after using the bounds set on u(t) : c < u(t) < d. In the same way we show that
324 Chapter6 EXISTENCE OF THE SOLUTIONS: BASIC FIXED POINT THEOREMS
lus(x) — ug(z)| < ual folug(t) — ua(t)|dt < Ld f°zalle~ alle= alt
a
| 2
< |LA)?|c - par
; 7 It —al?
Css) et) < La f |ua(t) — us(t)|dt < ILAP|e~ al [ | 5 tat
a 3
a
(6.73)
where a simple mathematical induction establishes (6.69).
We will assume that f(x,y) is continuous, and in anticipation of the use of the
fixed point theorem for the integral representation of this problem we assume that
f(x, y) is also Lipschitz in y(z),
7. oe ra a
Smal 8 Ss ee ah osoa creer
s
ny set)—
-
w eooakt,
ir i Yeh « wei iS wb me Grate elton idle ine
ant of
6| Cs ayy sp ee O. eau m Grea iertanseesee yale os Ye —_
ee
(2.
ee 2 ee
- Pa,
| BE yar et i _ ~ @ dl =
hie) ace ea out igtes val a v4
- » Panay De aw eo
| mmgis th te
_
\ SSba
2 et lrg hae Grapes ; “ Si : ~ >
7 . i omy 7
— i
a
7 -
eo
:
: =
7 ss i 7
a Pe*o ally hae
_ :
a
- vif > —«
7
7
Cah °
tt &
,
.
osomG
ee
r--
tee
BSaei — >
_"
ee
GY @ -hammes
|
7
. -
,
7s
; —. a
=.
pe
~~
e& =a >
-
cat all
= =
: raetidiveto
= * @i)| @hiune ow : eu
ty Hal Sin te ves és(a faa bien =eaa
Sts. 9% iorLope at gel - :
/ : — i > =~
Seas % ye he ale -
0 @ptapes iis 1% 34am isa =i pu
Ov ef weal ar
v se is S eo Wi
Higher Quadrature Rules
for the Numerical Solutions
As we emphasized at the end of Section 1.5, besides the very basic numerical
integrations formulas of the trapezoidal rule (1.141) and Simpson’s rule (1.144),
there are many other numerical integration formulas (or numerical quadrature rules).
They are, of course, used for more accurate way of approximating the integral as
compared to their special cases (lower order quadrature rules) of the trapezoidal and
Simpson’s rules. These two rules correspond to using, respectively, first and second
degree polynomials, while the higher quadrature rules, to be discussed here, use high
degree polynomials.
To give a brief discussion of the quadrature rules we return to our first numerical
approximation of integrals using weights (or quadratures) D; in (1.140) with a minor
modification
N
[sede = Dulles) te= Sy + (7.1)
where € is the error of the approximation, and the summation here is over 7 = 1 to
Ly;
N
ox (e) (7.2)
a
327
328 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
Here N denotes the number of (in general) not necessarily equidistant sample loca-
tions {a;}/_,. With this note, we should mention that there are numerical methods
that use the end points of the interval z = a and b as x, and xy, respectively, which
are termed closed rules. Those rules which avoid the end points and use xz} = a+ €
and xy = b— eg as first and last sample locations in the interior of the interval (a, b)
are called open rules. The latter rules are useful in case there are singularities at the
end points. Also since b — a is constant, we may write the weight D; = (b — a)wi,
where we list the values of the weights w; in Tables 7.1 to 7.6 (in this Section) for
the representative quadrature rules of interest in this presentation.
In (7.2) we observe that the approximation sum Sj has two variables; namely,
the locations {x;} and the weights D;. As we mentioned earlier, there are basically
two groups of numerical integration methods, where the main difference depends on
their use of these two variables. For lack of space, and instead of just presenting the
formulas without derivation, we shall be satisfied with sketching the outline of the
essential steps of such derivation. The details are left to the exercises with ample
guiding hints, and can be found in the already cited references in Section 1.5. These
two groups of methods start by expressing the function to be integrated f(a) in terms
of P known functions (basis) h;(x), 7 = 1,2,---,P,
a
f(z) = YE,ajh;(2). (7.3)
This is to be substituted in the integral of (7.1), and the criterion here is to have the error
€ [as in (7.1)] of the approximation vanish for all coefficients a;, 2 = 1,2,---,P.
After evaluating the P integrations involved of (Rehja@)dz, t= 1.2. es this
amounts to equating coefficients for each a; in the resulting (7.1) with the condition
that the error € = 0. The result is a system of P linear equations in the 2N unknowns
of the locations {a;}§, and the weight {D;}‘_,. The two different groups of
numerical quadrature rules differ basically in their dealing with such 2N variables.
We shall discuss the (closed) Newton-Cotes rule, and the (open) Gauss (or Gauss-
Legendre) rule as representatives of the two groups.
'Newton-Cotes rule of the open type are found in Table 7.1(b), while a list of most of the present closed
type rules are found in Table 7.1(a).
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 329
simplifying the above needed integrations, and gives the N weights D; for what is
termed the N — point rule of degree N — 1. These weights are listed in Table 7.1(a)
for approximating the integral ihef(@)dx,
b IN N
‘ifade i, F(a)de = hw f(2s) (7.4)
for the cases N = 2,3,4 and 5. The weights are tabulated as w,; for D; = hui,
where h = an
For this table, it is important to note that if we write this rule for approximating
the integral jectedx)dz as
For illustration we will write the first three cases explicitly, for completeness we will
also give the error estimate of such approximations. For N = 2, we have 2; = a,
22 = b, and
2For the rest of the related quadrature formulas, and the more detailed tables with high accuracy, see
Abramowitz and Stegun [1965, pp. 885-890 and pp. 916-924, respectively].
330 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
Trapezoidal Rule
[see =ha+m-Fr'eo
Me — 2 1 2 12 )
eS (2) (¢) 73
f(x)dx = Sh + fa) + —
. (4)
f(z)dx = (2h — fg +2f4) + Bane
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 331
San re ae ee
(7.8)
where we use the notation f(")(r) = ot. This is the three-point and degree 2
rule, which is the basic Simpson’s rule for three points. Again this represents the
backbone for the repeated (or extended) Simpson’s rule (1.144) with N points, where
its derivation uses the first three samples f(zo), f(z1), and f (a2) and fits them to a
polynomial of degree 2 (parabola), which in other words uses the above basic three-
point rule (7.8) to approximate the integral on (xo, x2). This process is repeated for
the three samples f (x2), f(x3), and f(x4) using the same rule in (7.8) and so on -- -
to result in (1.144), where n is even, N = n+1. For the basic Simpson’s rule (7.8) of
three-point (and degree m = 2), if it is repeated M times, the total number of points
is N=mM+1=2M+1=n +1, and it is termed Simpson’s of M panels or
the familiar composite (or repeated) Simpson’s rule (1.144). The same is said about
the composite trapezoidal rule (1.141) as the basic two-point Newton-Cotes rule of
degree | with M panels, N = M+1=2n+1. After we present the other higher
degree Newton-Cotes rules next, we will see that they can be extended in the same
way as high degree NC rules with repeated M/ panels.
For N = 4, we have the four-point, third degree (m = 3) (closed) rule,
As was discussed above, the familiar Simpson’s rule (1.144) is a repeated three-
point, degree m = 2 Newton-Cotes rule (7.8) with size panel M, where N =
mM +1=2M+4+1=n+1. The size of the panel is obtained from b — a = Mh,
where h is the size of the subinterval used repeatedly with the three-point rule
(7.8). The same repeated process can be done for the higher order Newton-Cotes
332 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
method with panel size M, and are called the repeated Newton-Cotes (here closed)
rules. For an indication of the importance of such extension, one needs only to
look at the basic three-point Simpson’s rule (7.8), and how inefficient it would be
for approximating integrals on [a,b] with its mere three samples of the integrand.
The same unsatisfactory approximation property is observed about the high degree
Newton-Cotes N-point rules, even when the integrated function is well behaved.
The derivation of the repeated Newton-Cotes (closed) rules with M panels parallels
exactly what we described, and is well known in calculus texts, for Simpson’s
(composite or repeated) rule (1.144) and the (composite or repeated) trapezoidal rule
(1.141). Here we present the final result
and remind of the allowed translation and scaling that we discussed in (7.6) versus
(7.5) for each subinterval of integration in the above summation, and where the weight
for the rule, according to (7.6), is hw.
(a) Asan illustration we use the integral ifsnot whose exact value is pec
~ 0.785398.
We first use the three-point (second degree) Newton-Cotes rule [the nonre-
peated Simpson’s rule (7.8)] with N = 3, h = 45% = } to have
= 0.78333
(b) Now we use the HOMO (degree 3) Newton-Cotes (or the2
3 Simpson’s) rule
of (7.9) with h = =5 to have
which shows a very good improvement over the above (nonrepeated) Simpson’s
rule in (E.1). We will leave the details to an Exercise for comparing these results
with the results of the other more accurate rules such as the 2-panels Simpson’s
rule, the six-point, degree 5 Newton-Cotes rule, and the repeated six-point,
degree 5 Newton-Cotes rule with M = 2 panels. In Example 2(a) we will use
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 333
Table 7.2 Locations x; and Weights w; for the Maclaurin Rule (equidistant samples)
< N
where +2; are the samples’ locations and w; are the weight factors?
Ea, Wi ai Wi
= N =15
1/2 1/2 0 402/1152
Nees 2/10 100/1152
0 2/8 4/10 DiI TAS?
1/3 3/8 Net
N=4 1/12 254/1280
1/8 11/48 3/12 139/1280
3/8 13/48 5/12 247/1280
3From Kondo [1991, p. 148], courtesy of Oxford University Press (Clarendon Press).
334 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
for example L = 200. We will illustrate neat the use of a high degree four-point
Newton-Cotes rule (7.9), where
N = 4,h = aan = ae and
f° hyenas(2) ($8) +7
“— [1 + 6.75 x 10-4 + 1.69 x 107* + 2.50 x 107°]
= 25,0217,
which is a bad approximation when compared with the exact value tan 200.—
1.56580. This shows how inefficient such methods may be, and thus the need for
more efficient rules like the following Gauss quadrature rules of the next Saeco
One may think that the Newton-Cotes rules can do well for the pet pore =
sin 2xdz where the function e~* sin 2” decays much faster than Ga of the above
example, but still with taking a limit L = 200, and using the four-point Newton-Cotes
rule (7.9), we have
simple manner, a few very basic elements of this topic to allow us a general sketch
of what is behind the Gauss quadrature rules. First, this second group of quadrature
rules uses orthonormal polynomials qn(x) of degree n,n = 1,2,---,.N — 1 instead
of the simple monomial x*~! of the first group of Newton-Cotes rules. Also, the
locations of the samples {z;} will be the zeros of the polynomial of the highest degree
(of such polynomials) gy (zx), i.e., gn (2) = 0,2 = 1,2,3,---,N.
The special property of the polynomials is that they are orthonormal on the interval
(a, b) of the integration considered, which means that
(2)an(e)de = { (7.11)
b
| PCe)am
where p(x) is called a weight function, p(x) > 0.
For the present case of the Gauss-Legendre polynomials, gn(x) =(74+*) 2 P(x),
n = 1,2,3,---,N —1 are used on the interval [—1, 1], where P,, (x) is the Legendre
polynomial of degree n. To give a few examples of the Legendre polynomials P,, (2),
Pa ray Sn aS 5(32 Go
1 1
P3(x) = 5 (5a" =32)) P,(z) = g (35a" — 30x” +3).
More Legendre polynomials can be generated via the Rodrigues formula,
ve f ($2? -1) dr =0
DP afte; 2
since the integrand is an odd function over the symmetric interval [—1, 1]. However,
when n = m = 1 for qi (rz) = 30. the integral in (7.11) gives the value of 1,,
1
oy ake oae
dx = a c dz 53
=-—_ — 1.
=i
-—1
Also, it is easy to show that P(x) has two real zeros of Fa in the interval [—1, 1],
3 1 sles!
when we look at the equation P2(x) = (52° — —~) = 0. The generalization of this
result is that P(x) has k real distinct zeros in the interval [—1, 1]. As seen in Table
7.3 these zeros are symmetric around the origin, so only half of them on (0, 1) are
tabulated with + signs. For example with N = 2 we are using P2(x) with its two
1
zeros on (—1, 1) as = = —0.57735 and WE = 0.57735.
336 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
(7.13)
1 N
[ fae = YSwisleo
takes advantage of the orthonormality (7.11) of the polynomials on [—1, 1], and the
special locations {z;}/¥_, as the zeros of Py(z), i-e., Py(a;) = 0,7 = 1,2,3,---,N
to determine the N weights D; of (7.2) from the 2N equations. The details may
take us more away from our main line of sketching the idea, but the net result is that
we end up with an N-point rule of degree 2N — 1. We must note that since these
(orthonormal polynomials) rules result in an approximation of degree 2NV — 1, then
they clearly will give an exact approximation to the integral of any function which
happens to be a polynomial of degree < 2N — 1. The weights w,, and the locations
(zeros of Py (x)) are listed in the following Table 7.3, for N up to 8. For higher
values of N, and extremely accurate value of w; and x; for this Gauss-Legendre rule
as well as other orthonormal polynomials rules, see Abramowitz and Stegun [1965,
pp. 916-924].
1 N
/Sade = Y wif a)
+2; (zeros of Legendre polynomials Px (x)), w; weight factors
Seri Wi =, Wi
N12 E10
0.577350 1.000000 0.238619 0.467914
0.661209 0.360762
Ns='3 0.932469 Ot 1325
0.000000 0.888889
0.774597 0.555556 Ni
0.000000 0.417959
N=4 0.405845 0.381830
0.339981 0.652145 0.741531 0.279705
0.861136 0.347855 0.949108 0.129485
Nao N=8
0.000000 0.568888 0.183435 0.362684
0.538469 0.478629 0.525532 0.313707
0.906180 0.236927 0.796666 0.222381
0.960290 0.101229
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 337
From Table 7.3 we see that for N = 2 we neve cate weights w; = we = 1, and
we have already found the two zeros 7; = or = —0.577350 and rg = 72 ~
0.577350 of P2 (x)as the two (symmetric) locations in the interval [—1, 1]. Hence, we
have two-point Gauss-Legendre rule of degree 2N — 1 = 3, i.e., we use a Legendre
polynomial of degree 3 to have the Gauss-Legendre approximation for S2 of (7.2),
sai(-) (4)
To give a simple example, we consider the integral [oe e* dz with its exact value
e— A = 2.350402. The two-point Gauss rule of degree 3 gives the following
approximate value Sy = e V3 +ev% = 0.561384 + 1.781312 = 2.342696, while
the two-point Newton-Cotes rule of degree | in (7.7) gives Sp = t + e =0.367879 +
2.718282 = 3.086161, which is a bad approximation compared to the Gauss rule
as it is evident from comparing these two results with the exact value of 2.350402.
Although, this is a very simple example, it demonstrates what is well known about
the power of the Gauss type quadrature rules. This is especially true in the treatment
of the numerical solution of Fredholm integral equations, where we end up with an
N x N system of linear equations, and for large N, the cost for such numerical
computations becomes prohibitive! This is unless we have efficient methods like the
Gauss quadrature rules. In contrast, the Newton-Cotes type N-point rules of degree
N — 1 are known to be inefficient in this situation, and the accuracy is in much doubt
for large N. However, they are adequate for the more simple triangular system of
the resulting linear equations in the case of Volterra integral equations.
The setting up of the numerical approximation of Fredholm integral equations
parallels that which we have illustrated already with the help of the simple (repeated)
trapezoidal rule in (5.124) and (5.125) of Section 5.5.1, except for looking up the
sample locations and weights from Tables 7.1-7.6, which we will return to after
presenting another illustration, and few more useful and efficient quadrature rules.
1 1 1 i
SS ih =
‘f ie lie ae ave?
if! =5 | so
~) 5[0.662145 f(- 0. Sema + 0.347855f (—0.861136)
338 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
1
+0.652145
f(0.339981) + 0.347855
f (0.861136)] = 3 [0-588098
+0.346186 + 0.450101 + 0.186422] = 0.785403.
This is a much better approximation to the exact value of 7 ~ 0.785398 than that of
0.78461, obtained with the use of the four-point Newton-Cotes rule that was done at
the end of Example 1.
(b) Next we return to our two examples of integration over the infinite interval
(0, co), namely, {he e* sin 2xdz (with its exact value of 0.4) and that of the slowly
varying function {)~ ;+.rdz (with its exact value of 7 ~ 1.5708). We will truncate
the infinite limit of integration in the first integral to L = 10, and use an eight-
point Gauss-Legendre rule which results in a much better value of 0.40041 than that
of 8.22 x 10-28 when using L = 200 with a four-point Newton-Cotes rule in the
computations for integrals with infinite limits (following Example 1). The exact value
of the infinite integral is 0.4. In the second integral, with its slowly varying function
jG = as , this eight-point Gauss rule with L = 100 gives a good approximation
of 1.17915 (compared to the exact value of 1.5608 of tan—! 100 of the truncated
integral on (0,100)) which is far better than the very bad approximation of 25.0217
of the four-point Newton-Cotes rule with L = 200, that we did following Example
bi
For the Gauss-Legendre rule of approximating the truncated integral
10
| e "sin 2zxdz,
0
10 1
/ e "sin 2adz = 5 | e + sin(10(y + 1))dy.
0 -1
We must mention here that the above are illustrations to show some indication for
the approximation of the two groups of numerical integration rules, namely, the
Newton-Cotes rules versus the Gauss type rules. These illustrations are by no means
exhaustive, since much analysis must be done regarding a suitable truncation limit L
for the infinite integral. For example, with the choice L = 25 for the above integral,
the eight-point Gauss-Legendre rule gave a much better result of 1.578364 to the
exact result of tan! 25 = 1.5308176 of the truncated integral fees es. This may
be explained in terms of investigating the eight points in the region0 < x < 25
where the integrated function ee counts the most instead of spreading those eight
points in the region 0 < x < 100, where beyond x = 25, the function changes little
from zero.
In Example 3, we will use the Gauss-Laguerre (polynomial) quadrature rule,
which is designed for integrals on the semi-infinite interval (0, 00), to show better
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 339
results with L = 10 for the first integral ik e * sin 2xdz and L = 25 for the second
; 25 ; : ;
integral ifs reas Ws when an eight-point Gauss-Laguerre rule is used. These results
will be compared with those of another efficient rule, also designed for integrals with
infinite limits of slowly varying functions, namely, the Gauss-rational rule.
lala), (Wil
IN |
Qn(x) = B (7.144)
270, 0)
on the interval (-1,1) with respect to the weight function p(x) = ee [see(7-11)].
The very special property that makes such polynomials extremely useful, is that they
can be related, after a simple change of variable r = cos@ for T,(x), to a cosine
function,
1
Taye aa cosn(arccosz), n#0 (7.14b)
ilo) ‘=
This enables us to arrive at the zeros of T(x) in the N-point rule from the following
very simple formula,
The above change of variable x = cos@ is the reason for the weight p(x) =
echt = =, where the integral [,° f(@)d0 becomes ie Taf (cos x)dx
1—cos
since d9 = d(cos~! x) = ++
V1—«2
-sdz. The Tchebychev
: ;
polynomials also have the
simple property of constant weights w; for their N-point Gauss quadrature rule,
1 : ew (ies
1
‘ 2)dx & woud (Sa). (7.16)
la rer
as it is for Gauss quadrature rules in general, is exact. For example, the integral
is ’ wae z2dz is approximated exactly by the N =two-point Gauss-Tchebychev
rule (7.16) of degree 2N — 1 = 3, since g(x)= x” is of degree 2 < 2N —1 = 3. The
exact yas of this integral can be obtained by simple trigonometric substitution for its
value as $. The above Tchebychev rule with N = 2 gives the same Valu Since with
m1=- cs Ja t2 = C08 = Yq we have Sp = 3[f(— wa) + Gy 5)] =
Acree ; l== §. We may note that for the rule (7.16) with vfequal an
ay
we eel noie for the locations x;,since they can be obtained very easily from
(WAgley's
The above discussion makes use of the important relationship (7.14b) between
the Tchebychev polynomial and the trigonometric cosine function. The result is a
constant weight quadrature formula (7.16), but it is for a weighted integral,
&
i
of g(x) on (—1,1) with weight function w(x) = —=——. There is, however, a
VP 1
simpler Tchebychev quadrature rule for the integral i f(x)dzx (without weight),
=
D
and with equal weights W in the sum,
However, the locations of the samples x; are different from those used in (7.16) [as
obtained from the simple formula in (7.15)]. The locations x; for (7.17) are listed in
Table 7.4, and they are, more difficult to obtain, as zeros of the polynomial part of*
Note that the x; here in the Tchebychev rule (E.1) are different from those for the
Gauss-Tchebychev rule.
We may also note that both sums, of the Gauss-Tchebychev rule (7.16) and
the Tchebychev rule (7.17), are with equal weights (w; = W and Z, respectively),
compared to the rest of the Gauss quadrature rules used here, such as that of Legendre
in (7.13) and what will follow as the Laguerre and the Hermite quadrature rules in
(7.18) and (7.26), respectively.
We may remind again that in Tables 7.2, 7.3, and 7.4 for the Maclaurin rule, the
Gauss-Legendre rule, and the Tchebychev rule (with equal weights), respectively, the
Table 7.4 Locations 2; for the Tchebychev Rule with Equal Weights (7.17)
1 2 N
[ f(a)da x = > f (zi) (E.1)
20577350" ©) 0.832497
0.374541
3 0.707107 0.000000
0.000000
6 0.866247
4 0.794654 0.422519
0.187592 0.266635
locations of the samples x; (and the weights for Tables 7.2 and 7.3) are given for the
a
integral f(x)dz on the symmetric interval (—a, a). Hence the integral at hand
—a
The negative numbers in parenthesis (—7) to the left of the numerical values of w;
in Table 7.5 are the negative exponent of the factor 10~” used for the indicated small
numbers. For integrals of the form fie e 6 f (x)dx, we can simply make the change
of variable
€=G(x — a) where this integral becomes 5e~°* Vf mens (s ~ a)dé,
which then can be approximated by (7.18)
pee
ile f(x)dx oF I ts (Le
B [ enuf. rs d€
oh
e fa oe ag
NOUS
e Ba
ree RAGS)
a
(7.20)
Pym (Gre)
where, as seen in the middle sum for h(€;), the locations €; are the zeros of the
Laguerre polynomial Ljy(x). This generalized version (7.20) of the Gauss-Laguerre
rule (7.19) is called the shifted Gauss-Laguerre rule.
A good illustration of this efficient rule for integration over the semi-infinite
interval (0, co) is to return to our examples of fee e * sin 2xdz and ifs Tero
which we have already approximated in Example 2, using an eight-point Gauss-
Legendre rule with truncation limit L = 10 for the first integral and L = 25 (and
100) for the second integral, and also right after Example 1, where Newton-Cotes
rule was used with even larger L = 200 (see also Exercise 2).
[ve
ef (a)dae Ywste
Xi) (£.1)
Se N
[ g(o)de = >wie™ gle) (B.2)
x; (zeros of Laguerre polynomials Dj (x)), w; weight factors
Li Wi wie”?
ING—22
0.585786 (-1)8.535534 1.533326
3.414214 (-1)1.464466 4.450957
N=3
0.415775 (-1)7.110930 ~=—-1.077693
2.294280 (-1)2.785177 =—-2.762143
6.289945 = (-2)1.038926 5.601095
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 343
Table 7.5 continued Locations x; and Weights w; for the Gauss-Laguerre Rule
Zi Wi wie!
N=4
0.322548 (-1)6.031541 0.832739
1.745761 (-1)3.574187 2.048102
4.536620 (-2)3.888790 3.631146
9.395071 (-4)5.392947 6.487145
N=5
0.263560 (-1)5.217556 0.679094
1.413403 (-1)3.986668 1.638488
3.596426 (-2)7.594245 2.769443
7.085810 (-3)3.611759 4.315657
12.640801 (-5)2.336997 7.219184
N=6
0.222847 (-1)4.589647 0.573536
1.188932 (-1)4.170008 1.369253
2.992736 (-1) 1.133734 2.260685
5.775144 (-2)1.039920 3.350525
9.837462 (-4)2.610172 4.886827
15.982874 (-7)8.985479 7.849016
N=7
0.193044 (-1)4.093190 0.496478
1.026665 (-1)4.218313 1.177643
2.567877 (-1)1.471263 1.918250
4.900353 (-2)2.063351 2.771850
8.182153 (-3)1.074010 3.841249
12.734180 (-5) 1.586546 5.380678
19.395728 (-8)3.170315 8.405432
N=8
0.170280 (-1)3.691886 0.437723
0.903702 (-1)4.187868 1.033869
2.251087 (-1)1.757950 1.669710
4.266700 (-2)3.334349 2.376925
7.045905 (-3)2.794536 3.208541
10.758516 (-5)9.076509 4.268576
15.740679 (-7)8.485748 5.818083
22.863 132 (-9)1.048001 8.906226
Example 3
Here we will use the four-point and eight-point Gauss-Laguerre rule (IV = 4 and
8 in Table 7.5) for both integrals with the limit of integration being truncated to about
10 and 23, respectively. For the integral ifse* f(x)dx with L = 10, we note from
Table 7.5 that the closest value to 10 is x4 = 9.395071 for the four-point Laguerre
rule. For N = 8, we can go as far as xg = 22.863132.
344 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
co
co
I= jhef(x)dex (7.21)
is to use a change of variable (or mapping) x = vu(€). This reduces the limits of
integration to finite ones in the new variable €. For example, the following change of
variable
p= 0(6) =
2(a + (3)
ae (7.22)
(as a rational function of €) reduces the integral in (7.21) with xe(a, oo) to the
following integral with finite limits for €€(—1, 1),
7.1. HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 345
where F'(€)= f(v(€)), and the weight at in the above integral is the result of
dx= — At de after using x = “ee — B. So, a Gauss-Legendre rule can now
be used for approximating the integral in (7.23) on the symmetric interval (—1, 1),
ie f(x)de =2(a+) fe a
(7.24)
“\ wiF (&)
CEAOS reer
with F(€) = f (ae — ) , and w3;, &; are, respectively, the weights and locations
1+&:
for the Gauss-Legendre rule as given in Table 7.3. Of course, in the (very special)
case of the integrand Tote in (7.23) being a polynomial of degree < 2N — 1, the
Gauss-Legendre rule of approximating its integral in (7.24) would give the exact
value of the integral.
This method is called the ““Gauss-rational rule," which we will illustrate in the
next Example 4 for the two integrals of Examples 2 and 3, and then compare the
results for the different methods.
This rule reduces integrals with infinite limits of integration to those of finite
limits. So, for our purpose of this book, it will help us in reducing some singular
integral equations, those with infinite limits of integration, to non-singular integral
equations, i.e., with finite limits of integration.
We emphasize here the different numerical integration methods, since they are
essential for the accurate numerical setting of such singular integral equations. This
is so, especially when at the level of this book we are not covering the theory behind
and the analytic methods for solving such singular equations.
We may stress the point that while other methods deal first with truncating the
infinite limit of the integral, the present Gauss-rational rule ends up dealing with finite
limit integral as in (7.24). So, there is no surprise if it suceeeds in approximating
the integral ies Te ae, with its slowly varying integrand at which was a source
of trouble for the former methods (such as the Maclaurin method and Newton-Cotes
method that we discussed following Example 1) that must start with truncating the
infinite limit.
The success of the Gauss-Laguerre rule in approximating integrals of the form
Nee ® sin 2adz, is also understood because of the inherent decaying factor e~* in
the integrand, that acts, effectively, to truncate the infinite limit of integration.
346 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
There remains the role of the parameter ( in (7.22)-(7.24) for the Gauss-rational
rule. To compare this rule with the Gauss-Laguerre rule for the integral Ip ede,
we must consider this integral in the form of the integral in (7.20)
ee co 1 fore)
ee
eht
(7.25)
0 14+ 22 0 1+2
a 1X wet
=
i e °* f(x)dz& —
B+ —_—-
after using the shifted Gauss-Laguerre rule (7.20), and where €; are the zeros of the
Laguerre polynomial L(x), and w; are the weights of the Laguerre rule as given in
Table 7.5.
The observation that the Gauss-rational rule may benefit from different (Larger!
in this example) values of 3 than the Gauss-Laguerre rule, which uses smaller values
of 3, will appear in the illustration of the two methods in the following example.
Example 4
We consider again the same two integrals with infinite limits that we used in
Examples 2 and 3.
(a)
ile1 ag = * = 1,5707963 (E.1)
9 1+2? a Se x i
au i Gulls)
where F'(€) = aaa If we use 8 = 10 and aneight-point Gauss-Legendre rule
€+1
on the second integral in (E.3), using Table 7.3, we have the following approximation,
ee |
i ae dx © 20[0.000263 + 0.000689 + 0.001347 + 0.002577
0
+0.005327 + 0.012629 + 0.030205 + 0.025304 = 1.56685
Next we use an eight-point Gauss-Laguerre rule as in (7.25) with (a small) G = 0.2
to have
7.1 HIGHER QUADRATURE RULES OF INTEGRATION WITH TABLES 347
a
/ e sin dade = 26 merit
aad (E..4)
[o-e)
where, of course, x; are the zeros of Hy (a), i.e., Hy (xi) = 0,7 = 1,2,---, N.
348 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
These samples locations x; and the weights w, are listed in Table 7.6 for the above
integral (7.26) and its equivalent
ore) N -
[oleae
=o] =
wie g(a) (7.27)
for N up to 4. The same note regarding the negative numbers in parenthesis (—7)
for Table 7.6 applies here, where it is for (the exponent of) the factor 10~” used for
representing small numbers w; to be employed in (7.26).
es N
/ e-® f(x)dz © wif (ai) . (E.1)
00 N ; ;
/ g(x)dx & s wie" g(x;) (£.1)
al
Set iA Wi wer
IN ss
0.707108 (-1)8.862269 1.461141
IN = 3
0.000000 (0)1.181636 1.181636
1.224745 (-1)2.954091 1.323931
Nao
0.524648 (-1)8.049141 1.059965
1.650680 (-2)8.131284 1.240226
Exercises 7.1
2. Do the same as in problem | for the following integral, except that 8 = 1 and
10 for the Gauss-Laguerre rule and the Gauss-rational rule, respectively.
[o-@)
/ e “sinazdz.
0
3. Use the Gauss-Laguerre rule to compute the exact value of the integral
e 16
f ead = —. (E.1)
1 €
[o.@)
Hint: In order to use the Gauss-Laguerre rule for the Cai (aor. we
0
make a change of variable y = x — 1 in the given integral of (E.1) to reduce it
to [5~ e Ye! (y + 1)3dy, then apply the rule of (7.18) to this last integral (you
can also use (7.20) on (E.1) with a = 1 as the shifted Gauss-Laguerre rule),
co 1 [o@)
(a) Use a two-point Gauss-Hermite rule to approximate the integral and com-
pare with the exact value. This approximation is not exact! Why?, see part
(b).
(b) Use a three-point Gauss-Hermite rule to obtain the exact value of Dis for
the integral, to show that this polynomial rule gives an exact value since the
integrated function f(x) = x‘, with respect to the weight function p(x) = er
on (—0o, 00), is a polynomial of degree 4 < 2(N) — 1 = 2(3) -1=5.
350 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
uo= fo
2At ;
u2= fet Eee [K20uo + 4Ko1u1 + K22u2]: Simpson's
3At 3
ug = f3t os [K30U0 + 3K'31u1 + 3K32u2 + K33us] : aaa rule (7.28)
2At
U4 = fa af We [K40U0 ar 4K 4,u1 =P 2K 42uU2
While Simpson’s rule (1.144) and the 3-rule are high-degree efficient rules, this
method will still suffer from the inefficiency of the lower degree trapezoidal rule,
used in determining wu; above at the starting point, which is then used for the more
accurate rules in determining uz, u3, u4 and so on --- in the following equations
of (7.28). Such inaccurate value of the input u;, would, of course, ruin the good
accuracy of the high degree rules used for Wa U3, U4,**:. One way around this
difficulty is to use smaller At, say Me or =, from t = 0 to At to have a more
accurate value for the starting value of U1, then follow it by the other rules for uo,
ug, and u4, which is illustrated in the following example.
7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS 351
Example 5 We will return to Example 9 of Section 3.3, where the trapezoidal rule
was used,
We first try (7.28) with At = 5, and to improve the accuracy of the Eepsze dal rule
for determining wu, in (7.28), weuse a = ; fot = 0. todea Ab = 2°where
we have three points at t = 0, 4| and 3} with Rane values uo, u, and us to use
the trapezoidal rule on as a “mini" problem. The result wu, can now stand for a
more accurate value wu; att = 4 to be used as wu in the third equation of (7.28) for
determining uz via Sage s rule.
First we choose At = 5, A’t = 3, and Pe the trapezoidal rule in the second
equation
of (7.28) with f(0 Ne
=O fi = ve ) = 4, Kio = -—7, Kun = (§ — 5) = 0,
to find ui, = u(4),
a 1 17 1 ei ; eee!
a jiE (-3) (0) + 5(0)x Ay =F
nes
Then
we use up = 0, ui,= u(¢) = F and fo = f(§ 1;
fe 2 in a three-point trapezoidal
rule with A’t = } to find the more refined uy = u (5)>tO be used later for Simpson’s
rule,
i 1
Us = fo+A't =Kaouo + Kau + 5 Kean]
ih yal 1 1 1 il Lat 1 j
Now we return to our original (over all) problem with the larger t increment of
At = 3, Uo = 0, and the more fannes i Us) — t= 2* to be used for
Simpson’s rule in (7.28) with At = ==} to find u2 = u(1),
2At
tly ford ae [Koouo + 4Koiu1 + Ku] ,
1 1 31
ow
= fs =Ee
ayer + 3K31u1 + 3K32 + K33us3],
U3 75
om
+75 [- Gv +9(5-3) (a) *8(%-3) (i)
(5-53
“te
2 U3|;
| 3 —
s 93 483 3 3
|-— - — |] = = — = (2.71094) = 1.5 — 0.5083
Ms 2 iie |64 al 2 6! )
U3 =U: 991
Table 7.7 Numerical and Exact Solutions of a Volterra Equation of the Second Kind (Ex-
ample 5)
In Example 9 of Section 3.3, we used only the typical (extended) trapezoidal rule
with At = 1, where the results are reported in Table 3.1 and Figure 3.1 of Section
3.3. A look at this data shows the better accuracy of the present computations.
For example, we now have uz = u(1) = 0.8385, while in Example 9, we have
ug = u(1) = 1.0, where the latter is much further away from the exact answer
of u(1) = sinl = 0.8415, ie., 16% error compared to 3.3% error of the present
computations.
For another illustration of this method, see Exercise 4 with its very detailed answer.
jSt
Hes) aK Ge Duma, 3,---N, tp < ae:
ea
For this equation we will use the Maclaurin rule (see also Exercise 2).
From Table 7.2, we note that the Maclaurin rule on (—4,+)
oP) uses fixed odd
number of increments for approximating the integration on (— T 5). For example,
with N = 4, itusest; = —} + § = -3,t2 =-$ +2 =-},t3 =-$+2 =i,
angi7n— —$+ é = 3. So with Az = h= ; we must use t] = a +h,tz = a+ 3h,
t; = a+ 5h, ty = a+ Th, i.e., we use an odd number of increments h. But
as in (3.45), the (variable) upper limit for the integral of Volterra integral equation
requires t; < x;, while the Maclaurin method, as an open method, does not use the
end point a and the upper one of the considered interval. Thus to avoid the upper
limit of the integration we must have t; < x;. This means that we may take 7;
with even increments of h, x; = a + 2th, and (lower) odd increments of h for t;,
t; = a+ (2j-—1)h. Hence, we take r2 = a+ 2h, 24 = a + 4h, re = a+ 6h,
and zg = a+ 8h,--- for the variable x of the term f(x) on the left side of (7.29)
and also for the x of the kernel K(x, t) inside the integral of (7.29). In this way, we
have f(x;) = f(a+ 2th) = foi, u(t;) = u(a + (27 — 1)h) S uoj-1, and K (aj, t;)
=K(a + 2ih, a + (27 — 1)h) = Koiaj-1; i,j = 1,2,---,N. With this notation,
the Maclaurin setting of the Volterra integral equation of the first kind (7.29) (for
t= 182, -e-),) BECOMES
2hKiu1 = fe
il 1
Ah [5Kau + 5 Keaus| = fa
3 2 3
6h [pKa oF g Nests + 5 Kass = fe
11 11 13
8h Laken a5 7g 83us cL 7g Mass EE ken |= fe (7.30)
and so on. We note again that in this example with N = 5, the equations stop at the
sample ug before the one at the end point, namely wo.
354 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
The following example illustrates this use of the Maclaurin rule for solving Volterra
integral equations of the first kind (see also Exercise 2 and its detailed answers).
sine = [ e”—'u(t)dt
0
to illustrate the above Maclaurin method (rule) with N = 4. The numerical results
are compared with the exact solution u(x) = cosx — sin, which can be obtained
from the answer of Example 7 of Section 3.2 (with A = 1),which we arrived at easily
via the use of Laplace transform.
Here we take h = 0.05, and with N = 4 we have ro; = 2th, to;-1 =(27 — 1)h,
A$ =A, 2eOP4 S06, 25 = OA en 10 25 a67 = 003223, — 0 A(the end jpoint)..and
ty = 0.05, t3 = 0.15, ts = 0.25 and t7 = 0.35. Hence our unknowns u(t;) are
labeled as u; = u(ti) = u(0.05), ug = u(t3) = u(0.15), us = u(ts) = u(0.25),
and u7 = u(t7) = u(0.35).
If we substitute the values f(z2) = sinz2 = sinQ.1 and Ko; = K(a2,t1) =
e9-1—0-05 in the first equation of (7.30), we find u; = u(0.05),
0.1e9!-9
yu, =sin0.1 = 0.09983
= (0.1)(1.0513)u; = 0.09983, u, = 0.9496.
Next we substitute this value of u;, along with the values of f4 = f(a4) = sin 0.2,
Ka, = €9:?-9-, K43 = e9:?-°-15, in the second equation of (7.30) to find us =
u(0.15),
1 1
0.2 5002-99(0, 9496) + er7 Ou, =sin0.2 = 0.1987,
uz = 0.8393.
The same can be done for us = u(0.25) and u7 = u(0.35) in the third and the fourth
equations of (7.30), respectively, to have
3 2
0.3 5°-005(0, 9496) - 3° (0.8403) + seo ='sin 0.3,
us = 0.7197
13
0.5 [sper 2-9(0.9496) 11
+70" °° (0.8403) a ent 09(0.7197)
13
igo | =sin0.4, wu7 = 0.5960.
48
These approximate values of i; are compared in the following Table 7.8 with the
exact values of the solution u(z;) = cosx; — sinz; = uj, along with the error
€; = Uj; — u;. We used u; for the above approximate values to distinguish them from
the exact values u(x;) = uj.
7.2 HIGHER QUADRATURE RULES FOR VOLTERRA EQUATIONS 355
Table 7.8 The Numerical Solution of a Volterra Equation of the First Kind—The Maclaurin
Method (Example 6)
—V/29f(y) = [ Sa (1.20)
where the kernel K(y,7) = rer is singular at the end point 7 = y.
Fortunately, this singular equation (1.20) is in the special form of Laplace trans-
form convolution product integral, where the Laplace transform can be used to solve
it, which was done with complete details in Example 8 in Section 3.2. At the level of
this introductory text, this example represents the only singular Volterra equation—
with its singularity due to its kernel—that we have covered and solved analytically.
Another example of a singular Volterra integral equation, due to its infinite (lower)
limit of integration, is that of the torsion of a wire (1.15) in the torsion w(t),
where the integral represents the dependence of the torsion on all torques applied to
the wire in time (—oo, t) besides the present torque m(t).
At the level of this book we are not covering singular integral equations. However,
we feel that a brief comment regarding the numerical approach to the class of singular
Volterra integral equations, such as that of (1.15) above, is in order.
We limit our very brief remark to the one type of singular Volterra integral equa-
tions that are characterized by having an infinite limit for its integration as in (1.15)
of the torsion of the wire,
The solution u(x) is defined for ze(—oo, co), however the integration is done on
(—oo, x), which can be considered as due to the assumption that the kernel (a0)
vanishes for t > x, or
= TG ) p) Co <a on . 2
esi —o <t<g
es {ISAS in KI EMCO) ey)
instead of K(x, t) in (7.31), the limits of integration become from t = x to t = on,
and (7.31) reduces to
1 de g ay fees ie ey a eae | 1
ete Weak (;-+)«(+) ae
u(2) =e
& g if Te Nee ip (7.37)
Exercises 7.2
(a) Use the Maclaurin method (that we used for (7.29) to obtain its numerical
setting (7.30)) to write the numerical setting of (E.1), using Nt 0.05,
uj = u(z;) = u(0.052), 2= 1,3,5,7,9. (See Example 6.)
(b) With such simple triangular system of equations, solve it successively to
find the samples of the approximate solution u;, 2 = 1,3,---,9. Compare
your results with the samples of the exact solution u(z) = e”.
3. For the Volterra integral equation of the second kind in Exercise 1, repeat the
problem with At = 0.02 and compare with the approximate and exact answers.
358 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
5. For Exercise 1, use the Lagrange interpolation formula (1.153) and (1.154) to
interpolate its numerical values of the solution, then compare this approximate
interpolated solution u(x) with the exact solution u(x) = e”.
In this section we consider the numerical approximation setting for solving Fredholm
integral equations using the more efficient high quadrature rules of Section 7.1 such as
the Gauss-Legendre and other Gauss quadrature rules. The results will be compared
with the, relatively inefficient, trapezoidal rule that we have already used in (5.124)
5 Jerri [1999]. See the end of the preface for more information.
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 359
of Section 5.5.1. As we discussed in Section 7.1, all such quadrature rules will differ
in the weights D; and the sample locations t; of the numerical approximation setting
(5.118) with N =n +1,
for the Fredholm integral equation (of the second kind) (1.148), (5.116)
as
The new weights wi = 5Wj of the (symmetric) values w; of Table 7.1 are:
= 0.173927,
wi, = 0.326073,
w, = 0.326073,
w, = 0.173927.
1
ula) =4 (x) + af K (a, t)u(t)dt (7.39)
360 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
with the help of a four-point Gauss-Legendre rule (in parallel to using the trapezoidal
rule in setting (5.124) for (5.116)) as
The same is done for i = 2,3, and 4 to have the system of four equations,
Now we use the four-point Gauss-Legendre rule for the integral with the help of Table
7.3, and evaluate u, atz =z} = ae = a = (1.069432 to generate the first
linear equation of (7.40) corresponding to z = 1,
Here we must find the determinant of the (square) matrix A, which is essential to the
use of Cramer’s rule for obtaining the final solution, then we report the final solution
(u1, U2, U3, U4). The result of evaluating the determinant of the square matrix A in
(E.4) is |A| = 0.450361.
Now we use Cramer’s rule (as in Section 1.5.4) to solve the system of equations
in (E.3) where we find that the values @1, ti2, 3 and %4 are almost equal to the exact
value of u(x) = 1,0 < x < 1 (within an error of about order 107~!°).
In Table 7.9 we present a comparison of these approximate (numerical) solutions
a;, with the exact solution u; = u(x;) = 1, along with the error €; = U; — Ui.
Table 7.9 Numerical (Gauss-Legendre) and Exact Solutions of a Fredholm Equation (Ex-
ample 7)
We note here that the absolute error of about 10~!° represents an improvement
for the Legendre rule, when compared with the trapezoidal rule that we used for the
same problem in Example 20 of Section 5.5.1, though with N = 3, where the error
ranged between 10~? and 2 x 107?.
(b) The Tchebychev (Equal Weight) Rule
Here we use a four-point Tchebychev rule, with the help of Table 7.4, as in (7.41)
to solve the same Fredholm integral equation (E.1); and to have the integral on the
symmetric interval, we use its transformed version in (E.2). We also adjust the 2;
given on the symmetric interval (-1,1) in Table 7.4 to give us 2, = 5 (ti + 1) for
u(zi) = uj, rie(0, 1), since our integral of (E.1) is defined on (0,1).
These equations are then written in a matrix form, as we did in (E.3), (E.4) in part (a),
where the determinant of the coefficient matrix A is computed, and Cramer’s rule of
Section 1.5.4 is used to find the solution (ui, uz, u3, ua) of the system of equations
in (E.6).
Since all these computations were done in detail in part (a), it is sufficient here to
report in the following Table 7.10 such numerical solution @;, 1 = 1, 2,3, 4, which
are almost equal to the exact solution u; = 1 (within an error €; = &; — v1; of about
order 10~".)
We note that the accuracy of the computations is reasonable compared to that of the
Gauss-Legendre rule in part (a). However, it is known that the latter can do very well
as we increase the number of points NV. We shall leave such observations for the
exercises.
b
u(x) = f(z) +2 fiK (a,t)u(t)dt (7.39)
we, of course, must first make sure that the equation has a unique solution u(z) before
embarking on using one or more quadrature rules to find its approximate samples
{u(a;)}§_,. Next we must also have an estimate of the error for the quadrature
rule (or rules). It turns out that the bound on the error for such numerical methods
depends on two factors. The first is independent of the method used, and it depends
on the integral equation itself as characterized by the integral operator with its kernel
K (a, t) in (7.39), and its nonhomogeneous term f(x). As was discussed briefly in
Section 5.4 for the Fredholm integral of the first kind, the first factor is described in
364 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
terms of the (possibly high) sensitivity of the solution u(x) to small changes in the
above two characteristic parameters of the problem, namely, f(x) and K (a, t).
The second factor of the error bound is more related to what we are doing here,
where it is a measure of the error of the rule used in approximating the integral of the
equation (7.39). With the high degree quadrature rules, such error can be minimized
easily if K(x, t)u(t) is well behaved. However there is a catch here, since u(x)
is still the unknown to be found. A very helpful result, for directing our efforts
in using the above two groups of Newton-Cotes and the Gauss quadrature rules, is
that, “If the kernel K(x,t) is continuous in the square; {a < x < b;a < t < D},
f(x) is continuous on a < x < 6, and if we also know that the equation (7.39) has
a unique continuous solution, then there is a family of quadrature rules for which
the error, in approximating the Fredholm integral equation (7.39), tends to zero as
N ~ oo". Such family of rules includes, (i) the M/-panel Newton-Cotes rules of
degree P for any fixed P and increasing M; and, (ii) the Gauss-Legendre, and open
and closed Tchebychev N-point rules for increasing N. It does not, however include
the Newton-Cotes method for fixed panel M and increasing degree P.
For example, in the problem of the above Example 7, and also Example 20 of
Section 5.5.1
we know that the solution of this equation is u(a) = 1, and that the kernel K (az, t) =
(1 — xcost) is also continuous (and differentiable) on the square; {0 < x < 1;
0 < t <}. Hence, according to the above result, there is no surprise when simple
quadrature rules would work as illustrated in Exercise 1(a) of Section 5.5 (with the
very low value of N = 3) for the trapezoidal rule and with better results for Simpson’s
rule. However, this is not very much the case for the integral equations with kernels
such as where the solutions are smooth, but the kernel K (z, t)
elt), Oteri<t
Kat) = {(1-2), 0<tSa (is)
which, although it is continuous in both z and t, it does have the problem of a jump
discontinuity of size | in its first derivative oR (at) as shown in (4.26) for the more
general case of Green’s function G(z, t) in (4.25). An indication of the convergence
of the numerical method for the problem (7.42) may be seen in the result of Example
20 in Section 5.5 where the trapezoidal rule was used with N = 3 (see also Exercise 5
in Section 5.5). For an illustration with kernel K («, t) that has a jump discontinuity
in its derivative, see Exercises 8 and 6 in Section 5.5 for a nonhomogeneous and
homogeneous Fredholm equations, respectively. Of course, these examples with
limited small N are not enough, and we shall have a chance to illustrate such error
analysis for large N in the exercises.
7.3 HIGHER QUADRATURE RULES FOR FREDHOLM EQUATIONS 365
and in Example 16 of the same Section 1.4.2 for the singular homogeneous Fredholm
integral equation,
(i) Either one of the two limits of the integral in the equation is infinite,
or
(ii) The kernel of the integral equation becomes unbounded, i.e., equations with
singular kernels.
The first type of singularity is seen in the above two examples of (7.45) and (7.46). For
lack of space we shall limit our general brief discussion and the numerical method, of
using higher quadrature rules, to such singular Fredholm equations whose singularity
is only due to the limit (or limits) of integration being infinite.°
6The interested reader may consult Baker and Miller [1977] or Delves and Mohammed [1988].
366 Chapter 7 HIGHER QUADRATURE RULES FOR THE NUMERICAL SOLUTIONS
may, formally, be reduced to that of finite limits via the change of variables
where the limits of integration in both of the new variables 7 and € are from — } to 3.
Such change of variables (or mappings) is often used in the numerical approximation
of this type of singular equations, where we use a quadrature rule on (— 5, >) for the
following transformed equation (7.49) instead of dealing with the infinite limits of
(7.47)) (see Exercise 9 for an illustration in this general direction),
&
where U(£) = u(x) = u(tan€), F(¢) = f(x) = f(tang), and H(€,r) = K(z,t)
=K (tan €,tan 7) - sec? r. Note that the new kernel in (7.49) may have singularities
ati +5-
Another mapping that is used for the same purpose is
=
2a 2a
= IL. = — — :
é r+a i t+a : Ka)
which reduces the equation
1
Ue=FeQ+ |=<) GENUMdr,-1<€<1 (7.52)
Hee U(€) = u(x), F(€) = f(x), and G(é,r) = VAG = 22, K(2% —a,
eo
Of course, here we notice that the new kernel has a singularity at the lower limit of
integration tT= —1. For the numerical methods, such singularity at an end point may
be avoided by the use of an “open" quadrature rule for approximating the integral,
i.e., a rule that does not use the end points for samples such as the Newton-Cotes
rules of the open type (Table 7.2(b)), Maclaurin rule (Table 7.3) or the Tchebychev
(open) rule (Table 7.4), as we discussed in Section 7.1.
As to the analytical treatment of singular equations of this infinite limits type
(7.47), the simplest class that we can have a good hold on here is the special case
where the integral is in a Fourier convolution product form,
co
If we let U(A), F(A) and K (A) be the Fourier transforms of u(x), f(x) and k(x), re-
spectively, then the Fourier transformation of the singular Fredholm integral equation
(7.53) reduces it to an algebraic equation in U(A),
UOA) = Roy
F(A)
= ———.,_ 1
1-wK HKAA
(A) £0. (7.55)
7.59
So, provided that 1 — A (A) # 0, we can use the inverse Fourier transform (1.88)
on this U(A) of (7.55) to find the solution u(x) of the singular equation (7.53),
—Cco
As we mentioned at the beginning of this section, this method was illustrated for
the singular nonhomogeneous Fredholm equation (7.45), and its associated homoge-
neous one (7.46), respectively, in Examples 15 and 16 of Section 1.4.2, respectively.
Another illustration with instructive hints is done in Exercise 17 of the same section.
The main issue here, compared to the numerical methods of nonsingular Fredholm
integral equations in the previous section, is how to deal with the infinite limit or
limits of integration? This, we have prepared for in (7.18)-(7.24), (7.26) and (7.27)
of Section 7.1, and illustrated in Examples 3 and 4, and Exercises 2 to 5 in Section
Wek:
The basic idea of the numerical methods starts with approximating the integral in
an integral equation, a subject that we covered in Section 7.1 concerning quadrature
rules for approximating integrals including those with infinite limits. All those
preparations can be summed in the following three basic attempts:
(1) To truncate the infinite limit of integration to a finite limit D, then use a high
order quadrature rule, which is, often, not good enough.
(2) To use the Gauss-Laguerre rule for integrals on (0, oo).
(3) To use the Gauss-rational rule as in (7.24), which, effectively, maps the infinite
interval (a, oo) into a finite interval (-1,1) in a new variable U(€), whence a Gauss-
Legendre rule can be easily applied for this finite interval. This was used in reducing
the singular Fredholm equation (7.51) with infinite domain (0, 00) for u(z) to that of
(7.52) with finite domain (-1,1) for U(€).
We shall illustrate an outline of these (simple) schemes in the following example.
Example 8
Consider the following singular integral equation.
[e-)
fl2l== (4-27)
Qy—d fines
Lair: (E.2)
we (purposely) made up the exact solution for comparing the results of the above
different methods.
We will present here only a brief discussion of the different methods, and leave the
discussion of some analysis and the numerical illustrations for a very similar problem
in Exercises 8-10.
(1) The first method is straight-forward and may start with truncating the infinite
limit of integration to L, then using N-point Gauss-Legendre rule on (0, L). The
computations may be limited to L = 4, 8,64, and N = 4,8.
(2) For a second method we can use N-point Gauss-Laguerre rule for N = 4, 8,
and with the parameter ( in (7.20) varying towards smaller values of 3 = 1, t, is:
The choice for small values of 3 in this example is the result of our experience of
the numerical integration in Example 4(a) of Section 7.1 for slowly varying function
; = , and where very small ( value gave fair results. Here, knowing our solution
in advance as u(t) = (4 + t?)~2 along with the (4 + t?)~2 in the kernel of (E.4)
makes the integrand, aside from the oscillating cos xt factor, a slowly varying one.
This is only an illustration, which suggests the importance of having some idea about
the asymptotic behavior of the solution in steering our efforts towards using the most
suitable (or efficient) quadrature rule.
(3) In a third method we may use the Gauss-rational rule for N = 4,8, and the
parameter 3 ranging towards larger values G = 1,4, 16, as was suggested by our
experience in Example 4(a) of Section 7.1 for approximating integrals on (0, co)
with slowly varying integrands.
The numerical results in this example may be fine for the three methods. Now we
must inquire about the possible analytic reason from the theory that will help guide
us for other examples. The general “regularity” or “well-behaving" conditions fall
along the lines of assuming that, for the (singular) Fredholm equation (E.1) above, its
kernel K (x,t) is square integrable on the first quadrant (0 < z < 00,0 <t < oo),
and that its nonhomogeneous term f(x) as well as the solution u(x) (if we can
estimate its behavior!) are also square integrable on (0, co). In our present example
the (made up in advance) solution u(x) = (4 + 2)~2, and the nonhomogeneous
term f(x) = (4+ 2?)~2[1 — Ze ?*], are clearly square integrable on (0, 00). The
kernel K (z, t) =(4+ 27) 2(4 + t2)~2 cos zt is also square integrable in both x and
t on the first quadrant. Hence, it should not be very surprising if we see convergence
for the approximate numerical methods as they are lead by the comfort in satisfying
the essential conditions of the underlying theorem.
To give an illustration where we don’t have the protection of the above underlying
theory, we give the following example
1 T ie
“(eo TER a 7 +f cos rtu(t)dt (7.57)
Exercises 7.3
(a) Use a four-point Tchebychev rule (7.17) for approximating the integral,
and set up the system of 4 x 4 equations in the four unknowns uj, U2, U3, and
ug. For the locations of the samples z;, consult (7.16), and for the weights see
Table 7.4.
(b) Use a four-point Gauss-Legendre rule to write the 4 x 4 linear equations
(see Table 7.3 in Section 7.1 for the locations of the samples x; and the weight
W3;).
Use a four-point Tchebychev rule (7.17) for approximating the integral, and
set up the resulting four linear equations in the four unknowns wu}, U2, u3, and
Ug at Z) = 0.108, x2 = 0.406, x3 = 0.594 and x4 = 0.897 (see (7.16) and
Table 7.4). Note that we have already adjusted the locations x; of Table 7.4 to
suit the interval (0,1) of the above integral.
EXERCISES 7.3 371
(a) Attempt a Gauss-Legendre rule with N = 2,3, and 4 and make your
conclusion concerning reaching an exact answer.
(b) Solve the integral equation by the method of Section 5.1 for the exact
answer u(x) = x, compare with the answer in part (a), and show why part (a)
gives an exact answer.
(c) Use the exact answer of part (b) to verify the integral equation.
(d) Try part (a) with Simpson’s rule (with the knowledge of the exact solution
ae) SFa).
5. Repeat the computations and error analysis of Exercise 4 for the following
Fredholm integral equation of the second kind (with degenerate kernel of one
term),
us
and its resulting 4 x 4 system of homogeneous equations in u;, U2, U3, and U4
after using the Gauss-Legendre rule in Exercise 2.
(a) Write the system in the matrix form U = AAU, (I — XA)U = 0, and set
det(I — \A) = |I — \A| = Oto find the approximate (two) eigenvalues A; and
do to be compared with their exact values 4; = —6— V/48 and \2y = —6+ V/48.
(see part (b)).
(b) Find the four samples w;, wz, u3, and wu, of the two approximate eigenfunc-
tions of (E.1) corresponding to the two (approximate) eigenvalues found in part
(a). Hint: Note that the four sample values of the approximate eigenfunctions
are very sensitive to the accuracy of the approximate eigenvalues. Indeed we
had to go to ten places accuracy to get the good agreement with the exact
eigenfunctions as shown in part (d).
(c) The homogeneous Fredholm integral equation (E.1) is with degenerate
kernel of two terms. Use the method discussed in Section 5.1 and illustrated
as in Example 2 of that section, to find the above two exact eigenvalues A; and
A. Continue the same method to find the corresponding eigenfunctions Uj (x)
and U2(x). Compare these exact results with the approximate ones in part (a).
(d) Use the Lagrange interpolation formula (1.153) and (1.154) to interpolate
the four approximate sample values of the eigenfunctions in part (a) to find
their continuous approximations U; (x) and U(x). Graph these two functions
and compare them with the graph of the exact one in part (b).
Hint: See Example 7 and Table 7.3 (with its nonequidistant locations 7, £2,
“",0N.-
u(z) = uh ‘i
* 2+ u(t)at,
1l+a2 1
(b) Use a four-point Tchebychev rule on the interval 0 < € < 1 for the
approximate numerical set up, as 4 x 4 system of linear equations in the four
approximate samples of the new solution U(€),0 < € < 1.
(c) Solve the system of equations in part (b) to find U(€;), i = 1, 2,3, 4, then
Se) ae 3 to report the approximate values, of the actual solution u(z;),
LA NS pc
Appendix A
The Hankel Transforms
To further support our presentation in Sections 1.4.2 and 1.4.3 of the Fourier and
other integral transforms, we will present the Hankel transforms.!
As was presented in (1.124)-(1.128) in Section 1.4.3, the Hankel transform F;, (A)
of f(x), 0 <r < oo is defined as
For more detailed references, see Jerri [1992] and Sneddon [1972].
373
374 APPENDIX A: THE HANKEL TRANSFORMS
,2 iat dU(A,) z)
then apply the mixed condition (E.2) and (E.3) on u(r, z) in (E.7) to obtain
In Section 1.4 we presented the finite Fourier sine and cosine transforms, and here we
present the finite Hankel transform of a function f(r), defined on the finite interval
(0, a), as
Fy (Ak) = / Tn (Agr) f (r)dr (A.4)
0
where {Aa} are the zeros of Jn,
dana) = 0, [ed eoogee (A.5)
With the aid of Fourier-Bessel series we can easily see (see Exercise 3) that the
inverse finite Hankel transform is
=52 3
a
Fatae
J( Apt)
(A.6)
where the sum is over the index k of the zeros Axa = Jn zkOf Jn in (A.6).
We note that to find the inverse Finite Hankel transform f(r) in (A.4), we are
asking for solving (A.4) as an integral equation in f(r). This solution f(r) is found
in terms of the Fourier-Bessel series (A.6).
376 APPENDIX A: THE HANKEL TRANSFORMS
Exercises: Appendix A
1. Verify that u(r, z) in (E.11) of Example | is the solution to the problem (E. 1)—
(E.3).
2. The orthogonal expansion (Fourier-Bessel series) of a function f(r) defined
on (0, a) in terms of the Bessel functions Jn (Agr) is
where Axa is the kth zero of the Bessel function J, (usually written as jn,4)
and the sum is over the index k of these zeros. The Fourier-Bessel coefficients
are given by
Pt In(Agr) f(r)dr ~
Gas daa lies (E.2)
So VIZ (Ar) dr
(a) Relate the Fourier coefficient c, to the finite Hankel transform F,,(A,) in
(A.4) (of Section A.2).
(b) Use (E.1) and the result in part (a) to verify the Fourier-Bessel series in
(A.6) (the inverse finite Hankel transform) of f(r) = 1,0 < r < 1. Hint:
Use (A.6) and (A.4).
3. Fluid flow through circular aperture in a wall. It is known that the velocity
potential v of the flow of a jet of perfect fluid satisfies Laplace equation (2.48)
(see also (E.1) of Example | here).
(a) Formulate the problem of this steady jet flow through a circular aperture
of unit radius | in a plane rigid wall where the velocity distribution in
the hole (at the wall z = 0 when we take z as the direction of the flow
perpendicular to the wall) is given by v(r,0) = f(r) and that the slope
of the velocity potential is zero on the wall (outside the hole).
(b) Use Hankel transform to solve the problem and reduce the mixed boundary
conditions to dual integral equations.
(c) Solve the dual integral equations for the case of constant velocity potential
f(r) = 1 at the entrance of the aperture. Hint: See Exercise 1, Section
26;
4. Find the Laplace transform of the following initial and boundary value problem
in u(r, 0,t), the displacement of a membrane with zero initial displacement
u(r,0,0) = 0,0 < r < wo, 0 < @ < 2m and a given initial velocity of
55(79:0) = 9(1,8),0 <r <00,0<8< 2m,
(E.1)
EXERCISES: APPENDIX A 377
Hint: From (1.69), (E.3), and (E.4) it is clear that only ory in (E.1) is suitable
for Laplace transformation. So let U(r,6,s) = L{u(r,6,t)} and Laplace-
transform both sides of (E.1), realizing that the differentiation with respect to
r and @ on the right side of (E.1) can be exchanged with the integration with
respect to t of the Laplace transformation.
| Sh aati aliA, area
'
|
= a
i wis pear 7
: ta EOE ou usa ne
ch i —
_ tm eA 4h
moe ose oi es as :
dhe im foe ntadeena
s eye
“DL pori Th 4 i
“yay <3 lO? WeaSita
= te 7 ounmeartcenn "i
»gabe fasi ive sats 4 a orients re,
ne one gates1Biow teoghednes ae Ji)
es ae Mihi ashHee basiy” 7
we ot a
| i Wand olese te Tw yh oae Chale hp-oeiacl
7 ; vue ee mad ¢ ai iow Go wile ah)
Seat
spr D + ye
=
7 > i. take ’ oo eG m gfewretes wi Alig ot Ji55 iy
aa
- LSM TS Py om ie voqunhe Barta - 2|
> Oger wtagsge Se onde ne
a
a
eta) toyee?
age. = ae
te ~~. a eo)
eee: al teeta & tee Sake
Fh. °
Ree
~~ AR fe
hes re woeOh ttn Corploceeane
_ ue oe | hans
ia
; a Ls mat
a 1
2 » Oo> -~
.% *
Appendix B
Green’s Function for
Various Boundary Value
Problems
The following is only a collection of very useful results that center around familiar
boundary value problems, associated with differential equations, along with their
Green’s functions that facilitate giving the Fredholm integral equation representation
of the boundary value problem. In addition, and when possible, we supply the
eigenvalues and eigenfunctions, which are of great value for the construction of the
Green’s function (Section 4.1) and the analysis and construction of the solutions to
Fredholm integral equations of Chapter 5. The theory behind the derivation of some
of these results may not be found detailed in this book.
379
380 APPENDIX B: GREEN’S FUNCTION FOR VARIOUS BOUNDARY VALUE PROBLEMS
AS) Mee
0 eG ant) = (B.2)
t(a Le ) ae
a
du
7 (B.3)
u(0)
=0 (B.4)
u(a)
=0 (B.5)
_ nnxr nit \ 2
Un = sin — n= (=) (B.6)
u(0) = 0 (B.4)
0) (B.5)
VR) Seen yt ee il (B.6)
(See Exercise 3, Section 2.5.)
oat+ Au = 0 (B.3)
u(0) = u'(0) (B.4)
AGO Reese Tx ly) (B.5)
(B.1)
(B.2)
x(t —1)-1, Get e
du
da + Au =0 (B.3)
u(O) = (1) (B.4)
B.1 GREEN’S FUNCTIONS IN TERMS OF SIMPLE FUNCTIONS 381
@ui x
1
(f) u(x) =—-A | K(a,t)u(t)dt (B.1)
0
1
Err eth 1 Oa ot
K(x,t) = G(z,t) = ae 4 ) (B.2)
aa bin Mae NY), eS oy al
du
u(0) =0 (B.4)
u(ly= 0 (B.5)
w'(0) = u'(1) (B.6)
1
(g) u(x) = | K (a, t)u(t)dt (B.1)
0
u'(0) = (B.5)
eG) a0 (B.6)
ul(1) =0 (B.7)
382 APPENDIX B: GREEN’S FUNCTION FOR VARIOUS BOUNDARY VALUE PROBLEMS
ee ln NOt
G(e,t) = {hie Veaei
1
(i) u(x) = | K (a, t)u(t)dt (B.1)
ae
Les
BNC sever er SPA ies (B.6)
Answers to Exercises
Chapter 1
Exercises 1.1, p. 20
7. Yes
8. Yes
12. MG
h(a )ala = at pe Renee (E.2)
where
Jikajule) = oe) o(e) = (B3)
and
fea (B.4)
384 ANSWERS TO EXERCISES
d ; Rese a
14. The resulting differential equation = —gp with the initial condition p(0) =
1 give pia) ee
15. a) The possibly needed differentiation of the known measured data g(x) is very
sensitive to the inaccuracy of the measured data g(x)
€). 0 (Emin = 0: 0am) co:
nm/2
Wig u(a) =) i K(z,é)u(é\dé, K(2,€)=
Exercises 1.2, p. 28
@ae)=F=2f
(lee de ye
(+
1 1
Ja
Exercises 1.3, p. 40
d2
4. 3,3 + (2A-1)u=0
2
5. (a) a + Au(xz) = 0
(b) u(Q). = 0, u(1) = 0
(Cee sin nit. = 1-5 1 — 1, 2, 3,---
Exercises 1.4, p. 73
(a) oes
(b) xo
1
() s?(s — 2)
() 5— 5, Uf) = C{uld)}
1 _ sU(s)
(€) 5 [sU(s)
—uO] = <=
386 ANSWERS TO EXERCISES
2s
Ori?
mayen.
co) [teat
0
(c) e3* sin /5a
V5
(d) ze** + e27 +1
Lets wt)
ON Gan
Orel tent
. (a) fi(t) # fo(t) when t = 3 as in (i) and (ii) of part (a). Indeed f(t) and
f2(t) could differ on a set of points t;, t2,---, tn on (0, 00).
. f@e)=bScosx
» Keta7—0nni(Es))toihave
51 | ORO = [:fi(-t)fi
ase
(—bat
ANSWERS: CHAPTER 1 387
=| AWwhwau,
oo) nr
—0o
—oo —oo
after a simple change of variable u = —t in the right integral, and the use of
ff=\f/.
15. (a) F(A) = G(A) + KQ)FQ), FO) = F{f(2)}, GA) = F{f(x)},
K(A) = F{k(a)}
b) F = G(A) Ny
patie ie aes
ixXr G(A)
De) Saaryayeay ae heme’
1620(2).—= nb =
a ee F(t)
ToOuKa
MK. itt
dt (EZ)
where F(t) and K (t) are the Fourier transforms of f(x) and k(x), respectively.
Lie ; ‘A .
= a1 =|2| 11b*—-a —b|2|
Coe coor SF
18. s
Biel sf ea sin » 1
oO)= 5 fimeHE(Fm).
TA)
19 (a) F(A) = —y a>0O
Exercises 1.5, p. 94
1. @ |ae
ge = 71 70-7854
(b) 0.7828
(d) The absolute value of the actual error from the exact value of the integral
in part (a) and the numerical approximation in part (b) is
Chapter 2
(b) N(s) = :
Ss nb) =nee «*) ne > k > 0
OEdt AO AD) tn
t
(b) n(t) = no — a n(7)dr
0
(c) n(t) = noe *#
if no
t
_ (a) n(t) = noew*/T a —(1/T)(t-7)
eae i MT ENS) Sea SIE:
N(s) = L{n(t)}
(b) n(@) noe 1/7) Ht
. (a) g(t) = boh(t)
(b) b(t — 7) p(r)m(r)
Ar
B
© | b(t — r)p(r)m(r)dr
B
(d) b(t) = boh(t) + fsb(t — r)p(r)m(r)dr
a
‘Jerri, 1999
ANSWERS: CHAPTER 2 389
d
ie (a) = = 6
2. (ay
#6, do
3. (a) What should be the potential distribution on the boundary of the disk such
that the potential u(r, 6) at a given point (r,0) takes a predetermined value
u(r, 8) = g(r,6)?
Exercises 2.3, p. 112
(b) y(x) = ∫₀ˡ K(x,ξ) p(ξ) dξ, where K(x,ξ) is as given in (2.26).
(i) A solution close to an exact one, where we take the density distribution to be zero
outside the beads and equal to one on the two intervals occupied by the beads. The answer here is
²The detailed solution of this problem (over five pages, with figures) is found in The Student's Solution
Manual accompanying this book [Jerri, 1999]. See the end of the preface for more information.
3a | Te _ 227
800T) 288% 36007) ’
“1 4002? + 16/? — 1912! (2
800 To 288To
1 1800x? + 721? — 94721
ee
9i(i — x) i
aah eae
cr aye
Bt 8
Wcv= 9
800T> 800T) — 7200 To 4 3
gi(i—a) 1 1442? + 641? — 19921
800T> 288 To
_ __1 15191? — 489421 + 36002” ee 3
77200 To Ree ot!
Qii=2x) 17l(l=2) . 2531 —2) 3l wees
800To 288T> «36007 Ace tox
with its above five branches.
(ii) An approximation to the problem, where we take the weights of the two
beads as two point forces located at their respective centers of masses €; =
10 1a 1 aio Re eye ea .
De aiea many eh “7h ying ait a faa
eetLala,ea 9!
Pee lg
oe 171
ead
Via) = (3 (=.55) + 5K (= a)
Aijig sil, ig ta) _ légs
Tel |205 0m 12) 240) eS Toe
0<a<H
y(x) =4 nls
Tot [20° 40pl- +e12lg Boe
1 E 91 yetle 271 + 8x
24 120Tp
<a< tt
ey)1 [pe
[lg Reh
9l lg Sey
ee 171 fl 141
= yee — 142
eee
To E; 400 Donia © 2) iy
tl <a<l.
eae
y(x) = (x°
gcr
oT, , 3 ae— 22°1 2 + 3 I’)
5. —f(yo)
Va =-y2er =| Aw
yo
0 OS
O)y(a)=2+ f(ta)ylta 0
b
ya) = | KG, tyyttyat, ho(2,t) =
0
0 re OK
2. (b) ae
>
craves a<t
(a) d²u/dx² = (λ + 1)u(x),  u(0) = 0,  u(1) = 0
(b) uₙ(x) = sin nπx,  λₙ = −n²π² − 1
Bu | By
1S
Asim, woes A .0
where K(,3)= 4288 Ozer <> 0
0, 4g > Il
F(z) = {He) Meee”
Chapter 3
(b) u(x) = eˣ − 1
5 o) = coshx
Cua) 5sine
9. F(x) =-1+Ce*,
23 ae
a(g\—Cress=ress C= 1
10. (b) The inequalities generated for the bounds (E.1) of the iterated kernels, in
particular the n! in the denominator of the bound.
11. u(x,y) = f(x,y) + λ ∫₀ˣ ∫₀ʸ K(x,y; ξ,η) u(ξ,η) dξ dη,
where f(x,y) is a given function.
— [ (x — t)*~'
f(t)dt [This is equation (3.40).]
0
5. f(x) in the Fredholm integral equation (E.2) is restricted to only the linear
combination f(x) = A sin x + B cos x of sin x and cos x, where A = ∫₀¹ u(ξ) cos ξ dξ
and B = ∫₀¹ u(ξ) sin ξ dξ are constants. On the other hand, f(x) of the Volterra
integral equation of the first kind (E.1) can be the more general function
f(x) = g(x) sin x + h(x) cos x, where g(x) = ∫₀ˣ u(ξ) cos ξ dξ and h(x) = ∫₀ˣ u(ξ) sin ξ dξ.
Exercises 3.3, p. 162
1. (a), (b)
x       sin x (exact)    approximate
0       0.0              0.0
0.5     0.4794           0.50
1.0     0.8415           0.875
1.5     0.9975           1.03125
2.0     0.9093           0.92969
2.5     0.5985           0.5957
3.0     0.1411           0.1128
3.5     −0.3508          −0.3983
4.0     −0.7568          −0.8098
Table continued.
3. (a), (b)
0 1.0 1.0
~ [oo -0.1107
Fg |2120 -1.2195
3
4 -1.414 - 1.6740
T -1.0 -1.2055
= 0.0 -0.0859
at 1.0 1.0314
tr
7 1.414 1.4942
Chapter 4
(b) G(x,ξ) = { [sin bx sin b(1 − ξ)] / (b sin b),  0 ≤ x < ξ
             { [sin bξ sin b(1 − x)] / (b sin b),  ξ < x ≤ 1
pepe ye”
“2(
7
G(cre) =
2a én<}
7
Kksin =
ONE = =ee
+ (kr)?
y(x) = −∫₀¹ G(x,ξ) f(ξ, y(ξ)) dξ,   G(x,ξ) = { x(1 − ξ),  0 ≤ x < ξ;   ξ(1 − x),  ξ < x ≤ 1 }
Zz sin Ax
- y(t) = ay
bee ari 2 Re :
Syay= 7 sin > + a? 8 + af G(a, €)y()d€,
2
1
— sin 5 (¢ — 2); Sc ieee
G(z,£) =
— sin 5(a - €), Exar
(6-2) ace al ee Se
14. G(z,€) =
(¢ Lad, Gsyee
(Eee)Ge Uren
15: G(z,€) =
(Litt Gam meee a1
—sin~(€-—z), -l<ar<&
16. G(x, €)=
= sin —(ti=-4)) See 1
(b) (i) A unique Green's function does exist, since the homogeneous boundary
value problem has only the trivial solution y(x) = 0,
€-1+(€-2)2, 0<a2<é
G(x,
€) =
(Coa (see ES
G(x,ξ) = { cos(x − ξ + 1/2) / (2 sin(1/2)),  0 ≤ x < ξ
         { cos(ξ − x + 1/2) / (2 sin(1/2)),  ξ < x ≤ 1
20. Uc )e— ion — 1)sinre
4 Tv
21. (b) Un(Z) = Gina, i = 2a
1
ey
eo)
; n+1
ee sinnnrz, ao | 2(-1)"*1
|. sno ngrdr. = —, c=
(0) 2, nm
n=
(d) [Table: x, exact values, and the three-term Legendre series approximation.]
=f iP EY>an(yn)sinnesinng f(E,n)d&dn
nm/2 x! re
Dy (a) y(x) =-/F G(x, dae 1 ee
(x, E)y(€)dé 06”
e(1-2¢), 0<a<é
vie
G(a,€)
O<ar<
-5&(E-a)(1-2), €<a<1
ule)az a=g(2x*a +32 ? —172-5)
172 —5)— -d f(a z (e,e)u()lds,
(6¢-2)e+€—1, OS e<€
G(x,
f) =
(Gaede:
IRE Ses
~ sin 5 (2 — &), Ga a
Chapter 5
2
1. (a) u(x) = y_ Sina,»
i 24)
27
(b)b u(a) == a2 + ee
14 272 (Ana Be ; x + cos2)
— 4Ar sin
2
(c) u(x) = 22
—4 + =y sin?x
» ‘
(d) u(a) = ————(2 cosz + mA sin 2z)
4+ 12)?
(e) u(x) = e* + A(e — 2)(5a? — 3)
(f) u(x) = 2x — ;
2. (a) The eigenvalues of the kernel are λ₁ = 1/π and λ₂ = −(1/π), with their
corresponding (normalized) eigenfunctions φ₁(x) = (1/√(2π))(sin x + cos x)
and φ₂(x) = (1/√(2π))(sin x − cos x), respectively.
(i) λ = 1/√(2π) is not an eigenvalue, i.e., λ = 1/√(2π) ≠ ∓1/π, hence the
problem in (E.1) has a unique solution for arbitrary f(x), which, of course,
includes the given function f(x) = x².
(ii) λ = 1/π is one of the eigenvalues, λ₁ = 1/π, which corresponds to the
eigenfunction φ₁(x) = (1/√(2π))(sin x + cos x).
Since the kernel is symmetric, Theorem 2 requires that in order for (E.1) to
have an infinity of solutions, the given nonhomogeneous term f(x) = sin 3x
must be orthogonal to the eigenfunction φ₁(x) corresponding to λ₁ = 1/π,
which happens to be the case since ∫₀^{2π} (sin x + cos x) sin 3x dx = 0.
Oi@QA= if feg@) =a?
Sal EW, An? yk 1 A i1 a 1
1a) Sa ee (2 rt) sing + 7? (1 rt) cos]
(b) (ii) u(x) = sin 3x + c[sin x + cos x], where c is an arbitrary constant (an infinity
of solutions!).
The kernel K(x,t) = sin(x − t) of 1(d) must not have real eigenvalues, as is
clear from the answer of problem 2(d) with its denominator 4 + π²λ² not
vanishing for any real value of λ! (Indeed, following the steps around (E.6) of
Example 4, we find the two eigenvalues as the pure imaginary ones λ = ±2i/π.)
O(a) Ae NS =e
(b) λ ≠ ±2
(c) The two systems corresponding to λ₁ and λ₂ become either incompatible or
redundant. So there exists either no solution or not a unique solution, depending
on f(x).
@MA =A = 2, ¢1(¢) =A — Zz); N=. = —2,
do(2) = BW = 32)
(e) u(x) = f(x) + A(1 − x) (an infinity of solutions!)
1+ vV1-—-4bA
10. (a) u(x) = ed 5 an
1
(b) u(x) = X
11. (a), (b) The error for both cases is of about the order 10⁻², so it appears that
we have to include many more terms of the Maclaurin series of cos xt than the
above two or three! (A quick check of the kernel truncation error itself is sketched
after the table in part (c).)
(c)
Approximate solution       1.0    0.999989    0.999937    0.999962    1.00144    1.00833
Exact solution, u(x) = 1   1.0    1.0         1.0         1.0         1.0        1.0
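The remark in (a), (b) above can be made concrete by looking at the kernel alone. The sketch below (an illustration only; the kernel cos(xt) is assumed to act on the unit square 0 ≤ x, t ≤ 1) measures how far the two-, three-, and four-term Maclaurin approximations of cos(xt) are from the kernel, which already limits the attainable accuracy of the truncated-kernel approximation.

import math
import numpy as np

# Maximum error of the n-term Maclaurin approximation of cos(xt) over the
# assumed square 0 <= x, t <= 1.
x = np.linspace(0.0, 1.0, 101)
X, T = np.meshgrid(x, x, indexing="ij")
Z = X * T
exact = np.cos(Z)
approx = np.zeros_like(Z)
for k in range(4):
    approx += (-1.0) ** k * Z ** (2 * k) / math.factorial(2 * k)
    if k >= 1:
        print(k + 1, "terms: max kernel error =",
              float(np.max(np.abs(approx - exact))))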
1 ertn
13. (a) v(x) = 2+ ( u(t)dt
a a!
(b) v(x) = 3x
14.
= |K(o,)
KC Bas= Kiya)
after using K = K. The same is done for Ko(z,y).
© f iEcos? (x + t)dxdt = =
—4 cosz sin x )
ula) =2-a( S524 A+ 2/7
(a) The kernel is square integrable since ∫∫ K²(x,t) dx dt < ∞, due to
|sin x cos t|² ≤ 1 and |cos x sin t|² ≤ 1.
d?u 1
Org On es CaO Oa
2az sin /1 + Agu ; “pee
(c) u(x2) = conde + AD) ESET where VIF Hag = wh. and
be Cis) Glaeser
ee
oe Rie D(BR) 2 (QED ne? CERO em ORS:2)
z g k even
= fn k? —1?
Ok odd,
4. (b) See (E.3) and (E.4) of Example 8 for u(x) = 1 and f(x) =
respectively.
Nae e\c
(BAO. valk
ute DUO —
g—2t+X(e+t
— 2at — 2/3)
Resolvent kernel: I(x, t; A) =
1+ 2/2 +2/6
uC) ae ees Ot
1+A/2+)A7/6|3 2 6\6
ee
a) = 2°
a r,t; eeeBe
ze u(x)
fees
5°
Beles De iz
ee
ne ey oe Ee1—r+ )
2/18
— 3Azd
_ 34(2 + 6)
MONS = am ris
(d) Γ(x,t;λ) = [sin(x + t) + (πλ/2) cos(x − t)] / (1 − π²λ²/4),
u(x) = 1 + [1/(1 − π²λ²/4)] (πλ² sin x + 2λ cos x)
D
ecb) u(r)
=f (2)+rAfo ze f(t)dt
. E(w, tA) =sin(a:— 2t),0(2)i=2
Z
u(x) =3+ : (rsing +2drcosz), A? és
= 1 — 7?\2/4 i me
. (a) M1 ~d
(b) λ₁ ≈ 2.58
(a) (i) S₃(x) = 4.1818 sin x
(ii) S₄(x) = 4.91689 sin x − 1.32080 sin 2x + 0.28337 sin 3x − 0.03132 sin 4x
(b)
ay -| -0.5 0.0 0.5 1.0
10. (a) S₄(x) = 1 − 1.5x + 0.5x² − 0.1x³ [very close to the collocation method
of exercise 8(a)]
(b) S₂(x) = −0.8162 sin x + 0.8997 cos x
(C)iS3 (a) =sen 5(exact solution)
2. (b) For f(x) and u(x) = 1 on (0,1), see their Fourier coefficients
in (E.3) and (E.4) of Example 8, respectively.
3. (a)
b n
as / poeta) u(t)dt
eal
n b
= SCO) ifbj,(t)u(t)dt
k= cS
n b
oS Cn AnOn(x
For this solution u(x) to exist, its series must be convergent; but noting that
the eigenvalues λₙ increase with n (see Example 7 with its λₙ = n²π²), the cₙ
must die out (relatively) very fast, which puts a lot of strain (or restriction) on
the class of functions f(x) that allow the associated Fredholm integral equation
of the first kind (E.1) to have a solution.
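A small numerical illustration of this restriction (the eigenvalues λₙ = n²π² are those of Example 7; the assumed decay bₙ = 1/n² for the coefficients of f is an illustrative choice, not taken from the exercise):

import numpy as np

# For the first-kind equation, the would-be coefficients of the solution u are
# lambda_n * b_n, where b_n are the coefficients of f.  With lambda_n = n^2 pi^2
# and the assumed decay b_n = 1/n^2, the products do not decay at all, so the
# partial sums of their squares grow without bound: no square-integrable
# solution u exists for such an f.
n = np.arange(1, 2001)
lam = (n * np.pi) ** 2
b = 1.0 / n ** 2
partial_sums = np.cumsum((lam * b) ** 2)
print(partial_sums[[9, 99, 999, 1999]])   # keeps growing with the cutoff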
=) _bidi(z)
i=l
where
b; = i u(t)c;(t)dt
1 :
5. (a) (i) λ₁ = 1/π, corresponding to φ₁(x) = sin x + cos x; λ₂ = −1/π, corresponding to φ₂(x) = sin x − cos x.
(ii) ∫₀^{2π} sin(x + t) u(t) dt
    = sin x ∫₀^{2π} u(t) cos t dt + cos x ∫₀^{2π} u(t) sin t dt
    = b₁ sin x + b₂ cos x
(c) If there are infinitely many eigenfunctions for the kernel, and they are complete,
then there can be no nontrivial function g(x) that is orthogonal to all of the
eigenfunctions (see a version of Picard's theorem in Theorem 7).
7. (b) No, we cannot limit ourselves to searching for only continuous input u(x) (see
part (a)).
(c) Here u(t) can have a large jump discontinuity, for example, which cor-
responds to practically no change in the continuous f(x), i.e., the input and
output are not related in a continuous way. Thus the problem is called ill-posed,
i.e., not stable, since a very small change in the input may cause a very large
change in the output.
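A minimal numerical sketch of this instability (the smoothing kernel K(x,t) = min(x,t)(1 − max(x,t)), the grid, and the oscillatory perturbation are illustrative assumptions, not the exercise's own data):

import numpy as np

# Forward (smoothing) map f(x) = integral_0^1 K(x,t) u(t) dt by the
# trapezoidal rule, for the illustrative kernel K(x,t) = min(x,t)(1 - max(x,t)).
x = np.linspace(0.0, 1.0, 1001)
h = x[1] - x[0]
X, T = np.meshgrid(x, x, indexing="ij")
K = np.minimum(X, T) * (1.0 - np.maximum(X, T))
w = np.full_like(x, h); w[0] = w[-1] = h / 2

def forward(u):
    return K @ (w * u)

u0 = np.ones_like(x)                # a smooth input
du = np.sin(20 * np.pi * x)         # a large, rapidly oscillating change in u

# The two outputs are nearly identical even though the inputs differ by O(1):
# a tiny change in the measured output can hide a large change in u, which is
# exactly what makes solving for u unstable.
print("size of input change :", np.max(np.abs(du)))
print("size of output change:", np.max(np.abs(forward(u0 + du) - forward(u0))))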
8. If f(t) ≠ 0, this would mean that such a static load of density distribution
p(x) = f(x) in (2.28) gives no deflection, i.e., y(x) = 0 at any point, but we
expect some deflection y(x) as in (2.28) and Fig. 2.2.
2. (a), (b)
x      (a) Simpson's rule    (b) Exact    Example 20 (trapezoidal rule)
0      0.99987               1            1.013
0.5    0.99992               1            1.009
1.0    0.99967               1            1.021
s (@)
(i) u(0) = 1.0, u(0.5) = 0.367, u(1.0) = —0.11019 [see part (b)]
(b) (a,1)
(a,il)
Approximate
Part (a,i) 1
Part (a,1i) o*ioe)NnNn onl| ee)
— aeen=) owdss—~ o is*)Nn~
Exact, .(ii(@)=
e~* — g/2) = oS ooNnGN o ~ — \o oS NnKe)— S aS—~ o OWNnONN
Approximate u(x) =
S3(z) = 1—- 1.4412
+0.3102? 1 0.859 0.7242 0.5956 0.4732 0.3568
- ~ — - -0.11
0.249 0.147 0.05 -0.043 -0.132
0.249 0.147 0.05 -0.043 -0.132
0.247 0.143 0.046 -0.046 -0.131
Sat yee aD
n
(1 − (Δt/3)K₀,₀)u₀ − (4Δt/3)K₀,₁u₁ − ⋯ − (4Δt/3)K₀,₂ₙ₋₁u₂ₙ₋₁ − (Δt/3)K₀,₂ₙu₂ₙ = f₀
−(Δt/3)K₁,₀u₀ + (1 − (4Δt/3)K₁,₁)u₁ − ⋯ − (4Δt/3)K₁,₂ₙ₋₁u₂ₙ₋₁ − (Δt/3)K₁,₂ₙu₂ₙ = f₁
⋮
−(Δt/3)K₂ₙ,₀u₀ − ⋯ − (4Δt/3)K₂ₙ,₂ₙ₋₁u₂ₙ₋₁ + (1 − (Δt/3)K₂ₙ,₂ₙ)u₂ₙ = f₂ₙ
(b)
0) 0.9987
0.5 0.9992
1.0 0.99967
z | (Oe V2 eye be
Numericals n-— 3) OL0F 21h73) = 17390:0
Exact, u2(z)
=V2sin2xx 0.0 1.23 -1.23 0.0
Chapter 7
[aa
—— iy er Parte Wah ata
u
7" lesan!
(i) If we use the Gauss–Laguerre rule with N = 8, β = 0.2, we have an
approximate answer of 0.14999. The exact answer is π/20 = 0.1570796.
∫₀^∞ dx/(100 + x²)
For N = 8, β = 10, we prepare the data for the weights wᵢ and use the Gauss–
Laguerre quadrature to obtain the approximate value 0.15707944 for the integral.
The exact answer is π/20 = 0.15707963.
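For reference, the same integral can be checked with the Gauss–Laguerre nodes available in NumPy. The sketch below uses the plain weight-compensation form together with the simple substitution x = 10u in place of the β-weighting above, so it only illustrates the idea of matching the integrand's decay to the Laguerre weight; it does not reproduce the text's own tabulated data.

import numpy as np

def f(x):
    return 1.0 / (100.0 + x**2)

nodes, weights = np.polynomial.laguerre.laggauss(8)     # N = 8

# Plain rule: int_0^inf f(x) dx = int_0^inf e^{-x} [e^x f(x)] dx
plain = np.sum(weights * np.exp(nodes) * f(nodes))

# Scaled rule: substitute x = 10u, dx = 10 du, before applying the same idea
scaled = np.sum(weights * np.exp(nodes) * 10.0 * f(10.0 * nodes))

print("plain rule :", plain)
print("scaled rule:", scaled)
print("exact pi/20:", np.pi / 20.0)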
CO [e-@)
aioe (5) a= f
ae U eo)
(B) sin (= du = | e “sinu du
eer
fe eesinade == 28 EI)
| 1S ds
where
For N = 6, we prepare the data in the same way as in part (a) to obtain the
approximate value 0.4955. The exact answer is 0.5.
Since the integrated function in (E.2), with respect to the weight ρ(y) = e⁻ʸ on (0, ∞),
is f(y) = (y + 1)³, a polynomial of degree m = 3, a two-point Laguerre
rule will give the exact result because m = 3 ≤ 2N − 1 = 2(2) − 1 = 3.
With x₁ = 2 + √2, x₂ = 2 − √2, w₁ = ¼(2 − √2), and w₂ = ¼(2 + √2),
the two-point shifted Laguerre rule in (E.2) gives
∫₁^∞ e⁻ˣ x³ dx ≈ 16/e.
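This exactness is easy to verify numerically with the two nodes and weights just listed (a quick check only):

import numpy as np

# Two-point (shifted) Gauss-Laguerre check: nodes 2 +/- sqrt(2) with weights
# (2 -/+ sqrt(2))/4 integrate e^{-y} times any cubic exactly, so the rule
# reproduces int_1^inf e^{-x} x^3 dx = 16/e.
nodes = np.array([2.0 + np.sqrt(2.0), 2.0 - np.sqrt(2.0)])
weights = np.array([(2.0 - np.sqrt(2.0)) / 4.0, (2.0 + np.sqrt(2.0)) / 4.0])

# After the shift x = y + 1, the integrand against e^{-y} is (y + 1)^3 / e.
approx = np.sum(weights * (nodes + 1.0) ** 3) / np.e
print("two-point rule:", approx)
print("16/e          :", 16.0 / np.e)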
(a) The result is not exact for the two-point Gauss–Hermite rule because the
function f(x) = x⁴ is of degree 4 > 2N − 1 = 2(2) − 1 = 3.
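A companion check, using NumPy's Gauss–Hermite nodes (the three-point case is added only to show that one more point restores exactness):

import numpy as np

# Two points cannot integrate e^{-x^2} x^4 exactly (degree 4 > 3), but three can.
for n in (2, 3):
    x, w = np.polynomial.hermite.hermgauss(n)
    print(n, "points:", np.sum(w * x**4))
print("exact value  :", 0.75 * np.sqrt(np.pi))   # int e^{-x^2} x^4 dx = (3/4) sqrt(pi)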
1. @ eo: =10(0)=1>0:-0=1
Using the trapezoidal rule (for u₁) with Δt = 0.05, t₀ = 0, t₁ = 0.05,
0.1
Simpson’s Rule , At = 0.05, tg = 0.10,h = >
al
ua = 0.76 + “(3.5610 + 13.16u; + 3u2], ue = 1.105127
(Sy — PANoe
(b) The comparison of these results with the exact solution u(x) = eˣ is
presented in the following table.
See also the answer to Exercise 4(e) for slightly more accurate results.
2. (a) u₁ = 1.05263
u₃ = 1.16344
u₅ = 1.28201
u₇ = 1.41863
u₉ = 1.56959
(b) The comparison of the numerical results with the exact solution u(x) = eˣ
is given in the following table.
02
0.97u,; = 0.9584 + “5 [8.1189u),
u₁ = 1.020190
and if we proceed parallel to Exercise 1,
u₂ = 1.040810,
u₃ = 1.06183,
u₄ = 1.083286.
5
0.25u, =-l+ A’ Uy = 1.0
us. = —1.60667
(b) With the exact answer u(x) = eˣ > 0, this negative value for
u₂ = −1.66667 suggests a breakdown of the numerical method due to the
inaccuracy inherent in u₁ = 1.0, which is very far from its exact value of
e^{0.5} = 1.6487. In Exercise 1 with Δt = 0.05, we had u₁ = 1.051081
compared to its exact value of e^{0.05} = 1.05127, i.e., a good enough resolution
of Δt = 0.05 that preserves the characteristics of the solution u(x) = eˣ inside
the integral.
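The effect of the step size can be seen with a simple marching scheme. In the sketch below, the Volterra equation u(x) = 1 + ∫₀ˣ u(t) dt (whose exact solution is eˣ) and the trapezoidal-rule scheme are illustrative stand-ins for the exercise's own equation:

import numpy as np

def march(dt, x_end=1.0):
    # Trapezoidal-rule marching for u(x) = 1 + integral_0^x u(t) dt.
    n = int(round(x_end / dt))
    u = np.empty(n + 1)
    u[0] = 1.0
    for i in range(1, n + 1):
        # u_i = 1 + dt*(u_0/2 + u_1 + ... + u_{i-1} + u_i/2), solved for u_i
        known = dt * (0.5 * u[0] + u[1:i].sum())
        u[i] = (1.0 + known) / (1.0 - 0.5 * dt)
    return u[-1]

# A coarse step gives a noticeably poorer value at x = 1; a finer step
# tracks the exact solution e^x closely.
for dt in (0.5, 0.05):
    print("dt =", dt, "  u(1) ~", march(dt), "  exact e =", np.e)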
tS
IOS ee / Gernaee 0
Oyee = sinks,
Meso (=) inn (=)
Exercises 7.3, p. 370
1. (a) First we use the transformation ξ = 2t − 1, dξ = 2 dt, to have an integral
on the symmetric interval −1 ≤ ξ ≤ 1, ready for using the Tchebychev rule
(7.17),
u(t) = 2? — a ¢ + ees) u (SS) dé. (E.1)
1
1.505271u₁ + 0.52085309u₂ + 0.5304834u₃ + 0.54606562u₄ = 0.01054174     (E.2)
0.52085309u₁ + 1.5825008u₂ + 0.62060115u₃ + 0.68224891u₄ = 0.16500169     (E.3)
UE af (x + t)u(t)dt.
N = 3:  u₁ = 0.112701,  u₂ = 0.50,  u₃ = 0.887298
N = 4:  u₁ = 0.069432,  u₂ = 0.330010,  u₃ = 0.669991,  u₄ = 0.930568
(b) The computations, using the method in Section 5.1 for the present equation
with degenerate kernel, show that the exact solution is u(x) = x. Part (a)
gives an exact answer with N = 2, since with the exact solution u(x) = x, the
integrand in the integral above is of degree 2 < 2N − 1 = 2(2) − 1 = 3, hence
it is approximated exactly by the two-point Gauss–Legendre rule. Of course,
what concerns the numerical rule is that the error at the sample locations, in
this case x₁ and x₂, vanishes.
(d) Simpson's rule will also give the exact result, being a rule of degree 2, which is
the degree of the polynomial (x + t)u(t) = (x + t)t = xt + t² integrated with
respect to t in (E.1).
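The computation in part (a) can be reproduced with a few lines of the Nyström method. In the sketch below, the right-hand side f(x) = x/2 − 1/3 is an illustrative choice, made only so that the exact solution is u(x) = x as in part (b); it is not necessarily the exercise's own right-hand side.

import numpy as np

def nystrom(n):
    # Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1]
    z, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (z + 1.0)
    w = 0.5 * w
    K = x[:, None] + x[None, :]                 # degenerate kernel x + t
    f = 0.5 * x - 1.0 / 3.0                     # chosen so that u(x) = x
    # Discretized second-kind equation (I - K W) u = f at the nodes
    u = np.linalg.solve(np.eye(n) - K * w[None, :], f)
    return x, u

x, u = nystrom(2)
print("nodes   :", x)
print("solution:", u)    # agrees with u(x) = x at the nodes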
' 1
(i) N = P49 (ie 9? Uj =A) 4) U2 = 1.0
il 1 2
ENS Ns 3 aE oe Uw= Fs Ua 10)
1 1 1 3
(ii)
11] N=
—_ 4,h = itil a
Sq aolees
=c,igo 18 =-,
= 1d = 10)
(a) We have a degenerate kernel, where the method of Section 5.1 results in the
exact solution u(x) = sin x.
(b) Since the exact solution is u(x) = sin x, the integrand in (E.1) is (xt) sin t,
which is not a polynomial in t. Hence no finite-degree (polynomial) quadrature
rule can approximate it exactly.
4 d
Gauss-Legendre rule,
(i) N = 2
u₁ = 0.320388, u₂ = 0.924892
(ii) N = 3
u₁ = 0.176134, u₂ = 0.707221, u₃ = 0.984574
(iii) N = 4
u₁ = 0.108847, u₂ = 0.495471, u₃ = 0.868624, u₄ = 0.994058
i    xᵢ          uᵢ (approximate)    sin xᵢ (exact)    error uᵢ − sin xᵢ
N = 2
1    0.331948    0.320388            0.325886          −0.00549755
2    1.238848    0.924892            0.945409          −0.02051700
N = 3
1    0.177031    0.176134            0.176108          2.58 × 10⁻⁵
2    0.785398    0.707221            0.707107          1.14 × 10⁻⁴
3    1.393765    0.984574            0.984371          2.03 × 10⁻⁴
N = 4
1    0.109064    0.108847            0.108847          −4.8 × 10⁻⁸
2    0.518378    0.495471            0.495472          −2.3 × 10⁻⁷
3    1.052419    0.868624            0.868624          −4.7 × 10⁻⁷
4    1.461733    0.994058            0.994058          −6.5 × 10⁻⁷
(d) The reason for the good results is the smoothness of the integrand t sin t of
K(x,t)u(t) = (xt) sin t in (E.1).
6. The determinant is zero since the kernel is symmetric, and when using the
Tchebychev rule this symmetry is preserved in the coefficient matrix, as seen
in the answer to Exercise 2. With this symmetry, the determinant vanishes,
which excludes the use of Cramer's rule. So we must first check the theory
of Fredholm integral equations of the first kind in Section 5.4, to ensure the
existence of a solution to the above integral equation before we embark on a
numerical solution. See Section 5.4, where the condition for the existence of a
unique solution of Fredholm integral equations of the first kind is, in general,
(much) more restrictive than that for Fredholm integral equations of the second
kind (Sections 5.1-5.3).
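A small numerical illustration of the singular coefficient matrix (the symmetric, degenerate kernel sin(x + t) on (0, 2π) and the equal-weight rule used below are illustrative stand-ins for the exercise's own kernel and quadrature rule):

import numpy as np

# For a first-kind equation the discretized system is K(x_i, t_j) w_j u_j = f_i.
# A degenerate kernel such as sin(x + t) has rank 2, so the matrix becomes
# (numerically) singular as soon as N exceeds 2, and Cramer's rule fails.
N = 4
x = 2 * np.pi * (np.arange(N) + 0.5) / N       # equally spaced sample points
w = 2 * np.pi / N                               # equal weights
A = np.sin(x[:, None] + x[None, :]) * w
print("determinant   :", np.linalg.det(A))          # essentially zero
print("numerical rank:", np.linalg.matrix_rank(A))  # 2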
7. (a)
λ₁ = −6 − √48,  λ₂ = −6 + √48.
The first eigenfunction corresponding to λ₁ = −6 − √48 is
6 4
ui (x) = bees
(4+ 5v48)
The second eigenfunction corresponding to λ₂ = −6 + √48 is
oe _ (-6
+ V48)
EEO
Ley aRy
(d) The interpolations of the four sample values of the two approximate eigen-
functions of (E.1) are
(i)
U₁(t) = 6.528178475(t − 0.330010)(t − 0.669990)(t − 0.930570)
      − 8.052147011(t − 0.0694320)(t − 0.669990)(t − 0.930570)
      − 3.015830589(t − 0.0694320)(t − 0.330010)(t − 0.930570)
      + 4.539799149(t − 0.0694320)(t − 0.330010)(t − 0.669990)
0
4.7u, + 3.4u2 + 1.5u3 + 1.2u4 = 0.9 x 101°,
11.1u₁ + 7.1u₂ + 3.2u₃ + 2.4u₄ = 0.3 × 10⁴,
(b) 4.6u₁ + 3.4u₂ + 1.5u₃ + 1.1u₄ = 0.4 × 10³,
14.4u₁ + 10.5u₂ + 5.4u₃ + 3.6u₄ = 0.5 × 10²,
(c)
€ aS g u(z)
Appendix A
/pee!
Ia Np) at a m1
oO) 2 (2AR)-
a2 Lay 1 Ov HOw
x ae! 2 0,
Or? Gn oe
dV (Ad, z)
(b) a MV Oz Viz) = Aen”
v(r,z) = ∫₀^∞ λ J₀(λr) A(λ) e^{−λz} dλ
where A(λ) is the solution of the dual integral equations
f(r) = ∫₀^∞ λ J₀(λr) A(λ) dλ,   0 ≤ r < a
0 = ∫₀^∞ λ² J₀(λr) A(λ) dλ,   r > a
(c) See the answer to Exercise 1, Section 2.6.
References
Atkinson, K.E., The Numerical Solution of Integral Equations of the Second Kind,
Cambridge University Press, New York, 1997.
Andersen, R.S., and F.R. de Hoog, The Application and Numerical Solution of Integral
Equations, Sijthoff and Noordhoff, Alphen aan den Rijn, The Netherlands, 1980.
Anton, H., Calculus with Analytic Geometry, 5th ed., John Wiley and Sons, Inc.,
New York, 1995.
Arfken, G., Mathematical Methods for Physicists, Academic Press, New York,
1970.
Baker, C.T., and G.F. Miller, Treatment of Integral Equations by Numerical Meth-
ods, Oxford University Press, London, 1977.
Bell, E.T., The Development of Mathematics, McGraw-Hill, New York, 1945, pp.
524-30.
Briggs, W.L., and V.E. Henson, The DFT: An Owner's Manual for the Discrete
Fourier Transform, SIAM, Philadelphia, 1995.
Brigham, E.O., Fast Fourier Transform and its Applications, Prentice-Hall, Engle-
wood Cliffs, N.J., 1988.
Cochran, J.A., The Analysis of Linear Integral Equations, McGraw-Hill, New York,
1972.
Collatz, L., Functional Analysis and Numerical Methods, Academic Press, New
York, 1966.
Davis, P.J., Interpolation and Approximation, 2nd ed. Blaisdell, New York, 1965.
de Hoog, F.R., Review of Fredholm Integral Equations of the First Kind, in The
Application and Numerical Solution of Integral Equations, Andersen, R.S., et al.,
eds., Sijthoff and Noordhoff, The Netherlands, 1980.
Delves, L.M., and J.L. Mohammed, Computational Methods for Integral Equations,
Cambridge University Press, London, 1988.
Ditkin, V.A., and A.P. Prudnikov, Integral Transforms and Operational Calculus,
Pergamon Press, Elmsford, N.Y., 1965.
Golberg, A.M., Solution Methods for Integral Equations: Theory and Applications,
Plenum Press, New York, 1979.
Green, C.D., Integral Equation Methods, Barnes & Noble, New York, 1969.
Jerri, A.J., Elements and Applications of Integral Equations, UMAP J., Vol. 7, pp.
45-80, 1986 (Also UMAP Module #609, 1982.)
Jerri, A.J., Integral and Discrete Transforms with Applications and Error Analysis,
Marcel Dekker Inc., New York, 1992.
Jerri, A.J., The Gibbs Phenomenon in Fourier Analysis, Splines and Wavelet Ap-
proximations, Kluwer Academic Publishers, Boston, 1998.
Jerri, A.J. and R.L. Herman, The solution of Poisson-Boltzmann equation between
two spheres — Modified iterative methods, J. Sci. Comp., 11, pp. 127-153, 1996.
Jerri, A.J., R.L. Herman, and R.H. Weiland, The modified iterative method for non-
linear concentration in cylindrical and spherical pellets, J. Chem. Eng. Commun.,
52, pp. 173-193, 1987.
Kanwal, R.P., Linear Integral Equations, Theory and Technique, Academic Press,
New York, 1971.
Lonseth, A., Sources and Applications of Integral Equations, SIAM Review, 19,
pp. 241-278, 1977.
Miller, R.K., Nonlinear Volterra Integral Equations, W.A. Benjamin, Menlo Park,
CA, 1971.
Petrovskii, I.G., Integral Equations and Their Applications, Vol. 1, Pergamon Press,
Oxford, 1960.
Sneddon, I.N., The Use of Integral Transforms, McGraw-Hill, New York, 1972.
Widder, D.V., The Laplace Transform, Princeton University Press, Princeton, 1946.
Wing, G.M., A Primer on Integral Equations of the First Kind - The Problem of
Deconvolution and Unfolding, SIAM, Philadelphia, 1991.
Wolf, K.B., Integral Transforms in Science and Engineering, Plenum, New York,
1978.
Index
inverse, 1, 73 pair, 79
method, 146, 150 Mercer’s theorem, 239, 251
of derivatives, 46 method of traces
pairs, 49 for estimating eigenvalues, 261
transform method metric, 303
difference kernel, 143 space, 303, 313
Laplacian, 76 midpoint formula, 80, 84
least squares Miller, 84, 357
criterion, 271 minor, 94
method, 268 mixed boundary conditions
Lebesgue convergence theorem, 39 dual integral equations, 124
Legendre polynomials, 199, 335, 336, modified mapping, 322
382 Mohammed, 84, 268, 357
Leibnitz rule momentum (Fourier) space, 54
generalized, 31 mortality of equipment, 2, 6, 104
limit of a sequence multiple integrals
in metric space, 306 reduced to single integrals, 31
limit point, 302 multiplicity, 238
linear
combination, 30 Neumann, 134
equations, 211 series, 134, 138, 260, 269
Fredholm equations, 316 solution, 258
integral equations, 26, 300 neutrons
existence of the solution, 316 energy spectrum, 15
Volterra equations, 318 source of, 106
existence of a unique solution, Newton-Cotes, 357
318 (NC) rule, 328
linearly independent, 263 (closed) rules, 332
Liouville, 134 repeated rules, 331
Lipschitz, 320 rule, 84, 328, 330, 332, 337, 338,
condition, 320 350
constant, 320 n point, 330
Lonseth, 154 of the open type, 330
two-point, 331, 350
Maclaurin, 231 type N-point rules, 337
method, 333, 357 Noble, 7
rule, 333, 354 nonhomogeneous
series, 138, 143, 235 Fredholm equations
mapping, 322 second kind, 287
a finite interval, 356 with degenerate kernel, 211
mathematical induction, 312 ordinary differential equations
mechanics problems, 107 second-order, 166
Mellin nonlinear
transform, 72, 79 boundary value problems, 124
inverse, 72 differential equations
interpolating, 88
numerical approximation setting,
156
numerical solution, 88, 156, 160
second kind, 24, 99, 133, 134,
137, 142
singular, 162, 355, 358
with difference kernel, 143
“Extremely clear, self-contained text . . . offers to a wide class of
readers the theoretical foundations and the modern numerical
methods of the theory of linear integral equations.”
Abdul Jerri has revised his highly applied book to make it even more useful for
scientists and engineers, as well as mathematicians. Covering the fundamental
ideas and techniques at a level accessible to anyone with a solid undergraduate
background in calculus and differential equations, Dr. Jerri clearly demonstrates
how to use integral equations to solve real-world engineering and physics problems.
This edition provides precise guidelines to the basic methods of solutions, details
more varied numerical methods, and substantially boosts the total of practical
examples and exercises. Plus, it features added emphasis on the basic theorems
for the existence and uniqueness of solutions of integral equations and points out
the interrelation between differentiation and integration. Other features include:
WILEY-INTERSCIENCE
John Wiley & Sons, Inc.
Scientific, Technical, and Medical Division
605 Third Avenue, New York, N.Y. 10158-0012
New York • Chichester • Weinheim • Brisbane • Singapore • Toronto