A Sparse Spectral Method For Volterra Integral Equations
arXiv:1906.03907v1 [math.NA] 10 Jun 2019

Abstract. We introduce and analyse a sparse spectral method for the solution of Volterra integral equations using bivariate orthogonal polynomials on a triangle domain. The sparsity of the Volterra operator on a weighted Jacobi basis is used to achieve high efficiency and exponential convergence. The discussion is followed by a demonstration of the method on example Volterra integral equations of the first and second kind with smooth, oscillatory and weakly singular kernels, including an application-oriented example from heat conduction.
1. Introduction
Define the Volterra integral operator
$$(\mathcal{V}_K u)(x) := \int_0^{l(x)} K(x,y)\, u(y)\, dy, \qquad (1)$$
where $K(x,y)$ is called the kernel, $u(y)$ is a given function of one variable and the limits of integration are either $l(x) = x$ or $l(x) = 1-x$. This paper concerns Volterra integral equations of the first and second kind, that is, to find $u$ satisfying
$$\mathcal{V}_K u = g \qquad \text{or} \qquad (I + \mathcal{V}_K)u = g.$$
Numerous applications and the fundamental nature of Volterra integral and integro-
differential equations motivate research into efficient and accurate numerical solvers.
Various forms of Volterra integral equations are analytically well understood [12, 37, 47], have been the subject of numerous numerical approximation schemes [12, 11, 5, 30], and are encountered regularly across the sciences as well as in engineering and finance applications [12, 37, 45, 47, 25, 26].
In this paper we present a method to compute Volterra integrals and solve Volterra
integral equations by using orthogonal polynomials on a triangle domain [19, 36] to both
resolve the kernel and to reduce the equations to banded linear systems. The method
is in the same spirit as some previous contributions to the field of numerical Volterra, Fredholm, singular integral and differential equations based on operators and orthogonal polynomials, such as [1, 24, 41, 23], but differs in the choice of basis and domain, leading to operator bandedness properties which can be exploited for significantly increased efficiency. Notably, the approach introduced in this paper can be used for a wider range of kernels than many other Volterra integral equation solvers, such as the methods based on orthogonal polynomials due to Loureiro and Xu [29, 50], the recently developed ultraspherical spectral method in [23] or the Fourier extension method in [49], as it is not limited to convolution kernels, that is, kernels of the form $K(x, y) = K(x - y)$.
The sections in this paper are organized as follows: Section 2 introduces the required aspects of univariate and bivariate polynomial function approximation on a real interval and on the triangle, respectively. Section 3 introduces an efficient numerical method for Volterra integrals and integral equations and discusses how to approach kernel computations using a multivariate variant of Clenshaw's algorithm. In Section 4 we show the scheme in action in both toy and application-based examples. Proofs of convergence for well-posed problems are discussed in Section 5.
where $W^{(\alpha,\beta)}(x) = C^{(\alpha,\beta,m,n)} (1-x)^{\alpha} (1+x)^{\beta}$ acts as the weight function and $\delta_{nm}$ is the Kronecker delta. While the choice of $[-1,1]$ is natural, the Jacobi polynomials can be shifted to any real interval an application requires. For $\alpha = \beta = 0$ the Jacobi polynomials reduce to the Legendre polynomials [19].
One of the primary applications of interest for the study of orthogonal polynomials is their use in the expansion of non-polynomial functions:
$$ f(x) = \sum_{n=0}^{\infty} p_n(x)\, f_n = \mathbf{P}(x)^T \mathbf{f}, $$
where $f_n$ is the function-specific coefficient of the $n$-th polynomial $p_n$ and we use the notation
$$ \mathbf{P}(x) := \begin{pmatrix} p_0(x) \\ p_1(x) \\ \vdots \end{pmatrix}, \qquad \mathbf{f} := \begin{pmatrix} f_0 \\ f_1 \\ \vdots \end{pmatrix}. $$
For numerical applications one uses finitely many terms in the above sum to obtain an approximation. If a distinction between different sets of polynomials and coefficient vectors on different domains is required, we specify by indicating the type of polynomials using standard notation, such as $\mathbf{P}^{(\alpha,\beta)}(x)$ for the Jacobi polynomials on a real interval, and the domain using index notation, e.g. for the bivariate orthogonal polynomial coefficient vector of $g(x,y)$ on the triangle domain we write $\mathbf{g}_M$.
allows for the following compact notation for the infinite-dimensional polynomial basis:
$$ \mathbf{P}(x,y) = \begin{pmatrix} \mathbf{P}_0(x,y) \\ \mathbf{P}_1(x,y) \\ \vdots \end{pmatrix}. $$
In this notation the expansion of a function of two variables in the bivariate polynomial basis becomes
$$ f(x,y) = \sum_{n=0}^{\infty} \sum_{k=0}^{n} p_{n,k}(x,y)\, f_{n,k} = \mathbf{P}(x,y)^T \mathbf{f}. $$
For function approximation one simply uses an appropriate finite cutoff of this expansion.
On the triangle $T^2$ we focus on the Jacobi weights $x^{\alpha} y^{\beta} (1-x-y)^{\gamma}$. One elegant way to define the corresponding Jacobi polynomials $\mathbf{P}^{(\alpha,\beta,\gamma)}(x,y)$ on the canonical triangle $T^2$ is by referring to the Jacobi polynomials $\mathbf{P}^{(\alpha,\beta)}(x)$ on the real interval $[-1,1]$ (compare [19, Proposition 2.4.1]):
$$ P_{k,n}^{(\alpha,\beta,\gamma)}(x,y) = (1-x)^k\, P_{n-k}^{(2k+\beta+\gamma+1,\,\alpha)}(2x-1)\, P_k^{(\gamma,\beta)}\!\left(\frac{2y}{1-x} - 1\right). \qquad (4) $$
Defined as such, the triangle Jacobi polynomials are orthogonal with respect to a weighted integral over the canonical triangle domain $T^2$:
$$ \int_0^1 \int_0^{1-x} x^{\alpha} y^{\beta} (1-x-y)^{\gamma}\, P_{k,n}^{(\alpha,\beta,\gamma)}(x,y)\, P_{j,m}^{(\alpha,\beta,\gamma)}(x,y)\, dy\, dx = C^{(\alpha,\beta,\gamma)}\, \delta_{jk}\delta_{mn}. $$
The detailed form of the constant $C^{(\alpha,\beta,\gamma)}$ is not important here but can, for example, be found in [19]. We will primarily use the Jacobi polynomials shifted to the $[0,1]$ interval and denote them by $\tilde{P}^{(\alpha,\beta)}$, which allows us to write the Jacobi polynomials on the triangle as:
$$ P_{k,n}^{(\alpha,\beta,\gamma)}(x,y) = (1-x)^k\, \tilde{P}_{n-k}^{(2k+\beta+\gamma+1,\,\alpha)}(x)\, \tilde{P}_k^{(\gamma,\beta)}\!\left(\frac{y}{1-x}\right). \qquad (5) $$
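As a concrete illustration of (5) (not the implementation used in this paper), the following minimal SciPy sketch evaluates $P_{k,n}^{(\alpha,\beta,\gamma)}$ on $T^2$ from shifted univariate Jacobi polynomials and spot-checks the orthogonality of two low-degree polynomials by quadrature; the helper name triangle_jacobi and the normalization-free check are our own.

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import dblquad

def triangle_jacobi(k, n, x, y, alpha=0.0, beta=0.0, gamma=0.0):
    """P_{k,n}^{(alpha,beta,gamma)}(x, y) on the triangle via (5), built from
    shifted univariate Jacobi polynomials P~ on [0, 1]."""
    shifted = lambda m, a, b, t: eval_jacobi(m, a, b, 2.0 * t - 1.0)  # P~_m^{(a,b)}(t)
    return ((1.0 - x) ** k
            * shifted(n - k, 2 * k + beta + gamma + 1, alpha, x)
            * shifted(k, gamma, beta, y / (1.0 - x)))

# orthogonality spot-check of two degree-1 polynomials over T^2 (alpha = beta = gamma = 0)
inner, _ = dblquad(lambda y, x: triangle_jacobi(0, 1, x, y) * triangle_jacobi(1, 1, x, y),
                   0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
print(abs(inner) < 1e-12)
```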
As in the 1-dimensional case we can define Jacobi operators $J_x$ and $J_y$, one for each variable, which respectively act as
$$ \mathbf{P}(x,y)^T J_x \mathbf{f}_M = x f(x,y), \qquad \mathbf{P}(x,y)^T J_y \mathbf{f}_M = y f(x,y), $$
for a given bivariate polynomial basis. Unlike the 1-dimensional Jacobi polynomial case, these operators are not tridiagonal but block tridiagonal operators for the triangle Jacobi polynomials [36]:
$$ J_x = \begin{pmatrix} A_0^x & B_0^x & & \\ C_0^x & A_1^x & B_1^x & \\ & C_1^x & A_2^x & \ddots \\ & & \ddots & \ddots \end{pmatrix}, \qquad J_y = \begin{pmatrix} A_0^y & B_0^y & & \\ C_0^y & A_1^y & B_1^y & \\ & C_1^y & A_2^y & \ddots \\ & & \ddots & \ddots \end{pmatrix}, \qquad (6) $$
where $A_n^x, A_n^y \in \mathbb{R}^{(n+1)\times(n+1)}$, $B_n^x, B_n^y \in \mathbb{R}^{(n+1)\times(n+2)}$ and $C_n^x, C_n^y \in \mathbb{R}^{(n+2)\times(n+1)}$. Analogous operators to the raising and lowering operators discussed for the real interval case can be constructed for the Jacobi polynomials on the triangle as well, see [35, 36], but we omit their discussion as we will not make direct use of them in this paper.
To make use of Jacobi polynomials for the approximation of functions on the triangle domain in a numerical context, one requires efficient algorithms to determine the coefficient vector $\mathbf{f}_M$ for a given function $f(x,y)$ of two variables. This can be done using an algorithm by Slevinsky and its implementation in a C library [38, 39, 40].
For the case of Jacobi polynomials on a real interval, the three-term recurrence relationship seen in the Jacobi operator in (3) can be used to write
$$ L_N(x_*)\, \mathbf{P}_N^{(\alpha,\beta)}(x_*) = \begin{pmatrix} 1 & & & & \\ a_0 - x_* & b_0 & & & \\ c_0 & a_1 - x_* & b_1 & & \\ & \ddots & \ddots & \ddots & \\ & & c_{N-2} & a_{N-1} - x_* & b_{N-1} \end{pmatrix} \begin{pmatrix} P_0^{(\alpha,\beta)}(x_*) \\ P_1^{(\alpha,\beta)}(x_*) \\ P_2^{(\alpha,\beta)}(x_*) \\ \vdots \\ P_N^{(\alpha,\beta)}(x_*) \end{pmatrix} = e_0, \qquad (7) $$
where $e_0$ is the first standard basis vector, with $1$ in its first component and of appropriate length. Solving this lower triangular system via forward substitution provides a way to recursively evaluate each component of $\mathbf{P}^{(\alpha,\beta)}(x)$ and thus also $\mathbf{P}^{(\alpha,\beta)}(x)^T \mathbf{f}$ if the coefficients of $f(x)$ in this basis are known. Clenshaw's algorithm is conceptually similar but uses backward substitution on the system
$$ f(x_*) = \mathbf{P}_N^{(\alpha,\beta)}(x_*)^T \mathbf{a} = e_0^T L_N(x_*)^{-T} \mathbf{a}, \qquad (8) $$
where $\mathbf{a}$ is the column vector collecting $a_0$ to $a_N$.
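To make (7)–(8) concrete, here is a small, dense NumPy/SciPy sketch of the backward-substitution Clenshaw evaluation for the Legendre case $\alpha = \beta = 0$ on $[-1,1]$, whose recurrence coefficients $a_n = 0$, $b_n = (n+1)/(2n+1)$, $c_{n-1} = n/(2n+1)$ are standard; it is a toy illustration, not a production implementation.

```python
import numpy as np
from scipy.linalg import solve_triangular

def legendre_clenshaw(coeffs, x):
    """Evaluate f(x) = sum_n coeffs[n] P_n(x) by backward substitution on the
    transposed system from (8), using the Legendre recurrence
    x P_n = c_{n-1} P_{n-1} + a_n P_n + b_n P_{n+1}."""
    N = len(coeffs) - 1
    L = np.zeros((N + 1, N + 1))
    L[0, 0] = 1.0                                # first row enforces P_0(x) = 1
    for k in range(1, N + 1):
        n = k - 1                                # row k encodes the recurrence for x P_n
        if n >= 1:
            L[k, n - 1] = n / (2 * n + 1)        # c_{n-1}
        L[k, n] = -x                             # a_n - x, with a_n = 0 for Legendre
        L[k, n + 1] = (n + 1) / (2 * n + 1)      # b_n
    u = solve_triangular(L.T, np.asarray(coeffs, float), lower=False)
    return u[0]                                  # f(x) = e_0^T L^{-T} coeffs

# agrees with NumPy's own Legendre evaluation
c = [1.0, -0.5, 0.25, 2.0]
print(legendre_clenshaw(c, 0.3), np.polynomial.legendre.legval(0.3, c))
```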
The case for the Jacobi polynomials on the triangle was recently discussed in [36] and, on the basis of the recurrence in (6), involves a block triangular system for evaluation at $\mathbf{x}_* = (x_*, y_*)$ instead:
$$ L_N(\mathbf{x}_*)\, \mathbf{P}_N^{(\alpha,\beta,\gamma)}(\mathbf{x}_*) = \begin{pmatrix} 1_1 & & & \\ A_0^x - x_* 1_1 & B_0^x & & \\ A_0^y - y_* 1_1 & B_0^y & & \\ C_0^x & A_1^x - x_* 1_2 & B_1^x & \\ C_0^y & A_1^y - y_* 1_2 & B_1^y & \\ & \ddots & \ddots & \ddots \end{pmatrix} \mathbf{P}_N^{(\alpha,\beta,\gamma)}(\mathbf{x}_*) = e_0, $$
where $1_k$ denotes the $k \times k$ identity matrix. As this is not a triangular but a block triangular matrix, one cannot use forward substitution without first applying a preconditioner:
$$ \begin{pmatrix} 1 & & & \\ & B_0^+ & & \\ & & B_1^+ & \\ & & & \ddots \end{pmatrix} L_N(\mathbf{x}_*) = \tilde{L}_N(\mathbf{x}_*). $$
$\tilde{L}_N(\mathbf{x}_*)$ is then a proper lower triangular matrix and can be used in an analogous system to the ones above to evaluate the polynomials, and thus a function expanded into that polynomial basis, recursively via forward substitution. A preconditioner which satisfies these requirements is the block diagonal matrix whose elements are comprised of a left inverse of the blocks
$$ B_n = \begin{pmatrix} B_n^x \\ B_n^y \end{pmatrix}, $$
such that $B_n^+ B_n = 1_n$. Clenshaw's algorithm for the triangle Jacobi polynomials is thus
$$ f(\mathbf{x}_*) = \mathbf{P}_N^{(\alpha,\beta,\gamma)}(\mathbf{x}_*)^T A = e_0^T \tilde{L}_N(\mathbf{x}_*)^{-T} A. $$
This system can be solved via backward substitution in optimal $O(N^2)$ complexity if one chooses $B_n^+$ carefully, see [36].
We first describe the idea behind the relevant operators and their use before determining their entries in matrix representation. The first operator we need is the integration operator for a function given as the coefficients of orthogonal polynomials on a triangle. We label this operator $Q_y$ and it acts as
$$ \mathbf{P}(x)^T W_Q Q_y \mathbf{f}_M = \int_0^{1-x} f(x,y)\, dy, $$
where $W_Q$ is a to-be-determined weight function which depends on the basis used. The reason for the limits of integration to be defined in this way for $Q_y$ will become clear once we discuss the explicit form of these operators and how one can make optimal use of the triangle domain's symmetries. Second, we need an operator $E_y$ which extends a one-dimensional function on $[0,1]$ to one on $T^2$, that is:
$$ \mathbf{P}(x)^T \mathbf{f}_{[0,1]} = \mathbf{P}(x,y)^T E_y \mathbf{f}_{[0,1]}. $$
Together these two operators can be used to compute integrals of the form
$$ \int_0^{1-x} f(y)\, dy = \mathbf{P}(x)^T W_Q Q_y E_y \mathbf{f}_{[0,1]} $$
with $f$ a function of a single variable. To instead integrate from $0$ to $x$ we use a reflection operator. Due to symmetries of the polynomials, particular basis changes in a Jacobi basis obey the simple rule [32, 19]:
$$ \tilde{P}_n^{(\alpha,\beta)}(x) = (-1)^n\, \tilde{P}_n^{(\beta,\alpha)}(1-x). $$
We use $R$ to refer to the operator that uses the above property to reflect a function on the $[0,1]$ interval via a basis change, i.e.
$$ \tilde{\mathbf{P}}^{(\alpha,\beta)}(x)^T R \mathbf{f} = \sum_n (-1)^n f_n\, \tilde{P}_n^{(\beta,\alpha)}(x) = f(1-x). \qquad (9) $$
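The reflection rule can be spot-checked directly with SciPy's univariate Jacobi evaluation; a short sketch (the helper shifted_jacobi is our own naming):

```python
import numpy as np
from scipy.special import eval_jacobi

def shifted_jacobi(n, a, b, x):          # P~_n^{(a,b)} on [0, 1]
    return eval_jacobi(n, a, b, 2.0 * x - 1.0)

# P~_n^{(a,b)}(x) = (-1)^n P~_n^{(b,a)}(1 - x) for a few sample cases
x = np.linspace(0.0, 1.0, 7)
for n, (a, b) in [(3, (1, 0)), (4, (2, 5))]:
    print(n, np.allclose(shifted_jacobi(n, a, b, x),
                         (-1) ** n * shifted_jacobi(n, b, a, 1.0 - x)))
```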
$J_x$ and $J_y$ have important commutation relations with the introduced $Q_y$ and $E_y$ operators. As the $Q_y$ operator integrates with respect to $y$ and collapses a bivariate coefficient vector back to a univariate one, the multiplication-with-$x$ operator changes from being multiplication-with-$x$ on the triangle ($= J_x$) to being multiplication-with-$x$ on the real interval ($= J$) when pulled through the $Q_y$ operator. A similar relation holds for similar reasons for $J_y$ and $E_y$:
$$ Q_y J_x \mathbf{f}_M = J Q_y \mathbf{f}_M, \qquad (10) $$
$$ J_y E_y \mathbf{f}_{[0,1]} = E_y J \mathbf{f}_{[0,1]}. \qquad (11) $$
We now give the explicit matrix representations for the operators $Q_y$ and $E_y$ and discuss a sensible polynomial basis choice. The explicit form of the Jacobi operators on the real line is known in the literature (e.g. [19, 36]) and thus receives no further discussion here.
To determine the explicit form of $Q_y$ we begin by plugging the polynomial expansion of $f(x,y)$ into the intended integral operation, using the Jacobi polynomials on the triangle domain as seen in (5) for our basis $p_{n,k}$ with $\alpha = \beta = \gamma = 0$:
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T W_Q Q_y \mathbf{f}_M = \int_0^{1-x} f(x,y)\, dy = \int_0^{1-x} \sum_{n=0}^{\infty} \sum_{k=0}^{n} p_{n,k}(x,y)\, f_{n,k}\, dy $$
$$ = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f_{n,k}\, (1-x)^k\, \tilde{P}_{n-k}^{(2k+1,0)}(x) \int_0^{1-x} \tilde{P}_k^{(0,0)}\!\left(\frac{y}{1-x}\right) dy $$
$$ = \sum_{n=0}^{\infty} \sum_{k=0}^{n} f_{n,k}\, (1-x)^{k+1}\, \tilde{P}_{n-k}^{(2k+1,0)}(x) \int_0^{1} \tilde{P}_k^{(0,0)}(s)\, ds, $$
where the substitution $\frac{y}{1-x} \to s$ was made in the last step. As the $\tilde{P}_k^{(0,0)}$ are just the Legendre polynomials on $[0,1]$, we see that $\int_0^1 \tilde{P}_k^{(0,0)}(s)\, ds = 0$ for all $k > 0$ and $\int_0^1 \tilde{P}_0^{(0,0)}(s)\, ds = 1$, resulting in
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T W_Q Q_y \mathbf{f}_M = \sum_{n=0}^{\infty} f_{n,0}\, (1-x)\, \tilde{P}_n^{(1,0)}(x) $$
for integration from $0$ to $1-x$.
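The Legendre moments used in the last step are easily confirmed numerically; a brief SciPy spot-check:

```python
from scipy.integrate import quad
from scipy.special import eval_legendre

# shifted Legendre P~_k(s) = P_k(2s - 1) integrates to 1 on [0, 1] for k = 0, else to 0
for k in range(5):
    val, _ = quad(lambda s, k=k: eval_legendre(k, 2.0 * s - 1.0), 0.0, 1.0)
    print(k, round(val, 12))
```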
This derivation shows that starting in the Jacobi polynomial basis on the triangle $T^2$ with $\alpha = \beta = \gamma = 0$ for the approximation of $f(x,y)$ results in the following block diagonal structure for the integration-from-$0$-to-$(1-x)$ operator with weight $W_Q = (1-x)$:
$$ Q_y = \begin{pmatrix} 1 & & & \\ & 1 \;\; 0 & & \\ & & 1 \;\; 0 \;\; 0 & \\ & & & \ddots \end{pmatrix}, $$
where the $n$-th block is an $n$-dimensional row vector with $1$ in the first element and $0$ in all remaining elements. An additional $(-1)^n$ term and a change of basis changes this integration to be from $0$ to $x$ instead. The expansion operator $E_y$ from the $\tilde{\mathbf{P}}^{(1,0)}(x)$ basis to the canonical triangle Jacobi polynomials with $\alpha = \beta = \gamma = 0$ has the block diagonal structure
$$ E_y = \begin{pmatrix} \times & & & \\ & \times & & \\ & \times & & \\ & & \times & \\ & & \times & \\ & & \times & \\ & & & \ddots \end{pmatrix}, $$
where the $n$-th block is an $n$-dimensional column vector whose $j$-th entry is given by
$$ \frac{(-1)^{j+n}(2j-1)}{n}. $$
Importantly, multiplication of $Q_y$ and $E_y$ yields a diagonal matrix whose $n$-th entry can be directly generated without any matrix multiplication being required (compare [32]):
$$ (Q_y E_y)_{n,n} = (D_y)_{n,n} = \frac{(-1)^{n+1}}{n}. $$
These observations justify the basis choices as well as the choice of the limits of integration for $Q_y$ from the standpoint of computational efficiency. Defining $Q_y$ as the integration operator from $0$ to $x$ does not avoid the reflection step and only results in a less efficient or equivalent placement for it.
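The stated block structures can be assembled and checked directly; the following sketch builds truncations of $Q_y$ and $E_y$ from the entry formulas above (with the same 1-based block indexing) and confirms that their product is the stated diagonal. It is an illustration of the structure only, not the paper's code.

```python
import numpy as np
from scipy.linalg import block_diag

def Qy_trunc(N):
    # n-th block: n-dimensional row vector (1, 0, ..., 0)
    return block_diag(*[np.eye(1, n) for n in range(1, N + 1)])

def Ey_trunc(N):
    # n-th block: n-dimensional column vector with j-th entry (-1)^(j+n) (2j-1)/n
    cols = []
    for n in range(1, N + 1):
        j = np.arange(1, n + 1)
        cols.append(((-1.0) ** (j + n) * (2 * j - 1) / n).reshape(-1, 1))
    return block_diag(*cols)

N = 6
n = np.arange(1, N + 1)
print(np.allclose(Qy_trunc(N) @ Ey_trunc(N), np.diag((-1.0) ** (n + 1) / n)))
```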
3.2. Kernel computations using Clenshaw's algorithm. Putting all of the above observations together means one can save a significant amount of computation time by the use of a recurrence, simultaneously using an operator-valued polynomial approximation for the kernel $K(J_x, J_y)$ and the known commutation relations in (10–11). To illustrate the idea behind this approach we first discuss how to do this for a monomial kernel (or equivalently a kernel approximated in a monomial basis) and then show how these ideas can be expanded to arbitrary polynomial bases for the kernel using a variant of Clenshaw's algorithm.
Assuming a monomial expansion for the kernel, i.e. $K(x,y) = \sum_{n=0}^{\infty} \sum_{j=0}^{n} k_{nj}\, x^{n-j} y^j$, the primary part of the Volterra integration operator has the form
$$ Q_y K(J_x, J_y) E_y = Q_y \left( \sum_{n=0}^{\infty} \sum_{j=0}^{n} k_{nj}\, J_x^{n-j} J_y^j \right) E_y = \sum_{n=0}^{\infty} \sum_{j=0}^{n} k_{nj}\, J^{n-j}\, Q_y E_y\, J^j, $$
where we have used the commutation relations in (10–11) to rewrite the summation using the Jacobi operator for the interval Jacobi polynomials. Recalling that $Q_y E_y$ is a diagonal matrix which can be generated without any need to separately compute and multiply $Q_y$ and $E_y$, all that is left to compute are the required combinations of $Q_y E_y$ with the Jacobi operators, which can be built up recursively. This kind of recursive computation of all the required elements for the kernel can save significant computation cost if executed correctly. Since only the coefficients of $K(x,y)$ for this basis actually change across different problems, one can in principle also store the basis elements $J^{n-j} Q_y E_y J^j$ and re-use them, making this numerical evaluation of Volterra integrals even faster upon repeated use. This approach differs slightly depending on whether one intends to compute integrals from $0$ to $1-x$ or to compute integrals from $0$ to $x$. In the case of integrals from $0$ to $x$, one is either required to supply $K(1-x, y)$ to the algorithm or alternatively the Jacobi operators on the left can be replaced by $(1-J)$ to account for the reflection, meaning that the basis elements become $(1-J)^{n-j} Q_y E_y J^j$. Taking the weight $W_Q$ into consideration, the full Volterra integral operator is then
$$ R(1-J)\, Q_y K(J_x, J_y) E_y = R(1-J) \sum_{n=0}^{\infty} \sum_{j=0}^{n} k_{nj}\, (1-J)^{n-j}\, Q_y E_y\, J^j. $$
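A sketch of the recursive assembly just described: the function below forms $\sum_{n,j} k_{nj}\, J^{n-j} (Q_y E_y) J^j$ from given finite truncations of $J$ and of the diagonal $D = Q_y E_y$, caching the powers of $J$ so that each is computed exactly once and can be re-used across problems. The arrays J and D are assumed to come from section 3.1 for the chosen basis; the weight $W_Q$ and, for limits $0$ to $x$, the reflection (or the replacement of the left factors by $1-J$) still have to be applied as in the text.

```python
import numpy as np

def assemble_monomial_volterra(k, J, D):
    """Core of the Volterra operator for a monomial kernel with limits 0 to 1-x:
    sum over n, j of k[n][j] * J^(n-j) @ D @ J^j, with powers of J cached."""
    M = len(k) - 1
    powers = [np.eye(J.shape[0])]            # powers[p] = J^p, each built from the last
    for _ in range(M):
        powers.append(powers[-1] @ J)
    V = np.zeros_like(J, dtype=float)
    for n in range(M + 1):
        for j in range(n + 1):
            V += k[n][j] * (powers[n - j] @ D @ powers[j])
    # for integrals from 0 to x, cache powers of (np.eye(J.shape[0]) - J) for the left factors
    return V
```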
This straightforward approach evidently only works if the kernel is of a form that may sensibly be approximated using monomials, but it inspires an analogous approach based on expanding the kernel in its own orthogonal polynomial basis, which need not be the same as the one used to expand the function $f$. We use a variant of the Clenshaw algorithm introduced in section 2.3 to build the kernel in terms of the Jacobi operators. In principle one could compute $K(J_x, J_y)$ as a full multiplication operator acting on a triangle Jacobi coefficient vector using an operator-valued version of Clenshaw's algorithm as discussed in [36]. This is not the most efficient way to approach this problem, however, as it would mean losing the diagonal $Q_y E_y$, since for such an operator the multiplication with $K(J_x, J_y)$ would need to happen between $Q_y$ and $E_y$. Nevertheless, we will briefly discuss how to generate this multiplication-by-$K(J_x, J_y)$ operator in order to see which modifications one can make to this approach in order to respect the symmetries of the triangle and end up with recursive basis generation similar to the monomial kernel expansion case.
The multiplication-by-$K(x,y)$ operator, which we label $M_K$, can be written in an operator Clenshaw approach as (see [36, 33, 46]):
$$ M_K = (e_0 \otimes 1)\, L^{-T} K_M, \qquad (12) $$
where $\otimes$ denotes the Kronecker product and $L$ is defined as
$$ L = \begin{pmatrix} (1_1 \otimes 1) & & & \\ (A_0^x \otimes 1) - (1_1 \otimes J_x) & (B_0^x \otimes 1) & & \\ (A_0^y \otimes 1) - (1_1 \otimes J_y) & (B_0^y \otimes 1) & & \\ (C_0^x \otimes 1) & (A_1^x \otimes 1) - (1_2 \otimes J_x) & (B_1^x \otimes 1) & \\ (C_0^y \otimes 1) & (A_1^y \otimes 1) - (1_2 \otimes J_y) & (B_1^y \otimes 1) & \\ & \ddots & \ddots & \ddots \end{pmatrix}. $$
As discussed for the Clenshaw evaluation method in section 2.3, this system requires preconditioning to become solvable via backward substitution. For this case the preconditioner is
$$ \begin{pmatrix} (1_1 \otimes 1) & & & \\ & (B_0^+ \otimes 1) & & \\ & & (B_1^+ \otimes 1) & \\ & & & \ddots \end{pmatrix} L = \tilde{L}, $$
with the $B_n^+$ defined as in section 2.3. Using such an operator-valued Clenshaw algorithm one can compute $M_K$ and thus obtain $Q_y K(J_x, J_y) E_y$ via $Q_y M_K E_y$. However, as discussed above, for our purposes of Volterra integral operators this is computationally wasteful and misses the chance to take advantage of the triangle symmetries which allow $Q_y E_y$ to be directly computable and diagonal. So instead we replace the $K_M$ in (12) by $(K_M \otimes Q_y E_y)$. The relations (10–11) then imply that all $J_x$ operators may be replaced by a left multiplication with $J$ and all $J_y$ operators may be replaced by a right multiplication with $J$ (respectively denoted by a $\diamond$ on the appropriate side). The system to solve thus becomes
$$ Q_y K(J_x, J_y) E_y = (e_0 \otimes 1)\, L_V^{-T} (K_M \otimes Q_y E_y), $$
with
$$ L_V = \begin{pmatrix} (1_1 \otimes 1) & & & \\ (A_0^x \otimes 1) - (1_1 \otimes J\diamond) & (B_0^x \otimes 1) & & \\ (A_0^y \otimes 1) - (1_1 \otimes \diamond J) & (B_0^y \otimes 1) & & \\ (C_0^x \otimes 1) & (A_1^x \otimes 1) - (1_2 \otimes J\diamond) & (B_1^x \otimes 1) & \\ (C_0^y \otimes 1) & (A_1^y \otimes 1) - (1_2 \otimes \diamond J) & (B_1^y \otimes 1) & \\ & \ddots & \ddots & \ddots \end{pmatrix}. $$
After preconditioning as above, this allows the recursive and efficient computation of $Q_y K(J_x, J_y) E_y$ via an operator-valued Clenshaw-type algorithm while at the same time taking advantage of the diagonal nature of $Q_y E_y$. As in the monomial case, this approach has to be modified when integrating from $0$ to $x$ instead of from $0$ to $1-x$. In the $0$ to $x$ case one needs to take the reflection into account, which means either replacing all the left multiplications with $J$ by left multiplications with $(1-J)$, for the same reasons as above, while the right multiplications corresponding to multiplication by $y$ remain the same, or requiring that $K(1-x, y)$ be supplied to the algorithm. Finally, this operator still requires left multiplication with the basis-dependent weight $W_Q$ to represent the full Volterra integral operator for this approach.
3.3. Numerical solutions to linear Volterra integral equations. The above-described computational method for Volterra integrals has a natural extension to solving Volterra integral equations, which we describe in this section. Most generally, a Volterra integral equation is any equation in which the unknown appears at least once as the integrand of a Volterra integral as defined in (1) above. One usually distinguishes between at least two types of Volterra integral equations, which are labelled Volterra integral equations of the first and second kind respectively. The Volterra integral equation of the first kind we will be interested in takes the following form:
$$ \int_0^x K(x,y)\, u(y)\, dy = g(x), \qquad (13) $$
where $u(x)$ is the unknown function to be solved for, $K(x,y)$ is a given kernel and $g(x)$ is a given function. Volterra integral equations of the second kind we will be interested in take the following form:
$$ u(x) - \int_0^x K(x,y)\, u(y)\, dy = g(x), \qquad (14) $$
where once again $u(x)$ is the unknown function and $K(x,y)$ and $g(x)$ are given. While this is not further explored in this paper, there are natural extensions of these methods to other linear Volterra-type integral equations such as the third-kind equations discussed in [2, 3, 42].
Whenever we write $Q_y K(1-J_x, J_y) E_y$ in the coming sections, we mean that this operator is computed using the Clenshaw approach detailed in section 3.2.
3.3.1. Equations of the first kind. Extending the above methods for Volterra integrals to Volterra integral equations is straightforward, though one needs to be mindful of the appropriate reflections. Using the above notation conventions, one way to write the Volterra integral equation of the first kind is
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T (1-J)\, Q_y K(1-J_x, J_y) E_y\, \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \bar{\mathbf{g}}, $$
$$ \Rightarrow\ \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( (1-J)\, Q_y K(1-J_x, J_y) E_y \right)^{-1} \bar{\mathbf{g}}. $$
The notation $\bar{\mathbf{g}}$ is used to indicate that we are directly supplying the coefficients of the reflected $g(1-x)$ to save an unnecessary additional reflection step, as formally we are solving the equivalent
$$ \int_0^{1-t} K(1-t, y)\, u(y)\, dy = g(1-t). \qquad (15) $$
All function coefficient vectors in this section are initially expanded in the $\tilde{\mathbf{P}}^{(1,0)}(x)$ basis. This method works in numerical experiments but deriving convergence properties for it proves to be difficult (as is usual for Volterra equations of the first kind). However, under the condition that we can expand the function $q(x) = \frac{g(1-x)}{1-x}$ instead of $g(1-x)$ in $\tilde{\mathbf{P}}^{(1,0)}(x)$, one can find convergence conditions (see section 5 for details). Note that solvability of the Volterra integral equation of the first kind implies that both $g$ and $q$ must vanish when the upper limit of integration vanishes. When using $\mathbf{q}$ to denote the coefficient vector of $q(x) = \frac{g(1-x)}{1-x}$, the method then becomes
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T Q_y K(1-J_x, J_y) E_y\, \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{q}, $$
$$ \Rightarrow\ \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( Q_y K(1-J_x, J_y) E_y \right)^{-1} \mathbf{q}, $$
meaning that solving this type of equation for $u(x)$ is as simple as computing the coefficient vectors and operators (see the respective sections above for efficient ways to do so) and then solving a banded system of linear equations.
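In code, that final step is nothing more than a finite-section linear solve; a minimal sketch, where V and q are assumed to be the truncated operator $Q_y K(1-J_x, J_y)E_y$ and the coefficient vector of $q(x)$ assembled as in the previous sections (a dense solver is used for brevity even though the system is banded):

```python
import numpy as np

def solve_first_kind(V, q, n):
    """Return the first n coefficients of u in the P~^(1,0) basis by solving the
    n x n finite section of V u = q."""
    return np.linalg.solve(V[:n, :n], q[:n])
```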
3.3.2. Equations of the second kind. Using the above-introduced weighted lowering operator $L_{(1,0)}^{(0,0)}$, which shifts to the $\tilde{\mathbf{P}}^{(0,0)}(x)$ basis while multiplying with $(1-x)$, reflecting the result and then using a raising operator $S_{(0,0)}^{(1,0)}$ to return to the $\tilde{\mathbf{P}}^{(1,0)}(x)$ basis, we can write Volterra integral equations of the second kind as
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( 1 - S_{(0,0)}^{(1,0)} R\, L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y \right) \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{g}, $$
$$ \Rightarrow\ \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( 1 - S_{(0,0)}^{(1,0)} R\, L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y \right)^{-1} \mathbf{g}, $$
which can once again be solved for $u(x)$ using any linear-system solver. Reflecting without the lowering and raising operators is not possible (although there are alternative ways to use such operators to accomplish the same goal), as this would result in an inconsistency between the bases used for the two appearances of $\mathbf{u}$.
3.3.3. Different limits of integration. As mentioned above, a similar derivation leads to an analogous method for Volterra integral equations of the first and second kind with different limits of integration:
$$ \int_0^{1-x} K(x,y)\, u(y)\, dy = g(x), \qquad (16) $$
$$ u(x) - \int_0^{1-x} K(x,y)\, u(y)\, dy = g(x). \qquad (17) $$
This results in an identity operator replacing the reflection and conversion operators in the above solution methods and in fact makes these types of equations even more efficient to solve, but limits of integration of this sort are seen less often in applications. In particular, the operator version of Volterra integral equations of the first kind with limits of integration $0$ to $1-x$ is:
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T Q_y K(J_x, J_y) E_y\, \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{q}, $$
$$ \Rightarrow\ \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( Q_y K(J_x, J_y) E_y \right)^{-1} \mathbf{q}, $$
where now $\mathbf{q}$ is the coefficient vector of $q(x) = \frac{g(x)}{1-x}$. Equations of the second kind with these limits of integration can be written as:
$$ \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( 1 - (1-J)\, Q_y K(J_x, J_y) E_y \right) \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{g}, $$
$$ \Rightarrow\ \tilde{\mathbf{P}}^{(1,0)}(x)^T \mathbf{u} = \tilde{\mathbf{P}}^{(1,0)}(x)^T \left( 1 - (1-J)\, Q_y K(J_x, J_y) E_y \right)^{-1} \mathbf{g}. $$
We present an implementation of both options for the limits of integration in the next section.
4.1. Set 1: Volterra integral equations of the first kind. We investigate the numerical solution of the following two example Volterra integral equations of the first kind:
$$ e^{-x} + e^{x}(-1 + 2x) = 4 \int_0^x e^{y-x}\, u_1(y)\, dy, \qquad (18) $$
$$ \frac{\sin(4\pi^2 x^2)}{x} = \int_0^x e^{-10\left(x-\frac{1}{3}\right)^2 - 10\left(y-\frac{1}{3}\right)^2}\, u_2(y)\, dy. \qquad (19) $$
The analytic solution to the first equation can be found to be
$$ u_1(x) = x e^x. $$
We present the absolute error between the analytic and numerical solution for $u_1(x)$ using the orthogonal polynomial method introduced in this paper in Figure 1A for different matrix dimensions $n \times n$, and the absolute error between the numerical solution for $u_2(x)$ and a high-degree solution computed with $n = 5050$ in Figure 1B.
4.2. Set 2: Volterra integral equations of the second kind with oscillatory kernels. We seek numerical solutions $u_1$, $u_2$ and $u_3$ to the following three Volterra integral equations of the second kind with kernels of varying oscillatory intensity:
$$ u_1(x) = \frac{e^{-10\pi x}(1+20\pi) - 2 + \cos(10\pi x) + \sin(10\pi x)}{20\pi} + \int_0^x \left(1 - \cos(10\pi x - 10\pi y)\right) u_1(y)\, dy, \qquad (20) $$
$$ u_2(x) = \frac{e^{\frac{x}{2}}}{\pi} + \int_0^x \left(\sin(10\pi x) + \cos(10\pi y)\right) u_2(y)\, dy, \qquad (21) $$
$$ u_3(x) = e^{x^2 - 2x} + \int_0^{1-x} \left(-2x + y + \sin(25x^2 + 8\pi y)\right) u_3(y)\, dy. \qquad (22) $$
Accurate approximation of these kernels on the canonical triangle domain requires coefficient vectors of length exceeding $10^3$. We include contour plots of the specified kernels on said domain in Figure 2. One can find an analytic solution to the first equation:
$$ u_1(x) = e^{-10\pi x}. $$
For the other two equations, we instead compare to a numerical solution of high degree ($n = 5050$). We plot the absolute error convergence of the numerical solutions in Figure 3. Due to the oscillatory character of these kernels and the number of coefficients involved, this can be considered a moderate stress test of the Clenshaw approach to the computations of the Volterra integral operator.
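As an independent sanity check of example (20), one can compare against a naive second-order trapezoidal solver for second-kind Volterra equations, which is not the spectral method of this paper but is easy to verify against the analytic solution $u_1(x) = e^{-10\pi x}$:

```python
import numpy as np

def trapezoidal_vie2(K, g, N=4000):
    """Naive trapezoidal solver for u(x) = g(x) + int_0^x K(x, y) u(y) dy on [0, 1]."""
    x = np.linspace(0.0, 1.0, N + 1)
    h = x[1] - x[0]
    u = np.empty(N + 1)
    u[0] = g(x[0])
    for i in range(1, N + 1):
        w = np.full(i + 1, h); w[0] = w[i] = 0.5 * h       # trapezoid weights on [0, x_i]
        rhs = g(x[i]) + np.dot(w[:i], K(x[i], x[:i]) * u[:i])
        u[i] = rhs / (1.0 - w[i] * K(x[i], x[i]))          # implicit diagonal term
    return x, u

w0 = 10 * np.pi                                            # kernel and g from (20)
K = lambda x, y: 1.0 - np.cos(w0 * (x - y))
g = lambda x: (np.exp(-w0 * x) * (1 + 2 * w0) - 2 + np.cos(w0 * x) + np.sin(w0 * x)) / (2 * w0)
x, u = trapezoidal_vie2(K, g)
print(np.max(np.abs(u - np.exp(-w0 * x))))                 # O(h^2)-accurate reference error
```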
4.3. Set 3: Singular Volterra integral equation of the second kind in heat conduction with mixed boundary conditions. Finally, we discuss a more application-oriented example, treated in a handful of different variations in [18, 17, 16, 48, 7]:
$$ u(x) = g(x) + \int_0^x \frac{y^{\mu-1}}{x^{\mu}}\, u(y)\, dy. \qquad (23) $$
To see how equations of this type can result from heat conduction problems of the form $\frac{\partial^2 u}{\partial x^2} - \frac{1}{\alpha^2}\frac{\partial u}{\partial y} = 0$ with mixed boundary conditions, see for example [17]. This equation varies both in its singularity properties as well as its number of solutions depending on the parameter $\mu$. This example, stemming from an application of Volterra integrals, demonstrates that the method developed in this paper has a broader range of applicability and can in some cases extend to certain classes of singular problems as well, despite this not being part of the considerations during the development of the method. For testing purposes we choose the following for $g(x)$:
$$ g_1(x) = 1 + x + x^2, $$
$$ g_2(x) = \frac{(1 + 4\pi^2 x^2)\sinh(2\pi x) - 2\pi x \cosh(2\pi x)}{4\pi^2 x^2}. $$
The following analytic solutions to these equations can be found for general $\mu$ for $g_1$ (e.g. in [48]) and for $\mu = 3$ for $g_2$:
$$ u_1(x, \mu) = \frac{\mu}{\mu-1} + \frac{\mu+1}{\mu}\, x + \frac{\mu+2}{\mu+1}\, x^2, $$
$$ u_2(x, \mu = 3) = \sinh(2\pi x). $$
As the kernel is separable, the problem can instead be treated as
$$ x^{\mu} u(x) = x^{\mu} g(x) + \int_0^x y^{\mu-1} u(y)\, dy, $$
which can be solved by appropriately adding multiplications with Jacobi operators or altering the supplied $g(x)$ in the method for solving Volterra integral equations of the second kind. We plot numerical solutions obtained for $g_1(x)$ with $\mu = 7$ and $g_2(x)$ with $\mu = 3$ in Figure 4. The naturally more error-prone neighborhood of the singularity can be well approximated arbitrarily close to the singularity (though not at the exact point of the singularity itself) using higher values of $n$ if needed. For $g_2(x)$ the method shows no instability at the weak singularity of the kernel.
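The stated analytic solution for $g_1$ is easy to verify symbolically; a short SymPy check for the value $\mu = 7$ used in Figure 4(a):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
mu = 7                                                    # parameter value of Figure 4(a)
g1 = 1 + x + x**2
u1 = sp.Rational(mu, mu - 1) + sp.Rational(mu + 1, mu) * x + sp.Rational(mu + 2, mu + 1) * x**2
lhs = g1 + sp.integrate(y**(mu - 1) / x**mu * u1.subs(x, y), (y, 0, x))
print(sp.simplify(lhs - u1))                              # 0: u1 solves (23) for g = g1
```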
Figure 1. (A) shows the absolute error between (18) and the known analytic solution, while (B) compares (19) to a solution computed with $n = 5050$ (axes: absolute error vs. $n$).

[Figure 2: contour plots of the kernels in (20)–(22) on the canonical triangle domain.]

[Figure 3: absolute error convergence of the numerical solutions $u_1$, $u_2$ and $u_3$ of (20)–(22).]

[Figure 4: numerical and analytic solutions $u(x)$ of (23); (a) $g_1(x)$ with $\mu = 7$ ($n = 90$), (b) $g_2(x)$ with $\mu = 3$ ($n = 300$).]
Definition 5.1. We define the projection operators $P_n : \ell^2 \to \ell^2$ which map a given coefficient vector to a truncated version of itself with non-zero entries for the first $n$ coefficients only.

Definition 5.2. The analysis operator $E : L^2(0,1) \to \ell^2$ is the inclusion of a square integrable function into the $\ell^2$ coefficient space of the complete basis of orthogonal Jacobi polynomials, which is guaranteed to exist by the Stone–Weierstrass theorem and is a bounded operator. The synthesis operator is its inverse $E^{-1} : \ell^2 \to L^2(0,1)$, which is also bounded. Note that the terms analysis and synthesis are terminology from frame theory [13, 14].
Lemma 5.1. The coefficient-space Volterra integral operator $V_K : \ell^2 \to \ell^2$, for a given kernel $K(x,y) \in L^2[T^2]$ with limits of integration $0$ to $x$, acting on the coefficient vector Banach space $\ell^2$ of the Jacobi polynomials $\tilde{\mathbf{P}}^{(1,0)}(x)$, is compact and of the form
$$ V_K = L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y, $$
with the respective operators defined as in section 3.

Proof. $V_K = L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y$ follows from the definition of the involved operators, see section 3. To see compactness of $V_K$ we consider the following diagram of maps between Banach spaces, which represents the formalized version of the method:
$$ \begin{array}{ccc} L^2(0,1) & \xrightarrow{\;\mathcal{V}_K\;} & L^2(0,1) \\ {\scriptstyle E^{-1}}\big\uparrow & & \big\downarrow{\scriptstyle E} \\ \ell^2 & \xrightarrow{\;V_K\;} & \ell^2 \end{array} $$
Here $\mathcal{V}_K$ for a kernel $K(x,y) \in L^2[T^2]$ is the Volterra integral operator for said kernel acting on $L^2(0,1)$. It is a classical result of functional analysis that such Volterra integral operators $\mathcal{V}_K$ are Hilbert–Schmidt operators and thus compact [31]. It follows that $V_K = E \circ \mathcal{V}_K \circ E^{-1}$ is a finite composition of bounded and compact operators between Banach spaces and hence itself compact.
Lemma 5.2. For $V_K$ and $P_n$ defined as above, we have
$$ \lim_{n \to \infty} \| V_K - P_n V_K P_n^T \| = 0. $$
Proof. This follows directly from the compactness of $V_K$ and the fact that $\ell^2$ is a Hilbert space and thus has the approximation property [27].

The above lemma justifies referring to the finite-dimensional projections $P_n V_K P_n^T$ of the Volterra operator as approximations.
Lemma 5.3. $S_{(0,0)}^{(1,0)} R\, L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y$ is compact on $\ell^2$ and thus Volterra integral equations of the second kind can be written in the form $(1 + \mathcal{K})u = g$ with $\mathcal{K}$ compact.

Proof. The operators $S_{(0,0)}^{(1,0)}$ and $R$ acting on the Banach space $\ell^2$ can both readily be seen to be bounded operators from their definitions via the Jacobi polynomials' recurrence relationships [32, 18.9.5]. The result then follows from the observation that the Volterra integral operator $L_{(1,0)}^{(0,0)} Q_y K(1-J_x, J_y) E_y$ was shown to be compact and the composition of bounded operators with a compact operator yields a compact operator.
An analogous chain of arguments immediately establishes:

Lemma 5.4. The Volterra integral operator for the limits $0$ to $1-x$ is compact and can be written as
$$ V_K = (1-J)\, Q_y K(J_x, J_y) E_y. $$
The method is thus also of the form in (24).

Corollary 5.5. The method described in section 3.3 converges like $\|u - P_n u\| \to 0$ as $n \to \infty$ for well-posed Volterra integral equations of the second kind.

Proof. As the method is of the form in (24), i.e. $(1 + \mathcal{K})u = g$ with $\mathcal{K}$ compact, the result is a corollary of the above results combined with the known invertibility and convergence properties for problems of this form in finite section methods, see e.g. [10].
5.2. Equations of the first kind. The Fredholm alternative and Neumann series arguments underlying the proofs above break down for first-kind problems, as the Volterra operator $V_K : \ell^2 \to \ell^2$ is compact on the infinite-dimensional Banach space $\ell^2$ and therefore strictly singular, cf. [6]. Thus, while the finite-dimensional approximations $V_n$ of the Volterra operator may have an inverse $V_n^{-1}$, it is not obvious that $u_n = V_n^{-1} q$ converges to $u$ in the limit. The problem can be made well-posed, however, if one considers the Volterra operator as a map between two different, appropriately chosen Banach spaces. Under sufficient continuity assumptions, as well as the assumption that a given Volterra integral equation of the first kind has a solution, this problem may then be salvaged by finding a preconditioner which allows us to rewrite it as a problem involving operators which are compact perturbations of Toeplitz operators. We begin by assuming a polynomial kernel, from where an extension argument directly yields that the result also applies in the non-polynomial case. Note that in this section we will prove convergence of the method only for the case of limits of integration $0$ to $1-x$. This is not a limitation for the case of integral equations of the first kind, since solving
$$ \int_0^{t} K(t, y)\, u(y)\, dy = g(t) $$
and
$$ \int_0^{1-x} K(1-x, y)\, u(y)\, dy = g(1-x) $$
are formally equivalent, as solving one automatically solves the other with $t = 1-x$. The reason for the particular choice in our proofs is that some arguments are clearer in this variant. Furthermore, as the monomial-expansion and Clenshaw-algorithm based Volterra operators are exactly the same for polynomial kernels, the analysis will make use of the simpler structure of the former.
To discuss invertibility for equations of the first kind we need to reframe the Volterra operator as a map between two different Banach spaces, which are similar in spirit to Sobolev spaces.
Definition 5.3. Let $\ell^2_{\lambda}$ with $\lambda \ge 0$ denote the Banach space with norm
$$ \| u \|_{\ell^2_{\lambda}} = \sqrt{ \sum_{n=0}^{\infty} \left( (1+n)^{\lambda} |u_n| \right)^2 } < \infty. $$
Lemma 5.6. Let $V_K : \ell^2 \to \ell^2_1$ denote the Volterra operator in coefficient space of $\tilde{\mathbf{P}}^{(1,0)}(x)$ with limits of integration $0$ to $1-x$ for a given polynomial kernel
$$ K(x,y) = \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, x^{n-j} y^j. $$
Then
$$ V_K = (1-J) D \left( D^{-1} \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, J^{n-j} D J^j \right), $$
with $D = Q_y E_y$, $D : \ell^2 \to \ell^2_1$ and $D^{-1} : \ell^2_1 \to \ell^2$.

Proof. That $D = Q_y E_y$ is diagonal with entries $\frac{(-1)^n}{n+1}$ is due to properties of the Jacobi polynomials, see section 3 as well as [32, 18.6.1 and 18.17.1]. The important observation to make is that $D$ can be thought of as $D : \ell^2 \to \ell^2_1$, which makes $D$ a bounded and invertible operator with $D^{-1} : \ell^2_1 \to \ell^2$. With $V_K$ and $K(x,y)$ as above, we thus have
$$ V_K = (1-J) \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, J^{n-j} D J^j = (1-J) D \left( D^{-1} \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, J^{n-j} D J^j \right). $$
Proof. From Lemma 5.6 we see that the first statement is equivalent to the claim that
$$ \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, D^{-1} J^{n-j} D J^j $$
is of the form $T + \mathcal{K}$ and thus asymptotically Toeplitz. To show this we need two observations: First, under sufficient continuity assumptions for the kernel, which are satisfied due to the kernel being polynomial, we have that
$$ T[a]\, T[b] = T[ab] - H[a] H[\bar{b}], \qquad (25) $$
and in particular
$$ T[a]\, T[a] = T[a^2] - H[a] H[\bar{a}], $$
where $H[a]$, $H[\bar{a}]$ and $H[\bar{b}]$ are compact Hankel operators [9]. Thus any asymptotically Toeplitz operator (of sufficiently continuous symbol) raised to a finite power is again an asymptotically Toeplitz operator, as $(T + \mathcal{K})^2 = T^2 + T\mathcal{K} + \mathcal{K}T + \mathcal{K}^2$ and $T^2$ is again Toeplitz plus something compact via the above relation. The composition of bounded operators with compact operators is compact, making $T\mathcal{K} + \mathcal{K}T + \mathcal{K}^2$ compact. An induction argument demonstrates that this is true for any power $n \in \mathbb{N}$. In particular, since it is known that $J$ is a compact perturbation of a Toeplitz operator [32], we know that $J^j$ is a compact perturbation of a Toeplitz operator as well. The second observation is that for the banded operator $J^{n-j}$, the operator $D^{-1} J^{n-j} D$ is also a compact perturbation of a Toeplitz operator, and in fact $J^{n-j}$ and $D^{-1} J^{n-j} D$ differ only in their compact part, i.e. have the same Toeplitz component. Via (25) we thus have that $\sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, D^{-1} J^{n-j} D J^j$ is of the form $(T + \mathcal{K})$ and thus asymptotically Toeplitz.
Along with the above observations, equation (25) tells us that we can compute the symbol of the Toeplitz part of a product of operators which are compact perturbations of Toeplitz operators if we know the symbols of the individual Toeplitz components. Due to bandedness it is straightforward to confirm that the symbol of the Toeplitz part of the multiplication operator $J$ is $\left(\frac{1}{2} + \frac{z}{4} + \frac{\bar{z}}{4}\right) = \cos^2\!\left(\frac{\theta}{2}\right)$ for the Jacobi polynomials $\tilde{\mathbf{P}}^{(1,0)}(x)$, which is thus also the symbol of the Toeplitz part of $D^{-1} J D$. Note at this point that
$$ \left( D^{-1} J D \right)^{n-j} = D^{-1} J^{n-j} D $$
due to the intermediate $D D^{-1}$ factors cancelling. Given these tools as well as the linearity of the Fourier series, it follows that the symbol of the Toeplitz part of the Volterra operator $\tilde{V}_K$ is the linear combination
$$ f(z) = \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj} \cos^{2n}\!\left(\frac{\theta}{2}\right). $$
Theorem 5.8. The method described in Section 3.3 converges for well-posed Volterra integral equations of the first kind with limits of integration $0$ to $1-x$,
$$ V_K u = g, $$
rewritten as
$$ \tilde{V}_K u = q, $$
with $q(x) = \frac{g(x)}{1-x}$, for a polynomial kernel $K(x,y) \in L^2[T^2]$ and with $\mathbf{q} \in \ell^2_1$, subject to the symbol of the Toeplitz part of $\tilde{V}_K$ not vanishing on the complex unit circle. This condition is fulfilled if and only if $K(x,x) \neq 0$ for all $x \in [0,1]$.

Proof. The requirement $\mathbf{q} \in \ell^2_1$ arises formally due to the need to first invert $D$ and can be understood as stemming from the inverse of integration being a differentiation. The invertibility conditions of asymptotically Toeplitz operators of the form $(T + \mathcal{K})$ are known in the literature (see e.g. [22, 10] and the references therein): a compactly perturbed Toeplitz operator on $\ell^2$ is invertible if it is a Fredholm operator, its index is $0$ and it has a trivial kernel [21, 10, 22]. Furthermore, a compactly perturbed Toeplitz operator is Fredholm if its symbol (which is just the symbol of the Toeplitz part) does not vanish anywhere on the complex unit circle.
In general, it holds that the index of a Toeplitz operator which is Fredholm is the sign-flipped winding number of its symbol on the complex unit circle [10]. Since the symbol of the Toeplitz part of the unweighted Volterra operator is real-valued and continuous, its index is thus $0$ if and only if it does not vanish anywhere on the complex unit circle, which is a necessary condition for it to be Fredholm in the first place. Since $\cos^2\!\left(\frac{\theta}{2}\right) \in [0,1]$, the symbol vanishes at some point $\theta \in [0, 2\pi]$, i.e.
$$ \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj} \cos^{2n}\!\left(\frac{\theta}{2}\right) = 0, $$
if and only if for some $x \in [0,1]$ we have
$$ \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj}\, x^n = 0. $$
Conversely, if $K(x,x) \neq 0$ for all $x \in [0,1]$ then the Volterra operator is Fredholm, because the symbol of its Toeplitz part has no roots on the unit circle, and as this symbol is real-valued its winding number and thus index is $0$. This necessary condition for invertibility of the operator becomes a sufficient condition if in addition we have $\ker(T + \mathcal{K}) = \{0\}$, as this yields injectivity and, via the index formula [10],
$$ \mathrm{ind}(T) = \mathrm{ind}(T + \mathcal{K}) := \dim(\ker(T + \mathcal{K})) - \dim(\mathrm{coker}(T + \mathcal{K})), $$
with $\mathrm{ind}(T + \mathcal{K}) = 0$, also implies surjectivity. $\ker(T + \mathcal{K}) = \{0\}$ is a consequence of the classical result that the Volterra integral operator has no non-zero eigenvalues. The convergence of the method is then a consequence of known results in the theory of finite section methods, see e.g. [22].
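The identification used above, namely that the symbol $\sum_{n,j} k_{nj}\cos^{2n}(\theta/2)$ is exactly $K(x,x)$ evaluated at $x = \cos^2(\theta/2)$, can be spot-checked numerically for any polynomial kernel; a small sketch with hypothetical coefficients $k_{nj}$:

```python
import numpy as np

k = [[1.0], [0.0, 0.5], [0.25, 0.0, -0.5]]    # hypothetical coefficients k_{nj} of K(x, y)

theta = np.linspace(0.0, 2 * np.pi, 9)
x = np.cos(theta / 2) ** 2
symbol = sum(knj * np.cos(theta / 2) ** (2 * n)
             for n, row in enumerate(k) for j, knj in enumerate(row))
K_diag = sum(knj * x ** (n - j) * x ** j       # K(x, x) = sum_n (sum_j k_{nj}) x^n
             for n, row in enumerate(k) for j, knj in enumerate(row))
print(np.allclose(symbol, K_diag))
```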
Remark: The motivation for solving $\tilde{V}_K u = q$ with $q(x) = \frac{g(x)}{1-x}$ instead of $V_K u = g$ directly can be understood at this point, since for $V_K$ the symbol of the Toeplitz part is instead found to be
$$ \sum_{n=0}^{M} \sum_{j=0}^{n} k_{nj} \sin\!\left(\frac{\theta}{2}\right) \cos^{2n}\!\left(\frac{\theta}{2}\right), $$
which always has a root on the complex unit circle at $\theta = 0$, and thus its induced Toeplitz operator is not Fredholm and not invertible. Therefore the presented proof strategy only succeeds if $q(x) = \frac{g(x)}{1-x}$ may be used instead to get rid of the additional sine terms. The symbol of the Toeplitz part of $\tilde{V}_K$ is, by comparison, very well behaved for a variety of kernels.

So far we have only been working with polynomial kernels of order $M$, henceforth denoted $K_M$, when it comes to Volterra equations of the first kind. We will need the following theorem (see [4, 44]), which we restate without proof, for the extension of the above arguments to a non-polynomial kernel:
Theorem 5.9. Let $X$ and $Y$ be normed linear spaces, with one or both being Banach spaces, and let $T : X \to Y$ be a bounded and invertible operator with $T^{-1} : Y \to X$. Then if the bounded operator $M : X \to Y$ satisfies
$$ \| M - T \| < \frac{1}{\| T^{-1} \|}, $$
it follows that $M$ is also invertible with bounded inverse operator $M^{-1} : Y \to X$ and
$$ \| M^{-1} \| \le \frac{\| T^{-1} \|}{1 - \| T^{-1} \| \| T - M \|}, \qquad \| M^{-1} - T^{-1} \| \le \frac{\| T^{-1} \|^2 \| T - M \|}{1 - \| T^{-1} \| \| T - M \|}. $$
Lemma 5.10. Given that
$$ \| \tilde{V}_{K_M} - \tilde{V}_K \| \xrightarrow{\ M \to \infty\ } 0 $$
for a sequence of Volterra operators induced by polynomial kernels $K_M(x,y)$ and a not necessarily polynomial kernel $K(x,y)$, we have
$$ \| u_M - u \| \xrightarrow{\ M \to \infty\ } 0, $$
where $u_M$ and $u$ denote the solutions of the corresponding Volterra integral equations of the first kind.
6. Discussion
The method proposed in this paper can efficiently compute Volterra integrals as well
as solve Volterra integral equations of the first and second kind with high accuracy using
bivariate orthogonal polynomials to resolve the kernel along with an operator valued
Clenshaw algorithm and is not restricted to convolution kernels. Numerical experiments
suggest it can even be applicable to certain singular equations. Our approach takes
advantage of the sparsity of the required integration and extension operators which are
due to the symmetries of the Jacobi polynomial basis on the triangle domain. The method
was shown to converge for well-posed Volterra integral equations of the first and second
kind, using a link to compact perturbations of Toeplitz operators.
Extensions of this approach to so-called integro-differential equations of Volterra type, where both differentiation and Volterra operators act on the unknown function, as well as to non-linear Volterra equations, where the unknown function can appear in a non-linear fashion in the Volterra integral, are non-trivial but conceivable and will be addressed in future work.
Acknowledgments
We thank Nick Hale and Kuan Xu for crucial help on Volterra integral equations at
the initial stages of this project. We thank Mikael Slevinsky for thoroughly reading a
draft and providing detailed comments.
References
[1] A. Akyüz-Daşcıoğlu. A Chebyshev polynomial approach for linear Fredholm–Volterra integro-
differential equations in the most general form. Appl. Math. Comput., 181(1):103–112, October
2006.
[2] S. S. Allaei, Z. Yang, and H. Brunner. Existence, uniqueness and regularity of solutions to a class
of third-kind Volterra integral equations. J. Integral Equ. Appl., 27(3):325–342, September 2015.
[3] S. S. Allaei, Z. Yang, and H. Brunner. Collocation methods for third-kind VIEs. IMA J. Num. Ana.,
37(3):1104–1124, July 2017.
[4] K. E. Atkinson and W. Han. Theoretical Numerical analysis: A Functional Analysis Framework.
Number 39 in Texts in Applied Mathematics. Springer, Dordrecht; New York, 3rd edition, 2009.
[5] E. Babolian and Z. Masouri. Direct method to solve Volterra integral equation of the first kind using
operational matrix with block-pulse functions. J. Comput. Appl. Math., 220(1):51–57, October 2008.
[6] G. Bachman and L. Narici. Functional Analysis. Dover Publications, Mineola, N.Y, 2000.
[7] P. Baratella. A Nyström interpolant for some weakly singular linear Volterra integral equations. J.
Comput. Appl. Math., 231(2):725–734, September 2009.
[8] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah. Julia: A fresh approach to numerical
computing. SIAM Rev., 59(1):65–98, 2017.
[9] A. Böttcher and B. Silbermann. Introduction to Large Truncated Toeplitz Matrices. Springer, New
York, 1999.
[10] A. Böttcher, B. Silbermann, and A. Karlovich. Analysis of Toeplitz Operators. Springer Monographs in Mathematics. Springer, Berlin, 2nd edition, 2006.
[11] H. Brunner. Collocation Methods for Volterra Integral and Related Functional Differential Equations.
Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press,
2004.
[12] H. Brunner. Volterra Integral Equations: An Introduction to Theory and Applications. Cambridge
Monographs on Applied and Computational Mathematics. Cambridge University Press, 2017.
[13] P. G. Casazza and G. Kutyniok, editors. Finite Frames: Theory and Applications. Applied and
Numerical Harmonic Analysis. Springer, New York, 2013.
[14] O. Christensen. An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic
Analysis. Birkhäuser Boston, Boston, MA, 2003.
[15] C. W. Clenshaw. A note on the summation of Chebyshev series. Math. Comput., 9(51):118–118,
September 1955.
[16] T. Diogo, N. J. Ford, P. Lima, and S. Valtchev. Numerical methods for a Volterra integral equation
with non-smooth solutions. J. Comput. Appl. Math., 189(1-2):412–423, May 2006.
[17] T. Diogo, N. Franco, and P. Lima. High order product integration methods for a Volterra integral
equation with logarithmic singular kernel. Comm. P. and Appl. Ana., 3(2):217–235, March 2004.
[18] T. Diogo, S. Mckee, and T. Tang. A Hermite-Type Collocation Method for the Solution of an Integral
Equation with a Certain Weakly Singular Kernel. IMA J. Num. Ana., 11(4):595–605, 1991.
[19] C. F. Dunkl and Y. Xu. Orthogonal Polynomials of Several Variables. Number 155 in Encyclopedia
of mathematics and its applications. Cambridge University Press, Cambridge, second edition, 2014.
[20] W. Gautschi. Orthogonal Polynomials: Computation and Approximation. Numerical mathematics
and scientific computation. Oxford University Press, Oxford; New York, 2004.
[21] J. J. Grobler, L. E. Labuschagne, and M. Möller, editors. Operator Algebras, Operator Theory and
Applications. Birkhäuser Basel, Basel, 2010.
[22] R. Hagen, S. Roch, and B. Silbermann. C*-Algebras and Numerical Analysis. Number 236 in Chap-
man & Hall/CRC Pure and Applied Mathematics. CRC Press, Taylor & Francis Group, New York,
2001.
[23] N. Hale. An ultraspherical spectral method for linear Fredholm and Volterra integro-differential
equations of convolution type. IMA J. Num. Ana., July 2018.
[24] H. Köroğlu. Chebyshev series solution of linear Fredholm integrodifferential equations. Int. J. Math.
Educ. Sci. Technol., 29(4):489–500, July 1998.
[25] D. O. Krimer, S. Putz, J. Majer, and S. Rotter. Non-Markovian dynamics of a single-mode cavity
strongly coupled to an inhomogeneously broadened spin ensemble. Phys. Rev. A, 90(4), October
2014.
[26] D. O. Krimer, M. Zens, S. Putz, and S. Rotter. Sustained photon pulse revivals from inhomogeneously broadened spin ensembles. Laser Photonics Rev., 10(6):1023–1030, November 2016.
[27] J. Lindenstrauss and L. Tzafriri. Classical Banach Spaces. Classics in mathematics. Springer, Berlin,
1996.
[28] S. K. Lintner and O. P. Bruno. A generalized Calderón formula for open-arc diffraction problems:
theoretical considerations. P. Roy. Soc. Edinb. A, 145(2):331–364, April 2015.
[29] A. Loureiro and K. Xu. Volterra-type convolution of classical polynomials. Math. Comput., January
2019.
[30] K. Maleknejad and N. Aghazadeh. Numerical solution of Volterra integral equations of the sec-
ond kind with convolution kernel by using Taylor-series expansion method. Appl. Math. Comput.,
161(3):915–922, 2005.
[31] J. Muscat. Functional Analysis: An Introduction to Metric Spaces, Hilbert Spaces, and Banach Algebras. Springer International Publishing, Cham, 2014.
[32] F.W.J. Olver, A.B.O. Daalhuis, D.W. Lozier, B.I. Schneider, R.F. Boisvert, C.W. Clark, B.R.
Miller, and B. V. Saunders (eds.). NIST Digital Library of Mathematical Functions, December
2018. http://dlmf.nist.gov.
[33] S. Olver and A. Townsend. A fast and well-conditioned spectral method. SIAM Rev., 55(3):462–489,
January 2013.
[34] S. Olver and A. Townsend. A practical framework for infinite-dimensional linear algebra. In 2014
First Workshop for High Performance Technical Computing in Dynamic Languages, pages 57–62,
LA, USA, November 2014. IEEE.
[35] S. Olver, A. Townsend, and G. Vasil. Recurrence relations for orthogonal polynomials on a triangle.
In ICOSAHOM 2018, 2019.
[36] S. Olver, A. Townsend, and G. Vasil. A sparse spectral method on triangles. arXiv:1902.04863,
February 2019.
[37] J. Prüss. Evolutionary Integral Equations and Applications. Modern Birkhäuser classics. Springer,
Basel; New York, 2012.
[38] R. M. Slevinsky. Conquering the pre-computation in two-dimensional harmonic polynomial trans-
forms. arXiv:1711.07866, November 2017.
[39] R. M. Slevinsky. Fast and backward stable transforms between spherical harmonic expansions and
bivariate Fourier series. Appl. Comput. Harmon. Anal., November 2017.
[40] R. M. Slevinsky. FastTransforms v0.1.1, January 2019.
[41] R. M. Slevinsky and S. Olver. A fast and well-conditioned spectral method for singular integral
equations. J. Comput. Phys., 332:290–315, March 2017.
[42] H. Song, Z. Yang, and H. Brunner. Analysis of collocation methods for nonlinear Volterra integral
equations of the third kind. Calcolo, 56(1):7, January 2019.
[43] A. Townsend and S. Olver. The automatic solution of partial differential equations using a global
spectral method. J. Comput. Phys., 299:106–123, October 2015.
[44] T.D. Trogdon and S. Olver. Riemann-Hilbert Problems, Their Numerical Solution, and the Compu-
tation of Nonlinear Special Functions. Number 146 in Other titles in applied mathematics. SIAM,
Society for Industrial and Applied Mathematics, Philadelphia, 2016.
[45] F. van den Bosch, J. A. J. Metz, and J. C. Zadoks. Pandemics of Focal Plant Disease, a Model.
Phytopathology, 89(6):495–505, June 1999.
[46] G. M. Vasil, K. J. Burns, D. Lecoanet, S. Olver, B. P. Brown, and J. S. Oishi. Tensor calculus in
polar coordinates using Jacobi polynomials. J. Comput. Phys., 325:53–73, November 2016.
[47] A. Wazwaz. Linear and Nonlinear Integral Equations: Methods and Applications. Higher Education
Press; Springer, Beijing, Heidelberg, New York, 2011.
[48] A. Wazwaz and R. Rach. Two reliable methods for solving the Volterra integral equation with a
weakly singular kernel. J. Comput. Appl. Math., 302:71–80, August 2016.
[49] K. Xu, A. P. Austin, and K. Wei. A Fast Algorithm for the Convolution of Functions with Compact
Support Using Fourier Extensions. SIAM J. Sci. Comput., 39(6):A3089–A3106, January 2017.
[50] K. Xu and A. Loureiro. Spectral Approximation of Convolution Operators. SIAM J. Sci. Comput.,
40(4):A2336–A2355, January 2018.