
NORGES TEKNISK-NATURVITENSKAPELIGE UNIVERSITET

A review of exponential integrators for first order semi-linear problems

by Borislav V. Minchev and Will M. Wright

PREPRINT NUMERICS NO. 2/2005

NORWEGIAN UNIVERSITY OF SCIENCE AND TECHNOLOGY
TRONDHEIM, NORWAY

This report has URL http://www.math.ntnu.no/preprint/numerics/2005/N2-2005.ps
Address: Department of Mathematical Sciences, Norwegian University of Science and Technology, N-7491 Trondheim, Norway.
Abstract
Recently, there has been a great deal of interest in the construction of exponential integrators. These integrators, as their name suggests, use the exponential function (and related functions) of the Jacobian, or an approximation to it, inside the numerical method. However, contrary to what some of the recent literature suggests, integrators based on this philosophy have been known since at least 1960. The aim of this paper is to review exponential integrators designed for first order problems; however, we will also briefly discuss some recent research into the construction of exponential integrators for special problems. Our hope is that with this article, by reviewing as much of the history of exponential integrators as reasonably possible, we can point interested readers to appropriate references and hopefully reduce the reinvention of known results.

1 Introduction
Even though the theory of numerical methods for time integration is well established for a general class of problems, recently, due to improvements in the efficient computation of the exponential function, exponential integrators for the time integration of semi-linear problems

    y'(t) = f(y(t)) = Ly(t) + N(y(t)),    y(t_{n-1}) = y_{n-1},    (1.1)

have emerged as a viable alternative. In [88] it is claimed that the largest performance improvement in the solution of partial differential equations will come from better time integration technology. In the early fifties, the phenomenon of stiffness was first discovered by Curtiss and Hirschfelder [18]. Stiffness effectively renders explicit integrators useless, as stability rather than accuracy governs how the integrator performs. It could be said that more integrators have been developed to overcome the phenomenon of stiffness than any other property that a differential equation may have. Stiffness was also the reason for the introduction of exponential integrators. The recent wave of publications on exponential integrators has mainly been concerned with the time integration of spatially discretized parabolic and hyperbolic partial differential equations.
Various methods have been developed for the differential equation (1.1). The first paper to construct what are now known as exponential integrators was by Certaine [15], published in 1960. This paper constructed two exponential integrators based on the Adams–Moulton methods of order two and three. These methods are members of the Exponential Time Differencing (ETD) methods, which find approximations to the integral in the variation of constants formula using an algebraic polynomial approximation to the nonlinear term. Other papers on this subject include [5, 16, 44, 43, 58, 77]. In 1967, Lawson published [62], which provided a novel approach to solving stiff problems. Integrators were constructed which solve the linear part of the problem exactly and then use a change of variables to cast the problem in a form which a traditional explicit method can solve; the approximate solution is then transformed back. These methods are commonly known as Integrating Factor (IF) methods. Further papers on this subject include [6, 16, 69, 71, 90, 99]. In 1999, Munthe-Kaas published [76], which constructed a class of methods known as the RKMK methods. These methods transform the differential equation to a differential equation evolving on a Lie algebra, which is a linear space. Munthe-Kaas realized that the affine action was very useful in constructing integrators for differential equations of the form (1.1). The affine action was also used in [14] to construct integrators for (1.1) based on the Commutator-Free (CF) Lie group methods. In [58] Krogstad introduced the Generalized Integrating Factor (GIF) methods, which were shown to exhibit large improvements in accuracy over the IF, ETD and CF methods. In [52, 53] numerical experiments were performed on certain stiff PDEs using the IF, ETD, linearly implicit and splitting methods. It was concluded that the ETD methods consistently outperformed all other methods.
The aim of this paper is to review the history of exponential integrators tailored for the problem (1.1), since a significant new interest in developing integrators has led to many papers in which known integrators have been rediscovered. We aim to represent exponential integrators in a general framework. The paper is organized as follows. Section 2 considers several modifications of the Euler method and motivates our aims. In Section 3 we construct a class of exponential general linear methods, which will provide the framework for the various methods. The next four sections discuss the history of the IF, ETD, GIF and Lie-group-based integrators respectively, and represent these methods as exponential general linear methods. Section 8 then presents an overview of the order conditions of exponential integrators, and we follow with a discussion of implementation issues. Exponential integrators which have been constructed for particular problems are briefly discussed in Section 10. Finally, we conclude with some numerical experiments and several remarks and open questions.

2 Modifications of the Euler method

Before introducing the framework of exponential integrators, we examine several well known extensions of the Euler method. Linearizing the initial value problem (1.1) gives

    y'(t) = f(y_{n-1}) + f'(y_{n-1})(y - y_{n-1}).

The exact solution to this linearized problem is

    y_n = y_{n-1} + h ϕ1(h f'(y_{n-1})) f(y_{n-1}),    (2.1)

where the function ϕ1 is defined as

    ϕ1(z) = (e^z - 1)/z.

This method is of order two for general problems of the form (1.1), and exact for problems where f(y) = Ly + N; it is known as the exponential Euler method. The earliest reference we can find to this method is in the paper of Pope [83], where he calls the method the exponential difference equation. We are unsure if this method had been derived even earlier.
Mainly due to contributions from Verwer and van der Houwen, the exponential Euler method has been generalized in the direction of Runge–Kutta and linear multistep methods; the resulting schemes are known as generalized Runge–Kutta and generalized linear multistep methods [101, 103, 104, 102]. Despite this framework, there do not seem to have been (until recently) methods available which use the exponential or related functions of the exact Jacobian. This is most likely due to the cost involved in recomputing the Jacobian and then computing the exponential or related functions of it. Given this high cost, most integrators evolved in one of two directions. The first approach is not to use exact representations of the exponential and related functions but approximations, generally Padé approximations. One of the main classes of methods in this direction is the Rosenbrock methods, on which there is a vast literature; we refer only to the paper by Rosenbrock [85] and the book by Hairer and Wanner [35]. We also refer to the book of van der Houwen [101] and the paper by Hairer, Bader and Lubich [33], where many alternative integrators in this direction are analyzed. The alternative approach is to compute the exponential and related functions exactly, but of an approximation to the Jacobian. The first paper in this direction was by Certaine [15] in 1960. Integrators which use exact computations of the exponential and related functions of an approximation to the Jacobian are the main focus of this review. This may seem a rather severe restriction on the class of methods, but in fact a vast array of literature exists, much of which is directed at the efficient time integration of semi-linear problems.
For semi-linear problems, where generally most of the difficulty lies in the operator L and not in the nonlinear term N(y), an approximation to the Jacobian of f seems reasonable. The natural choice of this approximation for semi-linear problems is L. Let us assume that we use L as an approximation to the full Jacobian in the method (2.1) above:

    y_n = y_{n-1} + h ϕ1(hL)(L y_{n-1} + N(y_{n-1}))
        = e^{hL} y_{n-1} + h ϕ1(hL) N(y_{n-1}).    (2.2)

What is interesting about this method is the number of different viewpoints from which it has been derived. It was first introduced by Nørsett [78] in 1969, as the order one method in a class of exponential integrators based on the Adams–Bashforth methods. It is also a special case of the filtered explicit Euler method, see [60, 84]. The method (2.2) is now most commonly called the Exponential Time Differencing (ETD) Euler method. Higher order methods based on Adams and Runge–Kutta methods are discussed in more detail in Section 5. Another common name used for (2.2) is the exponentially fitted Euler method. Exponential fitting was introduced by Liniger and Willoughby [63] in 1970. More recently it was derived as the order one Lie group Runge–Kutta method with affine action (see [76]), and is called the Lie–Euler method. Exponential integrators based on Lie group methods for semi-linear problems are discussed in Section 7. The implicit ETD Euler method is

    y_n = e^{hL} y_{n-1} + h ϕ1(hL) N(y_n).

We now describe an alternative derivation of the method (2.2). Construct a transformed differential equation by multiplying the original differential equation by e^{(t_{n-1}-t)L}, giving

    e^{(t_{n-1}-t)L} y'(t) = e^{(t_{n-1}-t)L} L y(t) + e^{(t_{n-1}-t)L} N(y(t)),
    (e^{(t_{n-1}-t)L} y(t))' = e^{(t_{n-1}-t)L} N(y(t)).

If we find the integral representation of this differential equation, approximate the nonlinear term in the integral by N(y_{n-1}), and then solve exactly, we arrive again at the ETD Euler method. Alternatively, if we apply the explicit Euler method to the transformed equation and then transform back into the original variables, we get

    y_n = e^{hL} y_{n-1} + e^{hL} h N(y_{n-1}).    (2.3)

This is known as the Lawson–Euler method, after Lawson [62], who invented it in 1967 as a means of overcoming stiffness in semi-linear problems. Method (2.3) is most commonly referred to as the Integrating Factor (IF) Euler method, as the differential equation is multiplied by the integrating factor e^{(t_{n-1}-t)L}. However, we choose to call methods based on this idea Lawson methods; they are discussed in more detail in Section 4. The implicit Lawson–Euler method is

    y_n = e^{hL} y_{n-1} + h N(y_n).
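To make the distinction concrete, here is a sketch of the explicit ETD Euler (2.2) and Lawson–Euler (2.3) steps for a scalar problem y' = λy + N(y); the function names are ours:

```python
import numpy as np

def etd_euler_step(lam, N, y, h):
    """ETD Euler (2.2): y_n = e^{h*lam} y_{n-1} + h phi1(h*lam) N(y_{n-1})."""
    z = h * lam
    p1 = np.expm1(z) / z if z != 0.0 else 1.0   # phi1(z) = (e^z - 1)/z
    return np.exp(z) * y + h * p1 * N(y)

def lawson_euler_step(lam, N, y, h):
    """Lawson (IF) Euler (2.3): y_n = e^{h*lam} (y_{n-1} + h N(y_{n-1}))."""
    return np.exp(h * lam) * (y + h * N(y))
```

For constant N the ETD Euler step is exact, whereas the Lawson–Euler step is not; for strongly stiff λ the two behave quite differently, which is why the various "exponential Euler" methods should not be conflated.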
We have already introduced three different extensions of the Euler method, all of which are exponential integrators and all of which have been called the exponential Euler method. It is important to distinguish between these methods, as they often perform quite differently. Recently, Lord and Rougemont [65] extended this idea to the class of semi-linear stochastic PDEs, constructing a first order stochastic Lawson–Euler method. The well-known implicit Euler–Maruyama scheme can be considered an approximation to the stochastic Lawson–Euler method.
Following Hochbruck [38] we define an exponential integrator as follows.

Definition 2.1. An exponential integrator is a numerical method which involves an exponential function (or a related function) of the Jacobian or an approximation to it.

Given this definition, the three Euler methods derived in this section are all exponential integrators. However, we will mainly restrict our attention to exponential integrators which use an approximation of the Jacobian. Even so, when appropriate, we will describe how many well-known methods are equivalent to exponential integrators if the exponential (and related functions) were computed exactly, as the essential difference can be considered one of computational efficiency. We also mention here that all exponential integrators considered in this paper are A-stable, which is a direct consequence of using the exponential within the method.

3 Exponential general linear methods

We have chosen to limit the scope of this paper to exponential integrators designed for semi-linear problems, where an approximation to the Jacobian (rather than the exact Jacobian) is used. The class of exponential integrators we consider is based on general linear methods which use the exponential (and related functions) of the approximate Jacobian within the integrator. We require that: if L = 0, then the resulting method is a general linear method, known as the underlying general linear method; if N(u) = 0, then the numerical method supplies the exact solution, provided the exponential and related functions are evaluated exactly.

Generally, for semi-linear problems, the difficult part (stiff or oscillatory nature) of the differential equation is in the linear part of the problem. By treating the linear part of the problem exactly, using the exponential and related functions, the remaining part of the integrator can be explicit. The tradeoff here is that the exponential and related functions must be computed, rather than an implicit integrator used. When a constant stepsize is used throughout the integration, the exponential and related functions can be evaluated before the integration begins, given that storing such information is feasible. These functions can be a significant overhead depending on the dimensionality of the differential equation and the structure of the matrix L. Therefore, exponential integrators are likely to be most competitive when the matrix L is diagonal or cheaply diagonalizable.
If h represents the stepsize and the r quantities

    y_1^{[n-1]}, y_2^{[n-1]}, ..., y_r^{[n-1]},

are known, then the computations performed in step number n of an exponential general linear method are

    Y_i = h Σ_{j=1}^{s} a_{ij}(hL) N(Y_j) + Σ_{j=1}^{r} u_{ij}(hL) y_j^{[n-1]},    i = 1, ..., s,    (3.1)

    y_i^{[n]} = h Σ_{j=1}^{s} b_{ij}(hL) N(Y_j) + Σ_{j=1}^{r} v_{ij}(hL) y_j^{[n-1]},    i = 1, ..., r.    (3.2)

The coefficients of the method, a_{ij}, b_{ij}, u_{ij} and v_{ij}, are functions of the exponential and related functions. The quantities which are assumed known at the start of step number n must be computed using some starting method when the integration begins. We can represent the exponential general linear methods in a more compact notation by introducing the vectors

    Y = [Y_1, ..., Y_s]^T,    N(Y) = [N(Y_1), ..., N(Y_s)]^T,
    y^{[n-1]} = [y_1^{[n-1]}, ..., y_r^{[n-1]}]^T,    y^{[n]} = [y_1^{[n]}, ..., y_r^{[n]}]^T.

An exponential general linear method can then be represented in matrix form as

    Y = A(hL) h N(Y) + U(hL) y^{[n-1]},
    y^{[n]} = B(hL) h N(Y) + V(hL) y^{[n-1]}.    (3.3)

Often it is convenient to represent the exponential general linear method in the tableau form

    M(hL) = [ A(hL)  U(hL)
              B(hL)  V(hL) ].
Most of the methods that we will describe in this paper are exponential Runge–
Kutta methods or exponential Adams methods.
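The computations (3.1)–(3.2) can be transcribed directly for an explicit method (strictly lower triangular A(hL)), with all coefficient matrices precomputed. This is a sketch under our own naming, not code from the paper:

```python
import numpy as np

def exp_glm_step(A, U, B, V, N, h, y_prev):
    """One step (3.1)-(3.2) of an explicit exponential general linear method.
    A, U, B, V are nested lists of precomputed m x m coefficient matrices
    (A[i][j] = a_ij(hL), etc.); y_prev holds the r incoming quantities."""
    s, r = len(A), len(y_prev)
    Y = []
    for i in range(s):
        Yi = sum(U[i][j] @ y_prev[j] for j in range(r))
        for j in range(i):                      # explicit: j < i only
            Yi = Yi + h * (A[i][j] @ N(Y[j]))
        Y.append(Yi)
    return [sum(V[i][j] @ y_prev[j] for j in range(r))
            + h * sum(B[i][j] @ N(Y[j]) for j in range(s))
            for i in range(r)]
```

With s = r = 1, A = [[0]], U = [[I]], B = [[ϕ1(hL)]] and V = [[e^{hL}]], this reduces to the ETD Euler method (2.2).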

4 Lawson methods
The generalized Runge–Kutta processes were first discovered by Lawson [62] in 1967 and have been rediscovered many times since. They are more commonly called Integrating Factor (IF) methods, a name used by Boyd [6], Canuto, Hussaini, Quarteroni and Zang [12], Milewski and Tabak [71], Maday, Patera and Rønquist [69], Smith and Waleffe [90], and Trefethen [99]. They are also known as Linearly Exact Runge–Kutta (LERK) methods, a name which seems to have first been used by García-Archilla [27] to describe the single third order method introduced by Jauberteau, Rosier and Temam in [51].

The main idea behind the Lawson methods is to use a change of variables, often called the Lawson transformation,

    v(t) = e^{(t_{n-1}-t)L} y(t).

Differentiating both sides of this equation yields

    v'(t) = -L e^{(t_{n-1}-t)L} y(t) + e^{(t_{n-1}-t)L} y'(t)
          = e^{(t_{n-1}-t)L} N(y(t))
          = e^{(t_{n-1}-t)L} N(e^{(t-t_{n-1})L} v(t)).

The purpose of transforming the differential equation in this way is to remove the explicit dependence of the differential equation on the operator L, except inside the exponential. The exponential function will damp the behaviour of L, removing the stiffness or highly oscillatory nature of the problem. The aim is now to use any numerical integrator (in the case of Lawson, a Runge–Kutta method) on the transformed, hopefully non-stiff, differential equation.

Apply a q-step Adams method to the transformed differential equation and then transform back to the original variable, to obtain the solution approximation. In general, the Lawson–Adams methods are

    y_n = e^{hL} y_{n-1} + h Σ_{k=0}^{q} β_k e^{khL} N(y_{n-k}),

where β_k are the coefficients of the underlying Adams method. Lawson methods based on the q-step BDF methods are

    y_n = Σ_{k=1}^{q} α_k e^{khL} y_{n-k} + h β_0 N(y_n),

where α_k and β_0 are the coefficients of the underlying BDF method. The general formulation of an s-stage Lawson–Runge–Kutta method in the original variables is

    Y_i = h Σ_{j=1}^{s} a_{ij} e^{(c_i-c_j)hL} N(Y_j) + e^{c_i hL} y_{n-1},    i = 1, 2, ..., s,
                                                                              (4.1)
    y_n = h Σ_{j=1}^{s} b_j e^{(1-c_j)hL} N(Y_j) + e^{hL} y_{n-1}.

This construction requires that the c vector has nondecreasing coefficients. The most computationally efficient schemes arise when the c vector has repeated abscissae, with as many repetitions as possible.
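A sketch of (4.1) for a scalar semi-linear problem, taking the underlying explicit Runge–Kutta tableau (a, b, c) as input; the names are ours:

```python
import numpy as np

def lawson_rk_step(a, b, c, lam, N, y, h):
    """One Lawson-Runge-Kutta step (4.1) for y' = lam*y + N(y), scalar,
    built on an explicit underlying Runge-Kutta tableau (a, b, c)."""
    s = len(b)
    Y = np.empty(s)
    for i in range(s):
        Y[i] = np.exp(c[i] * h * lam) * y + h * sum(
            a[i][j] * np.exp((c[i] - c[j]) * h * lam) * N(Y[j])
            for j in range(i))
    return np.exp(h * lam) * y + h * sum(
        b[j] * np.exp((1.0 - c[j]) * h * lam) * N(Y[j]) for j in range(s))
```

With the classical fourth order tableau and λ = 0 this is plain RK4 on y' = N(y); with N ≡ 0 it propagates the linear part exactly, as required by the conditions of Section 3.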
It is reported in [22] that the Lawson–Runge–Kutta methods only work well on moderately stiff problems in which the solution tends to zero or is periodic. In [22], the authors tried to overcome this problem by modifying the method (see the next section for details). Numerical tests on the Kuramoto–Sivashinsky equation, comparing a variable-order variable-step implementation of the BDF formulae with the third order method presented in [51], were given in [27]. These tests showed that the BDF method significantly outperformed the Lawson method.

The main difficulty with Lawson based methods is that the overall stiff order achieved is limited to one. This will be discussed further in Section 8, but it basically boils down to the fact that Lawson methods only use the exponential function. We also note here that the Lawson methods will not preserve fixed points which exist in the true solution.

The idea of transforming the differential equation and solving the individual parts separately has led to great success in convection-dominated PDEs. We refer to the paper by Maday, Patera and Rønquist [69], which has many similarities to the Lawson and generalized Lawson methods discussed in Section 6. Further work in this area by Celledoni [13] uses the methods reviewed in Section 7.

5 Exponential time differencing

A far more commonly used approach, which has also been rediscovered several times, is the so called Exponential Time Differencing (ETD) methods, a name originally used in computational electrodynamics. These methods also go by the names: generalized linear multistep and generalized Runge–Kutta methods, introduced by van der Houwen and Verwer [102] and Verwer [103]; Exact treatment of the Linear Part (ELP) schemes, introduced by Beylkin, Keiser and Vozovoi [5]; and exponential propagation, a name used by Friesner et al. [25] and Edwards et al. [21]. The ETD methods are very closely related to the W-methods of Steihaug and Wolfbrandt [91], developed in 1979 to overcome the need to use the exact Jacobian in the Rosenbrock methods, and the adaptive Runge–Kutta methods, constructed as an extension of the W-methods in 1982 by Strehmel and Weiner [92]. The main difference between these two classes of methods is that the ETD methods compute the exponential and related functions exactly, while the W-methods and the adaptive Runge–Kutta methods generally use Padé approximations. This is discussed in more detail later in this section. The ETD methods have been widely used in the physics literature; for example we cite [45, 82, 87, 89, 97].
The ETD methods can be constructed via the variation of constants formula. To derive the variation of constants formula, take the transformed differential equation

    (e^{(t_{n-1}-t)L} y(t))' = e^{(t_{n-1}-t)L} N(y(t)),    y(t_{n-1}) = y_{n-1},

and find the corresponding integral representation, which is

    y(t_{n-1} + h) = e^{hL} y_{n-1} + ∫_0^h e^{(h-τ)L} N(y(t_{n-1} + τ)) dτ.    (5.1)

The ETD methods are based on approximating the nonlinear term in the variation of constants formula by an algebraic polynomial. Such ideas in a quadrature setting were first used by Filon [23] in 1928. All ETD methods are designed so that fixed points are preserved.

5.1 ETD linear multistep methods

To construct ETD linear multistep methods, the nonlinear term in the variation of constants formula (5.1) is approximated by an algebraic polynomial which uses information known from previous steps. When L = 0, these methods reduce to the standard linear multistep methods. The first known paper in this direction is that of Certaine [15] from 1960, where order two and order three methods, which reduce to the corresponding Adams–Moulton formulae, were derived. What seems to be the next paper in this direction is the 1969 paper of Nørsett [77], where arbitrary order A-stable exponential integrators, which reduce to the Adams–Bashforth methods when L = 0, are constructed. There are two main representations of these methods and we believe it is worth giving both here. First, following Nørsett, the ETD Adams–Bashforth methods have the form

    y_n = e^{hL} y_{n-1} + h Σ_{k=0}^{q-1} α_k(hL) ∇^k N_{n-1},

where ∇^0 N_{n-1} = N_{n-1} and ∇^{k+1} N_{n-1} = ∇^k N_{n-1} - ∇^k N_{n-2} are the backward differences, and the functions α_k(x) satisfy the recurrence relations

    x α_0(x) = e^x - 1,
    x α_{k+1}(x) + 1 = α_k(x) + (1/2) α_{k-1}(x) + (1/3) α_{k-2}(x) + ... + (1/(k+1)) α_0(x).

If appropriate approximations are used for the α_k(x), then these methods become a subclass of those introduced in [61]. The above result has been recently rediscovered in the paper by Cox and Matthews [16]. Calvo and Palencia [11] have derived closely related q-step ETD methods and shown that they are convergent and of stiff order q.

It is possible to construct ETD Adams–Moulton formulae using the same philosophy as was used by Nørsett. The most likely reason why Nørsett did not derive these formulae is that the stiffness in the problem is overcome by the use of the exponential function, and therefore it would seem unnecessary to use an implicit method. However, they could be used as a corrector for an ETD Adams–Bashforth method. The ETD Adams–Moulton methods have the form

    y_n = e^{hL} y_{n-1} + h Σ_{k=0}^{q} β_k(hL) ∇^k N_n,

where the functions β_k(x) satisfy the recurrence relations

    x β_0(x) = e^x - 1,
    x β_{k+1}(x) + 1/(k+2)! = (1 - x/(k+1)!) β_k(x) + ... + (1 - x) β_0(x).

We have discussed that exponential integrators use the exponential and related functions; the ETD Adams methods use the α and β as the related functions. However, the most common related functions used in exponential integrators are the so called ϕ-functions, which are defined as

    ϕ_ℓ(xL) = (1/x^ℓ) ∫_0^x e^{(x-τ)L} τ^{ℓ-1}/(ℓ-1)! dτ,    ℓ ≥ 1.

The ϕ-functions are related by the recurrence relation

    ϕ_{ℓ+1}(z) = (ϕ_ℓ(z) - 1/ℓ!)/z,    ϕ_ℓ(0) = 1/ℓ!.
For the second representation of the ETD Adams methods we follow the paper of Beylkin, Keiser and Vozovoi [5]. The following lemma provides an expression for the exact solution, which will be used throughout this paper.

Lemma 5.1. The exact solution of the initial value problem

    y'(t) = Ly(t) + N(y(t)),    y(t_{n-1}) = y_{n-1},

can be represented by the expansion

    y(t) = e^{(t-t_{n-1})L} y_{n-1} + Σ_{ℓ=1}^{∞} ϕ_ℓ((t-t_{n-1})L) (t-t_{n-1})^ℓ N_{n-1}^{(ℓ-1)},

where N_{n-1}^{(ℓ-1)} = (d^{ℓ-1}/dt^{ℓ-1}) N(y(t)) |_{t=t_{n-1}}.

Proof: Substitute a Taylor series expansion of the nonlinear term into the variation of constants formula and use the expression for the ϕ-functions:

    y(t) = e^{(t-t_{n-1})L} y_{n-1} + ∫_0^{t-t_{n-1}} e^{(t-t_{n-1}-τ)L} N(y(t_{n-1} + τ)) dτ
         = e^{(t-t_{n-1})L} y_{n-1} + Σ_{ℓ=1}^{∞} ϕ_ℓ((t-t_{n-1})L) (t-t_{n-1})^ℓ N_{n-1}^{(ℓ-1)}.    (5.2)

The numerical solution approximates the variation of constants formula (5.1) by

    y_n = e^{hL} y_{n-1} + h Σ_{k=0}^{q} β_k N_{n-k},

where β_0, β_1, ..., β_q are the coefficients of the method, computed by expanding the nonlinear terms in a Taylor series about t_{n-1}:

    h Σ_{k=0}^{q} β_k N_{n-k} = Σ_{ℓ=1}^{∞} (h^ℓ/(ℓ-1)!) ( Σ_{k=0}^{q} β_k (1-k)^{ℓ-1} ) N_{n-1}^{(ℓ-1)}.    (5.3)

The order conditions for the ETD Adams methods can then be computed by substituting (5.3) into the expression for the numerical solution and comparing with (5.2) evaluated at t = t_{n-1} + h:

    (1/(ℓ-1)!) Σ_{k=0}^{m} β_k (1-k)^{ℓ-1} = ϕ_ℓ(hL),

where ℓ = 1, 2, ..., m, with m = q - 1 for implicit methods, and m = q for explicit methods. This system of equations is linear and can be represented simply as Aβ = ϕ, where the components of the vector ϕ are the ϕ_ℓ. For the ETD Adams–Bashforth methods A_{ℓk} = (1-k)^{ℓ-1}/(ℓ-1)!, whereas for the ETD Adams–Moulton methods A_{ℓk} = (2-k)^{ℓ-1}/(ℓ-1)!.

    q  β_1                            β_2                 β_3                     β_4
    1  ϕ1
    2  ϕ1 + ϕ2                        -ϕ2
    3  ϕ1 + (3/2)ϕ2 + ϕ3              -2(ϕ2 + ϕ3)         (1/2)ϕ2 + ϕ3
    4  ϕ1 + (11/6)ϕ2 + 2ϕ3 + ϕ4       -3ϕ2 - 5ϕ3 - 3ϕ4    (3/2)ϕ2 + 4ϕ3 + 3ϕ4    -(1/3)ϕ2 - ϕ3 - ϕ4

Table 1: ETD Adams–Bashforth methods with q = 1, ..., 4.

    q  β_0                  β_1                         β_2                β_3
    0  ϕ1
    1  ϕ2                   ϕ1 - ϕ2
    2  (1/2)ϕ2 + ϕ3         ϕ1 - 2ϕ3                    -(1/2)ϕ2 + ϕ3
    3  (1/3)ϕ2 + ϕ3 + ϕ4    ϕ1 + (1/2)ϕ2 - 2ϕ3 - 3ϕ4    -ϕ2 + ϕ3 + 3ϕ4     (1/6)ϕ2 - ϕ4

Table 2: ETD Adams–Moulton methods with q = 0, ..., 3.
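As an illustration of Table 1, here is a sketch of the two-step method (q = 2), y_n = e^{hλ} y_{n-1} + h[(ϕ1 + ϕ2)N_{n-1} − ϕ2 N_{n-2}], for a scalar problem; the driver needs one extra starting value, and the names are ours:

```python
import numpy as np

def etd_ab2(lam, N, y0, y1, h, steps):
    """Integrate y' = lam*y + N(y) with the ETD Adams-Bashforth
    two-step method of Table 1 (q = 2), starting from y0, y1."""
    z = h * lam
    E = np.exp(z)
    p1 = np.expm1(z) / z          # phi1
    p2 = (p1 - 1.0) / z           # phi2, via the recurrence
    ym2, ym1 = y0, y1
    for _ in range(steps):
        ym2, ym1 = ym1, E * ym1 + h * ((p1 + p2) * N(ym1) - p2 * N(ym2))
    return ym1
```

For constant N the ϕ2 terms cancel and each step is exact, so the result agrees with the closed-form solution of y' = λy + c.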

We briefly show a connection between the ETD Adams–Bashforth methods and the IMEX schemes of Ascher, Ruuth and Wetton [1]. The order two ETD Adams–Bashforth method is

    y_n = e^{hL} y_{n-1} + h ϕ1(hL) N_{n-1} + h ϕ2(hL)(N_{n-1} - N_{n-2}).

Now approximate the exponential using the (1,1) Padé approximation

    e^{hL} ≈ (I - (1/2)hL)^{-1} (I + (1/2)hL),

and the ϕ1 and ϕ2 with the (0,1) Padé approximations

    ϕ1(hL) ≈ (I - (1/2)hL)^{-1},    ϕ2(hL) ≈ (1/2)(I - (1/2)hL)^{-1}.

Substituting these into the order two ETD Adams–Bashforth method gives the order two IMEX scheme

    y_n = (I - (1/2)hL)^{-1} ( (I + (1/2)hL) y_{n-1} + (3/2) h N_{n-1} - (1/2) h N_{n-2} ).
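The substitution can be checked numerically for a scalar λ: the Padé-approximated ETD step and the IMEX step below are algebraically identical. The function names are ours:

```python
def imex_cn_ab2_step(lam, h, y, N1, N2):
    """Crank-Nicolson / Adams-Bashforth order two IMEX step:
    (1 - h*lam/2) y_n = (1 + h*lam/2) y_{n-1} + (3/2) h N_{n-1} - (1/2) h N_{n-2}."""
    return ((1 + 0.5 * h * lam) * y + 1.5 * h * N1 - 0.5 * h * N2) / (1 - 0.5 * h * lam)

def pade_etd_ab2_step(lam, h, y, N1, N2):
    """ETD-AB2 step with e^{h*lam}, phi1, phi2 replaced by the (1,1)
    and (0,1) Pade approximations given in the text."""
    d = 1 - 0.5 * h * lam
    E = (1 + 0.5 * h * lam) / d   # (1,1) Pade of the exponential
    p1 = 1.0 / d                  # (0,1) Pade of phi1
    p2 = 0.5 / d                  # (0,1) Pade of phi2
    return E * y + h * ((p1 + p2) * N1 - p2 * N2)
```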

This IMEX method (see [12, 54]) uses the order two Adams–Bashforth method on the nonlinear term and the Crank–Nicolson method on the linear term. The IMEX methods which use an Adams–Moulton method on the linear term can be considered as approximations to the ETD Adams–Bashforth methods. Generally, multistep IMEX methods use a BDF method on the linear term; such methods cannot be considered as approximations to ETD multistep methods. This can be seen by trying to find coefficients α_1, α_2 and β_0 in

    y_n = α_1 y_{n-1} + α_2 y_{n-2} + h β_0 N_n,

such that, when expanded in a Taylor series, the above expression matches the first few terms of the exact solution given in Lemma 5.1.

An extension to the ETD linear multistep methods was suggested by Jain [50] in 1972. The idea was not to use the Newton interpolation polynomial, as was used by Nørsett [77] and Cox and Matthews [16], but a Hermite interpolation polynomial. In this case the first derivative of the differential equation is needed; with this extra information, A-stable q-step methods of order 2q were derived.

5.2 ETD Runge–Kutta methods

The computations performed in a single step of an ETD Runge–Kutta method are

    Y_i = h Σ_{j=1}^{s} a_{ij}(hL) N(Y_j) + e^{c_i hL} y_{n-1},    i = 1, 2, ..., s,
                                                                   (5.4)
    y_n = h Σ_{i=1}^{s} b_i(hL) N(Y_i) + e^{hL} y_{n-1}.
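For a scalar problem, (5.4) can be transcribed directly, with the coefficient functions a_ij(·) and b_i(·) passed as callables; this is our own sketch, not code from the paper:

```python
import numpy as np

def etd_rk_step(a, b, c, lam, N, y, h):
    """One explicit ETD Runge-Kutta step (5.4) for y' = lam*y + N(y).
    a[i][j] and b[i] are callables returning a_ij(h*lam) and b_i(h*lam)."""
    s = len(b)
    Y = np.empty(s)
    for i in range(s):
        Y[i] = np.exp(c[i] * h * lam) * y + h * sum(
            a[i][j](h * lam) * N(Y[j]) for j in range(i))
    return np.exp(h * lam) * y + h * sum(
        b[i](h * lam) * N(Y[i]) for i in range(s))
```

Taking s = 1, c = [0] and b = [ϕ1] recovers the ETD Euler method (2.2).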

In van der Houwen [101] and Verwer [104], generalized Runge–Kutta methods which use an approximation to the Jacobian were briefly discussed. These methods could be considered the first approximations to ETD Runge–Kutta methods, as the exponential and related functions are approximated rather than exactly computed. The semi-implicit (also known as linearly implicit) methods [33, 94] can also be considered as early approximations to ETD Runge–Kutta methods.
Another early example of an approximate ETD Runge–Kutta method appeared in the paper by Ehle and Lawson [22] in 1975. The aim of this paper was to overcome the difficulties experienced with the Lawson methods, which performed poorly for very stiff problems. The main difficulty with the Lawson methods is that they have a maximum stiff order of one. Throughout this section we will talk about the stiff and non-stiff orders of the ETD Runge–Kutta methods; this will be discussed in more detail in Section 8. Even when the Runge–Kutta–Lawson methods achieve the non-stiff order, their error constants are significantly larger than those of other exponential integrators. The modifications of the Lawson methods proposed in [22] are constructed in a somewhat ad-hoc way, and the conditions that they require the coefficients b_i and a_ij to satisfy are insufficient to obtain non-stiff order greater than three and stiff order greater than two. If we replace the rational approximations used in [22] by their exact representations, the resulting method is

    [ 0                                                               I        ]
    [ (1/2)ϕ1,2       0                                               e^{hL/2} ]
    [ 0               (1/2)ϕ1,3     0                                 e^{hL/2} ]
    [ 0               0             ϕ1,4         0                    e^{hL}   ]
    [ ϕ1 - 3ϕ2 + ϕ3   2ϕ2 - ϕ3      2ϕ2 - ϕ3     -ϕ2 + ϕ3             e^{hL}   ]
where, for stylistic reasons, we define ϕ_{i,j} = ϕ_i(c_j hL). As we will see in Section 7, it is not possible to multiply each of the internal stages by ϕ1 and retain the correct order. In 1978, an article in German was published by Friedli [24], in which the coefficients a_ij and b_i are

    a_{ij}(hL) = Σ_{k=1}^{i-1} α_{ijk} ϕ_k(c_i hL),    b_i(hL) = Σ_{k=1}^{s} β_{ik} ϕ_k(hL).    (5.5)

It is worth noting here that the ϕ_k functions need not be evaluated at c_i hL; see Section 8 for more details. The non-stiff order conditions for methods up to order five were derived, and examples of methods up to non-stiff order four were included. Below we give a method of Friedli, which is based on the order four Runge–Kutta method of England; it has stiff order three, not four as Friedli had thought:

    [ 0                                                                                              I        ]
    [ (1/2)ϕ1,2                 0                                                                    e^{hL/2} ]
    [ (1/2)ϕ1,3 - (1/2)ϕ2,3     (1/2)ϕ2,3                                                            e^{hL/2} ]
    [ ϕ1,4 - 2ϕ2,4              -(26/25)ϕ1,4 + (2/25)ϕ2,4    (26/25)ϕ1,4 + (24/25)ϕ2,4    0          e^{hL}   ]
    [ ϕ1 - 3ϕ2 + 4ϕ3            0                            4ϕ2 - 8ϕ3                -ϕ2 + 4ϕ3      e^{hL}   ]

The next class of methods we consider are the W-methods, developed by Wolfbrandt in his PhD thesis [107] to avoid the use of the exact Jacobian in the Rosenbrock methods. The general non-stiff order conditions for these methods were constructed by Steihaug and Wolfbrandt [91] in 1979. To keep consistency with the rest of this paper we assume that the approximation to the Jacobian is given by the matrix L. In this case, the W-methods can be expressed as

    k_i = L( y_{n-1} + Σ_{j=1}^{i-1} a_{ij} h k_j ) + N( y_{n-1} + Σ_{j=1}^{i-1} a_{ij} h k_j ) + L Σ_{j=1}^{i} γ_{ij} h k_j,

    y_n = y_{n-1} + Σ_{j=1}^{s} b_j h k_j.

This formulation does not provide much insight into the potential connections with the ETD methods, but by making the substitution

    Y_i = y_{n-1} + Σ_{j=1}^{i-1} a_{ij} h k_j,

the matrices A(hL) and B(hL) can be expressed as

    A(hL) = (a ⊗ I_m)(I_s ⊗ I_m - h(γ + a) ⊗ L)^{-1},
    B(hL) = (b ⊗ I_m)(I_s ⊗ I_m - h(γ + a) ⊗ L)^{-1},

where the components of a, b and γ are a_{ij}, b_i and γ_{ij}, and

    U(hL) = I_m ⊗ e_s + hA(hL)(L ⊗ e_s),    V(hL) = I_m + hB(hL)(L ⊗ e_s).

This representation of the W-methods was realized by Strehmel and Weiner [94].
What this shows, is that the W-methods are not ETD Runge–Kutta methods,
as the exponential (and related functions) are not computed exactly. However,
they can be considered as approximations to ETD Runge–Kutta methods. In
1982, Strehmel and Weiner [92] constructed the adaptive Runge–Kutta meth-
ods, which are linearly implicit Runge–Kutta methods where the coefficient
matrices aij and bj are defined as in (5.5), but rational approximations are used
for the exponential and related functions. Strehmel and Weiner constructed in
general the non-stiff order conditions for the adaptive Runge–Kutta methods
in [92] and discussed B-convergence results when rational approximations are
used in [94]. We give one fourth order method of Strehmel and Weiner, which
is based on the order four Runge–Kutta method of England; we assume here that
the functions are computed exactly:
 
    0                        0            0            0            I
    (1/2)ϕ1,2                0            0            0            e^{(1/2)hL}
    (1/2)ϕ1,3 − (1/2)ϕ2,3    (1/2)ϕ2,3    0            0            e^{(1/2)hL}
    ϕ1,4 − 2ϕ2,4             −2ϕ2,4       4ϕ2,4        0            e^{hL}
    ϕ1 − 3ϕ2 + 4ϕ3           0            4ϕ2 − 8ϕ3    −ϕ2 + 4ϕ3    e^{hL}

A significant body of work by researchers in Halle has extended the adaptive
Runge–Kutta methods and W-methods to partitioned systems, implemented
these methods in both sequential and parallel environments, constructed two-step
variations and considered B-convergence. A small selection of references
from this work includes [7, 93, 95, 106]. Do connections exist between the B-
convergence order and the stiff order of exponential integrators? This question
is discussed in Section 8.
As was mentioned previously, there has been renewed interest in developing
exponential integrators recently. One of the main reasons for this renewed
interest is that the exponential and related functions can now be evaluated to
high precision, much more efficiently, than in the past. This overcomes the
order reduction apparent when Padé approximations are used. In the paper by
Cox and Matthews [16] apart from the ETD Adams–Bashforth methods they
included three ETD Runge–Kutta methods with two, three and four, stages.
The methods with two and three stages can be written as above, where the
coefficients of the method are linear combinations of the ϕ-functions, but this is
not the case for the four stage method. It has some important differences from
the three example methods given above. Representing the Cox and Matthews
method, which is based on the classical order four Runge–Kutta method, in
tableau form reads
 
    0                             0            0            0            I
    (1/2)ϕ1,2                     0            0            0            e^{(1/2)hL}
    0                             (1/2)ϕ1,2    0            0            e^{(1/2)hL}
    (1/2)(e^{(1/2)hL} − I)ϕ1,2    0            ϕ1,2         0            e^{hL}
    ϕ1 − 3ϕ2 + 4ϕ3                2ϕ2 − 4ϕ3    2ϕ2 − 4ϕ3    −ϕ2 + 4ϕ3    e^{hL}

There are two interesting points with this method of Cox and Matthews. The
first, pointed out by Krogstad [58] is that the internal stages have the same
structure as the commutator-free Lie group method, based on the classical order
four Runge–Kutta method, constructed by Celledoni, Martinsen and Owren [14]
with affine Lie group action (see Section 7). The second is that the coefficients
defining the method are not linear combinations of the ϕ-functions. As long as
the coefficients of the method satisfy the stiff order conditions given in Section
8 and the coefficients remain bounded, there is no reason why the coefficients
need to be linear combinations of the ϕ-functions. The four stage method of
Cox and Matthews motivated Krogstad [58] to develop a method which does not
require a product of functions, and which is very similar to the methods of Friedli
and of Strehmel and Weiner.
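For a scalar (or diagonalized) problem y' = Ly + N(y), one step of the four-stage Cox–Matthews method can be coded directly from the tableau. The sketch below is our own transcription, not code from [16]; as a consistency check, when N is constant the step is exact, since the weights sum to ϕ1.

```python
import math

def phi1(z): return math.expm1(z) / z
def phi2(z): return (phi1(z) - 1.0) / z
def phi3(z): return (phi2(z) - 0.5) / z

def etdrk4_step(y, h, L, N):
    """One step of the Cox-Matthews ETD Runge-Kutta method, scalar nonzero L."""
    z = h * L
    e1, e2 = math.exp(z), math.exp(0.5 * z)
    p12 = phi1(0.5 * z)                       # phi_{1,2} = phi_1(hL/2)
    b1 = phi1(z) - 3 * phi2(z) + 4 * phi3(z)
    b23 = 2 * phi2(z) - 4 * phi3(z)           # b_2 = b_3
    b4 = -phi2(z) + 4 * phi3(z)
    Y1 = y
    Y2 = e2 * y + 0.5 * h * p12 * N(Y1)
    Y3 = e2 * y + 0.5 * h * p12 * N(Y2)
    Y4 = e1 * y + h * (0.5 * (e2 - 1.0) * p12 * N(Y1) + p12 * N(Y3))
    return e1 * y + h * (b1 * N(Y1) + b23 * (N(Y2) + N(Y3)) + b4 * N(Y4))
```

With N(y) ≡ c the stages are all fed the same value, so the update reduces to e^{hL} y + h ϕ1(hL) c, which is the exact solution of y' = Ly + c.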
Finally, in this section, we will briefly discuss some of the isolated examples
of ETD methods appearing in various areas of science. In the integration of
chemical reactions many integrators have been proposed. One example is the
Pseudo-Steady-State Approximation (PSSA) scheme, which reduces to the ETD
Runge–Kutta method

    0          0          I
    ϕ1         0          e^{hL}    ,
    (1/2)ϕ1    (1/2)ϕ1    e^{hL}
for semi-linear problems and has stiff order one. In a recent article, Mott, Oran
and van Leer [75] proposed the α-Quasi-Steady-State (α-QSS) integrator. This
integrator also works when the linear part is a function of t. However, when the
linear part is constant, the α-QSS integrator reduces to the ETD Runge–Kutta
method, with stiff order two,
 
    0          0     I
    ϕ1         0     e^{hL}    .
    ϕ1 − ϕ2    ϕ2    e^{hL}

Several numerical experiments were performed in [105], comparing these low
order ETD methods with the standard stiff solvers RADAU5 and DASSL. Friesner
et al. [25] and Edwards et al. [21] constructed a class of integrators called
exponential propagation. These methods are derived from the variation of
constants formula using an iteration scheme until convergence is reached
    y^{(m)}(t_{n−1} + h) = e^{hL} y_{n−1} + ∫_0^h e^{(h−τ)L} N(y^{(m−1)}(t_{n−1} + τ)) dτ,

where y (0) (tn−1 + τ ) = y(tn−1 ). If past values of the solution were used then
the resulting method would be an exponential Adams method. However, they
calculate N (y (m−1) (tn−1 + τ )) at several values of τ within the interval [0, h],
in order to fit an algebraic polynomial of the form

    N(y^{(m−1)}(t_{n−1} + τ)) ≈ Σ_{j=0}^{m} n_j τ^j.

The exponential propagation methods can be written as ETD Runge–Kutta
methods, where the number of stages depends on the number of iterations
and the number of points used to construct an algebraic polynomial. In the
computations performed in [21, 25] the exponential propagation integrators use
Krylov subspace methods to evaluate the exponential and related functions.
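The contraction of this iteration is easy to observe on a scalar model problem. The sketch below is purely illustrative (our construction, with made-up values, and a fine trapezoidal rule standing in for the polynomial fit): successive Picard sweeps of the variation of constants formula move the end-of-step value monotonically toward its converged limit.

```python
import math

# Scalar model problem y' = L y + N(y) over one step of length h.
L, h, y0 = -1.0, 0.5, 0.8
N = lambda y: y * y
M = 400                                   # number of quadrature subintervals
dt = h / M
tau = [j * dt for j in range(M + 1)]

def sweep(y_prev):
    """One iteration y^{(m)} <- y^{(m-1)} of the variation of constants formula."""
    y_new = []
    for i, t in enumerate(tau):
        g = [math.exp((t - tau[j]) * L) * N(y_prev[j]) for j in range(i + 1)]
        integral = sum((g[j] + g[j + 1]) * dt / 2 for j in range(i))
        y_new.append(math.exp(t * L) * y0 + integral)
    return y_new

iterates = [[y0] * (M + 1)]               # y^{(0)}(tau) = y(t_{n-1})
for _ in range(6):
    iterates.append(sweep(iterates[-1]))

# Distance of early sweeps from the (numerically converged) sixth sweep.
errs = [abs(iterates[m][-1] - iterates[6][-1]) for m in (1, 2, 3)]
```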
In [43], Hochbruck and Ostermann constructed a class of implicit exponential
Runge–Kutta methods based on collocation. The methods are constructed
by choosing the collocation nodes c1 , . . . , cs and a collocation polynomial to
approximate the nonlinear term in the variation of constants formulae. An
example method of order four, which reduces to the Lobatto IIIC method when
L = 0, is

    0                                 0            0                      I
    (1/2)ϕ1 − (3/4)ϕ2 + (1/2)ϕ3       ϕ2 − ϕ3      −(1/4)ϕ2 + (1/2)ϕ3     e^{(1/2)hL}
    ϕ1 − 3ϕ2 + 4ϕ3                    4ϕ2 − 8ϕ3    −ϕ2 + 4ϕ3              e^{hL}         .
    ϕ1 − 3ϕ2 + 4ϕ3                    4ϕ2 − 8ϕ3    −ϕ2 + 4ϕ3              e^{hL}
The advantage of these methods is that the high stage order ensures there is
no order reduction, even in the stiff case. The drawback is that the method is
implicit; however, fixed point iteration can be used to evaluate the stages. As
long as only a few iterations are needed for convergence, these methods may be
competitive. An early method constructed in 1978, by Palusinski and Wait [81]
is the first and only known example of a partitioned ETD method.

6 Generalized Lawson methods
In Section 4 we discussed the Lawson methods and showed how the Lawson
transformation was used to rewrite the differential equation in a more appro-
priate form, in which L only appeared inside an exponential. Recently Krogstad
[58] proposed a way of generalizing the Lawson methods. Krogstad called these
methods Generalized Integrating Factor (GIF) methods, as he was unaware of
the original paper by Lawson. We suggest that they should be known as the
Generalized Lawson (GL) methods. In order to solve the original problem,
Krogstad proposed to represent the solution of (1.1) as the exact solution of a
differential equation which approximates the original problem with a modified
initial condition. We then obtain a differential equation for the initial
condition. Let P(t) be an algebraic polynomial of degree q − 1 through the points
{(t_{n−ℓ}, N_{n−ℓ})}_{ℓ=1,...,q},

    P(t) = Σ_{ℓ=1}^{q} p_{ℓ−1} (t − t_{n−1})^{ℓ−1} / (ℓ − 1)!,    (6.1)

where we are assuming that a fixed stepsize h is used throughout the integration,
that is h = t_ℓ − t_{ℓ−1}, and

    p_{ℓ−1} = (1/h^{ℓ−1}) Σ_{k=1}^{q} γ_{kℓ} N_{n−k},

with the elements γk` = (A−1 )`k , where A is defined as for the ETD Adams–
Bashforth methods. We use the polynomial P (t) to approximate the nonlinear
term; the GL methods solve exactly the modified initial value problem

    y'(t) = Ly(t) + P(t),    y(t_{n−1}) = v(t).

The exact solution (which can be seen immediately from Lemma 5.1) is
    y(t) = e^{(t−t_{n−1})L} v(t) + Σ_{ℓ=1}^{q} (t − t_{n−1})^ℓ ϕ_ℓ((t − t_{n−1})L) p_{ℓ−1}.    (6.2)

We will denote this as the Krogstad transformation. Differentiating the above
solution with respect to t and simplifying yields

    y'(t) = e^{(t−t_{n−1})L} v'(t) + Ly(t) + P(t),

which is equivalent to the differential equation

    v'(t) = e^{(t_{n−1}−t)L} (N(y(t)) − P(t)),    v(t_{n−1}) = y_{n−1}.    (6.3)

Now we can numerically solve this transformed equation to approximate the
initial condition v(t). At first sight this seems a rather strange approach for
solving the original problem, but preprocessing dates back to the late sixties,
when Butcher [9] used a similar idea to overcome the barrier that no five stage
Runge–Kutta method of order five exists.

As a first example, we approximate the nonlinear term with a zeroth order
polynomial P (t) = Nn−1 . Applying the classical fourth order Runge–Kutta
method to estimate the modified initial condition and then back transforming
using the Krogstad transformation, results in the GL1/cRK4 method
    0                                    0                   0                   0         I
    (1/2)ϕ1,2                            0                   0                   0         e^{(1/2)hL}
    (1/2)ϕ1,3 − (1/2)I                   (1/2)I              0                   0         e^{(1/2)hL}
    ϕ1,4 − e^{(1/2)hL}                   0                   e^{(1/2)hL}         0         e^{hL}
    ϕ1 − (2/3)e^{(1/2)hL} − (1/6)I       (1/3)e^{(1/2)hL}    (1/3)e^{(1/2)hL}    (1/6)I    e^{hL}
There are three interesting properties to observe about the method GL1/cRK4.
The first is that when L = 0, this method reduces to the classical fourth
order Runge–Kutta method. Secondly, the only difference with the traditional
Lawson method is the first (block) column of the matrix A(hL) and the first
(block) element of the matrix b(hL). Thirdly, fixed points are preserved; this is
true for all GL methods.
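The first of these properties is easy to confirm numerically. The sketch below (our own transcription of the GL1/cRK4 coefficients, scalar case) evaluates them at z = hL = 0 and recovers the classical weights 1/6, 1/3, 1/3, 1/6 and stage coefficients.

```python
import math

def phi1(z):
    """Scalar phi_1; the z = 0 limit is 1."""
    return math.expm1(z) / z if z != 0.0 else 1.0

def gl1_crk4_coeffs(z):
    """Coefficient functions of the GL1/cRK4 tableau at scalar z = hL."""
    e2 = math.exp(0.5 * z)
    A = [[0.0, 0.0, 0.0, 0.0],
         [0.5 * phi1(0.5 * z), 0.0, 0.0, 0.0],
         [0.5 * phi1(0.5 * z) - 0.5, 0.5, 0.0, 0.0],
         [phi1(z) - e2, 0.0, e2, 0.0]]
    b = [phi1(z) - (2 / 3) * e2 - 1 / 6, e2 / 3, e2 / 3, 1 / 6]
    return A, b

A0, b0 = gl1_crk4_coeffs(0.0)
# At L = 0 this collapses to the classical fourth order Runge-Kutta method.
```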
The performance of the method above can be significantly improved
by allowing P(t) to be a higher order approximation of N(y(t)). This
requires the use of approximations from the past, thus providing a multistep
flavour to the method. As a second example, we include the GL2/cRK4, which
uses the classical order four Runge–Kutta method to approximate the modi-
fied initial value and back transforms using the Krogstad transformation. The
resulting method, where y [n−1] = [yn−1 , hNn−2 ]T , is
    0                                   0                   0                   0         I              0
    (1/2)ϕ1,2 + (1/4)ϕ2,2               0                   0                   0         e^{(1/2)hL}    −(1/4)ϕ2,2
    (1/2)ϕ1,3 + (1/4)ϕ2,3 − (3/4)I      (1/2)I              0                   0         e^{(1/2)hL}    −(1/4)ϕ2,3 + (1/4)I
    ϕ1,4 + ϕ2,4 − (3/2)e^{(1/2)hL}      0                   e^{(1/2)hL}         0         e^{hL}         −ϕ2,4 + (1/2)e^{(1/2)hL}
    ϕ1 + ϕ2 − e^{(1/2)hL} − (1/3)I      (1/3)e^{(1/2)hL}    (1/3)e^{(1/2)hL}    (1/6)I    e^{hL}         −ϕ2 + (1/3)e^{(1/2)hL} + (1/6)I
    I                                   0                   0                   0         0              0
There are several interesting properties of this method. First of all, it is an
exponential general linear method which passes two quantities from step to
step. Secondly, only the first (block) column of the matrix A and the first
(block) element of the first row of the matrix B differ from the Lawson method.
This property holds for all GL methods, which use a Runge–Kutta method to
approximate the modified initial condition. This method is the unique method,
with stage order two, which passes the quantities yn and Nn−1 from step to step.
Finally, we mention that when L = 0, the underlying general linear method
was constructed by Butcher [8] in the first paper on general linear methods.
The main difficulty with these methods, as Krogstad pointed out in [58], is that
the improved accuracy comes to some extent at the price of stability. GL methods,
which use trigonometric instead of algebraic polynomials to approximate the
nonlinear term in the original problem, were constructed in [72]. In [79], a
general formulation of the GLq/RKp methods is given, and a modification of the
GL methods is proposed which significantly improves accuracy and overcomes the
problems of stability.

We now turn our attention to the situation when the initial condition is
approximated using an Adams–Bashforth method of order p. If P(t) is of
order q, then the modified initial condition is approximated by

    v_n = v_{n−1} + Σ_{i=1}^{p} α_i e^{(i−1)hL} h ( N_{n−i} − Σ_{k=1}^{q} [ Π_{ℓ=1, ℓ≠k}^{q} (ℓ − i)/(ℓ − k) ] N_{n−k} ).

Using the Krogstad transformation, the resulting approximation in the original
variable is

    y_n = e^{hL} y_{n−1} + Σ_{k=1}^{q} ( Σ_{ℓ=1}^{q} γ_{kℓ} ϕ_ℓ(hL) + Σ_{i=q+1}^{p} α_i e^{ihL} Π_{ℓ=1, ℓ≠k}^{q} (ℓ − i)/(ℓ − k) ) hN_{n−k}

          + Σ_{k=q+1}^{p} α_k e^{khL} hN_{n−k}.

If q = 0, then we recover the Lawson–Adams–Bashforth methods; if p ≤ q, then
the GLq/ABp reduces to the ETD Adams–Bashforth method; if p > q, then
the GLq/ABp extends the class of ETD methods. Therefore, all exponential
Adams–Bashforth methods can be constructed using this approach. In [79], it
is shown that in general a GLq method is convergent and has at least stiff order
q + 1.

7 Lie group methods


Lie group methods were originally designed to ensure that the exact and numer-
ical solutions, evolve on the same manifold. To achieve this, Lie group methods
advance from one point on the manifold to another by following the flow of
some simple vector fields, called frozen vector fields. The flow of a frozen vector
field defines a map which is often called the exponential map. We use the nota-
tion Exp when we refer to it. An alternative (and often used in the literature)
definition of the basic motions on the manifold can also be given in terms of Lie
groups, Lie algebras and their actions upon the manifold. To avoid unnecessary
complications and heavy notation, we introduce just some of the basic concepts
of Lie group methods, particularly adapted to the special type of semi-linear
problems which we have in mind. For readers who are unfamiliar with the
theory of Lie groups and Lie algebras for the numerical solution of differential
equations, we suggest [46, 72].
In recent years, there has been a significant amount of work in the field
of Lie group methods. Perhaps the first paper to construct such methods was
by Crouch and Grossman [17] in 1993. The Crouch–Grossman (CG) methods
were later extended to the Commutator-Free (CF) methods, see [14]. The
construction of Lie group integrators for the solution of semi-discretized partial
differential equations started with the paper by Munthe-Kaas [76] and was further
investigated for the heat equation in [14, 64, 96], parabolic problems in [57,
58, 72], convection-diffusion problems in [13] and for the nonlinear Schrödinger
equation in [3].
The manifold in which the solution of the initial value problem (1.1) evolves
is M ≡ Rm . Therefore, it is not difficult to construct a numerical integrator
which stays on M. However, the framework of Lie group methods provides,
through the freedom in the choice of the basic motions, a powerful tool for
constructing new exponential integrators. The question is, how to define the
basic motions on the manifold M so that they capture the key features of the
original vector field? For example, choosing the basic motions on M to be given
by translations, leads to the standard numerical schemes. For the semi-linear
problem (1.1) a natural choice, is to define the basic motions on M, to be the
flow of the following differential equation

    y' = αLy + N,    y(t_{n−1}) = y_{n−1},

where α ∈ R and N ∈ Rm . The frozen vector field F(αL,N ) (y) = αLy + N can
be seen as a local approximation to the original vector field, for appropriate α
and N . The exact solution of the above equation is given by

    y(t) = e^{(t−t_{n−1})αL} y_{n−1} + ϕ1((t − t_{n−1})αL) (t − t_{n−1}) N.




The set g = {(αL, N) | α ∈ R, N ∈ R^m} defines an algebra with zero element
(O, o), where the first element is interpreted as a matrix. It is also closed under
the bilinear bracket [·, ·] : g × g → g, given by

    [(α1 L, N1), (α2 L, N2)] = (O, α1 L N2 − α2 L N1),    (7.1)

and thus g is a Lie algebra. If G denotes the Lie group corresponding to g, we
can define the exponential map Exp : g → G as

    Exp(αL, N) = (e^{αL}, ϕ1(αL) N).

Therefore, the basic motions on M can be written in the following compact
form

    y(t) = e^{(t−t_{n−1})αL} y_{n−1} + ϕ1((t − t_{n−1})αL) (t − t_{n−1}) N
         = Exp((t − t_{n−1})(αL, N)) · y_{n−1},

where the map · : G × M → M is the affine group action

    (G, g) · y = Gy + g.    (7.2)

The use of the affine action for solving semi-discretized semi-linear problems
was first suggested by Munthe-Kaas [76] and later studied in [3, 14, 57, 58,
96]. In [72], it is proposed how this idea can be generalized so that better
approximations to the original vector field can be used.
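The Exp map and the affine action are easy to exercise numerically. The sketch below is illustrative only (scalar case, made-up values): it checks that Exp(t(αL, N)) · y0 agrees with a brute-force integration of the frozen field y' = αLy + N.

```python
import math

def Exp_dot(t, alpha, L, N, y0):
    """Exp(t(αL, N)) · y0 = e^{tαL} y0 + t φ1(tαL) N, for scalar nonzero αL."""
    z = t * alpha * L
    return math.exp(z) * y0 + t * (math.expm1(z) / z) * N

# Brute-force reference: many explicit Euler steps on the frozen vector field.
alpha, L, N, y0, t = 0.7, -1.3, 0.4, 2.0, 1.0
y = y0
n = 100000
for _ in range(n):
    y += (t / n) * (alpha * L * y + N)
```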
We will not discuss the CG methods with affine action, as these are consid-
ered an inefficient subclass of the CF methods. So, we now turn our attention

to the Runge–Kutta–Munthe-Kaas (RKMK) Lie group methods. The RKMK
methods were introduced to avoid the high number of Exp evaluations needed
in the CG methods. The main idea is to transform the original differential equa-
tion evolving on a manifold M to a corresponding differential equation evolving
on a Lie algebra g. Since g is a linear space, a standard Runge–Kutta method
can be used on the transformed equation. The result is then transformed back
to the manifold. For the semi-linear problem (1.1) this idea applies as follows:
search for a curve v(t) = (tL, z(t)) in g, such that v(0) = (O, o) and the solution
of (1.1) can be expressed as

    y(t_{n−1} + t) = Exp(v(t)) · y_{n−1} = e^{tL} y_{n−1} + ϕ1(tL) z(t).    (7.3)

Differentiating both sides of (7.3) leads to the following differential equation in
the Lie algebra g

    v'(t) = dExp^{−1}_{v(t)} ( L, N(Exp(v(t)) · y_{n−1}) ),    v(0) = (O, o),    (7.4)

where the map dExp^{−1}_{v(t)} : g → g is defined as

    dExp^{−1}_{(α1 L, N1)}(α2 L, N2) = Σ_{k≥0} (B_k / k!) ad^k_{(α1 L, N1)}(α2 L, N2).

The coefficients B_k are the Bernoulli numbers {1, −1/2, 1/6, 0, −1/30, 0, 1/42, . . .}.
Powers of the adjoint operator ad are recursively defined by the relation
ad^k_{(α1 L, N1)}(α2 L, N2) = ad_{(α1 L, N1)}( ad^{k−1}_{(α1 L, N1)}(α2 L, N2) ), with
ad_{(α1 L, N1)}(α2 L, N2) given by (7.1). Therefore,

    ad^k_{(α1 L, N1)}(α2 L, N2) = (O, α1^{k−1} L^k (α1 N2 − α2 N1)).

Substituting this expression into the dExp^{−1} operator and simplifying gives

    dExp^{−1}_{(α1 L, N1)}(α2 L, N2) = ( α2 L, ϕ1^{−1}(α1 L)( N2 − (α2/α1) N1 ) + (α2/α1) N1 ).

Using the above expression for the dExp^{−1} operator, and keeping in mind that
v(t) = (tL, z(t)), the transformed differential equation (7.4) can be rewritten as

    z'(t) = ϕ1^{−1}(tL) ( N(Exp(tL, z(t)) · y_{n−1}) − z(t)/t ) + z(t)/t,    z(0) = o.    (7.5)

The same result can alternatively be obtained by differentiating (7.3) with
respect to t and using the obvious equality ϕ1^{−1}(tL) e^{tL} − tL = ϕ1^{−1}(tL).
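This equality can be checked directly in the scalar case, where ϕ1(z)^{−1} = z/(e^z − 1); a quick numerical sketch (ours):

```python
import math

def phi1_inv(z):
    """Reciprocal of the scalar phi_1 function: phi_1(z)^{-1} = z / (e^z - 1)."""
    return z / math.expm1(z)

# Identity used in the derivation of (7.5): phi1^{-1}(z) e^z - z = phi1^{-1}(z).
for z in (0.3, -1.2, 2.5):
    lhs = phi1_inv(z) * math.exp(z) - z
    assert abs(lhs - phi1_inv(z)) < 1e-12
```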
The computations performed by a RKMK Lie group method, applied to the
semi-linear problem (1.1), are given in the following algorithm.

Algorithm 7.1. (Runge–Kutta–Munthe-Kaas Lie group methods)


    for i = 1, 2, . . . , s do
        Z_i = h Σ_{j=1}^{s} a_ij [ ϕ1^{−1}(c_j hL) ( N(Y_j) − Z_j/(c_j h) ) + Z_j/(c_j h) ]
        Y_i = e^{c_i hL} y_{n−1} + ϕ1(c_i hL) Z_i
    end
    z_n = h Σ_{j=1}^{s} b_j [ ϕ1^{−1}(c_j hL) ( N(Y_j) − Z_j/(c_j h) ) + Z_j/(c_j h) ]
    y_n = e^{hL} y_{n−1} + ϕ1(hL) z_n

Here the coefficients aij and bj are the classical Runge–Kutta coefficients,
therefore RKMK methods do not require a new order theory. As an example, we
apply the classical fourth order Runge–Kutta method to the transformed differ-
ential equation (7.5). The resulting method denoted as RKMK4e represented
in the original variables is
 
    0                                    0                              0               0         I
    (1/2)ϕ1,2                            0                              0               0         e^{(1/2)hL}
    (1/2)ϕ1,2 − (1/2)I                   (1/2)I                         0               0         e^{(1/2)hL}
    ϕ1(ϕ1,2^{−2} − 2ϕ1,2^{−1} + I)       ϕ1(−ϕ1,2^{−2} + ϕ1,2^{−1})     ϕ1 ϕ1,2^{−1}    0         e^{hL}
    b1(z)                                b2(z)                          b3(z)           (1/6)I    e^{hL}

where the coefficients b1, b2 and b3 are

    b1(z) = ϕ1( (1/2)ϕ1,2^{−2} − (4/3)ϕ1,2^{−1} + I ) − (1/6)ϕ1,2^{−2} + (1/3)ϕ1,2^{−1} − (1/6)I,
    b2(z) = ϕ1( −(1/2)ϕ1,2^{−2} + (5/6)ϕ1,2^{−1} ) + (1/6)ϕ1,2^{−2} − (1/6)ϕ1,2^{−1},
    b3(z) = (1/2)ϕ1 ϕ1,2^{−1} − (1/6)ϕ1,2^{−1}.

This shows again that the coefficients of the method need not only be linear
combinations of ϕ-functions. Unfortunately, the RKMK methods do not work
well for problems where L represents the stiff term, the reason being that ‖L‖
is typically much larger than ‖ϕ1(L)‖, and to evaluate the dExp^{−1} operator
commutators need to be evaluated, which involve products of the form LN(u).
As was pointed out by Krogstad [57], certain stepsize restrictions are present due
to singularities of the function ϕ1^{−1} at the points 2πki for k = ±1, ±2, . . .. In
[76], it was mentioned that it is not necessary to compute the dExp^{−1} operator
exactly as was done in the previous example. It is possible to truncate the
series to the order of the method or one less. The following example gives the
RKMK4t with a truncation of the dExp^{−1} operator
 
    0                          0                           0          0                          I
    (1/2)ϕ1,2                  0                           0          0                          e^{(1/2)hL}
    (1/8)hLϕ1,2                (1/2)(I − (1/4)hL)ϕ1,2      0          0                          e^{(1/2)hL}
    0                          0                           ϕ1         0                          e^{hL}         (7.6)
    (1/6)(I + (1/2)hL)ϕ1       (1/3)ϕ1                     (1/3)ϕ1    (1/6)(I − (1/2)hL)ϕ1       e^{hL}

We finish our discussion on RKMK methods by pointing out that the Lawson
methods, discussed in Section 4, and the GL methods, discussed in Section 6,
can also be seen as RKMK methods, with appropriate approximations to the
Exp map; see [72].

We now consider the CF Lie group methods originally designed in [14].
These methods were introduced to overcome the high costs of the CG methods
and the need for commutators in the RKMK methods. The computations
performed by the method are given in the following algorithm.
Algorithm 7.2. (Commutator-Free Lie group method)
    for i = 1, 2, . . . , s do
        Y_i = Exp( h Σ_{j=1}^{s} α_ij^J (L, N_j) ) · · · Exp( h Σ_{j=1}^{s} α_ij^1 (L, N_j) ) · y_{n−1}
        N_i = N(Y_i)
    end
    y_n = Exp( h Σ_{j=1}^{s} β_j^J (L, N_j) ) · · · Exp( h Σ_{j=1}^{s} β_j^1 (L, N_j) ) · y_{n−1}
The coefficients α_ij^k and β_j^k are parameters of the method. They are determined
from order theory, which can be adapted from the order theory presented
in [80]. In general the method is implicit, unless α_ij^k = 0 for i ≤ j, in which case
it is explicit. The parameter J counts the number of Exp evaluations at each
stage and is equal to the number of sub-stages included in the stage.
As an example, we consider the CF Lie group method of order four pro-
posed in [14]. It is based on the classical fourth order Runge–Kutta method
and imposes splitting on the last stage and the output approximation. When
the affine action (7.2) is used to obtain an exponential integrator, it is sufficient
to multiply the coefficients of the CF Lie group method by ϕ1 (ci hL) and the
incoming quantities by eci hL . Note that this procedure works only if the under-
lying method is a CF Lie group method. Otherwise this will be insufficient to
guarantee the required order. This was the mistake made by Ehle and Lawson
[22]. As we have already mentioned, the internal stages of the order four CF
Lie group method, based on the classical Runge–Kutta method of order four,
are the same as the internal stages of Cox and Matthews method, we therefore
consider only the output approximation
    y_n = e^{(1/2)hL} ( e^{(1/2)hL} y_{n−1} + ϕ1,2 ( (1/4)hN(Y1) + (1/6)hN(Y2) + (1/6)hN(Y3) − (1/12)hN(Y4) ) )
            + ϕ1,2 ( −(1/12)hN(Y1) + (1/6)hN(Y2) + (1/6)hN(Y3) + (1/4)hN(Y4) )
        = (1/12)ϕ1,2 (3e^{(1/2)hL} − I) hN(Y1) + (1/6)ϕ1,2 (e^{(1/2)hL} + I) hN(Y2)
            + (1/6)ϕ1,2 (e^{(1/2)hL} + I) hN(Y3) + (1/12)ϕ1,2 (3I − e^{(1/2)hL}) hN(Y4) + e^{hL} y_{n−1}.

When implementing a CF method one would not expand the last stage and
the output approximation, as the resulting formulation is more expensive to
implement. However, we represent the CF method in this way to show that it
can indeed be represented as an exponential Runge–Kutta method
 
    0                             0            0          0                         I
    (1/2)ϕ1,2                     0            0          0                         e^{(1/2)hL}
    0                             (1/2)ϕ1,2    0          0                         e^{(1/2)hL}
    (1/2)(e^{(1/2)hL} − I)ϕ1,2    0            ϕ1,2       0                         e^{hL}         (7.7)
    (1/2)ϕ1 − (1/3)ϕ1,2           (1/3)ϕ1      (1/3)ϕ1    −(1/6)ϕ1 + (1/3)ϕ1,2      e^{hL}

This gives another example of how the coefficients of the method can have a
more general nature than just linear combinations of certain ϕ-functions.
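That the expanded weights above are expressible through ϕ1 and ϕ1,2 rests on the halving identity e^{z/2}ϕ1(z/2) = 2ϕ1(z) − ϕ1(z/2). A scalar numerical check (ours) of this identity and the resulting weight equalities:

```python
import math

def phi1(z): return math.expm1(z) / z

for z in (0.5, -2.0, 3.7):
    p, ph = phi1(z), phi1(0.5 * z)
    e2 = math.exp(0.5 * z)
    # Halving identity for phi_1.
    assert abs(e2 * ph - (2 * p - ph)) < 1e-12
    # b_1: (1/12) phi_{1,2} (3 e^{z/2} - 1) == (1/2) phi_1 - (1/3) phi_{1,2}
    assert abs(ph * (3 * e2 - 1) / 12 - (0.5 * p - ph / 3)) < 1e-12
    # b_2 = b_3: (1/6) phi_{1,2} (e^{z/2} + 1) == (1/3) phi_1
    assert abs(ph * (e2 + 1) / 6 - p / 3) < 1e-12
```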

8 Order conditions
The order conditions for generalized Runge–Kutta methods were first studied
by van der Houwen [101]. The case when an inexact Jacobian is used was also
briefly discussed, but order conditions for this situation were not derived. The
Rosenbrock methods are a special case of the generalized Runge–Kutta methods,
when low order Padé approximations are used. We limit our discussion in
this section to exponential integrators which use an approximate Jacobian.

8.1 Non-Stiff order conditions


The first major contributions to constructing a non-stiff order theory came in
three independent papers [24, 91, 92]. In the first paper, Friedli considered ETD
Runge–Kutta methods; although a general theory was not given, the non-stiff
order conditions up to order five were derived. In the following year, Steihaug
and Wolfbrandt constructed the order conditions for the W-methods, which use
an approximation to the Jacobian. As we saw in Section 5, these methods can
be considered as approximations to ETD Runge–Kutta methods. Strehmel and
Weiner [92] constructed in 1982 the general non-stiff order conditions for the
ETD Runge–Kutta methods.
In this section, we extend the non-stiff order theory to exponential general
linear methods (3.3). It should be noted that the order theory constructed
here, and in all the papers referenced above, assumes that powers of the matrix
L exist and thus the method coefficients can be expanded into a power series.
As was first pointed out in [91], it is useful to use bi-coloured (white and black)
rooted trees, with the extra requirement that the white nodes only have one
child, in the analysis of the non-stiff order conditions. Let 2T∗ denote this set
of rooted trees. The restriction on the white nodes comes directly from the
linear term in the differential equation (1.1). We use B-series to analyze the
non-stiff order conditions. For an elementary weight function a : 2T∗ → R, the
B-series is defined as

    B(a, u) = a(∅)u + Σ_{τ ∈ 2T∗} h^{|τ|} (a(τ)/σ(τ)) F(τ)(u).

For those not familiar with these concepts we suggest the monographs [10, 36]
for a complete treatment. Each tree τ can be decomposed as τ = [τ1, . . . , τℓ]_{tp(τ)},
where tp(τ) ∈ {◦, •} represents the colour of the root node and τ1, . . . , τℓ is the
forest remaining after the root node has been removed. The order |τ|, symmetry
σ(τ) and density γ(τ) are defined in the same way as for Runge–Kutta methods.
A one to one correspondence between the rooted bi-coloured trees and the
elementary differentials exists, where F(◦)(u) = Lu and F(•)(u) = N(u), and

    F(τ)(u) = L F(τ1)(u),                               if τ = [τ1]_◦,
            = N^{(ℓ)}(u)(F(τ1)(u), . . . , F(τℓ)(u)),   if τ = [τ1, . . . , τℓ]_•.    (8.1)

The exact solution of (1.1) can be represented by the following B-series u(t +
h) = B(γ −1 , u), which is exactly the same as for Runge–Kutta methods. We
are now interested in finding the elementary weight function which describes
the operations of the numerical method. Before we do this, it is convenient to
expand aij , bij , uij and vij into power series, which gives
    Y_i = Σ_{j=1}^{s} Σ_{l≥0} a_ij^{[l]} (hL)^l hN(Y_j) + Σ_{j=1}^{r} Σ_{l≥0} u_ij^{[l]} (hL)^l y_j^{[n−1]},    i = 1, . . . , s,
                                                                                                                (8.2)
    y_i^{[n]} = Σ_{j=1}^{s} Σ_{l≥0} b_ij^{[l]} (hL)^l hN(Y_j) + Σ_{j=1}^{r} Σ_{l≥0} v_ij^{[l]} (hL)^l y_j^{[n−1]},    i = 1, . . . , r.

To obtain B-series expansions of the numerical solution we need the following
two lemmas. These are extensions of well known results, and we therefore do
not include proofs.
Lemma 8.1. Let a : 2T∗ → R be a mapping satisfying a(∅) = 1, then

    hN(B(a(τ), u)) = B(a'(τ), u),

where the derivative of the elementary weight function satisfies a'(∅) = 0, and

    a'(τ) = 0,                     if τ = [τ1]_◦,
          = a(τ1) · · · a(τℓ),     if τ = [τ1, . . . , τℓ]_•.

Lemma 8.2. Let ψ_x(z) be a power series in z, ψ_x(z) = Σ_{l≥0} x^{[l]} z^l, and let
a : 2T∗ → R be a mapping, then

    ψ_x(hL) B(a(τ), u) = B((ψ_x(L)a)(τ), u),

where the elementary weight function satisfies (ψ_x(L)a)(τ) = Σ_{l≥0} x^{[l]} (L^l a)(τ),
with (L^l a)(∅) = 0, and

    (L^l a)(τ) = (L^{l−1} a)(τ1),    if τ = [τ1]_◦,
               = 0,                  if τ = [τ1, . . . , τℓ]_•.
We now have everything we need to represent the numerical method denoted
by (8.2) using B-series. From Lemmas 8.1 and 8.2, it follows
    Σ_{l≥0} a_ij^{[l]} (hL)^l hN(Y_j) = ψ_{a_ij}(hL) B(ξ_j'(τ), y_{n−1}) = B((ψ_{a_ij}(L) ξ_j')(τ), y_{n−1}).

Let α denote the generating function of the starting method S(hL), that is
y_i^{[n−1]} = B(α_i, y_{n−1}); then for all τ ∈ 2T∗ and |τ| ≤ p, the generating functions
for the order conditions satisfy

    ξ_i(τ) = Σ_{j=1}^{s} (ψ_{a_ij}(L) ξ_j')(τ) + Σ_{j=1}^{r} (ψ_{u_ij}(L) α_j)(τ),

    Eα_i(τ) = Σ_{j=1}^{s} (ψ_{b_ij}(L) ξ_j')(τ) + Σ_{j=1}^{r} (ψ_{v_ij}(L) α_j)(τ),

where E is the elementary weight function of the exact solution. The matrix
representation of the elementary weight functions, interpreted in the natural
way, is

    ξ(τ) = (ψ_A(L) ξ')(τ) + (ψ_U(L) α)(τ),

    Eα(τ) = (ψ_B(L) ξ')(τ) + (ψ_V(L) α)(τ).

Given that we have B-series expansions for both the exact and the numerical
solutions, we can now define order in a similar way as for general linear methods.

Definition 8.3. An exponential general linear method M(hL), with elementary
weight function m : 2T∗ → R, has non-stiff order p, if, for all τ ∈ 2T∗ such that
|τ| ≤ p,

    m(τ) = Eα(τ).

This shows that it is not possible, in general, to obtain order by simply


using a general linear method for the nonlinear part N of the problem. There
are coupling conditions between the nonlinear and linear parts of the problem
despite the fact that the linear part has been solved exactly. Note that if L = 0,
the order conditions simply reduce to the order conditions corresponding to the
black trees. In this case the method (3.3) is equivalent to a general linear
method for the nonlinear part N .
Along with the recent resurgence in constructing exponential integrators,
there has been renewed interest in developing the order conditions for these
methods; we cite the papers [2, 44, 55, 56]. In all of these papers except [44],
the non-stiff order conditions have been rederived.

8.2 Stiff order conditions


The main problem with the non-stiff order conditions is that they assume the
exponential and related functions can be expanded in a power series. However,
for the problems that we are interested in solving, parabolic and hyperbolic
PDEs, the operator or matrix L is unbounded. The ϕ-functions are bounded,
but because L is unbounded it is not possible to expand the ϕ-functions in power
series. What is interesting is that even though methods are constructed using
the non-stiff order conditions, that is the exponential and related functions are
expanded in power series, they are implemented so that these functions are
computed exactly. For almost all problems, which are used to test exponential
integrators, the method implemented in this way will perform at its non-stiff
order. Recently, Hochbruck and Ostermann constructed a problem in which
most exponential integrators performed significantly worse than their non-stiff
order; see problems 3 and 4 in Section 11. To overcome this, Hochbruck and
Ostermann [43] introduced a class of implicit collocation ETD Runge–Kutta
methods. The high stage order makes it possible to analyze the order without
expanding the coefficients of the method in power series. The problem now is,
how do we treat the implicit nature of the method, given that the exponential
integrators are designed to overcome the need for implicit computations. It is
sufficient for these methods to use a fixed point iteration scheme instead of a

modified Newton type iteration scheme. The resulting explicit method, given a
suitable number of iterations, will be of the appropriate stiff order. The overall
process would involve a significant number of stages, around sp, where s is the
number of stages and p is the stiff order required.
To overcome the need for so many stages, Hochbruck and Ostermann [43]
constructed the stiff order conditions, up to order four, for an explicit ETD
Runge–Kutta method. We refer to that paper for details, but we give the order
four conditions below, where we interpret them in a component by component
sense:

    A(hL)e = cϕ1(chL),
    b(hL)e = ϕ1(hL),
    b(hL)c = ϕ2(hL),
    b(hL)(c²/2!) = ϕ3(hL),
    b(hL)J( c²ϕ2(chL) − A(hL)c ) = 0,
    b(hL)(c³/3!) = ϕ4(hL),
    b(hL)J( c³ϕ3(chL) − A(hL)(c²/2!) ) = 0,
    b(hL)JA(hL)J( c²ϕ2(chL) − A(hL)c ) = 0,
    b(hL)cK( c²ϕ2(chL) − A(hL)c ) = 0,




both J and K are certain problem dependent operators. The first condition is
a generalization of the C(1) condition satisfied by most Runge–Kutta methods.
It also, along with the second condition, ensures that the method preserves fixed
points. The second, third, fourth and sixth conditions are generalizations of the
bushy tree conditions. The remaining conditions contain either a generalization
of the C(2) or C(3) conditions. As an explicit Runge–Kutta method cannot
have stage order greater than one, it is not possible to satisfy the
generalizations of the C(2) and C(3) conditions, so ETD Runge–Kutta methods
with high stiff order must have some of the b(hL) coefficients equal to zero. This
is evident in the stiff order four method derived in [44],
 
0 0 0 0 0 I
1
1

 2 ϕ1,2 0 0 0 0 e 2 hL 

1
1 hL 
ϕ − ϕ ϕ 0 0 0 e
 2
2 1,3 2,3 2,3
 , (8.3)
 
ϕ1,4 − 2ϕ2,4 ϕ2,4 ϕ2,4 0 0 ehL 


 1 1
 ϕ1,5 − 2a5,2 − a5,4 a5,2 a5,2 1 ϕ2,5 − a5,2 e 2 hL 

2 4 0
ϕ1 − 3ϕ2 + 4ϕ3 0 0 −2ϕ2 + 4ϕ3 4ϕ2 − 8ϕ3 ehL

where
1 1 1
a5,2 = ϕ2,5 − ϕ3,4 + ϕ2,4 − ϕ3,5 .
4 4 2
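Since ϕℓ(0) = 1/ℓ!, the weights of a stiff order four method must in particular satisfy the classical quadrature conditions at hL = 0. The following sketch (in Python, for illustration only; it assumes the weights b1 = ϕ1 − 3ϕ2 + 4ϕ3, b2 = b3 = 0, b4 = −ϕ2 + 4ϕ3, b5 = 4ϕ2 − 8ϕ3 and abscissae c = (0, 1/2, 1/2, 1, 1/2), as read from the tableau above) checks this in exact rational arithmetic:

```python
from fractions import Fraction as F
import math

# phi_l(0) = 1/l!, which follows from the recurrence in Section 9
def phi_at_zero(l):
    return F(1, math.factorial(l))

# weights b_i(hL) of the stiff order four method evaluated at hL = 0
b = [phi_at_zero(1) - 3 * phi_at_zero(2) + 4 * phi_at_zero(3),
     F(0),
     F(0),
     -phi_at_zero(2) + 4 * phi_at_zero(3),
     4 * phi_at_zero(2) - 8 * phi_at_zero(3)]
c = [F(0), F(1, 2), F(1, 2), F(1), F(1, 2)]

# the quadrature conditions b(0) c^{k-1}/(k-1)! = phi_k(0), k = 1, ..., 4
for k in range(1, 5):
    lhs = sum(bi * ci ** (k - 1) for bi, ci in zip(b, c)) / math.factorial(k - 1)
    assert lhs == phi_at_zero(k), (k, lhs)
print("quadrature conditions up to order four hold at hL = 0")
```

All four conditions hold exactly, consistent with the method having non-stiff order four.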
It is even more evident for the implicit ETD Runge–Kutta methods with fixed
point iteration. In [44], the authors proved the following lemma.

Lemma 8.4. For an exponential integrator to have stiff order p, it is sufficient
to satisfy the stiff order p − 1 conditions and the order p conditions with b(hL)
replaced by b(0).

What the stiff order conditions and the lemma tell us is that to achieve order
p, we must have at least ϕℓ, for ℓ = 1, . . . , p − 1, within the method. This means
that the Lawson methods discussed in Section 4 can, in general, have at most
stiff order one. For some particular applications, however, it can be shown that
they perform at their full non-stiff order. In [3], it is shown that for the nonlinear
Schrödinger equation with smooth potential, the Lawson methods exhibit the
full non-stiff order. Kværnø [59] has derived the stiff order conditions using a
B-series approach for scalar equations. In this case, the elementary differentials
commute, so it is possible to overcome the need to expand in power series.
Whether it is possible to use a B-series approach on non-scalar equations is still
unclear.
In this subsection we have only discussed the stiff order conditions for
exponential Runge–Kutta methods, the reason being that the q-step ETD Adams
methods derived in Section 5 have stiff order q, as was recently proved by
Calvo and Palencia in [11]. Also, it was recently proved in [79] that the
generalized Lawson methods discussed in Section 6 have stiff order q + 1 for a GLq
method. Explicit exponential general linear methods with high stage order
seem to be the most promising class of methods for semi-linear problems: high
stiff stage order overcomes the difficulty of obtaining high stiff order without
requiring the method to be implicit.

9 Implementation issues
In this section, we briefly address some practical issues regarding the imple-
mentation of the exponential integrators. The main computational challenge in
the implementation of any exponential integrator is the need for fast and com-
putationally stable evaluations of the exponential and the related ϕ-functions.
There are many methods available in the literature for computing the
exponential function; we refer to [73] and the references therein. Almost all exponential
integrators, with the exception of the integrators derived from the Lie group
framework, explicitly use linear combinations of the functions

    ϕ0(z) = e^z,    ϕℓ+1(z) = (ϕℓ(z) − 1/ℓ!)/z,    ℓ = 0, 1, . . . ,        (9.1)

as previously defined in Section 5. A straightforward implementation based on
the above formula suffers from cancellation errors for small z [37, 53]. As ℓ
increases, the cancellation errors become even more severe. A way to avoid
this problem is to approximate each ϕ-function by its truncated Taylor series
expansion. This approach, however, fails to produce correct results for large z.
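The loss of accuracy is easy to reproduce for scalar arguments. The following sketch (illustrative only; the function names are ours) compares the direct recurrence (9.1) with a truncated Taylor expansion for ϕ2 at a small argument:

```python
import math

def phi_direct(l, z):
    # the recurrence (9.1): phi_0 = e^z, phi_{l+1}(z) = (phi_l(z) - 1/l!)/z;
    # unstable for small z due to cancellation in the subtractions
    p = math.exp(z)
    for k in range(l):
        p = (p - 1.0 / math.factorial(k)) / z
    return p

def phi_taylor(l, z, terms=25):
    # truncated Taylor series phi_l(z) = sum_{k>=0} z^k/(k+l)!; accurate
    # only while |z| is small enough for the truncation to converge
    return sum(z ** k / math.factorial(k + l) for k in range(terms))

z = 1e-8
ref = phi_taylor(2, z)                    # essentially exact for this z
rel_err = abs(phi_direct(2, z) - ref) / ref
# the direct recurrence has lost essentially all significant digits here
print(f"relative error of the direct recurrence for phi_2({z:g}): {rel_err:.1e}")
```

For moderate z the situation reverses: the direct formula is accurate, while the truncated series eventually fails, which is what motivates the cutoff strategy described next.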
Thus, a natural idea, first used by Cox and Matthews [16], is to introduce a cutoff
point and to compute the ϕ-functions directly by (9.1) when z is large, and by
truncated Taylor series expansions when z is small. The problem with this
approach is that there may exist a region of z values in which neither

approximation is accurate. To avoid this drawback, Kassam and Trefethen [53]
proposed to approximate the ϕ-functions by the Cauchy integral formula

    ϕℓ(z) = (1/2πi) ∫Γ ϕℓ(λ)(λ − z)⁻¹ dλ,        (9.2)

where Γ is a contour in the complex plane that encloses z and is well separated
from 0. In [53], it is suggested that the contour Γ can be chosen to be a circle
centered on the real axis. In this case, due to symmetry, one can evaluate the
integral only on the upper half of the circle and then double the real part of
the result. However, we mention that, depending on the type of problem
we have to solve, other contours, different from circles, can also be used. For
example, for parabolic problems it seems preferable to choose the contour Γ
to be a parabola, also centered on the real axis. The integral in (9.2) can
easily be approximated by the trapezoid rule [99]. The main disadvantages
of the Cauchy integral approach are that, unless the matrix L has a very special
structure, computing approximations to the ϕ-functions in this way is simply too
expensive, and that the contour in general varies from problem to problem,
making it difficult to obtain general algorithms.
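For scalar arguments this contour approach is easy to sketch. The snippet below (an illustration, not the implementation of [53]; the contour parameters are arbitrary choices) applies the trapezoid rule to (9.2) for ϕ1 on a circle centered on the real axis; note that small z causes no cancellation, since ϕ1 is only evaluated on the contour, away from 0:

```python
import cmath
import math

def phi1(lam):
    # safe on the contour, where |lam| stays well away from 0
    return (cmath.exp(lam) - 1.0) / lam

def phi1_cauchy(z, center=2.0, radius=4.0, M=64):
    # trapezoid rule for (9.2) on a circle enclosing z; with a periodic
    # analytic integrand the rule converges geometrically in M
    total = 0.0 + 0.0j
    for k in range(M):
        lam = center + radius * cmath.exp(2j * math.pi * k / M)
        total += phi1(lam) * (lam - center) / (lam - z)
    return (total / M).real   # real for real z, by symmetry of the contour

# small and moderate z handled uniformly, with no cancellation
for z in (0.5, 1e-8, 1.0):
    exact = math.expm1(z) / z
    assert abs(phi1_cauchy(z) - exact) < 1e-10, z
print("contour evaluation of phi_1 agrees with the exact values")
```

For a matrix argument each evaluation ϕℓ(λ)(λI − z)⁻¹ requires a linear solve, which is why the approach is only practical when L has special structure.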
Another approach for approximating the ϕ-functions is to use high order
Padé approximations, combined with a scaling and squaring technique, which
can be adapted from the approach proposed in [5]. Obtaining general formulae
for the elements of the Padé table is more difficult than for the exponential
function. Unique expressions of order d + n, where d and n are the degrees
of the denominator and numerator respectively, exist for ℓ ≥ 2; this is not
possible for the ϕ1 function. An example is the Padé (1,3) approximation to
the ϕ1 function, which is actually only a third order approximation.
Yet another possibility is to use a similarity transformation

    z = SyS⁻¹,

where y is such that ϕℓ(y) is cheap and easy to compute; then

    ϕℓ(z) = Sϕℓ(y)S⁻¹.

The choices of S and y involve two conflicting tasks: make y as close to
diagonal as possible, while keeping S well conditioned. It is therefore natural
to choose S to be a unitary matrix. Two algorithms based on this
decomposition idea are of practical interest: the block Schur–Parlett
algorithm [19], and the tridiagonal reduction algorithm, first proposed for the
ϕ1 function in [67] and later generalized to all ϕ-functions in [72]. As for the
Cauchy integral approach, except in some special cases, these methods are also
too expensive to implement. The situation is even worse if a variable-stepsize
strategy is used, in which case the ϕ-functions must be recomputed every time
the stepsize is changed.
A way to overcome the higher computational cost arising from changes of
the stepsize is to take advantage of the fact that, when implementing any
exponential integrator, we do not really need to compute the ϕ-functions themselves. What we

need is just their action on a given state vector v. Krylov subspace
approximations to the exponential and some related functions have been studied by many
authors, see for example [26, 39, 74, 86]. The main idea is to approximately
project the action of the function ϕℓ(z) on a state vector v onto a smaller Krylov
subspace

    Km ≡ span{v, zv, . . . , z^{m−1}v}.

The dimension m of the Krylov subspace is usually much smaller than the
dimension of z. If Vm = [v1, v2, . . . , vm] is an orthonormal basis of Km and
zm is the orthogonal projection of z onto the subspace Km with respect to the
basis Vm, then we can approximate the action of ϕℓ(z) on the vector v by

    ϕℓ(z)v ≈ ||v|| Vm ϕℓ(zm)e1,

where e1 is the first unit vector in Rm. The main advantage of the above
formula is that, instead of working with the original large space, we work with
a projection of much smaller dimension; thus the cost of computing the
expression ||v|| Vm ϕℓ(zm)e1 is usually much smaller than the cost needed to
compute ϕℓ(z)v. In addition, when the linear part L of the equation (1.1) arises
from a spatial discretization of an elliptic operator, it is possible to speed up
the iterative process by using a preconditioned iteration, see [100].
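A minimal sketch of this idea (in pure Python, for illustration only) builds the Arnoldi basis, forms the small projected matrix and applies ϕ1 to it; here the Krylov dimension is taken equal to the full dimension, so the result should agree with ϕ1(z)v up to roundoff:

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def arnoldi(A, v, m):
    # orthonormal basis V of K_m = span{v, Av, ..., A^{m-1}v} and the
    # projected upper Hessenberg matrix H = V^T A V
    beta = math.sqrt(sum(x * x for x in v))
    V = [[x / beta for x in v]]
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(A, V[j])
        for i in range(j + 1):
            H[i][j] = sum(a * b for a, b in zip(V[i], w))
            w = [x - H[i][j] * y for x, y in zip(w, V[i])]
        if j + 1 < m:
            h = math.sqrt(sum(x * x for x in w))
            H[j + 1][j] = h
            V.append([x / h for x in w])
    return V, H, beta

def phi1_dense(H, terms=30):
    # phi_1(H) = sum_{k>=0} H^k/(k+1)! on the small projected matrix
    m = len(H)
    P = [[float(i == j) for j in range(m)] for i in range(m)]
    S = [row[:] for row in P]                      # k = 0 term
    for k in range(1, terms):
        P = matmul(H, P)
        c = 1.0 / math.factorial(k + 1)
        for i in range(m):
            for j in range(m):
                S[i][j] += c * P[i][j]
    return S

# phi_1(A)v via the Krylov formula phi_1(A)v ~ ||v|| V_m phi_1(H_m) e1;
# with m equal to the full dimension the projection is exact up to roundoff
A = [[-1.0, 0, 0, 0], [0, -2.0, 0, 0], [0, 0, -3.0, 0], [0, 0, 0, -4.0]]
v = [1.0, 1.0, 1.0, 1.0]
m = 4
V, H, beta = arnoldi(A, v, m)
col = [row[0] for row in phi1_dense(H)]            # phi_1(H_m) e1
approx = [beta * sum(col[i] * V[i][j] for i in range(m)) for j in range(len(v))]
exact = [math.expm1(a) / a for a in (-1.0, -2.0, -3.0, -4.0)]
print(max(abs(x - y) for x, y in zip(approx, exact)))
```

In practice m is much smaller than the dimension of z, ϕ1(zm) is evaluated by one of the dense methods above, and the iteration is stopped using a residual-type estimate.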

10 Special methods
In this section, we aim to briefly mention (and direct the reader to the appro-
priate references) exponential integrators, which are not our primary concern in
this article, but have played an important role in providing efficient numerical
solutions.
Much of the recent effort in the construction of exponential integrators has
been directed at the numerical solution of highly oscillatory problems. There
have been several recent papers on second order problems of the form

    y′′(t) = −Ly(t) + N(y(t)),    y(tn−1) = yn−1,    y′(tn−1) = y′n−1,

where L is a symmetric and positive semi-definite real matrix with arbitrarily
large norm. Such problems produce oscillatory solutions, and the aim of the
exponential integrators in this situation is to evaluate the right hand side
of the differential equation only a few times over several periods of the fastest
oscillation. Such methods can be effectively used for the numerical solution of
problems from astrophysics and molecular dynamics, among others.
The variation of constants formula is again the starting point for the
construction of exponential integrators for second order problems. To use the variation
of constants formula (5.1) we express the second order problem as a system of
two first order problems, with Ω = L^{1/2},

    [ y(t)  ]   [  cos((t − tn−1)Ω)     Ω⁻¹ sin((t − tn−1)Ω) ] [ yn−1  ]
    [ y′(t) ] = [ −Ω sin((t − tn−1)Ω)   cos((t − tn−1)Ω)     ] [ y′n−1 ]

                    + ∫_{tn−1}^{t} [ Ω⁻¹ sin((t − τ)Ω) ]
                                   [ cos((t − τ)Ω)     ] N(y(τ)) dτ.

If we approximate the nonlinear term N(y(τ)) by Nn and compute the exact
solution, the resulting method, once the velocity components have been
eliminated, reads

    yn+1 − 2 cos(hΩ)yn + yn−1 = h² sinc²((1/2)hΩ)Nn.        (10.1)

This method (for scalar equations) was discovered by Gautschi [28] in 1961. In
that paper multistep type methods were constructed, which produce the exact
solution if the problem admits a trigonometric polynomial solution of sufficiently
low degree. Note that the method of Gautschi reduces to the well known Störmer–
Verlet method when L = 0. The starting values yn and y′n are computed by
replacing N(y(τ)) by N(yn−1), from which one can conclude that both the
solution and derivative are exact when N(y(t)) is constant. Deuflhard [20]
constructed a similar method by approximating the integral term in the variation
of constants formula using the trapezoidal rule, which leads to

    yn+1 − 2 cos(hΩ)yn + yn−1 = h² sinc(hΩ)Nn.

Both the methods of Gautschi and Deuflhard suffer from resonance problems
when the eigenvalues of hΩ are integer multiples of π. To overcome this
problem, Hochbruck and Lubich [41] use a filter function ψ, such that ψ(0) = 1 and
ψ(k²π²) = 0, for k = 1, 2, . . ., that is

    Nn = N(ψ(h²L)yn).        (10.2)

Hochbruck and Lubich [41], and Grimm [31, 32] (for the situation when L is a
function of t and y), prove that (10.1) with Nn defined by (10.2) is a second
order integrator, independent of the product of the stepsize with the frequencies.
A method with two force evaluations was proposed by Hairer and Lubich
[34], which provides the correct slow energy exchange between stiff components
for the Fermi–Pasta–Ulam problem. We refer to the book by Hairer, Lubich and
Wanner [36] for a thorough review of integrators for highly oscillatory
differential equations, covering the various filter functions used and which methods
preserve geometric properties of the differential equation.
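The trigonometric exactness of these schemes is easy to observe for a scalar free oscillation. The sketch below (illustrative only; scalar Ω = ω and N ≡ 0) runs the two-step recurrence (10.1) with a stepsize for which hω = 25, far beyond what a Taylor-based method could resolve, and still reproduces cos(ωt) to roundoff:

```python
import math

def gautschi_linear(omega, h, y0, y1, steps):
    # two-step recurrence (10.1) with N = 0 and scalar Omega = omega:
    # y_{n+1} = 2 cos(h*omega) y_n - y_{n-1}
    yprev, y = y0, y1
    for _ in range(steps):
        yprev, y = y, 2.0 * math.cos(h * omega) * y - yprev
    return y

omega, h = 50.0, 0.5        # h*omega = 25: no resolution restriction
y0 = 1.0                    # exact solution is y(t) = cos(omega t)
y1 = math.cos(omega * h)    # exact value at t = h as second starting value
n = 20
yn = gautschi_linear(omega, h, y0, y1, n - 1)   # approximation at t = n*h
print(abs(yn - math.cos(omega * n * h)))
```

Exactness relies on the identity cos((n+1)θ) + cos((n−1)θ) = 2 cos(θ) cos(nθ); for nonzero N the method is only second order, subject to the resonance issues discussed above.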
Another important application where exponential integrators have proven
to be extremely competitive is Schrödinger equations with time dependent
Hamiltonian

    ψ′(t) = −iH(t)ψ(t),    H(t) = U + V (t).        (10.3)
We have already discussed integrators which use the fact that the Hamiltonian
can be split in this way, and which only use the exponential and related functions of U.
Several methods exist which exponentiate the full Hamiltonian, for example
the so-called exponential midpoint scheme

    ψn = exp(−ihH(tn−1/2))ψn−1,

which is proved by Hochbruck and Lubich [40] to have second order behaviour
independent of the smoothness of the solution. The exponential midpoint rule
relies on the fact that the exponential of a large matrix can be computed effi-
ciently. We cite the review article of Lubich [68] (and references within), which

addresses this issue for various methods for problems of the form (10.3). Jahnke
and Lubich, in a series of papers [47, 48, 49], construct numerical methods for a
singularly perturbed Schrödinger equation

    ψ′(t) = −(i/ε)H(t)ψ(t),    H(t) = U + V (t),    (0 < ε ≪ 1),

allowing stepsizes h > ε, which is not possible, due to accuracy constraints,
for the exponential midpoint rule. The method of construction relies on the
transformation to adiabatic variables

    Q(t)ᵀψ(t) = exp(−(i/ε)ϕ(t))ν(t),    ϕ(t) = ∫_{tn−1}^{t} Λ(τ) dτ,

where the Hamiltonian is diagonalized as H(t) = Q(t)Λ(t)Q(t)ᵀ. The
integrator is applied to the transformed differential equation in the adiabatic variables.
We do not give the numerical methods here, due to excess notation, but refer
directly to [47] for a complete treatment. This transformation to adiabatic
variables was also used by Lorenz, Jahnke and Lubich [66], where the adiabatic
midpoint and adiabatic Magnus methods are developed for the problem

    y′′(t) = −(1/ε²)L(t)y(t) + (1/ε²)N(t),    y(tn−1) = yn−1,    y′(tn−1) = y′n−1.
Both the adiabatic midpoint and adiabatic Magnus methods are of order two,
independent of ε and the stepsize h used by the integrators.
Recently, in [29, 98], the exponential midpoint rule was also combined with
a second-order Magnus method [70] to find the solution of the non-autonomous
semi-linear parabolic problem

    y′(t) = L(t)y(t) + N(y(t)),    y(tn−1) = yn−1.

The stability and convergence properties of the resulting exponential integrator

    yn = e^{hL(tn−1+h/2)} yn−1 + hϕ1(hL(tn−1+h/2)) N(y(tn−1+h/2)),

were studied using the framework of sectorial operators and analytic
semigroups. It was found that, under reasonable smoothness assumptions on the
data and the exact solution, this method achieves the desired order without
imposing unnatural restrictions on the stepsize. It is worth mentioning that
the above exponential integrator can also be derived in the framework of Lie
group methods, with affine algebra action, by choosing to freeze the vector field
at the point tn−1 + h/2. In the Lie group methods literature, this method is
often called the exponential Lie–Euler method. A similar type of analysis for
the same method applied to quasi-linear parabolic problems was performed in
[30].
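For scalar problems the resulting step is a one-liner. The sketch below is an illustration only: it evaluates N at the previous approximation rather than at the midpoint value appearing in the formula above, and freezes the scalar linear part at the midpoint. With N = 0 and a coefficient linear in t, one step integrates the exponent exactly, since midpoint quadrature is exact for linear integrands:

```python
import math

def phi1(z):
    # scalar phi_1 with a series fallback for tiny |z|
    return math.expm1(z) / z if abs(z) > 1e-12 else 1.0 + z / 2.0

def exp_midpoint_step(lam, N, t, y, h):
    # freeze the scalar linear part at the midpoint t + h/2:
    # y_n = e^{h lam_mid} y_{n-1} + h phi1(h lam_mid) N(y_{n-1})
    z = h * lam(t + 0.5 * h)
    return math.exp(z) * y + h * phi1(z) * N(y)

# sanity check with N = 0 and lam linear in t: the product of the step
# factors equals exp of the exact integral of lam over [0, 1]
lam = lambda t: -1.0 + 2.0 * t
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = exp_midpoint_step(lam, lambda u: 0.0, t, y, h)
    t += h
exact = math.exp(-1.0 + 1.0)     # integral of lam over [0, 1] is 0
print(abs(y - exact))
```

For nonzero N and a genuinely time dependent L the scheme is second order under the smoothness assumptions discussed above.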
Finally in this section, we mention the Matlab code exp4, which is described
in [42]. This code is the first actual implementation of an exponential
integrator; prior to this, all implementations used low order approximations
to the exponential and related functions. The method used in the code exp4
is of fourth order and can be regarded as a Rosenbrock-like exponential
integrator. Only the ϕ1(γhf′(yn−1)) function is used in the method, and its action
on a vector is evaluated using a Krylov subspace approximation. A Padé (6,6)
approximation with a scaling and squaring technique is used to evaluate the
resulting ϕ1 function. The implementation uses variable stepsizes and was
compared against well known standard solvers, such as a Matlab implementation of
radau5, as well as ode45 and ode15s from the Matlab ODE suite. The code exp4
performed very well on stiff and highly oscillatory problems, despite the lack of
a preconditioner.

11 Numerical experiments
In this section, we compare six exponential integrators on five well known PDEs.
The methods under consideration are: lawson4, the Runge–Kutta–Lawson
method (4.1) with classical order four Runge–Kutta coefficients; hochost4, the
stiff order four ETD method of Hochbruck and Ostermann (8.3); abnorsett4,
the order four Adams–Bashforth–Nørsett method, see Table 5.1; genlawson43,
the GL method GL3/cRK4; rkmk4t, the RKMK method given in (7.6); and cfree4,
the CF method given in (7.7). We also include cranknicolson, the Crank–
Nicolson scheme, which is the most commonly used integrator for such problems.
We believe these methods to be a fair representation of the methods available.
The ϕ-functions are evaluated using (6,6) Padé approximations with a
scaling and squaring technique. All experiments use a Matlab package described in
[4], which can be downloaded from http://www.math.ntnu.no/num/expint/.
For the sake of convenience we briefly include a description of each of the five
PDEs under consideration.
Problem 1. The Kuramoto–Sivashinsky equation with periodic boundary
conditions (see [53])

    yt = −yxx − yxxxx − yyx,    x ∈ [0, 32π],
    y(x, 0) = cos(x/16)(1 + sin(x/16)).

This is a semi-linear parabolic problem, and we discretize in space using a 128-
point Fourier spectral discretization. The resulting ODE is integrated over the
interval t = 0 to t = 65. Due to the periodic boundary conditions the matrix
L is diagonal. The fourth order term makes this problem extremely stiff, with
rapid linear decay of the high wave number modes.
Problem 2. The Burgers equation with periodic boundary conditions (see [53])

    yt = λyxx − (1/2)(y²)x,    x ∈ [−π, π],
    y(x, 0) = exp(−10 sin²(x/2)),

where λ = 0.03. We use a 128-point Fourier spectral discretization, and the
resulting ODE is integrated over one period t = [0, 1]. The stiffness in this
problem comes from the term λyxx, which results in rapid oscillations of the
high wave number modes.
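With a Fourier spectral discretization the matrix L is diagonal, and a whole exponential integrator step costs only componentwise scalar operations. The sketch below is illustrative only: it shows the simplest member of the family, the first order exponential (ETD) Euler method, rather than one of the fourth order schemes compared here (and uses a real diagonal for simplicity; complex Fourier symbols work the same with cmath):

```python
import math

def phi1(z):
    return math.expm1(z) / z if abs(z) > 1e-12 else 1.0 + z / 2.0

def etd_euler_step(Ldiag, N, y, h):
    # exponential (ETD) Euler: y_{n+1} = e^{hL} y_n + h phi1(hL) N(y_n);
    # with diagonal L the matrix functions reduce to scalar evaluations
    Ny = N(y)
    return [math.exp(h * l) * yi + h * phi1(h * l) * ni
            for l, yi, ni in zip(Ldiag, y, Ny)]

# sanity check: for constant N the step solves y' = Ly + N exactly,
# however stiff the diagonal entries are
L = [-1.0, -10.0, -100.0]
y0 = [1.0, 1.0, 1.0]
h = 0.5
y1 = etd_euler_step(L, lambda y: [2.0, 2.0, 2.0], y0, h)
exact = [math.exp(h * l) * yi + 2.0 * math.expm1(h * l) / l
         for l, yi in zip(L, y0)]
print(max(abs(a - b) for a, b in zip(y1, exact)))
```

The higher order methods in the comparison replace the single ϕ1 term by stage-dependent combinations of ϕℓ, but the per-step cost structure for diagonal L is the same.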

Problem 3. The Allen–Cahn equation with constant Dirichlet boundary
conditions (see [53])

    yt = λyxx + y − y³,    x ∈ [−1, 1],
    y(x, 0) = 0.53x + 0.47 sin(−1.5πx),    y(−1, t) = −1,    y(1, t) = 1,

where λ = 0.001. A 50-point Chebyshev spectral discretization is used, and the
ODE is then integrated from t = 0 to t = 3. To satisfy the boundary conditions,
we define y = w + x and work with homogeneous boundary conditions in the w
variable. In this problem the matrix L is full, and the computations are more
demanding.

Problem 4. A semi-linear parabolic problem of Hochbruck and Ostermann
with homogeneous Dirichlet boundary conditions (see [44])

    yt = yxx + 1/(1 + y²) + Φ,    x ∈ [0, 1],

where Φ is chosen so that the exact solution is y(x, t) = x(1 − x)e^t. The problem
is discretized in space using a 64-point standard finite difference scheme. The
resulting ODE is integrated from t = 0 to t = 1.

Problem 5. The nonlinear Schrödinger equation with periodic boundary
conditions (see [3])

    iyt = −yxx + V (x)y + λ|y|²y,
    y(x, 0) = exp(sin(2x)),

where the nonlinear constant λ = 1 and the potential V has a regularity of two.
We use a 256-point Fourier spectral discretization and integrate in time from
t = 0 to t = 1. The stiffness in this problem comes from the term yxx, which
results in rapid oscillations of the high wave number modes.

[Log–log plot: global error against timestep h for abnorsett4, lawson4, hochost4, genlawson43, rkmk4t, cfree4 and cranknicolson.]

Figure 1: Relative stepsize vs global error for Kuramoto–Sivashinsky equation.

[Log–log plot: global error against timestep h for the same seven integrators.]

Figure 2: Relative stepsize vs global error for Burgers equation.

[Log–log plot: global error against timestep h for the same seven integrators.]

Figure 3: Relative stepsize vs global error for Allen–Cahn equation.

[Log–log plot: global error against timestep h for the same seven integrators.]

Figure 4: Relative stepsize vs global error for Hochbruck–Ostermann equation.

[Log–log plot: global error against timestep h for the same seven integrators.]

Figure 5: Relative stepsize vs global error for nonlinear Schrödinger equation.

The Kuramoto–Sivashinsky, Allen–Cahn and Hochbruck–Ostermann
problems are all parabolic, and therefore the stiff order theory briefly discussed in
Section 8 applies. The observed error in the Hochbruck–Ostermann problem is
exactly as the theory predicts. We see that abnorsett4 has six orders of
magnitude improvement over the lawson4 method. The Kuramoto–Sivashinsky
problem is implemented using periodic boundary conditions and we see that all
methods achieve their non-stiff order. What is surprising is that for the Allen–
Cahn equation, which does not have periodic boundary conditions, all methods
except lawson4 and rkmk4t achieve the non-stiff order. The rkmk4t method
performs badly on all problems that do not have periodic boundary conditions.
The Burgers and nonlinear Schrödinger equations are hyperbolic problems. The
Burgers equation is implemented with periodic boundary conditions, and all
methods perform similarly and achieve the non-stiff order. On the other hand,
for the nonlinear Schrödinger equation the non-smooth potential has a significant
effect on the order achieved. The lawson4 method produces large oscillations in
the numerical approximation, but achieves an order of around 1.5. The Lie group
solvers also produce erratic behaviour, of order around 2.25. The three stiff
order four methods abnorsett4, genlawson43 and hochost4 seem to achieve
order four for most of the computation. This reduces as the stepsize reduces;
whether this is caused by roundoff is unclear. It has been shown that for the
nonlinear Schrödinger equation the Lawson methods are much less sensitive to
non-smooth initial conditions, see [3]. Understanding exactly why the Lawson
methods perform better in this situation is worth further investigation. We note
that the integrator used in most computations of this sort is the Crank–Nicolson
scheme, and we point out that in all experiments the exponential integrators
perform significantly better.
Over all problems we see that the leading contenders are the methods
abnorsett4, genlawson43 and hochost4, which are the only methods with full
stiff order four. The first two methods have stiff stage order equal to stiff order.
This can only be achieved by passing more information than just the solution
approximation from step to step, but methods of this type are likely to be the
best contenders for an efficient implementation. The loss of stability observed
for the genlawson43 method on the Kuramoto–Sivashinsky equation can be
rectified by a slight modification of the output approximation. This modification
is described in [79], where it is shown that the modified methods have improved
stability and a significant improvement in accuracy. Even though we have not
included these methods in our experiments, we promote the modified generalized
Lawson methods as the integrator of choice. We note that in all the experiments
roundoff errors start affecting accuracy generally around the 1e−10 accuracy
level. It is likely that this loss of accuracy can be overcome using compensated
summation.

12 Discussion
In this article we have attempted to provide a partial history of exponential
integrators for the numerical solution of first order semi-linear problems.
These methods date back to around the early sixties and have been developed
not only by numerical analysts but also by mathematicians studying, for example,
pattern formation, who end up with systems of equations of the form (1.1)
to solve. Chemists and physicists, among others, have also independently
developed exponential integrators. One of the major difficulties we found when
working on this paper was trying to find the appropriate references; searching
databases is difficult when a particular method has almost ten different names,
as do the exponential time differencing methods. It is for this reason that
we do not claim to have a complete list of references, and we see why these
methods, which are very natural, have been reproduced so often.
Exponential integrators, and approximations to them, encompass a huge
variety of methods; we have therefore restricted the focus of this paper mainly
to semi-linear problems of first order. Traditionally, such problems have been
the focus of most of the research. In doing so, we have constructed a unified
framework for representing the methods, known as exponential general linear
methods, which facilitates a clearer understanding of the similarities and
differences between methods. The Lawson, exponential time differencing, generalized
Lawson and the Lie group methods with affine action all fit into this class of
methods. The non-stiff order theory for these problems has been generalized to
the class of exponential general linear methods using B-series. We have chosen
to include this not because it gives the correct order conditions for problems
when L has arbitrarily large norm, but rather because such conditions have been
constructed by several authors in the last few years, only to find in the
review process that they were first derived for Runge–Kutta methods in the late
seventies and early eighties.
The research into this paper has led us to various areas in which we
believe future research would be useful. Developing convergence results for semi-
linear hyperbolic problems, as was done for semi-linear parabolic problems by
Hochbruck and Ostermann [44], is essential. In many numerical experiments
the order achieved by certain methods is greater than the stiff order of the
method, and often equal to the non-stiff order. Well-known examples include
the Allen–Cahn, Kuramoto–Sivashinsky, KdV and nonlinear Schrödinger
equations. Determining exactly when one can expect a higher order of convergence
than the stiff order predicts would make constructing methods for
particular problems easier. Most exponential integrators are implemented in a fixed
stepsize regime; this allows an explicit implementation once the appropriate
preprocessing has been done before the integration begins. A major concern
with exponential integrators is whether they will be efficient when variable
stepsizes are allowed, as this requires the exponential and related functions to
be recomputed each time the stepsize is changed. This is likely not to be a
problem when the matrix L is diagonal, generally the result of a Fourier
spectral discretization, but more evident in the case when L is a full matrix,
resulting from a Chebyshev or finite difference discretization. In such situations
Krylov approximations are likely to provide the most efficient methods. The
functions which arise naturally from the variation of constants formula, the so-
called ϕ-functions, are needed to satisfy the stiff order conditions. Is it possible
to construct integrators with high stiff stage order which also use other
functions? If so, what are the properties these functions need to satisfy? What is
evident throughout the literature is that the observed order is closely related to
high stage order; the situation is no different for exponential integrators. This
motivates the need to look for exponential integrators with high stage order.
Are there traditional integrators with high stage order which can be
generalized to the exponential setting? Another interesting question, which needs
further investigation, is how to construct higher order exponential integrators
for non-autonomous semi-linear and quasi-linear problems which are different
from those arising from the framework of Lie group methods.

Acknowledgments
We are grateful to the numerical analysis research groups in Geneva, Trondheim
and Tübingen for the many valuable discussions we have had while working
on the contents of this paper.

References
[1] U. M. Ascher, S. J. Ruuth, and B. T. R. Wetton. Implicit-explicit methods
for time dependent partial differential equations. SIAM J. Numer. Anal.,
32(3):797–823, 1995.

[2] H. Berland, B. Owren, and B. Skaflestad. B-series and order conditions for
exponential integrators. Technical Report 5/04, The Norwegian Institute
of Science and Technology, 2004. http://www.math.ntnu.no/preprint/.

[3] H. Berland and B. Skaflestad. Solving the nonlinear Schrödinger


equation using exponential integrators. Technical Report 3/05,
The Norwegian Institute of Science and Technology, 2005.
http://www.math.ntnu.no/preprint/.

37
[4] H. Berland, B. Skaflestad, and W. Wright. Expint - a Matlab package for
exponential integrators. Technical Report 4/05, The Norwegian Institute
of Science and Technology, 2005. http://www.math.ntnu.no/preprint/.

[5] G. Beylkin, J. M. Keiser, and L. Vozovoi. A new class of time discretiza-


tion schemes for the solution of nonlinear PDEs. J. of Comp. Phys.,
147:362–387, 1998.

[6] J. P. Boyd. Chebyshev and Fourier spectral methods. Dover, New York,
2001.

[7] J. Bruder, K. Strehmel, and R. Weiner. Partitioned adaptive Runge–


Kutta methods for the solution of non-stiff and stiff systems. Numer.
Math., 52:621–638, 1988.

[8] J. C. Butcher. On the convergence of numerical solutions to ordinary


differential equations. Math. Comp., 20:1–10, 1966.

[9] J. C. Butcher. The effective order of Runge-Kutta methods. Lecture Notes


Math., 109:133–139, 1969.

[10] J. C. Butcher. Numerical methods for ordinary differential equations.


John Wiley & Sons, 2003.

[11] M. P. Calvo and C. Palencia. A class of multistep exponential integrators


for semilinear problems. Submitted, 2005.

[12] C. Canuto, M. Y. Hussaini, A. Quateroni, and T. A. Zang. Spectral


methods in fluid dynamics. Springer Verlag, 1988.

[13] E. Celledoni. Eulerian and semi-Lagrangian schemes based on commuta-


tor free exponential integators. To appear, 2005.

[14] E. Celledoni, A. Marthinsen, and B. Owren. Commutator-free Lie group


methods. FGCS, 19(3):341–352, 2003.

[15] J. Certaine. The solution of ordinary differential equations with large time
constants. In Mathematical methods for digital computers, pages 128–132.
Wiley, New York, 1960.

[16] S. M. Cox and P. C. Matthews. Exponential time differencing for stiff


systems. J. Comput. Phys., 176(2):430–455, 2002.

[17] P. E. Crouch and R. Grossman. Numerical integration of ordinary differ-


ential equations on manifolds. J. Nonlinear Sci., 3:1–33, 1993.

[18] C. F. Curtiss and J. O. Hirschfelder. Integration of stiff equations.


Proc. Nat. Acad. Sci., 38:235–243, 1952.

[19] P. Davies and N. Higham. A Schur–Parlett algorithm for computing


matrix functions. SIAM J. Matrix Anal. Applic., 25(2):464–485, 2003.

38
[20] P. Deuflhard. A study of extrapolation methods based on multistep schemes without parasitic solutions. Z. Angew. Math. Phys., 30:177–189, 1979.

[21] W. S. Edwards, L. S. Tuckerman, R. A. Friesner, and D. C. Sorensen. Krylov methods for the incompressible Navier–Stokes equations. J. Comput. Phys., 110:82–102, 1994.

[22] B. L. Ehle and J. D. Lawson. Generalized Runge–Kutta processes for stiff initial-value problems. J. Inst. Maths. Applics., 16:11–21, 1975.

[23] L. N. G. Filon. On a quadrature formula for trigonometric integrals. Proc. Roy. Soc. Edinburgh, 49:38–47, 1928–1929.

[24] A. Friedli. Verallgemeinerte Runge–Kutta Verfahren zur Lösung steifer Differentialgleichungssysteme. In Numerical treatment of differential equations, pages 35–50. Lecture Notes in Math., Vol. 631. Springer, Berlin, 1978.

[25] R. A. Friesner, L. S. Tuckerman, B. C. Dornblaser, and T. V. Russo. A method for the exponential propagation of large stiff nonlinear differential equations. J. Sci. Comput., 4(4):327–354, 1989.

[26] E. Gallopoulos and Y. Saad. Efficient solution of parabolic equations by Krylov approximation methods. SIAM J. Sci. Statist. Comput., 13:1236–1264, 1992.

[27] B. García-Archilla. Some practical experience with the time integration of dissipative equations. J. Comput. Phys., 122:25–29, 1995.

[28] W. Gautschi. Numerical integration of ordinary differential equations based on trigonometric polynomials. Numer. Math., 3:381–397, 1961.

[29] C. González, A. Ostermann, and M. Thalhammer. A second-order Magnus integrator for non-autonomous parabolic problems. To appear in J. Comput. Appl. Math., 2005.

[30] C. González and M. Thalhammer. A second-order Magnus type integrator for quasilinear parabolic problems. Submitted to Math. Comp., 2004.

[31] V. Grimm. On error bounds for Gautschi-type exponential integrators applied to oscillatory second-order differential equations. Numer. Math., 100(1):71–89, 2005.

[32] V. Grimm. Exponentielle Integratoren als Lange-Zeitschritt-Verfahren für oszillatorische Differentialgleichungen zweiter Ordnung. PhD thesis, University of Düsseldorf, 2002.

[33] E. Hairer, G. Bader, and C. Lubich. On the stability of semi-implicit methods for ordinary differential equations. BIT, 22:211–232, 1982.
[34] E. Hairer and C. Lubich. Long-time energy conservation of numerical methods for oscillatory differential equations. SIAM J. Numer. Anal., 38(2):414–441, 2000.

[35] E. Hairer and G. Wanner. Solving Ordinary Differential Equations. II, volume 14 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, second edition, 1996. Stiff and differential-algebraic problems.

[36] E. Hairer, C. Lubich, and G. Wanner. Geometric Numerical Integration, volume 31 of Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2002. Structure-preserving algorithms for ordinary differential equations.

[37] N. J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, Philadelphia, 1996.

[38] M. Hochbruck. Exponential integrators. Workshop on Exponential Integrators, Innsbruck, October 20–23, 2004.

[39] M. Hochbruck and C. Lubich. On Krylov subspace approximations to the matrix exponential operator. SIAM J. Numer. Anal., 34(5):1911–1925, 1997.

[40] M. Hochbruck and C. Lubich. Exponential integrators for quantum-classical molecular dynamics. BIT, 39(4):620–645, 1999.

[41] M. Hochbruck and C. Lubich. A Gautschi-type method for oscillatory second order differential equations. Numer. Math., 83:403–426, 1999.

[42] M. Hochbruck, C. Lubich, and H. Selhofer. Exponential integrators for large systems of differential equations. SIAM J. Sci. Comput., 19(5):1552–1574, 1998.

[43] M. Hochbruck and A. Ostermann. Exponential Runge–Kutta methods for parabolic problems. Appl. Numer. Math., 53:323–339, 2005.

[44] M. Hochbruck and A. Ostermann. Explicit exponential Runge–Kutta methods for semilinear parabolic problems. SIAM J. Numer. Anal., to appear 2005.

[45] R. Holland. Finite-difference time-domain (FDTD) analysis of magnetic diffusion. IEEE Trans. Elect. Comp., 36(1):32–39, 1994.

[46] A. Iserles, H. Z. Munthe-Kaas, S. P. Nørsett, and A. Zanna. Lie-group methods. Acta Numerica, 9:215–365, 2000.

[47] T. Jahnke. Numerische Verfahren für fast adiabatische Quantendynamik. PhD thesis, University of Tübingen, 2003. In German.

[48] T. Jahnke. Long-time-step integrators for almost adiabatic quantum dynamics. SIAM J. Sci. Comput., 25(6):2145–2164, 2004.
[49] T. Jahnke and C. Lubich. Numerical integrators for quantum dynamics close to the adiabatic limit. Numer. Math., 94:289–314, 2003.

[50] R. K. Jain. Some A-stable methods for stiff ordinary differential equations. Math. Comp., 26:71–77, 1972.

[51] F. Jauberteau, C. Rosier, and R. Temam. A nonlinear Galerkin method for the Navier–Stokes equations. Comput. Methods Appl. Mech. Engrg., 80:245–260, 1990.

[52] A.-K. Kassam. High Order Timestepping for Stiff Semilinear Partial Differential Equations. PhD thesis, University of Oxford, 2004.

[53] A.-K. Kassam and L. N. Trefethen. Fourth-order time-stepping for stiff PDEs. SIAM J. Sci. Comput., 26(4):1214–1233, 2005.

[54] J. Kim and P. Moin. Application of a fractional-step method to incompressible Navier–Stokes equations. J. Comput. Phys., 59:308–325, 1985.

[55] S. Koikari. Bicolored tree analysis and order conditions of ETD Runge–Kutta methods. http://www16.ocn.ne.jp/~koikari/, 2005.

[56] S. Koikari. Rooted tree analysis of Runge–Kutta methods with exact treatment of linear terms. J. Comput. Appl. Math., 177:427–453, 2005.

[57] S. Krogstad. RKMK-related methods for stiff PDEs. Technical report, University of Bergen, 2003. http://www.ii.uib.no/~stein/.

[58] S. Krogstad. Generalized integrating factor methods for stiff PDEs. J. Comput. Phys., 203(1):72–88, 2005.

[59] A. Kværnø. Error behaviour of exponential Runge–Kutta methods. Technical Report 7/04, The Norwegian University of Science and Technology, 2004. http://www.math.ntnu.no/preprint/.

[60] J. D. Lambert. Computational methods in ordinary differential equations. John Wiley & Sons, 1973.

[61] J. D. Lambert and S. T. Sigurdsson. Multistep methods with variable matrix coefficients. SIAM J. Numer. Anal., 9:715–733, 1972.

[62] J. D. Lawson. Generalized Runge–Kutta processes for stable systems with large Lipschitz constants. SIAM J. Numer. Anal., 4:372–380, 1967.

[63] W. Liniger and R. A. Willoughby. Efficient integration methods for stiff systems of ordinary differential equations. SIAM J. Numer. Anal., 7:47–66, 1970.

[64] E. Lodden. Geometric integration of the heat equation. Master's thesis, University of Bergen, 2000.

[65] G. Lord and J. Rougemont. A numerical scheme for stochastic PDEs with Gevrey regularity. IMA J. Numer. Anal., 24(4):587–604, 2004.
[66] K. Lorenz, T. Jahnke, and C. Lubich. Adiabatic integrators for highly oscillatory second order linear differential equations with time-varying eigendecomposition. Technical report, University of Tübingen, 2005.

[67] Y. Y. Lu. Computing a matrix function for exponential integrators. J. Comput. Appl. Math., 161(1):203–216, 2003.

[68] C. Lubich. Integrators for quantum dynamics: a numerical analyst's brief review. In Quantum Simulations of Many-Body Systems: From Theory to Algorithms, pages 459–466. NIC series, 2002.

[69] Y. Maday, A. T. Patera, and E. M. Rønquist. An operator-integration-factor splitting method for time-dependent problems: application to incompressible fluid flow. J. Sci. Comput., 5(4):263–292, 1990.

[70] W. Magnus. On the exponential solution of differential equations for a linear operator. Comm. Pure and Appl. Math., VII:649–673, 1954.

[71] P. A. Milewski and E. G. Tabak. A pseudospectral procedure for the solution of nonlinear wave equations with examples from free surface flows. SIAM J. Sci. Comput., 21(3):1102–1114, 1999.

[72] B. Minchev. Exponential integration for semi-linear problems. PhD thesis, University of Bergen, 2004.

[73] C. B. Moler and C. F. van Loan. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Review, 45(1):3–49, 2003.

[74] I. Moret and P. Novati. A rational Krylov method for solving time-periodic differential equations. Preprint http://univaq.it/~novati/, 2004.

[75] D. R. Mott, E. S. Oran, and B. van Leer. A quasi-steady-state solver for stiff ordinary differential equations of reaction kinetics. J. Comput. Phys., 164:407–428, 2000.

[76] H. Munthe-Kaas. High order Runge–Kutta methods on manifolds. In Proceedings of the NSF/CBMS Regional Conference on Numerical Analysis of Hamiltonian Differential Equations (Golden, CO, 1997), volume 29,1, pages 115–127, 1999.

[77] S. P. Nørsett. An A-stable modification of the Adams–Bashforth methods. In Conf. on Numerical Solution of Differential Equations (Dundee, 1969), pages 214–219. Springer, Berlin, 1969.

[78] S. P. Nørsett. Numerisk integrasjon av stive likninger. Master's thesis, University of Oslo, 1969.

[79] A. Ostermann, M. Thalhammer, and W. Wright. More on generalized Lawson methods. In preparation, 2005.
[80] B. Owren and A. Marthinsen. Runge–Kutta methods adapted to manifolds and based on rigid frames. BIT, 39(1):116–142, 1999.

[81] O. A. Palusinski and J. V. Wait. Simulation methods for combined linear and nonlinear systems. Simulation, pages 85–94, 1978.

[82] P. G. Petropoulos. Analysis of exponential time-differencing for FDTD in lossy dielectrics. IEEE Trans. Antennas Propagation, 45:1054–1057, 1997.

[83] D. A. Pope. An exponential method of numerical integration of ordinary differential equations. Comm. ACM, 6:491–493, 1963.

[84] D. T. Pratt. Exponential-fitted methods for stiff ordinary differential equations. In Numerical mathematics and applications (Oslo, 1985), IMACS Trans. Sci. Comput.—85, I, pages 145–151. North-Holland, Amsterdam, 1986.

[85] H. H. Rosenbrock. Some general implicit processes for the numerical solution of differential equations. Comput. J., 5:329–330, 1962/1963.

[86] Y. Saad. Krylov subspace methods for solving large unsymmetric linear systems. Math. Comp., 37:105–126, 1981.

[87] M. Sato, K. Kawabata, and J. E. Hansen. A fast invariant imbedding method for multiple scattering calculations and an application to equivalent widths of CO2 lines of Venus. Astrophys. J., 216:947–962, 1977.

[88] M. Schatzman. Toward non-commutative numerical analysis: high order integration in time. J. Sci. Comput., 17(1–4):99–116, 2002.

[89] C. Schuster, A. Christ, and W. Fichtner. Review of FDTD time-stepping for efficient simulation of electric conductive media. Microwave Optical Technol. Lett., 25:16–21, 2000.

[90] L. M. Smith and F. Waleffe. Generation of slow large scales in forced rotating stratified turbulence. J. Fluid Mech., 451:145–169, 2002.

[91] T. Steihaug and A. Wolfbrandt. An attempt to avoid exact Jacobian and nonlinear equations in the numerical solution of stiff differential equations. Math. Comp., 33(146):521–534, 1979.

[92] K. Strehmel and R. Weiner. Behandlung steifer Anfangswertprobleme gewöhnlicher Differentialgleichungen mit adaptiven Runge–Kutta-Methoden. Computing, 29(2):153–165, 1982.

[93] K. Strehmel and R. Weiner. Partitioned adaptive Runge–Kutta methods and their stability. Numer. Math., 45(2):283–300, 1984.

[94] K. Strehmel and R. Weiner. B-convergence results for linearly implicit one step methods. BIT, 27:264–281, 1987.
[95] K. Strehmel, R. Weiner, and I. Dannehl. A study of B-convergence of linearly implicit Runge–Kutta methods. Computing, 40(3):241–253, 1988.

[96] A. Suslowicz. Application of numerical Lie group integrators to parabolic PDEs. Technical Report 13/01, University of Bergen, 2001.

[97] A. Taflove. Computational electrodynamics: The finite-difference time-domain method. Artech House, 1995.

[98] M. Thalhammer. A second-order Magnus type integrator for non-autonomous semilinear parabolic problems. Submitted to IMA J. Numer. Anal., 2004.

[99] L. N. Trefethen. Spectral Methods in MATLAB. SIAM, Philadelphia, 2000.

[100] J. van den Eshof and M. Hochbruck. Preconditioning Lanczos approximations to the matrix exponential. Technical report, University of Düsseldorf, 2004. http://www.am.uni-duesseldorf.de/~marlis/.

[101] P. J. van der Houwen. Construction of integration formulas for initial value problems. North-Holland Publishing Co., Amsterdam, 1977. North-Holland Series in Applied Mathematics and Mechanics, Vol. 19.

[102] P. J. van der Houwen and J. G. Verwer. Generalized linear multistep methods. I. Development of algorithms with zero-parasitic roots. Mathematisch Centrum, Amsterdam, 1974. Mathematisch Centrum, Afdeling Numerieke Wiskunde NW 10/74.

[103] J. G. Verwer. On generalized linear multistep methods with zero-parasitic roots and an adaptive principal root. Numer. Math., 27:143–155, 1976.

[104] J. G. Verwer. S-stability properties for generalized Runge–Kutta methods. Numer. Math., 27:359–370, 1976.

[105] J. G. Verwer and M. van Loon. An evaluation of explicit pseudo-steady-state approximation schemes for stiff ODE systems from chemical kinetics. J. Comput. Phys., 113:347–352, 1994.

[106] R. Weiner, B. A. Schmitt, and H. Podhaisky. ROWMAP—a ROW-code with Krylov techniques for large stiff ODEs. Appl. Numer. Math., 25(2-3):303–319, 1997.

[107] A. Wolfbrandt. A study of Rosenbrock processes with respect to order conditions and stiff stability. PhD thesis, University of Technology, Gothenburg, 1977.
