
Appendix E:

Numerical Stability
and Other Topics

Three topics of great importance in the numerical integration of differential
equations are error propagation, stability, and convergence of the solutions. Two
types of stability considerations enter into the solution of ordinary differential
equations: inherent stability (or instability) of the analytical solution and numerical
stability (or instability). Inherent stability is determined by the mathematical
formulation of the problem and is dependent on the eigenvalues of the Jacobian
matrix of the differential equations, as was shown in Section 7.6. On the other hand,
numerical stability is a function of the error propagation in the numerical integration
method. The behavior of the error propagation depends on the values of the
characteristic roots of the difference equations that yield the numerical solution. In
this Appendix, we systematically examine the error propagation and stability of
several numerical integration methods and suggest ways of reducing these errors by
the appropriate choice of step size and integration algorithm.


E.1 Stability of the Euler Methods


Consider the initial-value differential equation in the linear form:
dy/dt = λy (E.1)
where the initial condition is given as

y ( t0 ) = y0 (E.2)

Furthermore, assume that λ is real and y0 is finite. The analytical solution of this
differential equation
y(t) = y0 e^(λt) (E.3)

is inherently stable for λ < 0, that is:

lim(t→∞) y(t) = 0 (E.4)

Consider now the stability of the numerical solution of this problem, obtained
using the explicit Euler method, and momentarily ignore the truncation and roundoff
errors. Applying Eq. (7.25), we obtain the recurrence equation:
yn+1 = yn + hλ yn (E.5)

which is, after rearrangement, the following first-order homogeneous difference
equation:

yn+1 − (1 + hλ) yn = 0 (E.6)

Using the definition of the shift operator, E yn = yn+1, we obtain

E yn − (1 + hλ) yn = 0 (E.7)

which yields the characteristic equation

E − (1 + hλ ) = 0 (E.8)

whose root is
µ1 = (1 + hλ ) (E.9)

From this, we obtain the solution of the difference equation (E.6) as

yn = C (1 + hλ)^n (E.10)

The constant C is calculated from the initial condition, at t = t0:

n = 0:  yn = y0 = C (E.11)

Therefore, the final form of the solution is

yn = y0 (1 + hλ)^n (E.12)

The differential equation is an initial-value problem; therefore, n can increase without
bound. Because the solution yn is a function of (1 + hλ)^n, its behavior is determined
by the value of (1 + hλ).
A numerical solution is said to be absolutely stable if

lim(n→∞) yn = 0 (E.13)

The numerical solution of the differential equation (E.1) using the explicit Euler
method is absolutely stable if

|1 + hλ| ≤ 1 (E.14)

Because (1 + hλ) is the root of the characteristic equation (E.8), an alternative
definition of absolute stability is

|µi| ≤ 1,  i = 1, 2, …, k (E.15)

where more than one root exists in the multi-step numerical methods.
The inequality (E.14) is equivalent to:

−2 ≤ hλ ≤ 0 (E.16)

which sets the limits of the integration step size for a stable solution as follows:
Because h is positive, λ < 0 and

h ≤ 2/|λ| (E.17)

Inequality (E.17) is a finite general stability boundary, and the explicit Euler
method is called conditionally stable. Any method with an infinite general stability
boundary is called unconditionally stable.
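
For illustration, the following MATLAB sketch (the values of λ and h are assumptions chosen for illustration, not taken from the text) integrates Eq. (E.1) with the recurrence (E.5) using one step size on each side of the boundary (E.17):

% Explicit Euler for dy/dt = lambda*y, y(0) = 1 (illustrative values).
% Stability boundary (E.17): h <= 2/|lambda| = 0.2 for lambda = -10.
lambda = -10;
for h = [0.15 0.25]                 % first stable, second unstable
    y = 1;
    for k = 1:round(2/h)            % march to t = 2
        y = y + h*lambda*y;         % recurrence (E.5)
    end
    fprintf('h = %.2f, |1 + h*lambda| = %.2f, y(2) = %g\n', ...
        h, abs(1 + h*lambda), y)
end

For h = 0.15 the computed values decay toward zero; for h = 0.25 the root 1 + hλ = −1.5 lies outside the unit circle and the iterates oscillate with growing amplitude.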

At the outset, we assumed that λ was real in order to simplify the derivation.
This assumption is not necessary; λ can be a complex number. A solution is stable,
converging with damped oscillations, when complex roots are present (α ± βi) and
the moduli of the roots, r = (α² + β²)^(1/2), are less than or equal to unity:

r ≤ 1 (E.18)

Inequalities (E.16) and (E.18) describe a circle of unit radius in the complex
plane. Fig. E.1 shows the regions of numerical stability for the Euler and
Runge-Kutta methods. The set of values of hλ inside the circle yields stable
numerical solutions of Eq. (E.1) using the Euler and, equivalently, the first-order
Runge-Kutta integration method.

Figure E.1 Stability regions in the complex plane for Runge-Kutta methods of order 1
(explicit Euler), 2, 3, 4, and 5.
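
The curves of Fig. E.1 can be reproduced numerically; a minimal MATLAB sketch (grid limits and resolution are arbitrary choices) evaluates the characteristic roots (E.9) and (E.50)–(E.54) for orders 1 through 4 over a grid of complex hλ values and draws the contours |µ1| = 1:

% Trace the stability boundaries |mu1| = 1 of Fig. E.1 for orders 1-4.
[xr, xi] = meshgrid(-4:0.02:1, -4:0.02:4);
z = xr + 1i*xi;                                 % z stands for h*lambda
mu = {@(z) 1 + z, ...                           % order 1, Eq. (E.9)
      @(z) 1 + z + z.^2/2, ...                  % order 2, Eq. (E.50)
      @(z) 1 + z + z.^2/2 + z.^3/6, ...         % order 3, Eq. (E.52)
      @(z) 1 + z + z.^2/2 + z.^3/6 + z.^4/24};  % order 4, Eq. (E.54)
figure, hold on
for k = 1:4
    contour(xr, xi, abs(mu{k}(z)), [1 1])       % stability boundary
end
axis equal, xlabel('Re(h\lambda)'), ylabel('Im(h\lambda)')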

We now return to the consideration of the truncation and roundoff errors of the
Euler method. The propagation of these errors in the numerical solution will be
represented by a difference equation. Beginning with the nonlinear form of the
initial-value problem:
dy/dt = f(t, y) (E.19)
where the initial condition is given by

y ( t0 ) = y0 (E.20)

we define the accumulated error of the numerical solution at step ( n + 1) as

εn+1 = yn+1 − y(tn+1) (E.21)

where y(tn+1) is the exact value of y, and yn+1 is the calculated value of y at tn+1. We
then write the exact solution, y(tn+1), as a Taylor series expansion, retaining as many
terms as needed for the Euler method:

y(tn+1) = y(tn) + h f(tn, y(tn)) + TE,n+1 (E.22)

where TE,n+1 is the local truncation error for step (n + 1). We also write the calculated
value yn+1 obtained from the explicit Euler formula:

yn+1 = yn + h f(tn, yn) + RE,n+1 (E.23)

where RE,n+1 is the roundoff error introduced by the computer in step (n + 1).


Combining Eqs. (E.21), (E.22), and (E.23):

εn+1 = yn − y(tn) + h[f(tn, yn) − f(tn, y(tn))] − TE,n+1 + RE,n+1 (E.24)

which simplifies to:

εn+1 = εn + h[f(tn, yn) − f(tn, y(tn))] − TE,n+1 + RE,n+1 (E.25)

The mean-value theorem:

f(tn, yn) − f(tn, y(tn)) = (∂f/∂y)|(tn, α) [yn − y(tn)],  yn < α < y(tn) (E.26)

can be used to simplify Eq. (E.25) to

εn+1 − [1 + h (∂f/∂y)|(tn, α)] εn = −TE,n+1 + RE,n+1 (E.27)

This is a first-order nonhomogeneous difference equation with varying coefficients,
which can be solved only by iteration. However, by making the following simplifying
assumptions:

TE,n+1 = TE = constant (E.28)

RE,n+1 = RE = constant (E.29)

(∂f/∂y)|(tn, α) = λ = constant (E.30)

Eq. (E.27) simplifies to

εn+1 − (1 + hλ) εn = −TE + RE (E.31)

the solution of which is given by the sum of the homogeneous and particular
solutions:
εn = C1 (1 + hλ)^n + (−TE + RE)/[1 − (1 + hλ)] (E.32)

Comparison of Eqs. (E.6) and (E.31) reveals that the characteristic equations for the
solution yn and the error εn are identical. The truncation and roundoff error terms in
Eq. (E.31) determine the particular solution. The constant C1 is calculated by
assuming that the initial condition of the differential equation has no error; that is,
ε0 = 0. The final form of the equation that describes the behavior of the propagation
error is
εn = [(−TE + RE)/(hλ)] [(1 + hλ)^n − 1] (E.33)

A great deal of insight can be gained by thoroughly examining Eq. (E.33). As
expected, the value of (1 + hλ) is the determining factor in the behavior of the
propagation error. Consider first the case of a fixed finite step size h, with the number
of integration steps increasing to a very large n. The limit on the error as n → ∞ is:
lim(n→∞) εn = (−TE + RE)/(hλ)  for |1 + hλ| < 1 (E.34)

lim(n→∞) εn = ∞  for |1 + hλ| > 1 (E.35)

In the first situation (Eq. (E.34)), λ < 0 and 0 < h < 2/|λ|, so the error is bounded and
the numerical solution is stable. The numerical solution differs from the exact
solution by only the finite quantity (−TE + RE)/(hλ), which is a function of the
truncation error, the roundoff error, the step size, and the eigenvalue of the
differential equation.

In the second situation (Eq. (E.35)), λ > 0 and h > 0, the error is unbounded and
the numerical solution is unstable. For λ > 0, however, the exact solution itself is
inherently unstable. For this reason we introduce the concept of relative error,
defined as:
relative error = εn/yn (E.36)

Utilizing Eqs. (E.12) and (E.33), we obtain the relative error as:

εn/yn = [(−TE + RE)/(y0 hλ)] [1 − 1/(1 + hλ)^n] (E.37)

The relative error is bounded for λ > 0 and unbounded for λ < 0. We therefore
conclude that for inherently stable differential equations the absolute error is the
pertinent criterion for numerical stability, whereas for inherently unstable
differential equations the relative error is the appropriate criterion.
Let us now consider a fixed interval of integration, 0 < t < α, so that

h = α/n (E.38)

and we increase the number of integration steps to a very large n. This, of course,
causes h → 0 .
A numerical method is said to be convergent if

lim(h→0) εn = 0 (E.39)

In the absence of roundoff error, the Euler method, and most other integration
methods, would be convergent because

lim(h→0) TE = 0 (E.40)

therefore, Eq. (E.39) would be true. However, roundoff error is never absent in
numerical calculations.
As h → 0 , the truncation error goes to zero and the roundoff error is the crucial
factor in the propagation of error:

lim(h→0) εn = RE lim(h→0) [(1 + hλ)^n − 1]/(hλ) (E.41)

By L’Hôpital’s rule, the roundoff error propagates unbounded as the number of
integration steps becomes very large:

lim(h→0) εn = RE [∞] (E.42)

This is the “catch-22” of numerical methods: a smaller integration step size reduces
the truncation error but requires more computation, thereby increasing the roundoff
error.
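
This trade-off can be made visible with a short numerical experiment. The sketch below (an illustrative setup; single-precision arithmetic is used deliberately so that roundoff becomes noticeable at practical step sizes) integrates dy/dt = −y to t = 1 with explicit Euler and prints the total error for decreasing h:

% Total error vs. step size for explicit Euler on dy/dt = -y, y(0) = 1.
% Exact value: y(1) = exp(-1). Single precision makes roundoff visible.
for h = 10.^(-1:-1:-6)
    y = single(1);
    g = 1 + single(h)*single(-1);       % growth factor per step
    for k = 1:round(1/h)
        y = g*y;                        % Euler step in single precision
    end
    fprintf('h = %.0e, total error = %.3e\n', h, abs(double(y) - exp(-1)))
end

The printed error first decreases with h (truncation dominates) and then increases again as h shrinks further (roundoff dominates).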
A similar analysis of the implicit Euler method (backward Euler) results in the
following equations. For the solution:

yn+1 = y0/(1 − hλ)^(n+1) (E.43)

and the propagation error:

εn+1 = (1 − hλ) [(−TE + RE)/(hλ)] [1/(1 − hλ)^n − 1] (E.44)

For λ < 0 and 0 < h < ∞, the solution is stable:

lim(n→∞) yn = 0 (E.45)

and the error is bounded:

lim(n→∞) εn = −(1 − hλ) (−TE + RE)/(hλ) (E.46)

No limitation is placed on the step size; therefore, the implicit Euler method is
unconditionally stable for λ < 0. On the other hand, when λ > 0, the root
µ1 = 1/(1 − hλ) satisfies the stability condition only if

|1 − hλ| ≥ 1 (E.47)

which imposes the following limit on the step size:

2 ≤ hλ < ∞ (E.48)

It can be concluded that the implicit Euler method has a wider range of stability
than the explicit Euler method (see Table E.1).
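
A brief sketch (again with illustrative values) confirms this behavior: for λ < 0 the implicit Euler recurrence remains stable even when h far exceeds the explicit limit 2/|λ|.

% Implicit Euler for dy/dt = lambda*y: y_{n+1} = y_n/(1 - h*lambda).
% For lambda < 0, the root mu1 = 1/(1 - h*lambda) has |mu1| < 1 for any h > 0.
lambda = -10;  h = 1.0;              % five times the explicit limit 2/|lambda|
y = 1;
for k = 1:10
    y = y/(1 - h*lambda);
end
fprintf('|mu1| = %.4f, y after 10 steps = %g\n', abs(1/(1 - h*lambda)), y)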

E.2 Stability of the Runge-Kutta Methods


Using methods parallel to those of the previous section, the recurrence equations and
the corresponding roots for the Runge-Kutta methods can be derived (Lapidus and
Seinfeld, 1971). For the differential equation (E.1), these are:

Second-order Runge-Kutta:

yn+1 = (1 + hλ + (1/2)h²λ²) yn (E.49)

µ1 = 1 + hλ + (1/2)h²λ² (E.50)
Third-order Runge-Kutta:

yn+1 = (1 + hλ + (1/2)h²λ² + (1/6)h³λ³) yn (E.51)

µ1 = 1 + hλ + (1/2)h²λ² + (1/6)h³λ³ (E.52)

Fourth-order Runge-Kutta:

yn+1 = (1 + hλ + (1/2)h²λ² + (1/6)h³λ³ + (1/24)h⁴λ⁴) yn (E.53)

µ1 = 1 + hλ + (1/2)h²λ² + (1/6)h³λ³ + (1/24)h⁴λ⁴ (E.54)

Fifth-order Runge-Kutta:

yn+1 = (1 + hλ + (1/2)h²λ² + (1/6)h³λ³ + (1/24)h⁴λ⁴ + (1/120)h⁵λ⁵ + (0.5625/720)h⁶λ⁶) yn (E.55)

µ1 = 1 + hλ + (1/2)h²λ² + (1/6)h³λ³ + (1/24)h⁴λ⁴ + (1/120)h⁵λ⁵ + (0.5625/720)h⁶λ⁶ (E.56)

The last term on the right-hand side of Eqs. (E.55) and (E.56) is specific to the fifth-
order Runge-Kutta formula that appears in Table 5.2 of Constantinides and Mostoufi
(1999); this term varies among different fifth-order formulas.

The condition for absolute stability

|µi| ≤ 1,  i = 1, 2, …, k (E.57)

applies to all the above methods. The absolute real stability boundaries for these
methods are listed in Table E.1, and the regions of stability in the complex plane are
shown in Fig. E.1. In general, as the order increases, so do the stability limits.

Table E.1 Real stability boundaries

Method                                  Boundary
Explicit Euler                          −2 ≤ hλ < 0
Implicit Euler                          0 < h < ∞ for λ < 0;  2 ≤ hλ < ∞ for λ > 0
Modified Euler (predictor-corrector)    −1.077 ≤ hλ < 0
Second-order Runge-Kutta                −2 ≤ hλ < 0
Third-order Runge-Kutta                 −2.5 ≤ hλ < 0
Fourth-order Runge-Kutta                −2.785 ≤ hλ < 0
Fifth-order Runge-Kutta                 −5.7 ≤ hλ < 0
Adams                                   −0.546 ≤ hλ < 0
Adams-Moulton                           −1.285 ≤ hλ < 0
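
The Runge-Kutta entries of Table E.1 can be recovered numerically by scanning hλ along the negative real axis; in the following sketch the scan range and resolution are arbitrary choices:

% Recover the real stability boundaries of Table E.1 for Runge-Kutta
% orders 1-4 by scanning h*lambda on the negative real axis.
mu = {@(x) 1 + x, ...
      @(x) 1 + x + x.^2/2, ...
      @(x) 1 + x + x.^2/2 + x.^3/6, ...
      @(x) 1 + x + x.^2/2 + x.^3/6 + x.^4/24};
x = -6:1e-4:0;
for k = 1:4
    i = find(abs(mu{k}(x)) <= 1, 1);   % most negative stable h*lambda
    fprintf('order %d: h*lambda >= %.3f\n', k, x(i))
end

This prints approximately −2, −2, −2.513, and −2.785, in agreement with Table E.1.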



E.3 Stability of Multistep Methods


Using methods parallel to those of the previous section, the recurrence equations and
the corresponding roots for the modified Euler, Adams, and Adams-Moulton
methods can be derived (Lapidus and Seinfeld, 1971). For the differential equation
(E.1), these are:
Modified Euler (combination of predictor and corrector):

yn+1 = (1 + hλ + h²λ²) yn (E.58)

µ1 = 1 + hλ + h²λ² (E.59)

Adams:

yn+1 = (1 + (23/12)hλ) yn − (4/3)hλ yn−1 + (5/12)hλ yn−2 (E.60)

µ³ − (1 + (23/12)hλ) µ² + (4/3)hλ µ − (5/12)hλ = 0 (E.61)
Adams-Moulton (combination of predictor and corrector):

yn+1 = (1 + (7/6)hλ + (55/64)h²λ²) yn − ((5/24)hλ + (59/64)h²λ²) yn−1
       + ((1/24)hλ + (37/64)h²λ²) yn−2 − (9/64)h²λ² yn−3 (E.62)

µ⁴ − (1 + (7/6)hλ + (55/64)h²λ²) µ³ + ((5/24)hλ + (59/64)h²λ²) µ²
       − ((1/24)hλ + (37/64)h²λ²) µ + (9/64)h²λ² = 0 (E.63)
The condition for absolute stability,

|µi| ≤ 1,  i = 1, 2, …, k (E.57)

applies to all the above methods. The absolute real stability boundaries for these
methods are also listed in Table E.1, and the regions of stability in the complex plane
are shown in Fig. E.2.
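
Because the multistep methods have several characteristic roots, condition (E.57) is conveniently checked with the roots function; a sketch for the Adams polynomial (E.61) follows, with two illustrative values of hλ on either side of the tabulated boundary:

% Absolute stability of the Adams method: all roots of the characteristic
% polynomial (E.61) must satisfy |mu_i| <= 1, per condition (E.57).
for hl = [-0.5 -0.6]                 % inside / outside the boundary -0.546
    p = [1, -(1 + 23/12*hl), 4/3*hl, -5/12*hl];
    fprintf('h*lambda = %.2f: max|mu_i| = %.4f\n', hl, max(abs(roots(p))))
end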

Figure E.2 Stability regions in the complex plane for the modified Euler (Euler
predictor-corrector), Adams, and Adams-Moulton methods.

E.4 Stability of Methods for Partial Differential Equations


In this section, we discuss the stability of finite difference approximations of partial
differential equations using the well-known von Neumann procedure. This method
introduces an initial error represented by a finite Fourier series and examines how this
error propagates during the solution. The von Neumann method applies to initial-value
problems; for this reason it is used to analyze the stability of the explicit method for
parabolic equations developed in Section 8.5.2 and the explicit method for hyperbolic
equations developed in Section 8.5.3.
We define the error εm,n as the difference between the solution of the finite
difference approximation, um,n, and the exact solution of the differential equation,
ūm,n, at step (m, n):

εm,n ≡ um,n − ūm,n (E.64)

The explicit finite difference solution (8.61) of the parabolic partial differential
equation (8.22) can be written for um,n+1 and ūm,n+1 as follows:

um,n+1 = (α∆t/∆x²) um+1,n + (1 − 2α∆t/∆x²) um,n + (α∆t/∆x²) um−1,n + REm,n+1 (E.65)

ūm,n+1 = (α∆t/∆x²) ūm+1,n + (1 − 2α∆t/∆x²) ūm,n + (α∆t/∆x²) ūm−1,n + TEm,n+1 (E.66)

where REm,n+1 and TEm,n+1 are the roundoff and truncation errors, respectively, at step
(m, n + 1). Combining Eqs. (E.64)–(E.66), we obtain

εm,n+1 − (α∆t/∆x²) εm+1,n − (1 − 2α∆t/∆x²) εm,n − (α∆t/∆x²) εm−1,n = REm,n+1 + TEm,n+1 (E.67)

This is a nonhomogeneous finite difference equation in two dimensions, representing
the propagation of error during the numerical solution of the parabolic partial
differential equation (8.22). The solution of this finite difference equation is rather
difficult to obtain. For this reason, the von Neumann analysis considers the
homogeneous part of Eq. (E.67):

εm,n+1 − (α∆t/∆x²) εm+1,n − (1 − 2α∆t/∆x²) εm,n − (α∆t/∆x²) εm−1,n = 0 (E.68)

The above equation represents the propagation of the error introduced at the initial
point only, i.e., at n = 0, and ignores truncation and roundoff errors that enter the
solution at n > 0.
The solution of the homogeneous finite difference equation may be written in
the following separable form:

εm,n = c e^(γn∆t) e^(iβm∆x) (E.69)

where i = √(−1) and c, γ, and β are constants. At n = 0,

εm,0 = c e^(iβm∆x) (E.70)

which is the error at the initial point. Therefore, the term e^(γ∆t) is the amplification
factor of the initial error. In order for the original error not to grow as n increases, the
amplification factor must satisfy the von Neumann condition for stability:

|e^(γ∆t)| ≤ 1 (E.71)

The amplification factor can have complex values. In that case, the modulus of the
complex value must satisfy the above inequality; that is,

r ≤ 1 (E.72)

Therefore, the stability region in the complex plane is a circle of unit radius, as shown in
Fig. E.3.


Figure E.3 Stability region in the complex plane.

The amplification factor is determined by substituting Eq. (E.69) into Eq. (E.68)
and rearranging to obtain:

e^(γ∆t) = (1 − 2α∆t/∆x²) + (α∆t/∆x²)(e^(iβ∆x) + e^(−iβ∆x)) (E.73)

Using the trigonometric identities,

(e^(iβ∆x) + e^(−iβ∆x))/2 = cos(β∆x) (E.74)

and

1 − cos(β∆x) = 2 sin²(β∆x/2) (E.75)

Eq. (E.73) becomes

e^(γ∆t) = 1 − (4α∆t/∆x²) sin²(β∆x/2) (E.76)

Combining this with the von Neumann condition for stability, we obtain the stability
bound:

0 ≤ (α∆t/∆x²) sin²(β∆x/2) ≤ 1/2 (E.77)

The sin²(β∆x/2) term has a maximum value of unity; therefore,

0 ≤ α∆t/∆x² ≤ 1/2 (E.78)

is the limit for conditional stability for this method. It should be noted that this limit is
identical to that obtained by using the positivity rule (Section 8.5.2).
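
A compact MATLAB sketch shows the bound in action; the grid, the end conditions, and the tiny sawtooth perturbation used to seed the fastest-growing mode are illustrative assumptions:

% Explicit method for u_t = alpha*u_xx; end values held fixed.
% Stability limit (E.78): r = alpha*dt/dx^2 <= 1/2.
alpha = 1;  dx = 0.05;  x = (0:dx:1)';
for r = [0.45 0.55]                              % stable / unstable ratio
    dt = r*dx^2/alpha;
    u = sin(pi*x) + 1e-6*(-1).^(0:length(x)-1)'; % smooth + tiny sawtooth
    for n = 1:round(0.3/dt)                      % march to t = 0.3
        u(2:end-1) = r*u(3:end) + (1 - 2*r)*u(2:end-1) + r*u(1:end-2);
    end
    fprintf('alpha*dt/dx^2 = %.2f: max|u| = %.3g\n', r, max(abs(u)))
end

For r = 0.45 the profile decays smoothly; for r = 0.55 the seeded high-frequency mode is amplified at each step and eventually dominates the solution.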
The stability of the explicit solution (8.84) of the hyperbolic equation (8.82)
can be similarly analyzed using the von Neumann method. The homogeneous
equation for the error propagation of that solution is:

εm,n+1 − 2(1 − α²∆t²/∆x²) εm,n − (α²∆t²/∆x²)(εm+1,n + εm−1,n) + εm,n−1 = 0 (E.79)

Substitution of the solution (E.69) into (E.79) and use of the trigonometric identities
(E.74) and (E.75) give the amplification factor as:

e^(γ∆t) = [1 − 2(α²∆t²/∆x²) sin²(β∆x/2)] ± {[1 − 2(α²∆t²/∆x²) sin²(β∆x/2)]² − 1}^(1/2) (E.80)

The above amplification factor satisfies inequality (E.72) in the complex plane,
that is, when

[1 − 2(α²∆t²/∆x²) sin²(β∆x/2)]² − 1 ≤ 0 (E.81)

which converts to the following inequality:

α²∆t²/∆x² ≤ 1/sin²(β∆x/2) (E.82)

The sin²(β∆x/2) term has a maximum value of unity; therefore,

α²∆t²/∆x² ≤ 1 (E.83)

is the conditional stability limit for this method.
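
The two roots in Eq. (E.80) can also be checked directly; in the sketch below the Courant numbers and the worst-case choice sin²(β∆x/2) = 1 are illustrative:

% Moduli of the two amplification factors (E.80) at the worst-case
% wavenumber, sin^2(beta*dx/2) = 1.
for C2 = [0.81 1.21]                 % C2 = alpha^2*dt^2/dx^2, stable / unstable
    a = 1 - 2*C2;                    % bracketed term in (E.80)
    g = a + [1 -1]*sqrt(a^2 - 1);    % complex pair for C2 <= 1, real otherwise
    fprintf('alpha^2*dt^2/dx^2 = %.2f: |g| = %.3f, %.3f\n', C2, abs(g))
end

For C2 ≤ 1 both moduli equal unity (the scheme is neutrally stable); for C2 > 1 one root leaves the unit circle.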


In a similar manner, the stability of other explicit and implicit finite difference
methods may be examined. This has been done by Lapidus and Pinder (1982), who
conclude that “most explicit finite difference approximations are conditionally stable,
whereas most implicit approximations are unconditionally stable.”

E.5 Step Size Control


The discussion of stability analysis in the previous sections relied on the simplifying
assumption that the value of λ remains constant throughout the integration. This is
true for linear equations such as Eq. (E.1); however, for nonlinear equations, the
value of λ may vary considerably over the interval of integration. The minimum
integration step size must be chosen using the maximum possible value of λ, which
will guarantee stability at the expense of computation time. For problems in which
computation time becomes excessive, it is possible to develop strategies for
automatically adjusting the step size at each step of the integration.
A simple test for checking the step size is to do the calculations at each interval
twice: do them once with the full step size, and then repeat the calculations over the
same interval with a smaller step size, usually half of the first one. If, at the end of the
interval, the difference between the predicted values of y by both approaches is less
than a specified convergence criterion, the step size may be increased. Otherwise, a
larger-than-acceptable difference between the two calculated y values indicates that
the step size is too large and should be shortened in order to achieve an acceptable
truncation error.
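
A sketch of this test for the explicit Euler method follows; the right-hand side, the tolerance, and the doubling and halving factors are illustrative assumptions:

% Step-size control by step doubling for explicit Euler (illustrative).
f = @(t, y) -10*y + sin(t);          % hypothetical right-hand side
t = 0;  y = 1;  h = 0.1;  tol = 1e-4;
while t < 5
    y1 = y + h*f(t, y);                      % one full step
    ym = y + (h/2)*f(t, y);                  % two half steps
    y2 = ym + (h/2)*f(t + h/2, ym);
    if abs(y2 - y1) < tol                    % values agree: accept, widen step
        t = t + h;  y = y2;  h = 2*h;
    else                                     % values disagree: halve and retry
        h = h/2;
    end
end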
Another method of controlling the step size is to obtain an estimate of the
truncation error at each interval. A good example of such an approach is the Runge-
Kutta-Fehlberg method (see Table 5.2 of Constantinides and Mostoufi, 1999), which
provides an estimate of the local truncation error. This error estimate can be easily
introduced into the computer program, and you can let the program automatically
change the step size at each point until the desired accuracy is achieved.
As mentioned before, the optimum number of correctors is two. Therefore, when
using a predictor-corrector method, if convergence is achieved before the second
corrected value, the step size may be increased. On the other hand, if convergence is
not achieved after the second application of the corrector, the step size should be
reduced.

E.6 Stiff Differential Equations


In Section E.1, we showed that the stability of the numerical solution of differential
equations depends on the value of hλ, and that λ, together with the stability boundary
of the method, determines the step size of integration. In the case of the linear
differential equation
dy/dt = λy (E.1)
λ is the eigenvalue of that equation, and it remains constant throughout the
integration. The nonlinear differential equation:
dy/dt = f(t, y) (7.11)
can be linearized at each step using the mean-value theorem (E.26), so that λ can be
determined from the partial derivative of the function with respect to y:

λ = (∂f/∂y)|(tn, α) (E.84)
The value of λ is no longer a constant but varies in magnitude at each step of the
integration.
This analysis can be extended to a set of simultaneous nonlinear differential
equations:
dy1/dt = f1(t, y1, y2, …, yn)
dy2/dt = f2(t, y1, y2, …, yn)
⋮
dyn/dt = fn(t, y1, y2, …, yn) (7.60)
Linearizing the set produces the Jacobian matrix

J = [ ∂f1/∂y1 … ∂f1/∂yn
      ⋮             ⋮
      ∂fn/∂y1 … ∂fn/∂yn ] (E.85)

The eigenvalues {λi | i = 1, 2, …, n} of the Jacobian matrix are the determining
factors in the stability analysis of the numerical solution. The step size of integration
is determined by the stability boundary of the method and the maximum eigenvalue.
When the eigenvalues of the Jacobian matrix of the differential equations are
all of the same order of magnitude, no unusual problems arise in integrating the set.
However, when the maximum eigenvalue is several orders of magnitude larger than
the minimum eigenvalue, the equations are said to be stiff. The stiffness ratio (SR) of
such a set is defined as

SR = max(1≤i≤n) |Real(λi)| / min(1≤i≤n) |Real(λi)| (E.86)

The integration step size is determined by the largest eigenvalue, whereas the final
time of integration is usually dictated by the smallest eigenvalue, because the variable
most affected by the smallest eigenvalue changes very slowly. As a result, integrating
stiff differential equations using explicit methods may be very expensive in computer time.
The MATLAB functions ode23s and ode15s are solvers suitable for the
solution of stiff ordinary differential equations (see Table 7.2).
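
As a closing sketch, the stiffness ratio (E.86) can be computed from the eigenvalues of an assumed Jacobian matrix and the system handed to ode15s (the matrix, time span, and initial condition below are illustrative):

% Stiffness ratio (E.86) for a hypothetical linear system dy/dt = A*y.
A = [-1000 0; 1 -1];                         % assumed Jacobian; eigenvalues -1000, -1
lam = eig(A);
SR = max(abs(real(lam)))/min(abs(real(lam)));
fprintf('Stiffness ratio SR = %g\n', SR)     % SR = 1000: the system is stiff
[t, y] = ode15s(@(t, y) A*y, [0 10], [1; 1]);   % stiff solver handles it efficiently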

E.7 References
Constantinides, A. and Mostoufi, N. 1999. Numerical Methods for Chemical
Engineers with MATLAB Applications. Upper Saddle River, NJ: Prentice
Hall PTR.
Lapidus, L. and Seinfeld, J. H. 1971. Numerical Solution of Ordinary Differential
Equations. New York: Academic Press.
Lapidus, L. and Pinder, G. F. 1982. Numerical Solution of Partial Differential
Equations in Science and Engineering. New York: John Wiley & Sons.
