Functions
of Matrices
Functions
of Matrices
Theory and Computation
Nicholas J. Higham
University of Manchester
Manchester, United Kingdom
10 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may
be reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 Market Street, 6th Floor, Philadelphia, PA 19104-2688 USA.
Trademarked names may be used in this book without the inclusion of a trademark symbol.
These names are used in an editorial context only; no infringement of trademark is intended.
QA188.H53 2008
512.9'434--dc22
2007061811
To Françoise
Contents
List of Tables xv
Preface xvii
2 Applications 35
2.1 Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.1.1 Exponential Integrators . . . . . . . . . . . . . . . . . . . . . . 36
2.2 Nuclear Magnetic Resonance . . . . . . . . . . . . . . . . . . . . . . . 37
2.3 Markov Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.4 Control Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.5 The Nonsymmetric Eigenvalue Problem . . . . . . . . . . . . . . . . . 41
2.6 Orthogonalization and the Orthogonal Procrustes Problem . . . . . . 42
2.7 Theoretical Particle Physics . . . . . . . . . . . . . . . . . . . . . . . 43
2.8 Other Matrix Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.9 Nonlinear Matrix Equations . . . . . . . . . . . . . . . . . . . . . . . 44
2.10 Geometric Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.11 Pseudospectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.12 Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3 Conditioning 55
3.1 Condition Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 Properties of the Fréchet Derivative . . . . . . . . . . . . . . . . . . . 57
3.3 Bounding the Condition Number . . . . . . . . . . . . . . . . . . . . 63
3.4 Computing or Estimating the Condition Number . . . . . . . . . . . 64
3.5 Notes and References . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
14 Miscellany 313
14.1 Structured Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
14.1.1 Algebras and Groups . . . . . . . . . . . . . . . . . . . . . . . 313
14.1.2 Monotone Functions . . . . . . . . . . . . . . . . . . . . . . . 315
14.1.3 Other Structures . . . . . . . . . . . . . . . . . . . . . . . . . 315
14.1.4 Data Sparse Representations . . . . . . . . . . . . . . . . . . . 316
14.1.5 Computing Structured f (A) for Structured A . . . . . . . . . 316
14.2 Exponential Decay of Functions of Banded Matrices . . . . . . . . . . 317
14.3 Approximating Entries of Matrix Functions . . . . . . . . . . . . . . . 318
A Notation 319
Bibliography 379
Index 415
List of Figures
3.1 Relative errors in the Frobenius norm for the finite difference approx-
imation (3.22) to the Fréchet derivative. . . . . . . . . . . . . . . . . 68
6.1 The cardioid (6.45), shaded, together with the unit circle . . . . . . . 157
7.1 Convergence of the Newton iteration (7.6) for a pth root of unity . . 179
7.2 Regions of a ∈ C for which the inverse Newton iteration (7.15) con-
verges to a−1/p . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
9.1 Normwise relative errors for funm mod and condrel (exp, A)u. . . . . . 230
12.1 Normwise relative errors for Algorithm 12.6, MATLAB’s funm, and
Algorithm 12.7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
12.2 Same data as in Figure 12.1 presented as a performance profile. . . . 297
12.3 Normwise relative errors for Algorithm 12.6, Algorithm 12.7, Algo-
rithm 12.8, funm, and sine obtained as shifted cosine from Algo-
rithm 12.6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
List of Tables
5.1 Iteration functions f_{ℓm} from the Padé family (5.27). . . . . . . . . . . 116
5.2 Number of iterations for scaled Newton iteration. . . . . . . . . . . . . . 125
5.3 Newton iteration with spectral scaling for Jordan block J(2) ∈ R^{16×16}. . 126
5.4 Newton iteration with determinantal scaling for random A ∈ R^{16×16} with κ_2(A) = 10^{10}. . . 126
5.5 Newton iteration with determinantal scaling for random A ∈ R^{16×16} with real eigenvalues parametrized by d. . . 127
11.1 Maximal values θ_m of ‖X‖ such that the bound (11.19) ensures ‖r_m(X) − log(I + X)‖ does not exceed u = 2^{−53}, along with upper bound (11.20) for κ(q_m(X)) and upper bound (11.21) for φ_m, both with ‖X‖ = θ_m. . . 277
B.1 Constants α_{pq} such that ‖A‖_p ≤ α_{pq}‖A‖_q, A ∈ C^{m×n}. . . . . . . . . 327
Functions of matrices have been studied for as long as matrix algebra itself. Indeed,
in his seminal A Memoir on the Theory of Matrices (1858), Cayley investigated the
square root of a matrix, and it was not long before definitions of f (A) for general f
were proposed by Sylvester and others. From their origin in pure mathematics, ma-
trix functions have broadened into a subject of study in applied mathematics, with
widespread applications in science and engineering. Research on matrix functions in-
volves matrix theory, numerical analysis, approximation theory, and the development
of algorithms and software, so it employs a wide range of theory and methods and
promotes an appreciation of all these important topics.
My first foray into f (A) was as a graduate student when I became interested in
the matrix square root. I have worked on matrix functions on and off ever since.
Although there is a large literature on the subject, including chapters in several
books (notably Gantmacher [203, ], Horn and Johnson [296, ], Lancaster
and Tismenetsky [371, ], and Golub and Van Loan [224, ]), there has not
previously been a book devoted to matrix functions. I started to write this book in
2003. In the intervening period interest in matrix functions has grown significantly,
with new applications appearing and the literature expanding at a fast rate, so the
appearance of this book is timely.
This book is a research monograph that aims to give a reasonably complete treat-
ment of the theory of matrix functions and numerical methods for computing them,
as well as an overview of applications. The theory of matrix functions is beautiful and
nontrivial. I have strived for an elegant presentation with illuminating examples, em-
phasizing results of practical interest. I focus on three equivalent definitions of f (A),
based on the Jordan canonical form, polynomial interpolation, and the Cauchy inte-
gral formula, and use all three to develop the theory. A thorough treatment is given
of problem sensitivity, based on the Fréchet derivative. The applications described
include both the well known and the more speculative or recent, and differential
equations and algebraic Riccati equations underlie many of them.
The bulk of the book is concerned with numerical methods and the associated
issues of accuracy, stability, and computational cost. Both general purpose methods
and methods for specific functions are covered. Little mention is made of methods
that are numerically unstable or have exorbitant operation counts of order n4 or
higher; many methods proposed in the literature are ruled out for at least one of
these reasons.
The focus is on theory and methods for general matrices, but a brief introduction
to functions of structured matrices is given in Section 14.1. The problem of computing
a function of a matrix times a vector, f (A)b, is of growing importance, though as yet
numerical methods are relatively undeveloped; Chapter 13 is devoted to this topic.
One of the pleasures of writing this book has been to explore the many connec-
tions between matrix functions and other subjects, particularly matrix analysis and
numerical analysis in general. These connections range from the expected, such as
divided differences, the Kronecker product, and unitarily invariant norms, to the un-
expected, which include the Mandelbrot set, the geometric mean, partial isometries,
and the role of the Fréchet derivative beyond measuring problem sensitivity.
I have endeavoured to make this book more than just a monograph about matrix
functions, and so it includes many useful or interesting facts, results, tricks, and
techniques that have a (sometimes indirect) f (A) connection. In particular, the book
contains a substantial amount of matrix theory, as well as many historical references,
some of which appear not to have previously been known to researchers in the area.
I hope that the book will be found useful as a source of statements and applications
of results in matrix analysis and numerical linear algebra, as well as a reference on
matrix functions.
Four main themes pervade the book.
Role of the sign function. The matrix sign function has fundamental theoretical
and algorithmic connections with the matrix square root, the polar decomposition,
and, to a lesser extent, matrix pth roots. For example, a large class of iterations for
the matrix square root can be obtained from corresponding iterations for the matrix
sign function, and Newton’s method for the matrix square root is mathematically
equivalent to Newton’s method for the matrix sign function.
Stability. The stability of iterations for matrix functions can be effectively defined
and analyzed in terms of power boundedness of the Fréchet derivative of the iteration
function at the solution. Unlike some earlier, more ad hoc analyses, no assumptions
are required on the underlying matrix. General results (Theorems 4.18 and 4.19)
simplify the analysis for idempotent functions such as the matrix sign function and
the unitary polar factor.
Schur decomposition and Parlett recurrence. The use of a Schur decomposition
followed by reordering and application of the block form of the Parlett recurrence
yields a powerful general algorithm, with f -dependence restricted to the evaluation
of f on the diagonal blocks of the Schur form.
Padé approximation. For transcendental functions the use of Padé approximants,
in conjunction with an appropriate scaling technique that brings the matrix argument
close to the origin, yields an effective class of algorithms whose computational building
blocks are typically just matrix multiplication and the solution of multiple right-hand
side linear systems. Part of the success of this approach rests on the several ways
in which rational functions can be evaluated at a matrix argument, which gives the
scope to find a good compromise between speed and stability.
In addition to surveying, unifying, and sometimes improving existing results and
algorithms, this book contains new results. Some of particular note are as follows.
• Theorem 1.35, which relates f (αIm + AB) to f (αIn + BA) for A ∈ Cm×n and
B ∈ Cn×m and is an analogue for general matrix functions of the Sherman–
Morrison–Woodbury formula for the matrix inverse.
• Theorem 4.15, which shows that convergence of a scalar iteration implies con-
vergence of the corresponding matrix iteration when applied to a Jordan block,
under suitable assumptions. This result is useful when the matrix iteration
can be block diagonalized using the Jordan canonical form of the underlying
matrix, A. Nevertheless, we show in the context of Newton’s method for the
matrix square root that analysis via the Jordan canonical form of A does not
always give the strongest possible convergence result. In this case a stronger
result, Theorem 6.9, is obtained essentially by reducing the convergence analysis
to the consideration of the behaviour of the powers of a certain matrix.
• Theorems 5.13 and 8.19 on the stability of essentially all iterations for the ma-
trix sign function and the unitary polar factor, and the general results in The-
orems 4.18 and 4.19 on which these are based.
• Theorems 6.14–6.16 on the convergence of the binomial, Pulay, and Visser iter-
ations for the matrix square root.
• An improved Schur–Parlett algorithm for the matrix logarithm, given in Sec-
tion 11.6, which makes use of improved implementations of the inverse scaling
and squaring method in Section 11.5.
The Audience
The book’s main audience is specialists in numerical analysis and applied linear al-
gebra, but it will be of use to anyone who wishes to know something of the theory
of matrix functions and state of the art methods for computing them. Much of the
book can be understood with only a basic grounding in numerical analysis and linear
algebra.
Acknowledgments
A number of people have influenced my thinking about matrix functions. Discussions
with Ralph Byers in 1984, when he was working on the matrix sign function and I was
investigating the polar decomposition, first made me aware of connections between
these two important tools. The work on the matrix exponential of Cleve Moler and
Charlie Van Loan has been a frequent source of inspiration. Beresford Parlett’s ideas
on the exploitation of the Schur form and the adroit use of divided differences have
been a guiding light. Charles Kenney and Alan Laub’s many contributions to the
matrix function arena have been important in my own research and are reported on
many pages of this book. Finally, Nick Trefethen has shown me the importance of the
Cauchy integral formula and has offered valuable comments on drafts at all stages.
I am grateful to several other people for providing valuable help, suggestions, or
advice during the writing of the book:
Rafik Alam, Awad Al-Mohy, Zhaojun Bai, Timo Betcke, Rajendra Bhatia,
Tony Crilly, Philip Davies, Oliver Ernst, Andreas Frommer, Chun-Hua Guo,
Gareth Hargreaves, Des Higham, Roger Horn, Bruno Iannazzo, Ilse Ipsen,
Peter Lancaster, Jörg Liesen, Lijing Lin, Steve Mackey, Roy Mathias,
Volker Mehrmann, Thomas Schmelzer, Gil Strang, Françoise Tisseur, and
Andre Weideman.
Working with the SIAM staff on the publication of this book has been a pleasure. I
thank, in particular, Elizabeth Greenspan (acquisitions), Sara Murphy (acquisitions),
Lois Sellers (design), and Kelly Thomas (copy editing).
Research leading to this book has been supported by the Engineering and Physical
Sciences Research Council, The Royal Society, and the Wolfson Foundation.
In this first chapter we give a concise treatment of the theory of matrix functions,
concentrating on those aspects that are most useful in the development of algorithms.
Most of the results in this chapter are for general functions. Results specific to
particular functions can be found in later chapters devoted to those functions.
1.1. Introduction
The term “function of a matrix” can have several different meanings. In this book we
are interested in a definition that takes a scalar function f and a matrix A ∈ Cn×n
and specifies f (A) to be a matrix of the same dimensions as A; it does so in a way
that provides a useful generalization of the function of a scalar variable f (z), z ∈ C.
Other interpretations of f (A) that are not our focus here are as follows:
• Elementwise operations on matrices, for example sin A = (sin aij ). These oper-
ations are available in some programming languages. For example, Fortran 95
supports “elemental operations” [423, ], and most of MATLAB’s elemen-
tary and special functions are applied in an elementwise fashion when given
matrix arguments. However, elementwise operations do not integrate well with
matrix algebra, as is clear from the fact that the elementwise square of A is not
equal to the matrix product of A with itself. (Nevertheless, the elementwise
product of two matrices, known as the Hadamard product or Schur product, is
a useful concept [294, ], [296, , Chap. 5].)
• Functions producing a scalar result, such as the trace, the determinant, the
spectral radius, the condition number κ(A) = kAk kA−1 k, and one particular
generalization to matrix arguments of the hypergeometric function [359, ].
• Functions mapping Cn×n to Cm×m that do not stem from a scalar function.
Examples include matrix polynomials with matrix coefficients, the matrix trans-
pose, the adjugate (or adjoint) matrix, compound matrices comprising minors
of a given matrix, and factors from matrix factorizations. However, as a special
case, the polar factors of a matrix are treated in Chapter 8.
Before giving formal definitions, we offer some motivating remarks. When f (t)
is a polynomial or rational function with scalar coefficients and a scalar argument,
t, it is natural to define f (A) by substituting A for t, replacing division by matrix
inversion (provided that the matrices to be inverted are nonsingular), and replacing
1 by the identity matrix. Then, for example,
$$f(t) = \frac{1+t^2}{1-t} \;\Rightarrow\; f(A) = (I-A)^{-1}(I+A^2) \quad \text{if } 1 \notin \Lambda(A).$$
Here, Λ(A) denotes the set of eigenvalues of A (the spectrum of A). Note that rational
functions of a matrix commute, so it does not matter whether we write (I − A)−1 (I +
A2 ) or (I + A2 )(I − A)−1 . If f has a convergent power series representation, such as
$$\log(1+t) = t - \frac{t^2}{2} + \frac{t^3}{3} - \frac{t^4}{4} + \cdots, \qquad |t| < 1,$$

we can again simply substitute A for t to define

$$\log(I+A) = A - \frac{A^2}{2} + \frac{A^3}{3} - \frac{A^4}{4} + \cdots, \qquad \rho(A) < 1. \tag{1.1}$$
Here, ρ denotes the spectral radius and the condition ρ(A) < 1 ensures convergence of
the matrix series (see Theorem 4.7). In this ad hoc fashion, a wide variety of matrix
functions can be defined. However, this approach has several drawbacks:
$$Z^{-1}AZ = J = \mathrm{diag}(J_1, J_2, \ldots, J_p), \tag{1.2a}$$

$$J_k = J_k(\lambda_k) = \begin{bmatrix} \lambda_k & 1 & & \\ & \lambda_k & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_k \end{bmatrix} \in \mathbb{C}^{m_k\times m_k}, \tag{1.2b}$$
Definition 1.2 (matrix function via Jordan canonical form). Let f be defined on
the spectrum of A ∈ C^{n×n} and let A have the Jordan canonical form (1.2). Then

$$f(A) := Z f(J) Z^{-1} = Z\,\mathrm{diag}\bigl(f(J_k)\bigr)\,Z^{-1}, \tag{1.3}$$

where

$$f(J_k) := \begin{bmatrix} f(\lambda_k) & f'(\lambda_k) & \cdots & \dfrac{f^{(m_k-1)}(\lambda_k)}{(m_k-1)!} \\ & f(\lambda_k) & \ddots & \vdots \\ & & \ddots & f'(\lambda_k) \\ & & & f(\lambda_k) \end{bmatrix}. \tag{1.4}$$
A simple example illustrates the definition. For the Jordan block J = $\left[\begin{smallmatrix} 1/2 & 1 \\ 0 & 1/2 \end{smallmatrix}\right]$ and f(x) = x³, (1.4) gives

$$f(J) = \begin{bmatrix} f(1/2) & f'(1/2) \\ 0 & f(1/2) \end{bmatrix} = \begin{bmatrix} 1/8 & 3/4 \\ 0 & 1/8 \end{bmatrix},$$
[371, , Chap. 9]. Note that the values depend not just on the eigenvalues but also on the maximal
Jordan block sizes ni .
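For a quick numerical illustration, the following Python/NumPy sketch (an illustrative check, not a prescribed implementation) compares the right-hand side of (1.4) for this 2 × 2 Jordan block with the directly computed cube of J:

```python
import numpy as np

J = np.array([[0.5, 1.0],
              [0.0, 0.5]])

f  = lambda x: x**3        # f(x) = x^3
fp = lambda x: 3 * x**2    # f'(x) = 3x^2

# (1.4) for a 2 x 2 Jordan block: [[f(lam), f'(lam)], [0, f(lam)]]
F = np.array([[f(0.5), fp(0.5)],
              [0.0,    f(0.5)]])

print(F)                              # [[0.125, 0.75], [0, 0.125]]
print(np.linalg.matrix_power(J, 3))   # the same matrix: J^3 agrees with f(J) from (1.4)
```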
Finally, we explain how (1.4) can be obtained from Taylor series considerations.
In (1.2b) write Jk = λk I +Nk ∈ Cmk ×mk , where Nk is zero except for a superdiagonal
of 1s. Note that for mk = 3 we have
$$N_k = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad N_k^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad N_k^3 = 0.$$
In general, powering Nk causes the superdiagonal of 1s to move a diagonal at a time
towards the top right-hand corner, until at the m_k th power it disappears: N_k^{m_k} = 0;
so Nk is nilpotent. Assume that f has a convergent Taylor series expansion
$$f(t) = f(\lambda_k) + f'(\lambda_k)(t-\lambda_k) + \cdots + \frac{f^{(j)}(\lambda_k)(t-\lambda_k)^j}{j!} + \cdots.$$

On substituting J_k ∈ C^{m_k×m_k} for t we obtain the finite series

$$f(J_k) = f(\lambda_k)I + f'(\lambda_k)N_k + \cdots + \frac{f^{(m_k-1)}(\lambda_k)N_k^{m_k-1}}{(m_k-1)!}, \tag{1.5}$$
since all powers of Nk from the mk th onwards are zero. This expression is easily seen
to agree with (1.4). An alternative derivation of (1.5) that does not rest on a Taylor
series is given in the next section.
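To illustrate (1.5) numerically, here is a small Python/SciPy sketch (added for illustration) with f = exp, so that every derivative at λ_k equals e^{λ_k}; the finite series is compared with a scaling-and-squaring evaluation of the exponential:

```python
import numpy as np
from scipy.linalg import expm

lam, m = 0.5, 3
N = np.diag(np.ones(m - 1), 1)        # nilpotent part: superdiagonal of 1s, N^m = 0
J = lam * np.eye(m) + N               # Jordan block J_k = lam*I + N

# Finite Taylor series (1.5) with f = exp: exp(lam) * (I + N + N^2/2!)
F = np.exp(lam) * (np.eye(m) + N + N @ N / 2.0)

print(np.linalg.norm(F - expm(J)))    # ~ 1e-16: the finite series reproduces exp(J)
```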
Definition 1.2 requires the function f to take well-defined values on the spectrum
of A—including values associated with derivatives, where appropriate. Thus in the
case of functions such as √t and log t it is implicit that a single branch has been
chosen in (1.4). Moreover, if an eigenvalue occurs in more than one Jordan block
then the same choice of branch must be made in each block. If the latter requirement
is violated then a nonprimary matrix function is obtained, as discussed in Section 1.4.
Theorem 1.3. For polynomials p and q and A ∈ Cn×n , p(A) = q(A) if and only if
p and q take the same values on the spectrum of A.
Proof. Suppose that two polynomials p and q satisfy p(A) = q(A). Then d = p−q
is zero at A so is divisible by the minimal polynomial ψ. In other words, d takes only
the value zero on the spectrum of A, that is, p and q take the same values on the
spectrum of A.
Conversely, suppose p and q take the same values on the spectrum of A. Then
d = p − q is zero on the spectrum of A and so must be divisible by the minimal
polynomial ψ, in view of (1.6). Hence d = ψr for some polynomial r, and since
d(A) = ψ(A)r(A) = 0, it follows that p(A) = q(A).
Thus it is a property of polynomials that the matrix p(A) is completely determined
by the values of p on the spectrum of A. It is natural to generalize this property to
arbitrary functions and define f (A) in such a way that f (A) is completely determined
by the values of f on the spectrum of A.
where φ_i(t) = f(t)/∏_{j≠i}(t − λ_j)^{n_j}. For a matrix with distinct eigenvalues (n_i ≡ 1, s = n) this formula reduces to the familiar Lagrange form

$$p(t) = \sum_{i=1}^{n} f(\lambda_i)\,\ell_i(t), \qquad \ell_i(t) = \prod_{\substack{j=1 \\ j\neq i}}^{n} \frac{t-\lambda_j}{\lambda_i-\lambda_j}. \tag{1.9}$$
where m = deg ψ and the set {x_i}_{i=1}^m comprises the distinct eigenvalues λ_1, . . . , λ_s with λ_i having multiplicity n_i. Here the f[. . .] denote divided differences, which are defined in Section B.16. Another polynomial q for which f(A) = q(A) is given by (1.10) with m = n and {x_i}_{i=1}^n the set of all n eigenvalues of A:
Remark 1.8. If f is given by a power series, Definition 1.4 says that f (A) is never-
theless expressible as a polynomial in A of degree at most n−1. Another way to arrive
at this conclusion is as follows. The Cayley–Hamilton theorem says that any matrix
satisfies its own characteristic equation: q(A) = 0, where q(t) = det(tI − A) is the
characteristic polynomial. This theorem follows immediately from the fact that the
minimal polynomial ψ divides q (see Problem 1.18 for another proof). Hence the nth
power of A, and inductively all higher powers, are expressible as a linear combination
of I, A, . . . , An−1 . Thus any power series in A can be reduced to a polynomial in A
of degree at most n − 1. This polynomial is rarely of an elegant form or of practical
interest; exceptions are given in (1.16) and Problem 10.13.
with a real coefficient matrix and right-hand side. We conclude that r has real
coefficients and hence f (A) = p(A) is real when A is real. This argument extends to
real n × n matrices under the stated condition on f . As a particular example, we can
conclude that if A is real and nonsingular with no eigenvalues on the negative real
axis then A has a real square root and a real logarithm. For a full characterization
of the existence of real square roots and logarithms see Theorem 1.23. Equivalent
conditions to f (A) being real for real A when f is analytic are given in Theorem 1.18.
Remark 1.10. We can derive directly from Definition 1.4 the formula (1.4) for a
function of the Jordan block Jk in (1.2). It suffices to note that the interpolation
conditions are p(j) (λk ) = f (j) (λk ), j = 0: mk − 1, so that the required Hermite
interpolating polynomial is
and then to evaluate p(Jk ), making use of the properties of the powers of Nk noted
in the previous section (cf. (1.5)).
The integrand contains the resolvent, (zI − A)−1 , which is defined on Γ since Γ
is disjoint from the spectrum of A.
This definition leads to short proofs of certain theoretical results and has the
advantage that it can be generalized to operators.
Theorem 1.12. Definition 1.2 (Jordan canonical form) and Definition 1.4 (Hermite
interpolation) are equivalent. If f is analytic then Definition 1.11 (Cauchy integral )
is equivalent to Definitions 1.2 and 1.4.
Proof. Definition 1.4 says that f (A) = p(A) for a Hermite interpolating poly-
nomial p satisfying (1.7). If A has the Jordan form (1.2) then f (A) = p(A) =
p(ZJZ −1 ) = Zp(J)Z −1 = Z diag(p(Jk ))Z −1 , just from elementary properties of ma-
trix polynomials. But since p(Jk ) is completely determined by the values of p on the
spectrum of Jk , and these values are a subset of the values of p on the spectrum of A,
it follows from Remark 1.5 and Remark 1.10 that p(Jk ) is precisely (1.4). Hence the
matrix f (A) obtained from Definition 1.4 agrees with that given by Definition 1.2.
For the equivalence of Definition 1.11 with the other two definitions, see Horn and
Johnson [296, , Thm. 6.2.28].
We will mainly use (for theoretical purposes) Definitions 1.2 and 1.4. The polyno-
mial interpolation definition, Definition 1.4, is well suited to proving basic properties
of matrix functions, such as those in Section 1.3, while the Jordan canonical form
definition, Definition 1.2, excels for solving matrix equations such as X 2 = A and
eX = A. For many purposes, such as the derivation of the formulae in the next
section, either of the definitions can be used.
In the rest of the book we will refer simply to “the definition of a matrix function”.
and so
$$p(t) = f(0)\,\frac{t - v^*u}{0 - v^*u} + f(v^*u)\,\frac{t - 0}{v^*u - 0}.$$

Hence

$$\begin{aligned} f(A) = p(A) &= -\frac{f(0)}{v^*u}\,uv^* + f(0)I + f(v^*u)\,\frac{uv^*}{v^*u} \\ &= f(0)I + \frac{f(v^*u)-f(0)}{v^*u-0}\,uv^* \qquad\qquad (1.13) \\ &= f(0)I + f[v^*u, 0]\,uv^*. \end{aligned}$$
We have manipulated the expression into this form involving a divided difference
because it is suggestive of what happens when v ∗ u = 0. Indeed f [0, 0] = f ′ (0) and so
when v ∗ u = 0 we may expect that f (A) = f (0)I + f ′ (0)uv ∗ . To confirm this formula,
note that v*u = 0 implies that the spectrum of A consists entirely of 0 and that
A² = (v*u)uv* = 0. Hence, assuming A ≠ 0, A must have one 2 × 2 Jordan block
corresponding to the eigenvalue 0, with the other n − 2 zero eigenvalues occurring in
1 × 1 Jordan blocks. The interpolation conditions (1.7) are therefore
is valid for all u and v. We could have obtained this formula directly by using the
divided difference form (1.10) of the Hermite interpolating polynomial r, but the
derivation above gives more insight.
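As a small numerical check of (1.13), here is a Python/SciPy sketch (added for illustration, with f = exp and randomly generated u and v, so that v*u ≠ 0 almost surely):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))
A = u @ v.T                           # A = u v^T has eigenvalues v^T u and 0 (n-1 times)

vTu = (v.T @ u).item()                # nonzero here
divdiff = (np.exp(vTu) - np.exp(0.0)) / vTu     # divided difference f[v^T u, 0]
F = np.exp(0.0) * np.eye(n) + divdiff * A       # formula (1.13) with f = exp

print(np.linalg.norm(F - expm(A)))    # ~ 1e-15: agrees with a direct evaluation of e^A
```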
We now show how the formula is obtained from Definition 1.2 when v*u ≠ 0 (for
the case v*u = 0 see Problem 1.15). The Jordan canonical form can be written as

$$A = [\,u \;\; X\,]\,\mathrm{diag}(v^*u, 0, \ldots, 0)\begin{bmatrix} v^*/(v^*u) \\ Y \end{bmatrix},$$

Hence

$$f(A) = [\,u \;\; X\,]\,\mathrm{diag}\bigl(f(v^*u), f(0), \ldots, f(0)\bigr)\begin{bmatrix} v^*/(v^*u) \\ Y \end{bmatrix} = f(v^*u)\,\frac{uv^*}{v^*u} + f(0)XY.$$
For a more general result involving a perturbation of arbitrary rank see Theorem 1.35.
$$p(t) = \tfrac{1}{4}\Bigl[\,f(1)(t+1)(t-i)(t+i) - f(-1)(t-1)(t-i)(t+i) + if(i)(t-1)(t+1)(t+i) - if(-i)(t-1)(t+1)(t-i)\,\Bigr]. \tag{1.18}$$
Thus f (A) = p(A), and in fact this formula holds even for n = 1: 3, since incorpo-
rating extra interpolation conditions does not affect the ability of the interpolating
polynomial to yield f (A) (see Remark 1.5). This expression can be quickly evaluated
in O(n2 log n) operations because multiplication of a vector by Fn can be carried out
in O(n log n) operations using the fast Fourier transform (FFT).
Because Fn is unitary and hence normal, Fn is unitarily diagonalizable: Fn =
QDQ* for some unitary Q and diagonal D. (Indeed, any matrix whose minimal polynomial has distinct zeros is diagonalizable.) Thus f(F_n) = Qf(D)Q*.
However, this formula requires knowledge of Q and D and so is much more compli-
cated to use than (1.18).
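As an illustrative Python sketch (assuming, as the FFT remark suggests, that F_n denotes the unitary DFT matrix, so that F_n^4 = I and its eigenvalues are fourth roots of unity), the polynomial (1.18) with f = exp can be checked against a direct evaluation of exp(F_n):

```python
import numpy as np
from scipy.linalg import expm

n = 8
F = np.fft.fft(np.eye(n), norm="ortho")   # unitary DFT matrix (assumed meaning of F_n); F^4 = I
I = np.eye(n)
f = np.exp

# Degree-3 interpolating polynomial (1.18) evaluated at F
p = 0.25 * (  f( 1.0)   * (F + I) @ (F - 1j*I) @ (F + 1j*I)
            - f(-1.0)   * (F - I) @ (F - 1j*I) @ (F + 1j*I)
            + 1j*f( 1j) * (F - I) @ (F + I) @ (F + 1j*I)
            - 1j*f(-1j) * (F - I) @ (F + I) @ (F - 1j*I))

print(np.linalg.norm(p - expm(F)))        # ~ 1e-14: p(F) reproduces exp(F)
```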
1.3. Properties
The sign of a good definition is that it leads to the properties one expects or hopes
for, as well as some useful properties that are less obvious. We collect some general
properties that follow from the definition of f (A).
Theorem 1.13. Let A ∈ Cn×n and let f be defined on the spectrum of A. Then
(a) f (A) commutes with A;
(b) f (AT ) = f (A)T ;
(c) f (XAX −1 ) = Xf (A)X −1 ;
(d) the eigenvalues of f (A) are f (λi ), where the λi are the eigenvalues of A;
(e) if X commutes with A then X commutes with f(A), that is, XA = AX implies Xf(A) = f(A)X;
(f) if A is block triangular then F = f(A) is block triangular with the same block structure as A, and F_ii = f(A_ii);
(g) if A = diag(A_11, A_22, . . . , A_mm) is block diagonal then f(A) = diag(f(A_11), f(A_22), . . . , f(A_mm));
(h) f(I_m ⊗ A) = I_m ⊗ f(A), and f(A ⊗ I_m) = f(A) ⊗ I_m.
Proof. Definition 1.4 implies that f (A) is a polynomial in A, p(A) say. Then
f (A)A = p(A)A = Ap(A) = Af (A), which proves the first property. For (b) we have
f (A)T = p(A)T = p(AT ) = f (AT ), where the last equality follows from the fact that
the values of f on the spectrum of A are the same as the values of f on the spectrum of
AT . (c) and (d) follow immediately from Definition 1.2. (e) follows from (c) when X is
nonsingular; more generally it is obtained from Xf (A) = Xp(A) = p(A)X = f (A)X.
For (f), f (A) = p(A) is clearly block triangular and its ith diagonal block is p(Aii ).
Since p interpolates f on the spectrum of A it interpolates f on the spectrum of each
Aii , and hence p(Aii ) = f (Aii ). (g) is a special case of (f). (h) is a special case of
(g), since I_m ⊗ A = diag(A, A, . . . , A). Finally, we have A ⊗ B = Π(B ⊗ A)Π^T for a
permutation matrix Π, and so
Theorem 1.14 (equality of two matrix functions). With the notation of Section 1.2,
f (A) = g(A) if and only if

$$f^{(j)}(\lambda_i) = g^{(j)}(\lambda_i), \qquad j = 0\colon n_i - 1, \quad i = 1\colon s.$$
Theorem 1.15 (sum and product of functions). Let f and g be functions defined on
the spectrum of A ∈ Cn×n .
(a) If h(t) = f (t) + g(t) then h(A) = f (A) + g(A).
(b) If h(t) = f (t)g(t) then h(A) = f (A)g(A).
Proof. Part (a) is immediate from any of the definitions of h(A). For part
(b), let p and q interpolate f and g on the spectrum of A, so that p(A) = f (A)
and q(A) = g(A). By differentiating and using the product rule we find that the
functions h(t) and r(t) = p(t)q(t) have the same values on the spectrum of A. Hence
h(A) = r(A) = p(A)q(A) = f (A)g(A).
The next result generalizes the previous one and says that scalar functional relationships of a polynomial nature are preserved by matrix functions. For example sin²(A) + cos²(A) = I, (A^{1/p})^p = A, and e^{iA} = cos(A) + i sin(A). Of course, generalizations of scalar identities that involve two or more noncommuting matrices may fail; for example, e^{A+B}, e^A e^B, and e^B e^A are in general all different (see Section 10.1).
Theorem 1.17 (composite function). Let A ∈ Cn×n and let the distinct eigenvalues
of A be λ1 , . . . , λs with indices n1 , . . . , ns . Let h be defined on the spectrum of A (so
that the values h(j) (λi ), j = 0: ni − 1, i = 1: s exist) and let the values g (j) (h(λi )),
j = 0: ni − 1, i = 1: s exist. Then f (t) = g(h(t)) is defined on the spectrum of A and
f (A) = g(h(A)).
and all the derivatives on the right-hand side exist, f is defined on the spectrum of A.
Let p(t) be any polynomial satisfying the interpolation conditions
From Definition 1.2 it is clear that the indices of the eigenvalues µ1 , . . . , µs of h(A)
are at most n1 , . . . , ns , so the values on the right-hand side of (1.20) contain the
values of g on the spectrum of B = h(A); thus g(B) is defined and p(B) = g(B). It
now follows by (1.19) and (1.20) that the values of f (t) and p(h(t)) coincide on the
spectrum of A. Hence by applying Theorem 1.16 to Q(f (t), h(t)) = f (t) − p(h(t)) we
conclude that
f (A) = p(h(A)) = p(B) = g(B) = g(h(A)),
as required.
The assumptions in Theorem 1.17 on g for f(A) to exist are stronger than necessary in certain cases where a Jordan block of A splits under evaluation of h. Consider, for example, g(t) = t^{1/3}, h(t) = t², and A = $\left[\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right]$. The required derivative g'(0) in Theorem 1.17 does not exist, but f(A) = (A²)^{1/3} = 0 nevertheless does exist. (A full description of the Jordan canonical form of f(A) in terms of that of A is given in Theorem 1.36.)
Theorem 1.17 implies that exp(log A) = A, provided that log is defined on the
spectrum of A. However, log(exp(A)) = A does not hold unless the spectrum of
A satisfies suitable restrictions, since the scalar relation log(e^t) = t is likewise not
generally true in view of e^t = e^{t+2kπi} for any integer k; see Problem 1.39.
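For a concrete instance, here is a Python/SciPy sketch (added for illustration; the particular matrix is an arbitrary choice with eigenvalues ±4i, whose imaginary parts exceed π):

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[0.0, 4.0],
              [-4.0, 0.0]])      # eigenvalues +/- 4i, so exp(A) is a rotation through 4 radians

B = expm(A)
print(logm(B))                   # approximately [[0, 4-2*pi], [-(4-2*pi), 0]]: log(exp(A)) != A
print(np.linalg.norm(expm(logm(B)) - B))   # ~ 1e-15: but exp(log(B)) does recover B
```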
Although f (AT ) = f (A)T always holds (Theorem 1.13 (b)), the property f (A∗ ) =
f (A)∗ does not. The next result says essentially that for an analytic function f
defined on a suitable domain that includes a subset S of the real line, f (A∗ ) = f (A)∗
holds precisely when f maps S back into the real line. This latter condition also
characterizes when A real implies f (A) real (cf. the sufficient conditions given in
Remark 1.9).
Theorem 1.18 (Higham, Mackey, Mackey, and Tisseur). Let f be analytic on an open
subset Ω ⊆ C such that each connected component of Ω is closed under conjugation.
Consider the corresponding matrix function f on its natural domain in Cn×n , the set
D = { A ∈ Cn×n : Λ(A) ⊆ Ω }. Then the following are equivalent:
(a) f (A∗ ) = f (A)∗ for all A ∈ D.
(b) f (A) = f (A) for all A ∈ D.
(c) f (Rn×n ∩ D) ⊆ Rn×n .
(d) f (R ∩ Ω) ⊆ R.
Proof. The first two properties are obviously equivalent, in view of Theorem 1.13 (b).
Our strategy is therefore to show that (b) ⇒ (c) ⇒ (d) ⇒ (b).
(b) ⇒ (c): If A ∈ Rn×n ∩ D then
Proof. See Horn and Johnson [296, , Thm. 6.2.27 (1)], and Mathias [412,
, Lem. 1.1] for the conditions as stated here.
For continuity of f (A) on the set of normal matrices just the continuity of f is
sufficient [296, , Thm. 6.2.37].
Our final result shows that under mild conditions to check the veracity of a matrix
identity it suffices to check it for diagonalizable matrices.
Theorem 1.20. Let f satisfy the conditions of Theorem 1.19. Then f (A) = 0 for all
A ∈ Cn×n with spectrum in D if and only if f (A) = 0 for all diagonalizable A ∈ Cn×n
with spectrum in D.
Proof. See Horn and Johnson [296, , Thm. 6.2.27 (2)].
For an example of the use of Theorem 1.20 see the proof of Theorem 11.1. The-
orem 1.13 (f) says that block triangular structure is preserved by matrix functions.
An explicit formula can be given for an important instance of the block 2 × 2 case.
Theorem 1.21. Let f satisfy the conditions of Theorem 1.19 with D containing the spectrum of

$$A = \begin{bmatrix} B & c \\ 0 & \lambda \end{bmatrix} \in \mathbb{C}^{n\times n}, \qquad B \in \mathbb{C}^{(n-1)\times(n-1)}, \quad c \in \mathbb{C}^{n-1}.$$

Then

$$f(A) = \begin{bmatrix} f(B) & g(B)c \\ 0 & f(\lambda) \end{bmatrix}, \tag{1.21}$$

where g(z) = f[z, λ]. In particular, if λ ∉ Λ(B) then g(B) = (B − λI)^{-1}(f(B) − f(λ)I).
Proof. We need only to demonstrate the formula for the (1,2) block F12 of f(A). Equating (1,2) blocks in f(A)A = Af(A) (Theorem 1.13 (a)) yields BF12 + cf(λ) = f(B)c + F12 λ, or (B − λI)F12 = (f(B) − f(λ)I)c. If λ ∉ Λ(B) the result is proved. Otherwise, the result follows by a continuity argument: replace λ by λ(ε) = λ + ε, so that λ(ε) ∉ Λ(B) for sufficiently small ε, let ε → 0, and use the continuity of divided differences and of f(A).
For an expression for a function of a general block 2 × 2 block triangular matrix
see Theorem 4.12.
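As a quick numerical check of (1.21), the following Python/SciPy sketch (added for illustration, with f = exp, randomly generated B and c, and λ chosen so that λ ∉ Λ(B)) compares the (1,2) block of e^A with (B − λI)^{-1}(f(B) − f(λ)I)c:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
m = 4                                       # A is (m+1) x (m+1)
B = rng.standard_normal((m, m))
c = rng.standard_normal((m, 1))
lam = 0.3                                   # assumed (and here almost surely) not an eigenvalue of B

A = np.block([[B, c],
              [np.zeros((1, m)), np.array([[lam]])]])

F = expm(A)                                 # f(A) with f = exp
gBc = np.linalg.solve(B - lam*np.eye(m), expm(B) @ c - np.exp(lam) * c)  # (B - lam*I)^{-1}(f(B) - f(lam)I)c

print(np.linalg.norm(F[:m, m:] - gBc))      # ~ 1e-15: matches the (1,2) block in (1.21)
```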
that is, solve X² = A. Taking f(t) = √t, the interpolation conditions in Definition 1.4 are (with s = 1, n₁ = 1) simply p(1) = f(1) = ±1. The interpolating polynomial
is therefore either p(t) = 1 or p(t) = −1, corresponding to the two square roots of
1, giving I and −I as square roots of A. Both of these square roots are, trivially,
polynomials in A. Turning to Definition 1.2, the matrix A is already in Jordan form
with two 1 × 1 Jordan blocks, and the definition provides the same two square roots.
However, if we ignore the prescription at the end of Section 1.2.1 about the choice of
branches then we can obtain two more square roots,
$$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$
in which the two eigenvalues 1 have been sent to different square roots. Moreover,
since A = ZIZ −1 is a Jordan canonical form for any nonsingular Z, Definition 1.2
yields the square roots
$$Z\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}Z^{-1}, \qquad Z\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}Z^{-1}, \tag{1.22}$$
and these formulae provide an infinity of square roots, because only for diagonal Z
are the matrices in (1.22) independent of Z. Indeed, one infinite family of square
roots of A comprises the Householder reflections
$$H(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}, \qquad \theta \in [0, 2\pi].$$
Definitions 1.2, 1.4, and 1.11 yield primary matrix functions. In most applications
it is primary matrix functions that are of interest, and virtually all the existing the-
ory and available methods are for such functions. Nonprimary matrix functions are
obtained from Definition 1.2 when two equal eigenvalues in different Jordan blocks
are mapped to different values of f ; in other words, different branches of f are taken
for different Jordan blocks with the same eigenvalue. The function obtained thereby
depends on the matrix Z in (1.3). This possibility arises precisely when the function is
multivalued and the matrix is derogatory, that is, the matrix has multiple eigenvalues
and an eigenvalue appears in more than one Jordan block.
Unlike primary matrix functions, nonprimary ones are not expressible as polyno-
mials in the matrix. However, a nonprimary function obtained from Definition 1.2,
using the prescription in the previous paragraph, nevertheless commutes with the ma-
trix. Such a function has the form X = Z diag(fk (Jk ))Z −1 , where A = Z diag(Jk )Z −1
is a Jordan canonical form and where the notation fk denotes that the branch of f
taken depends on k. Then XA = AX, because fk (Jk ) is a primary matrix function
and so commutes with Jk .
But note that not all nonprimary matrix functions are obtainable from the Jordan canonical form prescription above. For example, A = $\left[\begin{smallmatrix} 0 & 0 \\ 0 & 0 \end{smallmatrix}\right]$ has the square root X = $\left[\begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right]$, and X is a Jordan block larger than the 1 × 1 Jordan blocks of A. This
example also illustrates that a nonprimary function can have the same spectrum as a
primary function, and so in general a nonprimary function cannot be identified from
its spectrum alone.
Nonprimary functions can be needed when, for a matrix A depending on a param-
eter t, a smooth curve of functions f (A(t)) needs to be computed and eigenvalues of
A(t) coalesce. Suppose we wish to compute square roots of
$$G(\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$
The only primary square roots of G(π) are ±iI, which are nonreal. While it is
nonprimary, G(π/2) is the square root we need in order to produce a smooth curve
of square roots.
An example of an application where nonprimary logarithms arise is the embed-
dability problems for Markov chains (see Section 2.3).
A primary matrix function with a nonprimary flavour is the matrix sign function
(see Chapter 5), which for a matrix A ∈ Cn×n is a (generally) nonprimary square
root of I that depends on A.
Unless otherwise stated, f (A) denotes a primary matrix function throughout this
book.
Theorem 1.22 (existence of matrix square root). A ∈ C^{n×n} has a square root if and
only if in the “ascent sequence” of integers d_1, d_2, . . . defined by

$$d_i = \dim(\mathrm{null}(A^i)) - \dim(\mathrm{null}(A^{i-1}))$$

no two terms are the same odd integer.
Proof. See Cross and Lancaster [122, ] or Horn and Johnson [296, , Cor.
6.4.13].
To illustrate, consider a Jordan block J ∈ C^{m×m} with eigenvalue zero. We have dim(null(J⁰)) = 0, dim(null(J)) = 1, dim(null(J²)) = 2, . . . , dim(null(J^m)) = m, and so the ascent sequence comprises m 1s. Hence J does not have a square root unless m = 1. However, the matrix
$$\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \tag{1.23}$$

has ascent sequence 2, 1, 0, . . . and so does have a square root—for example, the matrix

$$\begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \tag{1.24}$$
(which is the 3 × 3 Jordan block with eigenvalue 0 with rows and columns 2 and 3
interchanged).
Another important existence question is “If A is real does there exist a real f (A),
either primary or nonprimary?” For most common functions the answer is clearly yes,
by considering a power series representation. For the square root and logarithm the
answer is not obvious; the next result completes the partial answer to this question
given in Remark 1.9 and Theorem 1.18.
Proof. For the last part consider the real Schur decomposition, QT AQ = R (see
Section B.5), where Q ∈ Rn×n is orthogonal and R ∈ Rn×n is upper quasi-triangular.
Clearly, f (A) is real if and only if QT f (A)Q = f (R) is real, and a primary matrix
function f (R) is block upper triangular with diagonal blocks f (Rii ). If A has a
negative real eigenvalue then some Rii is 1 × 1 and negative, making f (Rii ) nonreal
for f the square root and logarithm.
The result of (b) is due to Culver [126, ], and the proof for (a) is similar; see
also Horn and Johnson [296, , Thms. 6.4.14, 6.4.15] and Nunemacher [451, ].
Theorem 1.23 implies that −In has a real, nonprimary square root and logarithm
for every even n. For some insight into part (a), note that if A has two Jordan blocks
J of the same size then its Jordan matrix has a principal submatrix of the form
J 0 0 I 2
0J = J 0 .
where f(J_k(λ_k)) is given in (1.4) and where j_k = 1 or 2 denotes the branch of f; thus L_k^{(1)} = −L_k^{(2)}. Our first result characterizes all square roots.
Theorem 1.24 (Gantmacher). Let A ∈ Cn×n be nonsingular with the Jordan canon-
ical form (1.2). Then all solutions to X 2 = A are given by
$$X = ZU\,\mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr)U^{-1}Z^{-1}, \tag{1.25}$$
where µ_k² = λ_k, k = 1: p.
Now consider the matrix
$$L = \mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr), \tag{1.27}$$
where we choose the j_k so that L_k^{(j_k)} has eigenvalue µ_k for each k. Since L is a
square root of J, by the same argument as above L must have the Jordan canonical
form JX . Hence X = W LW −1 for some nonsingular W . From X 2 = A we have
W JW −1 = W L2 W −1 = ZJZ −1 , which can be rewritten as (Z −1 W )J = J(Z −1 W ).
Hence U = Z −1 W is an arbitrary matrix that commutes with J, which completes the
proof.
The structure of the matrix U in Theorem 1.24 is described in the next result.
Theorem 1.25 (commuting matrices). Let A ∈ Cn×n have the Jordan canonical
form (1.2). All solutions of AX = XA are given by X = ZWZ^{-1}, where W = (W_{ij}) with W_{ij} ∈ C^{m_i×m_j} (partitioned conformably with J in (1.2)) satisfies

$$W_{ij} = \begin{cases} 0, & \lambda_i \neq \lambda_j, \\ T_{ij}, & \lambda_i = \lambda_j, \end{cases}$$
where Tij is an arbitrary upper trapezoidal Toeplitz matrix which, for mi < mj , has
the form Tij = [0, Uij ], where Uij is square.
Theorem 1.26 (classification of square roots). Let the nonsingular matrix A ∈ Cn×n
have the Jordan canonical form (1.2) with p Jordan blocks, and let s ≤ p be the number
of distinct eigenvalues of A. Then A has precisely 2^s square roots that are primary functions of A, given by

$$X_j = Z\,\mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr)Z^{-1}, \qquad j = 1\colon 2^s,$$
Proof. The proof consists of showing that for the square roots (1.25) for which
ji = jk whenever λi = λk ,
$$U\,\mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr)U^{-1} = \mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr),$$
that is, U commutes with the block diagonal matrix in the middle. This commutativ-
ity follows from the explicit form for U provided by Theorem 1.25 and the fact that
upper triangular Toeplitz matrices commute.
Theorem 1.26 shows that the square roots of a nonsingular matrix fall into two
classes. The first class comprises finitely many primary square roots, which are “iso-
lated”, being characterized by the fact that the sum of any two of their eigenvalues
is nonzero. The second class, which may be empty, comprises a finite number of pa-
rametrized families of matrices, each family containing infinitely many square roots
sharing the same spectrum.
Theorem 1.26 has two specific implications of note. First, if λ_k ≠ 0 then the two upper triangular square roots of J_k(λ_k) given by (1.4) with f the square root function are the only square roots of J_k(λ_k). Second, if A is nonsingular and nonderogatory, that is, none of the s distinct eigenvalues appears in more than one Jordan block, then A has precisely 2^s square roots, each of which is a primary function of A.
There is no analogue of Theorems 1.24 and 1.26 for singular A. Indeed the Jordan
block structure of a square root (when one exists) can be very different from that
of A. The search for square roots X of a singular matrix is aided by Theorem 1.36
below, which helps identify the possible Jordan forms of X; see Problem 1.29.
Analogous results, with analogous proofs, hold for the matrix logarithm.
Theorem 1.27 (Gantmacher). Let A ∈ Cn×n be nonsingular with the Jordan canon-
ical form (1.2). Then all solutions to eX = A are given by
$$X = ZU\,\mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr)U^{-1}Z^{-1},$$

where

$$L_k^{(j_k)} = \log(J_k(\lambda_k)) + 2j_k\pi i I_{m_k}; \tag{1.28}$$

log(J_k(λ_k)) denotes (1.4) with f the principal branch of the logarithm, defined by Im(log(z)) ∈ (−π, π]; j_k is an arbitrary integer; and U is an arbitrary nonsingular matrix that commutes with J.
where L_k^{(j_k)} is defined in (1.28), corresponding to all possible choices of the integers j_1, . . . , j_p, subject to the constraint that j_i = j_k whenever λ_i = λ_k.
If s < p then e^X = A has nonprimary solutions. They form parametrized families

$$X_j(U) = ZU\,\mathrm{diag}\bigl(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}\bigr)U^{-1}Z^{-1},$$
Proof. Note first that a nonprimary square root of A, if one exists, must have
eigenvalues µi and µj with µi = −µj , and hence the eigenvalues cannot all lie in the
open right half-plane. Therefore only a primary square root can have spectrum in the
open right half-plane. Since A has no eigenvalues on R− , it is clear from Theorem 1.26
that there is precisely one primary square root of A whose eigenvalues all lie in the
open right half-plane. Hence the existence and uniqueness of A1/2 is established.
That A1/2 is real when A is real follows from Theorem 1.18 or Remark 1.9.
See Problem 1.27 for an extension of Theorem 1.29 that allows A to be singular.
Corollary 1.30. A Hermitian positive definite matrix A ∈ Cn×n has a unique Her-
mitian positive definite square root.
Proof. By Theorem 1.29 the only possible Hermitian positive definite square
root is A1/2 . That A1/2 is Hermitian positive definite follows from the expression
A1/2 = QD1/2 Q∗ , where A = QDQ∗ is a spectral decomposition (Q unitary, D
diagonal), with D having positive diagonal entries.
For a proof of the corollary from first principles see Problem 1.41.
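For illustration, a short Python/SciPy sketch (added here; scipy.linalg.sqrtm computes the principal square root) confirms that, for a real symmetric positive definite A, the matrix QD^{1/2}Q* constructed in the proof coincides with A^{1/2}:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
n = 5
X = rng.standard_normal((n, n))
A = X @ X.T + n * np.eye(n)            # symmetric positive definite

d, Q = np.linalg.eigh(A)               # spectral decomposition A = Q diag(d) Q^T
root = Q @ np.diag(np.sqrt(d)) @ Q.T   # the Hermitian positive definite square root of Corollary 1.30

print(np.linalg.norm(root @ root - A))       # ~ 1e-14: it is indeed a square root
print(np.linalg.norm(root - sqrtm(A)))       # ~ 1e-13: and it equals the principal square root A^{1/2}
```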
For any polynomial p and any A ∈ C^{m×n} and B ∈ C^{n×m},

$$A\,p(BA) = p(AB)\,A. \tag{1.29}$$

This equality is trivial for monomials and follows immediately for general polynomials.
First we recap a result connecting the Jordan structures of AB and BA. We
denote by zi (X) the nonincreasing sequence of the sizes z1 , z2 , . . . , of the Jordan
blocks corresponding to the zero eigenvalues of the square matrix X.
Theorem 1.32 (Flanders). Let A ∈ Cm×n and B ∈ Cn×m . The nonzero eigenvalues
of AB are the same as those of BA and have the same Jordan structure. For the zero
eigenvalues (if any), |zi (AB) − zi (BA)| ≤ 1 for all i, where the shorter sequence
is appended with zeros as necessary, and any such set of inequalities is attained for
some A and B. If m 6= n then the larger (in dimension) of AB and BA has a zero
eigenvalue of geometric multiplicity at least |m − n|.
Theorem 1.33. Let A ∈ Cn×n and B ∈ Cm×m and let f be defined on the spectrum
of both A and B. Then there is a single polynomial p such that f (A) = p(A) and
f (B) = p(B).
Corollary 1.34. Let A ∈ Cm×n and B ∈ Cn×m and let f be defined on the spectra
of both AB and BA. Then
Af (BA) = f (AB)A. (1.30)
Proof. By Theorem 1.33 there is a single polynomial p such that f (AB) = p(AB)
and f (BA) = p(BA). Hence, using (1.29),
When A and B are square and A, say, is nonsingular, another proof of Corol-
lary 1.34 is as follows: AB = A(BA)A−1 so f (AB) = Af (BA)A−1 , or f (AB)A =
Af (BA).
As a special case of the corollary, when AB (and hence also BA) has no eigenvalues on R^− (which implies that A and B are square, in view of Theorem 1.32),

$$A(BA)^{1/2} = (AB)^{1/2}A.$$
In fact, this equality holds also when AB has a semisimple zero eigenvalue and the definition of A^{1/2} is extended as in Problem 1.27.
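A quick numerical sketch in Python/SciPy (added for illustration) of the identity Af(BA) = f(AB)A for rectangular A and B, taking f = exp, for which the identity requires no restriction on the spectra:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
m, n = 5, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = A @ expm(B @ A)          # A f(BA): BA is 3 x 3
rhs = expm(A @ B) @ A          # f(AB) A: AB is 5 x 5

print(np.linalg.norm(lhs - rhs))   # ~ 1e-14: Corollary 1.34 with f = exp
```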
Corollary 1.34 is useful for converting f (AB) into f (BA) within an expression, and
vice versa; see, for example, (2.26), the proof of Theorem 6.11, and (8.5). However,
when m > n, (1.30) cannot be directly solved to give an expression for f (AB) in
terms of f (BA), because (1.30) is an underdetermined system for f (AB). The next
result gives such an expression, and in more generality.
Theorem 1.35. Let A ∈ Cm×n and B ∈ Cn×m , with m ≥ n, and assume that BA
is nonsingular. Let f be defined on the spectrum of αIm + AB, and if m = n let f be
defined at α. Then
$$f(\alpha I_m + AB) = f(\alpha)I_m + A(BA)^{-1}\bigl(f(\alpha I_n + BA) - f(\alpha)I_n\bigr)B. \tag{1.31}$$
Proof. Note first that by Theorem 1.32, the given assumption on f implies that
f is defined on the spectrum of αIn + BA and at α.
Let g(t) = f[α + t, α] = t^{-1}(f(α + t) − f(α)), so that f(α + t) = f(α) + tg(t).
Then, using Corollary 1.34,

$$\begin{aligned} f(\alpha I_m + AB) &= f(\alpha)I_m + ABg(AB) \\ &= f(\alpha)I_m + Ag(BA)B \\ &= f(\alpha)I_m + A(BA)^{-1}\bigl(f(\alpha I_n + BA) - f(\alpha)I_n\bigr)B, \end{aligned}$$
as required.
This result is of particular interest when m > n, for it converts the f (αIm + AB)
problem—a function evaluation of an m × m matrix—into the problem of evaluating
f and the inverse on n × n matrices. Some special cases of the result are as follows.
(a) With n = 1, we recover (1.16) (albeit with the restriction v ∗ u 6= 0).
(b) With f the inverse function and α = 1, (1.31) yields, after a little manipulation,
the formula (I + AB)−1 = I − A(I + BA)−1 B, which is often found in textbook
exercises. This formula in turn yields the Sherman–Morrison–Woodbury formula
(B.12) on writing A + U V ∗ = A(I + A−1 U · V ∗ ). Conversely, when f is analytic
we can obtain (1.31) by applying the Sherman–Morrison–Woodbury formula to the
Cauchy integral formula (1.12). However, Theorem 1.35 does not require analyticity.
As an application of Theorem 1.35, we now derive a formula for f (αIn +uv ∗ +xy ∗ ),
where u, v, x, y ∈ Cn , thereby extending (1.16) to the rank-2 case. Write
$$uv^* + xy^* = [\,u \;\; x\,]\begin{bmatrix} v^* \\ y^* \end{bmatrix} \equiv AB.$$

Then

$$C := BA = \begin{bmatrix} v^*u & v^*x \\ y^*u & y^*x \end{bmatrix} \in \mathbb{C}^{2\times 2}.$$

Hence

$$f(\alpha I_n + uv^* + xy^*) = f(\alpha)I_n + [\,u \;\; x\,]\,C^{-1}\bigl(f(\alpha I_2 + C) - f(\alpha)I_2\bigr)\begin{bmatrix} v^* \\ y^* \end{bmatrix}. \tag{1.32}$$
The evaluation of both C −1 and f (αI2 + C) can be done explicitly (see Problem 1.9
for the latter), so (1.32) gives a computable formula that can, for example, be used
for testing algorithms for the computation of matrix functions.
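For instance, the following Python/SciPy sketch (added for illustration, with f = exp, an arbitrarily chosen α, and randomly generated vectors, so that C is almost surely nonsingular) uses (1.32) to reproduce exp(αI + uv* + xy*):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
n, alpha = 6, 0.7
u, v, x, y = (rng.standard_normal((n, 1)) for _ in range(4))

A = np.hstack([u, x])                       # n x 2
B = np.vstack([v.T, y.T])                   # 2 x n
C = B @ A                                   # 2 x 2, assumed nonsingular

M = alpha * np.eye(n) + u @ v.T + x @ y.T   # alpha*I + u v^T + x y^T
small = expm(alpha * np.eye(2) + C) - np.exp(alpha) * np.eye(2)
F = np.exp(alpha) * np.eye(n) + A @ np.linalg.solve(C, small) @ B   # formula (1.32) with f = exp

print(np.linalg.norm(F - expm(M)))          # ~ 1e-14
```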
1.9. Miscellany
In this section we give a selection of miscellaneous results that either are needed
elsewhere in the book or are of independent interest.
The first result gives a complete description of the Jordan canonical form of f (A)
in terms of that of A. In particular, it shows that under the action of f a Jordan
block J(λ) splits into at least two smaller Jordan blocks if f ′ (λ) = 0.
Theorem 1.36 (Jordan structure of f (A)). Let A ∈ Cn×n with eigenvalues λk , and
let f be defined on the spectrum of A.
(a) If f'(λ_k) ≠ 0 then for every Jordan block J(λ_k) in A there is a Jordan block
of the same size in f(A) associated with f(λ_k).
(b) Let f'(λ_k) = f''(λ_k) = · · · = f^{(ℓ−1)}(λ_k) = 0 but f^{(ℓ)}(λ_k) ≠ 0, where ℓ ≥ 2,
and consider a Jordan block J(λk ) of size r in A.
(i ) If ℓ ≥ r, J(λk ) splits into r 1 × 1 Jordan blocks associated with f (λk ) in
f (A).
(ii ) If ℓ ≤ r − 1, J(λk ) splits into the following Jordan blocks associated with
f (λk ) in f (A):
• ℓ − q Jordan blocks of size p,
• q Jordan blocks of size p + 1,
where r = ℓp + q with 0 ≤ q ≤ ℓ − 1, p > 0.
Proof. We prove just the first part. From Definition 1.2 it is clear that f either
preserves the size of a Jordan block Jk (λk ) ∈ Cmk ×mk of A—that is, f (Jk (λk )) has
Jordan form Jk (f (λk )) ∈ Cmk ×mk —or splits Jk (λk ) into two or more smaller blocks,
each with eigenvalue f(λ_k). When f'(λ_k) ≠ 0, (1.4) shows that f(J_k(λ_k)) − f(λ_k)I has rank m_k − 1, which implies that f does not split the block J_k(λ_k). When f'(λ_k) = 0, it is clear from (1.4) that f(J_k(λ_k)) − f(λ_k)I has rank at most m_k − 2, which implies
that f (Jk (λk )) has at least two Jordan blocks. For proofs of the precise splitting
details, see Horn and Johnson [296, , Thm. 6.2.25] or Lancaster and Tismenetsky
[371, , Thm. 9.4.7].
To illustrate the result, consider the matrix
$$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$
Clearly f (A) has Jordan form comprising two 1 × 1 blocks and one 2 × 2 block. We
have f'(0) = f''(0) = 0 and f'''(0) ≠ 0. Applying Theorem 1.36 (b) with ℓ = 3,
r = 4, p = 1, q = 1, the theorem correctly predicts ℓ − q = 2 Jordan blocks of size
1 and q = 1 Jordan block of size 2. For an example of a Jordan block splitting with
f (X) = X 2 , see the matrices (1.23) and (1.24).
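A brief numerical sketch in Python/NumPy (added for illustration, and assuming, consistently with the stated derivative pattern, that this example takes f(X) = X³) confirms the predicted splitting: since (A³)² = 0, every Jordan block of A³ has size at most 2, and the number of 2 × 2 blocks equals rank(A³).

```python
import numpy as np

A = np.diag(np.ones(3), 1)               # the 4 x 4 Jordan block with eigenvalue 0 shown above
B = np.linalg.matrix_power(A, 3)         # f(A) with f(X) = X^3 (assumption consistent with f'''(0) != 0)

print(np.linalg.matrix_rank(B))          # 1: one 2 x 2 Jordan block, hence two 1 x 1 blocks remain
print(np.linalg.norm(B @ B))             # 0: (A^3)^2 = 0, so no block exceeds size 2
```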
Theorem 1.36 is useful when trying to solve nonlinear matrix equations, because
once the Jordan form of f (A) is known it narrows down the possible Jordan forms of
A; see, e.g., Problems 1.30 and 1.51.
We noted in Section 1.4 that a nonprimary function of a derogatory A may com-
mute with A but is not a polynomial in A. The next result shows that all matrices
that commute with A are polynomials in A precisely when A is nonderogatory—that
is, when no eigenvalue appears in more than one Jordan block in the Jordan canonical
form of A.
Theorem 1.38. B ∈ Cn×n commutes with every matrix that commutes with A ∈
Cn×n if and only if B is a polynomial in A.
Proof. See Horn and Johnson [296, , Thm. 4.4.19].
The following result is useful for finding solutions of a nonlinear matrix equation
of the form f (X) = A.
Corollary 1.41. Suppose A, B ∈ C^{n×n} and AB = BA. Then for some ordering of the eigenvalues of A, B, and A op B we have λ_i(A op B) = λ_i(A) op λ_i(B), where op = +, −, or ∗.
Proof. By Theorem 1.40 there exists a unitary U such that U*AU = T_A and U*BU = T_B are both upper triangular. Thus U*(A op B)U = T_A op T_B is upper triangular with diagonal elements λ_i(A) op λ_i(B), which are the eigenvalues of A op B.
Theorem 1.42 (McCoy). For A, B ∈ Cn×n the following conditions are equiva-
lent.
(a) There is an ordering of the eigenvalues such that λi (p(A, B)) = p(λi (A), λi (B))
for all polynomials of two variables p(x, y).
(b) There exists a unitary U ∈ Cn×n such that U ∗ AU and U ∗ BU are upper
triangular.
(c) p(A, B)(AB − BA) is nilpotent for all polynomials p(x, y) of two variables.
Theorem 1.43. A ∈ C^{n×n} is unitary if and only if A = e^{iH} for some Hermitian H.
In this representation H can be taken to be Hermitian positive definite.
Proof. The Schur decomposition of A has the form A = QDQ* with Q unitary and D = diag(exp(iθ_j)) = exp(iΘ), where Θ = diag(θ_j) ∈ R^{n×n}. Hence A = Q exp(iΘ)Q* = exp(iQΘQ*) = exp(iH), where H = H*. Without loss of generality we can take θ_j > 0, whence H is positive definite.
Theorem 1.44. A ∈ C^{n×n} has the form A = e^S with S real and skew-symmetric if and only if A is real orthogonal with det(A) = 1.
Proof. “⇒”: If S is real and skew-symmetric then A is real, A^T A = e^{−S}e^S = I, and det(e^S) = exp(∑_i λ_i(S)) = exp(0) = 1, since the eigenvalues of S are either zero or occur in pure imaginary complex conjugate pairs.
“⇐”: If A is real orthogonal then it has the real Schur decomposition A = QDQ^T with Q orthogonal and D = diag(D_ii), where each D_ii is 1, −1, or of the form $\left[\begin{smallmatrix} a_j & b_j \\ -b_j & a_j \end{smallmatrix}\right]$ with a_j² + b_j² = 1. Since det(A) = 1, there is an even number of −1s, and so we can include the −1 blocks among the $\left[\begin{smallmatrix} a_j & b_j \\ -b_j & a_j \end{smallmatrix}\right]$ blocks. It is easy to show that

$$\begin{bmatrix} a_j & b_j \\ -b_j & a_j \end{bmatrix} \equiv \begin{bmatrix} \cos\theta_j & \sin\theta_j \\ -\sin\theta_j & \cos\theta_j \end{bmatrix} = \exp\!\left(\begin{bmatrix} 0 & \theta_j \\ -\theta_j & 0 \end{bmatrix}\right) =: \exp(\Theta_j). \tag{1.33}$$
Note that another way of expressing Theorem 1.45 is that for any logarithm of a
nonsingular X, det(X) = exp(trace(log(X))).
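A short Python/SciPy sketch (added for illustration) checks both the skew-symmetric exponential of Theorem 1.44 and this determinant/trace relation:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
S = rng.standard_normal((4, 4))
S = S - S.T                                     # real skew-symmetric
A = expm(S)

print(np.linalg.norm(A.T @ A - np.eye(4)))      # ~ 1e-15: A is orthogonal
print(np.linalg.det(A))                         # 1.0 up to rounding, as Theorem 1.44 asserts

X = np.eye(4) + 0.3 * rng.standard_normal((4, 4))   # nonsingular, eigenvalues clustered near 1
print(np.linalg.det(X), np.exp(np.trace(logm(X))))  # the two values agree
```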
Buchheim gave a derivation of the formula [84, ] and then generalized it to mul-
tiple eigenvalues using Hermite interpolation [85, ].
Weyr [614, ] defined f (A) using a power series for f and showed that the
series converges if the eigenvalues of A lie within the radius of convergence of the
series. Hensel [258, ] obtained necessary and sufficient conditions for convergence
when one or more eigenvalues lies on the circle of convergence (see Theorem 4.7).
Metzler [424, ] defined the transcendental functions eA , log(A), sin(A), and
arcsin(A), all via power series.
The Cauchy integral representation was anticipated by Frobenius [195, ], who
states that if f is analytic then f (A) is the sum of the residues of (zI − A)−1 f (z) at
the eigenvalues of A. Poincaré [473, ] uses the Cauchy integral representation,
and this way of defining f (A) was proposed in a letter from Cartan to Giorgi, circa
1928 [216, ].
The Jordan canonical form definition is due to Giorgi [216, ]; Cipolla [109,
] extended it to produce nonprimary matrix functions.
Probably the first book (actually a booklet) to be written on matrix functions is
that of Schwerdtfeger [513, ]. With the same notation as in Definitions 1.2 and
1.4 he defines
Xs nXi −1
f (j) (λi )
f (A) = Ai (A − λi I)j ,
i=1 j=0
j!
where the Ai are the Frobenius covariants: Ai = Z diag(gi (Jk ))Z −1 , where gi (Jk ) = I
if λi is an eigenvalue of Jk and gi (Jk ) = 0 otherwise, where A = Z diag(Jk )Z −1 is the
Jordan canonical form. This is just a rewritten form of the expression for f (A) given
by Definition 1.2 or by the Lagrange–Hermite formula (1.8). It can be restated as
$$f(A) = \sum_{i=1}^{s}\sum_{j=0}^{n_i-1} f^{(j)}(\lambda_i)\, Z_{ij},$$
where the Zij depend on A but not on f . For more details on these formulae see Horn
and Johnson [296, , pp. 401–404, 438] and Lancaster and Tismenetsky [371, ,
Sec. 9.5].
The equivalence of all the above definitions of f (A) (modulo their different levels
of generality) was first shown by Rinehart [493, ] (see the quote at the end of the
chapter).
One of the earliest uses of matrices in practical applications was by Frazer, Duncan,
and Collar of the Aerodynamics Department of the National Physical Laboratory
(NPL), England, who were developing matrix methods for analyzing flutter (unwanted
vibrations) in aircraft. Their book Elementary Matrices and Some Applications to
Dynamics and Differential Equations [193, ] emphasizes the important role of
the matrix exponential in solving differential equations and was “the first to employ
matrices as an engineering tool” [71, ], and indeed “the first book to treat matrices
as a branch of applied mathematics” [112, ].
Early books with substantial material on matrix functions are Turnbull and Aitken
[579, , Sec. 6.6–6.8]; Wedderburn [611, , Chap. 8], which has a useful bibliog-
raphy arranged by year, covering 1853–1933; MacDuffee [399, , Chap. IX], which
gives a concise summary of early work with meticulous attribution of results; Ferrar
[184, , Chap. 5]; and Hamburger and Grimshaw [245, ]. Papers with useful
historical summaries include Afriat [5, ] and Heuvers and Moak [259, ].
Interest in computing matrix functions grew rather slowly following the advent of
the digital computer. As the histogram on page 379 indicates, the literature expanded
rapidly starting in the 1970s, and interest in the theory and computation of matrix
functions shows no signs of abating, spurred by the growing number of applications.
A landmark paper is Moler and Van Loan’s “Nineteen Dubious Ways to Compute
the Exponential of a Matrix” [437, ], [438, ], which masterfully organizes
and assesses the many different ways of approaching the eA problem. In particular,
it explains why many of the methods that have been (and continue to be) published
are unsuitable for finite precision computation.
The “problem solving environments” MATLAB, Maple, and Mathematica have
been invaluable for practitioners using matrix functions and numerical analysts de-
veloping algorithms for computing them. The original 1978 version of MATLAB
included the capability to evaluate the exponential, the logarithm, and several other
matrix functions. The availability of matrix functions in MATLAB and it competitors
has undoubtedly encouraged the use of succinct, matrix function-based solutions to
problems in science and engineering.
is given by Lancaster and Tismenetsky [371, , Chap. 9]. A classic reference is
Gantmacher [203, , Chap. 5]. Golub and Van Loan [224, , Chap. 11] briefly
treat the theory before turning to computational matters. Linear algebra and matrix
analysis textbooks with a significant component on f (A) include Cullen [125, 1972],
Pullman [481, ], and Meyer [426, ].
For more details on the Jordan canonical form see Horn and Johnson [295, ,
Chap. 3] and Lancaster and Tismenetsky [371, , Chap. 6].
Almost every textbook on numerical analysis contains a treatment of polynomial
interpolation for distinct nodes, including the Lagrange form (1.9) and the Newton
divided difference form (1.10). Textbook treatments of Hermite interpolation are
usually restricted to once-repeated nodes; for the general case see, for example, Horn
and Johnson [296, , Sec. 6.1.14] and Stoer and Bulirsch [542, , Sec. 2.1.5].
For the theory of functions of operators (sometimes called the holomorphic func-
tional calculus), see Davies [133, ], Dunford and Schwartz [172, ], [171, ],
and Kato [337, ].
Functions of the DFT matrix, and in particular fractional powers, are considered
by Dickinson and Steiglitz [151, ], who obtain a formula equivalent to (1.18).
Much has been written about fractional transforms, mainly in the engineering litera-
ture; for the fractional discrete cosine transform, for example, see Cariolaro, Erseghe,
and Kraniauskas [96, ].
Theorems 1.15–1.17 can be found in Lancaster and Tismenetsky [371, , Sec. 9.7].
Theorem 1.18 is from Higham, Mackey, Mackey, and Tisseur [283, , Thm. 3.2].
The sufficient condition of Remark 1.9 and the equivalence (c) ≡ (d) in Theorem 1.18
can be found in Richter [491, ].
Different characterizations of the reality of f (A) for real A can be found in Evard
and Uhlig [179, , Sec. 4] and Horn and Piepmeyer [298, ].
The terminology “primary matrix function” has been popularized through its use
by Horn and Johnson [296, , Chap. 6], but the term was used much earlier by
Rinehart [495, ] and Cullen [125, 1972].
A number of early papers investigate square roots and pth roots of (singular) matrices, including Taber [561, ], Metzler [424, ], Frobenius [195, ], Kreis [363, ], Baker [40, ], and Richter [491, ]; Wedderburn's book also treats the topic [611, , Secs. 8.04–8.06].
Theorem 1.24 is a special case of a result of Gantmacher for pth roots [203, ,
Sec. 8.6]. Theorem 1.26 is from Higham [268, ]. Theorem 1.27 is from [203, ,
Sec. 8.8].
Theorem 1.32 is proved by Flanders [188, ]. Alternative proofs are given by
Thompson [566, ] and Horn and Merino [297, , Sec. 6]; see also Johnson and
Schreiner [321, ].
We derived Theorem 1.35 as a generalization of (1.16) while writing this book;
our original proof is given in Problem 1.45. Harris [249, , Lem. 2] gives the result
for α = 0 and f a holomorphic function, with the same method of proof that we have
given. The special case of Theorem 1.35 with f the exponential and α = 0 is given
by Celledoni and Iserles [102, ].
Formulae for a rational function of a general matrix plus a rank-1 perturbation,
r(C + uv ∗ ), are derived by Bernstein and Van Loan [61, ]. These are more
complicated and less explicit than (1.31), though not directly comparable with it since
C need not be a multiple of the identity. The formulae involve the coefficients of r and
so cannot be conveniently applied to an arbitrary function f by using “f (A) = p(A)
Problems
The only way to learn mathematics is to do mathematics.
— PAUL R. HALMOS, A Hilbert Space Problem Book (1982)
1.1. Show that the value of f (A) given by Definition 1.2 is independent of the par-
ticular Jordan canonical form that is used.
1.2. Let Jk be the Jordan block (1.2b). Show that
    f(−J_k) = [ f(−λ_k)   −f′(−λ_k)   ⋯   (−1)^{m_k−1} f^{(m_k−1)}(−λ_k)/(m_k − 1)!
                           f(−λ_k)    ⋱   ⋮
                                      ⋱   −f′(−λ_k)
                                           f(−λ_k)   ].                              (1.34)
1.3. (Cullen [125, , Thm. 8.9]) Define f (A) by the Jordan canonical form defi-
nition. Use Theorem 1.38 and the property f (XAX −1 ) = Xf (A)X −1 to show that
f (A) is a polynomial in A.
1.4. (a) Let A ∈ Cn×n have an eigenvalue λ and corresponding eigenvector x. Show
that (f (λ), x) is a corresponding eigenpair for f (A).
(b) Suppose A has constant row sums α, that is, Ae = αe, where e = [1, 1, . . . , 1]T .
Show that f (A) has row sums f (α). Deduce the corresponding result for column sums.
1.5. Show that the minimal polynomial ψ of A ∈ Cn×n exists, is unique, and has
degree at most n.
1.6. (Turnbull and Aitken [579, , p. 75]) Show that if A ∈ C^{n×n} has minimal polynomial ψ(A) = A^2 − A − I then (I − (1/3)A)^{−1} = (3/5)(A + 2I).
1.7. (Pullman [481, , p. 56]) The matrix

    A = [ −2   2  −2   4
          −1   2  −1   1
           0   0   1   0
          −2   1  −1   4 ]
1.10. Let J = ee^T ∈ R^{n×n} denote the matrix of 1s. Show using Definition 1.4 that f(J) = f(0)I + n^{−1}(f(n) − f(0))J.
1.11. What are the interpolation conditions (1.7) for the polynomial p such that
p(A) = A?
1.12. Let A ∈ Cn×n have only two distinct eigenvalues, λ1 and λ2 , both semisimple.
Obtain an explicit formula for f (A).
1.13. Show using each of the three definitions (1.2), (1.4), and (1.11) of f (A) that
AB = BA implies f (A)B = Bf (A).
1.14. For a given A ∈ Cn×n and a given function f explain how to reliably compute
in floating point arithmetic a polynomial p such that f (A) = p(A).
1.15. Show how to obtain the formula (1.14) from Definition 1.2 when v*u = 0 with uv* ≠ 0.
1.16. Prove the formula (1.16) for f (αI + uv ∗ ). Use this formula to derive the
Sherman–Morrison formula (B.11).
1.17. Use (1.16) to obtain an explicit formula for f(A) for A = [λI_{n−1}, c; 0, λ] ∈ C^{n×n}. Check your result against Theorem 1.21.
1.18. (Schwerdtfeger [513, ]) Let p be a polynomial and A ∈ Cn×n . Show that
p(A) = 0 if and only if p(t)(tI − A)−1 is a polynomial in t. Deduce the Cayley–
Hamilton theorem.
1.19. Cayley actually discovered a more general version of the Cayley–Hamilton the-
orem, which appears in a letter to Sylvester but not in any of his published work
[120, ], [121, , p. 470], [464, , Letter 44]. Prove his general version: if
A, B ∈ Cn×n , AB = BA, and f (x, y) = det(xA − yB) then f (B, A) = 0. Is the
commutativity condition necessary?
1.20. Let f satisfy f (−z) = ±f (z). Show that f (−A) = ±f (A) whenever the pri-
mary matrix functions f (A) and f (−A) are defined. (Hint: Problem 1.2 can be
used.)
1.21. Let P ∈ Cn×n be idempotent (P 2 = P ). Show that f (aI + bP ) = f (a)I +
(f (a + b) − f (a))P .
1.22. Is f(A) = A* possible for a suitable choice of f? Consider, for example, f(λ) = λ̄.
1.23. Verify the Cauchy integral formula (1.12) in the case f (λ) = λj and A = Jn (0),
the Jordan block with zero eigenvalue.
1.24. Show from first principles that for λ_k ≠ 0 a Jordan block J_k(λ_k) has exactly two upper triangular square roots. (There are in fact only two square roots of any form, as shown by Theorem 1.26.)
1.25. (Davies [131, 2007]) Let A ∈ C^{n×n} (n > 1) be nilpotent of index n (that is, A^n = 0 but A^{n−1} ≠ 0). Show that A has no square root but that A + cA^{n−1} is a square root of A^2 for any c ∈ C. Describe all such A.
1.26. Suppose that X ∈ Cn×n commutes with A ∈ Cn×n , and let A have the Jordan
canonical form Z −1 AZ = diag(J1 , J2 , . . . , Jp ) = J. Is Z −1 XZ block diagonal with
blocking conformable with that of J?
1.27. (Extension of Theorem 1.29.) Let A ∈ Cn×n have no eigenvalues on R− except
possibly for a semisimple zero eigenvalue. Show that there is a unique square root X
of A that is a primary matrix function of A and whose nonzero eigenvalues lie in the
open right half-plane. Show that if A is real then X is real.
1.28. Investigate the square roots of the upper triangular matrix

    A = [ 0  1  1
          0  1  1
          0  0  1 ].
1.34. Let A ∈ C^{n×n} have no eigenvalues on R^−. Show that A^{1/2} = e^{(1/2) log A}, where the logarithm is the principal logarithm.
1.35. Let A, B ∈ Cn×n and AB = BA. Is it true that (AB)1/2 = A1/2 B 1/2 when the
square roots are defined?
1.36. (Hille [290, ]) Show that if e^A = e^B and no two elements of Λ(A) differ by a nonzero integer multiple of 2πi then AB = BA. Give an example to show that this conclusion need not be true without the assumption on Λ(A).
1.37. Show that if eA = eB and no eigenvalue of A differs from an eigenvalue of B
by a nonzero integer multiple of 2πi then A = B.
1.38. Let A ∈ C^{n×n} be nonsingular. Show that if f is an even function (f(z) = f(−z) for all z ∈ C) then f(√A) is the same for all choices of square root (primary or nonprimary). Show that if f is an odd function (f(−z) = −f(z) for all z ∈ C) then (√A)^{±1} f(√A) is the same for all choices of square root.
1.39. Show that for A ∈ C^{n×n}, log(e^A) = A if and only if |Im(λ_i)| < π for every eigenvalue λ_i of A, where log denotes the principal logarithm. (Since ρ(A) ≤ ‖A‖ for any consistent norm, ‖A‖ < π is sufficient for the equality to hold.)
1.40. Let A, B ∈ Cn×n and let f and g be functions such that g(f (A)) = A and
g(f (B)) = B. Assume also that B and f (A) are nonsingular. Show that f (A)f (B) =
f (B)f (A) implies AB = BA. For example, if the spectra of A and B lie in the open
right half-plane we can take f (x) = x2 and g(x) = x1/2 , or if ρ(A) < π and ρ(B) < π
we can take f (x) = ex and g(x) = log x (see Problem 1.39).
1.41. Give a proof from first principles (without using the theory of matrix functions
developed in this chapter) that a Hermitian positive definite matrix A ∈ Cn×n has a
unique Hermitian positive definite square root.
1.42. Let A ∈ Cn×n have no eigenvalues on R− . Given that A has a square root X
with eigenvalues in the open right half-plane and that X is a polynomial in A, show
from first principles, and without using any matrix decompositions, that X is the
unique square root with eigenvalues in the open right half-plane.
1.43. Prove the first and last parts of Theorem 1.32. (For the rest, see the sources
cited in the Notes and References.)
1.44. Give another proof of Corollary 1.34 for m 6= n by using the identity
    [AB, 0; B, 0] [I_m, A; 0, I_n] = [I_m, A; 0, I_n] [0, 0; B, BA]          (1.35)
(which is (1.36) below with α = 0). What additional hypotheses are required for this
proof?
1.45. Give another proof of Theorem 1.35 based on the identity
    [AB + αI_m, 0; B, αI_n] [I_m, A; 0, I_n] = [I_m, A; 0, I_n] [αI_m, 0; B, BA + αI_n].      (1.36)
1.50. (Borwein, Bailey, and Girgensohn [77, , p. 216]) Does the equation sin A = [1, 1996; 0, 1] have a solution? (This was Putnam Problem 1996-B4.)
1.51. Show that the equation

    cosh(A) = [ 1  a  a  ⋯  a
                   1  a  ⋯  a
                      1  ⋱  ⋮
                         ⋱  a
                            1 ]  ∈ C^{n×n}
which is an explicit formula for y in the case that f is independent of y. These formulae
do not necessarily provide the best way to compute the solutions numerically; the large
literature on the numerical solution of ordinary differential equations (ODEs) provides
alternative techniques. However, the matrix exponential is explicitly used in certain
methods—in particular the exponential integrators described in the next subsection.
In the special case f(t, y) = b, (2.3) is

    y(t) = e^{At}c + e^{At}[ −A^{−1}e^{−As}b ]_{s=0}^{t},                    (2.4)

which can be rewritten as

    y(t) = e^{At}c + tψ_1(tA)b,                                              (2.5)

where

    ψ_1(z) = (e^z − 1)/z = 1 + z/2! + z^2/3! + ⋯.                            (2.6)
The expression (2.5) has the advantage over (2.4) that it is valid when A is singular.
Some matrix differential equations have solutions expressible in terms of the matrix
exponential. For example, the solution of
dY
= AY + Y B, Y (0) = C
dt
is easily verified to be
Y (t) = eAt CeBt .
    d^2y/dt^2 + Ay = 0,   y(0) = y_0,   y′(0) = y_0′                         (2.7)

has solution

    y(t) = cos(√A t)y_0 + (√A)^{−1} sin(√A t)y_0′,                           (2.8)

where √A denotes any square root of A (see Problems 2.2 and 4.1). The solution exists for all A. When A is singular (and √A possibly does not exist) this formula is interpreted by expanding cos(√A t) and (√A)^{−1} sin(√A t) as power series in A.
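As a quick numerical illustration of (2.8), the following Python sketch (invented data; SciPy's sqrtm, cosm, and sinm are assumed for the matrix functions) checks that the formula satisfies the differential equation:

    import numpy as np
    from scipy.linalg import sqrtm, cosm, sinm

    rng = np.random.default_rng(1)
    n = 3
    A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # positive definite, so sqrt(A) exists
    y0, yp0 = rng.standard_normal(n), rng.standard_normal(n)

    rootA = sqrtm(A)                              # principal square root (any square root would do)
    def y(t):
        return cosm(rootA * t) @ y0 + np.linalg.solve(rootA, sinm(rootA * t)) @ yp0

    # Check y'' + A y = 0 at t = 0.7 with a second difference quotient.
    t, h = 0.7, 1e-5
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    print(np.linalg.norm(ypp + A @ y(t)))         # small (limited by the finite difference step)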
1990s due principally to advances in numerical linear algebra that have made efficient
implementation of the methods possible.
A simple example of an exponential integrator is the exponential time differencing
(ETD) Euler method
yn+1 = ehA yn + hψ1 (hA)f (tn , yn ), (2.9)
where yn ≈ y(tn ), tn = nh, and h is a stepsize. The function ψ1 , defined in (2.6),
is one of a family of functions {ψk } that plays an important role in these methods;
for more on the ψk see Section 10.7.4. The method (2.9) requires the computation
of the exponential and ψ1 (or, at least, their actions on a vector) at each step of the
integration.
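For concreteness, here is a sketch of a single ETD Euler step (2.9) in Python. The data are invented; ψ_1(hA) is obtained from an augmented matrix exponential, one of several possible ways to evaluate it, which avoids inverting hA and so remains valid for singular A.

    import numpy as np
    from scipy.linalg import expm

    def psi1(M):
        # psi_1(M) = (e^M - I) M^{-1}, read off as a block of an augmented exponential.
        n = M.shape[0]
        E = expm(np.block([[M, np.eye(n)], [np.zeros((n, n)), np.zeros((n, n))]]))
        return E[:n, n:]

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    f = lambda t, y: np.sin(y)                 # a made-up nonlinear term f(t, y)
    h, t0 = 0.01, 0.0
    y0 = rng.standard_normal(n)

    y1 = expm(h * A) @ y0 + h * psi1(h * A) @ f(t0, y0)    # one step of (2.9)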
For an overview of exponential integrators see Minchev and Wright [431, ],
and see LeVeque [380, ] for a concise textbook treatment. A few key papers are
Cox and Matthews [118, ], Hochbruck, Lubich, and Selhofer [292, ], Kassam
and Trefethen [336, ], and Schmelzer and Trefethen [503, ], and a MATLAB
toolbox is described by Berland, Skaflestad, and Wright [59, ].
For any such Q, eQt is nonnegative for all t ≥ 0 (see Theorem 10.29) and has unit
row sums, so is stochastic.
Now consider a discrete-time Markov process with transition probability matrix
P in which the transition probabilities are independent of time. We can ask whether
P = eQ for some intensity matrix Q, that is, whether the process can be regarded
as a discrete manifestation of an underlying time-homogeneous Markov process. If
such a Q exists it is called a generator and P is said to be embeddable. Necessary
and sufficient conditions for the existence of a generator for general n are not known.
Researchers in sociology [525, ], statistics [331, ], and finance [315, ]
have all investigated this embeddability problem. A few interesting features are as
follows:
• If P has distinct, real positive eigenvalues then the only real logarithm, and
hence the only candidate generator, is the principal logarithm.
• P may have one or more real negative eigenvalues, so that the principal logarithm is undefined, yet a generator may still exist. For example, consider the matrix [525, , Ex. 10]

      P = (1/3) [1 + 2x, 1 − x, 1 − x; 1 − x, 1 + 2x, 1 − x; 1 − x, 1 − x, 1 + 2x],   x = −e^{−2√3 π} ≈ −1.9 × 10^{−5},

  examined numerically in the sketch below.
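The following Python sketch (using SciPy's logm) looks at this example numerically: P has a double negative real eigenvalue, so the principal logarithm is not a real candidate generator, consistent with the discussion above.

    import numpy as np
    from scipy.linalg import logm

    x = -np.exp(-2 * np.sqrt(3) * np.pi)          # approximately -1.9e-5
    P = (1 / 3) * np.array([[1 + 2*x, 1 - x,   1 - x],
                            [1 - x,   1 + 2*x, 1 - x],
                            [1 - x,   1 - x,   1 + 2*x]])

    print(np.linalg.eigvals(P))                   # eigenvalues 1 and x (twice), with x < 0
    L = logm(P)                                   # principal logarithm: complex, so not a generator
    print(np.linalg.norm(L.imag))                 # nonzero imaginary part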
    dx/dt = Fx(t) + Gu(t),   F ∈ C^{n×n},  G ∈ C^{n×m},
    y(t) = Hx(t) + Ju(t),    H ∈ C^{p×n},  J ∈ C^{p×m}.
Here, x is the state vector and u and y are the input and output vectors, respectively.
The connection between the two forms is given by

    A = e^{Fτ},   B = (∫_0^τ e^{Ft} dt) G,
where τ is the sampling period. Therefore the matrix exponential and logarithm are
needed for these conversions. In the MATLAB Control System Toolbox [413] the
functions c2d and d2c carry out the conversions, making use of MATLAB’s expm and
logm functions.
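A small Python sketch of the conversion (invented F, G, and sampling period): A and B are read off from a single augmented matrix exponential, and F is recovered with the principal logarithm.

    import numpy as np
    from scipy.linalg import expm, logm

    n, m, tau = 3, 2, 0.1
    rng = np.random.default_rng(2)
    F = rng.standard_normal((n, n))
    G = rng.standard_normal((n, m))

    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = F, G
    E = expm(tau * M)
    A, B = E[:n, :n], E[:n, n:]          # A = e^{F*tau},  B = (int_0^tau e^{Ft} dt) G

    F_back = logm(A) / tau               # discrete-to-continuous conversion
    print(np.linalg.norm(F_back - F))    # small, provided the principal branch is the right one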
We turn now to algebraic equations arising in control theory and the role played
by the matrix sign function. The matrix sign function was originally introduced by
Roberts [496] in 1971 as a tool for solving the Lyapunov equation and the algebraic
Riccati equation. It is most often defined in terms of the Jordan canonical form
A = ZJZ^{−1} of A ∈ C^{n×n}. If we arrange that

    J = [J_1, 0; 0, J_2],

where the eigenvalues of J_1 ∈ C^{p×p} lie in the open left half-plane and those of J_2 ∈ C^{q×q} lie in the open right half-plane, then

    sign(A) = Z [−I_p, 0; 0, I_q] Z^{−1}.                                    (2.11)
The sign function is undefined if A has an eigenvalue on the imaginary axis. Note that sign(A) is a primary matrix function corresponding to the scalar sign function

    sign(z) = 1 if Re z > 0,  −1 if Re z < 0,   z ∈ C,
which maps z to the nearest square root of unity. For more on the sign function,
including alternative definitions and iterations for computing it, see Chapter 5.
The utility of the sign function is easily seen from Roberts’ observation that the
Sylvester equation
so the solution X can be read from the (1, 2) block of the sign of the block upper triangular matrix [A, −C; 0, −B]. The conditions that sign(A) and sign(B) are identity matrices are certainly satisfied for the Lyapunov equation, in which B = A*, in the common case where A is positive stable, that is, Re λ_i(A) > 0 for all i.
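As an illustration, here is a Python sketch of this construction, assuming the Sylvester equation has the form AX + XB = C with the spectra of A and B in the open right half-plane (data invented); the (1,2) block of the sign matrix is then −2X.

    import numpy as np
    from scipy.linalg import signm, solve_sylvester

    rng = np.random.default_rng(3)
    n = 4
    A = rng.standard_normal((n, n)) + 5 * np.eye(n)    # shifted to be positive stable
    B = rng.standard_normal((n, n)) + 5 * np.eye(n)
    C = rng.standard_normal((n, n))

    W = np.block([[A, -C], [np.zeros((n, n)), -B]])
    S = signm(W)                          # equals [I, -2X; 0, -I]
    X = -0.5 * S[:n, n:]

    print(np.linalg.norm(A @ X + X @ B - C))               # small residual
    print(np.linalg.norm(X - solve_sylvester(A, B, C)))    # agrees with a direct solver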
Consider now the algebraic Riccati equation
XF X − A∗ X − XA − G = 0, (2.13)
where all matrices are n × n and F and G are Hermitian. The desired solution is
Hermitian and stabilizing, in the sense that the spectrum of A − F X lies in the open
left half-plane. Such a solution exists and is unique under suitable conditions that we
will not describe; see [349, ] and [370, , Chap. 22] for details. The equation
can be written in the equivalent form

    W = [A*, G; F, −A] = [X, −I_n; I_n, 0] [−(A − FX), −F; 0, (A − FX)*] [X, −I_n; I_n, 0]^{−1}.        (2.14)
By assumption, A − F X has eigenvalues with negative real part. Hence we can apply
the sign function to (2.14) to obtain
    sign(W) = [X, −I_n; I_n, 0] [I_n, Z; 0, −I_n] [X, −I_n; I_n, 0]^{−1}

for some Z. Writing sign(W) − I_{2n} = [M_1 M_2], where M_1, M_2 ∈ C^{2n×n}, the latter equation becomes

    [M_1 M_2] [X, −I_n; I_n, 0] = [X, −I_n; I_n, 0] [0, Z; 0, −2I_n],        (2.15)
which gives

    M_1 X = −M_2,

which is a system of 2n^2 equations in n^2 unknowns. The system is consistent, by construction, and from the second block column of (2.15) we see that [−I_n, X]M_1 = 2I_n, which implies that M_1 has full rank. To summarize:
the Riccati equation (2.13) can be solved by computing the sign of a 2n × 2n matrix
and then solving an overdetermined but consistent system; the latter can be done
using a QR factorization.
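The whole procedure fits in a few lines of Python; the sketch below uses SciPy's signm and invented data with F = G = I, for which a stabilizing solution of (2.13) exists.

    import numpy as np
    from scipy.linalg import signm

    rng = np.random.default_rng(0)
    n = 4
    A = rng.standard_normal((n, n))
    F = np.eye(n)
    G = np.eye(n)

    W = np.block([[A.conj().T, G], [F, -A]])    # the matrix in (2.14)
    S = signm(W)
    M = S - np.eye(2 * n)                       # = [M1  M2], with M1, M2 of size 2n-by-n
    M1, M2 = M[:, :n], M[:, n:]

    X, *_ = np.linalg.lstsq(M1, -M2, rcond=None)   # consistent overdetermined system M1 X = -M2

    residual = X @ F @ X - A.conj().T @ X - X @ A - G
    print(np.linalg.norm(residual))             # small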
Theorem 2.1 (spectrum splitting via sign function). Let A ∈ Rn×n have no pure
imaginary eigenvalues and define W = (sign(A) + I)/2. Let
    Q^T W Π = [R_{11}, R_{12}; 0, 0],   R_{11} ∈ C^{q×q},

be a rank-revealing QR factorization, where Π is a permutation matrix and q = rank(W). Then

    Q^T A Q = [A_{11}, A_{12}; 0, A_{22}],   A_{11} ∈ C^{q×q},
where the eigenvalues of A11 lie in the open right half-plane and those of A22 lie in
the open left half-plane.
Proof. See Problem 2.3.
The number of eigenvalues in more complicated regions of the complex plane can
be counted by suitable sequences of matrix sign evaluations. For example, assuming
that A has no eigenvalues lying on the edges of the relevant regions:
• The number of eigenvalues of A lying in the vertical strip Re z ∈ (ξ_1, ξ_2) is (1/2) trace(sign(A − ξ_1 I) − sign(A − ξ_2 I)) (see the sketch below).
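A sketch of this eigenvalue count in Python (invented matrix and strip; SciPy's signm is assumed):

    import numpy as np
    from scipy.linalg import signm

    rng = np.random.default_rng(4)
    n = 8
    A = rng.standard_normal((n, n))
    xi1, xi2 = -0.5, 1.0
    I = np.eye(n)

    count = 0.5 * np.trace(signm(A - xi1 * I) - signm(A - xi2 * I))
    ev = np.linalg.eigvals(A)
    exact = np.sum((ev.real > xi1) & (ev.real < xi2))
    print(round(float(count.real)), int(exact))     # the two counts agree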
min{ ‖A − Q‖ : Q*Q = I }.
where Y (t) ∈ Rm×n , m ≥ n, and Y (t)T Y (t) = I for all t > 0; see Hairer, Lubich, and
Wanner [239, , Sec. 4.4], D. J. Higham [262, ], and Sofroniou and Spaletta
[534, ].
In quantum chemistry orthogonalization using the unitary polar factor is called
Löwdin orthogonalization; see Bhatia and Mukherjea [67, ], Goldstein and Levy
[221, ], and Jansik et al. [318, ]. An application of the polar decomposi-
tion to determining the orientation of “parallel spherical wrist” robots is described
by Vertechy and Parenti-Castelli [601, ]. Moakher [434, ] shows that a cer-
tain geometric mean of a set of rotation matrices can be expressed in terms of the
orthogonal polar factor of their arithmetic mean.
The polar decomposition is also used in computer graphics as a convenient way
of decomposing a 3 × 3 or 4 × 4 linear transformation into simpler component parts
(see Shoemake and Duff [520, ]) and in continuum mechanics for representing
the deformation gradient as the product of a rotation tensor and a stretch tensor (see
Bouby, Fortuné, Pietraszkiewicz, and Vallée [78, ]).
The orthogonal Procrustes problem is to solve

    min{ ‖A − BQ‖_F : Q ∈ C^{n×n},  Q*Q = I },                               (2.17)

where A, B ∈ C^{m×n}; thus a unitary matrix is required that most nearly transforms a rectangular matrix B into a matrix A of the same dimensions in a least squares sense.
A solution is given by the unitary polar factor of B ∗A; see Theorem 8.6. The orthogo-
nal Procrustes problem is a well-known and important problem in factor analysis and
in multidimensional scaling in statistics; see the books by Gower and Dijksterhuis
[226, ] and Cox and Cox [119, ]. In these applications the matrices A and
B represent sets of experimental data, or multivariate samples, and it is necessary to
determine whether the sets are equivalent up to rotation. An important variation of
(2.17) requires Q to be a “pure rotation”, that is, det(Q) = 1; one application area
is shape analysis, as discussed by Dryden and Mardia [168, ]. Many other varia-
tions of the orthogonal Procrustes problem exist, including those involving two-sided
transformations, permutation transformations, and symmetric transformations, but
the solutions have weaker connections with the polar decomposition and with matrix
functions.
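A minimal Python sketch of the Procrustes solution via the unitary polar factor of B*A, computed here from an SVD (the data are invented):

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 7, 3
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((m, n))

    U, s, Vh = np.linalg.svd(B.conj().T @ A)
    Q = U @ Vh                                  # unitary polar factor of B*A

    best = np.linalg.norm(A - B @ Q, 'fro')
    for _ in range(100):                        # no random orthogonal Z should do better
        Z, _ = np.linalg.qr(rng.standard_normal((n, n)))
        assert np.linalg.norm(A - B @ Z, 'fro') >= best - 1e-12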
The polar decomposition and the orthogonal Procrustes problem both arise in
numerical methods for computing analytic singular value decompositions, as explained
by Mehrmann and Rath [419, ].
where again any square root can be taken, and again this expression is easily verified to satisfy (2.19). Another expression for X is as the (1,2) block of sign([0, B; A, 0]), which
follows from the sign-based solution of (2.13) and also from Theorem 5.2. As this
example illustrates, there may be several ways to express the solutions to a matrix
equation in terms of matrix functions.
If A and B are Hermitian positive definite then there is a unique Hermitian positive
definite solution to (2.19), given by any of the expressions above, where the Hermitian
positive definite square root is always taken. The uniqueness follows from writing
(2.19) as Y 2 = C, where Y = A1/2 XA1/2 and C = A1/2 BA1/2 and using the fact that
a Hermitian positive definite matrix has a unique Hermitian positive definite square
root (Corollary 1.30). (Note that this approach leads directly to (2.21).) In this case
formulae that are more computationally efficient than those above are available; see
Problem 2.7 and Algorithm 6.22.
An equation that generalizes the scalar quadratic in a different way to the algebraic
Riccati equation is the quadratic matrix equation
    AX^2 + BX + C = 0,   A, B, C ∈ C^{n×n}.                                  (2.22)

Closely related is the quadratic eigenvalue problem

    (λ^2 A + λB + C)x = 0,                                                   (2.23)

which arises in the analysis of damped structural systems and vibration problems
[369, ], [570, ]. The standard approach is to reduce (2.23) to a generalized
eigenproblem (GEP) Gx = λHx of twice the dimension, 2n. This “linearized” prob-
lem can be further converted to a standard eigenvalue problem of dimension 2n under
suitable nonsingularity conditions on the coefficients A, B, and C. However, if we
can find a solution X of the associated quadratic matrix equation (2.22) then we can
write
    λ^2 A + λB + C = −(B + AX + λA)(X − λI),                                 (2.24)
and so the eigenvalues of (2.23) are those of X together with those of the GEP
(B + AX)x = −λAx, both of which are n × n problems. Bridges and Morris [82,
] employ this approach in the solution of differential eigenproblems.
For a less obvious example of where a matrix function arises in a nonlinear matrix equation, consider the problem of finding an orthogonal Q ∈ R^{n×n} such that

    Q − Q^T = S,                                                             (2.25)

which arises in the analysis of the dynamics of a rigid body [94, ] and in the solution of algebraic Riccati equations [309, ]. The equation can be rewritten Q − Q^{−1} = S, which implies both Q^2 − I = QS and Q^2 − I = SQ. Hence Q^2 − (1/2)(QS + SQ) − I = 0, or (Q − (1/2)S)^2 = I + S^2/4. Thus Q is of the form Q = (1/2)S + (I + S^2/4)^{1/2} for some square root. Using the theory of matrix square roots the equation (2.25) can then be fully analyzed.
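A quick Python check of the formula for Q (the skew-symmetric S is invented and scaled so that the principal square root is real):

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(6)
    n = 4
    Z = rng.standard_normal((n, n))
    S = Z - Z.T
    S = 1.5 * S / np.linalg.norm(S, 2)          # ensure ||S||_2 < 2 so that I + S^2/4 is definite
    I = np.eye(n)

    Q = 0.5 * S + sqrtm(I + S @ S / 4)
    print(np.linalg.norm(Q.T @ Q - I))          # Q is orthogonal
    print(np.linalg.norm(Q - Q.T - S))          # and satisfies Q - Q^T = S, i.e., (2.25)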
There is little in the way of numerical methods for solving general nonlinear matrix
equations f (X) = A, other than Newton’s method. By using the Jordan canonical
form it is usually possible to determine and classify all solutions (as we did in Sec-
tion 1.6 for the square root and logarithm), but this approach is usually not feasible
computationally; see Horn and Johnson [296, , Cor. 6.2.12, Sec. 6.4] for details.
Finally, we note that nonlinear matrix equations provide useful test problems for
optimization and nonlinear least squares solvers, especially when a reference solution
can be computed by matrix function techniques. Some matrix square root problems
are included in the test collection maintained by Fraley on Netlib [191] and in the
CUTEr collection [225, ].
    X = B^{1/2}(B^{−1/2}AB^{−1/2})^{1/2}B^{1/2} = B(B^{−1}A)^{1/2} = (AB^{−1})^{1/2}B,       (2.26)

where the last equality can be seen using Corollary 1.34. The geometric mean has the properties (see Problem 2.5)

    A # A = A,                                                               (2.27a)
    (A # B)^{−1} = A^{−1} # B^{−1},                                          (2.27b)
    A # B = B # A,                                                           (2.27c)
    A # B ≤ (1/2)(A + B),                                                    (2.27d)

all of which generalize properties of the scalar geometric mean a # b = √(ab). Here,
X ≥ 0 denotes that the Hermitian matrix X is positive semidefinite; see Section B.12.
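A short Python sketch of (2.26) for invented Hermitian positive definite A and B, checking the symmetry property (2.27c) and the defining equation X A^{−1} X = B:

    import numpy as np
    from scipy.linalg import sqrtm

    def geomean(A, B):
        # A # B = B^{1/2} (B^{-1/2} A B^{-1/2})^{1/2} B^{1/2}, principal square roots throughout.
        Bh = sqrtm(B)
        Bhi = np.linalg.inv(Bh)
        return Bh @ sqrtm(Bhi @ A @ Bhi) @ Bh

    rng = np.random.default_rng(7)
    n = 4
    X = rng.standard_normal((n, n)); A = X @ X.T + np.eye(n)
    Y = rng.standard_normal((n, n)); B = Y @ Y.T + np.eye(n)

    G = geomean(A, B)
    print(np.linalg.norm(G - geomean(B, A)))                    # (2.27c): A # B = B # A
    print(np.linalg.norm(G @ np.linalg.inv(A) @ G - B))         # G solves X A^{-1} X = B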
The geometric mean yields the solution to more general equations than XA−1 X =
B. For example, if A and B are Hermitian positive definite then the unique Hermitian
positive definite solution to XA^{−1}X ± X − B = 0 is X = (1/2)(∓A + A # (A + 4B)) [391, , Thm. 3.1].
Another definition of geometric mean of Hermitian positive definite matrices A
and B is
    E(A, B) = exp((1/2)(log(A) + log(B))),                                   (2.29)
where log is the principal logarithm. This is called the log-Euclidean mean by Arsigny,
Fillard, Pennec, and Ayache [20, ], who investigate its properties.
2.11. Pseudospectra
Pseudospectra are not so much an application of matrix functions as objects with
intimate connections to them. The ε-pseudospectrum of A ∈ C^{n×n} is defined, for a given ε > 0 and a subordinate matrix norm, to be the set

    Λ_ε(A) = { z ∈ C : ‖(zI − A)^{−1}‖ > ε^{−1} }.
The resolvent therefore provides a link between pseudospectra and matrix functions,
through Definition 1.11.
Pseudospectra provide a means for judging the sensitivity of the eigenvalues of a
matrix to perturbations in the matrix elements. For example, the 0.01-pseudospectrum
indicates the uncertainty in the eigenvalues if the elements are known to only two dec-
imal places. More generally, pseudospectra have much to say about the behaviour of
a matrix. They provide a way of describing the effects of nonnormality on processes
such as matrix powering and exponentiation.
Matrix functions and pseudospectra have several features in common: they are
applicable in a wide variety of situations, they are “uninteresting” for normal matrices,
and they are nontrivial to compute.
For more on pseudospectra we can do no better than refer the reader to the
ultimate reference on the subject: Trefethen and Embree [573, ].
2.12. Algebras
While this book is concerned with matrices over the real or complex fields, some of the
results and algorithms are applicable to more general algebras, and thereby provide
tools for working with these algebras. For example, the GluCat library [217] is a
generic library of C++ templates that implements universal Clifford algebras over
the real and complex fields. It includes algorithms for the exponential, logarithm,
square root, and trigonometric functions, all based on algorithms for matrices.
Figure 2.1. The dots are the pth roots of unity and the lines the sector boundaries, illustrated
for p = 2: 5. The scalar sector function sectp (z) maps z ∈ C to the nearest pth root of unity.
p = 2 gives the sign function.
can be defined via the Jordan canonical form, in an analogous way as for the sign
function in Section 2.4, but now mapping each eigenvalue to the nearest pth root
of unity. For p = 2, the matrix sign function is obtained. More precisely, for A ∈
Cn×n having no eigenvalues with argument (2k + 1)π/p, k = 0: p − 1, the matrix
p-sector function can be defined by sectp (A) = A(Ap )−1/p (where the principal pth
root is taken; see Theorem 7.2). Figure 2.1 illustrates the scalar sector function. The
sector function has attracted interest in the control theory literature because it can
be used to determine the number of eigenvalues in a specific sector and to obtain the
corresponding invariant subspace [358, ]. However, a good numerical method for
computing the matrix sector function is currently lacking.
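Despite the lack of a dedicated algorithm, the definition itself is easy to evaluate for small examples. A Python sketch, assuming SciPy's fractional_matrix_power for the principal pth root and an invented complex A whose eigenvalues avoid the sector boundaries:

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    rng = np.random.default_rng(8)
    n, p = 5, 3
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    S = A @ fractional_matrix_power(np.linalg.matrix_power(A, p), -1.0 / p)   # sect_p(A)

    print(np.linalg.norm(np.linalg.matrix_power(S, p) - np.eye(n)))   # S^p = I
    print(np.abs(np.linalg.eigvals(S)))                               # eigenvalues are pth roots of unity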
Matrix nearness problems ask for the distance from a given matrix to the nearest
matrix with a certain property, and for that nearest matrix. The nearest unitary
matrix problem mentioned in Section 2.6 is of this type. The use of a Bregman
divergence in place of a matrix norm is proposed by Dhillon and Tropp [150, ].
The Bregman divergence of X ∈ C^{n×n} from Y ∈ C^{n×n} is defined by D(X, Y) = ψ(X) − ψ(Y) − ⟨∇ψ(Y), X − Y⟩, where ψ : C^{n×n} → R_+ is strictly convex and ⟨·,·⟩ is an inner product. A particular instance applying to Hermitian positive definite matrices is the von Neumann divergence D(X, Y) = trace(X(log(X) − log(Y)) − X + Y), the use of which leads to the need to evaluate expressions such as exp(log(Y) + W). Thus
Bregman divergences provide another application of matrix functions.
where
y(t) = g(t), −1 ≤ t ≤ 0,
for a given function g. If we look for solutions y(t) = exp(tS)c for some con-
stant c ∈ Cn then we are led to the matrix equation S exp(S) = A, and hence
to S = W_k(A). The general solution to the problem can then be expressed as y(t) = Σ_{k=−∞}^{∞} e^{W_k(A)t} c_k, where the vectors c_k are determined by g.
For more details of the scalar Lambert W function see Corless, Gonnet, Hare,
Jeffrey, and Knuth [115, ]. The matrix Lambert W function is analyzed by
Corless, Ding, Higham, and Jeffrey [114, ], who show that as a primary matrix
function it does not yield all solutions of S exp(S) = A, which is analogous to the
fact that the primary matrix logarithm does not provide all solutions of eX = A
(Theorem 1.28). For the application to delay differential equation see Jarlebring and
Damm [320, ] and the references therein, and Heffernan and Corless [256, ].
A good reference on the numerical solution of delay differential equations is Bellen
and Zennaro [50, ].
The idea of using the matrix sign function to compute eigensystems in divide and
conquer fashion was first investigated by Denman and Beavers in the early 1970s;
see [146, ] and the references therein. However, in these early papers nonunitary
transformations are used, in contrast to the approach in Section 2.5.
Early references for (2.19) and formulae of the form (2.20) are Frobenius [195,
], Baker [40, ], and Turnbull and Aitken [579, , p. 152].
For more on geometric means of positive definite matrices see Bhatia [65, ,
Chap. 4], Lawson and Lim [377, ], Ando, Li, and Mathias [15, ], and Moakher
[435, ]. Proofs of (2.28) can be found in Bhatia [65, , Thm. 4.1.3] and Ando
[13, , Thm. 2], [14, , Thm. 2.8]. The geometric mean (2.26) appears to have
been first introduced by Pusz and Woronowicz [482, ].
The factorization (2.24) and other properties of matrix polynomials are treated
by Davis [140, ], Dennis, Traub, and Weber [147, ], Gohberg, Lancaster,
and Rodman [220, ], Lancaster [369, ], and Lancaster and Tismenetsky [371,
].
Problems
Though mathematics is much easier to watch than do,
it is a most unrewarding spectator sport.
— CHARLES G. CULLEN, Matrices and Linear Transformations (1972)
2.1. Derive the formula (2.3) for the solution of problem (2.2).
2.2. Reconcile the fact that the initial value problem (2.7) has a unique solution with the observation that cos(√A t)y_0 + (√A)^{−1} sin(√A t)y_0′ is a solution for any square root √A of A.
2.3. Prove Theorem 2.1.
2.4. Show that if the Hermitian positive definite matrices A and B commute then
the geometric means # in (2.26) and E in (2.29) are given by A # B = A^{1/2}B^{1/2} = E(A, B).
2.5. Prove the relations (2.27) satisfied by the geometric mean #.
2.6. (Bhatia [65, , p. 111]) Show that for Hermitian positive definite A, B ∈ C^{2×2},

    A # B = (√(αβ) / √(det(α^{−1}A + β^{−1}B))) (α^{−1}A + β^{−1}B),
In practice most data are inexact or uncertain. Computations with exact data are
subject to rounding errors, and the rounding errors in an algorithm can often be
interpreted as being equivalent to perturbations in the data, through the process
of backward error analysis. Therefore whether the data are exact or inexact, it is
important to understand the sensitivity of matrix functions to perturbations in the
data. Sensitivity is measured by condition numbers. This chapter is concerned with
defining appropriate condition numbers and showing how to evaluate or estimate
them efficiently. The condition numbers can be expressed in terms of the norm of
the Fréchet derivative, so we investigate in some detail the properties of the Fréchet
derivative.
which measures by how much, at most, small changes in the data can be magnified in
the function value, when both changes are measured in a relative sense. Assuming for
simplicity that f is twice continuously differentiable, f (x + ∆x) − f (x) = f ′ (x)∆x +
o(∆x), which can be rewritten as
    (f(x + Δx) − f(x))/f(x) = (xf′(x)/f(x)) (Δx/x) + o(Δx).

(Recall that h = o(ε) means that ‖h‖/ε → 0 as ε → 0.) It is then immediate that

    cond_rel(f, x) = |xf′(x)/f(x)|.                                          (3.1)
    cond_rel(f, X) := lim_{ε→0} sup_{‖E‖≤ε‖X‖} ‖f(X + E) − f(X)‖ / (ε‖f(X)‖),       (3.2)
4 The definition is applicable to arbitrary functions f . Most of the results of Section 3.2 onwards
assume f is a primary matrix function as defined in Chapter 1. We will use the condition number
(3.2) for a more general f in Section 8.2.
where the norm is any matrix norm. This definition implies that
    ‖f(X + E) − f(X)‖/‖f(X)‖ ≤ cond_rel(f, X) ‖E‖/‖X‖ + o(‖E‖),              (3.3)
and so provides an approximate perturbation bound for small perturbations E.
Some care is needed in interpreting (3.3) for functions not defined throughout
Cn×n . The definition (3.2) is clearly valid as long as f is defined in a neighbourhood
of X. The bound (3.3) is therefore valid for X + E in that neighbourhood. An
example is given in Section 5.1 that shows how blindly invoking (3.3) can lead to a
patently incorrect bound.
A corresponding absolute condition number, in which the change in the data and
the function are measured in an absolute sense, is defined by
    cond_abs(f, X) := lim_{ε→0} sup_{‖E‖≤ε} ‖f(X + E) − f(X)‖ / ε.           (3.4)
Note that
    cond_rel(f, X) = cond_abs(f, X) ‖X‖/‖f(X)‖,                              (3.5)
so the two condition numbers differ by just a constant factor. Usually, it is the relative
condition number that is of interest, but it is more convenient to state results for the
absolute condition number.
To obtain explicit expressions analogous to (3.1) we need an appropriate notion
of derivative for matrix functions. The Fréchet derivative of a matrix function f :
C^{n×n} → C^{n×n} at a point X ∈ C^{n×n} is a linear mapping L : C^{n×n} → C^{n×n}, E ↦ L(X, E), such that

    f(X + E) − f(X) − L(X, E) = o(‖E‖)                                       (3.6)

for all E ∈ C^{n×n}. The Fréchet derivative may not exist, but if it does it is unique (see Problem 3.3).
The notation L(X, E) can be read as “the Fréchet derivative of f at X in the direction
E ”, or “the Fréchet derivative of f at X applied to the matrix E”. If we need to show
the dependence on f we will write Lf (X, E). When we want to refer to the mapping
at X and not its value in a particular direction we will write L(X). In the case n = 1
we have, trivially, L(x, e) = f ′ (x)e, and more generally if X and E commute then
L(X, E) = f ′ (X)E = Ef ′ (X) (see Problem 3.8).
The absolute and relative condition numbers can be expressed in terms of the
norm of L(X), which is defined by
    ‖L(X)‖ := max_{Z≠0} ‖L(X, Z)‖/‖Z‖.                                       (3.7)
Theorem 3.1 (Rice). The absolute and relative condition numbers are given by

    cond_abs(f, X) = ‖L(X)‖,                                                 (3.8)
    cond_rel(f, X) = ‖L(X)‖ ‖X‖ / ‖f(X)‖.                                    (3.9)
Proof. In view of (3.5), it suffices to prove (3.8). From (3.4) and (3.6), and using
the linearity of L, we have
    cond_abs(f, X) = lim_{ε→0} sup_{‖E‖≤ε} ‖f(X + E) − f(X)‖/ε
                   = lim_{ε→0} sup_{‖E‖≤ε} ‖L(X, E) + o(‖E‖)‖/ε
                   = lim_{ε→0} sup_{‖E‖≤ε} ‖L(X, E/ε) + o(‖E‖)/ε‖.
Finally, the sup can be replaced by a max, since we are working on a finite dimensional
vector space, and the maximum is attained with kZk = 1.
To illustrate, for f(X) = X^2 we have f(X + E) − f(X) = XE + EX + E^2, so

    L(X, E) = XE + EX.                                                       (3.10)
Proof. We have

    f(A + E) = g(A + E)h(A + E)
             = (g(A) + L_g(A, E) + o(‖E‖)) (h(A) + L_h(A, E) + o(‖E‖))
             = f(A) + L_g(A, E)h(A) + g(A)L_h(A, E) + o(‖E‖).
Theorem 3.4 (chain rule). Let h and g be Fréchet differentiable at A and h(A), respectively, and let f = g ∘ h (i.e., f(A) = g(h(A))). Then f is Fréchet differentiable at A and L_f = L_g ∘ L_h, that is, L_f(A, E) = L_g(h(A), L_h(A, E)).
Proof.
The following theorem says that the Fréchet derivative of the inverse function is
the inverse of the Fréchet derivative of the function.
Theorem 3.5 (derivative of inverse function). Let f and f^{−1} both exist and be continuous in an open neighbourhood of X and f(X), respectively, and assume L_f exists and is nonsingular at X. Then L_{f^{−1}} exists at Y = f(X) and L_{f^{−1}}(Y, E) = L_f^{−1}(X, E), or equivalently, L_f(X, L_{f^{−1}}(Y, E)) = E. Hence ‖L_{f^{−1}}(Y)‖ = ‖L_f(X)^{−1}‖.
Proof. For the existence see Dieudonné [159, , Thm. 8.2.3]. The formulae can be obtained by applying the chain rule to the relation f(f^{−1}(Y)) = Y, which gives the equality L_f(X, L_{f^{−1}}(Y, E)) = E.
To illustrate Theorem 3.5 we take f(x) = x^2 and f^{−1}(x) = x^{1/2}. The theorem says that L_{x^2}(X, L_{x^{1/2}}(X^2, E)) = E, i.e., using (3.10), X L_{x^{1/2}}(X^2, E) + L_{x^{1/2}}(X^2, E) X = E. In other words, L = L_{x^{1/2}}(A, E) is the solution of the Sylvester equation A^{1/2}L + LA^{1/2} = E.
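This characterization can be checked numerically; the Python sketch below (invented positive definite A and direction E) compares the Sylvester-equation solution with a simple finite difference quotient.

    import numpy as np
    from scipy.linalg import sqrtm, solve_sylvester

    rng = np.random.default_rng(9)
    n = 4
    X = rng.standard_normal((n, n)); A = X @ X.T + n * np.eye(n)    # positive definite
    E = rng.standard_normal((n, n))

    Ah = sqrtm(A)
    L = solve_sylvester(Ah, Ah, E)            # A^{1/2} L + L A^{1/2} = E

    t = 1e-7
    L_fd = (sqrtm(A + t * E) - Ah) / t        # finite difference approximation
    print(np.linalg.norm(L - L_fd) / np.linalg.norm(L))     # agree to several digits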
The following theorem will also be very useful. For the rest of this section D
denotes an open subset of R or C.
Then

    f([A(0), (A(ε) − A(0))/ε; 0, A(ε)])
        = U f(U^{−1} [A(0), (A(ε) − A(0))/ε; 0, A(ε)] U) U^{−1}
        = U f([A(0), 0; 0, A(ε)]) U^{−1}
        = U [f(A(0)), 0; 0, f(A(ε))] U^{−1}
        = [f(A(0)), (f(A(ε)) − f(A(0)))/ε; 0, f(A(ε))].
    (d/dt)|_{t=0} f(A(t)) = (d/dt)|_{t=0} p_{A⊕A}(A(t)),                     (3.14)

where p_{A⊕A} interpolates f and its derivatives at the zeros of the characteristic polynomial of A ⊕ A ≡ diag(A, A), that is,

    p_{A⊕A}^{(j)}(λ_i) = f^{(j)}(λ_i),   j = 0: 2r_i − 1,   i = 1: s,        (3.15)
Proof. Define

    B = [A, A′(0); 0, A].

Theorem 3.6 shows that

    (d/dt)|_{t=0} f(A(t)) = [f(B)]_{12} = [p(B)]_{12} = (d/dt)|_{t=0} p(A(t)),

where the second equality holds for any polynomial p that takes the same values as f on the spectrum of B. By its definition (3.15), p_{A⊕A} is such a polynomial. (Note that
pA⊕A may satisfy more interpolation conditions than are required in order to take
the same values as f on the spectrum of B. The polynomial pA⊕A is essentially an
“overestimate” that has the required properties and can be defined without knowledge
of the Jordan structure of B; see Remark 1.5.)
With the aid of the previous two results we can now identify sufficient conditions
on f for the Fréchet derivative to exist and be continuous.
Theorem 3.8 (existence and continuity of Fréchet derivative). Let f be 2n−1 times
continuously differentiable on D. For X ∈ Cn×n with spectrum in D the Fréchet
derivative L(X, E) exists and is continuous in the variables X and E.
The significance of this formula is that it converts the problem of evaluating the
Fréchet derivative in a particular direction to that of computing a single matrix
function—albeit for a matrix of twice the dimension. This is useful both in theory
and in practice.
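For f = exp, the block evaluation can be sketched in a few lines of Python (invented X and E; SciPy's expm is assumed): the Fréchet derivative appears as the (1,2) block of f applied to the block upper triangular matrix built from X and E.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(10)
    n = 4
    X = rng.standard_normal((n, n))
    E = rng.standard_normal((n, n))

    Big = np.block([[X, E], [np.zeros((n, n)), X]])
    L = expm(Big)[:n, n:]                     # L_exp(X, E)

    t = 1e-8
    L_fd = (expm(X + t * E) - expm(X)) / t    # crude finite difference check
    print(np.linalg.norm(L - L_fd) / np.linalg.norm(L))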
We now find the eigenvalues of the Fréchet derivative. An eigenpair (λ, V ) of L(X)
comprises a scalar λ, the eigenvalue, and a nonzero matrix V ∈ Cn×n , the eigenvector,
such that L(X, V ) = λV .
Since L is a linear operator
f [λi , λj ], i, j = 1: n,
where the λi are the eigenvalues of X. If ui and vj are nonzero vectors such that
Xui = λi ui and vjT X = λj vjT , then ui vjT is an eigenvector of L(X) corresponding to
f [λi , λj ].
Proof. Suppose, first, that f is a polynomial: f(t) = Σ_{k=0}^{m} a_k t^k. Then (see the more general Problem 3.6)

    L(X, E) = Σ_{k=1}^{m} a_k Σ_{j=1}^{k} X^{j−1} E X^{k−j}.
eigenvalues of L(X) have been accounted for. Note that Theorem 3.9 does not nec-
essarily identify all the eigenvectors of L(X); see Problem 3.11.
Theorem 3.9 enables us to deduce when the Fréchet derivative is nonsingular.
The next result shows that the Fréchet derivative in any direction at a diagonal
matrix is formed simply by Hadamard multiplication by the matrix of divided dif-
ferences of the eigenvalues. Here, ◦ denotes the Hadamard (or Schur) product of
A, B ∈ Cn×n : A ◦ B = (aij bij ).
and using the fact that f (A) commutes with A (Theorem 1.13 (a)) we obtain
    [f(D), L(D, E); 0, f(D)] [D, E; 0, D] = [D, E; 0, D] [f(D), L(D, E); 0, f(D)].

If the λ_i are distinct then the latter equation immediately gives (3.18). If the λ_i are not distinct then consider D + diag(1, 2, . . . , n)ε, which has distinct eigenvalues for ε sufficiently small and positive, and for which (3.18) therefore holds. Letting ε → 0, since L(D, E) is continuous in D and E by Theorem 3.8, (3.18) holds for D by continuity.
The final result is an analogue for the Fréchet derivative of the fact that the 2-norm
of a diagonal matrix equals its spectral radius.
Corollary 3.13. Under the conditions of Theorem 3.11, ‖L(D)‖_F = max_{i,j} |f[λ_i, λ_j]|.
Note that this is essentially the standard result (B.8) that no eigenvalue of a matrix
can exceed any norm of the matrix.
Proof. From (3.8), cond_abs(f, X) = ‖L(X)‖. The bound of the theorem is obtained by maximizing over all the eigenvalues, using (3.19) and Theorem 3.9.
By specializing to the Frobenius norm we can obtain an upper bound for the con-
dition number. Here we need the matrix condition number with respect to inversion,
κ(Z) = kZkkZ −1 k for Z ∈ Cn×n .
Proof. By Corollary 3.12 we have L(X, E) = Z L(D, Ẽ) Z^{−1}, where Ẽ = Z^{−1}EZ. Hence, using (B.7),

    ‖L(X, E)‖_F ≤ κ_2(Z)‖L(D, Ẽ)‖_F ≤ κ_2(Z)‖L(D)‖_F ‖Ẽ‖_F ≤ κ_2(Z)^2 ‖L(D)‖_F ‖E‖_F.
Corollary 3.16. Let X ∈ Cn×n be normal. Then, for the Frobenius norm,
Given the ability to compute L(X, E) we can therefore compute the condition number
exactly in the Frobenius norm by explicitly forming K(X).
Algorithm 3.17 (exact condition number). Given X ∈ C^{n×n} and a function f and its Fréchet derivative this algorithm computes cond_rel(f, X) in the Frobenius norm.
1   for j = 1: n
2       for i = 1: n
3           Compute Y = L(X, e_i e_j^T).
4           K(: , (j − 1)n + i) = vec(Y)
5       end
6   end
7   cond_rel(f, X) = ‖K‖_2 ‖X‖_F / ‖f(X)‖_F
Cost: O(n^5) flops, assuming f(X) and L(X, E) cost O(n^3) flops.
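A direct Python rendition of Algorithm 3.17 for f = exp, with the Fréchet derivative evaluated via the block 2 × 2 formula used above (invented test matrix):

    import numpy as np
    from scipy.linalg import expm

    def frechet_exp(X, E):
        n = X.shape[0]
        Big = np.block([[X, E], [np.zeros((n, n)), X]])
        return expm(Big)[:n, n:]

    def cond_exp(X):
        n = X.shape[0]
        K = np.empty((n * n, n * n))
        for j in range(n):
            for i in range(n):
                Eij = np.zeros((n, n)); Eij[i, j] = 1.0
                K[:, j * n + i] = frechet_exp(X, Eij).flatten(order='F')   # vec(L(X, e_i e_j^T))
        return np.linalg.norm(K, 2) * np.linalg.norm(X, 'fro') / np.linalg.norm(expm(X), 'fro')

    X = np.random.default_rng(11).standard_normal((5, 5))
    print(cond_exp(X))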
For large n, Algorithm 3.17 is prohibitively expensive and so the condition number
must be estimated rather than computed exactly. In practice, what is needed is an
estimate that is of the correct order of magnitude—more than one correct significant
digit is not needed.
No analogous equalities to (3.20) hold for the 1-norm, but we can bound the ratio
of the 1-norms of L(X) and K(X).
    ‖L(X)‖_1 / n ≤ ‖K(X)‖_1 ≤ n‖L(X)‖_1.
Proof. For E ∈ C^{n×n} we have ‖E‖_1 ≤ ‖vec(E)‖_1 ≤ n‖E‖_1 (with equality on the left for E = ee_1^T and on the right for E = ee^T). Hence, using (3.17),
Algorithm 3.19 (power method). Given A ∈ C^{n×n} this algorithm uses the power method applied to A*A to produce an estimate γ ≤ ‖A‖_2.
Now

    ‖w_{k+1}‖_2^2 = w_{k+1}^* w_{k+1} = z_k^* A^* w_{k+1} = z_k^* z_{k+1},

so ‖w_{k+1}‖_2^2 ≤ ‖z_k‖_2 ‖z_{k+1}‖_2, which implies
Hence γ in Algorithm 3.19 is the best estimate obtainable from the lower bounds in
(3.21).
Algorithm 3.20 (power method on Fréchet derivative). Given X ∈ Cn×n and the
Fréchet derivative L of a function f , this algorithm uses the power method to produce
an estimate γ ≤ ‖L(X)‖_F.
where itmax is the maximum allowed number of iterations and tol is a relative convergence tolerance. Since an estimate of just the correct order of magnitude is required, tol = 10^{−1} or 10^{−2} may be suitable. However, since linear convergence can be arbitrarily slow it is difficult to construct a truly reliable convergence test.
An alternative to the power method for estimating the largest eigenvalue of a
Hermitian matrix is the Lanczos algorithm. Mathias [408, ] gives a Lanczos-
based analogue of Algorithm 3.20. He shows that the Lanczos approach generates
estimates at least as good as those from Algorithm 3.20 at similar cost, but notes
that for obtaining order of magnitude estimates the power method is about as good
as Lanczos.
Turning to the 1-norm, we need the following algorithm.
Algorithm 3.21 (LAPACK matrix norm estimator). Given A ∈ C^{n×n} this algorithm computes γ and v = Aw such that γ ≤ ‖A‖_1 with ‖v‖_1/‖w‖_1 = γ (w is not returned). For z ∈ C, sign(z) = z/|z| if z ≠ 0 and sign(0) = 1 (note that this definition of sign differs from that used in Chapter 5).
1   v = A(n^{−1}e)
2   if n = 1, quit with γ = |v_1|, end
3   γ = ‖v‖_1
4   ξ = sign(v)
5   x = A*ξ
6   k = 2
7   repeat
8       j = min{ i: |x_i| = ‖x‖_∞ }
9       v = Ae_j
10      γ̄ = γ
11      γ = ‖v‖_1
12      if (A is real and sign(v) = ±ξ) or γ ≤ γ̄, goto line 17, end
13      ξ = sign(v)
14      x = A*ξ
15      k = k + 1
16  until (‖x‖_∞ = x_j or k > 5)
17  x_i = (−1)^{i+1}(1 + (i − 1)/(n − 1)), i = 1: n
18  x = Ax
19  if 2‖x‖_1/(3n) > γ then
20      v = x
21      γ = 2‖x‖_1/(3n)
22  end
Algorithm 3.21 is the basis of all the condition number estimation in LAPACK
and is used by MATLAB’s rcond function. MATLAB’s normest1 implements a block
generalization of Algorithm 3.21 due to Higham and Tisseur [288, ] that iterates
with an n × t matrix where t ≥ 1; for t = 1, Algorithm 3.21 (without lines 17–22) is
recovered.
Key properties of Algorithm 3.21 are that it typically requires 4 or 5 matrix–
vector products, it frequently produces an exact estimate (γ = ‖A‖_1), it can produce
an arbitrarily poor estimate on specially constructed “counterexamples”, but it almost
invariably produces an estimate correct to within a factor 3. Thus the algorithm is a
very reliable means of estimating kAk1 .
We can apply Algorithm 3.21 with A = K(X) and thereby estimate ‖L(X)‖_1.
Advantages of Algorithm 3.22 over Algorithm 3.20 are a “built-in” starting matrix
and convergence test and a more predictable number of iterations.
Algorithms 3.20 and 3.22 both require two Fréchet derivative evaluations per iter-
ation. One possibility is to approximate these derivatives by finite differences, using,
from (3.11),

    L(X, E) ≈ (f(X + tE) − f(X))/t =: Δf(X, t, E)                            (3.22)

for a small value of t. The choice of t is a delicate matter, more so than for scalar finite difference approximations, because the effect of rounding errors on the evaluation of f(X + tE) is more difficult to predict. A rough guide to the choice of t can be developed by balancing the truncation and rounding errors. For a sufficiently smooth f, (3.6) implies f(X + tE) − f(X) − L(X, tE) = O(t^2‖E‖^2). Hence the truncation error Δf(X, t, E) − L(X, E) can be estimated by t‖E‖^2. For the evaluation of f(X) we have at best fl(f(X)) = f(X) + E, where ‖E‖ ≤ u‖f(X)‖. Hence

    ‖fl(Δf(X, t, E)) − Δf(X, t, E)‖ ≤ u(‖f(X + tE)‖ + ‖f(X)‖)/t ≈ 2u‖f(X)‖/t.
Figure 3.1. Relative errors (Frobenius norm, plotted against t on log–log axes) for the finite difference approximation (3.22) with f(X) = e^X, X and E random matrices from the normal (0, 1) distribution, and 250 different t. The dotted line is u^{1/2} and the circle on the x-axis denotes t_opt in (3.23).
The error due to rounding is therefore estimated by u‖f(X)‖/t. The natural choice of t is that which minimizes the maximum of the error estimates, that is, t for which t‖E‖^2 = u‖f(X)‖/t, or

    t_opt = (u‖f(X)‖/‖E‖^2)^{1/2}.                                           (3.23)

The minimum is u^{1/2}‖f(X)‖^{1/2}‖E‖. In practice, t_opt is usually fairly close to minimizing the overall error. Figure 3.1 shows a typical example.
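In Python the recipe looks as follows (f = exp, invented X and E, u the unit roundoff of IEEE double precision; the reference value comes from the block formula mentioned earlier):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(12)
    n = 6
    X = rng.standard_normal((n, n))
    E = rng.standard_normal((n, n)); E /= np.linalg.norm(E, 'fro')

    u = 2.0 ** -53
    t_opt = np.sqrt(u * np.linalg.norm(expm(X), 'fro')) / np.linalg.norm(E, 'fro')   # (3.23)
    L_approx = (expm(X + t_opt * E) - expm(X)) / t_opt                               # (3.22)

    Big = np.block([[X, E], [np.zeros((n, n)), X]])
    L_ref = expm(Big)[:n, n:]
    print(np.linalg.norm(L_approx - L_ref, 'fro') / np.linalg.norm(L_ref, 'fro'))    # roughly u^{1/2}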
For methods for computing f (X) that employ a Schur decomposition, X = QT Q∗ ,
an efficient way to obtain the function evaluation f (X + G) in (3.22), where G ≡ tE,
has been suggested by Mathias [411, ]. The idea is to write f(X + G) = Qf(T + G̃)Q*, where G̃ = Q*GQ, and then reduce G̃ to upper triangular form by nearly unitary transformations, exploiting the fact that G̃ is small. Then the underlying method is used to evaluate f at the triangular matrix. See [411, ] for details.
Finally, we briefly mention a probabilistic approach to condition estimation. To first order in t, (3.22) gives ‖Δf(X, t, E)‖_F ≤ ‖L(X)‖_F ‖E‖_F. Kenney and Laub [346, ] propose choosing E with elements from the normal (0,1) distribution, scaling so that ‖E‖_F = 1, and then approximating ‖L(X)‖_F by φ_n‖Δf(X, t, E)‖_F, where φ_n is a constant such that the expected value of φ_n‖Δf(X, t, E)‖_F is ‖L(X)‖_F. They give an explicit formula for φ_n and show how to evaluate the probability that the estimate is within a certain factor of ‖L(X)‖_F, assuming that the O(t) error in (3.22) can be ignored. Several independent E can be used in order to get a better estimate. For example, with two E the estimate is within a factor 5 of ‖L(X)‖_F with probability at least 0.9691 (for all n). The main weaknesses of this approach are that the theory applies only to real matrices, it is unclear how small t must be for the theory to be valid, and the method is expensive if many significant digits are required with high
probability. Probabilistic estimates for the power method and Lanczos method are
also available; see Dixon [160, ], Kuczyński and Woźniakowski [365, ], and
Van Dorsselaer, Hochstenbach, and Van der Vorst [590, ].
For more details on the Fréchet derivative, and on connections between Fréchet and
Gâteaux derivatives, see, for example, Aubin and Ekeland [23, , Sec. 1.4], Atkin-
son and Han [22, , Sec. 5.3], Bhatia [64, , Sec. X.4], or Ortega and Rheinboldt
[453, , Sec. 3.1].
There is a literature on matrix differential calculus aimed at application areas
and not focusing on matrix functions as defined in this book. See, for example, the
book by Magnus and Neudecker [402, ] concerned particularly with statistics and
psychometrics.
Our definitions (3.2) and (3.4) of condition number, and Theorem 3.1, are special
cases of definitions and results of Rice [487, ].
Theorem 3.6 is from Mathias [412, ], where it is stated in a form that requires
f to be only 2m − 1 times continuously differentiable, where m is the size of the
largest Jordan block of A(t), for all t in some neighbourhood of 0. The identity (3.13)
is proved by Najfeld and Havel [445, , Thm. 4.11] under the assumption that f
is analytic. Theorem 3.7 is from Horn and Johnson [296, , Thm. 6.6.14], with
conditions modified as in [412, ].
Theorem 3.9 appears to be new in the form stated. A weaker version that assumes
f has a power series expansion and does not show that all eigenvalues of L(X) are
accounted for is given by Kenney and Laub [340, ].
Theorem 3.11 is due to Daleckiı̆ and Kreı̆n [129, ], [130, ]. Presentations
of this and more general results can be found in Bhatia [64, , Sec. V.3] and Horn
and Johnson [296, , Sec. 6.6].
Theorem 3.14 and Corollary 3.16 are obtained by Kenney and Laub [340, ]
in the case where f has a convergent power series representation. Their proofs work
with K(X), which in this case has an explicit representation in terms of Kronecker
products; see Problem 3.6. That Corollary 3.16 holds without this restriction on f is
noted by Mathias [411, ].
Convergence analysis for the power method for computing an eigenpair of a general
matrix B can be found in Golub and Van Loan [224, , Sec. 7.3], Stewart [538, ,
Sec. 2.1], Watkins [607, , Sec. 5.3], and Wilkinson [616, , Sec. 9.3]. In these
analyses the assumption of a dominant eigenvalue is needed to guarantee convergence;
for Algorithm 3.19, with B ≡ A∗A, no such assumption is needed because eigenvalues
of B of maximal modulus are necessarily equal.
The power method in Algorithm 3.20 is suggested by Kenney and Laub [340,
].
Algorithm 3.21 was developed by Higham [270, ]. The algorithm is based
on a p-norm power method of Boyd [79, ], also investigated by Tao [563, ]
and derived independently for the 1-norm by Hager [237, ]. For more details see
Higham [270, ], [271, ], and [276, , Chap. 15].
Problems
3.1. Evaluate the Fréchet derivatives L(X, E) of F (X) = I, F (X) = X, and F (X) =
cos(X), assuming in the last case that XE = EX.
3.2. Show that if X = QT Q∗ is a Schur decomposition then L(X, E) = QL(T, Q∗ EQ)Q∗ .
3.3. Show that the Fréchet derivative is unique.
3.4. Prove (3.11), namely that the Fréchet derivative is a directional derivative.
3.5. Let A = [A_{11}, A_{12}; 0, A_{22}], where A_{11} ∈ C^{n_1×n_1} and A_{22} ∈ C^{n_2×n_2} with n = n_1 + n_2. What is the maximum size of a Jordan block of A?
3.6. Let the power series f(x) = Σ_{i=0}^{∞} a_i x^i have radius of convergence r. Show that for X, E ∈ C^{n×n} with ‖X‖ < r, the Fréchet derivative

    L(X, E) = Σ_{i=1}^{∞} a_i Σ_{j=1}^{i} X^{j−1} E X^{i−j},                 (3.24)
3.7. Show that if f has a power series expansion with real coefficients then L(X ∗ , E) =
L(X, E ∗ )∗ .
3.8. Show that if X and E commute then L(X, E) = f ′ (X)E = Ef ′ (X), where f ′
denotes the derivative of the scalar function f .
3.9. (Stickel [541, ], Rinehart [494, ]) Suppose that f is analytic on and
inside a closed contour Γ that encloses Λ(X). Show that the Fréchet derivative of f
is given by

    L(X, E) = (1/(2πi)) ∫_Γ f(z)(zI − X)^{−1} E (zI − X)^{−1} dz.
Deduce that if XE = EX then L(X, E) = f ′ (X)E = Ef ′ (X), where f ′ denotes the
derivative of the scalar function f .
3.10. Consider any two eigenvalues λ and µ of X ∈ Cn×n , with corresponding right
and left eigenvectors u and v, so that Xu = λu and v T X = µv T . Show directly
(without using Kronecker products or reducing to the case that f is a polynomial)
that uv T is an eigenvector of L(X) with corresponding eigenvalue f [λ, µ].
3.11. (Research problem) Determine the Jordan form of the Fréchet derivative
L(X) of f in terms of that
of X. To see that this question is nontrivial, note that for
f(X) = X^2 and X = [0, 1; 0, 0], L(X) has one Jordan block of size 3 and one of size 1,
both for the eigenvalue 0.
Chapter 4
Techniques for General Functions
Many different techniques are available for computing or approximating matrix func-
tions, some of them very general and others specialized to particular functions. In this
chapter we survey a variety of techniques applicable to general functions f . We be-
gin with the basic tasks of evaluating polynomial and rational functions, and address
the validity of matrix Taylor series and the truncation error when a finite number of
terms are summed. Then we turn to methods based on similarity transformations,
concentrating principally on the use of the Schur decomposition and evaluation of a
function of a triangular matrix. Matrix iterations are an important tool for certain
functions, such as matrix roots. We discuss termination criteria, show how to define
stability in terms of a Fréchet derivative, and explain how a convergence result for
a scalar iteration can be translated into a convergence result for the corresponding
matrix iteration. Finally, we discuss preprocessing, which may be beneficial before
applying a particular algorithm, and present several bounds on kf (A)k. Many of the
methods for specific f described later in the book make use of one or more of the
techniques treated in this chapter.
We describe the cost of an algorithm in one of two ways. If, as is often the case,
the algorithm is expressed at the matrix level, then we count the number of matrix
operations. Thus
where the matrices A and B are n × n. Other operation counts are given in terms
of flops, where a flop denotes any of the four elementary operations on scalars +, −,
∗, /. The costs in flops of various matrix operations are summarized in Appendix C,
where some comments on the relevance and interpretation of these different measures
are given.
It is worth clarifying the terms “method” and “algorithm”. For us, a method is
usually a general technique that, when the details are worked out, can lead to more
than one algorithm. An algorithm is a specific automatic procedure that we name
“Algorithm” and specify using pseudocode.
Algorithm 4.2 (Horner’s method). This algorithm evaluates the polynomial (4.1)
by Horner’s method.
1 Sm−1 = bm X + bm−1 I
2 for k = m − 2: −1: 0
3 Sk = XSk+1 + bk I
4 end
5 p_m = S_0
Cost: (m − 1)M .
Horner’s method is not suitable when m is not known at the start of the evaluation,
as is often the case when a truncated power series is to be summed. In this case pm
can be evaluated by explicitly forming each power of X.
Algorithm 4.3 (evaluate polynomial via explicit powers). This algorithm evaluates
the polynomial (4.1) by explicitly forming matrix powers.
1 P =X
2 S = b0 I + b1 X
3 for k = 2: m
4 P = PX
5 S = S + bk P
6 end
7 p_m = S
Cost: (m − 1)M .
Note that while Algorithms 4.2 and 4.3 have the same cost for matrices, when X
is a scalar Algorithm 4.3 is twice as expensive as Algorithm 4.2.
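For illustration, Algorithm 4.2 can be expressed in a few lines of MATLAB (a minimal
sketch; the function name and calling sequence are illustrative only, with the coefficient
vector holding b_0, …, b_m):

function S = polyvalm_horner(b,X)
%POLYVALM_HORNER  Evaluate p_m(X) = b_0*I + b_1*X + ... + b_m*X^m by
%   Horner's method (Algorithm 4.2); cost (m-1) matrix multiplications.
m = length(b) - 1;  n = length(X);
S = b(m+1)*X + b(m)*eye(n);          % S_{m-1} = b_m X + b_{m-1} I
for k = m-2:-1:0
    S = X*S + b(k+1)*eye(n);         % S_k = X S_{k+1} + b_k I
end

MATLAB's built-in polyvalm provides equivalent functionality, with the coefficients
supplied in order of descending powers.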
Another method factorizes the polynomial pm (x) = bm (x − ξ1 ) . . . (x − ξm ) and
then evaluates this factorized form at the matrix X.
Algorithm 4.4 (evaluate polynomial in factored form). This algorithm evaluates the
polynomial (4.1) given the roots ξ1 , . . . , ξm of pm .
1 S = X − ξm I
2 for k = m − 1: −1: 1
3 S = S(X − ξk I)
4 end
5 pm = S
Cost: (m − 1)M .
One drawback to Algorithm 4.4 is that some of the roots ξj may be complex and so
complex arithmetic can be required even when the polynomial and X are real. In such
situations the algorithm can be adapted in an obvious way to employ a factorization
of pm into real linear and quadratic factors.
The fourth and least obvious method is that of Paterson and Stockmeyer [466,
], [224, , Sec. 11.2.4], in which p_m is written as

    p_m(X) = Σ_{k=0}^{r} B_k (X^s)^k,    r = ⌊m/s⌋,    (4.2)

where s is an integer parameter and each B_k is a polynomial in X of degree at most
s − 1 formed from the coefficients b_{sk}, …, b_{sk+s−1} (with B_r collecting the remaining
coefficients b_{sr}, …, b_m).
For example, for m = 6 and s = 3,

    p_6(X) = b_6(X³)² + (b_5X² + b_4X + b_3I)X³ + b_2X² + b_1X + b_0I,

which can be evaluated in 3M (forming X², X³, and the single nontrivial multiplication
in Horner's scheme applied in X³), compared with the 5M required for Horner's method.

Table 4.1. Number of matrix multiplications required by the Paterson–Stockmeyer
method and by Algorithms 4.2/4.3 to evaluate p_m(X).

    m              2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
    PS method      1  2  2  3  3  4  4  4  5  5  5  6  6  6  6
    Algs 4.2/4.3   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
Note that (4.2) is a polynomial with matrix coefficients, and the cost of evaluation
given the B_k and X^s is rM. The total cost of evaluating p_m is

    (s + r − 1 − f(s, m)) M,    where f(s, m) = 1 if s divides m and 0 otherwise.    (4.3)
This quantity is approximately minimized by s = √m, so we take for s either ⌊√m⌋
or ⌈√m⌉; it can be shown that both choices yield the same operation count. As an
extreme case, this method evaluates A^{q²} as (A^q)^q, which clearly requires much less
work than the previous two methods for large q, though for a single high power of A
binary powering (Algorithm 4.1) is preferred.
Table 4.1 shows the cost of the Paterson–Stockmeyer method for m = 2: 16; for
each m ≥ 4 it requires strictly fewer multiplications than Algorithms 4.2 and 4.3. For
each m in the table it can be shown that both choices of s minimize (4.3) [247, ].
Unfortunately, the Paterson–Stockmeyer method requires (s + 2)n² elements of
storage. This can be reduced to 4n² by computing p_m a column at a time, as shown
by Van Loan [596, ], though the cost of evaluating p_m then increases to (2s +
r − 3 − f(s, m))M. The value s = √(m/2) approximately minimizes the cost of Van
Loan's variant, and it then costs about 40% more than the original method.
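The following MATLAB sketch (illustrative only; it does not attempt Van Loan's
storage-saving variant) evaluates (4.2) by forming X², …, X^s and then applying
Horner's method in X^s to the matrix coefficients B_k:

function S = polyvalm_ps(b,X)
%POLYVALM_PS  Paterson-Stockmeyer evaluation of p_m(X) = sum_k b_k X^k.
%   Cost as coded: (s-1) + r matrix multiplications (cf. (4.3)).
m = length(b) - 1;  n = length(X);
s = ceil(sqrt(m));  r = floor(m/s);
P = zeros(n,n,s);  P(:,:,1) = X;
for i = 2:s, P(:,:,i) = P(:,:,i-1)*X; end      % X, X^2, ..., X^s
for k = r:-1:0                                 % Horner in X^s with coefficients B_k
    B = b(s*k+1)*eye(n);
    for j = 1:min(s-1, m-s*k)
        B = B + b(s*k+j+1)*P(:,:,j);           % B_k = sum_j b_{sk+j} X^j
    end
    if k == r, S = B; else, S = S*P(:,:,s) + B; end
end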
It is important to understand the effect of rounding errors on these four polynomial
evaluation methods. The next theorem provides error bounds for three of the methods.
For matrices, absolute values and inequalities are defined componentwise. We write
γ̃_n = cnu/(1 − cnu), where u is the unit roundoff and c is a small integer constant whose
precise value is unimportant. For details of our model of floating point arithmetic see
Section B.15.
Theorem 4.5. The computed polynomial p̂_m obtained by applying Algorithm 4.2,
Algorithm 4.3, or the Paterson–Stockmeyer method to p_m in (4.1) satisfies

    |p_m − p̂_m| ≤ γ̃_{mn} p̃_m(|X|),

where p̃_m(X) = Σ_{k=0}^m |b_k| X^k. Hence ‖p_m − p̂_m‖_{1,∞} ≤ γ̃_{mn} p̃_m(‖X‖_{1,∞}).
The bound of the theorem is pessimistic in the sense that inequalities such as
|X j | ≤ |X|j are used in the derivation. But the message of the theorem is clear: if
there is significant cancellation in forming pm then the error in the computed result
can potentially be large. This is true even in the case of scalar x. Indeed, Stegun and
Abramowitz [535, ] presented the now classic example of evaluating e−5.5 from a
truncated Taylor series and showed how cancellation causes a severe loss of accuracy
in floating point arithmetic; they also noted that computing e5.5 and reciprocating
avoids the numerical problems.
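The phenomenon is easily reproduced in a few illustrative lines of MATLAB (a sketch,
not taken from the original example):

% Taylor series for e^x at x = -5.5: severe cancellation, since the largest
% terms (about 42 in magnitude) dwarf the result e^{-5.5}, which is about 4.1e-3.
x = 5.5;  s_neg = 1;  s_pos = 1;  t_neg = 1;  t_pos = 1;
for k = 1:40
    t_neg = -t_neg*x/k;  s_neg = s_neg + t_neg;   % sum for e^{-5.5}
    t_pos =  t_pos*x/k;  s_pos = s_pos + t_pos;   % sum for e^{+5.5}
end
[abs(s_neg - exp(-x))  abs(1/s_pos - exp(-x))]/exp(-x)   % direct vs. reciprocal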
Figure 4.1. 2-norms of first 99 terms in Taylor series of e^A, for A in (4.4) with α = 25.
As an example, consider the matrix A = [0, α; −α, 0] (4.4), for which

    e^A = [cos α, sin α; −sin α, cos α].
We took α = 25 and summed the first 99 terms of the Taylor series for eA using
Algorithm 4.3; at this point adding further terms makes no difference to the computed
sum. The computed sum X̂ has error ‖e^A − X̂‖₂ = 1.5 × 10⁻⁷, which represents a
loss of 9 significant digits in all components of X̂. This loss of significance can be
understood with the aid of Figure 4.1, which shows that the terms in the series grow
rapidly in norm, reaching a maximum of order 1010 . Since all the elements of eA are
of order 1, there is clearly massive cancellation in the summation, and based on the
size of the maximum term a loss of about 10 significant digits would be expected.
Turning to Theorem 4.5, the 2-norm of u p̃_m(|X|) is 8 × 10⁻⁶, so the upper bound of
the theorem is reasonably sharp in this example if we set the constant γ̃_{mn} to u.
Unlike in the scalar example of Stegun and Abramowitz, computing e^{−A} and then
inverting does not help in this example, since e^{−A} = (e^A)^T in this case and so the
Taylor series is merely transposed.
Error analysis for Algorithm 4.4 is essentially the same as error analysis of a matrix
product (see [276, , Secs. 3.7, 18.2]).
Theorem 4.6. The computed polynomial p̂_m obtained by applying Algorithm 4.4 to
p_m in (4.1) satisfies

    |p_m − p̂_m| ≤ γ̃_{mn} |b_m| |X − ξ_1 I| ⋯ |X − ξ_m I|.    (4.5)
Note that this theorem assumes the ξj are known exactly. The ξk can be ill
conditioned functions of the coefficients bk , so in practice the errors in computing the
ξj could have a significant effect. The main thing to note about the bound (4.5) is
that it depends on the ordering of the ξk , since the matrices |X −ξk I| do not commute
with each other in general. An ordering that has been suggested is the Leja ordering
[485, ]; see the discussion in [444, ]. We will not consider Algorithm 4.4
further because the special nature of the polynomials in matrix function applications
tends to make the other algorithms preferable.
Theorem 4.7 (convergence of matrix Taylor series). Suppose f has a Taylor series
expansion

    f(z) = Σ_{k=0}^∞ a_k (z − α)^k,    a_k = f^{(k)}(α)/k!,    (4.6)

with radius of convergence r. If A ∈ C^{n×n} then f(A) is defined and is given by
f(A) = Σ_{k=0}^∞ a_k (A − αI)^k if and only if each of the distinct eigenvalues
λ_1, …, λ_s of A satisfies one of the conditions
(a) |λ_i − α| < r,
(b) |λ_i − α| = r and the series for f^{(n_i−1)}(λ), where n_i is the index of λ_i, is
convergent at the point λ = λ_i, i = 1: s.
Proof. It is easy to see from Definition 1.2 that it suffices to prove the theorem
for a Jordan block, A = J(λ) = λI + N ∈ C^{n×n}, where N is strictly upper triangular.
Let f_m(z) = Σ_{k=0}^m a_k(z − α)^k. We have

    f_m(J(λ)) = Σ_{k=0}^m a_k ((λ − α)I + N)^k
              = Σ_{k=0}^m a_k Σ_{i=0}^k \binom{k}{i} (λ − α)^{k−i} N^i
              = Σ_{i=0}^m N^i Σ_{k=i}^m a_k \binom{k}{i} (λ − α)^{k−i}
              = Σ_{i=0}^m (N^i/i!) Σ_{k=i}^m a_k k(k − 1)⋯(k − i + 1)(λ − α)^{k−i}
              = Σ_{i=0}^m (N^i/i!) f_m^{(i)}(λ) = Σ_{i=0}^{min(m,n−1)} (N^i/i!) f_m^{(i)}(λ).

Evidently, lim_{m→∞} f_m(J(λ)) exists if and only if lim_{m→∞} f_m^{(i)}(λ) exists for i = 1: n−1,
which is essentially the statement of case (b), because if the series for f differentiated
term by term n_i − 1 times converges at λ then so does the series differentiated j times
for j = 0: n_i − 1. Case (a) follows from the standard result in complex analysis that
a power series differentiated term by term converges within the radius of convergence
of the original series.
Theorem 4.8 (Taylor series truncation error bound). Suppose f has the Taylor series
expansion (4.6) with radius of convergence r. If A ∈ C^{n×n} with ρ(A − αI) < r
then for any matrix norm

    ‖ f(A) − Σ_{k=0}^{s−1} a_k (A − αI)^k ‖ ≤ (1/s!) max_{0≤t≤1} ‖ (A − αI)^s f^{(s)}(αI + t(A − αI)) ‖.    (4.8)
Proof. See Mathias [409, , Cor. 2]. Note that this bound does not con-
tain a factor depending on n, unlike the 2-norm version of the bound in [224, ,
Thm. 11.2.4]. Moreover, the norm need not be consistent.
In order to apply this theorem we need to bound the term max_{0≤t≤1} ‖Z^s f^{(s)}(αI +
tZ)‖_∞. For certain f this is straightforward. We illustrate using the cosine function.
With α = 0, s = 2k + 2, and

    T_{2k}(A) = Σ_{i=0}^{k} ((−1)^i/(2i)!) A^{2i},

the required derivative is cos^{(2k+2)} = ±cos. Now

    max_{0≤t≤1} ‖cos^{(2k+2)}(tA)‖_∞ ≤ 1 + ‖A‖²_∞/2! + ‖A‖⁴_∞/4! + ⋯ = cosh(‖A‖_∞),

and so the error in the truncated Taylor series approximation to the matrix cosine
satisfies the bound

    ‖cos(A) − T_{2k}(A)‖_∞ ≤ (‖A^{2k+2}‖_∞/(2k + 2)!) cosh(‖A‖_∞).    (4.9)
We also need to bound the error in evaluating T_{2k}(A) in floating point arithmetic.
From Theorem 4.5 we have that if T_{2k}(A) is evaluated by any of the methods of
Section 4.2 then the computed T̂_{2k} satisfies

    ‖T_{2k} − T̂_{2k}‖_∞ ≤ γ̃_{kn} cosh(‖A‖_∞).

Hence

    ‖cos(A) − T̂_{2k}‖_∞ / ‖cos(A)‖_∞ ≤ ( ‖A^{2k+2}‖_∞/(2k + 2)! + γ̃_{kn} ) cosh(‖A‖_∞)/‖cos(A)‖_∞.    (4.10)

We can draw two conclusions. First, for maximum accuracy we should choose k so
that

    ‖A^{2k+2}‖_∞/(2k + 2)! ≈ γ̃_{kn}.

Second, no matter how small the truncation error, the total relative error can poten-
tially be as large as γ̃_{kn} cosh(‖A‖_∞)/‖cos(A)‖_∞, and this quantity can be guaranteed
to be of order γ̃_{kn} only if ‖A‖_∞ ≲ 1. The essential problem is that if ‖A‖_∞ ≫ 1 then
there can be severe cancellation in summing the series.

If ‖A‖_∞ ≤ 1 then, using ‖cos(A)‖_∞ ≥ 1 − (cosh(‖A‖_∞) − 1) = 2 − cosh(‖A‖_∞),
we have

    0.45 ≤ 2 − cosh(1) ≤ ‖cos(A)‖_∞ ≤ cosh(1) ≤ 1.55,    (4.11)

which gives

    cosh(‖A‖_∞)/‖cos(A)‖_∞ ≤ 3.4.

We conclude that a relative error ‖cos(A) − T̂_{2k}‖_∞/‖cos(A)‖_∞ of order γ̃_{kn} is guar-
anteed if ‖A‖_∞ ≤ 1 and k is sufficiently large. In fact, since 18! ≈ 6 × 10¹⁵, k = 8
suffices in IEEE standard double precision arithmetic, for which the unit roundoff
u ≈ 1.1 × 10⁻¹⁶.
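For illustration, a direct MATLAB transcription of this prescription (k = 8, valid
only when ‖A‖_∞ ≤ 1; the function name is illustrative) is:

function C = cos_taylor16(A)
%COS_TAYLOR16  Degree-16 Taylor approximation T_16(A) to cos(A).
%   Intended only for norm(A,inf) <= 1 (see the discussion above).
n = length(A);  A2 = A*A;
C = eye(n);  term = eye(n);
for i = 1:8
    term = -term*A2/((2*i-1)*(2*i));   % (-1)^i A^{2i} / (2i)!
    C = C + term;
end

In practice the polynomial would be evaluated more cheaply by the Paterson–Stockmeyer
method of Section 4.2 rather than term by term, as here.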
If a [k/m] Padé approximant exists then it is unique; see Problem 4.2. It is usually
required that pkm and qkm have no common zeros, so that pkm and qkm are unique.
For a given f , k, and m, a [k/m] Padé approximant might not exist, though for certain
f existence has been proved for all k and m.
The condition (4.12) shows that rkm reproduces the first k + m + 1 terms of the
Taylor series of f about the origin, and of course if m = 0 then rkm is precisely a
truncated Taylor series.
Continued fraction representations

    f(x) = b_0 + a_1x/(b_1 + a_2x/(b_2 + a_3x/(b_3 + ⋯)))

are intimately connected with Padé approximation and provide a convenient way of
obtaining them. Specifically, if b_1 = b_2 = ⋯ = 1 and the a_i are all nonzero then the
convergents

    r_m(x) ≡ r_{mm}(x) = b_0 + a_1x/(b_1 + a_2x/(b_2 + a_3x/(b_3 + ⋯ + a_{2m−1}x/(b_{2m−1} + a_{2m}x/b_{2m}))))    (4.13)
    m    2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
         1  2  3  4  4  5  5  6  6  7  7  8  8  8  9
        FC  C  F  F  F  C FC  C  C  F  F  F  F  F  C
are the [0/0], [1/0], [1/1], [2/1], [2/2], . . . Padé approximants of f [38, , Sec 4.D],
[39, , Thm. 4.2.1].
Padé approximants are of particular interest in matrix function approximation for
three main reasons:
Algorithm 4.9 (continued fraction, top-down). This algorithm evaluates the con-
tinued fraction (4.13) in top-down fashion at the matrix X ∈ Cn×n .
1 P−1 = I, Q−1 = 0, P0 = b0 I, Q0 = I
2 for j = 1: 2m
3 Pj = bj Pj−1 + aj XPj−2
4 Qj = bj Qj−1 + aj XQj−2
5 end
6 r_m = P_{2m} Q_{2m}^{−1}
Cost: 2(2m−2)M +D. (Note that for j = 1, 2 the multiplications XPj−2 and XQj−2
are trivial, since Pj−2 and Qj−2 are multiples of I.)
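A direct MATLAB transcription of Algorithm 4.9 (a sketch only; the vectors a and b
are assumed to hold a_1, …, a_{2m} and b_0, …, b_{2m}) is:

function R = contfrac_topdown(a,b,X)
%CONTFRAC_TOPDOWN  Evaluate the continued fraction (4.13) top-down at X.
n = length(X);
Pm1 = eye(n);  Qm1 = zeros(n);            % P_{-1} = I, Q_{-1} = 0
P = b(1)*eye(n);  Q = eye(n);             % P_0 = b_0*I, Q_0 = I
for j = 1:length(a)
    Pnew = b(j+1)*P + a(j)*X*Pm1;
    Qnew = b(j+1)*Q + a(j)*X*Qm1;
    Pm1 = P;  Qm1 = Q;  P = Pnew;  Q = Qnew;
end
R = P/Q;                                  % r_m(X) = P_{2m} * inv(Q_{2m})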
Using bottom-up evaluation, rm (X) is evaluated as follows.
Algorithm 4.10 (continued fraction, bottom-up). This algorithm evaluates the con-
tinued fraction (4.13) in bottom-up fashion at the matrix X ∈ Cn×n .
The coefficients β_j^{(m)} are minus the reciprocals of the roots of the denominator poly-
nomial q_{mm}(x) and so may be complex; in this case an alternative partial fraction
with quadratic denominators could be considered. The cost of evaluating (4.14) at
the matrix X is just mD, but on a parallel computer the m terms in (4.14) can be
evaluated in parallel.
Of course, the numerical stability of these different methods of evaluation needs
to be considered along with the computational cost. Since the stability depends
very much on the function f , we delay further consideration until later sections on
particular f .
4.5. Diagonalization
A wide class of methods for evaluating matrix functions is based on exploiting the
relation f (ZBZ −1 ) = Zf (B)Z −1 (Theorem 1.13 (c)). The idea is to factor A =
ZBZ −1 , with B of a form that allows easy computation of f (B). Then f (A) =
Zf (B)Z −1 is readily obtained.
>> norm(A-X^2)
ans =
9.9519e-009
Given that A has norm of order 1 and the unit roundoff u ≈ 10⁻¹⁶, the residual
‖A − X²‖₂ ≈ 10⁻⁸ of the computed X is disappointing—especially considering that
MATLAB’s sqrtm function achieves a residual of order u:
>> Y = sqrtm(A); norm(A-Y^2)
ans =
6.4855e-016
The explanation lies with the ill conditioning of the matrix Z:
>> [Z,D] = eig(A); cond(Z)
ans =
9.4906e+007
That κ₂(Z)u is roughly the size of the residual is no coincidence. Suppose the only
error in the process is an error E in evaluating f(B). Then we obtain
f̃(A) = Z(f(B) + E)Z⁻¹ = f(A) + ZEZ⁻¹ and

    ‖f̃(A) − f(A)‖ ≤ ‖Z‖ ‖E‖ ‖Z⁻¹‖ = κ(Z) ‖E‖.
When B is diagonal and Gaussian elimination with partial pivoting is used in the
evaluation, we should interpret κ(Z) as

    min{ κ_p(ZD) : D diagonal, nonsingular },

which for any p-norm is approximately achieved (and exactly achieved when p = 1)
when ZD has columns of unit p-norm; see [276, , Thm. 7.5, Sec. 9.8]. The con-
clusion is that we must expect errors proportional to κ(Z) in our computed function.
Since the conditioning of f (A) is not necessarily related to κ(Z), this diagonalization
method may be numerically unstable.
Diagonalization was used to compute certain matrix functions in the original For-
tran version of MATLAB (“Classic MATLAB”, 1978–1984), which was designed for
teaching purposes:
< M A T L A B >
Version of 01/10/84
HELP is available
<>
help fun

The same diagonalization approach is easily expressed as a modern MATLAB M-file:

function F = funm_ev(A,fun)
%FUNM_EV Evaluate general matrix function via eigensystem.
% F = FUNM_EV(A,FUN) evaluates the function FUN at the
% square matrix A using the eigensystem of A.
% This function is intended for diagonalizable matrices only
% and can be numerically unstable.
[V,D] = eig(A);
F = V * diag(feval(fun,diag(D))) / V;
Theorem 4.11. Let T ∈ C^{n×n} be upper triangular and suppose that f is defined
on the spectrum of T. Then F = f(T) satisfies

    f_{ij} = Σ_{(s_0,…,s_k)∈S_{ij}} t_{s_0,s_1} t_{s_1,s_2} ⋯ t_{s_{k−1},s_k} f[λ_{s_0}, …, λ_{s_k}],

where λ_i = t_{ii}, S_{ij} is the set of all strictly increasing sequences of integers that start
at i and end at j, and f[λ_{s_0}, …, λ_{s_k}] is the kth order divided difference of f at
λ_{s_0}, …, λ_{s_k}.
Proof. See Davis [139, ], Descloux [148, ], or Van Loan [592, ].
We can also use Theorem 4.11 to check formula (1.4) for a function of a Jordan
block, T = Jk (λk ). Since the only nonzero off-diagonal elements of T are 1s on the
superdiagonal, we have, using (B.27) again,
    f_{ij} = t_{i,i+1} ⋯ t_{j−1,j} f[λ_k, λ_k, …, λ_k] = f^{(j−i)}(λ_k)/(j − i)!,    i < j,

where the divided difference has j − i + 1 arguments λ_k.
Theorem 4.12 (Kenney and Laub). Let f be 2n − 1 times continuously differentiable
and let

    A = [A_{11}, A_{12}; 0, A_{22}],    D = [A_{11}, 0; 0, A_{22}],    N = [0, A_{12}; 0, 0].

Then f(A) = f(D) + L(D, N) (i.e., the o(·) term in (3.6) is zero).
or

    f_{ij}(t_{ii} − t_{jj}) = t_{ij}(f_{ii} − f_{jj}) + Σ_{k=i+1}^{j−1} (f_{ik} t_{kj} − t_{ik} f_{kj}),    (4.18)

which gives

    f_{ij} = t_{ij} (f_{ii} − f_{jj})/(t_{ii} − t_{jj}) + ( Σ_{k=i+1}^{j−1} (f_{ik} t_{kj} − t_{ik} f_{kj}) ) / (t_{ii} − t_{jj}),    i < j.
The right-hand side depends only on the elements to the left of fij and below it.
Hence this recurrence enables F to be computed either a superdiagonal at a time,
starting with the diagonal, or a column at a time, from the second column to the last,
moving up each column.
Algorithm 4.13 (Parlett recurrence). Given an upper triangular T ∈ C^{n×n} with
distinct diagonal elements and a function f defined on the spectrum of T, this
algorithm computes F = f(T) by the scalar Parlett recurrence.

1 f_{ii} = f(t_{ii}), i = 1: n
2 for j = 2: n
3     for i = j − 1: −1: 1
4         f_{ij} = t_{ij}(f_{ii} − f_{jj})/(t_{ii} − t_{jj}) + ( Σ_{k=i+1}^{j−1} (f_{ik} t_{kj} − t_{ik} f_{kj}) ) / (t_{ii} − t_{jj})
5     end
6 end
Algorithm 4.14 (block Parlett recurrence). Given an upper triangular T ∈ C^{n×n}
partitioned into blocks T_{ij} such that no two distinct diagonal blocks share an
eigenvalue, and a function f defined on the spectrum of T, this algorithm computes
F = f(T) by the block Parlett recurrence.

1 F_{ii} = f(T_{ii}), i = 1: n
2 for j = 2: n
3     for i = j − 1: −1: 1
4         Solve for F_{ij} the Sylvester equation
              T_{ii} F_{ij} − F_{ij} T_{jj} = F_{ii} T_{ij} − T_{ij} F_{jj} + Σ_{k=i+1}^{j−1} (F_{ik} T_{kj} − T_{ik} F_{kj})
5     end
6 end
In Algorithm 4.14, computing Fii = f (Tii ) for a block of dimension greater than
1 is a nontrivial problem that we pursue in Section 9.1.
The block recurrence can be used in conjunction with the real Schur decomposition
of A ∈ R^{n×n},

    Q^T A Q = T,
where Q ∈ Rn×n is orthogonal and T ∈ Rn×n is quasi upper triangular, that is,
block upper triangular with 1 × 1 or 2 × 2 diagonal blocks, with any 2 × 2 diagonal
blocks having complex conjugate eigenvalues. When f (A) is real, this enables it to
be computed entirely in real arithmetic.
We turn now to numerical considerations. In Listing 4.2 we give a function
funm simple that employs Algorithm 4.13 in conjunction with an initial Schur re-
duction to triangular form. The function is in principle applicable to any matrix with
distinct eigenvalues. It is very similar to the function funm in versions 6.5 (R13) and
earlier of MATLAB; version 7 of MATLAB introduced a new funm that implements
Algorithm 9.6 described in Section 9.4, which itself employs Algorithm 4.14.
Function funm simple often works well. For example the script M-file
format rat, A = gallery(’parter’,4), format short
evals = eig(A)’
X = real(funm_simple(A,@sqrt))
res = norm(A-X^2)
produces the output
A =
2 -2 -2/3 -2/5
2/3 2 -2 -2/3
2/5 2/3 2 -2
2/7 2/5 2/3 2
evals =
1.5859 - 2.0978i 1.5859 + 2.0978i 2.4141 - 0.7681i
2.4141 + 0.7681i
X =
1.4891 -0.6217 -0.3210 -0.2683
0.2531 1.5355 -0.5984 -0.3210
0.1252 0.2678 1.5355 -0.6217
0.0747 0.1252 0.2531 1.4891
res =
1.6214e-014
A major weakness of funm simple is demonstrated by the following experiment. Let
A be the 8 × 8 triangular matrix with aii ≡ 1 and aij ≡ −1 for j > i, which is MAT-
LAB’s gallery(’triw’,8). With f the exponential, Table 4.3 shows the normwise
relative errors for A and two small perturbations of A, one full and one triangular.
The condition number of f (A) (see Chapter 3) is about 2 in each case, so we would
expect to be able to compute f (A) accurately. For A itself, funm simple yields an
error of order 1, which is expected since it is unable to compute any of the super-
diagonal elements of f (A). For A plus the random full perturbation (which, being
full, undergoes the Schur reduction) the eigenvalues are distinct and at distance at
least 10−2 apart. But nevertheless, funm simple loses 6 significant digits of accuracy.
For the third matrix, in which the perturbation is triangular and the eigenvalues are
function F = funm_simple(A,fun)
%FUNM_SIMPLE Simplified Schur-Parlett method for function of a matrix.
% F = FUNM_SIMPLE(A,FUN) evaluates the function FUN at the
% square matrix A by the Schur-Parlett method using the scalar
% Parlett recurrence (and hence without blocking or reordering).
% This function is intended for matrices with distinct eigenvalues
% only and can be numerically unstable.
% FUNM should in general be used in preference.
n = length(A);
[Q,T] = schur(A,'complex');      % complex Schur form: A = Q*T*Q'
F = diag(feval(fun,diag(T)));    % diagonal of f(T)
for j = 2:n                      % scalar Parlett recurrence (Algorithm 4.13)
    for i = j-1:-1:1
        k = i+1:j-1;
        s = T(i,j)*(F(i,i)-F(j,j)) + F(i,k)*T(k,j) - T(i,k)*F(k,j);
        F(i,j) = s/(T(i,i)-T(j,j));
    end
end
F = Q*F*Q';
Table 4.3. Errors ‖e^A − F̂‖/‖e^A‖ for F̂ from funm_simple for A = gallery('triw',8).
Figure 4.2. Relative errors for inversion of A = 3I_n, n = 25: 60, via the characteristic
polynomial (two panels: Csanky, and poly (MATLAB)).
    A^{−1} = −(1/c_n) ( A^{n−1} + Σ_{i=1}^{n−1} c_i A^{n−i−1} ).
\begin{bmatrix}
1       &        &        &        &        &   \\
s_1     & 2      &        &        &        &   \\
s_2     & s_1    & 3      &        &        &   \\
s_3     & s_2    & s_1    & 4      &        &   \\
\vdots  & \ddots & \ddots & \ddots & \ddots &   \\
s_{n-1} & \cdots & s_3    & s_2    & s_1    & n
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ \vdots \\ c_n \end{bmatrix}
=
-\begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ s_4 \\ \vdots \\ s_n \end{bmatrix}.    (4.20)
The method based on the latter system was proposed for parallel computation by
Csanky [124, ] (it takes O(log² n) time on O(n⁴) processors) but in fact goes
back at least to Bingham [68, ]. Figure 4.2 plots the ∞-norm relative errors
when A = 3In is inverted by these two approaches. All accuracy is lost by the time
n = 55, despite A being perfectly conditioned.
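For illustration, the "poly (MATLAB)" curve in Figure 4.2 can be reproduced in a few
lines (a sketch only, not part of the original experiment's code):

% Invert A = 3*I_n via its characteristic polynomial, using MATLAB's poly:
% A^{-1} = -(A^{n-1} + c_1 A^{n-2} + ... + c_{n-1} I)/c_n.
n = 50;  A = 3*eye(n);
c = poly(A);                              % c = [1 c_1 ... c_n]
Ainv = -polyvalm(c(1:n), A)/c(n+1);       % Cayley-Hamilton inverse
norm(Ainv - eye(n)/3, inf)/norm(eye(n)/3, inf)   % relative error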
For the iterations used in practice, X0 is not arbitrary but is a fixed function of
A—usually X0 = I or X0 = A. The iteration function g may or may not depend
on A.
Considerations of computational cost usually dictate that g is a polynomial or
rational function. Rational g require the solution of linear systems with multiple right-
hand sides, or even explicit matrix inversion. On modern computers with hierarchical
memories, matrix multiplication is usually much faster than solving a matrix equation
or inverting a matrix, so iterations that are multiplication-rich, which means having a
polynomial g, are preferred. The drawback is that such iterations usually have weaker
convergence properties than rational iterations.
A standard means of deriving matrix iterations is to apply Newton’s method to
an algebraic equation satisfied by f (A) and then to choose X0 so that the iteration
formula is simplified. In subsequent chapters we will study iterations that can be
derived in this way for computing the matrix sign function, the matrix square root,
matrix pth roots, and the polar decomposition.
It is important to keep in mind that results and intuition from scalar nonlinear
iterations do not necessarily generalize to the matrix case. For example, standard
convergence conditions expressed in terms of derivatives of g at a fixed point in the
scalar case do not directly translate into analogous conditions on the Fréchet and
higher order derivatives in the matrix case.
A sequence {X_k} with limit X_* is said to have order of convergence p (≥ 1) if

    ‖X_{k+1} − X_*‖ ≤ c‖X_k − X_*‖^p    (4.22)

for all sufficiently large k, for some positive constant c. The iteration (4.21) is said to
have order p if the convergent sequences it generates have order p. Linear convergence
corresponds to p = 1 and quadratic convergence to p = 2. The convergence is called
superlinear if lim_{k→∞} ‖X_* − X_{k+1}‖/‖X_* − X_k‖ = 0.
The convergence of a sequence breaks into two parts: the initial phase in which the
error is reduced safely below 1, and the asymptotic phase in which (4.22) guarantees
convergence to zero. The order of convergence tells us about the rate of convergence
in the asymptotic phase, but it has nothing to say about how many iterations are
taken up by the first phase. This is one reason why a higher order of convergence
is not necessarily better. Other factors to consider when comparing iterations with
different orders of convergence are as follows.
• Since two steps of a quadratically convergent iteration together give quartic
convergence, a quartically convergent iteration must have cost per iteration no
larger than twice that of the quadratic one if it is to be worth considering.
• In practice we often need to scale an iteration, that is, introduce scaling param-
eters to reduce the length of the initial convergence phase. The higher the order
the less opportunity there is to scale (relative to the amount of computation).
• Numerical stability considerations may rule out certain iterations from consid-
eration for practical use.
and for sufficiently fast convergence kXk+1 − X∗ k ≪ kXk − X∗ k and hence the two
terms on the right-hand side are of roughly equal norm (and their largest elements
are of opposite signs). To obtain more insight, consider a quadratically convergent
method, for which
    ‖X_{k+1} − X_*‖ ≤ c ‖X_k − X_*‖²    (4.23)

close to convergence, where c is a constant. Now

    ‖X_{k+1} − X_*‖ ≤ c‖X_k − X_*‖² = c‖(X_k − X_{k+1}) + (X_{k+1} − X_*)‖² ≲ 2c‖X_{k+1} − X_k‖²    (4.24)

for small enough ‖X_k − X_*‖, so the error in X_{k+1} is bounded in terms of the square
of kXk+1 − Xk k. The conclusion is that a stopping test that accepts Xk+1 when δk+1
is of the order of the desired relative error may terminate one iteration too late.
The analysis above suggests an alternative stopping test for a quadratically con-
vergent iteration. From (4.24) we expect ‖X_{k+1} − X_*‖/‖X_{k+1}‖ ≤ η if

    ‖X_{k+1} − X_k‖ ≤ ( η‖X_{k+1}‖ / (2c) )^{1/2}.    (4.25)
The value of c in (4.23) is usually known. The test (4.25) can be expected to terminate
one iteration earlier than the test δk+1 ≤ η, but it can potentially terminate too soon
if rounding errors vitiate (4.23) or if the test is satisfied before the iteration enters
the regime where (4.23) is valid.
A stopping test based on the distance between successive iterates is dangerous be-
cause in floating point arithmetic there is typically no guarantee that δk (or indeed the
relative error) will reach a given tolerance. One solution is to stop when a preassigned
number of iterations have passed. An alternative, suitable for any quadratically or
higher order convergent iteration, is to terminate once the relative change has not de-
creased by a factor at least 2 since the previous iteration. A reasonable convergence
test is therefore to terminate the iteration at Xk+1 when
δk+1 ≤ η or δk+1 ≥ δk /2, (4.26)
where η is the desired relative error. This test can of course be combined with others,
such as (4.25).
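As a minimal sketch (the function and its interface are illustrative only), the test
(4.26) can be wrapped around a generic iteration X_{k+1} = g(X_k) as follows, with δ
taken to be the relative change in the Frobenius norm:

function [X,k] = iterate_until(g,X0,eta,maxit)
%ITERATE_UNTIL  Run X_{k+1} = g(X_k), stopping via the test (4.26).
X = X0;  delta_old = inf;
for k = 1:maxit
    Xnew = g(X);
    delta = norm(Xnew - X,'fro')/norm(Xnew,'fro');          % relative change
    X = Xnew;
    if delta <= eta || delta >= delta_old/2, return, end    % test (4.26)
    delta_old = delta;
end

For example, iterate_until(@(Y)(Y + inv(Y))/2, A, 1e-8, 50) applies it to the sign
iteration (4.30) below.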
4.9.3. Convergence
Convergence analysis for the matrix iterations in this book will be done in two general
ways. The first is to work entirely at the matrix level. This is our preference, when
it is possible, because it tends to give the most insight into the behaviour of the
iteration. The second approach breaks into three parts:
• show, if possible, that the iteration converges for an arbitrary matrix if and only
if it converges when applied to the Jordan blocks of the matrix (why this is not
always possible is explained in the comments after Theorem 6.9);
• prove convergence of the scalar iteration when applied to the eigenvalues (which
implies convergence of the diagonal of the Jordan blocks);
• prove convergence of the off-diagonals of the Jordan blocks.
The last step can be done in some generality, as our next theorem shows.
Consider, for example, the iteration

    X_{k+1} = ½(X_k + X_k^{−1}A),    X_0 = A ∈ C^{n×n},    (4.27)

which is an analogue for matrices of Heron's method for the square root of a scalar.
As is well known, for n = 1 the iteration converges to the principal square root of
a if a does not lie on R⁻. If A has the Jordan canonical form A = ZJZ^{−1} then
it is easy to see that the iterates from (4.27) are given by X_k = ZJ_kZ^{−1}, where
J_{k+1} = ½(J_k + J_k^{−1}J), J_0 = J. Hence J_k has the same block diagonal structure as
J, and convergence reduces to the case where J is a single Jordan block. The next
result allows us to deduce convergence of the X_k when A has no eigenvalues on R⁻.
This result is quite general and will be of use for other iterations in later chapters.
Theorem 4.15. Let g(x, t) be a rational function of both of its arguments. Let the
scalar sequence generated by x_{k+1} = g(x_k, λ), x_0 = φ_0(λ) converge to x_* = f(λ),
where φ_0 is a rational function and λ ∈ C, and assume that |∂g/∂x(x_*, λ)| < 1 (i.e.,
x_* is an attracting fixed point of the iteration). Then the matrix sequence generated
by X_{k+1} = g(X_k, J(λ)), X_0 = φ_0(J(λ)), where J(λ) ∈ C^{m×m} is a Jordan block,
converges to a matrix X_* with (X_*)_{ii} ≡ f(λ).
Proof. Let φ_k(t) = g(φ_{k−1}(t), t), k ≥ 1, so that φ_k is a rational function of t and
x_k = φ_k(λ). Upper triangular Toeplitz structure is preserved under inversion and
multiplication, and hence under any rational function. Hence

    X_k = g(X_{k−1}, J(λ)) = φ_k(J(λ)) =
        [ a_1(k)  a_2(k)   ⋯    a_m(k) ]
        [         a_1(k)   ⋱     ⋮     ]
        [                  ⋱    a_2(k) ]
        [                       a_1(k) ].
Clearly, the diagonal elements of Xk satisfy a1 (k) = φk (λ) = xk and hence tend to
x∗ . It remains to show that the elements in the strictly upper triangular part of Xk
converge.
From (1.4) we know that

    a_j(k) = (1/(j − 1)!) (d^{j−1}/dt^{j−1}) φ_k(t) |_{t=λ}.

Hence

    a_2(k) = (dφ_k/dt)(t)|_{t=λ} = (d/dt) g(φ_{k−1}(t), t)|_{t=λ}
           = [ ∂g/∂x(φ_{k−1}(t), t) φ'_{k−1}(t) + ∂g/∂t(φ_{k−1}(t), t) ]|_{t=λ}
           = ∂g/∂x(x_{k−1}, λ) φ'_{k−1}(λ) + ∂g/∂t(x_{k−1}, λ)
           = ∂g/∂x(x_{k−1}, λ) a_2(k − 1) + ∂g/∂t(x_{k−1}, λ).

Since |∂g/∂x(x_*, λ)| < 1, by Problem 4.7 it follows that

    a_2(k) → ∂g/∂t(x_*, λ) / (1 − ∂g/∂x(x_*, λ))   as k → ∞.    (4.28)
As an induction hypothesis suppose that a_i(k) has a limit as k → ∞ for i = 2: j − 1.
Then by the chain rule

    a_j(k) = (1/(j − 1)!) (d^{j−1}/dt^{j−1}) φ_k(t)|_{t=λ}
           = (1/(j − 1)!) (d^{j−1}/dt^{j−1}) g(φ_{k−1}(t), t)|_{t=λ}
           = (1/(j − 1)!) (d^{j−2}/dt^{j−2}) [ ∂g/∂x(φ_{k−1}(t), t) φ'_{k−1}(t) + ∂g/∂t(φ_{k−1}(t), t) ]|_{t=λ}
           = (1/(j − 1)!) [ ∂g/∂x(φ_{k−1}(t), t) (d^{j−1}/dt^{j−1}) φ_{k−1}(t) + τ_j^{(k)}(φ_{k−1}(t), t) ]|_{t=λ}
           = ∂g/∂x(x_{k−1}, λ) a_j(k − 1) + (1/(j − 1)!) τ_j^{(k)}(x_{k−1}, λ),    (4.29)

where τ_j^{(k)}(x_{k−1}, λ) is a sum of terms comprising products of one or more elements
a_i(k − 1), i = 1: j − 1, and derivatives of g evaluated at (x_{k−1}, λ), the number of
terms in the sum depending on j but not k. By the inductive hypothesis, and since
x_{k−1} → x_* as k → ∞, τ_j^{(k)}(x_{k−1}, λ) has a limit as k → ∞. Hence by Problem 4.7,
a_j(k) has a limit as k → ∞, as required.
The notation “g(xk , λ)” in Theorem 4.15 may seem unnecessarily complicated. It
is needed in order for the theorem to allow either or both of the possibilities that
(a) the iteration function g depends on A,
(b) the starting matrix X0 depends on A.
For the Newton iteration (4.27), both g and X0 depend on A, while for the sign
iteration in (4.30) below only X0 depends on A.
Notice that Theorem 4.15 does not specify the off-diagonal elements of the limit
matrix X∗ . These can usually be deduced from the equation X∗ = g(X∗ , A) together
with knowledge of the eigenvalues of X∗ . For example, for the Newton square root
iteration we know that X_* = ½(X_* + X_*^{−1}A), or X_*² = A, and, from knowledge of
the scalar iteration, that X∗ has spectrum in the open right half-plane provided that
A has no eigenvalues on R− . It follows that X∗ is the principal square root (see
Theorem 1.29). A variant of Theorem 4.15 exists that assumes that f is analytic and
guarantees convergence to X∗ = f (J(λ)); see Iannazzo [307, ].
In the special case where the iteration function does not depend on the parameter
λ, the limit matrix must be diagonal, even though the starting matrix is a Jordan
block.
Corollary 4.16 (Iannazzo). Under the conditions of Theorem 4.15, if the iteration
function g does not depend on λ then the limit X∗ of the matrix iteration is diagonal.
Proof. In the notation of the proof of Theorem 4.15 we need to show that aj (k) →
0 as k → ∞ for j = 2: m. By assumption, ∂g/∂t ≡ 0 and so a2 (k) → 0 as k → ∞
by (4.28). An inductive proof using (4.29) then shows that aj (k) → 0 as k → ∞ for
j = 3: m.
Table 4.4. Square root iteration (4.27) and sign iteration (4.30) applied to Wilson matrix in
single precision arithmetic. (A^{1/2})_{11} = 2.389 to four significant figures, while sign(A) = I.
Consider the Wilson matrix

    A = [ 5  7  6  5;  7 10  8  7;  6  8 10  9;  5  7  9 10 ],

which is symmetric positive definite and moderately ill conditioned, with κ₂(A) ≈
2984. For such a benign matrix, we might expect no difficulty in computing A^{1/2}.
The computed Xk behave as they would in exact arithmetic up to around iteration
4, but thereafter the iterates rapidly diverge. It turns out that iteration (4.27) is
unstable unless A has very closely clustered eigenvalues. The instability is related
to the fact that (4.27) fails to converge for some matrices X0 in a neighbourhood of
A1/2 ; see Section 6.4 for the relevant analysis.
Let us now modify the square root iteration by replacing A in the iteration formula
by I:

    Y_{k+1} = ½(Y_k + Y_k^{−1}),    Y_0 = A.    (4.30)
For any A for which sign(A) is defined, Yk converges quadratically to sign(A). (Note,
for consistency with (4.27), that sign(A) is one of the square roots of I.) Iteration
(4.30) is stable for all A (see Theorem 5.13), and Table 4.4 confirms the stability for
the Wilson matrix.
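A minimal MATLAB sketch of this experiment (illustrative only; it simply prints
residuals for the first ten iterations of each iteration in single precision) is:

A = single([5 7 6 5; 7 10 8 7; 6 8 10 9; 5 7 9 10]);   % Wilson matrix
X = A;  Y = A;
for k = 1:10
    X = (X + X\A)/2;       % square root iteration (4.27): eventually diverges
    Y = (Y + inv(Y))/2;    % sign iteration (4.30): converges to I and stays there
    fprintf('%2d  %9.2e  %9.2e\n', k, norm(X*X - A)/norm(A), norm(Y - eye(4)))
end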
In order to understand fully the behaviour of a matrix iteration in finite precision
arithmetic we would like to
• bound or estimate the minimum relative error ‖X_* − X̂_k‖/‖X_*‖ over the computed
iterates X̂_k, and determine whether this error is consistent with the conditioning of
the problem,
• determine how rounding errors introduced during the iteration are propagated
(amplified or damped) by subsequent iterations.
The first task is very difficult for most iterations, though the conditioning can
be determined, as shown in the previous chapter. To make progress on this task we
examine what happens when X0 is very close to X∗ , which leads to the notion of
limiting accuracy. The second task, which is related to the first, is also difficult when
considered over the whole iteration, but near to convergence the propagation of errors
is more amenable to analysis.
We will use the following definition of stability of an iteration. We write L^i(X) to
denote the ith power of the Fréchet derivative L at X, defined by i-fold composition;
thus L³(X, E) ≡ L(X, L(X, L(X, E))).
Definition 4.17 (stability). Consider an iteration Xk+1 = g(Xk ) with a fixed point
X. Assume that g is Fréchet differentiable at X. The iteration is stable in a neigh-
borhood of X if the Fréchet derivative Lg (X) has bounded powers, that is, there exists
a constant c such that ‖L_g^i(X)‖ ≤ c for all i > 0.
The definition therefore ensures that in a stable iteration sufficiently small errors
introduced near a fixed point have a bounded effect, to first order, on succeeding
iterates.
To test for stability we need to find the Fréchet derivative of the iteration function
g and then determine the behaviour of its powers, possibly by working with the Kro-
necker matrix form of the Fréchet derivative. Useful here is the standard result that
a linear operator on Cn×n (or its equivalent n2 × n2 matrix) is power bounded if its
spectral radius is less than 1 (see Problem 4.6) and not power bounded if its spectral
radius exceeds 1. The latter property can be seen from the fact that ‖L_g^k(X)‖ ≥ |λ|^k
for any eigenvalue λ of L_g(X) (cf. (3.19)). We will sometimes find the pleasing situa-
tion that L_g(X) is idempotent, that is, L_g²(X, E) = L_g(X, L_g(X, E)) = L_g(X, E), in
which case power boundedness is immediate.
Note that any superlinearly convergent scalar iteration xk+1 = g(xk ) has zero
derivative g ′ at a fixed point, so for such scalar iterations convergence implies stability.
For matrix iterations, however, the Fréchet derivative is not generally zero at a fixed
point. For example, for the sign iteration (4.30) it is easy to see that at a fixed point Y
we have Lg (Y, E) = 12 (E − Y EY ) (see Theorem 5.13). Although Y 2 = I, Y E 6= EY
98 Techniques for General Functions
in general and so Lg (Y, E) 6= 0. This emphasizes that stability is a more subtle and
interesting issue for matrices than in the scalar case.
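This is easy to confirm numerically; the following sketch (illustrative only) forms
sign(A) from an eigendecomposition of a random A and checks the idempotency of
L_g(Y, ·):

% Verify that Lg(Y,E) = (E - Y*E*Y)/2 is idempotent at a fixed point Y = sign(A).
rng(1);  n = 8;  A = randn(n);
[V,D] = eig(A);
Y = real(V*diag(sign(real(diag(D))))/V);   % sign(A) via the eigendecomposition
Lg = @(E) (E - Y*E*Y)/2;
E = randn(n);
norm(Lg(Lg(E)) - Lg(E),'fro')              % zero up to rounding error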
An important special case is when the underlying function f has the property that
f(f(A)) = f(A) for all A for which f is defined, that is, f is idempotent. For such
an f we can show that L_f(X) is idempotent at X = f(X).

Theorem 4.18. Let f be idempotent and Fréchet differentiable at X = f(X). Then
L_f(X) is idempotent.

Proof. Let h(A) = f(f(A)). The chain rule gives L_h(X, E) = L_f(f(X), L_f(X, E)) =
L_f(X, L_f(X, E)), since f(X) = X. But because f is idempotent, h(A) ≡ f(A) and so
L_h(X, E) = L_f(X, E). Hence L_f(X, L_f(X, E)) = L_f(X, E), which shows that L_f(X)
is idempotent.
Stability is determined by the Fréchet derivative of the iteration function, not that
of f . However, these two derivatives are one and the same if we add to the condition
of Theorem 4.18 the conditions that in the iteration Xk+1 = g(Xk ) the function g is
independent of the starting matrix X0 and that the iteration is superlinearly conver-
gent when started sufficiently close to a fixed point. These conditions are satisfied for
the matrix sign function and the unitary polar factor, and the corresponding itera-
tions of interest, but not for matrix roots (since (A^{1/p})^{1/p} ≠ A^{1/p} and the iteration
function g invariably depends on X0 ).
Hence
• f (A) = A1/2 is not idempotent and the iteration function in (4.27) depends on
X0 = A, so the theorems are not applicable.
• f (A) = sign(A) is idempotent and Fréchet differentiable and the iteration (4.30)
is quadratically convergent with iteration function independent of Y0 = A, so
the theorems apply and the iteration is therefore stable. See Section 5.7 for
details.
The Fréchet derivative also allows us to estimate the limiting accuracy of an iter-
ation.
Definition 4.20 (limiting accuracy). For an iteration Xk+1 = g(Xk ) with a fixed
point X the (relative) limiting accuracy is u‖L_g(X)‖.
To interpret the definition, consider one iteration applied to the rounded exact
solution, X_0 = X + E_0, where ‖E_0‖ ≲ u‖X‖. From (4.31) we have

    ‖E_1‖ ≲ ‖L_g(X, E_0)‖ ≤ ‖L_g(X)‖ ‖E_0‖ ≲ u‖L_g(X)‖ ‖X‖,
and so the limiting accuracy is a bound for the relative error kX − X1 k/kXk. We can
therefore think of the limiting accuracy as the smallest error we can reasonably expect
to achieve in floating point arithmetic once—and indeed if —the iteration enters an
O(u) neighbourhood of a fixed point. Limiting accuracy is once again an asymptotic
property.
While stability corresponds to the boundedness of the powers of Lg (X), which
depends only on the eigenvalues of Lg (X), the limiting accuracy depends on the
norm of Lg (X), and so two stable iterations can have quite different limiting accuracy.
However, neither of these two notions necessarily gives us a reliable estimate of the
accuracy of a computed solution or the size of its residual. Clearly, an unstable
iteration may never achieve its limiting accuracy, because instability may prevent it
reaching the region of uncertainty around the solution whose size limiting accuracy
measures.
Finally, it is important to note that the Fréchet derivative analysis treats the
propagation of errors by the exact iteration. In practice, rounding errors are incurred
during the evaluation of the iteration formula, and these represent another source of
error that is dependent on how the formula is evaluated. For example, as noted in
Section 4.4.3, rational functions can be evaluated in several ways and these potentially
have quite different numerical stability properties.
Analysis based on the Fréchet derivative at the solution will prove to be informa-
tive for many of the iterations considered in this book, but because of the limitations
explained above this analysis cannot always provide a complete picture of the numer-
ical stability of a matrix iteration.
4.10. Preprocessing
In an attempt to improve the accuracy of an f (A) algorithm we can preprocess the
data. Two available techniques are argument reduction (or translation) and balancing.
Both aim to reduce the norm of the matrix, which is important when Taylor or Padé
approximants—most accurate near the origin—are to be applied.
Argument reduction varies slightly depending on the function. For the matrix
exponential, eA−µI = e−µ eA , so any multiple of I can be subtracted from A. For
trigonometric functions such as the cosine,
    min_{µ∈R} ‖A − µI‖_∞ = ½( max_i(a_{ii} + r_i) + max_i(−a_{ii} + r_i) ),    (4.33)

where r_i = Σ_{j≠i} |a_{ij}|, and the optimal µ is

    µ = ½( max_i(a_{ii} + r_i) − max_i(−a_{ii} + r_i) ).

The inner max terms in the last expression are the rightmost point and the negative
of the leftmost point in the union of the Gershgorin discs for A. It is clear that the
optimal shift µ must make these extremal points equidistant from the origin; hence it
must satisfy max_i(a_{ii} − µ + r_i) = max_i(−a_{ii} + µ + r_i), that is, µ = ½(max_i(a_{ii} + r_i) −
max_i(−a_{ii} + r_i)). The formula (4.33) follows on substitution of the optimal µ.
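In MATLAB the optimal shift and the reduced norm can be computed directly (a
sketch, for a matrix A with real diagonal):

r  = sum(abs(A),2) - abs(diag(A));                 % r_i = sum_{j ~= i} |a_ij|
mu = (max(diag(A) + r) - max(-diag(A) + r))/2;     % optimal shift
norm(A - mu*eye(size(A)), inf)                     % the minimal value (4.33)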
In the computation of scalar elementary functions it is well known that special
techniques must be used to avoid severe loss of accuracy in argument reduction for
large arguments [441, , Chap. 8]. The standard techniques are not directly appli-
cable to the matrix case, so we must recognize that argument reduction is potentially
a significant source of error.
Balancing is a heuristic that attempts to equalize the norms of the ith row and ith
column, for each i, by a diagonal similarity transformation. It is known that balancing
in the 2-norm is equivalent to minimizing kD−1 ADkF over all nonsingular diagonal
D [454, ]. Nowadays, “balancing” is synonymous with the balancing algorithms
in LAPACK [12] and MATLAB [414], which compute B = D−1 AD, where D is a
permuted diagonal matrix with diagonal elements powers of the machine base chosen
so that the 1-norms of the ith row and ith column of B are of similar magnitude for
all i. Balancing is an O(n2 ) calculation that can be performed without roundoff. It
is not guaranteed to reduce the norm, so it is prudent to replace A by the balanced
B only if kBk < kAk.
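In MATLAB this precaution amounts to one call to balance followed by a norm check
(a minimal sketch):

[D,B] = balance(A);          % B = D\A*D, with D a permuted diagonal scaling
if norm(B,1) < norm(A,1)
    A = B;                   % accept the balanced matrix only if its norm decreased
end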
Balancing can be combined with argument reduction. Since trace(A) = trace(D−1 AD)
and the balancing transformation is independent of the diagonal elements of the ma-
trix, argument reduction in the Frobenius norm yields the same shift before balancing
as after. Therefore it makes no difference in which order these two operations are done.
In some special cases we can say more about argument reduction and balancing.
Recall from Section 2.3 that an intensity matrix is a matrix Q ∈ R^{n×n} such that
q_{ij} ≥ 0 for i ≠ j and Σ_{j=1}^{n} q_{ij} = 0, i = 1: n.
Corollary 4.22 (Melloy and Bennett). Let Q ∈ R^{n×n} be an intensity matrix. Then

    min_{µ∈R} ‖Q + µI‖_∞ = max_i |q_{ii}| = ‖Q‖_∞/2.

The minimum is attained for µ = max_i |q_{ii}|, for which Q + µI is nonnegative with
row sums all equal to µ.

Proof. In the notation of Theorem 4.21 (b), we have q_{ii} + r_i = 0, so −q_{ii} + r_i =
2|q_{ii}|, and the result then follows from the theorem.
Corollary 4.22 shows that for an intensity matrix the optimal shift reduces the
∞-norm by a factor 2. We now show that the shifted matrix is already of minimal
∞-norm under diagonal similarities. We need the following more general result.
Theorem 4.25. Let A ∈ C^{n×n} have the Jordan canonical form (1.2), with distinct
eigenvalues λ_1, …, λ_s. Then

    ‖f(A)‖_p ≤ n_max κ_p(Z) max_{i=1:s, j=0:n_i−1} |f^{(j)}(λ_i)| / j!,    p = 1:∞.    (4.36)
Theorem 4.26 (Trefethen and Embree). Let A ∈ C^{n×n} and ǫ > 0. Let f be analytic
on and inside a closed contour Γ_ǫ that encloses Λ_ǫ(A). Then

    ‖f(A)‖₂ ≤ (L_ǫ/(2πǫ)) max_{z∈Γ_ǫ} |f(z)|,    (4.38)

where L_ǫ is the arc length of Γ_ǫ. In particular,

    ‖f(A)‖₂ ≤ (ρ_ǫ/ǫ) max_{θ∈[0,2π]} |f(ρ_ǫ e^{iθ})|,    (4.39)

where ρ_ǫ = ρ_ǫ(A) = max{ |z| : z ∈ Λ_ǫ(A) } is the pseudospectral radius.

Proof. To prove (4.38) let Γ := Γ_ǫ in the Cauchy integral (1.12), take norms,
and use the fact that ‖(zI − A)^{−1}‖ ≤ ǫ^{−1} on and outside Λ_ǫ(A). The bound (4.39) is
obtained from (4.38) by taking Γ_ǫ to be a circle with centre 0 and radius ρ_ǫ.
The next result does not involve a similarity taking A to Jordan form but it needs
knowledge of the derivatives of f on a larger region than just the spectrum. Denote
by conv(S) the convex hull of the set S.
Theorem 4.27 (Young). Let A ∈ Cn×n and let f be analytic in a convex open set
containing Λ(A). Then
!2 1/2
X n kAkiF supz∈conv(Λ(A)) |f (i) (z)|
n−1
kf (A)kF ≤ . (4.40)
i=0
i+1 i!
Now it can be shown that |N|^r = (n_{ij}^{(r)}) satisfies

    n_{ij}^{(r)} = 0 for j < i + r,
    n_{ij}^{(r)} = Σ_{(s_0,…,s_r)∈S_{ij}^{(r)}} |t_{s_0,s_1}| |t_{s_1,s_2}| ⋯ |t_{s_{r−1},s_r}| for j ≥ i + r.
Theorem 4.29 (Gil). Let A ∈ C^{n×n} and let f be analytic in a convex open set
containing Λ(A). Then

    ‖f(A)‖₂ ≤ Σ_{i=0}^{n−1} sup_{z∈conv(Λ(A))} |f^{(i)}(z)| Δ(A)^i (i!)^{−3/2}.    (4.43)
Corollary 4.22 is due to Melloy and Bennett [422, ]. Further analysis of
balancing can be found in Ström [545, ] and Fenner and Loizou [183, ].
Theorem 4.23 is from Ström [545, ]. Corollary 4.24 generalizes a result of Melloy
and Bennett [422, ] stated for a shifted intensity matrix.
Theorem 4.27 is from Young [620, ]. Theorem 4.26 is from Trefethen and Em-
bree [573, ]. Theorem 4.28 is from Golub and Van Loan [224, , Thm. 11.2.2]
and it appeared first in Van Loan [592, , Thm. 4]. Theorem 4.29 is from Gil [214,
]. For some other approaches to bounding kf (A)k see Crouzeix [123, ] and
Greenbaum [230, ].
Problems
Since the purpose of mathematics is to solve problems,
it is impossible to judge one’s progress without
breaking a lance on a few problems from stage to stage.
— RICHARD BELLMAN, Introduction to Matrix Analysis (1970)
4.1. Let A ∈ C^{m×n}, B ∈ C^{n×m}, and C = [0, A; B, 0] ∈ C^{(m+n)×(m+n)}. Show that if
f(z) = Σ_{i=0}^∞ a_i z^i then (within the radius of convergence)

    f(C) = [ Σ_{i=0}^∞ a_{2i}(AB)^i,      A Σ_{i=0}^∞ a_{2i+1}(BA)^i;
             B Σ_{i=0}^∞ a_{2i+1}(AB)^i,  Σ_{i=0}^∞ a_{2i}(BA)^i ].
4.3. Let T = [T_{11}, T_{12}; 0, T_{22}] be block upper triangular with F = f(T) defined,
and suppose that X satisfies the Sylvester equation T_{11}X − XT_{22} = T_{12}. Show
that then

    [T_{11}, 0; 0, T_{22}] = [I, −X; 0, I]^{−1} [T_{11}, T_{12}; 0, T_{22}] [I, −X; 0, I],

and deduce that

    F_{12} = f(T_{11})X − Xf(T_{22}).    (4.46)

Show that the formulae (4.44) and (4.46) are equivalent.
4.4. (Parlett [459, ]) Let S, T ∈ Cn×n be upper triangular and let X = f (T −1 S),
Y = f (ST −1 ). Show that
SX − Y S = 0, T X − Y T = 0, (4.47)
and hence show how to compute X and Y together, by a finite recurrence, without
explicitly forming T −1 S. When does your recurrence break down?
4.5. Consider the conditions of Theorem 4.15 under the weaker assumption that
|∂g/∂x(x_*, λ)| = 1. Construct examples with n = 2 to show that the corresponding
matrix iteration may or may not converge.
4.6. Show that A ∈ C^{n×n} is power bounded (that is, for any norm there exists a
constant c such that ‖A^k‖ ≤ c for all k ≥ 0) if ρ(A) < 1. Give a necessary and
sufficient condition for A to be power bounded.
4.7. (Elsner [176, ]) Consider the recurrence yk+1 = ck yk + dk , where ck → c
and dk → d as k → ∞, with |c| < 1. Show that limk→∞ yk = d/(1 − c).
4.8. (Kahan [326, ]) Show that A ∈ Cn×n is nilpotent of index k if and only if
trace(Ai ) = 0, i = 1: k.
4.9. (Research problem) Develop bounds for kf (A)−r(A)k for nonnormal A and r
a best L∞ approximation or Padé approximant, for any suitable norm. Some bounds
for particular f and Padé approximants can be found in later chapters.
4.10. (Research problem) Develop new bounds on kf (A)k to add to those in
Section 4.11.
Chapter 5
Matrix Sign Function
The scalar sign function is defined for z ∈ C lying off the imaginary axis by

    sign(z) = 1 if Re z > 0,    sign(z) = −1 if Re z < 0.
The matrix sign function can be obtained from any of the definitions in Chapter 1.
Note that in the case of the Jordan canonical form and interpolating polynomial
definitions, the derivatives sign(k) (z) are zero for k ≥ 1. Throughout this chapter,
A ∈ Cn×n is assumed to have no eigenvalues on the imaginary axis, so that sign(A)
is defined. Note that this assumption implies that A is nonsingular.
As we noted in Section 2.4, if A = ZJZ −1 is a Jordan canonical form arranged
so that J = diag(J1 , J2 ), where the eigenvalues of J1 ∈ Cp×p lie in the open left
half-plane and those of J_2 ∈ C^{q×q} lie in the open right half-plane, then

    sign(A) = Z [−I_p, 0; 0, I_q] Z^{−1}.    (5.1)
Two other representations have some advantages. First is the particularly concise
formula (see (5.5))

    sign(A) = A(A²)^{−1/2},    (5.2)

which generalizes the scalar formula sign(z) = z/(z²)^{1/2}. Recall that B^{1/2} denotes the
principal square root of B (see Section 1.7). Note that A having no pure imaginary
eigenvalues is equivalent to A² having no eigenvalues on R⁻. Next, sign(A) has the
integral representation (see Problem 5.3)

    sign(A) = (2/π) A ∫₀^∞ (t²I + A²)^{−1} dt.    (5.3)
Theorem 5.1 (properties of the sign function). Let A ∈ Cn×n have no pure imagi-
nary eigenvalues and let S = sign(A). Then
(a) S 2 = I (S is involutory);
(b) S is diagonalizable with eigenvalues ±1;
(c) SA = AS;
(d) if A is real then S is real;
(e) (I + S)/2 and (I − S)/2 are projectors onto the invariant subspaces associated
with the eigenvalues in the right half-plane and left half-plane, respectively.
Proof. The properties follow from (5.1)–(5.3). Of course, properties (c) and (d)
hold more generally for matrix functions, as we know from Chapter 1 (see Theo-
rem 1.13 (a) and Theorem 1.18).
Although sign(A) is a square root of the identity matrix, it is not equal to I or
−I unless the spectrum of A lies entirely in the open right half-plane or open left
half-plane, respectively. Hence, in general, sign(A) is a nonprimary square root of I.
Moreover, although sign(A) has eigenvalues ±1, its norm can be arbitrarily large.
The early appearance of this chapter in the book is due to the fact that the sign
function plays a fundamental role in iterative methods for matrix roots and the polar
decomposition. The definition (5.2) might suggest that the sign function is a “special
case” of the square root. The following theorem, which provides an explicit formula
for the sign of a block 2 × 2 matrix with zero diagonal blocks, shows that, if anything,
the converse is true: the square root can be obtained from the sign function (see
(5.4)). The theorem will prove useful in the next three chapters.
Theorem 5.2 (Higham, Mackey, Mackey, and Tisseur). Let A, B ∈ Cn×n and sup-
pose that AB (and hence also BA) has no eigenvalues on R− . Then
    sign([0, A; B, 0]) = [0, C; C^{−1}, 0],

where C = A(BA)^{−1/2}.

Proof. The matrix P = [0, A; B, 0] cannot have any eigenvalues on the imaginary
axis, because if it did then P² = [AB, 0; 0, BA] would have an eigenvalue on R⁻. Hence
sign(P) is defined and

    sign(P) = P(P²)^{−1/2} = [0, A; B, 0] [AB, 0; 0, BA]^{−1/2}
            = [0, A; B, 0] [(AB)^{−1/2}, 0; 0, (BA)^{−1/2}]
            = [0, A(BA)^{−1/2}; B(AB)^{−1/2}, 0] =: [0, C; D, 0].

Since the square of the matrix sign of any matrix is the identity,

    I = (sign(P))² = [0, C; D, 0]² = [CD, 0; 0, DC],

so that D = C^{−1}, as required.
In addition to the association with matrix roots and the polar decomposition
(Chapter 8), the importance of the sign function stems from its applications to Ric-
cati equations (Section 2.4), the eigenvalue problem (Section 2.5), and lattice QCD
(Section 2.7).
In this chapter we first give perturbation theory for the matrix sign function and
identify appropriate condition numbers. An expensive, but stable, Schur method for
sign(A) is described. Then Newton’s method and a rich Padé family of iterations,
having many interesting properties, are described and analyzed. How to scale and how
to terminate the iterations are discussed. Then numerical stability is considered, with
the very satisfactory conclusion that all sign iterations of practical interest are sta-
ble. Numerical experiments illustrating these various features are presented. Finally,
best L∞ rational approximation via Zolotarev’s formulae, of interest for Hermitian
matrices, is described.
As we will see in Chapter 8, the matrix sign function has many connections with
the polar decomposition, particularly regarding iterations for computing it. Some
of the results and ideas in Chapter 8 are applicable, with suitable modification, to
the sign function, but are not discussed here to avoid repetition. See, for example,
Problem 8.26.
where L(A, ∆A) is the Fréchet derivative of the matrix sign function at A in the
direction ∆A. Now from (A + ∆A)(S + ∆S) = (S + ∆S)(A + ∆A) we have
A∆S − ∆SA = S∆A − ∆AS + ∆S∆A − ∆A∆S = S∆A − ∆AS + o(k∆Ak), (5.7)
Theorem 5.3 (Kenney and Laub). The Fréchet derivative L = Lsign (A, ∆A) of the
matrix sign function satisfies
N L + LN = ∆A − S∆AS, (5.9)
Proof. Since the eigenvalues of N lie in the open right half-plane, the Sylvester
equation (5.9) has a unique solution L which is a linear function of ∆A and, in view
of (5.8), differs from ∆S = sign(A + ∆A) − S by o(k∆Ak). Hence (5.6) implies that
L = L(A, ∆A).
By applying the vec operator and using the relation (B.16) we can rewrite (5.9)
as

    P vec(L) = (I_{n²} − S^T ⊗ S) vec(∆A),    where    P = I ⊗ N + N^T ⊗ I.

Hence

    max_{‖∆A‖_F=1} ‖L(A, ∆A)‖_F = max_{‖∆A‖_F=1} ‖P^{−1}(I_{n²} − S^T ⊗ S) vec(∆A)‖₂
                                = ‖P^{−1}(I_{n²} − S^T ⊗ S)‖₂.

The (relative) condition number of sign(A) in the Frobenius norm is therefore

    κ_sign(A) := cond_rel(sign, A) = (‖A‖_F/‖S‖_F) ‖P^{−1}(I_{n²} − S^T ⊗ S)‖₂.    (5.10)
If S = I, which means that all the eigenvalues of A are in the open right half-plane,
then cond(S) = 0, which corresponds to the fact that the eigenvalues remain in this
half-plane under sufficiently small perturbations of A.
To gain some insight into the condition number, suppose that A is diagonalizable:
A = ZDZ^{−1}, where D = diag(λ_i). Then S = ZD_S Z^{−1} and N = ZD_N Z^{−1}, where
D_S = diag(σ_i) and D_N = diag(σ_i λ_i), with σ_i = sign(λ_i). Hence

    κ_sign(A) = (‖A‖_F/‖S‖_F) ‖(Z^{−T} ⊗ Z)·(I ⊗ D_N + D_N ⊗ I)^{−1}(I_{n²} − D_S^T ⊗ D_S)·(Z^T ⊗ Z^{−1})‖₂.

The diagonal matrix in the middle has elements (1 − σ_i σ_j)/(σ_i λ_i + σ_j λ_j), which are
either zero or of the form 2/|λ_i − λ_j|. Hence

    κ_sign(A) ≤ 2 κ₂(Z)² max{ 1/|λ_i − λ_j| : Re λ_i Re λ_j < 0 } ‖A‖_F/‖S‖_F.    (5.11)
Equality holds in this bound for normal A, for which Z can be taken to be unitary.
The gist of (5.11) is that the condition of S is bounded in terms of the minimum
distance between eigenvalues across the imaginary axis and the square of the condition
of the eigenvectors. Note that (5.11) is precisely the bound obtained by applying
Theorem 3.15 to the matrix sign function.
One of the main uses of κsign is to indicate the sensitivity of sign(A) to perturba-
tions in A, through the perturbation bound (3.3), which we rewrite here for the sign
function as
k sign(A + E) − sign(A)kF kEkF
≤ κsign (A) + o(kEkF ). (5.12)
k sign(A)kF kAkF
This bound is valid as long as sign(A + tE) is defined for all t ∈ [0, 1]. It is instructive
to see what can go wrong when this condition is not satisfied. Consider the example,
from [347, ],
    A = diag(1, −ǫ²),    E = diag(0, 2ǫ²),    0 < ǫ ≪ 1.
Here sign(A) = diag(1, −1) and sign(A + E) = diag(1, 1), so the left-hand side of
(5.12) is 2/√2, yet the bound would give

    2/√2 ≤ (2/(√2(1 + ǫ²))) · 2ǫ² + o(ǫ²) = 2√2 ǫ² + o(ǫ²).
This bound is clearly incorrect. The reason is that the perturbation E causes eigenval-
ues to cross the imaginary axis; therefore sign(A + tE) does not exist for all t ∈ [0, 1].
Referring back to the analysis at the start of this section, we note that (5.7) is valid
for k∆AkF < kEkF /3, but does not hold for ∆A = E, since then ∆S 6= O(k∆Ak).
Another useful characterization of the Fréchet derivative is as the limit of a matrix
iteration; see Theorem 5.7.
Consider now how to estimate κsign (A). We need to compute a norm of B =
P −1 (In2 − S T ⊗ S). For the 2-norm we can use Algorithm 3.20 (the power method).
Alternatively, Algorithm 3.22 can be used to estimate the 1-norm. In both cases we
need to compute L(A, E), which if done via (5.9) requires solving a Sylvester equation
involving N ; this can be done via a matrix sign evaluation (see Section 2.4), since N
is positive stable. We can compute L⋆ (X, E) in a similar fashion, solving a Sylvester
equation of the same form. Alternatively, L(A, E) can be computed using iteration
(5.23) or estimated by finite differences. All these methods require O(n3 ) operations.
It is also of interest to understand the conditioning of the sign function for A ≈
sign(A), which is termed the asymptotic conditioning. The next result provides useful
bounds.
Theorem 5.4 (Kenney and Laub). Let A ∈ Cn×n have no pure imaginary eigenval-
ues and let S = sign(A). If k(A − S)Sk2 < 1, then
N ∆S + ∆SN = ∆A − S∆AS,
    ‖∆S‖_F ≤ (‖S‖₂² + 1)‖∆A‖_F / (2(1 − ‖G‖₂)),

which gives the upper bound.
Now let σ = ‖S‖₂ and Sv = σu, u*S = σv*, where u and v are (unit-norm) left
and right singular vectors, respectively. Putting ∆A = vu* in (5.15) gives

Hence

    (‖S‖₂² − 1)‖∆A‖_F = (σ² − 1)‖∆A‖_F ≤ 2‖∆S‖_F(1 + ‖G‖₂),

which implies the lower bound.
Setting A = S in (5.13) gives (5.14).
Theorem 5.4 has something to say about the attainable accuracy of a computed
sign function. In computing S = sign(A) we surely cannot do better than if we
computed sign(f l(S)). But Theorem 5.4 says that relative errors in S can be magnified
when we take the sign by as much as kSk2 /2, so we cannot expect a relative error in
our computed sign smaller than kSk2 u/2, whatever the method used.
Algorithm 5.5 (Schur method). Given A ∈ Cn×n having no pure imaginary eigen-
values, this algorithm computes S = sign(A) via a Schur decomposition.
Cost: 25n³ flops for the Schur decomposition plus between n³/3 and 2n³/3 flops for
U and 3n³ flops to form S: about 28⅔n³ flops in total.
It is worth noting that the sign of an upper triangular matrix T will usually
have some zero elements in the upper triangle. Indeed, suppose for some j > i that
tii , ti+1,i+1 , . . . , tjj all have the same sign, and let Tij = T (i: j, i: j). Then, since
all the eigenvalues of Tij have the same sign, the corresponding block S(i: j, i: j) of
S = sign(T ) is ±I. This fact could be exploited by reordering the Schur form so that
the diagonal of T is grouped according to sign. Then sign(T) would have the form
[±I, W; 0, ∓I], where W is computed by the Parlett recurrence. The cost of the reordering
may or may not be less than the cost of (redundantly) computing zeros from the first
expression for uij in Algorithm 5.5.
5.3. Newton's Method

The Newton iteration for computing sign(A) is

    X_{k+1} = ½(X_k + X_k^{−1}),    X_0 = A.    (5.16)

The connection of this iteration with the sign function is not immediately obvious,
but in fact the iteration can be derived by applying Newton's method to the equation
X² = I (see Problem 5.8), and of course sign(A) is one solution of this equation
(Theorem 5.1 (a)). The following theorem describes the convergence of the iteration.
Theorem 5.6 (convergence of the Newton sign iteration). Let A ∈ Cn×n have no
pure imaginary eigenvalues. Then the Newton iterates Xk in (5.16) converge quadrat-
ically to S = sign(A), with

    ‖X_{k+1} − S‖ ≤ ½ ‖X_k^{−1}‖ ‖X_k − S‖²    (5.17)

for any consistent norm. Moreover, for k ≥ 1,

    X_k = (I − G_0^{2^k})^{−1} (I + G_0^{2^k}) S,    where G_0 = (A − S)(A + S)^{−1}.    (5.18)
Proof. For λ = re^{iθ} we have λ + λ^{−1} = (r + r^{−1}) cos θ + i(r − r^{−1}) sin θ, and hence
eigenvalues of Xk remain in their open half-plane under the mapping (5.16). Hence
Xk is defined and nonsingular for all k. Moreover, sign(Xk ) = sign(X0 ) = S, and so
Xk + S = Xk + sign(Xk ) is also nonsingular.
Clearly the Xk are (rational) functions of A and hence, like A, commute with S.
Then
1
Xk+1 ± S = Xk + Xk−1 ± 2S
2
1 −1 2
= Xk Xk ± 2Xk S + I
2
1
= Xk−1 (Xk ± S)2 , (5.19)
2
and hence 2
(Xk+1 − S)(Xk+1 + S)−1 = (Xk − S)(Xk + S)−1 .
k+1
Defining Gk = (Xk − S)(Xk + S)−1 , we have Gk+1 = G2k = · · · = G02 . Now
G0 = (A − S)(A + S)−1 has eigenvalues (λ − sign(λ))/(λ + sign(λ)), where λ ∈ Λ(A),
k
all of which lie inside the unit circle since λ is not pure imaginary. Since Gk = G20
and ρ(G0 ) < 1, by a standard result (B.9) Gk → 0 as k → ∞. Hence
Xk = (I − Gk )−1 (I + Gk )S → S as k → ∞. (5.20)
The norm inequality (5.17), which displays the quadratic convergence, is obtained by
taking norms in (5.19) with the minus sign.
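To make the convergence behaviour concrete, the following minimal MATLAB sketch (an illustration added here, not part of the original text) runs the Newton iteration (5.16) on a small upper triangular matrix whose sign function is known in closed form; the roughly squaring error at each step reflects (5.17).

% Newton iteration (5.16) for sign(A); A chosen so that S = sign(A) is known exactly.
A = [2 1; 0 -3];
S = [1 2/5; 0 -1];            % sign(A) for this triangular A
X = A;
for k = 1:8
    X = 0.5*(X + inv(X));     % X_{k+1} = (X_k + X_k^{-1})/2
    fprintf('k = %d, error = %.2e\n', k, norm(X - S, inf));
end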
Theorem 5.6 reveals quadratic convergence of the Newton iteration, but also dis-
plays in (5.18) precisely how convergence occurs: through the powers of the matrix
G_0 converging to zero. Since for any matrix norm,
\[ \|G_0^{2^k}\| \ge \rho\bigl(G_0^{2^k}\bigr) = \max_{\lambda \in \Lambda(A)} \left( \frac{|\lambda - \operatorname{sign}(\lambda)|}{|\lambda + \operatorname{sign}(\lambda)|} \right)^{2^k}, \tag{5.21} \]
Newton–Schulz iteration:
\[ X_{k+1} = \tfrac12\, X_k\bigl(3I - X_k^2\bigr), \qquad X_0 = A. \tag{5.22} \]
Theorem 5.7 (Kenney and Laub). Let A ∈ Cn×n have no pure imaginary eigenval-
ues. With Xk defined by the Newton iteration (5.16), let
\[ Y_{k+1} = \tfrac12\bigl(Y_k - X_k^{-1} Y_k X_k^{-1}\bigr), \qquad Y_0 = E. \tag{5.23} \]
which has quadratic convergence to sign(a). Combining two Newton steps yields y_{k+2} = (y_k^4 + 6y_k^2 + 1)/(4y_k(y_k^2 + 1)), and we can thereby define the quartically convergent iteration
\[ y_{k+1} = \frac{y_k^4 + 6y_k^2 + 1}{4y_k(y_k^2 + 1)}, \qquad y_0 = a. \]
While a lot can be done using arguments such as these, a more systematic development
is preferable. We describe an elegant Padé approximation approach, due to Kenney
and Laub [343, ], that yields a whole table of methods containing essentially all
those of current interest.
For non–pure imaginary z ∈ C we can write
\[ \operatorname{sign}(z) = \frac{z}{(z^2)^{1/2}} = \frac{z}{(1 - (1 - z^2))^{1/2}} = \frac{z}{(1 - \xi)^{1/2}}, \qquad \xi = 1 - z^2, \tag{5.25} \]
\[ x_{k+1} = f_{\ell m}(x_k) := x_k\, \frac{p_{\ell m}(1 - x_k^2)}{q_{\ell m}(1 - x_k^2)}, \qquad x_0 = a. \tag{5.27} \]
Table 5.1 shows the first nine iteration functions fℓm from this family. Note that f11
gives Halley’s method (see Problem 5.12), while f10 gives the Newton–Schulz iteration
(5.22). The matrix versions of the iterations are defined in the obvious way:
Padé iteration:
\[ X_{k+1} = X_k\, p_{\ell m}(I - X_k^2)\, q_{\ell m}(I - X_k^2)^{-1}, \qquad X_0 = A. \tag{5.28} \]
Two key questions are “what can be said about the convergence of (5.28)?” and “how
should the iteration be evaluated?”
The convergence question is answered by the following theorem.
Table 5.1. Iteration functions fℓm from the Padé family (5.27).
Theorem 5.8 (convergence of Padé iterations). Let A ∈ Cn×n have no pure imagi-
nary eigenvalues. Consider the iteration (5.28) with ℓ + m > 0 and any subordinate
matrix norm.
(a) For ℓ ≥ m − 1, if ‖I − A^2‖ < 1 then X_k → sign(A) as k → ∞ and
\[ \|I - X_k^2\| < \|I - A^2\|^{(\ell+m+1)^k}. \]
(b) For ℓ = m − 1 and ℓ = m,
\[ (S - X_k)(S + X_k)^{-1} = \bigl[(S - A)(S + A)^{-1}\bigr]^{(\ell+m+1)^k} \]
The gr are the iteration functions from the Padé table taken in a zig-zag fashion from
the main diagonal and first superdiagonal:
\[ g_1(x) = x, \qquad g_2(x) = \frac{2x}{1 + x^2}, \qquad g_3(x) = \frac{x(3 + x^2)}{1 + 3x^2}, \]
\[ g_4(x) = \frac{4x(1 + x^2)}{1 + 6x^2 + x^4}, \qquad g_5(x) = \frac{x(5 + 10x^2 + x^4)}{1 + 10x^2 + 5x^4}, \qquad g_6(x) = \frac{x(6 + 20x^2 + 6x^4)}{1 + 15x^2 + 15x^4 + x^6}. \]
We know from Theorem 5.8 that the iteration Xk+1 = gr (Xk ) converges to sign(X0 )
with order r whenever sign(X0 ) is defined. These iterations share some interesting
properties that are collected in the next theorem.
Theorem 5.9 (properties of principal Padé iterations). The principal Padé iteration
function gr defined in (5.29) has the following properties.
(a) g_r(x) = \dfrac{(1 + x)^r - (1 - x)^r}{(1 + x)^r + (1 - x)^r}. In other words, g_r(x) = p_r(x)/q_r(x), where p_r(x) and q_r(x) are, respectively, the odd and even parts of (1 + x)^r.
\[ g_r(x) = \frac{2}{r}\, {\sum_{i=0}^{\lceil (r-2)/2 \rceil}}' \; \frac{x}{\sin^2\!\bigl(\tfrac{(2i+1)\pi}{2r}\bigr) + \cos^2\!\bigl(\tfrac{(2i+1)\pi}{2r}\bigr)\, x^2}, \tag{5.30} \]
where the prime on the summation symbol denotes that the last term in the sum is halved when r is odd.
Proof.
(a) See Kenney and Laub [343, 1991, Thm. 3.2].
(b) Recalling that tanh(x) = (e^x − e^{−x})/(e^x + e^{−x}), it is easy to check that
\[ \operatorname{arctanh}(x) = \frac12 \log\frac{1+x}{1-x}. \]
Hence
\[ r \operatorname{arctanh}(x) = \log\left(\frac{1+x}{1-x}\right)^{r/2}. \]
Taking the tanh of both sides gives
\[ \tanh(r \operatorname{arctanh}(x)) = \frac{\bigl(\tfrac{1+x}{1-x}\bigr)^{r/2} - \bigl(\tfrac{1-x}{1+x}\bigr)^{r/2}}{\bigl(\tfrac{1+x}{1-x}\bigr)^{r/2} + \bigl(\tfrac{1-x}{1+x}\bigr)^{r/2}} = \frac{(1+x)^r - (1-x)^r}{(1+x)^r + (1-x)^r} = g_r(x). \]
(d) The partial fraction expansion is obtained from a partial fraction expansion for
the hyperbolic tangent; see Kenney and Laub [345, , Thm. 3].
Some comments on the theorem are in order. The equality in (a) is a scalar equiv-
alent of (b) in Theorem 5.8, and it provides an easy way to generate the gr . Property
(c) says that one rth order principal Padé iteration followed by one sth order iteration
is equivalent to one rsth order iteration. Whether or not it is worth using higher order
iterations therefore depends on the efficiency with which the different iterations can
be evaluated. The properties in (b) and (c) are analogous to properties of the Cheby-
shev polynomials. Figure 5.1 confirms, for real x, that gr (x) = tanh(r arctanh(x))
approximates sign(x) increasingly well near the origin as r increases.
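As an illustrative numerical check (added here, not part of the original text), the following MATLAB snippet evaluates g_r both as tanh(r arctanh x) and via the partial fraction expansion (5.30) for r = 5, confirming that the two forms agree to roundoff:

r = 5; x = linspace(-0.9, 0.9, 7);
g_tanh = tanh(r*atanh(x));
g_pf = zeros(size(x));
imax = ceil((r-2)/2);
for i = 0:imax
    w = 1;
    if mod(r,2) == 1 && i == imax, w = 0.5; end    % last term halved for odd r
    s2 = sin((2*i+1)*pi/(2*r))^2;
    c2 = cos((2*i+1)*pi/(2*r))^2;
    g_pf = g_pf + w*x./(s2 + c2*x.^2);
end
g_pf = (2/r)*g_pf;
disp(max(abs(g_tanh - g_pf)))    % should be of order roundoff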
Some more insight into the convergence, or nonconvergence, of the iteration xk+1 =
g_r(x_k) from (5.29) can be obtained by using Theorem 5.8 (b) to write, in polar form,
\[ \rho_{k+1} e^{i\theta_{k+1}} := (s - x_{k+1})(s + x_{k+1})^{-1} = \bigl[(s - x_k)(s + x_k)^{-1}\bigr]^r = \rho_k^r\, e^{ir\theta_k}, \]
[Figure 5.1: plots of g_r(x) = tanh(r arctanh x) for r = 2, 4, 8, 16 on −3 ≤ x ≤ 3, showing increasingly sharp approximations to sign(x) near the origin as r increases.]
Theorem 5.10 (Kenney and Laub). For the Newton iteration (5.16), if Xk has eigen-
values ±1 for some k then X_{k+p} = sign(A) for 2^p ≥ m, where m is the size of the
largest Jordan block of Xk (which is no larger than the size of the largest Jordan block
of A).
Proof. Let X_k have the Jordan form X_k = ZJ_kZ^{-1}, where J_k = D + N_k, with D = diag(±1) = sign(J_k) and N_k strictly upper triangular. N_k has index of nilpotence m, that is, N_k^m = 0 but all lower powers are nonzero. We can restrict our attention to the convergence of the sequence beginning with J_k to diag(±1), and so we can set Z = I. The next iterate, X_{k+1} = D + N_{k+1}, satisfies, in view of (5.19),
\[ N_{k+1} = \tfrac12\, X_k^{-1} N_k^2. \]
Since N_k has index of nilpotence m, N_{k+1} must have index of nilpotence ⌈m/2⌉. Applying this argument repeatedly shows that for 2^p ≥ m, N_{k+p} has index of nilpotence 1 and hence is zero, as required.
Jordan block of A follows from Theorem 1.36.
An effective way to enhance the initial speed of convergence is to scale the iterates: prior to each iteration, X_k is replaced by µ_kX_k, giving the scaled Newton iteration
\[ X_{k+1} = \tfrac12\bigl(\mu_k X_k + \mu_k^{-1} X_k^{-1}\bigr), \qquad X_0 = A, \tag{5.34} \]
where µ_k is one of the scale factors (5.35) (determinantal scaling), (5.36) (spectral scaling), or (5.37) (norm scaling).
For determinantal scaling, | det(µk Xk )| = 1, so that the geometric mean of the eigen-
values of µk Xk has magnitude 1. This scaling has the property that µk minimizes
d(µ_kX_k), where
\[ d(X) = \sum_{i=1}^{n} (\log |\lambda_i|)^2 \]
and the λ_i are the eigenvalues of X. Hence determinantal scaling tends to bring the
eigenvalues closer to the unit circle; see Problem 5.13.
When evaluating the determinantal scaling factor (5.35) some care is needed to
avoid unnecessary overflow and underflow, especially when n is large. The quan-
tity µk should be within the range of the floating point arithmetic, since its re-
ciprocal has magnitude the geometric mean of the eigenvalues of Xk and hence
lies between the moduli of the smallest and largest eigenvalues. But det(Xk ) can
underflow or overflow. Assuming that an LU factorization PX_k = L_kU_k is computed, where U_k has diagonal elements u_ii, we can rewrite µ_k = |u_11 ⋯ u_nn|^{-1/n} as µ_k = exp((−1/n) Σ_{i=1}^{n} log|u_ii|). The latter expression avoids underflow and overflow; however, cancellation in the summation can produce an inaccurate computed
µk , so it may be desirable to use one of the summation methods from Higham [276,
, Chap. 4].
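As a small illustration (added here; the helper name det_scale is hypothetical), the scale factor can be evaluated in MATLAB from the LU factorization without ever forming det(X_k):

function mu = det_scale(X)
% mu = |det(X)|^(-1/n) via P*X = L*U, summing log|u_ii| to avoid
% overflow and underflow in the determinant itself.
n = size(X,1);
[~, U] = lu(X);
mu = exp(-sum(log(abs(diag(U))))/n);
end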
For spectral scaling, if λn , . . . , λ1 are the eigenvalues of Xk ordered by increasing
magnitude, then µk = |λ1 λn |−1/2 and so µk Xk has eigenvalues of smallest and largest
magnitude |µk λn | = |λn /λ1 |1/2 and |µk λ1 | = |λ1 /λn |1/2 . If λ1 and λn are real, then
in the Cayley metric
\[ C(x, \operatorname{sign}(x)) := \begin{cases} |x-1|/|x+1|, & \operatorname{Re} x > 0, \\ |x+1|/|x-1|, & \operatorname{Re} x < 0, \end{cases} \]
Theorem 5.11 (Barraud). Let the nonsingular matrix A ∈ Cn×n have all real eigen-
values and let S = sign(A). Then, for the Newton iteration (5.34) with spectral scaling,
X_{d+p−1} = sign(A), where d is the number of distinct eigenvalues of A and 2^p ≥ m, where m is the size of the largest Jordan block of A.
Proof. We will need to use the following easily verified properties of the iteration function f(x) = ½(x + 1/x):
X_1 of largest modulus. Hence X_1 has eigenvalues λ_i^{(1)} satisfying |λ_n^{(1)}| ≤ ⋯ ≤ |λ_2^{(1)}| = |λ_1^{(1)}|.
eigenvalues with maximal modulus until, after d − 1 iterations, Xd−1 has eigenvalues
of constant modulus. Then µd−1 Xd−1 has converged eigenvalues ±1 (as does Xd ).
By Theorem 5.10, at most a further p iterations after Xd−1 are needed to dispose of
the Jordan blocks (and during these iterations µk ≡ 1, since the eigenvalues are fixed
at ±1).
For 1 × 1 matrices spectral scaling and determinantal scaling are equivalent, and
both give convergence in at most two iterations (see Problem 5.14). For 2×2 matrices
spectral scaling and determinantal scaling are again equivalent, and Theorem 5.11
tells us that we have convergence in at most two iterations if the eigenvalues are
real. However, slightly more is true: both scalings give convergence in at most two
iterations for any real 2 × 2 matrix (see Problem 5.14).
Determinantal scaling can be ineffective when there is a small group of outlying
eigenvalues and the rest are nearly converged. Suppose that A has an eigenvalue 10^q (q ≥ 1) with the rest all ±1. Then determinantal scaling gives µ_k = 10^{−q/n}, whereas spectral scaling gives µ_k = 10^{−q/2}; the former quantity is close to 1 and hence the determinantally scaled iteration will behave like the unscaled iteration. Spectral
scaling can be ineffective when the eigenvalues of A cluster close to the imaginary
axis (see the numerical examples in Section 5.8).
All three scaling schemes are inexpensive to implement. The determinant det(Xk )
can be computed at negligible cost from the LU factorization that will be used to
compute Xk−1 . The spectral scaling parameter can be cheaply estimated by applying
the power method to Xk and its inverse, again exploiting the LU factorization in the
latter case. Note, however, that for a real spectrum spectral scaling increases the
number of eigenvalues with maximal modulus on each iteration, which makes reliable
implementation of the power method more difficult. The norm scaling is trivial to
compute for the Frobenius norm, and for the 2-norm can be estimated using the power
method (Algorithm 3.19).
The motivation for scaling is to reduce the length of the initial phase during
which the error is reduced below 1. Should we continue to scale throughout the whole
iteration? All three scaling parameters (5.35)–(5.37) converge to 1 as Xk → S, so
scaling does not destroy the quadratic convergence. Nor does it bring any benefit, so
it is sensible to set µk ≡ 1 once the error is sufficiently less than 1.
Lemma 5.12 (Kenney, Laub, Pandey, and Papadopoulos). Let A ∈ Cn×n have no
pure imaginary eigenvalues, let S = sign(A), and let k · k be any subordinate matrix
norm. If kS(A − S)k = ǫ < 1 then
\[ \frac{1-\epsilon}{2+\epsilon}\, \|A - A^{-1}\| \le \|A - S\| \le \frac{1+\epsilon}{2-\epsilon}\, \|A - A^{-1}\| \tag{5.39} \]
and
\[ \frac{\|A^2 - I\|}{\|S\|\,(\|A\| + \|S\|)} \le \frac{\|A - S\|}{\|S\|} \le \|A^2 - I\|. \tag{5.40} \]
The lower bound in (5.40) always holds.
Proof. Let E = A − S. Since S 2 = I, we have A = S + E = (I + ES)S. It is
then straightforward to show that
using the fact that A and S, and hence also E and S, commute. The upper bound
in (5.39) is obtained by postmultiplying by (2I + ES)−1 and taking norms, while
postmultiplying by (I + ES)−1 and taking norms gives the lower bound.
The lower bound in (5.40) is obtained by taking norms in A2 −I = (A−S)(A+S).
For the upper bound, we write the last equation as A − S = (A2 − I)(A + S)−1 and
need to bound ‖(A + S)^{-1}‖. Since A + S = 2S(I + ½S(A − S)), we have
\[ \|(A + S)^{-1}\| = \tfrac12 \bigl\| \bigl(I + \tfrac12 S(A - S)\bigr)^{-1} S^{-1} \bigr\| \le \frac{\tfrac12\|S^{-1}\|}{1 - \tfrac12\epsilon} \le \|S\|. \]
Note that since the iterations of interest satisfy sign(Xk ) = sign(A), the bounds
of Lemma 5.12 are applicable with A replaced by an iterate Xk .
We now describe some possible convergence criteria, using η to denote a con-
vergence tolerance proportional to both the unit roundoff (or a larger value if full
accuracy is not required) and a constant depending on the matrix dimension, n. A
norm will denote any easily computable norm such as the 1-, ∞-, or Frobenius norms.
We begin with the Newton iteration, describing a variety of existing criteria followed
by a new one.
A natural stopping criterion, of negligible cost, is
\[ \delta_{k+1} := \frac{\|X_{k+1} - X_k\|}{\|X_{k+1}\|} \le \eta. \tag{5.41} \]
As discussed in Section 4.9, this criterion is really bounding the error in X_k, rather than X_{k+1}, so it may stop one iteration too late. This drawback can be seen very clearly from (5.39): since X_{k+1} − X_k = ½(X_k^{-1} − X_k), (5.39) shows that ‖X_{k+1} − X_k‖ ≈ ‖S − X_k‖ is an increasingly good approximation as the iteration converges.
The test (5.41) could potentially never be satisfied in floating point arithmetic. The best bound for the error in the computed Ẑ_k = fl(X_k^{-1}), which we assume to be obtained by Gaussian elimination with partial pivoting, is of the form [276, , Sec. 14.3]
\[ \frac{\|\widehat Z_k - X_k^{-1}\|}{\|X_k^{-1}\|} \le c_n\, u\, \kappa(X_k), \tag{5.42} \]
where c_n is a constant. Therefore for the computed sequence X_k, ‖X_{k+1} − X_k‖ ≈ ½‖Ẑ_k − X_k‖ might be expected to be proportional to κ(X_k)‖X_k‖u, suggesting the test δ_{k+1} ≤ κ(X_k)η. Close to convergence, X_{k+1} ≈ X_k ≈ S = S^{-1} and so κ(X_k) ≈ ‖X_{k+1}‖^2. A test δ_{k+1} ≤ ‖X_{k+1}‖^2 η is also suggested by the asymptotic conditioning of the sign function, discussed at the end of Section 5.1. On the other hand, a test of the form δ_{k+1} ≤ ‖X_{k+1}‖η is suggested by Byers, He, and Mehrmann [89, ], based
on a perturbation bound for the sign function. To summarize, there are arguments for using a stopping criterion of the form
\[ \delta_{k+1} \le \|X_{k+1}\|^{p}\, \eta, \qquad p = 0,\ 1,\ \text{or } 2. \tag{5.43} \]
An alternative is to accept X_{k+1} when
\[ \|X_{k+1} - X_k\| \le \bigl( \eta\, \|X_{k+1}\| / \|X_k^{-1}\| \bigr)^{1/2}. \tag{5.44} \]
This is essentially the same test as (4.25), bearing in mind that in the latter bound c = ‖S^{-1}‖/2 ≈ ‖X_k^{-1}‖/2. This bound should overcome the problem of (5.41) of stopping one iteration too late, but unlike (5.43) with p = 1, 2 it takes no explicit account of rounding error effects. A test of this form has been suggested by Benner and Quintana-Ortí [55, ]. The experiments in Section 5.8 give further insight.
For general sign iterations, intuitively appealing stopping criteria can be devised
based on the fact that trace(sign(A)) is an integer, but these are of little practical
use; see Problem 5.16.
The upper bound in (5.40) shows that ‖A − X_k‖/‖X_k‖ ≤ ‖X_k^2 − I‖ and hence suggests stopping when
\[ \|X_k^2 - I\| \le \eta. \tag{5.45} \]
This test is suitable for iterations that already form X_k^2, such as the Schulz iteration (5.22). Note, however, that the error in forming fl(X_k^2 − I) is bounded at best by c_n u‖X_k‖^2 ≈ c_n u‖S‖^2, so when ‖S‖ is large it may not be possible to satisfy (5.45), and a more suitable test is then
\[ \frac{\|X_k^2 - I\|}{\|X_k\|^2} \le \eta. \]
Theorem 5.13 (stability of sign iterations). Let S = sign(A), where A ∈ Cn×n has
no pure imaginary eigenvalues. Let Xk+1 = g(Xk ) be superlinearly convergent to
sign(X0 ) for all X0 sufficiently close to S and assume that g is independent of X0 .
Then the iteration is stable, and the Fréchet derivative of g at S is idempotent and is given by L_g(S, E) = L(S, E) = ½(E − SES), where L(S) is the Fréchet derivative of the matrix sign function at S.
Proof. Since the sign function is idempotent, stability, the idempotence of Lg ,
and the equality of Lg (S) and L(S), follow from Theorems 4.18 and 4.19. The formula
for L(S, E) is obtained by taking N = I in Theorem 5.3.
Theorem 5.13 says that the Fréchet derivative at S is the same for any superlinearly
convergent sign iteration and that this Fréchet derivative is idempotent. Unbounded
propagation of errors near the solution is therefore not possible for any such iteration.
The constancy of the Fréchet derivative is not shared by iterations for all the functions
in this book, as we will see in the next chapter.
Turning to limiting accuracy (see Definition 4.20), Theorem 5.13 yields ‖L_g(S, E)‖ ≤ ½(1 + ‖S‖^2)‖E‖, so an estimate for the limiting accuracy of any superlinearly convergent sign iteration is ‖S‖^2 u. Hence if, for example, κ(S) = ‖S‖^2 ≤ u^{-1/2}, then we can hope to compute the sign function to half precision.
If S commutes with E then Lg (S, E) = 0, which shows that such errors E are
eliminated by the iteration to first order. To compare with what convergence con-
siderations say about E, note first that in all the sign iterations considered here
the matrix whose sign is being computed appears only as the starting matrix and
not within the iteration. Hence if we start the iteration at S + E then the iter-
ation converges to sign(S + E), for sufficiently small kEk (so that the sign exists
and any convergence conditions are satisfied). Given that S has the form (5.1),
any E commuting with S has the form Z diag(F11 , F22 )Z −1 , so that sign(S + E) =
Z sign(diag(−Ip + F11 , Iq + F22 ))Z −1 . Hence there is an ǫ such that for all kEk ≤ ǫ,
sign(S + E) = S. Therefore, the Fréchet derivative analysis is consistent with the
convergence analysis.
Of course, to obtain a complete picture, we also need to understand the effect
of rounding errors on the iteration prior to convergence. This effect is surprisingly
difficult to analyze, even though the iterative methods are built purely from matrix
multiplication and inversion. The underlying behaviour is, however, easy to describe.
Suppose, as discussed above, that we have an iteration for sign(A) that does not
contain A, except as the starting matrix. Errors on the (k − 1)st iteration can be
accounted for by perturbing Xk to Xk + Ek . If there are no further errors then
(regarding Xk + Ek as a new starting matrix) sign(Xk + Ek ) will be computed. The
error thus depends on the conditioning of Xk and the size of Ek . Since errors will
in general occur on each iteration, the overall error will be a complicated function of
κsign (Xk ) and Ek for all k.
We now restrict our attention to the Newton iteration (5.16). First, we note that
the iteration can be numerically unstable: the relative error is not always bounded by
a modest multiple of the condition number κsign (A), as is easily shown by example (see
the next section). Nevertheless, it generally performs better than might be expected,
given that it inverts possibly ill conditioned matrices. We are not aware of any
published rounding error analysis for the computation of sign(A) via the Newton
iteration.
Error analyses aimed at the application of the matrix sign function to invariant
subspace computation (Section 2.5) are given by Bai and Demmel [29, ] and By-
ers, He, and Mehrmann [89, ]. These analyses show that the matrix sign function
may be more ill conditioned than the problem of evaluating the invariant subspaces
corresponding to eigenvalues in the left half-plane and right half-plane. Neverthe-
less, they show that when Newton’s method is used to evaluate the sign function the
computed invariant subspaces are usually about as good as those computed by the
QR algorithm. In other words, the potential instability rarely manifests itself. The
analyses are complicated and we refer the reader to the two papers for details.
In cases where the matrix sign function approach to computing an invariant subspace suffers from instability, iterative refinement can be used to improve the computed subspace [29, ]. Iterative refinement can also be used when the sign function is used to solve algebraic Riccati equations (as described in Section 2.4) [88, ].

Table 5.2. Number of iterations for scaled Newton iteration. The unnamed matrices are (quasi)-upper triangular with normal (0, 1) distributed elements in the upper triangle.

  Matrix                                                none   determinantal   spectral   norm
  Lotkin                                                  25         9              8        9
  Grcar                                                   11         9              9       15
  A(j:j+1, j:j+1) = [1, (j/n)^1000; -(j/n)^1000, 1]       24        16             19       19
  a_jj = 1 + 1000i(j-1)/(n-1)                             24        16             22       22
  a_11 = 1000, a_jj ≡ 1, j ≥ 2                            14        12              6       10
  a_11 = 1 + 1000i, a_jj ≡ 1, j ≥ 2                       24        22              8       19
Finally, we note that all existing numerical stability analysis is for the unscaled
Newton iteration. Our experience is that scaling tends to improve stability, not worsen
it.
Table 5.2 reports the results. The Lotkin matrix is a typical example of how scaling
can greatly reduce the number of iterations. The Grcar example shows how norm
scaling can perform poorly (indeed being worse than no scaling). The third matrix
(real) and fourth matrix (complex) have eigenvalues on a line with real part 1 and
imaginary parts between 0 and 1000. Here, spectral scaling and norm scaling are
both poor. The fifth and sixth matrices, again real and complex, respectively, have
eigenvalues all equal to 1 except for one large outlier, and they are bad cases for
determinantal scaling.
Table 5.3 illustrates the convergence results in Theorems 5.10 and 5.11 by showing the behaviour of the Newton iteration with spectral scaling for J(2) ∈ R^{16×16}, which is a Jordan block with eigenvalue 2. Here and below the last two columns of the table indicate with a tick iterations on which the convergence conditions (5.41) and (5.44) are satisfied for the ∞-norm, with η = n^{1/2}u. In Theorem 5.11, d = 1 and p = 4, and indeed X_{d+p−1} = X_4 = sign(J(2)). At the start of the first iteration, µ_0X_0 has eigenvalues 1, and the remaining four iterations remove the nonnormal part; it is easy to see that determinantal scaling gives exactly the same results.

Table 5.3. Newton iteration with spectral scaling for Jordan block J(2) ∈ R^{16×16}.

  k   ‖S − X_k‖_∞/‖S‖_∞   δ_k      ‖X_k² − I‖_∞/‖X_k‖_∞²   µ_k      (5.41)   (5.44)
  1   2.5e-1              1.8e+0   3.6e-1                   5.0e-1
  2   2.5e-2              2.2e-1   4.8e-2                   1.0e0
  3   3.0e-4              2.5e-2   6.0e-4                   1.0e0
  4   0                   3.0e-4   0                        1.0e0
  5   0                   0        0                                  ✓        ✓

Table 5.4. Newton iteration with determinantal scaling for random A ∈ R^{16×16} with κ_2(A) = 10^{10}; κ_sign(A) = 3 × 10^8, ‖S‖_F = 16.

  k    ‖S − X_k‖_∞/‖S‖_∞   δ_k       ‖X_k² − I‖_∞/‖X_k‖_∞²   µ_k      (5.41)   (5.44)
  1    4.3e3               1.0e0     1.1e-1                   1.0e5
  2    1.5e1               2.8e2     1.3e-1                   6.8e-3
  3    1.9e0               6.3e0     5.9e-2                   1.4e-1
  4    2.1e-1              1.7e0     2.1e-2                   6.1e-1
  5    6.4e-2              2.3e-1    4.3e-3                   9.5e-1
  6    2.0e-3              6.2e-2    1.6e-4                   9.8e-1
  7    4.1e-6              2.0e-3    3.3e-7                   1.0e0
  8    2.1e-9              4.1e-6    8.9e-13
  9    2.1e-9              1.1e-11   3.2e-17                                      ✓
  10   2.1e-9              1.5e-15   3.5e-17                            ✓         ✓
Table 5.4 reports 12 iterations for a random A ∈ R^{16×16} with κ_2(A) = 10^{10} generated in MATLAB by gallery('randsvd',16,1e10,3). Determinantal scaling was used. Note that the relative residual decreases significantly after the error has stagnated. The limiting accuracy of ‖S‖_2^2 u is clearly not relevant here, as the iterates do not approach S sufficiently closely.
Both these examples confirm that the relative change δk+1 is a good estimate
of the relative error in Xk (compare the numbers in the third column with those
immediately to the northwest) until roundoff starts to dominate, but thereafter the
relative error and relative change can behave quite differently.
Finally, Table 5.5 gives examples with large ‖S‖. The matrix is of the form A = QTQ^T, where Q is a random orthogonal matrix and T ∈ R^{16×16} is generated as an upper triangular matrix with normal (0,1) distributed elements and t_ii is replaced by d|t_ii| for i = 1:8 and by −d|t_ii| for i = 9:16. As d is decreased the eigenvalues of A approach the origin (and hence the imaginary axis). Determinantal scaling was used and we terminated the iteration when the relative error stopped decreasing significantly. This example shows that the Newton iteration can behave in a numerically unstable way: the relative error can greatly exceed κ_sign(A)u. Note that the limiting accuracy ‖S‖_2^2 u provides a good estimate of the relative error for the first three values of d.

Table 5.5. Newton iteration with determinantal scaling for random A ∈ R^{16×16} with real eigenvalues parametrized by d.

  d     no. iterations   min_k ‖S − X_k‖_∞/‖S‖_∞   ‖A‖_2   κ_2(A)    ‖S‖_2    κ_sign(A)
  1          6                 2.7e-13              6.7     4.1e3     1.3e2    4.7e3
  3/4        6                 4.1e-10              6.5     5.7e5     5.4e3    6.5e5
  1/2        6                 2.6e-6               6.2     2.6e8     3.9e5    6.5e7
  1/3        3                 7.8e-1               6.4     2.6e15    7.5e11   3.9e7
Our experience indicates that (5.44) is the most reliable termination criterion,
though on badly behaved matrices such as those in Table 5.5 no one test can be relied
upon to terminate at the “right moment”, if at all.
Based on this and other evidence we suggest the following algorithm based on the
scaled Newton iteration (5.34).
Algorithm 5.14 (Newton algorithm for matrix sign function). Given a nonsingular
A ∈ Cn×n with no pure imaginary eigenvalues this algorithm computes X = sign(A)
using the scaled Newton iteration. Two tolerances are used: a tolerance tol cgce
for testing convergence and a tolerance tol scale for deciding when to switch to the
unscaled iteration.
1   X_0 = A; scale = true
2   for k = 1:∞
3       Y_k = X_k^{-1}
4       if scale
5           Set µ_k to one of the scale factors (5.35)–(5.37).
6       else
7           µ_k = 1
8       end
9       X_{k+1} = ½(µ_kX_k + µ_k^{-1}Y_k)
10      δ_{k+1} = ‖X_{k+1} − X_k‖_F/‖X_{k+1}‖_F
11      if scale = true and δ_{k+1} ≤ tol_scale, scale = false, end
12      if ‖X_{k+1} − X_k‖_F ≤ (tol_cgce · ‖X_{k+1}‖/‖Y_k‖)^{1/2} or (δ_{k+1} > δ_k/2 and scale = false)
13          goto line 16
14      end
15  end
16  X = X_{k+1}
quadratic convergence once the convergence has set in. The convergence test is (5.44)
combined with the requirement to stop if, in the final convergence phase, δk has not
decreased by at least a factor 2 during the previous iteration (which is a sign that
roundoff errors are starting to dominate).
We have left the choice of scale factor at line 5 open, as the best choice will depend
on the class of problems considered.
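For concreteness, here is a hedged MATLAB sketch of Algorithm 5.14 with determinantal scaling (5.35) chosen at line 5; the function name, iteration cap, and default tolerances are illustrative choices rather than part of the algorithm.

function X = signm_scaled_newton(A, tol_cgce, tol_scale)
% Scaled Newton iteration for sign(A), following Algorithm 5.14.
n = size(A,1);
if nargin < 2, tol_cgce = n*eps; end
if nargin < 3, tol_scale = 1e-2; end
X = A; scale = true; delta_old = inf;
for k = 1:100
    Y = inv(X);
    if scale
        [~, U] = lu(X);                          % determinantal scaling (5.35):
        mu = exp(-sum(log(abs(diag(U))))/n);     % mu = |det(X_k)|^(-1/n) via LU
    else
        mu = 1;
    end
    Xnew = 0.5*(mu*X + Y/mu);
    delta = norm(Xnew - X, 'fro')/norm(Xnew, 'fro');
    if scale && delta <= tol_scale, scale = false; end
    converged = norm(Xnew - X, 'fro') <= ...
        sqrt(tol_cgce*norm(Xnew, 'fro')/norm(Y, 'fro'));  % test (5.44)
    stagnated = ~scale && delta > delta_old/2;            % roundoff dominating
    X = Xnew; delta_old = delta;
    if converged || stagnated, break, end
end
end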
where
\[ c_j = \frac{\operatorname{sn}^2(jK/(2m);\,\kappa)}{1 - \operatorname{sn}^2(jK/(2m);\,\kappa)}, \]
κ = (1 − (δ_min/δ_max)^2)^{1/2}, and K is the complete elliptic integral for the modulus κ. The constant D is determined by the condition
\[ \max_{x \in [1,\,(\delta_{\max}/\delta_{\min})^2]} \bigl(1 - \sqrt{x}\,\tilde r(x)\bigr) = - \min_{x \in [1,\,(\delta_{\max}/\delta_{\min})^2]} \bigl(1 - \sqrt{x}\,\tilde r(x)\bigr), \]
and the extrema occur at x_j = dn^{-2}(jK/(2m)), j = 0:2m, where dn^2(w; κ) = 1 − κ^2 sn^2(w; κ).
(b) The best L∞ approximation r from R_{2m−1,2m} to sign(x) on the interval [−δ_max, −δ_min] ∪ [δ_min, δ_max] is r(x) = (x/δ_min) r̃((x/δ_min)^2), where r̃ is defined in (a).
Figure 5.2. Best L∞ approximation r(x) to sign(x) from R_{3,4} on [−2, −1] ∪ [1, 2]. The lower two plots show r(x) in particular regions of the overall plot above.
Figure 5.2 plots the best L∞ approximation to sign(x) from R_{3,4} on [−2, −1] ∪ [1, 2], and displays the characteristic equioscillation property of the error, which has maximum magnitude about 10^{-4}. In the QCD application δ_min and δ_max are chosen so that the spectrum of the matrix is enclosed and r is used in partial fraction form.
It is natural to ask how sharp the sufficient condition for convergence kI −A2 k < 1
in Theorem 5.8 (a) is for ℓ > m and what can be said about convergence for ℓ < m−1.
These questions are answered experimentally by Kenney and Laub [343, ], who
give plots showing the boundaries of the regions of convergence of the scalar iterations
in C.
The principal Padé iterations for the sign function were first derived by Howland
[302, ], though for even k his iteration functions are the inverses of those given
here. Iannazzo [307, ] points out that these iterations can be obtained from the
general König family (which goes back to Schröder [509, ], [510, ]) applied to
the equation x2 −1 = 0. Parts (b)–(d) of Theorem 5.9 are from Kenney and Laub [345,
]. Pandey, Kenney, and Laub originally obtained the partial fraction expansion
(5.30), for even k only, by applying Gaussian quadrature to an integral expression for
h(ξ) in (5.26) [457, ]. The analysis leading to (5.31) is from Kenney and Laub
[345, ].
Theorem 5.10 is due to Kenney and Laub [344, ], and the triangular matrices
in Table 5.2 are taken from the same paper.
Theorem 5.11 is due to Barraud [44, , Sec. 4], but, perhaps because his paper
is written in French, his result went unnoticed until it was presented by Kenney and
Laub [344, , Thm. 3.4].
Lemma 5.12 collects results from Kenney, Laub, and Papadopoulos [350, ]
and Pandey, Kenney, and Laub [457, ].
The spectral scaling (5.36) and norm scaling (5.37) were first suggested by Barraud
[44, ], while determinantal scaling (5.35) is due to Byers [88, ].
Kenney and Laub [344, ] derive a “semioptimal” scaling for the Newton iter-
ation that requires estimates of the dominant eigenvalue (not just its modulus, i.e.,
the spectral radius) of Xk and of Xk−1 . Numerical experiments show this scaling to
be generally at least as good as the other scalings we have described. Semioptimal
scaling does not seem to have become popular, probably because it is more delicate
to implement than the other scalings and the other scalings typically perform about
as well in practice.
Theorem 5.13 on the stability of sign iterations is new. Indeed we are not aware
of any previous analysis of the stability of sign iterations.
Our presentation of Zolotarev’s Theorem 5.15 is based on that in van den Eshof,
Frommer, Lippert, Schilling, and Van der Vorst [585, ] and van den Eshof [586,
]. In the numerical analysis literature this result seems to have been first pointed
out by Kenney and Laub [347, , Sec. III]. Theorem 5.15 can also be found in
Achieser [1, , Sec. E.27], Kennedy [338, ], [339, ], and Petrushev and
Popov [470, , Sec. 4.3].
A “generalized Newton sign iteration” proposed by Gardiner and Laub [205, ]
has the form
\[ X_{k+1} = \tfrac12\bigl(X_k + BX_k^{-1}B\bigr), \qquad X_0 = A. \]
If B is nonsingular this is essentially the standard Newton iteration applied to B −1 A
and it converges to B sign(B −1 A). For singular B, convergence may or may not
occur and can be at a linear rate; see Bai, Demmel, and Gu [31, ] and Sun and
Quintana-Ortı́ [550, ]. This iteration is useful for computing invariant subspaces
of matrix pencils A − λB (generalizing the approach in Section 2.5) and for solving
generalized algebraic Riccati equations.
Problems
5.1. Show that sign(A) = A for any involutory matrix.
5.2. How are sign(A) and sign(A−1 ) related?
5.3. Derive the integral formula (5.3) from (5.2) by using the Cauchy integral formula
(1.12).
5.4. Show that sign(A) = (2/π) limt→∞ tan−1 (tA).
5.5. Can
\[ A = \begin{bmatrix} -1 & 1 & 1/2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \]
be the sign of some matrix?
5.6. Show that the geometric mean A # B of two Hermitian positive definite matrices A and B satisfies
\[ \begin{bmatrix} 0 & A \,\#\, B \\ (A \,\#\, B)^{-1} & 0 \end{bmatrix} = \operatorname{sign}\!\left( \begin{bmatrix} 0 & B \\ A^{-1} & 0 \end{bmatrix} \right). \]
5.7. (Kenney and Laub [342, ]) Verify that for A ∈ R^{2×2} the matrix sign decomposition (5.5) is given as follows. If det(A) > 0 and trace(A) ≠ 0 then S = sign(trace(A))I and N = sign(trace(A))A; if det(A) < 0 then
\[ S = \mu\bigl(A - \det(A)A^{-1}\bigr), \qquad N = \mu\bigl(A^2 - \det(A)I\bigr), \]
where
\[ \mu = \bigl(-\det(A - \det(A)A^{-1})\bigr)^{-1/2}; \]
otherwise S is undefined.
5.8. Show that the Newton iteration (5.16) for the matrix sign function can be derived
by applying Newton’s method to the equation X 2 = I.
5.9. By expanding the expression sign(S + E) = (S + E)((S + E)^2)^{-1/2} from (5.2), show directly that the Fréchet derivative of the matrix sign function at S = sign(S) is given by L(S, E) = ½(E − SES).
5.10. Consider the scalar Newton sign iteration x_{k+1} = ½(x_k + x_k^{-1}). Show that if x_0 = coth θ_0 then x_k = coth(2^k θ_0). Deduce a convergence result.
5.11. (Schroeder [511, ]) Investigate the behaviour of the Newton iteration (5.16)
for scalar, pure imaginary x0 . Hint: let x0 = ir0 ≡ −i cot(πθ0 ) and work in θ
coordinates.
5.12. Halley's iteration for solving f(x) = 0 is [201, ]
\[ x_{k+1} = x_k - \frac{f_k/f_k'}{1 - \tfrac12\, f_k f_k''/(f_k')^2}, \]
where f_k, f_k', and f_k'' denote the values of f and its first two derivatives at x_k. Show that applying Halley's iteration to f(x) = x^2 − 1 yields the iteration function f_{1,1} in Table 5.1.
5.13. (Byers [88, ]) Show that determinantal scaling µ = |det(X)|^{-1/n} minimizes d(µX), where
\[ d(X) = \sum_{i=1}^{n} (\log |\lambda_i|)^2 \]
and the λi are the eigenvalues of X. Show also that d(X) = 0 if and only if the
spectrum of X lies on the unit circle and that d(X) is an increasing function of
|1 − |λi || for each eigenvalue λi .
5.14. Consider the Newton iteration (5.34), with determinantal scaling (5.35) and
spectral scaling (5.36). Show that with both scalings the iteration converges in at
most two iterations (a) for scalars and (b) for any real 2 × 2 matrix.
5.15. (Higham, Mackey, Mackey, and Tisseur [283, ]) Suppose that sign(A) = I
and A2 = I + E, where kEk < 1, for some consistent norm. Show that
\[ \|A - I\| \le \frac{\|E\|}{1 + \sqrt{1 - \|E\|}} < \|E\|. \]
How does this bound compare with the upper bound in (5.40)?
5.16. Discuss the pros and cons of terminating an iteration Xk+1 = g(Xk ) for the
matrix sign function with one of the tests
\[ |\operatorname{trace}(X_k^2) - n| \le \eta, \tag{5.46} \]
\[ |\operatorname{trace}(X_k) - \operatorname{round}(\operatorname{trace}(X_k))| \le \eta, \tag{5.47} \]
The sign function of a square matrix can be defined in terms of a contour integral or as the result of an iterated map Z_{r+1} = ½(Z_r + Z_r^{-1}).
Application of this function enables a matrix to be decomposed into
two components whose spectra lie on opposite sides of the imaginary axis.
— J. D. ROBERTS, Linear Model Reduction and Solution of the
Algebraic Riccati Equation by Use of the Sign Function (1980)
The matrix square root is one of the most commonly occurring matrix functions,
arising most frequently in the context of symmetric positive definite matrices. The
key roles that the square root plays in, for example, the matrix sign function (Chap-
ter 5), the definite generalized eigenvalue problem (page 35), the polar decomposition
(Section 2.6 and Chapter 8), and the geometric mean (Section 2.10), make it a useful
theoretical and computational tool. The rich variety of methods for computing the
matrix square root, with their widely differing numerical stability properties, are an
interesting subject of study in their own right.
We will almost exclusively be concerned with the principal square root, A1/2 .
Recall from Theorem 1.29 that for A ∈ C^{n×n} with no eigenvalues on R^−, A^{1/2} is the unique square root X of A whose spectrum lies in the open right half-plane. We will denote by √A an arbitrary, possibly nonprincipal square root.
We note the integral representation
\[ A^{1/2} = \frac{2}{\pi}\, A \int_0^{\infty} (t^2 I + A)^{-1}\, dt, \tag{6.1} \]
which is a special case of (7.1) in the next chapter. The integral can be deduced from
that for the matrix sign function (see Problem 6.1).
This chapter begins with analysis of the conditioning of the matrix square root
and the sensitivity of the relative residual. Then a Schur method, and a version work-
ing entirely in real arithmetic, are described. Newton’s method and several variants
follow, with a stability analysis revealing that the variants do not suffer the instability
that vitiates the Newton iteration. After a discussion of scaling, numerical experi-
ments are given to provide insight into the analysis. A class of coupled iterations
obtained via iterations for the matrix sign function are derived and their stability
proved. Linearly convergent iterations for matrices that are “almost diagonal”, as
well as for M-matrices, are analyzed, and a preferred iteration for Hermitian posi-
tive definite matrices is given. The issue of choosing from among the many square
roots of a given matrix is addressed by considering how to compute a small-normed
square root. A brief comparison of the competing methods is given. Finally, appli-
cations of involutory matrices, and some particular involutory matrices with explicit
representations, are described.
where the argument of κsqrt denotes the particular square root under consideration.
It follows that
\[ \kappa_{\mathrm{sqrt}}(X) \ge \frac{1}{\min_{i,j=1:n} |\mu_i + \mu_j|}\, \frac{\|A\|_F}{\|X\|_F}, \tag{6.3} \]
where the µ_j are the eigenvalues of X = √A (and this inequality can also be obtained from Theorem 3.14). This inequality is interesting because it reveals two distinct situations in which κ_sqrt must be large. The first situation is when A (and hence X) has an eigenvalue of small modulus. The second situation is when the square root is the principal square root and a real A has a pair of complex conjugate eigenvalues close to the negative real axis: λ = re^{i(π−ε)} (0 < ε ≪ 1) and λ̄. Then |λ^{1/2} + λ̄^{1/2}| = r^{1/2}|e^{i(π−ε)/2} + e^{−i(π−ε)/2}| = r^{1/2}|e^{−iε/2} − e^{iε/2}| = r^{1/2}O(ε). In this latter case A is close to a matrix for which the principal square root is not defined.
If A is normal and X is normal (as is any primary square root of a normal A)
then, either directly from (6.2) or from Corollary 3.16, we have equality in (6.3).
The formula for κsqrt allows us to identify the best conditioned square root of a
Hermitian positive definite matrix. As usual, κ(X) = kXk kX −1 k.
Lemma 6.1. If A ∈ Cn×n is Hermitian positive definite and X is any primary square
root of A then
\[ \kappa_{\mathrm{sqrt}}(A^{1/2}) = \frac{\|A^{-1}\|_2^{1/2}}{2}\, \frac{\|A\|_F}{\|A^{1/2}\|_F} \le \kappa_{\mathrm{sqrt}}(X). \]
Moreover,
\[ \frac{1}{2n^{3/2}}\, \kappa_F(A^{1/2}) \le \kappa_{\mathrm{sqrt}}(A^{1/2}) \le \frac12\, \kappa_F(A^{1/2}). \]
Staying with positive definite matrices for the moment, the next result gives an
elegant bound for the difference between the principal square roots of two matrices.
Theorem 6.2. If A, B ∈ Cn×n are Hermitian positive definite then for any unitarily
invariant norm
\[ \|A^{1/2} - B^{1/2}\| \le \frac{1}{\lambda_{\min}(A)^{1/2} + \lambda_{\min}(B)^{1/2}}\, \|A - B\|. \]
Proof. This is a special case of a result of van Hemmen and Ando [591, ,
Prop. 3.2]; see also Bhatia [62, ].
Let X̃ = X + E be an approximation to a square root X of A ∈ C^{n×n}, where ‖E‖ ≤ ε‖X‖. Then X̃^2 = A + XE + EX + E^2, which leads to the relative residual bound
\[ \frac{\|A - \widetilde X^2\|}{\|A\|} \le (2\varepsilon + \varepsilon^2)\, \alpha(X), \]
where
\[ \alpha(X) = \frac{\|X\|^2}{\|A\|} = \frac{\|X\|^2}{\|X^2\|} \ge 1. \tag{6.4} \]
The quantity α(X) can be regarded as a condition number for the relative residual
of X; if it is large then a small perturbation of X (such as f l(X)—the rounded
square root) can have a relative residual much larger than the size of the relative
perturbation. An important conclusion is that we cannot expect a numerical method to do better than provide a computed square root X̂ with relative residual of order α(X̂)u, where u is the unit roundoff. Where there is a choice of square root, one of minimal norm is therefore to be preferred. It is easy to show that
\[ \frac{\kappa(X)}{\kappa(A)} \le \alpha(X) \le \kappa(X). \]
Thus a large value of α(X) implies that X is ill conditioned, and if A is well condi-
tioned then α(X) ≈ κ(X). If X is normal then α(X) = 1 in the 2-norm.
\[ u_{ii}^2 = t_{ii}, \qquad (u_{ii} + u_{jj})\, u_{ij} = t_{ij} - \sum_{k=i+1}^{j-1} u_{ik} u_{kj}. \tag{6.5} \]
We can compute the diagonal of U and then solve for the uij either a superdiagonal at
a time or a column at a time. The process cannot break down, because 0 = uii +ujj =
f (tii ) + f (tjj ) is not possible, since the tii are nonzero and f , being a primary matrix
function, maps equal tii to the same square root. We obtain the following algorithm.
Cost: 25n^3 flops for the Schur decomposition plus n^3/3 for U and 3n^3 to form X: 28⅓n^3 flops in total.
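To make the recurrence (6.5) concrete, here is a minimal MATLAB sketch (an illustration only, not the text's Algorithm 6.3, and with a hypothetical function name) that computes the principal square root of an upper triangular T a column at a time; if T is real with negative diagonal entries the result becomes complex automatically.

function U = sqrtm_triangular(T)
% Principal square root of upper triangular T via the recurrence (6.5).
n = size(T,1);
U = zeros(n);
for j = 1:n
    U(j,j) = sqrt(T(j,j));                     % principal square root of t_jj
    for i = j-1:-1:1
        s = T(i,j) - U(i,i+1:j-1)*U(i+1:j-1,j);
        U(i,j) = s/(U(i,i) + U(j,j));          % (u_ii + u_jj) u_ij = t_ij - sum
    end
end
end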
Algorithm 6.3 generates all the primary square roots of A as different choices of sign in u_ii = t_ii^{1/2} ≡ ±√t_ii are used, subject to the restriction that t_ii = t_jj ⇒ √t_ii = √t_jj.
If A is singular with a semisimple zero eigenvalue of multiplicity k then Algo-
rithm 6.3 can be adapted by ordering the Schur decomposition so that the zero eigen-
values are in the trailing k diagonal elements of T. Then T must have the structure
\[ T = \begin{bmatrix} T_{11} & T_{12} \\ 0 & 0 \end{bmatrix}, \qquad T_{11} \in \mathbb{C}^{(n-k)\times(n-k)}, \tag{6.6} \]
with T11 nonsingular. Indeed if some element of the trailing k × k block of T were
nonzero then the rank of T would exceed n − k, but 0 being a semisimple eigenvalue
of multiplicity k implies rank(T ) = n − k. It is clear that any primary square root has
the same block structure as T and that Algorithm 6.3 computes such a square root
provided that we set uij = 0 when i > n − k and j > n − k, which are the cases where
the algorithm would otherwise incur division by zero. The behaviour of the algorithm
for singular A without reordering to the form (6.6) is examined in Problem 6.5.
If A is real but has some nonreal eigenvalues then Algorithm 6.3 uses complex
arithmetic. This is undesirable, because complex arithmetic is more expensive than
real arithmetic and also because rounding errors may cause a computed result to be
produced with nonzero imaginary part. By working with a real Schur decomposition
complex arithmetic can be avoided.
Let A ∈ Rn×n have the real Schur decomposition A = QRQT , where Q is orthog-
onal and R is upper quasi-triangular with 1 × 1 and 2 × 2 diagonal blocks. Then
f (A) = Qf (R)QT , where U = f (R) is upper quasi-triangular with the same block
structure as R. The equation U 2 = R can be written
\[ U_{ii}^2 = R_{ii}, \qquad U_{ii} U_{ij} + U_{ij} U_{jj} = R_{ij} - \sum_{k=i+1}^{j-1} U_{ik} U_{kj}. \tag{6.7} \]
Once the diagonal blocks Uii have been computed (6.7) provides a way to compute
the remaining blocks Uij a block superdiagonal or a block column at a time. The
condition for the Sylvester equation (6.7) to have a unique solution Uij is that Uii
and −U_{jj} have no eigenvalue in common (see (B.20)), and this is guaranteed for any primary square root when A is nonsingular. When neither U_{ii} nor U_{jj} is a scalar, (6.7) can be solved by writing it in the form
\[ \bigl(I \otimes U_{ii} + U_{jj}^T \otimes I\bigr)\, \operatorname{vec}(U_{ij}) = \operatorname{vec}\Bigl(R_{ij} - \sum_{k=i+1}^{j-1} U_{ik} U_{kj}\Bigr), \]
Lemma 6.4. Let A ∈ R2×2 have distinct complex conjugate eigenvalues. Then A
has four square roots, all primary functions of A. Two of them are real, with complex
conjugate eigenvalues, and two are pure imaginary, with eigenvalues that are not
complex conjugates.
Proof. Since A has distinct eigenvalues θ ± iµ, Theorem 1.26 shows that A has
four square roots, all of them functions of A. To find them let
\[ Z^{-1} A Z = \operatorname{diag}(\lambda, \bar\lambda) = \theta I + i\mu K, \qquad K = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}. \]
Then A = θI + µW, where W := (A − θI)/µ = iZKZ^{-1} satisfies W^2 = −I. Writing λ^{1/2} = α + iβ, the four square roots of A are
X = ±(αI + βW),
that is, two real square roots with eigenvalues ±(α + iβ, α − iβ); or
X = ±i(βI − αW),
that is, two pure imaginary square roots with eigenvalues ±(α + iβ, −α + iβ).
The proof of the lemma gives a way to construct R_{ii}^{1/2}. Writing
\[ R_{ii} = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix}, \]
the eigenvalues of R_{ii} are θ ± iµ, where
\[ \theta = \tfrac12 (r_{11} + r_{22}), \qquad \mu = \tfrac12 \bigl(-(r_{11} - r_{22})^2 - 4 r_{21} r_{12}\bigr)^{1/2}. \tag{6.8} \]
We now require α and β such that (α + iβ)2 = θ + iµ. A stable way to compute them
is as follows.
Algorithm 6.5. This algorithm computes the square root α+iβ of θ+iµ with α ≥ 0.
Theorem 6.6. Let A ∈ R^{n×n} be nonsingular. If A has a real negative eigenvalue then A has no real square roots that are primary functions of A. If A has no real negative eigenvalues, then there are precisely 2^{r+c} real primary square roots of A, where r is the number of distinct real eigenvalues and c the number of distinct complex conjugate eigenvalue pairs.
Proof. Let A have the real Schur decomposition A = QRQ^T. Since f(A) = Qf(R)Q^T, f(A) is real if and only if f(R) is real. If A has a real negative eigenvalue, R_{ii} = (r_{ii}) say, then f(R_{ii}) is necessarily nonreal; this gives the first part of the theorem.
If A has no real negative eigenvalues, consider the 2^s primary square roots of A described in Theorem 1.26. We have s = r + 2c. Lemma 6.4 shows that each 2 × 2 block R_{ii} has two real primary square roots. Hence, of the 2^s = 2^{r+2c} primary square roots of A, precisely 2^{r+c} of them are real.
Cost: 28⅓n^3 flops.
Two comments are necessary. First, the principal square root is computed if the
principal square root is taken at line 2, which for 2×2 blocks means taking the positive
sign in (6.9). Second, as for Algorithm 6.3, it is necessary that whenever Rii and Rjj
have the same eigenvalues, we take the same square root.
Now we consider the numerical stability of Algorithms 6.3 and 6.7. A straight-
forward rounding error analysis shows that the computed square root Û of T in Algorithm 6.3 satisfies (see Problem 6.6)
\[ \widehat U^2 = T + \Delta T, \qquad |\Delta T| \le \tilde\gamma_n\, |\widehat U|^2, \]
and hence the computed square root X̂ of A satisfies
\[ \widehat X^2 = A + \Delta A, \qquad \|\Delta A\|_F \le \tilde\gamma_{n^3}\, \|\widehat X\|_F^2, \]
that is,
\[ \frac{\|A - \widehat X^2\|_F}{\|A\|_F} \le \tilde\gamma_{n^3}\, \alpha_F(\widehat X). \tag{6.10} \]
where α is defined in (6.4). We conclude that Algorithm 6.3 has essentially optimal
stability. The same conclusion holds for Algorithm 6.7, which can be shown to satisfy
the same bound (6.10).
\[ \begin{aligned} &X_0 \ \text{given}, \\ &\text{solve } X_k E_k + E_k X_k = A - X_k^2 \ \text{for } E_k, \\ &X_{k+1} = X_k + E_k, \end{aligned} \qquad k = 0, 1, 2, \ldots. \tag{6.11} \]
At each iteration a Sylvester equation must be solved for Ek . The standard way
of solving Sylvester equations is via Schur decomposition of the coefficient matrices,
which in this case are both Xk . But the Schur method of the previous section can
compute a square root with just one Schur decomposition, so Newton’s method is
unduly expensive in the form (6.11). The following lemma enables us to reduce
the cost. Note that Ek in (6.11) is well-defined, that is, the Sylvester equation is
nonsingular, if and only if Xk and −Xk have no eigenvalue in common (see (B.20)).
Lemma 6.8. Suppose that in the Newton iteration (6.11) X_0 commutes with A and all the iterates are well-defined. Then, for all k, X_k commutes with A and X_{k+1} = ½(X_k + X_k^{-1}A).
Proof. The proof is by induction. Let Y_0 = X_0 and Y_k = ½(X_{k−1} + X_{k−1}^{-1}A), k ≥ 1. For the inductive hypothesis we take X_kA = AX_k and Y_k = X_k, which is trivially true for k = 0. The matrix F_k = ½(X_k^{-1}A − X_k) is easily seen to satisfy X_kF_k + F_kX_k = A − X_k^2, so F_k = E_k and X_{k+1} = X_k + F_k = ½(X_k + X_k^{-1}A) = Y_{k+1},
which clearly commutes with A since Xk does. Hence the result follows by induction.
The lemma shows that if X0 is chosen to commute with A then all the Xk and Ek
in (6.11) commute with A, permitting great simplification of the iteration.
The most common choice is X_0 = A (or X_0 = I, which generates the same X_1), giving the Newton iteration
\[ X_{k+1} = \tfrac12\bigl(X_k + X_k^{-1}A\bigr), \qquad X_0 = A. \tag{6.12} \]
Theorem 6.9 (convergence of Newton square root iteration). Let A ∈ Cn×n have
no eigenvalues on R− . The Newton square root iterates Xk from (6.12) with any
X0 that commutes with A are related to the Newton sign iterates
\[ S_{k+1} = \tfrac12\bigl(S_k + S_k^{-1}\bigr), \qquad S_0 = A^{-1/2} X_0, \]
by X_k ≡ A^{1/2}S_k. Hence, provided that A^{-1/2}X_0 has no pure imaginary eigenvalues, the X_k are defined and X_k converges quadratically to A^{1/2} sign(A^{-1/2}X_0). In particular, if the spectrum of A^{-1/2}X_0 lies in the right half-plane then X_k converges quadratically to A^{1/2} and, for any consistent norm,
\[ \|X_{k+1} - A^{1/2}\| \le \tfrac12\, \|X_k^{-1}\|\, \|X_k - A^{1/2}\|^2. \tag{6.13} \]
Proof. Note first that any matrix that commutes with A commutes with A±1/2 ,
by Theorem 1.13 (e). We have X0 = A1/2 S0 and so S0 commutes with A. Assume
that Xk = A1/2 Sk and Sk commutes with A. Then Sk commutes with A1/2 and
\[ X_{k+1} = \tfrac12\bigl(A^{1/2}S_k + S_k^{-1}A^{-1/2}A\bigr) = A^{1/2}\cdot \tfrac12\bigl(S_k + S_k^{-1}\bigr) = A^{1/2} S_{k+1}, \]
and S_{k+1} clearly commutes with A. Hence X_k ≡ A^{1/2}S_k by induction. Then, using Theorem 5.6, lim_{k→∞} X_k = A^{1/2} lim_{k→∞} S_k = A^{1/2} sign(S_0) = A^{1/2} sign(A^{-1/2}X_0), and the quadratic convergence of X_k follows from that of S_k.
For the last part, if S0 = A−1/2 X0 has spectrum in the right half-plane then
sign(S0 ) = I and hence Xk → A1/2 . Using the commutativity of the iterates with A,
it is easy to show that
\[ X_{k+1} \pm A^{1/2} = \tfrac12\, X_k^{-1}\bigl(X_k \pm A^{1/2}\bigr)^2, \tag{6.14} \]
which, with the minus sign, gives (6.13).
\[ X_k = A^{1/2}\bigl(I - G_0^{2^k}\bigr)^{-1}\bigl(I + G_0^{2^k}\bigr), \qquad \text{where } k \ge 1 \text{ and } G_0 = (A^{1/2} - I)(A^{1/2} + I)^{-1}. \]
DB iteration:
\[ \begin{aligned} X_{k+1} &= \tfrac12\bigl(X_k + Y_k^{-1}\bigr), & X_0 &= A, \\ Y_{k+1} &= \tfrac12\bigl(Y_k + X_k^{-1}\bigr), & Y_0 &= I. \end{aligned} \tag{6.15} \]
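The following MATLAB fragment (an added illustration using an assumed toy matrix A) carries out the DB iteration (6.15); X approaches A^{1/2} and Y approaches A^{-1/2}.

A = [4 1; 0 9];                     % assumed example with eigenvalues off R^-
X = A;  Y = eye(size(A));
for k = 1:50
    Xnew = 0.5*(X + inv(Y));        % X_{k+1} = (X_k + Y_k^{-1})/2
    Ynew = 0.5*(Y + inv(X));        % Y_{k+1} = (Y_k + X_k^{-1})/2
    X = Xnew;  Y = Ynew;
    if norm(X*X - A, 'fro') <= 1e-12*norm(A, 'fro'), break, end
end
% X is now close to A^(1/2) = [2 0.2; 0 3] and Y to its inverse.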
Xk or Yk :
CR iteration:
IN iteration:
\[ \begin{aligned} X_{k+1} &= X_k + E_k, & X_0 &= A, \\ E_{k+1} &= -\tfrac12\, E_k X_{k+1}^{-1} E_k, & E_0 &= \tfrac12 (I - A). \end{aligned} \tag{6.20} \]
Theorem 6.10. Let the singular matrix A ∈ Cn×n have semisimple zero eigenvalues
and nonzero eigenvalues lying off R− . The iterates Xk from the Newton iteration
(6.12) started with X_1 = ½(I + A) are nonsingular and converge linearly to A^{1/2}, with
This is a double-length Newton step, whose benefits for problems with a singular
Fréchet derivative at the solution have been noted in the more general context of
algebraic Riccati equations by Guo and Lancaster [234, ]. From the proof of Theorem 6.10 we see that X_k^{-1}A = Z diag((J_1^{(k)})^{-1}J_1, 0)Z^{-1} ≈ Z diag(J_1^{1/2}, 0)Z^{-1} = A^{1/2}, since J_1^{(k)} ≈ J_1^{1/2} by assumption.
While the iterations described in this section are mathematically equivalent, they
are not equivalent in finite precision arithmetic. We will see in the next section that
they have quite different stability properties.
\[ \tfrac12\bigl(1 - \lambda_i^{1/2} \lambda_j^{-1/2}\bigr), \qquad i, j = 1\colon n, \]
where the λ_i are the eigenvalues of A. Hence to guarantee stability we need
\[ \psi_N := \max_{i,j}\, \tfrac12\bigl|1 - \lambda_i^{1/2} \lambda_j^{-1/2}\bigr| < 1. \tag{6.24} \]
6.4.2. DB Iterations
For the DB iteration (6.15) the iteration function is
\[ G(X, Y) = \frac12 \begin{bmatrix} X + Y^{-1} \\ Y + X^{-1} \end{bmatrix}. \]
It is easy to see that Lg (I, X) is idempotent, and hence the product form of the DB
iteration is stable at (M, X) = (I, A1/2 ) and at (M, Y ) = (I, A−1/2 ).
To determine the limiting accuracy, with B = A^{1/2}, ‖E‖ ≤ u‖A^{1/2}‖, and ‖F‖ ≤ u‖A^{-1/2}‖, the (1,1) block of (6.25) is bounded by ½(‖A^{1/2}‖ + ‖A^{1/2}‖^2‖A^{-1/2}‖)u = ½‖A^{1/2}‖(1 + κ(A^{1/2}))u and the (2,1) block by ½‖A^{-1/2}‖(1 + κ(A^{1/2}))u. The relative limiting accuracy estimate is therefore ½(1 + κ(A^{1/2}))u, as for the Newton iteration.
For the product DB iteration, by considering (6.26) with ‖E‖ ≤ u and ‖F‖ ≤ ‖A^{1/2}‖u we obtain a relative limiting accuracy estimate of (3/2)u, which is independent of A.
6.4.3. CR Iteration
For the CR iteration (6.19) the iteration function is
\[ G(Y, Z) = \begin{bmatrix} -Y Z^{-1} Y \\ Z - 2 Y Z^{-1} Y \end{bmatrix}. \]
6.4.4. IN Iteration
The iteration function for the IN iteration (6.20) is
\[ G(X, H) = \begin{bmatrix} X + H \\ -\tfrac12\, H (X + H)^{-1} H \end{bmatrix}. \]
Therefore Lg (A1/2 , 0) is idempotent and the iteration is stable. The relative limiting
accuracy is again trivially of order u.
Table 6.2. Summary of stability and limiting accuracy of square root iterations.
6.4.5. Summary
Table 6.2 summarizes what our analysis says about the stability and limiting accu-
racy of the iterations. The Newton iteration (6.12) is unstable at A1/2 unless the
eigenvalues λi of A are very closely clustered, in the sense that (λi /λj )1/2 lies in a
ball of radius 2 about z = 1 in the complex plane, for all i and j. The four rewritten
versions of Newton’s iteration, however, are all stable at A1/2 . The price to be paid
for stability is a coupled iteration costing more than the original Newton iteration
(see Table 6.1). The limiting accuracy of the DB iteration is essentially κ(A1/2 )u, but
the other three stable iterations have limiting accuracy of order u.
For the DB iteration, the µ_k in (6.27) can be expressed in terms of just X_k and Y_k, by using the relation Y_k = A^{-1}X_k. The determinantally scaled DB iteration is
DB iteration (scaled):
\[ \begin{aligned} \mu_k &= \left|\frac{\det(X_k)}{\det(A)^{1/2}}\right|^{-1/n} \quad\text{or}\quad \mu_k = |\det(X_k)\det(Y_k)|^{-1/(2n)}, \\ X_{k+1} &= \tfrac12\bigl(\mu_k X_k + \mu_k^{-1} Y_k^{-1}\bigr), \qquad X_0 = A, \\ Y_{k+1} &= \tfrac12\bigl(\mu_k Y_k + \mu_k^{-1} X_k^{-1}\bigr), \qquad Y_0 = I. \end{aligned} \tag{6.28} \]
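A hedged MATLAB sketch of the scaled DB iteration (6.28) follows (added here for illustration); it uses the second form of µ_k so that det(A) is not needed, LU factorizations for the determinants, and the stopping test (6.31). The function name and iteration limit are illustrative.

function [X, Y] = sqrtm_db_scaled(A, tol)
% Determinantally scaled DB iteration (6.28); X -> A^(1/2), Y -> A^(-1/2).
n = size(A,1);
if nargin < 2, tol = n*eps; end
X = A; Y = eye(n);
for k = 1:50
    [~, Ux] = lu(X); [~, Uy] = lu(Y);          % |det| from LU to avoid overflow
    mu = exp(-(sum(log(abs(diag(Ux)))) + sum(log(abs(diag(Uy)))))/(2*n));
    Xold = X;
    Xnew = 0.5*(mu*X + inv(Y)/mu);
    Ynew = 0.5*(mu*Y + inv(X)/mu);
    X = Xnew; Y = Ynew;
    % Test (6.31), using ||Y_k|| as a proxy for ||X_k^{-1}|| near convergence.
    if norm(X - Xold, 'fro') <= sqrt(tol*norm(X, 'fro')/norm(Y, 'fro')), break, end
end
end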
For the product form of the DB iteration determinantal scaling gives, using Mk =
Xk Yk ,
\[ \begin{aligned} \widetilde E_k &= \tfrac12\bigl(\mu_k^{-1} X_k^{-1} A - \mu_k X_k\bigr) \\ &= \mu_k^{-1}\cdot \tfrac12\bigl(X_k^{-1}A - X_k\bigr) + \tfrac12 \mu_k^{-1} X_k - \tfrac12 \mu_k X_k \\ &= \mu_k^{-1}\bigl(E_k + \tfrac12 X_k\bigr) - \tfrac12 \mu_k X_k. \end{aligned} \]
The scaled iteration is therefore
IN iteration (scaled):
\[ \begin{aligned} X_0 &= A, \qquad E_0 = \tfrac12 (I - A), \\ \mu_k &= \left|\frac{\det(X_k)}{\det(A)^{1/2}}\right|^{-1/n}, \\ \widetilde E_k &= \mu_k^{-1}\bigl(E_k + \tfrac12 X_k\bigr) - \tfrac12 \mu_k X_k, \\ X_{k+1} &= \mu_k X_k + \widetilde E_k, \\ E_{k+1} &= -\tfrac12\, E_k X_{k+1}^{-1} E_k. \end{aligned} \tag{6.30} \]
Although the formulae (6.30) appear very different from, and less elegant than, (6.27),
the two iterations generate exactly the same sequence of iterates Xk .
A suitable stopping test for all these iterations is, from (4.25) and (6.13),
\[ \|X_{k+1} - X_k\| \le \left( \eta\, \frac{\|X_{k+1}\|}{\|X_k^{-1}\|} \right)^{1/2}, \tag{6.31} \]
n = 8;
Q = gallery('orthog',n);
A = Q*rschur(n,2e2)*Q';
The function rschur(n,mu), from the Matrix Computation Toolbox [264], generates an upper quasi-triangular matrix with eigenvalues α_j + iβ_j, α_j = −j^2/10, β_j = −j, j = 1:n/2 and (2j, 2j + 1) elements mu.
4. A 16×16 Chebyshev–Vandermonde matrix, gallery('chebvand',16) in MATLAB, which has 8 complex eigenvalues with modulus of order 1 and 8 real, positive eigenvalues between 3.6 and 10^{-11}.
5. A 9 × 9 singular nonsymmetric matrix resulting from discretizing the Neumann
problem with the usual five point operator on a regular mesh. The matrix has
real nonnegative eigenvalues and a one-dimensional null space with null vector
the vector of 1s.
The first matrix, for which the results are given in Table 6.3, represents an easy problem. The Newton iteration is unstable, as expected since ψ_N > 1 (see (6.24)): the error reaches a minimum of about u^{1/2} and then grows unboundedly. The other methods all produce tiny errors and residuals. Scaling produces a useful reduction in the number of iterations.
For the Moler matrix, we can see from Table 6.4 that the Schur method performs
well: the residual is consistent with (6.10), and the error is bounded by κsqrt (X)u, as
it should be given the size of the residual (which is also the backward error). Newton’s
method is unstable, as expected given the eigenvalue distribution. The convergence
of the sequence of errors (not shown) for the DB, product DB, and IN iterations
is more linear than quadratic, and a large number of iterations are required before
convergence is reached. The DB and product DB iterations have error significantly
exceeding κsqrt (X)u, and are beaten for accuracy and stability by the IN iteration,
which matches the Schur method. For this matrix, scaling has little effect.
For the nonnormal matrix the tables are turned and it is the IN iteration that performs badly, giving an error larger than those from the other iterative methods by a factor of about 10^4; see Table 6.5. The Newton iteration performs well, which is consistent with ψ_N being only slightly larger than 1. The residual of the Schur method is consistent with (6.10).

Table 6.3. Results for rank-1 perturbation of I. α_∞(X) = 1.4, κ_sqrt(X) = 40, κ_2(A^{1/2}) = 1.7 × 10^2, ψ_N = 39.

Table 6.4. Results for Moler matrix. α_∞(X) = 1.1, κ_sqrt(X) = 8.3 × 10^4, κ_2(A^{1/2}) = 3.6 × 10^5, ψ_N = 10^5.
The Chebyshev–Vandermonde matrix shows poor performance of the DB and
product DB iterations, which are beaten by the IN iteration. The Newton iteration
is wildly unstable.
These experiments confirm the predictions of the stability analysis. However, the
limiting accuracy is not a reliable predictor of the relative errors. Indeed, although
the product DB iteration has better limiting accuracy than the DB iteration, this is
not seen in practice. The main conclusion is that no one of the DB, product DB, and
IN iterations is always best. Based on these limited experiments one might choose the
IN iteration when α(X) = O(1) and one of the DB or product DB iterations when
α(X) is large, but this reasoning has no theoretical backing (see Problem 6.24).
Finally, for the singular Neumann matrix we applied the IN iteration (6.20) without scaling until the relative residual was smaller than 10^{-4} and then computed the double step (6.22). Six iterations of (6.20) were required, after which the relative residual was of order 10^{-5} and the relative error of order 10^{-3}. After applying (6.22) these measures both dropped to 10^{-14}, showing that the double step can be remarkably effective.
Table 6.5. Results for nonnormal matrix. α_∞(X) = 1.4 × 10^8, κ_sqrt(X) = 5.8 × 10^7, κ_2(A^{1/2}) = 6.2 × 10^{10}, ψ_N = 1.3.

Table 6.6. Results for Chebyshev–Vandermonde matrix. α_∞(X) = 2.8, κ_sqrt(X) = 5.2 × 10^6, κ_2(A^{1/2}) = 1.3 × 10^7, ψ_N = 3.3 × 10^5.
Theorem 6.11 (Higham, Mackey, Mackey, and Tisseur). Let A ∈ C^{n×n} have no eigenvalues on R^−. Consider any iteration of the form X_{k+1} = g(X_k) ≡ X_kh(X_k^2) that converges to sign(X_0) for X_0 = \bigl[\begin{smallmatrix} 0 & A \\ I & 0 \end{smallmatrix}\bigr] with order of convergence m. Then in the coupled iteration
\[ \begin{aligned} Y_{k+1} &= Y_k\, h(Z_k Y_k), & Y_0 &= A, \\ Z_{k+1} &= h(Z_k Y_k)\, Z_k, & Z_0 &= I, \end{aligned} \tag{6.33} \]
Y_k → A^{1/2} and Z_k → A^{-1/2} as k → ∞, both with order of convergence m, Y_k commutes with Z_k, and Y_k = AZ_k for all k.
Proof. Observe that
\[ g\!\left( \begin{bmatrix} 0 & Y_k \\ Z_k & 0 \end{bmatrix} \right) = \begin{bmatrix} 0 & Y_k \\ Z_k & 0 \end{bmatrix} h\!\left( \begin{bmatrix} Y_k Z_k & 0 \\ 0 & Z_k Y_k \end{bmatrix} \right) = \begin{bmatrix} 0 & Y_k \\ Z_k & 0 \end{bmatrix} \begin{bmatrix} h(Y_k Z_k) & 0 \\ 0 & h(Z_k Y_k) \end{bmatrix} = \begin{bmatrix} 0 & Y_k h(Z_k Y_k) \\ Z_k h(Y_k Z_k) & 0 \end{bmatrix} = \begin{bmatrix} 0 & Y_k h(Z_k Y_k) \\ h(Z_k Y_k) Z_k & 0 \end{bmatrix} = \begin{bmatrix} 0 & Y_{k+1} \\ Z_{k+1} & 0 \end{bmatrix}, \]
where the penultimate equality follows from Corollary 1.34. The initial conditions
Y0 = A and Z0 = I together with (6.32) now imply that Yk and Zk converge to A1/2
and A−1/2 , respectively. It is easy to see that Yk and Zk are polynomials in A for all
k, and hence Yk commutes with Zk . Then Yk = AZk follows by induction. The order
of convergence of the coupled iteration (6.33) is clearly the same as that of the sign
iteration from which it arises.
Theorem 6.11 provides an alternative derivation of the DB iteration (6.15). Take g(X) = ½(X + X^{-1}) = X·½(I + X^{-2}) ≡ Xh(X^2). Then Y_{k+1} = Y_k·½(I + (Z_kY_k)^{-1}) = ½(Y_k + Y_kY_k^{-1}Z_k^{-1}) = ½(Y_k + Z_k^{-1}), and, likewise, Z_{k+1} = ½(Z_k + Y_k^{-1}).
New iterations are obtained by applying the theorem to the Padé family (5.28):
Padé iteration:
For 0 ≤ ℓ, m ≤ 2, pℓm (1 − x2 ) and qℓm (1 − x2 ) can be read off from Table 5.1. For
ℓ = m − 1 and ℓ = m, pℓm /qℓm has the partial fraction form given in Theorem 5.9
(d). From Theorem 5.8 and Theorem 6.11 we deduce that Yk → A1/2 and Zk →
A−1/2 with order ℓ + m + 1 unconditionally if ℓ = m − 1 or ℓ = m, or provided
k diag(I − A, I − A)k < 1 if ℓ ≥ m + 1.
For ℓ = 1, m = 0, (6.34) gives a Newton–Schulz iteration:

Newton–Schulz iteration:

    Yk+1 = (1/2) Yk (3I − Zk Yk),      Y0 = A,
                                                     (6.35)
    Zk+1 = (1/2) (3I − Zk Yk) Zk,      Z0 = I.

For ℓ = 0, m = 1, (6.34) gives the iteration

    Yk+1 = 2 Yk (I + Zk Yk)^{-1},      Y0 = A,
                                                     (6.36)
    Zk+1 = 2 (I + Zk Yk)^{-1} Zk,      Z0 = I.

This is closely related to the DB iteration (6.15) in that Yk from (6.36) equals the
inverse of Xk from (6.15) with X0 = A^{-1}, and similarly for the Zk. The DB iteration
is generally preferable to (6.36) because it requires less work per iteration.
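As a concrete illustration of (6.35), the following MATLAB sketch (not from the text; the
test matrix, iteration cap, and stopping tolerance are arbitrary choices) runs the coupled
Newton–Schulz iteration on a matrix with ‖I − A‖ < 1, for which convergence is guaranteed.

% Coupled Newton-Schulz iteration (6.35) for A^(1/2) and A^(-1/2).
% Requires ||I - A|| < 1 for guaranteed convergence (the l = 1, m = 0 case).
n = 8;
A = eye(n) + 0.5*ones(n)/n;      % ||I - A||_2 = 0.5 < 1
Y = A;  Z = eye(n);
for k = 1:50
    W = 0.5*(3*eye(n) - Z*Y);    % common factor h(Z_k Y_k)
    Y = Y*W;                     % Y_{k+1} = Y_k h(Z_k Y_k)  -> A^(1/2)
    Z = W*Z;                     % Z_{k+1} = h(Z_k Y_k) Z_k  -> A^(-1/2)
    if norm(Y*Y - A,1) <= 10*n*eps*norm(A,1), break, end
end
relres = norm(Y*Y - A,1)/norm(A,1)   % relative residual of the square root
coupling = norm(Y - A*Z,1)           % Y_k = A Z_k throughout (Theorem 6.11)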
What can be said about the stability of the coupled iteration (6.33)? Since Yk
and Zk commute, the iteration formulae can be rewritten in several ways, and the
particular choice of formula turns out to be crucial to the stability. For example,
(6.36) is stable, but with the rearrangement Zk+1 = 2Zk (Zk Yk + I)−1 , stability is
lost. The next theorem shows that all instances of (6.33) are stable and hence that
the particular formulae (6.33) are the right choice from the point of view of stability.
Theorem 6.12 (Higham, Mackey, Mackey, and Tisseur). Consider any iteration of
the form (6.33) and its associated mapping

    G(Y, Z) = [ Y h(ZY); h(ZY) Z ],

where Xk+1 = g(Xk) ≡ Xk h(Xk^2) is any superlinearly convergent iteration for sign(X0).
Then any matrix pair of the form P = (B, B^{-1}) is a fixed point for G, and the Fréchet
derivative of G at P is given by

    Lg(P; E, F) = (1/2) [ E − BFB; F − B^{-1} E B^{-1} ].            (6.37)
This result proves stability of all the Padé iterations. Note that (6.37) is identical
to the expression (6.25) for the Fréchet derivative of the DB iteration, as might be
expected since the inverse of the DB iteration is a member of the Padé family, as
noted above.
An underlying theme in this section is that the use of commutativity is best
avoided if stable iterations are to be obtained. Note that the proof of Theorem 6.11
does not draw on commutativity. Simply rearranging the “Z formula” in (6.33) to
Zk+1 = Zk h(Zk Yk ), which is valid mathematically, changes the Fréchet derivative,
and the new iteration is generally unstable (as mentioned above for ℓ = 0, m = 1)
[283, , Lem. 5.4]. What it is safe to do is to rearrange using Corollary 1.34. Thus
changing the Z formula to Zk+1 = Zk h(Yk Zk ) does not affect the stability of (6.33).
The square root of I − C has the binomial expansion

    (I − C)^{1/2} = Σ_{j=0}^∞ (1/2 choose j) (−C)^j ≡ I − Σ_{j=1}^∞ αj C^j,      αj > 0.      (6.38)

For a general A we can write

    A = s(I − C)                                                                  (6.39)
and try to choose s so that ρ(C) < 1. When A has real, positive eigenvalues, 0 <
λn ≤ λn−1 ≤ · · · ≤ λ1 , the s that minimizes ρ(C), and the corresponding minimal
ρ(C), are (see Problem 6.17)
    s = (λ1 + λn)/2,      ρ(C) = (λ1 − λn)/(λ1 + λn) < 1.            (6.40)
An alternative choice of s, valid for all A, is s = trace(A*A)/trace(A*), which mini-
mizes ‖C‖_F; see Problem 6.18. This choice may or may not achieve ρ(C) < 1.
Assume now that ρ(C) < 1. An iterative method can be derived by defining
(I − C)^{1/2} =: I − P and squaring the equation to obtain P^2 − 2P = −C. A natural
iteration for computing P is

Binomial iteration:

    Pk+1 = (1/2)(C + Pk^2),      P0 = 0.            (6.41)
Our choice of name for this iteration comes from the observation that I−Pk reproduces
the binomial expansion (6.38) up to and including terms of order C k . So (6.41) can
be thought of as a convenient way of generating the binomial expansion.
Before analyzing the convergence of the binomial iteration we note some properties
of P. From (6.38) we have

    P = Σ_{j=1}^∞ αj C^j,                                  (6.42)

where the αj are positive. The eigenvalues of P and C are therefore related by
λi(P) = Σ_{j=1}^∞ αj λi(C)^j. Hence ρ(P) ≤ Σ_{j=1}^∞ αj ρ(C)^j = 1 − (1 − ρ(C))^{1/2}. Since
1 − (1 − x)^{1/2} ≤ x for 0 ≤ x ≤ 1, we conclude that ρ(P) ≤ ρ(C) < 1. Similarly, we
find that ‖P‖ ≤ ‖C‖ for any consistent matrix norm for which ‖C‖ < 1. Finally, it
is clear that of all square roots I − Q of I − C, the principal square root is the only
one for which ρ(Q) < 1, since ρ(Q) < 1 implies that the spectrum of I − Q lies in the
open right half-plane.
We first analyze convergence in a special case. For A ∈ R^{n×n}, let A ≥ 0 denote
that aij ≥ 0 for all i and j. If C ≥ 0 then the binomial iteration enjoys monotonic
convergence.
Theorem 6.13. Let C ∈ R^{n×n} satisfy C ≥ 0 and ρ(C) < 1 and write (I − C)^{1/2} =
I − P. Then in the binomial iteration (6.41), Pk → P with

    0 ≤ Pk ≤ Pk+1 ≤ P,      k ≥ 0.                       (6.43)
In the scalar case of (6.41), writing (1 − c)^{1/2} = 1 − p, we have p = (c + p^2)/2 and hence
pk+1 − p = (1/2)(pk + p)(pk − p), and so

    |pk+1 − p| ≤ (1/2)(|pk| + |p|)|pk − p|
              ≤ (1/2)(1 − (1 − |c|)^{1/2} + |c|)|pk − p| =: θ|pk − p|,          (6.44)

where θ = (1/2)(1 − (1 − |c|)^{1/2} + |c|) ≤ |c| < 1. Hence pk → p with a monotonically
decreasing error. This argument can be built into a proof that in the matrix iteration
(6.41) Pk converges to P if ρ(C) < 1. But this restriction on C is stronger than
necessary! The next result describes the actual region of convergence of the binomial
iteration, which is closely connected with the Mandelbrot set.
Figure 6.1. The cardioid (6.45), shaded, together with the unit circle. The binomial iteration
converges for matrices C whose eigenvalues lie inside the cardioid.
If ρ(C) < 1, there is a consistent norm such that ‖C‖ < 1 (see Section B.7).
Hence when (6.41) is applicable to I − C, so is the Newton–Schulz iteration (6.35),
which also requires only matrix multiplication but which has quadratic convergence.
If medium to high accuracy is required, the Newton–Schulz iteration will be more
efficient than the binomial iteration, despite requiring three matrix multiplications
per iteration versus one for (6.41).
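A minimal MATLAB sketch of the binomial iteration (6.41) is given below; the test matrix,
the use of eig to form the optimal s in (6.40), and the stopping test are illustrative
choices only, not taken from the text.

% Binomial iteration (6.41) for A^(1/2), with A = s(I - C) as in (6.39)-(6.40).
n = 10;
A = eye(n) + 0.5*ones(n)/n;            % Hermitian positive definite, well conditioned
lam = eig(A);
s = (max(lam) + min(lam))/2;           % s from (6.40), minimizing rho(C)
C = eye(n) - A/s;
P = zeros(n);
for k = 1:200
    P = 0.5*(C + P^2);                 % (6.41)
    X = sqrt(s)*(eye(n) - P);          % approximation to A^(1/2)
    if norm(X^2 - A,1) <= 10*n*eps*norm(A,1), break, end
end
k, relres = norm(X^2 - A,1)/norm(A,1)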
Pulay iteration:

    D^{1/2} Bk+1 + Bk+1 D^{1/2} = A − D − Bk^2,      B0 = 0,          (6.48)

where D = diag(dii) is diagonal with positive diagonal entries. Equation (6.48) is easily
solved for Bk+1: (Bk+1)ij = (A − D − Bk^2)ij/(dii^{1/2} + djj^{1/2}),
i, j = 1: n. The Pulay iteration is easily shown to be a rewritten version of the
modified Newton iteration (6.46) with X0 = D^{1/2}; see Problem 6.16. The next result
gives a sufficient condition for convergence to the principal square root.

Theorem 6.15 (convergence of Pulay iteration). If the condition (6.49) holds
for some consistent matrix norm then in the iteration (6.48), Bk → A^{1/2} − D^{1/2}
linearly.
Proof. Let Ek = B − Bk, where B = A^{1/2} − D^{1/2}, and subtract (6.48) from (6.47) to obtain

    D^{1/2} Ek+1 + Ek+1 D^{1/2} = Ek^2 − B Ek − Ek B.

It follows that ‖Ek‖ ≤ ek, where ek+1 = ek^2/(2σ) + (β/σ)ek, with e0 = β = ‖B‖ and
σ = min_i dii^{1/2}. The change of variable fk = ek/β yields, since θ = β/σ,

    fk+1 = (θ/2) fk^2 + θ fk,      f0 = 1.             (6.50)

Suppose θ < 2/3. If |fk| ≤ 1 then

    |fk+1| = |(θ/2) fk + θ| |fk| < |fk|.
Visser iteration:

    Xk+1 = Xk + α(A − Xk^2),      X0 = (1/(2α)) I.            (6.51)
Theorem 6.16 (convergence of Visser iteration). Let A ∈ C^{n×n} and α > 0. If the
eigenvalues of I − 4α^2 A lie in the cardioid (6.45) then A^{1/2} exists and the iterates Xk
from (6.51) converge linearly to A^{1/2}.
Proof. With Yk = 2αXk and Ã = 4α^2 A, (6.51) becomes

    Yk+1 = Yk + (1/2)(Ã − Yk^2),      Y0 = I.

This is equivalent to the binomial iteration (6.41) with Ã ≡ I − C and Yk = I − Pk.
Hence Theorem 6.14 shows that we have convergence of Yk to Ã^{1/2} if the eigenvalues
of C = I − Ã lie in the cardioid (6.45). In other words, Xk → A^{1/2} (since α > 0) if
the eigenvalues of I − 4α^2 A lie in the cardioid.
If the eigenvalues of A are real and positive then the condition of the theorem is
0 < α < ρ(A)^{-1/2}. Convergence under these assumptions, but with equality allowed
in the upper bound for α, was proved by Elsner [176, ].
The advantage of the Pulay iteration over the binomial iteration and the Visser
iteration is that it can be applied to a matrix whose diagonal is far from constant,
provided the matrix is sufficiently diagonally dominant. For example, consider the
16 × 16 symmetric positive definite matrix A with aii = i^2 and aij = 0.1, i ≠ j. For
the Pulay iteration (6.48) with D = diag(A), we have θ = 0.191 in (6.49) and just 9
iterations are required to compute A1/2 with a relative residual less than nu in IEEE
double precision arithmetic. For the binomial iteration (6.41), writing A = s(I − C)
with s given by (6.40), ρ(C) = 0.992 and 313 iterations are required with the same
convergence tolerance. Similarly, the Visser iteration (6.51) requires 245 iterations
with α = 0.058, which was determined by hand to approximately minimize the number
of iterations. (The Newton–Schulz iteration (6.35) does not converge for this matrix.)
The stability of (6.51) is easily demonstrated under the assumption that A has
real, positive eigenvalues. The Fréchet derivative of the iteration function
g(X) = X + α(A − X^2) is Lg(X, E) = E − α(XE + EX). The eigenvalues of Lg(A^{1/2})
are μij = 1 − α(λi^{1/2} + λj^{1/2}), i, j = 1: n, where λi denotes an eigenvalue of A. The
maximum μmax = max_{i,j} |μij| is obtained for i = j, so μmax = max_i |1 − 2αλi^{1/2}|.
Since 1 − 4α^2 λi lies in the cardioid (6.45), we have 1 − 4α^2 λi > −3, i.e., αλi^{1/2} < 1,
so μmax < 1. Hence Lg(A^{1/2}) is power bounded and the iteration is stable.
Since ρ(B) ≥ maxi bii (see Section B.11), (6.52) implies that any nonsingular M-
matrix has positive diagonal entries. The representation (6.52) is not unique. We can
take s = maxi aii (see Problem 6.20), which is useful for computational purposes.
Theorem 6.18 (Alefeld and Schneider). For any nonsingular M-matrix the princi-
pal square root exists, is an M-matrix, and is the only square root that is an M-matrix.
Proof. The first part was shown above. The analysis below of the binomial
iteration (6.41) provides a constructive proof of the second part. For the third part
note that any nonsingular M-matrix has spectrum in the right half-plane and that
there is only one square root with this property by Theorem 1.29.
and so a splitting of the form used in Section 6.8.1 automatically exists. It follows
that we can use the binomial iteration (6.41) to compute (I − C)1/2 =: I − P . Since
C ≥ 0, the monotonic convergence shown in Theorem 6.13 is in effect. Moreover,
since we showed in Section 6.8.1 that P ≥ 0 and ρ(P ) ≤ ρ(C) < 1, it follows that
I − P , and hence A1/2 , is an M-matrix (which completes the proof of Theorem 6.18).
As mentioned in Section 6.8.1, the Newton–Schulz iteration (6.35) can be used to
compute (I−C)1/2 and will generally be preferable. However, (6.41) has the advantage
that it is structure-preserving: at whatever point the iteration is terminated we have
an approximation s1/2 (I − Pk ) ≈ A1/2 that is an M-matrix; by contrast, the Newton–
Schulz iterates Yk are generally not M-matrices.
The Newton iterations of Section 6.3 are applicable to M-matrices and are structure-
preserving: if A is a nonsingular M-matrix and X0 = A then so are all the iterates Xk
(or, in the case of the CR iteration (6.19), Zk ). This nonobvious property is shown
by Meini [421, ] for (6.19) via the cyclic reduction derivation of the iteration; a
direct proof for the Newton iteration (6.12) does not seem easy.
A numerical experiment illustrates well the differing convergence properties of
the binomial and Newton–Schulz iterations. For the 8 × 8 instance of (6.53), with
s = maxi aii = 2, we find ρ(C) = 0.94. The binomial iteration (6.41) requires
114 iterations to produce a relative residual less than nu in IEEE double precision
arithmetic, whereas the Newton–Schulz iteration requires just 9 iterations. When
the residual tolerance is relaxed to 10−3 , the numbers of iterations are 15 and 6,
respectively. If we increase the diagonal of the matrix from 2 to 4 then ρ(C) = 0.47
and the numbers of iterations are, respectively, 26 and 6 (tolerance nu) and 4 and
3 (tolerance 10−3 ), so iteration (6.41) is more competitive for the more diagonally
dominant matrix.
A class of matrices that includes the M-matrices is defined with the aid of the
comparison matrix associated with A:

    M(A) = (mij),      mij = |aii| if i = j,      mij = −|aij| if i ≠ j.      (6.55)

A ∈ C^{n×n} is an H-matrix if M(A) is an M-matrix.
Theorem 6.20 (Lin and Liu). For any nonsingular H-matrix A ∈ C^{n×n} with pos-
itive diagonal entries the principal square root A^{1/2} exists and is the unique square
root that is an H-matrix with positive diagonal entries.

Proof. By Problem 6.20 we can write M(A) = sI − B, where B ≥ 0, s > ρ(B),
and s = max_i |aii| = max_i aii. This means that A = sI − C̃, where |C̃| = B and
hence ρ(C̃) ≤ ρ(|C̃|) = ρ(B) < s. It follows that the eigenvalues of A lie in the
open right half-plane. Hence the principal square root A^{1/2} exists. We now need
to show that A^{1/2} is an H-matrix with positive diagonal entries. This can be done
by applying the binomial expansion (6.38) to A = s(I − C), where C = C̃/s and
ρ(C) < 1. We have A^{1/2} = s^{1/2}(I − P) where, from (6.42), ρ(P) ≤ ρ(|P|) ≤ ρ(|C|) =
s^{-1} ρ(|C̃|) < 1. Since max_i pii ≤ ρ(P) < 1, A^{1/2} has positive diagonal elements.
Moreover, M(A^{1/2}) = s^{1/2}(I − |P|) and ρ(|P|) < 1, so A^{1/2} is an H-matrix.
Finally, A^{1/2} is the only square root that is an H-matrix with positive diagonal
entries, since it is the only square root with spectrum in the open right half-plane.
For computing the principal square root of an H-matrix with positive diagonal
elements we have the same options as for an M-matrix: the binomial iteration (6.41),
the Newton iterations of Section 6.3, and the Newton–Schulz iteration (6.35). The bi-
nomial iteration no longer converges monotonically. For the behaviour of the Newton
iterations, see Problem 6.25.
Algorithm 6.21. Given a Hermitian positive definite matrix A ∈ C^{n×n} this algo-
rithm computes H = A^{1/2}.

1  A = R*R (Cholesky factorization).
2  Compute the Hermitian polar factor H of R by applying Algorithm 8.20
   to R (exploiting the triangularity of R).

Cost: Up to about 15⅔ n^3 flops.
Algorithm 8.20, involved in step 2, is a Newton method for computing the Her-
mitian polar factor; see Section 8.9. Algorithm 6.21 has the advantage that it it-
erates on R, whose 2-norm condition number is the square root of that of A, and
that it takes advantage of the excellent stability and the ready acceleration possi-
bilities of the polar iteration. The algorithm can be extended to deal with positive
semidefinite matrices by using a Cholesky factorization with pivoting [286, ]. The
algorithm is also easily adapted to compute the inverse square root, A^{-1/2}, at the
same cost: after computing the unitary polar factor U by Algorithm 8.20, compute
A^{-1/2} = R^{-1}U (= H^{-1}).
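A minimal MATLAB sketch of Algorithm 6.21 follows. Algorithm 8.20 is not reproduced in
this excerpt, so as a stand-in the polar factors of R are obtained here from an SVD rather
than by the Newton polar iteration; the test matrix is an arbitrary choice.

% Square root of a Hermitian positive definite A via Cholesky + polar (Algorithm 6.21).
n = 8;
A = gallery('lehmer',n);          % Hermitian positive definite test matrix
R = chol(A);                      % A = R'*R
[W,S,V] = svd(R);                 % stand-in for Algorithm 8.20
H = V*S*V';                       % Hermitian polar factor of R, so H = A^(1/2)
U = W*V';                         % unitary polar factor of R
Xinv = R\U;                       % A^(-1/2) = R^(-1)*U (= H^(-1))
norm(H^2 - A,1)/norm(A,1)         % relative residual for A^(1/2)
norm(Xinv*A*Xinv - eye(n),1)      % check that Xinv approximates A^(-1/2)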
An alternative approach is to use the binomial iteration (6.41). If A ∈ C^{n×n} is
Hermitian positive definite then it has real, positive eigenvalues λn ≤ · · · ≤ λ1, and
so using (6.39) and (6.40) we can write A = s(I − C), where s = (λ1 + λn)/2 and
ρ(C) = ‖C‖_2 = (κ2(A) − 1)/(κ2(A) + 1) < 1, and this choice of s minimizes ‖C‖_2.
The convergence of (6.41) is monotonic nondecreasing in the positive semidefinite
ordering, where for Hermitian matrices A and B, A ≤ B denotes that B − A is positive
semidefinite (see Section B.12); this can be shown by diagonalizing the iteration
and applying Theorem 6.13 to the resulting scalar iterations. However, unless A
is extremely well conditioned (κ2(A) < 3, say) the (linear) convergence of (6.41) will
be very slow.
We now return to the Riccati equation XAX = B discussed in Section 2.9, which
generalizes the matrix square root problem. When A and B are Hermitian positive
definite we can use a generalization of Algorithm 6.21 to compute the Hermitian
positive definite solution. For a derivation of this algorithm see Problem 6.21. The
algorithm is more efficient than direct use of the formulae in Section 2.9 or Problem 2.7
and the same comments about conditioning apply as for Algorithm 6.21.
Algorithm 6.22. Given Hermitian positive definite matrices A, B ∈ C^{n×n} this al-
gorithm computes the Hermitian positive definite solution of XAX = B.

1  A = R*R, B = S*S (Cholesky factorizations).
2  Compute the unitary polar factor U of SR* using Algorithm 8.20.
3  X = R^{-1}U*S (by solving the triangular system RX = U*S).
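The sketch below illustrates Algorithm 6.22 in MATLAB, again with an SVD standing in for
Algorithm 8.20 to obtain the unitary polar factor; the test matrices are arbitrary Hermitian
positive definite examples.

% Hermitian positive definite solution of X*A*X = B (Algorithm 6.22).
n = 6;
A = gallery('lehmer',n);  B = gallery('minij',n);
R = chol(A);  S = chol(B);        % A = R'*R, B = S'*S
[W,~,V] = svd(S*R');              % polar factors of S*R' via SVD (stand-in)
U = W*V';                         % unitary polar factor
X = R\(U'*S);                     % solves R*X = U'*S, i.e., X = R^(-1)*U'*S
norm(X*A*X - B,1)/norm(B,1)       % relative residual
norm(X - X',1)                    % X is Hermitian up to roundoff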
which is in Jordan form with a 2 × 2 block and a 1 × 1 block. The two primary square
roots of A are

    X = ± [ ε^{1/2}   (1/2)ε^{-1/2}   0
            0          ε^{1/2}          0
            0          0               ε^{1/2} ],

and ‖X‖ is therefore of order ε^{-1/2}. But A also has two families of nonprimary square
roots:

    X = ±W [ ε^{1/2}   (1/2)ε^{-1/2}   0
             0          ε^{1/2}          0
             0          0              −ε^{1/2} ] W^{-1},

where

    W = [ a  b  c
          0  a  0
          0  d  e ],

with the parameters a, b, c, d, e arbitrary subject to W being nonsingular (see Theo-
rems 1.26 and 1.25). Setting a = e = 1, b = 0, c = −1/(2ε^{1/2}), and d = −c gives the
particular square roots

    X = ± [ ε^{1/2}   0         1
            0         ε^{1/2}    0
            0         1        −ε^{1/2} ].            (6.57)
These two nontriangular square roots have ‖X‖ = O(1), so they are much smaller
normed than the primary square roots. This example shows that the square root of
minimal norm may be a nonprimary square root and that the primary square roots
can have norm much larger than the minimum over all square roots. Of course, if A is
normal then the minimal possible value ‖X‖_2 = ‖A‖_2^{1/2} is attained at every primary
(and hence normal) square root. It is only for nonnormal matrices that there is any
question over the size of ‖X‖.
How to characterize or compute a square root of minimal norm is a difficult open
question. The plausible conjecture that for a matrix with distinct, real, positive
eigenvalues the principal square root will be the one of minimal norm is false; see
Problem 6.22.
For the computation, Björck and Hammarling [73, ] consider applying general
purpose constrained optimization methods. Another approach is to augment the
Schur method with a heuristic designed to steer it towards a minimal norm square
root. Write the equations (6.5) as
    ujj = ±tjj^{1/2},
                                                                          (6.58)
    uij = ( tij − Σ_{k=i+1}^{j−1} uik ukj ) / (uii + ujj),      i = j − 1: −1: 1.
Denote the values uij resulting from the two possible choices of ujj by uij^+ and uij^-.
Suppose u11, . . . , uj−1,j−1 have been chosen and the first j − 1 columns of U have been
computed. The idea is to take whichever sign for ujj leads to the smaller 1-norm for
the jth column of U, a strategy that is analogous to one used in matrix condition
number estimation [276, , Chap. 15].
The following algorithm (Algorithm 6.23) implements this strategy for an upper
triangular T ∈ C^{n×n}:

1  u11 = t11^{1/2}
2  for j = 2: n
3     Compute from (6.58) uij^+, uij^- (i = j − 1: −1: 1),
      sj^+ = Σ_{i=1}^{j} |uij^+|, and sj^- = Σ_{i=1}^{j} |uij^-|.
4     if sj^+ ≤ sj^- then uij = uij^+, i = 1: j, else uij = uij^-, i = 1: j.
5  end

Cost: (2/3)n^3 flops.
The cost of Algorithm 6.23 is twice that of a direct computation of T 1/2 with
an a priori choice of the uii . This is a minor overhead in view of the overall cost of
Algorithm 6.3.
For T with distinct eigenvalues, every square root of T is a candidate for compu-
tation by Algorithm 6.23. If T has some repeated eigenvalues then the algorithm will
avoid nonprimary square roots, because these yield a zero divisor in (6.58) for some
i and j.
It is easy to see that when T has real, positive diagonal elements Algorithm 6.23
will assign positive diagonal to U . This choice is usually, but not always, optimal, as
can be confirmed by numerical example.
A weakness of Algorithm 6.23 is that it locally minimizes column norms without
judging the overall matrix norm. Thus the size of the (1, n) element of the square
root, for example, is considered only at the last stage. The algorithm can therefore
produce a square root of norm far from the minimum (as it does on the matrix in
Problem 6.22, for example). Generally, though, it performs quite well.
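A MATLAB sketch of Algorithm 6.23 is given below (an illustration only; the test matrix
is an arbitrary choice, and ties between the two column norms are broken in favour of the
positive choice of ujj).

% Square root of an upper triangular T, choosing sign(u_jj) to minimize
% the 1-norm of column j of U (Algorithm 6.23).
n = 6;
T = triu(ones(n)) + diag(1:n);              % upper triangular, distinct positive diagonal
U = zeros(n);
U(1,1) = sqrt(T(1,1));
for j = 2:n
    ucand = zeros(n,2);
    for s = 1:2
        ucand(j,s) = (3 - 2*s)*sqrt(T(j,j));        % +sqrt(t_jj), then -sqrt(t_jj)
        for i = j-1:-1:1
            ucand(i,s) = (T(i,j) - U(i,i+1:j-1)*ucand(i+1:j-1,s)) ...
                         / (U(i,i) + ucand(j,s));   % recurrence (6.58)
        end
    end
    [~,s] = min([norm(ucand(1:j,1),1), norm(ucand(1:j,2),1)]);
    U(1:j,j) = ucand(1:j,s);
end
norm(U^2 - T,1)/norm(T,1)                   % relative residual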
This matrix is discussed by Boyd, Micchelli, Strang, and Zhou [80, ], who describe
various applications of matrices from a more general class of “binomial matrices”.
The latter three matrices are available in MATLAB as
gallery(’invol’,n) % D*H
pascal(n,1)’
gallery(’binomial’,n) % Without the 2^((1-n)/2) scale factor.
The instability of Newton’s iteration (6.12) was first noted by Laasonen [366, ].
He stated without proof that, for matrices with real, positive eigenvalues, Newton’s
method “if carried out indefinitely, is not stable whenever the ratio of the largest to
the smallest eigenvalue of A exceeds the value 9”. Analysis justifying this claim was
given by Higham [267, ], who obtained the condition (6.24) and also proved the
stability of the DB iteration. Some brief comments on the instability of (6.12) are
given by Liebl [390, ], who computes the eigenvalues of the Fréchet derivative of
the iteration function (Exercise E.10.1-18 of [453, ] summarizes Liebl’s paper).
The stability of the product form of the DB iteration and of the CR iteration is
proved by Cheng, Higham, Kenney, and Laub [108, ] and Iannazzo [305, ],
respectively.
The Padé iterations (6.34) with ℓ = m − 1 are derived and investigated by Higham
[274, ], who proves the stability and instability of particular variants and shows
how to scale the iterations. Theorems 6.11 and 6.12 are from Higham, Mackey,
Mackey, and Tisseur [283, , Thms. 4.5, 5.3]. Our proof of Theorem 6.12 is
much shorter than the one in [283, ], which does not exploit the stability of the
underlying sign iterations.
Using the binomial expansion (6.38) to compute matrix square roots is suggested
by Waugh and Abel [610, ] in connection with the application discussed in Sec-
tion 2.3 involving roots of transition matrices. The binomial expansion is also inves-
tigated by Duke [170, ].
The binomial iteration (6.41) is discussed for C ≥ 0 by Alefeld and Schneider [8,
] and Butler, Johnson, and Wolkowicz [86, ], where it is used for theoretical
purposes rather than suggested as a computational tool. Theorem 6.13 is essentially
found in these papers. Albrecht [7, ] suggests the use of (6.41) for Hermitian
positive definite matrices. Theorem 6.14 is new.
The iteration (6.48) was suggested for symmetric positive definite matrices by
Pulay [480, ] (see also Bellman [51, , Ex. 1, p. 334]), but no convergence
result is given there. Theorem 6.15 is new. The approximation B1 obtained after one
iteration of (6.48) for symmetric positive definite A and D = diag(A) has been used
in a stochastic differential equations application by Sharpa and Allen [517, ].
The iteration (6.51) can be traced back to Visser [602, ], who used it to show
that a bounded positive definite self-adjoint operator on Hilbert space has a bounded
positive definite self-adjoint square root without applying the spectral theorem. This
use of the iteration can also be found in functional analysis texts, such as those
of Debnath and Mikusiński [144, ], Riesz and Sz.-Nagy [492, , Sec. 104],
Schechter [502, , Sec. 8.2], and Halmos [244, , Prob. 121]. For a generalization
of Visser’s technique to pth roots see Problem 7.7. Theorem 6.16 is new. Elsner [176,
] obtains the stability result derived at the end of Section 6.8.2. Iteration (6.51)
and its stability are also studied for symmetric positive definite matrices by Liebl
[390, ] and Babuška, Práger, and Vitásek [25, , Sec. 2.4.5].
Theorem 6.18 is an interpretation of results of Alefeld and Schneider [8, ],
who also treat square roots of singular M-matrices. Theorem 6.20 is proved for real
H-matrices by Lin and Liu [393, ].
Algorithm 6.21 for Hermitian positive definite matrices is due to Higham [266,
]. Algorithm 6.22 is suggested by Iannazzo [307, ].
That the matrix (6.56) has a small-normed square root (6.57) is pointed out by
Björck and Hammarling [73, ]. Algorithm 6.23 is from Higham [268, ].
For more on linear substitution ciphers see Bauer [46, ].
168 Matrix Square Root
Early contributions on involutory matrices are those of Cayley [99, ], [100,
] and Sylvester [556, ]. The number of involutory matrices over a finite field
is determined by Hodges [293, ].
Finally, an application in which the inverse matrix square root appears is the
computation of tight windows of Gabor frames [319, ].
Problems
6.1. Prove the integral formula (6.1) by using the integral (5.3) for the matrix sign
function together with (5.4).
6.2. (Cayley [99, ], [100, ], Levinger [382, ]) Let A ∈ C2×2 . If A has
distinct complex conjugate eigenvalues then the square roots of A can be obtained
explicitly from the formulae following Lemma 6.4. More generally, the following
approach can be considered. By the Cayley–Hamilton theorem any 2 × 2 matrix X
satisfies X^2 − trace(X)X + det(X)I = 0. Let X^2 = A. Then

    A − trace(X)X + det(A)^{1/2} I = 0.                        (6.60)

Taking the trace gives

    trace(A) − trace(X)^2 + 2 det(A)^{1/2} = 0.                (6.61)

We can solve (6.61) for trace(X) and then solve (6.60) for X to obtain

    X = ( A + det(A)^{1/2} I ) / ( trace(A) + 2 det(A)^{1/2} )^{1/2}.
6.9. (Elsner [176, ]; the ideas in this problem have been generalized to algebraic
Riccati equations: see Mehrmann [418, , Thm. 11.3] and Lancaster and Rodman
[370, , Thms. 9.1.1, 9.2.1].)
(a) Let > and ≥ denote the orderings on Hermitian matrices defined in Section B.12.
Show that if A > 0 and X0 > 0 then the iterates from the full Newton iteration (6.11)
satisfy
    0 < A^{1/2} ≤ · · · ≤ Xk+1 ≤ Xk ≤ · · · ≤ X1,      A ≤ Xk^2,      k ≥ 1,
and deduce that Xk converges monotonically to A1/2 in the positive semidefinite
ordering. Hint: first show that if C > 0 and H ≥ 0 (H > 0) then the solution of the
Sylvester equation XC + CX = H satisfies X ≥ 0 (X > 0).
(b) Note that X0 is an arbitrary positive definite matrix: it need not commute with
A. What can be said about the convergence of the simplified Newton iteration (6.12)?
(c) Suppose A is not Hermitian positive definite but that there is a nonsingular Z
such that Z −1 AZ and Z −1 X0 Z are Hermitian positive definite. How can the result
of (a) be applied?
6.10. Explain the behaviour of the Newton iteration (6.12) when A has an eigenvalue
on R− .
6.11. (Iannazzo [305, ]) Give another derivation of the DB iteration (6.15) as
follows. “Symmetrize” (6.12) by writing it as
    Xk+1 = (1/2)( Xk + A^{1/2} Xk^{-1} A^{1/2} ),      X0 = A.
Heuristically, this symmetrization should improve the numerical stability. Remove
the unwanted square roots by introducing a new variable Yk = A−1/2 Xk A−1/2 .
6.12. In the DB iteration (6.15) it is tempting to argue that when computing Yk+1 the
latest X iterate, Xk+1, should be used instead of Xk, giving Xk+1 = (1/2)(Xk + Yk^{-1})
and Yk+1 = (1/2)(Yk + Xk+1^{-1}). (This is the same reasoning that produces the Gauss–
Seidel iteration from the Jacobi iteration.) What effect does this change have on the
convergence of the iteration?
6.13. From (6.25) we obtain the relation for the DB iteration function G:

    G(A^{1/2} + E, A^{-1/2} + F) = [A^{1/2}; A^{-1/2}]
        + (1/2) [ E − A^{1/2} F A^{1/2}; F − A^{-1/2} E A^{-1/2} ] + O( ‖[E; F]‖^2 ).

Explain how the quadratic convergence of the DB iteration can be seen from this
relation. In other words, reconcile the stability analysis with the convergence analysis.
6.14. In the coupled Padé iteration (6.34) we know that Yk = AZk for all k, so we
can rewrite the Yk recurrence in the uncoupled form

    Yk+1 = Yk pℓm(I − A^{-1}Yk^2) qℓm(I − A^{-1}Yk^2)^{-1},      Y0 = A.      (6.62)

What are the pros and cons of this rearrangement? Note that for ℓ = 0, m = 1, and
A ← A^{-1}, (6.62) is the iteration

    Yk+1 = 2Yk(I + Yk A Yk)^{-1}                                  (6.63)

for computing A^{-1/2}, which appears occasionally in the literature [518, ].
6.15. Discuss whether the requirement ρ(C) < 1 in Theorem 6.13 can be relaxed by
making use of Theorem 6.14.
6.16. Show that Pulay’s iteration (6.48) is equivalent to the modified Newton itera-
tion (6.46) with X0 = D1/2 , and specifically that Xk = D1/2 + Bk .
6.17. Show that for A ∈ Cn×n with real, positive eigenvalues 0 < λn ≤ λn−1 ≤
· · · ≤ λ1 , mins∈R { ρ(C) : A = s(I − C) } is attained at s = (λ1 + λn )/2, and ρ(C) =
(λ1 − λn )/(λ1 + λn ).
6.18. Show that for A ∈ C^{n×n}, min_{s∈C} { ‖C‖_F : A = s(I − C) } is attained at
s = trace(A*A)/trace(A*). (Cf. Theorem 4.21 (a).)
6.19. Show that Theorem 6.16 can be extended to A having a semisimple zero eigen-
value.
6.20. Show that if A is an M-matrix then in the representation (6.52) we can take
s = maxi aii .
6.21. Derive Algorithm 6.22 for solving XAX = B.
6.22. Show that for the matrix

    T = [ 1    −1            1/ε^2
          0    (1 + ε)^2     1
          0    0             (1 + 2ε)^2 ],      0 < ε ≪ 1,

the principal square root is not the square root of minimal norm.
6.23. (Meyer [426, , Ex. 3.6.2]) Show that for all A ∈ C^{m×n} and B ∈ C^{n×m} the
matrix

    [ I − BA        B
      2A − ABA      AB − I ]

is involutory.
6.24. (Research problem) Which of the DB, product DB, and IN iterations is to
be preferred in which circumstances?
6.25. (Research problem) When the Newton iteration (6.12) is applied to a non-
singular H-matrix A with positive diagonal entries are all the iterates Xk nonsingular
H-matrices with positive diagonal entries?
6.26. (Research problem) Develop algorithms for computing directly the Cholesky
factors of (a) the principal square root of a Hermitian positive definite matrix and
(b) the Hermitian positive definite solution X of XAX = B, where A and B are
Hermitian positive definite.
Blackwell: You know the algorithm for calculating the square root? . . .
It occurred to me that maybe this algorithm would work for positive definite matrices.
You take some positive definite X, add it to SX^{-1} and divide by two.
The question is: Does this converge to the square root of X . . . .
In a particular example, the error at first was tremendous,
then dropped down to about .003.
Then it jumped up a bit to .02, then jumped up quite a bit to .9,
and then it exploded. Very unexpected.. . .
It turns out that the algorithm works provided that the matrix you start with
commutes with the matrix whose square root you want.
You see, it’s sort of natural because you
have to make a choice between SX^{-1} and X^{-1}S,
but of course if they commute it doesn’t make any difference.
Of course I started out with the identity matrix and it should commute with anything.
So what happened?
MP: You must have been having some kind of roundoff.
Blackwell: Exactly! If the computer had calculated exactly it would have converged.
The problem is that the matrix the computer used didn’t quite commute.
— DAVID BLACKWELL7 , in Mathematical People: Profiles and Interviews (1985)
7 From [6, ].
Chapter 7
Matrix pth Root
X is a pth root of A ∈ Cn×n if X p = A. The pth root of a matrix, for p > 2, arises
less frequently than the square root, but nevertheless is of interest both in theory and
in practice. One application is in the computation of the matrix logarithm through
the relation log A = p log A1/p (see Section 11.5), where p is chosen so that A1/p
can be well approximated by a polynomial or rational function. Here, A1/p denotes
the principal pth root, defined in the next section. Roots of transition matrices are
required in some finance applications (see Section 2.3). Related to the pth root is the
matrix sector function sect_p(A) = (A^p)^{-1/p} A discussed in Section 2.14.3.
The matrix pth root is an interesting object of study because algorithms and
results for the case p = 2 do not always generalize easily, or in the manner that they
might be expected to.
In this chapter we first give results on existence and classification of pth roots.
Then we generalize the Schur method described in Chapter 6 for the square root.
Newton iterations for the principal pth root and its inverse are explored, leading to a
hybrid Schur–Newton algorithm. Finally, we explain how the pth root can be obtained
via the matrix sign function.
Throughout the chapter p is an integer. Matrix roots Aα with α a real number
can be defined (see Problem 7.2) but the methods of this chapter are applicable only
when α is the reciprocal of an integer.
7.1. Theory
We summarize some theoretical results about matrix pth roots, all of which generalize
results for p = 2 already stated.
Theorem 7.1 (classification of pth roots). Let the nonsingular matrix A ∈ C^{n×n}
have the Jordan canonical form Z^{-1}AZ = J = diag(J1, J2, . . . , Jm), with Jk =
Jk(λk), and let s ≤ m be the number of distinct eigenvalues of A. Let Lk^{(jk)} =
Lk^{(jk)}(λk), k = 1: m, denote the p pth roots given by (1.4), where jk ∈ {1, 2, . . . , p}
denotes the branch of the pth root function. Then A has precisely p^s pth roots that
are primary functions of A, given by

    Xj = Z diag(L1^{(j1)}, L2^{(j2)}, . . . , Lm^{(jm)}) Z^{-1},      j = 1: p^s.
Theorem 7.2 (principal pth root). Let A ∈ C^{n×n} have no eigenvalues on R^−. There
is a unique pth root X of A all of whose eigenvalues lie in the segment { z : −π/p <
arg(z) < π/p }, and it is a primary matrix function of A. We refer to X as the
principal pth root of A and write X = A^{1/p}. If A is real then A^{1/p} is real.
Proof. The proof is entirely analogous to that of Theorem 1.29.
Theorem 7.3 (existence of pth root). A ∈ C^{n×n} has a pth root if and only if the
"ascent sequence" of integers d1, d2, . . . defined by

    di = dim(null(A^i)) − dim(null(A^{i−1}))

has the property that for every integer ν ≥ 0 no more than one element of the sequence
lies strictly between pν and p(ν + 1).
Proof. See Psarrakos [479, ].
We note the integral representation

    A^{1/p} = ( p sin(π/p)/π ) A ∫_0^∞ (t^p I + A)^{-1} dt,            (7.1)

generalizing (6.1), which holds for any real p > 1. For an arbitrary real power, see
Problem 7.2.
Our final result shows that a pth root can be obtained from the invariant subspace
of a block companion matrix associated with the matrix polynomial λp I − A.
Theorem 7.4 (Benner, Byers, Mehrmann, and Xu). Let A ∈ C^{n×n}. If the columns
of U = [U1*, . . . , Up*]* ∈ C^{pn×n} span an invariant subspace of

    C = [ 0  I  0  . . .  0
          0  0  I  . . .  0
          ⋮           ⋱   ⋮
          0  0  0  . . .  I
          A  0  0  . . .  0 ]  ∈ C^{pn×pn},                      (7.2)

that is, CU = UY for some nonsingular Y ∈ C^{n×n}, and U1 is nonsingular, then
X = U2 U1^{-1} is a pth root of A.
Proof. The equation CU = UY yields Ui+1 = Ui Y, i = 1: p − 1, and AU1 = Up Y.
Hence Ui = U1 Y^{i−1}, i = 1: p, and AU1 = U1 Y^p. Therefore
X^p = (U2 U1^{-1})^p = (U1 Y U1^{-1})^p = U1 Y^p U1^{-1} = AU1 U1^{-1} = A.
Substituting the latter equation for Vij into the equation for Rij gives

    Rij = Uii( Uii Uij + Uij Ujj + Bij^{(0)} ) + Uij Vjj + Bij^{(1)}.
This is a generalized Sylvester equation for Uij . The right-hand side depends only on
blocks of U and V lying to the left of and below the (i, j) blocks. Hence, just as in
the square root case, we can solve for Uij and Vij a block column at a time or a block
superdiagonal at a time.
Cost: 25n^3 flops for the Schur decomposition plus 2n^3/3 for U and 3n^3 to form X:
28⅔ n^3 flops in total. (The cost of forming U is dominated by that of computing Bij^{(0)}
and Bij^{(1)}, which is essentially that of forming U^2 and UV, respectively.)
This approach can be generalized to obtain an algorithm for pth roots. Let U^p =
R. The idea is to define V^{(k)} = U^{k+1}, k = 0: p − 2 and obtain recurrences for the blocks
of the V^{(k)} using the relations U V^{(k)} = V^{(k+1)}, k = 0: p − 3 and U V^{(p−2)} = R. We
simply state the algorithm; for the derivation, which involves some tedious algebra,
see Smith [530, ].

Cost: 25n^3 flops for the Schur decomposition plus (p − 1)n^3/3 for U (which is essen-
tially the cost of computing the Bk) and 3n^3 to form X: (28 + (p − 1)/3)n^3 flops in
total.
Two details in Algorithms 7.5 and 7.6 remain to be described: how to compute
Rjj^{1/p} and how to solve the generalized Sylvester equations.
A formula for Rjj^{1/p}, for Rjj ∈ R^{2×2}, can be obtained by adapting the approach
used for p = 2 in Section 6.2. Since Rjj has distinct eigenvalues we can write (as in
the proof of Lemma 6.4)

    Z^{-1} Rjj Z = diag(λ, λ̄) = θI + iμK,      K = [ 1  0; 0  −1 ].
which has dimension 1, 2, or 4 and can be solved by Gaussian elimination with partial
pivoting. To check the nonsingularity of the coefficient matrix we note that, by (B.17),
this matrix has eigenvalues

    Σ_{k=0}^{p−1} λr^k μs^{p−1−k} = (λr^p − μs^p)/(λr − μs)  if λr ≠ μs,
                                  = p λr^{p−1}               if λr = μs,

where λr ∈ Λ(Ujj) and μs ∈ Λ(Uii), with λr and μs nonzero by the assumption that
A is nonsingular. Hence nonsingularity holds when λr^p ≠ μs^p for all r and s, which
requires that λr ≠ μs e^{2πik/p}, k = 1: p − 1. This condition is certainly satisfied if
any eigenvalue appearing in two different blocks Rii and Rjj is mapped to the same
pth root in Rii^{1/p} and Rjj^{1/p}. Hence the algorithm will succeed when computing any
primary pth root.
Algorithm 7.6 requires storage for the 2p − 2 intermediate upper quasi-triangular
matrices V^{(k)} and B^{(k)}. For large p, this storage can be a significant cost. However,
if p is composite the pth root can be obtained by successively computing the roots
given by the prime factors of p, which saves both storage and computation.
If A is nonsingular and has a negative real eigenvalue then the principal pth root
is not defined. For odd p, Algorithm 7.6 can nevertheless compute a real, primary
pth root by taking λ^{1/p} = −|λ|^{1/p} when λ < 0. For even p, there is no real, primary
pth root in this situation.
The stability properties of Algorithm 7.6 are entirely analogous to those for the
square root case. The computed X̂ satisfies

    ‖A − X̂^p‖_F / ‖A‖_F ≤ γ̃_{n^3} α_F(X̂),                        (7.3)

where

    α(X) = ‖X‖^p / ‖A‖ = ‖X‖^p / ‖X^p‖ ≥ 1.

The quantity α(X) can be regarded as a condition number for the relative residual of
X, based on the same reasoning as used in Section 6.1.
Algorithms 7.5 and 7.6 are readily specialized to use the complex Schur decompo-
sition, by setting all blocks to be 1 × 1.
No O(n^3) flops methods are known for solving this generalized Sylvester equation
for p > 2 and so it is necessary to simplify the equation by using commutativity
properties, just as in Lemma 6.8. When X0 commutes with A so do all the Xk, and
Newton's method can be written (see Problem 7.5)

    Xk+1 = (1/p)[ (p − 1)Xk + Xk^{1−p} A ].            (7.5)
The convergence properties of the iteration, and their dependence on X0 , are much
more complicated than in the square root case.
The most obvious choice, X0 = A, is unsatisfactory because no simple conditions
are available that guarantee convergence to the principal pth root for general A. To
illustrate, we consider the scalar case. Note that if we define yk = a^{-1/p} xk then (7.5)
reduces to

    yk+1 = (1/p)[ (p − 1)yk + yk^{1−p} ],      y0 = a^{-1/p} x0,          (7.6)
which is the Newton iteration for a pth root of unity with a starting value depending
on a and x0 . Figure 7.1 plots for p = 2: 5, a = 1, and y0 ranging over a 400 × 400
grid with Re y0 , Im y0 ∈ [−2.5, 2.5], the root to which yk from (7.6) converges, with
each root denoted by a different grayscale from white (the principal root) to black.
Convergence is declared if after 50 iterations the iterate is within relative distance
10−13 of a root; the relatively small number of points for which convergence was not
observed are plotted white. For p = 2, the figure confirms what we already know from
Chapter 6: convergence is obtained to whichever square root of unity lies nearest y0 .
But for p ≥ 3 the regions of convergence have a much more complicated structure,
involving sectors with petal-like boundaries. This behaviour is well known and is
illustrative of the theory of Julia sets of rational maps [468, ], [511, ]. In
spite of these complications, the figures are suggestive of convergence to the principal
root of unity when y0 lies in the region
which is marked by the solid line in the figure for p ≥ 3. In fact, this is a region of
convergence, as the next theorem, translated back to (7.5) with varying a and x0 = 1
shows. We denote by R+ the open positive real axis.
Theorem 7.7 (convergence of scalar Newton iteration). For all p > 1, the iteration

    xk+1 = (1/p)[ (p − 1)xk + xk^{1−p} a ],      x0 = 1,

converges quadratically to a^{1/p} if a belongs to the set
Corollary 7.8 (convergence of matrix Newton iteration). Let A ∈ C^{n×n} have no eigen-
values on R^−. For all p > 1, the iteration (7.5) with X0 = I converges quadratically
to A^{1/p} if each eigenvalue λi of A belongs to the set S in (7.7).
Figure 7.1. Convergence of the Newton iteration (7.6) for a pth root of unity. Each point
y0 in the region is shaded according to the root to which the iteration converges, with white
denoting the principal root. Equivalently, the plots are showing convergence of (7.5) in the
scalar case with a = 1 and the shaded points representing x0 .
Proof. Theorems 7.7 and 4.15 together imply that Yk has a limit Y∗ whose
eigenvalues are the principal pth roots of the eigenvalues of A. The limit clearly
satisfies Y∗p = A, and so is a pth root of A. By Theorem 7.2, the only pth root of A
having the same spectrum as Y∗ is the principal pth root, and so Y∗ = A1/p .
Algorithm 7.9 (Matrix pth root via Newton iteration). Given A ∈ C^{n×n} having no
eigenvalues on R^− this algorithm computes X = A^{1/p} using the Newton iteration.

1  B = A^{1/2}
2  C = B/θ, where θ is any upper bound for ρ(B) (e.g., θ = ‖B‖)
3  Use the Newton iteration (7.5) with X0 = I to compute
       X = C^{2/p} if p is even,  X = (C^{1/p})^2 if p is odd
4  X ← ‖B‖^{2/p} X
To see why the algorithm works, note that the eigenvalues of B, and hence also
C, lie in the open right half-plane, and ρ(C) ≤ 1. Therefore C satisfies the conditions
of the corollary.
Unfortunately, instability vitiates the convergence properties of iteration (7.5) in
practical computation—indeed we have already seen in Chapters 4 and 6 that this is
the case for p = 2. Analysis similar to that in Section 6.4.1 (see Problem 7.12) shows
that the eigenvalues of Lg(A^{1/p}), where g(X) = p^{-1}[(p − 1)X + X^{1−p}A], are of the form

    (1/p) [ (p − 1) − Σ_{r=1}^{p−1} (λi/λj)^{r/p} ],      i, j = 1: n,          (7.8)

where the λi are the eigenvalues of A. To guarantee stability we need these eigenvalues
to be less than 1 in modulus, which is a very severe restriction on A.
Inspired by Problem 6.11, we might try to obviate the instability by rewriting the
iteration in the more symmetric form (using the commutativity of Xk with A^{1/p})

    Xk+1 = (1/p)[ (p − 1)Xk + A^{1/p} Xk^{-1} A^{1/p} Xk^{-1} · · · A^{1/p} Xk^{-1} A^{1/p} ]
         = (1/p)[ (p − 1)Xk + (A^{1/p} Xk^{-1})^{p−1} A^{1/p} ].                (7.9)

This iteration is stable. Denoting the iteration function by g, it is straightforward to
show that Lg(A^{1/p}, E) = 0, and stability is immediate. For practical computation,
however, we need another way of rewriting the Newton iteration that does not involve
A^{1/p}. Taking a cue from the product form DB iteration (6.17) for the square root,
we define Mk = Xk^{-p} A. Then it is trivial to obtain, using the mutual commutativity
of A, Xk, and Mk, the coupled iteration

    Xk+1 = Xk ( (p − 1)I + Mk )/p,             X0 = I,
                                                                          (7.10)
    Mk+1 = ( ((p − 1)I + Mk)/p )^{-p} Mk,      M0 = A.
7.4. Inverse Newton Method

Newton's method can also be applied to the equation X^{-p} − A = 0; with X0 commuting
with A the resulting iteration is

    Xk+1 = (1/p)[ (p + 1)Xk − Xk^{p+1} A ].            (7.12)

(Of course, (7.12) is (7.5) with p ← −p.) For p = 1 this is the well-known Newton–
Schulz iteration for the matrix inverse [512, ] and the residuals Rk = I − Xk^p A
satisfy Rk+1 = Rk^2 (see Problem 7.8). The next result shows that a p-term recursion
for the residuals holds in general.
Theorem 7.10 (Bini, Higham, and Meini). The residuals Rk = I − Xk^p A from (7.12)
satisfy

    Rk+1 = Σ_{i=2}^{p+1} ai Rk^i,                         (7.13)

where the ai are all positive and Σ_{i=2}^{p+1} ai = 1. Hence if 0 < ‖R0‖ < 1 for some
consistent matrix norm then ‖Rk‖ decreases monotonically to 0 as k → ∞, with
‖Rk+1‖ < ‖Rk‖^2.
Proof. We have Xk+1 = p^{-1} Xk (pI + Rk). Since Xk commutes with A, and hence
with Rk, we obtain

    Rk+1 = I − (1/p^p)(I − Rk)(pI + Rk)^p                                   (7.14)
         = I − (1/p^p)[ p^p I + Σ_{i=1}^p bi Rk^i − Rk^{p+1} ]
         = −(1/p^p)[ Σ_{i=1}^p bi Rk^i − Rk^{p+1} ],

where

    bi = (p choose i) p^{p−i} − (p choose i−1) p^{p−i+1}
       = p^{p−i} [ (p choose i) − p (p choose i−1) ]
       = p^{p−i} [ p!/(i!(p−i)!) − p · p!/((i−1)!(p−i+1)!) ]
       = p^{p−i} · p!/((i−1)!(p−i)!) · [ 1/i − p/(p−i+1) ].

It is easy to see that b1 = 0 and bi < 0 for i ≥ 2. Hence (7.13) holds, with ai > 0 for
all i. By setting Rk ≡ I in (7.13) and (7.14) it is easy to see that Σ_{i=2}^{p+1} ai = 1.
Since 0 < ‖R0‖ < 1, by induction the ‖Rk‖ form a monotonically decreasing sequence
that converges to zero.
If X0 does not commute with A then little can be said about the behaviour of the
residuals for p ≥ 2; see Problems 7.8 and 7.9.
Theorem 7.10 does not immediately imply the convergence of Xk (except in the
scalar case). We can conclude that the sequence of pth powers, {Xk^p}, is bounded, but
the boundedness of {Xk} itself does not follow when n > 1. If the sequence {Xk} is
indeed bounded then by writing (7.12) as Xk+1 − Xk = (1/p)Xk(I − Xk^p A) = (1/p)Xk Rk we
see that {Xk} is a Cauchy sequence and thus converges (quadratically) to a matrix
X∗. This limit satisfies I − X∗^p A = 0 and so is some inverse pth root of A, but not
necessarily the inverse principal pth root.
The next result proves convergence when all the eigenvalues of A lie on the real
interval (0, p + 1) and X0 = I.
Theorem 7.11 (Bini, Higham, and Meini). Suppose that all the eigenvalues of A are
real and positive. Then iteration (7.12) with X0 = I converges to A^{-1/p} if ρ(A) <
p + 1. If ρ(A) = p + 1 the iteration does not converge to the inverse of any pth root
of A.
Proof. By the same reasoning as in the proof of Corollary 7.8 it suffices to analyze
the convergence of the iteration on the eigenvalues of A. We therefore consider the
scalar iteration

    xk+1 = (1/p)[ (1 + p)xk − xk^{p+1} a ],      x0 = 1,                 (7.15)

with a > 0. Let yk = a^{1/p} xk. Then

    yk+1 = (1/p)[ (1 + p)yk − yk^{p+1} ] =: f(yk),      y0 = a^{1/p},     (7.16)

and we need to prove that yk → 1 if y0 = a^{1/p} < (p + 1)^{1/p}. We consider two cases.
If yk ∈ (0, 1) then

    yk+1 = yk ( 1 + (1 − yk^p)/p ) > yk.

Moreover, since

    f(0) = 0,   f(1) = 1,   f′(y) = ((p + 1)/p)(1 − y^p) > 0 for y ∈ [0, 1),

it follows that f(y) < 1 for y ∈ [0, 1). Hence yk < yk+1 < 1 and so the yk form
a monotonically increasing sequence tending to a limit in (0, 1]. But the only fixed
points of the iteration are the roots of unity, so the limit must be 1. Now suppose
y0 ∈ (1, (p + 1)^{1/p}). We have f(1) = 1 and f((p + 1)^{1/p}) = 0, and f′(y) < 0 for y > 1.
It follows that f maps (1, (p + 1)^{1/p}) into (0, 1) and so after one iteration y1 ∈ (0, 1)
and the first case applies. The last part of the result follows from f((p + 1)^{1/p}) = 0
and the fact that 0 is a fixed point of the iteration.
Figure 7.2. Regions of a ∈ C for which the inverse Newton iteration (7.15) converges to
a−1/p . The dark shaded region is E(1, p) in (7.17). The union of that region with the lighter
shaded points is the experimentally determined region of convergence. The solid line marks
the disk of radius 1, centre 1. Note the differing x-axis limits.
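A minimal MATLAB sketch of the inverse Newton iteration (7.12) with X0 = I follows; the
test matrix (chosen so that its eigenvalues lie inside (0, p + 1)) and the stopping test are
illustrative choices, not taken from the text.

% Inverse Newton iteration (7.12) with X0 = I, computing A^(-1/p).
n = 8;  p = 5;
A = eye(n) + 0.5*ones(n)/n;       % eigenvalues 1 and 1.5, inside (0, p+1)
X = eye(n);
for k = 1:50
    X = ((p+1)*X - X^(p+1)*A)/p;  % (7.12)
    R = eye(n) - X^p*A;           % residual of Theorem 7.10
    if norm(R,1) <= 10*n*eps, break, end
end
k, norm(X^p*A - eye(n),1)         % X approximates A^(-1/p)
Y = inv(X);                       % Y approximates A^(1/p)
norm(Y^p - A,1)/norm(A,1)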
The following result builds on the previous one to establish convergence when the
spectrum of A lies within a much larger wedge-shaped convex region in the complex
plane, the region depending on a parameter c. We denote by conv the convex hull.
Theorem 7.12 (Guo and Higham). Let A ∈ C^{n×n} have no eigenvalues on R^−. For
all p ≥ 1, the iterates Xk from (7.12) with X0 = (1/c)I and c ∈ R+ converge quadratically
to A^{-1/p} if all the eigenvalues of A are in the set

    E(c, p) = conv{ { z : |z − c^p| ≤ c^p }, (p + 1)c^p } \ { 0, (p + 1)c^p }.        (7.17)
Lemma 7.13 (Guo and Higham). Suppose that A ∈ C^{n×n} has a positive eigenvalue
λ of multiplicity n and that the largest Jordan block is of size q. Then for the iteration
(7.12) with X0 = λ^{-1/p} I we have Xm = A^{-1/p} for the first m such that 2^m ≥ q.
Algorithm 7.14 (Matrix inverse pth root via inverse Newton iteration). Given A ∈
Cn×n having no eigenvalues on R− this algorithm computes X = A−1/p using the
inverse Newton iteration.
The flop counts of Algorithms 7.9 and 7.14 are generally greater than that for the
Schur method unless p is large, although the fact that the Newton iterations are
rich in matrix multiplication and matrix inversion compensates somewhat on modern
computers. However it is possible to combine the Schur and Newton methods to
advantage. An initial Schur decomposition reveals the eigenvalues, which can be used
to make a good choice of the parameter c in the inverse Newton iteration, and the
subsequent computations are with triangular matrices. The algorithm can be arranged
so that it computes either A1/p or A−1/p , by using either (7.18) or the variant of it
Here, Yk → A^{1/p} under the same conditions (those in Theorem 7.12) that Xk →
A^{-1/p}.
We state the algorithm for real matrices, but an analogous algorithm is obtained
for complex matrices by using the complex Schur decomposition.
square roots as it can; these square roots are inexpensive since the matrix is triangular.
When the eigenvalues are all real the algorithm uses a choice of c with a certain
optimality property. For nonreal eigenvalues, the requirement on the arguments in
line 7 ensures that the corresponding choice of c in line 18 leads to fast convergence.
For more details see Guo and Higham [233, ], where numerical experiments show
Algorithm 7.15 to perform in a numerically stable manner.
The cost of the algorithm is about
2 1 k2 k2
28 + (k1 + k2 ) − + k0 + log2 p n3 flops,
3 3 2 2
The relation (5.4) shows that for p = 2 the sign of C directly reveals A^{1/p}, so we
might hope that A^{1/p} is recoverable from sign(C) for all p. This is true when p is
even and not a multiple of 4.
Theorem 7.16 (Bini, Higham, and Meini). If p = 2q where q is odd, then the first
block column of the matrix sign(C) is given by

    V = (1/p) [ γ0 I
                γ1 X
                γ2 X^2
                  ⋮
                γp−1 X^{p−1} ],

where X = A^{1/p}, γk = Σ_{j=0}^{p−1} ωp^{kj} θj, ωp = e^{2πi/p}, and θj = −1 for j = ⌊q/2⌋ +
1: ⌊q/2⌋ + q, θj = 1 otherwise. If A is Hermitian positive definite then this result
holds also for odd p.
This result allows us to compute the principal pth root of A from the (2, 1) block
of sign(C). For p = 2 we have θ0 = 1, θ1 = −1, γ0 = 0, γ1 = 2, and Theorem 7.16
reproduces the first block column of (5.4).
Algorithm 7.17 (Matrix pth root via matrix sign function). Given A ∈ C^{n×n} hav-
ing no eigenvalues on R^− this algorithm computes X = A^{1/p} via the matrix sign
function.

1  if p is odd
2     p ← 2p, A ← A^2
3  else
4     if p is a multiple of 4
5        while p/2 is even, A ← A^{1/2}, p = p/2, end
6     end
7  end
8  S = sign(C), where C is given in (7.20).
9  X = (p/(2σ)) S(n + 1: 2n, 1: n), where σ = 1 + 2 Σ_{j=1}^{⌊q/2⌋} cos(2πj/p) and q = p/2.
Assuming the matrix sign function is computed using one of the Newton or Padé
methods of Chapter 5, the cost of Algorithm 7.17 is O(n3 p3 ) flops, since C is np × np.
The algorithm therefore appears to be rather expensive in computation and storage.
However, both computation and storage can be saved by exploiting the fact that the
sign iterates belong to the class of A-circulant matrices, which have the form
W0 W1 ... Wp−1
.. ..
AWp−1 W0 . .
.
.. . .
. . . . . W1
AW1 . . . AWp−1 W0
Prior to Iannazzo’s analysis the only available convergence result for the Newton
iteration was for symmetric positive definite matrices [299, ].
The condition for stability of the Newton iteration (7.5) that the eigenvalues in
(7.8) are of modulus less than 1 is obtained by M. I. Smith [530, ].
Algorithm 7.9 is due to Iannazzo [306, ], who also obtains, and proves stable,
iterations (7.9), (7.10), and (7.18).
Section 7.4 is based on Bini, Higham, and Meini [69, ] and Guo and Higham
[233, ], from where the results and algorithms therein are taken.
For p = 2, the inverse Newton iteration (7.12) is well known in the scalar case as
an iteration for the inverse square root. It is employed in floating point hardware to
compute the square root a1/2 via a−1/2 × a, since the whole computation can be done
using only multiplications [117, ], [334, ]. The inverse Newton iteration is
also used to compute a1/p in arbitrarily high precision in the MPFUN and ARPREC
packages [36, ], [37, ].
The inverse Newton iteration with p = 2 is studied for symmetric positive definite
matrices by Philippe, who proves that the residuals Rk = I − Xk^2 A satisfy Rk+1 =
(3/4)Rk^2 + (1/4)Rk^3 [471, , Prop. 2.5]. For arbitrary p, R. A. Smith [531, , Thm. 5]
proves quadratic convergence of (7.12) to some pth root when ρ(R0 ) < 1, as does Lakić
[368, ] under the assumption that A is diagonalizable, but neither determines to
which root convergence is obtained.
The coupled iteration (7.18) is a special case of a family of iterations of Lakić [368,
], who proves stability of the whole family.
Section 7.5 is based on Guo and Higham [233, ]. Section 7.6 is based on Bini,
Higham, and Meini [69, ].
Some other methods for computing the pth root are described by Bini, Higham,
and Meini [69, ], all subject to the same restriction on p as in Theorem 7.16. One
comprises numerical evaluation at the Fourier points of a transformed version of the
integral (7.1), in which the integration is around the unit circle. Another computes a
Wiener–Hopf factorization of the matrix Laurent polynomial F (z) = z −p/2 ((1+z)p A−
(1 − z)p I), from which A1/p is readily obtained. Another method expresses A1/p in
terms of the central coefficients H0 , . . . , Hp/2−1 of the matrix Laurent series H(z) =
P+∞
H0 + i=1 (z i + z −i )Hi , where H(z)F (z) = I. Further work is needed to understand
the behaviour of these methods in finite precision arithmetic; see Problem 7.18.
The principal pth root of a nonsingular M-matrix (see Section 6.8.3) is an M-
matrix, but for p > 3 it need not be the only pth root that is an M-matrix; see
Fiedler and Schneider [185, ].
Sylvester [554, ], [555, ] was interested in finding pth roots of the 2 × 2
identity because of a connection with a problem solved earlier by Babbage in his
“calculus of functions”. The problem is to find a Möbius transformation φ(x) =
(ax + b)/(cx + d) such that φ(i) (x) = x for a given i. This is easily seen to be
equivalent to finding a matrix A = ac db for which Ai = I2 (see, e.g., Mumford,
Series, and Wright [442, ]).
A few results are available on integer pth roots of the identity matrix. Turnbull
[577, ], [578, , p. 332] observes that the antitriangular matrix An with
aij = (−1)^{n−j} (n−j choose i−1) for i + j ≤ n + 1 is a cube root of In. For example, A4^3 = I4, where

    A4 = [ −1   1  −1   1
           −3   2  −1   0
           −3   1   0   0
           −1   0   0   0 ].
This matrix is a rotated variant of the Cholesky factor of the Pascal matrix (cf. the
involutory matrix in (6.59)). Up to signs, An is the MATLAB matrix pascal(n,2).
Vaidyanathaswamy [582, ], [583, ] derives conditions on p for the existence
of integer pth roots of In ; in particular, he shows that for a given n there are only
finitely many possible p. Problem 7.6 gives an integer (n + 1)st root of In .
Finally, we note that Janovská and Opfer [317, , Sec. 8] show how computing
a pth root of a quaternion can be reduced to computing a pth root of a matrix,
though the special structure of the matrix needs to be exploited to be competitive
with methods working in the algebra of quaternions.
Problems
7.1. Does an m × m Jordan block with eigenvalue zero have a pth root for any p?
7.2. How should Aα be defined for real α ∈ [0, ∞)?
7.3. Show that if X p = In then the eigenvalues of X are pth roots of unity and X is
diagonalizable.
7.4. Show that the Frobenius norm (relative) condition number for the matrix pth
root X of A ∈ C^{n×n} is given by

    cond(X) = ‖ ( Σ_{j=1}^{p} (X^{p−j})^T ⊗ X^{j−1} )^{-1} ‖_2 ‖A‖_F / ‖X‖_F        (7.21)

and that this condition number is finite if and only if X is a primary pth root of A.
7.5. Suppose that in the Newton iteration for a matrix pth root defined by (7.4) and
Xk+1 = Xk + Ek, X0 commutes with A and all the iterates are well-defined. Show
that, for all k, Xk commutes with A and Xk+1 = (1/p)((p − 1)Xk + Xk^{1−p} A).
7.6. (Bambaii and Chowla [42, ]) Let

    An = [ −1  −1  . . .  . . .  −1
            1   0  . . .  . . .   0
            0   1   ⋱              ⋮
            ⋮        ⋱     ⋱       ⋮
            0  . . .  . . .  1     0 ]  ∈ R^{n×n}.

Show that An^{n+1} = In.
7.8. Show that for the Newton–Schulz iteration for the inverse of A ∈ C^{n×n},

    Xk+1 = Xk(2I − AXk),                                          (7.22)

the residuals Rk = I − Xk A and Rk = I − AXk both satisfy Rk+1 = Rk^2 for any X0.
(Note that (7.22) is a more symmetric variant of (7.12) with p = 1. For (7.12) this
residual recurrence is assured only for X0 that commute with A.)
7.9. Investigate the behaviour of the inverse Newton iteration (7.12) for p = 2 with

    A = [ 1  0; 0  μ^{-2} ],      X0 = [ 1  θ; 0  μ ],      θ ≠ 0.
7.13. Assume that the Newton iteration (7.5) converges to a pth root B of A. By
making use of the factorization

    (p − 1)x^p − p x^{p−1} b + b^p = (x − b)^2 Σ_{i=0}^{p−2} (i + 1) x^i b^{p−2−i},

obtain a constant c so that ‖Xk+1 − B‖ ≤ c‖Xk − B‖^2 for all sufficiently large k.
7.14. Prove the stability of the iteration (7.18).
7.15. (Guo and Higham [233, ]) Let A ∈ Rn×n be a stochastic matrix. Show
that if A is strictly diagonally dominant (by rows—see (B.2)) then the inverse Newton
iteration (7.12) with X0 = I converges to A−1/p . The need for computing roots of
stochastic matrices arises in Markov model applications; see Section 2.3. Transition
matrices arising in the credit risk literature are typically strictly diagonally dominant
[315, ], and such matrices are known to have at most one generator [127, ].
7.16. (Guo and Higham [233, ]) Let X̃ = X + E, with ‖E‖ ≤ ε‖X‖, be an
approximate pth root of A ∈ C^{n×n}. Obtain a bound for ‖A − X̃^p‖ that is sharp to
first order in ‖E‖ and hence explain why

    ρ_A(X̃) = ‖A − X̃^p‖ / ( ‖X̃‖ ‖ Σ_{i=0}^{p−1} (X̃^{p−1−i})^T ⊗ X̃^i ‖ )

is a more appropriate definition of relative residual (e.g., for testing purposes) than
‖A − X̃^p‖ / ‖X̃^p‖.
The polar decomposition is the generalization to matrices of the familiar polar repre-
sentation z = reiθ of a complex number. It is intimately related to the singular value
decomposition (SVD), as our proof of the decomposition reveals.
Problem B.2, H^+ is Hermitian and commutes with H. Hence, by the first and second
Moore–Penrose conditions (B.3) (i) and (B.3) (ii), any product comprising p factors
H and q factors H^+, in any order, equals H if p = q + 1, H^+ if q = p + 1, and HH^+
if p = q. Define H = (A*A)^{1/2} and U = AH^+. We first need to show that U is a
partial isometry (U* = U^+) and U*U = HH^+. To prove the latter equality, we have
U*U = H^+A*AH^+ = H^+H^2H^+ = HH^+. Furthermore,

    UU*U = AH^+ · H^+A* · AH^+ = A(H^+)^2 H^2 H^+ = AH^+ = U,
    U*UU* = H^+A* · AH^+ · H^+A* = H^+H^2(H^+)^2 A* = H^+A* = U*,
• Let m ≥ n. Then the factor U in Theorem 8.3 is the matrix U in (8.1) with
W = 0; U in (8.3) has orthonormal columns precisely when A has full rank; and
if A has full rank the polar decomposition and canonical polar decomposition
are the same.
    n = 1:  a = (‖a‖_2^{−1} a) · ‖a‖_2,        m = 1:  a^* = (‖a‖_2^{−1} a^*) · (‖a‖_2^{−1} a a^*).    (8.4)
where the second equality follows from Corollary 1.34 with f the function
f(x) = x^{1/2}. Here, (·)^D denotes the Drazin inverse (see Problem 1.52), which
is identical to the pseudoinverse for Hermitian matrices, to which it is being
applied here.
In most of the rest of this chapter we will restrict to the standard polar decompo-
sition with m ≥ n.
Neither polar factor is a function of A (though this can be arranged by modifying
the definition of matrix function, as explained in Problem 1.53; see also Problem 8.9).
The reason for including the polar decomposition in this book is that it has many
intimate relations with the matrix sign function and the matrix square root, and so
it is naturally treated alongside them.
A fundamental connection between the matrix sign function and the polar decom-
position is obtained from Theorem 5.2 with B = A∗ when A ∈ Cn×n is nonsingular:
    \mathrm{sign}\left( \begin{bmatrix} 0 & A \\ A^* & 0 \end{bmatrix} \right) = \begin{bmatrix} 0 & U \\ U^* & 0 \end{bmatrix}.    (8.6)
This relation can be used to obtain an integral formula for U from the integral formula
(5.3) for the sign function:
    U = \frac{2}{\pi} A \int_0^{\infty} (t^2 I + A^*A)^{-1}\, dt.    (8.7)
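The formula can be checked numerically; the following rough MATLAB sketch (purely illustrative, and not an efficient way to compute U) evaluates the integral by adaptive quadrature:

    % Illustrative check of (8.7): quadrature versus the SVD-based unitary factor.
    n = 4;
    A = randn(n) + n*eye(n);                  % a (generically) nonsingular test matrix
    F = integral(@(t) inv(t^2*eye(n) + A'*A), 0, Inf, 'ArrayValued', true);
    U_quad = (2/pi)*A*F;
    [P,~,Q] = svd(A);
    norm(U_quad - P*Q','fro')                 % small, limited by quadrature accuracy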
where P ∈ Cm×m and Q ∈ Cn×n are unitary, and R ∈ Cr×r is nonsingular and upper
triangular [224, , Sec. 5.4.2]. The polar decomposition of R can now be computed
and the polar decomposition of A pieced back together.
Theorem 8.4 (Fan and Hoffman). Let A ∈ Cm×n (m ≥ n) have the polar decom-
position A = U H. Then kA − U k = min{ kA − Qk : Q∗ Q = In } for any unitarily
invariant norm. The minimizer is unique for the Frobenius norm if A has full rank.
    E^*U + U^*E + E^*E = 0.    (8.10)

Hence

    (A − Q)^*(A − Q) = (A − U)^*(A − U) − (A − U)^*E − E^*(A − U) + E^*E
                     = (A − U)^*(A − U) − A^*E − E^*A.    (8.11)

Taking the trace in (8.11) gives ‖A − Q‖_F^2 = ‖A − U‖_F^2 − trace(A^*E + E^*A) =
‖A − U‖_F^2 + trace(EHE^*), where we have used A = UH and (8.10). Hence
‖A − Q‖_F^2 ≥ ‖A − U‖_F^2, with equality only if trace(EHE^*) = 0, and if A has
full rank then H is nonsingular and this last condition implies E = 0, giving the
uniqueness condition.
For m < n we can apply Theorem 8.4 to A^* to deduce that if A^* = ÛĤ is a
polar decomposition then ‖A − Û^*‖ = min{ ‖A − Q‖ : QQ^* = I_m } for any unitarily
invariant norm. The next result, concerning best approximation by partial isometries,
holds for arbitrary m and n and reduces to Theorem 8.4 when rank(A) = n.
Theorem 8.5 (Laszkiewicz and Ziȩtak). Let A ∈ Cm×n have the canonical polar de-
composition A = U H. Then U solves
for any unitarily invariant norm. Equality is achieved for Q = U (which satisfies the
range condition by Problem 8.8) because
    \|A - U\| = \left\| P \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0_{m-r,n-r} \end{bmatrix} V^* - P \begin{bmatrix} I_r & 0 \\ 0 & 0_{m-r,n-r} \end{bmatrix} V^* \right\| = \left\| \begin{bmatrix} \Sigma_r - I_r & 0 \\ 0 & 0_{m-r,n-r} \end{bmatrix} \right\|.
Theorem 8.4 says that the polar factor U is the nearest matrix to A with orthonor-
mal columns. Hence the polar decomposition provides an optimal way of orthogonal-
izing a matrix. Applications are discussed in Section 2.6. For square matrices U is
the nearest unitary matrix to A. A related approximation problem is that of finding
the nearest unitary matrix with determinant 1—the nearest “rotation matrix”. This
problem is a special case of one of the Procrustes problems discussed in Section 2.6,
for which we now derive solutions.
Theorem 8.7 (Fan and Hoffman). For A ∈ C^{n×n} and any unitarily invariant norm,
‖A − A_H‖ = min{ ‖A − X‖ : X = X^* }. The solution is unique for the Frobenius norm.
Proof. For any Hermitian Y,

    ‖A − A_H‖ = ‖A_K‖ = (1/2)‖(A − Y) + (Y^* − A^*)‖
              ≤ (1/2)‖A − Y‖ + (1/2)‖(Y − A)^*‖
              = ‖A − Y‖,

using the fact that ‖A‖ = ‖A^*‖ for any unitarily invariant norm. The uniqueness for
the Frobenius norm is a consequence of the strict convexity of the norm.
Theorem 8.8. Let A ∈ C^{n×n} and let A_H have the polar decomposition A_H = UH.
Then X = (A_H + H)/2 is the unique solution to min{ ‖A − X‖_F : X ∈ C^{n×n} is Hermitian positive semidefinite }.
Proof. Let X be any Hermitian positive semidefinite matrix. From the fact that
‖S + K‖_F^2 = ‖S‖_F^2 + ‖K‖_F^2 if S = S^* and K = −K^*, we have
since y_ii ≥ 0 because Y is positive semidefinite. This lower bound is uniquely at-
tained when Y = diag(d_i) ≡ diag(max(λ_i, 0)), for which X = Q diag(d_i)Q^*. The
representation X = (A_H + H)/2 follows, since H = Q diag(|λ_i|)Q^*.
If A is Hermitian then Theorem 8.8 says that the nearest Hermitian positive
semidefinite matrix is obtained by replacing every eigenvalue λi of A by max(λi , 0)
and leaving the eigenvectors unchanged. If, instead, λi is replaced by |λi | then H
itself is obtained, and kA − HkF is at most twice the Frobenius norm distance to
the nearest Hermitian positive semidefinite matrix. These ideas have been used in
Newton’s method in optimization to modify indefinite Hessian matrices to make them
definite [215, , Sec. 4.4.2], [449, , Sec. 6.3].
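As an illustration of Theorem 8.8, here is a minimal MATLAB sketch (names are illustrative) of the eigenvalue-replacement construction just described, applied to the Hermitian part A_H = (A + A^*)/2:

    % Nearest Hermitian positive semidefinite matrix to A in the Frobenius norm.
    A  = randn(5);
    AH = (A + A')/2;                       % Hermitian part of A
    [Q,D] = eig(AH);
    X = Q*diag(max(diag(D),0))*Q';         % replace negative eigenvalues of AH by zero
    X = (X + X')/2;                        % re-symmetrize against rounding errors
    norm(A - X,'fro')                      % distance to the nearest Hermitian PSD matrix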
If A has full rank and k∆Ak2 < σn then U and U + ∆U are uniquely defined and
              A, ∆A ∈ R^{m×n}        A, ∆A ∈ C^{m×n}
    m = n     2/(σ_n + σ_{n−1})      σ_n^{−1}
    m > n     σ_n^{−1}               σ_n^{−1}
Proof. For the bound on k∆HkF , see Problem 8.15. We outline the proofs for
m = n, which are from Kenney and Laub [342, , Thms. 2.2, 2.3]. To first order
we have
    ∆A = ∆U H + U ∆H,    (8.15)
    0 = ∆U^*U + U^*∆U.    (8.16)

    UHU^*∆U + ∆U H = ∆A − U∆A^*U.

Inserting the expressions for U and H in terms of the SVD A = PΣQ^* we have

    PΣP^*∆U + ∆U QΣQ^* = ∆A − PQ^*∆A^*PQ^*,

or

    ΣW + WΣ = E − E^*,    W = P^*∆U Q,    E = P^*∆A Q.

Hence

    w_ij = (e_ij − ē_ji)/(σ_i + σ_j),

so that

    |w_ij|^2 ≤ (|e_ij|^2 + |e_ji|^2)/(2σ_n^2).

Thus ‖∆U‖_F = ‖W‖_F ≤ ‖E‖_F/σ_n = ‖∆A‖_F/σ_n. This bound is attained for ∆A =
P(i e_n e_n^T)Q^*.
For real A and ∆A the same argument gives ΣW + WΣ = E − E^T, with real W
and E given by W = P^T∆U Q and E = P^T∆A Q. We have

    w_ij = (e_ij − e_ji)/(σ_i + σ_j),
Hence ‖∆U‖_F = ‖W‖_F ≤ 2‖E‖_F/(σ_n + σ_{n−1}) = 2‖∆A‖_F/(σ_n + σ_{n−1}). The bound
is attained when ∆A = P(e_{n−1}e_n^T − e_n e_{n−1}^T)Q^T.
For the bounds for m > n, see Chaitin-Chatelin and Gratton [103, ].
We can define condition numbers of the polar factors using (3.2) and (3.4), where
f maps A to U or H and we take the Frobenius norm. We can conclude from the
theorem that κ_H ≤ √2, where κ_H is the absolute condition number for H. The actual
value of κ_H is √2(1 + κ_2(A)^2)^{1/2}/(1 + κ_2(A)) ∈ [1, √2], as shown by Chaitin-Chatelin
and Gratton [103, ]. For U, κ_U = θ‖A‖_F/‖U‖_F is a relative condition number.
Interestingly, for square A, κU depends very much on whether the data are real or
complex, and in the complex case it is essentially the condition number with respect
to inversion, κ(A). For rectangular A, this dichotomy between real and complex data
does not hold; see Problem 8.17. To summarize, H is always perfectly conditioned,
but the condition of U depends on the smallest one or two singular values of A.
A number of variations on the bounds for ∆U are available. Particularly elegant
are the following nonasymptotic bounds. Problem 8.16 asks for a proof of the first
bound for n = 1.
    ‖U − Ũ‖ ≤ (2/(σ_n + σ̃_n)) ‖A − Ã‖.

Note that the bound of the theorem is attained whenever A and Ã are unitary.
The bound remains valid in the Frobenius norm for A, Ã ∈ C^{m×n} of rank n [389,
, Thm. 2.4].
Theorem 8.11. Let A, Ã ∈ R^{n×n} be nonsingular, with ith largest singular values σ_i
and σ̃_i, respectively. If ‖A − Ã‖_F < σ_n + σ̃_n then the unitary polar factors U of A
and Ũ of Ã satisfy

    ‖U − Ũ‖_F ≤ (4/(σ_{n−1} + σ_n + σ̃_{n−1} + σ̃_n)) ‖A − Ã‖_F.
A perturbation result for the orthogonal Procrustes problem (8.13) with the con-
straint det(W ) = 1 is given by Söderkvist [533, ].
A variant of (8.17) is

    Y_{k+1} = 2Y_k(I + Y_k^*Y_k)^{−1},    Y_0 = A,    (8.19)

for which Y_k ≡ X_k^{−*}, k ≥ 1 (see Problem 8.19) and lim_{k→∞} Y_k = lim_{k→∞} X_k^{−*} =
U^{−*} = U. Note that this iteration is applicable to rectangular A.
As for the Newton sign iteration in Section 5.3, one step of the Schulz iteration
can be used to remove the matrix inverse from (8.17), giving
Newton–Schulz iteration:
    X_{k+1} = (1/2) X_k(3I − X_k^*X_k),    X_0 = A.    (8.20)

This iteration is quadratically convergent to U if 0 < σ_i < √3 for every singular value
σ_i of A (see Problem 8.20). Again, the iteration is applicable to rectangular A.
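A minimal MATLAB sketch of the Newton–Schulz iteration (8.20), under the assumption that the singular values of the test matrix lie in (0, √3) so that the iteration converges (the test matrix and iteration count are illustrative):

    % Newton-Schulz iteration for the unitary polar factor of a rectangular A.
    A = orth(randn(6,3)) * diag([1.2 0.9 0.5]) * orth(randn(3))';  % singular values in (0,sqrt(3))
    X = A;
    for k = 1:25
        X = 0.5*X*(3*eye(3) - X'*X);
    end
    [P,~,Q] = svd(A,'econ');
    norm(X - P*Q','fro')          % X approximates the unitary polar factor U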
Iterations (8.19) and (8.20) are both members of the Padé family (8.22).
Theorem 8.13 (Higham, Mackey, Mackey, and Tisseur). Let A ∈ Cm×n be of rank
n and have the polar decomposition A = U H. Let g be any matrix function of the
form g(X) = X h(X 2 ) such that the iteration Xk+1 = g(Xk ) converges to sign(X0 )
for X0 = H with order of convergence m. Assume that g has the property that
g(X)∗ = g(X ∗ ). Then the iteration
Yk+1 = Yk h(Yk∗ Yk ), Y0 = A (8.21)
converges to U with order of convergence m.
Proof. Let Xk+1 = g(Xk ) with X0 = H, so that limk→∞ Xk = sign(H) = I. We
claim that Xk∗ = Xk and Yk = U Xk for all k. These equalities are trivially true for
k = 0. Assuming that they are true for k, we have
    X_{k+1}^* = g(X_k)^* = g(X_k^*) = g(X_k) = X_{k+1}

and

    Y_{k+1} = UX_k h(X_k^*U^*UX_k) = UX_k h(X_k^2) = UX_{k+1}.
The claim follows by induction. Hence limk→∞ Yk = U limk→∞ Xk = U . The order
of convergence is readily seen to be m.
This result is analogous to Theorem 6.11 (recall that the assumed form of g is
justified at the start of Section 6.7). Another way to prove the result is to use the
SVD to show that the convergence of (8.21) is equivalent to the convergence of the
scalar sign iteration xk+1 = g(xk ) with starting values the singular values of A.
To illustrate the theorem, we write the Newton sign iteration (5.16) as X_{k+1} =
(1/2)(X_k + X_k^{−1}) = X_k · (1/2)(I + (X_k^2)^{−1}). Then, for square matrices, the theorem yields
Y_{k+1} = Y_k · (1/2)(I + (Y_k^*Y_k)^{−1}) = (1/2)Y_k(I + Y_k^{−1}Y_k^{−*}) = (1/2)(Y_k + Y_k^{−*}). More straightfor-
wardly, starting with the Newton–Schulz iteration (5.22), X_{k+1} = (1/2)X_k(3I − X_k^2), we
immediately obtain (8.20).
Padé iteration:
Recall that p_{ℓm}(ξ)/q_{ℓm}(ξ) is the [ℓ/m] Padé approximant to h(ξ) = (1 − ξ)^{−1/2} and
that some of the iteration functions f_{ℓm}(x) = x p_{ℓm}(1 − x^2)/q_{ℓm}(1 − x^2) are given in
Table 5.1. (Of course, the derivation of the Padé iterations could be done directly for
the polar decomposition by using the analogue of (5.25): e^{iθ} = z(1 − (1 − |z|^2))^{−1/2}.)
In particular, ℓ = 0, m = 1 gives (8.19), while ℓ = m = 1 gives the Halley iteration

    X_{k+1} = X_k(3I + X_k^*X_k)(I + 3X_k^*X_k)^{−1} = (1/3)X_k[ I + 8(I + 3X_k^*X_k)^{−1} ],    X_0 = A,

in which the second expression is the more efficient to evaluate (cf. (5.32)). The
convergence of the Padé iterations is described in the next result.
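For illustration, a minimal MATLAB sketch of the Halley iteration above (the test matrix, iteration count, and use of right division are illustrative choices, not from the text):

    % Halley ([1/1] Pade) iteration for the unitary polar factor of a full-rank A.
    n = 5;  A = randn(n) + 2*eye(n);  I = eye(n);
    X = A;
    for k = 1:30
        C = X'*X;
        X = X*(3*I + C)/(I + 3*C);   % right division applies (I + 3*C)^(-1)
    end
    norm(X'*X - I,'fro')             % X is (numerically) unitary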
9 In this section only we denote the row dimension of A by s in order to avoid confusion with the Padé parameter m.
Corollary 8.14. Let A ∈ C^{s×n} be of rank n and have the polar decomposition A =
UH. Consider the iteration (8.22) with ℓ + m > 0 and any subordinate matrix norm.
(a) For ℓ ≥ m − 1, if ‖I − A^*A‖ < 1 then X_k → U as k → ∞ and ‖I − X_k^*X_k‖ <
‖I − A^*A‖^{(ℓ+m+1)^k}.
(b) For ℓ = m − 1 and ℓ = m,

    (I − H_k)(I + H_k)^{−1} = [ (I − H)(I + H)^{−1} ]^{(ℓ+m+1)^k},

where

    ξ_i = (1/2)[ 1 + cos((2i − 1)π/(2m)) ],    α_i^2 = 1/ξ_i − 1,    i = 1:m.
For m = 1 this formula is just (8.19). The following lemma proves a somewhat unex-
pected property of these iterations: after the first iteration all iterates have residual
less than 1.
Lemma 8.15. Let A ∈ Cs×n be of rank n and have the polar decomposition A = U H.
The iterates from (8.22) with ℓ = m − 1 satisfy kXk∗ Xk − Ik2 < 1 for k ≥ 1.
Proof. By using the SVDs of A and Xk the inequality can be reduced to the
corresponding inequality for the singular values of Xk , which satisfy the scalar it-
eration xk+1 = xk pr (1 − x2k )/qr (1 − x2k ) =: gr (xk ). Applying Theorem 5.9(a) with
r = ℓ + m + 1 = 2m it suffices to note that (a) 0 < gr (x) for x > 0 and (b) gr (x) < 1
for all x since r is even.
The lemma is useful because for the iterations with ℓ > m we need ‖I − A^*A‖ < 1
to ensure convergence (Corollary 8.14). An example is the Newton–Schulz iteration
(8.20) (for which 0 < ‖A‖_2 < √3 is in fact sufficient for convergence, as noted in
Section 8.3). The lemma opens the possibility of carrying out at least one step of
one of the iterations with ℓ = m − 1 and then switching to (8.20) or some other
multiplication-rich iteration, safe in the knowledge that this second iteration will
converge.
These convergence results for the Padé iterations can be generalized to rank-
deficient A. As we have already observed, it suffices to consider the convergence of
the scalar iteration on the singular values of A. Zero singular values are fixed points
of the iteration and for the nonzero singular values the convergence to 1 is determined
by the results above. So for ℓ = m − 1 and ℓ = m we are guaranteed convergence
to the factor U in the canonical polar decomposition, while for ℓ > m convergence
to U is assured if |1 − σi2 | < 1 for all nonzero singular values σi of A. However,
these results are mainly of theoretical interest because in practice rounding errors
will usually perturb the zero singular values, which will then converge to 1; the limit
matrix will then be of full rank and therefore a factor U in the polar decomposition.
The quantity (8.25) minimizes θ(µk ) over all µk , and so is in this sense optimal. To
analyze optimal scaling it suffices to analyze the convergence of the singular values
of X_k to 1 (see the proof of Theorem 8.12). Write σ_i^{(k)} = σ_i(X_k). Observe that this
scaling makes the smallest and largest singular values of X_k reciprocals of each other:

    σ_1(μ_k X_k) = (σ_1^{(k)}/σ_n^{(k)})^{1/2},    σ_n(μ_k X_k) = (σ_n^{(k)}/σ_1^{(k)})^{1/2}.    (8.28)

These reciprocal values then map to the same value by (5.38a), which is the largest
singular value of X_{k+1} by (5.38b). Hence the singular values of X_{k+1} satisfy

    1 ≤ σ_n^{(k+1)} ≤ ··· ≤ σ_2^{(k+1)} = σ_1^{(k+1)} = (1/2)[ (σ_1^{(k)}/σ_n^{(k)})^{1/2} + (σ_n^{(k)}/σ_1^{(k)})^{1/2} ].    (8.29)

It follows that

    κ_2(X_{k+1}) ≤ (1/2)[ κ_2(X_k)^{1/2} + κ_2(X_k)^{−1/2} ] ≤ κ_2(X_k)^{1/2}.
By comparison, for the unscaled iteration (μ_k = 1), if we assume for simplicity that σ_n^{(k)} = 1 then κ_2(X_{k+1}) =
(1/2)(σ_1^{(k)} + 1/σ_1^{(k)}) ≈ (1/2)κ_2(X_k), which shows a much less rapid reduction in large values
of κ_2(X_k) for the unscaled iteration. A particularly interesting feature of optimal
scaling is that it ensures finite termination of the iteration.
Theorem 8.16 (Kenney and Laub). For the scaled Newton iteration (8.24) with op-
timal scaling (8.25), Xd = U , where d is the number of distinct singular values of A.
Proof. The argument just before the theorem shows that the multiplicity of the
largest singular value of X_k increases by one on every iteration. Hence at the end of
the (d − 1)st iteration X_{d−1} has all its singular values equal. The next scaling maps all the
singular values to 1 (i.e., μ_{d−1}X_{d−1} = U), and so X_d = U.
In general we will not know that d in Theorem 8.16 is small, so the theorem is
of little practical use. However, it is possible to predict the number of iterations
accurately and cheaply via just scalar computations. By (8.29), the extremal singular
(1) (1)
values of X1 satisfy 1 ≤ σn ≤ σ1 . The largest singular value of µ1 X1 is then
(1)
(1) 1/2
(1) 1/2
σ1 /σn ≤ σ1 , and so from the properties in (5.38) of the map f (x) =
1
2 (x + 1/x) it follows that the singular values of X2 are
s s
(1) (1) q q
(2) (2) 1 σ 1 σ n
1 (1) 1 (1)
1 ≤ σn ≤ · · · ≤ σ1 = (1)
+ (1)
≤ σ 1 + q = f σ 1 .
2 σn σ1 2 (1)
σ1
Figure 8.1. Bounds on number of iterations for Newton iteration with optimal scaling for
1 ≤ κ_2(A) ≤ 10^{16}. (Axes: number of iterations against log_{10} κ_2(A).)
of scaling are reduced. Second, the scaling parameters require the computation of
Xk−1 , which does not appear in the Padé iterations, so this is an added cost. Third,
practical experience shows that scaling degrades the numerical stability of the Padé
iterations [285, ], so there is a tradeoff between speed and stability. For more on
numerical stability, see Section 8.8.
Lemma 8.17. Let A ∈ C^{m×n} (m ≥ n) have the polar decomposition A = UH. Then

    ‖A^*A − I‖/(1 + σ_1(A)) ≤ ‖A − U‖ ≤ ‖A^*A − I‖/(1 + σ_n(A)),

for any unitarily invariant norm.
Proof. It is straightforward to show that A∗A − I = (A − U )∗ (A + U ). Taking
norms and using (B.7) gives the lower bound. Since A + U = U (H + I) we have, from
the previous relation,
Hence
    ‖A − U‖ = ‖(A − U)^*U‖ ≤ ‖A^*A − I‖ ‖(H + I)^{−1}‖_2 ≤ ‖A^*A − I‖/(1 + σ_n(A)),
since the eigenvalues of H are the singular values of A.
This result shows that the two measures of orthonormality kA∗A − Ik and kA − U k
are essentially equivalent, in that they have the same order of magnitude if kAk < 1/2
(say). Hence the residual kXk∗ Xk − Ik of an iterate is essentially the same as the error
kU − Xk k. This result is useful for the Padé iterations, which form Xk∗ Xk .
The next result bounds the distance ‖A − U‖ in terms of the Newton correction
(1/2)(A − A^{−*}).
Lemma 8.18. Let A ∈ Cn×n be nonsingular and have the polar decomposition A =
U H. If kA − U k2 = ǫ < 1 then for any unitarily invariant norm
    ((1 − ǫ)/(2 + ǫ)) ‖A − A^{−*}‖ ≤ ‖A − U‖ ≤ ((1 + ǫ)/(2 − ǫ)) ‖A − A^{−*}‖.
Note that if all the singular values of A are at least 1 (as is the case for the
scaled or unscaled Newton iterates X_k for k ≥ 1) then ‖A − U‖ ≤ ‖A − A^{−*}‖ for
any unitarily invariant norm, with no restriction on ‖A − U‖. This follows from the
inequality σ_i − 1 ≤ σ_i − σ_i^{−1} for each singular value of A and the characterization of
‖A‖ as a symmetric gauge function of the singular values.
Let us consider how to build a termination criterion for the (unscaled) Newton
iteration. We can use similar reasoning as we used to derive the test (5.44) for
the matrix sign function. Lemma 8.18 suggests that once convergence has set in,
kXk − U k ≈ 12 kXk − Xk−∗ k. However, having computed Xk−∗ we might as well
compute the next iterate, Xk+1 , so it is the error in Xk+1 that we wish to bound in
terms of ‖X_k − X_k^{−*}‖. In view of (8.18) we expect that

    ‖X_{k+1} − U‖_F ≲ (1/2)‖X_k^{−1}‖_F · (1/4)‖X_k − X_k^{−*}‖_F^2 = (1/2)‖X_k^{−1}‖_F ‖X_{k+1} − X_k‖_F^2.
Modulo the constant 2, this is precisely the test (4.25) when Xk ≈ U , since c = n1/2 /2
therein. This test with an extra factor n1/2 on the right-hand side is also recommended
by Kielbasiński and Ziȩtak [352, , App. B]. Such a test should avoid the weakness
of a test of the form δk ≤ η that it tends to terminate the iteration too late (see the
discussion in Section 4.9.2). In particular, (8.31) allows termination after the first
iteration, which will be required in orthogonalization applications where the starting
matrix can be very close to being orthogonal (as in [601, ], for example).
    Û = V + ∆U,    V^*V = I,    ‖∆U‖ ≤ ε‖V‖,    (8.32a)
    Ĥ = K + ∆H,    Ĥ^* = Ĥ,    ‖∆H‖ ≤ ε‖K‖,    (8.32b)
    VK = A + ∆A,   ‖∆A‖ ≤ ε‖A‖,    (8.32c)

where K is Hermitian positive semidefinite (and necessarily positive definite for the
2-norm, by (8.32c), if A has full rank and κ_2(A) < 1/ε) and ε is a small multiple of
the unit roundoff u. These conditions say that Û and Ĥ are relatively close to the
true polar factors of a matrix near to A. If the polar decomposition is computed via
the SVD then these ideal conditions hold with ε a low degree polynomial in m and
n, which can be shown using the backward error analysis for the SVD [224, ,
Sec. 5.5.8].
We turn now to iterations for computing the unitary polar factor. The next result
describes their stability.
Theorem 8.19 (stability of iterations for the unitary polar factor). Let the nonsin-
gular matrix A ∈ Cn×n have the polar decomposition A = U H. Let Xk+1 = g(Xk )
be superlinearly convergent to the unitary polar factor of X0 for all X0 sufficiently
close to U and assume that g is independent of X0 . Then the iteration is stable, and
the Fréchet derivative of g at U is idempotent and is given by L_g(U, E) = L(U, E) =
(1/2)(E − UE^*U), where L(U) is the Fréchet derivative of the map A → A(A^*A)^{−1/2}
at U.
Proof. Since the map is idempotent, stability, the idempotence of L_g, and the
equality of L_g(U) and L(U), follow from Theorems 4.18 and 4.19. To find L(U, E) it
suffices to find L_g(U, E) for the Newton iteration, g(X) = (1/2)(X + X^{−*}). From

    g(U + E) = (1/2)[ U + E + (U^{−1} − U^{−1}EU^{−1})^* ] + O(‖E‖^2)
             = (1/2)( U + E + U − UE^*U ) + O(‖E‖^2)

it follows that L_g(U, E) = (1/2)(E − UE^*U).
Theorem 8.19 shows that, as for the matrix sign iterations, all the iterations in
this book for the unitary polar factor are stable. The polar iterations generally have
better behaviour than the sign iterations because the limit matrix is unitary (and
hence of unit 2-norm) rather than just idempotent (and hence of unrestricted norm).
Since these iterations compute only U , a key question is how to obtain H. Given
a computed Û it is natural to take Ĥ = Û^*A. This matrix will in general not be
Hermitian, so we will replace it by the nearest Hermitian matrix (see Theorem 8.7):

    Ĥ = ( (Û^*A)^* + Û^*A ) / 2.    (8.33)
(Ideally, we would take the nearest Hermitian positive semidefinite matrix, but this
would be tantamount to computing the polar decomposition of Ĥ, in view of Theo-
rem 8.8.) If Û satisfies (8.32a), then Û^*Û = I + O(ε) and

    ‖A − ÛĤ‖_F = (1/2)‖(Û^*A)^* − Û^*A‖_F + O(ε).

Hence the quantity

    β(Û) = ‖(Û^*A)^* − Û^*A‖_F / (2‖A‖_F)    (8.34)

is a good approximation to the relative residual ‖A − ÛĤ‖_F/‖A‖_F. Importantly,
β(Û) can be computed without an extra matrix multiplication.
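For illustration, a short MATLAB sketch of (8.33) and (8.34) (here Uhat is obtained from the SVD purely as a stand-in for an iteratively computed factor):

    % Form the Hermitian factor (8.33) and the residual estimate (8.34).
    A = randn(6,4);
    [P,~,Q] = svd(A,'econ');  Uhat = P*Q';          % stand-in for a computed unitary factor
    B    = Uhat'*A;
    Hhat = (B + B')/2;                              % nearest Hermitian matrix to Uhat'*A
    beta = norm(B' - B,'fro')/(2*norm(A,'fro'));    % approximates ||A - Uhat*Hhat||_F/||A||_F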
We now summarize how to measure a posteriori the quality of a computed polar
decomposition A ≈ ÛĤ obtained via an iterative method for Û and (8.33). The
quantities ‖Û^*Û − I‖_F and ‖A − ÛĤ‖_F/‖A‖_F should be of order the convergence
tolerance and Ĥ (which is guaranteed to be Hermitian) should be positive semidefinite.
The latter condition can be tested by computing the smallest eigenvalue or attempting
a Cholesky decomposition (with pivoting). If ‖Û^*Û − I‖_F is small then the relative
residual can be safely approximated by β(Û).
The ultimate question regarding any iteration for the unitary polar factor is “what
can be guaranteed a priori about the computed U b ?” In other words, is there an a priori
forward or backward error bound that takes account of all the rounding errors in the
iteration as well as the truncation error due to terminating an iterative process? Such
results are rare for any matrix iteration, but Kielbasiński and Ziȩtak [352, ] have
done a detailed analysis for the Newton iteration with the 1, ∞-norm scaling (8.26).
Under the assumptions that matrix inverses are computed in a mixed backward–
forward stable way and that μ_k^{1,∞} is never too much smaller than μ_k^{opt}, they show that
the computed factors are backward stable. The assumption on the computed inverses
is not always satisfied when the inverse is computed by Gaussian elimination with
partial pivoting [276, , Sec. 14.1], but it appears to be necessary in order to push
through the already very complicated analysis. Experiments and further analysis are
given by Kielbasiński, Zieliński, and Ziȩtak [351, ].
8.9. Algorithm
We give an algorithm based on the Newton iteration (8.24) with the 1, ∞-norm scaling
(8.26).
1 if m > n
2 compute a QR factorization A = QR (R ∈ Cn×n )
3 A := R
4 end
5 X0 = A; scale = true
6 for k = 1: ∞
Cost: For m = n: 2(k + 1)n^3 flops, where k iterations are used. For m > n: 6mn^2 +
(2k − 10/3)n^3 flops (assuming Q is kept in factored form).
The strategy for switching to the unscaled iteration, and the design of the conver-
gence test, are exactly as for Algorithm 5.14.
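For orientation, here is a simplified MATLAB sketch of the scaled Newton iteration (not the full Algorithm 8.20: it is restricted to square nonsingular A, omits the QR preprocessing and the switch to the unscaled iteration, and uses an illustrative convergence test and iteration cap):

    % Simplified scaled Newton iteration (8.24) with the 1,infinity-norm scaling (8.26).
    function [U,H] = polar_newton_sketch(A)
    n = size(A,1);  X = A;
    for k = 1:100
        Xi = inv(X);
        mu = ( (norm(Xi,1)*norm(Xi,inf)) / (norm(X,1)*norm(X,inf)) )^(1/4);   % (8.26)
        Xnew = 0.5*(mu*X + (1/mu)*Xi');
        if norm(Xnew - X,'fro') <= sqrt(n)*eps*norm(Xnew,'fro'), X = Xnew; break, end
        X = Xnew;
    end
    U = X;
    B = U'*A;  H = (B + B')/2;        % Hermitian factor via (8.33)
    end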
Possible refinements to the algorithm include:
(a) Doing an initial complete orthogonal decomposition (8.8) instead of a QR factor-
ization when the matrix is not known to be of full rank.
(b) Switching to the matrix multiplication-rich Newton–Schulz iteration once kI −
Xk∗ Xk k ≤ θ, for some θ < 1. The computation of Xk∗ Xk in this test can be avoided
by applying a matrix norm estimator to I − Xk∗ Xk [276, , Chap. 15] (see Higham
and Schreiber [286, ]).
We now give some numerical examples to illustrate the theory, obtained with
a MATLAB implementation of Algorithm 8.20. We took tol_cgce = n^{1/2}u and
tol_scale = 10^{−2}. We used three real test matrices:
We report in Tables 8.1–8.3 various statistics for each iteration, including δ_{k+1} =
‖X_{k+1} − X_k‖_F/‖X_{k+1}‖_F and β(Û) in (8.34). The line across each table follows the
iteration number for which the algorithm detected convergence. In order to compare
the stopping criterion in the algorithm with the criterion δk+1 ≤ tol cgce, we contin-
ued the iteration until the latter test was satisfied, and those iterations appear after
the line.
Several points are worth noting. First, the convergence test is reliable and for these
matrices saves one iteration over the simple criterion δk+1 ≤ tol cgce. Second, the
number of iterations increases with κ2 (A), as expected from the theory, but even for
the most ill conditioned matrix only seven iterations are required. Third, the Frank
matrix is well conditioned with respect to U but very ill conditioned with respect
Table 8.1. Results for nearly orthogonal matrix, n = 16. κ_U = 1.0, κ_2(A) = 1.0, β(Û) = 3.3×10^{−16},
‖Û^*Û − I‖_F = 1.4×10^{−15}, ‖A − ÛĤ‖_F/‖A‖_F = 5.1×10^{−16}, λ_min(Ĥ) = 9.9×10^{−1}.

    k   ‖X_k − U‖_F/‖U‖_F   ‖X_k^*X_k − I‖_F   δ_{k+1}    ‖X_k‖_F   μ_k
    1   1.3e-5               1.1e-4             2.9e-3     4.0e+0
    2   3.3e-10              2.6e-9             1.3e-5     4.0e+0
    3   8.1e-16              1.5e-15            3.3e-10    4.0e+0
    4   8.2e-16              1.4e-15            2.7e-16    4.0e+0
Table 8.2. Results for binomial matrix, n = 16. κ_U = 1.7×10^3, κ_2(A) = 4.7×10^3, β(Û) = 3.5×10^{−16},
‖Û^*Û − I‖_F = 1.4×10^{−15}, ‖A − ÛĤ‖_F/‖A‖_F = 4.2×10^{−16}, λ_min(Ĥ) = 2.6.

    k   ‖X_k − U‖_F/‖U‖_F   ‖X_k^*X_k − I‖_F   δ_{k+1}    ‖X_k‖_F   μ_k
    1   1.7e+1               2.4e+3             2.6e+2     7.0e+1    5.5e-3
    2   1.4e+0               2.2e+1             7.0e+0     9.2e+0    1.8e-1
    3   1.3e-1               1.1e+0             1.2e+0     4.5e+0    5.9e-1
    4   2.6e-3               2.1e-2             1.3e-1     4.0e+0    9.2e-1
    5   1.4e-6               1.1e-5             2.6e-3     4.0e+0
    6   1.3e-12              1.0e-11            1.4e-6     4.0e+0
    7   3.3e-14              1.2e-15            1.3e-12    4.0e+0
    8   3.3e-14              1.4e-15            2.3e-16    4.0e+0
Table 8.3. Results for Frank matrix, n = 16. κ_U = 5.2×10^1, κ_2(A) = 2.3×10^{14}, β(Û) = 2.5×10^{−16},
‖Û^*Û − I‖_F = 1.1×10^{−15}, ‖A − ÛĤ‖_F/‖A‖_F = 3.7×10^{−16}, λ_min(Ĥ) = 3.5×10^{−13}.

    k   ‖X_k − U‖_F/‖U‖_F   ‖X_k^*X_k − I‖_F   δ_{k+1}    ‖X_k‖_F   μ_k
    1   2.9e+6               8.4e+13            1.0e+0     1.1e+7    1.8e+5
    2   1.9e+0               4.5e+1             1.1e+6     1.0e+1    1.1e-6
    3   2.7e-1               2.5e+0             1.4e+0     4.9e+0    4.2e-1
    4   9.5e-3               7.7e-2             2.6e-1     4.0e+0    8.3e-1
    5   3.9e-5               3.1e-4             9.5e-3     4.0e+0
    6   2.5e-9               2.0e-8             3.9e-5     4.0e+0
    7   3.3e-15              9.6e-16            2.5e-9     4.0e+0
    8   3.3e-15              1.1e-15            1.7e-16    4.0e+0
to inversion, since it has just one small singular value. Since the Newton iteration
inverts A on the first step, we might expect instability. Yet the algorithm performs in
a backward stable fashion, as it does for all three matrices. Finally, if optimal scaling
is used then the numbers of iterations are unchanged for the first and last matrices
and one less for the second, emphasizing the effectiveness of the 1, ∞-norm scaling.
Experience from these and many other experiments (see, e.g., [625, ]) suggests
that
• Algorithm 8.20 with tol cgce = nu requires at most about 8 iterations in IEEE
double precision arithmetic for matrices A not too close to being rank-deficient
(say, κ2 (A) ≤ 1014 ) and at most 1 or 2 more iterations for matrices numerically
rank-deficient;
Thus the algorithm is remarkably stable, quick to converge, and robust—much more
so than the Newton sign iteration with any scaling (cf. Table 5.2, for example). And
its flop count is less than that for computation of the polar factors via the SVD (see
Problem 8.24).
Theorem 8.3 goes back at least to von Neumann [603, , Satz 7], but it has
been rediscovered by several authors. The result is most easily found in the literature
on generalized inverses—for example in Penrose’s classic paper [469, ] and in
Ben-Israel and Greville [52, , Thm. 7, p. 220]. For square A, the result appears
in an exercise of Halmos [243, , p. 171].
A “refined” polar decomposition A = U P D ∈ Cm×n (m ≥ n) is investigated
by Eirola [175, ]. Here, U has orthonormal columns, P is Hermitian positive
semidefinite with unit diagonal (and so is a correlation matrix), and D is diagonal with
nonnegative diagonal elements. Eirola shows that the decomposition always exists
(the proof is nontrivial) and considers uniqueness, computation, and an application.
An analytic polar decomposition A(t) = U (t)H(t) can be defined for a real matrix
A(t) whose elements are analytic functions of a real variable t. The factors are required
to be analytic, U (t) orthogonal, and H(t) symmetric but not definite. Existence of
the analytic polar decomposition is discussed by Mehrmann and Rath [419, ,
Sec. 3.1].
The integral formula (8.7) is due to Higham [273, ].
The case m = n in Theorem 8.4 is proved by Fan and Hoffman [181, ]; for
m > n the result is stated without proof by Rao [484, ] and a proof is given by
Laszkiewicz and Ziȩtak [372, ].
Theorem 8.5 is equivalent to a result of Laszkiewicz and Ziȩtak [372, ], in
which the constraint on Q is rank(Q) = rank(A), and was obtained for the Frobenius
norm by Sun and Chen [549, ]. For a related result with no constraints on the
partial isometry and that generalizes earlier results of Maher [404, ] and Wu [618,
] see Problem 8.12.
Theorem 8.7 is from Fan and Hoffman [181, ]. Theorem 8.8 is from Higham [269,
]. The best approximation property of (AH + H)/2 identified in Theorem 8.8
remains true in any unitarily invariant norm if A is normal, as shown by Bhatia and
Kittaneh [66, ]. For general matrices and the 2-norm, the theory of best Her-
mitian positive semidefinite approximation is quite different and does not involve the
polar decomposition; see Halmos [242, ] and Higham [269, ].
The orthogonal Procrustes problem (8.12) was first solved by Green [229, ]
(for full rank A and B) and Schönemann [507, ] (with no restrictions on A and
B). The rotation variant (8.13) was posed by Wahba [605, ], who explains that
it arises in the determination of the attitude of a satellite. Unlike the problem of
finding the nearest matrix with orthonormal columns, the solution to the orthogonal
Procrustes problem is not the same for all unitarily invariant norms, as shown by
Mathias [410, ].
Given that the QR factorization A = QR can be more cheaply computed than the
polar decomposition A = U H, for the purposes of orthogonalization it is natural to
ask whether Q can be used in place of U ; in other words, is kA − Qk close to kA − U k?
Two bounds provide some insight, both holding for A ∈ Cm×n of rank n under the
assumption that R has positive diagonal elements. Chandrasekaran and Ipsen [105,
] show that ‖A − Q‖_F ≤ √5 n ‖A − U‖_2, assuming that A has columns of unit
2-norm. Sun [547, ] proves that if ‖A^*A − I‖_2 < 1 then

    ‖A − Q‖_F ≤ ( (1 + ‖A‖_2) / (√2 (1 − ‖A^*A − I‖_2)) ) ‖A − U‖_F.

The latter bound is the sharper for ‖A^*A − I‖_2 < 1/2.
The use of the complete orthogonal decomposition as a “preprocessor” before
applying an iterative method was suggested by Higham and Schreiber [286, ].
Theorem 8.9 has an interesting history. A version of the theorem was first de-
veloped by Higham [266, ], with a larger constant in the bound for ∆U and the
sharpness of the bounds not established. Barrlund [45, ] and Kenney and Laub
[342, , Thm. 2.3] were the first to recognize the differing sensitivity of U for real
and complex data. Unbeknown to the numerical analysts, in the functional analysis
literature the bound for ∆H had already been obtained via the analysis of the Lips-
chitz continuity of the absolute value map. Araki and Yamagami [16, ] showed
that ‖|A| − |B|‖_F ≤ √2 ‖A − B‖_F and that the constant √2 is as small as possible;
see Problem 8.15 for an elegant proof using ideas of Kittaneh. For some history of
see Problem 8.15 for an elegant proof using ideas of Kittaneh. For some history of
this topic and further bounds on the norm of |A| − |B| see Bhatia [63, , Sec. 5],
[64, , Sec. X.2].
R.-C. Li [387, ] gives a different style of perturbation result for U in which the
perturbation is expressed in multiplicative form (A → XAY ); it shows that the change
in U depends only on how close X and Y are to the identity and not on the condition
of A. In subsequent work Li [388, ] obtains a perturbation bound for H for the
case where A is graded, that is, A = BD where B is well conditioned and the scaling
matrix D (usually diagonal) can be very ill conditioned; the bounds show that the
smaller elements of H can be much less sensitive to perturbations in A than the bound
in Theorem 8.9 suggests. Li also investigates how to compute H accurately, suggesting
the use of the SVD computed by the one-sided Jacobi algorithm; interestingly, it
proves important to compute H from U ∗ A and not directly from the SVD factors.
Some authors have carried out perturbation analysis for the canonical polar de-
composition under the assumption that rank(A) = rank(A + ∆A); see, for example,
R.-C. Li [385, ] and W. Li and Sun [389, ].
The polar decomposition cannot in general be computed in a finite number of
arithmetic operations and radicals, as shown by George and Ikramov [212, ],
[213, ]. For a companion matrix the polar decomposition is finitely computable
and Van Den Driessche and Wimmer [584, ] provide explicit formulae; formulae
for the block companion matrix case are given by Kalogeropoulos and Psarrakos [332,
].
The Newton–Schulz iteration (8.20) is used in Mathematica’s NDSolve function
within the projected integration method that solves matrix ODEs with orthogonal
solutions; see Sofroniou and Spaletta [534, ].
Theorem 8.13 is essentially a special case of a result of Higham, Mackey, Mackey,
and Tisseur [283, , Thm. 4.6] that applies to the generalized polar decomposition
referred to above.
Early papers on iterations for computing the unitary polar factor are those by
Björck and Bowie [72, ], Kovarik [361, ], and Leipnik [379, ]. Each
of these papers (none of which cites the others) develops families of polynomial
iterations—essentially the [ℓ/0] Padé iterations.
The Newton iteration (8.17) was popularized as a general tool for computing the
polar decomposition by Higham [266, ]. It had earlier been used in the aerospace
application mentioned in Section 2.6 to orthogonalize the 3×3 direction cosine matrix;
see Bar-Itzhack, Meyer, and Fuhrmann [43, ].
Gander [202, ] (see also Laszkiewicz and Ziȩtak [372, ]) obtains conditions
on h for iterations of the form (8.21) to have a particular order of convergence and
also considers applying such iterations to rank deficient matrices.
The particular Padé iteration (8.23) in partial fraction form was derived from
the corresponding sign iteration by Higham [273, ]. It was developed into a
216 The Polar Decomposition
practical algorithm for parallel computers by Higham and Papadimitriou [285, ],
who obtain order of magnitude speedups over computing the polar factors via the
SVD on one particular virtual shared memory MIMD computer.
Lemma 8.15 is due to Higham [273, ].
The optimal scaling (8.25) and its approximations (8.26) and (8.27) were suggested
by Higham [265, ], [266, ]. All three scalings are analyzed in detail by Kenney
and Laub [344, ]. The Frobenius norm scaling is also analyzed by Dubrulle [169,
]; see Problem 8.23.
Theorem 8.16 is due to Kenney and Laub [344, ], who also derive (8.30).
Lemmas 8.17 and 8.18 are from Higham [273, ]. Theorem 8.19 is new.
All the globally convergent iterations described in this chapter involve matrix
inverses or the solution of multiple right-hand side linear systems. Problem 8.26
describes how the Newton iteration variant (8.19) can be implemented in an inversion-
free form. The idea behind this implementation is due to Zha and Zhang [622, ],
who consider Hermitian matrices and apply the idea to subspace iteration and to
iterations for the matrix sign function.
A drawback of the algorithms described in this chapter is that they cannot take
advantage of a known polar decomposition of a matrix close to A. Thus knowing a
polar decomposition A e=U eHe is of no help when computing the polar decomposition
of A, however small the norm or rank of A − A. e Indeed the iterations all require A
as the starting matrix. The same comment applies to the iterations for the matrix
sign function and matrix roots. The trace maximization algorithm outlined in Prob-
lem 8.25 can take advantage of a “nearby” polar decomposition but, unfortunately,
the basic algorithm is not sufficiently efficient to be of practical use. An SVD updat-
ing technique of Davies and Smith [138, ] could potentially be used to update
the polar decomposition. See Problem 8.27.
Problems
8.1. Find all polar decompositions of the Jordan block Jm (0) ∈ Cm×m .
8.2. (Uhlig [581, ]) Verify that for nonsingular A ∈ R^{2×2} the polar factors are

    U = γ( A + |det(A)| A^{−T} ),    H = γ( A^TA + |det(A)| I ),

where

    γ = det( A + |det(A)| A^{−T} )^{−1/2}.
8.3. What are the polar and canonical polar decompositions of 0 ∈ Cm×n ?
8.4. (Higham, Mackey, Mackey, and Tisseur [283, , Thm. 4.7]) If Q ∈ C^{n×n} is
unitary and −1 ∉ Λ(Q) what is the polar decomposition of I + Q? How might we
compute Q^{1/2} iteratively using (8.17)?
8.5. (Moakher [434, ]) Show that if Q_1, Q_2 ∈ C^{n×n} are unitary and −1 ∉
Λ(Q_1^*Q_2) then the unitary polar factor of Q_1 + Q_2 is Q_1(Q_1^*Q_2)^{1/2}.
8.6. Let A ∈ Cm×n . Show that if H = (A∗A)1/2 then null(H) = null(A).
8.7. For A ∈ Cm×n with m < n investigate the existence and uniqueness of polar
decompositions A = U H with U ∈ Cm×n having orthonormal rows and H ∈ Cn×n
Hermitian positive semidefinite.
Problems 217
8.8. Show that the condition range(U ∗ ) = range(H) in Theorem 8.3 is equivalent to
range(U ) = range(A).
8.9. Let A ∈ Cn×n be normal and nonsingular. Show that the polar factors U and
H may be expressed as functions of A.
8.10. Give a proof of Theorem 8.4 from first principles for the 2-norm.
8.11. Let A, B ∈ C^{m×n}. Show that min{ ‖A − BW‖_F : W ∈ C^{n×n}, W^*W = I }
and min{ ‖B^*A − W‖_F : W ∈ C^{n×n}, W^*W = I } are attained at the same matrix
W. Thus the orthogonal Procrustes problem reduces to the nearest unitary
matrix problem.
8.12. (Laszkiewicz and Ziȩtak [372, ]) Show that for A ∈ C^{m×n} and any unitarily
invariant norm

    min{ ‖A − Q‖ : Q ∈ C^{m×n} is a partial isometry } = max_{i=1:min(m,n)} max(σ_i, |1 − σ_i|),
8.16. (Scalar version of Theorem 8.10. This bound is not readily found in texts on
complex analysis.) Show that if z_1 = r_1e^{iθ_1} and z_2 = r_2e^{iθ_2} are complex numbers in polar
form then

    |e^{iθ_1} − e^{iθ_2}| ≤ 2|z_1 − z_2|/(r_1 + r_2).
where ε ≫ δ > 0. Show that the difference ‖U − Ũ‖_F between the respective polar
factors is of order δ/ε, showing that the sensitivity of U depends on 1/σ_n and not on
1/(σ_n + σ_{n−1}) as it would if A were square.
8.18. Show that the Newton iteration (8.17) can be derived by applying Newton’s
method to the equation X ∗ X = I.
8.19. Prove that for nonsingular A ∈ Cn×n the iterates Xk from (8.17) and Yk from
(8.19) are related by Yk = Xk−∗ for k ≥ 1.
8.20. Let A ∈ C^{m×n} of rank n have the polar decomposition A = UH. Show that in
the Newton–Schulz iteration (8.20), X_k → U quadratically as k → ∞ if ‖A‖_2 < √3,
and that

    ‖X_{k+1} − U‖_2 ≤ (1/2) ‖X_k + 2U‖_2 ‖X_k − U‖_2^2.    (8.36)

Show also that R_k = I − X_k^*X_k satisfies R_{k+1} = (3/4)R_k^2 + (1/4)R_k^3.
8.21. The Newton iteration (8.17) and the Newton–Schulz iteration (8.20) are both
quadratically convergent. The convergence of the Newton iteration is described by
(8.18) and that of Newton–Schulz by (8.36). What do the error constants in the error
relations for these two iterations imply about their relative speeds of convergence?
8.22. Show for the unscaled Newton iteration (8.17) that ‖U − X_k‖_2 = f^{(k−1)}(σ_1^{(1)}) −
1, where f(x) = (1/2)(x + 1/x) and σ_i^{(k)} = σ_i(X_k). [Cf. (8.30) for the optimally scaled
iteration.]
8.23. (Dubrulle [169, ]) (a) Show that the Frobenius norm scaling (8.27) mini-
mizes kXk+1 kF over all µk .
(b) Show that the 1, ∞-norm scaling (8.26) minimizes a bound on kXk+1 k1 kXk+1 k∞ .
8.24. Compare the flop count of Algorithm 8.20 with that for computation of the
polar factors via the SVD.
Problems 219
8.25. We know from Problem 8.13 that the polar factors U and H of A ∈ Rn×n satisfy
max{ trace(W ∗ A) : W ∗ W = I } = trace(U ∗ A) = trace(H). Thus premultiplying A
by U ∗ both symmetrizes it and maximizes the trace. This suggests developing an
algorithm for computing the polar decomposition that iteratively maximizes the trace
by premultiplying A by a succession of suitably chosen orthogonal matrices.
(a) Show that if a_{ij} ≠ a_{ji} then a Givens rotation G = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} exists such that
G^T \begin{bmatrix} a_{ii} & a_{ij} \\ a_{ji} & a_{jj} \end{bmatrix} is symmetric and has maximal trace over all θ.
(b) Show that if A is symmetric but indefinite then a Householder transformation
G = I − 2vv T /(v T v) can be chosen so that trace(GA) > trace(A).
(c) Develop an algorithm for computing the polar decomposition based on a combi-
nation of the transformations in (a) and (b).
for some unitary M . By examining the blocks of this equation, obtain an expression
for Q2 Q∗1 and hence show that the Newton iteration variant (8.19), which we rewrite
as
    X_{k+1} = 2X_k(I + X_k^*X_k)^{−1},    X_0 = A,    (8.37)

can be written as

    \begin{bmatrix} I \\ X_k \end{bmatrix} = \begin{bmatrix} Q_1^{(k)} \\ Q_2^{(k)} \end{bmatrix} R_k \quad \text{(QR factorization)},    (8.38a)
    X_{k+1} = 2Q_2^{(k)} Q_1^{(k)*}.    (8.38b)

Here I is n × n and X_k is m × n, so Q_1^{(k)} is n × n and Q_2^{(k)} is m × n.
We have seen in earlier chapters that reliable algorithms for computing the matrix
sign function and matrix roots can be constructed by using the Schur decomposition
to reduce the problem to the triangular case and then exploiting particular properties
of the functions. We now develop a general purpose algorithm for computing f (A)
via the Schur decomposition. We assume that the reader has read the preliminary
discussion in Section 4.6.
Our algorithm for computing f (A) consists of several stages. The Schur decom-
position A = QT Q∗ is computed; T is reordered and blocked to produce another
triangular matrix Te with the property that distinct diagonal blocks have “sufficiently
distinct” eigenvalues and the eigenvalues within each diagonal block are “close”; the
diagonal blocks f (Teii ) are computed; the rest of f (Te) is obtained using the block form
of the Parlett recurrence; and finally the unitary similarity transformations from the
Schur decomposition and the reordering are reapplied. We consider first, in Sec-
tion 9.1, the evaluation of f on the atomic triangular blocks Teii , for which we use
a Taylor series expansion. “Atomic” refers to the fact that these blocks cannot be
further reduced. This approach is mainly intended for functions whose Taylor series
have an infinite radius of convergence, such as the trigonometric and hyperbolic func-
tions, but for some other functions this step can be adapted or replaced by another
technique, as we will see in later chapters. In Section 9.2 we analyze the use of the
block form of Parlett’s recurrence. Based on the conflicting requirements of these
two stages we describe a Schur reordering and blocking strategy in Section 9.3. The
overall algorithm is summarized in Section 9.4, where its performance on some test
matrices is illustrated. The relevance of several preprocessing techniques is discussed
in Section 9.5.
which defines M as T shifted by the mean of its eigenvalues. If f has a Taylor series
representation
    f(σ + z) = Σ_{k=0}^{∞} ( f^{(k)}(σ) / k! ) z^k    (9.2)
If T has just one eigenvalue, so that tii ≡ σ, then M is strictly upper triangular
and hence is nilpotent with M n = 0; the series (9.3) is then finite. More generally, if
the eigenvalues of T are sufficiently close, then the powers of M can be expected to
decay quickly after the (n − 1)st, and so a suitable truncation of (9.3) should yield
good accuracy. This notion is made precise in the following lemma, in which M is
represented by M = D + N , with D diagonal and N strictly upper triangular and
hence nilpotent with M n = 0.
Lemma 9.1 (Davies and Higham). Let D ∈ Cn×n be diagonal with |D| ≤ δI and let
N ∈ Cn×n be strictly upper triangular. Then
    |(D + N)^k| ≤ \sum_{i=0}^{\min(k,n-1)} \binom{k}{i} δ^{k−i} |N|^i
and the same inequality holds with the absolute values replaced by any matrix norm
subordinate to an absolute vector norm.
Proof. The bound follows from

    |(D + N)^k| ≤ (|D| + |N|)^k ≤ (δI + |N|)^k,

followed by a binomial expansion of the last term. Since |N|^n = 0 we can drop
the terms involving |N|^i for i ≥ n. The corresponding bound for matrix norms
is obtained by taking norms in the binomial expansion of (D + N)^k and using (B.6).
Theorem 9.2 (Davies and Higham). Let D ∈ Cn×n be diagonal with distinct eigen-
values λ1 , . . . , λp (1 ≤ p ≤ n) of multiplicity k1 , . . . , kp , respectively, and let the values
f (j) (λi ), j = 0: ki − 1, i = 1: p, be defined. Then f (D + N ) = f (D) for all strictly
triangular N ∈ Cn×n if and only if f (D) = f (λ1 )I and
Corollary 9.3. Let D ∈ Cn×n be a nonzero diagonal matrix and let k ≥ 2. Then
(D + N )k = Dk for all strictly triangular matrices N ∈ Cn×n if and only if
Proof. By Theorem 9.2, all the diagonal elements of D must be kth roots of
the same number, β^k say. The condition (9.5) implies that any repeated diagonal
element d_ii must satisfy f′(d_ii) = k d_ii^{k−1} = 0, which implies d_ii = 0 and hence D = 0;
therefore D has distinct diagonal elements.
As a check, we note that the diagonal of M in (9.4) is of the form in the corollary
for even powers k. The corollary shows that this phenomenon of very nonmonotonic
convergence of the Taylor series can occur when the eigenvalues are a constant multiple
of kth roots of unity. As is well known, the computed approximations to multiple
eigenvalues occurring in a single Jordan block tend to have this distribution. We
will see in the experiment of Section 9.4 that this eigenvalue distribution also causes
problems in finding a good blocking.
We now develop a strict bound for the truncation error of the Taylor series, which
we will use to decide when to terminate the series. We apply Theorem 4.8 with
A := σI + M , α := σ, M from (9.1), and the Frobenius norm, and so we need to be
able to bound max_{0≤t≤1} ‖M^s f^{(s)}(σI + tM)‖_F. We will bound it by the product of
the norms, noting that the term M^s is needed anyway if we form the next term of
the series. To bound max_{0≤t≤1} ‖f^{(s)}(σI + tM)‖_F we can use Theorem 4.28 to show
that

    max_{0≤t≤1} ‖f^{(s)}(σI + tM)‖_F ≤ max_{0≤r≤n−1} (ω_{s+r}/r!) ‖(I − |N|)^{−1}‖_F,    (9.6)

where ω_{s+r} = sup_{z∈Ω} |f^{(s+r)}(z)| and N is the strictly upper triangular part of M.
By using (9.6) in (4.8) we can therefore bound the truncation error. Approximating
the Frobenius norm by the ∞-norm, the term ‖(I − |N|)^{−1}‖_∞ can be evaluated
in just O(n^2) flops, since I − |N| is an M-matrix: we solve the triangular system
(I − |N|)y = e, where e = [1, 1, . . . , 1]^T, and then ‖y‖_∞ = ‖(I − |N|)^{−1}‖_∞ [276, ,
Sec. 8.3].
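For concreteness, a small MATLAB sketch of this O(n^2) norm evaluation (the test matrix is illustrative; M and N are as in the discussion above):

    % ||(I - |N|)^{-1}||_inf via one triangular solve with the vector of ones,
    % valid because I - |N| is an M-matrix with nonnegative inverse.
    n = 6;
    T = schur(randn(n),'complex');        % an illustrative upper triangular matrix
    M = T - mean(diag(T))*eye(n);
    N = triu(M,1);                        % strictly upper triangular part of M
    y = (eye(n) - abs(N)) \ ones(n,1);    % triangular solve with e = [1,...,1]^T
    mu = norm(y,inf)                      % equals ||(I - |N|)^{-1}||_inf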
We now state our algorithm for evaluating a function of an atomic block via the
Taylor series.
Algorithm 9.4. Given a triangular matrix T ∈ C^{n×n} with eigenvalues λ_i, a function f having the Taylor series
(9.2) for z in an open disk containing λ_i − σ, i = 1:n, where σ = n^{−1} Σ_{i=1}^n λ_i, and
the ability to evaluate derivatives of f, this algorithm computes F = f(T) using a
truncated Taylor series.
1  σ = n^{−1} Σ_{i=1}^n λ_i, M = T − σI, tol = u
2  µ = ‖y‖_∞, where y solves (I − |N|)y = e and N is the strictly
   upper triangular part of T.
3  F_0 = f(σ)I_n
4  P = M
5  for s = 1:∞
6      F_s = F_{s−1} + f^{(s)}(σ)P
7      P = PM/(s + 1)
8      if ‖F_s − F_{s−1}‖_F ≤ tol ‖F_s‖_F
           % Successive terms are close so check the truncation error bound.
9          Estimate or bound ∆ = max_{0≤r≤n−1} ω_{s+r}/r!, where
           ω_{s+r} = sup_{z∈Ω} |f^{(s+r)}(z)|, with Ω a closed convex set containing Λ(T).
10         if µ∆‖P‖_F ≤ tol ‖F_s‖_F, quit, end
11     end
12 end
If there is heavy cancellation in the sum (9.3) then a large relative error ‖F − F̂‖/‖F‖ is
possible. This danger is well known, particularly in the case of the matrix exponential
(see Chapter 10). A mitigating factor here is that our matrix T is chosen to have
eigenvalues that are clustered, which tends to limit the amount of cancellation in the
sum. However, for sufficiently far from normal T , damaging cancellation can take
place. For general functions there is little we can do to improve the accuracy; for
particular f we can of course apply alternative methods, as illustrated in Chapters 10
and 11.
This Sylvester equation is nonsingular and it is easy to see that Fij can be computed
a column at a time from first to last, with each column obtained as the solution of a
triangular system. Of particular concern is the propagation of errors in the recurrence.
These errors are of two sources: errors in the evaluation of the diagonal blocks Fii ,
and rounding errors in the formation and solution of (9.7). To gain insight into both
types of error we consider the residual of the computed solution F̂:

    T F̂ − F̂ T =: R,    (9.8)

where R_ij is the residual from the solution of the Sylvester equation (9.7). Although
it is possible to obtain precise bounds on R, these are not important to our argument.
Writing F̂ = F + ∆F, on subtracting TF − FT = 0 from (9.8) we obtain

    T ∆F − ∆F T = R.
and these equations can be solved to determine ∆Fij a block superdiagonal at a time.
It is straightforward to show that

    sep(T_ii, T_jj) = min_{X≠0} ‖T_ii X − X T_jj‖_F / ‖X‖_F.
It follows that rounding errors introduced during the stage at which Fij is com-
puted (i.e., represented by Rij ) can lead to an error ∆Fij of norm proportional to
sep(Tii , Tjj )−1 kRij k. Moreover, earlier errors (represented by the ∆Fij terms on the
right-hand side of (9.9)) can be magnified by a factor sep(Tii , Tjj )−1 . It is also clear
from (9.9) that even if sep(Tii , Tjj )−1 is not large, serious growth of errors in the
recurrence (9.9) is possible if some off-diagonal blocks Tij are large.
To minimize the bounds (9.10) for all i and j we need the blocks Tii to be as
well separated as possible in the sense of sep. However, trying to maximize the
separations between the diagonal blocks Tii tends to produce larger blocks with less
tightly clustered eigenvalues, which increases the difficulty of evaluating f (Tii ), so any
strategy for reordering the Schur form is necessarily a compromise.
Computing sep(Tii , Tjj ) exactly when both blocks are m × m costs O(m4 ) flops,
while condition estimation techniques allow an estimate to be computed at the cost of
solving a few Sylvester equations, that is, in O(m3 ) flops [87, ], [270, ], [324,
]. It is unclear how to develop a reordering and blocking strategy for produc-
ing “large seps” at reasonable cost; in particular, it is unclear how to define “large.”
Indeed the maximal separations are likely to be connected with the conditioning of
f (T ), but little or nothing is known about any such connections. More generally, how
to characterize matrices for which the condition number of f is large is not well under-
stood, even for the matrix exponential (see Section 10.2). Recalling the equivalence
mentioned in Section 4.7 between block diagonalization and the use of the Parlett
recurrence, a result of Gu [232, ] provides further indication of the difficulty of
maximizing the seps: he shows that, given a constant τ , finding a similarity transfor-
mation with condition number bounded by τ that block diagonalizes (with at least
two diagonal blocks) a triangular matrix is NP-hard.
In the next section we will adopt a reordering and blocking strategy that bounds
the right-hand side of the approximation
1
sep(Tii , Tjj )−1 ≈
min{ |λ − µ| : λ ∈ Λ(Tii ), µ ∈ Λ(Tjj ) }
by the reciprocal of a given tolerance. The right-hand side is a lower bound for the
left that can be arbitrarily weak, but it is a reasonable approximation for matrices
not too far from being normal.
It is natural to look for ways of improving the accuracy of the computed Fb from the
Parlett recurrence. One candidate is fixed precision iterative refinement of the systems
(9.7). However, these systems are essentially triangular, and standard error analysis
shows that the backward error is already small componentwise [276, , Thm. 8.5];
fixed precision iterative refinement therefore cannot help. The only possibility is to
use extended precision when solving the systems.
2. separation within blocks: for every block T̃_ii with dimension bigger than 1, for
every λ ∈ Λ(T̃_ii) there is a µ ∈ Λ(T̃_ii) with µ ≠ λ such that |λ − µ| ≤ δ.
Here, δ > 0 is a blocking parameter. The second property implies that for T̃_ii ∈ R^{m×m}
(m > 1)

    max{ |λ − µ| : λ, µ ∈ Λ(T̃_ii), λ ≠ µ } ≤ (m − 1)δ,

and this bound is attained when, for example, Λ(T̃_ii) = {δ, 2δ, . . . , mδ}.
The following algorithm is the first step in obtaining such an ordering. It can be
interpreted as finding the connected components of the graph on the eigenvalues of
T in which there is an edge between two nodes if the corresponding eigenvalues are a
distance at most δ apart.
Algorithm 9.5 (block pattern). Given a triangular matrix T ∈ C^{n×n} with eigenvalues λ_i ≡ t_ii and a blocking parameter δ > 0, this algorithm produces a block pattern,
defined by an integer vector q, for the block Parlett recurrence: the eigenvalue λ_i is
assigned to the set S_{q_i}, and it satisfies the conditions that min{ |λ_i − λ_j| : λ_i ∈ S_p, λ_j ∈
S_q, p ≠ q } > δ and, for each set S_i with more than one element, every element of S_i
is within distance at most δ from some other element in the set. For each set S_q, all
the eigenvalues in S_q are intended to appear together in an upper triangular block T̃_ii
of T̃ = U^*TU.

1   p = 1
2   Initialize the S_p to empty sets.
3   for i = 1:n
4       if λ_i ∉ S_q for all 1 ≤ q < p
5           Assign λ_i to S_p.
6           p = p + 1
7       end
8       for j = i + 1:n
9           Denote by S_{q_i} the set that contains λ_i.
10          if λ_j ∉ S_{q_i}
11              if |λ_i − λ_j| ≤ δ
12                  if λ_j ∉ S_k for all 1 ≤ k < p
13                      Assign λ_j to S_{q_i}.
14                  else
15                      Move the elements of S_{max(q_i,q_j)} to S_{min(q_i,q_j)}.
16                      Reduce by 1 the indices of sets S_q for q > max(q_i, q_j).
17                      p = p − 1
18                  end
19              end
20          end
21      end
22  end
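The connected-components interpretation mentioned before Algorithm 9.5 can also be realized directly; the following MATLAB sketch (an aside using the graph/conncomp functions rather than the book's in-place procedure) produces an assignment vector q that is equivalent up to a renumbering of the sets:

    % Group eigenvalues so that any two at distance <= delta lie in the same set.
    T = schur(randn(8),'complex');
    lambda = diag(T);  delta = 0.1;
    Adj = abs(lambda - lambda.') <= delta;        % adjacency of the eigenvalue graph
    q = conncomp(graph(Adj,'omitselfloops'));     % q(i) = index of the set containing lambda(i)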
adjacent elements:
Swapping two adjacent diagonal elements of T requires 20n flops, plus another 20n
flops to update the Schur vectors, so the cost of the swapping is 40n times the num-
ber of swaps. The total cost is usually small compared with the overall cost of the
algorithm.
The cost of Algorithm 9.6 depends greatly on the eigenvalue distribution of A, and is
roughly between 28n^3 flops and n^4/3 flops. Note that Q, and hence F, can be kept
in factored form, with a significant computational saving. This is appropriate if F
needs just to be applied to a few vectors, for example.
We have set the blocking parameter δ = 0.1, which our experiments indicate is as
good a default choice as any. The optimal choice of δ in terms of cost or accuracy is
problem-dependent.
Algorithm 9.6 has a property noted as being desirable by Parlett and Ng [462,
]: it acts simply on simple cases. Specifically, if A is normal, so that the Schur
decomposition is A = QDQ∗ with D diagonal, the algorithm simply evaluates f (A) =
Qf (D)Q∗ . At another extreme, if A has just one eigenvalue of multiplicity n, then
the algorithm works with a single block, T11 ≡ T , and evaluates f (T11 ) via its Taylor
series expanded about the eigenvalue.
Another attraction of Algorithm 9.6 is that it allows a function of the form f(A) =
Σ_i f_i(A) (e.g., f(A) = sin A + cos A) to be computed with less work than is required
to compute each f_i(A) separately, since the Schur decomposition and its reordering
need only be computed once.
Reordering the Schur form is a nontrivial subject, the state of the art of which
is described by Bai and Demmel [28, ]. The algorithm described therein uses
unitary similarities and effects a sequence of swaps of adjacent blocks as in (9.12).
The algorithm has guaranteed backward stability and, for swapping only 1 × 1 blocks
as we are here, always succeeds. LAPACK routine xTREXC implements this algorithm
and is called by the higher level Schur reordering routine xTRSEN [12, ]; the
MATLAB function ordschur calls xTRSEN.
Algorithm 9.6 was developed by Davies and Higham [135, ] and is imple-
mented by the MATLAB function funm.
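For example (an illustrative usage, not from the text), funm can be called with a function handle:

    % Using MATLAB's funm, which implements a Schur-Parlett algorithm.
    A  = gallery('lesp',8);                          % a nonnormal test matrix
    F1 = funm(A,@sin);                               % matrix sine
    F2 = funm(A,@exp);                               % matrix exponential via funm
    norm(F2 - expm(A),'fro')/norm(expm(A),'fro')     % compare with expm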
For real matrices, it might seem that by using the real Schur decomposition in the
first step of Algorithm 9.6 it would be possible to work entirely in real arithmetic.
However, the algorithm’s strategy of placing eigenvalues that are not close in different
blocks requires splitting complex conjugate pairs of eigenvalues having large imaginary
parts, forcing complex arithmetic, so the algorithm does not in general lend itself to
exploitation of the real Schur form. However, if A is real and normal then the real
Schur decomposition is block diagonal, no reordering is necessary, and Algorithm 9.6
can be reduced to computation of the Schur form and evaluation of f on the diagonal
blocks. For the 2 × 2 diagonal blocks Problem 9.2 provides appropriate formulae.
We focus for a moment on some negative aspects of the algorithm revealed by
numerical experiments in [135, ]. The algorithm can be unstable, in the sense
that the normwise relative error can greatly exceed condrel (f, A)u. Changing the
blocking parameter δ (say from 0.1 to 0.2) may produce a different blocking that
cures the instability. However, instability can be present for all choices of δ. Moreover,
instability can be present for all nontrivial blockings (i.e., any blocking with more than
one block)—some of which it might not be possible to generate by an appropriate
choice of δ in the algorithm. The latter point indicates a fundamental weakness of
the Parlett recurrence.
To illustrate the typical behaviour of Algorithm 9.6 we present a numerical exper-
iment with f the exponential function. We took 71 test matrices, which include some
from MATLAB (in particular, from the gallery function), some from the Matrix
Computation Toolbox [264], and test matrices from the eA literature; most matrices
are 10 × 10, with a few having smaller dimension. We evaluated the normwise relative
errors of the computed matrices from a modified version funm mod of MATLAB 7.6’s
funm (when invoked as funm(A,@exp) the modified version uses Algorithm 9.4 for
the diagonal blocks), where the “exact” eA is obtained at 100 digit precision using
MATLAB’s Symbolic Math Toolbox.
Figure 9.1 displays the relative errors together with a solid line representing
condrel (exp, A)u, where condrel (exp, A) is computed using Algorithm 3.17 with Al-
gorithm 10.27, and the results are sorted by decreasing condition number; the norm
is the Frobenius norm. For funm to perform in a forward stable manner its error
should lie not too far above this line on the graph; note that we must accept some
dependence of the error on n.
The errors are mostly very satisfactory, but with two exceptions. The first ex-
ception is the MATLAB matrix gallery(’chebspec’,10). This matrix is similar
to a Jordan block of size 10 with eigenvalue 0 (and hence is nilpotent), modulo the
rounding errors in its construction. The computed eigenvalues lie roughly on a circle
with centre 0 and radius 0.2; this is the most difficult distribution for Algorithm 9.6
to handle. With the default δ = 0.1, funm mod chooses a blocking with eight 1 × 1
blocks and one 2 × 2 block. This leads to an error ≈ 10−7 , which greatly exceeds
Figure 9.1. Normwise relative errors for funm mod (∗) and condrel (exp, A)u (solid line).
condrel (exp, A)u ≈ 10−13 . However, increasing δ to 0.2 produces just one 10 × 10
block, which after using 20 terms of the Taylor series leads to an error ≈ 10−13 . The
other exceptional matrix is gallery(’forsythe’,10), which is a Jordan block of
size 10 with eigenvalue 0 except for a (10,1) entry of u^{1/2}. The computed eigenvalue
distribution and the behaviour for δ = 0.1 and δ = 0.2 are similar to that for the
chebspec matrix.
The main properties of Algorithm 9.6 can be summarized as follows.
1. The algorithm requires O(n^3) flops unless close or repeated eigenvalues force a
large block T_ii to be chosen, in which case the operation count can be up to
n^4/3 flops.
2. The algorithm needs to evaluate derivatives of the function when there are
blocks of dimension greater than 1.
3. In practice the algorithm usually performs in a forward stable manner. However,
the error can be greater than the condition of the problem warrants; if this
behaviour is detected then a reasonable strategy is to recompute the blocking
with a larger δ.
Points 1 and 2 are the prices to be paid for catering for general functions and
nonnormal matrices with possibly repeated eigenvalues.
9.5. Preprocessing
In an attempt to improve the accuracy of Algorithm 9.6 we might try to preprocess
the data before applying a particular stage of the algorithm, using one or more of the
techniques discussed in Section 4.10.
Translation has no effect on our algorithm. Algorithm 9.4 for evaluating the Taylor
series already translates the diagonal blocks, and further translations before applying
the Parlett recurrence are easily seen to have no effect, because (9.7) is invariant
under translations T → T − αI and F → F − βI.
A diagonal similarity transformation could be applied at any stage of the algorithm and then undone later. For example, such a transformation could be used in conjunction with Parlett’s recurrence in order to make U := D^{−1} T D less nonnormal than T and to increase the separations between diagonal blocks. In fact, by choosing D of the form D = diag(θ^{n−1}, . . . , 1) we can make U arbitrarily close to diagonal form.
Unfortunately, no practical benefit is gained: Parlett’s recurrence involves solving tri-
angular systems and the substitution algorithm is invariant under diagonal scalings
(at least, as long as they involve only powers of the machine base). Similar comments
apply to the evaluation of the Taylor series in Algorithm 9.4.
It may be beneficial to apply balancing at the outset, prior to computing the Schur
decomposition, particularly when we are dealing with badly scaled matrices.
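The fragment below is a minimal MATLAB sketch (not from the text) of balancing used as such a preprocessing step, here simply wrapped around a funm call; within Algorithm 9.6 it would instead be applied before computing the Schur decomposition. The matrix is illustrative. With [T,B] = balance(A) we have B = T\A*T, so f(A) = T f(B) T^{−1}.

    A = [1 1e8; 1e-8 2];              % an illustrative badly scaled matrix
    [T, B] = balance(A);
    F = T*funm(B, @exp)/T;            % compare with funm(A, @exp) applied directly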
Problems
9.1. Prove Theorem 9.2.
9.2. Let A ∈ R^{2×2} have distinct complex conjugate eigenvalues λ = θ + iµ and λ̄, and
let f(λ) = α + iβ. Show that f(A) = αI + βµ^{−1}(A − θI). If A is normal how does
this formula simplify?
Chapter 10
Matrix Exponential
The matrix exponential is by far the most studied matrix function. The interest in
it stems from its key role in the solution of differential equations, as explained in
Chapter 2. Depending on the application, the problem may be to compute eA for a
given A, to compute eAt for a fixed A and many t, or to apply eA or eAt to a vector
(cf. (2.3)); the precise task affects the choice of method.
Many methods have been proposed for computing eA , typically based on one of
the formulae summarized in Table 10.1. Most of them are of little practical interest
when numerical stability, computational cost, and range of applicability are taken
into account. In this chapter we make no attempt to survey the range of existing
methods but instead restrict to a few of proven practical value.
A broad assortment of methods is skilfully classified and analyzed in the classic
“Nineteen dubious ways” paper of Moler and Van Loan [437, ], reprinted with an
update in [438, ]. The conclusion of the paper is that there are three candidates
for best method. One of these—employing methods for the numerical solution of
ODEs—is outside the scope of this book, and in fact the converse approach of using
the matrix exponential in the solution of differential equations has received increasing
attention in recent years (see Section 2.1.1). The others—the scaling and squaring
method and the use of the Schur form—are treated here in some detail. The scaling
and squaring method has become the most widely used, not least because it is the
method implemented in MATLAB.
This chapter begins with a summary of some basic properties, followed by results
on the Fréchet derivative and the conditioning of the eA problem. The scaling and
squaring method based on underlying Padé approximation is then described in some
detail. Three approaches based on the Schur decomposition are outlined. The be-
haviour of the scaling and squaring method and two versions of the Schur method of
the previous chapter is illustrated on a variety of test problems. Several approaches
for approximating the Fréchet derivative and estimating its norm are explained. A
final section treats miscellaneous topics: best rational L∞ approximation, essentially
nonnegative matrices, preprocessing, and the ψ functions that we first saw in Sec-
tion 2.1.
Another representation is

    e^A = lim_{s→∞} (I + A/s)^s.                                           (10.2)

This formula is the limit of the first order Taylor expansion of A/s raised to the power
s ∈ Z. More generally, we can take the limit as r → ∞ or s → ∞ of r terms of the
Taylor expansion of A/s raised to the power s, thereby generalizing both (10.1) and
(10.2). The next result shows that this general formula yields e^A and it also provides
an error bound for finite r and s.
    ‖e^A − T_{r,s}‖ ≤ ( ‖A‖^{r+1} / (s^r (r + 1)!) ) e^{‖A‖}.              (10.4)

Hence

    ‖e^A − T_{r,s}‖ ≤ ‖B − T‖ s max_{i=0:s−1} ‖B‖^i ‖T‖^{s−i−1}.

Now ‖T‖ ≤ Σ_{i=0}^{r} (1/i!)(‖A‖/s)^i ≤ e^{‖A‖/s}, and ‖B‖ satisfies the same bound, so

    ‖e^A − T_{r,s}‖ ≤ s ‖e^{A/s} − T‖ e^{((s−1)/s)‖A‖}.
Theorem 10.2. For A, B ∈ Cn×n , e(A+B)t = eAt eBt for all t if and only if AB =
BA.
Proof. If AB = BA then all the terms in the power series expansions of e(A+B)t
and eAt eBt commute and so these matrices are equal for the same reasons as in the
scalar case. If e(A+B)t = eAt eBt for all t then equating coefficients of t2 in the power
series expansions of both sides yields (AB + BA)/2 = AB or AB = BA.
The commutativity of A and B is not necessary for e^{A+B} = e^A e^B to hold, as the example

    A = [ 0   1       B = [ 0   0
          0  2πi ],         0  2πi ]

shows (e^{A+B} = e^A = e^B = I). But if A and B have
algebraic entries then their commutativity is necessary for eA+B = eA eB = eB eA to
hold. An algebraic number is defined by the property that it is a root of a polynomial
with rational (or equivalently, integer) coefficients.
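A quick MATLAB check of matrices of this form (as reconstructed above) confirms the claim numerically:

    A = [0 1; 0 2i*pi];
    B = [0 0; 0 2i*pi];
    norm(A*B - B*A)            % nonzero: A and B do not commute
    norm(expm(A) - eye(2))     % ~1e-15
    norm(expm(B) - eye(2))     % ~1e-15
    norm(expm(A+B) - eye(2))   % ~1e-15: e^{A+B} = e^A e^B = I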
Theorem 10.3 (Wermuth). Let A ∈ Cn×n and B ∈ Cn×n have algebraic elements
and let n ≥ 2. Then eA eB = eB eA if and only if AB = BA.
Proof. The “if” part is trivial. For the “only if”, note that Lindemann’s theorem
on the transcendence of π implies that no two eigenvalues of A differ by a nonzero
integer multiple of 2πi, since the eigenvalues of A, being the roots of a polynomial
with algebraic coefficients, are themselves algebraic. Hence A is a primary logarithm
of eA , since the nonprimary logarithms (if any) are characterized by two copies of
a repeated eigenvalue being mapped to different logarithms, which must differ by a
nonzero integer multiple of 2πi (see Theorem 1.28). Thus A is a polynomial in eA ,
and likewise B is a polynomial in eB . Since eA and eB commute, A and B commute.
This relation can be seen as the first two terms in the following general result con-
necting etA etB and et(A+B) .
Note that the third order term in (10.5) can be written in other ways, such as
(t^3/12)([[A, B], B] + [[B, A], A]). A kind of dual to (10.5) is an infinite product formula
of Zassenhaus.
Yet another variant of relations between e^{tA} e^{tB} and e^{t(A+B)} is e^{t(A+B)} = e^{tA} e^{tB} + Σ_{i=2}^{∞} E_i t^i, for which Richmond [489, ] gives recurrences for the E_i. In a different
vein, the Strang splitting breaks etB into its square root factors and thereby provides
a second order accurate approximation: et(A+B) = etB/2 etA etB/2 + O(t3 ).
Another way to relate the exponential of a sum to a related product of exponentials
is via the limit in the next result.
Theorem 10.6 (Suzuki). For A_1, . . . , A_p ∈ C^{n×n} and any consistent matrix norm,

    ‖e^{A_1+···+A_p} − (e^{A_1/m} · · · e^{A_p/m})^m‖ ≤ (2/m) ( Σ_{j=1}^{p} ‖A_j‖ )^2 exp( ((m+2)/m) Σ_{j=1}^{p} ‖A_j‖ ).   (10.7)

Hence

    e^{A_1+···+A_p} = lim_{m→∞} ( e^{A_1/m} · · · e^{A_p/m} )^m.           (10.8)
Proof. Let G = e^{(A_1+···+A_p)/m} and H = e^{A_1/m} · · · e^{A_p/m}. Then we need to bound
‖G^m − H^m‖. Using Lemma B.4 and Theorem 10.10 we have

    ‖G^m − H^m‖ ≤ ‖G − H‖ ( ‖G‖^{m−1} + ‖G‖^{m−2}‖H‖ + · · · + ‖H‖^{m−1} )
                ≤ m ‖G − H‖ e^{((m−1)/m) Σ_{j=1}^{p} ‖A_j‖}.

Now

    ‖G − H‖ ≤ ‖H‖ ‖GH^{−1} − I‖ ≤ ‖H‖ ( e^{(2/m) Σ_{j=1}^{p} ‖A_j‖} − ( 1 + (2/m) Σ_{j=1}^{p} ‖A_j‖ ) )
            ≤ ‖H‖ (1/2) ( (2/m) Σ_{j=1}^{p} ‖A_j‖ )^2 e^{(2/m) Σ_{j=1}^{p} ‖A_j‖},

where for the second inequality we used ‖GH^{−1}‖ ≤ ‖G‖ ‖H^{−1}‖ ≤ e^{(2/m) Σ_{j=1}^{p} ‖A_j‖} and
the fact that the first two terms of the expansion of GH^{−1} − I are zero, and for the
third inequality we used a Taylor series with remainder (Theorem 4.8). Hence

    ‖G^m − H^m‖ ≤ m e^{((m−1)/m) Σ_{j=1}^{p} ‖A_j‖} · (2/m^2) ( Σ_{j=1}^{p} ‖A_j‖ )^2 e^{(3/m) Σ_{j=1}^{p} ‖A_j‖},

as required.
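A quick numerical illustration (a sketch in MATLAB, with illustrative non-commuting matrices) of the limit (10.8) with p = 2: the splitting error decays roughly like 1/m, consistent with (10.7).

    A1 = [0 1; -1 0];  A2 = [1 2; 0 -1];      % A1*A2 ~= A2*A1
    E = [];
    for m = [1 10 100 1000]
        E(end+1) = norm(expm(A1+A2) - (expm(A1/m)*expm(A2/m))^m, 2);
    end
    E     % errors decrease by roughly a factor 10 per tenfold increase in m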
The following result provides a better error bound for p = 2 than (10.7) in the
sense that the bound is small when the commutator is small.
Theorem 10.9. Let A ∈ Cn×n and B ∈ Cm×m . Then eA⊗I = eA ⊗I, eI⊗B = I ⊗eB ,
and eA⊕B = eA ⊗ eB .
Proof. The first relation is easily obtained from the power series (10.1) and
also follows from Theorem 1.13 (h), (i). The proof of the second relation is analogous.
Since A⊗I and I ⊗B commute, eA⊕B = eA⊗I+I⊗B = eA⊗I eI⊗B = (eA ⊗I)(I ⊗eB ) =
eA ⊗ eB .
Bounds for the norm of the matrix exponential are of great interest. We state just
three of the many bounds available in the literature. We need the spectral abscissa
α(A) = max{ Re λ : λ ∈ Λ(A) }.
Theorem 10.12. Let A ∈ Cn×n have the Schur decomposition A = Q(D + N )Q∗ ,
where D is diagonal and N is strictly upper triangular. Then
    e^{α(A)} ≤ ‖e^A‖_2 ≤ e^{α(A)} Σ_{k=0}^{n−1} ‖N‖_2^k / k!.              (10.12)
10.2. Conditioning
In this section we investigate the sensitivity of the matrix exponential to perturba-
tions.
The matrix exponential satisfies the identity (see Problem 10.1)

    e^{(A+E)t} = e^{At} + ∫_0^t e^{A(t−s)} E e^{(A+E)s} ds.               (10.13)

Using this expression to substitute for e^{(A+E)s} inside the integral yields

    e^{(A+E)t} = e^{At} + ∫_0^t e^{A(t−s)} E e^{As} ds + O(‖E‖^2).        (10.14)
Hence, from the definition (3.6), the Fréchet derivative of the exponential at A in the
direction E is given by

    L_exp(A, E) = ∫_0^1 e^{A(1−s)} E e^{As} ds.                           (10.15)
where ψ_1(x) = (e^x − 1)/x and τ(x) = tanh(x)/x. The third expression is valid if
(1/2)‖A^T ⊕ (−A)‖ < π/2 for some consistent matrix norm.

        = (1/2)( e^{A^T} ⊗ I + I ⊗ e^A ) τ( (1/2)[A^T ⊕ (−A)] )
        = (1/2)( e^{A^T} ⊕ e^A ) τ( (1/2)[A^T ⊕ (−A)] ).
The norm restriction is due to tanh(x) having poles with |x| = π/2.
Each of the three expressions (10.17) is of potential interest for computational
purposes, as we will see in Section 10.6.
Note that the matrix AT ⊕ (−A) has eigenvalues λi (A) − λj (A), i, j = 1: n (see
Section B.13), so it is singular with at least n zero eigenvalues. This does not affect
the formulae (10.17) since φ, τ , sinch, and their derivatives are defined from their
power series expansions at the origin.
The relative condition number of the exponential is (see Theorem 3.1)

    κ_exp(A) = ‖L(A)‖ ‖A‖ / ‖e^A‖.

From Theorem 10.13 we obtain explicit expressions for ‖L(A)‖_F.
Proof. The formulae follow immediately from the theorem on using the fact that
k vec(B)k2 = kBkF .
The next lemma gives upper and lower bounds for κexp (A).
Lemma 10.15. For A ∈ C^{n×n} we have, for any subordinate matrix norm,

    ‖A‖ ≤ κ_exp(A) ≤ e^{‖A‖} ‖A‖ / ‖e^A‖.                                 (10.22)

Proof. From (10.15) we have

    ‖L(A, E)‖ ≤ ‖E‖ ∫_0^1 e^{‖A‖(1−s)} e^{‖A‖s} ds = ‖E‖ ∫_0^1 e^{‖A‖} ds = ‖E‖ e^{‖A‖},

so that ‖L(A)‖ ≤ e^{‖A‖}. Also, ‖L(A)‖ ≥ ‖L(A, I)‖ = ‖∫_0^1 e^A ds‖ = ‖e^A‖, and the
result follows.
An important class of matrices that has minimal condition number is the normal
matrices.
Theorem 10.16 (Van Loan). If A ∈ Cn×n is normal then in the 2-norm, κexp (A) =
kAk2 .
Proof. We need a slight variation of the upper bound in (10.22). First, note that
for a normal matrix B, ‖e^B‖_2 = e^{α(B)}. From (10.15) we have

    ‖L(A, E)‖_2 ≤ ‖E‖_2 ∫_0^1 e^{α(A)(1−s)} e^{α(A)s} ds = ‖E‖_2 ∫_0^1 e^{α(A)} ds = e^{α(A)} ‖E‖_2 = ‖e^A‖_2 ‖E‖_2.

Hence ‖L(A)‖_2 ≤ ‖e^A‖_2, which implies the result in view of the lower bound in
(10.22).
A second class of matrices with perfect conditioning comprises nonnegative scalar
multiples of stochastic matrices.
Theorem 10.17 (Melloy and Bennett). Let A ∈ Rn×n be a nonnegative scalar mul-
tiple of a stochastic matrix. Then in the ∞-norm, κexp (A) = kAk∞ .
Proof. From Theorem 10.10 we have keA k∞ = ekAk∞ . Hence the result follows
from (10.22) with the ∞-norm.
For a third class of perfectly conditioned matrices, see Theorem 10.30 below.
Unfortunately, no useful characterization of matrices for which κexp is large is
known; see Problem 10.15.
The next result is useful because it shows that argument reduction A ← A − µI
reduces the condition number if it reduces the norm. Since eA−µI = e−µ eA , argument
reduction is trivial to incorporate in any algorithm.
Theorem 10.18 (Parks). If µ is a scalar such that kA − µIk < kAk then κexp (A −
µI) < κexp (A).
Hence ‖L(A − µI)‖ = e^{−Re µ} ‖L(A)‖. Thus if ‖A − µI‖ < ‖A‖ then

    κ_exp(A − µI) = ‖L(A − µI)‖ ‖A − µI‖ / ‖e^{A−µI}‖ = ‖L(A)‖ ‖A − µI‖ / ‖e^A‖ < κ_exp(A).

10.3. Scaling and Squaring Method
    p_km(x) = Σ_{j=0}^{k} [ (k+m−j)! k! / ( (k+m)! (k−j)! j! ) ] x^j,
    q_km(x) = Σ_{j=0}^{m} [ (k+m−j)! m! / ( (k+m)! (m−j)! j! ) ] (−x)^j.   (10.23)
Note that p_km(x) = q_mk(−x), which reflects the property 1/e^x = e^{−x} of the exponential
function. Later we will exploit the fact that p_mm(x) and q_mm(x) approximate e^{x/2} and
e^{−x/2}, respectively, though they do so much less accurately than r_mm = p_mm/q_mm
approximates e^x. That r_km satisfies the definition of Padé approximant is demonstrated
by the error expression

    e^x − r_km(x) = (−1)^m [ k! m! / ( (k+m)! (k+m+1)! ) ] x^{k+m+1} + O(x^{k+m+2}).   (10.24)
We also have the exact error expression, for A ∈ C^{n×n} [438, , App. A], [501, ],

    e^A − r_km(A) = [ (−1)^m / (k+m)! ] A^{k+m+1} q_km(A)^{−1} ∫_0^1 e^{tA} (1 − t)^k t^m dt.   (10.25)
where we assume that kGk < 1, so that H = log(I + G) is guaranteed to exist. (Here,
log denotes the principal logarithm.) It is easy to show that kHk ≤ − log(1 − kGk)
(see Problem 10.8). Now G is clearly a function of A hence so is H, and therefore H
commutes with A. It follows that
rm (A) = eA eH = eA+H .
Now we replace A by A/2^s, where s is a nonnegative integer, and raise both sides of
this equation to the power 2^s, to obtain

    r_m(A/2^s)^{2^s} = e^{A+E},
where E = 2^s H satisfies

    ‖E‖ ≤ −2^s log(1 − ‖G‖)
and G satisfies (10.26) with A replaced by 2−s A. We summarize our findings in the
following theorem.
where ‖G‖ < 1 and the norm is any consistent matrix norm. Then

    r_m(2^{−s}A)^{2^s} = e^{A+E},
One attraction of this backward error viewpoint is that it takes into account the effect of the squaring phase on the error
in the Padé approximant and, compared with a forward error bound, avoids the need
to consider the conditioning of the problem.
Our task now is to bound the norm of G in (10.27) in terms of k2−s Ak. One way
to proceed is to assume an upper bound on kAk and use the error formula (10.25) to
obtain an explicit bound on G, or at least one that is easy to evaluate. This approach,
which is used by Moler and Van Loan [438, ] and is illustrated in Problem 10.10,
is mathematically elegant but does not yield the best possible bounds and hence does
not lead to the best algorithm. We will use a bound on kGk that makes no a priori
assumption on kAk and is as sharp as possible. The tradeoff is that the bound is hard
to evaluate, but this is a minor inconvenience because the evaluation need only be
done during the design of the algorithm.
Define the function

    ρ(x) = e^{−x} r_m(x) − 1.

In view of the Padé approximation property (10.24), ρ has a power series expansion

    ρ(x) = Σ_{i=2m+1}^{∞} c_i x^i,                                        (10.29)

and this series will converge absolutely for |x| < min{ |t| : q_m(t) = 0 } =: ν_m. Hence

    ‖G‖ = ‖ρ(2^{−s}A)‖ ≤ Σ_{i=2m+1}^{∞} |c_i| θ^i =: f(θ),                (10.30)
where θ := ‖2^{−s}A‖ < ν_m. It is clear that if A is a general matrix and only ‖A‖ is
known then (10.30) provides the smallest possible bound on kGk. The corresponding
bound of Moler and Van Loan [438, , Lem. 4] is easily seen to be less sharp, and
a refined analysis of Dieci and Papini [157, , Sec. 2], which bounds a different
error, is also weaker when adapted to bound kGk.
Combining (10.30) with (10.28) we have

    ‖E‖ / ‖A‖ ≤ −log(1 − f(θ)) / θ.                                        (10.31)
Evaluation of f (θ) in (10.30) would be easy if the coefficients ci were one-signed, for
then we would have f (θ) = |ρ(θ)|. Experimentally, the ci are one-signed for some, but
not all, m. Using MATLAB’s Symbolic Math Toolbox we evaluated f (θ), and hence
the bound (10.31), in 250 decimal digit arithmetic, summing the first 150 terms of
the series, where the ci in (10.29) are obtained symbolically. For m = 1: 21 we used a
zero-finder to determine the largest value of θ, denoted by θm , such that the backward
error bound (10.31) does not exceed u = 2^{−53} ≈ 1.1 × 10^{−16}. The results are shown
to two significant figures in Table 10.2.
The second row of the table shows the values of νm , and we see that θm < νm in
each case, confirming that the bound (10.30) is valid. The inequalities θm < νm also
confirm the important fact that qm (A) is nonsingular for kAk ≤ θm (which is in any
case implicitly enforced by our analysis).
Next we need to determine the cost of evaluating rm (A). Because of the relation
qm (x) = pm (−x) between the numerator and denominator polynomials, an efficient
scheme can be based on explicitly computing the even powers of A, forming pm and
Table 10.2. Maximal values θ_m of ‖2^{−s}A‖ such that the backward error bound (10.31) does not
exceed u = 2^{−53}, values of ν_m = min{ |x| : q_m(x) = 0 }, and upper bound ξ_m for ‖q_m(A)^{−1}‖.
m 1 2 3 4 5 6 7 8 9 10
θm 3.7e-8 5.3e-4 1.5e-2 8.5e-2 2.5e-1 5.4e-1 9.5e-1 1.5e0 2.1e0 2.8e0
νm 2.0e0 3.5e0 4.6e0 6.0e0 7.3e0 8.7e0 9.9e0 1.1e1 1.3e1 1.4e1
ξm 1.0e0 1.0e0 1.0e0 1.0e0 1.1e0 1.3e0 1.6e0 2.1e0 3.0e0 4.3e0
m 11 12 13 14 15 16 17 18 19 20 21
θm 3.6e0 4.5e0 5.4e0 6.3e0 7.3e0 8.4e0 9.4e0 1.1e1 1.2e1 1.3e1 1.4e1
νm 1.5e1 1.7e1 1.8e1 1.9e1 2.1e1 2.2e1 2.3e1 2.5e1 2.6e1 2.7e1 2.8e1
ξm 6.6e0 1.0e1 1.7e1 3.0e1 5.3e1 9.8e1 1.9e2 3.8e2 8.3e2 2.0e3 6.2e3
q_m, and then solving the matrix equation q_m r_m = p_m. If p_m(x) = Σ_{i=0}^{m} b_i x^i, we
have, for the even degree case,
so p2m+1 and q2m+1 = −U + V can be evaluated at exactly the same cost as p2m and
q2m . However, for m ≥ 12 this scheme can be improved upon. For example, we can
write
and q_12(A) = U − V. Thus p_12 and q_12 can be evaluated in just six matrix multiplications (for A^2, A^4, A^6, and three additional multiplications). For m = 13 an
analogous formula holds, with the outer multiplication by A transferred to the U
term. Similar formulae hold for m ≥ 14. Table 10.3 summarizes the number of
matrix multiplications required to evaluate pm and qm , which we denote by πm , for
m = 1: 21.
The information in Tables 10.2 and 10.3 enables us to determine the optimal
algorithm when kAk ≥ θ21 . From Table 10.3, we see that the choice is between
m = 1, 2, 3, 5, 7, 9, 13, 17 and 21 (there is no reason to use m = 6, for example, since
the cost of evaluating the more accurate q7 is the same as the cost of evaluating
q6 ). Increasing from one of these values of m to the next requires an extra matrix
multiplication to evaluate rm , but this is offset by the larger allowed θm = k2−s Ak if
θm jumps by more than a factor 2, since decreasing s by 1 saves one multiplication in
Table 10.3. Number of matrix multiplications, πm , required to evaluate pm (A) and qm (A),
and measure of overall cost Cm in (10.35).
m 1 2 3 4 5 6 7 8 9 10
πm 0 1 2 3 3 4 4 5 5 6
Cm 25 12 8.1 6.6 5.0 4.9 4.1 4.4 3.9 4.5
m 11 12 13 14 15 16 17 18 19 20 21
πm 6 6 6 7 7 7 7 8 8 8 8
Cm 4.2 3.8 3.6 4.3 4.1 3.9 3.8 4.6 4.5 4.3 4.2
the final squaring stage. Table 10.2 therefore shows that m = 13 is the best choice.
Another way to arrive at this conclusion is to observe that the cost of the algorithm in
matrix multiplications is, since s = ⌈log_2(‖A‖/θ_m)⌉ if ‖A‖ ≥ θ_m and s = 0 otherwise,

    π_m + max( ⌈log_2 ‖A‖ − log_2 θ_m⌉, 0 ).

(We ignore the required matrix equation solution, which is common to all m.) We
wish to determine which m minimizes this quantity. For ‖A‖ ≥ θ_m we can remove the
max and ignore the ‖A‖ term, which is essentially a constant shift, so we minimize

    C_m = π_m − log_2 θ_m.                                                 (10.35)
The Cm values are shown in the second line of Table 10.3. Again, m = 13 is optimal.
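As a quick check, the following MATLAB fragment recomputes C_m for the candidate degrees using the two-significant-figure θ_m from Table 10.2 and π_m from Table 10.3; the values reproduce the second line of Table 10.3 and the minimum is indeed at m = 13.

    m     = [3      5      7      9    13 ];
    theta = [1.5e-2 2.5e-1 9.5e-1 2.1  5.4];
    pim   = [2      3      4      5    6  ];
    Cm = pim - log2(theta)       % approx [8.1 5.0 4.1 3.9 3.6]; smallest at m = 13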
We repeated the computations with u = 2^{−24} ≈ 6.0 × 10^{−8}, which is the unit roundoff
in IEEE single precision arithmetic, and u = 2^{−105} ≈ 2.5 × 10^{−32}, which corresponds
to quadruple precision arithmetic; the optimal m are now m = 7 (θ_7 = 3.9) and
m = 17 (θ_17 = 3.3), respectively.
Now we consider the effects of rounding errors on the evaluation of rm (A). We
immediately rule out m = 1 and m = 2 because r1 and r2 can suffer from loss of
significance in floating point arithmetic. For example, r_1 requires ‖A‖ to be of order
10^{−8} after scaling, and then the expression r_1(A) = (I + A/2)(I − A/2)^{−1} loses about
half the significant digits in A in double precision arithmetic; yet if the original A has
norm of order at least 1 then all the significant digits of some of the elements of A
should contribute to the result. Applying Theorem 4.5 to pm (A), where kAk1 ≤ θm ,
and noting that pm has all positive coefficients, we deduce that
Hence the relative error is bounded approximately by γ̃_mn e^{θ_m}, which is a satisfactory
bound given the values of θ_m in Table 10.2. Replacing A by −A in the latter bound
we obtain

    ‖q_m(A) − q̂_m(A)‖_1 ≲ γ̃_mn ‖q_m(A)‖_1 e^{θ_m}.
In summary, the errors in the evaluation of pm and qm are nicely bounded.
To obtain rm we solve a multiple right-hand side linear system with qm (A) as
coefficient matrix, so to be sure that this system is solved accurately we need to check
Table 10.4. Coefficients b(0: m) in the numerator p_m(x) = Σ_{i=0}^{m} b_i x^i of the Padé approximant r_m(x)
to e^x, normalized so that b(m) = 1.
m b(0: m)
3 [120, 60, 12, 1]
5 [30240, 15120, 3360, 420, 30, 1]
7 [17297280, 8648640, 1995840, 277200, 25200, 1512, 56, 1]
9 [17643225600, 8821612800, 2075673600, 302702400, 30270240,
2162160, 110880, 3960, 90, 1]
13 [64764752532480000, 32382376266240000, 7771770303897600,
1187353796428800, 129060195264000, 10559470521600,
670442572800, 33522128640, 1323241920,
40840800, 960960, 16380, 182, 1]
that qm (A) is well conditioned. It is possible to obtain a priori bounds for kqm (A)−1 k
under assumptions such as (for any subordinate matrix norm) kAk ≤ 1/2 [438, ,
Lem. 2], kAk ≤ 1 [606, , Thm. 1], or qm (−kAk) < 2 [157, , Lem. 2.1] (see
Problem 10.9), but these assumptions are not satisfied for all the m and kAk of interest
to us. Therefore we take a similar approach to the way we derived the constants θm .
With ‖A‖ ≤ θ_m and by writing

    q_m(A) = e^{−A/2}( I + (e^{A/2} q_m(A) − I) ) =: e^{−A/2}(I + F),

we have, if ‖F‖ < 1,

    ‖q_m(A)^{−1}‖ ≤ ‖e^{A/2}‖ ‖(I + F)^{−1}‖ ≤ e^{θ_m/2} / (1 − ‖F‖).

We can expand e^{x/2} q_m(x) − 1 = Σ_{i=2}^{∞} d_i x^i, from which ‖F‖ ≤ Σ_{i=2}^{∞} |d_i| θ_m^i follows.
Our overall bound is

    ‖q_m(A)^{−1}‖ ≤ e^{θ_m/2} / ( 1 − Σ_{i=2}^{∞} |d_i| θ_m^i ).
By determining the di symbolically and summing the first 150 terms of the sum in
250 decimal digit arithmetic, we obtained the bounds in the last row of Table 10.2,
which confirm that qm is very well conditioned for m up to about 13 when kAk ≤ θm .
The overall algorithm is as follows. It first checks whether kAk ≤ θm for m ∈
{3, 5, 7, 9, 13} and, if so, evaluates rm for the smallest such m. Otherwise it uses the
scaling and squaring method with m = 13.
Algorithm 10.20 (scaling and squaring algorithm). This algorithm evaluates the ma-
trix exponential X = eA of A ∈ Cn×n using the scaling and squaring method. It uses
the constants θm given in Table 10.2 and the Padé coefficients in Table 10.4. The
algorithm is intended for IEEE double precision arithmetic.
1 for m = [3 5 7 9]
2 if kAk1 ≤ θm
% Form r_m(A) = [m/m] Padé approximant to e^A.
3 Evaluate U and V using (10.33) and solve (−U + V )X = U + V .
4 quit
5 end
6 end
7   A ← A/2^s with s ≥ 0 a minimal integer such that ‖A/2^s‖_1 ≤ θ_13
    (i.e., s = ⌈log_2(‖A‖_1/θ_13)⌉).
8   % Form [13/13] Padé approximant to e^A.
9   A_2 = A^2, A_4 = A_2^2, A_6 = A_2 A_4
10  U = A [ A_6(b_13 A_6 + b_11 A_4 + b_9 A_2) + b_7 A_6 + b_5 A_4 + b_3 A_2 + b_1 I ]
11  V = A_6(b_12 A_6 + b_10 A_4 + b_8 A_2) + b_6 A_6 + b_4 A_4 + b_2 A_2 + b_0 I
12  Solve (−U + V) r_13 = U + V for r_13.
13  X = r_13^{2^s} by repeated squaring.
Cost: π_m + ⌈log_2(‖A‖_1/θ_m)⌉ M + D, where m is the degree of Padé approximant
used and π_m is tabulated in Table 10.3. (M and D are defined at the start of
Chapter 4.)
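As a concrete illustration, the following MATLAB function is a sketch of the m = 13 branch only of Algorithm 10.20, using the two-significant-figure θ_13 from Table 10.2 and the Padé coefficients from Table 10.4; the lower degree branches (lines 1–6) and the further refinements present in expm are omitted, so it is not a substitute for expm.

    function X = expm_ss13(A)
    %EXPM_SS13  Sketch: scaling and squaring with the [13/13] Pade approximant only.
    theta13 = 5.4;            % rounded value of theta_13 from Table 10.2
    b = [64764752532480000, 32382376266240000, 7771770303897600, ...
         1187353796428800, 129060195264000, 10559470521600, ...
         670442572800, 33522128640, 1323241920, 40840800, 960960, ...
         16380, 182, 1];      % b_0,...,b_13 from Table 10.4
    n = size(A,1);
    s = max(ceil(log2(norm(A,1)/theta13)), 0);   % so that ||A/2^s||_1 <= theta_13
    A = A/2^s;
    A2 = A*A; A4 = A2*A2; A6 = A2*A4;            % even powers, as in lines 9-11
    U = A*(A6*(b(14)*A6 + b(12)*A4 + b(10)*A2) + b(8)*A6 + b(6)*A4 + b(4)*A2 + b(2)*eye(n));
    V = A6*(b(13)*A6 + b(11)*A4 + b(9)*A2) + b(7)*A6 + b(5)*A4 + b(3)*A2 + b(1)*eye(n);
    X = (-U + V)\(U + V);                        % r_13(A)
    for k = 1:s                                  % undo the scaling by repeated squaring
        X = X*X;
    end
    end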
It is readily checked that the sequences θ_13^{2k} b_{2k} and θ_13^{2k+1} b_{2k+1} are approximately
monotonically decreasing with k, and hence the ordering given in Algorithm 10.20 for
evaluating U and V takes the terms in approximately increasing order of norm. This
ordering is certainly preferable when A has nonnegative elements, and since there
cannot be much cancellation in the sums it cannot be a bad ordering [276, ,
Chap. 4].
The part of the algorithm most sensitive to rounding errors is the final scaling
phase. The following general result, in which we are thinking of B as the Padé
approximant, shows why.
where

    µ = ‖B‖^2 ‖B^2‖ ‖B^4‖ · · · ‖B^{2^{k−1}}‖ / ‖B^{2^k}‖ ≥ 1.             (10.38)

The ratio µ can be arbitrarily large, because cancellation can cause an intermediate
power B^{2^j} (j < k) to be much larger than the final power B^{2^k}. In other words, the
powers can display the hump phenomenon, illustrated in Figure 10.1 for the matrix

    A = [ −0.97   25
            0    −0.3 ].                                                   (10.39)
We see that while the powers ultimately decay to zero (since ρ(A) = 0.97 < 1),
initially they increase in norm, producing a hump in the plot. The hump can be
arbitrarily high relative to the starting point kAk. Moreover, there may not be a
single hump, and indeed scalloping behaviour is observed for some matrices. Analysis
of the hump for 2 × 2 matrices, and various bounds on matrix powers, can be found
in [276, , Chap. 18] and the references therein.
Another way of viewing this discussion is through the curve keAt k, shown for the
matrix (10.39) in Figure 10.2. This curve, too, can be hump-shaped. Recall that we
are using the relation eA = (eA/σ )σ , and if t = 1/σ falls under a hump but t = 1 is
beyond it, then keA k ≪ keA/σ kσ . The connection between powers and exponentials
is that if we set B = eA/r then the values kB j k are the points on the curve keAt k for
t = 1/r, 2/r, . . ..
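The hump is easy to reproduce; the fragment below is a minimal MATLAB sketch for the matrix (10.39), plotting ‖A^k‖_2 against k as in Figure 10.1.

    A = [-0.97 25; 0 -0.3];
    k = 0:20;
    normAk = arrayfun(@(j) norm(A^j, 2), k);
    plot(k, normAk)                 % the powers rise before decaying, since rho(A) < 1
    xlabel('k'), ylabel('||A^k||_2')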
The bound (10.37) also contains a factor 2^k − 1. If ‖A‖ > θ_13 then 2^k ≈ ‖A‖ and
so the overall relative error bound contains a term of the form µ‖A‖nu. However,
since κ_exp(A) ≥ ‖A‖ (see Lemma 10.15), a factor ‖A‖ is not troubling.
Our conclusion is that the overall effect of rounding errors in the final squaring
stage may be large relative to the computed exponential X̂, and so X̂ may have large
relative error. This may or may not indicate instability of the algorithm, depending
on the conditioning of the eA problem at the matrix A. Since little is known about
the size of the condition number κexp for nonnormal A, no clear general conclusions
can be drawn about the stability of the algorithm (see Problem 10.16).
In the special case where A is normal the scaling and squaring method is guaran-
teed to be forward stable. For normal matrices there is no hump because ‖A^k‖_2 =
‖A‖_2^k, so µ = 1 in (10.38), and 2^k ≈ ‖A‖_2 = κ_exp(A) by Theorem 10.16. Therefore the
squaring phase is innocuous and the error in the computed exponential is consistent
with the conditioning of the problem. Another case in which the scaling and squaring
method is forward stable is when a_ij ≥ 0 for i ≠ j, as shown by Arioli, Codenotti, and
Fassino [17, ]. The reason is that the exponential of such a matrix is nonnegative
(see Section 10.7.2) and multiplying nonnegative matrices is a stable procedure since
there can be no cancellation.
Finally, we note that the scaling and squaring method has a weakness when applied
to block triangular matrices. Suppose A = [A_11  A_12; 0  A_22]. Then

    e^A = [ e^{A_11}    ∫_0^1 e^{A_11(1−s)} A_12 e^{A_22 s} ds
              0         e^{A_22}                              ]            (10.40)
(see Problem 10.12). The linear dependence of the (1,2) block of eA on A12 sug-
gests that the accuracy of the corresponding block of a Padé approximant should
not unduly be affected by kA12 k and hence that in the scaling and squaring method
only the norms of A11 and A22 should influence the amount of scaling (specified by
s in Algorithm 10.20). But since s depends on the norm of A as a whole, when
kA12 k ≫ max(kA11 k, kA22 k) the diagonal blocks are overscaled with regard to the
computation of eA11 and eA22 , and this may have a harmful effect on the accuracy
of the computed exponential (cf. the discussion on page 245 about rejecting degrees
m = 1, 2). In fact, the block triangular case merits special treatment. If the spectra of
A11 and A22 are well separated then it is best to compute eA11 and eA22 individually
and obtain F12 from the block Parlett recurrence by solving a Sylvester equation of
the form (9.7). In general, analysis of Dieci and Papini [157, ] suggests that if
the scaling and squaring method is used with s determined so that 2^{−s}‖A_11‖ and
2^{−s}‖A_22‖ are appropriately bounded, without consideration of ‖A_12‖, then an accurate approximation to e^A will still be obtained.

Figure 10.1. ‖A^k‖_2 against k for the matrix (10.39).

Figure 10.2. ‖e^{At}‖_2 against t for the matrix (10.39).
where c_i = f[λ_1, λ_2, . . . , λ_i] with λ_i ≡ t_ii and f(t) = e^t. The cost of evaluating p(T_ii)
is O(m^4) flops.
For general functions f (or a set of given function values) divided differences are
computed using the standard recurrence (B.24). However, the recurrence can produce
inaccurate results in floating point arithmetic [276, , Sec. 5.3]. This can be seen
from the first order divided difference f [λk , λk+1 ] = (f (λk+1 ) − f (λk ))/(λk+1 − λk )
(λk 6= λk+1 ), in which for λk close to λk+1 the subtraction in the numerator will
suffer cancellation and the resulting error will be magnified by a small denominator.
Obtaining accurate divided differences is important because the matrix product terms
in (10.41) can vary greatly in norm. For a particular f that is given in functional
form rather than simply as function values f (λi ), we would hope to be able to obtain
the divided differences more accurately by exploiting properties of f . The next result
offers one way to do this because it shows that evaluating f at a certain bidiagonal
matrix yields the divided differences in the first row.
Theorem 10.22 (Opitz). The divided difference f [λ1 , λ2 , . . . , λm ] is the (1, m) ele-
ment of f (Z), where
        [ λ_1   1                ]
        [      λ_2   1           ]
    Z = [            ·    ·      ]
        [                 ·   1  ]
        [                    λ_m ],

the upper bidiagonal matrix with λ_1, . . . , λ_m on the diagonal and ones on the superdiagonal.
For m = 2 the theorem is just (4.16), while for λ1 = λ2 = · · · = λm it reproduces
the Jordan block formula (1.4), in view of (B.27).
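A quick MATLAB illustration of Theorem 10.22 for f = exp (so that f(Z) can be obtained with expm), using illustrative, closely spaced points:

    lam = [0.1, 0.1001, 0.1002];             % closely spaced interpolation points
    m = length(lam);
    Z = diag(lam) + diag(ones(m-1,1), 1);    % lam on the diagonal, ones above it
    F = expm(Z);
    dd = F(1, m)                             % divided difference exp[lam_1,...,lam_m]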
McCurdy, Ng, and Parlett [416, ] investigate in detail the accurate evaluation
of the divided differences of the exponential. They derive a hybrid algorithm that
uses the standard recurrence when it is safe to do so (i.e., when the denominator is
not too small) and otherwise uses the Taylor series of the exponential in conjunction
with Theorem 10.22 in a sophisticated way that employs scaling and squaring. They
also discuss the use of matrix argument reduction, which has some benefits for their
algorithm since it can reduce the size of the imaginary parts of the eigenvalues. Note
that if Jk (λ) is a Jordan block then exp(Jk (λ)) = exp(Jk (λ − 2πij)) for any integer
j (since in (1.4) the values of et and its derivatives are unchanged by the shift λ →
λ − 2πij). Hence for an arbitrary A, each eigenvalue can be shifted by an integer
multiple of 2πi so that its imaginary part is of modulus at most π without changing
eA . If A is in triangular form then a technique developed by Ng [448, ] based on
the Parlett recurrence can be used to carry out matrix argument reduction. Li [384,
] shows how when A is real (10.41) can be evaluated in mainly real arithmetic,
with complex arithmetic confined to the computation of the divided differences.
Unfortunately, no precise statement of an overall algorithm for eA is contained in
the above references and thorough numerical tests are lacking therein.
The evaluation of the exponential of the atomic diagonal blocks by Algorithm 10.20
brings the greater efficiency of Padé approximation compared with Taylor approxi-
mation to the exponential and also exploits scaling and squaring (though the latter
could of course be used in conjunction with a Taylor series).
Compared with Algorithm 10.20 applied to the whole (triangular) matrix, the
likelihood of overscaling is reduced because the algorithm is being applied only to the
(usually small-dimensioned) diagonal blocks (and not to the 1 × 1 or 2 × 2 diagonal
blocks).
Algorithm 10.23 is our preferred alternative to Algorithm 10.20. It has the ad-
vantage over the methods of the previous two subsections of simplicity and greater
efficiency, and its potential instabilities are more clear.
2. The modified version funm mod of MATLAB’s funm tested in Section 9.4, which
exponentiates the diagonal blocks in the Schur form using Taylor series.
Figure 10.3 shows the normwise relative errors ‖X̂ − e^A‖_F/‖e^A‖_F of the computed
X̂. Figure 10.4 presents the same data in the form of a performance profile: for a
given α on the x-axis, the y coordinate of the corresponding point on the curve is the
probability that the method in question has an error within a factor α of the smallest
error over all the methods on the given test set. Both plots are needed to understand
the results: the performance profile reveals the typical performance, while Figure 10.3
highlights the extreme cases. For more on performance profiles see Dolan and Moré
[161, ] and Higham and Higham [263, , Sec. 22.4].
Several observations can be made.
(Each performance profile curve eventually reaches p = 1 for a method if the x-axis is extended far enough to the right, but we have limited to x ∈ [1, 15] for readability.)
Figure 10.3. Normwise relative errors for MATLAB’s funm, expm, expmdemo1, and funm mod;
the solid line is κexp (A)u.
• expmdemo1 is the least reliable of the four codes. This is mainly due to its
suboptimal choice of m and s; expm usually takes a larger s and hence requires
fewer squarings.
Hence the Fréchet derivative L(A, E) can be obtained by applying any existing method
for the exponential to the above block upper triangular 2n×2n matrix and reading off
the (1,2) block. Of course it may be possible to take advantage of the block triangular
form in the method. This approach has the major advantage of simplicity but is likely
to be too expensive if n is large.
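A minimal MATLAB sketch of this approach for f = exp, assuming the 2n × 2n block matrix in question is [A E; 0 A] (whose exponential has e^A in the diagonal blocks and L(A, E) in the (1,2) block); the matrices are illustrative.

    A = [1 2; 3 4];  E = [0 1; 1 0];
    n = size(A, 1);
    X = expm([A, E; zeros(n), A]);
    L = X(1:n, n+1:2*n)                      % Frechet derivative L_exp(A, E)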
It is interesting to note a duality between the matrix exponential and its Fréchet
derivative. Either one can be used to compute the other: compare (10.43) with
Section 10.4.2.
Some other techniques for evaluating the Fréchet derivative exploit scaling and
squaring. Applying the chain rule (Theorem 3.4) to the identity e^A = (e^{A/2})^2 gives

    L_exp(A, E) = e^{A/2} L_exp(A/2, E/2) + L_exp(A/2, E/2) e^{A/2}.       (10.44)
Figure 10.4. Performance profile (p against α) for expm, funm_mod, funm, and expmdemo1 on the data of Figure 10.3.
This relation yields the following recurrence for computing L_0 = L_exp(A, E):

    L_s = L_exp(2^{−s}A, 2^{−s}E),
    L_{i−1} = e^{2^{−i}A} L_i + L_i e^{2^{−i}A},    i = s: −1: 1.          (10.45)
The advantage of the recurrence is that it reduces the problem of approximating the
Fréchet derivative for A to that of approximating the Fréchet derivative for 2−s A, and
s can be chosen to make k2−s Ak small enough that the latter problem can be solved
accurately and efficiently. In the recurrence the relation e^{2^{−i}A} = ( e^{2^{−(i+1)}A} )^2 can be
exploited to save computational effort, although repeated squaring could worsen the
effects of rounding error. Another attraction of the recurrence is that it is possible to
intertwine the computation of L(A, E) with that of eA by the scaling and squaring
method and thereby compute both matrices with less effort than is required to com-
pute each separately; however, the requirements on s for the approximation of e^{2^{−s}A}
and L(2^{−s}A, 2^{−s}E) may differ, as we will see below. We now describe two approaches based
on the above recurrence.
10.6.1. Quadrature
One way to approximate L(A, E) is to apply quadrature to the integral representation
(10.15). For example, we can apply the m-times repeated trapezium rule
    ∫_0^1 f(t) dt ≈ (1/m)( (1/2) f_0 + f_1 + f_2 + · · · + f_{m−1} + (1/2) f_m ),    f_i := f(i/m)
to obtain

    L(A, E) ≈ (1/m)( (1/2) e^A E + Σ_{i=1}^{m−1} e^{A(1−i/m)} E e^{Ai/m} + (1/2) E e^A ).
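A direct MATLAB sketch of this repeated trapezium rule approximation, with an illustrative number of panels m and illustrative A and E:

    A = [1 2; 0 3];  E = [0 1; 1 0];
    m = 16;
    L = 0.5*(expm(A)*E + E*expm(A));
    for i = 1:m-1
        L = L + expm(A*(1 - i/m))*E*expm(A*(i/m));
    end
    L = L/m;                                 % approximation to L_exp(A, E)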
Proof. We have

    R_1(A/2, E/2) = (1/2) Σ_{i=1}^{p} w_i e^{A(1−t_i)/2} E e^{At_i/2}.

Now

    e^{A/2} R_1(A/2, E/2) + R_1(A/2, E/2) e^{A/2}
        = (1/2) Σ_{i=1}^{p} w_i ( e^{A(1−t_i/2)} E e^{At_i/2} + e^{A(1−t_i)/2} E e^{A(1+t_i)/2} )
        = R_2(A, E),                                                       (10.46)

since R_2(f) = (1/2) Σ_{i=1}^{p} w_i ( f(t_i/2) + f(1/2 + t_i/2) ). Repeated use of the relation (10.46)
yields the result.
Note that the approximation Q_0 can be derived by using L_exp ≈ R_1 in (10.44). The
extra information in Theorem 10.24 is that the resulting approximation is precisely R_{2^s}.

Two key questions are how large the error R_{2^s}(A, E) − L(A, E) is and which
quadrature rule to choose. The next result helps in these regards. We define µ̃(A) =
max( 0, (1/2)(µ(A) + µ(−A)) ), where µ is the logarithmic norm (10.11), and note that
both µ̃(A) and µ(A) are bounded above by ‖A‖_2.
Theorem 10.25 (error bounds). Assume that c := ‖A‖_2^2 e^{µ̃(A)/m} / (6m^2) < 1. Then
for the repeated trapezium rule R_{T,m}, the repeated Simpson rule R_{S,m}, and any unitarily invariant norm,

    ‖L(A, E) − R_{T,m}(A, E)‖ ≤ ( 2c / (1 − c) ) ‖L(A)‖ ‖E‖,               (10.47)

    ‖L(A, E) − R_{S,m}(A, E)‖ ≤ ( e^{µ̃(A)/m} / (180(1 − c)) ) ‖m^{−1}A‖_2^4 ‖L(A)‖ ‖E‖.   (10.48)
where τ(x) = tanh(x)/x and ‖(1/2)[A^T ⊕ (−A)]‖ < π/2 is assumed. Using (B.16), we have

    L(A, E) = (1/2)(Y e^A + e^A Y),    vec(Y) = τ( (1/2)[A^T ⊕ (−A)] ) vec(E).

Note that (1/2)[A^T ⊕ (−A)] vec(E) = (1/2)(A^T ⊗ I − I ⊗ A) vec(E) = (1/2) vec(EA − AE). Hence
if r(x) is a rational approximation to τ(x) in which both numerator and denominator
are factored into linear factors then r( (1/2)[A^T ⊕ (−A)] ) vec(E) can be evaluated at the
cost of solving a sequence of Sylvester equations containing “Sylvester products” on
the right-hand side. To be specific, let

    τ(x) ≈ r_m(x) = Π_{i=1}^{m} (x/β_i − 1)^{−1} (x/α_i − 1)               (10.50)
is not too large. For m = 8 and the 1-norm this bound is at most 55.2 if kAkp ≤ 1,
which is quite acceptable.
The ideas above are collected in the following algorithm.
αj βj
±3.1416e0i ±1.5708e0i
±6.2900e0i ±4.7125e0i
±1.0281e1i ±7.9752e0i
±2.8894e1i ±1.4823e1i
Cost: (18 + 2s)M and s matrix exponentials (or 1 exponential and (17 + 3s)M if
repeated squaring is used at line 9), and the solution of 8 Sylvester equations.
In practice we would use an initial Schur decomposition of A to reduce the cost,
making use of Problem 3.2; this is omitted from the algorithm statement to avoid
clutter.
Similar algorithms could be developed for the other two formulae in (10.17).
choice of E. For large n, this is prohibitively expensive and the condition number must be estimated rather than exactly computed. One possibility is to use
the power method, Algorithm 3.20, in conjunction with Algorithm 10.26 or Algorithm 10.27. A good starting matrix is Z_0 = XX^* + X^*X, where X = e^A, the
reason being that X = L(A, I) and the adjoint step of the power method gives

    L^⋆(A, X) = L(A^*, X) = ∫_0^1 e^{A^*(1−s)} e^A e^{A^*s} ds ≈ (XX^* + X^*X)/2,

using a trapezium rule approximation to the integral.
10.7. Miscellany
In this section we discuss some miscellaneous methods, techniques, and special classes
of matrices, as well as a variation of the eA problem.
Theorem 10.29. Let A ∈ Rn×n . Then eAt ≥ 0 for all t ≥ 0 if and only if A is
essentially nonnegative.
Proof. Suppose eAt ≥ 0 for all t ≥ 0. Taking t ≥ 0 sufficiently small in the
expansion eAt = I + At + O(t2 ) shows that the off-diagonal elements of A must be
nonnegative.
Conversely, suppose A is essentially nonnegative. We can write A = D + N , where
D = diag(A) and N ≥ 0, which can be rewritten as
    A = θI + (D − θI + N) =: θI + B,    θ = min_i a_ii.                    (10.52)
Then B ≥ 0 and eAt = eθt eBt ≥ 0 for t ≥ 0, since the exponential of a matrix with
nonnegative elements is clearly nonnegative.
The formula eA = eθ eB with θ, B as in (10.52) is used in the uniformization
method in Markov chain analysis [523, ], [524, , Sec. 2], [540, , Chap. 8],
in which eB v (v ≥ 0) is approximated using a truncated Taylor series expansion of
eB . The Taylor series for eB contains only nonnegative terms, whereas that for eA
contains terms of both sign and so can suffer from cancellation. Note that in the case
of A being an intensity matrix the shift θ is optimal in the sense of Corollary 4.22.
Recall from Section 2.3 that an intensity matrix is an essentially nonnegative
matrix with zero row sums, and that the exponential of such a matrix is stochastic
and so has unit ∞-norm. An intensity matrix is perfectly conditioned for the matrix
exponential.
Theorem 10.30. If A ∈ Rn×n is an intensity matrix then in the ∞-norm, κexp (A) =
kAk∞ .
Proof. For s ∈ [0, 1], sA is also an intensity matrix. Hence on taking ∞-norms
in (10.15) we obtain
    ‖L(A, E)‖_∞ ≤ ∫_0^1 ‖e^{A(1−s)}‖_∞ ‖E‖_∞ ‖e^{As}‖_∞ ds = ∫_0^1 ‖E‖_∞ ds = ‖E‖_∞ = ‖e^A‖_∞ ‖E‖_∞.
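A quick MATLAB check with an illustrative intensity matrix (zero row sums, nonnegative off-diagonal entries) confirms that its exponential is stochastic and hence has unit ∞-norm:

    Q = [-0.3 0.2 0.1; 0.4 -0.9 0.5; 0.0 0.6 -0.6];   % illustrative intensity matrix
    P = expm(Q);
    norm(P, inf)          % = 1: P >= 0 and its rows sum to 1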
10.7.3. Preprocessing
Balancing, and argument reduction via a single shift A ← A − µI, can be used as
preprocessing steps in any of the algorithms described in this chapter. Theorem 10.18
in conjunction with Theorem 4.21 (a) suggests that µ = n−1 trace(A) is an excellent
choice of shift. However, this shift is not always appropriate. If A has one or more
eigenvalues with large negative real part then µ can be large and negative and A − µI
can have an eigenvalue with large positive real part, causing eA−µI to overflow.
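A minimal MATLAB sketch of this trace shift, exploiting e^A = e^{µ} e^{A−µI} (illustrative matrix only):

    A = [10 1; 0 12];
    n = size(A,1);
    mu = trace(A)/n;
    X = exp(mu)*expm(A - mu*eye(n));   % equals expm(A) up to rounding
    norm(X - expm(A), 1)/norm(expm(A), 1)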
Preprocessing is potentially particularly helpful to Algorithm 10.20 (the scaling
and squaring method), since it can reduce the norm and hence directly reduce the cost
of the method. Our own numerical experience suggests that preprocessing typically
has little effect on the cost or accuracy of Algorithm 10.20, and since it could even
worsen the accuracy we do not recommend its automatic use.
10.7.4. The ψ Functions

    ψ_0(z) = e^z,    ψ_1(z) = (e^z − 1)/z,    ψ_2(z) = (e^z − 1 − z)/z^2,    ψ_3(z) = (e^z − 1 − z − z^2/2)/z^3,    . . . .
As explained in Section 2.1, these functions play a fundamental role in exponential
integrators for ordinary differential equations, and we saw earlier in this chapter that
ψ1 arises in one of the representations (10.17) for the Fréchet derivative of the matrix
exponential. An integral representation is
    ψ_k(z) = (1/(k − 1)!) ∫_0^1 e^{(1−t)z} t^{k−1} dt,                     (10.53)

and the ψ_k satisfy the recurrence

    ψ_k(z) = z ψ_{k+1}(z) + 1/k!.                                          (10.54)
From the last two equations it follows that ψk+1 is the divided difference
Even for scalar arguments it is a nontrivial task to evaluate the ψk accurately. For
example, ψ1 (z) suffers badly from cancellation if evaluated as (ez − 1)/z for |z| ≪ 1,
though this formula can be reworked in a nonobvious way to avoid the cancellation
[276, , Sec. 1.14.1] (or the function expm1(x) := ex − 1, available as expm1 in
MATLAB, can be invoked [562, ]).
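A small illustration (an assumed example, not from the text) of the cancellation in ψ_1 for small |z|, and its cure via expm1:

    z = 1e-12;
    psi1_naive = (exp(z) - 1)/z     % cancellation: only a few correct digits
    psi1_expm1 = expm1(z)/z         % accurate: expm1 computes e^z - 1 without cancellation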
Padé approximants to the ψ functions are known explicitly. The following theorem
includes (10.23) and (10.24) (with k = m therein) as special cases.
Theorem 10.31. The diagonal [m/m] Padé approximant r_m(z) = p_m(z)/q_m(z) to
ψ_k(z) is given by

    p_m(z) = ( m!/(2m + k)! ) Σ_{i=0}^{m} [ Σ_{j=0}^{i} (2m + k − j)! (−1)^j / ( j! (m − j)! (k + i − j)! ) ] z^i,

    q_m(z) = ( m!/(2m + k)! ) Σ_{i=0}^{m} [ (2m + k − i)! / ( i! (m − i)! ) ] (−z)^i,

and

    ψ_k(z) − r_m(z) = ( (−1)^m m! (m + k)! / ( (2m + k)! (2m + k + 1)! ) ) z^{2m+1} + O(z^{2m+2}).
Proof. See Skaflestad and Wright [527, ]. Magnus and Wynn [401, ] had
earlier obtained the [k/m] Padé approximants to ψ0 .
With the aid of these Padé approximants, evaluation of ψ_j, j = 0: k, can be done
via an analogue of the scaling and squaring method for the exponential. The squaring
phase makes use (with α = β = 1) of the identity [527, ], for α, β ∈ R,

    ψ_k( (α + β)z ) = ( 1/(α + β)^k ) ( β^k ψ_0(αz) ψ_k(βz) + Σ_{j=1}^{k} ( α^j β^{k−j} / (k − j)! ) ψ_j(αz) ).   (10.55)
and Maradudin [612, ]. The domain of convergence of the formula is discussed by
Blanes and Casas [74, ]. A simple way to compute the coefficients in the formula,
involving a matrix log(eF eG ) where F and G are certain strictly upper bidiagonal
matrices, is given by Reinsch [486, ].
For more on the Zassenhaus formula (10.6) see Magnus [403, ] or Wilcox [615,
], and see Scholz and Weyrauch [506, ] for a simple way to compute the
coefficients Ci .
The Strang splitting is due to Strang [544, ]. For a survey of splitting methods
see McLachlan and Quispel [417, ].
Theorem 10.6 is from Suzuki [552, ]. The formula (10.8) is widely used in
physics, where it is known as the Suzuki–Trotter formula. For an example of the
use of the formula see Bai, Chen, Scalettar, and Yamazaki [26, ]. The Trotter
product formula in Corollary 10.7 is from Trotter [575, ].
Other bounds on keA k can be found in Van Loan [594, ] and Kågström [322,
]. For various bounds on supt≥0 ketA k see Trefethen and Embree [573, ].
The representation (10.15) for the Fréchet derivative appears in Karplus and
Schwinger [335, ], and Najfeld and Havel [445, ] state that this is the earliest
appearance.
Theorem 10.13 is a mixture of relations from Kenney and Laub [348, ] and
Najfeld and Havel [445, ]. It is possible to “unvec” the relations (10.17), but the
resulting expressions (see [580, ] or [241, 2003, Thm. 3.5] for (10.17a)) are less
useful for computational purposes.
Theorem 10.16 is from Van Loan [594, ]. Theorem 10.17 is due to Melloy
and Bennett [422, ], whose paper seems to have hitherto been overlooked. The-
orem 10.18 is from Parks [458, ].
For more on the theory and computation of the Fréchet derivative (and higher
derivatives) of the exponential see Najfeld and Havel [445, ].
The formulae (10.23) and (10.24) are due to Padé, and proofs are given by Gautschi
[207, , Thm. 5.5.1]. A proof of the “ρ(rmm (A)) < 1” property can be found in
[207, , p. 308].
It is known that the poles of the [k/m] Padé approximant rkm (x) = pkm (x)/qkm (x),
that is, the zeros of qkm , are distinct, though they are complex; see Zakian [621, ,
Lem. 4.1]. Thus rkm can be expressed in linear partial fraction form, as is done by
Gallopoulos and Saad [200, ], for example.
An early reference to the scaling and squaring method is Lawson [376, ], and
Ward [606, ] attributes the method to Lawson.
The evaluation schemes (10.32) and (10.33) were suggested by Van Loan [593,
].
Algorithm 10.20 and the supporting analysis are due to Higham [278, ]. Sec-
tion 10.3 is based on [278].
The analysis of Moler and Van Loan [438, , Sec. 3, App. A] proceeds as in
Problem 10.10 and makes the assumption that k2−s Ak ≤ 1/2. The backward error
bound is used to determine a suitable Padé degree m. Based on this analysis, in
MATLAB 7 (R14SP3) and earlier the expm function used an implementation of the
scaling and squaring method with m = 6 and k2−s Ak∞ ≤ 1/2 as the scaling criterion
(and Sidje [522, ] uses the same parameters in his function padm). Version 7.2
(R2006a) of MATLAB introduced a new expm that implements Algorithm 10.20. This
new version of expm is more efficient than the old, as the experiment of Section 10.5
indicates, and typically more accurate, as a side effect of the reduced scaling and
264 Matrix Exponential
hence fewer squarings. Mathematica’s MatrixExp function has used Algorithm 10.20
for matrices of machine numbers since Version 5.1.
Ward [606, ] uses m = 8 and k2−s Ak1 ≤ 1 in his implementation of the scaling
and squaring method and preprocesses with argument reduction and balancing, as
discussed in Section 10.7.3. He gives a rounding error analysis that leads to an
a posteriori forward error bound. This analysis is similar to that given here, but for
the squaring phase it uses a recurrence based on computed quantities in place of the
a priori bound of Theorem 10.21.
Najfeld and Havel [445, ] suggest a variation of the scaling and squaring
method that uses Padé approximations to the function
from which the exponential can be recovered via e2x = (τ (x) + x)/(τ (x) − x). The
continued fraction expansion provides an easy way of generating the Padé approxi-
mants to τ . Higham [278, ] shows that the algorithm suggested by Najfeld and
Havel is essentially a variation of the standard scaling and squaring method with di-
rect Padé approximation, but with weaker guarantees concerning its behaviour both
in exact arithmetic (because a backward error result is lacking) and in floating point
arithmetic (because a possibly ill conditioned matrix must be inverted).
The overscaling phenomenon discussed at the end of Section 10.3 was first pointed
out by Kenney and Laub [348, ] and Dieci and Papini [157, ].
Equation (10.40) shows that the (1,2) block of the exponential of a block triangular
matrix can be expressed as an integral. Van Loan [595, ] uses this observation in
the reverse direction to show how certain integrals involving the matrix exponential
can be obtained by evaluating the exponential of an appropriate block triangular
matrix.
A conditioning analysis of the exponential taking account of the structure of block
2 × 2 triangular matrices is given by Dieci and Papini [158, ].
Theorem 10.22 is due to Opitz [452, ] and was rediscovered by McCurdy: see
[416, ]. The result is also proved and discussed by de Boor [143, ].
The use of quadrature for evaluating L(A, E) was investigated by Kenney and
Laub [340, ], who concentrate on the repeated trapezium rule, for which they
obtain the recurrence (10.45) and a weaker error bound than (10.47).
Section 10.6.2 is based on Kenney and Laub [348, ], wherein the focus is not on
computing the Fréchet derivative of the exponential per se, but on using the Fréchet
derivative within the Schur–Fréchet algorithm (see Section 10.4.2). The continued
fraction expansion (10.51) can be found in Baker [38, , p. 65]. The starting
matrix Z0 = XX ∗ + X ∗ X for the power method mentioned in Section 10.6.3 was
suggested by Kenney and Laub [340, ].
Theorem 10.29 can be found in Bellman [51, , p. 176] and in Varga [600, ,
Sec. 8.2]. Theorem 10.30 is new.
While MATLAB, Maple, and Mathematica all have functions to compute the
matrix exponential, the only freely available software we are aware of is Sidje’s Expokit
package of MATLAB and Fortran codes [522, ].
Problems
10.1. Prove the identity (10.13).
10.2. Give a proof of AB = BA ⇒ e(A+B)t = eAt eBt for all t (the “if” part of
Theorem 10.2) that does not use the power series for the exponential.
10.3. Show that for any A, B ∈ C^{n×n},

    ‖e^A − e^B‖ ≤ ‖A − B‖ e^{max(‖A‖, ‖B‖)}.
10.4. For A, B ∈ C^{n×n} show that det(e^{A+B}) = det(e^A e^B) (even though e^{A+B} ≠ e^A e^B in general).
10.5. Show that for A ∈ C^{n×n} and any subordinate matrix norm, κ_exp(A) ≤ e^{2‖A‖} ‖A‖.
10.6. Let A ∈ Cn×n be normal. Theorem 10.16 says that the absolute condition
number condabs (exp, X) = keA k2 , but Corollary 3.16 says that condabs (exp, A) =
maxλ,µ∈Λ(A) |f [λ, µ]|, where f = exp. Reconcile these two formulae.
10.7. Determine when the Fréchet derivative L(A) of the exponential is nonsingular.
10.8. Show that if ‖G‖ < 1 then

    ‖log(I + G)‖ ≤ −log(1 − ‖G‖) ≤ ‖G‖ / (1 − ‖G‖).                        (10.58)

The norm here is any subordinate matrix norm. Show further that if ‖A‖ < log 4 ≈
1.39 then ‖q_m(A)^{−1}‖ ≤ 1/(2 − e^{‖A‖/2}), and hence that ‖q_m(A)^{−1}‖ ≤ 3 for ‖A‖ ≤ 1.
10.10. Let r_m = p_m/q_m be the [m/m] Padé approximant to the exponential. Show
that G = e^{−A} r_m(A) − I satisfies

    ‖G‖ ≤ ( q_m(‖A‖) / (2 − q_m(−‖A‖)) ) ( e^{‖A‖} − r_m(‖A‖) )            (10.59)

for any subordinate matrix norm, assuming that ‖G‖ < 1 and q_m(−‖A‖) < 2.
10.11. Prove Theorem 10.21, using the basic result that for the 1-, ∞-, and Frobenius
norms [276, , Sec. 3.5], ‖AB − fl(AB)‖ ≤ γ_n ‖A‖ ‖B‖.
10.12. Derive the formula (10.40) for the exponential of a block 2×2 block triangular
matrix.
10.13. Prove Rodrigues’ formula [407, , p. 291] for the exponential of a skew-symmetric matrix A ∈ R^{3×3}:

    e^A = I + (sin θ / θ) A + ( (1 − cos θ) / θ^2 ) A^2,                   (10.60)

where θ = ( ‖A‖_F^2 / 2 )^{1/2}. (For generalizations of this formula to higher dimensions see
Gallier and Xu [199, ].)
10.14. Given that eA is the matrix
cos t 0 0 0 0 0 0 − sin t
0 cos t 0 0 0 0 − sin t 0
0 0 cos t 0 0 − sin t 0 0
0 0 0 cos t − sin t 0 0 0
,
0 0 0 sin t cos t 0 0 0
0 0 sin t 0 0 cos t 0 0
0 sin t 0 0 0 0 cos t 0
sin t 0 0 0 0 0 0 cos t
what is A?
10.15. (Research problem) Obtain conditions that are necessary or sufficient—
and ideally both—for κexp (A) to be large.
10.16. (Research problem) Prove the stability or otherwise of the scaling and
squaring algorithm, by relating rounding errors in the squaring phase to the conditioning of the e^A problem.
The [error of the ] best rational approximation to e−x on [0, +∞) exhibits
geometric convergence to zero.
It is this geometric convergence which has fascinated so many researchers.
— A. J. CARPENTER, A. RUTTAN, and R. S. VARGA, Extended Numerical Computations
on the “ 1/9” Conjecture in Rational Approximation Theory (1984)
In this survey we try to describe all the methods that appear to be practical,
classify them into five broad categories,
and assess their relative effectiveness.
— CLEVE B. MOLER and CHARLES F. VAN LOAN,
Nineteen Dubious Ways to Compute the Exponential of a Matrix (1978)
Chapter 11

Matrix Logarithm

Proof. It suffices to prove the result for diagonalizable A, by Theorem 1.20, and
hence it suffices to show that log x = ∫_0^1 (x − 1) [ t(x − 1) + 1 ]^{−1} dt for x ∈ C lying off
R^−; this latter equality is immediate.
Now we turn to useful identities satisfied by the logarithm. Because of the multivalued nature of the logarithm it is not generally the case that $\log(e^A) = A$, though a sufficient condition for the equality is derived in Problem 1.39. To understand the scalar case of this equality we use the unwinding number of $z \in \mathbb{C}$ defined by
$$\mathcal{U}(z) = \frac{z - \log(e^z)}{2\pi i}. \qquad (11.2)$$
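In floating point code the unwinding number is conveniently evaluated directly from the imaginary part; a one-line MATLAB sketch, assuming the principal branch $\operatorname{Im}\log z \in (-\pi, \pi]$:

    % Unwinding number (11.2): U(z) = ceil((Im z - pi)/(2*pi)).
    unwind = @(z) ceil((imag(z) - pi)/(2*pi));
    unwind(3*pi*1i)        % returns 1, since log(exp(3*pi*1i)) = pi*1i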
Proof. Consider first the scalar case. By definition, $a^\alpha = e^{\alpha\log a}$. Hence, using (11.3), $\log a^\alpha = \log(e^{\alpha\log a}) = \alpha\log a - 2\pi i\,\mathcal{U}(\alpha\log a)$. But $\mathcal{U}(\alpha\log a) = 0$ for $a \notin \mathbb{R}^-$ and $\alpha \in [-1,1]$. Thus the identity holds for scalar $A$ and follows for general $A$ by Theorem 1.20. (For the last part, cf. Problem 1.34.)
We now turn to the question of when $\log(BC) = \log(B) + \log(C)$. First, consider the scalar case. Even here, the equality may fail. Let $b = c = e^{(\pi-\epsilon)i}$ for $\epsilon$ small and positive. Then
$$\log(bc) = \log\bigl(e^{(2\pi-2\epsilon)i}\bigr) = \log\bigl(e^{-2\epsilon i}\bigr) = -2\epsilon i \ne 2(\pi-\epsilon)i = \log b + \log c. \qquad (11.4)$$
Note that since $\operatorname{Im}\log z \in (-\pi, \pi]$, we have $\mathcal{U}(\log z_1 + \log z_2) \in \{-1, 0, 1\}$. Now $\mathcal{U}(\log z_1 + \log z_2) = 0$ precisely when $|\arg z_1 + \arg z_2| < \pi$, so this is the condition for $\log(z_1 z_2) = \log z_1 + \log z_2$ to hold. As a check, the inequality in (11.4) is confirmed by the fact that $\mathcal{U}(\log b + \log c) = \mathcal{U}(2(\pi-\epsilon)i) = 1$.
The next theorem generalizes this result from scalars to matrices. Recall from
Corollary 1.41 that if X and Y commute then for each eigenvalue λj of X there is an
eigenvalue µj of Y such that λj + µj is an eigenvalue of X + Y . We will call µj the
eigenvalue corresponding to λj .
Proof. Note first that log(B), log(C), and log(BC) are all defined—by (11.6) in
the latter case. The matrices log(B) and log(C) commute, since B and C do. By
Theorem 10.2 we therefore have
Thus log(B) + log(C) is some logarithm of BC. The imaginary part of the jth
eigenvalue of log(B) + log(C) is
Theorem 11.4 (Cheng, Higham, Kenney, and Laub). Let B, C ∈ Cn×n . Suppose
that A = BC has no eigenvalues on R− and
(a) BC = CB,
(b) every eigenvalue of $B$ lies in the open half-plane of the corresponding eigenvalue of $A^{1/2}$ (or, equivalently, the same condition holds for $C$).
Then log(A) = log(B) + log(C).
Proof. As noted above, since B and C commute there is a correspondence between
the eigenvalues a, b, and c (in some ordering) of A, B, and C: a = bc. Express these
eigenvalues in polar form as
The “error”, however, is expressed in terms of log(X) and log(Y ) rather than X
and Y .
Figure 11.1. Illustration of condition (b) of Theorem 11.4, which requires every eigenvalue of $B$ ($\lambda_B$) to lie in the open half-plane (shaded) of the corresponding eigenvalue of $A^{1/2}$ ($\lambda_A^{1/2}$).
11.2. Conditioning

From Theorem 3.5 we know that the Fréchet derivative of the logarithm is the inverse of the Fréchet derivative of the exponential, so that $L_{\exp}\bigl(\log(A), L_{\log}(A,E)\bigr) = E$. From (10.15) we have $L_{\exp}(A,E) = \int_0^1 e^{A(1-s)} E\, e^{As}\, ds$, and hence $L(A,E) = L_{\log}(A,E)$ satisfies $E = \int_0^1 e^{\log(A)(1-s)} L(A,E)\, e^{\log(A)s}\, ds$, that is (using Theorem 11.2),
$$E = \int_0^1 A^{1-s}\, L(A,E)\, A^{s}\, ds. \qquad (11.9)$$
For a normal matrix and the Frobenius norm this inequality is an equality (Corollary 3.16). But even for normal $A$, the right-hand side generally exceeds $\max_{\lambda\in\Lambda(A)} |\lambda|^{-1}$. For Hermitian positive definite matrices we do, however, have the equality $\|L(A)\|_F = \max_{\lambda\in\Lambda(A)} |\lambda|^{-1} = \|A^{-1}\|_2$.
The lower bound (11.11) is large in two situations: if A has an eigenvalue of small
modulus or if A has a pair of complex eigenvalues lying close to, but on opposite sides
of, the negative real axis; in both cases A is close to a matrix for which the principal
logarithm is not defined.
$$\log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots, \qquad (11.12)$$
which converges for $|x| < 1$ (and in fact at every $x$ on the unit circle except $x = -1$). Replacing $x$ by $-x$ we have
$$\log(1-x) = -x - \frac{x^2}{2} - \frac{x^3}{3} - \frac{x^4}{4} - \cdots$$
and subtracting from the first equation gives
$$\log\frac{1+x}{1-x} = 2\Bigl( x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots \Bigr). \qquad (11.13)$$
Writing
$$y := \frac{1-x}{1+x} \iff x = \frac{1-y}{1+y},$$
we have
$$\log y = -2 \sum_{k=0}^{\infty} \frac{1}{2k+1} \left( \frac{1-y}{1+y} \right)^{2k+1}. \qquad (11.14)$$
The corresponding matrix series are
$$\log(I+A) = A - \frac{A^2}{2} + \frac{A^3}{3} - \frac{A^4}{4} + \cdots, \qquad \rho(A) < 1, \qquad (11.15)$$
and
$$\log(A) = -2 \sum_{k=0}^{\infty} \frac{1}{2k+1} \bigl[ (I-A)(I+A)^{-1} \bigr]^{2k+1}, \qquad \min_i \operatorname{Re}\lambda_i(A) > 0. \qquad (11.16)$$
The Gregory series (11.16) has a much larger region of convergence than (11.15), and
in particular converges for any Hermitian positive definite A. However, convergence
can be slow if A is not close to I.
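As a concrete illustration, the Gregory series can be summed term by term; a minimal MATLAB sketch, assuming the series converges for the given $A$ (e.g., $A$ Hermitian positive definite) and making no claim of efficiency:

    function X = logm_gregory_sketch(A, tol)
    %LOGM_GREGORY_SKETCH  Sum the Gregory series (11.16) for log(A).
    %   Illustrative only; convergence can be very slow if A is far from I.
    n = size(A,1);  I = eye(n);
    B = (I - A)/(I + A);           % (I - A)(I + A)^{-1}
    B2 = B^2;
    T = B;  X = -2*T;  k = 0;
    while norm(T,1) > tol && k < 1000
        k = k + 1;
        T = T*B2;                  % now T = B^(2k+1)
        X = X - (2/(2*k+1))*T;
    end

For example, logm_gregory_sketch(A, 1e-16) agrees with $\log(A)$ to roughly full accuracy for Hermitian positive definite $A$ whose eigenvalues are not far from 1.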
which is the truncation of an infinite continued fraction for $\log(1+x)$. Second is the partial fraction form
$$r_m(x) = \sum_{j=1}^{m} \frac{\alpha_j^{(m)} x}{1 + \beta_j^{(m)} x}, \qquad (11.18)$$
where the $\alpha_j^{(m)}$ are the weights and the $\beta_j^{(m)}$ the nodes of the $m$-point rule on $[0,1]$. Codes for computing these weights and nodes, which the theory of Gaussian quadrature guarantees are real, are given in [142, App. 2], [206], [208, Sec. 3.1], [478, Sec. 4.6].
An interesting connection is that $r_m(A-I)$ is produced by applying the $m$-point Gauss–Legendre quadrature rule to (11.1) [154, Thm. 4.3].
The logarithm satisfies $\log(1+x) = -\log(1/(1+x))$. The diagonal Padé approximant satisfies the corresponding identity.

Lemma 11.5 (Kenney and Laub). The diagonal Padé approximant $r_m$ to $\log(1+x)$ satisfies $r_m(x) = -r_m(-x/(1+x))$.

Proof. The result is a special case of a more general invariance result applying to an arbitrary function under an origin-preserving linear fractional transformation of its argument [38, Thm. 9.1], [39, Thm. 1.5.2].
The next result states the important fact that the error in matrix Padé approx-
imation is bounded by the error in scalar Padé approximation at the norm of the
matrix.
Theorem 11.6 (Kenney and Laub). For $\|X\| < 1$ and any subordinate matrix norm,
$$\|r_m(X) - \log(I+X)\| \le r_m(-\|X\|) - \log(1-\|X\|). \qquad (11.19)$$
The bound (11.19) is sharp in that it is exact for nonpositive scalar $X$ and hence for nonpositive diagonal $X$ in any $p$-norm.
Several possibilities exist for evaluating $r_m(X) = q_m(X)^{-1} p_m(X)$:
• Horner's method applied to $p_m$ and $q_m$ (Algorithm 4.2), followed by solution of $q_m r_m = p_m$;
• the Paterson–Stockmeyer method applied to $p_m$ and $q_m$ (see Section 4.4.3), followed by solution of $q_m r_m = p_m$;
• the partial fraction form (11.18), which requires the solution of $m$ multiple right-hand side linear systems.
The choice between these methods is a compromise between computational cost and
numerical stability. For the first two approaches it is crucial to establish that the
denominator polynomial is not too ill conditioned. The next result provides a bound.
Lemma 11.7 (Kenney and Laub). For $\|X\| < 1$ and any subordinate matrix norm, the denominator $q_m$ of the diagonal Padé approximant $r_m$ satisfies
$$\kappa(q_m(X)) \le \frac{q_m(\|X\|)}{q_m(-\|X\|)}. \qquad (11.20)$$
In the detailed analysis of Higham [275] the partial fraction method emerges as the best overall method for evaluating $r_m$. Its cost is $m$ solutions of multiple right-hand side linear systems. Its numerical stability is governed by the condition of the linear systems that are solved, and for a stable linear system solver the normwise relative error will be bounded approximately by $d(m,n)u\phi_m$, where
$$\phi_m = \max_j \kappa\bigl(I + \beta_j^{(m)} X\bigr) \le \max_j \frac{1 + |\beta_j^{(m)}|\,\|X\|}{1 - |\beta_j^{(m)}|\,\|X\|}. \qquad (11.21)$$
Since $\beta_j^{(m)} \in (0,1)$, $\phi_m$ is guaranteed to be of order 1 provided that $\|X\|$ is not too close to 1. An advantage for parallel computation is that the $m$ terms in (11.18) can be evaluated in parallel.
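To make the partial fraction evaluation concrete, here is a minimal MATLAB sketch; the Gauss–Legendre nodes and weights on $[0,1]$ are generated by the standard Golub–Welsch eigenvalue construction (an assumption of this sketch rather than a reference to any particular code, with $m \ge 2$ assumed):

    function S = logm_pade_pf(X, m)
    %LOGM_PADE_PF  Evaluate the [m/m] Pade approximant r_m(X) to log(I+X)
    %   via the partial fraction form (11.18).  Illustrative sketch only;
    %   ||X|| should be at most theta_m (Table 11.1) for full accuracy.
    k = 1:m-1;
    c = k./sqrt(4*k.^2 - 1);               % Jacobi matrix for Legendre polynomials
    [V, D] = eig(diag(c,1) + diag(c,-1));  % Golub-Welsch: nodes/weights on [-1,1]
    x = diag(D);  w = 2*V(1,:)'.^2;
    beta = (x + 1)/2;  alpha = w/2;        % map to [0,1]: nodes beta_j, weights alpha_j
    n = size(X,1);  S = zeros(n);
    for j = 1:m
        S = S + alpha(j) * ((eye(n) + beta(j)*X) \ X);  % alpha_j X (I + beta_j X)^{-1}
    end

A call such as logm_pade_pf(A - eye(n), 7) then approximates $\log(A)$ for $A$ sufficiently close to the identity.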
So far, we have concentrated on Padé approximants to $\log(1+x)$. Motivated by the Gregory series (11.13) we might also consider approximating $\log\bigl((1+x)/(1-x)\bigr)$. It turns out that these two sets of Padé approximants are closely related.

Lemma 11.8. The Padé approximants $r_m(x)$ to $\log(1+x)$ and $s_m(x)$ to $\log\bigl((1+x)/(1-x)\bigr)$ are related by $r_m(x) = s_m(x/(x+2))$.
provided that $\|X\| < 1$. This inequality is obtained in the same way as (11.19) from a more general result in [341, Cor. 4] holding for a class of hypergeometric functions.
He used this approximation with $a^{1/2^k} = 1 + x$ and a sufficiently large $k$, and multiplied by the scale factor needed to obtain logarithms to base 10. His approximation was therefore $\log_{10} a \approx 2^k \cdot \log_{10} e \cdot (a^{1/2^k} - 1)$.
For a matrix $A$ with no eigenvalues on $\mathbb{R}^-$, the analogue of Briggs' idea is to use the identity, from Theorem 11.2,
$$\log(A) = 2^k \log\bigl(A^{1/2^k}\bigr). \qquad (11.22)$$
The integer $k$ is to be chosen so that $\log(A^{1/2^k})$ is easy to approximate. We will use a Padé approximant of $\log(1+x)$, so we require $A^{1/2^k}$ sufficiently close to the identity, which holds for large enough $k$ since $\lim_{k\to\infty} A^{1/2^k} = I$. This method is called the inverse scaling and squaring method by analogy with the scaling and squaring method for the matrix exponential.
At this point it is worth pointing out a danger in implementing Briggs' original method in floating point arithmetic. Suppose we are working in IEEE double precision arithmetic and wish to obtain the logarithm to full accuracy of about 16 significant decimal digits. Then we will need $|a^{1/2^k} - 1| \approx 10^{-8}$ (as can be seen from Theorem 4.8). The subtraction $a^{1/2^k} - 1$ must therefore suffer massive cancellation, with about half of the significant digits in $a^{1/2^k}$ being lost. This was not a problem for Briggs, who calculated to 30 decimal digits in order to produce a table of 14-digit logarithms. And the problem can be avoided by using higher order approximations to $\log a^{1/2^k}$. However, in the matrix context a value of $k$ sufficiently large for the matrix as a whole may be too large for some particular submatrices and could lead to damaging subtractive cancellation. This is an analogue of the overscaling phenomenon discussed at the end of Section 10.3 in the context of the scaling and squaring method for the exponential.
The design of an algorithm depends on what kinds of operation we are prepared
to carry out on the matrix. We consider first the use of a Schur decomposition, which
reduces the problem to that of computing the logarithm of a triangular matrix, and
then consider working entirely with full matrices.
Table 11.1. Maximal values $\theta_m$ of $\|X\|$ such that the bound (11.19) ensures $\|r_m(X) - \log(I+X)\|$ does not exceed $u = 2^{-53}$, along with the upper bound (11.20) for $\kappa(q_m(X))$ and the upper bound (11.21) for $\phi_m$, both with $\|X\| = \theta_m$.

m            1        2        3        4        5        6        7        8        9
theta_m      1.10e-5  1.82e-3  1.62e-2  5.39e-2  1.14e-1  1.87e-1  2.64e-1  3.40e-1  4.11e-1
kappa(q_m)   1.00e0   1.00e0   1.05e0   1.24e0   1.77e0   3.09e0   6.53e0   1.63e1   4.62e1
phi_m        1.00e0   1.00e0   1.03e0   1.11e0   1.24e0   1.44e0   1.69e0   2.00e0   2.36e0

m            10       11       12       13       14       15       16       32       64
theta_m      4.75e-1  5.31e-1  5.81e-1  6.24e-1  6.62e-1  6.95e-1  7.24e-1  9.17e-1  9.78e-1
kappa(q_m)   1.47e2   5.07e2   1.88e3   7.39e3   3.05e4   1.31e5   5.80e5   >1e16    >1e16
phi_m        2.76e0   3.21e0   3.71e0   4.25e0   4.84e0   5.47e0   6.14e0   2.27e1   8.85e1
to evaluate $r_m$. However, we will use the partial fraction expansion, which the $\phi_m$ values show will deliver a fully accurate evaluation for all the $m$ of interest.
In interpreting the data in the table we need to know the effect of taking a square root of $T$ on the required $m$. This can be determined irrespective of the triangularity of $T$. Since $(I - A^{1/2^{k+1}})(I + A^{1/2^{k+1}}) = I - A^{1/2^k}$ and $A^{1/2^k} \to I$ as $k \to \infty$, it follows that for large $k$,
$$\|I - A^{1/2^{k+1}}\| \approx \tfrac{1}{2}\, \|I - A^{1/2^k}\|, \qquad (11.23)$$
so that once $A^{1/2^k}$ has norm approximately 1 a further square root should approximately halve the distance to the identity matrix.
Since θm /2 < θm−2 for m > 7, to minimize the cost we should keep taking
square roots until kXk ≤ θ7 , since each such square root costs only half the resultant
saving (at least two multiple right-hand side solves) in the cost of evaluating the Padé
approximant. The question is whether this choice needs to be moderated by stability
considerations. We have already noted from the φm values in Table 11.1 that the
evaluation of rm for such X is as accurate as can be hoped. The only danger in
scaling X to have norm at most θ7 , rather than some larger value, is that the greater
number of square roots may increase the error in the matrix whose Padé approximant
is computed. However, the later square roots are of matrices close to I, which are
well conditioned in view of (6.2), so these extra square roots should be innocuous.
We are now ready to state an algorithm. Rather than tailor the algorithm to the
particular precision, we express it in such a way that the same logic applies whatever
the precision, with changes needed only to the integer constants and the θi .
Algorithm 11.9 (inverse scaling and squaring algorithm with Schur decomposition).
Given $A \in \mathbb{C}^{n\times n}$ with no eigenvalues on $\mathbb{R}^-$ this algorithm computes $X = \log(A)$ via a Schur decomposition and the inverse scaling and squaring method. It uses the constants $\theta_i$ given in Table 11.1. The algorithm is intended for IEEE double precision arithmetic.

1   Compute a (complex) Schur decomposition $A = QTQ^*$.
2   $k = 0$, $p = 0$
3   while true
4       $\tau = \|T - I\|_1$
5       if $\tau \le \theta_7$
6           $p = p + 1$
7           $j_1 = \min\{\, i : \tau \le \theta_i,\ i = 3\colon 7 \,\}$
8           $j_2 = \min\{\, i : \tau/2 \le \theta_i,\ i = 3\colon 7 \,\}$
9           if $j_1 - j_2 \le 1$ or $p = 2$, $m = j_1$, goto line 14, end
10      end
11      $T \leftarrow T^{1/2}$ using Algorithm 6.3.
12      $k = k + 1$
13  end
14  Evaluate $U = r_m(T - I)$ using the partial fraction expansion (11.18).
15  $X = 2^k QUQ^*$
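For illustration, the overall structure of the method can be sketched in a few lines of MATLAB; this simplified version fixes the Padé degree at $m = 7$, uses sqrtm for the triangular square roots instead of Algorithm 6.3, and omits the degree-selection logic, so it is not a substitute for Algorithm 11.9:

    function X = logm_iss_schur_sketch(A)
    %LOGM_ISS_SCHUR_SKETCH  Simplified inverse scaling and squaring via Schur form.
    theta7 = 2.64e-1;                        % from Table 11.1
    [Q, T] = schur(A, 'complex');
    k = 0;
    while norm(T - eye(size(T)), 1) > theta7
        T = sqrtm(T);                        % principal square root of triangular T
        k = k + 1;
    end
    U = logm_pade_pf(T - eye(size(T)), 7);   % partial fraction sketch given earlier
    X = 2^k * (Q*U*Q');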
Algorithm 11.10 (inverse scaling and squaring algorithm). Given $A \in \mathbb{C}^{n\times n}$ with no eigenvalues on $\mathbb{R}^-$ this algorithm computes $X = \log(A)$ by the inverse scaling and squaring method. It uses the constants $\theta_i$ given in Table 11.1. The algorithm is intended for IEEE double precision arithmetic.

1   $k = 0$, $it_0 = 5$, $p = 0$
2   while true
3       $\tau = \|A - I\|_1$
4       if $\tau < \theta_{16}$
5           $p = p + 1$
6           $j_1 = \min\{\, i : \tau \le \theta_i,\ i = 3\colon 16 \,\}$
7           $j_2 = \min\{\, i : \tau/2 \le \theta_i,\ i = 3\colon 16 \,\}$
8           if $2(j_1 - j_2)/3 \le it_k$ or $p = 2$, $m = j_1$, goto line 13, end
9       end
10      $A \leftarrow A^{1/2}$ using the scaled product DB iteration (6.29); let $it_{k+1}$ be
        the number of iterations required.
11      $k = k + 1$
12  end
13  Evaluate $Y = r_m(A - I)$ using the partial fraction expansion (11.18).
14  $X = 2^k Y$

Cost: $\bigl(\sum_{i=1}^{k} it_i\bigr) 4n^3 + 8mn^3/3$ flops.
where $\mathcal{U}$ is the unwinding number (11.2) and $z = (\lambda_2 - \lambda_1)/(\lambda_2 + \lambda_1)$. Using the hyperbolic arc tangent, defined by
$$\operatorname{atanh}(z) := \frac{1}{2}\log\frac{1+z}{1-z}, \qquad (11.24)$$
we have
$$f_{12} = t_{12}\, \frac{2\operatorname{atanh}(z) + 2\pi i\,\mathcal{U}(\log\lambda_2 - \log\lambda_1)}{\lambda_2 - \lambda_1}. \qquad (11.25)$$
It is important to note that atanh can be defined in a different way, as recommended in [81] and [327]:
$$\operatorname{atanh}(z) := \tfrac{1}{2}\bigl(\log(1+z) - \log(1-z)\bigr). \qquad (11.26)$$
With this definition a slightly more complicated formula than (11.25) is needed:
$$f_{12} = t_{12}\, \frac{2\operatorname{atanh}(z) + 2\pi i\bigl[\mathcal{U}(\log\lambda_2 - \log\lambda_1) + \mathcal{U}(\log(1+z) - \log(1-z))\bigr]}{\lambda_2 - \lambda_1}. \qquad (11.27)$$
MATLAB's atanh function is defined by (11.24). Of course, the use of (11.25) or (11.27) presupposes the availability of an accurate atanh implementation. Our overall formula is as follows, where we avoid the more expensive atanh formula when $\lambda_1$ and $\lambda_2$ are sufficiently far apart:
$$f_{12} = \begin{cases} \dfrac{t_{12}}{\lambda_1}, & \lambda_1 = \lambda_2,\\[1ex] t_{12}\,\dfrac{\log\lambda_2 - \log\lambda_1}{\lambda_2 - \lambda_1}, & |\lambda_1| < |\lambda_2|/2 \ \text{or}\ |\lambda_2| < |\lambda_1|/2,\\[1ex] \text{(11.25) or (11.27)}, & \text{otherwise}. \end{cases} \qquad (11.28)$$
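A minimal MATLAB sketch of (11.28) for a $2\times 2$ upper triangular block with diagonal $\lambda_1, \lambda_2$ and off-diagonal entry $t_{12}$, assuming (as stated above) that MATLAB's atanh implements (11.24), and using the direct evaluation of the unwinding number noted after (11.2):

    function f12 = logm_f12_sketch(lambda1, lambda2, t12)
    %LOGM_F12_SKETCH  (1,2) entry of log([lambda1 t12; 0 lambda2]) via (11.28).
    unwind = @(z) ceil((imag(z) - pi)/(2*pi));      % unwinding number (11.2)
    if lambda1 == lambda2
        f12 = t12/lambda1;
    elseif abs(lambda1) < abs(lambda2)/2 || abs(lambda2) < abs(lambda1)/2
        f12 = t12*(log(lambda2) - log(lambda1))/(lambda2 - lambda1);
    else
        z = (lambda2 - lambda1)/(lambda2 + lambda1);
        f12 = t12*(2*atanh(z) + 2*pi*1i*unwind(log(lambda2) - log(lambda1))) ...
                  /(lambda2 - lambda1);             % formula (11.25)
    end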
Figure 11.2. Normwise relative errors for MATLAB's logm, logm_old, logm_iss_schur, and logm_iss; the solid line is $\kappa_{\log}(A)u$.
2. the logm function in MATLAB 7.6 (R2008a) onwards, which implements Algorithm 11.11;
3. logm_iss_schur, which implements Algorithm 11.9;
4. logm_iss, which implements Algorithm 11.10. The convergence test for the scaled product DB iteration is based on $\|M_k - I\|$.
Figure 11.2 shows the normwise relative errors $\|\widehat{X} - \log(A)\|_F / \|\log(A)\|_F$, where $\widehat{X}$ is the computed logarithm and $\log(A)$ is computed at 100 digit precision, along with $\kappa_{\log}(A)u$, where $\kappa_{\log} = \mathrm{cond}_{\mathrm{rel}}(\log, A)$ is computed with the aid of Algorithm 11.12. Figure 11.3 presents the same data in the form of a performance profile.
Several observations can be made.
• logm is clearly superior to logm old. Further investigation shows that both
the special treatment of 2 × 2 diagonal blocks and the use of Algorithm 11.9
contribute to the improvement in accuracy.
• logm iss schur is even more markedly superior to logm iss, showing the ben-
efits for accuracy of applying the inverse scaling and squaring method to trian-
gular matrices.
Figure 11.3. Performance profile ($p$ against $\alpha$) for logm, logm_iss_schur, logm_old, and logm_iss.
$$\operatorname{vec}(L_{\log}(A,E)) = 2\,\tau\bigl(\tfrac{1}{2}[X^T \oplus (-X)]\bigr)^{-1} (A^T \oplus A)^{-1} \operatorname{vec}(E), \qquad X = \log(A), \qquad (11.29)$$
where $\tau(x) = \tanh(x)/x$ and $\|X\| \le 1$ is assumed for some consistent matrix norm. That the inverses in (11.29) exist is verified in Problem 11.8.
In order to use this formula we first need to ensure that $X = \log(A)$ has norm at most 1. This is achieved by taking repeated square roots: if $\|I - A^{1/2^s}\| \le 1 - 1/e$ then $\|\log(A^{1/2^s})\| \le 1$ (see Problem 11.4), and so we can apply the formula to $A^{1/2^s}$. To take account of the square root phase we note that applying the chain rule (Theorem 3.4) to $\log(A) = 2\log(A^{1/2})$ gives $L_{\log}(A,E) = 2 L_{\log}\bigl(A^{1/2}, L_{x^{1/2}}(A,E)\bigr)$. Hence, by recurring this relation, we have $L_{\log}(A,E) = L_{\log}(A^{1/2^s}, E_s)$, where
$$E_0 = 2^s E, \qquad (11.30\mathrm{a})$$
$$A^{1/2^i} E_i + E_i A^{1/2^i} = E_{i-1}, \qquad i = 1\colon s. \qquad (11.30\mathrm{b})$$
We obtain an algorithm for evaluating Llog (A, E) by using this Sylvester recurrence
and reversing the steps in the derivation of Algorithm 10.27.
Algorithm 11.12 (Fréchet derivative of the logarithm). Given $A \in \mathbb{C}^{n\times n}$ with no eigenvalues on $\mathbb{R}^-$ and $E \in \mathbb{C}^{n\times n}$, this algorithm computes $L = L_{\log}(A, E)$.

1   Choose $s$ and compute the square roots $A^{1/2^i}$, $i = 1\colon s$, so that $B = A^{1/2^s}$ satisfies $\|I - B\| \le 1 - 1/e$.
2   $E_0 = 2^s E$
3   for $i = 1\colon s$
4       Solve for $E_i$ the Sylvester equation $A^{1/2^i} E_i + E_i A^{1/2^i} = E_{i-1}$.
5   end
6   Solve for $G_9$ the Sylvester equation $B G_9 + G_9 B = E_s$.
7   $X = \log(B)$
8   for $i = 8\colon -1\colon 1$
9       Solve for $G_i$ the Sylvester equation $(I + X/\alpha_i) G_i + G_i (I - X/\alpha_i) = (I + X/\beta_i) G_{i+1} + G_{i+1} (I - X/\beta_i)$.
10  end
11  $L = 2 G_0$

Cost: $16M$, $s$ matrix square roots, 1 matrix logarithm, and the solution of $s + 9$ Sylvester equations.
In practice, it is advisable to combine the square root computations in line 1 with
the loop in lines 3–5 in order to avoid having to store the intermediate square roots.
As with Algorithm 10.27 for the exponential, a Schur reduction of A to triangular
form should be exploited in conjunction with Algorithm 11.12 to reduce the cost.
Algorithm 11.12 can be combined with Algorithm 11.9, with the number of square roots determined by Algorithm 11.9, which has the more stringent requirement on $\|T - I\|$.
It is also possible to estimate the Fréchet derivative by quadrature. For one
approach, see Dieci, Morini, and Papini [154, ].
An obvious analogue of Algorithm 10.28 can be formulated for computing and
estimating the condition number of the matrix logarithm.
The inverse scaling and squaring method was proposed by Kenney and Laub [340,
] for use with a Schur decomposition. It was further developed by Cheng, Higham,
Kenney, and Laub [108, ] in a transformation-free form that takes as a parameter
the desired accuracy. The two key ideas therein are prediction of the most efficient
Padé degree, based on the bound (11.19), and the use of an incomplete form of the
product DB iteration, with analysis to show how early the iteration can be terminated
without compromising the overall accuracy; see Problem 11.7. The implementation in Kenney and Laub [340] requires $\|X\| = \|I - A^{1/2^s}\| \le 0.25$ and uses fixed Padé degree 8, while Dieci, Morini, and Papini [154] require $\|X\| \le 0.35$ and take Padé degree 9. Cardoso and Silva Leite [95] propose a version of the inverse scaling and squaring method based on Padé approximants to $\log\bigl((1+x)/(1-x)\bigr)$.
It explicitly forms a matrix Cayley transform and requires a matrix squaring at each
square root step, so its numerical stability is questionable and it appears to offer no
general cost advantages over the use of log(1 + x) (cf. Lemma 11.8).
The inverse scaling and squaring method is used by Arsigny, Commowick, Pennec,
and Ayache [19, ] to compute the logarithm of a diffeomorphism, which they
then use to compute the log-Euclidean mean (2.29) of three-dimensional geometrical
transformations.
The formula (11.25) can be found in Kahan and Fateman [329, ].
Section 11.8 is based on Kenney and Laub [348, ].
In connection with overscaling in the inverse scaling and squaring method, Dieci
and Papini [156, ] consider a block 2 × 2 upper triangular T = (Tij ) ∈ Cn×n and
obtain a bound for krm (T − I) − log(T )k of the form kT12 k multiplied by a function of
maxi=1,2 kTii − Ik (assumed less than 1) but having no direct dependence on kT − Ik.
This bound potentially allows a reduction in the number of square roots required
by the inverse scaling and squaring method compared with the use of (11.19), since
maxi=1,2 kTii − Ik may be less than 1 when kT − Ik greatly exceeds 1, due to a large
T12 . Related results are obtained by Cardoso and Silva Leite [95, ] for the Padé
approximants sm defined in Lemma 11.8. A difficulty in applying these bounds is
that it is not clear how best to choose the blocking. A more natural approach is
Algorithm 11.11, which automatically blocks the Schur form and applies the inverse
scaling and squaring method to the (usually small) diagonal blocks.
For Hermitian positive definite matrices, Lu [394, ] proposes an algorithm en-
tirely analogous to those of his described in Sections 6.10 and 10.7.1 for the square root
and the exponential, based on the linear partial fraction form of Padé approximants
to log(1 + x).
Problems
11.1. Show that for any λ not lying on R− the (principal) logarithm log(J(λ)) of a
Jordan block J(λ) ∈ Cm×m can be defined via the series (11.15). Thus explain how
(11.15) can be used to define log(A) for any A ∈ Cn×n having no eigenvalues on R− ,
despite the finite radius of convergence of the series.
11.2. Derive the formula (11.10) for the Fréchet derivative of the logarithm, using
the definition (3.6) and the integral (11.1).
11.3. Show that kI − Ak < 1 for some consistent matrix norm is sufficient for the
convergence of the Gregory series (11.16).
11.4. Let k·k be a consistent norm. Show that if kI −M k ≤ 1−1/e then k log(M )k ≤
1.
11.5. Show that if Algorithm 11.9 takes at least one square root and the approxi-
mation (11.23) is exact once kT − Ik1 ≈ θ7 then the algorithm will take m = 5, 6,
or 7.
11.6. Show that for A ∈ Cn×n with no eigenvalues on R− , log(A) = log(A/kAk) +
log(kAkI). This relation could be used prior to calling Algorithm 11.9 or Algo-
rithm 11.10 on the matrix A/kAk whose norm is 1. Is this worthwhile?
11.7. (Cheng, Higham, Kenney, and Laub [108, ]) Show that the iterates from
the product DB iteration (6.17) satisfy log(A) = 2 log(Xk ) − log(Mk ). Suppose we
terminate the iteration after k iterations and set X (1) = Xk , and now apply the
product DB iteration to X (1) , again for a finite number of iterations. Show that
continuing this process leads after s steps to
$$\log(A) = 2^s \log(X^{(s)}) - \log(M^{(1)}) - 2\log(M^{(2)}) - \cdots - 2^{s-1}\log(M^{(s)}), \qquad (11.31)$$
where X (i) and M (i) are the final iterates from the product DB iteration applied to
X (i−1) . The (unknown) log(M (i) ) terms on the right-hand side can be bounded using
Problem 11.4.
11.8. Let $A \in \mathbb{C}^{n\times n}$ have no eigenvalues on $\mathbb{R}^-$. Let $X = \log(A)$ and assume $\|X\| \le 1$ for some consistent matrix norm.
(a) Show that $A^T \oplus A$ is nonsingular.
(b) Using the expansion $\tau(x) = \tanh(x)/x = \prod_{k=1}^{\infty} \bigl(\pi^2 + x^2/k^2\bigr)/\bigl(\pi^2 + 4x^2/(2k-1)^2\bigr)$, show that $\tau\bigl(\tfrac{1}{2}[X^T \oplus (-X)]\bigr)$ is nonsingular.
11.9. Let $R$ be the upper triangular Cholesky factor of the $n\times n$ Pascal matrix (see Section 6.11). Show that $\log(R)$ is zero except on the first superdiagonal, which comprises $1, 2, \dots, n-1$. Thus for $n = 5$,
$$\log \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 0 & 1 & 2 & 3 & 4\\ 0 & 0 & 1 & 3 & 6\\ 0 & 0 & 0 & 1 & 4\\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 2 & 0 & 0\\ 0 & 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 0 & 4\\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$
Explain why all logarithms of $R$ are upper triangular and why this is the only real logarithm of $R$.
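For illustration, this is immediate to verify numerically in MATLAB (an informal check, not part of the problem):

    R = chol(pascal(5));   % upper triangular Cholesky factor shown above
    logm(R)                % zero apart from 1, 2, 3, 4 on the first superdiagonal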
11.10. Let $\widehat{X}$ be an approximate logarithm of $A$. Derive a formula for the relative backward error of $\widehat{X}$.
11.11. (Kenney) Let A ∈ Cn×n be nonsingular. Consider the iterations
In 1614 John Napier (1550–1617), of Merchiston Castle, near (now in) Edinburgh,
published his book Mirifici Logarithmorum Canonis Descriptio,
in which he gives a table of logarithms and an account of how he computed them.
If anyone is entitled to use the word “logarithm” it is Napier, since he coined the word,
deriving it from the two Greek words
λόγος (meaning "reckoning," in this context) and
ἀριθμός (meaning "number").
— GEORGE M. PHILLIPS, Two Millennia of Mathematics:
From Archimedes to Gauss (2000)
We now turn our attention to the two most important trigonometric functions: the
cosine and the sine. We saw in Section 2.1 that the matrix sine and cosine arise in the
solution of the second order differential system (2.7). More general problems of this
type, with a forcing term f (t) on the right-hand side, arise from semidiscretization of
the wave equation and from mechanical systems without damping, and their solutions
can be expressed in terms of integrals involving the sine and cosine [514, ].
This chapter begins with a review of addition formulae and normwise upper and
lower bounds. Expressions for the Fréchet derivatives are then derived. A detailed
derivation is given of an algorithm for cos(A) that employs variable degree Padé ap-
proximation in conjunction with the double angle formula for the cosine. A numerical
illustration is given of the performance of that algorithm and two alternatives. Then
an algorithm is developed for computing cos(A) and sin(A) simultaneously that inter-
twines the cosine and sine double angle recurrences. Finally, preprocessing is briefly
discussed.
which yields
$$\cos(A) = \frac{e^{iA} + e^{-iA}}{2}, \qquad \sin(A) = \frac{e^{iA} - e^{-iA}}{2i} \qquad (12.2)$$
and
$$\cos^2(A) + \sin^2(A) = I. \qquad (12.3)$$
For real $A$, we can write $\cos(A) = \operatorname{Re} e^{iA}$, $\sin(A) = \operatorname{Im} e^{iA}$.
A number of basic properties are analogous to those for the matrix exponential.
Theorem 12.2 (Kronecker addition formulae). Let A ∈ Cn×n and B ∈ Cm×m . Then
f (A ⊗ I) = f (A) ⊗ I and f (I ⊗ B) = I ⊗ f (B) for f = cos, sin, and
cos(A ⊕ B) = cos(A) ⊗ cos(B) − sin(A) ⊗ sin(B), (12.8)
sin(A ⊕ B) = sin(A) ⊗ cos(B) + cos(A) ⊗ sin(B). (12.9)
Proof. The result follows from Theorem 10.9 on using (12.2).
The next result provides some easily evaluated upper and lower bounds on the
norm of the sine and cosine.
Theorem 12.3 (norm bounds). For A ∈ Cn×n and any subordinate matrix norm we
have
$$2 - \cosh(\|A\|) \le 2 - \cosh(\|A^2\|^{1/2}) \le \|\cos(A)\| \le \cosh(\|A^2\|^{1/2}) \le \cosh(\|A\|), \qquad (12.10)$$
$$\|A\| - \frac{\|A\|^3}{6\bigl(1 - \|A\|^2/20\bigr)} \le \|\sin(A)\| \le \sinh(\|A\|). \qquad (12.11)$$

Proof. We have the bound $\|\cos(A)\| \le \sum_{k=0}^{\infty} \|A^{2k}\|/(2k)! \le \sum_{k=0}^{\infty} \|A^2\|^k/(2k)! = \cosh(\|A^2\|^{1/2})$. Similarly,
$$\|\cos(A)\| \ge 1 - \sum_{k=1}^{\infty} \frac{\|A^{2k}\|}{(2k)!} \ge 1 - \sum_{k=1}^{\infty} \frac{\|A^2\|^k}{(2k)!} = 1 - \bigl(\cosh(\|A^2\|^{1/2}) - 1\bigr) = 2 - \cosh(\|A^2\|^{1/2}).$$
This gives (12.10), since $\|A^2\|^{1/2} \le \|A\|$ and hence $\cosh(\|A^2\|^{1/2}) \le \cosh(\|A\|)$. The upper bound for $\|\sin(A)\|$ is straightforward. For the lower bound we have
$$\|\sin(A)\| \ge \|A\| - \sum_{k=1}^{\infty} \frac{\|A^{2k+1}\|}{(2k+1)!} \ge \|A\| - \frac{\|A^3\|}{6}\Bigl(1 + \frac{\|A^2\|}{4\cdot 5} + \frac{\|A^4\|}{4\cdot 5\cdot 6\cdot 7} + \cdots\Bigr)$$
$$\ge \|A\| - \frac{\|A^3\|}{6}\Bigl(1 + \frac{\|A^2\|}{4\cdot 5} + \Bigl(\frac{\|A^2\|}{4\cdot 5}\Bigr)^2 + \cdots\Bigr) \ge \|A\| - \frac{\|A\|^3}{6\bigl(1 - \|A\|^2/20\bigr)}.$$
Since kA2 k ≤ kAk2 can be an arbitrarily weak inequality for nonnormal A (con-
sider involutory matrices, for example), the inner bounds in (12.10) can be smaller
than the outer ones by an arbitrary factor. Numerical methods are likely to form
A2 when evaluating cos(A), since cos is an even function, enabling the tighter bound
to be evaluated at no extra cost (and in some cases, such as (2.8), A2 is the given
matrix).
12.2. Conditioning
By using (10.15) and (12.2) we can obtain expressions for the Fréchet derivatives of
the cosine and sine functions:
$$L_{\cos}(A,E) = -\int_0^1 \bigl[\cos(A(1-s))\, E \sin(As) + \sin(A(1-s))\, E \cos(As)\bigr]\, ds, \qquad (12.12)$$
$$L_{\sin}(A,E) = \int_0^1 \bigl[\cos(A(1-s))\, E \cos(As) - \sin(A(1-s))\, E \sin(As)\bigr]\, ds. \qquad (12.13)$$
Just as for the exponential and the logarithm we can obtain explicit expressions
for the Fréchet derivatives with the aid of vec and the Kronecker product.
Table 12.1. Number of matrix multiplications $\pi_{2m}$ required to evaluate $p_{2m}(A)$ and $q_{2m}(A)$.

2m      2  4  6  8  10 12 14 16 18 20 22 24 26 28 30
pi_2m   1  2  3  4  5  5  6  6  7  7  8  8  9  9  9

$$r_2(x) = \frac{1 - \dfrac{5}{12}x^2}{1 + \dfrac{1}{12}x^2}, \qquad r_4(x) = \frac{1 - \dfrac{115}{252}x^2 + \dfrac{313}{15120}x^4}{1 + \dfrac{11}{252}x^2 + \dfrac{13}{15120}x^4}.$$
Thereafter the numerators and denominators of the rational coefficients grow rapidly in size; for example,

$$A_2 = A^2, \quad A_4 = A_2^2, \quad A_6 = A_2 A_4, \qquad (12.17\mathrm{a})$$
$$p_{12} = a_0 I + a_2 A_2 + a_4 A_4 + a_6 A_6 + A_6 (a_8 A_2 + a_{10} A_4 + a_{12} A_6), \qquad (12.17\mathrm{b})$$
$$q_{12} = b_0 I + b_2 A_2 + b_4 A_4 + b_6 A_6 + A_6 (b_8 A_2 + b_{10} A_4 + b_{12} A_6). \qquad (12.17\mathrm{c})$$
Table 12.1 summarizes the cost of evaluating $p_{2m}$ and $q_{2m}$ for $2m = 2\colon 2\colon 30$.
$C_i = \cos(2^{i-s} A)$.
The integer s is chosen so that 2−s A has norm small enough that we can obtain a good
approximation of C0 = cos(2−s A) at reasonable cost. By applying the cosine double
angle formula (12.6) we can compute Cs = cos(A) from C0 using the recurrence
Theorem 12.5 (Higham and Smith). Let $\widehat{C}_i = C_i + E_i$, where $C_0 = \cos(2^{-s}A)$, the $C_i$ satisfy (12.18), and $\widehat{C}_{i+1} = fl(2\widehat{C}_i^2 - I)$. For the 1-, $\infty$-, and Frobenius norms, assuming that $\|E_i\| \le 0.05\|C_i\|$ for all $i$, we have
where
$$\|R_i\| \le \widetilde{\gamma}_{n+1}\bigl(2\alpha_n \|\widehat{C}_i\|^2 + 1\bigr). \qquad (12.21)$$
We can rewrite (12.20) as
$$E_{i+1} = 2\bigl(E_i^2 + E_i C_i + C_i E_i\bigr) + R_i.$$
Table 12.2. Maximum value $\theta_{2m}$ of $\theta$ such that the absolute error bound (12.24) does not exceed $u = 2^{-53}$.

2m        2      4     6     8     10   12   14   16   18   20   22   24   26    28    30
theta_2m  6.1e-3 0.11  0.43  0.98  1.7  2.6  3.6  4.7  5.9  7.1  8.3  9.6  10.9  12.2  13.6
Consider the special case where $A$ is normal with real eigenvalues, $\lambda_i$. Then $\|C_i\|_2 = \max_{1\le j\le n} |\cos(2^{i-s}\lambda_j)| \le 1$ for all $i$ and (12.19) yields
This bound reflects the fact that, because the double angle recurrence multiplies the
square of the previous iterate by 2 at each stage, the errors could grow by a factor 4
at each stage, though this worst-case growth is clearly extremely unlikely.
It is natural to require the approximation $\widehat{C}_0$ to satisfy a relative error bound of the form $\|C_0 - \widehat{C}_0\|/\|C_0\| \le u$. The lower bound in (12.10) guarantees that the denominator is nonzero if $\|A\|$ or $\|A^2\|^{1/2}$ is less than $\cosh^{-1}(2) = 1.317\ldots$. Consider the term $\|E_0\|$ in the bound (12.19). With the relative error bound and the norm restriction just described, we have $\|E_0\| \le u\|C_0\| \le 2u$, which is essentially the same as if we imposed an absolute error bound $\|E_0\| \le u$. An absolute bound has the advantage of putting no restrictions on $\|A\|$ or $\|A^2\|^{1/2}$ and thus potentially allowing
a smaller s, which means fewer double angle recurrence steps. The norms of the
matrices C0 , . . . , Cs−1 are different in the two cases, and we can expect an upper
bound for kC0 k to be larger in the absolute case. But if the absolute criterion permits
fewer double angle steps then, as is clear from (12.19), significant gains in accuracy
could accrue. In summary, the error analysis provides support for the use of an
absolute error criterion if kC0 k is not too large. Algorithms based on a relative error
bound were developed by Higham and Smith [287, ] and Hargreaves and Higham
[248, ], but experiments in the latter paper show that the absolute bound leads
to a more efficient and more accurate algorithm. We now develop an algorithm based
on an absolute error bound.
As for the exponential and the logarithm, Padé approximants to the cosine provide
greater efficiency than Taylor series for the same quality of approximation. The
truncation error for r2m has the form
$$\cos(A) - r_{2m}(A) = \sum_{i=2m+1}^{\infty} c_{2i} A^{2i}.$$
Hence
$$\|\cos(A) - r_{2m}(A)\| \le \sum_{i=2m+1}^{\infty} |c_{2i}|\,\theta^{2i}, \qquad (12.24)$$
where $\theta = \theta(A) = \|A^2\|^{1/2}$.
Define θ2m to be the largest value of θ such that the bound in (12.24) does not exceed
u. Using the same mixture of symbolic and high precision calculation as was employed
in Section 10.3 for the exponential, we found the values of θ2m listed in Table 12.2.
Table 12.3. Upper bound for $\kappa(q_{2m}(A))$ when $\theta \le \theta_{2m}$, based on (12.26) and (12.27), where the $\theta_{2m}$ are given in Table 12.2. The bound does not exist for $2m \ge 26$.

2m      2    4    6    8    10   12   14   16   18   20   22   24
Bound   1.0  1.0  1.0  1.0  1.1  1.2  1.4  1.8  2.4  3.5  7.0  9.0e1

Table 12.4. Values of $\|\widetilde{p}_{2m}\|_\infty$ and $\|\widetilde{q}_{2m}\|_\infty$ relevant to the error bound (12.25).

2m               2    4    6    8    10   12   14     16     18     20     22     24
||p~_2m||_inf    1.0  1.0  1.1  1.5  2.7  6.2  1.6e1  4.3e1  1.2e2  3.7e2  1.2e3  3.7e3
||q~_2m||_inf    1.0  1.0  1.0  1.0  1.1  1.1  1.2    1.3    1.4    1.6    1.7    2.0
We now need to consider the effects of rounding errors on the evaluation of r2m .
Consider, first, the evaluation of p2m and q2m , and assume initially that A2 is evalu-
ated exactly. Let g2m (A2 ) denote either of the even polynomials p2m (A) and q2m (A).
It follows from Theorem 4.5 that for the 1- and ∞-norms,
where $\widetilde{g}_{2m}$ denotes $g_{2m}$ with its coefficients replaced by their absolute values. When we take into account the error in forming $A^2$ we find that the bound (12.25) is multiplied by a term that is approximately $\mu(A) = \||A|^2\|/\|A^2\| \ge 1$. The quantity $\mu$ can be arbitrarily large. However, $\mu_\infty(A) \le (\|A\|_\infty/\theta_\infty(A))^2$, so a large $\mu$ implies that basing the algorithm on $\theta(A)$ rather than $\|A\|$ produces a smaller $s$, which means that potentially increased rounding errors in the evaluation of $p_{2m}$ and $q_{2m}$ are balanced by potentially decreased error propagation in the double angle phase.
Since we obtain $r_{2m}$ by solving a linear system with coefficient matrix $q_{2m}(A)$, we require $q_{2m}(A)$ to be well conditioned to be sure that the system is solved accurately. From $q_{2m}(A) = \sum_{i=0}^{m} b_{2i} A^{2i}$, we have
$$\|q_{2m}(A)\| \le \sum_{i=0}^{m} |b_{2i}|\,\theta^{2i}. \qquad (12.26)$$
Using the inequality $\|(I+E)^{-1}\| \le (1 - \|E\|)^{-1}$ for $\|E\| < 1$ gives
$$\|q_{2m}(A)^{-1}\| \le \frac{1}{|b_0| - \bigl\|\sum_{i=1}^{m} b_{2i} A^{2i}\bigr\|} \le \frac{1}{|b_0| - \sum_{i=1}^{m} |b_{2i}|\,\theta^{2i}}. \qquad (12.27)$$
Table 12.3 tabulates the bound for κ(q2m (A)) obtained from (12.26) and (12.27), the
latter being finite only for 2m ≤ 24. It shows that q2m is well conditioned for all m
in this range.
Now we consider the choice of m. In view of Table 12.3, we will restrict to 2m ≤ 24.
Table 12.4, which concerns the error bound (12.25) for the evaluation of p2m and q2m ,
suggests further restricting 2m ≤ 20, say. From Table 12.1 it is then clear that we
need consider only 2m = 2, 4, 6, 8, 12, 16, 20. Dividing A (and hence θ) by 2 results in
one extra matrix multiplication in the double angle phase, whereas for θ ≤ θ2m the
cost of evaluating the Padé approximant increases by one matrix multiplication with
each increase in m in our list of considered values. Since the numbers θ12 , θ16 , and θ20
Table 12.5. Logic for choice of scaling and Padé approximant degree d ≡ 2m. Assuming A
has already been scaled, if necessary, so that θ ≤ θ20 = 7.1, further scaling should be done to
bring θ within the range for the indicated value of d.
Range of θ d
[0, θ16 ] = [0, 4.7] smallest d ∈ {2, 4, 6, 8, 12, 16} such that θ ≤ θd
(θ16 , 2θ12 ] = (4.7, 5.2] 12 (scale by 1/2)
(2θ12 , θ20 ] = (5.2, 7.1] 20 (no scaling)
differ successively by less than a factor 2, the value of d ≡ 2m that gives the minimal
work depends on θ. For example, if θ = 7 then d = 20 is best, because nothing would
be gained by a further scaling by 1/2, but if θ = 5 then scaling by 1/2 enables us to
use d = 12, and the whole computation then requires one less matrix multiplication
than if we immediately applied d = 20. Table 12.5 summarizes the relevant logic.
The tactic, then, is to scale so that θ ≤ θ20 and to scale further only if a reduction in
work is achieved.
With this scaling strategy we have, by (12.10), kC0 k ≤ cosh(θ20 ) ≤ 606. Since
this bound is not too much larger than 1, the argument given at the beginning of this
section provides justification for the following algorithm.
Algorithm 12.6 (double angle algorithm). Given A ∈ Cn×n this algorithm approx-
imates C = cos(A). It uses the constants θ2m given in Table 12.2. The algorithm is
intended for IEEE double precision arithmetic. At lines 5 and 12 the Padé approxi-
mants are to be evaluated via a scheme of form (12.17), making use of the matrix B.
1   $B = A^2$
2   $\theta = \|B\|_\infty^{1/2}$
3   for $d = [2\ 4\ 6\ 8\ 12\ 16]$
4       if $\theta \le \theta_d$
5           $C = r_d(A)$
6           quit
7       end
8   end
9   $s = \lceil \log_2(\theta/\theta_{20}) \rceil$   % Find minimal integer $s$ such that $2^{-s}\theta \le \theta_{20}$.
10  Determine optimal $d$ from Table 12.5 (with $\theta \leftarrow 2^{-s}\theta$); increase $s$ as necessary.
11  $B \leftarrow 4^{-s} B$
12  $C = r_d(2^{-s}A)$
13  for $i = 1\colon s$
14      $C \leftarrow 2C^2 - I$
15  end

Cost: $\bigl(\pi_d + \lceil \log_2(\|A^2\|_\infty^{1/2}/\theta_d) \rceil\bigr) M + D$, where $\pi_d$ and $\theta_d$ are tabulated in Tables 12.1 and 12.2, respectively.
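For illustration, the structure of the algorithm can be sketched compactly in MATLAB using the fixed degree $d = 4$ Padé approximant $r_4$ displayed after Table 12.1 and $\theta_4 = 0.11$ from Table 12.2; this simplified version omits the variable-degree logic and is not a substitute for Algorithm 12.6:

    function C = cos_double_angle_sketch(A)
    %COS_DOUBLE_ANGLE_SKETCH  Simplified double angle scheme for cos(A).
    theta4 = 0.11;                          % from Table 12.2
    B = A^2;
    s = max(0, ceil(log2(norm(B,inf)^(1/2)/theta4)));
    B = B/4^s;                              % B = (2^{-s}A)^2
    n = size(A,1);  I = eye(n);
    P = I - (115/252)*B + (313/15120)*B^2;  % p_4(2^{-s}A)
    Q = I + (11/252)*B + (13/15120)*B^2;    % q_4(2^{-s}A)
    C = Q\P;                                % r_4(2^{-s}A) ~ cos(2^{-s}A)
    for i = 1:s
        C = 2*C^2 - I;                      % double angle recurrence
    end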
In the next section we will compare Algorithm 12.6 with the following very simple
competitor based on (12.2).
Algorithm 12.7 (cosine and sine via exponential). Given A ∈ Cn×n this algorithm
computes C = cos(A) and S = sin(A) via the matrix exponential.
1   $X = e^{iA}$
2 If A is real
3 C = Re X
4 S = Im X
5 else
6 C = (X + X −1 )/2
7 S = (X − X −1 )/(2i)
8 end
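Algorithm 12.7 translates directly into MATLAB; a minimal transcription for a given square matrix A:

    X = expm(1i*A);
    if isreal(A)
        C = real(X);  S = imag(X);
    else
        C = (X + inv(X))/2;  S = (X - inv(X))/(2i);
    end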
1. Algorithm 12.6.
We used a set of 55 test matrices including matrices from MATLAB, from the
Matrix Computation Toolbox [264], and from [287, ]. The matrices are mostly
10 × 10 and their norms range from order 1 to $10^7$, though more than half have $\infty$-norm 10 or less. We evaluated the relative error $\|\widehat{C} - C\|_F/\|C\|_F$, where $\widehat{C}$ is the computed approximation to $C$ and $C = \cos(A)$ is computed at 100-digit precision.
Figure 12.1 shows the results. The solid line is the unit roundoff multiplied by an
estimate of κcos obtained using Algorithm 3.20 with finite differences. Figure 12.2
gives a performance profile representing the same data.
Some observations on the results:
• The three algorithms are all behaving in a numerically stable way apart from
occasional exceptions for each algorithm.
• The average cost of Algorithm 12.6 in this experiment is about 9 matrix mul-
tiplications, or 18n3 flops, which compares favourably with the cost of at least
28n3 flops of funm.
• It is reasonable to conclude that on this test set Algorithm 12.7 is the most
accurate solver overall, followed by Algorithm 12.6 and then funm.
Figure 12.1. Normwise relative errors for Algorithm 12.6, MATLAB's funm, and Algorithm 12.7; the solid line is an estimate of $\kappa_{\cos}(A)u$.
Since this expansion contains only odd powers of $A$ we bound the series in terms of $\|A\|$ instead of $\theta(A)$:
$$\|\sin(A) - \widetilde{r}_m(A)\| \le \sum_{i=m}^{\infty} |c_{2i+1}|\,\beta^{2i+1}, \qquad \beta = \|A\|. \qquad (12.28)$$
Figure 12.2. Performance profile ($p$ against $\alpha$) of the same data as Figure 12.1, for Algorithm 12.7, Algorithm 12.6, and funm.
Table 12.6. Maximum value $\beta_m$ of $\|A\|$ such that the absolute error bound (12.28) does not exceed $u = 2^{-53}$.

m        2      3      4      5      6     7     8     9    10   11   12
beta_m   1.4e-3 1.8e-2 6.4e-2 1.7e-1 0.32  0.56  0.81  1.2  1.5  2.0  2.3

m        13   14   15   16   17   18   19   20   21   22   23   24
beta_m   2.9  3.3  3.9  4.4  5.0  5.5  6.2  6.7  7.4  7.9  8.7  9.2
Define $\beta_m$ to be the largest value of $\beta$ such that the bound (12.28) does not exceed $u$. Using the same technique as for the cosine, we computed the values shown in Table 12.6. These values of $\beta_m$ can be compared with the values of $\theta_{2m}$ in Table 12.2.
On comparing Table 12.6 with Table 12.2 we see that for $4 \le 2m \le 22$ we have $\beta_{2m} < \theta_{2m} < \beta_{2m+1}$. We could therefore scale so that $\|2^{-s}A\| \le \beta_{2m}$ and then use the $[2m/2m]$ Padé approximants to the sine and cosine, or scale so that $\|2^{-s}A\| \le \theta_{2m}$ and use the $[2m/2m]$ Padé approximant to the cosine and the $[2m{+}1/2m{+}1]$ Padé approximant to the sine. Since we can write an odd polynomial in $A$ ($\widetilde{p}_m(A)$) as $A$ times an even polynomial of degree one less, it turns out to be as cheap to evaluate $\widetilde{r}_{2m+1}$ and $r_{2m}$ as to evaluate $\widetilde{r}_{2m}$ and $r_{2m}$. Therefore we will scale so that $\|2^{-s}A\| \le \theta_{2m}$ and then evaluate $r_{2m}$ for the cosine and $\widetilde{r}_{2m+1}$ for the sine. Evaluating $p_{2m}$, $q_{2m}$, $\widetilde{p}_{2m+1}$, and $\widetilde{q}_{2m+1}$ reduces to evaluating four even polynomials of degree $2m$. This can be done by forming the powers $A^2, A^4, \dots, A^{2m}$, at a total cost of $m+1$ multiplications. However, for $2m \ge 20$ it is more efficient to use the schemes of the form (12.17). We summarize the cost of evaluating $p_{2m}$, $q_{2m}$, $\widetilde{p}_{2m+1}$, and $\widetilde{q}_{2m+1}$ for $m = 2\colon 2\colon 24$ in Table 12.7.
Now we consider the choice of degree, $d \equiv 2m$. Bounds analogous to those in Table 12.3 show that $\widetilde{q}_{d+1}$ is well conditioned for $d \le 24$, and bounds for $\widetilde{p}_{d+1}$ and $\widetilde{q}_{d+1}$ analogous to those in Table 12.4 suggest restricting to $d \le 20$ (the same restriction
that was made in Section 12.4 for the Padé approximants for the cosine). It is then
clear from Table 12.7 that we need only consider d = 2, 4, 6, 8, 10, 12, 14, 16, 20. Noting
that dividing A by 2 results in two extra multiplications in the double-angle phase
and that increasing from one value of d to the next in our list of considered values
increases the cost of evaluating the Padé approximants by one multiplication, we can
determine the most efficient choice of d by a similar argument to that in the previous
section. The result is that we should scale so that θ ≤ θ20 , and scale further according
to exactly the same strategy as in Table 12.5 except that in the first line of the table
“10” and “14” are added to the set of possible d values.
The algorithm can be summarized as follows.
Algorithm 12.8 (double angle algorithm for sine and cosine). Given A ∈ Cn×n this
algorithm approximates C = cos(A) and S = sin(A). It uses the constants θ2m given
in Table 12.2. The algorithm is intended for IEEE double precision arithmetic.
1   for $d = [2\ 4\ 6\ 8\ 10\ 12\ 14\ 16]$
2       if $\|A\|_\infty \le \theta_d$
3           $C = r_d(A)$, $S = \widetilde{r}_{d+1}(A)$
4           quit
5       end
6   end
7   $s = \lceil \log_2(\|A\|_\infty/\theta_{20}) \rceil$   % Find minimal integer $s$ such that $2^{-s}\|A\|_\infty \le \theta_{20}$.
8   Determine optimal $d$ from modified Table 12.5 (with $\theta = 2^{-s}\|A\|_\infty$)
    and increase $s$ as necessary.
9   $C = r_d(2^{-s}A)$, $S = \widetilde{r}_{d+1}(2^{-s}A)$
10  for $i = 1\colon s$
11      $S \leftarrow 2CS$, $C \leftarrow 2C^2 - I$
12  end

Cost: $\bigl(\widetilde{\pi}_d + \lceil \log_2(\|A\|_\infty/\theta_d) \rceil\bigr) M + 2D$, where $\theta_d$ and $\widetilde{\pi}_d$ are tabulated in Tables 12.2 and 12.7, respectively.
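A minimal MATLAB sketch of the intertwined double angle recurrences at the heart of Algorithm 12.8; for illustration only, the scaled cosine and sine are obtained here via the exponential (as in Algorithm 12.7) rather than by Padé approximation, with $\theta_{20} = 7.1$ taken from Table 12.2:

    theta20 = 7.1;
    s = max(0, ceil(log2(norm(A,inf)/theta20)));
    X = expm(1i*2^(-s)*A);
    C = (X + inv(X))/2;  S = (X - inv(X))/(2i);   % cos and sin of 2^{-s} A
    for i = 1:s
        S = 2*C*S;                  % sin(2M) = 2 cos(M) sin(M); uses the old C
        C = 2*C^2 - eye(size(A));   % cos(2M) = 2 cos(M)^2 - I
    end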
How much work does Algorithm 12.8 save compared with separate computation of $\cos(A)$ and $\sin(A) = \cos(A - \frac{\pi}{2}I)$ by Algorithm 12.6? The answer is roughly $2\pi_d - \widetilde{\pi}_d$ matrix multiplies, which rises from 1 when $d = 4$ to 4 when $d = 20$; the overall saving is therefore up to about 29%.
We tested Algorithm 12.8 on the same set of test matrices as in Section 12.5. Figure 12.3 compares the relative errors for the computed sine and cosine with the corresponding errors from funm, invoked as funm(A,@sin) and funm(A,@cos); with Algorithm 12.7; and with $\sin(A)$ computed as the shifted cosine $\sin(A) = \cos(A - \frac{\pi}{2}I)$ using Algorithm 12.6. Note that the cost of the two funm computations can be reduced by using the same Schur decomposition for $\sin(A)$ as for $\cos(A)$. Algorithm 12.8 provides similar or better accuracy to funm and the shifted cosine on this test set. Its cost varies from 9 matrix multiplies and solves to 55, with an average of 17.
Figure 12.3. Normwise relative errors for Algorithm 12.6, Algorithm 12.7, Algorithm 12.8, funm, and sine obtained as shifted cosine from Algorithm 12.6. The solid line is an estimate of $\kappa_{\cos}(A)u$ (top) and $\kappa_{\sin}(A)u$ (bottom).
12.6.1. Preprocessing
Balancing and shifting can be used to reduce the norm prior to applying any of the
algorithms in this chapter. We can exploit the periodicity relation (4.32) to shift A
by πjI. Theorem 4.21 motivates taking j as whichever of the two integers nearest to
trace$(A)/(n\pi)$ gives the smaller value of $\|A - \pi jI\|_F$, or as 0 if neither choice reduces the Frobenius norm of $A$. Thus preprocessing consists of computing $\widetilde{A} = D^{-1}(A - \pi qI)D$, where $D$ represents balancing, and recovering $\cos(A) = (-1)^q D\cos(\widetilde{A})D^{-1}$ and $\sin(A) = (-1)^q D\sin(\widetilde{A})D^{-1}$.
The matrix cosine and sine are used in a matrix multiplication-dominated algo-
rithm proposed by Yau and Lu [619, ] for solving the symmetric eigenvalue prob-
lem Ax = λx. This algorithm requires cos(A) and sin(A), which are approximated
using Chebyshev expansions in conjunction with the cosine double angle recurrence.
For more details see [619, ] and Tisseur [568, ].
Problems
12.1. Show that if A ∈ Cn×n is involutory (A2 = I) then cos(kπA) = (−1)k I for all
integers k.
12.2. (Pólya and Szegő [475, Problem 210]) Does the equation $\sin(A) = \begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$ have a solution?
12.3. Derive (12.1), (12.3), (12.4), and (12.5) using the results of Section 1.3.
12.4. Assuming that $X_0$ is nonsingular, show that $X_1$ obtained from one step of the Newton sign iteration (5.16) can be written $X_1 = \cos(i^{-1}\log(X_0))$. (Cf. line 6 of Algorithm 12.7.)
12.5. Suppose we know cos(A). Can we determine sin(A) from (12.3), i.e., from the
relation cos2 (A) + sin2 (A) = I? Consider separately the cases where A is a general
matrix and A is triangular.
In some applications it is not f (A) that is required but rather the action of f (A)
on a vector: f (A)b. Indeed, if A is sparse then f (A) may be much denser than A
and for large dimensions it may be too expensive to compute or store f (A), while
computing and storing f (A)b may be feasible. In this chapter we consider various
facets of the f (A)b problem. Our treatment is relatively concise, for two reasons.
First, some of the techniques used for f (A)b are closely related to those for f (A) that
have been treated earlier in the book. Second, the f (A)b problem overlaps with the
theory and practice of Krylov subspace methods and of the numerical evaluation of
contour integrals. A thorough treatment of these latter topics is outside the scope
of this book. Moreover, the application of these techniques to the f (A)b problem is
a relatively new and active topic of research and the state of the art is advancing
rapidly.
The development of a method for f (A)b must be guided by the allowable opera-
tions with A. A minimal assumption is that matrix–vector products can be formed.
Another possible assumption is that A is a large sparse matrix for which linear sys-
tems (αA + βI)x = c can be solved efficiently by sparse direct methods but the
computation of the Hessenberg and Schur forms are impractical. Alternatively, we
may assume that any necessary factorization can be computed.
We begin this chapter by obtaining a b-specific representation of f (A)b in terms of
Hermite interpolation. Then we consider computational approaches based on Krylov
subspace methods, quadrature, and differential equations.
We emphasize that the methods described in this chapter are not intended for
f (A) = A−1 , for which the special form of f can usually be exploited to derive more
effective variants of these methods.
$$\psi_{A,b}(t) = \prod_{i=1}^{s} (t - \lambda_i)^{\ell_i}, \qquad 0 \le \ell_i \le n_i,$$
Theorem 13.1. For polynomials $p$ and $q$ and $A \in \mathbb{C}^{n\times n}$, $p(A)b = q(A)b$ if and only if $p^{(j)}(\lambda_i) = q^{(j)}(\lambda_i)$, $j = 0\colon \ell_i - 1$, $i = 1\colon s$.
With this background we can show that the required polynomial is of degree only
deg ψA,b ≤ deg ψA .
Theorem 13.2. Let $f$ be defined on the spectrum of $A \in \mathbb{C}^{n\times n}$ and let $\psi_{A,b}$ be the minimal polynomial of $A$ with respect to $b$. Then $f(A)b = q(A)b$, where $q$ is the unique Hermite interpolating polynomial of degree less than $\sum_{i=1}^{s} \ell_i = \deg\psi_{A,b}$ that satisfies the interpolation conditions
Proof. Let p be the polynomial given by Definition 1.4 such that f (A) = p(A) and
consider q defined by (13.1). By (1.7), (13.1), and Theorem 13.1, q(A)b = p(A)b =
f (A)b.
Of course, the polynomial $q$ in Theorem 13.2 depends on both $A$ and $b$, and $q(A)c = f(A)c$ is not guaranteed for $c \ne b$.
The $k$th Krylov subspace of $A \in \mathbb{C}^{n\times n}$ and a nonzero vector $b \in \mathbb{C}^n$ is defined by
$$K_k(A, b) = \operatorname{span}\{ b, Ab, A^2 b, \dots, A^{k-1} b \}.$$
Theorem 13.2 says that f (A)b ∈ Kd (A, b), where d = deg ψA,b . Thus the size of a
Krylov subspace necessary to capture f (A)b depends on both A and b. Note also that
deg ψA,b can be characterized as the smallest k such that Kk (A, b) = Kk+1 (A, b).
We consider Krylov subspace methods in the next section.
that is, the Arnoldi vectors $\{q_i\}_{i=1}^{k}$ form an orthonormal basis for the Krylov subspace $K_k(A, q_1)$. The Arnoldi process produces the factorization
$$A Q_k = Q_k H_k + h_{k+1,k}\, q_{k+1} e_k^T.$$

1   for $k = 1\colon n$
2       $z = Aq_k$
3       for $i = 1\colon k$
4           $h_{ik} = q_i^* z$
5           $z = z - h_{ik} q_i$   % modified Gram–Schmidt
6       end
7       $h_{k+1,k} = \|z\|_2$
8       if $h_{k+1,k} = 0$, $m = k$, quit, end
9       $q_{k+1} = z/h_{k+1,k}$
10  end
We are effectively evaluating f on the smaller Krylov subspace Kk (A, q1 ) and then
expanding the result back onto the original space Cn .
When is (13.7) an exact approximation? Clearly, if $k = n$ is reached then $Q_k \in \mathbb{C}^{n\times n}$ is unitary and $f(A)b = f(Q_n H_n Q_n^*)b = Q_n f(H_n) Q_n^* b = Q_n f(H_n)\|b\|_2 e_1 = f_k$. More precisely, upon termination of the Arnoldi process on step $m = \deg\psi_{A,b}$, we have $f_m = f(A)b$ (see Problem 13.1).
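A minimal MATLAB sketch of the Arnoldi approximation $f_k = \|b\|_2 Q_k f(H_k) e_1$ in (13.7); the function is passed as a handle suitable for funm (e.g., @exp), and no restarting or convergence test is attempted:

    function fk = funm_arnoldi_sketch(A, b, k, f)
    %FUNM_ARNOLDI_SKETCH  Arnoldi approximation (13.7) to f(A)*b.
    n = length(b);
    Q = zeros(n, k+1);  H = zeros(k+1, k);
    Q(:,1) = b/norm(b);
    for j = 1:k
        z = A*Q(:,j);
        for i = 1:j
            H(i,j) = Q(:,i)'*z;
            z = z - H(i,j)*Q(:,i);          % modified Gram-Schmidt
        end
        H(j+1,j) = norm(z);
        if H(j+1,j) == 0, k = j; break, end % happy breakdown: f_k is exact
        Q(:,j+1) = z/H(j+1,j);
    end
    Hk = H(1:k,1:k);
    Fk = funm(Hk, f);                       % f(H_k) on the small matrix
    fk = norm(b) * Q(:,1:k) * Fk(:,1);      % ||b||_2 Q_k f(H_k) e_1

For $f = \exp$, expm(Hk) would normally replace funm in the penultimate line.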
Some more insight into the approximation (13.7) is provided by the next two re-
sults. The first says that the approximation is exact if f is a polynomial of sufficiently
low degree.
Lemma 13.4 (Saad). Let $A \in \mathbb{C}^{n\times n}$ and $Q_k$, $H_k$ be the result of $k$ steps of the Arnoldi process on $A$. Then for any polynomial $p_j$ of degree $j \le k-1$ we have
$$p_j(A)\, q_1 = Q_k\, p_j(H_k)\, e_1.$$

since $e_k^T H_k^j e_1 = 0$ for $j \le k-2$. Thus the result is true for $j+1$, as required.
The next result identifies the approximation (13.7) as being obtained from a poly-
nomial that interpolates f on the spectrum of Hk rather than the spectrum of A.
Theorem 13.5 (Saad). Let $Q_k$, $H_k$ be the result of $k$ steps of the Arnoldi process on $A \in \mathbb{C}^{n\times n}$ and $b \in \mathbb{C}^n$. Then
$$f_k = \|b\|_2\, Q_k f(H_k) e_1 = \widetilde{p}_{k-1}(A)\, b,$$
where $\widetilde{p}_{k-1}$ is the unique polynomial of degree at most $k-1$ that interpolates $f$ on the spectrum of $H_k$.
Proof. Note first that the upper Hessenberg matrix $H_k$ has nonzero subdiagonal elements $h_{j+1,j}$, which implies that it is nonderogatory (see Problem 13.3). Consequently, the minimal polynomial of $H_k$ is the characteristic polynomial and so has degree $k$. By Definition 1.4, $f(H_k) = \widetilde{p}_{k-1}(H_k)$ for a polynomial $\widetilde{p}_{k-1}$ of degree at most $k-1$. Hence
$$\|b\|_2\, Q_k f(H_k) e_1 = \|b\|_2\, Q_k \widetilde{p}_{k-1}(H_k) e_1 = \|b\|_2\, \widetilde{p}_{k-1}(A) q_1 = \widetilde{p}_{k-1}(A) b,$$
We briefly discuss some of the important issues associated with the approximation
(13.7).
Restarted Arnoldi. As k increases so does the cost of storing the Arnoldi vectors
q1 , . . . , qk and computing Hk . It is standard practice in the context of linear systems
(f (x) = 1/x) and the eigenvalue problem to restart the Arnoldi process after a fixed
number of steps with a judiciously chosen vector that incorporates information gleaned
from the computations up to that step. For general f restarting is more difficult
because of the lack of a natural residual. Eiermann and Ernst [174, ] show how
restarting can be done for general f by a technique that amounts to changing at the
restart both the starting vector and the function. This method requires the evaluation
of f at a block Hessenberg matrix of size km, where m is the restart length. This
expense is avoided in a different implementation by Afanasjew, Eiermann, Ernst, and
Güttel [3, ], which employs a rational approximation to f and requires only the
solution of k Hessenberg linear systems of size m. Further analysis of restarting is
given by Afanasjew, Eiermann, Ernst, and Güttel [4, ].
Existence of f (Hk ). For k < deg ψA,b it is possible that f (Hk ) is not defined, even
though f (A) is. From (13.6) it follows that the field of values of Hk is contained in
the field of values of A. Therefore a sufficient condition for f (Hk ) to be defined is
that f and its first n derivatives are defined at all points within the field of values
of A.
Computing f (Hk ). The Arnoldi approximation (13.7) requires the computation
of f (Hk )e1 , which is the first column of f (Hk ). Since k is of moderate size we can
compute the whole matrix f (Hk ) by any of the methods described in earlier chapters
of this book, taking advantage of the Hessenberg structure if possible. Whether or not
it is possible or advantageous to apply a method specific to this f (Hk )b problem—
such as those described in the following sections—depends on f , k, and the desired
accuracy.
Convergence, error bounds, and stopping test. The quality of the approximations (13.7) depends on how well $\widetilde{p}_{k-1}(A)b$ approximates $f(A)b$, where $\widetilde{p}_{k-1}$ is defined in Theorem 13.5. It is well known that the eigenvalues in the outermost part of the spectrum tend to be approximated first in the Arnoldi process, so we might expect
the components of the approximations (13.7) in the eigenvectors corresponding to
the outermost part of the spectrum to be the most accurate. We state a result
that describes convergence for the exponential function and Hermitian matrices with
spectrum in the left half-plane.
(13.7) of $e^A b$ is bounded by
$$\|e^A b - \beta Q_k e^{H_k} e_1\|_2 \le \begin{cases} 10\beta\, e^{-k^2/(5\rho)}, & 2\sqrt{\rho} \le k \le 2\rho,\\[1ex] \dfrac{10\beta}{\rho}\, e^{-\rho} \Bigl(\dfrac{e\rho}{k}\Bigr)^{k}, & k \ge 2\rho. \end{cases}$$

Proof. See Hochbruck and Lubich [291].

The bound of the theorem shows that rapid convergence is obtained for $k \ge \|A\|_2^{1/2}$. Similar convergence bounds for $f(x) = e^x$ are given by Hochbruck and
Lubich in [291, ] for general A under the assumption that the field of values of A
lies in a disk or a sector. However, convergence for general A and general f is poorly
understood (see Problem 13.5).
We are not aware of a criterion for general f and general A for determining when
the approximation (13.7) is good enough and hence when to terminate the Arnoldi
process. For the exponential, an error estimate
13.3. Quadrature

13.3.1. On the Real Line

Suppose we have a representation $f(A) = \int_0^a g(A,t)\, dt$ for some rational function $g$ and $a \in \mathbb{R}$. Then the use of a quadrature formula
$$\int_0^a g(t)\, dt \approx \sum_{k=1}^{m} c_k\, g(t_k) \qquad (13.8)$$
is natural, yielding
$$f(A)b \approx \sum_{k=1}^{m} c_k\, g(A, t_k)\, b. \qquad (13.9)$$
We have seen that the matrix sign function, matrix pth roots, the unitary polar factor,
and the matrix logarithm all have the required integral representations with a rational
function g (see (5.3), (7.1), (8.7), and (11.1)) and so are amenable to this approach.
Because g is rational, (13.9) reduces to matrix–vector products and the solution of
linear systems. If a Hessenberg reduction $A = QHQ^*$ can be computed then (13.9) transforms to
$$f(A)b \approx Q \sum_{k=1}^{m} c_k\, g(H, t_k)\, d, \qquad d = Q^* b. \qquad (13.10)$$
Since a Hessenberg system can be solved in O(n2 ) flops, as opposed to O(n3 ) flops for
a dense system, (13.10) will be more efficient than (13.9) for sufficiently large m or
sufficiently many different vectors b that need to be treated with the same A; indeed,
ultimately the most efficient approach will be to employ a Schur decomposition instead
of a Hessenberg decomposition.
The quadrature formula (13.8) might be a Gauss rule, a repeated rule, such as
the repeated trapezium or repeated Simpson rule, or the result of applying adaptive
quadrature with one of these rules. Gaussian quadrature has close connections with
Padé approximation, and we noted in Section 11.4 that for the logarithm the use
of Gauss–Legendre quadrature with the integral (11.1) leads to the diagonal Padé
approximants to the logarithm, via (11.18). Focusing on log(I + X) for the moment,
Theorem 11.6 shows that an error bound for the Padé approximant, and hence for
Gaussian quadrature, is available for kXk < 1. For kXk > 1 the inverse scaling and
squaring approach is not attractive in the context of log(A)b, since it requires matrix
square roots. However, adaptive quadrature can be used and automatically chooses
the location and number of the quadrature points in order to achieve the desired
accuracy. For more details, including numerical experiments, see Davies and Higham
[136, ].
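As a concrete illustration of (13.9) for the logarithm, the following MATLAB sketch applies a fixed repeated Simpson rule to the integral representation (11.1); this is only a sketch under the assumption that a fixed rule is adequate, whereas in practice adaptive quadrature, as discussed above, would choose the points automatically:

    function y = logm_times_vector_quad(A, b, m)
    %LOGM_TIMES_VECTOR_QUAD  Approximate log(A)*b by a repeated Simpson rule
    %   (2m panels) applied to log(A) = int_0^1 (A-I)[t(A-I)+I]^{-1} dt.
    n = size(A,1);  I = eye(n);
    t = linspace(0, 1, 2*m+1);
    h = t(2) - t(1);
    w = (h/3) * [1, repmat([4 2], 1, m-1), 4, 1];   % composite Simpson weights
    y = zeros(n,1);
    for k = 1:numel(t)
        y = y + w(k) * ((t(k)*(A - I) + I) \ ((A - I)*b));
    end

Each quadrature point costs one linear system solve with a shifted matrix, in line with the observation above that (13.9) reduces to matrix–vector products and linear system solutions.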
where $f$ is analytic on and inside a closed contour $\Gamma$ that encloses the spectrum of $A$. Suppose we take for the contour $\Gamma$ a circle with centre $\alpha$ and radius $\beta$, $\{\, z : z - \alpha = \beta e^{i\theta},\ 0 \le \theta \le 2\pi \,\}$, and then approximate the integral using the repeated trapezium rule. Using $dz = i\beta e^{i\theta}\, d\theta = i(z(\theta) - \alpha)\, d\theta$ and writing the integrand in (13.11) as $g(z)$, we obtain
$$\int_\Gamma g(z)\, dz = i \int_0^{2\pi} (z(\theta) - \alpha)\, g(z(\theta))\, d\theta. \qquad (13.12)$$
The integral in (13.12) is a periodic function of $\theta$ with period $2\pi$. Applying the $m$-point repeated trapezium rule to (13.12) gives
$$\int_\Gamma g(z)\, dz \approx \frac{2\pi i}{m} \sum_{k=0}^{m-1} (z_k - \alpha)\, g(z_k),$$
where $z_k - \alpha = \beta e^{2\pi k i/m}$, that is, $z_0, \dots, z_m$ are equally spaced points on the contour $\Gamma$ (note that since $\Gamma$ is a circle we have $z_0 = z_m$). When $A$ is real and we take $\alpha$ real it suffices to use just the $z_k$ in the upper half-plane and then take the real part of the result. The attraction of the trapezium rule is that it is exponentially accurate when applied to an analytic integrand on a periodic domain [142, Sec. 4.6.5], [571].
Although this is a natural approach, it is in general very inefficient unless A is
well conditioned, as shown by Davies and Higham [136, ] and illustrated below.
The problem is that Γ is contained in a very narrow annulus of analyticity. However,
by using a suitable change of variables in the complex plane, carefully constructed
based on knowledge of the extreme points of the spectrum and any branch cuts or
singularities of f , quadrature applied to the Cauchy integral can be very effective, as
shown by Hale, Higham, and Trefethen [240, ].
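A naive MATLAB sketch of the circular-contour trapezium rule just described, with no conformal mapping, recalling the Cauchy integral $f(A) = \frac{1}{2\pi i}\int_\Gamma f(z)(zI - A)^{-1}\, dz$; the centre alpha and radius beta must be chosen so that the circle encloses the spectrum of $A$ and avoids any singularities or branch cuts of $f$:

    function y = funm_cauchy_trap(A, b, f, alpha, beta, m)
    %FUNM_CAUCHY_TRAP  m-point trapezium rule for f(A)*b on |z - alpha| = beta.
    %   Naive sketch: convergence can be extremely slow for ill conditioned A.
    n = size(A,1);
    y = zeros(n,1);
    for k = 0:m-1
        zk = alpha + beta*exp(2*pi*1i*k/m);
        y = y + (zk - alpha) * f(zk) * ((zk*eye(n) - A) \ b);
    end
    y = y/m;

For example, funm_cauchy_trap(A, b, @sqrt, alpha, beta, m) exhibits the behaviour described in the Pascal matrix example that follows: very many points are needed unless the contour, or a change of variables, is chosen carefully.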
For illustration consider the computation of A1/2 for A the 5 × 5 Pascal matrix,
which is symmetric positive definite with spectrum on the interval [0.01, 92.3]. First,
we choose for Γ the circle with centre (λmin + λmax )/2 and radius λmax /2, which is a
compromise between enclosing the spectrum and avoiding the negative real axis. We
find that about 32,000 and 262,000 points are needed to provide 2 and 13 decimal
digits of accuracy, respectively. As an alternative we instead conformally map the
region of analyticity of f and the resolvent to an annulus, with the interval containing
the spectrum mapping to the inner boundary circle and the negative real axis mapping
to the outer boundary circle. Then we apply the trapezium rule over a circle in the
annulus. With the mapping constructed as described in [240, ], we need just 5
and 35 points to provide 2 and 13 decimal digits of accuracy, respectively—a massive
improvement, due to the enlarged annulus of analyticity. By further exploiting the
properties of the square root function these numbers of points can be reduced even
more, with just 20 points yielding 14 digits. Theorems are available to explain this
remarkably fast (geometric) convergence [240, ].
When a quadrature rule is applied to a (transformed) Cauchy integral it produces
an answer that can be interpreted as the exact integral of a rational function whose
poles are the nodes and whose residues are the weights. It is therefore natural to ask
what rational approximations the trapezium rule is producing in the above example.
It turns out the last of the approaches mentioned above reproduces what is essentially
the best rational approximation to x−1/2 in Theorem 5.15 identified by Zolotarev.
However, conformal mapping and the trapezium rule can be applied to arbitrary
functions and to matrices with spectra that are not all real, for which associated best
rational approximations are usually not known.
Numerical evaluation of the ψ functions (see Section 10.7.4) via contour integrals,
in the context of exponential integrators, is discussed by Kassam and Trefethen [336,
] and Schmelzer and Trefethen [503, ]. For the case of the exponential and
matrices with negative real eigenvalues, see Trefethen, Weideman, and Schmelzer [574,
].
    dy/dt = α(A − I)[t(A − I) + I]^{-1} y,    y(0) = b,    0 ≤ t ≤ 1,            (13.13)

has the unique solution y(t) = [t(A − I) + I]^α b, and hence y(1) = A^α b.
Proof. The existence of a unique solution follows from the fact that the ODE satisfies a Lipschitz condition with Lipschitz constant sup_{0≤t≤1} ‖(A − I)[t(A − I) + I]^{-1}‖ < ∞. It is easy to check that y(t) is this solution.
Thus y(1) = Aα b can be obtained by applying an ODE solver to (13.13). The
initial value problem can potentially be stiff, depending on α, the matrix A, and the
requested accuracy, so some care is needed in choosing a solver. Again, a Hessenberg
or Schur reduction of A can be used to reduce the cost of evaluating the differential
equation.
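A possible realization in MATLAB is sketched below (illustrative only; the test matrix, the value of α, the tolerances, and the choice of ode45 are assumptions; a stiff solver such as ode15s may be needed in practice).

    % Sketch: y(1) = A^alpha * b by integrating (13.13) with an ODE solver.
    n = 5; A = pascal(n); b = ones(n,1); alpha = 1/3;
    I = eye(n);
    odefun = @(t,y) alpha * (A - I) * ((t*(A - I) + I) \ y);
    opts = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);
    [~, Y] = ode45(odefun, [0 1], b, opts);
    y1 = Y(end,:).';                               % approximation to A^alpha * b
    ref = expm(alpha*logm(A))*b;                   % reference value
    norm(y1 - ref) / norm(ref)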
Problems
13.1. Show that after m = deg ψ_{A,b} steps of the Arnoldi process with q_1 = b/‖b‖_2 we have f_m = ‖b‖_2 Q_m f(H_m) e_1 = f(A)b.
13.2. Show that dim(Km (A, b)) = min(m, deg ψA,b ).
13.3. Show that if H ∈ C^{n×n} is upper Hessenberg with h_{j+1,j} ≠ 0 for all j then H is nonderogatory, that is, no eigenvalue appears in more than one Jordan block.
13.4. (T. Lippert, communicated by A. Frommer) The following recursive algorithm
computes x = Xk b, where Xk is the kth iterate from the Newton–Schulz iteration
(5.22) for the matrix sign function, using only matrix–vector products.
1   function x = NS(A, b, k)
2   if k = 1
3       x = (1/2) A(3b − A(Ab))
4   else
5       x = NS(A, b, k − 1)
6       x = NS(A, x, k − 1)
7       x = 3b − x
8       x = (1/2) NS(A, x, k − 1)
9   end
Explain why the algorithm works and evaluate its cost, comparing it with the cost of
computing Xk and then forming Xk b.
13.5. (Research problem) Obtain error bounds for the Arnoldi approximation
(13.7) for general A and general f .
13.6. (Research problem) For f (x) = ex the Arnoldi approximation (13.7) re-
quires the computation of eH , where H is upper Hessenberg. Can Algorithm 10.20
(scaling and squaring) be usefully adapted to take advantage of the Hessenberg struc-
ture?
The basic idea of the Krylov subspace techniques considered in this paper is to
approximately project the exponential of the large matrix onto a small Krylov subspace.
The only matrix exponential operation performed
is therefore with a much smaller matrix.
— Y. SAAD, Analysis of Some Krylov Subspace Approximations to
the Matrix Exponential Operator (1992)
This final chapter treats a few miscellaneous topics that do not fit elsewhere in the
book.
which shows that A1/2 is also unitary. Therefore the square root function preserves
the property of being unitary. However, underlying this result is a much more general
one. Suppose that M −1 A∗ M = A−1 for some nonsingular M ∈ Cn×n . Then we have
    A^{-1/2} = (M^{-1} A^* M)^{1/2} = M^{-1} (A^*)^{1/2} M = M^{-1} (A^{1/2})^* M,        (14.1)
which shows that A1/2 satisfies a relation of the same form. Thus the same proof
has shown preservation of structure for a much wider class of matrices. This line
of thinking suggests that there are gains to be made by carrying out analysis in a
suitably general setting. A setting that has proved very fruitful is a scalar product
space and its associated structures.
Let K = R or C and consider a scalar product on K^n, that is, a bilinear or sesquilinear form ⟨·,·⟩_M defined by any nonsingular matrix M: for x, y ∈ K^n,

    ⟨x, y⟩_M = x^T M y    for real or complex bilinear forms,
    ⟨x, y⟩_M = x^* M y    for sesquilinear forms.
Table 14.1. Structured matrices associated with some scalar products. Here R is the reverse identity matrix (ones on the antidiagonal, zeros elsewhere),

    J = [ 0    I_n ]        Σ_{p,q} = [ I_p    0  ]
        [ -I_n  0  ],                  [  0  -I_q ],    with p + q = n.

    Space    M         Automorphism group G       Jordan algebra J       Lie algebra L

    Bilinear forms
    R^n      I         Real orthogonals           Symmetrics             Skew-symmetrics
    C^n      I         Complex orthogonals        Complex symmetrics     Cplx skew-symmetrics
    R^n      Σ_{p,q}   Pseudo-orthogonals         Pseudosymmetrics       Pseudoskew-symmetrics
    C^n      Σ_{p,q}   Cplx pseudo-orthogonals    Cplx pseudo-symm.      Cplx pseudo-skew-symm.
    R^n      R         Real perplectics           Persymmetrics          Perskew-symmetrics
    R^{2n}   J         Real symplectics           Skew-Hamiltonians      Hamiltonians
    C^{2n}   J         Complex symplectics        Cplx J-skew-symm.      Complex J-symmetrics

    Sesquilinear forms
    C^n      I         Unitaries                  Hermitian              Skew-Hermitian
    C^n      Σ_{p,q}   Pseudo-unitaries           Pseudo-Hermitian       Pseudoskew-Hermitian
    C^{2n}   J         Conjugate symplectics      J-skew-Hermitian       J-Hermitian
The adjoint of A with respect to the scalar product ⟨·,·⟩_M, denoted by A^⋆, is uniquely defined by the property ⟨Ax, y⟩_M = ⟨x, A^⋆y⟩_M for all x, y ∈ K^n. It can be shown that the adjoint is given explicitly by

    A^⋆ = M^{-1} A^T M    for bilinear forms,
    A^⋆ = M^{-1} A^* M    for sesquilinear forms.                                (14.2)

Associated with ⟨·,·⟩_M is an automorphism group G, a Lie algebra L, and a Jordan algebra J, which are the subsets of K^{n×n} defined by

    G := { G : ⟨Gx, Gy⟩_M = ⟨x, y⟩_M  for all x, y ∈ K^n } = { G : G^⋆ = G^{-1} },      (14.3)
    L := { L : ⟨Lx, y⟩_M = −⟨x, Ly⟩_M for all x, y ∈ K^n } = { L : L^⋆ = −L },          (14.4)
    J := { S : ⟨Sx, y⟩_M = ⟨x, Sy⟩_M  for all x, y ∈ K^n } = { S : S^⋆ = S }.           (14.5)
G is a multiplicative group, while L and J are linear subspaces. Table 14.1 shows a
sample of well-known structured matrices in G, L, or J associated with some scalar
products.
In this language, (14.1) and its analog with ∗ replaced by T show that A ∈ G
implies A1/2 ∈ G, that is, the square root function preserves matrix automorphism
groups. One can ask which other functions preserve G. A thorough investigation of
this question is given by Higham, Mackey, Mackey, and Tisseur [283, ].
Mappings between G, L, and J provide examples of interplay between these structures. For example, if A ∈ L then X = e^A ∈ G, because X^⋆ = (e^A)^⋆ = e^{A^⋆} = e^{-A} = X^{-1}. This generalizes the familiar property that the exponential of a skew-Hermitian
matrix is unitary. A more general setting in which similar questions can be consid-
ered is a Lie group and its associated Lie algebra (not necessarily arising from a scalar
product), and the observation just made is a special case of the well-known one that
the exponential map takes the Lie algebra into the corresponding Lie group. This
mapping is important in the numerical solution of ODEs on Lie groups by geometric
integration methods. For details, see Hairer, Lubich, and Wanner [239, ], Iserles,
Munthe-Kaas, Nørsett, and Zanna [313, ], and Iserles and Zanna [314, ].
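The following MATLAB sketch gives a numerical illustration of this mapping for the scalar product defined by M = J, so that L consists of the Hamiltonian matrices and G of the symplectic matrices (the random test data are illustrative only, not an example from the book).

    % Sketch: the exponential maps the Lie algebra L into the group G,
    % illustrated for M = J (Hamiltonian -> symplectic).
    n = 3; J = [zeros(n) eye(n); -eye(n) zeros(n)];
    S = randn(2*n); S = (S + S')/2;        % random symmetric matrix
    A = -J*S;                              % then A'*J + J*A = 0: A is Hamiltonian (A in L)
    X = expm(A);
    norm(A'*J + J*A)                       % ~0: confirms A is in the Lie algebra
    norm(X'*J*X - J)                       % ~0 to roundoff: X is symplectic, i.e., X in G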
Two particularly important classes in Table 14.1 are the Hamiltonian matrices
and the symplectic matrices. Results on logarithms of such matrices can be found in
Dieci [152, ] and [153, ]. The result that every real skew-Hamiltonian matrix
has a real Hamiltonian square root is proved by Faßbender, Mackey, Mackey, and Xu
[182, ], while Ikramov proves the corresponding result for complex matrices [308,
].
Some of the techniques developed in these contexts can be applied even more
generally. For example, A ∈ C^{n×n} is centrosymmetric if JAJ = A, where J here denotes the reverse identity matrix (the matrix R of Table 14.1), and by the same argument as above JA^{1/2}J = A^{1/2}, so the square root function preserves centrosymmetry.
Perturbation analysis of matrix functions can be done in such way as to restrict
perturbations to those that maintain the structure of the matrices being perturbed,
yielding structured condition numbers. Such analysis has been carried out by Davies
for the Jordan and Lie algebras J and L [134, ].
The polar decomposition A = U H interacts with the above structures in an inter-
esting way. First, various results can be proved about how U and H inherit structure
from A. Second, the polar decomposition can be generalized to a decomposition
A = U H in which U ∈ G and H ∈ J, subject to suitable conditions on A. For
details and further references see Higham [277, ], Higham, Mackey, Mackey, and
Tisseur [282, ], [283, ], and Mackey, Mackey, and Tisseur [400, ] (see, in
particular, the discussion in Section 6), and for similar considerations in a Lie group
setting see Iserles and Zanna [314, ] and Munthe-Kaas, Quispel, and Zanna [443,
].
A circulant matrix C is diagonalized by the DFT matrix F_n: F_n C F_n^{-1} = D = diag(d_i), so f(C) = F_n^{-1} f(D) F_n can be evaluated in O(n log n) operations via the FFT.
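As an illustration (a sketch relying on the standard diagonalization of circulants by the FFT; it is not code from the book), f(C)b can be formed without ever constructing C:

    % Sketch: f(C)*b for a circulant C with first column c, here f = exp.
    n = 6; c = randn(n,1); b = randn(n,1);
    d = fft(c);                            % eigenvalues of the circulant
    y = ifft(exp(d) .* fft(b));            % y = expm(C)*b in O(n log n) operations
    y = real(y);                           % c and b are real: discard roundoff imaginary parts
    C = toeplitz(c, [c(1); c(end:-1:2)]);  % explicit circulant, for checking only
    norm(y - expm(C)*b) / norm(expm(C)*b)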
Assuming we know that A and f (A) are both structured (possibly with different, but
related structures), what methods are available that exploit the structure? This is
a very general question about which much is known. We give a small selection of
references.
For algorithms for computing the square root or logarithm of orthogonal and
unitary matrices, or more generally matrices in a group G, see Cardoso, Kenney, and
Silva Leite [93, ], Cheng, Higham, Kenney, and Laub [107, ], and Higham,
Mackey, Mackey, and Tisseur [283, ].
If f preserves a certain structure, do the Padé approximants to f also preserve the
structure? It is well known that diagonal Padé approximants rm to the exponential
have the property that r(A) is unitary if A is skew-Hermitian. Dieci [152, ,
Thm. 2.2] shows a kind of converse: that diagonal Padé approximants to the logarithm
yield skew-Hermitian matrices when evaluated at unitary matrices.
Computing functions of matrices stored in a data sparse format in a way that
exploits the structure is relatively little explored to date, other than of course for the
matrix inverse. For some work on functions of H-matrices see Baur and Benner [47,
], Gavrilyuk, Hackbusch, and Khoromskij [209, ], and Grasedyck, Hackbusch,
and Khoromskij [228, ].
14.2 Exponential Decay of Functions of Banded Matrices
The magnitudes of three functions of A are as follows, where the elements are shown to one significant figure:

    e^A = [ 9e+1  8e+1  3e+1  1e+1  3e+0  5e-1
            8e+1  1e+2  9e+1  4e+1  1e+1  3e+0
            3e+1  9e+1  1e+2  9e+1  4e+1  1e+1
            1e+1  4e+1  9e+1  1e+2  9e+1  3e+1
            3e+0  1e+1  4e+1  9e+1  1e+2  8e+1
            5e-1  3e+0  1e+1  3e+1  8e+1  9e+1 ],

    log(A) = [ 1e+0  3e-1  3e-2  6e-3  1e-3  2e-4
               3e-1  1e+0  3e-1  4e-2  6e-3  1e-3
               3e-2  3e-1  1e+0  3e-1  4e-2  6e-3
               6e-3  4e-2  3e-1  1e+0  3e-1  3e-2
               1e-3  6e-3  4e-2  3e-1  1e+0  3e-1
               2e-4  1e-3  6e-3  3e-2  3e-1  1e+0 ],

    A^{1/2} = [ 2e+0  3e-1  2e-2  2e-3  4e-4  7e-5
                3e-1  2e+0  3e-1  2e-2  2e-3  4e-4
                2e-2  3e-1  2e+0  3e-1  2e-2  2e-3
                2e-3  2e-2  3e-1  2e+0  3e-1  2e-2
                4e-4  2e-3  2e-2  3e-1  2e+0  3e-1
                7e-5  4e-4  2e-3  2e-2  3e-1  2e+0 ].
In all three cases there is decay of the elements away from the main diagonal, the
rate of decay depending on the function. This is a general phenomenon for symmetric
band matrices and is not limited to matrices that are diagonally dominant.
Theorem 14.1. Let f be analytic in an ellipse containing Λ(A) and let A ∈ C^{n×n} be Hermitian and of bandwidth m (a_ij = 0 for |i − j| > m). Then f(A) = (f_ij) satisfies |f_ij| ≤ Cρ^{|i−j|}, where C is a constant and ρ = q^{1/m}, where q ∈ (0, 1) depends only on f.
The theorem shows that the elements of f(A) are bounded in an exponentially decaying manner away from the diagonal, with the bound decreasing as the bandwidth decreases. Note that this does not necessarily mean that "decay to zero" is observed in practice, since the constant C may be large.
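A quick MATLAB illustration of this decay (with an assumed tridiagonal test matrix, not an example from the book) is:

    % Sketch: decay of the entries of a function of a banded Hermitian matrix.
    n = 20;
    A = full(gallery('tridiag', n, -1, 2, -1));   % symmetric tridiagonal, bandwidth m = 1
    E = expm(A);
    semilogy(1:n, abs(E(1,:)), 'o-')              % first row of e^A: roughly geometric decay
    xlabel('j'), ylabel('|(e^A)_{1j}|')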
Further decay results applying to general A and with a certain “graph theoretic
distance” replacing |i − j| in the exponent are provided by Benzi and Razouk [58,
].
Bounds of the same form as in Theorem 14.1 specialized to the exponential and
for general A are obtained by Iserles [312, ].
Appendix B
Background: Definitions and Useful Facts
This appendix collects together a variety of basic definitions, properties, and results
from matrix analysis and numerical analysis that are needed in the main text. Terms
being defined are in boldface. Where specific references are not given, general sources
of further information are Horn and Johnson [295, ] and Lancaster and Tismenetsky [371, ].
The characteristic polynomial of A is q(t) = det(tI − A), which has degree n. The minimal polynomial of A is the unique
monic polynomial ψ of lowest degree such that ψ(A) = 0.
The set of eigenvalues of A, called the spectrum, is denoted by Λ(A) = {λ1 , . . . , λn }.
We sometimes denote by λi (A) the ith eigenvalue of A in some (usually arbitrary)
ordering. The eigenspace of A corresponding to an eigenvalue λ is the vector space
{ x ∈ Cn : Ax = λx }.
The algebraic multiplicity of λ is its multiplicity as a zero of the characteristic
polynomial q(t). The geometric multiplicity of λ is dim(null(A − λI)), which is
the dimension of the eigenspace of A corresponding to λ, or equivalently the number
of linearly independent eigenvectors associated with λ.
Any matrix A ∈ Cn×n can be expressed in the Jordan canonical form
    Z^{-1}AZ = J = diag(J_1, J_2, . . . , J_p),                                  (B.1a)

    J_k = J_k(λ_k) = [ λ_k   1
                             λ_k   ⋱
                                   ⋱    1
                                        λ_k ]  ∈ C^{m_k × m_k},                  (B.1b)
where Z is nonsingular and m1 + m2 + · · · + mp = n. The λk are the eigenvalues of
A. The matrices Jk are called Jordan blocks. The Jordan matrix J is unique up to
the ordering of the blocks Jk , but the transforming matrix Z is not unique.
In terms of the Jordan canonical form (B.1), the algebraic multiplicity of λ is the
sum of the dimensions of the Jordan blocks in which λ appears, while the geometric
multiplicity is the number of Jordan blocks in which λ appears.
An eigenvalue is called semisimple if its algebraic and geometric multiplicities
are the same or, equivalently, if it occurs only in 1 × 1 Jordan blocks. An eigenvalue is
defective if is not semisimple, that is, if it appears in a Jordan block of size greater
than 1, or, equivalently, if its algebraic multiplicity exceeds its geometric multiplicity.
A matrix is defective if it has a defective eigenvalue, or, equivalently, if it does not
have a complete set of linearly independent eigenvectors.
A matrix is derogatory if in the Jordan canonical form an eigenvalue appears
in more than one Jordan block. Equivalently, a matrix is derogatory if its minimal
polynomial (see page 4) has degree less than that of the characteristic polynomial.
The properties of being defective and derogatory are independent. A matrix that is
not derogatory is called nonderogatory.
A matrix is diagonalizable if in the Jordan canonical form the Jordan matrix J
is diagonal, that is, mk ≡ 1 and p = n in (B.1).
The index of an eigenvalue is the dimension of the largest Jordan block in which
it appears. The index of a matrix is the index of its zero eigenvalue, which can be
characterized as the smallest nonnegative integer k such that rank(Ak ) = rank(Ak+1 ).
The spectral radius ρ(A) = max{ |λ| : det(A − λI) = 0 }.
The field of values of A ∈ C^{n×n} is the set of all Rayleigh quotients:

    F(A) = { z^*Az / (z^*z) : 0 ≠ z ∈ C^n }.
The set F (A) is convex, and when A is normal it is the convex hull of the eigenvalues.
For a Hermitian matrix F (A) is a segment of the real axis and for a skew-Hermitian
matrix it is a segment of the imaginary axis.
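A standard way to visualize F(A), sketched below in MATLAB with assumed random test data (not an example from the book), is to compute boundary points from the extreme eigenvectors of the Hermitian part of e^{-iθ}A:

    % Sketch: points on the boundary of the field of values F(A).
    n = 5; A = randn(n) + 1i*randn(n);
    theta = linspace(0, 2*pi, 120); z = zeros(size(theta));
    for k = 1:numel(theta)
        B = exp(-1i*theta(k))*A;
        H = (B + B')/2;                          % Hermitian part of the rotated matrix
        [V, D] = eig(H);
        [~, j] = max(real(diag(D)));             % eigenvector of the largest eigenvalue
        x = V(:, j);
        z(k) = (x'*A*x) / (x'*x);                % Rayleigh quotient: a boundary point of F(A)
    end
    plot(real(z), imag(z), '.-'), axis equal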
The following eigenvalue localization theorem is often useful.
Theorem B.1 (Gershgorin, 1931). The eigenvalues of A ∈ C^{n×n} lie in the union of the n disks in the complex plane

    D_i = { z ∈ C : |z − a_ii| ≤ Σ_{j≠i} |a_ij| },    i = 1:n.
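A small MATLAB check of the theorem (illustrative only; it assumes implicit expansion, available from MATLAB R2016b) is:

    % Sketch: verify that every eigenvalue lies in some Gershgorin disk.
    n = 6; A = randn(n) + 1i*randn(n);
    lam = eig(A);
    r = sum(abs(A), 2) - abs(diag(A));           % radii of the Gershgorin disks
    inDisk = abs(lam.' - diag(A)) <= r;          % inDisk(i,k): lambda_k lies in disk D_i
    all(any(inDisk, 1))                          % evaluates to true: each eigenvalue is in some disk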
By convention, blank entries in a displayed matrix denote zeros.
A Toeplitz matrix is one for which the entries along each diagonal are constant:
aij = ri−j for some parameters rk . A circulant matrix is a special type of Toeplitz
matrix in which each row is a cyclic permutation one element to the right of the row
above. Examples:
    T = [ a  b  c            C = [ a  b  c
          d  a  b                  c  a  b
          e  d  a ],               b  c  a ].
For more on the pseudoinverse see any textbook on numerical linear algebra or
Ben-Israel and Greville [52, ].
Lemma B.2. For U ∈ Cm×n each of the following conditions is equivalent to U being
a partial isometry:
(a) U^+ = U^*,
(b) UU^*U = U,
(c) the singular values of U are all 0 or 1.
Proof. The equivalences are straightforward and can be obtained, for example,
using Problem B.4.
For more about partial isometries see Erdelyi [177, ] and Campbell and Meyer
[92, , Chap. 4].
B.7. Norms
A matrix norm on C^{m×n} is a function ‖·‖ : C^{m×n} → R satisfying the following conditions:
(1) ‖A‖ ≥ 0, with ‖A‖ = 0 if and only if A = 0,
(2) ‖αA‖ = |α| ‖A‖ for all α ∈ C and A ∈ C^{m×n},
(3) ‖A + B‖ ≤ ‖A‖ + ‖B‖ for all A, B ∈ C^{m×n}.
A subordinate matrix norm is defined in terms of a vector norm by ‖A‖ = max_{x≠0} ‖Ax‖/‖x‖. The most important case is where the vector norm is the p-norm, defined by ‖x‖_p = (Σ_{i=1}^n |x_i|^p)^{1/p} (1 ≤ p < ∞) and ‖x‖_∞ = max_{i=1:n} |x_i|, which gives the matrix p-norm ‖A‖_p = max_{x≠0} ‖Ax‖_p/‖x‖_p.
Table B.1. Constants α_pq such that ‖A‖_p ≤ α_pq ‖A‖_q, A ∈ C^{m×n}.

               q = 1      q = 2          q = ∞      q = F
    p = 1      1          √m             m          √m
    p = 2      √n         1              √m         1
    p = ∞      n          √n             1          √n
    p = F      √n         √rank(A)       √m         1
The three most useful subordinate matrix norms are

    ‖A‖_1 = max_{1≤j≤n} Σ_{i=1}^m |a_ij|,    "max column sum",
    ‖A‖_2 = (ρ(A^*A))^{1/2} = σ_max(A),      the spectral norm,
    ‖A‖_∞ = max_{1≤i≤m} Σ_{j=1}^n |a_ij| = ‖A^*‖_1,    "max row sum".
These norms are all within a constant factor (depending only on the dimensions of
A) of each other, as summarized in Table B.1.
The Frobenius norm for A ∈ C^{m×n} is

    ‖A‖_F = ( Σ_{i=1}^m Σ_{j=1}^n |a_ij|² )^{1/2} = ( trace(A^*A) )^{1/2}.        (B.5)

For any subordinate matrix norm ‖I_n‖ = 1, whereas ‖I_n‖_F = √n. When results in this book assume the norm to be subordinate it is usually because they use ‖I‖ = 1.
If ‖AB‖ ≤ ‖A‖ ‖B‖ for all A ∈ C^{m×n} and B ∈ C^{n×p} then the norm is called consistent. (Note that there can be three different norms in this relation, so we should say the norms are consistent.) All subordinate matrix norms are consistent, as is the Frobenius norm.
A vector norm on C^n is absolute if ‖ |x| ‖ = ‖x‖ for all x ∈ C^n. A matrix norm subordinate to an absolute vector norm satisfies ‖ diag(d_i) ‖ = max_i |d_i| [295, , Thm. 5.6.37].
A norm on C^{m×n} for which ‖UAV‖ = ‖A‖ for all unitary U ∈ C^{m×m} and V ∈ C^{n×n} and all A ∈ C^{m×n} is called a unitarily invariant norm. Such a norm is a function only of the singular values of A, and hence it satisfies ‖A‖ = ‖A^*‖. To be more precise, for a unitarily invariant norm ‖A‖ is a symmetric gauge function of the singular values. A symmetric gauge function is an absolute, permutation-invariant norm on R^n, that is, a vector norm g such that g(|x|) = g(x), where |x| = (|x_i|), and g(x) = g(Px) for any permutation matrix P. A vector norm is absolute if and only if it is monotone, where a monotone norm is one for which |x| ≤ |y| ⇒ ‖x‖ ≤ ‖y‖. For more details, see Horn and Johnson [295, , Sec. 7.4] or Stewart and Sun [539, ].
For any unitarily invariant norm [296, , Cor. 3.5.10],

    ‖ABC‖ ≤ ‖A‖_2 ‖B‖ ‖C‖_2

(in fact, any two of the norms on the right-hand side can be 2-norms).
The following result is used in Chapter 8.
Theorem B.3 (Mirsky). Let A, B ∈ C^{m×n} have SVDs with diagonal matrices Σ_A, Σ_B ∈ R^{m×n}, where the diagonal elements are arranged in nonincreasing order. Then ‖A − B‖ ≥ ‖Σ_A − Σ_B‖ for every unitarily invariant norm.
Proof. See Mirsky [432, , Thm. 5] or Horn and Johnson [295, , Thm.
7.4.51].
The condition number (with respect to inversion) of A ∈ C^{n×n} is κ(A) = ‖A‖ ‖A^{-1}‖.
For any consistent norm and A ∈ C^{n×n},

    ρ(A) ≤ ‖A‖                                                                   (B.8)

(see Problem B.5).
For any A ∈ C^{n×n} and ε > 0 there is a consistent matrix norm (depending on A) such that ‖A‖ ≤ ρ(A) + ε [295, , Lem. 5.6.10]. In particular, if ρ(A) < 1 there is a consistent matrix norm such that ‖A‖ < 1. This fact provides an easy way to prove the result that

    ρ(A) < 1  ⇒  lim_{k→∞} A^k = 0.                                              (B.9)
Lemma B.4. For A, B ∈ C^{n×n} and any consistent matrix norm we have

    ‖A^m − B^m‖ ≤ ‖A − B‖ ( ‖A‖^{m−1} + ‖A‖^{m−2}‖B‖ + ⋯ + ‖B‖^{m−1} ).

Proof. The bound follows from the identity A^m − B^m = Σ_{i=0}^{m−1} A^i (A − B) B^{m−1−i}, which can be proved by induction. The inductive step uses A^m − B^m = (A^{m−1} − B^{m−1})B + A^{m−1}(A − B).
(b)
    A ≥ 0, B ≤ C, AB = BA, AC = CA  ⇒  AB ≤ AC;                                  (B.15)

    Re x^*Ay = (1/4)[ (x + y)^*A(x + y) − (x − y)^*A(x − y) ]

and set A = X_k. For any given x, y ∈ R^n the right-hand side has a limit, α_*(x, y), as k → ∞, from the result just shown. Hence x^*X_k y → α_*(x, y). Now put x = e_i, y = e_j, to deduce that (X_k)_{ij} → α_*(e_i, e_j) =: (X_*)_{ij}. Clearly, X_* ≥ B. If the X_k are not all real then an analogous proof holds using the complex polarization identity

    x^*Ay = (1/4)[ (x + y)^*A(x + y) − (x − y)^*A(x − y)
                   − i( (x + iy)^*A(x + iy) − (x − iy)^*A(x − iy) ) ].
Proof. See, e.g., Bhatia [64, , Thm. V.1.9], [65, , Thm. 4.2.1] or Zhan
[623, , Thm. 1.1].
and hence the Sylvester equation is nonsingular precisely when A and B have no
eigenvalue in common.
where f l denotes the computed value of an expression and u is the unit roundoff .
In addition to (B.21), we can also use the variant

    fl(x op y) = (x op y)/(1 + δ),    |δ| ≤ u.                                   (B.22)
In IEEE double precision arithmetic, u = 2−53 ≈ 1.11 × 10−16 , while for IEEE
single precision arithmetic, u = 2−24 ≈ 5.96 × 10−8 .
This model applies to real x and y. For complex data the model must be adjusted
by increasing the bound for |δ| slightly [276, , Lem. 3.5].
Error analysis results are often stated in terms of the constants

    γ_n = nu/(1 − nu),        γ̃_n = cnu/(1 − cnu),
where c is a small integer constant whose precise value is unimportant. For all the
algorithms in this book we can state error bounds that are valid for both real and
complex arithmetic by using the γe notation.
The following lemma [276, , Lem. 3.1] is fundamental for carrying out round-
ing error analysis.
If |δ_i| ≤ u and ρ_i = ±1 for i = 1:n, and nu < 1, then

    Π_{i=1}^n (1 + δ_i)^{ρ_i} = 1 + θ_n,    where |θ_n| ≤ nu/(1 − nu) = γ_n.
Complex arithmetic costs significantly more than real arithmetic: in particular,
for scalars, a complex addition requires 3 real additions and a complex multiplication
requires 4 real multiplications and 2 real additions, though the latter can be reduced
at the cost of some stability [272, ]. The ratio of execution time of a numer-
ical algorithm implemented in real versus complex arithmetic depends on both the
algorithm and the computer, but a ratio of 4 is reasonably typical.
For the computed product Ĉ = fl(AB) of A, B ∈ C^{n×n} we have ‖Ĉ − C‖_p ≤ γ̃_n ‖A‖_p ‖B‖_p, p = 1, ∞, F.
For more on floating point arithmetic and rounding error analysis see Higham
[276, ].
Divided differences of the scalar function f at the points x_0, x_1, . . . , x_k are defined recursively by

    f[x_k] = f(x_k),

    f[x_k, x_{k+1}] = (f(x_{k+1}) − f(x_k))/(x_{k+1} − x_k),    x_k ≠ x_{k+1},
    f[x_k, x_{k+1}] = f′(x_{k+1}),                              x_k = x_{k+1},

    f[x_0, x_1, . . . , x_{k+1}] = (f[x_1, x_2, . . . , x_{k+1}] − f[x_0, x_1, . . . , x_k])/(x_{k+1} − x_0),    x_0 ≠ x_{k+1},        (B.24)
    f[x_0, x_1, . . . , x_{k+1}] = f^{(k+1)}(x_{k+1})/(k + 1)!,                                                  x_0 = x_{k+1}.
One of their main uses is in constructing the Newton divided difference form of the in-
terpolating polynomial, details of which can be found in numerical analysis textbooks,
such as Neumaier [447, , Sec. 3.1] and Stoer and Bulirsch [542, , Sec. 2.1.5].
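For distinct points, the recurrence (B.24) translates directly into a short MATLAB function (an illustrative sketch, not part of the Matrix Function Toolbox; confluent points are not handled):

    % Sketch: divided differences at distinct points x(1),...,x(n) via (B.24).
    % Returns d = [f[x_1], f[x_1,x_2], ..., f[x_1,...,x_n]].
    function d = divdiff(f, x)
    n = numel(x);
    T = zeros(n); T(:,1) = arrayfun(f, x(:));    % T(i,j) = f[x_i, ..., x_{i+j-1}]
    for j = 2:n
        for i = 1:n-j+1
            T(i,j) = (T(i+1,j-1) - T(i,j-1)) / (x(i+j-1) - x(i));
        end
    end
    d = T(1,:);
    end

For example, divdiff(@exp, [0 0.5 1]) returns the divided differences f[x_0], f[x_0, x_1], and f[x_0, x_1, x_2] for f = exp.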
Another important representation of a divided difference is as a multiple integral. Assume f is k times continuously differentiable. It is easy to check that f[x_0, x_1] = ∫_0^1 f′(x_0 + (x_1 − x_0)t) dt. This formula generalizes to the Genocchi–Hermite formula

    f[x_0, x_1, . . . , x_k] = ∫_0^1 ∫_0^{t_1} ∫_0^{t_2} ⋯ ∫_0^{t_{k−1}} f^{(k)}( x_0 + Σ_{j=1}^k t_j(x_j − x_{j−1}) ) dt_k ⋯ dt_2 dt_1,        (B.25)
which expresses f [x0 , x1 , . . . , xk ] as an average of f (k) over the simplex with vertices
x0 , x1 , . . . , xk . Note that this formula, like (B.29) below, does not require the xi to
be ordered as in (B.23), so it gives a meaning to divided differences for an arbitrary
ordering.
It follows from (B.25) that if f is k times continuously differentiable then the divided difference f[x_0, x_1, . . . , x_k] is a continuous function of its arguments. Moreover, for real points x_i,

    f[x_0, x_1, . . . , x_k] = f^{(k)}(ξ)/k!    for some ξ ∈ [min_i x_i, max_i x_i].        (B.26)

Hence for confluent arguments we recover the confluent case in (B.24):

    f[x, x, . . . , x] = f^{(k)}(x)/k!    (k + 1 arguments).                                 (B.27)

No result of the form (B.26) holds for complex x_i, even for k = 1 [178, ]. It nevertheless follows from (B.25) that if Ω is a closed convex set containing x_0, x_1, . . . , x_k ∈ C then

    |f[x_0, x_1, . . . , x_k]| ≤ max_{z∈Ω} |f^{(k)}(z)| / k!.                                (B.28)
Yet another definition of divided differences can be given in terms of contour integration. Let f be analytic inside and on a closed contour Γ that encloses x_0, x_1, . . . , x_k. Then

    f[x_0, x_1, . . . , x_k] = (1/(2πi)) ∫_Γ f(z) / ( (z − x_0)(z − x_1) ⋯ (z − x_k) ) dz.   (B.29)

The properties (B.24)–(B.27) are readily deduced from this definition.
For more details of divided differences see, for example, Conte and de Boor [113,
, Thm. 2.5], Isaacson and Keller [311, , Sec. 6.1], and Neumaier [447, ,
Thm. 3.1.8]. The formulae (B.25) and (B.29) are less commonly treated, hence we
give a list of references in which they can be found: de Boor [143, ], Gel’fond [210,
, Sec. 1.4.3], Horn and Johnson [296, , Sec. 6.1], Isaacson and Keller [311,
, Sec. 6.1], Kahan [328, ], Kahan and Fateman [329, ], Milne-Thompson
[428, , Chap. 1], and Ostrowski [455, ].
Problems
B.1. Prove the eigenpair relations between A and B stated in Section B.3.
B.2. Use the Moore–Penrose conditions (B.3) to show that if A is Hermitian then so
is its pseudoinverse. Deduce that if A is Hermitian then A commutes with A+ .
B.3. Show that for A ∈ Cm×n , range(A∗A) = range(A∗ ).
B.4. Show that every partial isometry A ∈ C^{m×n} has the form A = P [ I_r 0; 0 0_{m−r,n−r} ] Q^*, for some unitary P ∈ C^{m×m} and Q ∈ C^{n×n}, where r = rank(A).
B.5. Show that for any A ∈ C^{n×n} and any consistent matrix norm, ρ(A) ≤ ‖A‖, where ρ is the spectral radius.
B.6. Let A ∈ C^{m×n}. Show that ‖ [ A  0 ] ‖ = ‖A‖ for any unitarily invariant norm.
Appendix C
Operation Counts

In this appendix we summarize the cost of some standard matrix computations. The
unit of measure is the flop, which denotes any of the four elementary scalar operations
+, −, ∗, /. The operation counts assume the use of substitution for solving triangular
systems and LU factorization, and appropriate symmetric variants thereof in the
symmetric case, for solving linear systems. When complex arithmetic is involved, the
flops should be interpreted as counting operations on complex numbers.
For details of the relevant algorithms see, for example, Golub and Van Loan [224,
], Higham [276, ], Trefethen and Bau [572, ], or Watkins [607, ].
It is important to stress that the run time of an algorithm in a particular com-
puting environment may or may not be well predicted by the flop count. Different
matrix operations may run at different speeds, depending on the machine architec-
ture. In particular, matrix multiplication exploits hierarchical memory architectures
better than matrix inversion, so it is generally desirable to make algorithms matrix
multiplication-rich. See Dongarra, Duff, Sorensen and Van der Vorst [162, ] and
Golub and Van Loan [224, ] for more details.
Table C.1. Cost of some matrix computations for real, n × n matrices. Here A and B are
general, nonsymmetric; T is triangular; H and M are symmetric. X denotes a matrix to be
determined; Y denotes a symmetric matrix to be determined. x, b are n-vectors.
Table C.2. Cost of some matrix factorizations and decompositions. A is n × n, except for
QR and SVD, where it is m × n (m ≥ n).
Appendix D
Matrix Function Toolbox

• The codes are designed for simplicity and readability rather than maximum
efficiency.
• Algorithmic options such as preprocessing are omitted.
• The codes are intended for double precision matrices. Those algorithms in which
the parameters can be adapted to the precision have not been written to take
advantage of single precision inputs.
The contents of the toolbox are listed in Table D.1, along with a reference to the
corresponding algorithms, theorems, or sections in the book. We have not provided
codes for algorithms that are already provided as part of MATLAB. Such matrix-
function M-files are listed in Table D.2, along with a reference to the corresponding
algorithm in this book.
The Matrix Function Toolbox is available from
http://www.ma.man.ac.uk/~higham/mftoolbox
To test that the toolbox is working correctly, run the function mft test one or
more times.
Table D.1. Contents of Matrix Function Toolbox and corresponding parts of this book.
Appendix E
Solutions to Problems

    A = ZJZ^{-1} = WJW^{-1}                                                      (E.1)
(by incorporating a permutation matrix in W we can assume without loss of generality that
J is the same matrix in both cases). The definition gives f1 (A) = Zf (J)Z −1 , f2 (A) =
W f (J)W −1 and we need to show that f1 (A) = f2 (A), that is, W −1 Zf (J)Z −1 W = f (J), or
X −1 f (J)X = f (J) where X = Z −1 W . Now by (E.1) we have X −1 JX = J, which implies
f (J) = f (X −1 JX) = X −1 f (J)X, the last equality following from Definition 1.2. Hence
f1 (A) = f2 (A), as required.
Further insight can be gained by noting that the general form of X is given by Theo-
rem 1.25 and that this form commutes with f (J) given by (1.4).
1.2. We have −Jk = DJk (−λk )D, where D = diag(1, −1, 1, . . . , (−1)mk −1 ). Hence f (−Jk ) =
Df (Jk (−λk ))D, from which the result follows. Alternatively, we can write −Jk = −λk I +Nk ,
where Nk is zero except for a superdiagonal of −1s, and expand f (−λk I + Nk ) in a Taylor
series (cf. (1.5)).
(b) Setting (λ, x) ≡ (α, e) in (a) gives the row sum result. If A has column sums α then
AT e = αe and applying the row sum result to AT gives f (α)e = f (AT )e = f (A)T e. So f (A)
has column sums f (α).
1.5. There is clearly a lower bound on the degrees of polynomials p such that p(A) = 0, and
by the Cayley–Hamilton theorem the characteristic polynomial is a candidate for having
minimal degree, so the minimum is at most n. Hence ψ exists, as we can always normalize
to obtain a monic polynomial. Let p1 and p2 be two monic polynomials of lowest degree.
Then p3 = p2 − p1 is a polynomial of degree less than p1 and p2 and p3 (A) = 0; hence p3 = 0,
i.e., p1 = p2 . So ψ is unique.
1.7. It is easiest to use the polynomial interpolation definition (1.4), which says that cos(πA) =
p(A), where p(1) = cos π = −1, p′ (1) = −π sin π = 0, p(2) = cos 2π = 1. Writing
p(t) = a + bt + ct² we have

    [ 1  1  1 ] [ a ]   [ −1 ]
    [ 0  1  2 ] [ b ] = [  0 ],
    [ 1  2  4 ] [ c ]   [  1 ]

which can be solved to give p(t) = 1 − 4t + 2t². Hence

    cos(πA) = p(A) = I − 4A + 2A² = [ −3   0   0   4
                                       0  −1   0   0
                                       0   0  −1   0
                                      −2   0   0   3 ].
Evaluating cos(πA) from its power series would be much more complicated.
1.8. From the spectral properties of uv ∗ identified in Section 1.2.5 we deduce that the char-
acteristic polynomial is p(t) = tn−1 (t−v ∗ u) and the minimal polynomial is ψ(t) = t(t−v ∗ u).
As a check, we have ψ(uv ∗ ) = uv ∗ (uv ∗ − (v ∗ u)I) = v ∗ u(uv ∗ − uv ∗ ) = 0.
1  if (a − d)² + 4bc ≠ 0
2      A has distinct eigenvalues
3  else if b = c = 0
4      A has a double eigenvalue in two 1 × 1 Jordan blocks and A = λI (λ = a = d)
5  else
6      A has a Jordan block of size 2
7  end
In the second of the three cases, the derivative f ′ (λ) appears to enter in the formula (E.2),
even though the theory says it shouldn’t. The appearance is illusory, because A − λ1 I = 0.
    p(t) = f(α) (t − (α + nβ))/(−nβ) + f(α + nβ) (t − α)/(nβ).

Thus

    f(A) = f(α) (βJ − nβI)/(−nβ) + f(α + nβ) (βJ)/(nβ) = f(α)I + n^{-1}( f(α + nβ) − f(α) )J.

    p(t) = f(λ_1) (t − λ_2)/(λ_1 − λ_2) + f(λ_2) (t − λ_1)/(λ_2 − λ_1).

Hence

    f(A) = p(A) = f(λ_1)/(λ_1 − λ_2) (A − λ_2 I) + f(λ_2)/(λ_2 − λ_1) (A − λ_1 I).
1.13. For the Jordan canonical form definition (1.2), use Theorem 1.25 to write down the
form of a general matrix B that commutes with A. The verification of f (A)B = Bf (A) then
reduces to the fact that upper triangular Toeplitz matrices commute.
For the interpolation definition (1.4) the result is immediate: f (A) is a polynomial in A
and so commutes with B since A does.
For the Cauchy integral definition (1.11) we have, using (zI − A)B = B(zI − A),
    f(A)B = (1/(2πi)) ∫_Γ f(z)(zI − A)^{-1} B dz
          = B (1/(2πi)) ∫_Γ f(z)(zI − A)^{-1} dz = Bf(A).
1.14. The Jordan structure of A cannot be reliably computed in floating point arithmetic,
but the eigenvalues can (in the sense of obtaining backward stable results). Therefore we
can find the Hermite interpolating polynomial p satisfying p(j) (λi ) = f (j) (λi ), j = 0: ji − 1,
i = 1: s, where λ1 , . . . , λs are the distinct eigenvalues and λi has algebraic multiplicity ji .
This polynomial p satisfies more interpolation conditions than in (1.7), but nevertheless
p(A) = f (A) (see Remark 1.5). Perhaps the most elegant way to obtain p is in the divided
difference form (1.10), which needs no attempt to identify repeated eigenvalues.
Hence

    f(A) = [ u   v/(v^*v)   X ] [ f(0)   f′(0)   0
                                  0      f(0)    0
                                  0      0       f(0)I ] [ u^*/(u^*u)
                                                           v^*
                                                           Y ]

         = f(0) uu^*/(u^*u) + f′(0) uv^* + f(0) vv^*/(v^*v) + f(0) XY.
But XY = I − uu∗ /(u∗ u) − vv ∗ /(v ∗ v) from (E.3), and hence f (A) = f (0)I + f ′ (0)uv ∗ .
1.16. The formula can be obtained from the polynomial interpolation definition by noting
that:
(a) For v^*u ≠ 0, A = αI + uv^* has a semisimple eigenvalue α of multiplicity n − 1 and
an eigenvalue α + v ∗ u.
(b) For v ∗ u = 0, A has n eigenvalues α; n − 2 of these are in 1 × 1 Jordan blocks and
there is one 2 × 2 Jordan block.
The Sherman–Morrison formula is obtained by writing (A + uv^*)^{-1} = ( A(I + A^{-1}u·v^*) )^{-1} = (I + A^{-1}u·v^*)^{-1} A^{-1} and applying (1.16) with f(x) = x^{-1}.

1.17. Write A = λI_n + [c; 0] e_n^T. Then, by (1.16),

    f(A) = f(λ)I_n + f[λ, λ] [c; 0] e_n^T = [ f(λ)I_{n−1}   f′(λ)c
                                              0             f(λ)   ].
1.18. It is easy to see that for scalars x and y, p(x) − p(y) = q(x, y)(x − y) for some polynomial q. We can substitute tI for x and A for y to obtain

    p(t)I − p(A) = q(tI, A)(tI − A).                                             (E.4)

If p(A) = 0 then we have p(t)(tI − A)^{-1} = q(tI, A), so that p(t)(tI − A)^{-1} is a polynomial
in t. Conversely, if p(t)(tI − A)−1 is a polynomial in t then from (E.4) it follows that
p(A)(tI − A)−1 = p(t)(tI − A)−1 − q(tI, A) is a polynomial. Since p(A) is a constant this
implies that p(A) = 0.
To obtain the Cayley–Hamilton theorem set p(t) = det(tI − A). From the formula
B −1 = adj(B)/ det(B), where the adjugate adj is the transpose of the matrix of cofactors, it
follows that p(t)(tI − A)−1 = adj(tI − A) is a polynomial in t, so p(A) = 0 by the first part.
If f is analytic, a shorter proof is obtained from the Cauchy integral definition. Let Γ be
a closed contour enclosing Λ(A) and Γ1 its reflection in the imaginary axis, which encloses
Λ(−A). From (1.12),
Z
1
f (−A) = f (z)(zI + A)−1 dz
2πi Γ1
Z
1
= f (−w)(wI − A)−1 dw (w = −z)
2πi Γ
Z
1
=± f (w)(wI − A)−1 dw = ±f (A).
2πi Γ
1.22. No. f (A) is a polynomial in A and so cannot equal A∗ in general (consider triangular
A). However, if A is normal then A = QDQ∗ for some unitary Q and D = diag(λi ), and for
f (λ) = λ we have f (A) = Qf (D)Q∗ = QDQ∗ = A∗ . As a check, for unitary matrices we
have f (A) = A∗ = A−1 and for skew-Hermitian matrices f (A) = A∗ = −A, so f is clearly a
matrix function in both cases.
1.24. The formula (1.4) provides two upper triangular square roots, X1 and X2 , correspond-
ing to the two different branches of f at λk . We have to show that these are the only upper
triangular square roots. Let X be an upper triangular square root of Jk . Then, equating
(i, i) and (i, i + 1) elements in X² = J_k gives x_ii² = λ_k and (x_ii + x_{i+1,i+1})x_{i,i+1} = 1, i = 1: m_k − 1. The second equation implies x_ii + x_{i+1,i+1} ≠ 0, so from the first, x_11 = x_22 = ⋯ = x_{m_k,m_k} = ±λ_k^{1/2}. Since x_ii + x_jj ≠ 0 for all i and j, X is uniquely determined by its diagonal elements (see Algorithm 4.13 or Algorithm 6.3); these are the same as those of X_1 or X_2, so X = X_1 or X = X_2.
1.25. A^n = 0 implies that all the eigenvalues of A are zero, and A^{n−1} ≠ 0 then implies that the Jordan form of A comprises just a single n × n Jordan block. Hence A has no square root by Theorem 1.22. Or, in a more elementary fashion, if A = X² then X^{2n} = A^n = 0, which implies X^n = 0 since X is n × n. But then A^{n−1} = X^{2(n−1)} = X^n X^{n−2} = 0 since n ≥ 2, which is a contradiction. For the last part, we have (A + cA^{n−1})² = A² + 2cA^n + c²A^{2n−2} = A² for any c.
1.26. Yes, if A is nonderogatory, but in general no: Theorem 1.25 gives the general form,
from which it is clear that Z −1 XZ is necessarily block diagonal if and only if A is nonderoga-
tory.
1.27. We can write the Jordan canonical form of A as A = Z diag(J1 , 0)Z −1 , where J1
contains the Jordan blocks corresponding to the nonzero eigenvalues. With f denoting the
square root function, any primary square root of A has the form
1.28. By direct calculation we find that all upper triangular square roots of A are of the form

    X(θ) = ± [ 0  1  θ
               0  1  1
               0  0  0 ],

where θ ∈ C is arbitrary. Now A is idempotent (A² = A) so any polynomial in A has the form p(A) = αI + βA, which has equal (1, 2) and (1, 3) elements. It follows that ±X(1) are the only primary square roots of A.
Since dim(null(A)) = 2, 0 is a semisimple eigenvalue of A and hence A is diagonalizable. Indeed

    V^{-1}AV = diag(1, 0, 0),    V = [ 1  −1   0
                                       1  −1  −1
                                       0   1   1 ].

A family of nonprimary square roots is obtained as

    Y = V diag(1, [0 θ; 0 0]) V^{-1} = [ −θ  1+θ  1
                                         −θ  1+θ  1
                                          θ   −θ  0 ].

Note that for θ ≠ 0, Y has a different Jordan structure than A, a phenomenon that for matrix square roots can happen only when A is singular.
1.29. Note that A = diag(J2 (0), 0). Let X 2 = A and consider the possible Jordan block
structures of X. Applying Theorem 1.36 with f (x) = x2 to any Jordan block of X we find
that ℓ = 2 and case b(ii) must pertain, with r = 3 and p = q = 1. Hence X = ZJ3 (0)Z −1 for
some nonsingular Z. To determine (as far as possible) Z we write X 2 = A as ZJ3 (0)2 = AZ
and examine the resulting equations, which force Z to have the form
    Z = [ a  b  c
          0  0  a
          0  d  e ],

where det(Z) = −a²d must be nonzero. Evaluating X = ZJ_3(0)Z^{-1} we find that some of the remaining parameters in Z are redundant and the general form of X is

    X = [ 0   y    x
          0   0    0
          0  1/x   0 ],    x ≠ 0.
where Ji (0) is of size at least 2 for i = 1: k and λi 6= 0 for i ≥ r + 1. By Theorem 1.36, Ji (0)
splits into smaller Jordan blocks when squared for i = 1: k, since f ′ (0) = 0 for f (x) = x2 .
Therefore A = X 2 has more Jordan blocks than X. But any polynomial in A has no more
Jordan blocks than A. Therefore X cannot be a polynomial in A.
1.31. The form of X is rather surprising. Since any primary square root of A is a polynomial
in A, a first reaction might be to think that X is a nonprimary square root. However, X and
A are both symmetric and structural considerations do not rule out X being a polynomial
in A. In fact, A has distinct eigenvalues (known to be 0.25 sec(iπ/(2n + 1))2 , i = 1: n [189,
]), so all its square roots are primary. X is clearly not A1/2 , since X has zero elements
on the diagonal and so is not positive definite. In fact, X has ⌈n/2⌉ positive eigenvalues
and ⌊n/2⌋ negative eigenvalues (which follows from the inertia properties of a 2 × 2 block
symmetric matrix—see, for example, Higham and Cheng [279, , Thm. 2.1]). X is an
indefinite square root that “just happens” to have a very special structure.
1.33. A = eB where B is the Jordan block Jn (0), as is easily seen from (1.4). B +2kπi is also
a logarithm for any k ∈ Z, and these are all the logarithms, as can be seen from Theorem 1.28
on noting that A has just one block in its Jordan canonical form since rank(A − I) = n − 1.
1.34. Let X = log A and Y = eX/2 . Then Y 2 = eX/2 eX/2 = eX = A, using Theorem 10.2.
So Y is some square root of A. The eigenvalues of Y are of the form eλi /2 , where λi is an
eigenvalue of X and has Im λi ∈ (−π, π), and so −π/2 < arg λi /2 < π/2. Thus the spectrum
of Y lies in the open right half-plane, which means that Y = A1/2 .
1.35. A1/2 is a polynomial in A and B 1/2 is a polynomial in B, so A1/2 commutes with B 1/2 .
Therefore (A1/2 B 1/2 )2 = A1/2 B 1/2 A1/2 B 1/2 = A1/2 A1/2 B 1/2 B 1/2 = AB. Thus A1/2 B 1/2
is some square root of AB. By Corollary 1.41 the eigenvalues of A1/2 B 1/2 are of the form
λi (A1/2 )λi (B 1/2 ) and so lie in the open right half-plane if the eigenvalues of A and B lie in
the open right half-plane. The latter condition is needed to ensure the desired equality, as
is clear by taking A = B.
for which eA = eB = I, both matrices have spectrum {0, 2πi, −2πi}, and AB 6= BA. Such
examples are easily constructed using the real Jordan form analogue of Theorem 1.27.
Schmoeger [505, ] investigates what can be concluded from eA = eB when the
condition on Λ(A) is not satisfied but A is normal.
1.37. Let λ1 , . . . , λs be the distinct eigenvalues of C = eA and denote by qi the algebraic
multiplicity of λi . The qk copies of λk are mapped to log λk + 2πrj i in A and log λk + 2πsj i
in B, for some integers rj and sj , j = 1: qk . The given condition implies that rj = sj ≡ tk
for all j, so that all copies of λk are mapped to the same logarithm. This is true for all k,
so A and B are primary logarithms with the same spectrum, which means that they are the
same matrix.
1.38. Suppose, first, that f is even. From Theorem 1.26 we know that for any square root X of A, f(X) has the form

    f(X) = ZU diag( f(L_k^{(j_k)}) ) U^{-1} Z^{-1},

where A = ZJZ^{-1} is a Jordan canonical form with J = diag(J_k), L_k^{(j_k)} is a square root of J_k, j_k = 1, 2, and U commutes with J. But L_k^{(1)} = −L_k^{(2)} implies

    f(L_k^{(1)}) = f(L_k^{(2)})                                                  (E.5)

by Problem 1.20. Hence U diag( f(L_k^{(j_k)}) ) U^{-1} = diag( f(L_k^{(j_k)}) ) (cf. the proof of Theorem 1.26). Thus f(X) = Z diag( f(L_k^{(j_k)}) ) Z^{-1}, which is the same for all choices of the j_k by (E.5).
The proof for f odd is very similar, reducing to the observation that (L_k^{(j_k)})^{±1} f(L_k^{(j_k)}) is the same for j_k = 1 as for j_k = 2.
1.39. If A = log(eA ) then max{ | Im(λi )| : λi ∈ Λ(A) } < π is immediate from the definition
of the principal logarithm. Suppose the latter eigenvalue condition holds and let X = eA .
X has no eigenvalues on R− and so log(X) is defined. A is clearly some logarithm of X and
its eigenvalues satisfy Im λ ∈ (−π, π). By Theorem 1.28, every logarithm other than log(X)
has at least one eigenvalue with | Im λ| ≥ π. Therefore A must be the principal logarithm,
log(X).
1.40. By applying g to the equation f (A)f (B)f (A)−1 = f (B) and using Theorem 1.13 (c)
we obtain f (A)Bf (A)−1 = B, which can be rewritten as B −1 f (A)B = f (A). Applying g to
this equation gives B −1 AB = A, or AB = BA, as required.
1.41. Let A have the spectral decomposition A = QΛQ∗ , where, without loss of generality,
we can suppose that the eigenvalues are ordered so that Λ = diag(λ1 I1 , . . . , λm Im ), with
λ1 , . . . , λm distinct. Suppose X is a Hermitian positive definite square root of A, so that
A = X 2 . Then Λ = Q∗ AQ = Q∗ X 2 Q = (Q∗ XQ)2 =: Y 2 , where Y is Hermitian positive
definite. Now Y clearly commutes with Λ = Y 2 , so Y = diag(Yk ), where the blocking is
conformable with that of Λ. It remains to determine the Y_k, which satisfy Y_k² = λ_k I. Now the eigenvalues of Y_k must be ±λ_k^{1/2}, and since Y_k is positive definite, the eigenvalues must all be λ_k^{1/2}. But the only Hermitian matrix with all its eigenvalues equal to λ_k^{1/2} is Y_k = λ_k^{1/2} I_k. Hence X = QYQ^* = Q diag(λ_k^{1/2} I_k) Q^* = QΛ^{1/2}Q^* is the unique Hermitian positive definite square root of A.
1.42. Let Y be another square root of A with eigenvalues in the open right half-plane. Since
Y A = Y 3 = AY , Y commutes with A, and hence with any polynomial in A; in particular Y
commutes with X. Therefore
(X + Y )(X − Y ) = X 2 − Y 2 − XY + Y X = X 2 − Y 2 = A − A = 0.
Since X and Y commute, the eigenvalues of X + Y are of the form λi (X) + λi (Y ), by
Corollary 1.41, and these are all nonzero since the spectra of X and Y are in the open right
half-plane. Hence X + Y is nonsingular and thus X − Y = 0, as required.
for some X_1 and X_2. Equating (1,1) blocks gives f(AB + αI_m) = f(α)I_m + AX_2. To determine X_2 we use Theorem 1.13 (a) to obtain

    [ αI_m  0; B  BA + αI_n ] [ f(α)I_m  0; X_2  f(BA + αI_n) ]
        = [ f(α)I_m  0; X_2  f(BA + αI_n) ] [ αI_m  0; B  BA + αI_n ].

Equating (2, 1) blocks gives f(α)B + (BA + αI_n)X_2 = αX_2 + f(BA + αI_n)B, or X_2 = (BA)^{-1}( f(BA + αI_n) − f(α)I_n )B. The result follows. As in Problem 1.44 this proof requires an extra assumption: in this case the existence of an extra derivative of f at α.
1.51. Denote the given matrix by B, and note that B has just one n × n Jordan block in
its Jordan form, since rank(B − I) = n − 1. Let f (x) = cosh(x). The eigenvalues of A must
be f −1 (1) = 0. We have f ′ (0) = sinh(0) = 0, f ′′ (0) = cosh(0) = 1. Theorem 1.36(b) (with
ℓ = 2) says that f (A) must have more than one Jordan block. This contradicts the Jordan
structure of B and hence the equation has no solution.
1.52. (a) f (z) = 1/z for z 6= 0 and f (j) (0) = 0 for all j. Hence f is discontinuous at
zero. Nevertheless, f (A) is defined.
(b) This formula is readily verified by direct computation using (1.37). Of course, Def-
inition 1.4 directly yields AD as a polynomial in A, and this polynomial may be of much
smaller degree than xk p(x)k+1 . We note also that if B −k−1 = q(B) then AD = Ak q(A),
which provides an alternative formula.
(c) As explained in Section 1.2.5, the index of uv ∗ is 1 if v ∗ u 6= 0 and 2 otherwise. If
the index is 1 then (uv ∗ )D = 0. Otherwise, (uv ∗ )D = uv ∗ /(v ∗ u)2 , as can be obtained from
(1.14), for example.
For more on the Drazin inverse, see Campbell and Meyer [92, ].
1.53. If A ∈ Cm×n has the (compact) SVD A = P ΣQ∗ with P ∈ Cm×r , Σ = diag(σi ) ∈
Rr×r , Q ∈ Cr×n , where r = rank(A), we can define f (A) = P f (Σ)Q∗ , where f (Σ) =
diag(f (σi )). An alternative representation is f (A) = U f (H), where A = U H is a polar
decomposition (see Chapter 8) and f (H) is the usual function of a matrix. This definition,
which is given and investigated by Ben-Israel and Greville [52, , Sec. 6.7], does not
reduce to the usual definition when m = n. The definition does not appear to lead to any
useful new insights or have any significant applications.
2.1. We have

    d/dt ( e^{-At} y(t) ) = −Ae^{-At}y + e^{-At}y′ = e^{-At}(y′ − Ay) = e^{-At} f(t, y).

Hence e^{-As}y(s) − y(0) = ∫_0^s e^{-At} f(t, y) dt. Multiplying through by e^{As} and interchanging the roles of s and t gives the result.
where the eigenvalues of J1 ∈ Cp×p lie in the left half-plane and those of J2 lie in the
right half-plane. Then sign(A) = Z diag(−Ip , In−p )Z −1 and, as we noted in Section 2.5,
range(W ) is the invariant subspace corresponding to the eigenvalues in the right half-plane.
Write Q = [Q_1 Q_2], where Q_1 ∈ C^{n×q}. Then

    WΠ = [Q_1 Q_2] [ R_11  R_12; 0  0 ] = Q_1 [ R_11  R_12 ].

Hence Q_1 is an orthogonal basis for range(W), which means that range(Q_1), too, is an invariant subspace corresponding to the eigenvalues in the right half-plane, and hence q = n − p. Thus AQ_1 = Q_1 Y, for some Y whose eigenvalues lie in the right half-plane. Hence

    Q^T A Q = [ Q_1^T AQ_1  Q_1^T AQ_2; Q_2^T AQ_1  Q_2^T AQ_2 ]
            = [ Y  Q_1^T AQ_2; (Q_2^T Q_1)Y  Q_2^T AQ_2 ] = [ Y  Q_1^T AQ_2; 0  Q_2^T AQ_2 ],
2.4. Since A and B commute, A1/2 and B 1/2 commute. Hence X = A1/2 B 1/2 satisfies
XA−1 X = A1/2 B 1/2 A−1 A1/2 B 1/2 = B 1/2 A1/2 A−1 A1/2 B 1/2 = B. Moreover, X is clearly
Hermitian, and it has positive eigenvalues because it is the product of positive definite
matrices.
For the log-Euclidean geometric mean E we have, since log(A) and log(B) commute,

    E(A, B) = e^{(1/2)log(A) + (1/2)log(B)} = e^{(1/2)log(A)} e^{(1/2)log(B)} = e^{log(A^{1/2})} e^{log(B^{1/2})} = A^{1/2} B^{1/2},

where we have used Theorems 10.2 and 11.2 (or Problem 1.34).

    B^{1/2} ( B^{-1/2} A B^{-1/2} )^{1/2} B^{1/2} ≤ (1/2)(A + B).
2.7. We have XR∗RX = B and hence (RXR∗ )2 = RBR∗ , so RXR∗ = (RBR∗ )1/2 . Hence
X = R−1 (RBR∗ )1/2 R−∗ , which is clearly Hermitian positive definite. Any of the methods
from Chapter 6 can be used to evaluate (RBR∗ )1/2 . This given formula is more efficient to
evaluate than the formulae given in Section 2.9, but Algorithm 6.22 is even better.
3.1. By determining the linear part of f (X + E) − f (X), we find that L(X, E) = 0, E, and
− sin(X)E, respectively, using (12.4) in the latter case. If the second of these expressions
seems counterintuitive, note that L(X, E) = E says that L(X) is the identity operator.
3.2. We have

    L(X, E) = lim_{t→0} ( f(X + tE) − f(X) )/t
            = Q ( lim_{t→0} ( f(T + tẼ) − f(T) )/t ) Q^*
            = Q L(T, Ẽ) Q^*,

where Ẽ = Q^*EQ, as required.
and so L(X, E) − M(X, E) = o(‖E‖). Let E = tẼ with Ẽ fixed. Then, by the linearity of the Fréchet derivative, L(X, E) − M(X, E) = t( L(X, Ẽ) − M(X, Ẽ) ) = o(t‖Ẽ‖), that is, L(X, Ẽ) − M(X, Ẽ) = t^{-1} o(t‖Ẽ‖). Taking the limit as t → 0 gives L(X, Ẽ) = M(X, Ẽ). Since Ẽ is arbitrary, L(X) = M(X).
Setting E = tF, for fixed F and varying t, we have, since L(X) is a linear operator,

    f(X + tF) − f(X) − tL(X, F) = o(‖tF‖),

which implies

    lim_{t→0} ( ( f(X + tF) − f(X) )/t − L(X, F) ) = 0,

showing that L(X, F) is the derivative in the direction F.
3.5. The maximum Jordan block size is n. The maximum is attained when A11 and A22 are
Jordan blocks corresponding to the same eigenvalue λ and A_12 = e_{n_1} e_1^T. For example, with n_1 = 2, n_2 = 3,

    A = [ λ  1  0  0  0
          0  λ  1  0  0
          0  0  λ  1  0
          0  0  0  λ  1
          0  0  0  0  λ ]

and rank(A − λI) = 4, so A has a single 5 × 5 Jordan block corresponding to λ. For a more precise result of this form see Mathias [412, , Lem. 3.1].
since L(X, E) is the linear term in this expansion. The matrix power series has the same
radius of convergence as the given scalar series (see Theorem 4.7), so if kXk < r we can
scale E → θE so that kX + θEk < r and the expansion is valid. But L(X, θE) = θL(X, E),
so the scaled expansion yields L(X, E). K(X) is obtained by using (B.16).
3.9. From the Cauchy integral definition (1.11) we have, for small enough ‖E‖,

    f(X + E) = (1/(2πi)) ∫_Γ f(z)(zI − X − E)^{-1} dz.
4.3. Obtaining (4.46) is straightforward. To show the equivalence of (4.44) and (4.46) we
just have to show that F12 given by (4.46) satisfies (4.44), since we know that (4.44) has a
unique solution. Using f (Tii )Tii = Tii f (Tii ), we have
T11 F12 − F12 T22 = T11 f (T11 )X − T11 Xf (T22 ) − f (T11 )XT22 + Xf (T22 )T22
= f (T11 )(T11 X − XT22 ) − (T11 X − XT22 )f (T22 )
= f (T11 )T12 − T12 f (T22 )
This system can be solved provided the coefficient matrix is nonsingular, that is, s_ii t_jj − s_jj t_ii ≠ 0.
4.5. Let J(λ) = [1 1; 0 1], λ = 1, φ_0(λ) = λ. For g(x) = x² we have x_* ≡ x_k = 1 yet X_k = [1 2^k; 0 1] diverges. On the other hand, for g(x) = (x² + 1)/2 we have x_* ≡ x_k = 1 yet X_k ≡ J(λ) = X_*.
for any p-norm. If ρ(A) < 1 we can choose δ > 0 such that ρ(A) + δ < 1 and so ‖A^k‖_p is bounded for all k. By the equivalence of matrix norms (see, e.g., [537, 1998, Thm. 4.6]) the result holds for any matrix norm. The bound (E.9) is a special case of a result of Ostrowski [455, , Thm. 20.1]. Notice that this argument actually shows that A^k → 0 if ρ(A) < 1.
More precisely, A is power bounded if and only if ρ(A) ≤ 1 and for any eigenvalue λ such
that |λ| = 1, λ is semisimple (i.e., λ appears only in 1 × 1 Jordan blocks). However, for our
purposes of obtaining sufficient conditions for stability, the sufficient condition ρ(A) < 1 is
all we need.
4.7. Any limit y∗ must satisfy y∗ = cy∗ + d, so that y∗ = d/(1 − c), and
so it suffices to take d = 0 and show that yk → 0. There exist k and q ∈ [0, 1) such that
|ci | ≤ q < 1 for i ≥ k; let D = maxi≥k |di |. For E ≥ D/(1 − q), if |yi | ≤ E and i ≥ k then
|yi+1 | ≤ qE + D ≤ E. Hence with M = max{E, maxi≤k |yi |} we have |yi | ≤ M for all i.
Given ǫ > 0, there exists n(ǫ) such that for all i ≥ n(ǫ), |ci | ≤ q < 1 and |di | ≤ ǫ. Then,
for i ≥ n(ǫ), |yi+1 | ≤ q|yi | + ǫ, and so |yn(ǫ)+j | ≤ q j M + ǫ/(1 − q) ≤ 2ǫ/(1 − q) for large
enough j. It follows that yi → 0, as required.
5.2. sign(A) = sign(A−1 ). The easiest way to see this is from the Newton iteration (5.16),
because both X0 = A and X0 = A−1 lead to X1 = 12 (A + A−1 ) and hence the same sequence
{Xk }k≥1 .
5.3. Applying the Cauchy integral formula with f(z) = z^{-1/2} and using a Hankel contour that goes from −∞ − 0i to 0 then around the origin and back to −∞ + 0i, with t = iz^{1/2}, so that dt = (1/2) i z^{-1/2} dz, we have

    A(A²)^{-1/2} = 2 · (1/(2πi)) A ∫_0^∞ z^{-1/2}(zI − A²)^{-1} dz = (2/π) A ∫_0^∞ (t²I + A²)^{-1} dt.
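A numerical check of this formula in MATLAB (an illustrative sketch; the positive definite test matrix and the use of the integral routine are assumptions, not code from the book) is:

    % Sketch: check sign(A) = (2/pi) * A * int_0^inf (t^2 I + A^2)^{-1} dt.
    n = 4; B = randn(n); A = B*B' + eye(n);        % symmetric positive definite, so sign(A) = I
    g = @(t) inv(t^2*eye(n) + A^2);                % matrix-valued integrand
    S = (2/pi) * A * integral(g, 0, Inf, 'ArrayValued', true);
    norm(S - eye(n))                               % ~0: the sign of a positive definite matrix is I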
5.4. Using the matrix analogue of the formula ∫ (x² + a²)^{-1} dx = a^{-1} arctan(x/a), from (5.3) we have

    sign(A) = (2/π) arctan(tA^{-1}) |_0^∞ = lim_{t→∞} (2/π) arctan(tA^{-1}).
5.5. No: A2 differs from I in the (1,3) and (2,3) entries. A quick way to arrive at the
answer without computing A2 is to note that if A is the sign of some matrix then since
a22 = a33 = 1 we must have A(2: 3, 2: 3) = I (see the discussion following Algorithm 5.5),
which is a contradiction.
5.6. The result follows from Theorem 5.2, since C = B(A−1 B)−1/2 = B(B −1 A)1/2 .
Hence θ_{k+1} = 2θ_k, and so θ_k = 2^k θ_0 follows by induction. Now coth x = (e^x + e^{-x})/(e^x − e^{-x}), so coth(2^k θ_0) → 1 or −1 as k → ∞ according as Re θ_0 > 0 or Re θ_0 < 0, or equivalently, Re x_0 > 0 or Re x_0 < 0.
5.11. Let x0 = ir0 with r0 real. It is easy to show by induction that xk = irk , where rk is
real and

    r_{k+1} = (1/2)( r_k − 1/r_k ).

The x_k cannot converge because they are pure imaginary and the only possible limits are ±1. Setting r_k = −cot(πθ_k), we have

    −cot(πθ_{k+1}) = r_{k+1} = (1/2)( −cot(πθ_k) + 1/cot(πθ_k) )
                   = ( −cos(πθ_k)² + sin(πθ_k)² ) / ( 2 cos(πθ_k) sin(πθ_k) )
                   = −cos(2πθ_k)/sin(2πθ_k) = −cot(2πθ_k).
So θk+1 = 2θk , or equivalently, given the periodicity of cot,
θk+1 = 2θk mod 1. (E.10)
The behaviour of the rk is completely described by this simple iteration. If θ0 has a periodic
binary expansion then periodic orbits are produced. If θ0 has a terminating binary expansion
then eventually θk = 0, that is, rk = ∞. Irrational θ0 lead to sequences rk in which the same
value never occurs twice. The mapping (E.10) is known as the Bernoulli shift [165, 1992,
Ex. 3.8].
5.13. For µ > 0 we have

    g(µ) := d(µX) = Σ_{i=1}^n ( log µ + log |λ_i| )²,

and hence

    g′(µ) = (2/µ) Σ_{i=1}^n ( log µ + log |λ_i| ).

Solving g′(µ) = 0 gives log(|λ_1| ⋯ |λ_n|) = −log µ^n, or µ = (|λ_1| ⋯ |λ_n|)^{-1/n} = |det(X)|^{-1/n}. The last part is trivial.
5.14. (a) If a is real, x1 = sign(a). Otherwise, a = reiθ and γ0 x0 = eiθ lies on the unit
circle, x1 = cos θ is real, and hence x2 = sign(a).
(b) In view of Theorem 5.11, it suffices to consider the case where A ∈ R2×2 has a
complex conjugate pair of eigenvalues, λ = re±iθ , θ ∈ (0, π). Then µ0 X0 has eigenvalues
e±iθ and X1 has equal eigenvalues cos θ. The next iterate, X2 , has eigenvalues ±1 and the
iteration has converged, since the Jordan form of A is necessarily diagonal.
5.15. We make use of the observation that if |x| < 1 then (1 + x)^{1/2} has a convergent Maclaurin series 1 + Σ_{k=1}^∞ a_k x^k such that Σ_{k=1}^∞ |a_k| |x|^k = 1 − (1 − |x|)^{1/2}. Since sign(A) = I we have A = (A²)^{1/2} and hence A = (I + E)^{1/2} = I + Σ_{k=1}^∞ a_k E^k, since ‖E‖ < 1. Then

    ‖A − I‖ = ‖ Σ_{k=1}^∞ a_k E^k ‖ ≤ Σ_{k=1}^∞ |a_k| ‖E‖^k = 1 − (1 − ‖E‖)^{1/2} = ‖E‖/( 1 + (1 − ‖E‖)^{1/2} ) < ‖E‖.
5.17. Consider the sign iteration (5.16) with X_0 = W. It is easy to check that the X_k are all Hamiltonian. Write the iteration as

    X_{k+1} = (1/2)( X_k + (JX_k)^{-1}J ),    X_0 = W,

or

    Y_{k+1} = (1/2)( Y_k + JY_k^{-1}J ),    Y_0 = JW,

where the matrix Y_k = JX_k is Hermitian and is just X_k with its blocks rearranged and their signs changed.
6.1. Straightforward on evaluating the (1,2) block of (5.3) with A ← [ 0 A; I 0 ].
6.2. Let A = [a b; c d] and δ = det(A)^{1/2} = (ad − bc)^{1/2}. Then trace(X) = ±(a + d ± 2δ)^{1/2}. Hence

    X = ±(a + d ± 2δ)^{-1/2} [ a ± δ     b
                                 c      d ± δ ].

If a + d ± 2δ ≠ 0 then A has distinct eigenvalues and all four square roots are given by this formula. Otherwise, A has repeated eigenvalues and the formula breaks down for at least one of the choices of sign in the term ±2δ. In this case there may be no square roots; there may be just two (when the Jordan form of A has one 2 × 2 Jordan block), which the formula gives; or there may be infinitely many square roots (in this case A = aI) and the formula gives just ±a^{1/2} I.
6.3. Let A(ε) = [ε 1; 0 ε]. Then

    A(ε)^{1/2} = [ ε^{1/2}    1/(2ε^{1/2})
                   0          ε^{1/2}     ].
6.5. If there is just one zero eigenvalue then it is easy to see that Algorithm 6.3 runs to
completion and computes a primary square root. Otherwise, u_ii = u_jj = 0 for some i ≠ j
and the algorithm breaks down with division by zero at the stage where it is solving (6.5).
There are now two possibilities. First, (6.5) may be inconsistent. Second, (6.5) may be
automatically satisfied because the right-hand side is zero; if so, what value to assign to
uij in order to obtain a primary square root, or indeed any square root, may depend on
information from the later steps of the algorithm.
For example, consider
\[
T = \begin{bmatrix} 0 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}
  = \begin{bmatrix} 0 & 1 & x \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix}
    \begin{bmatrix} 0 & 1 & x \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{bmatrix} = U^2.
\]
T has a semisimple zero eigenvalue. Algorithm 6.3 computes the first superdiagonal of U
and then for i = 1, j = 3, (6.5) has the form 0 · x = 1 − 1 = 0. We can assign x any value,
but only x = 1 produces a primary square root: if x 6= 1 then U has rank 2 and hence its
zero eigenvalue appears in a Jordan block of size 2.
For the matrix
\[
T = \begin{bmatrix}
0 & 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix},
\]
which has a semisimple zero eigenvalue of multiplicity 3, any upper triangular square root has
the form
\[
U = \begin{bmatrix}
0 & 1 & a & \tfrac{1}{2}-a & b \\
0 & 1 & 0 & \tfrac{1}{2}   & 0 \\
0 & 0 & 0 & 1              & c \\
0 & 0 & 0 & 1              & 0 \\
0 & 0 & 0 & 0              & 0
\end{bmatrix},
\]
where a, b, and c are arbitrary subject to ac = 0. But the constraint on a and c is not
discovered until the last step of Algorithm 6.3, and for U to be a primary square root we
need rank(U ) = 2 and hence b = c = 0 and a = 0.
The conclusion is that for singular matrices it is best to employ the reordered Schur
form, as described in Section 6.2.
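For the 5×5 example, the family of upper triangular square roots can be checked numerically. In the sketch below (our code) the (1,4) entry of U is taken as 1/2 − a, which is what the equation U² = T forces; the check confirms that U² = T exactly when ac = 0.

```python
# Sketch: U(a, b, c) squares to T iff a*c = 0 (b is always free).
import numpy as np

T = np.array([[0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0]], dtype=float)

def U(a, b, c):
    return np.array([[0, 1, a, 0.5 - a, b],
                     [0, 1, 0, 0.5,     0],
                     [0, 0, 0, 1,       c],
                     [0, 0, 0, 1,       0],
                     [0, 0, 0, 0,       0]])

print(np.allclose(U(0.3, 1.7, 0.0) @ U(0.3, 1.7, 0.0), T))   # True:  a*c = 0
print(np.allclose(U(0.0, 1.7, 0.2) @ U(0.0, 1.7, 0.2), T))   # True:  a*c = 0
print(np.allclose(U(0.3, 1.7, 0.2) @ U(0.3, 1.7, 0.2), T))   # False: a*c != 0
```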
6.6. Consider Algorithm 6.3 and suppose, first, that T ∈ R^{n×n}. The diagonal elements
satisfy
\[
\widehat u_{ii} = \sqrt{t_{ii}}\,(1 + \delta_i), \qquad |\delta_i| \le u.
\]
For the off-diagonal elements, using the analysis of inner products in [276, Sec. 3.1] we
find that, whatever ordering is used in the summation,
\[
(\widehat u_{ii} + \widehat u_{jj})\widehat u_{ij}(1 + \theta_3) = t_{ij} - \sum_{k=i+1}^{j-1}\widehat u_{ik}\widehat u_{kj}(1 + \theta_{n-2}),
\]
where |θ_k| ≤ γ_k. Hence $\widehat U^2 = T + \Delta T$, $|\Delta T| \le \gamma_{n-2}|\widehat U|^2$. The same analysis holds for
complex data but the constants must be increased slightly. This can be accounted for by
replacing γ_{n−2} by $\widetilde\gamma_{n-2}$, or $\widetilde\gamma_n$ for simplicity. For Algorithm 6.7 the errors in forming U_{ii} in
(6.9) mean that only a normwise bound can be obtained.
6.8. Since X_0 does not commute with A we cannot invoke Theorem 6.9. Applying the
iteration we find that
\[
X_1 = \begin{bmatrix} 1 & \xi\theta \\ 0 & \mu \end{bmatrix}, \qquad \xi = \frac{1-\mu}{2},
\]
and hence
\[
X_k = \begin{bmatrix} 1 & \xi^k\theta \\ 0 & \mu \end{bmatrix}.
\]
Thus we have linear convergence to A^{1/2} if |ξ| < 1 (except when ξ = 0, i.e., µ = 1, which
gives convergence in one step) and divergence if |ξ| ≥ 1. For real µ, these two situations
correspond to −1 < µ < 3 and µ ≤ −1 or µ ≥ 3, respectively. Hence quadratic convergence,
and even convergence itself, can be lost when X_0 does not commute with A.
6.9. (a) The result in the hint is proved by using the spectral decomposition C = QDQ^*
(Q unitary, D = diag(d_i) > 0) to rewrite the equation as $\widetilde XD + D\widetilde X = \widetilde H$, where $\widetilde X = Q^*XQ$,
$\widetilde H = Q^*HQ$. Then $\widetilde X = \widetilde H \circ D_1$, where D_1 = ((d_i + d_j)^{-1}) and ∘ is the Hadamard (entrywise)
product. The matrix D_1 is a Cauchy matrix with positive parameters and hence is positive
definite (as follows from its upper triangular LU factor having positive diagonal [276,
Sec. 28.1]). The Hadamard product of a Hermitian positive definite matrix with a Hermitian
positive (semi)definite matrix is Hermitian positive (semi)definite [296, Thm. 5.2.1],
so $\widetilde X$ and hence X are Hermitian positive (semi)definite.
From (6.11) follow the three relations (E.11)–(E.13). From X_0 > 0, (E.12), and the hint, it follows that X_1 > 0, and (E.13) gives X_1^2 ≥ A. Assume
X_k^2 ≥ A and X_k > 0. Then
(i) X_{k+1} ≤ X_k by (E.11) and the hint,
(ii) X_{k+1} > 0 by (E.12) and the hint,
(iii) X_{k+1}^2 ≥ A by (E.13).
Hence all but one of the required inequalities follow by induction. The remaining inequality,
X_k ≥ A^{1/2}, follows from (iii) on invoking Theorem B.9.
The sequence X_k is nonincreasing in the positive semidefinite ordering and bounded below
by A^{1/2}, so it converges to a limit, X_* > 0, by Lemma B.8 (c). From (E.13) it follows that
X_*^2 = A. But A^{1/2} is the only positive definite square root, so X_* = A^{1/2}.
(b) If X_0 commutes with A then by Lemma 6.8 the full and simplified Newton iterations
generate exactly the same sequence, so monotonic convergence holds for (6.12). However,
for arbitrary X_0 > 0 the simplified iteration (6.12) does not, in general, converge.
(c) Since X^2 = A if and only if (Z^{-1}XZ)^2 = Z^{-1}AZ, part (a) can be applied to
$\widetilde A = Z^{-1}AZ$ and $\widetilde X_0 = Z^{-1}X_0Z$, for which $\widetilde X_k = Z^{-1}X_kZ$. Monotonic convergence holds
for the $\widetilde X_k > 0$.
6.10. If the iteration is to converge then it must converge on the spectrum of A, that is, the
iterations
\[
x_{k+1} = \frac{1}{2}\left(x_k + \frac{\lambda}{x_k}\right), \qquad x_0 = \lambda
\]
must converge for each eigenvalue λ of A. If λ ∈ R− then xk ∈ R for all k and so xk cannot
converge to either of the square roots of λ, both of which are pure imaginary. Hence the
Newton iteration does not converge. In view of the relation with the Newton sign iteration
given in Theorem 6.9 (which is valid for scalar a even when a ∈ R− ), the behaviour of the
iteration is described by Problem 5.11.
6.11. The symmetrization step is legal because Xk A = AXk ⇒ Xk A1/2 = A1/2 Xk . The
variables Xk and Yk are easily seen to satisfy (6.15).
6.12. The modified iteration is only linearly convergent, as is easily verified numerically. The
reasoning used to derive the modified iteration is dubious for an iteration that is already
quadratically convergent.
6.13. The iterates satisfy Y_k = A^{-1}X_k, so to analyze the errors in the (exact) iteration we
must set F = A^{-1}E, which gives
\[
G(A^{1/2} + E,\ A^{-1/2} + F) = \frac{1}{2}\begin{bmatrix} E - A^{-1/2}EA^{1/2} \\ A^{-1}E - A^{-1/2}EA^{-1/2} \end{bmatrix}
+ O\!\left(\left\|\begin{bmatrix} E \\ F \end{bmatrix}\right\|^2\right),
\]
which implies the quadratic convergence of the DB iteration (near the solution—this analysis
does not prove global convergence).
6.14. The uncoupled recurrence is numerically unstable [274], so this “simplification”
is not recommended. The iteration (6.63) has exactly the same stability properties as the
Newton iteration (6.12).
6.15. C ≥ 0 implies that ρ(C) is an eigenvalue of C (see Theorem B.7), and since the cardioid
extends only to 1 on the positive real axis, if Λ(C) lies in the cardioid then ρ(C) < 1, so the
spectrum of C must lie inside the unit circle. So the requirement on ρ(C) cannot be relaxed.
6.16. We need to show that X_k defined by (6.46) with X_0 = D^{1/2} is related to B_k from (6.48)
by X_k = D^{1/2} + B_k. This is clearly true for k = 0. Suppose it is true for k. Then the
modified Newton iteration (6.46) is
\[
D^{1/2}E_k + E_kD^{1/2} = A - X_k^2, \qquad X_{k+1} = X_k + E_k,
\]
that is, on substituting X_k = D^{1/2} + B_k,
\[
D^{1/2}(E_k + B_k) + (E_k + B_k)D^{1/2} = A - D - B_k^2.
\]
Comparing with (6.48), we see that B_{k+1} = E_k + B_k = X_{k+1} − X_k + B_k = X_{k+1} − D^{1/2}, as
required.
6.19. For A with a semisimple zero eigenvalue linear convergence to A1/2 holds if the nonzero
eigenvalues of A satisfy the conditions of Theorem 6.16, with A1/2 now denoting the matrix
in Problem 1.27.
By using the Jordan form and Theorem 4.15 the convergence problem as a whole is
reduced to showing the linear convergence to λ1/2 of xk+1 = xk + α(λ − x2k ), x0 = (2α)−1 ,
for every eigenvalue λ of A. Only the convergence for a semisimple λ = 0 is in question. We
just have to show that the iteration xk+1 = xk − αx2k , x0 = (2α)−1 converges to 0 when 0 <
α ≤ ρ(A)−1/2 ; in fact, irrespective of the value of α > 0 we have 0 < xk+1 < xk < · · · < x0
and so xk converges to the unique fixed point 0.
6.20. We are given that A = sI − B with B ≥ 0 and s > ρ(B). Since diag(B) = diag(s −
aii ) ≥ 0, s ≥ maxi aii is necessary. Let α = maxi aii . Then A = αI − C, where C =
B + (α − s)I. Now C ≥ 0, since cij = bij ≥ 0 for i 6= j and cii = α − aii ≥ 0. Since B ≥ 0
and C ≥ 0, ρ(B) is an eigenvalue of B and ρ(C) an eigenvalue of C, by Theorem B.7. Hence
ρ(C) = ρ(B) + α − s < α. Hence we can take s = α in (6.52).
6.21. One way to derive the algorithm is via the matrix sign function. We noted in Section 2.9
that X is the (1,2) block of sign\bigl(\bigl[\begin{smallmatrix} 0 & B \\ A & 0 \end{smallmatrix}\bigr]\bigr). Given Cholesky factorizations A = R^*R
and B = S^*S we have
\[
\begin{bmatrix} 0 & B \\ A & 0 \end{bmatrix}
= \begin{bmatrix} 0 & S^*S \\ R^*R & 0 \end{bmatrix}
= \begin{bmatrix} R^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix}
  \begin{bmatrix} 0 & RS^* \\ SR^* & 0 \end{bmatrix}
  \begin{bmatrix} R & 0 \\ 0 & S \end{bmatrix}.
\]
Hence
\[
\operatorname{sign}\left(\begin{bmatrix} 0 & B \\ A & 0 \end{bmatrix}\right)
= \begin{bmatrix} R^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix}
  \begin{bmatrix} 0 & U \\ U^{-1} & 0 \end{bmatrix}
  \begin{bmatrix} R & 0 \\ 0 & S \end{bmatrix},
\]
where, in view of (8.6), RS ∗ = U H is a polar decomposition. From the (1,2) block of this
equation we obtain X = R−1 U S. That X is Hermitian positive definite can be seen from
X = R−1 U · S = S ∗ H −1 · S. In fact, we have obtained a variant of Algorithm 6.22. For any
nonsingular A, the unitary polar factor of A is the conjugate transpose of the unitary polar
factor of A∗ . Hence it is equivalent to find the unitary polar factor V = U ∗ of SR∗ and then
set X = R−1 V ∗ S ≡ R−1 U S.
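The derivation can be checked numerically. The sketch below (our code) evaluates the sign function by the plain Newton iteration S ← (S + S^{-1})/2, which is not part of this solution but suffices for a test, and compares its (1,2) block with X = R^{-1}US for random Hermitian positive definite A and B.

```python
# Sketch: X = R^{-1} U S equals the (1,2) block of sign([[0, B], [A, 0]]),
# where A = R^*R, B = S^*S are Cholesky factorizations and R S^* = U H is a
# polar decomposition.
import numpy as np
from scipy.linalg import cholesky, polar

rng = np.random.default_rng(1)
n = 4
F, G = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = F @ F.T + n*np.eye(n)                 # Hermitian positive definite
B = G @ G.T + n*np.eye(n)

R = cholesky(A)                           # upper triangular, A = R^* R
S = cholesky(B)                           # upper triangular, B = S^* S
U, _ = polar(R @ S.T)                     # R S^* = U H
X = np.linalg.solve(R, U) @ S             # X = R^{-1} U S

Z = np.block([[np.zeros((n, n)), B], [A, np.zeros((n, n))]])
for _ in range(50):                       # Newton iteration for sign(Z)
    Z = (Z + np.linalg.inv(Z)) / 2
print(np.allclose(Z[:n, n:], X))                                  # (1,2) block is X
print(np.allclose(X, X.T), np.all(np.linalg.eigvalsh(X) > 0))     # X is HPD
```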
7.1. We use Theorem 7.3. As noted in Section 1.5, the ascent sequence is d1 = 1, d2 = 1,
. . . , dm = 1, 0 = dm+1 = dm+2 = · · ·. Hence for ν = 0 and m > 1 there is more than one
element of the sequence in the interval (pν, p(ν + 1)) = (0, p). Thus for m > 1 there is no
pth root.
7.2. Assuming A has no eigenvalues on R− , we can define Aα = exp(α log A), where the log
is the principal logarithm. For α = 1/p, with p an integer, this definition yields A1/p , as
defined in Theorem 7.2. To see this, note that, by commutativity, (Aα )p = exp(α log A)p =
exp(pα log A) = exp(log A) = A, so that Aα is some pth root of A. To determine which root
it is we need to find its spectrum. The eigenvalues of A^α are of the form e^{\frac{1}{p}\log\lambda}, where λ is
an eigenvalue of A. Now log λ = x + iy with y ∈ (−π, π), and so e^{\frac{1}{p}\log\lambda} = e^{x/p}e^{iy/p} lies in
the segment { z : −π/p < arg(z) < π/p }. The spectrum of A^α is therefore precisely that of
A1/p , so these two matrices are one and the same.
For α ∈ (0, 1) we can also define Aα by (7.1) with p = 1/α.
7.3. Let X have Jordan canonical form X = ZJZ −1 . Then In = X p = ZJ p Z −1 , that is,
J p = In . This implies that the eigenvalues λ (the diagonal elements of J) satisfy λp = 1,
i.e., they are pth roots of unity. But then if the Jordan form is nontrivial we see from (1.4)
that J p has nonzero elements in the upper triangle. This contradicts J p = In , so J must be
diagonal.
7.4. From Theorem 3.5 with f(x) = x^p and f^{-1}(x) = x^{1/p} we have L_{x^p}(X, L_{x^{1/p}}(X^p, E)) =
E. Now L_{x^p}(X, E) = \sum_{j=1}^p X^{j-1}EX^{p-j} (cf. (3.24)) and so with X^p = A, L = L_{x^{1/p}}(A, E)
satisfies \sum_{j=1}^p X^{j-1}LX^{p-j} = E, or
\[
\sum_{j=1}^p (X^{p-j})^T \otimes X^{j-1}\cdot \operatorname{vec}(L) = \operatorname{vec}(E). \qquad(\mathrm{E.14})
\]
The coefficient matrix has eigenvalues \sum_{j=1}^p \mu_r^{\,p-j}\mu_s^{\,j-1}, which equals p\mu_r^{\,p-1} for µ_r = µ_s and
(µ_r^p − µ_s^p)/(µ_r − µ_s) for µ_r ≠ µ_s, where the µ_i are the eigenvalues of X. It follows that the coefficient matrix is
nonsingular, and the condition number finite, precisely when every copy of an eigenvalue of
A is mapped to the same pth root, which is equivalent to saying that X is a primary pth
root, by Theorem 7.1.
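A numerical sketch of (E.14) (our code): the Kronecker coefficient matrix is formed explicitly, the resulting L is the Fréchet derivative L_{x^{1/p}}(A, E), and it is compared with a one-sided finite difference.

```python
# Sketch: solve (E.14) for vec(L) and compare L with a finite-difference
# approximation to the Frechet derivative of A -> A^(1/p) at A in direction E.
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(2)
n, p = 4, 3
F = rng.standard_normal((n, n))
A = F @ F.T + n*np.eye(n)                      # no eigenvalues on R^-
E = rng.standard_normal((n, n))

X = fractional_matrix_power(A, 1.0/p)          # principal pth root, X^p = A
K = sum(np.kron(np.linalg.matrix_power(X, p - j).T,
                np.linalg.matrix_power(X, j - 1)) for j in range(1, p + 1))
L = np.linalg.solve(K, E.flatten(order='F')).reshape((n, n), order='F')

t = 1e-6
L_fd = (fractional_matrix_power(A + t*E, 1.0/p) - X) / t
print(np.linalg.norm(L - L_fd) / np.linalg.norm(L))   # small; limited by O(t)
```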
7.9. Since X_0 does not commute with A we cannot invoke Theorem 7.10 or Theorem 7.12.
Applying the iteration we find that
\[
X_1 = \begin{bmatrix} 1 & \xi\theta \\ 0 & \mu \end{bmatrix}, \qquad \xi = \frac{2\mu^2 - \mu - 1}{2\mu^2},
\]
and hence
\[
X_k = \begin{bmatrix} 1 & \xi^k\theta \\ 0 & \mu \end{bmatrix}.
\]
Thus we have linear convergence to A−1/2 if |ξ| < 1 (except when ξ = 0, which gives
convergence in one step) and divergence if |ξ| ≥ 1. For real µ, these two situations correspond
to µ > −1 and µ ≤ −1, respectively. This example shows that (quadratic) convergence can
be lost when X0 does not commute with A.
7.10. We find from the proof of Theorem 7.10 that a2 = (1/2)(1 + 1/p), which decreases
from 3/4 for p = 2 to 1/2 as p → ∞. So once the error is sufficiently small we can expect
slightly faster convergence for larger values of p.
7.13. Since X_0, and hence all the iterates and B, commute with A, we can use the given
factorization with x ← X_k and b ← B to obtain
\[
X_{k+1} - B = \frac{X_k^{1-p}}{p}\bigl((p-1)X_k^p - pX_k^{p-1}B + A\bigr)
= \frac{X_k^{1-p}}{p}(X_k - B)^2\sum_{i=0}^{p-2}(i+1)X_k^iB^{p-2-i}.
\]
We conclude that
\[
\|X_{k+1} - B\| \le \frac{\|X_k^{1-p}\|}{p}\,\|X_k - B\|^2\sum_{i=0}^{p-2}(i+1)\|X_k\|^i\|B\|^{p-2-i}.
\]
7.14. The relation (7.11) holds for this iteration, too, if in (7.11) X/p is replaced by −X/p.
7.15. Gershgorin’s theorem (Theorem B.1) shows that every eigenvalue lies in one of the
disks |z − aii | ≤ 1 − aii , and by diagonal dominance we have aii > 0.5, so the spectrum lies
in E(1, p) in (7.17). Hence the iteration with X0 = I converges to A−1/p by Theorem 7.12.
7.16. We have
\[
\operatorname{vec}(A - \widetilde X^p) = -\left(\sum_{i=0}^{p-1}(X^{p-1-i})^T\otimes X^i\right)\operatorname{vec}(E) + O(\|E\|^2).
\]
8.1. Direct computation shows that H = (A∗A)1/2 = diag(0, 1, . . . , 1). Then A = U H im-
plies that U (:, 2: m) = I(:, 1: m − 1). The first column of U is determined by the requirement
U ∗ U = I and must be of the form θem , where |θ| = 1. If we require a real decomposition
then θ = ±1.
8.3. In both cases H = 0. For the polar decomposition U can be any matrix with orthonor-
mal columns. For the canonical polar decomposition, U = AH + = 0.
8.4. Note that V = Q1/2 is unitary with spectrum in the open right half-plane, so V + V ∗ =
V + V −1 is Hermitian and positive definite. Thus the polar decomposition is I + Q = Q1/2 ·
(Q1/2 +Q−1/2 ) ≡ U H. Hence Q1/2 can be computed by applying (8.17) with A = I +Q. This
result is a special case of a more general result applying to matrices Q in the automorphism
group of a scalar product [283, Thm. 4.7].
8.7. Let A have the SVD A = P[Σ 0]Q^*, where Σ ∈ R^{m×m} is possibly singular. Then
A = P[I_m\ 0]Q^* \cdot Q\bigl[\begin{smallmatrix} \Sigma & 0 \\ 0 & G \end{smallmatrix}\bigr]Q^* ≡ UH is a polar decomposition for any Hermitian positive
semidefinite G ∈ C^{(n−m)×(n−m)}. The decomposition always exists, but H is never unique.
The nonuniqueness is clear in the extreme case m = 1, with A = a^*, U = u^*. Then
\[
a^* = \frac{a^*\widetilde H^{-1}}{\|a^*\widetilde H^{-1}\|_2}\cdot\|a^*\widetilde H^{-1}\|_2\,\widetilde H =: u^*H
\]
is a polar decomposition for any Hermitian positive definite $\widetilde H$ (e.g., $\widetilde H = I$). Another polar
decomposition is (8.4): a^* = \|a\|_2^{-1}a^* \cdot \|a\|_2^{-1}aa^*.
U + U = U ∗ U = H + A∗AH + = H + H 2 H + = HH + HH + = HH + ,
as required.
8.10. We use the same notation as in the proof given for the Frobenius norm. Note that
A∗ E + E ∗ A = HU ∗ E + E ∗ U H = H(U ∗ Q − In ) + (Q∗ U − In )H = HU ∗ Q + Q∗ U H − 2H. Let
t be an eigenvalue of H that maximizes (t − 1)2 and let w with kwk2 = 1 be a corresponding
eigenvector. Then, from (8.9),
kA − U k22 = (t − 1)2 . (E.16)
Since kBk22 = maxkzk2 =1 z ∗ B ∗ Bz, (8.11) implies
on using Re w∗ U ∗ Qw ≤ |w∗ U ∗ Qw| ≤ kU wk2 kQwk2 = 1. The result follows from (E.16) and
(E.17).
Note that while the minimizers are the same, the minimal values of the two objective func-
tions are different.
8.12. Let Q ∈ C^{m×n} be any partial isometry and let A have the SVD A = P\bigl[\begin{smallmatrix} \Sigma_r & 0 \\ 0 & 0_{m-r,n-r} \end{smallmatrix}\bigr]V^*,
where r = rank(A). Then, by Theorem B.3,
\[
\|A - Q\| \ge \|\operatorname{diag}(\sigma_1 - \mu_1, \dots, \sigma_r - \mu_r, -\mu_{r+1}, \dots)\|,
\]
where the µ_i are the singular values of Q. Since µ_i ∈ {0, 1} for all i (see Lemma B.2) we
have
\[
\|A - Q\| \ge \|\operatorname{diag}(f(\sigma_1), \dots, f(\sigma_r), 0, \dots, 0)\|,
\]
where f(x) = min{x, |1−x|}. The lower bound is attained for Q = P diag(µ_i)V^* with µ_i = 1
if σ_i ≥ 1/2, µ_i = 0 if σ_i ≤ 1/2, i = 1: r, and µ_i = 0 for i > r.
(a) Since V is unitary, |v_{ii}| ≤ 1, and so f(W) ≤ \sum_{i=1}^r \sigma_i. Equality is attained iff
V = diag(Ir , Z), for some unitary Z, and since W = P V ∗ Q∗ , (8.1) shows that this condition
holds iff W is a unitary polar factor of A. If A is nonsingular then the unitary polar factor
is unique and so W is the unique maximizer.
(b) If det(P Q∗ ) = 1 then (a) shows that all solutions are given by unitary polar factors
(8.1) of A with det(W) = det(Z) = 1. If σ_{n−1} ≠ 0 then Z either is empty (if r = n) or else
has just one degree of freedom (vnn = ±1), which is used up by the condition det(W ) = 1,
so the solution is unique.
If det(P Q∗ ) = −1 then it is easy to see that det(W ) = 1 implies f (W ) = Re trace(V Σ) ≤
σ1 + · · · + σn−1 − σn , with equality when V = diag(1, . . . , 1, −1), and this V is clearly unique
if σ_{n−1} > σ_n. (For a detailed proof, see Hanson and Norris [246].) It remains to check
that the expression for the maximizer is independent of the choice of SVD (i.e., of P and
Q); this is easily seen using Problem B.11.
which is just the problem in Theorem 8.4. So the solution is Q = M −1/2 U , where U is the
unitary polar factor of M 1/2 A. The solution can also be expressed as Q = R−1 U , where U
is the unitary polar factor of RA and M = R^*R is the Cholesky factorization.
8.15. By using the spectral decomposition A = Q diag(λ_i)Q^* we can reduce (i) to the case
where A = diag(λ_i). Then
\[
\|f(A)X - Xf(A)\|_F^2 = \sum_{i,j}|(f(\lambda_i) - f(\lambda_j))x_{ij}|^2
\le c^2\sum_{i,j}|\lambda_i - \lambda_j|^2|x_{ij}|^2 = c^2\|AX - XA\|_F^2.
\]
Parts (ii) and (iv) are straightforward. Part (iii) is obtained by taking X = I and f (z) = |z|
in part (ii).
8.16. We have
\[
r_1 - r_2e^{i(\theta_2-\theta_1)} = e^{-i\theta_1}(z_1 - z_2). \qquad(\mathrm{E.18})
\]
Swapping the roles of z_1 and z_2 and taking the complex conjugate gives
\[
r_2 - r_1e^{i(\theta_2-\theta_1)} = e^{i\theta_2}\overline{(z_2 - z_1)}. \qquad(\mathrm{E.19})
\]
Hence
\[
1 - e^{i(\theta_2-\theta_1)} = \frac{e^{-i\theta_1}(z_1-z_2) + e^{i\theta_2}\overline{(z_2-z_1)}}{r_1+r_2},
\]
which yields the result on multiplying through by e^{iθ_1} and taking absolute values. This proof
is a specialization of Li's proof of the matrix version of the bound (Theorem 8.10).
8.17. We have
\[
U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad
\widetilde U = \begin{bmatrix} 1 & 0 \\ 0 & \epsilon/\sqrt{\epsilon^2+\delta^2} \\ 0 & \delta/\sqrt{\epsilon^2+\delta^2} \end{bmatrix},
\]
so the (3,2) element of U − $\widetilde U$ is of order δ/ε, as required.
8.19. We have
\[
X_1 = \frac{1}{2}(A + A^{-*}) = \frac{A^{-*}}{2}(A^*A + I) = Y_1^{-*}.
\]
If Y_k = X_k^{-*} then
\[
X_{k+1} = \frac{1}{2}(X_k + X_k^{-*}) = \frac{1}{2}(Y_k^{-*} + Y_k) = \frac{1}{2}Y_k^{-*}(I + Y_k^*Y_k) = Y_{k+1}^{-*}.
\]
8.20. We will show that for the scalar Newton–Schulz iteration x_{k+1} = g(x_k) = \frac12 x_k(3 − x_k^2)
with x_0 ∈ (0, \sqrt{3}), x_k → 1 quadratically as k → ∞. This yields the quadratic convergence
of the matrix iteration for 0 < σ_i(A) < \sqrt{3} by using the SVD of A to diagonalize the
iteration. Now g agrees with f in (7.16) for p = 2, and the argument in the proof of
Theorem 7.11 shows that x_k → 1 as k → ∞, with quadratic convergence described by
x_{k+1} − 1 = −\frac12(x_k − 1)^2(x_k + 2). The latter equation leads to (8.36). The residual recurrence
can be shown directly or deduced from Theorem 7.10 with p = 2.
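A matrix-level sketch (our code) of the Newton–Schulz iteration for the unitary polar factor, with the singular values of X_0 = A placed in (0, √3) so that the scalar argument above applies:

```python
# Sketch: Newton-Schulz iteration X <- X (3I - X^* X)/2 converges quadratically
# to the unitary polar factor when the singular values of X_0 = A lie in (0, sqrt(3)).
import numpy as np

rng = np.random.default_rng(3)
n = 6
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q1 @ np.diag(np.linspace(0.4, 1.5, n)) @ Q2.T   # singular values in (0, sqrt(3))
U = Q1 @ Q2.T                                       # exact unitary polar factor

X = A.copy()
for k in range(9):
    X = X @ (3*np.eye(n) - X.T @ X) / 2
    print(k + 1, np.linalg.norm(X - U, 2))          # errors decrease quadratically
```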
8.21. The error bound (8.18) shows a multiplier in the quadratic convergence condition
of kXk−1 k2 /2, which converges to 1/2 as Xk → U . For the Newton–Schulz iteration the
multiplier is kXk + 2U k2 /2, which converges to 3/2 in the limit. We conclude that the
Newton iteration has an asymptotic error constant three times smaller than that for the
Newton–Schulz iteration, and so can be expected to converge a little more quickly in general.
8.22. In view of (5.38), X_1 has singular values 1 ≤ σ_n^{(1)} ≤ · · · ≤ σ_1^{(1)}. For k ≥ 1, X_k therefore
has singular values 1 ≤ σ_n^{(k)} ≤ · · · ≤ σ_1^{(k)} = f^{(k−1)}(σ_1^{(1)}). The result follows.
8.23. (a) From (8.24), and using the trace characterization (B.5) of the Frobenius norm, we
have
\[
\|X_{k+1}\|_F^2 = \frac{1}{4}\bigl(\mu_k^2\|X_k\|_F^2 + 2\,\mathrm{Re}\,\mathrm{trace}(X_kX_k^{-*}) + \mu_k^{-2}\|X_k^{-1}\|_F^2\bigr).
\]
Differentiating with respect to µ_k shows that the minimum is obtained at µ_k^F.
(b) Write X ≡ X_k. Using \|X_{k+1}\| \le \frac{1}{2}\bigl(\mu_k\|X_k\| + \mu_k^{-1}\|X_k^{-*}\|\bigr) for the 1- and ∞-norms,
and \|A\|_1 = \|A^*\|_\infty, we obtain
\[
\|X_{k+1}\|_1\|X_{k+1}\|_\infty \le \frac{1}{4}\bigl(\mu_k^2\|X\|_1\|X\|_\infty + \|X\|_1\|X^{-1}\|_1 + \|X\|_\infty\|X^{-1}\|_\infty + \mu_k^{-2}\|X^{-1}\|_1\|X^{-1}\|_\infty\bigr).
\]
8.24. Using Table C.2 we see that for m > n, the SVD approach requires about min(14mn^2 +
8n^3, 6mn^2 + 20n^3) + 2mn^2 + n^3 flops, while Algorithm 8.20 requires 6mn^2 + (2k − 3\tfrac13)n^3 ≤
6mn^2 + 17n^3 flops. The SVD approach is clearly the more expensive. For m = n the
operation counts are 25n^3 flops for the SVD versus at most 22n^3 flops for Algorithm 8.20.
Note the comments in Appendix C concerning the relevance of flop counts.
8.25. (a) For the first part, writing c = cos θ, s = sin θ, we find that
\[
A' = G^T\begin{bmatrix} a_{ii} & a_{ij} \\ a_{ji} & a_{jj} \end{bmatrix}
= \begin{bmatrix} ca_{ii} - sa_{ji} & ca_{ij} - sa_{jj} \\ sa_{ii} + ca_{ji} & sa_{ij} + ca_{jj} \end{bmatrix}.
\]
The trace of A′ is f (θ) = c(aii + ajj ) + s(aij − aji ). Since f ′ (θ) = −s(aii + ajj ) + c(aij − aji )
we see that f ′ (θ) = 0 is precisely the condition for symmetry. It is easily checked that for a
suitable choice of the signs of c and s, trace(A′ ) > trace(A) and hence the trace is maximized
(rather than minimized).
(b) trace(GA) = trace(A) − trace(2vv T/(v T v) · A) = trace(A) − 2v TAv/(v T v). This
quantity is maximized when the Rayleigh quotient v TAv/(v T v) is minimized, which occurs
when v is an eigenvector corresponding to λmin (A), which is negative by assumption. Hence
maxv trace(GA) = trace(A) + 2|λmin (A)| > trace(A).
(c) Details can be found in Smith [529, Chap. 3]. This idea was originally suggested
by Faddeev and Faddeeva [180]; Kublanovskaya [364] had earlier investigated
a symmetrization process based on just Givens rotations. The algorithm is only linearly
convergent (with slow linear convergence) and a proof of global convergence appears difficult.
Unfortunately, the idea does not live up to its promise.
8.26. (a) The blocks of Q satisfy
Q1 = (I + A∗ A)−1/2 M, Q2 = A(I + A∗ A)−1/2 M,
giving
Q2 Q∗1 = A(I + A∗ A)−1 .
Using this formula with A ≡ Xk gives (8.38).
(b) The flop counts per iteration are mn2 + 7n3/3 for (8.37) and 6mn2 + 8n3/3 for (8.38).
If advantage is taken of the leading identity block in (8.38) the flop count can be reduced,
but not enough to approach the operation count of (8.37).
(c) As discussed at the end of Section 8.6, we can scale (8.37) by setting Xk ← µk Xk ,
with one of the scalings (8.25)–(8.27). The problem is how to compute µk , since Xk−1 is
not available (and does not exist if m 6= n). Concentrating on (8.25), from (8.38a) we have
Rk∗ Rk = I + Xk∗ Xk , and since Rk is triangular we can apply the power and inverse power
methods to estimate the extremal singular values of Rk and hence those of Xk . Unfortu-
nately, we obtain µk only after Xk+1 has been (partly) computed, so we can only use µk to
scale the next iterate: Xk+1 ← µk Xk+1 .
9.1. (⇐) For any strictly triangular N let D + N = Z diag(J_1, …, J_q)Z^{-1} (q ≥ p) be a
Jordan canonical form of D + N with Jordan blocks
\[
J_i = \begin{bmatrix}
\lambda_i & 1 & & \\
& \lambda_i & \ddots & \\
& & \ddots & 1 \\
& & & \lambda_i
\end{bmatrix} \in \mathbb{C}^{m_i\times m_i},
\]
where, necessarily, m_i does not exceed the k_j corresponding to λ_i. Then
\[
f(D + N) = Z\,\mathrm{diag}\bigl(f(J_1), \dots, f(J_q)\bigr)Z^{-1},
\]
where, from (1.4),
\[
f(J_i) = \begin{bmatrix}
f(\lambda_i) & f'(\lambda_i) & \dots & \dfrac{f^{(m_i-1)}(\lambda_i)}{(m_i-1)!} \\
& f(\lambda_i) & \ddots & \vdots \\
& & \ddots & f'(\lambda_i) \\
& & & f(\lambda_i)
\end{bmatrix}. \qquad(\mathrm{E.20})
\]
Since the derivatives of f are zero on any repeated eigenvalue and f (λi ) = f (λ1 ) for all i,
f (D + N ) = Zf (D)Z −1 = Zf (λ1 )IZ −1 = f (λ1 )I = f (D).
(⇒) Let F = f (D +N ), and note that by assumption F = f (D) and hence F is diagonal.
The equation F (D + N ) = (D + N )F reduces to F N = N F , and equating (i, j) elements
for j > i gives (fii − fjj )nij = 0. Since this equation holds for all strictly triangular N , it
follows that fii = fjj for all i and j and hence that F = f (λ1 )I.
If at least one of the λi is repeated, then we can find a permutation matrix P and a strictly
upper bidiagonal matrix B such that P DP T + B = P (D + P T BP )P T is nonderogatory
and is in Jordan canonical form, and N = P T BP is strictly upper triangular. We have
Λ(D) = Λ(D + N ) and so the requirement f (D + N ) = f (D) implies that f (P DP T + B) =
P f (D)P T = f (λ1 )I, and hence, in view of (E.20), (9.5) holds.
10.1. X(t) = e^{(A+E)t} satisfies the differential equation X'(t) = AX(t) + EX(t), X(0) = I.
Hence from the matrix analogue of (2.3), we have
\[
X(t) = e^{At} + \int_0^t e^{A(t-s)}EX(s)\,ds,
\]
10.2. Let AB = BA and f(t) = e^{(A+B)t} − e^{At}e^{Bt}. Then f(0) = 0 and, because e^{tA} is a
polynomial in tA (as is any function of tA) and hence commutes with B,

Hence
\[
\|e^A - e^B\| \le \|A-B\|\int_0^1 e^{\|A\|(1-s)}e^{\|B\|s}\,ds
\le \|A-B\|\int_0^1 e^{\max(\|A\|,\|B\|)}\,ds = \|A-B\|\,e^{\max(\|A\|,\|B\|)}.
\]
10.4. det(e^{A+B}) = \prod_{i=1}^n \lambda_i(e^{A+B}) = \prod_{i=1}^n e^{\lambda_i(A+B)} = e^{\mathrm{trace}(A+B)} = e^{\mathrm{trace}(A)+\mathrm{trace}(B)} =
e^{\mathrm{trace}(A)}e^{\mathrm{trace}(B)} = \det(e^A)\det(e^B) = \det(e^Ae^B).
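A one-line numerical confirmation (our sketch):

```python
# Sketch: det(e^(A+B)) = e^(trace(A)+trace(B)) = det(e^A e^B), even when AB != BA.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(np.linalg.det(expm(A + B)),
      np.exp(np.trace(A) + np.trace(B)),
      np.linalg.det(expm(A) @ expm(B)))
```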
10.5. The bound is immediate from Lemma 10.15 and Theorem 10.10.
10.6. From (B.28) we have |f[λ, µ]| ≤ max_{z∈Ω}|f'(z)| = max_{z∈Ω}|e^z| = e^{max(Re λ, Re µ)} ≤
e^{α(A)}, where α is defined in (10.10) and Ω is the line joining λ and µ, and there is equality
for λ = µ. Hence max_{λ,µ∈Λ(A)}|f[λ, µ]| = e^{α(A)} = ‖e^A‖_2. Thus the two different formulae
are in fact equivalent.
10.7. By Corollary 3.10, since eλi 6= 0 and eλi = eλj if and only if λi − λj = 2mπi for some
integer m, L(A) is nonsingular precisely when no two eigenvalues of A differ by 2mπi for
some nonzero integer m.
10.8. Since log(I + G) = \sum_{j=1}^\infty (-1)^{j+1}G^j/j (see (1.1)),
\[
\|\log(I+G)\| \le \sum_{j=1}^\infty \frac{\|G\|^j}{j} = -\log(1-\|G\|)
\le \|G\|\sum_{j=0}^\infty \|G\|^j = \frac{\|G\|}{1-\|G\|}.
\]
Similarly,
\[
\|\log(I+G)\| \ge \|G\| - \frac{\|G^2\|}{2} - \frac{\|G^3\|}{3} - \cdots
\ge \|G\| - \frac{\|G^2\|}{2}\bigl(1 + \|G\| + \|G\|^2 + \cdots\bigr) = \|G\| - \frac{\|G\|^2}{2(1-\|G\|)}.
\]
\[
q_m(A) = I + \sum_{j=1}^m \frac{(2m-j)!\,m!}{(2m)!\,(m-j)!}\frac{(-A)^j}{j!} =: I + F.
\]
Now
\[
\|F\| \le \sum_{j=1}^m \frac{(2m-j)!\,m!}{(2m)!\,(m-j)!}\frac{\|A\|^j}{j!} = q_m(-\|A\|) - 1,
\]
Hence ‖q_m(A)^{-1}‖ ≤ 1/(1 − ‖F‖) ≤ 1/(2 − e^{‖A‖/2}) provided ‖A‖ < log 4. The last part is
straightforward to check.
Specializing to k = m and using Problem 10.9 to rewrite the bound entirely in terms of kAk
gives (10.59).
which of course agrees with (1.4). Via the Jordan canonical form, log(A) can then be defined
for A with no eigenvalues on R− .
11.2. The proof is straightforward, by using the integral and (B.10) to obtain the first order
part of the expansion of log(A + E) − log(A).
11.3. We have max_i |1 − λ_i(A)| = ρ(I − A) ≤ ‖I − A‖ < 1, which implies that every eigenvalue
of A lies in the open right half-plane, as required. The convergence also follows from
\[
\|(I-A)(I+A)^{-1}\| \le \|I-A\|\,\|(2I-(I-A))^{-1}\| \le \frac{\|I-A\|}{2-\|I-A\|} < 1.
\]
11.5. The property follows from consideration of Table 11.1 and, in particular, these numbers:
θ_3 = 0.0162,  θ_4 = 0.0539,  θ_5 = 0.114,  θ_7/2 = 0.132,  θ_6 = 0.187,
2θ_5 = 0.228,  θ_7 = 0.264,  2θ_6 = 0.374,  θ_8 = 0.340.
11.6. The equality follows from Theorem 11.4. This preprocessing is rather pointless: we
need A ≈ I, but making kAk = 1 in general makes no useful progress towards this goal.
11.7. The individual eigenvalues of X_k follow the scalar iteration x_{k+1} = (x_k + ax_k^{-1})/2,
x_0 = a, where a is an eigenvalue of A. With the transformation y_k = a^{-1/2}x_k this becomes
y_{k+1} = (y_k + y_k^{-1})/2, y_0 = a^{1/2}, and it follows that −π/2 < arg y_k < π/2 for all k, or
equivalently that x_k is in the open half plane of a^{1/2} for all k.
From the derivation of the product DB iteration we know that M_k = X_kA^{-1}X_k and
that X_k and M_k are functions of A. By commutativity and Theorem 1.40 we know that
for each eigenvalue λ_j of A there are corresponding eigenvalues µ_j of M_k and ξ_j of X_k,
with µ_j = ξ_j^2/λ_j. By Theorem 1.20 it suffices to show that log µ_j = 2 log ξ_j − log λ_j for
all j. Let z = ξ_j/λ_j^{1/2}. Then arg z = arg ξ_j − arg λ_j^{1/2} + 2πk for some k ∈ Z, and since
|arg z| < π/2 (as we just saw in the previous paragraph) and |arg λ_j^{1/2}| < π/2, we have k = 0.
Hence |arg ξ_j − arg λ_j^{1/2}| < π, which implies, by Theorem 11.3, log z = log ξ_j − log λ_j^{1/2} =
log ξ_j − (1/2) log λ_j. Thus log µ_j = 2 log ξ_j − log λ_j, as required. Equation (11.31) follows by
induction.
11.9. For the first part see Edelman and Strang [173]. Since rank(R − I) = n − 1, R is
similar to a single Jordan block J(0). Hence by Theorem 1.28, in which p = 1, all logarithms
are primary matrix functions and have the form log(R) + 2jπiI for j ∈ Z, which is always
upper triangular and is real only for j = 0.
11.10. The relative backward error of $\widehat X$ is the smallest value of ‖E‖/‖A‖ such that $\widehat X$ =
log(A + E). There is only one E satisfying the latter equation, namely E = e^{\widehat X} − A, so the
relative backward error is ‖e^{\widehat X} − A‖/‖A‖.
11.11. Consider first (11.32). Newton’s method for f (X) = 0 solves for E in the first order
perturbation expansion of f (X + E) = 0 and sets X ← X + E. For f (X) = eX − A we have,
if XE = EX, f (X + E) = eX eE − A = eX (I + E) − A to first order, or E = e−X A − I,
which gives X + E = X − I + e−X A. But XA = AX clearly implies XE = EX, so under
the assumption XA = AX, E is the Newton update (which we are implicitly assuming is
unique). Since X + E clearly commutes with A, this argument can be repeated and we have
derived (11.32). The commutativity relations involving Xk are easily proved by induction.
To analyze the asymptotic convergence, let A = e^X with X a primary logarithm of
A, and write X_{k+1} − X = X_k − X − I + e^{−X_k}A = X_k − X − I + e^{X−X_k}, since X and
X_k are functions of A and hence commute. Theorem 4.8 gives
\[
\|e^{X-X_k} - I - (X - X_k)\| \le \tfrac{1}{2}\max_{0\le t\le 1}\|(X-X_k)^2e^{t(X-X_k)}\|,
\]
which gives ‖X − X_{k+1}‖ ≤ \tfrac12‖X − X_k‖^2e^{‖X−X_k‖} for any subordinate matrix norm.
The analysis for (11.33) is very similar, if a little more tedious. The convergence bound
is ‖X − X_{k+1}‖ ≤ \tfrac16‖X − X_k‖^3e^{‖X−X_k‖}.
The extra cost of (11.33) over (11.32) is one matrix inversion per iteration, which is
negligible compared with the cost of the matrix exponential. Therefore, all other things
being equal, the cubically convergent iteration (11.33) should be preferred.
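A numerical sketch of iteration (11.32) (our code), with A close to the identity and X_0 = 0 so that the commutativity assumption is satisfied trivially:

```python
# Sketch: the Newton iteration X <- X - I + expm(-X) A for log(A), started from
# X_0 = 0, which commutes with A; the errors roughly square at each step.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
n = 5
A = np.eye(n) + 0.1*rng.standard_normal((n, n))   # close to I, no eigenvalues on R^-
X_exact = logm(A)

X = np.zeros((n, n))
for k in range(6):
    X = X - np.eye(n) + expm(-X) @ A
    print(k + 1, np.linalg.norm(X - X_exact))
```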
12.1. With B = kπA, B^{2i} = (kπ)^{2i}I, so
\[
\cos(B) = \sum_{i=0}^\infty \frac{(-1)^i}{(2i)!}B^{2i}
= \left(\sum_{i=0}^\infty \frac{(-1)^i}{(2i)!}(k\pi)^{2i}\right)I = \cos(k\pi)I = (-1)^kI.
\]
12.2. From Theorem 1.36 the only possible Jordan form of A is J_2(λ) with sin λ = 1.
But then cos(λ) = 0 and so by Theorem 1.36 the Jordan block splits into two 1 × 1 blocks
in sin(J_2(λ)). Hence sin(A) cannot be equal to (or similar to) \bigl[\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\bigr].
12.3. All four identities follow from Theorems 1.16 and 1.20.
12.4. Let B = i^{-1}\log(X_0). Then e^{iB} = X_0 and so we have X_1 = \tfrac12(X_0 + X_0^{-1}) = \tfrac12(e^{iB} + e^{-iB}) = \cos(B), as required.
12.5. sin(A) = \sqrt{I - \cos^2(A)} for some square root, but without knowing the eigenvalues of
A we cannot determine which is the required square root. But if A is triangular then the
eigenvalues of A are known and we can proceed. Consider, first, the scalar case and note
that
\[
\cos(x+iy) = \cos(x)\cos(iy) - \sin(x)\sin(iy) = \cos(x)\cosh(y) - i\sin(x)\sinh(y),
\]
\[
\sin(x+iy) = \sin(x)\cos(iy) + \cos(x)\sin(iy) = \sin(x)\cosh(y) + i\cos(x)\sinh(y).
\]
Since cosh(y) ≥ 0, from x we can determine the sign of Re sin(x+iy) and hence which square
root to take in sin(z) = \sqrt{1 - \cos^2(z)}. Returning to the matrix case, we can determine each
diagonal element of sin(A) = \sqrt{I - \cos^2(A)} by the procedure just outlined. If the eigenvalues
of A are distinct then we can use the Parlett recurrence to solve for the off-diagonal part of
sin(A). However, if A has a repeated eigenvalue then the recurrence breaks down. Indeed,
the equation cos^2(A) + sin^2(A) = I may not contain enough information to determine sin(A).
For example, from
\[
A = \begin{bmatrix} 0 & x \\ 0 & 0 \end{bmatrix}, \qquad
\cos(A) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
\sin(A) = \begin{bmatrix} 0 & x \\ 0 & 0 \end{bmatrix},
\]
it is clear that sin(A) cannot be recovered from I − cos^2(A) = 0 without using knowledge
of A.
13.1. By (13.5), AQ_m = Q_mH_m, where Q_m ∈ C^{n×m}, H_m ∈ C^{m×m}. Let U be such that
[Q_m U] is unitary. Then [Q_m U]^*A[Q_m U] = \bigl[\begin{smallmatrix} H_m & 0 \\ 0 & U^*AU \end{smallmatrix}\bigr]. Hence
\[
f(A)b = [Q_m\ U]\,f\!\left(\begin{bmatrix} H_m & 0 \\ 0 & U^*AU \end{bmatrix}\right)\begin{bmatrix} Q_m^* \\ U^* \end{bmatrix}b
= [Q_m\ U]\begin{bmatrix} f(H_m) & 0 \\ 0 & f(U^*AU) \end{bmatrix}\|b\|_2e_1
= \|b\|_2Q_mf(H_m)e_1 = f_m.
\]
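A sketch of this computation (our code): an Arnoldi basis is built explicitly and f_m = ‖b‖_2 Q_m f(H_m)e_1 is compared with f(A)b for f = exp. For a subspace that is not exactly invariant, f_m is of course only an approximation, but the formula is the same.

```python
# Sketch: Arnoldi approximation f_m = ||b||_2 Q_m f(H_m) e_1 to f(A) b, with f = exp.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
n, m = 50, 20
A = rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

Q = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
Q[:, 0] = b / np.linalg.norm(b)
for j in range(m):                            # modified Gram-Schmidt Arnoldi
    w = A @ Q[:, j]
    for i in range(j + 1):
        H[i, j] = Q[:, i] @ w
        w -= H[i, j] * Q[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    Q[:, j + 1] = w / H[j + 1, j]

fm = np.linalg.norm(b) * Q[:, :m] @ expm(H[:m, :m])[:, 0]
exact = expm(A) @ b
print(np.linalg.norm(fm - exact) / np.linalg.norm(exact))   # small relative error
```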
13.3. For any µ ∈ C, H − µI is upper Hessenberg with nonzero subdiagonal elements and so
rank(H − µI) ≥ n − 1. For µ an eigenvalue we therefore have rank(H − µI) = n − 1, which
implies that µ appears in only one Jordan block.
13.4. The correctness of the algorithm is easily proved by induction. For k = 1 the correctness
is immediate. For general k it is easy to see that x = \tfrac12X_{k-1}(3I - X_{k-1}^2)b.
Since the function makes 3 recursive calls, with 3 matrix–vector multiplications at the
lowest level (k = 1), the cost is 3^k matrix–vector multiplications. This is to be compared
with 2k matrix multiplications and 1 matrix–vector multiplication if we compute X_k and
then X_kb. The recursive computation requires fewer flops if 3^kn^2 ≲ 2kn^3. If k = 8, this
requires n ≳ 410. The temporary storage required by the recursive algorithm is just k vectors
(one per level of recursion).
Note that this algorithm does not lend itself to dynamic determination of k, unlike for
the usual Newton–Schulz iteration: if we know X_{k−1}b then computing X_kb costs 2/3 as much
as computing X_kb from scratch.
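A sketch of the recursive scheme (our code), taking X_0 = A as the starting matrix (an assumption on our part, since the problem statement is not reproduced here):

```python
# Sketch: compute X_k b recursively from x = X_{k-1}(3 b - X_{k-1}^2 b)/2 using
# only matrix-vector products (3 recursive calls per level, 3^k in total), and
# compare with forming X_k explicitly.
import numpy as np

def apply_Xk(A, k, v):
    if k == 0:
        return A @ v
    t1 = apply_Xk(A, k - 1, v)          # X_{k-1} v
    t2 = apply_Xk(A, k - 1, t1)         # X_{k-1}^2 v
    t3 = apply_Xk(A, k - 1, t2)         # X_{k-1}^3 v
    return (3*t1 - t3) / 2              # X_k v = X_{k-1}(3I - X_{k-1}^2) v / 2

rng = np.random.default_rng(7)
n, k = 30, 4
F = rng.standard_normal((n, n))
A = F @ F.T / np.linalg.norm(F @ F.T, 2)    # symmetric, eigenvalues in (0, 1]
b = rng.standard_normal(n)

X = A.copy()
for _ in range(k):
    X = X @ (3*np.eye(n) - X @ X) / 2       # form X_k explicitly
print(np.allclose(apply_Xk(A, k, b), X @ b))
```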
B.1. Since the columns of X form a basis they are independent, and so X^+X = I. Let
(λ, x) be an eigenpair of A with x ∈ X. Then x = Xz for a unique z ≠ 0, and z = X^+x.
Hence
\[
\lambda x = Ax = AXz = XBz.
\]
Multiplying on the left by X^+ gives λz = λX^+x = Bz, so (λ, z) is an eigenpair for B.
Let (λ, z) be an eigenpair for B. Then AXz = XBz = λXz, and Xz 6= 0 since the
columns of X are independent, so (λ, Xz) is an eigenpair for A.
B.2. Let A be Hermitian and write X = A+ . Taking the conjugate transposes of (B.3) (i)
and (B.3) (ii) yields AX ∗ A = A and X ∗ AX ∗ = X ∗ . Using (B.3) (iii) we have (X ∗ A)∗ =
AX = (AX)∗ = X ∗ A, and (B.3) (iv) yields (AX ∗ )∗ = XA = (XA)∗ = AX ∗ . So X ∗ satisfies
the same four Moore–Penrose conditions as X, which means that X = X ∗ by the uniqueness
of the pseudoinverse. Finally, by (B.3) (iii) we have AX = (AX)∗ = XA, so A and X
commute.
Since kxk2 = kyk2 , A is a partial isometry iff kΣr yk2 = kyk2 for all y, or equivalently,
Σr = I r .
B.5. Let λ be an eigenvalue of A and x the corresponding eigenvector, and form the matrix
X = [x, x, . . . , x] ∈ Cn×n . Then AX = λX, so |λ|kXk = kAXk ≤ kAk kXk, showing that
|λ| ≤ kAk. For a subordinate norm it suffices to take norms in the equation Ax = λx.
B.6. Let B = \bigl[\begin{smallmatrix} A \\ 0 \end{smallmatrix}\bigr]. Since B^*B = A^*A, B and A have the same singular values, so ‖B‖ = ‖A‖.
B.7. For any unitarily invariant norm, ‖B‖ depends only on the singular values of B. Now
(UA)^*(UA) = A^*U^*UA = A^*A, so UA and A have the same singular values. Hence ‖A‖ =
‖UA‖. Alternatively, choose V ∈ C^{m×(m−n)} so that [U, V] is unitary and use Problem B.6
to deduce that
\[
\|A\| = \left\|\begin{bmatrix} A \\ 0 \end{bmatrix}\right\|
= \left\|[U\ V]\begin{bmatrix} A \\ 0 \end{bmatrix}\right\| = \|UA\|.
\]
Therefore P1 − P2 = 0.
B.10. AA+ is Hermitian by (B.3) (iii), and (AA+ )2 = AA+ AA+ = AA+ by (B.3) (i). It
remains to show that range(AA+ ) = range(A).
Let x ∈ range(A), so that x = Ay for some y. Then, by (B.3) (i), x = AA+ Ay = AA+ x,
so x ∈ range(AA+ ). Conversely, x ∈ range(AA+ ) implies x = AA+ y for some y and then
x = A(A+ y) ∈ range(A). Thus AA+ is the orthogonal projector onto range(A).
By the first part, the orthogonal projector onto range(A^*) is A^*(A^*)^+ = A^*(A^+)^* =
(A^+A)^* = A^+A.
+
PΣQ^* = A = $\widetilde P\Sigma\widetilde Q^*$ = PD_1^*ΣD_2Q^*, which implies Σ = D_1^*ΣD_2 = ΣD_1^*D_2, since D_1
commutes with Σ^2 and hence with (Σ^2)^{1/2} = Σ. Write Σ = diag(Σ_1, 0_{n−r}), where Σ_1 ∈ R^{r×r}
You will find it a very good practice always to verify your references, sir.
— MARTIN JOSEPH ROUTH (1878)
[Figure: histogram of the number of references by year of publication, 1850–2008.]
[1] N. I. Achieser. Theory of Approximation. Frederick Ungar Publishing Co., New York,
1956. x+307 pp. (Cited on p. 130.)
[2] Eva Achilles and Richard Sinkhorn. Doubly stochastic matrices whose squares are
idempotent. Linear and Multilinear Algebra, 38:343–349, 1995. (Cited on p. 191.)
[3] Martin Afanasjew, Michael Eiermann, Oliver G. Ernst, and Stefan Güttel. Implemen-
tation of a restarted Krylov subspace method for the evaluation of matrix functions.
Manuscript, July 2007. 27 pp. (Cited on p. 305.)
[4] Martin Afanasjew, Michael Eiermann, Oliver G. Ernst, and Stefan Güttel. On the
steepest descent method for matrix functions. Manuscript, 2007. 19 pp. (Cited on
p. 305.)
[5] S. N. Afriat. Analytic functions of finite dimensional linear transformations. Proc.
Cambridge Philos. Soc., 55(1):51–61, 1959. (Cited on p. 27.)
[6] Donald J. Albers and G. L. Alexanderson, editors. Mathematical People: Profiles and
Interviews. Birkhäuser, Boston, MA, USA, 1985. xvi+372 pp. ISBN 0-8176-3191-7.
(Cited on p. 171.)
[7] J. Albrecht. Bemerkungen zu Iterationsverfahren zur Berechnung von A1/2 und A−1 .
Z. Angew. Math. Mech., 57:T262–T263, 1977. (Cited on p. 167.)
[8] G. Alefeld and N. Schneider. On square roots of M -matrices. Linear Algebra Appl.,
42:119–132, 1982. (Cited on p. 167.)
[9] Marc Alexa. Linear combination of transformations. ACM Trans. Graphics, 21(3):
380–387, 2002. (Cited on pp. 50, 53.)
[10] E. J. Allen, J. Baglama, and S. K. Boyd. Numerical approximation of the product of
the square root of a matrix with a vector. Linear Algebra Appl., 310:167–181, 2000.
(Cited on p. 310.)
[11] Simon L. Altmann. Rotations, Quaternions, and Double Groups. Oxford University
Press, 1986. xiv+317 pp. ISBN 0-19-855372-2. (Cited on p. 300.)
[12] E. Anderson, Z. Bai, C. H. Bischof, S. Blackford, J. W. Demmel, J. J. Dongarra, J. J.
Du Croz, A. Greenbaum, S. J. Hammarling, A. McKenney, and D. C. Sorensen. LA-
PACK Users’ Guide. Third edition, Society for Industrial and Applied Mathematics,
Philadelphia, PA, USA, 1999. xxvi+407 pp. ISBN 0-89871-447-8. (Cited on pp. 100,
229.)
[13] T. Ando. Concavity of certain maps on positive definite matrices and applications to
Hadamard products. Linear Algebra Appl., 26:203–241, 1979. (Cited on p. 52.)
[14] T. Ando. Operator-Theoretic Methods for Matrix Inequalities. Hokusei Gakuen Uni-
versity, March 1998. 77 pp. (Cited on p. 52.)
[15] T. Ando, Chi-Kwong Li, and Roy Mathias. Geometric means. Linear Algebra Appl.,
385:305–334, 2004. (Cited on p. 52.)
[16] Huzihiro Araki and Shigeru Yamagami. An inequality for Hilbert–Schmidt norm.
Commun. Math. Phys., 81:89–96, 1981. (Cited on p. 215.)
[17] M. Arioli, B. Codenotti, and C. Fassino. The Padé method for computing the matrix
exponential. Linear Algebra Appl., 240:111–130, 1996. (Cited on pp. 104, 248.)
[18] W. E. Arnoldi. The principle of minimized iterations in the solution of the matrix
eigenvalue problem. Quart. Appl. Math., 9:17–29, 1951. (Cited on p. 309.)
[19] Vincent Arsigny, Oliver Commowick, Xavier Pennec, and Nicholas Ayache. A log–
Euclidean framework for statistics on diffeomorphisms. In Medical Image Computing
and Computer-Assisted Intervention—MICCAI 2006, Rasmus Larsen, Mads Nielsen,
and Jon Sporring, editors, number 4190 in Lecture Notes in Computer Science, Spring-
er-Verlag, Berlin, 2006, pages 924–931. (Cited on p. 284.)
[20] Vincent Arsigny, Pierre Fillard, Xavier Pennec, and Nicholas Ayache. Geometric means
in a novel vector space structure on symmetric positive-definite matrices. SIAM J.
Matrix Anal. Appl., 29(1):328–347, 2007. (Cited on p. 47.)
[21] Ashkan Ashrafi and Peter M. Gibson. An involutory Pascal matrix. Linear Algebra
Appl., 387:277–286, 2004. (Cited on p. 165.)
[22] Kendall E. Atkinson and Weimin Han. Theoretical Numerical Analysis: A Functional
Analysis Framework. Second edition, Springer-Verlag, New York, 2005. xviii+576 pp.
ISBN 0-387-25887-6. (Cited on p. 69.)
[23] Jean-Pierre Aubin and Ivar Ekeland. Applied Nonlinear Analysis. Wiley, New York,
1984. xi+518 pp. ISBN 0-471-05998-6. (Cited on p. 69.)
[24] L. Autonne. Sur les groupes linéaires, réels et orthogonaux. Bulletin de la Société
Mathématique de France, 30:121–134, 1902. (Cited on p. 213.)
[25] Ivo Babuška, Milan Práger, and Emil Vitásek. Numerical Processes in Differential
Equations. Wiley, London, 1966. (Cited on p. 167.)
[26] Zhaojun Bai, Wenbin Chen, Richard Scalettar, and Ichitaro Yamazaki. Lecture notes
on advances of numerical methods for Hubbard quantum Monte Carlo simulation.
Manuscript, 2007. (Cited on pp. 44, 263.)
[27] Zhaojun Bai and James W. Demmel. Design of a parallel nonsymmetric eigenroutine
toolbox, Part I. In Proceedings of the Sixth SIAM Conference on Parallel Processing
for Scientific Computing, Volume I, Richard F. Sincovec, David E. Keyes, Michael R.
Leuze, Linda R. Petzold, and Daniel A. Reed, editors, Society for Industrial and Ap-
plied Mathematics, Philadelphia, PA, USA, 1993, pages 391–398. Also available as
Research Report 92-09, Department of Mathematics, University of Kentucky, Lexing-
ton, KY, USA, December 1992, 30 pp. (Cited on pp. 41, 51.)
[28] Zhaojun Bai and James W. Demmel. On swapping diagonal blocks in real Schur form.
Linear Algebra Appl., 186:73–95, 1993. (Cited on p. 229.)
[29] Zhaojun Bai and James W. Demmel. Using the matrix sign function to compute
invariant subspaces. SIAM J. Matrix Anal. Appl., 19(1):205–225, 1998. (Cited on
pp. 41, 124, 125.)
[30] Zhaojun Bai, James W. Demmel, Jack J. Dongarra, Axel Ruhe, and Henk A. Van der
Vorst, editors. Templates for the Solution of Algebraic Eigenvalue Problems: A Prac-
tical Guide. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA,
2000. xxix+410 pp. ISBN 0-89871-471-0. (Cited on p. 309.)
[31] Zhaojun Bai, James W. Demmel, and Ming Gu. Inverse free parallel spectral divide
and conquer algorithms for nonsymmetric eigenproblems. Numer. Math., 76:279–308,
1997. (Cited on p. 130.)
[32] Zhaojun Bai, Mark Fahey, and Gene H. Golub. Some large-scale matrix computation
problems. J. Comput. Appl. Math., 74:71–89, 1996. (Cited on pp. 44, 318.)
[33] Zhaojun Bai, Mark Fahey, Gene H. Golub, M. Menon, and E. Richter. Computing
partial eigenvalue sum in electronic structure calculations. Technical Report SCCM-
98-03, SCCM, Stanford University, January 1998. 19 pp. (Cited on p. 318.)
[34] Zhaojun Bai and Gene H. Golub. Bounds for the trace of the inverse and the deter-
minant of symmetric positive definite matrices. Ann. Numer. Math., 4:29–38, 1997.
(Cited on p. 318.)
[35] Zhaojun Bai and Gene H. Golub. Some unusual eigenvalue problems. In Vector and
Parallel Processing—VECPAR’98, J. Palma, J. Dongarra, and V. Hernández, editors,
volume 1573 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, 1999,
pages 4–19. (Cited on p. 318.)
[36] David H. Bailey. Algorithm 719: Multiprecision translation and execution of FOR-
TRAN programs. ACM Trans. Math. Software, 19(3):288–319, 1993. (Cited on
pp. 188, 286.)
[37] David H. Bailey, Yozo Hida, Xiaoye S. Li, and Brandon Thompson. ARPREC: An
arbitrary precision computation package. Technical Report LBNL-53651, Lawrence
Berkeley National Laboratory, Berkeley, California, March 2002. 8 pp. (Cited on
p. 188.)
[38] George A. Baker, Jr. Essentials of Padé Approximants. Academic Press, New York,
1975. xi+306 pp. (Cited on pp. 80, 104, 264, 274.)
[39] George A. Baker, Jr. and Peter Graves-Morris. Padé Approximants, volume 59 of En-
cyclopedia of Mathematics and Its Applications. Second edition, Cambridge University
Press, 1996. xiv+746 pp. (Cited on pp. 80, 104, 274, 283.)
[40] H. F. Baker. The reciprocation of one quadric into another. Proc. Cambridge Philos.
Soc., 23:22–27, 1925. (Cited on pp. 28, 52.)
[41] A. V. Balakrishnan. Fractional powers of closed operators and the semigroups gener-
ated by them. Pacific J. Math., 10(2):419–437, 1960. (Cited on p. 187.)
[42] R. P. Bambah and S. Chowla. On integer cube roots of the unit matrix. Science and
Culture, 12:105, 1946. (Cited on p. 189.)
[43] Itzhack Y. Bar-Itzhack, J. Meyer, and P. A. Fuhrmann. Strapdown matrix orthogonal-
ization: The dual iterative algorithm. IEEE Trans. Aerospace and Electronic Systems,
AES-12(1):32–37, 1976. (Cited on p. 215.)
[44] A. Y. Barraud. Investigations autour de la fonction signe d’une matrice application a
l’équation de Riccati. R.A.I.R.O. Automatique/Systems Analysis and Control, 13(4):
335–368, 1979. (Cited on p. 130.)
[45] Anders Barrlund. Perturbation bounds on the polar decomposition. BIT, 30:101–113,
1990. (Cited on p. 215.)
[46] Friedrich L. Bauer. Decrypted Secrets: Methods and Maxims of Cryptology. Third
edition, Springer-Verlag, Berlin, 2002. xii+474 pp. ISBN 3-540-42674-4. (Cited on
pp. 165, 167.)
[47] U. Baur and Peter Benner. Factorized solution of Lyapunov equations based on hier-
archical matrix arithmetic. Computing, 78:211–234, 2006. (Cited on pp. 51, 316.)
[48] Connice A. Bavely and G. W. Stewart. An algorithm for computing reducing subspaces
by block diagonalization. SIAM J. Numer. Anal., 16(2):359–367, 1979. (Cited on
p. 89.)
[49] C. A. Beattie and S. W. Smith. Optimal matrix approximants in structural iden-
tification. J. Optimization Theory and Applications, 74(1):23–56, 1992. (Cited on
p. 217.)
[50] Alfredo Bellen and Marino Zennaro. Numerical Methods for Delay Differential Equa-
tions. Oxford University Press, 2003. xiv+395 pp. ISBN 0-19-850654-6. (Cited on
p. 51.)
[51] Richard Bellman. Introduction to Matrix Analysis. Second edition, McGraw-Hill,
New York, 1970. xxiii+403 pp. Reprinted by Society for Industrial and Applied
Mathematics, Philadelphia, PA, USA, 1997. ISBN 0-89871-399-4. (Cited on pp. 51,
105, 165, 167, 264, 267.)
[52] Adi Ben-Israel and Thomas N. E. Greville. Generalized Inverses: Theory and Appli-
cations. Second edition, Springer-Verlag, New York, 2003. xv+420 pp. ISBN 0-387-
00293-6. (Cited on pp. 214, 325, 353.)
[53] Peter Benner and Ralph Byers. Disk functions and their relationship to the matrix
sign function. In Proceedings of the European Control Conference ECC97, Paper 936,
BELWARE Information Technology, Waterloo, Belgium, 1997. CD ROM. (Cited on
p. 49.)
[54] Peter Benner, Ralph Byers, Volker Mehrmann, and Hongguo Xu. A unified deflating
subspace approach for classes of polynomial and rational matrix equations. Preprint
SFB393/00-05, Zentrum für Technomathematik, Universität Bremen, Bremen, Ger-
many, January 2000. 29 pp. (Cited on pp. 49, 187.)
[55] Peter Benner and Enrique S. Quintana-Ortı́. Solving stable generalized Lyapunov
equations with the matrix sign function. Numer. Algorithms, 20(1):75–100, 1999.
(Cited on pp. 51, 123.)
[56] Peter Benner, Enrique S. Quintana-Ortı́, and Gregorio Quintana-Ortı́. Solving stable
Sylvester equations via rational iterative schemes. J. Sci. Comput., 28:51–83, 2006.
(Cited on p. 51.)
[57] Michele Benzi and Gene H. Golub. Bounds for the entries of matrix functions with
applications to preconditioning. BIT, 39(3):417–438, 1999. (Cited on p. 317.)
[58] Michele Benzi and Nader Razouk. Decay bounds and O(n) algorithms for approxi-
mating functions of sparse matrices. Electron. Trans. Numer. Anal., 28:16–39, 2007.
(Cited on p. 318.)
[59] Håvard Berland, Bård Skaflestad, and Will Wright. EXPINT—A MATLAB package
for exponential integrators. ACM Trans. Math. Software, 33(1):Article 4, 2007. (Cited
on p. 37.)
[60] Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical
Sciences. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA,
1994. xx+340 pp. Corrected republication, with supplement, of work first published
in 1979 by Academic Press. ISBN 0-89871-321-8. (Cited on pp. 159, 260, 329.)
[61] David S. Bernstein and Charles F. Van Loan. Rational matrix functions and rank-1
updates. SIAM J. Matrix Anal. Appl., 22(1):145–154, 2000. (Cited on p. 28.)
[62] Rajendra Bhatia. Some inequalities for norm ideals. Commun. Math. Phys., 111:
33–39, 1987. (Cited on p. 135.)
[63] Rajendra Bhatia. Matrix factorizations and their perturbations. Linear Algebra Appl.,
197/198:245–276, 1994. (Cited on pp. 215, 217.)
[64] Rajendra Bhatia. Matrix Analysis. Springer-Verlag, New York, 1997. xi+347 pp.
ISBN 0-387-94846-5. (Cited on pp. 69, 187, 215, 217, 237, 315, 330.)
[65] Rajendra Bhatia. Positive Definite Matrices. Princeton University Press, Princeton,
NJ, USA, 2007. ix+254 pp. ISBN 0-691-12918-5. (Cited on pp. 52, 315, 330.)
[66] Rajendra Bhatia and Fuad Kittaneh. Approximation by positive operators. Linear
Algebra Appl., 161:1–9, 1992. (Cited on p. 214.)
[67] Rajendra Bhatia and Kalyan K. Mukherjea. On weighted Löwdin orthogonalization.
Int. J. Quantum Chemistry, 29:1775–1778, 1986. (Cited on p. 42.)
[68] M. D. Bingham. A new method for obtaining the inverse matrix. J. Amer. Statist.
Assoc., 36(216):530–534, 1941. (Cited on p. 90.)
[69] Dario A. Bini, Nicholas J. Higham, and Beatrice Meini. Algorithms for the matrix pth
root. Numer. Algorithms, 39(4):349–378, 2005. (Cited on pp. 187, 188, 191.)
[70] Dario A. Bini, Guy Latouche, and Beatrice Meini. Numerical Methods for Structured
Markov Chains. Oxford University Press, 2005. xi+327 pp. ISBN 0-19-852768-3.
(Cited on p. 45.)
[71] R. E. D. Bishop. Arthur Roderick Collar. 22 February 1908–12 February 1986. Bi-
ographical Memoirs of Fellows of the Royal Society, 33:164–185, 1987. (Cited on
p. 27.)
[72] Åke Björck and C. Bowie. An iterative algorithm for computing the best estimate of
an orthogonal matrix. SIAM J. Numer. Anal., 8(2):358–364, 1971. (Cited on p. 215.)
[73] Åke Björck and Sven Hammarling. A Schur method for the square root of a matrix.
Linear Algebra Appl., 52/53:127–140, 1983. (Cited on pp. 163, 166, 167, 171, 187.)
[74] Sergio Blanes and Fernando Casas. On the convergence and optimization of the Baker–
Campbell–Hausdorff formula. Linear Algebra Appl., 378:135–158, 2004. (Cited on
p. 263.)
[75] Artan Boriçi. QCDLAB project. http://phys.fshn.edu.al/qcdlab.html. (Cited on
p. 43.)
[76] Steffen Börm, Lars Grasedyck, and Wolfgang Hackbusch. Hierarchical matrices. Lec-
ture Note No. 21, Max-Planck-Institute for Mathematics in the Sciences, Leipzig, Ger-
many, 2003. 71 pp. Revised June 2006. (Cited on p. 316.)
[77] Jonathan M. Borwein, David Bailey, and Roland Girgensohn. Experimentation in
Mathematics: Computational Paths to Discovery. A K Peters, Natick, Massachusetts,
2004. x+357 pp. ISBN 1-56881-136-5. (Cited on p. 33.)
[78] C. Bouby, D. Fortuné, W. Pietraszkiewicz, and C. Vallée. Direct determination of
the rotation in the polar decomposition of the deformation gradient by maximizing a
Rayleigh quotient. Z. Angew. Math. Mech., 85(3):155–162, 2005. (Cited on p. 43.)
[79] David W. Boyd. The power method for ℓp norms. Linear Algebra Appl., 9:95–101,
1974. (Cited on p. 69.)
[80] Geoff Boyd, Charles A. Micchelli, Gilbert Strang, and Ding-Xuan Zhou. Binomial
matrices. Adv. in Comput. Math., 14:379–391, 2001. (Cited on p. 166.)
[81] Russell J. Bradford, Robert M. Corless, James H. Davenport, David J. Jeffrey, and
Stephen M. Watt. Reasoning about the elementary functions of complex analysis. An-
nals of Mathematics and Artificial Intelligence, 36:303–318, 2002. (Cited on pp. 280,
283.)
[82] T. J. Bridges and P. J. Morris. Differential eigenvalue problems in which the parameter
appears nonlinearly. J. Comput. Phys., 55:437–460, 1984. (Cited on p. 46.)
[83] William Briggs. Ants, Bikes, and Clocks: Problem Solving for Undergraduates. Society
for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2005. 168 pp. ISBN
0-89871-574-1. (Cited on p. 343.)
[84] A. Buchheim. On the theory of matrices. Proc. London Math. Soc., 16:63–82, 1884.
(Cited on p. 26.)
[85] A. Buchheim. An extension of a theorem of Professor Sylvester’s relating to matrices.
Phil. Mag., 22(135):173–174, 1886. Fifth series. (Cited on p. 26.)
[86] G. J. Butler, Charles R. Johnson, and H. Wolkowicz. Nonnegative solutions of a
quadratic matrix equation arising from comparison theorems in ordinary differential
equations. SIAM J. Alg. Discrete Methods, 6(1):47–53, 1985. (Cited on p. 167.)
[87] Ralph Byers. A LINPACK-style condition estimator for the equation AX − XB T = C.
IEEE Trans. Automat. Control, AC-29(10):926–928, 1984. (Cited on p. 226.)
[88] Ralph Byers. Solving the algebraic Riccati equation with the matrix sign function.
Linear Algebra Appl., 85:267–279, 1987. (Cited on pp. 51, 125, 130, 132.)
[89] Ralph Byers, Chunyang He, and Volker Mehrmann. The matrix sign function method
and the computation of invariant subspaces. SIAM J. Matrix Anal. Appl., 18(3):615–
632, 1997. (Cited on pp. 122, 124.)
[90] D. Calvetti, E. Gallopoulos, and L. Reichel. Incomplete partial fractions for parallel
evaluation of rational matrix functions. J. Comput. Appl. Math., 59:349–380, 1995.
(Cited on p. 104.)
[91] Daniela Calvetti, Sun-Mi Kim, and Lothar Reichel. Quadrature rules based on the
Arnoldi process. SIAM J. Matrix Anal. Appl., 26(3):765–781, 2005. (Cited on p. 318.)
[92] S. L. Campbell and C. D. Meyer, Jr. Generalized Inverses of Linear Transformations.
Pitman, London, 1979. xi+272 pp. Reprinted by Dover, New York, 1991. ISBN
0-486-66693-X. (Cited on pp. 325, 326, 353.)
[93] João R. Cardoso, Charles S. Kenney, and F. Silva Leite. Computing the square root
and logarithm of a real P -orthogonal matrix. Appl. Numer. Math., 46:173–196, 2003.
(Cited on p. 316.)
[94] João R. Cardoso and F. Silva Leite. The Moser–Veselov equation. Linear Algebra
Appl., 360:237–248, 2003. (Cited on p. 46.)
[95] João R. Cardoso and F. Silva Leite. Padé and Gregory error estimates for the logarithm
of block triangular matrices. Appl. Numer. Math., 56:253–267, 2006. (Cited on p. 284.)
[96] Gianfranco Cariolaro, Tomaso Erseghe, and Peter Kraniauskas. The fractional discrete
cosine transform. IEEE Trans. Signal Processing, 50(4):902–911, 2002. (Cited on
p. 28.)
[97] Lennart Carleson and Theodore W. Gamelin. Complex Dynamics. Springer-Verlag,
New York, 1993. ix+175 pp. ISBN 0-387-97942-5. (Cited on p. 156.)
[98] A. J. Carpenter, A. Ruttan, and R. S. Varga. Extended numerical computations on
the “1/9” conjecture in rational approximation theory. In Rational Approximation and
Interpolation, P. R. Graves-Morris, E. B. Saff, and R. S. Varga, editors, volume 1105 of
Lecture Notes in Mathematics, Springer-Verlag, Berlin, 1984, pages 383–411. (Cited
on pp. 259, 267.)
[99] Arthur Cayley. A memoir on the theory of matrices. Philos. Trans. Roy. Soc. London,
148:17–37, 1858. (Cited on pp. 26, 34, 166, 168, 191.)
[100] Arthur Cayley. On the extraction of the square root of a matrix of the third order.
Proc. Roy. Soc. Edinburgh, 7:675–682, 1872. (Cited on pp. 26, 166, 168.)
[101] Arthur Cayley. The Newton–Fourier imaginary problem. Amer. J. Math., 2(1):97,
1879. (Cited on pp. 187, 191.)
[102] Elena Celledoni and Arieh Iserles. Approximating the exponential from a Lie algebra
to a Lie group. Math. Comp., 69(232):1457–1480, 2000. (Cited on p. 28.)
[103] F. Chaitin-Chatelin and S. Gratton. On the condition numbers associated with the
polar factorization of a matrix. Numer. Linear Algebra Appl., 7:337–354, 2000. (Cited
on p. 201.)
[104] Raymond H. Chan, Chen Greif, and Dianne P. O’Leary, editors. Milestones in Matrix
Computation: The Selected Works of Gene H. Golub, with Commentaries. Oxford
University Press, 2007. xi+565 pp. ISBN 978-0-19-920681-0. (Cited on p. 391.)
[105] Shivkumar Chandrasekaran and Ilse C. F. Ipsen. Backward errors for eigenvalue and
singular value decompositions. Numer. Math., 68:215–223, 1994. (Cited on p. 214.)
[106] Chi-Tsong Chen. Linear System Theory and Design. Third edition, Oxford University
Press, 1999. xiii+334 pp. ISBN 0-19-511777-8. (Cited on p. 51.)
[107] Sheung Hun Cheng, Nicholas J. Higham, Charles S. Kenney, and Alan J. Laub. Return
to the middle ages: A half-angle iteration for the logarithm of a unitary matrix. In
Proceedings of the Fourteenth International Symposium of Mathematical Theory of
Networks and Systems, Perpignan, France, 2000. CD ROM. (Cited on p. 316.)
[108] Sheung Hun Cheng, Nicholas J. Higham, Charles S. Kenney, and Alan J. Laub. Ap-
proximating the logarithm of a matrix to specified accuracy. SIAM J. Matrix Anal.
Appl., 22(4):1112–1125, 2001. (Cited on pp. 104, 141, 166, 167, 283, 284, 285.)
[109] M. Cipolla. Sulle matrice espressione analitiche di un’altra. Rendiconti Circolo Matem-
atico de Palermo, 56:144–154, 1932. (Cited on p. 26.)
[110] W. J. Cody, G. Meinardus, and R. S. Varga. Chebyshev rational approximations to e−x
in [0, +∞) and applications to heat-conduction problems. J. Approximation Theory,
2:50–65, 1969. (Cited on p. 259.)
[111] John P. Coleman. Rational approximations for the cosine function; P-acceptability
and order. Numer. Algorithms, 3:143–158, 1992. (Cited on p. 300.)
[112] A. R. Collar. The first fifty years of aeroelasticity. Aerospace (Royal Aeronautical
Society Journal), 5:12–20, 1978. (Cited on pp. 27, 53.)
[113] Samuel D. Conte and Carl de Boor. Elementary Numerical Analysis: An Algorithmic
Approach. Third edition, McGraw-Hill, Tokyo, 1980. xii+432 pp. ISBN 0-07-066228-2.
(Cited on p. 333.)
[114] Robert M. Corless, Hui Ding, Nicholas J. Higham, and David J. Jeffrey. The solution of
S exp(S) = A is not always the Lambert W function of A. In ISSAC ’ 07: Proceedings
of the 2007 International Symposium on Symbolic and Algebraic Computation, New
York, 2007, pages 116–121. ACM Press. (Cited on p. 51.)
[115] Robert M. Corless, Gaston H. Gonnet, D. E. G. Hare, David J. Jeffrey, and Donald E.
Knuth. On the Lambert W function. Adv. in Comput. Math., 5(4):329–359, 1996.
(Cited on pp. 51, 53.)
[116] Robert M. Corless and David J. Jeffrey. The unwinding number. ACM SIGSAM
Bulletin, 30(2):28–35, 1996. (Cited on p. 283.)
[117] Marius Cornea-Hasegan and Bob Norin. IA-64 floating-point operations and the IEEE
standard for binary floating-point arithmetic. Intel Technology Journal, 3, 1999.
http://developer.intel.com/technology/itj/. (Cited on p. 188.)
[118] S. M. Cox and P. C. Matthews. Exponential time differencing for stiff systems. J.
Comput. Phys., 176:430–455, 2002. (Cited on p. 37.)
[119] Trevor F. Cox and Michael A. A. Cox. Multidimensional Scaling. Chapman and Hall,
London, 1994. xi+213 pp. ISBN 0-412-49120-6. (Cited on p. 43.)
[120] Tony Crilly. Cayley’s anticipation of a generalised Cayley–Hamilton theorem. Historia
Mathematica, 5:211–219, 1978. (Cited on p. 30.)
[121] Tony Crilly. Arthur Cayley: Mathematician Laureate of the Victorian Age. Johns
Hopkins University Press, Baltimore, MD, USA, 2006. xxi+610 pp. ISBN 0-8018-
8011-4. (Cited on pp. 26, 30.)
[122] G. W. Cross and P. Lancaster. Square roots of complex matrices. Linear and Multi-
linear Algebra, 1:289–293, 1974. (Cited on p. 16.)
[123] Michel Crouzeix. Bounds for analytical functions of matrices. Integral Equations and
Operator Theory, 48:461–477, 2004. (Cited on p. 105.)
[124] L. Csanky. Fast parallel matrix inversion algorithms. SIAM J. Comput., 5(4):618–623,
1976. (Cited on p. 90.)
[125] Charles G. Cullen. Matrices and Linear Transformations. Second edition, Addison-
Wesley, Reading, MA, USA, 1972. xii+318 pp. Reprinted by Dover, New York, 1990.
ISBN 0-486-66328-0. (Cited on pp. 28, 29, 52.)
[126] Walter J. Culver. On the existence and uniqueness of the real logarithm of a matrix.
Proc. Amer. Math. Soc., 17:1146–1151, 1966. (Cited on p. 17.)
[127] James R. Cuthbert. On uniqueness of the logarithm for Markov semi-groups. J. London
Math. Soc., 4:623–630, 1972. (Cited on p. 190.)
[128] Germund Dahlquist. Stability and Error Bounds in the Numerical Integration of Or-
dinary Differential Equations. PhD thesis, Royal Inst. of Technology, Stockholm, Swe-
den, 1958. Reprinted in Trans. Royal Inst. of Technology, No. 130, Stockholm, Sweden,
1959. (Cited on p. 237.)
[129] Ju. L. Daleckiĭ. Differentiation of non-Hermitian matrix functions depending on a
parameter. Amer. Math. Soc. Transl., Series 2, 47:73–87, 1965. (Cited on p. 69.)
[130] Ju. L. Daleckiĭ and S. G. Kreĭn. Integration and differentiation of functions of Her-
mitian operators and applications to the theory of perturbations. Amer. Math. Soc.
Transl., Series 2, 47:1–30, 1965. (Cited on p. 69.)
[131] E. B. Davies. Approximate diagonalization. SIAM J. Matrix Anal. Appl., 29(4):1051–
1064, 2007. (Cited on p. 30.)
[132] E. Brian Davies. Science in the Looking Glass: What Do Scientists Really Know?
Oxford University Press, 2003. x+295 pp. ISBN 0-19-852543-5. (Cited on p. 34.)
[133] E. Brian Davies. Linear Operators and their Spectra. Cambridge University Press,
Cambridge, UK, 2007. xii+451 pp. ISBN 978-0-521-86629-3. (Cited on p. 28.)
[134] Philip I. Davies. Structured conditioning of matrix functions. Electron. J. Linear
Algebra, 11:132–161, 2004. (Cited on p. 315.)
[135] Philip I. Davies and Nicholas J. Higham. A Schur–Parlett algorithm for computing
matrix functions. SIAM J. Matrix Anal. Appl., 25(2):464–485, 2003. (Cited on pp. 229,
231.)
[136] Philip I. Davies and Nicholas J. Higham. Computing f (A)b for matrix functions f .
In QCD and Numerical Analysis III, Artan Boriçi, Andreas Frommer, Bálint Joó,
Anthony Kennedy, and Brian Pendleton, editors, volume 47 of Lecture Notes in Com-
putational Science and Engineering, Springer-Verlag, Berlin, 2005, pages 15–24. (Cited
on pp. 307, 308, 310.)
[137] Philip I. Davies, Nicholas J. Higham, and Françoise Tisseur. Analysis of the Cholesky
method with iterative refinement for solving the symmetric definite generalized eigen-
problem. SIAM J. Matrix Anal. Appl., 23(2):472–493, 2001. (Cited on p. 35.)
[138] Philip I. Davies and Matthew I. Smith. Updating the singular value decomposition.
J. Comput. Appl. Math., 170:145–167, 2004. (Cited on p. 216.)
[139] Chandler Davis. Explicit functional calculus. Linear Algebra Appl., 6:193–199, 1973.
(Cited on p. 84.)
[140] George J. Davis. Numerical solution of a quadratic matrix equation. SIAM J. Sci.
Statist. Comput., 2(2):164–175, 1981. (Cited on p. 52.)
[141] Philip J. Davis and Aviezri S. Fraenkel. Remembering Philip Rabinowitz. Notices
Amer. Math. Soc., 54(11):1502–1506, 2007. (Cited on p. 311.)
[142] Philip J. Davis and Philip Rabinowitz. Methods of Numerical Integration. Second
edition, Academic Press, London, 1984. xiv+612 pp. ISBN 0-12-206360-0. (Cited on
pp. 274, 308.)
[143] Carl de Boor. Divided differences. Surveys in Approximation Theory, 1:46–69, 2005.
(Cited on pp. 264, 333.)
[144] Lokenath Debnath and Piotr Mikusiński. Introduction to Hilbert Spaces with Appli-
cations. Second edition, Academic Press, San Diego, CA, USA, 1999. xviii+551 pp.
ISBN 0-12-208436-5. (Cited on p. 167.)
[145] James W. Demmel. Applied Numerical Linear Algebra. Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA, 1997. xi+419 pp. ISBN 0-89871-389-7.
(Cited on p. 325.)
[146] Eugene D. Denman and Alex N. Beavers, Jr. The matrix sign function and computa-
tions in systems. Appl. Math. Comput., 2:63–94, 1976. (Cited on pp. 52, 141.)
[147] John E. Dennis, Jr., J. F. Traub, and R. P. Weber. The algebraic theory of matrix
polynomials. SIAM J. Numer. Anal., 13(6):831–845, 1976. (Cited on p. 52.)
[148] Jean Descloux. Bounds for the spectral norm of functions of matrices. Numer. Math.,
15:185–190, 1963. (Cited on pp. 84, 103.)
[149] Robert L. Devaney. An Introduction to Chaotic Dynamical Systems. Second edition,
Addison-Wesley, Reading, MA, USA, 1989. xvi+336 pp. ISBN 0-201-13046-7. (Cited
on p. 156.)
[150] Inderjit S. Dhillon and Joel A. Tropp. Matrix nearness problems with Bregman diver-
gences. SIAM J. Matrix Anal. Appl., 29(4):1120–1146, 2007. (Cited on p. 50.)
[151] Bradley W. Dickinson and Kenneth Steiglitz. Eigenvectors and functions of the discrete
Fourier transform. IEEE Trans. Acoust., Speech, Signal Processing, ASSP-30(1):25–31,
1982. (Cited on p. 28.)
[152] Luca Dieci. Considerations on computing real logarithms of matrices, Hamiltonian
logarithms, and skew-symmetric logarithms. Linear Algebra Appl., 244:35–54, 1996.
(Cited on pp. 50, 315, 316.)
[153] Luca Dieci. Real Hamiltonian logarithm of a symplectic matrix. Linear Algebra Appl.,
281:227–246, 1998. (Cited on p. 315.)
[154] Luca Dieci, Benedetta Morini, and Alessandra Papini. Computational techniques for
real logarithms of matrices. SIAM J. Matrix Anal. Appl., 17(3):570–593, 1996. (Cited
on pp. 274, 283, 284.)
[155] Luca Dieci, Benedetta Morini, Alessandra Papini, and Aldo Pasquali. On real loga-
rithms of nearby matrices and structured matrix interpolation. Appl. Numer. Math.,
29:145–165, 1999. (Cited on p. 50.)
[156] Luca Dieci and Alessandra Papini. Conditioning and Padé approximation of the log-
arithm of a matrix. SIAM J. Matrix Anal. Appl., 21(3):913–930, 2000. (Cited on
p. 284.)
[157] Luca Dieci and Alessandra Papini. Padé approximation for the exponential of a block
triangular matrix. Linear Algebra Appl., 308:183–202, 2000. (Cited on pp. 243, 246,
248, 264, 265.)
[158] Luca Dieci and Alessandra Papini. Conditioning of the exponential of a block triangular
matrix. Numer. Algorithms, 28:137–150, 2001. (Cited on p. 264.)
[159] J. Dieudonné. Foundations of Modern Analysis. Academic Press, New York, 1960.
xiv+361 pp. (Cited on p. 58.)
[160] John D. Dixon. Estimating extremal eigenvalues and condition numbers of matrices.
SIAM J. Numer. Anal., 20(4):812–814, 1983. (Cited on p. 69.)
[161] Elizabeth D. Dolan and Jorge J. Moré. Benchmarking optimization software with
performance profiles. Math. Programming, 91:201–213, 2002. (Cited on p. 252.)
[162] Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. Van der Vorst.
Numerical Linear Algebra for High-Performance Computers. Society for Industrial
and Applied Mathematics, Philadelphia, PA, USA, 1998. xviii+342 pp. ISBN 0-
89871-428-1. (Cited on p. 335.)
[163] William F. Donoghue, Jr. Monotone Matrix Functions and Analytic Continuation.
Springer-Verlag, Berlin, 1974. 182 pp. ISBN 3-540-06543-1. (Cited on p. 315.)
[164] M. P. Drazin, J. W. Dungey, and K. W. Gruenberg. Some theorems on commutative
matrices. J. London Math. Soc., 26(2):221–228, 1951. (Cited on p. 29.)
Bibliography 389
[165] P. G. Drazin. Nonlinear Systems. Cambridge University Press, Cambridge, UK, 1992.
xiii+317 pp. ISBN 0-521-40668-4. (Cited on p. 359.)
[166] V. Druskin, A. Greenbaum, and L. Knizhnerman. Using nonorthogonal Lanczos vectors
in the computation of matrix functions. SIAM J. Sci. Comput., 19(1):38–54, 1998.
(Cited on p. 310.)
[167] Vladimir L. Druskin and Leonid A. Knizhnerman. Two polynomial methods of calcu-
lating functions of symmetric matrices. U.S.S.R. Comput. Maths. Math. Phys., 29(6):
112–121, 1989. (Cited on p. 309.)
[168] Ian L. Dryden and Kanti V. Mardia. Statistical Shape Analysis. Wiley, New York,
1998. xvii+347 pp. ISBN 0-471-95816-6. (Cited on p. 43.)
[169] Augustin A. Dubrulle. An optimum iteration for the matrix polar decomposition.
Electron. Trans. Numer. Anal., 8:21–25, 1999. (Cited on pp. 216, 218.)
[170] B. J. Duke. Certification of Algorithm 298 [F1]: Determination of the square root of
a positive definite matrix. Comm. ACM, 12(6):325–326, 1969. (Cited on p. 167.)
[171] Nelson Dunford and Jacob T. Schwartz. Linear Operators. Part I: General Theory.
Wiley, New York, 1988. xiv+858 pp. Wiley Classics Library edition. ISBN 0-471-
60848-3. (Cited on p. 28.)
[172] Nelson Dunford and Jacob T. Schwartz. Linear Operators. Part III: Spectral Operators.
Wiley, New York, 1971. xix+1925–2592 pp. (Cited on p. 28.)
[173] Alan Edelman and Gilbert Strang. Pascal matrices. Amer. Math. Monthly, 111(3):
189–197, 2004. (Cited on p. 374.)
[174] Michael Eiermann and Oliver G. Ernst. A restarted Krylov subspace method for the
evaluation of matrix functions. SIAM J. Numer. Anal., 44(6):2481–2504, 2006. (Cited
on p. 305.)
[175] Timo Eirola. A refined polar decomposition: A = UPD. SIAM J. Matrix Anal. Appl.,
22(3):824–836, 2000. (Cited on p. 214.)
[176] Ludwig Elsner. Iterative Verfahren zur Lösung der Matrizengleichung X^2 − A = 0.
Buletinul Institutului Politehnic din Iasi, xvi(xx):15–24, 1970. (Cited on pp. 106, 159,
167, 169.)
[177] Ivan Erdelyi. On partial isometries in finite-dimensional Euclidean spaces. SIAM J.
Appl. Math., 14(3):453–467, 1966. (Cited on p. 326.)
[178] J.-Cl. Evard and F. Jafari. A complex Rolle’s theorem. Amer. Math. Monthly, 99(9):
858–861, 1992. (Cited on p. 333.)
[179] Jean-Claude Evard and Frank Uhlig. On the matrix equation f (X) = A. Linear
Algebra Appl., 162–164:447–519, 1992. (Cited on p. 28.)
[180] D. K. Faddeev and V. N. Faddeeva. Numerische Methoden der Linearen Algebra. Veb
Deutscher Verlag der Wissenschaften, Berlin, 1964. (Cited on p. 370.)
[181] Ky Fan and A. J. Hoffman. Some metric inequalities in the space of matrices. Proc.
Amer. Math. Soc., 6:111–116, 1955. (Cited on p. 214.)
[182] Heike Faßbender, D. Steven Mackey, Niloufer Mackey, and Hongguo Xu. Hamiltonian
square roots of skew-Hamiltonian matrices. Linear Algebra Appl., 287:125–159, 1999.
(Cited on p. 315.)
[183] T. I. Fenner and G. Loizou. Optimally scalable matrices. Philos. Trans. Roy. Soc.
London Ser. A, 287(1345):307–349, 1977. (Cited on p. 105.)
[184] W. L. Ferrar. Finite Matrices. Oxford University Press, 1951. vii+182 pp. (Cited on
pp. 27, 34.)
[185] Miroslav Fiedler and Hans Schneider. Analytic functions of M-matrices and general-
izations. Linear and Multilinear Algebra, 13:185–201, 1983. (Cited on p. 188.)
[224] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Third edition, Johns
Hopkins University Press, Baltimore, MD, USA, 1996. xxvii+694 pp. ISBN 0-8018-
5413-X (hardback), 0-8018-5414-8 (paperback). (Cited on pp. xvii, 28, 69, 73, 77,
89, 105, 139, 196, 209, 231, 262, 309, 335, 337.)
[225] Nicholas I. M. Gould, Dominique Orban, and Philippe L. Toint. CUTEr and SifDec:
A constrained and unconstrained testing environment, revisited. ACM Trans. Math.
Software, 29(4):373–394, 2003. (Cited on p. 46.)
[226] John C. Gower and Garmt B. Dijksterhuis. Procrustes Problems. Oxford University
Press, 2004. xiv+233 pp. ISBN 0-19-851058-6. (Cited on p. 43.)
[227] W. B. Gragg. The Padé table and its relation to certain algorithms of numerical
analysis. SIAM Rev., 14(1):1–62, 1972. (Cited on p. 104.)
[228] L. Grasedyck, W. Hackbusch, and B. N. Khoromskij. Solution of large scale algebraic
matrix Riccati equations by use of hierarchical matrices. Computing, 70:121–165, 2003.
(Cited on pp. 51, 316.)
[229] Bert F. Green. The orthogonal approximation of an oblique structure in factor analysis.
Psychometrika, 17(4):429–440, 1952. (Cited on p. 214.)
[230] Anne Greenbaum. Some theoretical results derived from polynomial numerical hulls
of Jordan blocks. Electron. Trans. Numer. Anal., 18:81–90, 2004. (Cited on p. 105.)
[231] R. Grone, C. R. Johnson, E. M. Sá, and H. Wolkowicz. Normal matrices. Linear
Algebra Appl., 87:213–225, 1987. (Cited on p. 323.)
[232] Ming Gu. Finding well-conditioned similarities to block-diagonalize nonsymmetric
matrices is NP-hard. Journal of Complexity, 11(3):377–391, 1995. (Cited on p. 226.)
[233] Chun-Hua Guo and Nicholas J. Higham. A Schur–Newton method for the matrix pth
root and its inverse. SIAM J. Matrix Anal. Appl., 28(3):788–804, 2006. (Cited on
pp. 51, 186, 188, 190.)
[234] Chun-Hua Guo and Peter Lancaster. Analysis and modification of Newton’s method
for algebraic Riccati equations. Math. Comp., 67(223):1089–1105, 1998. (Cited on
p. 144.)
[235] Hongbin Guo and Rosemary A. Renaut. Estimation of u^T f(A)v for large-scale un-
symmetric matrices. Numer. Linear Algebra Appl., 11:75–89, 2004. (Cited on p. 318.)
[236] Markus Haase. The Functional Calculus for Sectorial Operators. Number 169 in
Operator Theory: Advances and Applications. Birkhäuser, Basel, Switzerland, 2006.
xiv+392 pp. ISBN 3-7643-7697-X. (Cited on p. 187.)
[237] William W. Hager. Condition estimates. SIAM J. Sci. Statist. Comput., 5(2):311–316,
1984. (Cited on p. 69.)
[238] E. Hairer and G. Wanner. Analysis by Its History. Springer-Verlag, New York, 1996.
x+374 pp. ISBN 0-387-94551-2. (Cited on p. 283.)
[239] Ernst Hairer, Christian Lubich, and Gerhard Wanner. Geometric Numerical Integra-
tion: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer-
Verlag, Berlin, 2002. xiii+515 pp. ISBN 3-540-43003-2. (Cited on pp. 42, 315.)
[240] Nicholas Hale, Nicholas J. Higham, and Lloyd N. Trefethen. Computing A^α, log(A)
and related matrix functions by contour integrals. MIMS EPrint 2007.103, Manchester
Institute for Mathematical Sciences, The University of Manchester, UK, August 2007.
19 pp. (Cited on p. 308.)
[241] Brian C. Hall. Lie Groups, Lie Algebras, and Representations. Springer-Verlag, New
York, 2003. xiv+351 pp. ISBN 0-387-40122-9. (Cited on pp. 262, 263.)
[242] Paul R. Halmos. Positive approximants of operators. Indiana Univ. Math. J., 21(10):
951–960, 1972. (Cited on p. 214.)
[243] Paul R. Halmos. Finite-Dimensional Vector Spaces. Springer-Verlag, New York, 1974.
viii+200 pp. Reprint of the Second edition published by Van Nostrand, Princeton, NJ,
1958. ISBN 0-387-90093-4. (Cited on p. 214.)
[244] Paul R. Halmos. A Hilbert Space Problem Book. Second edition, Springer-Verlag,
Berlin, 1982. xvii+369 pp. ISBN 0-387-90685-1. (Cited on pp. 29, 167.)
[245] H. L. Hamburger and M. E. Grimshaw. Linear Transformations in n-Dimensional
Vector Space: An Introduction to the Theory of Hilbert Space. Cambridge University
Press, 1951. x+195 pp. (Cited on p. 27.)
[246] Richard J. Hanson and Michael J. Norris. Analysis of measurements based on the
singular value decomposition. SIAM J. Sci. Statist. Comput., 2(3):363–373, 1981.
(Cited on p. 368.)
[247] Gareth Hargreaves. Topics in Matrix Computations: Stability and Efficiency of Algo-
rithms. PhD thesis, University of Manchester, Manchester, England, 2005. 204 pp.
(Cited on p. 74.)
[248] Gareth I. Hargreaves and Nicholas J. Higham. Efficient algorithms for the matrix
cosine and sine. Numer. Algorithms, 40(4):383–400, 2005. (Cited on pp. 292, 299.)
[249] Lawrence A. Harris. Computation of functions of certain operator matrices. Linear
Algebra Appl., 194:31–34, 1993. (Cited on p. 28.)
[250] W. F. Harris. The average eye. Ophthal. Physiol. Opt., 24:580–585, 2005. (Cited on
p. 50.)
[251] W. F. Harris and J. R. Cardoso. The exponential-mean-log-transference as a possible
representation of the optical character of an average eye. Ophthal. Physiol. Opt., 26(4):
380–383, 2006. (Cited on p. 50.)
[252] Timothy F. Havel, Igor Najfeld, and Ju-xing Yang. Matrix decompositions of two-
dimensional nuclear magnetic resonance spectra. Proc. Nat. Acad. Sci. USA, 91:7962–
7966, 1994. (Cited on p. 51.)
[253] Thomas Hawkins. The theory of matrices in the 19th century. In Proceedings of the
International Congress of Mathematicians, Vancouver, volume 2, 1974, pages 561–570.
(Cited on p. 26.)
[254] Thomas Hawkins. Another look at Cayley and the theory of matrices. Arch. Internat.
Histoire Sci., 27(100):82–112, 1977. (Cited on p. 26.)
[255] Thomas Hawkins. Weierstrass and the theory of matrices. Archive for History of Exact
Sciences, 12(2):119–163, 1977. (Cited on p. 26.)
[256] Jane M. Heffernan and Robert M. Corless. Solving some delay differential equations
with computer algebra. Mathematical Scientist, 31(1):21–34, 2006. (Cited on p. 51.)
[257] Peter Henrici. Bounds for iterates, inverses, spectral variation and fields of values of
non-normal matrices. Numer. Math., 4:24–40, 1962. (Cited on p. 103.)
[258] Kurt Hensel. Über Potenzreihen von Matrizen. J. Reine Angew. Math., 155(2):107–
110, 1926. (Cited on pp. 26, 104.)
[259] Konrad J. Heuvers and Daniel Moak. Matrix solutions of the functional equation of
the gamma function. Aequationes Mathematicae, 33:1–17, 1987. (Cited on p. 27.)
[260] The Hewlett-Packard HP-35 Scientific Pocket Calculator. Hewlett-Packard, Cupertino,
CA, USA, 1973. 4 pp. Advertising brochure. 5952-6000 Rev. 7/73. (Cited on p. 286.)
[261] HP 48 Programmer’s Reference Manual. Hewlett-Packard, Corvallis Division, Corval-
lis, OR, USA, July 1990. 504 pp. Mfg. No. 00048-90053. (Cited on p. 286.)
[262] Desmond J. Higham. Time-stepping and preserving orthonormality. BIT, 37(1):24–36,
1997. (Cited on p. 42.)
[263] Desmond J. Higham and Nicholas J. Higham. MATLAB Guide. Second edition, Society
for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2005. xxiii+382 pp.
ISBN 0-89871-578-4. (Cited on p. 252.)
[264] Nicholas J. Higham. The Matrix Computation Toolbox. http://www.ma.man.ac.uk/~higham/mctoolbox.
(Cited on pp. 129, 149, 229, 295.)
[265] Nicholas J. Higham. Nearness Problems in Numerical Linear Algebra. PhD thesis,
University of Manchester, Manchester, England, July 1985. 173 pp. (Cited on p. 216.)
[266] Nicholas J. Higham. Computing the polar decomposition—with applications. SIAM
J. Sci. Statist. Comput., 7(4):1160–1174, 1986. (Cited on pp. 167, 215, 216.)
[267] Nicholas J. Higham. Newton’s method for the matrix square root. Math. Comp., 46
(174):537–549, 1986. (Cited on pp. 104, 166, 167.)
[268] Nicholas J. Higham. Computing real square roots of a real matrix. Linear Algebra
Appl., 88/89:405–430, 1987. (Cited on pp. 28, 166, 167.)
[269] Nicholas J. Higham. Computing a nearest symmetric positive semidefinite matrix.
Linear Algebra Appl., 103:103–118, 1988. (Cited on p. 214.)
[270] Nicholas J. Higham. FORTRAN codes for estimating the one-norm of a real or complex
matrix, with applications to condition estimation (Algorithm 674). ACM Trans. Math.
Software, 14(4):381–396, 1988. (Cited on pp. 69, 226.)
[271] Nicholas J. Higham. Experience with a matrix norm estimator. SIAM J. Sci. Statist.
Comput., 11(4):804–809, 1990. (Cited on p. 69.)
[272] Nicholas J. Higham. Stability of a method for multiplying complex matrices with three
real matrix multiplications. SIAM J. Matrix Anal. Appl., 13(3):681–687, 1992. (Cited
on p. 332.)
[273] Nicholas J. Higham. The matrix sign decomposition and its relation to the polar
decomposition. Linear Algebra Appl., 212/213:3–20, 1994. (Cited on pp. 129, 214,
215, 216.)
[274] Nicholas J. Higham. Stable iterations for the matrix square root. Numer. Algorithms,
15(2):227–242, 1997. (Cited on pp. 108, 167, 363.)
[275] Nicholas J. Higham. Evaluating Padé approximants of the matrix logarithm. SIAM
J. Matrix Anal. Appl., 22(4):1126–1135, 2001. (Cited on p. 275.)
[276] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. Second edi-
tion, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.
xxx+680 pp. ISBN 0-89871-521-0. (Cited on pp. 69, 75, 82, 102, 104, 120, 122,
134, 143, 163, 165, 166, 210, 211, 223, 226, 247, 248, 250, 261, 265, 309, 324,
325, 332, 335, 361, 362.)
[277] Nicholas J. Higham. J-orthogonal matrices: Properties and generation. SIAM Rev.,
45(3):504–519, 2003. (Cited on p. 315.)
[278] Nicholas J. Higham. The scaling and squaring method for the matrix exponential
revisited. SIAM J. Matrix Anal. Appl., 26(4):1179–1193, 2005. (Cited on pp. 104,
263, 264.)
[279] Nicholas J. Higham and Sheung Hun Cheng. Modifying the inertia of matrices arising
in optimization. Linear Algebra Appl., 275–276:261–279, 1998. (Cited on p. 349.)
[280] Nicholas J. Higham and Hyun-Min Kim. Numerical analysis of a quadratic matrix
equation. IMA J. Numer. Anal., 20(4):499–519, 2000. (Cited on p. 45.)
[281] Nicholas J. Higham and Hyun-Min Kim. Solving a quadratic matrix equation by
Newton’s method with exact line searches. SIAM J. Matrix Anal. Appl., 23(2):303–
316, 2001. (Cited on p. 45.)
[282] Nicholas J. Higham, D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Com-
puting the polar decomposition and the matrix sign decomposition in matrix groups.
SIAM J. Matrix Anal. Appl., 25(4):1178–1192, 2004. (Cited on p. 315.)
[283] Nicholas J. Higham, D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Func-
tions preserving matrix groups and iterations for the matrix square root. SIAM J.
Matrix Anal. Appl., 26(3):849–877, 2005. (Cited on pp. 13, 28, 104, 129, 132, 154,
167, 215, 216, 314, 315, 316, 366.)
[284] Nicholas J. Higham and Pythagoras Papadimitriou. A new parallel algorithm for com-
puting the singular value decomposition. In Proceedings of the Fifth SIAM Conference
on Applied Linear Algebra, John G. Lewis, editor, Society for Industrial and Applied
Mathematics, Philadelphia, PA, USA, 1994, pages 80–84. (Cited on p. 196.)
[285] Nicholas J. Higham and Pythagoras Papadimitriou. A parallel algorithm for computing
the polar decomposition. Parallel Comput., 20(8):1161–1173, 1994. (Cited on pp. 207,
216.)
[286] Nicholas J. Higham and Robert S. Schreiber. Fast polar decomposition of an arbitrary
matrix. SIAM J. Sci. Statist. Comput., 11(4):648–655, 1990. (Cited on pp. 162, 211,
214.)
[287] Nicholas J. Higham and Matthew I. Smith. Computing the matrix cosine. Numer.
Algorithms, 34:13–26, 2003. (Cited on pp. 292, 295, 299.)
[288] Nicholas J. Higham and Françoise Tisseur. A block algorithm for matrix 1-norm
estimation, with an application to 1-norm pseudospectra. SIAM J. Matrix Anal. Appl.,
21(4):1185–1201, 2000. (Cited on p. 67.)
[289] Lester S. Hill. Cryptography in an algebraic alphabet. Amer. Math. Monthly, 36:
306–312, 1929. (Cited on p. 165.)
[290] Einar Hille. On roots and logarithms of elements of a complex Banach algebra. Math.
Annalen, 136(1):46–57, 1958. (Cited on p. 32.)
[291] Marlis Hochbruck and Christian Lubich. On Krylov subspace approximations to the
matrix exponential operator. SIAM J. Numer. Anal., 34(5):1911–1925, 1997. (Cited
on p. 306.)
[292] Marlis Hochbruck, Christian Lubich, and Hubert Selhofer. Exponential integrators for
large systems of differential equations. SIAM J. Sci. Comput., 19(5):1552–1574, 1998.
(Cited on pp. 37, 262.)
[293] John H. Hodges. The matrix equation X^2 − I = 0 over a finite field. Amer. Math.
Monthly, 65(7):518–520, 1958. (Cited on p. 168.)
[294] Roger A. Horn. The Hadamard product. In Matrix Theory and Applications, Charles R.
Johnson, editor, volume 40 of Proceedings of Symposia in Applied Mathematics, Amer-
ican Mathematical Society, Providence, RI, USA, 1990, pages 87–169. (Cited on
p. 1.)
[295] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press,
1985. xiii+561 pp. ISBN 0-521-30586-1. (Cited on pp. 25, 28, 321, 323, 325, 327,
328, 329, 346.)
[296] Roger A. Horn and Charles R. Johnson. Topics in Matrix Analysis. Cambridge Uni-
versity Press, 1991. viii+607 pp. ISBN 0-521-30587-X. (Cited on pp. xvii, 1, 8, 13,
14, 16, 17, 23, 24, 27, 28, 46, 69, 213, 235, 315, 327, 331, 333, 362.)
[297] Roger A. Horn and Dennis I. Merino. Contragredient equivalence: A canonical form
and some applications. Linear Algebra Appl., 214:43–92, 1995. (Cited on p. 28.)
[298] Roger A. Horn and Gregory G. Piepmeyer. Two applications of the theory of primary
matrix functions. Linear Algebra Appl., 361:99–106, 2003. (Cited on p. 28.)
[299] W. D. Hoskins and D. J. Walton. A faster, more stable method for computing the pth
roots of positive definite matrices. Linear Algebra Appl., 26:139–163, 1979. (Cited on
p. 188.)
[300] A. S. Householder and John A. Carpenter. The singular values of involutory and of
idempotent matrices. Numer. Math., 5:234–237, 1963. (Cited on p. 165.)
[301] Alston S. Householder. The Numerical Treatment of a Single Nonlinear Equation.
McGraw-Hill, New York, 1970. viii+216 pp. (Cited on p. 90.)
[302] James Lucien Howland. The sign matrix and the separation of matrix eigenvalues.
Linear Algebra Appl., 49:221–232, 1983. (Cited on pp. 41, 130.)
[303] John H. Hubbard and Beverly H. West. Differential Equations: A Dynamical Systems
Approach. Higher Dimensional Systems. Springer-Verlag, New York, 1995. xiv+601
pp. ISBN 0-387-94377-3. (Cited on p. 267.)
[304] Thomas J. R. Hughes, Itzhak Levit, and James Winget. Element-by-element implicit
algorithms for heat conduction. J. Eng. Mech., 109(2):576–585, 1983. (Cited on
p. 310.)
[305] Bruno Iannazzo. A note on computing the matrix square root. Calcolo, 40:273–283,
2003. (Cited on pp. 143, 166, 167, 169.)
[306] Bruno Iannazzo. On the Newton method for the matrix pth root. SIAM J. Matrix
Anal. Appl., 28(2):503–523, 2006. (Cited on pp. 178, 188.)
[307] Bruno Iannazzo. Numerical Solution of Certain Nonlinear Matrix Equations. PhD
thesis, Università degli studi di Pisa, Pisa, Italy, 2007. 180 pp. (Cited on pp. 95, 104,
130, 167.)
[308] Khakim D. Ikramov. Hamiltonian square roots of skew-Hamiltonian matrices revisited.
Linear Algebra Appl., 325(1-3):101–107, 2001. (Cited on p. 315.)
[309] F. Incertis. A skew-symmetric formulation of the algebraic Riccati equation problem.
IEEE Trans. Automat. Control, AC-29(5):467–470, 1984. (Cited on p. 46.)
[310] Ilse C. F. Ipsen and Dean J. Lee. Determinant approximations. Manuscript, 2003. 13
pp. (Cited on p. 44.)
[311] Eugene Isaacson and Herbert Bishop Keller. Analysis of Numerical Methods. Wiley,
New York, 1966. xv+541 pp. Reprinted by Dover, New York, 1994. ISBN 0-486-68029-
0. (Cited on pp. 90, 333.)
[312] Arieh Iserles. How large is the exponential of a banded matrix? J. New Zealand Maths
Soc., 29:177–192, 2000. (Cited on p. 318.)
[313] Arieh Iserles, Hans Z. Munthe-Kaas, Syvert P. Nørsett, and Antonella Zanna. Lie-
group methods. Acta Numerica, 9:215–365, 2000. (Cited on p. 315.)
[314] Arieh Iserles and Antonella Zanna. Efficient computation of the matrix exponential
by generalized polar decompositions. SIAM J. Numer. Anal., 42(5):2218–2256, 2005.
(Cited on p. 315.)
[315] Robert B. Israel, Jeffrey S. Rosenthal, and Jason Z. Wei. Finding generators for
Markov chains via empirical transition matrices, with applications to credit ratings.
Mathematical Finance, 11(2):245–265, 2001. (Cited on pp. 38, 190.)
[316] Anzelm Iwanik and Ray Shiflett. The root problem for stochastic and doubly stochastic
operators. J. Math. Anal. and Appl., 113(1):93–112, 1986. (Cited on p. 191.)
[317] Drahoslava Janovská and Gerhard Opfer. Computing quaternionic roots by Newton’s
method. Electron. Trans. Numer. Anal., 26:82–102, 2007. (Cited on p. 189.)
[318] Branislav Jansik, Stinne Høst, Poul Jørgensen, Jeppe Olsen, and Trygve Helgaker.
Linear-scaling symmetric square-root decomposition of the overlap matrix. J. Chem.
Phys., 126:124104, 2007. (Cited on p. 42.)
[338] A. D. Kennedy. Approximation theory for matrices. Nuclear Physics B (Proc. Suppl.),
128:107–116, 2004. (Cited on p. 130.)
[339] A. D. Kennedy. Fast evaluation of Zolotarev coefficients. In QCD and Numerical
Analysis III, Artan Boriçi, Andreas Frommer, Bálint Joó, Anthony Kennedy, and
Brian Pendleton, editors, volume 47 of Lecture Notes in Computational Science and
Engineering, Springer-Verlag, Berlin, 2005, pages 169–189. (Cited on p. 130.)
[340] Charles S. Kenney and Alan J. Laub. Condition estimates for matrix functions. SIAM
J. Matrix Anal. Appl., 10(2):191–209, 1989. (Cited on pp. 69, 264, 284.)
[341] Charles S. Kenney and Alan J. Laub. Padé error estimates for the logarithm of a
matrix. Internat. J. Control, 50(3):707–730, 1989. (Cited on pp. 275, 283.)
[342] Charles S. Kenney and Alan J. Laub. Polar decomposition and matrix sign function
condition estimates. SIAM J. Sci. Statist. Comput., 12(3):488–504, 1991. (Cited on
pp. 129, 131, 200, 215.)
[343] Charles S. Kenney and Alan J. Laub. Rational iterative methods for the matrix sign
function. SIAM J. Matrix Anal. Appl., 12(2):273–291, 1991. (Cited on pp. 104, 115,
116, 117, 130.)
[344] Charles S. Kenney and Alan J. Laub. On scaling Newton’s method for polar decom-
position and the matrix sign function. SIAM J. Matrix Anal. Appl., 13(3):688–706,
1992. (Cited on pp. 130, 216.)
[345] Charles S. Kenney and Alan J. Laub. A hyperbolic tangent identity and the geometry
of Padé sign function iterations. Numer. Algorithms, 7:111–128, 1994. (Cited on
pp. 117, 130.)
[346] Charles S. Kenney and Alan J. Laub. Small-sample statistical condition estimates for
general matrix functions. SIAM J. Sci. Comput., 15(1):36–61, 1994. (Cited on p. 68.)
[347] Charles S. Kenney and Alan J. Laub. The matrix sign function. IEEE Trans. Automat.
Control, 40(8):1330–1348, 1995. (Cited on pp. 110, 129, 130.)
[348] Charles S. Kenney and Alan J. Laub. A Schur–Fréchet algorithm for computing the
logarithm and exponential of a matrix. SIAM J. Matrix Anal. Appl., 19(3):640–663,
1998. (Cited on pp. 104, 251, 257, 263, 264, 279, 284.)
[349] Charles S. Kenney, Alan J. Laub, and Edmond A. Jonckheere. Positive and negative
solutions of dual Riccati equations by matrix sign function iteration. Systems and
Control Letters, 13:109–116, 1989. (Cited on p. 40.)
[350] Charles S. Kenney, Alan J. Laub, and P. M. Papadopoulos. A Newton-squaring al-
gorithm for computing the negative invariant subspace of a matrix. IEEE Trans.
Automat. Control, 38(8):1284–1289, 1993. (Cited on p. 130.)
[351] Andrzej Kielbasiński, Pawel Zieliński, and Krystyna Ziętak. Numerical experiments
with Higham’s scaled method for polar decomposition. Technical report, Institute
of Mathematics and Computer Science, Wroclaw University of Technology, Wroclaw,
Poland, May 2006. 38 pp. (Cited on p. 210.)
[352] Andrzej Kielbasiński and Krystyna Ziętak. Numerical behaviour of Higham’s scaled
method for polar decomposition. Numer. Algorithms, 32:105–140, 2003. (Cited on
pp. 208, 210.)
[353] Fuad Kittaneh. On Lipschitz functions of normal operators. Proc. Amer. Math. Soc.,
94(3):416–418, 1985. (Cited on p. 217.)
[354] Fuad Kittaneh. Inequalities for the Schatten p-norm. III. Commun. Math. Phys., 104:
307–310, 1986. (Cited on p. 217.)
[355] Leonard F. Klosinski, Gerald L. Alexanderson, and Loren C. Larson. The fifty-first
William Lowell Putnam mathematical competition. Amer. Math. Monthly, 98(8):719–
727, 1991. (Cited on p. 33.)
[374] Alan J. Laub. Invariant subspace methods for the numerical solution of Riccati equa-
tions. In The Riccati Equation, Sergio Bittanti, Alan J. Laub, and Jan C. Willems,
editors, Springer-Verlag, Berlin, 1991, pages 163–196. (Cited on pp. 51, 53.)
[375] P.-F. Lavallée, A. Malyshev, and M. Sadkane. Spectral portrait of matrices by block
diagonalization. In Numerical Analysis and Its Applications, Lubin Vulkov, Jerzy
Waśniewski, and Plamen Yalamov, editors, volume 1196 of Lecture Notes in Computer
Science, Springer-Verlag, Berlin, 1997, pages 266–273. (Cited on p. 89.)
[376] J. Douglas Lawson. Generalized Runge-Kutta processes for stable systems with large
Lipschitz constants. SIAM J. Numer. Anal., 4(3):372–380, 1967. (Cited on p. 263.)
[377] Jimmie D. Lawson and Yongdo Lim. The geometric mean, matrices, metrics, and
more. Amer. Math. Monthly, 108(9):797–812, 2001. (Cited on p. 52.)
[378] Dean J. Lee and Ilse C. F. Ipsen. Zone determinant expansions for nuclear lattice
simulations. Physical Review C, 68:064003, 2003. (Cited on p. 44.)
[379] R. B. Leipnik. Rapidly convergent recursive solution of quadratic operator equations.
Numer. Math., 17:1–16, 1971. (Cited on p. 215.)
[380] Randall J. LeVeque. Finite Difference Methods for Ordinary and Partial Differential
Equations: Steady-State and Time-Dependent Problems. Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA, 2007. xv+341 pp. ISBN 978-0-898716-
29-0. (Cited on p. 37.)
[381] Jack Levine and H. M. Nahikian. On the construction of involutory matrices. Amer.
Math. Monthly, 69(4):267–272, 1962. (Cited on p. 165.)
[382] Bernard W. Levinger. The square root of a 2 × 2 matrix. Math. Mag., 53(4):222–224,
1980. (Cited on p. 168.)
[383] Malcolm H. Levitt. Spin Dynamics: Basics of Nuclear Magnetic Resonance. Wiley,
Chichester, UK, 2001. xxiv+686 pp. ISBN 0-471-48921-2. (Cited on p. 51.)
[384] Jing Li. An Algorithm for Computing the Matrix Exponential. PhD thesis, Mathematics
Department, University of California, Berkeley, CA, USA, 1988. 77 pp. (Cited on
p. 251.)
[385] Ren-Cang Li. A perturbation bound for the generalized polar decomposition. BIT,
33:304–308, 1993. (Cited on p. 215.)
[386] Ren-Cang Li. New perturbation bounds for the unitary polar factor. SIAM J. Matrix
Anal. Appl., 16(1):327–332, 1995. (Cited on pp. 201, 218.)
[387] Ren-Cang Li. Relative perturbation bounds for the unitary polar factor. BIT, 37(1):
67–75, 1997. (Cited on p. 215.)
[388] Ren-Cang Li. Relative perturbation bounds for positive polar factors of graded matri-
ces. SIAM J. Matrix Anal. Appl., 27(2):424–433, 2005. (Cited on p. 215.)
[389] Wen Li and Weiwei Sun. Perturbation bounds of unitary and subunitary polar factors.
SIAM J. Matrix Anal. Appl., 23(4):1183–1193, 2002. (Cited on pp. 201, 215.)
[390] Pietr Liebl. Einige Bemerkungen zur numerischen Stabilität von Matrizeniterationen.
Aplikace Matematiky, 10(3):249–254, 1965. (Cited on p. 167.)
[391] Yongdo Lim. The matrix golden mean and its applications to Riccati matrix equations.
SIAM J. Matrix Anal. Appl., 29(1):54–66, 2007. (Cited on p. 47.)
[392] Chih-Chang Lin and Earl Zmijewski. A parallel algorithm for computing the eigenval-
ues of an unsymmetric matrix on an SIMD mesh of processors. Report TRCS 91-15,
Department of Computer Science, University of California, Santa Barbara, July 1991.
(Cited on p. 51.)
[393] Lu Lin and Zhong-Yun Liu. On the square root of an H-matrix with positive diagonal
elements. Annals of Operations Research, 103:339–350, 2001. (Cited on p. 167.)
[394] Ya Yan Lu. Computing the logarithm of a symmetric positive definite matrix. Appl.
Numer. Math., 26:483–496, 1998. (Cited on p. 284.)
[395] Ya Yan Lu. Exponentials of symmetric matrices through tridiagonal reductions. Linear
Algebra Appl., 279:317–324, 1998. (Cited on p. 260.)
[396] Ya Yan Lu. A Padé approximation method for square roots of symmetric positive
definite matrices. SIAM J. Matrix Anal. Appl., 19(3):833–845, 1998. (Cited on
p. 164.)
[397] Ya Yan Lu. Computing a matrix function for exponential integrators. J. Comput.
Appl. Math., 161:203–216, 2003. (Cited on p. 262.)
[398] Cyrus Colton MacDuffee. Vectors and Matrices. Number 7 in The Carus Mathematical
Monographs. Mathematical Association of America, 1943. xi+203 pp. (Cited on p. 34.)
[399] Cyrus Colton MacDuffee. The Theory of Matrices. Chelsea, New York, 1946. v+110
pp. Corrected reprint of first edition (J. Springer, Berlin, 1933). Also available as
Dover edition, 2004. (Cited on p. 27.)
[400] D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Structured factorizations
in scalar product spaces. SIAM J. Matrix Anal. Appl., 27:821–850, 2006. (Cited on
p. 315.)
[401] Arne Magnus and Jan Wynn. On the Padé table of cos z. Proc. Amer. Math. Soc., 47
(2):361–367, 1975. (Cited on pp. 262, 290, 296.)
[402] Jan R. Magnus and Heinz Neudecker. Matrix Differential Calculus with Applications in
Statistics and Econometrics. Revised edition, Wiley, Chichester, UK, 1999. xviii+395
pp. ISBN 0-471-98633-X. (Cited on p. 69.)
[403] Wilhelm Magnus. On the exponential solution of differential equations for a linear
operator. Comm. Pure Appl. Math., 7:649–673, 1954. (Cited on p. 263.)
[404] P. J. Maher. Partially isometric approximation of positive operators. Illinois Journal
of Mathematics, 33(2):227–243, 1989. (Cited on p. 214.)
[405] Jianqin Mao. Optimal orthonormalization of the strapdown matrix by using singular
value decomposition. Computers Math. Applic., 12A(3):353–362, 1986. (Cited on
p. 42.)
[406] Marvin Marcus and Henryk Minc. Some results on doubly stochastic matrices. Proc.
Amer. Math. Soc., 13(4):571–579, 1962. (Cited on p. 191.)
[407] Jerrold E. Marsden and Tudor S. Ratiu. Introduction to Mechanics and Symmetry.
Second edition, Springer-Verlag, New York, 1999. xviii+582 pp. ISBN 0-387-98643-X.
(Cited on p. 266.)
[408] Roy Mathias. Evaluating the Frechet derivative of the matrix exponential. Numer.
Math., 63:213–226, 1992. (Cited on pp. 66, 256.)
[409] Roy Mathias. Approximation of matrix-valued functions. SIAM J. Matrix Anal. Appl.,
14(4):1061–1063, 1993. (Cited on p. 77.)
[410] Roy Mathias. Perturbation bounds for the polar decomposition. SIAM J. Matrix Anal.
Appl., 14(2):588–597, 1993. (Cited on p. 214.)
[411] Roy Mathias. Condition estimation for matrix functions via the Schur decomposition.
SIAM J. Matrix Anal. Appl., 16(2):565–578, 1995. (Cited on pp. 68, 69.)
[412] Roy Mathias. A chain rule for matrix functions and applications. SIAM J. Matrix
Anal. Appl., 17(3):610–620, 1996. (Cited on pp. 13, 69, 355.)
[413] Control Systems Toolbox Documentation. The MathWorks, Inc., Natick, MA, USA.
Online version. (Cited on p. 39.)
[414] MATLAB. The MathWorks, Inc., Natick, MA, USA. http://www.mathworks.com.
(Cited on p. 100.)
[415] Neal H. McCoy. On the characteristic roots of matric polynomials. Bull. Amer. Math.
Soc., 42:592–600, 1936. (Cited on p. 29.)
[416] A. McCurdy, K. C. Ng, and B. N. Parlett. Accurate computation of divided differences
of the exponential function. Math. Comp., 43(168):501–528, 1984. (Cited on pp. 250,
264.)
[417] Robert I. McLachlan and G. Reinout W. Quispel. Splitting methods. Acta Numerica,
11:341–434, 2002. (Cited on p. 263.)
[418] Volker Mehrmann. The Autonomous Linear Quadratic Control Problem: Theory and
Numerical Solution, volume 163 of Lecture Notes in Control and Information Sciences.
Springer-Verlag, Berlin, 1991. 177 pp. ISBN 0-340-54170-5. (Cited on pp. 132, 169.)
[419] Volker Mehrmann and Werner Rath. Numerical methods for the computation of an-
alytic singular value decompositions. Electron. Trans. Numer. Anal., 1:72–88, 1993.
(Cited on pp. 43, 214.)
[420] Madan Lal Mehta. Matrix Theory: Selected Topics and Useful Results. Second edi-
tion, Hindustan Publishing Company, Delhi, 1989. xvii+376 pp. ISBN 81-7075-012-1.
(Cited on p. 187.)
[421] Beatrice Meini. The matrix square root from a new functional perspective: Theoretical
results and computational issues. SIAM J. Matrix Anal. Appl., 26(2):362–376, 2004.
(Cited on pp. 142, 160, 166.)
[422] Brian J. Melloy and G. Kemble Bennett. Computing the exponential of an intensity
matrix. J. Comput. Appl. Math., 46:405–413, 1993. (Cited on pp. 105, 263.)
[423] Michael Metcalf and John K. Reid. Fortran 90/95 Explained. Second edition, Oxford
University Press, 1999. xv+341 pp. ISBN 0-19-850558-2. (Cited on p. 1.)
[424] W. H. Metzler. On the roots of matrices. Amer. J. Math., 14(4):326–377, 1892. (Cited
on pp. 26, 28.)
[425] Gérard Meurant. A review on the inverse of symmetric tridiagonal and block tridiag-
onal matrices. SIAM J. Matrix Anal. Appl., 13(3):707–728, 1992. (Cited on p. 316.)
[426] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA, 2000. xii+718 pp. ISBN 0-89871-454-0.
(Cited on pp. 28, 170, 326.)
[427] Charles A. Micchelli and R. A. Willoughby. On functions which preserve the class of
Stieltjes matrices. Linear Algebra Appl., 23:141–156, 1979. (Cited on p. 315.)
[428] L. M. Milne-Thomson. The Calculus of Finite Differences. Macmillan, London, 1933.
xxiii+558 pp. (Cited on p. 333.)
[429] Henryk Minc. Nonnegative Matrices. Wiley, New York, 1988. xiii+206 pp. ISBN
0-471-83966-3. (Cited on p. 191.)
[430] Borislav V. Minchev. Computing analytic matrix functions for a class of exponential
integrators. Reports in Informatics 278, Department of Informatics, University of
Bergen, Norway, June 2004. 11 pp. (Cited on p. 262.)
[431] Borislav V. Minchev and Will M. Wright. A review of exponential integrators for first
order semi-linear problems. Preprint 2/2005, Norwegian University of Science and
Technology, Trondheim, Norway, 2005. 44 pp. (Cited on pp. 37, 53.)
[432] L. Mirsky. Symmetric gauge functions and unitarily invariant norms. Quart. J. Math.,
11:50–59, 1960. (Cited on p. 328.)
[433] L. Mirsky. An Introduction to Linear Algebra. Oxford University Press, 1961. viii+440
pp. Reprinted by Dover, New York, 1990. ISBN 0-486-66434-1. (Cited on p. 328.)
[434] Maher Moakher. Means and averaging in the group of rotations. SIAM J. Matrix
Anal. Appl., 24(1):1–16, 2002. (Cited on pp. 42, 216.)
[435] Maher Moakher. A differential geometric approach to the geometric mean of symmetric
positive-definite matrices. SIAM J. Matrix Anal. Appl., 26(3):735–747, 2005. (Cited
on p. 52.)
[436] Cleve B. Moler. Numerical Computing with MATLAB. Society for Industrial and
Applied Mathematics, Philadelphia, PA, USA, 2004. xi+336 pp. Also available elec-
tronically from www.mathworks.com. ISBN 0-89871-560-1. (Cited on p. 165.)
[437] Cleve B. Moler and Charles F. Van Loan. Nineteen dubious ways to compute the
exponential of a matrix. SIAM Rev., 20(4):801–836, 1978. (Cited on pp. 27, 233,
262, 267.)
[438] Cleve B. Moler and Charles F. Van Loan. Nineteen dubious ways to compute the
exponential of a matrix, twenty-five years later. SIAM Rev., 45(1):3–49, 2003. (Cited
on pp. 27, 231, 233, 237, 241, 243, 246, 262, 263, 265, 267.)
[439] Pierre Montagnier, Christopher C. Paige, and Raymond J. Spiteri. Real Floquet factors
of linear time-periodic systems. Systems and Control Letters, 50:251–262, 2003. (Cited
on p. 286.)
[440] I. Moret and P. Novati. The computation of functions of matrices by truncated Faber
series. Numer. Funct. Anal. Optim., 22(5&6):697–719, 2001. (Cited on p. 309.)
[441] Jean-Michel Muller. Elementary Functions: Algorithms and Implementation.
Birkhäuser, Boston, MA, USA, 1997. xv+204 pp. ISBN 0-8176-3990-X. (Cited
on p. 100.)
[442] David Mumford, Caroline Series, and David Wright. Indra’s Pearls: The Vision of
Felix Klein. Cambridge University Press, 2002. xix+396 pp. ISBN 0-521-35253-3.
(Cited on p. 188.)
[443] H. Z. Munthe-Kaas, G. R. W. Quispel, and A. Zanna. Generalized polar decompo-
sitions on Lie groups with involutive automorphisms. Found. Comput. Math., 1(3):
297–324, 2001. (Cited on p. 315.)
[444] Noël M. Nachtigal, Satish C. Reddy, and Lloyd N. Trefethen. How fast are nonsym-
metric matrix iterations? SIAM J. Matrix Anal. Appl., 13(3):778–795, 1992. (Cited
on p. 75.)
[445] Igor Najfeld and Timothy F. Havel. Derivatives of the matrix exponential and their
computation. Advances in Applied Mathematics, 16:321–375, 1995. (Cited on pp. 51,
69, 263, 264.)
[446] Herbert Neuberger. Exactly massless quarks on the lattice. Phys. Lett. B, 417(1-2):
141–144, 1998. (Cited on p. 43.)
[447] Arnold Neumaier. Introduction to Numerical Analysis. Cambridge University Press,
2001. viii+356 pp. ISBN 0-521-33610-4. (Cited on p. 333.)
[448] Kwok Choi Ng. Contributions to the computation of the matrix exponential. Technical
Report PAM-212, Center for Pure and Applied Mathematics, University of California,
Berkeley, February 1984. 72 pp. PhD thesis. (Cited on pp. 104, 227, 251.)
[449] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer-Verlag, New
York, 1999. xx+636 pp. ISBN 0-387-98793-2. (Cited on p. 199.)
[450] Paolo Novati. A polynomial method based on Fejér points for the computation of
functions of unsymmetric matrices. Appl. Numer. Math., 44:201–224, 2003. (Cited
on p. 309.)
[451] Jeffrey Nunemacher. Which matrices have real logarithms? Math. Mag., 62(2):132–
135, 1989. (Cited on p. 17.)
[452] G. Opitz. Steigungsmatrizen. Z. Angew. Math. Mech., 44:T52–T54, 1964. (Cited on
p. 264.)
[453] James M. Ortega and Werner C. Rheinboldt. Iterative Solution of Nonlinear Equations
in Several Variables. Society for Industrial and Applied Mathematics, Philadelphia,
PA, USA, 2000. xxvi+572 pp. Republication of work first published by Academic
Press in 1970. ISBN 0-89871-461-3. (Cited on pp. 69, 167.)
[454] E. E. Osborne. On pre-conditioning of matrices. J. Assoc. Comput. Mach., 7:338–345,
1960. (Cited on p. 100.)
[455] A. M. Ostrowski. Solution of Equations in Euclidean and Banach Spaces. Academic
Press, New York, 1973. xx+412 pp. Third edition of Solution of Equations and Systems
of Equations. ISBN 0-12530260-6. (Cited on pp. 333, 357.)
[456] C. C. Paige. Krylov subspace processes, Krylov subspace methods, and iteration poly-
nomials. In Proceedings of the Cornelius Lanczos International Centenary Conference,
J. David Brown, Moody T. Chu, Donald C. Ellison, and Robert J. Plemmons, editors,
Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1994, pages
83–92. (Cited on p. 311.)
[457] Pradeep Pandey, Charles S. Kenney, and Alan J. Laub. A parallel algorithm for the
matrix sign function. Int. J. High Speed Computing, 2(2):181–191, 1990. (Cited on
p. 130.)
[458] Michael James Parks. A Study of Algorithms to Compute the Matrix Exponential. PhD
thesis, Mathematics Department, University of California, Berkeley, CA, USA, 1994.
53 pp. (Cited on p. 263.)
[459] Beresford N. Parlett. Computation of functions of triangular matrices. Memorandum
ERL-M481, Electronics Research Laboratory, College of Engineering, University of
California, Berkeley, November 1974. 18 pp. (Cited on pp. 86, 106.)
[460] Beresford N. Parlett. A recurrence among the elements of functions of triangular
matrices. Linear Algebra Appl., 14:117–121, 1976. (Cited on p. 85.)
[461] Beresford N. Parlett. The Symmetric Eigenvalue Problem. Society for Industrial
and Applied Mathematics, Philadelphia, PA, USA, 1998. xxiv+398 pp. Unabridged,
amended version of book first published by Prentice-Hall in 1980. ISBN 0-89871-402-8.
(Cited on p. 35.)
[462] Beresford N. Parlett and Kwok Choi Ng. Development of an accurate algorithm for
exp(Bt). Technical Report PAM-294, Center for Pure and Applied Mathematics,
University of California, Berkeley, August 1985. 23 pp. Fortran program listings are
given in an appendix with the same report number printed separately. (Cited on
pp. 89, 228, 250.)
[463] Karen Hunger Parshall. Joseph H. M. Wedderburn and the structure theory of algebras.
Archive for History of Exact Sciences, 32(3-4):223–349, 1985. (Cited on p. 26.)
[464] Karen Hunger Parshall. James Joseph Sylvester. Life and Work in Letters. Oxford
University Press, 1998. xv+321 pp. ISBN 0-19-850391-1. (Cited on p. 30.)
[465] Karen Hunger Parshall. James Joseph Sylvester. Jewish Mathematician in a Victorian
World. Johns Hopkins University Press, Baltimore, MD, USA, 2006. xiii+461 pp. ISBN
0-8018-8291-5. (Cited on p. 26.)
[466] Michael S. Paterson and Larry J. Stockmeyer. On the number of nonscalar multiplica-
tions necessary to evaluate polynomials. SIAM J. Comput., 2(1):60–66, 1973. (Cited
on p. 73.)
[467] G. Peano. Intégration par séries des équations différentielles linéaires. Math. Annalen,
32:450–456, 1888. (Cited on p. 26.)
[468] Heinz-Otto Peitgen, Hartmut Jürgens, and Dietmar Saupe. Fractals for the Classroom.
Part Two: Complex Systems and Mandelbrot Set. Springer-Verlag, New York, 1992.
xii+500 pp. ISBN 0-387-97722-8. (Cited on p. 178.)
[469] R. Penrose. A generalized inverse for matrices. Proc. Cambridge Philos. Soc., 51(3):
406–413, 1955. (Cited on p. 214.)
[470] P. P. Petrushev and V. A. Popov. Rational Approximation of Real Functions. Cam-
bridge University Press, Cambridge, UK, 1987. ISBN 0-521-33107-2. (Cited on p. 130.)
[471] Bernard Philippe. An algorithm to improve nearly orthonormal sets of vectors on
a vector processor. SIAM J. Alg. Discrete Methods, 8(3):396–403, 1987. (Cited on
p. 188.)
[472] George M. Phillips. Two Millennia of Mathematics: From Archimedes to Gauss.
Springer-Verlag, New York, 2000. xii+223 pp. ISBN 0-387-95022-2. (Cited on pp. 283,
286.)
[473] H. Poincaré. Sur les groupes continus. Trans. Cambridge Phil. Soc., 18:220–255, 1899.
(Cited on p. 26.)
[474] George Pólya and Gabor Szegö. Problems and Theorems in Analysis I. Series. Integral
Calculus. Theory of Functions. Springer-Verlag, New York, 1998. xix+389 pp. Reprint
of the 1978 edition. ISBN 3-540-63640-4. (Cited on p. 343.)
[475] George Pólya and Gabor Szegö. Problems and Theorems in Analysis II. Theory of
Functions. Zeros. Polynomials. Determinants. Number Theory. Geometry. Springer-
Verlag, New York, 1998. xi+392 pp. Reprint of the 1976 edition. ISBN 3-540-63686-2.
(Cited on p. 300.)
[476] Renfrey B. Potts. Symmetric square roots of the finite identity matrix. Utilitas Math-
ematica, 9:73–86, 1976. (Cited on p. 165.)
[477] M. J. D. Powell. Approximation Theory and Methods. Cambridge University Press,
Cambridge, UK, 1981. x+339 pp. ISBN 0-521-29514-9. (Cited on p. 79.)
[478] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery.
Numerical Recipes: The Art of Scientific Computing. Third edition, Cambridge Uni-
versity Press, 2007. xxi+1235 pp. ISBN 978-0-521-88068-8. (Cited on p. 274.)
[479] Panayiotis J. Psarrakos. On the mth roots of a complex matrix. Electron. J. Linear
Algebra, 9:32–41, 2002. (Cited on p. 174.)
[480] Péter Pulay. An iterative method for the determination of the square root of a positive
definite matrix. Z. Angew. Math. Mech., 46:151, 1966. (Cited on pp. 167, 171.)
[481] Norman J. Pullman. Matrix Theory and its Applications: Selected Topics. Marcel
Dekker, New York, 1976. vi+240 pp. ISBN 0-8247-6420-X. (Cited on pp. 28, 29.)
[482] W. Pusz and S. L. Woronowicz. Functional calculus for sesquilinear forms and the
purification map. Reports on Mathematical Physics, 8(2):159–170, 1975. (Cited on
p. 52.)
[483] Heydar Radjavi and Peter Rosenthal. Simultaneous Triangularization. Springer-Ver-
lag, New York, 2000. xii+318 pp. ISBN 0-387-98466-6. (Cited on p. 29.)
[484] C. R. Rao. Matrix approximations and reduction of dimensionality in multivariate sta-
tistical analysis. In Multivariate Analysis—V, P. R. Krishnaiah, editor, North Holland,
Amsterdam, 1980, pages 3–22. (Cited on p. 214.)
[485] Lothar Reichel. The application of Leja points to Richardson iteration and polynomial
preconditioning. Linear Algebra Appl., 154/156:389–414, 1991. (Cited on p. 75.)
[486] Matthias W. Reinsch. A simple expression for the terms in the Baker–Campbell–
Hausdorff series. J. Math. Phys., 41(4):2434–2442, 2000. (Cited on p. 263.)
[487] John R. Rice. A theory of condition. SIAM J. Numer. Anal., 3(2):287–310, 1966.
(Cited on p. 69.)
[488] Norman M. Rice. On nth roots of positive operators. Amer. Math. Monthly, 89(5):
313–314, 1982. (Cited on p. 189.)
[509] Ernst Schröder. Ueber unendlich viele Algorithmen zur Auflösung der Gleichungen.
Math. Annalen, 2:317–365, 1870. (Cited on pp. 130, 407.)
[510] Ernst Schröder. On infinitely many algorithms for solving equations. Technical Report
TR-92-121, Department of Computer Science, University of Maryland, College Park,
MD, USA, November 1992. 57 pp. Translation of [509] by G. W. Stewart. (Cited on
p. 130.)
[511] Manfred Schroeder. Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise.
W. H. Freeman, New York, 1991. xviii+429 pp. ISBN 0-7167-2136-8. (Cited on
pp. 131, 178.)
[512] Günther Schulz. Iterative Berechnung der reziproken Matrix. Z. Angew. Math. Mech.,
13:57–59, 1933. (Cited on pp. 114, 181.)
[513] Hans Schwerdtfeger. Les Fonctions de Matrices. I. Les Fonctions Univalentes. Number
649 in Actualités Scientifiques et Industrielles. Hermann, Paris, France, 1938. 58 pp.
(Cited on pp. 26, 30, 51.)
[514] Steven M. Serbin. Rational approximations of trigonometric matrices with application
to second-order systems of differential equations. Appl. Math. Comput., 5(1):75–92,
1979. (Cited on p. 287.)
[515] Steven M. Serbin and Sybil A. Blalock. An algorithm for computing the matrix cosine.
SIAM J. Sci. Statist. Comput., 1(2):198–204, 1980. (Cited on p. 299.)
[516] Lawrence F. Shampine and Mark W. Reichelt. The MATLAB ODE suite. SIAM J.
Sci. Comput., 18(1):1–22, 1997. (Cited on p. 165.)
[517] Wyatt D. Sharpa and Edward J. Allen. Stochastic neutron transport equations for
rod and plane geometries. Annals of Nuclear Energy, 27(2):99–116, 2000. (Cited on
p. 167.)
[518] N. Sherif. On the computation of a matrix inverse square root. Computing, 46:295–305,
1991. (Cited on p. 169.)
[519] L. S. Shieh, Y. T. Tsay, and C. T. Wang. Matrix sector functions and their applications
to system theory. IEE Proc., 131(5):171–181, 1984. (Cited on p. 48.)
[520] Ken Shoemake and Tom Duff. Matrix animation and polar decomposition. In Pro-
ceedings of the Conference on Graphics Interface ’92, Morgan Kaufmann Publishers
Inc., San Francisco, CA, USA, 1992, pages 258–264. (Cited on p. 42.)
[521] Avram Sidi. Practical Extrapolation Methods: Theory and Applications. Cambridge
University Press, Cambridge, UK, 2003. xxii+519 pp. ISBN 0-521-66159-5. (Cited
on p. 104.)
[522] Roger B. Sidje. Expokit: A software package for computing matrix exponentials. ACM
Trans. Math. Software, 24(1):130–156, 1998. (Cited on pp. 262, 263, 264.)
[523] Roger B. Sidje, Kevin Burrage, and Shev MacNamara. Inexact uniformization method
for computing transient distributions of Markov chains. SIAM J. Sci. Comput., 29(6):
2562–2580, 2007. (Cited on p. 260.)
[524] Roger B. Sidje and William J. Stewart. A numerical study of large sparse matrix
exponentials arising in Markov chains. Computational Statistics & Data Analysis, 29:
345–368, 1999. (Cited on p. 260.)
[525] Burton Singer and Seymour Spilerman. The representation of social processes by
Markov models. Amer. J. Sociology, 82(1):1–54, 1976. (Cited on p. 38.)
[526] Abraham Sinkov. Elementary Cryptanalysis: A Mathematical Approach. Mathematical
Association of America, Washington, D.C., 1966. ix+222 pp. ISBN 0-88385-622-0.
(Cited on p. 171.)
[527] Bård Skaflestad and Will M. Wright. The scaling and modified squaring method for
matrix functions related to the exponential. Preprint, Norwegian University of Science
and Technology, Trondheim, Norway, 2006. 17 pp. (Cited on p. 262.)
[528] David M. Smith. Algorithm 693: A FORTRAN package for floating-point multiple-
precision arithmetic. ACM Trans. Math. Software, 17(2):273–283, 1991. (Cited on
p. 286.)
[529] Matthew I. Smith. Numerical Computation of Matrix Functions. PhD thesis, Uni-
versity of Manchester, Manchester, England, September 2002. 157 pp. (Cited on
p. 370.)
[530] Matthew I. Smith. A Schur algorithm for computing matrix pth roots. SIAM J. Matrix
Anal. Appl., 24(4):971–989, 2003. (Cited on pp. 174, 176, 187, 188, 190.)
[531] R. A. Smith. Infinite product expansions for matrix n-th roots. J. Austral. Math. Soc.,
8:242–249, 1968. (Cited on p. 188.)
[532] Alicja Smoktunowicz, Jesse L. Barlow, and Julien Langou. A note on the error analysis
of classical Gram–Schmidt. Numer. Math., 105:299–313, 2006. (Cited on p. 309.)
[533] Inge Söderkvist. Perturbation analysis of the orthogonal Procrustes problem. BIT, 33:
687–694, 1993. (Cited on p. 201.)
[534] Mark Sofroniou and Giulia Spaletta. Solving orthogonal matrix differential systems
in Mathematica. In Computational Science—ICCS 2002 Proceedings, Part III, Peter
M. A. Sloot, C. J. Kenneth Tan, Jack J. Dongarra, and Alfons G. Hoekstra, editors,
volume 2002 of Lecture Notes in Computer Science, Springer-Verlag, Berlin, 2002,
pages 496–505. (Cited on pp. 42, 215.)
[535] Irene A. Stegun and Milton Abramowitz. Pitfalls in computation. J. Soc. Indust. Appl.
Math., 4(4):207–219, 1956. (Cited on p. 74.)
[536] G. W. Stewart. Error and perturbation bounds for subspaces associated with certain
eigenvalue problems. SIAM Rev., 15(4):727–764, 1973. (Cited on p. 231.)
[537] G. W. Stewart. Matrix Algorithms. Volume I: Basic Decompositions. Society for
Industrial and Applied Mathematics, Philadelphia, PA, USA, 1998. xx+458 pp. ISBN
0-89871-414-1. (Cited on p. 357.)
[538] G. W. Stewart. Matrix Algorithms. Volume II: Eigensystems. Society for Industrial
and Applied Mathematics, Philadelphia, PA, USA, 2001. xix+469 pp. ISBN 0-89871-
503-2. (Cited on pp. 35, 69, 309, 323, 325.)
[539] G. W. Stewart and Ji-guang Sun. Matrix Perturbation Theory. Academic Press,
London, 1990. xv+365 pp. ISBN 0-12-670230-6. (Cited on p. 327.)
[540] William J. Stewart. Introduction to the Numerical Solution of Markov Chains. Prince-
ton University Press, Princeton, NJ, USA, 1994. xix+539 pp. ISBN 0-691-03699-3.
(Cited on p. 260.)
[541] Eberhard Stickel. On the Fréchet derivative of matrix functions. Linear Algebra Appl.,
91:83–88, 1987. (Cited on p. 70.)
[542] Josef Stoer and R. Bulirsch. Introduction to Numerical Analysis. Third edition,
Springer-Verlag, New York, 2002. xv+744 pp. ISBN 0-387-95452-X. (Cited on
pp. 28, 154, 333.)
[543] David R. Stoutemyer. Crimes and misdemeanors in the computer algebra trade. No-
tices Amer. Math. Soc., 38(7):778–785, 1991. (Cited on p. 286.)
[544] Gilbert Strang. On the construction and comparison of difference schemes. SIAM J.
Numer. Anal., 5(3):506–517, 1968. (Cited on p. 263.)
[545] Torsten Ström. Minimization of norms and logarithmic norms by diagonal similarities.
Computing, 10:1–7, 1972. (Cited on p. 105.)
[546] Torsten Ström. On logarithmic norms. SIAM J. Numer. Anal., 12(5):741–753, 1975.
(Cited on p. 237.)
[547] Ji-guang Sun. A note on backward perturbations for the Hermitian eigenvalue problem.
BIT, 35:385–393, 1995. (Cited on p. 214.)
[548] Ji-guang Sun. Perturbation analysis of the matrix sign function. Linear Algebra Appl.,
250:177–206, 1997. (Cited on p. 129.)
[549] Ji-guang Sun and C.-H. Chen. Generalized polar decomposition. Math. Numer. Sinica,
11:262–273, 1989. In Chinese. Cited in [372]. (Cited on p. 214.)
[550] Xiaobai Sun and Enrique S. Quintana-Ortí. The generalized Newton iteration for the
matrix sign function. SIAM J. Sci. Comput., 24(2):669–683, 2002. (Cited on p. 130.)
[551] Xiaobai Sun and Enrique S. Quintana-Ortí. Spectral division methods for block gen-
eralized Schur decompositions. Math. Comp., 73(248):1827–1847, 2004. (Cited on
p. 49.)
[552] Masuo Suzuki. Generalized Trotter’s formula and systematic approximants of ex-
ponential operators and inner derivations with applications to many-body problems.
Commun. Math. Phys., 51(2):183–190, 1976. (Cited on pp. 262, 263.)
[553] J. J. Sylvester. Additions to the articles, “On a New Class of Theorems,” and “On
Pascal’s Theorem”. Philosophical Magazine, 37:363–370, 1850. Reprinted in [558,
pp. 145–151]. (Cited on p. 26.)
[554] J. J. Sylvester. Note on the theory of simultaneous linear differential or difference
equations with constant coefficients. Amer. J. Math., 4(1):321–326, 1881. Reprinted
in [559, pp. 551–556]. (Cited on pp. 188, 191.)
[555] J. J. Sylvester. Sur les puissances et les racines de substitutions linéaires. Comptes
Rendus de l’Académie des Sciences, 94:55–59, 1882. Reprinted in [559, pp. 562–564].
(Cited on pp. 187, 188.)
[556] J. J. Sylvester. Sur les racines des matrices unitaires. Comptes Rendus de l’Académie
des Sciences, 94:396–399, 1882. Reprinted in [559, pp. 565–567]. (Cited on p. 168.)
[557] J. J. Sylvester. On the equation to the secular inequalities in the planetary theory.
Philosophical Magazine, 16:267–269, 1883. Reprinted in [560, pp. 110–111]. (Cited on
pp. 26, 34, 187.)
[558] The Collected Mathematical Papers of James Joseph Sylvester, volume I (1837–1853).
Cambridge University Press, 1904. xii+650 pp. (Cited on p. 409.)
[559] The Collected Mathematical Papers of James Joseph Sylvester, volume III (1870–1883).
Chelsea, New York, 1973. xv+697 pp. ISBN 0-8284-0253-1. (Cited on p. 409.)
[560] The Collected Mathematical Papers of James Joseph Sylvester, volume IV (1882–1897).
Chelsea, New York, 1973. xxxvii+756 pp. ISBN 0-8284-0253-1. (Cited on p. 409.)
[561] Henry Taber. On the theory of matrices. Amer. J. Math., 12(4):337–396, 1890. (Cited
on p. 28.)
[562] Ping Tak Peter Tang. Table-driven implementation of the expm1 function in IEEE
floating-point arithmetic. ACM Trans. Math. Software, 18(2):211–222, 1992. (Cited
on p. 261.)
[563] Pham Dinh Tao. Convergence of a subgradient method for computing the bound norm
of matrices. Linear Algebra Appl., 62:163–182, 1984. In French. (Cited on p. 69.)
[564] Olga Taussky. Commutativity in finite matrices. Amer. Math. Monthly, 64(4):229–235,
1957. (Cited on p. 29.)
[565] Olga Taussky. How I became a torchbearer for matrix theory. Amer. Math. Monthly,
95(9):801–812, 1988. (Cited on p. 29.)
[566] R. C. Thompson. On the matrices AB and BA. Linear Algebra Appl., 1:43–58, 1968.
(Cited on p. 28.)
[567] C. Thron, S. J. Dong, K. F. Liu, and H. P. Ying. Padé-Z2 estimator of determinants.
Physical Review D, 57(3):1642–1653, 1998. (Cited on p. 44.)
[568] Françoise Tisseur. Parallel implementation of the Yau and Lu method for eigenvalue
computation. International Journal of Supercomputer Applications and High Perfor-
mance Computing, 11(3):197–204, 1997. (Cited on p. 300.)
[569] Françoise Tisseur. Newton’s method in floating point arithmetic and iterative re-
finement of generalized eigenvalue problems. SIAM J. Matrix Anal. Appl., 22(4):
1038–1057, 2001. (Cited on p. 166.)
[570] Françoise Tisseur and Karl Meerbergen. The quadratic eigenvalue problem. SIAM
Rev., 43(2):235–286, 2001. (Cited on p. 45.)
[571] L. N. Trefethen and J. A. C. Weideman. The fast trapezoid rule in scientific computing.
Paper in preparation, 2007. (Cited on p. 308.)
[572] Lloyd N. Trefethen and David Bau III. Numerical Linear Algebra. Society for Industrial
and Applied Mathematics, Philadelphia, PA, USA, 1997. xii+361 pp. ISBN 0-89871-
361-7. (Cited on pp. 325, 335.)
[573] Lloyd N. Trefethen and Mark Embree. Spectra and Pseudospectra: The Behavior of
Nonnormal Matrices and Operators. Princeton University Press, Princeton, NJ, USA,
2005. xvii+606 pp. ISBN 0-691-11946-5. (Cited on pp. 47, 105, 263.)
[574] Lloyd N. Trefethen, J. A. C. Weideman, and Thomas Schmelzer. Talbot quadratures
and rational approximations. BIT, 46(3):653–670, 2006. (Cited on pp. 259, 308.)
[575] H. F. Trotter. On the product of semi-groups of operators. Proc. Amer. Math. Soc.,
10(4):545–551, 1959. (Cited on p. 263.)
[576] J. S. H. Tsai, L. S. Shieh, and R. E. Yates. Fast and stable algorithms for computing
the principal nth root of a complex matrix and the matrix sector function. Comput.
Math. Applic., 15(11):903–913, 1988. (Cited on p. 129.)
[577] H. W. Turnbull. The matrix square and cube roots of unity. J. London Math. Soc., 2
(8):242–244, 1927. (Cited on p. 188.)
[578] H. W. Turnbull. The Theory of Determinants, Matrices, and Invariants. Blackie,
London and Glasgow, 1929. xvi+338 pp. (Cited on p. 188.)
[579] H. W. Turnbull and A. C. Aitken. An Introduction to the Theory of Canonical Matrices.
Blackie, London and Glasgow, 1932. xiii+200 pp. Reprinted with appendix, 1952.
(Cited on pp. 27, 29, 52.)
[580] G. M. Tuynman. The derivation of the exponential map of matrices. Amer. Math.
Monthly, 102(9):818–820, 1995. (Cited on p. 263.)
[581] Frank Uhlig. Explicit polar decomposition and a near-characteristic polynomial: The
2 × 2 case. Linear Algebra Appl., 38:239–249, 1981. (Cited on p. 216.)
[582] R. Vaidyanathaswamy. Integer-roots of the unit matrix. J. London Math. Soc., 3(12):
121–124, 1928. (Cited on p. 189.)
[583] R. Vaidyanathaswamy. On the possible periods of integer matrices. J. London Math.
Soc., 3(12):268–272, 1928. (Cited on p. 189.)
[584] P. Van Den Driessche and H. K. Wimmer. Explicit polar decomposition of companion
matrices. Electron. J. Linear Algebra, 1:64–69, 1996. (Cited on p. 215.)
[585] J. van den Eshof, A. Frommer, Th. Lippert, K. Schilling, and H. A. Van der Vorst.
Numerical methods for the QCD overlap operator. I. Sign-function and error bounds.
Computer Physics Communications, 146:203–224, 2002. (Cited on pp. 43, 130.)
[586] Jasper van den Eshof. Nested Iteration Methods for Nonlinear Matrix Problems. PhD
thesis, Utrecht University, Utrecht, Netherlands, September 2003. vii+183 pp. (Cited
on p. 130.)
[587] Henk A. Van der Vorst. An iterative solution method for solving f (A)x = b, using
Krylov subspace information obtained for the symmetric positive definite matrix A. J.
Comput. Appl. Math., 18:249–263, 1987. (Cited on p. 309.)
[588] Henk A. Van der Vorst. Solution of f (A)x = b with projection methods for the matrix
A. In Numerical Challenges in Lattice Quantum Chromodynamics, Andreas Frommer,
Thomas Lippert, Björn Medeke, and Klaus Schilling, editors, volume 15 of Lecture
Notes in Computational Science and Engineering, Springer-Verlag, Berlin, 2000, pages
18–28. (Cited on p. 309.)
[589] Henk A. Van der Vorst. Iterative Krylov Methods for Large Linear Systems. Cambridge
University Press, 2003. xiii+221 pp. ISBN 0-521-81828-1. (Cited on p. 309.)
[590] Jos L. M. van Dorsselaer, Michiel E. Hochstenbach, and Henk A. Van der Vorst.
Computing probabilistic bounds for extreme eigenvalues of symmetric matrices with
the Lanczos method. SIAM J. Matrix Anal. Appl., 22(3):837–852, 2000. (Cited on
p. 69.)
[591] J. L. van Hemmen and T. Ando. An inequality for trace ideals. Commun. Math. Phys.,
76:143–148, 1980. (Cited on p. 135.)
[592] Charles F. Van Loan. A study of the matrix exponential. Numerical Analysis Report
No. 10, University of Manchester, Manchester, UK, August 1975. Reissued as MIMS
EPrint 2006.397, Manchester Institute for Mathematical Sciences, The University of
Manchester, UK, November 2006. (Cited on pp. 84, 105, 262.)
[593] Charles F. Van Loan. On the limitation and application of Padé approximation to the
matrix exponential. In Padé and Rational Approximation: Theory and Applications,
E. B. Saff and R. S. Varga, editors, Academic Press, New York, 1977, pages 439–448.
(Cited on p. 263.)
[594] Charles F. Van Loan. The sensitivity of the matrix exponential. SIAM J. Numer.
Anal., 14(6):971–981, 1977. (Cited on pp. 238, 263.)
[595] Charles F. Van Loan. Computing integrals involving the matrix exponential. IEEE
Trans. Automat. Control, AC-23(3):395–404, 1978. (Cited on p. 264.)
[596] Charles F. Van Loan. A note on the evaluation of matrix polynomials. IEEE Trans.
Automat. Control, AC-24(2):320–321, 1979. (Cited on p. 74.)
[597] R. Vandebril, M. Van Barel, G. H. Golub, and N. Mastronardi. A bibliography on
semiseparable matrices. Calcolo, 42:249–270, 2005. (Cited on p. 316.)
[598] V. S. Varadarajan. Lie Groups, Lie Algebras, and Their Representations. Prentice-
Hall, Englewood Cliffs, NJ, USA, 1974. xiii+430 pp. ISBN 0-13-535732-2. (Cited on
p. 262.)
[599] J. M. Varah. On the separation of two matrices. SIAM J. Numer. Anal., 16(2):216–222,
1979. (Cited on p. 231.)
[600] Richard S. Varga. Matrix Iterative Analysis. Second edition, Springer-Verlag, Berlin,
2000. x+358 pp. ISBN 3-540-66321-5. (Cited on pp. 242, 260, 264.)
[601] R. Vertechy and V. Parenti-Castelli. Real-time direct position analysis of parallel
spherical wrists by using extra sensors. Journal of Mechanical Design, 128:288–294,
2006. (Cited on pp. 42, 208.)
[602] Cornelis Visser. Note on linear operators. Proc. Kon. Akad. Wet. Amsterdam, 40(3):
270–272, 1937. (Cited on p. 167.)
[603] John von Neumann. Über Adjungierte Funktionaloperatoren. Ann. of Math. (2), 33
(2):294–310, 1932. (Cited on p. 214.)
[604] John von Neumann. Über die analytischen Eigenschaften von Gruppen linearer Trans-
formationen und ihrer Darstellungen. Math. Zeit., 30:3–42, 1929. (Cited on p. 283.)
[605] Grace Wahba. Problem 65-1, a least squares estimate of satellite attitude. SIAM Rev.,
7(3):409, 1965. Solutions in 8(3):384–386, 1966. (Cited on p. 214.)
[606] Robert C. Ward. Numerical computation of the matrix exponential with accuracy
estimate. SIAM J. Numer. Anal., 14(4):600–610, 1977. (Cited on pp. 104, 246, 263,
264.)
[607] David S. Watkins. Fundamentals of Matrix Computations. Second edition, Wiley, New
York, 2002. xiii+618 pp. ISBN 0-471-21394-2. (Cited on pp. 69, 323, 325, 335.)
[608] David S. Watkins. The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods.
Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2007. x+442
pp. ISBN 978-0-89871-641-2. (Cited on pp. 309, 323.)
[609] G. A. Watson. Approximation Theory and Numerical Methods. Wiley, Chichester, UK,
1980. x+229 pp. ISBN 0-471-27706-1. (Cited on p. 79.)
[610] Frederick V. Waugh and Martin E. Abel. On fractional powers of a matrix. J. Amer.
Statist. Assoc., 62:1018–1021, 1967. (Cited on pp. 38, 167.)
[611] J. H. M. Wedderburn. Lectures on Matrices, volume 17 of American Mathematical
Society Colloquium Publications. American Mathematical Society, Providence, RI,
USA, 1934. vii+205 pp. (Cited on pp. 27, 28.)
[612] G. H. Weiss and A. A. Maradudin. The Baker–Hausdorff formula and a problem in
crystal physics. J. Math. Phys., 3(4):771–777, 1962. (Cited on p. 263.)
[613] Edgar M. E. Wermuth. Two remarks on matrix exponentials. Linear Algebra Appl.,
117:127–132, 1989. (Cited on p. 262.)
[614] Edouard Weyr. Note sur la théorie de quantités complexes formées avec n unités
principales. Bull. Sci. Math. II, 11:205–215, 1887. (Cited on pp. 26, 104.)
[615] R. M. Wilcox. Exponential operators and parameter differentiation in quantum
physics. J. Math. Phys., 8(4):962–982, 1967. (Cited on p. 263.)
[616] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, 1965.
xviii+662 pp. ISBN 0-19-853403-5 (hardback), 0-19-853418-3 (paperback). (Cited on
p. 69.)
[617] Arthur Wouk. Integral representation of the logarithm of matrices and operators. J.
Math. Anal. and Appl., 11:131–138, 1965. (Cited on p. 283.)
[618] Pei Yuan Wu. Approximation by partial isometries. Proc. Edinburgh Math. Soc., 29:
255–261, 1986. (Cited on p. 214.)
[619] Shing-Tung Yau and Ya Yan Lu. Reducing the symmetric matrix eigenvalue problem
to matrix multiplications. SIAM J. Sci. Comput., 14(1):121–136, 1993. (Cited on
p. 300.)
[620] N. J. Young. A bound for norms of functions of matrices. Linear Algebra Appl., 37:
181–186, 1981. (Cited on p. 105.)
[621] V. Zakian. Properties of I_{MN} and J_{MN} approximants and applications to numerical
inversion of Laplace transforms and initial value problems. J. Inst. Maths. Applics.,
50:191–222, 1975. (Cited on p. 263.)
[622] Hongyuan Zha and Zhenyue Zhang. Fast parallelizable methods for the Hermitian
eigenvalue problem. Technical Report CSE-96-041, Department of Computer Science
and Engineering, Pennsylvania State University, University Park, PA, May 1996. 19
pp. (Cited on p. 216.)
[623] Xingzhi Zhan. Matrix Inequalities, volume 1790 of Lecture Notes in Mathematics.
Springer-Verlag, Berlin, 2002. ISBN 3-540-43798-3. (Cited on p. 330.)
[624] Fuzhen Zhang, editor. The Schur Complement and Its Applications. Springer-Verlag,
New York, 2005. xvi+295 pp. ISBN 0-387-24271-6. (Cited on p. 329.)
[625] Paweł Zieliński and Krystyna Ziętak. The polar decomposition—properties, applica-
tions and algorithms. Applied Mathematics, Ann. Pol. Math. Soc., 38:23–49, 1995.
(Cited on p. 213.)
Index
A suffix “t” after a page number denotes a table, “n” a footnote, and “q” a quotation.
Mathematical symbols and Greek letters are indexed as if they were spelled out. The solution
to a problem is not usually indexed if the problem itself is indexed under the same term.