Theory and Computation of Tensors
Multi-Dimensional Arrays

WEIYANG DING
YIMIN WEI

AMSTERDAM • BOSTON • HEIDELBERG • LONDON


NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, UK
525 B Street, Suite 1800, San Diego, CA 92101-4495, USA
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, USA
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK

Copyright © 2016 Elsevier Ltd. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, recording, or any information storage and retrieval system,
without permission in writing from the publisher. Details on how to seek permission, further
information about the Publisher’s permissions policies and our arrangements with organizations such as
the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:
www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the
Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience
broaden our understanding, changes in research methods, professional practices, or medical treatment
may become necessary.

Practitioners and researchers must always rely on their own experience and knowledge in evaluating
and using any information, methods, compounds, or experiments described herein. In using such
information or methods they should be mindful of their own safety and the safety of others, including
parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume
any liability for any injury and/or damage to persons or property as a matter of products liability,
negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas
contained in the material herein.

Library of Congress Cataloging-in-Publication Data


A catalog record for this book is available from the Library of Congress

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library

ISBN 978-0-12-803953-3 (print)


ISBN 978-0-12-803980-9 (online)

For information on all Academic Press publications


visit our website at https://www.elsevier.com/

Publisher: Glyn Jones


Acquisition Editor: Glyn Jones
Editorial Project Manager: Anna Valutkevich
Production Project Manager: Debasish Ghosh
Designer: Nikki Levy

Typeset by SPi Global, India


Contents

Preface v

I General Theory 1

1 Introduction and Preliminaries 3
    1.1 What Are Tensors? 3
    1.2 Basic Operations 6
    1.3 Tensor Decompositions 8
    1.4 Tensor Eigenvalue Problems 10

2 Generalized Tensor Eigenvalue Problems 11
    2.1 A Unified Framework 11
    2.2 Basic Definitions 13
    2.3 Several Basic Properties 14
        2.3.1 Number of Eigenvalues 14
        2.3.2 Spectral Radius 15
        2.3.3 Diagonalizable Tensor Pairs 15
        2.3.4 Gershgorin Circle Theorem 16
        2.3.5 Backward Error Analysis 19
    2.4 Real Tensor Pairs 20
        2.4.1 The Crawford Number 21
        2.4.2 Symmetric-Definite Tensor Pairs 22
    2.5 Sign-Complex Spectral Radius 26
        2.5.1 Definitions 26
        2.5.2 Collatz-Wielandt Formula 27
        2.5.3 Properties for Single Tensors 29
        2.5.4 The Componentwise Distance to Singularity 31
        2.5.5 Bauer-Fike Theorem 33
    2.6 An Illustrative Example 34

II Hankel Tensors 37

3 Fast Tensor-Vector Products 39
    3.1 Hankel Tensors 39
    3.2 Exponential Data Fitting 40
        3.2.1 The One-Dimensional Case 40
        3.2.2 The Multidimensional Case 42
    3.3 Anti-Circulant Tensors 45
        3.3.1 Diagonalization 46
        3.3.2 Singular Values 47
        3.3.3 Block Tensors 49
    3.4 Fast Hankel Tensor-Vector Product 50
    3.5 Numerical Examples 53

4 Inheritance Properties 59
    4.1 Inheritance Properties 59
    4.2 The First Inheritance Property of Hankel Tensors 61
        4.2.1 A Convolution Formula 61
        4.2.2 Lower-Order Implies Higher-Order 63
        4.2.3 SOS Decomposition of Strong Hankel Tensors 65
    4.3 The Second Inheritance Property of Hankel Tensors 66
        4.3.1 Strong Hankel Tensors 66
        4.3.2 A General Vandermonde Decomposition of Hankel Matrices 68
        4.3.3 An Augmented Vandermonde Decomposition of Hankel Tensors 71
        4.3.4 The Second Inheritance Property of Hankel Tensors 75
    4.4 The Third Inheritance Property of Hankel Tensors 77

III M-Tensors 79

5 Definitions and Basic Properties 81
    5.1 Preliminaries 81
        5.1.1 Nonnegative Tensor 81
        5.1.2 From M-Matrix to M-Tensor 82
    5.2 Spectral Properties of M-Tensors 83
    5.3 Semi-Positivity 84
        5.3.1 Definitions 84
        5.3.2 Semi-Positive Z-Tensors 85
        5.3.3 Proof of Theorem 5.7 87
        5.3.4 General M-Tensors 89
    5.4 Monotonicity 90
        5.4.1 Definitions 90
        5.4.2 Properties 90
        5.4.3 A Counterexample 93
        5.4.4 A Nontrivial Monotone Z-Tensor 93
    5.5 An Extension of M-Tensors 93
    5.6 Summation 95

6 Multilinear Systems with M-Tensors 97
    6.1 Motivations 97
    6.2 Triangular Equations 99
    6.3 M-Equations and Beyond 102
        6.3.1 M-Equations 102
        6.3.2 Nonpositive Right-Hand Side 104
        6.3.3 Nonhomogeneous Left-Hand Side 105
        6.3.4 Absolute M-Equations 106
        6.3.5 Banded M-Equation 107
    6.4 Iterative Methods for M-Equations 108
        6.4.1 The Classical Iterations 109
        6.4.2 The Newton Method for Symmetric M-Equations 111
        6.4.3 Numerical Tests 112
    6.5 Perturbation Analysis of M-Equations 114
        6.5.1 Backward Errors of Triangular M-Equations 115
        6.5.2 Condition Numbers 116
    6.6 Inverse Iteration 118

Bibliography 125

Subject Index 135


Preface
This book is devoted to the theory and computation of tensors, also called hyper-
matrices. Our investigation includes theories on generalized tensor eigenvalue
problems and two kinds of structured tensors, Hankel tensors and M-tensors.
Both theoretical analyses and computational aspects are discussed.
We begin with the generalized tensor eigenvalue problems, which are re-
garded as a unified framework of different kinds of tensor eigenvalue problems
arising from applications. We focus on the perturbation theory and the error
analysis of regular tensor pairs. Employing various techniques, we extend sev-
eral classical results from matrices or matrix pairs to tensor pairs, such as the
Gershgorin circle theorem, the Collatz-Wielandt formula, the Bauer-Fike the-
orem, the Rayleigh-Ritz theorem, backward error analysis, the componentwise
distance of a nonsingular tensor to singularity, etc.
In the second part, we focus on Hankel tensors. We first propose a fast algo-
rithm for Hankel tensor-vector products by introducing a special class of Hankel
tensors that can be diagonalized by Fourier matrices, called anti-circulant ten-
sors. Then we obtain a fast algorithm for Hankel tensor-vector products by em-
bedding a Hankel tensor into a larger anti-circulant tensor. The computational
complexity is reduced from O(n^m) to O(m^2 n log(mn)). Next, we investigate the
spectral inheritance properties of Hankel tensors by applying the convolution
formula of the fast algorithm and an augmented Vandermonde decomposition of
strong Hankel tensors. We prove that if a lower-order Hankel tensor is positive
semidefinite, then a higher-order Hankel tensor with the same generating vector
has no negative H-eigenvalues, when (i) the lower order is 2, or (ii) the lower
order is even and the higher order is its multiple.
The third part is devoted to M-tensors. We attempt to extend the
equivalent definitions of nonsingular M -matrices, such as semi-positivity, mono-
tonicity, nonnegative inverse, etc., to the tensor case. Our results show that
the semi-positivity is still an equivalent definition of nonsingular M-tensors,
while the monotonicity is not. Furthermore, the generalization of the “nonneg-
ative inverse” property inspires the study of multilinear system of equations.
We prove the existence and uniqueness of the positive solutions of nonsingular
M-equations with positive right-hand sides, and also propose several iterative
methods for computing the positive solutions.
We would like to thank our collaborator Prof. Liqun Qi of the Hong Kong
Polytechnic University, who led us into the research of tensor spectral theory
and always encourages us to explore the topic. We would also like to thank
Prof. Eric King-wah Chu of Monash University and Prof. Sanzheng Qiao of
McMaster University, who read this book carefully and provided feedback during
the writing process.
This work was supported by the National Natural Science Foundation of
China under Grant 11271084, School of Mathematical Sciences and Key Labo-
ratory of Mathematics for Nonlinear Sciences, Fudan University.
Chapter 1

Introduction and
Preliminaries

We first introduce the concepts and sources of tensors in this chapter. Several
essential and frequently used operations involving tensors are also included. Fur-
thermore, two basic topics, tensor decompositions and tensor eigenvalue prob-
lems, are briefly discussed at the end of this chapter.

1.1 What Are Tensors?


The term tensor or hypermatrix in this book refers to a multiway array. The
number of the dimensions of a tensor is called its order; that is, A = (a_{i1 i2 ... im}) is an mth-order tensor. In particular, a scalar is a 0th-order tensor, a vector is a 1st-order tensor, and a matrix is a 2nd-order tensor. Like other mathematical concepts, the tensor or hypermatrix is abstracted from real-world phenomena and
other scientific theories. Where do the tensors arise? What kinds of properties
do we care most? How many different types of tensors do we have? We will
briefly answer these questions employing several illustrative examples in this
section.

Example 1.1. As we know, a table is one of the most common realizations of


a matrix. We can also understand tensors or hypermatrices as complex tables
with multivariables. For instance, if we record the scores of 4 students on 3
subjects for both the midterm and final exams, then we can design a 3rd -order
tensor S of size 4 × 3 × 2 whose (i, j, k) entry s_{ijk} denotes the score of the i-th student on the j-th subject in the k-th exam. This representation is natural
and easily understood, thus it is a convenient data structure for construction
and query. However, when we need to print the information on a piece of paper,
the 3D structure is apparently not suitable for 2D visualization. Thus we need
to unfold the cubic tensor into a matrix. The following two different unfoldings
of the same tensor both include all the information in the original complex
table. We can see from the two tables that their entries are the same up to a
permutation. Actually, there are many different ways to unfold a higher-order
tensor into a matrix, and the linkages between them are permutations of indices.

Theory and Computation of Tensors. 3


http://dx.doi.org/10.1016/B978-0-12-803953-3.50001-0
Copyright © 2016 Elsevier Ltd. All rights reserved.

Sub. 1 Sub. 2 Sub. 3


Mid Final Mid Final Mid Final
Std. 1 s111 s112 s121 s122 s131 s132
Std. 2 s211 s212 s221 s222 s231 s232
Std. 3 s311 s312 s321 s322 s331 s332
Std. 4 s411 s412 s421 s422 s431 s432

Table 1.1: The first way to print S.

Mid Final
Sub. 1 Sub. 2 Sub. 3 Sub. 1 Sub. 2 Sub. 3
Std. 1 s111 s121 s131 s112 s122 s132
Std. 2 s211 s221 s231 s212 s222 s232
Std. 3 s311 s321 s331 s312 s322 s332
Std. 4 s411 s421 s431 s412 s422 s432

Table 1.2: The second way to print S.
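The two printed tables are simply two reshapings of the same array. A minimal numpy sketch (with hypothetical random scores) reproduces both unfoldings:

```python
import numpy as np

# Hypothetical scores: s[i, j, k] is the score of student i on subject j
# in exam k (k = 0 for midterm, k = 1 for final), as in the 4 x 3 x 2 tensor S.
rng = np.random.default_rng(0)
S = rng.integers(60, 100, size=(4, 3, 2))

# Table 1.1: one row per student, columns grouped by subject, then exam.
# Entry (i, j, k) lands in row i, column 2*j + k.
table1 = S.reshape(4, 6)

# Table 1.2: one row per student, columns grouped by exam, then subject.
# Entry (i, j, k) lands in row i, column 3*k + j.
table2 = S.transpose(0, 2, 1).reshape(4, 6)
```

The two tables hold the same 24 entries up to a column permutation, matching the observation in the text.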

Example 1.2. Another important realization of tensors is the storage of color
images and videos. A black-and-white image can be stored as a greyscale matrix,
whose entries are the greyscale values of the corresponding pixels. Color images
are often built from several stacked color channels, each of which represents
value levels of the given channel. For example, RGB images are composed of
three independent channels for red, green, and blue primary color components.
We can apply a 3rd -order tensor P to store an RGB image, whose (i, j, k) entry
denotes the value of the k-th channel in the (i, j) position. (k = 1, 2, 3 represent
the red, green, and blue channel, respectively.) In order to store a color video,
we may need an extra index for the time axis. That is, we employ a 4th -order
tensor M = (mijkt ), where M(:, :, :, t) stores the t-th frame of the video as a
color image.
Example 1.3. Denote x = (x_1, x_2, ..., x_n)^T ∈ R^n. As we know, a degree-1 polynomial p_1(x) = c_1 x_1 + c_2 x_2 + · · · + c_n x_n can be rewritten as p_1(x) = x^T c, where the vector c = (c_1, c_2, ..., c_n)^T. Similarly, a degree-2 polynomial p_2(x) = Σ_{i,j=1}^n c_{ij} x_i x_j, that is, a quadratic form, can be simplified into p_2(x) = x^T C x, where the matrix C = (c_{ij}). By analogy, if we denote an mth-order tensor C = (c_{i1 i2 ... im}) and apply a notation, which will be introduced in the next section, then the degree-m homogeneous polynomial

    p_m(x) = Σ_{i1=1}^n Σ_{i2=1}^n · · · Σ_{im=1}^n c_{i1 i2 ... im} x_{i1} x_{i2} · · · x_{im}

can be rewritten as

    p_m(x) = C x^m.

Moreover, x^T c = 0 is often used to denote a hyperplane in R^n. Similarly, C x^m = 0 can stand for a degree-m hypersurface in R^n. We shall see in Section 1.2 that the normal vector at a point x_0 on this hypersurface is n_{x0} = C x_0^{m−1}.

Example 1.4. The Taylor expansion is a well-known mathematical tool. The


Taylor series of a real- or complex-valued function f(x) that is infinitely differentiable at a real or complex number a is the power series

    f(x) = Σ_{m=0}^∞ (1/m!) f^(m)(a) (x − a)^m.

A multivariate function f(x_1, x_2, ..., x_n) that is infinitely differentiable at a point (a_1, a_2, ..., a_n) also has its Taylor expansion

    f(x_1, ..., x_n) = Σ_{i1=0}^∞ · · · Σ_{in=0}^∞ [(x_1 − a_1)^{i1} · · · (x_n − a_n)^{in} / (i_1! · · · i_n!)] ∂^{i1+···+in} f(a_1, ..., a_n) / (∂x_1^{i1} · · · ∂x_n^{in}),

which is equivalent to

    f(x_1, ..., x_n) = f(a_1, ..., a_n) + Σ_{i=1}^n (∂f(a_1, ..., a_n)/∂x_i)(x_i − a_i)
        + (1/2!) Σ_{i=1}^n Σ_{j=1}^n (∂^2 f(a_1, ..., a_n)/(∂x_i ∂x_j))(x_i − a_i)(x_j − a_j)
        + (1/3!) Σ_{i=1}^n Σ_{j=1}^n Σ_{k=1}^n (∂^3 f(a_1, ..., a_n)/(∂x_i ∂x_j ∂x_k))(x_i − a_i)(x_j − a_j)(x_k − a_k) + · · · .

Denoting x = (x_1, x_2, ..., x_n)^T and a = (a_1, a_2, ..., a_n)^T, we can rewrite the second and the third terms in the above equation as

    (x − a)^T ∇f(a)   and   (x − a)^T ∇^2 f(a) (x − a),

where ∇f(a) and ∇^2 f(a) are the gradient and the Hessian of f(x) at a, respectively. If we define the mth-order gradient tensor ∇^m f(a) of f(x) at a by

    (∇^m f(a))_{i1 i2 ... im} = ∂^m f(a_1, a_2, ..., a_n) / (∂x_{i1} ∂x_{i2} · · · ∂x_{im}),

then the Taylor expansion of a multivariate function can also be expressed as

    f(x) = Σ_{m=0}^∞ (1/m!) ∇^m f(a) (x − a)^m.
Example 1.5. The discrete-time Markov chain is one of the most important
models of random processes [10, 28, 81, 114], which assumes that the future is
dependent solely on the finite past. This model is so simple and natural that
Markov chains are widely employed in many disciplines, such as thermodynam-
ics, statistical mechanics, queueing theory, web analysis, economics, and finance.
An sth-order Markov chain is a stochastic process with the Markov property, that is, a sequence of variables {Y_t}_{t=1}^∞ satisfying

    Pr(Y_t = i_1 | Y_{t−1} = i_2, ..., Y_1 = i_t) = Pr(Y_t = i_1 | Y_{t−1} = i_2, ..., Y_{t−s} = i_{s+1})

for all t > s. That is, any state depends solely on the immediate past s states. In particular, when the step length s = 1, the sequence {Y_t}_{t=1}^∞ is a standard first-order Markov chain. Define the transition probability matrix P of a first-order Markov chain by

    p_{ij} = Pr(Y_t = i | Y_{t−1} = j),

which is a stochastic matrix, that is, p_{ij} ≥ 0 for all i, j = 1, 2, ..., n and Σ_{i=1}^n p_{ij} = 1 for all j = 1, 2, ..., n. The probability distribution of Y_t is denoted by a vector

    (x_t)_i = Pr(Y_t = i).

Then the Markov chain is modeled by x_{t+1} = P x_t; thus the stationary probability distribution x satisfies x = P x and is exactly an eigenvector of the transition probability matrix corresponding to the eigenvalue 1.
For higher-order Markov chains, we have similar formulations. Take a second-order Markov chain, that is, s = 2, as an example. Define the transition probability tensor P of a second-order Markov chain by

    p_{ijk} = Pr(Y_t = i | Y_{t−1} = j, Y_{t−2} = k),

which is a stochastic tensor, that is, P ≥ 0 and Σ_{i=1}^n p_{ijk} = 1 for all j, k = 1, 2, ..., n. The probability distribution of Ỹ_t in the product space can be reshaped into a matrix

    (X_t)_{i,j} = Pr(Y_t = i, Y_{t−1} = j).

Then the stationary probability distribution in the product space X satisfies

    X(i, j) = Σ_{k=1}^n P(i, j, k) · X(j, k)

for all i, j = 1, 2, ..., n. If we further assume that

    Pr(Y_t = i, Y_{t−1} = j) = Pr(Y_t = i) · Pr(Y_{t−1} = j)

for all i, j = 1, 2, ..., n and denote (x_t)_i = Pr(Y_t = i), that is, X_t = x_t x_t^T, then the stationary probability distribution x satisfies x = P x^2 [74]. We shall see in Section 1.2 that x is a special eigenvector of the tensor P.

From the above examples, we can gain some basic ideas about what tensors
are and where they come from. Generally speaking, there are two kinds of
tensors: the first kind is a data structure, which admits different dimensions
according to the complexity of the data; the second kind is an operator, where
it possesses different meanings in different situations.

1.2 Basic Operations


We first introduce several basic tensor operations that will be frequently referred
to in the book. One of the difficulties of tensor research is the complicated
indices. Therefore we often use some small-size examples rather than exact
definitions in this section to describe those essential concepts more clearly. For
more detailed definitions, we refer to Chapter 12 in [49] and the references
[18, 66, 98].

• To treat or visualize the multidimensional structures, we often reshape a


higher-order tensor into a vector or a matrix, which are more familiar to

us. The vectorization operator vec(·) turns tensors into column vectors.
Take a 2 × 2 × 2 tensor A = (a_{ijk}), i, j, k = 1, 2, for example; then

vec(A) = (a111 , a211 , a121 , a221 , a112 , a212 , a122 , a222 )> .

There are a lot of different ways to reshape tensors into matrices, which
are often referred to as “unfoldings.” The most frequently applied one
is called the modal unfolding. The mode-k unfolding A_(k) of an mth-order tensor A of size n_1 × n_2 × · · · × n_m is an n_k-by-(N/n_k) matrix, where N = n_1 n_2 · · · n_m. Again, use the above 2 × 2 × 2 example. Its mode-1,
mode-2, and mode-3 unfoldings are

    A_(1) = [ a111  a121  a112  a122
              a211  a221  a212  a222 ],

    A_(2) = [ a111  a211  a112  a212
              a121  a221  a122  a222 ],

    A_(3) = [ a111  a211  a121  a221
              a112  a212  a122  a222 ],
respectively. Sometimes the mode-k unfolding is also denoted as Unfoldk (·).
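These reshapings are easy to script. A numpy sketch (the helper name `unfold` is our own), using Fortran order so that the first remaining index varies fastest, exactly as in vec(·) and the unfoldings above:

```python
import numpy as np

# The 2 x 2 x 2 tensor A = (a_ijk), stored so that A[i-1, j-1, k-1] = a_ijk,
# with each entry encoding its own 1-based index for easy checking.
A = np.array([[[111, 112], [121, 122]],
              [[211, 212], [221, 222]]])

# vec(.): stack entries with the first index varying fastest
# (column-major, i.e. Fortran order).
vecA = A.flatten(order='F')

def unfold(T, k):
    """Mode-k unfolding (1-based k): bring mode k to the front, then
    flatten the remaining modes with the earlier ones varying fastest."""
    T = np.moveaxis(T, k - 1, 0)
    return T.reshape(T.shape[0], -1, order='F')
```

For this A, `vecA` and `unfold(A, 1)`, `unfold(A, 2)`, `unfold(A, 3)` reproduce exactly the vectorization and the three modal unfoldings displayed above.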
• The transposition operation of a matrix is understood as the exchange
of the two indices. But higher-order tensors have more indices; thus we have many more transpositions of tensors. If A is a 3rd-order tensor, then there are six possible transpositions, denoted A^{<[σ(1),σ(2),σ(3)]>}, where (σ(1), σ(2), σ(3)) is any of the six permutations of (1, 2, 3). When B = A^{<[σ(1),σ(2),σ(3)]>}, it means

    b_{iσ(1) iσ(2) iσ(3)} = a_{i1 i2 i3}.

If all the entries of a tensor are invariant under any permutations of the
indices, then we call it a symmetric tensor. For example, a 3rd -order tensor
is said to be symmetric if and only if

A<[1,2,3]> = A<[1,3,2]> = A<[2,1,3]> = A<[2,3,1]> = A<[3,1,2]> = A<[3,2,1]> .

• Modal tensor-matrix multiplications are essential in this book, which are


generalizations of matrix-matrix multiplications. Let A be an mth-order tensor of size n_1 × n_2 × · · · × n_m and M be a matrix of size n_k × n_k'; then the mode-k product A ×_k M of the tensor A and the matrix M is another mth-order tensor of size n_1 × · · · × n_k' × · · · × n_m with

    (A ×_k M)_{i1 ... ik−1 jk ik+1 ... im} = Σ_{ik=1}^{nk} a_{i1 ... ik−1 ik ik+1 ... im} · m_{ik jk}.

Particularly, if A, M_1, and M_2 are all matrices, then A ×_1 M_1 ×_2 M_2 = M_1^T A M_2. One can easily verify that the tensor-matrix multiplications satisfy

1. A ×_k M_k ×_l M_l = A ×_l M_l ×_k M_k, if k ≠ l,
2. A ×_k M_1 ×_k M_2 = A ×_k (M_1 M_2),
3. A ×_k (α_1 M_1 + α_2 M_2) = α_1 A ×_k M_1 + α_2 A ×_k M_2,
4. Unfold_k(A ×_k M) = M^T A_(k),
5. vec(A ×_1 M_1 ×_2 M_2 · · · ×_m M_m) = (M_m ⊗ · · · ⊗ M_2 ⊗ M_1)^T vec(A),

where A is a tensor, M_k are matrices, and α_1, α_2 are scalars.
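A mode-k product routine and a numerical check of the unfolding identity Unfold_k(A ×_k M) = M^T A_(k). This is a sketch with our own function names, assuming the Fortran-order unfolding convention used earlier in this section:

```python
import numpy as np

def unfold(T, k):
    """Mode-k unfolding (1-based k), first remaining index varying fastest."""
    T = np.moveaxis(T, k - 1, 0)
    return T.reshape(T.shape[0], -1, order='F')

def mode_product(A, M, k):
    """Mode-k product A x_k M: contract mode k of A with the FIRST index of M,
    as in (A x_k M)_{... j_k ...} = sum_{i_k} a_{... i_k ...} m_{i_k j_k}."""
    return np.moveaxis(np.tensordot(A, M, axes=(k - 1, 0)), -1, k - 1)

rng = np.random.default_rng(2)
A = rng.random((3, 4, 5))
M = rng.random((4, 6))          # n_2 = 4 in, n_2' = 6 out

B = mode_product(A, M, 2)       # a 3 x 6 x 5 tensor
```

With these definitions one can also verify, e.g., property 2: multiplying twice along the same mode equals multiplying once by the product of the two matrices.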
• If the matrices degrade into column vectors, then we obtain another cluster of important notations for tensor spectral theory. Let A be an mth-order n-dimensional tensor, that is, of size n × n × · · · × n, and x be a vector of length n; then for simplicity:

    A x^m     = A ×_1 x ×_2 x ×_3 x · · · ×_m x   is a scalar,
    A x^{m−1} = A ×_2 x ×_3 x · · · ×_m x         is a vector,
    A x^{m−2} = A ×_3 x · · · ×_m x               is a matrix.
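With einsum, these contractions are one-liners; a sketch for m = 3 (the index string generalizes to higher orders):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.random((n, n, n))      # an mth-order n-dimensional tensor with m = 3
x = rng.random(n)

Axm  = np.einsum('ijk,i,j,k->', A, x, x, x)   # A x^m      : a scalar
Axm1 = np.einsum('ijk,j,k->i', A, x, x)       # A x^{m-1}  : a vector
Axm2 = np.einsum('ijk,k->ij', A, x)           # A x^{m-2}  : a matrix
```

Contracting one more mode at a time links the three: A x^{m−1} = (A x^{m−2}) x and A x^m = x^T (A x^{m−1}).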

• Like the vector case, an inner product of two tensors A and B of the same size is defined by

    ⟨A, B⟩ = Σ_{i1=1}^{n1} · · · Σ_{im=1}^{nm} a_{i1 i2 ... im} · b_{i1 i2 ... im},

which is exactly the usual inner product of the two vectors vec(A) and vec(B).
• The outer product of two tensors is a higher-order tensor. Let A and B be mth-order and (m')th-order tensors, respectively. Then their outer product A ∘ B is an (m + m')th-order tensor with

    (A ∘ B)_{i1 ... im j1 ... jm'} = a_{i1 i2 ... im} · b_{j1 j2 ... jm'}.

If a and b are vectors, then a ∘ b = a b^T.
• We sometimes refer to the Hadamard product of two tensors of the same size,

    (A ⊛ B)_{i1 i2 ... im} = a_{i1 i2 ... im} · b_{i1 i2 ... im}.

The Hadamard product will also be denoted as A .* B in the descriptions of some algorithms, which is a MATLAB-type notation.
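All three products have direct numpy counterparts; a sketch with arbitrary hypothetical shapes:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((2, 3, 4))
B = rng.random((2, 3, 4))
C = rng.random((5, 6))

inner = np.sum(A * B)              # <A, B>: the inner product of vec(A), vec(B)
outer = np.multiply.outer(A, C)    # A o C: a tensor of order 3 + 2 = 5
hadamard = A * B                   # entrywise (Hadamard) product, same size
```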

1.3 Tensor Decompositions


Given a tensor, how can we retrieve the information hidden inside? One reason-
able answer is the tensor decomposition approach. Existing tensor decomposi-
tions include the Tucker-type decompositions, the CANDECOMP/PARAFAC
(CP) decomposition, tensor train representation, etc. For those readers inter-
ested in tensor decompositions, we recommend the survey papers [50, 66]. Some
tensor decompositions are generalizations of the singular value decomposition
(SVD) [49].
The SVD is one of the most important tools for matrix analysis and compu-
tation. Any matrix A ∈ Rm×n with rank r has the decomposition
A = U ΣV > ,

where U ∈ R^{m×r} and V ∈ R^{n×r} are column-orthogonal matrices, that is, U^T U = I_r and V^T V = I_r, and Σ = diag(σ_1, σ_2, ..., σ_r) is a positive diagonal matrix. Then the σ_k are called the singular values of A. If U = [u_1, u_2, ..., u_r] and V = [v_1, v_2, ..., v_r], then the SVD can be rewritten as

    A = Σ_{i=1}^r σ_i u_i v_i^T,

which represents the matrix A as a sum of several rank-one matrices. For an


arbitrary nonsingular matrix M ∈ R^{r×r}, denote L = U Σ M^T ∈ R^{m×r} and R = V M^{−1} ∈ R^{n×r}. Then

    A = L R^T

is a low-rank decomposition if r is much smaller than m and n. These three equivalent formulas of the SVD all extend to higher-order tensors.
Let A be an mth-order tensor of size n_1 × n_2 × · · · × n_m. A Tucker-type decomposition of the tensor A has the form

    A = S ×_1 U_1^T ×_2 U_2^T · · · ×_m U_m^T,

where S is an mth-order tensor of size r × r × · · · × r, called the core tensor, and U_k ∈ R^{n_k × r} for k = 1, 2, ..., m. Note that the first formula A = U Σ V^T of the SVD can be rewritten into this form: A = Σ ×_1 U^T ×_2 V^T. If there
are no restrictions on S and Uk , then we have an infinite number of Tucker-
type decompositions, most of which are no more informative. The higher-order
singular value decomposition (HOSVD), proposed in [33, 35], is a special Tucker-
type decomposition, which satisfies that
• the core S is all-orthogonal (⟨S_{i_k=α}, S_{i_k=β}⟩ = 0 for all k = 1, 2, ..., m and α ≠ β) and ordered (‖S_{i_k=1}‖_F ≥ ‖S_{i_k=2}‖_F ≥ · · · ≥ ‖S_{i_k=n_k}‖_F for all k = 1, 2, ..., m); and
• the matrices U_k are column-orthogonal.
Actually, the matrix U_k consists of the left singular vectors of the mode-k unfolding A_(k), and ‖S_{i_k=1}‖_F, ‖S_{i_k=2}‖_F, ..., ‖S_{i_k=n_k}‖_F are exactly the singular values of A_(k). One can easily verify that if a matrix is all-orthogonal, then it must be a diagonal matrix. Thus the 2nd-order version of the HOSVD is exactly the SVD.
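A compact HOSVD sketch: each factor U_k comes from the SVD of the mode-k unfolding, and the core is obtained by contracting each mode of A against the corresponding U_k (equivalently, multiplying A_(k) by U_k^T). Function names and the Fortran-order unfolding are our own conventions:

```python
import numpy as np

def unfold(T, k):
    T = np.moveaxis(T, k - 1, 0)
    return T.reshape(T.shape[0], -1, order='F')

def hosvd(A):
    """Full HOSVD: U_k = left singular vectors of A_(k); the core S is A
    with every mode k contracted against U_k (i.e. multiplied by U_k^T)."""
    m = A.ndim
    Us = [np.linalg.svd(unfold(A, k))[0] for k in range(1, m + 1)]
    S = A
    for k in range(m):
        # contract mode k of S with the first index of U_k
        S = np.moveaxis(np.tensordot(S, Us[k], axes=(k, 0)), -1, k)
    return S, Us

rng = np.random.default_rng(5)
A = rng.random((3, 4, 5))
S, Us = hosvd(A)

# Reconstruct A by multiplying each mode of S back by U_k.
R = S
for k in range(3):
    R = np.moveaxis(np.tensordot(R, Us[k].T, axes=(k, 0)), -1, k)
```

For this random A, the mode-1 slices of the core come out mutually orthogonal with decreasing Frobenius norms, illustrating the all-orthogonality and ordering conditions above.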
Nevertheless, the core tensor S of the HOSVD is never sparse in the higher-order cases, and is thus complicated and obscure. If we restrict the core tensor to be diagonal, that is, all the entries except s_{ii...i} (i = 1, 2, ..., r) are zero, then we may relax the restriction on the number of terms r. Denote the diagonal entries of S as σ_1, σ_2, ..., σ_r, and U_k = [u_{k1}, u_{k2}, ..., u_{kr}] for all k = 1, 2, ..., m. The CP decomposition of the tensor A is referred to as

    A = Σ_{i=1}^r σ_i u_{1i} ∘ u_{2i} ∘ · · · ∘ u_{mi},

where r might be larger than n. Each term u_{1i} ∘ u_{2i} ∘ · · · ∘ u_{mi} in the CP decomposition is called a rank-one tensor. The least number of terms, that is,

    R = min{ r : A = Σ_{i=1}^r σ_i u_{1i} ∘ u_{2i} ∘ · · · ∘ u_{mi} },

is called the CP-rank of the tensor A. The computation or estimation of the


CP-ranks of higher-order tensors is NP-hard [54], and still a hard task in tensor
research.
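Given factors, assembling the CP sum of rank-one terms is a one-line einsum (sizes and factor matrices below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 4, 2                       # dimension and number of rank-one terms

sigma = np.array([3.0, 1.5])      # diagonal-core weights sigma_i
U1 = rng.random((n, r))           # columns u_{1i}
U2 = rng.random((n, r))           # columns u_{2i}
U3 = rng.random((n, r))           # columns u_{3i}

# A = sum_{i=1}^r sigma_i * u_{1i} o u_{2i} o u_{3i}
A = np.einsum('i,ai,bi,ci->abc', sigma, U1, U2, U3)
```

Constructing such an A shows its CP-rank is at most r = 2; deciding the exact rank is the NP-hard part.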
The tensor train (TT) format [92, 91] for higher-order tensors extends the low-rank decomposition of matrices. Let A be an mth-order tensor. Then it can be expressed in the following format:

    a_{i1 i2 ... im} = Σ_{k1=1}^{r1} · · · Σ_{k_{m−1}=1}^{r_{m−1}} (G_1)_{i1 k1} (G_2)_{k1 i2 k2} · · · (G_{m−1})_{k_{m−2} i_{m−1} k_{m−1}} (G_m)_{k_{m−1} im}

for all i_1, i_2, ..., i_m, where G_1 and G_m are matrices and G_2, ..., G_{m−1} are 3rd-order tensors. If r_1, r_2, ..., r_{m−1} are much smaller than the tensor size, then the TT representation greatly reduces the storage cost of the tensor.
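Each entry of a TT tensor is a product of small matrix slices, one per index. A sketch for m = 4 (core shapes and names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
n, r = 3, 2                        # mode size and TT-ranks r_1 = r_2 = r_3 = r

G1 = rng.random((n, r))            # first core: a matrix
G2 = rng.random((r, n, r))         # middle cores: 3rd-order tensors
G3 = rng.random((r, n, r))
G4 = rng.random((r, n))            # last core: a matrix

def tt_entry(i1, i2, i3, i4):
    """a_{i1 i2 i3 i4}: a row vector times a chain of r x r slices."""
    return G1[i1] @ G2[:, i2] @ G3[:, i3] @ G4[:, i4]

# Assembling the full tensor needs n^4 numbers; the cores store only
# O(m n r^2) numbers.
A = np.einsum('ia,ajb,bkc,cl->ijkl', G1, G2, G3, G4)
```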

1.4 Tensor Eigenvalue Problems


We have introduced the tensor-vector products in the previous sections. Given
an mth-order n-dimensional square tensor A, notice that x ↦ A x^{m−1} is a nonlinear operator from C^n to itself. Thus we should consider some characteristic values of this operator. To this end, we can define several tensor eigenvalue problems.
First, we introduce a homogeneous tensor eigenvalue problem. If a scalar λ ∈ C and a nonzero vector x ∈ C^n satisfy

    A x^{m−1} = λ x^{[m−1]},

where x^{[m−1]} = [x_1^{m−1}, x_2^{m−1}, ..., x_n^{m−1}]^T, then λ is called an eigenvalue of A and x is called a corresponding eigenvector. Furthermore, when the tensor A, the scalar λ, and the vector x are all real, we call λ an H-eigenvalue of A and x a corresponding H-eigenvector. For a real symmetric tensor A, its H-eigenvectors are exactly the KKT points of the polynomial optimization problem

    max/min  A x^m,
    s.t.     x_1^m + x_2^m + · · · + x_n^m = 1.

This was the original motivation when Qi [98] and Lim [76] first introduced this
kind of eigenvalue problem.
We also have other, nonhomogeneous tensor eigenvalue definitions. For example, if a scalar λ ∈ C and a nonzero vector x ∈ C^n satisfy

    A x^{m−1} = λ x  with  x^T x = 1,

then λ is called an E-eigenvalue of A and x is called a corresponding E-eigenvector [99]. Furthermore, when the tensor A, the scalar λ, and the vector x are all real, we call λ a Z-eigenvalue of A and x a corresponding Z-eigenvector. For a real symmetric tensor A, its Z-eigenvectors are exactly the KKT points of the polynomial optimization problem

    max/min  A x^m,
    s.t.     x_1^2 + x_2^2 + · · · + x_n^2 = 1.
We will introduce more definitions of tensor eigenvalues in Chapter 2, which
can be unified into a generalized tensor eigenvalue problem A x^{m−1} = λ B x^{m−1}.
Further discussions will be conducted on this unified framework.
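For a diagonal tensor the definitions above can be checked by hand: if a_{ii...i} = d_i and all other entries vanish, then (A x^{m−1})_i = d_i x_i^{m−1}, so every unit coordinate vector e_i is both an H-eigenvector and a Z-eigenvector with eigenvalue d_i. A small numerical check for m = 4 (an illustrative example of ours, not from the text):

```python
import numpy as np

n, m = 3, 4
d = np.array([2.0, -1.0, 0.5])

# A diagonal mth-order tensor: a_{iiii} = d_i, all other entries zero.
A = np.zeros((n,) * m)
for i in range(n):
    A[i, i, i, i] = d[i]

def Axm1(x):
    """A x^{m-1} for this 4th-order tensor."""
    return np.einsum('ijkl,j,k,l->i', A, x, x, x)

x = np.array([0.3, -0.8, 1.1])
e1 = np.eye(n)[1]                  # the unit vector e_2 (0-based index 1)
```

Since e_1^{[m−1]} = e_1 and e_1^T e_1 = 1, the pair (d_2, e_2) satisfies both the H-eigenvalue and the Z-eigenvalue equations.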
Chapter 2

Generalized Tensor
Eigenvalue Problems

Generalized matrix eigenvalue problems are essential in scientific and engineer-


ing computations. The generalized eigenvalue formula Ax = λBx is also a
very pervasive model, which covers the matrix polynomial eigenvalue problems.
There has been extensive study of both the theories and the computations of
generalized matrix eigenvalue problems, and one can refer to [6, 49, 113] for
more details. Applying the notation introduced in Chapter 1, the generalized tensor eigenvalue problem refers to finding λ and x satisfying

    A x^{m−1} = λ B x^{m−1}.

The detailed definition will be introduced later.

2.1 A Unified Framework

Since Qi [98] and Lim [76] proposed the definition of tensor eigenvalues independently in 2005, a great deal of mathematical effort has been devoted to the study of the eigenproblems of single tensors, especially for nonnegative tensors [18, 19]. Nevertheless, there has been much less specific research on generalized eigenproblems since Chang et al. [19, 20] introduced the generalized tensor eigenvalues. Recently, Kolda and Mayo [68], Cui et al. [30], and Chen et al. [25] proposed numerical algorithms for generalized tensor eigenproblems. Kolda and Mayo [68] applied an adaptive shifted power method by solving the optimization problem

    max_{‖x‖=1}  Ax^m / Bx^m.
In Cui, Dai, and Nie's paper [30], they impose the constraint λ_{k+1} < λ_k − δ when the k-th eigenvalue is obtained, so that the (k+1)-th eigenvalue can also be computed by the same method as the previous ones. Chen, Han, and Zhou [25] proposed homotopy methods for generalized tensor eigenvalue problems.
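For the single-tensor H-eigenproblem Ax^{m-1} = λx^{[m-1]} that these methods generalize, the basic power-type iteration can be sketched as follows. This is only the plain fixed-point idea in the spirit of the Ng–Qi–Zhou iteration for tensors with positive entries; the adaptive shifts of [68] and all stopping rules are omitted, and the helper names are ours:

```python
import numpy as np

def tvp(A, x):
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x
    return out

def power_method(A, iters=500):
    """Fixed-point iteration for A x^{m-1} = lambda x^{[m-1]}, A entrywise positive."""
    m, n = A.ndim, A.shape[0]
    x = np.ones(n) / n
    for _ in range(iters):
        y = tvp(A, x)
        x = y ** (1.0 / (m - 1))   # componentwise (m-1)-st root
        x /= x.sum()               # renormalize
    y = tvp(A, x)
    lam = (y / x ** (m - 1)).mean()  # the ratios agree at convergence
    return lam, x

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, size=(4, 4, 4))   # strictly positive test tensor
lam, x = power_method(A)
res = tvp(A, x) - lam * x ** (A.ndim - 1)
print(lam, np.linalg.norm(res))
```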
It was pointed out in [19, 20, 30, 68] that the generalized eigenvalue framework unifies several definitions of tensor eigenvalues, such as the eigenvalues and the H-eigenvalues [98], that is,

    Ax^{m-1} = λx^{[m-1]}, where x^{[m-1]} = [x_1^{m-1}, x_2^{m-1}, . . . , x_n^{m-1}]^⊤,

the E-eigenvalues and the Z-eigenvalues [99], that is,

    Ax^{m-1} = λx with x^⊤x = 1,

and the D-eigenvalues [103], that is,

    Ax^{m-1} = λDx with x^⊤Dx = 1,

where D is a positive definite matrix. We shall present several other potential applications of generalized tensor eigenvalue problems.

Theory and Computation of Tensors. http://dx.doi.org/10.1016/B978-0-12-803953-3.50002-2
Copyright © 2016 Elsevier Ltd. All rights reserved.
The first example is the higher-order Markov chain [28]. Li and Ng [74] reformulated an (m−1)st-order Markov chain problem into searching for a nonnegative vector x̃ satisfying P x̃^{m-1} = x̃ with x̃_1 + x̃_2 + · · · + x̃_n = 1, where P is the transition probability tensor of the Markov chain. Let E be a tensor such that

    (Ex^{m-1})_i = x_i (x_1 + x_2 + · · · + x_n)^{m-2}   (i = 1, 2, . . . , n)

for an arbitrary vector x = (x_1, x_2, . . . , x_n)^⊤. Thus a nonnegative vector x̃ that satisfies the homogeneous equality P x̃^{m-1} = E x̃^{m-1} is exactly what is required.
The US-eigenvalue [85] from quantum information processing is another special case of the generalized tensor eigenvalue. Let S be an mth-order n-dimensional symmetric tensor. Then the US-eigenvalues are defined by

    S̄x^{m-1} = λx̄,
    S x̄^{m-1} = λx,   with ‖x‖_2 = 1.

Denote by S̃ the mth-order (2n)-dimensional tensor with

    S̃(1:n, . . . , 1:n) = S̄ and S̃(n+1:2n, . . . , n+1:2n) = S,

and let x̃ = (x^⊤, x̄^⊤)^⊤. When m is even, let J be a tensor such that

    (J y^{m-1})_i = y_{i+n} (y_1 y_{n+1} + y_2 y_{n+2} + · · · + y_n y_{2n})^{m/2−1},   i = 1, 2, . . . , n,
    (J y^{m-1})_i = y_{i−n} (y_1 y_{n+1} + y_2 y_{n+2} + · · · + y_n y_{2n})^{m/2−1},   i = n+1, . . . , 2n,

for an arbitrary vector y ∈ C^{2n}. Noticing that x̃_1 x̃_{n+1} + x̃_2 x̃_{n+2} + · · · + x̃_n x̃_{2n} = 1, we rewrite the definition of the US-eigenvalue as S̃x̃^{m-1} = λJ x̃^{m-1}.
Another potential application is from multilabel learning, where hypergraphs are naturally involved [116]. The Laplacian tensor of a hypergraph is widely studied in [29, 58, 73]. Denote L as the Laplacian tensor of the hypergraph induced by the classification structures. Then Lx^m presents the clustering score of the data points {x_1, x_2, . . . , x_n} when m is even. Borrowing the idea of graph embedding [127], we can derive a framework of multilabel learning

    max  Lx^m,
    s.t.  L_p x^m = 1,

where L_p is the Laplacian tensor of a penalty hypergraph, which removes some trivial relations. Employing the method of Lagrange multipliers, we can transform the optimization problem into a generalized tensor eigenvalue problem Lx^{m-1} = λL_p x^{m-1}.
Moreover, consider a homogeneous polynomial dynamical system [43],

    dBu(t)^{m-1}/dt = Au(t)^{m-1}.

We explored the stability of the above system in [43]. Similarly to the linear case, if we require a solution of the form u(t) = x · e^{λt}, then a generalized tensor eigenvalue problem Ax^{m-1} = (m−1)λBx^{m-1} arises naturally from this homogeneous system.

The generalized tensor eigenvalue problem is an interesting topic, although its computational solution is not yet well developed. Therefore we aim at investigating generalized tensor eigenvalue problems theoretically, especially their perturbation properties, as a prerequisite for the numerical computations in [68].

2.2 Basic Definitions

We first introduce some concepts and notation. Let K^{1,2} denote a projective plane [113], in which (α_1, β_1), (α_2, β_2) ∈ K × K are regarded as the same point if there is a nonzero scalar γ ∈ K such that (α_1, β_1) = (γα_2, γβ_2). We can take K to be the complex field C, the real field R, or the nonnegative number cone R_+.
The determinant of a tensor was investigated by Qi et al. [98, 57]. The determinant of an mth-order n-dimensional tensor A is the resultant [31] of the system of homogeneous equations Ax^{m-1} = 0, which is also the unique polynomial in the entries of A satisfying:

• det(A) = 0 if and only if Ax^{m-1} = 0 has a nonzero solution;
• det(I) = 1, where I is the unit tensor;
• det(A) is an irreducible polynomial in the entries of A.

Furthermore, det(A) is homogeneous of degree n(m−1)^{n−1}. As a simple example in [98], the determinant of a 2 × 2 × 2 tensor is defined by

    det(A) = det [ a_{111}   a_{112} + a_{121}   a_{122}             0       ]
                 [ 0         a_{111}             a_{112} + a_{121}   a_{122} ]
                 [ a_{211}   a_{212} + a_{221}   a_{222}             0       ]
                 [ 0         a_{211}             a_{212} + a_{221}   a_{222} ] .
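This 2 × 2 × 2 formula is easy to check numerically: the rows of the 4 × 4 matrix form the Sylvester resultant of the two quadratics in Ax² = 0. A sketch (the helper name det_222 and the test tensors are ours):

```python
import numpy as np

def det_222(A):
    """Sylvester resultant of the system A x^2 = 0 for a 2x2x2 tensor A."""
    r1 = [A[0, 0, 0], A[0, 0, 1] + A[0, 1, 0], A[0, 1, 1]]  # 1st quadratic's coefficients
    r2 = [A[1, 0, 0], A[1, 0, 1] + A[1, 1, 0], A[1, 1, 1]]  # 2nd quadratic's coefficients
    S = np.array([[r1[0], r1[1], r1[2], 0.0],
                  [0.0,   r1[0], r1[1], r1[2]],
                  [r2[0], r2[1], r2[2], 0.0],
                  [0.0,   r2[0], r2[1], r2[2]]])
    return np.linalg.det(S)

# det(I) = 1 for the unit tensor (I x^2 = x^[2])
I = np.zeros((2, 2, 2)); I[0, 0, 0] = I[1, 1, 1] = 1.0
print(det_222(I))   # 1.0

# Both quadratics proportional to (x1 - x2)^2: the common root (1, 1) forces det = 0
A = np.zeros((2, 2, 2))
A[0] = [[1.0, -1.0], [-1.0, 1.0]]
A[1] = [[1.0, -1.0], [-1.0, 1.0]]
print(det_222(A))   # 0.0
```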
Now we can define the generalized eigenvalue problem of tensor pairs, which is similar to the matrix case [6, 49, 113]. Let A and B be two mth-order tensors in C^{n×n×···×n}. We call {A, B} a regular tensor pair if

    det(βA − αB) ≠ 0 for some (α, β) ∈ C^{1,2}.

Conversely, we call {A, B} a singular tensor pair if

    det(βA − αB) = 0 for all (α, β) ∈ C^{1,2}.

In this chapter, we focus on regular tensor pairs.

If {A, B} is a regular tensor pair and there exist (α, β) ∈ C^{1,2} and a nonzero vector x ∈ C^n such that

    (2.1)  βAx^{m-1} = αBx^{m-1},

then we call (α, β) an eigenvalue of the regular tensor pair {A, B} and x the corresponding eigenvector. When B is nonsingular, that is, det(B) ≠ 0, no nonzero vector x ∈ C^n satisfies Bx^{m-1} = 0, according to [57, Theorem 3.1]. Thus β ≠ 0 if (α, β) is an eigenvalue of {A, B}. We also call λ = α/β ∈ C an eigenvalue of the tensor pair {A, B} when det(B) ≠ 0. Denote the spectrum, that is, the set of all eigenvalues, of {A, B} by

    λ(A, B) = { (α, β) ∈ C^{1,2} : det(βA − αB) = 0 },

or, for a nonsingular B, we also denote

    λ(A, B) = { λ ∈ C : det(A − λB) = 0 }.

This chapter is devoted to the properties and perturbations of the spectrum λ(A, B) of a regular tensor pair {A, B}.

2.3 Several Basic Properties

There are plenty of theoretical results about regular matrix pairs [6, 49, 113]. In this section, we extend several of them to regular tensor pairs. Note that some of the generalized results retain forms similar to the matrix case, while others do not.

2.3.1 Number of Eigenvalues

How many eigenvalues does a regular tensor pair have? This might be the first question about the spectrum of a regular tensor pair. We have the following result:

Theorem 2.1. The number of eigenvalues of an mth-order n-dimensional regular tensor pair is n(m−1)^{n−1}, counting multiplicity.

Proof. If {A, B} is an mth-order n-dimensional regular tensor pair, then there exists (γ, δ) ∈ C^{1,2} such that det(δA − γB) ≠ 0. Assume that (γ, δ) is normalized so that |γ|^2 + |δ|^2 = 1. We have the transformation, as in the matrix case [115, Page 300],

    Ã = γ̄A + δ̄B,    A = γÃ + δ̄B̃,
    B̃ = δA − γB,    B = δÃ − γ̄B̃.

Thus there is a one-to-one map between λ(A, B) and λ(Ã, B̃):

    α̃ = αγ̄ + βδ̄,    α = α̃γ + β̃δ̄,
    β̃ = αδ − βγ,    β = α̃δ − β̃γ̄.

Since det(B̃) ≠ 0, the eigenvalues of {Ã, B̃} are exactly the complex roots of the polynomial equation det(Ã − λB̃) = 0. By [57, Proposition 2.4], the degree of the polynomial det(Ã − λB̃) in λ is no more than n(m−1)^{n−1}. We shall show that the degree is exactly n(m−1)^{n−1}.

Denote the coefficient of the term λ^{n(m−1)^{n−1}} in det(Ã − λB̃) by f(Ã, B̃). From the definition of det(Ã − λB̃), we see that f(Ã, B̃) is actually independent of the entries of Ã. Thus f(Ã, B̃) = f(O, B̃), where O denotes the zero tensor. It is easy to verify that f(O, B̃) = det(B̃) ≠ 0, hence f(Ã, B̃) is nonzero. Therefore the polynomial equation det(Ã − λB̃) = 0 has n(m−1)^{n−1} roots counting multiplicity.

2.3.2 Spectral Radius

The spectral radius is an important quantity for eigenvalue problems. To define the spectral radius of a regular tensor pair, we first introduce a representation of the points in R^{1,2}. Let (s, c) be a point in R^{1,2}. Without loss of generality, we can assume that c ≥ 0, and that s > 0 when c = 0. Define an angle in (−π/2, π/2] as

    θ(s, c) := arcsin( s / √(s^2 + c^2) ).

When c is fixed, θ(·, c) is an increasing function; when s is fixed, θ(s, ·) is a decreasing function.

For a regular tensor pair {A, B}, we define the θ-spectral radius as

    ρ_θ(A, B) := max_{(α,β) ∈ λ(A,B)} θ(|α|, |β|).

Define another nonnegative function of {A, B} as

    ψ_θ(A, B) := max_{x ∈ C^n \ {0}} θ(‖Ax^{m-1}‖, ‖Bx^{m-1}‖).

It is apparent that ρ_θ(A, B) ≤ ψ_θ(A, B), since θ(‖Ax^{m-1}‖, ‖Bx^{m-1}‖) = θ(|α|, |β|) if (α, β) ∈ λ(A, B) and x is the corresponding eigenvector.

When det(B) ≠ 0, we can define the spectral radius in a simpler way:

    ρ(A, B) := max_{λ ∈ λ(A,B)} |λ|.

Similarly, we define a nonnegative function of {A, B} as

    ψ(A, B) := max_{x ∈ C^n \ {0}} ‖Ax^{m-1}‖ / ‖Bx^{m-1}‖.

It is easy to verify that tan ρ_θ(A, B) = ρ(A, B) ≤ ψ(A, B) = tan ψ_θ(A, B). Furthermore, if B is fixed, then the nonnegative function ψ(·, B) is a seminorm on C^{n×n×···×n}. When A and B are matrices and B = I, this reduces to the familiar result that the spectral radius of a matrix is never larger than its norm.

2.3.3 Diagonalizable Tensor Pairs

Diagonalizable matrix pairs play an important role in the perturbation theory of generalized eigenvalues [113, Section 6.2.3]. A tensor A is said to be diagonal if all its entries except the diagonal ones a_{ii...i} (i = 1, 2, . . . , n) are zero. We call {A, B} a diagonalizable tensor pair if there are two nonsingular matrices P and Q such that

    C = P^{-1}AQ^{m-1} and D = P^{-1}BQ^{m-1}

are two diagonal tensors. Let the diagonal entries of C and D be {c_1, c_2, . . . , c_n} and {d_1, d_2, . . . , d_n}, respectively. If (c_i, d_i) ≠ (0, 0) for all i = 1, 2, . . . , n, then {A, B} is a regular tensor pair. Furthermore, the (c_i, d_i) are exactly all the eigenvalues of {A, B}, and their multiplicities are (m − 1)^{n−1}.

It should be pointed out that the concept of "diagonalizable tensor pair" is not as general as the concept of "diagonalizable matrix pair" [113, Section 6.2.3]. For instance, if matrices A and B are both symmetric, then the matrix pair {A, B} must be diagonalizable. However, this is not true for symmetric tensor pairs. Hence we shall present a nontrivial diagonalizable tensor pair to illustrate that it is a reasonable definition.

Example 2.1. Let A and B be two mth-order n-dimensional anti-circulant tensors [39]. According to the result in that paper, we know that

    C = F_n^* A (F_n^*)^{m-1} and D = F_n^* B (F_n^*)^{m-1}

are both diagonal, where F_n is the n-by-n Fourier matrix [49, Section 1.4]. Therefore an anti-circulant tensor pair must be diagonalizable.

2.3.4 Gershgorin Circle Theorem

The Gershgorin circle theorem [121] is useful for bounding the eigenvalues of a matrix. Stewart and Sun proposed a generalized Gershgorin theorem for a regular matrix pair (see Corollary 2.5 in [113, Section 6.2.2]). We extend this famous theorem to the tensor case in this subsection. Define the p-norm of a tensor as the p-norm of its mode-1 unfolding [49, Chapter 12.4]:

    ‖A‖_p := ‖A_{(1)}‖_p.

In particular, the tensor ∞-norm has the form

    ‖A‖_∞ = max_{i=1,2,...,n} Σ_{i_2=1}^{n} Σ_{i_3=1}^{n} · · · Σ_{i_m=1}^{n} |a_{i i_2 ... i_m}|.

Notice that for a positive integer k we have ‖x^{[k]}‖_∞ = ‖x^{⊗k}‖_∞, where x^{[k]} = [x_1^k, x_2^k, . . . , x_n^k]^⊤ is the componentwise power and x^{⊗k} = x ⊗ x ⊗ · · · ⊗ x is the Kronecker product of k copies of x [49, Section 12.3]. Denote MA := A ×_1 M^⊤ and AM^{m-1} := A ×_2 M · · · ×_m M for simplicity. Then we can prove the following lemma for the ∞-norm:

Lemma 2.2. Let {A, B} and {C, D} = {CI, DI} be two mth-order n-dimensional regular tensor pairs, where C and D are two matrices. If (α, β) ∈ C^{1,2} is an eigenvalue of {A, B}, then either det(βC − αD) = 0 or

    ‖(βC − αD)^{-1}[β(C − A) − α(D − B)]‖_∞ ≥ 1.
Proof. If (α, β) ∈ C^{1,2} satisfies det(βC − αD) ≠ 0, then for an arbitrary nonzero vector y ∈ C^n we have (βC − αD)y^{m-1} ≠ 0, which is equivalent to (βC − αD)y^{[m-1]} ≠ 0. Hence the matrix βC − αD is nonsingular.

Since (α, β) is an eigenvalue of {A, B}, there exists a nonzero vector x ∈ C^n such that βAx^{m-1} = αBx^{m-1}. This indicates that

    [β(C − A) − α(D − B)]x^{m-1} = (βC − αD)x^{m-1}.

Thus, from the expressions of C and D, we have

    Gx^{m-1} := (βC − αD)^{-1}[β(C − A) − α(D − B)]x^{m-1} = x^{[m-1]}.

Then, following the definition of the tensor ∞-norm and

    ‖G‖_∞ = max_{v ∈ C^{n^{m-1}} \ {0}} ‖G_{(1)}v‖_∞ / ‖v‖_∞ ≥ ‖G_{(1)}x^{⊗(m-1)}‖_∞ / ‖x^{⊗(m-1)}‖_∞ = ‖Gx^{m-1}‖_∞ / ‖x^{[m-1]}‖_∞ = 1,

we prove the result.


Comparing with Theorem 2.3 in [113, Section 6.2.2], Lemma 2.2 specifies
that {C, D} has a special structure and the norm in the result must be the
∞-norm. Nevertheless, these restrictions do not obstruct its application to the
proof of the following lemma:
Lemma 2.3. Let {A, B} be an mth -order n-dimensional regular tensor pair.
Furthermore, assume that (aii...i , bii...i ) 6= (0, 0) for i = 1, 2, . . . , n. Denote
 X 
Di (A, B) := (α, β) ∈ C1,2 : |βaii...i − αbii...i | ≤ |βaii2 ...im − αbii2 ...im |
(i2 ,...,im )
6=(i,...,i)
Sn
for i = 1, 2, . . . , n. Then λ(A, B) ⊆ i=1 Di (A, B).
Proof. Take the diagonal matrices C = diag(a11...1 , a22...2 , . . . , ann...n ) and D =
diag(b11...1 , b22...2 , . . . , bnn...n ) in Lemma 2.2. Then from the assumption, we
know that {C, D} is a regular tensor pair.
When (α, β) ∈ λ(A, B) satisfies that det(βC − αD) = 0, it must hold that
βaii...i − αbii...i = 0 for some i. Thus it is obvious that (α, β) ∈ Di (A, B).
When (α, β) ∈ λ(A, B) does not satisfy that det(βC − αD) = 0, by Lemma
2.2, we have

(βC − αD)−1 β(C − A) − α(D − B)


 

X βaii2 ...im − αbii2 ...im
= max ≥ 1.
i=1,2,...,n βaii...i − αbii...i
(i2 ,...,im )
6=(i,...,i)

Thus for some i, it holds that


X
|βaii...i − αbii...i | ≤ |βaii2 ...im − αbii2 ...im |,
(i2 ,...,im )
6=(i,...,i)

that is, (α, β) ∈ Di (A, B).


Lemma 2.3 is similar in form to the Gershgorin circle theorem. However, it is not yet the desired result, since α and β still appear on the right-hand side of the inequality. To avoid this, we introduce the chordal metric on C^{1,2} [49, 113]:

    chord((α_1, β_1), (α_2, β_2)) = |α_1 β_2 − β_1 α_2| / ( √(|α_1|^2 + |β_1|^2) · √(|α_2|^2 + |β_2|^2) ).

Moreover, we can easily prove by the Cauchy inequality that

    Σ_{(i_2,...,i_m) ≠ (i,...,i)} |βa_{i i_2 ... i_m} − αb_{i i_2 ... i_m}|
      ≤ √(|α|^2 + |β|^2) · √( ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |a_{i i_2 ... i_m}| )^2 + ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |b_{i i_2 ... i_m}| )^2 ).
Finally, we obtain the Gershgorin circle theorem for regular tensor pairs.

Theorem 2.4. Let {A, B} be an mth-order n-dimensional regular tensor pair. Suppose that (a_{ii...i}, b_{ii...i}) ≠ (0, 0) for all i = 1, 2, . . . , n. Denote the disks

    G_i(A, B) := { (α, β) ∈ C^{1,2} : chord((α, β), (a_{ii...i}, b_{ii...i})) ≤ γ_i }

for i = 1, 2, . . . , n, where

    γ_i = √( ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |a_{i i_2 ... i_m}| )^2 + ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |b_{i i_2 ... i_m}| )^2 ) / √( |a_{ii...i}|^2 + |b_{ii...i}|^2 ).

Then λ(A, B) ⊆ ∪_{i=1}^{n} G_i(A, B).

If B is taken as the unit tensor I, then Theorem 2.4 reduces to the Gershgorin circle theorem for single tensors, that is, [98, Theorem 6(a)].
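As a quick numerical illustration of Theorem 2.4 (the helper names chord and gershgorin_radii are ours): for a diagonal tensor pair the off-diagonal sums vanish, so every γ_i = 0 and each eigenvalue (a_{ii...i}, b_{ii...i}) sits exactly at the center of its disk.

```python
import numpy as np

def chord(p, q):
    """Chordal metric on C^{1,2}."""
    (a1, b1), (a2, b2) = p, q
    num = abs(a1 * b2 - b1 * a2)
    den = np.sqrt(abs(a1) ** 2 + abs(b1) ** 2) * np.sqrt(abs(a2) ** 2 + abs(b2) ** 2)
    return num / den

def gershgorin_radii(A, B):
    """Radii gamma_i of the disks in Theorem 2.4."""
    n, m = A.shape[0], A.ndim
    radii = []
    for i in range(n):
        offA = abs(A[i]).sum() - abs(A[(i,) * m])   # off-diagonal absolute row sum
        offB = abs(B[i]).sum() - abs(B[(i,) * m])
        diag = np.sqrt(abs(A[(i,) * m]) ** 2 + abs(B[(i,) * m]) ** 2)
        radii.append(np.sqrt(offA ** 2 + offB ** 2) / diag)
    return np.array(radii)

# Diagonal pair: the eigenvalues are the diagonal pairs (c_i, d_i), and every gamma_i = 0.
n, m = 3, 3
A = np.zeros((n,) * m); B = np.zeros((n,) * m)
c, d = [2.0, -1.0, 0.5], [1.0, 3.0, 1.0]
for i in range(n):
    A[(i,) * m], B[(i,) * m] = c[i], d[i]
print(gershgorin_radii(A, B))              # [0. 0. 0.]
print(chord((c[0], d[0]), (c[0], d[0])))   # 0.0
```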
Furthermore, the Gershgorin circle theorem has a tighter version when the order of the tensor pair is no less than 3. A tensor is called semi-symmetric [86] if its entries are invariant under any permutation of the last (m − 1) indices. Then for an arbitrary tensor A, there is a semi-symmetric tensor Ã such that Ax^{m-1} = Ãx^{m-1} for all x ∈ C^n. Concretely, the entries of Ã are

    ã_{i i_2 ... i_m} = (1/|π(i_2, . . . , i_m)|) · Σ_{(i_2', i_3', ..., i_m') ∈ π(i_2, i_3, ..., i_m)} a_{i i_2' i_3' ... i_m'},

where π(i_2, i_3, . . . , i_m) denotes the set of all permutations of (i_2, i_3, . . . , i_m) and |π(i_2, i_3, . . . , i_m)| denotes the number of elements of π(i_2, i_3, . . . , i_m). Note that ã_{ii...i} = a_{ii...i} and

    Σ_{(i_2,...,i_m) ≠ (i,...,i)} |a_{i i_2 ... i_m}| ≥ Σ_{(i_2,...,i_m) ≠ (i,...,i)} |ã_{i i_2 ... i_m}|.

Hence we have a tighter version of the Gershgorin circle theorem for regular tensor pairs; that is, the disk G̃_i(A, B) in the following theorem is no larger than the disk G_i(A, B) in Theorem 2.4.
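The semi-symmetrization can be checked numerically: averaging over permutations of the last m − 1 modes leaves Ax^{m-1} unchanged, while the off-diagonal absolute row sums can only shrink (helper names are ours):

```python
import itertools
import numpy as np

def semi_symmetrize(A):
    """Average A over all permutations of its last m-1 modes."""
    m = A.ndim
    perms = list(itertools.permutations(range(1, m)))
    return sum(np.transpose(A, (0,) + p) for p in perms) / len(perms)

def tvp(A, x):
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3, 3, 3))        # m = 4, n = 3
At = semi_symmetrize(A)
x = rng.standard_normal(3)
print(np.allclose(tvp(A, x), tvp(At, x)))    # True: same tensor-vector product

# Off-diagonal row sums can only shrink, so the disks of Theorem 2.5 are no larger:
i = 0
off  = np.abs(A[i]).sum()  - abs(A[i, i, i, i])
offt = np.abs(At[i]).sum() - abs(At[i, i, i, i])
print(offt <= off + 1e-12)                   # True
```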
Theorem 2.5. Let {A, B} be an mth-order n-dimensional regular tensor pair. Assume that (a_{ii...i}, b_{ii...i}) ≠ (0, 0) for all i = 1, 2, . . . , n. Let Ã and B̃ be the semi-symmetric tensors such that Ax^{m-1} = Ãx^{m-1} and Bx^{m-1} = B̃x^{m-1} for all x ∈ C^n. Denote the disks

    G̃_i(A, B) := { (α, β) ∈ C^{1,2} : chord((α, β), (a_{ii...i}, b_{ii...i})) ≤ γ̃_i }

for i = 1, 2, . . . , n, where

    γ̃_i = √( ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |ã_{i i_2 ... i_m}| )^2 + ( Σ_{(i_2,...,i_m) ≠ (i,...,i)} |b̃_{i i_2 ... i_m}| )^2 ) / √( |a_{ii...i}|^2 + |b_{ii...i}|^2 ).

Then λ(A, B) ⊆ ∪_{i=1}^{n} G̃_i(A, B).

2.3.5 Backward Error Analysis

The backward error is a measurement of the stability of a numerical algorithm [53]. It is also widely employed as a stopping criterion for iterative algorithms. We propose a framework of backward error analysis (see [52] for the matrix version) for algorithms for generalized tensor eigenproblems.

Suppose that we obtain a computed eigenvalue (α̂, β̂) and the corresponding computed eigenvector x̂ of a regular tensor pair {A, B} by some algorithm. Then they can be regarded as an exact eigenvalue and eigenvector of another tensor pair {A + E, B + F}. Define the normwise backward error of the computed solution as

    η_{δ_1,δ_2}(α̂, β̂, x̂) := min { ‖(E/δ_1, F/δ_2)‖_F : β̂(A + E)x̂^{m-1} = α̂(B + F)x̂^{m-1} }.

Here δ_1 and δ_2 are two parameters. When δ_1 = δ_2 = 1, the backward error is called the absolute backward error and denoted η_[abs](α̂, β̂, x̂). When δ_1 = ‖A‖_F and δ_2 = ‖B‖_F, the backward error is called the relative backward error and denoted η_[rel](α̂, β̂, x̂).

Denote the residual r̂ := α̂Bx̂^{m-1} − β̂Ax̂^{m-1}. Then the two tensors E and F must satisfy β̂Ex̂^{m-1} − α̂Fx̂^{m-1} = r̂, which can be rewritten as an underdetermined linear equation [53, Chapter 21]

    [E_{(1)}/δ_1, −F_{(1)}/δ_2] · [ δ_1β̂ x̂^{⊗(m-1)} ; δ_2α̂ x̂^{⊗(m-1)} ] = r̂.

Hence the least-norm solution is given by

    [E_{(1)}/δ_1, −F_{(1)}/δ_2] = r̂ · [ δ_1β̂ x̂^{⊗(m-1)} ; δ_2α̂ x̂^{⊗(m-1)} ]^†,

where A^† denotes the Moore–Penrose pseudoinverse of a matrix A [49, Section 5.5.2]. Notice that v^† = v^*/‖v‖_2^2 and ‖x̂^{⊗(m-1)}‖_2 = ‖x̂‖_2^{m-1}. Therefore we derive the explicit form of the normwise backward error.
Theorem 2.6. The normwise backward error is given by

    (2.2)  η_{δ_1,δ_2}(α̂, β̂, x̂) = ‖[E_{(1)}/δ_1, −F_{(1)}/δ_2]‖_F = 1/√(|δ_1β̂|^2 + |δ_2α̂|^2) · ‖r̂‖_2 / ‖x̂‖_2^{m-1}.

In particular, the absolute backward error is

    (2.3)  η_[abs](α̂, β̂, x̂) = 1/√(|β̂|^2 + |α̂|^2) · ‖r̂‖_2 / ‖x̂‖_2^{m-1},

and the relative backward error can be expressed as

    (2.4)  η_[rel](α̂, β̂, x̂) = 1/√(|β̂|^2 ‖A‖_F^2 + |α̂|^2 ‖B‖_F^2) · ‖r̂‖_2 / ‖x̂‖_2^{m-1}.
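Formula (2.3) is cheap to evaluate as a stopping criterion. A sketch on a diagonal pair, where an exact eigenpair has zero backward error and a perturbed eigenvector a positive one (helper names are ours):

```python
import numpy as np

def tvp(A, x):
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x
    return out

def abs_backward_error(A, B, alpha, beta, x):
    """Absolute normwise backward error, formula (2.3)."""
    r = alpha * tvp(B, x) - beta * tvp(A, x)   # residual r-hat
    return np.linalg.norm(r) / (np.sqrt(abs(beta) ** 2 + abs(alpha) ** 2)
                                * np.linalg.norm(x) ** (A.ndim - 1))

n, m = 3, 3
A = np.zeros((n,) * m); B = np.zeros((n,) * m)
for i in range(n):
    A[(i,) * m], B[(i,) * m] = float(i + 1), 1.0

x = np.array([1.0, 0.0, 0.0])                  # exact eigenvector, (alpha, beta) = (1, 1)
print(abs_backward_error(A, B, 1.0, 1.0, x))   # 0.0

x_pert = np.array([1.0, 1e-4, 0.0])            # perturbed eigenvector
print(abs_backward_error(A, B, 1.0, 1.0, x_pert) > 0)   # True
```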

Furthermore, we can also discuss the componentwise backward error (see [52] for the matrix version). Define the componentwise backward error corresponding to the nonnegative weight tensors U, V of the computed solution as

    ω_{U,V}(α̂, β̂, x̂) := min { ε : β̂(A + E)x̂^{m-1} = α̂(B + F)x̂^{m-1}, |E| ≤ εU, |F| ≤ εV }.

Take absolute values on both sides of r̂ = β̂Ex̂^{m-1} − α̂Fx̂^{m-1} and obtain

    |r̂| ≤ |β̂||E||x̂|^{m-1} + |α̂||F||x̂|^{m-1} ≤ ε ( |β̂|U|x̂|^{m-1} + |α̂|V|x̂|^{m-1} ).

Hence the componentwise backward error has the lower bound

    ω_{U,V}(α̂, β̂, x̂) ≥ max_{i=1,2,...,n} |r̂_i| / ( |β̂|U|x̂|^{m-1} + |α̂|V|x̂|^{m-1} )_i.

Equality can be attained by

    E = ε · sign(β̂)^{-1} · S U T^{m-1} and F = −ε · sign(α̂)^{-1} · S V T^{m-1},

where S and T are two diagonal matrices such that r̂ = S|r̂| and T x̂ = |x̂|. Therefore we derive the explicit form of the componentwise backward error.

Theorem 2.7. The componentwise backward error is given by

    (2.5)  ω_{U,V}(α̂, β̂, x̂) = max_{i=1,2,...,n} |r̂_i| / ( |β̂|U|x̂|^{m-1} + |α̂|V|x̂|^{m-1} )_i.

2.4 Real Tensor Pairs

For tensor problems, the real case is quite different from the complex one. For instance, an eigenvector associated with a real eigenvalue of a real tensor can be complex and cannot be transformed into a real one by multiplying by a scalar, which does not occur for matrices. Therefore we need to discuss real tensor pairs separately.
2.4.1 The Crawford Number

We need to define the concept of a "regular pair" in another way for real tensor pairs. Let A and B be two mth-order real tensors in R^{n×n×···×n}. If there does not exist a real vector x ∈ R^n \ {0} such that

    Ax^{m-1} = Bx^{m-1} = 0,

then we call {A, B} an R-regular tensor pair ("R" for "real"). A real eigenvalue of an R-regular tensor pair with a real eigenvector is called an H-eigenvalue, following the same name for single tensors [98]. Moreover, Zhang [134] proved the existence of an H-eigenvalue for a real tensor pair when m is even and n is odd.

Obviously, a tensor pair is R-regular if it is regular. However, an R-regular tensor pair need not be regular.

Example 2.2. Suppose that m is even. Let Z be a real tensor with

    z_{i_1 i_2 ... i_m} = δ_{i_1 i_2} δ_{i_3 i_4} · · · δ_{i_{m-1} i_m},   where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j.

Then Zx^{m-1} = x · (x^⊤x)^{(m-2)/2}. As stated in [20], the H-eigenvalues of {A, Z} are the Z-eigenvalues of A. For an arbitrary x ∈ R^n \ {0}, it holds that Zx^{m-1} ≠ 0 since x^⊤x > 0. Thus the real tensor pair {A, Z} is R-regular. Nevertheless, y^⊤y = 0 can hold for some complex vector y ∈ C^n \ {0}. Thus if Ay^{m-1} = 0 for such a y, then {A, Z} is a singular tensor pair.
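The tensor Z of Example 2.2 is easy to construct for small even m; a quick check of Zx^{m-1} = x · (x^⊤x)^{(m-2)/2} for m = 4 (helper names are ours):

```python
import itertools
import numpy as np

def make_Z(n, m):
    """Entries z_{i1...im} = delta_{i1 i2} delta_{i3 i4} ... delta_{i_{m-1} i_m} (m even)."""
    Z = np.zeros((n,) * m)
    for idx in itertools.product(range(n), repeat=m):
        Z[idx] = float(all(idx[2 * k] == idx[2 * k + 1] for k in range(m // 2)))
    return Z

def tvp(A, x):
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x
    return out

n, m = 3, 4
Z = make_Z(n, m)
x = np.array([1.0, -2.0, 0.5])
print(np.allclose(tvp(Z, x), x * (x @ x) ** ((m - 2) // 2)))   # True
```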
Define the Crawford number (see [113, Section 6.1.3] for the matrix case) of a real tensor pair {A, B} as

    c_r(A, B) := min { √( (Ax^m)^2 + (Bx^m)^2 ) : ‖x‖ = 1, x ∈ R^n }.

If c_r(A, B) > 0, then we call {A, B} a definite tensor pair. A definite tensor pair must be an R-regular pair, since (Ax^m)^2 + (Bx^m)^2 = 0 if Ax^{m-1} = Bx^{m-1} = 0.

We will now discuss the continuity of the Crawford number. Denote

    ‖A‖_L := max { A ×_1 x_1 ×_2 x_2 · · · ×_m x_m : ‖x_k‖ = 1, k = 1, 2, . . . , m },

which is a norm on R^{n×n×···×n} defined by Lim [76]. Here, the norm in the definition can be an arbitrary vector norm if not specified. The continuity of the Crawford number can then be seen from the following theorem.

Theorem 2.8. Let {A, B} and {A + E, B + F} be two real tensor pairs. Then

    (2.6)  |c_r(A + E, B + F) − c_r(A, B)| ≤ √( ‖E‖_L^2 + ‖F‖_L^2 ).

Proof. By the definition of the Crawford number, we have

    c_r(A + E, B + F) = min_{‖x‖=1} √( ((A + E)x^m)^2 + ((B + F)x^m)^2 )
      ≥ min_{‖x‖=1} [ √( (Ax^m)^2 + (Bx^m)^2 ) − √( (Ex^m)^2 + (Fx^m)^2 ) ]
      ≥ c_r(A, B) − √( max_{‖x‖=1} (Ex^m)^2 + max_{‖x‖=1} (Fx^m)^2 )
      ≥ c_r(A, B) − √( ‖E‖_L^2 + ‖F‖_L^2 ).

The remaining part follows directly from

    c_r(A, B) = c_r(A + E − E, B + F − F)
      ≥ c_r(A + E, B + F) − √( max_{‖x‖=1} (−Ex^m)^2 + max_{‖x‖=1} (−Fx^m)^2 )
      ≥ c_r(A + E, B + F) − √( ‖E‖_L^2 + ‖F‖_L^2 ).
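Theorem 2.8 can be observed numerically by replacing the exact minimum over the unit sphere with a minimum over sampled unit vectors; on a fixed sample set the analogous inequality |min f̃ − min f| ≤ max √((Ex^m)² + (Fx^m)²) holds exactly by the reverse triangle inequality (the tensors below are random, for illustration only):

```python
import numpy as np

def txm(A, x):
    """The scalar A x^m: contract x into every mode of A."""
    out = A
    for _ in range(A.ndim):
        out = out @ x
    return out

rng = np.random.default_rng(2)
n, m = 3, 4
A = rng.standard_normal((n,) * m); B = rng.standard_normal((n,) * m)
E = 0.01 * rng.standard_normal((n,) * m); F = 0.01 * rng.standard_normal((n,) * m)

xs = rng.standard_normal((500, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)       # sampled unit vectors

f  = np.array([np.hypot(txm(A, x), txm(B, x)) for x in xs])
fp = np.array([np.hypot(txm(A + E, x), txm(B + F, x)) for x in xs])
g  = np.array([np.hypot(txm(E, x), txm(F, x)) for x in xs])

print(abs(fp.min() - f.min()) <= g.max() + 1e-12)     # True, the sampled analogue of (2.6)
```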

By the same approach, we can prove that small perturbations preserve the definiteness of a real tensor pair.

Theorem 2.9. Let {A, B} be a definite tensor pair and {A + E, B + F} be a real tensor pair. If

    max_{‖x‖=1} √( ( (Ex^m)^2 + (Fx^m)^2 ) / ( (Ax^m)^2 + (Bx^m)^2 ) ) < 1,

then {A + E, B + F} is also a definite tensor pair.

Proof. By the definition of the Crawford number, we have

    c_r(A + E, B + F) = min_{‖x‖=1} √( ((A + E)x^m)^2 + ((B + F)x^m)^2 )
      ≥ min_{‖x‖=1} [ √( (Ax^m)^2 + (Bx^m)^2 ) − √( (Ex^m)^2 + (Fx^m)^2 ) ]
      ≥ c_r(A, B) ( 1 − max_{‖x‖=1} √( ( (Ex^m)^2 + (Fx^m)^2 ) / ( (Ax^m)^2 + (Bx^m)^2 ) ) ).

When the term in parentheses is positive, we can conclude that c_r(A + E, B + F) > 0, since c_r(A, B) > 0.

A more concise and useful corollary also follows directly.

Corollary 2.10. Let {A, B} be a definite tensor pair and {A + E, B + F} be a real tensor pair. If

    (2.7)  √( ‖E‖_L^2 + ‖F‖_L^2 ) < c_r(A, B),

then {A + E, B + F} is also a definite tensor pair.

2.4.2 Symmetric-Definite Tensor Pairs

A tensor A is called symmetric [98] if its entries are invariant under any permutation of the indices, and is called positive definite [98] if Ax^m > 0 for all x ∈ R^n \ {0}. (Thus m must be even.) We call {A, B} a symmetric-definite tensor pair if A is symmetric and B is symmetric positive definite (see [49, Section 8.7.1] for the matrix case). It is easy to see that a symmetric-definite tensor pair must be a definite pair as introduced in Section 2.4.1. Chang et al. [20] proved the existence of H-eigenvalues for symmetric-definite tensor pairs. Furthermore, there is a clear depiction of the maximal and minimal H-eigenvalues of a symmetric-definite tensor pair. Before stating the main result, we need some lemmata.
Lemma 2.11. Let S and T be two mth-order n-dimensional real symmetric tensors. Assume that m is even and T is positive definite. Define

    W_T(S) := { Sx^m : Tx^m = 1, x ∈ R^n }.

Then W_T(S) is a convex set.

Proof. We prove this lemma following the definition of convex sets. Let x_1, x_2 ∈ R^n satisfy Tx_1^m = Tx_2^m = 1. Then ω_1 = Sx_1^m and ω_2 = Sx_2^m are two points in W_T(S). Without loss of generality, we can assume that ω_1 > ω_2. For an arbitrary t ∈ (0, 1), we seek a vector x̃ with Sx̃^m = tω_1 + (1 − t)ω_2 and Tx̃^m = 1, so that tω_1 + (1 − t)ω_2 is also a point in W_T(S).

Take x = τx_1 + x_2, where τ is selected to satisfy

    Sx^m = [tω_1 + (1 − t)ω_2] · Tx^m.

Recall that A(u + v)^m can be expanded for a symmetric tensor A as

    A(u + v)^m = Σ_{k=0}^{m} (m choose k) Au^k v^{m-k},

where Au^k v^{m-k} = A ×_1 u · · · ×_k u ×_{k+1} v · · · ×_m v. Thus p(τ) = Sx^m − [tω_1 + (1 − t)ω_2] · Tx^m is a polynomial in τ of degree m. Moreover, the coefficient of the term τ^m is

    Sx_1^m − [tω_1 + (1 − t)ω_2] · Tx_1^m = (1 − t)(ω_1 − ω_2) > 0,

and the constant term is

    Sx_2^m − [tω_1 + (1 − t)ω_2] · Tx_2^m = t(ω_2 − ω_1) < 0.

Since the leading coefficient of p is positive and its constant term p(0) is negative, there must be a real τ such that p(τ) = 0. With such a τ, we can take x̃ = x/(Tx^m)^{1/m}, since T is positive definite. Hence we prove this lemma.
When T is taken to be I, we know from Lemma 2.11 that W_I(S) is convex. Furthermore, W_I(S) is also compact, since the set {x ∈ R^n : Ix^m = 1}, that is, {x ∈ R^n : ‖x‖_m = 1}, is compact and the function x ↦ Sx^m is continuous. Thus the set W_I(S) is actually a segment of the real axis.

Remark 2.12. For a real matrix S, the set {x^⊤Sx : x^⊤x = 1, x ∈ R^n} is called the real field of values [55, Section 1.8.7]. Similarly, we define the real field of values of an even-order real tensor S as W_I(S). The above discussion reveals that W_I(S) is a segment of the real axis.

Denote

    ω_max := max {ω : ω ∈ W_I(S)} and ω_min := min {ω : ω ∈ W_I(S)}.

If S is positive definite, then ω_max ≥ ω_min > 0. Thus for any vector x ∈ R^n we have

    ω_min Ix^m ≤ Sx^m ≤ ω_max Ix^m,

or, equivalently,

    (ω_max^{-1} Sx^m)^{1/m} ≤ ‖x‖_m ≤ (ω_min^{-1} Sx^m)^{1/m},
Exploring the Variety of Random
Documents with Different Content
Ce livre, F de l’alphabet des lettres
achevé d’imprimer pour la Cité des Livres, le 28
janvier 1926, par Ducros et Colas, Maîtres-
Imprimeurs à Paris, a été tiré à 440
exemplaires : 5 sur papier vélin à la cuve
“héliotrope” des papeteries du Marais,
numérotés de 1 à 5 ; 10 exemplaires sur japon
ancien à la forme, numérotés de 6 à 15 ; 25
exemplaires sur japon impérial, numérotés de
16 à 40 ; 50 exemplaires sur vergé de Hollande,
numérotés de 41 à 90 ; et 350 exemplaires sur
vergé à la forme d’Arches, numérotés de 91 à
440. Il a été tiré en outre : 25 exemplaires sur
madagascar réservés à M. Édouard Champion,
marqués alphabétiquement de A à Z ; et 30
exemplaires hors commerce sur papiers divers,
numérotés de I à XXX.

Exemplaire No
*** END OF THE PROJECT GUTENBERG EBOOK CLAVECIN ***

Updated editions will replace the previous one—the old editions


will be renamed.

Creating the works from print editions not protected by U.S.


copyright law means that no one owns a United States copyright
in these works, so the Foundation (and you!) can copy and
distribute it in the United States without permission and without
paying copyright royalties. Special rules, set forth in the General
Terms of Use part of this license, apply to copying and
distributing Project Gutenberg™ electronic works to protect the
PROJECT GUTENBERG™ concept and trademark. Project
Gutenberg is a registered trademark, and may not be used if
you charge for an eBook, except by following the terms of the
trademark license, including paying royalties for use of the
Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such
as creation of derivative works, reports, performances and
research. Project Gutenberg eBooks may be modified and
printed and given away—you may do practically ANYTHING in
the United States with eBooks not protected by U.S. copyright
law. Redistribution is subject to the trademark license, especially
commercial redistribution.

START: FULL LICENSE


THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the


free distribution of electronic works, by using or distributing this
work (or any other work associated in any way with the phrase
“Project Gutenberg”), you agree to comply with all the terms of
the Full Project Gutenberg™ License available with this file or
online at www.gutenberg.org/license.

Section 1. General Terms of Use and


Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand,
agree to and accept all the terms of this license and intellectual
property (trademark/copyright) agreement. If you do not agree to
abide by all the terms of this agreement, you must cease using
and return or destroy all copies of Project Gutenberg™
electronic works in your possession. If you paid a fee for
obtaining a copy of or access to a Project Gutenberg™
electronic work and you do not agree to be bound by the terms
of this agreement, you may obtain a refund from the person or
entity to whom you paid the fee as set forth in paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only


be used on or associated in any way with an electronic work by
people who agree to be bound by the terms of this agreement.
There are a few things that you can do with most Project
Gutenberg™ electronic works even without complying with the
full terms of this agreement. See paragraph 1.C below. There
are a lot of things you can do with Project Gutenberg™
electronic works if you follow the terms of this agreement and
help preserve free future access to Project Gutenberg™
electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright
law in the United States and you are located in the United
States, we do not claim a right to prevent you from copying,
distributing, performing, displaying or creating derivative works
based on the work as long as all references to Project
Gutenberg are removed. Of course, we hope that you will
support the Project Gutenberg™ mission of promoting free
access to electronic works by freely sharing Project
Gutenberg™ works in compliance with the terms of this
agreement for keeping the Project Gutenberg™ name
associated with the work. You can easily comply with the terms
of this agreement by keeping this work in the same format with
its attached full Project Gutenberg™ License when you share it
without charge with others.

1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside
the United States, check the laws of your country in addition to
the terms of this agreement before downloading, copying,
displaying, performing, distributing or creating derivative works
based on this work or any other Project Gutenberg™ work. The
Foundation makes no representations concerning the copyright
status of any work in any country other than the United States.

1.E. Unless you have removed all references to Project
Gutenberg:

1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project
Gutenberg™ work (any work on which the phrase “Project
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:

This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it
away or re-use it under the terms of the Project Gutenberg
License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country where
you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is
derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of the
copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is
posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute
this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must, at
no additional cost, fee or expense to the user, provide a copy, a
means of exporting a copy, or a means of obtaining a copy upon
request, of the work in its original “Plain Vanilla ASCII” or other
form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or
providing access to or distributing Project Gutenberg™
electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive from
the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”

• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt that
s/he does not agree to the terms of the full Project Gutenberg™
License. You must require such a user to return or destroy all
copies of the works possessed in a physical medium and
discontinue all use of and all access to other copies of Project
Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.

• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project
Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite
these efforts, Project Gutenberg™ electronic works, and the
medium on which they may be stored, may contain “Defects,”
such as, but not limited to, incomplete, inaccurate or corrupt
data, transcription errors, a copyright or other intellectual
property infringement, a defective or damaged disk or other
medium, a computer virus, or computer codes that damage or
cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES -
Except for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU
AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE,
STRICT LIABILITY, BREACH OF WARRANTY OR BREACH
OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE
TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER
THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR
ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE
OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF
THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If
you discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person or
entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set
forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR
ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you do
or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status by
the Internal Revenue Service. The Foundation’s EIN or federal
tax identification number is 64-6221541. Contributions to the
Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.

The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or
determine the status of compliance for any particular state visit
www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.

Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.

Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.

Most people start at our website which has the main PG search
facility: www.gutenberg.org.

This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.