Introduction to Tensor Calculus
Taha Sochi∗
∗ Department of Physics & Astronomy, University College London, Gower Street, London, WC1E 6BT. Email: [email protected].
Preface
These are general notes on tensor calculus, originating from a collection of personal notes which I prepared some time ago for my own use and reference while I was studying the subject. I decided to put them in the public domain hoping they may be beneficial to some students in their effort to learn this subject. Most of these notes were prepared in the form of bullet points, like tutorials and presentations, and hence some of them may be more concise than they should be. Moreover, some notes may not be sufficiently thorough or general. However, this should be understandable considering the level and original purpose of these notes and the desire for conciseness. There may also be some minor repetition in places for the purpose of gathering similar items together, emphasizing key points, or having self-contained sections and units.
These notes, in my view, can be used as a short reference for an introductory course on
tensor algebra and calculus. I assume a basic knowledge of calculus and linear algebra
with some commonly used mathematical terminology. I tried to be as clear as possible and
to highlight the key issues of the subject at an introductory level in a concise form. I hope
I have achieved some success in reaching these objectives at least for some of my target
audience. The present text is supposed to be the first part of a series of documents about
tensor calculus for gradually increasing levels or tiers. I hope I will be able to finalize and
publicize the document for the next level in the near future.
Contents
Preface
1 Notation, Nomenclature and Conventions
2 Preliminaries
2.1 Introduction
2.2 General Rules
2.3 Examples of Tensors of Different Ranks
2.4 Applications of Tensors
2.5 Types of Tensors
2.5.1 Covariant and Contravariant Tensors
2.5.2 True and Pseudo Tensors
2.5.3 Absolute and Relative Tensors
2.5.4 Isotropic and Anisotropic Tensors
2.5.5 Symmetric and Anti-symmetric Tensors
2.6 Tensor Operations
2.6.1 Addition and Subtraction
2.6.2 Multiplication by Scalar
2.6.3 Tensor Multiplication
2.6.4 Contraction
2.6.5 Inner Product
2.6.6 Permutation
2.7 Tensor Test: Quotient Rule
3 δ and ε Tensors
3.1 Kronecker δ
3.2 Permutation ε
3.3 Useful Identities Involving δ and/or ε
3.3.1 Identities Involving δ
3.3.2 Identities Involving ε
3.3.3 Identities Involving δ and ε
3.4 Generalized Kronecker delta
4 Applications of Tensor Notation and Techniques
5 Metric Tensor
6 Covariant Differentiation
References
1 Notation, Nomenclature and Conventions
• In the present notes we largely follow certain conventions and general notations, most of which are commonly used in the mathematical literature although they may not be universally adopted. In the following bullet points we outline these conventions and notations.
We also give initial definitions of the most basic terms and concepts in tensor calculus;
more thorough technical definitions will follow, if needed, in the forthcoming sections.
• Scalars are algebraic objects which are uniquely identified by their magnitude (abso-
lute value) and sign (±), while vectors are broadly geometric objects which are uniquely
identified by their magnitude (length) and direction in a presumed underlying space.
• At this early stage in these notes, we generically define “tensor” as an organized array
of mathematical objects such as numbers or functions.
• In generic terms, the rank of a tensor signifies the complexity of its structure. Rank-0
tensors are called scalars while rank-1 tensors are called vectors. Rank-2 tensors may be
called dyads although this, in common use, may be restricted to the outer product of two
vectors and hence is a special case of rank-2 tensors assuming it meets the requirements
of a tensor and hence transforms as a tensor. Like rank-2 tensors, rank-3 tensors may
be called triads. Similar labels, which are much less common in use, may be attached to
higher rank tensors; however, none will be used in the present notes. More generic names
for higher rank tensors, such as polyad, are also in use.
• In these notes we may use “tensor” to mean tensors of all ranks including scalars (rank-0)
and vectors (rank-1). We may also use it as opposite to scalar and vector (i.e. tensor of
rank-n where n > 1). In almost all cases, the meaning should be obvious from the context.
• Non-indexed lower case light face Latin letters (e.g. f and h) are used for scalars.
• Non-indexed (lower or upper case) bold face Latin letters (e.g. a and A) are used for vectors. The exception to this is the basis vectors, where indexed bold face lower or upper case symbols are used. However, there should be no confusion or ambiguity about the meaning of any of these symbols.
• A partial derivative symbol with a subscript index (e.g. ∂i) is used to denote partial differentiation with respect to the ith spatial coordinate:
∂i = ∇i = ∂/∂xi (1)
• A comma preceding a subscript index (e.g. ,i) is also used to denote partial differentiation with respect to the ith spatial coordinate in Cartesian systems, e.g.
A,i = ∂A/∂xi (2)
• A partial derivative symbol with a spatial subscript, rather than an index, is used to denote partial differentiation with respect to that spatial variable. For instance
∂r = ∇r = ∂/∂r (3)
is used for the partial derivative with respect to the radial coordinate in spherical coordinate systems identified by (r, θ, φ) spatial variables.
is used for the partial derivative with respect to the radial coordinate in spherical coordi-
nate systems identified by (r, θ, φ) spatial variables.
• Partial derivative symbol with repeated double index is used to denote the Laplacian
operator:
∂ii = ∂i ∂i = ∇2 = ∆ (4)
The notation is not affected by using repeated double index other than i (e.g. ∂jj or ∂kk ).
The following notations:
∂ii2 ,   ∂2 ,   ∂i ∂i (5)
are also used in the literature of tensor calculus to symbolize the Laplacian operator.
However, these notations will not be used in the present notes.
• We follow the common convention of using a subscript semicolon preceding a subscript
index (e.g. Akl;i ) to symbolize covariant differentiation with respect to the ith coordinate
(see § 6). The semicolon notation may also be attached to the normal differential operators
to indicate covariant differentiation (e.g. ∇;i or ∂;i to indicate covariant differentiation with
respect to the index i).
• All transformation equations in these notes are assumed continuous and real, and all derivatives are assumed continuous in their domain of variables. Based on this continuity condition, the individual differential operators in mixed partial derivatives are commutative, that is
∂i ∂j = ∂j ∂i (6)
• It is assumed that any coordinate transformation considered in these notes is invertible; such a transformation is unique and hence an inverse transformation from the transformed to the original space is also defined. Mathematically, each one of the direct and inverse transformations can be regarded as a mathematical correlation expressed by a set of equations in which each coordinate in one space is considered as a function of the coordinates in the other space. Hence the transformations between the two sets of coordinates in the two spaces can be expressed mathematically by the following two sets of independent relations:
x̄i = x̄i (x1 , x2 , . . . , xn )   &   xi = xi (x̄1 , x̄2 , . . . , x̄n ) (7)
2 Preliminaries
2.1 Introduction
2 We adopt this assertion, which is common in the literature of tensor calculus, as we think it is suitable for this level. However, there are many instances in the literature of tensor calculus where indices are repeated more than twice in a single term. The bottom line is that as long as the tensor expression makes sense and the intention is clear, such repetitions should be allowed with no need, in our view, to take special precautions like using parentheses. In particular, the summation convention will not apply automatically in such cases, although summation on such indices can be carried out explicitly, by using the summation symbol ∑, or by special declaration of such intention similar to the summation convention. Anyway, in the present text we will not use indices repeated more than twice in a single term.
2.2 General Rules
• According to the summation convention, an index repeated twice in a single term (a dummy index) implies summation over the range of that index (1, . . . , n for an nD space), e.g.
Ai Bi ≡ ∑(i=1→n) Ai Bi = A1 B1 + A2 B2 + . . . + An Bn (10)
δij Aij ≡ ∑(i=1→n) ∑(j=1→n) δij Aij (11)
εijk Aij Bk ≡ ∑(i=1→n) ∑(j=1→n) ∑(k=1→n) εijk Aij Bk (12)
• When dummy indices do not imply summation, the situation must be clarified by en-
closing such indices in parentheses or by underscoring or by using upper case letters (with
declaration of these conventions) or by adding a clarifying comment like “no summation
on repeated indices”.
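As an aside for readers who like to experiment, the summation convention maps directly onto numpy's einsum function, where a repeated subscript letter triggers summation over that index. The following minimal sketch (the array values are arbitrary illustrative assumptions, not part of the notes) verifies the pattern of Eqs. 10-12 for n = 3:

```python
import numpy as np

n = 3
A = np.random.rand(n, n)   # components of a rank-2 tensor A^ij (assumed values)
B = np.random.rand(n)      # components of a vector B_k
a = np.random.rand(n)      # components of a vector a^i
delta = np.eye(n)          # Kronecker delta (Eq. 66)

eps = np.zeros((n, n, n))  # rank-3 permutation tensor (Eq. 71)
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Eq. 10: a^i B_i -- einsum sums automatically over the repeated index i
assert np.isclose(np.einsum('i,i->', a, B), sum(a[i] * B[i] for i in range(n)))
# Eq. 11: delta_ij A^ij (double sum) equals the trace of A
assert np.isclose(np.einsum('ij,ij->', delta, A), np.trace(A))
# Eq. 12: eps_ijk A^ij B^k (triple sum)
print(np.einsum('ijk,ij,k->', eps, A, B))
```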
• Tensors with subscript indices, like Aij, are called covariant, while tensors with superscript indices, like A^k, are called contravariant. Tensors with both types of indices, like A^lmn_lk, are called mixed type. More details about this will follow in § 2.5.1.
• Subscript indices, rather than subscripted tensors, are also dubbed “covariant” and
superscript indices are dubbed “contravariant”.
• Each tensor index should conform to one of the variance transformation rules as given
by Eqs. 20 and 21, i.e. it is either covariant or contravariant.
• For orthonormal Cartesian coordinate systems, the two variance types (i.e. covariant
and contravariant) do not differ because the metric tensor is given by the Kronecker delta
(refer to § 5 and 3.1) and hence any index can be upper or lower although it is common
to use lower indices in such cases.
• For tensor invariance, a pair of dummy indices should in general be complementary
in their variance type, i.e. one covariant and the other contravariant. However, for orthonormal Cartesian systems the two are the same and hence when both dummy indices
are covariant or both are contravariant it should be understood as an indication that the
underlying coordinate system is orthonormal Cartesian if the possibility of an error is
excluded.
• As indicated earlier, tensor order is equal to the number of its indices while tensor rank is
equal to the number of its free indices; hence vectors (terms, expressions and equalities) are
represented by a single free index and rank-2 tensors are represented by two free indices.
The dimension of a tensor is determined by the range taken by its indices.
• The rank of all terms in legitimate tensor expressions and equalities must be the same.
• Each term in valid tensor expressions and equalities must have the same set of free
indices (e.g. i, j, k).
• A free index should keep its variance type in every term in valid tensor expressions and
equations, i.e. it must be covariant in all terms or contravariant in all terms.
• While free indices should be named uniformly in all terms of tensor expressions and
equalities, dummy indices can be named in each term independently, e.g.
A^i_ik + B^j_jk + C^lm_lmk (13)
A^ij_ij ,   A^im_m + B^ink_nk ,   Cij = Aij − Bij ,   a = B^j_j (14)
• Indexing is generally distributive over the terms of tensor expressions and equalities, e.g.
[A + B]i = [A]i + [B]i (16)
and
[A = B]i ⇐⇒ [A]i = [B]i (17)
• Unlike scalars and tensor components, which are essentially scalars in a generic sense, operators cannot in general be freely reordered in tensor terms, therefore
f h = h f   &   Ai Bi = Bi Ai (18)
but
∂i Ai ≠ Ai ∂i (19)
• Almost all the identities in the present notes which are given in a covariant or a con-
travariant or a mixed form are similarly valid for the other forms unless it is stated other-
wise. The objective of reporting in only one form is conciseness and to avoid unnecessary
repetition.
2.3 Examples of Tensors of Different Ranks
• Examples of rank-0 tensors (scalars) are energy, mass, temperature, volume and density. These are totally identified by a single number regardless of any coordinate system and hence they are invariant under coordinate transformations.
• Examples of rank-1 tensors (vectors) are displacement, force, electric field, velocity and
acceleration. These need for their complete identification a number, representing their
magnitude, and a direction representing their geometric orientation within their space.
Alternatively, they can be uniquely identified by a set of numbers, equal to the number
of dimensions of the underlying space, in reference to a particular coordinate system and
hence this identification is system-dependent although they still have system-invariant
properties such as length.
• Examples of rank-2 tensors are Kronecker delta (see § 3.1), stress, strain, rate of strain
and inertia tensors. These require for their full identification a set of numbers each of
which is associated with two directions.
• Examples of rank-3 tensors are the Levi-Civita tensor (see § 3.2) and the tensor of
piezoelectric moduli.
• Examples of rank-4 tensors are the elasticity or stiffness tensor, the compliance tensor
and the fourth-order moment of inertia tensor.
• Tensors of high ranks are relatively rare in science and engineering.
• Although rank-0 and rank-1 tensors are, respectively, scalars and vectors, not all scalars
and vectors (in their generic sense) are tensors of these ranks. Similarly, rank-2 tensors
are normally represented by matrices but not all matrices represent tensors.
2.4 Applications of Tensors
• Tensor calculus is a very powerful mathematical tool. Tensor notation and techniques are used in many branches of science and engineering such as fluid mechanics, continuum mechanics, general relativity and structural engineering. Tensor calculus is used for elegant and compact formulation and presentation of equations and identities in mathematics, science and engineering. It is also used for algebraic manipulation of mathematical expressions and proving identities in a neat and succinct way (refer to § 4.6).
• As indicated earlier, the invariance of tensor forms serves a theoretically and practically important role by allowing the formulation of physical laws in coordinate-free forms.
2.5 Types of Tensors
• In the following subsections we introduce a number of tensor types and categories and highlight their main characteristics and differences. These types and categories are not mutually exclusive and hence they overlap in general; moreover they may not be exhaustive in their classes as some tensors may not instantiate any one of a complementary set of types such as being symmetric or anti-symmetric.
2.5.1 Covariant and Contravariant Tensors
• These are the main types of tensor with regard to the rules of their transformation between different coordinate systems.
• Covariant tensors are notated with subscript indices (e.g. Ai) while contravariant tensors are notated with superscript indices (e.g. A^i).
• A covariant tensor is transformed according to the following rule
Āi = (∂xj/∂x̄i) Aj (20)
while a contravariant tensor is transformed according to the following rule
Ā^i = (∂x̄i/∂xj) A^j (21)
where the barred and unbarred symbols represent the same mathematical object (tensor
or coordinate) in the transformed and original coordinate systems respectively.
• An example of covariant tensors is the gradient of a scalar field.
• An example of contravariant tensors is the displacement vector.
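A hedged numerical sketch of these two rules: for a linear, invertible coordinate change x̄ = M x, the Jacobian ∂x̄i/∂xj is the constant matrix M, so Eq. 21 becomes Ā = M A for a contravariant vector while Eq. 20 becomes Ā = (M⁻¹)ᵀ A for a covariant one. The matrix M and the component values below are arbitrary assumptions for illustration:

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # assumed invertible transformation x-bar = M x
Minv = np.linalg.inv(M)

A_contra = np.array([1.0, -2.0, 0.5])   # displacement-like components A^j
A_co = np.array([0.3, 0.7, -1.1])       # gradient-like components A_j

A_contra_bar = np.einsum('ij,j->i', M, A_contra)   # Eq. 21: (dx-bar^i/dx^j) A^j
A_co_bar = np.einsum('ji,j->i', Minv, A_co)        # Eq. 20: (dx^j/dx-bar^i) A_j

# The contraction A^i A_i (a scalar) is invariant under the transformation:
assert np.isclose(A_contra @ A_co, A_contra_bar @ A_co_bar)
```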
• Some tensors have mixed variance type, i.e. they are covariant in some indices and contravariant in others. In this case the covariant variables are indexed with subscripts while the contravariant variables are indexed with superscripts, e.g. A^j_i which is covariant in i and contravariant in j.
• A mixed type tensor transforms covariantly in its covariant indices and contravariantly in its contravariant indices, e.g.
Ā^j_i = (∂xk/∂x̄i) (∂x̄j/∂xl) A^l_k (22)
• To demonstrate the pattern of the transformation equations, we take as an example a rank-4 tensor in its purely covariant, purely contravariant and mixed forms, and build the transformation equations step by step, starting from:
Ā $ A (covariant)
Ā $ A (contravariant) (23)
Ā $ A (mixed)
We assume that the barred tensor and its coordinates are indexed with ijkl and the unbarred are indexed with npqr, so we add these indices in their presumed order and position (lower or upper) paying particular attention to the order in the mixed type:
Āijkl $ Anpqr
Ā^ijkl $ A^npqr (24)
Ā^ij_kl $ A^np_qr
Since the barred and unbarred tensors are of the same type, as they represent the same
tensor in two coordinate systems,3 the indices on the two sides of the equalities should
match in their position and order. We then insert a number of partial differential operators
on the right hand side of the equations equal to the rank of these tensors, which is 4 in our
example. These operators represent the transformation rules for each pair of corresponding
coordinates one from the barred and one from the unbarred:
Āijkl $ (∂/∂) (∂/∂) (∂/∂) (∂/∂) Anpqr
Ā^ijkl $ (∂/∂) (∂/∂) (∂/∂) (∂/∂) A^npqr (25)
Ā^ij_kl $ (∂/∂) (∂/∂) (∂/∂) (∂/∂) A^np_qr
3 Similar basis vectors are assumed.
Now we insert the coordinates of the barred system into the partial differential operators
noting that (i) the positions of any index on the two sides should match, i.e. both upper
or both lower, since they are free indices in different terms of tensor equalities, (ii) a
superscript index in the denominator of a partial derivative is in lieu of a covariant index
in the numerator4 , and (iii) the order of the coordinates should match the order of the
indices in the tensor:
Āijkl $ (∂/∂xi) (∂/∂xj) (∂/∂xk) (∂/∂xl) Anpqr
Ā^ijkl $ (∂xi/∂) (∂xj/∂) (∂xk/∂) (∂xl/∂) A^npqr (26)
Ā^ij_kl $ (∂xi/∂) (∂xj/∂) (∂/∂xk) (∂/∂xl) A^np_qr
For consistency, these coordinates should be barred as they belong to the barred tensor; hence we add bars:
Āijkl $ (∂/∂x̄i) (∂/∂x̄j) (∂/∂x̄k) (∂/∂x̄l) Anpqr
Ā^ijkl $ (∂x̄i/∂) (∂x̄j/∂) (∂x̄k/∂) (∂x̄l/∂) A^npqr (27)
Ā^ij_kl $ (∂x̄i/∂) (∂x̄j/∂) (∂/∂x̄k) (∂/∂x̄l) A^np_qr
Finally, we insert the coordinates of the unbarred system into the partial differential
operators noting that (i) the positions of the repeated indices on the same side should
be opposite, i.e. one upper and one lower, since they are dummy indices and hence the
position of the index of the unbarred coordinate should be opposite to its position in the
unbarred tensor, (ii) an upper index in the denominator is in lieu of a lower index in the
numerator, and (iii) the order of the coordinates should match the order of the indices in
the tensor:
Āijkl = (∂xn/∂x̄i) (∂xp/∂x̄j) (∂xq/∂x̄k) (∂xr/∂x̄l) Anpqr
Ā^ijkl = (∂x̄i/∂xn) (∂x̄j/∂xp) (∂x̄k/∂xq) (∂x̄l/∂xr) A^npqr (28)
Ā^ij_kl = (∂x̄i/∂xn) (∂x̄j/∂xp) (∂xq/∂x̄k) (∂xr/∂x̄l) A^np_qr
4 The use of upper indices in the denominator of partial derivatives, which is common in this type of equations, is to indicate the fact that the coordinates and their differentials transform contravariantly.
We also replaced the ‘$’ sign in the final set of equations with the strict equality sign ‘=’ as the equations now are complete.
• A tensor of m contravariant indices and n covariant indices may be called a type (m, n) tensor, e.g. A^k_ij is a type (1, 2) tensor. When one or both variance types are absent, zero is used to refer to the absent type in this notation, e.g. B^ik is a type (2, 0) tensor.
• The covariant and contravariant types of a tensor are linked through the metric tensor
(refer to § 5).
• For orthonormal Cartesian systems there is no difference between covariant and con-
travariant tensors, and hence the indices can be upper or lower.
• The vectors providing the basis set (not necessarily of unit length or mutually orthogonal)
for a coordinate system are of covariant type when they are tangent to the coordinate axes,
and they are of contravariant type when they are perpendicular to the local surfaces of
constant coordinates. These two sets are identical for orthonormal Cartesian systems.
• Formally, the covariant and contravariant basis vectors are given respectively by:
Ei = ∂r/∂ui   &   E^i = ∇ui (29)
where r is the position vector and the ui are general coordinates. These basis vectors are not necessarily mutually orthogonal or of unit length; however the two sets are reciprocal systems and hence they satisfy the following reciprocity relation:
Ei · E^j = δi^j (30)
• A vector can be represented either by covariant components with contravariant basis vectors or by contravariant components with covariant basis vectors:
A = A^i Ei   or   A = Ai E^i (31)
where E^i and Ei are the contravariant and covariant basis vectors respectively. The use of the covariant or contravariant form of the vector representation is a matter of choice and convenience.
• More generally, a tensor of any rank (≥ 1) can be represented covariantly using con-
travariant basis tensors of that rank, or contravariantly using covariant basis tensors, or
in a mixed form using a mixed basis of opposite type. For example, a rank-2 tensor A can
be written as:
A = A^ij Ei Ej = Aij E^i E^j = A^i_j Ei E^j (32)
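To make the reciprocity relation of Eq. 30 concrete, here is a small sketch with an arbitrarily chosen oblique 2D basis (an assumption purely for illustration) that builds the contravariant basis from the covariant one and represents a vector both ways, as in Eq. 31:

```python
import numpy as np

# Covariant basis: columns are E_1 and E_2 (oblique and non-orthonormal by design)
E_co = np.array([[1.0, 1.0],
                 [0.0, 2.0]])

# The contravariant (reciprocal) basis satisfies E^i . E_j = delta^i_j,
# which in matrix form means the rows of E_contra are the rows of E_co^{-1}.
E_contra = np.linalg.inv(E_co)          # row i is E^i

print(E_contra @ E_co)                   # ~ identity, i.e. Eq. 30

# Representing an arbitrary vector both ways (Eq. 31): A = A^i E_i = A_i E^i
A = np.array([3.0, -1.0])
A_contra = np.linalg.solve(E_co, A)      # contravariant components A^i
A_co = E_co.T @ A                        # covariant components A_i = A . E_i
assert np.allclose(E_co @ A_contra, A)
assert np.allclose(E_contra.T @ A_co, A)
```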
2.5.2 True and Pseudo Tensors
• These are also called polar and axial tensors respectively, although it is more common to use the latter terms for vectors. Pseudo tensors may also be called tensor densities.
• True tensors are proper (or ordinary) tensors and hence they are invariant under coordinate transformations, while pseudo tensors are not proper tensors since they do not transform invariantly as they acquire a minus sign under improper orthogonal transformations which involve inversion of coordinate axes through the origin with a change of system handedness.
• Because true and pseudo tensors have different mathematical properties and represent
different types of physical entities, the terms of consistent tensor expressions and equations
should be uniform in their true and pseudo type, i.e. all terms are true or all are pseudo.
• The direct product (refer to § 2.6.3) of an even number of pseudo tensors is a proper tensor, while the direct product of an odd number of pseudo tensors is a pseudo tensor. The direct product of true tensors is obviously a true tensor.
• The direct product of a mix of true and pseudo tensors is a true or pseudo tensor
depending on the number of pseudo tensors involved in the product as being even or odd
respectively.
• Similar rules to those of direct product apply to cross products (including curl operations)
involving tensors (usually of rank-1) with the addition of a pseudo factor for each cross
product operation. This factor is contributed by the permutation tensor (see § 3.2) which
is implicit in the definition of the cross product (see Eqs. 121 and 146).
• In summary, what determines the tensor type (true or pseudo) of the tensor terms involving direct5 and cross products is the parity of the multiplicative factors of pseudo type plus the number of cross product operations involved, since each cross product contributes an ε factor.
• Examples of true scalars are temperature, mass and the dot product of two polar or two
axial vectors, while examples of pseudo scalars are the dot product of an axial vector and
a polar vector and the scalar triple product of polar vectors.
• Examples of polar vectors are displacement and acceleration, while examples of axial
vectors are angular velocity and cross product of polar vectors in general (including curl
operation on polar vectors) due to the involvement of the permutation symbol which is a pseudo tensor (refer to § 3.2). The essence of this distinction is that the direction of a pseudo vector depends on the observer choice of the handedness of the coordinate system whereas the direction of a proper vector is independent of such choice.
5 Inner product (see § 2.6.5) is the result of a direct product operation followed by a contraction (see § 2.6.4) and hence it is a direct product in this context.
• Examples of proper tensors of rank-2 are stress and rate of strain tensors, while examples
of pseudo tensors of rank-2 are direct products of two vectors: one polar and one axial.
• Examples of proper tensors of higher ranks are piezoelectric moduli tensor (rank-3)
and elasticity tensor (rank-4), while examples of pseudo tensors of higher ranks are the
permutation tensor of these ranks.
2.5.3 Absolute and Relative Tensors
• A tensor A is described as a relative tensor of weight w if it transforms according to the following rule:
Ā^ij...k_lm...n = |∂x/∂x̄|^w (∂x̄i/∂xa) (∂x̄j/∂xb) · · · (∂x̄k/∂xc) (∂xd/∂x̄l) (∂xe/∂x̄m) · · · (∂xf/∂x̄n) A^ab...c_de...f (33)
where |∂x/∂x̄| is the Jacobian of the transformation between the two systems. When w = 0 the tensor is described as an absolute or true tensor, while when w = −1 the tensor is described as a pseudo tensor. When w = 1 the tensor may be described as a tensor density.6
6 Some of these labels are used differently by different authors.
• As indicated earlier, a tensor of m contravariant indices and n covariant indices may be
called type (m, n). This may be generalized to include the weight as a third entry and
hence the type of the tensor is identified by (m, n, w).
• Relative tensors can be added and subtracted if they are of the same variance type and
have the same weight; the result is a tensor of the same type and weight. Also, relative
tensors can be equated if they are of the same type and weight.
• Multiplication of relative tensors produces a relative tensor whose weight is the sum of the weights of the original tensors. Hence, if the weights add up to a non-zero value the result is a relative tensor of that weight; otherwise it is an absolute tensor.
2.5.4 Isotropic and Anisotropic Tensors
• Isotropic tensors are characterized by the property that the values of their components
are invariant under coordinate transformation by proper rotation of axes. In contrast, the
values of the components of anisotropic tensors are dependent on the orientation of the
coordinate axes. Notable examples of isotropic tensors are scalars (rank-0), the vector 0
(rank-1), Kronecker delta δij (rank-2) and the Levi-Civita tensor εijk (rank-3). Many tensors
describing physical properties of materials, such as stress and magnetic susceptibility, are
anisotropic.
• Direct and inner products of isotropic tensors are isotropic tensors.
• The zero tensor of any rank is isotropic; therefore if the components of a tensor vanish
in a particular coordinate system they will vanish in all properly and improperly rotated
coordinate systems.7 Consequently, if the components of two tensors are identical in a
particular coordinate system they are identical in all transformed coordinate systems.
• As indicated, all rank-0 tensors (scalars) are isotropic. Also, the zero vector, 0, of any
dimension is isotropic; in fact it is the only rank-1 isotropic tensor.
7 For improper rotation, this is more general than being isotropic.
2.5.5 Symmetric and Anti-symmetric Tensors
• These types of tensor apply to high ranks only (rank ≥ 2). Moreover, these types are not exhaustive, even for tensors of ranks ≥ 2, as there are high-rank tensors which are neither symmetric nor anti-symmetric.
• A rank-2 tensor Aij is symmetric iff Aij = Aji, and anti-symmetric (or skew-symmetric) iff Aij = −Aji. Similar conditions apply to contravariant type tensors (refer also to the following).
• A rank-n tensor A i1...in is symmetric in its two indices ij and il iff exchanging these two indices does not change its value, and anti-symmetric in its two indices ij and il iff such an exchange reverses its sign.
• Any rank-2 tensor Aij can be synthesized from (or decomposed into) a symmetric part
A(ij) (marked with round brackets enclosing the indices) and an anti-symmetric part A[ij]
(marked with square brackets) where
Aij = A(ij) + A[ij] ,   A(ij) = (1/2)(Aij + Aji)   &   A[ij] = (1/2)(Aij − Aji) (38)
• Similarly, a rank-3 tensor Aijk can be symmetrized by
A(ijk) = (1/3!)(Aijk + Akij + Ajki + Aikj + Ajik + Akji) (39)
and anti-symmetrized by
A[ijk] = (1/3!)(Aijk + Akij + Ajki − Aikj − Ajik − Akji) (40)
• More generally, a rank-n tensor A i1...in can be symmetrized by
A(i1...in) = (1/n!)(sum of all even & odd permutations of indices i’s) (41)
and anti-symmetrized by
A[i1...in] = (1/n!)(sum of all even permutations minus sum of all odd permutations) (42)
• For a symmetric tensor Aij and an anti-symmetric tensor B^ij (or the other way around) we have
Aij B^ij = 0 (43)
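A quick numerical sketch of the decomposition of Eq. 38 and the vanishing contraction of Eq. 43 (the test matrices are random, arbitrary assumptions):

```python
import numpy as np

A = np.random.rand(3, 3)
A_sym = 0.5 * (A + A.T)       # symmetric part A_(ij), Eq. 38
A_anti = 0.5 * (A - A.T)      # anti-symmetric part A_[ij], Eq. 38

assert np.allclose(A, A_sym + A_anti)   # A_ij = A_(ij) + A_[ij]

B = np.random.rand(3, 3)
B_anti = 0.5 * (B - B.T)
# Eq. 43: full contraction of a symmetric with an anti-symmetric tensor vanishes
assert np.isclose(np.einsum('ij,ij->', A_sym, B_anti), 0.0)
```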
• The indices whose exchange defines the symmetry and anti-symmetry relations should
be of the same variance type, i.e. both upper or both lower.
• The symmetry and anti-symmetry characteristic of a tensor is invariant under coordinate
transformation.
• A tensor of high rank (> 2) may be symmetrized or anti-symmetrized with respect to
only some of its indices instead of all of its indices, e.g.
A(ij)k = (1/2)(Aijk + Ajik)   &   A[ij]k = (1/2)(Aijk − Ajik) (44)
• For a totally skew-symmetric tensor (i.e. anti-symmetric in all of its indices), nonzero
entries can occur only when all the indices are different.
2.6 Tensor Operations
• There are many operations that can be performed on tensors to produce other tensors in general. Some examples of these operations are addition/subtraction, multiplication by a scalar (rank-0 tensor), multiplication of tensors (each of rank > 0), contraction and permutation. Some of these operations, such as addition and multiplication, involve more than one tensor while others are performed on a single tensor, such as contraction and permutation.
• In tensor algebra, division is allowed only for scalars; hence, if the components of an indexed tensor should appear in a denominator, the tensor should be redefined to avoid this, e.g. Bi = 1/Ai.
2.6.1 Addition and Subtraction
• Tensors of the same rank, type and dimension can be added and subtracted to produce tensors of the same rank, type and dimension, e.g.
a = b + c (47)
Ai = Bi − Ci (48)
• The added/subtracted terms should have the same indicial structure with regard to their free indices, as explained in § 2.2; hence Aijk and B^j_ik cannot be added or subtracted although they are of the same rank and type, but A^mi_mjk and B^i_jk can be added and subtracted.
• Addition of tensors is associative and commutative:
(A + B) + C = A + (B + C) (50)
A+B=B+A (51)
2.6.2 Multiplication by Scalar
• A tensor can be multiplied by a scalar, which generally should not be zero, to produce a tensor of the same variance type and rank, e.g.
A^j_ik = a B^j_ik (52)
2.6.3 Tensor Multiplication
• This may also be called outer or exterior or direct or dyadic multiplication, although some of these names may be reserved for operations on vectors.
• On multiplying each component of a tensor of rank r by each component of a tensor of
rank k, both of dimension m, a tensor of rank (r + k) with mr+k components is obtained
where the variance type of each index (covariant or contravariant) is preserved, e.g.
Ai Bj = Cij (53)
• The outer product of a tensor of type (m, n) by a tensor of type (p, q) results in a tensor
of type (m + p, n + q).
• Direct multiplication of tensors may be marked by the symbol ⊗, mostly when using symbolic notation for tensors, e.g. A ⊗ B. However, in the present text no symbol will be used for the operation of direct multiplication.
• Direct multiplication of tensors is not commutative.
• The outer product operation is distributive with respect to the algebraic sum of tensors:
A (B ± C) = AB ± AC   &   (B ± C) A = BA ± CA (55)
• The rank of the tensor formed by the outer product of vectors is equal to the number of the vectors involved (e.g. 2 for dyads and 3 for triads).
• Not every tensor can be synthesized as a product of lower rank tensors.
• In the outer product, it is understood that all the indices of the involved tensors have
the same range.
• The outer product of tensors yields a tensor.
2.6.4 Contraction
• Contraction of a tensor of rank > 1 is to make two free indices identical, by unifying
their symbols, and perform summation over these repeated indices, e.g.
A^j_i  −−contraction−→  A^i_i (56)
A^jk_il  −−contraction on jl−→  A^mk_im (57)
• In matrix algebra, taking the trace (summing the diagonal elements) can also be considered as contraction of the matrix, which under certain conditions can represent a rank-2 tensor, and hence it yields the trace which is a scalar.
• Applying the index contraction operation on a tensor results in a tensor.
• Application of contraction of indices operation on a relative tensor (see § 2.5.3) produces
a relative tensor of the same weight as the original tensor.
2.6.5 Inner Product
• On taking the outer product (refer to § 2.6.3) of two tensors of rank ≥ 1 followed by a contraction on two indices of the product, an inner product of the two tensors is formed. Hence if one of the original tensors is of rank-m and the other is of rank-n, the inner product will be of rank-(m + n − 2).
• The inner product operation is usually symbolized by a single dot between the two
tensors, e.g. A · B, to indicate contraction following outer multiplication.
• In general, the inner product is not commutative. When one or both of the tensors
involved in the inner product are of rank > 1 the order of the multiplicands does matter.
• The inner product operation is distributive with respect to the algebraic sum of tensors:
A · (B ± C) = A · B ± A · C & (B ± C) · A = B · A ± C · A (58)
• As indicated before (see § 2.6.4), the dot product of two vectors is an example of the
inner product of tensors, i.e. it is an inner product of two rank-1 tensors to produce a
rank-0 tensor:
[ab]ij = ai bj  −−contraction−→  a · b = ai bi (59)
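The chain "outer product followed by contraction" is easy to trace numerically; a minimal sketch (the vectors are arbitrary assumptions):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

outer = np.einsum('i,j->ij', a, b)    # outer product: [ab]_ij = a_i b_j (rank-2)
inner = np.einsum('ii->', outer)      # contraction over the two indices
assert np.isclose(inner, a @ b)        # Eq. 59: this is just the dot product a . b
print(inner)
```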
• Another common example (from linear algebra) of inner product is the multiplication of a matrix (representing a rank-2 tensor assuming certain conditions) by a vector (rank-1 tensor) to produce a vector, e.g.
[Ab]i = Aij bj (60)
The multiplication of two n × n matrices is another example of inner product (see Eq. 119).
• For tensors whose outer product produces a tensor of rank > 2, various contraction
operations between different sets of indices can occur and hence more than one inner
product, which are different in general, can be defined. Moreover, when the outer product
produces a tensor of rank > 3 more than one contraction can take place simultaneously.
• There are more specialized types of inner product; some of which may be defined dif-
ferently by different authors. For example, a double inner product of two rank-2 tensors,
A and B, may be defined and denoted by double vertically- or horizontally-aligned dots
(e.g. A : B or A · ·B) to indicate double contraction taking place between different pairs
of indices. An instance of these types is the inner product with double contraction of two
dyads which is commonly defined by8
ab : cd = (a · c) (b · d) (61)
and the result is a scalar. The single dots in the right hand side of the last equation
symbolize the conventional dot product of two vectors. Some authors may define a different
type of double-contraction inner product of two dyads, symbolized by two horizontally-
aligned dots, which may be called a “transposed contraction”, and is given by
ab · ·cd = ab : dc = (a · d) (b · c) (62)
where the result is also a scalar. However, different authors may have different conventions and hence one should be vigilant about such differences.
8 It is also defined differently by some authors.
• For two rank-2 tensors, the aforementioned double-contraction inner products are similarly defined as in the case of two dyads:
A : B = Aij Bij   &   A ··B = Aij Bji (63)
• Inner products with higher multiplicity of contractions are similarly defined, and hence
can be regarded as trivial extensions of the inner products with lower contraction multi-
plicities.
• The inner product of tensors produces a tensor because the inner product is an outer
product operation followed by an index contraction operation and both of these operations
on tensors produce tensors.
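The two double-contraction conventions differ only in how the index pairs are matched; a sketch with arbitrary assumed matrices:

```python
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 3)

colon = np.einsum('ij,ij->', A, B)        # A : B  = A_ij B_ij
transposed = np.einsum('ij,ji->', A, B)   # A ..B  = A_ij B_ji ("transposed contraction")

assert np.isclose(colon, np.sum(A * B))
assert np.isclose(transposed, np.trace(A @ B))
```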
2.6.6 Permutation
• A tensor may be obtained by exchanging the indices of another tensor, e.g. transposition
of rank-2 tensors.
• Tensor permutation applies only to tensors of rank ≥ 2.
• The collection of tensors obtained by permuting the indices of a basic tensor may be
called isomers.
2.7 Tensor Test: Quotient Rule
• Sometimes a tensor-like object may be suspected for being a tensor; in such cases a test based on the “quotient rule” can be used to clarify the situation. According to this rule, if the inner product of a suspected tensor with a known tensor is a tensor then the suspect is a tensor. In more formal terms, if it is not known if A is a tensor but it is known that B and C are tensors, and moreover it is known that an inner-product relation of the form AB = C (with contraction over a common index k) holds true in all rotated (properly-transformed) coordinate frames, then A is a tensor. Here, A, B and C are respectively of ranks m, n and (m + n − 2), due to the contraction on k which can be any index of A and B independently.
to the contraction on k which can be any index of A and B independently.
• Testing for being a tensor can also be done by applying first principles through direct
substitution in the transformation equations. However, using the quotient rule is generally
more convenient and requires less work.
• The quotient rule may be considered as a replacement for the division operation which
is not defined for tensors.
3 δ and ε Tensors
• These tensors are of particular importance in tensor calculus due to their distinctive properties and unique transformation attributes. They are numerical tensors with fixed components in all coordinate systems. The first is called the Kronecker delta or unit tensor, while the second is called the Levi-Civita9, permutation, anti-symmetric or alternating tensor.
• The δ and ε tensors are conserved under coordinate transformations and hence they are the same for all systems of coordinates.10
3.1 Kronecker δ
• This is a rank-2 symmetric tensor, i.e. δij = δji. Similar identities apply to the contravariant and mixed types of this tensor.
• It is invariant in all coordinate systems, and hence it is an isotropic tensor.11
• It is defined as:
δij = 1 (i = j)   and   δij = 0 (i ≠ j) (66)
9 This name is usually used for the rank-3 tensor. Also some authors distinguish between the permutation tensor and the Levi-Civita tensor even for rank-3. Moreover, some of the common labels and descriptions of ε are more specific to rank-3.
10 For the permutation tensor, the statement applies to proper coordinate transformations.
11 In fact it is more general than isotropic as it is invariant even under improper coordinate transformations.
• Covariant, contravariant and mixed types of this tensor are the same, that is
δij = δ^i_j = δ^ij (67)
3.2 Permutation ε
• This is an isotropic tensor. It has a rank equal to the number of dimensions; hence, a rank-n permutation tensor has n^n components.
• It is totally anti-symmetric in each pair of its indices, i.e. it changes sign on swapping any two of its indices, that is
ε i1...ij...il...in = −ε i1...il...ij...in (68)
The reason is that any exchange of two indices requires an even/odd number of single-step shifts to the right of the first index plus an odd/even number of single-step shifts to the left of the second index, so the total number of shifts is odd and hence it is an odd permutation of the original arrangement.
• It is a pseudo tensor since it acquires a minus sign under improper orthogonal transformation of coordinates (inversion of axes with possible superposition of rotation).
• Definition of rank-2 ε (εij):
ε12 = 1 ,   ε21 = −1 ,   ε11 = ε22 = 0 (70)
• Definition of rank-3 ε (εijk):
εijk = 1 (i, j, k is even permutation of 1,2,3) ,   = −1 (i, j, k is odd permutation of 1,2,3) ,   = 0 (repeated index) (71)
• The definition of rank-n ε (ε i1 i2 ...in) is similar to the definition of rank-3 ε considering index repetition and even or odd permutations of its indices (i1 , i2 , · · · , in) corresponding to (1, 2, · · · , n), that is
ε i1 i2 ...in = 1 [(i1 , i2 , . . . , in) is even permutation of (1, 2, . . . , n)] ,   = −1 [(i1 , i2 , . . . , in) is odd permutation of (1, 2, . . . , n)] ,   = 0 [repeated index] (72)
3.3 Useful Identities Involving δ and/or ε
3.3.1 Identities Involving δ
• When the Kronecker delta is contracted with a tensor it replaces the contracted index in the other tensor by the other index of the Kronecker delta, that is
δij Aj = Ai (76)
In such cases the Kronecker delta is described as the substitution or index replacement operator. Hence,
δij δjk = δik (77)
Similarly,
δij δjk δki = δik δki = δii = n (78)
• For Cartesian coordinate systems:
∂xi/∂xj = ∂j xi = xi,j = δij (79)
∂i xi = δii = n (80)
∂xi/∂xj = ∂xj/∂xi = δij = δ^ij (81)
• For a set of orthonormal basis vectors:
ei · ej = δij (82)
• The double inner product of two dyads formed by orthonormal basis vectors of an orthonormal Cartesian system is given by:
ei ej : ek el = δik δjl (83)
3.3.2 Identities Involving ε
• For rank-3 ε:
εijk = εkij = εjki = −εikj = −εjik = −εkji (sense of cyclic order) (84)
These equations demonstrate the fact that rank-3 ε is totally anti-symmetric in all of its indices since a shift of any two indices reverses the sign. This also reflects the fact that the above tensor system has only one independent component.
• For rank-2 ε:
εij = (j − i) (85)
• For rank-3 ε:
εijk = (1/2)(j − i)(k − i)(k − j) (86)
• For rank-4 ε:
εijkl = (1/12)(j − i)(k − i)(l − i)(k − j)(l − j)(l − k) (87)
• For rank-n ε:
ε a1 a2 ···an = Π(i=1→n−1) [ (1/i!) Π(j=i+1→n) (aj − ai) ] = [1/S(n − 1)] Π(1≤i<j≤n) (aj − ai) (88)
where
S(k) = Π(i=1→k) i! = 1! · 2! · . . . · k! (89)
A simpler formula for rank-n ε can be obtained from the previous one by ignoring the magnitude of the multiplication factors and taking only their signs, that is
ε a1 a2 ···an = σ( Π(1≤i<j≤n) (aj − ai) ) = Π(1≤i<j≤n) σ(aj − ai) (90)
where
σ(k) = +1 (k > 0) ,   σ(k) = −1 (k < 0) ,   σ(k) = 0 (k = 0) (91)
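The sign formula of Eq. 90 gives a compact way to generate the rank-n permutation tensor programmatically; a minimal sketch (the function name levi_civita is our own, assumed for illustration):

```python
import numpy as np
from itertools import product

def levi_civita(n):
    """Rank-n permutation tensor built via the sign formula of Eq. 90."""
    eps = np.zeros((n,) * n)
    for idx in product(range(n), repeat=n):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                sign *= np.sign(idx[j] - idx[i])   # factor sigma(a_j - a_i)
        eps[idx] = sign
    return eps

eps3 = levi_civita(3)
print(eps3[0, 1, 2], eps3[1, 0, 2], eps3[0, 0, 1])   # 1.0 -1.0 0.0

# Eq. 92: full self-contraction equals n!
assert np.isclose(np.einsum('ijk,ijk->', eps3, eps3), 6.0)
```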
• For rank-n ε:
ε i1 i2 ···in ε i1 i2 ···in = n! (92)
because this is the sum of the squares of ε i1 i2 ···in over all the permutations of n different indices, which is equal to n!, where the value of ε of each one of these permutations is either +1 or −1 and hence in both cases their square is 1.
• For a symmetric tensor Ajk:
εijk Ajk = 0 (93)
because an exchange of the two indices of Ajk does not affect its value due to the symmetry whereas a similar exchange in these indices in εijk results in a sign change; hence each term in the sum has its own negative and therefore the total sum will vanish.
•
εijk Ai Aj = εijk Ai Ak = εijk Aj Ak = 0 (94)
because, due to the commutativity of multiplication, an exchange of the indices in the A’s will not affect the value but a similar exchange in the corresponding indices of εijk will cause a change in sign; hence each term in the sum has its own negative and therefore the total sum will be zero.
• For a set of orthonormal basis vectors in a 3D space with a right-handed orthonormal Cartesian coordinate system:
ei × ej = εijk ek (95)
•
εijk δ1i δ2j δ3k = ε123 = 1 (97)
3.3.3 Identities Involving δ and ε
• For rank-2 ε:
εij εkl = det[ δik δil ; δjk δjl ] = δik δjl − δil δjk (98)
• For rank-3 ε:
εijk εlmn = det[ δil δim δin ; δjl δjm δjn ; δkl δkm δkn ] = δil δjm δkn + δim δjn δkl + δin δjl δkm − δil δjn δkm − δim δjl δkn − δin δjm δkl (101)
• On contracting one pair of indices:
εijk εlmk = det[ δil δim ; δjl δjm ] = δil δjm − δim δjl (102)
The last identity is very useful in manipulating and simplifying tensor expressions and proving vector and tensor identities.
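Eq. 102 is the workhorse of the proofs in § 4.6, and it can be verified exhaustively over all index values in a few lines; a self-contained sketch:

```python
import numpy as np

delta = np.eye(3)
eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

# Eq. 102: eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,lmk->ijlm', eps3, eps3)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
assert np.allclose(lhs, rhs)

# Further contractions give Eqs. 103 and 104:
assert np.allclose(np.einsum('ijk,ljk->il', eps3, eps3), 2 * delta)
assert np.isclose(np.einsum('ijk,ijk->', eps3, eps3), 6.0)
```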
• Further contractions give:
εijk εljk = 2δil (103)
εijk εijk = 2δii = 6 (104)
since the rank and dimension of ε are the same, which is 3 in this case.
• For rank-n ε:
ε i1 i2 ···in ε j1 j2 ···jn = det[ δi1j1 δi1j2 · · · δi1jn ; δi2j1 δi2j2 · · · δi2jn ; . . . ; δinj1 δinj2 · · · δinjn ] (105)
3.4 Generalized Kronecker delta
• The generalized Kronecker delta δ^i1...in_j1...jn is defined by the determinant:
δ^i1...in_j1...jn = det[ δ^i1_j1 δ^i1_j2 · · · δ^i1_jn ; δ^i2_j1 δ^i2_j2 · · · δ^i2_jn ; . . . ; δ^in_j1 δ^in_j2 · · · δ^in_jn ] (108)
where the δ^i_j entries in the determinant are the normal Kronecker delta as defined by Eq. 66.
• Accordingly, the relation between the rank-n ε and the generalized Kronecker delta in an nD space is given by:
ε i1 i2 ···in = δ^1 2 ··· n_i1 i2 ··· in   &   ε^i1 i2 ···in = δ^i1 i2 ··· in_1 2 ··· n (109)
Hence, the permutation tensor ε may be considered as a special case of the generalized Kronecker delta. Consequently the permutation symbol can be written as an n × n determinant consisting of the normal Kronecker deltas.
• If we define
δ^ij_lm = δ^ijk_lmk (110)
then
δ^ij_lm = δ^i_l δ^j_m − δ^i_m δ^j_l (111)
Other identities involving δ and ε can also be formulated in terms of the generalized Kronecker delta.
• On comparing Eq. 105 with Eq. 108 we conclude
δ^i1...in_j1...jn = ε^i1...in ε j1...jn (112)
4 Applications of Tensor Notation and Techniques
• For a 3 × 3 matrix A representing a rank-2 tensor in a 3D space, the determinant can be given by:
det (A) = εijk A1i A2j A3k = εijk Ai1 Aj2 Ak3 (114)
where the last two equalities represent the expansion of the determinant by row and by column. Alternatively
det (A) = (1/3!) εijk εlmn Ail Ajm Akn (115)
• For an n × n matrix:
det (A) = ε i1 ···in A1i1 . . . Anin = ε i1 ···in Ai1 1 . . . Ain n = (1/n!) ε i1 ···in ε j1 ···jn Ai1 j1 . . . Ain jn (116)
• The inverse of a 3 × 3 matrix A representing a rank-2 tensor is given by:
[A−1]ij = [1/(2 det (A))] εjmn εipq Amp Anq (117)
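Eqs. 115 and 117 can be checked directly against numpy's linear algebra routines; a sketch with an arbitrary assumed (invertible) matrix:

```python
import numpy as np

eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])

# Eq. 115: det(A) = (1/3!) eps_ijk eps_lmn A_il A_jm A_kn
det = np.einsum('ijk,lmn,il,jm,kn->', eps3, eps3, A, A, A) / 6.0
assert np.isclose(det, np.linalg.det(A))

# Eq. 117: [A^{-1}]_ij = [1/(2 det A)] eps_jmn eps_ipq A_mp A_nq
Ainv = np.einsum('jmn,ipq,mp,nq->ij', eps3, eps3, A, A) / (2.0 * det)
assert np.allclose(Ainv, np.linalg.inv(A))
```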
• The multiplication of a matrix A by a vector b as defined in linear algebra is:
[Ab]i = Aij bj (118)
It should be noticed that here we are using matrix notation. The multiplication operation, according to the symbolic notation of tensors, should be denoted by a dot between the tensor and the vector, i.e. A·b.12
• The multiplication of two n × n matrices A and B as defined in linear algebra is:
[AB]ik = Aij Bjk (119)
Again, here we are using matrix notation; otherwise a dot should be inserted between the two matrices.
• The dot product of two vectors is:
A · B = δij Ai Bj = Ai Bi (120)
The readers are referred to § 2.6.5 for a more general definition of this type of product that includes higher rank tensors.
• The cross product of two vectors is:
[A × B]i = εijk Aj Bk (121)
• The scalar triple product of three vectors is:
A · (B × C) = εijk Ai Bj Ck (122)
12 The matrix multiplication in matrix notation is equivalent to a dot product operation in tensor notation.
4.2 Scalar Invariants of Tensors
• In the following we list and write in tensor notation a number of invariants of low rank tensors which have special importance due to their widespread applications in vector and tensor calculus. All these invariants are scalars.
• The value of a scalar (rank-0 tensor), which consists of a magnitude and a sign, is
invariant under coordinate transformation.
• An invariant of a vector (rank-1 tensor) under coordinate transformations is its magni-
tude, i.e. length (the direction is also invariant but it is not scalar!).13
• The main three independent scalar invariants of a rank-2 tensor A under change of basis are:
I = tr (A) = Aii (124)
II = tr (A²) = Aij Aji (125)
III = tr (A³) = Aij Ajk Aki (126)
• Different forms of the three invariants of a rank-2 tensor A, which are also widely used, are:
I1 = I = Aii (127)
I2 = (1/2)(I² − II) = (1/2)(Aii Ajj − Aij Aji) (128)
I3 = det (A) = (1/3!)(I³ − 3I·II + 2III) = (1/3!) εijk εpqr Aip Ajq Akr (129)
• The invariants I, II and III can similarly be defined in terms of the invariants I1, I2 and I3 as follows:
I = I1 (130)
II = I1² − 2I2 (131)
III = I1³ − 3I1 I2 + 3I3 (132)
13 In fact the magnitude alone is invariant under coordinate transformations even for pseudo vectors because it is a scalar.
• Since the determinant of a matrix representing a rank-2 tensor is invariant, then if the
determinant vanishes in one coordinate system it will vanish in all coordinate systems and
vice versa. Consequently, if a rank-2 tensor is invertible in a particular coordinate system,
it is invertible in all coordinate systems.
• Ten joint invariants between two rank-2 tensors, A and B, can be formed; these are:
tr (A), tr (B), tr (A2 ), tr (B2 ), tr (A3 ), tr (B3 ), tr (A · B), tr (A2 · B), tr (A · B2 ) and
tr (A2 · B2 ).
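A numerical sketch of the invariance of I, II and III (Eqs. 124-126) under an orthogonal change of basis, together with the determinant relation of Eq. 129; the test data and the rotation (generated via QR decomposition) are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3))
Q, _ = np.linalg.qr(rng.random((3, 3)))   # an orthogonal matrix
A_bar = Q @ A @ Q.T                        # the same tensor in the rotated basis

invariants = lambda T: (np.trace(T),
                        np.trace(T @ T),
                        np.trace(T @ T @ T))   # I, II, III of Eqs. 124-126

assert np.allclose(invariants(A), invariants(A_bar))

# Eq. 129: I3 = det(A) = (1/3!)(I^3 - 3 I.II + 2 III)
I, II, III = invariants(A)
assert np.isclose(np.linalg.det(A), (I**3 - 3*I*II + 2*III) / 6.0)
```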
4.3 Common Differential Operations in Tensor Notation
• Here we present the most common differential operations as defined by tensor notation. These operations are mostly based on the various types of interaction between the vector differential operator nabla ∇ with tensors of different ranks, as well as interaction with other types of operation like dot and cross products.
4.3.1 Cartesian System
• The gradient of a differentiable scalar function of position f is a vector given by:
[∇f ]i = ∇i f = ∂f/∂xi = ∂i f = f,i (134)
• The gradient of a differentiable vector function of position A (which is the outer product,
as defined in § 2.6.3, between the ∇ operator and the vector) is a rank-2 tensor defined
by:
[∇A]ij = ∂i Aj (135)
• The gradient operation is distributive but not commutative or associative:
∇ (f + h) = ∇f + ∇h (136)
∇f ≠ f ∇ (137)
(∇f ) h ≠ ∇ (f h) (138)
• The divergence of a differentiable vector A is a scalar given by:
∇ · A = δij ∂Ai/∂xj = ∂Ai/∂xi = ∇i Ai = ∂i Ai = Ai,i (139)
The divergence operation can also be viewed as taking the gradient of the vector followed
by a contraction. Hence, the divergence of a vector is invariant because it is the trace of
a rank-2 tensor.14
• The divergence of a differentiable rank-2 tensor A is a vector defined in one of its forms by:
[∇ · A]i = ∂j Aji (140)
and in another form by:
[∇ · A]i = ∂j Aij (141)
These two different forms can be given, respectively, in symbolic notation by:
∇ · A   &   ∇ · A^T (142)
∇ · (A + B) = ∇ · A + ∇ · B (143)
∇ · A ≠ A · ∇ (144)
∇ · (f A) ≠ ∇f · A (145)
14 It may also be argued that the divergence of a vector is a scalar and hence it is invariant.
• The curl of a differentiable vector A is a vector given by:
[∇ × A]i = εijk ∂Ak/∂xj = εijk ∇j Ak = εijk ∂j Ak = εijk Ak,j (146)
• The curl operation may be generalized to tensors of rank > 1, and hence the curl of a differentiable rank-2 tensor A can be defined as a rank-2 tensor given by:
[∇ × A]ij = εimn ∂m Anj (147)
∇ × (A + B) = ∇ × A + ∇ × B (148)
∇ × A ≠ A × ∇ (149)
∇ × (A × B) ≠ (∇ × A) × B (150)
• The Laplacian scalar operator, also called the harmonic operator, acting on a differentiable scalar f is given by:
∆f = ∇²f = δij ∂²f/(∂xi ∂xj) = ∂²f/(∂xi ∂xi) = ∇ii f = ∂ii f = f,ii (151)
• The Laplacian operator acting on a differentiable vector A is defined for each component of the vector similar to the definition of the Laplacian acting on a scalar, that is
[∇²A]i = ∂jj Ai (152)
• The following scalar differential operator is commonly used in science (e.g. in fluid dynamics):
A · ∇ = Ai ∇i = Ai ∂/∂xi = Ai ∂i (153)
• The differentiation of a tensor increases its rank by one, by introducing an extra covariant
index, unless it implies a contraction in which case it reduces the rank by one. Therefore
the gradient of a scalar is a vector and the gradient of a vector is a rank-2 tensor (∂i Aj ),
while the divergence of a vector is a scalar and the divergence of a rank-2 tensor is a vector
(∂j Aji or ∂i Aji ). This may be justified by the fact that ∇ is a vector operator. On the
other hand the Laplacian operator does not change the rank since it is a scalar operator;
hence the Laplacian of a scalar is a scalar and the Laplacian of a vector is a vector.
4.3.2 Other Coordinate Systems
• For completeness, we define here some differential operations in the most commonly used non-Cartesian coordinate systems, namely cylindrical and spherical systems, as well as general orthogonal coordinate systems.
• We can use indexed generalized coordinates like q1, q2 and q3 for the cylindrical coordinates (ρ, φ, z) and the spherical coordinates (r, θ, φ). However, for more clarity at this level and to follow the more conventional practice, we use the coordinates of these systems as suffixes in place of the indices used in the tensor notation.15
• For the cylindrical system identified by the coordinates (ρ, φ, z) with orthonormal basis vectors eρ, eφ and ez:16
The ∇ operator is:
∇ = eρ ∂ρ + eφ (1/ρ) ∂φ + ez ∂z (155)
The Laplacian operator is:
∇² = ∂ρρ + (1/ρ) ∂ρ + (1/ρ²) ∂φφ + ∂zz (156)
The gradient of a differentiable scalar f is:
∇f = eρ ∂f/∂ρ + eφ (1/ρ) ∂f/∂φ + ez ∂f/∂z (157)
The divergence of a differentiable vector A is:
∇ · A = (1/ρ) [∂(ρAρ)/∂ρ + ∂Aφ/∂φ + ∂(ρAz)/∂z] (158)
For plane polar coordinate systems, these operators and operations can be obtained by dropping the z components or terms from the cylindrical form of the above operators and operations.
15 There is another reason, that is, these are physical components, not covariant or contravariant.
16 It should be obvious that since ρ, φ and z are specific coordinates and not variable indices, the summation convention does not apply.
• For the spherical system identified by the coordinates (r, θ, φ) with orthonormal basis vectors er, eθ and eφ:17
The ∇ operator is:
∇ = er ∂r + eθ (1/r) ∂θ + eφ [1/(r sin θ)] ∂φ (160)
The Laplacian operator is:
∇² = ∂rr + (2/r) ∂r + (1/r²) ∂θθ + [cos θ/(r² sin θ)] ∂θ + [1/(r² sin²θ)] ∂φφ (161)
The gradient of a differentiable scalar f is:
∇f = er ∂f/∂r + eθ (1/r) ∂f/∂θ + eφ [1/(r sin θ)] ∂f/∂φ (162)
The divergence of a differentiable vector A is:
∇ · A = [1/(r² sin θ)] [sin θ ∂(r²Ar)/∂r + r ∂(sin θ Aθ)/∂θ + r ∂Aφ/∂φ] (163)
17 Again, the summation convention does not apply to r, θ and φ.
• For general orthogonal systems identified by the coordinates (u1, u2, u3), with unit basis vectors u1, u2 and u3 and scale factors h1, h2 and h3:
The Laplacian operator is:
∇² = [1/(h1h2h3)] [∂/∂u1 ((h2h3/h1) ∂/∂u1) + ∂/∂u2 ((h1h3/h2) ∂/∂u2) + ∂/∂u3 ((h1h2/h3) ∂/∂u3)] (166)
The gradient of a differentiable scalar f is:
∇f = (u1/h1) ∂f/∂u1 + (u2/h2) ∂f/∂u2 + (u3/h3) ∂f/∂u3 (167)
The divergence of a differentiable vector A is:
∇ · A = [1/(h1h2h3)] [∂(h2h3A1)/∂u1 + ∂(h1h3A2)/∂u2 + ∂(h1h2A3)/∂u3] (168)
4.4 Common Identities in Vector and Tensor Notation
• Here we present some of the widely used identities of vector calculus in the traditional vector notation and in its equivalent tensor notation. In the following bullet points, f and h are differentiable scalar fields; A, B, C and D are differentiable vector fields; and r = xi ei is the position vector.
∇ · r = n
⇕ (170)
∂i xi = n
∇ × r = 0
⇕ (171)
εijk ∂j xk = 0
∇ (a · r) = a
⇕ (172)
∂i (aj xj) = ai
∇ · (∇f ) = ∇²f
⇕ (173)
∂i (∂i f ) = ∂ii f
∇ · (∇ × A) = 0
⇕ (174)
εijk ∂i ∂j Ak = 0
∇ × (∇f ) = 0
⇕ (175)
εijk ∂j ∂k f = 0
∇ (f h) = f ∇h + h∇f
⇕ (176)
∂i (f h) = f ∂i h + h ∂i f
∇ · (f A) = f ∇ · A + A · ∇f
⇕ (177)
∂i (f Ai) = f ∂i Ai + Ai ∂i f
∇ × (f A) = f ∇ × A + ∇f × A
⇕ (178)
εijk ∂j (f Ak) = f εijk ∂j Ak + εijk (∂j f ) Ak
A · (B × C) = C · (A × B) = B · (C × A)
⇕ (179)
εijk Ai Bj Ck = εkij Ck Ai Bj = εjki Bj Ck Ai
A × (B × C) = B (A · C) − C (A · B)
⇕ (180)
εijk εklm Aj Bl Cm = Bi (Am Cm) − Ci (Al Bl)
A × (∇ × B) = (∇B) · A − A · ∇B
⇕ (181)
εijk εklm Aj ∂l Bm = Am ∂i Bm − Al ∂l Bi
∇ × (∇ × A) = ∇ (∇ · A) − ∇²A
⇕ (182)
εijk εklm ∂j ∂l Am = ∂i (∂m Am) − ∂ll Ai
∇ (A · B) = A × (∇ × B) + B × (∇ × A) + (A · ∇) B + (B · ∇) A
⇕ (183)
∂i (Am Bm) = εijk εklm Aj ∂l Bm + εijk εklm Bj ∂l Am + Al ∂l Bi + Bl ∂l Ai
∇ · (A × B) = B · (∇ × A) − A · (∇ × B)
⇕ (184)
∂i (εijk Aj Bk) = Bk εkij ∂i Aj − Aj εjik ∂i Bk
∇ × (A × B) = (B · ∇) A + (∇ · B) A − (∇ · A) B − (A · ∇) B
⇕ (185)
εijk εklm ∂j (Al Bm) = Bm ∂m Ai + Ai ∂m Bm − Bi ∂j Aj − Aj ∂j Bi
•
(A × B) · (C × D) = det[ A·C  A·D ; B·C  B·D ]
⇕ (186)
εijk Aj Bk εilm Cl Dm = (Aj Cj)(Bk Dk) − (Aj Dj)(Bk Ck)
•
(A × B) × (C × D) = [D · (A × B)] C − [C · (A × B)] D
⇕ (187)
εijk εjlm εkpq Al Bm Cp Dq = (Dq εqlm Al Bm) Ci − (Cp εplm Al Bm) Di
• In vector and tensor notations, the condition for a vector field A to be solenoidal is:
∇ · A = 0
⇕ (188)
∂i Ai = 0
• In vector and tensor notations, the condition for a vector field A to be irrotational is:
∇ × A = 0
⇕ (189)
εijk ∂j Ak = 0
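Before the formal proofs of § 4.6, the purely algebraic identities above can be spot-checked numerically; a sketch using random vectors (the random data is a testing assumption, not part of the identities):

```python
import numpy as np

eps3 = np.zeros((3, 3, 3))
eps3[0, 1, 2] = eps3[1, 2, 0] = eps3[2, 0, 1] = 1.0
eps3[0, 2, 1] = eps3[2, 1, 0] = eps3[1, 0, 2] = -1.0

rng = np.random.default_rng(1)
A, B, C, D = (rng.random(3) for _ in range(4))

cross = lambda u, v: np.einsum('ijk,j,k->i', eps3, u, v)   # Eq. 121

# Eq. 180: A x (B x C) = B (A.C) - C (A.B)
assert np.allclose(cross(A, cross(B, C)), B * (A @ C) - C * (A @ B))

# Eq. 186 (Lagrange identity):
assert np.isclose(cross(A, B) @ cross(C, D),
                  (A @ C) * (B @ D) - (A @ D) * (B @ C))
```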
4.5 Integral Theorems in Tensor Notation
• The divergence theorem for a differentiable vector field A in vector and tensor notation is:
∫∫∫V ∇ · A dτ = ∫∫S A · n dσ
⇕ (190)
∫V ∂i Ai dτ = ∫S Ai ni dσ
where V is a bounded region enclosed by the surface S, dτ and dσ are volume and surface elements respectively, and n (with components ni) is the outward unit vector normal to the surface.
• The divergence theorem for differentiable tensor fields of higher ranks A in tensor notation for the index k is:
∫V ∂k Aij...k...m dτ = ∫S Aij...k...m nk dσ (192)
• Stokes theorem for a differentiable vector field A in vector and tensor notation is:
∫∫S (∇ × A) · n dσ = ∮C A · dr
⇕ (193)
∫S εijk ∂j Ak ni dσ = ∮C Ai dxi
where C stands for the perimeter of the surface S and dr is the vector element tangent to the perimeter.
• Stokes theorem for a differentiable rank-2 tensor field A in tensor notation for the first index is:
∫S εijk ∂j Akl ni dσ = ∮C Ail dxi (194)
• Stokes theorem for differentiable tensor fields of higher ranks A in tensor notation for the index k is:
∫S εijk ∂j Alm...k...n ni dσ = ∮C Alm...k...n dxk (195)
4.6 Examples of Using Tensor Techniques to Prove Identities
• ∇ · r = n:
∇ · r = ∂i xi (Eq. 139)
= n (Eq. 80)
• ∇ × r = 0:
[∇ × r]i = εijk ∂j xk (Eq. 146)
= 0 (Eq. 71)
• ∇ (a · r) = a:
[∇ (a · r)]i = ∂i (aj xj) (Eqs. 134 & 120)
= aj ∂i xj + xj ∂i aj (product rule)
= aj ∂i xj (aj is constant)
= aj δji (Eq. 79) (198)
= ai (Eq. 76)
Because i is a free index the identity is proved for all components.
• ∇ · (∇f ) = ∇²f :
∇ · (∇f ) = ∂i [∇f ]i (Eq. 139)
= ∂i (∂i f ) (Eq. 134)
= ∂ii f (notation)
= ∇2 f (Eq. 151)
• ∇ · (∇ × A) = 0:
∇ · (∇ × A) = ∂i [∇ × A]i (Eq. 139)
= ∂i (εijk ∂j Ak) (Eq. 146)
= εijk ∂i ∂j Ak (ε is constant)
= 0
This can also be concluded from line three by arguing that: since by the continuity condition ∂i and ∂j can change their order with no change in the value of the term while a corresponding change of the order of i and j in εijk results in a sign change, we see that each term in the sum has its own negative and hence the terms add up to zero (see Eq. 94).
• ∇ × (∇f ) = 0:
[∇ × (∇f )]i = εijk ∂j [∇f ]k (Eq. 146)
= εijk ∂j (∂k f ) (Eq. 134)
= εijk ∂j ∂k f (notation)
= 0
This can also be concluded from line three by a similar argument to the one given in the previous point. Because [∇ × (∇f )]i is an arbitrary component, then each component is zero.
• ∇ (f h) = f ∇h + h∇f :
[∇ (f h)]i = ∂i (f h) (Eq. 134)
= f ∂i h + h ∂i f (product rule)
= [f ∇h + h∇f ]i (Eqs. 134 & 16)
Because i is a free index the identity is proved for all components.
• ∇ · (f A) = f ∇ · A + A · ∇f :
∇ · (f A) = ∂i (f Ai) (definition of index)
= f ∂i Ai + Ai ∂i f (product rule) (203)
= f ∇ · A + A · ∇f (Eqs. 139 & 153)
• ∇ × (f A) = f ∇ × A + ∇f × A:
[∇ × (f A)]i = εijk ∂j (f Ak) (Eq. 146)
= f εijk ∂j Ak + εijk (∂j f ) Ak (product rule)
= [f ∇ × A]i + [∇f × A]i (Eqs. 146 & 121)
Because i is a free index the identity is proved for all components.
• A · (B × C) = C · (A × B) = B · (C × A):
A · (B × C) = εijk Ai Bj Ck (Eq. 122)
= εkij Ck Ai Bj (commutativity)
= C · (A × B) (Eq. 122)
= εjki Bj Ck Ai (commutativity)
= B · (C × A) (Eq. 122)
The negative permutations of these identities can be similarly obtained and proved by changing the order of the vectors in the cross products which results in a sign change.
• A × (B × C) = B (A · C) − C (A · B):
[A × (B × C)]i = εijk Aj [B × C]k (Eq. 121)
= εijk εklm Aj Bl Cm (Eq. 121)
= (δil δjm − δim δjl) Aj Bl Cm (Eq. 102)
= Bi (A · C) − Ci (A · B) (Eq. 120)
• A × (∇ × B) = (∇B) · A − A · ∇B:
[A × (∇ × B)]i = εijk Aj [∇ × B]k (Eq. 121)
= εijk εklm Aj ∂l Bm (Eq. 146)
= (δil δjm − δim δjl) Aj ∂l Bm (Eq. 102)
= Am ∂i Bm − Al ∂l Bi (Eq. 76)
= [(∇B) · A]i − [A · ∇B]i (Eqs. 135 & 153)
• ∇ × (∇ × A) = ∇ (∇ · A) − ∇²A:
[∇ × (∇ × A)]i = εijk ∂j [∇ × A]k (Eq. 146)
= εijk εklm ∂j ∂l Am (Eq. 146)
= (δil δjm − δim δjl) ∂j ∂l Am (Eq. 102)
= ∂m ∂i Am − ∂l ∂l Ai (Eq. 76)
= ∂i (∂m Am) − ∂ll Ai (Eq. 6)
= [∇ (∇ · A)]i − [∇²A]i (Eqs. 139, 134 & 152)
= [∇ (∇ · A) − ∇²A]i (Eq. 16) (208)
Because i is a free index the identity is proved for all components. This identity can also be considered as an instance of the identity before the last one, observing that in the second term on the right hand side the Laplacian should precede the vector, and hence no independent proof is required.
• ∇ (A · B) = A × (∇ × B) + B × (∇ × A) + (A · ∇) B + (B · ∇) A:
4.6 Examples of Using Tensor Techniques to Prove Identities 71
We start from the right hand side and end with the left hand side
[A × (∇ × B) + B × (∇ × A) + (A · ∇) B + (B · ∇) A]i =
ijk Aj [∇ × B]k + ijk Bj [∇ × A]k + (Al ∂l ) Bi + (Bl ∂l ) Ai = (Eqs. 121, 139 & indexing)
(δil δjm − δim δjl ) Aj ∂l Bm + (δil δjm − δim δjl ) Bj ∂l Am + (Al ∂l ) Bi + (Bl ∂l ) Ai = (Eq. 102) (209)
(δil δjm Aj ∂l Bm − δim δjl Aj ∂l Bm ) + (δil δjm Bj ∂l Am − δim δjl Bj ∂l Am ) + (Al ∂l ) Bi + (Bl ∂l ) Ai = (distributivity)
Am ∂i Bm + Bm ∂i Am = (Eq. 76)
• ∇ · (A × B) = B · (∇ × A) − A · (∇ × B):

    ∇ · (A × B) = ∂_i (ε_ijk A_j B_k)                       (Eqs. 139 & 121)
                = ε_ijk B_k ∂_i A_j + ε_ijk A_j ∂_i B_k     (product rule)
                = B_k ε_kij ∂_i A_j − A_j ε_jik ∂_i B_k     (properties of ε)
                = B_k [∇ × A]_k − A_j [∇ × B]_j             (Eq. 146)
                = B · (∇ × A) − A · (∇ × B)                 (Eq. 120)

• ∇ × (A × B) = (B · ∇) A + (∇ · B) A − (∇ · A) B − (A · ∇) B:

    [∇ × (A × B)]_i = ε_ijk ∂_j [A × B]_k                                (Eq. 146)
                    = ε_ijk ε_klm ∂_j (A_l B_m)                          (Eq. 121)
                    = (δ_il δ_jm − δ_im δ_jl)(B_m ∂_j A_l + A_l ∂_j B_m) (Eq. 102 & product rule)
                    = B_m ∂_m A_i + A_i ∂_m B_m − B_i ∂_j A_j − A_j ∂_j B_i    (Eq. 76)
                    = [(B · ∇) A]_i + [(∇ · B) A]_i − [(∇ · A) B]_i − [(A · ∇) B]_i    (Eqs. 153 & 139)
• (A × B) · (C × D) = | A·C  A·D |
                      | B·C  B·D | :

    (A × B) · (C × D) = [A × B]_i [C × D]_i                        (Eq. 120)
                      = ε_ijk A_j B_k ε_ilm C_l D_m                (Eq. 121)
                      = (δ_jl δ_km − δ_jm δ_kl) A_j B_k C_l D_m    (cyclic property of ε & Eq. 102)
                      = (A · C) (B · D) − (A · D) (B · C)          (Eq. 120)
                      = | A·C  A·D |
                        | B·C  B·D |    (definition of determinant)    (212)
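• A numerical spot check of this determinantal (Lagrange) identity with random vectors (an
illustrative addition, using NumPy):

    import numpy as np

    rng = np.random.default_rng(seed=1)
    A, B, C, D = (rng.random(3) for _ in range(4))

    lhs = np.dot(np.cross(A, B), np.cross(C, D))
    # 2x2 determinant of the pairwise dot products
    rhs = np.linalg.det(np.array([[A @ C, A @ D],
                                  [B @ C, B @ D]]))
    assert np.isclose(lhs, rhs)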
• (A × B) × (C × D) = [D · (A × B)] C − [C · (A × B)] D:

    [(A × B) × (C × D)]_i = ε_ijk [A × B]_j [C × D]_k                    (Eq. 121)
                          = ε_ijk ε_klm [A × B]_j C_l D_m                (Eq. 121)
                          = (δ_il δ_jm − δ_im δ_jl) [A × B]_j C_l D_m    (Eq. 102)
                          = C_i [A × B]_m D_m − D_i [A × B]_j C_j        (Eq. 76)
                          = [D · (A × B)] C_i − [C · (A × B)] D_i        (Eq. 120)
5 Metric Tensor
• This is a rank-2 tensor which may also be called the fundamental tensor.
• The main purpose of the metric tensor is to generalize the concept of distance to gen-
eral curvilinear coordinate frames and maintain the invariance of distance in different
coordinate systems.
• In orthonormal Cartesian coordinate systems the distance element squared, (ds)², be-
tween two infinitesimally neighboring points in space, one with coordinates x_i and the
other with coordinates x_i + dx_i, is given by

    (ds)² = dx_i dx_i

This definition of distance is the key to introducing a rank-2 tensor, g_ij, called the
metric tensor which, for a general coordinate system, is defined by

    (ds)² = g_ij dx^i dx^j

and which satisfies

    g_ij = E_i · E_j    &    g^ij = E^i · E^j

where the indexed E are the covariant and contravariant basis vectors as defined in § 2.5.1.
• The mixed type metric tensor is given by:

    g^i_j = E^i · E_j = δ^i_j    &    g_i^j = E_i · E^j = δ_i^j    (217)
• For a coordinate system in which the metric tensor can be cast in a diagonal form where
the diagonal elements are ±1 the metric is called flat.
• For Cartesian coordinate systems, which are orthonormal flat-space systems, we have

    g_ij = δ_ij = g^ij
• The contravariant metric tensor is used for raising indices of covariant tensors and the
covariant metric tensor is used for lowering indices of contravariant tensors, e.g.

    A^i = g^ij A_j    &    A_i = g_ij A^j    (220)

where the metric tensor acts, like a Kronecker delta, as an index replacement operator.
Hence, any tensor can be cast into a covariant, a contravariant, or a mixed form. However,
the order of the indices should be respected in this process, since A^i_j and A_j^i are in
general different tensors. Some authors insert dots (e.g. A_i^{·j}) to remove any ambiguity
about the order of the indices.
• The covariant and contravariant metric tensors are inverses of each other, that is:

    [g_ij] = [g^ij]⁻¹    &    [g^ij] = [g_ij]⁻¹    (222)

Hence:

    g^ik g_kj = δ^i_j    &    g_ik g^kj = δ_i^j    (223)
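• The raising/lowering and inverse relations (Eqs. 220, 222 and 223) can be checked
numerically; the following NumPy sketch (an illustrative addition) does so for the
spherical metric of Eq. 226 at the arbitrary sample point r = 2, θ = π/3:

    import numpy as np

    # Spherical covariant metric g_ij evaluated at an arbitrary sample point
    r, theta = 2.0, np.pi / 3
    g = np.diag([1.0, r**2, (r * np.sin(theta))**2])
    g_inv = np.linalg.inv(g)   # contravariant metric g^ij

    # Eq. 223: the two forms are inverses of each other
    assert np.allclose(g_inv @ g, np.eye(3))

    # Eq. 220: raising an index and then lowering it recovers the original components
    A_cov = np.array([1.0, 2.0, 3.0])   # arbitrary covariant components A_i
    A_con = g_inv @ A_cov               # A^i = g^ij A_j
    assert np.allclose(g @ A_con, A_cov)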
• It is common to reserve the term “metric tensor” for the covariant form and to call the
contravariant form, which is its inverse, the “associate” or “conjugate” or “reciprocal”
metric tensor.
• As a tensor, the metric has a significance regardless of any coordinate system although
it requires a coordinate system to be represented in a specific form.
• For orthogonal coordinate systems the metric tensor is diagonal, i.e. g_ij = g^ij = 0 for
i ≠ j.
• For flat-space orthonormal Cartesian coordinate systems in a 3D space, the metric tensor
is given by:

    [g_ij] = [δ_ij] = | 1  0  0 |  = [δ^ij] = [g^ij]    (224)
                      | 0  1  0 |
                      | 0  0  1 |
• For cylindrical coordinate systems with coordinates (ρ, φ, z), the metric tensor is given
by:

    [g_ij] = | 1  0   0 |    &    [g^ij] = | 1  0     0 |    (225)
             | 0  ρ²  0 |                  | 0  1/ρ²  0 |
             | 0  0   1 |                  | 0  0     1 |
• For spherical coordinate systems with coordinates (r, θ, φ), the metric tensor is given
by:

    [g_ij] = | 1  0   0       |    &    [g^ij] = | 1  0     0            |    (226)
             | 0  r²  0       |                  | 0  1/r²  0            |
             | 0  0   r²sin²θ |                  | 0  0     1/(r²sin²θ)  |
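• The metric entries above follow directly from g_ij = E_i · E_j with the covariant basis
vectors given by the partial derivatives of the position vector. The following SymPy
sketch (an illustrative addition) recovers Eq. 226 from the Jacobian of the
Cartesian-from-spherical transformation:

    import sympy as sp

    r, theta, phi = sp.symbols('r theta phi', positive=True)

    # Cartesian position vector in terms of spherical coordinates
    X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
                   r * sp.sin(theta) * sp.sin(phi),
                   r * sp.cos(theta)])

    # Covariant basis vectors are the columns of the Jacobian: E_i = dX/dq^i
    J = X.jacobian([r, theta, phi])

    # g_ij = E_i . E_j, i.e. J^T J, which should be diag(1, r^2, r^2 sin^2(theta))
    g = sp.simplify(J.T * J)
    assert sp.simplify(g - sp.diag(1, r**2, r**2 * sp.sin(theta)**2)) == sp.zeros(3, 3)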
6 Covariant Differentiation
• The ordinary derivative of a tensor is not, in general, a tensor. The objective of
covariant differentiation is to ensure the invariance of the derivative (i.e. that it is a
tensor) in general coordinate systems, and this results in applying more sophisticated
rules using Christoffel symbols, where different differentiation rules for covariant and
contravariant indices apply. The resulting covariant derivative is a tensor which is one
rank higher than the differentiated tensor.
• The Christoffel symbol of the second kind is defined by:

    Γ^k_ij = (g^kl/2) (∂g_il/∂x^j + ∂g_jl/∂x^i − ∂g_ij/∂x^l)    (227)

where the indexed g is the metric tensor in its contravariant and covariant forms, with an
implied summation over l. It is noteworthy that Christoffel symbols are not tensors.
• The Christoffel symbols of the second kind are symmetric in their two lower indices:

    Γ^k_ij = Γ^k_ji    (228)
• For Cartesian coordinate systems, the Christoffel symbols are zero for all values of the
indices.
• For cylindrical coordinate systems (ρ, φ, z), the Christoffel symbols are zero for all
values of the indices except:

    Γ^1_22 = −ρ
    Γ^2_12 = Γ^2_21 = 1/ρ    (229)
• For spherical coordinate systems (r, θ, φ), the Christoffel symbols are zero for all
values of the indices except:

    Γ^1_22 = −r
    Γ^1_33 = −r sin²θ
    Γ^2_12 = Γ^2_21 = 1/r    (230)
    Γ^2_33 = −sin θ cos θ
    Γ^3_13 = Γ^3_31 = 1/r
    Γ^3_23 = Γ^3_32 = cot θ
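• These symbols can be generated mechanically from Eq. 227. The following SymPy sketch (an
illustrative addition; the helper name christoffel is arbitrary, and indices are 0-based)
reproduces a few of the non-zero spherical symbols listed above:

    import sympy as sp

    r, theta, phi = sp.symbols('r theta phi', positive=True)
    q = [r, theta, phi]

    g = sp.diag(1, r**2, r**2 * sp.sin(theta)**2)   # spherical covariant metric
    g_inv = g.inv()

    def christoffel(k, i, j):
        # Eq. 227: Gamma^k_ij = (g^kl / 2)(d_j g_il + d_i g_jl - d_l g_ij)
        return sum(g_inv[k, l] * (sp.diff(g[i, l], q[j])
                                  + sp.diff(g[j, l], q[i])
                                  - sp.diff(g[i, j], q[l])) / 2
                   for l in range(3))

    # A few of the non-zero symbols listed above (0-based indices)
    assert sp.simplify(christoffel(0, 1, 1) + r) == 0                          # Gamma^1_22 = -r
    assert sp.simplify(christoffel(1, 0, 1) - 1/r) == 0                        # Gamma^2_12 = 1/r
    assert sp.simplify(christoffel(1, 2, 2) + sp.sin(theta)*sp.cos(theta)) == 0  # Gamma^2_33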
• For a differentiable scalar f the covariant derivative is the same as the normal partial
derivative, that is:

    f_;i = ∂_i f    (231)

This is justified by the fact that the covariant derivative is different from the normal
partial derivative because the basis vectors in general coordinate systems are dependent
on their spatial position, and since a scalar is independent of the basis vectors the
covariant and partial derivatives are identical.
• For a differentiable vector A the covariant derivative is:

    A_j;i = ∂_i A_j − Γ^k_ji A_k     (covariant)
    A^j_;i = ∂_i A^j + Γ^j_ki A^k    (contravariant)    (232)
• For a differentiable rank-2 tensor A the covariant derivative is:

    A_jk;i = ∂_i A_jk − Γ^l_ji A_lk − Γ^l_ki A_jl      (covariant)
    A^jk_;i = ∂_i A^jk + Γ^j_li A^lk + Γ^k_li A^jl     (contravariant)    (233)
• For differentiable tensor fields of higher rank the covariant derivative is:

    A^ij...k_lm...p;q = ∂_q A^ij...k_lm...p
                        + Γ^i_aq A^aj...k_lm...p + Γ^j_aq A^ia...k_lm...p + ··· + Γ^k_aq A^ij...a_lm...p    (234)
                        − Γ^a_lq A^ij...k_am...p − Γ^a_mq A^ij...k_la...p − ··· − Γ^a_pq A^ij...k_lm...a
• From the last three points a pattern for covariant differentiation emerges: it starts
with a partial derivative term, then for each tensor index an extra Christoffel symbol
term is added, positive for superscripts and negative for subscripts, where the
differentiation index is the second of the lower indices in the Christoffel symbol.
• Since the Christoffel symbols are identically zero in Cartesian coordinate systems, the
covariant derivative is the same as the normal partial derivative for all tensor ranks.
• The covariant derivative of the metric tensor is zero in all coordinate systems.
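• This metric compatibility can be verified directly from Eq. 227 together with the rank-2
covariant differentiation rule above. The following SymPy sketch (an illustrative
addition; it recomputes the Christoffel symbols so as to be self-contained) confirms that
g_ij;k vanishes for all index values in spherical coordinates:

    import sympy as sp

    r, theta, phi = sp.symbols('r theta phi', positive=True)
    q = [r, theta, phi]
    g = sp.diag(1, r**2, r**2 * sp.sin(theta)**2)
    g_inv = g.inv()

    def Gamma(k, i, j):
        # Christoffel symbols of the second kind (Eq. 227)
        return sum(g_inv[k, l] * (sp.diff(g[i, l], q[j])
                                  + sp.diff(g[j, l], q[i])
                                  - sp.diff(g[i, j], q[l])) / 2
                   for l in range(3))

    def cov_deriv_rank2_cov(T, i, j, k):
        # T_ij;k = d_k T_ij - Gamma^a_ik T_aj - Gamma^a_jk T_ia
        return (sp.diff(T[i, j], q[k])
                - sum(Gamma(a, i, k) * T[a, j] for a in range(3))
                - sum(Gamma(a, j, k) * T[i, a] for a in range(3)))

    # The covariant derivative of the metric tensor vanishes identically
    assert all(sp.simplify(cov_deriv_rank2_cov(g, i, j, k)) == 0
               for i in range(3) for j in range(3) for k in range(3))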
• Several rules of normal differentiation similarly apply to covariant differentiation. For
example, covariant differentiation is a linear operation with respect to algebraic sums of
tensor terms:
∂;i (aA ± bB) = a∂;i A ± b∂;i B (235)
where a and b are scalar constants and A and B are differentiable tensor fields. The
product rule of normal differentiation also applies to covariant differentiation of tensor
multiplication:
∂;i (AB) = (∂;i A) B + A∂;i B (236)
This rule is also valid for the inner product of tensors because the inner product is an
outer product operation followed by a contraction of indices, and covariant differentiation
and contraction of indices commute.
• The covariant derivative operator can bypass the raising/lowering index operator:

    g_ij A^j_;k = (g_ij A^j)_;k = A_i;k

and hence the metric behaves like a constant with respect to the covariant operator.
• A principal difference between normal partial differentiation and covariant
differentiation is that for successive differential operations the partial derivative
operators do commute with each other (assuming certain continuity conditions) but the
covariant operators do not commute, that is:

    ∂_i ∂_j = ∂_j ∂_i    but    A_k;ij ≠ A_k;ji
References
[1] G.B. Arfken; H.J. Weber; F.E. Harris. Mathematical Methods for Physicists: A Com-
prehensive Guide. Elsevier Academic Press, seventh edition, 2013.
[2] R.B. Bird; R.C. Armstrong; O. Hassager. Dynamics of Polymeric Liquids, volume 1.
John Wiley & Sons, second edition, 1987.
[3] R.B. Bird; W.E. Stewart; E.N. Lightfoot. Transport Phenomena. John Wiley & Sons,
second edition, 2002.
[4] M.L. Boas. Mathematical Methods in the Physical Sciences. John Wiley & Sons Inc.,
third edition, 2006.
[5] C.F. Chan Man Fong; D. De Kee; P.N. Kaloni. Advanced Mathematics for Engineering
and Science. World Scientific Publishing Co. Pte. Ltd., first edition, 2003.
[6] T.L. Chow. Mathematical Methods for Physicists: A Concise Introduction. Cambridge
University Press, first edition, 2003.
[7] J.H. Heinbockel. Introduction to Tensor Calculus and Continuum Mechanics. 1996.
[8] D.C. Kay. Schaum’s Outline of Theory and Problems of Tensor Calculus. McGraw-Hill,
first edition, 1988.
[9] K.F. Riley; M.P. Hobson; S.J. Bence. Mathematical Methods for Physics and Engineering.
Cambridge University Press, third edition, 2006.
[10] D. Zwillinger, editor. CRC Standard Mathematical Tables and Formulae. CRC Press,
32nd edition, 2012.
Note: As well as the references cited above, I benefited during the writing of these notes
from many sources such as tutorials, presentations, and articles which I found on the
Internet.