Notes on Tensors
Mathematical preliminaries
In this chapter, we quickly summarize the necessary tensor algebra and calculus, and
introduce the notation employed in this text. We assume familiarity with matrix
algebra and indicial notation. More information may be obtained from standard texts
such as Strang (2005) or Knowles (1998).
1.2 Vectors
where θ is the angle between vectors a and b, êc is a unit vector normal to the plane
containing a and b, and εijk is the alternating tensor defined by

    εijk =  1   if i, j and k are an even permutation of 1, 2 and 3,
           −1   if i, j and k are an odd permutation of 1, 2 and 3,
            0   otherwise.                                             (1.4)
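As a quick numerical check, the definition (1.4) reproduces the cross product through (a × b)i = εijk aj bk. The following is a minimal NumPy sketch of ours (the array name `eps` and the sample vectors are our choices, not the book's):

```python
import numpy as np

# Build the alternating tensor eps[i, j, k] from definition (1.4).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations of 1, 2, 3
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations of 1, 2, 3

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_i = eps_ijk a_j b_k
cross = np.einsum('ijk,j,k->i', eps, a, b)
print(np.allclose(cross, np.cross(a, b)))  # True
```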
1.3 Tensors
(A + B) · a = C · a := A · a + B · a (1.5a)
and (A · B) · a = D · a := A · (B · a). (1.5b)
It is understood that the two tensors A and B relate vectors belonging to the same
set.
where the ‘·’ on the left-hand sides denotes a tensor operation, and the ‘·’ on the
right-hand sides the usual vector dot product. Contrasting the computation
with (1.1) suggests that a tensorial basis may be constructed by taking appropriate-
order tensor products of the unit vectors. We note that the above represents a
linear combination of êi ⊗ êj. Thus, a second-order tensorial basis in the coordinate
system O is given by the nine unit tensors êi ⊗ êj. A second-order tensor A may then
be written as
A = Aij êi ⊗ êj, (1.8)

in terms of A’s components Aij in O. These components, obtained by appealing to
(1.6), are given by the equations

Aij = êi · A · êj, (1.9)

that are reminiscent of analogous ones for vector components; see (1.1). We will
refer to the nine Aij’s as the “matrix of A in êi”, denoted by [A]. In another coordinate
system, say P, the tensorial basis is given by ê′i ⊗ ê′j, while A′ij = ê′i · A · ê′j constitute
the “matrix of A in ê′i”, denoted by [A]′. A second-order tensor’s interactions with
vectors and other second-order tensors may be obtained by repeated (if required)
application of (1.6). These operations are summarized below:
where the first two operations produce vectors, the next another second-order tensor,
and the last a scalar. The double-dot product ‘ : ’, as its form suggests, denotes a
sequential application of dot products, as illustrated above. The tensor A’s actions
on higher-order tensors may be analogously defined. When there is no confusion,
second-order tensors are referred to simply as tensors.
The transpose Aᵀ of a tensor A is defined by the formula

a · Aᵀ · b = b · A · a

for any two vectors a and b. The above results in [Aᵀ]ij = Aji = [A]ji. From the
above definition of a tensor’s transpose the following identities are easily proved:
W · b = w × b. (1.18)
where the last invariant represents the determinant of A that may also be computed
via standard formulae after finding A’s matrix in any coordinate system. Finally, as
for vectors, it is possible to measure the magnitude of a tensor, by employing the
double-dot product ‘ : ’ introduced in (1.10). The norm of a tensor A is defined by
|A| := √(A : Aᵀ) = √(tr (A · Aᵀ)) = √(Aij Aij). (1.23)
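The norm (1.23) coincides with the familiar Frobenius norm of A’s matrix in any coordinate system. A small NumPy sketch of ours (the sample matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])

# |A| = sqrt(A_ij A_ij), per (1.23)
norm_indices = np.sqrt(np.einsum('ij,ij->', A, A))
# ... which equals sqrt(tr(A . A^T)) and NumPy's Frobenius norm
norm_trace = np.sqrt(np.trace(A @ A.T))
print(np.isclose(norm_indices, norm_trace),
      np.isclose(norm_indices, np.linalg.norm(A, 'fro')))  # True True
```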
Frequently, and again for all tensors considered in this book, it is possible to
define associated inverse tensors, i.e., given A taking a to b, the inverse tensor A−1
brings b back to a. It is easy enough to see that a tensor and its inverse share the same
eigenvectors, but inverse eigenvalues. Thus, if A has a zero eigenvalue, its inverse
does not exist. The following identities regarding inverses are easily verified:
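Both statements, that A⁻¹ undoes A’s action and that it has reciprocal eigenvalues, are easy to check numerically. A NumPy sketch of ours, with an arbitrarily chosen symmetric, invertible A:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # symmetric and invertible (det A = 8)

# A^-1 brings b = A . a back to a
a = np.array([1.0, -1.0, 2.0])
b = A @ a
print(np.allclose(np.linalg.inv(A) @ b, a))  # True

# Eigenvalues of A^-1 are the reciprocals of those of A
# (eigh returns eigenvalues in ascending order, hence the sort)
w, _ = np.linalg.eigh(A)
w_inv, _ = np.linalg.eigh(np.linalg.inv(A))
print(np.allclose(np.sort(1.0 / w), w_inv))  # True
```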
An important class of tensors that will occur frequently in the text is the orthog-
onal tensor Q that has the property that given any vector a,
|Q · a| = |a|, (1.25)
i.e., Q preserves a vector’s length. From this the following properties follow:
In applications to follow, all orthogonal tensors will have determinant one. Such
proper orthogonal tensors are called rotation tensors. Physically, as its name sug-
gests, a rotation tensor represents rotation about the origin. It may be shown that of a
rotation tensor’s three eigenvalues, two are complex conjugates of norm one and the
third is unity; see e.g., Knowles (1998, p. 51). The eigenvector corresponding to the
unit eigenvalue provides the axis of rotation. The amount of rotation is provided
by the argument of the complex eigenvalue.
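The axis and angle can be extracted exactly as described. A NumPy sketch of ours, using a rotation about ê3 by a known angle as the test case:

```python
import numpy as np

theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])  # rotation about e3 by theta

w, V = np.linalg.eig(R)          # eigenvalues: exp(+i theta), exp(-i theta), 1

k = np.argmin(np.abs(w - 1.0))   # the unit eigenvalue
axis = np.real(V[:, k])          # its eigenvector is the rotation axis

j = np.argmax(np.imag(w))        # a complex eigenvalue of norm one
angle = np.angle(w[j])           # its argument is the amount of rotation

print(np.allclose(np.abs(axis), [0.0, 0.0, 1.0]), np.isclose(angle, theta))
```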
Symmetric and rotation tensors come together in the polar decomposition theo-
rem (Knowles 1998, p. 57), which states that for any tensor A with det A > 0, it is
possible to find a rotation tensor R and symmetric positive definite tensors U and V, so that

A = R · U = V · R,

uniquely. Thus, U = Rᵀ · V · R, and U and V share the same eigenvalues, while their
eigenvectors are related through R. We recall that transformation by a symmetric
tensor corresponds to linearly and independently scaling three mutually
perpendicular directions. Any linear transformation may thus be viewed as a rotation
followed (preceded) by three scalings along the orthogonal eigen-coordinate system
of V (U).
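One convenient way to compute the polar decomposition numerically goes through the singular value decomposition; this route is our sketch, not the book’s construction. Writing A = W S Vtᵀ... with the SVD factors, R = W · Vt is proper orthogonal when det A > 0, and U, V follow:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
if np.linalg.det(A) < 0:           # the theorem assumes det A > 0
    A = -A

# SVD route: A = W diag(S) Vt, then R = W Vt, U = Vt^T diag(S) Vt, V = W diag(S) W^T
W, S, Vt = np.linalg.svd(A)
R = W @ Vt
U = Vt.T @ np.diag(S) @ Vt
Vten = W @ np.diag(S) @ W.T

print(np.allclose(A, R @ U), np.allclose(A, Vten @ R),
      np.isclose(np.linalg.det(R), 1.0),
      np.allclose(U, R.T @ Vten @ R))  # True True True True
```

Since the singular values are positive and det A > 0, det R = +1 automatically, so R is a rotation.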
We have already mentioned the tensor product of two vectors in (1.7). Amongst
other things, the tensor product helps in “tensorizing” the vector operations of taking
dot- and cross-products, viz.,
a · b = tr (a ⊗ b) = a ⊗ b : 1, (1.28a)
and a × b = −2 ax sk (a ⊗ b). (1.28b)
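Both identities check out numerically. In the sketch below (ours), `sk` takes the skew part and `ax` extracts the axial vector; the sign convention wi = −½ εijk Wjk for the axial vector is our assumption, chosen so that it is consistent with (1.28b):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def sk(A):
    """Skew part of a second-order tensor."""
    return 0.5 * (A - A.T)

def ax(W):
    """Axial vector of a skew tensor: w_i = -1/2 eps_ijk W_jk (assumed convention)."""
    return -0.5 * np.einsum('ijk,jk->i', eps, W)

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 4.0, 2.0])
ab = np.outer(a, b)              # the tensor product a x b

print(np.isclose(a @ b, np.trace(ab)))                 # (1.28a): True
print(np.allclose(np.cross(a, b), -2.0 * ax(sk(ab))))  # (1.28b): True
```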
Some additional identities that are easily proved, and will often be used are
a ⊗ (A · b) = (a ⊗ b) · Aᵀ, (1.29a)
a · A · b = a ⊗ b : A, (1.29b)
sk A : B = tr (sk A · sym B) + tr (sk A · sk B) = ax A · ax sk B, (1.29c)
A · B : C = C · A : B = B · C : A (1.29d)
and tr (S · W) = 0, (1.29e)
where S is a symmetric and W a skew tensor.
First, consider third-order tensors. In terms of the third-order tensorial basis, êi ⊗
êj ⊗ êk in {O, êi}, a third-order tensor is defined as

A = Aijk êi ⊗ êj ⊗ êk.

An important example of a third-order tensor is the alternating tensor that has al-
ready been defined by (1.4).
Fourth-order tensors are formed in a manner analogous to third-order tensors,
and their operations on vectors and tensors of various orders may be developed by
following (1.10) and (1.31), for example,

A : B = Aijkl êi ⊗ êj ⊗ êk ⊗ êl : Bmn êm ⊗ ên = Aijkl Blk êi ⊗ êj. (1.33)
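The index pattern of (1.33), where the two innermost basis vectors contract first and then the next pair, maps directly onto an einsum subscript string. A NumPy sketch of ours with arbitrary component arrays:

```python
import numpy as np

rng = np.random.default_rng(2)
A4 = rng.standard_normal((3, 3, 3, 3))  # fourth-order components A_ijkl
B = rng.standard_normal((3, 3))         # second-order components B_mn

# (A : B)_ij = A_ijkl B_lk, per (1.33): note the reversed order l, k
C = np.einsum('ijkl,lk->ij', A4, B)
print(C.shape)  # (3, 3)
```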
We will need to find the components of vectors and second-order tensors in one
coordinate system, say {O, êi}, given their matrices in the other, say {P, ê′i}. This may
be done by relating the two bases and substituting this relationship into (1.1) and (1.9).
It may be proved that the tensor relating the two bases is a rotation.
This represents the geometrically intuitive fact that, because both O and P are
right-handed Cartesian coordinate systems, it is possible to obtain one from the
other by a rotation. The components of R are
Substituting the previous two equations into (1.1) and (1.9), we obtain the coor-
dinate transformation rules
i.e., the rotation tensor R has the same components in the two frames that it relates.
Higher-order tensor transformation formulae may be developed similarly when
required.
1.5 Calculus
Let Φ(x) be an nth-order tensor field defined over three-dimensional space, with x
being a position vector. The gradient of Φ is defined by
∇Φ = ∂Φ(x)/∂x = Φ,i êi, (1.39)

where the comma denotes differentiation and

∇(·) = ∂(·)/∂x = ∂(·)/∂xi êi = ∂(·)/∂x1 ê1 + ∂(·)/∂x2 ê2 + ∂(·)/∂x3 ê3

is the gradient operator. In particular, we have the formulae

∇a = ∂a/∂xi êi,   ∇b = ∂bi/∂xj êi ⊗ êj   and   ∇C = ∂Cij/∂xk êi ⊗ êj ⊗ êk, (1.40)
where a, b and C are, respectively, scalar, vector and tensor fields. Contracting any
two free indices of the gradient of Φ, we obtain its divergence, denoted by ∇·Φ; this,
therefore, is applicable only when Φ is not a scalar field. We obtain the formulae

∇·b = ∂bi/∂xi = bi,i   and   ∇·C = ∂Cij/∂xj êi = Cij,j êi. (1.41)

Note that in the last equation we could have alternatively defined ∇·C as Cij,i êj. The
gradient of a field identifies the direction of steepest change through its eigenvectors,
while its divergence is an estimate of the field’s local flux.
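The component formulae (1.41) can be checked with finite differences. A NumPy sketch of ours, using the sample vector field bi = xi² (so that ∇·b = 2(x1 + x2 + x3)) on a small grid:

```python
import numpy as np

# Grid over a small box
n = 41
xs = np.linspace(-1.0, 1.0, n)
X1, X2, X3 = np.meshgrid(xs, xs, xs, indexing='ij')

# Vector field b_i = x_i**2, so div b = b_i,i = 2(x1 + x2 + x3)
b = np.stack([X1**2, X2**2, X3**2])

h = xs[1] - xs[0]
div_b = sum(np.gradient(b[i], h, axis=i) for i in range(3))

exact = 2.0 * (X1 + X2 + X3)
# Compare away from the boundary, where central differences are used
# (central differences are exact for quadratics)
print(np.allclose(div_b[1:-1, 1:-1, 1:-1], exact[1:-1, 1:-1, 1:-1], atol=1e-10))
```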
A field Φ(x) may be expanded in a Taylor series about a location x0 provided
some smoothness conditions are satisfied; see, e.g., Sokolnikoff (1966, p. 311). The
expansion may be expressed as

Φ(x) = Φ0 + ∇Φ0 · h + ½ ∇∇Φ0 : h ⊗ h + O(|h|³)
     = Φ0 + Φ,i|0 hi + ½ Φ,ij|0 hi hj + O(|h|³), (1.42)

where the subscript 0 denotes evaluation at x0 and h = x − x0.
Consider a spatially varying nth -order tensor field Φ(x). The divergence theorem
allows us to relate the volume integral of the gradient of Φ to its appropriate surface
integral. An efficient way of stating this theorem is
∫V Φijk...,l dV = ∫S Φijk... nl dS,
where V is the domain’s volume, S its bounding surface, and n̂ the outward normal
to S. Specializing the above formula to scalar φ (x), vector v(x) and tensor A(x)
fields, we obtain
12 1 Mathematical preliminaries
∫V ∇φ dV = ∫S φ n̂ dS,   ∫V ∇v dV = ∫S v ⊗ n̂ dS   and   ∫V ∇A dV = ∫S A ⊗ n̂ dS. (1.43)
Taking the trace of the last two equations above, we obtain two useful identities:
∫V ∇·v dV = ∫S v · n̂ dS   and   ∫V ∇·A dV = ∫S A · n̂ dS. (1.44)
Frequently, we will encounter fields Φ(x, t) that depend also on time. When x
is fixed, time differentiation is straightforward. For a second-order tensor field
A(x, t) = A(t) we have the formulae:

dAᵀ/dt = (dA/dt)ᵀ, (1.45a)
dA⁻¹/dt = −A⁻¹ · (dA/dt) · A⁻¹ (1.45b)
and d(det A)/dt = tr (dA/dt · A⁻¹) det A. (1.45c)
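Identities (1.45b) and (1.45c) can be verified against central-difference time derivatives. A NumPy sketch of ours, with an arbitrarily chosen smooth, invertible A(t):

```python
import numpy as np

def A_of_t(t):
    # An arbitrary smooth, invertible tensor-valued function of time
    return np.array([[2.0 + np.sin(t), 0.3 * t,         0.0],
                     [0.1,             2.0 + np.cos(t), 0.2],
                     [0.0,             0.1 * t,         3.0]])

t, dt = 0.5, 1e-6
A = A_of_t(t)
dA = (A_of_t(t + dt) - A_of_t(t - dt)) / (2.0 * dt)   # central difference for dA/dt

# (1.45b): d/dt A^-1 = -A^-1 . (dA/dt) . A^-1
dAinv_fd = (np.linalg.inv(A_of_t(t + dt)) - np.linalg.inv(A_of_t(t - dt))) / (2.0 * dt)
print(np.allclose(dAinv_fd, -np.linalg.inv(A) @ dA @ np.linalg.inv(A), atol=1e-6))

# (1.45c): d/dt det A = tr(dA/dt . A^-1) det A
ddet_fd = (np.linalg.det(A_of_t(t + dt)) - np.linalg.det(A_of_t(t - dt))) / (2.0 * dt)
print(np.isclose(ddet_fd, np.trace(dA @ np.linalg.inv(A)) * np.linalg.det(A), atol=1e-6))
```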
However, if x changes with time, as, say, in a deforming body with x(t) locating
a material point, care is required. For example, if Φ(x,t) relates to a particle prop-
erty, then Φ is modified both due to passage of time and by the particle’s changing
location. The appropriate formula for the total time rate of change of Φ(x(t),t) is
obtained by employing the chain rule:
dΦ(x(t), t)/dt = ∂Φ/∂t + ∇Φ · v = ∂Φ/∂t + Φ,i vi, (1.46)
where we have set dx/dt = v(x(t), t). In general, we have the relationship

d(.)/dt = ∂(.)/∂t + ∇(.) · v, (1.47)

between the total (material) time derivative d(.)/dt, the local time derivative ∂(.)/∂t, and the
streaming or convective derivative ∇(.) · v.
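The chain-rule formula (1.46) can be checked numerically for a scalar field along a particle path. A NumPy sketch of ours; the sample field φ, the path x(t), and all finite-difference step sizes are our choices:

```python
import numpy as np

def phi(x, t):
    # A sample scalar field phi(x, t)
    return np.sin(x[0]) * x[1] + t * x[2]**2

def x_of_t(t):
    # A sample particle path x(t)
    return np.array([t, 2.0 * t, np.cos(t)])

t, dt, dx = 0.8, 1e-6, 1e-6
x = x_of_t(t)
v = (x_of_t(t + dt) - x_of_t(t - dt)) / (2.0 * dt)        # velocity dx/dt

# Total rate of change following the particle, by central difference
total = (phi(x_of_t(t + dt), t + dt) - phi(x_of_t(t - dt), t - dt)) / (2.0 * dt)

# Local rate plus convective term, per (1.46)
local = (phi(x, t + dt) - phi(x, t - dt)) / (2.0 * dt)
grad = np.array([(phi(x + dx * np.eye(3)[i], t) - phi(x - dx * np.eye(3)[i], t))
                 / (2.0 * dx) for i in range(3)])
print(np.isclose(total, local + grad @ v, atol=1e-5))
```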