
Chapter 1

Mathematical preliminaries

In this chapter, we quickly summarize necessary tensor algebra and calculus, and
introduce the notation employed in this text. We assume familiarity with matrix
algebra and indicial notation. More information may be obtained from standard texts
such as Strang (2005) or Knowles (1998).

1.1 Coordinate systems

We will exclusively employ right-handed Cartesian coordinate systems. The coordinate system of choice may be stationary, translating, rotating, or both translating and rotating. We will employ calligraphic capital letters to identify coordinate systems. In this text we will typically employ three coordinate systems O, P and S with associated unit vectors $\hat e_i$, $\hat e'_i$ and $\hat e''_i$, respectively. This will be indicated by, e.g., $\{O, \hat e_i\}$.

1.2 Vectors

A vector is denoted by a boldface letter, e.g., a. The components of a in O will be denoted by $a_i$ and in P by $a'_i$, so that we have the identities

$a = (a \cdot \hat e_i)\,\hat e_i = a_i \hat e_i = (a \cdot \hat e'_i)\,\hat e'_i = a'_i \hat e'_i,$  (1.1)

where ' · ' is the usual vector dot product. The magnitude or norm of a is

$|a| = (a \cdot a)^{1/2} = (a_i a_i)^{1/2} = (a'_i a'_i)^{1/2}.$  (1.2)

We now collect several useful formulae:


$a \cdot b = |a||b|\cos\theta = a_i b_i,$  (1.3a)

$a \times b = -b \times a = |a||b|\sin\theta\,\hat e_c = \varepsilon_{ijk}\,\hat e_i\, a_j b_k,$  (1.3b)

$a \cdot (b \times c) = b \cdot (c \times a) = c \cdot (a \times b) = \varepsilon_{ijk}\, a_i b_j c_k$  (1.3c)

and  $a \times (b \times c) = (a \cdot c)\,b - (a \cdot b)\,c = \varepsilon_{ijk}\varepsilon_{klm}\,\hat e_i\, a_j b_l c_m,$  (1.3d)

where θ is the angle between vectors a and b, $\hat e_c$ is a unit vector normal to the plane containing a and b, and $\varepsilon_{ijk}$ is the alternating tensor defined by

$\varepsilon_{ijk} = \begin{cases} 1 & \text{if } i, j, k \text{ is an even permutation of } 1, 2, 3, \\ -1 & \text{if } i, j, k \text{ is an odd permutation of } 1, 2, 3, \\ 0 & \text{otherwise}; \end{cases}$  (1.4)

cf. Sec. 1.3.2.


We will typically limit ourselves to three-dimensional vectors.
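As a quick numerical check of (1.3b) and (1.3c), the alternating tensor of (1.4) can be built explicitly and contracted against components. The following is a minimal NumPy sketch of our own (the variable names are ours, not the text's):

```python
import numpy as np

# Alternating tensor eps[i,j,k] of (1.4): zero except on permutations of (0,1,2).
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

# (1.3b): (a x b)_i = eps_ijk a_j b_k
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))

# (1.3c): a . (b x c) = eps_ijk a_i b_j c_k
assert np.isclose(np.einsum('ijk,i,j,k', eps, a, b, c), np.dot(a, np.cross(b, c)))
```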

1.3 Tensors

A first-order tensor is simply a vector. A second-order tensor is a linear transformation that maps a vector to another vector. Third- and fourth-order tensors, relating lower-order tensors to other lower-order tensors, may be similarly defined.

1.3.1 Second-order tensors

A second-order tensor A is probed by its action ' · ' on vectors. We employ the same symbol as for the dot product of vectors because of similarities between the two operations. We define the resultant b of A's operation, specifically a right-operation, on a by

$b = A \cdot a.$
Similarly a left-operation may be defined. As with vectors, we will typically limit
ourselves to second-order tensors that operate on and result in three-dimensional
vectors.
The addition A + B and multiplication A · B of two tensors A and B result in
tensors C and D, respectively, that are defined in terms of how they operate on some
vector a, i.e.,

$(A + B) \cdot a = C \cdot a := A \cdot a + B \cdot a$  (1.5a)

and  $(A \cdot B) \cdot a = D \cdot a := A \cdot (B \cdot a).$  (1.5b)

It is understood that the two tensors A and B relate vectors belonging to the same
set.

To better understand tensors, it is useful to generalize the concept of a unit vector to a tensorial basis. Such a generalization is furnished by the tensor product a ⊗ b of two vectors a and b. The entity a ⊗ b is a second-order tensor that can act on another vector c in two different ways – the left- and right-operations – to yield another vector:

$(a \otimes b) \cdot c = (c \cdot b)\,a \quad\text{and}\quad c \cdot (a \otimes b) = (c \cdot a)\,b,$  (1.6)

where the ‘·’ on the left-hand sides denotes a tensor operation, and the usual vector
dot product on the right-hand sides. Contrasting the computation

$a \otimes b = a_i \hat e_i \otimes b_j \hat e_j = (a_i b_j)\,\hat e_i \otimes \hat e_j$  (1.7)

with (1.1) suggests that a tensorial basis may be constructed by taking appropriate-order tensor products of the unit vectors. We note that the above represents a linear combination of the $\hat e_i \otimes \hat e_j$. Thus, a second-order tensorial basis in the coordinate system O is given by the nine unit tensors $\hat e_i \otimes \hat e_j$. A second-order tensor A may then be written as

$A = A_{ij}\,\hat e_i \otimes \hat e_j,$  (1.8)
in terms of A's components $A_{ij}$ in O. These components, obtained by appealing to (1.6), are given by the equations

$A_{ij} = \hat e_i \cdot A \cdot \hat e_j,$  (1.9)

which are reminiscent of analogous ones for vector components; see (1.1). We will refer to the nine $A_{ij}$'s as the "matrix of A in $\hat e_i$", denoted by [A]. In another coordinate system, say P, the tensorial basis is given by $\hat e'_i \otimes \hat e'_j$, while $A'_{ij} = \hat e'_i \cdot A \cdot \hat e'_j$ constitute the "matrix of A in $\hat e'_i$", denoted by [A]′. A second-order tensor's interactions with vectors and other second-order tensors may be obtained by repeated (if required) application of (1.6). These operations are summarized below:

$A \cdot a = A_{ij}\,\hat e_i \otimes \hat e_j \cdot a_m \hat e_m = A_{ij} a_j\,\hat e_i,$  (1.10a)

$a \cdot A = a_m \hat e_m \cdot A_{ij}\,\hat e_i \otimes \hat e_j = a_i A_{ij}\,\hat e_j,$  (1.10b)

$A \cdot B = A_{ij}\,\hat e_i \otimes \hat e_j \cdot B_{mn}\,\hat e_m \otimes \hat e_n = A_{ij} B_{jn}\,\hat e_i \otimes \hat e_n$  (1.10c)

and  $A : B = A_{ij}\,\hat e_i \otimes \hat e_j : B_{mn}\,\hat e_m \otimes \hat e_n = A_{ij}\,\hat e_i \cdot B_{jn}\,\hat e_n = A_{ij} B_{ji},$  (1.10d)

where the first two operations produce vectors, the third another second-order tensor, and the fourth a scalar. The double-dot product ' : ', as its form suggests, denotes a sequential application of dot products, as illustrated above. The tensor A's actions on higher-order tensors may be analogously defined. When there is no confusion, second-order tensors are referred to simply as tensors.
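Each operation in (1.10) is a single index contraction, so all four can be checked numerically. A minimal NumPy sketch (our illustration, not the text's):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))
a = rng.random(3)

right = np.einsum('ij,j->i', A, a)    # (1.10a): (A . a)_i  = A_ij a_j
left  = np.einsum('i,ij->j', a, A)    # (1.10b): (a . A)_j  = a_i A_ij
prod  = np.einsum('ij,jn->in', A, B)  # (1.10c): (A . B)_in = A_ij B_jn
ddot  = np.einsum('ij,ji->', A, B)    # (1.10d):  A : B     = A_ij B_ji

assert np.allclose(right, A @ a)
assert np.allclose(left, a @ A)
assert np.allclose(prod, A @ B)
assert np.isclose(ddot, np.trace(A @ B))
```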
The transpose $A^T$ of a tensor A is defined by the following formula

$(A \cdot a) \cdot b = a \cdot (A^T \cdot b),$  (1.11)

for any two vectors a and b. The above results in $[A^T]_{ij} = A_{ji} = [A]_{ji}$. From the above definition of a tensor's transpose the following identities are easily proved:

$(A + B)^T = A^T + B^T \quad\text{and}\quad (A \cdot B)^T = B^T \cdot A^T.$  (1.12)

The trace of a tensor A with components $A_{ij}$ is obtained by contracting the indices i and j:

$\operatorname{tr} A = A_{ii} = A_{11} + A_{22} + A_{33}.$  (1.13)
We see below that the trace of a tensor is independent of the coordinate system in which it is computed. The following identities regarding transposes are easily proved:

$\operatorname{tr} A = \operatorname{tr} A^T \;\Rightarrow\; \operatorname{tr} \prod_{i=1}^{n} A_i = \operatorname{tr} \prod_{i=n}^{1} A_i^T,$  (1.14a)

$\operatorname{tr} \prod_{i=1}^{n} A_i = \operatorname{tr}\left( \prod_{i=n-k}^{n} A_i \cdot \prod_{i=1}^{n-k-1} A_i \right), \quad 0 \le k \le n-1, \quad\text{with}\quad \prod_{i=1}^{0} A_i = 1,$  (1.14b)

where $\prod_{i=1}^{n} A_i = A_1 \cdot A_2 \cdots A_n$.
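Identity (1.14b) is the cyclic invariance of the trace; a minimal NumPy sketch for n = 3 (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A1, A2, A3 = (rng.random((3, 3)) for _ in range(3))

# (1.14a): tr(A1 . A2 . A3) = tr(A3^T . A2^T . A1^T)
assert np.isclose(np.trace(A1 @ A2 @ A3), np.trace(A3.T @ A2.T @ A1.T))

# (1.14b) with n = 3, k = 1: tr(A1 . A2 . A3) = tr(A2 . A3 . A1)
assert np.isclose(np.trace(A1 @ A2 @ A3), np.trace(A2 @ A3 @ A1))
```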


A tensor is said to be symmetric if $A = A^T$, and anti-/skew-symmetric if $A = -A^T$. Given an arbitrary tensor A we define its symmetric part

$\operatorname{sym} A = \tfrac{1}{2}\left(A + A^T\right),$  (1.15)

and anti-/skew-symmetric part

$\operatorname{asym} A = \tfrac{1}{2}\left(A - A^T\right),$  (1.16)

so that any tensor A may be written as a sum of a symmetric and an anti-symmetric tensor:

$A = \operatorname{sym} A + \operatorname{asym} A.$  (1.17)
An anti-symmetric tensor has at most three independent components in any coordinate system. Thus, for any anti-symmetric tensor W, it is possible to associate an axial vector, denoted by w, with the property that for all vectors b

$W \cdot b = w \times b.$  (1.18)

The operations of constructing anti-symmetric tensors from vectors and extracting axial vectors from anti-symmetric tensors are denoted by sk w (= W) and ax W (= w), respectively. The relationship between w and W may be expressed in indicial notation employing the alternating tensor of (1.4):

$\operatorname{ax} W = w = -\tfrac{1}{2}\,\varepsilon_{ijk} W_{jk}\,\hat e_i \quad\text{and}\quad \operatorname{sk} w = W = -\varepsilon_{ijk}\, w_i\,\hat e_j \otimes \hat e_k.$  (1.19)

Employing (1.3b), it is straightforward to check that the above prescriptions for ax W and sk w satisfy (1.18).
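The maps of (1.19) are easy to implement directly; a minimal NumPy sketch verifying property (1.18) (the function names mirror the text's operators, but the implementation is ours):

```python
import numpy as np

def sk(w):
    """sk w of (1.19): the anti-symmetric tensor whose axial vector is w."""
    return np.array([[ 0.0,  -w[2],  w[1]],
                     [ w[2],  0.0,  -w[0]],
                     [-w[1],  w[0],  0.0]])

def ax(W):
    """ax W of (1.19): the axial vector of an anti-symmetric tensor W."""
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

w = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 0.7, -1.1])

assert np.allclose(sk(w) @ b, np.cross(w, b))  # property (1.18)
assert np.allclose(ax(sk(w)), w)               # ax and sk invert each other
```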
For most tensors, and almost all tensors occurring in this book, it is possible to
find three unit vectors that are simply scaled under that tensor’s operation, i.e., given
A, there (almost always) exist three unit vectors $\hat v_i$ and three corresponding scalars $\lambda_i$, such that
$A \cdot \hat v_i = \lambda_i \hat v_i \quad\text{(no sum)}.$  (1.20)
These special vectors v̂i are the eigenvectors of A, and the corresponding scalings
are A’s eigenvalues. In the coordinate system described by the three eigenvectors,
the tensor’s matrix is diagonalized with the tensor’s eigenvalues as the diagonal
entries. This simple diagonal nature makes employing the eigen-coordinate system
very tempting for computation. Unfortunately, there is no guarantee that the eigenvector triad is mutually orthogonal, so the coordinate system it describes may not be Cartesian. However, if the tensor is symmetric, it is always possible to diagonalize it, and, moreover, the eigenvectors are orthogonal, so that the coordinate system they describe is frequently a convenient operational choice. Thus, given
a symmetric tensor S, it is possible to find three eigenvectors v̂i and corresponding
eigenvalues λi , so that S is simply
$S = \sum_{i=1}^{3} \lambda_i\, \hat v_i \otimes \hat v_i.$  (1.21)

The operation of a symmetric S therefore corresponds to a linear scaling along three mutually orthogonal eigen-directions.
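The spectral representation (1.21) can be reproduced with any symmetric-eigenvalue routine; a minimal NumPy sketch (our illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.random((3, 3))
S = 0.5 * (M + M.T)            # a symmetric tensor, as in (1.15)

lam, V = np.linalg.eigh(S)     # eigenvalues and orthonormal eigenvectors

# (1.21): S = sum_i lam_i v_i (x) v_i
S_rebuilt = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
assert np.allclose(S, S_rebuilt)
assert np.allclose(V.T @ V, np.eye(3))  # eigenvectors are mutually orthogonal
```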
If all the eigenvalues of a symmetric tensor are positive, the symmetric tensor is said to be positive definite. Finally, it is important to mention that for any tensor the number of eigenvalues equals the dimension of the underlying space, whether or not the tensor is diagonalizable. For example, in three dimensions, every tensor has three eigenvalues even if it does not have three independent eigenvectors. These eigenvalues are either all real, or a mixture of real values and complex conjugate pairs.
While the components of a tensor depend on the coordinate system, its eigenvalues do not. Therefore, functions of these eigenvalues, called principal invariants, also remain unaffected by the choice of coordinate system; the number of these invariants equals the dimension of the underlying space. In three dimensions, a second-order tensor A with eigenvalues $\lambda_i$ has the three invariants
$I_A = \sum_{i=1}^{3} \lambda_i = A_{ii} = \operatorname{tr} A,$  (1.22a)

$II_A = \sum_{i<j} \lambda_i \lambda_j = \tfrac{1}{2}\left(I_A^2 - I_{A^2}\right)$  (1.22b)

and  $III_A = \prod_{i=1}^{3} \lambda_i = \det A,$  (1.22c)

where the last invariant represents the determinant of A, which may also be computed via standard formulae after finding A's matrix in any coordinate system, and $I_{A^2} = \operatorname{tr}(A \cdot A)$. Finally, as for vectors, it is possible to measure the magnitude of a tensor by employing the double-dot product ' : ' introduced in (1.10). The norm of a tensor A is defined by

$|A| := \sqrt{A : A^T} = \sqrt{\operatorname{tr}(A \cdot A^T)} = \sqrt{A_{ij} A_{ij}}.$  (1.23)
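The invariants (1.22) and the norm (1.23) are readily verified against the eigenvalues; a minimal NumPy sketch (our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((3, 3))
lam = np.linalg.eigvals(A)     # possibly complex, but the invariants are real

I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))   # (1.22b)
I3 = np.linalg.det(A)

assert np.isclose(I1, lam.sum().real)
assert np.isclose(I2, (lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0]).real)
assert np.isclose(I3, lam.prod().real)

# (1.23): |A| = sqrt(A_ij A_ij), i.e., the Frobenius norm
assert np.isclose(np.sqrt(np.einsum('ij,ij', A, A)), np.linalg.norm(A))
```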

Frequently, and for all tensors considered in this book, it is possible to define an associated inverse tensor: given A taking a to b, the inverse tensor $A^{-1}$ brings b back to a. It is easy to see that a tensor and its inverse share the same eigenvectors but have reciprocal eigenvalues. Thus, if A has a zero eigenvalue, its inverse does not exist. The following identities regarding inverses are easily verified:

$A \cdot A^{-1} = A^{-1} \cdot A = 1,$  (1.24a)

$(A \cdot B)^{-1} = B^{-1} \cdot A^{-1},$  (1.24b)

$\det A^{-1} = (\det A)^{-1}$  (1.24c)

and  $(A^T)^{-1} = (A^{-1})^T = A^{-T}.$  (1.24d)

An important class of tensors that will occur frequently in the text is the orthogonal tensor Q, which has the property that, given any vector a,

$|Q \cdot a| = |a|,$  (1.25)

i.e., Q preserves a vector's length. From this the following properties follow:

$Q^{-1} = Q^T \quad\text{and}\quad \det Q = \pm 1.$  (1.26)

In applications to follow, all orthogonal tensors will have determinant one. Such proper orthogonal tensors are called rotation tensors. Physically, as the name suggests, a rotation tensor represents a rotation about the origin. It may be shown that of a rotation tensor's three eigenvalues, two are complex conjugates of norm one and the third is unity; see, e.g., Knowles (1998, p. 51). The eigenvector corresponding to the unit eigenvalue provides the axis of rotation, and the amount of rotation is provided by the argument of either complex eigenvalue.
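The axis and angle can thus be extracted from the eigenstructure; a minimal NumPy sketch for a rotation about $\hat e_3$ (the example is ours):

```python
import numpy as np

t = 0.7  # rotation angle about the e3 axis
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

lam, V = np.linalg.eig(Q)            # eigenvalues: exp(+it), exp(-it), 1
k = np.argmin(np.abs(lam - 1.0))     # locate the unit eigenvalue
axis = np.real(V[:, k])              # its eigenvector gives the rotation axis
angle = np.angle(lam[(k + 1) % 3])   # argument of a complex eigenvalue

assert np.allclose(np.abs(axis), [0.0, 0.0, 1.0])
assert np.isclose(abs(angle), t)
```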
Symmetric and rotation tensors come together in the polar decomposition theorem (Knowles 1998, p. 57), which states that for any tensor A with det A > 0, it is possible to find a rotation tensor R and positive definite symmetric tensors U and V such that

$A = R \cdot U = V \cdot R$  (1.27)

uniquely. Thus, $U = R^T \cdot V \cdot R$, and U and V share the same eigenvalues, while their eigenvectors are related through R. We recall that transformation by a symmetric tensor corresponds to linearly and independently scaling three mutually perpendicular directions. Any such linear transformation may thus be viewed as a rotation followed (preceded) by three scalings along the orthogonal eigen-coordinate system of V (U).
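Numerically, one convenient route to (1.27) is through the singular value decomposition; this is our computational choice, not one prescribed by the text. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((3, 3))
if np.linalg.det(A) < 0:       # the theorem assumes det A > 0
    A[:, 0] *= -1.0

# With A = W diag(s) Vt, take R = W Vt, U = Vt^T diag(s) Vt, V = W diag(s) W^T.
W, s, Vt = np.linalg.svd(A)
R = W @ Vt
U = Vt.T @ np.diag(s) @ Vt
V = W @ np.diag(s) @ W.T

assert np.allclose(A, R @ U)   # right decomposition of (1.27)
assert np.allclose(A, V @ R)   # left decomposition of (1.27)
assert np.isclose(np.linalg.det(R), 1.0)
```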

We have already mentioned the tensor product of two vectors in (1.7). Amongst other things, the tensor product helps in "tensorizing" the vector operations of taking dot- and cross-products, viz.,

$a \cdot b = \operatorname{tr}(a \otimes b) = a \otimes b : 1$  (1.28a)

and  $a \times b = -2\operatorname{ax}\operatorname{sk}(a \otimes b).$  (1.28b)

Some additional identities that are easily proved, and will often be used, are

$a \otimes (A \cdot b) = (a \otimes b) \cdot A^T,$  (1.29a)

$a \cdot A \cdot b = b \otimes a : A,$  (1.29b)

$\operatorname{sk} A : B = \operatorname{tr}(\operatorname{sk} A \cdot \operatorname{sym} B) + \operatorname{tr}(\operatorname{sk} A \cdot \operatorname{sk} B) = -2\operatorname{ax}\operatorname{sk} A \cdot \operatorname{ax}\operatorname{sk} B,$  (1.29c)

$A \cdot B : C = C \cdot A : B = B \cdot C : A$  (1.29d)

and  $\operatorname{tr}(S \cdot W) = 0,$  (1.29e)

where S and W are, respectively, symmetric and anti-symmetric tensors, and sk applied to a tensor denotes its anti-symmetric part asym.

1.3.2 Third- and fourth-order tensors

First, consider third-order tensors. In terms of the third-order tensorial basis $\hat e_i \otimes \hat e_j \otimes \hat e_k$ in $\{O, \hat e_i\}$, a third-order tensor is defined as

$\mathcal{A} = A_{ijk}\,\hat e_i \otimes \hat e_j \otimes \hat e_k,$  (1.30)

so that $A_{ijk}$ are $\mathcal{A}$'s components in this coordinate system. The actions of $\mathcal{A}$ on vectors and other tensors of various orders are defined in a manner similar to those of a second-order tensor (1.10), e.g.,

$\mathcal{A} \cdot a = A_{ijk}\,\hat e_i \otimes \hat e_j \otimes \hat e_k \cdot a_m \hat e_m = A_{ijk} a_k\,\hat e_i \otimes \hat e_j$  (1.31a)

and  $a \cdot \mathcal{A} = a_m \hat e_m \cdot A_{ijk}\,\hat e_i \otimes \hat e_j \otimes \hat e_k = a_i A_{ijk}\,\hat e_j \otimes \hat e_k.$  (1.31b)

An important example of a third-order tensor is the alternating tensor, already defined by (1.4). Fourth-order tensors are formed in a manner analogous to third-order tensors,

$\mathcal{A} = A_{ijkl}\,\hat e_i \otimes \hat e_j \otimes \hat e_k \otimes \hat e_l,$  (1.32)

and their operations on vectors and tensors of various orders may be developed by following (1.10) and (1.31), for example,

$\mathcal{A} : B = A_{ijkl}\,\hat e_i \otimes \hat e_j \otimes \hat e_k \otimes \hat e_l : B_{mn}\,\hat e_m \otimes \hat e_n = A_{ijkl} B_{lk}\,\hat e_i \otimes \hat e_j.$  (1.33)
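Contraction (1.33) again reduces to a single einsum; a minimal NumPy sketch, using as an example the fourth-order tensor $I_{ijkl} = \delta_{il}\delta_{jk}$, which acts as the identity under this double-dot convention (the example is ours):

```python
import numpy as np

rng = np.random.default_rng(5)
AA = rng.random((3, 3, 3, 3))   # components A_ijkl of a fourth-order tensor
B = rng.random((3, 3))

# (1.33): (AA : B)_ij = A_ijkl B_lk
C = np.einsum('ijkl,lk->ij', AA, B)

# I_ijkl = d_il d_jk satisfies I : B = B under this convention.
I4 = np.einsum('il,jk->ijkl', np.eye(3), np.eye(3))
assert np.allclose(np.einsum('ijkl,lk->ij', I4, B), B)
```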

1.4 Coordinate transformation

We will need to find the components of vectors and second-order tensors in one coordinate system, say $\{O, \hat e_i\}$, given their components in another, say $\{P, \hat e'_i\}$. This may be done by expressing the unit vectors of P in terms of those of O as

$\hat e'_j = (\hat e_i \cdot \hat e'_j)\,\hat e_i,$

and substituting this relationship into (1.1) and (1.9). It may be proved that the tensor

$R = (\hat e_i \cdot \hat e'_j)\,\hat e_i \otimes \hat e_j$  (1.34)

is, in fact, a rotation tensor with the property that

$\hat e'_i = R \cdot \hat e_i.$  (1.35)

This represents the geometrically intuitive fact that, because both O and P are right-handed Cartesian coordinate systems, it is possible to obtain one from the other by a rotation. The components of R are

$R_{ij} = \hat e_i \cdot \hat e'_j.$  (1.36)

Substituting the previous two equations into (1.1) and (1.9), we obtain the coordinate transformation rules

$[a] = [R][a]', \ \text{so that}\ a_i = R_{ij} a'_j,$  (1.37a)

and  $[A] = [R][A]'[R]^T, \ \text{so that}\ A_{ij} = R_{ik} A'_{kl} R_{jl},$  (1.37b)

for vector and second-order tensor components, respectively. As a special case, when A = R, we find that

$[R] = [R]', \ \text{so that}\ R_{ij} = R'_{ij},$  (1.38)

i.e., the rotation tensor R has the same components in the two frames that it relates.
Higher-order tensor transformation formulae may be developed similarly when
required.
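The transformation rules (1.37) preserve the invariants of Sec. 1.3.1, which gives a convenient numerical check. A minimal NumPy sketch with a rotation about $\hat e_3$ (the example is ours):

```python
import numpy as np

t = np.pi / 6  # rotation relating the two bases, taken about e3
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

rng = np.random.default_rng(6)
a_p = rng.random(3)            # components [a]' in the primed system
A_p = rng.random((3, 3))       # components [A]' in the primed system

a = R @ a_p                    # (1.37a)
A = R @ A_p @ R.T              # (1.37b)

assert np.isclose(np.trace(A), np.trace(A_p))             # I_A is invariant
assert np.isclose(np.linalg.det(A), np.linalg.det(A_p))   # III_A is invariant
assert np.isclose(np.linalg.norm(a), np.linalg.norm(a_p))
```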

1.5 Calculus

1.5.1 Gradient and Divergence. Taylor’s theorem.

Let Φ(x) be an nth-order tensor field defined over three-dimensional space, with x being a position vector. The gradient of Φ is defined by

$\nabla\Phi = \frac{\partial \Phi(x)}{\partial x} = \Phi_{,i}\,\hat e_i,$  (1.39)

where the comma denotes differentiation and

$\nabla(\cdot) = \frac{\partial(\cdot)}{\partial x} = \frac{\partial(\cdot)}{\partial x_i}\,\hat e_i = \frac{\partial(\cdot)}{\partial x_1}\,\hat e_1 + \frac{\partial(\cdot)}{\partial x_2}\,\hat e_2 + \frac{\partial(\cdot)}{\partial x_3}\,\hat e_3$

is the gradient operator. In particular, we have the formulae

$\nabla a = \frac{\partial a}{\partial x_i}\,\hat e_i, \quad \nabla b = \frac{\partial b_i}{\partial x_j}\,\hat e_i \otimes \hat e_j \quad\text{and}\quad \nabla C = \frac{\partial C_{ij}}{\partial x_k}\,\hat e_i \otimes \hat e_j \otimes \hat e_k,$  (1.40)

where a, b and C are, respectively, scalar, vector and tensor fields. Contracting any
two free indices of the gradient of Φ, we obtain its divergence denoted by ∇·Φ; this,
therefore, is applicable only when Φ is not a scalar field. We obtain the formulae

$\nabla \cdot b = \frac{\partial b_i}{\partial x_i} = b_{i,i} \quad\text{and}\quad \nabla \cdot C = \frac{\partial C_{ij}}{\partial x_j}\,\hat e_i = C_{ij,j}\,\hat e_i.$  (1.41)

Note that in the last equation we could have alternatively defined $\nabla \cdot C$ as $C_{ij,i}\,\hat e_j$. The gradient of a field identifies the direction of its steepest change, while its divergence is an estimate of the field's local flux.
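Formulae (1.40) can be checked on a grid with finite differences; a minimal NumPy sketch for the scalar field $a(x) = x_1 x_2 + x_3^2$ (a toy field of our choosing):

```python
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
X1, X2, X3 = np.meshgrid(x, x, x, indexing='ij')
a = X1 * X2 + X3**2

# (grad a)_i = a,_i via central differences along each axis
grad_a = np.stack(np.gradient(a, dx, dx, dx))

# Compare with the exact gradient (x2, x1, 2 x3) at an interior point:
i = n // 2
exact = np.array([X2[i, i, i], X1[i, i, i], 2.0 * X3[i, i, i]])
assert np.allclose(grad_a[:, i, i, i], exact, atol=1e-6)
```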
A field Φ(x) may be expanded in a Taylor series about a location $x_0$ provided some smoothness conditions are satisfied; see, e.g., Sokolnikoff (1966, p. 311). The expansion may be expressed as

$\Phi(x) = \Phi_0 + \nabla\Phi_0 \cdot h + \tfrac{1}{2}\,\nabla\nabla\Phi_0 : h \otimes h + O\left(|h|^3\right)$
$\qquad\;\, = \Phi_0 + \Phi_{,i}|_0\, h_i + \tfrac{1}{2}\,\Phi_{,ij}|_0\, h_i h_j + O\left(|h|^3\right),$  (1.42)

where the subscript '0' indicates evaluation at $x_0$, x is a point in the neighborhood of $x_0$, $h = x - x_0$, and O estimates the remainder.

1.5.2 The divergence theorem

Consider a spatially varying nth-order tensor field Φ(x). The divergence theorem allows us to relate the volume integral of the gradient of Φ to an appropriate surface integral. An efficient way of stating this theorem is

$\int_V \Phi_{ijk\ldots,l}\; dV = \int_S \Phi_{ijk\ldots}\, n_l\; dS,$

where V is the domain's volume, S its bounding surface, and $\hat n$ the outward normal to S. Specializing the above formula to scalar φ(x), vector v(x) and tensor A(x)
fields, we obtain
$\int_V \nabla\phi\; dV = \int_S \phi\, \hat n\; dS, \quad \int_V \nabla v\; dV = \int_S v \otimes \hat n\; dS \quad\text{and}\quad \int_V \nabla A\; dV = \int_S A \otimes \hat n\; dS.$  (1.43)

Taking the trace of the last two equations above, we obtain two useful identities:

$\int_V \nabla \cdot v\; dV = \int_S v \cdot \hat n\; dS \quad\text{and}\quad \int_V \nabla \cdot A\; dV = \int_S A \cdot \hat n\; dS.$  (1.44)

1.5.3 Time-varying fields

Frequently, we will encounter fields Φ(x, t) that depend also on time. When x is fixed, time differentiation is straightforward. For a second-order tensor field A(x, t) = A(t) we have the formulae

$\frac{d}{dt} A^T = \left(\frac{dA}{dt}\right)^T,$  (1.45a)

$\frac{d}{dt} A^{-1} = -A^{-1} \cdot \frac{dA}{dt} \cdot A^{-1}$  (1.45b)

and  $\frac{d}{dt} \det A = \operatorname{tr}\left(\frac{dA}{dt} \cdot A^{-1}\right) \det A.$  (1.45c)
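Identity (1.45c) in particular is worth verifying numerically; a minimal sketch using a central finite difference in time (the tensor A(t) is our toy example):

```python
import numpy as np

def A_of_t(t):
    # A(t): a linearly varying diagonal plus a constant off-diagonal part.
    return np.diag([1.0 + t, 2.0, 3.0 - t]) + 0.1 * np.ones((3, 3))

t, h = 0.4, 1e-6
A = A_of_t(t)
dA = (A_of_t(t + h) - A_of_t(t - h)) / (2.0 * h)   # dA/dt, central difference

lhs = (np.linalg.det(A_of_t(t + h)) - np.linalg.det(A_of_t(t - h))) / (2.0 * h)
rhs = np.trace(dA @ np.linalg.inv(A)) * np.linalg.det(A)   # (1.45c)
assert np.isclose(lhs, rhs, atol=1e-4)
```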

However, if x changes with time, as, say, in a deforming body with x(t) locating a material point, care is required. For example, if Φ(x, t) relates to a particle property, then Φ is modified both by the passage of time and by the particle's changing location. The appropriate formula for the total time rate of change of Φ(x(t), t) is obtained by employing the chain rule:

$\frac{d}{dt}\Phi(x(t), t) = \frac{\partial \Phi}{\partial t} + \nabla\Phi \cdot v = \frac{\partial \Phi}{\partial t} + \Phi_{,i}\, v_i,$  (1.46)

where we have set $dx/dt = v(x(t), t)$. In general, we have the relationship

$\frac{d}{dt}(\cdot) = \frac{\partial}{\partial t}(\cdot) + \nabla(\cdot) \cdot v,$  (1.47)

between the total (material) time derivative $d(\cdot)/dt$, the local time derivative $\partial(\cdot)/\partial t$, and the streaming or convective derivative $\nabla(\cdot) \cdot v$.
