Strongly Typed Tensor Calculus

Notation summary (high-, mid-, and low-level notation, with descriptions).

Variations; variational derivatives; tangent vectors.
- Variation of a point in M; a parameterized family of points m in M.
- Variational derivative; δ := ∂/∂ε |_{ε=0} along the variation parameter.
- Tangent vector; linearization of a variation; δm ∈ T_{m(0)} M.

Projection maps; canonical isomorphisms; bundle-related maps and spaces.
- pr_i, pr^{A_1×···×A_n}_{A_i} — set-theoretic projection onto the i-th or named factor; pr^{A_1×···×A_n}_i : A_1 × ··· × A_n → A_i.
- Canonical isomorphism between A and B; a distinguished isomorphism A → B whose inverse is the canonical isomorphism B → A.
- π^F_M — bundle projection map; π^F_M : F → M.
- ρ^{φ*H}_H — pullback-bundle fiber projection map; ρ^{φ*H}_H : φ*H → H.

Trivial bundle constructions and projection maps.
- Trivial bundle over N; the product M × N regarded as a bundle via the projection M × N → N, (m, n) ↦ n.
- Trivial bundle over M; the product M × N regarded as a bundle via the projection M × N → M, (m, n) ↦ m.

Shared base-space bundle constructions and projection maps (E → M and F → M vector bundles).
- Direct product; E ×_M F := ⨆_{m∈M} E_m × F_m; π(e, f) := π^E_M(e) = π^F_M(f).
- Whitney sum; E ⊕_M F := ⨆_{m∈M} E_m ⊕ F_m; π(e ⊕ f) := π^E_M(e) = π^F_M(f).
- Tensor product; E ⊗_M F := ⨆_{m∈M} E_m ⊗ F_m; π(Σ_{ij} c^{ij} e_i ⊗ f_j) := π^E_M(e_k) = π^F_M(f_ℓ) (for any k, ℓ).

Separate base-space bundle constructions and projection maps (E → M and H → N vector bundles).
- Direct product; E ×_{M×N} H := ⨆_{(m,n)∈M×N} E_m × H_n; π(e, h) := (π^E_M(e), π^H_N(h)).
- Full direct sum; E ⊕_{M×N} H := ⨆_{(m,n)∈M×N} E_m ⊕ H_n; π(e ⊕ h) := (π^E_M(e), π^H_N(h)).
- Full tensor product; E ⊗_{M×N} H := ⨆_{(m,n)∈M×N} E_m ⊗ H_n; π(Σ_{ij} c^{ij} e_i ⊗ h_j) := (π^E_M(e_k), π^H_N(h_ℓ)) (for any k, ℓ).

Trace; natural pairing; tensor/tensor field contraction. Simple tensor expressions are extended linearly.
- tr_V — trace on V; tr_V : V* ⊗ V → R, ω ⊗ v ↦ ω(v).
- ω ·_V v — natural pairing; ·_V : V* × V → R, (ω, v) ↦ ω(v).
- A ·_V B — tensor contraction; ·_V : (U ⊗ V*) × (V ⊗ W) → U ⊗ W, (u ⊗ ω) ·_V (v ⊗ w) := (ω ·_V v) u ⊗ w.
- ·_n — alternate for ·_V when V = V_1 ⊗ ··· ⊗ V_n.
- S : T — special notation for ·_2.
- tr_F — trace on F → M; tr_F : Γ(F* ⊗_M F) → C^∞(M, R), [tr_F(ω ⊗_M f)](m) := ω(m) ·_{F_m} f(m) for m ∈ M.
- ω ·_F f — natural pairing of tensor fields; ·_F : Γ(F*) × Γ(F) → C^∞(M, R), (ω ·_F f)(m) := ω(m) ·_{F_m} f(m) for m ∈ M.
- A ·_F f — natural pairing of a tensor field with a vector; ·_F : Γ(E ⊗_M F*) × F → E, (e ⊗_M ω) ·_F f := (ω(m) ·_{F_m} f) e(m) ∈ E_m, where m := π^F_M(f).
- S ·_F T — tensor field contraction; pointwise tensor contraction; ·_F : Γ(E ⊗_M F*) × Γ(F ⊗_M G) → Γ(E ⊗_M G), [(e ⊗ ω) ·_F (f ⊗ g)](m) := (ω(m) ·_{F_m} f(m)) e(m) ⊗ g(m).
- ·_n — alternate for ·_F when F = F_1 ⊗_M ··· ⊗_M F_n; S : T is the special notation for n = 2.

Permutations of tensors and tensor fields.
- A^σ, for σ ∈ S_n — right-action of permutations on n-tensors/n-tensor fields; (v_1 ⊗ ··· ⊗ v_n)^σ := v_{σ^{-1}(1)} ⊗ ··· ⊗ v_{σ^{-1}(n)}; (A^σ)^τ = A^{στ}.

Spaces of sections of bundles.
- Γ(H) := { h ∈ C^∞(N, H) | π^H_N ∘ h = Id_N }; Γ_φ(H) := { h ∈ C^∞(M, H) | π^H_N ∘ h = φ }.

Vertical bundle, pullback bundle, projection maps, pullback of sections.
- VE — vertical bundle over E → M; VE := ker Tπ^E_M ⊆ TE; projection map π^{VE}_E := π^{TE}_E |_{VE}.
- φ*H → M — pullback bundle; π^{φ*H}_M(m, h) := m; ρ^{φ*H}_H(m, h) := h.
- φ*h ∈ Γ(φ*H) — pullback of a section h ∈ Γ(H); defined by ρ^{φ*H}_H ∘ φ*h = h ∘ φ.

Covariant derivatives; partial covariant derivatives.
- ∇L — natural linear covariant derivative; differential of functions; ∇L := dL ∈ Γ(T*M), where L ∈ C^∞(M, R).
- ∇^E X — linear covariant derivative on a vector bundle E → M; ∇^E X ∈ Γ(E ⊗_M T*M).
- ∇φ — tangent map as a tensor field; ∇φ ∈ Γ(φ*TN ⊗_M T*M), where φ ∈ C^∞(M, N).
- φ*∇^H — pullback covariant derivative on Γ(φ*H); defined by φ*∇^H(φ*h) = φ*(∇^H h) ·_{φ*TN} ∇φ for h ∈ Γ(H).
- L_{,c_1}, ..., L_{,c_n} — partial differentials of a function; L_{,c_i} ∈ Γ(F_i*), defined by ∇L = Σ_{i=1}^n L_{,c_i} ·_{F_i} c_i, where the c_i ∈ Γ(F_i ⊗_M T*M) are such that c_1 ⊕_M ··· ⊕_M c_n ∈ Γ((F_1 ⊕_M ··· ⊕_M F_n) ⊗_M T*M) is a vector bundle isomorphism.
- X_{,c_1}, ..., X_{,c_n} — partial linear covariant derivatives; X_{,c_i} ∈ Γ(E ⊗_M F_i*), defined by ∇^E X = Σ_{i=1}^n X_{,c_i} ·_{F_i} c_i.
- φ_{,M_1}, ..., φ_{,M_n} — partial-derivative decomposition of the tangent map; φ_{,M_i} ∈ Γ(φ*TN ⊗_M pr_i*T*M_i), where M = M_1 × ··· × M_n, pr_i := pr^M_{M_i}, and ∇φ = Σ_{i=1}^n φ_{,M_i} ·_{pr_i*TM_i} ∇pr_i.

Covariant Hessians.
- ∇²L — covariant Hessian of a function; ∇²L ∈ Γ(T*M ⊗_M T*M); L ∈ C^∞(M, R).
- ∇²X — covariant Hessian on a vector bundle E → M; ∇²X ∈ Γ(E ⊗_M T*M ⊗_M T*M); X ∈ Γ(E).
- ∇²φ — covariant Hessian of a map; ∇²φ ∈ Γ(φ*TN ⊗_M T*M ⊗_M T*M); φ ∈ C^∞(M, N).

Derivative conventions.
- ∇_X e — directional derivative notation; ∇_X e := ∇e ·_{TM} X.
- ∇^n e — iterated covariant derivative convention; ∇^n e ·_n (X_1 ⊗ ··· ⊗ X_n) is defined via ∇_{X_n}(∇^{n-1} e ·_{n-1} (X_1 ⊗ ··· ⊗ X_{n-1})).
- R(X, Y) := −∇_X ∇_Y + ∇_Y ∇_X + ∇_{[X,Y]} — curvature operator; R(X, Y) · e = ∇²e : (X ⊗ Y − Y ⊗ X).
- z : M → M × I, m ↦ (m, 0) — evaluation-at-zero map.
If a quantity σ is written with a single down index as σ_j, then σ is a (0,1)-tensor. Because of the repeated down index j, the expression g_{ij} σ_j typically indicates a type error; g_{ij} can't contract with σ_j because of incompatible valence (valence being the number of up and down indices). Furthermore, multiplying a (0,2)-tensor with a (0,1)-tensor without contraction should result in a (0,3)-tensor, which should be denoted using three indices, as in g_{ij} σ_k.
The only explicit type information provided by abstract index notation is that of valence. The "semi" qualifier mentioned earlier is earned by the lack of distinction between the different spaces in which the tensors reside. For example, if U, V, W are finite-dimensional vector spaces, then linear maps A : U → V and B : V → W can be written as (1,1)-tensors, and their composition B ∘ A : U → W is written as the tensor contraction (B ∘ A)^i_j = B^i_k A^k_j. However, while the expression A^i_k B^k_j makes sense in terms of valence compatibility (i.e. grammatically), the composition A ∘ B that it should represent is not well-defined. Thus this form of type error is not caught by abstract index notation, since the domains/codomains of the linear maps must be checked separately.
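As a small illustrative sketch (anticipating the typed contraction notation of Section 3, and not part of the original discussion at this point), the same composition can be written so that the mismatch is a type error rather than a silent convention:

  B ·_V A ∈ W ⊗ U*,  with  (B ·_V A) ·_U u = B ·_V (A ·_U u) ∈ W  for u ∈ U,

since B ∈ W ⊗ V* exposes a V*-slot and A ∈ V ⊗ U* exposes a V-slot. By contrast, the index-compatible expression A^i_k B^k_j would correspond to contracting A ∈ V ⊗ U* against B ∈ W ⊗ V*, which fails: a U*-slot cannot be paired with a W-slot unless an identification U ≅ W* is explicitly supplied.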
The use of dimensional analysis (the abstract use of units such as kilograms, seconds, etc.) in physics is an important precedent of strong typing. Each quantity has an associated dimension (a different meaning from the dimension of linear algebra) which is expressed as a fraction of powers of formal symbols. The ordinary algebraic rules for fractions and formal symbols are used for the dimensional components, with the further requirement that addition and equality may only occur between quantities having the same dimension.

For example, if E, M and C represent the dimensions of energy, mass and cost, respectively, and if the energy storage density ε of a battery manufacturing process is known (having dimensions energy per mass, E/M) and the manufacturing weight yield w of the battery is known (having dimensions mass per cost, M/C), then under the algebraic conventions of dimensional analysis, calculating the energy storage per cost (which should have dimensions energy per cost) is simple;

  (ε E/M)(w M/C) = εw EM/(MC) = εw E/C

(the M symbols cancel in the fraction). Here, both ε and w are real numbers, and besides using the well-definedness of real multiplication, no type-checking is done in the expression εw.

A contrasting example is the quantity ε/w, having dimensions EC/M². However, these dimensions may be considered to be meaningless in the given context. The quantity's type adds meaning to the real-valued quantity, and while the quantity is well-defined as a real number, the uselessness of the type may indicate that an error has been made in the calculations. For example, a type mismatch between the two sides of an equation is a strong indication of error.
This is also a convenient way to think about the chain rule of calculus. If z(y), y(x), and x measure real-valued quantities, then z(y(x)) measures the quantity z with respect to the quantity x. Using Z, Y and X for the dimensions of the quantities z, y and x respectively, the derivative dz/dx has dimensions Z/X. When worked out, the dimensions for the quantities on either side of the equation

  dz/dx = (dz/dy)(dy/dx)

will match exactly, having a non-coincidental similarity to the calculation in the battery product example.
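A minimal worked check, with quantities invented purely for illustration: take x to be time (dimension T), y a position (dimension L), and z a potential energy (dimension E). Then dz/dy has dimensions E/L and dy/dx has dimensions L/T, so the right-hand side has dimensions (E/L)(L/T) = E/T, matching the dimensions Z/X = E/T of dz/dx, exactly as in the cancellation of M in the battery example.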
2 Telescoping Notation (aka Don't Fear the Verbosity)

Many of the computations developed in this paper will appear to be overly pedantic, owing to the decoration-heavy notation that will be introduced in Section 3. This decoration is largely for the purpose of tracking the myriad of types in the type system and to assist the human reader or writer in making sense of and error-checking the expressions involved. The pedantry in this paper plays the role of introducing the technique.

The notation is designed to telescope^1, meaning that there is a spectrum of notational decoration; from

- pedantically type-specified, verbose, and decoration-heavy, where [almost] no types must be inferred from context and there is little work or expertise required on the part of the reader, to
- somewhat decorated but more compact, where the reader must do a little bit of thinking to infer some types, all the way to
- tersely notated with minimal type decoration, where [almost] all types must be inferred from context and the reader must either do a lot of thinking or be relatively experienced.

Additionally, some of the chosen symbols are meant to obey the same telescoping range of specificity. For example, compare n-fold tensor contraction ·_n with the type-specified ·_{V_1 ⊗ ··· ⊗ V_n} as discussed in Section 3, or the telescoping family of derivative symbols discussed in Section 10. Tersely notated computations can be seen in Section 10, while fully-verbose computations abound in the careful exposition of Part II.

^1 Credit for the notion of telescoping notation is due in part to David DeConde, during one of many enjoyable and insightful conversations.
3 Strongly-Typed Linear Algebra via Tensor Products

A fully strongly typed formulation of linear algebra will now be developed which enjoys a level of abstraction and flexibility similar to that of Penrose's abstract index notation. Emphasis will be placed on notational and conceptual regularity via a tensor formalism, coupled with a notion of untangled expression which exploits and notationally depicts the associativity of linear composition.

If V denotes a finite-dimensional vector space, then let

  ·_V : V* × V → R, (ω, v) ↦ ω(v)

denote the natural pairing on V, and denote ·_V(ω, v) using the infix notation ω ·_V v. The natural pairing is a nondegenerate bilinear form and its bilinearity gives the expression ω ·_V v multiplicative semantics (distributivity and commutativity with scalar multiplication), thereby justifying the use of the infix operator normally reserved for multiplication. The natural pairing subscript V is seemingly pedantic, but will prove to be an invaluable tool for articulating and navigating the rich type system of the linear algebraic and vector bundle constructions used in this paper. When clear from context, the subscript V may be omitted.

Because V is finite-dimensional, it is reflexive (i.e. the canonical injection V → V**, v ↦ (ω ↦ ω(v)), is a linear isomorphism). Thus the natural pairing ·_{V*} on V* may be written as

  ·_{V*} : V × V* → R, (v, ω) ↦ ω(v).

Note that ω ·_V v = v ·_{V*} ω. Though subtle, the distinction between ·_V and ·_{V*} is important within the type system used in this paper.
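In a basis (a routine check, included only to fix conventions): if v_1, …, v_m is a basis of V with dual basis v^1, …, v^m of V*, then for ω = ω_i v^i and v = a^j v_j,

  ω ·_V v = ω_i a^j (v^i ·_V v_j) = ω_i a^j δ^i_j = ω_i a^i,

the familiar pairing of components.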
Through a universal mapping property of multilinear maps, the bilinear forms ·_V and ·_{V*} descend to the natural trace maps

  tr_V : V* ⊗ V → R, ω ⊗ v ↦ ω(v), and
  tr_{V*} : V ⊗ V* → R, v ⊗ ω ↦ ω(v),

each extended linearly to non-simple tensors. These operations can also be called tensor contraction. Noting that (V* ⊗ V)* and (V ⊗ V*)* are canonically isomorphic to V ⊗ V* and V* ⊗ V respectively, then for each A ∈ V* ⊗ V and B ∈ V ⊗ V*, it follows that tr_V(A) = Id_V ·_{V*⊗V} A and tr_{V*}(B) = Id_{V*} ·_{V⊗V*} B.
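For concreteness (a standard computation using the bases above): for A = A_i^j v^i ⊗ v_j ∈ V* ⊗ V,

  tr_V(A) = A_i^j v^i(v_j) = A_i^j δ^i_j = A_i^i,

the usual sum of diagonal components.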
Definition 3.1 (Linear maps as tensors). Let V and W be finite-dimensional vector spaces, and let Hom(V, W) denote the space of vector space morphisms from V to W (i.e. linear maps). The linear isomorphism

  W ⊗ V* → Hom(V, W),  w ⊗ ω ↦ (V → W, v ↦ w (ω ·_V v))

(extended linearly to general tensors) will play a central conceptual role in the calculations employed in this paper, as it will facilitate constructions which would otherwise be awkward or difficult to express. Linear maps and appropriately typed tensor products will be identified via this isomorphism.

Given bases v_1, …, v_m ∈ V and w_1, …, w_n ∈ W, and dual bases v^1, …, v^m ∈ V* and w^1, …, w^n ∈ W*, a linear map A : V → W can be written under the identification in (3.1) as

  A = A^i_j w_i ⊗ v^j,

where A^i_j = w^i ·_W A ·_V v_j ∈ R, and in fact (A^i_j) ∈ M_{n×m}(R) is the matrix representation of A with respect to the bases v_1, …, v_m ∈ V and w_1, …, w_n ∈ W, noting that the i and j indices denote the output and input components of A respectively. Tensors are therefore the strongly typed analog of matrices, where the dual (transpose) of a linear map is implemented by the map

  W ⊗ V* → V* ⊗ W,  w ⊗ ω ↦ ω ⊗ w

(where the map is extended linearly to general tensors). This is literally the tensor abstraction of the matrix transpose operation; if A = A^i_j w_i ⊗ v^j, then the dual of A is A^T = A^i_j v^j ⊗ w_i. The matrix of A^T is precisely the transpose of the matrix of A with respect to the relevant bases. The transpose map itself can be written as a 4-tensor, and A^T is then obtained from A by a 2-fold contraction against it; in the permutation notation introduced below, this 4-tensor is the transposition (1 2) ∈ W* ⊗ V ⊗ V* ⊗ W.
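As a quick sanity check of the identification (nothing beyond the definitions above): for A = A^i_j w_i ⊗ v^j and v = a^k v_k,

  A ·_V v = A^i_j a^k w_i (v^j ·_V v_k) = A^i_j a^j w_i,

which reproduces the usual matrix-vector product of the component arrays.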
There is a notion of the natural pairing of tensor products, which implements composition and evaluation of linear maps, and can be thought of as a natural generalization of scalar multiplication in a field. If U, V, and W are each finite-dimensional vector spaces, then the bilinear form

  (U ⊗ V*) × (V ⊗ W) → U ⊗ R ⊗ W ≅ U ⊗ W,
  (u ⊗ ω, v ⊗ w) ↦ u ⊗ (ω ·_V v) ⊗ w = (ω ·_V v) u ⊗ w

will be denoted also by the infix notation ·_V (i.e. (u ⊗ ω) ·_V (v ⊗ w) = (ω ·_V v) u ⊗ w). If V itself is a tensor product of n factors which are clear from context, then ·_V may be denoted by ·_n (think "an n-fold tensor contraction"). If n = 2, then typically : is used in place of ·_2. For example, from above, A^T = (1 2) ·_{W⊗V*} A = (1 2) : A.
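A small worked instance of the 2-fold contraction, just unwinding the definition, with the dual factors matched slot-by-slot: if A = u ⊗ α ⊗ β ∈ U ⊗ (V_1 ⊗ V_2)* (so α ∈ V_1*, β ∈ V_2*) and B = v_1 ⊗ v_2 ⊗ w ∈ (V_1 ⊗ V_2) ⊗ W, then

  A : B = A ·_{V_1⊗V_2} B = (α ·_{V_1} v_1)(β ·_{V_2} v_2) u ⊗ w ∈ U ⊗ W.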
Given a permutation σ ∈ S_n, define a right-action by σ : V_1 ⊗ ··· ⊗ V_n → V_{σ^{-1}(1)} ⊗ ··· ⊗ V_{σ^{-1}(n)}, mapping elements in the obvious way. For example, (2 3 4) acting on v_1 ⊗ v_2 ⊗ v_3 ⊗ v_4 puts the second factor in the third position, the third factor in the fourth position, and the fourth factor in the second, giving v_1 ⊗ v_4 ⊗ v_2 ⊗ v_3.

This permutation is itself a linear map and of course can be written as a tensor. However, because it is defined in terms of a right action, the domain factors will come on the left. Thus σ is written as a tensor of the form V_1* ⊗ ··· ⊗ V_n* ⊗ V_{σ^{-1}(1)} ⊗ ··· ⊗ V_{σ^{-1}(n)} (i.e. as a 2n-tensor). Certain tensor constructions are conducive to using such permutations. In the above example, the transpose 4-tensor can be written as (1 2) ∈ W* ⊗ V ⊗ V* ⊗ W.

The permutation right-action also works naturally when notated using superscripts. For example, if B ∈ U ⊗ V ⊗ W, then

  B^{(1 2)} := B ·_{U⊗V⊗W} (1 2) ∈ V ⊗ U ⊗ W,

and so

  (B^{(1 2)})^{(2 3)} = (B ·_{U⊗V⊗W} (1 2)) ·_{V⊗U⊗W} (2 3)
                     = B ·_{U⊗V⊗W} ((1 2) ·_{V⊗U⊗W} (2 3))
                     = B ·_{U⊗V⊗W} (1 2)(2 3)
                     = B ·_{U⊗V⊗W} (1 3 2) ∈ V ⊗ W ⊗ U.

When multiplying the permutations (1 2) and (2 3) in the third line, it is important to note that they are read left-to-right, since they are acting on B on the right.

The inline cycle notation is somewhat ambiguous in isolation because the number of factors in the domain/codomain is not specified, let alone their types. This information can sometimes be inferred from context, such as from the natural pairing subscripts, as in the following examples.
Example 3.2 (Linearizing the inversion map). Let i : GL(V) → GL(V), A ↦ A^{-1}, i.e. the linear map inversion operator, where GL(V) is an open submanifold of V ⊗ V* (identified with Hom(V, V) as in (3.1)). Writing A + εB for a variation of A and δ := d/dε |_{ε=0}, the derivative of i at A ∈ GL(V) in the direction B ∈ T_A(GL(V)) ≅ V ⊗ V* is

  Di(A) ·_{V⊗V*} B = δ( i(A + εB) )
                   = δ( (A + εB)^{-1} )
                   = δ( ((1 + εBA^{-1}) A)^{-1} )
                   = δ( A^{-1} (1 + εBA^{-1})^{-1} )
                   = δ( A^{-1} Σ_{n≥0} (−εBA^{-1})^n )
                   = δ( A^{-1} (1 − εBA^{-1} + O(ε²)) )
                   = −A^{-1} ·_V B ·_V A^{-1}.

In order to move the B parameter out so that it plays the same syntactical role as in the original expression Di(A) · B, via adjacent natural pairing, some simple tensor manipulations can be done. The process is easily and accurately expressed via diagram. The following sequence of diagrams is a sequence of equalities. The diagrams should be self-explanatory, but for reference, the number of boxes for a particular label denotes the rank of the tensor, with each box labeled with its type. The lines connecting various boxes are natural pairings, and the circles represent the unpaired slots, which comprise the type of the resulting expression.

[Diagram: Di(A) ·_{V⊗V*} B drawn in boxes, equal to −A^{-1} ·_V B ·_V A^{-1} drawn in boxes.]

The following step is nothing but moving the boxes for B out; the natural pairings still apply to the same slots, hence the cables dangling below.

[Diagram: the boxes for −A^{-1} and A^{-1}, with the boxes for B moved out and the pairings drawn as dangling cables.]

In this setting, a tensor product amounts to flippantly gluing boxes together.

[Diagram: the glued boxes representing −A^{-1} ⊗ A^{-1}, paired against B.]

In order for B to be naturally paired in the same adjacent manner as in the original expression Di(A) · B, the slots of −A^{-1} ⊗ A^{-1} must be permuted; the second moves to the third, the third to the fourth, and the fourth to the second.

[Diagram: (−A^{-1} ⊗ A^{-1})^{(2 3 4)} ·_{V⊗V*} B.]

The first diagram equals the last one, thus

  Di(A) ·_{V⊗V*} B = (−A^{-1} ⊗ A^{-1})^{(2 3 4)} ·_{V⊗V*} B,

and by the nondegeneracy of the natural pairing on V ⊗ V*, Di(A) = (−A^{-1} ⊗ A^{-1})^{(2 3 4)}.

The parallel tensor product of linear maps is defined by its characterizing property. If U, V, W, X are finite-dimensional vector spaces, A ∈ U ⊗ V* and B ∈ W ⊗ X*, then A ⊠ B ∈ (U ⊗ W) ⊗ (V ⊗ X)* is the tensor such that, for φ ∈ V and ξ ∈ X,

  (A ⊠ B) ·_{V⊗X} (φ ⊗ ξ) = (A ·_V φ) ⊗ (B ·_X ξ).
There is a slight ambiguity in the notation coming from a lack of specification of how the tensor product of the operands is decomposed in the case when there is more than one such decomposition. Notation explicitly resolving this ambiguity will not be needed in this paper, as the relevant tensor product is usually clear from context.

The parallel tensor product is associative; if Y and Z are also vector spaces and C ∈ Y ⊗ Z*, then

  (A ⊠ B) ⊠ C = A ⊠ (B ⊠ C) ∈ (U ⊗ W ⊗ Y) ⊗ (V ⊗ X ⊗ Z)*,

allowing multiply-parallel tensor products.

Example 3.4 (Tensor product of inner product spaces). If (V, g) and (W, h) are inner product spaces (noting that g ∈ V* ⊗ V* and h ∈ W* ⊗ W*), then the space W ⊗ V* of linear maps V → W is an inner product space having induced inner product

  k(A, B) := tr( g^{-1} ·_{V*} A^T ·_{W*} h ·_W B ).

Here, the inputs of A and B (the V*-slots) are paired through g^{-1}, their outputs through h, and the trace is used to complete the cycle by plugging the output into the input, thereby producing a real number. The expression k(A, B) can be written in a more natural way, which takes advantage of the linear composition, as A : k : B (or, pedantically, A ·_{W*⊗V} k ·_{W⊗V*} B), instead of the more common but awkward trace expression mentioned earlier. In the tensor formalism, the inner product k should have type W* ⊗ V ⊗ W* ⊗ V, so that
both contractions in A : k : B type-check.

A smooth bundle is specified by a 4-tuple (𝒞, E, π, N), where π : E → N is a smooth surjection which is locally trivial: each point of N has a neighborhood U with π^{-1}(U) ≅ U × 𝒞 as smooth manifolds. The manifolds 𝒞, E and N are called the typical fiber, the total space, and the base space respectively. The map π is called the bundle projection. The full 4-tuple specifying a bundle can be recovered from the bundle projection map, so a locally trivial smooth map can be said to define a smooth bundle. The dimension of the typical fiber of a bundle will be called its rank, and will be denoted by rank π or rank E when the bundle is understood from context.

The space of smooth sections of a smooth bundle defined by π : E → N is

  Γ(π) := { σ ∈ C^∞(N, E) | π ∘ σ = Id_N },

and may also be denoted by Γ(E), if the bundle is clear from context. If nonempty, Γ(π) is generally an infinite-dimensional manifold (the exception being when the base space N is finite).
Proposition 4.1 (Trivial bundle). Let M and N be smooth manifolds. Then the projection

  pr^{M×N}_2 : M × N → N

defines a smooth bundle (M, M × N, pr^{M×N}_2, N), called a trivial bundle. Similarly, the projection

  pr^{M×N}_1 : M × N → M

makes (N, M × N, pr^{M×N}_1, M) a trivial bundle.

No proof is deemed necessary for (4.1), as each bundle projection trivializes globally in the obvious way. The compact symbol for a trivial bundle is a composite of × (indicating direct product) and an underline or overline (indicating the base space); in this rendering, trivial bundles will simply be written as M × N → N or M × N → M, with the base space indicated explicitly.
If M and N are smooth manifolds as in (4.1), then there are two particularly useful natural identifications:

  C^∞(M, N) ≅ Γ(M × N → M),  φ ↦ Id_M × φ,  with inverse s ↦ pr^{M×N}_2 ∘ s,
  C^∞(M, N) ≅ Γ(N × M → M),  φ ↦ φ × Id_M,  with inverse s ↦ pr^{N×M}_1 ∘ s.

These identifications can be thought of as identifying a map φ ∈ C^∞(M, N) with the corresponding section (its graph).

Proposition 4.2 (Direct product bundle). If π^E : E → M and π^F : F → N define smooth bundles with typical fibers 𝒞 and 𝒯, then

  π^E × π^F : E × F → M × N,  (e, f) ↦ (π^E(e), π^F(f))

defines a smooth bundle (𝒞 × 𝒯, E × F, π^E × π^F, M × N). This bundle is called the direct product of π^E and π^F, and is not necessarily a trivial bundle.
Proof. Let Φ_E : (π^E)^{-1}(U) → U × 𝒞 and Φ_F : (π^F)^{-1}(V) → V × 𝒯 trivialize π^E and π^F over open sets U ⊆ M and V ⊆ N respectively. Then

  Φ_E × Φ_F : (π^E)^{-1}(U) × (π^F)^{-1}(V) → (U × 𝒞) × (V × 𝒯)

has inverse Φ_E^{-1} × Φ_F^{-1}. Note that

  (π^E)^{-1}(U) × (π^F)^{-1}(V) = { (e, f) ∈ E × F | π^E(e) ∈ U, π^F(f) ∈ V }
                                = { (e, f) ∈ E × F | (π^E × π^F)(e, f) ∈ U × V }
                                = (π^E × π^F)^{-1}(U × V),

and that

  P : (U × 𝒞) × (V × 𝒯) → (U × V) × (𝒞 × 𝒯),  ((u, e), (v, f)) ↦ ((u, v), (e, f))

defines a diffeomorphism. Then

  Φ_{E×F} := P ∘ (Φ_E × Φ_F) : (π^E × π^F)^{-1}(U × V) → (U × V) × (𝒞 × 𝒯)

defines a diffeomorphism, and for (e, f) in its domain,

  pr^{(U×V)×(𝒞×𝒯)}_1 ∘ Φ_{E×F}(e, f)
    = pr^{(U×V)×(𝒞×𝒯)}_1 ∘ P(Φ_E(e), Φ_F(f))
    = pr^{(U×V)×(𝒞×𝒯)}_1 ∘ P((pr^{U×𝒞}_1 ∘ Φ_E(e), pr^{U×𝒞}_2 ∘ Φ_E(e)), (pr^{V×𝒯}_1 ∘ Φ_F(f), pr^{V×𝒯}_2 ∘ Φ_F(f)))
    = (pr^{U×𝒞}_1 ∘ Φ_E(e), pr^{V×𝒯}_1 ∘ Φ_F(f))
    = (π^E(e), π^F(f))
    = (π^E × π^F)(e, f),

showing that Φ_{E×F} trivializes π^E × π^F over U × V ⊆ M × N. Since M × N can be covered by such trivializing sets, this establishes that π^E × π^F defines a smooth bundle. The typical fiber of π^E × π^F is 𝒞 × 𝒯.
A smooth vector bundle is a fiber bundle whose typical fiber is a vector space and whose local trivializations are linear isomorphisms when restricted to each fiber. If (𝒞, E, π, M) is a smooth vector bundle, then its dual vector bundle is (𝒞*, E*, π*, M), where

  E* := ⨆_{p∈M} (E_p)*  and  π* : E* → M,  ω ∈ (E_p)* ↦ p.

Because 𝒞 is a vector space, the fiberwise natural pairing notation carries over: if ω ∈ (E_p)* and e ∈ E_p, then ω ·_E e := ω ·_{E_p} e and e ·_{E*} ω := e ·_{(E_p)*} ω. Both expressions evaluate to ω(e). Natural traces and n-fold tensor contraction can be defined analogously. Again, while seemingly pedantic, the subscripted natural pairing notation will prove to be a valuable tool in articulating and error-checking calculations involving vector bundles. To generalize the rest of Section 3 will require the definition of additional structures.

For the remainder of this section, let (𝒞, E, π^E, M) and (𝒯, F, π^F, N) now be smooth vector bundles.
The following construction is essentially an alternate notation for π^E × π^F : E × F → M × N, but is one that takes advantage of the fact that π^E and π^F are vector bundles, and encodes in the notation the fact that the resulting construction is also a vector bundle. This is analogous to how V × W is a vector space with a natural structure if V and W are vector spaces, except that this is usually denoted by V ⊕ W.

Proposition 4.3 (Full direct sum vector bundle). If E ⊕_{M×N} F := E × F, then

  π^{E ⊕_{M×N} F} := π^E × π^F : E ⊕_{M×N} F → M × N

defines a smooth vector bundle (𝒞 ⊕ 𝒯, E ⊕_{M×N} F, π^{E ⊕_{M×N} F}, M × N), called the full direct sum of π^E and π^F. For each (p, q) ∈ M × N, the vector space structure on (π^{E ⊕_{M×N} F})^{-1}(p, q) is given in the following way. Let λ ∈ R and (e_1, f_1), (e_2, f_2) ∈ (π^{E ⊕_{M×N} F})^{-1}(p, q). Then

  λ(e_1, f_1) + (e_2, f_2) = (λe_1 + e_2, λf_1 + f_2).

It is critical to see (4.5) for remarks on notation.
Proof. Let U, V, 𝒞, 𝒯, P, Φ_E, Φ_F and Φ_{E×F} be as in the proof of (4.2), and define Φ_{E ⊕_{M×N} F} := Φ_{E×F}. Noting that Φ_{E ⊕_{M×N} F} is a smooth bundle isomorphism over Id_{U×V}, to show that it is a linear isomorphism in each fiber it suffices to show that it is linear in each fiber. Let λ ∈ R, (p, q) ∈ U × V and (e_1, f_1), (e_2, f_2) ∈ (π^{E ⊕_{M×N} F})^{-1}(p, q). Then

  Φ_{E ⊕_{M×N} F}(λe_1 + e_2, λf_1 + f_2)
    = P ∘ (Φ_E × Φ_F)(λe_1 + e_2, λf_1 + f_2)
    = P(Φ_E(λe_1 + e_2), Φ_F(λf_1 + f_2))
    = P(λΦ_E(e_1) + Φ_E(e_2), λΦ_F(f_1) + Φ_F(f_2))   (by the trivial vector bundle structures on U × 𝒞 and V × 𝒯)
    = λP(Φ_E(e_1), Φ_F(f_1)) + P(Φ_E(e_2), Φ_F(f_2))   (by the trivial vector bundle structure on (U × V) × (𝒞 ⊕ 𝒯))
    = λΦ_{E ⊕_{M×N} F}(e_1, f_1) + Φ_{E ⊕_{M×N} F}(e_2, f_2).

Thus Φ_{E ⊕_{M×N} F} is linear in each fiber, and because it is invertible, it is a linear isomorphism in each fiber. In particular, Φ_{E ⊕_{M×N} F} is a smooth vector bundle isomorphism over Id_{U×V}. Applying (Φ_{E ⊕_{M×N} F})^{-1} to the above equation gives

  (λe_1 + e_2, λf_1 + f_2) = λ(e_1, f_1) + (e_2, f_2),

as desired.

This construction differs from the Whitney sum of two vector bundles, as the base spaces of the bundles are kept separate, and aren't even required to be the same. This allows the identification of T(M × N) → M × N with TM ⊕_{M×N} TN → M × N, which may be done without comment later in this paper. Some important related structures are the pullback bundles pr_1*TM → M × N and pr_2*TN → M × N, where pr_i := pr^{M×N}_i.

The next construction is what will be used in the implementation of smooth vector bundle morphisms as tensor fields.
Proposition 4.4 (Full tensor product bundle). If

  E ⊗_{M×N} F := ⨆_{(p,q)∈M×N} E_p ⊗ F_q   (disjoint union),

then

  π^{E ⊗_{M×N} F} : E ⊗_{M×N} F → M × N,  Σ_{ij} c^{ij} e_i ⊗ f_j ↦ (π^E(e_1), π^F(f_1))   (here, c^{ij} ∈ R)

defines a smooth vector bundle (𝒞 ⊗ 𝒯, E ⊗_{M×N} F, π^{E ⊗_{M×N} F}, M × N), called the full tensor product^2 of π^E and π^F. It is critical to see (4.5) for remarks on notation.
Proof. Since the argument Σ_{ij} c^{ij} e_i ⊗ f_j in the definition of π^{E ⊗_{M×N} F} is not necessarily unique, the well-definedness of π^{E ⊗_{M×N} F} must be shown. Let Σ_{ij} c^{ij} e^1_i ⊗ f^1_j = Σ_{ij} d^{ij} e^2_i ⊗ f^2_j. Then in particular both expressions lie in E_p ⊗ F_q for some (p, q) ∈ M × N, and therefore e^1_i, e^2_i ∈ E_p and f^1_j, f^2_j ∈ F_q for each index i and j. Thus π^E(e^1_1) = p = π^E(e^2_1) and π^F(f^1_1) = q = π^F(f^2_1), so the expression defining π^{E ⊗_{M×N} F} is well-defined.

The set E ⊗_{M×N} F does not have an a priori global smooth manifold structure, as it is defined as the disjoint union of vector spaces. A smooth manifold structure compatible with that of the constituent vector spaces will now be defined.

Let Φ_E : (π^E)^{-1}(U) → U × 𝒞 and Φ_F : (π^F)^{-1}(V) → V × 𝒯 trivialize π^E and π^F over open sets U ⊆ M and V ⊆ N respectively, such that Φ_E and Φ_F are each linear in each fiber. Define

  Φ_{E ⊗_{M×N} F} : (π^{E ⊗_{M×N} F})^{-1}(U × V) → (U × V) × (𝒞 ⊗ 𝒯),
  X ↦ ( π^{E ⊗_{M×N} F}(X), ((pr^{U×𝒞}_2 ∘ Φ_E) ⊗ (pr^{V×𝒯}_2 ∘ Φ_F))(X) ).

The map Φ_{E ⊗_{M×N} F} is well-defined and smooth in each fiber by construction, since for each (p, q) ∈ U × V,

  ((pr^{U×𝒞}_2 ∘ Φ_E) ⊗ (pr^{V×𝒯}_2 ∘ Φ_F)) |_{E_p ⊗ F_q} : E_p ⊗ F_q → 𝒞 ⊗ 𝒯

is a linear isomorphism by construction. Additionally, Φ_{E ⊗_{M×N} F} has been constructed so that

  pr^{(U×V)×(𝒞⊗𝒯)}_1 ∘ Φ_{E ⊗_{M×N} F} = π^{E ⊗_{M×N} F}

on (π^{E ⊗_{M×N} F})^{-1}(U × V). Define the smooth structure on (π^{E ⊗_{M×N} F})^{-1}(U × V) ⊆ E ⊗_{M×N} F by declaring Φ_{E ⊗_{M×N} F} to be a diffeomorphism. The map π^{E ⊗_{M×N} F} is then trivialized over U × V. The set E ⊗_{M×N} F can be covered by such trivializing open sets. Thus E ⊗_{M×N} F has been shown to be locally diffeomorphic to the direct product of smooth manifolds, and therefore it has been shown to be a smooth manifold. With respect to the smooth structure on E ⊗_{M×N} F, the map π^{E ⊗_{M×N} F} is smooth, and has therefore been shown to define a smooth vector bundle.
Remark 4.5 (Notation regarding base space). The full direct sum (4.3) and full tensor product (4.4) bundle constructions allow direct sums and tensor products to be taken of vector bundles when the base spaces differ. If the base spaces are the same, then the construction joins them, producing a vector bundle over that shared base space. For example, if E and F are vector bundles over M, then E ⊗_{M×M} F has base space M × M, while E ⊗ F has base space M. The base space can be specified in either case as a notational aide; the latter example would be written as E ⊗_M F. If no subscript is provided on the symbol, then the base spaces are joined if possible (if they are the same space), otherwise they are kept separate, as in the full tensor product construction. This notational convention conforms to the standard Whitney sum and tensor product bundle notation, and uses the notion of telescoping notation to provide more specificity when necessary.

^2 This construction is alluded to in [7, pg. 121], but is not defined or discussed.
Given a fiber bundle, a natural vector bundle can be constructed on top of it, essentially quantifying the variations of bundle elements along each fiber. This is known as the vertical bundle, and it plays a critical role in the development of Ehresmann connections, which provide the horizontal complement to the vertical bundle.

Proposition 4.6 (Vertical bundle). Let π^E : E → M define a smooth [fiber] bundle. If VE := ker Tπ^E ⊆ TE, then

  π^{VE}_E := π^{TE}_E |_{VE} : VE → E

defines a smooth vector subbundle of π^{TE}_E : TE → E, called the vertical bundle over E. Furthermore, the fiber over e ∈ E is V_e E = T_e E_{π^E(e)} ⊆ T_e E.

Proof. Because π^E is a smooth surjective submersion, VE → E is a subbundle of TE → E having corank dim M and therefore rank equal to that of E. Furthermore, if e ∈ E and ẽ is a variation of e within the fiber E_{π^E(e)}, then δẽ represents an arbitrary element of T_e E_{π^E(e)}, and Tπ^E · δẽ = δ(π^E ∘ ẽ) = δ(π^E(e)) = 0, showing that δẽ ∈ ker Tπ^E, and therefore that δẽ ∈ V_e E. This shows that T_e E_{π^E(e)} ⊆ V_e E. Because dim T_e E_{π^E(e)} = rank E, this shows that T_e E_{π^E(e)} = V_e E.

Given the extra structure that a vector bundle provides over a [fiber] bundle, there is a canonical smooth vector bundle isomorphism which adds significant value to the pullback bundle formalism used throughout this paper. This can be seen put to greatest use in Part II, for example, in the development of the first variation (see (12.1)).
Proposition 4.7 (Vertical bundle as pullback). If π : E → M defines a smooth vector bundle, then

  π*E → VE,  (x, y) ↦ δ(x + σy)

(the variational derivative at σ = 0 of the fiberwise-affine variation σ ↦ x + σy) is a smooth vector bundle isomorphism over Id_E, called the vertical lift, having inverse

  VE → π*E,  δe ↦ ( e_0, lim_{σ→0} (e_σ − e_0)/σ ),

where, without loss of generality, e is a variation of e_0 within a single fiber.

Proof. The vertical lift is linear and injective on each fiber. By a dimension counting argument, it is therefore an isomorphism on each fiber. Because it preserves the basepoint, it is a vector bundle isomorphism over Id_E. Because the map (x, y, σ) ↦ x + σy is smooth, so is the defining expression for the vertical lift, thereby establishing smoothness. That the stated inverse inverts the vertical lift is a trivial calculation.
5 Strongly-Typed Tensor Field Operations

Because vector bundles and the related operations can be thought of conceptually as sheaves of linear algebra, the constructions in Section 3, generalized earlier in this section, can be further generalized to the setting of sections of vector bundles.

If E, F, G are smooth vector bundles over M, then define the natural pairing of a tensor field with a vector:

  ·_F : Γ(E ⊗_M F*) × F → E,
  (e ⊗_M ω) ·_F f := ( ω(π^F(f)) ·_{F_{π^F(f)}} f ) e(π^F(f)),

extending linearly to general tensor fields. Further, define the natural pairing of tensor fields:

  ·_F : Γ(E ⊗_M F*) × Γ(F ⊗_M G) → Γ(E ⊗_M G),
  (e ⊗_M ω) ·_F (f ⊗_M g) := ( p ↦ ( ω(p) ·_{F_p} f(p) ) e(p) ⊗_M g(p) ) = ( p ↦ ( ω(p) ·_{F_p} f(p) ) (e ⊗_M g)(p) ),

extending linearly to general tensor fields. This multiple use of the ·_F symbol is a concept known as operator overloading in computer programming. No ambiguity is caused by this overloading, as the particular use can be inferred from the types of the operands. As before, the subscript F may be optionally omitted when clear from context.
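As a small illustration of how the operand types resolve the overload (using only the two definitions above): let S = e ⊗_M ω ∈ Γ(E ⊗_M F*). If f ∈ F is a single vector with p := π^F(f), the first overload applies and S ·_F f = (ω(p) ·_{F_p} f) e(p), an element of the single fiber E_p. If instead T = f ⊗_M g ∈ Γ(F ⊗_M G) is a tensor field, the second overload applies and S ·_F T is the section p ↦ (ω(p) ·_{F_p} f(p)) e(p) ⊗_M g(p) of E ⊗_M G.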
The permutations defined in Section 3 are generalized as tensor fields. If F_1, …, F_n are smooth vector bundles over M, and σ ∈ S_n is a permutation, then σ can act on F_1 ⊗_M ··· ⊗_M F_n by permuting its factors, and therefore can be identified with a tensor field

  σ ∈ Γ( F_1* ⊗_M ··· ⊗_M F_n* ⊗_M F_{σ^{-1}(1)} ⊗_M ··· ⊗_M F_{σ^{-1}(n)} )

defined by

  (f_1 ⊗_M ··· ⊗_M f_n) ·_{F_1 ⊗_M ··· ⊗_M F_n} σ := f_{σ^{-1}(1)} ⊗_M ··· ⊗_M f_{σ^{-1}(n)}.

An important feature of such permutation tensor fields is that they are parallel with respect to covariant derivatives on the factors F_1, …, F_n (see (8.12) for more on this).
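The simplest instance, just the definition with n = 2: for σ = (1 2) and sections f_1 ∈ Γ(F_1), f_2 ∈ Γ(F_2), the tensor field (1 2) ∈ Γ(F_1* ⊗_M F_2* ⊗_M F_2 ⊗_M F_1) satisfies

  (f_1 ⊗_M f_2) ·_{F_1 ⊗_M F_2} (1 2) = f_2 ⊗_M f_1,  i.e.  (f_1 ⊗_M f_2)^{(1 2)} = f_2 ⊗_M f_1.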
6 Pullback Bundles

The pullback bundle, defined below, is a crucial building block for many important bundle constructions, as it enriches the type system dramatically, and allows the tensor formulation of linear algebra to be extended to the vector bundle setting. In particular, the abstract, global formulation of the space of smooth vector bundle morphisms over a map φ : M → N is achieved quite cleanly using a pullback bundle. Furthermore, the use of pullback bundles and pullback covariant derivatives simplifies what would otherwise be local coordinate calculations, thereby giving more insight into the geometric structure of the problem.

For the duration of this section, let (𝒯, F, π, N) be a smooth bundle having rank r.

Proposition 6.1 (Pullback bundle). Let M and N be smooth manifolds and let φ : M → N be smooth. If

  φ*F := { (m, f) ∈ M × F | φ(m) = π(f) }  and  π^{φ*F}_M := pr^{M×F}_1 |_{φ*F} : φ*F → M, (m, f) ↦ m,

then (𝒯, φ*F, π^{φ*F}_M, M) defines a smooth bundle. In particular, π^{φ*F}_M is called the pullback of π by φ.
Proof. Recalling that 𝒯 denotes the typical fiber of π, let Φ : π^{-1}(U) → U × 𝒯 trivialize π over an open set U ⊆ N. Define

  Φ̃ : (π^{φ*F}_M)^{-1}(φ^{-1}(U)) → φ^{-1}(U) × 𝒯,  (m, f) ↦ (m, pr^{U×𝒯}_2 ∘ Φ(f))

and

  Ψ̃ : φ^{-1}(U) × 𝒯 → (π^{φ*F}_M)^{-1}(φ^{-1}(U)),  (m, g) ↦ (m, Φ^{-1}(φ(m), g)).

Claim (1): Φ̃ and Ψ̃ are smooth. Proof: Φ̃ is the restriction of a map which is clearly smooth as a map defined on the larger manifold φ^{-1}(U) × F, and therefore it restricts to a smooth map on (π^{φ*F}_M)^{-1}(φ^{-1}(U)). An analogous argument applies to Ψ̃.

Claim (2): Ψ̃ inverts Φ̃. Proof:

  Ψ̃ ∘ Φ̃(m, f) = Ψ̃(m, pr^{U×𝒯}_2 ∘ Φ(f))
             = (m, Φ^{-1}(φ(m), pr^{U×𝒯}_2 ∘ Φ(f)))
             = (m, Φ^{-1}(π(f), pr^{U×𝒯}_2 ∘ Φ(f)))          (since φ(m) = π(f))
             = (m, Φ^{-1}(pr^{U×𝒯}_1 ∘ Φ(f), pr^{U×𝒯}_2 ∘ Φ(f)))
             = (m, Φ^{-1} ∘ Φ(f))
             = (m, f).

With g ∈ 𝒯,

  Φ̃ ∘ Ψ̃(m, g) = Φ̃(m, Φ^{-1}(φ(m), g)) = (m, pr^{U×𝒯}_2 ∘ Φ ∘ Φ^{-1}(φ(m), g)) = (m, pr^{U×𝒯}_2(φ(m), g)) = (m, g),

proving Claim (2).

Claim (3): Φ̃ trivializes π^{φ*F}_M over φ^{-1}(U) ⊆ M. Proof: Let (m, f) ∈ (π^{φ*F}_M)^{-1}(φ^{-1}(U)). Then

  pr^{φ^{-1}(U)×𝒯}_1 ∘ Φ̃(m, f) = pr^{φ^{-1}(U)×𝒯}_1(m, pr^{U×𝒯}_2 ∘ Φ(f)) = m = π^{φ*F}_M(m, f),

and by claims (1) and (2), Φ̃ is a diffeomorphism, so Φ̃ trivializes π^{φ*F}_M over φ^{-1}(U) ⊆ M. Claim (3) proved.

Since M can be covered with sets as in claim (3) and since the typical fiber of π^{φ*F}_M is diffeomorphic to 𝒯, this shows that π^{φ*F}_M defines a smooth bundle (𝒯, φ*F, π^{φ*F}_M, M). Because φ*F is locally diffeomorphic to the product of an open subset of M with 𝒯,

  ρ^{φ*F}_F : φ*F → F,  (m, f) ↦ f

is a smooth bundle morphism over φ which is an isomorphism when restricted to any fiber of φ*F. Because ρ^{φ*F}_F is the projection pr^{M×F}_F |_{φ*F}, its tangent map is also just the projection pr^{TM×TF}_{TF} |_{T(φ*F)}.
Proposition 6.3 (Bundle pullback is a contravariant functor). The map of categories

  Pullback : Manifold → { Bundle(M) | M ∈ Manifold },
  M ↦ Bundle(M),
  (φ : M → N) ↦ ( Bundle(N) → Bundle(M), (𝒯, F, π, N) ↦ (𝒯, φ*F, π^{φ*F}_M, M) )

is a contravariant functor. Here, naturally isomorphic bundles in Bundle(M), for each manifold M, are identified (along with the corresponding morphisms).

Proof. Noting that

  Id_N*F = { (n, f) ∈ N × F | Id_N(n) = π(f) } ≅ F

and that

  π^{Id_N*F}_N(n, f) = (pr^{N×F}_1 |_{Id_N*F})(n, f) = n = π(f),

so that π^{Id_N*F}_N corresponds to π under the identification Id_N*F ≅ F, it follows that Pullback(Id_N) = Id_{Bundle(N)} = Id_{Pullback(N)}, i.e. Pullback satisfies the identity axiom of functoriality.

For the contravariance axiom, let φ : M → N and ψ : L → M be smooth manifold morphisms and let (𝒯, F, π, N) be a smooth bundle. Then

  ψ*(φ*F) = { (ℓ, p) ∈ L × φ*F | ψ(ℓ) = π^{φ*F}_M(p) }
          = { (ℓ, (m, f)) ∈ L × (M × F) | ψ(ℓ) = π^{φ*F}_M(m, f) and φ(m) = π(f) }
          = { (ℓ, (m, f)) ∈ L × (M × F) | ψ(ℓ) = m and φ(m) = π(f) }
          ≅ { (ℓ, f) ∈ L × F | φ(ψ(ℓ)) = π(f) }
          = (φ ∘ ψ)*F

and

  π^{ψ*(φ*F)}_L(ℓ, (m, f)) = (pr^{L×φ*F}_1 |_{ψ*(φ*F)})(ℓ, (m, f)) = ℓ  and  π^{(φ∘ψ)*F}_L(ℓ, f) = (pr^{L×F}_1 |_{(φ∘ψ)*F})(ℓ, f) = ℓ,

showing that π^{ψ*(φ*F)}_L corresponds to π^{(φ∘ψ)*F}_L under the above identification, and therefore

  Pullback(ψ) ∘ Pullback(φ) = Pullback(φ ∘ ψ),

establishing Pullback as a contravariant functor.
The space of sections of a pullback bundle is easily quantified:

  Γ(φ*F) = { s ∈ C^∞(M, φ*F) | π^{φ*F}_M ∘ s = Id_M }.

This space will be central in the theory developed in the rest of this paper. Furthermore, it is naturally identified with the space of sections along the pullback map;

  Γ_φ(F) := { s ∈ C^∞(M, F) | π^F_N ∘ s = φ }.

These spaces are naturally isomorphic to one another, and therefore an identification can be made when convenient. While the former space is more correct from a strongly typed standpoint, the latter space is a convenient and intuitive representational form. The particular correspondence depends heavily on the fact that φ*F is a submanifold of M × F;

  Γ(φ*F) ≅ Γ_φ(F),  s ↦ pr^{M×F}_2 ∘ s,  with inverse t ↦ Id_M × t.

Furthermore, if f ∈ Γ(F), then f ∘ φ ∈ Γ_φ(F). The following local expression holds: if σ ∈ Γ(φ*F), then each point p ∈ M has some neighborhood U in which σ can be written locally as

  σ|_U = Σ_i σ^i (φ*f_i)|_U,

where f_1, …, f_r ∈ Γ(F|_{φ(U)}) is a frame for F|_{φ(U)}, and σ^1, …, σ^r ∈ C^∞(U, R) are defined by σ^i = (φ*f^i)|_U ·_{φ*F} σ|_U.
Proof. Let p ∈ M, let V ⊆ N be a neighborhood of φ(p) over which F|_V is trivial, and let U := φ^{-1}(V), so that U is a neighborhood of p. Let f_1, …, f_r ∈ Γ(F|_V) be a frame for F|_V (i.e. for F|_{φ(U)}), and let f^1, …, f^r ∈ Γ((F|_V)*) be the corresponding coframe (i.e. the unique f^1, …, f^r such that f^i ·_F f_j = δ^i_j for each i, j). Define σ^i ∈ C^∞(U, R) by σ^i := (φ*f^i)|_U ·_{φ*F} σ|_U. Then

  Σ_i σ^i (φ*f_i)|_U = Σ_i ( (φ*f^i)|_U ·_{φ*F} σ|_U ) (φ*f_i)|_U
                     = Σ_i ( (φ*f_i)|_U ⊗_U (φ*f^i)|_U ) ·_{φ*F} σ|_U
                     = ( φ*( Σ_i f_i ⊗_V f^i ) )|_U ·_{φ*F} σ|_U
                     = ( φ*Id_{F|_V} )|_U ·_{φ*F} σ|_U
                     = σ|_U,

as desired.

Some literature uses expressions of the form f ∘ φ for the composition of a section with the base map; the corresponding object in the present formalism is

  φ*f := Id_M × (f ∘ φ) ∈ Γ(φ*F).

This is known as a pullback section.
The pullback section is deservedly named. If φ : M → N and ψ : L → M are smooth, then

  ψ*(φ*f) = (φ ∘ ψ)*f.

If, in addition, E → N and F → N are smooth vector bundles, then

  φ*E ⊗_M φ*F → φ*(E ⊗_N F),  (m, e) ⊗_M (m, f) ↦ (m, e ⊗_N f)

(extended linearly to general tensors) is a smooth vector bundle isomorphism.

Proof. Let 𝔠 denote the above map. The well-definedness of 𝔠 comes from the universal mapping property on multilinear forms which induces a linear map on a corresponding tensor product. If 𝔠((m, e) ⊗_M (m, f)) = 0, then e ⊗_N f = 0, which implies that e = 0 or f = 0, and therefore that (m, e) ⊗_M (m, f) = 0. Because there exists a basis for (φ*E ⊗_M φ*F)_m consisting only of simple tensors, this implies that 𝔠 is injective, and by a dimensionality argument, that 𝔠 is an isomorphism. The map is clearly smooth and respects the fiber structures of its domain and codomain. Thus 𝔠 is a smooth vector bundle isomorphism.

The contravariance of pullback and its naturality with respect to tensor product are two essential properties which provide some of the flexibility and precision of the strongly typed tensor formalism described in this paper. This will become quite apparent in Part II.
Remark 6.7 (Tensor field formulation of smooth vector bundle morphisms). A particularly useful application of pullback bundles is in forming a rich type system for smooth vector bundle morphisms. This approach was inspired by [20, pg. 11]. Let π^E : E → M and π^F : F → N be smooth vector bundles, and let φ : M → N be smooth. Consider Hom_φ(E, F), i.e. the space of smooth vector bundle morphisms over the map φ. There is a natural identification with another space which lets the base map φ play a more direct role in the space's type. In particular,

  Hom_φ(E, F) ≅ Hom_{Id_M}(E, φ*F),  A ↦ ( e ↦ (π^E(e), A(e)) ),  with inverse B ↦ pr^{M×F}_2 ∘ B.

This particular identification of smooth vector bundle morphisms over φ can now be directly translated into the tensor field formalism, analogously to (3.1):

  Γ(φ*F ⊗_M E*) → Hom_{Id_M}(E, φ*F),  A ↦ ( e ↦ A ·_E e ).

The inverse image of B ∈ Hom_{Id_M}(E, φ*F) can be written locally, using local frames e_1, …, e_{rank E} for E and f_1, …, f_{rank F} for F, as Σ_{ij} B^i_j φ*f_i ⊗_M e^j, where B^i_j := φ*f^i ·_{φ*F} B ·_E e_j ∈ C^∞(U, R).

Quantifying smooth vector bundle morphisms as tensor fields lends itself naturally to doing calculus on vector and tensor bundles, as the relevant derivatives (covariant derivatives) take the form of tensor fields. The type information for a particular vector bundle morphism is encoded in the relevant tensor bundle.
7 Tangent Map as a Tensor Field

This section deals specifically with the tangent map operator by using concepts from Section 5 and Section 6 to place it in a strongly typed setting and to prepare to unify a few seemingly disparate concepts and notation for some tangible benefit (in particular, see Section 10).

Given a smooth map φ : M → N, its tangent map Tφ : TM → TN is a smooth vector bundle morphism over φ, so by (6.7), it is naturally identified with a tensor field

  ∇φ ∈ Γ(φ*TN ⊗_M T*M),

which may be denoted simply by ∇φ. The space C^∞(M, N) is naturally identified as Γ(N × M → M), and there is a natural Ehresmann connection on the bundle N × M → M whose corresponding covariant derivative is the tangent map operator. This is the subject of another of the author's papers and will not be discussed here further. It is mentioned here to incorporate linear covariant derivatives (to be introduced and discussed in Section 8) and the tangent map operator (a nonlinear covariant derivative) under the single category "covariant derivative".

There is a subtle issue regarding construction of the cotangent map of φ which is handled easily by the tensor field construction. In particular, while the cotangent map T*_pφ : T*_{φ(p)}N → T*_pM is the adjoint of T_pφ, it does not follow that T*φ ∈ Hom(T*N, T*M) is well-defined. If φ is not surjective, there is some fiber T*_qN that is not of the form T*_{φ(p)}N, and therefore the domain could not be all of T*N. If φ is not injective, there are p_0 ≠ p_1 with φ(p_0) = φ(p_1), so that T*_{φ(p_0)}N = T*_{φ(p_1)}N while T*_{p_0}M ≠ T*_{p_1}M, so the action on the fiber T*_{φ(p_0)}N is not well-defined.

In the tensor field parlance, the cotangent map is

  (∇φ)^{(1 2)} ∈ Γ(T*M ⊗_M φ*TN).

The permutation superscript (1 2) is used here instead of a star to distinguish it notationally from pullback notation, which will be necessary in later calculations. The key concept is that the tensor field (∇φ)^{(1 2)} encodes the base map φ; the basepoint p ∈ M is part of the domain of the tensor field, via the pullback bundle φ*TN, rather than of T*N itself.
The chain rule in the tensor field formalism makes use of the bundle pullback. If ψ : L → M is smooth, then

  ∇(φ ∘ ψ) = ψ*∇φ ·_{ψ*TM} ∇ψ.

Because ψ*∇φ ∈ Γ(ψ*(φ*TN ⊗_M T*M)) = Γ(ψ*φ*TN ⊗_L ψ*T*M) = Γ((φ ∘ ψ)*TN ⊗_L ψ*T*M), the pullback by ψ is necessary (instead of just using ∇φ ∈ Γ(φ*TN ⊗_M T*M)).
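As a sanity check (standard one-variable calculus, restated in the above notation): for L = M = N = R, ∇φ is φ′(x) dx regarded as a section of φ*TR ⊗ T*R, and the displayed formula reduces to (φ ∘ ψ)′(t) = φ′(ψ(t)) ψ′(t); the pullback by ψ is exactly what evaluates φ′ at ψ(t) rather than at t.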
Sometimes it is useful to discard some type information and write ∇φ ∈ Γ_{φ×Id_M}(TN ⊗_{N×M} T*M), i.e. ∇φ : M → TN ⊗_{N×M} T*M such that π^{TN ⊗_{N×M} T*M}_{N×M} ∘ ∇φ = φ × Id_M. This is easily done by the canonical fiber projection available to all pullback bundle constructions;

  φ*TN ⊗_M T*M ≅ (φ × Id_M)*(TN ⊗_{N×M} T*M)

and

  ρ : (φ × Id_M)*(TN ⊗_{N×M} T*M) → TN ⊗_{N×M} T*M,

as defined in (6.2). The granularity of the type system should reflect the weight of the calculations being performed. For demonstration of contrasting situations, see the discussion at the beginning of Section 8 and the computation of the first variation in (12.1).

It is important to have notation which makes the distinction between the smooth vector bundle morphism formalism and the tensor field formalism, because it may sometimes be necessary to mix the two, though this paper will not need this. An added benefit of the tensor field formulation of tangent maps is that certain notions regarding derivatives can be conceptually and notationally combined, for example in Section 10.
8 Linear Covariant Derivatives

As will be shown in the following discussion, a linear covariant derivative (commonly referred to in the standard literature without the "linear" qualifier) provides a way to generalize the notion in elementary calculus of the differential of a vector-valued function. The linear covariant derivative interacts naturally with the notion of the pullback bundle, and this interaction leads naturally to what could be called a covariant derivative chain rule, which provides a crucial tool for the tensor calculus computations seen later.

Let V and W be finite-dimensional vector spaces, let U ⊆ V be open, and let φ : U → W be differentiable. Recall from elementary calculus the differential Dφ : U → W ⊗ V*. In the present formalism, the tangent map gives

  ∇φ : U → TW ⊗_{W×U} T*U ≅ (W × W) ⊗_{W×U} (V* × U) ≅ (W ⊗ V*) × (W × U).

Because (W ⊗ V*) × (W × U), as a set, is a direct product, it can be decomposed into two factors. Letting pr_1 and pr_2 be the projections onto the first and second factors respectively,

  pr_1 ∘ ∇φ : U → W ⊗ V*  and  pr_2 ∘ ∇φ : U → W × U.

The map pr_2 ∘ ∇φ = φ × Id_U records the base map. This base map information is discarded in defining the differential of φ as Dφ := pr_1 ∘ ∇φ.

For a section ξ ∈ Γ(E) of a smooth vector bundle π : E → N, the same construction gives ∇ξ ∈ Γ_{ξ×Id_N}(TE ⊗_{E×N} T*N). A linear covariant derivative provides an effective trivialization of TE analogous to the trivialization TW ≅ W × W as discussed above, discarding all but the fiber portion of the derivative, allowing the construction of an object known as the total linear covariant derivative, analogous to the differential D as discussed above.

The notion of a linear covariant derivative on a vector bundle is arguably the crucial element of differential geometry^3. In particular, this operator implements the product rule property common to anything that can be called a derivation, a property which is particularly conducive to the operation of tensor calculus. The total linear covariant derivative of a vector field (i.e. section of a vector bundle) allows the generalization of many constructions in elementary calculus to the setting of smooth vector bundles equipped with linear covariant derivatives. For example, the divergence div X := tr DX of a vector field X on R^n generalizes to the divergence div X := tr ∇X of a vector field X on N, which has an analogous divergence theorem among other qualitative similarities.

Remark 8.1 (Natural linear covariant derivative on trivial line bundle). Before making the general definition for the linear covariant derivative, a natural linear covariant derivative will be introduced. With N denoting a smooth manifold as before, if f ∈ C^∞(N, R), then define ∇f := df. Because C^∞(N, R) is naturally identified with Γ(R × N → N), this is essentially the natural linear covariant derivative on the trivial line bundle R × N → N. Note that there is an associated product rule; if f, g ∈ C^∞(N, R), then fg ∈ C^∞(N, R) and

  ∇(fg) = d(fg) = g df + f dg = g ∇f + f ∇g.

When clear from context, the superscript decoration specifying the bundle can be omitted and the derivative denoted as ∇f.

^3 The Fundamental Lemma of Riemannian Geometry establishes the existence of the Levi-Civita connection [9, pg. 68], which is a linear covariant derivative satisfying certain naturality properties.
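A one-line instance of the product rule above (ordinary multivariable calculus, included only to anchor the notation): on N = R² with coordinate functions x¹, x², take f = x¹ and g = x²; then

  ∇(fg) = d(x¹x²) = x² dx¹ + x¹ dx² = g ∇f + f ∇g.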
Definition 8.2 (Linear covariant derivative). A linear covariant derivative on a vector bundle defined by π : E → N is an R-linear map

  ∇^E : Γ(E) → Γ(E ⊗_N T*N)

satisfying the product rule

  ∇^E(f ⊗_N ξ) = ξ ⊗_N ∇f + f ⊗_N ∇^E ξ,    (8.1)

where f ∈ C^∞(N, R) and ξ ∈ Γ(E). The switch in order in the first term of the expression is necessary to form a tensor field of the correct type, Γ(E ⊗_N T*N). The quantity ∇^E ξ is known as the total [linear] covariant derivative of ξ. If ∇^E ξ = 0 [in a subset U ⊆ N], then ξ is said to be parallel [on U]. The "linear" qualifier is implied in standard literature and is therefore often omitted.

The decoration inscribed in the derivative symbol is to indicate that the covariant derivative is linear, and can be omitted when clear from context, or when it is unnecessary to distinguish it from the nonlinear tangent map operator. For the remainder of this section, this distinction will not be necessary, so an undecorated ∇ will be used.

For V ∈ Γ(TN), it is customary to denote ∇^E ξ ·_{TN} V by ∇^E_V ξ, where V indicates the directional component of the derivative. Following this convention, the product rule can be written in a form where the product rule is more obvious;

  ∇^E_V(f ⊗_N ξ) = (∇_V f) ⊗_N ξ + f ⊗_N ∇^E_V ξ.

A covariant derivative is a local operator with respect to the base space N; if p ∈ N, then (∇^E ξ)(p) depends only on the restriction of ξ to an arbitrarily small neighborhood of p [9, pg. 50], and therefore the restriction ∇^{E|_U} : Γ|_U(E) → Γ|_U(E ⊗_N T*N) is well-defined for open U ⊆ N. If G is a C^∞(N, R)-module whose elements are functions on N (and therefore have a notion of restriction to a subset) and U ⊆ N is open, then let G|_U denote the set of restrictions of the elements of G to the set U. Note that G|_U ⊆ Γ|_U(E) by construction whenever G ⊆ Γ(E).
Definition 8.3 (Finitely generating subset). Say that a subset of a module finitely generates the module if the subset contains a finite set of generators for the module.

Definition 8.4 (Locally finitely generating subset). If G ⊆ Γ(E), say that G locally finitely generates Γ(E) if each point q ∈ N has a neighborhood V ⊆ N such that G|_V finitely generates Γ|_V(E).

Lemma 8.5 (Local frames from a locally finitely generating subset). If G locally finitely generates Γ(E), then each q ∈ N has a neighborhood U for which some elements of G restrict to a frame for E|_U. Indeed, if g_1, …, g_k ∈ G generate Γ|_V(E) on a neighborhood V of q (here k ≥ r, recalling that r = rank E), then without loss of generality g_1(q), …, g_r(q) are linearly independent (this is possible because g_1(q), …, g_k(q) span E_q), and they remain linearly independent, hence a frame, on some neighborhood U ⊆ V of q.

Lemma 8.6 (Extension of a covariant derivative defined on a generating subset). Let G ⊆ Γ(E) locally finitely generate Γ(E), and let ∇^G : G → Γ(E ⊗_N T*N) be an R-linear map satisfying the product rule on G.^4 Then there exists a unique covariant derivative ∇^E on E → N whose restriction to G is ∇^G.
Proof. If q ∈ N, then by (8.5) there exists a neighborhood U ⊆ N of q for which there are e_1, …, e_r ∈ G|_U forming a frame for E|_U. If ξ ∈ Γ(E), then ξ|_U = Σ_i ξ^i e_i for some ξ^1, …, ξ^r ∈ C^∞(U, R) (specifically, ξ^i = e^i ·_E ξ|_U, where e^1, …, e^r is the corresponding coframe). Define ∇^E locally on Γ|_U(E) so as to satisfy the product rule:

  ∇^E(ξ|_U) := Σ_i ( e_i ⊗_N ∇ξ^i + ξ^i ⊗_N ∇^G e_i ).

To show well-definedness, let f_1, …, f_r ∈ G|_U be another frame for E|_U. Then ξ|_U = Σ_i η^i f_i for some η^1, …, η^r ∈ C^∞(U, R). Writing the change of frame as f_i = Λ^j_i e_j, so that η^i = (Λ^{-1})^i_j ξ^j, it follows that

  ∇^E( Σ_i η^i f_i )
    = Σ_i ( f_i ⊗_N ∇η^i + η^i ⊗_N ∇^G f_i )
    = Σ ( Λ^j_i e_j ⊗_N ∇((Λ^{-1})^i_k ξ^k) + (Λ^{-1})^i_j ξ^j ⊗_N ∇^G(Λ^k_i e_k) )
    = Σ ( Λ^j_i (Λ^{-1})^i_k e_j ⊗_N ∇ξ^k + Λ^j_i ξ^k e_j ⊗_N ∇(Λ^{-1})^i_k
          + (Λ^{-1})^i_j ξ^j e_k ⊗_N ∇Λ^k_i + (Λ^{-1})^i_j Λ^k_i ξ^j ⊗_N ∇^G e_k )
    = Σ ( e_j ⊗_N ∇ξ^j + ξ^j ⊗_N ∇^G e_j ) + Σ ξ^k e_j ⊗_N ( Λ^j_i ∇(Λ^{-1})^i_k + (Λ^{-1})^i_k ∇Λ^j_i )
    = ∇^E( Σ_i ξ^i e_i ).

The last equality follows because

  Λ^j_i ∇(Λ^{-1})^i_k + (Λ^{-1})^i_k ∇Λ^j_i = ∇( Λ^j_i (Λ^{-1})^i_k ) = ∇( δ^j_k ) = 0.

Thus the expression defining ∇^E doesn't depend on the choice of local frame. This establishes the well-definedness of ∇^E. Clearly the restriction of ∇^E to G is ∇^G. This establishes the claim of existence. Uniqueness follows from the fact that ∇^E is determined by the maps ∇ and ∇^G.

Lemma (8.6) is used in the proof of the following proposition to allow a natural formulation of the pullback covariant derivative with respect to a natural locally finite generating subset of Γ(φ*E), in which the relevant derivative has a natural chain rule.

^4 What is meant by this is that the product rule must only be satisfied on λ ⊗_N g for g ∈ G, where λ ∈ C^∞(N, R).
Proposition 8.7 (Pullback covariant derivative). If φ : M → N is smooth and ∇^E is a covariant derivative on E → N, then there is a unique covariant derivative φ*∇^E on φ*E → M satisfying

  φ*∇^E(φ*e) = φ*(∇^E e) ·_{φ*TN} ∇φ

for all e ∈ Γ(E).

Proof. The set G := { φ*e | e ∈ Γ(E) } locally finitely generates Γ(φ*E), since a local frame e_1, …, e_{rank E} ∈ Γ|_U(E) over an open set U ⊆ N induces a local frame φ*e_1, …, φ*e_{rank E} ∈ Γ|_{φ^{-1}(U)}(φ*E). Define

  ∇^G : G → Γ(φ*E ⊗_M T*M),  φ*e ↦ φ*(∇^E e) ·_{φ*TN} ∇φ.

The well-definedness and R-linearity of ∇^G comes from that of ∇^E. For the product rule, if λ ∈ C^∞(M, R) and e ∈ Γ(E), then the product λ ⊗_M φ*e lies in G only when λ = μ ∘ φ for some μ ∈ C^∞(N, R), in which case λ ⊗_M φ*e = φ*(μ ⊗_N e). Then it follows that

  ∇^G( λ ⊗_M φ*e ) = ∇^G( φ*(μ ⊗_N e) )
    = φ*( ∇^E(μ ⊗_N e) ) ·_{φ*TN} ∇φ
    = φ*( e ⊗_N ∇μ + μ ⊗_N ∇^E e ) ·_{φ*TN} ∇φ
    = φ*e ⊗_M ( φ*(∇μ) ·_{φ*TN} ∇φ ) + (μ ∘ φ) ⊗_M ( φ*(∇^E e) ·_{φ*TN} ∇φ )
    = φ*e ⊗_M ∇(μ ∘ φ) + λ ⊗_M ∇^G(φ*e)
    = φ*e ⊗_M ∇λ + λ ⊗_M ∇^G(φ*e),

which is exactly the required product rule. By (8.6), there exists a unique covariant derivative φ*∇^E on φ*E whose restriction to G is ∇^G.

The full notation φ*∇^E is often cumbersome, so it may be denoted by ∇ when clear from context.
As an illustration, suppose a curve X : I → TM is parameterized by time, projecting to a path γ := π^{TM} ∘ X in M, so that X may be regarded as a vector field along γ, i.e. X ∈ Γ(γ*TM). The pullback covariant derivative γ*∇^{TM}_{d/dt} X measures how X(t) varies along the path: if X(t) is parallel (constant in the covariant sense) for t ∈ I, then γ*∇^{TM}_{d/dt} X = 0 for t ∈ I.

[Figure 8.1: A picture of the manifold M, the path γ, and vector fields along γ; one vector field varies for t ∈ I, while another is constant (parallel) for t ∈ I, so that its covariant derivative along γ vanishes.]

If (∂_i) is a local frame for TM near a point of the path, then X(t) = Σ_i X^i(t) γ*∂_i(t) for some functions X^i : I → R, and

  γ*∇^{TM}_{d/dt} X = γ*∇^{TM}_{d/dt} ( Σ_i X^i γ*∂_i )
                   = Σ_i ( (dX^i/dt) γ*∂_i + X^i γ*∇^{TM}_{d/dt} γ*∂_i )
                   = Σ_i ( (dX^i/dt) γ*∂_i + X^i γ*(∇^{TM} ∂_i) ·_{γ*TM} ∇γ ·_{TI} (d/dt) ).

The first term can be identified with an elementary vector space derivative (the fiber is a vector space and so an elementary derivative is well-defined there). This fiber-direction derivative is nonvanishing by assumption, so γ*∇^{TM}_{d/dt} X is nonvanishing on I as desired.
Introducing a bit of natural notation which will be helpful for the next result, if X ∈ Γ(E) and Y ∈ Γ(F), then define X ⊕ Y := X ⊕_{M×N} Y ∈ Γ(E ⊕_{M×N} F) and X ⊗ Y := X ⊗_{M×N} Y ∈ Γ(E ⊗_{M×N} F) by

  (X ⊕_{M×N} Y)(p, q) := X(p) ⊕ Y(q)  and  (X ⊗_{M×N} Y)(p, q) := X(p) ⊗ Y(q)

for each (p, q) ∈ M × N.
Proposition 8.9 (Induced covariant derivatives on E ⊕_{M×N} F and E ⊗_{M×N} F). If ∇^E and ∇^F are covariant derivatives on E and F respectively, then there are unique covariant derivatives

  ∇^{E ⊕_{M×N} F} : Γ(E ⊕_{M×N} F) → Γ((E ⊕_{M×N} F) ⊗_{M×N} (T*M ⊕_{M×N} T*N))

and

  ∇^{E ⊗_{M×N} F} : Γ(E ⊗_{M×N} F) → Γ((E ⊗_{M×N} F) ⊗_{M×N} (T*M ⊕_{M×N} T*N))

on E ⊕ F and E ⊗ F respectively, satisfying the sum rule

  ∇^{E⊕F}_{u⊕v}(X ⊕ Y) = ∇^E_u X ⊕ ∇^F_v Y

and the product rule

  ∇^{E⊗F}_{u⊕v}(X ⊗ Y) = ∇^E_u X ⊗ Y + X ⊗ ∇^F_v Y,

respectively, where X ∈ Γ(E), Y ∈ Γ(F), and u ⊕ v ∈ TM ⊕ TN. Here, TM ⊕ TN → M × N (and its dual) is used instead of the isomorphic vector bundle T(M × N) → M × N (and its dual).

Proof. Suppressing the pedantic use of the M × N subscript to avoid unnecessary notational overload, the set G := { e ⊕ f | e ∈ Γ(E), f ∈ Γ(F) } is a locally finite generator of Γ(E ⊕ F), since local frames for E ⊕ F take the form { e_i ⊕ 0, 0 ⊕ f_j }, where { e_i } and { f_j } are local frames for E and F respectively. Define

  ∇^G : G → Γ((E ⊕ F) ⊗ (T*M ⊕ T*N)),  X ⊕ Y ↦ ( u ⊕ v ↦ ∇^E_u X ⊕ ∇^F_v Y ),  where u ⊕ v ∈ TM ⊕ TN.

This map is well-defined and R-linear by construction, since the connections ∇^E and ∇^F are well-defined and R-linear, and a computation entirely parallel to the one below verifies the required product rule, so by (8.6) there is a unique connection ∇^{E⊕F} on E ⊕ F whose restriction to G is ∇^G.

For the tensor product, the set H := { X ⊗ Y | X ∈ Γ(E), Y ∈ Γ(F) } is a locally finite generator of Γ(E ⊗ F). Define

  ∇^H : H → Γ((E ⊗ F) ⊗ (T*M ⊕ T*N)),  X ⊗ Y ↦ ( u ⊕ v ↦ ∇^E_u X ⊗ Y + X ⊗ ∇^F_v Y ),  where u ⊕ v ∈ TM ⊕ TN.

This map is well-defined and R-linear by construction, since the connections ∇^E and ∇^F are well-defined and R-linear. For the product rule, with λ ∈ C^∞(M, R) and μ ∈ C^∞(N, R),

  ∇^H_{u⊕v}( (λ ⊗ μ) ⊗_{M×N} (X ⊗ Y) )
    = ∇^H_{u⊕v}( (λ ⊗_M X) ⊗ (μ ⊗_N Y) )
    = ∇^E_u(λ ⊗_M X) ⊗ (μ ⊗_N Y) + (λ ⊗_M X) ⊗ ∇^F_v(μ ⊗_N Y)
    = ( (∇_u λ) ⊗_M X + λ ⊗_M ∇^E_u X ) ⊗ (μ ⊗_N Y) + (λ ⊗_M X) ⊗ ( (∇_v μ) ⊗_N Y + μ ⊗_N ∇^F_v Y )
    = ( μ ∇_u λ + λ ∇_v μ ) ⊗_{M×N} (X ⊗ Y) + (λ ⊗ μ) ⊗_{M×N} ( ∇^E_u X ⊗ Y + X ⊗ ∇^F_v Y )
    = ∇_{u⊕v}(λ ⊗ μ) ⊗_{M×N} (X ⊗ Y) + (λ ⊗ μ) ⊗_{M×N} ∇^H_{u⊕v}(X ⊗ Y),

which is exactly the required product rule. By (8.6), there exists a unique connection ∇^{E⊗F} on E ⊗ F whose restriction to H is ∇^H.
Remark 8.10 (Naturality of the covariant derivatives on E ⊕_{M×N} F and E ⊗_{M×N} F). Letting pr_i := pr^{M×N}_i (i ∈ {1, 2}) for brevity, the maps

  α : E ⊕_{M×N} F → pr_1*E ⊕_{M×N} pr_2*F,  e ⊕ f ↦ ( π^{E⊕F}(e ⊕ f), e ) ⊕_{M×N} ( π^{E⊕F}(e ⊕ f), f )

and

  β : E ⊗_{M×N} F → pr_1*E ⊗_{M×N} pr_2*F,  e ⊗ f ↦ ( π^{E⊗F}(e ⊗ f), e ) ⊗_{M×N} ( π^{E⊗F}(e ⊗ f), f ),

each extended linearly to the rest of their domains, are easily shown to be smooth vector bundle isomorphisms over Id_{M×N}. Then

  ∇^{E⊕F}_z(X ⊕ Y) = α^{-1}( ∇^{pr_1*E ⊕_{M×N} pr_2*F}_z ( α ∘ (X ⊕ Y) ) )

and

  ∇^{E⊗F}_z(X ⊗ Y) = β^{-1}( ∇^{pr_1*E ⊗_{M×N} pr_2*F}_z ( β ∘ (X ⊗ Y) ) )

for all X ∈ Γ(E), Y ∈ Γ(F), and z ∈ T(M × N), showing that the connections on E ⊕ F and E ⊗ F are α- and β-related to the naturally induced connections on pr_1*E ⊕ pr_2*F and pr_1*E ⊗_{M×N} pr_2*F respectively, and are therefore in this sense natural. The sum X ⊕ Y ∈ Γ(E ⊕ F) and product X ⊗ Y ∈ Γ(E ⊗ F) correspond to pr_1*X ⊕_{M×N} pr_2*Y and pr_1*X ⊗_{M×N} pr_2*Y ∈ Γ(pr_1*E ⊗_{M×N} pr_2*F) under α and β respectively.

Many important tensor constructions involve permutations. An extremely useful property of these permutations is that they commute with the covariant derivatives induced by the covariant derivatives on the tensor bundle factors, making them natural operators in the setting of covariant tensor calculus.
Proposition 8.11 (Transposition tensor fields are parallel). Let E_1, E_2, E_3, E_4 be smooth vector bundles over M having covariant derivatives ∇^{E_1}, ∇^{E_2}, ∇^{E_3}, ∇^{E_4} respectively, let A := E_1 ⊗_M E_2 ⊗_M E_3 ⊗_M E_4 and B := E_1 ⊗_M E_3 ⊗_M E_2 ⊗_M E_4, and let ∇^A and ∇^B denote the induced covariant derivatives. If (2 3) ∈ Γ(A* ⊗_M B) denotes the tensor field which maps e_1 ⊗_M e_2 ⊗_M e_3 ⊗_M e_4 ∈ A to e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M e_4 ∈ B (i.e. (2 3) transposes the second and third factors), then (2 3) is a parallel tensor field with respect to the covariant derivative induced on the vector bundle A* ⊗_M B → M, i.e. ∇^{A* ⊗_M B}(2 3) = 0.

Proof. Let X ∈ Γ(TM). Then

  (e_1 ⊗_M e_2 ⊗_M e_3 ⊗_M e_4) ·_A ∇^{A* ⊗_M B}_X (2 3)
    = ∇^B_X( (e_1 ⊗_M e_2 ⊗_M e_3 ⊗_M e_4) ·_A (2 3) ) − ( ∇^A_X(e_1 ⊗_M e_2 ⊗_M e_3 ⊗_M e_4) ) ·_A (2 3)
    = ∇^B_X( e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M e_4 )
      − ∇^{E_1}_X e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M e_4 − e_1 ⊗_M ∇^{E_3}_X e_3 ⊗_M e_2 ⊗_M e_4
      − e_1 ⊗_M e_3 ⊗_M ∇^{E_2}_X e_2 ⊗_M e_4 − e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M ∇^{E_4}_X e_4
    = ∇^B_X( e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M e_4 ) − ∇^B_X( e_1 ⊗_M e_3 ⊗_M e_2 ⊗_M e_4 )
    = 0.

Because X is arbitrary, this shows that (e_1 ⊗_M e_2 ⊗_M e_3 ⊗_M e_4) ·_A ∇^{A* ⊗_M B}(2 3) = 0. This extends linearly to general tensors, so ∇^{A* ⊗_M B}(2 3) = 0, as desired.

The fact that all transposition tensor fields are parallel implies that all permutation tensor fields are parallel, since every permutation is just the product of transpositions. This gives as an easy corollary that a covariant derivative operation commutes with a permutation operation, which has quite a succinct statement using the permutation superscript notation.
Corollary 8.12 (Permutation tensor fields are parallel). Let E_1, …, E_k be smooth vector bundles over M, each having a covariant derivative, and let A := E_1 ⊗_M ··· ⊗_M E_k and B := E_{σ^{-1}(1)} ⊗_M ··· ⊗_M E_{σ^{-1}(k)}. If σ ∈ S_k is interpreted as the tensor field in Γ(A* ⊗_M B) which maps e_1 ⊗_M ··· ⊗_M e_k to e_{σ^{-1}(1)} ⊗_M ··· ⊗_M e_{σ^{-1}(k)}, then σ is a parallel tensor field. Stated using the superscript notation, with X ∈ Γ(TM) and a ∈ Γ(A),

  ∇^B_X(a^σ) = (∇^A_X a)^σ.

Proof. This follows from the fact that σ can be written as the product of transpositions; ∇^{A* ⊗_M B}_X σ = 0 because of the product rule and because each transposition is parallel. The claim regarding commutation with the superscript permutation follows easily from its definition:

  ∇^B_X(a^σ) = ∇^B_X(a ·_A σ) = a ·_A ∇^{A* ⊗_M B}_X σ + (∇^A_X a) ·_A σ = (∇^A_X a)^σ,

using the fact that ∇^{A* ⊗_M B}_X σ = 0, since σ is a parallel tensor field.
9 Decomposition of π^{TE}_E : TE → E

In using the calculus of variations on a manifold M where the Lagrangian is a function on TM (this form of Lagrangian is ubiquitous in mechanics), taking the first variation involves passing to TTM. Without a way to decompose variations into more tractable components, the standard integration-by-parts trick [6, pg. 16] can't be applied. The notion of a local trivialization of TTM via a choice of coordinates on M is one way to provide such a decomposition. A coordinate chart (U, φ : U → R^n) on M establishes a locally trivializing diffeomorphism TTU ≅ φ(U) × R^n × R^n × R^n. However, such a trivialization imposes an artificial additive structure on TTU depending on the [non-canonical] choice of coordinates, only gives a local formulation of the relevant objects, and the ensuing coordinate calculations don't give clear insight into the geometric structure of the problem. The notion of the linear connection remedies this.

A linear connection on the vector bundle π : E → M is a subbundle HE → E of π^{TE}_E : TE → E such that TE = HE ⊕_E VE and Tμ_a(H_x) = H_{ax} for all a ∈ R \ {0} and x ∈ E, where μ_a : E → E, e ↦ ae is the scalar multiplication action of a on E [8, pg. 512]. The bundle HE → E may also be called a horizontal space of the vector bundle π^{TE}_E : TE → E ("a" is used instead of "the" because a choice of HE → E is generally non-unique). For convenience, define h := ∇π ∈ Γ(π*TM ⊗_E T*E) (the tangent map of the bundle projection as a tensor field), so that VE = ker h.

If v : TE → E is a smooth vector bundle morphism over π that is a left-inverse for the vertical lift π*E → VE ⊆ TE and that is equivariant with respect to Tμ_a and μ_a (i.e. v ∘ Tμ_a = μ_a ∘ v) [12, pg. 245], then H := ker v ⊆ TE defines a linear connection on the vector bundle π : E → M. Such a map v is called the connection map associated to H. Conversely, given a linear connection H, there is exactly one connection map defining H in the stated sense.
Proof. That v is a left-inverse for
E
V E
implies that v has full rank, so H := ker v denes a subbundle of
TE
E
: TE E having the same rank as TM. Because v is smooth, H is a smooth subbundle. Furthermore,
the condition implies that V
e
EH
e
= 0 for each e E, and therefore TE = H
E
V E by a rank-counting
argument.
If x TE and a R 0, then v T
a
x =
a
v x, which equals zero if and only if v x = 0, i.e. if
and only if x H. Thus T
a
H = H. This establishes H E as a linear connection.
Conversely, if H is a linear connection and v
1
and v
2
are connection maps for H, then v
1
TE
E
V E
=
Id
E
= v
2
TE
E
V E
. Then because the image of
E
V E
is all of V E, it follows that v
1
[
V E
= v
2
[
V E
. Since
v
1
[
H
= 0 = v
2
[
H
by denition, and since TE = H
E
V E, this shows that v
1
= v
2
. Uniqueness of
29
E
ZE
T
0p
E
T
ep
E
e
p
0
p
V
ep
E
H
ep
V
0p
E
H
0p
c
E
p
Figure 9.1: A diagram representing the decomposition of $TE \to E$ into horizontal and vertical subbundles. The vertical lines represent individual fibers of $E$, while $p \in M$, $e_p \in E_p$, $0_p \in E_p$ denotes the zero vector of $E_p$, and $ZE$ denotes the zero subbundle of $E$; $ZE \cong M$. By the equivariance property of the linear connection, $ZE$ is a submanifold of $E$ which is entirely horizontal (its tangent space is entirely composed of horizontal vectors). The tangent spaces $T_{0_p} E$ and $T_{e_p} E$ are drawn; green arrows representing the vertical subspaces (along the fibers), red arrows representing the horizontal subspaces. Finally, $c$ is a horizontal curve passing through $e_p$.
connection maps has been established. To show existence, define $v := \left( \iota^E_{VE} \right)^{-1} \cdot \mathrm{pr}_{VE} \in \Gamma(\pi^* E \otimes_E T^* E)$, where $\mathrm{pr}_{VE}\colon H \oplus_E VE \to VE$ is the canonical projection, recalling that $H \oplus_E VE = TE$. It is easily shown that $v$ is a connection map for $H$.
Proposition 9.2 (Decomposing $\pi^{TE}_E\colon TE \to E$). If $v \in \Gamma(\pi^* E \otimes_E T^* E)$ is a connection map, then
\[ h \oplus_E v\colon TE \to \pi^* TM \oplus_E \pi^* E \tag{9.1} \]
is a smooth vector bundle isomorphism over $\mathrm{Id}_E$. See Figure 9.1.

Proof. Because $TE = H \oplus_E VE$, with $H = \ker v$ and $VE = \ker h$, the fiber-wise restriction
\[ (h \oplus_E v)|_{T_e E}\colon T_e E \to (\pi^* TM \oplus_E \pi^* E)_e = T_{\pi(e)} M \oplus E_{\pi(e)} \]
is a linear isomorphism for each $e \in E$. The map is a smooth vector bundle morphism over $\mathrm{Id}_E$ by construction. It is therefore a smooth vector bundle isomorphism over $\mathrm{Id}_E$.
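Although no implementation is attempted in this paper, the decomposition in Proposition 9.2 is exactly the kind of structure that benefits from machine-checked types. The following is a minimal, hypothetical Haskell sketch (Haskell is the language suggested for this purpose in Section 14); the type names and the naive list representation are placeholder assumptions, not part of the formalism.

-- Placeholder fiber representations, for illustration only.
type TangentM = [Double]   -- an element of T_{pi(e)} M
type FiberE   = [Double]   -- an element of E_{pi(e)}

-- A tangent vector to the total space E, decomposed as in Proposition 9.2:
-- h (+)_E v : TE -> pi^* TM (+)_E pi^* E is an isomorphism over Id_E.
data DecompTE = DecompTE
  { horizontalPart :: TangentM   -- image under h = T pi (kills vertical vectors)
  , verticalPart   :: FiberE     -- image under the connection map v (kills horizontal vectors)
  }

-- Given h and v on some raw representation of T_e E, the decomposition is
-- just their pairing; the distinct result types keep the two parts separate.
decompose :: (te -> TangentM) -> (te -> FiberE) -> te -> DecompTE
decompose h v xi = DecompTE (h xi) (v xi)

main :: IO ()
main = print (horizontalPart d, verticalPart d)
  where d = decompose (take 2) (drop 2) [1, 2, 3, 4 :: Double]  -- toy 2 + 2 split

Nothing here computes a connection; the point is only that the horizontal and vertical components carry different types and therefore cannot be silently confused in later calculations.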
Remark 9.3 (Linear connection/covariant derivative correspondence). Given a covariant derivative $\nabla^E$ on a smooth vector bundle $\pi\colon E \to M$, there is a naturally induced linear connection, defined via the connection map
\[ v\colon TE \to E, \quad \delta\gamma \mapsto \nabla^{(\pi \circ \gamma)^* E}_\delta \gamma, \tag{9.2} \]
where $\gamma\colon I \to E$ is a variation in $E$. Here, $\nabla^{(\pi \circ \gamma)^* E}$ denotes the pullback of the covariant derivative $\nabla^E$ through the map $\pi \circ \gamma$ (see (8.7)). Conceptually, all $v$ does is replace an ordinary derivative ($\delta\gamma$) with the corresponding covariant one ($\nabla^{(\pi \circ \gamma)^* E}_\delta \gamma$).

Conversely, given a connection map $v \in \Gamma(\pi^* E \otimes_E T^* E)$, a covariant derivative is defined by
\[ \nabla^E\colon \Gamma(E) \to \Gamma(E \otimes_M T^* M), \quad \xi \mapsto \xi^* v \odot_{\xi^* TE} \nabla^{M \to E} \xi. \]
The scaling equivariance of $v$ is critical for showing that this map actually defines a covariant derivative. Full type safety should be observed here; by the contravariance of the pullback of bundles (see (6.3)), $\xi^* \pi^* E = (\pi \circ \xi)^* E = \mathrm{Id}_M^* E = E$, so
\[ \xi^* v \odot_{\xi^* TE} \nabla^{M \to E} \xi \in \Gamma\!\left( \xi^* (\pi^* E) \otimes_M T^* M \right) = \Gamma\!\left( (\pi \circ \xi)^* E \otimes_M T^* M \right) = \Gamma(E \otimes_M T^* M), \]
and therefore $\nabla^E \xi \in \Gamma(E \otimes_M T^* M)$ as required.

Let $c_1 \in \Gamma(F_1 \otimes_M T^* M), \ldots, c_n \in \Gamma(F_n \otimes_M T^* M)$ be such that $c_1 \oplus_M \cdots \oplus_M c_n \in \Gamma((F_1 \oplus_M \cdots \oplus_M F_n) \otimes_M T^* M)$ is a vector bundle isomorphism. If $L \in C^\infty(M, \mathbb{R})$, then there exist unique $L_{,c_i} \in \Gamma(F_i^*)$ for each $i \in \{1, \ldots, n\}$ such that
\[ \nabla^{M \to \mathbb{R}} L = L_{,c_1} \odot_{F_1} c_1 + \cdots + L_{,c_n} \odot_{F_n} c_n. \tag{9.4} \]
This decomposition of $\nabla^{M \to \mathbb{R}} L$ provides what will be called the partial covariant derivatives of $L$ (with respect to the given decomposition).
Proof. The following equivalences provide a formula for directly defining $L_{,c_1}, \ldots, L_{,c_n}$:
\begin{align*}
\nabla^{M \to \mathbb{R}} L &= L_{,c_1} \odot_{F_1} c_1 + \cdots + L_{,c_n} \odot_{F_n} c_n \\
\iff \nabla^{M \to \mathbb{R}} L &= (L_{,c_1} \oplus_M \cdots \oplus_M L_{,c_n}) \odot_{F_1 \oplus_M \cdots \oplus_M F_n} (c_1 \oplus_M \cdots \oplus_M c_n) \\
\iff \nabla^{M \to \mathbb{R}} L \odot_{TM} (c_1 \oplus_M \cdots \oplus_M c_n)^{-1} &= L_{,c_1} \oplus_M \cdots \oplus_M L_{,c_n}.
\end{align*}
Existence and uniqueness is therefore proven.
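In finite dimensions the last equivalence is simply a change of frame, i.e. a small linear solve. The following hedged Haskell sketch works the simplest possible instance: $M = \mathbb{R}^2$, trivial line bundles $F_1, F_2$, the frame $c_1 = dx$, $c_2 = dx + dy$ (which is a bundle isomorphism onto $F_1 \oplus F_2$), and a hypothetical differential $\nabla L = 3\,dx + 5\,dy$; none of these choices come from the text.

-- Covectors are written in the (dx, dy) basis.
c1, c2, dL :: (Double, Double)
c1 = (1, 0)   -- dx
c2 = (1, 1)   -- dx + dy
dL = (3, 5)   -- hypothetical nabla L = 3 dx + 5 dy

-- Solve a*c1 + b*c2 = dL for (a, b) = (L_{,c1}, L_{,c2}) by Cramer's rule,
-- i.e. apply the inverse of the frame isomorphism to nabla L.
partials :: (Double, Double)
partials = ((x1 * b2 - a2 * x2) / det, (a1 * x2 - b1 * x1) / det)
  where
    (a1, b1) = c1
    (a2, b2) = c2
    (x1, x2) = dL
    det      = a1 * b2 - a2 * b1

main :: IO ()
main = print partials   -- (-2.0, 5.0); indeed -2*dx + 5*(dx + dy) = 3 dx + 5 dy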
Corollary 9.5 (Horizontal/vertical derivatives). Let $h := T\pi \in \Gamma(\pi^* TM \otimes_E T^* E)$ as before, and let $L \in C^\infty(E, \mathbb{R})$. If $v \in \Gamma(\pi^* E \otimes_E T^* E)$ is a connection map, then there exist unique $L_{,h} \in \Gamma(\pi^* T^* M)$ and $L_{,v} \in \Gamma(\pi^* E^*)$ such that
\[ \nabla^{E \to \mathbb{R}} L = L_{,h} \odot_{\pi^* TM} h + L_{,v} \odot_{\pi^* E} v. \]
It should be noted that the basepoint-preserving issue discussed in Section 7 plays a role in choosing to use the tensor field formulation of $h\colon TE \to TM$ and $v\colon TE \to E$. In particular, without preserving the basepoint (via the $\pi$-pullback of $TM$ and $E$ to form $h \in \Gamma(\pi^* TM \otimes_E T^* E)$ and $v \in \Gamma(\pi^* E \otimes_E T^* E)$), the map $h \oplus_E v$ would not be a smooth bundle isomorphism, and the horizontal and vertical derivatives would be maps of the form $L_{,h}\colon E \to T^* M$ and $L_{,v}\colon E \to E^*$ rather than tensor fields.

The quantity
\[ \nabla^2 \alpha : (X \otimes_M Y - Y \otimes_M X) \]
is an expression measuring the non-commutativity of the $X$ and $Y$ derivatives of $\alpha$.
The quantity $\nabla^2 \alpha$ will be called the covariant Hessian of $\alpha$, because it generalizes the Hessian of elementary calculus; it contains only second-derivative information, and in the special case seen below, it is symmetric in the argument components. It should be noted that if $F \to M$ is the vector bundle such that $\alpha \in \Gamma(F \otimes_M T^* M)$, then $\nabla^2 \alpha \in \Gamma(F \otimes_M T^* M \otimes_M T^* M)$. Furthermore,
\begin{align*}
\nabla^2 \alpha : (X \otimes_M Y - Y \otimes_M X)
&= \nabla_Y \nabla \alpha \cdot X - \nabla_X \nabla \alpha \cdot Y \\
&= \nabla_Y (\nabla \alpha \cdot X) - \nabla \alpha \cdot \nabla_Y X - \nabla_X (\nabla \alpha \cdot Y) + \nabla \alpha \cdot \nabla_X Y \\
&= -\nabla_X \nabla_Y \alpha + \nabla_Y \nabla_X \alpha + \nabla \alpha \cdot [X, Y] \\
&= -\nabla_X \nabla_Y \alpha + \nabla_Y \nabla_X \alpha + \nabla_{[X,Y]} \alpha,
\end{align*}
which is syntactically identical to the common definition for the [Riemannian] curvature endomorphism $R(X, Y)\,\alpha$. In the traditional setting, where $\nabla^E$ is a linear covariant derivative on a vector bundle $E$, the curvature endomorphism takes the form of a tensor field $R^{\nabla^E} \in \Gamma(E \otimes_M E^* \otimes_M T^* M \otimes_M T^* M)$. In this setting however, because $\nabla^E$ may be nonlinear, such a tensorial formulation doesn't generally exist. Instead,
\[ R^{\nabla^E}(X, Y)\,\xi := -\nabla^E_X \nabla^E_Y \xi + \nabla^E_Y \nabla^E_X \xi + \nabla^E_{[X,Y]} \xi \]
defines a second-order covariant differential operator (covariant meaning tensorial in the $X$ and $Y$ components). Put differently,
\[ R^{\nabla^E}(X, Y)\,\xi = \nabla^2 \xi : (X \otimes_M Y - Y \otimes_M X), \]
which will be called the (possibly nonlinear) curvature operator, measures the non-commutativity of the $X$ and $Y$ derivatives of $\xi$. If $R^{\nabla^E}$ is identically zero, then the bundle $E$ is said to be flat with respect to the relevant connections/covariant derivatives.
There are two particularly important instances of flat bundles. The first is the trivial line bundle defined by $\pi^{\mathbb{R} \boxtimes S}_S$ (whose space of smooth sections, as discussed in Section 4, is naturally identified with $C^\infty(S, \mathbb{R})$). In this case, $\nabla^{S \to \mathbb{R}} f \in \Gamma(T^* S)$, and $\nabla^2 f := \nabla^{T^* S} \nabla^{S \to \mathbb{R}} f \in \Gamma(T^* S \otimes_S T^* S)$ is a symmetric tensor field (i.e. it has a $(1\ 2)$ symmetry). Here, the covariant derivative on $C^\infty(S, \mathbb{R})$ is $\nabla^{S \to \mathbb{R}}$ as defined above.
Proof. Let $X, Y \in \Gamma(TS)$. Recall that $\nabla f = df \in \Gamma(T^* S)$. Then
\begin{align*}
\nabla^2 f : (X \otimes_S Y - Y \otimes_S X)
&= \nabla_Y \nabla f \cdot X - \nabla_X \nabla f \cdot Y \\
&= \nabla_Y (\nabla f \cdot X) - \nabla f \cdot \nabla_Y X - \nabla_X (\nabla f \cdot Y) + \nabla f \cdot \nabla_X Y \\
&= \nabla (\nabla f \cdot X) \cdot Y - \nabla (\nabla f \cdot Y) \cdot X + \nabla f \cdot [X, Y] \quad \text{(by symmetry of $\nabla^{TS}$)} \\
&= -\nabla f \cdot [X, Y] + \nabla f \cdot [X, Y] \quad \text{(by definition of $[X, Y]$)} \\
&= 0.
\end{align*}
Because $X \otimes_S Y$ is pointwise-arbitrary in $TS \otimes_S TS$, this shows that $\nabla^2 f$ is symmetric. Equivalently stated, $R^{\nabla^{S \to \mathbb{R}}}$ is identically zero, and therefore the relevant bundle is flat.
The second important case involves the nonlinear covariant derivative $\nabla^{M \to S}$ on $C^\infty(M, S)$. Here, $\nabla^{M \to S} \varphi \in \Gamma(\varphi^* TS \otimes_M T^* M)$ and
\[ \nabla^2 \varphi \in \Gamma(\varphi^* TS \otimes_M T^* M \otimes_M T^* M), \]
so $R^{\nabla^{M \to S}}(X, Y)\,\varphi \in \Gamma(\varphi^* TS)$.
Proposition 10.2 (Symmetry of covariant Hessian on maps). Let $M$ and $S$ be smooth manifolds and let $\nabla^{TM}$ and $\nabla^{TS}$ be symmetric covariant derivatives. If $\varphi \in C^\infty(M, S)$, then
\[ \nabla^2 \varphi \in \Gamma(\varphi^* TS \otimes_M T^* M \otimes_M T^* M) \]
is a tensor field which is symmetric in the two $T^* M$ components. Here, the covariant derivative on $C^\infty(M, S)$ is $\nabla^{M \to S}$ as defined above.
Proof. Let X, Y (TM) and f C
f (
TS). Then
TS
R
MS
(X, Y ) =
f
_
Y
+
Y
X
+
[X,Y ]
_
=
X
(
f Y ) +
X
f Y
+
Y
(
f X)
Y
f X
+
f [X, Y ]
=
X
(
f Y ) +
Y
(
f X) +
f [X, Y ]
+
_
2
f X
_
Y
_
2
f Y
_
X
= (
f Y ) X +(
f X) Y +
f [X, Y ]
2
f : (( X)
M
( Y ) ( Y )
M
( X)) .
By denition, (
f Y ) X + (
f X) Y =
f is pointwise-arbitrary in
S and
X and Y are pointwise-arbitrary in TM, this shows that R
MS
is identically zero, so the bundle dened
by
SM
M
: S M M, whose space of sections is identied with C
M components.
The construction used in (9.4) can be applied to nonlinear as well as linear covariant derivatives to considerable advantage. For example, if $\varphi\colon M \times N \to L$, where $M, N, L$ are smooth manifolds and $p_M := \mathrm{pr}^{M \times N}_1$ and $p_N := \mathrm{pr}^{M \times N}_2$, then define $\varphi_{,M} \in \Gamma(\varphi^* TL \otimes_{M \times N} p_M^* T^* M)$ and $\varphi_{,N} \in \Gamma(\varphi^* TL \otimes_{M \times N} p_N^* T^* N)$ by
\[ \nabla^{M \times N \to L} \varphi = \varphi_{,M} \odot_{p_M^* TM} \nabla p_M + \varphi_{,N} \odot_{p_N^* TN} \nabla p_N. \]
This gives a convenient way to express partial covariant derivatives, which will be used heavily in Part II in calculating the first and second variations of an energy functional. Note that in this parlance, $\varphi_{,(M \times N)}$ is the full tangent map $\nabla^{M \times N \to L} \varphi$.
Defining second partial covariant derivatives $\varphi_{,MM}$, $\varphi_{,MN}$, $\varphi_{,NM}$ and $\varphi_{,NN}$ by
\[ \nabla \varphi_{,M} = \varphi_{,MM} \odot \nabla p_M + \varphi_{,MN} \odot \nabla p_N \quad \text{and} \quad \nabla \varphi_{,N} = \varphi_{,NM} \odot \nabla p_M + \varphi_{,NN} \odot \nabla p_N, \]
the symmetry of the covariant Hessian of $\varphi$ can be used to show various symmetries of these second derivatives.
Proposition 10.3 (Symmetries of partial covariant derivatives). With $\varphi$ and its second partial covariant derivatives as above,
\[ \varphi_{,MM} \in \Gamma(\varphi^* TL \otimes_{M \times N} p_M^* T^* M \otimes_{M \times N} p_M^* T^* M) \]
and $\varphi_{,NN}$ (having analogous type) are $(2\ 3)$-symmetric (i.e. $(\varphi_{,MM})^{(2\ 3)} = \varphi_{,MM}$ and $(\varphi_{,NN})^{(2\ 3)} = \varphi_{,NN}$), and the mixed second partial covariant derivatives
\[ \varphi_{,MN} \in \Gamma(\varphi^* TL \otimes_{M \times N} p_M^* T^* M \otimes_{M \times N} p_N^* T^* N) \quad \text{and} \quad \varphi_{,NM} \in \Gamma(\varphi^* TL \otimes_{M \times N} p_N^* T^* N \otimes_{M \times N} p_M^* T^* M) \]
are mutually $(2\ 3)$-symmetric (i.e. $\varphi_{,MN} = (\varphi_{,NM})^{(2\ 3)}$).
Proof. Let X, Y (TM TN). If Tp
N
X = 0 and Tp
M
Y = 0, then
0 =
2
: (X
MN
Y Y
MN
X) (by (10.2))
=
,MM
: (
p
M
X
MN
p
M
Y
p
M
Y
MN
p
M
X)
+
,MN
: (
p
M
X
MN
p
N
Y
p
M
Y
MN
p
N
X)
+
,NM
: (
p
N
X
MN
p
M
Y
p
N
Y
MN
p
M
X)
+
,NN
: (
p
N
X
MN
p
N
Y
p
N
Y
MN
p
N
X)
=
,MN
: (
p
M
X
MN
p
N
Y
p
N
Y
MN
p
M
X)
=
_
,MN
(
,NM
)
(2 3)
_
: (
p
M
X
MN
p
N
Y ) .
Because
p
M
X and
p
N
Y are pointwise-arbitrary in p
M
TM and p
N
TN respectively, this implies that
,MN
= (
,NM
)
(2 3)
. Analogous calculations (setting
p
M
X = 0 and
p
M
Y = 0 and then separately
setting
p
N
X = 0 and
p
N
Y = 0) show that
,MM
= (
,MM
)
(2 3)
and
,NN
= (
,NN
)
(2 3)
.
There are two final results regarding the second covariant derivative that will be especially useful in the calculation of the first and second variations of an energy functional (see (12.1) and (13.2)).

Proposition 10.4 (Chain rule for covariant Hessian). Let $\pi\colon E \to N$ define a bundle having a first and second covariant derivative (i.e. a section of $E$ can be covariantly differentiated twice). If $\varphi\colon M \to N$ and $e \in \Gamma(E)$, then
\[ \nabla^2 (\varphi^* e) = \varphi^* (\nabla^2 e) : \left( \nabla^{M \to N} \varphi \otimes_M \nabla^{M \to N} \varphi \right) + \varphi^* (\nabla^E e) \odot_{\varphi^* TN} \nabla^2 \varphi. \]
Proof. Let X (TM). Then
e X =
X
e
=
X
_
E
e
TN
_
=
X
(
e)
TN
TN
X
=
_
2
e
TN
X
_
TN
TN
X
=
_
2
e :
TN
(
) +
TN
X.
Because X is pointwise-arbitrary in TM, this establishes the desired equality.
Proposition 10.5 (Pullback curvature endomorphism). Let : E N dene a vector bundle having rst
and second covariant derivatives. If : M N, then R
TN
=
R
TN
:
TN
(
).
Proof. Note that R
TN
(
TN
M
N
M
T
M
M
T
Z (
TN). Then
(Id
TN
M
Z)
TN
M
N
R
TN
:
TM
(X
M
Y )
= R
TN
(X, Y ) (
Z)
=
2
Z :
TM
(X
M
Y )
=
2
Z :
TN
(
) :
TM
(X
M
Y )
+
TN
:
TM
(X
M
Y ) (by (10.4))
=
2
Z :
TN
((
X)
M
(
Y )) +
TN
0 (by symmetry of
)
=
_
_
(Id
TN
N
Z)
TN
N
T
N
R
TN
__
:
TN
((
X)
M
(
Y ))
= (Id
TN
M
Z)
TN
M
R
TN
:
TN
(
) :
TM
(X
M
Y ) ,
and because X, Y and
(
,A
) = (z
)
,A
,
i.e. evaluation in B commutes with a derivative along A.
Proof. Let X (TA), and let p
A
:= pr
AB
1
and p
B
:= pr
AB
2
. Then
(z
)
,A
X =
z
E
z
X
= z
E
z
(TATB)
z
TA
X
= z
E
TATB
p
A
X
_
= z
,A
A
TA
p
A
TATB
p
A
X
_
(since
p
B
TATB
p
A
X = 0)
= z
,A
TA
X (since z
A
X = (p
A
z)
X = Id
A
X = X),
and because X is pointwise-arbitrary in TM, this implies that z
,A
= (z
)
,A
as desired.
Proposition 10.7. Let A, B, C be smooth manifolds, let : A B C be smooth, let p
A
:= pr
AB
1
and
p
B
:= pr
AB
2
, and let X, Y (TATB). If
p
B
X = 0 and
p
A
Y = 0, then
,AB
: ((
p
A
X)
AB
(
p
B
Y )) =
TC
Y
ABC
X
.
Proof. The conditions
p
B
X = 0 and
p
A
Y = 0 imply that
Y
X = 0 in the product covariant
derivative. Then since p
A
AB
p
B
= Id
AB
, it follows that
p
A
AB
p
B
= (
p
A
AB
p
B
) =
(p
A
AB
p
B
) = Id
TATB
= 0,
and therefore
Y
(
p
A
X) =
Y
p
A
X +
p
A
Y
X = 0 X +
p
A
0 = 0.
For the main calculation,
,AB
: ((
p
A
X)
AB
(
p
B
Y ))
=
_
,AB
p
B
TB
p
B
TATB
Y
_
A
TA
p
A
TATB
X
=
_
AB
p
A
Y
,A
_
A
TA
p
A
TATB
X (since
p
A
Y = 0)
=
Y
(
,A
p
A
X)
,A
p
A
Y
(
p
A
X) (by reverse product rule)
=
TC
Y
ABC
X
(since
Y
(
p
A
X) = 0),
as desired.
Part II
Riemannian Calculus of Variations
The use of the Calculus of Variations in the Riemannian setting to develop the geodesic equations and to study harmonic maps is quite well-established. A more general formulation is required for more specific applications, such as continuum mechanics in Riemannian manifolds. The tools developed in Part I will now be used to formulate the first and second variations and Euler-Lagrange equations of an energy functional corresponding to a first-order Lagrangian. In particular, the bundle decomposition discussed in Section 9 will be needed to employ the standard integration-by-parts trick seen in the formulation of the analogous parts of the elementary Calculus of Variations. The seemingly heavy and pedantic formalism built up thus far will now show its usefulness.
In this part, let $(M, g)$ and $(S, h)$ be Riemannian manifolds with $M$ compact. Calculations will be done formally in the space $C^\infty(M, S)$, noting that its completion under various norms will give various Sobolev spaces of maps from $M$ to $S$, which are ultimately the spaces which must be considered when finding critical points of the relevant energy functionals. See [4, 5] for details on the analytical issues. Let $dV_g$ denote the Riemannian volume form corresponding to metric $g$, and let $dV_{\partial g}$ be the induced volume form on $\partial M$. Let $\iota\colon \partial M \to M$ be the inclusion, and let
\[ E := TS \boxtimes_{S \times M} T^* M, \]
making $\pi\colon E \to S \times M$ a vector bundle. The energy functionals in this section will be assumed to have the form
\[ \mathcal{L}\colon C^\infty(M, S) \to \mathbb{R}, \quad \varphi \mapsto \int_M L \circ \nabla\varphi \, dV_g, \]
where $L\colon E \to \mathbb{R}$, referred to as the Lagrangian of the functional, is smooth. Here, $\nabla\varphi$ could be understood to take values either in $E = TS \boxtimes_{S \times M} T^* M$ or in $\varphi^* TS \otimes_M T^* M$; in the former case the composition $L \circ \nabla\varphi$ is literal, while in the latter case, there is an implicit conversion from $\varphi^* TS \otimes_M T^* M$ to $TS \boxtimes_{S \times M} T^* M$ via a fiber projection bundle morphism (see (6.2)). Either way, $L \circ \nabla\varphi\colon M \to \mathbb{R}$. Let $\nabla^{TS}$ and $\nabla^{TM}$ denote the respective Levi-Civita connections, which induce a covariant derivative $\nabla^E$ on $E$ (see (8.9)). Define the connection map $v \in \Gamma(\pi^* E \otimes_E T^* E)$ using $\nabla^E$ as in (9.2). For convenience, the $S \times M$ subscript will be suppressed on the full tensor product defining $E$ from here forward.
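Before proceeding, it may help to see the shape of such an energy functional in the most elementary flat case. The following Haskell sketch assumes $S = \mathbb{R}^2$ with the Euclidean metric, $M = [0, 1]$, the Lagrangian $L(A) = \frac{1}{2}|A|^2$, and a Riemann-sum quadrature; all of these choices are illustrative assumptions, not part of the general setting above.

-- Discretized energy of a curve phi : [0,1] -> R^2 for L(A) = |A|^2 / 2,
-- using forward differences for the derivative and a Riemann sum for dV_g.
energy :: (Double -> (Double, Double)) -> Int -> Double
energy phi n = sum [ 0.5 * speed2 k * dt | k <- [0 .. n - 1] ]
  where
    dt = 1 / fromIntegral n
    speed2 k =
      let t        = fromIntegral k * dt
          (x0, y0) = phi t
          (x1, y1) = phi (t + dt)
          (vx, vy) = ((x1 - x0) / dt, (y1 - y0) / dt)
      in vx * vx + vy * vy

main :: IO ()
main = mapM_ (print . flip energy 1000)
  [ \t -> (t, t)                        -- straight line: energy ~ 1.0
  , \t -> (t, t + 0.2 * sin (pi * t))   -- same endpoints, strictly larger energy
  ]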
11 Critical Points and Variations
One of the most pertinent properties of an energy functional is its set of critical points. Often, the solution to a problem in physics will take the form of minimizing a particular energy functional. Lagrangian mechanics is the quintessential example of this. This section will deal with some of the main considerations regarding such critical points.

Because the domain of a [real-valued] functional $\mathcal{L}$ may be a nonlinear space, the relevant first derivative is the [real-valued] differential $d\mathcal{L}$, which is paired with the linearized variation of a map $\varphi \in C^\infty(M, S)$. In particular, a one-parameter variation of $\varphi$ is a smooth map $\tilde\varphi\colon M \times I \to S$, where the $I$ component is the variational parameter. Letting $i$ denote the standard coordinate on $I$, the linearized variation is then $\delta_i \tilde\varphi\colon M \to TS$, recalling that $\delta_i := \left. \frac{\partial}{\partial i} \right|_{i=0}$. Because $\pi^{TS}_S \circ \delta_i \tilde\varphi = \varphi$, it follows that $\delta_i \tilde\varphi \in \Gamma(\varphi^* TS)$, i.e. $\delta_i \tilde\varphi$ is a vector field along $\varphi$. The object $\delta_i \tilde\varphi$ will be called a linearized variation. Call the elements of $\Gamma(\varphi^* TS)$ linear variations.
Proposition 11.1 (Each linear variation is a linearized variation). Let $\exp\colon U \to S$ denote the exponential map associated to $\nabla^{TS}$, where $U \subseteq TS$ is a neighborhood of the zero bundle in $TS$ on which $\exp$ is defined, and let $\mu\colon TS \times \mathbb{R} \to TS$, $(s, \epsilon) \mapsto \epsilon s$ denote the scalar multiplication structure on $TS$. If $A \in \Gamma(\varphi^* TS)$ and if $\tilde\varphi$ is defined by $\tilde\varphi := \exp \circ\, \mu \circ (A \times \mathrm{Id}_I)|_{\tilde U}$ on a suitable neighborhood $\tilde U \subseteq M \times I$, then $\delta_i \tilde\varphi = A$. In other words, every vector field over $\varphi$ is realized as the linearization of a one-parameter variation of $\varphi$.
Proof. The map is well-dened and smooth by construction. Let p M. Then
(
i
) (p) =
i
((p, i))
=
i
(exp (AId
I
) (p, i))
=
exp
i
((A(p) , i))
=
exp
i
(iA(p))
=
exp
_
E
V E
[
Z(
E)
_
A(p)
= A(p) ,
where Z (
(C
(M, S))
with (
S
E
M
.
Let
\[ \sigma := \nabla \pi_S \in \Gamma(\pi_S^* TS \otimes_E T^* E) \quad \text{and} \quad \mu := \nabla \pi_M \in \Gamma(\pi_M^* TM \otimes_E T^* E). \]
The letters sigma and mu have been chosen to reflect the fact that $L_{,\sigma} \in \Gamma(\pi_S^* T^* S)$ and $L_{,\mu} \in \Gamma(\pi_M^* T^* M)$ give the $S$ component (spatial) and $M$ component (material) of the derivative $\nabla^{E \to \mathbb{R}} L \in \Gamma(T^* E)$. The connection map $v$ will be retained as is, giving $L_{,v} \in \Gamma(\pi^* E^*)$. If $\varphi \in C^\infty(M, S)$ and $A \in \Gamma(\varphi^* TS)$, then
dL() A =
_
M
A
S
_
,M
L
,
div
M
_
,M
L
,v
__
dV
g
+
_
M
A
,M
L
,v
T
M
dV
g
.
The expression above is often called the first variation of $\mathcal{L}$. A type analysis here gives $\nabla\varphi_{,M}^* L_{,\sigma} \in \Gamma(\varphi^* T^* S)$ and $\nabla\varphi_{,M}^* L_{,v} \in \Gamma(\varphi^* T^* S \otimes_M TM)$. Recall that because the domain of $\varphi$ is $M$, $\varphi_{,M} = \nabla^{M \to S} \varphi$.
Proof. Supporting calculations will be made below in lemmas. Let : M I S be as in (11.1), so that
i
= A. For tidiness, let L
,
:=
,M
L
,
and L
,v
:=
,M
L
,v
. Then
dL() A = dL()
i
=
i
(L())
=
_
M
i
(L
,M
) dV
g
=
_
M
L
,
TS
A+L
,v
TS
M
T
TS
AdV
g
(by (12.2))
=
_
M
A
S
(L
,
div
M
L
,V
) + div
M
(A
S
L
,V
) dV
g
(by (12.2))
=
_
M
A
S
(L
,
div
M
L
,V
) dV
g
+
_
M
A
S
L
,V
T
M
dV
g
(divergence theorem),
as desired.
As for the types of
,M
L
,
and
,M
L
,v
, the contravariance of bundle pullback allows signicant simpli-
cation. Because L
,
(
S
T
S) and L
,v
(
),
,M
L
,
_
,M
S
T
S
_
=
_
(
S
,M
)
S
_
= (
S) and
,M
L
,v
_
,M
E
_
=
_
(
,M
)
(T
S TM)
_
= (
S
M
TM) .
The supporting calculations follow. Dene z : M MI, m (m, 0) for purposes of evaluation of i = 0
via precomposition as in (10.6). Then
i
is a section of a pullback bundle;
i
= z
i
(z
(TM TI)). It
should be noted that z = by denition, and that z
,M
= (z
)
,M
=
,M
by (10.6).
Lemma 12.2. Let L, , A, , and v be as in Theorem 12.1. The variational derivative of L
,M
decomposes
in terms of the partial covariant derivatives L
,
and L
,v
and the linearized variation A;
i
(L
,M
) =
,M
L
,
TS
i
+
,M
L
,v
(
M
Id
M
)
i
.
The integration-by-parts trick as in the derivation of the rst variation in elementary calculus of variations
generalizes to the covariant setting;
L
,
TS
A+L
,v
TS
M
T
A = A
S
(L
,
div
M
L
,v
) + div
M
(A
S
L
,v
) .
Proof. A wonderful string of equalities follows.
i
(L
,M
)
= z
MIR
(L
,M
)
z
(TMTI)
i
(here,
i
= z
i
)
= z
,M
ER
L
z
,M
TE
z
MIE
,M
z
(TMTI)
i
(chain rule)
=
,M
_
L
,
S
TS
+L
,
M
TM
+L
,v
E
v
_
,M
TE
i
,M
(by (9.4) and because
,M
z =
,M
)
=
,M
L
,
,M
S
TS
,M
,M
TE
i
,M
+
,M
L
,
,M
M
TM
,M
,M
TE
i
,M
+
,M
L
,v
,M
,M
v
,M
TE
i
,M
=
,M
L
,
TS
i
+
,M
L
,v
(
M
Id
M
)
i
(by (12.3))
Note that by (6.3),
,M
S
TS = (
S
,M
)
TS =
TS,
,M
M
TM = (
M
,M
)
TM =
_
pr
MI
M
_
TM
and
,M
E = (
,M
)
E =
_
MI
pr
MI
M
_
E. Replacing
i
with A gives
i
(L
,M
) = L
,
TS
A+L
,v
TS
M
T
A,
establishing the rst equality.
For the second,
L
,
TS
A+L
,v
TS
M
T
A
= L
,
TS
A+ tr
T
M
_
L
,v
TS
A
_
(tracing TMseparately)
= A
S
L
,
+ tr
T
M
_
(L
,v
TS
A)
_
M
Id
M
L
,v
_
TS
A
_
(reverse product rule)
= A
S
L
,
A
S
tr
T
M
Id
M
L
,v
+ tr
T
M
(A
S
L
,v
) (
TS
commutes with tr
T
M
)
= A
S
(L
,
div
M
L
,v
) + div
M
(A
S
L
,v
) (denition of div
M
).
Note that L
,v
(
S
M
TM), so div
M
L
,v
(
S) and A
S
L
,v
(TM).
Lemma 12.3. The variation
i
,M
decomposes as follows.
,M
,M
TE
i
,M
=
i
(
TS) ,
,M
,M
TE
i
,M
= 0 (TM) ,
,M
v
,M
TE
i
,M
=
TS
i
(
TS
M
T
M) .
Proof. This calculation determines the component of
i
,M
.
,M
,M
TE
i
,M
=
,M
,M
TE
i
,M
=
i
(
S
,M
)
=
i
_
pr
SM
S
,M
_
=
i
(z
TS)
= (
TS) .
This calculation determines the component of
i
,M
.
,M
,M
TE
i
,M
=
,M
,M
TE
i
,M
=
i
(
M
,M
)
=
i
_
pr
SM
M
,M
_
=
i
pr
MI
M
= 0
_
z
_
pr
MI
M
_
TM
_
= (TM) .
The last equality follows from the fact that pr
MI
M
does not depend on the i coordinate.
This calculation determines the v component of
i
,M
. Let p
M
:= pr
MI
M
and p
I
:= pr
MI
I
. The
left-hand side of the third equality claimed in the lemma will be examined before evaluating at i = 0;
,M
v
,M
TE
i
,M
=
(
,M
)
E
p
I
i
,M
=
(
MI
p
M
)
(TST
M)
p
I
i
,M
(
TS
MI
p
M
T
M) .
Let Y (TM), noting that p
M
Y (p
M
TM). Then
_
,M
v
,M
TE
i
,M
_
M
TM
p
M
Y
=
TS
MI
p
M
T
M
p
I
i
,M
p
M
TM
p
M
Y
=
_
,MI
p
I
TI
p
i
_
M
TM
p
M
Y
=
_
,IM
p
M
TM
p
M
Y
_
I
TI
p
i
(by (10.3))
=
_
,I
p
I
TI
p
i
_
,M
p
M
TM
p
M
Y
,I
p
I
TI
_
(p
i
)
,M
p
M
TM
p
M
Y
_
= (
i
)
,M
p
M
TM
p
M
Y (since p
i
doesnt depend on M).
Recall that Id
M
= p
M
z and that the pullback of bundles is contravariant. Then evaluating at i = 0 via
pullback by z renders
_
,M
v
,M
TE
i
,M
_
TM
Y
=
_
(
,M
z)
v
(
,M
z)
TE
z
,M
_
(p
M
z)
TM
(p
M
z)
Y
=
_
z
,M
v
z
,M
TE
z
,M
_
M
TM
z
M
Y
= z
__
,M
v
,M
TE
i
,M
_
M
TM
p
M
Y
_
= z
_
(
i
)
,M
p
M
TM
p
M
Y
_
= z
(
i
)
,M
z
M
TM
z
M
Y
= (z
i
)
,M
(p
M
z)
TM
(p
M
z)
Y (by (10.6))
= (
i
)
,M
TM
Y
=
TS
i
TM
Y
The last equality is because
i
(
,M
,M
TI
i
,M
=
TS
i
, i.e. the variational derivative
i
commutes with the rst material derivative, just as in the
analogous situation in elementary calculus of variations.
Corollary 12.4 (Euler-Lagrange equations). If $\varphi \in C^\infty(M, S)$ is a critical point of $\mathcal{L}$ (i.e. $d\mathcal{L}(\varphi) \cdot A = 0$ for all $A \in \Gamma(\varphi^* TS)$), then
\[ \nabla\varphi_{,M}^* L_{,\sigma} - \mathrm{div}_M \left( \nabla\varphi_{,M}^* L_{,v} \right) = 0 \ \text{on } M, \qquad \nabla\varphi_{,M}^* L_{,v} \odot_{T^* M} \nu = 0 \ \text{on } \partial M, \]
where $\nu$ denotes the outward unit conormal along $\partial M$. These are called the Euler-Lagrange equations for the energy functional $\mathcal{L}$. Recall that because the domain of $\varphi$ is $M$, $\varphi_{,M} = \nabla^{M \to S} \varphi$.
Proof. This follows trivially from (12.1) and the Fundamental Lemma of the Calculus of Variations [6, pg.
16].
It should be noted that the boundary Euler-Lagrange equation is due to the fact that the admissible variations are entirely unrestricted. If, for example, the class of maps being considered had fixed boundary data, then any variation would vanish at the boundary, and there would be no boundary Euler-Lagrange equation; this is typically how geodesics and harmonic maps are formulated.
Remark 12.5 (Analogs in elementary calculus of variations). The quantities $L_{,\mu}$, $L_{,\sigma}$, $L_{,v}$ generalize the quantities $\frac{\partial L}{\partial x}$, $\frac{\partial L}{\partial z}$, $\frac{\partial L}{\partial p}$ respectively of the elementary treatment of the calculus of variations for the energy functional
\[ (f\colon U \to \mathbb{R}^n) \mapsto \int_U L(x, f(x), Df(x)) \, dx, \]
where $U \subseteq \mathbb{R}^m$ is compact and $U \times \mathbb{R}^n \times \mathbb{R}^{m \times n} \ni (x, z, p) \mapsto L(x, z, p)$ is the Lagrangian. Here, $\frac{\partial L}{\partial x}\colon U \times \mathbb{R}^n \times \mathbb{R}^{m \times n} \to \mathbb{R}^m$, $\frac{\partial L}{\partial z}\colon U \times \mathbb{R}^n \times \mathbb{R}^{m \times n} \to \mathbb{R}^n$, and $\frac{\partial L}{\partial p}\colon U \times \mathbb{R}^n \times \mathbb{R}^{m \times n} \to \mathbb{R}^{m \times n}$ decompose the total derivative $dL$ and are defined by the relation
\[ dL(x, z, p) \cdot (u, v, w) = \frac{\partial L}{\partial x}(x, z, p) \cdot u + \frac{\partial L}{\partial z}(x, z, p) \cdot v + \frac{\partial L}{\partial p}(x, z, p) : w \]
for $u \in \mathbb{R}^m$, $v \in \mathbb{R}^n$, and $w \in \mathbb{R}^{m \times n}$. The Euler-Lagrange equation in this setting is
\[ \left( \frac{\partial L}{\partial z} - \mathrm{div}_U \frac{\partial L}{\partial p} \right) (x, f(x), Df(x)) = 0 \quad \text{for } x \in U, \]
noting that the left hand side of the equation takes values in $\mathbb{R}^n$.
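A quick numerical sanity check of the elementary form is possible. The following Haskell sketch assumes $m = n = 1$ and the hypothetical Lagrangian $L(x, z, p) = \frac{1}{2} p^2$, for which $\frac{\partial L}{\partial z} = 0$ and $\frac{\partial L}{\partial p} = p$, so the Euler-Lagrange equation reduces to $-f'' = 0$; these choices are made only for this illustration.

-- Discrete Euler-Lagrange residual (dL/dz - d/dx dL/dp)(x, f(x), f'(x)) for the
-- assumed Lagrangian L(x,z,p) = p^2/2, which is simply -f''(x).
elResidual :: (Double -> Double) -> Double -> Double -> Double
elResidual f h x = negate ((f (x + h) - 2 * f x + f (x - h)) / (h * h))

main :: IO ()
main = do
  print (elResidual (\x -> 2 * x + 1) 1e-3 0.7)   -- affine f: residual ~ 0 (critical)
  print (elResidual (\x -> x * x) 1e-3 0.7)       -- f(x) = x^2: residual ~ -2 (not critical)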
In most situations involving simpler calculations, it is desirable and acceptable to dispense with the highly decorated notation and use trimmed-down, context-dependent notation, leaving off type-specifying sub/superscripts when clear from context.
Proposition 12.6 (Conserved quantity). If $M$ is a real interval, $\varphi \in C^\infty(M, S)$, and
\[ H := (\nabla\varphi)^* L_{,v} \odot_{\varphi^* TS \otimes_M T^* M} \nabla\varphi - L \circ \nabla\varphi, \]
then along solutions of the Euler-Lagrange equations $H \in C^\infty(M, \mathbb{R})$ is constant. If $L$ is kinetic minus potential energy, then $H$ is kinetic plus potential energy (the total energy), and is referred to as the Hamiltonian.
Proof. Let t be the standard real coordinate. Note that because M is a real interval,
M
dt.
Terms appearing in the derivative of H can be simplied as follows. Note the repeated
derivatives;
: M
TS
M
T
M but
: M (
T (
TS
M
T
d
dt
= (
d
dt
=
d
dt
(
S
) =
,
(
d
dt
= (
v
d
dt
= d
dt
,
d
dt
(
L = (
d
dt
= (
L
,
+ (
L
,v
d
dt
,
d
dt
_
(
L
,v
:
_
= d
dt
(
L
,v
: (
M
dt) + (
L
,v
: d
dt
=
_
d
dt
(
L
,v
dt
_
+ (
L
,v
: d
dt
.
Again, because M is a real interval, the divergence is just the derivative, so the Euler-Lagrange equation is
0 = (
L
,
div
M
_
(
L
,v
_
= (
L
,
(
L
,v
:
_
dt
M
d
dt
_
= (
L
,
d
dt
(
L
,v
dt,
and therefore d
dt
(
L
,v
dt = (
L
,
. Thus
d
dt
H = d
dt
_
(
L
,v
:
L
_
=
_
d
dt
(
L
,v
dt (
L
,
_
which is zero because $\varphi$ satisfies the Euler-Lagrange equation. This shows that $H$ is constant along solutions of the Euler-Lagrange equation, and is therefore a conserved quantity. It should be noted that this proof relies on the fact that the divergence takes a particularly simple form when the domain $M$ is a real interval; the result does not necessarily hold for a general choice of $M$.
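A one-dimensional numerical illustration of Proposition 12.6, under assumptions chosen only for this sketch: with the hypothetical Lagrangian $L = \frac{1}{2}(q')^2 - \frac{1}{2}q^2$, the Euler-Lagrange equation is $q'' = -q$, solved by $q(t) = \sin t$, and the Hamiltonian $H = \frac{1}{2}(q')^2 + \frac{1}{2}q^2$ should be constant along that solution.

-- H evaluated along the exact solution q(t) = sin t, q'(t) = cos t of q'' = -q.
hamiltonian :: Double -> Double
hamiltonian t = 0.5 * cos t * cos t + 0.5 * sin t * sin t

main :: IO ()
main = mapM_ (print . hamiltonian) [0, 0.5, 1.0, 1.5, 2.0]   -- each value ~ 0.5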
Example 12.7 (Harmonic maps). Define a metric
\[ k \in \Gamma(E^* \boxtimes_{S \times M} E^*) = \Gamma\!\left( (TS \boxtimes T^* M)^* \boxtimes_{S \times M} (TS \boxtimes T^* M)^* \right) \]
in a manner analogous to that in (3.4);
\[ k := \left( h \boxtimes g^{-1} \right)^{(2\ 3)}. \]
To clarify, $h \boxtimes g^{-1} \in \Gamma\!\left( (T^* S \otimes_S T^* S) \boxtimes (TM \otimes_M TM) \right)$, so permuting the middle two components (as in the definition of $k$ from $h \boxtimes g^{-1}$) gives the correct type, including the necessary metric symmetry condition. If $A \in E$, then $|A|^2_k$ is the quantity obtained by raising/lowering the indices of $A$ and pairing it naturally with $A$. A useful fact is that $\nabla k = 0$; if $u \otimes v \in TS \boxtimes TM$, then permutation commutativity (8.12) and the product rule gives
\[ \nabla_{u \otimes v} k = \nabla_{u \otimes v} \left( h \boxtimes g^{-1} \right)^{(2\ 3)} = \left( \nabla_u h \boxtimes g^{-1} + h \boxtimes \nabla_v g^{-1} \right)^{(2\ 3)}, \]
which equals zero because $h$ and $g^{-1}$ are parallel with respect to $\nabla^{TS}$ and $\nabla^{TM}$ respectively.
With Lagrangian
\[ L\colon E \to \mathbb{R}, \quad A \mapsto \tfrac{1}{2} |A|^2_k \]
and energy functional
\[ \mathcal{E}(\varphi) := \int_M L \circ \nabla\varphi \, dV_g \]
($\mathcal{E}(\varphi)$ is called the energy of $\varphi$), the resulting Euler-Lagrange equations can be written down after calculating $L_{,\sigma}$ and $L_{,v}$. It is worthwhile to note that $L$ is a quadratic form $A \mapsto A : \tfrac{1}{2} k : A$ on $E$, which will automatically imply that $L_{,v}(A) = A : k$. However, the calculation showing this will be carried out for demonstration purposes.
Let A, B TS T
(A+B)
=
(L(A+B))
=
_
(A+B) :
1
2
(
k (A+B)) : (A+B)
_
.
The product rule gives three terms. The middle term is zero because (A+B) = (A), and therefore does
not depend on . The basepoint evaluation notation for
, =
S
E
M
, so
h =
E
. Let A() be a horizontal curve in E = TS T
E
d
d
A. Then
L
,h
(A)
(TSTM)
h
TE
A =
_
L
,h
(TSTM)
h +L
,v
E
v
_
TE
A
= L
TE
A
=
(L A)
=
_
A :
1
2
A
k : A
_
.
As before, the product rule gives three terms. Using the contravariance of bundle pullback, the middle term
is
1
2
(A)
(E
SM
E
( A)
k =
1
2
( A)
SM
E
( A) ,
which equals zero because k = 0. Thus
L
,h
(A) h
A =
A :
1
2
k : A+A :
1
2
k :
A,
which equals zero because
A = v
A = 0. The quantity h
(TS TM),
showing that L
,h
= 0. Finally, h =
E
implies that L
,
= 0 and L
,
= 0. This can be understood from
the fact that L depends only on the ber values of A, and has no explicit dependence on the basepoint; this
relies crucially on the fact that k = 0.
Finally, the Euler-Lagrange equations can be written down. Recalling that the natural trace of a tensor (used in the divergence term in the Euler-Lagrange equation) is contraction with the appropriate identity tensor, let $(e_i)$ be a local frame for $TM$ and let $(e^i)$ be its dual coframe, so that $e_i \otimes_M e^i$ is a local expression${}^5$ for $\mathrm{Id}_{TM} \in \Gamma(TM \otimes_M T^* M)$.
L
,
div
M
_
(
L
,v
_
= tr (
: k)
=
ei
(
: k)
T
M
e
i
=
ei
: k e
i
:
ei
k e
i
.
The second term vanishes because k = 0. Unraveling the denition of k gives
ei
: k =
h
ei
g
1
. Contracting both sides of the above equation with
h
1
gives
0 =
ei
g
1
e
i
=
ei
e
i
= tr
g
2
(
TS) .
The quantity $\mathrm{tr}_g \nabla^2 \varphi$ is the $g$-trace of the covariant Hessian of $\varphi$ and can rightfully be called the covariant Laplacian of $\varphi$ and denoted by $\Delta_g \varphi$ (this is also referred to as the tension field of $\varphi$ in other literature [20, pg. 13], which is denoted $\tau(\varphi)$). Note that $\Delta_g \varphi$ is a vector field along $\varphi$. This makes sense because $\varphi$ is not necessarily a scalar function; it takes values in $S$. In the case $S = \mathbb{R}$, $\Delta_g \varphi$ is the ordinary covariant Laplacian on scalar functions.
A harmonic map is defined as a critical point of the energy functional $\mathcal{E}(\varphi) := \int_M \tfrac{1}{2} |\nabla\varphi|^2_k \, dV_g$. Assuming a fixed boundary (so that the variations vanish on the boundary) eliminates the boundary Euler-Lagrange equation, and the remaining equation is
\[ \Delta_g \varphi = 0 \ \text{on the interior of } M, \]
which is the generalization of Laplace's equation. Satisfying Laplace's equation is a sufficient condition for a map to be a critical point of the energy functional. There is an abundance of literature concerning harmonic maps and the analysis thereof [2, 6, 14, 20].
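In the simplest flat special case $S = \mathbb{R}$ and $M \subseteq \mathbb{R}^2$ with the Euclidean metric, the tension field reduces to the ordinary Laplacian, and harmonicity can be sanity-checked numerically. The following Haskell sketch uses the standard five-point stencil; the test functions and step size are arbitrary choices for illustration.

-- Five-point discrete Laplacian of f at (x, y) with step h.
laplacian :: (Double -> Double -> Double) -> Double -> Double -> Double -> Double
laplacian f h x y =
  (f (x + h) y + f (x - h) y + f x (y + h) + f x (y - h) - 4 * f x y) / (h * h)

main :: IO ()
main = do
  print (laplacian (\x y -> x * x - y * y) 1e-3 0.3 0.7)   -- harmonic: ~ 0
  print (laplacian (\x y -> x * x + y * y) 1e-3 0.3 0.7)   -- not harmonic: ~ 4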
${}^5$ It should be noted that while $\mathrm{Id}_{TM}$ is being written as the local expression $e_i \otimes_M e^i$, no inherently local property is being used; this tensor decomposition is only used so that the product rule can be used in the following calculations in a clear way.
Example 12.8 (The geodesic equation). A fundamental problem in differential geometry is determining length-minimizing curves between given points. If $M$ is a bounded, real interval, and $t$ denotes the standard real coordinate, then the length functional on curves $\varphi\colon M \to S$ is $\mathrm{L}(\varphi) := \int_M |\nabla\varphi|_h \, dt$. A topological metric $d\colon S \times S \to \mathbb{R}$ on $S$ can be defined as
\[ d(p, q) := \inf \{ \mathrm{L}(\varphi) \mid \varphi \ \text{joins $p$ to $q$} \}. \]
It can be shown that the length functional $\mathrm{L}(\varphi) := \int_M |\nabla\varphi|_h \, dt$ and the energy functional $\mathcal{E}(\varphi) := \int_M \tfrac{1}{2} |\nabla\varphi|^2_h \, dt$ have identical minimizers. Note that, $M$ being one-dimensional, the $g$-trace defining $\Delta_g \varphi$ (i.e. $\mathrm{tr}_g \nabla^2 \varphi$) has a single term. The Euler-Lagrange equation, on the interior of $M$, is
\[ 0 = \Delta_g \varphi = \mathrm{tr}_g \nabla^2 \varphi = \nabla^2 \varphi : \left( \tfrac{d}{dt}, \tfrac{d}{dt} \right) = \nabla_{\frac{d}{dt}} \nabla\varphi \cdot \tfrac{d}{dt} = \nabla_{\frac{d}{dt}} \left( \nabla\varphi \cdot \tfrac{d}{dt} \right) - \nabla\varphi \cdot \nabla_{\frac{d}{dt}} \tfrac{d}{dt}. \]
But $\nabla\varphi \cdot \tfrac{d}{dt}$ is the velocity $\dot\varphi$ and $\nabla_{\frac{d}{dt}} \tfrac{d}{dt} = 0$, giving the geodesic equation
\[ \nabla^{\varphi^* TS}_{\frac{d}{dt}} \dot\varphi = 0 \ \text{on the interior of } M. \]
This is the covariant way to state that the acceleration of $\varphi$ is identically zero. The geodesic equation is commonly notated as $0 = \nabla_{\dot\varphi} \dot\varphi$, and the sense in which this is correct is via the pullback covariant derivative $\nabla^{\varphi^* TS}$ (see (8.8)). While formulated using fixed boundary conditions ($\varphi$ has $p$ and $q$ as its endpoints), the geodesic equation is a second order ODE for which initial tangent vector conditions are sufficient to uniquely determine a solution.
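As a hedged numerical illustration of the geodesic equation, consider the unit sphere $S^2 \subset \mathbb{R}^3$, for which the geodesic equation takes the extrinsic form $\gamma'' = -|\gamma'|^2 \gamma$ (this form is an input assumption of the sketch, not derived above). Explicit Euler integration from a point and a unit tangent vector approximately traces a great circle, staying on the sphere at unit speed.

type V3 = (Double, Double, Double)

vadd :: V3 -> V3 -> V3
vadd (a, b, c) (x, y, z) = (a + x, b + y, c + z)

vscale :: Double -> V3 -> V3
vscale s (x, y, z) = (s * x, s * y, s * z)

norm2 :: V3 -> Double
norm2 (x, y, z) = x * x + y * y + z * z

-- One explicit Euler step of gamma'' = -|gamma'|^2 gamma with step size h.
step :: Double -> (V3, V3) -> (V3, V3)
step h (p, v) = (vadd p (vscale h v), vadd v (vscale (negate h * norm2 v) p))

main :: IO ()
main = do
  let (p, v) = iterate (step 1e-4) ((1, 0, 0), (0, 1, 0)) !! 10000   -- integrate to t ~ 1
  print (norm2 p, norm2 v)   -- both remain ~ 1 (on the sphere, unit speed)
  print p                    -- approximately (cos 1, sin 1, 0)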
13 Second Variation
A further consideration after finding critical points of the energy functional $\mathcal{L}$ is determining which critical points are extrema. This will involve calculating the second derivative of $\mathcal{L}$. Let $C := C^\infty(M, S)$, regarded formally as a manifold with $T_\varphi C = \Gamma(\varphi^* TS)$, so that $\nabla^2 \mathcal{L} := \nabla^{T^* C} \nabla^{C \to \mathbb{R}} \mathcal{L}$, where the covariant derivative $\nabla^{T^* C}$ is induced by $\nabla^{TS}$ [5, Theorem 5.4].

For the remainder of this section, let $I, J \subseteq \mathbb{R}$ be neighborhoods of zero, let $i$ and $j$ be their respective standard coordinates, and extend the existing $\delta$-style derivative-at-a-point notation by defining $\delta_i := \left. \frac{\partial}{\partial i} \right|_{i=j=0}$, $\delta_j := \left. \frac{\partial}{\partial j} \right|_{i=j=0}$, and the evaluation map $z\colon M \to M \times I \times J$, $m \mapsto (m, 0, 0)$. Then $\delta_i \tilde\varphi = z^* \tilde\varphi_{,i}$ and $\delta_j \tilde\varphi = z^* \tilde\varphi_{,j}$; these will be used as in the calculation of the first variation.
Theorem 13.1 (Second variation of $\mathcal{L}$). Let $\mathcal{L}$, $L$, $\sigma$, $\mu$, $v$ and $\tilde\varphi$ all be defined as above. If $\varphi \in C^\infty(M, S)$ is a critical point of $\mathcal{L}$ and $A, B \in T_\varphi C = \Gamma(\varphi^* TS)$, then
2
L() :
T
C
(AB)
=
_
M
A
,M
L
,
TS
B +A
,M
L
,v
TS
M
T
TS
B
+
TS
A
S
M
TM
,M
L
,v
TS
B +
TS
A
S
M
TM
,M
L
,vv
TS
M
T
TS
B
A
S
_
,M
L
,v
TS
M
T
M
_
R
TS
TS
,M
__
TS
BdV
g
.
This is often called the second variation of $\mathcal{L}$. Here, $R^{TS} \in \Gamma(TS \otimes_S T^* S \otimes_S T^* S \otimes_S T^* S)$ denotes the Riemannian curvature endomorphism tensor for the Levi-Civita connection on $TS$.
Proof. Let $\tilde\varphi\colon M \times I \times J \to S$ be a two-parameter variation such that $\delta_i \tilde\varphi = A$ and $\delta_j \tilde\varphi = B$ (e.g. $\tilde\varphi(m, i, j) := \exp(iA(m) + jB(m))$). The variation $\tilde\varphi$ can be naturally identified with a variation $I \times J \to C$, $(i, j) \mapsto (m \mapsto \tilde\varphi(m, i, j))$, which is more conducive to the use of $C$ as a manifold. The tensor products in the generally infinite-dimensional $TC$ are taken formally. Let $z := (0, 0) \in I \times J$.
By (10.4), taking the algebra formally in the case of innite-rank vector bundles,
2
_
L
_
=
2
L :
TC
_
IJ
_
+
TC
,
so
_
2
L
C
_
:
T
C
(AB)
=
_
2
L
C
_
:
T
C
_
i
j
_
+ (L
C
)
T
TC
j
i
(since L
C
= 0)
=
_
2
L
C
IJ
z
_
:
z
TC
_
i
j
_
+
_
L
C
IJ
z
_
TC
TC
j
i
=
2
_
L
C
_
:
TITJ
(
i
IJ
j
) (by above)
=
j
i
_
L
C
_
=
_
M
i
(L
,M
) dV
g
=
_
M
2
(L
,M
) :
TMTITJ
(
i
MIJ
j
) dV
g
=
_
M
A
,M
L
,
TS
B +A
,M
L
,v
TS
M
T
TS
B
+
TS
A
S
M
TM
,M
L
,v
TS
B
+
TS
A
S
M
TM
,M
L
,vv
TS
M
T
TS
B
A
S
_
,M
L
,v
TS
M
T
M
_
R
TS
TS
,M
__
TS
BdV
g
(by Calculation (1)).
Supporting calculations follow.
Calculation (1): Abbreviate
,M
L
,xy
by L
,xy
. By (10.4),
2
(L
,M
) :
TMTITJ
(
i
MIJ
j
)
=
__
,M
2
L :
,M
TE
(
,M
MIJ
,M
) +
,M
L
,M
TE
,M
_
z
_
:
z
(TMTITJ)
(
i
M
j
)
= z
,M
2
L :
z
,M
TE
(
i
,M
M
j
,M
) +z
,M
L
z
,M
TE
,M
TE
j
,M
(by Calculation (2))
= L
,
:
,M
S
TS
(
i
M
j
) +L
,v
,M
S
TS
M
,M
E
_
i
M
TS
_
+L
,v
,M
E
M
,M
S
TS
_
TS
i
M
j
_
+L
,vv
:
,M
E
_
TS
i
M
TS
_
+L
,
,M
S
TS
TS
j
i
+L
,v
,M
TS
TS
j
i
+L
,v
,M
E
_
(Id
TS
i
)
TS
M
S
_
R
TS
TS
j
TS
,M
_
(by Calculation (3)).
Note that
TS
j
i
(
,M
S
TS
TS
j
i
+L
,v
,M
TS
TS
j
i
dV
g
= 0.
Thus
_
M
2
(L
,M
) :
TMTITJ
(
i
MIJ
j
) dV
g
=
_
M
L
,
:
,M
S
TS
(
i
M
j
) +L
,v
,M
S
TS
M
,M
E
_
i
M
TS
_
+L
,v
,M
E
M
,M
S
TS
_
TS
i
M
j
_
+L
,vv
:
,M
E
_
TS
i
M
TS
_
+
i
S
_
L
,v
,M
E
__
R
TS
TS
j
TS
,M
_
_
dV
g
=
_
M
A
S
L
,
TS
B +A
S
L
,v
TS
M
T
TS
B
+
TS
A
S
M
TM
L
,v
TS
B +
TS
A
S
M
TM
L
,vv
TS
M
T
TS
B
A
S
_
L
,v
TS
M
T
M
_
R
TS
TS
,M
__
TS
BdV
g
(by antisymmetry of curvature tensor).
Calculation (2):
z
,M
:
z
(TMTITJ)
(
i
M
j
)
= z
,M
TE
MIJ
(T
MT
IT
J)
,M
:
z
(TMTITJ)
z
(
i
MIJ
j
)
= z
,M
TE
MIJ
(T
MT
IT
J)
,M
:
TMTITJ
(
i
MIJ
j
)
_
= z
,M
TE
MIJ
(T
MT
IT
J)
j
,M
TMTITJ
i
_
= z
,M
TE
j
(
,M
TMTITJ
i
) (since
TMTITJ
j
i
= 0)
=
,M
TE
j
,M
.
Calculation (3): As calculated in the proof of (12.1),
,M
,M
TE
i
,M
=
i
(
TS) ,
,M
,M
TE
i
,M
= 0 (TM) ,
,M
v
,M
TE
i
,M
=
TS
i
(
TS
M
T
M) .
Furthermore, letting P := pr
MIJ
M
for brevity and noting that P z = Id
M
,
,M
,M
TE
,M
TE
j
,M
= z
,M
S
TS
j
_
,M
,M
TE
i
,M
_
(since = 0)
= z
(
S
,M
)
TS
j
i
(using calculation from (12.1))
=
TS
j
i
(z
TS)
= (
TS) ,
,M
,M
TE
,M
TE
j
,M
= z
,M
M
TM
j
_
,M
,M
TE
i
,M
_
(since = 0)
= z
(
M
,M
)
TM
j
0 (using calculation from (12.1))
= 0
_
z
(
M
,M
)
TM
_
= (z
TM)
= (TM) ,
,M
v
,M
TE
,M
TE
j
,M
= z
,M
E
j
_
,M
v
,M
TE
i
,M
_
(since v = 0)
= z
(
,M
)
E
j
(
i
)
,M
(using calculation from (12.1)).
Note that
,M
v
_
,M
E
M
,M
T
E
_
=
_
(
M
Id
M
)
E
M
,M
T
E
_
,
and therefore
,M
v
,M
TE
,M
TE
j
,M
_
(
M
Id
M
)
E
_
= (
TS
M
T
M) ,
so it suces to examine its natural pairing with TM elements. Let X (TM), noting that X = Id
M
X =
z
X and that P
X = TP (X 0
TI
0
TJ
) (P
TM). Then
_
,M
v
,M
TE
,M
TE
j
,M
_
TM
X
= z
TS
MIJ
P
M
j
(
i
)
,M
z
TM
z
X
= z
TS
j
_
(
i
)
,M
P
TM
P
TMTITJ
(X 0
TI
0
TJ
)
_
z
_
(
i
)
,M
P
TM
j
(
P
TMTITJ
(X 0
TI
0
TJ
))
_
= z
TS
j
TS
X0
TI
0
TJ
i
(
i
)
,M
0
P
TM
_
= z
TS
X0
TI
0
TJ
TS
j
i
+
TS
[j,X0
TI
0
TJ
]
i
R
TS
(
j
, X 0
TI
0
TJ
)
i
_
= z
TS
X0
TI
0
TJ
TS
j
i
+
TS
0
i
_
z
_
(Id
TS
MIJ
i
)
TS
MIJ
S
R
TS
:
TMTITJ
(
j
MIJ
(X 0
TI
0
TJ
))
_
=
_
TS
TS
j
i
+ (Id
TS
i
)
TS
M
S
_
R
TS
TS
j
TS
,M
_
TM
X,
where the last equality follows from Calculations (4) and (5). Because X is pointwise-arbitrary in TM, this
shows that
,M
v
,M
TE
,M
TE
j
,M
=
_
TS
j
i
_
,M
+(Id
TS
i
)
TS
M
S
_
R
TS
TS
j
TS
,M
.
Calculation (4):
z
TS
X0
TI
0
TJ
TS
j
i
+
TS
0
i
_
= z
TS
j
i
_
,M
TM
z
X
=
_
TS
j
i
_
,M
TM
X (by (10.6))
=
TS
TS
j
i
TM
X (because
TS
j
i
(z
TS)
= (
TS)).
Calculation (5):
z
_
R
TS
:
TMTITJ
(
j
MIJ
(X 0
TI
0
TJ
))
_
= z
_
R
TS
:
TMTITJ
((X 0
TI
0
TJ
)
MIJ
j
)
_
(antisymmetry of R
TS
)
= z
R
TS
:
TS
(
MIJ
) :
TMTITJ
((X 0
TI
0
TJ
)
MIJ
j
)
_
(by (10.5))
= z
R
TS
:
TS
((
,M
P
TM
P
X)
MIJ
j
)
_
= z
__
R
TS
TS
j
TS
,M
P
TM
P
X
_
=
_
z
R
TS
TS
z
TS
z
,M
z
TM
z
X
=
_
R
TS
TS
j
TS
,M
TM
X.
Theorem 13.2 (Second variation of $\mathcal{L}$ (alternate form)). Let $\mathcal{L}$, $L$, $\sigma$, $\mu$, $v$ and $\tilde\varphi$ all be defined as above. If $\varphi \in C^\infty(M, S)$ and $A, B \in \Gamma(\varphi^* TS)$, then
2
L() :
T
C
(AB)
=
_
M
A
S
L
,
TS
B +A
S
L
,v
TS
M
T
TS
B
A
S
div
M
L
,v
TS
B A
S
L
,v
T
M
M
TS
_
TS
B
_
(1 2)
A
S
div
M
L
,vv
TS
M
T
TS
B
A
S
L
,vv
T
M
M
TS
M
T
M
_
TS
M
T
TS
B
_
(1 2 3)
A
S
_
L
,v
TS
M
T
M
_
R
TS
TS
,M
__
TS
BdV
g
+
_
M
(A
S
L
,v
TS
B)
T
M
+
_
A
S
L
,vv
TS
M
T
TS
B
_
M
dV
g
Proof. This result follows essentially from (13.1) via several instances of integration by parts to express the integrand(s) entirely in terms of $A$ and not its covariant derivatives. Abbreviate $\nabla\varphi_{,M}^* L_{,xy}$ by $L_{,xy}$. Then, integrating by parts allows the covariant derivatives of $A$ to be flipped across the natural pairings over $\varphi^* TS$.
_
M
TS
A
S
M
TM
L
,v
TS
BdV
g
=
_
M
tr
TM
_
_
TS
A
_
(1 2)
TS
L
,v
TS
B
_
dV
g
(TMtrace is taken separately)
=
_
M
tr
TM
_
TM
(A
S
L
,v
TS
B)
_
tr
TM
_
A
S
M
TM
M
TS
L
,v
TS
B
_
tr
TM
_
A
S
L
,v
TS
TS
B
_
dV
g
(reverse product rule)
=
_
M
A
S
div
M
L
,v
TS
B (denition of divergence)
A
S
L
,v
T
M
M
TS
_
TS
B
_
(1 2)
dV
g
+
_
M
(A
S
L
,v
TS
B)
T
M
dV
g
(divergence theorem).
Similarly,
_
M
TS
A
S
M
TM
L
,vv
TS
M
T
TS
BdV
g
=
_
M
tr
TM
_
_
TS
A
_
(1 2)
S
L
,vv
TS
M
T
TS
B
_
dV
g
=
_
M
tr
TM
_
TM
_
A
S
L
,vv
TS
M
T
TS
B
__
tr
TM
_
A
S
M
TM
M
S
M
TM
L
,vv
TS
M
T
TS
B
_
tr
TM
_
A
S
L
,vv
TS
M
T
TS
M
T
TS
B
_
dV
g
=
_
M
A
S
div
M
L
,vv
TS
M
T
TS
B
A
S
L
,vv
T
M
M
TS
M
T
M
_
TS
M
T
TS
B
_
(1 2 3)
dV
g
+
_
M
_
A
S
L
,vv
TS
M
T
TS
B
_
M
dV
g
.
Together with (13.1), this gives the desired result.
14 Questions and Future Work
This paper is a first pass at the development of a strongly-typed tensor calculus formalism. The details of its workings are by no means complete or fully polished, and its landscape is riddled with many tempting rabbit holes which would certainly produce useful results upon exploration, but which were out of the scope of a first exposition. Here is a list of some topics which the author considers worthwhile to pursue, and which will likely be the subject of his future work. Hopefully some of these topics will be inspiring to other mathematicians, and ideally will start a conversation on the subject.
There are refinements to be made to the type system used in this paper in order to achieve better error-checking and possibly more insight into the relevant objects. There are still implicit type identifications being done (mostly the canonical identifications between different pullback bundles).

The calculations done in this paper are not in an optimally polished and refined state. With experience, certain common operations can be identified, abstract computational rules generated for these operations, and the relevant calculations simplified.

The language of Category Theory can be used to address the implicit/explicit handling of natural type identifications, for example, the identification used in showing the contravariance of bundle pullback; $\psi^* \varphi^* F = (\varphi \circ \psi)^* F$.

The details of the particular implementation of the pullback bundle can be hidden behind the abstract interface provided by the maps $\pi^{\varphi^* F}_M$ and $\rho^{\varphi^* F}_F$. In the author's experience (which occurred too late to be incorporated into this paper), using this abstract interface cleans up calculations involving pullback bundles significantly.

The type system used for any particular problem or calculation can be enriched or simplified to adjust to the level of detail appropriate for the situation. For example, if $\varphi \in C^\infty(\mathbb{R}, M)$, then $\nabla^{\mathbb{R} \to M} \varphi = \varphi' \otimes dt \in \Gamma(\varphi^* TM \otimes_{\mathbb{R}} T^* \mathbb{R})$, where $\varphi' \in \Gamma(\varphi^* TM)$ is given by $\nabla\varphi \odot \frac{d}{dt}$. This primed derivative has a simpler type than the total derivative, and would presumably lead to simpler calculations (e.g. in (12.6)). This primed derivative could also be used in the derivation of the first and second variations. While this would simplify the type system, it would diversify the notation and make the computational system less regularized. However, some situations may benefit overall from this.

The notion of strong typing comes from computer programming languages. The human-driven type-checking which is facilitated by the pedantically decorated notation in this paper can be done by computer by implementing the objects and operations of this tensor calculus formalism in a strongly typed language such as Haskell (a small sketch of this idea appears after this list). This would be a step toward automated calculation checking, and could be considered a step toward automated proof checking from the top down (as opposed to from the bottom up, using a system such as the Coq Proof Assistant).

Is there some sort of completeness result about the calculational tools and type system in this paper? In other words, is it possible to accomplish everything in a global, coordinate-free way using a certain set of tools, such as pullback bundles, covariant derivatives, chain rules, permutations, evaluation-by-pullback?

The alternate form of the second variation (see (13.2)) can be used to form a generalized Jacobi field equation for a particular energy functional. Analysis of this equation and its solutions may give insights analogous to the standard (geodesic-based) Jacobi field equation.
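The following is the small sketch referred to in the Haskell item above: a minimal, hypothetical illustration of how phantom types let the compiler perform the bookkeeping that the decorated notation performs for the human reader. The representation is a placeholder and none of the paper's operations are actually implemented.

{-# LANGUAGE EmptyDataDecls #-}

-- Sections and duals tagged by the bundle they belong to.
newtype Section bundle = Section [Double]
newtype Dual    bundle = Dual    [Double]

data TM      -- placeholder tag: the tangent bundle of M
data PhiTS   -- placeholder tag: the pullback bundle phi^* TS

-- Natural pairing over a fixed bundle; the shared phantom type is the point.
pair :: Dual b -> Section b -> Double
pair (Dual omega) (Section v) = sum (zipWith (*) omega v)

wellTyped :: Double
wellTyped = pair (Dual [1, 2] :: Dual TM) (Section [3, 4] :: Section TM)

-- illTyped = pair (Dual [1, 2] :: Dual TM) (Section [3, 4] :: Section PhiTS)
--   would be rejected at compile time: Couldn't match type 'PhiTS' with 'TM'.

main :: IO ()
main = print wellTyped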
Acknowledgements
I would like to express my gratitude to the ARCS (Achievement Rewards for College Scientists) Foundation for their having awarded me a 2011-2012 ARCS Fellowship, and for their generous efforts to promote excellence in young scientists. I would like to thank my advisor Debra Lewis for trusting in my abilities and providing me with the freedom in which the creative endeavor that this paper required could flourish. I would like to thank David DeConde for the invaluable conversations at the Octagon in which imagination, creativity, and exploration were gladly fostered. Thanks to Chris Shelley for showing me how to create the tensor diagrams using TikZ. Finally, I would like to thank both Debra and David for their help in editing this paper.
References
[1] Luca Cardelli. Typeful programming. 1991 (revised 1993). Available online at ftp://gatekeeper.research.compaq.com/pub/DEC/SRC/research-reports/SRC-045.pdf.
[2] Conference Board of the Mathematical Sciences Regional Conference Series in Mathematics. Selected Topics in Harmonic Maps, number 50. American Mathematical Society, 1983.
[3] C.T.J. Dodson and M.S. Radivoiovici. Second-order tangent structures. International Journal of Theoretical Physics, 21(2):151-161, 1982.
[4] David G. Ebin and Jerrold E. Marsden. Groups of diffeomorphisms and the motion of an incompressible fluid. The Annals of Mathematics, Second Series, 92(1):102-163, 1970.
[5] Halldor I. Eliasson. Geometry of manifolds of maps. J. Differential Geometry, 1(2), 1967.
[6] Mariano Giaquinta and Stefan Hildebrandt. Calculus of Variations I. Springer-Verlag, 1996.
[7] Ivan Kolář, Peter W. Michor, and Jan Slovák. Natural Operations in Differential Geometry, volume 434. Springer Verlag, 1993. This is an online book which can be found at http://www.mat.univie.ac.at/~michor/listpubl.html.
[8] Jeffrey M. Lee. Manifolds and Differential Geometry, volume 107. American Mathematical Society, 2009.
[9] John M. Lee. Riemannian Manifolds: An Introduction to Curvature, volume 176. Springer Verlag, 1997.
[10] John M. Lee. Introduction to Smooth Manifolds, volume 218. Springer Verlag, 2006.
[11] Jerrold E. Marsden and Thomas J. R. Hughes. Mathematical Foundations of Elasticity. Prentice Hall, Inc., 1983.
[12] Peter W. Michor. Topics in Differential Geometry, volume 93. American Mathematical Society, 2008.
[13] George A. Miller. The magical number seven, plus or minus two: Some limits on our capacity for processing information. American Psychological Association, 101(2):343-352, 1955.
[14] Seiki Nishikawa. Variational Problems in Geometry, volume 205. American Mathematical Society, 2002.
[15] Richard S. Palais. Foundations of Global Non-Linear Analysis. W.A. Benjamin, Inc., 1968.
[16] David Parnas. On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12):1053-1058, 1972.
[17] Roger Penrose. The Road to Reality. Vintage Books, 2004.
[18] Eric S. Raymond. The Art of Unix Programming. Pearson Education, Inc., 2003. This is an online book which can be found at http://www.faqs.org/docs/artu/index.html.
[19] Wolfgang Walter. Ordinary Differential Equations, volume 182. Springer Verlag, 1998.
[20] Yuanlong Xin. Geometry of Harmonic Maps, volume 23. Birkhäuser, 1996.