
Introduction to Smooth Manifolds

Felix Zhou

January 3, 2024

Based on "An Introduction to Manifolds" by Loring Tu
Contents

1 Euclidean Spaces
  1.1 Smooth Functions on a Euclidean Space
    1.1.1 Smooth Functions
    1.1.2 Taylor's Theorem with Remainder
  1.2 Tangent Vectors in R^n as Derivations
    1.2.1 The Directional Derivative
    1.2.2 Germs of Functions
    1.2.3 Derivations at a Point
    1.2.4 Vector Fields
    1.2.5 Vector Fields as Derivations
  1.3 The Exterior Algebra of Multicovectors
    1.3.1 Dual Space
    1.3.2 Permutations
    1.3.3 Multilinear Functions
    1.3.4 The Permutation Action on Multilinear Functions
    1.3.5 The Symmetrizing and Alternating Operators
    1.3.6 The Tensor Product
    1.3.7 The Wedge Product
    1.3.8 Anticommutativity of the Wedge Product
    1.3.9 Associativity of the Wedge Product
    1.3.10 A Basis for k-Covectors
    1.3.11 Useful Results
  1.4 Differential Forms on R^n
    1.4.1 Differential 1-Forms and the Differential of a Function
    1.4.2 Differential k-Forms
    1.4.3 Differential Forms as Multilinear Functions on Vector Fields
    1.4.4 The Exterior Derivative
    1.4.5 Closed & Exact Forms
    1.4.6 Applications to Vector Calculus
    1.4.7 Convention on Subscripts and Superscripts
    1.4.8 Miscellaneous Results

2 Manifolds
  2.1 Manifolds
    2.1.1 Topological Manifolds
    2.1.2 Compatible Charts
    2.1.3 Smooth Manifolds
    2.1.4 Examples of Smooth Manifolds
  2.2 Smooth Maps on a Manifold
    2.2.1 Smooth Functions on a Manifold
    2.2.2 Smooth Maps between Manifolds
    2.2.3 Diffeomorphisms
    2.2.4 Smoothness in Terms of Components
    2.2.5 Examples of Smooth Maps
    2.2.6 Partial Derivatives
    2.2.7 Inverse Function Theorem
  2.3 Quotients
    2.3.1 The Quotient Topology
    2.3.2 Continuity of a Map on a Quotient
    2.3.3 Identification of a Subset to a Point
    2.3.4 A Necessary Condition for a Hausdorff Quotient
    2.3.5 Open Equivalence Relations
    2.3.6 The Real Projective Space
    2.3.7 The Standard Smooth Atlas on a Real Projective Space
    2.3.8 The Grassmannian Manifold

3 The Tangent Space
  3.1 The Tangent Space
    3.1.1 The Tangent Space at a Point
    3.1.2 The Differential of a Map
    3.1.3 The Chain Rule
    3.1.4 Bases of the Tangent Space at a Point
    3.1.5 A Local Expression for the Differential
    3.1.6 Curves in a Manifold
    3.1.7 Computing the Differential Using Curves
    3.1.8 Immersions and Submersions
    3.1.9 Rank, Critical & Regular Points
    3.1.10 Useful Results
  3.2 Submanifolds
    3.2.1 Submanifolds
    3.2.2 Level Sets of a Function
    3.2.3 The Regular Level Set Theorem
    3.2.4 Examples of Regular Manifolds
    3.2.5 Transversality
    3.2.6 Useful Results
  3.3 Categories and Functors
    3.3.1 Categories
    3.3.2 Functors
    3.3.3 The Dual and Multicovector Functors
  3.4 The Rank of a Smooth Map
    3.4.1 Constant Rank Theorem
    3.4.2 The Immersion & Submersion Theorems
    3.4.3 Images of Smooth Maps
    3.4.4 Smooth Maps into a Submanifold
    3.4.5 The Tangent Plane to a Surface in R^3
    3.4.6 The Differential of an Inclusion Map
    3.4.7 Useful Results
  3.5 The Tangent Bundle
    3.5.1 The Topology of the Tangent Bundle
    3.5.2 The Manifold Structure on the Tangent Bundle
    3.5.3 Vector Bundles
    3.5.4 Smooth Sections
    3.5.5 Smooth Frames
  3.6 Bump Functions and Partitions of Unity
    3.6.1 Smooth Bump Functions
    3.6.2 Partitions of Unity
    3.6.3 Existence of a Partition of Unity
  3.7 Vector Fields
    3.7.1 Smoothness of a Vector Field
    3.7.2 Integral Curves
    3.7.3 Local Flows
    3.7.4 The Lie Bracket
    3.7.5 The Pushforward of Vector Fields
    3.7.6 Related Vector Fields

4 Lie Groups and Lie Algebras
  4.1 Lie Groups
    4.1.1 Lie Groups & Examples
    4.1.2 Lie Subgroups
    4.1.3 The Matrix Exponential
    4.1.4 The Trace of a Matrix
    4.1.5 The Differential of det at the Identity
  4.2 Lie Algebras
    4.2.1 Tangent Space at the Identity of a Lie Group
    4.2.2 Left-Invariant Vector Fields on a Lie Group
    4.2.3 The Lie Algebra of a Lie Group
    4.2.4 The Lie Bracket on gl(n, R)
    4.2.5 The Pushforward of Left-Invariant Vector Fields
    4.2.6 The Differential as a Lie Algebra Homomorphism

5 Differential Forms
  5.1 Differential 1-Forms
    5.1.1 The Differential of a Function
    5.1.2 Local Expression for a Differential 1-Form
    5.1.3 The Cotangent Bundle
    5.1.4 Characterization of Smooth 1-Forms
    5.1.5 Pullback of 1-Forms
    5.1.6 Restriction of 1-Forms to Immersed Submanifolds
  5.2 Differential k-Forms
    5.2.1 Differential Forms
    5.2.2 Local Expression for a k-Form
    5.2.3 The Bundle Point of View
    5.2.4 Smooth k-Forms
    5.2.5 Pullback of k-Forms
    5.2.6 The Wedge Product
    5.2.7 Differential Forms on a Circle
    5.2.8 Invariant Forms on a Lie Group
  5.3 The Exterior Derivative
    5.3.1 Exterior Derivative on a Coordinate Chart
    5.3.2 Local Operators
    5.3.3 Existence of an Exterior Derivative on a Manifold
    5.3.4 Uniqueness of the Exterior Derivative
    5.3.5 Exterior Differentiation Under a Pullback
    5.3.6 Restriction of k-Forms to a Submanifold
    5.3.7 A Nowhere-Vanishing 1-Form on the Circle
  5.4 The Lie Derivative & Interior Multiplication
    5.4.1 Families of Vector Fields and Differential Forms
    5.4.2 The Lie Derivative of a Vector Field
    5.4.3 The Lie Derivative of a Differential Form
    5.4.4 Interior Multiplication
    5.4.5 Properties of the Lie Derivative
    5.4.6 Global Formulas for the Lie and Exterior Derivatives

6 Integration
  6.1 Orientations
    6.1.1 Orientations of a Vector Space
    6.1.2 Orientations & n-Covectors
    6.1.3 Orientations on a Manifold
    6.1.4 Orientations & Differential Forms
    6.1.5 Orientations & Atlases
  6.2 Manifolds with Boundary
    6.2.1 Smooth Invariance of Domain in R^n
    6.2.2 Manifolds with Boundary
    6.2.3 The Boundary of a Manifold with Boundary
    6.2.4 Tangent Vectors, Differential Forms, and Orientations
    6.2.5 Outward-Pointing Vector Fields
    6.2.6 Boundary Orientation
  6.3 Integration on Manifolds
    6.3.1 The Riemann Integral of a Function on R^n
    6.3.2 Integrability Conditions
    6.3.3 The Integral of an n-Form on R^n
    6.3.4 Integral of a Differential Form over a Manifold
    6.3.5 Integration over a Zero-Dimensional Manifold
    6.3.6 Stokes' Theorem
    6.3.7 Line Integrals & Green's Theorem

Chapter 1

Euclidean Spaces

Our goal in this chapter is to develop calculus on R^n in a way that is independent of coordinates. This allows us to transition to the setting of manifolds, where there is no global coordinate system.

1.1 Smooth Functions on a Euclidean Space


1.1.1 Smooth Functions

In keeping with the conventions of differential geometry, the indices on coordinates are superscripts and not subscripts.

Definition 1.1.1 (C^k)
Let U ⊆ R^n be open and k ≥ 0. A function f : U → R is said to be C^k at p ∈ U if its partial derivatives of all orders j ≤ k exist and are continuous at p.
A vector-valued function f : U → R^m is C^k at p ∈ U if all its component functions are C^k at p.
We say f : U → R is C^k on U if it is C^k at every point p ∈ U.

We treat the terms C ∞ and smooth as synonymous.

Definition 1.1.2 (Real-Analytic)
We say the function f is real-analytic at a point p if in some neighborhood of p it is
equal to its Taylor series at p:
f(x) = f(p) + \sum_{k \ge 1} \frac{1}{k!} \sum_{i_1, \dots, i_k} \frac{\partial^k f}{\partial x^{i_1} \cdots \partial x^{i_k}}(p)\,(x^{i_1} - p^{i_1}) \cdots (x^{i_k} - p^{i_k}).

Recall that a convergent power series can be differentiated term by term in its domain of
convergence. Hence a real-analytic function is necessarily C ∞ . However, the converse need
not hold.

Example 1.1.1
The function f : R → R given by

x \mapsto \begin{cases} e^{-1/x}, & x > 0 \\ 0, & x \le 0 \end{cases}

is C ∞ on R but f (k) (0) = 0 for all k. Hence it cannot be real-analytic about the point
x = 0.

1.1.2 Taylor’s Theorem with Remainder

Although a C ∞ function need not be equal to its Taylor series, there is a version of Taylor’s
x
theorem for C ∞ functions that is often good enough.
We say S ⊆ Rn is star-shaped with respect to a point p ∈ S if for every x ∈ S, the line
segment [p, x] lies in S.

Lemma 1.1.2
Let f be C ∞ on an open subset U ⊆ Rn that is star-shaped with respect to some
p ∈ U . There are functions g1 , . . . , gn ∈ C ∞ (U ) such that
f(x) = f(p) + \sum_{i=1}^{n} (x^i - p^i)\, g_i(x), \qquad g_i(p) = \frac{\partial f}{\partial x^i}(p).

Proof
Since U is star-shaped with respect to p, for any x ∈ U the line segment yt := p + t(x −
p), t ∈ [0, 1] lies in U . Thus f (yt ) is well-defined for all t ∈ [0, 1].

By the chain rule,
\frac{d}{dt} f(y_t) = \sum_i (x^i - p^i)\, \frac{\partial f}{\partial x^i}(y_t).
Integrating both sides with respect to t from 0 to 1,
f(x) - f(p) = \sum_i (x^i - p^i) \int_0^1 \frac{\partial f}{\partial x^i}(y_t)\, dt.
Taking
g_i(x) := \int_0^1 \frac{\partial f}{\partial x^i}(y_t)\, dt
suffices. Indeed, g_i ∈ C^∞(U) since f ∈ C^∞, and g_i(p) = ∂f/∂x^i(p).
Let n = 1, p = 0. The lemma above states that
f(x) = f(0) + x\, g_1(x)
for some g_1 ∈ C^∞. Applying the lemma repeatedly gives
f(x) = f(0) + x(g_1(0) + x g_2(x)) = \dots = f(0) + g_1(0)x + g_2(0)x^2 + \dots + g_{k-1}(0)x^{k-1} + g_k(x)x^k.
Differentiating the expression above k times and evaluating at 0 yields
g_k(0) = \frac{1}{k!} f^{(k)}(0).
Thus the above is a polynomial expansion of f(x) whose terms up to the last term agree with the Taylor series of f at 0.
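As a quick worked illustration (not taken from Tu's text), take n = 1, p = 0, and f(x) = e^x. The integral formula from the proof gives
g_1(x) = \int_0^1 f'(tx)\, dt = \int_0^1 e^{tx}\, dt = \frac{e^x - 1}{x} \quad (x \ne 0), \qquad g_1(0) = 1,
which is smooth, satisfies f(x) = f(0) + x\, g_1(x), and indeed g_1(0) = f'(0) = 1.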

1.2 Tangent Vectors in Rn as Derivations



A secant plane to a surface in R^n is a plane determined by n points of the surface. As the points approach a point p on the surface, if the corresponding secant planes approach a limiting position, then the plane that is the limiting position of the secant planes is called the tangent plane to the surface at p. A vector at p is tangent to the surface if it lies in the tangent plane at p.
The notion above presupposes that the surface is embedded in a Euclidean space, and so does not apply to surfaces such as the projective plane. Our goal is to find a characterization of tangent vectors that generalizes to manifolds.

1.2.1 The Directional Derivative

In order to distinguish between points and vectors (directions), we write a point p ∈ Rn as


p = (p1 , . . . , pn ) and a vector in the tangent space Tp (Rn ) as v = hv 1 , . . . , v n i.
The line through a point p with direction v in Rn has parametrization
c(t) = (pi + tv i )i .

If f ∈ C ∞ in a neighborhood of p and v ∈ Tp (Rn ), recall the directional derivative of f in
the direction v at p is defined as
D_v f = \lim_{t \to 0} \frac{f(c(t)) - f(p)}{t} = \frac{d}{dt}\bigg|_{t=0} f(c(t)).

By the chain rule,

D_v f = \sum_i \frac{dc^i}{dt}(0)\, \frac{\partial f}{\partial x^i}(p) = \sum_i v^i \frac{\partial f}{\partial x^i}(p).

The notation Dv f is understood to be an evaluation at p and is thus a number and not a


function. We explicitly write
D_v = \sum_i v^i \frac{\partial}{\partial x^i}\bigg|_p

for the map that sends a function f to the number D_v f. We often omit the subscript p for simplicity if the meaning is clear from context. The association v ↦ D_v offers a way to characterize tangent vectors as certain operators on functions. We study this in greater detail in the next subsections.
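For example (a simple computation to fix ideas), let f(x, y) = x^2 y on R^2, p = (1, 2), and v = ⟨3, 1⟩. Then
D_v f = 3\,\frac{\partial f}{\partial x}(p) + 1\,\frac{\partial f}{\partial y}(p) = 3\,(2xy)\big|_{(1,2)} + (x^2)\big|_{(1,2)} = 12 + 1 = 13.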
1.2.2 Germs of Functions

As long as two functions agree on some neighborhood of a point p, they will have the same
directional derivatives at p. This suggests that we introduce an equivalence relation on the
C ∞ functions defined in some neighborhood of p.

Definition 1.2.1 (Germ)


Consider the set of all pairs (f, U ) where U 3 p is a neighborhood of p and f : U → R
is a C ∞ function. We say that (f, U ) is equivalent to (g, V ) if there is an open set
W ⊆ U ∩ V containing p such that f = g when restricted to W .
The equivalence class of (f, U ) is called the germ of f at p.

We write Cp∞ (Rn ) or simply Cp∞ if there is no possibility of confusion for the set of all germs
of C ∞ functions on Rn at p.

Example 1.2.1
The functions f(x) = 1/(1 − x) with domain R \ {1} and g(x) = \sum_{k \ge 0} x^k with domain (−1, 1) have the same germ at any point p in the open interval (−1, 1).
Recall the following definition.

Definition 1.2.2 (Algebra)

An algebra over a field K is a vector space A over K equipped with a multiplication
map
µ:A×A→A
usually written µ(a, b) = a · b such that for all a, b, c ∈ A and r ∈ K,
(i) (a · b) · c = a · (b · c) [associativity]
(ii) (a + b) · c = a · c + b · c and a · (b + c) = a · b + a · c [distributivity]

(iii) r(a · b) = (ra) · b = a · (rb) [homogeneity]

Equivalently, an algebra over a field K is a ring A (with or without multiplicative identity)


that is also a vector space over K such that the ring multiplication satisfies the homogeneity
condition (iii). So an algebra has three operations: ring addition and multiplication as well
as the scalar multiplication of a vector space.
Recall a map L : V → W between vector spaces over a field K is called a K-linear map/op-
erator if for any r ∈ K and u, v ∈ V ,
(i) L(u + v) = L(u) + L(v)
x
(ii) L(rv) = rL(v)
In the case that V, W are algebras over a field K and L satisfies the additional property for
all u, v ∈ V , we say that L is an algebra homomorphism.
eli
(iii) L(uv) = L(u)L(v)
The addition and multiplication of functions induce corresponding operations on Cp∞ , making
it an algebra over R. Note that Cp∞ is an equivalence class over functions, so the previous
statement is not trivial.

1.2.3 Derivations at a Point

For each tangent vector v at a point p ∈ Rn , the directional derivative at p gives a map of
real vector spaces

Dv : Cp∞ → R.

By the chain rule, Dv is R-linear and satisfies the Leibniz rule
Dv (f g) = (Dv f )g(p) + f (p)Dv g
since the partial derivatives have these properties.

Definition 1.2.3 (Point-Derivation)


A linear map D : Cp∞ → R satisfying the Leibniz rule is said to be a derivation at p

or a point-derivation of Cp∞ .

Denote the set of all derivations at p by Dp (Rn ). This is a real vector space since the sum
of two derivations at p and a scalar multiple of a derivation at p are again derivations at p.
We know that directional derivatives at p are all derivations at p, so the map φ : Tp (Rn ) →
Dp (Rn ) from tangent vectors to derivations given by

v \mapsto D_v = \sum_i v^i \frac{\partial}{\partial x^i}\bigg|_p

is a linear map of vector spaces.

Lemma 1.2.2
If D is a point-derivation of Cp∞ , then D(c) = 0 for any constant function c.

Proof
By R-linearity, D(c) = cD(1) and it suffices to prove that D(1) = 0. By the Leibniz rule,

D(1) = D(1) · 1 + 1 · D(1) = 2D(1).


This is only possible if D(1) = 0.

Theorem 1.2.3
The linear map φ(v) := Dv is an isomorphism of vector spaces from the tangent space
Tp (Rn ) to the space of point-derivations of Dp (Rn ).

Proof
Injectivity. Suppose Dv ≡ 0 for some v ∈ Tp (Rn ). We claim that v = 0. To see this, apply
Dv to the coordinate function xj to see that
0 = Dv (xj ) = hv, δij i = v j .

Surjectivity. Let D be a derivation at p and (f, V ) a representative of a germ in Cp∞ .


Making V smaller if necessary, we may assume that V is an open ball and hence star-shaped at p. By Taylor's theorem with remainder, there are g_i ∈ C^∞(V) such that
f(x) = f(p) + \sum_i (x^i - p^i)\, g_i(x)
and g_i(p) = ∂f/∂x^i(p). Applying D to both sides and recalling D annihilates constant functions, we get by the Leibniz rule
D(f) = \sum_i \left[ D(x^i)\, g_i(p) + p^i D(g_i) \right] - \sum_i p^i D(g_i) = \sum_i D(x^i)\, g_i(p) = \sum_i D(x^i)\, \frac{\partial f}{\partial x^i}(p).
Thus D = D_v for v = ⟨D x^1, \dots, D x^n⟩.
This theorem shows that one may identify the tangent vectors at p with the derivations at p. Under the vector space isomorphism T_p(R^n) ≅ D_p(R^n), the standard basis corresponds to the coordinate partial derivatives. From now on, we make this identification and write a tangent vector v = \sum_i v^i e_i as
v = \sum_i v^i \frac{\partial}{\partial x^i}\bigg|_p.

The vector space Dp (Rn ) of derivations turns out to be more suitable for generalization to
manifolds.
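To illustrate the identification (a small example not in Tu's text), the tangent vector v = ⟨2, −3⟩ at p = (1, 1) ∈ R^2 acts on f(x, y) = xy as the derivation
D_v f = 2\,\frac{\partial f}{\partial x}(p) - 3\,\frac{\partial f}{\partial y}(p) = 2\cdot 1 - 3\cdot 1 = -1,
and conversely reading off the coefficients D(x^1), D(x^2) recovers v, exactly as in the surjectivity proof.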

1.2.4 Vector Fields



Definition 1.2.4 (Vector Field)


A vector field X on an open subset U ⊆ R^n is a function that assigns to each point p ∈ U a tangent vector X_p ∈ T_p(R^n) ≅ D_p(R^n).

Since T_p(R^n) has basis \{\partial/\partial x^i|_p\}, the vector X_p is a linear combination
X_p = \sum_i a^i(p)\, \frac{\partial}{\partial x^i}\bigg|_p

for some a^i(p) ∈ R. Omitting p, we may write X = \sum_i a^i\, \partial/\partial x^i, where the a^i's are now functions on U. We say that the vector field X is C^∞ on U if the coefficient functions a^i ∈ C^∞(U). One can identify vector fields on U with column vectors of functions on U:
X = \sum_i a^i \frac{\partial}{\partial x^i} \;\longleftrightarrow\; \begin{bmatrix} a^1 \\ \vdots \\ a^n \end{bmatrix}.

This is the same identification from tangent vectors to derivations, except we now allow the
point p to move in U .
The ring of C ∞ functions on an open set U is commonly denoted by C ∞ (U ) or F(U ). The
multiplication of vector fields by functions on U is defined pointwise:

(f X)p := f (p)Xp .

If X = \sum_i a^i\, \partial/\partial x^i is a C^∞ vector field and f ∈ C^∞(U), then
fX = \sum_i (f a^i)\, \partial/\partial x^i

is a C ∞ vector field on U . Thus the set of all C ∞ vector fields on U , denoted X(U ), is not
only a vector space over R, but also a module over the ring C ∞ (U ).

Definition 1.2.5 (Module)


x
Let R be a commutative ring with identity. A (left) R-module is an abelian group A
with a scalar multiplication map µ : R × A → A, usually written µ(r, a) = ra, such
that for all r, s ∈ R and a, b ∈ A,
(i) (rs)a = r(sa) [associativity]
(ii) 1a = a if 1 ∈ R is the [identity]
(iii) (r + s)a = ra + sa, r(a + b) = ra + rb [distributivity]

If R is a field, then an R-module is precisely a vector space over R. Thus modules generalize
a vector space by allowing scalars over a ring rather than a field.
©F

Definition 1.2.6 (R-Module Homomorphism)


Let A, A0 be R-modules. An R-module homomorphism from A to A0 is a map f :
A → A0 that preserves both addition and scalar multiplication: for all a, b ∈ A and
r ∈ R,
(i) f (a + b) = f (a) + f (b),
(ii) f (ra) = rf (a).

1.2.5 Vector Fields as Derivations

If X is a C ∞ vector field on an open subset U of Rn and f ∈ C ∞ (U ), we can define a new


function Xf on U by
(Xf )(p) := Xp f
for any p ∈ U. Writing X = \sum_i a^i\, \partial/\partial x^i, we get that
(Xf)(p) = \sum_i a^i(p)\, \frac{\partial f}{\partial x^i}(p), \qquad Xf = \sum_i a^i \frac{\partial f}{\partial x^i}.

This shows that Xf ∈ C ∞ (U ). So a C ∞ vector field X gives rise to an R-linear map

C ∞ (U ) → C ∞ (U )
f 7→ Xf.

Proposition 1.2.4 (Leibniz Rule for a Vector Field)


If X is a C ∞ vector field and f, g ∈ C ∞ (U ) for some open subset U ⊆ Rn , then X(f g)
satisfies the product (Leibniz) rule

X(f g) = (Xf )g + f Xg.


x
Proof
At each point p ∈ U , the vector Xp satisfies the Leibniz rule by definition of a derivation:
Xp (f g) = (Xp f )g(p) + f (p)(Xp g).
We now define a derivation as opposed to a point-derivation.

Definition 1.2.7 (Derivation)


If A is an algebra over a field K, a derivation of A is a K-linear map D : A → A such
that

D(ab) = (Da)b + aDb


for all a, b ∈ A.

Note that a derivation at p is not a derivation of the algebra Cp∞ . A derivation at p is a map
from Cp∞ → R, while a derivation is a map from Cp∞ → Cp∞ .
The set of all derivations is closed under addition and scalar multiplication and forms a
vector space denoted by Der(A). From the discussion above, a C ∞ vector field on an open

set U gives rise to a derivation of the algebra C ∞ (U ). We therefore have a map

ϕ : X(U ) → Der(C ∞ (U ))
X 7→ (f 7→ Xf ).

Just as tangent vectors at a point p can be identified with point-derivations of Cp∞ , the vector
fields on an open set U can be identified with the derivations of the algebra C^∞(U), i.e., the map ϕ is an isomorphism of vector spaces. The injectivity property is not too hard to show
but the surjectivity is non-trivial.
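As a concrete example of a vector field acting as a derivation, take X = x ∂/∂y − y ∂/∂x on R^2 and f = x^2 + y^2. Then
Xf = x\,\frac{\partial f}{\partial y} - y\,\frac{\partial f}{\partial x} = x(2y) - y(2x) = 0,
reflecting the fact that f is constant along the integral curves (circles) of X.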

1.3 The Exterior Algebra of Multicovectors


Our goal in this section is to generalize parts of vector calculus from R^3 to R^n, such as the cross product. A key insight of Grassmann, who introduced the multivector, is to work in the dual space of linear functionals. This provides more flexibility than the viewpoint of tangent vectors.

1.3.1 Dual Space

If V, W are real vector spaces, we denote by Hom(V, W ) the vector space of all linear maps
f : V → W . Recall the dual space V ∨ of V is the vector space of all real-valued linear
functionals on V
V ∨ = Hom(V, R).
The elements of V ∨ are known as covectors or 1-covectors on V .
In the rest of this section, V is some finite-dimensional vector space. If {ei : i ∈ [n]} is
some basis for V , then every v ∈ V is a unique linear combination of the basis vectors.
Let αi : V → R denote the i-th coordinate functional that picks out the i-th coordinate,
αi (v) = v i .

Proposition 1.3.1
The functions {αi : i ∈ [n]} form a basis for V ∨ .

Proof
Any f ∈ V ∨ can be expressed as
f(v) = \sum_i f(e_i)\, \alpha^i(v).

Linear independence can be checked using the basis vectors ei ’s.

We say the αi ’s form the dual basis of the ei ’s.

Corollary 1.3.1.1
dim V ∨ = dim V for any finite-dimensional vector space V .
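For instance, on V = R^2 with standard basis e_1, e_2, the dual basis consists of the coordinate functionals α^1, α^2, and any covector decomposes against it; e.g. the functional f(v) = 3v^1 − 5v^2 satisfies
f = f(e_1)\,\alpha^1 + f(e_2)\,\alpha^2 = 3\alpha^1 - 5\alpha^2.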

1.3.2 Permutations

Fix a positive integer k. A permutation of [k] is a bijection σ : A → A. An r-cycle is a
permutation that is cyclic on some r elements while fixing the others. A transposition is a
2-cycle. Two cycles (a1 . . . ar ) and (b1 . . . bs ) are said to be disjoint if the sets {ai } and {bj }
have empty intersection. The product τ σ of two permutations τ, σ of A is the composition
τ ◦ σ. We write Sk to denote the set of all permutations on [k].

Recall from elementary group theory that any permutation is the product of disjoint cycles.
Moreover, the sign of a permutation, denoted sgn(σ), takes on value ±1 depending on
whether the permutation is the product of even or odd number of transpositions. This
function is well-defined and satisfies

sgn(στ ) = sgn(σ) sgn(τ ).

An inversion in a permutation σ is an ordered pair (σ(i), σ(j)) such that i < j but σ(i) >
σ(j). A second way to compute the sign of a permutation is to count the number of inversions.
Proposition 1.3.2
A permutation is even if and only if it has an even number of inversions.
Proof
The proof is algorithmic and is essentially bubble sort.
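For example, the 3-cycle σ = (1 2 3) ∈ S_3 has one-line notation (σ(1), σ(2), σ(3)) = (2, 3, 1), whose inversions are the pairs (2, 1) and (3, 1), two in total. Indeed
\sigma = (1\,2\,3) = (1\,3)(1\,2),
a product of two transpositions, so sgn(σ) = +1, matching the even number of inversions.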

1.3.3 Multilinear Functions



A function f : V k → R is k-linear if it is linear in each of its k arguments. It is customary to


write bilinear and trilinear instead of 2-linear and 3-linear. A k-linear function on V is also
called a k-tensor on V . We denote the vector space of all k-tensors on V by Lk (V ). If f is
a k-tensor on V , we say that the degree of f is k.

Example 1.3.3
The dot product on Rn is bilinear.

Example 1.3.4
If we view the determinant as a function of the n column vectors of a matrix, then it is
n-linear in Rn .

Definition 1.3.1 (Symmetric k-Linear Function)


A k-linear function f : V k → R is symmetric if

ou
f (vσ(1) , . . . , vσ(k) ) = f (v1 , . . . , vk )

for all permutations σ ∈ Sk .

Definition 1.3.2 (Alternating k-Linear Function)


A k-linear function f : V k → R is alternating if

Zh
f (vσ(1) , . . . , vσ(k) ) = (sgn σ)f (v1 , . . . , vk )

for all permutations σ ∈ Sk .

Example 1.3.5
(i) The dot product on Rn is symmetric
(ii) The determinant on Rn is alternating
(iii) The cross product v × w on R3 is alternating
Example 1.3.6
For any two linear functionals f, g ∈ V ∨ , the function

(f ∧ g)(u, v) := f (u)g(v) − f (v)g(u)


is alternating. This is a special case of the wedge product which we will see soon.
We are especially interested in the space Ak (V ) of all alternating k-linear functions on a
vector space V for k > 0. These are also known as alternating k-tensors, k-covectors, or
multicovectors of degree k on V . For k = 0, we define a 0-covector to be a constant so that
A0 (V ) = R by convention. A 1-covector is simply a covector.

1.3.4 The Permutation Action on Multilinear Functions

If f is a k-linear function on some vector space V and σ is a permutation in Sk , we define a


new k-linear function σf by

(σf )(v1 , . . . , vk ) = f (vσ(1) , . . . , vσ(k) ).

Thus f is symmetric if and only if σf = f for all σ ∈ Sk and f is alternating if and only if
σf = (sgn σ)f for all σ ∈ Sk .

Lemma 1.3.7
If σ, τ ∈ Sk and f ∈ Lk (V ), then τ (σf ) = (τ σ)f .

Thus the map Sk × Lk → Lk described above is a left action.

Definition 1.3.3 (Left Action)
If G is a group and X an arbitrary set, a map G × X → X written as

(σ, x) 7→ σ · x

is a left action of G on X if

(i) e · x = x for every x ∈ X, where e is the identity element in G
(ii) τ · (σ · x) = (τ σ) · x for all τ, σ ∈ G and x ∈ X.

A right action can be similarly defined.


Recall too the following definition.

Definition 1.3.4 (Orbit)


The orbit of an element x ∈ X with respect to a group action is defined to be the set
x
Gx := {σ · x ∈ X : σ ∈ G}.

Note that each permutation Sk itself acts as a linear operator Lk (V ) → Lk (V ), since σf is


R-linear in f .

1.3.5 The Symmetrizing and Alternating Operators

Given any k-linear function f on a vector space V , there is a standard trick to make a
symmetric or alternating k-linear function from f .

(Sf)(v_1, \dots, v_k) = \sum_{\sigma \in S_k} f(v_{\sigma(1)}, \dots, v_{\sigma(k)}) =: \sum_{\sigma \in S_k} \sigma f, \qquad (Af)(v_1, \dots, v_k) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f.

Remark 1.3.8 This trick turns up in different places where taking an average over permutations can "simplify" the problem. For example, we can reduce some instances of semidefinite programs from algebraic graph theory to linear programs by taking an average over permutations from the automorphism group.

ou
Proposition 1.3.9
If f ∈ Lk (V ),
(i) Sf ∈ Lk (V ) is symmetric
(ii) Af ∈ Lk (V ) is alternating

Zh
Proof
We omit the proof of (i) since it follows the same flow as (ii). For τ ∈ Sk ,
\tau(Af) := \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \tau(\sigma f) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, (\tau\sigma) f = (\operatorname{sgn}\tau)^2 \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, (\tau\sigma) f = (\operatorname{sgn}\tau) \sum_{\sigma \in S_k} (\operatorname{sgn}(\tau\sigma))\, (\tau\sigma) f = (\operatorname{sgn}\tau)\, Af.
eli
Lemma 1.3.10
If f ∈ Lk (V ) is alternating, then

Af = (k!)f.
©F

Proof
By computation,
Af = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \sigma f = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)^2 f = (k!)f.
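For example (k = 2), for any bilinear f ∈ L^2(V) the two operators act by
(Sf)(v_1, v_2) = f(v_1, v_2) + f(v_2, v_1), \qquad (Af)(v_1, v_2) = f(v_1, v_2) - f(v_2, v_1),
and if f is already alternating then Af = 2f = (2!)f, as in the lemma.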

1.3.6 The Tensor Product

Let f ∈ Lk (V ) and g ∈ L` (V ). Their tensor product is the (k + `)-linear function f ⊗ g


defined by
(f ⊗ g)(v_1, \dots, v_{k+\ell}) := f(v_1, \dots, v_k)\, g(v_{k+1}, \dots, v_{k+\ell}).

Example 1.3.11 (Bilinear Maps)

Let {ei : i ∈ [n]} be a basis for a vector space V {αi } the dual basis for V ∨ , and
h, i : V × V → R a bilinear map on V .
Set g_{ij} := ⟨e_i, e_j⟩ ∈ R. If v = \sum_i v^i e_i and w = \sum_i w^i e_i, we can express ⟨ , ⟩ in terms of the tensor product:
\langle v, w \rangle = \sum_{i,j} v^i w^j \langle e_i, e_j \rangle = \sum_{i,j} \alpha^i(v)\, \alpha^j(w)\, g_{ij} = \sum_{i,j} g_{ij}\, (\alpha^i \otimes \alpha^j)(v, w).

Proposition 1.3.12
The tensor product of multi-linear functions is associative since multiplication is asso-
ciative in R.
x
1.3.7 The Wedge Product

If two multilinear functions f, g ∈ Lk (V ) are alternating, we would like to have a product


that is alternating as well. This motivates the definition of the wedge product, also called
the exterior product: for f ∈ Ak (V ), g ∈ A` (V ),
(f \wedge g)(v_1, \dots, v_{k+\ell}) := \frac{1}{k!\,\ell!} A(f \otimes g)(v_1, \dots, v_{k+\ell}) = \frac{1}{k!\,\ell!} \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)})\, g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+\ell)}).

Remark 1.3.13 This construction is also alternating for f ∈ Lk (V ), g ∈ A` (V ). However,


the terms in the denominator only make sense for alternating functions which we see next.
When k = 0, the element f ∈ A0 (V ) is just some constant c. Then the wedge product c ∧ g
is simply scalar multiplication
c \wedge g = \frac{1}{\ell!} \sum_{\sigma \in S_\ell} (\operatorname{sgn}\sigma)\, c\, g(v_{\sigma(1)}, \dots, v_{\sigma(\ell)}) = cg.

The coefficient 1/k!`! compensates for repetitions in the sum. For every permutation σ ∈ Sk+` ,
there are k! permutations τ ∈ Sk that permute the first k arguments and leave the arguments
of g alone. For every such τ ∈ Sk , the composition στ ∈ Sk+` contributes the same term to
the sum, since f is alternating. Thus we divide by k! to get rid of the k! repeating terms in
the sum and similarly for `!.
Another way to avoid redundancies in the definition of f ∧ g is to stipulate that in the sum,
σ(1), . . . , σ(k) is in ascending order and also σ(k + 1), . . . , σ(k + `).

ou
Definition 1.3.5 (Shuffle)
A permutation σ ∈ Sk+` is a (k, `)-shuffle if

σ(1) < · · · < σ(k)

and

Zh
σ(k + 1) < · · · < σ(k + `).

We can alternatively define the wedge product by summing over all (k, ℓ)-shuffles. Thus this is a sum over
\frac{(k+\ell)!}{k!\,\ell!} = \binom{k+\ell}{k}
terms rather than (k + ℓ)! terms.

Example 1.3.14
The wedge product of two covectors f, g ∈ A1 (V ) is given by

(f ∧ g)(v, w) = f (v)g(w) − f (w)g(v).
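Concretely, on R^2 with dual basis α^1, α^2 and u = (1, 2), v = (3, 4),
(\alpha^1 \wedge \alpha^2)(u, v) = \alpha^1(u)\,\alpha^2(v) - \alpha^1(v)\,\alpha^2(u) = 1\cdot 4 - 3\cdot 2 = -2,
which is the determinant of the matrix with columns u and v; this foreshadows Proposition 1.3.18 below.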


1.3.8 Anticommutativity of the Wedge Product
eli

By the definition of the wedge product, f ∧g is bilinear in f and g since it is a sum of bilinear
functions.

Proposition 1.3.15
The wedge product is anticommutative: if f ∈ Ak (V ) and g ∈ A` (V ),
©F

f ∧ g = (−1)k` g ∧ f.

Proof
Define τ ∈ S_{k+\ell} by
\tau(i) := \begin{cases} i + k, & i \le \ell \\ i - \ell \equiv i + k - (k+\ell), & i > \ell \end{cases}

Thus τ is the k-th product of the k + ` cycle and

σ(1) = στ (` + 1)
...
σ(k) = στ (` + k)
σ(k + 1) = στ (1)

ou
...
σ(k + `) = στ (`).

For any v1 , . . . , vk+` ∈ V ,


A(f \otimes g)(v_1, \dots, v_{k+\ell}) = \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\, f(v_{\sigma(1)}, \dots, v_{\sigma(k)})\, g(v_{\sigma(k+1)}, \dots, v_{\sigma(k+\ell)})
= \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\, f(v_{\sigma\tau(\ell+1)}, \dots, v_{\sigma\tau(\ell+k)})\, g(v_{\sigma\tau(1)}, \dots, v_{\sigma\tau(\ell)})
= (\operatorname{sgn}\tau) \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}(\sigma\tau))\, g(v_{\sigma\tau(1)}, \dots, v_{\sigma\tau(\ell)})\, f(v_{\sigma\tau(\ell+1)}, \dots, v_{\sigma\tau(\ell+k)})
= (\operatorname{sgn}\tau)\, A(g \otimes f)(v_1, \dots, v_{k+\ell}).

The statement can then be proven by dividing by k!`! and verifying that sgn τ = (−1)k` .
Indeed, the (k + `)-cycle has sign (−1)k+`−1 . Taking the composition of k of them yields

(−1)k`+k(k−1) = (−1)k` .
x
Corollary 1.3.15.1
If f is a multicovector of odd degree on V , then f ∧ f = 0.
eli
Proof
Let k be the degree of f . By anticommutativity,
f \wedge f = (-1)^{k^2} f \wedge f = -f \wedge f.

This is only possible if f ∧ f = 0.



1.3.9 Associativity of the Wedge Product

The wedge product of a k-covector f and an `-covector g on a vector space V is by definition


the (k + `)-covector
f \wedge g = \frac{1}{k!\,\ell!} A(f \otimes g).
Our goal in this section is to show that this product is associative.

Lemma 1.3.16
Suppose f ∈ Lk (V ) and g ∈ L` (V ).
(i) A(A(f ) ⊗ g) = k!A(f ⊗ g)
(ii) A(f ⊗ A(g)) = `!A(f ⊗ g)

ou
Proof
We prove (i) and omit the proof of (ii) as it follows the exact same train of thought. By definition,
A(A(f) \otimes g) = \sum_{\sigma \in S_{k+\ell}} (\operatorname{sgn}\sigma)\, \sigma\!\left( \left( \sum_{\tau \in S_k} (\operatorname{sgn}\tau)\, \tau f \right) \otimes g \right).

We can view τ ∈ Sk as a permutation in Sk+` . Hence
A(A(f) \otimes g) = \sum_{\sigma \in S_{k+\ell},\, \tau \in S_k} (\operatorname{sgn}\sigma)(\operatorname{sgn}\tau)\, (\sigma\tau)(f \otimes g).

For each µ ∈ S_{k+\ell} and τ ∈ S_k, σ = µτ^{-1} is the unique element such that µ = στ. Hence each µ ∈ S_{k+\ell} appears once in the double sum for each τ ∈ S_k, and hence k! times in total. Thus
A(A(f) \otimes g) = k! \sum_{\mu \in S_{k+\ell}} (\operatorname{sgn}\mu)\, \mu(f \otimes g) = k!\, A(f \otimes g).
x
Proposition 1.3.17 (Associativity of the Wedge Product)
Let V be a real vector space and f ∈ Ak (V ), g ∈ A` (V ), h ∈ Am (V ). Then
eli
(f ∧ g) ∧ h = f ∧ (g ∧ h).

Proof
By the definition of the wedge product,
(f \wedge g) \wedge h = \frac{1}{(k+\ell)!\,m!} A((f \wedge g) \otimes h) = \frac{1}{(k+\ell)!\,m!}\cdot\frac{1}{k!\,\ell!} A(A(f \otimes g) \otimes h) = \frac{(k+\ell)!}{(k+\ell)!\,m!\,k!\,\ell!} A((f \otimes g) \otimes h) = \frac{1}{k!\,\ell!\,m!} A((f \otimes g) \otimes h),
where the third equality uses the lemma.
We can similarly show that f ∧ (g ∧ h) is equal to the last term, concluding the proof by the associativity of the tensor product.
Since associativity holds, it is customary to omit the parenthesis in multiple wedge products.

Corollary 1.3.17.1
If f_i ∈ A^{d_i}(V), then
f_1 \wedge \dots \wedge f_r = \frac{1}{d_1! \cdots d_r!} A(f_1 \otimes \dots \otimes f_r).

Let [bij ] denote the matrix whose (i, j)-th entry is bij . We have the following proposition.

Proposition 1.3.18 (Wedge Product of 1-Covectors)


If α1 , . . . , αk ∈ L1 (V ) and v1 , . . . , vk ∈ V ,

(\alpha^1 \wedge \dots \wedge \alpha^k)(v_1, \dots, v_k) = \det\left[\alpha^i(v_j)\right].

Proof
We have

(\alpha^1 \wedge \dots \wedge \alpha^k)(v_1, \dots, v_k) = A(\alpha^1 \otimes \dots \otimes \alpha^k)(v_1, \dots, v_k) = \sum_{\sigma \in S_k} (\operatorname{sgn}\sigma)\, \alpha^1(v_{\sigma(1)}) \cdots \alpha^k(v_{\sigma(k)}) = \det\left[\alpha^i(v_j)\right].
Recall the notation

A = \bigoplus_{k=0}^{\infty} A^k
means that each nonzero element of A is uniquely a finite sum
a = a_{i_1} + \dots + a_{i_m}
where each 0 ≠ a_{i_j} ∈ A^{i_j}.

Definition 1.3.6 (Graded Algebra)


An algebra A over a field K is graded if it can be written as a direct sum A = \bigoplus_{k=0}^\infty A^k of vector spaces over K such that the multiplication map sends A^k × A^ℓ to A^{k+ℓ}.

Definition 1.3.7 (Graded Commutative)
A graded algebra A = \bigoplus_{k=0}^\infty A^k is said to be graded commutative or anticommutative if for all a ∈ A^k and b ∈ A^ℓ,
ab = (-1)^{k\ell}\, ba.

A homomorphism of graded algebras is an algebra homomorphism that preserves the degree.

ou
Example 1.3.19
The polynomial algebra A = R[x, y] is graded by degree; A^k consists of all homogeneous polynomials of total degree k in the variables x and y.
For a vector space V of finite dimension n, define
A^*(V) := \bigoplus_{k=0}^{\infty} A^k(V) = \bigoplus_{k=0}^{n} A^k(V).

Note the second equality is due to linear dependence. With the wedge product of multicovectors as multiplication, A^*(V) becomes an anticommutative graded algebra, known as the exterior algebra or the Grassmann algebra of multicovectors on the vector space V.

1.3.10 A Basis for k-Covectors


Let e1 , . . . , en be a basis for a real vector space V and α1 , . . . , αn the dual basis for V ∨ . We
introduce the multi-index notation

I = (i1 , . . . , ik )
to write eI := (ei1 , . . . , eik ) and αI for αi1 ∧ · · · ∧ αik .
A k-linear function f ∈ Lk (V ) is completely determined by its values on all k-tuples eI . If
f is alternating, it is completely determined by its values on eI with 1 ≤ i1 < · · · < ik ≤ n.
Thus it suffices to consider e_I with I in strictly ascending order.

Lemma 1.3.20
Let e1 , . . . , en be a basis for a vector space V and α1 , . . . , αn its dual basis in V ∨ . If
I, J are strictly ascending multi-indices of length k, then
\alpha^I(e_J) = \delta^I_J = \begin{cases} 1, & I = J \\ 0, & I \neq J \end{cases}

Proof
We have previously shown that

\alpha^I(e_J) = \det\left[\alpha^i(e_j)\right]_{i \in I,\, j \in J}.
If I = J, then the matrix is the identity matrix and its determinant is 1.


If I 6= J, there is some minimal sub-index ` such that i` 6= j` . Without loss of generality,

assume that i` < j` . Then i` is not equal to any of j1 , . . . , j` . But then the `-th row of
the matrix [αi (ej )] is all zero and the determinant evaluates to 0.

Proposition 1.3.21
The alternating k-linear functions αI for all strictly ascending multi-indices I, form a
basis for the space Ak (V ).

Zh
Proof
Linear Independence. Suppose \sum_I c_I \alpha^I = 0. Applying both sides to an arbitrary e_J, we get that
0 = \sum_I c_I\, \alpha^I(e_J) = \sum_I c_I\, \delta^I_J = c_J.

Spanning. Let f ∈ Ak (V ). We claim that


f = \sum_I f(e_I)\, \alpha^I.
Indeed, by k-linearity and the alternating property, if two k-covectors agree on all e_J with J strictly ascending, then they are equal. We have
\sum_I f(e_I)\, \alpha^I(e_J) = \sum_I f(e_I)\, \delta^I_J = f(e_J).

Corollary 1.3.21.1
If dim V = n, then A^k(V) has dimension \binom{n}{k}.

Proof
The number of strictly ascending multi-indices is the number of size k subsets of [n].

Corollary 1.3.21.2
If k > dim V , then Ak (V ) = 0.

31
Proof
In the multi-index I, at least two of the indices must be equal, say both equal to i. But then α^I = 0, since α^I contains the factor α^i twice and α^i ∧ α^i = 0 for the covector α^i of odd degree 1.
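For example, for V = R^3 the dimensions of the spaces A^k(V) are
\dim A^0 = 1, \quad \dim A^1 = 3, \quad \dim A^2 = 3, \quad \dim A^3 = 1, \quad \dim A^k = 0 \text{ for } k \ge 4,
with A^2(R^3) spanned by α^1 ∧ α^2, α^1 ∧ α^3, α^2 ∧ α^3.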

1.3.11 Useful Results

Proposition 1.3.22 (Characterizations of Alternating Tensors)
Let f ∈ Lk (V ). The following are equivalent.
(i) f is alternating
(ii) f changes signs when two arguments are interchanged
(iii) f changes signs when two successive arguments are interchanged

Zh
(iv) f (v1 , . . . , vk ) = 0 whenever two of its arguments are equal

Theorem 1.3.23 (Transformation Rule for a Wedge Product of Covectors)


Suppose two sets of covectors β 1 , . . . , β k , γ 1 , . . . , γ k ∈ L1 (V ) satisfy

\beta^i = \sum_{j=1}^{k} a^i_j\, \gamma^j

for each i ∈ [k]. Then


β 1 ∧ · · · ∧ β k = (det A)γ 1 ∧ · · · ∧ γ k

where A = [aij ].
eli

Proof
Using the fact that α ∧ α = 0 for all α ∈ L1 (V ),

\beta^1 \wedge \dots \wedge \beta^k = \left( \sum_{j_1=1}^{k} a^1_{j_1} \gamma^{j_1} \right) \wedge \dots \wedge \left( \sum_{j_k=1}^{k} a^k_{j_k} \gamma^{j_k} \right) = \sum_{\sigma \in S_k} \prod_{i=1}^{k} a^i_{\sigma(i)}\; \gamma^{\sigma(1)} \wedge \dots \wedge \gamma^{\sigma(k)} = \sum_{\sigma \in S_k} \prod_{i=1}^{k} a^i_{\sigma(i)}\, (\operatorname{sgn}\sigma)\; \gamma^1 \wedge \dots \wedge \gamma^k = (\det A)\, \gamma^1 \wedge \dots \wedge \gamma^k.

Theorem 1.3.24 (Transformation Rule for Multi-Covectors)
Let f ∈ Ak (V ) and suppose vectors u1 , . . . , uk , v1 , . . . , vk ∈ V satisfy

u_j = \sum_{i=1}^{k} a^i_j\, v_i

for each j ∈ [k]. Then

f (u1 , . . . , uk ) = (det A)f (v1 , . . . , vk ).

Lemma 1.3.25
Let α^1, . . . , α^k ∈ A^1(V). Then α^1 ∧ · · · ∧ α^k = 0 if and only if α^1, . . . , α^k are linearly dependent in the dual space V^∨.

The lemma can be shown using the transformation rule.

Proposition 1.3.26 (Exterior Multiplication)


Let 0 6= α ∈ A1 (V ) and γ ∈ Ak (V ). Then α ∧ γ = 0 if and only if γ = α ∧ β for some
(k − 1)-covector β ∈ Lk−1 (V ).

Proof (Sketch)
If the latter holds, then clearly α ∧ γ = 0.
Suppose α ∧ γ = 0 and write γ as a linear combination of wedge products formed by any
basis of V ∨ that includes α. By linear independence, we see that we can factor out a α
term in each non-zero term in the linear combination.

1.4 Differential Forms on Rn

1.4.1 Differential 1-Forms and the Differential of a Function



Definition 1.4.1 (Cotangent Space)


The cotangent space to Rn at p, denoted by Tp∗ (Rn ) or Tp∗ Rn is defined to be the dual
space (Tp Rn )∨ of the tangent space Tp (Rn ).

Thus an element of the cotangent space Tp∗ (Rn ) is a covector on the tangent space Tp (Rn ).
The following definition is the analogue to the vector field.

Definition 1.4.2 (Differential 1-Form)
A covector field or a differential 1-form on an open subset U ⊆ Rn is a function ω
that assigns to each point p ∈ U a covector ωp ∈ Tp∗ (Rn ),
\omega : U \to \bigsqcup_{p \in U} T_p^*(\mathbb{R}^n), \qquad p \mapsto \omega_p \in T_p^*(\mathbb{R}^n)

We call a differential 1-form a 1-form for short.


From any f ∈ C ∞ (U, R), we can construct a 1-form df called the differential of f as follows.
For p ∈ U and Xp ∈ Tp U , df maps p to the covector (df )p : Tp (Rn ) → R given by

(df )p (Xp ) = Xp f.

Zh
The directional derivative of a function in the direction of a tangent vector at a point p sets
up a bilinear function

T_p(\mathbb{R}^n) \times C_p^\infty(\mathbb{R}^n) \to \mathbb{R}, \qquad (X_p, f) \mapsto X_p f = \langle X_p, f \rangle.

We can think of a tangent vector as a function on the second argument of this pairing:
hXp , ·i. The differential (df )p at p is then a function on the first argument of the pairing:
h·, f i. The value of the differential df at p is also written df |p .
Recall the partial derivatives form a basis to the tangent space Tp (Rn ).

Proposition 1.4.1
eli
Let x^1, . . . , x^n denote the standard coordinates on R^n. At every point p ∈ R^n,
(dx1 )p , . . . , (dxn )p is the dual basis for the cotangent space Tp∗ (Rn ) with respect to the
basis ∂/∂x1 |p , . . . , ∂/∂xn |p for the tangent space Tp (Rn ).

Thus each (dxi )p is an evaluation functional at point p on the tangent space.



Proof
By definition,
(dx^i)_p\!\left( \frac{\partial}{\partial x^j}\bigg|_p \right) = \frac{\partial}{\partial x^j}\bigg|_p x^i = \delta^i_j.

If ω is a 1-form on an open subset U ⊆ Rn , the proposition above shows that at every p ∈ U ,


we can write
\omega_p = \sum_i a_i(p)\, (dx^i)_p
for some a_i(p) ∈ R. As p varies over U, the coefficients a_i become functions on U and we can write ω = \sum_i a_i\, dx^i. The covector field ω is said to be C^∞ on U if the coefficient functions
ai are all C ∞ on U .

Proposition 1.4.2
If f : U → R is a C ∞ function on an open subset U ⊆ Rn , then

ou
X ∂f
df = i
dxi .
i
∂x

Proof
We know that
(df)_p = \sum_i a_i(p)\, (dx^i)_p
at every p ∈ U for some a_i(p). Conclude by evaluating this at the coordinate vector field ∂/∂x^j:
\frac{\partial f}{\partial x^j} = df\!\left( \frac{\partial}{\partial x^j} \right) = \left( \sum_i a_i\, dx^i \right)\!\left( \frac{\partial}{\partial x^j} \right) = \sum_i a_i\, \delta^i_j = a_j.
The proposition above shows that if f ∈ C ∞ , then the 1-form df is also C ∞ . Moreover, we
see that dxi is nothing more than the coordinate function with respect to the basis ∂/∂xj .
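For instance, for f(x, y) = x^2 y on R^2,
df = \frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy = 2xy\, dx + x^2\, dy,
so (df)_{(1,2)} = 4\,dx + dy, and evaluating this covector on v = ⟨3, 1⟩ gives 4·3 + 1·1 = 13, the directional derivative computed earlier.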
eli

1.4.2 Differential k-Forms



Definition 1.4.3 (Differential k-Form)


A differential form ω of degree k or k-form on an open subset U ⊆ Rn is a function
that assigns to each point p ∈ U an alternating k-linear function on the tangent space
Tp (Rn ), ie ωp ∈ Ak (Tp Rn ).

Since A1 (Tp Rn ) = Tp∗ (Rn ), the definition of a k-form generalizes that of a 1-form we have
just seen.
We have shown that one possible basis for Ak (V ) where dim V < ∞, is the wedge product

of k-coordinate functions with strictly ascending indices. Thus a basis for Ak (Tp Rn ) is

dx^I_p = dx^{i_1}_p \wedge \dots \wedge dx^{i_k}_p
for 1 ≤ i_1 < · · · < i_k ≤ n. It follows that at each p ∈ U, ω_p is a linear combination


\omega_p = \sum_I a_I(p)\, dx^I_p

and a k-form is a linear combination


\omega = \sum_I a_I\, dx^I

with function coefficients aI : U → R. We say that a k-form is C ∞ on U if all the coefficients


aI are C ∞ functions on U .

Zh
Denote by Ωk (U ) the vector space of C ∞ k-forms on U . A 0-form on U assigns to each point
p ∈ U an element of A0 (Tp Rn ) = R. Thus Ω0 (U ) = C ∞ (U ).
The wedge product of a k-form ω and an `-form τ on some open U ⊆ Rn is defined pointwise:

(ω ∧ τ )p = ωp ∧ τp

for each p ∈ U. If ω = \sum_I a_I\, dx^I and τ = \sum_J b_J\, dx^J, then
\omega \wedge \tau = \sum_{I,\, J \text{ disjoint}} (a_I b_J)\, dx^I \wedge dx^J,
x
which shows that the wedge product of two C ∞ forms is C ∞ . Thus the wedge product is a
bilinear map
∧ : Ωk (U ) × Ω` (U ) → Ωk+` (U ).
eli
Recall that the wedge product is anticommutative and associative.
For the case k = 0, the wedge product is just the pointwise multiplication of a C ∞ function
and a C ∞ `-form
(f ∧ ω)p = f (p) ∧ ωp = f (p)ωp .
Thus if f ∈ C ∞ (U ) and ω ∈ Ω` (U ), then f ∧ ω = f ω.

Example 1.4.3
Let x, y, z be the coordinates in R3 . The C ∞ k-forms on R3 are

k = 1:  f dx + g dy + h dz
k = 2:  f dy ∧ dz + g dx ∧ dz + h dx ∧ dy
k = 3:  f dx ∧ dy ∧ dz

Example 1.4.4
Let x1 , . . . , x4 be the coordinates in R4 and p ∈ R4 . A possible basis for the vector space
A3 (Tp R4 ) is given by

(dx1 ∧ dx2 ∧ dx3 )p , (dx1 ∧ dx2 ∧ dx4 )p , (dx1 ∧ dx3 ∧ dx4 )p , (dx2 ∧ dx3 ∧ dx4 )p .
With the wedge product as multiplication and the degree of a form as the grading, the direct sum
\Omega^*(U) = \bigoplus_{k=0}^{n} \Omega^k(U)
becomes an anticommutative graded algebra over R. Since we can multiply C ∞ k-forms by
C ∞ functions, the set Ωk (U ) is both a vector space over R and a module over C ∞ (U ). Thus
the direct sum Ω∗ (U ) is also a module over the ring C ∞ (U ).
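As a sample computation in Ω^*(R^3) (an illustration added here), for the 1-forms ω = x dy and τ = z dx + dy,
\omega \wedge \tau = x\, dy \wedge (z\, dx + dy) = xz\, dy \wedge dx + x\, dy \wedge dy = -xz\, dx \wedge dy,
a C^∞ 2-form, illustrating both bilinearity and anticommutativity of the wedge product.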

Zh
1.4.3 Differential Forms as Multilinear Functions on Vector Fields

If ω is a C ∞ 1-form and X is a C ∞ vector field on an open subset U ⊆ Rn , we define a


function ω(X) on U by the formula
ω(X)p = ωp (Xp )
for each p ∈ U . In coordinates,
\omega = \sum_i a_i\, dx^i, \quad a_i \in C^\infty(U), \qquad X = \sum_j b^j \frac{\partial}{\partial x^j}, \quad b^j \in C^\infty(U), \qquad \omega(X) = \sum_i a_i b^i.

Recall that we write X(U ) to denote the set of all C ∞ vector fields on U . Thus a C ∞ 1-form
gives rise to a map X(U ) → C ∞ (U ).
This function is linear over the ring C ∞ (U ). To see this, consider some f ∈ C ∞ (U ).
(\omega(fX))_p := \omega_p(f(p)\, X_p) = f(p)\, \omega_p(X_p) =: (f\, \omega(X))_p,
since ω_p is linear.

In the notation F(U ) = C ∞ (U ), a 1-form gives rise to a F(U )-linear map X(U ) → F(U ).
Similarly, a k-form ω on U gives rise to a k-linear map over F(U ),
X(U ) × · · · × X(U ) → F(U )
(X1 , . . . Xk ) 7→ ω(X1 , . . . , Xk ).

Example 1.4.5
Let ω be a 2-form and τ a 1-form on R3 . If X, Y, Z are vector fields on R3 ,

(ω ∧ τ )(X, Y, Z) = ω(X, Y )τ (Z) − ω(X, Z)τ (Y ) + ω(Y, Z)τ (X).

1.4.4 The Exterior Derivative

ou
In order to define the exterior derivative of a C ∞ k-form on an open subset U ⊆ Rn , we first
define it on 0-forms. Recall Ωk (U ) denotes the C ∞ k-forms on U . The exterior derivative
of a function f ∈ C ∞ (U ) is defined to be its differential df ∈ Ω1 (U ). Recall we showed the
coordinate expansion
df = \sum_i \frac{\partial f}{\partial x^i}\, dx^i.
Zh
Definition 1.4.4 (Exterior Derivative)
For k ≥ 1, if ω = \sum_I a_I\, dx^I ∈ Ω^k(U) for some a_I ∈ C^∞(U), define
d\omega := \sum_I da_I \wedge dx^I = \sum_I \left( \sum_j \frac{\partial a_I}{\partial x^j}\, dx^j \right) \wedge dx^I \;\in\; \Omega^{k+1}(U).
x
Example 1.4.6
Let ω = f dx + g dy ∈ Ω^1(R^2) be a 1-form where f, g ∈ C^∞(R^2). We write f_x = ∂f/∂x and f_y = ∂f/∂y for simplicity. Then
fy = ∂f /∂y for simplicity. Then

dω = df ∧ dx + dg ∧ dy
= (fx dx + fy dy) ∧ dx + (gx dx + gy dy) ∧ dy
= (gx − fy )dx ∧ dy.
©F

Definition 1.4.5 (Antiderivation)


Let A = \bigoplus_{k=0}^\infty A^k be a graded algebra over a field K. An antiderivation of the graded algebra A is a K-linear map D : A → A such that for a ∈ A^k and b ∈ A^ℓ,
D(ab) = (Da)b + (-1)^k a\,(Db).
D(ab) = (Da)b + (−1)k a(Db).

Furthermore, if there is some m such that the antiderivation D sends Ak → Ak+m for all k,
then we say that it is an antiderivation of degree m.

By defining A^k := 0 for k < 0, we can extend the grading of a graded algebra A to negative
integers. With this extension, the degree m of an antiderivation can be negative.

Proposition 1.4.7
(i) The exterior differentiation d : Ω∗ (U ) → Ω∗ (U ) is an antiderivation of degree 1:

d(ω ∧ τ ) = (dω) ∧ τ + (−1)deg ω ω ∧ dτ.

ou
(ii) d2 = 0
(iii) If f ∈ C ∞ (U ) and X ∈ X(U ), then (df )(X) = Xf .

Proof
(i) Since both sides of the claimed equality are linear in ω and τ, it suffices to check the equality for ω = f dx^I and τ = g dx^J. Then
d(\omega \wedge \tau) = d(fg\, dx^I \wedge dx^J) = \sum_i \frac{\partial (fg)}{\partial x^i}\, dx^i \wedge dx^I \wedge dx^J = \sum_i \frac{\partial f}{\partial x^i}\, g\, dx^i \wedge dx^I \wedge dx^J + \sum_i f\, \frac{\partial g}{\partial x^i}\, dx^i \wedge dx^I \wedge dx^J.

Moving the 1-form (∂g/∂xi )dxi across the k-form dxI results in the sign (−1)k as desired.
(ii) Again, it suffices to check for ω = f dx^I. We have
d^2(f\, dx^I) = d\!\left( \sum_i \frac{\partial f}{\partial x^i}\, dx^i \wedge dx^I \right) = \sum_{j,i} \frac{\partial^2 f}{\partial x^j \partial x^i}\, dx^j \wedge dx^i \wedge dx^I.

In the sum, if i = j, then dxj ∧ dxi = 0. If i 6= j, then ∂ 2 f /∂xi ∂xj is symmetric in i, j,


but dxj ∧ dxi is alternating in i, j, hence the terms with i 6= j pair up and cancel.
©F

(iii) This is by definition.


The three properties of the proposition above uniquely characterize exterior differentiation
on an open set U ⊆ Rn .

Proposition 1.4.8 (Characterization of the Exterior Derivative)
If D : Ω∗ (U ) → Ω∗ (U )
(i) is an antiderivation of degree 1
(ii) satisfies D2 = 0
(iii) satisfies (Df)(X) = Xf for f ∈ C^∞(U) and X ∈ X(U)
then D = d.

ou
Proof
By linearity, it suffices to check that D = d on a basic k-form f dxi1 ∧ · · · ∧ dxik .
(iii) states that Df = df on C ∞ functions f . It follows that Ddxi = DDxi = 0 by (ii). But
then by the antiderivation property and induction, D(dxI ) = D(dxi1 ∧(dxi2 ∧· · ·∧dxik )) =
0.

Zh
Finally, for every k-form f dxI ,

D(f dxI ) = (Df ) ∧ dxI + f D(dxI ) (i)


= (df ) ∧ dxI D(dxI ) = 0
=: d(f dxI ).
x
1.4.5 Closed & Exact Forms
eli
Definition 1.4.6 (Closed k-Form)
A k-form ω on U is closed if dω = 0.

Definition 1.4.7 (Exact k-Form)


A k-form ω on U is exact if there is a (k − 1)-form τ such that ω = dτ .
©F

Note that since d(dτ ) = 0, every exact form is closed.

Example 1.4.9
Define a 1-form on R2 − {0} by

\omega = \frac{1}{x^2 + y^2}\,(-y\, dx + x\, dy).

Then ω is closed since, writing a := y/(x^2 + y^2) and b := x/(x^2 + y^2),
d\omega = d\!\left( \frac{x}{x^2+y^2} \right) \wedge dy - d\!\left( \frac{y}{x^2+y^2} \right) \wedge dx = db \wedge dy - da \wedge dx = \partial_x b\,(dx \wedge dy) - \partial_y a\,(dy \wedge dx) = \frac{y^2 - x^2}{(x^2+y^2)^2}(dx \wedge dy) + \frac{x^2 - y^2}{(x^2+y^2)^2}(dx \wedge dy) = 0.

Definition 1.4.8 (Differential Complex/Cochain Complex)


A collection of vector spaces \{V^k\}_{k=0}^\infty with linear maps d_k : V^k \to V^{k+1} such that d_{k+1} \circ d_k = 0 is called a differential complex or cochain complex.

Zh
For any open subset U ⊆ Rn , the exterior derivative d makes the vector space Ω∗ (U ) of C ∞
forms on U into a cochain complex, called the de Rham complex of U :
0 \to \Omega^0(U) \xrightarrow{\;d\;} \Omega^1(U) \xrightarrow{\;d\;} \cdots.

The closed forms are precisely the elements of the kernel of d and the exact forms are the
elements of the image of d.
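For example, on R^2 the 1-form x dy is not closed since d(x dy) = dx ∧ dy ≠ 0, while x dx + y dy is exact (hence closed):
x\, dx + y\, dy = d\!\left( \tfrac{1}{2}(x^2 + y^2) \right).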

1.4.6 Applications to Vector Calculus


x
The theory of differential forms unifies many theorems in vector calculus. By a vector-valued
function on an open subset U ⊆ R3 , we mean some F = (P, Q, R) : U → R3 . Thus this is
precisely a vector field on U .
eli
Three operators, gradient, curl, and divergence, are defined below:
\operatorname{grad} f = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} f = \begin{pmatrix} f_x \\ f_y \\ f_z \end{pmatrix}, \qquad \operatorname{curl} \begin{pmatrix} P \\ Q \\ R \end{pmatrix} = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \times \begin{pmatrix} P \\ Q \\ R \end{pmatrix} = \begin{pmatrix} R_y - Q_z \\ -(R_x - P_z) \\ Q_x - P_y \end{pmatrix}, \qquad \operatorname{div} \begin{pmatrix} P \\ Q \\ R \end{pmatrix} = \begin{pmatrix} \partial_x \\ \partial_y \\ \partial_z \end{pmatrix} \cdot \begin{pmatrix} P \\ Q \\ R \end{pmatrix} = P_x + Q_y + R_z.

Now, every 1-form on U is a linear combination with function coefficients of dx, dy, dz. Thus

we can identify 1-forms with vector fields on U via
P\, dx + Q\, dy + R\, dz \;\longleftrightarrow\; \begin{pmatrix} P \\ Q \\ R \end{pmatrix}.

2-forms can be identified with vector fields on U via
P\, dy \wedge dz + Q\, dz \wedge dx + R\, dx \wedge dy \;\longleftrightarrow\; \begin{pmatrix} P \\ Q \\ R \end{pmatrix}.

Finally, 3-forms on U can be identified with functions on U :

f dx ∧ dy ∧ dz ↔ f.

Following these identifications, the exterior derivative of a 0-form f is

df = fx dx + fy dy + fz dz ↔ (fx , fy , fz ) = grad f.

The exterior derivative of a 1-form is

d(P dx + Q dy + R dz) = (Ry − Qz ) dy ∧ dz − (Rx − Pz ) dz ∧ dx + (Qx − Py ) dx ∧ dy
                      ↔ (Ry − Qz , −(Rx − Pz ), Qx − Py ) = curl (P, Q, R).

Lastly, the exterior derivative of a 2-form is

d(P dy ∧ dz + Q dz ∧ dx + R dx ∧ dy) = (Px + Qy + Rz ) dx ∧ dy ∧ dz
                                     ↔ Px + Qy + Rz = div (P, Q, R).

In summary, the exterior derivative on 0-forms, 1-forms, and 2-forms are simply the operators
grad, curl, and div, under the appropriate identifications. Under these identifications, a
vector field (P, Q, R) on R3 is the gradient of a C ∞ function f if and only if the corresponding
1-form P dx + Qdy + Rdz = df .

We now state three basic facts concerning grad, curl, and div.

Proposition 1.4.10
The following hold:
(a) curl(grad f ) = (0, 0, 0)
(b) div(curl(P, Q, R)) = 0

This proposition expresses the property d2 = 0 on open subsets of R3 .
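As a small illustrative check (not from the text), the identities in the proposition can be verified symbolically for arbitrary smooth functions; the helper functions grad, curl, and div below are ad hoc stand-ins for the operators defined above.

import sympy as sp

# Check curl(grad f) = 0 and div(curl F) = 0, i.e. d∘d = 0 under the
# identifications above.
x, y, z = sp.symbols('x y z', real=True)
f = sp.Function('f')(x, y, z)
P, Q, R = [sp.Function(n)(x, y, z) for n in ('P', 'Q', 'R')]

def grad(g):
    return [sp.diff(g, x), sp.diff(g, y), sp.diff(g, z)]

def curl(F):
    p, q, r = F
    return [sp.diff(r, y) - sp.diff(q, z),
            sp.diff(p, z) - sp.diff(r, x),
            sp.diff(q, x) - sp.diff(p, y)]

def div(F):
    p, q, r = F
    return sp.diff(p, x) + sp.diff(q, y) + sp.diff(r, z)

print([sp.simplify(c) for c in curl(grad(f))])   # [0, 0, 0]
print(sp.simplify(div(curl([P, Q, R]))))         # 0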

Proposition 1.4.11
On R3 , a vector field F is the gradient of some scalar function f if and only if curl F = 0.

This proposition expresses the fact that a 1-form on R3 is exact if and only if it is closed.
This need not hold on a region other than R3 , as the following example shows.

Example 1.4.12
Define U := R3 − {(0, 0, z) : z ∈ R} and consider the familiar vector field

F = (−y/(x2 + y 2 ), x/(x2 + y 2 ), 0)

on U . Then curl F = 0 as we computed, but F is not the gradient of any C ∞ function on


U . This can be proven by contradiction using the fundamental theorem for line integrals.
In the language of differential forms, the corresponding 1-form ω ↔ F is closed but not
exact.
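One concrete way to see the non-exactness (an illustrative computation, not part of the text) is to integrate the corresponding planar 1-form ω = (−y dx + x dy)/(x2 + y 2 ) over the unit circle in the plane z = 0: if ω were df for a C ∞ function f , the integral would vanish, but it equals 2π.

import sympy as sp

# Pull ω back along c(t) = (cos t, sin t) and integrate over [0, 2π].
t = sp.symbols('t', real=True)
xc, yc = sp.cos(t), sp.sin(t)
a = -yc / (xc**2 + yc**2)        # dx-coefficient along the curve
b = xc / (xc**2 + yc**2)         # dy-coefficient along the curve
integrand = a * sp.diff(xc, t) + b * sp.diff(yc, t)
print(sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi)))   # 2*pi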
It turns out whether the previous proposition is true for a region U depends only on the
topology of U . One measure of failure of a closed k-form to be exact is the quotient vector
space

H k (U ) := {closed k-forms on U } / {exact k-forms on U }

called the k-th de Rham cohomology of U . The generalization of the previous proposition
to any differential form on Rn is called the Poincaré lemma: for k ≥ 1, every closed k-form
on Rn is exact. This is equivalent to proving the vanishing of the k-th de Rham cohomology
H k (Rn ) for k ≥ 1.
The theory of differential forms allows us to generalize vector calculus from R3 to Rn and
then manifolds of any dimension. The general Stokes theorem subsumes and unifies the
fundamental theorem for line integrals, Green’s theorem in the plane, the classical Stokes
theorem for a surface in R3 , and the divergence theorem.

1.4.7 Convention on Subscripts and Superscripts

In differential geometry it is customary to index vector fields with subscripts e1 , . . . , en and


differential forms with superscripts ω 1 , . . . , ω n . Being 0-forms, coordinate functions take
superscripts x1 , . . . , xn . Their differentials, being 1-forms, should also have superscripts
dx1 , . . . , dxn . Coordinate vector fields ∂/∂xi are considered to have subscripts since the i
appears in the lower half of the fraction.

Coefficient functions may have superscripts or subscripts depending on whether they are
the coefficients of a vector field or of a differential form. For a vector field X = Σ_i ai ei ,
the coefficients ai have superscripts since the mismatched indices “cancel out”. For the same
reason, the coefficients in a differential form ω = Σ_j bj dxj have subscripts.

This
P convention conveniently leads to a “conservation of indices”. For example, if X =
a ∂/∂x i
, then

Zh
i i
ai = (dxi )(X)
so both sides of the equality have a net superscript i. If ω = j bj dxj ,
P

! !
X
j
X
i ∂ X
ω(X) = bj dx a = b i ai .
j i
∂xi i

After cancellation of superscripts and subscripts, both sides of the quality sign have zero net
index. This convention is a useful mnemonic aid in some of the transformation formulas in
differential geometry.
1.4.8 Miscellaneous Results
Definition 1.4.9 (Superderivation)
Let A = ⊕_{k=−∞}^{∞} Ak be a graded algebra over a field K with Ak = 0 for k < 0, and let
m ∈ Z. A superderivation of A of degree m is a K-linear map D : A → A such that for all
k, D(Ak ) ⊆ Ak+m and for all a ∈ Ak , b ∈ Aℓ ,

D(ab) = (Da)b + (−1)km a(Db).

Proposition 1.4.13
If D1 , D2 are two superderivations of A of respective degrees m1 , m2 , their commutator

[D1 , D2 ] := D1 ◦ D2 − (−1)m1 m2 D2 ◦ D1

is a superderivation of degree m1 + m2 .

This proposition can be verified by checking the definitions.
A superderivation is said to be even or odd depending on the parity of its degree. An even
superderivation is a derivation and an odd superderivation is an antiderivation.

Chapter 2

Manifolds

Intuitively, a manifold is a generalization of curves and surfaces to higher dimension. It is
locally Euclidean in the sense that every point has a neighborhood, called a chart, that is
homeomorphic to an open subset of Rn . The coordinates on a chart allow one to carry out
computations as though in a Euclidean space, so many concepts from Rn , such as differentia-
bility, point-derivations, tangent spaces, and differential forms, carry over to a manifold.
Our goal is to define and explore basic properties of a smooth manifold and smooth maps
between manifolds. Initially, the only way to verify that a space is a manifold is to exhibit a
collection of C ∞ compatible charts covering the space. We eventually see a set of sufficient
conditions under which a quotient topological space becomes a manifold.
2.1 Manifolds
2.1.1 Topological Manifolds

We recall a few definitions from point-set topology. A topological space is second countable if
it has a countable basis. It is said to be Hausdorff if any two distinct points are respectively
contained in disjoint open neighborhoods.

Definition 2.1.1 (Locally Euclidean)


A topological space M is locally Euclidean of dimension n if every point p ∈ M has
a neighborhood U such that there is a homeomorphism φ : U → Rn from U onto an
open subset of Rn .

We call the pair (U, φ) a chart, U a coordinate neighborhood or a coordinate open set, and φ
a coordinate map or a coordinate system on U . Moreover, we say a chart (U, φ) is centered

at p ∈ U if φ(p) = 0.

Definition 2.1.2 (Topological Manifold)


A topological manifold is a Hausdorff, second countable, locally Euclidean space.

Remark 2.1.1 We require manifolds to be Hausdorff and second countable to ensure that

manifolds behave as expected from our experience with Euclidean spaces. For instance, finite
subsets are closed and limits of sequences are unique in Hausdorff spaces. The motivation
for second-countability is based on the existence of the so called partitions of unity.

Remark 2.1.2 We can restrict our definition of manifolds by requiring each coordinate map
φ : U → Rn to be a homeomorphism onto an open ball in Rn or onto Rn itself.

We say a manifold is of dimension n or is an n-manifold if it is locally Euclidean of dimension
n. For the dimension of a manifold to be well-defined, we need to know that for n 6= m,
an open subset of Rn is not homeomorphic to an open subset of Rm . This is known as the
invariance of dimension but is not easy to prove directly. Since we are interested in smooth
manifolds, the analogous result is much easier to prove.

Example 2.1.3
The Euclidean space Rn is covered by a single chart (Rn , IdRn ) and is hence an n-manifold.
Similarly, every open subset of Rn is also an n-manifold.
Let (X, τ ) be a topological space and A ⊆ X. Recall that A endowed with the subspace
topology {U ∩ A : U ∈ τ } is called a subspace of X. The Hausdorff condition and second
countability are inherited by subspaces. Thus any subspace of Rn is automatically Hausdorff
and second countable.

Example 2.1.4
The graph of a continuous function f : R → R is a 1-manifold since it is a subspace of R2
and is locally Euclidean through the homeomorphism (x, f (x)) 7→ x.

Example 2.1.5
The union of the x and y-axis in R2 is not a topological manifold.
Suppose towards a contradiction that there is a neighborhood U of the intersection point p that is
homeomorphic to an open ball B of Rn with p mapping to 0. The homeomorphism U → B
restricts to a homeomorphism U − {p} → B − {0}. Now, B − {0} is connected if n ≥ 2 and
has two connected components if n = 1. Since U − {p} has four connected components, this
is a contradiction.

2.1.2 Compatible Charts

Suppose (U, φ), (V, ψ) are two charts of an n-manifold. Since U ∩ V is open in U and
φ : U → Rn is a homeomorphism onto a open subset of Rn , the image φ(U ∩ V ) will also be
an open subset of Rn . Similarly, ψ(U ∩ V ) is an open subset of Rn .

Definition 2.1.3 (Smoothly Compatible)
Two charts (U, φ), (V, ψ) of a topological manifold are C ∞ -compatible if the two maps

φ ◦ ψ −1 : ψ(U ∩ V ) → φ(U ∩ V )
ψ ◦ φ−1 : φ(U ∩ V ) → ψ(U ∩ V )

are C ∞ as functions from Rn → Rn .

These compositions are called transition functions between the charts. We often omit
“smoothly” and simply speak of compatible charts.
If U ∩V is empty, then the two charts are automatically smoothly compatible. For simplicity
of notation, we will sometimes write Uαβ for Uα ∩ Uβ and similarly Uαβγ = Uα ∩ Uβ ∩ Uγ .

Definition 2.1.4 (Smooth Atlas)


A C ∞ atlas or simply an atlas on a local Euclidean space M is a collection U =
{(Uα , φα )} of pairwise C ∞ -compatible charts that cover M , ie M = ∪α Uα .
Example 2.1.6
The unit circle S 1 in the complex plane C can be described as the set of points
{eit ∈ C : t ∈ [0, 2π]}.

Let U1 , U2 be the two open subsets of S 1

U1 = {eit : t ∈ (−π, π)}


U2 = {eit : t ∈ (0, 2π)}

and define φα : Uα → R for α = 1, 2 by

φ1 (eit ) = t t ∈ (−π, π)
φ2 (eit ) = t t ∈ (0, 2π)

Then {(U1 , φ1 ), (U2 , φ2 )} form a C ∞ atlas on S 1 .
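To make the compatibility concrete, here is a small numerical illustration (not from the text) of the transition map φ2 ◦ φ1−1 on φ1 (U1 ∩ U2 ) = (−π, 0) ∪ (0, π): it equals t ↦ t + 2π on (−π, 0) and t ↦ t on (0, π), hence is C ∞ on each component. The function names below are ad hoc.

import numpy as np

def phi1_inv(t):          # phi1^{-1}: (-pi, pi) -> S^1
    return np.exp(1j * t)

def phi2(z):              # phi2: U2 -> (0, 2*pi), the argument taken in (0, 2*pi)
    a = np.angle(z)       # angle in (-pi, pi]
    return np.where(a <= 0, a + 2 * np.pi, a)

ts = np.concatenate([np.linspace(-3, -0.01, 50), np.linspace(0.01, 3, 50)])
transition = phi2(phi1_inv(ts))
expected = np.where(ts < 0, ts + 2 * np.pi, ts)
assert np.allclose(transition, expected)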


Although the smooth compatibility of charts is reflexive and symmetric, it is not transitive.

Indeed, suppose (U1 , φ1 ) is compatible with (U2 , φ2 ) and (U2 , φ2 ) is compatible with (U3 , φ3 ).
Note that the three coordinate functions are simultaneously defined on the intersection U123 .
Thus the composite
φ3 ◦ φ1−1 = (φ3 ◦ φ2−1 ) ◦ (φ2 ◦ φ1−1 )

is smooth but only on φ1 (U123 ) and not necessarily on φ1 (U13 ). Here the equality means on
the restriction to their common domains.

Definition 2.1.5
A chart (V, ψ) is compatible with an atlas U if it is compatible with all the charts of
the atlas.

Lemma 2.1.7
Let U = {(Uα , φα )} be an atlas on a locally Euclidean space. If two charts

(V, ψ), (W, σ) are both compatible with the atlas, then they are compatible with
each other.

Proof
Let p ∈ V ∩ W . We need to show that σ ◦ ψ −1 is C ∞ at ψ(p). Since U is an atlas for M ,
we can choose Uα 3 p for some α. Then p is in the intersection V ∩ W ∩ Uα .
From the remark above,

σ ◦ ψ −1 = (σ ◦ φα−1 ) ◦ (φα ◦ ψ −1 )
is C ∞ on ψ(V ∩ W ∩ Uα ), and hence at ψ(p). But p ∈ V ∩ W was arbitrary, hence σ ◦ ψ −1
is C ∞ on ψ(V ∩ W ). Similarly, ψ ◦ σ −1 is C ∞ on σ(V ∩ W ).
2.1.3 Smooth Manifolds

An atlas M on a locally Euclidean space is said to be maximal if it is not contained in a


larger atlas. In other words, if U is any other atlas containing M, then U = M.

Definition 2.1.6 (Smooth Manifold)


A smooth manifold is a topological manifold M together with a maximal atlas.

The maximal atlas is also called a differentiable structure on M . A manifold is said to have
dimension n if all of its connected components have dimension n. A 1-dimensional manifold
is also called a curve, a 2-dimensional manifold a surface, and an n-dimensional manifold an
n-manifold as before.
In practice, we need not check that a topological manifold M has a maximal atlas. The

existence of any atlas on M will do, due to the following proposition.

Proposition 2.1.8
Any atlas U on a locally Euclidean space is contained in a unique maximal atlas.

Proof
Adjoin to the atlas U all charts that are compatible with U. By the previous lemma, the
added charts are compatible with each other. Thus the enlarged
collection of charts is still an atlas. Any chart compatible with the new atlas must be
compatible with the original atlas U and so by construction belongs to the new atlas. This
proves that the new atlas is maximal.
Uniqueness follows by adding all charts from another maximal atlas and noting we have
yet another atlas.

In summary, to show that a topological space M is a smooth manifold, it suffices to check
(i) M is Hausdorff and second countable
(ii) M has a (not necessarily smooth) atlas.
From here on, “manifold” will mean a smooth manifold. In the context of manifolds,
we denote the standard coordinates on Rn by r1 , . . . , rn . If (U, φ : U → Rn ) is a chart of
some manifold, we let xi := ri ◦ φ be the i-th component of φ and write φ = (x1 , . . . , xn ) and
(U, φ) = (U, x1 , . . . , xn ). Thus for p ∈ U , (x1 (p), . . . , xn (p)) is a point in Rn . The functions
are called (local) coordinates on U . By abuse of notation, we sometimes omit the p so the
notation (x1 , . . . , xn ) stands for local coordinates on the open set U and for a point in
Rn . By a chart (U, φ) about p in a manifold M , we will mean a chart in the differentiable
structure of M such that p ∈ U .
2.1.4 Examples of Smooth Manifolds

Example 2.1.9 (Euclidean Space)


Rn is an n-manifold with a single chart (Rn , r1 , . . . , rn ) where the ri ’s are the standard
coordinates on Rn .

Example 2.1.10 (Open Submanfolds)


Any open subset V of a manifold M is also a manifold. If {(Uα , φα )} is an atlas for M ,
then {(Uα ∩ V, φα |Uα ∩V )} is an atlas for V .

Example 2.1.11 (0-Manifolds)


In a 0-manifold, every point p has a neighborhood that is homeomorphic to R0 , mean-
ing the neighborhood consists of only p. Thus a 0-manifold is a discrete set. By second

countability, this discrete set must be countable.

Example 2.1.12 (Graphs of Smooth Functions)


For A ⊆ Rn and a function f : A → Rm , the graph of f is defined as

Γ(f ) := {(x, f (x)) ∈ A × Rm }.

If U is an open subset of Rn and f : U → Rm is C ∞ , the two maps

φ : Γ(f ) → U,        (x, f (x)) ↦ x
(1, f ) : U → Γ(f ),   x ↦ (x, f (x))

are continuous and inverse to each other and hence homeomorphisms.


The graph Γ(f ) of a smooth function f has a single chart atlas (Γ(f ), φ) and is therefore

a smooth manifold.
This shows that many of the familiar surfaces of calculus, for example an elliptic paraboloid
or a hyperbolic paraboloid, are manifolds.

Example 2.1.13 (General Linear Groups)


For any two positive integers m and n, let Rm×n be the vector space of all m × n matrices.
Since Rm×n is isomorphic to Rmn , we give it the topology of Rmn . The general linear
group GL(n, R) is by definition

GL(n, R) := {A ∈ Rn×n : det A ≠ 0} = det−1 (R − {0}).

Since the determinant is a polynomial in the entries of the matrix, GL(n, R) is an open
subset of Rn×n ≅ Rn² and is therefore a manifold.
By the same reasoning, GL(n, C) is a manifold of dimension 2n2 .

Example 2.1.14 (Unit Circle in the Plane)


We know that S 1 is a manifold since it has a two chart atlas. Let us consider S 1 as a
subset of R2 with defining equation x2 + y 2 = 1.
We can cover S 1 with the upper, lower semicircles U1 , U2 and right, left semicircles U3 , U4 .

We can take φ1 , φ2 to be the coordinate function x and φ3 , φ4 to be the coordinate function


y. It can be checked that {(Ui , φi )}4i=1 is a smooth atlas on S 1 .

Example 2.1.15 (Product Manifold)


If M, N are smooth manifolds, then M × N with its product topology is Hausdorff and
second countable. To see that M × N is a manifold, we need to exhibit an atlas on it.

Proposition 2.1.16
Let {(Uα , φα )} and {(Vi , ψi )} be smooth atlases for the manifolds M, N of dimensions
m, n respectively. Then the collection

{(Uα × Vi , φα × ψi : Uα × Vi → Rm × Rn )}

of charts is a smooth atlas on M × N . Thus M × N is a smooth (m + n)-manifold.

Example 2.1.17
The infinite cylinder S 1 × R and the torus S 1 × S 1 are manifolds. Moreover, the n-
dimensional torus S 1 × · · · × S 1 (n factors) is a manifold.
Remark 2.1.18 Let S n denote the unit sphere

(x1 )2 + · · · + (xn+1 )2 = 1

in Rn+1 . It is not hard to write down a smooth atlas of size 2n on S n by considering
overlapping semi-spheres. The manifold S n with this differentiable structure is called the
standard n-sphere.

One of the most surprising achievements in topology was John Milnor’s discovery of exotic
7-spheres, smooth manifolds homeomorphic but not diffeomorphic to the standard 7-sphere.
In 1963, Michel Kervaire and John Milnor determined that there are exactly 28 nondiffeo-
morphic differentiable structures on S 7 .
It is known that in dimensions < 4, every topological manifold has a unique differentiable
structure and in dimensions > 4, every compact topological manifold has a finite number of
differentiable structures. Dimension 4 is a mystery and the statement that S 4 has a unique
differentiable structure is called the smooth Poincaré conjecture.
Michel Kervaire was the first to construct an example of topological manifolds with no
differentiable structure.

2.2 Smooth Maps on a Manifold



Having defined smooth manifolds, we now consider maps between them. Through coordinate
charts, we can transfer the notion of smooth maps from Euclidean spaces to manifolds. It
turns out that smooth compatibility of charts in an atlas means the smoothness of a map is
independent of the choice of charts and is therefore well-defined. We give various criteria for
the smoothness of a map and provide examples of such maps.
Then we transfer the notion of partial derivatives from Euclidean space to a coordinate chart
on a manifold. This enables us to generalize the inverse function theorem to manifolds. Using

this result, we formulate a criterion for a set of smooth functions to serve as local coordinates
near a point.

2.2.1 Smooth Functions on a Manifold

Definition 2.2.1 (Smooth at a Point)

Let M be a smooth n-manifold. A function f : M → R is C ∞ /smooth at a point
p ∈ M if there is a chart (U, φ) about p such that f ◦ φ−1 : φ(U ) → R is C ∞ at φ(p).

The function f is said to be C ∞ on M if it is smooth at every point of M .


Remark 2.2.1 The definition of smoothness is independent of the chosen chart. Indeed, if
(U, φ), (V, ψ) are charts about p ∈ M and f ◦ φ−1 is smooth at φ(p), then by the smooth

compatibility of the atlas,

f ◦ ψ −1 = (f ◦ φ−1 ) ◦ (φ ◦ ψ −1 )

is smooth at ψ(p). Note we may need to restrict to a smaller neighborhood about p but this
does not change the result.

Remark 2.2.2 In the definition above, f : M → R is not assumed to be continuous.


However, if f is smooth at p ∈ M , then

f = (f ◦ φ−1 ) ◦ φ
is a composition of continuous functions at p and is therefore continuous at p.
Since we are only interested in smooth functions on an open set, there is no loss of generality
in assuming at the outset that f is continuous.

Proposition 2.2.3 (Smoothness of Real-Valued Function)


Let M be a manifold of dimension n and f : M → R a real-valued function on M . The
following are equivalent:
(i) f is C ∞

(ii) The manifold M has an atlas such that for every chart (U, φ) in the atlas, f ◦ φ−1 :
φ(U ) → R is C ∞
(iii) For every chart (V, ψ) on M , the function f ◦ ψ −1 : ψ(V ) → R is C ∞

The idea of this proposition will be a recurrent motif: to prove the smoothness of an object,
it suffices to show a smoothness criterion holds on the charts of some atlas. Once the object
is shown to be smooth, it then follows that the same smoothness criterion holds on every
chart of the manifold.

Definition 2.2.2 (Pullback)
Let F : N → M be a map and h a function on M . The pullback of h by F , denoted
F ∗ h, is the composition h ◦ F .

Thus a function f : M → R is smooth on a chart (U, φ) if and only if its pullback (φ−1 )∗ f
by φ−1 is smooth on φ(U ).

2.2.2 Smooth Maps between Manifolds

Definition 2.2.3 (Smooth at a Point)


Let M be a smooth m-manifold and N a smooth n-manifold. A continuous map
F : N → M is C ∞ at a point p ∈ N if there are charts (V, ψ) about F (p) ∈ M and

(U, φ) about p ∈ N such that the composition

ψ ◦ F ◦ φ−1 : φ(F −1 (V ) ∩ U ) → Rm

is C ∞ at φ(p).

The continuous map F : N → M is said to be C ∞ if it is C ∞ at every point of N .

Remark 2.2.4 In the definition above, we assume F : N → M is continuous so F −1 (V ) is


open in N . Thus smooth maps between manifolds are by definition continuous.
Note that if we take M = R with the single chart (R, Id), then we recover the definition of
smooth maps f : M → R.
Proposition 2.2.5
Suppose F : N → M is smooth at p ∈ N . If (U, φ) is any chart about p in N and (V, ψ)
is any chart about F (p) ∈ M , then ψ ◦ F ◦ φ−1 is smooth at φ(p).

Proof

Since F is smooth at p ∈ N , there are charts (Uα , φα ) about p ∈ N and (Vβ , ψβ ) about
F (p) ∈ M such that ψβ ◦ F ◦ φ−1
α is smooth at φα (p).

By the smooth compatibility of charts in an atlas, both φα ◦ φ−1 and ψ ◦ ψβ−1 are smooth
on open subsets of Euclidean space. Hence the composition

ψ ◦ F ◦ φ−1 = (ψ ◦ ψβ−1 ) ◦ (ψβ ◦ F ◦ φα−1 ) ◦ (φα ◦ φ−1 )

is C ∞ at φ(p).

Next, we present a way to check smoothness of a map without specifying points in the
domain.

Proposition 2.2.6
Let N, M be smooth manifolds and F : N → M be continuous. The following are
equivalent:
(i) F is C ∞

(ii) There are atlases U for N and B for M such that for every chart (U, φ) in U and
(V, ψ) in B, the following map is C ∞

ψ ◦ F ◦ φ−1 : φ(U ∩ F −1 (V )) → Rm

(iii) For every chart (U, φ) on N and (V, ψ) on M , the following map is C ∞

ψ ◦ F ◦ φ−1 : φ(U ∩ F −1 (V )) → Rm

Proposition 2.2.7 (Composition of Smooth Maps)


If F : N → M and G : M → P are smooth maps of manifolds, then the composition
G ◦ F : N → P is smooth.

Proof
Let (U, φ), (V, ψ) and (W, σ) be charts on N, M, P respectively. Then

σ ◦ (G ◦ F ) ◦ φ−1 = (σ ◦ G ◦ ψ −1 ) ◦ (ψ ◦ F ◦ φ−1 )
under a suitable restriction. By a previous proposition, this is then a composition of
smooth maps on Euclidean space and is therefore smooth. The same proposition now
ensures G ◦ F is smooth. Note we need to let ψ vary over all charts for M to check the
composition is smooth over its entire domain.

2.2.3 Diffeomorphisms

Definition 2.2.4 (Diffeomorphism of Manifolds)


A diffeomorphism of manifolds is a bijective C ∞ map F : N → M whose inverse F −1
is also smooth.

The next two propositions show that coordinate maps are diffeomorphisms and conversely,
every diffeomorphism of an open subset of a manifold with an open subset of a Euclidean
space can serve as a coordinate map.

Proposition 2.2.8
If (U, φ) is a chart on an n-manifold M , the coordinate map φ : U → φ(U ) ⊆ Rn is a
diffeomorphism.

Proof
φ is a homeomorphism by definition so it suffices to check that φ, φ−1 are both smooth.

We use a previous proposition to check that φ : U → φ(U ) is a smooth map between the
open submanifolds U, φ(U ) of M, Rn respectively. It suffices to show that φ is smooth
with respect to particular atlases for U, φ(U ). Indeed, consider the single chart atlases
{(U, φ)} and {(φ(U ), Id)}. We see that

Id ◦φ ◦ φ−1 = Id

is trivially C ∞ . The same atlases can be used to show that φ−1 is also C ∞ .

Proposition 2.2.9
Let U ⊆ M be an open subset of the n-manifold M . If F : U → F (U ) ⊆ Rn is a
diffeomorphism onto an open subset of Rn , then (U, F ) is a chart in the differentiable
structure of M .

Proof
For any chart (Uα , φα ) in the maximal atlas of M , both φα and φα−1 are C ∞ by the previous
proposition. As compositions of smooth maps, both F ◦ φα−1 and φα ◦ F −1 are C ∞ . Hence
(U, F ) is compatible with the maximal atlas so that (U, F ) must belong in the atlas by
maximality.

2.2.4 Smoothness in Terms of Components

We now derive a criterion that reduces the smoothness of a map to the smoothness of real-
valued functions on open sets.

Proposition 2.2.10 (Smoothness of Vector-Valued Functions)


Let N be a manifold and F : N → Rm a continuous map. The following are equivalent:
(i) F is C ∞
(ii) N has an atlas such that for every chart (U, φ) in the atlas, the map F ◦ φ−1 :
φ(U ) → Rm is smooth
(iii) For every chart (U, φ) on N , the map F ◦ φ−1 : φ(U ) → Rm is C ∞

Proposition 2.2.11 (Smoothness in terms of Components)
Let N be a manifold. A vector-valued function F : N → Rm is C ∞ if and only if its
component functions F 1 , . . . , F m : N → R are all C ∞ .

Example 2.2.12
The map F : R → S 1 given by

F (t) = (cos t, sin t)

is C ∞ .

Proposition 2.2.13 (Smoothness in term of Vector-Valued Functions)


Let F : N → M be a continuous maps between an n-manifold and an m-manifold. The
following are equivalent:

(i) F is C ∞
(ii) M has an atlas such that for every chart (V, ψ) = (V, y 1 , . . . , y m ) in the atlas, the
vector-valued function ψ ◦ F : F −1 (V ) → Rm is smooth
(iii) For every chart (V, ψ) = (V, y 1 , . . . , y m ) on M , the vector-valued function ψ ◦ F :
F −1 (V ) → Rm is C ∞

This smoothness criterion then translates into a smoothness criterion in terms of the com-
ponents of the map.

Proposition 2.2.14 (Smoothness in terms of Components)


Let F : N → M be a continuous map from an n-manifold to an m-manifold. The
following are equivalent.
(i) F is smooth
(ii) M has an atlas such that for every chart (V, ψ) = (V, y 1 , . . . , y m ) in the atlas, the
components y i ◦ F : F −1 (V ) → R of F relative to the chart are all C ∞
(iii) For every chart (V, ψ) = (V, y 1 , . . . , y m ) on M , the components y i ◦ F : F −1 (V ) →
R of F relative to the chart are all smooth

2.2.5 Examples of Smooth Maps

Proposition 2.2.15 (Smoothness of Projection Map)


Let M, N be manifolds and π : M × N → M given by

π(p, q) = p

the projection to the first factor. Then π is a smooth map.

Proof
Fix (p, q) ∈ M × N and let (U, φ) = (U, x1 , . . . , xm ) and (V, ψ) = (V, y 1 , . . . , y n ) be
coordinate neighborhoods of p, q respectively. We know that (U × V, φ × ψ) = (U ×
V, x1 , . . . , xm , y 1 , . . . , y n ) is a coordinate neighborhood of (p, q). But then

φ ◦ π ◦ (φ × ψ)−1 (a1 , . . . , am , b1 , . . . , bn ) = (a1 , . . . , am )

is a smooth map from (φ × ψ)(U × V ) ⊆ Rm+n to φ(U ) ⊆ Rm . Thus π is smooth at (p, q)
and by the arbitrary choice of (p, q), π is C ∞ on M × N .

Proposition 2.2.16
Let M1 , M2 , N be manifolds of dimension m1 , m2 , n respectively. A map (f1 , f2 ) : N →
M1 × M2 is smooth if and only if f1 , f2 are both smooth.

Proof
If (f1 , f2 ) is smooth, then each fi = πi ◦ (f1 , f2 ) is smooth as a composition of smooth maps, since the projection πi is smooth by the previous proposition.
Conversely, suppose f1 , f2 are smooth. Then the components of f1 , f2 on any local coordi-
nate neighborhoods of M1 , M2 respectively are smooth. But there is an atlas U of M1 ×M2
which is just the cross product of charts from M1 , M2 and the components of (f1 , f2 ) on
the local coordinate neighborhoods of U are smooth by assumption. This suffices to show
that (f1 , f2 ) is indeed smooth.

Proposition 2.2.17
A smooth function f (x, y) on R2 restricts to a smooth function on S 1 .
Proof (Sketch)
We denote a point on S 1 as p = (a, b) and use x, y to mean the standard coordinate
functions on R2 . If we show that x, y restricts to C ∞ functions on S 1 , then the inclusion
map i(p) = (x(p), y(p)) is then smooth on S 1 and the composition f |S 1 = f ◦ i will
therefore be smooth.
We can check that x, y are smooth on a particular atlas.
The definition of a smooth map between manifolds allows us to define a Lie group.

Definition 2.2.5 (Lie Group)
A Lie group is a smooth manifold G with a group structure such that the multiplica-
tion map
µ:G×G→G
and inverse map
ι:G→G

are both smooth.

We can similarly define a topological group, which is a topological space with a group structure
where the multiplication and inverse maps are both continuous. Note a topological group is
not required to be a topological manifold.

Example 2.2.18

(i) Rn is a Lie group under addition
(ii) C× = C \ {0} is a Lie group under multiplication
(iii) The unit circle S 1 ⊆ C× is a Lie group under multiplication
(iv) The Cartesian product G1 × G2 of two Lie groups (G1 , µ1 ) and (G2 , µ2 ) is a Lie
group under coordinatewise multiplication µ1 × µ2

Example 2.2.19 (General Linear Group)


The general linear group GL(n, R) is an open submanifold of Rn×n . Matrix multiplication
is a polynomial in the coordinates of the input and is hence a smooth map. By Cramer’s
rule, the entries in the inverse of a matrix is a rational function of the input entries. Thus
the inverse map is also smooth and GL(n, R) is a Lie group.
The notation forPmatrices is difficult. A ∈ Rn×n can represent a linear transformation
eli
y = Ax so y i = j aij xj and we write A = [aij ]. But A can also represent a bilinear form
hx, yi = xT Ay. But then hx, yi = i,j xi aij y j and A = [aij ].
P

In the absence of context, we write A = [aij ].



2.2.6 Partial Derivatives

Definition 2.2.6
Let (U, φ) = (U, x1 , . . . , xn ) be a chart on an n-manifold M , r1 , . . . , rn the standard
coordinates on Rn , and f : U → R a smooth function.
The partial derivative ∂f /∂xi of f with respect to xi at p is given by

∂/∂xi |p f := (∂f /∂xi )(p) := (∂(f ◦ φ−1 )/∂ri )(φ(p)) =: ∂/∂ri |φ(p) (f ◦ φ−1 ).

The partial derivative ∂f /∂xi is C ∞ on U because its pullback (∂f /∂xi ) ◦ φ−1 is C ∞ on
φ(U ).
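For instance (an illustrative computation, not from the text), on the polar-coordinate chart (r, θ) of a suitable open subset of R2 , the partial derivative of f (x, y) = x2 + y 2 with respect to the chart coordinate r is obtained by differentiating the pullback f ◦ φ−1 ; the names below are ad hoc.

import sympy as sp

# ∂f/∂r at a point p is by definition ∂(f ∘ φ^{-1})/∂r evaluated at φ(p),
# where φ^{-1}(r, θ) = (r cos θ, r sin θ) on a suitable open set.
r, theta = sp.symbols('r theta', positive=True)
x_of, y_of = r * sp.cos(theta), r * sp.sin(theta)   # components of φ^{-1}

f = lambda x, y: x**2 + y**2
pullback = f(x_of, y_of)                            # f ∘ φ^{-1} in (r, θ)
print(sp.simplify(sp.diff(pullback, r)))            # 2*r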
The next proposition states that partial derivatives on a manifold satisfy the same duality
property ∂ri /∂rj = δji as the coordinate functions on Rn .

Proposition 2.2.20
Suppose (U, x1 , . . . , xn ) is a chart on a manifold. Then ∂xi /∂xj = δji .
Proof
At a point p ∈ U , by the definition of ∂/∂xj |p ,

(∂xi /∂xj )(p) = (∂(xi ◦ φ−1 )/∂rj )(φ(p))
              = (∂(ri ◦ φ ◦ φ−1 )/∂rj )(φ(p))
              = (∂ri /∂rj )(φ(p))
              = δji .

Definition 2.2.7 (Jacobian)


Let F : N → M be a smooth map (U, φ) = (U, x1 , . . . , xn ) be a chart on N , and
(V, ψ) = (V, y 1 , . . . , y m ) be a chart on M such that F (U ) ⊆ V .
Let F i = y i ◦ F = ri ◦ ψ ◦ F : U → R denote the i-th component of F in the chart
(V, ψ). Then the matrix [∂F i /∂xj ] is the Jacobian matrix of F relative to the charts
(U, φ), (V, ψ).

If M, N have the same dimension, then det[∂F i /∂xj ] is known as the Jacobian deter-
minant of F relative to the two charts. The Jacobian determinant is also written as
∂(F 1 , . . . , F n )/∂(x1 , . . . , xn ).

Example 2.2.21 (Jacobian Matrix of a Transition Map)


Let (U, φ) = (U, x1 , . . . , xn ) and (V, ψ) = (V, y 1 , . . . , y n ) be overlapping charts on a mani-
fold M . The transition map ψ ◦ φ−1 : φ(U ∩ V ) → ψ(U ∩ V ) is a diffeomorphism of open

subsets of Rn . Its Jacobian matrix J(ψ ◦ φ−1 ) at φ(p) is the matrix [∂y i /∂xj ] of partial
derivatives at p.
Indeed,

(∂(ψ ◦ φ−1 )i /∂rj )(φ(p)) = (∂(ri ◦ ψ ◦ φ−1 )/∂rj )(φ(p))
                           = (∂(y i ◦ φ−1 )/∂rj )(φ(p))
                           = (∂y i /∂xj )(p).

2.2.7 Inverse Function Theorem


Recall that any diffeomorphism F : U → F (U ) ⊆ Rn of an open subset U of a manifold can
be interpreted as a chart on U .

Definition 2.2.8 (Locally Invertible/Local Diffeomorphism)


We say that a C ∞ map F : N → M is locally invertible or a local diffeomorphism at
p ∈ N if p has a neighborhood U on which F |U : U → F (U ) is a diffeomorphism.

Given n smooth functions F 1 , . . . , F n in a neighborhood of a point p ∈ N of an n-manifold,


we would like to know whether they form a coordinate system, possibly on a smaller neigh-
borhood of p. The inverse function theorem provides an answer.

Theorem 2.2.22 (Inverse Function Theorem for Rn )


Let F : W → Rn be a smooth map defined on an open subset W ⊆ Rn . For any
p ∈ W , the map F is locally invertible at p if and only if the Jacobian determinant
det[∂F i /∂rj (p)] is non-zero.

This theorem is typically proved in a standard course on multivariate calculus/real analysis.


Since the statement is a local result, it easily translates to manifolds.

Theorem 2.2.23 (Inverse Function Theorem for Manifolds)
Let F : N → M be a smooth map between two manifolds of the same dimension,
and p ∈ N . Suppose for some charts (U, φ) = (U, x1 , . . . , xn ) about p ∈ N and
(V, ψ) = (V, y 1 , . . . , y n ) about F (p) ∈ M we have F (U ) ⊆ V . Set F i := y i ◦ F . Then
F is locally invertible at p if and only if its Jacobian determinant det[∂F i /∂xj (p)] is
non-zero.

Proof
Since F i = y i ◦ F = ri ◦ ψ ◦ F , the Jacobian matrix of F relative to the charts (U, φ) and
(V, ψ) is
[∂F i /∂xj (p)] = [∂(ri ◦ ψ ◦ F )/∂xj (p)] = [∂(ri ◦ ψ ◦ F ◦ φ−1 )/∂rj (φ(p))],
which is precisely the Jacobian matrix at φ(p) of the map

ψ ◦ F ◦ φ−1 : φ(U ) → ψ(V )

between two open subsets of Rn .


By the inverse function theorem for Rn ,

det[∂F i /∂xj (p)] = det[∂(ri ◦ ψ ◦ F ◦ φ−1 )/∂rj (φ(p))] ≠ 0

if and only if ψ ◦ F ◦ φ−1 is locally invertible at φ(p). Since ψ, φ are diffeomorphisms, this
last statement is equivalent to the local invertibility of F at p.
We typically apply the inverse function theorem in the following form.

Corollary 2.2.23.1
Let N be an n-manifold. A set of n smooth functions F 1 , . . . , F n defined on a coordinate
neighborhood (U, x1 , . . . , xn ) of a point p ∈ N forms a coordinate system about p if and
only if the Jacobian determinant det[∂F i /∂xj (p)] is non-zero.

Example 2.2.24

Consider the functions x2 + y 2 − 1, y on R2 . Let F : R2 → R2 be given by


(x, y) 7→ (x2 + y 2 − 1, y).
F can serve as a coordinate map in a neighborhood of p if and only if it is a local diffeo-
morphism at p. The inverse function theorem states that this is equivalent to the condition

0 ≠ ∂(F 1 , F 2 )/∂(x, y) = det [[2x, 2y], [0, 1]] = 2x.
Thus F can serve as a coordinate system at any point not on the y-axis.
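The determinant above is easy to reproduce symbolically (an illustrative snippet, not part of the text):

import sympy as sp

x, y = sp.symbols('x y', real=True)
F = sp.Matrix([x**2 + y**2 - 1, y])          # the map F(x, y)
J = F.jacobian(sp.Matrix([x, y]))            # Jacobian matrix [∂F^i/∂x^j]
print(J)            # Matrix([[2*x, 2*y], [0, 1]])
print(J.det())      # 2*x, so F is a local diffeomorphism wherever x ≠ 0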

2.3 Quotients
Recall that given an equivalence relation on a topological space, we can always imbue the
quotient space with a topology such that the natural projection map is continuous. However,
even if the original space is a manifold, a quotient space is often not a manifold. We study
conditions under which a quotient space remains second countable and Hausdorff.

2.3.1 The Quotient Topology

Recall that an equivalence relation ∼ on a set S is a reflexive, symmetric, and transitive
relation. The equivalence class [x] of x ∈ S is the set of all elements in S equivalent to x.
We denote the set of equivalence classes by S/∼ and refer to this set as the quotient of S by
∼. The natural projection map π : S → S/ ∼ is given by x 7→ [x].

If S is a topological space, we can define a topology on S/∼ by declaring a set U ⊆ S/∼
to be open if and only if π −1 (U ) is open in S. Note the projection map is automatically
continuous by definition under this topology. We call this the quotient topology on S/∼ and
under this topology, S/ ∼ is called the quotient space.

2.3.2 Continuity of a Map on a Quotient

Let ∼ be an equivalence relation on the topological space S and give S/∼ the quotient
topology. Suppose a map f : S → Y into another topological space Y is constant on each
equivalence class. Then it induces a map f¯ : S/∼ → Y given by

[p] 7→ f (p).
Proposition 2.3.1
The induced map f¯ : S/∼ → Y is continuous if and only if the map f : S → Y is
continuous.

This proposition gives a useful criterion for verifying the continuity of f¯: lift the function f¯ to
f = f¯ ◦ π on S and check the continuity of the lifted map f . The proof of the proposition

follows by checking the definitions.

2.3.3 Identification of a Subset to a Point

If A is a subspace of a topological space S, we can define a relation ∼ on S by declaring


x ∼ y for all x, y ∈ A. This is an equivalence relation and we say the quotient space S/∼ is
obtained from S by identifying A to a point.

Example 2.3.2
Let I = [0, 1] be the closed unit interval and I/∼ the quotient space obtained from I
by identifying {0, 1}. Denote by S 1 the unit circle in the complex plane. The function
f : I → S 1 given by
x 7→ exp(2πix)
assumes the same value at 0 and 1 and so induces a function f¯ : I/∼ → S 1 .

Proposition 2.3.3
The function f¯ : I/∼ → S 1 is a homeomorphism.

Proof
Since f is continuous, f¯ is also continuous. Clearly, f¯ is a bijection. As the continuous
image of the compact set I, the quotient I/∼ is compact. Thus f¯ is a continuous bijection

from a compact space to a Hausdorff space.
From elementary point-set topology, the continuous image of a compact space is compact,
and a compact subspace of a Hausdorff space is closed, so f¯ is a closed mapping. But then
the inverse is continuous since the preimage under it of any closed set is closed. Hence f¯ is a homeomorphism.

2.3.4 A Necessary Condition for a Hausdorff Quotient

The quotient construction does not in general preserve the Hausdorff or second countability
properties. We can derive a necessary condition through the following observation: Every
singleton in a Hausdorff space is closed. Thus if π : S → S/∼ is the projection and the
quotient is Hausdorff, then for any p ∈ S, its image {π(p)} is closed in S/∼. By the
continuity of π, the inverse image π −1 (π(p)) = [p] is closed in S.
Proposition 2.3.4
If the quotient space S/∼ is Hausdorff, then the equivalence class [p] of any point p ∈ S
is necessarily closed in S.

Example 2.3.5

Define an equivalence relation ∼ on R by identifying the open interval (0, ∞) to a point.


Then the quotient R/∼ is not Hausdorff since the equivalence class (0, ∞) is not closed in R.

2.3.5 Open Equivalence Relations

We derive conditions under which a quotient space is Hausdorff or second countable. Recall
that a map f : X → Y of topological spaces is an open mapping if the image of open sets
under f is open.

Definition 2.3.1 (Open Equivalence Relation)
An equivalence relation ∼ on a topological space S is said to be open if the projection
map π : S → S/∼ is open.

In other words, ∼ on S is open if and only if for every open set U ⊆ S, the set
π −1 (π(U )) = ∪_{x∈U} [x]

of all points equivalent to some point of U is open.

Example 2.3.6
Consider R with ∼ the equivalence relation that identifies 1, −1. Then

π −1 (π(−2, 0)) = (−2, 0) ∪ {1}

is not open in R so ∼ cannot be open.
Given an equivalence relation ∼ on S, let R ⊆ S × S be the subset that defines the relation
R := {(x, y) ∈ S × S : x ∼ y}.
We say that R is the graph of ∼.

Theorem 2.3.7
Let ∼ be an open equivalence relation on a topological space S. Then the quotient
space S/∼ is Hausdorff if and only if the graph R of ∼ is closed in S × S.

Proof
By definition, R is closed in S × S if and only if (S × S) − R is open in S × S. In other
words, for every (x, y) ∈ S × S − R, there is a basic open set U × V containing (x, y)
such that U × V ⊆ S × S − R. This can be restated as: for every pair of points x ≁ y in S, there are
neighborhoods U ∋ x, V ∋ y in S such that no element of U is equivalent to an element
of V . This is equivalent to the statement that for any two points [x] 6= [y] ∈ S/∼, there
are neighborhoods U 3 x, V 3 y in S such that π(U ) ∩ π(V ) = ∅ in S/∼.

( =⇒ ) Suppose R is closed in S × S and consider the last equivalent statement. By


definition, S/∼ is Hausdorff.
( ⇐= ) Conversely, suppose that S/∼ is Hausdorff. Choose [x] 6= [y] ∈ S/∼. There are
disjoint open sets A, B ⊆ S/∼ such that [x] ∈ A, [y] ∈ B. By the surjectivity of π, we
have A = π(π −1 A) and B = π(π −1 B). Let U = π −1 A and V = π −1 B. Then x ∈ U, y ∈ V and
A = π(U ) and B = π(V ) are disjoint open sets in S/∼.
This is the last equivalent condition to R being closed in S × S.

Corollary 2.3.7.1
A topological space S is Hausdorff if and only if the diagonal ∆ in S × S is closed.

Theorem 2.3.8
Let ∼ be an open equivalence relation on a topological space S with projection π :
S → S/∼. If B := {Bα } is a basis of S, then the image {π(Bα )} under π is a basis

for S/∼.

Proof
Let W ⊆ S/∼ be open and pick [x] ∈ W for some x ∈ S. Then π −1 (W ) 3 x is open in
S and there is some Bα 3 x contained in π −1 (W ). Then [x] = π(x) ∈ π(Bα ) ⊆ W as
desired.

Corollary 2.3.8.1
If ∼ is an open equivalence relation on a second-countable space S, then the quotient
space S/∼ is second countable.

2.3.6 The Real Projective Space

Definition 2.3.2 (Real Projective Space RP n )


Define an equivalence relation on Rn+1 − {0} by
x ∼ y ⇐⇒ ∃t ∈ R× , y = tx.

The real projective space RP n is defined as the quotient space


(Rn+1 − {0})/∼.

We denote the equivalence class of a point (a0 , . . . , an ) ∈ Rn+1 − {0} by [a0 , . . . , an ] and let
π : Rn+1 − {0} → RP n be the projection. We call [a0 , . . . , an ] homogeneous coordinates on
RP n .

Geometrically speaking, two nonzero points in Rn+1 are equivalent if and only if they lie on
the same line through the origin, so RP n can be interpreted as the set of all lines through
the origin in Rn+1 . Such a line uniquely determines a pair of antipodal points on S n . This
suggests that we define an equivalence relation on S n by identifying antipodal points:

x ∼ y ⇐⇒ x = ±y.

We then have a bijection RP n ↔ S n /∼.

Proposition 2.3.9 (Real Projective Space as a Quotient of a Sphere)
For x = (x1 , . . . , xn+1 ) ∈ Rn+1 , let ‖x‖ denote the Euclidean norm of x. The map f :
Rn+1 − {0} → S n given by

x ↦ x/‖x‖
induces a homeomorphism f¯ : RP n → S n /∼.

Proof
Since f is continuous on Rn+1 − {0}, π∼ ◦ f : Rn+1 − {0} → S n /∼ is a composition
of continuous functions and is therefore continuous. By a previous proposition, π∼ ◦ f is
constant on the equivalence classes of RP n and the natural quotient function f¯ is therefore
also continuous.
Consider the identity map Id : Rn+1 → Rn+1 . This is certainly continuous. But then

g : S n → (Rn+1 − {0})/∼ given by g = π∼ ◦ Id ◦ ι, where ι is the inclusion map, is continuous.
Since g is constant over the equivalence classes of ∼, the natural quotient function ḡ is
also continuous.
By observation,

f¯[x0 , . . . , xn ] = [x/‖x‖]        and        ḡ[x] = [x0 , . . . , xn ]

are inverses and we conclude the proof.


Example 2.3.10 (The Real Projective Line RP 1 )
RP 1 is homeomorphic to the quotient S 1 /∼, which is in turn homeomorphic to the closed
upper semicircle with two identified endpoints. This is in turn homeomorphic to S 1 and
so RP 1 ≅ S 1 .

Example 2.3.11 (The Real Projective Plane RP 2 )


RP 2 is homeomorphic to S 2 /∼. We can interpret this as the closed upper hemisphere
H 2 ⊆ R3 in which each pair of antipodal points on the equator is identified, denoted
H 2 /∼. Let D2 ⊆ R2 denote the closed unit disk and remark the map ϕ : H 2 → D2 given

by
(x, y, z) 7→ (x, y)
is a homeomorphism. Let D2 /∼ denote the closed unit disk with antipodal points iden-
tified. ϕ induces a homeomorphism from H 2 /∼ to D2 /∼. Thus RP 2 ≅ D2 /∼.
RP 2 cannot be embedded as a submanifold into R3 . However, if we allow self-intersection,
we can map RP 2 → R3 as a cross-cap. This map is not injective.

Proposition 2.3.12
The equivalence relation on Rn+1 − {0} in the definition of RP n is an open equivalence
relation.

Proof
For an open subset U ⊆ Rn+1 − {0}, the image π(U ) is open in RP n if and only if
π −1 (π(U )) is open in Rn+1 − {0}. But π −1 (π(U )) consists of all nonzero scalar multiples

of points of U . That is,
π −1 (π(U )) = ∪_{t∈R×} tU = ∪_{t∈R×} {tp : p ∈ U }.

Since multiplication by t ∈ R× is a homeomorphism of Rn+1 − {0}, the set tU is open for
any t and so the union of all such sets remains open.

Corollary 2.3.12.1
The real projective space RP n is second countable.

Proposition 2.3.13
The real projective space RP n is Hausdorff.

Proof
Let S = Rn+1 − {0} and consider the set
R = {(x, y) ∈ S × S : ∃t ∈ R× , y = tx}.

If we write x, y as column vectors, then [x y] is an (n + 1) × 2 matrix. R can then be
characterized as the set of pairs (x, y) ∈ S × S such that [x y] has rank at most 1. By a standard fact
from linear algebra, this is equivalent to all 2 × 2 minors of [x y] being zero. Thus as
the zero set of finitely many polynomials, R is a closed subset of S × S. Since ∼ is an
open equivalence relation on S, and R is closed in S × S, a previous theorem informs us
that S/∼ ∼= RP n is Hausdorff.

2.3.7 The Standard Smooth Atlas on a Real Projective Space

Let [a0 , . . . , an ] be homogeneous coordinates on the projective space RP n . Although a0 is


not a well-defined function on RP n , the condition a0 ≠ 0 is independent of the choice of
representative. Hence the condition a0 6= 0 makes sense on RP n and we can define

Ui := {[a0 , . . . , an ] : ai 6= 0}

for each i ∈ {0, 1, . . . , n}. Define φi : Ui → Rn by

[a0 , . . . , an ] ↦ (a0 /ai , . . . , âi /ai , . . . , an /ai )

where the carat sign indicates the entry is to be omitted. This map has a continuous inverse

(b1 , . . . , bn ) ↦ [b1 , . . . , 1, . . . , bn ]    (with the 1 in the i-th entry)

It follows that RP n is locally Euclidean with (Ui , φi ) as charts.


On the intersection U0 ∩ U1 , we have a0 6= 0 and a1 6= 0, and there are two coordinates
systems. We refer to the coordinate functions on U0 as x1 , . . . , xn
ai

Zh
xi = , i ∈ [n].
a0
and the coordinate functions on U1 as y 1 , . . . , y n
a0 i ai
y1 = , y = , i ∈ {2, . . . , n}.
a1 a1
So on U0 ∩ U1 ,
1 x2 xn
 
(φ1 ◦ φ−1
0 )(x) = , , . . . , .
x1 x1 x1
This is a smooth function since x1 6= 0 on φ0 (U0 ∩ U1 ). A similar formula holds on any other
Ui ∩ Uj . Thus the collection {(Ui , φi )}i=0,...,n is a smooth atlas for RP n , called the standard
atlas. We conclude that RP n is a smooth manifold as desired.
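As a concrete illustration (not part of the text), for n = 2 the transition map φ1 ◦ φ0−1 can be computed symbolically and has the stated form; the chart formulas below follow the definitions above, with ad hoc variable names.

import sympy as sp

# Charts on RP^2: phi0[a0:a1:a2] = (a1/a0, a2/a0), phi1[a0:a1:a2] = (a0/a1, a2/a1).
x1, x2 = sp.symbols('x1 x2', nonzero=True)

def phi0_inv(x):
    return (1, x[0], x[1])          # a representative with a0 = 1

def phi1(a):
    a0, a1, a2 = a
    return (sp.simplify(a0 / a1), sp.simplify(a2 / a1))

print(phi1(phi0_inv((x1, x2))))     # (1/x1, x2/x1), smooth where x1 != 0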
2.3.8 The Grassmannian Manifold

Proposition 2.3.14 (Orbit Space of a Continuous Group Action)


Suppose a right action of a topological group G on a topological space S is continuous.
Define two points x, y to be equivalent if they are in the same orbit, so if there is some
g ∈ G such that y = xg.

Define S/G to be the quotient space, also known as the orbit space of the action. Then
the projection map π : S → S/G is an open map.

Proof
It suffices to show that π −1 (π(U )) is open for every U ⊆ S open.
Note that right multiplication by some g ∈ G is a homeomorphism S → S. We can see
this by considering {g} as a topological space under the subspace topology and using the

continuity of the inclusion map.
We can express

π −1 (π(U )) = ∪_{g∈G} U g

as a union of open sets which is therefore open as desired.


The Grassmannian G(k, n) is the set of all k-planes in Rn , or in other words, the set of all

k-dimensional subspaces of Rn . Such a subspace is completely determined by a full-rank
matrix A ∈ Rn×k , known as a matrix representative of the k-plane. Two matrices A, B
determine the same k-plane if there is a change of basis matrix g ∈ GL(k, R) such that
B = Ag.
Let F (k, n) be the set of all n × k matrices of rank k, topologized as a subspace of Rn×k . We
define the equivalence relation

A ∼ B ⇐⇒ ∃g ∈ GL(k, R), B = Ag.
Then there is a bijection between G(k, n) and F (k, n)/∼, and we give G(k, n) the quotient
topology of F (k, n)/∼.

Proposition 2.3.15
F (k, n)/∼ is Hausdorff and second-countable.

Proof
We can view F/∼ as the orbit space of the action
F (n, k) × GL(k, R) → F (n, k)
(A, g) 7→ Ag
which is certainly a continuous group action. But then ∼ is an open relation by the
previous proposition. It follows immediately that F (n, k)/ ∼ is second countable.
In order to show that F (n, k)/ ∼ is Hausdorff, it suffices to show the graph R ⊆ F (n, k) ×
F (n, k) of the relation ∼ is closed. We have

R = {(A, B) : ∃g, B = Ag}
  = {(A, B) : rank [A B] ≤ k}
  = {(A, B) : all (k + 1) × (k + 1) minors of [A B] have determinant 0}.

The last set is a finite intersection of zero-sets of polynomials, which is certainly closed.
We now construct a smooth atlas for G(n, k).
Let
I := {(i1 , . . . , ik ) : 1 ≤ i1 < · · · < ik ≤ n}

be the set of increasing row indices. Write AI to denote the submatrix of A made of the
rows of A indexed by I. For I ∈ I, define

VI := {A ∈ F (k, n) : det AI 6= 0}

and remark that this is an open set. Define φ̃I : VI → R(n−k)×k by

A ↦ (A A_I^{-1})_{I^c} = A_{I^c} A_I^{-1}

where (·)I c indicates taking the rows not indexed by I. This is certainly a continuous map.
Note that if A ∈ VI , then Ag ∈ VI for any g ∈ GL(k, R) since

(Ag)I = AI g ∈ GL(k, R).

Proposition 2.3.16
Let φI : (UI := VI /∼) → R(n−k)×k denote the map induced by φ̃I . Then φI is
well-defined and
{(UI , φI )}I∈I
is a smooth structure on G(n, k).

Proof
We first check that φ̃I is constant on the orbits of GL(k, R). Indeed,

(Ag)_{I^c} (Ag)_I^{-1} = A_{I^c} g g^{-1} A_I^{-1} = A_{I^c} A_I^{-1} .

Then we note that φ̃I : VI → R(n−k)×k is continuous since matrix inversion and multipli-
cation are smooth mappings. But then φI : UI → R(n−k)×k is continuous as desired.
To see that it has a continuous inverse, define B (I) as the matrix obtained from B by
inserting the identity matrix in the I-th rows and consider the mapping

R(n−k)×k → UI
B 7→ [B (I) ].

This is certainly continuous since it is the composition of a continuous map with the
projection map. To see that this is the inverse of φI , observe that every element [A] ∈ UI
has a canonical representative
A A_I^{-1}

whose I-th rows form the identity matrix. φ̃ selects the I c rows from the canonical
representation and the map above sends the I c rows back to the equivalence class of the
canonical representation.

Finally, we need to show the smooth compatibility of charts. Pick I, J ∈ I and consider

φ_I ◦ φ_J^{-1} : φ_J (U_I ∩ U_J ) → φ_I (U_I ∩ U_J ),
B ↦ (B^{(J)})_{I^c} ((B^{(J)})_I )^{-1} .

This is smooth since matrix inversion and multiplication are both smooth mappings.
Thus we see that G(k, n) is a smooth manifold of dimension (n − k)k.
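A quick numerical illustration (not from the text) of the well-definedness argument: the chart value A_{I^c} A_I^{-1} does not change when a matrix representative A is replaced by Ag. The sizes and index set below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
I = [0, 2]                                   # chosen row indices (det A_I != 0 generically)
Ic = [i for i in range(n) if i not in I]

def chart(A):
    return A[Ic] @ np.linalg.inv(A[I])       # A_{I^c} A_I^{-1}

A = rng.standard_normal((n, k))              # a matrix representative of a 2-plane in R^5
g = rng.standard_normal((k, k))              # generically invertible change of basis
assert np.allclose(chart(A), chart(A @ g))   # same chart value for A and Ag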

Chapter 3

The Tangent Space

By definition, the tangent space to a manifold at a point is the vector space of derivations at
the point. A smooth map of manifolds induces a linear map between tangent spaces at cor-
responding points, called its differential. In local coordinates, the differential is represented
by the Jacobian of partial derivatives of the map.
A basic principle in manifold theory is the linearization principle, meaning a manifold can
be approximated near a point by its tangent space at the point, and a smooth map can be
approximated by the differential of the map. One example of this is the inverse function
theorem.
3.1 The Tangent Space
One can define a tangent vector as an arrow in the image of the chart. However, this approach
is complicated since a different chart would give rise to a different set of tangent vectors.
The cleaner approach is to consider point-derivations.

3.1.1 The Tangent Space at a Point



We define germs similarly to Rn .

Definition 3.1.1 (Germ)


A germ of a C ∞ function M → N is an equivalence class of C ∞ functions defined in
a neighborhood of some p ∈ M . Two such functions are equivalent if they agree on
some (possibly smaller) neighborhood of p.

The set of germs of C ∞ real-valued functions at p ∈ M is denoted Cp∞ (M ). Addition and

multiplication of functions make Cp∞ (M ) into a ring. Scalar multiplication by reals makes
Cp∞ (M ) an algebra over R.
We similarly generalize the definition of a point-derivation from Rn .

Definition 3.1.2 (Point-Derivation)


A derivation at a point p ∈ M /point-derivation of Cp∞ (M ) is a linear map D :

Cp∞ (M ) → R such that

D(f g) = (Df )g(p) + f (p)Dg.

Definition 3.1.3 (Tangent Vector)


A tangent vector at p ∈ M is a derivation at p.

Just as for Rn , the tangent vectors at p form a vector space Tp (M ) = Tp M .
Remark 3.1.1 If U is an open set containing p in M , then the algebra Cp∞ (U ) is the same
as Cp∞ (M ). Hence Tp U = Tp M .

Given a coordinate neighborhood (U, φ) = (U, x1 , . . . , xn ) about some p ∈ M , recall the


definition of the partial derivative
∂/∂xi |p f = ∂/∂ri |φ(p) (f ◦ φ−1 ) ∈ R
where ri is the i-th standard coordinate on Rn . It can be checked that ∂/∂xi satisfies the
derivation property and so is a tangent vector at p.
When M is one-dimensional and t is a local coordinate, it is customary to write d/dt|p for
∂/∂t|p . We often omit |p if it is understood at which point the tangent vector is located.

3.1.2 The Differential of a Map



Definition 3.1.4 (Differential)


Let F : N → M be a smooth map between manifolds. At each point p ∈ N , the map
induces a linear map of tangent spaces F∗ : Tp N → TF (p) M which sends Xp ∈ Tp N to
F∗ (Xp ) defined by
F∗ (Xp )f := Xp (f ◦ F ) ∈ R
for f ∈ CF∞(p) M .

Note in the definition above, f is a representative of a germ at F (p).

Proposition 3.1.2
F∗ (Xp ) is a derivation at F (p) and F∗ : Tp N → TF (p) M is a linear map.

Proof
Let f, g ∈ CF∞(p) M .

F∗ (Xp )(f g) := Xp (f g ◦ F )
= Xp ((f ◦ F )(g ◦ F ))
= Xp (f ◦ F )(g ◦ F )(p) + (f ◦ F )(p)Xp (g ◦ F )
= F∗ (Xp )f g(F (p)) + f (F (p))F∗ (Xp )g

as desired.
To see linearity, let α ∈ R and Xp , Yp ∈ Tp N . For any f ∈ CF∞(p) ,

F∗ (αXp + Yp ) := (αXp + Yp )(f ◦ F )
= αXp (f ◦ F ) + Yp (f ◦ F )
= αF∗ (Xp )f + F∗ (Yp )f.
We sometimes write F∗,p = F∗ to make the dependence on p explicit.

Example 3.1.3 (Differential of a Map between Euclidean Spaces)


Suppose F : Rn → Rm is smooth and p ∈ Rn . The tangent vectors ∂/∂xj |p , j ∈ [n] and
∂/∂y i |F (p) , i ∈ [m] form bases for the tangent spaces Tp Rn and TF (p) Rm , respectively. The
linear map F∗ : Tp Rn → TF (p) Rm is described by a matrix [aij ] relative to the two bases:

F∗ (∂/∂xj |p ) = Σ_k akj ∂/∂y k |F (p) .

Let F i := y i ◦ F . We can find aij by evaluating both sides at y i .


Σ_k akj ∂/∂y k |F (p) y i = Σ_k akj δki = aij ,

F∗ (∂/∂xj |p ) y i = ∂/∂xj |p (y i ◦ F ) = ∂F i /∂xj (p).

Thus the matrix of F∗ is precisely the Jacobian matrix of F at p.
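For a concrete instance (an illustrative snippet, not part of the text), take F : R2 → R2 given by F (x, y) = (x2 − y 2 , 2xy); the matrix of F∗,p with respect to the coordinate bases is the Jacobian matrix evaluated at p.

import sympy as sp

x, y = sp.symbols('x y', real=True)
F = sp.Matrix([x**2 - y**2, 2*x*y])       # an arbitrary smooth map R^2 -> R^2
J = F.jacobian(sp.Matrix([x, y]))
print(J)                                  # Matrix([[2*x, -2*y], [2*y, 2*x]])
print(J.subs({x: 1, y: 2}))               # matrix of F_* at p = (1, 2)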

3.1.3 The Chain Rule

Theorem 3.1.4 (Chain Rule)


If F : N → M and G : M → P are smooth maps between manifolds, then for p ∈ N ,

(G ◦ F )∗,p = G∗,F (p) ◦ F∗,p .

Proof
Let Xp ∈ Tp N and f a smooth function at G(F (p)) ∈ P .

((G ◦ F )∗ Xp )f = Xp (f ◦ G ◦ F )
=: (F∗ Xp )(f ◦ G)

=: (G∗ (F∗ Xp ))f
= ((G∗ ◦ F∗ )Xp )f.

Remark 3.1.5 The differential of the identity map M → M is the identity map Tp M →
Tp M .

Corollary 3.1.5.1
If F : M → M is a diffeomorphism of manifolds, then F∗ : Tp N → TF (p) M is an
isomorphism of vector spaces for any p ∈ N .
Proof
By assumption, F has a smooth inverse G : M → N . By the chain rule,
eli
F∗ ◦ G∗ = (F ◦ G)∗
= (IdM )∗
= IdTp M

and similarly for G∗ ◦ F∗ .



Corollary 3.1.5.2 (Invariance of Dimension)


If an open subset U ⊆ Rn is diffeomorphic to an open subset V ⊆ Rm , then n = m.

Proof
Let F : U → V be any diffeomorphism and p ∈ U . The previous corollary informs us
that F∗,p : Tp U → TF (p) V is an isomorphism of vector spaces. Since there are vector space
isomorphisms Tp U ≅ Rn and TF (p) V ≅ Rm , we must have n = m.

3.1.4 Bases of Tangent Space at a Point

As usual, we denote by {ri } the standard coordinates on Rn and if (U, φ) is a chart about
a point p in a an n-manifold M , we set xi = ri ◦ φ. Since φ : U → Rn is a diffeomorphism
onto its image, a previous corollary tells us that the differential

φ∗ : Tp M → Tφ(p) Rn

is a vector space isomorphism. In particular, dim Tp M = n = dim M .

Proposition 3.1.6
Let (U, φ) = (U, x1 , . . . , xn ) be a chart about p ∈ M . Then
φ∗ (∂/∂xi |p ) = ∂/∂ri |φ(p) .

Proof
Let f ∈ Cφ(p)∞ (Rn ). Then

φ∗ (∂/∂xi |p ) f = ∂/∂xi |p (f ◦ φ)
               = ∂/∂ri |φ(p) (f ◦ φ ◦ φ−1 )
               = ∂/∂ri |φ(p) f.

Proposition 3.1.7

Let (U, φ) = (U, x1 , . . . , xn ) be a chart containing p. Then the tangent space Tp M has
basis
∂/∂x1 |p , . . . , ∂/∂xn |p .

Proof
An isomorphism of vector spaces carries a basis to a basis. The image of the tangent
vectors above map to the partial derivatives at φ(p), which forms a basis of Tφ(p) Rn .

Proposition 3.1.8 (Transition Matrix for Coordiante Vectors)
Suppose (U, x1 , . . . , xn ) and (V, y 1 , . . . , y n ) are two coordinate charts on a manifold M .
Then on U ∩ V ,
∂/∂xj = Σ_i (∂y i /∂xj ) ∂/∂y i .

Proof
At each p ∈ U ∩ V , there are two bases for the tangent space Tp M and so there is a matrix
[aij (p)] such that
∂/∂xj = Σ_k akj ∂/∂y k .

Evaluating both sides at y i , we get

∂y i /∂xj = Σ_k akj ∂y i /∂y k = Σ_k akj δki = aij .
3.1.5 A Local Expression for the Differential

Given a smooth map F : N → M of manifolds and p ∈ N , let (U, x1 , . . . , xn ) be a chart about


p ∈ N and (V, y 1 , . . . , y m ) be a chart about F (p) ∈ M . We find a local expression for the
differential relative to the two charts.

The partial derivatives at p, F (p) form a basis for Tp N, TF (p) M respectively. Thus the dif-
ferential F∗ = F∗,p is completely determined by the numbers aij

F∗ (∂/∂xj |p ) = Σ_k akj ∂/∂y k |F (p) .

Applying both sides to y i , we find that

aij = Σ_k akj ∂/∂y k |F (p) y i
    = F∗ (∂/∂xj |p ) y i
    = ∂/∂xj |p (y i ◦ F )
    = ∂F i /∂xj (p).

Proposition 3.1.9
Given a smooth map F : N → M of manifolds and a point p ∈ N , let (U, (xj )) and

(V, (y i )) be coordinate charts about p ∈ N and F (p) ∈ M respectively. Relative to
the bases {∂/∂xj |p } for Tp N and {∂/∂y i |F (p) } for TF (p) M , the differential F∗,p : Tp N →
TF (p) M is represented by the matrix

    [ ∂F^i/∂x^j (p) ].

Remark 3.1.10 The inverse function theorem for manifolds has a coordinate-free descrip-
tion: A smooth map F : N → M between two manifolds of the same dimension is locally
invertible at p ∈ N if and only if its differential at p F∗,p : Tp N → TF (p) M is an isomorphism.
3.1.6 Curves in a Manifold
Definition 3.1.5 (Smooth Curve)
A smooth curve in a manifold M is a smooth map c : (a, b) → M .

Usually we assume 0 ∈ (a, b) and say that c is a curve starting at p if c(0) = p.



Definition 3.1.6 (Velocity Vector)


The velocity vector c′(t0 ) of the curve c at time t0 ∈ (a, b) is defined to be

    c′(t0 ) := c∗ (d/dt |_{t0}) ∈ Tc(t0 ) M.

Thus c0 (t0 ) is simply the differential of the curve at the point t = t0 . We say that c0 (t0 ) is
the velocity of c at the point c(t0 ).

If we need to distinguish between the standard calculus notation c0 (t) and the tangent vector
c0 (t0 ), we will write ċ(t) to denote the calculus derivative.

Proposition 3.1.11 (Velocity Vector & Calculus Derivative)


Let c : (a, b) → R be a curve with codomain R. Then

    c′(t0 ) = ċ(t0 ) d/dx |_{c(t0 )} .

Proof
Pick f ∈ C^∞_{c(t0 )} (R). By the chain rule from calculus,

    c′(t0 )f := c∗ (d/dt |_{t0}) f
             := d/dt |_{t0} (f ◦ c)
             = ċ(t0 ) d/dx |_{c(t0 )} f.

This concludes the proof.

Example 3.1.12
Define c : R → R2 by
c(t) := (t2 , t3 ).
Then c0 (t) is a linear combination of ∂/∂x and ∂/∂y at c(t):

    c∗ (d/dt) = c′(t) = a ∂/∂x + b ∂/∂y .

We can evaluate both sides at x, y respectively to extract the coefficients a = 2t, b = 3t2 .
More generally, to compute the velocity vector of a smooth curve c ∈ Rn , one can simply
differentiate the components of c.
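For instance (an illustrative sketch, not from the text, assuming sympy is available), the coefficients a = 2t and b = 3t^2 above are just the derivatives of the components of c:

import sympy as sp

t = sp.symbols('t')
c = sp.Matrix([t**2, t**3])     # the curve c(t) = (t^2, t^3)
print(c.diff(t))                # Matrix([[2*t], [3*t**2]]), i.e. a = 2t and b = 3t^2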

Proposition 3.1.13 (Velocity of a Curve in Local Coordinates)


Let c : (a, b) → M be a smooth curve, and (U, x1 , . . . , xn ) a coordinate chart about c(t).
Write c^i = x^i ◦ c for the i-th component of c in the chart. Then c′(t) is given by

    c′(t) = Σ_{i=1}^{n} ċ^i (t) ∂/∂x^i |_{c(t)} .

Proof
We already know that c0 (t) is a linear combination of the partial derivatives at c(t), say
with coefficients ai . Evaluating the tangent vector at xi yields

    a^i = c′(t) x^i
        = c∗ (d/dt |_t) x^i
        = d/dt |_t (x^i ◦ c)
        = d/dt |_t c^i
        = ċ^i (t).
Every smooth curve c at a point p ∈ M gives rise to a tangent vector c0 (0) ∈ Tp M . Conversely,

we can show that every tangent vector Xp ∈ Tp M is precisely the velocity vector of some
curve at p.

Proposition 3.1.14 (Existence of a Curve with a given Initial Vector)


For any point p ∈ M of a manifold and any tangent vector Xp ∈ Tp M , there are ε > 0
and a smooth curve c : (−ε, ε) → M such that c(0) = p and c0 (0) = Xp .

Proof
Let (U, φ) = (U, x^1 , . . . , x^n ) be a chart centered at p, ie φ(p) = 0 ∈ R^n . Suppose
Xp = Σ_i a^i ∂/∂x^i |_p at p. Let {r^i } be the standard coordinates on R^n and write x^i = r^i ◦ φ.

We wish to find a curve α : (−ε, ε) → R^n such that α(0) = 0 and α′(0) = Σ_i a^i ∂/∂r^i |_0 .
The previous proposition states that one possible choice of α is given by
α(t) = (a1 t, . . . , an t)

where we must choose the domain (−ε, ε) to be sufficiently small such that α(t) still lies
in φ(U ).
Next, we map α to M via φ−1 . Define c := φ−1 ◦ α : (−ε, ε) → M . Then

c(0) = φ−1 (α(0)) = φ−1 (0) = p.

By the chain rule and the fact that charts are tangent space isomorphisms,

    c′(0) = (φ^{-1})∗ (α∗ (d/dt |_0))            (chain rule)
          = (φ^{-1})∗ (Σ_i a^i ∂/∂r^i |_0)       (previous proposition)
          = Σ_i a^i ∂/∂x^i |_p                   (tangent space isomorphism)
          = Xp .
we can now interpret the abstract definition of a tangent vector geometrically as a directional
derivative using curves.

Proposition 3.1.15

Suppose Xp is a tangent vector at a point p ∈ M of a manifold and f ∈ Cp∞ (M ). If
c : (−ε, ε) → M is a smooth curve starting at p with c0 (0) = Xp , then

    Xp f = d/dt |_0 (f ◦ c).

Proof
By the definitions of c0 (0) and c∗ ,

    Xp f = c′(0) f
         = c∗ (d/dt |_0) f
         = d/dt |_0 (f ◦ c).

3.1.7 Computing the Differential Using Curves

We have two ways of computing the differential of a smooth map: the function definition

in terms of point derivations and a matrix representation in terms of local coordinates. We


now give another way of computing the differential using curves.

Proposition 3.1.16
Let F : N → M be a smooth map between manifolds, p ∈ N , and Xp ∈ Tp N . If c is a
smooth curve starting at p ∈ N with velocity Xp at p, then

F∗,p (Xp ) = (F ◦ c)0 (0).

Thus F∗,p (Xp ) is the velocity vector of the image curve F ◦ c at F (p).

Proof
By assumption, c(0) = p and c′(0) = Xp . Thus

    F∗,p (Xp ) = F∗,p (c′(0))
              = (F∗,p ◦ c∗,0 )(d/dt |_0)
              = (F ◦ c)∗,0 (d/dt |_0)
              = (F ◦ c)′(0).

Example 3.1.17 (Differential of Left Matrix Multiplication)

Let g ∈ GL(n, R) and `g : GL(n, R) → GL(n, R) denote left multiplication by g. Since
GL(n, R) is an open subset of the vector space Rn×n , the tangent space Tg (GL(n, R)) can
be identified with Rn×n by mapping partial derivatives to canonical basis vectors. Let
Φ : Tg (GL(n, R)) → Rn×n denote this identification.
Let X ∈ TId (GL(n, R)) = R^{n×n} . To compute (ℓg )∗,Id (X), we can choose a curve c(t) in
GL(n, R) such that c(0) = Id and c′(0) = X. From a previous proposition, this means
that c′(0)f = (f ◦ c)˙(0) for any f ∈ C^∞_Id (GL(n, R)). We have shown that such a curve
always exists.
Then ℓg (c(t)) = g c(t) is simply matrix multiplication, so by the previous proposition

    (ℓg )∗,Id (X) = (ℓg ◦ c)′(0)

and hence

    Φ((ℓg )∗,Id (X)) = Φ((ℓg ◦ c)′(0)) = (ℓg ◦ c)˙(0) = g ċ(0) = g Φ(X).

Thus with the identification above, the differential



(`g )∗,Id : TId (GL(n, R)) → Tg (GL(n, R))

is also left multiplication by g.
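A numerical sanity check of this identification (an illustrative sketch, not from the text; it assumes numpy is available), approximating the velocity of the image curve by finite differences:

import numpy as np

rng = np.random.default_rng(0)
n = 3
g = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # a matrix close to Id, hence invertible
X = rng.standard_normal((n, n))                     # a tangent vector at Id

def image_curve(t):
    return g @ (np.eye(n) + t * X)                  # l_g applied to the curve c(t) = Id + tX

h = 1e-6
velocity = (image_curve(h) - image_curve(-h)) / (2 * h)   # velocity of l_g o c at t = 0
print(np.allclose(velocity, g @ X))                       # True: the differential is X -> gX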

3.1.8 Immersions and Submersions

Two important cases of differential maps are immersions and submersions.

Definition 3.1.7 (Immersion)
A smooth map F : N → M between manifolds is said to be an immersion at p ∈ N
if its differential F∗,p : Tp N → TF (p) M is injective. If this holds at all p ∈ N , we say
F is an immersion.

Definition 3.1.8 (Submersion)

A smooth map F : N → M between manifolds is said to be an submersion at p ∈ N
if its differential F∗,p : Tp N → TF (p) M is surjective. If this holds at all p ∈ N , we say
F is an submersion.

Remark 3.1.18 Recall that if N, M have dimensions n, m respectively, then dim Tp N = n and
dim TF (p) M = m. The injectivity of the differential F∗,p implies immediately that n ≤ m.
Similarly, surjectivity implies n ≥ m.

Example 3.1.19
The prototype of an immersion is the inclusion of Rn in a higher-dimensional Rm :

ι(x1 , . . . , xn ) = (x1 , . . . , xn , 0, . . . , 0).

The prototypical submersion is the projection of Rn onto a lower-dimensional Rm :

π(x1 , . . . , xm , xm+1 , . . . , xn ) = (x1 , . . . , xm ).


Example 3.1.20
if U ⊆ M is an open subset, then the inclusion ι : U → M is both an immersion and a
submersion. Thus a submersion need not be onto.
3.1.9 Rank, Critical & Regular Points

Recall the rank of a linear transformation between finite-dimensional vector spaces is the
dimension of the image.

Definition 3.1.9 (Rank of Smooth Map)


The rank of a smooth map F : N → M between manifolds is defined as the rank of
the differential F∗,p : Tp N → TF (p) M .

Relative to local coordinates (U, x1 , . . . , xn ) at p ∈ N and (V, y 1 , . . . , y m ) at F (p) ∈ M , the


differential is represented by the Jacobian JFp = [∂F^i /∂x^j (p)]. Hence

    rank F (p) = rank [ ∂F^i/∂x^j (p) ].

Note that the differential is independent of coordinate charts, and so is the rank of the
Jacobian matrix.

Definition 3.1.10 (Critical/Regular Point)


A point p ∈ N is a critical point of F : N → M if the differential F∗,p is not surjective.
Otherwise, it is a regular point.

Thus p is a regular point of F if and only if F is a submersion at p.

Definition 3.1.11 (Critical/Regular Value)


Let F : N → M be a smooth map between manifolds. A point in M is a critical
value if it is the image of a critical point. Otherwise, it is a regular value.

Note we do not define regular values as the image of some regular point. We require all

points in the pre-image to be regular. On the other hand, c ∈ M is critical if there is a single
critical point in the pre-image.

Proposition 3.1.21
Let f : M → R be smooth. A point p ∈ M is critical if and only if relative to some
chart (U, x1 , . . . , xn ) containing p, all the partial derivatives satisfy

∂f
(p) = 0
∂xj
for j ∈ [n].
Proof
The differential f∗,p : Tp M → Tf (p) R ∼ = R is represented by the Jacobian matrix. Since
the image is a linear subspace of R, it is either the zero map or a surjective map. Thus
f∗,p fails to be surjective if and only if all partial derivatives are zero.
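As a concrete illustration (not from the text; a sketch assuming sympy is available), the critical points of f (x, y) = x^3 − 3xy + y^3 on R^2 are found by solving for the simultaneous vanishing of the partials:

import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x*y + y**3
critical = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(critical)   # contains {x: 0, y: 0} and {x: 1, y: 1} (plus complex roots, depending on sympy)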

3.1.10 Useful Results



Proposition 3.1.22
Let M, N be manifolds and πM : M × N → M, πN : M × N → N be two projections.
For (p, q) ∈ M × N ,

(πM ∗ , πN ∗ ) : T(p,q) (M × N ) → Tp M × Tq N

is an isomorphism.

Proof
Recall that if (U, x^1 , . . . , x^n ) and (V, y^1 , . . . , y^m ) are charts about p ∈ M, q ∈ N respectively,
then (U × V, x^1 ◦ πM , . . . , y^1 ◦ πN , . . . ) is a chart about (p, q) ∈ M × N . Write x̄^i := x^i ◦ πM
and ȳ^i := y^i ◦ πN . We have

    πM ∗ (∂/∂x̄^j |_{(p,q)}) x^i = ∂/∂x̄^j |_{(p,q)} (x^i ◦ πM ) = ∂x̄^i/∂x̄^j = δ^i_j .

Hence

    πM ∗ (∂/∂x̄^j |_{(p,q)}) = ∂/∂x^j |_p .

By checking how πM ∗ , πN ∗ act on the canonical basis of T(p,q) (M × N ), we see that (πM ∗ , πN ∗ )
carries a basis to a basis and is therefore an isomorphism.

Proposition 3.1.23
Let G be a Lie group equipped with its multiplication map µ : G × G → G and inverse
map ι : G → G. The differential of µ at the identity e ∈ G is addition:

µ∗,(e,e) : Te G × Te G → Te G
(Xe , Ye ) 7→ Xe + Ye
x
while the differential of ι at the identity e ∈ G is negation

ι∗,e : Te G → Te G
eli
Xe 7→ −Xe .

Proof
For the first claim, it suffices to show that µ∗,(e,e) (Xe , 0) = Xe and µ∗,(e,e) (0, Ye ) = Ye .
The result then follows by linearity.
©F

To compute µ∗,(e,e) (Xe , 0), we construct a curve α(t) in G × G with α(0) = (e, e) and
α′(0) = (Xe , 0) as follows. Take any curve c(t) in G such that c(0) = e and c′(0) = Xe .
Then define α(t) := (c(t), e). Since (µ ◦ α)(t) = c(t) · e = c(t), we have

    µ∗,(e,e) (Xe , 0) = (µ ◦ α)′(0) = c′(0) = Xe .
The computation for (0, Ye ) 7→ Ye is identical.

For the second claim, we need only show that Xe + ι∗,e (Xe ) = 0. Indeed, take the
same curve c(t) in G as above but define α(t) := (c(t), (ι ◦ c)(t)). Note that α0 (0) =
(c0 (0), (ι ◦ c)0 (0)) by construction, which can be checked by evaluating both sides at the
coordinate functions. Then

    e = (µ ◦ α)(t)    for all t,

so differentiating at t = 0 gives

    0 = (µ ◦ α)′(0)
      = µ∗,(e,e) (Xe , (ι ◦ c)′(0))
      = Xe + (ι ◦ c)′(0)
      = Xe + ι∗,e (Xe ).
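A numerical illustration (not from the text; a sketch assuming numpy is available) for G = GL(n, R): differentiating curves through the identity recovers addition for µ and negation for ι:

import numpy as np

rng = np.random.default_rng(1)
n = 3
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
c = lambda t: np.eye(n) + t * X                     # curve through e = Id with velocity X
d = lambda t: np.eye(n) + t * Y                     # curve through e = Id with velocity Y

h = 1e-6
dmu = (c(h) @ d(h) - c(-h) @ d(-h)) / (2 * h)       # differential of mu along (c, d)
diota = (np.linalg.inv(c(h)) - np.linalg.inv(c(-h))) / (2 * h)
print(np.allclose(dmu, X + Y), np.allclose(diota, -X))   # True True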

Proposition 3.1.24 (Transforming Vectors to Coordinate Vectors)


Let X1 , . . . , Xn be vector fields on an open subset U of an n-manifold. Suppose at some
p ∈ U , the tangent vectors (X1 )p , . . . , (Xn )p are linearly independent. Then there is a

Zh
chart (V, x^1 , . . . , x^n ) about p such that

    (Xi )p = ∂/∂x^i |_p .

Proof
Let (U, y^1 , . . . , y^n ) be any chart about p. Write

    (Xj )p = Σ_i a^i_j ∂/∂y^i |_p .

Since the (Xj )p 's are linearly independent, the matrix A = [a^i_j ] is non-singular, and we
can define a new coordinate system x^1 , . . . , x^n by

    y^i = Σ_j a^i_j x^j .

In other words, x^j = Σ_i (A^{-1})^j_i y^i .

By the change of basis formula, which we recall can be verified by evaluating both sides
at y^k ,

    ∂/∂x^j = Σ_i (∂y^i/∂x^j) ∂/∂y^i = Σ_i a^i_j ∂/∂y^i .

By construction, this evaluates to (Xj )p at the point p.

Definition 3.1.12 (Local Maxima)
A real-valued function f : M → R on a manifold is said to have a local maximum at
p ∈ M if there is a neighborhood U 3 p such that f (p) ≥ f (q) for all q ∈ U .

Recall that if a differentiable function f : I → R on an interval has a local maximum at


some p ∈ I, then f 0 (p) = 0 ∈ Tf (p) R. This can be shown by the definition of the calculus

ou
derivative in terms of Newton quotients.

Proposition 3.1.25
A local maximum of a smooth function f : M → R is a critical point of f .

Proof
We need to show that f∗,p is not surjective, meaning it is the zero map, or sends any

Zh
tangent vector to 0.
Fix Xp ∈ Tp M . Let c(t) be a curve starting at p with initial velocity Xp . then f ◦ c is a
differentiable function on an interval with a local maximum at t = 0 so that (f ◦c)0 (0) = 0.
But then
f∗,p (Xp ) = (f ◦ c)0 (0) = 0
for any Xp ∈ Tp M , which concludes the proof.

3.2 Submanifolds
x
Currently, we can check that a given topological space is a manifold either by definition or
by exhibiting it as an appropriate quotient space. We now derive another way: exhibiting
the topological space as a (regular) submanifold of another manifold.
eli

3.2.1 Submanifolds

The xy-plane in R3 is the prototype of a regular submanifold of a manifold. It is defined by the


vanishing of the coordinate function z.
©F

Definition 3.2.1 (Regular Submanifold)


A subset S of an n-manifold is a regular submanifold of dimension k if for every p ∈ S,
there is a coordinate neighborhood (U, φ) = (U, x1 , . . . , xn ) of p in the maximal atlas
of N such that U ∩ S is defined by the vanishing of n − k of the coordinate functions.
By renumbering the coordinates, we may assume that these n−k coordinate functions
are xk+1 , . . . , xn .

On U ∩ S, φ = (x1 , . . . , xk , 0, . . . , 0). Let φS : U ∩ S → Rk be the restriction of the first k
components of φ to U ∩ S, so φS = (x1 , . . . , xk ). We call such a chart (U, φ) in N an adapted
chart relative to S. Note that (U ∩ S, φS ) is a chart for S in the subspace topology.

Definition 3.2.2 (Codimension)


If S is a regular submanifold of dimension k in an n-manifold N , then we say the
codimension of S in N is n − k.

ou
Remark 3.2.1 We remark that as a topological space, a regular submanifold of N is re-
quired to have the subspace topology.

Example 3.2.2 (Open Submanifolds are Regular Submanifolds)

Zh
The dimension k of the submanifold may be equal to n, the dimension of the manifold.
In this case, the charts of the submanifold have domain U ∩ S = U . Thus an open subset
of a manifold is a regular submanifold of the same dimension.

Example 3.2.3
The interval S = (−1, 1) on the x-axis is a regular submanifold of the xy-plane. We can
take the open square (−1, 1) × (−1, 1) as an adapted chart. Then U ∩ S is precisely the
zero set of y on U .

Example 3.2.4 (Topologist’s Sine Curve)


x
Let Γ be the graph of the function f (x) = sin(1/x) over the interval (0, 1) and S the union
of Γ and the open interval

I = {(0, y) ∈ R2 : y ∈ (−1, 1)}.


eli
S ⊆ R2 cannot be a regular submanifold. Indeed, for any p ∈ I, there is no adapted
chart containing p, since any sufficiently small neighborhood U 3 p in R2 intersects S in
infinitely many components.
Recall that the closure of Γ in R2 is called the topologist’s sine curve. It differs from S
in including the endpoints (1, sin 1), (0, 1), (0, −1).
©F

Proposition 3.2.5 (Regular Submanifolds are Manifolds)


Let S be a regular submanifold of N and U = {(U, φ)} a collection of compatible adapted
charts of N covering S. Then {(U ∩ S, φS )} is an atlas for S. Thus a regular submanifold
is itself a manifold.
Moreover, if N has dimension n and S is locally defined by the vanishing of n − k
coordinates, then dim S = k.

Proof
Let (U, φ) = (U, x1 , . . . , xn ) and (V, Ψ) = (V, y 1 , . . . , y n ) be two adapted charts in the
given collection. Assume that they intersect. As we remarked in the definition of a
regular submanifold, we can renumber the coordinates of any adapted chart relative to
a submanifold S so that the last n − k coordinates vanish on points of S. Then for
p ∈ U ∩ V ∩ S,
(ΨS ◦ φ−1 1 k 1 k
S )(x , . . . , x ) = (y , . . . , y ).

ou
Since Ψ ◦ φ−1 is smooth, the y^i ’s are smooth functions of the x^j ’s and ΨS ◦ φS^{-1} is therefore
smooth as well. Similarly, φS ◦ ΨS^{-1} is also smooth.

This shows that any two charts in {(U ∩ S), φS } are smoothly compatible. The collection
{(U ∩ S, φS )} is thus a smooth structure on S since the U ∩ S’s cover S by assumption.

Zh
3.2.2 Level Sets of a Function

Definition 3.2.3 (Level Set)


The level set of a map F : N → M is a subset

F −1 ({c}) = F −1 (c) = {p ∈ N : F (p) = c}

for some c ∈ M .

The value c ∈ M is called the level of F −1 (c). In the special case of F : N → Rm , we say
x
Z(F ) := F −1 (0) is the zero set of F .
Recall that c is a regular value of F if and only if either c is not in the image of F or at
every p ∈ F −1 (c), the differential F∗,p : Tp N → Tp M is surjective. The inverse image of a
eli
regular value c is called a regular level set. If the zero set of some F : N → Rm is regular, it
is called a regular zero set.
Remark that if a regular level set is nonempty, then the map F : N → M is a submersion
at p. In particular, dim N ≥ dim M .

Example 3.2.6 (The 2-Sphere in R3 )


©F

The unit 2-sphere


S 2 = {(x, y, z) ∈ R3 : x2 + y 2 + z 2 − 1 = 0}
is the zero set of the function

f (x, y, z) = x^2 + y^2 + z^2 − 1.

Now, the only critical point of f is 0, which does not lie on the sphere. Thus all points
on the sphere are regular points of f and 0 is a regular value of f .

Let p be a point of S 2 at which (∂f /∂x)(p) = 2x(p) ≠ 0. Then the Jacobian matrix of
the map (f, y, z) : R3 → R3 is given by

    [ ∂f/∂x  ∂f/∂y  ∂f/∂z ]
    [   0      1      0   ]
    [   0      0      1   ]

and the Jacobian determinant ∂f/∂x is nonzero at p. Hence the inverse function theorem applies and we can
find a neighborhood Up of p ∈ R3 such that (Up , f, y, z) is a chart in the atlas of R3 . In
this chart, the set Up ∩ S 2 is defined by the vanishing of the first coordinate f . Thus
(Up , f, y, z) is an adapted chart relative to S 2 and (Up ∩ S 2 , y, z) is a chart for S 2 .
Similarly, if either (∂f /∂y)(p) ≠ 0 or (∂f /∂z)(p) ≠ 0, then we can find an adapted chart
(Vp , x, f, z) or (Vp , x, y, f ) containing p in which the set Vp ∩S 2 is the zero set of the second
or third coordinate f . But at least one of the partial derivatives must be nonzero, so as p

Zh
varies over all points of the sphere, we obtain a collection of adapted charts of R3 covering
S 2 . Hence S 2 is a regular submanifold of R3 with dimension 2.
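A quick symbolic check of the regularity used above (an illustrative sketch, not from the text, assuming sympy is available):

import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1
grad = [sp.diff(f, v) for v in (x, y, z)]          # the only common zero is the origin
print(sp.solve(grad, [x, y, z], dict=True))        # [{x: 0, y: 0, z: 0}]
print(f.subs({x: 0, y: 0, z: 0}))                  # -1, so the critical point is not on the sphere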
This is an important example since we can nearly translate it verbatim for the regular zero
set of a function F : N → R. First, we note that any regular level set g −1 (c) of a smooth
function g on a manifold can be expressed as a regular zero set. Indeed, consider f = g − c
so that
g(p) = c ⇐⇒ f (p) = 0.
Moreover, the differentials of f, g are point-wise equal and so f, g have the exact same critical
points. So if g −1 (c) is a regular level set, then f −1 (0) is a regular zero set.
x
Theorem 3.2.7
Let g : N → R be a smooth function on the manifold N . Then a non-empty regular
level set S = g −1 (c) is a regular submanifold of N with codimension 1.
eli

Proof
Let f = g − c so that S = f −1 (0) is the regular zero set of f . Let p ∈ S. Since p is a
regular point of f , relative to any chart (U, x^1 , . . . , x^n ) about p, (∂f /∂x^i )(p) ≠ 0 for some
i. By relabelling, we can assume without loss of generality that i = 1. The Jacobian of
the smooth map (f, x^2 , . . . , x^n ) : U → R^n is given by

    [ ∂f/∂x^1  . . .  . . .  . . . ]
    [    0       1    . . .    0   ]
    [    ⋮       ⋮     ⋱       ⋮   ]
    [    0       0    . . .    1   ]

whose determinant ∂f/∂x^1 is nonzero at the point p by construction. Thus the inverse function
theorem applies and there is a neighborhood Up of p on which f, x^2 , . . . , x^n forms a coordinate
system.

Relative to the chart (Up , f, x2 , . . . , xn ), the level set Up ∩ S is defined by setting the first
coordinate f to 0, hence the chart is adapted relative to S. But p ∈ S was arbitrary and
so S is a regular submanifold of codimension 1 in N .

ou
3.2.3 The Regular Level Set Theorem

Our next step is to extend the previous theorem to a regular level set of a map between
smooth manifolds. This useful theorem is known under various names such as the implicit
function theorem, preimage theorem, and the regular level set theorem. We will follow the

Zh
last.

Theorem 3.2.8 (Regular Level Set Theorem)


Let F : N → M be a smooth map between manifolds with dim N = n, dim M = m.
Then a nonempty regular level set F −1 (c) for some c ∈ M is a regular submanifold
of N with codimension m.

Proof
Choose a chart (V, Ψ) = (V, y 1 , . . . , y m ) of M centered at c such that Ψ(c) = 0 ∈ Rm .
Then F −1 (V ) is an open set in N containing F −1 (c). Moreover, in F −1 (V ), F −1 (c) =
x
(Ψ ◦ F )−1 (0) and the level set F −1 (c) is the zero set of Ψ ◦ F . If F i := y i ◦ F = ri ◦ (Ψ ◦ F ),
then F −1 (c) is also the common zero set of the functions F 1 , . . . , F m on F −1 (V ).
Since we assumed the regular level set to be nonempty, we must have n ≥ m. Fix a point
eli
p ∈ F −1 (c) and let (U, x1 , . . . , xn ) be a coordinate neighborhood of p ∈ N contained in
F −1 (V ). Since F −1 (c) is a regular level set, p ∈ F −1 (c) is by definition a regular point
of F . Thus the m × n Jacobian matrix of F has rank m. By relabelling if necessary, we
may assume without loss of generality that the first m × m block is nonsingular.
Replace the first m coordinates x1 , . . . , xm of the chart (U, φ) by F 1 , . . . , F m . We claim
that there is a neighborhood Up of p such that (Up , F 1 , . . . , F m , xm+1 , . . . , xn ) is a chart in
©F

the atlas of N . It suffices to compute the Jacobian matrix of the chart function at p. But
the first m × m block is nonsingular by construction and hence the inverse function
theorem applies.
In the chart (Up , F 1 , . . . , F m , xm+1 , . . . xn ), the set S := F −1 (c) is obtained by setting the
first m coordinates to 0. Since this is true for every point p ∈ S, S is by definition a
regular submanifold of N with codimension m.
The proof of the regular level set theorem yields the following lemma.

Lemma 3.2.9
Let F : N → Rm be a smooth map on a manifold N of dimension n and let S be
the level set F −1 (0). Suppose relative to some coordinate chart (U, x1 , . . . , xn ) about
p ∈ S, the determinant of the Jacobian matrix with respect to xj1 , . . . , xjm is nonzero.
Then in some neighborhood of p, we can replace xj1 , . . . , xjm by F1 , . . . , Fm to obtain
an adapted chart for N relative to S.

ou
Remark 3.2.10 The regular level set theorem gives a sufficient but not necessary condition
for a level set to be a regular manifold. For example, if f : R2 → R is the map f (x, y) = y 2 ,
then the zero set Z(f ) = Z(y 2 ) is the x-axis, a regular submanifold of R2 . But ∂f /∂x =
∂f /∂y = 0 on the x-axis, and every point in Z(f ) is a critical point of f . Thus, although
Z(f ) is a regular submanifold of R2 , it is not a regular level set of f .

Zh
3.2.4 Examples of Regular Manifolds

Example 3.2.11 (Hypersurface)


The solution set S of x3 + y 3 + z 3 = 1 in R3 is a 2-manifold by the regular level set
theorem.

Example 3.2.12
The subset S ⊆ R3 satisfying
x
x3 + y 3 + z 3 = 1
x+y+z =0

is a 1-manifold.
eli
This can be checked by considering the function

F (x, y, z) = (x3 + y 3 + z 3 , x + y + z)

and checking that its Jacobian has rank 2 for every point in S. This implies S is a regular
level set of F and is thus a manifold.
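A sketch of that rank computation (illustrative, not from the text; it assumes sympy is available):

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F = sp.Matrix([x**3 + y**3 + z**3, x + y + z])
J = F.jacobian([x, y, z])                            # [[3x^2, 3y^2, 3z^2], [1, 1, 1]]
minors = [J[:, [i, j]].det() for i, j in [(0, 1), (0, 2), (1, 2)]]
# rank J < 2 exactly where all 2x2 minors vanish; intersect with the plane x + y + z = 0
print(sp.solve(minors + [x + y + z], [x, y, z], dict=True))   # only {x: 0, y: 0, z: 0}
# the origin does not satisfy x^3 + y^3 + z^3 = 1, so F has rank 2 at every point of S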
©F

Example 3.2.13 (Special Linear Group)


Recall the special linear group

SL(n, R) = {A ∈ GL(n, R) : det A = 1}.

Then SL(n, R) = det−1 (1) is a level set of the determinant map.


We wish to check that no critical points of det live in SL(n, R). Let mij denote the

(i, j)-minor of A. Recall the cofactor expansion formula

det A = (−1)i+1 ai1 mi1 + · · · + (−1)i+n ain min .

Since mij is not a function of aij ,

    ∂f/∂a_ij = (−1)^{i+j} m_ij .

Hence a matrix A ∈ GL(n, R) is a critical point of f if and only if all the (n − 1) × (n − 1)


minors mij of A are zero. But then such a matrix necessarily has determinant 0 by cofactor
expansion. Thus we can apply the regular level set theorem to conclude that SL(n, R) is
a regular (n2 − 1)-submanifold of GL(n, R).
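One can verify the cofactor identity used above symbolically; a minimal sketch (not from the text), assuming sympy is available:

import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'a{i}{j}'))
d = A.det()
i, j = 0, 1
# ∂(det A)/∂a_ij equals the signed (i, j) cofactor, which cannot all vanish when det A = 1
print(sp.simplify(sp.diff(d, A[i, j]) - A.cofactor(i, j)))   # 0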

3.2.5 Transversality

Definition 3.2.4 (Transversal)
A C ∞ map f : N → M is said to be transversal to a submanifold S ⊆ M if for every
p ∈ f −1 (S),

    f∗ (Tp N ) + Tf (p) S = Tf (p) M

where the addition is taken to be the Minkowski sum.

We write f t S to denote this condition.


x
Theorem 3.2.14 (Transversality)
If a smooth map f : N → M is transversal to a regular submanifold S of codimension
k in M , then f −1 (S) is a regular submanifold of codimension k in N .
eli
Remark that when S = {c} is a single point, transversality of f to S simply means that

f∗ (Tp N ) = Tf (p) M

for every p ∈ f −1 (c), ie f∗,p is surjective at every p ∈ f −1 (c), ie f −1 (c) is a regular level set.
Thus the transversality theorem is a generalization of the regular level set theorem. It is
©F

useful in giving conditions under which the intersection of two submanifolds is a submanifold.

Proof
Let p ∈ f −1 (S) and (U, x1 , . . . , xm ) be an adapted chart centered at f (p) for M relative
to S such that U ∩ S = Z(xm−k+1 , . . . , xm ), the zero set of the functions xm−k+1 , . . . , xm .
Define g : U → Rk to be the map

g = (xm−k+1 , . . . , xm ).

Consider g ◦ f : f −1 (U ) → Rk . By construction,

f −1 (U ) ∩ f −1 (S) = f −1 (U ∩ S)
= {p ∈ f −1 (U ) : g(f (p)) = 0}
= (g ◦ f )−1 (0).

We claim that f −1 (U ∩ S) is a regular level set of the function g ◦ f above. It suffices

ou
to show that (g ◦ f )∗,p is surjective at every p ∈ f −1 (U ∩ S). Fix Z ∈ T0 Rk . g∗ is by
construction surjective at every point in U as g is a subset of coordinate functions. Hence
there is some Y ∈ Tf (p) M such that g∗ (Y ) = Z. the transversality of f implies that there
is some X ∈ Tp N and Y 0 ∈ Tf (p) S such that f∗ (X) + Y 0 = Y . Since g is constant on S
and f (p) ∈ U ∩ S, we must have g∗ (Y 0 ) = 0 so that

Z = g∗ (Y )

Zh
g∗ (f∗ (X) + Y 0 )
= (g ◦ f )∗ (X) + 0
= (g ◦ f )∗ (X).

By the arbitrary choice of p, Z, we conclude that f −1 (U ∩ S) = (g ◦ f )−1 (0) is a regular


level set of the function g ◦ f .
It follows that for every p ∈ f −1 (S), there is a neighborhood V 3 p, where we can replace
the local coordinates (V, y 1 , . . . , y n ) with (V, g 1 , . . . , g k , y m+1 , . . . , y n ) to obtain an adapted
chart of N relative to f −1 (S). By definition, f −1 (S) is a regular submanifold of N .
x
Remark 3.2.15 As part of the proof above, we showed that
(g ◦ f )−1 k
∗ (T0 R ) = Tp N.

But
eli
(g ◦ f )−1 k −1 −1
∗ (T0 R ) = f∗ (g∗ (T0 R ))
k

= f∗−1 (Tf (p) S)


Hence
f∗−1 (Tf (p) S) = Tp N.
©F

We say two submanifolds S, S 0 intersect transversely, denoted S t S 0 , if for each p ∈ S ∩ S 0 ,


Tp S + Tp S 0 = Tp M.

Corollary 3.2.15.1
If S 0 ⊆ M is a regular submanifold that intersects the regular submanifold S transversely,
then S ∩ S 0 is a regular submanifold of M whose codimension is equal to the sum of the
codimensions of S, S 0 .

Proof
Apply the transversality theorem with f = ιS 0 as the inclusion map S 0 → M . Then
f −1 (S) = S ∩ S 0 is a regular submanifold of S 0 . The codimension can be seen from the
definition of a regular submanifold.

Theorem 3.2.16 (Parametric Transversality)


Let F : M × S → N be a smooth map between manifolds and assume that F t Q for

ou
some regular submanifold Q ⊆ N . Then for almost every s ∈ S, the map Fs : M → N
given by Fs (x) = F (x, s) is transverse to Q as well.

The proof of this theorem requires an application of Sard’s theorem, which we will see after.

Proof
Since F t Q, W := F −1 (Q) is a regular submanifold of M × S by the transversality

Zh
theorem. Consider the projection on the second factor, π : M × S → S given by π(x, s) =
s. By Sard’s theorem, it suffices to show that whenever s ∈ S is a regular value of
π|W : W → S, then FS t Q.
Fix a regular value s ∈ S of π|W and consider any y ∈ Fs−1 (Q) ⊆ M . Write q := Fs (y) ∈ Q.
Since (y, s) ∈ (π|W )−1 (Q) by construction, (π|W )∗,(y,s) : T(y,s) W → Ts S is surjective by
the definition of a regular value. Moreover, F∗ (T(y,s) W ) = TFs (y) Q by the remark above
since F t Q.
Fix any Zq ∈ TFs (y) N . We need to find some Yq ∈ TFs (y) Q, Xy ∈ Ty M such that
x
Zq = Yq + (Fs )∗ (Xy ).

By assumption, F t Q so there are Yq0 ∈ TFs (y) Q, Xy0 ∈ Ty M, Xs ∈ Ts S such that


eli
Zq = Yq0 + F∗ (Xy0 , Xs0 ).

But (π|W )∗ (T(y,s) W ) = Ts S so we can find some (Xy00 , Xs ) ∈ T(y,s) W such that

(π|W )∗ (Xy00 , Xs ) = Xs .

Note that this is the same Xs since π is a projection. By linearity,


©F

Zq = Yq0 + F∗ (Xy00 , Xs ) + F∗ (Xy0 − Xy00 , 0).

But F∗ (Xy00 , Xs ) ∈ Tq Q as (Xy00 , Xs ) ∈ T(y,s) W and F∗ (Xy0 − Xy00 , 0) = Fs (Xy0 − Xy00 ),


concluding the proof.

3.2.6 Useful Results

Proposition 3.2.17 (Regular Submanifolds)


Suppose S ⊆ R2 has the property that locally on S, one of the coordinates is a smooth
function of the other coordinate. Then S is a regular submanifold of R2 .

ou
Proof
Let p ∈ S and a neighborhood U 3 p of S such that on U ∩ S, there is a smooth function
f : A → B such that V := A × B ⊆ U and y = f (x) for all (x, y) ∈ V ∩ S. Consider the
function F : V → R2 given by

F (x, y) = (x, y − f (x)).

Zh
The Jacobian is given by

    JF (x, y) = [    1      0 ]
                [ −∂f/∂x    1 ].
Thus F is a diffeomorphism onto its image and can be used as a coordinate map. But
in the chart (V, x, y − f (x)), V ∩ S is defined by the vanishing of the second coordinate
y − f (x). S is by definition a regular submanifold of R2 .

Proposition 3.2.18
The graph Γ(f ) of a smooth function f : R2 → R
x
Γ(f ) := {(x, y, f (x, y)) ∈ R3 }

is a submanifold of R3 .
eli
Proof
Let p ∈ Γ(f ). Consider the function F : R3 → R3 given by

F (x, y, z) := (x, y, z − f (x, y)).

Its Jacobian is given by

    JF (x, y, z) = [    1        0      0 ]
                   [    0        1      0 ]
                   [ −∂f/∂x   −∂f/∂y    1 ].
Thus F is a local diffeomorphism and can be used as a coordinate map. Moreover, S is
globally defined by the vanishing of the third coordinate z − f (x, y) and so is a regular
submanifold of R3 .
Recall a homogeneous polynomial F (x0 , . . . , xn ) ∈ R[x0 , . . . , xn ] of degree k is a linear
combination of monomials x0^{i0} · · · xn^{in} with Σ_j i_j = k.

Lemma 3.2.19 (Euler’s Formula)
Let F (x0 , . . . , xn ) be a homogeneous polynomial of degree k. For any t ∈ R,

    F (t x0 , . . . , t xn ) = t^k F (x0 , . . . , xn )

and

    Σ_i x_i ∂F/∂x_i = kF.

Proof
Differentiate the first identity with respect to t and then set t = 1.
On a projective space RP n , a homogeneous polynomial F (x0 , . . . , xn ) of degree k is not a
function, since its value at a point is not necessarily unique. However, the zero set in RP n
of a homogeneous polynomial F (x0 , . . . , xn ) is well defined, since F (a0 , . . . , an ) = 0 if and

Zh
only if
F (ta0 , . . . , tan ) = tk F (a0 , . . . , an ) = 0

for all t ∈ R× .

Definition 3.2.5 (Real Projective Variety)


The zero set of finitely many homogeneous polynomials in RP n is called a real pro-
jective variety.

A projective variety defined by a single homogeneous polynomial of degree k is called a


x
hypersurface of degree k.

Proposition 3.2.20
The hypersurface Z(F ) defined by F (x0 , x1 , x2 ) = 0 is smooth if the partial derivatives
eli
are not simultaneously zero on Z(F ).

Proof
Let p ∈ Z(F ) and suppose p lies in U0 . We claim that at least one of ∂F/∂x1 , ∂F/∂x2 is
nonzero at p. Suppose otherwise. But then

    0 = kF (p) = Σ_i x_i ∂F/∂x_i (p) = x0 ∂F/∂x0 (p).

But x0 ≠ 0 on U0 , so all three partial derivatives vanish at p, a contradiction.
Recall the standard coordinates on U0 ' R2 are x = x1 /x0 , y = x2 /x0 . In U0 ,

F (x0 , x1 , x2 ) = xk0 F (1, x1 /x0 , x2 /x0 ) = xk0 F (1, x, y).

Define f (x, y) := F (1, x, y) so that f, F have the same zero set in U0 . Now, the Jacobian

of f is given by

    Jf (x, y) = [ ∂f/∂x   ∂f/∂y ] = [ ∂F/∂x1 (1, x, y)   ∂F/∂x2 (1, x, y) ].

But at least one of this non-zero. We can similarly show this for U1 , U2 .
All in all, Z(F ) is a regular level set of F and is hence a regular submanifold of RP n .

ou
Proposition 3.2.21 (Product of Regular Submanifolds)
If Si is a regular submanifold of the manifold Mi , i = 1, 2, then S1 × S2 is a regular
submanifold of M1 × M2 .

Proof
Fix some (p, q) ∈ S1 × S2 as well as adapted charts (U, x1 , . . . , xn ), (V, y 1 , . . . , y m ) relative
to S1 , S2 and about p, q, respectively so that S1 ∩ U, S2 ∩ V are defined by the vanishing

Zh
of the last k, ` coordinates, respectively. Then (U × V, x1 , . . . , xn , y 1 , . . . , y m ) is a chart
about (p, q) in the product manifold M1 × M2 . By construction, (S1 × S2 ) ∩ (U ∩ V ) is
defined by the vanishing the same k + ` coordinates. It follows by definition that S1 × S2
is a regular submanifold of M1 × M2 with codimension k + `.
Recall that the complex special linear group SL(n, C) is the subgroup of GL(n, C) consisting
of complex matrices with determinant 1.

Proposition 3.2.22
SL(n, C) is a regular submanifold of GL(n, C).
x
Proof
It suffices to show that SL(n, C) is a regular level set of det. That it is a level set is clear.
Since det is complex-valued, it suffices to show that the differential at every A ∈ SL(n, C)
eli
is not identically zero.
Let A ∈ SL(n, C) and consider the curve

A(t) := (1 + t)A

which starts at A(0) = A with initial velocity A ∈ TA Cn×n (under the appropriate iden-
©F

tification) and additionally satisfies

det A(t) = (1 + t)n det A = (1 + t)n .

But then (det A(t))′(0) = n ≠ 0, concluding the proof.

3.3 Categories and Functors

ou
Zh
Many problems in mathematics share common features. In topology, one is interested in
knowing if two topological spaces are homeomorphic and in group theory we wish to know
if two groups are isomorphic. This has given rise to the theories of categories and functors.
A category is essentially a collection of objects and arrows (morphisms) between objects.
These arrows satisfy the abstract properties of maps and are often structure preserving. For
instance, smooth manifolds and smooth maps form a category and so do vector spaces and
x
linear maps. A functor from one category to another preserves the identity morphism and
the composition of morphisms. If the target category is simpler than the domain category,
this provides a way to simplify problems in the original category. The tangent space con-
eli
struction with the differential of a smooth map is a functor from the category of smooth
manifolds with a distinguished point to the category of vector spaces. The existence of
the tangent space functor shows that if two manifolds are diffeomorphic, then their tangent
spaces at corresponding points must be isomorphic, thereby proving the smooth invariance
of dimension. Invariance of dimension in the continuous category of topological spaces and
continuous maps is more difficult to show since there is no tangent space functor in the
continuous category.
©F

For a functor to be useful, it should be simple enough to be computable, yet complex


enough to preserve essential features of the original category. For smooth manifolds, this
delicate balance is achieved in the de Rham cohomology functor. In the rest of the book, we
introduce various functors of smooth manifolds, such as the tangent bundle and differential
forms, culminating in the de Rham cohomology.
As a concrete step, we first study the dual construction on vector spaces as a nontrivial
example of a functor.

3.3.1 Categories

ou
Definition 3.3.1 (Category)
A category is a collection of elements, called objects, and for any two objects A, B, a
set Mor(A, B) of elements, called morphisms from A to B such that given any f ∈
Mor(A, B), g ∈ Mor(B, C), the composite g ◦ f ∈ Mor(A, C) is defined. Furthermore,
the composition of morphisms satisfies two properties:
(i) (Identity Axiom) for each object A, there is an identity morphism IdA ∈

Zh
Mor(A, A) such that for any f ∈ Mor(A, B) and g ∈ Mor(B, A),

f ◦ IdA = f, IdA ◦g = g.

(ii) (Associative Axiom) for f ∈ Mor(A, B), g ∈ Mor(B, C) and h ∈ Mor(C, D),

h ◦ (g ◦ f ) = (h ◦ g) ◦ f.

If f ∈ Mor(A, B), we often write f : A → B.

Example 3.3.1
x
The collection of groups and group homomorphisms forms a category where the objects
are groups and Mor(A, B) is the set of group homomorphisms from A to B.
eli
Example 3.3.2
The collection of vector spaces over R and R-linear maps forms a category where objects
are real vector spaces and Mor(V, W ) is the set of linear maps from V to W .

Example 3.3.3 (Continuous Category)


The collection of all topological spaces with continuous maps between them is a category.
©F

Example 3.3.4 (Smooth Category)


The collection of smooth manifolds with smooth maps between them is a category.

Example 3.3.5 (Category of Pointed Manifolds)


Let M be a smooth manifold and q ∈ M . (M, q) is known as a pointed manifold. Given
any two such pairs (N, p), (M, q), let Mor((N, q), (M, q)) be the set of all smooth maps
F : N → M such that F (p) = q. This is a category.

Definition 3.3.2 (Object Isomorphism)
Two objects A, B in a category are isomorphic if there are morphisms f : A → B and
g : B → A such that
g ◦ f = IdA , f ◦ g = IdB .
In this case, both f, g are called isomorphisms.

ou
The usual notation for an isomorphism is '. Thus A ' B can mean a group isomorphism,
a vector space isomorphism, a homeomorphism, or a diffeomorphism, depending on the
category and the context.

3.3.2 Functors

Zh
Definition 3.3.3 ((Covariant) Functor)
A (covariant) functor F from one category C to another category D is a map that
associates to each object A in C an object F(A) in D and to each morphism f : A → B
a morphism F(f ) : F(A) → F (B). such that
(i) F(IdA ) = IdF (A)
x
(ii) F(f ◦ g) = F(f ) ◦ F(g)

Example 3.3.6
eli
The tangent space construction is a functor from the category of pointed manifolds to the
category of vector spaces. To each pointed manifold (N, p) we associate the tangent
space Tp N and to each smooth map f : (N, p) → (M, f (p)) we associate the differential
f∗,p : Tp N → Tf (p) M .
The functorial property (i) holds because if Id : N → N is the identity map, then its
©F

differential Id∗,p : Tp N → Tp N is also the identity map. The functorial property (ii) holds
because in this context it is simply the chain rule.

Proposition 3.3.7
Let F : C → D be a functor between categories. If f : A → B is an isomorphism in C,
then F(f ) : F(A) → F (B) is an isomorphism in D.

If in condition (ii) of the definition for a covariant functor we reverse the direction of the
arrow for the morphism F(f ), we obtain a contravariant functor.

Definition 3.3.4 (Contravariant Functor)
A contravariant functor from a category C to another category D is a map that
associates to each object A in C an object F(A) in D and to each morphism f : A → B
a morphism F(f ) : F(B) → F (A) such that
(i) F(IdA ) = IdF (A)
(ii) F(f ◦ g) = F(g) ◦ F(f )

ou
Example 3.3.8
Smooth functions on a manifold give rise to a contravariant functor that associates to
each manifold M the algebra F(M ) = C ∞ (M ) of C ∞ functions on M and to each smooth
map F : N → M of manifolds the pullback map F(F ) = F ∗ : C ∞ (M ) → C ∞ (N ) given
by

Zh
F ∗ (h) = h ◦ F
for h ∈ C ∞ (M ). It can be checked that the pullback satisfies the two functorial properties.
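In code the pullback is literally composition; a toy sketch (not from the text) checking the contravariant property (G ◦ F )∗ = F ∗ ◦ G∗ on sample points:

import math

def pullback(F):
    return lambda h: (lambda x: h(F(x)))    # F*(h) = h o F

F = lambda x: (x, x**2)                     # F : R -> R^2
G = lambda p: p[0] + p[1]                   # G : R^2 -> R
h = math.sin                                # a smooth function on R

lhs = pullback(lambda x: G(F(x)))(h)        # (G o F)*(h)
rhs = pullback(F)(pullback(G)(h))           # (F* o G*)(h)
print(all(abs(lhs(t) - rhs(t)) < 1e-12 for t in (0.0, 0.5, 2.0)))   # True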

3.3.3 The Dual and Multicovector Functors

Another example of a contravariant functor is the dual of a vector space.


Let V be a real vector space and recall V ∨ is the vector space of all linear functionals on V ,
sometimes denoted
V ∨ = Hom(V, R).
x
If V is a finite-dimensional vector space with basis {e1 , . . . , en }, recall that V ∨ has a basis
of linear functionals {α1 , . . . , αn } defined by
eli
αi (ej ) = δji
for i, j ∈ [n].
A linear map L : V → W of vector spaces induces a linear map L∨ , called the dual of L, as
follows. For every linear functional α : W → R, the dual map L∨ : W ∨ → V ∨ associates the
linear functional
©F

L∨ (α) = α ◦ L.
Note that the dual of L reverses the direction of the arrow.

Proposition 3.3.9
Suppose V, W, S are real vector spaces.
(i) If IdV : V → V is the identity map, then Id∨V : V ∨ → V ∨ is the identity map on
V ∨.
(ii) If f : V → W and g : W → S are linear maps, then (g ◦ f )∨ = f ∨ ◦ g ∨ .

This proposition shows that the dual construction F : () 7→ ()∨ is a contravariant functor
from the category of vector spaces to itself: for a real vector space V , F(V ) = V ∨ and
for f ∈ Hom(V, W ), F(f ) = f ∨ ∈ Hom(W ∨ , V ∨ ). Consequently, if f : V → W is an
isomorphism, then so is its dual f ∨ : W ∨ → V ∨ .
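Concretely, if L is represented by a matrix A in chosen bases, then L∨ is represented by the transpose of A; a numerical sketch (not from the text, assuming numpy is available):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 2))        # L : R^2 -> R^3
alpha = rng.standard_normal(3)         # a linear functional on R^3, as a vector
v = rng.standard_normal(2)

lhs = alpha @ (A @ v)                  # (L^v(alpha))(v) = alpha(L v)
rhs = (A.T @ alpha) @ v                # A^T represents the dual map L^v
print(np.isclose(lhs, rhs))            # True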
Fix a positive integer k ≥ 1. For any linear map L : V → W of vector spaces define the
pullback map L∗ : Ak (W ) → Ak (V ) that sends f ∈ Ak (W ) to

ou
(L∗ f )(v1 , . . . , vk ) = f (L(v1 ), . . . , L(vk )).
It can be checked from the definition that L∗ is a linear map.

Proposition 3.3.10
The pullback of covectors by a linear map satisfies the two functorial properties:
(i) If IdV : V → V is the identity map on V , then Id∗V = IdAk (V ) , the identity map on
Ak (V ).

Zh
(ii) If K : U → V and L : V → W are linear maps of vector spaces, then (L ◦ K)∗ =
K ∗ ◦ L∗ : Ak (W ) → Ak (U ).

To each vector space V , we associate the vector space Ak (V ) of all k-covectors on V , and
to each linear map L : V → W of vector spaces, we associate the pullback Ak (L) = L∗ :
Ak (W ) → Ak (V ). Then Ak () is a contravariant functor from the category of vector spaces
and linear maps to itself.
When k = 1, the space A1 (V ) is simply the dual space, and for any linear map L : V → W ,
the pullback map A1 (L) = L∗ is the dual map L∨ : W ∨ → V ∨ . Thus the multicovector
x
functor Ak () generalizes the dual functor ()∨ .

3.4 The Rank of a Smooth Map


eli

Recall the rank of a smooth map f : N → M at a point p ∈ N is defined to be the rank of


its differential at p. Two cases are of special interest: that in which f has maximal rank a
point and in which it has constant rank in a neighborhood.
Let n = dim N, m = dim M . In the case f : N → M has maximal rank, there are three
©F

possibilities (not necessarily exclusive). If n = m, then f is a local diffeomorphism at p by


the inverse function theorem. If n ≤ m, then the maximal rank is n and f is an immersion
at p. Finally, if n ≥ m, the maximal rank is m and f is a submersion at p.
Since manifolds are locally Euclidean, theorems on the rank of a smooth map between
Euclidean spaces translate easily to theorems about manifolds. This leads to the constant
rank theorem for manifolds, which gives a simple normal form for a smooth map having
constant rank on an open set. This result specializes to the immersion and submersion
theorems.

The image of a smooth map does not in general have a nice structure. However, using
the immersion theorem we derive conditions under which the image of a smooth map is a
manifold.

3.4.1 Constant Rank Theorem

ou
Suppose f : N → M is a smooth map of manifolds and we want to show that the level
set f −1 (c) is a manifold for some c ∈ M . A sufficient condition from the regular level set
theorem is for the differential to have maximal rank at every point of f −1 (c). However, even
if this is true, it may be difficult to show at times. The constant rank theorem has the comparative
advantage in that it is not necessary to know the precise rank of f ; it suffices that the rank
be constant.
Let us recall the constant rank theorem for Euclidean spaces.

Zh
Theorem 3.4.1 (Constant Rank in Euclidean Space)
If f : U ⊆ Rn → Rm has constant rank k in a neighborhood of a point p ∈ U , there
are diffeomorphisms G of a neighborhood of p ∈ U sending p 7→ 0 ∈ Rn and F of a
neighborhood of f (p) ∈ Rm sending f (p) 7→ 0 ∈ Rm such that

    (F ◦ f ◦ G^{-1})(x1 , . . . , xn ) = (x1 , . . . , xk , 0, . . . , 0).

Thus after a suitable change of coordinates near p ∈ U and f (p) ∈ Rm , the map f
assumes the form
x
(x1 , . . . , xn ) 7→ (x1 , . . . , xk , 0, . . . , 0).

Proof
eli
By reordering the f 1 , . . . , f m and the coordinates x1 , . . . , xn if necessary, we may assume
without loss of generality that the first k × k submatrix of the Jacobian Jf (p) at the
point p is non-singular. Relabel the coordinates as (x, y) = (x1 , . . . , xk , y 1 , . . . , y n−k )
and write the map as (f, g) = (f 1 , . . . , f k , g 1 , . . . , g m−k ).
Define G : U → Rn by

    (u, v) = G(x, y) := (f (x, y), y).


The Jacobian of G is the block matrix given by

    JG = [ ∂f/∂x   ∂f/∂y ]
         [   0       Id  ].
By construction, det JG = det[∂f /∂x] ≠ 0 at p, and so the inverse function theorem guarantees
the existence of neighborhoods U1 ∋ p in Rn and V1 ∋ G(p) in Rn such that G : U1 → V1
is a diffeomorphism. By shrinking U1 if necessary, we may assume that (f, g) has constant
rank k on U1 .

On V1 ,
(u, v) = (G ◦ G−1 )(u, v) = (f ◦ G−1 , y ◦ G−1 )(u, v).
Comparing the first components give u = (f ◦ G−1 )(u, v). Hence

((f, g) ◦ G−1 )(u, v)) = (f ◦ G−1 , g ◦ G−1 )(u, v)


= (u, g ◦ G−1 (u, v))

ou
= (u, h(u, v)). h := g ◦ G−1

Since G−1 : V1 → U1 is a diffeomorphism and (f, g) has constant rank k on U1 , the


composite (f, g) ◦ G−1 has constant rank k on V1 . Its Jacobian matrix is given by
 
−1 Id 0
J((f, g) ◦ G )(u, v)) = .
∂h/∂u ∂h/∂v

Zh
For this matrix to have constant rank k, ∂h/∂v must be identically zero on V1 . Thus h is
a function of u alone and we can write

((f, g) ◦ G−1 )(u, v) = (u, h(u)).

Finally, let F : Rm → Rm be the change of coordinates given by

F (x, y) = (x, y − h(x)).

We have
x
(F ◦ f ◦ G−1 )(u, v) = F (u, h(u))
= (u, h(u) − h(u))
eli
= (u, 0).
The constant rank theorem for Euclidean spaces has an immediate analogue for manifolds.

Theorem 3.4.2 (Constant Rank)


Let N, M be manifolds of dimensions n, m respectively. Suppose f : N → M has
constant rank k in a neighborhood of p ∈ N . Then there are charts (U, φ) centered
©F

at p ∈ N and (V, Ψ) centered at f (p) ∈ M such that for (r1 , . . . , rn ) ∈ φ(U ),

(Ψ ◦ f ◦ φ−1 )(r1 , . . . , rn ) = (r1 , . . . , rk , 0, . . . , 0).

Proof
Choose a chart (Ū , φ̄) about p ∈ N and (V̄ , Ψ̄) about f (p) ∈ M . Then Ψ̄◦f ◦ φ̄−1 is a map
between open subsets of Euclidean spaces. Because φ̄, Ψ̄ are diffeomorphisms, Ψ̄ ◦ f ◦ φ̄−1

has the same constant rank k as f in a neighborhood of φ̄(p) ∈ Rn . By the constant
rank theorem for Euclidean spaces, there are diffeomorphisms G of a neighborhood of
φ̄(p) ∈ Rn and F of a neighborhood of (Ψ̄ ◦ f )(p) ∈ Rm such that

(F ◦ Ψ̄ ◦ f ◦ φ̄−1 ◦ G−1 )(r1 , . . . , rn ) = (r1 , . . . , rk , 0, . . . , 0).

We can then take φ = G ◦ φ̄ and Ψ = F ◦ ψ̄.

ou
The constant-rank level theorem follows easily. Recall a neighborhood of a subset A ⊆ M is
an open set containing A.

Theorem 3.4.3 (Constant-Rank Level Set)


Let f : N → M be a smooth map of manifolds and c ∈ M . If f has constant rank k
in a neighborhood of the level set f −1 (c) ∈ N , then f −1 (c) is a regular submanifold
of N with codimension k.

Zh
Proof
Let p ∈ f −1 (c) be arbitrary. By the constant rank theorem, we can find a coordinate chart
(U, φ) = (U, x1 , . . . , xn ) centered at p ∈ N and a coordinate chart (V, Ψ) = (V, y 1 , . . . , y m )
centered at f (p) = c ∈ M such that

(Ψ ◦ f ◦ φ−1 )(r1 , . . . , rn ) = (r1 , . . . , rk , 0, . . . , 0) ∈ Rm .

This shows that the level set (Ψ◦f ◦φ−1 )−1 (0) is defined by the vanishing of the coordinates
r1 , . . . , rk . The image of the level set f −1 (c) under φ is precisely the level set (Ψ ◦ f ◦
x
φ−1 )−1 (0), since

φ(f −1 (c)) = φ(f −1 (Ψ−1 (0))) = (Ψ ◦ f ◦ φ−1 )−1 (0).


eli
Thus the level set f −1 (c) in U is defined by the vanishing of the coordinate functions
x1 , . . . , xk where xi := ri ◦ φ. This proves that f −1 (c) is a regular submanifold of N with
codimension k.

Example 3.4.4 (Orthogonal Group)


Let f : GL(n, R) → GL(n, R) be given by
©F

f (A) := AT A.

The orthogonal group O(n) is given by

O(n) = f −1 (I).

It can be checked that f is of constant rank on GL(n, R) and hence O(n) is a regular
submanifold of GL(n, R).
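A numerical sketch of that constant-rank claim (illustrative, not from the text; it assumes numpy is available): the differential of f (A) = A^T A at A is H ↦ A^T H + H^T A, whose image is the space of symmetric matrices, of dimension n(n + 1)/2 at every invertible A:

import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n)) + 3 * np.eye(n)      # comfortably invertible

def df(H):
    return A.T @ H + H.T @ A                         # differential of f(A) = A^T A at A

basis = []
for k in range(n * n):
    E = np.zeros((n, n))
    E.flat[k] = 1.0
    basis.append(E)
M = np.column_stack([df(H).ravel() for H in basis])  # matrix of df in the standard basis
print(np.linalg.matrix_rank(M), n * (n + 1) // 2)    # 6 6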

Consider f : N → M a map with constant rank k in a neighborhood of a point p ∈ N ,
with charts (U, φ) = (U, x1 , . . . , xn ) about p and (V, Ψ) = (V, y 1 , . . . , y m ) about f (p) from
the constant rank theorem. Note that for any q ∈ U ,

φ(q) = (x1 (q), . . . , xn (q))


(y 1 (f (q)), . . . , y n (f (q))) = Ψ(f (q))

ou
= (Ψ ◦ f ◦ φ−1 )(φ(q))
= (Ψ ◦ f ◦ φ−1 )(x1 (q), . . . , xn (q))
= (x1 (q), . . . , xk (q), 0, . . . , 0).

As functions on U ,
(y 1 ◦ f, . . . , y m ◦ f ) = (x1 , . . . , xk , 0, . . . , 0).

Zh
The local normal form of f relative to the charts above in the constant rank theorem can be
expressed in terms of the local coordinates x1 , . . . , xn and y 1 , . . . , y m as follows. The map f
is given by
(x1 , . . . , xn ) 7→ (x1 , . . . , xk , 0, . . . , 0).

3.4.2 The Immersion & Submersion Theorems

The constant rank theorem gives local normal forms for immersions and submersions, called
the immersion theorem and submersion theorem respectively.
x
Consider a smooth map f : N → M between manifolds of dimension n, m respectively. Let
(U, (xi )), (V, (y j )) be charts about p ∈ N, f (p) ∈ M respectively and consider the Jacobian
of f∗,p with respect to these charts. Then for any p ∈ N , f∗,p is injective if and only if n ≤ m
and the Jacobian J has rank n. Similarly, f∗,p is surjective if and only if n ≥ m and
rank J = m.
Having maximal rank at a point is an open condition in the sense that the set

Dmax (f ) := {p ∈ U : f∗,p has maximal rank at p}


©F

is an open subset of U . Indeed, the complement

U − Dmax (f ) = {p ∈ U : rank J < k}

is equivalent to the vanishing of all k × k minors of the Jacobian. As the pullback of a closed
(singleton) set of finitely many continuous functions, U − Dmax (f ) is closed and so Dmax (f )
is open. In particular, if f has maximal rank at p, then it has maximal rank at all points in
some neighborhood of p.

Proposition 3.4.5
Let N be an n-manifold and M be an m-manifold. If a smooth map f : N → M is an
immersion at a point p ∈ N , then it has constant rank n in a neighborhood of p. If a
smooth map f : N → M is a submersion at a point p ∈ N , then it has constant rank m
in a neighborhood of p.

The following theorems follow immediately as special cases of the constant rank theorem.

ou
Theorem 3.4.6 (Immersion/Submersion)
(i) (Immersion theorem) Suppose f : N → M is an immersion at p ∈ N . Then
there are charts (U, φ) centered at p ∈ N and (V, Ψ) centered at f (p) ∈ M such
that in a neighborhood of φ(p),

(Ψ ◦ f ◦ φ−1 )(r1 , . . . , rn ) = (r1 , . . . , rn , 0, . . . , 0).

Zh
(ii) (Submersion theorem) Suppose f : N → M is a submersion at p ∈ N . Then
there are charts (U, φ) centered at p ∈ N and (V, Ψ) centered at f (p) ∈ M such
that in a neighborhood of φ(p),

(Ψ ◦ f ◦ φ−1 )(r1 , . . . , rm , rm+1 , . . . , rn ) = (r1 , . . . , rm ).

Corollary 3.4.6.1
A submersion f : N → M of manifolds is an open map.
x
Proof
Let W ⊆ N be open and pick a point f (p) ∈ f (W ). By the submersion theorem, there are
charts (U, φ), (V, Ψ) about p, f (p) such that (Ψ ◦ f ◦ φ−1 ) is a projection. Let U ⊇ B 3 p
eli
be an open neighborhood about p. Then φ(B) is open in Rn . But projections are open
maps and so
(Ψ ◦ f )(B) = (Ψ ◦ f ◦ φ−1 )(φ(B))
is open in Rm . But then
f (B) = Ψ−1 [(Ψ ◦ f )(B)]
©F

is an open subset of f (W ) containing f (p). This concludes the proof by the arbitrary
choice of f (p).
We now derive the regular level set theorem as corollaries of both the submersion and constant
rank theorems. Indeed, for a smooth map f : N → M of manifolds, a level set f −1 (c) is
regular if and only if f is a submersion at every point p ∈ f −1 (c). Fix one such point
p ∈ f −1 (c) and let (U, φ), (V, Ψ) be charts in the submersion theorem. Then Ψ ◦ f ◦ φ−1 =

π : φ(U ) → Rm is the projection onto the first m coordinates,

π(r1 , . . . , rn ) = (r1 , . . . , rm ).

It follows that on U ,

Ψ ◦ f = π ◦ φ = (r1 , . . . , rm ) ◦ φ = (x1 , . . . , xm ).

ou
It follows that

f −1 (c) = f −1 (Ψ−1 (0)) = (Ψ ◦ f )−1 (0) = Z(Ψ ◦ f ) = Z(x1 , . . . , xm ).

So f −1 (c) is defined by the vanishing of the m coordinate functions x1 , . . . , xm and (U, x1 , . . . , xn )


is an adapted chart for N relative to f −1 (c).

Zh
A third proof of the regular level set theorem using the submersion theorem proceeds as
follows. On a regular level set f −1 (c), the map f : N → M has maximal rank m at every
point. Since the maximality of the rank is an open condition, a regular level set f −1 (c) has
a neighborhood on which f has constant rank m. By the constant rank level set theorem
above, f −1 (c) is thus a regular submanifold of N .

3.4.3 Images of Smooth Maps


x
The following are all examples of smooth maps f : N → M with N = R, M = R2 .

Example 3.4.7
f (t) = (t2 , t3 ) is not an immersion since f 0 (0) = (0, 0).
eli

Example 3.4.8
f (t) = (t2 − 1, t3 − t) is an immersion since the equation

f 0 (t) = (2t, 3t2 − 1) = (0, 0)


©F

has no solutions in t.
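One can confirm this with a one-line computation; a sketch (not from the text), assuming sympy is available:

import sympy as sp

t = sp.symbols('t', real=True)
fprime = sp.Matrix([t**2 - 1, t**3 - t]).diff(t)     # f'(t) = (2t, 3t^2 - 1)
print(sp.solve([fprime[0], fprime[1]], t))           # [] -- the two components never vanish together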

Example 3.4.9
Let M be the union of the graph of y = sin(1/x) on (0, 1) and a smooth curve joining (0, 0)
and (1, sin 1). A map f : R → M defined in the intuitive way is an injective immersion whose image with
the subspace topology is not homeomorphic to R.
In the examples above, f (N ) is not a regular submanifold of M = R2 . We would like
conditions on f so that its image would be a regular submanifold of M .

Definition 3.4.1 (Embedding)
A C ∞ map f : N → M is called an embedding if
(i) it is an (injective) immersion
(ii) the image f (N ) with the subspace topology is homeomorphic to N under f .

Note that the condition that f is injective in (i) is redundant since a homeomorphism is

ou
necessarily a bijection.
Remark 3.4.10 The word “submanifold” is used differently in many contexts. Some au-
thors give the image f (N ) of an injective immersion the topology inherited from f rather
than the subspace topology of M . With this topology, f (N ) is by definition homeomorphic
to N . These authors define a submanifold to be the image of any injective immersion with
the topology and differentiable structure inherited from f . Such a set is sometimes called an
immersed submanifold. Note that if the underlying set of an immersed submanifold is given

Zh
the subspace topology, it need not be a manifold at all!

For us, a submanifold without any qualifying adjective is always a regular submanifold.
We use the phrase “near p” to mean “in a neighborhood of p.”

Theorem 3.4.11
If f : N → M is an embedding, then its image f (N ) is a regular submanifold of M .

Proof
x
Fix p ∈ N . We need to show that in some neighborhood of f (p), the set f (N ) is defined
by the vanishing of m − n coordinates.
By the immersion theorem, there are local coordinates (U, x1 , . . . , xn ) near p and (V, y 1 , . . . , y m )
eli
near f (p) such that f : U → V has the form

(x1 , . . . , xn ) 7→ (x1 , . . . , xn , 0, . . . , 0).

Thus f (U ) is defined in V by the vanishing of the coordinates y n+1 , . . . , y m . This is not


sufficient since we do not know that f (N ) ∩ V = f (U ) ∩ V .
©F

Since f (N ) with the subspace topology is homeomorphic to N , the image f (U ) is open


in f (N ). By the definition of the subspace topology, there is an open set V 0 ⊆ M such
that V 0 ∩ f (N ) = f (U ). In V ∩ V 0 ,

(V ∩ V 0 ) ∩ f (N ) = V ∩ f (U ) = (V ∩ V 0 ) ∩ f (U )

and f (U ) is defined by the vanishing of y n+1 , . . . , y m . Thus (V ∩ V 0 , y 1 , . . . , y m ) is an


adapted chart containing f (p) for f (N ). This concludes the proof by the arbitrary choice
of p.

Theorem 3.4.12
If N is a regular submanifold of M , then the inclusion ι : N → M is an embedding.

Proof
Since a regular submanifold has the subspace topology and ι(N ) also has the subspace
topology, ι : N → ι(N ) is certainly a homeomorphism. It remains to show that ι : N → M

ou
is an immersion.
Fix p ∈ N . Choose an adapted chart (V, y 1 , . . . , y n , y n+1 , . . . , y m ) for M about p such that
V ∩ N is the zero set of y n+1 , . . . , y m . Relative to the charts (V ∩ N, y 1 , . . . , y n ) for N and
(V, y 1 , . . . , y m ) for M , the inclusion i is given by

(y 1 , . . . , y n ) 7→ (y 1 , . . . , y n , 0, . . . , 0)

Zh
which shows that ι is an immersion.
The image of an embedding is often called an embedded submanifold. Our results above show
that an embedded submanifold and a regular submanifold are the same thing.

3.4.4 Smooth Maps into Submanifold

Suppose f : N → M is a smooth map whose image f (N ) lies in a submanifold S ⊆ M . Is


the induced map f˜ : N → S also smooth? The answer depends on whether S is a regular
submanifold or an immersed submanifold of M .
x
Theorem 3.4.13
Suppose f : N → M is C ∞ and the image of f lies in a subset S ⊆ M . If S is a
eli
regular submanifold of M , then the induced map f˜ : N → S is also C ∞ .

Proof
Denote the dimensions of N, M, S by n, m, s, respectively. Fix p ∈ N . Since S is a
regular submanifold of M , there is an adapted coordinate chart (V, Ψ) = (V, y 1 , . . . , y m )
for M about f (p) such that S ∩ V is the zero set of y s+1 , . . . , y m , with coordinate chart
©F

(S ∩ V, ΨS ) = (S ∩ V, y 1 , . . . , y s ). By the continuity of f , we can choose a neighborhood


U 3 p of p such that f (U ) ⊆ V . Then f (U ) ⊆ V ∩ S by construction, so that for any
q ∈ U,
(Ψ ◦ f )(q) = (y 1 (f (q)), . . . , y s (f (q)), 0, . . . , 0).
It follows that on U ,
ΨS ◦ f˜ = (y 1 ◦ f, . . . , y s ◦ f ).
Since the coordinates y 1 ◦ f, . . . , y s ◦ f are smooth on U , f˜ is also smooth on U and in

particular at p. We conclude the proof by the arbitrary choice of p ∈ N .

Example 3.4.14 (Multiplication Map of SL(n, R))


The multiplication map

µ : GL(n, R) × GL(n, R) → GL(n, R)

given by (A, B) 7→ AB is smooth since each entry of the resulting matrix is a polynomial

ou
function of the entries of the input matrices. However, it is not immediately obvious that
the induced map
µ̄ : SL(n, R) × SL(n, R) → SL(n, R)
is smooth as the canonical coordinate system on GL(n, R) is not a coordinate system on
SL(n, R).
Now, SL(n, R) × SL(n, R) is a regular submanifold of the product manifold GL(n, R) ×

Zh
GL(n, R). Thus the inclusion map

ι : SL(n, R) × SL(n, R) → GL(n, R) × GL(n, R)

is smooth. It follows that the composition µ ◦ ι is smooth as well. Because the image
of µ ◦ ι lies in SL(n, R), a regular submanifold of GL(n, R), we can apply the previous
theorem to deduce that the induced multiplication map is smooth.

3.4.5 The Tangent Plane to a Surface in R3


x
Suppose f : R3 → R has no critical points on its zero set N = f −1 (0). By the regular level
set theorem, N is a regular submanifold of R3 . Since the inclusion map ι : N → R3 on
regular submanifolds is an embedding (immersion), ι∗,p : Tp N → Tp R3 is injective at every
eli
point p ∈ N . We can thus identify the tangent plane Tp N as a plane in Tp R3 ' R3 . We
would like to find the equation of this plane.
Suppose $v = \sum_i v^i\, \partial/\partial x^i\big|_p$ is a tangent vector in $T_pN$. Under the linear isomorphism

Tp R3 ' R3 , we identify v with the vector ⟨v 1 , v 2 , v 3 ⟩ ∈ R3 . Let c(t) be a curve in N starting at


c(0) = p and initial velocity c0 (0) = hv 1 , v 2 , v 3 i. Since c(t) lies in N , f (c(t)) = 0 for every t.
It follows by the chain rule that
©F

$$0 = \frac{d}{dt} f(c(t)) = \sum_{i=1}^{3} \frac{\partial f}{\partial x^i}(c(t))\,(c^i)'(t).$$

At t = 0,
$$0 = \frac{d}{dt}\bigg|_{t=0} f(c(t)) = \sum_{i=1}^{3} \frac{\partial f}{\partial x^i}(p)\,v^i.$$

Since the vector v = ⟨v 1 , v 2 , v 3 ⟩ represents the arrow from p = (p1 , p2 , p3 ) to x = (x1 , x2 , x3 ) ∈
Tp N , we usually make the substitution v i = xi − pi . This amounts to translating the tangent
plane from the origin to “p”. Thus the tangent plane to N at p is defined by the equation

$$\sum_{i=1}^{3} \frac{\partial f}{\partial x^i}(p)\,(x^i - p^i) = 0.$$

One interpretation of this equation is that the gradient vector

∂f /∂x1 (p), ∂f /∂x2 (p), ∂f /∂x3 (p)

Zh
of f at p is normal to any vector in the tangent plane.
We remark that to recover the tangent vector v ∈ Tp N , we do not use the substitution and
instead directly compute the v i ’s.

Example 3.4.15 (Tangent Plane to a Sphere)


Let f (x, y, z) := x2 + y 2 + z 2 − 1. To get the equation of the tangent plane to the unit
sphere S 2 = f −1 (0) in R3 at (a, b, c) ∈ S 2 , we first compute the partial derivatives

∂f
x
= 2x
∂x
∂f
= 2y
∂y
eli
∂f
= 2z.
∂z
At p = (a, b, c),

∂f
(p) = 2a
∂x
©F

∂f
(p) = 2b
∂y
∂f
(p) = 2c.
∂z
The equation determining the tangent plane that we derived above is given by

$$2a(x - a) + 2b(y - b) + 2c(z - c) = 0,$$
which simplifies to
$$ax + by + cz = 1$$
since $a^2 + b^2 + c^2 = 1$.
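As a quick sanity check of this computation (a minimal sympy sketch, not part of the text; the sample point (1/3, 2/3, 2/3) is an arbitrary choice on S 2 ):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - 1                                          # defining function of S^2
a, b, c = sp.Rational(1, 3), sp.Rational(2, 3), sp.Rational(2, 3)   # a point with a^2 + b^2 + c^2 = 1

grad = [sp.diff(f, v) for v in (x, y, z)]                           # (2x, 2y, 2z)
grad_p = [g.subs({x: a, y: b, z: c}) for g in grad]

# tangent-plane equation: sum_i df/dx^i(p) (x^i - p^i) = 0
plane = sum(g * (v - p) for g, v, p in zip(grad_p, (x, y, z), (a, b, c)))
print(sp.expand(plane))   # 2*x/3 + 4*y/3 + 4*z/3 - 2, i.e. ax + by + cz = 1 up to a factor of 2
```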

3.4.6 The Differential of an Inclusion Map

Let ι : S 1 → R2 be the inclusion map of the unit circle. Denote by x, y the standard
coordinates on R2 and x̄ = i∗ x, ȳ = i∗ y their restrictions to S 1 . On the upper semicircle

U = {(a, b) ∈ S 1 : b > 0},

ou
x̄ is a local coordinate, so that ∂/∂ x̄ is defined.
We know that
$$u\,\frac{\partial}{\partial x}\bigg|_p + v\,\frac{\partial}{\partial y}\bigg|_p
= \iota_*\!\left(\frac{\partial}{\partial \bar{x}}\bigg|_p\right)
= \frac{\partial}{\partial \bar{x}}\bigg|_p(\,\cdot\,\circ\iota)$$
for some u, v ∈ R. Evaluating both sides on the coordinate functions x and y yields the result
$$\iota_*\!\left(\frac{\partial}{\partial \bar{x}}\bigg|_p\right)
= \frac{\partial}{\partial x}\bigg|_p + \frac{\partial \bar{y}}{\partial \bar{x}}(p)\cdot\frac{\partial}{\partial y}\bigg|_p.$$

Thus although ι∗ : Tp S 1 → Tp R2 is injective, we cannot identify ∂/∂ x̄|p with ∂/∂x|p .


x
Remark 3.4.16 The formula for the image of ∂/∂ x̄|p holds for any smooth curve C ⊆ R2 , provided there is a chart on C for which x̄, the restriction of x to C, is a local coordinate.
eli
Now consider the unit sphere S 2 . On the upper hemisphere, we have the coordinate map
φ = (u, v) where u, v are the first and second coordinate maps in R3 . Thus the derivations
∂/∂u|p , ∂/∂v|p are tangent vectors of S 2 at any point p = (a, b, c) on the upper hemisphere.
Let ι : S 2 → R3 be the inclusion map and x, y, z the standard coordinates on R3 . Then
$$\iota_*\!\left(\frac{\partial}{\partial u}\bigg|_p\right) = \alpha^1\,\frac{\partial}{\partial x}\bigg|_p + \beta^1\,\frac{\partial}{\partial y}\bigg|_p + \gamma^1\,\frac{\partial}{\partial z}\bigg|_p,
\qquad
\iota_*\!\left(\frac{\partial}{\partial v}\bigg|_p\right) = \alpha^2\,\frac{\partial}{\partial x}\bigg|_p + \beta^2\,\frac{\partial}{\partial y}\bigg|_p + \gamma^2\,\frac{\partial}{\partial z}\bigg|_p.$$

By a similar calculation to the above, we can check that α1 = β 2 = 1 and β 1 = α2 = 0. At

the point p = (a, b, c),
$$\begin{aligned}
\gamma^1 &= \iota_*\!\left(\frac{\partial}{\partial u}\bigg|_p\right)(z)
= \frac{\partial}{\partial u}\bigg|_p (z \circ \iota)
= \frac{\partial}{\partial u}\bigg|_p \sqrt{1 - u^2 - v^2}
= \frac{-2u}{2\sqrt{1 - u^2 - v^2}}\bigg|_p
= -\frac{a}{c}, \\
\gamma^2 &= \cdots = -\frac{b}{c}.
\end{aligned}$$
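The values of γ 1 and γ 2 can be verified symbolically. A minimal sketch (sympy; the point (a, b) below is an arbitrary choice with c > 0):

```python
import sympy as sp

u, v = sp.symbols('u v')
z_of_uv = sp.sqrt(1 - u**2 - v**2)    # z ∘ ι on the upper hemisphere

gamma1 = sp.diff(z_of_uv, u)          # coefficient of ∂/∂z in ι_*(∂/∂u)
gamma2 = sp.diff(z_of_uv, v)          # coefficient of ∂/∂z in ι_*(∂/∂v)

a, b = sp.Rational(1, 2), sp.Rational(1, 3)
c = sp.sqrt(1 - a**2 - b**2)
print(sp.simplify(gamma1.subs({u: a, v: b}) + a / c))   # 0, i.e. gamma1 = -a/c
print(sp.simplify(gamma2.subs({u: a, v: b}) + b / c))   # 0, i.e. gamma2 = -b/c
```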

3.4.7 Useful Results

Proposition 3.4.17
Every smooth map f : N → Rm from a compact manifold N has a critical point.

Proof
x
Suppose towards a contradiction that f∗,p is surjective at every p ∈ N , ie it is a submersion.
Let π : Rm → R denote the projection onto the first coordinate and consider π◦f : N → R.
It is clear that π is a submersion. But then π ◦ f is a submersion as well.
eli
Recall that a continuous real-valued function attains its extrema on a compact set. Such an extremum is
necessarily a critical point of π ◦ f , which contradicts that π ◦ f is a submersion.

Proposition 3.4.18
Any injective immersion f : N → M on a compact manifold N is an embedding.
©F

Proof
An injective map is bijective onto its image f (N ) ⊆ M . Moreover, f is an immersion
by assumption. It suffices to check that f −1 is continuous in order to conclude that f is
homeomorphic onto its image and thus is an embedding.
We check that f is a closed map onto its image, so that pre-images of closed sets under f −1 are closed and
so f −1 is continuous. This is a purely topological result: any closed subset of the compact
space N is compact, hence its continuous image under f is compact, and a compact subset of a
Hausdorff space is closed.

This concludes the proof.

3.5 The Tangent Bundle

A smooth vector bundle over a smooth manifold M is a smoothly varying family of vector
spaces, parametrized by points of M , that locally looks like a product. The collection of

ou
tangent spaces to a manifold has the structure of a vector bundle over the manifold, called the
tangent bundle. A smooth map between two manifolds induces a bundle map between their
tangent bundles. Thus the tangent bundle construction is a functor from the category of smooth
manifolds to the category of vector bundles.
For us, the importance of the vector bundle point of view comes from its role in unifying
concepts. A section of a vector bundle π : E → M is a mapping sending each point of M into
the fiber of the bundle over that point. Both vector fields and differential forms on a manifold

Zh
are sections of vector bundles over the manifold.

3.5.1 The Topology of the Tangent Bundle

Let M be a smooth manifold. Recall that at each p ∈ M , the tangent space Tp M is the
vector space of all point-derivations of Cp∞ (M ), the algebra of germs of C ∞ functions at p.

Definition 3.5.1 (Tangent Bundle)


x
The tangent bundle of a smooth manifold M is the (disjoint) union of all the tangent
spaces of M G
T M := Tp M.
eli
p∈M

In general, if {Ai }i∈I is an indexed collection of subsets of a set S, then their disjoint union
is defined to be the set
$$\bigsqcup_{i \in I} A_i := \bigcup_{i \in I} \{i\} \times A_i.$$

Since the union ∪p∈M Tp M is already disjoint, it is not so important to us whether we write
∪ or ⊔ in the definition.
There is a natural map π : T M → M given by π(v) = p where v ∈ Tp M . At times, we use
the notation (p, v) to denote a tangent vector v ∈ Tp M to make explicit the point p ∈ M at
which v is a tangent vector.
As defined, T M has no topology or manifold structure. We will make it into a smooth
manifold and further show that it is a smooth vector bundle over M . We focus for now on
defining a topology on T M .

Let (U, φ) = (U, x1 , . . . , xn ) be a coordinate chart on M . Define
[ [
TU = Tp U = Tp M.
p∈U p∈U

Recall that a basis for Tp U = Tp M is the set of partial derivatives of coordinate functions.
Thus any v ∈ Tp M can be uniquely written as

ou
X ∂
v= ci .
i
∂xi p

The coefficients ci = ci (v) depend on v and so are functions on T U . Let x̄i = xi ◦ π where
π(v) = p for v ∈ Tp M and define the map φ̃ : T U → φ(U ) × Rn by
v 7→ (x1 (p), . . . , xn (p), c1 (v), . . . , cn (v)) = (x̄1 , . . . , x̄n , c1 , . . . , cn )(v).
Then φ̃ has an inverse

Zh
X ∂
(φ(p), c1 , . . . , cn ) 7→ ci .
i
∂xi p

This means we can use φ̃ to transfer the topology of φ(U ) × Rn to T U : a set in T U is open if
and only if φ̃(A) is open in φ(U ) × Rn , under the standard topology of R2n . By construction,
T U is homeomorphic to φ(U ) × Rn . If V ⊆ U is open, then φ(V ) × Rn is an open subset of
φ(U ) × Rn . Hence the relative topology on T V as a subset of T U is the same as the topology
induced from the bijection φ̃|T V : T V → φ(V ) × Rn .
Let φ∗ : Tp U → Tφ(p) Rn be the differential of the coordinate map φ at p. We may identify
φ∗ (v) with some column vector hc1 , . . . , cn i ∈ Rn where ci ’s are the coefficients of the tangent
x
vector φ∗ (v) with respect to the standard basis of Tφ(p) Rn . Thus another way to describe φ̃
is
φ̃ = (φ ◦ π, φ∗ ).
eli
Let B be the collection of all open subsets of T (Uα ) for all coordinate open sets Uα in M .
That is,

B = {A : A open in T (Uα ) for the coordinate open set Uα ⊆ M }.


[

α
©F

Lemma 3.5.1
(i) For any manifold M , the set T M is the union of all A ∈ B.
(ii) Let U, V be coordinate open sets in a manifold M . If A is open in T U and B
is open in T V , then A ∩ B is open in T (U ∩ V ).

It follows from this lemma that B forms a basis for some topology on T M . We give the
tangent bundle T M the topology generated by the basis B.

Proof
(i) Let {(Uα , φα )} be an atlas for M . Then
[ [
TM = T (Uα ) ⊆ A ⊆ T M.
α A∈B

ou
(ii) Since T (U ∩V ) is a subspace of T U , A∩T (U ∩V ) must be open in T (U ∩V ). Similarly,
B ∩ T (U ∩ V ) is open in T (U ∩ V ). But then

A ∩ B ⊆ T U ∩ T V = T (U ∩ V )

so that A ∩ B is open in T (U ∩ V ).

Lemma 3.5.2

Zh
A topological manifold M has a countable basis consisting of coordinate open sets.

Proof
Let {(Uα , φα )} be an atlas on M and B = {Bi } a countable basis for M . For each
coordinate open set Uα and p ∈ Uα , there is a basic open set Bp,α ∈ B such that

p ∈ Bp,α ⊆ Uα .

The collection {Bp,α } ⊆ B satisfies the desired properties.


x
Proposition 3.5.3
The tangent bundle T M of a manifold M is second countable.
eli
Proof
Let {(Ui , φi )}i be a countable atlas for M . Since T Ui is homeomorphic to the open subset
φi (Ui ) × Rn ⊆ R2n and any subset of a Euclidean space is second countable, T Ui is
second countable. Let {Bi,j }j be a countable basis for T Ui . Then {Bi,j }i,j is a countable
basis for T M .
©F

Proposition 3.5.4
The tangent bundle T M of a manifold is Hausdorff.

Proof
Let (p, Xp ) 6= (q, Xq ) ∈ T M .
If p 6= q, then since M is Hausdorff, we can find open sets U, V ⊆ M separating p, q. By
shrinking U, V if necessary, we may assume that U, V are coordinate open sets. Then

T U, T V are disjoint open sets in T M separating (p, Xp ), (q, Xq ).
Suppose now that p = q and Xp 6= Xq . Let (U, φ) be a chart about p. Then T U is
homeomorphic to an open subset of φ(U ) × Rn ⊆ R2n through the map φ̃. But Euclidean
space is Hausdorff so there are open sets Vp , Vq separating φ̃(p, Xp ), φ̃(p, Xq ). Since φ̃ is a
homeomorphism, φ̃−1 (Vp ), φ̃−1 (Vq ) separate (p, Xp ), (p, Xq ) as desired.

ou
3.5.2 The Manifold Structure on the Tangent Bundle

The most natural set of charts to propose as an atlas is the set

{(T Uα , φ̃α )}.

Our goal is now to show that this is indeed a smooth structure on T M . We already have
T M = ∪α T Uα . It remains to check that φ̃α , φ̃β are smoothly compatible on (T Uα ) ∩ (T Uβ ).

Zh
Recall if (U, x1 , . . . , xn ), (V, y 1 , . . . , y n ) are two charts on M , then for any p ∈ U ∩ V there
are two bases singled out for the tangent space Tp M : {∂/∂xj |p }j and {∂/∂y i |p }i . So any
tangent vector v ∈ Tp M has two descriptions:
$$v = \sum_j a^j\,\frac{\partial}{\partial x^j}\bigg|_p = \sum_i b^i\,\frac{\partial}{\partial y^i}\bigg|_p.$$

We can translate the coefficients easily by applying both sides to x^k:
$$a^k = \Big(\sum_j a^j\,\frac{\partial}{\partial x^j}\Big)x^k
= \Big(\sum_i b^i\,\frac{\partial}{\partial y^i}\Big)x^k
= \sum_i b^i\,\frac{\partial x^k}{\partial y^i}.$$

Similarly, applying both sides to y^k yields
$$b^k = \sum_j a^j\,\frac{\partial y^k}{\partial x^j}.$$

Write Uαβ = Uα ∩ Uβ , φα = (x1 , . . . , xn ), and φβ = (y 1 , . . . , y n ). Then
$$\tilde{\phi}_\beta \circ \tilde{\phi}_\alpha^{-1} : \phi_\alpha(U_{\alpha\beta}) \times \mathbb{R}^n \to \phi_\beta(U_{\alpha\beta}) \times \mathbb{R}^n$$
is given by
$$(\phi_\alpha(p), a^1, \dots, a^n) \mapsto \Big(p, \sum_j a^j\,\frac{\partial}{\partial x^j}\Big|_p\Big)
\mapsto \Big((\phi_\beta \circ \phi_\alpha^{-1})(\phi_\alpha(p)),\, b^1, \dots, b^n\Big).$$




But recall we computed
$$b^i = \sum_j a^j\,\frac{\partial y^i}{\partial x^j}(p)
= \sum_j a^j\,\frac{\partial (\phi_\beta \circ \phi_\alpha^{-1})^i}{\partial r^j}(\phi_\alpha(p)).$$

ou
By the definition of a smooth atlas, φβ ◦ φα−1 is C ∞ . Therefore φ̃β ◦ φ̃α−1 is C ∞ . This completes

the proof that the tangent bundle T M is a smooth manifold with {(T Uα , φ̃α )} as a smooth
atlas.
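As a concrete illustration of the transformation rule b^i = Σ_j (∂y^i /∂x^j ) a^j (a small symbolic sketch; the polar and Cartesian charts on an open subset of R2 are a familiar stand-in, not an example from the text):

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)       # "old" coordinates (x^1, x^2) = (r, t)
y1, y2 = r * sp.cos(t), r * sp.sin(t)         # "new" coordinates (y^1, y^2)

J = sp.Matrix([[sp.diff(y1, r), sp.diff(y1, t)],
               [sp.diff(y2, r), sp.diff(y2, t)]])   # Jacobian (∂y^i/∂x^j)

a = sp.Matrix([2, 5])        # components a^j of a tangent vector w.r.t. ∂/∂r, ∂/∂t
b = sp.simplify(J * a)       # components b^i of the same vector w.r.t. the Cartesian frame
print(b)
```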

3.5.3 Vector Bundles

Zh
On the tangent bundle T M of a smooth manifold M , the natural projection map π : T M →
M given by
π(p, v) := p
makes T M into a C ∞ vector bundle over M . We make this precise in this section.

Definition 3.5.2 (Fiber)


Given any map π : E → M , we call the pre-image π −1 (p) of p ∈ M the fiber at p.

We usually denote the fiber at p as Ep . For any two maps π : E → M, π 0 : E 0 → M , a map


x
φ : E → E 0 is said to be fiber-preserving if φ(Ep ) ⊆ Ep0 for all p ∈ M .

Definition 3.5.3 (Locally Trivial)


eli
A surjective smooth map π : E → M of manifolds is said to be locally trivial of rank
r if
(i) each fiber π −1 (p) has the structure of a vector space of dimension r
(ii) for each p ∈ M , there is an open neighborhood U 3 p and a fiber-preserving
diffeomorphism φ : π −1 (U ) → U × Rr such that for every q ∈ U the restriction
©F

φ|π−1 (q) : π −1 (q) → {q} × Rr

is a vector space isomorphism.

Such an open set U in (ii) is called a trivializing open set for E and φ is a trivialization of E
over U . Also note that φ is fiber-preserving with respect to the projection map U × Rr → U .
The collection {(U, φ)} with {U } being an open cover of M , is called a local trivialization
for E, and {U } is a trivializing open cover of M for E.

Definition 3.5.4 (Smooth Vector Bundle)
A C ∞ vector bundle of rank r is a triple (E, M, π) consisting of manifolds E, M and
a surjective smooth map π : E → M that is locally trivial of rank r.

The manifold E is called the total space of the vector bundle and M is the base space. By
abuse of language, we say that E is a vector bundle over M .

ou
Let (E, M, π) be a vector bundle of rank r. For any regular submanifold S ⊆ M , the triple
(π −1 S, S, π|π−1 (S) ) is also a smooth vector bundle over S, called the restriction of E to S.
We often write the restriction as E|S instead of π −1 S.
Properly speaking, the tangent bundle of a manifold M is a triple (T M, M, π), and T M is
the total space of the tangent bundle. Here π is the canonical projection as aforementioned.
In common usage, T M is often referred to as the tangent bundle.

Zh
Example 3.5.5 (Product Bundle)
Given a manifold M , let π : M × Rr → M be the projection onto the first factor. Then
M × Rr → M is a vector bundle of rank r, called the product bundle of rank r over M .
The vector space structure on the fiber π −1 (p) is the obvious one.
A local trivialization on M × Rr is given by the identity map IdM ×Rr . The infinite cylinder
S 1 × R is the product bundle of rank 1 over the circle.
Let π : E → M be a smooth vector bundle. Suppose (U, Ψ) = (U, x1 , . . . , xn ) is a chart on
M and
x
φ : E|U → U × Rr
φ(e) = (π(e), c1 (e), . . . , cr (e))
eli
is a trivialization of E over U . Then

(Ψ × Id) ◦ φ
= (x1 , . . . , xn , c1 , . . . , cr ) : E|U → U × Rr → Ψ(U ) × Rr
⊆ Rn × Rr
©F

is a diffeomorphism of E|U onto its image and so is a chart on E. We call x1 , . . . , xn the base
coordinates and c1 , . . . , cr the fiber coordinates of the chart

(E|U , (Ψ × Id) ◦ φ)

on E. Note that the fiber coordinates ci depend only on the trivialization φ of the bundle
E|U and not on the coordinate chart Ψ on the base U .
Let πE : E → M and πF : F → N be two vector bundles, possibly of different ranks.

Definition 3.5.5 (Bundle Map)
A bundle map from E to F is a pair of maps (f, f˜) where f : M → N and f˜ : E → F
such that
(i) πF ◦ f˜ = f ◦ πE
(ii) f˜ is linear on each fiber, ie for each p ∈ M , f˜ : Ep → Ff (p) is a linear map of
vector spaces.

ou
The collection of all vector bundles together with bundle maps between them forms a cate-
gory.

Example 3.5.6
A smooth map f : N → M between manifolds induces a bundle map (f, f˜), where
f˜ : T N → T M is given by

Zh
f˜(p, v) = (f (p), f∗ (v)) ∈ {f (p)} × Tf (p) M ⊆ T M

for all v ∈ Tp N . This gives rise to a covariant functor T from the category of smooth
manifolds and smooth maps to the category of vector bundles and bundle maps: To each
manifold M , we associate its tangent bundle T M , and to each smooth map f : N → M
between manifolds, we associate the bundle map

T f = (f : N → M, f˜ : T N → T M ).
If E, F are two vector bundles over the same manifold M , then a bundle map from E to F
x
over M is a bundle map in which the base map is the identity IdM . For a fixed manifold M ,
we can also consider the category of all smooth vector bundles over M and bundle maps
over M . In this category, it makes sense to speak of an isomorphism of vector bundles
over M . Any vector bundle over M isomorphic over M to the product bundle M × Rr is
eli
called a trivial bundle.

3.5.4 Smooth Sections


©F

Definition 3.5.6 (Section)


A section of a vector bundle π : E → M is a map s : M → E such that π ◦ s = IdM .

This condition simply means that for each p ∈ M , s maps p into the fiber Ep above p. We
can visualize a section as a “cross-section” of the bundle. We say that a section is smooth
if it is smooth as a map from M → E.

Definition 3.5.7 (Vector Field)
A vector field X on a manifold M is a function that assigns a tangent vector Xp ∈ Tp M
to each point p ∈ M .

In terms of the tangent bundle, a vector field on M is simply a section of the tangent bundle
π : T M → M and the vector field is smooth if it is smooth as a map from M → T M .

ou
Example 3.5.7
The formula
$$X_{(x,y)} = -y\,\frac{\partial}{\partial x} + x\,\frac{\partial}{\partial y} = \begin{pmatrix} -y \\ x \end{pmatrix}$$
defines a smooth vector field on R2 .
Let s, t : M → E be smooth sections of a smooth vector bundle π : E → M and f ∈ C ∞ (M ).

Zh
We define the sum s + t : M → E and product f s : M → E as follows:
(s + t)(p) := s(p) + t(p) ∈ Ep p∈M
(f s)(p) := f (p)s(p) ∈ Ep . p∈M

Proposition 3.5.8
Let s, t : M → E be smooth sections of a smooth vector bundle π : E → M and
f ∈ C ∞ (M ).
(i) s + t is a smooth section of E
(ii) f s is a smooth section of E
x
Proof
(i) It is clear that s + t is a section of E. To show that it is smooth, fix a point p ∈ M
eli
and let V 3 p be a trivializing open set for E, with smooth trivialization

φ : π −1 (V ) → V × Rr .

Suppose that for q ∈ V ,

(φ ◦ s)(q) = (q, a1 (q), . . . , ar (q))


©F

(φ ◦ t)(q) = (q, b1 (q), . . . , br (q)).

Since s, t are smooth, the components ai , bi are smooth on V . Since φ is linear on each
fiber,
(φ ◦ (s + t))(q) = (q, a1 (q) + b1 (q), . . . , ar (q) + br (q)).
This proves that s + t is a smooth map on V and hence at p. Since p is an arbitrary point
of M , the section s + t is smooth on M .

(ii) We again use the linearity of φ along each fiber to deduce that

(φ ◦ (f s))(q) = (q, f (q)a1 (q), . . . , f (q)ar (q)).

Since smoothness is preserved under multiplication, this function is smooth. We omit the
details as it is similar to (i).
Denote the set of all smooth sections of E by Γ(E). The proposition above shows that

ou
Γ(E) is not only a vector space over R, but also a module over the ring C ∞ (M ). For any
open subset U ⊆ M , one can also consider the vector space Γ(U, E) of smooth sections of
E over U . Then Γ(U, E) is both a vector space over R and a C ∞ (U )-module. Note that
Γ(M, E) = Γ(E). To contrast with sections over a proper subset U , a section over the entire
manifold M is called a global section.

3.5.5 Smooth Frames

Definition 3.5.8 (Frame)


Zh
A frame for a vector bundle π : E → M over an open set U is a collection of sections
s1 , . . . , sr of E over U such that at each point p ∈ U , the elements s1 (p), . . . , sr (p)
form a basis for the fiber Ep := π −1 (p).
x
A frame is said to be smooth if s1 , . . . , sr are smooth as sections of E over U . A frame for
the tangent bundle T M → M over an open set U is simply called a frame on U .
eli
Example 3.5.9
Let M be a manifold and e1 , . . . , er the standard basis for Rr . Define ēi : M → M × Rr
by
ēi (p) := (p, ei ).
Then ē1 , . . . , ēr is a smooth frame for the product bundle M × Rr → M .
©F

Example 3.5.10 (The Frame of a Trivialization)


Let π : E → M be a smooth vector bundle of rank r. If φ : E|U → U × Rr is a
trivialization of E over an open set U , then φ−1 carries the smooth frame ē1 , . . . , ēr of the
product bundle U × Rr to a smooth frame t1 , . . . , tr for E over U :

ti (p) = φ−1 (ēi (p)) = φ−1 (p, ei )

for each p ∈ U . We call t1 , . . . , tr the smooth frame over U of the trivialization φ.

Lemma 3.5.11
Let φ : E|U → U × Rr be a trivialization over an open set U of a smooth vector
bundle E → M , and t1 , . . . , tr the smooth frame over U of the trivialization. Then a
section X
s= bi ti
i

ou
of E over U is smooth if and only if its coefficients bi relative to the frame t1 , . . . , tr
are smooth.

Proof
Suppose the section s = i bi ti of E over U is smooth. Then φ ◦ s is smooth. But
P

X
(φ ◦ s)(p) = bi (p)φ(ti (p))

Zh
i
X
= bi (p)(p, ei )
i
!
X
= p, bi (p)ei .
i

Thus bi (p) are simply the fiber coordinates of s(p) relative to the trivialization φ. Since
φ ◦ s is smooth, all the bi ’s must be smooth.
The converse holds since the collections of smooth sections is a module over C ∞ (U ).
x
Proposition 3.5.12 (Characterization of Smooth Sections)
Let π : E → M be a smooth vector bundle and U ⊆ M an open subset. Suppose
s1 , . . . , sr is a smooth frame for E over U . Then a section
eli
X
s= cj s j
j

of E over U is smooth if and only if the coefficients cj are smooth functions on U .

We have already proven the case where s1 , . . . , sr is the frame of a trivialization of E over
©F

U . Thus it suffices to reduce the general case to this one.

Proof
Suppose s = j cj sj is a smooth section of E over U . Fix a point p ∈ U and choose a
P

trivializating open set V ⊆ U for E containing p, with smooth trivialization φ : π −1 (V ) →


V × Rr . Let t1 , . . . , tr be the smooth frame of the trivialization φ. If we write s, sj in

terms of the frame t1 , . . . , tr ,
X
s= bi ti
i
X
sj = aij ti ,
j

the coefficients
P b j, aj will all be smooth functions on V by the previous lemma. Next we

ou
i i

express s = j c sj in terms of the ti ’s:


X X X
bi ti = s = cj s j = cj aij ti .
i j i,j

Comparing the coefficients of ti gives bi = cj aij . In matrix notation,


P
j

Zh
   
b1 c1
b =  ...  = A  ...  = Ac.
   
br cr

At each point of V , being the transition matrix between two bases, the matrix A is invert-
ible. By Cramer’s rule, A−1 is a matrix of smooth functions on V . Hence c = A−1 b is a
column vector of smooth functions on V . This proves that c1 , . . . , cr are smooth functions
at p ∈ U . Since p is an arbitrary point of U , the coefficients cj are smooth functions on
U.
x
The converse holds similarly to the base case since the collection of smooth sections forms
a C ∞ (U )-module.
Remark 3.5.13 If one replaces “smooth” by “continuous” throughout, the discussion in
eli
this subsection remains valid in the continuous category.

3.6 Bump Functions and Partitions of Unity


A partition of unity on a manifold M is a collection {ρα } of non-negative functions summing
©F

to 1. We typically ask that the partition of unity is indexed by the same set as an open
cover {Uα } and that the support of ρα is contained in Uα so that ρα vanishes outside of Uα .
The existence of smooth partitions of unity is an important technical tool in the theory
of smooth manifolds that distinguishes it from that of real-analytic or complex manifolds.
We construct smooth bump functions on any manifold and prove the existence of a smooth
partition of unity on a compact manifold. The general case is more technical and omitted.
If we have some object locally defined for each Uα , we have a generic way of extending it

to all of M as a “weighted sum”. Conversely, we can decompose a global object on a manifold
into a locally finite sum of local objects.

3.6.1 Smooth Bump Functions

Recall that R× denotes the group of nonzero real numbers under multiplication. Also, recall

ou
the support of a real-valued function f : M → R is defined to be

supp f := {p ∈ M : f (p) 6= 0}.

Let q ∈ M and U 3 q a neighborhood of q. A bump function at q supported in U is a


continuous non-negative function ρ on M that is 1 in a neighborhood of q with supp ρ ⊆
U . We are only interested in smooth bump functions which always requires a formula for

Zh
verification of smoothness.
The main challenge in constructing a smooth bump function is to obtain a smooth step
function. Consider the smooth function
$$f(t) := \begin{cases} e^{-1/t}, & t > 0, \\ 0, & t \le 0. \end{cases}$$

We seek a smooth step function g(t) by dividing f (t) by some positive function `(t). Such
a g(t) vanishes for t ≤ 0. The denominator should be a positive function that agrees with
f (t) for t ≥ 1 so that g(t) = 1 for t ≥ 1. We can take `(t) = f (t) + f (1 − t) and consider
x
f (t)
g(t) := .
f (t) + f (1 − t)
eli
`(t) > 0 for all t ∈ R by construction and g(t) is smooth since it is the quotient of two
smooth functions with a non-vanishing denominator.
Given 0 < a < b ∈ R, we make the linear change of variables to map [a², b²] → [0, 1]:
$$x \mapsto \frac{x - a^2}{b^2 - a^2}.$$
Then h : R → [0, 1] given by
$$h(x) := g\!\left(\frac{x - a^2}{b^2 - a^2}\right)$$
is a smooth step function that vanishes for x ≤ a2 and is 1 for x ≥ b2 . We then perform
another change of variables within h to make the function symmetric

k(x) := h(x2 ).

Finally, set
$$\rho(x) := 1 - k(x) = 1 - g\!\left(\frac{x^2 - a^2}{b^2 - a^2}\right).$$

This is a smooth bump function at 0 ∈ R that is identically 1 on [−a, a] and has support in
[−b, b]. For any q ∈ R, ρ(x − q) is a smooth bump function at q.

ou
In order to extend this construction to Rn , consider
$$\sigma(x) := \rho(\lVert x \rVert) = 1 - g\!\left(\frac{\lVert x \rVert^2 - a^2}{b^2 - a^2}\right).$$

This is a smooth bump function at 0 ∈ Rn that is 1 on the closed ball B̄(0, a) and has
support in the closed ball B̄(0, b). Smoothness is a result of composing smooth functions.

Zh
Again, translating by q yields a smooth bump function at q ∈ Rn ,

σ(x − q).
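The construction above translates directly into code. A minimal numerical sketch (the choices a = 1, b = 2 are arbitrary):

```python
import numpy as np

def f(t):
    # exp(-1/t) for t > 0, and 0 for t <= 0 (safe against division by zero)
    return np.where(t > 0, np.exp(-1.0 / np.where(t > 0, t, 1.0)), 0.0)

def g(t):
    return f(t) / (f(t) + f(1.0 - t))    # smooth step: 0 for t <= 0, 1 for t >= 1

def rho(x, a=1.0, b=2.0):
    # smooth bump at 0: identically 1 on [-a, a], supported in [-b, b]
    return 1.0 - g((x**2 - a**2) / (b**2 - a**2))

xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
print(rho(xs))   # 1, 1, 1, a value in (0, 1), 0, 0
```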

Proposition 3.6.1
Let q ∈ M be a point in a manifold and U 3 q a neighborhood of q. There is a smooth
bump function at q supported in U .

Proof
x
Let V ⊆ U be a neighborhood of q and ϕ : V → Rn a coordinate map on V . Without
loss of generality, by translating if necessary, we can assume ϕ(q) = 0 ∈ Rn . Pick ε > 0
sufficiently small so that the open Euclidean ball B(0; 3ε) ⊆ ϕ(V ). Let σ : Rn → R be
the smooth bump function that is 1 on B̄(0, ε) and has support in B̄(0, 2ε). Note that
eli
supp σ ⊆ B(0; 3ε). Then the function σ ◦ ϕ : V → Rn → R is the desired bump function.
In general, a smooth function on an open subset U of a manifold M cannot be extended to a
smooth function on all M . An example is the secant function on the open interval (−π/2, π/2)
in R. However, this is possible if we relax the condition so that the global function agrees
with M only on some neighborhood of a point in U .
©F

Proposition 3.6.2 (Smooth Extension of a Function)


Suppose f is a smooth function defined on a neighborhood U ∈ p in a manifold M .
Then there is a smooth function f˜ on M that agrees with f in some possibly smaller
neighborhood of p.

Proof
Choose a smooth bump function ρ : M → R supported in U that is 1 on a neighborhood

V of p. Define (
ρ(q)f (q), q ∈ U
f˜(q) :=
0, q∈/U

As the product of two smooth functions on U , f˜ is smooth on U . If q ∈


/ U , then q ∈
/ supp ρ
and by the closedness of supp ρ, there is an open set containing q on which f is 0. Thus
˜
f˜ is smooth at every point q ∈
/ U as well.

ou
Since ρ ≡ 1 on V , the function f˜ agrees with f on V .

3.6.2 Partitions of Unity

A collection {Ui } of subsets of a topological space S is said to be locally finite if every point

Zh
q ∈ S has a neighborhood that meets only finitely many of the sets Ui . In particular, every
q ∈ S is contained in only finitely many of the Ui ’s.

Definition 3.6.1 (Partition of Unity)


A smooth partition of unity is a collection of non-negative smooth functions {ρα :
M → R}α∈A such that
(i) The collection of supports {supp ρα } is locally finite
(ii)
P
α ρα = 1

Given an open cover {Uα }α∈A of M , we say that a partition of unity {ρα }α∈A is subordinate
x
to the open cover {Uα } if supp ρα ⊆ Uα for every α ∈ A.
The sum in (ii) makes sense as the locally finite condition ensures that it is a finite sum at
any given point.
eli
In general, consider {fα } a collection of smooth functions on a manifold M such that the
collection of its supports {supp fα } is locally finite. Then every q ∈ M has a neighborhood
Wq that intersects only finitely many of the supp fα . Thus on Wq , the sum Σα fα is actually a finite
sum. This shows that the function is well-defined and smooth on the manifold M . We call
such a sum a locally finite sum.
©F

3.6.3 Existence of a Partition of Unity

In this subsection we begin a proof of the existence of a smooth partition of unity on a


manifold. The case of compact manifolds is easier and already has some features of the
general case. The proof of the general case requires an appropriate substitute for compactness
that is not necessary elsewhere, hence we omit it.

Lemma 3.6.3
If ρ1 , . . . , ρm are real-valued functions on a manifold M , then
!
X [
supp ρi ⊆ supp ρi .
i i

ou
Proof
Define
X
A := {p ∈ M : ρi (p) 6= 0}
i

and Ai := {p ∈ M : ρi (p) 6= 0}. It is clear that A ⊆ ∪i Ai . Then

Zh
X
supp ρi := Ā
i
[
⊆ Ai
i
[
= Ai
i
[
= supp ρi .
i

Here we used the fact that B ∪ C = B̄ ∪ C̄. A fact that can be proved using the definition
x
of the closure of B as the smallest closed set containing B.

Proposition 3.6.4
Let M be a compact manifold and {Uα }α∈A an open cover of M . There is a smooth
eli
partition of unity {ρα }α∈A subordinate to {Uα }α∈A .

Proof
For each q ∈ M , find an open set Uα 3 q and let Ψq be a smooth bump function at q
supported in Uα . Because Ψq (q) > 0, there is a neighborhood Wq of q on which Ψq > 0.
By the compactness of M , we can find a subcover
©F

{Wq1 , . . . , Wqm } ⊆ {Wq : q ∈ M }.

Consider m
X
Ψ := Ψqi .
i=1

This is positive at every point q ∈ M as q ∈ Wqi for some i.

Define
Ψqi
ϕi :=
Ψ
for i ∈ [m]. We have i ϕi = 1 by construction. Also, this is a finite sum and hence
P
locally finite. Hence we already have a partition of unity and it remains to show that it
is subordinate to {Uα } by re-indexing.
Since Ψ > 0, ϕi (q) 6= 0 if and only if Ψqi (q) 6= 0 so that

ou
supp ϕi = supp Ψqi ⊆ Uα

for some α ∈ A. For each i ∈ [m], choose τ (i) ∈ A to be an index such that supp ϕi ⊆ Uτ (i) .
Group the functions {ϕi } by τ (i) and define
X
ρα := ϕi

Zh
i∈[m]:τ (i)=α

for each α ∈ A. The empty sum is taken to zero. Then


X X X X
ρα = ϕi = ϕi = 1.
α∈A α∈A τ (i)=α i

Moreover, the lemma above guarantees that


[
supp ρα ⊆ supp ϕi ⊆ Uα .
τ (i)=α
x
This concludes the proof.
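The construction in this proof can be carried out concretely. A minimal numerical sketch on the circle, parametrized by angle (the two bump profiles and their centers are illustrative choices, not data from the text):

```python
import numpy as np

def bump(theta, center, width):
    # a smooth nonnegative function on S^1, positive exactly where the angular distance to `center` is < width
    d = np.angle(np.exp(1j * (theta - center)))      # angle difference wrapped to (-pi, pi]
    out = np.zeros_like(d)
    inside = np.abs(d) < width
    out[inside] = np.exp(-1.0 / (width**2 - d[inside]**2))
    return out

theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
psi = [bump(theta, 0.0, 2.0), bump(theta, np.pi, 2.0)]   # Psi_{q_1}, Psi_{q_2}
Psi = psi[0] + psi[1]                                    # positive everywhere: the W_{q_i} cover S^1
rho = [p / Psi for p in psi]                             # rho_i = Psi_{q_i} / Psi

print(Psi.min() > 0.0)                  # True
print(np.allclose(rho[0] + rho[1], 1))  # True: the rho_i form a partition of unity
```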
The statement of the existence of smooth partition of unity is as follows.
eli
Theorem 3.6.5 (Existence of Smooth Partition of Unity)
Let {Uα }α∈A be an open cover of a manifold M .
(i) There is a smooth partition of unity {ϕk }∞
k=1 such that for every k, ϕk has
compact support and supp ϕk ⊆ Uα for some α ∈ A.
(ii) If we do not require compact support, there is a smooth partition of unity {ρα }
subordinate to {Uα }.
©F

3.7 Vector Fields

A vector field X on a manifold M assigns a tangent vector Xp ∈ Tp M to each p ∈ M . More


formally, a vector field on M is a section of the tangent bundle T M of M .

Vector fields are abundant in nature. For instance, the velocity vector field of a fluid flow,
the electric field of a charge, the gravitation field of a mass, and so on. The fluid flow model
is quite natural, as every smooth vector field may be viewed locally as the velocity vector
field of a fluid flow. The path traced out by a point under this flow is called the integral
curve of the vector field. Integral curves are curves whose velocity vector field is the restriction of
the given vector field to the curve. The theory of ODEs guarantees the existence of integral
curves.

ou
3.7.1 Smoothness of Vector Field

We previously defined a vector field X : M → T M to be smooth if it is smooth as a section


of the tangent bundle π : T M → M . In a coordinate chart (U, φ) = (U, x1 , . . . , xn ) on M ,
the value of the vector field at p ∈ U is a linear combination

Zh
X ∂
Xp = ai (p) .
i
∂xi p

As p varies in U , the coefficients ai become functions on U .


Recall the chart (U, φ) = (U, x1 , . . . , xn ) on M induces a chart on the tangent bundle
(T U, φ̃) = (T U, x̄1 , . . . , x̄n , c1 , . . . , cn )
where x̄i = π ∗ xi = xi ◦ π and the ci are defined by

x
X
v= ci (v)
i
∂xi p

for v ∈ Tp M .
eli
Comparing coefficients of Xp
X ∂ X ∂
Xp = ai (p) = ci (Xp )
i
∂xi p i
∂xi p

for p ∈ U yields the equality ai = ci ◦ X as functions on U . The ci ’s are smooth functions on


T U as they are coordinates. Thus if X is smooth and (U, x1 , . . . , xn ) is a chart on M , then
©F

the coefficients ai of X relative to the frame ∂/∂xi are smooth on U .


The following lemma shows that the converse also holds.

Lemma 3.7.1 (Smoothness of a Vector Field on a Chart)


Let
P (U, φ) = (U, x1 , . . . , xn ) be a chart on a manifold M . A vector field X =
i a ∂/∂x on U is smooth if and only if the coefficient functions a are all smooth
i i i

on U .

Proof
We have proven a more general fact: For any smooth vector bundle π : E → M and P open
subset U ⊆ M . If s1 , . . . , sr is a smooth frame for E over U , then a section s = j cj sj
of E over U is smooth if and only if the cj ’s are smooth functions on U .
We let π be the tangent bundle and si = ∂/∂xi be the coordinate section to complete the
proof.

ou
We can now characterize the smoothness of a vector field on a manifold in terms of its
coefficients relative to coordinate frames.

Proposition 3.7.2 (Smoothness of a Vector Field in terms of Coefficients)


Let X be a vector field on a manifold M . The following are equivalent:
(i) X is smooth on M
(ii) M has an atlas such that on any chart (U, x1 , . . . , xn ) of the atlas, the coefficients

Zh
ai of X = i ai ∂/∂xi relative to the frame ∂/∂xi are all smooth
P

(iii) On any chart (U, x1 , . . . , xn ) for M , the coefficients ai of X = i ai ∂/∂xi relative


P
to the frame ∂/∂xi are all smooth

Just as in the Euclidean case, a vector field X on a manifold M induces a linear map on the
algebra C ∞ (M ) of smooth functions on M . For f ∈ C ∞ (M ), define Xf to be the function
(Xf )(p) = Xp f.
We can now state an alternative characterization of a smooth vector field in terms of its
action as an operator on smooth functions.
x
Proposition 3.7.3 (Smoothness of a Vector Field in terms of Functions)
A vector field X on M is smooth if and only if for every smooth function f on M , the
function Xf is smooth on M .
eli

Proof
( =⇒ ) Suppose X is smooth so that on any chart (U, x1 , . . . , xn ) of M , the coefficients ai
of X = i ai ∂/∂xi are smooth. For any f ∈ C ∞ (M ), it follows that Xf = i ai ∂f /∂xi
P P
is smooth on U . Since M can be covered by charts, Xf is smooth on M .
©F

( ⇐= ) Let (U, x1 , . . . , xn ) be any chart on M . Suppose X = i ai ∂/∂xi on U and p ∈ U .


P

Each of the coordinate functions xk can be extended to a smooth function x̃k on M that
agrees with xk in a neighborhood V of p ∈ U . Thus on V ,
! !
X ∂ X ∂
X x̃k = ai i x̃k = ai i x k = ak .
i
∂x i
∂x
By assumption, each ak is smooth on V and in particular at p. But p ∈ M was arbitrary,
concluding the proof.

The proposition above shows that we can view a smooth vector field X as a linear operator
on C ∞ (M ). Similar to the Euclidean case, X is a derivation: for all f, g ∈ C ∞ (M ),

X(f g) = (Xf )g + f (Xg).

Thus smooth vector fields on a manifold M may be viewed both as smooth sections
of the tangent bundle T M and as derivations on the algebra C ∞ (M ) of smooth functions.

ou
In fact, it can be shown that these two descriptions of smooth vector fields are equivalent.
Similarly to smooth extensions of smooth functions, we can smoothly extend vector fields
using bump functions.

Proposition 3.7.4
Suppose X is a smooth vector field defined on a neighborhood U 3 p in a manifold
M . Then there is a smooth vector field X̃ on M that agrees with X on some (possibly

Zh
smaller) neighborhood of p.

3.7.2 Integral Curves

Definition 3.7.1 (Integral Curve)


Let X be a smooth vector field on a manifold M and p ∈ M . An integral curve of X
is a smooth curve c : (a, b) → M such that c0 (t) = Xc(t) for all t ∈ (a, b).
x
We typically assume that 0 ∈ (a, b). If we furthermore have c(0) = p, then we say that c is
an integral curve starting at p and call p the initial point of c. To show the dependence of
such an integral curve on the initial point p, we also write c(t, p) instead of c(t).
eli
We say that an integral curve is maximal if its domain cannot be extended to a larger
interval.

Example 3.7.5
Let X be the vector field x2 d/dx on the real line R. We wish to determine the maximal
integral curve of X starting at x = 2.
©F

Denote the integral curve by x(t). Then

d d
x0 (t) = Xx(t) ⇐⇒ ẋ(t) = x2 .
dt dt
Thus x(t) satisfies the differential equation

dx
= x2
dt

subject to x(0) = 2. Solving the above by separation of variables yields
$$\frac{dx}{x^2} = dt, \qquad -\frac{1}{x} = t + C, \qquad x = -\frac{1}{t + C}.$$
t+C
The initial condition forces C = −1/2. Hence
$$x(t) = \frac{2}{1 - 2t}.$$
The maximal interval containing 0 on which x(t) is defined is (−∞, 1/2).
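A numerical cross-check of this integral curve (a minimal sketch using scipy; the integration is stopped before the blow-up time 1/2):

```python
import numpy as np
from scipy.integrate import solve_ivp

# integral curve of X = x^2 d/dx starting at x(0) = 2
sol = solve_ivp(lambda t, x: x**2, t_span=(0.0, 0.45), y0=[2.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

ts = np.linspace(0.0, 0.45, 10)
exact = 2.0 / (1.0 - 2.0 * ts)
print(np.max(np.abs(sol.sol(ts)[0] - exact)))   # small: the numerical curve matches 2/(1 - 2t)
```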

Zh
This example shows that it may not be possible to extend the domain of an integral curve
to the entire real line.

3.7.3 Local Flows

Finding an integral curve of a vector field locally amounts to solving a system of first-
order ODE with initial conditions. Suppose we wish to find an integral curve c(t) of a
smooth vector field X on a manifold M in general. We first choose a coordinate chart
(U, φ) = (U, x1 , . . . , xn ) about p. In terms of the local coordinates,
x
X ∂
Xc(t) = ai (c(t))
i
∂xi c(t)

and
eli
X ∂
c0 (t) = ċi (t) ,
i
∂xi c(t)

where c (t) = x ◦ c(t) is the i-th component of c(t) in the chart (U, φ). The condition
i i

c0 (t) = Xc(t) is thus equivalent to
$$\dot{c}^i(t) = a^i(c(t))$$
©F

for each i ∈ [n]. This is an ODE with initial condition c(0) = p translating to

ci (0) = pi

for each i ∈ [n].

Remark 3.7.6 We should think of elements of the tangent space as an infinitesimal direc-
tion and the differential of a map as encoding how an infinitesimal direction in the domain
corresponds to an infinitesimal direction in the image of the map.

Also recall that partial derivatives simplify to the calculus derivative for maps between
Euclidean spaces.

By the existence and uniqueness of solutions to ODEs, the system above always has a unique
local solution.

Theorem 3.7.7

ou
Let V ⊆ Rn be open, p0 ∈ V , and f : V → Rn a smooth function. Then the
differential equation
dy
= f (y(t))
dt
with initial conditions y(0) = p0 has a unique smooth solution y : (a(p0 ), b(p0 )) → V
where (a(p0 ), b(p0 )) is the maximal open interval containing 0 on which y is defined.

Zh
Uniqueness above means that if z : (δ, ε) → V satisfies the same ODE, then (δ, ε) ⊆
(a(p0 ), b(p0 )) and z, y agree on their common domain.
This theorem guarantees the existence and uniqueness of a maximal integral curve starting
at p for a vector field X on a chart U of a manifold and a point p ∈ U .
Next we study the dependence of an integral curve on its initial point. Again we begin with
the problem locally on Rn . The function y will now be a function of two arguments t, q, and
the condition for y to be an integral curve starting at q is

∂y
x
(t, q) = f (y(t, q))
∂t
with initial conditions y(0, q) = q.
The following theorem from ODE theory guarantees the smooth dependence of the solution
eli
on the initial point.

Theorem 3.7.8
Let V ⊆ Rn be open and f : V → Rn be smooth on V . For each p0 ∈ V , there is a
neighborhood W 3 p0 in V , some ε > 0, and a smooth function y : (−ε, ε) × W → V
such that
©F

∂y
(t, q) = f (y(t, q))
∂t
and y(0, q) = q for all (t, q) ∈ (−ε, ε) × W .

It follows from this theorem that if X is any smooth vector field on a chart U with p ∈ U ,
then there is a neighborhood W 3 p in U , ε > 0, and a smooth map F : (−ε, ε) × W → U
such that for each q ∈ W , the function F (·, q) is an integral curve of X starting at q. In
particular, F (0, q) = q. We usually write Ft (q) = F (t, q).

Suppose s, t ∈ (−ε, ε) are such that both Ft (Fs (q)) and Ft+s (q) are defined. Then both
Ft (Fs (q)) and Ft+s (q) as functions of t are integral curves of X with initial point Fs (q). By
the uniqueness of the integral curve starting at a point,

Ft (Fs (q)) = Ft+s (q).

The map F is called a local flow generated by X. For each q ∈ U , the function Ft (q) of t
is called a flow line of the local flow. Each flow line is an integral curve of X. If a local

ou
flow F is defined on R × M , it is called a global flow. Every smooth vector field has a local
flow about any point, but not necessarily a global flow. A vector field having a global flow is
called a complete vector field. If F is a global flow, then for every t ∈ R,

Ft ◦ F−t = F−t ◦ Ft = F0 = 1M

and so Ft : M → M is a diffeomorphism. Hence a global flow on M gives rise to a one-

Zh
parameter group of diffeomorphisms of M .
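For a concrete complete vector field, X = x d/dx on R has global flow Ft (x) = xe^t . A small symbolic check of the group law and of the recovery of X as the infinitesimal generator (sympy; this particular X is an illustrative choice, not an example from the text):

```python
import sympy as sp

t, s, x = sp.symbols('t s x')
F = lambda time, point: point * sp.exp(time)      # proposed global flow of X = x d/dx

print(sp.simplify(F(t, F(s, x)) - F(t + s, x)))   # 0:  F_t ∘ F_s = F_{t+s}
print(F(0, x))                                    # x:  F_0 is the identity
print(sp.diff(F(t, x), t).subs(t, 0))             # x:  ∂F/∂t(0, x) recovers X_x = x d/dx
```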
The discussion above suggests the following formal definition.

Definition 3.7.2 (Local Flow)


A local flow about a point p ∈ U in a open subset of a manifold is a smooth function

F : (−ε, ε) × W → U,

where ε > 0, W 3 p is a neighborhood within U , such that writing Ft (q) = F (t, q) we


have
x
(i) F0 (q) = q for all q ∈ W
(ii) Ft (Fs (q)) = Ft+s (q) whenever both sides are defined
eli
If F (t, q) is a local flow of the vector field X on U , then
$$F(0, q) = q, \qquad \frac{\partial F}{\partial t}(0, q) = X_{F(0,q)} = X_q.$$
Thus we can recover the vector field from its flow. If we do not know a priori that F (t, q)
is the local flow of some vector field X, we can still define q 7→ ∂F/∂t(0, q) =: Xq . This is
©F

known as the infinitessimal generator of F .

Proposition 3.7.9
Let F : (−ε, ε) × W → U be a local flow about p ∈ U . The infinitesimal generator X
of F is a smooth vector field on W and each curve F (·, p) is an integral curve of X.

Proof
We wish to show that Xf is smooth for every smooth real-valued function f on an open

subset V ⊆ W . For any such f and p ∈ U , write c(t) = F (t, p) so that c0 (0) = Xp

(Xf )(p) = Xp f
= c0 (0)f
 
d
= c∗ f
dt 0

ou
d
= f (c(t))
dt 0

= f (F (t, p)).
∂t (0,p)

Since compositions preserve smoothness, f (F (t, p)) is a smooth function of (t, p) and so
is its partial derivative with respect to t. Thus (Xf )(p) depends smoothly on p and X is
smooth.

Zh
We wish to show that XFt0 (p) = d/dtF (t0 , p) for all p ∈ W and t0 ∈ (−ε, ε). Define
q := Ft0 (p) and we show that d/dtF (t0 , p) = Xq . By the group law,

F (t, q) = F (t, Ft0 (p))


= F (t0 + t, p).

Thus for any smooth real-valued function f defined in a neighborhood of q,

d
Xq f = f (F (t, q))
x
dt 0
d
= f (F (t0 + t, p))
dt 0
d
eli
= f (F (t, p))
dt t0

as desired.
©F

Remark 3.7.10 In general, A smooth vector field X on a manifold M generates a local


flow whose domain at each point p ∈ M is an interval containing 0. This can be proven
by “piecing together” the maximal integral curves starting at p. Indeed, regardless of the
chosen chart, any two integral curves starting at p must agree on a small interval about 0.
This shows that F (0, q) = q for each q. The group laws can then be verified using properties
of solutions of ODEs.
Conversely, we have essentially shown that such a local flow is generated by some smooth
vector field X.

3.7.4 The Lie Bracket

ou
Suppose X, Y are smooth vector fields on an open subset U of a manifold M . We view X, Y
as derivation operators on C ∞ (U ). The map XY is an R-linear operator but does not satisfy
the Leibniz rule. If we consider XY − Y X however, this will be a derivation.

Definition 3.7.3 (Lie Bracket)


The Lie bracket of two smooth vector fields X, Y on U 3 p is defined to be

Zh
[X, Y ]p f := (Xp Y − Yp X)f.

Here f is a germ of a smooth function at p.

As p varies over U , [X, Y ] becomes a vector field on U .


The Lie bracket provides a product operation on the vector space X(M ) of all smooth vector
fields on M . Clearly [Y, X] = −[X, Y ].

Proposition 3.7.11 (Jacobi Identity)


x
Let X, Y, Z ∈ X(M ). Then

[X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y ]] = 0.


eli
Proposition 3.7.12 (Lie Bracket in Local Coordinates)
Consider two smooth vector fields X, Y on a manifold M and (U, x1 , . . . , xn ) a coordinate
chart. X, Y have local expressions
X ∂ X ∂
X= ai Y = bj .
∂xi ∂xj
©F

i j

Then
X ∂
[X, Y ] = ck
k
∂xk
where
X  ∂bk k

k i i ∂a
c = a i
−b i
.
i
∂x ∂x

Proof
Fix p ∈ U . Evaluating [X, Y ]p at the coordinate functions xk yields
$$c^k(p) = [X, Y]_p\, x^k = (X_p Y - Y_p X)\, x^k = X_p b^k - Y_p a^k
= \sum_i \left( a^i\,\frac{\partial b^k}{\partial x^i} - b^i\,\frac{\partial a^k}{\partial x^i} \right)\!(p).$$
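The coordinate formula is easy to test on examples. A minimal symbolic sketch on R2 (the two vector fields and the test function are arbitrary choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
a = (x * y, sp.sin(x))        # X = xy ∂/∂x + sin(x) ∂/∂y
b = (y**2, x + y)             # Y = y^2 ∂/∂x + (x + y) ∂/∂y

def bracket(a, b):
    # c^k = sum_i ( a^i ∂b^k/∂x^i - b^i ∂a^k/∂x^i )
    return tuple(
        sp.simplify(sum(a[i] * sp.diff(b[k], coords[i]) - b[i] * sp.diff(a[k], coords[i])
                        for i in range(2)))
        for k in range(2)
    )

def apply(v, f):
    # the derivation f ↦ sum_i v^i ∂f/∂x^i
    return sum(v[i] * sp.diff(f, coords[i]) for i in range(2))

f = sp.exp(x) * y                                      # an arbitrary test function
lhs = apply(a, apply(b, f)) - apply(b, apply(a, f))    # (XY - YX) f
rhs = apply(bracket(a, b), f)
print(sp.simplify(lhs - rhs))                          # 0
```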

Definition 3.7.4 (Lie Algebra over a Field)


Let K be a field. A Lie algebra over K is a vector space V over K together with a
product [, ] : V × V → V called the bracket, satisfying the following properties for all
a, b ∈ K and X, Y, Z ∈ V :

Zh
(i) (bilinearity) [aX + bY, Z] = a[X, Z] + b[Y, Z]
and [Z, aX + bY ] = a[Z, X] + b[Z, Y ]
(ii) (anticommutativity) [Y, X] = −[X, Y ]
(iii) (Jacobi identity) [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y ]] = 0

In practice, we only concern ourselves with real Lie algebras, i.e. Lie algebras over R. From
hereonforth, a Lie algebra means a real Lie algebra.

Example 3.7.13 (Abelian Lie Algebra)


x
On any vector space V , the trivial bracket [X, Y ] = 0 makes V into a Lie algebra known
as an abelian Lie algebra.
Note that our definition of an algebra requires the product be associative. In general, the
bracket of a Lie algebra need not be associative. Thus despite its name, a Lie algebra is in
eli
general not an algebra.

Example 3.7.14
If M is a manifold, the vector space X(M ) of smooth vector fields on M is a real Lie
algebra with the Lie bracket as the bracket.
©F

Example 3.7.15
Let Kn×n be the vector space of all n × n matrices over a field K. Define for X, Y ∈ Kn×n ,
[X, Y ] = XY − Y X,
where XY is the matrix product. With this bracket, Kn×n becomes a Lie algebra.
More generally, if A is any algebra over a field K, then the product
[x, y] = xy − yx

makes A into a Lie algebra over K.

Definition 3.7.5 (Derivation of a Lie Algebra)


A derivation of a Lie algebra V over a field K is a K-linear map D : V → V satisfying
the product rule:
D[Y, Z] = [DY, Z] + [Y, DZ]
for Y, Z ∈ V .

ou
Example 3.7.16
Let V be a Lie algebra over a field K. For each X ∈ V , define adX : V → V by

adX (Y ) := [X, Y ].

The Jacobi identity ensures that adX is a derivation of V

Zh
adX [Y, Z] = [X, [Y, Z]]
= [[X, Y ], Z] + [Y, [X, Z]]
= [adX Y, Z] + [Y, adX Z].

3.7.5 The Pushforward of Vector Fields

Let F : N → M be a smooth map between manifolds and F∗ : Tp N → TF (p) M be its


x
differential at a point p ∈ N . If Xp ∈ Tp N , we call F∗ (Xp ) the pushforward of the vector
Xp at p. This notion does not extend in general to vector fields, since F is not necessarily
injective.
eli
Consider the special case when F : N → M is a diffeomorphism. Then the pushforward F∗ X
of any vector field X on N always makes sense. There is no ambiguity about the meaning of

(F∗ X)F (p) = F∗,p (Xp ).


©F

Moreover, since F is surjective, F∗ X is defined everywhere on M .

3.7.6 Related Vector Fields

Under a smooth map F : N → M , we cannot in general push forward a vector field on N .


There is nonetheless a useful notion of a related vector field.

Definition 3.7.6 (Related Vector Field)
Let F : N → M be a smooth map between manifolds. A vector field X on N is
F -related to a vector field X̄ on M if for all p ∈ N ,

F∗,p (Xp ) = X̄F (p) .

ou
Example 3.7.17
If F : N → M is a diffeomorphism and X is a vector field on N , then the pushforward
F∗ X on M is defined. By definition, X is F -related to the vector field F∗ X on M .
An equivalent condition of F -relatedness is as follows.

Proposition 3.7.18
Let F : N → M be a smooth map of manifolds. A vector field X on N and a vector

Zh
field X̄ on M are F -related if and only if for all g ∈ C ∞ (M ),

X(g ◦ F ) = (X̄g) ◦ F.

Proof
X, X̄ are F -related if and only if

F∗,p (Xp )g = X̄F (p) g ∀p ∈ N, g ∈ C ∞ (M )


Xp (g ◦ F ) = (X̄g)(F (p)) ∀p, g
x
(X(g ◦ F ))(p) = (X̄g)(F (p)) ∀p, g
X(g ◦ F ) = (X̄g) ◦ F ∀g

Proposition 3.7.19
eli
Let F : N → M be a smooth map of manifolds. If the smooth vector fields X, Y on N
are F -related to the smooth vector fields X̄, Ȳ , respectively, on M , then the Lie bracket
[X, Y ] on N is F -related to the Lie bracket [X̄, Ȳ ] on M .

Proof
For any g ∈ C ∞ (M ),
©F

[X, Y ](g ◦ F ) = XY (g ◦ F ) − Y X(g ◦ F )


= X((Ȳ g) ◦ F ) − Y ((X̄g) ◦ F )
= (X̄ Ȳ g) ◦ F − (Ȳ X̄g) ◦ F
= ((X̄ Ȳ − Ȳ X̄)g) ◦ F
= ([X̄, Ȳ ]g) ◦ F.

Chapter 4

ou
Lie Groups and Lie Algebras

Zh
A Lie group is a manifold equipped with smooth group operations. The invertible matrix
groups form important and interesting examples of Lie groups. The left translation by a
group element g is a diffeomorphism from the group to itself that maps the identity to g.
Thus the group locally looks the same around any point. To study the local structure of a
Lie group, it suffices to examine a neighborhood of the identity element. It is not surprising
that the tangent space at the identity of a Lie group should play a role.
The tangent space at the identity of a Lie group turns out to have a canonical bracket that
makes it into a Lie algebra. This encodes within it much information about the group.
The interplay of group theory, topology, and linear algebra makes the theory of Lie groups
x
and Lie algebras a particularly rich and vibrant branch of mathematics. Our humble goal
is to examine Lie groups as an important class of manifolds and Lie algebras as examples of
tangent spaces.
eli

4.1 Lie Groups


©F

4.1.1 Lie Groups & Examples

We begin with some methods for recognizing a Lie group.


Recall the definition of a Lie group.

Definition 4.1.1 (Lie Group)
A Lie group is a smooth manifold G that is also a group such that the two group
operations, multiplication and inverse are smooth.

µ:G×G→G µ(a, b) = ab
ι:G→G ι(a) = a−1 .

ou
For a ∈ G, we denote by `a (x) = ax the left multiplication operation by a, and by ra (x) = xa
the right multiplication operation by a. We also refer to left/right multiplication as left/right
translation.

Proposition 4.1.1
For an element a in a Lie group G, the left multiplication `a : G → G is a diffeomorphism.

Zh
Proof
Consider the inclusion map I : G → G × G given by I(x) = (a, x). This is certainly
smooth. Then the composition
`a = µ ◦ I
is smooth. Its inverse is `a−1 , which is smooth by the same argument, so `a is a diffeomorphism.

Definition 4.1.2 (Lie Group Homomorphism)


A map F : H → G between Lie groups H, G is a Lie group homomorphism if it is a
smooth map and a group homomorphism.
x
Recall a group homomorphism is a map that preserves group operations

F (hx) = F (h)F (x).


eli
This may be rewritten in functional notation as

F ◦ `h = `F (h) ◦ F

for all h ∈ H. Recall that group homomorphisms always map the identity to itself.
©F

We use capital letters to denote matrices, but generally lowercase letters to denote their
entries.

Example 4.1.2 (General Linear Group)


We previously proved that the general linear group

GL(n, R) = {A ∈ Rn×n : det A 6= 0}

is a Lie group.

Example 4.1.3 (Special Linear Group)
The special linear group SL(n, R) is the subgroup of GL(n, R) consisting of matrices with
determinant 1. We know that SL(n, R) is a regular submanifold of dimension n2 − 1.
Recall that a smooth map f : N → M whose image lies in a regular submanifold S ⊆ M
induces a smooth map f˜ : N → S. Thus multiplication and inverse operations from
GL(n, R) induce smooth multiplication and inverse operations on SL(n, R).

ou
An analogous argument proves that the complex special linear group SL(n, C) is also a
Lie group.

Example 4.1.4 (Orthogonal Group)


The orthogonal group O(n) is the subgroup of GL(n, R) of matrices A such that AT A = Id.
Thus O(n) is the inverse image of Id under the map f (A) = AT A. We previously showed
that f : GL(n, R) → GL(n, R) has constant rank. Hence by the constant-rank level

Zh
set theorem, O(n) is a regular submanifold of GL(n, R). By a similar argument to the
previous example, it is hence a Lie group.
The constant rank level set theorem is more general than the regular level set theorem, at
the cost that we do not directly know the co-dimension of the regular submanifold. We wish
to directly apply the regular level set theorem to determine the dimension of O(n).

Lemma 4.1.5 (Space of Symmetric Matrices)


The vector space Sn of n × n real symmetric matrices has dimension
$$\frac{n(n+1)}{2} = \frac{n^2 + n}{2}.$$

Consider the map f : GL(n, R) → Sn given by f (A) = AT A. The tangent space of Sn at


eli
any point is canonically isomorphic to Sn itself, as Sn is a vector space. Thus we may regard
the differential as a map
f∗,A : TA GL(n, R) → Tf (A) Sn ' Sn .

While it is true that f also maps GL(n, R) → GL(n, R) or Rn×n , we cannot hope for the
differential to be surjective. This illustrates a general principle: for the differential to be
©F

surjective, we should restrict the target space of f to be as small as possible.


Now we explicitly compute f∗,A to show that it is surjective. Since GL(n, R) is an open
subset of Rn×n , its tangent space at any A ∈ GL(n, R) is

TA GL(n, R) = TA Rn×n = Rn×n .

For any matrix X ∈ Rn×n , we know that there is a curve c(t) in GL(n, R) with c(0) = A

and c0 (0) = X. Then
$$\begin{aligned}
f_{*,A}(X) &= \frac{d}{dt}\bigg|_{t=0} f(c(t)) = \frac{d}{dt}\bigg|_{t=0} c(t)^T c(t) \\
&= \big( c'(t)^T c(t) + c(t)^T c'(t) \big)\Big|_{t=0} && \text{(matrix product rule)} \\
&= X^T A + A^T X.
\end{aligned}$$

Fix A ∈ O(n) and B ∈ Sn . To show surjectivity, it suffices to find some X ∈ Rn×n such that

X T A + AT X = B.

This equation has the solution
$$X = \frac{1}{2}(A^T)^{-1} B.$$
Thus f∗,A : TA GL(n, R) → Sn is surjective for all A ∈ O(n) and O(n) is a regular level set of
f . By the regular level set theorem, O(n) is a regular submanifold of GL(n, R) of dimension
$$n^2 - \dim S_n = \frac{n^2 - n}{2}.$$
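A quick numerical check of this surjectivity argument (numpy; the random orthogonal A and symmetric B are generated only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

A, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random A in O(n)
C = rng.standard_normal((n, n))
B = C + C.T                                        # a random symmetric B

X = 0.5 * np.linalg.inv(A.T) @ B                   # the proposed preimage X = (1/2)(A^T)^{-1} B
print(np.allclose(X.T @ A + A.T @ X, B))           # True: f_{*,A}(X) = X^T A + A^T X = B
```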

4.1.2 Lie Subgroups


x
Recall that a smooth map F : N → M between manifolds is an immersion if its differential
is everywhere injective. If F is injective, we refer to the image F (N ) under the topology
inherited from F as an immersed submanifold.
eli

Definition 4.1.3 (Lie Subgroup)


A subgroup H of a Lie group G is a Lie subgroup if
(i) H is an immersed submanifold via the inclusion map
(ii) The group operations on H are smooth
©F

The group operations on H must be the restrictions of the operations inherited from G.
However, since a Lie subgroup is an immersed submanifold instead of a regular submanifold, it
need not have the relative topology. Let ι : H → G be the inclusion map. The composition
of smooth maps
µ ◦ (ι × ι) : H × H → G × G → G
is smooth. If we took H to be a regular submanifold, then the multiplication map H×H → H
and inverse map H → H would automatically be smooth and condition (ii) is redundant.

Proposition 4.1.6
If H is a subgroup and regular submanifold of a Lie group G, then it is a Lie subgroup
of G.

A subgroup such as in the proposition above is called an embedded Lie subgroup, as the
inclusion map ι : H → G of a regular submanifold is an embedding.

ou
Example 4.1.7
SL(n, R) and O(n) are embedded Lie subgroups of GL(n, R).
We now state a powerful theorem without proof.

Theorem 4.1.8 (Closed Subgroup)


A closed subgroup of a Lie group is an embedded Lie subgroup.

Zh
Example 4.1.9
SL(n, R) and O(n) are the zero sets of polynomial equations on GL(n, R) and hence are
closed subsets of GL(n, R). It follows that they are embedded Lie subgroups of GL(n, R).

4.1.3 The Matrix Exponential

In order to compute the differential of a map on a subgroup of GL(n, R), we need a curve of
nonsingular matrices. Since the matrix exponential is always nonsingular, it is suitable for
this purpose.
x
The vector space $\mathbb{R}^{n \times n} \cong \mathbb{R}^{n^2}$ of n × n matrices can be given the Euclidean norm
$$\lVert X \rVert^2 = \sum_{i,j} x_{ij}^2.$$

The matrix exponential eX of a matrix X ∈ Rn×n is defined by
$$e^X := \operatorname{Id} + X + \frac{1}{2!}X^2 + \frac{1}{3!}X^3 + \cdots$$
©F

For this formula to make sense, we need to check that it converges in the normed linear space
Rn×n .

Definition 4.1.4 (Normed Algebra)


A normed algebra V is a normed vector space that is also an algebra over R with the
submultiplicative property: for all v, w ∈ V ,

kvwk ≤ kvkkwk.

Matrix multiplication makes Rn×n into a normed algebra.

Proposition 4.1.10
For X, Y ∈ Rn×n ,
‖XY‖ ≤ ‖X‖ ‖Y‖.

Proof

Each entry of the product satisfies (XY)_ij² ≤ (Σ_k x_ik²)(Σ_k y_kj²) by the Cauchy-Schwarz inequality; summing over i, j gives ‖XY‖² ≤ ‖X‖² ‖Y‖².
In a normed algebra, multiplication distributes over a finite sum. The distributivity of
multiplication over an infinite sum requires proof.

Proposition 4.1.11
Let V be a normed algebra.
(i) If a ∈ V and (sm) is a sequence in V that converges to s, then a·sm converges to a·s.
(ii) If a ∈ V and Σ_{k≥0} bk is a convergent series in V, then a Σ_k bk = Σ_k a·bk.

Recall that a series Σ_k ak in a normed vector space V is said to converge absolutely if the series Σ_k ‖ak‖ of norms converges in R. In a complete normed vector space (a Banach space), absolute convergence implies convergence. Hence to show that a series of matrices converges, it suffices to show absolute convergence.
For any X ∈ Rn×n and k > 0, repeated application of submultiplicativity yields ‖X^k‖ ≤ ‖X‖^k. Thus the series Σ_{k≥0} X^k/k! is bounded term by term in norm by the convergent series

    √n + ‖X‖ + (1/2!)‖X‖² + (1/3!)‖X‖³ + · · · = (√n − 1) + e^{‖X‖}

(the leading √n is ‖Id‖). Hence the matrix exponential converges absolutely for any n × n matrix X.
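
As an illustrative aside, the convergence can also be observed numerically; the sketch below (Python, assuming NumPy and SciPy are available) compares a truncated series against scipy.linalg.expm.

    # Compare partial sums of the exponential series with SciPy's matrix exponential.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    X = rng.standard_normal((3, 3))

    partial = np.eye(3)
    term = np.eye(3)
    for k in range(1, 30):            # Id + X + X^2/2! + ... + X^29/29!
        term = term @ X / k
        partial = partial + term

    print(np.allclose(partial, expm(X)))   # True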


We write e both for the exponential map and for the identity element of a general Lie group.
The context should prevent any confusion. We sometimes write exp(X) = eX .
Unlike the exponential of real numbers, when A, B are n × n matrices, it is not necessarily

true that

eA+B = eA eB .

Proposition 4.1.12
If A, B ∈ Rn×n commute, then
eA+B = eA eB .
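
A hedged numerical illustration of this proposition (Python with NumPy/SciPy; the particular commuting pair A, B = M, M² is an assumption chosen for convenience):

    # e^{A+B} = e^A e^B holds when A and B commute, but generally fails otherwise.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(2)
    M = rng.standard_normal((3, 3))
    A, B = M, M @ M                                        # A and B commute
    print(np.allclose(expm(A + B), expm(A) @ expm(B)))     # True

    C = rng.standard_normal((3, 3))                        # generic C does not commute with A
    print(np.allclose(expm(A + C), expm(A) @ expm(C)))     # typically False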

Proposition 4.1.13
For X ∈ Rn×n,

    (d/dt) e^{tX} = X e^{tX} = e^{tX} X.

Proof
Since each (i, j)-entry of the series for e^{tX} is a convergent power series in t, we may differentiate term by term:

    (d/dt) e^{tX} = (d/dt)(Id + tX + (1/2!) t²X² + (1/3!) t³X³ + · · ·)
                  = X + tX² + (1/2!) t²X³ + · · ·
                  = X (Id + tX + (1/2!) t²X² + · · ·)
                  = X e^{tX}.

Factoring X out on the right instead yields the second equality.


The definition of the matrix exponential eX makes sense even for complex matrices. We
need only replace the Euclidean norm by the Hermitian norm.

4.1.4 The Trace of a Matrix


The trace of a matrix is the sum of its diagonal entries:

    tr(X) = Σ_{i=1}^{n} x_ii.

Lemma 4.1.14
(i) For any X, Y ∈ Rn×n , tr(XY ) = tr(Y X).
(ii) For X ∈ Rn×n and A ∈ GL(n, R), tr(AXA−1 ) = tr(X).

We now recall some facts from linear algebra.

Proposition 4.1.15
The trace of a real or complex matrix is equal to the sum of its (complex) eigenvalues.

Proposition 4.1.16
For any X ∈ Rn×n , det eX = etr X .


Proof
Suppose first that X is upper triangular. Then e^X is also upper triangular, with diagonal entries e^{x_ii} for i ∈ [n]. Thus

    det e^X = Π_i e^{x_ii} = e^{tr X}.

If X is not upper triangular, we can find a nonsingular (complex) matrix A such that AXA^{-1} is upper triangular. Then

    e^{AXA^{-1}} = Σ_{k≥0} (1/k!)(AXA^{-1})^k = A (Σ_{k≥0} (1/k!) X^k) A^{-1} = A e^X A^{-1}.

But then

    det e^X = det(A e^X A^{-1}) = det e^{AXA^{-1}} = e^{tr(AXA^{-1})} = e^{tr X}

by the special case and the conjugation invariance of the trace.
It follows that the matrix exponential e^X is never singular, regardless of X, as its determinant is strictly positive.
This enables us to write down an explicit curve in GL(n, R) with given initial point and given initial velocity. For example, c(t) = e^{tX} : R → GL(n, R) is a curve in GL(n, R) with initial point Id and initial velocity X, since

    c(0) = e^{0·X} = Id

and

    c′(0) = (d/dt)|_{t=0} e^{tX} = X e^{tX}|_{t=0} = X.

Similarly, c(t) = A e^{tX} is a curve in GL(n, R) with initial point A and initial velocity AX.
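
A short numerical sketch of the two facts just used (Python with NumPy/SciPy; the matrix size, the scaling of A, and the finite-difference step are illustrative choices):

    # det(e^X) = e^{tr X} > 0, and c(t) = A e^{tX} has c(0) = A and c'(0) ≈ AX.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    X = rng.standard_normal((4, 4))
    A = expm(0.2 * rng.standard_normal((4, 4)))      # a nonsingular matrix

    print(np.allclose(np.linalg.det(expm(X)), np.exp(np.trace(X))))   # True

    h = 1e-6
    dc = (A @ expm(h * X) - A) / h                   # finite-difference velocity at t = 0
    print(np.allclose(dc, A @ X, atol=1e-3))          # True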

4.1.5 The Differential of Det at the Identity

The tangent space TId GL(n, R) at the identity matrix Id is the vector space Rn×n and the
tangent space T1 R is R. Thus we can view the differential of the determinant map

    det_{∗,Id} : Rn×n → R

as a map between Euclidean spaces under the identifications above.

Proposition 4.1.17
For any X ∈ Rn×n,

    det_{∗,Id}(X) = tr X.

Proof
Choose the curve c(t) = e^{tX}, so that c(0) = Id and c′(0) = X. Then

    det_{∗,Id}(X) = (d/dt)|_{t=0} det e^{tX}
                  = (d/dt)|_{t=0} e^{t tr X}
                  = (tr X) e^{t tr X}|_{t=0}
                  = tr X.
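
A quick finite-difference sketch of this proposition (Python with NumPy; the step size h and the matrix size are arbitrary choices):

    # The directional derivative of det at Id in the direction X equals tr X.
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.standard_normal((5, 5))
    h = 1e-7
    deriv = (np.linalg.det(np.eye(5) + h * X) - 1.0) / h   # (det(Id + hX) - det Id)/h
    print(np.isclose(deriv, np.trace(X), atol=1e-4))        # True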

4.2 Lie Algebras

In a Lie group G, left translation by an element g ∈ G is a diffeomorphism that maps a
neighborhood of the identity to a neighborhood of g. Thus all the local information about
the group is concentrated in a neighborhood of the identity, and the tangent space at the
identity is especially important.
We can give the tangent space Te G a Lie bracket [, ] so that it becomes a Lie algebra, called
the Lie algebra of the Lie group. Our goal is to define the Lie algebra and identify the Lie
algebras of a few important groups.
The Lie bracket on the tangent space Te G is defined using a canonical isomorphism between
the tangent space at the identity and the vector space of left-invariant vector fields on G.
With respect to this Lie bracket, the differential of a Lie group homomorphism becomes a
Lie algebra homomorphism. We thus obtain a functor from the category of Lie groups and
Lie group homomorphisms to the category of Lie algebras and Lie algebra homomorphisms.

4.2.1 Tangent Space at the Identity of a Lie Group

The existence of a smooth multiplication and smooth inverse makes a Lie group a very special

manifold. For any g ∈ G, left translation `g : G → G by g is a diffeomorphism with inverse


`g−1 . The diffeomorphism `g takes the identity element e to the element g and induces an
isomorphism of tangent spaces

`g∗ = (`g )∗,e : Te G → Tg G.

Thus if we describe the tangent space Te G, then `g∗ Te G will give a description of the tangent
space Tg G at any g ∈ G.

Example 4.2.1 (Tangent space to GL(n, R) at Id)
We know Tg GL(n, R) ' Rn×n . We also identified the isomorphism `g∗ : TId GL(n, R) →
Tg GL(n, R) as left multiplication by g : X 7→ gX.

Example 4.2.2 (Tangent space to SL(n, R) at Id)


We begin by finding a condition that a tangent vector X ∈ TId SL(n, R) must satisfy. We know there is a curve c : (−ε, ε) → SL(n, R) with c(0) = I and c′(0) = X. Being in SL(n, R), this curve has constant determinant 1. We now take the derivative at t = 0:

    0 = (d/dt)|_{t=0} det(c(t))
      = (det ◦ c)∗ (d/dt|_0)
      = det_{∗,I} (c∗ (d/dt|_0))
      = det_{∗,I} (c′(0))
      = det_{∗,I} (X)
      = tr(X).                    (previous proposition)

Thus the tangent space is contained in the subspace of trace 0 matrices. But this subspace
has dimension n2 − 1 = dim TId SL(n, R) and the two spaces must be equal.
Proposition 4.2.3
TId SL(n, R) can be identified with the subspace of trace 0 n × n matrices.
Example 4.2.4 (Tangent Space to O(n) at Id)
Let X ∈ TId O(n) and choose a curve c in O(n) defined on a small interval about 0 such that c(0) = I and c′(0) = X. Since c(t) ∈ O(n),

    c(t)^T c(t) = Id.

Differentiating both sides with respect to t using the matrix product rule yields

    c′(t)^T c(t) + c(t)^T c′(t) = 0.

Evaluating at t = 0 gives

    X^T + X = 0.

Thus X is skew-symmetric. A skew-symmetric matrix has zeros on the diagonal, and its (i, j)-entry is the negation of its (j, i)-entry. Hence the subspace of skew-symmetric matrices has dimension

    (1/2)(n² − n) = dim TId O(n)

and these vector spaces must be equal.

Proposition 4.2.5
The tangent space TId O(n) can be identified with the n × n skew-symmetric matrices.
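
The two tangent-space computations can be illustrated numerically with curves of the form e^{tX} through the identity (Python with NumPy/SciPy; n, t, and the seed are illustrative):

    # For trace-free X, e^{tX} stays in SL(n, R); for skew-symmetric Y, e^{tY} stays in O(n).
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(5)
    n, t = 3, 0.7

    X = rng.standard_normal((n, n))
    X = X - (np.trace(X) / n) * np.eye(n)                  # trace-free direction
    print(np.isclose(np.linalg.det(expm(t * X)), 1.0))     # determinant stays 1

    Y = rng.standard_normal((n, n))
    Y = (Y - Y.T) / 2                                      # skew-symmetric direction
    Q = expm(t * Y)
    print(np.allclose(Q.T @ Q, np.eye(n)))                 # Q is orthogonal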

4.2.2 Left-Invariant Vector Fields on a Lie Group

Let X be any, not necessarily smooth, vector field on a Lie group G. For any g ∈ G, since
left multiplication `g : G → G is a diffeomorphism, the pushforward `g∗ X is a well-defined
vector field on G. We say that the vector field X is left-invariant if

`g∗ (X) = X

for every g ∈ G. This means that for any h ∈ G,

`g∗ (Xh ) = Xgh .

In other words, X is left-invariant if and only if it is `g -related to itself for all g ∈ G. Thus
a left-invariant vector field X is completely determined by its value Xe at the identity, since

Xg = `g∗ (Xe ).

Conversely, given a tangent vector A ∈ Te G, we can define a vector field à on G by


(Ã)g := `g∗ A.

As defined, the vector field à is by construction left-invariant, since


`g∗ (Ãh ) = `g∗ `h∗ A
= (`g ◦ `h )∗ A
= (`gh )∗ (A)
= Ãgh .
We say that Ã is the left-invariant vector field on G generated by A ∈ Te G. Let L(G) be the
vector space of all left-invariant vector fields on G. Then there is a bijective correspondence

Te G ↔ L(G)
Xe ←[ X
A 7→ Ã.

It can be shown that this is in fact a vector space isomorphism.

Example 4.2.6 (Left-Invariant Vector Fields on R)
On the Lie group R, the group operation is addition and the identity element is 0. Thus
“left multiplication” `g is actually left addition

`g (x) = g + x.

Let us compute `g∗ (d/dx|0). Since `g∗ (d/dx|0) is a tangent vector at g, it is a scalar multiple of d/dx|g:

    `g∗ (d/dx|0) = a · d/dx|g.

In order to compute a, we can evaluate both sides at the function f(x) = x to see that a = 1. Thus

    `g∗ (d/dx|0) = d/dx|g.

This shows that d/dx is a left-invariant vector field on R. Therefore, the left-invariant
vector fields on R are constant multiples of d/dx.

Example 4.2.7 (Left-Invariant Vector Fields on GL(n, R))


Since GL(n, R) is an open subset of Rn×n , at any g ∈ GL(n, R) there is a canonical
identification of the tangent space Tg GL(n, R) with Rn×n , under which a tangent vector
corresponds to an n × n matrix

    Σ_{i,j} aij ∂/∂xij|g ↔ [aij].

We use the same letter B to denote a tangent vector B ∈ TId GL(n, R) and a matrix B = [bij]. Let B = Σ_{ij} bij ∂/∂xij|Id ∈ TId GL(n, R) and let B̃ be the left-invariant vector field on GL(n, R) generated by B. Then

    B̃g = (`g)∗ B ↔ gB

under this identification.

Proposition 4.2.8

Any left-invariant vector field X on a Lie group G is smooth.

Proof
We show that for any f ∈ C∞(G), the function Xf is also smooth. Choose a smooth curve c : I → G on some interval about 0 such that c(0) = e and c′(0) = Xe. If g ∈ G, then gc(t) is a curve starting at g with initial vector Xg, since gc(0) = ge = g and

    (gc)′(0) = `g∗ c′(0) = `g∗ Xe = Xg.

Then

    (Xf)(g) = Xg f = (d/dt)|_{t=0} f(gc(t)).

The function F(g, t) := f(gc(t)) is a composition of smooth maps and is thus smooth:

    G × I → G × G → G → R,   (g, t) ↦ (g, c(t)) ↦ g c(t) ↦ f(g c(t)),

where the three maps are Id × c, the multiplication µ, and f. Its partial derivative ∂F(g, t)/∂t with respect to t is therefore also smooth. But then
∂F (g, t)/∂t|t=0 = (Xf )(g) is thus also smooth. This shows that X is indeed a smooth
vector field on G.
This proposition shows that the vector space L(G) of left-invariant vector fields on G is a
subspace of the vector space X(G) of all smooth vector fields on G.

Proposition 4.2.9
If X, Y are left-invariant vector fields on G, then so is [X, Y ].

Proof
For any g ∈ G, X is `g -related to itself, and Y is `g -related to itself. But then we know
that [X, Y ] is `g -related to itself.

4.2.3 The Lie Algebra of a Lie Group


Recall that a Lie algebra is a vector space g together with a bracket, i.e. an anticommutative
bilinear map [, ] : g × g → g that satisfies the Jacobi identity. A Lie subalgebra of a Lie
algebra g is a vector subspace h ⊆ g that is closed under the bracket [, ]. We know that the
space L(G) of left-invariant vector fields on a Lie group G is closed under the Lie bracket [, ]
and is therefore a Lie subalgebra of the Lie algebra X(G) of all smooth vector fields on G.
The linear isomorphism ϕ : Te G ' L(G) is mutually beneficial to the two vector spaces, for
each space has something the other lacks. The vector space L(G) has a natural Lie algebra structure given by the Lie bracket of vector fields, while the tangent space at the identity
has a natural notion of pushforward, given by the differential of a Lie group homomorphism.
The linear isomorphism ϕ : Te G ' L(G) allows us to define a Lie bracket on Te G and to
push forward left-invariant vector fields under a Lie group homomorphism.
We begin with the Lie bracket on Te G. Given A, B ∈ Te G, we first map them via ϕ to the
left-invariant vector fields Ã, B̃, take the Lie bracket [Ã, B̃] = ÃB̃ − B̃ Ã, and then map it
back to Te G via ϕ−1. Thus the definition of the Lie bracket [A, B] ∈ Te G should be

[A, B] = [Ã, B̃]e .

Proposition 4.2.10
If A, B ∈ Te G and Ã, B̃ are the left-invariant vector fields they generate, then

    [Ã, B̃] = ([A, B])~,

where ( )~ denotes the left-invariant vector field generated by a tangent vector.

Proof
Applying ( )~ to both sides of the equation [A, B] = [Ã, B̃]e yields

    ([A, B])~ = ([Ã, B̃]e)~ = [Ã, B̃],

since ( )~ and evaluation at the identity, ( )e, are inverse to each other.

With the Lie bracket [, ], the tangent space Te G becomes a Lie algebra, called the Lie algebra
of the Lie group G. As a Lie algebra, Te G is usually denoted by g.

4.2.4 The Lie Bracket on gl(n, R)

For GL(n, R), the tangent space at Id can be identified with the vector space of n × n real
matrices. We make the identification
    Σ_{ij} aij ∂/∂xij|Id ↔ [aij].

The tangent space TId GL(n, R) with its Lie algebra structure is denoted by gl(n, R). Let Ã
be the left-invariant vector field on GL(n, R) generated by some A ∈ gl(n, R). Then on the
Lie algebra gl(n, R) we have the Lie bracket [A, B] = [Ã, B̃]Id coming from the Lie bracket
of left-invariant vector fields.
In the following proposition, we identify the Lie bracket in terms of matrices.

Proposition 4.2.11
Let A, B ∈ TId GL(n, R). If

    [A, B] = [Ã, B̃]Id = Σ_{ij} cij ∂/∂xij|Id,

then

    cij = Σ_k (aik bkj − bik akj).

Thus if derivations are identified with matrices, then

[A, B] = AB − BA.

Proof
We evaluate both sides of

    [A, B] = Σ_{ij} cij ∂/∂xij|Id

at the coordinate function xij. By the definition of the Lie bracket at the point Id,

    cij = [Ã, B̃]Id xij
        = ÃId (B̃ xij) − B̃Id (Ã xij)
        = A(B̃ xij) − B(Ã xij).

In order to compute B̃ xij, recall that the left-invariant vector field B̃ on GL(n, R) is given by

    B̃g = Σ_{ij} (gB)ij ∂/∂xij|g.

Hence

    B̃g xij = (gB)ij = Σ_k gik bkj = Σ_k bkj xik(g).

Since this formula holds for all g ∈ GL(n, R), the function B̃ xij must be

    B̃ xij = Σ_k bkj xik.

But then

    A(B̃ xij) = Σ_{p,q} apq ∂/∂xpq|Id (Σ_k bkj xik)
             = Σ_{p,q,k} apq bkj δip δkq
             = Σ_k aik bkj
             = (AB)ij,

and similarly B(Ã xij) = (BA)ij. It follows that

    cij = (AB − BA)ij

as desired.
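
A numerical sketch of the matrix bracket and its basic properties (Python with NumPy; the helper name bracket is ours, not standard notation):

    # [A, B] = AB - BA is anticommutative, satisfies the Jacobi identity, and
    # preserves skew-symmetry (so o(n) is closed under the bracket).
    import numpy as np

    def bracket(A, B):
        return A @ B - B @ A

    rng = np.random.default_rng(6)
    A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

    print(np.allclose(bracket(A, B), -bracket(B, A)))     # anticommutativity
    jacobi = (bracket(A, bracket(B, C)) + bracket(B, bracket(C, A))
              + bracket(C, bracket(A, B)))
    print(np.allclose(jacobi, 0))                         # Jacobi identity

    S, T = (M - M.T for M in (A, B))                      # skew-symmetric matrices
    print(np.allclose(bracket(S, T), -bracket(S, T).T))   # bracket is again skew-symmetric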

4.2.5 The Pushforward of Left-Invariant Vector Fields

Recall that if F : N → M is a smooth map of manifolds and X is a smooth vector field on


N , the pushforward F∗ X is not necessarily well-defined unless F is a diffeomorphism. In the
case of Lie groups, due to the correspondence between left-invariant vector fields and tangent
vectors at the identity, it is possible to push forward left-invariant vector fields under a Lie
group homomorphism.

Let F : H → G be a Lie group homomorphism. A left-invariant vector field X on H is generated by its value A = Xe ∈ Te H at the identity, so that X = Ã. Since the Lie group homomorphism F maps the identity of H to the identity of G, its differential F∗,e at the identity is a linear map Te H → Te G. Composing with the isomorphisms Te H ≅ L(H) and Te G ≅ L(G), we obtain an induced linear map F∗ : L(H) → L(G) on left-invariant vector fields: a left-invariant field Ã ∈ L(H) corresponds to A ∈ Te H, which maps to F∗,e A ∈ Te G, which in turn generates (F∗,e A)~ ∈ L(G).

Definition 4.2.1
Let F : H → G be a Lie group homomorphism. Define F∗ : L(H) → L(G) by

    F∗(Ã) := (F∗,e A)~

for all A ∈ Te H.

Proposition 4.2.12
If F : H → G is a Lie group homomorphism and X is a left-invariant vector field on H,
then X is F -related to the left-invariant vector field F∗ X on G.

Proof
Fix h ∈ H. It suffices to check that

F∗,h (Xh ) = (F∗ X)F (h) .

The LHS is equal to

F∗,h (Xh ) = F∗,h (`h∗,e Xe ) = (F ◦ `h )∗,e (Xe ),

while the RHS is equal to

    (F∗ X)F (h) = ((F∗,e Xe)~)F (h)
                = `F (h)∗ F∗,e (Xe)          (left-invariance)
                = (`F (h) ◦ F )∗,e (Xe).

But F is a Lie group homomorphism, so F ◦ `h = `F (h) ◦ F and the two sides are equal.
Thus if F : H → G is a Lie group homomorphism and X is a left-invariant vector field on
H, we will call F∗ X the pushforward of X under F .

4.2.6 The Differential as a Lie Algebra Homomorphism

Proposition 4.2.13
If F : H → G is a Lie group homomorphism, then its differential at the identity

F∗ := F∗,e : Te H → Te G

is a Lie algebra homomorphism, i.e. a linear map such that for all A, B ∈ Te H,

F∗ [A, B] = [F∗ A, F∗ B].

Proof
By a previous proposition, the vector field F∗ à on G is F -related to the vector field à on
H, and the vector field F∗ B̃ is F -related to B̃ on H. Hence the bracket [F∗ Ã, F∗ B̃] on G
is F -related to the bracket [Ã, B̃] on H. In other words,
F∗ ([Ã, B̃]e ) = [F∗ Ã, F∗ B̃]F (e)=e .

The LHS is by definition F∗ [A, B]. The RHS is given by

    [F∗ Ã, F∗ B̃]e = [(F∗ A)~, (F∗ B)~]e = [F∗ A, F∗ B].

Equating the two sides concludes the proof.


Suppose H is a Lie subgroup of a Lie group G, with inclusion map ι : H → G. Since ι is an
immersion, its differential
ι∗ : Te H → Te G
is injective. To distinguish the Lie bracket on Te H from the Lie bracket on Te G, we tem-
porarily attach subscripts Te H and Te G to the two Lie brackets respectively. By the previous

proposition,
ι∗ ([X, Y ]Te H ) = [ι∗ X, ι∗ Y ]Te G .
Thus if Te H is identified with a subspace of Te G via ι∗ , then the bracket on Te H is simply
the restriction of the bracket on Te G to Te H. Thus the Lie algebra of a Lie subgroup H may
be identified with a Lie subalgebra of the Lie algebra of G.
We typically denote the Lie algebras of the classical groups by gothic letters. The Lie algebras

of GL(n, R), SL(n, R), O(n), U (n) are denoted by gl(n, R), sl(n, R), o(n), u(n), respectively.
Moreover, the Lie algebra structures on sl(n, R), o(n), u(n) are given by

[A, B] = AB − BA

as on gl(n, R).

Remark 4.2.14 A fundamental theorem in Lie theory asserts the existence of a bijective

Zh
correspondence between the connected Lie subgroups of a Lie group G and the Lie subal-
gebras of its Lie algebra g. It is because of our desire for such a correspondence that a Lie
subgroup of a Lie group is defined to be a subgroup that is also an immersed submanifold
rather than a regular submanifold.
Chapter 5

Differential Forms

5.1 Differential 1-Forms

Let M be a smooth manifold and p ∈ M. The cotangent space of M at p, denoted Tp∗ M, is defined to be the dual space of the tangent space Tp M:

Tp∗ M = (Tp M )∨ = Hom(Tp M, R).

An element of Tp∗ M is called a covector at p. Thus a covector ωp at p is a linear function


ωp : Tp M → R.

A covector field / differential 1-form / 1-form on M is a function ω that assigns to each


point p ∈ M a covector ωp at p. It is in this sense dual to a vector field on M .
Covector fields arise naturally even when we are only interested in vector fields. If X is a smooth vector field on Rn, then at each p ∈ Rn, Xp = Σ_i ai(p) ∂/∂xi|p. Thus the coefficient ai is a function of Xp ∈ Tp Rn; in fact, it is a linear function of Xp, i.e. a covector. As p varies over Rn,
ai becomes a covector field on Rn . It is none other than the 1-form dxi that picks out the
i-th coefficient of a vector field relative to the standard frame ∂/∂x1 , . . . , ∂/∂xn .

5.1.1 The Differential of a Function

Definition 5.1.1 (Differential)


If f ∈ C ∞ (M ), its differential is defined to be the 1-form df on M given by

(df )p (Xp ) = Xp f.

We may also write df |p for the value of the 1-form df at p. This is in parallel to the notation
for tangent vector d/dt|p .
Recall our other notion of the differential f∗ for a smooth function f : N → M as a linear
function between tangent spaces.

Proposition 5.1.1
If f : M → R is a smooth function, then for p ∈ M and Xp ∈ Tp M,

    f∗(Xp) = (df)p(Xp) · d/dt|_{f(p)}.

Proof
Write f∗(Xp) = a · d/dt|_{f(p)} for some a ∈ R. Evaluating both sides at the standard coordinate t on R gives a = f∗(Xp)(t) = Xp(t ◦ f) = Xp f = (df)p(Xp).
This proposition shows that under the canonical identification T_{f(p)}R ≅ R,

    a · d/dt|_{f(p)} ↔ a,

f∗ is the same as df. Hence we are justified in calling both of them the differential of f.
In terms of df , a smooth function f : M → R has a critical point at p ∈ M if and only if
x
(df )p = 0.

5.1.2 Local Expression for a Differential 1-Form



Let (U, φ) = (U, x1 , . . . , xn ) be a coordinate chart on a manifold M . Then the differentials


dx1 , . . . , dxn are 1-forms on U .

Proposition 5.1.2
At each p ∈ U , the covectors (dx1 )p , . . . , (dxn )p form a basis for the cotangent space

Tp∗ M dual to the basis ∂/∂x1 |p , . . . , ∂/∂xn |p for Tp M .

Proof
The proof is identical to the Euclidean case:
    (dxi)p (∂/∂xj|p) = δij.

We can thus write every 1-form ω on U as a linear combination

    ω = Σ_i ai dxi,

where the coefficients ai are functions on U. In particular, if f is a smooth function on M, then the restriction of the 1-form df to U must be a linear combination

    df = Σ_i ai dxi.

We can isolate aj by the usual trick of evaluating both sides on ∂/∂xj, which gives

    aj = ∂f/∂xj.

Thus we have the following local expression for df:

    df = Σ_i (∂f/∂xi) dxi.
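
A small symbolic sketch of this local expression (Python with SymPy; the function f = x²y + sin y on a two-dimensional chart is an arbitrary example):

    # The coefficients of df relative to dx, dy are the partial derivatives of f.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2 * y + sp.sin(y)
    df_coeffs = {'dx': sp.diff(f, x), 'dy': sp.diff(f, y)}
    print(df_coeffs)   # {'dx': 2*x*y, 'dy': x**2 + cos(y)}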

5.1.3 The Cotangent Bundle

The underlying set of the cotangent bundle T ∗ M of a manifold M is the (disjoint) union of
the cotangent spaces at all points of M :

    T ∗ M := ⊔_{p∈M} Tp∗ M.

Just as in the case of the tangent bundle, there is a natural map π : T ∗ M → M given by
π(α) = p if α ∈ Tp∗ M . Mimicking the construction of the tangent bundle, we give T ∗ M
a topology as follows: If (U, φ) = (U, x1 , . . . , xn ) is a chart on M and p ∈ U , then each
α ∈ Tp∗ M can be written uniquely as a linear combination
    α = Σ_i ci(α) dxi|p.
©F

This gives rise to a bijection

φ̃ : T ∗ U → φ(U ) × Rn
α 7→ (φ(p), c1 (α), . . . , cn (α)) = (φ ◦ π, c1 , . . . , cn )(α).

We can transfer the topology of φ(U ) × Rn to T ∗ U through this bijection.


For each domain U of a chart in the maximal atlas of M , let BU be the collection of all open
subsets of T ∗ U , and B the union of all BU ’s. As before, B satisfies the conditions for a

collection of subsets of T ∗ M to be a basis. We give T ∗ M the topology generated by the basis
B. As for the tangent bundle, with the maps φ̃ = (x1 ◦ π, . . . , xn ◦ π, c1 , . . . , cn ) as coordinate
maps, T ∗ M becomes a smooth manifold, and the projection map π : T ∗ M → M becomes
a vector bundle of rank n over M , justifying the “bundle” in the name “cotangent bundle”.
If x1 , . . . , xn are coordinates on U ⊆ M , then π ∗ x1 , . . . , π ∗ xn , c1 , . . . , cn are coordinates on
π −1 U ⊆ T ∗ M . Properly speaking, the cotangent bundle of a manifold M is the triple
(T ∗ M, M, π), while T ∗ M and M are the total space and base space of the cotangent bundles

respectively. By abuse of language, it is customary to call T ∗ M the cotangent bundle of M .
In terms of the cotangent bundle, a 1-form on M is simply a section of the cotangent bundle
T ∗ M , ie it is a map ω : M → T ∗ M such that π ◦ ω = IdM . We say that a 1-form ω is smooth
if it is smooth as a map M → T ∗ M .

Example 5.1.3 (Liouville Form on the Cotangent Bundle)


Let M be an n-manifold. The total space T ∗ M of its cotangent bundle π : T ∗ M → M is

a manifold of dimension 2n. Remarkably, on T ∗ M there is a 1-form λ, called the Liouville
form (Poincaré form), defined independently of charts as follows.
A point in T ∗ M is a covector ωp ∈ Tp∗ M at some point p ∈ M . If Xωp is a tangent vector
to T ∗ M at ωp , then the pushforward π∗ (Xωp ) is a tangent vector to M at p. Thus one
can pair up ωp and π∗ (Xωp ) to obtain a real number ωp (π∗ (Xωp )). Define

λωp (Xωp ) := ωp (π∗ (Xωp )).

The cotangent bundle and the Liouville form on it play an important role in classical
mechanics.
x
5.1.4 Characterization of Smooth 1-Forms
We define a 1-form ω on a manifold M to be smooth if ω : M → T ∗ M is smooth as a
section of the cotangent bundle π : T ∗ M → M . The set of all smooth 1-forms on M has the
structure of a vector space, denoted by Ω1 (M ). In a coordinate chart (U, φ) = (U, x1 , . . . , xn )
on M , the value of the 1-form ω at p ∈ U is a linear combination
    ωp = Σ_i ai(p) dxi|p.

As p varies in U, the coefficients ai become functions on U. We now derive smoothness criteria


for a 1-form in terms of the coefficient functions ai . The development is parallel to that for
a vector field.
Recall the chart (U, φ) induces a chart
(T ∗ U, φ̃) = (T ∗ U, x̄1 , . . . , x̄n , c1 , . . . , cn )
on T ∗ M , where x̄i := π ∗ xi := xi ◦ π and the ci ’s are the coefficient functions of α ∈ Tp∗ M .

Comparing the coefficients in
    ωp = Σ_i ai(p) dxi|p = Σ_i ci(ωp) dxi|p,

we see that ai = ci ◦ ω, where ω is viewed as a map from U to T ∗ U . Being coordinate


functions, the ci ’s are smooth on T ∗ U . Thus if ω is smooth, then the coefficients ai of ω
relative to the frame dxi must be smooth on U . The following lemma shows that the converse

is also true.

Lemma 5.1.4
Let (U, φ) = (U, x1, . . . , xn) be a chart on a manifold M. A 1-form ω = Σ_i ai dxi on U is smooth if and only if the coefficient functions ai are all smooth.

Proof

Zh
Recall the characterization of smooth sections over U which states that a section over U
is smooth if and only if its coefficient functions with respect to any smooth frame over U
is smooth. This is simply a special case with the cotangent bundle as the vector bundle
and the coordinate 1-forms dxj as the smooth frame.

Proposition 5.1.5 (Smoothness of a 1-Form in terms of Coefficients)


Let ω be a 1-form on a manifold M . The following are equivalent:
(i) ω is smooth on M
(ii) The manifold M has an atlas
P such that on any chart (U, x1 , . . . , xn ) of the atlas,
the coefficients ai of ω = i ai dxi relative to the frame dxi are all smooth
x
(iii) On any chart (U, x1 , . . . , xn ) on the manifold, the coefficients of ω relative to the
frame dxi are all smooth
Corollary 5.1.5.1
If f ∈ C ∞ (M ), then its differential df is a smooth 1-form on M .

Proof
On any chart (U, x1 , . . . , xn ) on M ,

    df = Σ_i (∂f/∂xi) dxi.

Smoothness follows from the smoothness of the coefficient functions ∂f/∂xi.


If ω is a 1-form and X a vector field on M , we define a function ω(X) on M given by
ω(X)p := ωp (Xp ) ∈ R
for each p ∈ M .

Proposition 5.1.6 (Linearity of 1-Forms over Functions)
Let ω be a 1-form on a manifold M . If f is a function and X a vector field on M , then
ω(f X) = f ω(X).

Proof
At each p ∈ M ,

ω(f X)p := ωp (f (p)Xp ) = f (p)ωp (Xp ) =: (f ω(X))p .

Proposition 5.1.7 (Smoothness of a 1-Form in terms of Vector Fields)


A 1-form ω on a manifold M is smooth if and only if for every smooth vector field X
on M , the function ω(X) is smooth on M .

Proof
(⇒) Let ω be a smooth 1-form and X a smooth vector field on M. On any chart (U, x1, . . . , xn) on M, we have ω = Σ_i ai dxi and X = Σ_j bj ∂/∂xj. But then by the linearity of 1-forms over functions,

    ω(X) = (Σ_i ai dxi)(Σ_j bj ∂/∂xj) = Σ_{i,j} ai bj δij = Σ_i ai bi.

This is a smooth function on U.

(⇐) Conversely, suppose that ω is a 1-form on M such that ω(X) is smooth for every smooth vector field X. Given p ∈ M, choose a coordinate neighborhood (U, x1, . . . , xn) about p so that ω = Σ_i ai dxi on U. For any j ∈ [n], we can extend the smooth vector field ∂/∂xj on U to a smooth vector field X̄ on M that agrees with ∂/∂xj in a neighborhood Vpj of p in U. Restricted to Vpj,

    ω(X̄) = (Σ_i ai dxi)(∂/∂xj) = aj.

Thus aj is smooth on a neighborhood of p. Repeating over all j, all coefficient functions are simultaneously smooth on the neighborhood Vp := ∩_j Vpj about p. We conclude the proof by the arbitrary choice of p.
Let F := C∞(M) be the ring of all smooth functions on M. A 1-form ω on M defines a map X(M) → F given by X ↦ ω(X). This map is both R-linear and F-linear.
5.1.5 Pullback of 1-Forms

If F : N → M is a smooth map between manifolds, then at each p ∈ N , F∗,p is a linear map


that pushes forward tangent vectors at p from N to M .
Recall for a linear map T ∈ L(V, W ), its dual T ∨ ∈ L(W ∨ , V ∨ ) is defined to be
T ∨ (ϕ) := ϕ ◦ T.

Definition 5.1.2 (Codifferential)
The codifferential (dual of the differential),

(F∗,p )∨ : TF∗ (p) M → Tp∗ N

pulls back a covector at F (p) from M to N . This means that if ωF (p) ∈ TF∗ (p) M is a

covector at F (p) and Xp ∈ Tp N is a tangent vector at p, then

(F∗,p )∨ ωF (p) (Xp ) = ωF (p) (F∗,p Xp ).




Another notation for the codifferential is F ∗ = (F∗,p )∨ . With this notation,


F ∗ (ωF (p) )(Xp ) = (F∗,p )∨ ωF (p) (Xp ).


We call F ∗ (ωF (p) ) the pullback of the covector ωF (p) by F . Thus the pullback of covectors is
simply the codifferential.
Unlike vectgor fields which in general cannot be pushed forward under a smooth map, every
covector field can be pulled back by a smooth map. If ω is a 1-form on M , its pullback F ∗ ω
is the 1-form on N defined pointwise as expected:
(F ∗ ω)p := F ∗ (ωF (p) )
for p ∈ N . Thus for Xp ∈ Tp N ,
(F ∗ ω)p (Xp ) = ωF (p) (F∗ (Xp )).

Having defined the pullback of a 1-form, we turn to the natural question of whether this
operation preserves smoothness. First, we establish three commutation properties of the

pullback: differential, sum, and product.


Recall that functions can also be pulled back: if F is a smooth map from N to M and
g ∈ C ∞ (M ), then F ∗ g := g ◦ F ∈ C ∞ (N ).

Proposition 5.1.8 (Commutation of the Pullback with the Differential)


Let F : N → M be a smooth map of manifolds. For any h ∈ C ∞ (M ),

F ∗ (dh) = d(F ∗ h).

Proof
Fix p ∈ N and Xp ∈ Tp N . We check that

(F ∗ dh)p (Xp ) = (dF ∗ h)p (Xp ).

The LHS is equal to

(F ∗ dh)p (Xp ) = (dh)F (p) (F∗ Xp )
= (F∗ Xp )h
= Xp (h ◦ F ).

The RHS is equal to

(dF ∗ h)p (Xp ) = Xp (F ∗ h)

= Xp (h ◦ F ).
We proceed to check that pullbacks of 1-forms respect addition and multiplication by functions.

Proposition 5.1.9 (Pullbacks of Sums and Products)


Let F : N → M be a smooth map of manifolds. Suppose ω, τ ∈ Ω1 (M ) and g ∈ C ∞ (M ).
Then
1. F ∗ (ω + τ ) = F ∗ ω + F ∗ τ
2. F ∗ (gω) = (F ∗ g)(F ∗ ω)
Proof
(i) Fix p ∈ N and Xp ∈ Tp N .
(F ∗ (ω + τ ))p Xp = (ω + τ )F (p) (F∗ Xp )
= ωF (p) (F∗ Xp ) + τF (p) (F ∗ Xp )
= (F ∗ (ω))p Xp + (F ∗ (τ ))p Xp .

(ii)

(F ∗ (gω))p Xp = (gω)F (p) (F∗ Xp )


= g(F (p))ωF (p) (F∗ Xp )
= (F ∗ g)p (F ∗ ω)p Xp .
We can now verify that pullbacks preserve smoothness.

Proposition 5.1.10 (Pullback of a Smooth 1-Form)
The pullback F ∗ ω of a smooth 1-form ω on M under a smooth map F : N → M is a
smooth 1-form on N .

Proof
Fix p ∈ N. It suffices to check that F∗ω is smooth at p. Choose a chart (V, y1, . . . , yn) on M about F(p). By the continuity (smoothness) of F, there is a chart (U, x1, . . . , xn) about p in N such that F(U) ⊆ V. On V, ω = Σ_i ai dyi for some ai ∈ C∞(V). On U, writing Fi := yi ◦ F,

    F∗ω = Σ_i F∗(ai dyi)
        = Σ_i F∗(ai) F∗(dyi)
        = Σ_i (ai ◦ F) d(F∗yi)
        = Σ_i (ai ◦ F) d(yi ◦ F)
        = Σ_i (ai ◦ F) Σ_j (∂Fi/∂xj) dxj
        = Σ_{i,j} (ai ◦ F)(∂Fi/∂xj) dxj.

Since each ai ◦ F is a composition of smooth functions and each ∂Fi/∂xj is smooth, the coefficients of F∗ω are smooth at p, and hence F∗ω is smooth at p as desired.

Example 5.1.11 (Liouville Form on the Cotangent Bundle)


Let M be a manifold. The Liouville form λ on the cotangent bundle T ∗ M can be expressed as
λωp = π ∗ (ωp )
at any ωp ∈ T ∗ M .
©F

5.1.6 Restriction of 1-Forms to Immersed Submanifolds

Let S ⊆ M be an immersed submanifold and ι : S → M the inclusion map. As a reminder


this means that ι(S) inherits the topology from S and hence we automatically have S ' ι(S).
At any p ∈ S, since the differential ι∗ : Tp S → Tp M is injective, we can view the tangent
space Tp S as a subspace of Tp M. If ω is a 1-form on M, then the restriction of ω to S is the
1-form ω|S defined by
(ω|S )p (v) = ωp (v)

for every p ∈ S and v ∈ Tp S.

Proposition 5.1.12
If ι : S → M is the inclusion map of an immersed submanifold S and ω is a 1-form on
M , then
ι∗ ω = ω|S .

Proof
For p ∈ S and v ∈ Tp S,

(ι∗ ω)p (v) = ωι(p) (ι∗ v)


= ωp (v)
= (ω|S )p (v).

For simplicity of notation, we sometimes write ω to mean ω|S .

Example 5.1.13 (A 1-Form on the Circle)


The velocity vector field of the unit circle c(t) = (x, y) = (cos t, sin t) in R2 is given by

c0 (t) = (− sin t, cos t) = (−y, x).

Thus
∂ ∂
X = −y +x
∂x ∂y
is a smooth vector field on the unit circle S 1 .
x
This notation means that if x, y are the standard coordinates on R2 and ι : S 1 → R2 is
the inclusion map, then at a point p = (x, y) ∈ S 1 , one has
eli
∂ ∂
ι∗ Xp = −y +x ,
∂x p ∂y p

where ∂/∂x|p and ∂/∂|p are tangent vectors at p in R2 .

Example 5.1.14

The 1-form ω = −ydx + xdy satisfies

ω(X) ≡ 1

on the unit circle S 1 where X is the velocity vector field of the unit circle.

Remark 5.1.15 If we wish to be pedantic, the 1-form above should be written as ω = −ȳ dx̄ + x̄ dȳ, in terms of the restrictions x̄, ȳ of the coordinates x, y to S 1. However, since ι∗x = x̄ and ι∗dx = dx̄, there is little chance of confusion and we typically omit the bar.
This is in contrast to the situation for vector fields, where ι∗(∂/∂x̄|p) ≠ ∂/∂x|p.

Example 5.1.16 (Pullback of a 1-Form)


Let h : R → S 1 ⊆ R2 be given by h(t) = (x, y) = (cos t, sin t). If ω is the 1-form
−ydx + xdy on S 1 , the pullback h∗ ω is given by

    h∗(−y dx + x dy) = −(h∗y) d(h∗x) + (h∗x) d(h∗y)
                     = −(sin t) d(cos t) + (cos t) d(sin t)
                     = sin²t dt + cos²t dt
                     = dt.
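
This computation can be checked symbolically; in coordinates, the coefficient of dt in h∗ω is −y(t) x′(t) + x(t) y′(t). A sketch (Python with SymPy):

    # Pull back omega = -y dx + x dy along h(t) = (cos t, sin t); the dt-coefficient is 1.
    import sympy as sp

    t = sp.symbols('t')
    xt, yt = sp.cos(t), sp.sin(t)
    coeff_dt = -yt * sp.diff(xt, t) + xt * sp.diff(yt, t)
    print(sp.simplify(coeff_dt))   # 1, i.e. h*omega = dt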

5.2 Differential k-Forms

Similar to the Euclidean setting, we now generalize the construction of 1-forms on a manifold to that of k-forms. In parallel to the construction of the tangent and cotangent bundles on a manifold, we construct the k-th exterior power ⋀k (T ∗ M ) of the cotangent bundle. This yields a natural notion of smoothness of differential forms as smooth sections of the vector bundle ⋀k (T ∗ M ). The pullback and wedge product of differential forms are defined pointwise. We
consider left-invariant forms on a Lie group as examples of differential forms.

5.2.1 Differential Forms


Recall that a k-tensor on a vector space V is a k-linear function

f : V × · · · × V → R.
We say that the k-tensor f is alternating if for any permutation σ ∈ Sk ,

f (vσ(1) , . . . , vσ(k) ) = (sgn σ)f (v1 , . . . , vk ).

Note that all 1-tensors are alternating. An alternating k-tensor on V is also called a k-
covector on V .

For any vector space V, denote by Ak(V) the vector space of alternating k-tensors on V. Another common notation is ⋀k (V ∨). There is a purely algebraic construction of ⋀k (V), called the k-th exterior power of the vector space V, with the property that ⋀k (V ∨) ≅ Ak(V).
We ignore this construction for simplicity.
We apply the function Ak () to the tangent space Tp M of a manifold M at a point p. The
vector space Ak (Tp M ), typically denoted ⋀k (Tp∗ M ), is the space of all alternating k-tensors
on the tangent space Tp M . A k-covector field on M is a function ω that assigns to each

p ∈ M a k-covector ωp ∈ ⋀k (Tp∗ M ). A k-covector field is also called a differential k-form, a
differential form of degree k, or simply a k-form. A top form on a manifold is a differential
form whose degree is the dimension of the manifold.
If ω is a k-form on a manifold M and X1 , . . . , Xk are vector fields on M , then ω(X1 , . . . , Xk )
is the function on M defined by

(ω(X1 , . . . , Xk ))(p) := ωp ((X1 )p , . . . , (Xk )p ).

Proposition 5.2.1 (Multilinearity of a Form over Functions)
Let ω be a k-form on a manifold M . For any vector fields X1 , . . . , Xk and any function
h on M ,
ω(X1 , . . . , hXi , . . . , Xk ) = hω(X1 , . . . , Xi , . . . , Xk ).

The proof follows by the point-wise R-multilinearity of ω.

Zh
Example 5.2.2
Let (U, x1 , . . . , xn ) be a coordinate chart on a manifold. At each p ∈ U , a basis for the
tangent space Tp U is ∂/∂x1 |p , . . . , ∂/∂xn |p . Recall the dual basis for the cotangent space
Tp∗ U is
(dx1 )p , . . . , (dxn )p .
As p varies over U , we get differential 1-forms dx1 , . . . , dxn on U .
Recall from the general theory of alternating k-tensors that a basis for ⋀k (Tp∗ U ) is

    (dxi1)p ∧ · · · ∧ (dxik)p,

for 1 ≤ i1 < · · · < ik ≤ n. If ω is a k-form on U , then at each p ∈ U , ωp is a linear


combination
    ωp = Σ_{i1<···<ik} ai1...ik(p) (dxi1)p ∧ · · · ∧ (dxik)p.

Omitting the point p, we write

    ω = Σ_{i1<···<ik} ai1...ik dxi1 ∧ · · · ∧ dxik.

In this expression, the coefficients ai1 ...ik are functions on U as they vary with the point p.

For brevity, we write

Jk,n := {I = (i1 , . . . , ik ) : 1 ≤ i1 < · · · < ik ≤ n}

to indicate the set of all strictly ascending multi-indices between 1 and n of length k, and write

    ω = Σ_{I∈Jk,n} aI dxI,

where dxI := dxi1 ∧ · · · ∧ dxik .

5.2.2 Local Expression for a k-Form

We know that on a coordinate chart (U, x1, . . . , xn) of a manifold M, a k-form on U is a linear combination ω = Σ_I aI dxI, where I ∈ Jk,n and the aI's are functions on U. We write ∂i := ∂/∂xi for the i-th coordinate vector field. Evaluating pointwise, we obtain the following equality on U for I, J ∈ Jk,n:

    dxI(∂j1, . . . , ∂jk) = δIJ,   which is 1 if I = J and 0 if I ≠ J.

Recall the notation

    ∂(f1, . . . , fk)/∂(xi1, . . . , xik) := det[∂fi/∂xij].

Proposition 5.2.3 (A Wedge of Differentials in Local Coordinates)
Let (U, x1, . . . , xn) be a chart on a manifold and f1, . . . , fk smooth functions on U. Then

    df1 ∧ · · · ∧ dfk = Σ_{I∈Jk,n} [∂(f1, . . . , fk)/∂(xi1, . . . , xik)] dxi1 ∧ · · · ∧ dxik.

Proof
On U,

    df1 ∧ · · · ∧ dfk = Σ_{J∈Jk,n} cJ dxj1 ∧ · · · ∧ dxjk

for some functions cJ. For the LHS, recall that the wedge product of the differentials, applied to k coordinate vectors, satisfies

    (df1 ∧ · · · ∧ dfk)(∂i1, . . . , ∂ik) = det[∂fi/∂xij] = ∂(f1, . . . , fk)/∂(xi1, . . . , xik).

On the other hand, the RHS applied to (∂i1, . . . , ∂ik) is equal to

    Σ_J cJ dxJ(∂i1, . . . , ∂ik) = Σ_J cJ δJI = cI.

If (U, x1, . . . , xn) and (V, y1, . . . , yn) are two overlapping charts on a manifold, then on the intersection U ∩ V, the proposition above yields the transition formula for k-forms:

    dyJ = Σ_{I∈Jk,n} [∂(yj1, . . . , yjk)/∂(xi1, . . . , xik)] dxI.

Two cases of the proposition above are of special interest:

Corollary 5.2.3.1
Let (U, x1 , . . . , xn ) be a chart on a manifold and f, f 1 , . . . , f n ∈ C ∞ (U ). Then
(i) (1-forms) df = Σ_i (∂f/∂xi) dxi

(ii) (top forms) df 1 ∧ · · · ∧ df n = det[∂f j /∂xi ]dx1 ∧ · · · ∧ dxn
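
For n = 2, case (ii) can be verified symbolically by comparing the expansion of df1 ∧ df2 with the Jacobian determinant (Python with SymPy; the functions f1, f2 are arbitrary examples):

    # Coefficient of dx1 ∧ dx2 in df1 ∧ df2 equals the Jacobian determinant.
    import sympy as sp

    x1, x2 = sp.symbols('x1 x2')
    f1, f2 = x1**2 + x2, x1 * sp.exp(x2)

    coeff = sp.diff(f1, x1) * sp.diff(f2, x2) - sp.diff(f1, x2) * sp.diff(f2, x1)
    jac_det = sp.Matrix([f1, f2]).jacobian([x1, x2]).det()
    print(sp.simplify(coeff - jac_det))   # 0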

Proposition 5.2.4 (Transition Formula for a 2-Form)
If (U, x1, . . . , xn) and (V, y1, . . . , yn) are two overlapping coordinate charts on M, then a smooth 2-form ω on U ∩ V has two local expressions

    ω = Σ_{i<j} aij dxi ∧ dxj = Σ_{k<ℓ} bkℓ dyk ∧ dyℓ,

and

    aij = Σ_{k<ℓ} bkℓ ∂(yk, yℓ)/∂(xi, xj).

Proof
By an earlier remark,

    dyk ∧ dyℓ = Σ_{i<j} [∂(yk, yℓ)/∂(xi, xj)] dxi ∧ dxj,

so that

    ω = Σ_{i<j} aij dxi ∧ dxj = Σ_{k<ℓ} bkℓ Σ_{i<j} [∂(yk, yℓ)/∂(xi, xj)] dxi ∧ dxj.

Matching coefficients of dxi ∧ dxj yields the desired result.



5.2.3 The Bundle Point of View

Let M be a manifold of dimension n. We mimic the construction of the tangent and cotangent
bundles and form the set

    ⋀k (T ∗ M ) := ⊔_{p∈M} ⋀k (Tp∗ M ) = ⊔_{p∈M} Ak (Tp M )

of all alternating k-tensors at all points of the manifold M. This set is called the k-th exterior power of the cotangent bundle. There is a canonical projection map π : ⋀k (T ∗ M ) → M given by π(α) = p for α ∈ ⋀k (Tp∗ M ).
If (U, φ) is a coordinate chart on M, then there is a bijection

    ⋀k (T ∗ U ) = ⋃_{p∈U} ⋀k (Tp∗ U ) ≅ φ(U ) × R^{C(n,k)},
    α ∈ ⋀k (Tp∗ U ) ↦ (φ(p), {cI(α)}I),

where α = Σ_I cI(α) dxI|p ∈ ⋀k (Tp∗ U ), I = (1 ≤ i1 < · · · < ik ≤ n), and C(n, k) denotes the binomial coefficient. Hence we can give ⋀k (T ∗ U ), and hence ⋀k (T ∗ M ), a topology and even a differentiable structure. The details are just like those for the construction of the tangent bundle, so we omit them. The upshot is that the projection map π : ⋀k (T ∗ M ) → M is a smooth vector bundle of rank C(n, k), and that a differential k-form is simply a section of this bundle. We define a k-form to be smooth if it is smooth as a section of the bundle π : ⋀k (T ∗ M ) → M.

If E → M is a smooth vector bundle, then the vector space of smooth sections of E is denoted
by Γ(E) or Γ(M, E). The vector space of all smooth k-forms on M is usually denoted by
Ωk (M ). Thus

    Ωk (M ) = Γ(⋀k (T ∗ M )) = Γ(M, ⋀k (T ∗ M )).

5.2.4 Smooth k-Forms


We state several characterizations of a smooth k-form. The proofs are omitted since they
are similar but more tedious than those for 1-forms.
Lemma 5.2.5 (Smoothness of a k-Form on a Chart)
Let (U, x1, . . . , xn) be a chart on a manifold M. A k-form ω = Σ_I aI dxI on U is smooth if and only if the coefficient functions aI are all smooth on U.

Proposition 5.2.6 (Characterization of a Smooth k-Form)


Let ω be a k-form on a manifold M . The following are equivalent:
(i) ω is smooth on M
(ii) M has an atlas such that on every chart (U, x1, . . . , xn), the coefficients aI of ω = Σ_I aI dxI relative to the coordinate frame {dxI}_{I∈Jk,n} are all smooth
(iii) On every chart (U, x1, . . . , xn) in the maximal atlas, the coefficients aI of ω = Σ_I aI dxI relative to the coordinate frame {dxI}_{I∈Jk,n} are all smooth

(iv) For any k smooth vector fields X1 , . . . , Xk on M , the function ω(X1 , . . . , Xk ) is


smooth on M

We defined 0-tensors and 0-covectors to be the constant functions, so

    L0(V) = A0(V) = R.

Thus the bundle ⋀0 (T ∗ M ) ≅ M × R and a 0-form on M is just a function on M. A smooth 0-form is thus the same as a smooth function on M. In our new notation,

    Ω0 (M ) = Γ(⋀0 (T ∗ M )) = Γ(M × R) = C∞(M ).

Similar to smooth functions, smooth differential forms can also be smoothly extended.

Proposition 5.2.7 (Smooth Extension of a Form)


Suppose τ is a smooth differential form defined on a neighborhood U 3 p in a man-
ifold M . There is a smooth form τ̃ on M that agrees with τ on a possibly smaller

neighborhood of p.

We note that the extension τ̃ is not unique. It depends on p as well as the choice of a bump
function at p.

5.2.5 Pullback of k-Forms

We have defined the pullback of 0-forms and 1-forms under a smooth map F : N → M . For
a smooth 0-form on M , ie a smooth function on M ,
the pullback is the composite

    F ∗ f := f ◦ F : N → R,   F ∗ f ∈ Ω0 (N ).
eli
To generalize the pullback to k-forms for k ≥ 1, we first recall the pullback of k-covectors.
A linear map L : V → W of vector spaces induces a pullback map
L∗ : Ak (W ) → Ak (V )
(L∗ α)(v1 , . . . , vk ) = α(L(v1 ), . . . , L(vk ))
for α ∈ Ak (W ) and v1 , . . . , vk ∈ V .

Suppose F : N → M is a smooth map of manifolds. At each p ∈ N , the differential


F∗,p : Tp N → TF (p) M is a linear map of tangent spaces. Hence there is a pullback map
(F∗,p )∗ : Ak (TF (p) M ) → Ak (Tp N ).
This ugly notation is usually simplified to F ∗ . Thus if ωF (p) is a k-covector at F (p) ∈ M ,
then its pullback F ∗ (ωF (p) ) is the k-covector at p ∈ N given by
F ∗ (ωF (p) )(v1 , . . . , vk ) = ωF (p) (F∗,p (v1 ), . . . , F∗,p (vk ))

for vi ∈ Tp N .
For a k-form ω on M , its pullback F ∗ ω is the k-form on N defined pointwise by

(F ∗ ω)p := F ∗ (ωF (p) ) = ωF (p) (F∗,p (·), . . . , F∗,p (·)).

When k = 1, this formula specializes to the definition of the pullback of a 1-form. The
pullback of a k-form can be viewed as a composition

    Tp N × · · · × Tp N → TF (p) M × · · · × TF (p) M → R,

where the first map is F∗ × · · · × F∗ and the second is ωF (p).

Similar to the linearity of the pullback of a 0-form or 1-form, we can prove the following.

Proposition 5.2.8 (Linearity of the Pullback)


Let F : N → M be a smooth map. If ω, τ are k-forms on M and a ∈ R, then

(i) F ∗ (ω + τ ) = F ∗ ω + F ∗ τ
(ii) F ∗ (aω) = aF ∗ ω

We defer the basic question of whether the pullback of a smooth k-form under a smooth
map remains smooth for k ≥ 2.

5.2.6 The Wedge Product

Recall that a (k, ℓ)-shuffle is a permutation σ ∈ S_{k+ℓ} such that

    σ(1) < · · · < σ(k),   σ(k + 1) < · · · < σ(k + ℓ).

We know that if α, β are alternating tensors of degree k, ℓ respectively on a vector space V, then their wedge product α ∧ β is the alternating (k + ℓ)-tensor on V given by

    (α ∧ β)(v1, . . . , vk+ℓ) = Σ_σ (sgn σ) α(vσ(1), . . . , vσ(k)) β(vσ(k+1), . . . , vσ(k+ℓ)).

Here vi ∈ V and σ runs over all (k, ℓ)-shuffles of [k + ℓ]. For 1-covectors α, β,

    (α ∧ β)(v1, v2) = α(v1)β(v2) − α(v2)β(v1).

The wedge product extends pointwise to differential forms on a manifold: for a k-form ω and an ℓ-form τ on M, define their wedge product ω ∧ τ to be the (k + ℓ)-form on M such that

    (ω ∧ τ)p = ωp ∧ τp

at all p ∈ M.

Proposition 5.2.9
If ω, τ are smooth forms on M , then ω ∧ τ is also smooth.

Proof
Let (U, x1, . . . , xn) be a chart on M. On U,

    ω = Σ_I aI dxI,   τ = Σ_J bJ dxJ

for smooth functions aI, bJ on U. Their wedge product is given by

    ω ∧ τ = (Σ_I aI dxI) ∧ (Σ_J bJ dxJ)
          = Σ_{I,J} aI bJ dxI ∧ dxJ
          = Σ_K ( Σ_{I∪J=K, I∩J=∅} ±aI bJ ) dxK.

The last equality results from the observation that dxI ∧ dxJ = 0 if I, J have a common index. If I, J are disjoint, then dxI ∧ dxJ = ±dxK, where K = I ∪ J reordered as an increasing sequence. Since the coefficients of dxK are smooth on U, we conclude the proof.

Proposition 5.2.10 (Pullback of a Wedge Product)


If F : N → M is a smooth map of manifolds and ω, τ are differential forms on M , then
F ∗ (ω ∧ τ ) = F ∗ ω ∧ F ∗ τ.

Proof
Fix p ∈ N. Suppose ω is a k-form and τ an ℓ-form on M, and let v1, . . . , vk+ℓ ∈ Tp N. Then

    (F∗(ω ∧ τ))p(v1, . . . , vk+ℓ)
    = (ω ∧ τ)F(p)(F∗,p(v1), . . . , F∗,p(vk+ℓ))
    = Σ_σ (sgn σ) ωF(p)(F∗,p(vσ(1)), . . . , F∗,p(vσ(k))) τF(p)(F∗,p(vσ(k+1)), . . . , F∗,p(vσ(k+ℓ)))
    = Σ_σ (sgn σ) (F∗ω)p(vσ(1), . . . , vσ(k)) (F∗τ)p(vσ(k+1), . . . , vσ(k+ℓ))
    = ((F∗ω)p ∧ (F∗τ)p)(v1, . . . , vk+ℓ)
    = (F∗ω ∧ F∗τ)p(v1, . . . , vk+ℓ),

where σ runs over all (k, ℓ)-shuffles.
Define the vector space Ω∗(M) of smooth differential forms on a manifold M of dimension n to be the direct sum

    Ω∗(M ) = ⊕_{k=0}^{n} Ωk(M ).

This means each element of Ω∗(M) is uniquely a sum Σ_{k=0}^{n} ωk, where ωk ∈ Ωk(M). With the wedge product, the vector space Ω∗(M) becomes a graded algebra, with the grading being the degree of differential forms.
the degree of differential forms.

5.2.7 Differential Forms on a Circle

Consider the map


h : R → S 1, h(t) = (cos t, sin t).

Since the derivative ḣ(t) = (− sin t, cos t) is nonzero for all t, the map h : R → S 1 is a

submersion. It can be shown in this case that the pullback by a surjective submersion is an
injective algebra homomorphism. Hence h∗ : Ω∗ (S 1 ) → Ω∗ (R) is injective and we can identify
the differential forms on S 1 with a subspace of differential forms on R.
Let ω = −ydx + xdy be the nowhere-vanishing form on S 1 . Recall that h∗ ω = dt. Since ω is
nowhere vanishing, it is a frame for the cotangent bundle T ∗ S 1 over S 1 , and every smooth
1-form α on S 1 can be written as α = f ω for some smooth f ∈ C ∞ (S 1 ). Its pullback
f¯ := h∗ f is a smooth function on R. Since pulling back preserves multiplication,
h∗ α = (h∗ f )(h∗ ω) = f¯dt.
We say that a function g or a 1-form gdt on R is periodic of period a if g(t + a) = g(t) for
all t ∈ R.

Proposition 5.2.11
For k = 0, 1, under the pullback map h∗ : Ω∗ (S 1 ) → Ω∗ (R), smooth k-forms on S 1 are
identified with smooth periodic k-forms of period 2π on R.

5.2.8 Invariant Forms on a Lie Group



Just as there are left-invariant vector fields on a Lie group G, there are also left invariant
differential forms.

Definition 5.2.1 (Left-Invariant Differential Form)


Let G be a Lie group and `g : G → G denote left multiplication by g ∈ G. A k-form ω on G is left-invariant if `∗g ω = ω for all g ∈ G. This means that for all g, x ∈ G,

    `∗g (ωgx) = ωx.

By definition, a left-invariant k-form is uniquely determined by its value at the identity, since
for any g ∈ G,
ωg = `∗g−1 (ωe ).

Example 5.2.12
ω = −ydx + xdy is a left-invariant 1-form on S 1 .

ou
Proposition 5.2.13
Every left-invariant k-form ω on a Lie group G is smooth.

Proof
It suffices to show that for any k smooth vector fields X1 , . . . , Xk on G, the function

ω(X1, . . . , Xk) is smooth on G. Let (Y1)e, . . . , (Yn)e be a basis for Te G and Y1, . . . , Yn the left-invariant vector fields they generate. Then Y1, . . . , Yn is a smooth frame on G, since left-invariant vector fields are smooth. Each Xj can be written as a linear combination Xj = Σ_i aij Yi for some smooth functions aij. It thus suffices to show that ω(Y1, . . . , Yk) is smooth for any left-invariant vector fields Y1, . . . , Yk.
We have

    (ω(Y1, . . . , Yk))(g) = ωg((Y1)g, . . . , (Yk)g)
                          = (`∗g−1(ωe))(`g∗(Y1)e, . . . , `g∗(Yk)e)    (left-invariance of ω and the Yi)
                          = ωe((Y1)e, . . . , (Yk)e).

This is a constant (smooth) function of g as desired.


eli
Similarly, a k-form ω on G is said to be right-invariant if rg∗ ω = ω for all g ∈ G. We can
analogously prove that every right-invariant form on a Lie group is smooth.
Let Ωk(G)^G denote the vector space of left-invariant k-forms on G. The linear map

    Ωk(G)^G → ⋀k (g∨),   ω ↦ ωe,

has an inverse, given by sending a k-covector at e to the left-invariant differential form it generates, and is therefore an isomorphism. It follows that

    dim Ωk(G)^G = C(dim G, k).

5.3 The Exterior Derivative

In contrast to standard calculus, the basic objects in calculus on manifolds are differential
forms rather than functions.
Recall that an antiderivation on a graded algebra A = ⊕_{k=0}^{∞} A^k is an R-linear map D : A → A such that

    D(ω · τ ) = (Dω) · τ + (−1)^k ω · (Dτ )

for all ω ∈ A^k, τ ∈ A^ℓ. In the graded algebra A, an element of A^k is called a homogeneous element of degree k. The antiderivation is of degree m if

    deg Dω = deg ω + m

for all homogeneous elements ω ∈ A.
Let M be a manifold and Ω∗ (M ) the graded algebra of smooth differential forms on M . On
the graded algebra Ω∗ (M ), there is a uniquely and intrinsically defined antiderivation called
the exterior derivative. The process of applying the exterior derivative is called exterior
differentiation.

Definition 5.3.1 (Exterior Derivative)


An exterior derivative on a manifold M is an R-linear map

    D : Ω∗ (M ) → Ω∗ (M )

such that
(i) D is an antiderivation of degree 1
(ii) D ◦ D = 0
(iii) For any f ∈ C ∞ (M ) and X ∈ X(M ), (Df )(X) = Xf

Condition (iii) states that on 0-forms (functions), an exterior derivative agrees with the
differential df of a function f . Hence on a coordinate chart (U, x1 , . . . , xn ),

    Df = df = Σ_i (∂f/∂xi) dxi.

Our goal is to prove the existence and uniqueness of an exterior derivative on a manifold.
Using its defining properties, we can then show that the exterior derivative commutes with
the pullback. As a corollary, the pullback of a smooth form by a smooth map is smooth.

5.3.1 Exterior Derivative on a Coordinate Chart

We showed the existence and uniqueness of an exterior derivative on an open subset of Rn. The same proof carries over to any coordinate chart on a manifold. Indeed, suppose (U, x1, . . . , xn) is a coordinate chart on a manifold M. Then any k-form ω on U is uniquely a linear combination

    ω = Σ_I aI dxI

for some aI ∈ C∞(U).
If D is an exterior derivative on U,

    D(dxI) = D(dxi1 ∧ · · · ∧ dxik)
           = (Ddxi1) ∧ dxi2 ∧ · · · ∧ dxik + (−1)¹ dxi1 ∧ D(dxi2 ∧ · · · ∧ dxik)
           = 0 − dxi1 ∧ D(dxi2 ∧ · · · ∧ dxik),              (Dxi1 = dxi1 and D ◦ D = 0)

and by an inductive argument on k we see that DdxI = 0 for all I ∈ Jk,n, k ≥ 0. It follows that

    Dω = Σ_I D(aI dxI)                                       (linearity)
       = Σ_I (DaI) ∧ dxI + Σ_I aI (DdxI)                     (antiderivation)
       = Σ_I (DaI ∧ dxI)                                     (DdxI = 0)
       = Σ_I Σ_j (∂aI/∂xj) dxj ∧ dxI.

Hence if an exterior derivative D exists, then it is uniquely given by the expression above. To show existence, we define D by this formula and show that it satisfies the three conditions. The proof is exactly the same as in the Euclidean case and is thus omitted. We denote the unique exterior derivative on a chart (U, φ) by dU.
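
A symbolic sketch of the coordinate formula and of D ◦ D = 0 in the simplest case, namely d(df) = 0 for a function f on R³ (Python with SymPy; representing forms by lists of coefficients is an ad hoc choice for this illustration):

    # The coefficients of d(df) are the antisymmetrized second partials of f,
    # which vanish by equality of mixed partial derivatives.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    coords = [x, y, z]
    f = x * y**2 * sp.sin(z)

    df = [sp.diff(f, c) for c in coords]                       # 1-form: sum_i a_i dx^i
    ddf = [sp.simplify(sp.diff(df[j], coords[i]) - sp.diff(df[i], coords[j]))
           for i in range(3) for j in range(i + 1, 3)]          # coefficients of dx^i ∧ dx^j
    print(ddf)   # [0, 0, 0], i.e. d(df) = 0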
©F

5.3.2 Local Operators

Similar to the derivative of a function on Rn , an antiderivation D on Ω∗ (M ) has the property


that for a k-form ω, the value of Dω at a point p depends only on the values of ω in a
neighborhood of p. We formalize this under the notion of local operators.
An endomorphism of a vector space W is often called an operator on W . For example, if
W = C ∞ (R) is the vector space of smooth functions on R, then the derivative d/dx is an

operator on W:

    (d/dx)(f) := f′.

The derivative has the property that the value of f′ at a point p depends only on the values of f in a small neighborhood of p. More precisely, if f = g on an open set U ⊆ R, then f′ = g′ on U. We say that the derivative is a local operator on C∞(R).

Definition 5.3.2 (Local Operator)
An operator D : Ω∗ (M ) → Ω∗ (M ) is said to be local if for all k ≥ 0, whenever a
k-form ω ∈ Ωk (M ) restricts to 0 on an open set U in M , then Dω ≡ 0 on U .

An equivalent criterion is that for all k ≥ 0, whenever two k-forms ω, τ ∈ Ωk (M ) agree on


an open set U , then Dω ≡ Dτ on U .

Zh
Example 5.3.1 (Integral Operator)
Define the integral operator

I : C ∞ [a, b] → C ∞ [a, b]
Z b
f 7→ f (t)dt.
a

We consider I(f ) as a constant function over [a, b]. This is not a local operator since I(f )
depends on the value of f over all [a, b].
x
Proposition 5.3.2
Any antiderivation D on Ω∗ (M ) is a local operator.
eli
Proof
Suppose ω ∈ Ωk(M) and ω ≡ 0 on an open subset U. Let p ∈ U be arbitrary. We claim that (Dω)p = 0.
Choose a smooth bump function f at p supported in U; in particular, f ≡ 1 in a neighborhood of p within U. Then fω ≡ 0 on M, since for q ∈ U we have ωq = 0, and for q ∉ U we have f(q) = 0. Applying the antiderivation property of D to fω,

    0 = D(0) = D(fω) = (Df) ∧ ω + (−1)⁰ f (Dω).

Evaluating the right-hand side at p yields

    0 = (Df)p ∧ ωp + f(p)(Dω)p = 0 + (Dω)p = (Dω)p.

We remark that the same proof shows that a derivation on Ω∗ (M ) is also a local operator.

5.3.3 Existence of an Exterior Derivative on a Manifold

To define an exterior derivative on a manifold M, consider a k-form ω on M and some point p ∈ M. Choose a chart (U, x1, . . . , xn) about p. We can locally express ω = Σ_I aI dxI on U. We know that there is an exterior derivative dU on U, given by

    dU ω = Σ_I daI ∧ dxI

on U. Define (dω)p := (dU ω)p. We need to show that (dU ω)p is independent of the chart U about p. If (V, y1, . . . , yn) is another chart about p and ω = Σ_J bJ dyJ on V, then on U ∩ V,

    Σ_I aI dxI = Σ_J bJ dyJ.

On U ∩ V, there is a unique exterior derivative

    dU∩V : Ω∗(U ∩ V) → Ω∗(U ∩ V).

By the properties of the exterior derivative on U ∩ V,

    dU∩V (Σ_I aI dxI) = dU∩V (Σ_J bJ dyJ),
    Σ_I daI ∧ dxI = Σ_J dbJ ∧ dyJ,
    (Σ_I daI ∧ dxI)_p = (Σ_J dbJ ∧ dyJ)_p.

Thus (dω)p = (dU ω)p is well defined, independently of the chart (U, x1, . . . , xn).

Thus (dω)p = (dU ω)p is well defined independently of the chart (U, x1 , . . . , xn ).
As p varies over all points of M , this defines an operator d : Ω∗ (M ) → Ω∗ (M ). In order
to check that d satisfies the defining properties of an exterior derivative, it suffices to check
them at each point p ∈ M , which we have already done.

5.3.4 Uniqueness of the Exterior Derivative

Suppose D : Ω∗ (M ) → Ω∗ (M ) is an exterior derivative. We show that D necessarily coincides


with the exterior derivative defined above.
If f is a smooth function and X a smooth vector field on M , then by the defining conditions,
(Df )(X) = Xf = (df )(X).

Thus Df = df on functions f ∈ Ω0 (M ).
Consider now a wedge product of exact 1-forms df 1 ∧ · · · ∧ df k :

    D(df1 ∧ · · · ∧ dfk) = D(Df1 ∧ · · · ∧ Dfk)                                        (Dfi = dfi)
                         = Σ_{i=1}^{k} (−1)^{i−1} Df1 ∧ · · · ∧ DDfi ∧ · · · ∧ Dfk      (antiderivation)
                         = 0.                                                           (D² = 0)

We now show that D agrees with d on any k-form ω ∈ Ωk(M). Fix p ∈ M and choose a chart (U, x1, . . . , xn) about p so that ω = Σ_I aI dxI on U. Extend the functions aI, xi on U to smooth functions ãI, x̃i on M that agree with aI, xi on a neighborhood V ∋ p. Define

    ω̃ = Σ_I ãI dx̃I ∈ Ωk(M).

Then ω ≡ ω̃ on V.
Since D is a local operator, Dω = Dω̃ on V. Hence

    (Dω)p = (Dω̃)p
          = (D Σ_I ãI dx̃I)p
          = (Σ_I DãI ∧ dx̃I + Σ_I ãI Ddx̃I)p
          = (Σ_I dãI ∧ dx̃I)p                    (Ddx̃I = 0 by the computation above)
          = (Σ_I daI ∧ dxI)p
          = (dω)p.
©F

= (dω)p .

This yields the following theorem.

Theorem 5.3.3
On any manifold M , there exists an exterior derivative d : Ω∗ (M ) → Ω∗ (M ) charac-
terized uniquely by the three defining properties.

5.3.5 Exterior Differentiation Under a Pullback

We now show that the pullback of differential forms commutes with the exterior derivative.
Combined with the fact that the pullback preserves the wedge product, this is a cornerstone
of calculations involving the pullback. We use these two properties to show that the pullback
of a smooth form under a smooth form remains smooth.

Proposition 5.3.4 (Commutation of the Pullback with d)
Let F : N → M be a smooth map of manifolds. If ω ∈ Ωk (M ), then dF ∗ ω = F ∗ dω.

Proof
We have already proven the case of k = 0 where ω = h ∈ C ∞ (M ) by checking that
(F ∗ dh)p (Xp ) = (dF ∗ h)p (Xp ) for any Xp ∈ Tp N . Consider now the case of k ≥ 1. We
check that dF ∗ ω = F ∗ dω at every point p ∈ N . This reduces the proof to a local

Zh
computation.
If (V, y 1 , . . . , y m ) is a chart on M about F (p), then on V

ω = Σ_I aI dy i1 ∧ · · · ∧ dy ik

where I = (i1 < · · · < ik ) and aI ∈ C ∞ (V ). Since the pullback distributes across the
wedge product,

F ∗ ω = Σ_I (F ∗ aI ) F ∗ dy i1 ∧ · · · ∧ F ∗ dy ik
      = Σ_I (aI ◦ F ) dF i1 ∧ · · · ∧ dF ik               (base case k = 0)
dF ∗ ω = Σ_I d(aI ◦ F ) ∧ dF i1 ∧ · · · ∧ dF ik .
I

On the other hand, again by distributivity,

F ∗ dω = F ∗ ( Σ_I daI ∧ dy i1 ∧ · · · ∧ dy ik )
       = Σ_I F ∗ daI ∧ F ∗ dy i1 ∧ · · · ∧ F ∗ dy ik
       = Σ_I d(F ∗ aI ) ∧ dF i1 ∧ · · · ∧ dF ik            (base case k = 0)
       = Σ_I d(aI ◦ F ) ∧ dF i1 ∧ · · · ∧ dF ik .

Comparing the two computations, dF ∗ ω = F ∗ dω.

Corollary 5.3.4.1
If U ⊆ M is open and ω ∈ Ωk (M ), then

(dω)|U = d(ω|U ).

Proof
Let ι : U → M be the inclusion map. We have ω|U = ι∗ ω so that

(dω)|U = ι∗ dω = dι∗ ω = d(ω|U ).

Example 5.3.5
Let U = (0, ∞) × (0, 2π) in the (r, θ)-plane R2 . Define F : U ⊆ R2 → R2 by F (r, θ) =
(r cos θ, r sin θ). Let x, y be the standard coordinates on the target R2 . We compute
F ∗ (dx ∧ dy).

F ∗ dx = dF ∗ x = d(x ◦ F ) = d(r cos θ) = cos θ dr − r sin θ dθ
F ∗ dy = dF ∗ y = d(r sin θ) = sin θ dr + r cos θ dθ.

Since the pullback commutes with the wedge product,


F ∗ (dx ∧ dy) = (F ∗ dx) ∧ (F ∗ dy)
= (cos θdr − r sin θdθ) ∧ (sin θdr + r cos θdθ)
= (r cos2 θ + r sin2 θ)dr ∧ dθ
= rdr ∧ dθ.
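
For readers who like to verify such pullback computations mechanically, here is a short sketch (my own, not from the text): for a smooth map of R2 , F ∗ (dx ∧ dy) is the Jacobian determinant of F times dr ∧ dθ, and sympy recovers the factor r.

```python
# Sketch: pull back the area form dx ∧ dy along F(r, θ) = (r cos θ, r sin θ).
# For a map R^2 → R^2, F*(dx ∧ dy) = det(Jacobian of F) dr ∧ dθ.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
F = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])
J = F.jacobian([r, theta])

print(sp.simplify(J.det()))   # r, so F*(dx ∧ dy) = r dr ∧ dθ
```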

Proposition 5.3.6

If F : N → M is a smooth map of manifolds and ω is a smooth k-form on M , then F ∗ ω


is a smooth k-form on N .

Proof
Fix p ∈ N . We show that there is a neighborhood of p on which F ∗ ω is smooth.
Choose a chart (V, y 1 , . . . , y m ) on M about F (p). Let F i = y i ◦ F be the i-th coordi-
nate of the map F in this chart. By the continuity (smoothness) of F , there is a chart

(U, x1 , . . . , xn ) on N about p such that F (U ) ⊆ V . Because ω is smooth on V ,

ω = Σ_I aI dy i1 ∧ · · · ∧ dy ik

for some aI ∈ C ∞ (V ). By the properties of the pullback,

F ∗ ω = Σ_I (F ∗ aI ) F ∗ (dy i1 ) ∧ · · · ∧ F ∗ (dy ik )
      = Σ_I (F ∗ aI ) dF ∗ y i1 ∧ · · · ∧ dF ∗ y ik
      = Σ_I (aI ◦ F ) dF i1 ∧ · · · ∧ dF ik
      = Σ_{I,J} (aI ◦ F ) [ ∂(F i1 , . . . , F ik )/∂(xj1 , . . . , xjk ) ] dxJ .

Since aI ◦ F and ∂(F i1 , . . . , F ik )/∂(xj1 , . . . , xjk ) are all smooth, we see that F ∗ ω is smooth
as desired.
In summary, if F : N → M is a smooth map of manifolds, then the pullback map F ∗ : Ω∗ (M ) →
Ω∗ (N ) is a morphism of differential graded algebras, ie a degree-preserving algebra homo-
morphism that commutes with the differential.

5.3.6 Restriction of k-Forms to a Submanifold


x
The restriction of a k-form to an immersed submanifold is just like the restriction of a 1-form,
but with k arguments. Let S be a regular submanifold of the manifold M . If ω is a k-form
on M , then the restriction of ω to S is the k-form ω|S on S given by
(ω|S )p (v1 , . . . , vk ) = ωp (v1 , . . . , vk )
for any v1 , . . . , vk ∈ Tp S ⊆ Tp M . Thus (ω|S )p is obtained from ωp by restricting the domain
of ωp to ×_{i=1}^k Tp S. As before, the restriction of k-forms is the same as the pullback under the
inclusion map ι : S → M .

We remark that a nonzero form on M may restrict to the zero form on a submanifold S.
For example, if S is a smooth curve in R2 cut out by the vanishing of a nonconstant smooth
function f (x, y), then

df = (∂f /∂x) dx + (∂f /∂y) dy

is a nonzero 1-form on R2 , but since f is identically zero on S, df ≡ 0 on S.
Since pullbacks and exterior differentiation commute, we may write df |S to denote either of
(df )|S and d(f |S ), as the two agree.

5.3.7 A Nowhere-Vanishing 1-Form on the Circle

Recall that −ydx+xdy is a nowhere-vanishing 1-form on the unit circle. As an application of


the exterior derivative, we construct a different nowhere-vanishing 1-form on the circle. This
construction generalizes to the construction of a nowhere-vanishing top form on a smooth
hypersurface in Rn+1 , ie a regular level set of a smooth function f : Rn+1 → R. As we will
see, the existence of a nowhere-vanishing top form is intimately related to orientations on a

manifold.
At p = (1, 0), a basis for the tangent space Tp S 1 is ∂/∂y. Although dx is a nowhere-vanishing
1-form on R2 , it vanishes at (1, 0) when restricted to S 1 , as

(dx)p (∂/∂y|p ) = 0.

In order to find a nowhere-vanishing 1-form on S 1 , we take the exterior derivative of both
sides of the equation
x2 + y 2 = 1.
We get
2xdx + 2ydy = 0.
Note that this equation is valid only at points of S 1 . Define

Ux := {(x, y) ∈ S 1 : x ≠ 0}
Uy := {(x, y) ∈ S 1 : y ≠ 0}.
x
By our calculations, on Ux ∩ Uy ,

dy/x = −dx/y.

Define a 1-form ω on S 1 by

ωp = dy/x for p ∈ Ux ,      ωp = −dx/y for p ∈ Uy .

This is a well-defined 1-form on Ux ∪ Uy by construction, as the two 1-forms agree on Ux ∩ Uy .



In order to show that ω is smooth and nowhere-vanishing, we need charts. Define

Ux+ := {(x, y) ∈ S 1 : x > 0}

and similarly Ux− , Uy+ , Uy− . On Ux+ , y is a local coordinate and so dy is a basis for the
cotangent space Tp∗ S 1 at each p ∈ Ux+ . Since ω = dy/x on Ux+ , ω is smooth and nowhere
zero on Ux+ . A similar argument applies to dy/x on Ux− and −dx/y on Uy+ , Uy− . Hence ω is
smooth and nowhere vanishing on S 1 .
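
As an additional informal check (mine, not in the text), evaluating ω on the tangent field −y ∂/∂x + x ∂/∂y of S 1 gives the constant function 1 on both Ux and Uy , which again shows that ω is nowhere vanishing; the tiny sympy sketch below carries this out, with hypothetical helper names (the relation x² + y² = 1 is not even needed for this particular computation).

```python
# Sketch: evaluate ω on the tangent field X = -y ∂/∂x + x ∂/∂y of the circle.
# On Ux, ω = dy/x; on Uy, ω = -dx/y.  A 1-form a dx + b dy sends X to a*X1 + b*X2.
import sympy as sp

x, y = sp.symbols('x y')
X = (-y, x)                       # components of the tangent field

def evaluate(a, b, v):
    # (a dx + b dy)(v) for v = (v1, v2)
    return sp.simplify(a * v[0] + b * v[1])

print(evaluate(0, 1 / x, X))      # ω(X) on Ux: 1
print(evaluate(-1 / y, 0, X))     # ω(X) on Uy: 1
```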

5.4 The Lie Derivative & Interior Multiplication
The exterior differentiation d was first locally defined with respect to a chart. It turns out
that d is in fact global and intrinsic to the manifold. We seek to derive a global intrinsic
formula for the exterior derivative of a k-form, such as the following:
(dω)(X, Y ) = Xω(Y ) − Y ω(X) − ω([X, Y ]).

The proof uses the Lie derivative and interior multiplication, two other intrinsic operations
on a manifold. The Lie derivative allows us to differentiate a vector field or a differential
form on a manifold along another vector field. For any vector field X on a manifold, the
interior multiplication ιX is an antiderivation of degree −1 on differential forms.

5.4.1 Families of Vector Fields and Differential Forms

A collection {Xt } or {ωt } of vector fields or differential forms on a manifold is said to be
a 1-parameter family if the parameter t runs over some subset of R. Let I ⊆ R be an
open interval and suppose {Xt } is a 1-parameter family of vector fields on M defined for all
t ∈ I \ {t0 } for some t0 ∈ I. We say that the limit

lim_{t→t0} Xt

exists if every point p ∈ M has a coordinate neighborhood (U, x1 , . . . , xn ) on which
Xt |p = Σ_i ai (t, p) ∂/∂xi |p and lim_{t→t0} ai (t, p) exists for all i. In this case, we define

lim_{t→t0} Xt |p := Σ_{i=1}^n ( lim_{t→t0} ai (t, p) ) ∂/∂xi |p .
It can be shown that this definition of the limit of Xt as t → t0 is independent of the choice
of the coordinate neighborhood (U, x1 , . . . , xn ), as there is a smooth change of coordinates.
A 1-parameter family {Xt }t∈I of smooth vector fields on M is said to depend smoothly on t
if every p ∈ M has a coordinate neighborhood (U, x1 , . . . , xn ) on which

(Xt )p = Σ_i ai (t, p) ∂/∂xi |p

for (t, p) ∈ I × U and smooth functions ai on I × U . In this case we also say that {Xt }t∈I is
a smooth family of vector fields on M .
For a smooth family of vector fields on M , one can define its derivative with respect to t at
t = t0 by

( d/dt|_{t=t0} Xt )p = Σ_i (∂ai /∂t)(t0 , p) ∂/∂xi |p

for (t0 , p) ∈ I × U . It can be shown that this definition is independent of the chart
(U, x1 , . . . , xn ) containing p by considering a smooth change of coordinates. Indeed, let
(V, y 1 , . . . , y n ) be another coordinate neighborhood of p such that

Xt = Σ_j bj (t, q) ∂/∂y j

on V . On the intersection U ∩ V ,

∂/∂xi = Σ_j (∂y j /∂xi ) ∂/∂y j .

It follows that

bj (t, p) = Σ_i ai (t, p) ∂y j /∂xi .

Differentiating both sides with respect to t yields

∂bj /∂t = Σ_i (∂ai /∂t)(∂y j /∂xi ).

But then

Σ_j (∂bj /∂t) ∂/∂y j = Σ_{i,j} (∂ai /∂t)(∂y j /∂xi ) ∂/∂y j = Σ_i (∂ai /∂t) ∂/∂xi

as required. Observe that the derivative d/dt|t=t0 Xt is a smooth vector field on M .


Similarly, a 1-parameter family {ωt }t∈I of smooth k-forms on M is said to depend smoothly
on t if every point of M has a coordinate neighborhood (U, x1 , . . . , xn ) on which

(ωt )p = Σ_J bJ (t, p) dxJ |p

for (t, p) ∈ I × U and some smooth functions bJ on I × U . We also call such a family {ωt }t∈I
a smooth family of k-forms on M and define its derivative with respect to t to be

( d/dt|_{t=t0} ωt )p = Σ_J (∂bJ /∂t)(t0 , p) dxJ |p .
p

Similar to vector fields, this definition is independent of the chart and defines a smooth
k-form d/dt|t=t0 ωt on M .

Note that we write d/dt for the derivative of a smooth family of vector fields or differential
forms, but ∂/∂t for the partial derivative of a function of several variables.

Proposition 5.4.1 (Product Rule for d/dt)


If {ωt }, {τt } are smooth families of k-forms and ℓ-forms respectively on a manifold M ,
then

(d/dt)(ωt ∧ τt ) = ((d/dt) ωt ) ∧ τt + ωt ∧ ((d/dt) τt ).

Proof
Written out in local coordinates, the statement reduces to the usual product rule in
calculus.

Proposition 5.4.2 (Commutation of d/dt|t=t0 with d)

If {ωt } is a smooth family of differential forms on a manifold M , then

d/dt|_{t=t0} (dωt ) = d( d/dt|_{t=t0} ωt ).

Proof
We first check that

(d/dt)(dωt ) = d((d/dt) ωt )

at an arbitrary point p ∈ M . Indeed, let (U, x1 , . . . , xn ) be a neighborhood of p such that
ωt = Σ_J bJ dxJ for some smooth functions bJ on I × U . On U ,

(d/dt)(dωt )
= (d/dt) Σ_{J,i} (∂bJ /∂xi ) dxi ∧ dxJ
= Σ_{i,J} (∂/∂xi )(∂bJ /∂t) dxi ∧ dxJ        (exchange order of differentiation)
= d( Σ_J (∂bJ /∂t) dxJ )
= d((d/dt) ωt ).
∂t
 J 
d
=d ωt .
dt

Evaluating at t = t0 on both sides of the equation commutes with d as d only involves


partial derivatives with respect to the xi variables.

5.4.2 The Lie Derivative of a Vector Field

Recall the elementary calculus definition of the derivative of a real-valued function f on R


at a point p ∈ R:
f ′ (p) := lim_{t→0} [ f (p + t) − f (p) ] / t .
The issue in generalizing this to the derivative of a vector field Y on a manifold M is that

at two nearby points p, q ∈ M , the tangent vectors Yp , Yq are in different vector spaces
Tp M, Tq M , so we cannot subtract them. One way around this is to use the local flow of
another vector field X to transport Yq to the tangent space Tp M at p.
Recall that for any smooth vector field X on M , there is a neighborhood U of p on which
the vector field has a local flow: ie there is some ε > 0 and a map

ϕ : (−ε, ε) × U → M

such that if we write ϕt (q) = ϕ(t, q), then

(∂/∂t) ϕt (q) = Xϕt (q) ,    ϕ0 (q) = q,    q ∈ U.
In other words, for each q ∈ U , the curve ϕt (q) is an integral curve of X with initial point q.
By definition, ϕ0 (q) = q. The local flow also satisfies the property
ϕs ◦ ϕt = ϕs+t
whenever both sides are defined. Thus for each t, the map ϕt : U → ϕt (U ) is a diffeomor-
phism onto its image, with the smooth inverse ϕ−t . Indeed,
x
ϕ−t ◦ ϕt = ϕ0 = Id, ϕt ◦ ϕ−t = ϕ0 = Id .
eli
Let Y be a smooth vector field on M . To compare the values of Y at ϕt (p) and at p, we use
the diffeomorphism ϕ−t : ϕt (U ) → U to push Yϕt (p) into Tp M .

Definition 5.4.1 (Lie Derivative of a Vector Field)


For X, Y ∈ X(M ) and p ∈ M , let ϕ : (−ε, ε) × U → M be a local flow of X on a
neighborhood U of p. The Lie derivative LX Y of Y with respect to X at p is the
tangent vector

(LX Y )p := lim_{t→0} [ ϕ−t∗ (Yϕt (p) ) − Yp ] / t
         = lim_{t→0} [ (ϕ−t∗ Y )p − Yp ] / t
         = d/dt|_{t=0} (ϕ−t∗ Y )p .

In the definition above, the limit is taken in the finite-dimensional vector space Tp M . For
the derivative to exist, it suffices that {ϕ−t∗ Y } be a smooth family of vector fields on M .
To show the smoothness of the family, we write ϕ−t∗ Y in the local coordinates x1 , . . . xn in
a chart. Let ϕit and ϕi be the i-th components of ϕt , ϕ respectively. Then
(ϕt )i (p) = ϕi (t, p) = (xi ◦ ϕ)(t, p).
Recall that relative to the frame {∂/∂xj }, the differential ϕt∗ at p is represented by the

ou
Jacobian matrix

[ ∂(ϕt )i /∂xj (p) ] = [ ∂ϕi /∂xj (t, p) ].

Hence

ϕt∗ ( ∂/∂xj |p ) = Σ_i (∂ϕi /∂xj )(t, p) ∂/∂xi |ϕt (p) .

Thus if Y = Σ_j bj ∂/∂xj , then

ϕ−t∗ (Yϕt (p) ) = Σ_j bj (ϕ(t, p)) ϕ−t∗ ( ∂/∂xj |ϕt (p) )
              = Σ_{i,j} bj (ϕ(t, p)) (∂ϕi /∂xj )(−t, p) ∂/∂xi |p .

When X, Y are smooth vector fields on M , both ϕi , bj are smooth functions. Hence {ϕ−t∗ Y }
is indeed a smooth family of vector fields on M . It follows that the Lie derivative LX Y exists
and is given in local coordinates by

(LX Y )p = d/dt|_{t=0} ϕ−t∗ (Yϕt (p) )
        = Σ_{i,j} ∂/∂t|_{t=0} [ bj (ϕ(t, p)) (∂ϕi /∂xj )(−t, p) ] ∂/∂xi |p .

It turns out that we have already seen the Lie derivative.

Theorem 5.4.3
If X, Y are smooth vector fields on a manifold M , then the Lie derivative LX Y
©F

coincides with the Lie bracket [X, Y ].

Recall that the Lie bracket [X, Y ] at p is given by


[X, Y ]p f := (Xp Y − Yp X)f
for any germ f of a smooth function at p. Here
Xf (p) := Xp f.

Proof
We check the equality LX Y = [X, Y ] at every point by expanding both sides in local coor-
dinates. Suppose a local flow for X is given by ϕ : (−ε, ε) × U → M , where (U, x1 , . . . , xn )
is a coordinate chart. Let X = Σ_i ai ∂/∂xi and Y = Σ_j bj ∂/∂xj on U . Recall that an
integral curve c(t) of X satisfies

Xc(t) = Σ_i ai (c(t)) ∂/∂xi |c(t) ,     c′ (t) = Σ_i ċi (t) ∂/∂xi |c(t) .

The condition that ϕt (p) is an integral curve of X translates into the ODE

(∂ϕi /∂t)(t, p) = ai (ϕ(t, p)),    i = 1, . . . , n,    (t, p) ∈ (−ε, ε) × U.

The initial conditions state that at t = 0,

(∂ϕi /∂t)(0, p) = ai (ϕ(0, p)) = ai (p).

Recall that the Lie bracket in local coordinates is

[X, Y ] = Σ_{i,k} ( ak ∂bi /∂xk − bk ∂ai /∂xk ) ∂/∂xi .
x
From our calculations above,

(LX Y )p
= Σ_{i,j} ∂/∂t|_{t=0} [ bj (ϕ(t, p)) (∂ϕi /∂xj )(−t, p) ] ∂/∂xi |p
= Σ_{i,j} [ Σ_k (∂bj /∂xk )(ϕ(t, p)) (∂ϕk /∂t)(t, p) (∂ϕi /∂xj )(−t, p)
   − bj (ϕ(t, p)) (∂/∂xj )(∂ϕi /∂t)(−t, p) ]_{t=0} ∂/∂xi |p        (product rule, exchange order)
= Σ_{i,j,k} [ (∂bj /∂xk )(p) ak (p) (∂ϕi /∂xj )(0, p) − bj (p) (∂ai /∂xj )(p) ] ∂/∂xi |p .

But ϕ(0, p) = p so ϕ0 is the identity map and its Jacobian is the identity. In particular,
(∂ϕi /∂xj )(0, p) = δ^i_j

and the expression above simplifies to

(LX Y )p
= Σ_{i,j,k} [ (∂bj /∂xk )(p) ak (p) δ^i_j − bj (p) (∂ai /∂xj )(p) ] ∂/∂xi |p
= Σ_{i,k} ( ak ∂bi /∂xk − bk ∂ai /∂xk ) ∂/∂xi |p
= [X, Y ]p .
Although the Lie derivative of vector fields does not give us anything new, it is a useful
tool alongside the Lie derivative of differential forms.
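
To see the flow definition in action, the following sketch (my own illustration, using the rotation field X = −y ∂/∂x + x ∂/∂y, whose flow ϕt is rotation by the angle t) pushes a vector field Y forward by ϕ−t∗ , differentiates at t = 0, and compares the result with the coordinate formula for [X, Y ].

```python
# Sketch: check L_X Y = [X, Y] for X = -y ∂x + x ∂y on R^2,
# whose flow phi_t is rotation by angle t, so (phi_{-t})_* is the constant matrix R(-t).
import sympy as sp

t, x, y, u, v = sp.symbols('t x y u v')

def R(s):                                    # rotation by angle s
    return sp.Matrix([[sp.cos(s), -sp.sin(s)], [sp.sin(s), sp.cos(s)]])

X = sp.Matrix([-y, x])                       # rotation field
Y = sp.Matrix([x**2, x * y])                 # an arbitrary polynomial vector field

flow = R(t) * sp.Matrix([x, y])              # phi_t(x, y)
Y_at_flow = Y.subs([(x, u), (y, v)]).subs([(u, flow[0]), (v, flow[1])])
pushed = R(-t) * Y_at_flow                   # (phi_{-t})_* (Y at phi_t(p))
LXY = pushed.diff(t).subs(t, 0)

# coordinate formula: [X, Y]^i = X^j ∂_j Y^i - Y^j ∂_j X^i
bracket = Y.jacobian([x, y]) * X - X.jacobian([x, y]) * Y
print((LXY - bracket).applyfunc(sp.simplify))   # Matrix([[0], [0]])
```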

Zh
5.4.3 The Lie Derivative of a Differential Form

Let X be a smooth vector field and ω a smooth k-form on a manifold M . Fix p ∈ M and let
ϕt : U → M be a flow of X in a neighborhood U of p. The definition of the Lie derivative of
a differential form is similar to that of the Lie derivative of a vector field. Instead of pushing
a tangent vector at ϕt (p) to p via (ϕ−t )∗ , we now pull the k-covector ωϕt (p) back to p via ϕ∗t .

Definition 5.4.2 (Lie Derivative of a Differential Form)


Let X be a smooth vector field and ω a smooth k-form on a manifold M . The Lie
derivative LX ω at p ∈ M is

(LX ω)p = lim_{t→0} [ ϕ∗t (ωϕt (p) ) − ωp ] / t
        = lim_{t→0} [ (ϕ∗t ω)p − ωp ] / t
        = d/dt|_{t=0} (ϕ∗t ω)p .

By a similar argument to the case of vector fields, it can be shown that {ϕ∗t ω} is a smooth

family of k-forms on M by expressing it in local coordinates. The existence of (LX ω)p is


thus guaranteed.

Proposition 5.4.4
Let f be a smooth function and X be a smooth vector field on M . Then LX f = Xf .

Proof
Fix p ∈ M and let ϕt : U → M be a local flow of X as above. Since ϕt (p) is a curve

through p with initial vector Xp ,

(LX f )p := d/dt|_{t=0} (ϕ∗t f )(p) = d/dt|_{t=0} (f ◦ ϕt )(p) = Xp f.

5.4.4 Interior Multiplication

Zh
We first define interior multiplication on a vector space.

Definition 5.4.3 (Interior Multiplication)


If β is a k-covector on a vector space V and v ∈ V , for k ≥ 2 the interior multiplication
/ contraction of β with v is the (k − 1)-covector ιv β defined by

(ιv β)(v2 , . . . , vk ) = β(v, v2 , . . . , vk )

for v2 , . . . , vk ∈ V .
x
We define ιv β = β(v) ∈ R for a 1-covector β on V and ιv β = 0 for a 0-covector (constant) β
on V .

Proposition 5.4.5
eli
For 1-covectors α1 , . . . , αk on a vector space V and v ∈ V ,

ιv (α1 ∧ · · · ∧ αk ) = Σ_{i=1}^k (−1)^{i−1} αi (v) α1 ∧ · · · ∧ α̂i ∧ · · · ∧ αk .

Here the hat over αi indicates that αi is omitted from the wedge product.

Recall that

(α1 ∧ · · · ∧ αk )(v1 , . . . , vk ) = Σ_{σ∈Sk} (sgn σ) α1 (vσ(1) ) · · · αk (vσ(k) ) = det[ αi (vj ) ].

Proof
By computation,

( ιv (α1 ∧ · · · ∧ αk ) )(v2 , . . . , vk )
= (α1 ∧ · · · ∧ αk )(v, v2 , . . . , vk )
= det[ αi (vj ) ]                                                   (v1 = v)
= Σ_{i=1}^k (−1)^{i+1} αi (v) det[ αℓ (vj ) ]_{ℓ≠i, j≥2}            (expansion along the 1st column)
= Σ_{i=1}^k (−1)^{i+1} αi (v) (α1 ∧ · · · ∧ α̂i ∧ · · · ∧ αk )(v2 , . . . , vk ).

Proposition 5.4.6
Fix a vector v in a vector space V . Let ιv : Λ∗ (V ∨ ) → Λ∗−1 (V ∨ ) be interior multiplica-
tion by v. Then
(i) ιv ◦ ιv = 0
(ii) for β ∈ Λk (V ∨ ) and γ ∈ Λℓ (V ∨ ),

ιv (β ∧ γ) = (ιv β) ∧ γ + (−1)k β ∧ ιv γ.

Thus ιv is an antiderivation of degree −1 whose square is 0.

Proof
x
(i) Let β ∈ Λk (V ∨ ). By the definition of interior multiplication,

(ιv (ιv β))(v3 , . . . , vk ) = (ιv β)(v, v3 , . . . , vk ) = β(v, v, v3 , . . . , vk ) = 0.

The last equality follows as β is alternating and there is a repeated argument v.


(ii) Recall that the wedge product and interior multiplication are both linear in their argu-
ments. Hence it suffices to consider the case where

β = α1 ∧ · · · ∧ αk ,    γ = αk+1 ∧ · · · ∧ αk+ℓ ,

where the αi ’s are all 1-covectors. Then

ιv (β ∧ γ)
= ιv (α1 ∧ · · · ∧ αk+ℓ )
= Σ_{i=1}^{k+ℓ} (−1)^{i−1} αi (v) α1 ∧ · · · ∧ α̂i ∧ · · · ∧ αk+ℓ
= ( Σ_{i=1}^{k} (−1)^{i−1} αi (v) α1 ∧ · · · ∧ α̂i ∧ · · · ∧ αk ) ∧ αk+1 ∧ · · · ∧ αk+ℓ
  + (−1)^k α1 ∧ · · · ∧ αk ∧ ( Σ_{i=1}^{ℓ} (−1)^{i−1} αk+i (v) αk+1 ∧ · · · ∧ α̂k+i ∧ · · · ∧ αk+ℓ )
= (ιv β) ∧ γ + (−1)^k β ∧ ιv γ.
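
The alternating-sum formula of Proposition 5.4.5 is easy to test numerically by realizing the wedge through det[αi (vj )]; the sketch below (my own, with arbitrarily chosen integer data in R3 ) compares both sides.

```python
# Sketch: check  ι_v(α^1 ∧ α^2 ∧ α^3)(v2, v3)
#              = Σ_i (-1)^{i-1} α^i(v) (α^1 ∧ ... α̂^i ... ∧ α^3)(v2, v3),
# using (α^1 ∧ ... ∧ α^k)(v_1, ..., v_k) = det[α^i(v_j)].
import sympy as sp

alphas = [sp.Matrix([[1, 2, 0]]), sp.Matrix([[0, 1, 3]]), sp.Matrix([[2, 0, 1]])]  # 1-covectors (rows)
v, v2, v3 = sp.Matrix([1, 1, 2]), sp.Matrix([0, 1, 1]), sp.Matrix([3, 1, 0])       # vectors (columns)

def wedge(covs, vecs):
    return sp.Matrix(len(covs), len(vecs), lambda i, j: (covs[i] * vecs[j])[0]).det()

lhs = wedge(alphas, [v, v2, v3])            # (ι_v β)(v2, v3) with β = α^1 ∧ α^2 ∧ α^3
rhs = sum((-1)**i * (alphas[i] * v)[0] * wedge(alphas[:i] + alphas[i+1:], [v2, v3])
          for i in range(3))
print(lhs == rhs)    # True
```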

Zh
Interior multiplication on a manifold is defined pointwise. If X is a smooth vector field on
M and ω ∈ Ωk (M ), then ιX ω is the (k − 1)-form defined by

(ιX ω)p := ιXp ωp

for all p ∈ M . The form ιX ω on M is smooth since for any smooth vector fields X2 , . . . , Xk
on M ,
(ιX ω)(X2 , . . . , Xk ) = ω(X, X2 , . . . , Xk )
is a smooth function on M . In the case that ω is a 1-form, ιX (ω) = ω(X). If ω = f is a
0-form (function) on M , then ιX f = 0. By the properties of interior multiplication at each
point p ∈ M , the map ιX : Ω∗ (M ) → Ω∗ (M ) is an antiderivation of degree −1 such that
x
ιX ◦ ιX = 0.
Let F denote the ring C ∞ (M ) of smooth functions on the manifold M . As ιX ω is a point
operator, ie its value at p depends only on Xp , ωp , it is F-linear in either argument. Thus
eli
ιX ω is additive in each argument.

Proposition 5.4.7
For any f ∈ F ,
(i) ιf X ω = f ιX ω
(ii) ιX (f ω) = f ιX ω
©F

Proof
We omit the proof of (ii) as it is similar.

(ιf X ω)p = ιf (p)Xp ωp


= f (p)ιXp ωp
= (f ιX ω)p .

Example 5.4.8 (Interior Multiplication on R2 )
Let X = x∂/∂x + y∂/∂y be the radial vector field and α = dx ∧ dy the area 2-form on
the plane R2 . We compute the contraction ιX α.
Firstly,

ιX dx = dx(X) = x

ιX dy = y.

By the antiderivation property of ιX ,

ιX α = ιX (dx ∧ dy)
= (ιX dx)dy − dx(ιX dy)
= xdy − ydx.

This restricts to the nowhere-vanishing 1-form ω on the circle S 1 .

5.4.5 Properties of the Lie Derivative

We state and prove several basic properties of the Lie derivative, including its relation to
exterior derivation and interior multiplication, two other intrinsic operations on a manifold.

Theorem 5.4.9
Let X be a smooth vector field on a manifold M .
x
(i) The Lie derivative LX : Ω∗ (M ) → Ω∗ (M ) is a derivation, ie it is an R-linear
map such that for all ω ∈ Ωk (M ) and τ ∈ Ω` (M ),
eli
LX (ω ∧ τ ) = (LX ω) ∧ τ + ω ∧ (LX τ ).

(ii) The Lie derivative LX commutes with the exterior derivative d.


(iii) (Cartan homotopy formula) LX = dιX + ιX d.
(iv) (“Product” formula) For all ω ∈ Ωk (M ) and Y1 , . . . , Yk ∈ X(M ),

LX (ω(Y1 , . . . , Yk )) = (LX ω)(Y1 , . . . , Yk ) + Σ_{i=1}^k ω(Y1 , . . . , LX Yi , . . . , Yk ).

Recall the product rule for smooth families {ωt }, {τt } of k-forms and ℓ-forms:

(d/dt)(ωt ∧ τt ) = ((d/dt) ωt ) ∧ τt + ωt ∧ ((d/dt) τt ).

Proof (i)
Let p ∈ M and ϕt : U → M a local flow of X in a neighborhood U ∋ p.
The Lie derivative LX is the d/dt of a vector-valued function of t. Thus the derivation
property is really just the product rule for smooth families of differential forms:

d
(LX (ω ∧ τ ))p = (ϕ∗ (ω ∧ τ ))p

ou
dt t=0 t
d
= (ϕ∗ ω)p ∧ (ϕ∗t τ )p
dt t=0 t
= (LX ω)p ∧ τp + ωp ∧ (LX τ )p .
Recall that the exterior derivative commutes with the pullback by a smooth functions as
well with d/dt of a smooth family of differential forms.

Zh
Proof (ii)
Let p ∈ M and ϕt : U → M a local flow of X in a neighborhood U 3 p.

d
LX dω := ϕ∗ dω
dt t=0 t
d
= dϕ∗ ω
dt t=0 t
 
d ∗
=d ϕω
dt t=0 t
x
= dLX ω.
Recall that if A, B are both superderivations of degree m1 , m2 , then AB − (−1)m1 m2 BA is a
superderivation of degree m1 +m2 . In particular, if A, B are antiderivations (superderivations
eli
of odd degree), then AB + BA is a derivation of degree m1 + m2 .
Also recall from an earlier proposition that
LX f = Xf
for any f ∈ C (M ) and X ∈ X(X).

©F

Proof (iii)
Let p ∈ M and ϕt : U → M a local flow of X in a neighborhood U 3 p. We claim that it
suffices to check that
LX f = (dιX + ιX d)f
for any f ∈ C ∞ (U ).
Indeed, for any ω ∈ Ωk (M ), it suffices to check that at any p ∈ M , LX ω = (dιX +
ιX d)ω. By shrinking U if necessary, we may assume we have a coordinate neighborhood

(U, x1 , . . . , xn ). Moreover, by linearity, we may further assume that ω is a wedge product
ω = f dxi1 ∧· · ·∧dxik . Next, the LHS is a derivation by (i) and the RHS is also a derivation
(superderivation of even degree). By (ii), the LHS commutes with exterior derivation and
so does the RHS:
d(dιX + ιX d) = dιX d = (dιX + ιX d)d.
thus both sides of the Cartan homotopy formula are derivations that commute with d.

ou
Thus if the formula holds for two differential forms ω, τ , then it holds for the wedge
product ω ∧ τ and dω. It follows that the reduction above is justified.
We conclude the proof by verifying for f ∈ C ∞ (U ) that

(dιX + ιX d)f = ιX df ιX f = 0
:= (df )(X)

Zh
= Xf
= LX f.
Recall that {ϕ−t∗ Y } is a smooth family of vector fields for each Y ∈ X(M ).
Also recall that ϕ−t∗ (Yϕt (p) ) is smooth and hence continuous at a neighborhood of (0, p).

Proof (iv)
Let p ∈ M and ϕt : U → M a local flow of X in a neighborhood U 3 p. The proof is
similar to that of the standard product rule for the calculus derivative and we focus on
the case of k = 2 for the sake of simplicity as the general case is similar but more tedious.
x
By the old trick of adding and subtracting terms,

(LX (ω(Y, Z)))p


eli
(ϕ∗ (ω(Y, Z)))p − (ω(Y, Z))p
= lim t
t→0 t
ωϕt (p) (Yϕt (p) , Zϕt (p) ) − ωp (Yp , Zp )
= lim
t→0 t
ωϕ (p) (Yϕt (p) , Zϕt (p) ) − ωp (ϕ−t∗ (Yϕt (p) ), ϕ−t∗ (Zϕt (p) ))
= lim t
t→0 t
©F

ωp (ϕ−t∗ (Yϕt (p) ), ϕ−t∗ (Zϕt (p) )) − ωp (Yp , ϕ−t∗ (Zϕt (p) ))
+ lim
t→0 t
ωp (Yp , ϕ−t∗ (Zϕt (p) )) − ωp (Yp , Zp )
+ lim .
t→0 t

We rewrite the first limit in the summation as
(ϕ∗t ωϕt (p) )(ϕ−t∗ (Yϕt (p) ), ϕ−t∗ (Zϕt (p) )) − ωp (ϕ−t∗ (Yϕt (p) ), ϕ−t∗ (Zϕt (p) ))
t

ϕ (ωϕt (p) ) − ωp
= t (ϕ−t∗ (Yϕt (p) ), ϕ−t∗ (Zϕt (p) ))
t
→ (LX ω)p (Yp , Zp ). t→0

ou
Here the limit is justified as both the operator and arguments have limits.
By the bilinearity of ωp , the second term is

ϕ−t∗ (Yϕt (p) ) − Yp


 
lim ωp , ϕ−t∗ (Zϕt (p) )
t→0 t
= ωp ((LX Y )p , Zp ).

Zh
Finally, by a similar calculation, the third term is given by ωp (Yp , (LX Z)p ).

Remark 5.4.10 Unlike interior multiplication, the Lie derivative LX is not F-linear in
either argument. By the derivation property,

LX (f ω) = (LX f )ω + f LX ω = (Xf )ω + f LX ω.

We note that the previous theorem can be used to compute the Lie derivative of a differential
form.
x
Example 5.4.11 (The Lie Derivative on a Circle)
Let ω be the 1-form −ydx + xdy and X the tangent vector field −y∂/∂x + x∂/∂y on the
eli
unit circle S 1 . We have

LX (−ydx) = −(LX y)dx − yLX dx derivation


= −(Xy)dx − ydLX x
= −xdx − yd(Xx)
= −xdx + ydy
©F

LX (xdy) = −ydy + xdx


LX ω = 0.
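
As an independent cross-check (my own sketch, carried out on R2 rather than on S 1 ), one can also compute LX ω through the Cartan homotopy formula LX = dιX + ιX d, representing a 1-form by its two components and a 2-form by its single coefficient; the result is again 0, consistent with the example.

```python
# Sketch: L_X ω = d(ι_X ω) + ι_X(dω) for ω = -y dx + x dy and X = -y ∂x + x ∂y on R^2.
import sympy as sp

x, y = sp.symbols('x y')
X = (-y, x)                     # components of the vector field
a, b = -y, x                    # ω = a dx + b dy

iX_omega = a * X[0] + b * X[1]                               # ι_X ω = ω(X), a function
d_iX_omega = (sp.diff(iX_omega, x), sp.diff(iX_omega, y))    # its differential, as a 1-form

d_omega = sp.diff(b, x) - sp.diff(a, y)          # dω = (∂b/∂x - ∂a/∂y) dx ∧ dy
iX_d_omega = (-d_omega * X[1], d_omega * X[0])   # ι_X(c dx∧dy) = -c X^2 dx + c X^1 dy

LX_omega = tuple(sp.simplify(u + v) for u, v in zip(d_iX_omega, iX_d_omega))
print(LX_omega)                                  # (0, 0), so L_X ω = 0
```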

5.4.6 Global Formulas for the Lie and Exterior Derivatives

The definition of the Lie derivative only makes sense in a neighborhood of a point as it is
local. The product formula gives us access to a global formula for the Lie derivative.

Theorem 5.4.12 (Global Formula for the Lie Derivative)
For a smooth k-form ω and smooth vector fields X, Y1 , . . . , Yk on a manifold M ,

(LX ω)(Y1 , . . . , Yk ) = X(ω(Y1 , . . . , Yk )) − Σ_{i=1}^k ω(Y1 , . . . , [X, Yi ], . . . , Yk ).

ou
The definition of the exterior derivative d is also local. Using the Lie derivative, we obtain
a useful global formula for the exterior derivative. We begin with the case of a 1-form.

Proposition 5.4.13
If ω is a smooth 1-form and X, Y are smooth vector fields on a manifold M , then

dω(X, Y ) = Xω(Y ) − Y ω(X) − ω([X, Y ]).

Zh
Proof
It suffices to check the formula in a chart (U, x1 , . . . , xn ), thus we assume without loss
of generality that ω = Σ_i ai dxi . Since both sides of the equation are R-linear in ω, we
further assume that ω = f dg where f, g ∈ C ∞ (U ).
In this case,
dω = d(f dg) = df ∧ dg
and
dω(X, Y ) = df (X)dg(Y ) − df (Y )dg(X) = (Xf )Y g − (Y f )Xg.
x
On the other hand,

Xω(Y ) = X(f dg(Y )) = X(f Y g) = (Xf )Y g + f XY g,


eli
Y ω(X) = Y (f dg(X)) = Y (f Xg) = (Y f )Xg + f Y Xg,
ω([X, Y ]) = f dg([X, Y ]) = f (XY − Y X)g.

This concludes the proof.
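
The k = 1 global formula can be stress-tested symbolically; the sketch below (mine, with arbitrarily chosen ω, X, Y on R2 ) confirms dω(X, Y ) = Xω(Y ) − Y ω(X) − ω([X, Y ]).

```python
# Sketch: check dω(X,Y) = Xω(Y) - Yω(X) - ω([X,Y]) on R^2 for ω = a dx + b dy.
import sympy as sp

x, y = sp.symbols('x y')
a, b = x**2 * y, sp.sin(x) + y                    # components of ω
X, Y = sp.Matrix([y, x * y]), sp.Matrix([x + y, x**3])

def omega(V):                                     # ω(V) as a function
    return a * V[0] + b * V[1]

def apply_field(V, f):                            # V f = V^1 ∂f/∂x + V^2 ∂f/∂y
    return V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)

bracket = Y.jacobian([x, y]) * X - X.jacobian([x, y]) * Y     # [X, Y]

lhs = (sp.diff(b, x) - sp.diff(a, y)) * (X[0] * Y[1] - X[1] * Y[0])   # dω(X, Y)
rhs = apply_field(X, omega(Y)) - apply_field(Y, omega(X)) - omega(bracket)
print(sp.simplify(lhs - rhs))    # 0
```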

Theorem 5.4.14 (Global Formula for the Exterior Derivative)


Fix k ≥ 1. For a smooth k-form ω and smooth vector fields Y0 , Y1 , . . . , Yk on a
©F

manifold M ,

(dω)(Y0 , . . . , Yk )
= Σ_{i=0}^k (−1)^i Yi ω(Y0 , . . . , Ŷi , . . . , Yk ) + Σ_{0≤i<j≤k} (−1)^{i+j} ω([Yi , Yj ], Y0 , . . . , Ŷi , . . . , Ŷj , . . . , Yk ).

We have already shown the case of k = 1. Assuming the formula for degrees k − 1, the case
of degree k can be shown by induction. Indeed, by the Cartan homotopy formula,

(dω)(Y0 , Y1 , . . . , Yk ) = (ιY0 dω)(Y1 , . . . , Yk )


= (LY0 ω)(Y1 , . . . , Yk ) − (dιY0 ω)(Y1 , . . . , Yk ).

The first term can be computed using the global formula for the Lie derivative LY0 ω while

the second term can be computed using the induction hypothesis.

Zh
x
eli
©F

Chapter 6

ou
Integration

Zh
On a manifold, we integrate differential forms rather than functions. We focus on the
integration of smooth forms over a submanifold. Note that it is nonetheless possible to
integrate noncontinuous forms over more general sets.
For integration over a manifold to be well-defined, the manifold must be oriented. We begin
by discussing orientations on a manifold and enlarge the category of manifolds to include
manifolds with boundary. Our treatment of integration culminates in Stokes’ theorem for
an n-dimensional manifold.

6.1 Orientations
x
Our goal is to define orientations for n-manifolds and to investigate various equivalent char-
acterizations of orientations.
eli

6.1.1 Orientations of a Vector Space

For this
 segment, we assume all vector spaces are finite-dimensional. Two ordered bases
u = u1 . . . un and v = v1 . . . vn of a vector space V are equivalent, written u ∼ v, if

©F

u = vA for some n × n matrix with positive determinant.

Definition 6.1.1 (Orientation)


An orientation of a vector space V is an equivalence class of ordered bases.

Note that any finite-dimensional vector space has exactly two orientations. If µ is an ori-
entation of a finite-dimensional vector space V , we denote the other orientation by −µ and
call it the opposite of the orientation µ.

By convention, we define an orientation on the zero-dimensional vector space to be one of
two signs +, −.
We typically write v1 , . . . , vn for a basis in a vector space. We enclose the basis (v1 , . .. , vn )
if it is an ordered basis or alternatively we write it in matrix notation v1 . . . vn . An
orientation is denoted [(v1 , . . . , vn )] where the square brackets now stand for an equivalence
class.

ou
6.1.2 Orientations & n-Covectors

Rather than an ordered basis, we can also useVnan n-covector to specify an orientation. This
approach is based on the fact that the space (V ) of n-covectors on V is one-dimensional.

Zh
Lemma 6.1.1
Let u1 , . . . , un and v1 , . . . , vn be vectors in a vector space V . Suppose

uj = Σ_{i=1}^n a^i_j vi

for some matrix A = [aij ] ∈ Rn×n . If β is an n-covector on V , then

β(u1 , . . . , un ) = (det A)β(v1 , . . . , vn ).


x
Proof
By computation,

β(u1 , . . . , un ) = β( Σ_{i1} vi1 a^{i1}_1 , . . . , Σ_{in} vin a^{in}_n )
= Σ_{i1 ,...,in} a^{i1}_1 · · · a^{in}_n β(vi1 , . . . , vin )
= Σ_{σ∈Sn} a^{σ(1)}_1 · · · a^{σ(n)}_n β(vσ(1) , . . . , vσ(n) )            (β is alternating)
= Σ_{σ∈Sn} (sgn σ) a^{σ(1)}_1 · · · a^{σ(n)}_n β(v1 , . . . , vn )          (β is alternating)
= (det A) β(v1 , . . . , vn ).
It follows immediately that, for ordered bases (v1 , . . . , vn ) and (u1 , . . . , un ), the values β(u1 , . . . , un ) and
β(v1 , . . . , vn ) have the same sign if and only if det A > 0.
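
A quick numerical illustration of the lemma (my own sketch): take β to be the determinant 3-covector on R3 , pick a basis v1 , v2 , v3 and a change-of-basis matrix A with uj = Σ_i a^i_j vi , and compare β(u1 , u2 , u3 ) with (det A) β(v1 , v2 , v3 ).

```python
# Sketch: β(u_1,...,u_n) = (det A) β(v_1,...,v_n) for the determinant n-covector on R^3.
import sympy as sp

V = sp.Matrix([[1, 0, 2], [0, 1, 1], [3, 0, 1]])   # columns v_1, v_2, v_3
A = sp.Matrix([[2, 1, 0], [0, 1, 4], [1, 0, 1]])   # change of basis: u_j = Σ_i a^i_j v_i
U = V * A                                           # columns u_1, u_2, u_3

beta = lambda M: M.det()                            # β(w_1, w_2, w_3) := det[w_1 w_2 w_3]
print(beta(U) == A.det() * beta(V))                 # True
```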
We say that the n-covector β determines / specifies the orientation (v1 , . . . , vn ) if β(v1 , . . . , vn ) >

0. The previous lemma asserts that this is well-defined. Moreover, we see that two n-
covectors β, β 0 on V determine the same orientation if and only if β = aβ 0 for some a > 0.
We define an equivalence relation on the non-zero n-covectors on V by setting β ∼ β 0 if
they differ by a positive constant. Thus we alternatively describe an orientation of V by an
equivalence class of non-zero n-covectors.
A linear isomorphism Λn (V ∨ ) ≃ R identifies the set of non-zero n-covectors with R − {0},
which has two connected components, each of which determines an orientation of V .

ou
Example 6.1.2
Let e1 , e2 be the standard basis for R2 and α1 , α2 its dual basis. Then the 2-covector
α1 ∧ α2 determines the counterclockwise orientation since

(α1 ∧ α2 )(e1 , e2 ) = 1 > 0.

Zh
Example 6.1.3
Let ∂/∂x|p , ∂/∂y|p be the standard basis for Tp R2 and (dx)p , (dy)p its dual basis. Then
(dx ∧ dy)p determines the counterclockwise orientation of Tp R2 .

6.1.3 Orientations on a Manifold


x
To orient a manifold M , we orient the tangent space at each point in M in a “coherent” way.
eli
Recall that a frame on an open set U ⊆ M is an n-tuple of (possibly discontinuous) vector
fields on U such that at every p ∈ U , the n-tuple (X1,p , . . . , Xn,p ) is an ordered basis for
Tp M . A global frame is a frame defined on all of M , while a local frame about p ∈ M is a
frame defined on some neighborhood of p.
We introduce an equivalence relation for frames on U : (X1 , . . . , Xn ), (Y1 , . . . , Yn ) are equiv-
alent if and only if the unique change of basis matrix has positive determinant at every
©F

p ∈ U.

Definition 6.1.2 (Pointwise Orientation)


A pointwise orientation on a manifold M assigns to each p ∈ M an orientation µp of
Tp M .

In terms of frames, a pointwise orientation on M is an equivalence class of (possibly discon-


tinuous) global frames on M .

Definition 6.1.3 (Continuous Pointwise Orientation)
We say that a pointwise orientation µ on M is continuous at p ∈ M if p has a
neighborhood U on which µ is represented by a continuous frame, ie there exists
continuous vector fields Y1 , . . . , Yn on U such that µq = [(Y1,q , . . . , Yn,q )] for all q ∈ U .

A continuous pointwise orientation is called an orientation on M . A manifold is said to be

ou
orientable if it has an orientation. A manifold together with an orientation is said to be
oriented.

Example 6.1.4
Rn is oriented with orientation given by the continuous global frame

(∂/∂r1 , . . . , ∂/∂rn ).

Zh
Example 6.1.5 (Open Möbius Band)
Let R denote the rectangle

R := {(x, y) ∈ R2 : x ∈ [0, 1], y ∈ (−1, 1)}.

The open Möbius band M is the quotient of the rectangle R by the equivalence relation
generated by (0, y) ∼ (1, −y). The interior of R is the open rectangle

U := {(x, y) ∈ R2 : x ∈ (0, 1), y ∈ (−1, 1)}.


x
Suppose towards a contradiction that M is orientable. An orientation on M restricts to
an orientation on U . Without loss of generality, assume the orientation is given by e1 , e2 .
By continuity, the orientations at (0, 0), (1, 0) are both given by e1 , e2 . But under the
identification, the ordered basis e1 , e2 at (1, 0) maps to e1 , −e2 at (0, 0), a contradiction.
eli
Proposition 6.1.6
A connected orientable manifold M has exactly two orientations.

Recall that a section of a tangent bundle is continuous (smooth) if and only if its coefficients
with respect to a continuous (smooth) frame are continuous (smooth) functions on U .
©F

Proof
Let µ, ν be two orientations on M . At any p ∈ M , µp , νp are orientations of Tp M . Thus
they are either the same or are opposites. Define the function f : M → {±1} by
(
1, µ p = νp ,
f (p) :=
−1, µp = −νp .

Fix p ∈ M . By continuity, there is a connected neighborhood U 3 p on which µ =
[(X1 , . . . , Xn )] and ν = [(Y1 , . . . , Yn )] for some continuous vector fields Xi , Yj on U . Let
A = [aij ] : U → GL(n, R) be the change of basis matrix so that
X
Yj = aij Xi .
i

The entries aij are continuous functions so that the determinant det A : U → R× is also

ou
continuous.
By the intermediate value theorem, the continuous nowhere-vanishing functions det A
on the connected set U is everywhere positive or everywhere negative, as R× has two
connected components. Hence µ = ν or µ = −ν on U . Thus f is locally constant. But a
locally constant function on a connected set is constant, hence µ = ν or µ = −ν on all of
M.

Zh
6.1.4 Orientations & Differential Forms

In practice, it is easier to manipulate the nowhere-vanishing top forms that specify a point-
wise orientation. We aim to show that the continuity condition on a pointwise orientation
translates to a smooth condition on nowhere-vanishing top forms.

Lemma 6.1.7
A pointwise orientation µ = [(X1 , . . . , Xn )] on a manifold M is continuous if and only
if each p ∈ M has a coordinate neighborhood (U, x1 , . . . , xn ) on which the function
x
(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) is everywhere positive.
eli
Proof
( =⇒ ) Suppose the pointwise orientation µ is continuous. By definition, every p ∈ M
has a neighborhood W on which µ is represented by a continuous frame (Y1 , . . . , Yn ).
Choose a connected coordinate neighborhood
P i (U, x , . . . , x ) of p contained in W and for
1 n

simplicity write ∂i := ∂/∂x . Then Yj = i bj ∂i for a continuous matrix-valued function


i

[bij ] : U → GL(n, R). By a previous lemma,


©F

(dx1 ∧ · · · ∧ dxn )(Y1 , . . . , Yn ) = (det bij )(dx1 ∧ · · · ∧ dxn )(∂1 , . . . , ∂n ) = det bij 6= 0.
   

As a continuous nowhere-vanishing real-valued function on a connected set, (dx1 ∧ · · · ∧


dxn )(Y1 , . . . , Yn ) is everywhere positive or everywhere negative on U . By consider x̃1 =
−x1 if necessary, we assume it is everywhere positive on U .
Since µ = [(X
P1 , .i . . , Xn )] = [(Y1 , . . . , Yn )] on U , the change of basis matrix C = [cj ] such
i

that Xj = i cj Yi has positive determinant. Applying the previous lemma once more

yields that on U ,

(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) = (det C)(dx1 ∧ · · · ∧ dxn )(Y1 , . . . , Yn ) > 0.

( ⇐= ) Fix p ∈ M and suppose that on its neighborhood chart (U, x1 , . . . , xn ), the function
(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) > 0 over all U .

ou
By shrinking U if necessary, we have a local representation Xj = j aij ∂i . Thus
P

0 < (dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) = (det aij )(dx1 ∧ · · · ∧ dxn )(∂1 , . . . , ∂n ) = det aij .
   

Thus on U , [(X1 , . . . , Xn )] = [(∂1 , . . . , ∂n )] by definition and the pointwise orientation µ


is continuous at p.

Theorem 6.1.8

Zh
An n-manifold M is orientable if and only if there exists a smooth nowhere-vanishing
n-form on M .

Proof
( =⇒ ) Let [(X1 , . . . , Xn )] be an orientation on M . The previous lemma assures that each
p ∈ M has a coordinate neighborhood (U, x1 , . . . , xn ) on which

(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) > 0.


x
Let {(Uα , x1α , . . . , xnα )}α be a collection of these open charts covering M , and {ρα }α a
smooth partitionP of unity subordinate to the open cover {Uα }. Being a locally finite sum,
the n-form ω = α ρα dx1α ∧ . . . dxnα is well defined and smooth on M . For any p ∈ M ,
since ρα (p) ≥ 0 for all α and there is at least one α for which it is positive,
eli
X
ωp (X1,p , . . . , Xn,p ) = ρα (p)(dx1α ∧ · · · ∧ dxnα )p (X1,p , . . . , Xn,p ) > 0.
α

Thus ω is a smooth nowhere-vanishing n-form on M .


( ⇐= ) Suppose ω is a nowhere-vanishing n-form on M . At each p ∈ M , choose an
©F

ordered basis (X1,p , . . . , Xn,p ) for Tp M such that ωp (X1,p , . . . , Xn,p ) > 0. We show that at
every point, there is a coordinate neighborhood on which (dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) is
everywhere positive. The previous lemma concludes the proof.
Fix p ∈ M and let (U, x1 , . . . , xn ) be a connected coordinate neighborhood of p. Then on
U , ω = f dx1 ∧ · · · ∧ dxn for a smooth nowhere-vanishing function f . Being continuous
and nowhere vanishing on a connected set, f is everywhere positive or negative on U .
By taking x̃1 = −x1 if necessary, we may assume f > 0 on U . Then on U , (dx1 ∧ · · · ∧
dxn )(X1 , . . . , Xn ) > 0 as desired.

It can be shown that the unit sphere S 2 ⊆ R3 is orientable. A classical theorem from
algebraic topology states that a continuous vector field on an even-dimensional sphere must
vanish somewhere. Thus although the sphere S 2 has a continuous pointwise orientation, any
global frame that represents the orientation is necessarily discontinuous.
If ω, ω 0 are nowhere-vanishing smooth n-forms on an n-manifold, then ω = f ω 0 for some
nowhere-vanishing function f on M . Locally on a chart (U, x1 , . . . , xn ), ω = hdx1 ∧ · · · ∧ dxn
and ω 0 = gdx1 ∧ · · · ∧ dxn , where h, g are smooth nowhere-vanishing functions on U . Thus

ou
f = h/g is also a smooth nowhere vanishing function on U . Since U is an arbitrary chart,
f is smooth and nowhere vanishing function on M . On a connected manifold M , such a
function is either everywhere positive or everywhere negative. Thus the nowhere-vanishing
smooth n-forms on a connected orientable manifold M are partitioned into two equivalence
classes by the equivalence relation

ω ∼ ω 0 ⇐⇒ ω = f ω 0

Zh
with f > 0.
To each orientation µ = [(X1 , . . . , Xn )] on a connected orientable manifold M , we associate
the equivalence class of smooth nowhere-vanishing n-forms ω on M such that ω(X1 , . . . , Xn ) >
0. Such an ω exists by theorem above. If µ 7→ [ω], then −µ 7→ [−ω]. On a connected ori-
entable manifold, this yields a bijective correspondence

{orientations on M } ↔ {equivalence classes of smooth nowhere-vanishing n-forms on M },

where each side is a set of two elements. By considering one connected component at a time,
x
we see that the bijection still holds for an arbitrary orientable manifold, with each component
having two possible orientations and two equivalence classes of smooth nowhere-vanishing n-
forms. If ω is a smooth nowhere-vanishing n-form such that ω(X1 , . . . , Xn ) > 0, we say that
eli
ω determines or specifies the orientation [(X1 , . . . , Xn )] and we call ω an orientation form on
M . An oriented manifold can be described by a pair (M, [ω]), where [ω] is the equivalence
class of an orientation form on M . However, we typically just write M if the orientation is
clear from context. For example, Rn is oriented by dx1 ∧ · · · ∧ dxn unless otherwise specified.

Remark 6.1.9 (Orientations on Zero-Dimensional Manifolds) A connected manifold


of dimension 0 is a point. The equivalence class of nowhere-vanishing 0-forms on a point is
©F

either [−1] or [+1]. Hence a connected zero-dimensional manifold is always orientable with
its two orientations specified by ±1.
A general zero-dimensional manifold M is a countable discrete set of points, and an orientation
is given by a function that assigns to each point either +1 or −1.

A diffeomorphism F : (N, [ωN ]) → (M, [ωM ]) of oriented manifolds is said to be orientation-


preserving if [F ∗ ωM ] = [ωN ]. It is orientation-reversing if [F ∗ ωM ] = [−ωN ].

Proposition 6.1.10
Let U, V ⊆ Rn be open, both with the standard orientation inherited from Rn . A diffeo-
morphism F : U → V is orientation-preserving if and only if the Jacobian determinant
det[∂F i /∂xj ] is everywhere positive on U .

Proof

ou
Let x1 , . . . , xn and y 1 , . . . , y n be the standard coordinates on U, V ⊆ Rn . By computation,

F ∗ (dy 1 ∧ . . . dy n ) = d(F ∗ y 1 ) ∧ · · · ∧ d(F ∗ y n )


= d(y 1 ◦ F ) ∧ · · · ∧ d(y n ◦ F )
= dF 1 ∧ · · · ∧ dF n
= det[ ∂F i /∂xj ] dx1 ∧ · · · ∧ dxn .

Zh
Thus F is orientation-preserving if and only if det[∂F i /∂xj ] is everywhere positive on U .
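
As a tiny complement (my own example, not from the text), the coordinate swap F (x, y) = (y, x) has Jacobian determinant −1 everywhere and is therefore orientation-reversing; the computation mirrors the proof above.

```python
# Sketch: the swap F(x, y) = (y, x) pulls dx ∧ dy back to -dx ∧ dy (Jacobian det = -1).
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([y, x])
print(F.jacobian([x, y]).det())   # -1, so F is orientation-reversing
```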

6.1.5 Orientations & Atlases

Using the characterization of an orientation-preserving diffeomorphism by the sign of its


Jacobian determinant, we can describe the orientability of manifolds in terms of atlases.

Definition 6.1.4 (Oriented Atlas)


An atlas on M is said to be oriented if for any two overlapping charts
x
(U, x1 , . . . , xn ), (V, y 1 , . . . , y n ) of the atlas, the Jacobian determinant det[∂y i /∂xj ] is
everywhere positive on U ∩ V .
eli
Theorem 6.1.11
A manifold M is orientable if and only if it has an oriented atlas.

Proof
( =⇒ ) Let µ = [(X1 , . . . , Xn )] be an orientation on the manifold M . By a prior lemma,
©F

each p ∈ M has a coordinate neighborhood (U, x1 , . . . , xn ) on which

(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) > 0.

We claim that the collection U of these charts is an oriented atlas.


If (U, x1 , . . . , xn ), (V, y 1 , . . . , y n ) are two overlapping charts from U, then on U ∩ V ,

(dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ), (dy 1 ∧ · · · ∧ dy n )(X1 , . . . , Xn ) > 0.

But
dy 1 ∧ · · · ∧ dy n = (det ∂y i /∂xj )dx1 ∧ · · · ∧ dxn ,
 

hence det[∂y i /∂xj ] > 0 on U ∩ V .


( ⇐= ) Suppose {(U, x1 , . . . , xn )} is an oriented atlas. For each p ∈ (U, x1 , . . . , xn ), de-
fine µp to be the equivalence class of ordered bases (∂/∂x1 |p , . . . , ∂/∂xn |p ) for Tp M . If
two charts (U, x1 , . . . , xn ), (V, y1 , . . . , y n ) in the oriented atlas contains p, then by the ori-

ou
entability of the atlas, det[∂y i /∂xj ] > 0, so that (∂/∂x1 |p , . . . , ∂/∂xn |p ) is equivalent to
(∂/∂y 1 |p , . . . , ∂/∂y n |p ). Thus µ is a well-defined pointwise orientation on M . Moreover,
it is continuous as every point has a coordinate neighborhood on which µ is represented
by a continuous frame.
We say two oriented atlases {(Uα , φα )} and {(Vβ , ψβ )} on a manifold M are equivalent if the
transition functions
φα ◦ ψβ−1 : ψβ (Uα ∩ Vβ ) → φα (Uα ∩ Vβ )

Zh
have positive Jacobian determinant for all α, β.
It is not difficult to show that this is an equivalence relation on the set of oriented atlases
on a manifold M . In the proof of the theorem above, an oriented atlas {(U, x1 , . . . , xn )} on
a manifold M determines an orientation p 7→ [(∂/∂x1 |p , . . . , ∂/∂xn |p )] on M . Conversely, an
orientation [(X1 , . . . , Xn )] on M gives rise to an oriented atlas {(U, x1 , . . . , xn )} on M such
that (dx1 ∧ · · · ∧ dxn )(X1 , . . . , Xn ) > 0 on U . It can be shown that the two induced maps
{equivalence classes of oriented atlases on M } ↔ {orientations on M }
are well-defined and inverse to each other. Thus we can also specify an orientation on an
x
orientable manifold by an equivalence class of oriented atlases.
For an oriented manifold M , we denote by −M the same manifold with the opposite orien-
tation. If {(U, φ)} = {(U, x1 , . . . , xn )} is an oriented atlas specifying the orientation of M ,
then an oriented atlas specifying the orientation of −M is {(U, φ̃)} = {(U, −x1 , x2 , . . . , xn )}.
eli

6.2 Manifolds with Boundary


The prototype of a manifold with boundary is the closed upper half-space
©F

Hn := {(x1 , . . . , xn ) ∈ Rn : xn ≥ 0},
with the subspace topology inherited from Rn . The points x ∈ Hn with xn > 0 are called
the interior points of Hn , and the points with xn = 0 are called the boundary points of Hn .
The sets of these two points are denoted (Hn )◦ and ∂(Hn ), respectively.
If M is a manifold with boundary, then its boundary ∂M turns out to be a manifold of
dimension n − 1 where n = dim(M ◦ ). Moreover, an orientation on M induces an orientation
on ∂M .

6.2.1 Smooth Invariance of Domain in Rn

In order to discuss smooth functions on a manifold with boundary, we need to extend the
notion of a smooth function to allow for nonopen domains.

Definition 6.2.1 (Smooth on an Arbitrary Set)

ou
Let S ⊆ Rn be an arbitrary subset. A function f : S → Rm is smooth at p ∈ S if there
exists a neighborhood U 3 p and a smooth function f˜ : U → Rm such that f˜ = f on
U ∩ S. We say f is smooth on S if it is smooth at each point of S.

This definition allows us to make sense of an arbitrary subset S ⊆ Rn being diffeomorphic


to an arbitrary subset T ⊆ Rm .

Proposition 6.2.1 (Smooth Functions on an Arbitrary Set)

Zh
A function f : S → Rm is smooth on S ⊆ Rn if and only if there is an open set U ⊇ S
and a smooth function f˜ : U → Rm such that f = f˜|S .

Proof
Suppose f : S → Rm is indeed smooth on S ⊆ Rn . For each p ∈ S, let Up 3 p, f˜p : Up →
Rm be the neighborhood of p and smooth function on Up that restricts to f˜ on Up ∩ S.
Then {Up }p∈S is an open cover of the open submanifold
[
U := Up .
x
p∈S

Thus there is a smooth partition of unity {ϕp } subordinate to {Up }. The function f˜ :
U → R given by
eli
X
f˜ := ϕp f˜p
p∈S

is the desired function.


The converse clearly holds, concluding the proof.
The following theorem is the smooth analogue of a classical theorem from algebraic topology
©F

in the continuous category. We use it to show that interior points and boundary points are
invariant under diffeomorphism of open subsets of Hn .

Theorem 6.2.2 (Smooth Invariance of Domain)


Let U ⊆ Rn be an open subset, S ⊆ Rn an arbitrary subset, and f : U → S a
diffeomorphism. Then S is open in Rn .

We remark that the theorem is non-trivial since a priori, we only know that f : U → S takes

an open subset of U (which is open in Rn ) to an open subset of S. Hence f (U ) ⊆ S is open
in S, but not necessarily in Rn .

Proof
Fix p ∈ U . Our goal is to find a neighborhood Vf (p) 3 f (p) that is open in Rn and
contained in S.

ou
Since f : U → S is a diffeomorphism, there is an open set V ⊆ Rn containing S and a
smooth map g : V → Rn such that g|S = f −1 . Thus g ◦ f = IdU : U → U is the identity
map on U . By the chain rule,

g∗,f (p) ◦ f∗,p = IdTp U : Tp U → Tp U

is the identity map on the tangent space Tp U . In particular, f∗,p is necessarily injective.
Since U, V have the same dimension, it follows that f∗,p : Tp U → Tf (p) V is invertible. By

Zh
the inverse function theorem, f is locally invertible at p, meaning there are open neigh-
borhoods Up 3 p in U and Vf (p) 3 f (p) in V such that f : Up → Vf (p) is a diffeomorphism.
But
Vf (p) = f (Up ) ⊆ f (U ) = S
with V ⊆ Rn open in Rn and Vf (p) ⊆ V open in V , hence Vf (p) is open in Rn as desired.

Proposition 6.2.3
Let U, V be open subsets of the upper half-space Hn and f : U → V a diffeomorphism.
Then f maps interior points to interior points and boundary points to boundary points.
x
Note here we refer to openness in the relative topology on Hn .

Proof
Let p ∈ U be an interior point of Hn . Then p is contained in an open ball B, which is
eli
open in Rn . By the smooth invariance of domain, f (B) is open in Rn as well. Thus we
necessarily have f (B) ⊆ (Hn )◦ . But then f (p) ∈ f (B) is an interior point of Hn .
If p is a boundary point in U ∩ ∂H n , then f −1 (f (p)) = p is a boundary point. But
f −1 : V → U is a diffeomorphism, and by the contrapositive of what we just proved, f (p)
cannot be an interior point.
©F

Remark 6.2.4 Replacing Euclidean spaces by manifolds throughout this section, the iden-
tical proof steps yields the smooth invariance of domain for manifolds: If there is a diffeomor-
phism between an open subset U of an n-manifold N and an arbitrary subset S of another
n-manifold M , then S must be open in M .

6.2.2 Manifolds with Boundary

In the upper half-space Hn , we can distinguish open sets by those disjoint from the boundary,
or those that intersect the boundary. Charts on a manifold are homeomorphic to only the
first kind of open sets. A manifold with boundary generalizes the definition of a manifold by
allowing both kinds of open sets. We say that a topological space M is locally Hn if every
point p ∈ M has a neighborhood U homeomorphic to an open subset of Hn .

ou
Definition 6.2.2 (n-Manifold with Boundary)
A topological n-manifold with boundary is a second countable, Hausdorff topological
space that is locally Hn .

Let M be a topological n-manifold with boundary. For n ≥ 2, a chart on M is defined to


be a pair (U, φ) consisting of an open set U ⊆ M and a homeomorphism of U with an open

Zh
subset ϕ(U ) ⊆ Hn .
In the case of n = 1, a slight modification is necessary. We need to allow two local models,
the right half-line H1 and the left half-line
L1 := {x ∈ R : x ≤ 0}.
A chart (U, ϕ) in dimension 1 consists of an open set U in M and a homeomorphism φ of U
with an open subset of H1 or L1 . Under this convention, if (U, x1 , . . . , xn ) is a chart of an
n-manifold with boundary, then so is (U, −x1 , x2 , . . . , xn ) for any n ≥ 1. A manifold with
boundary has dimension at least 1, since a manifold of dimension 0, being a discrete set of
points, necessarily has empty boundary.
x
A collection {(U, φ)} of charts is a smooth atlas if for any two charts (U, φ) and (V, ψ), the
transition map
ψ ◦ φ−1 : φ(U ∩ V ) → ψ(U ∩ V ) ⊆ Hn
eli
is s diffeomorphism. A smooth manifold with boundary is a topological manifold with bound-
ary together with a maximal smooth atlas.
A point p ∈ M is an interior point if in some chart (U, φ), the point φ(p) is an interior point
of Hn . Similarly, p is a boundary point of M if φ(p) is a boundary point of Hn . These
concepts are well-defined independent of the choice of charts by the smooth invariance of
©F

domain. Indeed, consider any other chart (V, ψ). Then ψ ◦ φ−1 sends φ(p) 7→ ψ(p), and
φ(p), ψ(p) are either both interior points or both boundary points. The set of boundary
points of M is denoted ∂M .
Most of the concepts introduced for a manifold extend word for word to a manifold with
boundary, with the only difference being that a chart can be either of two types. For example,
a function f : M → R is smooth at a boundary point p ∈ ∂M if there is a chart (U, φ) about
p such that f ◦ φ−1 is smooth at φ(p) ∈ Hn . This in turn translates to f ◦ φ−1 having a
smooth extension to a neighborhood of φ(p) ∈ Rn .

We may be used to other notions of interior and boundary from point-set topology, defined
for a subset A of a topological space S. A point p ∈ S is said to be an interior point of A if
there is an open subset U ⊆ S such that

p ∈ U ⊆ A.

the point p ∈ S is an exterior point of A if there is an open subset U of S such that

ou
p ∈ U ⊆ S − A.

Finally, p ∈ S is a boundary point of A if every neighborhood of p contains both a point in


A and a point not in A. We denote by int(A), ext(A), bd(A) the sets of interior, exterior, ad
boundary points respectively of A in S. Clearly the topological space S is the disjoint union

Zh
S = int(A) t ext(A) t bd(A).

In the case the subset A ⊆ S is a manifold with boundary, we call int(A) the topological
interior and bd(A) the topological boundary, to distinguish them from the manifold interior
A◦ and the manifold boundary ∂A. Note that the topological interior and the topological
boundary of a set depends on an ambient space, while the manifold interior and manifold
boundary are intrinsic.

Example 6.2.5 (Topological vs Manifold Boundary)


Let A be the open unit disc in Rn . Then bd(A) = S n−1 but ∂A = ∅. If B is the closed
x
unit ball in Rn , then bd(B) = ∂B = S n−1 .

Example 6.2.6 (Topological vs Manifold Interior)


eli
Let S be the upper half-plane H2 and D the subset

D := {(x, y) ∈ H2 : y ≤ 1}.

The topological interior of D is the set

int(D) = {(x, y) ∈ H2 : y ∈ [0, 1)},


©F

while the manifold interior of D is the set

D◦ = {(x, y) ∈ H2 : y ∈ (0, 1)}.


To indicate the dependence of the topological interior of a set A on its ambient space S, we
may denote it by intS (A). In the example above,

intH2 (D) 6= intR2 (D) = D◦ .

6.2.3 The Boundary of a Manifold with Boundary

Let M be a manifold of dimension n with boundary ∂M . If (U, φ) is a chart on M , we


denote by φ0 = φ|U ∩∂M the restriction of the coordinate map φ to the boundary. Since φ
maps boundary points to boundary points,
φ0 : U ∩ ∂M → ∂Hn = Rn−1 .

ou
Moreover, if (U, φ) and (V, Ψ) are two charts on M , then
ψ 0 ◦ (φ0 )−1 : φ0 (U ∩ V ∩ ∂M ) → ψ 0 (U ∩ V ∩ ∂M )
is smooth. Thus an atlas {(Uα , φα )}α for M induces an atlas {(Uα ∩ ∂M, φα |Uα ∩∂M )} for ∂M ,
making ∂M into a manifold of dimension n − 1 without boundary.

6.2.4 Tangent Vectors, Differential Forms, and Orientations

Zh
Let M be a manifold with boundary and p ∈ ∂M . As before, two smooth functions f : U → R
and g : V → R defined on neighborhoods U, V 3 p in M are said to be equivalent if they
agree on some neighborhood W ∈ p contained in U ∩ V . A germ of smooth functions at
p is an equivalence class of such functions. Along with the usual addition, multiplicaiton,
and scalar multiplication of terms, the set Cp∞ (M ) of germs of smooth functions at p is
an R-algebra. The tangent space Tp M at p is then defined to be the vector space of all
point-derivations on the algebra Cp∞ (M ).
for instance, for p ∈ ∂H2 , ∂/∂x|p and ∂/∂y|p are both derivations on Cp∞ (H2 ). The tangent
x
space Tp (H2 ) is represented by a 2-dimensional vector space with the origin at p. Since
∂/∂y|p is a tangent vector to H2 at p, its negative −∂/∂y|p is also a tangent vector at p,
although there is not curve through p in H2 with initial velocity −∂/∂y|p .
eli
As before, the cotangent space Tp∗ M is defined to be the dual of the tangent space
Tp∗ M = Hom(Tp M, R).

Differential k-forms are also as before, as sections of the vector bundleV k T ∗ M . A differential
V
k-form is smooth if it is smooth as a section of the vector bundle k T ∗ M . For example,
dx ∧ dy is a smooth 2-form on H2 .
©F

An orientation on an n-manifold M with boundary is again a continuous pointwise ori-


entation on M . The previous discussion on orientations goes through for manifolds with
boundary. Thus the orientability of a manifold with boundary is equivalent to the existence
of a smooth nowhere-vanishing top form and to the existence of an oriented atlas. In one
of the proofs in the discussion about orientations, it was necessary to replace the chart
(U, x1 , . . . , xn ) with the chart (U, −x1 , x2 , . . . , xn ). This is not possible for n = 1 if we did
not allow the left half-line L1 as a local model in the definition of a chart on a 1-dimensional
manifold with boundary.

Example 6.2.7
The closed interval [0, 1] is a smooth manifold with boundary. With d/dx as a continuous
pointwise orientation, [0, 1] is an oriented manifold with boundary. It has an oriented atlas
with two charts (U1 , φ1 ), (U2 , φ2 ) where U1 = [0, 1), φ1 (x) = x and U2 = (0, 1], φ2 (x) =
x − 1. Note that φ2 maps to L1 .

ou
6.2.5 Outward-Pointing Vector Fields

Definition 6.2.3 (Inward-Pointing)


Let M be a manifold with boundary and p ∈ ∂M . We say that a tangent vector
Xp ∈ Tp M is inward-pointing if Xp ∈ / Tp (∂M ) and there is some ε > 0 and a curve
c : [0, ε) → M such that c(0) = p, c(0, ε) ⊆ M ◦ , and c0 (0) = Xp .

Zh
A tangent vector Xp ∈ Tp M is outward-pointing if −Xp is inward-pointing.

Example 6.2.8
On the upper half-plane H2 , the vector ∂/∂y|p is inward-pointing and the vector −∂/∂y|p
is outward-pointing at any p on the x-axis.

Definition 6.2.4 (Vector Field Along the Boundary)


A vector field along ∂M is a function X that assigns to each p ∈ ∂M a vector
Xp ∈ Tp M (as opposed to Tp (∂M ).
x
In a coordinate neighborhood (U, x1 , . . . , xn ) of p ∈ M , any such vector field X can be
written as a linear combination
eli
X ∂
Xq = ai (q) i
i
∂x q

for q ∈ ∂M . A vector field X along ∂M is said to be smooth at p ∈ M if there is a coordinate


neighborhood of p for which the functions ai on ∂M are smooth at p. Furthermore, X is
said to be smooth if it is smooth at every p. In terms of local coordinates, it can be shown
that a tangent vector Xp is outward-pointing if and only if an (p) < 0.
©F

Proposition 6.2.9
On any manifold M with boundary ∂M , there is a smooth outward-pointing vector field
along ∂M .

Proof (Sketch)
Cover ∂M with coordinate open sets (Uα , x1α , . . . , xnα ) in M . On each Uα , the vector field
Xα = −∂/∂xnα along Uα ∩ ∂M is smooth and outward-pointing. Choose a partition of

unity P
{ρα }α∈A subordinate to the open cover {Uα ∩ ∂M }α∈A . Then one can check that
X := α ρα Xα is a smooth outward-pointing vector field along ∂M .

6.2.6 Boundary Orientation

Our goal now is to show that the boundary of an orientable manifold M with boundary is
an orientable manifold without boundary. We will designate one of the orientations on the

boundary as the boundary orientation. It is easily described in terms of an orientation form
or of a pointwise orientation on ∂M .
Recall that the contraction ιX ω of a k-form ω by a vector field X is the (k − 1)-form given by

    (ιX ω)p(v2, . . . , vk) := ιXp ωp(v2, . . . , vk) := ωp(Xp, v2, . . . , vk).

Proposition 6.2.10
Let M be an oriented n-manifold with boundary. If ω is an orientation form on M and
X is a smooth outward-pointing vector field on ∂M , then ιX ω is a smooth nowhere-
vanishing (n − 1)-form on ∂M . Hence, ∂M is orientable.

Proof
Since ω and X are both smooth on ∂M, so is the contraction ιX ω. We argue by contradiction
that ιX ω must be nowhere-vanishing on ∂M.
Suppose ιX ω vanishes at some p ∈ ∂M. This means that (ιX ω)p(v1, . . . , vn−1) = 0 for all
v1, . . . , vn−1 ∈ Tp(∂M). Let e1, . . . , en−1 be a basis for Tp(∂M). Then Xp, e1, . . . , en−1 is a
basis for Tp M, as Xp ∉ span{e1, . . . , en−1}, and so

    ωp(Xp, e1, . . . , en−1) = (ιX ω)p(e1, . . . , en−1) = 0.

Since an n-covector on the n-dimensional vector space Tp M is determined by its value on a single ordered basis, this implies ωp ≡ 0 on Tp M, contradicting the fact that the orientation form ω is nowhere-vanishing.
In the notation of the preceding proposition, we define the boundary orientation on ∂M to be
the orientation with orientation form ιX ω. It can be checked that this is independent of the
choice of the orientation form ω and of the outward-pointing vector field X.

Proposition 6.2.11
Suppose M is an oriented n-manifold with boundary. Let p be a point of the boundary
∂M and Xp an outward-pointing tangent vector in Tp M . An ordered basis (v1 , . . . , vn−1 )
for Tp (∂M ) represents the boundary orientation at p if and only if the ordered basis
(Xp , v1 , . . . , vn−1 ) for Tp M represents the orientation on M at p.

Proof
The proof consists of unwrapping the definitions. For p ∈ ∂M , let (v1 , . . . , vn−1 ) be an

ordered basis for the tangent space Tp (∂M ). Then

(v1 , . . . , vn−1 ) represents the boundary orientation on ∂M at p


⇐⇒ (ιXp ωp )(v1 , . . . , vn−1 ) > 0
⇐⇒ ωp (Xp , v1 , . . . , vn−1 ) > 0
⇐⇒ (Xp , v1 , . . . , vn−1 ) represents the orientation on M at p.

Example 6.2.12 (The Boundary Orientation on ∂Hn )
An orientation form for the standard orientation on the upper half-space Hn is ω =
dx1 ∧ · · · ∧ dxn . A smooth outward-pointing vector field on ∂H n is −∂/∂xn . By definition,
an orientation form for the boundary orientation on ∂H n is given by the contraction

    ι−∂/∂xn ω = −ι∂/∂xn (dx1 ∧ · · · ∧ dxn−1 ∧ dxn)
              = −(−1)n−1 dx1 ∧ · · · ∧ dxn−1 ∧ ι∂/∂xn (dxn)
              = (−1)n dx1 ∧ · · · ∧ dxn−1.

Thus the boundary orientation on ∂H1 = {0} is given by −1, the boundary orientation
on ∂H2 is given by dx1 , and the boundary orientation on ∂H3 is given by −dx1 ∧ dx2 .
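This sign computation is easy to check numerically by viewing dx1 ∧ · · · ∧ dxn as the determinant of the matrix whose columns are its arguments and implementing the contraction ιX ω directly from its definition. The following Python sketch (the helper names top_form and contract are ours, chosen for illustration) verifies for H2 that ι−∂/∂y(dx ∧ dy) evaluates to +1 on ∂/∂x, so that ∂/∂x positively represents the boundary orientation dx1 on ∂H2, in agreement with the example above.

    import numpy as np

    def top_form(*vectors):
        # dx1 ∧ ... ∧ dxn evaluated on n vectors equals the determinant
        # of the matrix having those vectors as columns.
        return np.linalg.det(np.column_stack(vectors))

    def contract(X, omega):
        # (iota_X omega)(v2, ..., vk) := omega(X, v2, ..., vk)
        return lambda *vs: omega(X, *vs)

    # In H^2 with coordinates (x, y): -d/dy is outward-pointing at a boundary point.
    X = np.array([0.0, -1.0])      # -d/dy
    d_dx = np.array([1.0, 0.0])    # d/dx, a basis of T_p(boundary of H^2)

    boundary_form = contract(X, top_form)   # iota_{-d/dy}(dx ∧ dy)
    print(boundary_form(d_dx))              # 1.0 > 0: d/dx is positively oriented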

Example 6.2.13
The closed interval [a, b] ⊆ R with coordinate x has a standard orientation given by the
vector field d/dx, with orientation form dx. At the right endpoint b, an outward vector
is d/dx. Hence the boundary orientation at b is given by ιd/dx (dx) = 1. Similarly, the
boundary orientation at the left endpoint a is given by ι−d/dx (dx) = −1.
Example 6.2.14
Suppose c : [a, b] → M is a smooth immersion whose image is a 1-dimensional manifold
C with boundary. An orientation on [a, b] induces an orientation on C via the differential
c∗,p : Tp [a, b] → Tc(p) C at each p ∈ [a, b]. In a situation like this, we give C the orientation
induced from the standard orientation on [a, b]. Thus the boundary orientation on the
boundary of C is given by +1 at the endpoint c(b) and −1 at the initial point c(a).

6.3 Integration on Manifolds



We first recall Riemann integration for a function over a closed rectangle in Euclidean space.
By Lebesgue’s theorem, this theory can be extended to integrals over bounded subsets of Rn
whose boundary has measure zero.
The integral of an n-form with compact support in an open set of Rn is defined to be the
Riemann integral of the coefficient function. Using a partition of unity, we define the integral
of an n-form with compact support on a manifold by writing the form as a sum of forms, each

with compact support in a coordinate chart. We then prove the general Stokes theorem for
an oriented manifold and show how it generalizes the fundamental theorem for line integrals
as well as Green’s theorem from calculus.

6.3.1 The Riemann Integral of a Function on Rn

We assume familiarity with Riemann integration in Rn and only briefly summarize the Rie-
mann integral of a bounded function over a bounded set in Rn .
A closed rectangle in Rn is a Cartesian product R = [a1 , b1 ] × · · · × [an , bn ] of closed intervals
in R. The volume vol(R) of the closed rectangle R is defined to be
    vol(R) := (b1 − a1)(b2 − a2) · · · (bn − an).

A partition of the closed interval [a, b] is a set of real numbers {p0 , . . . , pn } such that

a = p0 < p1 < · · · < pn = b.

A partition of the rectangle R is a collection P = {P1 , . . . , Pn } where each Pi is a partition of


[ai , bi ]. The partition P divides the rectangle R into closed subrectangles, which we denote
by Rj .
Let f : R → R be a bounded function defined on a closed rectangle R. We define the lower
sum and upper sum of f with respect to the partition P to be

    L(f, P) := Σj (infRj f) vol(Rj),    U(f, P) := Σj (supRj f) vol(Rj),
where each sum runs over all subrectangles of P. For any partition P, clearly L(f, P) ≤
U(f, P). In fact, we will soon see that for any two partitions P, P′ of the rectangle R,

    L(f, P) ≤ U(f, P′).

A partition P′ = {P′1, . . . , P′n} is a refinement of the partition P = {P1, . . . , Pn} if Pi ⊆ P′i
for all i ∈ [n]. If P′ is a refinement of P, then each subrectangle Rj of P is subdivided into
subrectangles R′jk of P′, and it can be seen that

    L(f, P) ≤ L(f, P′).

This is because if R′jk ⊆ Rj, then infRj f ≤ infR′jk f. Similarly, if P′ is a refinement of P, then

    U(f, P′) ≤ U(f, P).

Any two partitions P, P′ of the rectangle R have a common refinement Q = {Q1, . . . , Qn}
with Qi := Pi ∪ P′i. It follows that

    L(f, P) ≤ L(f, Q) ≤ U(f, Q) ≤ U(f, P′).

It follows that the supremum of the lower sum L(f, P ) over all partitions P of R is less than
or equal to the infimum of the upper sum U(f, P) over all partitions P of R. We define these
two numbers to be the lower integral ∫̲_R f and the upper integral ∫̄_R f, respectively:

    ∫̲_R f := sup_P L(f, P),    ∫̄_R f := inf_P U(f, P).

Definition 6.3.1

Let R be a closed rectangle in Rn. A bounded function f : R → R is said to be
Riemann integrable if

    ∫̲_R f = ∫̄_R f.

In this case, the Riemann integral of f is this common value, denoted

    ∫_R f(x) dx1 . . . dxn,

where x1, . . . , xn are the standard coordinates on Rn.


Remark 6.3.1 When we speak of a rectangle in Rn , we have implicitly chosen n coordinate
axes, with coordinates x1 , . . . , xn . Thus the definition of a Riemann integral depends on the
coordinates x1 , . . . , xn .
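As a concrete numerical illustration of these definitions (a minimal sketch, not part of the text; the sampling of each subrectangle only approximates the exact infimum and supremum), the lower and upper sums over uniform partitions of a rectangle squeeze the Riemann integral as the partition is refined. Here ∫_{[0,1]×[0,1]} xy dx1 dx2 = 1/4.

    import numpy as np

    def lower_upper_sums(f, a, b, c, d, n):
        # Uniform partition of the rectangle [a,b] x [c,d] into n*n subrectangles.
        xs, ys = np.linspace(a, b, n + 1), np.linspace(c, d, n + 1)
        lower = upper = 0.0
        for i in range(n):
            for j in range(n):
                # Sample each subrectangle on a small grid to estimate inf and sup.
                gx = np.linspace(xs[i], xs[i + 1], 5)
                gy = np.linspace(ys[j], ys[j + 1], 5)
                vals = f(gx[:, None], gy[None, :])
                vol = (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
                lower += vals.min() * vol
                upper += vals.max() * vol
        return lower, upper

    f = lambda x, y: x * y
    for n in (4, 16, 64):
        L, U = lower_upper_sums(f, 0.0, 1.0, 0.0, 1.0, n)
        print(n, L, U)   # L and U both approach 1/4 as the partition is refined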

If f : A ⊆ Rn → R, then the extension of f by zero is the function f˜ : Rn → R obtained


from f by setting it to be zero outside of A. Suppose f : A → R is a bounded function on
a bounded set A in Rn . We can enclose A in a closed rectangle R and define the Riemann
integral of f over A to be

    ∫_A f(x) dx1 . . . dxn = ∫_R f̃(x) dx1 . . . dxn

if the RHS exists. In this way, we can deal with the integral of a bounded function whose
domain is an arbitrary bounded set in Rn .
The volume vol(A) of a subset A ⊆ Rn is defined to be the integral ∫_A 1 dx1 . . . dxn provided

the integral exists. This concept generalizes the volume of a closed rectangle we previously
defined.

6.3.2 Integrability Conditions

In this section, we describe some conditions under which a function defined on an open
subset of Rn is Riemann integrable.
Recall that a set A ⊆ Rn is said to have (Lebesgue) measure zero if for every ε > 0, there is
a countable cover {Ri}i≥1 of A by closed rectangles Ri such that Σi≥1 vol(Ri) < ε.

The most useful criterion of Riemann integrability is due to Lebesgue:

Theorem 6.3.2 (Lebesgue’s Criterion)


A bounded function f : A → R on a bounded subset A ⊆ Rn is Riemann integrable if
and only if the set of discontinuities of the extended function f̃ has measure zero.

Proposition 6.3.3
If a continuous function f : U → R defined on an open subset U ⊆ Rn has compact
support, then f is Riemann integrable on U .

Proof
Being continuous on a compact set, the function f is bounded. Being compact, the set
supp f is closed and bounded in Rn. We claim that the extension f̃ is continuous.
Since f̃ agrees with f on U, the extended function f̃ is continuous on U. It remains to
show f̃ is continuous on the complement of U. If p ∉ U, then p ∉ supp f. Since supp f is a
closed subset of Rn, there is an open ball B ∋ p disjoint from supp f. Then f̃ ≡ 0 on B, implying
that f̃ is continuous at p ∉ U. By Lebesgue's theorem, f is Riemann integrable on U.
Remark 6.3.4 The support of a real-valued function is the closure in its domain of the
subset where the function is not zero.
Definition 6.3.2 (Domain of Integration)
A subset A ⊆ Rn is called a domain of integration if it is bounded and its topological
boundary bd(A) is a set of measure zero.

Familiar figures such as triangles, rectangles, and circular disks are all domains of integration

in R2 .
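The volume (here, area) of such a domain of integration can be approximated directly from the definition via the extension by zero, as in the following sketch (the grid resolution and the choice of the unit disk are ours, for illustration); the Riemann sums converge to π.

    import numpy as np

    # Approximate vol(A) = ∫_A 1 dx dy for the unit disk A, using the extension
    # by zero of the constant function 1 to the enclosing rectangle [-1,1] x [-1,1].
    n = 2000
    x = (np.arange(n) + 0.5) * (2.0 / n) - 1.0      # midpoints of a uniform grid on [-1, 1]
    X, Y = np.meshgrid(x, x)
    cell_area = (2.0 / n) ** 2
    indicator = (X**2 + Y**2 <= 1.0).astype(float)  # extension of 1 by zero outside A
    print(indicator.sum() * cell_area, np.pi)       # ≈ 3.1416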
Proposition 6.3.5
Every bounded continuous function f defined on a domain of integration A ⊆ Rn is
Riemann integrable over A.

Proof
Let f˜ : Rn → R be the extension of f by zero. Since f is continuous on A, the extension

is necessarily continuous at all interior points of A, and every exterior point of A has a neighborhood
disjoint from A on which f̃ ≡ 0. Hence the set of discontinuities of f̃ is contained
in bd(A), which has measure zero. We conclude the proof by Lebesgue's theorem.

6.3.3 The Integral of an n-Form on Rn

Once a set of coordinates x1 , . . . , xn has been fixed on Rn , n-forms on Rn can be identified
with functions on Rn , since every n-form on Rn can be written as ω = f (x)dx1 ∧ · · · ∧ dxn for
a unique function f (x) on Rn . In this way, the theory of Riemann integration of functions
on Rn carries over to n-forms on Rn .

Definition 6.3.3 (Integral of n-Form)


Let ω = f(x) dx1 ∧ · · · ∧ dxn be a smooth n-form on an open subset U ⊆ Rn, with
standard coordinates x1, . . . , xn. Its integral over a subset A ⊆ U is defined to be the
Riemann integral of f(x):

    ∫_A ω = ∫_A f(x) dx1 ∧ · · · ∧ dxn := ∫_A f(x) dx1 . . . dxn,

assuming the Riemann integral exists.

Note that in this definition, we require the n-form to be written in the order dx1 ∧ · · · ∧ dxn.
If the differentials appear in any other order, we must first rearrange them into this order using the anticommutativity of the wedge product, which may introduce a sign.
Example 6.3.6
If f is a bounded continuous function defined on a domain of integration A ⊆ Rn , then
the integral ∫_A f dx1 ∧ · · · ∧ dxn exists.
Let us see how the integral of an n-form ω = f dx1 ∧ · · · ∧ dxn on an open subset U ⊆ Rn
transforms under a change of variables. A change of variables on U is given by a diffeo-
morphism T : V ⊆ Rn → U ⊆ Rn . Let x1 , . . . , xn be the standard coordinates on U and
y 1 , . . . , y n the standard coordinates on V . Then T i := xi ◦ T = T ∗ (xi ) is the i-th component
of T . We assume that U, V are connected, and write x = (x1 , . . . , xn ) and y = (y 1 , . . . , y n ).
Furthermore, denote by J(T ) the Jacobian matrix [∂T i /∂y j ]. By the local formula for a

wedge of differentials,

dT 1 ∧ · · · ∧ dT n = det(J(T ))dy 1 ∧ · · · ∧ dy n .

Recall also that the pullback commutes with the wedge product:

F ∗ (ω ∧ τ ) = F ∗ ω ∧ F ∗ τ.

Hence

    ∫_V T∗ω = ∫_V (T∗f) T∗dx1 ∧ · · · ∧ T∗dxn
            = ∫_V (f ◦ T) dT1 ∧ · · · ∧ dTn                  (T∗ d = d T∗)
            = ∫_V (f ◦ T) det(J(T)) dy1 ∧ · · · ∧ dyn
            = ∫_V (f ◦ T) det(J(T)) dy1 . . . dyn.

Recall that the change of variables formula from calculus (whose most intuitive proof is
obtained from a measure-theoretic argument regarding the Lebesgue integral) gives
    ∫_U f dx1 . . . dxn = ∫_V (f ◦ T) |det J(T)| dy1 . . . dyn.

Hence, putting the two equations above together yields

    ∫_V T∗ω = ± ∫_U ω,

depending on the sign of the Jacobian determinant.


Recall that a diffeomorphism of oriented manifolds T : V ⊆ Rn → U ⊆ Rn is orientation-
preserving if [T∗ωU] = [ωV], where ωV, ωU are orientation forms for V, U, respectively.
We deduced that T is orientation-preserving if and only if its Jacobian determinant det J(T) is
everywhere positive on V. Our work above shows that the integral of a differential form is
not invariant under all diffeomorphisms of V with U, but only under orientation-preserving
diffeomorphisms.
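The role of the sign of the Jacobian determinant is easy to see numerically, even in one dimension. In the sketch below (the affine changes of variables and the integrand are our own choices, for illustration), an orientation-preserving map reproduces ∫_U f dx while an orientation-reversing one reproduces its negative.

    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x**2                       # integrand on U = [0, 2]
    I_U, _ = quad(f, 0.0, 2.0)               # ∫_U f dx = 8/3

    # Orientation-preserving change of variables T(y) = 2y on V = [0, 1], det J(T) = 2 > 0
    T, detJT = (lambda y: 2 * y), 2.0
    I_pres, _ = quad(lambda y: f(T(y)) * detJT, 0.0, 1.0)

    # Orientation-reversing change of variables S(y) = 2 - 2y on V = [0, 1], det J(S) = -2 < 0
    S, detJS = (lambda y: 2 - 2 * y), -2.0
    I_rev, _ = quad(lambda y: f(S(y)) * detJS, 0.0, 1.0)

    print(I_U, I_pres, I_rev)   # 8/3, 8/3, -8/3: the sign follows det J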

6.3.4 Integral of a Differential Form over a Manifold

Integration of an n-form on Rn is not so different from integration of a function. Our


approach to integration has several distinguishing features:

(i) The manifold must be oriented.


(ii) On an n-manifold, we can only integrate n-forms, not functions.
(iii) The n-forms must have compact support.
Let M be an oriented manifold of dimension n, with an oriented atlas {(Uα , φα )} giving
the orientation of M . Denote by Ωkc (M ) the vector space of smooth k-forms with compact
support on M. Suppose (U, φ) is a chart in this atlas. If ω ∈ Ωnc(U) is an n-form with
compact support on U , then because φ : U → φ(U ) is a diffeomorphism, (φ−1 )∗ ω is an

n-form with compact support on the open subset φ(U ) ⊆ Rn . We define the integral of ω
on U to be

    ∫_U ω := ∫_{φ(U)} (φ−1)∗ω.

If (U, ψ) is another chart in the oriented atlas with the same U, then φ ◦ ψ−1 : ψ(U) → φ(U)
is by definition an orientation-preserving diffeomorphism, and so

    ∫_{φ(U)} (φ−1)∗ω = ∫_{ψ(U)} (φ ◦ ψ−1)∗(φ−1)∗ω = ∫_{ψ(U)} (ψ−1)∗ω.

Thus the integral ∫_U ω on a chart U of the atlas is well-defined, independent of the choice of
coordinates on U. By the linearity of the integral on Rn, if ω, τ ∈ Ωnc(U), then

    ∫_U (ω + τ) = ∫_U ω + ∫_U τ.

Now let ω ∈ Ωnc (M ). Choose a partition of unity {ρα } subordinate to the open cover {Uα }.
Because ω has compact support and a partition of unity has locally finite supports, all except
finitely many ρα ω are identically zero. In particular,
    ω = Σα ρα ω

is a finite sum. Recall the elementary topological fact that the closure of A ∩ B is contained in (closure of A) ∩ (closure of B). This means that

    supp(ρα ω) ⊆ supp(ρα) ∩ supp(ω).

In particular, supp(ρα ω) is a closed subset of the compact set supp ω and is hence compact.
Since ρα ω is an n-form with compact support in the chart Uα, its integral ∫_{Uα} ρα ω is defined.
Thus we can define the integral of ω over M to be the finite sum

    ∫_M ω := Σα ∫_{Uα} ρα ω.

For this integral to be well-defined, we must show that it is independent of the choices of
oriented atlas and partition of unity. Let {Vβ } be another oriented atlas of M specifying
the same orientation and {χβ} a partition of unity subordinate to {Vβ}. Then {(Uα ∩
Vβ, φα|Uα∩Vβ)} and {(Uα ∩ Vβ, ψβ|Uα∩Vβ)} are two new atlases of M specifying the orientation
of M, and

    Σα ∫_{Uα} ρα ω = Σα ∫_{Uα} Σβ ρα χβ ω            (Σβ χβ = 1)
                   = Σα Σβ ∫_{Uα} ρα χβ ω            (finite sums)
                   = Σα Σβ ∫_{Uα ∩ Vβ} ρα χβ ω,

where the last line follows since supp(ρα χβ) ⊆ Uα ∩ Vβ. By symmetry, Σβ ∫_{Vβ} χβ ω is equal
to the same sum. Hence

    Σα ∫_{Uα} ρα ω = Σβ ∫_{Vβ} χβ ω,

as desired.

Proposition 6.3.7
Let ω be an n-form with compact support on an oriented n-manifold M. If −M denotes
the same manifold with the opposite orientation, then

    ∫_{−M} ω = − ∫_M ω.

Proof
By the definition of an integral, it suffices to show that for every chart (U, φ) = (U, x1 , . . . , xn )
and differential form τ ∈ Ωnc (U ), if (U, φ̄) = (U, −x1 , x2 , . . . , xn ) is the chart with the op-
posite orientation, then

    ∫_{φ̄(U)} (φ̄−1)∗τ = − ∫_{φ(U)} (φ−1)∗τ.

Let r1 , . . . , rn be the standard coordinates on Rn . Then xi = ri ◦ φ and ri = xi ◦ φ−1 .


With φ̄, the only difference is that for i = 1,

−x1 = r1 ◦ φ̄, r1 = −x1 ◦ φ̄−1 .


Suppose τ = f dx1 ∧ · · · ∧ dxn on U . Then

(φ̄−1)∗τ = (f ◦ φ̄−1) d(x1 ◦ φ̄−1) ∧ d(x2 ◦ φ̄−1) ∧ · · · ∧ d(xn ◦ φ̄−1)


= −(f ◦ φ̄−1 )dr1 ∧ dr2 ∧ · · · ∧ drn .

Similarly,

(φ−1 )∗ τ = (f ◦ φ−1 )dr1 ∧ dr2 ∧ · · · ∧ drn .


Since φ ◦ φ̄−1 : φ̄(U ) → φ(U ) is given by

(φ ◦ φ̄−1 )(a1 , a2 , . . . , an ) = (−a1 , a2 , . . . , an ),

the absolute value of its Jacobian determinant is

|J(φ ◦ φ̄−1 )| = |−1| = 1.

It follows that

    ∫_{φ̄(U)} (φ̄−1)∗τ = − ∫_{φ̄(U)} (f ◦ φ̄−1) dr1 . . . drn                               (calculations above)
                      = − ∫_{φ̄(U)} ((f ◦ φ−1) ◦ (φ ◦ φ̄−1)) |J(φ ◦ φ̄−1)| dr1 . . . drn
                      = − ∫_{φ(U)} (f ◦ φ−1) dr1 . . . drn                                (change of variables in Rn)
                      = − ∫_{φ(U)} (φ−1)∗τ.

Our treatment of integration above can be extended nearly verbatim to oriented manifolds

with boundary. It has the virtue of simplicity and utility in proving theorems. However, it
is not practical for the actual computation of integrals. It is best to consider integrals over
a parameterized set for explicit integral calculations.

Definition 6.3.4 (Parameterized Set)


A parameterized set in an oriented n-manifold M is a subset A ⊆ M together with a
smooth map F : D → M from a compact domain of integration D ⊆ Rn to M such
that A = F (D) and F restricts to an orientation-preserving diffeomorphism from
int(D) to F (int(D)).
Note that by the smooth invariance of domain for manifolds, F (int(D)) is necessarily an
open subset of M. The smooth map F : D → A is called a parameterization of A.
If A is a parameterized set in M with parameterization F : D → A and ω is a smooth n-form
on M, not necessarily with compact support, then we define ∫_A ω to be ∫_D F∗ω. It can be
shown that the definition of ∫_A ω is independent of the parameterization and, in the case
that A is a manifold, it agrees with the earlier definition of integration over a manifold.
Subdividing an oriented manifold into a union of parameterized sets can be an effective
method of calculating an integral over the manifold.

Example 6.3.8 (Integral over a Sphere)


In spherical coordinates, ρ is the distance √(x2 + y2 + z2) of the point (x, y, z) ∈ R3 to the
origin, ϕ is the angle that the vector ⟨x, y, z⟩ makes with the positive z-axis, and θ is the
angle that the vector ⟨x, y⟩ in the (x, y)-plane makes with the positive x-axis. Let ω be

the 2-form on the unit sphere S2 ⊆ R3 given by

    ω = (dy ∧ dz)/x    where x ≠ 0,
    ω = (dz ∧ dx)/y    where y ≠ 0,
    ω = (dx ∧ dy)/z    where z ≠ 0.

We wish to compute ∫_{S2} ω.
In Riemannian geometry, it can be shown that ω is the area form of the sphere S2 with
respect to the Euclidean metric. Therefore, the integral ∫_{S2} ω is the surface area of the
sphere.
The sphere S2 has a parametrization by spherical coordinates

    F(ϕ, θ) = (sin ϕ cos θ, sin ϕ sin θ, cos ϕ)

on D := {(ϕ, θ) ∈ R2 : ϕ ∈ [0, π], θ ∈ [0, 2π]}. Since

    F∗x = sin ϕ cos θ,    F∗y = sin ϕ sin θ,    F∗z = cos ϕ,

we have

    F∗dy = d(F∗y) = cos ϕ sin θ dϕ + sin ϕ cos θ dθ    and    F∗dz = − sin ϕ dϕ,

so for x ≠ 0,

    F∗ω = (F∗dy ∧ F∗dz)/(F∗x) = sin ϕ dϕ ∧ dθ.

For y ≠ 0 and z ≠ 0, similar calculations show that F∗ω is given by the same formula.
Therefore, F∗ω = sin ϕ dϕ ∧ dθ everywhere on D, and

    ∫_{S2} ω = ∫_D F∗ω = ∫_0^{2π} ∫_0^{π} sin ϕ dϕ dθ = 2π [− cos ϕ]_0^π = 4π.
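As a quick numerical sanity check of this computation (a sketch, not part of the text), one can approximate ∫_D sin ϕ dϕ dθ over the parameter rectangle by a midpoint rule and compare with 4π ≈ 12.566.

    import numpy as np

    # Midpoint-rule approximation of ∫_0^{2π} ∫_0^{π} sin(phi) dphi dtheta
    n_phi, n_theta = 400, 800
    dphi, dtheta = np.pi / n_phi, 2 * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * dphi
    theta = (np.arange(n_theta) + 0.5) * dtheta

    integral = np.sum(np.sin(phi)[:, None] * np.ones(n_theta)[None, :]) * dphi * dtheta
    print(integral, 4 * np.pi)   # both ≈ 12.566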

6.3.5 Integration over a Zero-Dimensional Manifold

The discussion of integration so far implicitly assumes that the manifold M has dimension
n ≥ 1. We now treat integration over a zero-dimensional manifold. A compact oriented
0-manifold M is a finite collection of points, each point oriented by +1 or −1. We write this
as M = Σi pi − Σj qj. The integral of a 0-form f : M → R is defined to be the sum

    ∫_M f := Σi f(pi) − Σj f(qj).
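For instance, the boundary of the oriented interval [a, b] with its boundary orientation (Example 6.2.13) is the compact oriented 0-manifold ∂[a, b] = {b} − {a}, so for a 0-form f,

    ∫_{∂[a,b]} f = f(b) − f(a),

which is precisely the right-hand side of the fundamental theorem of calculus; this is how Stokes' theorem below recovers it.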

6.3.6 Stokes’ Theorem

Let M be an oriented manifold of dimension n with boundary. We give its boundary ∂M
the boundary orientation and let ι : ∂M → M be the inclusion map. If ω is an (n − 1)-form
on M, it is customary to write ∫_{∂M} ω instead of ∫_{∂M} ι∗ω.

Theorem 6.3.9 (Stokes)


For any smooth (n − 1)-form ω with compact support on the oriented n-manifold M ,
    ∫_M dω = ∫_{∂M} ω.

Proof
Choose an atlas {(Uα, φα)} for M in which each Uα is diffeomorphic to either Rn or
Hn via an orientation-preserving diffeomorphism. This is possible since any open disk is
diffeomorphic to Rn and any half-disk containing its boundary diameter is diffeomorphic
to Hn. Let {ρα} be a smooth partition of unity subordinate to {Uα}. We showed in the
preceding section that ρα ω has compact support in Uα.
Suppose Stokes' theorem holds for Rn and Hn. Then it holds for all the charts in our
atlas, which are diffeomorphic to Rn or Hn. This is because we defined

    ∫_{Uα} dω := ∫_{φα(Uα)} (φα−1)∗ dω

with φα(Uα) = Rn or Hn. Similarly,

    ∫_{∂Uα} ω := ∫_{∂Uα} ι∗ω := ∫_{φα(∂Uα)} (φα−1)∗ ι∗ω.

Note here that

    (∂M) ∩ Uα = ∂Uα.

Therefore,

    ∫_{∂M} ω = ∫_{∂M} Σα ρα ω                       (Σα ρα = 1)
             = Σα ∫_{∂M} ρα ω                        (finite sum)
             = Σα ∫_{∂Uα} ρα ω                       (supp(ρα ω) ⊆ Uα)
             = Σα ∫_{Uα} d(ρα ω)                     (Stokes' theorem for Uα)
             = Σα ∫_M d(ρα ω)                        (supp d(ρα ω) ⊆ supp(ρα ω) ⊆ Uα)
             = ∫_M d(Σα ρα ω)                        (finite sum)
             = ∫_M dω.

Thus it suffices to prove Stokes’ theorem for Rn and Hn . We give a proof for H2 for the
sake of simplicity.
Proof of Stokes’ theorem in H2 : Let x, y be coordinates on H2 . Then the standard ori-
entation on H2 is given by dx ∧ dy, and the boundary orientation on ∂H2 is given by
ι−∂/∂y (dx ∧ dy) = dx.
The form ω is a linear combination
ω = f (x, y)dx + g(x, y)dy

for smooth functions f, g with compact support in H2 . Since the supports of f, g are
compact, we may choose a real number a > 0 sufficiently large so that the supports of
f, g are contained in the interior of the square [−a, a] × [0, a]. We write fx , fy to denote
the partial derivatives of f with respect to x, y, respectively. Then

 
    dω = (∂g/∂x − ∂f/∂y) dx ∧ dy = (gx − fy) dx ∧ dy,

and
    ∫_{H2} dω = ∫_{H2} gx dx dy − ∫_{H2} fy dx dy
              = ∫_0^a ∫_{−a}^a gx dx dy − ∫_{−a}^a ∫_0^a fy dy dx.

In the expression above,

    ∫_{−a}^a gx(x, y) dx = g(x, y)|_{x=−a}^{x=a} = 0

since supp(g) lies in the interior of [−a, a] × [0, a]. Similarly,

    ∫_0^a fy(x, y) dy = f(x, y)|_{y=0}^{y=a} = −f(x, 0)

because f(x, a) = 0. Thus the expression above reduces to

    ∫_{H2} dω = ∫_{−a}^a f(x, 0) dx.

On the other hand, ∂H2 is the x-axis and dy = 0 on ∂H2. It follows from the definition
of ω that

    ∫_{∂H2} ω = ∫_{−a}^a f(x, 0) dx.
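The identity just established, ∫_{H2} (gx − fy) dx dy = ∫ f(x, 0) dx, can be verified numerically. The sketch below is ours: for convenience it uses rapidly decaying rather than compactly supported functions f and g, so the boundary terms at infinity are only negligible (to machine precision) rather than exactly zero; for this choice of f the common value is √π ≈ 1.77245.

    import numpy as np

    # ω = f dx + g dy with f = e^{-x^2-y^2} and g = x e^{-x^2-y^2}
    f  = lambda x, y: np.exp(-x**2 - y**2)
    gx = lambda x, y: (1 - 2 * x**2) * np.exp(-x**2 - y**2)   # ∂g/∂x
    fy = lambda x, y: -2 * y * np.exp(-x**2 - y**2)           # ∂f/∂y

    # Grid on [-L, L] x [0, L]; the integrands are negligible outside it.
    L, n = 8.0, 2000
    x = np.linspace(-L, L, n); y = np.linspace(0.0, L, n)
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y)

    lhs = np.sum(gx(X, Y) - fy(X, Y)) * dx * dy   # ∫_{H^2} dω
    rhs = np.sum(f(x, 0.0)) * dx                  # ∫_{∂H^2} ω = ∫ f(x, 0) dx
    print(lhs, rhs, np.sqrt(np.pi))               # all ≈ 1.77245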

6.3.7 Line Integrals & Green’s Theorem

We now apply Stokes’ theorem for manifolds to unify theorems of vector calculus on R2 , R3 .
Recall the calculus notation F · dr = P dx + Q dy + R dz for a vector field F = ⟨P, Q, R⟩
and coordinates r = (x, y, z). As in calculus, we assume in this section that functions, vector
fields, and regions of integration have sufficient smoothness or regularity properties so that
all the integrals are defined.
Theorem 6.3.10 (Fundamental Theorem for Line Integrals)
Let C be a curve in R3 , parameterized by some r(t) = (x(t), y(t), z(t)), t ∈ [a, b] and
let F be a vector field on R3 . If F = grad f for some scalar function f , then
    ∫_C F · dr = f(r(b)) − f(r(a)).

Suppose in Stokes’ theorem we take M to be a curve with parameterization r(t), t ∈ [a, b],
and ω to be the function f on C. Then
    ∫_C dω = ∫_C df = ∫_C (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz = ∫_C grad f · dr

and

    ∫_{∂C} ω = f|_{r(a)}^{r(b)} = f(r(b)) − f(r(a)).

In this case Stokes’ theorem specializes to the fundamental theorem for line integrals.

Theorem 6.3.11 (Green)


If D is a plane region with boundary ∂D, and P, Q are smooth functions on D, then
    ∫_{∂D} P dx + Q dy = ∫_D (∂Q/∂x − ∂P/∂y) dx dy.

To obtain Green’s theorem, let M be a plane region D with boundary ∂D and let ω be the
1-form P dx + Qdy on D. Then
    ∫_{∂D} ω = ∫_{∂D} P dx + Q dy

and

    ∫_D dω = ∫_D Py dy ∧ dx + Qx dx ∧ dy
           = ∫_D (Qx − Py) dx ∧ dy
           = ∫_D (Qx − Py) dx dy.
In this case, Stokes’ theorem specializes to Green’s theorem in the plane.