NOTES ON MANIFOLDS

DAVID PERKINSON AND LIVIA XU
Contents
1. Introduction and Overview
2. Definition of a Manifold
2.1. Projective space
3. Differentiable Maps
4. Tangent Spaces
4.1. Three definitions of the tangent space
4.2. The three definitions are equivalent
4.3. Standard bases
4.4. The differential of a mapping of manifolds
5. Linear Algebra
5.1. Products and coproducts
5.2. Tensor products
5.3. Symmetric and exterior products
5.4. Dual spaces
6. Vector Bundles
7. Tangent Bundle
8. The Algebra of Differential Forms
8.1. The pullback of a differential form by a smooth map
8.2. The exterior derivative
9. Oriented Manifolds
10. Integration of Forms
10.1. Definition and properties of the integral
10.2. Manifolds with boundary
10.3. Stokes' theorem on manifolds
11. de Rham Cohomology
11.1. Definition and first properties
11.2. Homotopy invariance of de Rham cohomology
11.3. The Mayer-Vietoris sequence
12. Differential Forms on Riemannian Manifolds
12.1. Scalar products
12.2. The star operator
12.3. Poincaré duality
13. Toric Varieties
Figure 1. Charts (U, h) and (V, k) on the sphere and their corresponding transition mapping $k \circ h^{-1} \colon h(U \cap V) \to k(U \cap V)$ on the overlap.
that aspect of an ordinary atlas of the earth. Instead, as above, think of the un-
derlying substrate of a manifold as putty. In mathematical terms, the pages of the
atlas are open subsets of a topological space. We will assume the reader can quickly
“review” the features of topology summarized in Appendix B.
One last observation: unlike our example, a manifold does not come with an
embedding into Euclidean space—the embedding is separate information. Whit-
ney’s embedding theorem says that, in general, an n-dimensional manifold can be
embedded in R2n . So surfaces (two-dimensional manifolds) can be embedded in R4 ,
but something special needs to occur to get an embedding in R3 , as in the case of
a sphere. The Klein bottle, which cannot be embedded in R3 , is more typical.
One of our goals will be to prove the ultimate version of Stokes' theorem:
$$\int_{\partial M} \omega = \int_M d\omega.$$
Here, ω is an (n − 1)-form on an n-dimensional manifold M and ∂M is the "boundary" of M (so we will need to consider manifolds with boundary). The operator d is called the exterior derivative. It will have the property that applying d twice gives zero, $d^2 = 0$, which may remind you
of some results from ordinary vector calculus in Rn . Exploring this property leads
to remarkable topological invariants in the form of cohomology groups. If time
permits, we will consider these groups in the case of two classes of manifolds: toric
varieties and Grassmannians. The cohomology of Grassmann manifolds has a ring
structure (i.e., well-behaved addition and multiplication) known as the Schubert
calculus.
2. Definition of a Manifold
Definition 2.1 (Charts). Let X be a topological space. An n-dimensional chart on X is a homeomorphism $h \colon U \xrightarrow{\ \cong\ } U'$ from an open subset U ⊆ X, the chart domain, onto an open subset $U' \subseteq \mathbb{R}^n$. We denote this chart by (U, h) (see Figure 2).
We say that X is locally Euclidean if every point in X belongs to some chart domain
of X. If X is locally Euclidean, choosing a chart containing some point p ∈ X is
called taking local coordinates at p.
Figure 2. A chart.
Definition 2.2. If (U, h) and (V, k) are two n-dimensional charts on X such that
$U \cap V \neq \emptyset$, then the homeomorphism $(k \circ h^{-1})|_{h(U \cap V)}$ from h(U ∩ V ) to k(U ∩ V ) is
called the change-of-charts map, change of coordinates, or transition map, from h to
k (see Figure 3). If the transition map is furthermore a diffeomorphism (note that
these maps have domains and codomains in Rn ), then we say that the two charts are
differentiably related. Recall that a function between subsets of Euclidean spaces
is a diffeomorphism if it is bijective and both it and its inverse are differentiable.
Throughout this text, we will take differentiable to mean smooth, i.e., having partial
derivatives of all orders.
Definition 2.3. A set of n-dimensional charts on X whose chart domains cover
all of X is an n-dimensional atlas on X. The atlas is differentiable if all its charts
are differentiably related, and two differentiable atlases A and B are equivalent if
A ∪ B is also differentiable.
Definition 2.4. An n-dimensional differentiable structure on a topological space X
is a maximal n-dimensional differentiable atlas with respect to inclusion.
Example 2.6. As a first (trivial) example of an n-manifold, take any open sub-
set U ⊆ Rn with the atlas {(U, idU )} containing a single chart.
Thus, for example, the one-sphere is the unit circle in the plane, and the two-
sphere is the usual sphere in three-space. If p = (p1 , . . . , pn+1 ) ∈ S n , then there
exists some i such that $p_i \neq 0$. Let U be any open neighborhood of p consisting
solely of points on the sphere whose i-th coordinates are nonzero. Then the mapping $h \colon U \to h(U) \subset \mathbb{R}^n$ defined by dropping the i-th coordinate of each point in U
serves as a chart containing p.
For another atlas, compatible with the one just given, one can use stereographic projection. Here the atlas has two open sets:
$$U^+ := S^n \setminus \{(0, 0, \dots, 0, 1)\} \qquad\text{and}\qquad U^- := S^n \setminus \{(0, 0, \dots, 0, -1)\},$$
with the chart on $U^+$ defined by
$$(x_1, \dots, x_{n+1}) \mapsto \left(\frac{x_1}{1 - x_{n+1}}, \dots, \frac{x_n}{1 - x_{n+1}}\right),$$
and the analogous formula with $1 + x_{n+1}$ in the denominators on $U^-$.
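As a quick numerical sanity check (our sketch, not part of the original notes), the following Python snippet implements the two stereographic charts on $S^2$, verifies the usual inverse formula, and confirms that the transition from the north-pole chart to the south-pole chart is the inversion $u \mapsto u/|u|^2$. All helper names are ours.

```python
import numpy as np

def stereo_north(p):
    """Chart on U+ = S^n minus the north pole: project from (0, ..., 0, 1)."""
    return p[:-1] / (1 - p[-1])

def stereo_south(p):
    """Chart on U- = S^n minus the south pole: project from (0, ..., 0, -1)."""
    return p[:-1] / (1 + p[-1])

def stereo_north_inv(u):
    """Inverse of stereo_north: map u in R^n back to the sphere."""
    s = np.dot(u, u)
    return np.append(2 * u, s - 1) / (s + 1)

# A sample point on S^2.
p = np.array([1.0, 2.0, 2.0]) / 3.0

u = stereo_north(p)
assert np.allclose(stereo_north_inv(u), p)                          # chart, then inverse
assert np.allclose(stereo_south(stereo_north_inv(u)), u / np.dot(u, u))  # transition map
```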
2.1. Projective space. Projective n-space, denoted Pn , is an n-manifold whose
points are the lines in Rn+1 passing through the origin, i.e., the collection of all
one-dimensional linear subspaces of $\mathbb{R}^{n+1}$. To represent a line ℓ through the origin in $\mathbb{R}^{n+1}$, choose any nonzero point p ∈ ℓ. (The point p is a basis for ℓ as a one-dimensional linear subspace.) For points $p, q \in \mathbb{R}^{n+1}$ write p ∼ q if p = λq for some nonzero constant λ. Then p and q represent the same line if and only if p ∼ q. So we take our formal definition of projective n-space to be
$$P^n := (\mathbb{R}^{n+1} \setminus \{\vec{0}\})/\sim$$
with the quotient topology. (Thus, a set of points in $P^n$ is open if and only if the union of the set of lines they represent is an open subset of $\mathbb{R}^{n+1} \setminus \{\vec{0}\}$.)
If ℓ ∈ $P^n$ is represented by (the equivalence class of) $p = (a_1, \dots, a_{n+1}) \in \ell$, then $(a_1, \dots, a_{n+1})$ are called homogeneous coordinates for ℓ, with the understanding that these "coordinates" are only defined up to scaling by a nonzero constant. One sometimes sees the notation $(a_1 : a_2 : \cdots : a_{n+1})$ to represent the point ℓ ∈ $P^n$, but we will stick with $(a_1, \dots, a_{n+1})$.
We would now like to impose a manifold structure on Pn . As a warm-up, we
first treat the case n = 2. Define the sets
$$U_x := \{(x, y, z) \in P^2 : x \neq 0\}, \quad U_y := \{(x, y, z) \in P^2 : y \neq 0\}, \quad U_z := \{(x, y, z) \in P^2 : z \neq 0\},$$
and define $\phi_x \colon U_x \to \mathbb{R}^2$ by $\phi_x(x, y, z) = (y/x, z/x)$, with $\phi_y$ and $\phi_z$ defined analogously by dividing by the nonzero coordinate and dropping it. It is easy to check that each of these is a homeomorphism. For instance, the inverse of $\phi_x$ is given by $(u, v) \mapsto (1, u, v)$. The underlying motivation for these charts is as
follows: Take Ux , for instance. Fix the plane x = 1 in R3 . Then a line through the
origin in R3 meets this plane if and only if a representative nonzero point on the line
has x-coordinate not equal to 0. So Ux consists of the lines meeting the plane x = 1.
If ℓ ∈ $U_x$ has homogeneous coordinates (x, y, z), we can scale (x, y, z) by λ = 1/x to get another representative (1, y/x, z/x). This is exactly the point where ℓ meets the
plane x = 1. In this way, each point in Ux has a set of homogeneous coordinates
of the form (1, u, v). Dropping the 1, which is superfluous information, gives us
the mapping φx . The essential idea is that there is a one-to-one correspondence
between points in the plane x = 1 and points in Ux , and the plane x = 1 is the
same as R2 via (1, u, v) 7→ (u, v).
The collection
$$\{(U_x, \phi_x), (U_y, \phi_y), (U_z, \phi_z)\}$$
is the standard atlas for $P^2$. What does a typical transition function look like? Consider the transition from $U_x$ to $U_y$. The overlap is $U_x \cap U_y = \{(x, y, z) \in P^2 : x \neq 0,\ y \neq 0\}$, and we have
$$(u, v) \xmapsto{\ \phi_x^{-1}\ } (1, u, v) \xmapsto{\ \phi_y\ } (1/u, v/u),$$
so the transition map is $(\phi_y \circ \phi_x^{-1})(u, v) = (1/u, v/u)$.
In general, the standard charts of $P^n$ are given by $(U_i, \phi_i)$ for $i = 1, \dots, n+1$, where $U_i = \{x = (x_1, \dots, x_{n+1}) \in P^n \mid x_i \neq 0\}$, and
$$\phi_i \colon U_i \to \mathbb{R}^n, \qquad x \mapsto \left(\frac{x_1}{x_i}, \dots, \widehat{\frac{x_i}{x_i}}, \dots, \frac{x_{n+1}}{x_i}\right).$$
The hat over $\frac{x_i}{x_i}$ means that this component of the vector should be omitted (just like we did for $P^2$).
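As an illustration (ours, not from the original notes), the following sympy sketch checks the transition map $\phi_y \circ \phi_x^{-1}$ for the standard atlas on $P^2$, confirming the formula $(u, v) \mapsto (1/u, v/u)$ obtained above.

```python
import sympy as sp

u, v = sp.symbols('u v', nonzero=True)

def phi_x(x, y, z):
    # Chart on U_x: scale so the first homogeneous coordinate is 1, then drop it.
    return (y / x, z / x)

def phi_y(x, y, z):
    # Chart on U_y: scale so the second homogeneous coordinate is 1, then drop it.
    return (x / y, z / y)

# phi_x^{-1}(u, v) = (1, u, v); compose with phi_y to get the transition map.
transition = [sp.simplify(c) for c in phi_y(1, u, v)]
assert sp.simplify(transition[0] - 1/u) == 0 and sp.simplify(transition[1] - v/u) == 0
print(transition)   # [1/u, v/u]
```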
3. Differentiable Maps
Let M be a manifold and X some topological space. To study the behavior of a
map f : M → X near some point p ∈ M , we choose a chart (U, h) at p and look at
the “downstairs map” f ◦ h−1 : h(U ) → X instead (see Figure 4). If f ◦ h−1 has a
certain property locally at h(p), then we say that f has the property at p relative
to the chart (U, h). If this property is independent of the choice of charts, then we
just say that f has this property at p. Our first example of such a property is the
differentiability of a function.
Definition 3.1 (Real-valued functions on M ). A function f : M → R is differen-
tiable at p ∈ M if f ◦ h−1 is differentiable for some chart (U, h) at p.
Exercise 3.2. Let f : M → R be a function on a manifold M , and let (U, h) and
(V, k) be two charts at p ∈ M . Show that if f is differentiable at p relative to
(U, h), then f is differentiable at p relative to (V, k). (You can use the fact that a
composition of differentiable functions on Euclidean space is differentiable.)
Definition 3.3 (Differentiable mappings of manifolds). A continuous map f : M →
N between manifolds is differentiable at p ∈ M if it is differentiable with respect to
charts at p ∈ M and at f (p) ∈ N . Namely, if (U, h) is a chart at p and (V, k) is a
chart at f (p) such that f (U ) ⊆ V , we want the map k ◦ f ◦ h−1 to be differentiable
(recall that h(U ) ⊆ Rm and k(V ) ⊆ Rn for some m, n):
$$\begin{array}{ccc} U & \xrightarrow{\ f\ } & V \\ \downarrow{\scriptstyle h} & & \downarrow{\scriptstyle k} \\ h(U) & \xrightarrow{\ k \circ f \circ h^{-1}\ } & k(V) \end{array}$$
For a picture of this, see Figure 5. If f is bijective with a differentiable inverse,
then f is called a diffeomorphism.
Remark 3.4. The reader should check that differentiability at p ∈ M is independent
of the choice of charts.
Example 3.5 (The Veronese embedding). Define $\nu_2 \colon P^2 \to P^5$ by
$$(x, y, z) \mapsto (x^2, xy, xz, y^2, yz, z^2),$$
which is differentiable.
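A quick sympy check (our addition) that $\nu_2$ is well defined on projective space: scaling the homogeneous coordinates (x, y, z) by a nonzero λ scales every component of the image by $\lambda^2$, so the image point in $P^5$ is unchanged.

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda', nonzero=True)

def nu2(x, y, z):
    # The Veronese map on homogeneous coordinates.
    return sp.Matrix([x**2, x*y, x*z, y**2, y*z, z**2])

scaled = nu2(lam*x, lam*y, lam*z)
# Every component picks up exactly a factor of lambda^2.
assert sp.simplify(scaled - lam**2 * nu2(x, y, z)) == sp.zeros(6, 1)
```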
Example 3.6. The existence of a diffeomorphism between two manifolds means
that as far as manifold structures are concerned, there is no difference between
the manifolds (except for, possibly, their names). Thinking in those terms, the
following theorem may be surprising:
Theorem 3.7 (Milnor). There exist differentiable structures D and D0 on the
seven-sphere S 7 with no diffeomorphism (S 7 , D) → (S 7 , D0 ).
4. Tangent Spaces
If we try to imagine a typical tangent space, we might think of a surface S
sitting inside R3 with a plane “kissing” the surface at some point. To compute the
tangent plane, we could create a parametrization of the surface near that point.
This would amount to finding an open set U ⊆ R2 and a “nice”2 differentiable
mapping f : U → R3 with a point p ∈ U mapping to the point in question, f (p).
Recall that by definition,
$$\lim_{|h| \to 0} \frac{|f(p + h) - f(p) - Df_p(h)|}{|h|} = 0,$$
where $Df_p \colon \mathbb{R}^2 \to \mathbb{R}^3$ is the derivative of f at p. Hence, up to first order, we have $f(p + h) \approx f(p) + Df_p(h)$. Letting $Af_p(h) := f(p) + Df_p(h)$, we get the best affine approximation to f at p:
$$Af_p \colon \mathbb{R}^2 \to \mathbb{R}^3, \qquad h \mapsto f(p) + Df_p(h).$$
Its image will be the tangent plane at f (p).
An underlying assumption we made above in thinking about the tangent space is
that our surface and its tangent plane are embedded in a larger space, R3 . Our goal
in this section is to take on the intriguing task of creating an intrinsic definition
of the tangent space of a manifold at a given point, i.e., one that does not depend
on embedding the manifold into another space. We will give three constructions of
the tangent space which we will call the geometric, the algebraic, and the physical
definitions of the tangent space, and we will see that all three are equivalent.3
At a couple of places in the following discussion, we will need a technical lemma.
We’ll get that out of the way now:
Lemma 4.1. Let f : W → R be a smooth function on some open subset W of Rn ,
and let p ∈ W . Then there exist smooth functions $g_i \colon W \to \mathbb{R}$ for $i = 1, \dots, n$ satisfying
$$g_i(p) = \frac{\partial f}{\partial x_i}(p) \qquad\text{and}\qquad f(x) = f(p) + \sum_{i=1}^{n} g_i(x)\,(x_i - p_i).$$
Proof. For x ∈ W , apply the fundamental theorem of calculus and then the chain
rule to get
$$\begin{aligned} f(x) - f(p) &= \int_0^1 \frac{d}{dt} f(tx + (1-t)p)\, dt \\ &= \int_0^1 \sum_{i=1}^n \frac{\partial f}{\partial x_i}(tx + (1-t)p)\,(x_i - p_i)\, dt \\ &= \sum_{i=1}^n \left( \int_0^1 \frac{\partial f}{\partial x_i}(tx + (1-t)p)\, dt \right) (x_i - p_i). \end{aligned}$$
2“Nice” would mean that the mapping and its derivative are injective on U .
3These names are ad hoc. There is no standard terminology.
Define
$$g_i(x) := \int_0^1 \frac{\partial f}{\partial x_i}(tx + (1-t)p)\, dt.$$
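As a concrete instance (our sketch, not in the original), the following sympy computation carries out the construction of Lemma 4.1 for the polynomial $f(x, y) = x^2 y + y^3$ at p = (0, 0) and checks the resulting identity.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y + y**3            # sample smooth function, p = (0, 0)

# g_i(x) = integral over t in [0, 1] of (partial f / partial x_i)(t x, t y).
g1 = sp.integrate(sp.diff(f, x).subs({x: t*x, y: t*y}), (t, 0, 1))
g2 = sp.integrate(sp.diff(f, y).subs({x: t*x, y: t*y}), (t, 0, 1))

# Check g_i(p) = (partial f / partial x_i)(p) and f = f(p) + g1*x + g2*y.
assert g1.subs({x: 0, y: 0}) == sp.diff(f, x).subs({x: 0, y: 0})
assert g2.subs({x: 0, y: 0}) == sp.diff(f, y).subs({x: 0, y: 0})
assert sp.simplify(f.subs({x: 0, y: 0}) + g1*x + g2*y - f) == 0
```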
We would now like to describe a basis for $T_p^{\mathrm{alg}}(M)$. Fix a chart (U, h) at p and define the derivations $\partial_i$ for $i = 1, \dots, n$ by
$$\partial_i(f) := \frac{\partial}{\partial x_i}(f \circ h^{-1})(h(p))$$
for each f ∈ Ep (M ). In other words, we use the chart (U, h) to identify f with an
ordinary multivariable function, and then take its i-th partial derivative. We leave
the straightforward check that each ∂i is a derivation to the reader. The reader
may also check that
$$\partial_i(h_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases}$$
We claim that these ∂i form a basis for T alg . By the above displayed equation, they
are linearly independent. To see they span is not so easy. So the reader may want
to put off the following argument until well-rested! Take $v \in T_p^{\mathrm{alg}}$, and let f be a smooth real-valued function defined near p. Define $\ell \colon h(U) \to \mathbb{R}$ by $\ell := f \circ h^{-1}$. By Lemma 4.1,
$$\ell(x) = \ell(h(p)) + \sum_{i=1}^{n} g_i(x)\,(x_i - h(p)_i) = f(p) + \sum_{i=1}^{n} g_i(x)\,(x_i - h(p)_i).$$
Thus,
$$v = \sum_{i=1}^{n} \alpha_i\, \partial_i,$$
as required.
For our last formulation of tangent space, we take perhaps the most straightfor-
ward approach. We would like to define the tangent space at p ∈ M by choosing
a chart, thus identifying M with Rn near p. We then take any vector v ∈ Rn ,
and think of it as a tangent vector at p. The problem with this approach is that it
would depend on a choice of charts, and the whole point of manifolds is to formulate
calculus without coordinates. As a minimal fix then, let’s repeat this process for
every possible chart at p. Thus, we think of a tangent vector as being a collection
of vectors in Rn , one for each chart at p. However, these vectors should somehow
reflect the way we glue charts together to construct the manifold, i.e., these vectors
should satisfy some kind of compatibility condition as we change coordinates. From
that point of view, it is perhaps natural to require the choice of vectors for each
pair of charts to be related via the derivative of the transition function between the
charts.
that h(p) = 0. In terms of the Jacobian matrix of $k \circ h^{-1}$, the equation $v(V, k) = D_{h(p)}(k \circ h^{-1})(v(U, h))$ becomes
$$[J(k \circ h^{-1})_0]\, v(U, h) = \left[\frac{\partial \tilde x_i}{\partial x_j}\Big|_{x=0}\right] \begin{pmatrix} v^1 \\ \vdots \\ v^n \end{pmatrix} = \begin{pmatrix} \tilde v^1 \\ \vdots \\ \tilde v^n \end{pmatrix} = v(V, k),$$
where $\left[\frac{\partial \tilde x_i}{\partial x_j}\right]$ is the matrix of partials of the components of $k \circ h^{-1}$.
v 7→ v(U, h).
4.2. The three definitions are equivalent. The following proposition shows
that our three definitions of tangent space are just three perspectives on the same
thing.
2. $T_p^{\mathrm{alg}}(M) \to T_p^{\mathrm{phy}}(M)$. Define $\Phi_2$ as follows:
$$\Phi_2 \colon T_p^{\mathrm{alg}}(M) \longrightarrow T_p^{\mathrm{phy}}(M), \qquad v \longmapsto \big((U, h) \mapsto (v(h_1), \dots, v(h_n))\big),$$
where v : Ep (M ) → R is a linear derivation (defined on the germs of differentiable
functions on M at p), and hi is the i-th component of h.
Linearity of Φ2 is straightforward. We need to show that v behaves well under
a change of coordinates. Let (V, k) be another chart at p. We want v(V, k) =
$D_{h(p)}(k \circ h^{-1})(v(U, h))$. That is,
$$J(k \circ h^{-1})(h(p)) \begin{pmatrix} v(h_1) \\ \vdots \\ v(h_n) \end{pmatrix} = \begin{pmatrix} v(k_1) \\ \vdots \\ v(k_n) \end{pmatrix}.$$
Note that
$$k(x) = (k \circ h^{-1} \circ h)(x) = (w \circ h)(x) = (w_1(h(x)), \dots, w_n(h(x))),$$
and, therefore,
$$k_i(x) = w_i(h(x)) = \sum_{j=1}^{n} h_j(x)\, w_{i,j}(h(x)).$$
Since v is a derivation (and recall that by assumption h(p) = k(p) = 0),
$$v(k_i) = \sum_{j=1}^{n} \big( v(h_j)\, w_{i,j}(h(p)) + h_j(p)\, v(w_{i,j} \circ h) \big) = \sum_{j=1}^{n} v(h_j)\, w_{i,j}(0).$$
To summarize, we have the maps
$$\Phi_1 \colon T_p^{\mathrm{geom}}(M) \to T_p^{\mathrm{alg}}(M)\ (\text{curves} \to \text{derivations}), \quad \Phi_2 \colon T_p^{\mathrm{alg}}(M) \to T_p^{\mathrm{phy}}(M), \quad \Phi_3 \colon T_p^{\mathrm{phy}}(M) \to T_p^{\mathrm{geom}}(M).$$
(Note that the i-th component of $(\Phi_2 \circ \Phi_1)([\alpha])$ is exactly $(h_i \circ \alpha)'(0)$.) Then, since h is a homeomorphism,
$$(h \circ \beta)(t) = h(p) + t\,(h \circ \alpha)'(0).$$
We can therefore conclude that $(h \circ \beta)'(0) = (h \circ \alpha)'(0)$, and α ∼ β as desired.
4.3. Standard bases. We have now shown in precisely what sense the three spaces
Tpgeom M , Tpalg M , and Tpphy M are actually the same object. Thus, we are safe to
talk about the tangent space to M at p, denoted Tp M , and use any of the three
definitions to denote a tangent vector at p.
Here we define the standard basis for $T_pM$ with respect to a chart (U, h) at p, denoted
$$\left(\frac{\partial}{\partial x_1}\right)_p, \dots, \left(\frac{\partial}{\partial x_n}\right)_p.$$
As an element of $T_p^{\mathrm{geom}}(M)$, we define $\left(\frac{\partial}{\partial x_i}\right)_p$ to be the equivalence class of curves represented by
$$t \mapsto h^{-1}(h(p) + t e_i),$$
where $e_i$ is the i-th standard basis vector of $\mathbb{R}^n$. As an element of $T_p^{\mathrm{alg}}M$, i.e., as a derivation, for f a germ at p, we define
$$\left(\frac{\partial}{\partial x_i}\right)_p f := \frac{\partial}{\partial x_i}(f \circ h^{-1})(h(p)).$$
And finally, as an element of $T_p^{\mathrm{phy}}M$, define $\left(\frac{\partial}{\partial x_i}\right)_p := e_i$, the i-th standard basis vector of $\mathbb{R}^n$.
Our previous discussion of the three versions of the tangent space has shown that these are bases, and one can check that they are compatible with our isomorphisms $\Phi_i$.
Where v ◦ f ∗ is the derivation that sends a germ φ at f (p) to the germ v(φ ◦ f )
at p (again, see Figure 10).
• Physical.
Choose a chart (U, h) at p and let (V, k) be a chart at f (p) such that f (U ) ⊆ V .
For v ∈ Tpphy M , define
Thus, once local coordinates are taken, the differential is the ordinary derivative
mapping given by the Jacobian matrix. See Figure 11.
f : P2 → P3
(x, y, z) 7→ (x3 , y 3 , z 3 , xyz).
Let p = (1, s, t) ∈ Ux . Then f (p) = (1, s3 , t3 , st). Consider the standard open set
$V_a = \{(a, b, c, d) \in P^3 \mid a \neq 0\}$ with coordinate mapping $\phi_a(a, b, c, d) = (b/a, c/a, d/a)$. With respect to these charts, we have the local representation $\tilde f(s, t) := \phi_a(f(\phi_x^{-1}(s, t))) = (s^3, t^3, st)$. So given $v \in T_p^{\mathrm{phy}}M$ that assigns the vector $(v_1, v_2)$ to $(U_x, \phi_x)$, we have that $df_p(v)$ assigns
$$J\tilde f(s, t) \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$$
to $(V_a, \phi_a)$.
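To make the local computation concrete (our sketch; $\tilde f$ is the local representation in the charts above), sympy gives the Jacobian directly:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Local representation of f(x, y, z) = (x^3, y^3, z^3, xyz):
# f(1, s, t) = (1, s^3, t^3, s t), and phi_a drops the leading 1.
f_tilde = sp.Matrix([s**3, t**3, s*t])

J = f_tilde.jacobian([s, t])
print(J)   # Matrix([[3*s**2, 0], [0, 3*t**2], [t, s]])
```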
Remark 4.13. The differential is functorial! The differential of the identity of M is
the identity of Tp M :
d idp = idTp M .
The differential also respects composition, i.e., the chain rule holds. For a composition $M_1 \xrightarrow{\ f\ } M_2 \xrightarrow{\ g\ } M_3$ of differentiable maps, we have
$$d(g \circ f)_p = dg_{f(p)} \circ df_p.$$
5. Linear Algebra
Here we will summarize what we need to know about tensors. The presentation is mostly extracted from Frank Warner's Foundations of Differentiable Manifolds and Lie Groups. In the following, all vector spaces are finite-dimensional and defined over an arbitrary field k unless otherwise specified.
5.1. Products and coproducts. Let V and W be vector spaces. The vector space product of V and W is the Cartesian product V × W with linear structure
$$\lambda(v, w) + (v', w') := (\lambda v + v', \lambda w + w')$$
for all λ ∈ k and $(v, w), (v', w') \in V \times W$. It is the unique vector space (up to
isomorphism) having the following universal property: Given a vector space X and
linear mappings f : X → V and g : X → W , there exists a linear mapping h : X →
V × W making the following diagram commute:
$$\begin{array}{ccccc} & & X & & \\ & {\scriptstyle f}\swarrow & \downarrow{\scriptstyle \exists! h} & \searrow{\scriptstyle g} & \\ V & \xleftarrow{\ \pi_1\ } & V \times W & \xrightarrow{\ \pi_2\ } & W. \end{array}$$
Here π1 (v, w) = v and π2 (v, w) = w are the first and second projection mappings,
respectively. The mapping h is given by h(x) = (f (x), g(x)).
Similarly, define the coproduct of V and W , denoted V ⊕ W , by V ⊕ W = V × W
with the same vector space structure. It has the universal property that given linear
mappings f : V → X and g : W → X to a vector space X, there exists a unique
linear mapping h : V ⊕ W → X making the following diagram commute:
$$\begin{array}{ccccc} & & X & & \\ & {\scriptstyle f}\nearrow & \uparrow{\scriptstyle \exists! h} & \nwarrow{\scriptstyle g} & \\ V & \xrightarrow{\ \iota_1\ } & V \oplus W & \xleftarrow{\ \iota_2\ } & W. \end{array}$$
Here ι1 (v) = (v, 0) and ι2 (w) = (0, w) are the first and second inclusion mappings,
respectively. The mapping h is determined by h(v, w) = f (v) + g(w). Notice how
the commutative diagram for coproducts is obtained from that for products by
flipping the direction of the arrows.
We could also define the product V1 × · · · × Vn and the coproduct V1 ⊕ · · · ⊕ Vn
for vector spaces V1 , . . . , Vn by slightly extending the definition given above for
the case n = 2. We will leave that to the reader along with the statement and
verification of the corresponding universal properties.
It may seem peculiar to make a distinction between the product and coproduct
here given that they are exactly the same vector spaces. The difference comes
when we consider an infinite family $\{V_\alpha\}_{\alpha \in A}$ of vector spaces. In that case, we can define their product $\prod_{\alpha \in A} V_\alpha$ using the Cartesian product, just as above. However, their coproduct $\coprod_{\alpha \in A} V_\alpha$ is the vector subspace of $\prod_{\alpha \in A} V_\alpha$ consisting of vectors
for which all but a finite number of components are zero. In that way, the universal
properties are satisfied. (To see why the definition of the coproduct must change in
this case, note that for the coproduct of two vector spaces, the mapping h was given
by h(x) = f (x) + g(x). For an infinite family, the corresponding mapping would
involve an infinite sum, and infinite sums of vectors are not defined in a general
vector space.)
5.2. Tensor Products. Tensor products of vector spaces will allow us to think
of multilinear objects (such as scalar products or determinants, and their general-
izations) in terms of linear objects. We start with an informal description of the
tensor product U ⊗ V ⊗ W of three vector spaces U, V , and W . Its elements are
linear combinations of expressions of the form u ⊗ v ⊗ w where u, v, and w are
elements in U, V , and W , respectively. We are not allowed to swap the vectors,
i.e., $v \otimes u \otimes w \neq u \otimes v \otimes w$, in general. The defining property of the tensor is that it is, roughly speaking, linear with respect to each entry. Thus, for example, if α ∈ k, u′ ∈ U and v′ ∈ V ,
$$(\alpha u + u') \otimes v \otimes w = \alpha(u \otimes v \otimes w) + u' \otimes v \otimes w,$$
and
$$u \otimes (\alpha v + v') \otimes w = \alpha(u \otimes v \otimes w) + u \otimes v' \otimes w,$$
and similarly for the last component. As a last example, let w′ ∈ W and compute, using multilinearity:
Example 5.1. Let e1 , e2 be the standard basis vectors for R2 , and let f1 , f2 , f3
be the standard basis vectors for $\mathbb{R}^3$. Take $v = (2, 3) \in \mathbb{R}^2$ and $w = (3, 2, 1) \in \mathbb{R}^3$.
Then we can write v ⊗ w ∈ R2 ⊗ R3 in terms of the ei ⊗ fj :
v ⊗ w = (2, 3) ⊗ (3, 2, 1)
= (2e1 + 3e2 ) ⊗ (3f1 + 2f2 + f3 )
= 6 e1 ⊗ f1 + 4 e1 ⊗ f2 + 2 e1 ⊗ f3 + 9 e2 ⊗ f1 + 6 e2 ⊗ f2 + 3 e2 ⊗ f3 .
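In coordinates, the coefficients of $v \otimes w$ with respect to the basis $\{e_i \otimes f_j\}$ are simply the entries of the outer product matrix, as a quick numpy check confirms (our sketch):

```python
import numpy as np

v = np.array([2, 3])
w = np.array([3, 2, 1])

# The (i, j) entry is the coefficient of e_i (tensor) f_j in v (tensor) w.
print(np.outer(v, w))
# [[6 4 2]
#  [9 6 3]]
```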
If you follow the above calculations, then you understand exactly the type of
gadget we are looking for. We pause now for the formal construction (which is
not as important as understanding the above calculation). We then describe the purpose of the tensor product by exhibiting its universal property.
Remark 5.2. Note that scalars can “float around” in tensors: for α ∈ k and u ⊗ v ⊗
w ∈ U ⊗ V ⊗ W,
α(u ⊗ v ⊗ w) = (αu) ⊗ v ⊗ w = u ⊗ (αv) ⊗ w = u ⊗ v ⊗ (αw).
$$\begin{array}{ccc} V \times W & \xrightarrow{\ \iota\ } & V \otimes W \\ & {\scriptstyle f}\searrow & \downarrow{\scriptstyle \exists!\,\text{linear}} \\ & & U. \end{array}$$
Thus, the tensor product allows us to represent a bilinear mapping with a linear
mapping—each contains the same information. The proof of the universal property
is left as an exercise.
More generally, there is a similar commutative diagram that relates a multilinear
mapping V1 × · · · × Vn → U with a linear mapping V1 ⊗ · · · ⊗ Vn → U :
$$\begin{array}{ccc} & & V_1 \otimes \cdots \otimes V_\ell \\ & {\scriptstyle\iota}\nearrow & \downarrow{\scriptstyle \exists!\,\text{linear}} \\ V_1 \times \cdots \times V_\ell & \xrightarrow{\ \text{multilinear}\ } & U. \end{array}$$
Proof. Using the ideas presented above, we can define a sequence of isomorphisms
$$V \otimes W \approx V \otimes \big(\oplus_{j=1}^n k\big) \approx \oplus_{j=1}^n (V \otimes k) \approx \oplus_{j=1}^n V \approx \oplus_{j=1}^n \oplus_{i=1}^m k \approx k^{mn}.$$
$$\begin{array}{ccc} & & \operatorname{Sym}^\ell V \\ & {\scriptstyle\iota}\nearrow & \downarrow{\scriptstyle \exists!\,\text{linear}} \\ V^{\times \ell} & \xrightarrow{\ \text{multilinear, symmetric}\ } & W. \end{array}$$
For example, the dot product
$$(v, w) \mapsto \langle v, w \rangle = \sum_{i=1}^{n} v_i w_i$$
is a symmetric bilinear mapping $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$. It is determined by $(v, w) \mapsto \langle v, w \rangle$ for all $v, w \in \mathbb{R}^n$.
i.e., the basis consists of all monomials of degree ℓ in $x_1, \dots, x_n$. The result then
follows from the usual stars-and-bars argument.
We also define
$$\operatorname{Sym} V := \oplus_{\ell \geq 0} \operatorname{Sym}^\ell V,$$
where $\operatorname{Sym}^0 V := k$. One may use the universal property for symmetric products to show there is a well-defined multiplication mapping $\operatorname{Sym}^r V \times \operatorname{Sym}^s V \to \operatorname{Sym}^{r+s} V$.
This multiplication turns Sym V into a k-algebra (i.e., a vector space over k which
is also a ring (i.e., has a nice multiplication operation)). In fact, if V has ba-
sis x1 , . . . , xn , then Sym V is isomorphic to the polynomial ring in x1 , . . . , xn with
coefficients in k (but see the section on duality, below, to get a better formulation).
$$\Lambda^\ell V := V^{\otimes \ell}/T,$$
$$v_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge v_j \wedge \cdots \wedge v_\ell = -\,v_1 \wedge \cdots \wedge v_j \wedge \cdots \wedge v_i \wedge \cdots \wedge v_\ell.$$
$$\begin{aligned} 0 &= v_1 \wedge \cdots \wedge (v_i + v_j) \wedge \cdots \wedge (v_i + v_j) \wedge \cdots \wedge v_\ell \\ &= 0 + v_1 \wedge \cdots \wedge v_i \wedge \cdots \wedge v_j \wedge \cdots \wedge v_\ell + v_1 \wedge \cdots \wedge v_j \wedge \cdots \wedge v_i \wedge \cdots \wedge v_\ell + 0. \end{aligned}$$
$$\iota \colon V^{\times \ell} \to \Lambda^\ell V, \qquad (v_1, \dots, v_\ell) \mapsto v_1 \wedge \cdots \wedge v_\ell.$$
The ℓ-th exterior product is characterized by the following universal property: given any vector space W and multilinear alternating mapping $V^{\times \ell} \to W$, there is a unique linear map $\Lambda^\ell V \to W$ making the following diagram commute:
$$\begin{array}{ccc} & & \Lambda^\ell V \\ & {\scriptstyle\iota}\nearrow & \downarrow{\scriptstyle \exists!} \\ V^{\times \ell} & \xrightarrow{\ \text{multilinear, alternating}\ } & W. \end{array}$$
Exercise 5.11. Recall that the determinant det of a square matrix over k is the
unique multilinear alternating function of its rows that sends the identity matrix
to 1. Let e1 , . . . , en be the standard basis of kn and let v1 , . . . , vn ∈ kn . Show that
v1 ∧ · · · ∧ vn = det(v1 , . . . , vn )e1 ∧ · · · ∧ en .
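A small numerical illustration of this exercise (our sketch): in $\Lambda^2 \mathbb{R}^2$, the single coefficient of $v_1 \wedge v_2$ with respect to $e_1 \wedge e_2$ is the determinant, and it changes sign when the vectors are swapped.

```python
import numpy as np

def wedge_coeff(v1, v2):
    # Coefficient of e1 ^ e2 in v1 ^ v2, computed via bilinearity and e_i ^ e_i = 0:
    # v1 ^ v2 = (v1[0]*v2[1] - v1[1]*v2[0]) e1 ^ e2, i.e., the 2x2 determinant.
    return v1[0] * v2[1] - v1[1] * v2[0]

v1, v2 = np.array([1.0, 2.0]), np.array([3.0, 5.0])
assert np.isclose(wedge_coeff(v1, v2), np.linalg.det(np.array([v1, v2])))
assert np.isclose(wedge_coeff(v2, v1), -wedge_coeff(v1, v2))   # antisymmetry
```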
Define
$$\Lambda^\bullet V := \oplus_{\ell \geq 0} \Lambda^\ell V,$$
where $\Lambda^0 V := k$. Using the universal property of exterior products, we can define a multiplication on alternating tensors:
$$\Lambda^r V \times \Lambda^s V \to \Lambda^{r+s} V, \qquad (\lambda, \mu) \mapsto \lambda \wedge \mu.$$
The vector space $\Lambda^\bullet V$ with this multiplication is called the Grassmann algebra on V. Note that for $\lambda \in \Lambda^r V$ and $\mu \in \Lambda^s V$ we have
$$\lambda \wedge \mu = (-1)^{rs}\, \mu \wedge \lambda.$$
5.4. Dual spaces. Let hom(V, W) denote the linear space of all linear mappings V → W. If f, g ∈ hom(V, W) and λ ∈ k, then λf + g is defined by $(\lambda f + g)(v) := \lambda f(v) + g(v)$ for v ∈ V. The dual space of V is
$$V^* := \hom(V, k).$$
Show that {v1∗ , . . . , vn∗ } is a basis for V ∗ . It is called the dual basis to {v1 , . . . , vn }.
Note that this exercise shows that V and V ∗ are isomorphic (if V is finite-dimensional).
However, the isomorphism depends on a choice of basis.
$$f^* \colon W^* \to V^*, \qquad \phi \mapsto \phi \circ f,$$
as pictured below:
$$V \xrightarrow{\ f\ } W \xrightarrow{\ \phi\ } k.$$
The reader should check that dualization is functorial: $\mathrm{id}_V \colon V \to V$ induces $\mathrm{id}_{V^*} \colon V^* \to V^*$, and commutative diagrams are preserved:
$$\begin{array}{ccc} V & \to & W \\ & \searrow\ \swarrow & \\ & U & \end{array} \qquad\rightsquigarrow\qquad \begin{array}{ccc} V^* & \leftarrow & W^* \\ & \nwarrow\ \nearrow & \\ & U^* & \end{array}$$
(1)
$$\Lambda^\ell V^* \to (\Lambda^\ell V)^*, \qquad \phi_1 \wedge \cdots \wedge \phi_\ell \mapsto \big[v_1 \wedge \cdots \wedge v_\ell \mapsto \det(\phi_i(v_j))\big],$$
(2) and
$$\operatorname{Sym}^\ell V^* \to (\operatorname{Sym}^\ell V)^*, \qquad \phi_1 \cdots \phi_\ell \mapsto \Big[v_1 \cdots v_\ell \mapsto \sum_{\sigma \in S_\ell} \prod_{i=1}^{\ell} \phi_{\sigma(i)}(v_i)\Big],$$
Sketch of proof of (1). The mapping in (1) exists from the universal property of
alternating tensors and the fact that the determinant is alternating and multilinear
as a function of its rows.
We define the inverse mapping $(\Lambda^\ell V)^* \to \Lambda^\ell V^*$ in terms of a basis. Fix a basis $e_1, \dots, e_n$ for V, and for each integer vector µ such that
$$1 \leq \mu_1 < \cdots < \mu_\ell \leq n,$$
define
$$e_\mu := e_{\mu_1} \wedge \cdots \wedge e_{\mu_\ell}.$$
These $\binom{n}{\ell}$ vectors form a basis for $\Lambda^\ell V$. Letting $e_1^*, \dots, e_n^*$ be the corresponding dual basis for $V^*$, we get the corresponding basis of vectors
$$e_\mu^* := e_{\mu_1}^* \wedge \cdots \wedge e_{\mu_\ell}^*$$
for $\Lambda^\ell(V^*)$.
For each $\omega \in (\Lambda^\ell V)^*$ and basis vector $e_\mu$, define the scalar
$$\omega_\mu := \omega(e_{\mu_1} \wedge \cdots \wedge e_{\mu_\ell}).$$
Finally, define
$$g \colon (\Lambda^\ell V)^* \longrightarrow \Lambda^\ell V^*, \qquad \omega \longmapsto \sum_{\mu\,:\,1 \leq \mu_1 < \cdots < \mu_\ell \leq n} \omega_\mu\, e_\mu^*,$$
where $e_\mu^* = e_{\mu_1}^* \wedge \cdots \wedge e_{\mu_\ell}^*$. The reader may check that g is well defined and inverse to the mapping in (1).
Since it will be useful later, we will let $\mathrm{Alt}^\ell V$ denote the linear space of alternating ℓ-forms on V. Arguing as above, the universal property of alternating tensors gives an isomorphism
$$\mathrm{Alt}^\ell V \approx (\Lambda^\ell V)^* \approx \Lambda^\ell V^*.$$
Example 5.16. The space Sym2 V ∗ is the linear space of symmetric bilinear forms
on V. For instance, any inner product ⟨ , ⟩ on V can be thought of as an element of $\operatorname{Sym}^2 V^*$. (Recall that an inner product is a nondegenerate symmetric bilinear form. Nondegenerate means that if v ∈ V and ⟨v, w⟩ = 0 for all w ∈ V , then v = 0.)
Recall that the wedge product defined for the exterior algebra $\Lambda^\bullet V^*$ is given by
$$\wedge \colon \Lambda^r V^* \times \Lambda^s V^* \longrightarrow \Lambda^{r+s} V^*, \qquad (\omega, \eta) \mapsto \omega \wedge \eta.$$
The product is bilinear, associative, and anti-commutative.
Exercise 5.17. Show that taking the exterior algebra of the dual space defines
a contravariant functor Λ• : V 7→ Λ• V ∗ from the category of finite-dimensional k-
vector spaces to the category of graded anti-commutative k-algebras. That is, show
that (1) for $f \colon V \to W$ and $\omega, \eta \in \Lambda^\bullet W^*$, $f^*(\omega \wedge \eta) = (f^*\omega) \wedge (f^*\eta)$, (2) for the composition $U \xrightarrow{\ f\ } V \xrightarrow{\ g\ } W$, we have $(g \circ f)^* = f^* \circ g^* \colon \Lambda^\bullet W^* \to \Lambda^\bullet V^* \to \Lambda^\bullet U^*$,
and (3) id∗V is the identity map on Λ• V ∗ .
5.4.2. Pullbacks. A linear map L : V → W induces, for each ℓ ≥ 0, a pullback mapping
$$L^* \colon (\Lambda^\ell W)^* \to (\Lambda^\ell V)^*, \qquad \omega \mapsto \big[v_1 \wedge \cdots \wedge v_\ell \mapsto \omega(Lv_1 \wedge \cdots \wedge Lv_\ell)\big],$$
or equivalently (using the isomorphisms established above),
$$L^* \colon \Lambda^\ell W^* \to \Lambda^\ell V^* \to (\Lambda^\ell V)^*, \qquad \phi_1 \wedge \cdots \wedge \phi_\ell \mapsto L^*\phi_1 \wedge \cdots \wedge L^*\phi_\ell \mapsto \big[v_1 \wedge \cdots \wedge v_\ell \mapsto \det((\phi_i \circ L)(v_j))\big].$$
6. Vector Bundles
Recall that in section 4, we learned three equivalent ways to define the tangent
space Tp M of a manifold M at a point p ∈ M . We are also interested in its
dual space Tp∗ M , called the cotangent space, as well as other related vector spaces
Λk Tp M , Λk Tp∗ M , Symk Tp M , Tp∗ M ⊗k , etc.
For each p, we can choose a chart U at p and look at all the tangent spaces at
points in U . Properties of these tangent spaces give local properties of M . But to
study the global properties, we need to look at all points in M and form a vector
bundle. That is, to each point p ∈ M , we attach a vector space such that these
vector spaces behave well under change of charts. Before studying specific bundles,
we briefly introduce the general theory of vector bundles.
$$\begin{array}{ccc} \pi^{-1}(U) & \xrightarrow{\ \phi_U\ } & U \times \mathbb{R}^r \\ & {\scriptstyle\pi}\searrow\ \swarrow{\scriptstyle\pi_1} & \\ & U & \end{array}$$
where π1 is the usual projection map onto the first coordinate. Further-
more, φU restricts to a linear isomorphism Ep → {p} × Rr on each fiber.
We call E the total space, M the base space, and the map φU a trivialization of
π −1 (U ). A line bundle is a vector bundle of rank 1.
For the trivial bundle $M \times \mathbb{R}^k$, a section corresponds to the graph $p \mapsto (p, f(p))$ of a function: there is a bijection
$$\{\text{sections of } M \times \mathbb{R}^k \to M\} \leftrightarrow \{\text{smooth functions } f \colon M \to \mathbb{R}^k\}.$$
Exercise 6.9. Show that the open Möbius strip in Example 6.5 as a line bundle
over S 1 is not trivial. (Hint: if it were, it would have a non-vanishing global section,
e.g., s(p) = 1 for all p ∈ S 1 . What is wrong with that?)
Remark 6.10. Given a global section s ∈ Γ(E) and an open set U ⊆ M , we can
always get a section s|U over U by restricting the domain of s. On the other hand,
if s ∈ Γ(U, E) is a section over U , then for every p ∈ U , we can find a global section s̃ that agrees with s over some neighborhood of p (contained in U ) by multiplying s by a bump function.
Definition 6.11. A frame for a vector bundle E of rank r over an open set U ⊆ M
is a collection of sections e1 , . . . , er of E over U such that at each point p ∈ U , the
elements e1 (p), . . . , er (p) form a basis for the fiber Ep .
Proposition 6.12. A vector bundle π : E → M of rank r is trivial if and only if
it has a frame over M .
Proof. (⇒) Suppose that π : E → M is trivial with a bundle isomorphism φ : E →
M × Rr . Let u1 , . . . , ur be the standard basis for Rr . Then for every p ∈ M ,
the elements $(p, u_1), \dots, (p, u_r)$ form a basis for $\{p\} \times \mathbb{R}^r$. So the corresponding sections $e_i$ over M defined by $e_i(p) := \phi^{-1}(p, u_i)$ have the property that $e_1(p), \dots, e_r(p)$ form a basis for $E_p$, i.e., they form a frame.
(⇐) Suppose that $e_1, \dots, e_r \in \Gamma(E)$ is a frame over M. Then for any p ∈ M and $e \in E_p$, we have $e = \sum_{i=1}^r a_i e_i(p)$ for some $a_i \in \mathbb{R}$. Now define
$$\phi \colon E \longrightarrow M \times \mathbb{R}^r, \qquad e \longmapsto (p, a_1, \dots, a_r).$$
This is a bundle map with inverse
$$\psi \colon M \times \mathbb{R}^r \longrightarrow E, \qquad (p, a_1, \dots, a_r) \longmapsto \sum_{i=1}^{r} a_i e_i(p).$$
7. Tangent Bundle
Definition 7.1. The tangent bundle of M, denoted TM, is the disjoint union of tangent spaces:
$$TM := \bigsqcup_{p \in M} T_p M,$$
together with a projection π : T M → M defined by π(v) = p if v ∈ Tp M . We often
denote an element of T M as (p, v), meaning v ∈ Tp M .
The manifold structure on TM is induced by the structure on M. For a chart (U, h) on M, we can form a chart $(\pi^{-1}(U), \tilde h)$ on TM with $\tilde h$ defined by
$$\tilde h \colon \pi^{-1}(U) \longrightarrow h(U) \times \mathbb{R}^n \subseteq \mathbb{R}^n \times \mathbb{R}^n, \qquad v \in T_pM \longmapsto (h(p), v(U, h)).$$
Let (U, h) and (V, k) be charts at p ∈ M. The transition function of the corresponding charts on TM is given by $(k \circ h^{-1}, D(k \circ h^{-1}))$:
$$\begin{array}{ccc} & \pi^{-1}(U \cap V) & \\ {\scriptstyle\tilde h}\swarrow & & \searrow{\scriptstyle\tilde k} \\ h(U \cap V) \times \mathbb{R}^n & \xrightarrow{\ (k \circ h^{-1},\, D(k \circ h^{-1}))\ } & k(U \cap V) \times \mathbb{R}^n. \end{array}$$
Prove this fact. [Hint: see Proposition 6.12, parametrize S 1 , and use this parametriza-
tion to define a derivation v(p) smoothly varying with p and never equal to the zero
derivation.]
Remark 7.6. It is not true that the tangent bundle is always trivial. For exam-
ple, consider M = S 2 . The “hairy ball” theorem says that there cannot be non-
vanishing continuous vector fields on M . That is, if s ∈ Γ(T S 2 ) is a global section,
then there must be some p ∈ M such that s(p) = 0. It follows that T S 2 , unlike T S 1 ,
is non-trivial.
Remark 7.7. Let s : M → T M be a vector field. The zero locus of s is the set
{p ∈ M | s(p) = 0}. Note that this is well-defined since s(p) = 0 is not dependent
on the chart.
In local coordinates, the differential is given by the Jacobian,
$$\mathbb{R}^m \xrightarrow{\ J(k \circ f \circ h^{-1})(h(p))\ } \mathbb{R}^n,$$
and the induced map of tangent bundles is, locally,
$$h(U) \times \mathbb{R}^m \xrightarrow{\ (k \circ f \circ h^{-1},\, df_p)\ } k(V) \times \mathbb{R}^n.$$
Then we can "glue" the pieces together since $f_*$ commutes with the transition maps: take two pairs of charts $(U_i, h_i)$ and $(V_i, k_i)$ with $f(U_i) \subseteq V_i$ for i = 1, 2. Let $\tilde f_i$ denote the composition $k_i \circ f \circ h_i^{-1}$ and observe that the following diagram commutes:
$$\begin{array}{ccc} h_1(U_1 \cap U_2) \times \mathbb{R}^m & \xrightarrow{\ (\tilde f_1,\, J\tilde f_1(h_1(p)))\ } & k_1(V_1 \cap V_2) \times \mathbb{R}^n \\ \downarrow{\scriptstyle h_2 \circ h_1^{-1}} & & \downarrow{\scriptstyle k_2 \circ k_1^{-1}} \\ h_2(U_1 \cap U_2) \times \mathbb{R}^m & \xrightarrow{\ (\tilde f_2,\, J\tilde f_2(h_2(p)))\ } & k_2(V_1 \cap V_2) \times \mathbb{R}^n. \end{array}$$
Consider, for example, the map
$$f \colon \mathbb{R}^2 \to \mathbb{R}^3, \qquad (x, y) \mapsto (x, y, x^2 - y^2),$$
with
$$df \colon \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}^3 \times \mathbb{R}^3, \qquad ((x, y), (u, v)) \mapsto ((x, y, x^2 - y^2), (u, v, 2xu - 2yv)),$$
since
$$Jf(x, y)\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 2x & -2y \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} u \\ v \\ 2xu - 2yv \end{pmatrix}.$$
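A one-line sympy check of this Jacobian computation (our sketch):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
f = sp.Matrix([x, y, x**2 - y**2])

Jf = f.jacobian([x, y])
print(Jf * sp.Matrix([u, v]))   # [u, v, 2*u*x - 2*v*y]
```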
$$dx_{i,p}\!\left(\left(\frac{\partial}{\partial x_j}\right)_p\right) = \delta(i, j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}$$
The gluing instructions for the bundle are provided by the dual mapping $D_p(k \circ h^{-1})^*$. We can take the charts for $T^*M$ to be the same as those for TM by identifying $T_pM$ and $T_p^*M$ in local coordinates via the mapping $(\partial/\partial x_i)_p \mapsto dx_{i,p}$.
Then, if (U, h) and (V, k) are overlapping charts on M, the corresponding transition function for $T^*M$ is given by
$$\begin{array}{ccc} & \pi^{-1}(U \cap V) & \\ {\scriptstyle\tilde h}\swarrow & & \searrow{\scriptstyle\tilde k} \\ h(U \cap V) \times \mathbb{R}^n & \xleftarrow{\ (h \circ k^{-1},\, D(k \circ h^{-1})^*)\ } & k(U \cap V) \times \mathbb{R}^n. \end{array}$$
Fixing a chart (U, h) and bases as above, a basis for $\Lambda^k T_p^*M$ is provided by the forms
$$dx_{\mu,p} := dx_{\mu_1,p} \wedge \cdots \wedge dx_{\mu_k,p}, \qquad 1 \leq \mu_1 < \cdots < \mu_k \leq n,$$
where the subscript p is sometimes dropped for convenience. Now consider a k-form $\omega \colon M \to \Lambda^k T^*M$. In local coordinates we get
$$\omega(p) = \sum_{\mu} \omega_\mu(p)\, dx_\mu = \sum_{\mu} \omega_\mu(p_1, \dots, p_n)\, dx_{\mu_1} \wedge \cdots \wedge dx_{\mu_k} \in \Lambda^k T_p^*M,$$
where each function ωµ : h(U ) → R is differentiable. (In the above displayed equa-
tion, we are abusing notation slightly by using p to denote what really should
be h(p).)
We note that in the special case where k = n = dim M, changing basis affects the local form of ω via the determinant of the transition function.
In local coordinates, $df_p$ is represented by the Jacobian $Jf_p = (k \circ f \circ h^{-1})'(h(p)) \colon \mathbb{R}^m \to \mathbb{R}^n$ (with respect to the standard bases $e_1, \dots, e_m$ and $\tilde e_1, \dots, \tilde e_n$), and dually $df_p^* \colon T_{f(p)}^*N \to T_p^*M$ is expressed in terms of the bases $dy_{1,f(p)}, \dots, dy_{n,f(p)}$ and $dx_{1,p}, \dots, dx_{m,p}$. And locally, we have
$$f_p^*\Big(\sum_{\mu} w_\mu(f(p))\, dy_{\mu,f(p)}\Big) = \sum_{\mu} w_\mu(f(p))\, df_{\mu,p}.$$
Proof. Suppose that d : Ωk M → Ωk+1 M is a linear map satisfying the above three
conditions. We first prove uniqueness of d by showing that it has to be given by
equation (4) in any local coordinates.
Let ω be a k-form and suppose that $\omega = \sum_\mu \omega_\mu\, dx_\mu$ using local coordinates on U. Using all three properties of d, we have
$$\begin{aligned} d\omega &= d\Big(\sum_\mu \omega_\mu\, dx_\mu\Big) = \sum_\mu d(\omega_\mu\, dx_\mu) \\ &= \sum_\mu \big( d(\omega_\mu) \wedge dx_\mu + (-1)^0\, \omega_\mu\, d(dx_\mu) \big) \\ &= \sum_\mu \Big( \Big( \sum_{i=1}^n \frac{\partial \omega_\mu}{\partial x_i}\, dx_i \Big) \wedge dx_\mu + 0 \Big) = \sum_\mu \sum_{i=1}^n \frac{\partial \omega_\mu}{\partial x_i}\, dx_i \wedge dx_\mu. \end{aligned}$$
Now we show that the definition in equation (4) satisfies the three properties, proving existence of such an operator. Note that, on functions, d being the ordinary differential follows immediately from the definition. To see that $d^2 = 0$, without loss of generality, consider the form $a\, dx_\mu$ and compute:
$$d^2(a\, dx_\mu) = d\Big(\sum_{j=1}^n \frac{\partial a}{\partial x_j}\, dx_j \wedge dx_\mu\Big) = \sum_{i,j=1}^n \frac{\partial^2 a}{\partial x_i \partial x_j}\, dx_i \wedge dx_j \wedge dx_\mu.$$
If i = j, then $dx_i \wedge dx_j = 0$. If not, then $\frac{\partial^2 a}{\partial x_i \partial x_j}\, dx_i \wedge dx_j = -\frac{\partial^2 a}{\partial x_j \partial x_i}\, dx_j \wedge dx_i$, so the terms cancel in pairs and the sum vanishes.
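The cancellation above rests on the symmetry of mixed partial derivatives. A quick sympy spot check for one sample smooth coefficient function (our sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')
a = sp.exp(x * y) * sp.sin(x + y**2)   # a sample smooth coefficient function

# Mixed partials agree ...
assert sp.simplify(sp.diff(a, x, y) - sp.diff(a, y, x)) == 0
# ... so the antisymmetric combination appearing as the coefficient of dx ^ dy
# in d(da) vanishes.
assert sp.simplify(sp.diff(a, y, x) - sp.diff(a, x, y)) == 0
```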
To show that d satisfies the product rule, first note that for any zero-form $f \in \Omega^0 M$, by definition we have
$$d(f\, dx_\mu) = (df) \wedge dx_\mu.$$
Now consider a k-form ω = f dxµ and any other differential form η = g dxν . Using
the usual product rule, we compute:
$$\begin{aligned} d(\omega \wedge \eta) &= d(f\, dx_\mu \wedge g\, dx_\nu) = d(fg\, dx_\mu \wedge dx_\nu) = d(fg) \wedge dx_\mu \wedge dx_\nu \\ &= (g\, df + f\, dg) \wedge dx_\mu \wedge dx_\nu = (df \wedge dx_\mu) \wedge (g\, dx_\nu) + (-1)^k (f\, dx_\mu) \wedge (dg \wedge dx_\nu), \end{aligned}$$
where the $(-1)^k$ comes from $dg \wedge dx_\mu = (-1)^k dx_\mu \wedge dg$.
We have now shown that for each chart (U, h) in an atlas for M , we have an
operator dU satisfying the required properties. On overlaps, these operators must
agree by uniqueness, and hence they glue together to define a global operator d.
Definition 8.6. The operator d is called exterior differentiation, and dω is called
the exterior derivative of ω.
Lemma 8.7. Let f : M → N be a smooth map of manifolds. Then f ∗ , the pullback
by f , commutes with d. That is, for any ω ∈ Ωk N , we have
f ∗ (dω) = d(f ∗ ω).
Exercise 8.8. Prove the above lemma.
The fact that
$$0 \to \Omega^0 M \xrightarrow{\ d\ } \Omega^1 M \xrightarrow{\ d\ } \Omega^2 M \xrightarrow{\ d\ } \Omega^3 M \xrightarrow{\ d\ } \cdots$$
is a sequence of mappings of abelian groups (even of vector spaces) and that d2 = 0
says that the sequence forms a cochain complex.5 A mapping of cochain complexes
is a sequence of homomorphisms between spaces with the same indices and com-
muting with the corresponding d mappings. For instance, Lemma 8.7 tells us that
the pullback is a cochain map. In detail, given f : M → N we get the following
commutative diagram:
⁵A chain complex would be a sequence of such mappings between abelian groups in which the indices decrease.
$$\begin{array}{ccccccccc} 0 & \to & \Omega^0 N & \xrightarrow{\ d\ } & \Omega^1 N & \xrightarrow{\ d\ } & \Omega^2 N & \xrightarrow{\ d\ } & \cdots \\ & & \downarrow{\scriptstyle f^*} & & \downarrow{\scriptstyle f^*} & & \downarrow{\scriptstyle f^*} & & \\ 0 & \to & \Omega^0 M & \xrightarrow{\ d\ } & \Omega^1 M & \xrightarrow{\ d\ } & \Omega^2 M & \xrightarrow{\ d\ } & \cdots \end{array}$$
We might abbreviate the above by $f^* \colon \Omega^\bullet N \to \Omega^\bullet M$.
Lemma 8.9. Pullbacks respect compositions: applying pullbacks to a commutative
diagram of mappings of manifolds
$$\begin{array}{ccc} M & \xrightarrow{\ f\ } & N \\ & {\scriptstyle g \circ f}\searrow & \downarrow{\scriptstyle g} \\ & & P \end{array}$$
induces, for each k ≥ 0, the commutative diagram
$$\begin{array}{ccc} \Omega^k M & \xleftarrow{\ f^*\ } & \Omega^k N \\ & {\scriptstyle f^* \circ\, g^*}\nwarrow & \uparrow{\scriptstyle g^*} \\ & & \Omega^k P \end{array}$$
Together with the fact that the pullback by the identity function on a manifold
is the identity on k-forms, we have just seen that the pullback of differential forms gives a contravariant functor from the category of smooth manifolds to the category
of cochain complexes.
Definition 8.10. A k-form ω is exact if there is a (k − 1)-form η such that dη = ω.
It is closed if dω = 0.
Note that an exact form is closed since $d^2 = 0$. We will have much more to say about exactness when we discuss de Rham cohomology later.
9. Oriented Manifolds
In ordinary integration of a one-variable function, we have
$$\int_0^1 f\, dx = -\int_1^0 f\, dx.$$
This formula is a first hint at the role played by an orientation of the underlying
manifold. In ordinary integration in several variables, reordering the standard basis
vectors gives a change of basis whose Jacobian is the corresponding permutation
matrix. Thus, via the change of variables formula for integration, the sign of the
integral will change by the sign of this permutation. Thus, before we can talk about
(coordinate-free) integration on a manifold, we need to add the additional structure
of an orientation.
Definition 9.1. Let V be a real vector space. Two ordered bases $\langle v_1, \dots, v_n \rangle$ and $\langle w_1, \dots, w_n \rangle$ have the same orientation if the mapping V → V that sends $v_i$ to $w_i$ has positive determinant.
The property of having the same orientation defines an equivalence relation on
the set of ordered bases of V with two equivalence classes, each of which is called
an orientation on V . Having chosen an orientation O, we get an oriented vector
space (V, O).
Ordered bases in O are said to be positively oriented; otherwise they are nega-
tively oriented.
Example 9.2. Let V = R3 and let e1 , e2 , e3 denote the standard basis of R3 .
The six possible ordered bases formed from these vectors fall into two equivalence
classes:
$$\langle e_1, e_2, e_3 \rangle \sim \langle e_2, e_3, e_1 \rangle \sim \langle e_3, e_1, e_2 \rangle, \qquad \langle e_2, e_1, e_3 \rangle \sim \langle e_1, e_3, e_2 \rangle \sim \langle e_3, e_2, e_1 \rangle.$$
for the vectors given in the previous example. How is orientation reflected in the geometry of these frames?
Note that if we have $\mathbb{R}^n$ with the standard basis $e_1, \dots, e_n$, then an ordered basis formed using these vectors can be identified with a permutation $\sigma \in S_n$ on n letters. In this way, the ordered basis $\langle e_{\sigma(1)}, \dots, e_{\sigma(n)} \rangle$ has the same orientation as $\langle e_1, \dots, e_n \rangle$ if and only if $\operatorname{sign}(\sigma) = 1$.
Also, for example, $\langle 2e_1 + e_2, e_2, e_3 \rangle \sim \langle e_1, e_2, e_3 \rangle$ since
$$\det \begin{pmatrix} 2 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = 2 > 0.$$
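A short numpy check of these two orientation criteria (our sketch): the sign of the determinant of the change-of-basis matrix decides the orientation class, and for permuted standard bases it agrees with the sign of the permutation.

```python
import numpy as np

def same_orientation(basis1, basis2):
    # Columns are the ordered basis vectors; same orientation means the
    # change-of-basis map has positive determinant.
    change = np.linalg.solve(np.column_stack(basis1), np.column_stack(basis2))
    return np.linalg.det(change) > 0

e = np.eye(3)
std = [e[:, 0], e[:, 1], e[:, 2]]

assert same_orientation(std, [2*e[:, 0] + e[:, 1], e[:, 1], e[:, 2]])   # det = 2 > 0
assert same_orientation(std, [e[:, 1], e[:, 2], e[:, 0]])               # even permutation
assert not same_orientation(std, [e[:, 1], e[:, 0], e[:, 2]])           # odd permutation
```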
onto the line y = −1 from the north pole (0, 1) or project S 1 onto the line y = 1
from the south pole (0, −1) as shown in Figure 15. In this way we obtain two
charts covering S 1 . Let U + = S 1 \{(0, 1)}, U − = S 1 \{(0, −1)}, and let φ+ and φ−
denote the corresponding projection maps. Note that both φ+ (U + ) and φ− (U − )
are isomorphic to R and the transition map from U + to U − is smooth (check
this!). However, the transition map is not orientation-preserving. Notice that the
orientation of the image φ− (U − ) is different from that of φ+ (U + ). Hence the charts
Remark 10.2. To see that the sets {Ai } and {Ui } with the claimed properties
always exist, start with any orienting atlas V = {(Vα , kα )} (this is possible since
we are assuming M is orientable). Since M is second-countable, it has a countable
basis B for its topology. For each p ∈ M , take a chart (Vα , kα ) at p, and then,
since B is a basis, we can find U ∈ B such that p ∈ U ⊆ Vα . Save the chart (U, h)
where h := kα |U . In the end, since B is countable, so is the set of charts we saved.
Thus, we have a new countable orienting atlas $\mathcal{U} = \{(U_i, h_i)\}$. Now let $A_1 := U_1$, and let $A_{i+1} := U_{i+1} \setminus \bigcup_{j=1}^{i} A_j$ for $i \geq 1$.
Definition 10.3. Using the notation above, we say that ω is integrable if each $a_i \colon h_i(U_i) \to \mathbb{R}$ is integrable on $h_i(A_i)$ and if $\sum_i \int_{h_i(A_i)} |a_i| < \infty$. In this case, we define the integral to be the sum:
$$\int_M \omega := \sum_i \int_{h_i(A_i)} a_i.$$
For $T_pM$, the charts give isomorphisms $(dk_j)_p, (dh_i)_p \colon T_pM \to \mathbb{R}^n$ with transition map the Jacobian $(J\phi_{i,j})_{k_j(p)}$; dualizing gives isomorphisms $(dh_i)_p^*, (dk_j)_p^*$ for $T_p^*M$ with transition map the transpose $(J\phi_{i,j})^\top_{k_j(p)}$. Now take the n-th exterior power, and the transition map becomes multiplication by $\det(J\phi_{i,j}(k_j(p)))$. Since $\Lambda^n \mathbb{R}^n \cong \mathbb{R}$, we have the following diagram:
$$\mathbb{R} \xrightarrow{\ \cdot \det((J\phi_{i,j})^\top_{k_j(p)}) \,=\, \cdot \det((J\phi_{i,j})_{k_j(p)})\ } \mathbb{R}.$$
Remark 10.6. If M itself is compact, then all n-forms have compact support.
Proof. Exercise.
10.2. Manifolds with boundary. Now the goal is to prove Stokes' theorem on oriented manifolds:
$$\int_{\partial M} \omega = \int_M d\omega$$
for any (n − 1)-form ω with compact support. But to do so, we need a good definition of ∂M, the boundary of M, and we need to look at manifolds with boundary.
Remark 10.11. First note that it is possible for ∂U = ∅. Also, the boundary ∂U
is different from the boundary of U in the topological sense, i.e., from $\overline{U} \setminus U^\circ$ (cf. Definition B.1).
$$U \xrightarrow{\ f\ } \mathbb{R}^k, \qquad U_p \xrightarrow{\ \tilde f\ } \mathbb{R}^k,$$
$$T_p\,\partial M = \overline{T_p^+ M} \cap \overline{T_p^- M},$$
where the bar indicates topological closure in $\mathbb{R}^n$.
The definition of an oriented manifold with boundary is the same as for ordinary manifolds. The boundary is then orientable, but we would like to fix a convention for its orientation.
Example 10.26. Let D3 denote the solid unit ball in R3 with its orientation
induced by R3 . Its boundary is ∂D3 = S 2 , and the natural orientation on S 2 is
given by hw1 , w2 i as shown in Figure 18.
Then
$$d\omega = \sum_{i=1}^n \sum_{j=1}^n \frac{\partial a_i}{\partial x_j}\, dx_j \wedge dx_1 \wedge \cdots \wedge \widehat{dx_i} \wedge \cdots \wedge dx_n = \sum_{i=1}^n (-1)^{i-1} \frac{\partial a_i}{\partial x_i}\, dx_1 \wedge \cdots \wedge dx_i \wedge \cdots \wedge dx_n.$$
This gives us
$$\int_M d\omega = \sum_{i=1}^n (-1)^{i-1} \int_{\mathbb{R}^n_-} \frac{\partial a_i}{\partial x_i}.$$
Use Fubini's theorem to first integrate $\int_{\mathbb{R}^n_-} (-1)^{i-1} \frac{\partial a_i}{\partial x_i}$ with respect to $x_i$. For $i \neq 1$, by definition we have
$$\begin{aligned} \int_{x_i=-\infty}^{\infty} \frac{\partial a_i}{\partial x_i} &= \lim_{t\to\infty} \int_0^t \frac{\partial a_i}{\partial x_i} + \lim_{t\to-\infty} \int_t^0 \frac{\partial a_i}{\partial x_i} \\ &= \lim_{t\to\infty} \big( a_i(x_1, \dots, x_{i-1}, t, x_{i+1}, \dots, x_n) - a_i(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_n) \big) \\ &\quad + \lim_{t\to-\infty} \big( a_i(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_n) - a_i(x_1, \dots, x_{i-1}, t, x_{i+1}, \dots, x_n) \big) \\ &= \big( 0 - a_i(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_n) \big) + \big( a_i(x_1, \dots, x_{i-1}, 0, x_{i+1}, \dots, x_n) - 0 \big) \\ &= 0. \end{aligned}$$
Therefore,
$$\int_M d\omega = \sum_{i=1}^n (-1)^{i-1} \int_{\mathbb{R}^n_-} \frac{\partial a_i}{\partial x_i} = \int_{\mathbb{R}^n_-} \frac{\partial a_1}{\partial x_1} = \int_{\mathbb{R}^{n-1}} a_1(0, x_2, \dots, x_n).$$
$$\iota \colon \partial M \longrightarrow M, \qquad (x_2, \dots, x_n) \longmapsto (0, x_2, \dots, x_n).$$
Since $\iota_1 \equiv 0$, we have
$$\begin{aligned} \int_{\partial M} \omega = \int_{\partial M} \iota^*\omega &= \sum_{i=1}^n \int_{\mathbb{R}^{n-1}} \iota^*\big( a_i\, dx_1 \wedge \cdots \wedge \widehat{dx_i} \wedge \cdots \wedge dx_n \big) \\ &= \sum_{i=1}^n \int_{\mathbb{R}^{n-1}} a_i(0, x_2, \dots, x_n)\, d\iota_1 \wedge \cdots \wedge \widehat{d\iota_i} \wedge \cdots \wedge d\iota_n \\ &= \int_{\mathbb{R}^{n-1}} a_1(0, x_2, \dots, x_n)\, dx_2 \wedge \cdots \wedge dx_n \\ &= \int_M d\omega. \end{aligned}$$
$$\tau_i \colon X \longrightarrow [0, 1], \qquad x \longmapsto \frac{\lambda_{p_i}(x)}{\sum_{i=1}^r \lambda_{p_i}(x)}.$$
Then $\sum_{i=1}^r \tau_i(x) = 1$ for all x ∈ X, and we call $\tau_1, \dots, \tau_r$ a partition of unity. To find the corresponding partition of ω, define $\omega_i \in \Omega^{n-1} M$ by
$$\omega_i(p) = \begin{cases} \tau_i(p)\, \omega(p) & \text{if } p \in X; \\ 0 & \text{otherwise.} \end{cases}$$
$$0 \to \Omega^0 M \xrightarrow{\ d\ } \Omega^1 M \xrightarrow{\ d\ } \Omega^2 M \xrightarrow{\ d\ } \cdots$$
The cohomology groups of the de Rham complex of a manifold, defined below,
are important invariants of the manifold. Thus, if two manifolds have different
cohomology groups, then they are not diffeomorphic. Furthermore, using Stokes’
theorem, it can be shown that de Rham cohomology is dual to singular homology
on the manifold, where the latter can be computed through a triangulation of the
manifold and detects topological features of the manifold, for instance, the number
of k-dimensional holes (see Appendix D).
11.1. Definition and first properties. Recall that a form in the kernel of d
is called closed and a form in the image of d is called exact. In the context of
de Rham cohomology, closed forms are called cocycles and exact forms are called
coboundaries.6
Since d2 = 0, every exact form is closed, or said another way, every coboundary
is a cocycle. Thus, we can make the following definition:
Definition 11.1. The k-th cohomology group of the de Rham complex is the quo-
tient
$$H^k M := \ker(\Omega^k M \xrightarrow{\ d\ } \Omega^{k+1} M)\,/\,\operatorname{im}(\Omega^{k-1} M \xrightarrow{\ d\ } \Omega^k M).$$
If ω ∈ Ωk M is a cocycle, we denote the cohomology class of ω by
[ω] := ω + d(Ωk−1 M ).
We say that cocycles ω and η are cohomologous if [ω] = [η], i.e., if ω − η = dα for
some α ∈ Ωk−1 M .
The cohomology groups measure the extent to which the de Rham sequence is
not exact: i.e., H k M = 0 if and only if im dk−1 = ker dk . If its dimension as an
R-vector space is large, then the sequence is far from being exact in degree k. (See
Appendix subsection D.1 for the basics on exact sequences.)
Note that
$$\Omega^0 \mathbb{R} = \{f \colon \mathbb{R} \to \mathbb{R} \mid f \text{ is smooth}\}$$
and that
$$\Omega^1 \mathbb{R} = \{f\, dx \mid f \colon \mathbb{R} \to \mathbb{R} \text{ is smooth}\}.$$
Also, for a smooth function $f \colon \mathbb{R} \to \mathbb{R}$, df = 0 means that f is a constant function. Thus, we have $\ker(\Omega^0\mathbb{R} \xrightarrow{d} \Omega^1\mathbb{R}) \cong \mathbb{R}$. At the same time, for any $f\,dx \in \Omega^1\mathbb{R}$, we can define $g(x) = \int_0^x f(t)\, dt$. Then g is smooth and $dg = f\,dx$. Therefore, $\operatorname{im}(\Omega^0\mathbb{R} \xrightarrow{d} \Omega^1\mathbb{R}) = \Omega^1\mathbb{R}$. In this way we can compute the cohomology groups:
$$H^0 \mathbb{R} = \ker(\Omega^0\mathbb{R} \xrightarrow{d} \Omega^1\mathbb{R})\,/\,\operatorname{im}(0 \to \Omega^0\mathbb{R}) \cong \mathbb{R}/0 = \mathbb{R},$$
$$H^1 \mathbb{R} = \ker(\Omega^1\mathbb{R} \xrightarrow{d} \Omega^2\mathbb{R})\,/\,\operatorname{im}(\Omega^0\mathbb{R} \xrightarrow{d} \Omega^1\mathbb{R}) \cong \Omega^1\mathbb{R}/\Omega^1\mathbb{R} = 0.$$
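The key step in this computation is that every smooth 1-form f dx on $\mathbb{R}$ is exact: $g(x) = \int_0^x f(t)\, dt$ satisfies dg = f dx. A quick sympy spot check for one choice of f (our sketch):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = 1 / (1 + t**2)                      # a sample smooth function on R

g = sp.integrate(f, (t, 0, x))          # g(x) = atan(x)
assert sp.simplify(sp.diff(g, x) - f.subs(t, x)) == 0   # dg = f dx
```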
The cohomology groups are not only groups under addition, they are also R-
vector spaces. In fact, there is even more structure. Define the cohomology ring of
M to be
M
H • M := H k M,
k≥0
∧ : H r M × H s M −→ H r+s M
([ω], [η]) 7−→ [ω ∧ η].
d((ω + dµ) ∧ (η + dν)) = d(ω + dµ) ∧ (η + dν) + (−1)r (ω + dµ) ∧ d(η + dν)
= dω ∧ (η + dν) + (−1)r (ω + dµ) ∧ dη
= dω ∧ η + dω ∧ dν + (−1)r (ω ∧ dη + dµ ∧ dη)
= (dω ∧ η + (−1)r ω ∧ dη) + (dω ∧ dν + (−1)r dµ ∧ dη).
We are trying to show that the difference between this form and d(ω ∧ η) is a
coboundary, i.e., in the image of d. By the product rule d(ω ∧ η) = dω ∧ η +
(−1)r ω ∧ dη. The difference is
$$d\omega \wedge d\nu + (-1)^r d\mu \wedge d\eta = d(\omega \wedge d\nu + d\mu \wedge \eta).$$
Proof. It suffices to show that there is some n-form on M that is not exact. Orient M and choose $\omega \in \Omega^n M$ such that $\int_M \omega \neq 0$. If there were $\eta \in \Omega^{n-1} M$ such that $\omega = d\eta$, then by Stokes' theorem, $\int_M \omega = \int_M d\eta = \int_{\partial M} \eta = 0$ since $\partial M = \emptyset$ by assumption.
such that h(0, x) = f (x) and h(1, x) = g(x) for all x ∈ M (see Figure 19). In that
case, we call h a homotopy from f to g and write f ∼ h g.
At time zero, we have h0 (x) := h(0, x) = x = idRn (x), and at time 1, we have
h1 (x) := h(1, x) = 0.
Proof. Let h : [0, 1] × M → N denote the homotopy between f and g such that
h(0, x) = f (x) and h(1, x) = g(x) for all x ∈ M . The goal is to use h to construct a
collection of maps $s \colon \Omega^k N \to \Omega^{k-1} M$ (shown below, though the diagram does not commute):
$$\cdots \xrightarrow{\ d\ } \Omega^{k-1} N \xrightarrow{\ d\ } \Omega^{k} N \xrightarrow{\ d\ } \Omega^{k+1} N \xrightarrow{\ d\ } \cdots$$
$$\cdots \xrightarrow{\ d\ } \Omega^{k-1} M \xrightarrow{\ d\ } \Omega^{k} M \xrightarrow{\ d\ } \Omega^{k+1} M \xrightarrow{\ d\ } \cdots$$
with each s mapping $\Omega^k N$ in the top row diagonally down to $\Omega^{k-1} M$ in the bottom row.
Then
$$\begin{aligned} P\omega &= P\big( (3t^2 + 2xt + x^2 y)\, dt \wedge dx \wedge dy \big) + P\big( (tx + t^2 y)\, dx \wedge dy \wedge dz \big) \\ &= \left( \int_{t=0}^1 (3t^2 + 2xt + x^2 y)\, dt \right) dx \wedge dy + 0 \\ &= (1 + x + x^2 y)\, dx \wedge dy \in \Omega^2 M. \end{aligned}$$
8To learn more about this operation, read about interior products in another text on manifolds.
A 0 appears as a summand in the second step of this calculation since dt does not
appear in (tx + t2 y) dx ∧ dy ∧ dz.
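The t-integration in this example is easy to confirm with sympy (our sketch):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# Coefficient of dt ^ dx ^ dy in omega, integrated over t in [0, 1].
coeff = sp.integrate(3*t**2 + 2*x*t + x**2*y, (t, 0, 1))
assert sp.expand(coeff) == 1 + x + x**2*y
print(coeff)   # x**2*y + x + 1
```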
We now resume the proof. The only thing that remains is to show that P satisfies
Equation 6. This is a local question, so we check it in local coordinates $t, x_1, \dots, x_m$, where m = dim M. Since exterior differentiation, the pullback, and integrals are all linear, we may assume ω has a single term. That term may or may not involve the
differential dt. We consider each of those cases separately:
Case 1: Suppose that ω = `(t, x) dt ∧ dxµ .
Consider the left-hand side of Equation 6 first. Pulling back by i0 (x) = (0, x),
we have
$$i_0^*(\omega) = \ell(0, x)\, d(0) \wedge dx_\mu = 0.$$
Similarly, $i_1^*\omega = 0$, so $i_1^*\omega - i_0^*\omega = 0$. Therefore, to verify Equation 6, we must check that $dP\omega = -P\,d\omega$. Using the fact that $dt \wedge dt = 0$, compute
$$\begin{aligned} d(P\omega) &= d\left( \left(\int_{t=0}^1 \ell(t, x)\, dt\right) dx_\mu \right) \\ &= \sum_{i=1}^n \left( \frac{\partial}{\partial x_i} \int_0^1 \ell(t, x)\, dt \right) dx_i \wedge dx_\mu = \sum_{i=1}^n \left( \int_0^1 \frac{\partial \ell}{\partial x_i}\, dt \right) dx_i \wedge dx_\mu, \end{aligned}$$
and
$$\begin{aligned} P(d\omega) &= P\left( \sum_{i=1}^n \frac{\partial \ell}{\partial x_i}\, dx_i \wedge dt \wedge dx_\mu \right) = P\left( -\sum_{i=1}^n \frac{\partial \ell}{\partial x_i}\, dt \wedge dx_i \wedge dx_\mu \right) \\ &= -\sum_{i=1}^n \left( \int_{t=0}^1 \frac{\partial \ell}{\partial x_i}\, dt \right) dx_i \wedge dx_\mu, \end{aligned}$$
as required.
Case 2: Suppose that $\omega = \ell(x, t)\, dx_\mu$. Now $P\omega = 0$ and, hence, $dP\omega = 0$. We need to show that $i_1^*\omega - i_0^*\omega = P\,d\omega$:
$$\begin{aligned} P\,d\omega &= P\left( \frac{\partial \ell}{\partial t}\, dt \wedge dx_\mu + \sum_{i=1}^n \frac{\partial \ell}{\partial x_i}\, dx_i \wedge dx_\mu \right) \\ &= \left( \int_{t=0}^1 \frac{\partial \ell}{\partial t}\, dt \right) dx_\mu + 0 = i_1^*\omega - i_0^*\omega. \end{aligned}$$
Corollary 11.14. If f : M → N is null-homotopic, then f ∗,k ≡ 0 for all k > 0.
Corollary 11.15. If M is contractible, then H k M = 0 for all k > 0.
Corollary 11.16 (The Poincaré Lemma). If U ⊆ Rn is open and star-shaped
(meaning that there is x ∈ U such that for all y ∈ U , the line segment connecting
x and y lies entirely in U ), then H k U = 0 for all k > 0.
Exercise 11.17. Prove the above three corollaries.
Definition 11.18. A mapping f : M → N defines a homotopy equivalence if there
is a smooth mapping g : N → M such that f ◦ g ∼ idN and g ◦ f ∼ idM . If so, we
say that M and N are homotopy equivalent.
Remark 11.19. In particular, diffeomorphisms defines homotopy equivalences.
Theorem 11.20. If M and N are homotopy equivalent, then H k M ∼
= H k N for
all k.
Proof. This follows immediately from Theorem 11.12.
Remark 11.21 (Continuity versus smoothness). Let M and N be smooth manifolds. We have defined homotopies for smooth functions M → N, and further our homotopies themselves are smooth functions. It turns out that all of our results hold if we allow our mappings to just be continuous rather than smooth. In detail: we say continuous mappings f, g : M → N (where each of M and N is still a smooth manifold) are (continuously) homotopic if there exists a continuous mapping h : [0, 1] × M → N such that h(0, x) = f (x) and h(1, x) = g(x) for all x ∈ M. It can be shown that if f : M → N is a continuous mapping, then it is continuously homotopic to a smooth mapping f̃ : M → N, and if two smooth mappings f, g : M → N are continuously homotopic, then they are smoothly homotopic.
The reason the above results are important is that they show that de Rham
cohomology is actually a topological invariant: if M and N are smooth manifolds
that are homeomorphic, then there is an isomorphism of their de Rham cohomology
rings. Without the above results, we could only make that conclusion if M and N
were diffeomorphic—a much stronger condition.
Definition 11.22. Let f : M → N be a mapping of manifolds. Then
(1) f is an immersion if dfp : Tp M → Tf (p) N is injective (see subsection 4.4 for
the definition of dfp —locally, in terms of the “physical” version of tangent
space, it is the mapping determined by the Jacobian of f );
(2) the pair (M, f ) is a submanifold if f is an injective immersion; if M ⊆ N ,
then we say M is a submanifold if (M, ι) is a submanifold where ι is the
inclusion mapping;
(3) f is an embedding if it is a one-to-one immersion and also a homeomorphism
onto im f ⊆ N where the latter set is given the subspace topology.
Example 11.25. Let M be the open Möbius strip as in Example 6.5. Note that
there is a deformation retraction from M to the unit circle S 1 (Figure 21). So they
Figure 21. The Möbius band deformation retracts to its central circle.
have the same cohomology groups. Similarly, the punctured plane R2 \{(0, 0)} has
a deformation retraction to S 1 (Figure 22). Therefore, one can conclude that
H k M = H k (R2 \{(0, 0)}) = H k S 1 .
[Note to Dave:] We still need to give a direct proof that
$$H^k S^1 = \begin{cases} \mathbb{R} & \text{if } k = 0, 1; \\ 0 & \text{otherwise.} \end{cases}$$
Figure 22. The punctured plane deformation retracts to the unit circle.
since ∂M = ∅.
is homotopy invariant, i.e., if $f, g \colon M \to N$ are homotopic, then $\int_M \circ\, f^* = \int_M \circ\, g^*$.
Proof. Let v be a nowhere vanishing vector field on the n-sphere S n with n even. In
other words, v is a section of tangent bundle, v : S n → T S n such that v(x) 6= 0 for
all x ∈ S n . Let Dn+1 denote the solid unit ball in Rn+1 with boundary ∂Dn+1 =
S n . Now consider the antipodal involution (i.e., its square is the identity) on S n ,
τ : S n −→ S n
x = (x1 , . . . , xn+1 ) 7−→ −x = (−x1 , . . . , −xn+1 ).
We can think of v(x) as pointing towards −x (see Figure 23) and construct a homotopy $\mathrm{id}_{S^n} \sim_h \tau$:
$$h(t, x) = \cos(t\pi)\, x + \sin(t\pi)\, \frac{v(x)}{|v(x)|}.$$
Here, we are identifying $T_x S^n$ with the vectors in $\mathbb{R}^{n+1}$ perpendicular to x (thinking of tangent vectors in terms of curves on $S^n$ passing through x). One can check
that im(h) ⊂ S n , that h(0, −) = idS n , and that h(1, −) = τ .
$$(7) \qquad \begin{array}{ccc} U \cap V & \xrightarrow{\ j_U\ } & U \\ \downarrow{\scriptstyle j_V} & & \downarrow{\scriptstyle i_U} \\ V & \xrightarrow{\ i_V\ } & U \cup V \end{array}$$
10For background on exact sequences, see Appendix D.
NOTES ON MANIFOLDS 69
We want to use these inclusion maps to define cochain maps i and j such that the
following sequence of cochain complexes is exact:
i j
0 → Ω• (U ∪ V ) −→ Ω• U ⊕ Ω• V −→ Ω• (U ∩ V ) → 0,
where the boundary map of Ω• U ⊕ Ω• V is (d, d). This short exact sequence will
then induce a long exact sequence called the Mayer-Vietoris sequence:
∂ i j ∂ i
→ H k (U ∪ V ) −→ H k U ⊕ H k V −→ H k (U ∩ V ) −
··· − → H k+1 (U ∪ V ) −→ · · · ,
To see how this sequence would be useful in computing cohomology, suppose M =
U ∪ V and that it is easy to compute the cohomology of U , V , and U ∩ V . Then the
cohomology groups of M are sandwiched in an exact sequence with known groups,
which yields a lot of information. For instance, suppose that H k−1 (U ∩ V ) =
H k (U ) = H k (V ) = 0. In that case, from the Mayer-Vietoris sequence, we know
the following sequence is exact:
0 → H k M → 0,
which tells us that H k M = 0, too.
Lemma 11.29. Using the notation above, there is an exact sequence of cochain
complexes
i j
0 → Ω• (U ∪ V ) −→ Ω• U ⊕ Ω• V −→ Ω• (U ∩ V ) → 0,
where i and j are defined to be
i(ω) = (i∗U ω, i∗V ω),
j(ωU , ωV ) = jU∗ ωU − jV∗ ωV .
Proof. That i and j are cochain maps is easy to check, as is injectivity of i. We
need to show, at each level k, that im(i) = ker(j) and that j is surjective.
We first show that im(i) = ker(j). Let ω ∈ Ωk (U ∪ V ). Since taking cohomology
is functorial, it preserves the commutative diagram in (7), which implies
(j ◦ i)(ω) = j(i∗U ω, i∗V ω) = jU∗ i∗U ω − jV∗ i∗V ω = 0.
So im(i) ⊆ ker(j). Let ωU ∈ Ωk U and ωV ∈ Ωk V . If j(ωU , ωV ) = jU∗ ωU −jV∗ ωV = 0,
then ωU |U ∩V = ωV |U ∩V . So we can glue ωU and ωV together along U ∩ V and
obtain a form ω ∈ U ∪ V and hence (ωU , ωV ) = i(ω).
To see that j is surjective, let ω ∈ Ωk (U ∩ V ). Let {λU , λV } be a partition of
unity on U ∪ V subordinate to the cover {U, V }, i.e., λU , λV : U ∪ V → [0, 1] are
smooth and compactly supported with supp(λU ) ⊆ U and supp(λV ) ⊆ V . Define
ωU = λU ω, ωV = λV ω.
By defining ωU (p) = 0 outside of the support of λU , we may consider ωU to be a
form defined on all of U and with the property that jU∗ ωU = λU ω on U ∩ V . A
similar remark holds for ωV . It follows that
j(ωU , −ωV ) = j ∗ ωU + j ∗ ωV
= λU ω + λV ω
= ω,
70 DAVID PERKINSON AND LIVIA XU
0 0 0
0 Ak Bk Ck
d d d
0 0 0
The result then follows from the snake lemma.
Finally, let us see how to use the Mayer-Vietoris sequence to compute the coho-
mology.
is:
0 → H 0 S 1 → H 0 U ⊕H 0 V → H 0 (U ∩V ) → H 1 S 1 → H 1 U ⊕H 1 V → H 1 (U ∩V ) → 0.
Notice that each of U and V is diffeomorphic to an open interval of R which is
contractible. So H 0 U = H 0 V = R and H 1 U = H 1 V = 0. At the same time,
the intersection U ∩ V can be contracted to two points, so H 0 (U ∩ V ) = R2 and
NOTES ON MANIFOLDS 71
union of two circles is a one-dimensional manifold). One can then use the cover
U ∩V = A∪B and Mayer-Vietoris to show that H 1 (U ∩V ) ∼ = H 1 A⊕H 1 B = R2 (or
use the fact that the cohomology of a manifold is the direct sum of the cohomology
of its components). The Mayer-Vietoris sequence becomes
j ∂
0 → R → R2 → R2 → H 1 T → R2 −→ R2 −→ H 2 T → 0,
where ∂ is the connecting homomorphism, and j : H 1 U ⊕ H 1 V → H 1 (U ∩ V ) is
induced by the map described in Lemma 11.29. By the exactness of the sequence,
we have
H 2 T = im(∂) ∼
= R2 / ker(∂) = R2 / im(j).
To find im(j), note that an element (a, b) ∈ H 1 U ⊕H 1 V is mapped to (a−b, a−b) ∈
H 1 (U ∩ V ) under j. So im(j) ∼
= R and H 2 T ∼= R. A dimension count then gives us
1 ∼
that H T = R . 2
Note: Roughly, a torus has two independent one-dimensional holes and one two-
dimensional hole.
NOTES ON MANIFOLDS 73
12.1. Scalar products. Let V be a finite dimensional vector space over R with
dim(V ) = n.
Example 12.6. The Euclidean space Rn with the usual inner product has an
orthonormal basis e1 , . . . , en .
Sym2 V ∃!
R
Also, recall the isomorphism (Sym2 V )∗ ∼
= Sym2 V ∗ in Proposition 5.15 given by
Syml V ∗ −→ (Syml V )∗
l
1 X Y
φ1 · · · φl 7−→ [v1 · · · vl 7→ φσ(i) (vi )].
l! i=1
σ∈Sn
So the matrix corresponding to I with respect to the basis {eµ } is a diagonal matrix
with diagonal entries being ±1. Hence I is nondegenerate.
Theorem 12.11 (Sylvester’s Law of Intertia). Let h−, −i be a scalar product on
V . Then there exists a basis for V such that the matrix for V with respect to this
basis is diagonal of the form
!
Ir 0
G= ,
0 −Is
where Ik is the identity matrix of size k. The integer s is called the index of h−, −i
and is independent of the choice of basis.
Proof. ?
Definition 12.12. A scalar product on V is positive definite if its index is 0.
12.2. The star operator. Now let V be a finite dimensional oriented vector space
over R and let h−, −i be a scalar product on V .
Definition 12.13. Suppose that e1 , . . . , en is a positively oriented orthonormal
basis for V . The volumn form of V is the n-form over V ∗ :
ωV := e∗1 ∧ · · · ∧ e∗n .
Lemma 12.14. Let e1 , . . . , en and v1 , . . . , vn be orthonormal bases for V . Suppose
Pn Pn
that vj = i=1 ai,j ei and vj∗ = i=1 bi,j e∗i . Define matrices A and B with Ai,j =
ai,j and Bi,j = bi,j . Then
−1
B = (A> ) .
Exercise 12.15. Prove Lemma 12.14.
Proposition 12.16. The volumn form of V is the unique n-form sending each pos-
itively oriented orthonormal basis to 1. More generally, let v1 , . . . , vn be a positively
oriented basis for V and let G be the matrix with Gi,j = hvi , vj i. Then
p
ωV = det(G)v1∗ ∧ · · · ∧ vn∗ ,
and in particular, if v1 , . . . , vn is orthonormal, then ω = v1∗ ∧ · · · ∧ vn∗ .
Proof. Let e1 , . . . , en be a positively oriented orthonormal basis for V . Let Ie be the
matrix for [ : V → V ∗ with respect to the basis e1 , . . . , en , i.e., Ie = diag(1 , . . . , n ),
Pn
where i = hei , ei i ∈ {±1}. Suppose that vj = i=1 ai,j ei . Then we have a
commutative diagram
A
V V
G Ie
V∗ V∗
A>
(
∗ ∗ 0 if γ 6= µ;
hη, ζi = heγ , eµ i = det(heγi , eµj i) =
µ if γ = µ,
Qk
where µ = i=1 heγi , eµi i. So hη, ζi ∈ {±1} if γ = µ. On the other hand, by
definition we have
η ∧ ∗ζ = aγe e∗γ ∧ e∗γe ,
where γe is the index formed by [n]\{γ1 , . . . , γk } arranged in increasing order. Let
τγ be the permutation sending (1, . . . , n) to (γ1 , . . . , γk , γ
e1 , . . . , γ
en−k ). We have
η ∧ ∗ζ = aγe sign(τγ )ωV .
Therefore, by assumption,
(
0 if γ 6= µ;
aγe sign(τγ ) = hη, ζi =
µ if γ = µ.
Hence, (
0 if γ 6= µ;
aγe = sign(τγ )hη, ζi =
sign(τγ )µ if γ = µ.
So we define ∗ζ = ∗e∗µ to be
∗e∗µ := sign(τµ )µ e∗µe .
NOTES ON MANIFOLDS 77
Example 12.19. Consider R3 with the usual scalar product and let e1 , e2 , e3 be
the standard basis for R3 . Then
∗(e∗1 ∧ e∗2 ) = e∗3 , ∗(e∗1 ∧ e∗3 ) = −e∗2 , ∗(e∗2 ∧ e∗3 ) = e∗1 .
Example 12.20. Consider R4 with the scalar product of index 1 and let e1 , . . . , e4
be an orthonormal basis. Let G = diag(1, 1, 1, −1) be the corresponding matrix for
this scalar product with respect to the basis. Then
∗(e∗1 ∧ e∗3 ) = ±e∗2 ∧ e∗4 .
We have µ = (1, 3) and we want to find the sign. Note that τµ = (23), so sign(τµ ) =
−1. Also, µ = he1 , e1 i · he3 , e3 i = 1. Therefore,
∗(e∗1 ∧ e∗3 ) = −e∗2 ∧ e∗4 .
For another example,
∗(e∗1 ∧ e∗2 ∧ e∗4 ) = sign(34) · 1 · 1 · (−1)e∗3 = e∗3 .
Lemma 12.21. Let V be an oriented vector space over R and let h−, −i be a scalar
product of index s. For each k,
∗∗ = (−1)k(n−k)+s idΛk V ∗ .
Hence, ∗ is an isomorphism.
Exercise 12.23. Let V be an oriented vector space over R and let h−, −i be a
scalar product of index s. Show that for all η, ζ ∈ Λk V ∗
h∗η, ∗ζi = (−1)s hη, ζi.
Remark 12.25. The collection of scalar products h−, −ip of a Riemannian manifold
is sometimes called a Riemannian metric. Note that every oriented manifold can be
given such a metric using the usual scalar product on Rn and a partition of unity.
Remark 12.26. We saw in Remark 12.8 that a scalar product on a vector space V
corresponds with an element of the symmetric product Sym2 V ∗ . Thus, we can de-
fine a semi-Riemannian manifold as a manifold together with a section of the bundle
Sym2 T ∗ M (the fiber at each p ∈ M is Sym2 Tp∗ M ) such that the corresponding
bilinear form on Tp M is nondegenerate for all p ∈ M .
δk := (−1)k ∗ d ∗−1 .
Show that
d(η ∧ ∗ζ) = dη ∧ ∗ζ + η ∧ ∗δζ.
Remark 12.31. The proof is beyond the scope of these notes, but the interested
reader can consult Chapter 6 of [13].
∼
=
Hk M HkM
∗ ∼
= ∼
= Poincaré
∼
=
Hn−k M H n−k M
Corollary 12.38. Let M be a connected oriented closed Riemannian n-manifold.
Then H n M ∼
= R.
NOTES ON MANIFOLDS 81
13.1. Toric varieties from fans. Let N be a lattice of rank n, meaning that it is
a free Z-module generated by some basis w1 , . . . , wn ∈ N . So we have N ∼
= Zn as
Z-modules. Consider the vector space over R
NR := N ⊗Z R ∼
= Rn
obtained from extension of scalars. We can think of N as a collection of points
in N ⊗Z R that have integer coordinates. Let M = hom(N, Z) be the dual lattice
and use MR = M ⊗Z R to denote the corresponding vector space. There is a dual
pairing h−, −i : MR × NR → R defined by
h−, −i : MR × NR −→ R
(f, v) 7−→ f (v).
Let e1 , . . . , en be the standard basis for Rn (hence a basis for NR ). Then in coor-
dinates, the dual pairing can be written as
Pn Pn Pn
h i=1 ai e∗i , i=1 bi ei i = i=1 ai bi .
Further identifying (Rn )∗ with Rn , the pairing M × N is just the ordinary inner
product.
A convex polyhedral cone is the set
Ps
σ = { i=1 ri vi | ri ≥ 0}
for some v1 , . . . , vs ∈ NR . The cone is rational if vi ∈ N . It is strongly convex if it
does not contain a line through the origin. The dual cone σ ∨ ⊆ MR is defined to
be the set
σ ∨ := {u ∈ MR | hu, vi ≥ 0 for all v ∈ σ}.
A face τ of σ is the intersection of σ with any supporting hyperplane: τ = σ ∩ u⊥ =
{v ∈ σ | hu, vi = 0} for some u ∈ σ ∨ . A face of a cone is also a cone.
Let σ be a strongly convex rational polyhedral cone in NR . There is a semigroup
associated with σ:
Sσ := σ ∨ ∩ M.
The group algebra corresponding to σ is
C[Sσ ] := C[eu | u ∈ Sσ ]
whose elements are polynomials in the symbols eu with the relations
eu ev := eu+v and e0 := 1.
82 DAVID PERKINSON AND LIVIA XU
It turns out that Sσ is finitely generated over Z≥0 , i.e., So C[Sσ ] is finitely generated;
that is, fixing u1 , . . . , us , a minimal set of generators for Sσ , we have
C[Sσ ] ∼
= C[x1 , . . . , xs ]/I,
where I is the ideal generated by binomials corresponding to the relations between
the ui ’s:
xa1 1 · · · xann − xb11 · · · xbnn
Ps Ps
where i=1 ai ui = i=1 bi ui . By Hilbert’s theorem, a finite number of these
generators will suffice. Denote the zero set or vanishing set of I as
Example 13.2. Let n = 1. Note that τ = {0} is a cone. Its dual cone is τ ∨ =
MR ∼ = R, which is generated by v1 = e∗1 = 1 and v2 = −e∗1 = −1 with the single
relation v1 + v2 = 0. So
Uτ = {(u, 1/u) ∈ C2 | u 6= 0} ∼
= C∗ := C\{0}
(u, 1/u) 7→ u
(z, 1/z) ←[ z.
and Uτ = (C∗ )n , the n-torus. For any cone σ, the affine toric variety Uσ turns out
to contain the torus as a dense open subset.
C[Sσ ] ⊆ C[Sτ ] ∼
= C[y1 , . . . , ys , ys+1 ]/(I + (ys ys+1 − 1)),
NOTES ON MANIFOLDS 83
Figure 27. A cone σ and its dual. The generators of Sσ are circled.
u1 = e∗1 = (1, 0), u2 = e∗1 +e∗2 = (1, 1) and u3 = e∗1 +2e∗2 = (1, 2). The corresponding
group algebra is
(8) C[Sσ ] = C[xu1 , xu2 , xu3 ] = C[x1 , x1 x2 , x1 x22 ] ∼
= C[y1 , y2 , y3 ]/(y22 − y1 y3 ),
and Uσ = {(y1 , y2 , y3 ) ∈ C3 | y22 = y1 y3 } is a conic surface whose real points are
pictured in Figure 28. Now let τ = R≥0 e2 be the cone generated by e2 . Then τ ∨ is
Figure 28. The real points of the affine toric variety Uσ in Example 13.3.
the upper half plane, and Sτ is generated by e∗1 , −e∗1 , e∗2 . We have
C[Sτ ] = C[x1 , x−1 ∼
(9) 1 , x2 ] = C[z1 , z2 , z3 ]/(z1 z2 − 1).
So
'
→ C∗ × C
Uτ −
(z1 , z2 , z3 ) → (z1 , z3 ),
and the map Uτ → Uσ can be described by
C∗ × C −→ Uσ \{(y1 , y2 , y3 ) ∈ Uσ | y1 6= 0}
(z1 , z3 ) 7−→ (z1 , z1 z3 , z1 z32 ).
84 DAVID PERKINSON AND LIVIA XU
Here is a way to find the above mapping. Under the isomorphisms Equation 8 and
Equation 9, we have
y1 ↔ x1 , y2 ↔ x1 x2 , y3 ↔ x1 x22
and
z1 ↔ x1 , z2 ↔ x−1
1 , z3 ↔ x2 .
Eliminating the xi , we find z1 = y1 , y2 = z1 z3 , and y3 = z1 z32 .
A fan ∆ in NR is a collection of strongly convex rational polyhedral cones σ
in NR such that
(1) If τ is a face of σ, then τ ∈ ∆;
(2) If σ, τ ∈ ∆, then σ ∩ τ ∈ ∆.
Construction of a toric variety from a fan. Let ∆ be a fan in NR . The toric
variety associated to ∆, denoted X(∆) is constructed by starting with the disjoint
union of the affine toric varieties Uσ for all σ ∈ ∆. We then glue these varieties
together according to the following instructions. For each pair of cones σ, σ 0 ∈ ∆
we have induced mappings
C[Sσ ] ,→ C[Sσ∩σ0 ] ←- C[Sσ0 ] ! Uσ ←- Uσ∩σ0 ,→ Uσ0 .
Write
φσ,σ0
Uσ Uσ0
for the mapping that sends the image of each point p ∈ Uσ∩σ0 in Uσ to the image
of p in Uσ0 . The dashed arrow indicates that the mapping is not defined on the
whole domain and codomain. (However, it is a bijection on the images of Uσ∩σ0 .) In
this way, we glue Uσ to Uσ0 along Uσ∩σ0 and φσ,σ0 . For each pair of cones of ∆, we
perform these gluing to arrive at the toric variety X(∆). We then have Uσ ⊆ X(∆)
for each σ ∈ ∆, and the mappings φσ,σ0 then become transition functions. We omit
the verification that all of these gluings are compatible.
Definition 13.4. Let ∆ be a fan in NR . The toric variety associated to ∆, de-
noted X(∆), is constructed from ∆ by gluing the affine toric varieties Uσ for all
σ ∈ ∆.
Example 13.5. Consider the fan ∆ in R consisting of the three cones as shown in
Figure 29. We have
(0, 1) σ1 σ1∨
σ2∨
σ2
(1, 0)
(−1, −1) σ3
σ3∨
We claim that the corresponding toric variety is P2 . Indeed, one can see that
C[Sσ1 ] = C[u, v], C[Sσ2 ] = C[u−1 , u−1 v], C[Sσ3 ] = C[v −1 , vu−1 ],
and as a result,
Uσ1 = Uσ2 = Uσ3 = C2 .
However, notice that their intersections are C∗ × C and the gluing maps are given
by
Uσ1 99K Uσ2 99K Uσ3
(u, v) 7−→ (1/u, v/u) 7−→ (1/v, v/u).
Relabeling u and v as x2 /x1 and x3 /x1 , we see that the gluing morphisms are our
usual transition maps of P2 .
Exercise 13.7. Let en+1 = −e1 − . . . − en . Show that Pn can be constructed
as a toric variety using the fan ∆ with cones generated by the proper subsets of
e1 , . . . , en+1 . To start with, let σi be the cone over e1 , . . . , êi , . . . , en+1 , a maximal
cone in ∆. One can think of Uσi as the standard chart Ui of Pn and the gluing
morphisms are exactly the usual transition functions. You might find this package
of SageMath helpful when finding the generators of the dual cone σi∨ .
Example 13.8 (Hirzebruch Surfaces). Consider the fan depicted in Figure 31,
where the slanting arrow passes through the point (−1, a) for some a ∈ Z>0 . The
corresponding toric variety is called the Hirzebruch surface and is denoted Ha . The
reader should construct the dual cones and verify that
C[Sσ1 ] = C[u, v], C[Sσ2 ] = C[u, v −1 ],
86 DAVID PERKINSON AND LIVIA XU
These mappings glue together to give the projection Ha → P1 . Note that the
inverse image of every point in P1 is a P1 .
In general, suppose we have a Z-linear mapping φ : N → N 0 of lattices and fans ∆
and ∆0 in N and N 0 , respectively, having the property that for each cone σ ∈ ∆,
there is a cone σ 0 ∈ ∆0 such that φ(σ) ⊆ σ 0 , then—just as for the example of the
Hirzebruch surface, above—there is a natural mapping X(∆) → X(∆0 ) induced
by φ. In the example of the Hirzebruch surface we used φ(x, y) = x. Note that,
since a > 0, the mapping (x, y) 7→ y would not satisfy the condition.
NOTES ON MANIFOLDS 87
13.2. Toric varieties from polytopes. A rational polytope P is the convex hull
over a finite set of vertices X ⊆ Qn ⊆ Rn . We can also write P as the bounded
intersection of a finite collection of half spaces in Rn , that is,
P = {x ∈ Rn | Ax ≥ −b}
for some A ∈ Mn×n (Q) and b ∈ Qn . A (proper) face F of P of dimension n − k is
the intersection with k supporting hyperplanes of the form
{v ∈ P | hai , vi = −bi },
where ai is the i-th row of A. A facet is a face of dimension n − 1.
Given a rational polytope P ∈ MR ∼ = Rn , define a fan ∆P whose rays are the
inward pointing normals to the facets of P . There is a cone σv for each vertex v
of P , determined by the normals of the facets incident on v. A nice feature of this
construction is the dual cone of σv is formed by looking at v ∈ P and extending its
edges indefinitely (and translating to the origin).
Example 13.9. Consider the square P with vertices (0, 0), (1, 0), (2, 1), (0, 1) as
shown in the left of Figure 32. We get X(∆P ) = H1 , the Hirzebruch surface,
considered earlier, with a = 1.
v3
v2
σ4 σ1
v1 v4 σ3 σ3∨
σ2
P ⊂ MR
∆P
Exercise 13.10. Let n = 2 and let P be the convex polytope in MR with vertices
(−1, 1), (−1, −1), (2, −1). What is the corresponding toric variety X(∆P )? De-
scribe Uσ for each σ ∈ ∆P and the gluing maps between the two-dimensional cones
of ∆P .
Let ∆(1) denote the set of all 1-dimensional faces of ∆. For each D ∈ ∆(1),
let nD be the first lattice point along D. We define the Chow ring of X to be the
quotient ring graded by degree:
A• (X) := Z[D ∈ ∆(1)]/(I + J),
where
Q
I = ( D∈S D | S ⊆ ∆(1) does not generated a cone in ∆),
P
J = ( D∈∆(1) hm, nD iD | m ∈ M ),
where hm, nD i denotes the inner product.
The cohomology groups can be calculated through an extension of scalars:
Example 13.12. Let ∆ be the fan in Example 13.6 whose associated toric variety
is X = P2 . Let D1 = R≥0 e1 , D2 = R≥0 e2 , D3 = R≥0 (−e1 − e2 ). Then
I = (D1 D2 D3 ),
P3
J = ( i=1 hej , nDi iDi | j = 1, 2)
= (h(1, 0), (1, 0)iD1 + h(1, 0), (0, 1)iD2 + h(1, 0), (−1, −1)iD3 ,
h(0, 1), (1, 0)iD1 + h(0, 1), (0, 1)iD2 + h(0, 1), (−1, −1)iD3 )
= (D1 − D3 , D2 − D3 ).
We see that
A• (P2C ) = Z[D1 , D2 , D3 ]/(D1 D2 D3 , D1 −D3 , D2 −D3 ) ∼
= Z[D3 ]/(D33 ) = Z+ZD3 +ZD32 .
Therefore, we have the table
k 0 1 2 3 4
k 2 .
H (PC ) R 0 R 0 R
Exercise 13.13. What are the cohomology groups of Pn ?
by ∆(1). (Note that An−1 (X) ∼ = A1 (X) if X is smooth and complete.) We have
an injection of the lattice M (in which the dual cones sit) via
M → Z∆(1)
X
m 7→ Dm := hm, nD iD,
D∈∆(1)
then
S∼
M
= Sα ,
α∈An−1 (X)
and one can check that Sα · Sβ ⊆ Sα+β .
Example 13.16. The fan for Pn has n + 1 rays: Di = Rei for i = 1, . . . , n,
corresponding to the standard basis vectors, and Dn+1 = R(−e1 − · · · − en ). To
compute An−1 Pn = A1 Pn , we first find
P Pn+1
( D∈∆(1) hm, nD iD | m ∈ M ) = ( i=1 hej , nDi iDi | j = 1, . . . , n)
= (D1 − Dn+1 , D2 − Dn+1 , . . . , Dn − Dn+1 )
Therefore,
An−1 Pn = Z∆(1) /(D1 − Dn+1 , D2 − Dn+1 , . . . , Dn − Dn+1 ) = ZDn+1 ∼
=Z
Di 7→ Dn+1 7→ 1.
Letting xi = xDi for i = 1, . . . , n + 1, we have the homogeneous coordinate ring
S = C[x1 , . . . , xn+1 ].
a Pn+1
The degree of a monomial xa1 1 n+1
· · · xn+1 is the class of a = i=1 ai Di in An−1 Pn .
But in An−1 Pn , we have
n+1
X n+1
X
deg(xa ) = [a] = [ ai Di ] = [( ai )Dn+1 ].
i=1 i=1
Identifying An−1 Pn with Z, as above, we get
Pn+1
deg xa = i=1 ai ,
the usual degree.
Example 13.17. Let Ha denote the Hirzebruch surface as described in Exam-
ple 13.8. Recall that in Example 13.14 we computed
An−1 (Ha ) = Z∆(1) /hD1 − D3 , D2 + aD3 − D4 i → Z {D1 , D2 }
D1 7→ D1
D2 7→ D2
D3 7→ D1
D4 7→ aD1 + D2
where Z {D1 , D2 } := SpanZ {D1 , D2 }. Then, finally
An−1 (Ha ) ∼
= Z {D1 , D2 } ∼
= Z2
aD1 + bD2 7→ (a, b).
So the homogeneous coordinate ring for Ha is S = C[x1 , x2 , x3 , x4 ] and it is graded
by Z2 with each indeterminate having degree
deg(x1 ) = deg(x3 ) = (1, 0), deg(x2 ) = (0, 1), deg(x4 ) = deg(x2 xa3 ) = (a, 1).
For example, deg(x1 x22 x33 x4 ) = (1, 0) + 2(0, 1) + 3(1, 0) + (a, 1) = (4 + a, 3). The
monomial x21 x2 x33 x4 has degree (5 + a, 2). The degrees of these monomials in S
NOTES ON MANIFOLDS 91
differ even though they would have the same degree under the usual grading (in
which each xi had degree 1).
Quotients. Now assume that X is simplicial, meaning that each of its cones σ
has dim(σ) rays. (It suffices to check the maximal-dimensional cones.) Our goal is to
view X as a quotient space under some group action, generalizing the construction
of projective space. Consider the following short exact sequence of Z-modules:
0 −→ M −→ Z∆(1) −→ An−1 (X) −→ 0
X
m 7−→ hm, nD iD.
D∈∆(1)
∗
The nonzero complex numbers, C , form an abelian group under multiplication,
and hence has a Z-module structure: for a ∈ Z and z ∈ C∗ , we have a · z := z a .
It then makes sense to consider the mappings homZ (Z∆(1) , C∗ ), i.e, the Z-linear
mappings from Z∆(1) to C∗ . These are determined by the images of D ∈ ∆(1). For
instance, if ∆(1) = {D1 , . . . , Dk } and g1 , . . . , gk are any elements of C∗ , there is a
corresponding mapping
a1 D1 + · · · + ak Dk 7→ g1a1 · · · gkak ∈ C∗ .
Since the choices for the gi are arbitrary and completely determine the mapping,
we have
homZ (Z∆(1) , C∗ ) ∼
= (C∗ )∆(1) .
Next, consider homZ (An−1 (X), C∗ ). If ∆(1) = {D1 , . . . , Dk } and g1 , . . . , gk ∈
C , we can still attempt to define a mapping An−1 (X) → C∗ by sending Di 7→
∗
gi , as above. However, since there are relations among the Di in An−1 (X), the
corresponding relations must hold among the gi . In other words, the choices for
the gi are now constrained. For instance, if D3 = 2D1 + D2 , then since D3 7→ g3
and 2D1 + D2 7→ g12 g2 , we require that g3 = g12 g2 , i.e. g12 g2 g3−1 = 1. In general, we
define the group G by
hm,n i
G := homZ (An−1 (X), C∗ ) ∼ = {g ∈ (C∗ )∆(1) | D∈∆(1) gD D = 1 for all m ∈ M }.
Q
Q hm,n i
The conditions D∈∆(1) gD D = 1, encode all the required relations. It suffices to
he ,n i
let m range over a basis for M . So if M = Zn , the relations are D∈∆(1) gD i D = 1
Q
for i = 1, . . . , n.
The inclusion
G ⊆ (C∗ )∆(1) ⊆ C∆(1) .
gives a natural action of G on C∆(1) :
g · x := (gD xD )D∈∆(1) ,
where g ∈ G, x ∈ C∆(1) .
For a face σ ∈ ∆, let σ(1) := ∆(1) ∩ σ denote the rays in σ and define the
monomial Y
xσ̂ := xD ∈ S.
D ∈σ(1)
/
92 DAVID PERKINSON AND LIVIA XU
Example 13.19. Let X = Pn . The maximal cones of the fan in this case consists
of all subsets of size n from the vectors e1 , . . . , en , −e1 − · · · − en . If σ is one of
these cones, then it omits exactly one of these vectors. Hence,
B = (x1 , . . . , xn+1 )
and Z = {0} ⊂ Cn+1 . Since An−1 Pn ∼ = Z, we have G = hom(An−1 (Pn ), C, C ∗ ) ∼
=
∗ n
C . In detail, since An−1 (P ) is the span of the Di modulo the relations Di − Dn+1
for i = 1, ..n, we have
−1
G = g ∈ Cn+1 | gi gn+1
= 1 for i = 1, . . . , n
n+1
= g∈C | gi = gn+1 for i = 1, . . . , n
∼C ,
= ∗
where the isomorphism in the final step is given by g 7→ gn+1 . Using this iso-
mophism, the group action on C∆(1) = Cn+1 is given by
λ · (x1 , . . . , xn+1 ) = (λx1 , . . . , λxn+1 )
where λ = gn+1 ∈ C∗ .
As claimed in Theorem 13.18, the set C∆(1) \Z, i.e., Cn+1 \{0} is invariant under
the action by G: if (x1 , . . . , xn+1 ) 6= 0, then λ(x1 , . . . , xn+1 ) 6= 0. Further, Pn is
the quotient
Pn ∼ = (Cn+1 \{0})/(x ∼ λx | λ ∈ C∗ ).
13.5. Mapping toric varieties into projective spaces. In this section, we will
assume that X is a smooth, complete toric variety associated with the fan ∆.
Let ∆(1) = {D1 , . . . , D` } be the rays of ∆, and let S = C[x1 , . . . , x` ] be the homo-
geneous coordinate ring of X with xi corresponding to Di and graded by An−1 X.
Recall the short exact sequence
0 −→ M −→ Z∆(1) −→ An−1 (X) −→ 0
m 7−→ Dm
where
X
(10) Dm := hm, nD iD.
D∈∆(1)
The mapping φT will be an embedding if for each vertex v of P (E), as you travel
along each each emanating from v, the first lattice point you reach is an element
of T .
Example 13.21. Let X = Pn with divisors Di = R≥0 ei for i = 1, . . . , n and Dn+1 =
R≥0 (−e1 − · · · − en ), as usual. Consider the effective divisor E = dDn+1 for some
positive integer d. We have
Pn
P (E) = {m ∈ MR | hm, ei i ≥ 0 for i = 1, . . . , n and hm, − i=1 ei i ≥ −d}
Pn
= {m ∈ MR | mi ≥ 0 for i = 1, . . . , n and i=1 mi ≤ d},
i.e., P (E) is the simplex with vertices 0, de1 , . . . , den . When T = P (E)∩Zn contains
n+d
all the lattice points in P (E), we get the d-uple embedding Pn → P( d )−1 :
n+d
Pn −→ P( d )−1
(x0 , . . . , xn ) 7−→ (xd1 , x1d−1 x2 , . . . , xdn+1 ).
| {z }
all monomials of degree d
Example 13.22. Let H2 denote the Hirzebruch surface having its four rays gen-
erated by e1 , e2 , −e1 + 2e2 , −e2 :
D3 D2
(−1, 2) σ2
σ1
(0, 1)
σ3 D1
(1, 0)
σ4
(0, −1)
D4
Consider the effective divisor E = 2D3 + 3D4 . Taking the dot products of m =
(m1 , m2 ) ∈ MR = R2 with the first lattice points along each Di and using Equa-
tion 11 gives the inequalities defining the corresponding polytope:
P (E) = {m ∈ MR | m1 , m2 ≥ 0, −m1 + 2m2 ≥ −2, −m2 ≥ −3},
which is drawn in Figure 33. Consider the following set of lattice points in P (E).
T = {(0, 0), (0, 1), (0, 2), (1, 2), (0, 3), (3, 1), (8, 3)}.
Using Equation 10 we compute the corresponding divisors necessary for our map-
ping (as prescribed by Equation 12):
D(0,0) + E = 2D3 + 3D4 , D(0,1) + E = D2 + 4D3 + 2D4 ,
D(0,2) + E = 2D2 + 6D3 + D4 D(1,2) + E = D1 + 2D2 + 5D3 + D4 ,
D(0,3) + E = 3D3 + 8D4 , D(3,1) + E = 3D1 + D2 + D3 + 2D4
D(8,3) + E = 8D1 + 3D2 .
NOTES ON MANIFOLDS 95
(0, 3) (8, 3)
(1, 2)
(3, 1)
(0, 1)
(0, 0) (2, 0)
(0, 3) (8, 3)
(1, 2)
x1
x3
(0, 0) (2, 0)
x2
To get to the lattice point (1, 2), we need to slide the x1 -facet over one, the x2 -facet
up two, and the x4 -facet down one. That accounts for the first, second, and fourth
components of the exponent vector (1, 2, 5, 1). The following diagram illustrates
that the x3 -facet needs to be slid in its inward normal direction a total of five steps
to reach (1, 2):
96 DAVID PERKINSON AND LIVIA XU
x4
(0, 3) (8, 3)
(1, 2)
x1
x3
(0, 0) (2, 0)
x2
The reader should check that, for instance, monomial x31 x2 x3 x24 , corresponding to
the lattice point (3, 1), can be computed similarly.
In order to create an embedding of H2 using the divisor E = 2D3 + 3D4 , one
would needs to include not only the vertices in T , but also the first lattice points
as you travel away from each vertex along edges incident to the vertex. In other
words, T must include at least the vertices pictured in Figure 34.
(0, 3) (8, 3)
(0, 0) (2, 0)
14. Grassmannians
Definition 14.1. If V is a vector space over k, then projective space on V , de-
noted P(V ), is the collection of all one-dimensional subspaces of V . A special case
is Pnk := P(kn+1 ) (or, usually, Pn when k is clear from context). An r-plane in P(V )
is an (r + 1)-dimensional subspace of V .
Proposition 14.2. Suppose L is an r-plane, M is an s-plane, and L ∩ M is a k-
plane in Pn . Then k ≥ r + s − n.
Proof. Exercise.
Duality. An (n − 1)-plane in Pn is called a hyperplane. It is the solution set to a
linear equation
Ha := (a1 , . . . , an+1 ) · (x1 , . . . , xn+1 ) = a1 x1 + · · · + an+1 xn+1 = 0
for some a := (a1 , . . . , an+1 ) 6= (0, . . . , 0) ∈ kn+1 . Thus, a is the normal vector
to the hyperplane (in the case k = R or C). Further, a is determined by the
hyperplane up to scaling by a nonzero element of k. Define (Pn )∗ to be the dual
projective space whose points are the hyperplanes in Pn . Then we get a well-defined
bijection between Pn and its dual:
Pn → (Pn )∗
a 7→ Ha .
Definition 14.3. Let V be a vector space over k. The collection of (r + 1)-
dimensional subspaces of V is called a Grassmannian and denoted by G(r + 1, V ).
It is also called the Grassmannian of r-planes in P(V ) and then denoted by Gr P(V ).
If V = kn+1 , the Grassmannian is denoted by either G(r + 1, n + 1) or Gr Pn . (We
say Gr Pn is a moduli space space for the set of r-planes in Pn since it parametrizes
this set.)
Example 14.4. Some special cases of Grassmannians:
G(1, n + 1) = G0 Pn = Pn , G(2, 3) = G1 P2 = (P2 )∗ , Gn−1 Pn = (Pn )∗ .
The Grassmannian G(2, 4) = G1 P3 is the set of lines in three-space.
14.1. Manifold structure. We now fix our vector space k to be C (although a
lot of what we do below will carry over k = R or to arbitrary fields). Given
L ∈ G(r, n) = Gr−1 Pn−1 , we can write
L = Span {a1 , . . . , ar }
for some vectors a1 , . . . , ar ∈ Cn . Let A be the matrix whose rows are a1 , . . . , ar .
Then L = Span {b1 , . . . , br } for some other vectors bi if and only if there is an
invertible r × r matrix M such that B = M A, where B is the matrix whose rows
are the bi . (Multiplying A by M on the left performs invertible row operations and,
thus, does not change the rowspan.) Therefore,
G(r, n) = {r × n rank r matrices} (A ∼ M A : M r × r, invertible).
98 DAVID PERKINSON AND LIVIA XU
Identifying the set of r × n matrices with Cr×n = Crn induces a topology on the
set of r × n matrices and the quotient topology on G(r, n). (So a subset of G(r, n)
is open if and only if the set of all matrices representing points in that set forms an
open subset of Cr×n .) The case r = 1 recovers the usual construction of projective
space—A will be 1×n and M = [λ] for some nonzero λ ∈ C. In general, we consider
an r × n matrix of rank r to be the homogeneous coordinates for a point in G(r, n).
We now seek an open covering of G(r, n) and chart mappings generalizing those
for projective space. We motivate the idea with an example:
? ? ? ? ? ? ? ?
1 0 3 1 1 0 3 1 1 4 0 0 1 4 0 0
→ → → .
2 4 3 1 0 4 −3 −1 0 4 −3 −1 0 −4 3 1
L L0
Performing the same row operations to the identity matrix I2 yields the matrix
−1 1
M= ,
2 −1
and M L = L0 . Since the operations are invertible, all matrices row equivalent to L0
are equivalent to each other, i.e., represent the same point in G(2, 4). Knowing that
the columns 1 and 4 have been fixed, the point with homogeneous coordinates L can
then be uniquely represented by the entries in columns 2 and 3. If we then agree
to read the entries of a matrix from left-to-right and top-to-bottom, we assign
the unique point (4, 0, −4, 3) ∈ C4 to L. These are the coordinates of L with
respect to columns 1 and 4. Every 2 × 4 matrix whose first and fourth columns
are linearly independent will have similar coordinates. Also, every point x ∈ C4
has a corresponding point in G1 P3 represented by a matrix whose first and fourth
columns form I2 and whose second and third columns are given by the coordinates
of x. The result is a chart, which we will call (U1,4 , φ1,4 ).
We generalize the above example to create an atlas for G(r, n). Let j ∈ Zr
with 1 ≤ j1 < · · · < jr ≤ n. Given an r × n matrix L, let Lj be the square r × r
NOTES ON MANIFOLDS 99
Uj := {L ∈ G(r, n) | rk Lj = r} .
(13) φj : Uj → Cr(n−1)
L 7→ flattenj (L−1
j L),
Proposition 14.6. Using the notation defined above, {(Uj , φj )}j is an atlas for
G(r, n):
(1) each Uj is open in G(r, n);
(2) the Uj cover G(r, n); and
(3) φj : Uj → Cr(n−r) is a homeomorphism.
Proof. Let Ca×b denote that set of a × b matrices. Fix the isomorphism Ca×b ∼ =
Cab by reading the entries of a matrix from left-to-right and top-to-bottom. The
topology on Ca×b is determined by insisting the isomophism is a homeomorphism.
For each j, let πj be the projection mapping that sends an r×n matrix L to the r×r
submatrix Lj . Then we have a sequence of continuous mappings
πj det
Cr×n −→ Cr×r −−→ C
L 7→ Lr 7→ det(Lr ).
Definition 14.7. The standard atlas for G(r, n) is {(Uj , φj )}j where j = (j1 , . . . , jr )
with 1 ≤ j1 < j2 < · · · < jr ≤ n, and φj is defined in Equation 13.
100 DAVID PERKINSON AND LIVIA XU
Example 14.8. If L ∈ U1,2,4 ⊂ G(3, 7), then it has a representative of the form
1 0 ∗ 0 ∗ ∗ ∗
0 1 ∗ 0 ∗ ∗ ∗ .
0 0 ∗ 1 ∗ ∗ ∗
The 3(7 − 3) = 12 entries denoted by asterisks give the coordinates of the point via
the chart (U1,2,4 , φ1,2,4 ).
Proof. Since (any representative of) L ∈ G(r, n) has rank r, it follows that det(Lj ) 6=
0 for some j. Next, suppose that A and B are r × n matrices both representing L.
Then there exists an r × r matrix M such that B = M A. Therefore, for each
choice j of r columns, we have Bj = M Aj . Hence, det Bj = det M det Aj for all j.
n
So (Bj )j = λ(Aj )j where λ = det M 6= 0, and Λ(A) = Λ(B) ∈ P( r )−1 .
In this case, all other choices for I and J yield this same relation up to sign.
Therefore, the image of this Grassmannian under its Plücker embedding is
a quadric hypersurface in P5 . The reader may want to check that the coordinates
displayed in Example 14.11 satisfy this relation.
n
Theorem 14.15. The Plücker embedding Λ : G(r, n) → P( r ) is injective with image
equal to the zero-set of {PI,J }I,J .
Proof. We will just show that the image satisfies PI,J = 0 for each I, J. For
injectivity, one just back-solves the relations. See [9] for details. Let L = (aij )
be an r × n matrix representing a point in G(r, n). In the following calculation,
we expand the first (red) determinant along its last row; the expression in the
penultimate step evaluates to zero since the final (blue) matrix has a repeated row:
r+1
X
PI,J (Λ(L)) = (−1)k det Li1 ,...,ir−1 ,jk det Lj1 ,...,jbk ,...,jr+1
k=1
r+1 a1jk ad
1jk
=
X
(−1) · · · k .. ..
. ··· . ···
k=1
ar,jk ad
rjk
| {z }
expand
..
r+1 r . ad
1jk
=±
X
(−1)k
X
(−1)` a`jk ..
ad
`i1 · · · a
\`ir−1 ··· . ···
k=1 `=1 ..
. arjk
d
..
r . r+1 ad
1jk
X
= ± (−1)` ad
X k ..
··· a\ (−1) a`jk · · ·
`i1 `ir−1 . ···
`=1 .. k=1
. ad
rjk
102 DAVID PERKINSON AND LIVIA XU
··· ···
a`j1 a`jk a`jr+1
..
r
X . a1j1 ··· a1jk ··· a1jr+1
=± (−1)` ad ··· a\
`i1 `ir−1 ..
`=1 .. .
. arj1 ··· arjk ··· arjr+1
= 0.
14.3. Schubert varieties. Our goal is to describe the cohomology ring of a Grass-
mannian. In this section we describe sets whose classes will give a basis for that
ring. There are many closely-related cohomology theories. We have carefully con-
sidered de Rham cohomology. Appendix D is an introduction to abstract simplicial
homology, which can be dualized (by applying the function hom( · , R)) to define
cohomology groups). It is fairly straightforward to relate abstract simplicial coho-
mology to a cohomology theory for manifolds by, roughly, triangulating a manifold.
In section 13, we gave an operational definition of the Chow ring of a smooth com-
plete toric variety, and that will be the approach we take with Grassmannians. We
will give a very rough idea of the construction of the Chow ring, in general, and
give a recipe for computing it for the Grassmannian. We will then describe in detail
the meaning of the Chow classes for the Grassmannian and use calculations in the
Chow ring to solve interesting enumerative problems.
14.3.1. The Chow ring. An algebraic set is the set of solutions to a system of
polynomial equations. An algebraic set is irreducible if it cannot be written as a
proper union of algebraic sets. An irreducible algebraic set is called a variety.
Example 14.16. The algebraic set X = (x, y) ∈ R2 mod x(y − x2 ) = 0 is re-
ducible. It can be written as the union of the y-axis and a parabola, both of which
are varieties:
X = (x, y) ∈ R2 | x = 0 ∪ (x, y) ∈ R2 | y − x2 = 0 ,
If our defining polynomials come from the polynomial ring k[x1 , . . . , xn ], then
the corresponding algebraic set is a subset of kn . If X is defined by the poly-
nomials f1 , . . . , fk , we write X = Z(f1 , . . . , fk ). If the defining polynomials are
homogeneous, then we may regard X as a subset of projective space.
From now on, we will take k = R or C. When working over C, there is the option
of replacing C by R2 to define an algebraic set over R.
NOTES ON MANIFOLDS 103
Example 14.17. Consider the algebraic set X = (w, z) ∈ C2 | w = z 2 . This
is a variety in C2 , but since C = R2 , we may also consider it as a variety in R4 .
Letting w = a + bi and z = c + di and equating real and imaginary parts in w = z 2 ,
then X ⊂ R2 is defined by the system of equations
a = c2 − d2 and b = 2cd.
Thus, considering X as a subset of R4 , it is the intersection of two quadric hyper-
surfaces11 giving a (two-dimensional) a surface in R4 .
There are many ways to define the dimension of a variety X ∈ kn . We will state
one possibility that does not require a long digression. Say X is defined by the
system of equations f1 = · · · = fk = 0,
∂fi
dim X = n − max rk .
p∈X ∂xj i,j
∂fi
A point p at which the Jacobian matrix ∂x j
reaches its maximum rank is a
i,j
smooth point, and a non-smooth point is a singular point. The set of smooth points
of X will form a manifold.
Example 14.18. Let X = (x, y, z) ∈ R3 | z 2 = xy , a cone, with a single defining
equation f = z 2 − xy. The Jacobian matrix is
J = ∂f ∂f ∂f
∂x ∂y ∂z = −y −x 2z .
It has maximum rank of 1, which occurs at all points of X except (0, 0, 0). The
dimension of X is 3 − 1 = 2, and (0, 0, 0) is the only singular point.
P
Let X be a variety. An r-cycle of X is a finite formal sum i ni Vi where ni ∈ Z
and Vi is an r-dimensional subvariety of X. Let Zr (X) denote the set of all r-
cycles, an abelian group under addition. A subvariety V ⊆ X has codimension r
is dim V = dim X − r. So Y is a hypersurface in X if it has codimension one.
There is a notion of rational equivalence of subvarieties, which we will not define,
but which is an algebraic version of homotopy equivalence, If subvarieties V and W
are rationally equivalent, we will write V ∼ W . Rational equivalence preserves
many essential properties. For instance, if V ∼ W , then dim V = dim W .
14.3.2. Schubert varieties. For details regarding the following, see [9]. Through-
n
out, when necessary, we assume that Gr Pn is embedded in P( r ) via the Plücker
embedding.
Definition 14.24. If dim Ai = ai for all i, then we write S(a0 , . . . , ar ) or just (a0 , . . . , ar )
for the class [S(A0 , . . . , Ar )] ∈ A• Gr Pn . The (a0 , . . . , ar ) are called Schubert cycles
or Schubert classes.
Theorem 14.25. The Chow ring A• Gr Pn is a free abelian group on the Schubert
cycles (a0 , . . . , ar ) (i.e., it is generated by the Schubert cycles, and there are no
nontrivial integer combinations of the Schubert cycles that are 0). The codimension
of (a0 , . . . , ar ) is k := (r + 1)(n − r) − i (ai − i), i.e., (a0 , . . . , ar ) ∈ Ak Gr Pn .
P
The above result might be considered the solution to Hilbert’s 15th problem ([7]):
NOTES ON MANIFOLDS 105
II. Let (a0 , a1 , a2 ) = (3, 4, 5). Then S(A0 , A1 , A2 ) is those planes L in P5 such that
dim L ∩ A0 ≥ 0, dim L ∩ A1 ≥ 1 and dim L ∩ A2 ≥ 2.
However, by Proposition 14.2, these conditions are satisfied by all planes L. So
there is not condition at all, and S(A0 , A1 , A2 ) = G2 P5 .
III. Let (a0 , a1 , a2 ) = (1, 4, 5). Then S(A0 , A1 , A2 ) represents the condition that
a 2-plane meets a given line in a least a point, i.e., the plane intersects a given
line. The conditions dim L ∩ A1 ≥ 1 and dim L ∩ A2 ≥ 2 are non-conditions by
Proposition 14.2.
When is it the case that dim L ∩ Ai ≥ i imposes no condition, i.e., when is it the
case that every r-plane L satisfies this restriction? By Proposition 14.2,
dim L ∩ Ai ≥ dim L + dim Ai − n = r + ai − n
for all L and Ai . Therefore, dim L ∩ Ai ≥ i for all L ∈ Gr Pn if r + ai − n ≥ i, i.e., if
ai ≥ n − r + i.
For instance any of ar ≥ n, or ar−1 ≥ n−1, or ar−2 ≥ n−2 all impose no condition.
Since A0 ( · · · ( Ar ⊆ Pn , we have 0 ≤ a0 < · · · < ar ≤ n. So if ai = n − r + i for
some i, then aj = n − r + j for all j ≥ i.
106 DAVID PERKINSON AND LIVIA XU
Example 14.27. In G3 P6
(3, 4, 5) = [G3 P6 ].
In (2, 4, 5), the entries a1 = 4 and a2 = 5 impose no conditions. So the only
condition of consequence is that dim L ∩ A0 ≥ 0. This is the condition that the
solid (i.e., the 3-plane) L intersects the plane A0 . (In 6-space, there is just room for
a solid and a plane to not meet.) Similarly, in (1, 3, 5), the entry a2 = 5 imposes no
condition. So (1, 3, 5) can be thought of as the class in the Chow ring represented
by all solids L that intersects a given line (dim L ∩ A0 ≥ 0) and meet a given solid
in at least a line (dim L ∩ A1 ≥ 1).
Example 14.28. A basis for the Chow ring A• G1 P3 consist of the Schubert
classes (a0 , a1 ) with 0 ≤ a0 < a1 ≤ 3. Here is a table of theses classes and their
interpretation (the alternate notation for the class using curly braces is for use in
the next section):
The product of two Schubert classes {λ} and {µ} will be a unique integer linear
combination of Schubert classes. So we can write
X
{λ} · {µ} = cνλµ {ν} .
{ν}∈A• Gr Pn
The integers cνλµ are called Littlewood-Richardson coefficients. It was shown in 2006
that the general problem of computing Littlewood-Richardson numbers is #-P com-
plete. Nevertheless, there are several ways of calculating them, of which we now
describe one.
0 0
0 0 1 0 1
1 0
From the theorem, it is easy to see that cνλµ = 0 unless λ, µ ⊆ ν, i.e., unless
λi ≤ νi and µi ≤ νi for all i. It is a little harder to see that cνλµ = cνµλ .
Remark 14.33. From now on, we will feel free to represent a Schubert class by its
corresponding Young diagram. Take note of the following
(1) The Young diagrams that represent Schubert classes for Gr Pn must fit in an (r+
1) × (n − r) box of unit squares. Thus, when computing products in the Chow
ring using the formula {λ} · {µ} = ν∈A• Gr Pn cνλµ {ν} are those ν that fit in
P
that box.
(2) The identity in A• Gr Pn is [Gr Pn ] since intersecting a subvariety of Gr Pn with
the whole space does not change the variety. We have [Gr Pn ] = (n − r, n − r +
1, . . . , n) = {0, . . . , 0} = λ. Note that |λ| = 0, which agrees with the fact that
the class has codimension 0.
(3) The class of a single point in Gr Pn , i.e., of an r-plane in Pn , has the form
a = (0, 1, . . . , r) = {n − r, . . . , n − r} = λ, with Young diagram consisting of
the (r+1)×(n−r) rectangle. Note that the codimension is |λ| = (r+1)(n−r) =
dim Gr Pn , as expected.
Example 14.34. The reader should verify the following multiplication table for A• G1 P3 :
∗ 1
1 1
+ 0
0 0 0
0 0 0
0 0 0 0
0 0 0 0 0
The class of represents the condition “to meet a given line” and has codimen-
4
sion 1, therefore represents the condition “to meet four given lines”. To de-
termine how many lines in 3-space meet four generic lines, we use the table to
compute:
4
= 2 + = 2 =2 .
NOTES ON MANIFOLDS 109
Since is the class of a point, i.e., of a line in 3-space, we conclude that there
are two lines meeting four generic lines in 3-space.
Here is a sketch of another proof that there are two lines meeting four general
lines in 3-space. Call the four general lines L1 , L2 , L3 , L4 . Walk along L3 , and at
each point p ∈ L3 , stop and look out at the lines L1 and L2 . They will appear
to cross at some point. Draw a line from p through that point. One may check
that the collection of all lines drawn in this way forms a quadric surface Q in 3-
space—the zero-set of a single equation of degree two. Through each point in Q,
there is a line lying on Q that meets L1 , L2 and L3 . Now consider line L4 . A line
and a surface in P3 must meet, and since L4 is general, it will not be tangent to Q.
Since Q is defined by an equation of degree two, the line will meet in two points,
each of which corresponds to a line we drew earlier. These two lines are exactly
those meeting L1 , L2 , L3 , and L4 .
Finally, we mention a proof given by Schubert using his “principle of conservation
of number”: that the number of solutions will remain the same as the parameters of
the configuration are continuously changed as long as the number of solutions stays
finite. Suppose that L1 and L2 lie in a plane P12 and L3 and L4 lie in a plane P34 .
How many lines meet these four lines now that they are in special position? Since
we are working in projective space, both L1 ∩ L2 and L3 ∩ L4 will be points. The
line through these two points yields one solution. The intersection of P12 and P34
will be a line. Since that line sits in P12 it will meet both L1 and L2 , and it similarly
meets both L3 and L4 .
110 DAVID PERKINSON AND LIVIA XU
Note: In this text will assume all of our differentiable mappings are
smooth and use the words “differentiable” and “smooth” interchange-
ably.
For the definitions of boundary and interior is Definition B.1. In the change of
variables theorem, we have a sequence of mappings
φ f
K− → Rn .
→ φ(K) −
The mapping φ is the change of variables. For instance, if we are integrating a
function over a 2-sphere, K might be a rectangle and φ spherical coordinates.
Remark A.13.
(1) In the above displayed formulas, the stuff on the left of the equals sign is
just notation for the stuff on the right-hand side. In other words, the left-
hand side has no independent meaning. The same holds for the formulas
appearing in the definitions below.
(2) The path integral is sometimes called the line integral or the curve integral.
R
Another notation for the path integral is C f dC.
(3) The above integrals may not exist. They will exist if, for example, f is
continous on the image of C and C 0 is continous.
(4) Think of the factor of |C 0 (t)| as a “stretching” factor, recording how much
the domain, [a, b] is stretched by C as it is placed in Rn .
C(k)
C(1)
C
a b C(i − 1)
0 1 i−1 i k C(0) C(i)
We want to add up the lengths of these line segments, weighting each one using f ,
which we might as well evaluate at an endpoint of each segment. Thus, we get
k
X
weighted length of C ≈ f (C(ti )) |C(ti ) − C(ti−1 )|
i=1
k
X C(ti ) − C(ti−1 )
= f (C(ti )) (ti − ti−1 )
i=1
ti − ti−1
Z b
≈ (f ◦ C) |C 0 (t)|.
a
Remark A.15.
(1) Another notation for the flow is C F · ~t. If C is a closed curve, meaning
R
H
C(a) = C(b), then one often sees the notation C F · dC.
(2) Working out the definition given above, one sees that
Z Z b
F · dC = F (C(t)) · C 0 (t).
C a
Rb
Geometric Motivation. As claimed above, the flow can be expressed as a
F (C(t))·
C 0 (t), which can be re-written as
Z b
C 0 (t)
F (C(t)) · 0 |C 0 (t)|.
a |C (t)|
Notice the unit tangent vector appearing inside the square brackets. The quantity
inside the brackets is thus the component of F in the direction in which C is
moving. The factor of |C 0 (t)| is the stretching factor which appears when taking
a path integral of a function. Comparing with the geometric motivation given for
the path integral, we see that the flow is measuring the length of C, weighted by
the component of F along C. Hence, it makes sense to call this integral the flow
of F along C.
114 DAVID PERKINSON AND LIVIA XU
Z Z
f := (f ◦ S) |Su × Sv |,
S D
where Su and Sv are the partial derivatives of S with respect to u and v, respectively.
If f = 1, we get the surface area of S:
Z Z
area(S) = 1= |Su × Sv |.
S D
Remark A.17.
v × w := (v2 w3 − v3 w2 , v3 w1 − v1 w3 , v1 w2 − v2 w1 ),
i j k
v × w = det v1 v2 v3 = ([2, 3], [3, 1], [1, 2]),
w1 w2 w3
The important things to remember about the cross product are: (i) it is
a vector perpendicular to the plane spanned by v and w, (ii) its direction
is given by the “right-hand rule”, and (iii) its length is the area of the
parallelogram spanned by v and w.
(2) The factor |Su × Sv | is the area of the parallelogram spanned by Su and Sv
and should be thought of as the factor by which space is locally stretched
by f (analogous to |C 0 (t)| for curve integrals).
Sv
S
J S(J)
Su
Remark A.21.
(1) The distinguishing feature for this integral is that both the domain and
codomain are subsets of Rn .
(2) One could leave off the absolute value signs about the determinant. In that
case, we would be taking orientation into account, and some parts of the
integral could cancel with others.
Geometric Motivation. The integral is supposed to measure the volume of V
weighted by f . To this end, partition the domain, D. Let J be a subrectangle
of the partition. Then V (J) is a warped rectangle which can be approximated by
the parallelepiped spanned by the partials of V , i.e., the columns of the Jacobian
matrix, V 0 , scaled by vol(J). Recall that the volume of the parallelepiped spanned
by the columns of a square matrix is given by the absolute value of the determinant
of the matrix. Hence,
X
weighted volume ≈ f (V (xJ )) vol(V (J)) (xJ any point in J)
J
X
≈ f (V (xJ )) | det V 0 | vol(J)
J
Z
≈ (f ◦ V ) | det V 0 |.
D
NOTES ON MANIFOLDS 117
Case k = 1
To apply Stokes’ theorem in the case k = 1, we start with a 1-chain C and a
0-form ω. We will consider the case where C: [0, 1] → Rn is a curve in Rn . Recall
that a 0-form is a function: ω = f : Rn → R.
Theorem A.23. The flow of the gradient vector field ∇f along C is given by the
change in potential: Z
∇f · dC = f (C(1)) − f (C(0)).
C
Case k = 2
To apply Stokes’ theorem in the case k = 2, we start with a 2-chain and a 1-form.
For the 2-chain, we will take a surface S: D → R3 , where D = [0, 1] × [0, 1]. The 1-
form looks like ω = F1 dx+F2 dy+F3 dz. Let F := (F1 , F2 , F3 ) be the corresponding
vector field. For Stokes’ theorem, we need to consider dω. A straightforward
calculation (do it!) yields
dω = (D2 F3 − D3 F2 ) dy ∧ dz − (D3 F1 − D1 F3 ) dx ∧ dz + (D1 F2 − D2 F1 ) dx ∧ dy,
which corresponds to the vector field called the curl of F .
Theorem A.25. The flux of the curl of F through the surface S is equal to the
flow of F along the boundary of the surface:
Z Z
curl(F ) · ~n = F · dC
S ∂S
where C = ∂S.
thus, approximately the normal component, curl(F )(p) · v times the area of Dε . By
Stokes’ theorem, the flux is the circulation about the boundary. Thus,
Z
1
curl(F )(p) · v ≈ F · dC,
area(Dε ) C
where C = ∂Dε . It turns out that we get equality if we take the limit as ε → 0.
Hence, the component of the curl in the direction of v measures the circulation
of the original vector field about a point in the plane perpendicular to v. In this
sense, the curl measures “circulation density”. So roughly, Stokes’ theorem says
that integrating circulation density gives the total circulation.
Case k = 3
In this case, we are concerned with a 3-chain and a 2-form. We will consider the
case where the chain is a solid V : D → R3 , where D = [0, 1] × [0, 1] × [0, 1]. We will
assume that det V 0 ≥ 0 at all points. The 2-form can be written
ω = F1 dy ∧ dz − F2 dx ∧ dz + F3 dx ∧ dy.
dω = (D1 F1 + D2 F2 + D3 F3 ) dx ∧ dy ∧ dz.
(The first equality follows from the definition of div(F ), the second from the def-
inition of integration of a differential form, and the third from the definition of a
solid integral, given earlier in this handout.) The right-hand side of Stokes’ is by
definition the flux of F through the boundary of V . The classical result is:
Definition A.27. The integral of the divergence of F over V is equal to the flux
of F through the boundary of V :
Z Z
div(F ) = F · ~n.
V ∂V
change much from div(F )(p) on Vε . Hence, the integral of the divergence will be
approximately div(F )(p) times the volume of Vε . By Stokes’ we get
Z
1
div(F )(p) ≈ F · ~n,
vol(Vε ) S
where S = ∂V . Taking a limit gives an equality. Thus, the divergence measures
“flux density”: the amount of flux per unit volume diverging from a given point.
So Stokes’ theorem in this case is saying that the integral of flux density gives the
total flux.
Appendix B. Topology
B.1. Topological spaces. This appendix is an extraction of the salient parts, for
our notes, of An outline summary of basic point set topology, by Peter May. (Note:
this reference’s definition of a neighborhood differs slightly from ours—we do not
insist that a neighborhood is open. See below.)
Exercise B.7. Let X be a Hausdorff topological space. Show that {x} is closed
for each point x ∈ X.
Definition B.8. Let X and Y be topological spaces. The product topology on X×Y
has basis consisting of the sets U × V where U is open in X and V is open in Y .
Unless otherwise indicated, we will always assume the product topology on X × Y .
Exercise B.9. Show that a topological space X is Hausdorff if and only if the
diagonal ∆ := {(x, x) ∈ X × X : x ∈ X} is closed.
Exercise B.13. Show that a subspace of a Hausdorff space is Hausdorff and that
the product of Hausdorff spaces is Hausdorff. Show that the quotient of a Hausdorff
space need not be Hausdorff.
Exercise B.14.
(1) Show that f : X → Y is continuous if and only if for each x ∈ X and neigh-
borhood V of f (x), there exists a neighborhood U of x such that f (U ) ⊆ V .
(2) Show that a function f : Rn → Rm is continuous if and only if it satisfies
the usual -δ definition of continuity.
Definition B.15.
(1) X is connected if the only subsets of X that are both open and closed are ∅
and X, itself.
(2) X is path connected if every two points of X can be connected by a path.
Theorem B.19. If a topological space is locally path connected, then its components
and path components coincide.
B.4. Compactness.
B.5. Partition of unity. Let M be a manifold that is second countable and Haus-
dorff, and let U be an open cover of M . Then there exists a collection Ξ of smooth
functions M → [0, 1] ⊂ R such that:
• For each λ ∈ Ξ there exist U ∈ U containing supp(λ). [Note: The support
of λ is defined by
supp(λ) := {x ∈ M : λ(x) 6= 0} .]
• For each x ∈ M , there exists an open neigborhood V of x such that λ|V = 0
for all but finitely many λ ∈ Ξ, and
X
λ(x) = 1.
λ∈Ξ
[Note: the sum makes sense since all but finitely terms are zero.]
Partitions of unity are important to glue together locally-defined objects. For
instance, integration of forms on manifolds is at first defined locally, then glued
together for a global definition. For more on the existence of partitions of unity,
see the Wikipedia page on paracompact spaces.
We can then define integration of nice functions on the set. We first define “nice”:
where the sup is over all simple functions φ such that 0 ≤ φ(x) ≤ f (x) for all x ∈ X
(see Figure 35).
If f is not necessarily nonnegative, then write f = f + − f − where
( (
+ f (x) if f (x) ≥ 0 − −f (x) if f (x) ≤ 0
f (x) := and f (x) :=
0 otherwise 0 otherwise.
R + R −
Then f is integrable if f and f are finite, in which case
Z Z Z
f := f + − f − .
E1 E2 E3 E4 E5
Lebesgue measure. We now turn to the case of interest for this text: the standard
measure on Rn . To approximate the size of a set X ⊂ Rn , we cover it with
rectangles:
NOTES ON MANIFOLDS 125
where the inf is over sequence of rectangles I1 , I2 , . . . covering X, i.e., such that
X ⊆ ∪ki=1 Ii . The set X is Lebesgue measurable if it “splits additively in measure”:
outerX = outer(X ∩ A) + outer(X ∩ Ac )
for all A ⊆ Rn .
Exercise D.1. For a short exact sequence of R-modules as above, show that
(1) f is injective;
(2) g is surjective;
(3) V 00 is isomorphic to coker(f );
(4) dim V = dim V 0 + dim V 00 .
φ0 φ φ00
0 / W0 h /W k / W 00 .
0 00
(By commutative, we mean φ ◦ f = h ◦ φ and φ ◦ g = k ◦ φ.)
The snake lemma says there is an exact sequence
ker φ0 → ker φ → ker φ00 → coker φ0 → coker φ → coker φ00 .
126 DAVID PERKINSON AND LIVIA XU
Example D.4. Figure 36 pictures a simplicial complex ∆ on the set [5] := {1, 2, 3, 4, 5}:
1 4
5
2
Its facets are 5, 24, 34, and 123. The dimension of ∆ is 2, as determined by the
facet 123. Since not all of the facets have the same dimension, ∆ is not pure.
Therefore,
∂(σ) = ∂2 (134) = 34 − 14 + 13.
128 DAVID PERKINSON AND LIVIA XU
2 2 3 3 3
+
∂1 ∂2
− 1 2
1 1 1 2 1 2
12 2−1 123 23 − 13 + 12
4 3 4
4 3
3
∂3 4
1 3 2
1 2
1 2
1 2
4
3
2
1
1 2
4
12
1 −1
1
2
3 0 1 2 3 4
4 0 ∅ 1 1 1 1
0 R R4 R 0.
∂1 ∂0
Definition D.7. For i ∈ Z, the i-th (reduced) homology of ∆ is the vector space
H
e i (∆) := ker ∂i / im ∂i+1 .
In particular, H
e n−1 (∆) = ker(∂n−1 ), and H
e i (∆) = 0 for i > n−1 or i < 0. Elements
of ker ∂i are called i-cycles and elements of im ∂i+1 are called i-boundaries. The
i-th (reduced) Betti number of ∆ is the dimension of the i-th homology:
e i (∆) = dim(ker ∂i ) − dim(∂i+1 ).
β̃i (∆) := dim H
Remark D.8. To define ordinary (non-reduced) homology groups, Hi (∆), and Betti
numbers βi (∆), modify the chain complex by replacing C−1 (∆) with 0 and ∂0 with
130 DAVID PERKINSON AND LIVIA XU
the zero mapping. The difference between homology and reduced homology is that
H0 (∆) ' R ⊕ He 0 (∆) and, thus, β0 (∆) = β̃0 (∆) + 1. All other homology groups
and Betti numbers coincide. From now on, we use “homology” to mean reduced
homology.
Exercise D.9. Show that β̃0 (∆) is one less than the number of connected compo-
nents of ∆.
1 2
has chain complex
12 13 23
1 −1 −1 0
2 1 0 −1 1 2 3
3 0 1 1 ∅ 1 1 1
3
0 R R3 R 0.
∂1 ∂0
It is easy to see that dim(∂1 ) = dim(ker ∂0 ) = 2. It follows that β̃0 (∆) = 0, which
could have been anticipated since ∆ is connected. Since dim(∂1 ) = 2, rank-nullity
says dim(ker ∂1 ) = 1, whereas ∂2 = 0. Therefore, β̃1 (∆) = dim(ker ∂1 )−dim(∂2 ) =
1. In fact, He 1 (∆) is generated by the 1-cycle
3
23 − 13 + 12 = .
1 2
If we would add 123 to ∆ to get a solid triangle, then the above cycle would
be a boundary, and there would be no homology in any dimension. Similarly, a
solid tetrahedron has no homology, and a hollow tetrahedron has homology only in
dimension 2 (of rank 1).
NOTES ON MANIFOLDS 131
Exercise D.11. Compute the Betti numbers for the simplicial complex formed by
gluing two (hollow) triangles along an edge. Describe generators for the homology.
Example D.12. Consider the simplicial complex pictured in Figure 41 with facets
14, 24, 34, 123. It consists of a solid triangular base whose vertices are connected by
edges to the vertex 4. The three triangular walls incident on the base are hollow.
[Figure 41: the simplicial complex with facets 14, 24, 34, and 123.]
The chain complex is

0 → R --∂2--> R^6 --∂1--> R^4 --∂0--> R → 0,

where

∂2 (rows: edges; column: the triangle 123):
       123
 12     1
 13    −1
 14     0
 23     1
 24     0
 34     0

∂1 (rows: vertices 1, 2, 3, 4; columns: edges 12, 13, 14, 23, 24, 34):
      12   13   14   23   24   34
 1    −1   −1   −1    0    0    0
 2     1    0    0   −1   −1    0
 3     0    1    0    1    0   −1
 4     0    0    1    0    1    1

∂0 (row: ∅; columns: vertices 1, 2, 3, 4):
       1    2    3    4
 ∅     1    1    1    1
[Figure 42. A tetrahedron with solid base and hollow walls. Cycles around the
walls sum to the boundary of the base, illustrating a dependence among the cycles
in the first homology group.]
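The dependence mentioned in the caption can be checked directly: orienting each wall cycle as the boundary of the missing triangle 124, 134, or 234, the signed sum w124 − w134 + w234 equals the boundary of the solid base 123. A small NumPy sketch (variable names ours):

```python
import numpy as np

# d1 for Example D.12, with edge basis ordered 12, 13, 14, 23, 24, 34.
d1 = np.array([[-1, -1, -1,  0,  0,  0],
               [ 1,  0,  0, -1, -1,  0],
               [ 0,  1,  0,  1,  0, -1],
               [ 0,  0,  1,  0,  1,  1]])

# 1-cycles around the three hollow walls and the boundary of the solid base.
w124 = np.array([1, 0, -1, 0, 1, 0])   # 24 - 14 + 12
w134 = np.array([0, 1, -1, 0, 0, 1])   # 34 - 14 + 13
w234 = np.array([0, 0, 0, 1, -1, 1])   # 34 - 24 + 23
base = np.array([1, -1, 0, 1, 0, 0])   # 23 - 13 + 12

print(all(not np.any(d1 @ w) for w in (w124, w134, w234)))   # True: all are cycles
print(np.array_equal(w124 - w134 + w234, base))              # True: the dependence
```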
Recall that β̃i := β̃i(∆) := dim H̃i(∆) = nullity(∂i) − dim(∂i+1).
Examples D.13.
I.1. Four isolated vertices: the complex with facets 1, 2, 3, 4. The chain complex is

0 → R^4 --∂0--> R → 0,

where

∂0 (row: ∅; columns: vertices 1, 2, 3, 4):
       1    2    3    4
 ∅     1    1    1    1
We have dim(∂0 ) = 1, and hence by the rank-nullity theorem, nullity = 4 − 1 = 3.
The only non-vanishing homology group is

H̃0(∆) = ker ∂0 / im ∂1 = ker ∂0 = Span{2 − 1, 3 − 1, 4 − 1},

so β̃0 = 3.
Question D.14. What are the homology groups and Betti numbers for the complex ∆
with facets 1, . . . , n, for general n ≥ 1?
I.2. The complex with facets 15, 26, 3, and 4: six vertices, with 5 joined by an edge
to 1 and 6 joined by an edge to 2. The chain complex is

0 → R^2 --∂1--> R^6 --∂0--> R → 0,

where

∂1 (rows: vertices 1, . . . , 6; columns: edges 15, 26):
      15   26
 1    −1    0
 2     0   −1
 3     0    0
 4     0    0
 5     1    0
 6     0    1

∂0 (row: ∅; columns: vertices 1, . . . , 6):
       1    2    3    4    5    6
 ∅     1    1    1    1    1    1
We have
dim(∂0 ) = 1, nullity(∂0 ) = 6 − 1 = 5
dim(∂1 ) = 2, nullity(∂1 ) = 0.
Hence β̃0(∆) = 5 − 2 = 3 and, as in I.1,
H̃0(∆) ≅ Span{2 − 1, 3 − 1, 4 − 1}.
II.1. The hollow triangle again: vertices 1, 2, 3 and edges 12, 13, 23. The chain
complex is

0 → R^3 --∂1--> R^3 --∂0--> R → 0,

where

∂1 (rows: vertices 1, 2, 3; columns: edges 12, 13, 23):
      12   13   23
 1    −1   −1    0
 2     1    0   −1
 3     0    1    1

∂0 (row: ∅; columns: vertices 1, 2, 3):
       1    2    3
 ∅     1    1    1
We have
dim(∂0 ) = 1, nullity(∂0 ) = 3 − 1 = 2
dim(∂1 ) = 2, nullity(∂1 ) = 3 − 2 = 1.
II.2. The pentagon: vertices 1, . . . , 5 and edges 12, 23, 34, 45, 15. The chain
complex is

0 → R^5 --∂1--> R^5 --∂0--> R → 0,

where

∂1 (rows: vertices 1, . . . , 5; columns: edges 12, 23, 34, 45, 15):
      12   23   34   45   15
 1    −1    0    0    0   −1
 2     1   −1    0    0    0
 3     0    1   −1    0    0
 4     0    0    1   −1    0
 5     0    0    0    1    1

∂0 (row: ∅; columns: vertices 1, . . . , 5):
       1    2    3    4    5
 ∅     1    1    1    1    1
We have
dim(∂0 ) = 1, nullity(∂0 ) = 5 − 1 = 4
dim(∂1 ) = 4, nullity(∂1 ) = 5 − 4 = 1.
Question D.16. What happens in homology if we start with the triangle and sub-
divide its edges arbitrarily?
III.1. The graph with vertices 1, . . . , 5 and edges 12, 13, 14, 23, 25, 34, 35: three
hollow triangles, 134, 123, and 235, glued along the edges 13 and 23. Let’s compute
the first homology. The relevant part of the chain complex is

0 --∂2--> R^7 --∂1--> R^5,

where

∂1 (rows: vertices 1, . . . , 5; columns: edges 12, 13, 14, 23, 25, 34, 35):
      12   13   14   23   25   34   35
 1    −1   −1   −1    0    0    0    0
 2     1    0    0   −1   −1    0    0
 3     0    1    0    1    0   −1   −1
 4     0    0    1    0    0    1    0
 5     0    0    0    0    1    0    1
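Here is a quick numerical check of that computation (a sketch only; it assumes the matrix of ∂1 above is transcribed correctly). Since there are no 2-dimensional faces, β̃1 is just the nullity of ∂1:

```python
import numpy as np

# d1 for III.1, rows 1..5 and columns ordered 12, 13, 14, 23, 25, 34, 35.
d1 = np.array([[-1, -1, -1,  0,  0,  0,  0],
               [ 1,  0,  0, -1, -1,  0,  0],
               [ 0,  1,  0,  1,  0, -1, -1],
               [ 0,  0,  1,  0,  0,  1,  0],
               [ 0,  0,  0,  0,  1,  0,  1]])

rank_d1 = np.linalg.matrix_rank(d1)   # 4
print(d1.shape[1] - rank_d1)          # nullity(d1) = 3, so betti_1 = 3
```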
III.2.
[Figure: a simplicial complex on the vertices 1, 2, 3, 4, 5.]
Exercise D.18. Draw the following simplicial complexes, determine their Betti
numbers, and describe bases for their homology groups. Recall that a facet of a
simplicial complex is a face that is maximal with respect to inclusion. So we can
describe a simplicial complex by just listing its facets. The whole simplicial complex
then consists of the facets and all of their subsets.
(1) ∆ with facets 123, 24, 34, 45, 56, 57, 89, 10.
(2) ∆ with facets 123, 14, 24, and 34.
(3) ∆ with facets 123, 124, 134, 234, 125, 135, 235. (Two hollow tetrahedra
glued along a face.)
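For checking hand computations like these, one can compute Betti numbers directly from a facet list. The following sketch (Python with NumPy; all function names are ours, not the text's) generates the faces, assembles the reduced boundary matrices with the sign convention used above (so that ∂(12) = 2 − 1 and ∂(123) = 23 − 13 + 12), and applies β̃i = nullity(∂i) − rank(∂i+1):

```python
from itertools import combinations
import numpy as np

def faces(facets):
    """All nonempty faces of the simplicial complex generated by the facets."""
    out = set()
    for F in facets:
        F = tuple(sorted(F))
        for k in range(1, len(F) + 1):
            out.update(combinations(F, k))
    return out

def boundary_matrices(facets):
    """D[i] is the matrix of the reduced boundary map C_i -> C_{i-1},
    where C_{-1} is spanned by the empty face."""
    fs = faces(facets)
    top = max(len(f) for f in fs) - 1                 # dimension of the complex
    basis = {i: sorted(f for f in fs if len(f) == i + 1) for i in range(top + 1)}
    basis[-1] = [()]                                  # the empty face
    D = {}
    for i in range(top + 1):
        rows, cols = basis[i - 1], basis[i]
        row_index = {f: r for r, f in enumerate(rows)}
        M = np.zeros((len(rows), len(cols)))
        for c, face in enumerate(cols):
            for j in range(len(face)):                # drop the j-th vertex
                M[row_index[face[:j] + face[j + 1:]], c] = (-1) ** j
        D[i] = M
    return D

def reduced_betti(facets):
    D = boundary_matrices(facets)
    top = max(D)
    betti = []
    for i in range(top + 1):
        nullity = D[i].shape[1] - np.linalg.matrix_rank(D[i])
        rank_up = np.linalg.matrix_rank(D[i + 1]) if i + 1 in D else 0
        betti.append(int(nullity - rank_up))
    return betti

print(reduced_betti([(1, 2), (1, 3), (2, 3)]))             # hollow triangle: [0, 1]
print(reduced_betti([(1, 2, 3), (1, 4), (2, 4), (3, 4)]))  # the complex of Example D.12
```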