M382D Attempted HW Problems

Kenneth DeMason, December 23, 2021

The following is a compilation of the problems I attempted from Dan Freed’s M382D - Differential
Topology Spring 2021 course at UT Austin. Sometimes these problems come from Guillemin and
Pollack, some are from Warner, and others are Dan’s own problems. The table of contents properly
labels these.

The goal of the course was to train you to think like a researcher, so his problems were often
open-ended and left to interpretation. While we could choose how many to attempt each week, I
often attempted most or all of them. Such solutions are provided here, with comments
from the TA (I’m not sure if we were graded based on completion or correctness) in red. I’ve
commented on some of these (especially if there were noticeable errors) in blue. Also, Dan uses a
different convention than Guillemin and Pollack for some things; hence in some problems I have to
do strange conversions between the two.

I tend to be very thorough with my solutions. This is especially apparent in the first few homework
sets, where I was still gauging how much work was expected of us. Needless to say, you do not need
to show every single step as I did. Please do not hesitate to reach out for clarification.

Contents
HW 1 5
Guillemin/Pollack: Chapter 1.2: 2, 4, 10, 11. 5
1.2.2. 5
1.2.4. 5
1.2.10. 6
1.2.11. 7
Dan’s Problems 9
Problem 1. 9
Problem 2. 11
Problem 3. 12
Problem 4. 13
Problem 7. 15
HW 2 18
Guillemin/Pollack: Chapter 1.3: 3, 5, 7. 18
1.3.3. 18
1.3.5. 18
1.3.7. 19
Dan’s Problems 19
Problem 1. 19
Problem 2. 20
Problem 3. 21
Problem 4. 23
HW 3 25
Guillemin/Pollack: Chapter 1.4: 2, 5, 10, 12. 25
1.4.2. 25
1.4.5. 25
1.4.10. 26
1.4.12. 26
Dan’s Problems 27
Problem 1. 27
Problem 2. 29
Problem 3. 32
HW 4 33
Guillemin/Pollack: Chapter 1.7: 4, 6; Chapter 1.8: 3, 4. 33
1.7.4. 33
1.7.6. 33
1.8.3. 33
1.8.4. 34
Dan’s Problems 34
Problem 1. 34
Problem 2. 35
Problem 3. 36
Problem 4. 36
Problem 5. 37
Problem 6. 37
Problem 7. 38
Problem 8. 39
HW 5 41
Guillemin/Pollack: Chapter 1.8: 7, 8. 41
1.8.7. 41
1.8.8. 41
Dan’s Problems 41
Problem 1. 41
Problem 2. 43
Problem 4. 46
Problem 6. 47
Midterm (100/110 points) 49
Problem 1. (10/10 points) 49
Problem 2. (10/10 points) 50
Problem 3. (14/15 points) 50
Problem 4. (15/15 points) 52
Problem 5. (6/15 points) 54
Problem 6. (15/15 points) 57
Problem 7. (30/30 points) 57
Problem 9. (6/10 points) (Extra Credit) 59
HW 6 61
Guillemin/Pollack: Chapter 1.5: 2, 9, 10; Chapter 2.1: 2, 6; Chapter 2.2: 1, 2, 3, 4. 61
1.5.2. 61
1.5.9. 62
1.5.10. 63
2.1.2. 63
2.1.6. 63
2.2.1. 64
2.2.2. 64
2.2.3. 64
2.2.4. 64
Dan’s Problems 64
Problem 2. 64
Problem 3. 65
HW 7 67
Guillemin/Pollack: Chapter 2.4: 3, 5, 6, 8, 11, 13; Chapter 2.6: 1, 2. 67
2.4.3. 67
2.4.5. 68
2.4.6. 68
2.4.8. 68
2.4.11. 69
2.4.13. 69
2.6.1. 70

2.6.2. 70
Dan’s Problems 71
Problem 1. 71
HW 8 73
Warner: Chapter 2: 2, 9, 10, 12, 15. 73
2.9. 73
2.10. 73
2.15. 74
Dan’s Problems 74
Problem 1. 74
Problem 2. 76
Problem 3. 78
Problem 4. 79
Problem 5. 80
Problem 6. 80
Problem 7. 81
HW 9 82
Warner: Chapter 2: 13, 16; Chapter 4: 12. 82
2.13. 82
2.16. 83
4.12. 84
Dan’s Problems 84
Problem 1. 84
Problem 2. 85
Problem 4. 86
Problem 5. 88
Problem 6. 89
HW 10 90
Guillemin/Pollack: Chapter 4.4: 1, 2, 3, 8, 12; Chapter 4.7: 2, 3, 4, 7, 8, 9, 13. 90
4.4.1. 90
4.4.2. 90
4.4.3. 91
4.4.8. 91
4.7.2. 92
4.7.3. 92
4.7.4. 93
4.7.7. 93
4.7.8. 93
4.7.9. 94
4.7.13. 94
Dan’s Problems 94
Problem 1. 94
Problem 3. 96
HW 11 99
Guillemin/Pollack: Chapter 3.2: 12, 14, 16, 17, 26; Chapter 3.3: 6, 8, 9, 11, 14, 16. 99
3.2.12. 99
3.2.14. 99
3.2.16. 99
3.2.17. 100
3.2.26. 102
3.3.6. 102
3.3.8. 103
3.3.9. 103
3.3.11. 103

3.3.14. 104
3.3.16. 105
Dan’s Problems 105
Problem 1. 105
Problem 2. 106
HW 12 107
Guillemin/Pollack: Chapter 3.3: 17, 19, 20; Chapter 3.5: 5, 7. 107
3.3.17. 107
3.3.20. 107
3.5.5. 107
Dan’s Problems 108
Problem 1. 108
Problem 2. 110
Problem 3. 111
Final (96/110 points) 113
Problem 1. (17/20 points) 113
Problem 2. (19/20 points) 116
Problem 3. (10/10 points) 117
Problem 4. (10/10 points) 119
Problem 5. (6/10 points) 121
Problem 6. (4/10 points) 121
Problem 7. (10/10 points) 123
Problem 8. (20/20 points) 124

HW 1
Guillemin/Pollack: Chapter 1.2: 2, 4, 10, 11.

1.2.2. If U is an open subset of the manifold X, check that Tx U = Tx X for x ∈ U .

Solution: In problem 1 below we show that U is a submanifold of X, so I show that the tan-
gent spaces are the same object as opposed to being isomorphic. Let X be of dimension n and let
ξ ∈ Rn. Let (Vα, φα) be a chart on X at p. Then, the differential of φα at p is given by
d(φα)p(ξ) = ξφα(p) = (d/dt)|t=0 φα(p + tξ).

Recall that our charts on U are given by (Wα , ψα ) where Wα = U ∩ Vα and ψα = φα |Wα . Choose
a chart on U around p, so that p ∈ Wα . Since Wα ⊂ Vα is open, it follows that for t small enough
p + tξ ∈ Wα . Hence, for small t
φα (p + tξ) = φα |Wα (p + tξ) = ψα (p + tξ).
Hence, the differentials are the same. Let ξ ∈ Tp X, and choose α, β such that (Vα , φα ) and (Vβ , φβ )
are charts on X containing p. Then,
ξβ = d(φβ ◦ φα⁻¹)φα(p) (ξα) = d(φβ)p ◦ (d(φα)p)⁻¹ (ξα)
by the chain rule (for more on this, see problem 1.2.4). Now let (Wα , ψα ) and (Wβ , ψβ ) be the charts
on U at p obtained by the above. It follows that
ξβ = d(φβ )p ◦ (d(φα )p )−1 (ξα ) = d(ψβ )p ◦ (d(ψα )p )−1 (ξα ) = d(ψβ ◦ ψα−1 )ψα (p) (ξα ).
Hence, ξ ∈ Tp U . The converse is also true by essentially the same logic.

1.2.4. Suppose that f : X → Y is a diffeomorphism, and prove that at each x its differential dfx is
an isomorphism of tangent spaces.

Solution: Tangent spaces are vector spaces, so an isomorphism of them is simply an invertible
linear map. Since dfp is already linear, we just need to check that it is invertible. To do this we
compute its inverse. Recall the chain rule says d(g ◦ f )p = dgf (p) ◦ dfp when f and g are smooth.
Since f is a diffeomorphism, there exists a smooth inverse f⁻¹, which has a differential d(f⁻¹)q. By
the chain rule,
d(f⁻¹)f(p) ◦ dfp = d(f⁻¹ ◦ f)p = d(idX)p
dff⁻¹(q) ◦ d(f⁻¹)q = d(f ◦ f⁻¹)q = d(idY)q
We now compute d(idX )p explicitly. We use the following notation from the notes: Suppose that
the connected coordinate charts at p take values in A, an affine space of dimension n over V . Let
A = {(Uα , φα )}α∈A be the differential structure on X and let Ap ⊂ A be the set of indices such
that p ∈ Uα . Let Ap = {(Uα , φα )}α∈Ap . Let ξ ∈ Tp X. Then for each α, β ∈ Ap ,
ξβ = d(φβ ◦ φα⁻¹)φα(p) (ξα)

where ξα and ξβ are the α, β components of ξ respectively. Furthermore, the differential of the map
idX : X → X is defined for all α, β by
(d(idX)p(ξ))β = d(φβ ◦ idX ◦ φα⁻¹)φα(p) (ξα) = d(φβ ◦ φα⁻¹)φα(p) (ξα) = ξβ

which is to say that d(idX )p is the identity map idTp X . We remark that the above does not depend
on the choice of α. Together, this and the chain rule show that the differential is a functor (from
the category of smooth manifolds to itself). Returning to the above, we have
d(f⁻¹)f(p) ◦ dfp = idTp X
dff⁻¹(q) ◦ d(f⁻¹)q = idTq Y
That is to say, dfp : Tp X → Tf(p) Y is invertible with inverse d(f⁻¹)f(p) : Tf(p) Y → Tp X.

TA comment: Nice!
1.2.10.
a) Let f : X → X × X be the mapping f (x) = (x, x). Check that dfp (v) = (v, v).
b) If ∆ is the diagonal of X × X, show that its tangent space T(x,x) ∆ is the diagonal of
Tx X × Tx X

Solution:
a) First, f is smooth since each component is smooth. Next, X × X has an obvious manifold
structure which we show in problem 1. The charts take the form (Uβ1 ×Uβ2 , φβ1 ×φβ2 ) where
(Uβ1 , φβ1 ) and (Uβ2 , φβ2 ) are charts on X. The tangent space of X × X is a direct product
of copies of R2n . If η ∈ T(p,p) (X × X) then ηβ is a vector in R2n . We naturally identify
this with the direct product of two copies of Rn – take the first n components and call it
ηβ1 ∈ Rn , and similarly for the second n components. We write ηβ = (ηβ1 , ηβ2 ). In this
sense we can actually construct an isomorphism T(p,p) (X × X) ' Tp X × Tp X. In general,
we have an isomorphism T(p,q) (X × Y ) ' Tp X × Tq Y .
With notation aside, let us turn to the actual problem and take ξ ∈ Tp X. Then, by definition
of the differential of f ,
ηβ = (ηβ1, ηβ2) = d((φβ1 × φβ2) ◦ f ◦ φα⁻¹)φα(p) (ξα).
Once more, we remark this is independent of our choice of α. Suppose α0 ∈ Ap . Then,
ηβ = d((φβ1 × φβ2) ◦ f ◦ φα⁻¹)φα(p) (ξα)
   = d((φβ1 × φβ2) ◦ f ◦ φα⁻¹)φα(p) (d(φα ◦ φα0⁻¹)φα0(p) (ξα0))
   = d((φβ1 × φβ2) ◦ f ◦ φα0⁻¹)φα0(p) (ξα0)
where we have used the chain rule. We can compute ηβ explicitly as follows:
ηβ = ξα[(φβ1 × φβ2) ◦ f ◦ φα⁻¹](φα(p)) = (d/dt)|t=0 [(φβ1 × φβ2) ◦ f ◦ φα⁻¹](φα(p) + tξα)
   = lim_{t→0} ( [(φβ1 × φβ2) ◦ f ◦ φα⁻¹](φα(p) + tξα) − [(φβ1 × φβ2) ◦ f ◦ φα⁻¹](φα(p)) ) / t
   = lim_{t→0} ( ((φβ1 ◦ φα⁻¹)(φα(p) + tξα), (φβ2 ◦ φα⁻¹)(φα(p) + tξα)) − (φβ1(p), φβ2(p)) ) / t
   = ( lim_{t→0} ((φβ1 ◦ φα⁻¹)(φα(p) + tξα) − φβ1(p)) / t , lim_{t→0} ((φβ2 ◦ φα⁻¹)(φα(p) + tξα) − φβ2(p)) / t ).
Now, for i = 1, 2 we have
ξβi = d(φβi ◦ φα⁻¹)φα(p) (ξα) = ξα(φβi ◦ φα⁻¹)(φα(p))
    = lim_{t→0} ( (φβi ◦ φα⁻¹)(φα(p) + tξα) − (φβi ◦ φα⁻¹)(φα(p)) ) / t
    = lim_{t→0} ( (φβi ◦ φα⁻¹)(φα(p) + tξα) − φβi(p) ) / t.
Hence, the components of ηβ are ξβ1 and ξβ2 , respectively. Thus,
(dfp (ξ))β = (ξβ1 , ξβ2 ).
Note that this is not what the problem asks for, but this is an artifact of using a different
definition of the tangent space. The idea is consistent though, I explain this below for com-
pleteness.

Suppose we have a smooth map φ : An → An . Let p ∈ An and let U, V be open sets


in An containing p, φ(p) respectively. Let ξ ∈ Rn. Then ξ, as a vector, gives us some di-
rection we can travel in while at p. For example, we can take some path γ : (−ε, ε) → U
passing through p at t = 0 such that γ′(0) = ξ. Then, walking along γ, as we get to p we


are traveling in the direction of ξ (since this is the tangent vector to γ at t = 0). We have
a corresponding path in V given by φ ◦ γ. Then traveling along φ ◦ γ, we eventually pass
through φ(p) and at that instant head in the direction (φ ◦ γ)′(0). Is there a nice way to
relate this to ξ? There is, and it is given by the differential! In particular,
dφp(ξ) = (φ ◦ γ)′(0).
Though we do not prove it here, the above discussion is independent of the path γ. In class,
we have been taking the straight line path for simplicity. So, dφp (ξ) tells us that if we travel
in the ξ direction while at p, then we travel in the dφp (ξ) direction while at φ(p).

Let us take a vector v in the literal tangent plane to X at p, which I denote as T̄p X.
This gives us some direction we can walk in on our manifold. Consider a chart (Uα , φα ). By
our preceding discussion, we see that we travel in the d(φα )p (v) direction while at φα (p).
For each chart containing p, we can form such a direction ξα . Now let ξ be a vector whose α
component is ξα . This is exactly a tangent vector in the space Tp X! Since, we get something
like
d(φβ ◦ φα⁻¹)φα(p) (ξα) = d(φβ)p (v) = ξβ
which is precisely the compatibility condition we require of the components of a tangent
vector. In this sense, all the components of a tangent vector correspond to some literal
tangent vector v to the manifold at p. Then, the statement (dfp (ξ))β = (ξβ1 , ξβ2 ) corresponds
to one like dfp (v) = (v, v), which is what we wanted.
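
One can sanity-check the claim that dφp(ξ) = (φ ◦ γ)′(0) does not depend on the chosen curve with a short sympy computation; the map φ, the point p, and the two curves below are made up purely for illustration.

```python
import sympy as sp

t, u, v = sp.symbols('t u v')

# A concrete smooth map phi : A^2 -> A^2, chosen only for illustration.
phi = sp.Matrix([u**2 - v, sp.sin(u*v) + u])

p = sp.Matrix([1, 2])    # base point p
xi = sp.Matrix([3, -1])  # direction xi

# Two different curves through p with the same initial velocity xi.
gamma1 = p + t*xi                            # the straight line used in class
gamma2 = p + t*xi + t**2*sp.Matrix([5, 7])   # same velocity, different curve

def pushforward(gamma):
    """(phi o gamma)'(0)"""
    return sp.diff(phi.subs({u: gamma[0], v: gamma[1]}), t).subs(t, 0)

J = phi.jacobian([u, v]).subs({u: p[0], v: p[1]})  # d(phi)_p as a matrix

print(sp.simplify(pushforward(gamma1) - J*xi))  # zero vector
print(sp.simplify(pushforward(gamma2) - J*xi))  # zero vector: independent of the curve
```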
b) I think the following is incorrect, but I figured I would include my solution anyways: First,
what is the diagonal of Tp X × Tp X? Here we can use our obvious interpretation and see
that
diag(Tp X × Tp X) = {(η1 , η2 ) ∈ Tp X × Tp X | η1 = η2 }.
Now, Tp X is trivially isomorphic to diag(Tp X × Tp X) by the mapping ξ 7→ (ξ, ξ). The
mapping f : X → ∆ ⊂ X × X given by f (x) = (x, x) is a homeomorphism since it has an
inverse given by f −1 = π1 |∆ . Note that projections are smooth, so f −1 is smooth. Thus, f
is actually a diffeomorphism. By 1.2.4, we conclude that dfp is an isomorphism of tangent
spaces, and therefore Tp X ' T(p,p) ∆. Together, we have
T(p,p) ∆ ' Tp X ' diag(Tp X × Tp X).

TA comment: As far as I can tell, your argument in part b is correct.

1.2.11.
a) Suppose that f : X → Y is a smooth map and let F : X → X × Y be F (x) = (x, f (x)).
Show that dFx (v) = (v, dfx (v)).
b) Prove that the tangent space to the graph of f at the point (x, f (x)) is the graph of dfx :
Tx X → Tf (x) Y .

Solution:
a) We use the notation presented in problem 1.2.10a, replacing subscripts 1 and 2 with x and
y respectively, and changing all instances of X × X to X × Y . Now let ξ ∈ Tp X and α, β
be indices such that the charts (Uα , φα ) and (Uβx × Vβy , φβx × ψβy ) contain p and (p, f (p))
respectively. By definition of the differential,
ηβ = (ηβx, ηβy) = d((φβx × ψβy) ◦ F ◦ φα⁻¹)φα(p) (ξα)
   = lim_{t→0} ( [(φβx × ψβy) ◦ F ◦ φα⁻¹](φα(p) + tξα) − [(φβx × ψβy) ◦ F ◦ φα⁻¹](φα(p)) ) / t
   = lim_{t→0} ( [(φβx × ψβy) ◦ F ◦ φα⁻¹](φα(p) + tξα) − (φβx(p), ψβy(f(p))) ) / t
   = ( lim_{t→0} ((φβx ◦ φα⁻¹)(φα(p) + tξα) − φβx(p)) / t , lim_{t→0} ((ψβy ◦ f ◦ φα⁻¹)(φα(p) + tξα) − ψβy(f(p))) / t )
   = ( ξβx , (dfp(ξ))βy )
b) Consider some η̃ ∈ Tp X × Tf (p) Y and write it as η̃ = (ηx , ηy ). Let ηβx = (ηx )βx and
similarly for y. Now construct η ∈ T(p,f (p)) (X × Y ) by ηβ = (ηβx , ηβy ). This gives an explicit
isomorphism Tp X ×Tf (p) Y ' T(p,f (p)) (X ×Y ). Now let η̃ be in the graph of dfp . Then ηx = ξ
and ηy = dfp (ξ) for some ξ ∈ Tp X. By the above, we can map this to η ∈ T(p,f (p)) (X × Y )
where the components of ηβ are ηβx = (ηx )βx = ξβx and ηβy = (ηy )βy = dfp (ξ). That is,
η = (dF )p (ξ). So we have naturally identified an element η̃ in the graph of dfp with a vector
in T(p,f (p)) Γ(f ).

TA comment: In part b, you’ve shown that you get a subspace of T(p,f (p)) Γ(f ), but why is this
subspace the entire space?

A lot of the confusion with this problem is based on Dan’s definition of the tangent space (as a
vector consisting of motion germs in each chart, rather than as an equivalence class of such motion
germs). I’m going to use the definition I know to redo this part: A vector ξ is a tangent vector to X
at p if there exists a smooth curve γ : (−ε, ε) → X such that γ(0) = p and γ′(0) = ξ. The connection
to Dan’s definition is as follows. Let ξ ∈ Tp X and (Uα , φα ), (Uβ , φβ ) be charts around p. Then, the
components ξα and ξβ of ξ are related by
ξβ = d(φβ ◦ φα⁻¹)p (ξα).

We’re now going to construct a tangent vector ξ ∈ Tp X using this definition. Fix a particular
chart (Uα , φα ), choose some ξα ∈ Rn , and define for every other index β the vector ξβ using the
above rule. Then certainly ξ ∈ Tp X. Now, how can we view ξα ? We can instead look at curves
γα : (−ε, ε) → φα(Uα) with γα(0) = φα(p) and γα′(0) = ξα. Defining γβ = φβ ◦ φα⁻¹ ◦ γα, each
ξβ is simply γβ′(0) via the chain rule. The curve γβ is exactly a curve passing through φβ(p) with
initial velocity ξβ . All of this has a nice physical interpretation because we’re in affine space.

What we’ve done is represented our tangent vector ξ ∈ Tp X as a collection of curves, one for
each chart, with some compatability condition. So what’s the connection to the definition I gave?
Instead of transporting these curves from chart to chart, why not just pull it back to the manifold
itself? The map γ = φα⁻¹ ◦ γα is some curve in X passing through p and has some initial velocity ξ̄.
This ξ̄ is precisely what I use as a tangent vector. When I said earlier that we were looking at
equivalence classes, this is the identification – notice that γ = φβ⁻¹ ◦ γβ for any β! So, in some sense
we’re identifying all of the motion curves in our charts together with a single one on our manifold.

Okay, with that discussion out of the way, let’s actually solve the problem. The map F : X → X × Y
is smooth and such that F (X) = Γ(f ), the graph of f . The differential dFp is then a linear map
Tp X → T(p,f (p)) Γ(f ). Next the graph of dfp , Γ(dfp ), is defined by
Γ(dfp) := {(ξ, dfp(ξ)) | ξ ∈ Tp X} = dFp(Tp X).
So, it suffices to show that dFp is surjective. Maybe that’s hard, but fortunately we’re working with
finite dimensional vector spaces, so we need only check injectivity. Well, if ξ ∈ Ker(dFp ) then by
dFp (ξ) = (ξ, dfp (ξ)) we see that ξ = 0 and hence dFp is injective.

It’s important to note that dFp (ξ) = (ξ, dfp (ξ)) doesn’t quite make sense. Since F maps into X × Y ,
dFp (ξ) ∈ T(p,f (p)) (X ×Y ). On the other hand, (ξ, dfp (ξ)) ∈ Tp X ×Tf (p) Y . The two vector spaces are
isomorphic though, and we can use curves as discussed above to do this. Given η ∈ T(p,f (p)) (X × Y ),
there exists a curve γ : (−ε, ε) → X × Y such that γ(0) = (p, f(p)) and γ′(0) = η. Now con-
sider the projection curves γX := πX ◦ γ and similarly for Y. Then defining ηX = γX′(0) ∈ Tp X
and ηY = γY′(0) ∈ Tf(p) Y we can construct (ηX, ηY) ∈ Tp X × Tf(p) Y. On the other hand, given
ξX ∈ Tp X and ξY ∈ Tf(p) Y we can find two curves βX : (−ε, ε) → X and similarly for Y, with
correct initial conditions. Then, defining the curve γ = (βX, βY) in X × Y we see ηX = ξX and sim-
ilarly for Y . So, we have a well defined surjective linear map from T(p,f (p)) (X × Y ) → Tp X × Tf (p) Y ,
hence an isomorphism. We use this identification throughout.

Dan’s Problems.

Problem 1. Suppose X, Y are manifolds and f : X → Y a smooth map.


a) Prove that X × Y is also a manifold.
b) Show that the graph Γ(f ) ⊂ X × Y of f , defined by
Γ(f ) := {(x, y) ∈ X × Y | y = f (x)}
is a manifold.
c) Now suppose U ⊂ X is an open subset. Define a manifold structure on U .

Solution:
a) We show that X ×Y is a manifold from first principles. First, X ×Y is a topological manifold
since:
i) It is locally Euclidean. Let (x, y) ∈ X × Y . Then there exists an open set U ⊂ X
containing x and a homeomorphism φ : U → U 0 ⊂ An where U 0 is open. Similarly,
there exists an open set V ⊂ Y containing y and a homeomorphism ψ : V → V 0 ⊂ Am
where V 0 is open. Now, the Cartesian products W = U × V and W 0 = U 0 × V 0 are
open in X × Y and An+m respectively. Furthermore, ϕ = φ × ψ : U × V → U 0 × V 0 is a
homeomorphism since each component is. Hence there exists an open set W ⊂ X × Y
containing (x, y) and a homeomorphism ϕ : W → W 0 ⊂ An+m .
ii) It is Hausdorff and second countable, since the Cartesian product of Hausdorff spaces
is Hausdorff, and similarly for second countable.
Next, let A = {(Uα , φα )}α∈A be an atlas on X and B = {(Vβ , ψβ )}β∈B be an atlas on Y .
Let C = {(α, β) | α ∈ A, β ∈ B}. We claim that C = {(Uα × Vβ , φα × ψβ )}(α,β)∈C is an
atlas on X × Y . Indeed,
i) Since A and B are atlases on X and Y respectively we have that ⋃α∈A Uα = X and ⋃β∈B Vβ = Y. Then,
⋃(α,β)∈C Uα × Vβ = ⋃β∈B (⋃α∈A Uα) × Vβ = ⋃β∈B X × Vβ = X × Y.

So, the charts cover X × Y .


ii) Given (α1 , β1 ) and (α2 , β2 ), the charts (Uα1 , φα1 ) and (Uα2 , φα2 ) are C ∞ -related, while
the charts (Vβ1, ψβ1) and (Vβ2, ψβ2) are also C∞-related. Hence, for i, j ∈ {1, 2}, i ≠ j,
Φi,j := (φαj |Uαi∩Uαj) ◦ (φαi |Uαi∩Uαj)⁻¹ : φαi(Uαi ∩ Uαj) → φαj(Uαi ∩ Uαj)

is smooth. Similarly, Ψi,j defined as above with φ, U, α replaced by ψ, V, β is smooth.


Clearly
(Uαi × Vβi ) ∩ (Uαj × Vβj ) = (Uαi ∩ Uαj ) × (Vβi ∩ Vβj ).
Finally,
Υi,j := ((φαj × ψβj)|(Uαi×Vβi)∩(Uαj×Vβj)) ◦ ((φαi × ψβi)|(Uαi×Vβi)∩(Uαj×Vβj))⁻¹
      = (φαj ◦ φαi⁻¹) × (ψβj ◦ ψβi⁻¹)
      = Φi,j × Ψi,j
is smooth.
If this atlas is not maximal, we can complete it to a maximal atlas. Hence, X × Y admits a
differential structure, and is therefore a smooth manifold.

b) I think, given any non-empty subset Z of a manifold, we can turn it into a manifold by
endowing the subset with the subspace topology and constructing an atlas by taking each
chart (U, φ) and turning it into a chart (U ∩ Z, φ|Z ) on Z (I go into this in detail in part c)
). So, I presume that you mean for us to show that Γ(f ) is a submanifold of X × Y .

To this end, recall that the image of an embedding is a submanifold of the codomain.
Define F : X → X × Y by F (x) = (x, f (x)). We show that F is an embedding. First, per
1.2.11b, we know that the differential of F at p ∈ X is given by
dFp (ξ) = (ξ, dfp (ξ)).
Hence, if dFp (ξ) = 0 then ξ = 0, and the kernel of dFp is {0}. Thus dFp is injective (note:
this is why the graph of f as opposed to the image of f is always a submanifold, since dFp is
always injective even if dfp is not!). It is a standard exercise to show that X is homeomorphic
to Γ(f ) via F . We reproduce the argument here: If F (x1 ) = F (x2 ) for x1 , x2 ∈ X then
x1 = (π1 ◦ F )(x1 ) = (π1 ◦ F )(x2 ) = x2 .
Hence F is injective, and there exists an inverse F −1 : Γ(f ) → X. I claim that F −1 = π1 |Γ(f ) .
Indeed,
F (π1 |Γ(f ) )(x, y) = F (x) = (x, f (x)) = (x, y)
since, by definition if (x, y) ∈ Γ(f ) then y = f (x). Moreover,
π1 |Γ(f ) (F (x)) = π1 |Γ(f ) (x, f (x)) = x
which is well defined since the image of F is Γ(f ). It follows that F −1 = π1 |Γ(f ) . But,
projections are continuous, and restrictions of continuous functions are continuous, so F −1
is continuous. Hence F is a homeomorphism onto its image, Γ(f ).

Since F : X → X × Y is a homeomorphism onto its image and dFp is injective for all
p ∈ X, it follows that F is an embedding. So, the image of F , Γ(f ), is a submanifold of
X ×Y.
c) We first show that U is a topological manifold:
i) Let u ∈ U . Then u ∈ X, and since X is locally Euclidean there exists an open set V
containing u and a homeomorphism φ : V → V 0 ⊂ An . Let W = U ∩ V , which is an
open set containing u, and let ϕ = φ|U ∩V . Let W 0 = ϕ(U ∩ V ) which is open since φ
is an open mapping. Then, ϕ : W → W 0 is a homeomorphism since φ was. It follows
that U is locally Euclidean.
ii) U is Hausdorff and second countable since subsets of Hausdorff (resp. second countable)
spaces are Hausdorff.
If (V, φ) is a chart on X, we define a chart on U when V ∩ U is nonempty, denoted (W, ϕ),
as done in part i). For concreteness, denote the atlas on X by A = {(Vα , φα )}α∈A . Let
B = {(Wβ, ϕβ)}β∈B be the collection of such charts; we wish to show that it is an atlas.
i) Since A is an atlas, ⋃α∈A Vα = X. Hence,
⋃β∈B Wβ = ⋃β∈B (Vβ ∩ U) = ⋃α∈A (Vα ∩ U) = U ∩ ⋃α∈A Vα = U ∩ X = U
because Vα ∩ U is empty whenever α ∉ B.


ii) Let β1, β2 ∈ B. We wish to show the transitions
Υi,j := (ϕβj |Wβi∩Wβj) ◦ (ϕβi |Wβi∩Wβj)⁻¹ : ϕβi(Wβi ∩ Wβj) → ϕβj(Wβi ∩ Wβj)

are smooth for i, j ∈ {1, 2} distinct. Tacitly, I imply here (and henceforth) that upon re-
stricting a map, we restrict the codomain to the image of the map under this restriction.
Since B ⊂ A, we know that the transitions
Φi,j := (φβj |Vβi∩Vβj) ◦ (φβi |Vβi∩Vβj)⁻¹ : φβi(Vβi ∩ Vβj) → φβj(Vβi ∩ Vβj)

are smooth. Now, restrictions of smooth maps are smooth so that Φ̃i,j := Φi,j |ϕβi (Wβi ∩Wβj )
is smooth. I claim that Φ̃i,j is Υi,j . Most of this is just going through the definitions.
Clearly their domains are the same. Now if p ∈ ϕβi (Wβi ∩ Wβj ) then there exists a
u ∈ Wβi ∩ Wβj = U ∩ Vβi ∩ Vβj such that p = ϕβi (u). Since ϕβi = φβi |U ∩Vβi it follows
that p = φβi (u). Furthermore, because u ∈ Vβj as well, p = φβi |Vβi ∩Vβj (u). Thus
(φβi |Vβi ∩Vβj )−1 (p) = u.
Notice that in the previous we show that
(φβi |Vβi ∩Vβj )−1 (p) = u = (ϕβi |Wβi ∩Wβj )−1 (p).
So, we just need to show that φβj |Vβi ∩Vβj (u) = ϕβj |Wβi ∩Wβj (u). But, this is obvious
since the restriction of the former to U is the latter, and u ∈ U . Hence, Φi,j (p) =
Υi,j (p) ∈ ϕβj (Wβj ∩ Wβi ).
Thus, B is an atlas on U . We can extend this to a maximal atlas as necessary, and hence
U is a manifold.

TA comment: Part b: it is not true that any nonempty subset of a manifold is also a manifold:
consider the union of the two axes in A2 , which is not locally Euclidean near (0, 0). Everything else
looks good!

Just ignore the first paragraph. If you want to show that Γ(f ) is a manifold from charts and
atlases instead of proving it’s a submanifold as I did, I think the idea is to cover X × Y by charts
and restrict these to Γ(f ). These will be open in the subspace topology, and are locally Euclidean,
etc.
Problem 2.
a) Suppose f : A3 → R is a smooth function. Define Xc = f −1 (c) for all c ∈ R. Is Xc
necessarily a manifold? Think carefully about what that statement means. For a fixed f ,
what can you say about the set of c for which Xc is a manifold? Try many examples. You
might also want to try this problem with A2 replacing A3 .
b) Repeat with f : R → A3 and X = f (R) ⊂ A3 . Here there is no parameter (’c’ in the
previous), so you’ll have to vary the map f .

Solution:
a) To say that Xc is a manifold means that there exists a differential structure A of charts on
X. Since each smooth manifold is also necessarily a topological manifold, we can show that
Xc is not a manifold if it cannot be realized as a topological manifold. Let us consider the
function f (x, y) = y 2 − x2 (which is smooth since polynomials are). Then X0 = f −1 (0) is
given by the set of (x, y) such that y 2 = x2 . This consists of two affine lines intersecting
orthogonally (n.b., we have not defined this notion in affine space) at the point (0, 0). This
cannot be a topological manifold due to the following. Suppose we have a homeomorphism
from U, a connected open neighborhood of (0, 0) in X0, to U′ ⊂ An for n ≥ 0. Then, U′
is also connected in An (this is trivial for n = 0). But, removing (0, 0) from U splits it
into four connected components, whereas removing the image of (0, 0) in U′ will result in at
most two connected components. It follows that no such homeomorphism exists. When U is
disconnected, we can apply similar logic by writing U as a disjoint union U = U1 ⊔ · · · ⊔ Uk where each Ui ⊂ X0
is open and connected, and WLOG (0, 0) ∈ U1. Now suppose we have a homeomorphism to
U′ ⊂ An (n ≥ 0), restrict this homeomorphism to U1, and apply the previous logic.

Now consider f (x, y, z) = z 2 − x2 − y 2 . Similar to the previous, the set X0 consisting


of (x, y, z) ∈ A3 where z 2 = x2 + y 2 is a double cone in affine 3-space. It too is not a man-
ifold since (0, 0, 0) forms a “problematic point” like (0, 0) in the previous. Note, however,
that Xc for c 6= 0 is a manifold (these are the hyperboloids of one or two sheets, depending
on the sign of c). It is interesting to see that in this example (and, the example prior) that

the only c for which Xc is not a manifold is c = 0. Of course, we could modify these examples
so that the invalid c occur at other values, but the fact remains that there is only one. This
is very tame, as opposed to there being, say, infinitely many. But it seems plausible that
the following is true.

Conjecture: the set of c for which f −1 (c) is not a manifold has measure zero.

Also, I remember that if f : A3 → R is smooth then Xc = f −1 (c) is a manifold if for


all p ∈ Xc the differential dfp is not the zero map. I believe this is a consequence of the
inverse function theorem.
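
For the double cone example above, a quick sympy computation confirms the picture: the differential of z² − x² − y² vanishes only at the origin, which lies in X0 but in no Xc with c ≠ 0. This is just an added illustration of the remark, not a proof.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = z**2 - x**2 - y**2

# The differential vanishes exactly where all partial derivatives vanish.
grad = [sp.diff(f, w) for w in (x, y, z)]
print(sp.solve(grad, [x, y, z], dict=True))  # [{x: 0, y: 0, z: 0}]

# The corresponding critical value is f(0, 0, 0) = 0, so X_c for c != 0
# contains no critical points.
print(f.subs({x: 0, y: 0, z: 0}))            # 0
```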
b) This seems to be a much more delicate situation. Consider x, y : R → A given by
x(t) = t for t ≤ 1,  1 for 1 < t ≤ 2,  3 − t for 2 < t ≤ 3,  and 0 otherwise;
y(t) = 0 for t ≤ 1,  t − 1 for 1 < t ≤ 2,  1 for 2 < t ≤ 3,  and e^(3−t) otherwise.

The above are not smooth, but one can easily mollify them to ensure smoothness. The
picture looks like that below.

Note that at (0, 0), any open neighborhood contains two connected components, one homeo-
morphic to (a, b) for some a, b and one homeomorphic to (c, ∞) for some c. It follows that the
image is not a manifold. If I recall correctly, the above demonstrates the difference between
an embedding (for which the image is a manifold) and an immersion (a local embedding).

TA comment: Your conjecture is correct! Here is a riddle: what’s a good way to define measure
zero on a manifold? This will come up later on in the course :)

Problem 3.
a) Suppose A is an n dimensional affine space with associated vector space V . Let γ : (a, b) → A
be a smooth curve, where (a, b) ⊂ R is an open set. For t ∈ (a, b) define the tangent vector
γ 0 (t) ∈ V .
b) Take n = 2 and A = A2 so we write γ(t) = (x(t), y(t)). What is the formula for the tangent
vector in terms of the real-valued functions x, y? (What is the vector space V ?)
c) Suppose the image of γ lies in the open set A2 \ {(x, 0) | x ≥ 0}. Introduce polar coordinates
(r, θ) on this open set and write the curve as (r(t), θ(t)). How precisely are the functions
r, θ defined? What is the tangent vector to the curve in terms of the functions r, θ?

Solution:
a) Let us first clarify what it means for γ : (a, b) → A to be smooth. We treat γ as an affine
map where (a, b) = U ⊂ B = A. The associated vector space is R. We define the tangent
vector γ′(t) ∈ V by the following limit
γ′(t) = lim_{s→t} (γ(s) − γ(t))/(s − t) = lim_{s→0} (γ(s + t) − γ(t))/s = 1γ(t)
where 1 ∈ R is a vector giving the above directional derivative. Since γ is smooth, the above
exists. It is clearly an element of V since the difference γ(s + t) − γ(t) is in V .
b) We have that
γ′(t) = lim_{s→0} ((x(s + t), y(s + t)) − (x(t), y(t)))/s
      = ( lim_{s→0} (x(s + t) − x(t))/s , lim_{s→0} (y(s + t) − y(t))/s ) = (x′(t), y′(t))
by direct computation. The vector space V is R2 .
c) Polar and Cartesian coordinates are related via x(t) = r(t) cos(θ(t)) and y(t) = r(t) sin(θ(t)).
So, x(t)² + y(t)² = r(t)² and hence r(t) = √(x(t)² + y(t)²) ≠ 0. On the other hand,
tan(θ(t)) = y(t)/x(t) so that θ(t) = arctan(y(t)/x(t)) (this is well defined since x(t) ≠ 0 by
assumption). Hence,
γ(t) = ( √(x(t)² + y(t)²) , arctan(y(t)/x(t)) )
in polar coordinates. Now, differentiating the equations defining polar coordinates gives
x′(t) = r′(t) cos(θ(t)) − r(t) sin(θ(t)) θ′(t)
y′(t) = r′(t) sin(θ(t)) + r(t) cos(θ(t)) θ′(t)
which are the coordinates of the tangent vector in Cartesian coordinates.
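
These last two formulas are easy to confirm symbolically; here is a minimal sympy check (added for illustration), differentiating x = r cos θ and y = r sin θ with r, θ functions of t.

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r*sp.cos(theta)
y = r*sp.sin(theta)

# x'(t) - (r' cos(theta) - r sin(theta) theta') should vanish identically,
# and similarly for y'(t).
print(sp.simplify(sp.diff(x, t) - (sp.diff(r, t)*sp.cos(theta) - r*sp.sin(theta)*sp.diff(theta, t))))  # 0
print(sp.simplify(sp.diff(y, t) - (sp.diff(r, t)*sp.sin(theta) + r*sp.cos(theta)*sp.diff(theta, t))))  # 0
```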

Problem 4. Review the general form of the chain rule; here you will use it in a specific example.
Define f, g : A3 → A3 by f (x, y, z) = (x2 , y 2 , z 2 ) and g(x, y, z) = (yz, xz, xy).
a) Write a formula for g ◦ f and compute d(g ◦ f ) at a general point (x, y, z) from the formula.
b) Now compute dg and df separately and use the chain rule to compute d(g ◦ f ). Compare
your answer to that in part a).

Solution:
I use the definition of the differential for affine maps discussed in class for the following.
a) It is easy to see that (g ◦ f )(x, y, z) = g(x2 , y 2 , z 2 ) = (y 2 z 2 , x2 z 2 , x2 y 2 ). The directional
derivative is
ξ(g ◦ f)(x, y, z) = lim_{t→0} ( (g ◦ f)((x, y, z) + t(ξ1, ξ2, ξ3)) − (g ◦ f)(x, y, z) ) / t
   = lim_{t→0} ( (g ◦ f)(x + tξ1, y + tξ2, z + tξ3) − (y²z², x²z², x²y²) ) / t
   = ( lim_{t→0} ((y + tξ2)²(z + tξ3)² − y²z²)/t , lim_{t→0} ((x + tξ1)²(z + tξ3)² − x²z²)/t ,
       lim_{t→0} ((x + tξ1)²(y + tξ2)² − x²y²)/t ).
At this point, let us compute each limit individually. Write x1 = x, x2 = y, and x3 = z.
Then notice the i-th coordinate is simply
lim_{t→0} ( (xj + tξj)²(xk + tξk)² − xj²xk² ) / t

for i, j, k ∈ {1, 2, 3} all distinct. Thus the i-th coordinate is


(ξ(g ◦ f)(x1, x2, x3))i = lim_{t→0} ( (xj + tξj)²(xk + tξk)² − xj²xk² ) / t
   = lim_{t→0} ( (xj² + 2txjξj + t²ξj²)(xk² + 2txkξk + t²ξk²) − xj²xk² ) / t
   = lim_{t→0} ( 2t(xjxk²ξj + xj²xkξk) + t²(...) ) / t = 2xjxk(xkξj + xjξk)
where the ellipses denotes some linear combination of products of the xj , xk , ξj , ξk . The
exact value does not matter since it tends to zero in the limit. In total,
ξ(g ◦ f )(x, y, z) = (2yz(zξ2 + yξ3 ), 2xz(zξ1 + xξ3 ), 2xy(yξ1 + xξ2 )).
Hence, for fixed (x, y, z) ∈ A3 ,
d(g ◦ f )(x,y,z) (ξ) = (2yz(zξ2 + yξ3 ), 2xz(zξ1 + xξ3 ), 2xy(yξ1 + xξ2 )).
Note that this is indeed linear in ξ, which is expected of the differential.
b) We first compute the directional derivatives of f and g:
ξf(x, y, z) = lim_{t→0} ( ((x + tξ1)², (y + tξ2)², (z + tξ3)²) − (x², y², z²) ) / t
   = lim_{t→0} ( 2txξ1 + t²ξ1², 2tyξ2 + t²ξ2², 2tzξ3 + t²ξ3² ) / t
   = (2xξ1, 2yξ2, 2zξ3)
To compute ξg(x, y, z), we will employ the same strategy as in a), using the same observation.
Thus,
(ξg(x1, x2, x3))i = lim_{t→0} ( (xj + tξj)(xk + tξk) − xjxk ) / t
   = lim_{t→0} ( t(xjξk + xkξj) + t²ξjξk ) / t = xjξk + xkξj
There is an easy way to compute directional derivatives which is what I normally use in
practice. What you do is differentiate each component as you would normally, and treat
x′, y′, z′ as ξ1, ξ2, ξ3 respectively. So, for example with f we have
ξf(x, y, z) = (2xx′, 2yy′, 2zz′) = (2xξ1, 2yξ2, 2zξ3),
which we rigorously showed earlier.

Now, the differentials are


df(x,y,z) (ξ) = (2xξ1 , 2yξ2 , 2zξ3 )
dg(x,y,z) (ξ) = (yξ3 + zξ2 , xξ3 + zξ1 , xξ2 + yξ1 )
By the chain rule, d(g ◦ f )(x,y,z) (ξ) = dgf (x,y,z) ◦ df(x,y,z) (ξ). Computing dgf (x,y,z) gives
dgf (x,y,z) (ξ) = dg(x2 ,y2 ,z2 ) (ξ) = (y 2 ξ3 + z 2 ξ2 , x2 ξ3 + z 2 ξ1 , x2 ξ2 + y 2 ξ1 ).
The composition therefore is
dgf (x,y,z) ◦ df(x,y,z) (ξ) = dgf (x,y,z) (2xξ1 , 2yξ2 , 2zξ3 )
= (y 2 (2zξ3 ) + z 2 (2yξ2 ), x2 (2zξ3 ) + z 2 (2xξ1 ), x2 (2yξ2 ) + y 2 (2xξ1 ))
= (2yz(yξ3 + zξ2 ), 2xz(xξ3 + zξ1 ), 2xy(xξ2 + yξ1 ))
which is exactly d(g ◦ f )(x,y,z) (ξ) as computed before.
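
The same comparison can be done mechanically with Jacobian matrices. The sympy sketch below (an added illustration, not part of the submitted solution) checks that the Jacobian of g ◦ f equals the Jacobian of g evaluated at f(x, y, z) times the Jacobian of f, which is the matrix form of the chain rule used above.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]

f = sp.Matrix([x**2, y**2, z**2])   # f(x, y, z) = (x^2, y^2, z^2)
g = sp.Matrix([y*z, x*z, x*y])      # g(x, y, z) = (yz, xz, xy)

gf = g.subs({x: f[0], y: f[1], z: f[2]})   # g o f = (y^2 z^2, x^2 z^2, x^2 y^2)

J_direct = gf.jacobian(coords)                                      # d(g o f)
J_chain = g.jacobian(coords).subs({x: f[0], y: f[1], z: f[2]}) * f.jacobian(coords)

print(sp.simplify(J_direct - J_chain))     # the zero 3x3 matrix
```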

TA comment: You, me, and the professor all know that you know how to do calculus. You should
feel free to use the tools for computing directional derivatives that you learned in your calculus
class, rather than having to work directly with the limit (unless the exercise asks you to use the
limit definition).

I got dragged and I deserve it...

Problem 7.
a) Show that T is a 2-manifold.
b) Define the Gauss map g : T → S 2 to the unit sphere in A3 by mapping a point p ∈ T
to the unit normal vector to T at p, viewed as a point of S 2 . (Here I am relying on your
geometric intuition, not on definitions we have discussed in this class.) Show that g is
smooth. Compute its differential in some coordinate system.

Solution:
a) Recall (by our discussion in problem 2) that if f : A3 → R is smooth then Xc = f −1 (c)
is a manifold when dfp is never the zero map for p ∈ Xc. Consider f : A3 → R given by
f(x, y, z) = (√(x² + y²) − R)² + z². The differential is
dfp = 2x(1 − R/√(x² + y²)) dx + 2y(1 − R/√(x² + y²)) dy + 2z dz.
Now, when is the differential the zero map? We first find x, y for which 1 − R/√(x² + y²) = 0;
this occurs when x² + y² = R². Next, z must always be zero. Supposing first that x² + y² =
R² and z = 0, we see that f(x, y, z) = (R − R)² + 0² = 0 ≠ r². So, these points are not in Xr².
The differential can also be zero when x = y = 0 so that f (x, y, z) = R2 . But, since r < R
we have that these points are not in Xr2 either. It follows that Xr2 , the desired torus, is a
manifold. It is specifically a 2-manifold since we can parameterize it by X : [0, 2π)×[0, 2π) ⊂
A2 → A3 where X(t, s) = ((r cos(s) + R) cos(t), (r cos(s) + R) sin(t), r sin(s)).

Observe that this gives us an additional way of showing T is a manifold. It is clearly


Hausdorff and second countable as a subset of A3. Consider the following sets
P1 = {(x, y, z) ∈ A3 | x ≥ 0, y = 0}, a half plane;
P2 = {(x, y, z) ∈ A3 | x ≤ 0, y = 0}, a half plane;
P3 = {(x, y, z) ∈ A3 | y ≥ 0, x = 0}, a half plane;
D1 = {(x, y, z) ∈ A3 | x² + y² ≥ (R + r)², z = 0}, a plane with a disc removed;
D2 = {(x, y, z) ∈ A3 | x² + y² ≤ (R − r)², z = 0}, a disc;
D3 = {(x, y, z) ∈ A3 | z = r}, a plane.
Now define Ui for i = 1, 2, 3 by Ui = T \ {Pi ∪ Di }. Each of these is an open set and their
union covers T .

Consider the parameterizations X1 , X2 , X3 : (0, 2π) × (0, 2π) → T given by


X1 (t, s) = ((r cos(s) + R) cos(t), (r cos(s) + R) sin(t), r sin(s))
X2 (t, s) = (−(−r cos(s) + R) cos(t), −(−r cos(s) + R) sin(t), −r sin(s))
X3 (t, s) = (−(−r sin(s) + R) sin(t), (−r sin(s) + R) cos(t), r cos(s))
Note that the curve ((r + R) cos(t), (r + R) sin(t), 0) parameterizes T ∩ D1 while the curve
(r cos(s) + R, 0, r sin(s)) parameterizes T ∩ P1 . For every other point p ∈ T \ (P1 ∪ D1 )
there exist unique u, v ∈ (0, 2π) such that p = X1 (u, v). The interpretation of these values
is as follows: v tells us what angle on the given circle to be at while u tells us how much to
revolve the circle around the z axis. Thus X1 is a homeomorphism (0, 2π) × (0, 2π) → U1 .
We conclude similar results for the others, and thus since the Ui cover T it follows T is a
topological 2-manifold.

Finally, the transition maps are actually easy to compute. It is relatively straightforward to
see that X2(t, s) = X1(t + π, s + π) and X3(t, s) = X1(t + π/2, s + π/2). Let φi : Ui → A2 be
the inverse of Xi. Then,
(φ1 ◦ φ2⁻¹)(t, s) = (X1⁻¹ ◦ X2)(t, s) = (X1⁻¹ ◦ X1)(t + π, s + π) = (t + π, s + π)
(φ1 ◦ φ3⁻¹)(t, s) = (X1⁻¹ ◦ X3)(t, s) = (t + π/2, s + π/2)
(φ2 ◦ φ3⁻¹)(t, s) = (X2⁻¹ ◦ X3)(t, s) = (X2⁻¹ ◦ X1) ◦ (X1⁻¹ ◦ X3)(t, s)
   = (X1⁻¹ ◦ X2)⁻¹ ◦ (X1⁻¹ ◦ X3)(t, s) = (X1⁻¹ ◦ X2)⁻¹(t + π/2, s + π/2)
   = (t + π/2 − π, s + π/2 − π) = (t − π/2, s − π/2)
These are smooth with smooth inverses, hence we have an atlas {(Ui , φi )}i=1,2,3 . We can
then complete it to a maximal atlas.
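
As a consistency check on the formulas in part a) (added here for illustration), sympy confirms both that X1 lands in the level set Xr² and the relations X2(t, s) = X1(t + π, s + π), X3(t, s) = X1(t + π/2, s + π/2) used for the transition maps.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)
r, R = sp.symbols('r R', positive=True)

X1 = lambda t, s: sp.Matrix([(r*sp.cos(s) + R)*sp.cos(t), (r*sp.cos(s) + R)*sp.sin(t), r*sp.sin(s)])
X2 = lambda t, s: sp.Matrix([-(-r*sp.cos(s) + R)*sp.cos(t), -(-r*sp.cos(s) + R)*sp.sin(t), -r*sp.sin(s)])
X3 = lambda t, s: sp.Matrix([-(-r*sp.sin(s) + R)*sp.sin(t), (-r*sp.sin(s) + R)*sp.cos(t), r*sp.cos(s)])

x, y, z = X1(t, s)

# x^2 + y^2 = (R + r cos s)^2 and (r cos s)^2 + z^2 = r^2; since r < R makes
# R + r cos s > 0, this gives (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2, i.e. X1(t, s) lies in X_{r^2}.
print(sp.simplify(x**2 + y**2 - (R + r*sp.cos(s))**2))   # 0
print(sp.simplify((r*sp.cos(s))**2 + z**2 - r**2))       # 0

# The relations between the three parameterizations.
zero = lambda m: m.applyfunc(lambda e: sp.simplify(sp.expand_trig(e)))
print(zero(X2(t, s) - X1(t + sp.pi, s + sp.pi)))         # zero vector
print(zero(X3(t, s) - X1(t + sp.pi/2, s + sp.pi/2)))     # zero vector
```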
b) Recall our parameterization of the torus by X(t, s). For each p ∈ T there exist unique angles
u and v such that p = X(u, v). The Gauss map is relatively straightforward to deduce.

We first want to find way to determine the unit normal to T at p in terms of u and v.
To do this, we can explicitly find two linearly independent vectors which span the literal
tangent plane T̄p T . Clearly, the unit normal will just be (up to sign) the unit normal of
this plane. So, let us find two linearly independent vectors in T̄p T . Recall that vectors in
T̄p T are velocities of curves in T (passing through p) at p. Consider two curves X(t, v) and
X(u, s), which clearly pass through p at t = u and s = v respectively. Their velocities at p
are just the partial derivatives
Xt(u, v) = (∂/∂t)|(t,s)=(u,v) X(t, s)
   = (−(r cos(v) + R) sin(t), (r cos(v) + R) cos(t), 0)|t=u
   = (−(r cos(v) + R) sin(u), (r cos(v) + R) cos(u), 0)
   = (r cos(v) + R)(− sin(u), cos(u), 0)
Xs(u, v) = (∂/∂s)|(t,s)=(u,v) X(t, s)
   = (−r sin(s) cos(u), −r sin(s) sin(u), r cos(s))|s=v
   = r(− sin(v) cos(u), − sin(v) sin(u), cos(v))
The scaling factors do not matter for linear independence, so we ignore them. This results
in two unit vectors
X̃t (u, v) = (− sin(u), cos(u), 0)
X̃s (u, v) = (− sin(v) cos(u), − sin(v) sin(u), cos(v))
which we can take the cross product of to find a unit normal at p. This is done below:
N(u, v) = det | i                 j                 k      |
              | − sin(u)          cos(u)            0      |
              | − sin(v) cos(u)   − sin(v) sin(u)   cos(v) |
   = (cos(u) cos(v), sin(u) cos(v), sin(u)² sin(v) + cos(u)² sin(v))
= (cos(u) cos(v), sin(u) cos(v), sin(v))
One can check graphically that this is the outer unit normal to T at p (that is, we should
not change the sign). So, the Gauss map is G = X −1 ◦ N , which is smooth since each of
X −1 and N are smooth.

We use the coordinates above, so that the Gauss map in these coordinates is just
G(u, v) = (cos(u) cos(v), sin(u) cos(v), sin(v)).
In computing the differential, we see that dG(u,v) : T̄(u,v) T → T̄G(u,v) S 2 , since we are
restricting attention to a specific coordinate system. Thus, the differential is a map between

the literal tangent planes. But notice that the unit normal to T̄G(u,v) S 2 is just G(u, v), so
that the tangent planes coincide. In this sense we view dG(u,v) as an endomorphism. This
lets us represent dG(u,v) in the basis {Xt (u, v), Xs (u, v)}. The vector dG(u,v) (Xt (u, v)) is
precisely the velocity of G(t, v) at G(u, v) since the curve G(t, v) is the image of the curve
X(t, v) in S 2 . Similar reasoning applies to dG(u,v) (Xs (u, v)) so that

dG(u,v)(Xt(u, v)) = (∂/∂t)|(t,s)=(u,v) G(t, s)
   = (− sin(t) cos(v), cos(t) cos(v), 0)|t=u
   = cos(v)(− sin(u), cos(u), 0) = (cos(v)/(r cos(v) + R)) Xt(u, v)
dG(u,v)(Xs(u, v)) = (∂/∂s)|(t,s)=(u,v) G(t, s)
   = (− cos(u) sin(s), − sin(u) sin(s), cos(s))|s=v
   = (− cos(u) sin(v), − sin(u) sin(v), cos(v)) = (1/r) Xs(u, v)
Hence, as a matrix in the basis {Xt(u, v), Xs(u, v)},
dG(u,v) = [ cos(v)/(r cos(v) + R)   0   ]
          [ 0                       1/r ].
Note that this is well defined since r < R, hence r cos(v) + R is never zero.
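
The computation of dG can be checked symbolically as well. The sympy sketch below (added for illustration) rebuilds N(u, v) as the cross product of the normalized partials and verifies the relations dG(Xt) = (cos v/(r cos v + R)) Xt and dG(Xs) = (1/r) Xs.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r, R = sp.symbols('r R', positive=True)

X = sp.Matrix([(r*sp.cos(v) + R)*sp.cos(u), (r*sp.cos(v) + R)*sp.sin(u), r*sp.sin(v)])

Xt = sp.diff(X, u)                              # X_t(u, v)
Xs = sp.diff(X, v)                              # X_s(u, v)
Xt_hat = sp.simplify(Xt / (r*sp.cos(v) + R))    # unit vector in the X_t direction
Xs_hat = sp.simplify(Xs / r)                    # unit vector in the X_s direction

G = sp.simplify(Xt_hat.cross(Xs_hat))           # the Gauss map in these coordinates
print(G.T)   # (cos(u)*cos(v), sin(u)*cos(v), sin(v))

print(sp.simplify(sp.diff(G, u) - (sp.cos(v)/(r*sp.cos(v) + R))*Xt))   # zero vector
print(sp.simplify(sp.diff(G, v) - Xs/r))                               # zero vector
```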
TA comment: part a: if x = y = 0, then the differential is undefined – but as you noted, this isn’t
a problem because the z-axis does not intersect the torus. part b: the problem did not specify
the choice of inner vs outer unit normal, but that is important: good work addressing this! great
solution overall!

HW 2
Guillemin/Pollack: Chapter 1.3: 3, 5, 7.

1.3.3. Let f : R → R be a local diffeomorphism. Prove that the image of f is an open interval and
that, in fact, f maps R diffeomorphically onto this interval.

Solution: First note that for any p ∈ A, λ ∈ R with λ ≠ 0 we have
dfp(λ) = lim_{t→0} (f(p + tλ) − f(p))/t = λ lim_{t→0} (f(p + tλ) − f(p))/(tλ) = λf′(p).
The equality dfp(λ) = λf′(p) also holds true when λ = 0 trivially. Now, since f is a local diffeo-
morphism, we see that f′(p) ≠ 0 (otherwise, dfp would be the zero map, which is not invertible).
Then, for each p we know there exists an open set Up ⊂ A and Vp ⊂ A such that f |Up : Up → Vp is
a diffeomorphism. WLOG we can always assume Up and Vp to be connected, so that they are open
intervals. Fix a p ∈ A and suppose f′(p) > 0. Since f |Up is a diffeomorphism (hence has continuous
derivative) and f′ is never zero, we have that f′|Up is positive. Let Uq1, Uq2 be such that p ∈ Uqi for
i = 1, 2 and q1 ≠ q2. We see that f′|Uqi is positive for each i. Since the Up cover R, we see that f is
monotone, hence injective.
monotone, hence injective. It follows that f is bijective onto its image. Since each Vp is an interval,
their union – f (R) – is too.

We now show that a bijective local diffeomorphism is a diffeomorphism. As stated, f |Up is a diffeo-
morphism so that, in particular, f is smooth at p. Since this holds for all p, we have that f is smooth
everywhere. Now, since f is bijective there exists an inverse f −1 . For each q ∈ f (R), there exists
a p ∈ R such that f (p) = q. Hence, q ∈ Vp for this p. Since f |Up is a diffeomorphism onto Vp , it
follows that f −1 |Vp is a diffeomorphism onto Up . Hence, f −1 is also a bijective local diffeomorphism.
By the preceding, we see that f −1 is smooth for all q ∈ f (R). Hence, f is a diffeomorphism.

TA comment: I don’t know why you proved that dfp (λ) = λf 0 (p). What does it add to your
proof? Also, you wrote that “since each Vp is an interval, their union... is too,” which is not true:
(1, 2) union (3, 4).

I didn’t make it clear, but proving that dfp(λ) = λf′(p) lets me convert between a differential
and a derivative. Remember that differentials fix points and act on tangent vectors, but a (direc-
tional) derivative fixes tangent vectors and acts on points. Now, I want to say something about what
f does to R. Generally, the differential is useless for this because we can’t vary the point – hence
the conversion. And I was a little sloppy saying later that since each Vp is an interval, their union
is too. Since f is smooth, it is continuous and therefore the image f (R) is connected. It remains to
show that f (R) is open, which we can do by showing f is an open map. Let U be any open subset of
R and consider q ∈ f (U ). Let p ∈ U be such that f (p) = q. Since f is a local diffeomorphism, there
exist open sets Up , Vp ⊂ R such that f |Up : Up → Vp is a diffeomorphism. WLOG we can assume
they are connected, and that Up ⊂ U . Hence, q ∈ Vp ⊂ f (U ), and f (U ) is open.

1.3.5. Prove that a local diffeomorphism f : X → Y is actually a diffeomorphism of X onto an open


subset of Y , provided that f is one-to-one.

Solution: It is easy to see that f is a local homeomorphism, so that it is an open map. Let
Z = f (X), which is open, and hence a submanifold. By assumption, f : X → Z is bijective. Fix
p ∈ X. We wish to show that for some charts (Uα , φα ), (Vβ , ψβ ) on X, Z around p, f (p) respectively
that
ψβ ◦ f ◦ φα⁻¹ : φα(Uα ∩ f⁻¹(Vβ)) → ψβ(Vβ)
is a smooth map. By assumption there exists U ⊂ X and V ⊂ Z open such that f |U : U → V
is a diffeomorphism. Then, there exist charts (Uα , φα ) and (Vβ , ψβ ) of p, f (p) such that Uα ⊂ U ,
Vβ ⊂ V , and f (Uα ) ⊂ Vβ . Hence, by using appropriate restrictions we see that
ψβ ◦ f ◦ φα⁻¹ : φα(Uα) → ψβ(Vβ)

is smooth, as a composition of smooth maps. Since f −1 |V : V → U is a diffeomorphism, exactly the


same work can be used to find charts such that the transition is smooth. Hence, both f and f −1
are smooth everywhere.

Note that 1.3.3. is implied by 1.3.5.


1.3.7.
a) Check that g : R → S 1 , g(t) = (cos 2πt, sin 2πt), is, in fact, a local diffeomorphism.
b) From Exercise 6, it follows that G : R2 → S 1 × S 1 , G = g × g, is a local diffeomorphism.
Also, if L is a line in R2 , the restriction G : L → S 1 × S 1 is an immersion. Prove that if L
has irrational slope, G is one-to-one on L.

Solution:
a) We can explicitly compute dgt(λ) where λ ∈ R. For nonzero λ we have
dgt(λ) = ( 2πλ lim_{s→0} (cos(2π(t + sλ)) − cos(2πt))/(2sπλ) , 2πλ lim_{s→0} (sin(2π(t + sλ)) − sin(2πt))/(2sπλ) )
= (−2πλ sin 2πt, 2πλ cos 2πt) = 2πλ(− sin 2πt, cos 2πt).
Now what is the tangent space of S 1 ? Here, we view it as embedded naturally in A2x,y . Then,
S 1 is the set of solutions to x2 + y 2 = 1. Taking differentials, we see that 2xdx + 2ydy = 0.
Hence,
Tp S 1 = {(ξ, η) ∈ R2 | xξ + yη = 0}
where p = (x, y). Now let t be such that p = (cos(2πt), sin(2πt)). Then,
Tp S 1 = {(ξ, η) ∈ R2 | cos(2πt)ξ + sin(2πt)η = 0}.
It is clear that dgt (R) = Tp S 1 , so that dgt is surjective. Moreover, dgt is injective since if
λ1 , λ2 ∈ R are such that dgt (λ1 − λ2 ) = 0, then
dgt (λ1 − λ2 ) = 2π(λ1 − λ2 )(− sin 2πt, cos 2πt) = (0, 0).
Both components are zero only if λ1 − λ2 = 0.
b) We write G as G(t, s) for (t, s) points on L. A generic point on S 1 × S 1 is of the form
((cos 2πt, sin 2πt), (cos 2πs, sin 2πs)).
Suppose t1 , s1 and t2 , s2 are such that G(t1 , s1 ) = G(t2 , s2 ) but t1 6= t2 . By equating the
first coordinates, we see that
(cos 2πt1 , sin 2πt1 ) = (cos 2πt2 , sin 2πt2 ).
But, this occurs iff t1 − t2 ∈ Z. Similarly, equating the second coordinates shows that
s1 − s2 ∈ Z. By hypothesis, t1 − t2 6= 0. It follows that (s1 − s2 )/(t1 − t2 ) ∈ Q. But, this is
exactly the slope of L. Hence if G is not injective then L has rational slope.
Dan’s Problems.
Problem 1. Let M be a topological manifold and A = {(Uα , φα )}α∈A an atlas. Define
A¯ = {(U, φ) charts on M | (U, φ) is C ∞ -related to all charts in A }.
Prove that A¯ is a maximal atlas on M , i.e., a differential structure.

Solution: We first show that the charts cover M . To this end, it suffices to show that A ⊂ A¯.
For any chart (Uα , φα ), since A is an atlas, necessarily this chart is C ∞ -related to all charts in A .
Hence A ⊂ A¯, and
M = ⋃α∈A Uα ⊂ ⋃(U,φ)∈A¯ U ⊂ M,

as desired. Next, we show that A¯ is an atlas on M . Let (V1 , ψ1 ), (V2 , ψ2 ) be two charts in A¯. Then
by definition,
ψi ◦ φα⁻¹ : φα(Uα ∩ Vi) → ψi(Uα ∩ Vi)

and their inverses are smooth for all α and i = 1, 2. Hence,

ψj ◦ ψi⁻¹ = (ψj ◦ φα⁻¹) ◦ (φα ◦ ψi⁻¹) : ψi(Vi ∩ Vj) → ψj(Vi ∩ Vj)

is smooth for i, j = 1, 2 distinct. Thus (V1 , ψ1 ), (V2 , ψ2 ) are C ∞ -related for any choice of charts in
A¯. Hence A¯ is an atlas on M .
Suppose now that (V, ψ) is C ∞ -related to all (U, φ) ∈ A¯. Since A ⊂ A¯, necessarily (V, ψ) is C ∞ -
related to all (Uα , φα ) in A . By definition of A¯, we see that (V, ψ) ∈ A¯. It follows that A¯ is a
differential structure.

Problem 2.
a) Construct a diffeomorphism from the unit sphere

S1 = {(x, y, z) ∈ A3 | x2 + y 2 + z 2 = 1}

to the ellipsoid
S2 = { (x, y, z) ∈ A3 | x²/2 + y²/3 + z²/4 = 1 }.
Be sure to prove that the map you construct is smooth.

Solution:
a) Redefine S2 for a, b, c ≠ 0 as
S2 = { (x, y, z) ∈ A3 | (x/a)² + (y/b)² + (z/c)² = 1 }
so that this coincides with the above definition for a = √2, b = √3, c = 2.

Throughout, I use subscripts of 1 for objects related to S1 , and similarly for S2 .


Let p = (x1 , y1 , z1 ) ∈ S1 . Define f (p) by

f (p) = f (x1 , y1 , z1 ) = (ax1 , by1 , cz1 ).

If (x2 , y2 , z2 ) ∈ S2 , then setting x1 = x2 /a, y1 = y2 /b, z1 = z2 /c yields a point (x1 , y1 , z1 ) ∈


S1 since
x1² + y1² + z1² = (x2/a)² + (y2/b)² + (z2/c)² = 1.
Furthermore, it is readily verified that f (x1 , y1 , z1 ) = (x2 , y2 , z2 ) so that f is surjective. The
map f is clearly injective.

Consider the charts φi : Si ∩ {zi > 0} → A2(ui,vi) given by (xi, yi, zi) ↦ (xi, yi) for i = 1, 2.
Then,
(φ2 ◦ f ◦ φ1⁻¹)(u1, v1) = (φ2 ◦ f)(u1, v1, √(1 − u1² − v1²)) = φ2(au1, bv1, c√(1 − u1² − v1²)) = (au1, bv1).

This is clearly smooth. To check if the differential dfp is bijective, it suffices to check for one
transition, and the above shows that for p in the upper hemisphere,

d(φ2 ◦ f ◦ φ1⁻¹)φ1(p) (ξ1, ζ1) = (aξ1, bζ1),

which is bijective. Essentially the same calculation works for all of the six hemispheres which
cover S1 , and therefore f is smooth everywhere with bijective differential. It follows that f
is a diffeomorphism, as a bijective local diffeomorphism.
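
Here is a small sympy check of this computation (added for illustration): the chart expression of f on the upper hemisphere is (u1, v1) ↦ (a u1, b v1), and f carries points of S1 to points of S2.

```python
import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
u1, v1 = sp.symbols('u1 v1', real=True)

# phi_1^{-1} on the upper hemisphere of S1, followed by f(x, y, z) = (ax, by, cz).
p = sp.Matrix([u1, v1, sp.sqrt(1 - u1**2 - v1**2)])
fp = sp.Matrix([a*p[0], b*p[1], c*p[2]])

# f(p) satisfies the ellipsoid equation whenever p is on the sphere.
print(sp.simplify((fp[0]/a)**2 + (fp[1]/b)**2 + (fp[2]/c)**2))   # 1

# Chart expression phi_2 o f o phi_1^{-1} and its (bijective) differential.
chart = sp.Matrix([fp[0], fp[1]])
print(chart.T)                      # (a*u1, b*v1)
print(chart.jacobian([u1, v1]))     # diag(a, b), invertible since a, b != 0
```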

Problem 3.
a) There is an obvious inclusion ι̇ : S 2 → A3 . Show that the differential dι̇p at any point p ∈ S 2
is an injection dι̇p : Tp S 2 → R3 and identify the image.
b) On the upper hemisphere {z > 0} we can consider the functions x, y to be coordinate
functions. As a secondary coordinate system we take spherical coordinates θ, ϕ defined by
solving the equations

x = sin(ϕ) cos(θ)
y = sin(ϕ) sin(θ).

Identify a (maximal) subset of the upper hemisphere on which θ, ϕ is a coordinate system.


(You may want to translate: replace θ, ϕ by θ − θ0 , ϕ − ϕ0 for some θ0 , ϕ0 ). On that subset
express the vector field ∂/∂x in terms of ∂/∂θ and ∂/∂ϕ. Also, compute dι̇(∂/∂x) as a
vector field on a subset of A3 .

Solution:
a) Throughout I will work with p = (0, 0, 1). The general case follows via the rotational
symmetry of S². Let (Uα, φα) denote the chart at p given by Uα = S² ∩ {z > 0} and
φα(x, y, z) = (x, y). Hence, φα⁻¹(u, v) = (u, v, √(1 − u² − v²)). Denote the components of ξα
by ξα1 and ξα2 respectively.

By definition, dι̇p (ξ) is

dι̇p(ξ) = d(ψβ ◦ ι̇ ◦ φα⁻¹)φα(p) (ξα)

for charts φα on X containing p and ψβ on An containing f (p). There is just one chart we
need to use, namely (An , idAn ). Furthermore, the above is independent of the chart on X,
so we can choose the particular chart above. Then, since ι̇ is inclusion we have

dι̇p(ξ) = d(ι̇ ◦ φα⁻¹)(0,0) (ξα)

where ι̇ ◦ φα⁻¹ : A² → A³. We know how to compute the differentials of these explicitly. In
particular,

dι̇p(ξ) = d(ι̇ ◦ φα⁻¹)(0,0) (ξα)
   = (d/dt)|t=0 ( tξα1 , tξα2 , √(1 − (tξα1)² − (tξα2)²) )
   = (ξα1, ξα2, 0)

We can always identify a tangent vector ξ by any one of its components, in particular by
ξα . Hence by varying ξα1 , ξα2 , we see that dfp (Tp X) is the xy-plane in R3 . Using rotations,
it follows that dfp (Tp X) in general is the plane through the origin in R3 with normal vector
p. One easily sees from the above that dι̇p is injective for all p.
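
A short sympy computation (added for illustration) makes this concrete on the upper hemisphere: both columns of the Jacobian of ι̇ ◦ φα⁻¹ are orthogonal to the base point p, so the image of dι̇p is the plane with normal vector p.

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

# iota o phi_alpha^{-1} : (u, v) |-> (u, v, sqrt(1 - u^2 - v^2)).
incl = sp.Matrix([u, v, sp.sqrt(1 - u**2 - v**2)])
p = incl                          # the base point of S^2 over (u, v)

J = incl.jacobian([u, v])         # its columns span the image of the differential
print(sp.simplify(p.dot(J[:, 0])))   # 0
print(sp.simplify(p.dot(J[:, 1])))   # 0
```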
b) Note: I call the coordinate functions u, v to distinguish from the labeling of coordinates on
A3 by x, y, z. So, in the problem statement we replace instances of x, y with u, v.

Let ϕ, θ take values in (0, π/2), (0, 2π) respectively. Then, ϕ, θ cover the upper hemisphere
minus the minor arc connecting (1, 0, 0) and (0, 0, 1) (including both endpoints). This is
maximal since changing θ0 just rotates the arc, while changing ϕ0 removes two circles from
the upper hemisphere. In the above, we have chosen ϕ0 so that these two circles coincide
with the north pole (radius 0) and the equator (which is removed anyways, since z > 0).

We see that
du = sin(ϕ)d(cos(θ)) + d(sin(ϕ)) cos(θ)
= − sin(ϕ) sin(θ)dθ + cos(ϕ) cos(θ)dϕ
dv = sin(ϕ)d(sin(θ)) + d(sin(ϕ)) sin(θ)
= sin(ϕ) cos(θ)dθ + cos(ϕ) sin(θ)dϕ
By definition, ∂/∂u is such that du(∂/∂u) = 1, dv(∂/∂u) = 0. Let
∂/∂u = aθ ∂/∂θ + aϕ ∂/∂ϕ
for constants aθ, aϕ to be determined. Testing this against each differential gives
1 = du(∂/∂u) = − sin(ϕ) sin(θ) dθ(aθ ∂/∂θ + aϕ ∂/∂ϕ) + cos(ϕ) cos(θ) dϕ(aθ ∂/∂θ + aϕ ∂/∂ϕ)
  = −aθ sin(ϕ) sin(θ) + aϕ cos(ϕ) cos(θ)
0 = dv(∂/∂u) = sin(ϕ) cos(θ) dθ(aθ ∂/∂θ + aϕ ∂/∂ϕ) + cos(ϕ) sin(θ) dϕ(aθ ∂/∂θ + aϕ ∂/∂ϕ)
  = aθ sin(ϕ) cos(θ) + aϕ cos(ϕ) sin(θ)

where we have exploited the linearity of the differential and that {∂/∂θ, ∂/∂ϕ} is the dual
basis of {dθ, dϕ}. Arranging this into a matrix we have
[ − sin(ϕ) sin(θ)   cos(ϕ) cos(θ) ] [ aθ ]   [ 1 ]
[   sin(ϕ) cos(θ)   cos(ϕ) sin(θ) ] [ aϕ ] = [ 0 ].
The determinant of the 2 × 2 matrix appearing is
− cos(ϕ) sin(ϕ) sin(θ)2 − cos(ϕ) sin(ϕ) cos(θ)2 = − cos(ϕ) sin(ϕ).
Since ϕ ∈ (0, π/2), this determinant is never zero and we may invert it. Thus,
[ aθ ]                          [  cos(ϕ) sin(θ)   − cos(ϕ) cos(θ) ] [ 1 ]
[ aϕ ] = (−1/(cos(ϕ) sin(ϕ)))   [ − sin(ϕ) cos(θ)  − sin(ϕ) sin(θ) ] [ 0 ]
       = [ − sin(θ)/sin(ϕ)   cos(θ)/sin(ϕ) ] [ 1 ]
         [   cos(θ)/cos(ϕ)   sin(θ)/cos(ϕ) ] [ 0 ]
       = [ − sin(θ)/sin(ϕ) ]
         [   cos(θ)/cos(ϕ) ]
In total,
∂/∂u = −(sin(θ)/sin(ϕ)) ∂/∂θ + (cos(θ)/cos(ϕ)) ∂/∂ϕ.
Now we want to write ∂/∂u as a vector field on the open unit disc in A2u,v. We can map ∂/∂u
into T S², which is a union of planes in A3. Hence the image of each ∂/∂u can be written in
terms of ∂/∂x, ∂/∂y, and ∂/∂z. We note that u, v are related to x, y, z by
x = u, y = v, z = √(1 − u² − v²).
It follows from this that
dι(∂/∂u) = ∂/∂x − (u/√(1 − u² − v²)) ∂/∂z = ∂/∂x − (x/√(1 − x² − y²)) ∂/∂z.
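
The coefficients aθ, aϕ can also be recovered mechanically; this sympy snippet (added for illustration) inverts the 2 × 2 system above and reproduces −sin(θ)/sin(ϕ) and cos(θ)/cos(ϕ).

```python
import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)

# The matrix of the linear system for (a_theta, a_phi) derived above.
M = sp.Matrix([[-sp.sin(phi)*sp.sin(theta), sp.cos(phi)*sp.cos(theta)],
               [ sp.sin(phi)*sp.cos(theta), sp.cos(phi)*sp.sin(theta)]])

a = M.solve(sp.Matrix([1, 0]))    # (a_theta, a_phi)
print(sp.simplify(a[0] + sp.sin(theta)/sp.sin(phi)))   # 0, i.e. a_theta = -sin(theta)/sin(phi)
print(sp.simplify(a[1] - sp.cos(theta)/sp.cos(phi)))   # 0, i.e. a_phi  =  cos(theta)/cos(phi)
```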
I talked about the following interpretation of the tangent space in the previous homework, but I will
include it here for completeness. It highlights the essence of Problem 3a). Consider an embedding
f : X → An of a manifold X into affine space. Then, as an embedding, dfp is injective. We can
therefore identify Tp X with its image in Rn . This identification is as follows:
Let v ∈ dfp(Tp X) and consider a curve γ : (−ε, ε) → X such that γ(0) = p and (f ◦ γ)′(0) = v.
Then,
dfp(ξ) = (d/dt)|t=0 (f ◦ γ)(t) = (f ◦ γ)′(0) = v
for some ξ ∈ Tp X.
On the other hand, for any chart (Uα , φα ) on X around p, we can consider the curve in An
given by (φα ◦ γ)(t); this has some initial velocity ξα = (φα ◦ γ)′(0). Now consider an affine map
g : An → An. By definition, the differential at q is
dgq(ξ) = (d/dt)|t=0 (g ◦ γ)(t)
for any suitably chosen curve γ. In class, we just use the straight line curve. The condition on γ is
that γ(0) = q and γ′(0) = ξ.

Consider now two charts indexed by α, β on X around p. Then, using the above,
d(φβ ◦ φα⁻¹)φα(p) (ξα) = (d/dt)|t=0 (φβ ◦ φα⁻¹ ◦ φα ◦ γ)(t) = (d/dt)|t=0 (φβ ◦ γ)(t) = ξβ.
Note that the curve we use, φα ◦ γ, is suitable since by definition ξα = (φα ◦ γ)′(0) and φα ◦ γ(0) =
φα (p). The above says that if ξ is a vector whose α component is ξα , then ξ ∈ Tp X. Injectivity tells
us that this is the same ξ at the beginning of the discussion.

So, the image of Tp X under dfp is precisely the set of initial velocities of curves on f (X) pass-
ing through f (p).

TA comment: Part a: it’s not clear to me what “we can always identify a tangent vector ξ by
one of its components” means

I meant here the identification explained in the comment for 1.2.11.

Problem 4.
a) Show
∂f/∂yj = (∂xi/∂yj) (∂f/∂xi).
b) Verify from a) and other equations that
df = (∂f/∂xi) dxi = (∂f/∂yj) dyj.

Solution: Note to the grader: I may have misunderstood this problem, I originally had it in mind
that the partials are defined so that b) is true. Instead, for b) I assume that one equality holds by
definition, and prove the other using a).
a) The maps
f ◦ x−1 : An → A, f ◦ y −1 : An → A, x ◦ y −1 : An → An
are all smooth functions for which we can take usual partial derivatives. By the ordinary
chain rule,
∂(f ◦ y⁻¹)/∂yj = ∂/∂yj [ (f ◦ x⁻¹)(x ◦ y⁻¹) ] = (∂(f ◦ x⁻¹)/∂xi) (∂xi/∂yj)
(e.g., if I had a function f (x1 , x2 ) where x1 , x2 are functions of (y 1 , y 2 ), then ∂f /∂y 1 =
∂f /∂x1 ∂x1 /∂y 1 + ∂f /∂x2 ∂x2 /∂y 1 ).

b) First let us sort out some notation. The composition x ◦ y^{-1} : An → An is a smooth function,
so its partial derivatives ∂(x ◦ y^{-1})/∂y^j exist for j = 1, ..., n. We denote these as ∂x/∂y^j.
If p ∈ M, we write ∂x/∂y^j(p) for ∂(x ◦ y^{-1})/∂y^j(y(p)). Finally, ∂x/∂y^j has n components,
which we write as ∂x^i/∂y^j for i = 1, ..., n. With this, for p ∈ M we can write the differential
d(x ◦ y^{-1})_{y(p)} as

d(x ◦ y^{-1})_{y(p)} =
\begin{pmatrix} \frac{\partial x^1}{\partial y^1}(p) & \cdots & \frac{\partial x^1}{\partial y^n}(p) \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial y^1}(p) & \cdots & \frac{\partial x^n}{\partial y^n}(p) \end{pmatrix}
= \begin{pmatrix} \frac{\partial x^1}{\partial y^1} & \cdots & \frac{\partial x^1}{\partial y^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial y^1} & \cdots & \frac{\partial x^n}{\partial y^n} \end{pmatrix} \Bigg|_p .
Note that this is consistent with

\frac{\partial x}{\partial y^i} = \left( \frac{\partial x^1}{\partial y^i}, \ldots, \frac{\partial x^n}{\partial y^i} \right)
= d(x ◦ y^{-1})_{y(p)} \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}
= \begin{pmatrix} \frac{\partial x^1}{\partial y^1} & \cdots & \frac{\partial x^1}{\partial y^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial y^1} & \cdots & \frac{\partial x^n}{\partial y^n} \end{pmatrix} \Bigg|_p \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix},

where the column vector has a 1 in the i-th entry. Analogously,

d(f ◦ x^{-1})_{x(p)} = \begin{pmatrix} \frac{\partial f}{\partial x^1} & \cdots & \frac{\partial f}{\partial x^n} \end{pmatrix} \Big|_p ,
but we do not have to fuss about decomposing ∂f /∂xi into components, since f is a
real-valued function. This problem is strange, in that we define (notationally) that df =
∂f /∂xi dxi . Of course our choice of x could have been y instead, so from a notational point
of view we are done. But let us just assume that df = ∂f /∂y i dy i . Then, by the chain rule,
d(f ◦ y^{-1})_{y(p)} = d(f ◦ x^{-1} ◦ x ◦ y^{-1})_{y(p)} = d(f ◦ x^{-1})_{x(p)} ◦ d(x ◦ y^{-1})_{y(p)}

= \begin{pmatrix} \frac{\partial f}{\partial x^1} & \cdots & \frac{\partial f}{\partial x^n} \end{pmatrix} \Big|_p
  \begin{pmatrix} \frac{\partial x^1}{\partial y^1} & \cdots & \frac{\partial x^1}{\partial y^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial y^1} & \cdots & \frac{\partial x^n}{\partial y^n} \end{pmatrix} \Bigg|_p
= \begin{pmatrix} \frac{\partial f}{\partial x^i}\frac{\partial x^i}{\partial y^1} & \cdots & \frac{\partial f}{\partial x^i}\frac{\partial x^i}{\partial y^n} \end{pmatrix} \Bigg|_p

= \frac{\partial x^i}{\partial y^j}\frac{\partial f}{\partial x^i}\, dy^j = \frac{\partial f}{\partial y^j}\, dy^j ,
where we have used a) in the last step.

TA comment: Thanks for the note; I think everything looks good.



HW 3
Guillemin/Pollack: Chapter 1.4: 2, 5, 10, 12.

1.4.2.
a) If X is compact and Y connected, show every submersion f : X → Y is surjective.
b) Show that there exist no submersions of compact manifolds into Euclidean spaces.

Solution:
a) We first show that f is an open map. Let W ⊂ X be open and p ∈ W . Suppose that
dim(X)(p) = n and dim(Y )(f (p)) = m where n ≥ m. Then there exist charts (U, x), (V, y)
about p, f (p) respectively such that
y i ◦ f ◦ x−1 = xi , i = 1, ..., m
and W ⊂ U . The above exactly means that y ◦ f ◦ x−1 is a projection, and is therefore open.
Since x, y are diffeomorphisms (by Problem 3, as they are charts), it follows that y ◦ f ◦ x−1
is open iff f is.

Now, since f is open we know that f (X) ⊂ Y is open. If we can show that f (X) is
closed, then we know due to connectedness of Y that f (X) is either Y or empty. But, f
is continuous, and thus f (X) is compact. Hence, f (X) is both open and closed, and f is
surjective.
b) The above argument shows that f (X) must be compact, but Rn is not compact.

1.4.5. Check that 0 is the only critical value of the map f : R3 → R defined by
f (x, y, z) = x2 + y 2 − z 2 .
Prove that if a and b are either both positive or both negative, then f −1 (a) and f −1 (b) are diffeomorphic. [Hint: Consider scalar multiplication by √(b/a) on R3 .] Pictorially examine the catastrophic
change in the topology of f −1 (c) as c passes through the critical value.

Solution: The differential of f is


df = 2xdx + 2ydy − 2zdz
which is clearly surjective except when p = (0, 0, 0). To see this, let p = (x, y, z). If x ≠ 0, then set
v = a/(2x)∂/∂x|p where a ∈ R. Then dfp (v) = a. If x = 0 and y ≠ 0, then set v = a/(2y)∂/∂y|p –
hence also dfp (v) = a. Now if x = y = 0 and z ≠ 0, set v = a/(2z)∂/∂z|p . Here too, dfp (v) = a. The
only remaining case is when p = (0, 0, 0), but in this case dfp = 0 which is obviously not surjective.
So, p = (0, 0, 0) is the only critical point and f (0, 0, 0) = 0 is the only critical value.
Suppose now that p ∈ f −1 (a). We show that √(b/a) p ∈ f −1 (b). We clearly have that

f(√(b/a) p) = (b/a) f(p) = b.

Now let q ∈ f −1 (b). Set p = √(a/b) q. Then, by the same logic, f (p) = a. By construction q = √(b/a) p
where p ∈ f −1 (a). So, combined with the above, it follows that all points in f −1 (b) arise this way.
That is to say, scalar multiplication by √(b/a) is a homeomorphism from f −1 (a) to f −1 (b). Finally,
such a map is clearly smooth so that it is a diffeomorphism. Note that the scalar multiplication by
√(b/a) does not make sense if the signs of b and a are different.

As c increases, f −1 (c) transforms from a hyperboloid of two sheets (c < 0, disconnected) to a
double cone (c = 0, not a manifold), to a hyperboloid of one sheet (c > 0, connected).

1.4.10. Verify that the tangent space to O(n) at the identity matrix I is the vector space of skew
symmetric n × n matrices – that is, matrices A satisfying At = −A.

Solution: The tangent space at p is the set of initial velocities of curves through p. So, we choose
an arbitrary curve and use the conditions on O(n) to impose a condition on the initial velocity. To
this end, let α : (−ε, ε) → O(n) be a curve such that α(0) = I and set A = α′(0). Since α(t) ∈ O(n)
for all t ∈ (−ε, ε), we have that α(t)^T α(t) = I. Taking derivatives, one sees that

0 = d/dt|_{t=0} (α(t)^T α(t)) = (d/dt|_{t=0} α(t))^T α(0) + α(0)^T A = (d/dt|_{t=0} α(t))^T + A = A^T + A.
The identity dα(t)T /dt = (dα(t)/dt)T is obvious by checking entries. So, we see that A is skew
symmetric. It follows that every tangent vector at the identity is a skew symmetric matrix. Now let
A be skew symmetric and set α to be α(t) = exp(tA). Then,

α(0) = I,
α′(t) = A exp(tA), so α′(0) = A,
(α(t))^T = exp(tA)^T = exp(tA^T) = exp(−tA) = exp(tA)^{-1}.

Recalling that O(n) is defined as the set of matrices M such that M T M = I, we have exhibited
a path in O(n) whose initial velocity is any arbitrary skew symmetric matrix. Thus, all tangent
vectors arise this way.
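
As a quick numerical sanity check of this characterization (an added sketch, not part of the original solution; it assumes numpy and scipy are available), one can confirm that exp(tA) stays in O(n) for skew-symmetric A and that its velocity at t = 0 recovers A:

# Check: for skew-symmetric A, alpha(t) = exp(tA) is orthogonal and alpha'(0) = A.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                                     # a random skew-symmetric matrix, A^T = -A

alpha = expm(0.3 * A)
print(np.allclose(alpha.T @ alpha, np.eye(4)))  # exp(tA) lies in O(4)

h = 1e-6
velocity = (expm(h * A) - np.eye(4)) / h        # finite-difference alpha'(0)
print(np.allclose(velocity, A, atol=1e-4))      # recovers the skew-symmetric A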
1.4.12. Prove that the set of all 2 × 2 matrices of rank 1 is a three-dimensional submanifold of
R4 = M (2). [Hint: Show that the determinant function is a submersion on the manifold of nonzero
2 × 2 matrices M (2) \ {0}.]

Solution: Note that all matrices in M2 (R) have rank 0, 1, or 2. If the rank is 0 or 1 the deter-
minant is zero, while if the rank is 2 then the determinant is nonzero. Now, the only rank zero
matrix is the zero matrix. So, M := M2 (R) \ {0} can be partitioned into the set of determinant
zero matrices (equivalently, rank 1) and the set of nonzero determinant matrices (equivalently, rank
2). It is a manifold as an open subset of a manifold. If det |M : M → R is a submersion then
d(det |M )p is surjective for all p ∈ M . In particular, this implies that every real number is a regular
value. Now, under this restricted determinant function we have that (det |M)^{-1}(0) is a manifold and
dim((det |M)^{-1}(0)) = dim(M ) − dim(R) = 4 − 1 = 3. Note that the dimension of M is that of M2 (R)
since removing a point will not change the dimensions of nearby points. But, (det |M)^{-1}(0) is precisely
the set of rank 1 matrices. So, it suffices to show that det |M is a submersion.

Let A ∈ M . Recall that TA M = TA (M2 (R)). Now, for any B ∈ M2 (R) we have that A+tB ∈ M2 (R)
and it follows that TA (M2 (R)) = M2 (R). Define Bij as the matrix whose ij component is 1 with
zeroes everywhere else. Let the subscript kl denote the “reversal” of ij, where we swap ones and
twos in ij. For example, Bij and Bkl have nonzero entries in opposite corners. We then have
 
lim_{t→0} (det(A + tBij ) − det(A))/t = akl .

I claim that the Bij are enough to show that d(det |M )A is surjective. Clearly, if akl is nonzero then

d(det |M )A ((λ/akl ) Bij ) = (λ/akl ) akl = λ
for any real λ. So, if any of the entries of A are nonzero, we can choose an appropriate Bij to show
surjectivity. Of course, if all the entries of A are zero then we fail to conclude surjectivity, but this
only happens when A is the zero matrix.

TA comment: Why is d(det)|p surjective? Also: “the dimensions of nearby points.” Points are
zero-dimensional, so what are you trying to say here? I think the most correct way to say this is
that the dimension of an open submanifold of X is equal to dim(X).

I’m not exactly sure what the issue is here. The goal is to show that d(det |M )A is nonzero. Since
TA M = M2 (R) and Tdet |M (A) R = R, we have that d(det |M )A : M2 (R) → R. To check for surjectiv-
ity, let λ ∈ R. Define
       
B11 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad B12 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B21 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad B22 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
You can then check that
d(det |M )A (B11 ) = a22 , d(det |M )A (B12 ) = −a21 , d(det |M )A (B21 ) = −a12 , d(det |M )A (B22 ) = a11 .
So, if A is not the zero matrix, one of the akl is nonzero. Choose the corresponding Bij so that the
differential gives akl and apply linearity to scale to λ.
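
If it helps, these four values can also be confirmed symbolically (an added sketch, not part of the original solution; it assumes sympy is available):

# Symbolic check of d(det|_M)_A(B_ij) = d/dt det(A + t*B_ij) at t = 0.
import sympy as sp

a11, a12, a21, a22, t = sp.symbols('a11 a12 a21 a22 t')
A = sp.Matrix([[a11, a12], [a21, a22]])

for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    B = sp.zeros(2, 2)
    B[i, j] = 1
    deriv = sp.diff((A + t * B).det(), t).subs(t, 0)
    print('B' + str(i + 1) + str(j + 1), sp.simplify(deriv))
# prints a22, -a21, -a12, a11, matching the values above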

And, the comment about dimensions at nearby points is exactly what I meant – if you define a
function dim : M → N which outputs the dimension of your manifold at a point p, then dim is
locally constant.
Dan’s Problems.
Problem 1.
a) Consider the function f : R → R defined by
f(x) = e^{−1/x²} for x > 0, and f(x) = 0 for x ≤ 0.
Prove that f is C ∞ . Sketch the graph of f . Compare f to its Taylor series at x = 0.
b) Given real numbers a < b show that
g(x) := f (x − a)f (b − x)
is smooth and vanishes outside the interval (a, b).
c) Given real numbers a < b, construct a C ∞ function h : R → R such that: (i) h(x) = 0 for
x ≤ a, (ii) h(x) = 1 for x ≥ b, and (iii) h is monotonic nondecreasing.
d) Given real numbers a < b < c < d, construct a C ∞ function k : R → R so that (i) k(x) = 0
for x ≤ a, (ii) k(x) = 1 for b ≤ x ≤ c, and (iii) k(x) = 0 for x ≥ d.
e) Given real numbers ai < bi < ci < di , i = 1, ..., n, construct a C ∞ function k : An → R so
that (i) k(x1 , ..., xn ) = 0 if any xi ≤ ai ; (ii) k(x1 , ..., xn ) = 1 if bi ≤ xi ≤ ci for all i = 1, ..., n;
and (iii) k(x1 , ..., xn ) = 0 if any xi ≥ di .
f) Prove that on every manifold X there is a nonconstant C ∞ function f : X → R.

Solution:
a) We first show that f̃(x) = e^{−1/x²} is a smooth function. I claim that for each n ≥ 0, there
exist polynomials Pn and Qn such that

dⁿf̃/dxⁿ = (Pn/Qn) f̃.

We do this inductively. For n = 0 it is obvious, and for n = 1 we have that df̃/dx = (2/x³) f̃,
so that P1 = 2 and Q1 = x³. Now suppose the inductive hypothesis holds. Then,

d^{n+1}f̃/dx^{n+1} = (d/dx)(dⁿf̃/dxⁿ) = (d/dx)((Pn/Qn) f̃)
  = (d/dx)(Pn/Qn) f̃ + (Pn/Qn)(2/x³) f̃
  = ((Pn′Qn − Qn′Pn)/Qn²) f̃ + (2Pn/(x³Qn)) f̃
  = ((Pn′Qn x³ − Qn′Pn x³ + 2PnQn)/(x³Qn²)) f̃.

Noting that sums, products, and derivatives of polynomials are polynomials, we see that
Pn+1 := Pn′Qn x³ − Qn′Pn x³ + 2PnQn and Qn+1 := x³Qn² are polynomials such that

d^{n+1}f̃/dx^{n+1} = (Pn+1/Qn+1) f̃,

thus concluding the induction. Now, we know that Q1 = x³ so that in general Qn = x^m
for some m > 0. Hence, all the derivatives are continuous except possibly at x = 0. We fix
some m > 0 and investigate the limit

lim_{x→0} f̃(x)/x^m = lim_{x→0} e^{−1/x²}/x^m.

By a change of variables, one discovers that

lim_{x→0⁺} e^{−1/x²}/x^m = lim_{x→∞} x^m e^{−x²} = 0

and similarly for the left-sided limit. The above limit can be evaluated by at most m uses of
L'Hopital's rule. Hence, since Qn always takes the form Qn = x^m we have that (Pn/Qn) f̃ → 0
as x → 0 for any n. It follows that f̃ is smooth. The above computation also shows that
dⁿf̃/dxⁿ|_{x=0} = 0 for all n, so that all the derivatives agree with the constant function 0 at
0. Hence, by the gluing lemma we have that f is smooth. This also shows that the Taylor
expansion of f at x = 0 is given by T(x) = 0, despite f being nonzero for x > 0. The graph
(omitted here) is identically 0 for x ≤ 0 and increases smoothly from 0 toward 1 for x > 0.
looks like

b) As the product of smooth functions, g is smooth. Since f vanishes outside (0, ∞), f (x − a)
vanishes outside (a, ∞) while f (b−x) vanishes outside (−∞, b). Hence, g(x) vanishes outside
(a, ∞) ∩ (−∞, b) = (a, b).
c) Consider g as above. This is a smooth compactly supported function so that it is integrable.
We have that supp(g) = [a, b], and g is positive on (a, b). Hence ‖g‖₁ > 0. We can define h
as the convolution

h(x) = (1/‖g‖₁)(χ_{(0,∞)} ∗ g)(x) = (1/‖g‖₁) ∫_{−∞}^{∞} χ_{(0,∞)}(y) g(x − y) dy = (1/‖g‖₁) ∫_{0}^{∞} g(x − y) dy.

For fixed x, what is the support of g(x − y)? Pictorially, we have reflected g about the y axis
(so its support becomes [−b, −a]) and then translated to the right by x units (so its support
becomes [x − b, x − a]). For x ≤ a we see that g(x − y) is zero on [0, ∞) so that h(x) = 0 for
x ≤ a. Otherwise,

h(x) = (1/‖g‖₁) ∫_{max{0, x−b}}^{x−a} g(x − y) dy.

For x ≥ b, we have

h(x) = (1/‖g‖₁) ∫_{x−b}^{x−a} g(x − y) dy = (1/‖g‖₁) ∫_{−∞}^{∞} g(x − y) dy = 1.

Finally, if a < x < b then

h(x) = (1/‖g‖₁) ∫_{0}^{x−a} g(x − y) dy.

Since g(x − y) is positive on (0, x − a) ⊂ (x − b, x − a), and the sets (0, x − a) increase in
measure as x increases to b, it follows that h(x) is increasing on a < x < b. Since h is the
convolution with a smooth function, it is smooth.
d) Denote g as in part b) by g_{a,b}, and similarly for h so that h_{a,b} = (χ_{(0,∞)} ∗ g_{a,b})/‖g_{a,b}‖₁. Define
k_{a,b,c,d} by

k_{a,b,c,d}(x) = h_{a,b}(x) h_{−d,−c}(−x).

We know that h_{a,b} is zero on (−∞, a] and 1 on [b, ∞), and similarly h_{−d,−c} is zero on
(−∞, −d] and 1 on [−c, ∞). So, h_{−d,−c}(−x) is zero on [d, ∞) and 1 on (−∞, c]. (A small
numerical sketch of these one-variable cutoff functions is given after part f) below.)
e) Define k(x1 , ..., xn ) by

k(x1 , ..., xn ) = ∏_{i=1}^{n} k_{a_i,b_i,c_i,d_i}(x_i).

This obviously has the desired properties by part d).
f) Fix some p ∈ X and let (U, φ) be a chart around p. Note that U could be all of X. Now,
φ maps U into some affine space An . Since φ(U ) ⊂ An is open, there exists an open box Q with center
q = φ(p) and cl(Q) ⊂ φ(U ). Explicitly, we may write Q as

Q = {(x1 , ..., xn ) ∈ An | a_i < x_i < d_i}

for some a_i, d_i ∈ R, i = 1, ..., n where a_i < d_i and (a_i + d_i)/2 = q^i, so that q is the center of
Q. For r > 0 close to zero we may define Q_r by

Q_r = {(x1 , ..., xn ) ∈ An | a_i + r < x_i < d_i − r}

so that Q_r shrinks Q slightly into its center. Setting b_i = a_i + r and c_i = d_i − r, we may
choose r > 0 small so that a_i < b_i < c_i < d_i (I believe r < min{(d_i − a_i)/2} should work).
Fix such an r. By part e), there exists a C ∞ function k : An → R so that k(x1 , ..., xn ) = 0 if
any x_i ≤ a_i or if any x_i ≥ d_i, while k(x1 , ..., xn ) = 1 if b_i ≤ x_i ≤ c_i for all i = 1, ..., n. Recast,
k is a smooth function which is zero outside Q and is 1 inside cl(Q_r). Now let k̃ = k|_{φ(U)}
and define f : X → R by f = k̃ ◦ φ on U and f = 0 on X \ U . This is clearly nonconstant since
f (p) = 1. It is smooth since if p′ ∈ U , we may just choose the chart (U, φ), and observe that
id_R ◦ f ◦ φ^{-1} = k̃ is smooth. If p′ ∉ U , then f is just the constant zero function, which is
smooth. As an aside, we must choose Q so that cl(Q) ⊂ φ(U ). If not, when p′ ∈ X \ U it could
be that p′ ∈ ∂U . Hence, any neighborhood flows into U . But, we can choose a small enough
neighborhood that does not intersect φ^{-1}(Q) because of this closure condition. Since k̃ is
zero off Q, it follows that f is identically zero in this small neighborhood.
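
The one-variable constructions in parts a)–d) are easy to sketch numerically. The following is an added illustration (not part of the original solution; it assumes numpy is available), using the observation from part c) that h_{a,b} is the normalized cumulative integral of g_{a,b}:

# Numerical sketch of the bump g_{a,b}, the smooth step h_{a,b}, and the plateau k_{a,b,c,d}.
import numpy as np

def f(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos] ** 2)      # the basic bump from part a)
    return out

xs = np.linspace(-4.0, 4.0, 8001)              # a symmetric grid, so xs[::-1] = -xs

def step(a, b):
    g = f(xs - a) * f(b - xs)                  # g_{a,b}, supported in (a, b)
    H = np.cumsum(g)                           # crude antiderivative on the grid
    return H / H[-1]                           # normalized: rises from 0 to 1 across (a, b)

a, b, c, d = 0.0, 1.0, 2.0, 3.0                # hypothetical values with a < b < c < d
k = step(a, b) * step(-d, -c)[::-1]            # k(x) = h_{a,b}(x) * h_{-d,-c}(-x)
print(k[xs <= a].max(), k[(xs >= b) & (xs <= c)].min(), k[xs >= d].max())
# expect approximately 0, 1, 0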
Problem 2. Consider the function
f (x, y, z) = x4 + y 4 + z 4
defined on A3 .
a) Determine the critical points of f , that is, the points where the differential of f vanishes.
b) Compute the differential of f in cylindrical coordinates r, z, θ given by
x = r cos θ
y = r sin θ
z = z
Do this two ways. First: write f in cylindrical coordinates and then differentiate. Second:
differentiate f in rectangular coordinates and then change to cylindrical coordinates. Your
answers should agree.
c) Let g : S 2 → R be the restriction of f to the unit sphere. What is the maximum value of g?
Where is it attained? Can you do a complete analysis of the critical points, i.e., determine
the maxima, minima, and saddle points? How many critical points are there? How many
critical values (values of g at the critical points)?

Solution:
a) The differential of f is simply

df = 4x3 dx + 4y 3 dy + 4z 3 dz.

This vanishes iff x = y = z = 0. So, the only critical point of f is at (0, 0, 0). Indeed, this
makes sense since f (0, 0, 0) = 0, and if any of x, y, or z are nonzero then f is positive.
b) First, we have the following identity:

cos(θ)^4 + sin(θ)^4 = (1/4)(cos(4θ) + 3),

which can be obtained by using double angle formula on the right-hand side. Next, convert-
ing f into cylindrical coordinates and differentiating yields

f (r, θ, z) = r4 cos(θ)4 + r4 sin(θ)4 + z 4 ;


df = 4r3 (cos(θ)4 + sin(θ)4 )dr + 4r4 (− cos(θ)3 sin(θ) + sin(θ)3 cos(θ))dθ + 4z 3 dz
= 4r3 (cos(θ)4 + sin(θ)4 )dr − 4r4 cos(θ) sin(θ) cos(2θ)dθ + 4z 3 dz
= r3 (cos(4θ) + 3)dr − r4 sin(4θ)dθ + 4z 3 dz.

We can also compute dx and dy in terms of dθ and dr by differentiating the conversion


formulae. This gives

dx = cos(θ)dr − r sin(θ)dθ,
dy = sin(θ)dr + r cos(θ)dθ.

Using this, we see that

df = 4x3 dx + 4y 3 dy + 4z 3 dz
= 4r3 cos(θ)3 (cos(θ)dr − r sin(θ)dθ) + 4r3 sin(θ)3 (sin(θ)dr + r cos(θ)dθ) + 4z 3 dz
= 4r3 (cos(θ)4 + sin(θ)4 )dr + 4r4 (− cos(θ)3 sin(θ) + sin(θ)3 cos(θ))dθ + 4z 3 dz
= r3 (cos(4θ) + 3)dr − r4 sin(4θ)dθ + 4z 3 dz

as desired.
c) We first convert f into spherical coordinates:

f (r, θ, ϕ) = r4 sin(ϕ)4 cos(θ)4 + r4 sin(ϕ)4 sin(θ)4 + r4 cos(ϕ)4 .

To restrict f to the unit sphere, it suffices to set r = 1. Thus,

g(θ, ϕ) = sin(ϕ)^4 cos(θ)^4 + sin(ϕ)^4 sin(θ)^4 + cos(ϕ)^4.

My reasoning for doing this is as follows: In Cartesian coordinates, we must make the substitution
z = ±√(1 − x² − y²). So, g will be a function from D the unit disc to R. The issue
with this is that the differential will not detect certain critical values – in particular, those
which correspond to critical points along ∂D. The one dimensional analog of this is having
a function g : [0, 1] → R, and to find maxima/minima you need to test the endpoints (that
is, ∂[0, 1]). With spherical coordinates, you traditionally restrict the parameters θ, ϕ (and,
in the following analysis we will do this to extract the critical points). But, we can let both
range over R, it’s just that this function is periodic. This lets us detect critical points along
the boundary, though.

The differential of g in spherical coordinates is


dg = (−4 sin(ϕ)4 cos(θ)3 sin(θ) + 4 sin(ϕ)4 sin(θ)3 cos(θ))dθ
+(4 sin(ϕ)3 cos(ϕ) cos(θ)4 + 4 sin(ϕ)3 cos(ϕ) sin(θ)4 − 4 cos(ϕ)3 sin(ϕ))dϕ
= 4 sin(ϕ)4 sin(θ) cos(θ)(− cos(θ)2 + sin(θ)2 )dθ
+4 sin(ϕ) cos(ϕ)(sin(ϕ)2 cos(θ)4 + sin(ϕ)2 sin(θ)4 − cos(ϕ)2 )dϕ
= −2 sin(ϕ)4 sin(2θ) cos(2θ)dθ
+2 sin(2ϕ)(sin(ϕ)2 (cos(θ)4 + sin(θ)4 ) − cos(ϕ)2 )dϕ
= − sin(ϕ)4 sin(4θ)dθ + 2 sin(2ϕ)(sin(ϕ)2 (cos(θ)4 + sin(θ)4 ) − cos(ϕ)2 )dϕ
To find critical points, it must be that both coefficients are zero. There are some easy ones
we can find then. If ϕ = 0, π, then both sin(ϕ) and sin(2ϕ) are zero. Hence, any θ works.
But, notice that if we choose ϕ = 0 and let θ be any value,
x = sin(ϕ) cos(θ) = 0
y = sin(ϕ) sin(θ) = 0
z = cos(ϕ) = 1
regardless of the choice of θ. Similarly, if ϕ = π we get (0, 0, −1) regardless of the choice of θ.

Now, if ϕ = π/2, then sin(2ϕ) = 0. Hence the second coefficient is zero, and we choose
θ so that the first coordinate is zero. Note that sin(ϕ) = ±1, so we simply need θ such
that sin(4θ) = 0. The values of θ solving this are θ = 0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, 7π/4.
First, when θ = 0, π/2, π, 3π/2 we have that cos(θ), sin(θ) are either 0 or ±1. We have that
sin(ϕ) = 1 while cos(ϕ) = 0, so in Cartesian coordinates these correspond to the points
(±1, 0, 0) and (0, ±1, 0). Next, consider θ = π/4, 3π/4, 5π/4, 7π/4. Then cos(θ), sin(θ) are
either ±1/√2. So, in Cartesian coordinates the points corresponding to these θ and ϕ are
(±1/√2, ±1/√2, 0) (here, the ± may be taken independently of each other, giving four
choices total).

Now suppose that neither sin(ϕ) or sin(2ϕ) are zero. We must have that sin(4θ) = 0,
producing the same list of θ as previously. We use the condition
sin(ϕ)2 (cos(θ)4 + sin(θ)4 ) − cos(ϕ)2 = 0
to determine ϕ. For θ = 0, π/2, π, 3π/2 we have that cos(θ)4 + sin(θ)4 = 1 (since one of the
two is ±1 and the other is zero). Hence,
− cos(2ϕ) = sin(ϕ)2 − cos(ϕ)2 = 0
So, we have that ϕ = π/4, 3π/4. When θ = 0, these ϕ give the points (1/√2, 0, 1/√2) and
(1/√2, 0, −1/√2). By similar reasoning, we see that for θ = 0, π/2, π, 3π/2 we get the
points (±1/√2, 0, ±1/√2) and (0, ±1/√2, ±1/√2).
Finally, let θ = π/4, 3π/4, 5π/4, 7π/4. Then
cos(θ)^4 + sin(θ)^4 = (±1/√2)^4 + (±1/√2)^4 = 1/4 + 1/4 = 1/2.
Hence, we must find ϕ satisfying
sin(ϕ)2 /2 − cos(ϕ)2 = 0.
Note that this is equivalent to finding ϕ satisfying
3 sin(ϕ)2 − 2 = sin(ϕ)2 − 2(1 − sin(ϕ)2 ) = sin(ϕ)2 − 2 cos(ϕ)2 = 0.
So, we have that ϕ = arcsin(√(2/3)), π − arcsin(√(2/3)). First consider θ = π/4 and ϕ =
arcsin(√(2/3)). We have that cos(arcsin(√(2/3))) = √(1 − 2/3) = √(1/3), so that in Cartesian coordinates
this point is (1/√3, 1/√3, 1/√3). Varying ϕ and θ among the possible values just
changes the sign of the coordinates. So, these values correspond to (±1/√3, ±1/√3, ±1/√3).

In total, we have the following groups of critical points:

C1 = {(±1, 0, 0), (0, ±1, 0), (0, 0, ±1)}
C2 = {(±1/√2, ±1/√2, 0), (±1/√2, 0, ±1/√2), (0, ±1/√2, ±1/√2)}
C3 = {(±1/√3, ±1/√3, ±1/√3)}.

Thus, there are 6 + 12 + 8 = 26 critical points (respectively from each group). I have arranged
these so that their values under g are the same – for those in C1 it is 1, in C2 it is 1/2, and
in C3 it is 1/3. These are global maxima, saddle points, and global minima in that order.
To see that the critical points in C2 are saddle points, let us look at p = (1/√2, 1/√2, 0) in
particular. Consider the curve γ(t) = (cos t, sin t, 0). Traveling from (1, 0, 0) along this curve,
we start at a maximum, pass through p, and land at (0, 1, 0), also a maximum. It follows
that p must be a local minimum along γ, since there are no other critical points between (1, 0, 0) and
(0, 1, 0) along γ. Similarly, look at the meridian γ passing through both poles and p. Starting
at (0, 0, 1), we pass through the critical points (1/√3, 1/√3, 1/√3), p, (1/√3, 1/√3, −1/√3),
and (0, 0, −1). In this order, we start at a maximum and pass through a minimum, maximum
(p), minimum, and finally a maximum. So, p is a local max in one direction and a local min
in another. (A quick numerical check of these critical points and values is included just below.)
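
As a numerical sanity check (an added sketch, not part of the original solution; it assumes numpy is available), the 26 points above satisfy the Lagrange condition ∇f(p) = λp for f restricted to the sphere, and f takes exactly the values 1, 1/2, 1/3 on C1, C2, C3:

# Verify the critical points of f(x,y,z) = x^4 + y^4 + z^4 restricted to S^2.
import itertools
import numpy as np

def grad_f(p):
    return 4 * p ** 3                                   # gradient of x^4 + y^4 + z^4

pts = []
for i in range(3):                                      # C1: signed standard basis vectors
    for s in (1, -1):
        e = np.zeros(3); e[i] = s; pts.append(e)
for i, j in itertools.combinations(range(3), 2):        # C2: two entries +-1/sqrt(2)
    for si, sj in itertools.product((1, -1), repeat=2):
        e = np.zeros(3); e[i] = si / np.sqrt(2); e[j] = sj / np.sqrt(2); pts.append(e)
for signs in itertools.product((1, -1), repeat=3):      # C3: all entries +-1/sqrt(3)
    pts.append(np.array(signs) / np.sqrt(3))

assert len(pts) == 26
for p in pts:
    assert np.allclose(np.cross(grad_f(p), p), 0)       # grad f(p) is parallel to p
values = sorted({round(float(np.sum(p ** 4)), 6) for p in pts})
print(values)                                           # [0.333333, 0.5, 1.0]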
Problem 3. Let X be a smooth manifold, U ⊂ X an open subset, and x : U → An . Prove that (U, x)
is a chart if and only if x : U → x(U ) is a diffeomorphism.

Solution: Suppose first that x : U → An is a chart. In general, a function f : M → N , where


M, N are manifolds is smooth at p if there exist charts (U, x) and (V, y) about p, f (p) respectively
such that
y ◦ f ◦ x−1 : x(U ∩ f −1 (V )) → y(V ∩ f (U ))
is a smooth map. Now, apply this with M = X, N = An , f = x, and y = idAn |x(U ) = idx(U ) . Then,
the composition becomes
idx(U ) ◦ x ◦ x−1 : x(U ) → idx(U ) (x(U ))
which is smooth.

Now suppose that x : U → x(U ) is a diffeomorphism. Then, since differentiability implies con-
tinuity, both x and x−1 are continuous. Since x is bijective, it follows that x is a homeomorphism.

TA comment: In the ⇐ direction, you also need to argue that this chart is C ∞ -compatible with the
atlas on X.

If (V, y) is a chart with U ∩ V ≠ Ø, then we must check if


y ◦ x−1 : x(U ∩ V ) → An
is smooth. Because (V, y) is a chart, y : V → An is smooth, and so the restriction y : U ∩ V → An
is too. Since x−1 : x(U ) → U is a diffeomorphism, the restriction x−1 : x(U ∩ V ) → U ∩ V is also
a diffeomorphism, hence smooth. So the composition is smooth, and (V, y) is C ∞ -related to (U, x).
Since this holds for an arbitrary chart, (U, x) is C ∞ -compatible with the atlas.

HW 4
Guillemin/Pollack: Chapter 1.7: 4, 6; Chapter 1.8: 3, 4.

1.7.4. Prove that the rational numbers have measure zero in R, even though they are dense.

Solution: Let ε > 0. Enumerate the rationals by {rk}_{k=1}^∞. Set Ik = (rk − ε/2^{k+1}, rk + ε/2^{k+1})
so that |Ik| = ε/2^k. It follows that Q ⊂ ∪k Ik and

m(∪_{k=1}^∞ Ik) ≤ Σ_{k=1}^∞ m(Ik) = Σ_{k=1}^∞ ε/2^k = ε,

where m denotes the Lebesgue measure. Hence, for any ε > 0 there exists a covering of Q by open
intervals {Ik}_{k=1}^∞ such that Σk m(Ik) ≤ ε.

1.7.6. Prove that the sphere S k is simply connected if k > 1. [Hint: If f : S 1 → S k and
k > 1, Sard gives you a point p ∉ f (S 1 ). Now use stereographic projection.]

Solution: Consider smooth f : M → N where M, N are manifolds and are such that dim M < dim N .
Then for any p ∈ M , dfp : Tp M → Tf (p) N cannot be surjective since Tp M has smaller dimension
than Tf (p) N . Hence, every point is critical and f (M ) ⊂ N has measure zero. In particular, if
f : S 1 → S k is smooth and k > 1, then f (S 1 ) has measure zero in S k . Consequently, the complement
is dense and there exists a point p ∉ f (S 1 ). Applying a stereographic projection s sending p
“to infinity”, we obtain a loop s ◦ f : S 1 → Rk . But, any loop in Rk is nullhomotopic so that f is
too in S k .

1.8.3. Show that T (X × Y ) is diffeomorphic to T (X) × T (Y ).

Solution: Recall from a previous problem that T(x,y) (X × Y ) is naturally identified with Tx X × Ty Y
by Φ : T(x,y) (X × Y ) → Tx X × Ty Y where
Φ(ξ) = (d(πX )(x,y) (ξ), d(πY )(x,y) (ξ))
For abbreviation, we will typically call the first component η and the second component ζ.

I recall here the manifold structure of the tangent bundle. The tangent bundle as a set is
T (X) = {(x, ξ) | x ∈ X, ξ ∈ Tx X}.
We may equip it with a manifold structure where each atlas takes the form (T (Uα ), dφα ). Here,
(Uα , φα ) is a chart on X and dφα is the total differential given by dφα (x, ξ) = (φα (x), d(φα )x (ξ)).
Since φα : Uα → An , it follows that d(φα )x : Tx X → Rn , and thus dφα : T (X) → An × Rn .

Define f : T (X × Y ) → T (X) × T (Y ) by
f ((x, y), ξ) = ((x, d(πX )(x,y) (ξ)), (y, d(πY )(x,y) (ξ))) = ((x, η), (y, ζ)).
Injectivity and surjectivity follow fairly easily. So, we just need to show that f and its inverse
are smooth. Consider the chart (T (Uα × Vβ ), d(φα × ψβ )) on T (X × Y ) and the chart (T (Uα ) ×
T (Vβ ), d(φα ) × d(ψβ )) on T (X) × T (Y ). We know that d(φα × ψβ )−1 : An+m × Rn+m → T (X × Y )
where dim X = n and dim Y = m. Given (p, v) ∈ An+m × Rn+m , we push this forward. First set
x = πX ◦ (φα × ψβ )−1 (p)
y = πY ◦ (φα × ψβ )−1 (p)
ξ = d(φα × ψβ )−1
p (v).

Then, pushing (p, v) forward by f ◦ d(φα × ψβ )−1 yields


f ◦ d(φα × ψβ )−1 (p, v) = f ((φα × ψβ )−1 (p), d(φα × ψβ )−1
p (v))
= f ((x, y), ξ) = ((x, η), (y, ζ)).

Now, pushing this forward by d(φα ) × d(ψβ ) gives


d(φα ) × d(ψβ )((x, η), (y, ζ)) = ((φα (x), d(φα )x (η)), (ψβ (y), d(ψβ )y (ζ))).
Finally we have that f ((x, y), ξ) = ((x, π1 ◦ Φ(ξ)), (y, π2 ◦ Φ(ξ))). Hence, the components are
φα (x) = φα ◦ πX ◦ (φα × ψβ )−1 (p)
ψβ (y) = ψβ ◦ πY ◦ (φα × ψβ )−1 (p)
d(φα )x (η) = d(φα )x ◦ π1 ◦ Φ ◦ d(φα × ψβ )−1
p (v)

d(ψβ )y (ζ) = d(ψβ )y ◦ π2 ◦ Φ ◦ d(φα × ψβ )−1


p (v)
all of which are smooth. The inverse is similar.

TA comment: At the bottom of the first page, I don’t see how dφα T (X) → An × Rn happens.
I can see this for T (Uα ), but not necessarily more than that, unless I’m misreading.

My spacing got a little messed up when compiling all this, so what this comment refers to is when I
start saying “We may equip it... dφα : T (X) → An ×Rn ”. It shouldn’t be T (X) there, it should read
dφα : T (Uα ) → An × Rn . That’s all we need anyways, since the chart takes the form (T (Uα ), dφα )
we need that dφα is defined on the set TUα . The point of this discussion was to get the codomain
correct.
1.8.4. Show that the tangent bundle to S 1 is diffeomorphic to the cylinder S 1 × R.

Solution: Define f : R2 × R2 → R2 × R by
f ((x1 , x2 ), (v1 , v2 )) = ((x1 , x2 ), ⟨(x2 , −x1 ), (v1 , v2 )⟩) = ((x1 , x2 ), x2 v1 − x1 v2 ).
Consider also g : R2 × R → R2 × R2 given by
g((x1 , x2 ), t) = ((x1 , x2 ), t(x2 , −x1 ))
Both of these functions are smooth. Next, consider the compositions f ◦ g and g ◦ f
f ◦ g((x1 , x2 ), t) = f ((x1 , x2 ), t(x2 , −x1 )) = ((x1 , x2 ), tx22 + tx21 ))
g ◦ f ((x1 , x2 ), (v1 , v2 )) = g((x1 , x2 ), x2 v1 − x1 v2 ) = ((x1 , x2 ), (x2 v1 − x1 v2 )(x2 , −x1 ))
Observe that if (x1 , x2 ) ∈ S 1 then f ◦ g is the identity. Moreover, if p = (x1 , x2 ) ∈ S 1 and
v = (v1 , v2 ) ∈ Tp S 1 (here, I assume S 1 is embedded in R2 so that the tangent space is the traditional
tangent line) we see that x2 v1 − x1 v2 ≠ 0. This is because p is orthogonal to the tangent line, and a
vector orthogonal to p (and hence parallel to v) is (x2 , −x1 ). Consequently, (x2 v1 − x1 v2 )(x2 , −x1 ) ∈
Tp S 1 . Under these conditions, g◦f is also the identity. So, restricting f and g yields smooth functions
f : T S 1 → S 1 × R and g : S 1 × R → T S 1 which are inverses.
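
A quick numerical check that these maps really are mutually inverse (an added sketch, not part of the original solution; it assumes numpy is available):

# f and g restricted to T S^1 and S^1 x R, as defined above, undo each other.
import numpy as np

def f(p, v):
    return p, p[1] * v[0] - p[0] * v[1]        # ((x1,x2), x2 v1 - x1 v2)

def g(p, t):
    return p, t * np.array([p[1], -p[0]])      # ((x1,x2), t (x2, -x1))

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi)
p = np.array([np.cos(theta), np.sin(theta)])   # a point of S^1
t = rng.standard_normal()
v = t * np.array([p[1], -p[0]])                # a tangent vector at p

print(np.allclose(g(*f(p, v))[1], v))          # g o f is the identity on T S^1
print(np.isclose(f(*g(p, t))[1], t))           # f o g is the identity on S^1 x R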
Dan’s Problems.
Problem 1. Let M, N be smooth manifolds and f : M → N an injective proper immersion. (A map
is proper if for all C ⊂ N compact, the inverse image f −1 (C) ⊂ M is compact.) Prove that f is an
embedding.

Solution: It suffices to show that f −1 : f (M ) → M is continuous. We first show that f (being a proper map) is a closed map. Let C ⊂ M be closed. We wish to show that N \ f (C) is open – that
is, for every q ∈ N \ f (C) there exists a neighborhood Vq ⊂ N containing q and not intersecting
f (C). Since topological manifolds inherit all the local topological properties of Rn , it follows that
manifolds are locally compact. Hence, there exists an open neighborhood Uq of q with compact
closure. Since f is proper, f −1 (Cl(Uq )) is compact in M , and so too is C ∩ f −1 (Cl(Uq )) as a closed
subset of a compact set. But f is continuous, so the image f (C ∩ f −1 (Cl(Uq ))) is compact, hence
closed in N since it is Hausdorff. Finally, set Vq = Uq \ f (C ∩ f −1 (Cl(Uq ))). Now suppose there
exists a c ∈ C such that f (c) ∈ Vq . Then, c ∈ f −1 (Vq ) ⊂ f −1 (Uq ) ⊂ f −1 (Cl(Uq )). It follows that
c ∈ C ∩ f −1 (Cl(Uq )), and therefore f (c) ∉ Vq , a contradiction. Hence, Vq is an open neighborhood
of q not intersecting f (C). Since f is a closed map, f −1 : f (M ) → M is continuous.

Problem 2. Produce a smooth map f : (−δ, δ)×S 1 → S 1 for some δ > 0 with the following property:
If f |t = f |{t}×S 1 , and q ∈ S 1 is a fixed point, then #f |t (q) is nonconstant as a function of t. What
causes the jump in this function? Give other examples for other compact domains and arbitrary
codomains. Can you see a topological invariant in this situation?

Solution: Here’s the idea: imagine having a loop that just wraps around S 1 a single time. Fix
the loop at (1, 0) and pinch it at (−1, 0). Then, drag (and stretch at the same time) this around S 1
one way as t ∈ (−δ, δ) increases, or in the opposite direction as t decreases. The following picture
showcases this.

An obvious question is if this can be done smoothly. I claim that it can, in a rather explicit way. To
do this, we may view f as a composition f = F ◦ π where F : (−δ, δ) × S 1 → A3 and π : A3 → S 1
is the projection (here, I view S 1 as embedded in the xy plane of A3 ). So, what we must do is
find a parametrization for the loop F which is smooth – projections are automatically smooth. Fix
t ∈ (−δ, δ), where δ > 0 is some large number. We define two functions
kt (x) = 1 − (1 − t) e^{−1/(1−x²)} for −1 ≤ x ≤ 1, and kt (x) = 1 otherwise;
h(x) = e^{1−1/(1−x²)} for −1 < x < 1, and h(x) = 0 otherwise.

Then, set Kt (x) = kt (x/π) and H(x) = h((x − π)/π). Both of these functions are smooth. Now, we
parametrize F (t, s) as
F (t, s) = (cos(sKt (s)), sin(sKt (s)), H(s))
for s ∈ [0, 2π]. The height H increases from 0 to 1 (at π), then decreases smoothly back to 0 (at
2π). The function Kt serves as a modulation factor which varies smoothly from t (at 0) to 1 (from
π onwards). Below is a picture of this parametrization.

Where the blue and red meet represents the portion of the loop which gets pulled around. Note that
the point (1, 0, 0) is fixed on F . That is, for all t we have that (1, 0, 0) = F (t, 0). One can visually
see that as t increases, the number of points above (1, 0, 0) on F (t, s) increases too.

TA comment: Nicely done!



Problem 3. Construct a nontrivial rank one real vector bundle over the circle S 1 by gluing the ends
of [0, 1] × R using the linear map ξ 7→ −ξ. In other words, identify (0, ξ) ∼ (1, −ξ). From this
construct a vector bundle π : E → S 1 . Can you identify the 2-manifold which is the total space (the
‘E’ in π : E → S 1 ) of this bundle?

Solution: Let E = ([0, 1] × R)/ ∼ where ∼ is the equivalence relation defined in the problem.
We recall the definition of a vector bundle here. A vector bundle consists of the following data
a) Two topological spaces E and X, the total and base spaces respectively.
b) A bundle projection π : E → X, which is a continuous map (sometimes surjective depending
on the author).
c) For every x ∈ X, the fiber π −1 ({x}) is a finite dimensional vector space.
as well as the following compatibility condition: for every x ∈ X there exists an open neighborhood
U of x, a natural number kx , and a homeomorphism ϕ : U × Rkx → π −1 (U ) such that
i) π ◦ ϕ is the projection U × Rkx → U and;
ii) The map v 7→ ϕ(x, v) is an isomorphism between Rkx and π −1 ({x}).
A rank one vector bundle is a vector bundle such that kx = 1 for all x ∈ X. Here, our base
space is actually a manifold. Our bundle projection is [t, ξ] 7→ (cos(2πt), sin(2πt)), where we
have embedded S 1 in R2 . For each p = (cos(2πt), sin(2πt)) ∈ S 1 (restrict t to [0, 1] through-
out) we see that π −1 ({p}) = [t, ξ]. Since t is fixed, this carries a natural vector space structure
by [t, ξ1 ] + [t, ξ2 ] = [t, ξ1 + ξ2 ] and λ[t, ξ] = [t, λξ]. Hence, we just need to check the compatibility
condition, and if it holds, we will have a rank one vector bundle.

Now define U1 = S 1 \ {(1, 0)} and U2 = S 1 \ {(−1, 0)}. These form an open cover of S 1 . Let
ϕ1 : U1 × R → π −1 (U1 ) be given by ((cos(2πt), sin(2πt), ξ) 7→ [(t, ξ)]. Since the quotient only affects
those points of [0, 1]×R whose first component is 0 or 1, we see that [(t, ξ)] is just the singleton (t, ξ).
Treating it as such, ϕ1 is a homeomorphism (it just takes a curved infinite strip and flattens it out).
Clearly π ◦ ϕ is just the projection U1 × R → U1 . When t is fixed we get an obvious isomorphism,
which is essentially just the identity on ξ.

What about p = (1, 0), which a priori poses a problem since E is constructed by gluing above
p? We can just define ϕ2 analogously to ϕ1 (replacing all subscripts of 1 by 2). Clearly too π ◦ ϕ2
is just the projection U2 × R → U2 and for fixed t, ϕ2 is an isomorphism. So, we just need to check
that ϕ2 is a homeomorphism.

We argue this visually – we constructed E by gluing above (1, 0), but we can get a diffeomor-
phic copy of E by gluing above (−1, 0) instead. Call this Ẽ and the homeomorphism f . We can
similarly define π̃ : Ẽ → S 1 and ϕ̃2 : U2 × R → π̃ −1 (U2 ). By the same reasoning as before, ϕ̃2 is
a homeomorphism. Viewing ϕ2 as a composition ϕ2 = f ◦ ϕ̃2 , we see that ϕ2 is a composition of
homeomorphisms.

The space E is diffeomorphic to the Möbius band without boundary.

TA comment: In the future, you can save yourself some time by assuming the reader knows what a
vector bundle is.

I do this because it’s helpful for me to have a reference to the definitions right there in the homework.
Problem 4. Let V be a finite dimensional real inner product space. Define the Stiefel manifold
St2 (V ) = {b : R2 → V | b is an isometry}.
Construct a smooth manifold structure on Stk (V ).

Solution: Let V be of dimension n and define the k-Stiefel manifold by


Stk (V ) = {b : Rk → V | b is an isometry}.

Of course, the above is only well defined when k ≤ n. Since b is an isometry, there is an associated
n × k matrix A such that AT A = Ik . Hence, we may regard Stk (V ) as
Stk (V ) = {A ∈ Mn×k (R) | AT A = Ik }
instead. Define Φ : Mn×k (R) → Symk×k (R) by Φ(A) = A^T A, which is smooth. Then for any
A ∈ Mn×k (R) and V ∈ TA Mn×k (R),

dΦA (V ) = d/dt|_{t=0} Φ(A + tV ) = d/dt|_{t=0} (A + tV )^T (A + tV ) = V^T A + A^T V.
Observe that Stk (V ) = Φ−1 ({Ik }), so that if Ik is a regular value of Φ, we are done. To check this,
we must check that for every A ∈ Stk (V ) the differential dΦA is surjective. Since Symk×k (R) is
a vector space, its tangent space is canonically isomorphic to itself. Let W ∈ Symk×k (R) and let
V = (1/2)AW ∈ TA Mn×k (R). Then,

dΦA (V ) = V^T A + A^T V = (1/2)W^T A^T A + (1/2)A^T A W = W

since A^T A = Ik and W^T = W . So, Stk (V ) is a submanifold of Mn×k (R) and inherits a smooth
structure as follows. First, Stk (V ) has constant dimension m = nk − 1/2k(k + 1). Since it is a
submanifold, for each p ∈ Stk (V ) there exists a chart (U, φ) of Mn×k (R) around p such that
φ(Stk (V ) ∩ U ) = {(x1 , ..., xnk ) ∈ Ank | xm+1 = ... = xnk = 0} ∩ φ(U ).
We call the above a submanifold chart of Stk (V ) around p. Let A = {(Uα , φα )}α∈A be a covering
of Stk (V ) by submanifold charts. Then, for each φα = (φα^1 , ..., φα^{nk}) define ψα = (φα^1 , ..., φα^m ). Set
Vα = Stk (V ) ∩ Uα . Then, B = {(Vα , ψα )}α∈A is an atlas on Stk (V ).
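
As a numerical sanity check of the surjectivity argument (an added sketch, not part of the original solution; it assumes numpy is available): for A with orthonormal columns and any symmetric W, the tangent vector V = (1/2)AW really does map to W under dΦ_A:

# dPhi_A(V) = V^T A + A^T V hits an arbitrary symmetric W when V = (1/2) A W.
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 3
A, _ = np.linalg.qr(rng.standard_normal((n, k)))   # A^T A = I_k, so A is a point of St_k(V)

S = rng.standard_normal((k, k))
W = S + S.T                                        # an arbitrary symmetric k x k matrix
V = 0.5 * A @ W

print(np.allclose(V.T @ A + A.T @ V, W))           # prints True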

TA comment: nice work!


Problem 5. What manifold parametrizes great circles in the standard unit sphere S 2 ⊂ A3 ?

Solution: Each great circle may be identified with a particular plane passing through the center
of S 2 . Now, this plane is determined by a choice of unit normal up to a sign. This is equivalent
to identifying antipodal points on S 2 (since the unit normals are just points on S 2 ). Hence, RP2
parametrizes the great circles.
Problem 6. For each of the following construct an example.
a) A compact manifold X and a smooth manifold Y with dim X = dim Y = 2, and a smooth
map f : X → Y such that if R ⊂ Y is the subset of regular values and # : R → Z≥0 the
function which assigns to q ∈ R the cardinality of f −1 (q), then # takes on three distinct
values. (Recall from lecture that # is locally constant.)
b) An embedding f : X → Y which is not proper
c) A non-simply connected compact 4-manifold
d) A surjective local diffeomorphism of 3-manifolds which is not a diffeomorphism

Solution:
b) Consider X = An and Y = S n and f : X → Y the inverse stereographic projection.
Let p ∈ S n be the north pole so that f (X) = Y \ {p}. Then f is an embedding, but
f −1 (S n ) = An .
c) The manifold T 4 = S 1 × S 1 × S 1 × S 1 is a compact 4-manifold whose fundamental group is
nontrivial (isomorphic to Z ⊕ Z ⊕ Z ⊕ Z).
d) Let us work in A2 for a moment. Consider the map f : A2 → A2 \ {0} given by f (x, y) =
(e^x cos y, e^x sin y). Fix a point (u, v) ∈ A2 and consider U an open unit square centered at
(u, v). Then, f (U ) has boundary two circular arcs and two radial lines, and is diffeomorphic
to U . To show that f |U actually gives such a diffeomorphism, we see that

df_{(u,v)} = \begin{pmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{pmatrix} \Bigg|_{(u,v)} = \begin{pmatrix} e^u \cos v & -e^u \sin v \\ e^u \sin v & e^u \cos v \end{pmatrix}.

This is invertible since its determinant is e2u > 0. So, f is a local diffeomorphism. It is
easily seen to be surjective. However, it is not injective since f is periodic in the second
coordinate. So, f cannot be a global diffeomorphism.

We can extend this to a map F : A3 → A3 \ Z where Z is the z-axis in A3 . This new


map is given by F (x, y, z) = (f (x, y), z), which is also a local diffeomorphism (essentially by
the same computation as before).

TA comment: Nice examples!


Problem 7.
a) A 3 × 3 rotation matrix always has a fixed line – that is, an eigenspace – which is actually
pointwise fixed – the eigenvalue is 1. Show that this is so. (A 3 × 3 rotation matrix is
an orthogonal matrix with determinant 1. The group of all such matrices is denoted SO3 .)
Except for the identity matrix I, this line is unique. Show that the map f : SO3 \{I} → RP2
so defined is a submersion. What is the inverse image of a point?
b) Show that RP3 may be constructed from the unit ball B 3 ⊂ A3 by identifying antipodal
points of the boundary S 2 .
c) Construct a diffeomorphism f : RP3 → SO3 . Hint: Take the ball in part (b) to have radius
π.
d) The manifold underlying the group SO2 of orthogonal 2 × 2 matrices of determinant 1 is
also familiar. What is it? What manifold underlies O2 ?

Solution:
a) Let A be a 3 × 3 rotation matrix. We wish to show that det(A − I) = 0, where I is the 3 × 3
identity matrix. Due to orthogonality,
det(A − I) = det((A − I)T ) = det(AT − I) = det(AT − AT A) = det(AT (I − A))
= det(AT ) det(I − A) = det(A−1 )(−1)3 det(A − I) = − det(A − I)
which shows that det(A − I) = 0. Hence, 1 is an eigenvalue. The eigenspace corresponding
to this eigenvalue is some fixed line, which (besides the identity) uniquely determines the
rotation.

Note that RP2 may be identified with antipodal points on S 2 , which in turn are identified
with lines passing through the origin. So, the inverse image of a point is just all rotation
matrices whose axis of rotation is the corresponding line. (A quick numerical check of the
fixed-axis claim is sketched at the end of this problem.)

The map f : SO3 → RP2 is a submersion if dfp : Tp SO3 → Tf (p) RP2 is surjective. We
use the curves approach to tangent vectors to work this out.

For each point p ∈ RP2 , there is a corresponding line L(p) passing through the origin.
Given a curve β : (−ε, ε) → RP2 , we get a corresponding path of lines γ. For each line, we
can associate a rotation which is just rotation by π with axis of rotation the given line. This
gives a curve α : (−ε, ε) → SO3 such that f ◦ α = β. We can actually use any fixed angle,
but using π resolves any ambiguity about direction.
b) I think by definition, we are using RP3 = Gr2 (R), the set of planes passing through the
origin in R. For each such plane, there are two unit normals differing only by sign. Of
course, these unit normals are elements of S 2 so that identifying them (via the antipodal
map) yields RP2 .
c) The idea is that each point in RP3 gives two antipodal points, which generate a line. We
use this line and the distance to the origin to specify the axis of rotation and the angle of
rotation. The issue is we have no way of separating points at the same distance, so we need
to be a little clever.
To overcome this, note that given any rotation in SO3 , we have an axis of rotation L and an

oriented angle of rotation. I will represent this as a pair (η, θ), where θ ∈ (−π, π) and η is a
unit vector parallel to L. The direction of η is given by the direction of rotation (say, using
the right hand rule, looking where your thumb points as you curl your hand). Of course,
this does not cover those rotations with magnitude π, but for these the direction does not
matter. Indeed, it is for this reason that we identify boundary points on the ball!
Now, η is some unit vector so that θη lies in the interior of Bπ . It is this point that we map
our chosen rotation to. Intuitively this map is smooth with smooth inverse. We can see this
by varying η and θ, one just moves the axis of rotation around while the other just moves
the amount of rotation.
d) Every rotation in SO2 is just given by a specified number θ. As a polar angle, this gives
some point on S 1 , which tells you how much to rotate. So, SO2 is diffeomorphic to S 1 . An
explicit map is
 
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \mapsto (\cos\theta, \sin\theta).
When considering O2 , we just introduce reflections. This gives a separate connected com-
ponent so that O2 = S 1 × {−1, 1} = S 1 ∪ S 1 . The first copy just gives usual rotations while
the second gives rotations and a reflection.

TA comment: Part (b): Gr2 (R) doesn’t make sense, because there are no two-dimensional subspaces
of R. Also, the problem is asking for a construction of RP3 , not RP2 .

I think these were two typos, and they should’ve been Gr2 (R3 ) and RP3 respectively.
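
As referenced in part a), here is a quick numerical check of the fixed-axis claim (an added sketch, not part of the original solution; it assumes numpy is available):

# A random matrix in SO(3) has 1 as an eigenvalue, with a pointwise-fixed axis.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                        # flip one column so det(Q) = +1

vals, vecs = np.linalg.eig(Q)
i = int(np.argmin(np.abs(vals - 1)))     # the eigenvalue closest to 1
axis = np.real(vecs[:, i])
print(np.isclose(np.real(vals[i]), 1.0)) # eigenvalue 1 is present
print(np.allclose(Q @ axis, axis))       # the corresponding axis is pointwise fixed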

Problem 8. Let X, Y be finite dimensional real vector spaces and W ⊂ Y a subspace. A linear map
L : X → Y is transverse to W if L(X) + W = Y , that is, if any vector in Y is a sum (possibly
nonuniquely) of a vector in L(X) and a vector in W .
a) Let π : Y → Y /W be the quotient map. Prove that L is transverse to W if and only if π ◦ L
is surjective.
b) If L is transverse to W then compute the dimension of L−1 (W ) ⊂ X.
c) Prove that the set of linear maps transverse to W is an open subset of Hom(X, Y ).

Solution:
a) The elements of Y /W are equivalence classes [v] = v + W . The map π : Y → Y /W is given
by y 7→ [y]. Suppose first the L is transverse to W . Choose some coset v + W ∈ Y /W . Since
v ∈ Y , there exists an x ∈ X and a w ∈ W such that v = L(x) + w. So, L(x) represents the
same coset and
v + W = [L(x)] = π ◦ L(x).
Conversely, let y ∈ Y so that it lies in some coset v + W . We wish to write y = v + w for
some w ∈ W and v ∈ L(X). To do this, appeal to surjectivity. Then there exists an x ∈ X
such that
v + W = π ◦ L(x) = [L(x)] = L(x) + W
Hence, y ∈ L(x) + W and therefore L(X) + W = Y .
b) Observe that L−1 (W ) = (π ◦ L)−1 ([0]). So, it suffices to show that [0] is a regular value.
Since each of X and Y /W are vector spaces, we have for any x ∈ X that Tx X ≅ X and
T[L(x)] Y /W ≅ Y /W . Let v ∈ Tx X. Then,

d(π ◦ L)x (v) = d/dt|_{t=0} (π ◦ L)(x + tv) = [L(v)] = π ◦ L(v).

But, π ◦ L is surjective by part a) owing to transversality of L. Hence, we know that

dim L−1 (W ) = dim X − dim Y /W = dim X − dim Y + dim W.

(A small numerical illustration of this count is sketched after part c) below.)

c) I argue this (loosely) by contradiction. Let TrW (X, Y ) denote the set of linear maps L :
X → Y transverse to W . Suppose that TrW (X, Y ) contains a limit point L and let Ln be a
sequence in TrW (X, Y ) converging to L. Then, look at the images Ln (X) and L(X). Since
L is a limit point, the subspaces Ln (X) get closer and closer to W . One way I imagine this
is if W is a plane and Ln (X) are planes which start orthogonal to W and rotate down onto
W.
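
For what it is worth, the dimension count in part b) is easy to illustrate numerically (an added sketch, not part of the original solution; it assumes numpy is available):

# For a generic L : R^5 -> R^4 and W = span(e1, e2) in R^4, dim L^{-1}(W) = 5 - 4 + 2 = 3.
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((4, 5))                  # generically surjective, hence transverse to W
P = np.zeros((2, 4)); P[0, 2] = 1; P[1, 3] = 1   # quotient by W keeps the e3, e4 coordinates
# x lies in L^{-1}(W) exactly when (pi o L)(x) = P L x = 0
print(5 - np.linalg.matrix_rank(P @ L))          # prints 3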

HW 5
I didn’t get a chance to read over this carefully when originally writing it. So, I apologize for any
typos!
Guillemin/Pollack: Chapter 1.8: 7, 8.
1.8.7. A point x ∈ X is a zero of the vector field v if v(x) = 0. Show that if k is odd, there exists
a vector field v on S k having no zeros. [HINT: For k = 1, use (x1 , x2 ) 7→ (−x2 , x1 ).] It is a rather
deep topological fact that nonvanishing vector fields do not exist on the even spheres. We will see
why in Chapter 3.

Solution: We show that v(x) = (−x2 , x1 , −x4 , x3 , ..., −xk+1 , xk ) is a nonvanishing vector field on
S k , where k is odd. Evidently the only vanishing point is at x = 0, which is not on S k . Now,
is v(x) ∈ Tx S k ? Recall that Tx S k is the plane with unit normal x. So, it suffices to show that
⟨x, v(x)⟩ = 0. But we can simply compute the inner product
⟨x, v(x)⟩ = [−x1 x2 + x2 x1 ] + [−x3 x4 + x4 x3 ] + ... + [−xk+1 xk + xk xk+1 ] = 0.
Hence, v(x) is a nonvanishing vector field on S k for k odd.
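
As a quick numerical check (an added sketch, not part of the original solution; it assumes numpy is available), the field v is a unit-length tangent field on an odd sphere, here S 3 in R4:

# v(x) = (-x2, x1, -x4, x3): tangent to S^3 and nowhere zero.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
x /= np.linalg.norm(x)                     # a random point of S^3

v = np.empty_like(x)
v[0::2] = -x[1::2]                         # v = (-x2, x1, -x4, x3)
v[1::2] = x[0::2]

print(np.isclose(np.dot(x, v), 0.0))       # v(x) is tangent to S^3 at x
print(np.isclose(np.linalg.norm(v), 1.0))  # and has unit length, so never vanishes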
1.8.8. Prove that if S k has a nonvanishing vector field, then its antipodal map is homotopic to the
identity (Compare Section 6, Exercise 7.) [HINT: Show that you may take |v(x)| = 1 everywhere.
Now rotate x to −x in the direction indicated by v(x).]

Solution: Let X, Y be smooth manifolds. Recall that a homotopy between f0 : X → Y and


f1 : X → Y is a smooth map F : X × I → Y such that F (x, 0) = f0 (x) and F (x, 1) = f1 (x) (usually
homotopies are only continuous, but since we're working with smooth objects, we use smooth).
So here we need to find a smooth map F : S k × I → S k such that F (x, 0) = x and F (x, 1) = −x.

Suppose that S k has a nonvanishing vector field v(x). Since |v(x)| is nonzero, it is a smooth
function. Hence, v(x)/|v(x)| is a smooth unit vector field. Thus, we assume WLOG that v(x) has
norm 1 everywhere. Now define F (x, t) by
F (x, t) = x cos(πt) + v(x) sin(πt)
which is clearly smooth. Moreover, F (x, 0) = x cos(0) = x while F (x, 1) = x cos(π) = −x. So, it
remains to show that F maps into S k . We simply compute the norm of F , noting that because v(x)
is tangent to S k at x then v(x) ⊥ x:
|F (x, t)|2 = |x|2 cos(πt)2 + |v(x)|2 sin(πt)2 = cos(πt)2 + sin(πt)2 = 1.
Hence, F : S k × I → S k is a homotopy of x to −x.

TA comment: Why did you introduce the notion of a smooth homotopy? I don’t see where it
was necessary in your later work.

Guillemin and Pollack do this, so I followed.


Dan’s Problems.
Problem 1.
a) Recall that On , the set of orthogonal n × n matrices, is a Lie group. It acts on the sphere
S n−1 of unit vectors in Rn . Show that the map
On → S n−1 ,   g ↦ g (1, 0, ..., 0)^T
is a fiber bundle. Try n = 1, 2, 3.

b) Let V be a finite dimensional real inner product space. Recall (from Homework #4) the
Stiefel manifold Stk (V ) for k ∈ {1, 2, ..., dim V }. What is St1 (V )? What is Stn (V ) if
n = dim V ? (Is there a sensible definition of St0 (V )?) For any k, construct a map
Stk (V ) → Grk (V )
and prove that it is a fiber bundle. Construct a fiber bundle
Stk (V ) → Stk−1 (V )
and so a sequence of fiber bundles
Stk (V ) → Stk−1 (V ) → ... → St0 (V ).
c) What are the fibers of each map in this problem?

Solution:
a) First let U = S n−1 \ {e1 }, where e1 = (1, 0, ...). The map π : On → S n−1 above is then
given by π(g) = ge1 . We define ϕ : U × On−1 → On as follows. For each h ∈ On−1 , there is an
associated orthonormal (n − 1) × (n − 1) matrix B. We can construct an orthogonal n × n
matrix A by

A = \begin{pmatrix} 1 & 0 \\ 0 & B \end{pmatrix}

(written in block form, with B occupying the lower-right (n − 1) × (n − 1) block).
Now, given u ∈ U , there exists a geodesic connecting −e1 and −u. Let φ(u) be the rotation
matrix obtained by rotating −e1 to −u along this geodesic. This varies smoothly as u does.
Finally, define ϕ(u, B) by
ϕ(u, B) = φ(u)A
where A is given as above. As a product of orthogonal matrices, this is orthogonal, and
hence represents an element of On . Now what is ϕ(u, B)e1 ? We have that Ae1 = e1 so
that ϕ(u, B)e1 = φ(u)e1 . Since φ(u) rotates −e1 to −u, φ(u)e1 = u. That is to say,
π ◦ ϕ(u, B) = u. One can do the same thing for V = S n−1 \ {−e1 } to cover all of S n−1 .
b) Let n = dim V and k ≤ n. By definition,
Stk (V ) = {b : Rk → V | b is an isometry}.
If we fix an underlying basis {e1 , ..., ek } of Rk , then each isometry b ∈ Stk (V ) is uniquely
associated to an (ordered) k-frame in V . We will work with this definition henceforth. For
k = 1, we have that Stk (V ) is the set of 1-frames in V , which are just the unit vectors. So
St1 (V ) = S n−1 sitting inside V . If k = n, then Stk (V ) is just the set of n-frames in V ,
which is O(n).

For each k, define πkGr : Stk (V ) → Grk (V ) by taking a k-frame and mapping it to its
span. This is some k-dimensional subspace W of V , so that W ∈ Grk (V ) as desired. We
now want to find a local trivialization. Fix W ∈ Grk (V ) and let U be the set of “nearby”
k-dimensional subspaces. One way of viewing this is the set of k-dimensional subspaces
which trivially intersect W ⊥ . This way, W projects surjectively onto each such subspace.
What is (πkGr )−1 (W )? This is just the set of k-frames which span W . But, we can identify
W with Rk and each k-frame with a k-frame in Rk . In this way, (πkGr )−1 (W ) = Stk (Rk ) =
O(k).
We now construct ϕ : U × (πkGr )−1 (W ) → (πkGr )−1 (U ). Choose some W 0 ∈ U and an
orthonormal basis F = {f1 , ..., fk } of W (together, this is just a point in the domain). We
can project F onto W 0 and get a basis. By applying Gram-Schmidt (a smooth process), we
get an orthonormal basis F 0 = {f10 , ..., fk0 }. Define ϕ(W 0 , F ) = F 0 . The inverse map ϕ−1 (F 0 )
is given as follows: F 0 spans some subspace W 0 ∈ U , and we can find a basis F for which
applying the above procedure yields F 0 (I think F may be obtained by projecting F 0 onto
W and applying Gram-Schmidt). Since Gram-Schmidt is a specific algorithm, this basis F

is unique, and the inverse is well defined.

We can also construct a map πkSt : Stk (V ) → Stk−1 (V ) which takes some k-frame {f1 , ..., fk }
and maps it to {f1 , ..., fk−1 } (so, we forget the last vector). For any (k − 1)-frame F =
{f1 , ..., fk−1 } in Stk−1 (V ), we have that (πkSt )−1 (F ) is the set of completed k-frames. Each
of these is uniquely associated to a unit vector orthogonal to all of f1 , ..., fk−1 . So, we write
(πkSt )−1 (F ) = {f˜ ∈ V | f˜ is a unit vector and orthogonal to f1 , ..., fk−1 }.
I unfortunately do not have time to finish this problem, but I think the idea is to use the
Gram-Schmidt idea above for the complement.

TA comment: Part a: you’ve shown that the map is surjective, but what about local triviality?

Now that I’ve taken differential geometry, I know that a) is actually a principal G-bundle with struc-
ture group On−1 . In any case, I constructed the local trivialization. I defined a map ϕ : U × On−1 →
On . I then showed that π ◦ ϕ(u, B) = u for any B ∈ On−1 . In other words, ϕ : U × On−1 → π −1 (U ).
Since On is a lie group, the map ϕ (which is defined just as a group action) is smooth. So, it’s a
local trivialization.

Problem 2.
a) Consider the diagram

            E
            |
            | π
            v
   M′ --f--> M

in which π is a fiber bundle and f a smooth map. Construct E′, π′, f̃ in the diagram

   E′ --f̃--> E
   |          |
   | π′       | π
   v          v
   M′ --f--> M

so that the diagram commutes and f̃ restricts to a diffeomorphism (π′)−1 (m′) → π −1 (f (m′))
for all m′ ∈ M′. Show that π′ is a fiber bundle. It is the pullback of π along f .
b) Let M be a smooth manifold and πi : Ei → M , i = 1, 2 be fiber bundles. Construct a fiber
bundle π : E1 ×M E2 → M , the fiber product of π1 and π2 , whose fibers are the Cartesian
products of the fibers of π1 and π2 .
c) Prove that if f : X → Y is a smooth map, then the differential df = f∗ : T X → T Y is also
a smooth map. (Check in charts.)

Solution: Let us look at some general theory for a moment. First, given a vector space V and
subspaces V1 , V2 , then
V ⊕ V = Diag(V × V ) + (V1 ⊕ V2 ) iff V = V1 + V2
where Diag(V × V ) = {(v, v) | v ∈ V }. To see this, first suppose the former equality and let v ∈ V .
Then there exists v′ ∈ V , v1 ∈ V1 , and v2 ∈ V2 such that
(v, 0) = (v′, v′) + (v1 , v2 ) = (v1 + v′, v2 + v′).
Consequently, v2 + v′ = 0 and v = v1 + v′ = v1 − v2 . Hence, v ∈ V1 + V2 . Essentially the
same proof can be used in reverse to show that if v ∈ V then (v, 0) ∈ Diag(V × V ) + (V1 ⊕ V2 ).
By symmetry, (0, v′) ∈ Diag(V × V ) + (V1 ⊕ V2 ) for any v′ ∈ V . Together, this shows that
V ⊕ V ⊂ Diag(V × V ) + (V1 ⊕ V2 ), where the reverse inclusion is trivial (it is a subspace).

Next, let X, Y, Z be smooth manifolds and f : Y → X, g : Z → X smooth. Say that f ⋔ g
iff for every y ∈ Y and z ∈ Z such that p = f (y) = g(z),
Tp X = dfy (Ty Y ) + dgz (Tz Z).
I claim that f ⋔ g iff f × g : Y × Z → X × X is transverse to Diag(X × X). Let (y, z) ∈
(f × g)−1 (Diag(X × X)). Suppose that f × g ⋔ Diag(X × X). Then, by transversality
T(f ×g)(y,z) (X × X) = T(f ×g)(y,z) Diag(X × X) + d(f × g)(y,z) (T(y,z) (Y × Z)).
Since (y, z) ∈ (f × g)−1 (Diag(X × X)), we have p = f (y) = g(z). Next, from a previous homework
T(y,z) (Y × Z) = Ty Y × Tz Z = Ty Y ⊕ Tz Z
since the direct product is over finitely many vector spaces. Similarly for T(p,p) (X ×X) = Tp X ⊕Tp X.
Hence,
Tp X ⊕ Tp X = T(p,p) Diag(X × X) + d(f × g)(y,z) (Ty Y ⊕ Tz Z)
= T(p,p) Diag(X × X) + (dfy (Ty Y ) ⊕ dgz (Tz Z)).
From another previous homework, we have
T(p,p) Diag(X × X) = Diag(Tp X × Tp X) = Diag(Tp X ⊕ Tp X).
In turn,
Tp X ⊕ Tp X = Diag(Tp X ⊕ Tp X) + (dfy (Ty Y ) ⊕ dgz (Tz Z)).
This is exactly the form of the previous vector space lemma we proved. It follows that
Tp X = dfy (Ty Y ) + dgz (Tz Z),
which is to say that f ⋔ g. All of the above steps can be reversed to prove the converse.

From this, we obtain the following: Let πi : Ei → M , i = 1, 2 be fiber bundles. Define E1 ×M E2 =


{(e1 , e2 ) | π1 (e1 ) = π2 (e2 )}. Then, E1 ×M E2 = (π1 × π2 )−1 (Diag(M × M )). Since π1 , π2 are
submersions, they are transverse to each other. Hence, π1 × π2 ⋔ Diag(M × M ). We then get that
E1 ×M E2 is a submanifold of E1 × E2 .

Note that the above really only needed that one of π1 , π2 was a submersion to conclude that π1 ⋔ π2 .
Therefore, just considering one fiber bundle π : E → M and a smooth map f : M′ → M , we also get
that E′ := {(m′, e) ∈ M′ × E | f (m′) = π(e)} is a submanifold. To see this, we have that π is a
submersion so that π ⋔ f , and thus f × π ⋔ Diag(M × M ). Consequently, E′ = (f × π)−1 (Diag(M × M ))
is a submanifold of M′ × E.
a) Define E 0 as the subset E 0 = {(m0 , e) ∈ M 0 × E | f (m0 ) = π(e)}. With π 0 : E 0 → M 0 defined
by projection onto the first coordinate, we have f ◦ π 0 (m0 , e) = f (m0 ) = π(e). So, it suffices
to take f˜ : E 0 → E by projection onto the second coordinate. This makes the diagram
commute. From the earlier discussion, E 0 is a smooth manifold so that f˜ and π 0 are smooth
(as projections are). Note that (π 0 )−1 (m0 ) is the set of (m0 , e) ∈ E 0 such that π(e) = f (m0 );
this is just a condition on the e since m0 is fixed. Similarly, π −1 (f (m0 )) is the set of e such
that π(e) = f (m0 ). Clearly f˜ restricts to a diffeomorphism.

Let p ∈ M 0 . Consider f (p), which has some neighborhood U which locally trivializes π.
That is, there exists a diffeomorphism ϕ making the following diagram commute.
π^{-1}(U) --ϕ--> U × π^{-1}(f(p))
     \                /
    π \              / Pr1
       v            v
             U
Set U 0 = f −1 (U ). We construct ϕ0 : (π 0 )−1 (U 0 ) → U 0 × (π 0 )−1 (p). Since π 0 is just a
projection, (π 0 )−1 (p) is the set of points (p, w) such that π(w) = f (p). So, (p, w) ∈ (π 0 )−1 (p)
when w ∈ π −1 (f (p)). Define ϕ0 by
ϕ0 (q, v) = (q, (p, Pr2 ◦ϕ(v)))

where v ∈ E is such that π(v) = f (q). It follows that Pr2 ◦ϕ(v) ∈ π −1 (f (p)), and so
(p, Pr2 ◦ϕ(v)) ∈ (π 0 )−1 (p) as desired. The inverse is given by
(ϕ0 )−1 (q, (p, w)) = (q, ϕ−1 (f (q), w))
where w ∈ E is such that π(w) = f (p). Then,
ϕ0 ◦ (ϕ0 )−1 (q, (p, w)) = ϕ0 (q, ϕ−1 (f (q), w)) = (q, (p, Pr2 ◦ϕ(ϕ−1 (f (q), w)))) = (q, (p, w)).
Similarly,
(ϕ0 )−1 ◦ ϕ0 (q, v) = (ϕ0 )−1 (q, (f (q), Pr2 ◦ϕ(v))) = (q, ϕ−1 (f (q), Pr2 ◦ϕ(v))).
We must now check that v = ϕ−1 (f (q), Pr2 ◦ϕ(v)), or equivalently ϕ(v) = (f (q), Pr2 ◦ϕ(v)).
The second coordinates certainly agree, so we check the first. By the local triviality condi-
tion, Pr1 ◦ϕ(v) = f (q). All of the above are smooth coordinate wise, as a composition of
projections and other smooth functions. Hence,
ϕ0 : (π 0 )−1 (U 0 ) → U 0 × (π 0 )−1 (p), with Pr1 ◦ ϕ0 = π 0 on (π 0 )−1 (U 0 ),
is a local trivialization.
b) By applying a) with f : M 0 → M and π : E → M the fiber bundles πi : Ei → M , we get
a commutative square: Pr1 : E1 ×M E2 → E1 and Pr2 : E1 ×M E2 → E2 , together with π1 : E1 → M and π2 : E2 → M , satisfying π1 ◦ Pr1 = π2 ◦ Pr2 =: π.

The induced map π is our fiber bundle. Let p ∈ M . By passing to a common open set if
necessary, there exists an open U which trivializes πi : Ei → M via ϕi . We construct a
diffeomorphism ϕ : π −1 (U ) → U × π1−1 (p) × π2−1 (p) such that Pr1 ◦ ϕ = π on π −1 (U ).
An element of π −1 (U ) is a pair (v, w) ∈ E1 × E2 such that π1 (v) = π2 (w) = q ∈ U . Define
ϕ by
ϕ(v, w) = (q, Pr2 ◦ϕ1 (v), Pr2 ◦ϕ2 (w))
with inverse
ϕ⁻¹(q, (v, w)) = (ϕ1⁻¹(q, v), ϕ2⁻¹(q, w)).
We have that π1 ◦ ϕ1⁻¹(q, v) = Pr1 (q, v) = q by local triviality, and similarly for the second
coordinate. So, ϕ−1 maps into the correct space. We now check that they are inverses
ϕ−1 ◦ ϕ(v, w) = ϕ−1 (q, Pr2 ◦ϕ1 (v), Pr2 ◦ϕ2 (w))
= (ϕ1⁻¹(q, Pr2 ◦ϕ1 (v)), ϕ2⁻¹(q, Pr2 ◦ϕ2 (w))) = (v, w)

where the last step uses local triviality. Also,


ϕ ◦ ϕ⁻¹(q, (v, w)) = ϕ(ϕ1⁻¹(q, v), ϕ2⁻¹(q, w)) = (q, Pr2 ◦ϕ1 (ϕ1⁻¹(q, v)), Pr2 ◦ϕ2 (ϕ2⁻¹(q, w))) = (q, (v, w))

Once more, these are both smooth maps. We have therefore found a local trivialization.

TA comment: Nice!

Problem 4.
a) Let V be a finite dimensional real vector space. Recall that an inner product on V is a
function h−, −i : V × V → R which is linear in each variable separately, symmetric, and
positive definite: for ξ, ξ1 , ξ2 , η ∈ V and λ ∈ R we have
hξ1 + λξ2 , ηi = hξ1 , ηi + λhξ2 , ηi
hξ, ηi = hη, ξi
hξ, ξi > 0 if ξ 6= 0.
b) Show that the space of maps V × V → R which satisfy the first two equations above is a
vector space. What is its dimension (in terms of dim V )? Show that the subset of maps
which in addition satisfy the positive definiteness condition is convex.
c) A Riemannian metric on a smooth manifold X is a smoothly varying assignment of inner
products on the tangent spaces Tp X. How do we formalize “smoothly varying” in the
previous sentence?
d) Construct a Riemannian metric on U ⊂ X if U is the domain of a coordinate chart
(U ; x1 , ..., xn ).
e) Use a partition of unity to construct a Riemannian metric on X.

Solution:
a) Maybe I’m missing something, but nothing is actually asked in part a).
b) Let V denote the space of function V × V → R satisfying the first two conditions. We define
+ : V × V → V by
(T1 + T2 )(ξ, η) = T1 (ξ, η) + T2 (ξ, η)
which is indeed in V . We define × : R × V → V by
(λT )(ξ, η) = λ(T (ξ, η))
which is also in V . One can check that, with these maps, V is indeed a vector space. The
zero map T (ξ, η) = 0 acts as the zero element.

Let {e1 , ..., en } be a basis for V , so that if ξ ∈ V then ξ = ξ i ei . For T ∈ V define a


matrix M ∈ Mn×n (R) by Mij = T (ei , ej ). Then, M is a symmetric matrix. Conversely, for
every symmetric matrix M we can define T ∈ V by T (ei , ej ) = Mij and extending to all of
V × V linearly. So, V has the same dimension as Symn×n (R), i.e. n(n + 1)/2.

Let VS denote the subset of V which are positive definite. If T1 , T2 ∈ VS then for any
t ∈ [0, 1],
((1 − t)T1 + tT2 )(ξ, η) = (1 − t)T1 (ξ, η) + tT2 (ξ, η)
= (1 − t)T1 (η, ξ) + tT2 (η, ξ) = ((1 − t)T1 + tT2 )(η, ξ).
Hence, (1 − t)T1 + tT2 ∈ VS for all t ∈ [0, 1]. That is, VS is convex.
c) For each p ∈ X the Riemannian metric g assigns an inner product h−, −ip on Tp X. For this
to vary smoothly, we mean that if (U, x) is a chart around p ∈ X then the n2 maps
gij (p) = ⟨ ∂/∂xi |p , ∂/∂xj |p ⟩p

defined on U are smooth. A natural question to ask is if this property depends on the choice
of coordinates used. But, it does not. We can rewrite the above as
gij (p) = ⟨dxq⁻¹(ei ), dxq⁻¹(ej )⟩p

where q = x(p) and ei is the ith basis vector of Rn . Let (V, y) be another chart around p.
We wish to show that if gij (p) is smooth on U ∩ V then
hij (p) = hdyr−1 (ei ), dyr−1 (ej )ip

is smooth on U ∩ V ; here, r = y(p). Note that


dyr−1 = d(x−1 ◦ x ◦ y −1 )r = dxq⁻¹ ◦ d(x ◦ y −1 )r
so that
hij (p) = ⟨dxq⁻¹ ◦ d(x ◦ y −1 )r (ei ), dxq⁻¹ ◦ d(x ◦ y −1 )r (ej )⟩p .
Since d(x◦y −1 ) varies smoothly with r, and by definition gij is smooth, we see that hij is too.

As a side remark, that the space VS above is convex implies that the set of Riemannian
metrics on a manifold is also a convex set.
d) Define hv, wip by hdxp (v), dxp (w)i, where h−, −i is the standard inner product on Rn . Then,
the local representation of g is
gij (p) = ⟨dxq⁻¹(ei ), dxq⁻¹(ej )⟩p = ⟨dxp ◦ dxq⁻¹(ei ), dxp ◦ dxq⁻¹(ej )⟩ = ⟨ei , ej ⟩

which are clearly smooth.


e) Let {ϕα } be a partition of unity subordinate to the covering {(Uα , xα )}α∈A of X by coordi-
nate charts. That is,
i) Each ϕα is smooth.
ii) The collection of supports {spt(ϕα )}α∈A is locally finite. This means for each p ∈ M
there is a neighborhood U such that U ∩ spt(ϕα ) ≠ ∅ for only finitely many α.
iii) For all p ∈ M , ϕα (p) ≥ 0 and Σα∈A ϕα (p) = 1.
Now, use part d) to construct a Riemannian metric g α on each Uα . Then, for v, w ∈ Tp X
define
g(v, w) = ⟨v, w⟩p = Σα∈A ϕα (p)g α (v, w).
Local finiteness guarantees that, for each p, the above sum is finite in a neighborhood of p.
So, each local representation of g is smooth. As a sum of inner products, it is a bilinear
symmetric form. Hence, we only need check positive definiteness. If v ∈ Tp X is a nonzero
vector, then
g(v, v) = Σα∈A ϕα (p)⟨v, v⟩_p^α
where each ⟨v, v⟩_p^α is positive since the ⟨−, −⟩_p^α are positive definite. Now, since Σα∈A ϕα (p) =
1 and all the ϕα (p) are nonnegative, it follows that at least one of them is strictly positive. Hence,
g(v, v) > 0.

Or, you can use Whitney embedding to embed X in some AN and then equip X with
the induced metric from the Euclidean metric (realize Tp X as a subset of RN , then h−, −ip
is just the restriction of the Euclidean inner product to Tp X).

TA comment: part b: the proof that the positive definite ones form a convex subspace appears to
be missing a step. part c: your side remark is true but there’s an argument to make here part e:
I’m glad you gave both proofs. The Whitney embedding argument works, but is a lot harder to
generalize (e.g. to metrics on vector bundles other than T M ).

What did I do for b)...? Did I prove that the space of symmetric forms is convex? In any case, the
right work should be
((1 − t)T1 + tT2 )(ξ, ξ) = (1 − t)T1 (ξ, ξ) + tT2 (ξ, ξ) > 0
since T1 , T2 are positive definite.
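As a quick numerical sanity check of this convexity argument, here is a small numpy sketch I'm adding (the two matrices are arbitrary symmetric positive definite examples, thought of as Gram matrices of inner products in a fixed basis):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3)); T1 = A @ A.T + np.eye(3)   # symmetric positive definite
    B = rng.normal(size=(3, 3)); T2 = B @ B.T + np.eye(3)   # symmetric positive definite

    # every convex combination (1 - t)T1 + tT2 should again be positive definite
    for t in np.linspace(0, 1, 11):
        T = (1 - t) * T1 + t * T2
        assert np.all(np.linalg.eigvalsh(T) > 0)
    print("all convex combinations are positive definite")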
Problem 6. Let M be a smooth manifold and π : T ∗ M → M its cotangent bundle. Introduce the
notation
Ω0 (M ) = {f : M → R smooth}
Ω1 (M ) = {sections of π : T ∗ M → M }

Construct vector space structures on each of these sets. Show that the differential is a linear map
d : Ω0 (M ) → Ω1 (M ).
We will soon construct “higher” versions of these vector spaces and of the differential.

Solution: A vector space structure on Ω0 (M ) is easy to construct. The maps + : Ω0 (M ) × Ω0 (M ) →


Ω0 (M ) and × : Ω0 (M ) × R → Ω0 (M ) are defined by
(f1 + f2 )(p) = f1 (p) + f2 (p)
(λf )(p) = λf (p)
for f1 , f2 , f ∈ Ω0 (M ) and λ ∈ R. Clearly the above maps are smooth (work in charts and use the fact
that the sum of smooth affine functions is smooth, and multiplication by a constant is smooth). The
constant map f ≡ 0 acts as the additive identity. Now let us deduce what a section of π : T ∗ M → M
is. A section is a smooth map s : M → T ∗ M , which assigns to each point p a cotangent vector (that
is, a vector in the dual space Tp∗ M , a linear map from Tp M → R). So, we define +, × analogously,
(s1 + s2 )(p) = s1 (p) + s2 (p)
(λs)(p) = λs(p)
for s1 , s2 , s ∈ Ω1 (M ) and λ ∈ R. Since s1 (p), s2 (p) ∈ Tp∗ M , their sum makes sense and is a vector in Tp∗ M . Similarly, multiplication by a constant makes sense. As above, these operations yield
smooth maps. The additive identity is the section s : M → T ∗ M giving the zero vector in each Tp∗ M .

We now show that the differential is a map Ω0 (M ) → Ω1 (M ). First, given f ∈ Ω0 (M ) we


have df : T M → T R ' R. Next, we can view df as df (p, v) = dfp (v), where (p, v) ∈ T M .
This identification yields a linear map dfp : Tp M → R, which is exactly an element of Tp∗ M . So,
df : M → (Tp M → R) = M → Tp∗ M , and is a section of π : T ∗ M → M so long as df is smooth.
But, it is by Problem 2c). To see that d is linear, note that for f1 , f2 ∈ Ω0 (M ) and λ ∈ R
d(f1 + λf2 )p (v) = d/dt|t=0 (f1 + λf2 )(α(t)) = d/dt|t=0 f1 (α(t)) + λ d/dt|t=0 f2 (α(t)) = d(f1 )p (v) + λd(f2 )p (v)
where α : (−ε, ε) → M is such that α(0) = p and α′(0) = v ∈ Tp M . It follows that
d(f1 + λf2 )(p, v) = d(f1 + λf2 )p (v) = d(f1 )p (v) + λd(f2 )p (v) = df1 (p, v) + λdf2 (p, v)
so that d is linear.

TA comment: Good work!
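As a small coordinate sanity check of the linearity of d, here is a sympy sketch I'm adding (the functions f1, f2 and the chart (x, y) are arbitrary choices of mine):

    import sympy as sp

    x, y = sp.symbols('x y')
    f1 = x * y**2
    f2 = sp.sin(x) + y
    lam = 3

    # in the chart (x, y), df is the row of partial derivatives
    d = lambda f: sp.Matrix([[sp.diff(f, x), sp.diff(f, y)]])

    print(sp.simplify(d(f1 + lam * f2) - (d(f1) + lam * d(f2))))   # zero row vector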



Midterm (100/110 points)


Problem 1. (10/10 points). Consider the map γ : (−1, 1) → S 2 into the unit sphere S 2 ⊂ A3 defined
by
γ(t) = (t/2, √(1 − t2 /2), −t/2)
a) Show that γ is an immersion. Is the image a submanifold?
b) Introduce spherical coordinates on S 2 so that the point γ(0) is in your coordinate system.
Express the derivative γ 0 (0) ∈ Tγ(0) S 2 in your coordinate system.

Solution:
a) The differential at a point p ∈ (−1, 1) is computed as
dγp (v) = (v/2, −vp/(2√(1 − p2 /2)), −v/2)
where v ∈ R. The above is clearly injective so that γ is an immersion. I claim that the
image is a submanifold. To show this, we find a submanifold chart. Introduce the coordinate
system on S 2 by
(x, y, z) = cos(θ) cos(ϕ)(−√2/2, 0, √2/2) + sin(θ) cos(ϕ)(0, 1, 0) + sin(ϕ)(√2/2, 0, √2/2)

where θ ∈ (0, 2π) and ϕ ∈ (−π/2, π/2). This is just a rotation of the standard spherical
coordinate system. Now consider when ϕ = 0. This gives
(x, y, z) = (−√2/2 cos(θ), sin(θ), √2/2 cos(θ)).

I claim that the image of γ(t) lies on this slice of S 2 . To see this, fix a point γ(t). We can
solve t/2 = −√2/2 cos(θ) and get θ = arccos(−t/√2). Since t ∈ (−1, 1), θ is well defined.
We find then that √2/2 cos(θ) = −t/2. Now,
sin(θ) = sin(arccos(−t/√2)) = √(1 − (−t/√2)2 ) = √(1 − t2 /2)
as desired. So, in the (θ, ϕ) coordinate system, the image of γ is just the line from
((−π/4, 0), (π/4, 0)). Hence, we have found a submanifold chart. You can also show that γ is
an embedding, which may be easier, but in working that out I stumbled across this argument.

Dan’s comment: Good job. You could have used these coordinates in (b) as well!
b) We introduce spherical coordinates on S 2 by
x = cos(θ) cos(ϕ)
y = sin(θ) cos(ϕ)
z = sin(ϕ)
where θ ∈ (0, 2π) and ϕ ∈ (−π/2, π/2). Note that this covers S 2 except for an arc connecting
(1, 0, 0) and (0, 0, ±1). So, it covers the entire image of γ, in particular γ(0) = (0, 1, 0). This
corresponds to the point (π/2, 0) in coordinates. We now compute ∂/∂θ|γ(0) and ∂/∂ϕ|γ(0) .
First look at the curve
α(θ) = (cos(θ), sin(θ), 0).
We have that
∂/∂θ|γ(0) = α′(π/2) = (−1, 0, 0).
Similarly, look at the curve
β(ϕ) = (0, cos(ϕ), sin(ϕ)).

Then,
∂/∂ϕ|γ(0) = β′(0) = (0, 0, 1).
Since the derivative is
γ′(t) = (1/2, −t/(2√(1 − t2 /2)), −1/2),
the velocity at t = 0 is γ′(0) = (1/2, 0, −1/2). Thus,
γ′(0) = −1/2 ∂/∂θ|γ(0) − 1/2 ∂/∂ϕ|γ(0) .
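Here is a small sympy check of these computations, which I'm adding for reassurance (the variable names are mine):

    import sympy as sp

    t, th, ph, a, b = sp.symbols('t th ph a b')
    gamma = sp.Matrix([t/2, sp.sqrt(1 - t**2/2), -t/2])
    assert sp.simplify(gamma.dot(gamma) - 1) == 0           # gamma(t) lies on S^2

    v = gamma.diff(t).subs(t, 0)                            # velocity (1/2, 0, -1/2)

    chart = sp.Matrix([sp.cos(th)*sp.cos(ph), sp.sin(th)*sp.cos(ph), sp.sin(ph)])
    d_th = chart.diff(th).subs({th: sp.pi/2, ph: 0})        # (-1, 0, 0)
    d_ph = chart.diff(ph).subs({th: sp.pi/2, ph: 0})        # (0, 0, 1)

    print(sp.solve(list(a*d_th + b*d_ph - v), [a, b]))      # {a: -1/2, b: -1/2}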

Problem 2. (10/10 points). Suppose U ⊂ An is an open set and f : U → R a smooth function.


Define the graph of f as a subset of An+1 and prove that it is a submanifold.

Solution: The graph of f is the subset defined as


Graph(f ) = {(p, f (p)) | p ∈ U } ⊂ An+1 .
To show that Graph(f ) is a submanifold, we find an embedding. Define F : U → An+1 by F (p) =
(p, f (p)). So, the image of F is Graph(f ). We must show that F is an injective immersion which is
a homeomorphism onto its image. First, observe that Pr1 ◦F = IdU . Since the identity is injective,
it follows that F is too. Next, by the chain rule
d(Pr1 )(p,f (p)) ◦ dFp = IdTp U .
Suppose ξ ∈ Tp U is such that dFp (ξ) = 0. Then d(Pr1 )(p,f (p)) (dFp (ξ)) = 0. But, by the identity above
this is equal to IdTp U (ξ) = ξ, so that the kernel of dFp is trivial. Hence, dFp is injective, and
we have an injective immersion. Define G : Graph(f ) → U by G = Pr1 |Graph(f ) . Then,
G ◦ F (p) = Pr1 |Graph(f ) (p, f (p)) = p
F ◦ G(q) = F ◦ Pr1 (p, f (p)) = F (p) = (p, f (p)) = q
where q = (p, f (p)) for some p ∈ U . Hence, G is the inverse of F . Since the components of F are
continuous, it is continuous. Since G is a restriction of a projection, it too is continuous. Hence F
is a homeomorphism onto Graph(f ).

Dan’s comment: well done

Problem 3. (14/15 points). Consider the equation


(x1 )2 + (x2 )2 = (x3 )2 + (x4 )2 .
Interpret (x1 , x2 , x3 , x4 ) as a vector in R4 and explain how the equation defines a subset S of RP3 .
Prove that S is a submanifold of RP3 . Find a familiar smooth manifold diffeomorphic to S.

Solution: Let us take a step back and look at the analogous equation
(x1 )2 = (x2 )2
where (x1 , x2 ) ∈ R2 . Note that this is a homogeneous equation, meaning if (x1 , x2 ) is a solution
then (λx1 , λx2 ) is a solution for any λ. Consequently, the solution set consists of lines. Each line is
uniquely defined by a point in RP1 , and there are two such solutions. So, S is just a copy of S 0 in RP1 .

We can apply similar logic here to show that S is a subset of RP3 . Notably, the given equation is ho-
mogeneous so that any solution (x1 , x2 , x3 , x4 ) gives rise to a set of solutions (λx1 , λx2 , λx3 , λx4 ).
These are, of course, lines in R4 and give rise to a single point in RP3 .

Before continuing, consider the following: let (x1 , x2 , x3 , x4 ) ∈ S 3 be a solution to the equation.
Then, (x4 )2 = 1 − (x1 )2 − (x2 )2 − (x3 )2 , and therefore
(x1 )2 + (x2 )2 = (x3 )2 + 1 − (x1 )2 − (x2 )2 − (x3 )2 → (x1 )2 + (x2 )2 = 1/2
Consequently, (x3 )2 + (x4 )2 = 1/2 too. We will use this fact several times later on.

To see that S is a submanifold, we look at the map f : RP3 → R defined by


f ([x1 , x2 , x3 , x4 ]) = [(x1 )2 + (x2 )2 − (x3 )2 − (x4 )2 ] / [(x1 )2 + (x2 )2 + (x3 )2 + (x4 )2 ].
Observe that f is well defined (that is, it does not depend on the chosen representative) as a quotient
of homogeneous polynomials of the same degree. Note that f = 0 if and only if [x1 , x2 , x3 , x4 ] ∈
S ⊂ RP3 . So, we show that S is a submanifold by showing that 0 is a regular value. Recall that the
tangent space at p of an abstract manifold X consists of tuples of vectors in Rn , one for each coordi-
nate system containing p. Because RP3 is not embedded, when computing the differential we really
have to compute it in coordinate systems. But, the final result does not depend on the choice of co-
ordinates. Hence, the differential is surjective if and only if it is surjective in some coordinate system.

An atlas on RPn is given as follows: Define Ui as


Ui = {[x1 , ..., xn+1 ] | xi 6= 0}.
Note that the Ui cover RPn – the only point that would be missed is [0, ..., 0], but this is not an
element of RPn , since we realize it as a quotient of Rn+1 \ 0 (or S n ). Next, define ϕi : Ui → Rn by
ϕi ([x1 , ..., xn+1 ]) = (x1 /xi , ..., xi−1 /xi , xi+1 /xi , ..., xn+1 /xi )
which is well defined since each component is quotient of homogeneous polynomials of the same
degree. The inverse is given by
ϕi⁻¹(y 1 , ..., y n ) = [y 1 , ..., y i−1 , 1, y i , ..., y n ].
Returning to our specific case, let p ∈ RP3 be such that, say, x1 6= 0. Then, we compute
d(f ◦ ϕ1⁻¹)ϕ1 (p) = ∂(f ◦ ϕ1⁻¹)/∂y j |ϕ1 (p) dy j |ϕ1 (p)
where we have imposed standard coordinates (y 1 , y 2 , y 3 ) on R3 . First, the composition is
(f ◦ ϕ1⁻¹)(y) = [1 + (y 1 )2 − (y 2 )2 − (y 3 )2 ] / [1 + (y 1 )2 + (y 2 )2 + (y 3 )2 ].
The first partial derivative of this is
∂(f ◦ ϕ1⁻¹)/∂y 1 = ∂/∂y 1 [ 1 − (2(y 2 )2 + 2(y 3 )2 )/(1 + (y 1 )2 + (y 2 )2 + (y 3 )2 ) ]
= 4y 1 ((y 2 )2 + (y 3 )2 )/(1 + (y 1 )2 + (y 2 )2 + (y 3 )2 )2 .
The computations of the other partial derivatives are similar. In total,
d(f ◦ ϕ1⁻¹)q = [4/(1 + (q 1 )2 + (q 2 )2 + (q 3 )2 )2 ] ( q 1 ((q 2 )2 + (q 3 )2 ) dyq1 − (1 + (q 1 )2 )(q 2 dyq2 + q 3 dyq3 ) )
where q = (q 1 , q 2 , q 3 ) ∈ ϕ1 (U1 ). Now suppose q ∈ ϕ1 (U1 ) is such that p = ϕ1⁻¹(q) = [1, q 1 , q 2 , q 3 ] ∈
S. Then,
1 + (q 1 )2 = (q 2 )2 + (q 3 )2
and substituting this into the above gives (up to the positive overall factor)
d(f ◦ ϕ1⁻¹)q = q 1 dyq1 − q 2 dyq2 − q 3 dyq3

Now, since we have 1 + (q 1 )2 = (q 2 )2 + (q 3 )2 , it must be that one of q 2 , q 3 is nonzero. Hence,


d(f ◦ ϕ1⁻¹)q is never the zero map, and since the tangent space to R is a one dimensional vector
space, it must be surjective.
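A quick symbolic spot-check of the differential computation (a sympy sketch I'm adding; the sample point (0, 1, 0) in the chart corresponds to [1, 0, 1, 0] ∈ S):

    import sympy as sp

    y1, y2, y3 = sp.symbols('y1 y2 y3')
    F = (1 + y1**2 - y2**2 - y3**2) / (1 + y1**2 + y2**2 + y3**2)   # f o phi_1^{-1}
    dF = sp.Matrix([[sp.diff(F, v) for v in (y1, y2, y3)]])

    # on the chart image of S we have 1 + y1^2 = y2^2 + y3^2; at (0, 1, 0):
    print(dF.subs({y1: 0, y2: 1, y3: 0}))   # (0, -1, 0), a nonzero row, so dF is onto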

Of course, the above only accounts for those p ∈ S with nonzero first component. For the oth-
ers, we do the same computation for ϕi , i = 2, 3, 4. The resulting differential is virtually the same,
up to some changes in sign.

I will now describe S as a subset first of S 3 . First, by the remark earlier, if (x1 , x2 , x3 , x4 ) is a solution
lying on S 3 , then (x1 )2 + (x2 )2 = 1/2 = (x3 )2 + (x4 )2 . Consequently, S is precisely (1/√2)S 1 × (1/√2)S 1 ,
a torus. We can parametrize it by X(u, v) = (1/√2)(cos u, sin u, cos v, sin v) where u, v ∈ [0, 2π). Upon
identifying antipodal points, we identify
(1/√2)(cos u, sin u, cos v, sin v) ∼ (1/√2)(− cos u, − sin u, − cos v, − sin v)
so that u ∼ u + π and v ∼ v + π. So, really we’re just identifying antipodal points in each copy of
S 1 separately. Hence, the quotient in RP3 is a torus. (Remark: I think I have actually seen this
before? I’ve learned a little about the Wilmore conjecture, and I think this is the Clifford torus/flat
torus).

Dan’s comment: I’m not sure what you mean by “separately” here: there is the action by a single
involution, not a product group generated by two involutions. The preimage in S 3 is a Clifford
torus. Need better argument at the end.

What I meant by “separately” was that we have two copies of S 1 , and on each we’re identify-
ing by an antipodal action. I guess I should have just said (u, v) ∼ (u + π, v + π) to write it as a
single action so there wasn’t confusion, but I was trying to say that it’s not like the action in one
coordinate affects the action in the second.

Problem 4. (15/15 points).


a) Let X, Y, Z be smooth manifolds and f : X → Y , g : Z → Y smooth maps. Give a careful
definition which begins “f is transverse to g if...”. In case f is transverse to g, prove that
the fiber product W of f and g, defined by
W = {(x, z) ∈ X × Z | f (x) = g(z)}
is a manifold. What is its dimension? What is the tangent space at a point of W ? Note the
special cases where one or both of f and g is the inclusion of a submanifold.
b) Let F : X → X be a smooth map, and define f, g : X → X × X by f (x) = (x, x) and
g(x) = (x, F (x)). What condition on F guarantees that f and g are transverse? If so, what
is the fiber product of f and g?

Solution:
a) In this setup, we say that f t g (f is transverse to g) if for every x ∈ X and z ∈ Z such
that p = f (x) = g(z),
Tp Y = dfx (Tx X) + dgz (Tz Z).
Before continuing, we introduce a quick lemma about vector spaces: If V is a vector space
with V1 , V2 subspaces then
V ⊕ V = Diag(V × V ) + V1 ⊕ V2 iff V = V1 + V2
where Diag(V × V ) = {(v, v) | v ∈ V }. To see this, suppose the former. Then (v, 0) ∈ V ⊕ V ,
and there must exist v 0 ∈ V , v1 ∈ V1 , and v2 ∈ V2 such that
(v, 0) = (v 0 , v 0 ) + (v1 , v2 ).
By equating coordinates, we see that v 0 = −v2 and v = v 0 + v1 = v1 − v2 ∈ V1 + V2 . The con-
verse is similar: given (v, ṽ) ∈ V ⊕ V , write it as (v, 0) + (0, ṽ). If we can show each of these

can be written in the form (v 0 , v 0 ) + (v1 , v2 ), we are done. Doing it for the first term is essen-
tially the same as the above in reverse. For the second, note that the argument is symmetric.

Now, we recall that


T(x,z) (X × Z) = Tx X × Tz Z = Tx X ⊕ Tz Z
T(p,p) (Diag(Y × Y )) = Diag(Tp Y × Tp Y ) = Diag(Tp Y ⊕ Tp Y )
d(f × g)(x,z) (T(x,z) (X × Z)) = d(f × g)(x,z) (Tx X ⊕ Tz Z) = dfx (Tx X) ⊕ dgz (Tz Z).
All of the above come from previous homework assignments from Guillemin and Pollack.
Using these facts, I will prove that if f t g then f × g : X × Z → Y × Y is transverse to
Diag(Y × Y ). Note that (f × g)−1 (Diag(Y × Y )) = W . Hence, by transversality and the
fact that Diag(Y × Y ) is a submanifold of Y × Y , we find that W is a submanifold of X × Z.
Moreover, the codimension of W in X × Z is the codimension of Diag(Y × Y ) in Y × Y .
The latter is [dim Y + dim Y ] − dim Y = dim Y since Diag(Y × Y ) is diffeomorphic to Y .
So, dim(X × Z) − dim W = dim Y and dim W = dim X + dim Z − dim Y .

Now let’s prove that if f t g then f × g t Diag(Y × Y ). We want to show that if


(x, z) ∈ (f × g)−1 (Diag(Y × Y )) then
T(p,p) (Y × Y ) = T(p,p) (Diag(Y × Y )) + d(f × g)(x,z) (T(x,z) (X × Z))
where p = f (x) = g(z) (this equality holds by definition of the preimage). By definition of
f t g,
Tp Y = dfx (Tx X) + dgz (Tz Z)
where both dfx (Tx X) and dgz (Tz Z) are subspaces of Tp Y . Hence, by the vector space
lemma,
Tp Y ⊕ Tp Y = Diag(Tp Y ⊕ Tp Y ) + dfx (Tx X) ⊕ dgz (Tz Z).
Now, using our given facts we reduce this to
T(p,p) (Y × Y ) = T(p,p) (Diag(Y × Y )) + d(f × g)(x,z) (T(x,z) (X × Z))
as desired.

By the transversality theorem, we have that if p ∈ W then


Tp W = (d(f × g)p )⁻¹ (T(f ×g)(p) Diag(Y × Y )).

If I interpreted the above correctly, I think it says the following: The tangent space at
p = (x, z) of W corresponds to a tuple of velocity vectors. These velocity vectors are ob-
tained by looking at a curve α : (−ε, ε) → Diag(Y × Y ) which passes through (f (x), g(z)) at
t = 0. Then, since there is a diffeomorphism between Diag(Y × Y ) and Y , we can view α
as a curve in Y instead. Pulling back this curve into X and Z by f and g respectively gives
two curves. Computing the tangent vectors to these curves at x and z respectively give the
two velocity vectors we want.

Dan’s comment: Careful here; what does it mean to pullback curves?

I meant the preimage... I thought that’s what it means to pull back by a function.

I think if f, g are inclusions of submanifolds, then I think W is simply the set of inter-
section points of these submanifolds. That is, W ' X ∩ Z.

Dan’s comment: Yes


b) First, if p ∈ X we have that
dfp (Tp X) = Diag(Tp X × Tp X) = T(p,p) Diag(X × X)
dgp (Tp X) = Graph(dgp )

Then, f t g whenever

dfp (Tp X) + dgp (Tp X) = T(p,p) Diag(X × X) + Graph(dgp ) = Tp X × Tp X

for all p ∈ X where f (p) = g(p). Note that these are precisely the fixed points of F ! So, if
there are no fixed points then f t g vacuously. Now, we have that

dim(Diag(X × X)) = dim X = dim(Graph(dgp ))

while
dim(Tp X × Tp X) = dim X + dim X

so that f t g if and only if T(p,p) Diag(X × X) ∩ Graph(dgp ) = {0}. That is to say, dgp has no
nonzero fixed points. Since dgp is a linear map, this means that 1 is not an eigenvalue of dgp . Since
dgp = (IdTp X , dFp ), this implies that 1 is not an eigenvalue of dFp either. So, f t g either
when F has no fixed points or dFp has no nonzero fixed points. By definition,

W = {(x, x) ∈ X × X | F (x) = x}

where we have the condition F (x) = x since it is equivalent to f (x) = g(x). So,

W = Diag(Fix(F ) × Fix(F ))

where Fix(F ) is the set of fixed points of F .

Dan’s comment: good job

Problem 5. (6/15 points). Let Σ ⊂ A3 be a smooth compact 2-dimensional submanifold and γ :


(−ε, ε) → RP2 a smooth curve. Use the origin to identify A3 with R3 and write γ(t) = Lt , where
Lt ⊂ A3 is a line through the origin. Suppose that L0 intersects Σ transversely at p0 .
a) Prove that there is a neighborhood U ⊂ Σ of p0 and δ > 0 such that Lt intersects U ⊂ Σ
transversely at a single point pt for |t| < δ.
b) Prove that the curve t 7→ pt is smooth.

Solution:
a) As manifolds, both Σ and L0 are locally connected. Since Σ is compact, we have that L0 ∩ Σ
is compact and locally connected. Hence, L0 ∩ Σ consists of finitely many connected com-
ponents. In particular, we can find some R > 0 such that BR (p0 ) ∩ L0 ∩ Σ = {p0 }.

Dan’s comment: Why does this last statement follow? You could use transversality in-
stead.

I think this is what I was trying to get at. Either Σ intersects L0 tangentially (either
at a point or in an interval) or transversally. Since it’s the latter, we can find a sufficiently
small ball so that BR (p0 ) ∩ L0 ∩ Σ is just p0 , removing all of the other intersections.

For each Lt , we can parametrize Lt ∩ BR (p0 ) as αt (s) for s ∈ [−R, R], where αt0 (s) is
constant. This can be achieved by parametrizing Lt by arc length first and then using an
appropriate combination of translations and dilations. How I visualize this is by lining up
all the Lt ∩ BR (p0 ) over (−ε, ε). The sizes of these vary smoothly, but we can parameterize
each slice from −R to R. It’s like having a fiber bundle, see the below picture:

Dan’s comment: I agree with the conclusions, but you need a stronger argument.

Define F : (−ε, ε) × [−R, R] → A3 by F (t, s) = αt (s). So, for fixed t, Ft is just Lt ∩ BR (p0 ).
I claim that F is smooth (which depends heavily on how you parametrize earlier). Since
[−R, R] is compact and L0 ∩ BR (p0 ) is transverse to Σ there exists an η > 0 such that Lt is
transverse to Σ for |t| < η.

Dan’s comment: In the ball

Now, denote by L(p, q) the line passing through p, q ∈ A3 . Define L by


L = {L(p, q) | p, q ∈ Σ ∩ BR (p0 ), p 6= q}.
That is, L consists of all the lines passing through at least two points in Σ ∩ BR (p0 ). Of
course, each line defines a point in RP2 , so we may view L as a subset of RP2 . Consider
the complement, which part of γ(t) lies in. In particular, γ(0) lies in it.

Dan’s comment: What does this statement mean and why is it true?

So, for every pair of distinct points p, q ∈ Σ ∩ BR (p0 ) we can construct a line L(p, q) passing
through them. For each t ∈ (−ε, ε), γ(t) is a line Lt . We can ask if Lt lies in L or not.
If Lt intersects Σ ∩ BR (p0 ) at more than one point, then we can use these distinct points
to construct L(p, q) = Lt and conclude Lt ∈ L . So, Lt ∉ L if and only if Lt intersects
Σ ∩ BR (p0 ) at a singleton or not at all. By construction, L0 intersects Σ ∩ BR (p0 ) only at p0 ,
so L0 ∉ L . By continuity of γ, we conclude that there is an entire interval I around 0 such
that if t ∈ I then γ(t) ∉ L . This is what I mean by “part of γ(t)” lies in the complement.

Now take δ+ > 0 to be the smallest positive number such that γ(t) ∈ L for t just larger
than δ+ . Define δ− < 0 similarly. Let δ = min{|δ− |, δ+ , η}.

So, Lt intersects U = Σ ∩ BR (p0 ) at exactly one point for |t| < δ, and these intersections are
transversal.
b) I do not know exactly how to do this, but my idea is as follows: Consider how we parame-
terized the Lt earlier. I’ve drawn in an example Σ.

So, the intersection points pt give some curve β(t) in the right figure, defined on (−δ, δ).
That is, γ ◦ β(t) = pt . If β is smooth, then this composition is too.

Dan’s comment: You have given some good geometric ideas, but you need to use the tools we have
developed to make proofs out of them.

Everyone struggled with this problem, I was pretty happy with getting 6/15 points. The best I
could do was give a wishy-washy intuitive proof, so I knew I wasn’t really using the theorems devel-
oped in class.

Here’s how Dan suggested to do it in his solution writeup. The first thing to remember is we
have a submanifold, so let’s use that. We can find an open neighborhood U ⊂ R3 of p0 such that
Σ ∩ U is g −1 (0) for some g : U → R. Next, we can lift γ to a curve γ̃ in S 2 (there’s going to be two
choices, choose the one so that γ̃(0) = λp0 for some λ > 0). Next, define Γ : (−ε, ε) × R → R3 by
Γ(t, s) = sγ̃(t)
For each fixed t, the image Γt (s) is the line Lt we described before. For fixed s, we can imagine a
sphere of radius s and the intersection of all the Lt with this sphere. This will produce two curves
(since each line intersects at two points), one of which corresponds with the lift we chose previously.
So, Γs (t) is a curve in a sphere of radius s. This isn’t so important, but it’s helpful to get a geometric
picture of this map Γ before we start to work with it.

Now, we can restrict Γ to Γ−1 (U ). Considering s as a function of t, in order for Γ to map into
Σ it must be that
g(Γ(t, s(t))) = 0.
Checking that the partial derivative of g ◦ Γ with respect to s does not vanish at (0, |p0 |) [note: Dan
says (0, 0), but why would you want s = 0? Wouldn’t Γ(0, 0) be the origin, but we want it to be
p0 ?], we can use the implicit function theorem to find δ > 0 such that t 7→ s(t) exists and satisfies
g(Γ(t, s(t))) = 0. All this is doing is saying “Look at L0 , which passes through p0 in Σ. Then, move
L0 a little bit according to Lt . Keeping track of the intersection of Lt with Σ, we get a small curve
in Σ. That curve is Γ(t, s(t)).”

Finally, by reducing δ enough we can guarantee that the partial derivative of g ◦ Γ with respect
to s does not vanish at (t, s(t)) for any t ∈ (−δ, δ). The map t 7→ pt = Γ(t, s(t)) is our desired map,
and transversality follows from the fact that the partial derivative is nonzero.
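To make Dan's argument concrete, here is a small sympy example I'm adding, with Σ an ellipsoid and a particular lift γ̃ through p0 = (1, 0, 0) (all the specific choices here are mine, just for illustration):

    import sympy as sp

    t, s = sp.symbols('t s')
    g = lambda x, y, z: x**2 + y**2/4 + z**2/9 - 1     # Sigma = g^{-1}(0), an ellipsoid

    gamma = sp.Matrix([sp.cos(t), sp.sin(t), 0])       # a lift gamma~ with gamma~(0) = p0
    Gamma = s * gamma                                  # Gamma(t, s) = s * gamma~(t)
    G = g(*Gamma)

    # the partial derivative in s is nonzero at (t, s) = (0, 1) = (0, |p0|), so the
    # implicit function theorem gives t |-> s(t) with g(Gamma(t, s(t))) = 0
    print(sp.diff(G, s).subs({t: 0, s: 1}))            # 2
    print(sp.solve(sp.Eq(G, 0), s))                    # two branches; the one with s(0) = 1 gives t |-> p_t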

Problem 6. (15/15 points). Write a careful proof of the following theorem. Let f : X → Y be a
proper injective immersion. Then f is an embedding.

Solution: An embedding is an injective immersion which is a homeomorphism onto its image. We


already know that f is an injective immersion, and since f is smooth it is continuous. Hence, we
need only show that f −1 : f (X) → X is continuous. Equivalently, we can show that f : X → Y is a
closed map. I claim that every proper map to a locally compact, Hausdorff space (which manifolds
are) is a closed map.

Let C ⊂ X be closed. We want to show that f (C) ⊂ Y is closed, i.e. that Y \ f (C) is open.
To this end, let q ∈ Y \ f (C); we produce an open neighborhood of q not intersecting f (C). Since
manifolds are locally Euclidean, they are locally compact. Thus, there exists an open neighborhood
U of q with compact closure.

Do you mean locally compact and Hausdorff?

I mean, manifolds are also locally Hausdorff. I guess I use this later, but for right now I think
I just need local compactness. Maybe I’m overlooking something.

Because f is proper, f −1 (Cl(U )) is compact in X. Then f −1 (Cl(U )) ∩ C is a closed subset of


a compact set, hence is also compact. By continuity, f (f −1 (Cl(U )) ∩ C) is compact, hence closed
since Y is Hausdorff. Now set V = U \ f (f −1 (Cl(U )) ∩ C), which is open. We have that

f (f −1 (Cl(U )) ∩ C) = f (f −1 (Cl(U ))) ∩ f (C).


We assumed that q ∈ Y \ f (C) so that q ∈ V . Now, suppose that V ∩ f (C) 6= Ø. Then there exists
a c ∈ X such that f (c) ∈ V . So,

c ∈ f −1 (V ) ⊂ f −1 (U ) ⊂ f −1 (Cl(U )).
It follows that c ∈ f −1 (Cl(U )) ∩ C. Hence, f (c) ∈ f (f −1 (Cl(U )) ∩ C). But we assumed that
f (c) ∈ V , which excludes these points, a contradiction. The below picture helps visualize this:

Problem 7. (30/30 points). Proof or counterexample:


a) Suppose X is a manifold and f : X → R a smooth function. Then f has at least one critical
point.
b) Let f : X → Y be a smooth map and q ∈ Y . Then f −1 (q) is a manifold.

c) Let X be a manifold of positive dimension. Then there exists a vector field on X which is
not identically zero. (Recall that a vector field is a smooth map ξ : X → T X such that
π ◦ ξ = IdX , where π : T X → X is the natural projection.)
d) Let f : R → S 2 be a smooth map. Then there exists q ∈ S 2 such that q ∉ f (R)
e) Let f : X → Y be a local diffeomorphism of smooth manifolds. Then f is a closed map: the
image of a closed subset of X is closed in Y .
f) There exists a local diffeomorphism f : X → Y with X compact and Y noncompact.

Solution:
a) Counterexample: Take X = R and f = IdR . Then f is a diffeomorphism, and in particular
a submersion for every p ∈ R. So, there are no critical points. I think it is true, however,
that if X is compact then every such f has at least two critical points. If memory serves
right, there’s also a neat theorem which says that every smooth f : T 2 → R has at least
three critical points.
b) Counterexample: Let X = A3 and Y = R. Then the smooth map f (x, y, z) = x2 + y 2 − z 2
is such that f −1 (0) is not a manifold; it is the double cone. Of course, we have a sufficient
condition when f −1 (q) is a manifold – if q is a regular value.
c) Proof: I assume X 6= Ø (otherwise, I am not sure how to treat T Ø as a collection of motion
germs). Choose some p ∈ X and a chart (U, x) around p. Then, consider the constant
vector field v defined on x(U ) assigning to each point the tangent vector (1, 0, ..., 0). Now
let ϕ ∈ Cc∞ (x(U )) be such that ϕ(x(p)) = 1. Then, ϕv is a smooth vector field vanishing
around ∂x(U ) but is not identically zero. We can push this forward onto X by dx−1 . Define
ξ : X → T X by extending this vector field to zero outside U . In total,
ξ(q) = (q, (dx⁻¹)x(q) (ϕ(x(q)), 0, ..., 0)) for q ∈ U , and ξ(q) = (q, 0) for q ∉ U ,
where ξ(p) = (p, (dx⁻¹)x(p) (1, 0, ..., 0)) ≠ 0.
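For concreteness, one standard choice of such a ϕ (an illustration I'm adding; here r > 0 is any radius with the closed ball of radius r about x(p) contained in x(U )) is the bump function
ϕ(y) = exp(1 − r2 /(r2 − |y − x(p)|2 )) for |y − x(p)| < r, and ϕ(y) = 0 otherwise,
which is smooth, compactly supported in x(U ), and satisfies ϕ(x(p)) = 1.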


d) Proof: Since 1 = dim R < dim S 2 = 2, we have that dfp is never surjective. Denoting Crit(f )
the set of critical points, we see that R = Crit(f ). By Sard’s theorem, S 2 \ f (Crit(f )) =
S 2 \ f (R) is dense. If this were empty, it would not be dense in S 2 , hence there must be
some q ∈ S 2 \ f (R).
e) Counterexample: Maybe this is more of a topology trick, but consider the map f : R → R
given by f (x) = arctan(x). Then, f is a local diffeomorphism, but not a global diffeomor-
phism since it is not surjective (it is, however, a diffeomorphism onto its image). Now,
f (R) = (−π/2, π/2), but R is closed while (−π/2, π/2) is open in R.

There is an exercise in Guillemin and Pollack which I did on a previous homework, where
we show that any local diffeomorphism f : R → R has image an open interval (essentially,
use MVT). I will remark that, in general, if one hopes to construct a local diffeomorphism
f : X → Y which is not closed, then X cannot be compact.
f) Proof: Let X = Ø and consider f the empty function where Y is any nonempty non-compact
space. For f to be a local diffeomorphism, we need that for all p ∈ X there exists an open
neighborhood U of p such that f |U : U → f (U ) is a diffeomorphism. But this is vacuously
true.

You can create more interesting examples. For instance, take X any compact manifold.
By the Whitney embedding theorem, we may embed X in RN for some large N . Now take
Y = X ∪ [RN \ Bε (X)] for any ε > 0 (small enough so that the set difference makes sense).
We also view Y as embedded in RN . Then, let f : X → Y be the inclusion of X into Y .
Then f is a local diffeomorphism (since restricting the codomain to X gives a diffeomor-
phism), but Y is noncompact, as it is not bounded. The issue is that Y is disconnected!
Take the following example with X = S 2 . The two connected components of R3 \ Bε (X) are
in red and blue.

In fact, when Y is connected then the only example is the first. To see this, since Y is
Hausdorff and f is continuous f (X) is closed in Y . But, local diffeomorphisms are open
maps, and X is open in itself, so f (X) is also open in Y . By connectedness, either f (X)
is the empty set or f (X) = Y . But, Y is not compact, so the latter cannot occur. So, we
are left with the case that f (X) = Ø. As f is a local diffeomorphism, it must be locally
invertible. Hence X is the empty set too.

Problem 9. (6/10 points) (Extra Credit).


a) Construct a diffeomorphism Gr2 (R3 ) → RP2 .
b) Construct a diffeomorphism RP3 → SO(3), where SO(3) is the Lie group of orthogonal 3 × 3
matrices of determinant 1.
c) Construct a double cover S 2 × S 2 → Gr2 (R4 ).

Solution:
a) Every point in Gr2 (R3 ) corresponds to some plane passing through the origin in A3 . This
uniquely identifies a normal line passing through the origin. Of course, these are points in
RP2 . All lines through the origin arise this way.
b) Recall that RP3 can be constructed from the unit ball B 3 ⊂ A3 by identifying antipodal
points of the boundary S 2 .

Given any rotation in SO(3), we have an axis of rotation L and an oriented angle of ro-
tation. I will represent this as a pair (η, θ) where θ ∈ (−π, π) and η is a unit vector parallel
to L. The direction of η is given by the direction of rotation (say, using the right hand rule,
looking where your thumb points as you curl your hand). Of course, this does not cover
those rotations with magnitude π, but for these the direction does not matter. Indeed, it is
for this reason that we identify boundary points on the ball!

Now, η is some unit vector so that θη lies in the interior of Bπ , the ball of radius π. It
is this point that we map our chosen rotation to. Intuitively this map is smooth with
smooth inverse. We can see this by varying η and θ, one just moves the axis of rotation
around while the other just moves the amount of rotation.

Dan’s comment: You need to account for the identity element of SO(3) as well

The identity element should be the origin. The above work doesn’t make too much sense
because we don’t have an axis of rotation. But we can still interpret θη as 0 since, if we
have the identity element, this is a rotation by 0. So, θ = 0.
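As a sketch of the inverse map Bπ → SO(3) in formulas, here is Rodrigues' rotation formula in numpy (my own illustration, not part of the original solution; note that antipodal boundary points w and −w with |w| = π give the same matrix):

    import numpy as np

    def rotation_from_ball_point(w, tol=1e-12):
        """Send w in the closed ball of radius pi to the rotation with axis w/|w|
        and angle |w|; w = 0 goes to the identity."""
        theta = np.linalg.norm(w)
        if theta < tol:
            return np.eye(3)
        n = w / theta
        K = np.array([[0, -n[2], n[1]],
                      [n[2], 0, -n[0]],
                      [-n[1], n[0], 0]])              # cross-product matrix of the axis
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    R = rotation_from_ball_point(np.array([0.0, 0.0, np.pi/2]))
    print(np.round(R, 6))                             # rotation by pi/2 about the z-axis
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))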
c) No clue, but it seems interesting!

Dan wrote up some remarks on the midterm. For this problem, he wrote “I do not know an
elementary solution to (c) – I put it as extra credit for you to play with and perhaps come
up with one.” I don’t know one either, Dan...

HW 6
Guillemin/Pollack: Chapter 1.5: 2, 9, 10; Chapter 2.1: 2, 6; Chapter 2.2: 1, 2, 3, 4.

1.5.2. Which of the following linear spaces intersect transversally?


a) The xy plane and the z axis in R3 .
b) The xy plane and the plane spanned by {(3, 2, 0), (0, 4, −1)} in R3 .
c) The plane spanned by {(1, 0, 0), (2, 1, 0)} and the y axis in R3 .
d) Rk × {0} and {0} × Rl in Rn . (Depends on k, l, n.)
e) Rk × {0} and Rl × {0} in Rn . (Depends on k, l, n.)
f) V × {0} and the diagonal in V × V .
g) The symmetric (AT = A) and skew symmetric (AT = −A) matrices in M (n).

Solution: Recall that two submanifolds X, Z of a manifold Y intersect transversally if and only if
for every x ∈ X ∩ Z,
Tx X + Tx Z = Tx Y.
Next, suppose that X is an n-dimensional affine subspace of Am passing through the origin with
n ≤ m. Then naturally identifying points in Am with position vectors in Rm , for any p ∈ X the
tangent space Tp X is simply X in Rm . Since all of the submanifolds in a) - e) contain the origin,
transversality is equivalent to X + Z = Y , viewing each of these as subspaces of Rn . Finally, note
that if dim X + dim Z ≥ dim Y and if BX , BZ are bases of X and Z respectively, then we simply need
to show that BX ∪ BZ contains a basis for Y .
a) These intersect transversally, since a basis for the xy plane is {(1, 0, 0), (0, 1, 0)} while one
for the z axis is {(0, 0, 1)}.
b) These intersect transversally since X + Z contains all linear combinations of the vectors
(1, 0, 0), (0, 1, 0), (3, 2, 0), and (0, 4, −1). The set {(1, 0, 0), (0, 1, 0), (0, 4, −1)} is a basis of
R3 .
c) These do not intersect transversally since any linear combination of (1, 0, 0), (2, 1, 0), and
(0, 1, 0) will have vanishing third component. So, these only span the xy plane in R3 .
d) Let v, w be vectors in Rk and Rn−k and define v∗w as the concatenation (v1 , ..., vk , w1 , ..., wn−k ).
Let {ei }ki=1 and {fi }li=1 be the standard basis vectors for Rk and Rl respectively. Let
ēi = ei ∗ 0 and f¯i = 0 ∗ fi . Then {ēi }ki=1 spans Rk × {0} and similarly for {f¯i }li=1 . Note that
these are all vectors in Rn .

First, if k + l < n then Rk × {0} and {0} × Rl cannot intersect transversally – the union of
their bases consists only of k + l < n vectors, and hence cannot span Rn . If k + l ≥ n, then
{ēi }ki=1 ∪ {f¯i }li=k+l−n+1 is exactly the standard basis of Rn , and they intersect transversally.
Note that a) is a special case of this with k = 2, l = 1.
e) Use the same notation as before, except defining f¯i = fi ∗ 0 instead. Note that {ēi }ki=1 ∪
{f¯i }i=1,...,l = {gi }i=1,...,max{k,l} where {gi } is the standard basis of Rn . So, these spaces intersect
transversally if and only if one of k or l is n.
f) These intersect transversally. We must show that V × {0} + Diag(V × V ) = V × V . But,
this is obvious since an element of V × V is an element of the form (v, w) for v, w ∈ V , and
(v − w, 0) + (w, w) = (v, w).
g) Let X denote the symmetric matrices and Z the skew-symmetric matrices in Mn×n (R). Then
X ∩ Z = 0, and since both X and Z are vector spaces we have T0 X ' X and T0 Z ' Z.
Let Ai,j be the n × n matrix with aij = 1 and 0 otherwise. Then, any n × n matrix can be
written as a sum of the Ai,j . So, it suffices to show that each Ai,j can be written as the sum
of a symmetric and skew-symmetric matrix. Let Bi,j denote the matrix with bij = bji = 1
and 0 otherwise, and Ci,j the matrix with cij = 1, cji = −1, and 0 otherwise – if i = j then
let Ci,i = 0. Then Ai,j = 1/2(Bi,j + Ci,j ) for i 6= j and Ai,j = Bi,j for i = j. Evidently,
Bi,j ∈ X whereas Ci,j ∈ Z.
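A cleaner way to see the same decomposition, which I'll record here as a remark: any A ∈ M (n) can be written as
A = 1/2(A + AT ) + 1/2(A − AT ),
where the first summand is symmetric and the second is skew-symmetric. So X + Z = M (n), and the intersection is transverse.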

1.5.9. Let V be a vector space, and let ∆ be the diagonal of V ×V . For a linear map A : V → V , con-
sider the graph W = {(v, Av) | v ∈ V }. Show that W t ∆ if and only if +1 is not an eigenvalue of A.

Solution: First, Graph(A) ∩ Diag(V × V ) consists of all (v, v) ∈ V × V where Av = v. So, if


+1 is not an eigenvalue then the intersection is empty, and hence Graph(A) t Diag(V × V ) vacu-
ously. Suppose now that Graph(A) t Diag(V × V ). Then, for every p ∈ Graph(A) ∩ Diag(V × V )
we have
Tp (V × V ) = Tp Graph(A) + Tp Diag(V × V ).
Note that these are all vector spaces, so that we can write
V × V = Graph(A) + Diag(V × V ).
Hence, for any v1 , v2 ∈ V there exist w, v ∈ V such that (v1 , v2 ) = (w, Aw) + (v, v). Solving for v1
and v2 , we get v1 = w + v and v2 = Aw + v. Hence, Aw − w = v2 − v1 , which in general is nonzero.
So, +1 is not an eigenvalue.

TA comment: If 1 is not an eigenvalue, then Graph(A) and the diagonal intersect at the origin... also,
your proof with v1 , v2 , and w doesn’t seem correct to me. What if I pick v1 = v2 and get a nonzero w?

Okay so I was silly here, obviously Graph(A) ∩ Diag(V × V ) contains 0, since linear transforma-
tions map 0 to 0. I don’t even need to check the intersection condition, that’s something you do
with manifolds, not vector spaces. So don’t mind that. Instead, we just assume that +1 is not an
eigenvalue and ask if for any (v1 , v2 ) ∈ V × V we can write
(v1 , v2 ) = (w, Aw) + (v, v)
for some v, w ∈ V . This decomposition need not be unique, it just needs to exist. Well, since +1 is
not an eigenvalue we know that Aw 6= w. So (w, Aw) + (v, v) 6∈ Diag(V × V ), which is a good sign
if we’re trying to get to all of V × V ! Now let’s look at our chosen vectors v1 , v2 . We need to find
w so that
v1 − v2 = w − Aw.
Since v1 , v2 range across all of V , we equivalently ask if for any ṽ ∈ V there exists a w ∈ V such
that
ṽ = (Id −A)w.
In other words, is Id −A surjective? Since +1 is not an eigenvalue, we know that Id −A is injective,
and since V is finite dimensional it is surjective. Hence for any ṽ ∈ V , in particular v1 − v2 , there
exists a w such that ṽ = w − Aw. Now simply define v = v1 − w. Then necessarily
Aw + v = Aw − w + v1 = [v2 − v1 ] + v1 = v2
as desired. So, we’ve found our w and v.

As for the converse, recall that by transversality


dim[V × V ] = dim[Graph(A)] + dim[Diag(V × V )] − dim[Graph(A) ∩ Diag(V × V )].
We know that
dim[V × V ] = 2 dim V
dim[Graph(A)] = dim V
dim[Diag(V × V )] = dim V
where the last two can be seen by defining maps v 7→ (v, Av) and v 7→ (v, v), showing they are
injective, and applying rank-nullity. So,
dim[Graph(A) ∩ Diag(V × V )] = 0
and consists only of the zero vector. In particular, we can write
Graph(A) ∩ Diag(V × V ) = Ker(Id −A)
and so Id −A is injective.
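A small numerical illustration of this equivalence (numpy, my own check): Graph(A) + Diag(V × V ) spans V ⊕ V exactly when 1 is not an eigenvalue of A.

    import numpy as np

    def spans_all(A):
        """Columns: a basis of Graph(A) followed by a basis of Diag(V x V)."""
        n = A.shape[0]
        cols = np.hstack([np.vstack([np.eye(n), A]), np.vstack([np.eye(n), np.eye(n)])])
        return np.linalg.matrix_rank(cols) == 2 * n

    print(spans_all(np.diag([2.0, 3.0])))   # True: 1 is not an eigenvalue
    print(spans_all(np.eye(2)))             # False: 1 is an eigenvalue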

1.5.10. Let f : X → X be a map with fixed point x; that is, f (x) = x. If +1 is not an eigenvalue of
dfx : Tx X → Tx X, then x is called a Lefschetz fixed point of f . f is called a Lefschetz map if all its
fixed points are Lefschetz. Prove that if X is compact and f is Lefschetz, then f has only finitely
many fixed points.

Solution: We prove first that Graph(f ) t Diag(X × X). For any p ∈ Graph(f ) ∩ Diag(X × X),
there exists an x ∈ X with p = (x, f (x)) = (x, x), so that f (x) = x. Hence, x is a Lefschetz fixed
point. So, by Exercise 1.5.9 Graph(dfx ) t Diag(Tx X × Tx X) and
Tx X × Tx X = Graph(dfx ) + Diag(Tx X × Tx X).
There are standard isomorphisms
Tx X × Tx X ' Tp (X × X)
Graph(dfx ) ' Tp (Graph(f ))
Diag(Tx X × Tx X) ' Tp (Diag(X × X))
So, the above is equivalent to
Tp (X × X) = Tp (Graph(f )) + Tp (Diag(X × X)),
that is Graph(f ) t Diag(X × X). By the transversality theorem, Graph(f ) ∩ Diag(X × X) is a
submanifold of X × X. The underlying set is {(x, x) ∈ X | x ∈ Fix(f )}, where Fix(f ) is the
set of fixed points of f . This is diffeomorphic to Fix(f ) via (x, x) 7→ x. Furthermore, Graph(f ) ∩
Diag(X × X) is a zero dimensional manifold, since dim(Graph(f )) = dim(Diag(X × X)) = dim(X)
and dim(X × X) = 2 dim(X). So, Fix(f ) is also a zero dimensional submanifold of X. Since X is
compact, Fix(f ) is finite.
2.1.2. Prove that if f : X → Y is a diffeomorphism of manifolds with boundary, then ∂f maps ∂X
diffeomorphically onto ∂Y .

Solution: We first show that the image of ∂f is ∂Y . By definition, ∂Y consists of those points
y ∈ Y such that there exists a chart (ψ, V ), ψ : V → V 0 ⊂ H n where y ∈ ψ −1 (∂V 0 ). Now let
x ∈ ∂X and consider such a chart (ϕ, U ) about x. Set ψ = ϕ ◦ f −1 : f (U ) → U 0 ⊂ H n . Then,
(ψ, f (U )) is a chart about f (x) and since x ∈ ϕ−1 (∂U 0 ), f (x) ∈ (f ◦ ϕ−1 )(∂U 0 ) = ψ −1 (∂U 0 ) as
desired. Since ∂f is just the restriction of f to ∂X, it is smooth and injective. By the same logic,
∂f −1 is a smooth, injective map that surjects onto ∂X. Hence, ∂f is a diffeomorphism onto ∂Y .
2.1.6. There are two standard ways of making manifolds with boundary out of the unit square by
gluing a pair of opposite edges (Figure 2-5). Simple gluing produces the cylinder, whereas gluing
after one twist produces the closed Möbius band. Check that the boundary of the cylinder is two
copies of S 1 , while the boundary of the Möbius band is one copy of S 1 ; consequently, the cylinder
and Möbius band are not diffeomorphic. What happens if you twist n times before gluing?

Solution: First note that ∂(S 1 × I) = S 1 × ∂I = [S 1 × {0}] ∪ [S 1 × {1}]. So, when I 2 / ∼ = S 1 × I the
boundary consists of two copies of S 1 . Now let’s consider the quotient of I 2 resulting in the Möbius
band. We can embed the Möbius band in A3 as a ruled surface via the parameterization
X(u, v) = (− cos(u) sin(u/2), − sin(u) sin(u/2), cos(u/2))v + (cos(u), sin(u), 0)
with 0 ≤ u < 2π and v ∈ [−1, 1]. We see that the boundary is parameterized by
γ(t) = ((1 − sin(t)) cos(2t), (1 − sin(t)) sin(2t), cos(t))
for 0 ≤ t < 2π. This is obtained by looking at the curves X(u, 1) and X(u, −1), concatenating
them, and reparameterizing the result. It is clear then that the boundary of the Möbius band is
diffeomorphic to S 1 , since the image of γ is a 1-dimensional connected, compact smooth manifold.

TA comment: Why does X(u, v) have five components if we’re embedding in R3 ?

Not sure where he got this from, but since he was confused I’ll address it. The parametrization

X(u, v) is the sum of two vectors, each with three components. One of them is multiplied by v,
the other is not. The parameter v acts as the “ruling”, traces out lines coming out of a certain
parametrized curve. That curve is (cos u, sin u, 0), a circle.
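As a sanity check that γ really is the boundary curve, here is a short sympy verification I'm adding (it checks that γ is the concatenation of the two edge curves X(u, 1) and X(u, −1)):

    import sympy as sp

    u, v, t = sp.symbols('u v t')
    X = sp.Matrix([-sp.cos(u)*sp.sin(u/2), -sp.sin(u)*sp.sin(u/2), sp.cos(u/2)]) * v \
        + sp.Matrix([sp.cos(u), sp.sin(u), 0])
    gamma = sp.Matrix([(1 - sp.sin(t))*sp.cos(2*t), (1 - sp.sin(t))*sp.sin(2*t), sp.cos(t)])

    print(sp.simplify(X.subs({u: 2*t, v: 1}) - gamma))                      # zero: gamma(t) = X(2t, 1) on [0, pi)
    print(sp.simplify(X.subs({u: 2*t, v: -1}) - gamma.subs(t, t - sp.pi)))  # zero: gamma(t) = X(2(t - pi), -1) on [pi, 2pi)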
2.2.1. Any one-dimensional, compact, connected submanifold of R3 is diffeomorphic to a circle. But
can it be deformed into a circle within R3 ? Draw some pictures, or try with string.

Solution: No, the trefoil knot provides a counterexample since it is knotted. If you try to un-
knot it, you will see you have to pass the knot through itself (that is, S 1 and the trefoil are not
isotopic). One can explicitly show this by computing the knot groups of each – for S 1 it is Z whereas
for the trefoil it is represented by ha, b |a2 = b3 i.
2.2.2. Show that the fixed point in the Brouwer theorem need not be an interior point.

Solution: For any ball B n choose a point x0 ∈ ∂B n and consider the map f : B n → B n given
by f (x) = x0 . As a constant map it is smooth, and clearly x0 is a fixed point.
2.2.3. Find maps of the solid torus into itself having no fixed points. Where does the proof of the
Brouwer theorem fail?

Solution: Let X be the solid torus and consider h : X → R3 the standard embedding as a solid
of revolution about the z-axis. Then, any nontrivial rotation g around the z-axis has no fixed
points on h(X), since the only fixed points of such a rotation lie on the z-axis. Consider now
f = h−1 ◦ g ◦ h : X → X.
I think the proof of Brouwer’s theorem fails when trying to draw a ray from f (x) through x to
the boundary. In the solid torus, such a ray will not necessarily stay contained within – e.g. the ray
may have to pass through the central hole. Precisely, the solid torus is not convex.
2.2.4. Prove that the Brouwer theorem is false for the open ball |x|2 < a. [Hint: See Chapter 1,
Section 1, Exercise 4.]

Solution: Suppose that every smooth map f : Int(B n ) → Int(B n ) has a fixed point. By Exer-
cise 4 in Chapter 1.1, there exists a diffeomorphism g : Int(B n ) → Rn . Let y 6= 0 ∈ Rn and
let T : Rn → Rn be the translation T (x) = x − y. Then, g −1 ◦ T ◦ g is a smooth bijective map
Int(B n ) → Int(B n ), and therefore has a fixed point x ∈ Int(B n ). So,
x = g −1 (T (g(x))) = g −1 (g(x) − y) → g(x) = g(x) − y
from which it follows that y = 0, a contradiction.

TA comment: This is a great counterexample but didn’t need to be phrased as a proof by con-
tradiction: just, here’s a function with no fixed points, therefore Brouwer’s theorem doesn’t hold for
the open ball.
Dan’s Problems.
Problem 2.
a) Let f = f (x, y, z) and g = g(x, y, z) be smooth functions defined on an open set U = A3 , and
suppose each has 0 as a regular value. Then X = f −1 (0) and Y = g −1 (0) are submanifolds
of A3 of dimension 2. Then X and Y intersect transversely if and only if a certain condition
on f and g holds. What is it?
b) Check your answer for the specific functions
f = x2 + y 2 + z 2 − 1
g = (x − a)2 + y 2 + z 2 − 1
where a is a real parameter. For what values of a is the intersection transverse? Think
about the geometric picture as well as the equations.

Solution:
a) Recall that for any p ∈ X, the vector ∇f (p) is normal to X at p. Regarding Tp X as a plane
in R3 , we have that ∇f (p) is normal to Tp X. Recall also that ∇f (p) is the unique vector
satisfying
h∇f (p), ξi = dfp (ξ)
for all ξ ∈ Tp X. Because 0 is a regular value of f it follows that ∇f (p) 6= 0. Since planes
in R3 are uniquely identified with their normal vectors (up to a scalar constant), we may
identify Tp X with ∇f (p). For X and Y to intersect transversely, it must be that every
p ∈ X ∩ Y satisfies
Tp X + Tp Y = R3 .
This is satisfied if and only if the two planes Tp X and Tp Y do not coincide. And that occurs
if and only if their normal vectors are not parallel. So, X and Y intersect transversely if
and only if for every p ∈ X ∩ Y the gradients ∇f (p) and ∇g(p) are not parallel.
b) First, the intersection is vacuously transverse if |a| > 2, since there is no intersection between
X = f −1 (0) and Y = g −1 (0). For |a| = 2, the intersection consists of a point (either (1, 0, 0)
or (−1, 0, 0) depending on the sign of a). Here, the two spheres X and Y touch tangently
and therefore their tangent planes coincide. Hence, the intersection is not transverse. For
0 < |a| < 2, any point p ∈ X ∩ Y is such that Tp X and Tp Y are noncoincidental planes.
This follows since at such points the gradients of f and g are not parallel. For a = 0, X ∩ Y
is just the unit sphere. But, for every p ∈ X ∩ Y the tangent planes Tp X and Tp Y obviously
coincide. So the intersection is not transversal.

Indeed, we can verify this from the functions directly. If a = 0 then f = g and clearly
∇f (p) = ∇g(p) for all p ∈ X ∩ Y . Now suppose a 6= 0. Then p ∈ X ∩ Y when p1 = a/2.
The gradients of f and g at p are
∇f (p) = (2x, 2y, 2z)|p = (2p1 , 2p2 , 2p3 ) = (a, 2p2 , 2p3 ),
∇g(p) = (2(x − a), 2y, 2z)|p = (2(p1 − a), 2p2 , 2p3 ) = (−a, 2p2 , 2p3 ).
We see that ∇f (p) and ∇g(p) are parallel when p2 = p3 = 0. Hence, p1 = ±1 and
p1 − a = ±1. It follows that a = ±2 (since we exclude a = 0). Finally, if 0 < |a| < 2, then
one of p2 or p3 is nonzero, and therefore ∇f (p) and ∇g(p) cannot be parallel.
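A quick numerical check of part b), which I'm adding as a sketch (numpy; it samples points with x = a/2 lying on both spheres and tests whether the two gradients are parallel there):

    import numpy as np

    def transverse_along_circle(a, samples=200):
        """Sample points with x = a/2 on both spheres and test whether
        grad f and grad g are parallel at any of them."""
        r2 = 1 - (a / 2)**2                # squared radius of the intersection circle
        if r2 < 0:
            return True                    # empty intersection: vacuously transverse
        for th in np.linspace(0, 2*np.pi, samples, endpoint=False):
            p = np.array([a/2, np.sqrt(r2)*np.cos(th), np.sqrt(r2)*np.sin(th)])
            gf = 2 * p
            gg = 2 * (p - np.array([a, 0, 0]))
            if np.linalg.norm(np.cross(gf, gg)) < 1e-12:
                return False
        return True

    for a in [0.0, 0.5, 1.0, 2.0, 3.0]:
        print(a, transverse_along_circle(a))   # False exactly for a = 0 and a = 2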

Problem 3. Let X be a manifold with boundary. Construct a smooth function f : X → R such that
0 is a regular value, f −1 (0) = ∂X, and f < 0 on X \ ∂X.

Solution: For simplicity, let us work with X = H 1 = {x | x ≤ 0}. A map f : X → R is smooth


if for every x ∈ H 1 there exists an open set U such that f exhibits a smooth extension f˜ in U .
Consider the function f : X → R given by f (x) = x. Since f is smooth away from 0, we only need
to check f is smooth at 0. Given some open ball around x = 0, f may be extended by the smooth
function IdR . So, f is smooth. Moreover, f −1 (0) = 0 = ∂H 1 and f < 0 on H 1 \ ∂H 1 . Finally, the
smooth extension at 0 is actually a local diffeomorphism, and by definition df0 = d(IdR )0 . So, 0
is a regular value of f . This construction should generalize to H k for higher values of k by f (x) = x1 .

Now let X be a k-dimensional manifold with boundary. Let {Uα }α∈A be a covering of X by
charts. Then, there exists a partition of unity {ρα }α∈A subordinate to this covering. Since each
(Uα , ϕα ) is a chart, ϕα maps Uα diffeomorphically to an open subset Vα of the affine half space H k .
Let g : H k → R be given by g(x) = x1 . We saw above that g −1 (0) = ∂H k , g < 0 on H k \ ∂H k
and that 0 is a regular value of g. Consider the composition ψα = g|Vα ◦ ϕα : Uα → R. Then
ψα−1 (0) = ∂Uα , ψα < 0 on Uα \ ∂Uα , and since ϕα is a diffeomorphism 0 is still a regular value.
Finally, let f : X → R be defined by
f = Σα∈A ψα ρα .

Note that f (p) = 0 if and only if ψα (p)ρα (p) = 0 for all α ∈ A . Observe further that p ∈ ∂X if and
only if ψα (p) = 0 for all p ∈ ∂Uα . Since
Σα∈A ρα (p) = 1

for any p ∈ X, and all the ρα are nonnegative, it follows that at least one ρα (p) 6= 0. Hence f (p) = 0
if and only if p ∈ ∂X. Furthermore, f (p) < 0 for p ∈ X \ ∂X since the ρα (p) are nonnegative and
the ψα (p) are negative. Finally suppose that p ∈ ∂X. Let α ∈ A be such that ρα (p) 6= 0. Then for
ξ ∈ Tp X,
d(ψα ρα )p (ξ) = ρα (p)d(ψα )p (ξ) + ψα (p)d(ρα )p (ξ) = ρα (p)d(ψα )p (ξ).
Since ρα (p) is nonzero and d(ψα )p is surjective, we find too that d(ψα ρα )p is. Consequently, dfp is
surjective and 0 is a regular value.

HW 7
Guillemin/Pollack: Chapter 2.4: 3, 5, 6, 8, 11, 13; Chapter 2.6: 1, 2.

2.4.3. Suppose that X and Z are compact manifolds and that f : X → Y , and g : Z → Y are
smooth maps into the manifold Y . If dim X + dim Z = dim Y , we can define the mod 2 intersection
number of f and g by I2 (f, g) = I2 (f × g, ∆), where ∆ is the diagonal of Y × Y .
a) Prove that I2 (f, g) is unaltered if either f or g is varied by a homotopy.
b) Check that I2 (f, g) = I2 (g, f ) [Hint: use Exercise 2 with the “switching diffeomorphism”
(a, b) → (b, a) of Y × Y → Y × Y .]
c) If Z is actually a submanifold of Y and ι̇ : Z ,−→ Y is its inclusion, show that
I2 (f, ι̇) = I2 (f, Z).
d) Prove that for two compact submanifolds X and Z in Y ,
I2 (X, Z) = I2 (Z, X).
(Note: This is trivial when X t Z. So why did we use this approach?)

Solution:
a) Suppose first that f0 ∼ f1 and fi × g t Diag(Y × Y ) for i = 0, 1 (if not, homotope if
necessary to achieve transversality; since homotopy is transitive we may proceed). To show
that I2 (f0 , g) = I2 (f1 , g) we must show that I2 (f0 ×g, Diag(Y ×Y )) = I2 (f1 ×g, Diag(Y ×Y )).
This will be true if we can find a homotopy between f0 × g and f1 × g. But we can do this –
if F is a homotopy between f0 and f1 then F × g is a homotopy between f0 × g and f1 × g.
A similar argument holds if g0 ∼ g1 .
b) Exercise 2.4.2 tells us that if f : X → Y and g : Y → Z are smooth with X compact, and g
is transversal to a closed manifold W ⊂ Z (so that g −1 (W ) is a submanifold of Y ) then
I2 (f, g −1 (W )) = I2 (g ◦ f, W ).
In particular, we can apply this with f × g : X × Z → Y × Y , h : Y × Y → Y × Y given by
(a, b) 7→ (b, a), and W = Diag(Y × Y ). Then h−1 (W ) = W and
I2 (f, g) = I2 (f × g, h−1 (W )) = I2 (h ◦ (f × g), W ) = I2 (g × f, Diag(Y × Y )) = I2 (g, f ).
c) First let us investigate the condition that f × g t Diag(Y × Y ). It is satisfied if for any
p = (y, y) ∈ Diag(Y × Y ) and x ∈ X, z ∈ Z with f (x) = y = g(z) then
d(f × g)(x,z) (T(x,z) (X × Z)) + T(y,y) (Diag(Y × Y )) = T(y,y) (Y × Y ).
Equivalently,
dfx (Tx X) ⊕ dgz (Tz Z) + Diag(Ty Y ⊕ Ty Y ) = Ty Y ⊕ Ty Y.
As done in previous homeworks, this implies that
Ty Y = dfx (Tx X) + dgz (Tz Z).
Now apply this with g = ι̇ : Z → Y . We know that dι̇ : T Z → T Y is the inclusion so that
Ty Y = dfx (Tx X) + Ty Z
(note that I have changed to Ty Z from Tz Z since ι̇(z) = z). But this is exactly the transver-
sality condition f t Z. So, f × ι̇ t Diag(Y × Y ) if and only if f t Z and it makes sense to
talk about I2 (f, Z).

Now what is (f × ι̇)−1 (Diag(Y × Y ))? This is the set of points


W = {(x, z) ∈ X × Z | f (x) = ι̇(z) = z}.
Consider also f −1 (Z), which is
f −1 (Z) = {x ∈ X | f (x) = z for some z ∈ Z}.

But note that these are in a bijective correspondence – consider π : W → f −1 (Z) which
is the projection onto the x component and h : f −1 (Z) → W given by (x, f (x)). So their
cardinalities, and in particular their cardinalities mod 2, are equal.
d) Let ι̇X : X → Y and ι̇Z : Z → Y be the inclusions. If X is not transverse to Z, then we can
homotope ι̇X to ι̇X̃ so that X̃ t Z. By definition
I2 (X̃, Z) = I2 (ι̇X̃ , Z) I2 (Z, X̃) = I2 (ι̇Z , X̃).
We know from c) that I2 (ι̇X̃ , Z) = I2 (ι̇X̃ , ι̇Z ) and similarly for Z with the roles reversed.
But by b) this role reversal does not matter, and they are equal.
2.4.5. Prove that intersection theory is vacuous in contractible manifolds: if Y is contractible and
dim Y > 0, then I2 (f, Z) = 0 for every f : X → Y , X compact and Z closed, dim X +dim Z = dim Y .
(No dimension-zero anomalies here.) In particular, intersection theory is vacuous in Euclidean space.

Solution: Since Y is contractible any map f : X → Y is nullhomotopic. So, f ∼ g where g : X → Y


is a constant map g(x) = y. Assume dim Z < dim Y . Then by Sard’s theorem, ι̇ : Z → Y is
not surjective, and we can choose g as before so that y ∈ / Z. Since g(X) has empty intersection
with Z, it is vacuously transverse. Therefore I2 (g, Z) is well defined, and is simply the cardinality
of g −1 (Z) (mod 2). But again this preimage is empty, so has cardinality zero. It seems there is
still a zero-dimensional anomaly, despite what the question says. In particular if you take Z = Y
and X a dimension zero compact set (hence finite), then the preimage of any constant map is X.
So I2 (f, Z) = #X (mod 2), which can clearly be nonzero. Any constant map in this situation is
trivially transverse, so there are no issues there.

TA comment: re: the zero-dimensional anomaly, you’re correct.


2.4.6. Prove that no compact manifold – other than the one-point space – is contractible. [Hint:
Apply Exercise 5 to the identity map.]

Solution: Suppose Y is contractible and choose some p ∈ Y and let Z = {p}, a closed subman-
ifold of dimension zero in Y . If X = Y then X is compact with dim X + dim Z = dim Y . Consider
IdY : Y → Y . Then, I2 (IdY , {p}) = 0. But IdY −1 ({p}) = {p} is a singleton, a contradiction.

2.4.8.
a) Let f : S 1 → S 1 be any smooth map. Prove that there exists a smooth map g : R → R such
that f (cos t, sin t) = (cos g(t), sin g(t)), and satisfying g(2π) = g(0) + 2πq for some integer
q. [Hint: First define g on [0, 2π], and show that g(2π) = g(0) + 2πq. Now extend g by
demanding g(t + 2π) = g(t) + 2πq.]
b) Prove that deg2 (f ) = q (mod 2).

Solution:
a) First recall that R is a universal covering space of S 1 with covering map p(t) = (cos t, sin t).
Restated, we want to find g : R → R such that f ◦p = p◦g. Let h(t) = (f ◦p)(t) : [0, 2π] → S 1 .
We can lift the path h(t) to R since it is a universal cover. This yields a path g : [0, 2π] → R
such that p ◦ g = h = f ◦ p as desired. Since f and p are smooth, the composition f ◦ p = p ◦ g
is too. Because p is a local diffeomorphism it follows that g is smooth. Finally let us analyze
g(2π) and g(0). We have that
(cos g(0), sin g(0)) = f (cos 0, sin 0) = f (cos 2π, sin 2π) = (cos g(2π), sin g(2π)).
By equating coordinates we obtain g(2π) = g(0) + 2πq for some integer q. Then extend g
to all of R by letting g(t + 2π) = g(t) + 2πq.
b) I’ll work out the details assuming that (cos g(0), sin g(0)) is a regular value of f . The general
case is not much more work (and can probably be achieved by a translation of R). We
wish to count the elements of f −1 ({p}). Since g is periodic, we need only count how many
t ∈ [0, 2π) are such that g(t) = g(0) (mod 2π). Since g(2π) = g(0) + 2πq and g is smooth, by

the intermediate value theorem it must encounter the points g(0), g(0)+2π, ..., g(0)+2π(q−1)
in [0, 2π), and by homotoping g if necessary we can have it so g hits these points once. Since
the degree is unaltered by homotopy, we find that deg2 (f ) = q.

TA comment: (b): “the general case is not much more work” is not a complete proof.

He’s right and I’m lazy, sue me. The key idea for solving this problem comes across well in this case,
and I think that’s what’s important.
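(A concrete instance, added here only for illustration: for the map f (cos t, sin t) = (cos qt, sin qt) one can take g(t) = qt, so g(2π) = g(0) + 2πq; for q ≠ 0 a generic point of S 1 has exactly |q| preimages, giving deg2 (f ) = q (mod 2) as claimed.)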

2.4.11. Suppose that f : X → Y has deg2 (f ) 6= 0. Prove that f is onto. [Remember that X must
be compact, Y connected, and dim X = dim Y for deg2 (f ) to make sense.]

Solution: If f is not surjective, then there exists a point y ∈ Y such that y ∈/ f (X). Since there is
no intersection, f is vacuously transverse to {y}. Hence I2 (f, {y}) = #f −1 ({y}) = 0. But, the mod
2 degree of f is deg2 (f ) = I2 (f, {y}) for any y ∈ Y .

2.4.13. If f : X → Y is transversal to Z and dim X + dim Z = dim Y , then we can at least define
I2 (f, Z) as #f −1 (Z) (mod 2), as long as f −1 (Z) is finite. Let us explore how useful this definition
is without the two assumptions made in the text: X compact and Z closed. Find examples to show:

a) I2 (f, Z) may not be a homotopy invariant if Z is not closed.


b) I2 (f, Z) may not be a homotopy invariant if X is not compact.
c) The Boundary Theorem is false without the requirement that Z be closed.
d) The Boundary Theorem is false without the requirement that X be compact.
e) The Boundary Theorem is false without the requirement that W be compact, even if X =
∂W is compact and Z is closed. [Hint: Look at the cylinder S 1 × R.]

Solution: For parts a) - d) I work with Y = A2 and X, Z one-dimensional manifolds. Furthermore,


f will be the inclusion ι̇ : X → Y . Thus computing I2 (f, Z) amounts to computing the number of
intersections points (mod 2) between X and Z. To show a) and b) we can deform X in Y (that is,
we homotope ι̇). For parts c) - e), we can always extend ι̇ in the obvious way.

a) Consider X the subset {(x, y) ∈ A2 | (x + 1)2 + y 2 = 1} and Z the punctured x-axis at


the origin. Then Z is not closed, but is a submanifold of Y , and X is an embedded circle,
hence compact. Moreover, X is transverse to Z at their only intersection point (−2, 0). Any
homotopy of ι̇ which translates X along the x-axis changes the intersection number from 1
to 0.
b) Consider X the interval (−1, 1) × {0}, and Z the y-axis. This intersection is transversal,
hence I2 (h, Z) is well defined and equal to 1. But, we can push (−1, 1) × {0} off of Z.
c) Consider X = S 1 ⊂ A2 the usual embedding of S 1 and Z = (0, 2) × {0}. Note that X is the
boundary of W = D2 , a compact manifold. Computing I2 (ι̇, Z), we see it is one.
d) Consider X the x-axis and Z the y-axis which trivially intersect transversely. Observe that
X is the boundary of the half-space W = {y ≥ 0}. But I2 (ι̇, Z) is just one.
e) Let Y = S 1 × R, X = S 1 × {0}, and Z = (1, 0) × R. Then X is compact, but does not
bound a compact manifold with boundary (the two choices for W are W = S 1 × [0, ∞) or
W = S 1 × (−∞, 0]). Since dim Y = 2, X and Z are of complementary dimension, and they
intersect transversely. The intersection X ∩ Z is just that of a circle and a line orthogonal
to it; clearly I2 (ι̇, Z) = 1.

Here are some illustrations:



TA comment: Great pictures!


2.6.1. Show that the Borsuk-Ulam theorem is equivalent to the following assertion: if f : S k → S k
carries antipodal points to antipodal points, then deg2 (f ) = 1.

Solution: Supposing Borsuk-Ulam holds, let f : S k → S k carry antipodal points to antipodal


points (that is, f (p) = −f (−p)). Then f satisfies the required symmetry condition and the image
of f does not contain the origin. Hence W2 (f, 0) = 1. By definition and applying Borsuk-Ulam,
 
1 = W2 (f, 0) = deg2 (f /|f |).
But the image of f lies on S k , so |f (p)| = 1. Hence,
deg2 (f ) = 1.
(There may be some technical fuss, because the statement of Borsuk-Ulam is for maps f : S k → Rk+1 ,
but this is the idea).

Now suppose that if f : S k → S k satisfies f (p) = −f (−p) then deg2 (f ) = 1. Let g : S k → Rk+1 be
such that g misses the origin and satisfies g(p) = −g(−p). Then h(p) = g(p)/|g(p)| is well defined,
is a map h : S k → S k , and satisfies the condition h(p) = −h(−p). Then by hypothesis deg2 (h) = 1.
By definition, the winding number of g is
 
W2 (g, 0) = deg2 (g/|g|) = deg2 (h) = 1.
2.6.2. Prove that any map f : S 1 → S 1 mapping antipodal points to antipodal points has deg2 (f ) =
1 by a direct computation with Exercise 8, Section 4. [Hint: If g : R → R is as in Exercise 8, Section
4, show that
g(s + π) = g(s) + πq,

where q is odd.]

Solution: Since f maps antipodal points to antipodal points we have f (p) = −f (−p). Let p =
(cos t, sin t) for some t ∈ R. Then,
f (cos t, sin t) = −f (− cos t, − sin t) = −f (cos(t + π), sin(t + π))
By the relation f (cos t, sin t) = (cos g(t), sin g(t)) we have
(cos g(t), sin g(t)) = −(cos g(t + π), sin g(t + π)) = (cos(g(t + π) + π), sin(g(t + π) + π)).
By equating coordinates we arrive at
g(t + π) = g(t) + πqt
for some integer qt a priori depending on t. However,
qt = (g(t + π) − g(t))/π
where the function on the right-hand side is smooth. It follows that qt is actually constant since it
takes integer values. Call this constant q. Can q be even? Suppose so, then
g(π) = g(0) + 2π q̃
for some integer q̃. Then,
f (1, 0) = f (cos 0, sin 0) = (cos g(0), sin g(0)) = (cos(g(0) + 2π q̃), sin(g(0) + 2π q̃))
f (−1, 0) = f (cos π, sin π) = (cos g(π), sin g(π)) = (cos(g(0) + 2π q̃), sin(g(0) + 2π q̃)).
But, f is supposed to map antipodal points to antipodal points. So, q is odd. By 2.4.8 b), we have
that deg2 (f ) = q (mod 2), and since q is odd deg2 (f ) = 1.

TA comment: Nice!

Dan’s Problems.

Problem 1. For each of the following construct an example (with justification) or show that it does
not exist.
a) For each n ≥ 1 two maps S n → S n which are not homotopic.
b) For each n ≥ 1 two maps RPn → RPn which are not homotopic.
c) A map f : S 1 × S 1 → S 2 with deg2 (f ) 6= 0.
d) A map f : S 2 → S 1 × S 1 with deg2 (f ) 6= 0.
e) A map f : S 2 → RP2 with deg2 (f ) 6= 0.
f) A map f : RP2 → S 2 with deg2 (f ) 6= 0.

Solution:
a) Let f0 : S n → S n be IdS n and f1 : S n → S n be a constant map. We have deg2 (f0 ) = 1
while deg2 (f1 ) = 0, and since deg2 (f ) is stable under homotopy, f0 is not homotopic to f1 .
b) Same example as above, with f0 : RPn → RPn the map IdRPn instead.
c) We realize S 1 × S 1 as the quotient of [0, 1] × [0, 1] identifying opposite edges. We can obtain
S 2 from this by identifying all points on the boundary to a single point. For points in the
interior, we see that they are mapped homeomorphically onto S 2 . Therefore this map has
degree 1.
d) Since π2 (S 1 × S 1 ) = 0, every map f : S 2 → S 1 × S 1 is homotopic to a constant map. Since
constant maps have mod 2 degree zero, no such map exists.

Note that the example in c) does not apply here. It only makes sense because we col-
lapse all the boundary points to a single point. If we try to do the same construction in
reverse (from S 2 → S 1 × S 1 ), it wouldn’t make sense. One point p in S 2 would map to the
entire boundary of [0, 1] × [0, 1], which we would subsequently identify opposite edges. So,

as a map from S 2 → S 1 × S 1 , where does p go to? It has to be the union of two circles in
the torus, but that does not make sense.
e) Recall that S 2 is a universal cover of RP2 , with covering map π : S 2 → RP2 the quotient by the antipodal identification. Let f : S 2 → RP2 be any smooth map and lift it to f˜ : S 2 → S 2 . By the lifting
property, f = π ◦ f˜. If q ∈ RP2 then we can count f −1 ({q}) by counting π −1 ({q}), and for
each p ∈ π −1 ({q}) the number of points in f˜−1 ({p}). A priori there may be some overlap
here we need to consider.

First, since S 2 is a double cover of RP2 we have π −1 ({q}) = {p, −p}. Suppose f −1 ({p}) =
{t1 , ..., tn } and f −1 ({−p}) = {s1 , ..., sk }. Note: I think we cannot assume n = k, but only
that they are equal mod 2. Now suppose that ti = sj for some 1 ≤ i ≤ n and 1 ≤ j ≤ k.
Then p = f (ti ) = f (si ) = −p, a contradiction. It follows that these two sets are disjoint,
and therefore #(π ◦ f )−1 ({q}) = n + k. Since n = k mod 2, their sum is 0 mod 2.
f) Same procedure as in c).
I remember reading at one point that every connected n-dimensional manifold X admits a CW
structure. If X n−1 is the n − 1 skeleton, I think you can take the CW structure to only have one
n-cell (I imagine pictorially filling in the n − 1 skeleton, like how soap film fills in a wire frame). It
seems to me if you take X → X/X n−1 ' S n , this should be a degree one map.

TA comment: 1b: correct, but why? (e.g. they induce different maps on π1 .) 1f: there are no
maps RP2 → S 2 with nonzero degree. They would be covering maps, and a covering space of an
orientable manifold is orientable.

HW 8
Warner: Chapter 2: 2, 9, 10, 12, 15.

2.9. Prove that the elements v1 , ..., vr of the vector space V are linearly independent if and only if
v1 ∧ ... ∧ vr 6= 0.

Solution: Suppose first that v1 , ..., vr are linearly dependent. Then there exist c1 , ..., cr ∈ R not
all zero such that ci vi = 0. WLOG assume c1 6= 0 (by reordering the indices if necessary). Then,
v1 ∧ v2 ∧ ... ∧ vr = −(1/c1 )(c2 v2 + ... + cr vr ) ∧ v2 ∧ ... ∧ vr
                   = −(1/c1 )(c2 v2 ∧ v2 ∧ ... ∧ vr + ... + cr vr ∧ v2 ∧ ... ∧ vr ) = 0
where each term in the above sum contains some vi twice (2 ≤ i ≤ r). Therefore, each is zero, and
we have shown if v1 ∧ ... ∧ vr ≠ 0 then v1 , ..., vr are linearly independent (via the contrapositive).

Now suppose v1 , ..., vr are linearly independent. Recall that v1 ∧ ... ∧ vr is an equivalence class
in Vr,0 (V )/Ir (V ). That is, we quotient out by all elements of Vr,0 containing consecutive, repeating
factors (e.g. v1 ⊗ v1 ⊗ v2 ⊗ ... ⊗ vr−1 ). Since the v1 , ..., vr are linearly independent, v1 ⊗ ... ⊗ vr is
not in Ir (V ), and therefore v1 ∧ ... ∧ vr is not the zero equivalence class.

TA comment: The proof “linearly independent, therefore nonzero” is a little terse. What if v1 , ..., vn
don’t have any repeating factors, but are nevertheless linearly dependent?

The point is, if any of the vi is linearly dependent on the others, we could write vi = cj vj ,
j ∈ {1, ..., n} \ {i}. Then, the tensor product
v1 ⊗ ... ⊗ vi ⊗ ...vn
could be written
cj v1 ⊗ ... ⊗ vj ⊗ ...vn
I think these vanish in the quotient since they contain repeating factors.

2.10. Prove that the linearly independent sets {v1 , ..., vr } and {w1 , ..., wr } are bases of the same r-
dimensional subspace of a vector space V if and only if v1 ∧ ... ∧ vr = cw1 ∧ ... ∧ wr where necessarily
c 6= 0; and in this case c = det A where A = (aij ) and vi = Σaij wj .

Solution: Suppose first that v1 ∧ ... ∧ vr = cw1 ∧ ... ∧ wr for some c 6= 0 (this is necessary, since
otherwise by 2.9 the set {v1 , ..., vr } is linearly dependent). Observe that {v1 , ..., vr } and {w1 , ..., wr }
are bases for the same r-dimensional subspace if and only if they both span that space. Hence, it
suffices to show that each wi can be written as a linear combination of v1 , ..., vr . That is, if the set
{v1 , ..., vr , wi } is linearly dependent. But, we know from problem 2.9. that this occurs if and only if
the wedge is zero. By assumption,
(v1 ∧ ... ∧ vr ) ∧ wi = c(w1 ∧ ... ∧ wr ) ∧ wi = 0
and we are done. Now suppose that {v1 , ..., vr } and {w1 , ..., wr } are bases of the same r-dimensional
subspace. Then for each vi we can find aji , 1 ≤ j ≤ r such that vi = aji wj . Computing the wedge
product,
v1 ∧ ... ∧ vr = (aj1 wj ) ∧ ... ∧ (ajr wj ) = cw1 ∧ ... ∧ wr
for some c 6= 0 by distributing the wedge product and applying the alternating rule several times.
I’m not sure the most efficient way to show that c = det A. Probably by induction on r?

TA comment: re: c = det(A), induction is a good approach!
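(For concreteness, the r = 2 base case of that induction, added by me: if v1 = a11 w1 + a12 w2 and v2 = a21 w1 + a22 w2 , then v1 ∧ v2 = a11 a22 (w1 ∧ w2 ) + a12 a21 (w2 ∧ w1 ) = (a11 a22 − a12 a21 ) w1 ∧ w2 = (det A) w1 ∧ w2 , since the w1 ∧ w1 and w2 ∧ w2 terms vanish.)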



2.15. Let ξ ∈ V . Prove that the composition


Λp (V ) −−ξ∧−→ Λp+1 (V ) −−ξ∧−→ Λp+2 (V )
of left exterior multiplication by ξ with itself is an exact sequence; that is, the image of the first map
is the kernel of the second.

Solution: By definition Λp (V ) = Vp,0 /Ip (V ). A generic element of Λp (V ) looks like a sum of


v1 ∧ ... ∧ vp for v1 , ..., vp ∈ V . So, an element of Im(ξ∧) looks like a sum of ξ ∧ v1 ∧ ... ∧ vp . Then,
applying ξ∧ once more gives ξ ∧ ξ ∧ v1 ∧ ... ∧ vp . This is the equivalence class of ξ ⊗ ξ ⊗ v1 ⊗ ... ⊗ vp in
Vp+2,0 /Ip+2 (V ). But, ξ ⊗ ξ ⊗ v1 ⊗ ... ⊗ vp ∈ Ip+2 (V ), so that this is zero. Hence Im(ξ∧) ⊂ Ker(ξ∧).
Now suppose that we have a generic element of Λp+1 (V ) that is not in Im(ξ∧). Once more this is
a sum of elements of the form v1 ∧ ... ∧ vp+1 , where {v1 , ..., vp+1 , ξ} is linearly independent by 2.9.
Looking at one of these in particular and left multiplying it by ξ, we get ξ ∧ v1 ∧ ... ∧ vp+1 . By
applying 2.9., we see this is not zero.

Dan’s Problems.

Problem 1. Let S be a set. Recall from lecture (notes) the definition of a free vector space generated
by S. In this problem you prove existence.
a) First, if V1 , V2 are vector spaces define the direct sum V1 ⊕ V2 vector space. Its underlying set is the Cartesian product V1 × V2 , for example.
b) Now let {Vs }s∈S be a set of vector spaces parametrized by S. Define the direct sum ⊕_{s∈S} Vs whose underlying set is the set of finitely supported functions ξ : S → ∐_{s∈S} Vs with the property ξ(s) ∈ Vs . What is vector addition? Scalar multiplication? (Here ∐ denotes disjoint union, a tricky operation. . . )
c) Define the direct product Π_{s∈S} Vs by dropping the support condition. What is the direct product in the special case that all Vs equal a fixed vector space V ?
d) Use (some of) these constructions to prove existence of a free vector space generated by S.
Verify the universal property.
e) For the categorically minded, formulate universal properties for the direct sum and direct
product.

Solution:
a) I assume V1 , V2 are vector spaces over the same field F . The vector space V1 ⊕ V2 is the
set V1 × V2 equipped with + : (V1 × V2 ) × (V1 × V2 ) → V1 × V2 defined by coordinate wise
addition
(v1 , v2 ) + (w1 , w2 ) = (v1 + w1 , v2 + w2 )
(I liberally use + for all the different vector space additions) and scalar multiplication · :
F × (V1 × V2 ) → V1 × V2 defined by
c · (v1 , v2 ) = (cv1 , cv2 ).
This is a vector space since we just need to check the axioms component wise, but each Vi
is a vector space. The additive identity of V1 ⊕ V2 is (0, 0) while the multiplicative identity
is 1.
b) For completeness I will now use subscripts: for each s ∈ S we have a vector space (Vs , +s , ·s , 0s , 1s )
over a fixed field F . A digression: we should formally talk about the disjoint union before
continuing. It is defined as
∐_{s∈S} Vs = ∪_{s∈S} {(v, s) | v ∈ Vs }

If any of the Vs coincide, the disjoint union "disjointifies" them by attaching a label (the index) to the elements. Define π : ∐_{s∈S} Vs → ∪_{s∈S} Vs by (v, s) ↦ v (which forgets the label we just gave it). The direct sum ⊕_{s∈S} Vs has underlying set

Sf s := { ξ : S → ∐_{s∈S} Vs | ξ(s) = (ξs , s) for some ξs ∈ Vs , and ξs = 0s for all but finitely many s ∈ S }.

Note that this is a slight modification of Dan’s problem statement (he writes ξ(s) ∈ Vs ,
which formally makes sense, but in actuality it should keep track of the underlying index
too). Addition + : Sf s × Sf s → Sf s is defined by
(ξ 1 + ξ 2 )(s) := (ξs1 +s ξs2 , s).
Since ξs1 , ξs2 ∈ Vs for each s, we can add them together and get a well defined element ξs1 +s ξs2 ∈ Vs . By definition, this is an element of {(v, s) | v ∈ Vs }, so that it is an element of ∐_{s∈S} Vs . Since ξ 1 , ξ 2 ∈ Sf s , each has finite support, say at the indices s11 , ..., s1n and s21 , ..., s2m respectively. So, there are at most n + m many indices s for which (ξ 1 + ξ 2 )s := ξs1 +s ξs2 ≠ 0s . It follows that ξ 1 + ξ 2 ∈ Sf s as desired. Scalar multiplication · : F × Sf s → Sf s
is defined by
(c · ξ)(s) = (c ·s ξs , s)
for each s ∈ S. As before, c · ξ is a well defined element of Sf s since (c ·s ξs , s) ∈ ∐_{s∈S} Vs and since its support is contained in the support of ξ.
All the vector space axioms are easy to verify and follow from evaluating the ξ involved
at a specific s, and then manipulating the term (v, s) in the first coordinate using the vector
space axioms for Vs . Note that the verification of these axioms does not depend on the fact that each ξ has finite support. The additive and multiplicative identities are 0 : S → ∐_{s∈S} Vs
and 1 ∈ F where 0 is defined by
0(s) = (0s , s)
for all s ∈ S. This is easily an element of Sf s and satisfies the additive identity properties.
c) I use the same notation as above. The direct product has underlying set
S := { ξ : S → ∐_{s∈S} Vs | ξ(s) = (ξs , s) for some ξs ∈ Vs }.

Addition + : S × S → S and · : F × S → S are defined analogously as previous; we


justL
no longer work with functions with finite support. Since the verification of the axioms
for s∈S Vs did not rely on having finite support, the same logic carries over here to prove
we have a vector space structure. The additive and multiplicative identities remain the same.
`
In the case that all the Vs are a fixed vector space V , we have that s∈S Vs = V × S.
A map ξ ∈ S is just a section of the vector bundle (V × S, πs ) with πs : V × S the projection
onto the second coordinate. The additive identity is the zero section. I think technically
here you need V to be finite dimensional to actually have a vector bundle.
d) I recall here for convenience that a pair (V, i) of a vector space V and a map i : S → V is
a free vector space generated by S if it satisfies the following universal property: for every
(W, f ) consisting of a vector space W and a map f : S → W there exists a unique linear map
T : V → W satisfying f = T ◦ i. (Aside: In the above, V, W carry vector space structures.
The maps given from S are of the underlying sets).

First, if we’re going to construct a vector space we needLan underlying field, call it F .
As a field F is a vector spaceLover itself. Then consider s∈S F , which by b) is a vector
space. We can define i : S → s∈S F by
i(s) = bs : S → F

where bs is defined by
bs (t) = (1F , t) if s = t, and bs (t) = (0F , t) otherwise.
Since each bs has support a single point of S, they are clearly in the underlying set of ⊕_{s∈S} F . Also note that {bs }s∈S forms a basis of ⊕_{s∈S} F since
f = Σ_{s∈spt(f )} f (s) bs .

The above sum is only formal. As an aside the direct sum is used here instead of the direct
product because a basis needs to represent any element as a finite linear combination! This
is where the support condition is used.

Now let W be a vector space and f : S → W . To get a linear map T : V → W we


only need to define T on a basis of V and extend linearly. So, we define T on {bs }s∈S by
T (bs ) = f (s).
By definition, bs = i(s) so that f = T ◦ i.
TA comment: (a): a vector space does not have a multiplicative identity

I meant the underlying field’s multiplicative identity.
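As a concrete illustration of parts b) and d) (my own addition, not part of the submitted solution): the free vector space on S over R can be modeled in code as finitely supported functions S → R. The helper names add, scale, basis, and extend_linearly below are mine, and the snippet assumes nothing beyond plain Python.

    # Minimal model of the free vector space on a set S over R:
    # a vector is a finitely supported function S -> R, stored as a dict
    # that only records the nonzero values.

    def add(v, w):
        """Vector addition of two finitely supported functions."""
        out = dict(v)
        for s, c in w.items():
            out[s] = out.get(s, 0.0) + c
            if out[s] == 0.0:
                del out[s]  # keep the support finite and minimal
        return out

    def scale(c, v):
        """Scalar multiplication."""
        return {s: c * x for s, x in v.items()} if c != 0 else {}

    def basis(s):
        """The basis vector b_s = i(s): value 1 at s, value 0 elsewhere."""
        return {s: 1.0}

    def extend_linearly(f):
        """Universal property: a set map f : S -> W induces the unique
        linear map T with T(b_s) = f(s), assuming W supports + and *."""
        def T(v):
            total = None
            for s, c in v.items():
                term = c * f(s)
                total = term if total is None else total + term
            return total if total is not None else 0
        return T

    # Usage: S = {"a", "b"}, W = R, f sends "a" -> 2, "b" -> -1.
    v = add(scale(3.0, basis("a")), basis("b"))   # 3*b_a + b_b
    T = extend_linearly({"a": 2.0, "b": -1.0}.get)
    print(v, T(v))                                 # {'a': 3.0, 'b': 1.0} 5.0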

Problem 2. In this problem we work in An with standard coordinates x1 , x2 , ..., xn . Or, you can
imagine that the xi are local coordinates on an n-dimensional manifold.
a) Take n = 3, call the coordinates x, y, z, and set
α = xdx + ydy
β = zdz
γ = dx ∧ dy + xdz
Compute α ∧ β, α ∧ γ, and γ ∧ γ.
b) Compute dα, dβ, and dγ.
c) Now write an arbitrary 1-form ω in An and compute dω.
d) For a function f : An → R verify explicitly that d(df ) = 0. (To ease notation in the last two
problems, you may want to try n small first.)

Solution:
a) These computations are just manipulations of the alternating rule and skew symmetry of
the wedge product. Note that as a direct consequence of the alternating rule, every k-form
for k > 3 is zero.
α ∧ β = (xdx + ydy) ∧ (zdz) = xz(dx ∧ dz) + yz(dy ∧ dz)
α ∧ γ = (xdx + ydy) ∧ (dx ∧ dy + xdz)
= x(dx ∧ dx) ∧ dy + ydy ∧ (−dy ∧ dx) + x2 (dx ∧ dz) + xy(dy ∧ dz)
= −y(dy ∧ dy) ∧ dx + x2 (dx ∧ dz) + xy(dy ∧ dz)
= x2 (dx ∧ dz) + xy(dy ∧ dz)
γ ∧ γ = (dx ∧ dy + xdz) ∧ (dx ∧ dy + xdz)
= −(dy ∧ dx) ∧ (dx ∧ dy) + xdz ∧ (dx ∧ dy) + (dx ∧ dy) ∧ (xdz) + x2 (dz ∧ dz)
= −x(dx ∧ dz ∧ dy) + x(dx ∧ dy ∧ dz)
= 2x(dx ∧ dy ∧ dz)

b) These computations are manipulations of linearity and the Leibniz rule of the differential,
together with algebraic properties of the wedge product.
dα = d(xdx) + d(ydy) = dx ∧ dx + x(d2 x) + dy ∧ dy + y(d2 y) = 0
dβ = dz ∧ dz + z(d2 z) = 0.
Notably, both α and β are closed. In fact, α = df where f (x, y, z) = x2 /2 + y 2 /2 and β = dg
where g(x, y, z) = z 2 /2 (up to addition of an element in Ker(d)).

We now turn to computing dγ, which is different from the previous since it involves a 2-form.
We have not discussed this in class, so let us first analyze how to do this computation in
general. Let ω be a k-form and η an l-form. We expect a formula like
d(ω ∧ η) = dω ∧ η + ω ∧ dη.
As we will see, this is not quite true. Instead we ask for a formula like
d(ω ∧ η) = dω ∧ η + cω ∧ dη
for an appropriately chosen c, to be computed. It could be that c depends on the order of
operation in the wedge product. By the Koszul sign rule,
dη ∧ ω = (−1)k(l+1) ω ∧ dη
η ∧ dω = (−1)(k+1)l dω ∧ η.
Whatever formula we have, it must be that d(η ∧ ω) = (−1)kl d(ω ∧ η). So
d(η ∧ ω) = dη ∧ ω + c1 η ∧ dω = (−1)kl+k ω ∧ dη + c1 (−1)kl+l dω ∧ η
(−1)kl d(ω ∧ η) = (−1)kl dω ∧ η + c2 (−1)kl ω ∧ dη
We have equality above if c1 = (−1)l and c2 = (−1)k . Note that the exponent is just the
degree of the first form in the wedge product. So, our rule is
d(ω ∧ η) = dω ∧ η + (−1)k ω ∧ dη,
where k is the degree of ω. This is consistent with ω a 0-form and η a 1-form, as in previous
examples. Then,
dγ = d(dx ∧ dy) + (dx ∧ dz + x(d2 z)) = d2 x ∧ dy − dx ∧ d2 y + dx ∧ dz = dx ∧ dz.
c) An arbitrary 1-form ω in An takes the form
ω = ωi dxi .
Then,
dω = dωi ∧ dxi .
Expanding out dωi gives  
(∂ωi /∂xj ) dxj ∧ dxi .
When i = j a term vanishes, and to put everything in standard form if i > j we reverse the wedge product. I'll compute this explicitly for a particular fixed i with the sum shown.
Σ_{j=1}^n (∂ωi /∂xj ) dxj ∧ dxi = Σ_{j<i} (∂ωi /∂xj ) dxj ∧ dxi + Σ_{j>i} (∂ωi /∂xj ) dxj ∧ dxi
                              = Σ_{j<i} (∂ωi /∂xj ) (dxj ∧ dxi ) − Σ_{j>i} (∂ωi /∂xj ) (dxi ∧ dxj )
In the above, all terms are such that the first differential has smaller index than the second. When summing over all i, we combine like terms. Consider 1 ≤ i < j ≤ n. Then the dxi ∧ dxj term has coefficient
∂ωj /∂xi − ∂ωi /∂xj

by just using the above for fixed i and varying j, then fixed j and varying i, and checking
to see in which sum the dxi ∧ dxj term is in. So, we can write dω as
dω = Σ_{1≤i<j≤n} (∂ωj /∂xi − ∂ωi /∂xj ) dxi ∧ dxj .
d) If f : An → R then df is a 1-form given as
df = (∂f /∂xi ) dxi .
So, we can take ω = df with ωi = ∂f /∂xi . Then, substituting this into the formula for dω gives
d(df ) = dω = Σ_{1≤i<j≤n} (∂ωj /∂xi − ∂ωi /∂xj ) dxi ∧ dxj
       = Σ_{1≤i<j≤n} (∂ 2 f /∂xi ∂xj − ∂ 2 f /∂xj ∂xi ) dxi ∧ dxj = 0

by equality of mixed partials.


TA comment: (b): you can compute dγ with a lot less work, and therefore save yourself some time.

If he meant that I don’t need to develop the differentiation rule for 2-forms, I disagree because
we didn’t cover that in class by this point. What I think he meant is we can write
dx ∧ dy = (1/2) d(xdy − ydx)
and just use the fact that d2 = 0.
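As an extra check of b) and c) (my own addition, assuming the sympy library is available): the coefficient formula for dω derived in c) can be evaluated symbolically. The helper name d_of_1form below is mine, not something from class.

    # Check dα = dβ = 0 using the general coefficient formula for dω,
    # where a 1-form is stored as its list of coefficient functions.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    coords = [x, y, z]

    def d_of_1form(omega):
        """Return {(i, j): coefficient of dx^i ∧ dx^j} for i < j,
        using dω = Σ_{i<j} (∂ω_j/∂x^i − ∂ω_i/∂x^j) dx^i ∧ dx^j."""
        n = len(coords)
        return {(i, j): sp.simplify(sp.diff(omega[j], coords[i]) - sp.diff(omega[i], coords[j]))
                for i in range(n) for j in range(i + 1, n)}

    alpha = [x, y, 0]          # α = x dx + y dy
    beta  = [0, 0, z]          # β = z dz
    print(d_of_1form(alpha))   # all coefficients 0, so dα = 0
    print(d_of_1form(beta))    # all coefficients 0, so dβ = 0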

Problem 3. Let P, Q : U → R be smooth functions on an open set U ⊂ A2 , and consider the


differential form
α = P dx + Qdy,
where we restrict the global coordinates x, y on the affine plane A2 to U .
a) Compute dα.
b) Consider a parametrized curve γ : [0, T ] → A2 , which we can write in coordinates as a pair
of functions (x(t), y(t)). Compute γ ∗ α, which is a 1-form on [0, T ].
c) What can you say if α = df , where f is a smooth function on A2 ?
d) Are you reminded of some integration theorems from advanced calculus?

Solution:
a) Applying Problem 2c) with n = 2 gives
dα = Σ_{1≤i<j≤2} (∂αj /∂xi − ∂αi /∂xj ) dxi ∧ dxj = (∂Q/∂x − ∂P /∂y) dx ∧ dy.

b) By definition,
γ ∗ α = (P ◦ γ) d(x(t)) + (Q ◦ γ) d(y(t)) = ((P ◦ γ) x′ + (Q ◦ γ) y ′ ) dt.
c) If α = df then dα = d2 f = 0. With ∇f = (∂x f, ∂y f ) = (P, Q), we see that
γ ∗ α = h∇f (γ), γ 0 i dt.
d) Observe that f : U → R so that f ◦ γ : [0, T ] → R is a smooth function (technically, we must
restrict f to the image of γ). By the fundamental theorem of calculus,
(f ◦ γ)(b) − (f ◦ γ)(a) = ∫_a^b (d/dt)(f ◦ γ)(t) dt = ∫_a^b h∇f (γ(t)), γ ′ (t)i dt

for a, b ∈ [0, T ]. In particular, for a = 0 and b = T ,

∫_0^T γ ∗ α = (f ◦ γ)(T ) − (f ◦ γ)(0) = ∫_C ∇f · dγ
where C is the curve γ([0, T ]), and the right-hand side is a line integral.
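A small symbolic check of c) and d) (my addition, assuming sympy): with the sample potential f = x2 y and the curve γ(t) = (cos t, sin t), the integral of γ ∗ (df ) over [0, T ] should equal f (γ(T )) − f (γ(0)).

    # Verify ∫_0^T γ*(df) = f(γ(T)) - f(γ(0)) for a sample f and curve γ.
    import sympy as sp

    x, y, t, T = sp.symbols('x y t T')
    f = x**2 * y                          # sample potential function
    P, Q = sp.diff(f, x), sp.diff(f, y)   # α = df = P dx + Q dy
    gx, gy = sp.cos(t), sp.sin(t)         # γ(t) = (cos t, sin t)

    # γ*α = (P∘γ) x'(t) dt + (Q∘γ) y'(t) dt
    pullback = (P.subs({x: gx, y: gy}) * sp.diff(gx, t)
                + Q.subs({x: gx, y: gy}) * sp.diff(gy, t))

    lhs = sp.integrate(pullback, (t, 0, T))
    rhs = f.subs({x: sp.cos(T), y: sp.sin(T)}) - f.subs({x: 1, y: 0})
    print(sp.simplify(lhs - rhs))         # 0 (up to simplification)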
Problem 4. Consider the differential form

β = P dy ∧ dz + Qdz ∧ dx + Rdx ∧ dy.


on an open set U ⊂ A3 in affine space with standard coordinates x, y, z, where P, Q, R : U → R.
a) Compute dβ.
b) Consider a parametrized surface σ : V → A3 , where V is an open set in A2 with coordinates
u, v. This is given by writing x, y, z as functions of u, v. Compute σ ∗ β.
c) What can you say if β = dα for α a 1-form on U ?
d) Are you reminded of some integration theorems from advanced calculus?

Solution:
a) Using the rule obtained in 2b),
d(P dy ∧ dz) = dP ∧ dy ∧ dz + P ∧ d(dy ∧ dz)
 
= ((∂P /∂x) dx + (∂P /∂y) dy + (∂P /∂z) dz) ∧ dy ∧ dz
= (∂P /∂x) dx ∧ dy ∧ dz
Repeating this for the other two terms and applying linearity gives
dβ = (∂P /∂x) dx ∧ dy ∧ dz + (∂Q/∂y) dy ∧ dz ∧ dx + (∂R/∂z) dz ∧ dx ∧ dy
   = (∂P /∂x + ∂Q/∂y + ∂R/∂z) dx ∧ dy ∧ dz
b) First, we can write σ as σ(u, v) = (f (u, v), g(u, v), h(u, v)) where f, g, h : V → R are smooth.
This then gives,
d(f (u, v)) = (∂f /∂u) du + (∂f /∂v) dv
and similarly for g, h. Thus,
d(f (u, v)) ∧ d(g(u, v)) = ((∂f /∂u) du + (∂f /∂v) dv) ∧ ((∂g/∂u) du + (∂g/∂v) dv)
                        = ((∂f /∂u)(∂g/∂v) − (∂g/∂u)(∂f /∂v)) du ∧ dv
For shorthand, we write ∂u f ∂v g − ∂u g ∂v f = ∂(f, g)/∂(u, v). Substituting all this gives
σ ∗ β = ((P ◦ σ) ∂(g, h)/∂(u, v) + (Q ◦ σ) ∂(h, f )/∂(u, v) + (R ◦ σ) ∂(f, g)/∂(u, v)) du ∧ dv
c) Suppose now that β = dα for a 1-form α on U . First, dβ = 0 so that from part a),
∂P /∂x + ∂Q/∂y + ∂R/∂z = 0.
If F = (P, Q, R) is a vector field on U then the above says that Div F = 0.
d) The surface integral of F over S = σ(V ) is
∫_S F · dσ = ∫_V hF ◦ σ, ∂u σ × ∂v σi du ∧ dv = ∫_V σ ∗ β
where by ∂u σ I mean (∂u f, ∂u g, ∂u h) and similarly for v.

Problem 5.
a) Consider a 1-form α = g(x)dx on the affine line A1 . Prove that there exists a function f (x)
so that α = df .
b) Now try the same problem with A1 replaced by the circle S 1 . Equivalently, replace α and f
with a periodic 1-form and a periodic function.

Solution:
a) By the fundamental theorem of calculus, since g is smooth (hence continuous) there exists f : A1 → R such that f ′ (x) = g(x) (in particular, f (x) = ∫_0^x g(t) dt). Since g is smooth,
we see that f is smooth too. It follows that α = df .
b) Consider the example in Problem 6c). With polar coordinates x = r cos θ, y = r sin θ we have
dx = cos θ dr − r sin θ dθ
dy = sin θ dr + r cos θ dθ
Hence ω ′ written in polar coordinates is
ω ′ = (r cos θ(sin θ dr + r cos θ dθ) − r sin θ(cos θ dr − r sin θ dθ))/r 2 = dθ.
Restricting this to r = 1 (i.e., S 1 ⊂ A2 ) we get a 1-form on S 1 . As shown in 6c, this
differential form is not exact.
Problem 6. Consider the 1-form
ω = xdy + ydx
on the affine plane A2 with standard coordinates x, y.
a) Compute dω.
b) Is there a function f so that ω = df ?
c) Now repeat for the form
ω ′ = (xdy − ydx)/(x2 + y 2 )
on the punctured affine plane A2 \ {0} with standard coordinates x, y.

Solution:
a) Direct computation shows
dω = dx ∧ dy + x(d2 y) + dy ∧ dx + y(d2 x) = 0.
b) Since ω is closed and A2 is contractible there exists an f such that ω = df . In particular,
f = xy works.
c) We first compute the partial derivatives of x/(x2 +y 2 ). Those of −y/(x2 +y 2 ) follow similarly.
∂/∂x (x/(x2 + y 2 )) = ((x2 + y 2 ) − x(2x))/(x2 + y 2 )2 = (y 2 − x2 )/(x2 + y 2 )2
∂/∂y (x/(x2 + y 2 )) = −2xy/(x2 + y 2 )2
Then,
dω ′ = ((y 2 − x2 )/(x2 + y 2 )2 dx + (−2xy)/(x2 + y 2 )2 dy) ∧ dy
     + ((2xy)/(x2 + y 2 )2 dx + (y 2 − x2 )/(x2 + y 2 )2 dy) ∧ dx
     = ((y 2 − x2 )/(x2 + y 2 )2 − (y 2 − x2 )/(x2 + y 2 )2 ) dx ∧ dy = 0
So, ω ′ is closed. However, it is not exact. We can naı̈vely try to write ω ′ = df for some f ,
and find that f = arctan(y/x). But, f is not smooth (it’s not even continuous).

TA comment: Note: d2 x, while valid notation, is nonstandard (versus dx ∧ dx). (c): the problem is
not smoothness or continuity: this function just isn’t defined on the y-axis.

No, I mean d2 = d ◦ d. As for c), I think my thinking was if you could smoothly extend the
function to be defined, then it’d be okay, but you can’t do that because it’s not continuous.
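For what it's worth, here is a one-line symbolic confirmation that ω ′ is closed (my addition, assuming sympy); it just checks that the dx ∧ dy coefficient of dω ′ vanishes.

    # Symbolic check that ω' = (x dy - y dx)/(x² + y²) is closed on A² \ {0}:
    # the dx∧dy coefficient of dω' is ∂_x(x/(x²+y²)) - ∂_y(-y/(x²+y²)).
    import sympy as sp

    x, y = sp.symbols('x y')
    P = -y / (x**2 + y**2)   # coefficient of dx
    Q =  x / (x**2 + y**2)   # coefficient of dy
    print(sp.simplify(sp.diff(Q, x) - sp.diff(P, y)))   # 0, so dω' = 0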
Problem 7. In this problem we work in An with standard coordinates x1 , ..., xn . Compute d of the
following differential forms.
a) γ = Σ_{i=1}^n (−1)i−1 xi dx1 ∧ ... ∧ dxi−1 ∧ dxi+1 ∧ ... ∧ dxn .
b) r−n γ where r2 = (x1 )2 + ... + (xn )2 .
c) sin(r2 ) Σ_{i=1}^n xi dxi .
a) Since d2 = 0, after applying the Leibniz rule we only see the differentiation of xi . That is,
dγ = Σ_{i=1}^n (−1)i−1 dxi ∧ (dx1 ∧ ... ∧ dxi−1 ∧ dxi+1 ∧ ... ∧ dxn ).
Putting this into standard form yields
dγ = Σ_{i=1}^n (−1)i−1 (−1)i−1 dx1 ∧ ... ∧ dxn = n dx1 ∧ ... ∧ dxn .

b) r−n is a 0-form so that


d(r−n γ) = dr−n ∧ γ + r−n dγ.
Computing dr−n separately,
∂r−n /∂xi = ∂/∂xi ((x1 )2 + ... + (xn )2 )−n/2 = −nxi ((x1 )2 + ... + (xn )2 )−n/2−1 ;
dr−n = Σ_{i=1}^n (∂r−n /∂xi ) dxi = −nr−n−2 Σ_{i=1}^n xi dxi .
So,
dr−n ∧ γ = (−nr−n−2 Σ_{i=1}^n xi dxi ) ∧ (Σ_{i=1}^n (−1)i−1 xi dx1 ∧ ... ∧ dxi−1 ∧ dxi+1 ∧ ... ∧ dxn ).

Let us look at xi dxi ∧γ. Since each term in γ but one contains dxi , taking the wedge product
with these vanishes. So, we get
dr−n ∧ γ = −nr−n−2 Σ_{i=1}^n (−1)i−1 (xi )2 dxi ∧ dx1 ∧ ... ∧ dxi−1 ∧ dxi+1 ∧ ... ∧ dxn .
Putting this into standard form as before gives
dr−n ∧ γ = −nr−n−2 Σ_{i=1}^n (xi )2 dx1 ∧ ... ∧ dxn = −nr−n dx1 ∧ ... ∧ dxn .

Now note that this is exactly −r−n dγ, so d(r−n γ) = 0.

c) Let ω = Σ_{i=1}^n xi sin(r2 ) dxi so that ωi = xi sin(r2 ). Then,
∂ωi /∂xj = xi ∂/∂xj sin((x1 )2 + ... + (xn )2 ) = 2xi xj cos(r2 ).
Observe that this is symmetric in the indices. By 2c), we have
dω = Σ_{1≤i<j≤n} (∂ωj /∂xi − ∂ωi /∂xj ) dxi ∧ dxj = 0.

TA comment: (c): what happens when i = j?

Not sure what the issue is... if i = j then the term inside the parenthesis for dω is zero, so it
doesn’t appear in the sum? So it’s fine?

HW 9
Warner: Chapter 2: 13, 16; Chapter 4: 12.

2.13. Let V be an n-dimensional real inner product space. We extend the inner product from V to all of ΛV by setting the inner product of elements which are homogeneous of different degrees equal to zero, and by setting
hw1 ∧ ... ∧ wp , v1 ∧ ... ∧ vp i = det(hwi , vj i)
and then extending bilinearly to all of Λp V . Prove that if e1 , ..., en is an orthonormal basis of V then the corresponding basis 2.6(1) of ΛV is an orthonormal basis for ΛV .
Since Λn V is one-dimensional, Λn V \ {0} has two components. An orientation on V is a choice of component of Λn V \ {0}. If V is an oriented inner product space, then there is a linear transformation
? : ΛV → ΛV,
called star, which is well-defined by the requirement that for any orthonormal basis e1 , ..., en of V (in particular, for any re-ordering of a given basis),
?(1) = ±e1 ∧ ... ∧ en ,   ?(e1 ∧ ... ∧ en ) = ±1,
?(e1 ∧ ... ∧ ep ) = ±ep+1 ∧ ... ∧ en ,
where one takes "+" if e1 ∧ ... ∧ en lies in the component of Λn V \ {0} determined by the orientation and "−" otherwise. Observe that
? : Λp V → Λn−p V.
Prove that on Λp V ,
?? = (−1)p(n−p) .
Also prove that for arbitrary v, w ∈ Λp V , their inner product is given by
hv, wi = ?(w ∧ ?v) = ?(v ∧ ?w).
hv, wi = ?(w ∧ ?v) = ?(v ∧ ?w).
Solution: Since the inner product of elements of different degrees is automatically zero, we only
need to check the inner product on basis vectors of the same degree. Let 1 ≤ i1 < ... < ik ≤ n and
1 ≤ j1 < ... < jk ≤ n and consider the basis vectors
w = ei1 ∧ ... ∧ eik v = ej1 ∧ ... ∧ ejk .
Suppose that im 6= jm for some 1 ≤ m ≤ k. In this case, we can actually find an m for which im 6= jl
for all l = 1, ..., k. Let m be the smallest index such that im 6= jm . WLOG we may assume that
im < jm . Since il = jl for all l = 1, ..., m − 1, we have that im 6= jl for these l. Now by our ordering
of indices, im < jm < jm+1 < ... < jk . So, im 6= jl for l = m, ..., k. Consider the entries heim , ejl i for
this particular m and l = 1, ..., k. Since im 6= jl , these are all zero. Hence there is a row in (hwi , vj i)
with all zero entries. By computing the determinant via a cofactor expansion along this row, we see
that it is zero. So, if w 6= v then hw, vi = 0. If w = v, then im = jm for all 1 ≤ m ≤ k. It follows
that (hwi , vj i) is the identity matrix and has determinant 1. So, the basis vectors are orthonormal.

To show that ?? = (−1)p(n−p) we choose an arbitrary orthonormal basis A = {e1 , ..., en } of V


and compute directly. Observe that the choice of orientation does not matter since we apply ? twice
– if it is negative, we’ll just get a (−1)2 factor. So, we assume that e1 , ..., en is positively oriented.
Then,
?(?(e1 ∧ ... ∧ ep )) = ?(ep+1 ∧ ... ∧ en ).
Note that B = {ep+1 , ..., en , e1 , ..., ep } is an orthonormal basis, so by definition of ?
?(ep+1 ∧ ... ∧ en ) = ±e1 ∧ ... ∧ ep
where the sign is dependent on the orientation of B. It immediately follows that ? is involutive up
to a sign. We assumed that A has positive orientation. So, we just need to reorder
ep+1 ∧ ... ∧ en ∧ e1 ∧ ... ∧ ep = (ep+1 ∧ ... ∧ en ) ∧ (e1 ∧ ... ∧ ep )

so that the indices increase, and see what sign appears. Since this is the wedge between an (n − p)-
vector and a p-vector, by the Koszul sign rule
ep+1 ∧ ... ∧ en ∧ e1 ∧ ... ∧ ep = (−1)p(n−p) e1 ∧ ... ∧ en .
So, if (−1)p(n−p) = 1 then B is positively oriented, and if (−1)p(n−p) = −1 then B is negatively
oriented. Either way, the sign in front of e1 ∧ ... ∧ ep after applying ? is equal to (−1)p(n−p) . Thus,
?(ep+1 ∧ ... ∧ en ) = ±e1 ∧ ... ∧ ep = (−1)p(n−p) e1 ∧ ... ∧ ep .
To show the last part, first adopt the convention that I = (i1 , ..., ip ) where 1 ≤ i1 < ... < ip ≤ n and eI = ei1 ∧ ... ∧ eip . Then, an arbitrary element of Λp V is written w = Σ_I wI eI for scalars wI .
I 0 = (i01 , ..., i0n−p ) with 1 ≤ i01 < ... < i0n−p ≤ n where {i01 , ..., i0n−p } = {1, ..., n} \ {i1 , ..., ip }. Then
?eI = (−1)I eI 0 . The constant (−1)I is defined by
eI ∧ eI 0 = (−1)I e1 ∧ ... ∧ en .
We assume that {e1 , ..., en } is positively oriented. Since ? is linear,
?(w ∧ ?v) = Σ_I v I [?(w ∧ ?eI )] = Σ_I v I (−1)I [?(w ∧ eI 0 )]
          = Σ_J Σ_I wJ v I (−1)I [?(eJ ∧ eI 0 )] = Σ_I wI v I (−1)I [?(eI ∧ eI 0 )]
          = Σ_I wI v I ((−1)I )2 [?(e1 ∧ ... ∧ en )] = Σ_I wI v I
where we have used the fact that eJ ∧ eI 0 6= 0 if and only if J ∩ I 0 = ∅. This implies that J = I. On
the other hand,
hw, vi = Σ_J Σ_I wJ v I heJ , eI i = Σ_I wI v I .
So,
hw, vi = ?(w ∧ ?v).
By symmetry of the inner product we get
?(w ∧ ?v) = hw, vi = hv, wi = ?(v ∧ ?w).
2.16. [Cartan lemma] Let p ≤ d, and let ω1 , ..., ωp be 1-forms on M d which are linearly independent
pointwise. Let θ1 , ..., θp be 1-forms on M such that
Σ_{i=1}^p θi ∧ ωi = 0.
Prove that there exists C ∞ functions Aij on M with Aij = Aji such that
θi = Σ_{j=1}^p Aij ωj ,   i = 1, ..., p.

Solution: Since {ω1 , ..., ωp } is a linearly independent set and p ≤ d = dim(Λ1 V ), we can extend it to a basis {ω1 , ..., ωd }. Then, for each 1 ≤ i ≤ p,
θi = Σ_{j=1}^d Aij ωj .

By taking the wedge product with ωi , we see that


θi ∧ ωi = Σ_{j=1}^d Aij ωj ∧ ωi = Σ_{j<i} Aij ωj ∧ ωi − Σ_{j>i} Aij ωi ∧ ωj .

By hypothesis, the sum over i is zero. Collecting like terms gives


0 = Σ_{i=1}^p θi ∧ ωi = Σ_{i<j≤p} (Aji − Aij ) ωi ∧ ωj − Σ_{i≤p<j} Aij ωi ∧ ωj

(For a quick sanity check, the first sum on the RHS ranges over p(p − 1)/2 indices and has two terms
each, accounting for p(p − 1) terms. The second sum ranges over p(d − p) terms; in total the two
sums give p(d − 1). This is exactly what we expect by adding p sums, each having d − 1 terms since
one of them is zero). By linear independence, we know that all the coefficients must be zero. Hence,
Aij = 0 for all 1 ≤ i ≤ p and j > p, and Aij = Aji otherwise.
4.12. If α and β are closed differential forms, prove that α ∧ β is closed. If, in addition, β is exact,
prove that α ∧ β is exact.

Solution: Let α be a k-form and β be an l-form. Then,


d(α ∧ β) = dα ∧ β + (−1)k α ∧ dβ = 0
since dα = dβ = 0 owing to closedness. Now also suppose that β = dγ for some (l − 1)-form γ. It
follows that
d((−1)k α ∧ γ) = (−1)k dα ∧ γ + (−1)2k α ∧ dγ = α ∧ β
once more owing to closedness of α. Hence, α ∧ β is exact.
Dan’s Problems.
Problem 1. Suppose V is a vector space with inner product h−, −i. Define an induced inner product
on Λ2 V . You may want to consider V finite dimensional with orthonormal basis e1 , ..., en . Then what property does the basis e1 ∧ e2 , e1 ∧ e3 , ..., e1 ∧ en , e2 ∧ e3 , ... of Λ2 V have? Suppose ξ1 ∧ ξ2
represents a parallelogram. What is the geometric interpretation of the norm kξ1 ∧ ξ2 k? What about
the inner product between two parallelograms?

Solution: We need only define h−, −iΛ2 V on decomposable elements and then extend bilinearly. For ξ1 , ξ2 , η1 , η2 ∈ V we define h−, −iΛ2 V by
hξ1 ∧ ξ2 , η1 ∧ η2 iΛ2 V = det(hξi , ηj i) = hξ1 , η1 ihξ2 , η2 i − hξ1 , η2 ihξ2 , η1 i.
In particular for i, j, k, l we have
hei ∧ ej , ek ∧ el iΛ2 V = hei , ek ihej , el i − hei , el ihej , ek i = δik δjl − δil δjk .
Notice that this agrees with the alternating rule. Since δij takes values in {0, 1}, for the above to
be 1 it must be that i = k ≠ j = l. So,
hei ∧ ej , ek ∧ el iΛ2 V = 1 ⇔ ei ∧ ej = ek ∧ el .
By the same logic,
hei ∧ ej , ek ∧ el iΛ2 V = −1 ⇔ ei ∧ ej = −ek ∧ el .
Notice, then, that one of ei ∧ ej and ek ∧ el is not a basis vector of Λ2 V (one has the proper ordering of indices whereas the other does not). Finally, for any ei ∧ ej and ek ∧ el the inner product takes values in {−1, 0, 1}. Hence, all others are zero. It follows that {e1 ∧ e2 , ..., e1 ∧ en , e2 ∧ e3 , ...} is an orthonormal basis of Λ2 V . The norm k − kΛ2 V is defined as the square root of h−, −iΛ2 V . Hence,
kξ1 ∧ ξ2 k2Λ2 V = hξ1 ∧ ξ2 , ξ1 ∧ ξ2 i = hξ1 , ξ1 ihξ2 , ξ2 i − hξ1 , ξ2 i2
             = kξ1 k2 kξ2 k2 − hξ1 , ξ2 i2
             = kξ1 k2 kξ2 k2 − kξ1 k2 kξ2 k2 cos(θ)2 = kξ1 k2 kξ2 k2 sin(θ)2
where θ is the angle between ξ1 and ξ2 . In Rn , kξ1 kkξ2 k sin(θ) gives the area of the parallelogram
spanned by ξ1 and ξ2 . We use this interpretation; hence kξ1 ∧ ξ2 k represents the area of the
parallelogram ξ1 ∧ ξ2 . The inner product hξ1 ∧ ξ2 , η1 ∧ η2 i of two parallelograms is just the product of
their areas and the cosine of the angle between the parallelograms (which has a reasonable definition
in R3 , but I’m not sure about other spaces). If both ξ1 ∧ξ2 and η1 ∧η2 lie in the same two dimensional
V2
subspace, we can say a little more. Consider the case when V has dimension 2. Then V is one
dimensional and the inner product is just the (signed) product of the areas.
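A numerical sanity check of the area interpretation (my addition, assuming numpy): the Gram determinant of two random vectors in R3 should equal the squared length of their cross product.

    # Check: det Gram(ξ1, ξ2) = ⟨ξ1,ξ1⟩⟨ξ2,ξ2⟩ − ⟨ξ1,ξ2⟩² = ‖ξ1 × ξ2‖².
    import numpy as np

    rng = np.random.default_rng(0)
    xi1, xi2 = rng.standard_normal(3), rng.standard_normal(3)

    gram = np.array([[xi1 @ xi1, xi1 @ xi2],
                     [xi2 @ xi1, xi2 @ xi2]])
    area_sq = np.linalg.norm(np.cross(xi1, xi2))**2
    print(np.isclose(np.linalg.det(gram), area_sq))   # True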

Problem 2. Let V, W be vector spaces and T : V → W a linear map. Recall that for each k ∈ Z≥0
there is an induced map
Λk T : Λk V → Λk W
characterized by Λk T (ξ1 ∧ ... ∧ ξk ) = T ξ1 ∧ ... ∧ T ξk . Suppose V = W is finite dimensional and T is diagonalizable. Compute the trace of Λk T . Compute
Σ_{k=0}^n (−1)k tk Tr Λk T

where t is a “dummy variable”. Can you formulate and prove a formula which holds even if T is not
diagonalizable?

Solution: Let λ1 , ..., λn be the (not necessarily distinct) eigenvalues of T . Then, we can find a basis {ξ1 , ..., ξn } where T (ξi ) = λi ξi . For each k we construct a basis of Λk V with elements ξj1 ∧ ... ∧ ξjk where 1 ≤ j1 < ... < jk ≤ n. Then,
Λk T (ξj1 ∧ ... ∧ ξjk ) = T ξj1 ∧ ... ∧ T ξjk = λj1 · · · λjk ξj1 ∧ ... ∧ ξjk
so that this is a basis of eigenvectors of Λk V . To compute the trace we just need to add all the eigenvalues. Define by pk (x1 , ..., xn ) the k-th elementary symmetric polynomial
pk (x1 , ..., xn ) = Σ_{1≤j1 <...<jk ≤n} xj1 · · · xjk
for 1 ≤ k ≤ n, identically 1 for k = 0, and 0 for k > n. Observe that each choice of 1 ≤ j1 < ... < jk ≤ n corresponds to a unique choice of basis vector in Λk V . Therefore,
Tr Λk T = pk (λ1 , ..., λn ) = Σ_{1≤j1 <...<jk ≤n} λj1 · · · λjk .

Now note the following: consider a diagonal n × n matrix A. The characteristic polynomial is
f (x) = det(xI − A) = Π_{k=1}^n (x − ak ).

where ak denotes the k-th diagonal entry. We inductively show that

f (x) = Σ_{k=0}^n (−1)k xn−k pk (a1 , ..., an ).

This is clearly true when n = 1. Note that


 
an pk (a1 , ..., an−1 ) = an (Σ_{1≤j1 <...<jk ≤n−1} aj1 · · · ajk ) = Σ_{1≤j1 <...<jk <jk+1 =n} aj1 · · · ajk+1 .

Moreover,
pk+1 (a1 , ..., an−1 ) = Σ_{1≤j1 <...<jk <jk+1 ≤n−1} aj1 · · · ajk+1 = Σ_{1≤j1 <...<jk <jk+1 <n} aj1 · · · ajk+1 .

By adding these identities it follows that


an pk (a1 , ..., an−1 ) + pk+1 (a1 , ..., an−1 ) = pk+1 (a1 , ..., an ).

Now suppose the inductive hypothesis holds for n − 1. Then,


Π_{k=1}^n (x − ak ) = (x − an ) Π_{k=1}^{n−1} (x − ak ) = (x − an ) Σ_{k=0}^{n−1} (−1)k xn−1−k pk (a1 , ..., an−1 )
= Σ_{k=0}^{n−1} (−1)k xn−k pk (a1 , ..., an−1 ) + Σ_{k=0}^{n−1} (−1)k+1 xn−1−k an pk (a1 , ..., an−1 )
= Σ_{k=0}^{n−2} (−1)k+1 xn−(k+1) pk+1 (a1 , ..., an−1 ) + Σ_{k=0}^{n−2} (−1)k+1 xn−1−k an pk (a1 , ..., an−1 ) + xn + (−1)n Π_{k=1}^n ak
= xn + (−1)n Π_{k=1}^n ak + Σ_{k=0}^{n−2} (−1)k+1 xn−1−k pk+1 (a1 , ..., an )
= Σ_{k=0}^n (−1)k xn−k pk (a1 , ..., an )

thus proving the inductive step. From this, we deduce


det(tI − T ) = Σ_{k=0}^n (−1)k tn−k pk (λ1 , ..., λn ) = Σ_{k=0}^n (−1)k tn−k Tr Λk T.

To write this in the form the problem asks for,

Σ_{k=0}^n (−1)k tk Tr Λk T = tn Σ_{k=0}^n (−1)k (1/t)n−k Tr Λk T = tn det((1/t)I − T ).

This has a removable discontinuity at t = 0, which we can fix by defining it there to be its limiting value, namely 1 (the k = 0 term of the left-hand side). To generalize this to non-diagonalizable endomorphisms, define Φ : End(V ) → Poly≤n (R) by
Φ(T ) = Σ_{k=0}^n (−1)k tk Tr Λk T − tn det((1/t)I − T )

where Poly≤n (R) is the vector space of polynomials of degree less than or equal to n and coefficients
in R. We have that Φ(T ) = 0 for any diagonalizable T ∈ End(V ). If these linear transformations
are dense in End(V ), then we are done. Because Φ is continuous in T , we can take a sequence Tj
of diagonalizable endomorphisms such that Tj → T , and conclude that 0 = Φ(Tj ) → Φ(T ). The
issue is that this is not a dense subset. Instead we complexify by tensoring C to each of End(V )
and Poly≤n (R). Since none of the above work made use of the fact that V is a vector space over
R, the same result holds for diagonalizable T with complex values. Then, we can apply our density
argument to show this formula holds for all T : V → V linear with V a vector space over C.
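A numerical check of the identity (my addition, assuming numpy): Σ_k (−1)k tk Tr Λk T = det(I − tT ) for a random, not-necessarily-diagonalizable T , where Tr Λk T is computed as the sum of k × k principal minors. The helper trace_wedge is my own name.

    # Check Σ_k (−1)^k t^k Tr(Λ^k T) = t^n det((1/t)I − T) = det(I − tT).
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(1)
    n = 4
    T = rng.standard_normal((n, n))          # generic, not assumed diagonalizable
    t = 0.7

    def trace_wedge(T, k):
        """Sum of k×k principal minors = Tr(Λ^k T)."""
        n = T.shape[0]
        return sum(np.linalg.det(T[np.ix_(idx, idx)])
                   for idx in combinations(range(n), k)) if k else 1.0

    lhs = sum((-1)**k * t**k * trace_wedge(T, k) for k in range(n + 1))
    rhs = np.linalg.det(np.eye(n) - t * T)
    print(np.isclose(lhs, rhs))              # True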

Problem 4. In this problem you will study differential forms on Euclidean 3-space E3 and relate the
exterior derivative d to div, grad, and curl. (Euclidean space E3 is the standard affine space A3 in
which the underlying vector space R3 is endowed with the standard inner product.) Suppose
ξ = P (x, y, z) ∂/∂x + Q(x, y, z) ∂/∂y + R(x, y, z) ∂/∂z
is a vector field on E3 . We associate a 1-form αξ and a 2-form βξ by the formulas
αξ = P dx + Qdy + Rdz
βξ = P dy ∧ dz + Qdz ∧ dx + Rdx ∧ dy
These formulas give isomorphisms
X(E3 ) ' Ω1 (E3 ) ' Ω2 (E3 ),

where X(E3 ) is the vector space of vector fields on E3 , i.e., functions E3 → R3 . Also, we can associate
a 3-form ωf to a function f : E3 → R by the formula
ωf = f (x, y, z) dx ∧ dy ∧ dz.
a) These isomorphisms are made pointwise, so belong to linear algebra. That is, they are
derived from similar isomorphisms for a 3-dimensional real inner product space V . Choose an orthonormal basis for V and define isomorphisms V ' V ∗ ' Λ2 V ∗ by imitating the
formulas above. Check that these isomorphisms are independent of the choice of basis.
Relate to the star operator you studied in Problem #13 in Warner? Can you generalize to
higher dimensions? What is the linear algebra manifestation of the identification of functions
and 3-forms stated above?
b) Identify the composition
Ω0 (E3 ) −d→ Ω1 (E3 ) → X(E3 )
with the gradient of a function. (The second map is the isomorphism above.) Generalize to
En for any n.
c) Identify the composition
X(E3 ) → Ω1 (E3 ) −d→ Ω2 (E3 ) → X(E3 ).
with the curl. (The first and last maps are the isomorphisms above.)
d) Identify the composition
X(E3 ) → Ω2 (E3 ) −d→ Ω3 (E3 ) → Ω0 (E3 ).
with the divergence.

Solution:
a) Let {e1 , e2 , e3 } be an orthonormal basis for V . Then there exists a dual basis {e1 , e2 , e3 } of
V ∗ defined by ei (ej ) = δji . We construct an isomorphism V ' V ∗ by
Σ_{i=1}^3 ai ei ↔ Σ_{i=1}^3 ai ei .

This is the same isomorphism given by the Riesz representation theorem, so is independent
of the choice of basis. With the same setup, we can define another isomorphism V ∗ ' Λ2 V ∗
by
ai ei ↔ a1 e2 ∧ e3 + a2 e3 ∧ e1 + a3 e1 ∧ e2 .
Recall that in three dimensions, with e1 ∧ e2 ∧ e3 positively oriented,
?(1) = e1 ∧ e2 ∧ e3 ? (e1 ∧ e2 ∧ e3 ) = 1
?(e1 ) = e2 ∧ e3 ? (e2 ) = e3 ∧ e1 ? (e3 ) = e1 ∧ e2
So, the isomorphism given is just
ai ei ↔ a1 e2 ∧ e3 + a2 e3 ∧ e1 + a3 e1 ∧ e2 = ?(ai ei ).
Now, ? : V ∗ = Λ1 V ∗ → Λ2 V ∗ is a linear map between vector spaces of the same dimension (since dim V = 3). By Rank-Nullity, to show that ? is an isomorphism we just need to show it is surjective. But {e1 ∧ e2 , e1 ∧ e3 , e2 ∧ e3 } is a basis of Λ2 V ∗ , and from the above we see
that all of these lie in the image of ?. Hence, it is surjective, and therefore an isomorphism.
The isomorphism just identifies an element v ∈ V ∗ with its image under ?. This definition
is independent of the choice of basis. In higher dimensions you should be able to construct
isomorphisms Λk V ∗ ' Λn−k V ∗ , also related using ?. For the association f ↔ f dx ∧ dy ∧ dz, look at the association pointwise. Then, you get the isomorphism R ' Λ3 V ∗ .

b) Let f ∈ Ω0 (E3 ). Then,


df = (∂f /∂x) dx + (∂f /∂y) dy + (∂f /∂z) dz.
So, by applying the isomorphism Ω1 (E3 ) → X(E3 ), we get
(∂f /∂x) ∂/∂x + (∂f /∂y) ∂/∂y + (∂f /∂z) ∂/∂z = (∂f /∂x, ∂f /∂y, ∂f /∂z).
By definition the gradient is ∇f = (∂x f, ∂y f, ∂z f ), so we are done.
c) Given ξ ∈ X(E3 ), passing it through the chain of compositions gives
P ∂/∂x + Q ∂/∂y + R ∂/∂z → P dx + Q dy + R dz
−d→ dP ∧ dx + dQ ∧ dy + dR ∧ dz
= ((∂P /∂y) dy + (∂P /∂z) dz) ∧ dx + ((∂Q/∂x) dx + (∂Q/∂z) dz) ∧ dy + ((∂R/∂x) dx + (∂R/∂y) dy) ∧ dz
= (∂R/∂y − ∂Q/∂z) dy ∧ dz + (∂P /∂z − ∂R/∂x) dz ∧ dx + (∂Q/∂x − ∂P /∂y) dx ∧ dy
→ (∂R/∂y − ∂Q/∂z) ∂/∂x + (∂P /∂z − ∂R/∂x) ∂/∂y + (∂Q/∂x − ∂P /∂y) ∂/∂z
Therefore we get
(∂R/∂y − ∂Q/∂z, ∂P /∂z − ∂R/∂x, ∂Q/∂x − ∂P /∂y).
Recall that the curl of a vector field F = (P, Q, R) is
 
∇ × F = det [ i, j, k ; ∂x , ∂y , ∂z ; P, Q, R ] = (∂y R − ∂z Q, −∂x R + ∂z P, ∂x Q − ∂y P )
which is what we have above.
d) Let ξ ∈ X(E3 ). Then,
P ∂/∂x + Q ∂/∂y + R ∂/∂z → P dy ∧ dz + Q dz ∧ dx + R dx ∧ dy
−d→ dP ∧ dy ∧ dz + dQ ∧ dz ∧ dx + dR ∧ dx ∧ dy
= (∂P /∂x) dx ∧ dy ∧ dz + (∂Q/∂y) dy ∧ dz ∧ dx + (∂R/∂z) dz ∧ dx ∧ dy
= (∂P /∂x + ∂Q/∂y + ∂R/∂z) dx ∧ dy ∧ dz
→ ∂P /∂x + ∂Q/∂y + ∂R/∂z = Div F
where F = (P, Q, R) ∈ X(E3 ).
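As an extra check of the dictionary above (my addition, assuming sympy): under these identifications the relation d ◦ d = 0 becomes curl(grad f ) = 0 and div(curl ξ) = 0, which can be verified symbolically for arbitrary smooth f, P, Q, R.

    # d∘d = 0 manifests as curl∘grad = 0 and div∘curl = 0 in E³.
    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = sp.Function('f')(x, y, z)
    P, Q, R = [sp.Function(n)(x, y, z) for n in ('P', 'Q', 'R')]

    grad = lambda h: [sp.diff(h, v) for v in (x, y, z)]
    curl = lambda F: [sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)]
    div  = lambda F: sum(sp.diff(F[i], v) for i, v in enumerate((x, y, z)))

    print([sp.simplify(c) for c in curl(grad(f))])   # [0, 0, 0]
    print(sp.simplify(div(curl([P, Q, R]))))         # 0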
Problem 5. Consider the 1-form
ω = xdy + ydx
on the affine plane A2 with standard coordinates x, y.
a) Compute dω.
b) Is there a function f so that ω = df ?
c) Now repeat for the form
ω ′ = (xdy − ydx)/(x2 + y 2 )
on the punctured affine plane A2 \ {0} with standard coordinates x, y.

Solution: This seems to be a repeat of a previous hw problem? Below is the same work.
a) Direct computation shows
dω = dx ∧ dy + x(d2 y) + dy ∧ dx + y(d2 x) = 0.
b) Since ω is closed and A2 is contractible there exists an f such that ω = df . In particular,
f = xy works.
c) We first compute the partial derivatives of x/(x2 +y 2 ). Those of −y/(x2 +y 2 ) follow similarly.
∂/∂x (x/(x2 + y 2 )) = ((x2 + y 2 ) − x(2x))/(x2 + y 2 )2 = (y 2 − x2 )/(x2 + y 2 )2
∂/∂y (x/(x2 + y 2 )) = −2xy/(x2 + y 2 )2
Then,
dω ′ = ((y 2 − x2 )/(x2 + y 2 )2 dx + (−2xy)/(x2 + y 2 )2 dy) ∧ dy
     + ((2xy)/(x2 + y 2 )2 dx + (y 2 − x2 )/(x2 + y 2 )2 dy) ∧ dx
     = ((y 2 − x2 )/(x2 + y 2 )2 − (y 2 − x2 )/(x2 + y 2 )2 ) dx ∧ dy = 0
So, ω ′ is closed. However, it is not exact. We can naı̈vely try to write ω ′ = df for some f ,
and find that f = arctan(y/x). But, f is not smooth (it’s not even continuous).
Problem 6. What is the orientation double cover of RPn ? Of CPn ?

Solution: Recall that the orientation double cover M̃ → M of an orientable manifold M is just
M̃ = M × {0, 1} with projection onto the first coordinate. Now, RPn is orientable for odd n and
CPn is orientable for all n. So, we just need to investigate the orientation double cover of RPn
for even n. The antipodal map is a deck transformation with degree −1, and is thus orientation
reversing. It follows that S n → RPn is the orientation double cover of RPn .

TA comment: Proof that these spaces are/aren’t orientable?

We freely used these facts in class, so I didn’t know we had to prove it. Just use homology, though,
look at the n-th homology and see if it’s isomorphic to Z. If it is, a choice of generator induces a
(global) choice of orientation, corresponding to the upward/downward normal vector.

HW 10
Guillemin/Pollack: Chapter 4.4: 1, 2, 3, 8, 12; Chapter 4.7: 2, 3, 4, 7, 8, 9, 13.

4.4.1. Let Z be a finite set of points in X, considered as a 0-manifold. Fix an orientation of Z, an


assignment of orientation numbers σ(z) = ±1 to each z ∈ Z. Let f be any function on X, considered
as a 0-form, and check that
∫_Z f = Σ_{z∈Z} σ(z)f (z).

Solution: Consider the covering of Z by charts {({z}, xz )}z∈Z where xz : {z} → R0 . We take xz to
be an orientation preserving diffeomorphism. By definition,
∫_Z f = Σ_{z∈Z} ∫_{{z}} f = Σ_{z∈Z} ∫_{R0} (xz −1 )∗ f

Since f is just a function, the pullback is

(xz −1 )∗ f = f ◦ xz −1

which sends R0 → f (z). Now, the oriented integral on R0 of a function g : R0 → R is simply


∫_{R0} g = ±g(R0 )

whose sign depends on the chosen orientation. Finally,

∫_Z f = Σ_{z∈Z} ∫_{R0} f ◦ xz −1 = Σ_{z∈Z} σ(z)f (z).

4.4.2. Let X be an oriented k-dimensional manifold with boundary, and ω a compactly supported
k-form on X. Recall that −X designates the oriented manifold obtained simply by reversing the
orientation on X. Check that
∫_X ω = − ∫_{−X} ω.

Solution: First suppose that spt ω ⊂ U with (U, x) an orientation preserving chart of X. Let (Rk , ±)
denote Rk with orientation. By definition,
∫_X ω = ∫_{x(U )} (x−1 )∗ ω.

Next, consider the map ι̇ : (Rk , +) → (Rk , −) which simply reverses orientation. Then,
∫_{x(U )} (x−1 )∗ ω = − ∫_{ι̇◦x(U )} (ι̇−1 )∗ (x−1 )∗ ω = − ∫_{ι̇◦x(U )} (x−1 ◦ ι̇−1 )∗ ω

But, ι̇ ◦ x : U → (ι̇ ◦ x(U ), −) is an orientation-preserving map on −X so that by definition
− ∫_{ι̇◦x(U )} (x−1 ◦ ι̇−1 )∗ ω = − ∫_{−X} ω.

Together, this implies


∫_X ω = − ∫_{−X} ω.
If spt ω is not contained in the domain of a chart, simply use a partition of unity {ρi }i∈I subordinate to a covering of X by orientation-preserving charts {(Ui , xi )}i∈I and apply the above argument to each ρi ω and (Ui , xi ). This gives
∫_X ω = Σ_{i∈I} ∫_X ρi ω = − Σ_{i∈I} ∫_{−X} ρi ω = − ∫_{−X} ω.

4.4.3. Let c : [a, b] → X be a smooth curve, and let c(a) = p, c(b) = q. Show that if ω is the
differential of a function on X, ω = df , then
∫_a^b c∗ ω = f (q) − f (p).
Solution: Since pullbacks commute with differentials and ω = df ,
∫_a^b c∗ ω = ∫_a^b c∗ (df ) = ∫_a^b d(c∗ f ).
But, c∗ f = f ◦ c : [a, b] → R, so that
∫_a^b c∗ ω = ∫_a^b d(f ◦ c) = ∫_a^b (d(f ◦ c)/dx) dx = [f ◦ c]_a^b = f (q) − f (p).
4.4.8. Define a 1-form ω on the punctured plane R² \ {0} by
ω(x, y) = (−y/(x² + y²)) dx + (x/(x² + y²)) dy.
a) Calculate ∫_C ω for any circle C of radius r around the origin.
b) Prove that in the half-plane {x > 0}, ω is the differential of a function. [Hint: Try
arctan(y/x) as a random possibility.]
c) Why isn’t ω the differential of a function globally on R2 \ {0}?

Solution:
a) First, a choice of orientation on C is simply a direction to travel around – we choose the
counter-clockwise direction. We can parameterize C as the image of h : R → R2 \ {0} where
h(t) = (r cos t, r sin t). Next, pulling back the components of ω gives
 
(−y/(x² + y²)) ∘ h(t) = −r sin t / (r² cos² t + r² sin² t) = −(sin t)/r
(x/(x² + y²)) ∘ h(t) = r cos t / (r² cos² t + r² sin² t) = (cos t)/r.
Pulling back dx, dy gives
h*dx = d(x ∘ h) = d(r cos t) = −r sin t dt
h*dy = d(y ∘ h) = d(r sin t) = r cos t dt
Consequently,
h*ω = (−(sin t)/r)(−r sin t) dt + ((cos t)/r)(r cos t) dt = dt.
We then get
∫_C ω = ∫_0^{2π} h*ω = ∫_0^{2π} dt = 2π.
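As a numerical sanity check of part a) (not in the original solution; the radius below is an arbitrary
choice), one can approximate the line integral directly and observe that it is 2π regardless of r:

import numpy as np

r = 2.7                                        # arbitrary radius
t = np.linspace(0.0, 2*np.pi, 200001)
x, y = r*np.cos(t), r*np.sin(t)
dxdt, dydt = -r*np.sin(t), r*np.cos(t)
integrand = (-y/(x**2 + y**2))*dxdt + (x/(x**2 + y**2))*dydt
print(np.trapz(integrand, t))                  # approximately 6.283 = 2*pi, independent of r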
b) The partial derivatives of arctan(y/x) are
∂/∂x [arctan(y/x)] = [∂/∂x (y/x)] / (1 + (y/x)²) = (−y/x²)/(1 + y²/x²) = −y/(x² + y²)
∂/∂y [arctan(y/x)] = [∂/∂y (y/x)] / (1 + (y/x)²) = (1/x)/(1 + y²/x²) = x/(x² + y²)
Hence,
ω = (∂ arctan(y/x)/∂x) dx + (∂ arctan(y/x)/∂y) dy.
Since x > 0, arctan(y/x) is always well-defined, and in fact is a smooth function. Hence,
ω = d arctan(y/x).

c) Heuristically, if it were then we expect f(x, y) = arctan(y/x) + c for a constant c, but f is
not defined on the y-axis (where x = 0). More rigorously, consider the map γ : S¹ → R² \ {0} which
embeds S 1 as a circle of radius r centered at 0 and in local coordinates is given by h. Then,
for any differential 1-form ω on X,
∮_γ ω = ∫_{S¹} γ*ω.

Next, if g : R → S 1 is the map g(t) = (cos t, sin t) then


∫_{S¹} ω = ∫_0^{2π} g*ω.

Putting the two together yields


∮_γ ω = ∫_0^{2π} g*γ*ω = ∫_0^{2π} (γ ∘ g)*ω.

But, γ ◦ g = h, so we know the above is 2π. Were ω = df for some smooth f , then the
contour integral would be zero instead.

4.7.2. Prove the classical Green’s formula in the plane: let W be a compact domain in R2 with
smooth boundary ∂W = γ. Prove
∫_γ f dx + g dy = ∫_W (∂g/∂x − ∂f/∂y) dx ∧ dy.

Solution: By Stokes’ theorem,


∫_γ ω = ∫_{∂W} ω = ∫_W dω

for a 1-form ω on W . Of course, any 1-form ω takes the form ω = f dx + gdy for smooth functions
f, g : W → R. Therefore,
∫_γ f dx + g dy = ∫_W d(f dx + g dy) = ∫_W df ∧ dx + dg ∧ dy = ∫_W (∂g/∂x − ∂f/∂y) dx ∧ dy.
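A minimal numerical illustration of Green's formula (not part of the original solution): take W the
unit disk and f = −y, g = x, arbitrary choices for which ∂g/∂x − ∂f/∂y = 2 and both sides equal 2π.

import numpy as np

t = np.linspace(0.0, 2*np.pi, 200001)
x, y = np.cos(t), np.sin(t)
dxdt, dydt = -np.sin(t), np.cos(t)
lhs = np.trapz((-y)*dxdt + x*dydt, t)          # boundary integral of f dx + g dy

xs = np.linspace(-1.0, 1.0, 2001)
X, Y = np.meshgrid(xs, xs)
inside = (X**2 + Y**2) <= 1.0
rhs = np.sum(2.0*inside)*(xs[1] - xs[0])**2    # area integral of dg/dx - df/dy = 2
print(lhs, rhs)                                # both approximately 6.283 = 2*pi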

4.7.3. Prove the Divergence Theorem: Let W be a compact domain in R3 with smooth boundary,
and let F = (f1 , f2 , f3 ) be a smooth vector field on W . Then
∫_W Div F dx ∧ dy ∧ dz = ∫_{∂W} ⟨η, F⟩ dA.

(Here η is the outward normal to ∂W . See Exercises 13 and 14 of Section 4 for dA, and page 178
for Div F .)

Solution: By Exercise 4.4.14, the 2-form ⟨η, F⟩ dA is equal to the restriction of
ω = f1 dy ∧ dz + f2 dz ∧ dx + f3 dx ∧ dy
to ∂W. So,
∫_{∂W} ⟨η, F⟩ dA = ∫_{∂W} ω = ∫_W dω = ∫_W df1 ∧ dy ∧ dz + df2 ∧ dz ∧ dx + df3 ∧ dx ∧ dy
= ∫_W (∂f1/∂x + ∂f2/∂y + ∂f3/∂z) dx ∧ dy ∧ dz = ∫_W Div F dx ∧ dy ∧ dz
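A quick numerical illustration (not in the original solution; the field is an arbitrary choice): for
F = (x, y, z) on the unit ball, Div F = 3 so the volume integral is 4π, and on the unit sphere
⟨η, F⟩ = 1, so the flux is the sphere's area, also 4π.

import numpy as np

vol_integral = 3 * (4.0/3.0)*np.pi             # Div F = 3 integrated over the unit ball
phi = np.linspace(0.0, np.pi, 2001)
theta = np.linspace(0.0, 2*np.pi, 2001)
P, T = np.meshgrid(phi, theta, indexing="ij")
# on the unit sphere, eta is the position vector, so <eta, F> = 1 and dA = sin(phi) dphi dtheta
surf_integral = np.trapz(np.trapz(np.sin(P), theta, axis=1), phi)
print(vol_integral, surf_integral)             # both approximately 12.566 = 4*pi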

4.7.4. Prove the classical Stokes theorem: let S be a compact oriented two-manifold in R3 with
boundary, and let F = (f1 , f2 , f3 ) be a smooth vector field in a neighborhood of S. Prove
∫_S ⟨∇ × F, η⟩ dA = ∫_{∂S} f1 dx + f2 dy + f3 dz.

(Here η is the outward normal to S. For dA, see Exercises 13 and 14 of Section 4, and for ∇ × F ,
see page 178.)

Solution: By exercise 4.4.14,


     
⟨∇ × F, η⟩ dA = (∂f3/∂y − ∂f2/∂z) dy ∧ dz + (∂f1/∂z − ∂f3/∂x) dz ∧ dx + (∂f2/∂x − ∂f1/∂y) dx ∧ dy
where the 2-form on the right is restricted to S. Next define the 1-form ω by
ω = f1 dx + f2 dy + f3 dz.
Then,
dω = (∂f1/∂x dx + ∂f1/∂y dy + ∂f1/∂z dz) ∧ dx + (∂f2/∂x dx + ∂f2/∂y dy + ∂f2/∂z dz) ∧ dy
   + (∂f3/∂x dx + ∂f3/∂y dy + ∂f3/∂z dz) ∧ dz
   = (−∂f1/∂y dx ∧ dy + ∂f1/∂z dz ∧ dx) + (∂f2/∂x dx ∧ dy + ∂f2/∂z dz ∧ dy)
   + (∂f3/∂x dx ∧ dz + ∂f3/∂y dy ∧ dz)
   = (∂f3/∂y − ∂f2/∂z) dy ∧ dz + (∂f1/∂z − ∂f3/∂x) dz ∧ dx + (∂f2/∂x − ∂f1/∂y) dx ∧ dy
So, the restriction of dω to S is just ⟨∇ × F, η⟩ dA. Consequently,
∫_S ⟨∇ × F, η⟩ dA = ∫_S dω = ∫_{∂S} ω = ∫_{∂S} f1 dx + f2 dy + f3 dz

as desired.

4.7.7. Let X be compact and boundaryless, and let ω be an exact k-form on X, where k = dim X.
Prove that ∫_X ω = 0. [Hint: Apply Stokes’ theorem. Remember that X is a manifold with boundary,
even though ∂X is empty.]

Solution: Since ω is exact, there exists a (k − 1)-form α such that dα = ω. Then by Stokes’
theorem,
∫_X ω = ∫_X dα = ∫_{∂X} α = ∫_∅ α = 0.

4.7.8. Suppose that X = ∂W, W is compact, and f : X → Y is a smooth map. Let ω be a closed
k-form on Y, where k = dim X. Prove that if f extends to all of W, then ∫_X f*ω = 0.

Solution: Let F : W → Y denote an extension of f to W , that is F |X = f . Then,


∫_X f*ω = ∫_X F*ω = ∫_{∂W} F*ω.

Since ω is closed, dω = 0. By Stokes’ theorem,


∫_{∂W} F*ω = ∫_W d(F*ω) = ∫_W F*dω = 0.

4.7.9. Suppose that f0 , f1 : X → Y are homotopic maps and that the compact, boundaryless
manifold X has dimension k. Prove that for all closed k-forms ω on Y ,
∫_X f0*ω = ∫_X f1*ω.

[Hint: Previous exercise.]

Solution: Since f0 , f1 are homotopic, there exists a homotopy F : X ×I → Y such that F |X×{i} = fi
for i = 0, 1. Consider ∂F : [X × {0}] ∪ [X × {1}] → Y . Choose an orientation on X and the natural
orientation on I. Then X × {1} is positively oriented while X × {0} is negatively oriented. Since
∂F extends (trivially) to F ,
0 = ∫_{[−X×{0}]∪[X×{1}]} (∂F)*ω = ∫_{X×{1}} f1*ω + ∫_{−X×{0}} f0*ω = ∫_X f1*ω − ∫_X f0*ω

as desired.

TA comment: It’s not exactly precise to say that X × {1} is positively oriented and X × {0} is
negatively oriented. Instead, the induced orientation on X × {1} is equal to the chosen orientation
on X, and the induced orientation on X × {0} is equal to the opposite orientation. To me, “pos-
itively oriented” and “negatively oriented” sound like canonical orientations on X, which we don’t
necessarily have.

4.7.13. Suppose that Z0 and Z1 are compact, cobordant, p-dimensional submanifolds of X. Prove
that
∫_{Z0} ω = ∫_{Z1} ω
for every closed p-form ω on X.

Solution: Two manifolds Z0 and Z1 in X are cobordant if there exists a compact manifold with
boundary W , in X ×I, such that ∂W = [Z0 ×{0}]∪[Z1 ×{1}]. Choosing an orientation on X induces
one on X × I and therefore the components of ∂W . Moreover, since Z0 and Z1 are orientable, X
induces an orientation on them. Consider the diffeomorphisms (zi , i) 7→ zi from Zi × {i} → Zi for
i = 0, 1. Then for i = 1 this is orientation preserving whereas for i = 0 this is orientation reversing.
It follows that
∫_{Z1} ω − ∫_{Z0} ω = ∫_{∂W} ω = ∫_W dω = 0

since ω is closed. (Remark: I think actually the ordering of X × I versus I × X matters here, and I
should really be using the latter).

TA comment: You are correct that I × X is the correct one to use here (though the proof still
goes through if you use X × I).

Dan’s Problems.

Problem 1. Let
0 → V 0 → V → V 00 → 0
be a short exact sequence of finite dimensional real vector spaces.
(1) Let e01 , ..., e0k be a basis of V 0 and e001 , ..., e00l be a basis of V 00 . Choose vectors ẽ001 , ..., ẽ00l in V
which map to the corresponding vectors in V 00 . Show that ẽ001 , ..., ẽ00l , e01 , ..., e0k is a basis of
V . (Identify vectors in V 0 with their image in V 00 .)
(2) Let T : V → V be a linear map such that T (V 0 ) ⊂ V 0 . Then T induces an endomorphism
T 0 of V 0 and T 00 of V 00 . What is the relationship of det T to det T 0 and det T 00 ? What kind
of matrix represents T in the basis of (a)?

(3) Use the bases in (a) to define an isomorphism


Det V 00 ⊗ Det V 0 → Det V.
Prove that the isomorphism is independent of the choices.
(4) Use the isomorphism in (c) to give a rule which oriented the third of V, V 0 , V 00 if the other
two are oriented. You might call your rule “quotient before sub”.

Solution: Throughout I use e0m to denote vectors in V 0 and their image in V interchangeably.
a) Suppose that {ẽ001 , ..., ẽ00l , e01 , ..., e0k } is linearly dependent. Enumerate these from 1 to l + k
so that em = ẽ00m for 1 ≤ m ≤ l and em = e0m−l if l < m ≤ l + k. Then there exist am not
all zero such that am em = 0. WLOG we may assume that there exists an an 6= 0 where
1 ≤ n ≤ l. If not, then am+l e0m = 0 for 1 ≤ m ≤ k, a contradiction since {e01 , ..., e0k } is a
basis for V 0 . Then,
e_n = −(1/a^n) ( Σ_{m=1}^{n−1} a^m e_m + Σ_{m=n+1}^{l+k} a^m e_m ).

Since
0 → V′ —i→ V —j→ V″ → 0
is exact, and by definition e00m = j(em ),
e″_n = j(e_n) = −(1/a^n) ( Σ_{m=1}^{n−1} a^m j(e_m) + Σ_{m=n+1}^{l+k} a^m j(e_m) )
             = −(1/a^n) ( Σ_{m=1}^{n−1} a^m e″_m + Σ_{m=n+1}^{l} a^m e″_m ),

a contradiction since {e001 , ..., e00l } is a basis for V 00 . Now, since j is surjective and Ker j = Im i,
dim(Im i) + l = dim(Ker j) + dim(Im j) = dim(V ).
On the other hand, by injectivity of i,
dim(Im i) = dim(Ker i) + dim(Im i) = dim(V 0 ) = k
so that dim V = k + l. Since {e1 , ..., el+l } is a linearly independent set in V with cardinality
dim V , it must be a basis.
c) Define φ : Det V 00 ⊗ Det V 0 → Det V by
(e001 ∧ ... ∧ e00l ) ⊗ (e01 ∧ ... ∧ e0k ) 7→ ẽ001 ∧ ... ∧ ẽ00l ∧ e01 ∧ ... ∧ e0k
and extend linearly. By part a), we know that {ẽ001 , ..., ẽ00l , e01 , ..., e0k } is linearly independent, so
the wedge product of all of them is nonzero. The determinant line Det V is a one-dimensional
vector space, it follows that φ is surjective. The tensor product Det V 00 ⊗ Det V 0 is also a
one-dimensional vector space, so we see that φ is an isomorphism.

There are three choices we made


i) Choice of basis on V 00 .
ii) Choice of basis on V 0 .
iii) Choice of representatives ẽ00m for j −1 (e00m ).
We check that φ is independent of these choices.
i) (Choice of bases). Suppose we have another basis f100 , ..., fl00 on V 00 and f10 , ..., fk0 on V 0 .
Let f˜m
00
∈ V be such that j(f˜m 00
) = fm00
. Let P denote the change of basis matrix from
˜00 00
{fm } to {ẽm } and Q the change of basis matrix from {fm 0
} to {e0m }. From a previous
exercise, we have that
ẽ001 ∧ ... ∧ ẽ00l = det(P ) f˜100 ∧ ... ∧ f˜l00 ;
e01 ∧ ... ∧ e0l = det(Q) f10 ∧ ... ∧ fl0 .

Define φf : Det V 00 ⊗ Det V 0 → Det V exactly as we did φ, but replacing all instances
of e with f . Then,
φf ((e001 ∧ ... ∧ e00l ) ⊗ (e01 ∧ ... ∧ e0k )) = det(P ) det(Q) φf ((f100 ∧ ... ∧ fl00 ) ⊗ (f10 ∧ ... ∧ fk0 ))
= det(P ) det(Q) f100 ∧ ... ∧ fl00 ∧ f10 ∧ ... ∧ fk0
= (det(P ) f˜00 ∧ ... ∧ f˜00 ) ∧ (det(Q) f 0 ∧ ... ∧ f 0 )
1 l 1 k
= (ẽ001 ∧ ... ∧ ẽ00l ) ∧ (e01 ∧ ... ∧ e0k )
So, φf has the same action as φ on (e001 ∧ ... ∧ e00l ) ⊗ (e01 ∧ ... ∧ e0k ). It follows that these
are the same isomorphism.
ii) (Choice of representatives). We just need to prove that altering the choice of represen-
tative ẽ00m for any m does not change φ. If n many of the representatives are changed,
simply apply this result n times (once for each), and conclude that there is no change
in φ overall.

Since j is surjective we have that V / Ker j ' V 00 . By exactness Ker j = Im i so that


V / Im i ' V 00 . It follows that if ẽ00m , f˜m
00
∈ V are such that j(ẽ00m ) = e00m = j(f˜m
00
) then
f˜m = ẽm + g, where g ∈ Im i = Ker j. Define φf the same as φ before, but with ẽ00m
00 00

replaced by f˜m 00
. Then,
φf ((e001 ∧ ... ∧ e00l ) ⊗ (e01 ∧ ... ∧ e0k )) = (ẽ001 ∧ ... ∧ ẽ00m−1 ∧ f˜m
00
∧ ẽ00m+1 ∧ ... ∧ ẽ00l ) ∧ (e01 ∧ ... ∧ e0k )
= (ẽ001 ∧ ... ∧ ẽ00m−1 ∧ (ẽ00m + g) ∧ ẽ00m+1 ∧ ... ∧ ẽ00l )
∧ (e01 ∧ ... ∧ e0k )
= (ẽ001 ∧ ... ∧ ẽ00m−1 ∧ g ∧ ẽ00m+1 ∧ ... ∧ ẽ00l ) ∧ (e01 ∧ ... ∧ e0k )
+ (ẽ001 ∧ ... ∧ ẽ00l ) ∧ (e01 ∧ ... ∧ e0k )
= (ẽ001 ∧ ... ∧ ẽ00l ) ∧ (e01 ∧ ... ∧ e0k )
so that φf = φ. Note that the wedge product involving g disappears since g ∈
Span{e01 , ..., e0k }.
d) There are three possibilities
i) (V 00 and V 0 are oriented). Let {e00m } and {e0m } be positively oriented bases of V 00 and
V 0 respectively. Then, φ((e001 ∧ ... ∧ e00l ) ⊗ (e01 ∧ ... ∧ e0k )) is a basis of Det V and lies in
some path component. Choose this component as the orientation.
ii) (V and V 00 are oriented). Let {e00m } be a positively oriented basis of V 00 . Choose
representatives ẽ00m which map to e00m . Then {ẽ00m } is a linearly independent set in V
which we may extend to a positively oriented basis. The newly added vectors – call
them e01 , ..., e0k – span Im i. Since i is injective, this gives us a basis {i−1 (e01 ), ..., i−1 (e0k )}
of V 0 . Then i−1 (e01 )∧...∧i−1 (e0k ) is a basis of Det V 0 , and we choose the path component
it resides in as the orientation on V .
iii) (V and V 0 are oriented). Similar to the one above, choose a positively oriented basis
{e01 , ..., e0k } of V 0 and extend this to a positively oriented basis of V . Label the newly
added basis elements as ẽ001 , ..., ẽ00l . This gives us a basis {j(ẽ001 ), ..., j(ẽ00l )} of V 00 , which
we declare positively oriented in the same way as above.
TA comment: Nice!

Problem 3. Let C be the complex (affine) line1 with coordinate z. Write z = x + iy where x, y ∈ R
and i2 = −1. Recall the complex conjugate z̄ = x − iy. The coordinates x, y identify C with the
real affine plane A2 .
a) Write x, y in terms of z, z̄. We use z, z̄ as (complex) coordinates on A2 .

1It is a complex line: we navigate with a single complex number, just as we navigate on the real line with a single
real number. A complex plane requires two complex numbers to locate a point.

b) We use complex differential forms, which are linear combinations of dx, dy with complex
coefficients. Express dz, dz̄ in terms of dx, dy. Define the basis ∂/∂z, ∂/∂ z̄ dual to dz, dz̄
and express it in terms of ∂/∂x, ∂/∂y.
c) Let U ⊂ C be an open set. Show that a C 1 function f : U → C is analytic (holomorphic) if
and only if ∂f /∂ z̄ = 0.
d) Continuing, define the complex 1-form α ∈ Ω1 (U ; C) by
α = f (z, z̄)dz.
Show that dα = 0 if and only if f is holomorphic.
e) Apply Stokes’ theorem to the 1-form α on a bounded open subset of C whose closure has
smooth boundary. Is the result familiar from complex analysis?

Solution:
a) We have
x = (2x + iy − iy)/2 = (z + z̄)/2
y = (2iy + x − x)/(2i) = (z − z̄)/(2i).
b) To express dz, dz̄ in terms of dx, dy,
dz = dx + idy
dz̄ = dx − idy.
The basis {∂/∂z, ∂/∂ z̄} dual to {dz, dz̄} is defined by
(∂/∂z)(dz) = 1,   (∂/∂z)(dz̄) = 0,
(∂/∂z̄)(dz) = 0,   (∂/∂z̄)(dz̄) = 1,
and similarly for {∂/∂x, ∂/∂y}. Suppose that
∂/∂z = a¹₁ ∂/∂x + a²₁ ∂/∂y
∂/∂z̄ = a¹₂ ∂/∂x + a²₂ ∂/∂y
Then, ∂/∂z applied to dz, dz̄ expressed in terms of dx, dy gives
∂ ∂
1 = a11 (dx + idy) + a21 (dx + idy) = a11 + ia21
∂x ∂y
∂ ∂
0 = a11 (dx − idy) + a21 (dx − idy) = a11 − ia21
∂x ∂y
It follows that a11 = 1/2 and a21 = −i/2. Similarly testing ∂/∂ z̄,
∂ ∂
0 = a12 (dx + idy) + a22 (dx + idy) = a12 + ia22
∂x ∂y
∂ ∂
1 = a12 (dx − idy) + a22 (dx − idy) = a12 − ia22
∂x ∂y
from which we see a12 = 1/2 and a22 = i/2. The change of basis matrix is thus
   
A = [ a¹₁  a¹₂ ]  =  [  1/2   1/2 ]
    [ a²₁  a²₂ ]     [ −i/2   i/2 ]
c) Write f : U → C as f(x, y) = u(x, y) + iv(x, y) for C¹ real-valued functions u, v : V → R
(where we identify U with a subset V of R² in the usual way). Applying 2∂/∂z̄ to f gives
     
2 ∂f/∂z̄ = (∂/∂x + i ∂/∂y)(u + iv) = (∂u/∂x − ∂v/∂y) + i(∂u/∂y + ∂v/∂x).

Hence, ∂f /∂ z̄ = 0 if and only if f satisfies the Cauchy-Riemann equations (that is, f is


holomorphic).
d) Computing dα directly gives
 
dα = df(z, z̄) ∧ dz = (∂f/∂z dz + ∂f/∂z̄ dz̄) ∧ dz = (∂f/∂z̄) dz̄ ∧ dz.
Note that
dz̄ ∧ dz = (dx − idy) ∧ (dx + idy) = 2idx ∧ dy 6= 0.
Hence, dα = 0 if and only if ∂f /∂ z̄ = 0.
e) Let U ⊂ C be a bounded open set whose closure has smooth boundary. By Stokes’ theorem,
∫_{∂U} α = ∫_U dα.
If α = f dz where f is holomorphic, then by d) the integral vanishes. This is
Cauchy’s theorem.
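A small numerical illustration of e) (not part of the original solution; the integrands and the contour
are arbitrary choices): on the unit circle, a holomorphic integrand gives essentially zero, while one
with ∂f/∂z̄ ≠ 0 does not.

import numpy as np

t = np.linspace(0.0, 2*np.pi, 200001)
z = np.exp(1j*t)                               # unit circle, z'(t) = i e^{it}
dzdt = 1j*np.exp(1j*t)
holo = np.trapz(z**2 * dzdt, t)                # f = z^2 is holomorphic: integral ~ 0
non_holo = np.trapz(np.conj(z) * dzdt, t)      # f = zbar has df/dzbar = 1: integral ~ 2*pi*i
print(holo, non_holo)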

HW 11
Guillemin/Pollack: Chapter 3.2: 12, 14, 16, 17, 26; Chapter 3.3: 6, 8, 9, 11, 14, 16.
3.2.12. Suppose that f : X → Y is a diffeomorphism of connected oriented manifolds with boundary.
Show that if dfx : Tx X → Tf (x) Y preserves orientation at one point x, then f preserves orientation
globally.

Solution: Note that the orientation on the boundary is induced by the orientation of the inte-
rior, so we may assume that X and Y do not have boundary. We then prove the following instead:
If f : X → Y is a diffeomorphism of connected oriented manifolds then f is either orientation pre-
serving or orientation reversing.

Let P be the set of points in X such that dfp preserves orientation and M the set of points which
reverse it. Choose p ∈ P and orientation preserving charts (U, x) and (V, y) about p in X and f (p) in
Y respectively. Set g = y ◦f ◦x−1 : x(U ∩f −1 (V )) → y(f (U )∩V ), a composition of diffeomorphisms.
Then for any q ∈ x(U ∩ f −1 (V )), det(dgq ) 6= 0. Because dfp is orientation preserving, it follows
that det(dgq ) > 0 everywhere on x(U ∩ f −1 (V )). Hence, dfp preserves orientation on the open set
U ∩ f −1 (V ). This shows that P is open. In the exact same way, we can show that M is open. So
P and M are disjoint, closed, and open. Thus one is all of X and the other is empty.

In our specific case, we know that p ∈ P so that P is nonempty, and thus all of X. That is, f
preserves orientation globally.
3.2.14. Let X and Z be transversal submanifolds of Y , all three being oriented. Let X ∩ Z denote
the intersection manifold with the orientation prescribed by the inclusion map X → Y . Now suppose
that
dim X + dim Z = dim Y,
so X ∩ Z is zero dimensional. Then at any point y ∈ X ∩ Z,
Ty X ⊕ Ty Z = Ty Y.
Check that the orientation number of y in X ∩ Z is +1 if the orientations of X and Z add up - in
the order of the above sum - to the orientation of Y , and −1 if not.

Solution: Fix orientations on Tp X, Tp Z, and Tp Y . Recall that the differential of the inclusion
map is just the inclusion of tangent spaces. Let S = X ∩ Z = ι̇−1 (Z). Since S is zero dimensional
we have that Np (S; X) = Tp X/0, and there is a canonical isomorphism Tp X ' Np (S; X). Then,
dι̇p (Np (S; X)) ' Tp X ⊂ Tp Y (here, really, dι̇p is the induced map on the quotient). Since
dι̇p (Np (S; X)) ⊕ Tp Z ' Tp Y
and Tp Z and Tp Y are both oriented, we get an orientation on dι̇p (Np (S; X)) ' Tp X. Then, the
orientation induced on Tp S is +1 if this orientation agrees with the one already chosen for Tp X and
is −1 otherwise. This is exactly the same thing as checking if the orientations on Tp X and Tp Z “add
up”.

TA comment: here p = y, right?

Yeah I prefer using p for points in my manifold instead of x, y, etc.


3.2.16. Now drop the assumption of dimensional complementarity. Prove that whenever X t Z in
Y , the two orientations on the intersection manifold are related by
X ∩ Z = (−1)(codim X)(codim Z) Z ∩ X.
Note that whenever X and Z do have complementary dimensions, then (codim X)(codim Z) =
(dim X)(dim Z). [Hint: Show that the orientation of S = X ∩ Z is specified by the formula
[Ny (S; X) ⊕ Ny (S; Z)] ⊕ Ty S = Ty Y.

But for Z ∩ X, the first two spaces are interchanged.]

Solution: Let dim X = x, dim Y = y, and dim Z = z. Let oX , oY , and oZ be orientations of


Tp X, Tp Y , and Tp Z respectively. Now orient Np (S; X) and Np (S; Z) by
d(ι̇X )p (Np (S; X)) ⊕ Tp Z = Tp Y
d(ι̇Z )p (Np (S; Z)) ⊕ Tp X = Tp Y
call these orientations oN (S;X) and oN (S;Z) respectively. Then, we orient Tp S in two ways. First, we
can orient it by
Np (S; X) ⊕ Tp S = Tp X
and call this orientation o1 . Next orient it by
Np (S; Z) ⊕ Tp S = Tp Z
and call this orientation o2 . In what follows I naturally identify vectors in Np (S; X) with those in
d(ι̇X )p (Np (S; X)) (respectively with Z). Let {uy−z+1 , ..., ux }2 ∈ o1 a basis of Tp S. Then we can
extend this to a basis {u1 , ..., ux } ∈ oX of Tp X. By definition, {u1 , ..., uy−z } ∈ oN (S;X) . Next extend
{u1 , ..., ux } to a basis {u1 , ..., uy } ∈ oY of Tp Y . Reordering this, we have {ux+1 , ..., uy , u1 , ..., ux }
where {ux+1 , ..., uy } is a basis of Np (S; Z). This is obtained by x(y − x) transpositions, so that
{ux+1 , ..., uy } ∈ oN (S;Z) if x(y−x) is even and not if odd. Consider the basis {ux+1 , ..., uy , uy−z+1 , ..., ux }
of Tp Z. Then, concatenating this with our basis for Np (S; X) gives a basis {u1 , ..., uy−z , ux+1 , ...,
uy , uy−z+1 , ..., ux } of Tp Y . We know that {u1 , ..., uy } ∈ oY and the previous basis is obtained
from this one by (y − x)(x + z − y) transpositions. Hence, if both {ux+1 , ..., uy } ∈ oN (S;Z)
and {u1 , ..., uy−z , ux+1 , ..., uy , uy−z+1 , ..., ux } ∈ oY , then {uy−z+1 , ..., ux } ∈ o2 . The same conclu-
sion holds if neither are in these orientations. So, we must check the parity of x(y − x) against
(y − x)(x + z − y) = x(y − x) − (y − x)(y − z). Thus the orientations o1 and o2 agree if −(y − x)(y − z)
is even, equivalently if (codim X)(codim Z) is even. Hence
X ∩ Z = (−1)(codim X)(codim Z) Z ∩ X.
3.2.17. Compute the orientation of X ∩Z in the following examples by exhibiting positively oriented
bases at every point. [By convention, we orient the three coordinate axes so that the standard basis
vectors are positively oriented. Orient the xy plane so that {(1, 0, 0), (0, 1, 0)} is positive and the yz
plane so that {(0, 1, 0), (0, 0, 1)} is positive. Finally, orient S 1 and S 2 as the boundary of B 2 and
B 3 , respectively.]
a) X = x axis, Z = y axis (in R2 )
b) X = S 1 , Z = y axis (in R2 )
c) X = xy-plane, Z = z axis (in R3 )
d) X = S 2 , Z = yz plane (in R3 )
e) X = S 1 in xy plane, Z = yz plane (in R3 )
f) X = xy plane, Z = yz plane (in R3 )
g) X = hyperboloid x2 + y 2 − z 2 = a with preimage orientation (a > 0), Z = xy plane (in R3 )

Solution: First note for a) - b) and e), dim X + dim Z = dim Y where Y is the ambient space in
parenthesis. Thus we may apply Exercise 3.2.14. I also drew pictures to help visualize everything,
but most of the work is just applying the “quotient before sub” rule over and over.
a) Let p = (0, 0), the only intersection point. Note that {e1 } ∈ oX (p) and {e2 } ∈ oZ (p). Since
{e1 , e2 } ∈ oY (p), we have that X ∩ Z has orientation number +1.
b) There are two intersection points at p = (0, 1) and q = (0, −1). Observe that {−e1 } ∈ oX (p)
of Tp S 1 while {e1 } ∈ oX (q) of Tq S 1 . But, {−e1 , e2 } ∈
/ oY whereas {e1 , e2 } ∈ oY . Hence the
orientation number at p is −1 and at q is +1.
c) Let p = (0, 0, 0). Observe that {e1 , e2 } ∈ oX (p) while {e3 } ∈ oZ (p). Then, {e1 , e2 , e3 } ∈
oY (p) so that the orientation number of X ∩ Z is +1.
2This may be poor notation. I wanted to emphasize the point that the orientation is specified by putting the basis
vectors of the normal space first.

d) Let p ∈ S = ι̇−1 (Z). Recall that if f : X → Y is transverse to a submanifold Z then the


orientation on S = f −1 (Z) is given by

dfp (Np (S; X)) ⊕ Tf (p) Z = Tf (p) Y


Np (S; X) ⊕ Tp S = Tp X.

In the special case f = ι̇ : X → Y , the inclusion we have

Np (S; X) ⊕ Tp Z = Tp Y
Np (S; X) ⊕ Tp S = Tp X

(where the first Np (S; X) is viewed as a subspace of the tangent space Tp Y ). We have that
{e2 , e3 } ∈ oZ (p) and {e1 , e2 , e3 } ∈ oY (p). So, {e1 } ∈ oN (S;X) (p). Next, recall the boundary
orientation is given by

Span{ηp } ⊕ Tp S 2 = Tp B 3

where ηp is the outward unit normal at p of B 3 . Notice that for any p ∈ S, we have
{ηp , e1 , ηp × e1 } ∈ oB 3 (p). It follows that {e1 , ηp × e1 } ∈ oX (p). Then, from the above
{ηp × e1 } ∈ oS (p).
e) There are two intersection points at p = (0, 1, 0) and q = (0, −1, 0). As before, {−e1 } ∈
oX (p) and {e1 } ∈ oX (q). Then {−e1 , e2 , e3 } ∈ / oY (p) whereas {e1 , e2 , e3 } ∈ oY (q). Thus p
has orientation number −1 and q has +1.
f) Let p ∈ S. Since {e2 , e3 } ∈ oZ (p) and {e1 , e2 , e3 } ∈ oY (p) respectively, it follows that
{e1 } ∈ oN (S;X) (p). Additionally, {e1 , e2 } ∈ oX (p) so that {e2 } ∈ oS (p).
g) Let f : R3 → R be given by f (x, y, z) = x2 + y 2 − z 2 . Then X = f −1 (a) where a > 0. Recall
that a basis vector for Np (X; Y ) is ∇f (p). Next, dfp is

dfp (v) = h∇f (p), vi

for any v ∈ R3 . In particular, for v = ∇f (p) we have that

dfp (∇f (p)) = h∇f (p), ∇f (p)i = k∇f (p)k2 .

Since k∇f (p)k2 ∈ oR (here R carries the canonical orientation) it follows that ∇f (p) ∈
oN (X;Y ) (p). This induces an orientation on Tp X. In particular, for p ∈ S we have that
∇f (p) = cηp where c > 0 and ηp is the outward unit normal of S 1 in the xy plane. So, ηp
induces the same orientation on Tp X. Note that {ηp , e3 , ηp × e3 } ∈ oY (p). It follows that
{e3 , ηp × e3 } ∈ oX (p). One can deduce that {ηp , e3 × ηp } ∈ oZ (p). Since ηp = (cos θ, sin θ)
where θ is such that p = (cos θ, sin θ) we see that e3 × ηp = (− sin θ, cos θ, 0). The change of
basis matrix taking {ηp , e3 × ηp , e3 } to {e1 , e2 , e3 } is

 
cos θ − sin θ 0
M =  sin θ cos θ 0
0 0 1

which has positive determinant. Hence {ηp , e3 × ηp } ∈ oZ (p). Since {e3 , ηp , e3 × ηp } ∈ oY (p),
it follows that {e3 } ∈ oN (S;X) (p). Because {e3 , ηp × e3 } ∈ oX (p), we have {ηp × e3 } ∈ oS (p)
(i.e., the S is oriented clockwise).

TA comment: Good pictures!

3.2.26. Prove that every simply connected manifold is orientable. [Hint: Pick an “origin” x ∈ X,
and choose an orientation for the single space Tx X. If y ∈ X, orient Ty X as follows. Choose a se-
quence of open sets U1 , ..., Ul , each diffeomorphic to an open ball in Rk , such that each Ui ∩Ui+1 6= Ø,
and x ∈ U1 , y ∈ Ul . Successively orient the sets Ui , and show that the orientation induced on Ty X
from Ul does not depend on the choice of the Ui .]

Solution: Let p, q ∈ X and (Ui , xi ) be charts such that each Ui is diffeomorphic to some ball
Bi ⊂ Rk , p ∈ U1 , q ∈ Ul , and Ui ∩ Ui+1 6= Ø. Orient Tp X and B1 such that d(x1 )p preserves
orientation. This induces an orientation on Tp0 X for all other p0 ∈ U1 by declaring that d(x1 )p0 is
orientation preserving everywhere. To orient U2 , we choose some point p0 ∈ U1 ∩ U2 , so that Tp0 X is
already oriented. Apply the same procedure as before. Inductively proceed until an orientation on
Ul is chosen. Set p0,1 = 1 and notice that for any p1,2 ∈ U1 ∩ U2 , g1 : B1 → B1 orientation preserving
sending x1 (p) to x1 (p1,2 ), the map d(x1 )−1
x1 (p1,2 ) ◦ d(g1 )x(p0,1 ) ◦ d(x1 )p is orientation preserving. Call
this map h0,1 . Similarly construct hi,i+1 where pi,i+1 ∈ Ui ∩ Ui+1 and q = pl,l+1 . Then,
hl,l+1 ◦ ... ◦ h0,1 : Tp X → Tq X
is orientation preserving. It follows from this that if the Ui are changed with other open sets, we
still induce the same orientation on Tq X (since, although the above maps change, the composite is
still orientation preserving).
2
3.3.6. Show that z² = e^{−|z|²} for some complex number z.
Solution: Let f(z) = z² − e^{−|z|²}. The real and imaginary parts are Re(f)(x, y) = x² − y² − e^{−x²−y²}
and Im(f)(x, y) = 2xy respectively. Define u(x) = Re(f)(x, 0) and v(x) = Im(f)(x, 0). Then
v = 0 and u(x) = x² − e^{−x²}. So we need only find an x such that u(x) = 0. The derivative is
u′(x) = 2x(1 + e^{−x²}), which is increasing. Observe that u(−1) = u(1) ≈ 0.6321 while u(0) = −1.
Hence there are two real roots. I think the intended way to solve this problem is to show that f (z)
is homotopic to z 2 so that there are two roots, but I’m not entirely sure.

TA comment: You are correct that the intended way to solve this problem is to use oriented inter-
section theory; what you wrote down is correct but much harder to generalize to other settings in
differential topology than the intersection-theoretic proof.

I don’t like intersection theory.


3.3.8. According to Exercise 8, Chapter 2, Section 4, for any map f : S 1 → S 1 there exists a map
g : R → R such that
f (cos t, sin t) = (cos g(t), sin g(t)).
Moreover, g satisfies g(t + 2π) = g(t) + 2πq for some integer q. Show that deg(f ) = q.

Solution: First consider g̃ : R → R where g̃(t) = qt. Define G : [0, 1] × R → R by G(s, t) =


sg̃(t) + (1 − s)g(t). Then G is a homotopy between g̃ and g. Let f˜ be defined by f˜(cos t, sin t) =
(cos g̃(t), sin g̃(t)). Then G induces a homotopy F : [0, 1] × S 1 → S 1 of f˜ to f . So, we need only
check this in the case when g(t) = qt. If q = 0 then f is constant and has degree 0 = q, so assume
q ≠ 0. Let p ∈ S¹. Then f⁻¹(p) consists of exactly |q| points, evenly spaced (since g increases at
constant speed), and p is a regular value because dg is multiplication by the nonzero constant q.
Since both copies of S¹ are oriented counter-clockwise (by (cos t, sin t) and (cos g(t), sin g(t))),
each local degree equals the sign of q. Hence the degree is q.

For more details, refer to my solution of Exercise 2.4.8 in HW 7. There I prove the mod 2 de-
gree is q mod 2 by computing the degree, then taking it mod 2. So, I think I solved this problem
back then.

TA comment: Why is p a regular value?

I don’t remember this well enough, and I remember distinctly not liking this problem (and the
related ones like 2.4.8). I think you just use Sard’s theorem to say that almost every point is a
regular value... Also I think I have a typo – I think f −1 (q) should be f −1 (p).
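As a numerical sanity check of 3.3.8 (not part of the original solution; the perturbation term below
is an arbitrary choice), the degree is the total change of angle of (cos g(t), sin g(t)) divided by 2π:

import numpy as np

q = 3                                          # target degree
g = lambda t: q*t + 0.4*np.sin(5*t)            # satisfies g(t + 2*pi) = g(t) + 2*pi*q
t = np.linspace(0.0, 2*np.pi, 200001)
x, y = np.cos(g(t)), np.sin(g(t))
angle = np.unwrap(np.arctan2(y, x))            # continuous angle of f(cos t, sin t)
print((angle[-1] - angle[0]) / (2*np.pi))      # approximately 3.0 = q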
3.3.9. Prove that two maps of the circle S 1 into itself are homotopic if and only if they have the
same degree. This is a special case of a remarkable theorem of Hopf, which we will prove later. [Hint:
If g0 , g1 : R → R both satisfy g(t + 1) = g(t) + 2πq, then so do all the maps gs = sg1 + (1 − s)g0 .]

Solution: Since degree is homotopy invariant, if f0 ' f1 then deg(f0 ) = deg(f1 ). Thus we need
only show if deg(f0 ) = deg(f1 ) then f0 ' f1 . Let q = deg(f0 ) = deg(f1 ). Then by Exercise 3.3.8.,
there exist maps g̃i : R → R such that fi ◦ p = p ◦ g̃i where g̃i (t + 2π) = g̃i (t) + 2πq. Define gi : R → R
by gi (t) = g̃i (2πt). It follows that
gi (t + 1) = g̃i (2πt + 2π) = g̃i (2πt) + 2πq = gi (t) + 2πq.
Next for s ∈ R define gs : R → R by
gs (t) = sg1 (t) + (1 − s)g0 (t).
We easily see that
gs (t + 1) = sg1 (t + 1) + (1 − s)g0 (t + 1) = s(g1 (t) + 2πq) + (1 − s)(g0 (t) + 2πq) = gs (t) + 2πq.
As in 3.3.8, we can use these maps to define a homotopy fs – this is well defined and continuous
since all the fs have the same q-value.
3.3.11. Prove that a map f : S 1 → S 1 extends to the whole ball B = {|z| ≤ 1} if and only if
deg(f ) = 0. [Hint: If deg(f ) = 0, use Exercise 9 to extend f to the annulus A = {1/2 ≤ |z| ≤ 1}
so that, on the inner circle {|z| = 1/2}, the extended map is constant. By the trick of Exercise 1,
Chapter 1, Section 6, you can make the map constant on a whole neighborhood of the inner circle.
Now extend to the rest of B.]

Solution: First if f : S 1 → S 1 extends to all of B, then because S 1 = ∂B it follows that


deg(f ) = 0. Now suppose that deg(f ) = 0. Then by Exercise 3.3.9, f is homotopic to a con-
stant map g : S 1 → {q} ⊂ S 1 . That is, there exists F : [0, 1] × S 1 → S 1 smooth such that

F(0, p) = g(p) = q and F(1, p) = f(p). Now let ρ : R → R be smooth, increasing, and such that
ρ(t) = 0 for t ≤ 1/2 + ε and ρ(t) = 1 for t ≥ 1 − ε for some 0 < ε < 1/2. Define F̃ : [0, 1] × S¹ → S¹
by F̃(t, p) = F(ρ(t), p). Next define h : B → S¹ by h(p) = F̃(|p|, p/|p|) for p ≠ 0 and h(0) = q. If
0 < |p| ≤ 1/2 + ε then h(p) = F(0, p/|p|) = q (so h is constant, hence smooth, near 0), whereas if
|p| ≥ 1 − ε then h(p) = F(1, p/|p|) = f(p/|p|). In particular, when |p| = 1 then h(p) = f(p). So, h is
an extension of f. As a remark, we needed to smooth out the homotopy because if not, we would
only have a continuous extension and not a smooth one.
3.3.14. Suppose that W —g→ X —f→ Y is a sequence of maps with f ⋔ Z in Y. Show that if f ∘ g and
Z are appropriate for intersection theory, so are g and f⁻¹(Z). Prove that
I(f ∘ g, Z) = I(g, f⁻¹(Z)).
Solution: Suppose that f ◦ g is transverse to Z. Then for any w ∈ (f ◦ g)−1 (Z),
d(f ◦ g)w (Tw W ) + Tz Z = Tz Y
where z = (f ◦ g)(w). By transversality of f to Z we have for any x ∈ f −1 (Z) that
Tg(w) (f −1 (Z)) = (dfg(w) )−1 (Tz Z).
Let vx ∈ Tg(w) X. Then, dfg(w) (vx ) ∈ Tz Y and by transversality of f ◦ g to Z we find vw ∈ Tw W and
vz ∈ Tz Z such that
dfg(w) ◦ dgw (vw ) + vz = dfg(w) (vx ).
But this means that vx − dgw (vw ) ∈ (dfg(w) )−1 (Tz Z). Consequently, there exists some vx0 ∈
Tg(w) (f −1 (Z)) such that vx = dgw (vw ) + vx0 . In other words,
dgw (Tw W ) + Tg(w) (f −1 (Z)) = Tg(w) X
so that g t f −1 (Z). Since f t Z, dim f −1 (Z) = dim X + dim Z − dim Y . Because we assume
dim W + dim Z = dim Y ,
dim W + dim f −1 (Z) = dim W + dim X + dim Z − dim Y = dim Y − dim Y + dim X = dim X.
So, W, X, f −1 satisfy the required dimension requirement. Moreover, W is compact and f −1 (Z) is
a submanifold of X. We may need to assume X is connected, however; but I also think an oriented
intersection theory with the codomain not connected exists.

Next, we orient S = (f ◦ g)−1 (Z) in two ways. First, we can orient S 0 = f −1 (Z) via the preimage
orientation, and then S = g −1 (f −1 (Z)) via the preimage orientation as well. That is, let p ∈ S, let
q = g(p), r = (f ◦ g)(p), and orient S 0 by
Nq (S 0 ; X) ⊕ Tr Z = Tr Y
Nq (S 0 ; X) ⊕ Tq S 0 = Tq X
(the first normal space is really the pushforward via the differential of the inclusion). Then, orient
S by
Np (S; W ) ⊕ Tq S 0 = Tq X
Np (S; W ) ⊕ Tp S = Tp W.
Call this orientation o1 and the orientation on Np (S; W ) by o1N (S;W ) . Alternatively, we can orient
S directly via the preimage orientation:
Np (S; W ) ⊕ Tr Z = Tr Y
Np (S; W ) ⊕ Tp S = Tp W
Call this orientation o2 and the orientation on Np (S; W ) by o2N (S;W ) . Let s = dim S, s0 = dim S 0 ,
w = dim W , x = dim X, y = dim Y , and z = dim Z. Then s0 = x + z − y and s = w + z − y. Since o1
and o2 are both determined by oiN (S;W ) and oW (for i = 1, 2 respectively) using the two out of three
rule, we need only show that o1N (S;W ) = o2N (S;W ) . So let {u1 , ..., uy−z } ∈ o1N (S;W ) . We can extend
this to a basis {u1 , ..., ux } ∈ oX . Hence, {uy−z+1 , ..., ux } ∈ oS 0 . Next, using N (S 0 ; X) ⊕ T S 0 = T X
we see that {u1 , ..., uy−z } ∈ oN (S 0 ;X) (note: this really should be the pushforward of these vectors).

Extend this to a basis {u1 , ..., uy−z , vy−z+1 , ..., vy } ∈ oY . It follows that {vy−z+1 , ..., vy } ∈ oZ . Fi-
nally, using N (S; W ) ⊕ T Z = T Y we see that {u1 , ..., uy−z } ∈ o2N (S;W ) .

TA comment: Great work with a careful analysis of the orientations!

3.3.16. Let Z be a compact submanifold of Y , both oriented, with dim Z = 1/2 dim Y . Prove that
I(Z, Z) = I(Z × Z, ∆), where ∆ is the diagonal of Y . [Hint: Let ι̇ be the inclusion map of Z. Then
I(Z, Z) = I(ι̇, Z) = I(ι̇, ι̇) = (−1)dim Z I(ι̇ × ι̇, ∆) = (−1)dim Z I(Z × Z, ∆).
What happens when dim Z is odd?]

Solution: Per the hint, we have that


I(Z, Z) = I(ι̇, Z) = I(ι̇, ι̇), I(Z × Z, Diag(Y × Y )) = I(ι̇ × ι̇, Diag(Y × Y ))
all by definition. So we just need to show that the right hand side of both of these are equal. Next,
given two maps f : X → Y and g : Z → Y with X, Z compact, oriented and Y oriented then f t g
if and only if f × g t Diag(Y × Y ) and in this case
I(f, g) = (−1)dim Z I(f × g, Diag(Y × Y )).
We want to apply this with f = g = ι̇ : Z → Y , but we need to perturb one of these slightly to
achieve transversality. So, construct ˜ι̇ : Z̃ → Y homotopic to ι̇ where Z and Z̃ intersect transversally.
Then ι̇ t ˜ι̇ and we have
I(ι̇, ι̇) = I(˜ι̇, ι̇) = (−1)dim Z I(˜ι̇ × ι̇, Diag(Y × Y )) = (−1)dim Z I(ι̇ × ι̇, Diag(Y × Y )).
If dim Z is even, we are done. Finally, recall that if X, Z are compact submanifolds of Y then
I(X, Z) = (−1)(dim X)(dim Z) I(Z, X).
2
Thus, when dim Z is odd, I(Z, Z) = (−1)(dim Z) I(Z, Z) and necessarily I(Z, Z) = 0. But then
I(Z, Z) = 0 = −I(Z, Z) = I(Z × Z, Diag(Y × Y )).
So, in either way we get equality.

Dan’s Problems.

Problem 1. For each of the following construct an example or prove that none exists.
a) A map f : S 1 × S 1 → S 2 of degree 3.
b) A map f : S 2 → S 1 × S 1 of degree 3.
c) A map f : S 5 → S 5 of degree 3.
d) A map f : RP5 → RP5 of degree 3.
e) A map f : X → S n of any given degree d ∈ Z, where X is a compact oriented n-manifold.

Solution:
a) See e), apply with X = S 1 × S 1 , n = 2, and d = 3.
b) Since π2 (S 1 × S 1 ) = 0, every map f : S 2 → S 1 × S 1 is homotopic to a constant map. Since
constant maps have degree zero, no such map exists.
c) See e), apply with X = S 5 , n = 5, and d = 3.
d) I’m not sure...
e) Let p1 , ..., pd ∈ X and choose orientation preserving charts (U1 , x1 ), ..., (Ud , xd ) such that
all the Ui are disjoint and xi (pi ) = 0. Then there exist Vi ⊂ Ui such that (Vi , x|Vi ) is a
diffeomorphism onto Bri (0) for some ri > 0. Next, let q ∈ S n . Then there exist orientation
preserving diffeomorphisms gi : S n \ {−q} → Bri (0). Define f on each Vi by gi−1 ◦ x|Vi ,
a diffeomorphism, and define f to be identically −q on X \ {V1 ∪ ... ∪ Vd }. Since f is a
local, orientation preserving diffeomorphism it follows that the degree is d. (Note: I’m not
entirely sure if this is smooth, you may have to use a partition of unity or some smoothing
out process like in 3.3.11).

Problem 2. Let α : S n → S n be the antipodal map, and suppose f : S n → S n satisfies f (p) = f (α(p))
for all p ∈ S n . Prove that deg f is even.

Solution: Observe that if q is a regular value of f then f −1 (q) = {p1 , −p1 , ..., pN , −pN }. Since
the orientation number of each point ±pi is either ±1, we have three options: pi and −pi have the
same orientation number or opposite orientation numbers. In either case, the sum is an even number
(±2 in the former, 0 in the latter). Since the sum of even numbers is even, it follows that deg f is
even.

HW 12
Guillemin/Pollack: Chapter 3.3: 17, 19, 20; Chapter 3.5: 5, 7. I unfortunately did not
have much time to read over this homework when originally writing it, so there may be typos.
3.3.17. Prove that the Euler characteristic of an orientable manifold X is the same for all choices
of orientation. [Hint: Use Exercise 16 with Z being the diagonal of X × X and Y = X × X. Apply
Exercise 23, Section 2.]

Solution: Exercise 3.2.23 asserts that if X is an orientable manifold, then the product orienta-
tion on X × X is the same no matter the orientation on X. So, let us show this first. WLOG assume
that X is connected. Let o1X and o2X denote the two orientations on X and let oiX×X denote the
product orientation on X × X induced by oiX , i = 1, 2. For p ∈ X we have the following canonical
isomorphism
T(p,p) (X × X) = Tp X ⊕ Tp X.
Let {(v1 , 0), ..., (vn , 0), (0, w1 ), ..., (0, wn )} ∈ o1X×X (p, p). By definition, this means {v1 , ..., vn } and
{w1 , ..., wn } are both bases of Tp X, and either they are both in o1X (p) or both not. Since o2X is the
complement of o1X , this is the same as saying {v1 , ..., vn } and {w1 , ..., wn } are either both in o2X (p)
or both not. That is, {(v1 , 0), ..., (v1 , 0), (0, w1 ), ..., (0, wn )} ∈ o2X×X (p, p). The converse proceeds in
exactly the same way, so that o1X×X = o2X×X .

By Exercise 3.3.16, if Z is a compact submanifold of Y, both are oriented, and dim Z = ½ dim Y,
then
I(Z, Z) = I(Z × Z, Diag(Y × Y )).
Let X be compact and orientable. Per the hint, we apply this with Z = Diag(X ×X) and Y = X ×X.
Then,
χ(X) = I(Diag(X × X), Diag(X × X))
= I(Diag(X × X) × Diag(X × X), Diag((X × X) × (X × X))).
Fix an orientation on X, denoted oX . This induces a product orientation on X × X, denoted oX×X ,
and an orientation on Diag(X × X), denoted oDiag(X×X) via the diffeomorphism Diag(X × X) ' X.
We can then get an orientation o1 on Diag(X × X) × Diag(X × X) using the product orientation.
Finally, we can get an orientation o2 on Diag((X ×X)×(X ×X)) using the diffeomorphism Diag((X ×
X) × (X × X)) ' X × X. Note that, as a product orientation, o1 does not depend on the orientation
on Diag(X × X). In turn, o1 does not depend on the orientation oX . On the other hand, o2 does
depend on the orientation oX×X on X × X. But, since this is a product orientation it does not
depend on the orientation oX .
3.3.20. As a special case of Exercise 19, prove that the Euler characteristic is well defined even for
nonorientable manifolds and that it is still a diffeomorphism invariant. (You need Exercise 25 of
Section 2 again.)

Solution: By Exercise 3.3.19, we can define the Euler characteristic for nonorientable manifolds
X by setting Z = Diag(X × X) and Y = X × X. I’m not exactly sure how to show that it is a
diffeomorphism invariant, my idea is that if you have a diffeomorphism f : X → X̃ then this induces
diffeomorphisms X × X → X̃ × X̃ and Diag(X × X) → Diag(X̃ × X̃). The Euler characteristic is
computed by looking at the orientation numbers of the points in (Z × Z) ∩ Diag(Y × Y ). I think
these diffeomorphisms preserve the “local picture” of these intersections.
3.5.5. A zero of v is nondegenerate if dvx : Tx X → Tx X is bijective. Prove that nondegenerate zeros
are isolated. Furthermore, show that at a nondegenerate zero x, indx (v) = +1 if the isomorphism
dvx preserves orientation, and indx (v) = −1 if dvx reverses orientation. [Hint: Deduce from Exercise
4 that x is a nondegenerate zero of v if and only if it is a Lefschetz fixed point of ft .

Solution: Embed X in Rn via the Whitney Embedding theorem, and let  > 0. Denote by X 

the set of points at a distance less than  to X. For  small enough, by the -neighborhood theorem
and the fact that X is compact, there exists a normal projection map π : X  → X that restricts to
the identity map on X. Once more by compactness, for t small enough and x ∈ X, x + tv(x) lies
inside X  and so we can define
ft (x) = π(x + tv(x)).
(The idea is if we have a vector field on a manifold, and literally imagine this as arrows based at
points on it, we can push the points of the manifold along these arrows. But, this of course knocks
them off the manifold. So, we need a way of projecting them back on. The -neighborhood theorem
provides us a well-defined way to do this). Now let x be a zero of v and γ : R → X satisfying
γ(0) = x. Then,
d(ft )x (γ 0 (0)) = dπx (γ 0 (0) + tdvx (γ 0 (0))).
By Exercise 3.5.3, since x is a zero of v it follows that dvx : Tx X → Tx X. Also, π|X is the identity
so dπx is also the identity. Hence, d(ft )x = I + tdvx .

Now, we proved in a previous homework (also due to compactness) that Lefschetz fixed points
are isolated. If x is a nondegenerate zero then dvx is bijective, and so dvx (ξ) 6= 0 when ξ 6= 0. So, for
nonzero t we see that d(ft )x (ξ) 6= ξ. That is, ξ is not an eigenvector. It follows that x is a Lefschetz
fixed point of ft .

Finally, we have that Lx (ft ) = indx (v), where Lx (ft ) is the local Lefschetz number of ft at x.
But Lx(ft) is +1 if d(ft)x − I preserves orientation and −1 if not. And, d(ft)x − I = t dvx, so we
just need to check if dvx preserves or reverses orientation.
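A quick numerical illustration of 3.5.5 (not part of the original solution; the two planar fields are
made-up toy examples): at a nondegenerate zero the index is sign(det dvx), which matches the winding
number of v/|v| around a small circle about the zero.

import numpy as np

def index_at_origin(v, r=1e-3, n=200001):
    # winding number of v/|v| around a small circle about the zero at the origin
    t = np.linspace(0.0, 2*np.pi, n)
    vx, vy = v(r*np.cos(t), r*np.sin(t))
    angle = np.unwrap(np.arctan2(vy, vx))
    return (angle[-1] - angle[0]) / (2*np.pi)

print(index_at_origin(lambda x, y: (x, y)))    # source: det dv = +1, index ~ +1
print(index_at_origin(lambda x, y: (x, -y)))   # saddle: det dv = -1, index ~ -1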
Dan’s Problems.
Problem 1. Let W be an n-dimensional complex vector space for some n ∈ Z>0 , and let WR be the
underlying 2n-dimensional real vector space. Multiplication by i ∈ C on W is a real linear map
I : WR → WR which satisfies I 2 = − IdWR . For any basis e1 , ..., en of W as a complex vector space,
there is an associated real basis e1 , Ie1 , e2 , Ie2 , ... of WR , and so a nonzero vector e1 ∧ Ie1 ∧ ... in the
real determinant line Det WR .
a) Suppose another basis f1 , ..., fn of W is related by ej = Aij fi for some complex n × n matrix
(Aij ). What is the change of basis of the associated bases of WR ? How are the induced
nonzero elements of Det WR related?
b) Conclude that WR has a canonical orientation. Can you express it in terms of the complex
determinant line Det W ?
c) If W 0 is another complex vector space, then a complex linear map T : W 0 → W induces a
real linear map TR : WR0 → WR . Prove that if T is an isomorphism, then TR is orientation-
preserving.
d) What is the relevance of this problem to the proof of the fundamental theorem of algebra
given in lecture?

Solution:
a) Let us first look at e1 and Ie1 . We can write e1 in terms of the fi as
e1 = Ai1 fi = A11 f1 + ... + An1 fn
where all the Ai1 are complex numbers. Splitting them into real and imaginary parts,
e1 = Re A11 f1 + Im A11 if1 + ... + Re An1 fn + Im An1 ifn .
By definition, multiplication by i ∈ C on W corresponds to a real linear map I : WR → WR .
So, every ifj as a vector in W is regarded as Ifj in WR . Hence,
e1 = Re A11 f1 + Im A11 If1 + ... + Re An1 fn + Im An1 Ifn .
By multiplying by I and noting I 2 = − IdWR , we see how to write Ie1 in terms of the fi , Ifi :
Ie1 = − Im A11 f1 + Re A11 If1 ... − Im An1 fn + Re An1 Ifn .

The expansions of the ei and Iei are found similarly. Hence, the change of basis matrix is
       [  Re A¹₁   Im A¹₁  ···   Re Aⁿ₁   Im Aⁿ₁ ]
       [ −Im A¹₁   Re A¹₁  ···  −Im Aⁿ₁   Re Aⁿ₁ ]
A_R =  [    ⋮         ⋮     ⋱      ⋮         ⋮   ]
       [  Re A¹ₙ   Im A¹ₙ  ···   Re Aⁿₙ   Im Aⁿₙ ]
       [ −Im A¹ₙ   Re A¹ₙ  ···  −Im Aⁿₙ   Re Aⁿₙ ]
The matrix AR can be obtained from A by a simple procedure. For each entry Aji in A,
replace it by a 2 × 2 matrix
[  Re Aʲᵢ   Im Aʲᵢ ]
[ −Im Aʲᵢ   Re Aʲᵢ ].
Hence in the 1×1 case we see that | det A|2 = det AR . To see this holds in general dimension,
let Re A and Im A denote the following matrices
        [ Re A¹₁  ···  Re Aⁿ₁ ]           [ Im A¹₁  ···  Im Aⁿ₁ ]
Re A =  [   ⋮      ⋱     ⋮    ] , Im A =  [   ⋮      ⋱     ⋮    ] .
        [ Re A¹ₙ  ···  Re Aⁿₙ ]           [ Im A¹ₙ  ···  Im Aⁿₙ ]
Then, observe that
        [  Re A¹₁  ···   Re Aⁿ₁    Im A¹₁  ···   Im Aⁿ₁ ]
        [ −Im A¹₁  ···  −Im Aⁿ₁    Re A¹₁  ···   Re Aⁿ₁ ]
A_R  ∼  [    ⋮      ⋱      ⋮         ⋮      ⋱      ⋮    ]
        [  Re A¹ₙ  ···   Re Aⁿₙ    Im A¹ₙ  ···   Im Aⁿₙ ]
        [ −Im A¹ₙ  ···  −Im Aⁿₙ    Re A¹ₙ  ···   Re Aⁿₙ ]
This is obtained by n(n−1)/2 column transpositions. Then by n(n−1)/2 row transpositions,
we get
A_R  ∼  [  Re A   Im A ]
        [ −Im A   Re A ]
and so we just need to compute the determinant of this matrix. To this end, consider
conjugating the above with the matrix
   
M = [ I   0 ] ,    M⁻¹ = [ I    0 ] ,
    [ 0  iI ]            [ 0  −iI ]
where we view I as in matrix form. Then,
M⁻¹ A_R M = [ I    0 ] [  Re A   Im A ] [ I   0 ]
            [ 0  −iI ] [ −Im A   Re A ] [ 0  iI ]
          = [ I    0 ] [  Re A   i Im A ]
            [ 0  −iI ] [ −Im A   i Re A ]
          = [  Re A    i Im A ]
            [ i Im A    Re A  ]
But the determinant of this matrix is easily computed as
det(Re A + i Im A) det(Re A − i Im A) = det(A) det(Ā) = | det A|2
as desired.
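As a numerical sanity check of part a) (not part of the original solution; the matrix and its size are
arbitrary random choices), one can build A_R from a complex matrix A block by block and compare
determinants:

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))

AR = np.zeros((2*n, 2*n))
for i in range(n):
    for j in range(n):
        a = A[i, j]
        # 2x2 block [[Re a, Im a], [-Im a, Re a]] in the interleaved basis e, Ie
        AR[2*i:2*i+2, 2*j:2*j+2] = [[a.real, a.imag], [-a.imag, a.real]]

print(np.linalg.det(AR), abs(np.linalg.det(A))**2)   # the two agree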
b) Since the change of basis matrix always has positive determinant, we can choose any basis
{e1 , ..., en } of W and look at which path component of Det WR the vector e1 ∧Ie1 ∧...∧en ∧Ien
lies in.
c) The induced linear map TR : WR0 → WR is defined by ei 7→ T (ei ) and Iei 7→ IT (ei ) for
i = 1, ..., n. Hence if e1 ∧ Ie1 ∧ ... ∧ en ∧ Ien is a nonzero element of Det W′_R, the induced map
⋀²ⁿT_R : Det W′_R → Det W_R is given by
⋀²ⁿT_R (e1 ∧ Ie1 ∧ ... ∧ en ∧ Ien) = T(e1) ∧ IT(e1) ∧ ... ∧ T(en) ∧ IT(en).
For T_R to be orientation preserving, it must be that ⋀²ⁿT_R takes an oriented basis e1 ∧ Ie1 ∧
... ∧ en ∧ Ien to an oriented basis T(e1) ∧ IT(e1) ∧ ... ∧ T(en) ∧ IT(en).
then a basis e1 , ..., en of WR0 is taken to a basis T (e1 ), ..., T (en ). By b), it follows that
e1 ∧ Ie1 ∧ ... ∧ en ∧ Ien and T (e1 ) ∧ IT (e1 ) ∧ ... ∧ T (en ) ∧ IT (en ) are oriented bases of Det WR0
and Det WR respectively. That is, T is orientation preserving.

Problem 2. Let X0 , X1 be compact manifolds. A bordism W : X0 → X1 is a compact manifold


'
with boundary together with a diffeomorphism ∂W −→ X0 ∪ X1 . If X0 , X1 are oriented, then an
oriented bordism W : X0 → X1 is a compact oriented manifold with boundary together with an
'
orientation-preserving diffeomorphism ∂W −
→ −X0 ∪ X1 . Two manifolds are bordant if there exists
a bordism between them.
a) Show that one circle is bordant to two circles, even as oriented manifolds.
b) Let Y be an oriented manifold, Z ⊂ Y an oriented submanifold, W : X0 → X1 an oriented
bordism between oriented manifolds, and suppose each manifold has a dimension and that
dim X0 + dim Z = dim Y . Let f : W → Y be a smooth map, and denote its restrictions to
X0 , X1 as f0 , f1 , respectively. Prove that I(f0 , X0 ) = I(f1 , X1 ). This generalizes the smooth
homotopy invariance of the oriented intersection number (and so the oriented degree) to
oriented bordism invariance.
c) Use this, or any technique you like, to compute the Euler characteristic of S 2 . (By definition
this is IY ×Y (∆, ∆) for Y = S 2 . As sketched in lecture, there are submanifolds S 2 × pt and
pt × S 2 in S 2 × S 2 , and ∆ is bordant to a manifold obtained from the union of S 2 × pt and
pt × S 2 by a small “surgery” which eliminates the non-manifold point pt × pt in the union.)

Solution:
a) See the below picture. To the left I show X0 and X1 with chosen orientations. In the middle
I depict an oriented bordism – the arrows on the surface show the orientation of the tangent
spaces. At the right I show two tangent spaces used to compute the boundary orientation.

b) We first homotope f so that f, ∂f t Z. Since X0 ∪X1 is the boundary of a compact manifold


W and f is defined on all of W , we see that
I(∂f, Z) = 0.
Now ∂f : X0 ∪ X1 → Y . To compute I(∂f, Z) we look at points in (∂f )−1 (Z), assign a + or
− using the preimage orientation, and add them all up. We want to obtain something like
I(∂f, Z) = I(f0 , Z) + I(f1 , Z)
where f0 : X0 → Y and f1 : X1 → Y are the restrictions of ∂f to X0 and X1 respectively.
We have to be careful about orientations though. Recall the preimage orientation is defined

via
dfp Np (f0−1 (Z); X0 ) ⊕ Tf (p) Z = Tf (p) Y
Np (f0−1 (Z); X0 ) ⊕ Tp (f0−1 (Z)) = Tp X0

If we only change the orientation of X0 , then the orientation of Np (f0−1 (Z); X) remains the
same. Hence, the orientation on f0−1 (Z) also reverses. Let oX0 denote the orientation on X0
and −oX0 its complement. By definition of an oriented bordism, ∂W ' −X0 ∪X1 where −X0
has orientation −oX0 . Consequently, when computing I(∂f, Z), we can compute I(f0 , Z)
and I(f1 , Z) separately and add them together, so long as we use the preimage orientation
on f0−1 (Z) induced by −oX . One way to write this is
IW (∂f Z) = I−X0 (f0 , Z) + IX1 (f1 , Z).
We saw above that changing orientation on X0 also changes orientation on the preimage.
Hence,
I−X0 (f0 , Z) = −IX0 (f0 , Z).
Putting this together with our result from the boundary theorem gives
IX1 (f1 , Z) − IX0 (f0 , Z) = 0.
TA comment: Great diagram!

Problem 3. Recall the Hopf fibration h : S 3 → S 2 defined as


h : S³ ⊂ C² → CP¹
(z, w) ↦ [z, w]

a) Omit the point ∞ = (0, 1) ⊂ C2 from S 3 and identify the complement with A3 . Draw a
picture of some
fibers of h. Observe the linking of distinct fibers.
b) Construct an analogous Hopf fibration which replaces C with R. Do you recognize that map?
It is not homotopic to a constant map: prove it. What about using the division algebra H
of quaternions, in which case you need to explain carefully what the quaternionic projective
line is. Is the resulting Hopf fibration homotopic to a constant map? Can you predict what
map you get if you use the octonians?
Solution:
a) These are kind of hard to visualize, but I tried my best.

Here’s how the above were obtained: start with a point (θ, ϕ) on S 2 , where 0 ≤ θ < 2π
and 0 ≤ ϕ ≤ π. The point (θ, 0) corresponds to the north pole while the point (θ, π)
corresponds to the south pole. The curve (θ, π/2) traverses the equator at unit speed. We
can stereographically project this point to one in R2 , and hence get a point z ∈ C. This
uniquely defines a projective complex line [1, z]. As a side remark, the north pole gets sent
to [0, 1] while the south pole
gets sent to [1, 0]. This actually gives a diffeomorphism from S² to
CP¹. Now, note that [e^{it}/√(1 + |z|²), z e^{it}/√(1 + |z|²)] defines the same projective line for any
t ∈ [0, 2π). The elements (e^{it}/√(1 + |z|²), z e^{it}/√(1 + |z|²)) of S³ ⊂ C² are precisely the points
in the preimage h⁻¹([1, z]). We can split these points into the real and imaginary parts to
get an element of S³ ⊂ A⁴. Let z1 = Re(z) and z2 = Im(z). Then, the preimage h⁻¹([1, z])
as a subset of A⁴ is
{ ( cos t/√(1 + |z|²), sin t/√(1 + |z|²), (z1 cos t − z2 sin t)/√(1 + |z|²), (z2 cos t + z1 sin t)/√(1 + |z|²) ) : t ∈ [0, 2π) }.
By projecting into A3 using a stereographic projection (sending (0, 0, 1, 0) to ∞), we can
parameterize the image by
x(t) = cos t / (√(1 + z1² + z2²) − z1 cos t + z2 sin t)
y(t) = sin t / (√(1 + z1² + z2²) − z1 cos t + z2 sin t)
z(t) = (z2 cos t + z1 sin t) / (√(1 + z1² + z2²) − z1 cos t + z2 sin t)
One can check that these are, in fact, circles (a numerical check is sketched below). Now, remember that z was obtained by a point
(θ, ϕ) on S 2 . As θ varies, the circle h−1 ([1, z]) rotates about the z-axis. As ϕ increases, the
circle h−1 ([1, z]) becomes more parallel to the xy-plane and decreases in radius. When (θ, ϕ)
is the south pole, we get a circle in the xy-plane. As ϕ decreases, the circle gets steeper and
grows in radius. When (θ, ϕ) is the north pole, we get the z-axis.
TA comment: Great pictures!
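Here is the numerical check referred to above (not part of the original solution; the choice of z is
arbitrary): the sampled fiber points are equidistant from the circumcenter of three of them and lie in
a single plane, so the projected fiber is indeed a round circle.

import numpy as np

z1, z2 = 0.8, -0.3                             # arbitrary point z = z1 + i z2
t = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
den = np.sqrt(1 + z1**2 + z2**2) - z1*np.cos(t) + z2*np.sin(t)
pts = np.stack([np.cos(t)/den, np.sin(t)/den, (z2*np.cos(t) + z1*np.sin(t))/den], axis=1)

p1, p2, p3 = pts[0], pts[100], pts[200]
u, v = p2 - p1, p3 - p1
# circumcenter c = p1 + a*u + b*v determined by |c - p1| = |c - p2| = |c - p3|
M = np.array([[u @ u, u @ v], [u @ v, v @ v]])
a, b = np.linalg.solve(M, 0.5*np.array([u @ u, v @ v]))
c = p1 + a*u + b*v
print(np.linalg.norm(pts - c, axis=1).std())           # ~0: constant radius
print(np.abs((pts - p1) @ np.cross(u, v)).max())       # ~0: coplanar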

Final (96/110 points)


Dan’s comment: Excellent job on the final and in the class.

Problem 1. (17/20 points).


a) Consider the differential 2-form
ω = x2 dy ∧ dz − y 2 dx ∧ dz + z dx ∧ dy
on A3 . Let Σ ⊂ A3 be the ellipsoid
x²/a² + y²/b² + z²/c² = 1
for some positive real numbers a, b, c. Compute ∫_Σ ω.
b) Let ω ∈ Ω3 (S 3 ) be the restriction of the differential form
x1 dx2 ∧ dx3 ∧ dx4
on A⁴ to the unit sphere S³. Prove that ω = π*ω̄ for a unique ω̄ ∈ Ω³(RP³), where
π : S³ → RP³ is the covering map. Compute ∫_{RP³} ω̄.

Solution:
a) For this problem alone I will emphasize the pullbacks. In later problems, the pullback no-
tation will often be suppressed as is convention.

Introduce the following “ellipsoidal coordinates”


x = ar sin(φ) cos(θ)
y = br sin(φ) sin(θ)
z = cr cos(φ)
where r ∈ [0, ∞), φ ∈ [0, π], and θ ∈ [0, 2π). If we restrict the coordinates r, φ, θ to r = 1,
φ ∈ (0, π), and θ = (0, 2π) then we parameterize Σ \ E where E is a set of measure zero. Call
this parameterization ϕ. Then, the restriction of x, y, z to Σ \ E in this coordinate system
are given by
ϕ∗ x = a sin(φ) cos(θ)
ϕ∗ y = b sin(φ) sin(θ)
ϕ∗ z = c cos(φ)
with φ ∈ (0, π) and θ ∈ (0, 2π). The differentials are computed as
ϕ∗ dx = a cos(φ) cos(θ) dφ − a sin(φ) sin(θ) dθ
ϕ∗ dy = b cos(φ) sin(θ) dφ + b sin(φ) cos(θ) dθ
ϕ∗ dz = −c sin(φ) dφ.
The wedge products of these are
ϕ∗ dy ∧ ϕ∗ dz = [b cos(φ) sin(θ) dφ + b sin(φ) cos(θ) dθ] ∧ [−c sin(φ) dφ]
= bc sin(φ)2 cos(θ) dφ ∧ dθ
ϕ∗ dx ∧ ϕ∗ dz = [a cos(φ) cos(θ) dφ − a sin(φ) sin(θ) dθ] ∧ [−c sin(φ) dφ]
= −ac sin(φ)2 sin(θ) dφ ∧ dθ
ϕ∗ dx ∧ ϕ∗ dy = [a cos(φ) cos(θ) dφ − a sin(φ) sin(θ) dθ] ∧ [b cos(φ) sin(θ) dφ + b sin(φ) cos(θ) dθ]
= ab cos(φ) sin(φ) dφ ∧ dθ.
Finally, observe that
ϕ∗ (z dx ∧ dy) = ϕ∗ z[ϕ∗ (dx ∧ dy)] = ϕ∗ z[ϕ∗ dx ∧ ϕ∗ dy]

and similarly for the other terms. Consequently, the pullback of ω is

ϕ∗ ω = [a sin(φ) cos(θ)]2 [bc sin(φ)2 cos(θ) dφ ∧ dθ] − [b sin(φ) sin(θ)]2 [−ac sin(φ)2 sin(θ) dφ ∧ dθ]
+ [c cos(φ)][ab cos(φ) sin(φ) dφ ∧ dθ]
= abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ]dφ ∧ dθ
To compute ∫_Σ ω we need to specify an orientation on Σ. To do this, let W be the compact
manifold with boundary such that ∂W = Σ (that is, we “fill in” Σ). If a = b = c = 1 then
W = D3 . By orienting A3 in the standard way we induce an orientation on W . Then orient
Σ by the boundary orientation. The following picture shows how to find this orientation.

Formally, let oW be the above orientation on W and fix a point (φ0 , θ0 ) ∈ (0, π) × (0, 2π).
Consider the curves

γ1 (φ) = ϕ(φ, θ0 ) = (a sin(φ) cos(θ0 ), b sin(φ) sin(θ0 ), c cos(φ))


γ2 (θ) = ϕ(φ0 , θ) = (a sin(φ0 ) cos(θ), b sin(φ0 ) sin(θ), c cos(φ0 ))

The velocities of these curves at φ0 and θ0 respectively are

γ10 (φ0 ) = (a cos(φ0 ) cos(θ0 ), b cos(φ0 ) sin(θ0 ), −c sin(φ0 ))


γ20 (θ0 ) = (−a sin(φ0 ) sin(θ0 ), b sin(φ0 ) cos(θ0 ), 0).

An outward normal to Σ at p = ϕ(φ0 , θ0 ) is then given by


 
i j k
ηp = γ10 (φ0 ) × γ20 (θ0 ) = det  a cos(φ0 ) cos(θ0 ) b cos(φ0 ) sin(θ0 ) −c sin(φ0 )
−a sin(φ0 ) sin(θ0 ) b sin(φ0 ) cos(θ0 ) 0
= (bc sin(φ0 )2 cos(θ0 ), ac sin(φ0 )2 sin(θ0 ), ab cos(φ0 ) sin(φ0 ))

Finally, consider the change of basis matrix taking {e3 , e1 , e2 } to {ηp , γ10 (φ0 ), γ20 (θ0 )}. This
is given by

a cos(φ0 ) cos(θ0 ) −a sin(φ0 ) sin(θ0 ) bc sin(φ0 )2 cos(θ0 )


 

A =  b cos(φ0 ) sin(θ0 ) b sin(φ0 ) cos(θ0 ) ac sin(φ0 )2 sin(θ0 )


−c sin(φ0 ) 0 ab cos(φ0 ) sin(φ0 )

The determinant of this is


det(A) = [a2 b2 cos(φ0 )2 sin(φ0 )2 cos(θ0 )2 ] + [a2 b2 cos(φ0 )2 sin(φ0 )2 sin(θ0 )2 + a2 c2 sin(φ0 )4 sin(θ0 )2 ]
+ [b2 c2 sin(φ0 )4 cos(θ0 )2 ]
= a2 b2 cos(φ0 )2 sin(φ0 )2 + a2 c2 sin(φ0 )4 sin(θ0 )2 + b2 c2 sin(φ0 )4 cos(θ0 )2 > 0.
So, {ηp , γ10 (φ0 ), γ2 (θ0 )0 } ∈ oW (p). By the “quotient before sub” and “outward normal first”
conventions in determining the boundary orientation, {γ10 (φ0 ), γ20 (θ0 )} ∈ oΣ (p). But,
γ₁′(φ₀) = dϕ_(φ₀,θ₀)( ∂/∂φ |_(φ₀,θ₀) )
γ₂′(θ₀) = dϕ_(φ₀,θ₀)( ∂/∂θ |_(φ₀,θ₀) )
so that ϕ is an orientation-preserving diffeomorphism when we choose o(0,π)×(0,2π) such that
{∂/∂φ, ∂/∂θ} ∈ o(0,π)×(0,2π) . By definition,
∫_Σ ω = ∫_{Σ\E} ω = ∫_{(0,π)×(0,2π)} ϕ∗ ω
= ∫_{(0,π)×(0,2π)} abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ] dφ ∧ dθ.
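As a quick sanity check of the orientation claim above (this check is mine and was not part of the original solution; it assumes sympy is available), one can verify symbolically that det(A) = ηp · ηp, which is positive for φ0 ∈ (0, π):

import sympy as sp

a, b, c, phi, theta = sp.symbols('a b c phi theta', positive=True)
g1 = sp.Matrix([a*sp.cos(phi)*sp.cos(theta), b*sp.cos(phi)*sp.sin(theta), -c*sp.sin(phi)])   # γ1'(φ0)
g2 = sp.Matrix([-a*sp.sin(phi)*sp.sin(theta), b*sp.sin(phi)*sp.cos(theta), 0])               # γ2'(θ0)
eta = g1.cross(g2)                          # the outward normal ηp computed above
A = sp.Matrix.hstack(g1, g2, eta)           # columns γ1', γ2', ηp
print(sp.simplify(A.det() - eta.dot(eta)))  # 0, so det(A) = |ηp|^2 > 0 away from the poles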

Recall that if U ⊂ An is open and ω ∈ Ωnc (U ) then there is an isomorphism Ωnc (U ) ≅ Ω0c (U )
where
ω = f dx1 ∧ ... ∧ dxn 7→ f.
By definition,
∫_U ω = ∫_U f.
In this case, we have
f = abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ]
on (0, π) × (0, 2π). Since (0, π) × (0, 2π) is bounded, Cl(spt(f )) is compact. So the integral
is well defined and by Fubini’s theorem,
∫_Σ ω = ∫_{(0,π)×(0,2π)} abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ] dφ ∧ dθ
= ∫_{(0,π)×(0,2π)} abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ] d(φ × θ)
= ∫_0^π ∫_0^{2π} abc sin(φ)[a sin(φ)3 cos(θ)3 + b sin(φ)3 sin(θ)3 + cos(φ)2 ] dθ dφ
= ∫_0^π abc sin(φ)[2π cos(φ)2 ] dφ = (4π/3) abc.
Alternatively, we can use Stokes’ theorem:
dω = (2x + 2y + 1) dx ∧ dy ∧ dz
so that
∫_Σ ω = ∫_W dω = ∫_W (2x + 2y + 1) dx ∧ dy ∧ dz = Vol(W ) = (4π/3) abc
where we orient W as before and Σ has the boundary orientation. We used symmetry of
the functions x and y on W to conclude that
∫_W x dx ∧ dy ∧ dz = ∫_W y dx ∧ dy ∧ dz = 0.
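For what it is worth, here is a small symbolic check (my addition, not part of the submitted solution; it assumes sympy) that the φ, θ integral above and the Stokes'-theorem computation both give (4π/3)abc:

import sympy as sp

a, b, c, phi, theta = sp.symbols('a b c phi theta', positive=True)
integrand = a*b*c*sp.sin(phi)*(a*sp.sin(phi)**3*sp.cos(theta)**3
                               + b*sp.sin(phi)**3*sp.sin(theta)**3
                               + sp.cos(phi)**2)
flux = sp.integrate(integrand, (theta, 0, 2*sp.pi), (phi, 0, sp.pi))
print(sp.simplify(flux - sp.Rational(4, 3)*sp.pi*a*b*c))   # 0: the flux equals (4π/3)abc = Vol(W)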
b) Suppose that ω̄1 , ω̄2 ∈ Ω3 (RP3 ) are such that ω = π ∗ ω̄i . Then,
0 = ω − ω = π ∗ ω̄1 − π ∗ ω̄2 = π ∗ (ω̄1 − ω̄2 ).
Now let ω̄ ∈ Ω3 (RP3 ) be such that π ∗ ω̄ = 0. Let q ∈ RP3 and w1 , w2 , w3 ∈ Tq RP3 . Since
π is surjective, there exists a p ∈ S 3 such that π(p) = q. Since π is a local diffeomorphism,
it is a submersion and therefore there exist v1 , v2 , v3 ∈ Tp S 3 such that dπp (vi ) = wi . Then,
ω̄q (w1 , w2 , w3 ) = ω̄q (dπp (v1 ), ..., dπp (v3 )) = (π ∗ ω̄)p (v1 , v2 , v3 ) = 0.
Hence ω̄ is the zero form. It follows that π ∗ is injective, and in our case ω̄1 = ω̄2 .
Next, since S 3 and RP3 are compact, we have that
∫_{S 3} ω = ∫_{S 3} π ∗ ω̄ = deg(π) ∫_{RP3} ω̄ = 2 ∫_{RP3} ω̄.
Here we orient S 3 first and then use the fact that π is a local diffeomorphism to orient RP3
so that dπp is orientation-preserving. It suffices to compute ∫_{S 3} ω. We could go through
the computation using hyperspherical coordinates, but that is a fairly long computation.
Instead let’s just use Stokes’ theorem. Orient D4 (like how we did in a), using the standard
orientation on A4 ) and equip S 3 = ∂D4 with the boundary orientation. Then,
∫_{S 3} ω = ∫_{D4} d(x1 dx2 ∧ dx3 ∧ dx4 ) = ∫_{D4} dx1 ∧ dx2 ∧ dx3 ∧ dx4 = Vol(D4 ) = π2/2.
Hence,
∫_{RP3} ω̄ = (1/2) ∫_{S 3} ω = π2/4.
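(A rough numerical sanity check of Vol(D4 ) = π2/2, added by me and not part of the original solution; it only assumes numpy:)

import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 4))
frac_inside = np.mean(np.sum(pts**2, axis=1) <= 1.0)
print(16.0 * frac_inside)      # Monte Carlo estimate of Vol(D^4); the cube [-1,1]^4 has volume 16
print(np.pi**2 / 2)            # ≈ 4.9348, matching the estimate to a couple of decimals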
Dan’s comment: Incorrect or missing argument for existence of ω̄

A couple of us forgot to do that, and none of us really know how to? It's probably some algebra fact.
Problem 2. (19/20 points). Give an example of each of the following. Be sure to write careful
justifications.
a) A manifold X and a submanifold Y such that the oriented intersection number IX (Y, Y )
is defined and nonzero. (Be sure to explain why the intersection number is well-defined for
your example.)
b) A manifold X whose tangent bundle π : T X → X is not trivializable.
c) A manifold X and a submanifold Y such that the normal bundle ν(Y ⊂ X) → Y is not
trivializable.
d) A nonorientable manifold X and a map f : X → X with Fix(f ) = Ø.

Solution:
a) Recall that the Euler characteristic χ(Z) of a manifold Z can be computed by
χ(Z) = IZ×Z (Diag(Z × Z), Diag(Z × Z)).
If we choose Z compact, connected, and oriented with χ(Z) 6= 0, then the orientation on
Z unambiguously induces an orientation on X := Z × Z (that is, if we choose the other
orientation on Z, the induced orientation on the product is the same). Moreover, since
Y := Diag(Z × Z) ≃ Z we can orient Diag(Z × Z) according to the orientation on Z. The
intersection number is well defined since Y is compact and oriented and X is connected
and oriented. Although Y is not transverse to itself, by stability of transversality we can
homotope the inclusion ι̇ : Y → X so the intersection is transverse.
b) We first recall some terminology:
• Let πV : V → X and πW : W → X be two vector bundles over X. Then F : V → W is a
vector bundle isomorphism if πV = πW ◦ F and for each x ∈ X, F |Vx is an isomorphism
of vector spaces.
• A vector bundle πV : V → X is called trivializable if there exists a vector bundle
isomorphism to the trivial vector bundle π0 : X × Rn → X.
With this, we can show that if πV is trivializable then there exist sections s1 , ..., sn : X → V
such that for all x ∈ X, {s1 (x), ..., sn (x)} is a linearly independent set in Vx . Suppose that
F : X × Rn → V is an isomorphism of vector bundles. Let e1 , ..., en be the standard basis of
Rn . By definition F is smooth so that
si (x) = F (x, ei )
is smooth for each i = 1, ..., n. Moreover, since F is an isomorphism its restriction to each
fiber is a vector space isomorphism. It follows that each {si (x)}ni=1 is linearly independent.

Now, by the hairy ball theorem S 2 does not admit any nowhere vanishing sections of its
tangent bundle. That is, for each section s : S 2 → T S 2 there exists some p ∈ S 2 such that
s(p) = 0. Since any set containing zero is linearly dependent, it follows that T S 2 cannot be
trivializable.
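As a concrete illustration (mine, not part of the solution; it assumes numpy): projecting the constant field e1 onto the tangent planes of S 2 gives a smooth tangent field which is forced to vanish somewhere, consistent with the hairy ball theorem.

import numpy as np

e1 = np.array([1.0, 0.0, 0.0])

def v(p):
    # Orthogonal projection of the constant field e1 onto the tangent plane T_p S^2.
    return e1 - np.dot(e1, p) * p

print(v(np.array([1.0, 0.0, 0.0])))   # [0. 0. 0.]  -- the field vanishes at p = e1
print(v(np.array([0.0, 0.0, 1.0])))   # [1. 0. 0.]  -- but is nonzero at a generic point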
c) Let Z be a smooth manifold. Then, the normal bundle ν(Diag(Z × Z) ⊂ Z × Z) →
Diag(Z × Z) is canonically isomorphic to the tangent bundle T Z → Z. So, take Z = S 2 ,
Y = Diag(S 2 × S 2 ), and X = S 2 × S 2 , and use part b).
d) Let X be the Klein bottle, represented as the quotient I 2 / ∼ where (0, y) ∼ (1, 1 − y) and
(x, 0) ∼ (x, 1). Consider the map f : X → X given piecewise by
f ([(x, y)]) = [(x + 1/2, y)] if 0 ≤ x ≤ 1/2, and f ([(x, y)]) = [(x − 1/2, 1 − y)] if 1/2 ≤ x ≤ 1.
Then this map is well defined and has no fixed points. See the below picture (which is just
used as guidance – we do not have a well defined map f on the immersed Klein bottle in R3
due to the self-intersection).

As a remark, we can also consider the maps ft : X → X given piecewise by


ft ([(x, y)]) = [(x + t, y)] if 0 ≤ x ≤ 1 − t, and ft ([(x, y)]) = [(x − 1 + t, 1 − y)] if 1 − t ≤ x ≤ 1.
Now consider gt : X → X given by ft if 0 ≤ t ≤ 1 and ft−1 ◦ f1 if 1 ≤ t ≤ 2. Then
g0 = g2 = IdX and gt is an isotopy of the Klein bottle.
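Here is a quick brute-force check (my own addition, not in the original write-up; it assumes numpy) that the map f defined in part d) above really has no fixed points: canonical representatives of [(x, y)] and f ([(x, y)]) never agree on a sample grid.

import numpy as np

def canon(x, y):
    # Canonical representative in [0,1)^2 for the Klein bottle R^2 modulo (x,y) ~ (x+1, 1-y) and (x,y) ~ (x, y+1).
    k = int(np.floor(x))
    x0 = x - k
    y0 = y if k % 2 == 0 else 1.0 - y
    return (x0 % 1.0, y0 % 1.0)

def f(x, y):
    # The map from part d), written on representatives.
    return (x + 0.5, y) if x <= 0.5 else (x - 0.5, 1.0 - y)

for x in np.linspace(0.0, 0.99, 100):
    for y in np.linspace(0.0, 0.99, 100):
        p, q = canon(x, y), canon(*f(x, y))
        assert abs(p[0] - q[0]) + abs(p[1] - q[1]) > 1e-9
print("no fixed points found on the sample grid")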
Dan’s comment: In (a) did not give an example.

I thought giving a general family of examples would be fine, but no, apparently not... don’t make
dumb mistakes I guess?

Problem 3. (10/10 points). Consider the differential form


ω = (x dy − y dx)/(x2 + y 2 )
on A2 \ {0}.
a) Show that dω = 0.
b) Suppose fn : S 1 → A2 \ {0} is the map fn (θ) = (cos(nθ), sin(nθ)). Compute


∫_{S 1} fn∗ ω.
c) Prove that there does not exist a function f : A2 \ {0} → R such that ω = df .

Solution:
a) We compute some partial derivatives first:
∂/∂x [x/(x2 + y 2 )] = [x2 + y 2 − x(2x)]/(x2 + y 2 )2 = (y 2 − x2 )/(x2 + y 2 )2
∂/∂y [−y/(x2 + y 2 )] = [−(x2 + y 2 ) + y(2y)]/(x2 + y 2 )2 = (y 2 − x2 )/(x2 + y 2 )2
Hence,
    
dω = [∂/∂x (x/(x2 + y 2 )) dx + ∂/∂y (x/(x2 + y 2 )) dy] ∧ dy
+ [∂/∂x (−y/(x2 + y 2 )) dx + ∂/∂y (−y/(x2 + y 2 )) dy] ∧ dx
= [(y 2 − x2 )/(x2 + y 2 )2 ] dx ∧ dy + [(y 2 − x2 )/(x2 + y 2 )2 ] dy ∧ dx = 0
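A one-line symbolic confirmation (my addition, assuming sympy): writing ω = P dx + Q dy, closedness is the statement ∂Q/∂x − ∂P/∂y = 0.

import sympy as sp

x, y = sp.symbols('x y', real=True)
P = -y/(x**2 + y**2)   # coefficient of dx in ω
Q =  x/(x**2 + y**2)   # coefficient of dy in ω
print(sp.simplify(sp.diff(Q, x) - sp.diff(P, y)))   # 0, i.e. dω = 0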
b) Consider the map p : R → S 1 given by θ 7→ (cos θ, sin θ). What is defined as fn in the
problem statement is really a map fn : R → A2 \ {0}. Let f̃n : S 1 → A2 \ {0} be the unique
map such that f˜n ◦ p(θ) = fn (θ). What we are interested in is computing
∫_{S 1} f̃n∗ ω.
By restricting p to (0, 2π) we cover S 1 \ {(1, 0)} (still denote this restriction by p). The
interval (0, 2π) comes with a natural orientation, where {∂/∂θ} is oriented. Then, define an
orientation on S 1 so that {dp(∂/∂θ)} is oriented (this is the counter-clockwise orientation).
Hence, (S 1 \ {(1, 0)}, p−1 ) is an orientation preserving chart and
∫_{S 1} f̃n∗ ω = ∫_{S 1 \{(1,0)}} f̃n∗ ω = ∫_0^{2π} p∗ f̃n∗ ω = ∫_0^{2π} fn∗ ω.

Differentiating the coordinate equations


x = cos(nθ)
y = sin(nθ)
gives
dx = −n sin(nθ) dθ
dy = n cos(nθ) dθ.
Substituting these into ω gives
fn∗ ω = [n cos(nθ)2 dθ + n sin(nθ)2 dθ]/[cos(nθ)2 + sin(nθ)2 ] = n dθ.
The integral is then easily computed as
∫_{S 1} f̃n∗ ω = ∫_0^{2π} n dθ = 2πn.
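The same computation can be checked symbolically (my addition, assuming sympy): the pullback coefficient is the constant n, and its integral over [0, 2π] is 2πn.

import sympy as sp

n, theta = sp.symbols('n theta', positive=True)
x, y = sp.cos(n*theta), sp.sin(n*theta)
pullback = (x*sp.diff(y, theta) - y*sp.diff(x, theta)) / (x**2 + y**2)   # coefficient of dθ in fn*ω
print(sp.simplify(pullback))                                      # n
print(sp.integrate(sp.simplify(pullback), (theta, 0, 2*sp.pi)))   # 2*pi*n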
c) Here are two ways of doing it, both by contradiction. If such a function existed, then we
would necessarily have
−[y/(x2 + y 2 )] dx + [x/(x2 + y 2 )] dy = ω = df = (∂f /∂x) dx + (∂f /∂y) dy.
Hence,
∂f /∂x = −y/(x2 + y 2 )
∂f /∂y = x/(x2 + y 2 ).
From the computation in part a), the mixed second partials are equal and we expect to find
a solution f . Integrating the first gives
f (x, y) = arctan(y/x) + c(y)
for some function c of y. Differentiating this with respect to y gives
x/(x2 + y 2 ) = ∂f /∂y = (1/x)/(1 + y 2 /x2 ) + c′(y)
so that c′(y) = 0 and c is constant. Thus, up to a constant, f = arctan(y/x). But this is not defined where x = 0, i.e. on the y-axis, contradicting that f is defined on all of A2 \ {0}.

Alternatively, consider Sε a circle of radius ε, where 1 > ε > 0. Let A be the annulus with
boundary circles S 1 and Sε . We may orient A so that the boundary orientation gives
a counter-clockwise orientation on S 1 and a clockwise orientation on Sε . Let Fn (r, θ) =
(r cos(nθ), r sin(nθ)) and use the same method as in b) to define F̃n on A. Then by Stokes’
theorem
0 = ∫_A F̃n∗ d(df ) = ∫_{∂A} F̃n∗ ω = ∫_{S 1} f̃n∗ ω + ∫_{Sε} g̃n∗ ω = ∫_{S 1} f̃n∗ ω − ∫_{−Sε} g̃n∗ ω

where g̃n = F̃n (ε, −). Now the computation for ∫_{−Sε} g̃n∗ ω proceeds almost the same as in b), and we get
0 = ∫_{S 1} f̃n∗ ω − ∫_{−Sε} g̃n∗ ω = 2πn − 2πn > 0.
Note that we cannot just apply Stokes’ theorem without removing the singularity like this,
because the differential form dω is not defined on all of D2 .

Dan’s comment: Work towards a short, convincing argument.

I like to provide multiple proofs, because it gives better insight into problems. Maybe
this is good for homeworks, but not for exams.
Problem 4. (10/10 points).
a) State carefully what it means for a map to be transverse to a submanifold. Be sure to define
your notation and include all of the necessary hypotheses.
b) Suppose X, Y, S are smooth manifolds, F : S ×X → Y a smooth map, Z ⊂ Y a submanifold,
and assume that F is transverse to Z. Prove that there exists s ∈ S such that F |{s}×X :
X → Y is transverse to Z.

Solution:
a) Let f : X → Y be a map of smooth manifolds, Z ⊂ Y a submanifold, and p ∈ f −1 (Z).
Then f is transverse to Z at p if
dfp (Tp X) + Tf (p) Z = Tf (p) Y.
We say that f is transverse to Z if f is transverse to Z at p for every p ∈ f −1 (Z).
b) We prove something stronger: that F |{s}×X is transverse to Z for almost every s ∈ S.

Dan’s comment: What does ’almost every’ mean?

You know, I’m pretty sure Dan uses that terminology. But anyways, what I mean is that
we can define “measure zero” (not in a literal analysis way) in terms of Sard’s theorem.
Let W = F −1 (Z), which is a submanifold of S × X by transversality. Denote by πS and πX
the natural projections S × X → S and S × X → X. Let πW be the restriction of πS to W .
The strategy is to show that if s ∈ S is a regular value of πW , then F |{s}×X is transverse to
Z. Since almost every s ∈ S is a regular value of πW (due to Sard’s theorem), the conclusion
follows.

We have two short exact sequences. The first is


0 → T(s,x) W → Ts S ⊕ Tx X → Ty Y /Ty Z → 0,  the last map being dF(s,x) .
where y = F (s, x). The first map is just the inclusion whereas the second map is the
pushforward by the differential. It is surjective by transversality:
dF(s,x) (Ts S ⊕ Tx X) + Ty Z = Ty Y.
The second short exact sequence is
0 → Tx X → Ts S ⊕ Tx X → Ts S → 0
where the first map is the inclusion ξ 7→ 0⊕ξ and the second map is the projection η ⊕ξ 7→ η.

We claim that, in fact, s is a regular value of πW if and only if F |{s}×X is transverse
to Z. That is, d(πW )(s,x) : T(s,x) W → Ts S is surjective if and only if d(F |{s}×X )x : Tx X →
Ty Y /Ty Z is surjective. More algebraically,
T(s,x) W → Ts S → 0  (the map being d(πW )(s,x) )
Tx X → Ty Y /Ty Z → 0  (the map being d(F |{s}×X )x ).
Putting this all into one commutative diagram: the row is the short exact sequence 0 → T(s,x) W → Ts S ⊕ Tx X → Ty Y /Ty Z → 0 (with maps ι̇ and dF(s,x) ), the column is the short exact sequence 0 → Tx X → Ts S ⊕ Tx X → Ts S → 0 (with maps j and the projection), and the two diagonal maps are d(F |{s}×X )x : Tx X → Ty Y /Ty Z and d(πW )(s,x) : T(s,x) W → Ts S .
Both inner triangles commute, the lower left by definition of πW as the composition W →
S × X → S, and the upper right since
d(F |{s}×X )x (ξ) = lim_{t→0} [F (s, x + tξ) − F (s, x)]/t = lim_{t→0} [F (s + 0t, x + tξ) − F (s, x)]/t
= dF(s,x) (0 ⊕ ξ).
So, the entire diagram is symmetric about a diagonal, and we need only prove one implication
to prove both. Suppose that s is a regular value of πW .
• Let [ζ] ∈ Ty Y /Ty Z. Our goal is to find an element of Tx X which, under d(F |{s}×X )x ,
maps to [ζ].
• By exactness of the middle row, there exist η ∈ Ts S and ξ ∈ Tx X such that dF(s,x) (η ⊕
ξ) = [ζ].
• Since η ∈ Ts S and s is a regular value of dπW , there exists a Ξ ∈ T(s,x) W such that
d(πW )(s,x) (Ξ) = η.
• Let Ξ′ = ι̇(Ξ) ∈ Ts S ⊕ Tx X. By exactness, Ker(dF(s,x) ) = Im(ι̇) so that dF(s,x) (Ξ′ ) = 0.
In particular, dF(s,x) ((η ⊕ ξ) − Ξ′ ) = [ζ].
• Write Ξ′ = η′ ⊕ ξ′ . By commutativity, η = d(πW )(s,x) (Ξ) = Pr1 (ι̇(Ξ)) = η′ . Hence,
(η ⊕ ξ) − Ξ′ = (η ⊕ ξ) − (η ⊕ ξ′ ) = 0 ⊕ (ξ − ξ′ ).
• We can pull this back to an element ξ − ξ′ ∈ Tx X, so that j(ξ − ξ′ ) = 0 ⊕ (ξ − ξ′ ) =
(η ⊕ ξ) − Ξ′ . By commutativity, d(F |{s}×X )x (ξ − ξ′ ) = dF(s,x) (j(ξ − ξ′ )) = [ζ].
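To make the equivalence concrete, here is a toy example (mine, not part of the submitted solution). Take S = X = R, Y = R2 , Z the x-axis, and F (s, x) = (x, x2 − s). Then F is a submersion (hence transverse to Z), W = F −1 (Z) is the parabola s = x2 , the projection πW has its only critical value at s = 0, and F |{s}×X fails to be transverse to Z exactly when s = 0, where its image is tangent to the x-axis. A small sympy check of that last point:

import sympy as sp

s, x = sp.symbols('s x', real=True)
F2 = x**2 - s                      # second component of F; W = F^{-1}(Z) = {F2 = 0}
dF2 = sp.diff(F2, x)               # derivative of F|_{s} in the direction transverse to Z
bad = sp.solve([sp.Eq(F2, 0), sp.Eq(dF2, 0)], [s, x], dict=True)
print(bad)                         # [{s: 0, x: 0}] -- transversality fails only over s = 0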

Problem 5. (6/10 points). Let X be a manifold of positive dimension and ξ a nowhere vanishing
vector field on X. Construct a smooth 1-form α on X such that α(ξ) = 1 at every point of X.

Solution: Recall that every smooth 1-form α can locally be written as αp = fi (p)dxip where (U, x)
is a chart and p ∈ U . Consider the functions fi (p) = ξi (p)/|ξ(p)|2 . These are smooth since ξ is
a smooth section of T X → X. Moreover, they are well defined since ξ is nowhere vanishing. Let
{(Uα , xα )}α∈A be a covering of X by charts and choose a partition of unity {ρα }α∈A subordinate
to this covering. For p ∈ X let Ap = {α ∈ A | p ∈ Uα }. Define fiα on Uα by
fiα (p) = ξi (p)/|ξ(p)|2
and let α be defined pointwise by
αp = Σ_{α∈Ap} ρα (p) fiα (p) d(xiα )p .

By the preceding analysis, this is a smooth 1-form. Also,


αp (ξ(p)) = Σ_{α∈Ap} Σ_{i=1}^n ρα (p) fiα (p) d(xiα )p (ξ(p)) = Σ_{α∈Ap} ρα (p) Σ_{i=1}^n ξi (p)2 /|ξ(p)|2 = Σ_{α∈Ap} ρα (p) = 1

as desired.

Dan’s comment: The way this is written it is difficult to be sure you have the argument.

So, Dan's solution of this was to do the following: Use a partition of unity to construct a Riemannian metric g on X. Then define
αp (η) = ⟨η, ξ⟩/⟨ξ, ξ⟩.
I'm not exactly sure what the ξ are supposed to be, but this is basically what I was trying to do. I implicitly use a Riemannian metric in defining | · |2 , which is just ⟨−, −⟩.
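For what it is worth, here is how that construction looks in a single chart (a sketch of mine, assuming sympy; the metric below is a made-up positive-definite example): with any Riemannian metric g and nowhere vanishing ξ, the 1-form α(η) = g(η, ξ)/g(ξ, ξ) is smooth and satisfies α(ξ) = 1.

import sympy as sp

x, y = sp.symbols('x y', real=True)
g = sp.Matrix([[1 + x**2, x*y], [x*y, 2 + x**2*y**2]])   # hypothetical positive-definite metric on A^2
xi = sp.Matrix([sp.exp(x), y])                           # nowhere vanishing field (first entry is always > 0)
alpha_of = lambda eta: (eta.T * g * xi)[0] / (xi.T * g * xi)[0]
print(sp.simplify(alpha_of(xi)))                         # 1, at every point of the chart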
 
Problem 6. (4/10 points). The matrix A = ( a b ; c d ) with a, b, c, d ∈ Z determines a linear map
R2 → R2 which preserves the integral lattice Z2 ⊂ R2 . Therefore, it induces a self map fA of the
2-torus R2 /Z2 . Compute the Lefschetz number of the map fA .

Solution: Admittedly, I do not have a solution. So I will share my progress on this problem,
and what I believe the answer to be.

Consider first the matrices A of the form


 
A = ( a −b ; b a )
where a, b ∈ Z. Observe that
   
A (x, y) = (ax − by, bx + ay) ≃ (ax − by) + (bx + ay)i = (a + bi)(x + yi)
where we use the standard identification of points in R2 with points in C. So, we can view the action
of A as an action on complex numbers by z 7→ (a + bi)z. A fixed point of this map will be such that
z ≡ (a + bi)z mod Z[i], or equivalently ((a − 1) + bi)z ≡ 0 mod Z[i]. The number of solutions to this
is (a − 1)2 + b2 , I think. Now observe that (a − 1)2 + b2 = (1 − a)(1 − a) − (−b)(b), so that
(a − 1)2 + b2 = | det(I − A)|.
I believe this holds in the general case, that is | Fix(fA )| = | det(I − A)|. Since A is just a linear
map, it follows that dfA = A. Hence,
L(fA ) = Σ_{p∈Fix(fA )} Lp (fA ) = Σ_{p∈Fix(fA )} sgn(det(I − A)) = | Fix(fA )| sgn(det(I − A))
= | det(I − A)| sgn(det(I − A)) = det(I − A).
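A small numerical experiment (my addition, assuming numpy) supporting the guess |Fix(fA )| = |det(I − A)|: the fixed points are the classes of (A − I)−1 v mod Z2 for v ∈ Z2 , and counting the distinct ones reproduces |det(I − A)| for a sample matrix.

import numpy as np

A = np.array([[3.0, 1.0], [2.0, 1.0]])      # sample integer matrix with det(I - A) = -2
B = A - np.eye(2)                           # fixed points of f_A are the x with (A - I)x in Z^2
pts = set()
for m in range(-5, 6):
    for n in range(-5, 6):
        p = np.linalg.solve(B, np.array([m, n], dtype=float)) % 1.0
        pts.add((round(p[0], 6) % 1.0, round(p[1], 6) % 1.0))
print(len(pts), int(round(abs(np.linalg.det(np.eye(2) - A)))))   # 2 2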


There are two reasons why I believe this to be true more or less in general. One reason for this is
the following:
L(fA ) = Σ_{k=0}^{2} (−1)^k Tr( fA∗ |_{H^k_dR (R2 /Z2 )} ).
On the other hand, we have this formula for det(I − A):
det(I − A) = Σ_{k=0}^{2} (−1)^k Tr( Λ^k A )
which was shown in a previous homework problem. I cannot help but notice the similarity in these
formulas, and wonder if there is a natural isomorphism between Λ• (Z2 ) and H•_dR (R2 /Z2 ).
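(As a tiny check of the second formula, added by me and assuming sympy: for a general 2 × 2 matrix the alternating sum of traces of exterior powers is 1 − Tr(A) + det(A), which is exactly det(I − A).)

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
lhs = (sp.eye(2) - A).det()
rhs = 1 - A.trace() + A.det()    # Tr(Λ^0 A) - Tr(Λ^1 A) + Tr(Λ^2 A)
print(sp.simplify(lhs - rhs))    # 0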
I also suspect this to be true based on a series of computations which I did (and will not include,
because there is too much scratch work). I took several arbitrary matrices and computed the fixed
points, plotted them by hand in R2 /Z2 , and tried to find a pattern to find them in general. The
only thing I really found is that they form a lattice, which leads me to suspect there is a clean way
to use symmetry to find | Fix(fA )|. It may be nice to see if the curves t 7→ (t, s) and s 7→ (t, s) have
any rotational symmetry, where we embed R2 /Z2 as a surface of revolution in A3 .

Finally, the formula does not quite work, for example when I − A is not invertible. Indeed consider
A as the reflection over the x-axis. Then the entire circle {(x, 0) | x ∈ R} (and likewise {(x, 1/2)}) remains fixed under fA ,
after quotienting by Z2 .

Dan’s comment: A good start.

Apparently you can do this using the Lefschetz fixed point theorem, but here’s another way to
do it (from Dan’s solutions). First, fA is Lefschetz iff I − A is invertible, and the local Lefschetz
number at a fixed point is sgn(det(I − A)) (since dfA = A). Let p ∈ R2 /Z2 be the image of (0, 0) in
the torus. Then, Fix(fA ) = f_{I−A}^{−1} (p). Now since fI−A is a local diffeomorphism [note: I'm not sure
why this is true... what happens if A = I?], every value is regular and furthermore the local degree
is given by sgn(det(I − A)). All together, L(fA ) = deg(fI−A ). Now to compute this degree, we can
appeal to differential forms. We must find a differential form which descends to the torus – the form
ω = dx ∧ dy does exactly this (since it is invariant under translation). Then,
∫_{R2 /Z2 } f∗_{I−A} (dx ∧ dy) = deg(fI−A ) ∫_{R2 /Z2 } dx ∧ dy.

On the other hand, one can compute that f∗_{I−A} (dx ∧ dy) = det(I − A)(dx ∧ dy) so that
deg(fI−A ) ∫_{R2 /Z2 } dx ∧ dy = ∫_{R2 /Z2 } f∗_{I−A} (dx ∧ dy) = det(I − A) ∫_{R2 /Z2 } dx ∧ dy.

After showing that the left and right most integrals are nonzero, we deduce that L(fA ) = deg(fI−A ) =
det(I − A).

In fact, Dan claims that
∫_{R2 /Z2 } dx ∧ dy = 1.
To see why, I think we’re using the fact that dx ∧ dy is the area element on R2 , and the torus can
be viewed as a quotient of the square [0, 1]2 ?

Problem 7. (10/10 points).


a) Let π : S 4 → RP4 be the projection and ω ∈ Ω4 (RP4 ) a 4-form. Prove that ∫_{S 4} π ∗ ω = 0.
b) Fix ω ∈ Ω4 (S 4 ), let X be a compact oriented 4-manifold, and suppose f : [0, 1] × X → S 4


is a smooth map. Prove
∫_X f0∗ ω = ∫_X f1∗ ω.
Solution:
a) Let (U , x) be a chart on RPn and suppose first that ω is compactly supported in U . We
can lift U to π −1 (U ) = U1 ∪ U2 where the Ui are disjoint and diffeomorphic to U . Simi-
larly, we can lift x to a smooth function x : U1 ∪ U2 → x(U ). This produces two smooth
functions xi = x|Ui related by x1 ◦ α|U2 = x2 (where α is the antipodal map of S n ) and similarly with the roles of 1 and 2 reversed. Note that (Ui , xi ) are two charts on S n . We now want to relate (xi^{−1})∗ π ∗ ω to (x^{−1})∗ ω. By definition, π ◦ xi^{−1} = x^{−1} for each i. Hence (xi^{−1})∗ π ∗ ω = (x^{−1})∗ ω. In particular, (x1^{−1})∗ π ∗ ω = (x2^{−1})∗ π ∗ ω.

Let ω̃ = π ∗ ω ∈ Ωn (U1 ∪ U2 ). By the relation x1 ◦ α|U2 = x2 we have
(x1^{−1})∗ ω̃ = (x2^{−1})∗ ω̃ = (x1^{−1})∗ (α|U1 )∗ ω̃.

What this tells us is that, on U1 , the n-forms ω̃ and (α|U1 )∗ ω̃ agree. Consequently,
∫_{U1} ω̃ = ∫_{U1} (α|U1 )∗ ω̃ = deg(α|U1 ) ∫_{U2} ω̃ = (−1)^{n+1} ∫_{U2} ω̃.

Now,
∫_{U1 ∪U2 } ω̃ = ∫_{U1 } ω̃ + ∫_{U2 } ω̃ = (−1)^{n+1} ∫_{U2 } ω̃ + ∫_{U2 } ω̃ = g(n) ∫_{U2 } ω̃
where g(n) = 1 + (−1)^{n+1} , i.e. g(n) = 0 if n is even and 2 if n is odd. So, for n = 4 we have g(4) = 0, and thus ∫_{U1 ∪U2 } π ∗ ω = 0. Applying
a partition of unity always reduces us to the case when ω is compactly supported in the
domain of a chart, so we are done.

Note: Something about the above feels wrong, because it seems like it should imply that
Vol(S 4 ) = 0. But, this is the only thing I can think of doing.
b) First orient [0, 1] × X with the product orientation induced by the standard orientation o[0,1]
on [0, 1] and our chosen orientation on X, oX . Recall that
T(t,p) ([0, 1] × X) = Tt [0, 1] ⊕ Tp X
so that the product orientation o[0,1]×X is defined so that if {v} ∈ o[0,1] (t) and {e1 , ..., en } ∈
oX (p) then {v ⊕ 0, 0 ⊕ e1 , ..., 0 ⊕ en } ∈ o[0,1]×X (t, p). Next we orient ∂([0, 1] × X) =
[{0}×X]∪[{1}×X] using the boundary orientation. Let [ξ] ∈ ν(t,p) (∂([0, 1]×X) ⊂ [0, 1]×X).
Choose a representative ξ with no component in Tp X. If t = 0 then ξ is an outward normal
if ξ ∉ o[0,1] (0) whereas if t = 1 then ξ is an outward normal if ξ ∈ o[0,1] (1). The spaces
{t} × X for t = 0, 1 are naturally diffeomorphic to X, so by orienting them we induce an
orientation on X. Then, we can check if this induced orientation agrees with oX or not.

The orientation o{t}×X is given by the following: If ξ is an outward normal at (t, p)
then {0 ⊕ e1 , ..., 0 ⊕ en } ∈ o{t}×X (t, p) if
{ξ ⊕ 0, 0 ⊕ e1 , ..., 0 ⊕ en } ∈ o[0,1]×X (t, p).
Let {e1 , ..., en } ∈ oX (p). Since ξ ∈ o[0,1] (1) but ξ ∉ o[0,1] (0), it follows that {0 ⊕ e1 , ..., 0 ⊕
en } ∈ o{1}×X (1, p) but {0 ⊕ e1 , ..., 0 ⊕ en } ∉ o{0}×X (0, p). If πt : {t} × X → X is the projection,
then dπt (0 ⊕ ei ) = ei . Hence, π1 is an orientation-preserving diffeomorphism whereas π0 is
orientation-reversing. In this sense we write
∂([0, 1] × X) = −X0 ∪ X1 , where Xt denotes {t} × X.
Now suppose g : Z → Y is a smooth map between oriented manifolds, where Z = ∂W and
W is compact, oriented. Suppose n = dim Z = dim Y and g extends to a map on all of W .
Then deg(g) = 0 and for any ω ∈ Ωnc (Y ) we have
∫_Z g ∗ ω = deg(g) ∫_Y ω = 0.

In particular, when Y is compact we may simply assume ω ∈ Ωn (Y ). If we apply this with


W = [0, 1] × X, Z = −X0 ∪ X1 , Y = S 4 , and g the map defined piecewise by
g(t, p) = f0 (p) if (t, p) ∈ X0 , and g(t, p) = f1 (p) if (t, p) ∈ X1 .
Then g trivially extends to f , and by applying the above orientation analysis and properties
of the disjoint union,
− ∫_X f0∗ ω + ∫_X f1∗ ω = ∫_{−X0} f0∗ ω + ∫_{X1} f1∗ ω = ∫_{−X0 ∪X1} g ∗ ω = 0.

Problem 8. (20/20 points). Let V be a 4 dimensional real inner product space and X the Grass-
mannian of 2-planes in V .
a) Prove that the map f : X → X which maps each 2-plane to its orthogonal complement is
not smoothly homotopic to the identity map.
b) Choose subspaces L ⊂ H ⊂ V such that dim L = 1 and dim H = 3. Define
Z1 = {W ∈ X | L ⊂ W }
Z2 = {W ∈ X | W ⊂ H}
Compute the mod 2 intersection number #2 (Z1 , Z2 ).

Solution:
a) A couple of interesting remarks first: since f is an involution, we have that
1 = deg(IdX ) = deg(f ◦ f ) = deg(f )2 .
Hence deg(f ) = ±1. Moreover, one can actually show that f is a diffeomorphism. So, one
method of attack is to show that f is an orientation-reversing diffeomorphism. I spent en-
tirely too much time thinking of the orientation on Gr(2, 4) and trying to show this.

Dan’s comment: I’m sure that is time well-spent, even if you didn’t get to a conclusion.

Instead, we will use Lefschetz numbers. Recall that


L(f ) = #X×X (Graph(f ), Diag(X × X)).
Since f takes a 2-plane W to its orthogonal complement W ⊥ it follows that f has no
fixed points. Thus, Graph(f ) ∩ Diag(X × X) = Ø and L(f ) = 0. On the other hand,
L(IdX ) = χ(X) = 2. Since the Lefschetz number is a homotopy invariant, it follows that f
is not homotopic to IdX .

Note that this argument does not work if χ(X) = 0. For example, consider Gr(1, 2) ≃ S 1 ,
which has χ(S 1 ) = 0 (since every compact odd dimensional manifold has vanishing Euler
characteristic). Indeed, if you look at the map f in this case then rotating L by a small
amount also rotates L⊥ in the same direction by the same amount. This implies that the
differential is orientation preserving, and we actually see that f ≃ IdX .
b) Let L1 = L and choose some L0 ⊄ H. Choose some rotation R in V taking L0 to L1 . Since
L1 ⊂ H, this rotation is nontrivial. Let θ be the angle of rotation and denote by Rt the
rotation in the same direction as R by an angle of tθ (so R1 = R, R1/2 ◦ R1/2 = R, etc.),
where 0 ≤ t ≤ 1. Define Zt = R_{1−t}^{−1} (Z1 ). It is clear that the notation is consistent, i.e. since
R0 = IdV then the two definitions of Z1 coincide. One way of writing Zt is
Zt = {W ∈ X | Lt ⊂ W }
where Lt = R_{1−t}^{−1} L1 . The maps ht : Z1 → X, ht (W ) = R_{1−t}^{−1} (W ), give a smooth homotopy between the inclusion ι̇1 : Z1 → X (at t = 1) and a diffeomorphism onto Z0 (at t = 0); since mod 2 intersection numbers are homotopy invariants, #2 (Z0 , Z2 ) = #2 (Z1 , Z2 ). But since L0 ⊄ H, any W ∈ Z0 cannot
be in Z2 . Consequently #2 (Z1 , Z2 ) = 0.
