
BASIC QUALIFYING EXAM

RAYMOND CHU

These are my solutions for the Basic Qualifying Exam at UCLA. The exams can be found here. I wrote
these solutions up while studying for the Fall 2020 Basic Exam. These notes contain solutions to most
of the basic exam problems from Spring 2010 through Spring 2020.

I am very thankful to Jerry Luo, Yotam Yaniv, Joel Barnett, Steven Truong, Jas Singh, Grace Li, Xinzhe
Zuo, John Zhang, and James Leng for many useful discussions on these problems.

Contents
1. Spring 2010 1
2. Fall 2010 5
3. Spring 2011 8
4. Fall 2011 11
5. Spring 2012 14
6. Fall 2012 17
7. Spring 2013 20
8. Fall 2013 25
9. Spring 2014 28
10. Fall 2014 32
11. Spring 2015 36
12. Fall 2015 39
13. Spring 2016 42
14. Fall 2016 47
15. Spring 2017 50
16. Fall 2017 53
17. Spring 2018 57
18. Fall 2018 61
19. Spring 2019 67
20. Fall 2019 72
21. Spring 2020 76

1. Spring 2010
Problem 1. Recall that if A ∈ R^{n×n} and B ∈ R^{n×n} then AB is invertible if and only if A and B are
invertible. Define the matrices U := [u_1, ..., u_n] and Y := [y_1, ..., y_n]; then U + Y is invertible if and
only if U^T(U + Y) = I + U^T Y is invertible. And I + U^T Y is invertible if and only if the columns {u_i + y_i}
form a basis of R^n.
Notice that ||U^T Y||_2^2 := ∑_{i,j=1}^n (U^T Y)_{ij}^2 = Tr(Y^T U U^T Y) = Tr(Y^T Y) = ∑_{i,j=1}^n Y_{ij}^2 < 1. So it suffices to
show that if ||B||_2 < 1 then I + B is invertible. Indeed, fix x such that (I + B)x = 0; then

x_i + ∑_{j=1}^n x_j B_{ij} = 0   for all i,

Date: March 22, 2021.



so

x_i = −∑_{j=1}^n x_j B_{ij}.

Let x := (x_1, ..., x_n)^T and y := −(∑_{j=1}^n x_j B_{1j}, ..., ∑_{j=1}^n x_j B_{nj})^T, so x = y. Then by taking norms we get

||x||^2 = ||y||^2 = ∑_{i=1}^n (∑_{j=1}^n x_j B_{ij})^2 ≤ ||x||^2 ∑_{i,j=1}^n B_{ij}^2 < ||x||^2,

where the first inequality is due to Cauchy-Schwarz and the last inequality holds whenever ||x||^2 ≠ 0,
due to ||B||_2^2 < 1. Therefore x = 0, so I + U^T Y is invertible, so {u_1 + y_1, ..., u_n + y_n} is linearly
independent.
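As a quick numerical sanity check of this argument (not part of the exam solution), the sketch below builds a random orthonormal U and a perturbation Y with Frobenius norm strictly less than 1, and verifies that I + U^T Y, and hence U + Y, is invertible; the dimension, seed, and norm factor are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Random orthonormal columns u_1, ..., u_n via a QR factorization.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
# Perturbation with Frobenius norm strictly less than 1.
Y = rng.standard_normal((n, n))
Y *= 0.9 / np.linalg.norm(Y, 'fro')

B = U.T @ Y                                   # ||B||_F = ||Y||_F < 1 since U is orthogonal
assert np.linalg.norm(B, 'fro') < 1
# I + B should be invertible: its smallest singular value stays away from 0.
smin = np.linalg.svd(np.eye(n) + B, compute_uv=False).min()
print("||B||_F =", np.linalg.norm(B, 'fro'), " min singular value of I+B =", smin)
assert smin > 0 and np.linalg.matrix_rank(U + Y) == n
```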
Problem 2. By the spectral theorem there exists a basis of orthonormal eigenvectors of A. Write the
eigenvectors as {v_1, ..., v_n}, where v_i is associated with λ_i as defined in the problem. Then for any
fixed k we have, for U := span{v_1} ⊕ ... ⊕ span{v_k}, which is k-dimensional,

max_{V, dim(V)=k} min_{||x||=1, x∈V} (Ax, x) ≥ min_{||x||=1, x∈U} (Ax, x) = λ_k,

where the last equality follows from the computation (for x = ∑_{i=1}^k α_i v_i with ||x|| = 1)

(Ax, x) = (∑_{i=1}^k α_i λ_i v_i, ∑_{i=1}^k α_i v_i) = ∑_{i=1}^k α_i^2 λ_i ≥ ∑_{i=1}^k α_i^2 λ_k = λ_k,

since ∑ α_i^2 = 1 due to ||x|| = 1 and {v_i} being orthonormal, with equality attained at x = v_k.

For the reverse inequality, fix a k-dimensional subspace V. Since dim(V) + dim(span{v_k, ..., v_n}) = k + (n − k + 1) = n + 1 > n,
the two subspaces intersect nontrivially, so there exists x ∈ V ∩ span{v_k, ..., v_n} with ||x|| = 1. Writing
x = ∑_{i=k}^n α_i v_i gives (Ax, x) = ∑_{i=k}^n α_i^2 λ_i ≤ λ_k, so min_{||x||=1, x∈V}(Ax, x) ≤ λ_k. As V
is arbitrary we conclude.
Problem 3. If ST = T S and S, T are normal then we have a basis of orthonormal eigenvectors for T
i.e. T (vi ) = λi vi . Then
λi S(vi ) = ST (vi ) = T S(vi )
so S(v_i) is either an eigenvector of T with eigenvalue λ_i or S(v_i) = 0. In either case S maps E(λ_i, T)
into E(λ_i, T), and the restriction of S to E(λ_i, T) is a normal operator. So by the spectral theorem there
exists a basis of eigenvectors w_j of E(λ_i, T) with S(w_j) = α_j w_j. Taking the union of these eigenvectors
over all E(λ_i, T), and using V = ⊕_{i=1}^n E(λ_i, T), we conclude.
Problem 4a. As A is symmetric and positive semidefinite, all of its eigenvalues are non-negative. But the
trace is the sum of the eigenvalues, and the trace is zero, so all of its eigenvalues must be zero. Therefore,
by the spectral theorem A is similar to the zero matrix, so A is the zero matrix.
Problem 4b. Using that T = T^∗ (T is self-adjoint) we have

T^2 = 4T − 3I,

which implies the minimal polynomial divides x^2 − 4x + 3 = (x − 1)(x − 3), so all of its eigenvalues are
1 or 3. Since T is self-adjoint with positive eigenvalues, it is positive definite.
Problem 5. The minimal polynomial is M(t) = ∏_{i=1}^n (t − λ_i)^{a_i − 1}, so both matrices have a largest Jordan
block of size a_i − 1 for λ_i. But as P(t) = ∏_{i=1}^n (t − λ_i)^{a_i}, the total size of the Jordan blocks of λ_i is a_i.
So each matrix must have one Jordan block of size a_i − 1 and one of size 1 for λ_i. Therefore, both
matrices have the same Jordan canonical form, so they are similar to one another.
Problem 6a. By direct computation we get the factorization

A = [2 3; 1 1] · [2 1; 0 2] · [−1 3; 1 −2].

Problem 6b. We have

A^n = [2 3; 1 1] · [2^n  n·2^{n−1}; 0  2^n] · [−1 3; 1 −2];

take n = 100.
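A small check of this closed form (not part of the solution): the matrices S = [2 3; 1 1], J = [2 1; 0 2], S^{-1} = [−1 3; 1 −2] above are my reading of the garbled display, which gives A = S J S^{-1} = [4 −4; 1 0]. The sketch compares the closed form for A^n against repeated multiplication in exact integer arithmetic.

```python
# Assumed reconstruction: S = [[2,3],[1,1]], J = [[2,1],[0,2]], Sinv = [[-1,3],[1,-2]].
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

S, J, Sinv = [[2, 3], [1, 1]], [[2, 1], [0, 2]], [[-1, 3], [1, -2]]
A = matmul(matmul(S, J), Sinv)                       # [[4, -4], [1, 0]]

n = 100
Jn = [[2**n, n * 2**(n - 1)], [0, 2**n]]             # J^n for a 2x2 Jordan block with eigenvalue 2
closed_form = matmul(matmul(S, Jn), Sinv)            # S J^n S^{-1}

power = [[1, 0], [0, 1]]
for _ in range(n):                                    # brute-force A^n
    power = matmul(power, A)

assert power == closed_form
print(closed_form)
```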

Problem 6c. By direct computation we get

A^n = [a_n  a_{n+1}; a_{n−1}  a_n].
Problem 7. This is a typical diagonalization argument. Indeed, enumerate the rationals as {q_n}. Then
{f_n(q_1)} is a bounded sequence in R, so there is a convergent subsequence n_i^{(1)} and a limit f(q_1), and
{f_{n_i^{(1)}}(q_2)} is also bounded, so there exists a subsequence n_i^{(2)} of n_i^{(1)} and a limit f(q_2). Repeat this for
all k and define the diagonal subsequence n_k := n_k^{(k)}. Then for any j we have f_{n_m^{(j)}}(q_j) → f(q_j), so for any
fixed ε > 0 there exists N such that for any m ≥ N

|f_{n_m^{(j)}}(q_j) − f(q_j)| ≤ ε.

By construction {n_k}_{k≥j} is a subsequence of {n_m^{(j)}}, so we also have for large enough k that

|f_{n_k}(q_j) − f(q_j)| ≤ ε.
Problem 8. As K is a closed subset of a complete metric space, K is complete. Assume K is also totally
bounded, and let {x_n} be an arbitrary sequence in K. There exists an integer N such that K ⊂ ∪_{i=1}^N B_1(z_i)
for some z_i ∈ K. If {x_n} takes only finitely many values we are done, so assume it takes infinitely many;
then there exists an i such that infinitely many terms of x_n lie in B_1(z_i). Let y_1 := z_i and let the
subsequence of terms lying in B_1(z_i) be denoted {x_n^{(1)}}. Repeat the argument to find a ball of radius 1/2
with center y_2 := z_i^{(2)} containing infinitely many terms of {x_n^{(1)}}, and denote this new subsequence {x_n^{(2)}}.
We can do this for all n with balls of radius 1/2^n and centers y_n := z_i^{(n)}; let w_n := x_n^{(n)}. Then we claim
w_n is Cauchy. Indeed, if n ≤ m,

d(w_n, w_m) ≤ d(w_n, z_i^{(n)}) + d(z_i^{(n)}, w_m) ≤ 1/2^{n−1},

where the last inequality is due to w_n, w_m ∈ B_{1/2^n}(z_i^{(n)}). Then completeness implies this Cauchy sequence
converges, so {x_n} has a convergent subsequence.
Problem 9. Since ∇f(x_0, y_0, z_0) ≠ 0 we can WLOG assume that ∂_x f(x_0, y_0, z_0) ≠ 0. Then as f ∈ C^1
with f : R^3 → R such that f(x_0, y_0, z_0) = 0 and ∂_x f(x_0, y_0, z_0) ≠ 0, we can apply the Implicit
Function Theorem to find an open neighborhood U ⊂ R^2 with (y_0, z_0) ∈ U on which ∂_x f ≠ 0
and a function ϕ : U → ϕ(U) such that

f(ϕ(s, t), s, t) = 0

and ∂_{x_i}ϕ = −∂_{x_i}f · (∂_x f)^{−1}. Take the surface to be (ϕ(s, t), s, t); then it is a differentiable surface over
U due to the derivative formula above, f ∈ C^1, and ∂_x f ≠ 0 in U.
Problem 10a. Fix u = (u_1, u_2); then f(tu) − f(0) = t^2 u_1 u_2 / √(t^2 u_1^2 + t^2 u_2^2), so we have

(f(tu) − f(0)) / t = u_1 u_2 / √(u_1^2 + u_2^2).

Therefore, the directional derivative exists in every direction at (0, 0) and equals u_1 u_2 / √(u_1^2 + u_2^2).
u1 +u2
Problem 10b. If f were differentiable at (0, 0) then the directional derivative in every direction u would be
given by Df(0) · u, which implies that the directional derivatives are linear with respect to the direction.
But the formula from part (a) is not linear in u (for example it vanishes in the directions e_1 and e_2 but
not in the direction e_1 + e_2), so f cannot be differentiable at the origin.
Problem 11. Fix ε > 0 and set M := max{N_1, N_2} below. There exists an N_1 such that if n ≥ N_1 then

∑_{k=n}^∞ |a_k| < ε,

and ∑ a_n converges to some a, so there exists an N_2 such that if n ≥ N_2 then

|∑_{k=1}^n a_k − a| < ε.

Then as σ is a bijection on N there exists an N_3 such that {1, ..., M} ⊂ {σ(1), ..., σ(N_3)}, i.e. if n > N_3
then σ(n) ∉ {1, ..., M}. Take any N ≥ max{N_1, N_2, N_3}; then

|∑_{n=1}^N a_{σ(n)} − a| ≤ |∑_{n=1}^N a_{σ(n)} − ∑_{n=1}^M a_n| + |∑_{n=1}^M a_n − a|
 = |∑_{i ≤ N : σ(i) ∉ {1,...,M}} a_{σ(i)}| + ε
 ≤ ∑_{i=N_1+1}^∞ |a_i| + ε ≤ 2ε,

so a_{σ(n)} → a, i.e. ∑ a_{σ(n)} = a.
Problem 12a. False. Take f_n to be a triangle (tent) function supported on [0, 1/n^2] with ∫_0^1 f_n(x) dx = 1/n.
Then max_{x∈[0,1]} f_n = 2n → ∞ while ∫_0^1 f_n(x) dx = 1/n → 0.
Problem 12b. The typewriter sequence: f_1 = χ_{[0,1]}, f_2 = χ_{[0,1/2]}, f_3 = χ_{[1/2,1]}, f_4 = χ_{[0,1/4]}, f_5 = χ_{[1/4,1/2]},
f_6 = χ_{[1/2,3/4]}, f_7 = χ_{[3/4,1]}, and so on. This sequence does not converge to 0 at any point, but it converges
to the 0 function in L^1. These functions are not continuous, but we can modify them into tent functions
to get the desired result.

2. Fall 2010
Problem 1a). We first prove that if K ∩ F = ∅ then inf_{x∈K, y∈F} ρ(x, y) > 0. Indeed, assume this fails;
then there exists a sequence {(x_j, y_j)} ⊂ K × F with

lim_{j→+∞} d(x_j, y_j) = 0.

As K is compact there exists a subsequence x_{j_k} ⊂ K and x ∈ K such that x_{j_k} → x. This implies

lim_{k→+∞} d(x, y_{j_k}) = 0

thanks to the triangle inequality. But this implies x is a limit point of the closed set F, so we must have
x ∈ F. Therefore x ∈ K ∩ F, which is a contradiction.

For the reverse direction just note that if x ∈ K ∩ F then

0 ≤ inf_{x∈K, y∈F} d(x, y) ≤ d(x, x) = 0,

so if the infimum is positive we must have K ∩ F = ∅.
Problem 1b. If f is a continuous function then its graph

G(f) := {(x, f(x)) : x ∈ R}

is a closed subset of R^2. Let F := {(x, 0) : x ∈ R}; then G(f) and F are closed subsets of R^2. Taking the
standard metric on R^2, the sets G(exp(−x^2)) and F are disjoint since exp(−x^2) ≠ 0 for any x ∈ R. But

inf_{p∈G(exp(−x^2)), q∈F} d(p, q) = 0,

since d((x, 0), (x, exp(−x^2))) = exp(−x^2) → 0 as x → +∞.
Problem 2a. We say a bounded function f on [a, b] is Riemann integrable if for any ε > 0 we can find
a partition such that the lower Riemann sum is within ε of the upper Riemann sum with respect to this
partition. I.e. if ε > 0 we want to find a partition P = {a = x_0 < x_1 < ... < x_N = b} such that

∑_{i=1}^N inf_{x∈[x_{i−1},x_i]} f(x) Δx_i + ε ≥ ∑_{i=1}^N sup_{x∈[x_{i−1},x_i]} f(x) Δx_i,

where Δx_i := x_i − x_{i−1}.
Problem 2b. Let f be continuous on [a, b]; then it is uniformly continuous, so there exists a δ > 0 such
that if |x − y| ≤ δ then |f(x) − f(y)| ≤ ε/(b − a). Let P be any partition with mesh(P) := max_i Δx_i < δ; then

∑_{i=1}^N | inf_{x∈[x_{i−1},x_i]} f(x) − sup_{x∈[x_{i−1},x_i]} f(x) | Δx_i ≤ ∑_{i=1}^N (ε/(b − a)) Δx_i = ε.

This implies

∑_{i=1}^N inf_{x∈[x_{i−1},x_i]} f(x) Δx_i + ε ≥ ∑_{i=1}^N sup_{x∈[x_{i−1},x_i]} f(x) Δx_i

as desired.
Problem 3a. If f ∈ C^3(R) then we have for any x, y ∈ R

f(x) = f(y) + f'(y)(x − y) + (f''(ξ)/2)(x − y)^2

for some ξ between x and y, and if g : R^2 → R (with continuous second derivatives) then for any x, y ∈ R^2

g(x) = g(y) + ∇g(y) · (x − y) + (1/2)(x − y)^T D^2 g(ξ)(x − y)

for some ξ on the segment between x and y, where D^2 g is the Hessian matrix.

Problem 3b. Fix u, v ∈ R^2 with u = (u_1, u_2), v = (v_1, v_2). Let h : R → R be defined by

h(t) := g(tu + (1 − t)v);

then

h'(t) = ∂_x g(tu + (1 − t)v)(u_1 − v_1) + ∂_y g(tu + (1 − t)v)(u_2 − v_2),

so

h'(0) = ∇g(v) · (u − v),

and

h''(t) = ∂^2_{xx} g(tu+(1−t)v)(u_1−v_1)^2 + ∂^2_{yy} g(tu+(1−t)v)(u_2−v_2)^2 + 2∂^2_{xy} g(tu+(1−t)v)(u_1−v_1)(u_2−v_2)
       = (u − v)^T D^2 g(tu + (1 − t)v)(u − v).

By Taylor's theorem for single-variable functions with remainder,

h(1) = h(0) + ∇g(v) · (u − v) + (1/2)(u − v)^T D^2 g(ξ(v))(u − v)

for some ξ(v) on the segment from v to u, but h(1) = g(u) and h(0) = g(v), so we arrive at the desired result.
Problem 4a. We claim that the family of finite linear combinations {∑_{i=1}^N α_i e^{β_i x + γ_i y}} is dense in
C([0, 1]^2). Indeed, this family is an algebra because e^{β_i x+γ_i y} e^{β_j x+γ_j y} = e^{(β_i+β_j)x+(γ_i+γ_j)y} and the
family is closed under finite linear combinations. This family vanishes nowhere since e^x is never 0, and
if (x_1, y_1) ≠ (x_2, y_2) then WLOG x_1 ≠ x_2, and f(x, y) := e^x satisfies e^{x_1} ≠ e^{x_2}, so the family separates
points. Therefore, as [0, 1]^2 is compact, Stone-Weierstrass implies this family is dense in C([0, 1]^2) with
the sup norm. This implies that if f ∈ C([0, 1]^2) then for any ε > 0 there is an N such that

sup_{(x,y)∈[0,1]^2} |f(x, y) − ∑_{i=1}^N α_i e^{β_i x + γ_i y}| = sup_{(x,y)∈[0,1]^2} |f(x, y) − ∑_{i=1}^N α_i e^{β_i x} e^{γ_i y}| < ε.

Let g_i(x) := α_i e^{β_i x} and h_i(y) := e^{γ_i y}, and we have arrived at the desired conclusion.
Problem 4b. No. If it were true then for any ε > 0 we could find {g_i(x)}_{i=1}^N such that

|f(x, y) − ∑_{i=1}^N (g_i(x))^2| < ε ⟹ f(x, y) > ∑_{i=1}^N (g_i(x))^2 − ε ≥ −ε.

Letting ε → 0 we get that f(x, y) ≥ 0. So if this were true then any continuous function such that
f(x, y) = f(y, x) would have to be non-negative; take f(x, y) = −x^2 for a counterexample. This implies the
claim is false.
Problem 5a. Recall span(S) is defined as the smallest subspace that contains S. Let V = R^2,
S = {(x, 2x + 1) : x ∈ R} and S′ = {(x, 3x + 1) : x ∈ R}; then span(S) = span(S′) = R^2, since the only
subspace containing either line is R^2. So span(S) ∩ span(S′) = R^2, but S ∩ S′ = {(0, 1)}, so span(S ∩ S′)
is the y-axis, a proper subspace of R^2.
Problem 5b.
Problem 6. By Cayley-Hamilton, if p is the characteristic polynomial of T then p(T) = 0. The roots of p
are the eigenvalues of T, so p(0) ≠ 0 since T is invertible. So p(T) = ∑_{i=1}^n α_i T^i + cI = 0
where c ≠ 0, and then

−T(∑_{i=1}^n (α_i/c) T^{i−1}) = I

with T^0 := I, so T^{−1} = −∑_{i=1}^n (α_i/c) T^{i−1} = q(T) for a polynomial q.
Problem 7. Let {v_i}_{i=1}^n be an orthonormal basis of V and {w_i}_{i=1}^m be an orthonormal basis of W; then
n ≤ m since dim(V) ≤ dim(W). Let T(v_i) = w_i for i = 1, ..., n. Then

(T(v_i), T(v_j))_W = (w_i, w_j)_W = δ_{ij}   and   (v_i, v_j)_V = δ_{ij},

so

(T(v_i), T(v_j))_W = (v_i, v_j)_V.

This implies for any v, v′ ∈ V that

(T(v), T(v′))_W = (v, v′)_V.
Problem 8. Let x ∈ W_1^⊥ + W_2^⊥; then x = w_1 + w_2 with w_i ∈ W_i^⊥, so for any z ∈ W_1 ∩ W_2

(x, z) = (w_1, z) + (w_2, z) = 0,

so x ∈ (W_1 ∩ W_2)^⊥. Now we compute dimensions:

dim((W_1 ∩ W_2)^⊥) = dim(V) − dim(W_1) − dim(W_2) + dim(W_1 + W_2)
                   = dim(W_1^⊥) + dim(W_2^⊥) − dim((W_1 + W_2)^⊥)

and

dim(W_1^⊥ + W_2^⊥) = dim(W_1^⊥) + dim(W_2^⊥) − dim(W_1^⊥ ∩ W_2^⊥),

so it suffices to show W_1^⊥ ∩ W_2^⊥ ⊂ (W_1 + W_2)^⊥, since that implies

dim((W_1 ∩ W_2)^⊥) − dim(W_1^⊥ + W_2^⊥) = dim(W_1^⊥ ∩ W_2^⊥) − dim((W_1 + W_2)^⊥) ≤ 0.

Indeed, if x ∈ W_1^⊥ ∩ W_2^⊥ then for any w_1 + w_2 ∈ W_1 + W_2 we have (x, w_1 + w_2) = (x, w_1) + (x, w_2) = 0,
so x ∈ (W_1 + W_2)^⊥. This implies

dim((W_1 ∩ W_2)^⊥) ≤ dim(W_1^⊥ + W_2^⊥),

but we already have W_1^⊥ + W_2^⊥ ⊂ (W_1 ∩ W_2)^⊥, so W_1^⊥ + W_2^⊥ = (W_1 ∩ W_2)^⊥.

Problem 9a. Solving x = A−1 (Bx + c) gives x = (−1, −1).


Problem 9b. No, take x0 = (0, 0) then for all n xn has positive components so it cannot converge to
(−1, −1).
Problem 10. Note that as f is Lipschitz with constant M, each x_k(t) is also Lipschitz with constant
M, so the family is equicontinuous. The family is also uniformly bounded on any compact subset since
x_k(0) = 0. So Arzelà-Ascoli implies the existence of a subsequence that converges uniformly to
a limit x(t) on [−N, N]. So it suffices to show that

x(t) = ∫_0^t f(x(s), s) ds.
Problem 11. By Jensen's inequality,

∫_0^1 |f′(x)|^2 dx ≥ (∫_0^1 f′(x) dx)^2 = 1,

and the minimum is attained by f(x) = x. This minimizer is unique thanks to the strict convexity of |·|^2.
Indeed, if f and g are both minimizers then for λ ∈ (0, 1) we have |λf′(x) + (1 − λ)g′(x)|^2 ≤ λ|f′(x)|^2 + (1 − λ)|g′(x)|^2,
with the inequality strict unless f′(x) = g′(x). But since f ≠ g and both satisfy the boundary conditions, we
know that f′(x) ≠ g′(x) on a set of positive measure in [0, 1]. This means

∫_0^1 |λf′(x) + (1 − λ)g′(x)|^2 dx < λ∫_0^1 |f′(x)|^2 dx + (1 − λ)∫_0^1 |g′(x)|^2 dx = 1,

which is a contradiction.
Problem 12. Note

∫_{D(t)} f(x, t) dx = ∫_{θ=0}^{2π} ∫_{ρ=0}^{r(t)} ρ f(ρ, θ, t) dρ dθ.

So one has

d/dt ∫_{D(t)} f(x, t) dx = ∫_{θ=0}^{2π} d/dt ∫_{ρ=0}^{r(t)} ρ f(ρ, θ, t) dρ dθ
 = ∫_{D(t)} f_t dx + ∫_{θ=0}^{2π} r(t) f(r(t), θ, t) r′(t) dθ.

3. Spring 2011
Problem 1. We know that if the eigenvalues of A are λ_1, λ_2, λ_3 then the characteristic polynomial is

χ(t) = (t − λ_1)(t − λ_2)(t − λ_3)
     = t^3 − (λ_1 + λ_2 + λ_3)t^2 + (λ_1λ_2 + λ_2λ_3 + λ_3λ_1)t − λ_1λ_2λ_3
     = t^3 − 4t^2 + (λ_1λ_2 + λ_2λ_3 + λ_3λ_1)t − 2,

where we solved for the determinant using the hint. Using the given identities we get λ_1λ_2 + λ_2λ_3 + λ_3λ_1 = 5, so

χ(t) = t^3 − 4t^2 + 5t − 2 = (t − 1)^2(t − 2).

Therefore, the minimal polynomial is either (t − 1)(t − 2) or (t − 1)^2(t − 2), which means the Jordan form is either

J = diag(1, 1, 2)   or   J = [1 1 0; 0 1 0; 0 0 2].
Problem 2. If A is diagonalizable then A = S^{−1}DS where D is a diagonal matrix, so A^k = S^{−1}D^kS,
which means A^k is diagonalizable.

Now assume A^k is diagonalizable. As F = C we can find a Jordan matrix J and an invertible matrix
V such that A = V^{−1}JV; then A^k = V^{−1}J^kV. But as the Jordan form is unique (up to permutation of the
blocks) and A^k is diagonalizable, J^k must be a diagonal matrix. This occurs if and only if there are no
1s above the diagonal of J, since we cannot have a zero eigenvalue. So J must be a diagonal matrix, so
A is diagonalizable.
Problem 3. We claim that when H is Hermitian we can find a basis of V consisting of orthonormal
eigenvectors of H.

We prove this by induction on the dimension. It is trivial when the vector space is 1-dimensional. So
assume the claim holds for any vector space of dimension less than n, and let H be a Hermitian operator
on an n-dimensional complex inner product space V. As the field is complex, there exists an eigenvector
v_1 of length 1. Let U := span(v_1); then V = U ⊕ U^⊥, and as H(U) ⊂ U we have H(U^⊥) ⊂ U^⊥ thanks to H
being self-adjoint. And dim(U^⊥) = n − 1 < n, so we can consider the restricted operator H|_{U^⊥} and apply
the induction hypothesis to find {v_2, ..., v_n} such that H|_{U^⊥}(v_i) = λ_i v_i, (v_i, v_j) = δ_{ij}, and
U^⊥ = span(v_2, ..., v_n). This implies V = span(v_1, ..., v_n) and H(v_i) = λ_i v_i with (v_i, v_j) = δ_{ij}.

Now fix an orthonormal basis {e_1, ..., e_n} of V, where every linear operator L is written in matrix form as
[L(e_1), ..., L(e_n)]. For the unitary operator U defined by U(v_i) = e_i (it is unitary since it maps an orthonormal
basis to an orthonormal basis) we have

U H U^{−1}(e_i) = λ_i e_i,

so U H U^{−1} = diag(λ_1, ..., λ_n). And as U is unitary, U^{−1} = U^∗.

Problem 4.
Problem 5. If Ax = b then for any y ∈ ker(A^T)

(b, y) = (Ax, y) = (x, A^T y) = 0,

so b ∈ (ker(A^T))^⊥; that is, range(A) ⊂ (ker(A^T))^⊥. And dim((ker(A^T))^⊥) = dim(range(A)), so the two
subspaces are equal, which completes the proof.

Problem 6. Write w = Ay + (w − Ay), where w − Ay ∈ Range(A)^⊥. Then for any x

||Ax − w||^2 = ||Ax − Ay||^2 + ||Ay − w||^2 ≥ ||Ay − w||^2,

where we used (Ax − Ay) ⊥ (Ay − w). So the minimizers are exactly the y such that w − Ay ∈ Range(A)^⊥,
i.e. for any x ∈ V

0 = (w − Ay, Ax) = (A^∗w − A^∗Ay, x),

i.e. the y such that A^∗Ay = A^∗w, as desired.
Problem 7. Follows by IVT. Indeed, f (1) = −1 and f (0) = 1 so by IVT there exists a root between 0
and 1.
Problem 8a.

f(x) = 1 if x ∈ Q ∩ [0, 1],   f(x) = −1 otherwise.

Problem 8b.

f_n(x) = n for x ∈ (0, 1/n],   f_n(x) = 0 otherwise.

Then f_n → 0 everywhere, but

∫_0^1 f_n(x) dx = 1 ≠ ∫_0^1 0 dx = 0.
Problem 9. Assume there exists a point with f(x^∗) > 0; then continuity implies there is a δ-ball around x^∗
on which f(x) > f(x^∗)/2, so

∫_a^b f(x) dx ≥ ∫_{x^∗−δ}^{x^∗+δ} f(x) dx ≥ δ f(x^∗) > 0,

which is a contradiction.
Problem 10a. Let f : G ⊂ R^2 → R. We say f is differentiable at (x_0, y_0) ∈ G if there exists a linear
map Df(x_0, y_0) : R^2 → R such that, for x := (x_0, y_0),

lim_{||h||→0} |f(x + h) − f(x) − Df(x) · h| / ||h|| = 0.
Problem 10b. Define Df(x) := (∂_x f(x), ∂_y f(x)). Then

f(x + h) − f(x) = ∑_{i=1}^2 (f(p_{i+1}) − f(p_i))

with p_1 := x, p_2 := (x_0 + h_1, y_0) and p_3 := (x_0 + h_1, y_0 + h_2). By the mean value theorem applied in
each coordinate,

f(x + h) − f(x) = ∑_{i=1}^2 h_i ∂_{x_i} f(q_i)

with q_i → x as ||h|| → 0. Then

|f(x + h) − f(x) − Df(x) · h| / ||h|| = |h_1(∂_x f(q_1) − ∂_x f(x)) + h_2(∂_y f(q_2) − ∂_y f(x))| / ||h||
 ≤ |∂_x f(q_1) − ∂_x f(x)| + |∂_y f(q_2) − ∂_y f(x)|,

which converges to 0 as h → 0 thanks to the continuity of the partial derivatives.
Problem 11a. We claim that all connected sets in R are intervals. Indeed, let E ⊂ R be connected;
the map f(x) := x is continuous, so f(E) = E is connected. Assume for the sake of contradiction that
E is not an interval. Then there exist x, y ∈ E and z ∈ E^c with x < z < y, but the intermediate value
theorem (for continuous functions on connected sets) implies z ∈ f(E) = E, which is a contradiction. So all
connected sets in R are intervals, and intervals are arcwise connected.

Problem 11b. Take the topologist's sine curve

G := {(x, sin(1/x)) : x ∈ (0, 1]} ∪ {(0, 0)}.

Note that {(x, sin(1/x)) : x ∈ (0, 1]} is connected since the map x ↦ (x, sin(1/x)) is continuous on (0, 1],
and (0, 0) lies in the closure of this curve, so adding it keeps the set connected. But G is not path
connected, hence not arcwise connected, since there is no way to extend sin(1/x) to a continuous function
on [0, 1].
Problem 12a. Note that

T(f)(x) − T(g)(x) = ∫_0^x (f(s) − g(s)) ds,

so

||T(f) − T(g)||_{L^∞} ≤ ∫_0^c ||f − g||_{L^∞} ds = c ||f − g||_{L^∞},

so T is a contraction map and we have an f such that T(f) = f. But as f ∈ C([0, 1]) we actually have

g(x) := 1 + ∫_0^x f(s) ds ∈ C^1([0, 1]).

Indeed, fix ε > 0; by uniform continuity we can choose δ > 0 such that if d(x, y) < δ then d(f(x), f(y)) < ε, so

|(g(x + h) − g(x))/h − f(x)| = |(1/h)∫_x^{x+h} f(s) ds − f(x)| ≤ (1/h)∫_x^{x+h} |f(s) − f(x)| ds ≤ ε

when |h| < δ. So if

f = 1 + ∫_0^x f(s) ds   then   f′ = f,

and we also have f(0) = 1.
Problem 12b. An approximation for exp(t) thanks to the proof of Banach Fixed Point theorem.
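As an illustration of this fixed-point iteration (not part of the exam solution), the sketch below iterates T(f)(x) = 1 + ∫_0^x f(s) ds on a grid over [0, 1] and compares the result with exp(x); the grid size and iteration count are arbitrary choices.

```python
import numpy as np

# Iterate T(f)(x) = 1 + integral_0^x f(s) ds; the fixed point is exp(x).
x = np.linspace(0.0, 1.0, 1001)
f = np.zeros_like(x)                     # arbitrary starting guess
for _ in range(30):
    # cumulative trapezoid rule for integral_0^x f(s) ds
    integral = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) / 2 * np.diff(x))))
    f = 1.0 + integral
print("max |f - exp| =", np.max(np.abs(f - np.exp(x))))   # should be tiny
```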

4. Fall 2011
Problem 1. Let (X, d) be a compact metric space. Set

g(x) := d(f(x), x),

which is continuous, so, as X is compact, it attains its minimum at some z ∈ X. If f(z) ≠ z then

g(f(z)) = d(f^2(z), f(z)) < d(f(z), z) = g(z),

which contradicts minimality, so f(z) = z and we have found a fixed point. Moreover, if x = f(x) and
y = f(y) with x ≠ y, then

d(y, x) = d(f(y), f(x)) < d(y, x),

a contradiction, so the fixed point is unique.
Problem 2. As f ∈ C^1 we have for any x, y that

f(x) − f(y) = ∫_0^1 ∇f(tx + (1 − t)y) · (x − y) dt.

Let g(t) := f(tx + (1 − t)y), so g′(t) = ∇f(tx + (1 − t)y) · (x − y). Then for any t > 0

g′(t) − g′(0) = (∇f(tx + (1 − t)y) − ∇f(y)) · (x − y)
 = (1/t)(∇f(tx + (1 − t)y) − ∇f(y)) · t(x − y) ≥ (c/t)||t(x − y)||^2 = ct||x − y||^2 ≥ 0.

Therefore g′(t) ≥ g′(0), which implies

f(x) − f(y) = ∫_0^1 g′(t) dt ≥ g′(0) = ∇f(y) · (x − y).

This condition implies convexity (in fact it is equivalent). Indeed, fix α ∈ [0, 1] and let z := αx + (1 − α)y;
then

f(x) ≥ f(z) + ∇f(z) · (x − z)   and   f(y) ≥ f(z) + ∇f(z) · (y − z),

so we get

αf(x) + (1 − α)f(y) ≥ f(z) + ∇f(z) · (αx + (1 − α)y − z) = f(z),

i.e.

αf(x) + (1 − α)f(y) ≥ f(αx + (1 − α)y),

so f is convex.
Problem 3.
Problem 4a. Note that the sum ∑_{n=1}^∞ (−1)^n / n =: ∑_{n=1}^∞ a_n converges thanks to Dirichlet's criterion,
but it is only conditionally (not absolutely) convergent. We claim that for any α ∈ R there exists a bijection
σ : N → N such that ∑_{n=1}^∞ a_{σ(n)} = α. Indeed, let

p_i := (a_i + |a_i|)/2,   n_i := (a_i − |a_i|)/2;

then the p_i are the non-negative terms of (a_i) and the n_i are the non-positive terms. Both ∑ p_i and ∑ n_i
must diverge. Therefore, there exists an N_1 such that ∑_{i=1}^{N_1} p_i ≥ α ≥ ∑_{i=1}^{N_1−1} p_i. Note that p_i = 0 iff
n_i ≠ 0 and n_i = 0 iff p_i ≠ 0. Let {i_1, ..., i_N} ⊂ {1, ..., N_1} be the indices with p_i > 0, and for 1 ≤ j ≤ N
define σ(j) = i_j. Then there exists an N_2 such that

∑_{i=1}^{N_1} p_i + ∑_{j=1}^{N_2} n_j ≤ α ≤ ∑_{i=1}^{N_1} p_i + ∑_{j=1}^{N_2−1} n_j.

Again let {i_1^{(2)}, ..., i_{N^{(2)}}^{(2)}} ⊂ {1, ..., N_2} be the indices with n_i ≠ 0 and define σ(j + N) = i_j^{(2)} for
1 ≤ j ≤ N^{(2)}. By induction we repeat this procedure: we find N_{2n} such that

∑_{i=1}^{N_{2n}} p_i + ∑_{i=1}^{N_{2n−1}} n_i ≥ α ≥ ∑_{i=1}^{N_{2n}−1} p_i + ∑_{i=1}^{N_{2n−1}} n_i

and N_{2n+1} such that

∑_{i=1}^{N_{2n}} p_i + ∑_{i=1}^{N_{2n+1}} n_i ≤ α ≤ ∑_{i=1}^{N_{2n}} p_i + ∑_{i=1}^{N_{2n+1}−1} n_i,

and we let σ enumerate, in order, the indices of the non-zero p_i up to N_{2n} followed by the indices of the
non-zero n_i up to N_{2n+1}. By construction the partial sums of ∑ a_{σ(n)} overshoot and undershoot α by at
most the last term added, and these errors tend to 0
since a_n → 0.
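A short sketch of this greedy rearrangement (not part of the exam solution): add positive terms of ∑(−1)^n/n until the partial sum exceeds the target α, then negative terms until it drops below, and repeat. The target value and number of steps are arbitrary choices.

```python
# Greedy rearrangement of sum (-1)^n / n toward a chosen target alpha.
# Positive terms: 1/2, 1/4, ...   Negative terms: -1, -1/3, -1/5, ...
alpha = 0.8
pos = (1.0 / n for n in range(2, 10**7, 2))
neg = (-1.0 / n for n in range(1, 10**7, 2))

partial, terms_used = 0.0, 0
for _ in range(200000):
    partial += next(pos) if partial <= alpha else next(neg)
    terms_used += 1
print(f"after {terms_used} terms the rearranged partial sum is {partial:.6f} (target {alpha})")
```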
Problem 4b. This sum converges absolutely by the p-test, so any rearrangement converges to the same
sum. Let

α := ∑_{n=1}^∞ (−1)^n / n^2.

Fix ε > 0; then there exists N_1 such that if n ≥ N_1 we have

∑_{k=n}^∞ |a_k| < ε,

and as ∑ a_n → α we can find an N_2 ≥ N_1 such that

|∑_{n=1}^{N_2} a_n − α| < ε.

For any rearrangement σ there exists an N_3 ≥ N_2 such that if n ≥ N_3 then {1, ..., N_2} ⊂ {σ(1), ..., σ(n)}.
Then for any n ≥ N_3

|∑_{k=1}^n a_{σ(k)} − α| ≤ |∑_{k=1}^n a_{σ(k)} − ∑_{j=1}^{N_2} a_j| + |∑_{j=1}^{N_2} a_j − α|
 ≤ ∑_{k ≤ n : σ(k) ∉ {1,...,N_2}} |a_{σ(k)}| + ε
 ≤ ∑_{m=N_2+1}^∞ |a_m| + ε ≤ 2ε.
Problem 5. Just take any monotone function with countably many jumps.
Problem 6. See Fall 2012 number 3.
Problem 7. See Fall 2016 number 4.
Problem 8. We will show that for an arbitrary complex valued matrix A there exists a basis of generalized
eigenvectors. Since null(A − λI) = null((A − λI)^2), every generalized eigenvector is an eigenvector, which
gives the result. First we show

V = range(A^n) ⊕ ker(A^n)

for n = dim(V). By rank-nullity it suffices to show the intersection is the zero element. Let
v ∈ range(A^n) ∩ ker(A^n); then

v = A^n x and 0 = A^n v = A^{2n} x ⇒ A^n x = 0,

so the first claim holds. Now fix an eigenvalue λ of A with eigenvector v, and let

G(λ, A) := null((A − λI)^n).

We argue by induction on dimension. The case n = 1 is trivial, so assume the claim holds for any space of
dimension less than n. Then

V = G(λ, A) ⊕ U

for U := range((A − λI)^n). Now we claim A(U) ⊂ U. Indeed, if x = (A − λI)^n y ∈ U then

Ax = A(A − λI)^n y = (A − λI)^n (Ay) ∈ U,

so we can apply the induction hypothesis to the restricted operator A|_U to find a basis of generalized
eigenvectors of A|_U on U. These are generalized eigenvectors of A, and G(λ, A) has a basis of generalized
eigenvectors by definition, so we have found a basis of generalized eigenvectors of A on V. So we are done.
Problem 9. Let L : V → V be self-adjoint and suppose there exists a unit vector x with

||Lx − µx|| ≤ ε.

As L is self-adjoint there exists a basis of orthonormal eigenvectors; denote the orthonormal eigenvector
with eigenvalue λ_i ∈ R by v_i. Then

x = ∑_{i=1}^n (x, v_i)v_i ⇒ 1 = ||x||^2 = ∑_{i=1}^n (x, v_i)^2.

Then

(Lx − µx, Lx − µx) = ∑_{i=1}^n (λ_i − µ)^2 (x, v_i)^2 ≤ ε^2.

Since the weights (x, v_i)^2 are non-negative and sum to 1, there must exist a j with (x, v_j)^2 > 0 and
(λ_j − µ)^2 ≤ ε^2, for otherwise the sum above would exceed ε^2. This implies

|λ_j − µ| ≤ ε

as desired.
Problem 10. As A is a real matrix with A^3 = I, its eigenvalues are either 1 repeated with multiplicity
3, or a single eigenvalue 1 together with the two complex conjugate primitive cube roots of unity. In the
first case A is the identity matrix (A is diagonalizable since its minimal polynomial divides x^3 − 1, which
has distinct roots), corresponding to θ = 0. Otherwise, denote the complex eigenvalues by λ and λ̄; over C,
A is diagonalizable to the form

A = S^{−1} diag(1, λ, λ̄) S,

where S may be a complex matrix. Note that the matrix diag(λ, λ̄) is similar to

R := [cos(θ) −sin(θ); sin(θ) cos(θ)]

with θ = 2π/3, since the eigenvalues of R are λ, λ̄. Therefore there exists U such that diag(λ, λ̄) = U R U^{−1}, so

A = S^{−1} [1 0; 0 U] [1 0; 0 R] [1 0; 0 U^{−1}] S,

so A is similar to the desired form with either θ = 0 or θ = 2π/3. Note that if A and B are real matrices
such that A is similar to B over C, then they are similar over R.
Problem 11. dim(ker(S)/im(T)) = dim(ker(S)) − dim(im(T)), and dim(im(T)) = dim(V),
dim(W) = dim(U) + dim(null(S)), so both sides of the claimed equality equal dim(ker(S)) − dim(im(T)).
Problem 12. Note that if x satisfies

||Ax − b|| ≤ ||Ay − b||   for all y,

then Ax − b ∈ range(A)^⊥. But R^n = range(A) ⊕ range(A)^⊥ and b = Ax + (b − Ax) with
Ax ∈ range(A) and b − Ax ∈ range(A)^⊥, so Ax must be the same value for any minimizer.

5. Spring 2012
Problem 1. It is clear that ρ(A, B) ≥ 0 and ρ(A, B) = ρ(B, A). Now if ρ(A, B) = 0 then fix x ∈ A; then

0 = sup_{x∈A} inf_{y∈B} |x − y| ≥ inf_{y∈B} |x − y|,

so inf_{y∈B} |x − y| = 0, that is, there exists a sequence {y_n} ⊂ B such that y_n → x, so x ∈ B̄ = B
since B is closed. Therefore A ⊂ B. The reverse inclusion follows from sup_{y∈B} inf_{x∈A} |x − y| = 0. So
ρ(A, B) = 0 ⟺ A = B. Now we prove the triangle inequality. Observe that for all a ∈ A, b ∈ B and c ∈ C
with A, B, C ∈ Ω we have

|a − b| ≤ |a − c| + |c − b|
inf_{b∈B} |a − b| ≤ |a − c| + inf_{b∈B} |c − b|
inf_{b∈B} |a − b| ≤ inf_{c∈C} { |a − c| + inf_{b∈B} |c − b| }
inf_{b∈B} |a − b| ≤ inf_{c∈C} |a − c| + sup_{c∈C} inf_{b∈B} |c − b|
sup_{a∈A} inf_{b∈B} |a − b| ≤ sup_{a∈A} inf_{c∈C} |a − c| + sup_{c∈C} inf_{b∈B} |c − b| ≤ ρ(A, C) + ρ(B, C).

The same inequality holds for sup_{b∈B} inf_{a∈A} |a − b|, so we have

ρ(A, B) ≤ ρ(A, C) + ρ(B, C),

so ρ is a metric as desired.
Problem 2. Fix ε > 0; as f is uniformly continuous there exists δ > 0 such that d(x, y) ≤ δ implies
d(f(x), f(y)) ≤ ε. Consider a uniform partition of [a, b] into intervals [a_{i−1}, a_i] with a_i − a_{i−1} ≤ δ/2.
As f_n → f pointwise and {a_i} is finite, we can find an N such that for all n ≥ N and all a_i,

|f_n(a_i) − f(a_i)| ≤ ε.

Now fix x ∈ [a_i, a_{i+1}]; by uniform continuity

|f(x) − f(a_i)| ≤ ε   and   |f(x) − f(a_{i+1})| ≤ ε.

By convexity, f_n(x) ≤ max{f_n(a_i), f_n(a_{i+1})}, so

f_n(x) ≤ max{f_n(a_i), f_n(a_{i+1})} ≤ max{f(a_i), f(a_{i+1})} + ε ≤ f(x) + 2ε,

i.e. for all n ≥ N we have

f_n(x) − f(x) ≤ 2ε.

For the reverse inequality, if x ∈ (a_i, a_{i+1}) then convexity implies

(f_n(x) − f_n(a_i))/(x − a_i) ≤ (f_n(a_{i+1}) − f_n(a_i))/(a_{i+1} − a_i) ≤ (f_n(x) − f_n(a_{i+1}))/(x − a_{i+1}),

so

f_n(x) ≥ ((f_n(a_{i+1}) − f_n(a_i))/(a_{i+1} − a_i))(x − a_{i+1}) + f_n(a_{i+1})

and

f_n(x) ≤ ((f_n(a_{i+1}) − f_n(a_i))/(a_{i+1} − a_i))(x − a_i) + f_n(a_i).

Hence

f_n(x) − f(x) ≥ ((f_n(a_{i+1}) − f_n(a_i))/(a_{i+1} − a_i))(x − a_{i+1}) + f_n(a_{i+1}) − ((f(a_{i+1}) − f(a_i))/(a_{i+1} − a_i))(x − a_i) − f(a_i).

By the uniform convergence on {a_i} and the uniform continuity of f we have f_n(a_{i+1}) − f(a_i) ≥ −2ε, and,
writing a_{i+1} = a_i + (a_{i+1} − a_i), the difference of the two secant terms equals

((x − a_i)/(a_{i+1} − a_i)) (f_n(a_{i+1}) − f(a_{i+1}) + f(a_i) − f_n(a_i)) − f_n(a_{i+1}) + f_n(a_i).

Since 0 ≤ (x − a_i)/(a_{i+1} − a_i) ≤ 1, using the triangle inequality and f_n(a_i) → f(a_i) we get

f_n(x) − f(x) ≥ −5ε,

so

||f_n(x) − f(x)||_{L^∞[a,b]} ≤ 5ε

for n ≥ N, and we have uniform convergence.
Problem 3. Bisection Method and completeness of (R, | · |).
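Since the one-line answer above just names the bisection method, here is a generic sketch of it (not tied to the exam's specific function; the example function and interval are placeholders).

```python
def bisect(f, a, b, tol=1e-12):
    """Locate a root of a continuous f with f(a), f(b) of opposite sign by
    halving the bracket; completeness of R guarantees the limit exists."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:       # the root stays in the left half
            b = m
        else:                  # otherwise it is in the right half
            a, fa = m, fm
    return (a + b) / 2

# Example: root of x^3 - 2 on [1, 2].
print(bisect(lambda x: x**3 - 2, 1.0, 2.0))   # ~ 2**(1/3)
```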
Problem 4. Note that a_n ≥ 0 implies s_n ≥ 0 and that s_n is increasing. Let C_n := ∑_{i=1}^n s_i. We claim
s_n is bounded above, which would imply it converges, being an increasing sequence bounded above.
Indeed, as C_n/n → s there exists an M such that C_{2n}/(2n) ≤ M for all n. Using
C_{2n} ≥ (n − 1)s_1 + (n + 1)s_n (the first n − 1 summands are at least s_1 and the last n + 1 are at least s_n),
we get

s_1/2 − s_1/(2n) + s_n/2 + s_n/(2n) ≤ M,

so

s_n ≤ 2M + s_1/n ≤ 2M + s_1.

Therefore s_n = ∑_{i=1}^n a_i converges; let a := lim_{n→∞} s_n < +∞. We claim C_n/n → a. Indeed,

|C_n/n − a| = |∑_{i=1}^n (s_i − a)| / n.

For ε > 0 there exists an N such that if i ≥ N then |s_i − a| ≤ ε. Then for n > N

∑_{i=1}^n |s_i − a| / n ≤ ∑_{i=1}^N |s_i − a| / n + ∑_{i=N}^n |s_i − a| / n ≤ C_N′/n + (n − N)ε/n ≤ C_N′/n + ε,

where C_N′ := ∑_{i=1}^N |s_i − a| is a fixed constant, so the right-hand side converges to ε as n → ∞. As ε was
arbitrary, C_n/n → a, and by uniqueness of limits a = s.
Problem 5. Define T : C([0, 1]) → C([0, 1]) via T(f)(x) := e^x + f(x^2)/2. Note that T(f) ∈ C([0, 1]) since it
is a sum and composition of continuous functions, and x^2 maps [0, 1] bijectively onto [0, 1]. Since
||T(f) − T(g)||_{L^∞} ≤ (1/2)||f − g||_{L^∞}, it is a contraction map with constant α = 1/2, so the Banach Fixed
Point Theorem applies.
Problem 6. Note that the vector field (y/(x^2 + y^2), −x/(x^2 + y^2)) is conservative away from the origin,
since ∇ arctan(x/y) = (y/(x^2 + y^2), −x/(x^2 + y^2)). However arctan(x/y) is not defined on the x-axis,
and our path must start and end at (1, 0), so we cannot conclude zero circulation (the potential cannot
be made C^1 on any open neighborhood of the curve). Indeed, we do not have zero circulation: for the path
γ(t) := (cos(t), sin(t)) we have

I(γ) = ∫_0^{2π} (−sin^2(t) − cos^2(t)) / (cos^2(t) + sin^2(t)) dt = −2π.
 
Problem 7. We have A = [4 −3; 1 0] and its eigenvalues are 1 and 3, so it is diagonalizable and we must have

lim_{n→∞} a_n^{1/n} = 3.
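A quick numerical check (not part of the solution), assuming the recurrence a_{n+1} = 4a_n − 3a_{n−1} encoded by the companion matrix above; the initial values are placeholder positive choices.

```python
# Assumed recurrence (from the companion matrix [[4, -3], [1, 0]]): a_{n+1} = 4 a_n - 3 a_{n-1}.
a_prev, a_curr = 1, 1                      # placeholder positive initial values
for n in range(2, 61):
    a_prev, a_curr = a_curr, 4 * a_curr - 3 * a_prev
    if n % 10 == 0:
        print(n, a_curr ** (1.0 / n))      # tends to 3, the dominant eigenvalue
```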
Problem 8. As A ∈ C^{n×n}, Schur's decomposition gives an upper triangular matrix T such that

A = S^∗ T S,

where S is unitary. As similar matrices share the same eigenvalues, and the eigenvalues of an upper
triangular matrix are its diagonal entries, it suffices to find diagonalizable T_k → T. Indeed, consider
T_k := T + diag(h_1, ..., h_n), where |h_i| < 1/k and the h_i are chosen such that (T_k)_{ii} ≠ (T_k)_{jj} for all
j ≠ i. Then as T_k has distinct diagonal entries and is upper triangular, it has n distinct eigenvalues, so it
is diagonalizable. Therefore

A_k := S^∗ T_k S

is diagonalizable (it is similar to T_k) and converges to A entrywise as k → ∞.
Problem 9.

Problem 10. It is always equal. Indeed, we have A = V^∗ T V where T is upper triangular and V is unitary
(Schur decomposition). Then e^A = V^∗ e^T V, which is easily seen from the definition of the matrix
exponential. So

det(e^A) = det(e^T),

and

exp(Tr(A)) = exp(∑_{i=1}^n T_{ii}) = ∏_{i=1}^n exp(T_{ii}),

since similar matrices share the same trace. We also know that (e^T)_{ii} = 1 + T_{ii} + T_{ii}^2/2! + ... = exp(T_{ii}),
and det(e^T) = ∏_{i=1}^n (e^T)_{ii} since e^T is upper triangular. Therefore det(e^A) = exp(Tr(A)) for any complex
valued matrix.
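A numerical sanity check of this identity (not part of the solution), using SciPy's matrix exponential on a random complex matrix; the size and seed are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # random complex matrix
lhs = np.linalg.det(expm(A))          # det(e^A)
rhs = np.exp(np.trace(A))             # exp(Tr(A))
print(lhs, rhs, abs(lhs - rhs))       # agree up to floating point error
```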
Problem 11a. By Cayley-Hamilton, A satisfies its own characteristic polynomial, which is of degree 2;
solve for A^2 from this.

Problem 11b. If P and Q are second-degree polynomials such that P(A) = Q(A) = 0, make them both
monic. Then (P − Q)(A) = 0 and P − Q has degree at most 1, which is impossible (unless P = Q) since A
is not a constant multiple of the identity matrix.
Problem 12. They are equivalent over C via x′ := x + iy and y′ := x − iy: then Q_1(x′, y′) = x′y′ = x^2 + y^2 = Q_2(x, y),
and the transformation is non-singular since

[1 i; 1 −i][x; y] = [x′; y′]

and the matrix is of full rank. But by Sylvester's Law of Inertia two quadratic forms are equivalent over R
if and only if the associated symmetric matrices A of Q_1 and B of Q_2 have the same number of positive,
negative, and zero eigenvalues. We have

A = [0 1/2; 1/2 0],   B = [1 0; 0 1].

But 1/2 and −1/2 are the eigenvalues of A, while B has eigenvalues 1 and 1, so they cannot be equivalent over R.

6. Fall 2012
Problem 1. We prove the statement by summation by parts. Indeed, let B_n := ∑_{i=1}^n b_i; then for any n ≥ m

∑_{i=m}^n a_i b_i = ∑_{i=m}^n a_i (B_i − B_{i−1}) = ∑_{i=m}^n a_i B_i − ∑_{i=m−1}^{n−1} a_{i+1} B_i
 = a_n B_n − a_m B_{m−1} + ∑_{i=m}^{n−1} B_i (a_i − a_{i+1}).

So

|∑_{i=m}^n a_i b_i| ≤ |a_n B_n| + |a_m B_{m−1}| + ∑_{i=m}^{n−1} |B_i| (a_i − a_{i+1}),

since a_i is decreasing. Therefore, as |B_n| ≤ M we have

|∑_{i=m}^n a_i b_i| ≤ M(|a_n| + |a_m| + ∑_{i=m}^{n−1} (a_i − a_{i+1})) = M(|a_n| + |a_m| + a_m − a_n) ≤ M(2|a_n| + 2|a_m|).

Then as a_n → 0, for any ε > 0 we can find an N such that for k ≥ N we have |a_k| ≤ ε/(4M); then choosing
n, m ≥ N we have

|∑_{i=m}^n a_i b_i| ≤ ε,

so S_n := ∑_{i=1}^n a_i b_i is a Cauchy sequence, so it is convergent.
Problem 2a. We say a bounded function f is Riemann integrable on [0, 1] if and only if for all ε > 0
there exists a partition P = {0 = x_0 < x_1 < ... < x_n = 1}, with I_i := [x_{i−1}, x_i] and
Δx_i := x_i − x_{i−1}, such that for ω(f, I_i) := sup_{x,y∈I_i} |f(x) − f(y)| we have

∑_{i=1}^n ω(f, I_i) Δx_i ≤ ε,

i.e. the difference between the upper and lower Riemann sums can be made arbitrarily small.
Problem 2b. Fix a uniform partition of mesh ε, i.e. Δx_i = ε for all i. Since f is non-decreasing,
ω(f, I_i) = f(x_i) − f(x_{i−1}), so

∑_{i=1}^n ω(f, I_i) Δx_i = ε ∑_{i=1}^n (f(x_i) − f(x_{i−1})) = ε(f(b) − f(a))

since the sum telescopes. Then as f is bounded this is at most 2Mε for M := ||f||_{L^∞}. Therefore f is
Riemann integrable.
Problem 3. If f_n → f uniformly then the ε/3 trick shows f is continuous. The converse is known as
Dini's Theorem. Indeed, as f and f_n are continuous, for any fixed ε > 0 the set

G_n := {x : f(x) − f_n(x) > −ε}

is open since it is the preimage of an open set under a continuous function. Then as f_n(x) → f(x) for all
x ∈ X we have

X = ∪_{n=1}^∞ G_n.

As X is compact there exists a finite subcover, so

X ⊂ ∪_{k=1}^N G_{n_k}.

Note that f_n(x) ≥ f_{n+1}(x) implies f(x) − f_n(x) ≤ f(x) − f_{n+1}(x), so the sets G_n are increasing.
Therefore, for all n ≥ max{n_1, ..., n_N} =: K and all x ∈ X we have

f(x) − f_n(x) ≥ f(x) − f_K(x) > −ε,

where f(x) − f_K(x) > −ε holds everywhere thanks to the monotonicity and the finite subcover. And we
also have f_n(x) ≥ f(x), so f(x) − f_n(x) ≤ 0. Therefore, for all n ≥ K,

||f(x) − f_n(x)||_{L^∞(X)} ≤ ε,

so we have uniform convergence.
Problem 4. Let F_n be closed sets with int(F_n) = ∅ and assume X is complete with

X = ∪_{n=1}^∞ F_n.

Clearly X cannot have empty interior since int(X) = X ≠ ∅, so there exists an x_1 ∈ X ∩ F_1^c. Then there
is an n ≥ 2 such that B_{1/n}(x_1) ∩ F_1 = ∅, for otherwise x_1 would be a limit point of F_1, which would imply
x_1 ∈ F_1, a contradiction. Denote this ball B_{h_1}(x_1). Since F_2 has empty interior, B_{h_1}(x_1) is not contained
in F_2, so there exists x_2 ∈ B_{h_1}(x_1) ∩ F_2^c. Similarly we can find an n ≥ 3 such that B_{1/n}(x_2) ∩ F_2 = ∅,
and by shrinking the radius we may assume B_{h_2}(x_2) ⊂ B_{h_1}(x_1). Proceed inductively to generate points x_n
and radii h_n < 1/n such that B_{h_{n+1}}(x_{n+1}) ⊂ B_{h_n}(x_n) and B_{h_n}(x_n) ∩ ∪_{i=1}^n F_i = ∅. Then {x_n} forms a
Cauchy sequence, so by completeness there exists x ∈ X with x_n → x. But x lies in the closure of each
B_{h_n}(x_n) (and, shrinking the radii slightly, we may assume these closures are still disjoint from ∪_{i=1}^n F_i),
so x ∉ ∪_{n=1}^∞ F_n = X, which is our contradiction. So BCT holds.
Problem 5. An equivalent form of BCT is that in a complete metric space, if G_n are open dense sets then
∩_{n=1}^∞ G_n is dense. This implies Q is not a G_δ: suppose Q = ∩_{n=1}^∞ G_n with each G_n open, and define
H_n := (−∞, q_n) ∪ (q_n, ∞) for an enumeration {q_n} of Q. Then each H_n is open and dense, and each G_n is
open and dense (it contains Q), so G_n ∩ H_n is open and dense. But ∩_{n=1}^∞ (G_n ∩ H_n) = Q \ Q = ∅, while
BCT says it is dense, which is a contradiction. In fact this argument shows any countable dense set in a
complete metric space cannot be a G_δ.
Problem 6a. Assume for the sake of contradiction that there exists (x^∗, y^∗) such that F(x^∗, y^∗) is non-zero
(WLOG positive). Then by continuity there is a small square S with (x^∗, y^∗) at its center on which
F(x, y) ≥ F(x^∗, y^∗)/2. But then the integral over this square is positive, since

0 = ∫∫_S F(x, y) dx dy ≥ |S| · F(x^∗, y^∗)/2 > 0,

which is a contradiction. So F ≡ 0.
Problem 6b. For any rectangle ℓ_1 ≤ x ≤ ℓ_2, ℓ_3 ≤ y ≤ ℓ_4 we have

∫_{x=ℓ_1}^{ℓ_2} ∫_{y=ℓ_3}^{ℓ_4} ∂^2_{x,y} f(x, y) dy dx = ∫_{x=ℓ_1}^{ℓ_2} (∂_x f(x, ℓ_4) − ∂_x f(x, ℓ_3)) dx
 = f(ℓ_2, ℓ_4) − f(ℓ_1, ℓ_4) − f(ℓ_2, ℓ_3) + f(ℓ_1, ℓ_3)
 = ∫_{x=ℓ_1}^{ℓ_2} ∫_{y=ℓ_3}^{ℓ_4} ∂^2_{y,x} f(x, y) dy dx

by the symmetric computation, so by 6a we must have ∂^2_{y,x} f(x, y) = ∂^2_{x,y} f(x, y).
Problem 7. This means there exists an N such that A^N = A. Therefore, if µ(x) is the minimal polynomial
of A, there exists a polynomial p such that

p(x)µ(x) = x(x^{N−1} − 1) = x ∏_{i=1}^{N−1} (x − λ_i),

where the λ_i are the (N − 1)-st roots of unity. In particular µ(x) has no repeated root, which is equivalent
to A being diagonalizable.

Problem 8. Note that [w_1, w_2] = (Hw_2, w_1). Then for w ∈ W we have, for all v ∈ W, (Hw, v) = 0, i.e.
H(W) ⊂ W^⊥, so the restricted operator satisfies H|_W : W → W^⊥. As det(H) ≠ 0, H is injective, i.e.
dim(W) = rank(H|_W). Then

n = dim(W) + dim(W^⊥) ≥ dim(W) + rank(H|_W) = 2 dim(W),

where we used Im(H|_W) ⊂ W^⊥, i.e.

n/2 ≥ dim(W).

For the example take H = diag(1, −1, 1, −1, ...).
Problem 9. Note that R^m = Im(A) ⊕ (Im(A))^⊥, so there exist b_1 ∈ Im(A) and b_2 ∈ (Im(A))^⊥
such that b = b_1 + b_2. Then

||b_1 − b||^2 ≤ ||b_1 − b||^2 + ||Ax − b_1||^2 = ||b − Ax||^2,

where for the equality we used the Pythagorean theorem, since b_1 − b ∈ (Im(A))^⊥ and Ax − b_1 ∈ Im(A).
As f(x) := ||Ax − b||^2 is convex, the minimizing value of Ax is unique, namely Ax = b_1. Then for any
x, y ∈ M we have A(x − y) = b_1 − b_1 = 0, so x − y ∈ N. Fix x_0 ∈ M; then for any x ∈ M we
have x = x_0 + (x − x_0) with x − x_0 ∈ N, so M ⊂ x_0 + N. With the same x_0 (so Ax_0 = b_1), for any y ∈ N
we have A(x_0 + y) = b_1, i.e. x_0 + y minimizes the problem, so x_0 + N ⊂ M; i.e. M = x_0 + N.
Problem 10. Note that P(A) = (A + I)^3(A − I) = 0, so as the minimal polynomial divides P(x), all of
A's eigenvalues are −1 or +1. As rank(B) = 2 we have nullity(B) = 2, i.e. the eigenspace of −1
has dimension 2, so there are two Jordan blocks with eigenvalue −1. As |Tr(A)| = 2 we must have three
eigenvalues equal to −1 and one equal to 1. But since the eigenspace of −1 is only 2-dimensional, we must
have one 2 × 2 Jordan block for −1, one 1 × 1 Jordan block for −1, and one 1 × 1 Jordan block for 1.
Problem 11.
Problem 12. We have rank(A) ≥ r if and only if there exists an r × r submatrix that is invertible. And for
any linear operator L, L is invertible if and only if L^T is invertible, since ker(L) = range(L^T)^⊥. Combining
these, rank(A) = rank(A^T).

7. Spring 2013
Problem 1a. See 2012 Fall Problem 2a)
Problem 1b. See 2012 Fall Problem 2b)
Problem 1c. Observe that ∑_{k=1}^∞ 1/2^k = 1. Let S_n := ∑_{k=1}^n 1/2^k and let I_k := [S_{k−1}, S_k] with S_0 := 0.
Then [0, 1] = ∪_{k=1}^∞ I_k is a union that is disjoint except at the endpoints. Let

f(x) := S_k   for x ∈ I_k;

then this is a monotone function with infinitely many jumps, i.e. it is discontinuous on a countably infinite
set, but it is Riemann integrable thanks to monotonicity.
Problem 2.
Problem 3. We first show sequentially compact implies complete and totally bounded. Given any Cauchy
sequence, sequential compactness implies there is a convergent subsequence, and a Cauchy sequence with a
convergent subsequence converges; hence the space is complete. It is totally bounded, since otherwise there
exists an ε_0 > 0 such that, denoting the space by X,

X ⊄ B_{ε_0}(x_1),

thus there is an x_2 such that d(x_2, x_1) > ε_0; but not being totally bounded implies

X ⊄ B_{ε_0}(x_1) ∪ B_{ε_0}(x_2),

so similarly we can find x_3 with d(x_1, x_3) > ε_0 and d(x_2, x_3) > ε_0, and

X ⊄ B_{ε_0}(x_1) ∪ B_{ε_0}(x_2) ∪ B_{ε_0}(x_3).

Proceeding by induction we find a sequence {x_n} such that d(x_n, x_m) ≥ ε_0 for all n ≠ m, which means
there cannot be a convergent subsequence. So X must be totally bounded.

Now assume X is totally bounded and complete. Fix a sequence {x_n} ⊂ X and assume it takes infinitely
many distinct values, for otherwise the sequence has a constant (hence convergent) subsequence and there
is nothing to prove. As X is totally bounded there are y_1, ..., y_N such that

X ⊂ B_{1/2}(y_1) ∪ ... ∪ B_{1/2}(y_N).

Thus there exists 1 ≤ i ≤ N such that infinitely many terms x_n lie in B_{1/2}(y_i); denote this subsequence
by x_n^{(1)}. Then there exist z_1, ..., z_M such that

X ⊂ ∪_{i=1}^M B_{1/4}(z_i),

and again there is a subsequence x_n^{(2)} of x_n^{(1)} with infinitely many terms in B_{1/4}(z_j) for some j.
Proceeding inductively we find sequences x_n^{(k)} such that x_n^{(k)} is a subsequence of x_n^{(k−1)} and all terms
of x_n^{(k)} lie in B_{1/2^k}(w_j^{(k)}) for some w_j^{(k)}. Let y_n := x_n^{(n)} be the diagonal subsequence; then for n ≥ m

d(y_n, y_m) ≤ d(y_n, w_j^{(m)}) + d(w_j^{(m)}, y_m) ≤ 1/2^m + 1/2^m,

since for n ≥ m, {x_k^{(n)}} ⊂ B_{1/2^m}(w_j^{(m)}) because it is a subsequence of {x_k^{(m)}}. Therefore the diagonal
subsequence is Cauchy, and by completeness it has a limit. So the space is sequentially compact.
Problem 4. We prove the stronger general result: if f : [1, ∞) → [0, ∞) is decreasing and
lim_{x→+∞} f(x) = 0, then

a_N := ∑_{i=1}^N f(i) − ∫_1^{N+1} f(x) dx

converges to a finite limit. Indeed, as f(x) → 0, for all ε > 0 there is an M > 0 such that f(x) ≤ ε for
x > M. Then for N ≥ M

|a_N − a_M| = |∑_{i=M+1}^N f(i) − ∫_{M+1}^{N+1} f(x) dx|
 = |∑_{i=1}^{N−M} (f(M + i) − ∫_{M+i}^{M+1+i} f(x) dx)|
 = ∑_{i=1}^{N−M} (f(M + i) − ∫_{M+i}^{M+1+i} f(x) dx)
 ≤ ∑_{i=1}^{N−M} (f(M + i) − f(M + i + 1))
 = f(M + 1) − f(N + 1) ≤ 2ε

for M, N large. (In the third line we used f(M + i) ≥ f(x) on [M + i, M + 1 + i], so each term is already
non-negative.) So (a_N) is a Cauchy sequence and we conclude by the completeness of R. Now note that

h_n := ∑_{j=1}^n f(j) − ∫_1^n f(x) dx

for f(x) := 1/x. By our result,

∑_{j=1}^n f(j) − ∫_1^{n+1} f(x) dx

converges, and

h_n = ∑_{j=1}^n f(j) − ∫_1^{n+1} f(x) dx + ∫_n^{n+1} f(x) dx;

since f decreases to 0, lim_{n→+∞} ∫_n^{n+1} f(x) dx = 0. So h_n converges to the limit of
∑_{j=1}^n f(j) − ∫_1^{n+1} f(x) dx.
Problem 5a. There is a typo in the problem: it should be U_n(cos(θ)) = sin(nθ)/sin(θ). The base case is
trivial since U_1 = 1. Then

sin(θ)U_{n+1}(cos(θ)) = 2cos(θ)sin(nθ) − sin((n − 1)θ)
 = 2cos(θ)sin(nθ) − (sin(nθ)cos(θ) − sin(θ)cos(nθ))
 = cos(θ)sin(nθ) + sin(θ)cos(nθ) = sin((n + 1)θ),

so the induction holds.

Problem 5b. Substituting x = cos(θ) we get

∫_{−1}^1 U_m(x)U_n(x)√(1 − x^2) dx = ∫_0^π sin(nθ)sin(mθ) dθ.
Problem 6a. By Schur's decomposition any complex matrix is unitarily equivalent to an upper triangular
matrix, i.e.

A = U^∗ T U,

where T is upper triangular and U^{−1} = U^∗. Recall that an operator with distinct eigenvalues is
diagonalizable, and that the eigenvalues of an upper triangular matrix are its diagonal entries. Then consider

A_k := U^∗ (T + diag(h_1, ..., h_n)) U,

where √(h_1^2 + ... + h_n^2) ≤ 1/k and the h_i are chosen such that T_{ii} + h_i ≠ T_{jj} + h_j for all i ≠ j. Letting
k → ∞ gives A_k → A, and each A_k is diagonalizable since it has distinct eigenvalues.

Problem 6b. Note that f(A) := det(A − λI) is continuous from R^{n×n} → R since it is a polynomial in the
entries of A. So if A_n → A we have f(A_n) → f(A). Let

A := [cos(θ) sin(θ); −sin(θ) cos(θ)]

with θ = π/2; then A has only complex (non-real) eigenvalues. So if A_n → A we must have, for large n,
that A_n has complex eigenvalues as well. Therefore there does not exist a sequence of real diagonalizable
matrices A_n with A_n → A. So they are not dense.
Problem 7a. We define

||A|| := sup_{||x||=1} ||A(x)||.

Note that ||A(x)|| ≤ ||A|| ||x||, so

||A^2(x)|| ≤ ||A|| ||A(x)|| ≤ ||A||^2 ||x||,

so ||A^2|| ≤ ||A||^2.

Problem 7b. By the observation above we have, for ||x|| = 1,

||exp(A)(x)|| = ||x + Ax + A^2(x)/2! + ...|| ≤ 1 + ||A|| + ||A||^2/2 + ... = exp(||A||),

so ||exp(A)(x)|| ≤ ||x|| exp(||A||) < ∞, so the series makes sense everywhere.
Problem 7c. Note that if ||A|| < 1 then for ||x|| = 1 we have

||log(I + A)(x)|| ≤ ∑_{n=1}^∞ ||A||^n / n ≤ ∑_{n=1}^∞ ||A||^n < +∞,

where the last bound is justified by ||A|| < 1 (geometric series), so the series makes sense.

Problem 7d. No thank you.
Problem 8a.

(Tx, y) = (x, T^∗y).

Problem 8b. Typo in the problem: it should be the transpose of the conjugate matrix. It then follows from
writing out the inner products.

Problem 8c. We have x ∈ ker(T) iff for all y ∈ V

0 = (Tx, y) = (x, T^∗y),

i.e. x ∈ Im(T^∗)^⊥, so Ker(T) ⊂ Im(T^∗)^⊥. Conversely, fix y ∈ Im(T^∗)^⊥; then for all x ∈ V we have

0 = (y, T^∗x) = (Ty, x),

so Ty = 0. Therefore Im(T^∗)^⊥ = Ker(T).

Problem 8d. This implies that an operator T is invertible if and only if T^∗ is invertible. Use that
rank(T) ≥ r iff there exists an r × r submatrix that is invertible. This implies rank(T^∗) ≥ rank(T) by
choosing r = rank(T), and the other inequality follows by replacing T with T^∗ and using T^{∗∗} = T.

Problem 9a. Observation 1: If

A := [cos(θ) sin(θ); −sin(θ) cos(θ)]

then its eigenvalues are cos(θ) + i sin(θ) and cos(θ) − i sin(θ), and any z ∈ C with |z| = 1 can be represented
as cos(θ) + i sin(θ).

Observation 2: As A is orthogonal, its eigenvalues λ satisfy |λ| = 1. And as A is real, if λ is complex then
λ̄ is also an eigenvalue, since eigenvalues of real matrices come in complex conjugate pairs.

Observation 3: A is normal, so it is diagonalizable over C.

By Observation 3, A has a basis of eigenvectors over C. Order the eigenvalues so that the real ones come
first, say λ_1, ..., λ_j are real (hence ±1 by Observation 2). The remaining eigenvalues λ_{j+1}, ..., λ_n are
complex, and by Observation 2 they can be ordered in conjugate pairs: λ_{j+2} = λ̄_{j+1}, λ_{j+4} = λ̄_{j+3}, and so
on up to n. Then by Observation 3

A = U^∗ D U,

where U^{−1} = U^∗ and D = diag(λ_1, ..., λ_n). By Observation 1, each conjugate pair diag(λ_i, λ̄_i) is unitarily
similar to a rotation matrix

[cos(θ) sin(θ); −sin(θ) cos(θ)],

since both have the same pair of eigenvalues of modulus 1, i.e.

diag(λ_i, λ̄_i) = V_i^∗ R_i V_i

for some unitary V_i. Now D = A_1 ⊕ A_2 ⊕ ... ⊕ A_j ⊕ A_{j+1} ⊕ A_{j+3} ⊕ ... ⊕ A_{n−1}, where A_i = [±1] for i ≤ j
and each A_{j+2k−1} = diag(λ, λ̄) is unitarily similar to a rotation block R_{j+2k−1}. Therefore D is similar to
A_1 ⊕ ... ⊕ R_{j+1} ⊕ ... ⊕ R_{n−1}, and the change of basis matrix is block diagonal with unitary blocks, so it is
unitary. As the product of unitary matrices is unitary, we have shown

A = V^∗ B V,

where B is of the desired form. However, this similarity is over C; to get similarity over R, write V = V_1 + iV_2
where V_1 and V_2 are real matrices. Then

(V_1 + iV_2)A = B(V_1 + iV_2),

and as A and B are real we must have V_1 A = BV_1 and V_2 A = BV_2; therefore for any r ∈ C

(V_1 + rV_2)A = B(V_1 + rV_2).

Setting f(r) := det(V_1 + rV_2), we have f(i) ≠ 0 since V is invertible, so f is a non-zero polynomial in r. A
non-zero polynomial has only finitely many roots, so there exists r ∈ R with f(r) ≠ 0, i.e. V_1 + rV_2 is
invertible. So

A = (V_1 + rV_2)^{−1} B (V_1 + rV_2),

and they are similar over R.
Problem 9b. As n is odd there must exist a real eigenvalue, which is either −1 or 1. Let v denote an
eigenvector associated to it; then A^2 v = λ^2 v = v. We note v can be taken real since A and λ are real, so
A − λI is real and its kernel contains a real vector.
Problem 10a. By computation, C is of the form

[1 a b c; 0 1 a b; 0 0 1 a; 0 0 0 1],

so G − Id is a subspace of the set of 4 × 4 matrices.

Problem 10b. It is 3-dimensional, since there are 3 free parameters a, b, and c.


Problem 11a. Note that

[1 1; 1 0][F_{n−1}; F_{n−2}] = [F_n; F_{n−1}],

so in particular, by diagonalizing the matrix, we get F_n = (1/√5)(λ_1^n − λ_2^n), where λ_1 = (1 + √5)/2 and
λ_2 = (1 − √5)/2. Then

F_n / F_{n−1} = (λ_1^n − λ_2^n)/(λ_1^{n−1} − λ_2^{n−1}) = (λ_1 − λ_2(λ_2/λ_1)^{n−1}) / (1 − (λ_2/λ_1)^{n−1}).

Since |λ_2/λ_1| < 1 and λ_2^n → 0 as n → ∞, the ratio approaches λ_1.
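A quick numerical illustration of this limit (not part of the solution); the number of iterations is an arbitrary choice.

```python
# Ratios F_n / F_{n-1} approach the golden ratio (1 + sqrt(5)) / 2.
f_prev, f_curr = 1, 1
for n in range(2, 40):
    f_prev, f_curr = f_curr, f_curr + f_prev
print(f_curr / f_prev, (1 + 5 ** 0.5) / 2)
```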
Problem 11b. Playing with the explicit solution we have

F_{2n+3}F_{2n+1} − F_{2n+2}^2 = (λ_1^2 λ_2^2)(F_{2n+1}F_{2n−1} − F_{2n}^2),

and λ_1^2 λ_2^2 = 1, so the result follows by induction.
Problem 12. Note

1/(x^2 + 1) = ∑_{n=0}^∞ (−1)^n x^{2n}

for x ∈ (−1, 1). So for any 0 < ε < 1 we have

∫_0^{1−ε} 1/(1 + x^2) dx = ∫_0^{1−ε} ∑_{n=0}^∞ (−1)^n x^{2n} dx,

and since

∑_{n=0}^∞ |(−1)^n x^{2n}| ≤ ∑_{n=0}^∞ (1 − ε)^{2n} = C(ε) < +∞,

the series is uniformly convergent on [0, 1 − ε]. So by uniform convergence we can swap the integral and
the sum:

∫_0^{1−ε} 1/(1 + x^2) dx = ∑_{n=0}^∞ (−1)^n ∫_0^{1−ε} x^{2n} dx = ∑_{n=0}^∞ (−1)^n (1 − ε)^{2n+1}/(2n + 1).

Note that

∑_{n=0}^∞ (−1)^n/(2n + 1)

converges by summation by parts (Dirichlet's test), since the partial sums of ∑(−1)^n are bounded and
1/(2n + 1) decreases monotonically to zero. So Abel's theorem says

lim_{ε→0} ∑_{n=0}^∞ (−1)^n (1 − ε)^{2n+1}/(2n + 1) = ∑_{n=0}^∞ (−1)^n/(2n + 1),

and therefore

∑_{n=0}^∞ (−1)^n/(2n + 1) = lim_{ε→0} ∫_0^{1−ε} 1/(1 + x^2) dx = π/4,

where in the last equality we used that f(ε) := ∫_0^{1−ε} 1/(1 + x^2) dx is continuous, so
lim_{ε→0} f(ε) = f(0) = arctan(1) = π/4.
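A short numerical check of this value (not part of the solution); the truncation length is an arbitrary choice, and the alternating series converges slowly.

```python
import math

partial = sum((-1) ** n / (2 * n + 1) for n in range(100000))
print(partial, math.pi / 4)     # partial sums of the Leibniz series approach pi/4
```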

8. Fall 2013
Problem 1. Fix {a_n} with each a_n positive. Assume

P_n := ∏_{i=1}^n (1 + a_i)

converges to a non-zero limit a. Then as P_n > 0 for all n we can take logarithms:

log(P_n) = ∑_{i=1}^n log(1 + a_i).

As log(x) is continuous on (0, ∞) and P_n → a > 0, we have

lim_{n→∞} log(P_n) = log(lim_{n→∞} P_n) = log(a).

Therefore ∑_{i=1}^∞ log(1 + a_i) converges, so log(1 + a_i) → 0, so a_i → 0. Since

lim_{x→0} log(x + 1)/x = 1,

there exists δ > 0 such that if |x| < δ then

1/2 ≤ log(x + 1)/x ≤ 2.

As a_i → 0 there exists an N such that for i ≥ N we have |a_i| = a_i < δ, so for any M ≥ N,

(*)   (1/2) ∑_{i=N}^M a_i ≤ ∑_{i=N}^M log(a_i + 1) ≤ 2 ∑_{i=N}^M a_i.

Therefore, for any fixed ε > 0, choosing N, M large enough, the convergence of ∑ log(a_i + 1) gives

∑_{i=N}^M log(a_i + 1) ≤ ε/2.

In particular,

∑_{i=N}^M a_i ≤ 2 ∑_{i=N}^M log(a_i + 1) ≤ ε,

so the partial sums of ∑ a_i form a Cauchy sequence, so ∑ a_i converges.

Now assume ∑_{n=1}^∞ a_n converges. Then by (*), ∑_{i=1}^N log(a_i + 1) converges as N → ∞. So

log(∏_{i=1}^N (1 + a_i)) = ∑_{i=1}^N log(a_i + 1) → a

for some a. Taking exponentials and using that exp is a continuous map,

∏_{i=1}^N (1 + a_i) → exp(a),

which is strictly bigger than 0. And the equivalence is proved.
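A small numerical illustration of this equivalence (not part of the solution): partial products of ∏(1 + a_i) tracked alongside partial sums of ∑ a_i, for one convergent and one divergent choice of a_i; the particular sequences and truncation length are arbitrary.

```python
# Compare partial products of prod(1 + a_i) with partial sums of sum(a_i).
for label, a in [("a_i = 1/i^2", lambda i: 1.0 / i**2), ("a_i = 1/i", lambda i: 1.0 / i)]:
    prod, total = 1.0, 0.0
    for i in range(1, 100001):
        prod *= 1.0 + a(i)
        total += a(i)
    print(label, "partial product:", prod, "partial sum:", total)
# For a_i = 1/i^2 both stabilize; for a_i = 1/i both grow without bound together.
```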
Problem 2a. Let

A := {x : f is not continuous at x}.

We claim that

A = {x : lim_{y→x−} f(y) ≠ lim_{y→x+} f(y)} =: B.

Note that for any x the left and right limits of f are well defined since f is monotone and locally bounded,
so B makes sense, and it is clear that B ⊂ A. Conversely, if x ∈ A then the left and right limits exist, so
we must have lim_{y→x−} f(y) ≠ lim_{y→x+} f(y), for otherwise f would be continuous at x. Now for each x ∈ A
pick a rational q(x) ∈ (lim_{y→x−} f(y), lim_{y→x+} f(y)) ∩ Q. As f is monotone, for any other z ∈ A the intervals
(lim_{y→z−} f(y), lim_{y→z+} f(y)) and (lim_{y→x−} f(y), lim_{y→x+} f(y)) are disjoint, so q(z) ≠ q(x). Therefore
x ↦ q(x) is an injection from A into Q, so A is countable.
Problem 2b.
Problem 3a. For any partition of [0, 1] we have

∑_{j=0}^{n−1} |γ(t_{j+1}) − γ(t_j)| = ∑_{j=0}^{n−1} √((t_{j+1} − t_j)^2 + (f(t_{j+1}) − f(t_j))^2)
 ≤ ∑_{j=0}^{n−1} (|t_{j+1} − t_j| + |f(t_{j+1}) − f(t_j)|) = ∑_{j=0}^{n−1} ((t_{j+1} − t_j) + (f(t_{j+1}) − f(t_j)))
 = 1 + f(1) − f(0),

where for the second equality we used that f is increasing and t_{j+1} > t_j, and for the last equality that
both sums telescope.
Problem 3b.
Problem 4. See 2012 Fall number 6 a).
Problem 5. See Fall 2011 number 2.
Problem 6. I do not think compactness is needed. Indeed, assume {x_n} does not converge to x; then there
exists ε_0 > 0 such that for any N there is an n(N) ≥ N with

d(x_{n(N)}, x) ≥ ε_0.

Taking N = 1, 2, 3, ... gives a subsequence x_{n(N)} with d(x_{n(N)}, x) ≥ ε_0 for all N. But as x_{n(N)} is a
subsequence, by hypothesis we can find a further subsequence converging to x, which is a contradiction
since d(x_{n(N)}, x) ≥ ε_0 for all N.
Problem 7. Let P_N denote the (N + 1)-dimensional space of polynomials of degree at most N and define the
linear map ψ : P_N → R^{N+1} via ψ(P) = (P(z_1), ..., P^{(m_1)}(z_1), P(z_2), ..., P^{(m_2)}(z_2), ..., P(z_n), ..., P^{(m_n)}(z_n)).
It suffices to show ψ is bijective, which is equivalent to showing it is injective since dim(P_N) = dim(R^{N+1}).
If ψ(P) = (0, ..., 0) then each z_i is a root of P of multiplicity m_i + 1, so P has N + 1 roots counted with
multiplicity, which implies by the fundamental theorem of algebra that P ≡ 0. Therefore the map is
bijective, so the desired result holds.
Problem 8. As P is an orthogonal projection with trace 2, there exists a unitary matrix U such that

P = U^T diag(1, 1, 0) U.

Therefore

P − I = U^T diag(1, 0, 0) U,

i.e. it has rank 1. So there must exist p, q ∈ R^3 such that

P − I = pq^T,

and as P − I is self-adjoint and (P − I)^2 = P − I we have

P − I = pq^T pq^T = qp^T pq^T = αqq^T

for some α = ||p||^2. We can assume α = 1 since we can absorb the constant into q. So

P − I = qq^T

for some q, and diag(P − I) = (q_1^2, q_2^2, q_3^2), so we have

q_1 = ±√2/√3,  q_2 = ±1/√2,  q_3 = ±√5/√6,

so

P = I + qq^T

for any choice of q with the above signs.


Problem 9. Fix v ∈ V with v ≠ 0. Consider

W := span{v, Av, ..., A^{k−1}v},

where k − 1 ≤ d − 1 is the largest integer such that the list {v, Av, ..., A^{k−1}v} is linearly independent. So
there exist α_0, ..., α_{k−1} such that

α_0 v + α_1 Av + ... + α_{k−1} A^{k−1}v + A^k v = 0.

This implies A(W) ⊂ W. In the basis {v, Av, ..., A^{k−1}v} of W, the restriction A|_W satisfies A(A^m v) = A^{m+1}v
for m = 0, ..., k − 2 and A(A^{k−1}v) = −α_0 v − α_1 Av − ... − α_{k−1}A^{k−1}v, i.e. A|_W is a companion matrix, so
its characteristic polynomial g(t) is

(−1)^k (α_0 + α_1 t + ... + α_{k−1}t^{k−1} + t^k).

We claim g divides the characteristic polynomial of A. Indeed, extend the basis w = {v, Av, ..., A^{k−1}v} of
W to a basis β := {v, Av, ..., A^{k−1}v, w_1, ..., w_{d−k}} of V; then in this basis

[A]_β = [B_1 B_2; 0 B_3],

so the characteristic polynomial f(t) of A is

f(t) = det(A − tI) = det([B_1 − tI, B_2; 0, B_3 − tI]) = det(B_1 − tI)det(B_3 − tI) = g(t)det(B_3 − tI),

i.e. the characteristic polynomial of A|_W divides that of A. But as f(t) = p(t)g(t) for some polynomial p,
we have f(A)v = p(A)g(A)v = 0 since g(A)v = 0. As v is arbitrary, f(A)v = 0 for all v ∈ V, so A satisfies
its own characteristic polynomial, which has degree d. So we are done.
Problem 10.
Problem 11. We say T is normal iff T^∗T = TT^∗, where T^∗ is the adjoint of T. Then

(Tx, Tx) = (x, T^∗Tx) = (x, TT^∗x) = (T^∗x, T^∗x),

so ||Tx|| = ||T^∗x||. Now by Schur's decomposition, as T is a complex matrix there is a unitary matrix U
such that

T = U^∗AU,

where A is upper triangular. We will show from ||Tx|| = ||T^∗x|| that A is in fact diagonal. Note that
unitary equivalence preserves normality, so ||Ax|| = ||A^∗x|| for all x. Then

||A(e_1)||^2 = |a_{11}|^2,
||A^∗(e_1)||^2 = |a_{11}|^2 + |a_{12}|^2 + ... + |a_{1n}|^2,

so a_{12} = ... = a_{1n} = 0. Then, using a_{12} = 0,

||A(e_2)||^2 = |a_{22}|^2,
||A^∗(e_2)||^2 = |a_{22}|^2 + |a_{23}|^2 + ... + |a_{2n}|^2,

so a_{2j} = 0 for j ≠ 2. Proceeding inductively, all the off-diagonal terms are zero, so A is a diagonal matrix
and T is unitarily equivalent to a diagonal matrix. This means there exists an orthonormal basis of
eigenvectors: indeed,

TU^∗ = U^∗A,   i.e.   [Tu_1, ..., Tu_n] = [λ_1u_1, ..., λ_nu_n],

where u_i is the i-th column of U^∗, so T(u_i) = λ_i u_i, and the u_i form an orthonormal basis since U is unitary.
Problem 12. Note
CA (X) = (X − 1)2 (X − 2)2
and that A is similar to B if and only if they have the same Jordan Canonical Form. We can either have
the Jordan form as 4 1 × 1 block, or one 2 × 2 block with two 1 × 1 block, or two 2 × 2 blocks so there
are a total of 4 similarity/congruence classes.
28 RAYMOND CHU

9. Spring 2014
Problem 1a. Note that
t4 = −1 ⇒ t = cos(θ) + i sin(θ)
such that cos(4θ) = −1 and sin(4θ) = 0. So θ = π4 , 3π 5π 7π π π 7π
4 , 4 , 4 . Note that cos( 4 ) + i sin( 4 ) = cos( 4 ) −
i sin( 7π 3π 3π 5π 5π
4 ) and cos( 4 ) + i sin( 4 ) = cos( 4 ) − i sin( 4 ) so the matrix
 π 
R( 4 ) 0
A :=
0 R( 3π
4 )
with  
cos(θ) − sin(θ)
R(θ) :=
sin(θ) cos(θ)
4
has characteristic polynomial t + 1. But as all the eigenvalues of A are distinct we have the minimal
polynomial is the characteristic polynomial.
Problem 1b. This question is false. But it can be shown that all sub-spaces are of even dimension i.e.
for A take W = span{a, b, 0, 0} for a, b ∈ R or span{0, 0, a, b}. To see why it has to be two dimensional.
Fix W ⊂ R4 such that W is a subspace and let A(W ) ⊂ W where A is defined in part a. Assuming
W 6= {0} this means if we fix an orthonormal basis {w1 , .., wm } of W and extend it to a orthonormal
basis of Rn i.e. β = {w1 , .., wm , v1 , .., vn−m } then A written in this basis takes the form
 
B1 B2
[A]β =
0 B3
In particular this implies the restricted operator A|W characteristic polynomial divides the characteristic
polynomial of A since if we let g(t) be the characteristic polynomial of A and f (t) be the characteristic
polynomial of A|W we have
g(t) = det(A − tI) = det(B1 − tI)det(B3 − tI) = f (t)det(B3 − tI)
And since A|W is a real operator all of its eigenvalues come in complex conjugate pairs so either the
characteristic polynomial of B1 is t4 + 1, (t − λ1 )(t − λ1 ), or (t − λ2 )(t − λ2 ) for λi = cos(θi ) + i sin(θi )
with θ1 = π4 or θ2 = 3π 4
4 . If the characteristic polynomial of B1 is t + 1 then we are done since B1 will be
a 4 × 4 matrix i.e. the basis of W has dimension 4. So WLOG assume that the characteristic polynomial
of B1 is (t − λ1 )(t − λ1 ). Then B1 is similar to the rotation matrix R( π4 ) which implies B1 is a 2 × 2
matrix. So W has dimension 2 so either dim(W ) is 2 or 4.
Problem 2. Note that
rank(ST ) + nullity(ST ) = dim(V ) ⇒ rank(ST ) = dim(V ) − nullity(ST )
≥ dim(V ) − nullity(S) − nullity(T )
so
rank(ST ) ≥ rank(S) + nullity(S) − nullity(S) − nullity(T )
= rank(S) − nullity(T )
which is equivalent to the desired inequality.
Problem 3. Assume for the sake of contradiction that A−1 exists then
B − A−1 BA = I
which implies n = tr(I) = tr(B − A−1 BA) = tr(B) − tr(B) = 0 which is a contradiction.
Problem 4. We claim this holds for all invertible matrix B indeed
det(BA − λI) = det(B −1 (BA − λI)B) = det(AB − λI)
where we used det(AB) = det(A)det(B) = det(B)det(A) = det(BA). Now we claim the set of invertible
matrix is dense in Rn×n . Indeed, given any matrix A ∈ Rn×n we can extend it to an operator over n . So
by Schur’s Decomposition we have A is unitarily equivalent to an upper triangular matrix i.e.
A = UT TU
and the eigenvalues of A are the diagonal terms of T . So consider
1
An := U T (T + I)U
n
29

then as there are only finitely many eigenvalues there exists an N such that for n ≥ N we have diag(T + n1 I)
have no zero entries. Therefore, An is invertible and clearly as n → ∞ we have An → A and An is real
valued since we are adding a real valued matrix to a real valued matrix. Then since the determinant is
a continuous function since its a polynomial of the coefficients of the matrix we have for a given B there
exists Bn → B where Bn are invertible so
lim det(ABn − tI) = lim det(Bn A − tI)
n→∞ n→∞
so continuity lets us put the limit inside so
det(AB − tI) = det(BA − tI)
Problem 5. Note V = range(L) (range(L))⊥ and we see that for any b there exists unique b1 ∈
L
range(L) and b2 ∈ (range(L))⊥ such that b = b1 + b2 . Then L(x) minimizes
||L(x) − b||
if and only if L(x) = b1 since
||b1 − b||2 ≤ ||b1 − b||2 + ||L(x) − b1 ||2 = ||L(x) − b||2
where the last line we used Pythagoras theorem since b1 − b ∈ (range(L))⊥ and the other term is in
range(L). So b1 is a min but the convexity of f (x) := ||L(x) − b||2 tells us the minimum is unique since
L(x) − b is affine and || · || is convex. Therefore, all minimizes x satisfy L(x) = b1 so if x and y minimize
it then L(x) = L(y).
Problem 6. Note the spectral theorem implies that
A = U ∗ DU
where U is unitary and D is a diagonal matrix since A is a normal operator so
A∗ = U ∗ D ∗ U
Note for any given polynomial P we have
P (A) = U ∗ P (D)U
Pn
so it suffices toPshow there exist a polynomial such that P (D) = D∗ . Note that if P (x) = i=1 αi xi we
n
have P (D) = i=1 αi Di . If we let diag(D) = (λ1 , ..., λn ) and diag(D∗ ) = (β1 , .., βn ) then it suffices to
show there exists a P such that P (λi ) = βi for all i = 1, .., n. Indeed this will just be the usual Lagrnage
Polynomials indeed fix a j and note
Y x − λi
Pj (x) :=
λj − λi
i6=j
satisfies (
1 if k = j
Pj (λk ) =
0 else
So let
n
X
P (x) := βi Pj (x)
j=1
then it satisfies
P (λk ) = βk
Pn Pn Pn
for all k. Therefore, P (D) = i=1 αi Di = diag( i=1 αi λ1 , .., i=1 αi λn ) = diag(β1 , .., βn ) = D∗ so
P (A) = A∗ as desired.
Problem 7. We will write out our counter example {anm } in matrix form:
 
1 −1 0 ... 0 0 0 0
−1 1 −1 1 0... 0 0 0 
0 0 1 −1 1 −1 ... 0 .. 0
and so on. Then summing along each row we get P zero and summing
P column we get zero since there
are only finitely many 1s and −1s. In particular, m anm =P n anm = 0 and these sums converge
absolutely since there are only finitely many terms. However, n,m |an | = +∞ since there are infinitely
many 1s and −1s.
30 RAYMOND CHU

Problem 8a. Note that by induction one easily sees that we have
( 1
(n) Qn (t)e− t for t > 0
f (t) =
0 for t ≤ 0
where Qn is a rational function so it suffices to show limt→0+ f (n) (t) = 0. Then as we have exponentials
e−t decay faster than any rational functions at t = +∞ we have the limit is zero so it is smooth.
− 1
(
2 e t2 −1 for − 1 < t < 1
Problem 8b. Note that we have f (t − 1) = is smooth since it is the
0 else
composition of two smooth functions. In particular in Rd we have
− 1
(
2 e |x|2 −1 for |t| < 1
f (|x| − 1) =
0 else
is smooth since its the composition of two functions. Then as this function is strictly positive we can
divide by its L1 mass to find a function as desired in the problem statement
Problem 9. See Fall 2010 number 11.
Problem 10. This is one side of Arzela-Ascoli. Enumerate Q ∩ [0, 1] = {qn }n∈N then for any {fn } ⊂ F
we have from uniform bound of the family
|fn (q1 )| ≤ M
(1)
so by compactness of [0, 1] we find a limit f (q1 ) along the subsequence nk such that fn(1) (q1 ) → f (q1 ).
i
Then by induction for any k we find have that
|fn(k−1) (qk )| ≤ M
i

(k)
so we find a limit f (qk ) and a subsequence n ⊂ n(k−1) such that fn(k) (qk ) → f (q1 ). Let the subsequence
i
(k)
mk := nk then we have for any n ∈ N that fmk (qn ) converges. Fix ε > 0 then by equicontinuity there
is a δ > 0 such that if d(x, y) < δ then for any f ∈ F we have |f (x) − f (y)| < 3ε . Then as Q ∩ [0, 1] is
dense we have
[0, 1] ⊂∞n=1 Bδ (qn )
so compactness gives us a finite subcover say q1 , .., qN are the centers. Then as fmk (x) → f (qi ) for all
1 ≤ i ≤ N we can find an M such that if k, m ≥ M then
ε
|fmn (qi ) − fmk (qi )| <
3
for any 1 ≤ i ≤ N . Then for any x there exists a qi such that x ∈ Bδ (qi ) so
|fmk (x) − fmn (x)| ≤ |fmk (x) − fmk (qi )| + |fmk (qi ) − fmn (qi )| + |fmn (qi ) − fmn (x)|
so the first and last term are controlled by ε/3 due to equicontinuity while the second term if less than
ε/3 if k, n ≥ M so we have for k, m ≥ M
|fmk (x) − fmn (x)| ≤ ε
so it is a uniformly cauchy subsequence of C([0, 1]) which by completeness implies the existence of a limit
f.
Problem 11. We note that this means F is a compact subset of C([0, 1]). In particular, F is totally
bounded. We claim this implies F is totally boubded. Indeed, fix ε > 0 then there exists f1 , .., fN ∈ F
such that
N
[
F⊂ Bε/2 (fi )
i=1
then as fi ∈ F there exists an gi ∈ F such that d(gi , fi ) < 2ε . Then for any f ∈ F there is an i such that
d(f, fi ) < 2ε so d(f, gi ) ≤ d(f, fi ) + d(fi , gi ) ≤ ε i.e.
N
[
F⊂ Bε (gi )
i=1
31

for gi ∈ F so F is totally bounded. Then we have the existence of g1 , .., gN ∈ F such that
N
[
F⊂ B1 (gi )
i=1
so for any f ∈ F we have ||f || ≤ 1 + max1≤i≤N ||gi || so it is uniformly bounded. For equicontinuity fix
ε > 0 then we have the existence of g1 , .., gN ∈ F such that
N
[
F⊂ Bε/3 (gi )
i=1
Then there exists a δ > 0 such that if d(x, y) < δ then for all 1 ≤ i ≤ N that d(gi (x), gi (y)) < ε/3 due to
uniform continuity of gi since [0, 1] is compact. Then for any f ∈ F there is a gi such that ||f − gi || < ε/3
so if d(x, y) ≤ δ then
|f (x) − f (y)| ≤ |f (x) − gi (x)| + |gi (x) − gi (y)| + |gi (y) − f (y)| ≤ ε/3 + ε/3 + ε/3 = ε
so the family is equicontinuous.
S∞
Problem 12a. Note that E c ∩ [0, 1] = n=1 int(In ) so E c is open in [0, 1].
Problem 12b. We need the following lemma: If A ⊂ R is a perfect set i.e. it is a closed set that has
no isolated points then A is uncountable. Indeed we first show A is complete. Indeed given a Cauchy
sequence {xm } ⊂ A we have that there exists a limit in R which implies x ∈ A. Now this means Baire
Category Theorem can be applied. Indeed, as A is countable we have
[∞
A= {an }
n=1
but each {an } is closed with empty interior so BCT says there exists an n such that {an } has non-empty
interior which is our contradiction.

Now we have 4 cases either 0 is an isolated point or a limit point, or 1 is an isolated point or a limit
point. WLOG assume that 0 and 1 are limit points for if say 1 is an isolated point then Ẽ := E − {1}
would be closed and we can repeat the proof below. Now fix x ∈ E that is not 0 or 1 and we claim x is
a limit point. Indeed assume not then x is isolated because E is closed then there exists an ε > 0 such
that
(x − ε, x + ε) ∩ E = {x}
This is our contradiction. Indeed, let x ∈ Ii then WLOG x is a left end point then (x − ε, x) ∈ / Ii but as
there is no interval end points in (x − ε, x) and In cover [0, 1] we must have an Ik such that (x − ε, x) ⊂ Ik
but as it does not have a end point in (x − ε, x) its right boundary point must be greater than or equal
to x hence we must have Ik ∩ Ii 6= ∅ which is a contradiction. Therefore, E is a countable perfect set
(since each In has two points), which is a contradiction.
32 RAYMOND CHU

10. Fall 2014


Problem 1. Let v := x − y, w := x + y then
v 2 + w2 1
H(x, y) = f (v, w) = +
2 |v|
Fix ε > 0 and R > ε and let the annul us with outer radius R and inner radius be defined as e
AR,ε := BR (0) − Bε (0). Then AR,ε is compact and f is continuous so f attains a min over AR,ε . Our
goal is to show there exists an R and ε such that the min becomes a global min. Indeed, on Bε (0) we
2
1
have f (v, w) ≥ minv∈Bε (0) |v| = 1ε and on R2 − BR (0) we have f (v, w) ≥ R2 . Then as f (1, 1) = 2 we see
by taking R big enough and ε small enough that
R2 1
min f (v, w) ≥ min{ , } > 2 = f (1, 1)
v∈R2 −AR,ε 2 ε
and (1, 1) ∈ AR,ε . Therefore, if z := min(v,w)∈AR,ε f (v, w) then
z ≤ f (1, 1) < min f (v, w)
v∈R2 −AR,ε

Therefore, z is a global minimum and it is attained at a point v 6= 0 ⇐⇒ x 6= y .


Problem 2. Claim: If A is closed and the union of two disconnected sets X, Y then X and Y are closed.
Indeed, let x ∈ X then x ∈ A = A implies that x ∈ X or Y . So x ∈ X or Y , but as we have X ∩ Y = ∅
and x ∈ X implies x is not in Y i.e. x ∈ X, so X is closed.

Now assume for the sake of contradiction that A is disconnected so there exists closed sets X, Y such
that
A=X ∪Y X ∩Y =∅
then
A ∩ B = (X ∩ B) ∪ (Y ∩ B)
and
(X ∪ Y ) ∩ B ⊂ X ∩ Y = ∅
so it follows that X ∩ B or Y ∩ B = ∅ since A ∩ B is connected. Assume X ∩ B = ∅, then
A ∪ B = X ∪ (Y ∪ B)
and
X ∩ (Y ∪ B) = (X ∩ Y ) ∪ (X ∩ B) = ∅
therefore, we have A ∪ B is disconnected which is a contradiction.
Problem 3. As f is continuous on [0, 1] compact we know that f is uniformly continuous. So for any
ε > 0 there is a δ > 0 such that if d(x, y) < δ then d(f (x), f (y)) < δ. Note that
[
[0, 1] ⊂ Bδ (x)
x∈[0,1]
SN
so by compactness there exists a finite subcover say i=1 Bδ (xi ) where we ordered the centers such
that xi−1 ≤ xi ≤ xi+1 . Then from pointwise convergence we can find an N such that if n ≥ N then
|fn (xi ) − f (xi )| < ε for all 1 ≤ i ≤ N . Then observe for any y ∈ (xi−1 , xi ) that form monotonicity
fn (xi−1 ) ≤ fn (y) ≤ fn (xi )
In particular, we have for n ≥ N that
fn (xi−1 ) − f (xi ) ≤ fn (y) − f (xi ) ≤ fn (xi ) − f (xi ) ≤ ε
In addition we have
|fn (xi−1 ) − f (xi )| ≤ |fn (xi−1 ) − f (xi−1 )| + |f (xi−1 ) − f (xi )| ≤ ε + ε = 2ε
where the first ε is due to pointwise convergence and the second ε is due to uniform continuity of f . So
putting these inequalities together gives
|fn (y) − f (xi )| ≤ 2ε
33

Then we have
|fn (y) − f (y)| ≤ |fn (y) − f (xi )| + |f (xi ) − f (y)| ≤ 3ε
In particular, this means that if n ≥ N that
sup ||fn (x) − f (x)|| ≤ 3ε
x∈[0,1]

so it converges uniformly.
Problem 4. We will show that the family is uniformly Lipschitz on [−1, 1], which thanks to the equi-
bound on fn implies by Arzela-Ascoli the desired result. Indeed fix y < x ∈ [−1, 1], z ∈ (1, 2) then we
claim we have
f (x) − f (y) f (z) − f (y)

x−y z−y
Then for any h > 0
f (x) = f (λy + (1 − λ)z) ≤ λf (y) + (1 − λ)f (z)
x−z
for λ := y−z < 1 and plugging in this inequality gives the desired bound. This implies
f (x) − f (y) 2||f ||L∞ ([−2,21]) 2||f ||L∞ ([−2,2])
≤ ≤ := C
x−y z−y z−1
and a similar argument shows if w ∈ (−2, −1) then
f (x) − f (w) f (x) − f (y)

x−w x−y
which implies
f (x) − f (y)
≥ C||f ||L∞ ([−2,2]
x−y
which implies convex functions are locally Lipschitz with a constant that only depends on the max
of f over the domain. Therefore, since our family is uniformly bounded by 1 the claim follows from
Arzela-Ascoli.

Problem 5. We claim for all n we have an ≤ 2. Indeed, a1 = 2 < 2 then by induction we have
a2n+1 = 2 + an ≤ 4 ⇒ an+1 ≤ 2
so we have that an is a bounded sequence. Now we claim an is a monotone increasing sequence. Indeed,
for any n we have
a2n = 2 + an−1 ≥ 2an−1 ≥ a2n−1
which shows that an is a monotone increasing sequence bounded above by 2, so it converges. To find the
limit note that we just need to solve

x = 2 + x ⇒ x = lim an = 2
n→+∞

Problem 6. Note that


n−1 n−1
X k+1 k 1 X 0 (n)
|f ( ) − f ( )| = |f (yk )|
n n n
k=0 k=0
(n) 0
where yk ∈ ( nk , k+1
n ) due to the MVT. So as |f | ∈ C([0, 1]) we have Riemann’s criterion that for the
1 2
partition Pn := {x0 := 0 < x1 := n < x2 := n < ... < xn :=} and any yk ∈ [xk−1 , xk ] that
n−1 n−1 ˆ 1
X k+1 k 1 X 0 (n)
lim |f ( ) − f ( )| = lim |f (yk )| = |f 0 (x)|dx
n→+∞ n n n→+∞ n 0
k=0 k=0

as desired.
Problem 7. Computation gives that solutions are of the form
     
1 2 4
0 −3 −5
  + α  + β 
1 1 0
0 0 1
34 RAYMOND CHU

Then the norm squared is given by


f (α, β) = (1 + 2α + 4β)2 + (−3α − 5β)2 + (1 + α)2 + β 2
and we want to minimize this over all α, β so we find the points where
∂α f = ∂β f = 0
and compare their values. This gives the min is at
34 13
α=− β=
19 19
Problem 8. We have an eigenvalue of n + k − 1 with the vector (1, 1.., 1)T . We also (n − 1) eigenvalues
k − 1 with the eigenvector e1 − e2 , e2 − e3 and we have (n − 1) of these and they are linearly independent.
So our determinant is (k − 1)n−1 (n + k − 1).
Problem 9. We know that as A ∈ Cn×n that there exists an invertible matrix S such that
M M
A = S −1 (J1 J2 ... Jk )S

where Ji are Jordan Block. WLOG put `, ` + 1, .., k as the Jordan Blocks with zero diagonal. Then as
A 6= 0 we know that ` > 1. Then for 1 ≤ i ≤ ` − 1 let the diagonal terms of each block be denoted as
λ1 , .., λ`−1 and λi 6= 0 for 1 ≤ i ≤ ` − 1 so there exists a αi such that λi + αi 6= 0 and λi . So let Bi = αi Id
1
n`+k
for 1 ≤ i ≤ ` − 1 with the same size as Ji . Then fix any α` , ..., αk ∈ C such that α`+k 6= αi or zero
where ni denote the size of Ji . Define
 
0 0
B`+k :=
α`+k 0
2
where B`+k is of size n`+k and B`+k is zero everywhere except the (n`+k , 1) entry. Then B`+k = 0 so its
only eigenvalues are zero. But we have
J`+k + B`+k = super diagonal(1, .., 1) + B`+k
In particular the transpose of J`+k +B`+k is the companion matrix with characteristic polynomial xn`+k −
αi . So we have the eigenvalues of
M M M
A + B := S −1 ((J1 + B1 ) (J2 + B2 ) ... (Jk + Bk ))S
1
`+kn
are λi + αi for 1 ≤ i ≤ ` − 1 and α`+k while the eigenvalues of B are αi and 0. And by construction
these are different.
Problem 10. We can have at most n2 − (n − 1) 1s since if we had more than that there would be at
least two rows with all ones. Then let
 
1 1 1 1... 1
A := 0 1 1 ... 1
1 0 1 ... 1
i.e. it is one everywhere except the first subdiagonal. Then this is invertible since if
Ax = 0
then we get from the first two equations
n
X n
X
xi = 0 and xi = 0
i=1 i=2

which gives x1 = 0. Repeating a similar argument usign the 1st row and jth column for j > 1 gives
xj−1 = 0. But then this means x1 , .., xn−1 = 0 but using the first equation again gives xn = 0. So its
kernal is only the zero vector so it is invertible.
Problem 11. Note that A2 is still a integer matrix so T r(A2 ) ∈ Z but T r(A2 ) = λ21 + λ22 + λ23 + λ24 .
35

´1 ´1
Problem 12. Note that aij = 0 xi−1 xj−1 i.e. A is a gram matrix. Let (f, g) := 0 f (x)g(x) then this
is an inner product so if ξ ∈ Rn we have
X X
aij ξi ξj = (xi−1 , xj−1 )ξi ξj
i,j i,j
X X X X X
i−1 j−1
= (ξi x , ξj x )=( ξi xi−1 , ξj xj−1 ) = ( ξi xi−1 , ξi xi−1 )
i,j i j i i
X ˆ 1 X
= || ξi xi−1 ||2 = ξi2 x2i−2 > 0
i 0 i
with equality iff ξ = 0 so the quadratic form is positive. And we also have aij = aji so A is symmetric
so it is positive definite.
36 RAYMOND CHU

11. Spring 2015


Problem 1. We claim f < 2. Indeed, observe that
4
1+ <2
10
so inequality fails at x = 2. But as f (0) = 0 and f is continuous we see if there exists a point y such that
f (y) ≥ 2 then IVT implies there exists a point where f (x∗ ) = 2 which contradicts the inequality.
Problem 2. Define
|f (x) − f (y)|
[f ]α := sup
x,y∈[0,1] |x − y|α
x6=y
Then let F ⊂ C α ([0, 1]) be a bounded sequence i.e. for all f ∈ F we have ||f ||C α := ||f ||L∞ + [f ]C α ≤ C
where C does not depend on f . Then the family is totally bounded since ||f ||L∞ ≤ C and as [f ]C α ≤ C
it is equicontinous with modulus of continuity C|x − y|α . So by Arzela-Ascoli there exists a uniformly
convergent subsequence which we denote by {fn } to a limit function g ∈ C α ([0, 1]). We know g ∈
C α ([0, 1]) since C α ([0, 1]) is a closed subset of C([0, 1]). Now we want to show that for any β < α we
have
||fn − g||C β ([0,1]) → 0
As fn → f in C([0, 1]) it suffices to control [fn − g]C β . Indeed observe that if we let f := fn − g
α
! αβ   αβ  β
|f (x) − f (y)| |f (x) − f (y)| β α
−1 |f (x) − f (y)| β
1− α |f (x) − f (y)| α
= = |f (x) − f (y)| β = |f (x)−f (y)|
|x − y|β |x − y|α |x − y|α |x − y|α
β 1− β
≤ 21− α ||f ||L∞α [f ]C α
β 1− β
≤ C22− α ||f ||L∞α
β β 1− β
Note that 1 − α > 0 since α > β. This implies F ⊂ C β since we can have [f ]β ≤ 21− α ||f ||L∞α [f ]C α . This
inequality also implies that [f ]C β = [fn − g]C β → 0 as n → +∞. So they converge in C β . The problem
statement has α = 12 > 13 = β.
Problem 3. Fix 0 < |h| < 1 then notice for any n ∈ N that
f (x + h) − f (x) f (x + h) − f (x + n1 ) + f (x + n1 ) − f (x)
=
h h
f (x + h) − f (x + n1 ) h − 1
n f (x + n1 ) − f (x) 1
n
= +
h − n1 h 1
n
h
= (I) + (II)
1
Note to make (II) become zero in the limit we just need to choose a sequence n such that nh is bounded.
Indeed, assume that 0 < h < 1 then there exists an N such that
1 1
≤h≤
N +1 N
so
1
N +1≥ ≥N
h
So we get
N +1 1
≥ ≥1
N Nh
so along this sub-sequence of N we get (II) → 0. Also note that since f is Lipschitz that
f (x + h) − f (x + n1 )
≤C
h − n1
so it suffices to show that along this same subsequence of N that
h − N1
→0
h
37

Indeed, we have the estimate


−1 1
≤h− ≤0
N (N + 1) N
so
−1 h − N1
≤ ≤0
h(N 2 + N ) h
so w e get
(N + 1) h − N1
− ≤0
N2 + N h
Therefore, (I) and (II) both converge to 0 as h → 0. Therefore, f is differentiable and f 0 ≡ 0 on R which
is connected so we also have that f is constant.
Problem 4. We first need the following lemma: Let f be a function with the IVT property then if
f is discontinuous at x0 then there exists an ε0 > 0 such that we have a sequence {xn } → x0 and
f (xn ) = f (x0 ) + ε20 or f (xn ) = f (x0 ) − ε20 . For now assuming the lemma is true, we have our contradic-
tion since there exists a sub-sequence with f (xnk ) = f (x0 ) + ε20 or f (xnk ) = f (x0 ) − ε20 for all k. Say
f (xnk ) = f (x0 ) − ε20 for all k then A := f −1 (f (x0 − ε20 )) is a closed set so as xnk → x we have x0 ∈ A
but this implies f (x0 ) = f (x0 ) − ε20 which is a contradiction. So it suffices to prove the lemma.

Let f have the IVT property and assume it is discontinuous at x0 then there exists an ε0 > 0 such that
there exists an y with |x0 − y| < n1 and
|f (x0 ) − f (y)| ≥ ε0
Assume that f (y) > f (x0 ) then we have f (y) ≥ ε0 + f (x0 ). In particular, this means f (x0 ) + ε20 ∈
[f (x0 ), f (y)] so by the IVT property there exists an xn ∈ (x0 , y) such that f (xn ) = f (x0 ) + ε20 . Note if
f (x0 ) ≥ f (y0 ) an identical argument yields the existence of an xn such that f (xn ) = f (x0 ) − ε20 and the
lemma holds. Which concludes the problem.
Problem 5. We proved this in Spring 2013 Number 4 and used it to prove that problem.
Problem 6. Let the operator T : C([0, 1]) → C([0, 1]) be defined via
ˆ
2 1 1
T (f ) := et + cos(s)f (s)ds
2 0
Then we have ˆ
1 1 1
||T (f ) − T (g)||L∞ ≤ ||f − g||L∞ = ||f − g||L∞
2 0 2
where we used | cos(s)| is bounded by 1. Then Banach Fixed Point Theorem implies that there exists a
2 ´1
unique fixed point of T (f ) i.e. f (t) = et + 12 0 cos(s)f (s)ds and f ∈ C([0, 1]).
Problem 7. This quadratic form is associated to the following symmetric matrix
 
9 6 −5
A :=  6 6 −1
−5 −1 6
 
x
which has a negative eigenvalue so there exists an (x, y, z) such that [x, y, z]A y  = f (x, y, z) < 0
z
Problem 8a. This is false. Let
A := diag(2, 2, 2)
then det(A) = 8. If An → A then we have each entry of An converges to each entry of A. But observe
that f : R3×3 → R defined via
f (A) = det(A)
is a continuous map since it is a polynomial int he coefficients of A. So in particular if An → A then
f (An ) → f (A) but for all n we have f (An ) = 1 and f (A) = 8 which is a contradiction.
38 RAYMOND CHU

Problem 8b. This is true. Indeed, by Schur Decomposition we have


A = UT TU
where T is an upper triangular matrix. Fix k ∈ N and let hk := {h1 , .., hn } such that ||h|| < k1 such that
Tii + hi 6= Tjj + hj for all j 6= i. This is possible for any k since we have a finite number of eigenvalues
then let us define
An := U T (T + diag(h1 , .., hn ))U
Pn 1
then Ak has distinct eigenvalues and An → A since d(A, Ak ) = ( i=1 h2k ) 2 < k1 for any k.
Problem 9. Fix a basis of U1 ∩ W1 i.e. {v1 , .., vd } extend it to a basis of U1 + W1 with the first d − `
elements being from U1 and the next d` from W1 and extend it to a basis of Rn i.e.
{v1 , .., vd , u1 , ..., ud−` , w1 , .., wd−` , q1 , .., qm }. Now do the same i.e. start with a d element basis of U2 ∩ W2
extend it to a basis of U2 + W2 with the first d − ` elements from U2 and the next from W2 and the finish
the rest to form a basis of Rn . Denote this basis of
(1) (1) (1) (1) (1) (1) (1) (1) (1) (1)
{v1 , ..., vd , u1 , .., ud−` , w1 , .., wd−` , q1 , .., qm }. Define T via T (ui ) = T (ui ), T (vi ) = T (vi ), T (wi ) =
(1) (1)
T (wi ), T (qi ) = T (qi ). This is possible since U 1, V 1, U 2, V 2 all have the same dimension and their
intersections do as well. Therefore, we have found an operator such that T (U1 ) = U2 and T (W1 ) = W2 .
Problem 10. Note that     
A B I Q A 0
=
C D 0 S 0 I
so we have det(M )det(S) = det(A) as desired.
Problem 11. Let θn := 2πn 11 then consider
 
cos(θn ) − sin(θn )
Rn :=
sin(θn ) cos(θn )
(11)
these are 11 commuting matrix with order 11. Note that we have Rn = Id and no smaller number k
such that Rnk = Id since  
k cos(kθn ) − sin(kθn )
Rn :=
sin(kθn ) cos(kθn )
and kθn = 2πnk
11 and nk divides 11 iff k = 11m for some m ∈ N since 11 is prime.
Problem 12a. We have    
5 1 4 0 1/6 1/6
M=
1 −1 0 −2 1/6 −5/6
so we have   4  
5 1 e 0 1/6 1/6
exp(M ) =
1 −1 0 e−2 1/6 −5/6
Problem 12b. We claim given any matrix A that exp(A) has positive eigenvalues. Indeed given any
matrix A ∈ Cn×n we can find a unitary S ∈ Cn×n such that
A = S ∗ (T )S
where T is upper triangular by Schur Decomposition. Then
exp(A) = S ∗ (exp(T ))S
Note that applying any polynomial to T still results in an upper triangular matrix. In particular
exp(T )ii = exp(Tii ) so all the eigenvlaues of exp(A) are of the form exp(Tii ) > 0. And the claim is
proved But M has an eigenvalue −2 which implies there is no map A such that exp(A) = M .
39

12. Fall 2015


Problem 1. Fix ε > 0 and N ∈ N then for any n ≥ N we have n = αN + r where α ∈ N and
r ∈ {0, 1, .., N − 1} then
an aαN +r aαN + ar aαN ai
= ≤ ≤ + max
n αN + r αN + r αN i=1,..,N −1 n
αaN ai
≤ + max
αN i=1,..,N −1 n
aN
= +ε
N
for n large enough where we used maxi=1,..,N −1 ani → 0. So it follows that
an an
lim ≤ inf
n→∞ n n∈N n

this implies
an an
lim = inf
n→∞ n n∈N n
as desired 
Problem 2. Observe that as h ≥ 0
ˆ b ˆ b ˆ b
min g(ξ) h(x)dx ≤ g(x)h(x) ≤ max g(ξ) h(x)dx
ξ∈[a,b] a a ξ∈[a,b] a
So for the continous function ˆ b
F (y) := g(y) h(x)dx
a
we can apply IVT to find a ζ such that
ˆ b ˆ b
F (ζ) = g(x)h(x)dx = g(ζ) h(x)dx
a a
as desired 
P1]n is compact, and fn → 0 continuous
Problem 3. By Dini’s Theorem since fn is non-increasing, [−1,
we have fn → 0 uniformly. Now we sum by parts i.e. for Bn := i=1 bi we have
n
X n
X n
X n
X n−1
X
ai bi = (Bi − Bi−1 )ai = Bi ai − Bi−1 ai = Bi ai − Bj aj+1
i=m i=m i=m i=m j=m−1

n−1
X
= Bn an − Bm−1 am + Bj (aj − aj+1 )
j=m
Pn j
So we have for n ≥ m for Bn := j=1 (−1)
n−1
X
|gm (x) − gn (x)| ≤ |Bn fn (x) − Bm−1 fm (x)| + | Bj (fj (x) − fj+1 (x)|
j=m

using that |Bn | ≤ C we have


n−1
X
≤ C(|fn (x)| + |fm (x)|) + C|fj (x) − fj+1 (x)|
j=m

using fj is non-increasing we have


n−1
X
= C(|fn (x)| + |fm (x)|) + C(fj (x) − fj+1 (x))
j=m

n−1
X
= C(|fn (x)| + |fm (x)|) + C(fn (x) − fm (x))
j=m
40 RAYMOND CHU

Then as fn (x) is a montone sequence this is equation to


= C(|fn (x)| + |fm (x)| + fm−1 (x) − fn−1 (x))
then using supx |fj (x)| → 0 we have that the sequence {gm (x)} is Cauchy in C([−1, 1]) we have by
completeness of C([−1, 1]) the existence of g ∈ C([−1, 1]) such that gn (x) → g(x) uniformly.
Problem 4. Let us define the operator from T : C([0, ∞)) → C([0, ∞)) by
ˆ x
T (f ) := e−2x + f (t)e−2t dt
0
then ˆ ∞
1
d(T (f ), T (g)) ≤ ||f − g||L∞ e−2t dt =
d(f, g)
0 2
Therefore, T is a contraction mapping so there exists a unique fixed point of T i.e. there is some
f ∈ C([0, ∞)) such that T (f ) = f . By Banach’s Fixed Point if we start with any f ∈ C([0, ∞)) then
define fn+1 := T (fn ) then fn+1 converges to the unique fixed point i.e. f . To explicitly find f note that
ˆ x
−2x
f =e + f (t)e−2t dt
0
and differentiate to find f , which converts this integral equation into a differential equation.
Problem 5. By the implicit function theorem we have

−1
∂y x(y, z) = −∂y F (∂x F )

−1
∂z y(x, z) = −∂z F (∂y F )
∂x z(x, y) = −∂x F (∂x F )−1

so multiplying them we get


∂y x∂z y∂x z = −1
Problem 7. By Sylvester Rank Theorem we have
rank(T ) − ker(S) ≤ rank(ST ) ≤ min{rank(S), rank(T )}
Take S = AT and T = B then
1 ≤ rank(AT B) ≤ 3
Problem 8. Follows from direct computation.
Problem 9. Since we have
det(A − λI) = det((A − λI)T ) = det(AT − λI)
we conclude A and AT have the same eigenvalues. Therefore, as
AT = −A
we must have for every positive eigenvalue a negative eigenvalue. Therefore, the product of the eigenvalues
must be non-negative i.e. det(A) ≥ 0.
Problem 10a. Note that exp(A) is absolutely convergent for all x since

X ||A||n
|| exp(A)|| ≤ ≤ exp(||A||)
n=0
n!
then if AB = BA we can apply binomial theorem to see
∞ ∞ Xn  
X (A + B)n X n An−k B k
exp(A + B) = =
n=0
n! n=0
k (n − k)! k!
k=0

= exp(A) exp(B)
since by Cauchy Product and AB = BA we have

! ∞ ! ∞ X n
X X X
n k
an A bk B = ( an−k An−k bk xB k )
n=0 k=0 n=0 k=0
41

Problem 10b. Let    


1 0 0 0
A= B=
0 0 0 1
then    
e 0 0 0
exp(A) = exp(B) =
0 0 0 e
and    
0 0 e 0
exp(A) exp(B) = 6= exp(A + B) =
0 0 0 e
Problem 11a. Note that for any m ≥ 0
ker(Adim(V) ) = ker(Adim(V)+m )
so if there exists a square root we must have
S 10 6= 0
but S 12 = 0 but as dim(V ) = 6 we must have ker(S 6 ) = ker(S 10 ) = ker(S 12 ) but the last two do not
agree so no square roots exist.
Problem 11b. Consider
Si,i+1 = 1 and 0 else
12×12
where S ∈ R then S 6= 0 but S = 0 so define A := S 2 then A has such a square root.
10 12

Problem 12. We know M = diag(1, 2, 3, 4, .., n)+A where A is a matrix of all ones let Λ := diag(1, 2, 3, 4, .., n)
Then
(M x, x) = (Λx, x) + (Ax, x)
Xn n
X
= jx2j + ( xk ) 2 ≥ 0
j=1 k=1
so M is positive definite.
42 RAYMOND CHU

13. Spring 2016


Problem 1. Let xn → x then we either have
f (x, x) − f (xn , x) ≤ f (x, x) − f (x, xn ) ≤ f (x, x) − f (x, xn )
or
f (x, x) − f (x, xn ) ≤ f (x, x) − f (x, xn ) ≤ f (x, x) − f (xn , x)
in either case we have as n∞ that |f (x, x) − f (xn , xn )| → 0 so g(x) := f (x, x) is continuous.
Problem 2a. We say f is Riemann Integrable if for any ε > 0 there exists a partition P of [a, b] with
P = {a = x0 < x1 < ... < xn = b} and Ii := [xi−1 , xi ] and ω(f, Ii ) := supx,y∈Ii |f (x) − f (y)| with
δxi := xi − xi−1
X n
ω(f, Ii )∆xi
i=1
Problem 2b. Fix ε > 0 and as xn → x there exists an N such that if n ≥ N then xn ∈ B 2ε (x) := I1 .
Then for i = 1, .., N let Ii+1 := B i+1 ε (xi ). If Ii ∩ Ij 6= ∅ for i 6= j then we can make the radius of each
2
ball smaller to ensure they are disjoint so WLOG assume that Ii ∩ Ij = ∅ whenever j 6= i. Then consider
any partition that contains I1 , .., IN +1 call the remaining intervals IN +2 , ..., IM then observe
(
1 if i = 1, .., N + 1
ω(f, Ii ) =
0 else
so
M N +1
X X ε
ω(f, Ii )∆xi ≤ ≤ε
i=1 i=1
2i+1
so f is Riemann Integrable.
Problem 3. Note that
n−1 ˆ 1 n−1 ˆ k+1
X k X x n
f( ) − n f (x) = f( ) − n f (x)
n 0 n k
k=0 k=0 n

and we have
ˆ k+1
n k+1
ˆ k+1
n
f (x) = (x + A)f (x)| k
n
− (x + A)f 0 (x)
k n k
n n

Taking A = − k+1
n gives along with the MVT of integrals that
ˆ k+1
1 k 0
n k+1
= f ( ) − f (ξk ) (x − )
n n k
n
n
1 k f 0 (ξk )
= f( ) +
n n n2
where ξk ∈ ( nk , k+1
n ). So
n−1 ˆ 1 n−1
X k X f 0 (ξk )
f( ) − n f (x) = −
n 0 n
k=0 k=0
This is a Riemann Sum so it converges to
ˆ 1
− f 0 (x) = f (0) − f (1)
0
So ˆ
n 1
X k
f( ) − n f (x) → f (0)
n 0
k=0
In particular, this implies
n ˆ 1
X k
n( f( ) − n f (x))
n 0
k=0
43

diverges. I assume the correct question was to find the limit of


n ˆ 1
X k
f( ) − n f (x)
n 0
k=0

Problem 4. As β is a continuous map from [0, 1] → [0, 1) we know from continuity that there exists an
x∗ ∈ [0, 1] such that β(x∗ ) is the max. In particular, β ≤ β(x∗ ) < 1. So define the map T : C([0, 1]) →
C([0, 1]) defined via
ˆ 1
T (f ) := α(x) + β(t)fn (t)dt
0
is a contraction map on the complete metric space C([0, 1]). Consider the iteration scheme
f0 (x) ≡ 0
ˆ 1
fn+1 (x) = α(x) + β(t)fn (t)dt
0
Then we have for any n ≥ m we have
ˆ 1
T (fn ) − T (fm ) = β(t)(fn (t) − fm (t))
0
So
||T (fn ) − T (fm )||L∞ (0,1) ≤ γ||fn (t) − fm (t)||L∞ (0,1)

for γ := β(x ) < 1. In particular, iterating this inequality gives
||T (fn ) − T (fm )|| ≤ γ m ||f0 (t) − fn−m (t)|| = γ m ||fn−m (t)||
≤ γ m (||fn−m − fn−m−1 || + ||fn−m−1 − fn−m−2 || + ...||f1 (t)||)
≤ γ m (γ n−m ||f1 || + γ n−m−1 ||f1 || + ... + ||f1 ||)
m
X
= ||f1 || γn
k=n
so it is Cauchy and completeness implies the existence of a limit f . Then fn → f uniformly so
lim T (fn ) = T ( lim fn ) = T (f )
n→∞ n→∞
and
lim T (fn ) = lim fn+1 = f
n→∞ n→∞
i.e. f = T (f ). So the limit is a fixed point. To find it explicitly we differentiate the integral equation
T (f ) = f
and solve the ODE.
Problem 5. As ∇g(x0 , y0 ) 6= 0 we can assume WLOG that ∂y g(x0 , y0 ) 6= 0. Then the Implicit Function
Theorem implies there exists an open neighborhood U ⊂ R1 containing x0 and a map ϕ : U → ϕ(U )
satisfies
g(x, ϕ(x)) = g(x0 , y0 ) = 0
for x ∈ U . Then we get
d
0= g(x, ϕ(x)) = ∂x g(x, ϕ(x)) + ϕ0 (x)∂y g(x, ϕ(x))
dx
i.e.
∂x g(x, ϕ(x))
ϕ0 (x) = −
∂y g(x, ϕ(x))
Then let us define
ψ(x) := f (x, ϕ(x))
then ψ has a minimum at x = x0 so we have
d
0= ψ(x)|x=x0 = ∂x f (x0 , ϕ(x0 )) + ∂y f (x0 , ϕ(x0 ))ϕ0 (x0 )
dx
44 RAYMOND CHU

Putting these together we get and that ϕ(x0 ) = y0 gives


∂x f (x0 , y0 ) ∂y f (x0 , y0 )
=
∂x g(x0 , y0 ) ∂y g(x0 , y0 )
∂ f (x ,y )
so it follows that for λ := ∂yy g(x00,y00) = ∂∂xxfg(x
(x0 ,y0 )
0 ,y0 )
that ∂x f (x0 , y0 ) = λ∂x g(x0 , y0 ) and ∂y f (x0 , y0 ) =
λ∂y g(x0 , y0 ) i.e. ∇f (x0 , y0 ) = λ∇g(x0 , y0 )
Problem 6. Let us consider an open ball Br (x). Let us assume that y is a limit point then there exists
an yn ⊂ Br (x) → y. Then there exists an N such that for n ≥ N we have
r
d(y, yn ) ≤
2
for any n ≥ N . So we have for any n ≥ N
r
d(x, y) ≤ max{d(x, yn ), d(yn , y)} ≤ max{d(x, yn ), } < r
2
since yn ∈ Br (x) so it follows y ∈ Br (x) i.e. the open ball is also closed.

Now consider a closed ball. Let ε := 2r . Then if y ∈ {y : ρ(x, y) ≤ r} then for any z ∈ Be (y) we have
r
d(z, x) ≤ max{d(z, y), d(y, x)} ≤ max{ , d(x, y)} ≤ r
2
i.e. z ∈ {y : ρ(x, y) ≤ r} so Bε (t) ⊂ {y : ρ(x, y) ≤ r}. So it is also open since every point is an interior
point.
Problem 7. We need the following lemma: Lemma 1 If A is a real normal matrix, then A is unitarily
equivalent to the following matrix
M M M
B := R1 R2 .... Rm
where  
  |λi | cos(θi ) −|λi | sin(θi )
Ri = λi or
|λi | sin(θi ) |λi | cos(θi )
Since A is normal it is complex diagonalizable and since A is real all the complex eigenvalues come in
conjugate pairs. Fix a complex eigenvalue say λ then λ = |λ|eiθ for some θ ∈ [0, 2π]. Then the eigenvalues
of  
|λ| cos(θ) −|λ| sin(θ)
|λ| sin(θ) |λ| cos(θ)
is exactly |λ|(cos(θ) + i sin(θ)) = λ and |λ|(cos(θ) − i sin(θ)) = λ. This matrix is normal so it is unitarily
similar to
diag(λ, λ)
By composing bases we get the lemma.

Now since M is orthogonal it is normal and eigenvalues have magnitude 1 so the lemma implies there
exists complex unitary matrix such that
M M M
M = U T (R1 R2 ... Rm )U
Then this implies M M M
U M = (R1 R2 ... Rm )U
Then U = A + iB where A and B are real. So we get for any r ∈ C
M M M
(A + rB)M = (R1 R2 ... Rm )(A + rB)
Let p(r) := det(A + rB) then p(i) 6= 0 since U is invertible. Therefore, p is not the zero-polynomial so
there exists only finitely many roots so there exists an r ∈ R such that p(r) 6= 0. In particular,
M M M
M = (A + rB)−1 (R1 R2 ... Rm )(A + rB)
Let V := (A + rB) which is real. So
M M M
M = V −1 (R1 R2 ... Rm )V
45

Assume that M has 2k complex roots. WLOG assume that R1 , .., Rk be the rotation matrix in lemma 1
while Rk+1 , .., Rm be the diagonal matrix. Consider
M M M M M M
Li := V −1 (I1 ... Ii−1 Ri Ii+1 ... Im )V
where Ij is the identity matrix with the size of Rj and Li is real. Then we clearly have
m
Y M M M
Li = V T (R1 R2 ... Rm )V = M
i=1
so if there exists even a single complex eigenvalue we have m ≤ n − 1. So the claim is proved for unitary
with even a single complex eigenvalue since Li is the identity on a n − 2 dimension subspace. When the
unitary matrix only has real eigenvalues relabel the Ri such that Ri = [1] for 1 ≤ i ≤ k and Rk+i = [−1]
for 1 ≤ i ≤ n − k. If k = n then M = I and there is nothing to prove. We just replace Rk+i and Rk+i+1
with
Wi := diag(−1, −1, 1, 1.., 1)
Then if k is even we are done, but if k is odd the the we keep the last entry as W` := [−1]. Then we do
k
Y `
Y
Li Wi = M
i=1 i=1
and k + ` ≤ n − 1 as long as k > 1. If k = 1 we just do
W` := diag(1, −1, 1, 1.., 1)
to get the result.
Problem 8. Assume
(I + A)x = 0
then (
x1 + a11 x1 + a21 x2 = 0
x2 + a12 x1 + a22 x2 = 0
so we have
||x||2 = (a11 x1 + a21 x2 )2 + (a12 x1 + a22 x2 )2
2
X 1
≤ ||x||2 ( a2ij ) ≤ ||x||2
i,j=1
10
by Cauchy Schwarz, so we must have ||x|| = 0 i.e. x = 0 so I + A is invertible.
Problem 9. Let A ∈ R3×3 be defined via
A := [v1 , v2 , v3 ]
then det(A) 6= 0 iff v1 , v2 , v3 are linearly independent over R. and
det(A) = x(2 − x2 )

so they are linearly independent over R iff x 6= 0 or ± 2.

Note that if they are linearly


√ independent over R they are linearly independent over Q. So it suffices
to check at x = 0 and x = ± 2. It is easy√ to show that at x = 0 they are not linearly independent over
. It is easy to verify the Kernals at x = 2 are spanned by
 
−1

 2
1

and at x = − 2 is spanned by
 
√1
 2
1
so they are linearly independent over .
46 RAYMOND CHU

Problem 10a. Fix A ∈ M at(3, C). By Schur Decomposition there exists a unitary matrix U and an
upper triangular matrix T such that
A = U ∗T U
(k) (k) (k) (k) Pn (k)
Fix a sequence of numbers {h1 , ..., hn } such that Tii + hi 6= Tjj + hj for i 6= j and i=1 |h1 | ≤ k1
then define
(k)
Ak := U ∗ (T + diag(h1 , ..., h(k)
n )U
Then Ak has distinct eigenvalues and Ak → A as k → ∞
Problem 10b. Let A := diag(1, 2, 3) then if An → A then we have det(An − λI) → det(A − λI) but if
An has only one Jordan Block then we must have
det(An − λI) = (λ − λn )3
for some λn but this can never converge to
(λ − 1)(λ − 2)(λ − 3) = det(A − λI)
Problem 11.
Problem 12. As A is self adjoint and Avk = (2k − 1)vk we have vi ⊥ vj for any i 6= j. The result then
follows from bilinearity of the inner product.
47

14. Fall 2016


Problem 1. We claim similar matrix share the same eigenvalues. Indeed, if
SAS −1 = B ⇒ SA(x) = BS(x)
for all x. Let Ax = λx then
λSx = BSx
and as S is invertible we have Sx 6= 0 so Sx is an eigenvector of B with eigenvalue λ. This shows all the
eigenvalues of A are eigenvalues of B and a similar argument shows that they share the same eigenvalue.
We then note that the Jordan Canonical Form implies that all the eigenvalues of B 3 are the eigenvalues
of B cubed. But as B is similar to B 3 for each eigenvalue λ we must have λ3 = λ ⇒ λ(λ2 − 1) = 0 so
either λ = 0 or λ = ±1. But as B is invertible 0 cannot be an eigenvalue so the only eigenvalues are ±1
which are roots of unity.
Problem 2. Note that A is a Jordan Block so we have
 n
n2n−1

2
An =
0 2n
So
2n n2n−1
P∞ P∞ 
exp(A) = n=0 n! n=0 n!
∞ 2n
P
0 n=0 n!
 2
e2

e
=
0 e2
Let B := exp(A) then
||B||2 = sup ||Bx||2 = sup (Bx, By)
||x||=1 ||x||=1,||y||=1

= sup (B T Bx, y)
||x||=1,||y||=1

And B B is symmetric so it is real diagonalizable. So we get if we let λ be the largest eigenvalue of B T B


T

that √
||B|| = λ
So one just computes the eigenvalues of B T B to get ||B||.
Problem 3.
Problem 4. We use the following lemma: A matrix A has rank(A) ≥ r iff there exists an r×r submatrix
of A such that the submatrix is invertible. This can be easily seen to be equivalent to having r linearly
independent columns so we omit the details. Now let r = rank(A) then there exists an r × r submatrix
that is full rank. Then as An → A we must have the same entries in the r × r submatrix converge to the
r × r submatrix of A. Then as the det is a polynomial map of the coefficients we must have for sufficiently
large n the det of the r × r submatrix of An is non-zero due to continuity. So this implies for sufficiently
large n we have rank(An ) ≥ r i.e.
rank(A) ≤ lim inf rank(An )
n→∞

Problem 5.
Problem 6a. Let √
v := ( 2, π, ..., 1)
√ √ √
then λ 2 ∈ Q iff λ = q 2 for q ∈ Q but q 2π ∈ / Q.
Problem 6b. Notice that A − 3I is a rational matrix. We consider the companion matrix
 
A − 3I 0
where the 0 represents an n × 1 matrix. As 3 is an eigenvalue of A we have that A − 3I has a non-zero
kernal. So to find the Kernal we just preform Gaussian Elimination till A − 3I is of the form
 
Ir×r 0n−r×n−r v1
0 0 v2
48 RAYMOND CHU

where Ir×r is the r × r identity matrix for r = rank(A − 3I) and we have r ≤ n − 1. Then we have that
v1 and v2 are in the Kernal of A − 3I but preforming Gaussian Elimination on rational entries leaves the
entries rational i.e. v1 0 ∈ Qn and this is also in the Kernal.
Problem 7.
Problem 8a. This is a standard diagonal argument.
Problem 8b. Let (
0 if x ≤ n
fn (x) :=
1 if x ≥ n + 1
with a line connection the two between x = n and n + 1. Then we have fn (m) = 0 for any m > n so
limn→∞ fn (m) = 0 but limm→∞ fn (m) = 1 for all n. So the double limits do not agree.
Problem 9. If fn → f uniformly then for any ε > 0 there exists an N ∈ N such that ||fn − fm ||L∞ < 3ε
for n, m ≥ N . Then for any n, m ≥ N we have
|fn (x) − fn (y)| ≤ |fn (x) − fN (x)| + |fN (x) − fN (y)| + |fN (y) − fn (y)|

≤ + |fN (x) − fN (y)|
3
ε
As fN is continuous there exists a δN > 0 such that if d(x, y) < δN ⇒ d(fN (x), fN (y)) < 3 so
|fn (x) − fn (y)| < ε
for x, y such that d(x, y) < δN . Then f1 , .., fN −1 are uniformly continuous let δi be chosen such that
d(x, y) < δi ⇒ d(fi (x), fi (y)) < ε then let δ := min{δ1 , ..., δN } then if d(x, y) < δ we have
d(fi (x), fi (y)) < ε
i.e. the family is equicontinuous.

Now if fn → f pointwise and {fn } is equicontiuous. Then fix ε > 0 then there exists a δ > 0 such that
if |fn (x) − fn (y)| < ε and |f (x) − f (y)| < ε whenever d(x, y) < δ. Now consider the open cover of [0, 1]
[
[0, 1] ⊂ Bδ (x)
x∈[0,1]

so there exists a finite subcover Bδ (xi ) for i = 1, .., N . Then as fn (xi ) → f (xi ) there exists an Mi such
that if n ≥ Mi then |fn (xi ) − f (xi )| < ε. Let M := max{M1 , .., MN } then |fn (xi ) − f (xi )| ≤ ε for any i
when n ≥ M . Then for any x it must live in some δ ball say xi then
|f (x) − fn (x)| ≤ |f (x) − f (xi )| + |f (xi ) − fn (xi )| + |fn (xi ) − fn (x)| ≤ 3ε
so
sup ||f − fn || < 3ε
for any n ≥ M i.e. uniform covergence.
Problem 10. We are asked to minimize for g(x, y) := x4 + y 4 − 2
min
(x,y):g(x,y)=0

Problem 11. Let (


1
sin(n) for t ≤ n
fn (t) :=
sin( 1t ) for t ≥ 1
n
then if there exists a sub-sequence that converges to a limit f then we must have f (t) = sin( 1t ) for
t ∈ (0, 1]. But this limit cannot be continuous since sin(1/t) cannot be extended to be continuous on
[0, 1]. So it is not compact, hence not complete.
Problem 12. As f is convex we have for any x < zt < y where zt := (1 − λ)x + λy for λ ∈ [0, 1] that
f (x) − f (zt ) f (x) − f (y) f (y) − f (zt )
≤ ≤
x − zt x−y y − zt
In particular sending zt → y and zt → x that
f (y) − f (x)
f 0 (x) ≤ ≤ f 0 (y)
y−x
49

so we have
(y − x)f 0 (x) + f (x) ≤ f (y) ≤ (y − x)f 0 (y) + f (x)
i.e.
f (y) ≥ f (x) + f 0 (x)(y − x)
whenever y > x and if x > y we have the inequality
f (x) − f (y)
≤ f 0 (x)
x−y
i.e.
f (x) ≤ (x − y)f 0 (x) + f (y) ⇒ f (x) + (y − x)f 0 (x) ≤ f (y)
whenever x > y. The inequality is trivial for x = y so we have arrived at the conclusion.

For the reverse fix x, y and let z := λx + (1 − λ)y then


f (y) ≥ f (z) + λf 0 (z)(x − y)
and
f (x) ≥ f (z) + (1 − λ)f 0 (z)(y − x)
so
(1 − λ)f (y) + λf (y) ≥ f (z)
as desired.
50 RAYMOND CHU

15. Spring 2017


Problem 1. It is clear that range(M M T ) ⊂ range(M ) so it suffices to show rank(M M T ) = rank(M ).
Now fix x ∈ Ker(M M T ) thenf or any y ∈ Rn we have
(M M T x, y) = 0 ⇒ (M T x, M T y) = 0
taking y = x we get M T x = 0. In particular, we have shown that Ker(M M T ) ⊂ Ker(M T ). But we
trivially have Ker(M T ) ⊂ Ker(M M T ) so Ker(M T ) = Ker(M M T ) i.e. nullity(M T ) = nullity(M M T ).
But using nullity(M T ) = nullity(M ) we get
rank(M M T ) = rank(M )
so we have range(M M T ) = range(M ) as desired.
Problem 2.
Problem 3a. As M = M T and M M T = I we get M 2 = I. In particular, as M is normal it is complex
diagnolizable so it has a basis of eigenvectors. Let λ be an eigenvalue associated with the eigenvector x
then we have x = M 2 x = λ2 x so λ = ±1. As M is positive def all the eigenvalues are positive i.e. λ = 1.
Therefore, by the spectral theorem we have the existence of complex unitary matrix U such that
M = U T IU = I
so M = I.
Problem 3b. No.
Problem 4.
Qn
Problem 5. Fix a polynomial F (X) then F (x) = α i=1 (x − xi ) for some xi and α. Then we claim
F (T )x = 0 for x 6= 0 iff F has an eigenvalue as a root. The direction eigenvalue as a root imply F (T )x = 0
Q0
for x 6= 0 is trivial by taking an eigenvector as x. For the other direction let k ≥ 0 with i=1 (T −xi I) := x
Qk Qk
be the largest integer such that i=1 (T − xi I)x 6= 0 then we have i=1 (T − xi I)x is an eigenvector of
T − xk+1 I. So this implies F (T ) is invertible iff F does not have any eigenvalues as roots i.e. iff the
minimal polynomial and F do not share ant roots.
Problem 6b. Just do Grahm-Schmidt on
       
1 0 0 1 0 0 0 0
0 0 0 0 1 0 0 1
Problem 7. Let the operator T : C([0, 1]) → C([0, 1]) be defined via
ˆ x 2
T (f ) := 1 − tf
0
Indeed this maps to C([0, 1]) since
ˆ x 2 ˆ 1 2 ˆ 1
0≤ tf ≤ tf ≤ t2 f 2 ≤ 1
0 0 0
where the second last inequality is due to Jensen’s Inequality. So in particular,
0 ≤ T (f ) ≤ 1
and the continuity is clear. Then observe
ˆ x 2 ˆ x 2 ˆ x ˆ x
T (f ) − T (g) = tg − tf =( tg − tf )( tf + tg)
0 0 0 0
ˆ x ˆ 1
1
tg ≤ |tg| ≤
0 0 2
where we used ||g||L∞ ≤ 1. In particular this implies
ˆ x ˆ x
1
|T (f ) − T (g)| ≤ |tf − tg|dt ≤ ||f − g||L∞ t = ||f − g||L∞
0 0 2
i.e. T is a contraction map and an operator on a complete metric space. So it follows from Banach Fixed
Point Theorem that this admits a unique fixed point.
51

Problem 8. Note that by integrating by parts we have


ˆ 1 ˆ 1
1 1
f (x)dx = f (x)(x − )|1x=0 − (x − )f 0 (x)
0 2 0 2
ˆ 1
f (1) + f (0) 1
= − (x − )f 0
2 0 2
2 ˆ 1 2
f (1) + f (0) x x x x
= + f 0 (x)( − )|1x=0 + ( − )f 00 (x)
2 2 2 0 2 2
ˆ 1 2
f (1) + f (0) x x
= + ( − )f 00 (x)
2 0 2 2
Therefore,
ˆ 1 ˆ 1 2 ˆ
f (1) + f (0) x x 00 1 1 00
− f (x)dx = ( − )f (x) ≤ |f (x)|
2 0 0 2 2 8 0
2
where we got 1/8 since maxx∈[0,1] | x2 − x2 | = 18
Problem 9a. Let d(x, y) := dist(x, y) then notice that by the reverse triangle inequality it suffices to
show for any ε > 0 and any x ∈ X that there exists a z ∈ Zε where Zε is coutable such that
d(x, z) < ε
As C(X) is separable there exists a countable set of functions {fn } such that for any g ∈ C(X) we have
an n such that
ε
||g(x) − fn (x)||L∞ <
2
Fix an x ∈ X ad let g(z) := d(x, z) then g(z) ∈ C(X) so there exists an n such that
ε
||g(x) − fn (x)|| = ||fn (x)|| ≤ ||g(z) − fn (z)||L∞ <
2
Let {gn } ⊂ {fn } be chosen such that for each n there exists an xn such that
ε
||d(xn , z) − gn (z)||L∞ <
2
then we have gn (xn ) < 2ε . Then fix an arbitrary x ∈ X then this implies there exists an n such that
ε
||d(x, z) − gn (z)|| <
2
In particular this implies
d(x, xn ) ≤ ||d(x, xn ) − gn (xn )|| + ||gn (xn )|| < ε
If we let Z := {xn } then the claim is shown.
Problem 9b. In a we have shown for any ε > 0 there exists a countable set Zε such that for any x ∈ X
there exists a z ∈ Zε such that
d(x, z) < ε
1
S∞
Let εn := n and consider Z := n=1 Zεn which is countable since it is a countable union of countable
sets and for any ε > 0 and x ∈ X there exists an z ∈ Z such that
d(x, z) < ε
so X is separable.
Problem 10a. As K is compact it is closed. In particular, if K can be written as the union of two
separated sets A and B then A and B are both closed. But as K is bounded so are A and B. In
particular, A and B are compact and
K = A ∪ B such thatA ∩ B = ∅
Then we must have the existence of an ε0 > 0 such that d(A, B) > ε0 . Let a ∈ A and b ∈ B then by
assumption there exists a sequence x0 , x1 , .., xn such that x0 = a and xn = b with ||xk − xk−1 || < ε20 . As
x0 = a ∈ A and xn = b ∈ B this implies there exists an integer such that xk ∈ A and xk+1 ∈ B and we
have ||xk+1 − xk || < ε20 but this contradicts d(A, B) > ε0 . Therefore, K is connected.
Problem 10b. Take the topologist sine curve.
52 RAYMOND CHU

Problem 11. We claim that the family is uniformly bounded on [0, 1] and that it is equicontinuous on
[ k1 , 1] for any k ∈ N. Indeed, note that
ˆ 1 ˆ n
n 1 π
||Fn ||L∞ ≤ 1+ 2 x2
dx = 1 + 2
dt = 1 + arctan(n) ≤ 1 +
0 1 + n 0 1 + t 2
so the family is uniformly bounded. And we have by the fundamental theorem of calculus as fn are
continuous that
n
|Fn0 (x)| = |fn (x)| ≤ 1 +
1 + n2 x2
and on [ k1 , 1] we have
n
||Fn0 (x)||L∞ ( k1 ,1) ≤ 1 + 2
1 + nk2
Noting that
n
lim 2 = 0
n→∞ 1 + n
k2
we get that there exists a constant that depends only on k such that C = C(k)
||Fn0 (x)||L∞ (1/k,1) ≤ 1 + C(k)
So it is equicontinuous with lipschitz constant 1 + C(k). So by Arzela-Ascoli there exist a uniformly
(2)
convergent subsbequence on [ 21 , 1] denoted by nk to a function f1 (x). We can similarly find a uniformly
(3) (2)
convergent subsequence on [ 31 , 1] where nk ⊂ nk to a function f2 (x). Note that by uniqueness of limit
1
we have f1 (x) = f2 (x) for x ∈ [ 2 , 1], so we may as well call this limit f .Do this for all k. Then we define
(k)
nk := nk i.e. the diagonal subsequence. Then note Fn (0) = 0 for all n. Then define f (0) := 0 then
Fnk (x) → f (x) for all x ∈ [0, 1].
Problem 12. Let
F (y; t) := y 4 + ty 2 + t2 y
then note that as |y| → +∞ that we have F (y; t) → ∞. Therefore, there exists a compact set K(t) such
that for x ∈ / K(t) we have F (x) > 1 but F (0; t) = 0. Therefore, as F (y; t) is continuous for any fixed
t there exists a min of F (y; t) over K(t). This minimum is the global minimum since F (y; t) > F (0, 0)
for x ∈
/ K(t). So a global minimum exists on a compact subset. We can take a slightly larger compact
subset to ensure that the global minimum is an interior point then we must have ∂y F (y, t) = 0 i.e.
4y 3 + 2yt + t2 = 0
so there is at most 3 candidates for the global min which we denote by {y1 (t), y2 (t), y3 (t)}. But notice
that
2
∂yy F (y; t) = 12y 2 + 2t
so if t > 0 then F√ is uniformly convex

so

the minimum is unique.

So assume t ≤ 0. Then we have on
A1 (t) := (−∞, − 6−t ], A2 (t) := (− 6−t , 6−t ), and A3 (t) := [ 6−t , ∞) then the global mi cannot occur in
A3 (t) since if y ≥ 0 then F (−y; t) < F (y, t). And if the min occurs in A2 (t) then F (y; t)|A2 (t) is concave
so yi must be a max since critical points of concave functions are global √ maxs. Therefore, the global min
2
must occur in A3 (t). But on A3 (t) we have∂yy F (y; t) < 0 for x − −t6 so there is at most one zero.
Therefore, there is a unique global minimum yi (t).
53

16. Fall 2017


Problem 1b. Do grahm-schmidt on {1, x, x2 , x3 }.
Problem 2a. Indeed we have
det(AB − λI) = det(AA−1 (AB − λI)) = det(A−1 (AB − λI)A)
= det(BA − λI)
so AB and BA do have the same characteristic polynomial when A is invertible.
Problem 2b. Note that the set of complex invertible matrix are dense and that det(A) : Cn×n → C is
a continuous map since it is a polynomial of the entries of A. Indeed, given a matrix A we can consider
it as an operator on C then Schur’s Theorem tells us that there exists unitary matrix U and an upper
triangular matrix T such that
A = U ∗T U
then consider
1
Ak := U ∗ (T + I)U
n
then Ak → A in entry wise. And there exists an N ∈ N such that if k ≥ N then (Ak )ii 6= 0 for any
1 ≤ i ≤ n i.e. 0 is not an eigenvalue of Ak . Therefore, given an A ∈ Cn×n then we have a sequence
Ak → A such that Ak is invertible then
det(Ak B − λI) = det(BAk − λI)
for all k. Then taking limits along with the continuity of det gives
det(AB − λI) = det(BA − λI)
Problem 3. The matrix is diagonlizable so we get
(
x1 = αe5t + βe4t
x2 = αe5t + 2βe4t
for α, β ∈ R.
Problem 4a. Fix an arbitrary finite collection of {ei }i∈F then if
X
L := αi e#
i =0
i∈F

then for any i ∈ F we get


L(ei ) = 0 ⇒ αi = 0
since e#
j (ei )
= δij
Problem 4b. We claim this is a basis iff V is finite dimensional. Indeed if V is finite dimensional, then
given an T ∈ V ∗ i.e. T : V → R let {v1 , .., vn } be a basis of V then for any x ∈ V there exists unique
constant α1 , .., αn such that
Xn n
X n
X
T (x) = T ( αi vi ) = αi T (vi ) = T (vi )e#
i (x)
i=1 i=1 i=1

since e#
i (ej )
= αi so {e#
i }

spans V and is also linearly independent, so it is a basis of V when V is finite
dimensional.

Now assume for the sake of contradiction that {e#i } is a basis and V is infinite dimensional. Let {ei }i∈I
be a basis of V where I is an infinite counting set. Then consider the operator T : V → R defined via
Id(x) := x
Then fix any finite collection of {ei }i∈F then fix j ∈
/ F then
X
L(x) := αi ei
i∈F

has
L(ej ) = 0
54 RAYMOND CHU

/ F. Therefore, we cannot represent the Id operator with finite linear combinations of {e#
since j ∈ i } so it
is not a basis.
Problem 5a. Yes. We have 0 ∈ X then if f, g ∈ X then af + g ∈ X since there exists i1 , .., iN and
j1 , ..jM such that {af (ei1 ), ..., af (eiN )} and {g(ej1 ), ..., g(ejM )} span the image of f and g so the image of
af + g is a subset of the span of span{af (ei1 ), ..., af (eiN )} + span{g(ej1 ), ..., g(ejM )} which has at most
dimension N + M so af + g ∈ X
Problem 5b. No for I ∈ Y and −I ∈ Y but I − I = 0 ∈
/Y
Problem 5c. We claim X ∩ Y = ∅. Fix a basis {ei }i∈I of V . Then if f ∈ X then there exists ei1 , .., eiN
such that {f (ei1 ), .., f (eiN )} is a basis of im(f ). In particular this implies for k ∈
/ {i1 , .., iN } that there
exists constant α1 , .., αN , αk such that
N
X
αj f (eij ) + αk f (eik ) = 0
i=1

In particular, this implies {α1 ei1 , ..., αN eiN , αk eik } ∈ ker(f ) for all k ∈ / {i1 , .., iN } and there are infinitely
many such k and the set {α1 ei1 , ..., αN eiN , αk eik } and {α˜1 ei1 , ..., α˜N eiN , αm eim } are linearly independent
whenever im 6= ik and both are not in i1 , .., iN . So the kernel of f is infinite dimensional which implies
X ∩ Y = ∅.
Problem 6a. This is true by Spectral Theorem.
Problem 6b. This is false. We need it to be Hermitian or Normal to be able to apply the Spectral
Theorem. Take  
1+i 1
A=
1 1−i
then characteristic polynomial is (x − 1)2 but A − I 6= 0 so it only has one eigenvector. Therefore, it is
not diagnolizable.
Problem 6c. Consider the matrix  
1 2
A :=
2 2
then over R its characteristic polynomial is x2 − 3x − 2 = x2 − 2 over Z/3Z which does not admit any
roots over Z/3Z so it has no eigenvalues so it is not diagnolizable.
Problem 7. As an is decreasing we have the following inequality

X ∞
X ∞
X
n
an ≤ 2 a 2n ≤2 an
n=1 n=1 n=1
P∞ P∞
In particular, n=1 2n a2n converges iff n=1 an converges. Therefore, we must have 2n a2n → 0. Now
fix an k ∈ N then there exists an n such that k ∈ [2n , 2n+1 ] then we have from the decreasing condition
that
kak ≤ ka2n ≤ 2n+1 a2n = 2(2n a2n ) → 0
so we have kak → 0 since 2n a2n → 0.
Problem 8a. Assume L is discontinuous at x0 . Then there exists an ε0 > 0 such that there exists a
sequence xn → x0 and
|L(xn ) − L(x0 )| > ε0
In particular as L(xn ) = limx→xn f (x) this implies that there exists an N such that if d(x0 , y) < N1
then |L(x0 ) − L(y)| < ε20 . As yn → x0 we can find an N1 such that d(yn , x0 ) < 2N 1
. Then as L(yn ) =
limz→yn f (z) this means we can find an N2 such that if d(z, yn ) < N2 then d(f (z), L(yn )) < ε20 . Choose
1
1
z such that d(z, yn ) < min{ 2N , 1 } then d(z, yn ) ≤ N12 and d(z, x0 ) ≤ d(z, yn ) + d(yn , x0 ) < N1 so we
2 2N
have
ε0 ε0
|L(xn ) − L(x0 )| ≤ |L(xn ) − f (z)| + |f (z) − L(x0 )| < + = ε0
2 2
which is a contradiction.
55

S∞ S∞
Problem 8b. Let A := {x ∈ [a, b] : f (x) 6= L(x)} = n=1 {x ∈ [a, b] : |f (x) − L(x)| ≥ n1 } := n=1 An
We claim that each An is countable. Indeed, if not there exists an n such that An is uncountable thus
there exists a sequence {xk } ⊂ An with infinitely many distinct terms. Thus as [a, b] is compact there
exists a subsequence such that xk converges to a limit x. We still denote this subsequence as xk . Then
we have (
L(xk ) → L(x)
f (xk ) → L(x)
since xk → x but then this implies by uniform continuity of L that there exists a δ > 0 if d(xk , x) < δ
then (
1
|L(xk ) − L(x)| < 3n
1
|f (xk ) − L(x)| < 3n
choose large enough k such that d(xk , x) < δ then
2
|f (xk ) − L(xk )| ≤ |f (xk ) − L(x)| + |L(x) − L(xk )| ≤
3n
but
1
|f (xk ) − L(xk )| >
n
since xk ∈ An which is a contradiction. Therefore, there exists only countably many terms in each An .
Therefore, as A is a countable union of countable sets it is countable.
Problem 8c. Note that (b) implies that f is continuous except on a countable set. We claim that any bounded function that is continuous except on a countable set is Riemann integrable. Fix ε > 0 and α > 0. Let ω(f, I) := sup_{x,y∈I} |f(x) − f(y)| denote the oscillation of f on the interval I, and ω(f, x) := lim_{r→0} ω(f, B_r(x)). Let D := {x : ω(f, x) ≥ α}; this set is contained in the countable set of discontinuities, so we may enumerate D ⊂ {q_n}_{n=1}^{∞} and consider

I_n := B_{ε/2^n}(q_n).

For each x ∉ D choose ε_x > 0 such that ω(f, B_{ε_x}(x)) < α. Then

[a, b] ⊂ ⋃_{n=1}^{∞} I_n ∪ ⋃_{x∈[a,b]∖D} B_{ε_x}(x),

so compactness ensures there exists a finite subcover, say

[a, b] ⊂ ⋃_{j=1}^{N} I_{i_j} ∪ ⋃_{i=1}^{M} B_{ε_i}(x_i).

Let J_i := B_{ε_i}(x_i), and let P be any partition that contains the points

⋃_{j=1}^{N} {q_{i_j} − ε/2^{i_j}, q_{i_j} + ε/2^{i_j}} ∪ ⋃_{i=1}^{M} {x_i − ε_i, x_i + ε_i}.

Write P = {a = t_0 < t_1 < ... < t_K = b} and let ∆t_i := t_i − t_{i−1}, T_i := [t_{i−1}, t_i]. Then each T_i is contained in some I_{i_j} or in some J_i. Let M_0 := ||f||_{L^∞} < ∞. The intervals T_i contained in some I_{i_j} have total length at most ∑_{j=1}^{N} 2ε/2^{i_j} ≤ 4ε, and on them ω(f, T_i) ≤ 2M_0; the remaining T_i are contained in some J_i, so ω(f, T_i) ≤ ω(f, J_i) < α, and being disjoint subintervals of [a, b] they have total length at most b − a. Hence

∑_{i=1}^{K} ∆t_i ω(f, T_i) ≤ 8εM_0 + α(b − a).

Choosing α = ε, the upper and lower sums over P differ by at most ε(8M_0 + b − a) = Cε for a constant C > 0, so f is Riemann integrable.

Problem 9. Fix x ∈ X and n, m ∈ N with m > n, say m = n + k. Then

ρ(f^n(x), f^{n+k}(x)) ≤ ∑_{i=0}^{k−1} ρ(f^{n+i}(x), f^{n+i+1}(x)) ≤ ρ(x, f(x))[c_n + c_{n+1} + ... + c_{n+k−1}],

and as ∑_{n=1}^{∞} c_n < ∞ and c_n ≥ 0, the sequence {f^n(x)} =: {x_n} is Cauchy, so completeness implies it has a limit x^*. By continuity of f (since f is Lipschitz with constant c_1) we have

lim_{n→∞} f^{n+1}(x) = lim_{n→∞} f(x_n) = f(x^*)

and

lim_{n→∞} f^{n+1}(x) = lim_{n→∞} f^n(x) = x^*,

so f(x^*) = x^*. Uniqueness: if x and y were two distinct fixed points then ρ(x, y) = ρ(f^n(x), f^n(y)) ≤ c_n ρ(x, y), so c_n ≥ 1 for all n, contradicting ∑ c_n < ∞.
Problem 10. As [a, b] is compact and f ∈ C([a, b]), Stone–Weierstrass implies there exists a sequence of polynomials p_n → f uniformly. For each p_n we have

∫_a^b f(x) p_n(x) dx = 0,

so by uniform convergence

0 = lim_{n→∞} ∫_a^b f(x) p_n(x) dx = ∫_a^b lim_{n→∞} f(x) p_n(x) dx = ∫_a^b f^2(x) dx,

i.e. f = 0 everywhere due to continuity.
Problem 11. Note that x ↦ log(x) is concave on (0, ∞). We may assume a, b ≠ 0, for otherwise the inequality is trivial. Since 1/p + 1/q = 1, concavity gives

log(a^p/p + b^q/q) ≥ (1/p) log(a^p) + (1/q) log(b^q) = log(a) + log(b).

Exponentiating both sides gives

a^p/p + b^q/q ≥ ab,

as desired.
Problem 12. See Fall 2014 number 10 and 11.

17. Spring 2018


Problem 1. Let V := C^∞([0, 2]) and let L : V → V be defined for f ∈ C^∞([0, 2]) by

L(f) = f′.

Then L(e^{kt}) = k e^{kt}, i.e. e^{kt} is an eigenvector with eigenvalue k. We claim {e^{kt}}_{k=1}^{n} is linearly independent. This is trivially true for n = 1, so assume it holds for n − 1. If

∑_{k=1}^{n} α_k e^{kt} = 0 for all t ∈ [0, 2],

then applying L gives

∑_{k=1}^{n} k α_k e^{kt} = 0,

while multiplying the first identity by n gives

∑_{k=1}^{n} n α_k e^{kt} = 0.

Subtracting these gives

∑_{k=1}^{n−1} (n − k) α_k e^{kt} = 0,

which by the induction hypothesis implies α_k (n − k) = 0 for all 1 ≤ k ≤ n − 1; as n ≠ k we get α_1 = ... = α_{n−1} = 0, and then the original relation forces α_n = 0 as well.
Problem 2. As A^2 = A we have for any x ∈ R^5 that

x = Ax + (x − Ax),

and x − Ax ∈ ker(A) since A^2 = A, i.e.

R^5 = range(A) ⊕ ker(A).

Now we claim A|_{range(A)} = Id. Indeed, if u ∈ range(A) there exists a w such that u = Aw, so Au = A^2 w = Aw = u. Therefore, if I − (A + B) is invertible, then for any x ∈ range(A) with x ≠ 0 we have

(I − (A + B))x = x − Ax − Bx = −Bx ≠ 0.

In particular range(A) ∩ ker(B) = {0}, so the sum range(A) + ker(B) is direct and rank(A) + (5 − rank(B)) ≤ 5, i.e. rank(A) ≤ rank(B). Repeating the same argument with the roles of A and B exchanged gives rank(B) ≤ rank(A), so rank(A) = rank(B).
Problem 3. Take

A = [[0, 1], [0, 0]].

Then A^2 = 0, so A^n = 0 for all n ≥ 2. Therefore

e^A = I + A = I + A + A^2/2,

but A ≠ 0.
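A quick numerical companion to this example, assuming A = [[0,1],[0,0]] as above: since A^2 = 0 the exponential series terminates and exp(A) = I + A even though A ≠ 0.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(np.allclose(A @ A, 0))                 # True: A is nilpotent
print(np.allclose(expm(A), np.eye(2) + A))   # True: exp(A) = I + A
```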
Problem 4. All of these matrices have only the eigenvalue 1, so they have a real Jordan canonical form, and they are similar iff they have the same Jordan canonical form. By computation, A, B, C, D, E all have minimal polynomial (x − 1)^2, so they all have Jordan form

[[1, 1, 0], [0, 1, 0], [0, 0, 1]],

but F's Jordan form is

[[1, 1, 0], [0, 1, 1], [0, 0, 1]].

So A, B, C, D and E are all similar to one another, while F is not similar to any of them.
Problem 5a. Note that A is positive definite iff

∑_{i,j=1}^{2} ξ_i A_{ij} ξ_j > 0

for all ξ = [ξ_1, ξ_2]^T ≠ 0. Then if A and B are positive definite we have

∑_{i,j=1}^{2} ξ_i (A_{ij} + B_{ij}) ξ_j = ∑_{i,j=1}^{2} ξ_i A_{ij} ξ_j + ∑_{i,j=1}^{2} ξ_i B_{ij} ξ_j > 0

for all ξ ≠ 0, since A and B are positive definite. Therefore A + B is positive definite.
Problem 5b.
Problem 6.
Problem 7. By Dirichlet's test, since 1/n → 0 monotonically, it suffices to show that for any p there exists an M = M(p) such that for every N

|∑_{n=1}^{N} sin(πn/p)| ≤ M(p).

By Euler's identity, sin(θ) = (e^{iθ} − e^{−iθ})/(2i), so, summing two geometric series,

∑_{n=1}^{N} sin(πn/p) = (1/2i) ∑_{n=1}^{N} (e^{iπn/p} − e^{−iπn/p})
= (1/2i) [ e^{iπ/p}(1 − e^{iπN/p})/(1 − e^{iπ/p}) − e^{−iπ/p}(1 − e^{−iπN/p})/(1 − e^{−iπ/p}) ].

Since the denominators are never zero, this is bounded in absolute value by

(1/2) [ 2/|1 − e^{iπ/p}| + 2/|1 − e^{−iπ/p}| ] =: M(p) < ∞,

so the partial sums are bounded and the series converges for any p.
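A numerical illustration of the bound, assuming for concreteness p = 7 (hypothetical choice): the partial sums of sin(πn/p) stay below M(p), so by Dirichlet's test the series with the 1/n factor converges.

```python
import numpy as np

p = 7
n = np.arange(1, 200001)
terms = np.sin(np.pi * n / p)
partial = np.cumsum(terms)
M_p = 2.0 / abs(1 - np.exp(1j * np.pi / p))      # the bound M(p) from the estimate above
print(np.max(np.abs(partial)), M_p)              # observed maximum is below M(p)
print(np.cumsum(terms / n)[-1])                  # the partial sums of the series stabilize
```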
Problem 8. Let us follow the hint. We first claim x_n → 0. By induction 0 ≤ x_n ≤ 1: the base case is given, and if 0 ≤ x_n ≤ 1 then 0 ≤ sin(x_n) = x_{n+1} ≤ 1. Moreover Taylor's theorem with remainder implies x_{n+1} = sin(x_n) ≤ x_n, so {x_n} is a bounded monotone sequence and hence converges, say to x. By continuity of sin,

x = lim_{n→∞} x_{n+1} = lim_{n→∞} sin(x_n) = sin(x),

and the only fixed point of sin in [0, 1] is 0 (again by Taylor's theorem with remainder), so x_n → 0.

Now we proceed with the hint: we claim that

lim_{x→0} [ 1/sin^2(x) − 1/x^2 ] = 1/3.

Indeed, for fixed x Taylor's theorem with remainder gives sin(x) = x − (x^3/6) cos(ξ(x)) for some ξ(x) ∈ (0, x), so

1/sin^2(x) − 1/x^2 = (x^2 − sin^2(x)) / (x^2 sin^2(x))
= [ (x^4/3) cos(ξ(x)) − (x^6/36) cos^2(ξ(x)) ] / [ x^4 (1 − (x^2/3) cos(ξ(x)) + (x^4/36) cos^2(ξ(x))) ] → 1/3.

In particular, since x_k → 0,

1/x_{k+1}^2 − 1/x_k^2 = 1/sin^2(x_k) − 1/x_k^2 → 1/3.

Now we claim that if a_{n+1} − a_n → L then a_n/n → L. Indeed,

a_n/n − L = [ ∑_{i=1}^{n−1} (a_{i+1} − a_i) + a_1 − nL ] / n = [ ∑_{i=1}^{n−1} (a_{i+1} − a_i − L) + (a_1 − L) ] / n.

So for any ε > 0 there is an N ∈ N such that |a_{i+1} − a_i − L| < ε for i ≥ N, and

|a_n/n − L| ≤ (1/n) ∑_{i=N}^{n−1} |a_{i+1} − a_i − L| + (1/n) [ ∑_{i=1}^{N−1} |a_{i+1} − a_i − L| + |a_1 − L| ] ≤ ε(n − N)/n + MN/n,

where M := max{ max_{i<N} |a_{i+1} − a_i − L|, |a_1 − L| }. Taking n sufficiently large we get |a_n/n − L| ≤ 2ε, so a_n/n → L.

Applying this with a_n := 1/x_n^2 gives

1/(n x_n^2) → 1/3,

i.e. n x_n^2 → 3, i.e. √n · x_n → √3,

as desired.
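A quick numerical check, assuming the (hypothetical) starting value x_0 = 1: iterating x_{n+1} = sin(x_n) should give √n·x_n close to √3 ≈ 1.732.

```python
import math

x = 1.0
N = 10**6
for _ in range(N):
    x = math.sin(x)        # x is now x_N
print(math.sqrt(N) * x)    # ~1.73, i.e. close to sqrt(3)
```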
Problem 9. Fix an interval [a, b] and ε > 0, and take any partition P := {x_0 = a < ... < x_n = b} with uniform step size ∆x < ε; let I_i := [x_{i−1}, x_i] and ω(f, I_i) := sup_{x,y∈I_i} |f(x) − f(y)|. Since f is monotone (say increasing), ω(f, I_i) = f(x_i) − f(x_{i−1}), so

∑_{i=1}^{n} ∆x · ω(f, I_i) = ∆x ∑_{i=1}^{n} (f(x_i) − f(x_{i−1})) = ∆x (f(b) − f(a)) ≤ 2Mε,

where M := ||f||_{L^∞[a,b]}. As ε is arbitrary, f is Riemann integrable.
Problem 10. Fix x ∈ U and let A := f^{−1}(f(x)) ∩ U, the preimage of f(x) under f intersected with U. Then A is closed in U since f is continuous (it is C^1, as the partials are continuous on U). We claim A is also open. Indeed, fix z ∈ A; then z ∈ U, so there exists ε > 0 such that B_ε(z) ⊂ U. For any y ∈ B_ε(z) we have tz + (1 − t)y ∈ B_ε(z) for 0 ≤ t ≤ 1 since balls are convex. In particular

g(t) := f(tz + (1 − t)y)

is C^1 and g(1) − g(0) = g′(ξ) for some ξ ∈ (0, 1). But g′(t) = ∇f(tz + (1 − t)y) · (z − y) = 0, so g(1) = g(0), i.e. f(y) = f(z) = f(x). Therefore A is open. As U is connected and A ≠ ∅, we must have A = U, i.e. f(y) = f(x) for all y ∈ U, i.e. f is constant.
Problem 11. As X is compact and f is continuous, f(X) is compact, hence closed: f(X) = cl(f(X)). Now fix x ∈ X and consider x_n := f(x_{n−1}) with x_0 := x. As X is compact there exists a subsequence x_{n_k} → y, which is in particular Cauchy, so for any ε > 0 and large enough k,

ε > ρ(x_{n_k}, x_{n_{k+1}}) = ρ(f^{n_k}(x), f^{n_{k+1}}(x)) = ρ(x, f^{n_{k+1}−n_k}(x)),

using that f preserves distances. So for z := f^{n_{k+1}−n_k−1}(x) we have ρ(x, f(z)) < ε, i.e. x ∈ cl(f(X)) = f(X). In particular X ⊂ f(X); the other inclusion is trivial, so X = f(X), i.e. f is surjective.
Problem 12. Let ε > 0 then by equicontinuity there exists a δ > 0 such that if d(x, y) < δ then for any
f ∈ F we have d(f (x), f (y)) < ε so choose x ∈ X and let y ∈ X such that d(x, y) < δ then there exists
an f1 ∈ F such that
g(x) ≤ f1 (x) + ε ≤ f1 (y) + 2ε ≤ g(y) + 2ε
i.e.
g(x) − g(y) < 2ε
and we also have the existence of an f2 such that
g(y) < f2 (y) + ε ≤ f2 (x) + 2ε ≤ g(x) + 2ε
i.e.
|g(x) − g(y)| ≤ 2ε
whenever d(x, y) < δ so g is uniformly continuous.

18. Fall 2018


Problem 1. Assume that

∑_{n=1}^{∞} a_n/(2a_n + 1)

converges. Then we must have a_n/(2a_n + 1) → 0, i.e. a_n → 0, so there exists an N such that 2a_n + 1 ≤ 3 for n ≥ N. Therefore, for any fixed ε > 0 there is an M ≥ N such that for any m ≥ n ≥ M

ε ≥ ∑_{j=n}^{m} a_j/(2a_j + 1) ≥ ∑_{j=n}^{m} a_j/3.

Therefore, as the a_j are non-negative, ∑ a_j satisfies the Cauchy criterion and converges, which is a contradiction.
Problem 2. Assume A ∪ B admits a separation

A ∪ B = X ∪ Y,  cl(X) ∩ Y = X ∩ cl(Y) = ∅, X, Y ≠ ∅.

As A is connected we have A ⊂ X or A ⊂ Y; WLOG assume A ⊂ X, so that Y ⊂ B. Then R^n = X ∪ Y ∪ C, and

(X ∪ C) ∩ cl(Y) = (X ∩ cl(Y)) ∪ (C ∩ cl(Y)) ⊂ ∅ ∪ (C ∩ cl(B)) = ∅

and

cl(X ∪ C) ∩ Y ⊂ (cl(X) ∩ Y) ∪ (cl(C) ∩ Y) ⊂ ∅ ∪ (cl(C) ∩ B) = ∅,

so X ∪ C and Y would separate R^n; but R^n is connected, so this is a contradiction.
Problem 3. Let f and g be Riemann integrable and suppose there is an α > 0 with

|g(x) − g(y)| ≥ α|x − y|.

Then g is injective, since g(x) = g(y) gives 0 = |g(x) − g(y)| ≥ α|x − y|, i.e. x = y. So we can define its inverse g^{−1} on im(g), and this inverse is Lipschitz with constant 1/α since

|x − y| = |g(g^{−1}(x)) − g(g^{−1}(y))| ≥ α|g^{−1}(x) − g^{−1}(y)|.

Now we just need to show the set of discontinuities of f ∘ g has measure zero. Let E := {x : f is discontinuous at x}; since f is Riemann integrable, E has measure zero, and we want to show g^{−1}(E ∩ im(g)) has measure zero. Since E ∩ im(g) has measure zero, for any ε > 0 there are open intervals (a_n, b_n) such that

E ∩ im(g) ⊂ ⋃_{n=1}^{∞} (a_n, b_n) with ∑_{n=1}^{∞} (b_n − a_n) ≤ ε.

As g^{−1} is (1/α)-Lipschitz, g^{−1}((a_n, b_n) ∩ im(g)) is contained in an interval (c_n, d_n) with d_n − c_n ≤ (1/α)(b_n − a_n), so

g^{−1}(E ∩ im(g)) ⊂ ⋃_{n=1}^{∞} (c_n, d_n) with ∑_{n=1}^{∞} (d_n − c_n) ≤ (1/α) ∑_{n=1}^{∞} (b_n − a_n) ≤ ε/α.

Letting ε → 0 shows g^{−1}(E ∩ im(g)) has measure zero. Since g is Riemann integrable, its own set of discontinuities also has measure zero, and f ∘ g is continuous at every point where g is continuous and f is continuous at g(x); hence the set of discontinuities of f ∘ g is contained in the union of these two measure-zero sets, so it has measure zero and f ∘ g is Riemann integrable.
Problem 4. We claim that f is concave, i.e. we have the following secant line inequality: if x < z_t < y then

(f(x) − f(z_t))/(x − z_t) ≥ (f(x) − f(y))/(x − y) ≥ (f(y) − f(z_t))/(y − z_t).

Indeed, as f is differentiable on (x, y), the MVT gives a ξ_1 ∈ (x, z_t) with (f(x) − f(z_t))/(x − z_t) = f′(ξ_1) and a ξ_2 ∈ (z_t, y) with (f(y) − f(z_t))/(y − z_t) = f′(ξ_2); since ξ_1 ≤ ξ_2 and f′ is decreasing,

(f(x) − f(z_t))/(x − z_t) ≥ (f(y) − f(z_t))/(y − z_t),

and the middle slope lies between the two outer slopes because it is a convex combination of them. As y ∈ (0, 1) there exists ε_0 > 0 such that y + ε_0 ∈ (0, 1), and for 0 ≤ ε ≤ ε_0

(f(x) − f(z_t))/(x − z_t) ≥ (f(y + ε) − f(z_t))/(y + ε − z_t) =: L(ε).

L is well defined and continuous for 0 ≤ ε ≤ ε_0, so

(f(x) − f(z_t))/(x − z_t) ≥ lim_{ε→0} L(ε) = (f(y) − f(z_t))/(y − z_t),

and a similar argument gives the full chain

(f(x) − f(z_t))/(x − z_t) ≥ (f(x) − f(y))/(x − y) ≥ (f(y) − f(z_t))/(y − z_t).

Now let 0 < x_1 < x_2 < 1. The secant line inequality with x = 0, z_t = x_1, y = x_2, together with f(0) = 0, gives

f(x_1)/x_1 ≥ f(x_2)/x_2,

i.e. g(x) := f(x)/x is a decreasing function on (0, 1).
Problem 5a. Fix x, y ∈ ∂B. As

h(z) := g(z) + |x − z|

is a continuous map on the compact set ∂B, it attains a minimum, i.e. there exists an x^* such that

h(x^*) = inf_{z∈∂B} [g(z) + |x − z|],

i.e. f(x) = g(x^*) + |x − x^*|.
and then we have that
f (y) ≤ g(x∗ ) + |y − x∗ |
since f (y) ≤ g(z) + |y − z| for any z ∈ ∂B. Then
f (y) − f (x) ≤ g(x∗ ) + |y − x∗ | − g(x∗ ) − |x − x∗ | = |y − x∗ | − |x − x∗ | ≤ |x − y|
We can repeat a similar argument to get
f (x) − f (y) ≤ |x − y|
which implies
|f (x) − f (y)| ≤ |x − y|
so f is 1-Lipschitz.
Problem 5b. By Arzelà–Ascoli it suffices to show M(g) is equicontinuous, closed, and uniformly bounded. It is closed: if f_n ∈ M(g) converge to f uniformly, then f|_{∂B} = g since uniform convergence implies pointwise convergence, and f is 1-Lipschitz since

|f(x) − f(y)| ≤ |f(x) − f_n(x)| + |f_n(x) − f_n(y)| + |f_n(y) − f(y)|,

and for any ε > 0 we can find an N such that ||f_n − f||_{L^∞} < ε/2 for n ≥ N, so

|f(x) − f(y)| ≤ ε + |f_n(x) − f_n(y)| ≤ ε + |x − y|;

as ε is arbitrary, f is 1-Lipschitz, i.e. M(g) is a closed subset of C(B).

Now we claim that any f ∈ M(g) is 1-Lipschitz on all of B. We know f is 1-Lipschitz on int(B) and on ∂B, so it suffices to show that if x ∈ int(B) and y ∈ ∂B then |f(x) − f(y)| ≤ |x − y|. For t ∈ (0, 1) define

ℓ(t) := f(ty + (1 − t)x),

so ℓ(0) = f(x), ℓ(1) = f(y), and ty + (1 − t)x ∈ int(B) since ||ty + (1 − t)x|| ≤ t||y|| + (1 − t)||x|| = t + (1 − t)||x|| < 1 for t ≠ 1. Then, using the Lipschitz bound in the interior,

|f(x) − ℓ(t)| ≤ t|x − y| ≤ |x − y|;

letting t → 1 and using continuity of ℓ we get |f(x) − f(y)| ≤ |x − y|.

Let M := sup_{x∈∂B} |g(x)|. Then for any f ∈ M(g), any y ∈ B and any x ∈ ∂B,

|f(y)| ≤ |f(y) − f(x)| + |f(x)| ≤ |y − x| + M ≤ 2 + M,

so M(g) is uniformly bounded; and 1-Lipschitz functions are clearly equicontinuous, so by Arzelà–Ascoli we conclude M(g) is a compact subset of C(B).
Problem 6. We will show that F is C^1. First note that both integrals below are finite: since e^{−tx} is convex in t, its tangent line at t = 0 lies below it, so 1 − e^{−tx} ≤ tx, and therefore

∫_0^∞ (1 − e^{−tx})/t^{3/2} dt ≤ ∫_0^1 tx/t^{3/2} dt + ∫_1^∞ 1/t^{3/2} dt = 2x + 2,

and

∫_0^∞ e^{−tx}/t^{1/2} dt ≤ ∫_0^1 1/t^{1/2} dt + ∫_1^∞ e^{−tx} dt = 2 + e^{−x}/x,

which is finite for any x ∈ (0, ∞). By Taylor's theorem with remainder, e^{−t(x+h)} = e^{−tx} − hte^{−tx} + (h^2 t^2/2) e^{−tξ(t)} for some ξ(t) between x and x + h, so

(F(x + h) − F(x))/h = (1/h) ∫_0^∞ (e^{−tx} − e^{−t(x+h)})/t^{3/2} dt
= (1/h) ∫_0^∞ (hte^{−tx} − (h^2 t^2/2) e^{−tξ(t)})/t^{3/2} dt
= ∫_0^∞ e^{−tx}/t^{1/2} dt + O(h),

since the O(h^2) term is integrable (for |h| < x/2 we have e^{−tξ(t)} ≤ e^{−tx/2}). So

F′(x) = ∫_0^∞ e^{−tx}/t^{1/2} dt.

Moreover F′ is continuous, since the same expansion gives

F′(x + h) − F′(x) = ∫_0^∞ (−hte^{−tx} + O(h^2))/t^{1/2} dt,

and

|h| ∫_0^∞ t^{1/2} e^{−tx} dt = |h| C(x) → 0 as h → 0,

where C(x) := ∫_0^∞ t^{1/2} e^{−tx} dt < ∞. F is injective since F′ > 0, and its inverse is well defined on range(F) and is C^1 thanks to

(F^{−1})′(F(x)) = 1/F′(x)

and F′(x) ≠ 0. It is easy to see lim_{x→0} F(x) = 0 and lim_{x→+∞} F(x) = +∞, so F is actually a bijection onto (0, ∞).
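A small numerical sanity check, assuming F(x) = ∫_0^∞ (1 − e^{−tx}) t^{−3/2} dt as above: a centered difference quotient of F should match the formula for F′ (analytically both equal √(π/x), a fact not needed in the proof).

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    f = lambda t: (1 - np.exp(-t * x)) / t**1.5
    return quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]

def Fprime(x):
    f = lambda t: np.exp(-t * x) / np.sqrt(t)
    return quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]

x, h = 2.0, 1e-4
print((F(x + h) - F(x - h)) / (2 * h))   # difference quotient of F
print(Fprime(x))                          # claimed derivative; the two agree closely (~1.2533)
```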

Problem 7. We claim that

R^n = range(T) ⊕ ker(T).

Indeed, given x ∈ R^n we have x = Tx + (x − Tx) and x − Tx ∈ ker(T), so it suffices to show range(T) ∩ ker(T) = {0}. If x ∈ range(T) ∩ ker(T) there is a y with Ty = x, and Tx = 0 implies T^2 y = 0; but T^2 y = Ty, so 0 = Ty = x. So the claim is proved.

Now we claim that T|_{range(T)} = Id. Indeed, if x ∈ range(T) then there is a y with T(y) = x, so T(x) = T^2(y) = T(y) = x. Now fix a basis of range(T) and extend it by a basis of ker(T), i.e. β := {v_1, ..., v_m, w_1, ..., w_{n−m}} with v_i ∈ range(T) and w_i ∈ ker(T). On this basis,

[Tv_1, Tv_2, ..., Tv_m, Tw_1, ..., Tw_{n−m}] = [v_1, v_2, ..., v_m, 0, ..., 0],

i.e. in block form

[T]_β = [[Id, 0], [0, 0]],

where Id is the m × m identity block and m = rank(T). This is the desired basis.
Problem 8. As X is symmetric, the spectral theorem gives a real orthogonal matrix U = (u_{ij}) with

X = U D U^T,

where D = diag(λ_1, ..., λ_n) and λ_i ∈ R. Then for z with Im(z) > 0 we have X − zI = U(D − zI)U^T, so

G := (X − zI)^{−1} = U D̃ U^T,

where D̃ = diag(1/(λ_1 − z), ..., 1/(λ_n − z)); note λ_i − z ≠ 0 since λ_i ∈ R and Im(z) > 0. Hence G_{ij} = ∑_k u_{ik} u_{jk}/(λ_k − z), and by orthonormality of the columns of U,

∑_{j=1}^{n} |G_{ij}|^2 = ∑_{j=1}^{n} G_{ij} conj(G_{ij}) = ∑_{k=1}^{n} u_{ik}^2 / |λ_k − z|^2.

On the other hand

G_{ii} = ∑_{k=1}^{n} u_{ik}^2 / (λ_k − z),  so  Im(G_{ii}) = ∑_{k=1}^{n} u_{ik}^2 · Im(z)/|λ_k − z|^2.

Therefore

Im(G_{ii})/Im(z) = ∑_{k=1}^{n} u_{ik}^2 / |λ_k − z|^2 = ∑_{j=1}^{n} |G_{ij}|^2,

as desired.
Problem 9. We claim ker(f) + ker(g) = R^n; it suffices to show dim(ker(f) + ker(g)) = n. As f, g ∈ V^* are linearly independent we have f, g ≠ 0, so dim(ker(f)) = dim(ker(g)) = n − 1 since im(f) = im(g) = R. Moreover ker(f) ≠ ker(g): if ker(f) = ker(g) then f = cg for some constant c, contradicting linear independence. Therefore ker(f) ∩ ker(g) is a proper subspace of ker(f), so dim(ker(f) ∩ ker(g)) ≤ n − 2, and

dim(ker(f) + ker(g)) = dim(ker(f)) + dim(ker(g)) − dim(ker(f) ∩ ker(g)) ≥ (n − 1) + (n − 1) − (n − 2) = n,

so ker(f) + ker(g) = R^n. This means that for any v ∈ R^n there exist v_2 ∈ ker(f) and v_1 ∈ ker(g) such that v = v_1 + v_2. Therefore, by linearity,

f(v) = f(v_1) + f(v_2) = f(v_1)  and  g(v) = g(v_1) + g(v_2) = g(v_2),

as desired.

Problem 10. We diagonalize the matrix as

A = [[1,0,0],[1,1,0],[1,2,1]] · diag(1, 1/2, 1/3) · [[1,0,0],[−1,1,0],[1,−2,1]],

so

A^n = [[1,0,0],[1,1,0],[1,2,1]] · diag(1, 1/2^n, 1/3^n) · [[1,0,0],[−1,1,0],[1,−2,1]]
    = [[1, 0, 0], [1 − 1/2^n, 1/2^n, 0], [1 − 2/2^n + 1/3^n, 2/2^n − 2/3^n, 1/3^n]]
    → [[1, 0, 0], [1, 0, 0], [1, 0, 0]]  as n → ∞.
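A quick numerical companion, assuming only the factorization written above (multiplying it out gives A = [[1,0,0],[1/2,1/2,0],[1/3,1/3,1/3]]):

```python
import numpy as np

P = np.array([[1, 0, 0], [1, 1, 0], [1, 2, 1]], dtype=float)
D = np.diag([1, 1/2, 1/3])
Pinv = np.array([[1, 0, 0], [-1, 1, 0], [1, -2, 1]], dtype=float)

A = P @ D @ Pinv
print(np.allclose(P @ Pinv, np.eye(3)))     # True: the inverse used above is correct
print(A)                                    # [[1,0,0],[0.5,0.5,0],[1/3,1/3,1/3]]
print(np.linalg.matrix_power(A, 50))        # ~ [[1,0,0],[1,0,0],[1,0,0]]
```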
Problem 11. If we let W be the set of 3 × 3 symmetric matrices, then

R^{3×3} = V ⊕ W,

since A = (A + A^T)/2 + (A − A^T)/2 and V ∩ W = {0}. In particular dim(V) = 3: the six matrices

E_{11}, E_{22}, E_{33}, E_{12} + E_{21}, E_{13} + E_{31}, E_{23} + E_{32}

are linearly independent elements of W, and the three matrices

N_1 := [[0,0,0],[0,0,−1],[0,1,0]],  N_2 := [[0,1,0],[−1,0,0],[0,0,0]],  N_3 := [[0,0,1],[0,0,0],[−1,0,0]]

are linearly independent elements of V; since 9 = dim(V) + dim(W), we get dim(W) = 6, dim(V) = 3, and {N_1, N_2, N_3} is a basis of V.

Next, ⟨A, B⟩ := (1/2) Tr(AB^T) is an inner product. It is linear in the first argument since

⟨A + λB, C⟩ = (1/2) Tr((A + λB)C^T) = (1/2) Tr(AC^T) + (λ/2) Tr(BC^T)

by linearity of the trace; it is symmetric since

⟨A, B⟩ = (1/2) Tr(AB^T) = (1/2) Tr((AB^T)^T) = (1/2) Tr(BA^T) = ⟨B, A⟩;

and it is positive definite since

⟨A, A⟩ = (1/2) Tr(AA^T) = (1/2) ∑_{i,j=1}^{3} |a_{ij}|^2 ≥ 0,

with equality iff A = 0. To find an orthonormal basis, apply Gram–Schmidt to the basis {N_1, N_2, N_3} above (in fact these are already orthonormal for this inner product).
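A small check of the last remark, assuming the three antisymmetric basis matrices listed above: their Gram matrix under ⟨A, B⟩ = (1/2)Tr(AB^T) is the identity, so no Gram–Schmidt work is actually needed.

```python
import numpy as np

def inner(A, B):
    return 0.5 * np.trace(A @ B.T)

N1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
N2 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0]], dtype=float)
N3 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)

gram = np.array([[inner(X, Y) for Y in (N1, N2, N3)] for X in (N1, N2, N3)])
print(gram)   # the 3x3 identity: an orthonormal basis of V
```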
Problem 12. Fix a basis {v_1, ..., v_n} of V and let W_i := span{v_1, ..., v_{i−1}, v_{i+1}, ..., v_n}. Since T(W_i) ⊂ W_i for each i, we have T(v_j) ∈ W_i whenever j ≠ i, i.e. the v_i-coefficient of T(v_j) vanishes for every i ≠ j. Hence

T(v_j) = α_j v_j for some scalar α_j,

so in this basis T = diag(α_1, ..., α_n), and it suffices to show α_i = α_j for all i, j. Fix i ≠ j, let E_{ij} := span{v_i + v_j} and W_{ij} := span{v_k : k ≠ i, j}, and set M := E_{ij} + W_{ij}, which is (n − 1)-dimensional. Since T(M) ⊂ M we can write

T(v_i + v_j) = α(v_i + v_j) + ∑_{k≠i,j} β_k v_k,

but also T(v_i + v_j) = α_i v_i + α_j v_j, so β_k = 0 for all k ≠ i, j and α = α_i = α_j. Applying this to every pair i ≠ j (e.g. with i fixed at 1 and 2 ≤ j ≤ n) shows all the α_i equal a common value α, i.e. T v_i = α v_i for all i, so T is a constant multiple of the identity.

19. Spring 2019


Problem 1. To show X is complete it suffices to show it is a closed subset of the complete metric space (C([0, 1]), ||·||_{L^∞}). Indeed, suppose f_n ∈ X and f_n → f uniformly. Fix ε > 0; then there exists an N such that ||f_n − f||_{L^∞} < ε/2 for n ≥ N, so

|f(x) − f(y)| ≤ |f(x) − f_n(x)| + |f_n(x) − f_n(y)| + |f_n(y) − f(y)| ≤ ε + |f_n(x) − f_n(y)| ≤ ε + |x − y|.

Letting ε → 0 gives |f(x) − f(y)| ≤ |x − y|, so f ∈ X; hence X is a closed subset of a complete metric space, so X is complete.

We will show that X is path connected, which implies it is connected. Fix f, g ∈ X and define

γ(t) := (1 − t)f + tg.

For any fixed t ∈ [0, 1] we have γ(t) ∈ X since

|γ(t)(x) − γ(t)(y)| ≤ (1 − t)|f(x) − f(y)| + t|g(x) − g(y)| ≤ |x − y|.

Moreover γ is continuous: for t_1, t_2 ∈ [0, 1],

||γ(t_1) − γ(t_2)||_{L^∞} ≤ |t_1 − t_2| (||f||_{L^∞} + ||g||_{L^∞}),

so given ε > 0 it suffices to take |t_1 − t_2| < ε/(||f||_{L^∞} + ||g||_{L^∞} + 1). Therefore X is path connected.
Problem 2. Note that

[[1/2, 1/2], [1, 0]] · (a_n, a_{n−1})^T = (a_{n+1}, a_n)^T.

So in particular

[[1/2, 1/2], [1, 0]]^n · (a, b)^T = (a_{n+1}, a_n)^T.

We can solve for a_n and compute lim_{n→∞} a_n directly by diagonalizing this matrix (its eigenvalues are 1 and −1/2).
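A numerical illustration, assuming the starting vector is (a_1, a_0) = (a, b) as in the display above; the concrete values of a and b below are hypothetical, chosen only for the demonstration. The limit works out to (2a + b)/3.

```python
import numpy as np

M = np.array([[0.5, 0.5], [1.0, 0.0]])
a, b = 5.0, -1.0                           # hypothetical starting values a_1, a_0
v = np.array([a, b])
print(np.linalg.matrix_power(M, 60) @ v)   # both entries ~ (2a + b)/3 = 3.0
print((2 * a + b) / 3)
```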
Problem 3. First observe that since f(x) ≥ δ, the function 1/f(x) is finite, and

1/f(x) − 1/f(y) = (f(y) − f(x))/(f(x)f(y)),

so

|1/f(x) − 1/f(y)| ≤ |f(y) − f(x)|/δ^2.

Now, as f is Riemann integrable, for any ε > 0 there exists a partition P = {a = x_0 < ... < x_n = b}, with ∆x_i := x_i − x_{i−1}, I_i := [x_{i−1}, x_i], and ω(f, I_i) := sup_{x,y∈I_i} |f(x) − f(y)|, such that

∑_{i=1}^{n} ∆x_i ω(f, I_i) ≤ δ^2 ε.

Then

∑_{i=1}^{n} ∆x_i ω(1/f, I_i) ≤ (1/δ^2) ∑_{i=1}^{n} ∆x_i ω(f, I_i) ≤ ε,

so the lower and upper Riemann sums of 1/f are within ε, and as ε is arbitrary we conclude that 1/f is Riemann integrable.
Problem 4. Let

ℓ(x) := g(x)(f(b) − f(a)) − f(x)(g(b) − g(a)).

Then ℓ(a) = ℓ(b) = f(b)g(a) − g(b)f(a), so Rolle's theorem implies there exists a ξ ∈ (a, b) such that ℓ′(ξ) = 0, i.e.

g′(ξ)(f(b) − f(a)) = f′(ξ)(g(b) − g(a)),

as desired.
Problem 5. Let X^* denote the completion of X and embed (X, d) into (X^*, d^*) via the identity map. We claim there is an isometric embedding of (X^*, d^*) into (C(X^*), ||·||_{L^∞}), which is a Banach space. Indeed, for x^* ∈ X^* define

Φ_{x^*}(y) := d^*(x^*, y).

Then Φ_{x^*} ∈ C(X^*), and by the triangle inequality

||Φ_{x^*} − Φ_{y^*}||_{L^∞} = sup_{y∈X^*} |d^*(x^*, y) − d^*(y^*, y)| ≤ d^*(x^*, y^*),

while taking y = y^* gives the reverse inequality, so

||Φ_{x^*} − Φ_{y^*}||_{L^∞} = d^*(x^*, y^*),

i.e. Φ is an isometric embedding of X^* into C(X^*). In particular, restricting to X and using that d^*(x, y) = d(x, y) for x, y ∈ X, the map x ↦ Φ_x = d(x, ·) is an isometric embedding of (X, d) into the Banach space C(X^*).
Problem 6. Note that, substituting u = x/n,

∫_0^∞ (x/n^2) e^{−x/n} dx = ∫_0^∞ u e^{−u} du = 1.

On the other hand, lim_{x→∞} x e^{−x/n} = 0 since exponential decay beats linear growth, so the maximum of f_n over [0, ∞) is attained on a compact set; at the maximum either x = 0 or ∂_x (x e^{−x/n}) = 0. The critical point x^* satisfies

e^{−x^*/n} − (x^*/n) e^{−x^*/n} = 0,

so 1 − x^*/n = 0, i.e. x^* = n, and

f_n(x^*) = (n/n^2) e^{−1} = 1/(en),

while f_n(0) = 0, so the maximum occurs at x^*. Therefore

||f_n||_{L^∞} = 1/(ne) → 0,

so f_n converges uniformly to 0 (even though ∫_0^∞ f_n = 1 for every n).
Problem 7. We compute the characteristic polynomial and find

χ(x) = −x^3 − 2x^2 + 13x − 1.

Cayley–Hamilton then gives

0 = χ(A) = −A^3 − 2A^2 + 13A − Id  ⇒  Id = A(−A^2 − 2A + 13 Id),

so A^{−1} = 13 Id − 2A − A^2.
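An illustration assuming only the characteristic polynomial above: the original matrix is not reproduced here, so as a stand-in we test the identity A^{−1} = 13I − 2A − A^2 on a companion matrix with the same characteristic polynomial.

```python
import numpy as np

# companion matrix of x^3 + 2x^2 - 13x + 1 (same characteristic polynomial up to sign)
A = np.array([[0.0, 0.0, -1.0],
              [1.0, 0.0, 13.0],
              [0.0, 1.0, -2.0]])
print(np.allclose(A @ (13 * np.eye(3) - 2 * A - A @ A), np.eye(3)))    # True
print(np.allclose(np.linalg.inv(A), 13 * np.eye(3) - 2 * A - A @ A))   # True
```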
Problem 8. We claim that this subspace is the space of trace-zero matrices and that its dimension is n^2 − 1. First, the trace-zero matrices have dimension n^2 − 1: let E^{(ij)} denote the matrix with a 1 in entry (i, j) and zeros elsewhere. The matrices E^{(ij)} with i ≠ j are trace zero, and there are n^2 − n of them; in addition, the n − 1 matrices

F^{(i)} := E^{(ii)} − E^{(i−1, i−1)}, 2 ≤ i ≤ n,

are trace zero. Together these n^2 − 1 matrices are linearly independent, so the space of trace-zero matrices has dimension at least n^2 − 1; and since Id has non-zero trace, the dimension cannot exceed n^2 − 1, so it equals n^2 − 1.

We now prove the two descriptions agree over R^{2×2}, so that the dimension in question is 3. Let U denote the subspace of trace-zero matrices. Clearly W ⊂ U: any C ∈ W is a linear combination of matrices of the form AB − BA, and tr(AB − BA) = tr(AB) − tr(BA) = 0.

For the reverse inclusion it suffices to show the basis E^{(12)}, E^{(21)}, E^{(22)} − E^{(11)} of U lies in W. Take the diagonal matrix D = diag(1, 0); then for any matrix B = (b_{ij}),

DB − BD = [[0, b_{12}], [−b_{21}, 0]],

so choosing b_{12} = 1, b_{21} = 0 gives E^{(12)} ∈ W, and choosing b_{12} = 0, b_{21} = −1 gives E^{(21)} ∈ W. Finally,

[[1, 0], [0, −1]] = [[0, 1], [0, 0]] · [[0, 0], [1, 0]] − [[0, 0], [1, 0]] · [[0, 1], [0, 0]],

so diag(1, −1) ∈ W as well. Hence U ⊂ W, so we have U ⊂ W and dim(W) = 3.
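A quick numerical check of the 2 × 2 case: vectorizing many random commutators AB − BA and taking the rank of the collection recovers dimension 3 = n^2 − 1 (the random matrices here are, of course, only an illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
comms = []
for _ in range(50):
    A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    comms.append((A @ B - B @ A).flatten())
print(np.linalg.matrix_rank(np.array(comms)))   # 3
```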
Problem 9. We write D in matrix form with respect to the standard basis {1, x, x^2, ..., x^{10}}: since D(x^k) = k x^{k−1}, D is the 11 × 11 matrix whose only nonzero entries are the superdiagonal entries 1, 2, 3, ..., 10. Then

exp(D) = I + ∑_{n=1}^{∞} D^n/n! = I + ∑_{n=1}^{10} D^n/n!

since D is nilpotent (D^{11} = 0). The matrix N := ∑_{n=1}^{10} D^n/n! is again nilpotent, since sums and products of commuting nilpotent matrices are nilpotent (binomial theorem); hence exp(D) = I + N has 1 as its only eigenvalue. Moreover

rank(N) = rank(∑_{n=1}^{10} D^n/n!) = 10,

so ker(N) is one dimensional, and the only eigenvectors of exp(D) are the constant polynomials, i.e. vectors of the form

(c, 0, ..., 0)^T.
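A quick check of this computation: build the 11 × 11 matrix of d/dx in the basis {1, x, ..., x^{10}}, exponentiate, and confirm that exp(D) is unit upper triangular (so 1 is its only eigenvalue) with exp(D) − I of rank 10.

```python
import numpy as np
from scipy.linalg import expm

D = np.diag(np.arange(1, 11), k=1).astype(float)   # superdiagonal 1,2,...,10
E = expm(D)
print(np.allclose(np.tril(E, -1), 0) and np.allclose(np.diag(E), 1))   # True: unit upper triangular
print(np.linalg.matrix_rank(E - np.eye(11)))                           # 10: 1-eigenspace is the constants
```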

Problem 10. As A is diagonalizable there exist an invertible U and a diagonal matrix D such that A = U^{−1} D U, and then, in block form,

[[U^{−1}, 0], [0, U^{−1}]] · [[D, I], [0, D]] · [[U, 0], [0, U]] = [[A, I], [0, A]].

So if [[A, I], [0, A]] were diagonalizable, so would be B := [[D, I], [0, D]]. The characteristic polynomial of B is the square of that of D. Now fix an eigenvalue λ of D and suppose Bx = λx, writing x = (u, v) with u, v ∈ R^n. The equation (B − λI)x = 0 reads

(D − λI)u + v = 0  and  (D − λI)v = 0.

The second equation gives v_j = 0 whenever λ_j ≠ λ (where λ_j := D_{jj}), and the j-th component of the first equation for indices with λ_j = λ gives 0 = (λ_j − λ)u_j + v_j = v_j, so v_j = 0 for those j as well. Hence v = 0, and then the first equation says (D − λI)u = 0. So every eigenvector of B has the form (u, 0) with u an eigenvector of D, i.e. for each eigenvalue the eigenspace of B has the same dimension as that of D. Summing over the eigenvalues, B has only n linearly independent eigenvectors instead of 2n, so B cannot be diagonalized, a contradiction.
Problem 11. Suppose rank(A) = rank(A^2). Then nullity(A^2) = nullity(A), i.e. the generalized eigenspace of 0 for A equals the eigenspace of 0, so in Jordan canonical form A has no nontrivial Jordan blocks for the eigenvalue 0. Write

A = U^{−1} (J_1 ⊕ J_2 ⊕ ... ⊕ J_k) U

with every Jordan block for 0 of size 1 × 1; collecting the blocks with nonzero eigenvalue into an invertible matrix J̃, we can write

A = U^{−1} [[J̃, 0], [0, 0]] U.

Note also that, since A has only finitely many eigenvalues, A + λI is invertible for all small enough λ ≠ 0, and then

(A + λI)^{−1} A = U^{−1} [[(J̃ + λI)^{−1} J̃, 0], [0, 0]] U,

so

lim_{λ→0} (A + λI)^{−1} A = U^{−1} [[I, 0], [0, 0]] U,

and one direction is proved.

Conversely, suppose lim_{λ→0} (A + λI)^{−1} A exists. Write A = U^{−1}(J_1 ⊕ ... ⊕ J_k)U, so that

(A + λI)^{−1} = U^{−1} ((J_1 + λI)^{−1} ⊕ ... ⊕ (J_k + λI)^{−1}) U.

If some J_k were a Jordan block with zero diagonal of size k × k with k > 1, then the superdiagonal entries of (J_k + λI)^{−1} J_k contain terms of the form C/λ for a nonzero constant C, which blow up as λ → 0, so the limit would not exist. Hence all Jordan blocks for the eigenvalue 0 have size 1 × 1, which means the generalized eigenspace of 0 equals the eigenspace of 0, so ker(A) = ker(A^2) and therefore rank(A) = rank(A^2).
Problem 12. Assume otherwise; then there is an x with Ax = 0 and ||x|| = 1. For each i,

a_{ii} x_i = − ∑_{j≠i} a_{ij} x_j,

so squaring, summing over i, and applying Cauchy–Schwarz gives

∑_{i=1}^{n} (a_{ii} x_i)^2 = ∑_{i=1}^{n} ( ∑_{j≠i} a_{ij} x_j )^2 ≤ ∑_{i=1}^{n} ( ∑_{j≠i} a_{ij}^2 ) ||x||^2 = ∑_{i≠j} a_{ij}^2 < 1.

But (since |a_{ii}| ≥ 1)

∑_{i=1}^{n} (a_{ii} x_i)^2 ≥ ∑_{i=1}^{n} |x_i|^2 = 1,

so we reach the contradiction 1 ≤ ∑_i (a_{ii} x_i)^2 < 1. Therefore A is invertible.

20. Fall 2019


Problem 1. If A_λ is invertible then, since A^{−1}e_1 ≠ 0,

A_λ(A^{−1}e_1) = e_1 + λ(e_1, A^{−1}e_1) e_1 = (1 + λ(e_1, A^{−1}e_1)) e_1 ≠ 0,

i.e. 1 + λ(e_1, A^{−1}e_1) ≠ 0. For the converse, suppose 1 + λ(e_1, A^{−1}e_1) ≠ 0 and fix x with A_λ x = 0. Applying A^{−1},

x + λ(e_1, x) A^{−1}e_1 = 0,

and taking the first component (with x = (x_1, ..., x_n)) gives

x_1 + λ x_1 (A^{−1})_{11} = x_1 (1 + λ(e_1, A^{−1}e_1)) = 0,

so x_1 = 0. But then λ(e_1, x) = 0, so the displayed equation forces x = 0. Hence A_λ is injective, and therefore invertible.
Problem 2. Notice that by staring at the matrix we see that the eigenvalues of A^2 + A are λ = {6, 6, 0, 0}, with corresponding eigenvectors

(1, 0, 0, 1)^T, (0, 1, 1, 0)^T, (1, 0, 0, −1)^T, (0, 1, −1, 0)^T.

Normalizing these (each by a factor 1/√2) gives an orthogonal matrix Q whose columns are these unit eigenvectors, and

A^2 + A = Q diag(6, 6, 0, 0) Q^T.

Now define

A := Q diag(2, 2, 0, 0) Q^T.

Then A is symmetric, since (Q diag(2, 2, 0, 0) Q^T)^T = Q diag(2, 2, 0, 0) Q^T, and since 2^2 + 2 = 6 and 0^2 + 0 = 0 we get

A^2 + A = Q diag(6, 6, 0, 0) Q^T,

which is the matrix we started with, as required.
Problem 4.
Problem 5. View A as a complex operator; then

A = U^{−1} (J_1 ⊕ ... ⊕ J_m) U

where the J_i are Jordan blocks, and

A^k = U^{−1} (J_1^k ⊕ ... ⊕ J_m^k) U.

The entries of J_i^k are either 0, λ_i^k, or of the form (k choose ℓ) λ_i^{k−ℓ}, where λ_i is the eigenvalue of the block. If |λ_i| < 1 then λ_i^k → 0 and (k choose ℓ) λ_i^{k−ℓ} → 0, since (k choose ℓ) grows polynomially in k while λ_i^k decays exponentially. So every entry of J_i^k goes to 0, and hence every entry of A^k converges to zero. Now,

||A||_{op}^2 = sup_{||x||=1} (Ax, Ax) = sup_{||x||=1} (A^T Ax, x) = max_{1≤i≤n} |σ_i|,

where the σ_i are the eigenvalues of A^T A (the spectral theorem guarantees a real orthonormal basis of eigenvectors). As A^T A is positive semidefinite, max_i |σ_i| ≤ Tr(A^T A) = ||A||_2^2, so for any matrix

||A||_{op} ≤ ||A||_2 = ( ∑_{ij} |a_{ij}|^2 )^{1/2}.

Since A^k goes to zero entrywise, for any ε > 0 there exists an N such that for k ≥ N every entry a_{ij}^{(k)} of A^k satisfies |a_{ij}^{(k)}|^2 < ε^2/n^2, so

||A^k||_{op} ≤ ( ∑_{ij} |a_{ij}^{(k)}|^2 )^{1/2} ≤ ε

for k ≥ N, i.e. ||A^k||_{op} → 0.

For the converse, let v be an eigenvector associated to an eigenvalue λ with ||v|| = 1; then

|λ|^k = ||λ^k v|| = ||A^k v|| ≤ ||A^k||_{op} → 0,

so we must have |λ| < 1.
Problem 6a. If B is invertible define T_B : M_n → M_n via

T_B(A) := (B^T)^{−1} A B^{−1}

(note B^T is also invertible since rank(B) = rank(B^T)). Then

T_B(L_B(A)) = (B^T)^{−1} B^T A B B^{−1} = A  and  L_B(T_B(A)) = B^T (B^T)^{−1} A B^{−1} B = A,

so L_B is invertible with inverse T_B.

Now assume L_B is invertible but B does not have full rank, so range(B) ≠ R^n. Let A be the orthogonal projection onto range(B)^⊥, i.e. A = 0 on range(B) and A = Id on range(B)^⊥. Then A is not the zero operator, but AB = 0, hence L_B(A) = B^T A B = 0, contradicting the invertibility of L_B.

Problem 6b and 6c. Assume rank(B) = k. By the rank normal form there exist invertible matrices P, Q such that B = P D Q with D = diag(1, ..., 1, 0, ..., 0) containing k ones. Then

L_B(A) = Q^T D P^T A P D Q,

and since A ↦ P^T A P and A ↦ Q^T A Q are bijections of M_n, the rank of L_B equals the rank of the map A ↦ D A D. But D A D keeps exactly the top-left k × k block of A and sets the remaining entries to zero, so the kernel of this map has dimension n^2 − k^2 and its range has dimension k^2. Therefore rank(L_B) = k^2 = rank(B)^2.
Problem 7. Consider the operator L : [0, 1] → [0, 1] defined via
L(x) = cos(x)
Note that this is well defined since cos([0, 1]) ⊂ [0, 1]. And as [0, 1] is a closed subset of R it is complete.
Then note that for x < y in [0, 1] the MVT gives

L(x) − L(y) = cos(x) − cos(y) = −(x − y) sin(ξ(x, y))

for some ξ(x, y) ∈ [x, y], and since sin is increasing on [0, 1],

|L(x) − L(y)| ≤ sin(1)|x − y|

with sin(1) < 1. So we can apply the Banach fixed point theorem to obtain the existence and uniqueness of a fixed point of L on [0, 1], i.e. there exists a unique solution to x = cos(x) on [0, 1].
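A quick numerical companion: iterating the contraction L(x) = cos(x) converges to the unique fixed point x = cos(x) ≈ 0.7390851 (the Dottie number).

```python
import math

x = 0.5                          # any starting point in [0, 1]
for _ in range(100):
    x = math.cos(x)
print(x, abs(x - math.cos(x)))   # ~0.7390851332, residual ~ 0
```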
Problem 8. For any fixed h ∈ (0, 1],

∑_{n∈Z, n≠0} h/(1 + n^2 h^2) ≤ ∑_{n≠0} h/(n^2 h^2) = (1/h) ∑_{n≠0} 1/n^2 =: C(h) < ∞,

so the sum is well defined. By symmetry,

∑_{n∈Z} h/(1 + n^2 h^2) = h + 2 ∑_{n≥1} h/(1 + n^2 h^2) ≤ 2 ∑_{n≥0} h/(1 + n^2 h^2).

Define

f(x; h) := h/(1 + x^2 h^2);

for x ∈ [0, ∞) this is a decreasing function of x, so comparing the sum with the integral gives

∑_{n=0}^{∞} h/(1 + n^2 h^2) ≤ h + ∫_0^∞ h/(1 + x^2 h^2) dx = h + π/2 ≤ 1 + π/2.

Therefore

0 ≤ ∑_{n∈Z} h/(1 + n^2 h^2) ≤ 2 + π,

i.e.

sup_{h∈(0,1]} ∑_{n∈Z} h/(1 + n^2 h^2) < +∞.
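A numerical check of the uniform bound (truncating the sum at |n| ≤ 10^6, which is enough for the h values sampled here): every value stays close to π and below 2 + π.

```python
import numpy as np

n = np.arange(-10**6, 10**6 + 1)
for h in [1.0, 0.3, 0.1, 0.01, 0.001]:
    print(h, np.sum(h / (1 + n**2 * h**2)))   # all values ~pi, well below 2 + pi
```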
Problem 9a. Let the map F : R^3 → R be defined via

F(x, y, z) = (2 + x + y)e^z − z^2 − e^x − e^y.

Then F ∈ C^∞(R^3) since all partial derivatives are smooth, F(0, 0, 0) = 0, and

∇F(0, 0, 0) = (0, 0, 2)^T.

Since the 1 × 1 submatrix corresponding to ∂F/∂z(0, 0, 0) = 2 is non-singular and F ∈ C^1(R^3), the implicit function theorem gives an open set U ⊂ R^2 with (0, 0) ∈ U (chosen, by continuity, so that ∂F/∂z ≠ 0 on the relevant neighborhood) and a function ϕ such that

F(x, y, ϕ(x, y)) = 0 for (x, y) ∈ U

and ϕ(0, 0) = 0. For the regularity of ϕ, the implicit function theorem gives

∇ϕ = −( (∂F/∂x)(∂F/∂z)^{−1}, (∂F/∂y)(∂F/∂z)^{−1} )^T,

where ∂F/∂z ≠ 0 on U. Since F is smooth and ∂F/∂z ≠ 0, repeated use of the product and quotient rules shows ϕ ∈ C^∞(U).

Problem 9b. Note that ∇ϕ(0, 0) = (0, 0)^T, so (0, 0) is a critical point. Computing the Hessian at (0, 0) (by differentiating F(x, y, ϕ(x, y)) = 0 twice) gives

D^2 ϕ(0, 0) = diag(1/2, 1/2),

which is positive definite, so ϕ has a local minimum at (0, 0).
Problem 10.

Problem 11a. Fix a Cauchy sequence {f_n} ⊂ X and fix ε > 0. There is an N such that for n, m ≥ N

||f_n − f_m||_{L^∞} < ε/2,

so in particular for any x ∈ [0, 1]

|f_n(x) − f_m(x)| < ε/2.

By completeness of [0, 1] there exists f(x) ∈ [0, 1] with f_n(x) → f(x) pointwise. For each x there is then an N_x ≥ N with |f_{N_x}(x) − f(x)| ≤ ε/2, and for n ≥ N

|f(x) − f_n(x)| ≤ |f(x) − f_{N_x}(x)| + |f_{N_x}(x) − f_n(x)| ≤ ε/2 + ε/2 = ε.

So ||f − f_n||_{L^∞} ≤ ε, i.e. f_n → f uniformly. It remains to show that f is monotone (nondecreasing), so that f ∈ X: for x ≤ y and n large enough,

f(x) ≤ f_n(x) + ε/2 ≤ f_n(y) + ε/2 ≤ f(y) + ε,

and letting ε → 0 gives f(x) ≤ f(y). So X is complete.

Problem 11b. Consider {x^n} ⊂ X; each x^n is a nondecreasing function from [0, 1] into [0, 1]. But

x^n → f(x) := 0 for 0 ≤ x < 1 and f(x) := 1 for x = 1

pointwise, so if some subsequence of {x^n} converged uniformly its limit would have to be f. But a uniform limit of the continuous functions x^n ∈ C([0, 1]) must be continuous, and f is not. So no subsequence converges uniformly, and X is not sequentially compact.
Problem 12. Clearly if f : ℓ^∞ → R is continuous then f|_K is continuous for every compact K, so it suffices to show the other direction. Let x_n → x in ℓ^∞ and define K := {x_n : n ∈ N} ∪ {x}; we claim K is a compact subset of ℓ^∞. Indeed, take a sequence {y_n} ⊂ K. If it takes only finitely many values, some value repeats infinitely often and gives a convergent (constant) subsequence. Otherwise it takes infinitely many of the values x_n; since x_n → x, these values accumulate at x, so we can extract a subsequence of {y_n} converging to x ∈ K. Hence K is compact. Since f|_K is continuous,

f|_K(x_n) → f|_K(x),

i.e. f(x_n) → f(x), so f is continuous.

21. Spring 2020


Problem 1. Note that

Tr(AB − BA) = Tr(AB) − Tr(BA) = 0,

so we cannot have AB − BA = Id, because that would give Tr(Id) = n = 0, which is false.
Problem 2. We claim that if a matrix C is similar to D then they have the same eigenvalues. Indeed,

det(C − λI) = det(S S^{−1}(C − λI)) = det(S^{−1}(C − λI)S) = det(D − λI),

so they have the same characteristic polynomial. Now, as B is similar to B^5, if x is an eigenvector with eigenvalue λ then

Bx = λx, B^5 x = λ^5 x,

so for every eigenvalue λ we must have λ = λ^5. But as B is invertible we have λ ≠ 0, so

λ^{25} = λ^5 = λ ⇒ λ^{24} = 1.
Problem 3.
Problem 4a. Notice the hypothesis implies that for all x, y ∈ C^n

(x + y, A(x + y)) = 0,

i.e., since (x, Ax) = (y, Ay) = 0,

(y, Ax) + (x, Ay) = 0.

Replacing x by ix also gives (y, A(ix)) + (ix, Ay) = 0, i.e.

−i(y, Ax) + i(x, Ay) = 0.

Combining this with i times the previous identity,

i(y, Ax) + i(x, Ay) = 0,

we get 2i(x, Ay) = 0, i.e. (x, Ay) = 0 for all x, y ∈ C^n. Hence Ay = 0 for all y, so A is the zero operator.

Problem 4b. Take the rotation by π/2,

A = [[cos(π/2), −sin(π/2)], [sin(π/2), cos(π/2)]] = [[0, −1], [1, 0]].

Then (Av, v) = 0 for every v ∈ R^2, since Av is v rotated by 90 degrees and hence orthogonal to v.
Problem 5.

Problem 6. Assume that all eigenvalues of T ∈ M_{m×m}(R) satisfy |λ_i| < 1. Viewing T as an operator over C we can find a basis such that

T = U^{−1} J U,  J = J_1 ⊕ J_2 ⊕ ... ⊕ J_m,

where each J_i is a Jordan block whose diagonal entries are eigenvalues of T. Note that

T^n = U^{−1} J^n U,  J^n = J_1^n ⊕ J_2^n ⊕ ... ⊕ J_m^n,

so it suffices to show that for a single Jordan block J_i with eigenvalue λ, |λ| < 1, we have J_i^n → 0. This follows since each entry of J_i^n is either 0, λ^n, or (n choose k) λ^{n−k}; as |λ| < 1 we know λ^n → 0, and since exponentials decay much faster than polynomials grow (e.g. by L'Hôpital), (n choose k) λ^{n−k} → 0 as n → ∞. Therefore J^n → 0 and hence T^n → 0.

Now suppose T^n → 0, and fix an eigenvector x with eigenvalue λ. Then

T^n x = λ^n x → 0,

so |λ| < 1.
Problem 7. Both parts follow from IVT.
Problem 8. Density of polynomials in C([a, b]) implies this.
Problem 9. See Fall 2016 number 11.
Problem 10a. This is Banach's fixed point theorem. Fix x ∈ X and let x_n := f(x_{n−1}) with x_0 := x. For n ≥ m,

d(x_{n+1}, x_{m+1}) = d(f^{n+1}(x), f^{m+1}(x)) ≤ λ^{m+1} d(f^{n−m}(x), x)
≤ λ^{m+1} [ d(f^{n−m}(x), f^{n−m−1}(x)) + ... + d(f(x), x) ]
≤ λ^{m+1} d(f(x), x) (1 + λ + λ^2 + ...) = λ^{m+1} d(f(x), x)/(1 − λ) → 0 as m → ∞,

since λ < 1. So {x_n} is Cauchy, and completeness implies it has a limit, say z. Then

lim_{n→∞} x_{n+1} = lim_{n→∞} f(x_n) = f(z)  and  lim_{n→∞} x_{n+1} = z,

using continuity of f, so f(z) = z. Uniqueness: if f(z_1) = z_1, f(z_2) = z_2 and z_1 ≠ z_2, then

d(z_1, z_2) = d(f(z_1), f(z_2)) ≤ λ d(z_1, z_2) < d(z_1, z_2),

a contradiction, so the fixed point is unique.
Problem 10b. Uniqueness: if x ≠ y were both fixed points, then

d(x, y) = d(f(x), f(y)) ≤ d(x, y)/(1 + d(x, y)),

i.e. 1 ≤ 1/(1 + d(x, y)); but as 0 < d(x, y) < +∞ we have 1/(1 + d(x, y)) < 1, a contradiction, so there is uniqueness.

For existence, fix x ∈ X and define x_0 := x, x_{n+1} := f(x_n). Then

d(x_{n+1}, x_n) ≤ d(x_n, x_{n−1})/(1 + d(x_n, x_{n−1})) ≤ d(x_n, x_{n−1}) ≤ ... ≤ d(x_0, x_1).

Note that the function g(t) := t/(1 + t) is increasing, so from d(x_n, x_{n−1}) ≤ d(x_0, x_1) we get

d(x_n, x_{n−1})/(1 + d(x_n, x_{n−1})) ≤ d(x_0, x_1)/(1 + d(x_0, x_1)) =: λ < 1.

Now, for our fixed x, let A := {f^n(x) : n ∈ N} (where f^n denotes the n-th iterate of f); then f|_A : A → A. Fix u, v ∈ A, say u = f^{n+1}(x) and v = f^{n+m+1}(x). Then

d(u, v) = d(f^{n+1}(x), f^{n+m+1}(x)) ≤ d(x, f^m(x))
≤ d(x, f(x)) + d(f(x), f^2(x)) + ... + d(f^{m−1}(x), f^m(x))
≤ d(x, f(x))(1 + λ + ... + λ^m) ≤ d(x, f(x)) ∑_{i=0}^{∞} λ^i =: M < ∞,

i.e. there exists M > 0 such that d(f^n(x), f^m(x)) ≤ M for all n, m. Hence for u, v ∈ A,

d(u, v)/(1 + d(u, v)) ≤ M/(1 + M) =: α < 1,

so f|_A satisfies

d(f|_A(u), f|_A(v)) ≤ α d(u, v),

i.e. it is a contraction mapping on A. Therefore, by the proof in 10a, {f^n(x)} is a Cauchy sequence; by completeness of X it has a limit in X, and an identical argument to 10a shows the limit is a fixed point.
Problem 11.
Problem 12a. See Fall 2016 number 12.
Problem 12b. If f″ ≥ 0 then by Taylor's theorem we have

f(y) = f(x) + f′(x)(y − x) + f″(ξ(y)) (x − y)^2/2

for some ξ(y) ∈ [min{x, y}, max{x, y}], and as f″ ≥ 0 this gives

f(y) ≥ f(x) + f′(x)(y − x);

then part (a) implies the desired result.
