Linear Algebra II - Lecture Notes
Armando Martino
Spring 2024
Contents
Preamble
Acknowledgement
1 Groups
2 Fields and Vector Spaces
3 Bases
3.1 Spanning
3.2 Linear independence
3.3 Bases
3.4 Dimension
4 Linear Transformations
4.1 Matrix representation I
4.2 Kernel and image
4.3 Isomorphism
4.4 Dimension Theorem
4.5 Matrix representation II
5 Determinants
6 Diagonalisability
6.1 Eigen-things
6.2 Diagonalisability
6.3 Cayley–Hamilton Theorem
7 Coursework Sheets
Preamble
These lecture notes contain more–less verbatim what appears on the (black– or white–)board during the
lectures.
The colour coding - in the pdf version - signifies the following: what is red is being defined. Blue are
named theorems and statements (these could be referred to by these names). Finally green is used to
reference lecture notes of other modules (mostly MATH1048 Linear Algebra I).
Acknowledgement
For the most part, these notes were designed and written by Dr Bernhard Koeck, and originally typed
into LaTeX by the undergraduate student Thomas Blundell–Hunter in summer 2011. They have been
regularly updated to reflect the changes in the syllabus. The current version is a complete overhaul of
the technical side of the notes by Ján Špakula, so that they are available also in html.
1 Groups
G × H := {(g, h) | g ∈ G, h ∈ H},
(b) A group is a set G together with a binary operation G × G → G, (a, b) 7→ a ∗ b (which we usually
refer to as “group operation”, or “group multiplication”, or just “multiplication”) such that the
following axioms are satisfied:
(d) A group is called finite if G has only finitely many elements; in this case its cardinality |G| is
called the order of G.
Example 1.2. (a) The usual addition N × N → N, (m, n) 7→ m + n, defines a binary operation on N
(and so does multiplication), but subtraction does not, because for instance 2 − 3 = −1 ̸∈ N.
(b) The sets Z, Q, R and C together with addition as the binary operation are abelian groups. The
sets Q∗ := Q \ {0}, R∗ := R \ {0} and C∗ := C \ {0} together with multiplication as the binary
operation are abelian groups.
(c) The set N with addition is not a group, because, for example, there doesn’t exist a (right) inverse
to 1 ∈ N. (In other words, the equation 1+? = 0 has no solution in N.)
(d) For any n ∈ N, the set Rn together with vector addition is a group. For any m, n ∈ N the set
Mm×n (R) of real m–by–n matrices together with matrix addition is a group. For every n ∈ N,
the set GLn (R) of invertible real (n × n)–matrices together with matrix multiplication is a group,
called the general linear group. If n > 1, GLn (R) is not abelian: e.g.
[ 1 1 ; 0 1 ] · [ 1 0 ; 1 1 ] = [ 2 1 ; 1 1 ], but [ 1 0 ; 1 1 ] · [ 1 1 ; 0 1 ] = [ 1 1 ; 1 2 ]
(matrices written row by row, with rows separated by semicolons).
(b) For every a ∈ G there exists precisely one right inverse (call it b) to a, and this b is also a left
inverse of a (meaning that we also have b ∗ a = e). We write a−1 for this inverse of a (or −a if the
group operation is “+”).
(b) For all a ∈ G both the equation a ∗ x = b and y ∗ a = b have a unique solution in G. (Another
way to say this: for every a ∈ G, both the map G → G, x 7→ a ∗ x (called left translation by a) and
G → G, y 7→ y ∗ a (called right translation by a) are bijective.)
(In additive notation, i.e. when the group operation is “+”, we write ma instead of aᵐ.)
Then for all m, n ∈ Z and a ∈ G we have aᵐ⁺ⁿ = aᵐ ∗ aⁿ and aᵐⁿ = (aᵐ)ⁿ.
If a, b ∈ G commute (i.e. a ∗ b = b ∗ a), then for all m ∈ Z we have (a ∗ b)ᵐ = aᵐ ∗ bᵐ.
(c) Both (a⁻¹)⁻¹ and a are solutions of the equation a⁻¹ ∗ x = e (see Proposition 1.3/(b)). Now apply (b).
(d) We have (a ∗ b) ∗ (b⁻¹ ∗ a⁻¹) = a ∗ (b ∗ (b⁻¹ ∗ a⁻¹)) = a ∗ ((b ∗ b⁻¹) ∗ a⁻¹) = a ∗ (e ∗ a⁻¹) = a ∗ a⁻¹ = e.
=⇒ b⁻¹ ∗ a⁻¹ is a right inverse to a ∗ b.
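The identity (a ∗ b)⁻¹ = b⁻¹ ∗ a⁻¹ can be checked concretely in the group GL2(R) from Example 1.2/(d). The following pure-Python sketch is an illustration only (the helper names `mul` and `inv` are ad hoc, not part of the notes):

```python
# Check (A·B)^(-1) = B^(-1)·A^(-1) in GL_2(R), using plain 2x2 matrices.

def mul(X, Y):
    """Multiply two 2x2 matrices given as [[a, b], [c, d]]."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def inv(X):
    """Inverse of a 2x2 matrix with non-zero determinant."""
    det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
    return [[ X[1][1]/det, -X[0][1]/det],
            [-X[1][0]/det,  X[0][0]/det]]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

# (A·B)^(-1) equals B^(-1)·A^(-1) ...
assert inv(mul(A, B)) == mul(inv(B), inv(A))
# ... but in general NOT A^(-1)·B^(-1), since GL_2(R) is not abelian.
assert inv(mul(A, B)) != mul(inv(A), inv(B))
```

Note the order swap: because the group is non-abelian, the factors must be inverted in reverse order.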
Example 1.5. (a) The group table of a finite group G = {a1, a2, . . . , an} is a table like this:
∗  │ ···  aj  ···
⋮  │      ⋮
ai │ ···  ai ∗ aj  ···
⋮  │      ⋮
The trivial group is the group with exactly one element, say e. Its group table must be
∗ │ e
e │ e
Any group with two elements, say e and a, is given by the group table:
∗ │ e a
e │ e a
a │ a e
Any group with three elements, say e, a and b, must have group table:
∗ │ e a b
e │ e a b
a │ a b e
b │ b e a
(Note that a ∗ b = a would imply b = e by 1.4/(a).) There are two “essentially different” groups of
order 4.
Note that Proposition 1.4/(b) implies that group tables must satisfy “sudoku rules”, i.e. that every
group element must appear in each row and each column exactly once. However, not every table
obeying this rule is a group table of a group; for example the table below does not. Why? (Hint:
what is a ∗ a ∗ b?)
∗ │ e a b c d
e │ e a b c d
a │ a e c d b
b │ b c d e a
c │ c d a b e
d │ d b e a c
(b) Let m ∈ N and define Cm := {0, 1, . . . , m − 1}. Define a binary operation on the set Cm by:
x ⊕ y := x + y if x + y < m;   x + y − m if x + y ≥ m.
Then Cm together with ⊕ is an abelian group called the cyclic group of order m. (Caveat: these
notes will use ⊕ for the group operation on Cm , to distinguish it from “+” between numbers.
However it is very common to just use “+” for the operation on Cm .)
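The definition of ⊕ is easy to experiment with in code. The sketch below is an illustration only (the function name `oplus` is ad hoc); for a small m it checks the group axioms of Cm exhaustively:

```python
# A minimal model of the cyclic group C_m from Example 1.5/(b):
# elements {0, 1, ..., m-1} with x ⊕ y := x + y, reduced by m if needed.

def oplus(x, y, m):
    """The group operation on C_m as defined above."""
    s = x + y
    return s if s < m else s - m

m = 5
elems = range(m)
# identity element is 0, and every x has inverse (m - x) mod m
assert all(oplus(x, 0, m) == x for x in elems)
assert all(oplus(x, (m - x) % m, m) == 0 for x in elems)
# associativity, checked exhaustively for this small m
assert all(oplus(oplus(x, y, m), z, m) == oplus(x, oplus(y, z, m), m)
           for x in elems for y in elems for z in elems)
```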
Proof (that Cm is a group). (•) Associativity: Let x, y, z ∈ {0, 1, . . . , m − 1}. Want to show (x ⊕
y) ⊕ z = x ⊕ (y ⊕ z) in Cm .
- First case: Suppose that x + y + z < m. Then also x + y < m and y + z < m.
=⇒ LHS = (x + y) ⊕ z = (x + y) + z = x + (y + z) = x ⊕ (y + z) = RHS (using associativity of addition of integers).
(c) Let S be a set (such as S = {1, 2, . . . , n} for some n ∈ N) and let Sym(S) denote the set of all
bijective maps π : S → S, also called permutations of S.
For any π, σ ∈ Sym(S), we denote σ ◦ π their composition (as functions, so (σ ◦ π)(s) = σ (π(s))
for all s ∈ S). This defines a binary operation on Sym(S).
Then Sym(S) together with composition is a group, called the permutation group of S (or some-
times also the symmetric group of S).
The identity element in Sym(S) is the identity function (denoted idS or just id), and the inverse of
π ∈ Sym(S) is the inverse function π −1 (as in Calculus I).
If S = {1, . . . , n} for some n ∈ N, we write Sn for Sym(S) and use the “table notation” to describe permutations π ∈ Sn, writing the values π(1), π(2), . . . , π(n) under the entries 1, 2, . . . , n:
( 1 2 ··· n ; π(1) π(2) ··· π(n) ).
For example:
( 1 2 3 4 ; 3 1 2 4 )⁻¹ = ( 1 2 3 4 ; 2 3 1 4 ) in S4,
( 1 2 3 4 5 ; 3 4 1 2 5 ) ◦ ( 1 2 3 4 5 ; 2 3 5 1 4 ) = ( 1 2 3 4 5 ; 4 1 5 3 2 ) in S5.
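The table notation is convenient to model in code: a permutation π ∈ Sn can be stored as the tuple (π(1), . . . , π(n)). The sketch below (illustrative only; the helper names are ad hoc) verifies the two computations above:

```python
# Permutations in "table notation": π ∈ S_n is the tuple (π(1), ..., π(n)).
# Composition is (σ ∘ π)(s) = σ(π(s)).

def compose(sigma, pi):
    """Return σ ∘ π, both given as bottom rows of the table notation."""
    return tuple(sigma[pi[s] - 1] for s in range(len(pi)))

def inverse(pi):
    """Return π^(-1): send each image back to its preimage."""
    inv = [0] * len(pi)
    for s, image in enumerate(pi, start=1):
        inv[image - 1] = s
    return tuple(inv)

# The two examples from the notes, in S_4 and S_5:
assert inverse((3, 1, 2, 4)) == (2, 3, 1, 4)
assert compose((3, 4, 1, 2, 5), (2, 3, 5, 1, 4)) == (4, 1, 5, 3, 2)
```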
Definition 1.6. Let n ≥ 1 and s ≤ n. Let a1, . . . , as ∈ {1, . . . , n} be pairwise distinct. The permutation π ∈ Sn such that
π(a1) = a2, π(a2) = a3, . . . , π(as−1) = as, π(as) = a1, and π(x) = x for every other x ∈ {1, . . . , n},
is denoted by ⟨a1, . . . , as⟩. Any permutation of this form is called a cycle. If s = 2, it is called a transposition. The number s is called the length (or order) of the cycle.
For example, ⟨3, 1, 5, 2⟩ = ( 1 2 3 4 5 6 ; 5 3 1 4 2 6 ) in S6; and ⟨3, 2⟩ = ( 1 2 3 4 ; 1 3 2 4 ) in S4.
Proposition 1.7. Every permutation σ ∈ Sn is a composition of cycles.
Example 1.8. (a) Let σ = ( 1 2 3 4 5 6 7 8 9 10 11 ; 2 5 6 9 1 4 10 11 3 7 8 ) ∈ S11.
Then σ = ⟨1, 2, 5⟩ ◦ ⟨3, 6, 4, 9⟩ ◦ ⟨7, 10⟩ ◦ ⟨8, 11⟩.
(b) Let τ = ( 1 2 3 4 ; 4 2 3 1 ) ∈ S4. Then τ = ⟨1, 4⟩.
General Recipe (which, with a bit of effort, can be turned into a proof of 1.7).
Denote by σ ∈ Sn the permutation that we want to write as a composition of cycles.
Start with some a ∈ {1, . . . , n} such that σ (a) ̸= a (e.g. a := 1). (If there is no such a then σ = id and
we are done.)
Let m ∈ N, m > 1, be the smallest number such that σᵐ(a) ∈ {a, σ(a), σ²(a), . . . , σᵐ⁻¹(a)}. (Actually then necessarily σᵐ(a) = a.)
Let σ1 be the cycle σ1 = ⟨a, σ(a), σ²(a), . . . , σᵐ⁻¹(a)⟩.
Now repeat the steps: take b ∈ {1, . . . , n} \ {a, σ(a), . . . , σᵐ⁻¹(a)} such that σ(b) ̸= b. Let l ∈ N, l > 1, be the smallest number such that σˡ(b) ∈ {b, σ(b), . . . , σˡ⁻¹(b)}. Let σ2 = ⟨b, σ(b), . . . , σˡ⁻¹(b)⟩.
Continuing in this way, we find a decomposition into cycles: σ = σ1 ◦ σ2 ◦ · · ·.
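The General Recipe can be sketched as code, recovering the decomposition of Example 1.8/(a). This is an illustration only; the function name `cycle_decomposition` is ad hoc:

```python
# The General Recipe as code: decompose a permutation (given as the
# bottom row of the table notation) into cycles ⟨a, σ(a), σ²(a), ...⟩.

def cycle_decomposition(sigma):
    """Return the cycles of σ as lists; fixed points are omitted."""
    n = len(sigma)
    seen = set()
    cycles = []
    for a in range(1, n + 1):
        if a in seen or sigma[a - 1] == a:
            continue                      # already handled, or a fixed point
        cycle = []
        b = a
        while b not in seen:
            seen.add(b)
            cycle.append(b)
            b = sigma[b - 1]              # apply σ
        cycles.append(cycle)
    return cycles

# Example 1.8/(a): σ ∈ S_11
sigma = (2, 5, 6, 9, 1, 4, 10, 11, 3, 7, 8)
assert cycle_decomposition(sigma) == [[1, 2, 5], [3, 6, 4, 9], [7, 10], [8, 11]]
```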
Theorem 1.10. (a) The definition of sgn(σ ) does not depend on the chosen cycle decomposition of
σ.
Proof. In Group Theory in Year 2. (But for one possible proof of (a), see also an optional problem on one of the Courseworks.)
2 Fields and Vector Spaces
2.1 Fields
Definition 2.1. A field is a set F together with binary operations on F, which we will refer to as addition
and multiplication, such that:
• F together with addition is an abelian group (we use the notation a + b, 0 or 0F , −a), and
• F × := F \ {0} together with multiplication is an abelian group (we use the notation a · b or ab, 1,
a−1 ),
Proof.
Definition 2.4. Let F be a field. A vector space over F is an abelian group V (we will use “+” for the
binary operation) together with a map F × V → V (called scalar multiplication and written as (a, x) 7→
ax), such that the following axioms are satisfied:
The elements of V are called vectors. The elements of F will be referred to as scalars. We write 0F and
0V for the neutral elements of F and V , respectively, and often just 0 for both (when it is clear from the
context if it is a scalar or a vector). Furthermore we use the notation u − v for u + (−v) when u, v are
both vectors, or both scalars.
Example 2.5. (a) For every n ∈ N the set Rn together with the usual addition and scalar multiplication
(as seen in Linear Algebra I) is a vector space over R. Similarly, for any field F, the set
F n := {(a1 , . . . , an ) : a1 , . . . , an ∈ F}
together with component-wise addition and the obvious scalar multiplication is a vector space over
F. For example F22 = {(0, 0), (0, 1), (1, 0), (1, 1)} is a vector space over F2 ; F = F 1 is a vector
space over F, and finally F 0 := {0} is a vector space over F.
(b) Let V be the additive group of C. We view the usual multiplication R × V → V, (a, x) 7→ ax, as
scalar multiplication of R on V . Then V is a vector space over R. Similarly, we can think of C or
R as vector spaces over Q.
(c) Let V denote the abelian group R (with the usual addition). For a ∈ R and x ∈ V we put a ⊗ x :=
a2 x ∈ V ; this defines a scalar multiplication
R ×V → V, (a, x) 7→ a ⊗ x,
of the field R on V . Which of the vector space axioms (see 2.4) hold for V with this scalar multi-
plication?
Proposition 2.6. Let V be a vector space over a field F and let a, b ∈ F and x, y ∈ V . Then we have:
(a) (a − b)x = ax − bx
(b) a(x − y) = ax − ay
(c) ax = 0V ⇐⇒ a = 0F or x = 0V
(d) (−1)x = −x
(b) On Coursework.
The next example is the “mother” of almost all vector spaces. It vastly generalises the fourth of the
following five ways of representing vectors and vector addition in R3 .
(I) [3D plot of the vectors a, b and a + b drawn as arrows in R3]
(II) a = (2.5, 0, −1), b = (1, 1, 0.5), a + b = (3.5, 1, −0.5)
(III) a = ( 1 2 3 ; 2.5 0 −1 ), b = ( 1 2 3 ; 1 1 0.5 ), a + b = ( 1 2 3 ; 3.5 1 −0.5 )
(IV), (V) [a, b and a + b viewed as functions {1, 2, 3} → R, and their graphs plotted over {1, 2, 3}]
Example 2.7. Let S be any set and let F be a field. Let
F S := { f : S → F}
denote the set of all maps from S to F. We define an addition on F S and a scalar multiplication of F on
F S as follows: When f , g ∈ F S and a ∈ F we set:
( f + g)(s) := f (s) + g(s) for any s ∈ S
(a f )(s) := a f (s) for any s ∈ S.
Then F S is a vector space over F (see below for the proof).
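For a finite set S, the vector space F S and its two operations can be modelled directly. Here is a minimal sketch with F = R and elements of F S stored as dictionaries s ↦ f(s); it is an illustration only, and the helper names `add` and `smul` are ad hoc:

```python
# A model of F^S from Example 2.7, with S a finite set and F = R
# (modelled by Python floats); an element f ∈ F^S is a dict s ↦ f(s).

def add(f, g):
    """Pointwise addition: (f + g)(s) := f(s) + g(s)."""
    return {s: f[s] + g[s] for s in f}

def smul(a, f):
    """Scalar multiplication: (a·f)(s) := a·f(s)."""
    return {s: a * f[s] for s in f}

S = {"x", "y", "z"}
f = {"x": 1.0, "y": 2.0, "z": 3.0}
g = {"x": 0.5, "y": -2.0, "z": 1.0}

assert add(f, g) == {"x": 1.5, "y": 0.0, "z": 4.0}
# the zero vector is the constant function s ↦ 0, and -f is (-1)·f
zero = {s: 0.0 for s in S}
assert add(f, smul(-1, f)) == zero
```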
Special Cases:
(a) Let S = {1, . . . , n}. Identifying any map f : {1, . . . , n} → F with the corresponding tuple ( f (1), . . . , f (n)),
we see that F S can be identified with the set F n of all n-tuples (a1 , . . . , an ) considered in Example
2.5/(a).
(b) Let S = {1, . . . , n} × {1, . . . , m}. Identifying any map f : {1, . . . , n} × {1, . . . , m} → F with the
corresponding matrix:
[ f((1, 1)) · · · f((1, m)) ; · · · ; f((n, 1)) · · · f((n, m)) ]
we see that F S can be identified with the set Mn×m (F) of (n × m)-matrices
[ a11 · · · a1m ; · · · ; an1 · · · anm ]
with entries in F. In particular Mn×m (F) is a vector space over F.
(c) Let S = N. Identifying any map f : N → F with the sequence ( f (1), f (2), f (3), . . . ) we see that
F N can be identified with the set of all infinite sequences (a1 , a2 , a3 , . . . ) in F.
(d) Let F = R and let S be an interval I in R. Then F S = RI is the set of all functions f : I → R. (We
can visualise these functions via their graph, similarly as in (V) above.)
Proof (that F S is a vector space over F): First, F S with the above defined “+” is an abelian group:
Associativity: Let f , g, h ∈ F S .
We need to show: ( f + g) + h = f + (g + h) in F S
⇐⇒ (( f + g) + h)(s) = ( f + (g + h))(s) for all s ∈ S.
LHS = ( f + g)(s) + h(s) = ( f (s) + g(s)) + h(s) (by definition of addition in F S )
RHS = f (s) + (g + h)(s) = f (s) + (g(s) + h(s))
=⇒ LHS = RHS (by associativity in F)
Identity element: Let 0 denote the constant function S → F, s 7→ 0F .
For any f ∈ F S and s ∈ S we have ( f + 0)(s) = f (s) + 0(s) = f (s) + 0F = f (s), hence f + 0 = f .
Similarly we have 0 + f = f . (using definitions of 0 and “+”, and field axioms)
=⇒ 0 is the identity element.
Inverses: Let f ∈ F S . Define (− f )(s) := − f (s).
For any s ∈ S we have ( f + (− f ))(s) = f (s) + (− f )(s) = f (s) + (− f (s)) = 0F = 0(s).
=⇒ f + (− f ) = 0 in F S , so − f is the inverse to f . (↑ defns of “+”, “− f ”, 0, and field axioms)
Commutativity: Let f , g ∈ F S .
For any s ∈ S we have ( f + g)(s) = f (s) + g(s) = g(s) + f (s) = (g + f )(s).
=⇒ f + g = g + f . (↑ by the definition of “+”, and commutativity of + in F)
Now the four axioms from Definition 2.4 (only (i) and (iii) spelled out here, the others are similar):
First distributivity law: Let a, b ∈ F and f ∈ F S . We want to check that (a + b) f = a f + b f :
For all s ∈ S we have
((a + b) f )(s) = (a + b)( f (s)) (by definition of the scalar multiplication)
= a( f (s)) + b( f (s)) (by distributivity in F)
= (a f )(s) + (b f )(s) (by definition of the scalar multiplication)
= (a f + b f )(s) (by definition of addition in F S )
=⇒ (a + b) f = a f + b f .
Axiom (iii): Let a, b ∈ F and f ∈ F S . We want to check that (ab) f = a(b f ).
For all s ∈ S we have
((ab) f )(s) = (ab)( f (s)) = a(b( f (s))) = a((b f )(s)) = (a(b f ))(s),
hence (ab) f = a(b f ).
2.3 Subspaces
Definition 2.8. Let V be a vector space over a field F. A subset W of V is called a subspace of V if the
following conditions hold:
(a) 0V ∈ W .
(b) “W is closed under addition”: for all x, y ∈ W we also have x + y ∈ W .
(c) “W is closed under scalar multiplication”: for all a ∈ F and x ∈ W we have ax ∈ W .
Note that condition (b) states that the restriction of the addition in V to W gives a binary operation
W ×W → W on W (addition in W ).
Similarly, condition (c) states that the scalar multiplication of F on V yields a map F ×W → W which
we view as a scalar multiplication of F on W .
Proposition 2.9. Let V be a vector space over a field F and let W be a subspace of V . Then W together
with the above mentioned addition and scalar multiplication is a vector space over F.
Proof. The following axioms hold for W because they already hold for V :
• associativity of addition;
• commutativity of addition;
• all the four axioms in Definition 2.4.
Example 2.10. (a) Examples of subspaces of Rn as seen in Linear Algebra I, such as the nullspace
of any real (n × m)-matrix, or the column space of any real (m × n)-matrix.
(b) The set of convergent sequences is a subspace of the vector space RN of all sequences (a1 , a2 , a3 , . . . )
in R. A subspace of this subspace (and hence of RN ) is the set of all sequences in R that converge
to 0. (See Calculus I for proofs).
(c) Let A ∈ Ml×m (R). Then W := {B ∈ Mm×n (R) | AB = 0} is a subspace of Mm×n (R).
Proof :
(i) We have A · 0 = 0 =⇒ 0 ∈ W .
(ii) Let B1 , B2 ∈ W
=⇒ A(B1 + B2 ) = AB1 + AB2 = 0 + 0 = 0
=⇒ B1 + B2 ∈ W .
(iii) Let a ∈ R and B ∈ W
=⇒ A(aB) = a(AB) = a0 = 0
=⇒ aB ∈ W . □
(d) Let I be a non-empty interval in R. The following subsets of the vector space RI consisting of all
functions from I to R are subspaces:
(e) The subset Zn of the vector space Rn over R is closed under addition but not closed under scalar multiplication: for instance, (1, 0, . . . , 0) ∈ Zn and 1/2 ∈ R, but (1/2)(1, 0, . . . , 0) = (1/2, 0, . . . , 0) ∉ Zn.
(f) The subsets W1 := {(a, 0) : a ∈ R} and W2 := {(0, b) : b ∈ R} are subspaces of R2. The subset W := W1 ∪ W2 of the vector space R2 is closed under scalar multiplication but not under addition because, for instance, (1, 0) and (0, 1) are in W but (1, 0) + (0, 1) = (1, 1) ∉ W.
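Example 2.10/(c) can also be checked numerically for a concrete choice of A: matrices B with AB = 0 stay in W under addition and scalar multiplication. A pure-Python sketch (the small matrices chosen here are assumptions for illustration, not from the notes):

```python
# Example 2.10/(c) numerically: with A fixed, {B : AB = 0} is closed
# under addition and scalar multiplication.

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(B1, B2):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(B1, B2)]

def msmul(a, B):
    return [[a * x for x in row] for row in B]

A = [[1, -1]]                 # a 1x2 matrix (illustrative choice)
B1 = [[1, 2], [1, 2]]         # satisfies A·B1 = 0
B2 = [[3, 0], [3, 0]]         # satisfies A·B2 = 0
zero = [[0, 0]]

assert matmul(A, B1) == zero and matmul(A, B2) == zero
assert matmul(A, madd(B1, B2)) == zero          # closed under +
assert matmul(A, msmul(5, B1)) == zero          # closed under scalars
```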
Proposition 2.11. Let W1, W2 be subspaces of a vector space V over a field F. Then the intersection W1 ∩ W2 and the sum of subspaces
W1 + W2 := {x1 + x2 ∈ V | x1 ∈ W1, x2 ∈ W2}
are again subspaces of V.
For W1 + W2:
3 Bases
3.1 Spanning
(b) The subset of V consisting of all linear combinations of x1 , . . . , xn is called the span of x1 , . . . , xn
and is denoted by Span(x1 , . . . , xn ) (or SpanF (x1 , . . . , xn )); i.e.
Example 3.2. (a) Let V := Mn×m (F) be the vector space of (n × m)-matrices with entries in F. For
i ∈ {1, . . . , n} and j ∈ {1, . . . , m}, let Ei j denote the (n × m)-matrix with zeroes everywhere except
at (i j) where it has the entry 1. Then the matrices Ei j ; i = 1, . . . , n; j = 1, . . . , m form a spanning
set of V .
Proof : Let A = (ai j ) ∈ Mn×m (F) be an arbitrary matrix. Then A = ∑ni=1 ∑mj=1 ai j Ei j .
For example: [ 2 3 ; −1 5 ] = 2[ 1 0 ; 0 0 ] + 3[ 0 1 ; 0 0 ] + (−1)[ 0 0 ; 1 0 ] + 5[ 0 0 ; 0 1 ] = 2E11 + 3E12 + (−1)E21 + 5E22.
(b) Do the vectors (1, i), (i, 2) ∈ C2 span the vector space C2 over C?
Solution. Let (w, z) ∈ C2 be an arbitrary vector.
We want to know if we can find a1, a2 ∈ C such that a1(1, i) + a2(i, 2) = (w, z).
Writing the corresponding augmented matrix and row-reducing:
[ 1 i | w ; i 2 | z ] −(R2 ↦ R2 − iR1)→ [ 1 i | w ; 0 3 | z − iw ].
As in Linear Algebra I we conclude that this system is solvable. (Theorem 3.15 of L.A.I.)
Thus (1, i), (i, 2) span C2.
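The row reduction in Example 3.2/(b) even gives explicit coordinates: back-substitution yields a2 = (z − iw)/3 and then a1 = w − ia2. A quick numerical check (illustration only; the function name `coords` is ad hoc):

```python
# Coordinates of an arbitrary (w, z) ∈ C² with respect to (1, i), (i, 2),
# read off from the row-reduced system: 3·a2 = z − i·w, a1 = w − i·a2.

def coords(w, z):
    a2 = (z - 1j * w) / 3
    a1 = w - 1j * a2
    return a1, a2

w, z = 2 + 1j, -1 + 4j          # an arbitrary target vector
a1, a2 = coords(w, z)
# check a1·(1, i) + a2·(i, 2) == (w, z), up to floating-point error
assert abs(a1 * 1 + a2 * 1j - w) < 1e-12
assert abs(a1 * 1j + a2 * 2 - z) < 1e-12
```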
Proposition 3.3. Let V be a vector space over a field F. Let x1 , . . . , xn ∈ V . Then Span(x1 , . . . , xn ) is the
smallest subspace of V that contains x1 , . . . , xn . Furthermore:
• The statement (b) implies that the span of the column vectors of any matrix does not change when
performing (standard) column operations.
Span(x1, . . . , xn) contains x1, . . . , xn:
. . . because xi = 0F · x1 + · · · + 0F · xi−1 + 1 · xi + 0F · xi+1 + · · · + 0F · xn ∈ Span(x1, . . . , xn).
Span(x1 , . . . , xn ) is the smallest:
Let W be a subspace of V such that x1 , . . . , xn ∈ W .
Let x ∈ Span(x1 , . . . , xn ). Write x = a1 x1 + · · · + an xn with a1 , . . . an ∈ F.
=⇒ a1 x1 , . . . , an xn ∈ W (by condition 2.8/(c))
=⇒ x = a1 x1 + · · · + an xn ∈ W . (by condition 2.8/(b))
=⇒ Span(x1 , . . . , xn ) ⊆ W .
Part (a): Span(x1, . . . , xn) ⊆ Span(x1, . . . , xn, x) =: W
(because W is a subspace of V and x1, . . . , xn ∈ W)
Span(x1, . . . , xn, x) ⊆ Span(x1, . . . , xn) =: W̃
(because W̃ is a subspace of V and x1, . . . , xn, x ∈ W̃)
Part (b): Span(x1, . . . , xn) ⊆ Span(x1, x2 − a2x1, . . . , xn − anx1) =: W
(because W is a subspace of V and x1, . . . , xn ∈ W)
Span(x1, x2 − a2x1, . . . , xn − anx1) ⊆ Span(x1, . . . , xn) =: W̃
(because W̃ is a subspace of V and x1 ∈ W̃ and for i = 2, . . . , n also xi − aix1 ∈ W̃)
Definition 3.4. Let V be a vector space over a field F. Let x1 , . . . , xn ∈ V . We say that x1 , . . . , xn are
linearly independent (over F) if the following condition holds:
if a1 , . . . , an ∈ F and a1 x1 + · · · + an xn = 0V then a1 = · · · = an = 0F .
Otherwise we say that x1 , . . . , xn are linearly dependent. A linear combination a1 x1 + · · · + an xn is called
trivial if a1 = · · · = an = 0, otherwise it is called non-trivial. (See also Section 6.5 of L.A.I.)
Note. x1 , . . . , xn are linearly dependent ⇐⇒ ∃a1 , . . . , an ∈ F, not all zero, such that a1 x1 + · · · + an xn =
0V . In other words, ⇐⇒ there exists a non-trivial linear combination of x1 , . . . , xn which equals 0V .
Example 3.5. (a) Examples as seen in Linear Algebra I. (Section 6.5 of L.A.I.)
(b) The three vectors x1 = (1, 0, 1), x2 = (1, 1, 0), x3 = (0, −1, 1) ∈ F 3 are not linearly independent because x1 − x2 − x3 = 0.
(c) Determine all vectors (c1, c2, c3) ∈ C3 such that x1 := (1, i, 1), x2 := (0, 1, 0), x3 := (c1, c2, c3) ∈ C3 are linearly dependent.
(e) Let I ⊆ R be a non-empty open interval. Recall from 2.10/(d)/(iv) that for any i ∈ N0, we denote by tⁱ the polynomial function I → R, s ↦ sⁱ.
The vectors t⁰, t¹, t², . . . , tⁿ are linearly independent in RI .
Proof : Let a0, . . . , an ∈ R such that a0t⁰ + a1t¹ + · · · + antⁿ = 0
=⇒ a0 + a1s + · · · + ansⁿ = 0 for all s ∈ I
=⇒ a0 = · · · = an = 0, because any non-zero real polynomial of degree at most n has at most n real roots, while the open interval I contains infinitely many points. (This follows from the Fundamental Theorem of Algebra. Alternatively, it can be proved by induction and using long division.)
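The root-counting argument can be made concrete for n = 2: a polynomial a0 + a1s + a2s² vanishing at three distinct points must be zero, because the corresponding Vandermonde matrix has non-zero determinant. A sketch (illustration only; `det3` is an ad hoc helper):

```python
# Linear independence of t⁰, t¹, t² made concrete: if a0 + a1·s + a2·s²
# vanishes at the three distinct points s ∈ {1, 2, 3}, the non-zero
# Vandermonde determinant forces a0 = a1 = a2 = 0.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

points = [1, 2, 3]
vandermonde = [[s ** j for j in range(3)] for s in points]
# non-zero determinant ⇒ the only solution of V·(a0, a1, a2) = 0 is 0,
# i.e. t⁰, t¹, t² are linearly independent as functions on I ⊇ {1, 2, 3}
assert det3(vandermonde) != 0
```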
Proposition 3.6. Let V be a vector space over a field F.
(b) Every subset of any set of linearly independent vectors is linearly independent again. (This is
equivalent to: If a subset of a set of vectors is linearly dependent, then the set itself is linearly
dependent.)
(c) Let x1 , . . . , xn ∈ V and suppose that xi = 0V for some i ∈ {1, . . . , n}, or that xi = x j for some i ̸= j.
Then x1 , . . . , xn are linearly dependent.
(d) If x1 , . . . , xn ∈ V are linearly dependent then at least one vector xi among x1 , . . . , xn is a linear
combination of the other ones.
3.3 Bases
Definition 3.7. Let V be a vector space over a field F. Let x1 , . . . , xn ∈ V . We say that x1 , . . . , xn form a
basis of V if x1 , . . . , xn both span V and are linearly independent. (Compare Definition 6.40 of L.A.I.)
Example 3.8. (a) Let F be a field. The vectors e1 := (1, 0, . . . , 0), . . . , en := (0, . . . , 0, 1) form a basis of F n, called the standard basis of F n (as in Linear Algebra I, Ex 6.41(a)).
(b) The polynomials 1,t, . . . ,t n form a basis of Pn . (see 2.10/(d)/(iv), 3.2/(c)).
(c) 1, i form a basis of the vector space C over R (cf 2.5/(b)).
(d) Determine a basis of the nullspace N(A) ⊆ R4 of the matrix
A := [ 1 −1 3 2 ; 2 −1 6 7 ; 3 −2 9 9 ; −2 0 −6 −10 ] ∈ M4×4(R).
Solution. We perform Gaussian elimination (row operations) towards the reduced row echelon form:
A −(R2 ↦ R2 − 2R1, R3 ↦ R3 − 3R1, R4 ↦ R4 + 2R1)→ [ 1 −1 3 2 ; 0 1 0 3 ; 0 1 0 3 ; 0 −2 0 −6 ]
−(R1 ↦ R1 + R2, R3 ↦ R3 − R2, R4 ↦ R4 + 2R2)→ [ 1 0 3 5 ; 0 1 0 3 ; 0 0 0 0 ; 0 0 0 0 ] =: Ã
Because performing row operations does not change the nullspace of a matrix (see Note below),
we have:
N(A) = N(Ã) = {x ∈ R4 : Ãx = 0}
= {(x1, x2, x3, x4) ∈ R4 : x1 = −3x3 − 5x4, x2 = −3x4}
= {(−3x3 − 5x4, −3x4, x3, x4) : x3, x4 ∈ R}
= {x3(−3, 0, 1, 0) + x4(−5, −3, 0, 1) : x3, x4 ∈ R}
= Span((−3, 0, 1, 0), (−5, −3, 0, 1)).
Note. Row operations do not change the nullspace of a matrix (say A). This is because vectors x are in
N(A) exactly if they are solutions to Ax = 0, i.e. solutions to the homogeneous system of linear equations
described by the matrix A. Row operations on A correspond to the “allowed” operations on the linear
system of equations.
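The basis of N(A) found in Example 3.8/(d) can be sanity-checked by multiplying A with the two spanning vectors. A pure-Python illustration (`matvec` is an ad hoc helper):

```python
# Check that the spanning vectors found for N(A) in Example 3.8/(d)
# really are sent to 0 by A.

def matvec(A, x):
    """Multiply a matrix (list of rows) by a vector (list)."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[ 1, -1,  3,   2],
     [ 2, -1,  6,   7],
     [ 3, -2,  9,   9],
     [-2,  0, -6, -10]]

v1 = [-3,  0, 1, 0]
v2 = [-5, -3, 0, 1]
assert matvec(A, v1) == [0, 0, 0, 0]
assert matvec(A, v2) == [0, 0, 0, 0]
```

Since N(A) is a subspace, every linear combination of v1 and v2 is then also in N(A).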
Proposition 3.9. Let V be a vector space over a field F. Let x1 , . . . , xn ∈ V . The following statements
are equivalent:
(b) x1 , . . . , xn form a minimal spanning set of V (i.e. x1 , . . . , xn span V and after removing any vector
from x1 , . . . , xn the remaining ones don’t span V anymore). (Compare Def 6.23 of L.A.I.)
(c) x1 , . . . , xn form a maximal linearly independent subset of V (i.e. x1 , . . . , xn are linearly independent
and for any x ∈ V the n + 1 vectors x1 , . . . , xn , x are linearly dependent).
x = a1 x1 + · · · + an xn
Proof.
“(a) =⇒ (b)”: 1/Spanning: x1 , . . . , xn span V by definition of a basis.
2/Minimality: Suppose that the spanning set x1 , . . . , xn is not minimal.
Corollary 3.10. Let V be a vector space over a field F. Suppose V = Span(x1, . . . , xn) for some x1, . . . , xn ∈ V. Then a subset of x1, . . . , xn forms a basis of V. In particular V has a basis.
3.4 Dimension
Theorem 3.11 (This Theorem allows us to define the dimension of a vector space.). Let V be a vector
space over a field F. Suppose x1 , . . . , xn and y1 , . . . , ym both form a basis of V . Then m = n. (Compare
Thm 6.44 from L.A.I.)
Definition 3.12. Let V be a vector space over a field F. If the vectors x1 , . . . , xn ∈ V form a basis of V ,
we say that V is of finite dimension, and call n the dimension of V . We write dimF (V ) or just dim(V )
for n. Note that n does not depend on the chosen basis x1 , . . . , xn by 3.11.
(d) dimC (C3 ) = 3, dimR (C3 ) = 6. In general dimC (Cn ) = n, dimR (Cn ) = 2n.
(h) About dimF(Span(x1, . . . , xn)): we determine its dimension by finding a basis, i.e. a subset of x1, . . . , xn which still spans Span(x1, . . . , xn) and is linearly independent. For example:
For every i ∈ {1, . . . , m} we can write yi = ai1x1 + · · · + ainxn for some ai1, . . . , ain ∈ F.
(because x1 , . . . , xn span V )
=⇒ For all c1 , . . . , cm ∈ F we have: (using the axioms of a vector space)
Corollary 3.15 (Two-out-of-three basis criterion). Let V be a vector space over a field F. Let x1, . . . , xn ∈ V. Suppose two of the following three statements hold. Then x1, . . . , xn form a basis of V:
(c) Let V be a vector space of finite dimension over a field F and let W be a subspace of V . Then
dimF (W ) ≤ dimF (V ).
Proof : If vectors are L.I. in W , they are also L.I. in V . (by def. of L.I.)
=⇒ Any L.I. subset of W has at most dim(V ) elements (use 3.14 and 3.9 (a) ⇐⇒ (c))
=⇒ dim(W ) ≤ dim(V ) (by 3.9 (a) ⇐⇒ (c))
4 Linear Transformations
with entries ai j in F. We use the notation Mm×n (F) for the set of all (m × n)-matrices over F (see also
2.7/(b)). We define addition and multiplication of matrices (and other notions) in the same way as in the
case F = R (as seen in Linear Algebra I).
For example:
• [ 1 1+i ; 2 1−i ] · [ 1−i ; 3 ] = [ 1 − i + 3 + 3i ; 2 − 2i + 3 − 3i ] = [ 4 + 2i ; 5 − 5i ] (matrices over C)
• [ 0 1 ; 1 1 ] · [ 1 1 ; 0 1 ] = [ 0 1 ; 1 0 ] (as matrices over F2)
• but [ 0 1 ; 1 1 ] · [ 1 1 ; 0 1 ] = [ 0 1 ; 1 2 ] (as matrices over R)
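The F2-versus-R example can be reproduced in code by performing the same multiplication with and without reduction mod 2. An illustrative sketch (the optional `reduce` parameter is an ad hoc device, not notation from the notes):

```python
# The same matrix product computed over F₂ (arithmetic mod 2) and over R
# gives different results.

def matmul(X, Y, reduce=lambda v: v):
    """Multiply matrices; `reduce` post-processes each entry (e.g. mod 2)."""
    n, m, p = len(X), len(Y), len(Y[0])
    return [[reduce(sum(X[i][k] * Y[k][j] for k in range(m)))
             for j in range(p)] for i in range(n)]

X = [[0, 1], [1, 1]]
Y = [[1, 1], [0, 1]]

assert matmul(X, Y, reduce=lambda v: v % 2) == [[0, 1], [1, 0]]  # over F2
assert matmul(X, Y) == [[0, 1], [1, 2]]                          # over R
```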
Definition 4.2. Let V,W be vector spaces over a field F. A map L : V → W is called a linear transfor-
mation if the following two conditions hold:
LA : F n → F m, x = (x1, . . . , xn) ↦ Ax = (a11x1 + · · · + a1nxn, . . . , am1x1 + · · · + amnxn)
is a linear transformation.
(Compare
with Lemma 5.3 in L.A.I.)
For example, if A = [ a 0 ; 0 a ] ∈ M2×2(R) for some a ∈ R then LA : R2 → R2 is given by x ↦ ax, i.e. it is a stretch of the plane by a factor of a.
If A = [ cos(φ) sin(φ) ; −sin(φ) cos(φ) ] for some 0 ≤ φ < 2π then LA : R2 → R2 is the clockwise rotation by the angle φ.
2/ Let a ∈ F and x ∈ F n .
=⇒ LA (ax) = A(ax) = a(Ax) = a(LA (x)).
(The middle equality of both chains of equalities has been proved in Linear Algebra I for F = R,
see Thm 2.13(i) and (ii), the same proof works for any field F.)
(b) Let V be a vector space over a field F. Then the following maps are linear transformations (cf. Ex-
ample 5.4(c),(d) in L.A.I.):
• id: V → V , x 7→ x (identity)
• 0: V → V , x 7→ 0V (zero map)
• the map V → V , given by x 7→ ax, for any given a ∈ F fixed (stretch)
(c) Let L : V → W and M : W → Z be linear transformation between vector spaces over a field F.
Then their composition M ◦ L : V → Z is again a linear transformation. (See also Section 5.3 of
L.A.I.)
(d) Let V be the subspace of RR consisting of all differentiable functions. Then differentiation D :
V → RR , f 7→ f ′ , is a linear transformation.
Proof.
1/ Let f , g ∈ V =⇒ D( f + g) = ( f + g)′ = f ′ + g′ = D( f ) + D(g).
2/ Let a ∈ R and f ∈ V =⇒ D(a f ) = (a f )′ = a f ′ = a(D( f )).
(The middle equality in both chains of equalities has been proved in Calculus.)
(e) The map L : R2 → R, (x1, x2) ↦ x1x2, is not a linear transformation.
Proof. Let a = 2 and x = (1, 1) ∈ R2. Then:
L(ax) = L((2, 2)) = 4, but aL(x) = 2 · 1 = 2.
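The counterexample computation in (e) as a quick numerical check (illustration only):

```python
# L(x₁, x₂) = x₁·x₂ fails the scalar-multiplication condition,
# so it is not a linear transformation.

def L(x):
    return x[0] * x[1]

a, x = 2, (1, 1)
ax = (a * x[0], a * x[1])
assert L(ax) == 4
assert a * L(x) == 2
assert L(ax) != a * L(x)   # linearity would force equality here
```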
Proposition 4.4 (Matrix representation I). Let F be a field. Let L : F n → F m be a linear transformation.
Then there exists a unique matrix A ∈ Mm×n (F) such that L = LA (as defined in 4.3/(a)). In this case we
say that A represents L (with respect to the standard bases of F n and F m ).
(See also Theorem 5.6 of L.A.I.)
For example, the map R3 → R2, (c1, c2, c3) ↦ (2c1 + c3 − 4c2, c2), is represented by A = [ 2 −4 1 ; 0 1 0 ] ∈ M2×3(R).
Proof. Let e1 := (1, 0, . . . , 0), . . . , en := (0, . . . , 0, 1) denote the standard basis of F n.
Uniqueness: Suppose A ∈ Mm×n (F) satisfies L = LA .
=⇒ The jth column of A is Ae j = LA (e j ) = L(e j ) (for j = 1, . . . , n)
=⇒ A is the (m × n)-matrix with the column vectors L(e1), . . . , L(en).
Existence: Let A be defined this way. We want to show L = LA.
Let c = (c1, . . . , cn) ∈ F n =⇒ c = c1e1 + · · · + cnen;
=⇒ L(c) = L(c1e1) + · · · + L(cnen) = c1L(e1) + · · · + cnL(en)
and LA(c) = . . . = c1LA(e1) + · · · + cnLA(en)
(because L and LA are linear transformations)
=⇒ L(c) = LA(c) (because L(e j) = LA(e j) for all j = 1, . . . , n).
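The proof gives a recipe: to represent a linear map L : Rⁿ → Rᵐ, evaluate L on the standard basis vectors and use the results as columns. A sketch using the example map (c1, c2, c3) ↦ (2c1 + c3 − 4c2, c2) from above (illustration only):

```python
# Proposition 4.4 as a recipe: the representing matrix has
# columns L(e₁), ..., L(eₙ).

def L(c):
    """The example linear map R³ → R², (c1, c2, c3) ↦ (2c1 - 4c2 + c3, c2)."""
    return (2 * c[0] - 4 * c[1] + c[2], c[1])

n = 3
columns = []
for j in range(n):
    e_j = tuple(1 if k == j else 0 for k in range(n))   # standard basis vector
    columns.append(L(e_j))
# rows of A are read off by transposing the list of columns
A = [list(row) for row in zip(*columns)]
assert A == [[2, -4, 1], [0, 1, 0]]
```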
Definition 4.5. Let L : V → W be a linear transformation between vector spaces V,W over a field F.
Then
ker(L) := {x ∈ V : L(x) = 0W }
is called the kernel of L, and
im(L) := {y ∈ W : ∃x ∈ V : y = L(x)}
ker(LA ) = N(A)
where N(A) = {c ∈ F n : Ac = 0} denotes the nullspace of A (see also Section 6.2 of L.A.I.) and
im(LA ) = Col(A)
where Col(A) denotes the column space of A; i.e. Col(A) = Span(a1 , . . . , an ), where a1 , . . . , an denote
the n columns of A. (See also Section 6.4 of L.A.I.)
Proposition 4.7. Let V and W be vector spaces over a field F and let L : V → W be a linear transfor-
mation. Then:
Example 4.8. Let A ∈ M4×4(R) be as in 3.8/(d). Find a basis of the image, im(LA), of LA : R4 → R4, c ↦ Ac.
Solution. The column vectors (1, 2, 3, −2) and (0, 1, 1, −2) span im(LA)
(because im(LA) = Col(A) = Col(Ã) by 4.6 and 3.3/(b), where Ã is obtained from A by column operations)
=⇒ They form a basis of im(LA).
(because they are also L.I., as they are not multiples of each other) □
Proposition 4.9. Let V and W be vector spaces over a field F and let L : V → W be a linear transfor-
mation. Let x1 , . . . , xn ∈ V . Then:
(ii) If L(x1 ), . . . , L(xn ) are linearly independent, then x1 , . . . , xn are linearly independent.
Proof. (a) First, Span(L(x1 ), . . . , L(xn )) ⊆ im(L) (by 3.3 Note (i)).
For the other inclusion, let y ∈ im(L);
=⇒ ∃x ∈ V such that y = L(x)
and ∃a1 , . . . , an ∈ F such that x = a1 x1 + · · · + an xn (since V = Span(x1 , . . . , xn ))
=⇒ y = L(x) = L(a1 x1 + · · · + an xn ) =
Proposition 4.10 (Kernel Criterion). Let V and W be vector spaces over a field F, and let L : V → W be a linear transformation. Then: L is injective if and only if ker(L) = {0V}.
Proof. “=⇒”:
Let x ∈ ker(L) =⇒ L(x) = 0W .
We also have L(0V ) = 0W .
=⇒ x = 0V . (by injectivity)
“⇐=”:
Let x, y ∈ V such that L(x) = L(y);
=⇒ L(x − y) = L(x) − L(y) = 0W ;
=⇒ x − y = 0V (since ker(L) = {0V })
=⇒ x = y.
4.3 Isomorphism
Definition 4.11. Let V, W be vector spaces over a field F. A bijective linear transformation L : V → W is called an isomorphism. The vector spaces V and W are called isomorphic if there exists an isomorphism L : V → W; we then write V ≅ W.
Example 4.12. (a) For any vector space V over a field F, the identity id : V → V is an isomorphism.
is clearly an isomorphism.
Proposition 4.13. Let V be a vector space over a field F with basis x1, . . . , xn. Then the map
L : F n → V, (a1, . . . , an) ↦ a1x1 + · · · + anxn
is an isomorphism. (We will later use the notation Ix1,...,xn for the map L.)
Proof. (1) Let a = (a1 , . . . , an )T and b = (b1 , . . . , bn )T ∈ F n ;
=⇒ L(a + b) = L((a1 + b1 , . . . , an + bn )T )
= (a1 + b1 )x1 + · · · + (an + bn )xn (by definition of L)
= (a1 x1 + · · · + an xn ) + (b1 x1 + · · · + bn xn ) (by distributivity, commutativity and associativity)
= L(a) + L(b). (by definition of L)
(2) Let a ∈ F and b = (b1 , . . . , bn )T ∈ F n ;
=⇒ L(ab) = L((ab1 , . . . , abn )T )
= (ab1 )x1 + · · · + (abn )xn (by definition of L)
= a(b1 x1 + · · · + bn xn ) (using the axioms of a vector space)
= a(L(b)). (by definition of L)
(3) ker(L) = {(a1 , . . . , an )T ∈ F n : a1 x1 + · · · + an xn = 0V } = {0} (because x1 , . . . , xn are linearly independent)
=⇒ L is injective. (by 4.10)
Finally, L is surjective because im(L) = Span(x1 , . . . , xn ) = V ; hence L is an isomorphism. □
Theorem 4.14. Let V and W be vector spaces over a field F of dimension n and m, respectively. Then
V and W are isomorphic if and only if n = m.
Proof. “⇐=”:
We assume that n = m.
=⇒ We have isomorphisms LV : F n → V and LW : F n → W (by 4.13)
=⇒ LW ◦ LV−1 is an isomorphism between V and W . (by 4.3/(b) and 4.12/(b))
“=⇒”:
We assume that V and W are isomorphic.
Let L : V → W be an isomorphism and let x1 , . . . , xn be a basis of V .
=⇒ L(x1 ), . . . , L(xn ) span im(L) = W (by 4.9/(i))
and are linearly independent (by 4.9/(ii) applied to L−1 and L(x1 ), . . . , L(xn ))
=⇒ L(x1 ), . . . , L(xn ) form a basis of W
=⇒ n = dimF (W ) = m.
Theorem 4.15 (Dimension Theorem). Let V be a vector space over a field F of finite dimension and let
L : V → W be a linear transformation from V to another vector space W over F. Then:
dimF (V ) = dimF (ker(L)) + dimF (im(L)).
Example 4.16.
Let
A = [ 1 −2  2  3 −1
     −3  6 −1  1 −7
      2 −4  5  8 −4 ] .
We want to find dimR (ker(LA )) and dimR (im(LA )) – we do this by finding bases for both ker(LA ) and
im(LA ).
We find a basis of the nullspace N(A). Gaussian elimination transforms
A = [ 1 −2  2  3 −1
     −3  6 −1  1 −7
      2 −4  5  8 −4 ]
into
Ã := [ 1 −2 0 −1  3
       0  0 0  0  0
       0  0 1  2 −2 ] .
Then
N(A) = {x ∈ R5 : Ax = 0} = {x ∈ R5 : Ãx = 0}
= {(x1 , x2 , x3 , x4 , x5 )T ∈ R5 : x1 = 2x2 + x4 − 3x5 , x3 = −2x4 + 2x5 }
= {(2x2 + x4 − 3x5 , x2 , −2x4 + 2x5 , x4 , x5 )T : x2 , x4 , x5 ∈ R}
= {x2 (2, 1, 0, 0, 0)T + x4 (1, 0, −2, 1, 0)T + x5 (−3, 0, 2, 0, 1)T : x2 , x4 , x5 ∈ R}
= SpanR ((2, 1, 0, 0, 0)T , (1, 0, −2, 1, 0)T , (−3, 0, 2, 0, 1)T ).
The three vectors above are linearly independent, since if a1 , a2 , a3 ∈ R and
a1 (2, 1, 0, 0, 0)T + a2 (1, 0, −2, 1, 0)T + a3 (−3, 0, 2, 0, 1)T = (0, 0, 0, 0, 0)T ,
then a1 = a2 = a3 = 0 by looking at the second, fourth and fifth coordinates, respectively.
Thus these three vectors are a basis of N(A), so dimR (ker(LA )) = dimR (N(A)) = 3.
We now find a basis of the image im(LA ). Column operations transform A into
[ 1 0 0 0 0
 −3 0 5 0 0
  2 0 1 0 0 ] .
So the vectors (1, −3, 2)T and (0, 5, 1)T span im(LA ). Since they are obviously not multiples of each other, they
are linearly independent, hence form a basis of im(LA ). Consequently, dimR (im(LA )) = 2.
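Both dimensions can be cross-checked numerically; the following is a sketch assuming NumPy is available, using the rank of A for dim im(LA) and the Dimension Theorem for dim ker(LA).

```python
import numpy as np

A = np.array([[1, -2, 2, 3, -1],
              [-3, 6, -1, 1, -7],
              [2, -4, 5, 8, -4]], dtype=float)

rank = np.linalg.matrix_rank(A)   # dim im(L_A) = rank of A
nullity = A.shape[1] - rank       # dim ker(L_A), by the Dimension Theorem

# The three basis vectors of N(A) found above should all satisfy Ax = 0.
basis = np.array([[2, 1, 0, 0, 0],
                  [1, 0, -2, 1, 0],
                  [-3, 0, 2, 0, 1]], dtype=float).T
print(rank, nullity)              # expect 2 and 3
assert np.allclose(A @ basis, 0)
```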
Proposition 4.17 (Matrix representation II). Let V and W be vector spaces over a field F with bases
x1 , . . . , xn and y1 , . . . , ym , respectively. Let L : V → W be a linear transformation. Then there exists a
unique matrix A ∈ Mm×n (F) that represents L with respect to x1 , . . . , xn and y1 , . . . , ym . Here we say that
A = (ai j ) ∈ Mm×n (F) represents L with respect to x1 , . . . , xn and y1 , . . . , ym if for all c1 , . . . , cn , d1 , . . . , dm ∈
F we have
L(c1 x1 + · · · + cn xn ) = d1 y1 + · · · + dm ym ⇐⇒ (d1 , . . . , dm )T = A (c1 , . . . , cn )T .
Proof. Let c1 , . . . , cn ∈ F and put (d1 , . . . , dm )T := A (c1 , . . . , cn )T . Then
(L ◦ Ix1 ,...,xn )((c1 , . . . , cn )T ) = L(c1 x1 + · · · + cn xn )
and (Iy1 ,...,ym ◦ LA )((c1 , . . . , cn )T ) = Iy1 ,...,ym ((d1 , . . . , dm )T ) = d1 y1 + · · · + dm ym .
Hence: L(c1 x1 + · · · + cn xn ) = d1 y1 + · · · + dm ym
⇐⇒ (L ◦ Ix1 ,...,xn )((c1 , . . . , cn )T ) = (Iy1 ,...,ym ◦ LA )((c1 , . . . , cn )T )
Therefore: A represents L ⇐⇒ L ◦ Ix1 ,...,xn = Iy1 ,...,ym ◦ LA .
Note. Given L, x1 , . . . , xn and y1 , . . . , ym as in 4.17 we find the corresponding matrix A as follows: For
each i = 1, . . . , n we compute L(xi ), represent L(xi ) as a linear combination of y1 , . . . , ym and write the
coefficients of this linear combination into the ith column of A.
Example 4.18. Find the matrix A ∈ M3×4 (R) representing differentiation D : P3 → P2 , f 7→ f ′ , with
respect to the bases 1,t,t 2 ,t 3 and 1,t,t 2 of P3 and P2 , respectively.
Solution. We have
D(1) = 0 = 0 + 0t + 0t 2
D(t) = 1 = 1 + 0t + 0t 2
D(t 2 ) = 2t = 0 + 2t + 0t 2
D(t 3 ) = 3t 2 = 0 + 0t + 3t 2
=⇒ A = [ 0 1 0 0
         0 0 2 0
         0 0 0 3 ] .
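Since A acts on coefficient vectors, the matrix can be sanity-checked by differentiating a sample polynomial; a small sketch assuming NumPy (the sample f is arbitrary):

```python
import numpy as np

# Matrix of Example 4.18, representing D : P3 -> P2 w.r.t. 1, t, t^2, t^3 and 1, t, t^2.
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]], dtype=float)

f = np.array([5, 4, 3, 2], dtype=float)  # coefficients of f(t) = 5 + 4t + 3t^2 + 2t^3
df = A @ f                               # should be the coefficients of f' = 4 + 6t + 6t^2
print(df)
assert np.allclose(df, [4, 6, 6])
```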
Example 4.19. Let B := [ 1 −1 ; 2 4 ] ∈ M2×2 (R). Find the matrix A ∈ M2×2 (R) representing the linear
transformation LB : R2 → R2 , x 7→ Bx, with respect to the basis (1, −2)T , (1, −1)T of R2 (used for both
source and target space).
Solution.
LB ((1, −2)T ) = B (1, −2)T = (3, −6)T = 3 (1, −2)T + 0 (1, −1)T
LB ((1, −1)T ) = B (1, −1)T = (2, −2)T = 0 (1, −2)T + 2 (1, −1)T
=⇒ A = [ 3 0 ; 0 2 ] .
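Equivalently, if M is the matrix whose columns are the chosen basis vectors, then A = M−1 BM; a quick numerical check of this example, assuming NumPy:

```python
import numpy as np

B = np.array([[1, -1], [2, 4]], dtype=float)
M = np.array([[1, 1], [-2, -1]], dtype=float)  # columns = the chosen basis of R^2

# Representing matrix with respect to that basis:
A = np.linalg.inv(M) @ B @ M
print(A)                                        # expect diag(3, 2), as computed above
assert np.allclose(A, [[3, 0], [0, 2]])
```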
5 Determinants
In Linear Algebra I the determinant of a square matrix has been defined axiomatically (cf. Theorem 5.3
here). Here we begin with the following closed formula.
Definition 5.1 (Leibniz’ definition of determinant). Let F be a field. Let n ≥ 1 and A = (ai j )i, j=1,...,n ∈
Mn×n (F). Then
det(A) := ∑σ ∈Sn sgn(σ ) ∏i=1,...,n ai,σ (i) ∈ F.
For n = 2 we have S2 = {id, ⟨1, 2⟩} and
det(A) = sgn(id)a11 a22 + sgn(⟨1, 2⟩)a12 a21 = a11 a22 − a12 a21 .
For example: if A = [ 1 + 2i  3 + 4i ; 1 − 2i  2 − i ] ∈ M2×2 (C) then
det(A) = (1 + 2i)(2 − i) − (3 + 4i)(1 − 2i) = (2 + 2 + 4i − i) − (3 + 8 + 4i − 6i) = −7 + 5i.
(c) Let n = 3 and A = [ a11 a12 a13
                        a21 a22 a23
                        a31 a32 a33 ] ∈ M3×3 (F).
We have S3 = {id, ⟨1, 2, 3⟩, ⟨1, 3, 2⟩, ⟨1, 3⟩, ⟨2, 3⟩, ⟨1, 2⟩} and
det(A) = sgn(id) a11 a22 a33 + sgn(⟨1, 2, 3⟩) a12 a23 a31 + sgn(⟨1, 3, 2⟩) a13 a21 a32
+ sgn(⟨1, 3⟩) a13 a22 a31 + sgn(⟨2, 3⟩) a11 a23 a32 + sgn(⟨1, 2⟩) a12 a21 a33
= a11 a22 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31 − a11 a23 a32 − a12 a21 a33 ,
since the first three signs equal +1 and the last three equal −1.
Theorem 5.3 (Weierstrass’ axiomatic description of the determinant map). See Defn 4.1 of L.A.I. Let F
be a field. Let n ≥ 1. The map
det : Mn×n (F) → F, A 7→ det(A),
has the following properties and is uniquely determined by these properties:
(a) det is linear in each column:
det(C | as + bs | D) = det(C | as | D) + det(C | bs | D),
where as = (a1s , . . . , ans )T , bs = (b1s , . . . , bns )T and C, D denote the columns of the matrix to the
left and to the right of the sth column.
(b) Multiplying any column of a matrix A ∈ Mn×n (F) with a scalar λ ∈ F changes det(A) by the factor
λ:
det(C | λ as | D) = λ · det(C | as | D).
(d) det(In ) = 1.
Remark 5.4. Theorem 5.3 and the following Corollary 5.5 also hold when “columns” are replaced with
“rows” (similar proofs).
Corollary 5.5. Let F be a field and let A ∈ Mn×n (F). Then:
(c) Let B be obtained from A by swapping two columns of A. Then det(B) = −det(A).
(d) Let λ ∈ F and let B be obtained from A by adding the λ -multiple of the jth column of A to the ith
column of A (i ̸= j). Then det(B) = det(A).
Theorem 5.6. Let F be a field. Let A ∈ Mn×n (F). Then the following are equivalent:
(a) A is invertible.
(b) The columns of A span F n .
(c) N(A) = {0}.
(d) det(A) ̸= 0.
Proof.
“(a) =⇒ (b)”: We’ll prove the following more precise statement:
(⋆) ∃B ∈ Mn×n (F) such that AB = In ⇐⇒ The columns of A span F n .
Proof of (⋆): Let a1 . . . , an denote the columns of A. Then:
LHS ⇐⇒ ∃b1 , . . . , bn ∈ F n such that Abi = ei for all i = 1, . . . , n
(we can use the columns of B = (b1 , . . . , bn ))
n
⇐⇒ ∃b1 , . . . , bn ∈ F such that a1 bi1 + · · · + an bin = ei for all i = 1, . . . , n
⇐⇒ e1 , . . . , en ∈ Span(a1 , . . . , an )
⇐⇒ RHS.
“(b) ⇐⇒ (c)”: The columns of A span F n
⇐⇒ The columns of A form a basis of F n (by 3.15)
⇐⇒ The columns of A are linearly independent (by 3.15)
⇐⇒ N(A) = {0} (by definition of N(A))
“(b) =⇒ (a)”: ∃B ∈ Mn×n (F) such that AB = In (by (⋆))
=⇒ A(BA) = (AB)A = In A = A = AIn
=⇒ A(BA − In ) = 0
=⇒ Every column of BA − In belongs to N(A)
=⇒ BA − In = 0 (because N(A) = {0} by (b) ⇐⇒ (c))
=⇒ BA = In
=⇒ A is invertible (as both AB = In and BA = In )
“(b) ⇐⇒ (d)”: We apply column operations to the matrix A until we arrive at a lower triangular matrix
C. Then:
The columns of A span F n
⇐⇒ the columns of C span F n (by 3.3/(b))
⇐⇒ all the diagonal elements of C are non-zero (because C is triangular)
⇐⇒ det(C) ̸= 0 (by 5.2/(d))
⇐⇒ det(A) ̸= 0 (because det(C) = λ det(A) for some non-zero λ ∈ F by 5.5)
Theorem 5.7. (See also Thm 4.21 of L.A.I.) Let F be a field. Let A, B ∈ Mn×n (F). Then:
det(AB) = det(A) det(B).
Proof. Omitted. (See the proof of Thm 4.21 in L.A.I., it works for any field F.)
Example 5.8. For each m ∈ N compute det(Am ) ∈ C, where A = [ 1 + 4i  1 ; 5 + i  1 − i ] ∈ M2×2 (C).
Solution. det(A) = (1 + 4i)(1 − i) − (5 + i) = (5 + 3i) − (5 + i) = 2i, so det(Am ) = det(A)m = (2i)m by 5.7.
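A numerical cross-check of this example, assuming NumPy (complex entries are written 1+4j etc. in Python):

```python
import numpy as np

A = np.array([[1 + 4j, 1], [5 + 1j, 1 - 1j]])
d = np.linalg.det(A)
print(d)                 # expect 2i
assert abs(d - 2j) < 1e-9

# By multiplicativity of det, det(A^m) = det(A)^m; spot-check for m = 3:
assert abs(np.linalg.det(np.linalg.matrix_power(A, 3)) - (2j) ** 3) < 1e-9
```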
6 Diagonalisability
6.1 Eigen-things
Definition 6.1. Let V be a vector space over a field F and let L : V → V be a linear transformation from
V to itself.
(a) For λ ∈ F the set Eλ (L) := {x ∈ V : L(x) = λ x} is called the eigenspace of L with eigenvalue λ .
(b) An element λ ∈ F is called an eigenvalue of L if Eλ (L) is not the zero space. In this case any
vector x in Eλ (L) different from the zero vector is called an eigenvector of L with eigenvalue λ .
(c) Let A ∈ Mn×n (F). The eigenspaces, eigenvalues and eigenvectors of A are, by definition, those of
LA : F n → F n , x 7→ Ax.
Proposition 6.2. Let F, V and L be as in Defn 6.1. Then Eλ (L) is a subspace of V for every λ ∈ F.
Proposition 6.3. Let F be a field, let A ∈ Mn×n (F) and let λ ∈ F. Then:
λ is an eigenvalue of A ⇐⇒ det(λ In − A) = 0.
(pA (λ ) := det(λ In − A) is called the characteristic polynomial of A) (See also Proposition 7.5 in L.A.I.)
Proof. λ is an eigenvalue of A
⇐⇒ ∃x ∈ F n , x ̸= 0, such that Ax = λ x
⇐⇒ ∃x ∈ F n , x ̸= 0, such that (λ In − A)x = 0
⇐⇒ N(λ In − A) ̸= {0}
⇐⇒ det(λ In − A) = 0. (by 5.6 (c) ⇐⇒ (d))
Example 6.4. Find the eigenvalues of A = [ 5i  3 ; 2  −2i ] ∈ M2×2 (C) and a basis of each eigenspace.
Solution.
pA (λ ) = det(λ I2 − A) = det [ λ − 5i   −3 ; −2   λ + 2i ]
= (λ − 5i)(λ + 2i) − 6 = λ 2 − (3i)λ + 4
The two roots of this polynomial are λ1,2 = (3i ± √(−9 − 16))/2 = (3i ± 5i)/2, i.e. 4i or −i;
=⇒ eigenvalues of A are 4i and −i.
Basis of E4i (A): Apply Gaussian elimination to
4iI2 − A = [ −i −3 ; −2 6i ] −(R1 7→ iR1)→ [ 1 −3i ; −2 6i ] −(R2 7→ R2 + 2R1)→ [ 1 −3i ; 0 0 ]
=⇒ E4i (A) = {(x1 , x2 )T ∈ C2 : x1 = (3i)x2 } = {((3i)x2 , x2 )T : x2 ∈ C} = Span((3i, 1)T )
=⇒ a basis of E4i (A) is (3i, 1)T (as it is L.I.).
Basis of E−i (A): Apply Gaussian elimination to
−iI2 − A = [ −6i −3 ; −2 i ] −(R1 ↔ R2)→ [ −2 i ; −6i −3 ] −(R2 7→ R2 − (3i)R1)→ [ −2 i ; 0 0 ]
=⇒ E−i (A) = {(x1 , x2 )T ∈ C2 : x1 = (i/2)x2 } = {((i/2)x2 , x2 )T : x2 ∈ C} = Span((i/2, 1)T )
=⇒ a basis of E−i (A) is (i, 2)T (as it is L.I.).
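Assuming the matrix of this example is A = [ 5i 3 ; 2 −2i ] (read off from λ I2 − A above), the eigenvalues and the two basis vectors can be cross-checked numerically with NumPy:

```python
import numpy as np

A = np.array([[5j, 3], [2, -2j]])   # matrix inferred from lambda*I_2 - A above

eigvals = np.linalg.eig(A)[0]
print(sorted(eigvals, key=lambda z: z.imag))   # expect -i and 4i
assert np.allclose(sorted(eigvals, key=lambda z: z.imag), [-1j, 4j])

# The vectors (3i, 1)^T and (i, 2)^T found above are indeed eigenvectors:
assert np.allclose(A @ np.array([3j, 1]), 4j * np.array([3j, 1]))
assert np.allclose(A @ np.array([1j, 2]), -1j * np.array([1j, 2]))
```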
Example 6.5. Let V be the real vector space of infinitely often differentiable functions from R to R and
let D : V → V, f 7→ f ′ , denote differentiation (cf. 4.3/(d)). Then for every λ ∈ R the eigenspace of D
with eigenvalue λ is of dimension 1 with basis given by the function expλ : R → R, t 7→ eλt .
6.2 Diagonalisability
Definition 6.6. (a) Let F, V and L : V → V be as in Defn 6.1. We say that L is diagonalisable if there
exists a basis x1 , . . . , xn of V such that the matrix D representing L with respect to this basis is a
diagonal matrix.
(b) Let F be a field. We say that a square matrix A ∈ Mn×n (F) is diagonalisable if the linear transfor-
mation LA : F n → F n , x 7→ Ax, is diagonalisable.
Proposition 6.7. Let F, V and L : V → V be as in Defn 6.1. Then L is diagonalisable if and only if V
has a basis x1 , . . . , xn consisting of eigenvectors of L.
Proof. “=⇒”:
Suppose ∃ a basis x1 , . . . , xn of V such that the matrix D representing L is diagonal, with some λ1 , . . . , λn ∈
F on the main diagonal.
=⇒ for any c1 , . . . , cn ∈ F we have L(c1 x1 + · · · + cn xn ) = (λ1 c1 )x1 + · · · + (λn cn )xn (since D(c1 , . . . , cn )T = (λ1 c1 , . . . , λn cn )T )
=⇒ L(xi ) = λi xi for each i (taking ci = 1 and all other coefficients 0)
=⇒ x1 , . . . , xn are eigenvectors of L. (being basis vectors, they are non-zero)
“⇐=”:
If x1 , . . . , xn is a basis of V consisting of eigenvectors of L, say L(xi ) = λi xi , then the matrix representing
L with respect to x1 , . . . , xn is the diagonal matrix with λ1 , . . . , λn on the main diagonal. □
Proposition 6.8. Let F be a field. Let A ∈ Mn×n (F). Then A is diagonalisable if and only if there exists
an invertible matrix M ∈ Mn×n (F) such that M −1 AM is a diagonal matrix. (In this case we say that M
diagonalises A.)
Proof preparation. Let M ∈ Mn×n (F) with column vectors x1 , . . . , xn . Suppose M is invertible. Then:
xi is an eigenvector of A with eigenvalue λi
⇐⇒ Axi = λi xi
⇐⇒ AMei = λi (Mei ) (because xi = Mei is the ith column of M)
⇐⇒ AMei = M(λi ei )
⇐⇒ (M −1 AM)ei = λi ei (multiplying by M −1 )
⇐⇒ the ith column of M −1 AM equals λi ei .
Proof. “=⇒”
A is diagonalisable
=⇒ ∃ a basis x1 , . . . , xn of F n consisting of eigenvectors of A (by 6.7)
=⇒ The matrix M ∈ Mn×n (F) with columns x1 , . . . , xn is invertible (by 5.6 (b) =⇒ (a))
and M −1 AM is diagonal. (by “preparation” above)
“⇐=”
There exists an invertible M ∈ Mn×n (F) such that M −1 AM is diagonal
=⇒ the columns of M are eigenvectors of A (by “preparation” above)
and they form a basis of F n . (by 5.6 (a) =⇒ (b) and 3.15)
=⇒ A is diagonalisable. (by 6.7)
Example 6.9. Show that the matrix
A := [ 0 −1 1
      −3 −2 3
      −2 −2 3 ] ∈ M3×3 (R)
is diagonalisable and find an invertible matrix M ∈ M3×3 (R) that diagonalises it.
Solution. First compute the characteristic polynomial of A:
pA (λ ) = det(λ I3 − A) = det [ λ    1    −1
                               3  λ + 2  −3
                               2    2   λ − 3 ]
= λ (λ + 2)(λ − 3) + 1(−3)2 + (−1)3 · 2 − (−1)(λ + 2)2 − λ (−3)(2) − 1 · 3(λ − 3)
= λ (λ 2 − λ − 6) − 6 − 6 + 2λ + 4 + 6λ − 3λ + 9
= λ 3 − λ 2 − λ + 1 = λ 2 (λ − 1) − (λ − 1) = (λ 2 − 1)(λ − 1) = (λ − 1)2 (λ + 1).
=⇒ Eigenvalues of A are 1 and −1.
Basis of E1 (A): We apply Gaussian elimination to 1 · I3 − A:
1 · I3 − A = [ 1 1 −1
               3 3 −3
               2 2 −2 ]  −(R2 7→ R2 − 3R1 , R3 7→ R3 − 2R1)→  [ 1 1 −1
                                                                0 0  0
                                                                0 0  0 ] =: Ã
=⇒ E1 (A) = N(1 · I3 − A) = N(Ã) = {(b1 , b2 , b3 )T ∈ R3 : b1 + b2 − b3 = 0}
= {(−b2 + b3 , b2 , b3 )T : b2 , b3 ∈ R}
= {b2 (−1, 1, 0)T + b3 (1, 0, 1)T : b2 , b3 ∈ R} = Span((−1, 1, 0)T , (1, 0, 1)T ).
Also x1 := (−1, 1, 0)T , x2 := (1, 0, 1)T are L.I. (as they are not multiples of each other)
=⇒ A basis of E1 (A) is x1 , x2 .
Basis of E−1 (A): We apply Gaussian elimination to (−1)I3 − A:
−I3 − A = [ −1 1 −1
             3 1 −3
             2 2 −4 ]  −(R2 7→ R2 + 3R1 , R3 7→ R3 + 2R1)→  [ −1 1 −1
                                                               0 4 −6
                                                               0 4 −6 ]
 −(R1 7→ R1 − (1/4)R2 , R3 7→ R3 − R2)→  [ −1 0 1/2
                                            0 4 −6
                                            0 0  0  ] =: Â
=⇒ E−1 (A) = N(Â) = {(b1 , b2 , b3 )T ∈ R3 : b1 = (1/2)b3 , b2 = (3/2)b3 } = Span((1, 3, 2)T )
=⇒ a basis of E−1 (A) is x3 := (1, 3, 2)T .
Hence x1 , x2 , x3 form a basis of R3 consisting of eigenvectors of A, so A is diagonalisable (by 6.7) and,
by 6.8, the matrix
M = [ −1 1 1
       1 0 3
       0 1 2 ]
with columns x1 , x2 , x3 diagonalises A.
Definition 6.10. Let F be a field, let A ∈ Mn×n (F) and let λ ∈ F be an eigenvalue of A.
(a) The algebraic multiplicity aλ (A) of λ is its multiplicity as a root of the characteristic polynomial
of A.
(b) The geometric multiplicity gλ (A) of λ is the dimension of the eigenspace Eλ (A).
Example 6.11. (a) In Example 6.9 we had pA (λ ) = (λ − 1)2 (λ + 1), so a1 (A) = 2 and a−1 (A) = 1.
Looking at the eigenspaces, we had g1 (A) = 2 and g−1 (A) = 1.
(b) Let A = [ 1 1 ; 0 1 ] ∈ M2×2 (F) (for any field F)
=⇒ pA (λ ) = (λ − 1)2 and a basis of E1 (A) is (1, 0)T
=⇒ a1 (A) = 2 but g1 (A) = 1.
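The multiplicities of Example 6.9/6.11(a) can be verified numerically; the sketch below (assuming NumPy) computes each geometric multiplicity as n − rank(λ In − A):

```python
import numpy as np

# Matrix of Example 6.9; expected eigenvalues: 1 (twice) and -1 (once).
A = np.array([[0, -1, 1], [-3, -2, 3], [-2, -2, 3]], dtype=float)

eigvals = np.linalg.eig(A)[0]
alg = np.sort(eigvals.real)                       # spectrum with algebraic multiplicity

g1 = 3 - np.linalg.matrix_rank(np.eye(3) - A)     # dim E_1(A)  = dim N(I - A)
gm1 = 3 - np.linalg.matrix_rank(-np.eye(3) - A)   # dim E_{-1}(A)

print(np.round(alg, 6), g1, gm1)                  # expect [-1, 1, 1], 2, 1
assert np.allclose(alg, [-1, 1, 1])
assert (g1, gm1) == (2, 1)
```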
Theorem 6.12. Let F be a field. Let A ∈ Mn×n (F). Then A is diagonalisable if and only if the char-
acteristic polynomial of A splits into linear factors and the algebraic multiplicity equals the geometric
multiplicity for each eigenvalue of A.
Proof. Omitted.
Example 6.13. Determine whether the matrix A = [ 0 1 ; −1 0 ] is diagonalisable when viewed as an
element of M2×2 (R), of M2×2 (C) and of M2×2 (F2 ). If A is diagonalisable then determine an invertible
matrix M that diagonalises A.
Solution. pA (λ ) = det [ λ −1 ; 1 λ ] = λ 2 + 1.
For R:
λ 2 + 1 does not split into linear factors
=⇒ as an element of M2×2 (R) the matrix A is not diagonalisable. (by 6.12)
(Actually A is a rotation by 90◦ about the origin.)
For C:
pA (λ ) = λ 2 + 1 = (λ + i)(λ − i)
=⇒ a+i (A) = 1 and a−i (A) = 1.
Basis of Ei (A): We apply Gaussian elimination to iI2 − A:
iI2 − A = [ i −1 ; 1 i ] −(R1 7→ (−i)R1)→ [ 1 i ; 1 i ] −(R2 7→ R2 − R1)→ [ 1 i ; 0 0 ] =: Ã
=⇒ Ei (A) = N(iI2 − A) = N(Ã) = {(b1 , b2 )T ∈ C2 : b1 + ib2 = 0}
= {(−ib2 , b2 )T : b2 ∈ C} = Span((−i, 1)T )
Also (−i, 1)T is linearly independent (as it is not 0)
=⇒ (−i, 1)T is a basis of Ei (A)
=⇒ gi (A) = 1.
Basis of E−i (A): We apply Gaussian elimination to (−i)I2 − A:
−iI2 − A = [ −i −1 ; 1 −i ] −(R1 7→ iR1)→ [ 1 −i ; 1 −i ] −(R2 7→ R2 − R1)→ [ 1 −i ; 0 0 ] =: Â
=⇒ E−i (A) = N((−i)I2 − A) = N(Â) = {(b1 , b2 )T ∈ C2 : b1 − ib2 = 0}
= {(ib2 , b2 )T : b2 ∈ C} = Span((i, 1)T )
Also (i, 1)T is linearly independent (as it is not 0)
=⇒ (i, 1)T is a basis of E−i (A)
=⇒ g−i (A) = 1.
=⇒ A is diagonalisable when viewed as an element of M2×2 (C) (by 6.12)
and M = [ −i i ; 1 1 ] diagonalises A.
For F2 :
In F2 we have −1 = 1, so pA (λ ) = λ 2 + 1 = (λ + 1)2 = (λ − 1)2 ; hence 1 is the only eigenvalue and a1 (A) = 2.
Basis of E1 (A): Here I2 − A = [ 1 1 ; 1 1 ] (as −1 = 1 in F2 ), with row-echelon form [ 1 1 ; 0 0 ].
=⇒ E1 (A) = N(I2 − A) = {(b1 , b2 )T ∈ F22 : b1 + b2 = 0}
= {(b2 , b2 )T : b2 ∈ F2 } = Span((1, 1)T )
Also (1, 1)T is linearly independent (as it is not 0)
=⇒ (1, 1)T is a basis of E1 (A)
=⇒ g1 (A) = 1.
=⇒ A is not diagonalisable. (by 6.12, since a1 (A) = 2 ̸= 1 = g1 (A))
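For the complex case, the diagonalisation found above can be confirmed directly; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]], dtype=complex)
M = np.array([[-1j, 1j], [1, 1]])   # columns = eigenvectors (-i, 1)^T and (i, 1)^T

D = np.linalg.inv(M) @ A @ M
print(np.round(D, 10))              # expect diag(i, -i)
assert np.allclose(D, [[1j, 0], [0, -1j]])
```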
Theorem 6.14 (Cayley–Hamilton Theorem). Let F be a field, let A ∈ Mn×n (F) and let pA be the char-
acteristic polynomial of A. Then pA (A) is the zero matrix.
Example 6.15. Let A = [ 0 1 ; −1 0 ] ∈ M2×2 (F)
=⇒ pA (λ ) = λ 2 + 1 (see Example 6.13)
=⇒ pA (A) = A2 + 1 · I2 = [ −1 0 ; 0 −1 ] + [ 1 0 ; 0 1 ] = [ 0 0 ; 0 0 ] .
Let us verify the Cayley–Hamilton Theorem for a diagonal matrix A ∈ Mn×n (F) with diagonal entries a1 , . . . , an :
=⇒ pA (λ ) = det(λ · In − A) = (λ − a1 ) . . . (λ − an )
=⇒ pA (A) = (A − a1 In ) · · · (A − an In ) = 0,
since each factor A − ai In is the diagonal matrix with diagonal entries a1 − ai , . . . , ai−1 − ai , 0,
ai+1 − ai , . . . , an − ai , so in the product the ith diagonal entry picks up the factor ai − ai = 0 from
the ith matrix,
because the product of any two diagonal matrices with diagonal entries b1 , . . . , bn and c1 , . . . , cn respec-
tively, is the diagonal matrix with diagonal entries b1 c1 , . . . , bn cn .
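The Cayley–Hamilton Theorem also lends itself to a numerical spot-check; the sketch below (assuming NumPy) evaluates pA at the matrix A of Example 6.9, using np.poly for the coefficients of the characteristic polynomial:

```python
import numpy as np

A = np.array([[0, -1, 1], [-3, -2, 3], [-2, -2, 3]], dtype=float)

coeffs = np.poly(A)   # coefficients of p_A(lambda) = det(lambda*I - A), highest degree first
# Evaluate p_A at the matrix A: sum of coeffs[k] * A^(n-k)
pA_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
              for k, c in enumerate(coeffs))

print(np.round(pA_of_A, 8))   # expect the zero matrix
assert np.allclose(pA_of_A, 0)
```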
Preparatory Step: If A, M, D ∈ Mn×n (F) are such that M is invertible and D = M −1 AM, then:
pD (λ ) = pA (λ ) and pA (D) = M −1 pA (A)M.
7 Coursework Sheets
Submit a single pdf with scans of your work to Blackboard by Monday, 5 February 2024, 17:00.
Exercise 1
Consider the matrix A = [ 1 1  1
                          3 1  0
                          2 0 −1 ] .
(a) Show that the equation Ax = (1, 2, 3)T has no solution in R3 .
(b) Find two linearly independent vectors y in R3 such that the equation Ax = y has a solution in R3 .
Exercise 2
(a) Let w, z be complex numbers. Solve the linear equation wx = z; in other words, find all x ∈ C such
that wx = z. (Hint: You need to distinguish three cases.)
(b) Solve the following system of linear equations over C:
(5i)x1 + 3x2 = 12 + i
x1 − (2i)x2 = 3 − i.
Exercise 3
Let A be a real m × n matrix. Recall that its nullspace is the following set of vectors: N(A) = {u ∈ Rn |
Au = 0 ∈ Rm } ⊆ Rn . Prove that N(A) is a subspace of Rn .
Multiplication of complex numbers defines a binary operation on C× := C \ {0}. Show that C× together
with this operation is an abelian group. (Here we consider the multiplication of complex numbers defined
like so: if a + bi and c + di (a, b, c, d ∈ R) are complex numbers, their product is declared to be the
complex number (ac − bd) + (ad + bc)i. In your arguments, you may without further discussion use the
usual laws of algebra for R, such as associativity for addition and multiplication of real numbers.)
Let G be a group such that for all a ∈ G we have a ∗ a = e. Show that G is abelian.
Submit a single pdf with scans of your work to Blackboard by Monday, 12 February 2024, 17:00.
Exercise 1
Let G and H be groups with binary operations ⊞ and ⊙, respectively. We define a binary operation ∗ on
the cartesian product G × H by
Exercise 2
(b) Show that G together with the binary operation G × G → G, (a, b) 7→ a ∗ b, is a group.
Exercise 3
Let G = {s,t, u, v} be a group with s ∗ u = u and t ∗ t = v. Determine the group table of G. (There is only
one way of completing the group table for G. Give a reason for each step.)
Exercise 4
Write down the group tables for the groups C4 and C2 ×C2 (cf. Exercise 1). For every element a in C4
and C2 ×C2 determine the smallest positive integer m such that ma equals the identity element.
Let G be a group whose binary operation is written additively, i.e. G × G → G, (a, b) 7→ a + b. Show
that m(na) = (mn)a for all a ∈ G and m, n ∈ Z. (Hint: You need to distinguish up to 9 cases.) Write
down the other two exponential laws in additive notation as well. (Formulate these laws as complete
mathematical statements including all quantifiers. No proofs are required.)
Submit a single pdf with scans of your work to Blackboard by Monday, 19 February 2024, 17:00.
Exercise 1
Write down the group table for the permutation group S3 and show that S3 is not abelian. (You may find
it more convenient to write all elements of S3 in cycle notation.)
Exercise 2
Let σ := [ 1 2 3 4 5 6 7 8 9
           3 7 6 1 8 9 4 2 5 ] ∈ S9 ,   τ := [ 1 2 3 4 5 6
                                               4 2 5 6 3 1 ] ∈ S6
and η := [ 1   2  . . .  n−1  n
           n  n−1 . . .   2   1 ] ∈ Sn (for any even n ∈ N).
(c) Determine the sign of σ 2 , τ 2 and η 2 in two ways, firstly using (b) and secondly using (a) and
Theorem 1.10 (b).
Exercise 3
Let n ∈ N, let σ ∈ Sn and let ⟨a1 , . . . , as ⟩ be a cycle in Sn . Show that
σ ◦ ⟨a1 , . . . , as ⟩ ◦ σ −1 = ⟨σ (a1 ), . . . , σ (as )⟩.
(Note this is an equality between maps. Hence, in order to show this equality you need to show that both
sides are equal after applying them to an arbitrary element b of {1, 2, . . . , n}. To do so you will need to
distinguish whether b belongs to {σ (a1 ), . . . , σ (as )} or not.)
Exercise 4
Let Q(√5) denote the set of real numbers z of the form z = a + b√5 where a, b ∈ Q. Show that Q(√5)
together with the usual addition and multiplication of real numbers is a field. (Hint: You need to show
that for any w, z ∈ Q(√5) also w + z, wz, −z and z−1 (if z ̸= 0) are in Q(√5) and that 0 and 1 are in
Q(√5). Distributivity, commutativity and associativity for addition and multiplication hold in Q(√5)
because they hold in R.)
Exercise 5
(i) a/b + a′ /b′ = (ab′ + a′ b)/(bb′ );   (ii) (a/b) · (a′ /b′ ) = aa′ /(bb′ );
(iii) a/b = a′ /b′ if and only if ab′ = a′ b;
(iv) (a/b)/(a′ /b′ ) = ab′ /(a′ b) (if in addition a′ ̸= 0).
The solutions for this will not be provided (but they can be found in a book or online). This is not
necessary for the rest of the module at all. Feel free to ignore.
Task 1. The aim is to prove Thm 1.10 from the notes, about the sign function on the symmetric groups
Sn . Here’s one possible path to a proof.
• Let σ be a permutation, and write it as a product of transpositions. Define the number nsgn(σ )
(for “new sign”) to be equal to 1 if the number of transpositions is even, and −1 if the number of
transpositions is odd. Again, a priori nsgn depends on how we write σ as a product of transpo-
sitions. However, by the first point above, nsgn(σ ) = sgn(σ ), since every cycle decomposition of
σ also gives a way to write σ as a product of transpositions. So the goal now is to prove that nsgn
is well defined, and that it is multiplicative.
• A way to prove the above is to find a way to characterise nsgn as something intrinsic to a
permutation. Here is such a thing: Given a permutation σ ∈ Sn , we say that σ reverses the pair
(i, j) if i, j ∈ {1, . . . , n}, i < j and σ (i) > σ ( j). Let isgn(σ ) be 1 if σ reverses an even number of
pairs, and −1 if σ reverses an odd number of pairs.
• Prove that if σ is a permutation and τ is a transposition, then isgn(σ ◦τ) = −isgn(σ ) = isgn(τ ◦σ ).
• From the previous point, conclude that isgn(σ ) = nsgn(σ ) (thus the sign is well defined).
• From the definition of nsgn, show that nsgn(σ ◦ τ) = nsgn(σ )nsgn(τ) for any two permutations
σ and τ.
Task 2. Prove that the number of elements of Sn (i.e. the order of the symmetric group Sn ) is n!.
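For experimentation (not as a substitute for the proofs asked for above), isgn and its multiplicativity can be checked exhaustively on a small symmetric group in plain Python:

```python
from itertools import permutations

def isgn(sigma):
    # +1 if sigma reverses an even number of pairs (i, j) with i < j, else -1
    n = len(sigma)
    rev = sum(1 for i in range(n) for j in range(i + 1, n) if sigma[i] > sigma[j])
    return -1 if rev % 2 else 1

def compose(s, t):
    # (s o t)(i) = s(t(i)), permutations encoded as tuples on {0, ..., n-1}
    return tuple(s[t[i]] for i in range(len(s)))

S4 = list(permutations(range(4)))
assert len(S4) == 24   # |S_n| = n! (Task 2, for n = 4)
assert all(isgn(compose(s, t)) == isgn(s) * isgn(t) for s in S4 for t in S4)
print("sign is multiplicative on S4")
```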
Submit a single pdf with scans of your work to Blackboard by Monday, 26 February 2024, 17:00.
Exercise 1
The set R2 together with the usual vector addition forms an abelian group. For a ∈ R and x = (x1 , x2 )T ∈
R2 we put a ⊗ x := (ax1 , 0)T ∈ R2 ; this defines a scalar multiplication
R × R2 → R2 , (a, x) 7→ a ⊗ x,
of the field R on R2 . Determine which of the axioms defining a vector space hold for the abelian group
R2 with this scalar multiplication. (Proofs or counterexamples are required.)
Exercise 2
The set R>0 of positive real numbers together with multiplication forms an abelian group. Let Rn>0
denote the n-fold cartesian product of R>0 with itself (cf. Exercise 1 on Sheet 2). (You may find it
convenient to use the symbol ⊕ for the binary operation in the abelian group Rn>0 , that is (b1 , . . . , bn ) ⊕
(c1 , . . . , cn ) = (b1 c1 , . . . , bn cn ) for b1 , . . . , bn , c1 , . . . , cn ∈ R>0 .) Furthermore, for a ∈ Q and b = (b1 , . . . , bn ) ∈
Rn>0 we put a ⊗ b := (b1^a , . . . , bn^a ). Show that the abelian group Rn>0 together with this scalar
multiplication is a vector space over Q.
Exercise 3
Exercise 4
Let S be a set and let V be a vector space over a field F. Let V S denote the set of all maps from S to
V . We define an addition on V S and a scalar multiplication of F on V S as follows: let f , g ∈ V S and let
a ∈ F; then
( f + g)(s) := f (s) + g(s) and (a f )(s) := a ( f (s)) (for any s ∈ S).
Show that V S is a vector space over F. (For a complete proof many axioms need to be checked. In order
to save you some writing, your solution will be considered complete, if you check that there exists an
additive identity element in V S , that every element in V S has an additive inverse and that the second
distributivity law holds.)
Submit a single pdf with scans of your work to Blackboard by Monday, 11 March 2024, 17:00.
Exercise 1
Let n ≥ 2. Which of the conditions defining a subspace are satisfied for the following subsets of the
vector space Mn×n (R) of real (n × n)-matrices? (Proofs or counterexamples are required.)
(Recall that rank(A) denotes the number of non-zero rows in a row-echelon form of A and trace(A)
denotes the sum ∑ni=1 aii of the diagonal elements of the matrix A = (ai j ).)
Exercise 2
Which of the following subsets of the vector space RR of all functions from R to R are subspaces?
(Proofs or counterexamples are required.)
(Recall that a function f : R → R is called even if f (−s) = f (s) for all s ∈ R.)
Exercise 3
(a) Let V be a vector space over F2 . Show that every non-empty subset W of V which is closed under
addition is a subspace of V .
(b) Show that {(0, 0), (1, 0)} is a subspace of the vector space F22 over F2 .
(c) Write down all subsets of F22 and underline those subsets which are subspaces. (No explanations
are required.)
Exercise 4
Let V be a vector space over a field F and let X,Y and Z be subspaces of V , such that X ⊆ Y . Show that
Y ∩ (X + Z) = X + (Y ∩ Z). (Note: this is an equality of sets, so you need to show that every vector in
the LHS also belongs to RHS, and vice versa.)
Let V be a vector space over a field F. Putting S = V in Example 2.7 we obtain the vector space F V
consisting of all functions from V to F. Consider the subset
V ∗ := {L : V → F | L is a linear transformation},
consisting of all linear transformations from the vector space V to the (one-dimensional) vector space
F. Show that V ∗ is a subspace of F V . (To get you started, at the end of this sheet you’ll find a detailed
proof of the first of the three conditions that need to be verified for a subspace.)
Verification of the first condition of being a subspace, for V ∗ from the above Exercise
(You don’t need to reproduce this in your solution, just say that the first condition is proved.)
The first condition for a subspace asserts that the zero vector of the “big” vector space F V belongs to set
V ∗ that we are showing to be a subspace.
The zero vector ( = the additive identity element for vector addition) of F V is the zero function 0 : V → F,
defined by 0(v) = 0F for all v ∈ V , that is, it maps every vector v from V to the additive identity element
0F in the field F.
We need to show that this function 0 belongs to the set V ∗ , in other words, that it is a linear transformation
from V to F. This entails checking two conditions:
(a) 0 is compatible with addition: take arbitrary vectors x, y ∈ V . We need to check that 0(x + y) =
0(x) + 0(y) in F:
LHS = 0F (by definition of 0)
RHS = 0F + 0F = 0F (by definition of 0 and the field axioms)
So LHS = RHS.
(b) 0 is compatible with scalar multiplication: take a vector x ∈ V and a scalar a ∈ F. We need to
check that 0(ax) = a(0(x)) in F:
LHS = 0F (by definition of 0)
RHS = a0F = 0F (by definition of 0 and Prop. 2.3(a))
So again LHS = RHS.
Submit a single pdf with scans of your work to Blackboard by Monday, 18 March 2024, 17:00.
Exercise 1
Which of the following are spanning sets for the vector space P2 of polynomial functions of degree at
most 2? (Give reasons for your answers.)
(a) 1/2, t 2 + t, t 2 − 1
(b) 1, 2t, t 2 , 3t 2 + 5
(c) t + 1, t 2 + t
Exercise 2
Determine whether the following are linearly independent sets of vectors in the vector space RR of all
functions from R to R. (Give reasons for your answers.)
(a) 1 + t, 1 + t + t 2 , 1 + t + t 2 + t 3 , 1 + t + t 2 + t 4
(b) sin, cos2 , sin3
(c) 1, sin2 , cos2
Exercise 3
Exercise 4
(a) Determine whether the following (2 × 2)-matrices form a basis of the vector space M2×2 (R) of all
(2 × 2)-matrices over R:
A1 = [ 1 0 ; 0 0 ], A2 = [ 2 1 ; 0 0 ], A3 = [ 3 2 ; 1 0 ], A4 = [ 4 3 ; 2 1 ].
(b) Find a basis of the subspace W := {A ∈ M2×2 (R) | trace(A) = 0} of the vector space M2×2 (R)
and hence determine the dimension of W . (Recall that trace(B) of a square matrix B = (bi j ) ∈
Mn×n (F) denotes the sum of its diagonal entries, trace(B) = ∑ni=1 bii .)
Submit a single pdf with scans of your work to Blackboard by Monday, 22 April 2024, 17:00.
Exercise 1
Determine whether the following maps are linear transformations. (For a matrix A, AT denotes its
transpose, see Section 2.3 in L.A.I.) (Proofs or counterexamples are required.)
(a) L : R2 → R3 , (x1 , x2 )T 7→ (x2 , x1 + 2x2 , x1 − x2 )T    (b) L : R2 → R, (x1 , x2 )T 7→ x1^2 + x1 x2
Exercise 2
Find a basis of the image of L. Using the Dimension Theorem show that L is injective.
Exercise 3
Let F be a field.
(a) Let A ∈ Mn×n (F) be an invertible matrix. Show that the linear transformation
LA : F n → F n , x 7→ Ax,
(b) Let L : V → W be an isomorphism between vector spaces over F. Show that the inverse map
L−1 : W → V is a linear transformation (and hence an isomorphism as well).
Exercise 4
For y ∈ Rn let Ly : Rn → R denote the map given by x 7→ Ly (x) = x · y where x · y denotes the dot product
of x and y introduced in Linear Algebra I.
(a) For each y ∈ Rn show that Ly is a linear transformation and compute dimR (ker(Ly )).
(b) Let (Rn )∗ denote the vector space introduced in Coursework 5/Extra Question, that is
Submit a single pdf with scans of your work to Blackboard by Monday, 6 May 2024, 17:00.
Exercise 1
From Calculus we know that for any polynomial function f : R → R of degree at most n, the function
I( f ) : R → R, s 7→ ∫_0^s f (u) du, is a polynomial function of degree at most n + 1. Show that the map
I : Pn → Pn+1 , f 7→ I( f ),
is an injective linear transformation, determine a basis of the image of I and find the matrix M ∈
M(n+2)×(n+1) (R) that represents I with respect to the basis 1,t, . . . ,t n of Pn and the basis 1,t, . . . ,t n+1
of Pn+1 .
Exercise 2
(a) Let α ∈ C and A := [ 1 − i     α       i
                         i − α   1 − α   α − i
                         1 − α     1     2 + α ] ∈ M3×3 (C). Compute det(A) ∈ C.
(b) Let F be a field, n be even and let c1 , . . . , cn ∈ F. Follow the blueprint of the proof of Example
5.2(d) and use Exercise 2(a) on Coursework Sheet 3 to compute the determinant of the matrix
B ∈ Mn×n (F), the matrix whose antidiagonal entries (from top right to bottom left) are c1 , . . . , cn
and whose other entries are all 0.
Exercise 3
Let F be a field, let n ≥ 1, let a, b ∈ F and let A ∈ Mn×n (F) be the matrix whose diagonal entries all
equal b and whose off-diagonal entries all equal a.
Show that det(A) = (b + (n − 1)a)(b − a)n−1 . (Hint: Begin with R1 7→ R1 + R2 + . . . + Rn , apply Theorem
5.3(b) and use further row operations to arrive at an upper triangular matrix.)
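A numerical spot-check of the claimed determinant formula (not a proof); the description of the matrix below is inferred from the formula and the hint, and the sketch assumes NumPy:

```python
import numpy as np

def check(n, a, b):
    # n x n matrix with b on the diagonal and a everywhere else
    A = np.full((n, n), float(a))
    np.fill_diagonal(A, float(b))
    return np.isclose(np.linalg.det(A), (b + (n - 1) * a) * (b - a) ** (n - 1))

assert all(check(n, a, b) for n in (2, 3, 5) for a in (1, 2) for b in (3, -1))
print("formula verified on samples")
```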
Exercise 4
Let
A = [ 1 + i   1 − i     2
        3       i      −i
        1     2 − i   2 + i ] ∈ M3×3 (C)  and  B = [   2     3 + i    2i
                                                       1     2 − i  2 + 2i
                                                     1 − i     i      3   ] ∈ M3×3 (C).
Compute det(A), det(B), det(AB) and det(A3 ).
Submit a single pdf with scans of your work to Blackboard by Monday, 13 May 2024, 17:00.
Exercise 1
Let F be a field and let A ∈ Mn×n (F), with characteristic polynomial pA (λ ) = det(λ In − A).
(a) If n = 2 show that pA (λ ) = λ 2 − trace(A)λ + det(A). (Recall that trace(B) of a square matrix
B = (bi j ) ∈ Mn×n (F) denotes the sum of its diagonal entries, trace(B) = ∑ni=1 bii .)
Exercise 2
Exercise 3
Find the eigenvalues of each of the following matrices and determine a basis of the eigenspace for each
eigenvalue. Determine which of these matrices are diagonalizable; if so, write down a diagonalizing
matrix.
A = [ 2 2 1
      1 3 1
      1 2 2 ] ∈ M3×3 (R),   B = [ 3 −1 ; 1 5 ] ∈ M2×2 (R),
C = [ 1  0 0
      1 −1 2
      1 −1 1 ] as element of M3×3 (R) and as element of M3×3 (C).
Compute C^2020.
Exercise 4
Let V be a vector space over a field F and let L, M be two linear transformations from V to itself.
(a) Suppose that L ◦ M = M ◦ L. Show that L(Eλ (M)) ⊆ Eλ (M) for all λ ∈ F.
(b) Suppose that V is of finite dimension. Show that L is injective if and only if it is surjective.