Part IA - Vectors and Matrices: Based On Lectures by N. Peake
Michaelmas 2014
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Complex numbers
Review of complex numbers, including complex conjugate, inverse, modulus, argument
and Argand diagram. Informal treatment of complex logarithm, n-th roots and complex
powers. de Moivre’s theorem. [2]
Vectors
Review of elementary algebra of vectors in R3 , including scalar product. Brief discussion
of vectors in Rn and Cn ; scalar product and the Cauchy-Schwarz inequality. Concepts
of linear span, linear independence, subspaces, basis and dimension.
Suffix notation: including summation convention, δij and εijk . Vector product and
triple product: definition and geometrical interpretation. Solution of linear vector
equations. Applications of vectors to geometry, including equations of lines, planes and
spheres. [5]
Matrices
Elementary algebra of 3 × 3 matrices, including determinants. Extension to n × n
complex matrices. Trace, determinant, non-singular matrices and inverses. Matrices as
linear transformations; examples of geometrical actions including rotations, reflections,
dilations, shears; kernel and image. [4]
Simultaneous linear equations: matrix formulation; existence and uniqueness of solu-
tions, geometric interpretation; Gaussian elimination. [3]
Symmetric, anti-symmetric, orthogonal, hermitian and unitary matrices. Decomposition
of a general matrix into isotropic, symmetric trace-free and antisymmetric parts. [1]
1 Complex numbers
1.1 Basic properties
1.2 Complex exponential function
1.3 Roots of unity
1.4 Complex logarithm and power
1.5 De Moivre’s theorem
1.6 Lines and circles in C
2 Vectors
2.1 Definition and basic properties
2.2 Scalar product
2.2.1 Geometric picture (R2 and R3 only)
2.2.2 General algebraic definition
2.3 Cauchy-Schwarz inequality
2.4 Vector product
2.5 Scalar triple product
2.6 Spanning sets and bases
2.6.1 2D space
2.6.2 3D space
2.6.3 Rn space
2.6.4 Cn space
2.7 Vector subspaces
2.8 Suffix notation
2.9 Geometry
2.9.1 Lines
2.9.2 Plane
2.10 Vector equations
3 Linear maps
3.1 Examples
3.1.1 Rotation in R3
3.1.2 Reflection in R3
3.2 Linear Maps
3.3 Rank and nullity
3.4 Matrices
3.4.1 Examples
3.4.2 Matrix Algebra
3.4.3 Decomposition of an n × n matrix
3.4.4 Matrix inverse
3.5 Determinants
3.5.1 Permutations
3.5.2 Properties of determinants
3.5.3 Minors and Cofactors
7 Transformation groups
7.1 Groups of orthogonal matrices
7.2 Length preserving matrices
7.3 Lorentz transformations
0 Introduction
Vectors and matrices are the language in which a lot of mathematics is written.
In physics, many variables such as position and momentum are expressed as
vectors. Heisenberg also formulated quantum mechanics in terms of vectors and
matrices. In statistics, one might pack all the results of all experiments into a
single vector, and work with a large vector instead of many small quantities. In
group theory, matrices are used to represent the symmetries of space (as well as
many other groups).
So what is a vector? Vectors are very general objects, and can in theory
represent very complex objects. However, in this course, our focus is on vectors
in Rn or Cn . We can think of each of these as an array of n real or complex
numbers. For example, (1, 6, 4) is a vector in R3 . These vectors are added in the
obvious way. For example, (1, 6, 4) + (3, 5, 2) = (4, 11, 6). We can also multiply
vectors by numbers, say 2(1, 6, 4) = (2, 12, 8). Often, these vectors represent
points in an n-dimensional space.
Matrices, on the other hand, represent functions between vectors, i.e. a
function that takes in a vector and outputs another vector. These, however, are
not arbitrary functions. Instead matrices represent linear functions. These are
functions that satisfy the equality f (λx + µy) = λf (x) + µf (y) for arbitrary
numbers λ, µ and vectors x, y. It is important to note that the function x ↦ x + c
for some constant vector c is not linear according to this definition, even though
it might look linear.
It turns out that for each linear function from Rn to Rm , we can represent
the function uniquely by an m × n array of numbers, which is what we call the
matrix. Expressing a linear function as a matrix allows us to conveniently study
many of its properties, which is why we usually talk about matrices instead of
the function itself.
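As a purely illustrative sketch of this correspondence (not part of the lectures), the following Python snippet represents a made-up linear map f : R3 → R2 by a 2 × 3 array and checks the linearity property numerically:

import numpy as np

# A made-up 2x3 matrix, representing some linear map f: R^3 -> R^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

x = np.array([1.0, 6.0, 4.0])
y = np.array([3.0, 5.0, 2.0])
lam, mu = 2.0, -1.5

# f(lam*x + mu*y) agrees with lam*f(x) + mu*f(y), as linearity demands
lhs = A @ (lam * x + mu * y)
rhs = lam * (A @ x) + mu * (A @ y)
print(np.allclose(lhs, rhs))  # True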
1 Complex numbers
In R, not every polynomial equation has a solution. For example, there does
not exist any x such that x2 + 1 = 0, since for any x, x2 is non-negative, and
x2 + 1 can never be 0. To solve this problem, we introduce the “number” i that
satisfies i2 = −1. Then i is a solution to the equation x2 + 1 = 0. Similarly, −i
is also a solution to the equation.
We can add and multiply numbers with i. For example, we can obtain
numbers 3 + i or 1 + 3i. These numbers are known as complex numbers. It turns
out that by adding this single number i, every polynomial equation will have a
root. In fact, for an nth order polynomial equation, we will later see that there
will always be n roots, if we account for multiplicity. We will go into details in
Chapter 5.
Apart from solving equations, complex numbers have a lot of rather important
applications. For example, they are used in electronics to represent alternating
currents, and form an integral part in the formulation of quantum mechanics.
1.1 Basic properties
Definition (Complex number). A complex number is a number z ∈ C of the
form z = a + ib with a, b ∈ R, where i2 = −1. We write a = Re(z) and b = Im(z).
We can represent complex numbers in the complex plane (the Argand diagram), with Re(z) on the horizontal axis and Im(z) on the vertical axis.
[Argand diagram: z1, z2, the sum z1 + z2, and the conjugate z̄2.]
z = r(cos θ + i sin θ)
Clearly the pair (r, θ) uniquely describes a complex number z, but each complex
number z ∈ C can be described by many different θ since sin(2π + θ) = sin θ
and cos(2π + θ) = cos θ. Often we take the principal value θ ∈ (−π, π].
When writing zi = ri (cos θi + i sin θi ), we have
z1 z2 = r1 r2 [cos(θ1 + θ2 ) + i sin(θ1 + θ2 )].
In other words, when multiplying complex numbers, the moduli multiply and the arguments add.
Proposition. z z̄ = a2 + b2 = |z|2 .
Proposition. z −1 = z̄/|z|2 .
Theorem (Triangle inequality). For all z1 , z2 ∈ C, we have |z1 + z2 | ≤ |z1 | + |z2 |.
Proof.
\[
\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} a_{mn} = a_{00} + a_{01} + a_{02} + \cdots
 + a_{10} + a_{11} + a_{12} + \cdots
 + a_{20} + a_{21} + a_{22} + \cdots
 = (a_{00}) + (a_{10} + a_{01}) + (a_{20} + a_{11} + a_{02}) + \cdots
 = \sum_{r=0}^{\infty}\sum_{m=0}^{r} a_{r-m,m}
\]
This is not exactly a rigorous proof, since we should not hand-wave about
infinite sums so casually. But in fact, we did not even show that the definition of
exp(z) is well defined for all numbers z, since the sum might diverge. All these
will be done in the IA Analysis I course.
Theorem. exp(z1 ) exp(z2 ) = exp(z1 + z2 )
Proof.
\[
\exp(z_1)\exp(z_2) = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\frac{z_1^m}{m!}\,\frac{z_2^n}{n!}
 = \sum_{r=0}^{\infty}\sum_{m=0}^{r}\frac{z_1^{r-m}}{(r-m)!}\,\frac{z_2^m}{m!}
 = \sum_{r=0}^{\infty}\frac{1}{r!}\sum_{m=0}^{r}\frac{r!}{(r-m)!\,m!}\,z_1^{r-m} z_2^m
 = \sum_{r=0}^{\infty}\frac{(z_1+z_2)^r}{r!}
 = \exp(z_1 + z_2)
\]
To define the sine and cosine functions, instead of referring to “angles”
(since it doesn’t make much sense to refer to complex “angles”), we again use a
series definition.
Definition (Sine and cosine functions). Define, for all z ∈ C,
\[
\sin z = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}\,z^{2n+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 - \cdots
\]
\[
\cos z = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}\,z^{2n} = 1 - \frac{1}{2!}z^2 + \frac{1}{4!}z^4 - \cdots
\]
One very important result is the relationship between exp, sin and cos.
Theorem. eiz = cos z + i sin z.
Alternatively, since sin(−z) = − sin z and cos(−z) = cos z, we have
\[
\cos z = \frac{e^{iz} + e^{-iz}}{2},\qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i}.
\]
Proof.
\[
e^{iz} = \sum_{n=0}^{\infty}\frac{i^n}{n!}\,z^n
 = \sum_{n=0}^{\infty}\frac{i^{2n}}{(2n)!}\,z^{2n} + \sum_{n=0}^{\infty}\frac{i^{2n+1}}{(2n+1)!}\,z^{2n+1}
 = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n)!}\,z^{2n} + i\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}\,z^{2n+1}
 = \cos z + i\sin z
\]
Proof. First prove for the n ≥ 0 case by induction. The n = 0 case is true since
it merely reads 1 = 1. We then have
(cos θ + i sin θ)n+1 = (cos θ + i sin θ)n (cos θ + i sin θ) = (cos nθ + i sin nθ)(cos θ + i sin θ) = cos(n + 1)θ + i sin(n + 1)θ
by the compound angle formulae.
Note that “cos nθ + i sin nθ = einθ = (eiθ )n = (cos θ + i sin θ)n ” is not a valid
proof of De Moivre’s theorem, since we do not know yet that einθ = (eiθ )n . In
fact, De Moivre’s theorem tells us that this is a valid rule to apply.
Example. We have cos 5θ + i sin 5θ = (cos θ + i sin θ)5 . By binomial expansion
of the RHS and taking real and imaginary parts, we have
cos 5θ = cos^5 θ − 10 cos^3 θ sin^2 θ + 5 cos θ sin^4 θ,
sin 5θ = 5 cos^4 θ sin θ − 10 cos^2 θ sin^3 θ + sin^5 θ.
We take the complex conjugate of this expression to obtain λ̄ = (z̄ − z̄0 )/w̄. The trick here is to realize that
λ is a real number. So we must have λ = λ̄. This means that we must have
(z − z0 )/w = (z̄ − z̄0 )/w̄,
which gives
z w̄ − z̄w = z0 w̄ − z̄0 w.
|z − c| = ρ
|z − c|2 = ρ2
(z − c)(z̄ − c̄) = ρ2
z z̄ − c̄z − cz̄ = ρ2 − cc̄
Often, vectors have a length and direction. The length is denoted by |v|. In
this case, we can think of a vector as an “arrow” in space. Note that λa is either
parallel (λ ≥ 0) or anti-parallel (λ ≤ 0) to a.
Definition (Unit vector). A unit vector is a vector with length 1. We write a
unit vector as v̂.
Intuitively, this is the product of the parts of a and b that are parallel.
[Diagram: vectors a and b separated by an angle θ, with the projection of one onto the direction of the other marked.]
Using the dot product, we can write the projection of b onto a as (|b| cos θ)â =
(â · b)â.
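As an illustrative numerical check (the vectors here are made up), the projection formula can be verified with NumPy: the remainder b − (â · b)â is orthogonal to a.

import numpy as np

a = np.array([2.0, 0.0, 1.0])   # made-up vectors
b = np.array([1.0, 3.0, -2.0])

a_hat = a / np.linalg.norm(a)        # unit vector in the direction of a
proj = (a_hat @ b) * a_hat           # projection of b onto a

print(proj)
print(np.isclose((b - proj) @ a, 0.0))  # residual is perpendicular to a: True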
The cosine rule can be derived as follows:
|BC|² = |BA + AC|²
      = (BA + AC) · (BA + AC)
      = |AB|² + |AC|² − 2|AB||AC| cos θ,
where the sides are treated as vectors and θ is the angle at A.
We will later come up with a convenient algebraic way to evaluate this scalar
product.
Example. Instead of the usual Rn vector space, we can consider the set of all
real (integrable) functions as a vector space. We can define the following inner
product:
\[
\langle f \mid g\rangle = \int_0^1 f(x)\,g(x)\,dx.
\]
2.3 Cauchy-Schwarz inequality
Theorem (Cauchy-Schwarz inequality). For all x, y ∈ Rn ,
|x · y| ≤ |x||y|.
Proof. If y = 0, both sides are zero, so assume y ≠ 0. For any real λ,
|x − λy|² ≥ 0
(x − λy) · (x − λy) ≥ 0
λ²|y|² − λ(2x · y) + |x|² ≥ 0.
Viewing the left hand side as a quadratic in λ that is never negative, its discriminant satisfies (2x · y)² − 4|y|²|x|² ≤ 0, which gives |x · y| ≤ |x||y|.
Note that we proved this using the axioms of the scalar product. So this
result holds for all possible scalar products on any (real) vector space.
Example. Let x = (α, β, γ) and y = (1, 1, 1). Then by the Cauchy-Schwarz
inequality, we have
α + β + γ ≤ √3 √(α² + β² + γ²),
and squaring and rearranging gives
α² + β² + γ² ≥ αβ + βγ + γα,
with equality if α = β = γ.
Corollary (Triangle inequality).
|x + y| ≤ |x| + |y|.
Proof.
|x + y|2 = (x + y) · (x + y)
= |x|2 + 2x · y + |y|2
≤ |x|2 + 2|x||y| + |y|2
= (|x| + |y|)2 .
So
|x + y| ≤ |x| + |y|.
2.4 Vector product
Apart from the scalar product, we can also define the vector product. However,
this is defined only in R3 , not in vector spaces in general.
[Diagram: the vector product a × b, perpendicular to both a and b.]
[a, b, c] = a · (b × c).
Proof. The area of the base of the parallelepiped is given by |b||c| sin θ = |b × c|.
Thus the volume= |b × c||a| cos φ = |a · (b × c)|, where φ is the angle between
a and the normal to b and c. However, since a, b, c form a right-handed system,
we have a · (b × c) ≥ 0. Therefore the volume is a · (b × c).
Theorem. a × (b + c) = a × b + a × c.
Proof. Let d = a × (b + c) − a × b − a × c. Then
d · d = d · [a × (b + c)] − d · (a × b) − d · (a × c)
= (b + c) · (d × a) − b · (d × a) − c · (d × a)
=0
Thus d = 0.
2.6 Spanning sets and bases
2.6.1 2D space
Definition (Spanning set). A set of vectors {a, b} spans R2 if for all vectors
r ∈ R2 , there exist some λ, µ ∈ R such that r = λa + µb.
In R2 , two vectors span the space if a × b ≠ 0.
Example. {î, ĵ} = {(1, 0), (0, 1)} is a basis of R2 . They are the standard basis
of R2 .
2.6.2 3D space
We can extend the above definitions of spanning set and linear independent set
to R3 . Here we have
Theorem. If a, b, c ∈ R3 are non-coplanar, i.e. a · (b × c) ≠ 0, then they form
a basis of R3 .
Proof. For any r, write r = λa + µb + νc. Performing the scalar product
with b × c on both sides, one obtains r · (b × c) = λa · (b × c) + µb · (b × c) +
νc · (b × c) = λ[a, b, c]. Thus λ = [r, b, c]/[a, b, c]. The values of µ and ν can
be found similarly. Thus each r can be written as a linear combination of a, b
and c.
By the formula derived above, it follows that if αa + βb + γc = 0, then
α = β = γ = 0. Thus they are linearly independent.
Note that while we came up with formulas for λ, µ and ν, we did not actually
prove that these coefficients indeed work. This is rather unsatisfactory. We
could, of course, expand everything out and show that this indeed works, but
in IB Linear Algebra, we will prove a much more general result, saying that if
we have an n-dimensional space and a set of n linear independent vectors, then
they form a basis.
In R3 , the standard basis is î, ĵ, k̂, or (1, 0, 0), (0, 1, 0) and (0, 0, 1).
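The triple-product formulas for λ, µ, ν above are easy to check numerically; the following sketch uses made-up non-coplanar vectors a, b, c and a target r:

import numpy as np

def triple(u, v, w):
    """Scalar triple product [u, v, w] = u . (v x w)."""
    return u @ np.cross(v, w)

# A made-up non-coplanar triple a, b, c and a target vector r
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
c = np.array([1.0, 1.0, 0.0])
r = np.array([3.0, -1.0, 2.0])

vol = triple(a, b, c)            # [a, b, c] != 0 since a, b, c are non-coplanar
lam = triple(r, b, c) / vol      # dot r = la + mb + nc with b x c
mu  = triple(r, c, a) / vol      # ... with c x a
nu  = triple(r, a, b) / vol      # ... with a x b

print(np.allclose(lam * a + mu * b + nu * c, r))  # True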
2.6.3 Rn space
In general, we can define
2.6.4 Cn space
Cn is very similar to Rn , except that we have complex numbers. As a result, we
need a different definition of the scalar product. If we still defined u · v = Σ ui vi ,
then if we let u = (0, i), then u · u = −1 < 0. This would be bad if we want to
use the scalar product to define a norm.
Definition (Cn ). Cn = {(z1 , z2 , · · · , zn ) : zi ∈ C}. It has the same standard
basis as Rn but the scalar product is defined differently. For u, v ∈ Cn ,
u · v = Σ ui∗ vi .
The scalar product has the following properties:
(i) u · v = (v · u)∗
(ii) u · (λv + µw) = λ(u · v) + µ(u · w)
(iii) u · u ≥ 0 and u · u = 0 iff u = 0
We can prove the Cauchy-Schwarz inequality for complex vector spaces using
the same proof as the real case, except that this time we have to first multiply y
by some eiθ so that x · (eiθ y) is a real number. The factor of eiθ will drop off at
the end when we take the modulus signs.
2.7 Vector subspaces
Definition (Vector subspace). A vector subspace of a vector space V is a subset
of V that is also a vector space under the same operations. Both V and {0} are
subspaces of V . All others are proper subspaces.
A useful criterion is that a subset U ⊆ V is a subspace iff
(i) x, y ∈ U ⇒ (x + y) ∈ U .
(ii) x ∈ U ⇒ λx ∈ U for all scalars λ.
(iii) 0 ∈ U .
x = α1 (a + c) + β1 (b + c);
y = α2 (a + c) + β2 (b + c).
Then
(ii) Suffix appears twice in a term: dummy suffix and is summed over
(iii) Suffix appears three times or more: WRONG!
Example. [(a · b)c − (a · c)b]i = aj bj ci − aj cj bi summing over j understood.
It is possible for an item to have more than one index. These objects are
known as tensors, which will be studied in depth in the IA Vector Calculus
course.
Here we will define two important tensors:
Definition (Kronecker delta).
(
1 i=j
δij = .
0 i 6= j
We have
δ11 δ12 δ13 1 0 0
δ21 δ22 δ23 = 0 1 0 = I.
δ31 δ32 δ33 0 0 1
So the Kronecker delta represents an identity matrix.
Example.
We have
(i) εijk δjk = εijj = 0
(ii) If ajk = akj (i.e. aij is symmetric), then εijk ajk = εijk akj = −εikj akj .
Since εijk ajk = εikj akj (we simply renamed dummy suffixes), we have
εijk ajk = 0.
LHS: Summing over i, the only non-zero terms are when j, k ≠ i and p, q ≠ i.
If j = p and k = q, the LHS is (−1)² or (+1)² = 1. If j = q and k = p, the LHS is
(+1)(−1) or (−1)(+1) = −1. All other possibilities result in 0.
Equally, we have εijk εpqk = δip δjq − δjp δiq and εijk εpjq = δip δkq − δiq δkp .
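These ε–δ identities can be verified by brute force. The sketch below (illustrative only) builds εijk as an array and checks εijk εpqk = δip δjq − δiq δjp over all index values:

import numpy as np

# Build the Levi-Civita symbol eps[i, j, k] explicitly
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[k, j, i] = -1.0   # odd permutations get -1

delta = np.eye(3)

# eps_ijk eps_pqk (summed over k) should equal d_ip d_jq - d_iq d_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
print(np.allclose(lhs, rhs))  # True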
Proposition.
a · (b × c) = b · (c × a)
a × (b × c) = (a · c)b − (a · b)c.
Proof.
Spherical trigonometry
Proposition. (a × b) · (a × c) = (a · a)(b · c) − (a · b)(a · c).
Proof.
[Diagram: a spherical triangle with vertices A, B, C, side length δ(A, B), and angle α at A.]
Suppose we are living on the surface of the sphere. So the distance from A to B is
the arc length on the sphere. We can imagine this to be along the circumference
of the circle through A and B with center O. So the distance is ∠AOB, which we
shall denote by δ(A, B). So a · b = cos ∠AOB = cos δ(A, B). We obtain similar
expressions for other dot products. Similarly, we get |a × b| = sin δ(A, B).
\[
\cos\alpha = \frac{(a\times b)\cdot(a\times c)}{|a\times b|\,|a\times c|} = \frac{b\cdot c - (a\cdot b)(a\cdot c)}{|a\times b|\,|a\times c|}
\]
Putting in our expressions for the dot and cross products, we obtain
cos α sin δ(A, B) sin δ(A, C) = cos δ(B, C) − cos δ(A, B) cos δ(A, C).
This is the spherical cosine rule that applies when we live on the surface of a
sphere. What does this spherical geometry look like?
Consider a spherical equilateral triangle. Using the spherical cosine rule,
\[
\cos\alpha = \frac{\cos\delta - \cos^2\delta}{\sin^2\delta} = 1 - \frac{1}{1+\cos\delta}.
\]
Since cos δ ≤ 1, we have cos α ≤ 1/2 and α ≥ 60°. Equality holds iff δ = 0, i.e. the
triangle is simply a point. So on a sphere, each angle of an equilateral triangle is
greater than 60◦ , and the angle sum of a triangle is greater than 180◦ .
2.9 Geometry
2.9.1 Lines
Any line through a and parallel to t can be written as
x = a + λt.
(x − a) × t = 0 or x × t = a × t.
2.9.2 Plane
To define a plane Π, we need a normal n to the plane and a fixed point b. For
any x ∈ Π, the vector x − b is contained in the plane and is thus normal to n,
i.e. (x − b) · n = 0.
Theorem. The equation of a plane through b with normal n is given by
x · n = b · n.
(x − a) · [(b − a) × (c − a)] = 0.
Example.
(i) Consider the intersection between a line x × t = a × t with the plane
x · n = b · n. Cross n on the right with the line equation to obtain
(x · n)t − (t · n)x = (a × t) × n
Eliminate x · n using x · n = b · n:
\[
(t\cdot n)\,x = (b\cdot n)\,t - (a\times t)\times n,\qquad\text{so}\qquad x = \frac{(b\cdot n)\,t - (a\times t)\times n}{t\cdot n}.
\]
Exercise: what if t · n = 0?
(ii) Shortest distance between two lines. Let L1 be (x − a1 ) × t1 = 0 and L2
be (x − a2 ) × t2 = 0.
The distance of closest approach s is along a line perpendicular to both L1
and L2 , i.e. the line of closest approach is perpendicular to both lines and
thus parallel to t1 × t2 . The distance s can then be found by projecting a1 − a2 onto t1 × t2 . Thus
\[
s = (a_1 - a_2)\cdot\frac{t_1\times t_2}{|t_1\times t_2|}.
\]
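Both results in the example above are easy to sanity-check numerically. The sketch below uses a made-up line and plane for (i) and two made-up skew lines for (ii); it evaluates the intersection formula x = [(b · n)t − (a × t) × n]/(t · n) and the projection formula for the shortest distance (taking its absolute value):

import numpy as np

# (i) Made-up data: line x = a + lambda*t, plane x.n = b.n
a = np.array([1.0, 0.0, 0.0]); t = np.array([0.0, 1.0, 1.0])
b = np.array([0.0, 0.0, 2.0]); n = np.array([0.0, 0.0, 1.0])

# Intersection point from x = [(b.n) t - (a x t) x n] / (t.n), valid when t.n != 0
x = ((b @ n) * t - np.cross(np.cross(a, t), n)) / (t @ n)
print(np.isclose(x @ n, b @ n))                    # x lies in the plane: True
print(np.allclose(np.cross(x - a, t), [0, 0, 0]))  # x lies on the line: True

# (ii) Shortest distance between L1: x = a1 + lambda*t1 and L2: x = a2 + mu*t2
a1, t1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
a2, t2 = np.array([0.0, 1.0, 3.0]), np.array([0.0, 1.0, 0.0])
s = (a1 - a2) @ (np.cross(t1, t2) / np.linalg.norm(np.cross(t1, t2)))
print(abs(s))   # 3.0: the separation of these two skew lines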
2.10 Vector equations
Example. x − (x × a) × b = c. Strategy: take the dot or cross of the equation
with suitable vectors. The equation can be expanded to form
x − (x · b)a + (a · b)x = c.
Taking the dot product of this with b,
x · b − (x · b)(a · b) + (a · b)(x · b) = c · b
x · b = c · b.
Substituting this back into the expanded equation gives
x(1 + a · b) = c + (c · b)a.
If (1 + a · b) is non-zero, then
\[
x = \frac{c + (c\cdot b)\,a}{1 + a\cdot b}.
\]
Otherwise, when (1 + a · b) = 0, if c + (c · b)a 6= 0, then a contradiction is
reached. Otherwise, x · b = c · b is the most general solution, which is a plane
of solutions.
3 Linear maps
A linear map is a special type of function between vector spaces. In fact, most
of the time, these are the only functions we actually care about. They are maps
that satisfy the property f (λa + µb) = λf (a) + µf (b).
We will first look at two important examples of linear maps — rotations and
reflections, and then study their properties formally.
3.1 Examples
3.1.1 Rotation in R3
In R3 , first consider the simple case where we rotate about the z axis by θ. We
call this rotation R and write x′ = R(x).
Suppose that initially, x = (x, y, z) = (r cos φ, r sin φ, z). Then after a
rotation by θ, we get x′ = (r cos(φ + θ), r sin(φ + θ), z).
We can represent this by a matrix R such that x′i = Rij xj . Using our formula
above, we obtain
\[
R = \begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}.
\]
Now consider the general case where we rotate by θ about n̂.
[Diagram: rotation of x about the axis n̂ through angle θ. O is the origin, A is the tip of x, A′ is the tip of x′, B is the foot of the perpendicular from A onto the axis, and C is the foot of the perpendicular from A′ onto BA.]
We have x′ = OB + BC + CA′ (as vectors). We know that
OB = (n̂ · x)n̂
BC = BA cos θ = (BO + OA) cos θ = (−(n̂ · x)n̂ + x) cos θ
Finally, to get CA′, we know that |CA′| = |BA′| sin θ = |BA| sin θ = |n̂ × x| sin θ.
Also, CA′ is parallel to n̂ × x. So we must have CA′ = (n̂ × x) sin θ.
Thus x′ = x cos θ + (1 − cos θ)(n̂ · x)n̂ + n̂ × x sin θ. In components, Rij = δij cos θ + (1 − cos θ)ni nj − εijk nk sin θ.
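As an illustrative check of this component formula (the axis and angle are made up), the following sketch builds Rij numerically and confirms that it is orthogonal and fixes the axis n̂:

import numpy as np

def rotation_matrix(n_hat, theta):
    """R_ij = d_ij cos(theta) + (1 - cos(theta)) n_i n_j - eps_ijk n_k sin(theta)."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k], eps[k, j, i] = 1.0, -1.0
    return (np.cos(theta) * np.eye(3)
            + (1 - np.cos(theta)) * np.outer(n_hat, n_hat)
            - np.sin(theta) * np.einsum('ijk,k->ij', eps, n_hat))

n_hat = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # made-up rotation axis
R = rotation_matrix(n_hat, 0.3)

print(np.allclose(R @ R.T, np.eye(3)))   # R is orthogonal: True
print(np.allclose(R @ n_hat, n_hat))     # the axis is left fixed: True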
3.1.2 Reflection in R3
Suppose we want to reflect through a plane through O with normal n̂. First of
all, the projection of x onto n̂ is given by (x · n̂)n̂. So we get x′ = x − 2(x · n̂)n̂.
In suffix notation, we have x′i = xi − 2xj nj ni . So our reflection matrix is
Rij = δij − 2ni nj .
[Diagram: x, its mirror image x′, and the unit normal n̂ to the plane.]
3.2 Linear Maps
Definition (Domain, codomain and image of map). Consider sets A and B
and a mapping T : A → B such that each x ∈ A is mapped to a unique
x′ = T (x) ∈ B. A is the domain of T and B is the co-domain of T . Typically,
we have T : Rn → Rm or T : Cn → Cm .
Definition (Linear map). Let V, W be real (or complex) vector spaces, and
T : V → W . Then T is a linear map if
(i) T (a + b) = T (a) + T (b) for all a, b ∈ V .
(ii) T (λa) = λT (a) for all λ ∈ R (or C).
(iii) Consider a projection of x onto a plane with normal n̂. The image is the
plane itself, and the kernel is the set of vectors parallel to n̂.
Theorem. Consider a linear map f : U → V , where U, V are vector spaces.
Then im(f ) is a subspace of V , and ker(f ) is a subspace of U .
y = f (α1 e1 + α2 e2 + · · · + αn en ),
Then
f (αm+1 em+1 + · · · + αn en ) = 0.
Thus αm+1 em+1 + · · · + αn en ∈ ker(f ). Since {e1 , · · · , em } span ker(f ),
there exist some α1 , α2 , · · · αm such that
αm+1 em+1 + · · · + αn en = α1 e1 + · · · + αm em .
x+y+z =0
2x − y + 5z = 0
x + 2z = 0
Note that the first and second equation add to give 3x+6z = 0, which is identical
to the third. Then using the first and third equation, we have y = −x − z = z.
So the kernel is any vector in the form (−2z, z, z) and is the span of (−2, 1, 1).
To find the image, extend the basis of ker(f ) to a basis of the whole of R3 :
{(−2, 1, 1), (0, 1, 0), (0, 0, 1)}. Apply f to this basis to obtain (0, 0, 0), (1, −1, 0)
and (1, 5, 2). From the proof of the rank-nullity theorem, we know that f (0, 1, 0)
and f (0, 0, 1) form a basis of the image.
To get the standard form of the image, we know that the normal to the plane
is parallel to (1, −1, 0) × (1, 5, 2) ∥ (1, 1, −3). Since 0 ∈ im(f ), the equation of
the plane is x + y − 3z = 0.
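For this particular map, the kernel and image can be confirmed numerically; the sketch below is purely illustrative:

import numpy as np

# The matrix of the map in the example above
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -1.0, 5.0],
              [1.0, 0.0, 2.0]])

print(np.allclose(A @ np.array([-2.0, 1.0, 1.0]), 0.0))  # (-2,1,1) spans the kernel: True

# Every column (i.e. the image of each basis vector) lies in the plane x + y - 3z = 0
normal = np.array([1.0, 1.0, -3.0])
print(np.allclose(normal @ A, 0.0))   # True

print(np.linalg.matrix_rank(A))       # 2, consistent with rank + nullity = 3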
3.4 Matrices
In the examples above, we have represented our linear maps by some object R
such that x′i = Rij xj . We call R the matrix for the linear map. In general, let
α : Rn → Rm be a linear map, and x′ = α(x).
Let {ei } be a basis of Rn . Then x = xj ej for some xj . Then we get
x′ = α(xj ej ) = xj α(ej ).
So we get that
x′i = [α(ej )]i xj .
We now define Aij = [α(ej )]i . Then x′i = Aij xj . We write
\[
A = \{A_{ij}\} = \begin{pmatrix} A_{11} & \cdots & A_{1n}\\ \vdots & A_{ij} & \vdots\\ A_{m1} & \cdots & A_{mn}\end{pmatrix}.
\]
Here Aij is the entry in the ith row and jth column. We say that A is an
m × n matrix, and write x′ = Ax.
We see that the columns of the matrix are the images of the standard basis
vectors under the mapping α.
3.4.1 Examples
Example.
(i) In R2 , consider a reflection in a line at an angle θ to the x axis. We
know that î ↦ cos 2θ î + sin 2θ ĵ, while ĵ ↦ −cos 2θ ĵ + sin 2θ î. Then the
matrix is
\[
\begin{pmatrix}\cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{pmatrix}.
\]
(iii) In R3 , a reflection in a plane with normal n̂ is given by Rij = δij − 2n̂i n̂j .
Written as a matrix, we have
[Diagram: the effect of a shear in the x direction, mapping x to x′.]
Proposition.
(i) (AT )T = A.
(ii) If x is a column vector (x1 , x2 , · · · , xn )T , then xT is the row vector (x1 x2 · · · xn ).
Example. Consider the reflection matrix Rij = δij − 2n̂i n̂j . We have tr(R) =
Rii = 3 − 2n̂ · n̂ = 3 − 2 = 1.
Proposition. tr(BC) = tr(CB)
Proof. tr(BC) = Bik Cki = Cki Bik = (CB)kk = tr(CB)
AA−1 = A−1 A = I.
Note that not all square matrices have inverses. For example, the zero matrix
clearly has no inverse.
Definition (Invertible matrix). If A has an inverse, then A is invertible.
Proposition. (AB)−1 = B −1 A−1
Note that an orthogonal matrix A satisfies Aik (ATkj ) = δij , i.e. Aik Ajk = δij .
We can see this as saying “the scalar product of two distinct rows is 0, and the
scalar product of a row with itself is 1”. Alternatively, the rows (and columns —
by considering AT ) of an orthogonal matrix form an orthonormal set.
Similarly, for a unitary matrix, Uik (U†)kj = δij , i.e. uik u∗jk = u∗ik ujk = δij . That is,
the rows are orthonormal, using the definition of the complex scalar product.
Example.
(i) The reflection in a plane is an orthogonal matrix. Since Rij = δij − 2ni nj ,
we have
(ii) The rotation is an orthogonal matrix. We could multiply out using suffix
notation, but it would be cumbersome to do so. Alternatively, denote the
rotation matrix by θ about n̂ as R(θ, n̂). Clearly, R(θ, n̂)−1 = R(−θ, n̂).
We have
In other words, R(−θ, n̂) = R(θ, n̂)T . So R(θ, n̂)−1 = R(θ, n̂)T .
3.5 Determinants
Consider a linear map α : R3 → R3 . The standard basis e1 , e2 , e3 is mapped to
e′1 , e′2 , e′3 with e′i = Aei . Thus the unit cube formed by e1 , e2 , e3 is mapped to
the parallelepiped with volume
3.5.1 Permutations
To define the determinant for square matrices of arbitrary size, we first have to
consider permutations.
Definition (Fixed point). A fixed point of ρ is a k such that ρ(k) = k. For example, in
\[
\begin{pmatrix} 1 & 2 & 3 & 4\\ 4 & 1 & 3 & 2\end{pmatrix},
\]
3 is the fixed point. By convention, we can omit the fixed point and write it as
\[
\begin{pmatrix} 1 & 2 & 4\\ 4 & 1 & 2\end{pmatrix}.
\]
or equivalently,
\[
\det(A) = \varepsilon_{j_1 j_2\cdots j_n} A_{j_1 1} A_{j_2 2}\cdots A_{j_n n}.
\]
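As an illustrative cross-check (not part of the lectures), the sketch below evaluates an equivalent permutation-sum form of this definition for a random 4 × 4 matrix and compares it with NumPy's determinant:

import numpy as np
from itertools import permutations

def leibniz_det(A):
    """det(A) = sum over permutations sigma of eps(sigma) * A[0, s(0)] * ... * A[n-1, s(n-1)]."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sign of the permutation: (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        term = (-1) ** inversions
        for i in range(n):
            term *= A[i, sigma[i]]
        total += term
    return total

A = np.random.rand(4, 4)
print(np.isclose(leibniz_det(A), np.linalg.det(A)))  # True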
Proposition.
\[
\begin{vmatrix} a & b\\ c & d\end{vmatrix} = ad - bc.
\]
since the right hand side is just re-ordering the order of multiplication. Choose
ρ = σ −1 and note that ε(σ) = ε(ρ). Then
\[
\det(A) = \sum_{\rho\in S_n}\varepsilon(\rho) A_{1\rho(1)} A_{2\rho(2)}\cdots A_{n\rho(n)} = \det(A^T).
\]
Now write an arbitrary σ in the form σ = ρ(1 2). Then ε(σ) = ε(ρ)ε((1 2)) =
−ε(ρ). So
\[
\det(A) = \sum_{\rho\in S_n} -\varepsilon(\rho)\, A_{\rho(2)1} A_{\rho(1)2} A_{\rho(3)3}\cdots A_{\rho(n)n}.
\]
But columns 1 and 2 are identical, so Aρ(2)1 = Aρ(2)2 and Aρ(1)2 = Aρ(1)1 . So
det(A) = − det(A) and det(A) = 0.
Proposition. If 2 rows or 2 columns of a matrix are linearly dependent, then
the determinant is zero.
det(a1 · · · ai · · · aj · · · an ) = det(a1 · · · ai + aj · · · aj · · · an )
= det(a1 · · · ai + aj · · · aj − (ai + aj ) · · · an )
= det(a1 · · · ai + aj · · · − ai · · · an )
= det(a1 · · · aj · · · − ai · · · an )
= − det(a1 · · · aj · · · ai · · · an )
Alternatively, we can prove this from the definition directly, using the fact that
the sign of a transposition is −1 (and that the sign is multiplicative).
Now consider the many different S’s. If in S two of k1 , · · · , kn are equal, then S
is a determinant of a matrix with two columns the same, i.e. S = 0. So we only
have to consider the sum over distinct ki ’s. Thus the ki ’s are a permutation
of 1, · · · , n, say ki = ρ(i). Then we can write
\begin{align*}
\det AB &= \sum_{\rho} B_{\rho(1)1}\cdots B_{\rho(n)n}\sum_{\sigma}\varepsilon(\sigma)\, A_{\sigma(1)\rho(1)}\cdots A_{\sigma(n)\rho(n)}\\
&= \sum_{\rho} B_{\rho(1)1}\cdots B_{\rho(n)n}\,(\varepsilon(\rho)\det A)\\
&= \det A\sum_{\rho}\varepsilon(\rho)\, B_{\rho(1)1}\cdots B_{\rho(n)n}\\
&= \det A\det B
\end{align*}
AAT = I
det AAT = det I
det A det AT = 1
(det A)2 = 1
det A = ±1
Example. 1, 2, 3, 5 = 1, 2, 3, 4̄, 5.
The significance of these definitions is that we can use them to provide a
systematic way of evaluating determinants. We will also use them to find inverses
of matrices.
Proof.
\[
\det A = \sum_{j_i=1}^{n} A_{j_i i}\;\sum_{j_1,\cdots,\bar{j_i},\cdots,j_n}\varepsilon_{j_1 j_2\cdots j_n} A_{j_1 1} A_{j_2 2}\cdots \overline{A_{j_i i}}\cdots A_{j_n n},
\]
where the bars denote omitted terms. Let σ ∈ Sn be the permutation which moves ji to the ith position and leaves
everything else in its natural order, i.e.
\[
\sigma = \begin{pmatrix} 1 & \cdots & i & i+1 & i+2 & \cdots & j_i-1 & j_i & j_i+1 & \cdots & n\\ 1 & \cdots & j_i & i & i+1 & \cdots & j_i-2 & j_i-1 & j_i+1 & \cdots & n\end{pmatrix},
\]
and write
\[
\rho = \begin{pmatrix} 1 & \cdots & \cdots & \bar{j_i} & \cdots & n\\ j_1 & \cdots & \bar{j_i} & \cdots & \cdots & j_n\end{pmatrix}.
\]
Ax = d.
On the other hand, given the equation Ax = d, if A−1 exists, then by multiplying
both sides on the left by A−1 , we obtain x = A−1 d.
Hence, we have constructed A−1 in the 2 × 2 case, and shown that the
condition for its existence is det A ≠ 0, with
\[
A^{-1} = \frac{1}{\det A}\begin{pmatrix} A_{22} & -A_{12}\\ -A_{21} & A_{11}\end{pmatrix}
\]
4.2 Inverse of an n × n matrix
For larger matrices, the formula for the inverse is similar, but slightly more
complicated (and costly to evaluate). The key to finding the inverse is the
following:
Lemma. Σk Aik ∆jk = δij det A.
Proof. If i 6= j, then consider an n × n matrix B, which is identical to A except
the jth row is replaced by the ith row of A. So ∆jk of B = ∆jk of A, since ∆jk
does not depend on the elements in row j. Since B has a duplicate row, we know
that
\[
0 = \det B = \sum_{k=1}^{n} B_{jk}\Delta_{jk} = \sum_{k=1}^{n} A_{ik}\Delta_{jk}.
\]
Here A(i)ii ≠ 0 (which we can achieve by re-ordering), and the superfix (i) refers
to the “version number” of the coefficient, e.g. A(2)22 is the second version of the
coefficient of x2 in the second row.
Let’s consider the different possibilities:
(i) r < m and at least one of d(r)r+1 , · · · , d(r)m ≠ 0. Then a contradiction is
reached. The system is inconsistent and has no solution. We say it is
overdetermined.
Example. Consider the system
3x1 + 2x2 + x3 = 3
6x1 + 3x2 + 3x3 = 0
6x1 + 2x2 + 4x3 = 6
This becomes
3x1 + 2x2 + x3 = 3
0 − x2 + x3 = −6
0 − 2x2 + 2x3 = 0
And then
3x1 + 2x2 + x3 = 3
0 − x2 + x3 = −6
0 = 12
We have d(3)3 = 12 ≠ 0 and there is no solution.
(ii) If r = n ≤ m, and all d(r)r+i = 0, then from the nth equation there
is a unique solution xn = d(n)n /A(n)nn , and hence for all xi by back
substitution. This system is determined.
Example.
2x1 + 5x2 = 2
4x1 + 3x2 = 11
This becomes
2x1 + 5x2 = 2
−7x2 = 7
(iii) If r < n and all d(r)r+i = 0, then xr+1 , · · · , xn can be freely chosen, and there
are infinitely many solutions. The system is under-determined. e.g.
x1 + x2 = 1
2x1 + 2x2 = 2
Which gives
x1 + x2 = 1
0=0
This determinant is an upper triangular one (all elements below diagonal are 0)
and the determinant is the product of its diagonal elements.
Hence if r < n (and d(r)i = 0 for i > r), then we have case (iii) and det A = 0.
If r = n, then we have case (ii) and det A = (−1)^k A11 A(2)22 · · · A(n)nn ≠ 0,
where k is the number of row interchanges made in the elimination.
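The forward-elimination procedure used in the examples above can be sketched in a few lines of Python; this is illustrative only and performs just a simple row swap when a pivot vanishes:

import numpy as np

def forward_eliminate(A, d):
    """Reduce the augmented system (A | d) to upper triangular form by row operations."""
    M = np.hstack([A.astype(float), d.reshape(-1, 1).astype(float)])
    rows, cols = A.shape
    for i in range(min(rows, cols)):
        # swap in a row with a non-zero entry in column i, if one exists
        pivot_rows = [r for r in range(i, rows) if not np.isclose(M[r, i], 0.0)]
        if not pivot_rows:
            continue
        M[[i, pivot_rows[0]]] = M[[pivot_rows[0], i]]
        for r in range(i + 1, rows):
            M[r] -= (M[r, i] / M[i, i]) * M[i]
    return M

A = np.array([[3, 2, 1], [6, 3, 3], [6, 2, 4]])
d = np.array([3, 0, 6])
print(forward_eliminate(A, d))
# the last row comes out as 0 0 0 | 12, i.e. "0 = 12": the system is inconsistent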
4.4 Matrix rank
Consider a linear map α : Rn → Rm . Recall the rank r(α) is the dimension of
the image. Suppose that the matrix A is associated with the linear map. We
also call r(A) the rank of A.
Recall that if the standard basis is e1 , · · · en , then Ae1 , · · · , Aen span the
image (but they are not necessarily linearly independent).
Further, Ae1 , · · · , Aen are the columns of the matrix A. Hence r(A) is the
number of linearly independent columns.
Definition (Column and row rank of linear map). The column rank of a matrix
is the maximum number of linearly independent columns.
The row rank of a matrix is the maximum number of linearly independent
rows.
Theorem. The column rank and row rank are equal for any m × n matrix.
Proof. Let r be the row rank of A. Write the biggest set of linearly independent
rows as v1T , v2T , · · · vrT or in component form vkT = (vk1 , vk2 , · · · , vkn ) for k =
1, 2, · · · , r.
Now denote the ith row of A as rTi = (Ai1 , Ai2 , · · · Ain ).
Note that every row of A can be written as a linear combination of the v’s.
(If ri cannot be written as a linear combination of the v’s, then it is independent
of the v’s and v is not the maximum collection of linearly independent rows)
Write
\[
r_i^T = \sum_{k=1}^{r} C_{ik}\, v_k^T.
\]
or
\[
\begin{pmatrix} A_{1j}\\ A_{2j}\\ \vdots\\ A_{mj}\end{pmatrix} = \sum_{k=1}^{r} v_{kj}\begin{pmatrix} C_{1k}\\ C_{2k}\\ \vdots\\ C_{mk}\end{pmatrix}.
\]
So every column of A can be written as a linear combination of the r column
vectors ck . Then the column rank of A ≤ r, the row rank of A.
Apply the same argument to AT to see that the row rank is ≤ the column
rank.
4.5 Homogeneous problem Ax = 0
We restrict our attention to the square case, i.e. number of unknowns = number
of equations. Here A is an n × n matrix. We want to solve Ax = 0.
First of all, if det A ≠ 0, then A−1 exists and x = A−1 0 = 0, which is the
unique solution. Hence if Ax = 0 with x ≠ 0, then det A = 0.
(i) det(A) ≠ 0. So A−1 exists and n(A) = 0, r(A) = n. Then for any d ∈ Rn ,
a unique solution must exist and it is x = A−1 d.
(ii) det(A) = 0. Then A−1 does not exist, and n(A) > 0, r(A) < n. So the
image of A is not the whole of Rn .
(a) If d ∉ im A, then there is no solution (by definition of the image)
(b) If d ∈ im A, then by definition there exists at least one x such that
Ax = d. The general solution of Ax = d can be written as x = x0 +y,
where x0 is a particular solution (i.e. Ax0 = d), and y is any vector
in ker A (i.e. Ay = 0). (cf. Isomorphism theorem)
If n(A) = 0, then y = 0 only, and then the solution is unique (i.e.
case (i)). If n(A) > 0, then {ui }, i = 1, · · · , n(A), is a basis of the
kernel. Hence
\[
y = \sum_{j=1}^{n(A)}\mu_j u_j,
\]
so
\[
x = x_0 + \sum_{j=1}^{n(A)}\mu_j u_j.
\]
Example. Consider
\[
\begin{pmatrix}1 & 1\\ a & 1\end{pmatrix}\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}1\\ b\end{pmatrix}.
\]
We have det A = 1 − a. If a ≠ 1, then A−1 exists and
\[
A^{-1} = \frac{1}{1-a}\begin{pmatrix}1 & -1\\ -a & 1\end{pmatrix}.
\]
Then
\[
x = \frac{1}{1-a}\begin{pmatrix}1-b\\ -a+b\end{pmatrix}.
\]
If a = 1, then
\[
Ax = \begin{pmatrix}x_1+x_2\\ x_1+x_2\end{pmatrix} = (x_1+x_2)\begin{pmatrix}1\\ 1\end{pmatrix}.
\]
So im A = span{(1, 1)T } and ker A = span{(1, −1)T }. If b ≠ 1, then (1, b)T ∉ im A
and there is no solution. If b = 1, then (1, b)T ∈ im A.
We find a particular solution (1, 0)T . So the general solution is
\[
x = \begin{pmatrix}1\\ 0\end{pmatrix} + \lambda\begin{pmatrix}1\\ -1\end{pmatrix}.
\]
We have det A = (a − b)² (2a + b). If a ≠ b and b ≠ −2a, then the inverse exists
and there is a unique solution for any c. Otherwise, the possible cases are
(i) a = b, b ≠ −2a. So a ≠ 0. The kernel is the plane x + y + z = 0, which is
span{(−1, 1, 0)T , (−1, 0, 1)T }. We extend this basis to R3 by adding (1, 0, 0)T .
So the image is the span of (a, a, a)T = a(1, 1, 1)T . Hence if c ≠ 1, then (1, c, 1)T
is not in the image and there is no solution. If c = 1, then a particular solution
is (1/a, 0, 0)T and the general solution is
\[
x = \begin{pmatrix}1/a\\ 0\\ 0\end{pmatrix} + \lambda\begin{pmatrix}-1\\ 1\\ 0\end{pmatrix} + \mu\begin{pmatrix}-1\\ 0\\ 1\end{pmatrix}.
\]
(ii) a ≠ b, b = −2a. So a ≠ 0, and Ax = 0 reduces to
x + y − 2z = 0
−2x + y + z = 0
x − 2y + z = 0
This can be solved to give x = y = z, and the kernel is span{(1, 1, 1)T }. We
add (1, 0, 0)T and (0, 0, 1)T to form a basis of R3 . So the image is the span of
(1, −2, 1)T and (−2, 1, 1)T .
If (1, c, 1)T is in the image, then
\[
\begin{pmatrix}1\\ c\\ 1\end{pmatrix} = \lambda\begin{pmatrix}1\\ -2\\ 1\end{pmatrix} + \mu\begin{pmatrix}-2\\ 1\\ 1\end{pmatrix}.
\]
where cj ∈ C and cm ≠ 0.
Then p(z) = 0 has precisely m (not necessarily distinct) roots in the complex
plane, accounting for multiplicity.
Note that we have the disclaimer “accounting for multiplicity”. For example,
x2 − 2x + 1 = 0 has only one distinct root, 1, but we say that this root has
multiplicity 2, and is thus counted twice. Formally, multiplicity is defined as
follows:
Definition (Multiplicity of root). The root z = ω has multiplicity k if (z − ω)k
is a factor of p(z) but (z − ω)k+1 is not.
Ax = λx
for some λ. λ is the associated eigenvalue. This means that the direction of the
eigenvector is preserved by the mapping, but is scaled up by λ.
There is a rather easy way of finding eigenvalues:
det(A − λI) = 0.
(A − λI)x = 0
and thus
x ∈ ker(A − λI)
But x ≠ 0. So ker(A − λI) is non-trivial and det(A − λI) = 0. The (⇐) direction
is similar.
Definition (Characteristic equation of matrix). The characteristic equation of
A is
det(A − λI) = 0.
Definition (Characteristic polynomial of matrix). The characteristic polynomial
of A is
pA (λ) = det(A − λI).
From the definition of the determinant,
tr(A) = λ1 + λ2 + · · · + λn .
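As an illustrative check (the matrix is made up), NumPy's eigenvalue routine confirms that the eigenvalues sum to the trace; the analogous fact that they multiply to the determinant, which follows from the characteristic polynomial in the same way, is checked too:

import numpy as np

A = np.array([[2.0, 1.0, 0.0],     # made-up example matrix
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

print(np.isclose(eigenvalues.sum(), np.trace(A)))        # sum of eigenvalues = tr(A)
print(np.isclose(eigenvalues.prod(), np.linalg.det(A)))  # product of eigenvalues = det(A)

# each column of `eigenvectors` satisfies A x = lambda x
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)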
∆λ = M (λ) − m(λ).
d1 x1 + d2 x2 + · · · + dr xr = 0.
Suppose that this is the shortest non-trivial linear combination that gives 0 (we
may need to re-order xi ).
Now apply (A − λ1 I) to the whole equation to obtain
We know that the first term is 0, while the others are not (since we assumed
λi ≠ λj for i ≠ j). So
So we obtain
\[
\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}1\\ i\end{pmatrix}
\]
to be an eigenvector. Clearly any scalar multiple of (1, i)T is also a solution,
but still in the same eigenspace Ei = span{(1, i)T }.
Solving (A − λ2 I)x = 0 gives
\[
\begin{pmatrix}x_1\\ x_2\end{pmatrix} = \begin{pmatrix}1\\ -i\end{pmatrix}.
\]
So E−i = span{(1, −i)T }.
Note that M (±i) = m(±i) = 1, so ∆±i = 0. Also note that the two
eigenvectors are linearly independent and form a basis of C2 .
(ii) Consider
\[
A = \begin{pmatrix}-2 & 2 & -3\\ 2 & 1 & -6\\ -1 & -2 & 0\end{pmatrix}.
\]
Then det(A − λI) = 0 gives 45 + 21λ − λ² − λ³ = 0. So λ1 = 5, λ2 = λ3 = −3.
The eigenvector with eigenvalue 5 is x = (1, 2, −1)T .
For λ = −3, the eigenvector equation gives x1 = −2x2 + 3x3 for any x2 , x3 .
This gives two linearly independent eigenvectors, say (−2, 1, 0)T and (3, 0, 1)T .
So M (5) = m(5) = 1 and M (−3) = m(−3) = 2, and there is no defect for
both of them. Note that these three eigenvectors form a basis of C3 .
(iii) Let
\[
A = \begin{pmatrix}-3 & -1 & 1\\ -1 & -3 & 1\\ -2 & -2 & 0\end{pmatrix}.
\]
Then 0 = pA (λ) = −(λ + 2)³. So λ = −2, −2, −2. To find the eigenvectors,
we have
\[
(A + 2I)x = \begin{pmatrix}-1 & -1 & 1\\ -1 & -1 & 1\\ -2 & -2 & 2\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3\end{pmatrix} = 0.
\]
The general solution is thus x1 + x2 − x3 = 0, i.e. x = (x1 , x2 , x1 + x2 )T .
The eigenspace E−2 = span{(1, 0, 1)T , (0, 1, 1)T }.
i.e. Pij is the ith component of e˜j with respect to the basis {e1 , e2 , · · · , en }.
Note that the sum is made as Pij ei , not Pij ej . This is different from the formula
for matrix multiplication.
Matrix P has as its columns the vectors e˜j relative to {e1 , e2 , · · · , en }. So
P = (e˜1 e˜2 · · · e˜n ) and
P (ei ) = e˜i
Similarly, we can write
\[
e_i = \sum_{k=1}^{n} Q_{ki}\,\tilde{e}_k
\]
with Q = (e1 e2 · · · en ).
Substituting this into the equation for ẽj , we have
\[
\tilde{e}_j = \sum_{i=1}^{n}\left(\sum_{k=1}^{n} Q_{ki}\,\tilde{e}_k\right) P_{ij} = \sum_{k=1}^{n}\tilde{e}_k\left(\sum_{i=1}^{n} Q_{ki} P_{ij}\right)
\]
But ẽ1 , ẽ2 , · · · , ẽn are linearly independent, so this is only possible if
\[
\sum_{i=1}^{n} Q_{ki} P_{ij} = \delta_{kj},
\]
Theorem. Denote a vector as u with respect to {ei } and as ũ with respect to {ẽi }.
Then
u = P ũ and ũ = P −1 u.
Example. Take the first basis as {e1 = (1, 0), e2 = (0, 1)} and the second as
{e˜1 = (1, 1), e˜2 = (−1, 1)}.
So e˜1 = e1 + e2 and e˜2 = −e1 + e2 . We have
\[
P = \begin{pmatrix}1 & -1\\ 1 & 1\end{pmatrix}.
\]
u = u1 e1 + u2 e2
  = ½ u1 (ẽ1 − ẽ2 ) + ½ u2 (ẽ1 + ẽ2 )
  = ½(u1 + u2 )ẽ1 + ½(−u1 + u2 )ẽ2 .
Alternatively, using the formula above, we obtain
\[
\tilde{u} = P^{-1}u = \frac{1}{2}\begin{pmatrix}1 & 1\\ -1 & 1\end{pmatrix}\begin{pmatrix}u_1\\ u_2\end{pmatrix} = \begin{pmatrix}\tfrac{1}{2}(u_1 + u_2)\\ \tfrac{1}{2}(-u_1 + u_2)\end{pmatrix}
\]
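The computation in this example is easy to reproduce numerically (illustrative sketch; the components u1 = 3, u2 = 5 are made up):

import numpy as np

# Columns of P are the new basis vectors written in the old basis
P = np.array([[1.0, -1.0],
              [1.0,  1.0]])

u = np.array([3.0, 5.0])            # components with respect to {e1, e2}
u_tilde = np.linalg.solve(P, u)     # components with respect to {e1~, e2~}, i.e. P^{-1} u

print(u_tilde)                      # [4. 1.] = [ (u1+u2)/2, (-u1+u2)/2 ]
print(np.allclose(P @ u_tilde, u))  # u = P u~ recovers the original components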
u′ = α(u) = Au.
Denote u and u′ as being with respect to the basis {ei } (i.e. the same basis in both
spaces), and ũ, ũ′ with respect to {ẽi }.
Using what we’ve got above, we have
u′ = Au
P ũ′ = AP ũ
ũ′ = P −1 AP ũ = Ãũ
So
Theorem.
à = P −1 AP.
Example. Consider the shear
\[
S_\lambda = \begin{pmatrix}1 & \lambda & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}
\]
with respect to the standard basis. Choose a new set of basis vectors by rotating by θ about the e3 axis. So we have
\[
P = \begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix},\qquad P^{-1} = \begin{pmatrix}\cos\theta & \sin\theta & 0\\ -\sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}
\]
Clearly this is much more complicated than our original basis. This shows that
choosing a sensible basis is important.
More generally, consider α : Cm → Cn , with x ∈ Cm , x′ ∈ Cn and x′ = Ax.
We know that A is an n × m matrix.
Suppose Cm has a basis {ei } and Cn has a basis {fi }. Now change bases to
{e˜i } and {f˜i }.
We know that x = P x̃ with P an m × m matrix, and x′ = Rx̃′ with R an n × n matrix.
Combining both of these, we have
Rx̃′ = AP x̃
x̃′ = R−1 AP x̃
Therefore Ã = R−1 AP .
Example. Consider α : R3 → R2 , with respect to the standard bases in both
spaces,
\[
A = \begin{pmatrix}2 & 3 & 4\\ 1 & 6 & 3\end{pmatrix}
\]
Use a new basis {(2, 1)T , (1, 5)T } in R2 and keep the standard basis in R3 . The basis
change matrix in R3 is simply I, while
\[
R = \begin{pmatrix}2 & 1\\ 1 & 5\end{pmatrix},\qquad R^{-1} = \frac{1}{9}\begin{pmatrix}5 & -1\\ -1 & 2\end{pmatrix}.
\]
ẽ1 = e1 ↦ 2f1 + f2 = f̃1
ẽ2 = e2 ↦ 3f1 + 6f2 = f̃1 + f̃2
ẽ3 = e3 ↦ 4f1 + 3f2 = (17/9)f̃1 + (2/9)f̃2
and we can construct the matrix correspondingly.
5.4 Similar matrices
Definition (Similar matrices). Two n × n matrices A and B are similar if there
exists an invertible matrix P such that
B = P −1 AP,
i.e. they represent the same map under different bases. Alternatively, using the
language from IA Groups, we say that they are in the same conjugacy class.
Proposition. Similar matrices have the following properties:
Note that (iii) implies (i) and (ii) since the determinant and trace are the
coefficients of the characteristic polynomial
Proof. They are proven as follows:
(i) det B = det(P −1 AP ) = (det A)(det P )−1 (det P ) = det A
(ii)
tr B = Bii
= Pij−1 Ajk Pki
= Ajk Pki Pij−1
= Ajk (P P −1 )kj
= Ajk δkj
= Ajj
= tr A
(iii)
If we let
\[
P = \begin{pmatrix}1 & -2 & 3\\ 2 & 1 & 0\\ -1 & 0 & 1\end{pmatrix},\qquad P^{-1} = \frac{1}{8}\begin{pmatrix}1 & 2 & -3\\ -2 & 4 & 6\\ 1 & 2 & 5\end{pmatrix},
\]
then
\[
\tilde{A} = P^{-1}AP = \begin{pmatrix}5 & 0 & 0\\ 0 & -3 & 0\\ 0 & 0 & -3\end{pmatrix},
\]
so A is diagonalizable.
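This diagonalisation can be verified directly; the sketch below (illustrative) uses the matrix A from the earlier eigenvalue example with eigenvalues 5, −3, −3:

import numpy as np

A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

# Columns of P are the eigenvectors found above
P = np.array([[ 1.0, -2.0, 3.0],
              [ 2.0,  1.0, 0.0],
              [-1.0,  0.0, 1.0]])

A_tilde = np.linalg.inv(P) @ A @ P
print(np.round(A_tilde, 10))   # diag(5, -3, -3)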
Theorem. Let λ1 , λ2 , · · · , λr , with r ≤ n be the distinct eigenvalues of A. Let
B1 , B2 , · · · Br be the bases of the eigenspaces Eλ1 , Eλ2 , · · · , Eλr correspondingly.
Then the set B = B1 ∪ B2 ∪ · · · ∪ Br is linearly independent.
This is similar to the proof we had for the case where the eigenvalues are
distinct. However, we are going to do it much more concisely, and the actual
meat of the proof is just a single line.
Proof. Write B1 = {x(1)1 , x(1)2 , · · · , x(1)m(λ1) }. Then m(λ1 ) = dim(Eλ1 ), and simi-
larly for all Bi .
Consider the following general linear combination of all elements in B. Con-
sider the equation
\[
\sum_{i=1}^{r}\sum_{j=1}^{m(\lambda_i)}\alpha_{ij}\, x_j^{(i)} = 0.
\]
The first sum is summing over all eigenspaces, and the second sum sums over
the basis vectors in Bi . Now apply the matrix
\[
\prod_{k=1,2,\cdots,\bar{K},\cdots,r} (A - \lambda_k I)
\]
to this equation. This kills every term except those with i = K, leaving
\(\prod_{k\neq K}(\lambda_K - \lambda_k)\sum_j \alpha_{Kj}\, x_j^{(K)} = 0\), where the prefactor is non-zero since the eigenvalues are distinct.
Since the x(K)j are linearly independent (BK is a basis), αKj = 0 for all j. Since
K was arbitrary, all αij must be zero. So B is linearly independent.
Proposition. A is diagonalizable iff all its eigenvalues have zero defect.
5.6 Canonical (Jordan normal) form
Given a matrix A, if its eigenvalues all have zero defect, then we can find
a basis in which it is diagonal. However, if some eigenvalue does have defect,
we can still put it into an almost-diagonal form. This is known as the Jordan
normal form.
So
e^A = P [diag(e^{λ1} , e^{λ2} , · · · , e^{λn} )]P^{−1} .
(iii) For 2 × 2 matrices which are similar to
\[
B = \begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix},
\]
we see that the characteristic polynomial pB (z) = det(B − zI) = (λ − z)². Then
\[
p_B(B) = (\lambda I - B)^2 = \begin{pmatrix}0 & -1\\ 0 & 0\end{pmatrix}^2 = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.
\]
Since we have proved it for the diagonalizable matrices above, we now know
that any 2 × 2 matrix satisfies the Cayley-Hamilton theorem.
In IB Linear Algebra, we will prove the Cayley Hamilton theorem properly for
all matrices without assuming diagonalizability.
5.8 Eigenvalues and eigenvectors of a Hermitian matrix
5.8.1 Eigenvalues and eigenvectors
Theorem. The eigenvalues of a Hermitian matrix H are real.
Proof. Suppose that H has eigenvalue λ with eigenvector v ≠ 0. Then
Hv = λv.
v† Hv = λv† v (∗)
We take the Hermitian conjugate of both sides. The left hand side is
(v† Hv)† = v† H † v = v† Hv,
while the Hermitian conjugate of the right hand side is
(λv† v)† = λ∗ v† v.
So we have
v† Hv = λ∗ v† v.
From (∗), we know that λv† v = λ∗ v† v. Since v ≠ 0, we know that v† v =
v · v ≠ 0. So λ = λ∗ and λ is real.
Hvi = λi vi (i)
Hvj = λj vj . (ii)
λi vj† vi = λj vj† vi .
Since λi ≠ λj , we must have vj† vi = 0. So their inner product is zero and they are
orthogonal.
So we know that if a Hermitian matrix has n distinct eigenvalues, then
the eigenvectors form an orthonormal basis. However, if there are degenerate
eigenvalues, it is more difficult, and requires the Gram-Schmidt process.
At each step, we subtract out the components of vk that lie in the span
of {v1 , · · · , vk−1 }. This ensures that all the vectors are orthogonal. Finally, we
normalize each basis vector individually to obtain an orthonormal basis.
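A minimal Gram-Schmidt sketch (illustrative; it assumes the input vectors are linearly independent and uses the complex scalar product u · v = Σ ui* vi):

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent (possibly complex) vectors."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for u in basis:
            w = w - np.vdot(u, v) * u    # subtract the component of v along u
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
# check orthonormality: e_i . e_j = delta_ij
print(np.allclose([[np.vdot(a, b) for b in es] for a in es], np.eye(3)))  # True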
Then
So U is a unitary matrix.
B 0 = {v1 , v2 , · · · , vr , w1 , w2 , · · · , wn−r }
B̃ = {v1 , v2 , · · · , vr , u1 , u2 , · · · , un−r }.
Now write P = (v1 v2 · · · vr u1 · · · un−r ), whose columns are the vectors vi and uj .
We have shown above that this is a unitary matrix, i.e. P −1 = P † . So if we
change basis, we have
\[
P^{-1}HP = P^{\dagger}HP = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\
0 & \lambda_2 & \cdots & 0 & 0 & 0 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots\\
0 & 0 & \cdots & \lambda_r & 0 & 0 & \cdots & 0\\
0 & 0 & \cdots & 0 & c_{11} & c_{12} & \cdots & c_{1,n-r}\\
0 & 0 & \cdots & 0 & c_{21} & c_{22} & \cdots & c_{2,n-r}\\
\vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 0 & c_{n-r,1} & c_{n-r,2} & \cdots & c_{n-r,n-r}
\end{pmatrix}
\]
with other entries 0 (where we have an r × r identity matrix block in the top
left corner and an (n − r) × (n − r) block with columns formed by the wj ).
Since the columns of Q are orthonormal, Q is unitary. So Q† P † HP Q =
diag(λ1 , λ2 , · · · , λr , λr+1 , · · · , λn ), where the first r λs are distinct and the re-
maining ones are copies of previous ones.
The n linearly-independent eigenvectors are the columns of P Q.
D = U † HU
H = U DU †
D = QT SQ
S = QDQT
N N † = N †N
xT Ax + bT x + c = 0,
xT Sx + bT x + c = 0
with S symmetric.
Since S is real and symmetric, we can diagonalize it using S = QDQT with
D diagonal. We write x′ = QT x and b′ = QT b. So we have
xT Dx = k.
λ1 x1² + λ2 x2² = k.
\[
\frac{x_1^2}{a^2} - \frac{x_2^2}{b^2} = 1,
\]
which is a hyperbola.
(iii) λ1 λ2 = 0: Say λ2 = 0, λ1 ≠ 0. Note that in this case, our symmetric
matrix S is not invertible and we cannot shift our origin as we did above.
From our initial equation, we substitute
\[
x_1'' = x_1' + \frac{b_1'}{2\lambda_1},\qquad x_2'' = x_2' + \frac{c}{b_2'} - \frac{(b_1')^2}{4\lambda_1 b_2'}
\]
to remove the x′1 and constant terms. Dropping the primes, we have
λ1 x1² + b2 x2 = 0,
which is a parabola.
Note that above we assumed b′2 ≠ 0. If b′2 = 0, we have λ1 (x′1 )² + b′1 x′1 + c =
0. If we solve this quadratic for x′1 , we obtain 0, 1 or 2 solutions for x1
(and x2 can be any value). So we have 0, 1 or 2 straight lines.
These are known as conic sections. As you will see in IA Dynamics and Relativity,
these are the trajectories of planets under the influence of gravity.
6.2 Focus-directrix property
Conic sections can be defined in a different way, in terms of
Definition (Conic sections). The eccentricity and scale are properties of a conic
section that satisfy the following:
Let the foci of a conic section be (±ae, 0) and the directrices be x = ±a/e.
A conic section is the set of points whose distance from a focus is e times the
distance from the directrix closer to that focus (unless e = 1, where we take the
distance to the other directrix).
Now consider the different cases of e:
(i) e < 1. [Diagram: focus at (ae, 0), directrix x = a/e, and a point (x, y) on the conic.] Then
\[
\sqrt{(x-ae)^2 + y^2} = e\left(\frac{a}{e} - x\right),
\]
which rearranges to
\[
\frac{x^2}{a^2} + \frac{y^2}{a^2(1-e^2)} = 1,
\]
which is an ellipse with semi-major axis a and semi-minor axis a√(1 − e²)
(if e = 0, then we have a circle).
(ii) e > 1. [Diagram: focus at (ae, 0), directrix x = a/e, and a point (x, y) on the conic.] Then
\[
\sqrt{(x-ae)^2 + y^2} = e\left(x - \frac{a}{e}\right),
\]
which rearranges to
\[
\frac{x^2}{a^2} - \frac{y^2}{a^2(e^2-1)} = 1,
\]
which is a hyperbola.
(iii) e = 1. [Diagram: focus at (a, 0), the directrix, and a point (x, y) on the conic.] Then
\[
\sqrt{(x-a)^2 + y^2} = x + a,
\]
which gives
\[
y^2 = 4ax,
\]
a parabola.
l = a|1 − e²|.
So (P x)T P y = xT y.
(iii) ⇒ (iv): (P vi )T P vj = viT vj = δij . So P vi ’s are also orthonormal.
(iv) ⇒ (v): Take the vi ’s to be the standard basis. So the columns of P , being
P ei , are orthonormal.
(v) ⇒ (i): The columns of P are orthonormal. Then (P P T )ij = Pik Pjk =
(Pi ) · (Pj ) = δij , viewing Pi as the ith column of P . So P P T = I.
⟨x | y⟩ = ⟨M x | M y⟩
for all x, y.
We know that xT Jy = (M x)T JM y = xT M T JM y. Since this has to be
true for all x and y, we must have
J = M T JM.
Definition (Lorentz group). The Lorentz group is a group of all Lorentz matrices
under matrix multiplication.
It is easy to prove that this is a group. For the closure axiom, we have
Bv1 Bv2 = Bv3 , where
\[
v_3 = \tanh(\tanh^{-1} v_1 + \tanh^{-1} v_2) = \frac{v_1 + v_2}{1 + v_1 v_2}.
\]
The set of all Bv is a group of transformations which preserve the Minkowski
inner product.
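As an illustrative numerical check of these statements (the 1+1-dimensional boost convention Bv = γ [[1, v], [v, 1]] with c = 1 and J = diag(1, −1) is assumed here), the sketch below verifies that a boost preserves the Minkowski inner product and that composing two boosts obeys the velocity-addition rule:

import numpy as np

def boost(v):
    """1+1 dimensional Lorentz boost (c = 1); convention B_v = gamma * [[1, v], [v, 1]]."""
    gamma = 1.0 / np.sqrt(1.0 - v * v)
    return gamma * np.array([[1.0, v], [v, 1.0]])

J = np.diag([1.0, -1.0])          # Minkowski inner product <x|y> = x^T J y

v1, v2 = 0.6, 0.7
B1, B2 = boost(v1), boost(v2)

print(np.allclose(B1.T @ J @ B1, J))    # the boost preserves the inner product
v3 = (v1 + v2) / (1.0 + v1 * v2)        # velocity addition
print(np.allclose(B1 @ B2, boost(v3)))  # B_{v1} B_{v2} = B_{v3}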