
Elements of Linear Algebra

Peter Ioan Radu

László Szilárd Csaba Viorel Adrian

Cluj-Napoca
2014
Contents

Introduction iii

1 Matrices 1
1.1 Basic definitions, operations and properties. . . . . . . . . . . . . . . 1
1.2 Determinants and systems of linear equations . . . . . . . . . . . . . 13
1.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

2 Vector Spaces 28
2.1 Definition of Vector Space and basic properties . . . . . . . . . . . . . 28
2.2 Subspaces of a vector space . . . . . . . . . . . . . . . . . . . . . . . 29
2.3 Basis. Dimension. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Local computations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

3 Linear maps between vector spaces 53


3.1 Properties of L(V, W) . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2 Local form of a linear map . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67


4 Proper vectors and the Jordan canonical form 72


4.1 Invariant subspaces. Proper vectors and values . . . . . . . . . . . . . 72
4.2 The minimal polynomial of an operator . . . . . . . . . . . . . . . . . 76
4.3 Diagonal matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.4 The Jordan canonical form . . . . . . . . . . . . . . . . . . . . . . . . 86
4.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

5 Inner product spaces 95


5.1 Basic definitions and results . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.3 Orthogonal complement . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.4 Linear manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
5.5 The Gram determinant. Distances. . . . . . . . . . . . . . . . . . . . 113
5.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

6 Operators on inner product spaces. 126


6.1 Linear functionals and adjoints . . . . . . . . . . . . . . . . . . . . . 126
6.2 Normal operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.3 Isometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
6.4 Self adjoint operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
6.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

7 Elements of geometry 145


7.1 Quadratic forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.2 Quadrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
7.3 Conics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Bibliography 153
Introduction

The aim of this book is to give an introduction to linear algebra and, at the same
time, to provide some applications that may be useful both in practice and in theory.
We hope this book will be a real help for graduate-level students in understanding
the basics of this beautiful mathematical subject called linear algebra. Our scope is
twofold: on the one hand we give a theoretical introduction to this field, which is more
than sufficient for the needs and background of graduate students; on the other hand
we present fully solved examples and problems that may be helpful in preparing for
exams and that also show the techniques used in the art of problem solving in this
field. At the end of every chapter, this work contains several proposed problems that
can be solved using the theory and the solved examples presented previously.
1
Matrices

1.1 Basic definitions, operations and properties.


Definition 1.1. A matrix of dimension $m \times n$ with elements in a field $\mathbb{F}$ (where
usually $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$) is a function $A : \{1, \dots, m\} \times \{1, \dots, n\} \to \mathbb{F}$,
$$A(i,j) = a_{ij} \in \mathbb{F}, \quad \forall i \in \{1, 2, \dots, m\},\ j \in \{1, 2, \dots, n\}.$$
Usually an $m \times n$ matrix is represented as a table with $m$ rows and $n$ columns:
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}.$$
Hence, the elements of a matrix $A$ are denoted by $a_{ij}$, where $a_{ij}$ stands for the
number that appears in the $i$th row and the $j$th column of $A$ (this is called the $(i,j)$
entry of $A$), and the matrix is represented as $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}}$.
We will denote the set of all $m \times n$ matrices with entries in $\mathbb{F}$ by $\mathcal{M}_{m,n}(\mathbb{F})$,
respectively, when $m = n$, by $\mathcal{M}_{n}(\mathbb{F})$. It is worth mentioning that the elements of
$\mathcal{M}_{n}(\mathbb{F})$ are called square matrices. In what follows, we provide some examples.

Example 1.2. Consider the matrices
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \quad \text{respectively} \quad B = \begin{pmatrix} i & 2+i & 0 \\ -3 & \sqrt{2} & -1+3i \end{pmatrix},$$
where $i$ is the imaginary unit. Then $A \in \mathcal{M}_{3}(\mathbb{R})$, or in other words, $A$ is a real valued
square matrix, meanwhile $B \in \mathcal{M}_{2,3}(\mathbb{C})$, or in other words, $B$ is a complex valued
matrix with two rows and three columns.

In what follows we present some special matrices.

Example 1.3. Consider the matrix $I_n = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{F})$, with $a_{ij} = 1$ if $i = j$
and $a_{ij} = 0$ otherwise. Here $1 \in \mathbb{F}$, respectively $0 \in \mathbb{F}$, are the multiplicative
identity, respectively the zero element, of the field $\mathbb{F}$.
Then
$$I_n = \begin{pmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{pmatrix}$$
and is called the identity matrix (or unit matrix) of order $n$.

Remark 1.4. Sometimes we denote the identity matrix simply by I.

Example 1.5. Consider the matrix $O = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$ having all entries
equal to the zero element of the field $\mathbb{F}$. Then
$$O = \begin{pmatrix} 0 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}$$
and is called the null matrix of order $m \times n$.

Example 1.6. Consider the matrices $A = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{F})$ given by $a_{ij} = 0$
whenever $i > j$, respectively $a_{ij} = 0$ whenever $i < j$. Then
$$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ 0 & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{pmatrix}, \quad \text{respectively} \quad A = \begin{pmatrix} a_{11} & 0 & \dots & 0 \\ a_{21} & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{pmatrix},$$
and is called an upper triangular, respectively a lower triangular, matrix.


If all entries outside the main diagonal are zero, $A$ is called a diagonal matrix.
In this case we have
$$A = \begin{pmatrix} a_{11} & 0 & \dots & 0 \\ 0 & a_{22} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a_{nn} \end{pmatrix}.$$

Addition of Matrices.
If $A$ and $B$ are $m \times n$ matrices, the sum of $A$ and $B$ is defined to be the $m \times n$
matrix $A + B$ obtained by adding corresponding entries. Hence, the addition
operation is a function
$$+ : \mathcal{M}_{m,n}(\mathbb{F}) \times \mathcal{M}_{m,n}(\mathbb{F}) \to \mathcal{M}_{m,n}(\mathbb{F}),$$
$$(a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} + (b_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} = (a_{ij} + b_{ij})_{i=\overline{1,m},\, j=\overline{1,n}}.$$
In other words, for $A, B \in \mathcal{M}_{m,n}(\mathbb{F})$ their sum is defined as
$$C = A + B = (c_{ij})_{i=\overline{1,m},\, j=\overline{1,n}},$$
where $c_{ij} = a_{ij} + b_{ij}$ for all $i \in \{1, 2, \dots, m\}$, $j \in \{1, 2, \dots, n\}$.

Properties of Matrix Addition.


Let $O \in \mathcal{M}_{m,n}(\mathbb{F})$ be the null matrix of size $m \times n$. For a given matrix
$X = (x_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$ we denote by $-X$ its additive inverse (opposite), that is,
$-X = (-x_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$. For every $A, B, C \in \mathcal{M}_{m,n}(\mathbb{F})$ the following properties
hold:

1. $A + B$ is again an $m \times n$ matrix (closure property).

2. $(A + B) + C = A + (B + C)$ (associative property).

3. $A + B = B + A$ (commutative property).

4. $A + O = O + A = A$ (additive identity).

5. $A + (-A) = (-A) + A = O$ (the additive inverse).

It turns out that $(\mathcal{M}_{m,n}(\mathbb{F}), +)$ is an Abelian group.

Scalar multiplication.
For $A \in \mathcal{M}_{m,n}(\mathbb{F})$ and $\alpha \in \mathbb{F}$ define $\alpha A = (\alpha a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}}$. Hence, the scalar
multiplication operation is a function
$$\cdot : \mathbb{F} \times \mathcal{M}_{m,n}(\mathbb{F}) \to \mathcal{M}_{m,n}(\mathbb{F}),$$
$$\alpha \cdot (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} = (\alpha \cdot a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}}, \quad \forall \alpha \in \mathbb{F},\ (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F}).$$

Properties of Scalar multiplication.


Obviously, for every $A, B \in \mathcal{M}_{m,n}(\mathbb{F})$ and $\alpha, \beta \in \mathbb{F}$ the following properties hold:

1. $\alpha A$ is again an $m \times n$ matrix (closure property).

2. $(\alpha\beta)A = \alpha(\beta A)$ (associative property).

3. $\alpha(A + B) = \alpha A + \alpha B$ (distributive property).

4. $(\alpha + \beta)A = \alpha A + \beta A$ (distributive property).

5. $1A = A$, where $1$ is the multiplicative identity of $\mathbb{F}$ (identity property).

Of course, we listed here only the left multiplication of matrices by scalars. By
defining $A\alpha = \alpha A$ we obtain the right multiplication of matrices by scalars.
Example 1.7. If
$$A = \begin{pmatrix} 1 & -1 & 1 \\ 0 & 2 & -1 \\ -2 & 2 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} -1 & 0 & 2 \\ 1 & -1 & 1 \\ 0 & -1 & 2 \end{pmatrix}, \quad \text{then}$$
$$2A - B = \begin{pmatrix} 3 & -2 & 0 \\ -1 & 5 & -3 \\ -4 & 5 & -2 \end{pmatrix} \quad \text{and} \quad 2A + B = \begin{pmatrix} 1 & -2 & 4 \\ 1 & 3 & -1 \\ -4 & 3 & 2 \end{pmatrix}.$$
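These entrywise operations are easy to check numerically. The following short Python/NumPy sketch (an illustration we add here; the text itself uses no software) reproduces the computation of Example 1.7.

```python
import numpy as np

A = np.array([[1, -1, 1],
              [0,  2, -1],
              [-2, 2,  0]])
B = np.array([[-1,  0, 2],
              [ 1, -1, 1],
              [ 0, -1, 2]])

# Addition and scalar multiplication act entry by entry.
print(2 * A - B)   # [[ 3 -2  0], [-1  5 -3], [-4  5 -2]]
print(2 * A + B)   # [[ 1 -2  4], [ 1  3 -1], [-4  3  2]]
```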
Transpose.
The transpose of a matrix $A \in \mathcal{M}_{m,n}(\mathbb{F})$ is defined to be the matrix $A^{\top} \in \mathcal{M}_{n,m}(\mathbb{F})$
obtained by interchanging the rows and columns of $A$. Locally, if $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}}$, then
$A^{\top} = (a_{ji})_{j=\overline{1,n},\, i=\overline{1,m}}$.
It is clear that $(A^{\top})^{\top} = A$. A matrix that has only one row (but possibly many columns) is
called a row matrix. Thus, a row matrix $A$ with $n$ columns is a $1 \times n$ matrix, i.e.
$$A = (a_1\ a_2\ a_3\ \dots\ a_n).$$
A matrix that has only one column (but possibly many rows) is called a column matrix.
Thus, a column matrix $A$ with $m$ rows is an $m \times 1$ matrix, i.e.
$$A = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{pmatrix}.$$
Obviously, the transpose of a row matrix is a column matrix and vice versa; hence,
in inline text a column matrix $A$ is represented as
$$A = (a_1\ a_2\ \dots\ a_m)^{\top}.$$

Conjugate Transpose.
Let $A \in \mathcal{M}_{m,n}(\mathbb{C})$. Define the conjugate transpose of $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{C})$ by
$A^{*} = (\overline{a_{ji}})_{j=\overline{1,n},\, i=\overline{1,m}}$, where $\overline{z}$ denotes the complex conjugate of the number $z \in \mathbb{C}$. We
have that $(A^{*})^{*} = A$ and $A^{\top} = A^{*}$ whenever $A$ contains only real entries.
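As a small numerical illustration (again in NumPy, our own choice), the transpose and conjugate transpose of the matrix $B$ from Example 1.2 can be formed as follows; for a real matrix the two coincide.

```python
import numpy as np

B = np.array([[1j, 2 + 1j, 0],
              [-3, np.sqrt(2), -1 + 3j]])   # the matrix B from Example 1.2

print(B.T)          # the transpose B^T, a 3 x 2 matrix
print(B.conj().T)   # the conjugate transpose B*, conjugating every entry

A = np.array([[1., 2.], [3., 4.]])
print(np.array_equal(A.T, A.conj().T))      # True: A^T = A* for real matrices
```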

Properties of the Transpose.

For every $A, B \in \mathcal{M}_{m,n}(\mathbb{F})$ and $\alpha \in \mathbb{F}$ the following hold:

1. $(A + B)^{\top} = A^{\top} + B^{\top}$.

2. $(A + B)^{*} = A^{*} + B^{*}$.

3. $(\alpha A)^{\top} = \alpha A^{\top}$ and $(\alpha A)^{*} = \overline{\alpha} A^{*}$.

Symmetries.
Let $A = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{F})$ be a square matrix. We recall that

• $A$ is said to be a symmetric matrix whenever $A = A^{\top}$ (locally, $a_{ij} = a_{ji}$ for all $i, j \in \{1, 2, \dots, n\}$).

• $A$ is said to be a skew-symmetric matrix whenever $A = -A^{\top}$ (locally, $a_{ij} = -a_{ji}$ for all $i, j \in \{1, 2, \dots, n\}$).

• $A$ is said to be a hermitian matrix whenever $A = A^{*}$ (locally, $a_{ij} = \overline{a_{ji}}$ for all $i, j \in \{1, 2, \dots, n\}$).

• $A$ is said to be a skew-hermitian matrix whenever $A = -A^{*}$ (locally, $a_{ij} = -\overline{a_{ji}}$ for all $i, j \in \{1, 2, \dots, n\}$).

It can easily be observed that every symmetric real matrix is hermitian, respectively,
every skew-symmetric real matrix is skew-hermitian.

Example 1.8. The matrix $A = \begin{pmatrix} 1 & -2 & 4 \\ -2 & 0 & 3 \\ 4 & 3 & 2 \end{pmatrix}$ is a symmetric matrix,
meanwhile the matrix $B = \begin{pmatrix} 0 & 1 & -3 \\ -1 & 0 & 3 \\ 3 & -3 & 0 \end{pmatrix}$ is a skew-symmetric matrix.
The matrix $C = \begin{pmatrix} 1 & 1+i & i \\ 1-i & 3 & 3-2i \\ -i & 3+2i & 2 \end{pmatrix}$ is a hermitian matrix, meanwhile the
matrix $D = \begin{pmatrix} -i & 2-i & -3i \\ -2-i & i & 2+3i \\ -3i & -2+3i & 0 \end{pmatrix}$ is a skew-hermitian matrix.

Matrix multiplication.
For a matrix $X = (x_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$ we denote by $X_{i*}$ its $i$th row, i.e. the row
matrix
$$X_{i*} = (x_{i1}\ x_{i2}\ \dots\ x_{in}).$$
Similarly, the $j$th column of $X$ is the column matrix
$$X_{*j} = (x_{1j}\ x_{2j}\ \dots\ x_{mj})^{\top}.$$
It is obvious that
$$(X^{\top})_{i*} = (X_{*i})^{\top},$$
respectively
$$(X^{\top})_{*j} = (X_{j*})^{\top}.$$
We say that the matrices $A$ and $B$ are conformable for multiplication in the order
$AB$ whenever $A$ has exactly as many columns as $B$ has rows, that is, $A \in \mathcal{M}_{m,p}(\mathbb{F})$
and $B \in \mathcal{M}_{p,n}(\mathbb{F})$.
For conformable matrices $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,p}}$ and $B = (b_{jk})_{j=\overline{1,p},\, k=\overline{1,n}}$ the matrix product $AB$
is defined to be the $m \times n$ matrix $C = (c_{ik})_{i=\overline{1,m},\, k=\overline{1,n}}$ with
$$c_{ik} = A_{i*} B_{*k} = \sum_{j=1}^{p} a_{ij} b_{jk}.$$
In the case that $A$ and $B$ fail to be conformable, the product $AB$ is not defined.

Remark 1.9. Note that the product is not commutative; that is, in general,
$AB \neq BA$ even if both products exist and have the same shape.
Example 1.10. Let $A = \begin{pmatrix} 1 & 0 & -1 \\ -1 & 1 & 0 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & -1 \\ 0 & 1 \\ -1 & 1 \end{pmatrix}$.
Then $AB = \begin{pmatrix} 2 & -2 \\ -1 & 2 \end{pmatrix}$ and $BA = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 1 & 0 \\ -2 & 1 & 1 \end{pmatrix}$.
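A quick numerical check of this example (a NumPy sketch, used here purely for illustration) also makes the failure of commutativity visible: the two products do not even have the same shape.

```python
import numpy as np

A = np.array([[1, 0, -1],
              [-1, 1, 0]])      # a 2 x 3 matrix
B = np.array([[1, -1],
              [0, 1],
              [-1, 1]])         # a 3 x 2 matrix

print(A @ B)   # the 2 x 2 product AB
print(B @ A)   # the 3 x 3 product BA
```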

Rows and columns of a product.

Suppose that $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,p}} \in \mathcal{M}_{m,p}(\mathbb{F})$ and $B = (b_{ij})_{i=\overline{1,p},\, j=\overline{1,n}} \in \mathcal{M}_{p,n}(\mathbb{F})$.
There are various ways to express the individual rows and columns of a matrix
product. For example, the $i$th row of $AB$ is
$$C_{i*} = [AB]_{i*} = (A_{i*}B_{*1}\ \ A_{i*}B_{*2}\ \dots\ A_{i*}B_{*n}) = A_{i*}B = (a_{i1}\ a_{i2}\ \dots\ a_{ip}) \begin{pmatrix} B_{1*} \\ B_{2*} \\ \vdots \\ B_{p*} \end{pmatrix}.$$
There are similar representations for the individual columns, i.e. the $j$th column is
$$C_{*j} = [AB]_{*j} = (A_{1*}B_{*j}\ \ A_{2*}B_{*j}\ \dots\ A_{m*}B_{*j})^{\top} = AB_{*j} = (A_{*1}\ A_{*2}\ \dots\ A_{*p}) \begin{pmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{pj} \end{pmatrix}.$$
Consequently, we have:

1. $[AB]_{i*} = A_{i*}B$ ($i$th row of $AB$).

2. $[AB]_{*j} = AB_{*j}$ ($j$th column of $AB$).

3. $[AB]_{i*} = a_{i1}B_{1*} + a_{i2}B_{2*} + \dots + a_{ip}B_{p*} = \sum_{k=1}^{p} a_{ik}B_{k*}$.

4. $[AB]_{*j} = A_{*1}b_{1j} + A_{*2}b_{2j} + \dots + A_{*p}b_{pj} = \sum_{k=1}^{p} A_{*k}b_{kj}$.

The last two equations have both theoretical and practical importance. They show
that the rows of $AB$ are linear combinations of the rows of $B$, while the columns of $AB$ are
linear combinations of the columns of $A$. So it is wasted effort to compute the entire product
when only one row or column is needed.
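The practical point is worth seeing once in code. In the NumPy sketch below (an illustration with data of our own choosing), one row or one column of $AB$ is obtained from a single row of $A$ or a single column of $B$, without forming the whole product.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5))
B = rng.integers(-3, 4, size=(5, 6))

i, j = 2, 4
row_i = A[i, :] @ B      # row i of AB uses only row i of A
col_j = A @ B[:, j]      # column j of AB uses only column j of B

C = A @ B                # the full product, computed here only to check
print(np.array_equal(row_i, C[i, :]))   # True
print(np.array_equal(col_j, C[:, j]))   # True
```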
Properties of matrix multiplication.

Distributive and associative laws.

For conformable matrices one has:

1. $A(B + C) = AB + AC$ (left-hand distributive law).

2. $(B + C)A = BA + CA$ (right-hand distributive law).

3. $A(BC) = (AB)C$ (associative law).

For a matrix $A \in \mathcal{M}_{n}(\mathbb{F})$ one has
$$A I_n = A \quad \text{and} \quad I_n A = A,$$
where $I_n \in \mathcal{M}_{n}(\mathbb{F})$ is the identity matrix of order $n$.

Proposition 1.11. For conformable matrices $A \in \mathcal{M}_{m,p}(\mathbb{F})$ and $B \in \mathcal{M}_{p,n}(\mathbb{F})$ one has
$$(AB)^{\top} = B^{\top} A^{\top}.$$
The case of conjugate transposition is similar:
$$(AB)^{*} = B^{*} A^{*}.$$

Proof. Let $C = (c_{ij})_{i=\overline{1,n},\, j=\overline{1,m}} = (AB)^{\top}$. Then for every
$i \in \{1, 2, \dots, n\}$, $j \in \{1, 2, \dots, m\}$ one has $c_{ij} = [AB]_{ji} = A_{j*}B_{*i}$. Let us consider
now the $(i,j)$ entry of $B^{\top}A^{\top}$:
$$[B^{\top}A^{\top}]_{ij} = (B^{\top})_{i*}(A^{\top})_{*j} = (B_{*i})^{\top}(A_{j*})^{\top} = \sum_{k=1}^{p} [B^{\top}]_{ik}[A^{\top}]_{kj} = \sum_{k=1}^{p} b_{ki} a_{jk} = \sum_{k=1}^{p} a_{jk} b_{ki} = A_{j*}B_{*i}.$$

Exercise. Prove that for every matrix $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$ the matrices
$AA^{\top}$ and $A^{\top}A$ are symmetric matrices.
For a matrix $A \in \mathcal{M}_{n}(\mathbb{F})$, one can introduce its $m$th power by
$$A^0 = I_n, \quad A^1 = A, \quad A^m = A^{m-1}A.$$
Example 1.12. If $A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ then $A^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}$, $A^3 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
and $A^4 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2$. Hence $A^m = A^{m \bmod 4}$.
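The periodicity claimed in Example 1.12 can be checked directly; the NumPy sketch below (illustrative only) verifies $A^m = A^{m \bmod 4}$ for the first few powers.

```python
import numpy as np

A = np.array([[0, 1],
              [-1, 0]])

# The powers of A repeat with period 4 (with the convention A^0 = I).
for m in range(9):
    assert np.array_equal(np.linalg.matrix_power(A, m),
                          np.linalg.matrix_power(A, m % 4))

print(np.linalg.matrix_power(A, 4))   # the identity matrix I_2
```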

Trace of a product. Let $A$ be a square matrix of order $n$. The trace of $A$ is the
sum of the elements of the main diagonal, that is,
$$\operatorname{trace} A = \sum_{i=1}^{n} a_{ii}.$$

Proposition 1.13. For $A \in \mathcal{M}_{m,n}(\mathbb{C})$ and $B \in \mathcal{M}_{n,m}(\mathbb{C})$ one has
$\operatorname{trace} AB = \operatorname{trace} BA$.

Proof. We have
$$\operatorname{trace} AB = \sum_{i=1}^{m} [AB]_{ii} = \sum_{i=1}^{m} A_{i*}B_{*i} = \sum_{i=1}^{m}\sum_{k=1}^{n} a_{ik}b_{ki} = \sum_{k=1}^{n}\sum_{i=1}^{m} b_{ki}a_{ik} = \sum_{k=1}^{n} [BA]_{kk} = \operatorname{trace} BA.$$
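A one-line numerical sanity check of Proposition 1.13 (NumPy sketch with random data, added by us): $AB$ and $BA$ have different sizes, yet the traces agree.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))

# AB is 3 x 3 while BA is 5 x 5, but the traces coincide.
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```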

Block Matrix Multiplication.

Suppose that $A$ and $B$ are partitioned into submatrices (referred to as blocks) as
indicated below:
$$A = \begin{pmatrix} A_{11} & A_{12} & \dots & A_{1r} \\ A_{21} & A_{22} & \dots & A_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ A_{s1} & A_{s2} & \dots & A_{sr} \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} B_{11} & B_{12} & \dots & B_{1t} \\ B_{21} & B_{22} & \dots & B_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ B_{r1} & B_{r2} & \dots & B_{rt} \end{pmatrix}.$$
We say that the partitioned matrices are conformably partitioned if the pairs
$(A_{ik}, B_{kj})$ are conformable matrices for all indices $i, j, k$. In this case the
product $AB$ is formed by combining blocks exactly the same way as the scalars are
combined in ordinary matrix multiplication. That is, the $(i,j)$ block in the
product $AB$ is
$$A_{i1}B_{1j} + A_{i2}B_{2j} + \dots + A_{ir}B_{rj}.$$

Matrix Inversion.
For a square matrix $A \in \mathcal{M}_{n}(\mathbb{F})$, the matrix $B \in \mathcal{M}_{n}(\mathbb{F})$ that satisfies
$$AB = I_n \quad \text{and} \quad BA = I_n$$
(if it exists) is called the inverse of $A$ and is denoted by $B = A^{-1}$. Not all square
matrices admit an inverse (are invertible). An invertible square matrix is called
nonsingular, and a square matrix with no inverse is called a singular matrix.
Although not all matrices are invertible, when an inverse exists, it is unique.
Indeed, suppose that $X_1$ and $X_2$ are both inverses of a nonsingular matrix $A$.
Then
$$X_1 = X_1 I_n = X_1(AX_2) = (X_1 A)X_2 = I_n X_2 = X_2,$$
which implies that only one inverse is possible.


Properties of Matrix Inversion. For nonsingular matrices $A, B \in \mathcal{M}_{n}(\mathbb{F})$, the
following statements hold.

1. $(A^{-1})^{-1} = A$.

2. The product $AB$ is nonsingular.

3. $(AB)^{-1} = B^{-1}A^{-1}$.

4. $(A^{-1})^{\top} = (A^{\top})^{-1}$ and $(A^{-1})^{*} = (A^{*})^{-1}$.

One can easily prove the following statements.

Products of nonsingular matrices are nonsingular.
If $A \in \mathcal{M}_{n}(\mathbb{F})$ is nonsingular, then there is a unique solution $X \in \mathcal{M}_{n,p}(\mathbb{F})$ for the
equation
$$AX = B, \quad \text{where } B \in \mathcal{M}_{n,p}(\mathbb{F}),$$
and the solution is $X = A^{-1}B$.

A system of $n$ linear equations in $n$ unknowns can be written in the form $Ax = b$,
with $x, b \in \mathcal{M}_{n,1}(\mathbb{F})$, so it follows, when $A$ is nonsingular, that the system has a
unique solution $x = A^{-1}b$.
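In practice the formula $x = A^{-1}b$ is usually evaluated without forming $A^{-1}$ explicitly; the small sketch below (Python/NumPy, with data chosen by us) shows both routes giving the same answer.

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([5., 10.])

x_via_inverse = np.linalg.inv(A) @ b   # x = A^{-1} b
x_via_solver = np.linalg.solve(A, b)   # same solution, no explicit inverse

print(x_via_inverse, x_via_solver)     # [1. 3.] [1. 3.]
```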

1.2 Determinants and systems of linear equations

Determinants.
For every square matrix $A = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{F})$ one can assign a scalar, denoted
$\det(A)$, called the determinant of $A$. In extended form we write
$$\det(A) = \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{vmatrix}.$$

In order to define the determinant of a square matrix, we need the following
notations and notions. Recall that by a permutation of the integers $\{1, 2, \dots, n\}$ we
mean an arrangement of these integers in some definite order. In other words, a
permutation is a bijection $\sigma : \{1, 2, \dots, n\} \to \{1, 2, \dots, n\}$. It can easily be observed
that the number of permutations of the integers $\{1, 2, \dots, n\}$ equals
$n! = 1 \cdot 2 \cdot \ldots \cdot n$. Let us denote by $S_n$ the set of all permutations of the integers
$\{1, 2, \dots, n\}$. A pair $(i, j)$ is called an inversion of a permutation $\sigma \in S_n$ if $i < j$ and
$\sigma(i) > \sigma(j)$. A permutation $\sigma \in S_n$ is called even or odd according to whether the
number of inversions of $\sigma$ is even or odd, respectively. The sign of a permutation
$\sigma \in S_n$, denoted by $\operatorname{sgn}(\sigma)$, is $+1$ if the permutation is even and $-1$ if the
permutation is odd.

Definition 1.14. Let $A \in \mathcal{M}_{n}(\mathbb{F})$. The determinant of $A$ is the scalar defined by
the equation
$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1\sigma(1)} \cdot a_{2\sigma(2)} \cdot \ldots \cdot a_{n\sigma(n)}.$$
It can easily be computed that for $A = (a_{ij})_{i,j=\overline{1,2}} \in \mathcal{M}_{2}(\mathbb{F})$ one has
$$\det(A) = a_{11}a_{22} - a_{12}a_{21}.$$
Similarly, if $A = (a_{ij})_{i,j=\overline{1,3}} \in \mathcal{M}_{3}(\mathbb{F})$, then its determinant can be calculated by the
rule
$$\det(A) = a_{11}a_{22}a_{33} + a_{13}a_{21}a_{32} + a_{12}a_{23}a_{31} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.$$
Example 1.15. If $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}$ then
$$\det(A) = 1 \cdot 5 \cdot 9 + 3 \cdot 4 \cdot 8 + 2 \cdot 6 \cdot 7 - 3 \cdot 5 \cdot 7 - 1 \cdot 6 \cdot 8 - 2 \cdot 4 \cdot 9 = 0.$$
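Definition 1.14 can be turned into a (very inefficient, $O(n!)$) algorithm; the sketch below implements it for small matrices and compares the result on Example 1.15 with a library routine. It illustrates the definition and is not a practical way to compute determinants.

```python
import numpy as np
from itertools import permutations

def det_by_definition(A):
    """Determinant computed directly from the sum over all permutations."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # sgn(sigma) = (-1)^(number of inversions)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = -1.0 if inversions % 2 else 1.0
        prod = 1.0
        for i in range(n):
            prod *= A[i, sigma[i]]
        total += sign * prod
    return total

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
print(det_by_definition(A), np.linalg.det(A))   # both 0 (up to rounding)
```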
Laplace's theorem.
Let $A \in \mathcal{M}_{n}(\mathbb{F})$ and let $k$ be an integer, $1 \le k \le n$. Consider the rows $i_1, \dots, i_k$ and
the columns $j_1, \dots, j_k$ of $A$. By deleting the other rows and columns we obtain a
submatrix of $A$ of order $k$, whose determinant is called a minor of $A$ and is
denoted by $M^{j_1 \dots j_k}_{i_1 \dots i_k}$. Now let us delete the rows $i_1, \dots, i_k$ and the columns $j_1, \dots, j_k$ of
$A$. We obtain a submatrix of $A$ of order $n - k$. Its determinant is called the
complementary minor of $M^{j_1 \dots j_k}_{i_1 \dots i_k}$ and it is denoted by $\widetilde{M}^{j_1 \dots j_k}_{i_1 \dots i_k}$. Finally, let us denote
(the so-called cofactor)
$$A^{j_1 \dots j_k}_{i_1 \dots i_k} = (-1)^{i_1 + \dots + i_k + j_1 + \dots + j_k}\, \widetilde{M}^{j_1 \dots j_k}_{i_1 \dots i_k}.$$
The adjugate of $A$ is the transpose of the matrix of cofactors, $\operatorname{adj}(A) = \big( (A^{j}_{i})_{i,j=\overline{1,n}} \big)^{\top}$,
where $A^{j}_{i}$ denotes the cofactor of the entry $a_{ij}$; that is,
$$\operatorname{adj}(A) = \begin{pmatrix} A^{1}_{1} & A^{1}_{2} & \cdots & A^{1}_{n} \\ A^{2}_{1} & A^{2}_{2} & \cdots & A^{2}_{n} \\ \vdots & \vdots & \ddots & \vdots \\ A^{n}_{1} & A^{n}_{2} & \cdots & A^{n}_{n} \end{pmatrix}.$$

The next result provides a computation method of the inverse of a nonsingular
matrix.

Theorem 1.16. A square matrix $A \in \mathcal{M}_{n}(\mathbb{F})$ is invertible if and only if
$\det(A) \neq 0$. In this case its inverse can be obtained by the formula
$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A).$$

Corollary 1.17. A linear system $Ax = 0$ with $n$ equations in $n$ unknowns has a
non-trivial solution if and only if $\det(A) = 0$.
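The adjugate formula of Theorem 1.16 is easy to prototype. The following NumPy sketch (an illustration we add; the cofactors are computed with a library determinant) builds $\operatorname{adj}(A)$ for a small upper triangular matrix and checks the formula against the library inverse.

```python
import numpy as np

def adjugate(A):
    """Transpose of the matrix of cofactors of a square matrix A."""
    n = A.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)  # transposed indexing
    return adj

A = np.array([[1., 2., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
print(adjugate(A) / np.linalg.det(A))   # the inverse, by Theorem 1.16
print(np.linalg.inv(A))                 # agrees (up to rounding)
```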

We state, without proof, the Laplace expansion theorem:


Theorem 1.18.
$$\det(A) = \sum M^{j_1 \dots j_k}_{i_1 \dots i_k}\, A^{j_1 \dots j_k}_{i_1 \dots i_k}, \quad \text{where}$$

• the indices $i_1, \dots, i_k$ are fixed;

• the indices $j_1, \dots, j_k$ run over all possible values such that $1 \le j_1 < \dots < j_k \le n$.

As immediate consequences we obtain the following methods of calculating
determinants, called row expansion and column expansion.

Corollary 1.19. Let $A \in \mathcal{M}_{n}(\mathbb{F})$. Then

(i) $\det(A) = \sum_{k=1}^{n} a_{ik} A^{k}_{i}$ (expansion by row $i$);

(ii) $\det(A) = \sum_{k=1}^{n} a_{kj} A^{j}_{k}$ (expansion by column $j$).

Properties of the determinant.


Let $A, B \in \mathcal{M}_{n}(\mathbb{F})$ and let $a \in \mathbb{F}$. Then

(1) $\det(A^{\top}) = \det(A)$.

(2) A permutation of the rows (respectively columns) of $A$ multiplies the
determinant by the sign of the permutation.

(3) A determinant with two equal rows (or two equal columns) is zero.

(4) The determinant of $A$ is not changed if a multiple of one row (or column) is
added to another row (or column).

(5) $\det(A^{-1}) = \dfrac{1}{\det(A)}$.

(6) $\det(AB) = \det(A)\det(B)$.

(7) $\det(aA) = a^{n}\det(A)$.

(8) If $A$ is a triangular matrix, i.e. $a_{ij} = 0$ whenever $i > j$ (or $a_{ij} = 0$ whenever
$i < j$), then its determinant equals the product of the diagonal entries, that
is, $\det(A) = a_{11} \cdot a_{22} \cdot \ldots \cdot a_{nn} = \prod_{i=1}^{n} a_{ii}$.

Rank. Elementary transformations.

A natural number $r$ is called the rank of the matrix $A \in \mathcal{M}_{m,n}(\mathbb{F})$ if

1. there exists a square submatrix $M \in \mathcal{M}_{r}(\mathbb{F})$ of $A$ which is nonsingular (that
is, $\det(M) \neq 0$);

2. if $p > r$, then for every submatrix $N \in \mathcal{M}_{p}(\mathbb{F})$ of $A$ one has $\det(N) = 0$.

We denote $\operatorname{rank}(A) = r$.

It can be proved that for $A \in \mathcal{M}_{m,n}(\mathbb{F})$ and $B \in \mathcal{M}_{n,p}(\mathbb{F})$ one has
$$\operatorname{rank}(A) + \operatorname{rank}(B) - n \le \operatorname{rank}(AB) \le \min\{\operatorname{rank}(A), \operatorname{rank}(B)\}.$$

Theorem 1.20. Let $A, B \in \mathcal{M}_{n}(\mathbb{F})$ with $\det(A) \neq 0$. Then $\operatorname{rank}(AB) = \operatorname{rank}(B)$.

Proof. Since $\det(A) \neq 0$, we have $\operatorname{rank}(A) = n$. By using the above inequality with
$m = p = n$ we obtain $\operatorname{rank}(B) \le \operatorname{rank}(AB) \le \operatorname{rank}(B)$. Hence
$\operatorname{rank}(AB) = \operatorname{rank}(B)$.

Definition 1.21. The following operations are called elementary row
transformations on the matrix $A \in \mathcal{M}_{m,n}(\mathbb{F})$:

1. Interchanging any two rows.

2. Multiplication of a row by any non-zero number.

3. The addition of one row to another.

Similarly one can define the elementary column transformations.

Consider an arbitrary determinant. If it is nonzero, it remains nonzero after
performing elementary transformations; if it is zero, it remains zero. One can
conclude that the rank of a matrix does not change if we perform any elementary
transformation on the matrix. So we can use elementary transformations in order
to compute the rank.
Namely, given a matrix $A \in \mathcal{M}_{m,n}(\mathbb{F})$, we transform it, by an appropriate succession
of elementary transformations, into a matrix $B$ such that

• the diagonal entries of $B$ are either $0$ or $1$, all the $1$'s preceding all the $0$'s on
the diagonal;

• all the other entries of $B$ are $0$.

Since the rank is invariant under elementary transformations, we have
$\operatorname{rank}(A) = \operatorname{rank}(B)$, but it is clear that the rank of $B$ is equal to the number of $1$'s
on the diagonal.
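Numerically, the rank is usually computed from a matrix factorization rather than by hand reduction, but the result is the same; a brief NumPy illustration (our own, not part of the text):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])

# The second row is twice the first, so the rank is 2, not 3.
print(np.linalg.matrix_rank(A))   # 2
```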
The next theorem offers a procedure to compute the inverse of a matrix:

Theorem 1.22. If a square matrix is reduced to the identity matrix by a sequence


of elementary row operations, the same sequence of elementary row transformations
performed on the identity matrix produces the inverse of the given matrix.
Example 1.23. Compute the inverse of the matrix $A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{pmatrix}$ by using
elementary row operations.
We write
$$\left(\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 & 1 & 0 \\ 0 & 0 & 3 & 0 & 0 & 1 \end{array}\right) \overset{(-\frac{1}{3}A_{3*}+A_{2*})}{\sim} \left(\begin{array}{ccc|ccc} 1 & 2 & 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 & 1 & -\frac{1}{3} \\ 0 & 0 & 3 & 0 & 0 & 1 \end{array}\right) \overset{(-A_{2*}+A_{1*})}{\sim}$$
$$\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & -1 & \frac{1}{3} \\ 0 & 2 & 0 & 0 & 1 & -\frac{1}{3} \\ 0 & 0 & 3 & 0 & 0 & 1 \end{array}\right) \overset{(\frac{1}{2}A_{2*},\ \frac{1}{3}A_{3*})}{\sim} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & -1 & \frac{1}{3} \\ 0 & 1 & 0 & 0 & \frac{1}{2} & -\frac{1}{6} \\ 0 & 0 & 1 & 0 & 0 & \frac{1}{3} \end{array}\right).$$
Hence
$$A^{-1} = \begin{pmatrix} 1 & -1 & \frac{1}{3} \\ 0 & \frac{1}{2} & -\frac{1}{6} \\ 0 & 0 & \frac{1}{3} \end{pmatrix}.$$
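The result can be checked by a direct multiplication; the NumPy sketch below confirms $AA^{-1} = A^{-1}A = I_3$.

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 2., 1.],
              [0., 0., 3.]])
A_inv = np.array([[1., -1.,  1/3],
                  [0., 1/2, -1/6],
                  [0., 0.,   1/3]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```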
Recall that a matrix is in row echelon form if

(1) All nonzero rows are above any rows of all zeroes.

(2) The first nonzero element (leading coefficient) of a nonzero row is always
strictly to the right of the first nonzero element of the row above it.

If, in addition, the condition

(3) every leading coefficient is $1$ and is the only nonzero entry in its column

is also satisfied, we say that the matrix is in reduced row echelon form.
An arbitrary matrix can be put in reduced row echelon form by applying a finite
sequence of elementary row operations. This procedure is called the Gauss-Jordan
elimination procedure.
Existence of an inverse. For a square matrix $A \in \mathcal{M}_{n}(\mathbb{F})$ the following
statements are equivalent.

1. $A^{-1}$ exists ($A$ is nonsingular).

2. $\operatorname{rank}(A) = n$.

3. $A$ is transformed by Gauss-Jordan elimination into $I_n$.

4. $Ax = 0$ implies that $x = 0$.

Systems of linear equations.


Recall that a system of $m$ linear equations in $n$ unknowns can be written as
$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m. \end{cases}$$
Here $x_1, x_2, \dots, x_n$ are the unknowns, $a_{11}, a_{12}, \dots, a_{mn}$ are the coefficients of the
system, and $b_1, b_2, \dots, b_m$ are the constant terms. Observe that a system of linear
equations may be written as $Ax = b$, with $A = (a_{ij})_{i=\overline{1,m},\, j=\overline{1,n}} \in \mathcal{M}_{m,n}(\mathbb{F})$, $x \in \mathcal{M}_{n,1}(\mathbb{F})$
and $b \in \mathcal{M}_{m,1}(\mathbb{F})$. The matrix $A$ is called the coefficient matrix, while the matrix
$[A|b] \in \mathcal{M}_{m,n+1}(\mathbb{F})$,
$$[A|b]_{ij} = \begin{cases} a_{ij} & \text{if } j \neq n+1 \\ b_i & \text{if } j = n+1, \end{cases}$$
is called the augmented matrix of the system.

We say that $x_1, x_2, \dots, x_n$ is a solution of a linear system if $x_1, x_2, \dots, x_n$ satisfy each
equation of the system. A linear system is consistent if it has a solution, and
inconsistent otherwise. According to the Rouché-Capelli theorem, a system of
linear equations is inconsistent if the rank of the augmented matrix is greater than
the rank of the coefficient matrix. If, on the other hand, the ranks of these two
matrices are equal, the system must have at least one solution. The solution is
unique if and only if the rank equals the number of variables. Otherwise the
general solution has $k$ free parameters, where $k$ is the difference between the
number of variables and the rank. Two linear systems are equivalent if and only if
they have the same solution set.

In row reduction, the linear system is represented as an augmented matrix $[A|b]$.
This matrix is then modified using elementary row operations until it reaches
reduced row echelon form. Because these operations are reversible, the augmented
matrix produced always represents a linear system that is equivalent to the
original. In this way one can easily read off the solutions.

Example 1.24. By using the Gauss-Jordan elimination procedure solve the following
system of linear equations:
$$\begin{cases} x_1 - x_2 + 2x_4 = -2 \\ 2x_1 + x_2 - x_3 = 4 \\ x_1 - x_2 - 2x_3 + x_4 = 1 \\ x_2 + x_3 + x_4 = -1. \end{cases}$$
We have
$$[A|b] = \left(\begin{array}{cccc|c} 1 & -1 & 0 & 2 & -2 \\ 2 & 1 & -1 & 0 & 4 \\ 1 & -1 & -2 & 1 & 1 \\ 0 & 1 & 1 & 1 & -1 \end{array}\right) \overset{(-2A_{1*}+A_{2*},\ -A_{1*}+A_{3*})}{\sim} \left(\begin{array}{cccc|c} 1 & -1 & 0 & 2 & -2 \\ 0 & 3 & -1 & -4 & 8 \\ 0 & 0 & -2 & -1 & 3 \\ 0 & 1 & 1 & 1 & -1 \end{array}\right) \overset{(A_{2*} \leftrightarrow A_{4*})}{\sim}$$
$$\left(\begin{array}{cccc|c} 1 & -1 & 0 & 2 & -2 \\ 0 & 1 & 1 & 1 & -1 \\ 0 & 0 & -2 & -1 & 3 \\ 0 & 3 & -1 & -4 & 8 \end{array}\right) \overset{(A_{2*}+A_{1*},\ -3A_{2*}+A_{4*})}{\sim} \left(\begin{array}{cccc|c} 1 & 0 & 1 & 3 & -3 \\ 0 & 1 & 1 & 1 & -1 \\ 0 & 0 & -2 & -1 & 3 \\ 0 & 0 & -4 & -7 & 11 \end{array}\right) \overset{(\frac{1}{2}A_{3*}+A_{1*},\ \frac{1}{2}A_{3*}+A_{2*},\ -2A_{3*}+A_{4*})}{\sim}$$
$$\left(\begin{array}{cccc|c} 1 & 0 & 0 & \frac{5}{2} & -\frac{3}{2} \\ 0 & 1 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & -2 & -1 & 3 \\ 0 & 0 & 0 & -5 & 5 \end{array}\right) \overset{(\frac{1}{2}A_{4*}+A_{1*},\ \frac{1}{10}A_{4*}+A_{2*},\ -\frac{1}{5}A_{4*}+A_{3*})}{\sim} \left(\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & -2 & 0 & 2 \\ 0 & 0 & 0 & -5 & 5 \end{array}\right) \overset{(-\frac{1}{2}A_{3*},\ -\frac{1}{5}A_{4*})}{\sim}$$
$$\left(\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & -1 \\ 0 & 0 & 0 & 1 & -1 \end{array}\right).$$
One can easily read off the solution $x_1 = 1$, $x_2 = 1$, $x_3 = -1$, $x_4 = -1$.
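The same system can be handed to a numerical solver as a check; the sketch below (NumPy, illustration only) recovers the solution read off above.

```python
import numpy as np

A = np.array([[1., -1.,  0., 2.],
              [2.,  1., -1., 0.],
              [1., -1., -2., 1.],
              [0.,  1.,  1., 1.]])
b = np.array([-2., 4., 1., -1.])

x = np.linalg.solve(A, b)
print(x)                       # [ 1.  1. -1. -1.]
print(np.allclose(A @ x, b))   # True
```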
Recall that a system of linear equations is called homogeneous if $b = (0\ 0\ \cdots\ 0)^{\top}$,
that is,
$$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = 0 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = 0 \\ \quad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = 0. \end{cases}$$
A homogeneous system is equivalent to a matrix equation of the form
$$Ax = O.$$
Obviously a homogeneous system is consistent, having the trivial solution
$x_1 = x_2 = \dots = x_n = 0$.
It can easily be seen that a homogeneous linear system has a non-trivial solution if
and only if the number of leading coefficients in echelon form is less than the
number of unknowns; in other words (for a square system), the coefficient matrix is singular.
1.3 Problems

Problem 1.3.1. By using Laplace's theorem compute the following determinants.
$$D_1 = \begin{vmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 1 & 2 & 3 & 4 \\ 0 & 2 & 1 & 2 & 3 \\ 0 & 0 & 2 & 1 & 2 \\ 0 & 0 & 0 & 2 & 1 \end{vmatrix}, \qquad D_2 = \begin{vmatrix} 2 & 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 & 0 \\ 0 & 1 & 2 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2 & 1 & 0 \\ 0 & 0 & 0 & 1 & 2 & 1 \\ 0 & 0 & 0 & 0 & 1 & 2 \end{vmatrix}.$$

Problem 1.3.2. Compute the following determinants.

a) $\begin{vmatrix} 1 & \omega & \omega^2 & \omega^3 \\ \omega & \omega^2 & \omega^3 & 1 \\ \omega^2 & \omega^3 & 1 & \omega \\ \omega^3 & 1 & \omega & \omega^2 \end{vmatrix}$, where $\omega \in \mathbb{C}$ is such that the relation $\omega^2 + \omega + 1 = 0$ holds.

b) $\begin{vmatrix} 1 & 1 & 1 & \dots & 1 \\ 1 & \epsilon & \epsilon^2 & \dots & \epsilon^{n-1} \\ 1 & \epsilon^2 & \epsilon^4 & \dots & \epsilon^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \epsilon^{n-1} & \epsilon^{2(n-1)} & \dots & \epsilon^{(n-1)^2} \end{vmatrix}$, where $\epsilon = \cos\dfrac{2\pi}{n} + i\sin\dfrac{2\pi}{n}$.

Problem 1.3.3. Let $A = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{C})$ and let us denote
$\overline{A} = (\overline{a_{ij}})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{C})$. Show that

a) $\det(\overline{A}) = \overline{\det(A)}$.

b) If $a_{ij} = \overline{a_{ji}}$, $i, j \in \{1, 2, \dots, n\}$, then $\det(A) \in \mathbb{R}$.

Problem 1.3.4. Let $a_1, a_2, \dots, a_n \in \mathbb{C}$. Compute the following determinants.

a) $\begin{vmatrix} 1 & 1 & 1 & \dots & 1 \\ a_1 & a_2 & a_3 & \dots & a_n \\ a_1^2 & a_2^2 & a_3^2 & \dots & a_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_1^{n-1} & a_2^{n-1} & a_3^{n-1} & \dots & a_n^{n-1} \end{vmatrix}$.

b) $\begin{vmatrix} a_1 & a_2 & a_3 & \dots & a_n \\ a_n & a_1 & a_2 & \dots & a_{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_2 & a_3 & a_4 & \dots & a_1 \end{vmatrix}$.

Problem 1.3.5. Compute $A^n$, $n \ge 1$, for the following matrices.

a) $A = \begin{pmatrix} 7 & 4 \\ -9 & -5 \end{pmatrix}$, $A = \begin{pmatrix} a & b \\ b & a \end{pmatrix}$, $a, b \in \mathbb{R}$.

b) $A = \begin{pmatrix} 1 & 3 & 5 \\ 0 & 1 & 3 \\ 0 & 0 & 1 \end{pmatrix}$, $A = \begin{pmatrix} a & b & b \\ b & a & b \\ b & b & a \end{pmatrix}$, $a, b \in \mathbb{R}$.

Problem 1.3.6. Compute the rank of the following matrices by using the
Gauss-Jordan elimination method.

a) $\begin{pmatrix} 0 & 1 & -2 & -3 & -5 \\ 6 & -1 & 1 & 2 & 3 \\ -2 & 4 & 3 & 2 & 1 \\ -3 & 0 & 2 & 1 & 2 \end{pmatrix}$, $\begin{pmatrix} 1 & 2 & -2 & 3 & -2 \\ 3 & -1 & 1 & -3 & 4 \\ -2 & 1 & 0 & 1 & -1 \\ 2 & 0 & 0 & -1 & 0 \end{pmatrix}$.

b) $\begin{pmatrix} 1 & -2 & 3 & 5 & -3 & 6 \\ 0 & 1 & 2 & 3 & 4 & 7 \\ 2 & 1 & 3 & 3 & -2 & 5 \\ 5 & 0 & 9 & 11 & -7 & 16 \\ 2 & 4 & 9 & 12 & 10 & 26 \end{pmatrix}$.

Problem 1.3.7. Find the inverses of the following matrices by using the
Gauss-Jordan elimination method.

a) $A = \begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$, $B = \begin{pmatrix} 2 & -1 & 1 \\ 1 & 2 & 3 \\ 3 & 1 & -1 \end{pmatrix}$.

b) $A = (a_{ij})_{i,j=\overline{1,n}} \in \mathcal{M}_{n}(\mathbb{R})$, where $a_{ij} = \begin{cases} 1 & \text{if } i \neq j \\ 0 & \text{otherwise.} \end{cases}$

Problem 1.3.8. Prove that if $A$ and $B$ are square matrices of the same size, both
invertible, then:

a) $A(I + A)^{-1} = (I + A^{-1})^{-1}$,

b) $(A + BB^{\top})^{-1}B = A^{-1}B(I + B^{\top}A^{-1}B)^{-1}$,

c) $(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B$,

d) $A - A(A + B)^{-1}A = B - B(A + B)^{-1}B$,

e) $A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}$,

f) $(I + AB)^{-1} = I - A(I + BA)^{-1}B$,

g) $(I + AB)^{-1}A = A(I + BA)^{-1}$.


Problem 1.3.9. For every matrix $A \in \mathcal{M}_{m,n}(\mathbb{C})$ prove that the products $A^{*}A$ and
$AA^{*}$ are hermitian matrices.

Problem 1.3.10. For a square matrix $A$ of order $n$ explain why the equation
$$AX - XA = I$$
has no solution.

Problem 1.3.11. Solve the following systems of linear equations by using the
Gauss-Jordan elimination procedure.

a) $$\begin{cases} 2x_1 - 3x_2 + x_3 + 4x_4 = 13 \\ 3x_1 + x_2 - x_3 + 8x_4 = 2 \\ 5x_1 + 3x_2 - 4x_3 + 2x_4 = -12 \\ x_1 + 4x_2 - 2x_3 + 2x_4 = -12. \end{cases}$$

b) $$\begin{cases} x_1 - x_2 + x_3 - x_4 + x_5 - x_6 = 1 \\ x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 1 \\ 2x_1 + x_3 - x_5 = 1 \\ x_2 - 3x_3 + 4x_4 = -4 \\ -x_1 + 3x_2 + 5x_3 - x_6 = -1 \\ x_1 + 2x_2 + 3x_3 + 4x_4 + 5x_5 + 6x_6 = 2. \end{cases}$$

Problem 1.3.12. Find $m, n, p \in \mathbb{R}$ such that the following systems are consistent,
and then solve the systems.

a) $$\begin{cases} 2x - y - z = 0 \\ x + 2y - 3z = 0 \\ 2x + 3y + mz = 0 \\ nx + y + z = 0 \\ x + py + 6z = 0 \\ 2e^x = y + z + 2. \end{cases}$$

b) $$\begin{cases} 2x - y + z = 0 \\ -x + 2y + z = 0 \\ mx - y + 2z = 0 \\ x + ny - 2z = 0 \\ 3x + y + pz = 0 \\ x^2 + y^2 + z^2 = 3. \end{cases}$$
2
Vector Spaces

2.1 Definition of Vector Space and basic properties

Definition 2.1. A vector space $V$ over a field $\mathbb{F}$ (or $\mathbb{F}$-vector space) is a set $V$
with an addition $+$ (internal composition law) such that $(V, +)$ is an abelian group,
and a scalar multiplication $\cdot : \mathbb{F} \times V \to V$, $(\alpha, v) \mapsto \alpha \cdot v = \alpha v$, satisfying the
following properties:

1. $\alpha(v + w) = \alpha v + \alpha w$, $\forall \alpha \in \mathbb{F}$, $\forall v, w \in V$;

2. $(\alpha + \beta)v = \alpha v + \beta v$, $\forall \alpha, \beta \in \mathbb{F}$, $\forall v \in V$;

3. $\alpha(\beta v) = (\alpha\beta)v$, $\forall \alpha, \beta \in \mathbb{F}$, $\forall v \in V$;

4. $1 \cdot v = v$, $\forall v \in V$.

The elements of $V$ are called vectors and the elements of $\mathbb{F}$ are called scalars. The
scalar multiplication depends upon $\mathbb{F}$. For this reason, when we need to be exact
we will say that $V$ is a vector space over $\mathbb{F}$, instead of simply saying that $V$ is a
vector space. Usually a vector space over $\mathbb{R}$ is called a real vector space and a
vector space over $\mathbb{C}$ is called a complex vector space.
Remark. From the definition of a vector space $V$ over $\mathbb{F}$ the following rules of
calculus are easily deduced:

• $\alpha \cdot 0_V = 0_V$;

• $0_{\mathbb{F}} \cdot v = 0_V$;

• $\alpha \cdot v = 0_V \Rightarrow \alpha = 0_{\mathbb{F}}$ or $v = 0_V$.

Examples. We will list a number of simple examples, which appear frequently in
practice.

• $V = \mathbb{C}^n$ has a structure of $\mathbb{R}$-vector space, but it also has a structure of $\mathbb{C}$-vector space.

• $V = \mathbb{F}[X]$, the set of all polynomials with coefficients in $\mathbb{F}$, with the usual
addition and scalar multiplication, is an $\mathbb{F}$-vector space.

• $\mathcal{M}_{m,n}(\mathbb{F})$ with the usual addition and scalar multiplication is an $\mathbb{F}$-vector space.

• $C_{[a,b]}$, the set of all continuous real valued functions defined on the interval
$[a,b]$, with the usual addition and scalar multiplication, is an $\mathbb{R}$-vector space.

2.2 Subspaces of a vector space


It is natural to ask about subsets of a vector space V which are conveniently closed
with respect to the operations in the vector space. For this reason we give the
following:
Definition 2.2. Let $V$ be a vector space over $\mathbb{F}$. A subset $U \subset V$ is called a subspace of
$V$ over $\mathbb{F}$ if it is stable with respect to the composition laws, that is,
$v + u \in U$, $\forall v, u \in U$, and $\alpha v \in U$, $\forall \alpha \in \mathbb{F}$, $\forall v \in U$, and the induced operations verify the
properties from the definition of a vector space over $\mathbb{F}$.

It is easy to prove the following propositions.

Proposition 2.3. Let $V$ be an $\mathbb{F}$-vector space and $U \subset V$ a nonempty subset. $U$ is
a vector subspace of $V$ over $\mathbb{F}$ iff the following conditions are met:

• $v - u \in U$, $\forall v, u \in U$;

• $\alpha v \in U$, $\forall \alpha \in \mathbb{F}$, $\forall v \in U$.

Proof. Obviously, the properties of multiplication with scalars, respectively the
associativity and commutativity of the addition operation, are inherited from $V$. Hence,
it remains to prove that $0 \in U$ and that for all $u \in U$ one has $-u \in U$. Since $\alpha u \in U$ for
every $u \in U$ and $\alpha \in \mathbb{F}$, it follows that $0u = 0 \in U$ and $0 - u = -u \in U$.

Proposition 2.4. Let $V$ be an $\mathbb{F}$-vector space and $U \subset V$ a nonempty subset. $U$ is
a vector subspace of $V$ over $\mathbb{F}$ iff
$$\alpha v + \beta u \in U, \quad \forall \alpha, \beta \in \mathbb{F},\ \forall v, u \in U.$$

Proof. Let $u, v \in U$. For $\alpha = 1$, $\beta = -1$ we have $v - u \in U$. For $\beta = 0$ and $\alpha \in \mathbb{F}$ we
obtain $\alpha v \in U$. The conclusion follows from the previous proposition.

Example 2.5. Let $S = \{(x, y, z) \in \mathbb{R}^3 \mid x + y + z = 0\}$. Show that $S$ is a subspace
of $\mathbb{R}^3$.

To see that $S$ is a subspace we check that for all $\alpha, \beta \in \mathbb{R}$ and all
$v_1 = (x_1, y_1, z_1)$, $v_2 = (x_2, y_2, z_2) \in S$ one has
$$\alpha v_1 + \beta v_2 \in S.$$
Indeed, since $v_1, v_2 \in S$ we have
$$x_1 + y_1 + z_1 = 0, \qquad x_2 + y_2 + z_2 = 0,$$
and by multiplying the equations with $\alpha$ and $\beta$ respectively, and adding the
resulting equations, we obtain
$$(\alpha x_1 + \beta x_2) + (\alpha y_1 + \beta y_2) + (\alpha z_1 + \beta z_2) = 0.$$
But this is nothing else than the fact that
$\alpha v_1 + \beta v_2 = (\alpha x_1 + \beta x_2,\ \alpha y_1 + \beta y_2,\ \alpha z_1 + \beta z_2)$ satisfies the equation that defines $S$.
The next propositions show how one can operate with vector subspaces (to obtain
a new vector subspace) and how one can obtain a subspace from a family of
vectors.

Proposition 2.6. Let $V$ be a vector space and $U, W \subset V$ two vector subspaces.
The sets
$$U \cap W \quad \text{and} \quad U + W = \{u + w \mid u \in U,\ w \in W\}$$
are subspaces of $V$.

Proof. We prove the statements by making use of Proposition 2.4. Let $\alpha, \beta \in \mathbb{F}$
and let $u, v \in U \cap W$. Then $u, v \in U$ and $u, v \in W$. Since $U$ and $W$ are vector subspaces,
it follows that $\alpha v + \beta u \in U$, respectively $\alpha v + \beta u \in W$. Hence $\alpha v + \beta u \in U \cap W$.
Now consider $\alpha, \beta \in \mathbb{F}$ and let $x, y \in U + W$. Then $x = u_1 + w_1$, $y = u_2 + w_2$ for
some vectors $u_1, u_2 \in U$, $w_1, w_2 \in W$. But then
$$\alpha x + \beta y = (\alpha u_1 + \beta u_2) + (\alpha w_1 + \beta w_2) \in U + W.$$

The subspace $U \cap W$ is called the intersection vector subspace, while the subspace
$U + W$ is called the sum vector subspace. Of course, these definitions can also be
given for finite intersections (respectively finite sums) of subspaces.

Proposition 2.7. Let $V$ be a vector space over $\mathbb{F}$ and $S \subset V$ nonempty. The set
$$\langle S \rangle = \left\{ \sum_{i=1}^{n} \alpha_i v_i : \alpha_i \in \mathbb{F} \text{ and } v_i \in S, \text{ for all } i = \overline{1,n},\ n \in \mathbb{N} \right\}$$
is a vector subspace of $V$ over $\mathbb{F}$.

Proof. The proof is straightforward in virtue of Proposition 2.4.

The above vector space is called the vector space generated by $S$, or the linear hull
of the set $S$, and is often denoted by $\operatorname{span}(S)$. It is the smallest subspace of $V$
which contains $S$, in the sense that for every subspace $U$ of $V$ with $S \subset U$ it
follows that $\langle S \rangle \subset U$.
Now we specialize the notion of sum of subspaces to the direct sum of subspaces.

Definition 2.8. Let $V$ be a vector space and $U_i \subset V$ subspaces, $i = \overline{1,n}$. The sum
$U_1 + \dots + U_n$ is called a direct sum if for every $v \in U_1 + \dots + U_n$, from
$v = u_1 + \dots + u_n = w_1 + \dots + w_n$ with $u_i, w_i \in U_i$, $i = \overline{1,n}$, it follows that
$u_i = w_i$ for every $i = \overline{1,n}$.

The direct sum of the subspaces $U_i$, $i = \overline{1,n}$, will be denoted by $U_1 \oplus \dots \oplus U_n$. The
previous definition can be reformulated as follows. Every $u \in U_1 + \dots + U_n$ can be
written uniquely as $u = u_1 + u_2 + \dots + u_n$, where $u_i \in U_i$, $i = \overline{1,n}$.
The next proposition characterizes the direct sum of two subspaces.

Proposition 2.9. Let $V$ be a vector space and $U, W \subset V$ be subspaces. The sum
$U + W$ is a direct sum iff $U \cap W = \{0_V\}$.

Proof. Assume that $U + W$ is a direct sum and that there exists $s \in U \cap W$, $s \neq 0_V$.
But then every $x \in U + W$, $x = u + w$, can also be written as
$x = (u - s) + (w + s) \in U + W$. From the definition of the direct sum we have
$u = u - s$, $w = w + s$, hence $s = 0_V$, contradiction.
Conversely, assume that $U \cap W = \{0_V\}$ and that $U + W$ is not a direct sum. Hence,
there exists $x \in U + W$ such that $x = u_1 + w_1 = u_2 + w_2$ with $u_1 \neq u_2$ or
$w_1 \neq w_2$. But then $u_1 - u_2 = w_2 - w_1$, hence $u_1 - u_2,\ w_2 - w_1 \in U \cap W = \{0_V\}$. It follows
that $u_1 = u_2$ and $w_1 = w_2$, contradiction.

Let $V$ be a vector space over $\mathbb{F}$ and let $U$ be a subspace. On $V$ one can define the
following binary relation $\mathcal{R}_U$: for $u, v \in V$, $u\, \mathcal{R}_U\, v$ iff $u - v \in U$.
It can easily be verified that the relation $\mathcal{R}_U$ is an equivalence relation, that is,

(r) $v\, \mathcal{R}_U\, v$, for all $v \in V$ (reflexivity);

(t) $u\, \mathcal{R}_U\, v$ and $v\, \mathcal{R}_U\, w$ $\Longrightarrow$ $u\, \mathcal{R}_U\, w$, for all $u, v, w \in V$ (transitivity);

(s) $u\, \mathcal{R}_U\, v$ $\Longrightarrow$ $v\, \mathcal{R}_U\, u$, for all $u, v \in V$ (symmetry).

The equivalence class of a vector $v \in V$ is defined as
$$\mathcal{R}_U[v] = \{u \in V : v\, \mathcal{R}_U\, u\} = v + U.$$
The quotient set (or factor set) $V/\mathcal{R}_U$ is denoted by $V/U$ and consists of the set of
all equivalence classes, that is,
$$V/U = \{\mathcal{R}_U[v] : v \in V\}.$$

Theorem 2.10. On the factor set $V/U$ there is a natural structure of a vector
space over $\mathbb{F}$.

Proof. Indeed, let us define the sum of two equivalence classes $\mathcal{R}_U[v]$ and $\mathcal{R}_U[w]$ by
$$\mathcal{R}_U[v] + \mathcal{R}_U[w] = \mathcal{R}_U[v + w]$$
and the multiplication with scalars by
$$\alpha\, \mathcal{R}_U[v] = \mathcal{R}_U[\alpha v].$$
Then it is an easy verification that with these operations $V/U$ becomes an $\mathbb{F}$-vector
space.

The vector space from the previous theorem is called the factor vector space, or
the quotient vector space.

2.3 Basis. Dimension.


Up to now we have tried to explain some properties of vector spaces "in the large".
Namely, we have talked about vector spaces, subspaces, direct sums, factor spaces.
Proposition 2.7 naturally raises some questions related to the structure of a
vector space $V$. Is there a set $S$ which generates $V$ (that is, $\langle S \rangle = V$)? If the
answer is yes, how big should it be? Namely, how big should a "minimal" one
(minimal in the sense of cardinal numbers) be? Is there a finite set which generates
$V$? We will shed some light on these questions in the next part of this chapter.
Why are the answers to such questions important? The reason is quite simple. If
we control (in some way) a minimal system of generators, we control the whole
space.

Definition 2.11. Let $V$ be an $\mathbb{F}$-vector space. A nonempty set $S \subset V$ is called a
system of generators for $V$ if for every $v \in V$ there exist a finite subset
$\{v_1, \dots, v_n\} \subset S$ and scalars $\alpha_1, \dots, \alpha_n \in \mathbb{F}$ such that $v = \alpha_1 v_1 + \dots + \alpha_n v_n$ (it
is also said that $v$ is a linear combination of $v_1, \dots, v_n$ with scalars in $\mathbb{F}$). $V$ is
called dimensionally finite, or finitely generated, if it has a finite system of
generators.
A nonempty set $L \subset V$ is called a linearly independent system of vectors if for every
finite subset $\{v_1, \dots, v_n\} \subset L$ of it, $\alpha_1 v_1 + \dots + \alpha_n v_n = 0$ implies that $\alpha_i = 0$ for all
$i = \overline{1,n}$.
A nonempty set of vectors which is not linearly independent is called linearly
dependent.
A subset $B \subset V$ is called a basis of $V$ if it is both a system of generators and linearly
independent. In this case every vector $v \in V$ can be uniquely written as a linear
combination of vectors from $B$.

Example 2.12. Check whether the vectors $(0, 1, 2)$, $(1, 2, 0)$, $(2, 0, 1)$ are linearly
independent in $\mathbb{R}^3$.

By definition, the three vectors are linearly independent if the implication
$$\alpha_1(0, 1, 2) + \alpha_2(1, 2, 0) + \alpha_3(2, 0, 1) = 0_{\mathbb{R}^3} \ \Rightarrow\ \alpha_1 = \alpha_2 = \alpha_3 = 0$$
holds.
Checking the above implication actually amounts (after writing the linear
combination component-wise) to investigating whether the linear system
$$\begin{cases} \alpha_2 + 2\alpha_3 = 0 \\ \alpha_1 + 2\alpha_2 = 0 \\ 2\alpha_1 + \alpha_3 = 0 \end{cases}$$
has only the trivial solution $(\alpha_1, \alpha_2, \alpha_3) = (0, 0, 0)$ or not. But we can easily
compute the rank of the matrix of the system, which is $3$ due to
$$\begin{vmatrix} 0 & 1 & 2 \\ 1 & 2 & 0 \\ 2 & 0 & 1 \end{vmatrix} = -9 \neq 0,$$
to see that, indeed, the system has only the trivial solution, and hence the three
vectors are linearly independent.
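The same check can be carried out numerically. The NumPy sketch below (illustration only) confirms that the matrix having these vectors as its rows has rank $3$ and determinant $-9$.

```python
import numpy as np

# Rows are the vectors (0,1,2), (1,2,0), (2,0,1).
M = np.array([[0., 1., 2.],
              [1., 2., 0.],
              [2., 0., 1.]])

print(np.linalg.matrix_rank(M))   # 3, so the vectors are linearly independent
print(np.linalg.det(M))           # -9.0 (up to rounding)
```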
We have the following theorem.

Theorem 2.13. (Existence of a basis) Every vector space $V$ has a basis.

We will not prove this general theorem here; instead we will restrict ourselves to finite
dimensional vector spaces.

Theorem 2.14. Let $V \neq \{0\}$ be a finitely generated vector space over $\mathbb{F}$. From
every finite system of generators one can extract a basis.

Proof. Let $S = \{v_1, \dots, v_r\}$ be a finite system of generators. It is clear that there are
nonzero vectors in $S$ (otherwise $V = \{0\}$). Let $0 \neq v_1 \in S$. The set $\{v_1\}$ is linearly
independent (because $\alpha v_1 = 0 \Rightarrow \alpha = 0$, from $v_1 \neq 0$). That means that $S$ contains
linearly independent subsets. Now $\mathcal{P}(S)$ is finite ($S$ being finite), and in a finite
number of steps we can extract a maximal linearly independent system, say
$B = \{v_1, \dots, v_n\}$, $1 \le n \le r$, in the following way:
$$v_2 \in S \setminus \langle v_1 \rangle, \quad v_3 \in S \setminus \langle \{v_1, v_2\} \rangle, \quad \dots, \quad v_n \in S \setminus \langle \{v_1, v_2, \dots, v_{n-1}\} \rangle.$$
We prove that $B$ is a basis for $V$. It is enough to show that $B$ generates $V$,
because $B$ is linearly independent by its choice. Since $S$ is a system of generators, it
is enough to show that every $v_k \in S$, $n \le k \le r$, is a linear combination of vectors
from $B$. Suppose, by contradiction, that $v_k$ is not a linear combination of vectors
from $B$. It follows that the set $B \cup \{v_k\}$ is linearly independent, in contradiction
with the maximality of $B$.
Corollary 2.15. Let $V$ be an $\mathbb{F}$-vector space and $S$ a system of generators for $V$.
Every linearly independent set $L \subset S$ can be completed to a basis of $V$.

Proof. Let $L \subset S$ be a linearly independent set in $S$. If $L$ is maximal, by the
previous theorem it follows that $L$ is a basis. If $L$ is not maximal, there exists a
linearly independent set $L_1$ with $L \subset L_1 \subset S$. If $L_1$ is maximal, it follows that $L_1$ is
a basis. If it is not maximal, we repeat the previous step. Because $S$ is a finite set,
after a finite number of steps we obtain a maximal system of linearly independent vectors $B$,
$L \subset B \subset S$, so $B$ is a basis for $V$, again by the previous theorem.

Theorem 2.16. Let $V$ be a finitely generated vector space over $\mathbb{F}$. Every linearly
independent system of vectors $L$ can be completed to a basis of $V$.

Proof. Let $S$ be a finite system of generators. The union $L \cup S$ is again a
system of generators and $L \subset L \cup S$. We apply the previous corollary and we
obtain that $L$ can be completed to a basis of $V$.

Theorem 2.17. (The cardinal of a basis) Let $V$ be a finitely generated $\mathbb{F}$-vector
space. Every basis of $V$ is finite and has the same number of elements.

Proof. Let $B = \{e_1, \dots, e_n\}$ be a basis of $V$, and let $B_1 = \{e'_1, \dots, e'_m\}$ be a system of
vectors with $m > n$. We show that $B_1$ cannot be a basis for $V$.
Because $B$ is a basis, the vectors $e'_i$ can be uniquely written as $e'_i = \sum_{j=1}^{n} a_{ij} e_j$,
$1 \le i \le m$. If $B_1$ were linearly independent, then $\sum_{i=1}^{m} \lambda_i e'_i = 0$ would imply
$\lambda_i = 0$, $i = \overline{1,m}$; in other words, the homogeneous system $\sum_{i=1}^{m} a_{ij}\lambda_i = 0$, $j = \overline{1,n}$,
would have only the trivial solution, which is impossible, since it has more unknowns
($m$) than equations ($n$).

Definition 2.18. Let $V \neq \{0\}$ be a finitely generated $\mathbb{F}$-vector space. The number
of elements in a basis of $V$ is called the dimension of $V$ (it does not depend on the
choice of the basis) and is denoted by $\dim_{\mathbb{F}} V$. The vector space $V$ is said to be
of finite dimension. For $V = \{0\}$, $\dim_{\mathbb{F}} V = 0$.

Remark 2.19. According to the proof of Theorem 2.17, if $\dim_{\mathbb{F}} V = n$ then any
set of $m > n$ vectors is linearly dependent.

Corollary 2.20. Let $V$ be a vector space over $\mathbb{F}$ of finite dimension, $\dim_{\mathbb{F}} V = n$.

1. Any linearly independent system of $n$ vectors is a basis. Any system of $m$
vectors, $m > n$, is linearly dependent.

2. Any system of generators of $V$ which consists of $n$ vectors is a basis. Any
system of $m$ vectors, $m < n$, is not a system of generators.

Proof. a) Consider $L = \{v_1, \dots, v_n\}$ a linearly independent system of $n$ vectors.
From the completion theorem (Theorem 2.16) it follows that $L$ can be completed
to a basis of $V$. It follows from the cardinality theorem (Theorem 2.17) that
there is no need to complete $L$, so $L$ is a basis.
Let $L_1$ be a system of $m$ vectors, $m > n$. If $L_1$ were linearly independent, it could
be completed to a basis (Theorem 2.16), so $\dim_{\mathbb{F}} V \ge m > n$, contradiction.
b) Let $S = \{v_1, \dots, v_n\}$ be a system of generators which consists of $n$ vectors.
From Theorem 2.14 it follows that a basis can be extracted from its $n$ vectors.
Again from Theorem 2.17 it follows that there is no need to extract any
vector, so $S$ is a basis.
Let $S'$ be a system of generators which consists of $m$ vectors, $m < n$. From
Theorem 2.14 it follows that from $S'$ one can extract a basis, so $\dim_{\mathbb{F}} V \le m < n$,
contradiction.

Remark 2.21. The dimension of a finite dimensional vector space is equal to any
of the following:

• The number of the vectors in a basis.

• The minimal number of vectors in a system of generators.

• The maximal number of vectors in a linearly independent system.

Example 2.22. Let $S = \{(x, y, z) \in \mathbb{R}^3 \mid x + y + z = 0\}$. Give an example of a basis
of $S$.

In Example 2.5 we have shown that $S$ is a subspace of $\mathbb{R}^3$. One can see that, from a
geometric point of view, $S$ is a plane passing through the origin, so $\dim S = 2$.
This also follows from rewriting $S$ as follows:
$$S = \{(x, y, z) \in \mathbb{R}^3 \mid x + y + z = 0\} = \{(x, y, -x - y) \mid x, y \in \mathbb{R}\} = \{x(1, 0, -1) + y(0, 1, -1) \mid x, y \in \mathbb{R}\} = \operatorname{span}\{(1, 0, -1), (0, 1, -1)\}.$$
The vectors $(1, 0, -1)$ and $(0, 1, -1)$ are linearly independent, so they form a basis
of $S$.

Theorem 2.23. Every linearly independent list of vectors in a finite dimensional
vector space can be extended to a basis of the vector space.

Proof. Suppose that $V$ is finite dimensional and $\{v_1, \dots, v_m\}$ is linearly
independent. We want to extend this set to a basis of $V$. $V$ being finite
dimensional, there exists a finite list of vectors $\{w_1, \dots, w_n\}$ which spans $V$.

• If $w_1$ is in the span of $\{v_1, \dots, v_m\}$, let $B = \{v_1, \dots, v_m\}$. If not, let
$B = \{v_1, \dots, v_m, w_1\}$.

• If $w_j$ is in the span of $B$, leave $B$ unchanged. If $w_j$ is not in the span of $B$,
extend $B$ by adjoining $w_j$ to it.

After each step $B$ is still linearly independent. After at most $n$ steps, the span of
$B$ includes all the $w$'s. Thus $B$ also spans $V$, and being linearly independent, it
follows that it is a basis.

As an application we show that every subspace of a finite dimensional vector space


can be paired with another subspace to form a direct sum which is the whole space.

Theorem 2.24. Let $V$ be a finite dimensional vector space and $U$ a subspace of $V$.
There exists a subspace $W$ of $V$ such that $V = U \oplus W$.

Proof. Because $V$ is finite dimensional, so is $U$. Choose a basis $\{u_1, \dots, u_m\}$ of $U$.
This basis of $U$ is a linearly independent list of vectors, so it can be extended to a
basis $\{u_1, \dots, u_m, w_1, \dots, w_n\}$ of $V$. Let $W = \langle w_1, \dots, w_n \rangle$.
We prove that $V = U \oplus W$. For this we will show that
$$V = U + W \quad \text{and} \quad U \cap W = \{0\}.$$
Let $v \in V$; there exist scalars $(a_1, \dots, a_m, b_1, \dots, b_n)$ such that
$$v = a_1 u_1 + \dots + a_m u_m + b_1 w_1 + \dots + b_n w_n,$$
because $\{u_1, \dots, u_m, w_1, \dots, w_n\}$ generates $V$. By denoting
$a_1 u_1 + \dots + a_m u_m = u \in U$ and $b_1 w_1 + \dots + b_n w_n = w \in W$ we have just proven
that $V = U + W$.
Suppose now that $U \cap W \neq \{0\}$, so let $0 \neq v \in U \cap W$. Then there exist scalars
$a_1, \dots, a_m \in \mathbb{F}$ and $b_1, \dots, b_n \in \mathbb{F}$, not all zero, with
$$v = a_1 u_1 + \dots + a_m u_m = b_1 w_1 + \dots + b_n w_n,$$
so
$$a_1 u_1 + \dots + a_m u_m - b_1 w_1 - \dots - b_n w_n = 0.$$
But this contradicts the fact that $\{u_1, \dots, u_m, w_1, \dots, w_n\}$ is a basis of $V$ (hence
linearly independent), so $U \cap W = \{0\}$.

The next theorem relates the dimension of the sum and the intersection of two
subspaces with the dimension of the given subspaces:

Theorem 2.25. If $U$ and $W$ are two subspaces of a finite dimensional vector
space $V$, then
$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W).$$

Proof. Let $\{u_1, \dots, u_m\}$ be a basis of $U \cap W$, so $\dim(U \cap W) = m$. This is a linearly
independent set of vectors in $U$ and in $W$ respectively, so it can be extended to a
basis $\{u_1, \dots, u_m, v_1, \dots, v_i\}$ of $U$ and a basis $\{u_1, \dots, u_m, w_1, \dots, w_j\}$ of $W$, so
$\dim U = m + i$ and $\dim W = m + j$. The proof will be complete if we show that
$\{u_1, \dots, u_m, v_1, \dots, v_i, w_1, \dots, w_j\}$ is a basis for $U + W$, because in this case
$$\dim(U + W) = m + i + j = (m + i) + (m + j) - m = \dim U + \dim W - \dim(U \cap W).$$
The span of $\{u_1, \dots, u_m, v_1, \dots, v_i, w_1, \dots, w_j\}$ contains $U$ and $W$, so it contains
$U + W$. That means that, in order to show that this list is a basis for $U + W$, it is only needed to
show that it is linearly independent. Suppose that
$$a_1 u_1 + \dots + a_m u_m + b_1 v_1 + \dots + b_i v_i + c_1 w_1 + \dots + c_j w_j = 0.$$
We have
$$c_1 w_1 + \dots + c_j w_j = -a_1 u_1 - \dots - a_m u_m - b_1 v_1 - \dots - b_i v_i,$$
which shows that $w = c_1 w_1 + \dots + c_j w_j \in U$. But this vector is also in $W$, so it lies in
$U \cap W$. Because $\{u_1, \dots, u_m\}$ is a basis of $U \cap W$, it follows that there exist
scalars $d_1, \dots, d_m \in \mathbb{F}$ such that
$$c_1 w_1 + \dots + c_j w_j = d_1 u_1 + \dots + d_m u_m.$$
But $\{u_1, \dots, u_m, w_1, \dots, w_j\}$ is a basis of $W$, so it is linearly independent; hence
all the $c_i$'s are zero.
The relation involving the $a$'s, $b$'s and $c$'s becomes
$$a_1 u_1 + \dots + a_m u_m + b_1 v_1 + \dots + b_i v_i = 0,$$
so the $a$'s and $b$'s are zero because the vectors $\{u_1, \dots, u_m, v_1, \dots, v_i\}$ form a basis of $U$.
So all the $a$'s, $b$'s and $c$'s are zero, which means that
$\{u_1, \dots, u_m, v_1, \dots, v_i, w_1, \dots, w_j\}$ is linearly independent, and because it
generates $U + W$, it forms a basis of $U + W$.

The previous theorem shows that the dimension fits well with the direct sum of
spaces. That is, if $U \cap W = \{0\}$, the sum is the direct sum and we have
$$\dim(U \oplus W) = \dim U + \dim W.$$
This is true for the direct sum of any finite number of spaces, as is shown in the
next theorem.

Theorem 2.26. Let $V$ be a finite dimensional space and $U_i$ subspaces of $V$, $i = \overline{1,n}$,
such that
$$V = U_1 + \dots + U_n$$
and
$$\dim V = \dim U_1 + \dots + \dim U_n.$$
Then
$$V = U_1 \oplus \dots \oplus U_n.$$

Proof. One can choose a basis for each $U_i$. By putting all these bases in one list,
we obtain a list of vectors which spans $V$ (by the first property in the theorem),
and it is also a basis, because, by the second property, the number of vectors in this
list is $\dim V$.
Suppose that we have $u_i \in U_i$, $i = \overline{1,n}$, such that
$$0 = u_1 + \dots + u_n.$$
Every $u_i$ is represented as a sum of the vectors of the basis of $U_i$, and because all
these bases form a basis of $V$, it follows that we have a linear combination of the
vectors of a basis of $V$ which is zero. So all the scalars are zero, that is, all $u_i$ are
zero, so the sum is direct.

We end the section with two important observations. Let $V$ be a vector space over
$\mathbb{F}$ (not necessarily finite dimensional). Consider a basis $B = (e_i)_{i \in I}$ of $V$.
We have the first representation theorem:

Theorem 2.27. Let $V$ be a vector space over $\mathbb{F}$ (not necessarily finite dimensional)
and let us consider a basis $B = (e_i)_{i \in I}$. For every $v \in V$, $v \neq 0$, there exist a unique
subset $B_1 \subseteq B$, $B_1 = \{e_{i_1}, \dots, e_{i_k}\}$, and nonzero scalars $a_{i_1}, \dots, a_{i_k} \in \mathbb{F}^{*}$, such
that
$$v = \sum_{j=1}^{k} a_{i_j} e_{i_j} = a_{i_1} e_{i_1} + \dots + a_{i_k} e_{i_k}.$$

Proof. Obviously, by the definition of a basis, $v$ is a finite linear combination of the
elements of the basis. We must show the uniqueness. Assume the contrary, that
$$v = \sum_{i=1}^{n} \alpha_{j_i} e_{j_i} = \sum_{i=1}^{m} \alpha_{k_i} e_{k_i}, \quad \alpha_{j_i} \neq 0,\ i = \overline{1,n}, \quad \alpha_{k_i} \neq 0,\ i = \overline{1,m}.$$
Assume that there exists $e_{k_s} \notin \{e_{j_1}, \dots, e_{j_n}\}$. Then, since
$\sum_{i=1}^{n} \alpha_{j_i} e_{j_i} - \sum_{i=1}^{m} \alpha_{k_i} e_{k_i} = 0$, we obtain that $\alpha_{k_s} = 0$, contradiction. Similarly,
$e_{j_s} \in \{e_{k_1}, \dots, e_{k_m}\}$ for all $s = \overline{1,n}$. Hence $m = n$ and one may assume that
$$v = \sum_{i=1}^{n} \alpha_{j_i} e_{j_i} = \sum_{i=1}^{n} \alpha_{k_i} e_{k_i}, \quad \alpha_{j_i} \neq 0,\ \alpha_{k_i} \neq 0,\ i = \overline{1,n},$$
with $e_{j_i} = e_{k_i}$, $i = \overline{1,n}$. Using the relation $\sum_{i=1}^{n} \alpha_{j_i} e_{j_i} - \sum_{i=1}^{n} \alpha_{k_i} e_{k_i} = 0$ again, we obtain that
$\alpha_{j_i} = \alpha_{k_i}$, $i \in \{1, \dots, n\}$, contradiction.

Example 2.28. Show that $B = \{(1, 1), (1, -1)\}$ is a basis of $\mathbb{R}^2$, and find the
representation with respect to $B$ of the vector $v = (3, -1)$.

Our aim is to find the representation of $v = (3, -1)$ with respect to $B$, that is, to
find two scalars $x, y \in \mathbb{R}$ such that
$$v = x(1, 1) + y(1, -1).$$
Expressing the above equality component-wise gives a system with two unknowns,
$x$ and $y$:
$$\begin{cases} x + y = 3 \\ x - y = -1. \end{cases}$$
Its unique solution, and the answer to our problem, is $x = 1$, $y = 2$.
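Finding coordinates with respect to a basis is exactly a linear solve: put the basis vectors as the columns of a matrix and solve for the coefficient vector. A NumPy sketch (ours, for illustration):

```python
import numpy as np

# Columns are the basis vectors (1,1) and (1,-1) of B.
P = np.array([[1., 1.],
              [1., -1.]])
v = np.array([3., -1.])

coords = np.linalg.solve(P, v)   # solves x*(1,1) + y*(1,-1) = (3,-1)
print(coords)                    # [1. 2.], i.e. x = 1 and y = 2
```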

2.4 Local computations


In this section we deal with some computations related to finite dimensional vector
spaces.
Let $V$ be a finite dimensional $\mathbb{F}$-vector space with a basis $B = \{e_1, \dots, e_n\}$. Any
vector $v \in V$ can be uniquely represented as
$$v = \sum_{i=1}^{n} a_i e_i = a_1 e_1 + \dots + a_n e_n.$$
The scalars $(a_1, \dots, a_n)$ are called the coordinates of the vector $v$ in the basis $B$. It
is obvious that if we have another basis $B'$, the coordinates of the same vector in
the new basis change. How can we measure this change? Let us start with a
situation that is a bit more general.

Theorem 2.29. Let V be a finite dimensional vector space over F with a basis
B “ te1 , . . . , en u. Consider the vectors S “ te11 , . . . , e1m u Ď V :

e11 “ a11 e1 ` ¨ ¨ ¨ ` a1n en
...
e1m “ am1 e1 ` ¨ ¨ ¨ ` amn en

Denote by A “ paij q i“1,m, j“1,n the matrix formed by the coefficients in the above
equations. The dimension of the subspace xSy is equal to the rank of the matrix A,
i.e. dimxSy “ rank A.

Proof. Let us denote by Xi “ pai1 , . . . , ain q P Fn , i “ 1, m the coordinates of
e1i , i “ 1, m in B. Then, the linear combination řmi“1 λi e1i has its coordinates
řmi“1 λi Xi in B. Hence the set of all coordinate vectors of elements of xSy equals
the subspace of Fn generated by tX1 , ..., Xm u. Moreover e11 , . . . , e1m will be linearly
independent if and only if X1 , . . . , Xm are. Obviously, the dimension of the
subspace xX1 , ..., Xm y of Fn is equal to the rank of the matrix whose rows are
X1 , . . . , Xm , and this matrix is exactly A.
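A small numerical illustration of the theorem (a NumPy sketch, not part of the text): the dimension of the span is obtained as the rank of the coordinate matrix.

    import numpy as np

    # Rows are the coordinate vectors X_i of e'_1, ..., e'_m in the basis B.
    A = np.array([[1, 2, 0],
                  [2, 4, 1],
                  [3, 6, 1]])

    # dim <S> = rank(A); here the third row is the sum of the first two,
    # so the span is 2-dimensional.
    print(np.linalg.matrix_rank(A))  # 2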

Consider now the case m “ n in the above discussion. The set S “ te11 , . . . , e1n u
is a basis iff rank A “ n. We have now

e11 “ a11 e1 ` ¨ ¨ ¨ ` a1n en
e12 “ a21 e1 ` ¨ ¨ ¨ ` a2n en
...
e1n “ an1 e1 ` ¨ ¨ ¨ ` ann en ,

representing the relations that change from the basis B to the new basis B1 “ S.
The matrix AJ is denoted by P pe,e1 q ; its rows are pa11 , a21 , . . . , an1 q,
pa12 , a22 , . . . , an2 q, . . . , pa1n , a2n , . . . , ann q.
The columns of this matrix are given by the coordinates of the vectors
of the new basis e1 with respect to the old basis e!
Remarks

• In the matrix notation we have pe1 q1,n “ A peq1,n , where peq1,n and pe1 q1,n denote
the columns formed by the vectors e1 , . . . , en and e11 , . . . , e1n respectively; equivalently,
pe1 q1,n “ pP pe,e1 q qJ peq1,n .

• Consider the change of the basis from B to B1 with the matrix P pe,e1 q and
the change of the basis from B1 to B2 with the matrix P pe1 ,e2 q . We can think
of the ”composition” of these two changes, i.e. the change of the basis from
B to B2 with the matrix P pe,e2 q . It is easy to see that one has

P pe,e1 q P pe1 ,e2 q “ P pe,e2 q .

• If in the above discussion we consider B2 “ B one has

P pe,e1 q P pe1 ,eq “ In ,

that is

pP pe1 ,eq q´1 “ P pe,e1 q .

At this step we try to answer the next question, which is important in
applications. If we have two bases, a vector can be represented in both of them.
What is the relation between the coordinates in the two bases?
Let us fix the setting first. Consider the vector space V , with two bases
B “ te1 , . . . , en u and B1 “ te11 , . . . , e1n u and P pe,e1 q the matrix of the change of basis.
Let v P V . We have

v “ a1 e1 ` ¨ ¨ ¨ ` an en “ b1 e11 ` ¨ ¨ ¨ ` bn e1n ,

where pa1 , . . . , an q and pb1 , . . . , bn q are the coordinates of the same vector in the two
bases. We can write

pvq “ pa1 a2 . . . an q ¨ peq1n “ pb1 b2 . . . bn q ¨ pe1 q1n .

Denote by pvqe “ pa1 , a2 , . . . , an qJ and pvqe1 “ pb1 , b2 , . . . , bn qJ the column matrices
of the coordinates of v in the two bases.
Denote further by peq1n “ pe1 , e2 , . . . , en qJ the column matrix of the basis B and by
pe1 q1n “ pe11 , e12 , . . . , e1n qJ the column matrix of the basis B1 . We have
v “ pvqJe peq1n “ pvqJe1 pe1 q1n “ pvqJe1 pP pe,e1 q qJ peq1n .

Because v is uniquely represented in a basis it follows that

pvqJe1 pP pe,e1 q qJ “ pvqJe ,

or

pvqe1 “ pP pe,e1 q q´1 pvqe “ P pe1 ,eq pvqe .

Hence,

pvqe “ P pe,e1 q pvqe1 .
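The two formulas can be tried on a small example; the following sketch (Python with NumPy, assumed available and not part of the text) uses the basis of Example 2.28:

    import numpy as np

    # Change of basis in R^2: old basis e = standard, new basis
    # e' = {(1, 1), (1, -1)}.  The columns of P hold the coordinates of the
    # new basis vectors in the old basis, as in the definition of P^(e,e').
    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])

    v_e = np.array([3.0, -1.0])          # coordinates in the old basis
    v_e_new = np.linalg.inv(P) @ v_e     # (v)_{e'} = (P^(e,e'))^{-1} (v)_e
    print(v_e_new)                       # [1. 2.]
    print(P @ v_e_new)                   # back to [3. -1.]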

2.5 Problems
Problem 2.5.1. Show that for spanpv1 , . . . , vn q “ V one has
spanpv1 ´ v2 , v2 ´ v3 , . . . , vn´1 ´ vn , vn q “ V .

Problem 2.5.2. Find a basis for the subspace generated by the given vectors in
M3 pRq.

¨ 1 2 0 ˛      ¨ ´1 2 3 ˛      ¨ 0 1 2 ˛
˚ 2 4 1 ‹ ,    ˚ 2 1 ´1 ‹ ,    ˚ ´2 2 ´1 ‹ .
˝ 3 1 ´1 ‚     ˝ 0 1 1 ‚       ˝ ´1 2 1 ‚

Problem 2.5.3. Let V be a finite dimensional vector space dim V “ n. Show that
there exist one dimensional subspaces U1 , . . . , Un , such that

V “ U1 ‘ ¨ ¨ ¨ ‘ Un .

Problem 2.5.4. Find three distinct subspaces U, V, W of R2 such that

R2 “ U ‘ V “ V ‘ W “ W ‘ U.

Problem 2.5.5. Let U, W be subspaces of R8 , with dim U “ 3, dim W “ 5 and


dim U ` W “ 8. Show that U X W “ t0u.

Problem 2.5.6. Let U, W be subspaces of R9 with dim U “ dim W “ 5. Show


that U X W ‰ t0u.

Problem 2.5.7. Let U and W be subspaces of a vector space V and suppose that
each vector v P V has a unique expression of the form v “ u ` w where u belongs
to U and w to W. Prove that
V “ U ‘ W.

Problem 2.5.8. In Cra, bs find the dimension of the subspaces generated by the
following sets of vectors:

a) t1, cos 2x, cos2 xu,

b) tea1 x , . . . , ean x u, where ai ‰ aj for i ‰ j

Problem 2.5.9. Find the dimension and a basis in the intersection and sum of
the following subspaces:

• U “ spantp2, 3, ´1q, p1, 2, 2q, p1, 1, ´3qu,


V “ spantp1, 2, 1q, p1, 1, ´1q, p1, 3, 3qu.

• U “ spantp1, 1, 2, ´1q, p0, ´1, ´1, 2q, p´1, 2, 1, ´3qu,


V “ spantp2, 1, 0, 1q, p´2, ´1, ´1, ´1q, p3, 0, 2, 3qu.

Problem 2.5.10. Let U, V, W be subspaces of some vector space and suppose that
U Ď W. Prove that
pU ` V q X W “ U ` pV X W q.

Problem 2.5.11. In R4 we consider the following subspace


V “ spantp2, 1, 0, 1q, p´2, ´1, ´1, ´1q, p3, 0, 2, 3qu. Find a subspace W of R4 such
that R4 “ V ‘ W .

Problem 2.5.12. Let V, W be two vector spaces over the same field F. Find the
dimension and a basis of V ˆ W.

Problem 2.5.13. Find a basis in the space of symmetric, respectively


skew-symmetric matrices of dimension n.

Problem 2.5.14. Let


V “ tpx1 , . . . , xn q P Rn | x1 ` x2 ` . . . ` xn “ 0, x1 ` xn “ 0u. Find a basis in V .

Problem 2.5.15. Let Mn pRq be the set of the real square matrices of order n,
and An , respectively Sn the set of symmetric, respectively skew-symmetric
matrices of order n. Show that Mn pRq “ An ‘ Sn .

Problem 2.5.16. Let us denote by Rn rXs the set of all polynomials having degree
at most n with real coefficients. Obviously Rn rXs is a subspace of RrXs with the
induced operations. Find the dimension of the quotient space Rn rXs{U where U is
the subspace of all real constant polynomials.

Problem 2.5.17. Let V be a finite-dimensional vector space and let U and W be


two subspaces of V. Prove that

dim ppU ` W q{W q “ dim pU{pU X W qq.

Problem 2.5.18. Let us consider the matrix


¨ ˛
1 3 5 ´3 6
˚ ‹
˚ ‹
˚1 2 3 4 7 ‹
˚ ‹
˚ ‹
M “ ˚1 3 3 ´2 5 ‹ .
˚ ‹
˚ ‹
˚0 9 11 ´7 16‹
˝ ‚
4 9 12 10 26

Let U and W be the subspaces of R5 generated by rows 1, 2 and 5 of M, and by


rows 3 and 4 of M respectively. Find the dimensions of U ` W and U X W.

Problem 2.5.19. Find bases for the sum and intersection of the subspaces U and
W of R4 rXs generated by the respective sets of polynomials
t1 ` 2x ` x3 , 1 ´ x ´ x2 u and tx ` x2 ´ 3x3 , 2 ` 2x ´ 2x3 u.
3 Linear maps between vector spaces

Up to now we met with vector spaces. It is natural to ask about maps between
them, which are compatible with the linear structure of a vector space. These are
called linear maps, special maps which also transport the linear structure. They
are also called morphisms of vector spaces or linear transformations.

Definition 3.1. Let V and W be two vector spaces over the same field F. A linear
map from V to W is a map f : V Ñ W which has the property that
f pαv ` βuq “ αf pvq ` βf puq for all v, u P V and α, β P F.

The class of linear maps between V and W will be denoted by LF pV, W q or


HomF pV, W q.
From the definition it follows that f p0V q “ 0W and
f přni“1 αi vi q “ řni“1 αi f pvi q, @ αi P F, @vi P V, i “ 1, n.

We shall define now two important notions related to a linear map, the kernel and
the image.
Consider the sets:

ker f “ f ´1 p0W q “ tv P V |f pvq “ 0W u, and


imf “ f pV q “ tw P W |D v P V, f pvq “ wu.

Definition 3.2. The sets ker f and f pV q are called the kernel (or the null space),
respectively the image of f .

An easy exercise will prove the following:

Proposition 3.3. The kernel and the image of a linear map f : V Ñ W are
subspaces of V and W respectively.

Example 3.4. Let T : R2 Ñ R2 be given by px, yq ÞÑ px ` y, x ` yq. Find ker T


and T pR2 q.

By definition

ker T “ tpx, yq P R2 | T px, yq “ p0, 0qu
      “ tpx, yq P R2 | px ` y, x ` yq “ p0, 0qu
      “ tpx, yq P R2 | x ` y “ 0u .

Geometrically, this is the straight line with equation y “ ´x. Clearly


ker T “ span tp1, ´1qu and dim ker T “ 1.
From the way T is defined we see that all vectors in the image T pR2 q of T , have
both components equal to each other, so
T pR2 q “ tpα, αq | α P Ru “ span tp1, 1qu .
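A quick numerical check of this example (a NumPy sketch, assuming the matrix of T in the standard basis):

    import numpy as np

    # Matrix of T(x, y) = (x + y, x + y) in the standard basis.
    M = np.array([[1.0, 1.0],
                  [1.0, 1.0]])

    # rank M = dim im T, and dim ker T = 2 - rank M.
    print(np.linalg.matrix_rank(M))   # 1, so dim im T = 1 and dim ker T = 1
    print(M @ np.array([1.0, -1.0]))  # [0. 0.]: (1, -1) spans ker T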

For the finite dimensional case the dimension of ker and im of a linear map
between vector spaces are related by the following:

Theorem 3.5. Let f : V Ñ W be a linear map between vector spaces V and W


over the field F, V being finite dimensional.

dim V “ dim ker f ` dim imf.



Proof. Let n and m be the dimensions of V and ker f , m ď n. Consider a basis


te1 , . . . , em u for ker f . The independent system of vectors e1 , . . . , em can be
completed to a basis te1 , . . . , em , em`1 , . . . , en u of V .
Our aim is to prove that the vectors f pem`1 q, . . . , f pen q form a basis for f pV q. It is
sufficient to prove that the elements f pem`1 q, . . . , f pen q are linearly independent
since they generate f pV q.
Suppose the contrary, that f pem`1 q, . . . , f pen q are not linearly independent. There
exists αm`1 , . . . , αn P F such that

řnk“m`1 αk f pek q “ 0W ,

and by the linearity of f ,

f přnk“m`1 αk ek q “ 0W .

Hence

v 1 “ řnk“m`1 αk ek P ker f

and v 1 can be written in terms of e1 , . . . , em . This is only compatible with the fact
that e1 , . . . , en form a basis of V if αm`1 “ ¨ ¨ ¨ “ αn “ 0, which implies the linear
independence of the vectors f pem`1 q, . . . , f pen q.

Theorem 3.6. Let f : V Ñ W be a linear mapping between vector spaces V and


W , and dim V “ dim W ă 8. Then, f pV q “ W iff ker f “ t0V u. In particular f is
onto iff f is one to one.

Proof. Suppose that ker f “ t0V u. Since f pV q is a subspace of W it follows that


dim V “ dim f pV q ď dim W , which forces dim f pV q “ dim W , and this implies
that f pV q “ W .

The fact that f pV q “ W implies that ker f “ t0V u follows by reversing the
arguments.

Proposition 3.7. Let f : V Ñ W be a linear map between vector spaces V, W over


F. If f is a bijection, it follows that its inverse f ´1 : W Ñ V is a linear map.

Proof. Because f is a bijection @w1 , w2 P W , D! v1 , v2 P V , such that


f pvi q “ wi , i “ 1, 2. Because f is linear, it follows that

α1 w1 ` α2 w2 “ α1 f pv1 q ` α2 f pv2 q “ f pα1 v1 ` α2 v2 q.

It follows that α1 v1 ` α2 v2 “ f ´1 pα1 w1 ` α2 w2 q, so

f ´1 pα1 w1 ` α2 w2 q “ α1 f ´1 pw1 q ` α2 f ´1 pw2 q.

Definition 3.8. A linear bijective map f : V Ñ W between vector spaces V, W


over F is called an isomorphism of the vector space V over W , or isomorphism
between the vector spaces V and W .
A vector space V is called isomorphic with a vector space W if there exists an
isomorphism f : V Ñ W . The fact that the vector spaces V and W are isomorphic
will be denoted by V » W .

Example 3.9. Let V be a F vector space and V1 , V2 two supplementary spaces,


that is V “ V1 ‘ V2 . It follows that @v P V we have the unique decomposition
v “ v1 ` v2 , with v1 P V1 and v2 P V2 . The map

p : V Ñ V1 , ppvq “ v1 , @v P V

is called the projection of V on V1 , parallel with V2 .



The map s : V Ñ V, spvq “ v1 ´ v2 , @v P V is called the symmetry of V with


respect to V1 , parallel with V2 .
It is easy to see that for v P V1 , v2 “ 0, so ppvq “ v and spvq “ v, and for v P V2 ,
v1 “ 0, so ppvq “ 0 and spvq “ ´v.

3.1 Properties of LpV, W q


In this section we will prove some properties of linear maps and of LpV, W q.

Proposition 3.10. Let f : V Ñ W be a linear map between the linear spaces V, W


over F.

1. If V1 Ď V is a subspace of V , then f pV1 q is a subspace of W .

2. If W1 Ď W is a subspace of W , then f ´1 pW1 q is a subspace of V .

Proof. 1. Let w1 , w2 be in f pV1 q. It follows that there exist v1 , v2 P V1 such that


f pvi q “ wi , i “ 1, 2. Then, for every α, β P F we have

αw1 ` βw2 “ αf pv1 q ` βf pv2 q “ f pαv1 ` βv2 q P f pV1 q.

2. For v1 , v2 P f ´1 pW1 q we have that f pv1 q, f pv2 q P W1 , so


@ α, β P F, αf pv1 q ` βf pv2 q P W1 . Because f is linear
αf pv1 q ` βf pv2 q “ f pαv1 ` βv2 q ñ αv1 ` βv2 P f ´1 pW1 q.

The next proposition shows that the kernel and the image of a linear map
characterize the injectivity and surjectivity properties of the map.

Proposition 3.11. Let f : V Ñ W be a linear map between the linear spaces V, W .



1. f is one to one (injective) ðñ ker f “ t0u.

2. f is onto (surjective) ðñ f pV q “ W .

3. f is bijective ðñ ker f “ t0u and f pV q “ W .

Proof. 1 Suppose that f is one to one. Because f p0V q “ 0W it follows that


ker f “ t0V u Ă V . For the converse, suppose that ker f “ t0V u. Let v1 , v2 P V with
f pv1 q “ f pv2 q. It follows that f pv1 ´ v2 q “ 0 and because ker f “ t0u we have that
v1 “ v2 . The claims 2. and 3. can be proved in the same manner.

Next we shall study how special maps act on special systems of vectors.

Proposition 3.12. Let f : V Ñ W be a linear map between the linear spaces V, W


and S “ tvi |i P Iu a system of vectors in V .

1. If f is one to one and S is linear independent, then f pSq is linear


independent.

2. If f is onto and S is a system of generators, then f pSq is a system of


generators.

3. If f is bijective and S is a basis for V , then f pSq is a basis for W .

Proof. 1. Let tw1 , . . . , wn u be a finite subsystem from f pSq, and αi P F with


řni“1 αi wi “ 0. There exist vectors vi P V such that f pvi q “ wi , for all
i P t1, . . . , nu. Then řni“1 αi wi “ řni“1 αi f pvi q “ f přni“1 αi vi q “ 0, so
řni“1 αi vi “ 0. Because S is linearly independent it follows that αi “ 0 for all

i “ 1, n, so f pSq is linearly independent.


2. Let w P W . There exists v P V with f pvq “ w. Because S is a system of
generators, there exists a finite family of vectors in S, vi , and the scalars

řn
αi P F, i “ 1, n such that i“1 αi vi “ v. It follows that
ÿ
n ÿ
n
w “ f pvq “ f p αi vi q “ αi f pvi q.
i“1 i“1

3. Because f is bijective and S is a basis for V , it follows that both 1. and 2. hold,
that is f pSq is a basis for W .

Definition 3.13. Let f, g : V Ñ W be linear maps between the linear spaces V


and W over F, and α P F. We define

1. f ` g : V Ñ W by pf ` gqpvq “ f pvq ` gpvq, @ v P V , the sum of the linear


maps, and

2. αf : V Ñ W by pαf qpvq “ αf pvq, @ v P V, @ α P F, the scalar multiplication


of a linear map.

Proposition 3.14. With the operations defined above LpV, W q becomes a vector
space over F.

The proof of this statement is an easy verification.


In the next part we specialize in the study of the linear maps, namely we consider
the case V “ W .

Definition 3.15. The set of endomorphisms of a linear space V is:

EndpV q “ tf : V Ñ V | f linear u.

By the results from the previous section, EndpV q is an F linear space.


Let W, U be two other linear spaces over the same field F, f P LpV, W q and
g P LpW, Uq. We define the product (composition) of f and g by
h “ g ˝ f : V Ñ U,
hpvq “ gpf pvqq, @ v P V.

Proposition 3.16. The product of two linear maps is a linear map.


Moreover if f and g as above are isomorphisms, then the product h “ g ˝ f is an
isomorphism.

Proof. We check that for all v1 , v2 P V and all α, β P F

hpαv1 ` βv2 q “ gpf pαv1 ` βv2 qq

“ gpαf pv1q ` βf pv2 qq

“ gpαf pv1qq ` gpβf pv2qq

“ αhpv1 q ` βhpv2 q.

The last statement follows from the fact that h is a linear bijection.

It can be shown that the composition is distributive with respect to the sum of
linear maps, so EndpV q becomes a unitary ring.
It can easily be realized that:

Proposition 3.17. The isomorphism between two linear spaces is an equivalence


relation.

Definition 3.18. Let V be an F linear space. The set

AutpV q “ tf P EndpV q| f isomorphism u

is called the set of automorphisms of the vector space V .

Proposition 3.19. AutpV q is a group with respect to the composition of linear


maps.

Proof. It is only needed to list the properties.

1. the identity map IV is the unit element.



2. g ˝ f is an automorphism for f and g automorphisms.

3. the inverse of an automorphism is an automorphism.

The group of automorphisms of a linear space is called the general linear group
and is denoted by GLpV q.

Example 3.20. • Projectors endomorphisms. An endomorphism


p : V Ñ V is called projector of the linear space V iff

p2 “ p,

where p2 “ p ˝ p. If p is a projector, then:

1. ker p ‘ ppV q “ V

2. the endomorphism q “ IV ´ p is again a projector.

Denote v1 “ ppvq and v2 “ v ´ v1 , it follows that


ppv2 q “ ppvq ´ ppv1 q “ ppvq ´ p2 pvq “ 0V , so v2 P ker p. Hence

v “ v1 ` v2 , @ v P V,

where v1 P ppV q and v2 P ker p and, moreover, the decomposition is unique, so we have


the direct sum decomposition ker p ‘ ppV q “ V . For the last assertion simply
compute q 2 “ pIV ´ pq ˝ pIV ´ pq “ IV ´ p ´ p ` p2 “ IV ´ p “ q, because p is
a projector. It can be seen that qpV q “ ker p and ker q “ ppV q. Denote by
V1 “ ppV q and V2 “ ker p. It follows that p is the projection of V on V1 ,
parallel with V2 , and q is the projection of V on V2 parallel with V1 .

• Involutive automorphisms. An operator s : V Ñ V is called involutive iff


s2 “ IV . From the definition and the previous example one has:

1. an involutive operator is an automorphism

2. for every involutive automorphism, the linear operators:

ps : V Ñ V, ps pvq “ p1{2qpv ` spvqq
qs : V Ñ V, qs pvq “ p1{2qpv ´ spvqq

are projectors and satisfy the relation ps ` qs “ 1V .

3. reciprocally, for a projector p : V Ñ V , the operator sp : V Ñ V , given


by sp pvq “ 2ppvq ´ v is an involutive automorphism.

From the previous facts it follows that ps ˝ s “ s ˝ ps “ ps , sp ˝ p “ p ˝ sp “ p. An


involutive automorphism s is a symmetry of V with respect to the subspace ps pV q,
parallel with the subspace ker ps .
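A concrete sketch of these notions (NumPy assumed; the particular subspaces below are chosen only for illustration): take V “ R2 , V1 “ spantp1, 0qu and V2 “ spantp1, 1qu; the projection on V1 parallel with V2 and the associated symmetry then satisfy p2 “ p and s2 “ IV .

    import numpy as np

    # Projection of R^2 onto V1 = span{(1,0)} parallel with V2 = span{(1,1)}:
    # p(x, y) = (x - y, 0).  The associated symmetry is s = 2p - I.
    p = np.array([[1.0, -1.0],
                  [0.0,  0.0]])
    s = 2 * p - np.eye(2)

    print(np.allclose(p @ p, p))           # True: p is a projector
    print(np.allclose(s @ s, np.eye(2)))   # True: s is involutive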

Example 3.21. Let V be a vector space and f : V Ñ V a linear map such that
ker f “ imf . Determine the set imf 2 , where f 2 denotes the composition of f with
itself, f 2 “ f ˝ f .

We start by writing down explicitly

imf 2 “ imf ˝ f

“ f ˝ f pV q

“ f pf pV qq .

But, f pV q “ imf “ ker f is the set of all vectors which are mapped by f to zero, so

imf 2 “ f pker f q

“ 0.

3.2 Local form of a linear map


Let V and W be two vector spaces over the same field F, dim V “ m, dim W “ n,
and e “ te1 , . . . , em u and f “ tf1 , . . . , fn u be bases in V and W respectively. A
linear map T P LpV, W q is uniquely determined by its values on the basis e.
We have

T pe1 q “ a11 f1 ` ¨ ¨ ¨ ` a1n fn ,

T pe2 q “ a21 f1 ` ¨ ¨ ¨ ` a2n fn ,


..
.

T pem q “ am1 f1 ` ¨ ¨ ¨ ` amn fn ,

or, in the matrix notation


pT pe1 q, T pe2 q, . . . , T pem qqJ “ A pf1 , f2 , . . . , fn qJ , where A “ paij q i“1,m, j“1,n .
The transpose of A is denoted by MT pf,eq and is called the matrix of the linear
map T in the bases e and f .
From the definition of the matrix of a linear map it follows that:

Theorem 3.22. • For T1 , T2 P LpV, W q and a1 , a2 P F

Ma1 T1 `a2 T2 “ a1 MT1 ` a2 MT2

• The vector space LpV, W q is isomorphic to Mm,n pFq by the map


T P LpV, W q ÞÑ MT P Mm,n pFq.

• Particularly EndpV q is isomorphic to Mn pFq.



Now we want to see how the image of a vector by a linear map can be expressed.
Let v P V , v “ řmi“1 vi ei , or in the matrix notation v “ pvqJe peq1m , where, as usual,

pvqe “ pv1 , v2 , . . . , vm qJ and peq1m “ pe1 , e2 , . . . , em qJ .
Now denote T pvq “ w “ řnj“1 wj fj P W . We have

T pvq “ pwqJf pf q1n .

T being linear, we have T pvq “ řmi“1 vi T pei q, or, again in matrix notation:

T pvq “ pvqJe pT peqq1m .

From the definition of MT pf,eq it follows that

pT peqq1m “ pMT pf,eq qJ pf q1n .

So finally we have

pwqJf pf q1n “ pvqJe pMT pf,eq qJ pf q1n .

By the uniqueness of the coordinates of a vector in a basis it follows that

pwqJf “ pvqJe pMT pf,eq qJ .

Taking the transpose of the above relation we get

pwqf “ MT pf,eq pvqe .

Example 3.23. Let T : R3 Ñ R3 be the operator whose matrix in the canonical basis is

    ¨ ´3 0 2 ˛
T “ ˚  1 1 0 ‹ .
    ˝ ´2 1 2 ‚

Find a basis in ker T and find the dimension of T pR3 q .

Observe that the kernel of T ,

ker T “ tpx, y, zq P R3 | T px, y, zqJ “ p0, 0, 0qJ u,

is the set of solutions of the linear homogeneous system


´3x ` 2z “ 0
x ` y “ 0                                                            (3.1)
´2x ` y ` 2z “ 0,

the matrix of the system being exactly T . To solve this system we need to
compute the rank of the matrix T . We get that
ˇ ˇ
ˇ ´3 0 2 ˇ
ˇ  1 1 0 ˇ “ 0
ˇ ´2 1 2 ˇ

and that rank T “ 2. To solve the system we choose x “ α as a parameter and
express y and z in terms of x from the first two equations to get

x “ α, y “ ´x, z “ p3{2qx.

The set of solutions is

tpα, ´α, p3{2qαq | α P Ru “ span tp1, ´1, 3{2qu

and hence, a basis in ker T consists only of p1, ´1, 3{2q, dim ker T “ 1.
Based on the dimension formula

dim ker T ` dim T pR3 q “ dim R3

we infer that dim T pR3 q “ 2.
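The computations of this example can be verified numerically (a NumPy sketch, not part of the text):

    import numpy as np

    T = np.array([[-3.0, 0.0, 2.0],
                  [ 1.0, 1.0, 0.0],
                  [-2.0, 1.0, 2.0]])

    print(np.linalg.matrix_rank(T))        # 2, so dim T(R^3) = 2 and dim ker T = 1
    print(T @ np.array([1.0, -1.0, 1.5]))  # [0. 0. 0.]: (1, -1, 3/2) spans ker T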

Proposition 3.24. Let V, W, U be vector spaces over F, of dimensions m, n, p, and


T P LpV, W q, S P LpW, Uq, with matrices MT and MS , in some basis. Consider the
composition map S ˝ T : V Ñ U with the matrix MS˝T . Then

MS˝T “ MS MT .

Proof. Indeed, one can easily see that for v P V we have pT pvqq “ MT pvq where
pT pvqq, respectively pvq stand for the coordinate of T pvq, respectively v in the
appropriate bases. Similarly, for w P W one has pSpwqq “ MS pwq.
Hence, pS ˝ T pvqq “ pSpT pvqqq “ MS pT pvqq “ MS MT pvq, or, equivalently

MS˝T “ MS MT .
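A small numerical illustration of this proposition (NumPy sketch; the matrices below are chosen only for the example):

    import numpy as np

    # Two maps given by matrices in fixed bases; composing the maps
    # corresponds to multiplying their matrices.
    M_T = np.array([[1.0, 2.0],
                    [0.0, 1.0]])
    M_S = np.array([[0.0, 1.0],
                    [1.0, 1.0]])

    v = np.array([1.0, 3.0])
    print(M_S @ (M_T @ v))    # apply T, then S
    print((M_S @ M_T) @ v)    # same result via the product matrix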

Let V and W be vector spaces and T P LpV, W q be a linear map. In V and W we


consider the bases e “ te1 , . . . , em u and f “ tf1 , . . . , fn u, with respect to these
bases the linear map has the matrix MT pf,eq . If we consider two other bases
e1 “ te11 , . . . , e1m u and f 1 “ tf11 , . . . , fn1 u the matrix of T with respect to these bases
will be MT pf 1 ,e1 q . What relation do we have between the matrices of the same linear
map in these two bases?

Theorem 3.25. In the above conditions MT pf 1 ,e1 q “ P pf 1 ,f q MT pf,eq P pe,e1 q .

Proof. Let us consider v P V and let w “ T pvq. We have

pwqf 1 “ MT pf 1 ,e1 q pvqe1 “ MT pf 1 ,e1 q P pe1 ,eq pvqe .

On the other hand

pwqf 1 “ P pf 1 ,f q pwqf “ P pf 1 ,f q pT pvqqf “ P pf 1 ,f q MT pf,eq pvqe .

Taking into account that pP pe1 ,eq q´1 “ P pe,e1 q we get

MT pf 1 ,e1 q “ P pf 1 ,f q MT pf,eq pP pe1 ,eq q´1 “ P pf 1 ,f q MT pf,eq P pe,e1 q .

Corollary 3.26. Let e and e1 be two bases of a finite-dimensional vector space V


and let T : V Ñ V be a linear mapping. If T is represented by matrices A “ MT pe,eq
and A1 “ MT pe1 ,e1 q with respect to e and e1 respectively, then A1 “ P AP ´1 where P is
the matrix representing the change of basis e to e1 .
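A numerical sketch of Theorem 3.25 in the case V “ W , e “ f (NumPy assumed; Q below denotes P pe,e1 q , whose columns are the new basis vectors written in the old basis, and the matrices are chosen only for illustration):

    import numpy as np

    # T has matrix A in the standard basis e of R^2.  New basis e' = {(1,1), (1,-1)}.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    Q = np.array([[1.0, 1.0],
                  [1.0, -1.0]])

    # Matrix of the same operator with respect to e':
    A_new = np.linalg.inv(Q) @ A @ Q
    print(A_new)

    # Sanity check: coordinates of T(v) computed in either basis agree.
    v_e = np.array([1.0, 2.0])
    v_new = np.linalg.inv(Q) @ v_e
    print(np.linalg.inv(Q) @ (A @ v_e))
    print(A_new @ v_new)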

3.3 Problems
Problem 3.3.1. Consider the following mappings T : R3 Ñ R3 . Study which one
of them is a linear mapping.

a) T px1 , x2 , x3 q “ px21 , x2 , x23 q.

b) T px1 , x2 , x3 q “ px3 , x1 , x2 q.

c) T px1 , x2 , x3 q “ px1 ´ 1, x2 , x3 q.

d) T px1 , x2 , x3 q “ px1 ` x2 , x2 ´ x3 , x1 ` x2 ` x3 q.

e) T px1 , x2 , x3 q “ px3 , 0, 0q.



f) T px1 , x2 , x3 q “ px1 , 2x2 , 3x3 q.

Problem 3.3.2. Let T P EndpV q and let tei : i “ 1, nu be a basis in V . Prove


that the following statements are equivalent.

1. The matrix of T , with respect to the basis tei : i “ 1, nu is upper triangular.

2. T pek q P spante1 , . . . , ek u for all k “ 1, n.

3. T pspante1 , . . . , ek uq “ spante1 , . . . , ek u for all k “ 1, n.

Problem 3.3.3. Let T1 , T2 : R3 Ñ R3 having the matrices


¨ ˛
3 1 0
˚ ‹
˚ ‹
MT1 “ ˚0 2 1‹ ,
˝ ‚
1 2 3

respectively ¨ ˛
´1 4 2
˚ ‹
˚ ‹
MT2 “ ˚ 0 4 1‹
˝ ‚
0 0 5
in the canonical basis of R3 .

a) Find the image of p0, 1, ´1q through T1 , T1´1 , T2 , T2´1 .

b) Find the image of p1, 3, ´2q through T1 ` T2 , pT1 ` T2 q´1 .

c) Find the image of p1, 2, 0q through T1 ˝ T2 , T2 ˝ T1 .

Problem 3.3.4. Let V be a complex vector space and let T P EndpV q. Show that
there exists a basis in V such that the matrix of T relative to this basis is upper
triangular.

Problem 3.3.5. Let T : R4 Ñ R3 be a linear mapping represented by the matrix


¨ ˛
1 0 1 2
˚ ‹
˚ ‹
M “ ˚´1 1 0 1 ‹.
˝ ‚
0 ´1 ´1 ´3
Find a basis in ker T, im T and the dimension of the spaces V, W, ker T and im T .

Problem 3.3.6. Show that a linear transformation T : V Ñ W is injective if and


only if it has the property of mapping linearly independent subsets of V to linearly
independent subsets of W.

Problem 3.3.7. Show that a linear transformation T : V Ñ W is surjective if and


only if it has the property of mapping any set of generators of V to a set of
generators of W.

Problem 3.3.8. Let T : V Ñ W be a linear mapping represented by the matrix


¨ ˛
1 1 1 2
˚ ‹
˚ ‹
M “ ˚´1 1 1 1 ‹.
˝ ‚
0 ´2 ´2 ´3
Compute dim V, dim W and find a basis in im T and ker T .

Problem 3.3.9. Find all the linear mappings T : R Ñ R with the property
im T “ ker T .
Find all n P N such that there exists a linear mapping T : Rn Ñ Rn with the
property im T “ ker T .

Problem 3.3.10. Let V , respectively Vi , i “ 1, n be vector spaces over C. Show


that, if T : V1 ˆ V2 ˆ ¨ ¨ ¨ ˆ Vn Ñ V is a linear mapping then there exist and they
are unique the linear mappings Ti : Vi Ñ V , i “ 1, n such that

T pv1 , . . . , vn q “ T1 pv1 q ` T2 pv2 q ` ¨ ¨ ¨ ` Tn pvn q.



Problem 3.3.11. (The first isomorphism theorem). If T : V Ñ W is a linear


transformation between vector spaces V and W , then

V { ker T » im T.

[Hint: show that the mapping S : V { ker T Ñ im T, Spv ` ker T q “ T pvq is a


bijective linear mapping.]

Problem 3.3.12. (The second isomorphism theorem). If U and W are subspaces


of a vector space V, then

pU ` W q{W » U{pU X W q.

[Hint: define the mapping T : U Ñ pU ` W q{W by the rule T puq “ u ` W , show


that T is a linear mapping and use the previous problem.]

Problem 3.3.13. (The third isomorphism theorem). Let U and W be subspaces


of a vector space V such that W Ď U. Prove that U{W is a subspace of V {W and
that pV {W q{pU{W q » V {U.
[Hint: define a mapping T : V {W Ñ V {U by the rule T pv ` W q “ v ` U, show
that T is a linear mapping and use the firs isomorphism theorem.]

Problem 3.3.14. Show that every subspace U of a finite-dimensional vector space


V is the kernel and the image of suitable linear operators on V.

Problem 3.3.15. Let T : R4 Ñ R4 having the matrix


¨ ˛
1 2 0 1
˚ ‹
˚ ‹
˚3 0 ´1 2‹
MT “ ˚
˚


˚2 5 3 1‹
˝ ‚
1 2 1 3
in the canonical basis te1 , e2 , e3 , e4 u of R4 .
Find the matrix of T with respect to the following basis.

a) te1 , e3 , e2 , e4 u.

b) te1 , e1 ` e2 , e1 ` e2 ` e3 , e1 ` e2 ` e3 ` e4 u.

c) te4 ´ e1 , e3 ` e4 , e2 ´ e4 , e4 u.

Problem 3.3.16. A linear transformation T : R3 Ñ R2 is defined by


T px1 , x2 , x3 q “ px1 ´ x2 ´ x3 , ´x1 ` x3 q. Let e “ tp2, 0, 0q, p´1, 2, 0q, p1, 1, 1qu and
f “ tp0, ´1q, p1, 2qu be bases in R3 and R2 respectively. Find the matrix that
represents T with respect to these bases.
4 Proper vectors and the Jordan canonical form

4.1 Invariant subspaces. Proper vectors and


values
In this part we shall further develop the theory of linear maps. Namely we are
interested in the structure of an operator.
Let us begin with a short description of what we expect to obtain.
Suppose that we have a vector space V over a field F and a linear operator
T P EndpV q. Suppose further that we have the direct sum decomposition:

V “ U1 ‘ ¨ ¨ ¨ ‘ Um ,

where each Ui is a subspace of V. To understand the behavior of T it is only


needed to understand the behavior of each restriction T |Uj . Studying T |Uj should
be easier than dealing with T because Uj is a ”smaller” vector space than V .


However we have a problem: if we want to apply tools which are commonly used in
the theory of linear maps (such as taking powers for example) the problem is that
generally T may not map Uj into itself, in other words T |Uj may not be an
operator on Uj . For this reason it is natural to consider only that kind of
decomposition for which T maps every Uj into itself.

Definition 4.1. Let T be an operator on the vector space V over F and U a


subspace of V . The subspace U is called invariant under T if T pUq Ă U, in other
words T |U is an operator on U.

Of course that another natural question arises when dealing with invariant
subspaces. How does an operator behave on an invariant subspace of dimension
one? Every one dimensional subspace is of the form U “ tλu|λ P Fu. If U is
invariant by T it follows that T puq should be in U, and hence there should exist a
scalar λ P F such that T puq “ λu. Conversely if a nonzero vector u exists in V
such that T puq “ λu, for some λ P F, then the subspace U spanned by u is
invariant under T and for every vector v in U one has T pvq “ λv. It seems
reasonable to give the following definition:

Definition 4.2. Let T P EndpV q be an operator on a vector space over the field F.
A scalar λ P F is called eigenvalue (or proper value) for T if there exists a nonzero
vector v P V such that T pvq “ λv. A corresponding vector satisfying the above
equality is called eigenvector (or proper vector) associated to the eigenvalue λ.

The set of eigenvectors of T corresponding to an eigenvalue λ forms a vector space,


denoted by Epλq, the proper subspace corresponding to the proper value λ. It is
clear that Epλq “ kerpT ´ λIV q.
For the finite dimensional case let MT be the matrix of T in some basis. The
equality T pvq “ λv is equivalent with MT v “ λv or, with pMT ´ λIn qv “ 0, which

is a linear system. Obviously this homogeneous system of linear equations has a


nontrivial solution if and only if

detpMT ´ λIn q “ 0.

Observe that detpMT ´ λIn q is a polynomial of degree n in λ, where n “ dim V.


This polynomial is called the characteristic polynomial of the operator T. Hence,
the eigenvalues of T are the roots of its characteristic polynomial.
Notice that the characteristic polynomial does not depend on the choice of the
basis B that we choose when computing the matrix MT of the transformation T .
Indeed, let B 1 be another basis and MT1 the matrix of T with respect to this new
basis. Further, let P be the transition matrix from B to B 1 . So MT1 “ P ´1 MT P and
detpP q ‰ 0. We have

detpP ´1MT P ´ λIq “ detpP ´1MT P ´ P ´1 pλIqP q “

detpP ´1 pMT ´ λIqP q “ p1{ detpP qq detpMT ´ λIq detpP q “ detpMT ´ λIq,
which proves our claim.
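This invariance is easy to test numerically; a sketch with NumPy (np.poly applied to a square matrix returns the coefficients of its characteristic polynomial; the matrices below are only an illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    P = np.array([[1.0, 1.0],
                  [0.0, 1.0]])        # any invertible matrix

    B = np.linalg.inv(P) @ A @ P      # same operator in another basis
    print(np.poly(A))                 # characteristic polynomial coefficients
    print(np.poly(B))                 # identical coefficients
    print(np.linalg.eigvals(A), np.linalg.eigvals(B))  # same eigenvalues 1 and 3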

Theorem 4.3. Let T P EndpV q. Suppose that λi , i “ 1, m are distinct eigenvalues


of T , and vi , i “ 1, m are the corresponding proper vectors. The set tv1 , . . . , vm u is
linearly independent.

Proof. Suppose, by contrary, that the set tv1 , . . . , vm u is linearly dependent. It


follows that a smallest index k exists such that

vk P spantv1 , . . . , vk´1 u.

Thus the scalars a1 , . . . ak´1 exist such that

vk “ a1 v1 ` . . . ak´1 vk´1 .

Applying T to the above equality, we get

λk vk “ a1 λ1 v1 ` . . . ak´1 λk´1 vk´1 .

It follows that

0 “ a1 pλk ´ λ1 qv1 ` ¨ ¨ ¨ ` ak´1 pλk ´ λk´1 qvk´1 .

Because we choose k to be the smallest index such that vk “ a1 v1 ` ¨ ¨ ¨ ` ak´1 vk´1 ,


it follows that the set tv1 , ¨ ¨ ¨ vk´1 u is linearly independent. It follows that all the
a’s are zero, since the λ’s are distinct. But then vk “ 0, a contradiction.

Corollary 4.4. An operator T on a finite dimensional vector space V has at most


dim V distinct eigenvalues.

Proof. This is an obvious consequence of the fact that in a finite dimensional


vector space we have at most dim V linearly independent vectors.

The linear maps which have exactly n “ dim V linearly independent eigenvectors
have very nice and simple properties. This is the happiest case we can meet with
in the class of linear maps.

Definition 4.5. A linear map T : V Ñ V is said to be diagonalizable if there


exists a basis of V consisting of n independent eigenvectors, n “ dim V .

Recall that matrices A and B are similar if there is an invertible matrix P such
that B “ P AP ´1. Hence, a matrix A is diagonalizable if it is similar to a diagonal
matrix D.

4.2 The minimal polynomial of an operator


The main reason for which there exists a richer theory on operators than for linear
maps is that operators can be raised to powers (we can consider the composition of
an operator with itself).
Let V be an n-dimensional vector space over a field F and T : V Ñ V be a linear
operator.
Now, LpV, V q “ EndpV q is an n2 dimensional vector space. We can consider
T 2 “ T ˝ T and of course we obtain T n “ T n´1 ˝ T inductively. We define T 0 as
being the identity operator I “ IV on V . If T is invertible (bijective), then there
exists T ´1 , so we define T ´m “ pT ´1 qm . Of course that

T m T n “ T m`n , for m, n P Z.

For T P EndpV q and p P FrXs a polynomial given by

ppzq “ a0 ` a1 z ` . . . am z m , z P F

we define the operator ppT q given by

ppT q “ a0 I ` a1 T ` . . . am T m .

This is a new use of the same symbol p, because we are applying it to operators
not only to elements in F. If we fix the operator T we obtain a function defined on
FrXs with values in EndpV q, given by p Ñ ppT q which is linear. For p, q P FrXs we
define the operator pq given by ppqqpT q “ ppT qqpT q.
Now we begin the study of the existence of eigenvalues and of their properties.

Theorem 4.6. Every operator over a finite dimensional, nonzero, complex vector
space has an eigenvalue.

Proof. Suppose V is a finite dimensional complex vector space and T P EndpV q.


Choose v P V , v ‰ 0. Consider the set

pv, T pvq, T 2pvq, . . . T n pvqq.

This set is a linearly dependent system of vectors (they are n ` 1 vectors and
dim V “ n). Then there exist complex numbers, a0 , . . . , an , not all 0, such that

0 “ a0 v ` a1 T pvq ` ¨ ¨ ¨ ` an T n pvq .

Let m be the largest index such that am ‰ 0. Then we have the decomposition

a0 ` a1 z ` ¨ ¨ ¨ ` am z m “ am pz ´ λ1 q . . . pz ´ λm q .

It follows that

0 “ a0 v ` a1 T pvq ` . . . an T n pvq

“ pa0 I ` a1 T ` . . . an T n qpvq

“ am pT ´ λ1 Iq . . . pT ´ λm Iqpvq .

which means that T ´ λj I is not injective for at least one j, or equivalently T has
an eigenvalue.

Remark 4.7. The analogous statement is not true for real vector spaces. But on
real vector spaces there are always invariant subspaces of dimension 1 or 2.

Example 4.8. Let T : F2 Ñ F2 given by T px, yq “ p´y, xq. It has no eigenvalues


and eigenvectors if F “ R. Find them for F “ C.

Obviously, T px, yq “ λpx, yq leads to p´y, xq “ λpx, yq, or equivalently


λx ` y “ 0
λy ´ x “ 0.

The previous system is equivalent to x “ λy, pλ2 ` 1qy “ 0.


If λ P R then the solution is x “ y “ 0, but note that p0, 0q is excluded from
eigenvectors by definition.
If λ P C we obtain the eigenvalues λ1 “ i, λ2 “ ´i and the corresponding
eigenvectors pi, 1q P C2 , respectively p´i, 1q P C2 .

Theorem 4.9. Every operator on an odd dimensional real vector space has an
eigenvalue.

Proof. Let T P EndpV q and n “ dim V odd. The eigenvalues of T are the roots of
the characteristic polynomial that is detpMT ´ λIn q. This polynomial is a
polynomial of degree n in λ, hence, since n is odd, the equation detpMT ´ λIn q “ 0
has at least one real solution.

A central goal of linear algebra is to show that a given operator T P EndpV q has a
reasonably simple matrix in a given basis. It is natural to think that reasonably
simple means that the matrix has as many 0’s as possible.
Recall that for a basis tek , k “ 1, nu,
T pek q “ řni“1 aik ei ,

where MT “ paij q i“1,m, j“1,n is the matrix of the operator.

Theorem 4.10. Suppose T P EndpV q and tei , i “ 1, nu is a basis of V . Then the


following statements are equivalent:

1. The matrix of T with respect to the basis tei , i “ 1, nu is upper triangular.

2. T pek q P spante1 , . . . , ek u for k “ 1, n.

3. spante1 , . . . , ek u is invariant under T for each k “ 1, n.



Proof. 1ô2 obviously follows from a moment’s thought and the definition. Again
3ñ2. It remains only to prove that 2ñ3.
So, suppose 2 holds. Fix k P t1, . . . , nu. From 2 we have

T pe1 q P spante1 u Ď spante1 , . . . , ek u

T pe2 q P spante1 , e2 u Ď spante1 , . . . , ek u


..
.

T pek q P spante1 , . . . , ek u Ď spante1 , . . . , ek u.

So, for v a linear combination of te1 , . . . , ek u one has that

T pvq P spante1 , . . . , ek u,

consequently 3. holds.

Theorem 4.11. Suppose that V is a complex vector space and T P EndpV q. Then
there exists a basis of V such that T is an upper-triangular matrix with respect to
this basis.

Proof. Induction on the dim V . Clearly this holds for dim V “ 1.


Suppose that dim V ą 1 and the result holds for all complex vector spaces of
dimension smaller than the dimension of V . Let λ be an eigenvalue of T (it exists)
and
U “ impT ´ λIq.

Because T ´ λI is not surjective, dim U ă dim V . Furthermore U is invariant


under T , since for u P U there exists v P V such that u “ T pvq ´ λv, hence
T puq “ T pT pvqq ´ λT pvq “ pT ´ λIqpwq P U where w “ T pvq.

So, T |U is an operator on U. By the induction hypothesis there is a basis


tu1 , . . . , um u of U with respect to which T |U has an upper-triangular matrix. So,
for each j P t1, . . . , mu we have

T puj q “ T |U puj q P spantu1 , . . . , um u.

Extend the basis tu1 , . . . , um u of U to a basis tu1 , . . . um , v1 , . . . vn u of V . For each


k “ 1, n

T pvk q “ pT ´ λIqpvk q ` λvk .

By the definition of U, pT ´ λIqpvk q P U “ spantu1 , . . . , um u. Thus the equation


above shows that
T pvk q P spantu1 , . . . , um , vk u.

From this, in virtue of the previous theorem, it follows that T has an


upper-triangular matrix with respect to this basis.

One of the good points of this theorem is that, if we have this kind of basis, we can
decide if the operator is invertible by analysing the matrix of the operator.

Theorem 4.12. Suppose T P EndpV q has an upper triangular matrix with respect
to some basis of V . Then T is invertible if and only if all the entries on the
diagonal are non zero.

Proof. Let te1 , . . . , en u be a basis of V with respect to which T has the matrix
¨ ˛
λ1 . . . ˚
˚ ‹
˚ ‹
˚ 0 λ2 ... ‹
MT “ ˚
˚


˚ 0 0 ... ‹
˝ ‚
0 0 0 λn

We will prove that T is not invertible iff one of the λk ’s equals zero. If λ1 “ 0, then
T pv1 q “ 0, so T is not invertible, as desired.
Suppose λk “ 0, 1 ă k ď n. The operator T maps the vectors e1 , . . . , ek´1 in
spante1 , . . . , ek´1 u and, because λk “ 0, T pek q P spante1 , . . . , ek´1 u. So, the vectors
T pe1 q, . . . , T pek q are linearly dependent (they are k vectors in a k ´ 1 dimensional
vector space, spante1 , . . . , ek´1 uq. Consequently T is not injective, and not
invertible.
Suppose that T is not invertible. Then ker T ‰ t0u, so v P V, v ‰ 0 exists such
that T pvq “ 0. Let
v “ a1 e1 ` ¨ ¨ ¨ ` an en

and let k be the largest integer with ak ‰ 0. Then

v “ a1 e1 ` ¨ ¨ ¨ ` ak ek ,
and
0 “ T pvq,
0 “ T pa1 e1 ` ¨ ¨ ¨ ` ak ek q,
0 “ pa1 T pe1 q ` ¨ ¨ ¨ ` ak´1 T pek´1 qq ` ak T pek q.

The term pa1 T pe1 q ` ¨ ¨ ¨ ` ak´1 T pek´1 qq is in spante1 , . . . , ek´1 u, because of the
form of MT . Finally T pek q P spante1 . . . , ek´1 u. Thus when T pek q is written as a
linear combination of the basis te1 , . . . , en u, the coefficient of ek will be zero. In
other words, λk “ 0.

Theorem 4.13. Suppose that T P EndpV q has an upper triangular matrix with
respect to some basis of V . Then the eigenvalues of T are exactly the entries on
the diagonal of the upper triangular matrix.

Proof. Suppose that we have a basis te1 , . . . , en u such that the matrix of T is
upper triangular in this basis. Let λ P F, and consider the operator T ´ λI. It has

the same matrix, except that on the diagonal the entries are λj ´ λ if those in the
matrix of T are λj . It follows that T ´ λI is not invertible iff λ is equal with some
λj . So λ is a proper value iff it equals some λj , as desired.

4.3 Diagonal matrices


A diagonal matrix is a matrix which is zero except possibly on the diagonal.

Proposition 4.14. If T P EndpV q has dim V distinct eigenvalues, then T has a


diagonal matrix ¨ ˛
λ1 0
˚ ‹
˚ ‹
˚ λ2 ‹
˚ ‹
˚ .. ‹
˚ . ‹
˝ ‚
0 λn
with respect to some basis.

Proof. Suppose that T has dim V distinct eigenvalues, λ1 , . . . , λn , where


n “ dim V . Choose corresponding eigenvectors e1 , . . . , en . Because nonzero vectors
corresponding to distinct eigenvalues are linearly independent, we obtain a set of
vectors with the cardinal equal to dim V , that is a basis, and in this basis the
matrix of T is diagonal.

The next proposition imposes several conditions on an operator that are equivalent
to having a diagonal matrix.

Proposition 4.15. Suppose T P EndV . Denote λ1 , . . . , λn the distinct eigenvalues


of T . The following conditions are equivalent.

1. T has a diagonal matrix with respect to some basis of V .



2. V has a basis consisting of proper vectors.

3. There exist one dimensional subspaces U1 , . . . , Um of V , each invariant


under T such that
V “ U1 ‘ ¨ ¨ ¨ ‘ Um .

4. V “ kerpT ´ λ1 Iq ‘ ¨ ¨ ¨ ‘ kerpT ´ λn Iq.

5. dim V “ dim kerpT ´ λ1 Iq ` ¨ ¨ ¨ ` dim kerpT ´ λn Iq.

Proof. We saw that 1 ô 2. Suppose 2 holds. Choose te1 , . . . , em u a basis


consisting of proper vectors, and Ui “ spantei u, for i “ 1, m. Hence 2 ñ 3.
Suppose 3 holds. Choose a basis ej P Uj , j “ 1, m. It follows that ej , j “ 1, m is a
proper vector, so they are linearly independent, and because they are m vectors,
they form a basis. Thus 3 implies 2.
Now we know that 1, 2, 3 are equivalent. Next we will prove the following chain of
implications
2ñ4ñ5ñ2

Suppose 2 holds, then V has a basis consisting of eigenvectors. Then every vector
in V is a linear combination of eigenvectors of T , that is

V “ kerpT ´ λ1 Iq ` ¨ ¨ ¨ ` kerpT ´ λn Iq.

We show that it is a direct sum. Suppose that

0 “ u1 ` ¨ ¨ ¨ ` un ,

with uj P kerpT ´ λj Iq, j “ 1, n. They are linearly independent, so all are 0.


Finally 4 ñ 5 is clear because in 4 we have a direct sum.
5 ñ 2. dim V “ dim kerpT ´ λ1 Iq ` ¨ ¨ ¨ ` dim kerpT ´ λn Iq. According to a
precedent result, distinct eigenvalues give rise to linear independent eigenvectors.

Let te11 , . . . , e1i1 u, . . . , ten1 , . . . , enin u bases in kerpT ´ λ1 Iq, . . . , kerpT ´ λn Iq. Then
dim V “ i1 ` ¨ ¨ ¨ ` in , and te11 , . . . , e1i1 , . . . , en1 , . . . , enin u are linearly independent.
Hence V “ spante11 , . . . , e1i1 , . . . , en1 , . . . , enin u which shows that 2 holds.
¨ ˛
2 ´1 ´1
˚ ‹
˚ ‹
Example 4.16. Consider the matrix A “ ˚´1 2 ´1‹ . Show that A is
˝ ‚
´1 ´1 0
diagonalizable and find the diagonal matrix similar to A.

The characteristic polynomial of A is

detpA ´ λIq “ ´λ3 ` 4λ2 ´ λ ´ 6 “ ´pλ ` 1qpλ ´ 2qpλ ´ 3q.

Hence, the eigenvalues of A are λ1 “ ´1, λ2 “ 2 and λ3 “ 3. To find the


corresponding eigenvectors, we have to solve the three linear systems pA ` Iqv “ 0,
pA ´ 2Iqv “ 0 and pA ´ 3Iqv “ 0. On solving these systems, we find that the
solution spaces are
tpα, α, 2αq : α P Ru,

tpα, α, ´αq : α P Ru,

respectively
tpα, ´α, 0q : α P Ru.

Hence, the corresponding eigenvectors associated to λ1 , λ2 and λ3 respectively, are


v1 “ p1, 1, 2q, v2 “ p1, 1, ´1q and v3 “ p1, ´1, 0q respectively. There exists 3 linear
independent eigenvectors, thus A is diagonalizable.
¨ ˛
1 1 1
˚ ‹
˚ ‹
Our transition matrix P “ rv1 |v2 |v3 s “ ˚1 1 ´1‹ .
˝ ‚
2 ´1 0

We have

              ¨ 1  1  2 ˛
P ´1 “ p1{6q ˚ 2  2 ´2 ‹ .
              ˝ 3 ´3  0 ‚
Hence, the diagonal matrix similar to A is
¨ ˛
´1 0 0
˚ ‹
˚ ‹
D “ P ´1AP “ ˚ 0 2 0‹ .
˝ ‚
0 0 3

Obviously one may directly compute D, by knowing, that D is the diagonal matrix
having the eigenvalues of A on its main diagonal.
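The result can be verified numerically (a NumPy sketch reproducing the matrices of this example):

    import numpy as np

    A = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  2.0, -1.0],
                  [-1.0, -1.0,  0.0]])
    P = np.array([[1.0,  1.0,  1.0],
                  [1.0,  1.0, -1.0],
                  [2.0, -1.0,  0.0]])

    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, 10))            # diag(-1, 2, 3)
    print(np.linalg.eigvals(A))       # eigenvalues -1, 2, 3 (in some order)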

Proposition 4.17. If λ is a proper value for an operator (endomorphism) T , and


v ‰ 0, v P V is a proper vector then one has:

1. @k P N, λk is a proper value for T k “ T ˝ ¨ ¨ ¨ ˝ T (k times) and v is a proper


vector of T k .

2. If p P FrXs is a polynomial with coefficients in F, then ppλq is an eigenvalue


for ppT q and v is a proper vector of ppT q.

3. For T automorphism (bijective endomorphism), λ´1 is a proper value for T ´1


and v is an eigenvector for T ´1 .

Proof. 1. We have T pvq “ λv, hence T ˝ T pvq “ T pλvq “ λT pvq “ λ2 v. Assume,


that T k´1pvq “ λk´1 v. Then T k pvq “

T ˝ T k´1 pvq “ T pT k´1 pvqq “ T pλk´1 vq “ λk´1T pvq “ λk´1 λv “ λk v.

2. Let p “ a0 ` a1 x ` ¨ ¨ ¨ ` an xn P FrXs. Then ppT qpvq “

a0 Ipvq ` a1 T pvq ` ¨ ¨ ¨ ` an T n pvq “ a0 v ` a1 pλvq ` ¨ ¨ ¨ ` an pλn vq “ ppλqv.



3. T ´1 pvq “ u such that T puq “ v. But v “ λ´1 T pvq “ T pλ´1 vq, hence
T puq “ T pλ´1 vq. Since T is injective we have u “ λ´1 v, or equivalently
T ´1 pvq “ λ´1 v.

Example 4.18. Let T : V Ñ V be a linear map. Prove that if ´1 is an eigenvalue


of T 2 ` T then 1 is an eigenvalue of T 3 . Here I is the identity map and
T 2 “ T ˝ T , etc.

From the fact that ´1 is an eigenvalue of T 2 ` T there exists v ‰ 0 such that


` 2 ˘
T ` T v “ ´v,

or, equivalently
` ˘
T 2 ` T ` I v “ 0.

Now, we apply the linear map T ´ I (recall that the linear maps form a vector
space, so the sum or difference of two linear maps is still linear) to the above
relation to get
` ˘
pT ´ Iq T 2 ` T ` I v “ 0.

Here we have used that, by linearity, pT ´ Iq 0 “ 0.


Finally, simple algebra yields pT ´ Iq pT 2 ` T ` Iq “ T 3 ´ I, so the above equation
shows that
T 3 v “ v,

as desired.

4.4 The Jordan canonical form


In a previous sections we have seen the endomorphisms which are diagonalizable.

Let V be a vector space of finite dimension n over a field F. Let T : V Ñ V and let
λ0 be an eigenvalue of T . Consider the matrix form of the endomorphism in a
given basis, T pvq “ MT v. The eigenvalues are the roots of the characteristic
polynomial detpMT ´ λIn q “ 0. It can be proved that this polynomial does not
depend on the basis and on the matrix MT . So, it will be called the characteristic
polynomial of the endomorphism T , and it will be denoted by P pλq, and of course
deg P “ n. Sometimes it is called the characteristic polynomial of the matrix, but
we understand that is the matrix associated to an operator.
Denote by mpλ0 q the multiplicity of λ0 as a root of this polynomial. Associated to
the proper value λ0 we consider the proper subspace corresponding to λ0 :

Epλ0 q “ tv P V |T pvq “ λ0 vu.

Consider a basis of V and let MT be the matrix of T with respect to this basis. We
have that:

Theorem 4.19. With the above notations, the following holds

dim Epλ0 q “ n ´ rank pMT ´ λ0 Iq ď mpλ0 q.

Proof. Obviously it is enough to prove the claim in V “ Rn . Let x1 , x2 , . . . , xr be


linearly independent eigenvectors associated to λ0 , so that dim Epλ0 q “ r.
Complete this set with xr`1 , . . . xn to a basis of Rn . Let P be the matrix whose
columns are xi , i “ 1, n. We have MT P “ rλ0 x1 | . . . |λ0 xr | . . . s. We get that the
first r columns of P ´1MT P are diagonal with λ0 on the diagonal, but that the rest
of the columns are indeterminable. We prove next that P ´1 MT P has the same
characteristic polynomial as MT . Indeed

detpP ´1MT P ´ λIq “ detpP ´1MT P ´ P ´1 pλIqP q “



detpP ´1 pMT ´ λIqP q “ p1{ detpP qq detpMT ´ λIq detpP q “ detpMT ´ λIq.
But since the first few columns of P ´1 MT P are diagonal with λ0 on the diagonal
we have that the characteristic polynomial of P ´1MT P has a factor of at least
pλ0 ´ λqr , so the algebraic multiplicity of λ0 is at least r.

The value dim Epλ0 q is called the geometric multiplicity of the eigenvalue λ0 .
Let T P EndpV q, and suppose that the roots of the characteristic polynomial are in
F. Let λ be a root of the characteristic polynomial, i.e. an eigenvalue of T .
Consider m the algebraic multiplicity of λ and q “ dim Epλq, the geometric
multiplicity of λ.
It is possible to find q eigenvectors and m ´ q principal vectors (also called
generalized eigenvectors), all of them linearly independent, and an eigenvector v
and the corresponding principal vectors u1 , . . . , ur satisfy

T pvq “ λv, T pu1q “ λu1 ` v, . . . , T pur q “ λur ` ur´1

The precedent definition can equivalently be stated as


A nonzero vector u is called a generalized eigenvector of rank r associated with the
eigenvalue λ if and only if pT ´ λIqr puq “ 0 and pT ´ λIqr´1 puq ‰ 0. We note that
a generalized eigenvector of rank 1 is an ordinary eigenvector. The previously
defined principal vectors u1 , . . . , ur are generalized eigenvectors of rank 2, . . . , r ` 1.
It is known that if λ is an eigenvalue of algebraic multiplicity m, then there are m
linearly independent generalized eigenvectors associated with λ.
These eigenvectors and principal vectors associated to T by considering all the
eigenvalues of T form a basis of V , called the Jordan basis with respect to T . The

matrix of T relative to a Jordan basis is called a Jordan matrix, and it has the form
¨ ˛
J
˚ 1 ‹
˚ ‹
˚ J2 ‹
˚ ‹
˚ .. ‹
˚ . ‹
˝ ‚
Jp

The J’s are matrices, called Jordan cells. Each cell represents the contribution of
an eigenvector v, and the corresponding principal vectors, u1 , . . . ur , and it has the
form
¨ ˛
λ 1
˚ ‹
˚ ‹
˚ λ 1 ‹
˚ ‹
˚ ‹
˚ λ 1 ‹ P Mr`1 pFq
˚ ‹
˚ .. ‹
˚ . 1 ‹
˝ ‚
λ
It is easy to see that the Jordan matrix is a diagonal matrix iff there are no
principal vectors iff mpλq “ dim Epλq for each eigenvalue λ.
Let MT be the matrix of T with respect to a given basis B, and J be the Jordan
matrix with respect to a Jordan basis B 1 . Let P be the transition matrix from B
to B 1 , hence it have columns consisting of either eigenvectors or generalized
eigenvectors. Then J “ P ´1MT P , hence MT “ P JP ´1 .

Example 4.20. (algebraic multiplicity 3, geometric multiplicity 2) Consider the
operator with the matrix

    ¨  0 1 0 ˛
A “ ˚ ´4 4 0 ‹ .
    ˝ ´2 1 2 ‚

Find the Jordan matrix and the transition matrix of A.

The characteristic polynomial of A is detpA ´ λIq “ p2 ´ λq3 , hence λ “ 2 is an


eigenvalue with algebraic multiplicity 3. By solving the homogenous system
pA ´ 2Iqv “ 0 we obtain the solution space
Ep2q “ kerpA ´ 2Iq “ tpα, 2α, βq : α, β P Ru. Hence the dimension of Ep2q is 2,
consequently the eigenvalue λ “ 2 has geometric multiplicity 2. Therefore we can
take the linear independent eigenvectors v1 “ p1, 2, 1q respectively v2 “ p0, 0, 1q.
Note that we need a generalized eigenvector, which can be obtained as a solution
of the system
pA ´ 2Iqu “ v1 .

The solutions of this system lie in the set tpα, 2α ` 1, βq : α, β P Ru, hence a
generalized eigenvector, is u1 “ p1, 3, 0q.
Note that v1 , u1 , v2 are linearly independent, hence we take the transition matrix

                    ¨ 1 1 0 ˛
P “ rv1 |u1 |v2 s “  ˚ 2 3 0 ‹ .
                    ˝ 1 0 1 ‚

Then

        ¨  3 ´1 0 ˛
P ´1 “  ˚ ´2  1 0 ‹ ,
        ˝ ´3  1 1 ‚

hence

                ¨ 2 1 0 ˛
J “ P ´1 AP “  ˚ 0 2 0 ‹ .
                ˝ 0 0 2 ‚

Example 4.21. (algebraic multiplicity 3, geometric multiplicity 1) Consider the
operator with the matrix

    ¨ ´1 ´18 ´7 ˛
A “ ˚  1 ´13 ´4 ‹ .
    ˝ ´1  25  8 ‚

Find the Jordan matrix and the transition matrix of A.

The characteristic polynomial of A is detpA ´ λIq “ ´pλ ` 2q3 , hence λ “ ´2 is an


eigenvalue with algebraic multiplicity 3. By solving the homogenous system
pA ` 2Iqv “ 0 we obtain the solution space

Ep´2q “ kerpA ` 2Iq “ tp5α, 3α, ´7αq : α P Ru. Hence the dimension of Ep´2q is 1,
consequently the eigenvalue λ “ ´2 has geometric multiplicity 1. Therefore we can
take the linear independent eigenvector v “ p5, 3, ´7q. Note that we need two
generalized eigenvectors, which can be obtained as a solution of the system

pA ` 2Iqu1 “ v,

respectively
pA ` 2Iqu2 “ u1 .

The solutions of the first system lie in the set tp´p1 ` 5αq{7, ´p2 ` 3αq{7, αq : α P Ru,
hence a generalized eigenvector, for α “ 4, is u1 “ p´3, ´2, 4q.
The solutions of the system pA ` 2Iqu2 “ u1 with u1 “ p´3, ´2, 4q lie in the set
tp´p3 ` 5αq{7, p1 ´ 3αq{7, αq : α P Ru, hence a generalized eigenvector, for α “ 5, is
u2 “ p´4, ´2, 5q. Note that v, u1 , u2 are linearly independent, hence we take the
transition matrix

                  ¨  5 ´3 ´4 ˛
P “ rv|u1 |u2 s “ ˚  3 ´2 ´2 ‹ .
                  ˝ ´7  4  5 ‚

Then

        ¨ ´2 ´1 ´2 ˛
P ´1 “  ˚ ´1 ´3 ´2 ‹ ,
        ˝ ´2  1 ´1 ‚

hence

                ¨ ´2  1  0 ˛
J “ P ´1 AP “  ˚  0 ´2  1 ‹ .
                ˝  0  0 ´2 ‚
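The data of this example can again be verified numerically (a NumPy sketch using the matrices P and J found above):

    import numpy as np

    A = np.array([[-1.0, -18.0, -7.0],
                  [ 1.0, -13.0, -4.0],
                  [-1.0,  25.0,  8.0]])
    P = np.array([[ 5.0, -3.0, -4.0],
                  [ 3.0, -2.0, -2.0],
                  [-7.0,  4.0,  5.0]])
    J = np.array([[-2.0,  1.0,  0.0],
                  [ 0.0, -2.0,  1.0],
                  [ 0.0,  0.0, -2.0]])

    print(np.allclose(np.linalg.inv(P) @ A @ P, J))  # True: P^{-1} A P = J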

4.5 Problems
Problem 4.5.1. Find the eigenvalues and eigenvectors of the operator
T : C 8 p1, bq Ñ C 8 p1, bq, T pf qpxq “ f 1 pxq{px ex2 q .

Problem 4.5.2. Find matrices which diagonalize the following:

     ¨ 1 5 ˛          ¨ 1  2 ´1 ˛
aq   ˝ 3 3 ‚ ,    bq  ˚ 1  0  1 ‹ .
                      ˝ 4 ´4  5 ‚

Problem 4.5.3. Find the Jordan canonical form and the transition matrix for the
matrix ¨ ˛
2 1 ´1
˚ ‹
˚ ‹
˚3 ´2 3 ‹ .
˝ ‚
2 ´2 3

Problem 4.5.4. Prove that a square matrix and its transpose have the same
eigenvalues.

Problem 4.5.5. Find the Jordan canonical form and the transition matrix for the
matrix ¨ ˛
6 6 ´15
˚ ‹
˚ ‹
˚1 5 ´5 ‹ .
˝ ‚
1 2 ´2

Problem 4.5.6. Find the eigenvalues and eigenvectors of the operator


T : Cr´π, πs Ñ Cr´π, πs,
żπ
T pf qpxq “ px cos y ` sin x sin yqf pyqdy.
´π

Problem 4.5.7. Find the Jordan canonical form and the transition matrix for the
matrix ¨ ˛
4 1 1
˚ ‹
˚ ‹
˚´2 2 ´2‹ .
˝ ‚
1 1 4

Problem 4.5.8. Find the eigenvalues and eigenvectors of the operator


T : Cr´π, πs Ñ Cr´π, πs,
żπ
T pf qpxq “ pcos3 px ´ yq ` 1qf pyqdy.
´π

Problem 4.5.9. Find the Jordan canonical form and the transition matrix for the
matrix ¨ ˛
7 ´12 6
˚ ‹
˚ ‹
˚10 ´19 10‹ .
˝ ‚
12 ´24 13

Problem 4.5.10. Find the eigenvalues and eigenvectors of the operator


T : C 8 p1, 2q Ñ C 8 p1, 2q, T pf qpxq “ f 1 pxq{ sin2 x .
Problem 4.5.11. Triangularize the matrix

     ¨  1 1 ˛
A “  ˝ ´1 3 ‚ .

Problem 4.5.12. Find the Jordan canonical form and the transition matrix for
the matrix ¨ ˛
4 ´5 2
˚ ‹
˚ ‹
˚5 ´7 3‹ .
˝ ‚
6 ´9 4

Problem 4.5.13. Find the eigenvalues and eigenvectors of the operator


T : C 8 p1, bq Ñ C 8 p1, bq, T pf qpxq “ f 1 pxq{ tan2 x .

Problem 4.5.14. Find the Jordan canonical form and the transition matrix for
the matrix ¨ ˛
1 1 0
˚ ‹
˚ ‹
˚´4 ´2 1 ‹ .
˝ ‚
4 1 ´2

Problem 4.5.15. Prove that a complex 2 ˆ 2 matrix is not diagonalizable if and
only if it is similar to a matrix of the form

¨ a b ˛
˝ 0 a ‚ ,

where b ‰ 0.

Problem 4.5.16. Find the Jordan canonical form and the transition matrix for
the matrices

¨  1 ´3  3 ˛      ¨ 4 6 ´15 ˛
˚ ´2 ´6 13 ‹ ,    ˚ 1 3  ´5 ‹ .
˝ ´1 ´4  8 ‚      ˝ 1 2  ´4 ‚

Problem 4.5.17. Prove that if A and B are n ˆ n matrices, then AB and BA


have the same eigenvalues.

Problem 4.5.18. Find the Jordan canonical form and the transition matrix for
the matrix ¨ ˛
2 6 ´15
˚ ‹
˚ ‹
˚1 1 ´5 ‹ .
˝ ‚
1 2 ´6
5 Inner product spaces

5.1 Basic definitions and results


Up to now we have studied vector spaces, linear maps, special linear maps.
We can measure if two vectors are equal, but we do not have something like
”length”, so we cannot compare two vectors. Moreover we cannot say anything
about the position of two vectors.
In a vector space one can define the norm of a vector and the inner product of two
vectors. The notion of the norm permits us to measure the length of the vectors,
and compare two vectors. The inner product of two vectors, on one hand induces a
norm, so the length can be measured, and on the other hand (at least in the case
of real vector spaces), lets us measure the angle between two vectors, so a full
geometry can be constructed there. Nevertheless in the case of complex vector
spaces, the angle of two vectors is not clearly defined, but the orthogonality is.

Definition 5.1. An inner product on a vector space V over the field F is a


function (bilinear form) x¨, ¨y : V ˆ V Ñ F with the properties:

• (positivity and definiteness) xv, vy ě 0 and xv, vy “ 0 iff v “ 0.


• (additivity in the first slot) xu ` v, wy “ xu, wy ` xv, wy, for all u, v, w P V.

• (homogeneity in the first slot) xαv, wy “ αxv, wy for all α P F and v, w P V.

• (conjugate symmetry) xv, wy “ xw, vy for all v, w P V .

An inner product space is a pair pV, x¨, ¨yq, where V is a vector space and x¨, ¨y is an
inner product on V .

The most important example of an inner product space is Fn . Let v “ pv1 , . . . , vn q


and w “ pw1 , . . . , wn q and define the inner product by

xv, wy “ v1 w 1 ` ¨ ¨ ¨ ` vn w n .

This is the typical example of an inner product, called the Euclidean inner
product, and when Fn is referred to as an inner product space, one should assume
that the inner product is the Euclidean one, unless explicitly stated otherwise.
Example 5.2. Let A P M2 pRq,
A “
a  b
b  c ,
be a positive definite matrix, that is a ą 0, detpAq ą 0. Then for every
u “ pu1 , u2 q, v “ pv1 , v2 q P R2 we define
xu, vy “ pv1 v2 q A pu1 u2 qJ .
It can easily be verified that x¨, ¨y is an inner product on the real linear space R2 .
If A “ I2 we obtain the usual inner product xu, vy “ u1 v1 ` u2 v2 .
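Both inner products above are easy to experiment with numerically. The following short sketch (it relies on the NumPy library, which is of course not part of the text; the function names are chosen only for this illustration) evaluates the Euclidean inner product and the inner product induced by a positive definite matrix A.

```python
import numpy as np

def euclidean_ip(v, w):
    # <v, w> = v1*conj(w1) + ... + vn*conj(wn); np.vdot conjugates its first argument
    return np.vdot(w, v)

def matrix_ip(u, v, A):
    # <u, v> = (v1 v2) A (u1 u2)^T, as in Example 5.2
    return v @ A @ u

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # a > 0 and det(A) = 5 > 0, so A is positive definite
u = np.array([1.0, -2.0])
v = np.array([3.0, 0.5])

print(euclidean_ip(u, v))        # 3*1 + 0.5*(-2) = 2.0 (real case: no conjugation needed)
print(matrix_ip(u, v, A))        # the A-weighted inner product of u and v
print(matrix_ip(u, u, A) > 0)    # positivity on a nonzero vector: True
```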

From the definition one can easily deduce the following properties of an inner
product:

xv, 0y “ x0, vy “ 0,
xu, v ` wy “ xu, vy ` xu, wy,
xu, αvy “ ᾱxu, vy,
for all u, v, w P V and α P F.



Definition 5.3. Let V be a vector space over F. A function

}¨}:V ÑR

is called a norm on V if:

• (positivity) }v} ě 0, v P V, }v} “ 0 ô v “ 0 ;

• (homogeneity) }αv} “ |α| ¨ }v}, @α P F, @v P V ;

• (triangle inequality) }u ` v} ď }u} ` }v}, @u, v P V.

A normed space is a pair pV, } ¨ }q, where V is a vector space and } ¨ } is a norm on
V.

Example 5.4. On the real linear space Rn one can define a norm in several ways.
Indeed, for any x “ px1 , x2 , . . . , xn q P Rn define its norm as
}x} “ px21 ` x22 ` ¨ ¨ ¨ ` x2n q1{2 . One can easily verify that the axioms in the definition
of a norm are satisfied. This norm is called the Euclidean norm.
More generally, for any p P R, p ě 1, we can define
}x} “ p|x1 |p ` |x2 |p ` ¨ ¨ ¨ ` |xn |p q1{p , the so-called p-norm on Rn .
Another way to define a norm on Rn is }x} “ maxt|x1 |, |x2 |, . . . , |xn |u. This is the
so-called maximum norm.
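All of these norms are available in NumPy through a single routine; the snippet below (an illustration only, not part of the text) computes the Euclidean, p- and maximum norms of the same vector.

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

print(np.linalg.norm(x, 2))        # Euclidean norm: sqrt(9 + 16 + 144) = 13.0
print(np.linalg.norm(x, 1))        # 1-norm: |3| + |4| + |12| = 19.0
print(np.linalg.norm(x, 3))        # general p-norm with p = 3
print(np.linalg.norm(x, np.inf))   # maximum norm: 12.0
```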

Definition 5.5. Let X be a nonempty set. A function d : X ˆ X Ñ R satisfying


the following properties:

• (positivity) dpx, yq ě 0, @x, y P X and dpx, yq “ 0 ô x “ y;

• (symmetry) dpx, yq “ dpy, xq, @x, y P X;

• (triangle inequality) dpx, yq ď dpx, zq ` dpz, yq, @x, y, z P X;



is called a metric or distance on X. A set X with a metric defined on it is called a


metric space.

Example 5.6. Let X be an arbitrary set. One can define a distance on X by


dpx, yq “ 0 if x “ y, and dpx, yq “ 1 otherwise.

This metric is called the discrete metric on X. On Rn the Chebyshev distance is


defined as

dpx, yq “ max1ďiďn |xi ´ yi |, where x “ px1 , x2 , . . . , xn q, y “ py1 , y2 , . . . , yn q P Rn .

In this course we are mainly interested in the inner product spaces. But we should
a
point out that an inner product on V defines a norm, by }v} “ xv, vy for v P V ,
and a norm on V defines a metric by dpv, wq “ }w ´ v}, for v, w P V .
On the other hand, from the point of view of generality, metrics are the most
general (they can be defined on any set), followed by norms (which require the
linearity of the underlying space), and finally inner products. It should be pointed
out that every inner product generates a norm, but not every norm comes from an
inner product; the maximum norm defined above is such an example.
For an inner product space pV, x¨, ¨yq the following identity is true:
xΣm i“1 αi vi , Σn j“1 βj wj y “ Σm i“1 Σn j“1 αi β̄j xvi , wj y.

Definition 5.7. Two vectors u, v P V are said to be orthogonal (uKv) if xu, vy “ 0.

In a real inner product space we can define the angle of two vectors as
∠pv, wq “ arccos pxv, wy{p}v} ¨ }w}qq .
We have
vKw ô xv, wy “ 0 ô ∠pv, wq “ π{2.
Theorem 5.8. (Parallelogram law) Let V be an inner product space and
u, v P V . Then

}u ` v}2 ` }u ´ v}2 “ 2p}u}2 ` }v}2 q.

Proof.

}u ` v}2 ` }u ´ v}2 “ xu ` v, u ` vy ` xu ´ v, u ´ vy “ xu, uy ` xu, vy ` xv, uy ` xv, vy

`xu, uy ´ xu, vy ´ xv, uy ` xv, vy

“ 2p}u}2 ` }v}2q.

Theorem 5.9. (Pythagorean Theorem) Let V be an inner product space, and


u, v P V orthogonal vectors. Then

}u ` v}2 “ }u}2 ` }v}2 .

Proof.

}u ` v}2 “ xu ` v, u ` vy

“ xu, uy ` xu, vy ` xv, uy ` xv, vy

“ }u}2 ` }v}2 .

Now we are going to prove one of the most important inequalities in mathematics,
namely the Cauchy-Schwarz inequality. There are several methods of proof; we will
give one related to our aims.

Consider u, v P V . We want to write u as a sum between a vector collinear to v


and a vector orthogonal to v. Let α P F and write u as u “ αv ` pu ´ αvq.
Imposing now the condition that v is orthogonal to pu ´ αvq, one obtains

0 “ xu ´ αv, vy “ xu, vy ´ α}v}2,


so one has to choose α “ xu, vy{}v}2 , and the decomposition is
u “ pxu, vy{}v}2 qv ` pu ´ pxu, vy{}v}2 qvq.
Theorem 5.10. (Cauchy-Schwarz Inequality) Let V be an inner product space
and u, v P V . Then
|xu, vy| ď }u} ¨ }v}.

The equality holds iff one of u, v is a scalar multiple of the other (u and v are
collinear).

Proof. Let u, v P V . If v “ 0 both sides of the inequality are 0 and the desired
result holds. Suppose that v ‰ 0. Write u “ pxu, vy{}v}2 qv ` pu ´ pxu, vy{}v}2 qvq.
Taking into account that the vectors pxu, vy{}v}2 qv and u ´ pxu, vy{}v}2 qv are
orthogonal, by the Pythagorean
theorem we obtain
}u}2 “ }pxu, vy{}v}2 qv}2 ` }u ´ pxu, vy{}v}2 qv}2
“ |xu, vy|2 {}v}2 ` }u ´ pxu, vy{}v}2 qv}2
ě |xu, vy|2 {}v}2 ,

an inequality equivalent to the one in the theorem.
We have equality iff u ´ pxu, vy{}v}2 qv “ 0, that is, iff u is a scalar multiple of v.

5.2 Orthonormal Bases


Definition 5.11. Let pV, x¨, ¨yq be an inner product space and let I be an arbitrary
index set. A family of vectors A “ tei P V |i P Iu is called an orthogonal family, if
xei , ej y “ 0 for every i, j P I, i ‰ j. The family A is called orthonormal if it is
orthogonal and }ei } “ 1 for every i P I.

One of the reasons one studies orthonormal families is that in such special
bases the computations are much simpler.

Proposition 5.12. If pe1 , e2 , . . . , em q is an orthonormal family of vectors in V ,


then

}α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αm em }2 “ |α1 |2 ` |α2 |2 ` ¨ ¨ ¨ ` |αm |2

for all α1 , α2 , . . . , αm P F.

Proof. Apply the Pythagorean Theorem, that is,
}α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αm em }2 “ |α1 |2 }e1 }2 ` |α2 |2 }e2 }2 ` ¨ ¨ ¨ ` |αm |2 }em }2 .
The conclusion follows taking into account that }ei } “ 1, i “ 1, m.

Corollary 5.13. Every orthonormal list of vectors is linearly independent.

Proof. Let pe1 , e2 , . . . , em q be an orthonormal list of vectors in V and


α1 , α2 , . . . , αm P F with

α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αm em “ 0.

It follows that }α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αm em }2 “ |α1 |2 ` |α2 |2 ` ¨ ¨ ¨ ` |αm |2 “ 0, that is


αj “ 0, j “ 1, m.

An orthonormal basis of an inner product vector space V is a basis of V which is


also an orthonormal list of V . It is clear that every orthonormal list of vectors of
length dim V is an orthonormal basis (because it is linearly independent, being
orthonormal).

Theorem 5.14. Let pe1 , e2 , . . . , en q be an orthonormal basis of an inner product


space V . If v “ α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αn en P V , then

• αi “ xv, ei y, for all i P t1, 2, . . . , nu and


• }v}2 “ Σn i“1 |xv, ei y|2 .

Proof. Since v “ α1 e1 ` α2 e2 ` ¨ ¨ ¨ ` αn en , by taking the inner product in both


sides with ei we have
xv, ei y “ α1 xe1 , ei y ` α2 xe2 , ei y ` ¨ ¨ ¨ ` αi xei , ei y ` ¨ ¨ ¨ ` αn xen , ei y “ αi .
The second assertion comes from applying the previous proposition. Indeed,
}v}2 “ }α1 e1 ` ¨ ¨ ¨ ` αn en }2 “ |α1 |2 ` ¨ ¨ ¨ ` |αn |2 “ Σn i“1 |xv, ei y|2 .

Up to now we have an idea of the usefulness of orthonormal bases: the
advantage is that in an orthonormal basis the computations are easy, as in the
Euclidean two or three dimensional spaces. But how does one find them?
The next result gives an answer to this question. It is a well known algorithm in
linear algebra, called the Gram-Schmidt procedure, and it gives a method for
turning a linearly independent list into an orthonormal one with the same span as
the original one.

Theorem 5.15. Gram-Schmidt If pv1 , v2 , . . . , vm q is a linearly independent set


of vectors in V , then there exists an orthonormal set of vectors pe1 , . . . em q in V,

such that
spanpv1 , v2 , . . . , vk q “ spanpe1 , e2 . . . , ek q

for every k P t1, 2, . . . , mu.

Proof. Let pv1 , v2 , . . . , vm q be a linearly independent set of vectors. The family of


orthonormal vectors pe1 , e2 , . . . , em q will be constructed inductively. Start with
e1 “ v1 {}v1 }. Suppose now that j ą 1 and an orthonormal family pe1 , e2 , . . . , ej´1 q has
been constructed such that

spanpv1 , v2 , . . . , vj´1q “ spanpe1 , e2 , . . . , ej´1 q

Consider
ej “ pvj ´ xvj , e1 ye1 ´ ¨ ¨ ¨ ´ xvj , ej´1 yej´1 q { }vj ´ xvj , e1 ye1 ´ ¨ ¨ ¨ ´ xvj , ej´1 yej´1 }.
Since the list pv1 , v2 , . . . , vm q is linearly independent, it follows that vj is not in
spanpv1 , v2 , . . . , vj´1q, and thus is not in spanpe1 , e2 , . . . , ej´1q. Hence ej is well
defined, and }ej } “ 1. By direct computations it follows that for 1 ď k ă j one has
xej , ek y “ xpvj ´ xvj , e1 ye1 ´ ¨ ¨ ¨ ´ xvj , ej´1 yej´1 q{}vj ´ xvj , e1 ye1 ´ ¨ ¨ ¨ ´ xvj , ej´1 yej´1 }, ek y
“ pxvj , ek y ´ xvj , ek yq{}vj ´ xvj , e1 ye1 ´ ¨ ¨ ¨ ´ xvj , ej´1 yej´1 }
“ 0,

thus pe1 , e2 , . . . , ej q is an orthonormal family. By the definition of ej one can see


that vj P spanpe1 , e2 , . . . , ej q, which gives (together with our hypothesis of
induction), that
spanpv1 , v2 , . . . , vj q Ă spanpe1 , e2 , . . . , ej q

Both lists being linearly independent (the first one by hypothesis and the second
one by orthonormality), it follows that the generated subspaces above have the
same dimension j, so they are equal.

Remark 5.16. If in the Gram-Schmidt process we do not normalize the vectors


we obtain an orthogonal basis instead of an orthonormal one.

Example 5.17. Orthonormalize the following list of vectors in R4 :

tv1 “ p0, 1, 1, 0q, v2 “ p0, 4, 0, 1q, v3 “ p1, ´1, 1, 0q, v4 “ p1, 3, 0, 1qu.

First we will orthogonalize by using the Gram-Schmidt procedure.


Let u1 “ v1 “ p0, 1, 1, 0q.

u2 “ v2 ´ pxv2 , u1 y{xu1 , u1 yqu1 “ p0, 4, 0, 1q ´ p4{2qp0, 1, 1, 0q “ p0, 2, ´2, 1q.
u3 “ v3 ´ pxv3 , u1 y{xu1 , u1 yqu1 ´ pxv3 , u2 y{xu2 , u2 yqu2 “ p1, ´1{9, 1{9, 4{9q.
u4 “ v4 ´ pxv4 , u1 y{xu1 , u1 yqu1 ´ pxv4 , u2 y{xu2 , u2 yqu2 ´ pxv4 , u3 y{xu3 , u3 yqu3
   “ p1{11, 1{22, ´1{22, ´2{11q.
It can easily be verified that the list tu1 , u2 , u3 , u4 u is orthogonal. Take now
wi “ ui {}ui }, i “ 1, 4. We obtain
w1 “ p0, 1{√2, 1{√2, 0q,
w2 “ p0, 2{3, ´2{3, 1{3q,
w3 “ p3{√11, ´1{p3√11q, 1{p3√11q, 4{p3√11qq,
w4 “ p√22{11, √22{22, ´√22{22, ´2√22{11q.
Obviously the list tw1 , w2 , w3 , w4 u is orthonormal.
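The Gram-Schmidt procedure translates directly into code. The sketch below (NumPy; the function name gram_schmidt is chosen only for this illustration) orthonormalizes the list from Example 5.17 and reproduces the vectors w1 , . . . , w4 up to rounding.

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal list spanning the same subspaces as the input list."""
    ortho = []
    for v in vectors:
        u = v.astype(float).copy()
        for e in ortho:
            u -= np.dot(v, e) * e          # subtract the component of v along e
        ortho.append(u / np.linalg.norm(u))
    return ortho

v1, v2 = np.array([0, 1, 1, 0]), np.array([0, 4, 0, 1])
v3, v4 = np.array([1, -1, 1, 0]), np.array([1, 3, 0, 1])

for w in gram_schmidt([v1, v2, v3, v4]):
    print(np.round(w, 4))
# The pairwise dot products of the returned vectors are 0 and each has norm 1.
```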
Now we can state the main results in this section.

Corollary 5.18. Every finite dimensional inner product space has an
orthonormal basis.

Proof. Choose a basis of V , apply the Gram-Schmidt procedure to it and obtain


an orthonormal list of length equal to dim V . It follows that the list is a basis,
being linearly independent.

The next proposition shows that any orthonormal list can be extended to an
orthonormal basis.

Proposition 5.19. Every orthonormal family of vectors can be extended to an
orthonormal basis of V .

Proof. Suppose pe1 , e2 , . . . , em q is an orthonormal family of vectors. Being linearly
independent, it can be extended to a basis, pe1 , e2 , . . . , em , vm`1 , . . . , vn q. Applying
now the Gram-Schmidt procedure to pe1 , e2 , . . . , em , vm`1 , . . . , vn q, we obtain the
list pe1 , e2 , . . . , em , fm`1 , . . . , fn q (note that the Gram-Schmidt procedure leaves
the first m entries unchanged, they being already orthonormal). Hence we have an
extension to an orthonormal basis.

Corollary 5.20. Suppose that T P EndpV q. If T has an upper triangular form


with respect to some basis of V , then T has an upper triangular form with respect
to some orthonormal basis of V .

Corollary 5.21. Suppose that V is a complex vector space and T P EndpV q. Then
T has an upper triangular form with respect to some orthonormal basis of V .

5.3 Orthogonal complement


Let U Ď V be a subset of an inner product space V . The orthogonal complement
of U, denoted by U K is the set of all vectors in V which are orthogonal to every

vector in U i.e.:
U K “ tv P V |xv, uy “ 0, @u P Uu.

It can easily be verified that U K is a subspace of V , V K “ t0u and t0uK “ V , as


well that U1 Ď U2 ñ U2K Ď U1K .

Theorem 5.22. If U is a subspace of V , then

V “ U ‘ UK

Proof. Suppose that U is a subspace of V . We will show that

V “ U ` UK

Let te1 , . . . , em u be an orthonormal basis of U and v P V . We have

v “ pxv, e1ye1 ` ¨ ¨ ¨ ` xv, em yem q ` pv ´ xv, e1 ye1 ´ ¨ ¨ ¨ ´ xv, em yem q

Denote the first vector by u and the second by w. Clearly u P U. For each
j P t1, 2, . . . , mu one has

xw, ej y “ xv, ej y ´ xv, ej y

“ 0

Thus w is orthogonal to every vector in the basis of U, that is w P U K ,


consequently
V “ U ` U K.

We will show now that U X U K “ t0u. Suppose that v P U X U K . Then v is


orthogonal to every vector in U, hence xv, vy “ 0, that is v “ 0. The relations
V “ U ` U K and U X U K “ t0u imply the conclusion of the theorem.

Proposition 5.23. If U1 , U2 are subspaces of V then



a) U1 “ pU1K qK .

b) pU1 ` U2 qK “ U1K X U2K .

c) pU1 X U2 qK “ U1K ` U2K .

Proof. a) We show first that U1 Ď pU1K qK . Let u1 P U1 . Then for all v P U1K one has
vKu1 . In other words xu1 , vy “ 0 for all v P U1K . Hence u1 P pU1K qK .
Assume now that pU1K qK Ę U1 . Hence, there exists u2 P pU1K qK zU1 . Since
V “ U1 ‘ U1K we obtain that there exists u1 P U1 such that u2 ´ u1 P U1K p˚q.
On the other hand, according to the first part of proof u1 P pU1K qK and pU1K qK is a
linear subspace, hence u2 ´ u1 P pU1K qK . Hence, for all v P U1K we have
pu2 ´ u1 qKv p˚˚q.
p˚q and p˚˚q imply that pu2 ´ u1 qKpu2 ´ u1 q, that is, xu2 ´ u1 , u2 ´ u1 y “ 0, which
leads to u1 “ u2 , a contradiction.
b) For v P pU1 ` U2 qK one has xv, u1 ` u2 y “ 0 for all u1 ` u2 P U1 ` U2 . By taking
u2 “ 0 we obtain that v P U1K and by taking u1 “ 0 we obtain that v P U2K . Hence
pU1 ` U2 qK Ď U1K X U2K .
Conversely, let v P U1K X U2K . Then xv, u1y “ 0 for all u1 P U1 and xv, u2y “ 0 for all
u2 P U2 . Hence xv, u1 ` u2y “ 0 for all u1 P U1 and u2 P U2 , that is v P pU1 ` U2 qK .
c) According to a) ppU1 X U2 qK qK “ U1 X U2 .
According to b) and a) pU1K ` U2K qK “ pU1K qK X pU2K qK “ U1 X U2 .
Hence, ppU1 X U2 qK qK “ pU1K ` U2K qK which leads to pU1 X U2 qK “ U1K ` U2K .

Example 5.24. Let U “ tpx1 , x2 , x3 , x4 q P R4 |x1 ´ x2 ` x3 ´ x4 “ 0u. Knowing


that U is a subspace of R4 , compute dim U and U K .

It is easy to see that


(
U “ px1 , x2 , x3 , x4 q P R4 |x1 ´ x2 ` x3 ´ x4 “ 0

“ tpx1 , x2 , x3 , x1 ´ x2 ` x3 q |x1 , x2 , x3 P Ru

“ tx1 p1, 0, 0, 1q ` x2 p0, 1, 0, ´1q ` x3 p0, 0, 1, 1q |x1 , x2 , x3 P Ru

“ span tp1, 0, 0, 1q , p0, 1, 0, ´1q , p0, 0, 1, 1qu .

The three vectors p1, 0, 0, 1q , p0, 1, 0, ´1q , p0, 0, 1, 1q are linearly independent (the
rank of the matrix they form is 3), so they form a basis of U and dim U “ 3.
The dimension formula
dim U ` dim U K “ dim R4

tells us that dim U K “ 1, so U K is generated by a single vector. A vector that


generates U K is p1, ´1, 1, ´1q, the vector formed by the coefficients that appear in
the linear equation that defines U. This is true because the left hand side of the
equation is exactly the scalar product between uK “ p1, ´1, 1, ´1q and a vector
v “ px1 , x2 , x3 , x4 q P U.
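Numerically, U K can be computed as the null space of a matrix whose rows span U. The following sketch (NumPy only; an illustration, not part of the text) recovers a generator of U K for the subspace of Example 5.24 from the singular value decomposition.

```python
import numpy as np

# Rows spanning U (the basis found in Example 5.24)
B = np.array([[1, 0, 0, 1],
              [0, 1, 0, -1],
              [0, 0, 1, 1]], dtype=float)

# The orthogonal complement of the row space of B is the null space of B:
# the rows of Vt beyond the numerical rank span it.
_, s, Vt = np.linalg.svd(B)
rank = np.sum(s > 1e-12)
complement = Vt[rank:]

print(complement)   # one row, proportional to (1, -1, 1, -1)
```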

5.4 Linear manifolds


Let V be a vector space over the field F.

Definition 5.25. A set L “ v0 ` VL “ tv0 ` v|v P VL u , where v0 P V is a vector


and VL Ă V is a subspace of V is called a linear manifold (or linear variety). The
subspace VL is called the director subspace of the linear variety.

Remark 5.26. One can easily verify the following.

• A linear manifold is a translated subspace, that is L “ f pVL q where


f : V Ñ V , f pvq “ v0 ` v.

• if v0 P VL then L “ VL .

• v0 P L because v0 “ v0 ` 0 P v0 ` VL .

• for v1 , v2 P L we have v1 ´ v2 P VL .

• for every v1 P L we have L “ v1 ` VL .

• L1 “ L2 , where L1 “ v0 ` VL1 and L2 “ v01 ` VL2 iff VL1 “ VL2 and


v0 ´ v01 P VL1 .

Definition 5.27. We would like to emphasize that:

1. The dimension of a linear manifold is the dimension of its director subspace.

2. Two linear manifolds L1 and L2 are called orthogonal if VL1 KVL2 .

3. Two linear manifolds L1 and L2 are called parallel if VL1 Ă VL2 or VL2 Ă VL1 .

Let L “ v0 ` VL be a linear manifold in a finitely dimensional vector space V . For


dim L “ k ď n “ dim V one can choose in the director subspace VL a basis of finite
dimension tv1 , . . . , vk u. We have

L “ tv “ v0 ` α1 v1 ` ¨ ¨ ¨ ` αk vk |αi P F, i “ 1, ku

We can consider an arbitrary basis (fixed) in V , let’s say E “ te1 , . . . , en u and if we


use the column vectors for the coordinates in this basis, i.e.
vrEs “ px1 , . . . , xn qJ , v0rEs “ px01 , . . . , x0n qJ , vjrEs “ px1j , . . . , xnj qJ , j “ 1, k, one has
the parametric equations of the linear manifold
x1 “ x01 ` α1 x11 ` ¨ ¨ ¨ ` αk x1k
. . .
xn “ x0n ` α1 xn1 ` ¨ ¨ ¨ ` αk xnk .

The rank of the matrix pxij q i“1,n, j“1,k is k because the vectors v1 , . . . , vk are linearly
independent.
It is worthwhile to mention that:

1. a linear manifold of dimension one is called a line.

2. a linear manifold of dimension two is called a plane.

3. a linear manifold of dimension k is called a k-plane.

4. a linear manifold of dimension n ´ 1 in an n-dimensional vector space is
called a hyperplane.

Theorem 5.28. Let us consider V an n-dimensional vector space over the field F.
Then any subspace of V is the kernel of a surjective linear map.

Proof. Suppose VL is a subspace of V of dimension k. Choose a basis te1 , . . . , ek u


in VL and complete it to a basis te1 , . . . , ek , ek`1 , . . . , en u of V . Consider
U “ spantek`1 , . . . , en u. Let T : V Ñ U given by

T pe1 q “ 0, . . . T pek q “ 0, T pek`1q “ ek`1 , . . . , T pen q “ en .

Obviously, T pα1 e1 ` ¨ ¨ ¨ ` αn en q “ α1 T pe1 q ` ¨ ¨ ¨ ` αn T pen q “ αk`1 ek`1 ` ¨ ¨ ¨ ` αn en


defines a linear map. It is also clear that ker T “ VL as well that T is surjective,
i.e. im T “ U.

Remark 5.29. In fact the map constructed in the previous theorem is nothing
but the projection on U parallel to the space spante1 , . . . , ek u.

Theorem 5.30. Let V, U two linear spaces over the same field F. If T : V Ñ U is
a surjective linear map, then for every u0 P U, the set L “ tv P V |T pvq “ u0 u is a
linear manifold.

Proof. T being surjective, there exists v0 P V with T pv0 q “ u0 . We will show that
tv ´ v0 |v P Lu “ ker T .
Let v P L. We have T pv ´ v0 q “ T pvq ´ T pv0 q “ 0, so tv ´ v0 |v P Lu Ď ker T .
Let v1 P ker T , i.e. T pv1 q “ 0. Write v1 “ pv1 ` v0 q ´ v0 . T pv1 ` v0 q “ u0, so
pv1 ` v0 q P L. Hence, v1 P tv ´ v0 |v P Lu or, in other words ker T Ď tv ´ v0 |v P Lu.
Consequently L “ v0 ` ker T, which shows that L is a linear manifold.

The previous theorems give rise to the next:

Theorem 5.31. Let V a linear space of dimension n. Then, for every linear
manifold L Ă V of dimension dim L “ k ă n, there exists an n ´ k-dimensional
vector space U, a surjective linear map T : V Ñ U and a vector u P U such that

L “ tv P V |T pvq “ uu.

Proof. Indeed, consider L “ v0 ` VL , where the dimension of the director subspace


VL “ k. Choose a basis te1 , . . . , ek u in VL and complete it to a basis
te1 , . . . , ek , ek`1, . . . , en u of V . Consider U “ spantek`1 , . . . , en u. Obviously
dim U “ n ´ k. According to a previous theorem the linear map
T : V Ñ U, T pα1 e1 ` ¨ ¨ ¨ ` αk ek ` αk`1 ek`1 ` ¨ ¨ ¨ ` αn en q “ αk`1 ek`1 ` ¨ ¨ ¨ ` αn en is
surjective and ker T “ VL . Let T pv0 q “ u. Then, according to the proof of the
previous theorem L “ tv P V |T pvq “ uu.

Remark 5.32. If we choose in V and U two bases and we write the linear map by
matrix notation MT v “ u we have the implicit equations of the linear manifold L,
a11 v1 ` a12 v2 ` ¨ ¨ ¨ ` a1n vn “ u1
. . .
ap1 v1 ` ap2 v2 ` ¨ ¨ ¨ ` apn vn “ up ,

where p “ n ´ k “ dim U “ rank paij q i“1,p, j“1,n .

A hyperplane has only one equation

a1 v1 ` ¨ ¨ ¨ ` an vn “ u0

The director subspace can be seen as

VL “ tv “ v1 e1 ` ¨ ¨ ¨ ` vn en |f pvq “ 0u “ ker f,

where f is the linear map (linear functional) f : V Ñ R with


f pe1 q “ a1 , . . . , f pen q “ an .
If we think of the hyperplane as a linear manifold in the euclidean space Rn , the
equation can be written as

xv, ay “ u0 , where a “ a1 e1 ` ¨ ¨ ¨ ` an en , u0 P R.

The vector a is called the normal vector to the hyperplane.


Generally in a euclidean space the equations of a linear manifold are

xv, v1 y “ u1 , . . . , xv, vp y “ up ,

where the vectors v1 , . . . vp are linearly independent. The director subspace is given
by
xv, v1 y “ 0, . . . , xv, vp y “ 0,

so, the vectors v1 , . . . , vp are orthogonal to the director subspace VL .

5.5 The Gram determinant. Distances.


In this section we will explain how we can measure the distance between some
”linear sets”, which are linear manifolds.
Let pV, x¨, ¨yq be an inner product space and consider the vectors vi P V , i “ 1, k.
The determinant

Gpv1 , . . . , vk q “ det
xv1 , v1 y   xv1 , v2 y   . . .   xv1 , vk y
xv2 , v1 y   xv2 , v2 y   . . .   xv2 , vk y
. . .
xvk , v1 y   xvk , v2 y   . . .   xvk , vk y
is called the Gram determinant of the vectors v1 , . . . , vk .

Proposition 5.33. In an inner product space the vectors v1 , . . . , vk are linearly


independent iff Gpv1 , . . . , vk q ‰ 0.

Proof. Let us consider the homogeneous system G ¨ px1 , x2 , . . . , xk qJ “ p0, 0, . . . , 0qJ ,
where G is the matrix whose entries are those of the Gram determinant above.

This system can be written as
xv1 , vy “ 0, . . . , xvk , vy “ 0, where v “ x1 v1 ` ¨ ¨ ¨ ` xk vk .

The following statements are equivalent: the vectors v1 , . . . , vk are linearly
dependent ðñ there exist x1 , . . . , xk P F, not all zero, such that v “ 0 ðñ the
homogeneous system has a nontrivial solution ðñ det G “ 0.

Proposition 5.34. If te1 , . . . , en u are linearly independent vectors and tf1 , . . . , fn u
are the vectors obtained by the Gram-Schmidt orthogonalization process (without
normalization), one has:

Gpe1 , . . . , en q “ Gpf1 , . . . , fn q “ }f1 }2 ¨ . . . ¨ }fn }2

Proof. In Gpf1 , . . . , fn q replace fn by en ´ a1 f1 ´ ¨ ¨ ¨ ´ an´1 fn´1 and we obtain

Gpf1 , . . . , fn q “ Gpf1 , . . . , fn´1 , en q.

By an inductive process the relation in the theorem follows. Obviously
Gpf1 , . . . , fn q “ }f1 }2 ¨ . . . ¨ }fn }2 because the determinant has nonzero entries only
on the diagonal, namely xf1 , f1 y, . . . , xfn , fn y.

Remark 5.35. Observe that:


• }fk } “ √pGpe1 , . . . , ek q{Gpe1 , . . . , ek´1 qq;

• writing fk “ ek ´ a1 f1 ´ ¨ ¨ ¨ ´ ak´1 fk´1 “ ek ´ vk one obtains ek “ fk ` vk , with
vk P spante1 , . . . , ek´1 u and fk P spante1 , . . . , ek´1 uK , so fk is the component of ek
orthogonal to the space generated by te1 , . . . , ek´1 u.

The distance between a vector and a subspace


Let U be a subspace of the inner product space V . The distance between a vector
v and the subspace U is

dpv, Uq “ inf dpv, wq “ inf }v ´ w}.


wPU wPU

Remark 5.36. The linear structure implies a very simple but useful fact:
dpv, Uq “ dpv ` w, w ` Uq
for every v, w P V and U Ď V ; that is, the distance is invariant under translations.

We are interested in the special case when U is a subspace.

Proposition 5.37. The distance between a vector v P V and a subspace U is given


by
dpv, Uq “ }v K } “ √pGpe1 , . . . , ek , vq{Gpe1 , . . . , ek qq,
where v “ v1 ` v K , v1 P U, v K P U K and te1 , . . . , ek u is a basis in U.

Proof. First we prove that }v K } “ }v ´ v1 } ď }v ´ u}, @u P U. We have

}v K } ď }v ´ u} ô

xv K , v K y ď xv K ` v1 ´ u, v K ` v1 ´ uy ô

xv K , v K y ď xv K , v K y ` xv1 ´ u, v1 ´ uy.
The second part of the equality, i.e. }v K } “ √pGpe1 , . . . , ek , vq{Gpe1 , . . . , ek qq,
follows from the previous remark.

Definition 5.38. If e1 , . . . , ek are vectors in V , the volume of the k-parallelepiped
constructed on the vectors e1 , . . . , ek is defined by Vk pe1 , . . . , ek q “ √Gpe1 , . . . , ek q.

We have the following inductive relation

Vk`1 pe1 , . . . , ek , ek`1 q “ Vk pe1 , . . . , ek qdpek`1 , spante1 , . . . , ek uq.
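The Gram determinant formulas above are straightforward to evaluate on a computer. The sketch below (NumPy; the helper names gram_det and dist_to_subspace are chosen only for this illustration) computes dpv, Uq for a simple example where the answer is known.

```python
import numpy as np

def gram_det(*vectors):
    """Determinant of the Gram matrix G(v1, ..., vk)."""
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)

def dist_to_subspace(v, basis):
    """d(v, span(basis)) = sqrt(G(e1, ..., ek, v) / G(e1, ..., ek))."""
    return np.sqrt(gram_det(*basis, v) / gram_det(*basis))

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = np.array([1.0, 2.0, 3.0])

print(dist_to_subspace(v, [e1, e2]))   # 3.0: the distance to the xy-plane is |z|
```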

The distance between a vector and a linear manifold


Let L “ v0 ` VL be a linear manifold, and let v be a vector in a finite
dimensional inner product space V . The distance induced by the norm is invariant
under translations, that is, for all v1 , v2 P V one has
dpv1 , v2 q “ dpv1 ` v0 , v2 ` v0 q ô }v1 ´ v2 } “ }v1 ` v0 ´ pv2 ` v0 q}.

That means that we have

dpv, Lq “ inf dpv, wq “ inf dpv, v0 ` vL q


wPL vL PVL

“ inf dpv ´ v0 , vL q
vL PVL

“ dpv ´ v0 , VL q.

Finally,
dpv, Lq “ dpv ´ v0 , VL q “ √pGpe1 , . . . , ek , v ´ v0 q{Gpe1 , . . . , ek qq,
where te1 , . . . , ek u is a basis in VL .

Example 5.39. Consider the linear manifolds


L “ tpx, y, z, tq P R4 |x ` y ` t “ 2, x ´ 2y ` z ` t “ 3u,
K “ tpx, y, z, tq P R4 |x ` y ` z ´ t “ 1, x ` y ` z ` t “ 3u. Find the director
subspaces VL , VK and a basis in VL X VK . Find the distance of v “ p1, 0, 2, 2q from
L, respectively K, and show that the distance between L and K is 0.

Since L “ v0 ` VL and K “ u0 ` VK it follows that VL “ L ´ v0 and VK “ K ´ u0


for some v0 P L, u0 P K. By taking x “ y “ 0 in the equations that describe L we

obtain t “ 2, z “ 1, hence v0 “ p0, 0, 1, 2q P L. Analogously u0 “ p0, 0, 2, 1q P K.


Hence the director subspaces are

VL “ tpx, y, z ´ 1, t ´ 2q P R4 |x ` y ` t “ 2, x ´ 2y ` z ` t “ 3u “

tpx, y, z, tq P R4 |x ` y ` t “ 0, x ´ 2y ` z ` t “ 0u,

respectively

VK “ tpx, y, z ´ 2, t ´ 1q P R4 |x ` y ` z ´ t “ 1, x ` y ` z ` t “ 3u “

tpx, y, z, tq P R4 |x ` y ` z ´ t “ 0, x ` y ` z ` t “ 0u.
By solving the homogeneous systems x ` y ` t “ 0, x ´ 2y ` z ` t “ 0, respectively
x ` y ` z ´ t “ 0, x ` y ` z ` t “ 0, we obtain that

VL “ spante1 “ p´1, 1, 3, 0q, e2 “ p´1, 0, 0, 1qu,

respectively

VK “ spante3 “ p´1, 1, 0, 0q, e4 “ p´1, 0, 1, 0qu.

Since detre1 |e2 |e3 |e4 s “ 3 ‰ 0 the vectors e1 , e2 , e3 , e4 are linearly independent,
hence VL X VK “ t0u. The distance of v from L is
dpv, Lq “ dpv ´ v0 , VL q “ √pGpe1 , e2 , v ´ v0 q{Gpe1 , e2 qq “ √p19{21q,
meanwhile
dpv, Kq “ dpv ´ u0 , VK q “ √pGpe3 , e4 , v ´ u0 q{Gpe3 , e4 qq “ √p4{3q.

It is obvious that K X L ‰ H, since the system
x ` y ` t “ 2, x ´ 2y ` z ` t “ 3, x ` y ` z ´ t “ 1, x ` y ` z ` t “ 3
is consistent, having the solution p1, 0, 1, 1q, hence we must have

dpL, Kq “ 0.

Let us consider now the hyperplane H of equation

xv ´ v0 , ny “ 0 .

The director subspace is VH “ tv P V | xv, ny “ 0u and the distance is
dpv, Hq “ dpv ´ v0 , VH q.

One can decompose v ´ v0 “ αn ` vH , where vH is the orthogonal projection of


v ´ v0 on VH and αn is the normal component of v ´ v0 with respect to VH . It
means that
dpv, Hq “ }αn}

Let us compute a little now, taking into account the previous observations about
the tangential and normal part:

xv ´ v0 , ny “ xαn ` vH , ny

“ αxn, ny ` xvH , ny

“ α}n}2 ` 0

So, we obtained
|xv ´ v0 , ny|{}n} “ |α|}n} “ }αn},

that is
dpv, Hq “ |xv ´ v0 , ny|{}n}.
In the case that we have an orthonormal basis at hand, the equation of the
hyperplane H is
a1 x1 ` ¨ ¨ ¨ ` ak xk ` b “ 0 ,

so the relation is now
dpv, Hq “ |a1 v1 ` ¨ ¨ ¨ ` ak vk ` b| { √pa21 ` ¨ ¨ ¨ ` a2k q.
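This last formula is a one-liner in code. A small NumPy sketch (illustrative only; the vector is the one from Example 5.39 and the hyperplane is the one that appears later in Example 5.41):

```python
import numpy as np

def dist_point_hyperplane(v, a, b):
    """Distance from v to the hyperplane a1*x1 + ... + ak*xk + b = 0 (orthonormal coordinates)."""
    return abs(np.dot(a, v) + b) / np.linalg.norm(a)

# distance from (1, 0, 2, 2) to the hyperplane x + y + z + t - 1 = 0
print(dist_point_hyperplane(np.array([1.0, 0.0, 2.0, 2.0]),
                            np.array([1.0, 1.0, 1.0, 1.0]), -1.0))   # 4/2 = 2.0
```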
The distance between two linear manifolds
For A and B sets in a metric space, the distance between them is defined as

dpA, Bq “ inftdpa, bq|a P A , b P Bu.

For two linear manifolds L1 “ v1 ` V1 and L2 “ v2 ` V2 it easily follows:

dpL1 , L2 q “ dpv1 ` V1 , v2 ` V2 q “ dpv1 ´ v2 , V1 ´ V2 q (5.1)

“ dpv1 ´ v2 , V1 ` V2 q. (5.2)

This gives us the next proposition.

Proposition 5.40. The distance between the linear manifolds L1 “ v1 ` V1 and


L2 “ v2 ` V2 is equal to the distance between the vector v1 ´ v2 and the sum space
V1 ` V2 .

If we choose a basis in V1 ` V2 , let’s say e1 , . . . , ek , then the following formula holds:
dpL1 , L2 q “ √pGpe1 , . . . , ek , v1 ´ v2 q{Gpe1 , . . . , ek qq.

Some analytic geometry



In this section we are going to apply distance problems in euclidean spaces.


Consider the vector space Rn with the canonical inner product, that is, for
x “ px1 , . . . , xn q, y “ py1 , . . . , yn q P Rn the inner product is given by
xx, yy “ Σn k“1 xk yk .

Consider D1 , D2 two lines (one dimensional linear manifolds), M a point (a zero
dimensional linear manifold, which we identify with the vector xM “ OM ), P a two
dimensional linear manifold (a plane), and H an n ´ 1 dimensional linear manifold
(a hyperplane). The equations of these linear manifolds are:

D1 : x “ x1 ` sd1 ,

D2 : x “ x2 ` td2 ,

M : x “ xM ,

P : x “ xP ` αv1 ` βv2 ,

respectively
H : xx, ny ` b “ 0,

where s, t, α, β, b P R. Recall that two linear manifolds are parallel if the director
space of one of them is included in the director space of the other.
Now we can write down several formulas for distances between linear manifolds.

dpM, D1 q “ √pGpxM ´ x1 , d1 q{Gpd1 qq;
dpM, P q “ √pGpxM ´ xP , v1 , v2 q{Gpv1 , v2 qq;
dpD1 , D2 q “ √pGpx1 ´ x2 , d1 , d2 q{Gpd1 , d2 qq if D1 ∦ D2 ;
dpD1 , D2 q “ √pGpx1 ´ x2 , d1 q{Gpd1 qq if D1 k D2 ;
dpM, Hq “ |xxM , ny ` b|{}n};
dpD1 , P q “ √pGpx1 ´ xP , d1 , v1 , v2 q{Gpd1 , v1 , v2 qq if D1 ∦ P.

Example 5.41. Find the distance between the hyperplane


H “ tpx, y, z, tq P R4 : x ` y ` z ` t “ 1u and the line
D “ tpx, y, z, tq P R4 : x ` y ` z ` t “ 3, x ´ y ´ 3z ´ t “ ´1, 2x ´ 2y ` 3z ` t “ 1u.

Since v0 “ p0, 0, 0, 1q P H its director subspace is VH “ tpx, y, z, tq P R4 :
x ` y ` z ` t “ 0u “ spante1 “ p1, 0, 0, ´1q, e2 “ p0, 1, 0, ´1q, e3 “ p0, 0, 1, ´1qu.
Since u0 “ p1, 1, 0, 1q P D its director subspace is VD “ tpx, y, z, tq P R4 :
x ` y ` z ` t “ 0, x ´ y ´ 3z ´ t “ 0, 2x ´ 2y ` 3z ` t “ 0u “ spante4 “ p1, 1, 1, ´3qu.
We have e4 “ e1 ` e2 ` e3 hence VD Ă VH that is D and H are parallel. Obviously
one can compute their distance by the formula
dpD, Hq “ √pGpe1 , e2 , e3 , v0 ´ u0 q{Gpe1 , e2 , e3 qq.

But observe that the distance between these manifolds is actually the distance
between a point M P D and H, hence it is simpler to compute it from the formula
dpM, Hq “ |xxM , ny ` b|{}n}, with xM “ u0 . Indeed the equation of H is x ` y ` z ` t “ 1,
thus n “ p1, 1, 1, 1q and b “ ´1, hence

dpD, Hq “ |xp1, 1, 0, 1q, p1, 1, 1, 1qy ´ 1| { }p1, 1, 1, 1q} “ 2{2 “ 1.
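The example can be double-checked numerically; the NumPy sketch below (illustrative only) evaluates both the Gram determinant formula and the point-to-hyperplane formula and obtains dpD, Hq “ 1 in both cases.

```python
import numpy as np

def gram_det(*vectors):
    V = np.array(vectors, dtype=float)
    return np.linalg.det(V @ V.T)

e1, e2, e3 = np.array([1., 0., 0., -1.]), np.array([0., 1., 0., -1.]), np.array([0., 0., 1., -1.])
v0, u0 = np.array([0., 0., 0., 1.]), np.array([1., 1., 0., 1.])

# Gram determinant formula for the distance between the parallel manifolds D and H
print(np.sqrt(gram_det(e1, e2, e3, v0 - u0) / gram_det(e1, e2, e3)))   # 1.0

# Point-to-hyperplane formula with n = (1, 1, 1, 1), b = -1, x_M = u0
n = np.array([1., 1., 1., 1.])
print(abs(np.dot(n, u0) - 1) / np.linalg.norm(n))                      # 1.0
```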

5.6 Problems
Problem 5.6.1. Prove that for the nonzero vectors x, y P R2 , it holds

xx, yy “ }x}}y} cos θ,

where θ is the angle between x and y.

Problem 5.6.2. Find the angle between the vectors p´2, 4, 3q and p1, ´2, 3q.

Problem 5.6.3. Find the two unit vectors which are orthogonal to both of the
vectors p´2, 3, ´1q and p1, 1, 1q.

Problem 5.6.4. Let u, v P V , V inner product space. Show that

}u} ď }u ` av}, @a P F ô xu, vy “ 0.

Problem 5.6.5. Prove that
pΣn i“1 ai bi q2 ď pΣn i“1 i a2i q ¨ pΣn i“1 p1{iq b2i q,
for all ai , bi P R, i “ 1, n.

Problem 5.6.6. Let S be the subspace of the inner product space R3 rXs, the
space of polynomials of degree at most 3, generated by the polynomials 1 ´ x2 and
2 ´ x ` x2 , where xf, gy “ ∫_0^1 f pxqgpxqdx. Find a basis for the orthogonal
complement of S.

Problem 5.6.7. Let u, v P V , V inner product space. If

}u} “ 3 , }u ` v} “ 4 , }u ´ v} “ 6,

find }v}.

Problem 5.6.8. Prove or disprove the following statement: there exists an inner
product on R2 such that the norm induced by this inner product satisfies
}px1 , x2 q} “ |x1 | ` |x2 |,
for all px1 , x2 q P R2 .

Problem 5.6.9. Show that the planes P1 : x ´ 3y ` 4z “ 12 and
P2 : 2x ´ 6y ` 8z “ 6 are parallel and then find the distance between them.

Problem 5.6.10. Let V be an inner product space. Then it holds:

}u ` v}2 ´ }u ´ v}2
xu, vy “ , @u, v P V.
4

Problem 5.6.11. If V is a complex vector space with an inner product on it,


show that

}u ` v}2 ´ }u ´ v}2 ` i}u ` iv}2 ´ i}u ´ iv}2


xu, vy “ , @ u, v P V.
4

Problem 5.6.12. Prove that the following set
t1{√p2πq, psin xq{√π, . . . , psin nxq{√π, pcos xq{√π, . . . , pcos nxq{√πu
is orthonormal in Cr´π, πs, endowed with the scalar product
xf, gy “ ∫_{´π}^{π} f pxqgpxqdx.

Problem 5.6.13. Show that the set of all vectors in Rn which are orthogonal to a
given vector v P Rn is a subspace of Rn . What will its dimension be?

Problem 5.6.14. If S is a subspace of a finite dimensional real inner product


space V, prove that S K » V {S.

Problem 5.6.15. Let V be an inner product space and let tv1 , . . . , vm u be a list of
linearly independent vectors from V. How many orthonormal families te1 , . . . , em u
can be constructed by using the Gram-Schmidt procedure, such that
spantv1 , . . . , vi u “ spante1 , . . . , ei u, @ i “ 1, m?

Problem 5.6.16. Orthonormalize the following list of vectors in R4


tp1, 11, 0, 1q, p1, ´2, 1, 1q, p1, 1, 1, 0q, p1, 1, 1, 1qu.

Problem 5.6.17. Let V be an inner product space and let U Ď V subspace. Show
that
dim U K “ dim V ´ dim U.

Problem 5.6.18. Let te1 , . . . , em u be an orthonormal list in the inner product


space V . Show that
}v}2 “ |xv, e1 y|2 ` ¨ ¨ ¨ ` |xv, em y|2

if and only if v P spante1 , . . . , em u.

Problem 5.6.19. Let V be a finite-dimensional real inner product space with a


basis te1 , . . . , en u. Show that for any u, w P V it holds
xu, wy “ rusJ Gpe1 , . . . , en qrws, where rus is the coordinate vector (represented as a
column matrix) of u with respect to the given basis and Gpe1 , . . . , en q is the matrix
having the same entries as the Gram determinant of te1 , . . . , en u.

Problem 5.6.20. Find the distance between the following linear manifolds.

a) L “ tpx, y, z, tq P R4 |x ` y ` t “ 1, x ´ 2y ` z “ ´1u, K “ tpx, y, z, tq P


R4 |y ` 2z ´ t “ 1, x ` y ` z ` t “ 2, x ´ y ´ 2z “ ´4u.

b) L “ tpx, y, z, tq P R4 |x ` y ` t “ 2, x ´ 2y ` z “ 3u, K “ tpx, y, z, tq P


R4 |y ` z ´ t “ 1, 2x ´ y ` z ` t “ 3u.

c) L “ tpx, y, z, tq P R4 |x ` z ` t “ 1, x ` y ` z “ 2u, K “ tpx, y, z, tq P


R4 |y ` t “ 3, x ` t “ 4u.

d) L “ tpx, y, z, tq P R4 |x ` z ` t “ 1, x ` y ` z “ 2, x ´ y ` t “ 2u, K “
tpx, y, z, tq P R4 |2x ` y ` 2z ` t “ 4u.

Problem 5.6.21. Let V be an inner product space, let U Ď V be an arbitrary


subset and let U1 , U2 Ď V be subspaces. Show that U K is a subspace of V , and that
V K “ t0u, respectively t0uK “ V . Furthermore, the following implication holds:
U1 Ď U2 ñ U1K Ě U2K .
6
Operators on inner product spaces.

6.1 Linear functionals and adjoints


A linear functional on a vector space V over the field F is a linear map f : V Ñ F.

Example 6.1. f : F3 Ñ F given by f pv1 , v2 , v3 q “ 3v1 ` 4v2 ´ 5v3 is a linear


functional on F3 .

Assume now that V is an inner product space. For fixed v P V , the map f : V Ñ F
given by f puq “ xu, vy is a linear functional. The next fundamental theorem shows
that in case V is a Hilbert space, every linear continuous functional on V is of
this form. Recall that an inner product space is a Hilbert space if it is complete, that
is, every Cauchy sequence is convergent. In other words, if the sequence pxn q Ď V
satisfies the condition

@ǫ ą 0 Dnǫ P N s.t. @n, m ą nǫ ùñ }xn ´ xm }V ă ǫ

then pxn q is convergent.

Theorem 6.2. Suppose f is a linear continuous functional on the Hilbert space V .


Then there is a unique vector v P V such that


f puq “ xu, vy .

Proof. We will present the proof only in the finite dimensional case. We show first
that there is a vector v P V such that f puq “ xu, vy. Let te1 , . . . , en u be an
orthonormal basis of V . One has

f puq “ f pxu, e1ye1 ` ¨ ¨ ¨ ` xu, en yen q “ xu, e1 yf pe1 q ` . . . xu, en yf pen q

“ xu, f pe1 qe1 ` ¨ ¨ ¨ ` f pen qen y ,

for every u P V . It follows that the vector

v “ f pe1 qe1 ` ¨ ¨ ¨ ` f pen qen

satisfies f puq “ xu, vy for every u P V .


It remains to prove the uniqueness of v. Suppose that there are v1 , v2 P V such that

f puq “ xu, v1 y “ xu, v2y,

for every u P V . It follows that

0 “ xu, v1y ´ xu, v2 y “ xu, v1 ´ v2 y @ u P V

Taking u “ v1 ´ v2 it follows that v1 “ v2 , so v is unique.

Remark 6.3. Note that every linear functional on a finite dimensional Hilbert
space H is continuous. Even more, on every finite dimensional inner product space
H, the inner product defines a norm (metric) such that with the topology induced
by this metric H becomes a Hilbert space.

Let us consider another vector space W over F, and an inner product on it, such
that pW, x¨, ¨yq becomes a Hilbert space.
Let T P LpV, W q be a continuous operator in the topologies induced by the norms
}v}V “ √xv, vyV , respectively }w}W “ √xw, wyW (continuity being understood as in
analysis). We define now the adjoint of T as follows.
Fix w P W . Consider the linear functional on V which maps v to xT pvq, wyW . It
follows that there exists a unique vector T ˚ pwq P V such that

xv, T ˚ pwqyV “ xT pvq, wyW @v P V.

The operator T ˚ : W Ñ V constructed above is called the adjoint of T .

Example 6.4. Let T : R3 Ñ R2 given by T px, y, zq “ py ` 3z, 2xq.


Its adjoint operator is T ˚ : R2 Ñ R3 . Fix pu, vq P R2 . It follows

xpx, y, zq, T ˚pu, vqy “ xT px, y, zq, pu, vqy

“ xpy ` 3z, 2xq, pu, vqy

“ yu ` 3zu ` 2xv

“ xpx, y, zq, p2v, u, 3uqy

forall px, y, zq P R3 . This shows that

T ˚ pu, vq “ p2v, u, 3uq.

Note that in the example above T ˚ is not only a map from R2 to R3 , but also a
linear map.
We shall prove this in general. Let T P LpV, W q, so we want to prove that
T ˚ P LpW, V q.
Let w1 , w2 P W . One has, by definition:

xT pvq, w1 ` w2 y “ xT pvq, w1y ` xT v, w2y

“ xv, T ˚pw1 qy ` xv, T ˚pw2 qy

“ xv, T ˚pw1 q ` T ˚ pw2 qy,

which shows that T ˚ pw1 q ` T ˚ pw2 q plays the role of T ˚ pw1 ` w2 q. By the
uniqueness proved before, we have that

T ˚ pw1 q ` T ˚pw2 q “ T ˚ pw1 ` w2 q .

Remains to check the homogeneity of T ˚ . For a P F one has

xT pvq, awy “ axT pvq, wy

“ axv, T ˚pwqy

“ xv, aT ˚pwqy .

This shows that aT ˚ pwq plays the role of T ˚ pawq, and again by the uniqueness we
have that
aT ˚ pwq “ T ˚ pawq .

Thus T ˚ is a linear map, as claimed.


One can easily verify that we have the following properties:

a) additivity pS ` T q˚ “ S ˚ ` T ˚ for all S, T P LpV, W q.

b) conjugate homogeneity paT q˚ “ āT ˚ for all a P F and T P LpV, W q.

c) adjoint of adjoint pT ˚ q˚ “ T for all T P LpV, W q.

d) identity I ˚ “ I, if I “ IV , V “ W .

e) products pST q˚ “ T ˚ S ˚ for all T P LpV, W q and S P LpW, Uq.

For the sake of completeness we prove the above statements. Let v P V and w P W.
aq Let S, T P LpV, W q. Then xpS ` T qpvq, wy “ xv, pS ` T q˚ pwqy. On the other hand
xpS ` T qpvq, wy “ xSpvq, wy ` xT pvq, wy “ xv, S ˚ pwqy ` xv, T ˚ pwqy “ xv, pS ˚ ` T ˚ qpwqy.
Hence, pS ` T q˚ “ S ˚ ` T ˚ .
bq Let a P F and T P LpV, W q. We have xpaT qpvq, wy “ xv, paT q˚ pwqy. But
xpaT qpvq, wy “ axT pvq, wy “ axv, T ˚ pwqy “ xv, āT ˚ pwqy.
Hence, paT q˚ “ āT ˚ .
cq Let T P LpV, W q. Then, using conjugate symmetry and the definition of the adjoint,
xw, T pvqy “ xT ˚ pwq, vy “ xw, pT ˚ q˚ pvqy.
Hence, pT ˚ q˚ “ T.
dq Let V “ W. We have xv, Ipwqy “ xv, wy “ xIpvq, wy “ xv, I ˚ pwqy.
Hence I “ I ˚ .
eq Let T P LpV, W q and S P LpW, Uq. Then for all u P U and v P V it holds
xv, pST q˚ puqy “ xpST qpvq, uy “ xT pvq, S ˚ puqy “ xv, T ˚ pS ˚ puqqy.
Hence, pST q˚ “ T ˚ S ˚ .

Proposition 6.5. Suppose that T P LpV, W q is continuous. Then

1. ker T ˚ “ pim T qK .

2. im T ˚ “ pker T qK .

3. ker T “ pim T ˚ qK .

4. im T “ pker T ˚ qK .

Proof. 1. Let w P W . Then

w P ker T ˚ ô T ˚ pwq “ 0

ô xv, T ˚pwqy “ 0 @vPV

ô xT pvq, wy “ 0 @v P V

ô w P pim T qK ,

that is ker T ˚ “ pim T qK . If we take the orthogonal complement in both sides we


get 4. Replacing T by T ˚ in 1 and 4 gives 3 and 2.

The conjugate transpose of a type pm, nq- matrix is an pn, mq matrix obtained by
interchanging the rows and columns and taking the complex conjugate of each
entry. The adjoint of a matrix (which is a linear transform between two finite
dimensional spaces in the appropriate bases) is the conjugate transpose of that
matrix as the next result shows.

Proposition 6.6. Suppose that T P LpV, W q. If te1 , . . . , en u, and tf1 , . . . , fm u are


orthonormal bases for V and W respectively, and we denote by MT and MT ˚ the
matrices of T and T ˚ in these bases, then MT ˚ is the conjugate transpose of MT .

Proof. The k th column of MT is obtained by writing T pek q as a linear combination
of the fj ’s; the scalars used become the k th column of MT . The basis tf1 , . . . , fm u
being orthonormal, it follows that

T pek q “ xT pek q, f1 yf1 ` ¨ ¨ ¨ ` xT pek q, fm yfm .

So on the position pj, kq of MT we have xT pek q, fj y. Replacing T with T ˚ and
interchanging the roles played by the e’s and f ’s, we see that the entry on the
position pj, kq of MT ˚ is xT ˚ pfk q, ej y, which is the conjugate of
xej , T ˚ pfk qy “ xT pej q, fk y, i.e. the conjugate of the pk, jq entry of MT . In other
words, MT ˚ equals the conjugate transpose of MT .
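For the operator of Example 6.4 this proposition can be verified directly: with respect to the standard (orthonormal) bases of R3 and R2 , the matrix of T ˚ is the transpose of the matrix of T (over R there is nothing to conjugate). A short NumPy sketch, given only as an illustration:

```python
import numpy as np

# T(x, y, z) = (y + 3z, 2x): its matrix in the standard bases of R^3 and R^2
M_T = np.array([[0, 1, 3],
                [2, 0, 0]], dtype=float)

# T*(u, v) = (2v, u, 3u): its matrix in the standard bases of R^2 and R^3
M_Tstar = np.array([[0, 2],
                    [1, 0],
                    [3, 0]], dtype=float)

print(np.array_equal(M_Tstar, M_T.conj().T))   # True: M_{T*} is the conjugate transpose of M_T

# The defining identity <T(v), w> = <v, T*(w)> on random vectors:
v, w = np.random.rand(3), np.random.rand(2)
print(np.isclose(np.dot(M_T @ v, w), np.dot(v, M_Tstar @ w)))   # True
```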

6.2 Normal operators


An operator on a Hilbert space is called normal if it commutes with its adjoint,
that is
T T ˚ “ T ˚T .

Remark 6.7. We will call a complex square matrix normal if it commutes with its
conjugate transpose, that is, A P Mn pCq is normal iff
AA˚ “ A˚ A,
where A˚ is the conjugate transpose of A, that is A˚ “ ĀJ . It can easily be
observed that the matrix of a normal operator is a normal matrix.

Example 6.8. On F2 consider the operator which in the canonical basis has the
matrix
A “
2  ´3
3   2 .
This is a normal operator.

Indeed let T : F2 Ñ F2 be the operator whose matrix is A. Then


T px, yq “ p2x ´ 3y, 3x ` 2yq, thus xT px, yq, pu, vqy “ p2x ´ 3yqu ` p3x ` 2yqv “
p2v ´ 3uqy ` p3v ` 2uqx “ xpx, yq, p3v ` 2u, 2v ´ 3uqy. Hence, the adjoint of T is
T ˚ pu, vq “ p2u ` 3v, ´3u ` 2vq. It can easily be computed that
T T ˚ pu, vq “ T ˚ T pu, vq “ p13u, 13vq, hence T T ˚ “ T ˚ T.

Proposition 6.9. An operator T P LpV q is a normal operator iff
}T pvq} “ }T ˚ pvq} for all v P V.



Proof. Let T P LpV q.

T is normal ðñ T ˚ T ´ T T ˚ “ 0

ðñ xpT ˚T ´ T T ˚ qpvq, vy “ 0 for all v P V

ðñ xT ˚ T pvq, vy “ xT T ˚pvq, vy for all v P V

ðñ }T pvq}2 “ }T ˚ pvq}2 for all v P V.

Theorem 6.10. Let T be a normal operator on V and λ0 be an eigenvalue of T .

1. The proper subspace Epλ0 q is T ˚ invariant.

2. If v0 is an eigenvector of T corresponding to the eigenvalue λ0 , then v0 is an
eigenvector of T ˚ corresponding to the eigenvalue λ̄0 .

3. Let v, w be two eigenvectors corresponding to distinct eigenvalues λ, β. Then


v, w are orthogonal.

Proof. Let v P Epλ0 q. We have to prove that T ˚ pvq P Epλ0 q.


Since T pvq “ λ0 v, we have

T pT ˚ pvqq “ pT T ˚qpvq “ pT ˚ T qpvq “ T ˚ pT pvqq “ T ˚ pλ0 vq “ λ0 T ˚ pvq.

which show that T ˚ pvq P Epλ0 q.


For the second statement in the theorem we have that T pv0 q “ λ0 v0 . Let
w P Epλ0 q. Then
xT ˚ pv0 q, wy “ xv0 , T pwqy “ xv0 , λ0 wy “ λ̄0 xv0 , wy “ xλ̄0 v0 , wy.
That means that
xT ˚ pv0 q ´ λ̄0 v0 , wy “ 0,
for all w P Epλ0 q. The first term in the inner product lives in Epλ0 q by the
previous statement. Take w “ T ˚ pv0 q ´ λ̄0 v0 and it follows that T ˚ pv0 q “ λ̄0 v0 , i.e.
the second assertion in the theorem.
Now follows the last statement. One has T pvq “ λv and T pwq “ βw. By the
previous point T ˚ pwq “ β̄w, so
xT pvq, wy “ xv, T ˚ pwqy
(by the definition of the adjoint), which implies λxv, wy “ βxv, wy. Since λ ‰ β, it
follows that xv, wy “ 0.

Proposition 6.11. If U is a T invariant subspace of V then U K is a T ˚ invariant


subspace of V .

Proof. Let w P U K and v P U. Since U is T invariant, we have T pvq P U, hence
xv, T ˚ pwqy “ xT pvq, wy “ 0.
That is, T ˚ pwq P U K .

A unitary space is an inner product space over C.

Theorem 6.12. Suppose that V is a finite dimensional unitary space, and


T P LpV q is an operator. Then T is normal iff there exists an orthonormal basis B
of V relative to which the matrix of T is diagonal.

Proof. First suppose that T has a diagonal matrix. The matrix of T ˚ is the
conjugate transpose, so it is again diagonal. Any two diagonal matrices commute,
which means that T is normal.

To prove the other direction suppose that T is normal. Then, there is a basis
te1 , . . . , en u of V with respect to which the matrix of T is upper triangular, that is
A “
a1,1  a1,2  . . .  a1,n
 0    a2,2  . . .  a2,n
 . . .
 0     0   . . .  an,n .

We will show that the matrix A is actually a diagonal one.


We have
}T pe1 q} “ |a1,1 |
and
}T ˚ pe1 q} “ √p|a1,1 |2 ` ¨ ¨ ¨ ` |a1,n |2 q.
Because T is normal the norms are equal, so a1,2 “ ¨ ¨ ¨ “ a1,n “ 0. Similarly,
}T pe2 q} “ √p|a1,2 |2 ` |a2,2 |2 q “ |a2,2 |
and
}T ˚ pe2 q} “ √p|a2,2 |2 ` ¨ ¨ ¨ ` |a2,n |2 q.
Because T is normal the norms are equal, so a2,3 “ ¨ ¨ ¨ “ a2,n “ 0.


By continuing the procedure we obtain that for every k P t1, . . . , n ´ 1u we have
ak,k`1 “ ¨ ¨ ¨ “ ak,n “ 0, hence A is diagonal.

Theorem 6.13. (Complex spectral theorem) Suppose that V is a unitary
space and T P LpV q. Then V has an orthonormal basis consisting of eigenvectors of
T iff T is normal.

Proof. Induction on n “ dim V . The statement is obvious for n “ 1. Suppose that


this is true for all dimensions less than n. Let T P LpV q. Then T has at least one

eigenvalue λ. If dim Epλq “ n it is enough to construct an orthonormal basis of


Epλq. For dim Epλq ă n, choose E K pλq, and we have 0 ă dim E K pλq ă n.
Now Epλq is T ˚ invariant, so E K pλq is T invariant. By the induction hypothesis,
E K pλq has an orthonormal basis consisting of eigenvectors of T . Add this to the
orthonormal basis of Epλq. The result is an orthonormal basis of V consisting of
eigenvectors.
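For a concrete normal matrix the theorem can be illustrated numerically. The sketch below (NumPy, for illustration only) uses the matrix of Example 6.8; np.linalg.eig returns unit-length eigenvectors, and since the two eigenvalues are distinct, Theorem 6.10 guarantees that these eigenvectors are orthogonal, so the matrix U below is unitary.

```python
import numpy as np

A = np.array([[2.0, -3.0],
              [3.0,  2.0]])                            # the normal matrix of Example 6.8

print(np.allclose(A @ A.conj().T, A.conj().T @ A))     # True: A is normal

eigvals, U = np.linalg.eig(A)                          # eigenvalues 2 + 3i and 2 - 3i
print(eigvals)

print(np.allclose(U.conj().T @ U, np.eye(2)))          # True: U is unitary
print(np.round(U.conj().T @ A @ U, 10))                # diagonal matrix diag(2+3i, 2-3i)
```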

6.3 Isometries
An operator T P LpV q is called an isometry if

}T pvq} “ }v} , for all v P V.

Example 6.14. Let I be the identity map of V (V complex v.s.), and λ P C with
|λ| “ 1. The map λI is an isometry, since }λIpvq} “ }λv} “ |λ|}v} “ }v}.
If T is an isometry it easily follows that T is injective.
Indeed, assume the contrary, that is, there exist u, v P V, u ‰ v such that
T puq “ T pvq. Hence, 0 “ }T puq ´ T pvq} “ }T pu ´ vq} “ }u ´ v}, a contradiction
with u ‰ v.

Theorem 6.15. Suppose T P LpV q. The following are equivalent:

1. T is an isometry.

2. xT puq, T pvqy “ xu, vy for every u, v P V .

3. T ˚ T “ I.

4. tT pe1 q, . . . , T pem qu is an orthonormal list for every orthonormal list


te1 , . . . , em u.

5. There exists an orthonormal basis te1 , . . . , en u of V such that
tT pe1 q, . . . , T pen qu is an orthonormal basis.

6. T ˚ is an isometry.

7. xT ˚ puq, T ˚pvqy “ xu, vy for all u, v P V .

8. T T ˚ “ I

9. tT ˚ pe1 q, . . . , T ˚ pem qu is an orthonormal list for every orthonormal list


pe1 , . . . , em q .

10. There exists an orthonormal basis te1 , . . . , en u of V such that


tT ˚ pe1 q, . . . , T ˚ pen qu is an orthonormal basis.

Proof. Suppose that 1 holds. Let u, v P V. Then


}u ´ v}2 “ }T pu ´ vq}2 “ xT puq ´ T pvq, T puq ´ T pvqy “
“ }T puq}2 ` }T pvq}2 ´ 2xT puq, T pvqy “ }u}2 ` }v}2 ´ 2xT puq, T pvqy. On the other
hand }u ´ v}2 “ }u}2 ` }v}2 ´ 2xu, vy, hence xT puq, T pvqy “ xu, vy, i.e. 2 holds.
Suppose now that 2 holds. Then

xpT ˚T ´ Iqpuq, vy “ xT puq, T pvqy ´ xu, vy “ 0 .

for every u, v P V . Take v “ pT ˚ T ´ Iqpuq and it follows that T ˚ T ´ I “ 0, i.e. 3.


Suppose 3 holds. Let pe1 . . . em q be an orthonormal list of vectors in V . Then

xT pej q, T pek qy “ xT ˚ T pej q, ek y “ xej , ek y,

i.e. 4 holds. Obviously 4 implies 5.



Suppose 5 holds. Let pe1 , . . . en q be an orthonormal basis of V such that


pT pe1 q, . . . , T pen qq is orthonormal basis. For v P V

}T pvq}2 “ }T pxv, e1ye1 ` ¨ ¨ ¨ ` xv, en yen q}2

“ }xv, e1yT pe1 q ` ¨ ¨ ¨ ` xv, en yT pen q}2

“ |xv, e1 y|2 ` ¨ ¨ ¨ ` |xv, en y|2

“ }v}2 .

Taking square roots we see that T is an isometry. We have now


1 ùñ 2 ùñ 3 ùñ 4 ùñ 5 ùñ 1. Replacing T by T ˚ we see that 6 through 10 are
equivalent. We need only to prove the equivalence of one assertion in the first
group with one in the second group.
3 ô 8 which is easy to see since T T ˚ “ I ñ
T T ˚ puq “ u, @u P V ñ pT T ˚qpT puqq “ T puq, @u P V, or equivalently
T ppT ˚ T qpuqq “ T puq, @u P V , T is injective, hence T ˚ T “ I.
Conversely, T ˚ T “ I ñ T ˚ T puq “ u, @u P V ñ pT ˚ T qpT ˚puqq “ T ˚ puq, @u P V, or
equivalently T ˚ ppT T ˚ qpuqq “ T ˚ puq, @u P V , T ˚ is injective, hence T T ˚ “ I.

Remark 6.16. Recall that a real square matrix A is called orthogonal iff
AAJ “ AJ A “ I. A complex square matrix B is called unitary if BB ˚ “ B ˚ B “ I,
where B ˚ is the conjugate transpose of B, that is B ˚ “ B̄ J . It can easily be
observed that the matrix of an isometry on a real (complex) finite dimensional
inner product space is an orthogonal (unitary) matrix.

The last theorem shows that every isometry is a normal operator. So, the
characterizations of normal operators can be used to give a complete description of
isometries.

Theorem 6.17. Suppose that V is a complex inner product space and T P LpV q.

Then T is an isometry iff there is an orthonormal basis of V consisting of
eigenvectors of T whose corresponding eigenvalues have modulus 1.

Proof. Suppose that there is an othonormal basis te1 , . . . , en u consisting of


eigenvectors whose corresponding eigenvalues tλ1 , . . . , λn u have absolute value 1. It
follows that for every v P V

T pvq “ xv, e1 yT pe1 q ` ¨ ¨ ¨ ` xv, en yT pen q

“ λ1 xv, e1 ye1 ` ¨ ¨ ¨ ` λn xv, en yen .

Thus }T pvq}2 “ |xv, e1 y|2 ` ¨ ¨ ¨ ` |xv, en y|2 “ }v}2 that is

}T pvq} “ }v}.

Now we are going to prove the other direction. Suppose T is an isometry. By the
complex spectral theorem there is an orthonormal basis of V consisting of
eigenvectors te1 , . . . , en u. Let ej , j P t1, . . . , nu, be such a vector, associated to an
eigenvalue λj . It follows that

|λj |}ej } “ }λj ej } “ }T pej q} “ }ej },

hence |λj | “ 1, for all j P t1, . . . , nu.

Finally we state the following important theorem concerning on the form of the
matrix of an isometry.

Theorem 6.18. Suppose that V is a real inner product space and T P LpV q. Then
T is an isometry iff there exist an orthonormal basis of V with respect to which T
has a block diagonal matrix where each block on the diagonal matrix is a p1, 1q
matrix containing 1 or ´1, or a p2, 2q matrix of the form
cos θ   ´ sin θ
sin θ    cos θ

with θ P p0, πq.

Proof. The eigenvalues of T have modulus 1, hence they are of the form 1, ´1 or
cos θ ˘ i sin θ. On the other hand, the matrix of T is similar to a diagonal matrix
whose diagonal entries are the eigenvalues.

6.4 Self adjoint operators


An operator T P LpV q is called self-adjoint if T “ T ˚ that is xT pvq, wy “ xv, T pwqy
for all v, w P V .

Remark 6.19. Obviously a self adjoint operator T P LpV q is normal since in this
case holds
T T ˚ “ T ˚ T ˚ “ T ˚ T.

Example 6.20. Let T be an operator on F2 whose matrix with respect to the
standard basis is
2  b
3  5 .
Then T is self-adjoint iff b “ 3.

Indeed, for px, yq P F2 one has T px, yq “ p2x ` by, 3x ` 5yq, hence for pu, vq P F2 it
holds

xT px, yq, pu, vqy “ p2x ` byqu ` p3x ` 5yqv “ xpx, yq, p2u ` 3v, bu ` 5vqy.

Thus T ˚ px, yq “ p2x ` 3y, bx ` 5yq.


In conclusion T is self adjoint, i.e. T “ T ˚ , iff b “ 3.
It can easily be verified that the sum of two self adjoint operators and the product
of a self adjoint operator by a real scalar are self-adjoint operators.

Indeed, let S, T P LpV q be two self adjoint operators. Then


pS ` T q˚ “ S ˚ ` T ˚ “ S ` T, hence S ` T is self adjoint. On the other hand for
their product we have pST q˚ “ T ˚ S ˚ “ T S. Hence T S is self adjoint iff ST “ T S.
Let now a P R. Then paT q˚ “ aT ˚ “ aT, hence aT is self adjoint.

Remark 6.21. When F “ C the adjoint on LpV q plays a similar role to complex
conjugation on C. A complex number is real iff z “ z̄. Thus a self adjoint operator
T , for which T “ T ˚ , is analogous to a real number. The analogy is reflected
in some important properties of a self-adjoint operator, beginning with its
eigenvalues.

Remark 6.22. Recall that a complex square matrix A is called hermitian iff
A “ A˚ , where A˚ is the conjugate transpose of A, that is A˚ “ ĀJ . If A is a
square matrix with real entries, then A is called symmetric iff A “ AJ . It can
easily be observed that the matrix of a self adjoint operator on a complex (real)
inner product space is hermitian (symmetric).

Proposition 6.23. The following statements hold.

• Every eigenvalue of a self-adjoint operator is real.

• Let v, w eigenvectors corresponding to distinct eigenvalues. Then xv, wy “ 0.

Proof. Suppose that T is a self-adjoint operator on V . Let λ be an eigenvalue of T ,


and v be an eigenvector, that is T pvq “ λv. Then

λ}v}2 “ xλv, vy

“ xT pvq, vy

“ xv, T pvqy (because T is self-adjoint)

“ xv, λvy
“ λ̄}v}2 .
Thus λ “ λ̄, i.e. λ is real.
The next assertion comes from the fact that a self-adjoint operator is normal.

Theorem 6.24. Let T P LpV q, where V is an inner product space. The following
statements are equivalent.

1. T is self-adjoint.

2. There exists an orthonormal basis of V relative to which the matrix of T is


diagonal with real entries.

Proof. Assume that T is self adjoint. Since T is normal, there exists an
orthonormal basis of V relative to which the matrix MT of T is upper triangular.
But the matrix of T ˚ in this basis is the conjugate transpose of MT , and from
T “ T ˚ one has MT “ pMT q˚ , hence MT is diagonal and its diagonal entries are real.
Conversely, let MT be a diagonal matrix of T , with real entries, in some orthonormal
basis. Then MT “ pMT q˚ , hence MT “ MT ˚ , or equivalently T “ T ˚ .
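For real symmetric (or complex hermitian) matrices, NumPy's np.linalg.eigh returns exactly the data promised by this theorem: real eigenvalues and an orthonormal basis of eigenvectors. A short illustration (the matrix S is an arbitrary example, not taken from the text):

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])           # a real symmetric matrix

eigvals, Q = np.linalg.eigh(S)            # eigh is meant for hermitian/symmetric input

print(eigvals)                                      # all eigenvalues are real
print(np.allclose(Q.T @ Q, np.eye(3)))              # True: the eigenvector basis is orthonormal
print(np.allclose(Q.T @ S @ Q, np.diag(eigvals)))   # True: S is diagonal in this basis
```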

6.5 Problems
Problem 6.5.1. Suppose that A is a complex matrix with real eigenvalues which
can be diagonalized by a unitary matrix. Prove that A must be hermitian.

Problem 6.5.2. Prove or give a counter example: the product of any two self
adjoint operators on a finite dimensional inner product space is self adjoint.

Problem 6.5.3. Show that an upper triangular matrix is normal if and only if it
is diagonal.

Problem 6.5.4. Suppose p P LpV q is such that p2 “ p. Prove that p is an


orthogonal projection if and only if p is self adjoint.

Problem 6.5.5. Show that if V is a real inner product space, then the set of self
adjoint operators on V is a subspace of LpV q. Show that if V is a complex inner
product space, then the set of self-adjoint operators on V is not a subspace of
LpV q.

Problem 6.5.6. Show that if dim V ě 2 then the set of normal operators on V is
not a subspace of LpV q.

Problem 6.5.7. Let A be a normal matrix. Prove that A is unitary if and only if
all its eigenvalues λ satisfy |λ| “ 1.

Problem 6.5.8. Let v be any unit vector in Cn and put A “ In ´ 2vv ˚ . Prove
that A is both hermitian and unitary. Deduce that A “ A´1 .

Problem 6.5.9. Suppose V is a complex inner product space and T P LpV q is a


normal operator such that T 9 “ T 8 . Prove that T is self adjoint and T 2 “ T.

Problem 6.5.10. Let A be a normal matrix. Show that A is hermitian if and


only if all its eigenvalues are real.

Problem 6.5.11. Prove that if T P LpV q is normal, then

im T “ im T ˚ .

and
ker T k “ ker T

im T k “ im T

for every positive integer k.

Problem 6.5.12. A complex matrix A is called skew-hermitian if A˚ “ ´A.


Prove the following statements.

a) A skew-hermitian matrix is normal.

b) The eigenvalues of a skew-hermitian matrix are purely imaginary, that is,


have the real part 0.

c) A normal matrix is skew-hermitian if all its eigenvalues are purely imaginary.

Problem 6.5.13. Suppose V is a complex inner product space. An operator


S P LpV q is called a square root of T P LpV q if S 2 “ T . We denote S “ √T . Prove
that every normal operator on V has a square root.

Problem 6.5.14. Prove or disprove: the identity operator on F2 has infinitely many


self-adjoint square roots.

Problem 6.5.15. Let T, S P LpV q be isometries and R P LpV q a positive operator
(that is, xRpvq, vy ě 0 for all v P V ), such that T “ SR. Prove that R “ √pT ˚ T q.

Problem 6.5.16. Let R2 rXs be the inner product space of polynomials with
degree at most 2, with the scalar product
\[
\langle p, q\rangle = \int_0^1 p(t)\, q(t)\, dt.
\]

Let T P LpR2 rXsq, T pax2 ` bx ` cq “ bx.

a) Show that the matrix of T with respect to the basis t1, x, x2 u is hermitian.

b) Show that T is not self-adjoint.

(Note that there is no contradiction between these statements because the basis in
the first statement is not orthonormal.)

Problem 6.5.17. Prove that a normal operator on a complex inner-product space


is self-adjoint if and only if all its eigenvalues are real.
7
Elements of geometry

This chapter follows joint notes with I. Raşa and D. Inoan.

7.1 Quadratic forms


Consider the n-dimensional space Rn and denote by x “ px1 , . . . , xn q the
coordinates of a vector x P Rn with respect to the canonical basis E “ te1 , . . . , en u .
A quadratic form is a map Q : Rn Ñ R

Qpxq “ a11 x21 ` ¨ ¨ ¨ ` ann x2n ` 2a12 x1 x2 ` ¨ ¨ ¨ ` 2aij xi xj ` ¨ ¨ ¨ ` 2an´1,n xn´1 xn ,

where the coefficients aij are all real.


Thus, quadratic forms are homogeneous polynomials of degree two in n
variables.
Using matrix multiplication, we can write Q in a compact form as

Qpxq “ X J AX,

where
\[
X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
\qquad\text{and}\qquad
A = \begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
a_{12} & a_{22} & \dots & a_{2n} \\
\vdots & \vdots &       & \vdots \\
a_{1n} & a_{2n} & \dots & a_{nn}
\end{pmatrix}.
\]
The symmetric matrix A (notice that aij “ aji ) is called the matrix of the
quadratic form. Being symmetric (and real), A is the matrix of a self-adjoint
operator with respect to the basis E. This operator, which we call T , is
diagonalizable and there exists a basis B “ tb1 , . . . , bn u formed by eigenvectors
with respect to which T has a diagonal matrix consisting of eigenvalues (also
denoted by T )
T “ diagtλ1 , . . . , λn u.

Let C be the transition matrix from E to B and


\[
X' = \begin{pmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_n \end{pmatrix}
\]
the coordinates of the initial vector written in B. We have that

X “ CX 1

Knowing that T “ C ´1 AC and that C ´1 “ C J , we can compute

Qpxq “ X J AX
J
“ pCX 1 q A pCX 1 q

“ X 1J C J ACX 1

“ X 1J T X 1
“ λ1 px11 q2 ` ¨ ¨ ¨ ` λn px1n q2 ,

and we say that we have reduced Q to its canonical form

Qpxq “ λ1 px11 q2 ` ¨ ¨ ¨ ` λn px1n q2 .

This is called the geometric method.
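A minimal numerical sketch of the geometric method (the quadratic form below is an arbitrary illustration): build the symmetric matrix A, diagonalize it orthogonally, and read off the canonical coefficients.

import numpy as np

# Q(x) = 2 x1^2 + 3 x2^2 + 4 x1 x2  has matrix  A = [[2, 2], [2, 3]].
A = np.array([[2.0, 2.0],
              [2.0, 3.0]])

# Orthogonal diagonalization: the columns of C are orthonormal eigenvectors.
lam, C = np.linalg.eigh(A)

# In the new coordinates X = C X', the form is lam[0] x1'^2 + lam[1] x2'^2.
print(lam)
print(np.allclose(C.T @ A @ C, np.diag(lam)))   # True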


The quadratic form is called

• positive definite if Qpxq ą 0 for every x P Rn zt0u

• negative definite if Qpxq ă 0 for every x P Rn zt0u.

We can characterize the definiteness of a quadratic form in terms of the
leading principal (diagonal) minors of its matrix
\[
D_1 = a_{11}, \qquad
D_2 = \begin{vmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{vmatrix}, \qquad
\dots, \qquad
D_n = \det A.
\]

We have the following criteria (a numerical check is sketched after the list):

• Q is positive definite iff Di ą 0 for every i “ 1, n

• Q is negative definite iff p´1qi Di ą 0 for every i “ 1, n.
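A sketch of this test in NumPy (the helper names and the sample matrix are ours, for illustration only):

import numpy as np

def leading_minors(A):
    # Leading principal minors D_1, ..., D_n of the symmetric matrix A.
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

def classify(A):
    D = leading_minors(A)
    if all(d > 0 for d in D):
        return "positive definite"
    # (-1)^i D_i > 0 for i = 1, ..., n (signs alternate, starting with D_1 < 0).
    if all((-1) ** (k + 1) * d > 0 for k, d in enumerate(D)):
        return "negative definite"
    return "neither positive nor negative definite"

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(classify(A))    # positive definite
print(classify(-A))   # negative definite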

7.2 Quadrics
The general equation of a quadric is

a11 x2 ` a22 y 2 ` a33 z 2 ` 2a12 xy ` 2a13 xz ` 2a23 yz `
2a14 x ` 2a24 y ` 2a34 z ` a44 “ 0.

From a geometric point of view, quadrics, which are also called quadric surfaces,
are two-dimensional surfaces defined as the locus of zeros of a second degree

polynomial in x, y and z. Perhaps the most prominent example of a quadric is the


sphere (the spherical surface).
The type is determined by the quadratic form that contains all terms of degree two

Q “ a11 x2 ` a22 y 2 ` a33 z 2 ` 2a12 xy ` 2a13 xz ` 2a23 yz.

We distinguish, based on the sign of the eigenvalues of the matrix of Q, between:


ellipsoids, elliptic or hyperbolic paraboloids, hyperboloids with one or two sheets,
cones and cylinders.
We study how to reduce the general equation of a quadric to a canonical form.
We reduce Q to a canonical form using the geometric method.
Consider the matrix A associated to Q. Being symmetric, A has real eigenvalues
λ1 , λ2 , λ3 . If they are distinct, the corresponding eigenvectors are orthogonal (if
not, we apply the Gram-Schmidt algorithm). Thus, we obtain three orthogonal
unit vectors tb1 , b2 , b3 u, a basis in R3 .
Let R be the transition matrix from ti, j, ku to the new basis tb1 , b2 , b3 u. We recall
from previous chapters that R has the three vectors b1 , b2 , b3 as its columns

R “ rb1 |b2 |b3 s .

Now, we compute det R and check whether

det R “ 1.

If necessary, i.e., if det R “ ´1, we replace one of the vectors by its opposite
(for example, take R “ r´b1 |b2 |b3 s). This ensures that the matrix R defines a
rotation, the new basis being obtained from the original one by this rotation. Let
px, y, zq and px1 , y 1, z 1 q be the coordinates of the same point in the original basis

and in the new one. We have


\[
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
= R \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}.
\]
We know that with respect to the new coordinates

Q “ λ1 px1 q2 ` λ2 py 1 q2 ` λ3 pz 1 q2 ,

and thus, the equation of the quadric reduces to the simpler form

λ1 px1 q2 ` λ2 py 1 q2 ` λ3 pz 1 q2 ` 2a1 14 x1 ` 2a1 24 y 1 ` 2a1 34 z 1 ` a44 “ 0.

To obtain the canonical form of the quadric we still have to perform another
transformation, namely a translation. To complete this step we investigate three
cases: (A) when A has three nonzero eigenvalues, (B) when one eigenvalue is zero
and (C) when two eigenvalues are equal to zero.
(A) For λi ‰ 0, i “ 1, 3, by completing the squares we obtain

λ1 px1 ´ x0 q2 ` λ2 py 1 ´ y0 q2 ` λ3 pz 1 ´ z0 q2 ` a1 44 “ 0

Consider the translation defined by

x2 “ x1 ´ x0 ,

y 2 “ y 1 ´ y0 ,

z 2 “ z 1 ´ z0 .

In the new coordinates the equation of the quadric reduces to the canonical form

λ1 x22 ` λ2 y 22 ` λ3 z 22 ` a1 44 “ 0.

The cases (B) and (C) can be treated similarly.
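The rotation step can be sketched numerically as follows (the quadric coefficients are arbitrary illustrative values); note the sign flip that enforces det R “ 1:

import numpy as np

# Quadratic part Q = 2x^2 + 2y^2 + 3z^2 + 2xy, an arbitrary illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lam, R = np.linalg.eigh(A)        # columns of R: orthonormal eigenvectors

# Make R a rotation: if det R = -1, replace one eigenvector by its opposite.
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]

print(np.isclose(np.linalg.det(R), 1.0))        # True
print(np.allclose(R.T @ A @ R, np.diag(lam)))   # rotated form is diagonal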



7.3 Conics
Studied since the time of ancient Greek geometers, conic sections (or just conics)
are obtained, as their name shows, by intersecting a cone with a sectioning plane.
They have played a crucial role in the development of modern science, especially in
astronomy, the motion of the Earth around the Sun taking place on a particular
conic called an ellipse. We also point out that the circle is a conic section, a
special case of the ellipse.
The general equation of a conic is

a11 x2 ` 2a12 xy ` a22 y 2 ` 2a13 x ` 2a23 y ` a33 “ 0.

The following two determinants obtained from the coefficients of the conic play a
crucial role in the classification of conics
\[
\Delta = \begin{vmatrix}
a_{11} & a_{12} & a_{13} \\
a_{12} & a_{22} & a_{23} \\
a_{13} & a_{23} & a_{33}
\end{vmatrix}
\qquad\text{and}\qquad
D_2 = \begin{vmatrix}
a_{11} & a_{12} \\
a_{12} & a_{22}
\end{vmatrix}.
\]

Notice that the second determinant corresponds to the quadratic form defined by
the first three terms.

Conic sections can be classified as follows:

Degenerate conics, for which ∆ “ 0. These include: two intersecting lines (when
D2 ă 0), two parallel lines or one line (when D2 “ 0) and one point (when D2 ą 0).

Nondegenerate conics, for which ∆ ‰ 0. Depending on D2 we distinguish
between the

Ellipse pD2 ą 0q whose canonical equation is \(\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1\),

Parabola pD2 “ 0q whose canonical equation is y 2 ´ 2ax “ 0,

Hyperbola pD2 ă 0q whose canonical equation is \(\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1\).
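This classification is easy to automate; here is a sketch in NumPy (the helper name and the argument convention are ours, for illustration only):

import numpy as np

def classify_conic(a11, a12, a22, a13, a23, a33):
    # Conic: a11 x^2 + 2 a12 xy + a22 y^2 + 2 a13 x + 2 a23 y + a33 = 0.
    M = np.array([[a11, a12, a13],
                  [a12, a22, a23],
                  [a13, a23, a33]])
    delta = np.linalg.det(M)
    D2 = a11 * a22 - a12 ** 2
    if np.isclose(delta, 0.0):
        return "degenerate conic"
    if np.isclose(D2, 0.0):
        return "parabola"
    return "ellipse" if D2 > 0 else "hyperbola"

# The conic of Example 7.1 below: 5x^2 + 4xy + 8y^2 - 32x - 56y + 80 = 0.
print(classify_conic(5, 2, 8, -16, -28, 80))    # ellipse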

Figure 7.1: Ellipse \(\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1\)

The reduction of a conic section to its canonical form is very similar to the
procedure that we presented in the previous section when dealing with quadrics.
Again, we must perform a rotation and a translation. We show how the reduction
can be performed by means of an example.

Example 7.1. Find the canonical form of 5x2 ` 4xy ` 8y 2 ´ 32x ´ 56y ` 80 “ 0.

The matrix of the quadratic form of this conic is


\[
\begin{pmatrix} 5 & 2 \\ 2 & 8 \end{pmatrix}
\]
and its eigenvalues are the roots of λ2 ´ 13λ ` 36 “ 0. So λ1 “ 9 and λ2 “ 4, while
two normed eigenvectors are v1 “ (1/√5) p1, 2q and v2 “ (1/√5) p´2, 1q, respectively.
The rotation matrix is thus
\[
R = \frac{1}{\sqrt{5}} \begin{pmatrix} 1 & -2 \\ 2 & 1 \end{pmatrix},
\]

and we can check that det R “ 1.


Now, the relation between the old and the new coordinates is given by
\[
\begin{pmatrix} x \\ y \end{pmatrix} = R \begin{pmatrix} x' \\ y' \end{pmatrix},
\]

that is

\[
x = \frac{1}{\sqrt{5}}\,(x' - 2y'), \qquad
y = \frac{1}{\sqrt{5}}\,(2x' + y').
\]

By substituting these expressions in the initial equation we get


\[
9x'^2 + 4y'^2 - \frac{144\sqrt{5}}{5}\,x' + \frac{8\sqrt{5}}{5}\,y' + 80 = 0.
\]

To see the translation that we need to perform, we rewrite the above equation as
follows
\[
9\Bigl(x'^2 - 2\,\frac{8\sqrt{5}}{5}\,x' + \Bigl(\frac{8\sqrt{5}}{5}\Bigr)^{2}\Bigr)
+ 4\Bigl(y'^2 + 2\,\frac{\sqrt{5}}{5}\,y' + \Bigl(\frac{\sqrt{5}}{5}\Bigr)^{2}\Bigr)
- 9\Bigl(\frac{8\sqrt{5}}{5}\Bigr)^{2} - 4\Bigl(\frac{\sqrt{5}}{5}\Bigr)^{2} + 80 = 0.
\]

Finally, we obtain
\[
9\Bigl(x' - \frac{8\sqrt{5}}{5}\Bigr)^{2} + 4\Bigl(y' + \frac{\sqrt{5}}{5}\Bigr)^{2} - 36 = 0.
\]
Thus, the translation
\[
x'' = x' - \frac{8\sqrt{5}}{5}, \qquad y'' = y' + \frac{\sqrt{5}}{5}
\]
reduces the conic to the canonical form
\[
\frac{x''^2}{4} + \frac{y''^2}{9} = 1,
\]
which is an ellipse.
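We can cross-check this reduction numerically; the following short NumPy sketch recomputes the eigenvalues and the constant term of the equation evaluated at the center of the conic:

import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 8.0]])
b = np.array([-32.0, -56.0])   # coefficients of x and y
c = 80.0

lam, R = np.linalg.eigh(A)     # eigenvalues in ascending order: 4, 9

# Center of the conic: the gradient 2Ax + b vanishes there.
center = np.linalg.solve(2 * A, -b)
const = center @ A @ center + b @ center + c

print(lam)      # [4. 9.]
print(center)   # [2. 3.]
print(const)    # -36.0, so 9 x''^2 + 4 y''^2 = 36, i.e. x''^2/4 + y''^2/9 = 1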

