Santos Linear Algebra
Raimundo B. Nascimento
[email protected]
Contents

Preface  iv
To the Student  v

1 Preliminaries  1
1.1 Sets and Notation  1
1.2 Partitions and Equivalence Relations  4
1.3 Binary Operations  6
1.4 Zn  9
1.5 Fields  13
1.6 Functions  15

2 Matrices and Matrix Operations  18
2.1 The Algebra of Matrices  18
2.2 Matrix Multiplication  22
2.3 Trace and Transpose  28
2.4 Special Matrices  31
2.5 Matrix Inversion  36
2.6 Block Matrices  44
2.7 Rank of a Matrix  45
2.8 Rank and Invertibility  55

3 Linear Equations  65
3.1 Definitions  65
3.2 Existence of Solutions  70
3.3 Examples of Linear Systems  71

4 Vector Spaces  76
4.1 Vector Spaces  76
4.2 Vector Subspaces  79
4.3 Linear Independence  81
4.4 Spanning Sets  84
4.5 Bases  87
4.6 Coordinates  91

5 Linear Transformations  97
5.1 Linear Transformations  97
5.2 Kernel and Image of a Linear Transformation  99
5.3 Matrix Representation  104

6 Determinants  111
6.1 Permutations  111
6.2 Cycle Notation  114
6.3 Determinants  119
6.4 Laplace Expansion  129
6.5 Determinants and Linear Systems  136

7 Eigenvalues and Eigenvectors  138
7.1 Similar Matrices  138
7.2 Eigenvalues and Eigenvectors  139
7.3 Diagonalisability  143
7.4 Theorem of Cayley and Hamilton  147

8 Linear Algebra and Geometry  149
8.1 Points and Bi-points in R2  149
8.2 Vectors in R2  152
8.3 Dot Product in R2  158
8.4 Lines on the Plane  164
8.5 Vectors in R3  169
8.6 Planes and Lines in R3  174
8.7 Rn  178

A Answers and Hints  183

GNU Free Documentation License  263
Copyright
© 2007 David Anthony SANTOS. Permission is granted to copy, distribute and/or
modify this document under the terms of the GNU Free Documentation License, Version 1.2
or any later version published by the Free Software Foundation; with no Invariant Sections, no
Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section
entitled GNU Free Documentation License.
Preface
These notes started during the Spring of 2002, when John MAJEWICZ and I each taught a section of
Linear Algebra. I would like to thank him for numerous suggestions on the written notes.
The students of my class were: Craig BARIBAULT, Chun CAO, Jacky CHAN, Pho DO, Keith HARMON, Nicholas SELVAGGI, Sanda SHWE, and Huong VU. I must also thank my former student William CARROLL for some comments and for supplying the proofs of a few results.
John's students were David HERNÁNDEZ, Adel JAILILI, Andrew KIM, Jong KIM, Abdelmounaim LAAYOUNI, Aju MATHEW, Nikita MORIN, Thomas NEGRÓN, Latoya ROBINSON, and Saem SOEURN.
Linear Algebra is often a student's first introduction to abstract mathematics. Linear Algebra is well suited for this, as it has a number of beautiful but elementary and easy-to-prove theorems. My purpose with these notes is to introduce students to the concept of proof in a gentle manner.
To the Student
These notes are provided for your benefit as an attempt to organise the salient points of the course. They are a very terse account of the main ideas of the course, and are to be used mostly to refer to central definitions and theorems. The number of examples is minimal, and here you will find few exercises. The motivation and informal ideas behind a topic, the ideas linking one topic with another, the worked-out examples, etc., are given in class. Hence these notes are not a substitute for lectures: you must always attend lectures. The order of the notes may not necessarily be the order followed in class.
There is a certain algebraic fluency that is necessary for a course at this level. These algebraic prerequisites would be difficult to codify here, as they vary depending on class response and the topic lectured. If at any stage you stumble in algebra, seek help! I am here to help you!
Tutoring can sometimes help, but bear in mind that whoever tutors you may not be familiar with my conventions. Again, I am here to help! In the same vein, other books may help, but the approach presented here is at times unorthodox, and finding alternative sources might be difficult.
¹ My doctoral adviser used to say: "I said A, I wrote B, I meant C, and it should have been D!"
Chapter 1
Preliminaries
3 Definition (Implications) The symbol ⇒ is read "implies", and the symbol ⇔ is read "if and only if".
4 Example Prove that between any two rational numbers there is always a rational number.

Solution: Let (a, c) ∈ Z², (b, d) ∈ (N \ {0})², with a/b < c/d. Then da < bc. Now

ab + ad < ab + bc ⇒ a(b + d) < b(a + c) ⇒ a/b < (a + c)/(b + d),

da + dc < cb + cd ⇒ d(a + c) < c(b + d) ⇒ (a + c)/(b + d) < c/d,

whence the rational number (a + c)/(b + d) lies between a/b and c/d.
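The mediant used in the proof above is easy to test numerically. A quick sketch in Python using exact rational arithmetic (the helper name `mediant` is ours, not the text's):

```python
from fractions import Fraction

def mediant(p, q):
    # For p = a/b and q = c/d in lowest terms, form (a + c)/(b + d).
    return Fraction(p.numerator + q.numerator, p.denominator + q.denominator)

p, q = Fraction(1, 3), Fraction(2, 5)
m = mediant(p, q)
print(m)          # 3/8
print(p < m < q)  # True
```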
5 Definition Let A be a set. If a belongs to the set A, then we write a ∈ A, read "a is an element of A". If a does not belong to the set A, we write a ∉ A, read "a is not an element of A".
6 Definition (Conjunction, Disjunction, and Negation) The symbol ∨ is read "or" (disjunction), the symbol ∧ is read "and" (conjunction), and the symbol ¬ is read "not".
7 Definition (Quantifiers) The symbol ∀ is read "for all" (the universal quantifier), and the symbol ∃ is read "there exists" (the existential quantifier).
We have the negation rule

¬(∀x ∈ A, P(x)) ⇔ (∃x ∈ A, ¬P(x)). (1.1)

The familiar number sets are nested:

N ⊆ Z ⊆ Q ⊆ R ⊆ C.

Two sets are equal when each contains the other:

A = B ⇔ (A ⊆ B) ∧ (B ⊆ A).

The union, intersection, and difference of two sets are

A ∪ B = {x : (x ∈ A) ∨ (x ∈ B)},
A ∩ B = {x : (x ∈ A) ∧ (x ∈ B)},
A \ B = {x : (x ∈ A) ∧ (x ∉ B)}.

12 Example Prove that

(A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C).
Solution: We have

x ∈ (A ∪ B) ∩ C ⇔ x ∈ (A ∪ B) ∧ x ∈ C
⇔ (x ∈ A ∨ x ∈ B) ∧ x ∈ C
⇔ (x ∈ A ∧ x ∈ C) ∨ (x ∈ B ∧ x ∈ C)
⇔ (x ∈ A ∩ C) ∨ (x ∈ B ∩ C)
⇔ x ∈ (A ∩ C) ∪ (B ∩ C).
13 Definition Let A1, A2, . . . , An be sets. The Cartesian product of these n sets is defined and denoted by

A1 × A2 × · · · × An = {(a1, a2, . . . , an) : ak ∈ Ak},

that is, the set of all ordered n-tuples whose elements belong to the given sets.

In the particular case when all the Ak are equal to a set A, we write

A × A × · · · × A = Aⁿ.
t ≥ 0 ⇒ (|x| ≤ t ⇔ −t ≤ x ≤ t). (1.4)

a ∈ R ⇒ √(a²) = |a|. (1.5)
Problem 1.1.3 Prove that

X \ (A ∪ B) = (X \ A) ∩ (X \ B).

Problem 1.1.4 Prove that …

Problem 1.1.7 Prove that a set with n ≥ 0 elements has 2ⁿ subsets.

Problem 1.1.8 Let (a, b) ∈ R². Prove that …
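The count in Problem 1.1.7 can be checked experimentally for small n; the sketch below enumerates all subsets of an n-element set (the helper name `all_subsets` is ours):

```python
from itertools import chain, combinations

def all_subsets(s):
    # All subsets of s, from the empty set up to s itself.
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(6):
    # A set with n elements has 2**n subsets.
    assert len(all_subsets(range(n))) == 2 ** n
print(len(all_subsets({1, 2, 3})))  # 8
```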
17 Example Let

2Z = {. . . , −6, −4, −2, 0, 2, 4, 6, . . .} = 0̄

be the set of even integers and let

2Z + 1 = {. . . , −5, −3, −1, 1, 3, 5, . . .} = 1̄

be the set of odd integers.

18 Example Let

3Z = {. . . , −9, −6, −3, 0, 3, 6, 9, . . .} = 0̄

be the integral multiples of 3, let

3Z + 1 = {. . . , −8, −5, −2, 1, 4, 7, . . .} = 1̄

be the integers leaving remainder 1 upon division by 3, and let

3Z + 2 = {. . . , −7, −4, −1, 2, 5, 8, . . .} = 2̄

be the integers leaving remainder 2 upon division by 3.

Notice that 0̄ and 1̄ do not mean the same in examples 17 and 18. Whenever we make use of this notation, the integral divisor must be made explicit.
19 Example Observe that

R = Q ∪ (R \ Q), ∅ = Q ∩ (R \ Q),

which means that the real numbers can be partitioned into the rational and irrational numbers.
20 Definition Let A, B be sets. A relation R is a subset of the Cartesian product A × B. We write the fact that (x, y) ∈ R as x ∼ y.

A relation ∼ on a set A is

reflexive if (∀x ∈ A), x ∼ x,

symmetric if (∀(x, y) ∈ A²), x ∼ y ⇒ y ∼ x,

anti-symmetric if (∀(x, y) ∈ A²), (x ∼ y) ∧ (y ∼ x) ⇒ x = y,

transitive if (∀(x, y, z) ∈ A³), (x ∼ y) ∧ (y ∼ z) ⇒ x ∼ z.

A relation R which is reflexive, symmetric and transitive is called an equivalence relation on A. A relation R which is reflexive, anti-symmetric and transitive is called a partial order on A.
22 Example Let S = {all human beings}, and define ∼ on S as a ∼ b if and only if a and b have the same mother. Then a ∼ a since any human a has the same mother as himself. Similarly, a ∼ b ⇒ b ∼ a and (a ∼ b) ∧ (b ∼ c) ⇒ (a ∼ c). Therefore ∼ is an equivalence relation.
23 Example Let L be the set of all lines on the plane and write l1 ∼ l2 if l1 ∥ l2 (the line l1 is parallel to the line l2). Then ∼ is an equivalence relation on L.
24 Example In Q define the relation a/b ∼ x/y ⇔ ay = bx, where we will always assume that the denominators are non-zero. Then ∼ is an equivalence relation. For, a/b ∼ a/b since ab = ab. Clearly

a/b ∼ x/y ⇒ ay = bx ⇒ xb = ya ⇒ x/y ∼ a/b.

Finally, if a/b ∼ x/y and x/y ∼ s/t then we have ay = bx and xt = sy. Multiplying these two equalities,

ayxt = bxsy.

This gives

ayxt − bxsy = 0 ⇒ xy(at − bs) = 0.

Now if x = 0, we will have a = s = 0, in which case trivially at = bs. Otherwise we must have at − bs = 0 and so a/b ∼ s/t.
27 Definition Let ∼ be an equivalence relation on a set S. Then the equivalence class of a is defined and denoted by

[a] = {x ∈ S : x ∼ a}.
28 Lemma Let ∼ be an equivalence relation on a set S. Then two equivalence classes are either identical or disjoint.

Proof: We prove that if (a, b) ∈ S², and [a] ∩ [b] ≠ ∅, then [a] = [b]. Suppose that x ∈ [a] ∩ [b]. Now x ∈ [a] ⇒ x ∼ a ⇒ a ∼ x, by symmetry. Similarly, x ∈ [b] ⇒ x ∼ b. By transitivity

(a ∼ x) ∧ (x ∼ b) ⇒ a ∼ b.

Now, if y ∈ [b] then b ∼ y. Again by transitivity, a ∼ y. This means that y ∈ [a]. We have shewn that y ∈ [b] ⇒ y ∈ [a] and so [b] ⊆ [a]. In a similar fashion, we may prove that [a] ⊆ [b]. This establishes the result.
and [a] ∩ [b] = ∅ if a ≁ b. This proves the first half of the theorem.

Conversely, let

S = ⋃ₐ Sₐ,  Sₐ ∩ S_b = ∅ if a ≠ b,
Homework
Problem 1.2.1 For (a, b) ∈ (Q \ {0})² define the relation ∼ as follows: a ∼ b ⇔ a/b ∈ Z. Determine whether this relation is reflexive, symmetric, and/or transitive.

Problem 1.2.2 Give an example of a relation on Z \ {0} which is reflexive, but is neither symmetric nor transitive.

Problem 1.2.3 Define the relation ∼ in R by x ∼ y ⇔ xeʸ = yeˣ. Prove that ∼ is an equivalence relation.

Problem 1.2.4 Define the relation ∼ in Q by x ∼ y ⇔ ∃h ∈ Z such that x = (3y + h)/3. [A] Prove that ∼ is an equivalence relation. [B] Determine [x], the equivalence class of x ∈ Q. [C] Is 3/2 ∼ 5/4?
A binary operation on sets S, T is a function

⊗ : S × S → T, (a, b) ↦ ⊗(a, b).
We usually use the infix notation a ⊗ b rather than the prefix notation ⊗(a, b). If S = T then we say that the binary operation is internal or closed, and if S ≠ T then we say that it is external. If

a ⊗ b = b ⊗ a

then we say that the operation ⊗ is commutative, and if

a ⊗ (b ⊗ c) = (a ⊗ b) ⊗ c,

we say that it is associative. If ⊗ is associative, then we can write

a ⊗ (b ⊗ c) = (a ⊗ b) ⊗ c = a ⊗ b ⊗ c,

without ambiguity.

We usually omit the ⊗ sign and use juxtaposition to indicate the operation ⊗. Thus we write ab instead of a ⊗ b.
31 Example The operation + (ordinary addition) on the set Z is a commutative and associative closed binary operation. Likewise, the operation a ⊗ b = 1 + ab on Z is commutative, since

a ⊗ b = 1 + ab = 1 + ba = b ⊗ a.
34 Definition Let S be a set and ⊗ : S × S → S be a closed binary operation. The couple ⟨S, ⊗⟩ is called an algebra.

When we desire to drop the ⊗ sign and indicate the binary operation by juxtaposition, we simply speak of "the algebra S."

35 Example Both ⟨Z, +⟩ and ⟨Q, ·⟩ are algebras. Here + is the standard addition of real numbers and · is the standard multiplication.
37 Example (Putnam Exam, 1972) Let S be a set and let ∗ be a binary operation on S satisfying the laws ∀(x, y) ∈ S²:

x ∗ (x ∗ y) = y, (1.7)

(y ∗ x) ∗ x = y. (1.8)

Shew that ∗ is commutative, but not necessarily associative.

Solution: By (1.8),

x ∗ y = ((x ∗ y) ∗ x) ∗ x.

By (1.8) again,

((x ∗ y) ∗ x) ∗ x = ((x ∗ y) ∗ ((x ∗ y) ∗ y)) ∗ x.
By (1.7),

((x ∗ y) ∗ ((x ∗ y) ∗ y)) ∗ x = (y) ∗ x = y ∗ x,

which is what we wanted to prove.
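One concrete model satisfying (1.7) and (1.8) is x ∗ y = −x − y on R (our choice of model, not one given in the text); a quick check confirms it is commutative but not associative:

```python
def star(x, y):
    # A model of the Putnam operation: x * y = -x - y.
    return -x - y

for x in range(-3, 4):
    for y in range(-3, 4):
        assert star(x, star(x, y)) == y   # law (1.7)
        assert star(star(y, x), x) == y   # law (1.8)
        assert star(x, y) == star(y, x)   # commutativity

# Associativity fails in general:
print(star(star(1, 2), 3), star(1, star(2, 3)))  # 0 4
```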
40 Example In ⟨Z, +⟩ the element 0 ∈ Z acts as an identity, and in ⟨Q, ·⟩ the element 1 ∈ Q acts as an identity.

An element b ∈ S is said to be right cancellable if

x ⊗ b = y ⊗ b ⇒ x = y.

Finally, we say an element c ∈ S is cancellable or regular if it is both left and right cancellable.
42 Definition Let ⟨S, ⊗⟩ and ⟨S, ⊕⟩ be algebras. We say that ⊗ is left-distributive with respect to ⊕ if

x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z),

and right-distributive with respect to ⊕ if

(y ⊕ z) ⊗ x = (y ⊗ x) ⊕ (z ⊗ x).

We say that ⊗ is distributive with respect to ⊕ if it is both left and right distributive with respect to ⊕.
Homework
Problem 1.3.1 Let

S = {x ∈ Z : ∃(a, b, c) ∈ Z³, x = a³ + b³ + c³ − 3abc}.

Prove that S is closed under multiplication, that is, if x ∈ S and y ∈ S then xy ∈ S.

Problem 1.3.2 Let ⟨S, ⊗⟩ be an associative algebra, let a ∈ S be a fixed element and define the closed binary operation ⊤ by

x ⊤ y = x ⊗ a ⊗ y.

Prove that ⊤ is also associative over S × S.

Problem 1.3.4 On R \ {1} define the binary operation

a ⊗ b = a + b − ab,

where juxtaposition means ordinary multiplication and + is the ordinary addition of real numbers. Clearly ⊗ is a closed binary operation. Prove that ⊗ is both commutative and associative. Find an element e ∈ R \ {1} such that (∀a ∈ R \ {1}) (e ⊗ a = a). Given e as above and an arbitrary element a ∈ R \ {1}, solve the equation a ⊗ b = e for b.

Problem 1.3.6 Define the symmetric difference of the sets A, B as A△B = (A \ B) ∪ (B \ A). Prove that △ is commutative and associative.
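Before attempting a proof of the commutativity and associativity claims in Problem 1.3.4, they can be sanity-checked numerically (exact fractions avoid floating-point noise; the helper name `op` is ours):

```python
from fractions import Fraction
from itertools import product

def op(a, b):
    # The operation of Problem 1.3.4: a (x) b = a + b - ab.
    return a + b - a * b

vals = [Fraction(v) for v in (-2, 0, 3)] + [Fraction(1, 2)]  # avoid 1, which is excluded
for a, b, c in product(vals, repeat=3):
    assert op(a, b) == op(b, a)                # commutative
    assert op(a, op(b, c)) == op(op(a, b), c)  # associative
```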
1.4 Zn
43 Theorem (Division Algorithm) Let n > 0 be an integer. Then for any integer a there exist unique integers q (called the quotient) and r (called the remainder) such that a = qn + r and 0 ≤ r < n.

Proof: In the proof of this theorem, we use the following property of the integers, called the well-ordering principle: any non-empty set of non-negative integers has a smallest element.
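Python's built-in `divmod` follows exactly this convention for a positive modulus, returning a remainder with 0 ≤ r < n even when a is negative:

```python
a, n = -17, 5
q, r = divmod(a, n)  # quotient and remainder with 0 <= r < n
assert a == q * n + r and 0 <= r < n
print(q, r)  # -4 3
```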
44 Example If n = 5 the Division Algorithm says that we can arrange all the integers in five columns as follows:

  ⋮    ⋮    ⋮    ⋮    ⋮
−10   −9   −8   −7   −6
 −5   −4   −3   −2   −1
  0    1    2    3    4
  5    6    7    8    9
  ⋮    ⋮    ⋮    ⋮    ⋮

The arrangement above shews that any integer comes in one of 5 flavours: those leaving remainder 0 upon division by 5, those leaving remainder 1 upon division by 5, etc.
Let n be a fixed positive integer. Define the relation ∼ by x ∼ y if and only if x and y leave the same remainder upon division by n. Then clearly ∼ is an equivalence relation. As such it partitions the set of integers Z into disjoint equivalence classes by Theorem 29. This motivates the following definition.
45 Definition Let n be a positive integer. The n residue classes upon division by n are

0̄ = nZ, 1̄ = nZ + 1, 2̄ = nZ + 2, . . . , n − 1̄ = nZ + n − 1.

The set of all residue classes modulo n is

Zn = {0̄, 1̄, . . . , n − 1̄}.
Our interest is now to define some sort of addition and some sort of multiplication in Zn.

46 Theorem (Addition and Multiplication Modulo n) Let n be a positive integer. For (ā, b̄) ∈ (Zn)² define ā + b̄ = r̄, where r is the remainder of a + b upon division by n, and ā · b̄ = t̄, where t is the remainder of ab upon division by n. Then these operations are well defined.

Proof: We need to prove that given arbitrary representatives of the residue classes, we always obtain the same result from our operations. That is, if ā = ā′ and b̄ = b̄′ then we have ā + b̄ = ā′ + b̄′ and ā · b̄ = ā′ · b̄′.

Now

ā = ā′ ⇒ ∃(q, q′) ∈ Z², ∃r ∈ N, a = qn + r, a′ = q′n + r, 0 ≤ r < n,

b̄ = b̄′ ⇒ ∃(q₁, q₁′) ∈ Z², ∃r₁ ∈ N, b = q₁n + r₁, b′ = q₁′n + r₁, 0 ≤ r₁ < n.

Hence

a + b = (q + q₁)n + r + r₁,  a′ + b′ = (q′ + q₁′)n + r + r₁,

meaning that both a + b and a′ + b′ leave the same remainder upon division by n, and therefore ā + b̄ = ā′ + b̄′.

Similarly,

ab = (qq₁n + qr₁ + q₁r)n + rr₁,  a′b′ = (q′q₁′n + q′r₁ + q₁′r)n + rr₁,

and so both ab and a′b′ leave the same remainder upon division by n, and therefore ā · b̄ = ā′ · b̄′.
47 Example Let

Z6 = {0̄, 1̄, 2̄, 3̄, 4̄, 5̄}

be the residue classes modulo 6. Construct the natural addition + table for Z6. Also, construct the natural multiplication · table for Z6.

Solution: The required tables are given in Tables 1.1 and 1.2.

Table 1.1: Addition table for Z6.

+ | 0 1 2 3 4 5
0 | 0 1 2 3 4 5
1 | 1 2 3 4 5 0
2 | 2 3 4 5 0 1
3 | 3 4 5 0 1 2
4 | 4 5 0 1 2 3
5 | 5 0 1 2 3 4

Table 1.2: Multiplication table for Z6.

· | 0 1 2 3 4 5
0 | 0 0 0 0 0 0
1 | 0 1 2 3 4 5
2 | 0 2 4 0 2 4
3 | 0 3 0 3 0 3
4 | 0 4 2 0 4 2
5 | 0 5 4 3 2 1

We notice that even though 2̄ ≠ 0̄ and 3̄ ≠ 0̄ we have 2̄ · 3̄ = 0̄ in Z6. This prompts the following definition.
49 Example Let

Z7 = {0̄, 1̄, 2̄, 3̄, 4̄, 5̄, 6̄}

be the residue classes modulo 7. Construct the natural addition + table for Z7. Also, construct the natural multiplication · table for Z7.

Solution: The required tables are given in Tables 1.3 and 1.4.

Table 1.3: Addition table for Z7.

+ | 0 1 2 3 4 5 6
0 | 0 1 2 3 4 5 6
1 | 1 2 3 4 5 6 0
2 | 2 3 4 5 6 0 1
3 | 3 4 5 6 0 1 2
4 | 4 5 6 0 1 2 3
5 | 5 6 0 1 2 3 4
6 | 6 0 1 2 3 4 5

Table 1.4: Multiplication table for Z7.

· | 0 1 2 3 4 5 6
0 | 0 0 0 0 0 0 0
1 | 0 1 2 3 4 5 6
2 | 0 2 4 6 1 3 5
3 | 0 3 6 2 5 1 4
4 | 0 4 1 5 2 6 3
5 | 0 5 3 1 6 4 2
6 | 0 6 5 4 3 2 1
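Tables like 1.1–1.4 can be generated for any modulus with a short comprehension; the helper names `add_table` and `mul_table` are ours:

```python
def add_table(n):
    # Entry (i, j) is the remainder of i + j upon division by n.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def mul_table(n):
    # Entry (i, j) is the remainder of i * j upon division by n.
    return [[(i * j) % n for j in range(n)] for i in range(n)]

assert mul_table(6)[2][3] == 0                            # 2 * 3 = 0 in Z6: zero divisors
assert all(0 not in row[1:] for row in mul_table(7)[1:])  # no zero divisors in Z7
```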
In Z11, since 45 ≡ 1 (mod 11) and 27 ≡ 5 (mod 11), the equation

45x = 27

reduces to

x = 5̄.
51 Definition Let a, b be integers with at least one of them different from 0. The greatest common divisor d of a and b, denoted by d = gcd(a, b), is the largest positive integer that divides both a and b.
52 Theorem (Bachet-Bezout Theorem) The greatest common divisor of any two integers a, b can be writ-
ten as a linear combination of a and b, i.e., there are integers x, y with
gcd(a, b) = ax + by.
We first prove that d divides a. By the Division Algorithm, we can find integers q, r, 0 ≤ r < d, such that a = dq + r. Then

r = a − dq = a(1 − qx₀) − bqy₀.

If r > 0, then r ∈ A is smaller than the smallest element of A, namely d, a contradiction. Thus r = 0. This entails dq = a, i.e. d divides a. We can similarly prove that d divides b.

Assume that t divides a and b. Then a = tm, b = tn for integers m, n. Hence d = ax₀ + by₀ = t(mx₀ + ny₀), that is, t divides d. The theorem is thus proved.
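The Bachet-Bezout coefficients x, y can be computed with the extended Euclidean algorithm; a recursive sketch for non-negative inputs (the function name `egcd` is ours):

```python
def egcd(a, b):
    # Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y.
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = egcd(23, 29)
assert g == 1 and 23 * x + 29 * y == 1
print(g, x, y)  # 1 -5 4
```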
Homework
Problem 1.4.1 Write the addition and multiplication tables of Z11 under natural addition and multiplication modulo 11.

Problem 1.4.2 Solve the equation 3x² − 5x + 1 = 0 in Z11.

Problem 1.4.4 Prove that if n > 0 is a composite integer, Zn has zero divisors.
1.5 Fields
53 Definition Let F be a set having at least two elements 0F and 1F (0F ≠ 1F) together with two operations · (multiplication, which we usually represent via juxtaposition) and + (addition). A field ⟨F, ·, +⟩ is a triplet satisfying the following axioms ∀(a, b, c) ∈ F³:

F1 Addition and multiplication are associative:

a + (b + c) = (a + b) + c, a(bc) = (ab)c (1.9)

F2 Addition and multiplication are commutative:

a + b = b + a, ab = ba (1.10)

F3 Multiplication distributes over addition:

a(b + c) = ab + ac (1.11)

F4 0F is the additive identity and 1F is the multiplicative identity:

a + 0F = a (1.12)

a1F = a (1.13)

F5 Every element has an additive inverse:

∃ −a ∈ F, a + (−a) = (−a) + a = 0F (1.14)

F6 Every non-zero element a has a multiplicative inverse:

∃ a⁻¹ ∈ F, aa⁻¹ = a⁻¹a = 1F (1.15)

54 Theorem A field has no zero divisors: if ab = 0F and a ≠ 0F, then b = 0F.

Proof: Since a ≠ 0F, it has a multiplicative inverse a⁻¹. Multiplying by it,

a⁻¹ab = a⁻¹0F ⇒ b = 0F.

This means that the only way of obtaining a zero product is if one of the factors is 0F.
55 Example ⟨Q, ·, +⟩, ⟨R, ·, +⟩, and ⟨C, ·, +⟩ are all fields. The multiplicative identity in each case is 1 and the additive identity is 0.
56 Example Let

Q(√2) = {a + √2b : (a, b) ∈ Q²},

with addition defined as

(a + √2b) + (c + √2d) = (a + c) + √2(b + d),

and multiplication as

(a + √2b)(c + √2d) = (ac + 2bd) + √2(ad + bc).

Then ⟨Q(√2), ·, +⟩ is a field. Observe that 0F = 0, 1F = 1, that the additive inverse of a + √2b is −a − √2b, and that the multiplicative inverse of a + √2b, (a, b) ≠ (0, 0), is

(a + √2b)⁻¹ = 1/(a + √2b) = (a − √2b)/(a² − 2b²) = a/(a² − 2b²) − √2 · b/(a² − 2b²).

Here a² − 2b² ≠ 0 since √2 is irrational.
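The inversion formula in Q(√2) can be exercised with exact rationals; the class name `QRoot2` and its methods are our own scaffolding:

```python
from fractions import Fraction

class QRoot2:
    """The number a + b*sqrt(2) with rational a, b."""
    def __init__(self, a, b):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, o):
        # (a + sqrt2 b)(c + sqrt2 d) = (ac + 2bd) + sqrt2 (ad + bc)
        return QRoot2(self.a * o.a + 2 * self.b * o.b,
                      self.a * o.b + self.b * o.a)

    def inverse(self):
        d = self.a ** 2 - 2 * self.b ** 2  # non-zero since sqrt(2) is irrational
        return QRoot2(self.a / d, -self.b / d)

x = QRoot2(1, 1)       # 1 + sqrt(2)
p = x * x.inverse()
print(p.a, p.b)        # 1 0
```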
57 Theorem If p is a prime, ⟨Zp, ·, +⟩ is a field, under · multiplication modulo p and + addition modulo p.

Proof: Clearly the additive identity is 0̄ and the multiplicative identity is 1̄. The additive inverse of ā is the class of p − a. We must prove that every ā ∈ Zp \ {0̄} has a multiplicative inverse. Such an a satisfies gcd(a, p) = 1 and by the Bachet-Bezout Theorem 52, there exist integers x, y with px + ay = 1. Reducing modulo p, this gives

1̄ = ā ȳ,

whence (ā)⁻¹ = ȳ.
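In Python 3.8+, `pow(a, -1, p)` computes exactly this modular inverse; a quick check that every non-zero class in Z7 is invertible:

```python
p = 7
for a in range(1, p):
    inv = pow(a, -1, p)  # modular inverse, exists since gcd(a, p) = 1
    assert (a * inv) % p == 1
print([pow(a, -1, p) for a in range(1, p)])  # [1, 4, 5, 2, 3, 6]
```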
If the field does not have characteristic p ≠ 0 then we say that it is of characteristic 0. Clearly Q, R and C are of characteristic 0, while Zp, for prime p, is of characteristic p. The characteristic of a field is either 0 or a prime, as the following argument shews.

Proof: If the characteristic of the field is 0, there is nothing to prove. Let p be the least positive integer for which ∀a ∈ F, pa = 0F. Let us prove that p must be a prime. Assume instead that we had p = st with integers s > 1, t > 1. Take a = 1F. Then we must have (st)1F = 0F, which entails (s1F)(t1F) = 0F. But in a field there are no zero divisors by Theorem 54, hence either s1F = 0F or t1F = 0F. But either of these equalities contradicts the minimality of p. Hence p is a prime.
Homework
Problem 1.5.1 Consider the set of numbers

Q(√2, √3, √6) = {a + b√2 + c√3 + d√6 : (a, b, c, d) ∈ Q⁴}.

Assume that Q(√2, √3, √6) is a field under ordinary addition and multiplication. What is the multiplicative inverse of the element √2 + 2√3 + 3√6?

Problem 1.5.2 Let F be a field and a, b two non-zero elements of F. Prove that

(−a)⁻¹ = −(a⁻¹).

Problem 1.5.4 Let F be a field and a, b two non-zero elements of F. Prove that

ab⁻¹ = (−a)(−b⁻¹).
1.6 Functions
60 Definition By a function or a mapping from one set to another, we mean a rule or mechanism that assigns to every input element of the first set a unique output element of the second set. We shall call the set of inputs the domain of the function, the set of possible outputs the target set of the function, and the set of actual outputs the image of the function.

We will generally write a function f in the form

f : D → T, x ↦ f(x).

Here f is the name of the function, D is its domain, T is its target set, x is the name of a typical input and f(x) is the output or image of x under f. We call the assignment x ↦ f(x) the assignment rule of the function. Sometimes x is also called the independent variable. The set f(D) = {f(a) | a ∈ D} is called the image of f. Observe that f(D) ⊆ T.
61 Definition A function f : X → Y, x ↦ f(x), is said to be injective or one-to-one if ∀(a, b) ∈ X², we have

a ≠ b ⇒ f(a) ≠ f(b).
62 Example The function represented in diagram 1.4 is an injective function. The function represented in diagram 1.5, however, is not injective, since it sends both 3 and 1 to 4, even though 3 ≠ 1.
63 Example Prove that

t : R \ {1} → R \ {1}, x ↦ (x + 1)/(x − 1)

is an injection.

Solution: Assume t(a) = t(b). Then

t(a) = t(b) ⇒ (a + 1)/(a − 1) = (b + 1)/(b − 1)
⇒ (a + 1)(b − 1) = (b + 1)(a − 1)
⇒ ab − a + b − 1 = ab − b + a − 1
⇒ 2a = 2b
⇒ a = b.
[Diagrams 1.6 and 1.7 shew arrow diagrams of two functions; in diagram 1.7 the element 8 of the target set has no preimage.]
64 Definition A function is surjective or onto if its image coincides with its target set.

It is easy to see that a graphical criterion for a function to be surjective is that every horizontal line passing through a point of the target set (a subset of the y-axis) of the function must also meet the curve.
65 Example The function represented by diagram 1.6 is surjective. The function represented by diagram 1.7 is not surjective, as 8 does not have a preimage.
66 Example Prove that

t : R → R, x ↦ x³

is a surjection.

Solution: Since the graph of t is that of a cubic polynomial with only one zero, every horizontal line passing through a point in R will eventually meet the graph of t, whence t is surjective. To prove this analytically, proceed as follows. We must prove that (∀b ∈ R)(∃a ∈ R) such that t(a) = b. We choose a so that a = b^{1/3}. Then

t(a) = t(b^{1/3}) = (b^{1/3})³ = b,

and so t is indeed surjective.
Homework
Problem 1.6.1 Prove that

h : R → R, x ↦ x³

is an injection.
Chapter 2
Matrices and Matrix Operations

As a shortcut, we often use the notation A = [aij] to denote the matrix A with entries aij. Notice that when we refer to the matrix we put surrounding brackets, as in [aij], and when we refer to a specific entry we do not use the surrounding brackets, as in aij.
69 Example

A =
[ 0 1 1 ]
[ 1 2 3 ]

is a 2 × 3 matrix and

B =
[ 2 1 ]
[ 1 2 ]
[ 0 3 ]

is a 3 × 2 matrix.
70 Example Write out explicitly the 4 × 4 matrix A = [aij] where aij = i² − j².

Solution: This is

A =
[ 1²−1²  1²−2²  1²−3²  1²−4² ]   [  0  −3  −8 −15 ]
[ 2²−1²  2²−2²  2²−3²  2²−4² ] = [  3   0  −5 −12 ]
[ 3²−1²  3²−2²  3²−3²  3²−4² ]   [  8   5   0  −7 ]
[ 4²−1²  4²−2²  4²−3²  4²−4² ]   [ 15  12   7   0 ]
71 Definition Let ⟨F, ·, +⟩ be a field. We denote by Mm×n(F) the set of all m × n matrices with entries over F. Mn×n(F) is, in particular, the set of all square matrices of size n with entries over F.

72 Definition The m × n zero matrix 0m×n ∈ Mm×n(F) is the matrix with 0F's everywhere:

0m×n =
[ 0F 0F ··· 0F ]
[ 0F 0F ··· 0F ]
[ ⋮  ⋮   ⋱  ⋮ ]
[ 0F 0F ··· 0F ]

73 Definition The n × n identity matrix In ∈ Mn×n(F) is the matrix with 1F's on the main diagonal and 0F's everywhere else:

In =
[ 1F 0F ··· 0F ]
[ 0F 1F ··· 0F ]
[ ⋮  ⋮   ⋱  ⋮ ]
[ 0F 0F ··· 1F ]
74 Definition (Matrix Addition and Multiplication of a Matrix by a Scalar) Let A = [aij] ∈ Mm×n(F), B = [bij] ∈ Mm×n(F) and λ ∈ F. The matrix A + B is the matrix C ∈ Mm×n(F) with entries C = [cij] where cij = aij + bij, and the matrix λA is the matrix [λaij].
75 Example For

A =
[  1 1 ]
[ −1 1 ]
[  0 2 ]

and

B =
[ −1  1 ]
[  2  1 ]
[  0 −1 ]

we have

A + 2B =
[ −1 3 ]
[  3 3 ]
[  0 0 ]
A + (B + C) = (A + B) + C (2.3)

M6 Distributive law

λ(A + B) = λA + λB (2.6)

M7 Distributive law

(λ + μ)A = λA + μA (2.7)

M8

1F A = A (2.8)

M9

λ(μA) = (λμ)A (2.9)

Proof: The theorem follows at once by reducing each statement to an entry-wise verification and appealing to the field axioms.
Homework
Problem 2.1.1 Write out explicitly the 3 × 3 matrix A = [aij] where aij = iʲ.
Problem 2.1.2 Write out explicitly the 3 × 3 matrix A = [aij] where aij = ij.
Problem 2.1.3 Find x and y such that

[ 3 x 1 ]     [ 2 1 3 ]   [  7 3 7 ]
[ 1 2 0 ] + 2 [ 5 x 4 ] = [ 11 y 8 ]

Problem 2.1.4 Find matrices A and B such that

2A − 5B =
[ 1 2 ]
[ 0 1 ]

2A + 6B =
[ 4 2 ]
[ 6 0 ]
Problem 2.1.7 A person goes along the rows of a movie theater and asks the tallest person of each row to stand up. Then he selects the shortest of these people, whom we will call the shortest giant. Another person goes along the rows and asks the shortest person of each row to stand up, and from these he selects the tallest, whom we will call the tallest midget. Who is taller, the tallest midget or the shortest giant?
Problem 2.1.8 (Putnam Exam, 1959) Choose five elements from the matrix

[ 11 17 25 19 16 ]
[ 24 10 13 15  3 ]
[ 12  5 14  2 18 ]
[ 23  4  1  8 22 ]
[  6 20  7 21  9 ]

no two coming from the same row or column, so that the minimum of these five elements is as large as possible.
Observe that we use juxtaposition rather than a special symbol to denote matrix multiplication. This will simplify notation. In order to obtain the ij-th entry of the matrix AB we multiply elementwise the i-th row of A by the j-th column of B. Observe that AB is an m × p matrix.
78 Example Let

M =
[ 1 2 ]
[ 3 4 ]

and

N =
[ 5 6 ]
[ 7 8 ]

be matrices over R. Then

MN =
[ 1·5 + 2·7  1·6 + 2·8 ]   [ 19 22 ]
[ 3·5 + 4·7  3·6 + 4·8 ] = [ 43 50 ]

and

NM =
[ 5·1 + 6·3  5·2 + 6·4 ]   [ 23 34 ]
[ 7·1 + 8·3  7·2 + 8·4 ] = [ 31 46 ]

Hence, in particular, matrix multiplication is not necessarily commutative.
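The computation in the example above is a direct implementation of the row-by-column rule; `matmul` is our helper name:

```python
def matmul(A, B):
    # (i, j) entry: dot product of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

M = [[1, 2], [3, 4]]
N = [[5, 6], [7, 8]]
print(matmul(M, N))  # [[19, 22], [43, 50]]
print(matmul(N, M))  # [[23, 34], [31, 46]]
```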
79 Example We have

[ 1 2 1 ] [  1  1  1 ]   [ 0 0 0 ]
[ 1 2 1 ] [ −1 −1 −1 ] = [ 0 0 0 ]
[ 1 2 1 ] [  1  1  1 ]   [ 0 0 0 ]

over R. Observe then that the product of two non-zero matrices may be the zero matrix.
Proof: To shew this we only need to consider the ij-th entry of each side, appeal to the associativity of the underlying field F, and verify that both sides are indeed equal to

Σ_{k=1}^{n} Σ_{k′=1}^{r} aik bkk′ ck′j.

By virtue of associativity, a square matrix commutes with its powers, that is, if A ∈ Mn×n(F) and (r, s) ∈ N², then (Aʳ)(Aˢ) = (Aˢ)(Aʳ) = A^{r+s}.
Solution: The assertion is trivial for n = 1. Assume its truth for n − 1, that is, assume A^{n−1} = 3^{n−2}A. Observe that

A² =
[ 1 1 1 ] [ 1 1 1 ]   [ 3 3 3 ]
[ 1 1 1 ] [ 1 1 1 ] = [ 3 3 3 ] = 3A.
[ 1 1 1 ] [ 1 1 1 ]   [ 3 3 3 ]

Now

Aⁿ = AA^{n−1} = A(3^{n−2}A) = 3^{n−2}A² = 3^{n−2} · 3A = 3^{n−1}A,

and so the assertion is proved by induction.
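The induction can be spot-checked: repeatedly multiplying the all-ones 3 × 3 matrix by itself reproduces 3^{n−1}A (the helper `matmul` is ours):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1] * 3 for _ in range(3)]
P = A
for n in range(2, 7):
    P = matmul(P, A)
    # After this step P = A**n, which should equal 3**(n-1) * A.
    assert P == [[3 ** (n - 1)] * 3 for _ in range(3)]
```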
83 Theorem Let A ∈ Mn×n(F). Then there is a unique identity matrix. That is, if E ∈ Mn×n(F) is such that AE = EA = A, then E = In.

Proof: It is clear that for any A ∈ Mn×n(F), AIn = InA = A. Now because E is an identity, EIn = In. Because In is an identity, EIn = E. Whence

In = EIn = E,

demonstrating uniqueness.
84 Example Let A = [aij] ∈ Mn×n(R) be such that aij = 0 for i > j and aij = 1 if i ≤ j. Find A².

Solution: Put A² = B = [bij]. Observe that the i-th row of A has i − 1 0's followed by n − i + 1 1's, and the j-th column of A has j 1's followed by n − j 0's. Therefore if i − 1 > j, then bij = 0. If i ≤ j + 1, then

bij = Σ_{k=i}^{j} aik akj = j − i + 1.
Homework
Problem 2.2.1 Determine the product
1 1 2 1 1 1
1 1 0 1 1 2
Problem 2.2.2 Let

A =
[ 1 0 0 ]
[ 1 1 0 ]
[ 1 1 1 ]

and

B =
[ a b c ]
[ c a b ]
[ b c a ]

Find AB and BA.
Problem 2.2.3 Find a + b + c if

[ 1 2 3 ] [ 1 1 1 ]   [ a a a ]
[ 2 3 1 ] [ 2 2 2 ] = [ b b b ]
[ 3 1 2 ] [ 3 3 3 ]   [ c c c ]
Problem 2.2.4 Let

N =
[ 0 2 3 4 ]
[ 0 0 2 3 ]
[ 0 0 0 2 ]
[ 0 0 0 0 ]

Find N²⁰⁰⁸.
Problem 2.2.7 A square matrix X is called idempotent if X² = X. Prove that if AB = A and BA = B then A and B are idempotent.
Problem 2.2.10 Prove or disprove! If (A, B) ∈ (Mn×n(F))² are such that AB = 0n, then also BA = 0n.

Problem 2.2.11 Prove or disprove! For all matrices (A, B) ∈ (Mn×n(F))²,

(A + B)(A − B) = A² − B².
Problem 2.2.12 Consider the matrix

A =
[ 1 2 ]
[ 3 x ]

where x is a real number. Find the value of x such that there are non-zero 2 × 2 matrices B with

AB =
[ 0 0 ]
[ 0 0 ]
Problem 2.2.13 Prove, using mathematical induction, that

[ 1 1 ]ⁿ   [ 1 n ]
[ 0 1 ]  = [ 0 1 ]
Problem 2.2.14 Let

M =
[ 1 1 ]
[ 1 1 ]

Find M⁶.

Problem 2.2.15 Let

A =
[ 0 3 ]
[ 2 0 ]

Find, with proof, A²⁰⁰³.
Problem 2.2.16 Let (A, B, C) ∈ Ml×m(F) × Mm×n(F) × Mm×n(F) and λ ∈ F. Prove that

A(B + C) = AB + AC,
(A + B)C = AC + BC,
λ(AB) = (λA)B = A(λB).
Problem 2.2.18 A matrix A = [aij] ∈ Mn×n(R) is said to be checkered if aij = 0 when (j − i) is odd. Prove that the sum and the product of two checkered matrices are checkered.
Problem 2.2.19 Let

A =
[ 1 1 1 ]
[ 0 1 1 ]
[ 0 0 1 ]

Prove that

Aⁿ =
[ 1 n n(n+1)/2 ]
[ 0 1 n        ]
[ 0 0 1        ]
Problem 2.2.20 Let (A, B) ∈ (Mn×n(F))² and let k be a positive integer such that Aᵏ = 0n. If AB = B prove that B = 0n.
Problem 2.2.21 Let

A =
[ a b ]
[ c d ]

Demonstrate that

A² − (a + d)A + (ad − bc)I₂ = 0₂.

Problem 2.2.22 Let A ∈ M₂(F) and let k ∈ Z, k > 2. Prove that Aᵏ = 0₂ if and only if A² = 0₂.
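Problem 2.2.21 concerns the 2 × 2 instance of the Cayley–Hamilton theorem, the identity A² − (a + d)A + (ad − bc)I₂ = 0₂. A numeric spot check on random integer matrices (helper names ours):

```python
import random

def check(a, b, c, d):
    # Verify A^2 - (a + d) A + (ad - bc) I2 = 0 for A = [[a, b], [c, d]].
    A2 = [[a * a + b * c, a * b + b * d],
          [c * a + d * c, c * b + d * d]]
    t, det = a + d, a * d - b * c
    Z = [[A2[0][0] - t * a + det, A2[0][1] - t * b],
         [A2[1][0] - t * c, A2[1][1] - t * d + det]]
    return Z == [[0, 0], [0, 0]]

random.seed(1)
assert all(check(*[random.randint(-9, 9) for _ in range(4)]) for _ in range(100))
```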
Problem 2.2.27 Let X be a 2 × 2 matrix with real number entries. Solve the equation

X² + X =
[ 1 1 ]
[ 1 1 ]
Problem 2.2.28 Prove, by means of induction, that for the following n × n matrices we have

[ 1 1 1 ··· 1 ]³   [ 1 3 6 ··· n(n+1)/2     ]
[ 0 1 1 ··· 1 ]    [ 0 1 3 ··· (n−1)n/2     ]
[ 0 0 1 ··· 1 ]  = [ 0 0 1 ··· (n−2)(n−1)/2 ]
[ ⋮ ⋮ ⋮  ⋱ ⋮ ]    [ ⋮ ⋮ ⋮  ⋱      ⋮       ]
[ 0 0 0 ··· 1 ]    [ 0 0 0 ···      1       ]
Proof: The first assertion is trivial. To prove the second, observe that AB = [Σ_{k=1}^{n} aik bkj] and BA = [Σ_{k=1}^{n} bik akj]. Then

tr (AB) = Σ_{i=1}^{n} Σ_{k=1}^{n} aik bki = Σ_{k=1}^{n} Σ_{i=1}^{n} bki aik = tr (BA),

whence the theorem follows.
87 Example Let A ∈ Mn×n(R). Shew that A can be written as the sum of two matrices whose trace is different from 0.

Solution: Write

A = (A − λIn) + λIn.

Now, tr (A − λIn) = tr (A) − nλ and tr (λIn) = nλ. Thus it suffices to take λ ≠ tr (A)/n, λ ≠ 0. Since R has infinitely many elements, we can find such a λ.
88 Example Let A, B be square matrices of the same size and over the same field of characteristic 0. Is it possible that AB − BA = In? Prove or disprove!
89 Definition The transpose of a matrix A = [aij] ∈ Mm×n(F) is the matrix Aᵀ = B = [bij] ∈ Mn×m(F), where bij = aji.
90 Example If

M =
[ a b c ]
[ d e f ]
[ g h i ]

with entries in R, then

Mᵀ =
[ a d g ]
[ b e h ]
[ c f i ]
91 Theorem Let

A = [aij] ∈ Mm×n(F), B = [bij] ∈ Mm×n(F), C = [cij] ∈ Mn×r(F), λ ∈ F, u ∈ N.

Then

Aᵀᵀ = A, (2.12)

(A + B)ᵀ = Aᵀ + Bᵀ, (2.13)

(AC)ᵀ = CᵀAᵀ, (2.14)

(Aᵘ)ᵀ = (Aᵀ)ᵘ. (2.15)

Proof: The first two assertions are obvious, and the fourth follows from the third by using induction. To prove the third, put Aᵀ = (αij), αij = aji, Cᵀ = (γij), γij = cji, AC = (uij) and CᵀAᵀ = (vij). Then

uij = Σ_{k=1}^{n} aik ckj = Σ_{k=1}^{n} αki γjk = Σ_{k=1}^{n} γjk αki = vji,

whence the theorem follows.
92 Definition A square matrix A ∈ Mn×n(F) is symmetric if Aᵀ = A. A matrix B ∈ Mn×n(F) is skew-symmetric if Bᵀ = −B.

93 Example Let A, B be square matrices of the same size, with A symmetric and B skew-symmetric. Prove that the matrix A²BA² is skew-symmetric.

Solution: We have

(A²BA²)ᵀ = (A²)ᵀBᵀ(A²)ᵀ = A²(−B)A² = −A²BA².
94 Theorem Let F be a field of characteristic different from 2. Then any square matrix A can be written as the sum of a symmetric and a skew-symmetric matrix.

Proof: Observe that

A = ½(A + Aᵀ) + ½(A − Aᵀ),

that

(A + Aᵀ)ᵀ = Aᵀ + Aᵀᵀ = A + Aᵀ,

and that

(A − Aᵀ)ᵀ = Aᵀ − Aᵀᵀ = −(A − Aᵀ),

so the first summand is symmetric and the second skew-symmetric.

95 Example Find, with proof, a square matrix A with entries in Z₂ such that A is not the sum of a symmetric and an anti-symmetric matrix.
Homework

Problem 2.3.1 Write

A =
[ 1 2 3 ]
[ 2 3 1 ] ∈ M3×3(R)
[ 3 1 2 ]

as the sum of two 3 × 3 matrices E1, E2, with tr (E2) = 10.

Problem 2.3.4 Let (A, B) ∈ (M2×2(R))² be symmetric matrices. Must their product AB be symmetric? Prove or disprove!

Problem 2.3.5 Given square matrices (A, B) ∈ (M7×7(R))² such that tr (A²) = tr (B²) = 1, and (A − B)² = 3I7, find tr (BA).

Problem 2.3.8 Let A, B, C, D be square matrices of the same size such that

AC + DB = In,
CA + BD = 0n. …

Problem 2.3.9 Prove or disprove! If (A, B, C) ∈ (M3×3(F))³ then tr (ABC) = tr (BAC).
Special Matrices 31
Problema 2.3.10 Let A be a square matrix. Prove that the matrix AA^T is symmetric, and that
    tr (AA^T) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij².
is the set {0, 2, 7}. The counter diagonal of A is the set {5, 2, 9}.
98 Definio A square matrix is a diagonal matrix if every entry off its main diagonal is 0F .
100 Definio A square matrix is a scalar matrix if it is of the form λIn for some scalar λ.
103 Exemplo The matrix A ∈ M3×4(R) shewn is upper triangular and B ∈ M4×4(R) is lower triangular.
    A = [ 1 a b c ]      B = [ 1 0 0 0 ]
        [ 0 2 3 0 ]          [ 1 a 0 0 ]
        [ 0 0 0 1 ],         [ 0 2 3 0 ]
                             [ 1 1 t 1 ].
105 Definio The set of matrices Eij ∈ Mm×n(F), Eij = (ers), such that eij = 1F and e_i′j′ = 0F for (i′, j′) ≠ (i, j), is called the set of elementary matrices. Observe that in fact ers = δri δsj.
Elementary matrices have interesting effects when we pre-multiply and post-multiply a matrix by them.
Then
    E23 A = [ 0  0  0  0 ]       A E23 = [ 0 0 2  ]
            [ 9 10 11 12 ],              [ 0 0 6  ]
            [ 0  0  0  0 ]               [ 0 0 10 ].
107 Theorem (Multiplication by Elementary Matrices) Let Eij Mmn (F) be an elementary matrix, and
let A Mnm (F). Then Eij A has as its i-th row the j-th row of A and 0F s everywhere else. Similarly,
AEij has as its j-th column the i-th column of A and 0F s everywhere else.
Proof: Put (αuv) = Eij A. To obtain Eij A we multiply the rows of Eij by the columns of A. Now
    α_uv = Σ_{k=1}^{n} e_uk a_kv = Σ_{k=1}^{n} δ_ui δ_kj a_kv = δ_ui a_jv.
Therefore, for u ≠ i, α_uv = 0F, i.e., off of the i-th row the entries of Eij A are 0F, and α_iv = a_jv,
that is, the i-th row of Eij A is the j-th row of A. The case for AEij is similarly argued.
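Theorem 107 is easy to verify by direct computation. Here is a small illustration of my own in NumPy (indices in the code are 0-based, unlike the 1-based indices of the text):

```python
import numpy as np

def E(i, j, m, n):
    """Elementary matrix E_ij in M_{m x n}: 1 at position (i, j), 0 elsewhere."""
    M = np.zeros((m, n))
    M[i, j] = 1.0
    return M

A = np.arange(1, 13).reshape(3, 4)    # the 3x4 matrix with entries 1..12
left = E(1, 2, 3, 3) @ A              # its row 1 is row 2 of A, rest zero
right = A @ E(1, 2, 4, 3)             # its column 2 is column 1 of A, rest zero

assert (left[1] == A[2]).all() and (left[[0, 2]] == 0).all()
assert (right[:, 2] == A[:, 1]).all() and (right[:, [0, 1]] == 0).all()
```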
108 Corollary Let (Eij , Ekl ) (Mnn (F))2 , be square elementary matrices. Then
Eij Ekl = δjk Eil.
109 Exemplo Let M Mnn (F) be a matrix such that AM = MA for all matrices A Mnn (F).
Demonstrate that M = aIn for some a F, i.e. M is a scalar matrix.
Solution: Let (s, t) ∈ {1, 2, . . . , n}², s ≠ t, let M = (mij), and let Est ∈ Mn×n(F). By Theorem
107, Est M has the t-th row of M as its s-th row and 0F's everywhere else, while MEst has the
s-th column of M as its t-th column and 0F's everywhere else. Since M commutes with Est,
these two matrices are equal. Comparing entries, for arbitrary s ≠ t we obtain mst = mts = 0F,
and mss = mtt. Thus the entries off the main diagonal are zero and the diagonal entries are all
equal to one another, whence M is a scalar matrix.
110 Definio Let λ ∈ F and Eij ∈ Mn×n(F), i ≠ j. A square matrix in Mn×n(F) of the form In + λEij is called
a transvection.
that is, post-multiplication by T adds 4 times the first column of A to the third column of A.
112 Theorem (Multiplication by a Transvection Matrix) Let In + λEij ∈ Mn×n(F) be a transvection and let
A ∈ Mn×m(F). Then (In + λEij)A adds λ times the j-th row of A to its i-th row and leaves the other rows
unchanged. Similarly, if B ∈ Mp×n(F), B(In + λEij) adds λ times the i-th column of B to the j-th column and
leaves the other columns unchanged.
Proof: Simply observe that (In + Eij )A = A + Eij A and A(In + Eij ) = A + AEij and apply
Theorem 107.
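The row/column effect of a transvection can be seen directly. The sketch below is my own NumPy illustration of Theorem 112 (0-based indices, arbitrary λ = 5):

```python
import numpy as np

# T = I + lam * E_{0,2}: premultiplying adds lam*(row 2) to row 0;
# postmultiplying adds lam*(column 0) to column 2.
n, lam = 4, 5.0
T = np.eye(n)
T[0, 2] = lam

A = np.arange(16.0).reshape(4, 4)
rows = T @ A
cols = A @ T

expected_rows = A.copy()
expected_rows[0] += lam * A[2]          # lam times row 2 added to row 0
expected_cols = A.copy()
expected_cols[:, 2] += lam * A[:, 0]    # lam times column 0 added to column 2

assert np.allclose(rows, expected_rows)
assert np.allclose(cols, expected_cols)
```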
Observe that the particular matrix In + (λ − 1F)Eii ∈ Mn×n(F) is a diagonal matrix with
1F's everywhere on the diagonal, except in the ii-th position, where it has a λ.
then
    SA = [ 4 0 0 ]   [ 1 1 1 ]   [ 4 4 4 ]
         [ 0 1 0 ] × [ 5 6 7 ] = [ 5 6 7 ],
         [ 0 0 1 ]   [ 1 2 3 ]   [ 1 2 3 ]
that is, pre-multiplication by S multiplies the first row of A by 4.
116 Definio We write I_n^(ij) for the matrix which permutes the i-th row with the j-th row of the identity
matrix. We call I_n^(ij) a transposition matrix.
If
    A = [ 1  2  3  4 ]
        [ 5  6  7  8 ]
        [ 9 10 11 12 ]
        [ 13 14 15 16 ],
then
    I_4^(23) A = [ 1  2  3  4 ]
                 [ 9 10 11 12 ]
                 [ 5  6  7  8 ]
                 [ 13 14 15 16 ],
and
    A I_4^(23) = [ 1  3  2  4 ]
                 [ 5  7  6  8 ]
                 [ 9 11 10 12 ]
                 [ 13 15 14 16 ].
Proof: We must prove that I_n^(ij) A exchanges the i-th and j-th rows but leaves the other rows
unchanged. But this follows upon observing that
    I_n^(ij) = In + Eij + Eji − Eii − Ejj.
119 Definio A square matrix which is either a transvection matrix, a dilatation matrix or a transposi-
tion matrix is called an elimination matrix.
Homework
Problema 2.4.1 Consider the matrices
    A = [ 1 0 1 0 ]      B = [ 4 2 4 2 ]
        [ 0 1 0 1 ]          [ 0 1 0 1 ]
        [ 1 1 1 1 ]          [ 1 1 1 1 ]
        [ 1 1 1 1 ],         [ 1 1 1 1 ].
Find a specific dilatation matrix D, a specific transposition matrix P, and a specific transvection matrix T such that B = TDAP.

Problema 2.4.2 The matrix
    A = [ a b c ]
        [ d e f ]
        [ g h i ]
is transformed into the matrix
    B = [ h  g  i  ]
        [ e  d  f  ]
        [ 2b 2a 2c ]
by a series of row and column operations. Find explicit permutation matrices P, P′, an explicit dilatation matrix D, and an explicit transvection matrix T such that B = DPAP′T.
Problema 2.4.3 Let A Mnn (F). Prove that if
For
    [ 1 0 0 ]   [ 1 0 ]   [ 1 0 ]
    [ 0 1 0 ] × [ 0 1 ] = [ 0 1 ],
                [ x y ]
regardless of the values of x and y. Observe, however, that A does not have a left inverse, for
    [ a b ]                 [ a b 0 ]
    [ c d ] × [ 1 0 0 ]  =  [ c d 0 ],
    [ f g ]   [ 0 1 0 ]     [ f g 0 ]
which can never equal I3. Also,
    (λIn)(λ^(−1) In) = In = (λ^(−1) In)(λIn),
whence λIn is invertible for λ ≠ 0F.
124 Theorem Let A ∈ Mn×n(F) be a square matrix possessing a left inverse L and a right inverse R. Then
L = R. Thus an invertible square matrix possesses a unique inverse.
125 Definio The subset of Mnn (F) of all invertible n n matrices is denoted by GLn (F), read the
linear group of rank n over F.
126 Corollary Let (A, B) ∈ (GLn(F))². Then AB is also invertible and
    (AB)^(−1) = B^(−1) A^(−1).
127 Corollary If a square matrix S ∈ Mn×n(F) is invertible, then S^(−1) is also invertible and (S^(−1))^(−1) = S,
in view of the uniqueness of the inverses of square matrices.
128 Corollary If a square matrix A ∈ Mn×n(F) is invertible, then A^T is also invertible and (A^T)^(−1) =
(A^(−1))^T.
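The three corollaries are easy to confirm numerically. A sketch of my own in NumPy (the random matrices are shifted by a multiple of the identity so that they are safely invertible):

```python
import numpy as np

# Check (AB)^-1 = B^-1 A^-1, (A^-1)^-1 = A, and (A^T)^-1 = (A^-1)^T.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # well-conditioned, invertible
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)

inv = np.linalg.inv
assert np.allclose(inv(A @ B), inv(B) @ inv(A))   # order reverses
assert np.allclose(inv(inv(A)), A)
assert np.allclose(inv(A.T), inv(A).T)
```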
The next few theorems will prove that elimination matrices are invertible matrices.
129 Theorem (Invertibility of Transvections) Let In + λEij ∈ Mn×n(F) be a transvection, and let i ≠ j. Then
In + λEij is invertible, and (In + λEij)^(−1) = In − λEij.
Proof:
    (In + λEij)(In − λEij) = In + λEij − λEij − λ²Eij Eij = In − λ²δji Eij = In,
since i ≠ j.
    (In + (λ − 1F)Eii)(In + (λ^(−1) − 1F)Eii)
        = In + (λ − 1F)Eii + (λ^(−1) − 1F)Eii + (λ − 1F)(λ^(−1) − 1F)Eii
        = In + (λ − 1F + λ^(−1) − 1F + 1F − λ^(−1) − λ + 1F)Eii
        = In,
is invertible and
    [ λ1  0   0  ⋯  0  ]^(−1)   [ λ1^(−1)  0        0        ⋯  0        ]
    [ 0   λ2  0  ⋯  0  ]        [ 0        λ2^(−1)  0        ⋯  0        ]
    [ 0   0   λ3 ⋯  0  ]      = [ 0        0        λ3^(−1)  ⋯  0        ]
    [ ⋮   ⋮   ⋮  ⋱  ⋮  ]        [ ⋮        ⋮        ⋮        ⋱  ⋮        ]
    [ 0   0   0  ⋯  λn ]        [ 0        0        0        ⋯  λn^(−1)  ].
Proof: By Theorem 118 pre-multiplication of I_n^(ij) by I_n^(ij) exchanges the i-th row with the j-th row,
meaning that these rows return to their original position in In. Observe in particular that
I_n^(ij) = (I_n^(ij))^T, and so I_n^(ij) (I_n^(ij))^T = In.
136 Corollary If a square matrix can be represented as the product of elimination matrices of the same
size, then it is invertible.
Proof: This follows from Corollary 126, and Theorems 129, 131, and 134.
is the transvection I3 + 4E23 followed by the dilatation of the second column of this transvection by 3.
Thus
    [ 1 0 0 ]   [ 1 0 0 ]   [ 1 0 0 ]
    [ 0 3 4 ] = [ 0 1 4 ] × [ 0 3 0 ],
    [ 0 0 1 ]   [ 0 0 1 ]   [ 0 0 1 ]
and so
    [ 1 0 0 ]^(−1)   [ 1 0 0 ]^(−1)   [ 1 0 0 ]^(−1)
    [ 0 3 4 ]      = [ 0 3 0 ]      × [ 0 1 4 ]
    [ 0 0 1 ]        [ 0 0 1 ]        [ 0 0 1 ]

                     [ 1  0   0 ]   [ 1 0  0 ]
                   = [ 0 1/3  0 ] × [ 0 1 −4 ]
                     [ 0  0   1 ]   [ 0 0  1 ]

                     [ 1  0    0   ]
                   = [ 0 1/3 −4/3  ].
                     [ 0  0    1   ]
hence
    [ 1 1 1 ]^(−1)   [ 1 0 0 ]^(−1)   [ 1 1 0 ]^(−1)
    [ 0 1 1 ]      = [ 0 1 1 ]      × [ 0 1 0 ]
    [ 0 0 1 ]        [ 0 0 1 ]        [ 0 0 1 ]

                     [ 1 0  0 ]   [ 1 −1 0 ]
                   = [ 0 1 −1 ] × [ 0  1 0 ]
                     [ 0 0  1 ]   [ 0  0 1 ]

                     [ 1 −1  0 ]
                   = [ 0  1 −1 ].
                     [ 0  0  1 ]
In the next section we will give a general method that will permit us to find the inverse of a square
matrix when it exists.
139 Exemplo Let
    T = [ a b ]
        [ c d ]  ∈ M2×2(R).
Then
    [ a b ]   [  d −b ]               [ 1 0 ]
    [ c d ] × [ −c  a ] = (ad − bc) × [ 0 1 ].
140 Exemplo If
    A = [ 1  1  1  1 ]
        [ 1 −1  1 −1 ]
        [ 1  1 −1 −1 ]
        [ 1 −1 −1  1 ],
then
    A² = [ 1  1  1  1 ]²
         [ 1 −1  1 −1 ]
         [ 1  1 −1 −1 ]   = 4I4,
         [ 1 −1 −1  1 ]
whence A is invertible and A^(−1) = (1/4)A.
141 Exemplo A matrix A ∈ Mn×n(R) is said to be nilpotent of index k if it satisfies A ≠ 0n, A² ≠ 0n, . . . , A^(k−1) ≠
0n and A^k = 0n for integer k ≥ 1. Prove that if A is nilpotent, then In − A is invertible and find its inverse.
Solution: To motivate the solution, think that instead of a matrix, we had a real number x
with |x| < 1. Then the inverse of 1 x is
    (1 − x)^(−1) = 1/(1 − x) = 1 + x + x² + x³ + ⋯ .
Thus
    (In − A)(In + A + A² + ⋯ + A^(k−1)) = (In + A + ⋯ + A^(k−1)) − (A + A² + A³ + ⋯ + A^k) = In
and similarly
    (In + A + A² + ⋯ + A^(k−1))(In − A) = In.
Hence (In − A)^(−1) = In + A + A² + ⋯ + A^(k−1).
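The finite "geometric series" inverse of Exemplo 141 can be checked directly. A sketch of my own in NumPy, using a strictly upper triangular matrix (which is always nilpotent):

```python
import numpy as np

# A strictly upper triangular 4x4 matrix satisfies A^4 = 0, so
# (I - A)^-1 = I + A + A^2 + A^3.
A = np.triu(np.ones((4, 4)), k=1)
assert np.allclose(np.linalg.matrix_power(A, 4), 0)   # nilpotent

I = np.eye(4)
geometric = I + A + A @ A + A @ A @ A
assert np.allclose((I - A) @ geometric, I)
assert np.allclose(geometric, np.linalg.inv(I - A))
```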
Matrix Inversion 43
is
    A^(−1) = [ 3 0 0 ]
             [ 0 2 0 ]
             [ 0 0 4 ],
as
    A A^(−1) = [ 2 0 0 ]   [ 3 0 0 ]   [ 1 0 0 ]
               [ 0 3 0 ] × [ 0 2 0 ] = [ 0 1 0 ],
               [ 0 0 4 ]   [ 0 0 4 ]   [ 0 0 1 ]
the computations being carried out in Z5.
143 Exemplo (Putnam Exam, 1991) Let A and B be different n n matrices with real entries. If A3 = B3
and A2 B = B2 A, prove that A2 + B2 is not invertible.
    (A² + B²)(A − B) = A³ − A²B + B²A − B³ = 0n.
144 Lemma If A Mnn (F) has a row or a column consisting all of 0F s, then A is singular.
Proof: If A were invertible, the (i, i)-th entry of the product of A with its inverse would be 1F.
But if the i-th row of A is all 0F's, then for any matrix B the (i, i) entry of AB is
Σ_{k=1}^{n} a_ik b_ki = 0F, and never 1F. The case of a zero column is argued similarly.
Problema 2.5.1 The inverse of the matrix
    A = [ 1 1 1 ]
        [ 1 1 2 ]
        [ 1 2 3 ]
is the matrix
    A^(−1) = [  a  1 −1 ]
             [  1  b  1 ].
             [ −1  1  0 ]
Determine a and b.

Problema 2.5.2 A square matrix A satisfies A³ ≠ 0n but A⁴ = 0n. Demonstrate that In + A is invertible and find, with proof, its inverse.

Problema 2.5.4 Let S ∈ GLn(F), (A, B) ∈ (Mn×n(F))², and k a positive integer. Prove that if B = SAS^(−1) then B^k = SA^kS^(−1).

Problema 2.5.5 Let A ∈ Mn×n(F) and let k be a positive integer. Prove that A is invertible if and only if A^k is invertible.

Problema 2.5.6 Let S ∈ GLn(C), A ∈ Mn×n(C) with A^k = 0n for some positive integer k. Prove that both In − SAS^(−1) and In − S^(−1)AS are invertible and find their inverses.

Problema 2.5.7 Let A and B be square matrices of the same size such that both A − B and A + B are invertible. Put C = (A − B)^(−1) + (A + B)^(−1). Prove that
    ACA − ACB + BCA − BCB = 2A.

Problema 2.5.8 Let A, B, C be non-zero square matrices of the same size over the same field and such that ABC = 0n. Prove that at least two of these three matrices are not invertible.

Problema 2.5.9 Let (A, B) ∈ (Mn×n(F))² be such that A² = B² = (AB)² = In. Prove that AB = BA.

Problema 2.5.10 Let
    A = [ a b b ⋯ b ]
        [ b a b ⋯ b ]
        [ b b a ⋯ b ]  ∈ Mn×n(F),
        [ ⋮ ⋮ ⋮ ⋱ ⋮ ]
        [ b b b ⋯ a ]
n > 1, (a, b) ∈ F². Determine when A is invertible and find this inverse when it exists.

Problema 2.5.11 Let (A, B) ∈ (Mn×n(F))² be matrices such that A + B = AB. Demonstrate that A − In is invertible and find this inverse.

Problema 2.5.12 Let S ∈ GLn(F) and A ∈ Mn×n(F). Prove that tr (A) = tr (SAS^(−1)).

Problema 2.5.13 Let A ∈ Mn×n(R) be a skew-symmetric matrix. Prove that In + A is invertible. Furthermore, if B = (In − A)(In + A)^(−1), prove that B^(−1) = B^T.

Problema 2.5.14 A matrix A ∈ Mn×n(F) is said to be a magic square if the sum of each individual row equals the sum of each individual column. Assume that A is a magic square and invertible. Prove that A^(−1) is also a magic square.
If (A, A′) ∈ (Mm(F))², (B, B′) ∈ (Mm×n(F))², (C, C′) ∈ (Mn×m(F))², (D, D′) ∈ (Mn(F))²,
and
    S = [ A B ]        T = [ A′ B′ ]
        [ C D ],           [ C′ D′ ],
with square matrices A ∈ Mm(F) and B ∈ Mr×r(F), and a matrix C ∈ Mm×r(F). Then L is invertible if
and only if A and B are, in which case
    L^(−1) = [ A^(−1)  −A^(−1) C B^(−1) ]
             [ 0r×m    B^(−1)           ].
Proof: Assume first that A and B are invertible. Direct calculation yields
    [ A    C ]   [ A^(−1)  −A^(−1) C B^(−1) ]   [ AA^(−1)  −AA^(−1) C B^(−1) + C B^(−1) ]
    [ 0r×m B ] × [ 0r×m    B^(−1)           ] = [ 0r×m     BB^(−1)                      ]

    = [ Im    0m×r ]
      [ 0r×m  Ir   ]

    = Im+r.
Assume now that L is invertible, with
    L^(−1) = [ E H ]
             [ J K ],
with E ∈ Mm(F) and K ∈ Mr×r(F), but that, say, B is singular. Then
    [ Im    0m×r ]              [ A    C ]   [ E H ]   [ AE + CJ  AH + BK ]
    [ 0r×m  Ir   ] = L L^(−1) = [ 0r×m B ] × [ J K ] = [ BJ       BK      ],
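The block-triangular inverse formula above is easy to test numerically. Below is a sketch of my own using NumPy's block-matrix assembly (the sizes m = 3, r = 2 and the random blocks are arbitrary choices, shifted so that A and B are invertible):

```python
import numpy as np

# For L = [[A, C], [0, B]] with A, B invertible:
# L^-1 = [[A^-1, -A^-1 C B^-1], [0, B^-1]].
rng = np.random.default_rng(4)
m, r = 3, 2
A = rng.standard_normal((m, m)) + 3 * np.eye(m)
B = rng.standard_normal((r, r)) + 3 * np.eye(r)
C = rng.standard_normal((m, r))

L = np.block([[A, C], [np.zeros((r, m)), B]])
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
Linv = np.block([[Ainv, -Ainv @ C @ Binv],
                 [np.zeros((r, m)), Binv]])

assert np.allclose(L @ Linv, np.eye(m + r))
```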
148 Theorem Row equivalence, column equivalence, and equivalence are equivalence relations.
Proof: We prove the result for row equivalence. The result for column equivalence, and
equivalence are analogously proved.
Since Im ∈ GLm(F) and A = Im A, row equivalence is a reflexive relation. Assume (A, B) ∈
(Mm×n(F))² and that there is P ∈ GLm(F) such that B = PA. Then A = P^(−1)B and since P^(−1) ∈
GLm(F), we see that row equivalence is a symmetric relation. Finally assume (A, B, C) ∈
(Mm×n(F))³ and that there are P ∈ GLm(F), P′ ∈ GLm(F) such that A = PB, B = P′C. Then A = PP′C.
But PP′ ∈ GLm(F) in view of Corollary 126. This completes the proof.
149 Theorem Let A Mmn (F). Then A can be reduced, by means of pre-multiplication and post-
multiplication by elimination matrices, to a unique matrix of the form
    Dm,n,r = [ Ir        0r×(n−r)      ]
             [ 0(m−r)×r  0(m−r)×(n−r)  ],    (2.16)
called the Hermite normal form of A. Thus there exist P GLm (F), Q GLn (F) such that Dm,n,r = PAQ.
The integer r 0 is called the rank of the matrix A which we denote by rank (A).
Proof: If A is the m n zero matrix, then the theorem is obvious, taking r = 0. Assume hence
that A is not the zero matrix. We proceed as follows using the Gau-Jordan Algorithm.
GJ-1 Since A is a non-zero matrix, it has a non-zero column. By means of permutation matrices
we move this column to the first column.
GJ-2 Since this column is a non-zero column, it must have an entry a 6= 0F . Again, by means of
permutation matrices, we move the row on which this entry is to the first row.
GJ-3 By means of a dilatation matrix with scale factor a1 , we make this new (1, 1) entry into a
1F .
GJ-4 By means of transvections (adding various multiples of row 1 to the other rows) we now
annihilate every entry below the entry (1, 1).
This process ends up in a matrix of the form
    P1 A Q1 = [ 1F  *    *    ⋯  *   ]
              [ 0F  b22  b23  ⋯  b2n ]
              [ 0F  b32  b33  ⋯  b3n ]    (2.17)
              [ ⋮   ⋮    ⋮        ⋮  ]
              [ 0F  bm2  bm3  ⋯  bmn ].
Here the asterisks represent unknown entries. Observe that the b's form an (m − 1) × (n − 1)
matrix.
GJ-5 Apply GJ-1 through GJ-4 to the matrix of the bs.
Observe that this results in a matrix of the form
    P2 A Q2 = [ 1F  *   *    ⋯  *   ]
              [ 0F  1F  *    ⋯  *   ]
              [ 0F  0F  c33  ⋯  c3n ]    (2.18)
              [ ⋮   ⋮   ⋮        ⋮  ]
              [ 0F  0F  cm3  ⋯  cmn ].
Rank of a Matrix 47
GJ-6 Add the appropriate multiple of column 1 to column 2, that is, apply a transvection, in order
to make the entry in the (1, 2) position 0F .
GJ-7 Apply GJ-1 through GJ-6 to the matrix of the cs, etc.
Observe that this process eventually stops, and in fact, it is clear that rank (A) min(m, n).
Suppose now that A were equivalent to a matrix Dm,n,s with s > r. Since matrix equivalence
is an equivalence relation, Dm,n,s and Dm,n,r would be equivalent, and so there would be
R GLm (F), S GLn (F), such that RDm,n,r S = Dm,n,s , that is, RDm,n,r = Dm,n,s S1 .
Partition R and S^(−1) as follows:
    R = [ R11 R12 ]        S^(−1) = [ S11 S12 S13 ]
        [ R21 R22 ],                [ S21 S22 S23 ]
                                    [ S31 S32 S33 ],
and
    Dm,n,s S^(−1) = [ Ir        0r×(s−r)      0r×(n−s)     ]   [ S11 S12 S13 ]
                    [ 0(s−r)×r  Is−r          0(s−r)×(n−s) ] × [ S21 S22 S23 ]
                    [ 0(m−s)×r  0(m−s)×(s−r)  0(m−s)×(n−s) ]   [ S31 S32 S33 ]

                  = [ S11        S12           S13          ]
                    [ S21        S22           S23          ]
                    [ 0(m−s)×r   0(m−s)×(s−r)  0(m−s)×(n−s) ].

Since RDm,n,r has 0F's in its last n − r columns,
we must have S12 = 0r×(s−r), S13 = 0r×(n−s), S22 = 0(s−r)×(s−r), S23 = 0(s−r)×(n−s). Hence
    S^(−1) = [ S11  0r×(s−r)      0r×(n−s)     ]
             [ S21  0(s−r)×(s−r)  0(s−r)×(n−s) ]
             [ S31  S32           S33          ].
The matrix
    [ 0(s−r)×(s−r)  0(s−r)×(n−s) ]
    [ S32           S33          ]
Although the rank of a matrix is unique, the matrices P and Q appearing in Theorem 149 are
not necessarily unique. For example, the matrix
1 0
0 1
0 0
Proof: Let P, Q, Dm,n,r as in Theorem 149. Observe that PT , QT are invertible. Then
and since this last matrix has the same number of 1F s as Dm,n,r , the corollary is proven.
has rank (A) = 2 and find invertible matrices P GL2 (R) and Q GL3 (R) such that
1 0 0
PAQ =
.
0 1 0
We now subtract twice the second row from the first, by effecting
1 2 3 2 0 3 0 0
= .
0 1 0 1 0 0 1 0
We conclude that
0 0 1
1/3 0 1 2 0 2 3
1
0 0
1 0 =
,
0
0 1 0 1 0 1 0 0 1 0
1 0 0
and
0 0 1
Q=
0 1 0 .
1 0 0
In practice it is easier to do away with the multiplication by elimination matrices and perform row
and column operations on the augmented (m + n) × (m + n) matrix
    [ In  0n×m ]
    [ A   Im   ].
152 Definio Denote the rows of a matrix A Mmn (F) by R1 , R2 , . . . , Rm , and its columns by C1 , C2 , . . . , Cn .
The elimination operations will be denoted as follows.
Exchanging the i-th row with the j-th row will be denoted by Ri ↔ Rj, and exchanging the s-th column
with the t-th column by Cs ↔ Ct.
A dilatation of the i-th row by a non-zero scalar λ ∈ F \ {0F} will be denoted by λRi → Ri. Similarly,
λCj → Cj denotes the dilatation of the j-th column by the non-zero scalar λ.
A transvection on the rows will be denoted by Ri + λRj → Ri, and one on the columns by Cs + λCt
→ Cs.
Solution: First observe that rank (A) min(4, 2) = 2, so the rank can be either 1 or 2 (why
not 0?). Form the augmented matrix
1 0 0 0 0 0
0 1 0 0 0 0
1 0 1 0 0 0
.
0 0 0 1 0 0
1 1 0 0 1 0
1 2 0 0 0 1
Perform R6 2R5 R6
1 0 0 0 0 0
0 1 0 0 0 0
1 0 1 0 0 0
.
0 0 0 1 0 0
0 1 1 0 1 0
0 0 1 0 2 1
Perform R4 ↔ R5
1 0 0 0 0 0
0 1 0 0 0 0
1 0 1 0 0 0
.
0 1 1 0 1 0
0 0 0 1 0 0
0 0 1 0 2 1
Finally, perform R3 R3
1 0 0 0 0 0
0 1 0 0 0 0
1 0 1 0 0 0
.
0 1 1 0 1 0
0 0 0 1 0 0
0 0 1 0 2 1
We conclude that
1 0 0 0 1 0 1 0
1 0 1 0 0 1
0 0
0 1
= .
0 1 0 0 1 1 0 1 0 0
1 0 2 1 1 2 0 0
Proof: We prove that rank (A) rank (AB). The proof that rank (B) rank (AB) is similar
and left to the reader. Put r = rank (A) , s = rank (AB). There exist matrices P GLm (F),
Q GLn (F), S GLm (F), T GLp (F) such that
Now
and
Ir 0r(nr) V11 V12
Mmp (F).
Dm,p,s V =
0(mr)r 0(mr)(nr) V21 V22
If s > r then (i) U11 would have at least one row of 0F s meaning that U11 is non-invertible by
Lemma 144. (ii) U21 = 0(ms)s . Thus from (i) and (ii) and from Lemma 146, U is not invertible,
which is a contradiction.
155 Corollary Let A Mmn (F), B Mnp (F). If A is invertible then rank (AB) = rank (B). If B is
invertible then rank (AB) = rank (A).
and so rank (B) = rank (AB). A similar argument works when B is invertible.
156 Exemplo Study the various possibilities for the rank of the matrix
    A = [ 1      1      1     ]
        [ b + c  c + a  a + b ],
        [ bc     ca     ab    ]
where (a, b, c) ∈ R³.
We now examine the various ways of getting rows consisting only of 0s. If a = b = c, the last
two rows are 0-rows and so rank (A) = 1. If exactly two of a, b, c are equal, the last row is a
0-row, but the middle one is not, and so rank (A) = 2 in this case. If none of a, b, c are equal,
then the rank is clearly 3.
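The three cases of Exemplo 156 can be checked numerically. A sketch of my own in NumPy, with sample values of (a, b, c) chosen to hit each case:

```python
import numpy as np

# Rank of [[1,1,1],[b+c,c+a,a+b],[bc,ca,ab]]: 1 when a = b = c,
# 2 when exactly two coincide, 3 when all three differ.
def M(a, b, c):
    return np.array([[1, 1, 1],
                     [b + c, c + a, a + b],
                     [b * c, c * a, a * b]], dtype=float)

rank = np.linalg.matrix_rank
assert rank(M(2, 2, 2)) == 1   # all equal
assert rank(M(2, 2, 5)) == 2   # exactly two equal
assert rank(M(1, 2, 5)) == 3   # all distinct
```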
Homework
Problema 2.7.1 On a symmetric matrix A ∈ Mn×n(R) with n ≥ 3, the row operation
    R3 − 3R1 → R3
is performed, successively followed by the column operation
    C3 − 3C1 → C3.
Is the resulting matrix still symmetric?
Problema 2.7.3 Let A, B be arbitrary n n matrices over R. Prove or disprove! rank (AB) = rank (BA) .
Problema 2.7.4 Determine the rank of the matrix
    [ 1 1 0 0 ]
    [ 0 0 1 1 ]
    [ 2 2 2 2 ]
    [ 2 0 0 2 ].
Problema 2.7.5 Suppose that the matrix
    [ 4  2 ]
    [ x² x ]  ∈ M2×2(R)
has rank 1. How many possible values can x assume?
Problema 2.7.6 Demonstrate that a non-zero n n matrix A over a field F has rank 1 if and only if A can be
factored as A = XY, where X Mn1 (F) and Y M1n (F).
Problema 2.7.7 Study the various possibilities for the rank of the matrix
1 a 1 b
a 1 b 1
1 b 1 a
b 1 a 1
when (a, b) R2 .
Problema 2.7.8 Find the rank of
    [ 1 1 0 1 ]
    [ m 1 1 1 ]
    [ 1 m 1 0 ]
    [ 1 1 m 2 ]
as a function of m ∈ C.
Rank and Invertibility 55
Problema 2.7.9 Determine the rank of the matrix
    [ a²  ab  ab  b² ]
    [ ab  a²  b²  ab ]
    [ ab  b²  a²  ab ]
    [ b²  ab  ab  a² ].
Problema 2.7.10 Determine the rank of the matrix
    [ 1   1   1   1  ]
    [ a   b   a   b  ]
    [ c   c   d   d  ]
    [ ac  bc  ad  bd ].
Problema 2.7.11 Let A ∈ M3×2(R), B ∈ M2×2(R), and C ∈ M2×3(R) be such that
    ABC = [ 1 1 2 ]
          [ 2 x 1 ].
          [ 1 2 1 ]
Find x.
Problema 2.7.12 Let B be the matrix obtained by adjoining a row (or column) to a matrix A. Prove that either
rank (B) = rank (A) or rank (B) = rank (A) + 1.
Problema 2.7.13 Let A Mnn (R). Prove that rank (A) = rank AAT . Find a counterexample in the case
A Mnn (C).
Problema 2.7.14 Prove that the rank of a skew-symmetric matrix with real number entries is an even number.
Proof: Observe that we always have rank (A) ≤ n. If A is left invertible, then there is L ∈ Mn×m(F)
such that LA = In. By Theorem 154,
    n = rank (In) = rank (LA) ≤ rank (A),
whence the two inequalities give rank (A) = n.
Conversely, assume that rank (A) = n. Then rank (A^T) = n by Corollary 150, and so by
Theorem 149 there exist P ∈ GLm(F), Q ∈ GLn(F), such that
    PAQ = [ In       ]        Q^T A^T P^T = [ In  0n×(m−n) ].
          [ 0(m−n)×n ],
This gives
    Q^T A^T P^T PAQ = In  ⟹  A^T P^T PA = (Q^T)^(−1) Q^(−1)
                          ⟹  ((Q^T)^(−1) Q^(−1))^(−1) A^T P^T PA = In,
By combining Theorem 157 and Theorem 124, the following corollary is thus immediate.
158 Corollary If A Mmn (F) possesses a left inverse L and a right inverse R then m = n and L = R.
We use Gau-Jordan Reduction to find the inverse of A GLn (F). We form the augmented matrix
T = [A|In ] which is obtained by putting A side by side with the identity matrix In . We perform permissible
row operations on T until instead of A we obtain In , which will appear if the matrix is invertible. The
matrix on the right will be A1 . We finish with [In |A1 ].
If A Mnn (R) is non-invertible, then the left hand side in the procedure above will not
reduce to In .
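The [A|In] → [In|A^(−1)] procedure just described translates directly into code. Below is a minimal sketch of my own in NumPy (over R; the matrix M is a made-up example, and partial pivoting is added for numerical safety, which the hand procedure does not need):

```python
import numpy as np

def gauss_jordan_inverse(A):
    n = A.shape[0]
    T = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(T[k:, k]))       # choose a pivot row...
        T[[k, p]] = T[[p, k]]                     # ...and transpose rows
        T[k] /= T[k, k]                           # dilatation: make the pivot 1
        for r in range(n):
            if r != k:
                T[r] -= T[r, k] * T[k]            # transvections: clear column k
    return T[:, n:]                               # right half is now A^-1

M = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
assert np.allclose(gauss_jordan_inverse(M) @ M, np.eye(3))
```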
    B = [ 6 0 1 ]
        [ 3 2 0 ]  ∈ M3×3(Z7).
        [ 1 0 1 ]
Solution: We have, operating in Z7,
    [ 6 0 1 | 1 0 0 ]     R1 ↔ R3     [ 1 0 1 | 0 0 1 ]
    [ 3 2 0 | 0 1 0 ]   ---------->   [ 3 2 0 | 0 1 0 ]
    [ 1 0 1 | 0 0 1 ]                 [ 6 0 1 | 1 0 0 ]

    R3 − 6R1 → R3, R2 − 3R1 → R2:
    [ 1 0 1 | 0 0 1 ]
    [ 0 2 4 | 0 1 4 ]
    [ 0 0 2 | 1 0 1 ]

    R2 − 2R3 → R2, 5R1 + R3 → R1:
    [ 5 0 0 | 1 0 6 ]
    [ 0 2 0 | 5 1 2 ]
    [ 0 0 2 | 1 0 1 ]

    3R1 → R1; 4R2 → R2; 4R3 → R3:
    [ 1 0 0 | 3 0 4 ]
    [ 0 1 0 | 6 4 1 ].
    [ 0 0 1 | 4 0 4 ]

We conclude that
    [ 6 0 1 ]^(−1)   [ 3 0 4 ]
    [ 3 2 0 ]      = [ 6 4 1 ].
    [ 1 0 1 ]        [ 4 0 4 ]
160 Exemplo Use Gauß-Jordan reduction to find the inverse of the matrix
    A = [ 0  1 −1 ]
        [ 4 −3  4 ].
        [ 3 −3  4 ]
Also, find A^2001.

Solution: We have
    [ 0  1 −1 | 1 0 0 ]     R2 − R3 → R2     [ 0  1 −1 | 1 0  0 ]
    [ 4 −3  4 | 0 1 0 ]   -------------->    [ 1  0  0 | 0 1 −1 ]
    [ 3 −3  4 | 0 0 1 ]                      [ 3 −3  4 | 0 0  1 ]

    R3 − 3R2 → R3:
    [ 0  1 −1 | 1  0  0 ]
    [ 1  0  0 | 0  1 −1 ]
    [ 0 −3  4 | 0 −3  4 ]

    R3 + 3R1 → R3:
    [ 0 1 −1 | 1  0  0 ]
    [ 1 0  0 | 0  1 −1 ]
    [ 0 0  1 | 3 −3  4 ]

    R1 + R3 → R1:
    [ 0 1 0 | 4 −3  4 ]
    [ 1 0 0 | 0  1 −1 ]
    [ 0 0 1 | 3 −3  4 ]

    R1 ↔ R2:
    [ 1 0 0 | 0  1 −1 ]
    [ 0 1 0 | 4 −3  4 ].
    [ 0 0 1 | 3 −3  4 ]

We conclude that
    A^(−1) = [ 0  1 −1 ]
             [ 4 −3  4 ]  = A.
             [ 3 −3  4 ]
Since A^(−1) = A, we have A² = I3, and so A^2001 = (A²)^1000 A = A.
161 Exemplo Find the inverse of the triangular matrix A ∈ Mn×n(R),
    A = [ 1 1 1 ⋯ 1 ]
        [ 0 1 1 ⋯ 1 ]
        [ 0 0 1 ⋯ 1 ]
        [ ⋮ ⋮ ⋮ ⋱ ⋮ ]
        [ 0 0 0 ⋯ 1 ].
Solution: Form the augmented matrix
    [ 1 1 1 ⋯ 1 | 1 0 0 ⋯ 0 ]
    [ 0 1 1 ⋯ 1 | 0 1 0 ⋯ 0 ]
    [ 0 0 1 ⋯ 1 | 0 0 1 ⋯ 0 ]
    [ ⋮ ⋮ ⋮ ⋱ ⋮ | ⋮ ⋮ ⋮ ⋱ ⋮ ]
    [ 0 0 0 ⋯ 1 | 0 0 0 ⋯ 1 ],
and subtract from each row the row below it (Rk − Rk+1 → Rk for 1 ≤ k ≤ n − 1), obtaining
    [ 1 0 0 ⋯ 0 | 1 −1  0 ⋯ 0 ]
    [ 0 1 0 ⋯ 0 | 0  1 −1 ⋯ 0 ]
    [ 0 0 1 ⋯ 0 | 0  0  1 ⋯ 0 ]
    [ ⋮ ⋮ ⋮ ⋱ ⋮ | ⋮  ⋮  ⋮ ⋱ ⋮ ]
    [ 0 0 0 ⋯ 1 | 0  0  0 ⋯ 1 ],
whence
    A^(−1) = [ 1 −1  0 ⋯ 0 ]
             [ 0  1 −1 ⋯ 0 ]
             [ 0  0  1 ⋯ 0 ]
             [ ⋮  ⋮  ⋮ ⋱ ⋮ ]
             [ 0  0  0 ⋯ 1 ],
that is, the inverse of A has 1s on the diagonal and −1s on the superdiagonal.
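The pattern of Exemplo 161 can be confirmed for any concrete size. A sketch of my own in NumPy for n = 6:

```python
import numpy as np

# The all-ones upper triangular matrix has inverse with 1s on the diagonal
# and -1s on the superdiagonal.
n = 6
A = np.triu(np.ones((n, n)))                      # 1s on and above the diagonal
Ainv = np.eye(n) - np.diag(np.ones(n - 1), k=1)   # 1s diagonal, -1s superdiagonal

assert np.allclose(A @ Ainv, np.eye(n))
assert np.allclose(np.linalg.inv(A), Ainv)
```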
162 Theorem Let A Mnn (F) be a triangular matrix such that a11 a22 ann 6= 0F . Then A is invertible.
Proof: Since the entry akk ≠ 0F we multiply the k-th row by akk^(−1) and then proceed to subtract
the appropriate multiples of the preceding k − 1 rows at each stage.
163 Exemplo (Putnam Exam, 1969) Let A and B be matrices of size 3 2 and 2 3 respectively. Suppose
that their product AB is given by
    AB = [  8  2 −2 ]
         [  2  5  4 ].
         [ −2  4  5 ]
and so rank (AB) = 2. This entails that rank ((AB)²) = 2. Now, since BA is a 2 × 2 matrix,
rank (BA) ≤ 2; but also 2 = rank ((AB)²) = rank (A(BA)B) ≤ rank (BA),
and we must conclude that rank (BA) = 2. This means that BA is invertible. Since (AB)² = 9AB,
we obtain (BA)³ = B(AB)²A = 9B(AB)A = 9(BA)², and multiplying by ((BA)^(−1))² gives
    BA − 9I2 = 02,
that is, BA = 9I2.
Homework
Problema 2.8.1 Find the inverse of the matrix
    [ 1 2 3 ]
    [ 2 3 1 ]  ∈ M3×3(Z7).
    [ 3 1 2 ]
Problema 2.8.3 Let
    A = [ 1 0 0 ]
        [ 1 1 0 ],
        [ 1 1 x ]
where x ≠ 0 is a real number. Find A^(−1).

Problema 2.8.4 If the inverse of the matrix
    M = [ 1 0 1 ]
        [ 1 0 0 ]
        [ 0 1 1 ]
is the matrix
    M^(−1) = [  0  1 0 ]
             [ −1  1 a ],
             [  1 −1 b ]
find (a, b).

Problema 2.8.5 Let
    A = [ 1 0 0 ]
        [ 1 1 0 ]
        [ 1 1 1 ]
and let n > 0 be an integer. Find (A^n)^(−1).
Problema 2.8.6 Give an example of a 2 2 invertible matrix A over R such that A + A1 is the zero matrix.
Problema 2.8.7 Find all the values of the parameter a for which the matrix B given below is not invertible.
    B = [ 1 a + 2 2 ]
        [ 0   a   1 ]
        [ 2   1   a ]
Problema 2.8.10 Let A and B be n n matrices over a field F such that AB is invertible. Prove that both A and B
must be invertible.
has inverse
1 n a 1 1 ... 1
1 1na 1 ... 1
1
a(n + a)
1 1 1na ... 1
.. .. .. ..
. . . ... .
1 1 1 ... 1na
has inverse
2
2 n 2 + n2 2 2 2
2 2 n2 2 + n2 2 2
1
2 n2 2 + n2 .
2n3 2
2 2
. .. .. .. .. ..
..
. . . . .
2 + n2 2 2 2 2 n2
has inverse
1 a1 s 1 1 1
...
a21 a1 a2 a1 a3 a1 an
1 1 a2 s 1 1
...
a2 a1
a22 a2 a3 a2 an
1 1 1 1 a3 s 1
...
,
s
a3 a1 a3 a2 a23 a3 a1 n
.. .. .. ..
. . . ... .
1 1 1 1 an s
...
an a1 an a2 an a3 a2n
1 1 1
where s = 1 + a1
+ a2
+ + an
.
Problema 2.8.16 Let A M55 (R). Shew that if rank A2 < 5, then rank (A) < 5.
Problema 2.8.17 Let p be an odd prime. How many invertible 2 2 matrices are there with entries all in Zp ?
Problema 2.8.18 Let A, B be matrices of the same size. Prove that rank (A + B) rank (A) + rank (B).
0 1 1
Problema 2.8.19 Let A M3,2 (R) and B M2,3 (R) be matrices such that AB =
1 0 1. Prove that
1 1 2
BA = I2 .
Captulo 3
Linear Equations
3.1 Definitions
We can write a system of m linear equations in n variables over a field F
AX = Y, (3.2)
where A is the matrix of coefficients, X is the matrix of variables and Y is the matrix of constants. Most
often we will dispense with the matrix of variables X and will simply write the augmented matrix of the
system as
    [A|Y] = [ a11 a12 ⋯ a1n | y1 ]
            [ a21 a22 ⋯ a2n | y2 ]    (3.3)
            [  ⋮   ⋮       ⋮ |  ⋮ ]
            [ am1 am2 ⋯ amn | ym ].
164 Definio Let AX = Y be as in 3.1. If Y = 0m1 , then the system is called homogeneous, otherwise it
is called inhomogeneous. The set
{X Mn1 (F) : AX = 0m1 }
is called the kernel or nullspace of A and it is denoted by ker (A).
165 Definio A system of linear equations is consistent if it has a solution. If the system does not have
a solution then we say that it is inconsistent.
166 Definio If a row of a matrix is non-zero, we call the first non-zero entry of this row a pivot for this
row.
For any two consecutive rows Ri and Ri+1 , either Ri+1 is all 0F s or the pivot of Ri+1 is immediately
to the right of the pivot of Ri .
The variables accompanying these pivots are called the leading variables. Those variables which are not
leading variables are the free parameters.
are in row-echelon form, with the pivots circled, but the matrices
    [ 1 0 1 1 ]     [ 1 0 1 1 ]
    [ 0 0 1 2 ]     [ 0 0 0 0 ]
    [ 0 0 1 1 ]     [ 0 0 0 1 ]
    [ 0 0 0 0 ],    [ 0 0 0 0 ]
are not.
Observe that given a matrix A Mmn (F), by following Gau-Jordan reduction la Theo-
rem 149, we can find a matrix P GLm (F) such that PA = B is in row-echelon form.
Solution: Observe that the matrix of coefficients is already in row-echelon form. Clearly every
variable is a leading variable, and by back substitution
    2w = 6 ⟹ w = 6/2 = 3,
    z − w = −4 ⟹ z = −4 + w = −4 + 3 = −1,
    2y + z = 1 ⟹ y = 1/2 − 1/2 z = 1,
    x + y + z + w = 3 ⟹ x = 3 − y − z − w = 0.
The (unique) solution is thus
    [ x ]   [  0 ]
    [ y ] = [  1 ]
    [ z ]   [ −1 ]
    [ w ]   [  3 ].
Solution: The system is already in row-echelon form, and we see that x, y, z are leading
variables while w is a free parameter. We put w = t. Using back substitution, and operating
from the bottom up, we find
    z − w = −4 ⟹ z = −4 + w = −4 + t,
    2y + z = 1 ⟹ y = 1/2 − 1/2 z = 1/2 + 2 − 1/2 t = 5/2 − 1/2 t,
    x + y + z + w = 3 ⟹ x = 3 − y − z − w = 3 − 5/2 + 1/2 t + 4 − t − t = 9/2 − 3/2 t.
The solution is thus
    [ x ]   [ 9/2 − 3/2 t ]
    [ y ] = [ 5/2 − 1/2 t ]    t ∈ R.
    [ z ]   [ −4 + t      ]
    [ w ]   [ t           ],
Solution: We see that x, y are leading variables, while z, w are free parameters. We put
z = s, w = t. Operating from the bottom up, we find
    2y + z = 1 ⟹ y = 1/2 − 1/2 z = 1/2 − 1/2 s,
    x + y + z + w = 3 ⟹ x = 3 − y − z − w = 5/2 − 1/2 s − t.
The solution is thus
    [ x ]   [ 5/2 − 1/2 s − t ]
    [ y ] = [ 1/2 − 1/2 s     ]    (s, t) ∈ R².
    [ z ]   [ s               ]
    [ w ]   [ t               ],
x + 2y + 2z = 0,
y + 2z = 1,
working in Z3 .
The system is already in row-echelon form and x, y are leading variables while z is a free
parameter. We find
    y = 1 − 2z = 1 + z,
and
    x = −2y − 2z = y + z = 1 + 2z.
Thus
    [ x ]   [ 1 + 2z ]
    [ y ] = [ 1 + z  ],    z ∈ Z3.
    [ z ]   [ z      ]
    [ x ]   [ 0 ]         [ x ]   [ 2 ]
    [ y ] = [ 2 ],   and  [ y ] = [ 0 ].
    [ z ]   [ 1 ]         [ z ]   [ 2 ]
Homework

Problema 3.1.1 Find all the solutions in Z3 of the system
    x + y + z + w = 0,
    2y + w = 2.

Problema 3.1.2 In Z7, given that
    [ 1 2 3 ]^(−1)   [ 4 2 0 ]
    [ 2 3 1 ]      = [ 2 0 4 ],
    [ 3 1 2 ]        [ 0 4 2 ]
find all solutions of the system
    1x + 2y + 3z = 5;
    2x + 3y + 1z = 6;
    3x + 1y + 2z = 0.

Problema 3.1.5 This problem introduces Hill block ciphers, which are a way of encoding information with an encoding matrix A ∈ Mn×n(Z26), where n is a strictly positive integer. Split a plaintext into blocks of n letters, creating a series of n × 1 matrices Pk, and consider the numerical equivalent (A = 0, B = 1, C = 2, . . . , Z = 25) of each letter. The encoded message is the translation to letters of the n × 1 matrices Ck = APk mod 26.

For example, suppose you want to encode the message COMMUNISTS EAT OFFAL with the encoding matrix
    A = [ 0 1 0 ]
        [ 3 0 0 ],
        [ 0 0 2 ]
a 3 × 3 matrix. First, split the plaintext into groups of three letters:
    COM MUN IST SEA TOF FAL.
Find the product AP1 modulo 26, and translate into letters:
    AP1 = [ 0 1 0 ]   [ 2  ]   [ 14 ]   [ O ]
          [ 3 0 0 ] × [ 14 ] = [ 6  ] = [ G ],
          [ 0 0 2 ]   [ 12 ]   [ 24 ]   [ Y ]
hence COM is encoded into OGY. Your task is to complete the encoding of the message.

Problema 3.1.6 Find all solutions in Z103, if any, to the system
    x0 + x1 = 0,
    x0 + x2 = 1,
    x0 + x3 = 2,
    ⋮
    x0 + x100 = 99,
    x0 + x1 + x2 + ⋯ + x100 = 4949.
Hints: 0 + 1 + 2 + ⋯ + 99 = 4950, and 99 · 77 − 103 · 74 = 1.
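The Hill-cipher encoding step of Problema 3.1.5 is a single matrix product modulo 26, which makes it a natural candidate for a short computation. The sketch below is my own NumPy illustration of that step (the helper name `encode_block` is mine, not the text's):

```python
import numpy as np

# Encode one block of three letters as C = A P mod 26, with A = 0, ..., Z = 25.
A = np.array([[0, 1, 0],
              [3, 0, 0],
              [0, 0, 2]])

def encode_block(block):
    P = np.array([ord(ch) - ord('A') for ch in block])
    C = A @ P % 26
    return ''.join(chr(c + ord('A')) for c in C)

print(encode_block('COM'))   # 'OGY', as in the worked example
```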
173 Lemma Let A Mmn (F) be in row-echelon form, and let X Mn1 (F) be a matrix of variables.
The homogeneous system AX = 0m1 of m linear equations in n variables has (i) a unique solution if
m = n, (ii) multiple solutions if m < n.
Proof: If m = n then A is a square triangular matrix whose diagonal elements are different
from 0F . As such, it is invertible by virtue of Theorem 162. Thus
so there is only the unique solution X = 0n1 , called the trivial solution.
If m < n then there are n m free variables. Letting these variables run through the elements of
the field, we obtain multiple solutions. Thus if the field has infinitely many elements, we obtain
infinitely many solutions, and if the field has k elements, we obtain knm solutions. Observe
that in this case there is always a non-trivial solution.
174 Theorem Let A Mmn (F), and let X Mn1 (F) be a matrix of variables. The homogeneous system
AX = 0m1 of m linear equations in n variables always has a non-trivial solution if m < n.
Proof: We can find a matrix P GLm (F) such that B = PA is in row-echelon form. Now
That is, the systems AX = 0m1 and BX = 0m1 have the same set of solutions. But by Lemma
173 there is a non-trivial solution.
175 Theorem (Kronecker-Capelli) Let A ∈ Mm×n(F), Y ∈ Mm×1(F) be constant matrices and X ∈ Mn×1(F)
be a matrix of variables. The matrix equation AX = Y is solvable if and only if
    rank (A) = rank ([A|Y]).
Now assume that r = rank (A) = rank ([A|Y]). This means that adding an extra column to A
does not change the rank, and hence, by a sequence of column operations, [A|Y] is equivalent to
[A|0m×1]. Observe that none of these operations is a permutation of the columns, since the first
n columns of [A|Y] and [A|0m×1] are the same. This means that Y can be obtained from the
columns Ci, 1 ≤ i ≤ n, of A by means of transvections and dilatations. But then
    Y = Σ_{i=1}^{n} xi Ci.
Problema 3.2.1 Let A ∈ Mn×p(F), B ∈ Mn×q(F) and put C = [A B] ∈ Mn×(p+q)(F). Prove that rank (A) =
rank (C) if and only if there exists P ∈ Mp×q(F) such that B = AP.
x + 2y + 3z + 4w = 8
x + 2y + 4z + 7w = 12
2x + 4y + 6z + 8w = 16
Examples of Linear Systems 71
Solution: Form the expanded matrix of coefficients and apply row operations to obtain
    [ 1 2 3 4 | 8  ]    R2 − R1 → R2     [ 1 2 3 4 | 8 ]
    [ 1 2 4 7 | 12 ]    R3 − 2R1 → R3    [ 0 0 1 3 | 4 ]
    [ 2 4 6 8 | 16 ]   -------------->   [ 0 0 0 0 | 0 ].
The matrix is now in row-echelon form. The variables x and z are the leading variables, so w and y are
free. Setting w = s, y = t we have
    z = 4 − 3s,
    x = 8 − 4w − 3z − 2y = 8 − 4s − 3(4 − 3s) − 2t = −4 + 5s − 2t.
Hence the solution is given by
    [ x ]   [ −4 + 5s − 2t ]
    [ y ] = [ t            ]
    [ z ]   [ 4 − 3s       ]
    [ w ]   [ s            ].
x 1
1
= .
y
+ 3
1
z
+3
This gives
    2z = 3 ⟹ z = 5,
    2y = 1 − 4z = 2 ⟹ y = 1,
    x = 2 − z = 4,
the computations being carried out in Z7.
Homework
Problema 3.3.1 Find the general solution to the system
    [ 1 1 1 1 1 ]   [ a ]   [ 1 ]
    [ 1 0 1 0 1 ]   [ b ]   [ 1 ]
    [ 2 1 2 1 2 ] × [ c ] = [ 0 ]
    [ 4 2 4 2 4 ]   [ d ]   [ 0 ]
    [ 1 0 0 0 1 ]   [ f ]   [ 0 ],
if any.
Problema 3.3.4 Study the following system of linear equations with parameter a.
ax + y 2z = 1,
Problema 3.3.5 Determine the values of the parameter m for which the system
    x + y + (1 − m)z = m + 2,
    (1 + m)x − y + 2z = 0,
    2x − my + 3z = m + 2
is solvable.
Problema 3.3.6 Determine the values of the parameter m for which the system
x + y + z + t = 4a
x y z + t = 4b
x y + z + t = 4c
x y + z t = 4d
is solvable.
Problema 3.3.8 For which values of the real parameter a does the following system have (i) no solutions, (ii) exactly
one solution, (iii) infinitely many solutions?
ax + ay = 2a + 2,
2x + (a + 1)y + (a 1)z = a2 2a + 9.
    x³y²z⁶ = 1,
    x⁴y⁵z¹² = 2,
    x²y²z⁵ = 3.
Problema 3.3.10 (Leningrad Mathematical Olympiad, 1987, Grade 5) The numbers 1, 2, . . . , 16 are arranged in
a 4 4 matrix A as shewn below. We may add 1 to all the numbers of any row or subtract 1 from all numbers of
any column. Using only the allowed operations, how can we obtain AT ?
    A = [ 1  2  3  4 ]
        [ 5  6  7  8 ]
        [ 9 10 11 12 ]
        [ 13 14 15 16 ].
Problema 3.3.11 (International Mathematics Olympiad, 1963) Find all solutions x1 , x2 , x3 , x4 , x5 of the system
x5 + x2 = yx1 ;
x1 + x3 = yx2 ;
x2 + x4 = yx3 ;
x3 + x5 = yx4 ;
x4 + x1 = yx5 ,
where y is a parameter.
Captulo 4
Vector Spaces
VS3 Commutativity
    a + b = b + a    (4.3)

VS4 Associativity
    (a + b) + c = a + (b + c)    (4.4)

VS9
    1F · a = a    (4.9)

VS10
    (αβ) · a = α · (β · a)    (4.10)
    α(a1, a2, . . . , an) = (αa1, αa2, . . . , αan).
In particular, ⟨Z2², +, ·, Z2⟩ is a vector space with only four elements, and we have already seen the two-dimensional
and three-dimensional spaces ⟨R², +, ·, R⟩ and ⟨R³, +, ·, R⟩.
181 Exemplo hMmn (F), +, , Fi is a vector space under matrix addition and scalar multiplication of
matrices.
182 Exemplo If
    F[x] = {a0 + a1x + a2x² + ⋯ + anxⁿ : ai ∈ F, n ∈ N}
denotes the set of polynomials with coefficients in a field ⟨F, +, ·⟩, then ⟨F[x], +, ·, F⟩ is a vector space,
under polynomial addition and scalar multiplication of a polynomial.
183 Exemplo If
    Fn[x] = {a0 + a1x + a2x² + ⋯ + akx^k : ai ∈ F, k ∈ N, k ≤ n}
denotes the set of polynomials with coefficients in a field ⟨F, +, ·⟩ and degree at most n, then ⟨Fn[x], +, ·, F⟩
is a vector space, under polynomial addition and scalar multiplication of a polynomial.
184 Exemplo Let k ∈ N and let Cᵏ(R^{[a;b]}) denote the set of k-fold continuously differentiable real-valued functions defined on the interval [a; b]. Then Cᵏ(R^{[a;b]}) is a vector space under addition of functions and multiplication of a function by a scalar.
185 Exemplo Let p ∈ ]1; +∞[. Consider the set of sequences {aₙ}ₙ₌₀^∞, aₙ ∈ C,

ℓᵖ = { {aₙ}ₙ₌₀^∞ : ∑ₙ₌₀^∞ |aₙ|ᵖ < +∞ }.

Then ℓᵖ is a vector space by defining addition as termwise addition of sequences and scalar multiplication as termwise multiplication:

{aₙ}ₙ₌₀^∞ + {bₙ}ₙ₌₀^∞ = {aₙ + bₙ}ₙ₌₀^∞,   α{aₙ}ₙ₌₀^∞ = {αaₙ}ₙ₌₀^∞.

All the axioms of a vector space follow trivially from the fact that we are adding complex numbers, except that we must prove that in ℓᵖ there is closure under addition and scalar multiplication. Since ∑ₙ₌₀^∞ |aₙ|ᵖ < +∞ ⟹ ∑ₙ₌₀^∞ |αaₙ|ᵖ = |α|ᵖ ∑ₙ₌₀^∞ |aₙ|ᵖ < +∞, closure under scalar multiplication follows easily. To prove closure under addition, observe that if z ∈ C then |z| ∈ R₊, and so by the Minkowski Inequality (Theorem 405) we have

(∑ₙ₌₀^N |aₙ + bₙ|ᵖ)^{1/p} ≤ (∑ₙ₌₀^N |aₙ|ᵖ)^{1/p} + (∑ₙ₌₀^N |bₙ|ᵖ)^{1/p}
   ≤ (∑ₙ₌₀^∞ |aₙ|ᵖ)^{1/p} + (∑ₙ₌₀^∞ |bₙ|ᵖ)^{1/p}.    (4.11)

This in turn implies that the series on the left in (4.11) converges, and so we may take the limit as N → +∞, obtaining

(∑ₙ₌₀^∞ |aₙ + bₙ|ᵖ)^{1/p} ≤ (∑ₙ₌₀^∞ |aₙ|ᵖ)^{1/p} + (∑ₙ₌₀^∞ |bₙ|ᵖ)^{1/p}.    (4.12)

Now (4.12) implies that the sum of two sequences in ℓᵖ is also in ℓᵖ, which demonstrates closure under addition.
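The truncated inequality (4.11) can be observed numerically. The following is an illustrative sketch (not part of the text; the sequences 2⁻ⁿ and 3⁻ⁿ are arbitrary choices):

```python
# Illustrative numerical check of the Minkowski inequality for finite
# truncations of two sequences, as in (4.11).
def p_norm(xs, p):
    """(sum |x|^p)^(1/p) for a finite list of numbers."""
    return sum(abs(x) ** p for x in xs) ** (1.0 / p)

p = 3.0
a = [1.0 / 2 ** n for n in range(50)]   # a_n = 2^(-n), clearly in l^p
b = [1.0 / 3 ** n for n in range(50)]   # b_n = 3^(-n)
s = [x + y for x, y in zip(a, b)]       # termwise sum

lhs = p_norm(s, p)
rhs = p_norm(a, p) + p_norm(b, p)
assert lhs <= rhs   # Minkowski: truncated sums of a+b stay bounded
```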
Proof: We have

α · 0 = α · (0 + 0) = α · 0 + α · 0.

Hence

α · 0 − α · 0 = α · 0,

or

0 = α · 0,

proving the theorem.
Proof: We have

0_F · v = (0_F + 0_F) · v = 0_F · v + 0_F · v.

Therefore

0_F · v − 0_F · v = 0_F · v,

or

0 = 0_F · v,

proving the theorem.
189 Theorem In any vector space ⟨V, +, ·, F⟩, ∀α ∈ F, ∀v ∈ V,

α · v = 0 ⟹ α = 0_F or v = 0.

190 Theorem In any vector space ⟨V, +, ·, F⟩, ∀α ∈ F, ∀v ∈ V,

(−α) · v = α · (−v) = −(α · v).
Proof: We have

0_F · v = (α + (−α)) · v = α · v + (−α) · v,

whence

−(α · v) + 0_F · v = (−α) · v,

that is

−(α · v) = (−α) · v.

Similarly,

0 = α · (v − v) = α · v + α · (−v),

whence

−(α · v) + 0 = α · (−v),

that is

−(α · v) = α · (−v),

proving the theorem.
Homework
Problema 4.1.1 Is R² with vector addition and scalar multiplication defined as

(x₁, x₂) + (y₁, y₂) = (x₁ + y₁, x₂ + y₂),   α(x₁, x₂) = (αx₁, 0),

a vector space?

Problema 4.1.2 Demonstrate that the commutativity axiom 4.3 is redundant.

Problema 4.1.3 Let V = R⁺ = ]0; +∞[, the positive real numbers, and F = R, the real numbers. Demonstrate that V is a vector space over F if vector addition is defined as a ⊕ b = ab, (a, b) ∈ (R⁺)², and scalar multiplication is defined as α ⊗ a = a^α, (α, a) ∈ R × R⁺.

Problema 4.1.4 Let C denote the complex numbers and R denote the real numbers. Is C a vector space over R under ordinary addition and multiplication? Is R a vector space over C?

Problema 4.1.5 Construct a vector space with exactly 8 elements.

Problema 4.1.6 Construct a vector space with exactly 9 elements.
192 Exemplo Trivially, X1 = { 0 } and X2 = V are vector subspaces of V.
193 Theorem Let ⟨V, +, ·, F⟩ be a vector space. Then U ⊆ V, U ≠ ∅, is a subspace of V if and only if ∀α ∈ F and ∀(a, b) ∈ U² it is verified that

α · a + b ∈ U.
Proof: Observe that U inherits commutativity, associativity and the distributive laws from V. Thus a non-empty U ⊆ V is a vector subspace of V if (i) U is closed under scalar multiplication, that is, if α ∈ F and v ∈ U, then α · v ∈ U; (ii) U is closed under vector addition, that is, if (u, v) ∈ U², then u + v ∈ U. Observe that (i) gives the existence of inverses in U, for take α = −1_F and so v ∈ U ⟹ −v ∈ U. This coupled with (ii) gives the existence of the zero-vector, for 0 = v − v ∈ U. Thus we need to prove that if a non-empty subset of V satisfies the property stated in the Theorem then it is closed under scalar multiplication and vector addition, and vice-versa, if a non-empty subset of V is closed under scalar multiplication and vector addition, then it satisfies the property stated in the Theorem. But this is trivial.
194 Exemplo Shew that X = {A ∈ Mₙₙ(F) : tr (A) = 0_F} is a subspace of Mₙₙ(F).
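The subspace criterion of Theorem 193 can be tried out numerically on this example. The following is an illustrative sketch (not from the text; the two matrices are arbitrary trace-zero choices):

```python
# Illustrative check of the criterion of Theorem 193 for
# X = {A in M_2x2(R) : tr(A) = 0}.
def trace(m):
    return m[0][0] + m[1][1]

def comb(alpha, a, b):
    """Return alpha*a + b for 2x2 matrices given as nested lists."""
    return [[alpha * a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

a = [[3, 5], [7, -3]]    # tr = 0
b = [[-1, 2], [0, 1]]    # tr = 0
assert trace(a) == 0 and trace(b) == 0
# For any scalar alpha, alpha*a + b stays in X:
for alpha in (-2, 0, 1, 10):
    assert trace(comb(alpha, a, b)) == 0
```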
195 Exemplo Let U ∈ Mₙₙ(F) be arbitrary but fixed. Shew that
196 Theorem Let X ⊆ V, Y ⊆ V be vector subspaces of a vector space ⟨V, +, ·, F⟩. Then their intersection X ∩ Y is also a vector subspace of V.

Proof: Let α ∈ F and (a, b) ∈ (X ∩ Y)². Then clearly (a, b) ∈ X² and (a, b) ∈ Y². Since X is a vector subspace, α · a + b ∈ X, and since Y is a vector subspace, α · a + b ∈ Y. Thus

α · a + b ∈ X ∩ Y
We will soon see that the only vector subspaces of ⟨R², +, ·, R⟩ are the set containing the zero-vector, any line through the origin, and R² itself. The only vector subspaces of ⟨R³, +, ·, R⟩ are the set containing the zero-vector, any line through the origin, any plane containing the origin, and R³ itself.
Homework
Problema 4.2.1 Prove that

X = {(a, b, c, d)ᵀ ∈ R⁴ : a − b − 3d = 0}

is a vector subspace of R⁴.

Problema 4.2.2 Prove that

X = {(a, 2a − 3b, 5b, a + 2b, a)ᵀ : a, b ∈ R}

is a vector subspace of R⁵.
Problema 4.2.3 Let A ∈ M_{m×n}(F) be a fixed matrix. Demonstrate that

S = {X ∈ M_{n×1}(F) : AX = 0_{m×1}}

is a subspace of M_{n×1}(F).

Problema 4.2.4 Prove that the set X ⊆ Mₙₙ(F) of upper triangular matrices is a subspace of Mₙₙ(F).

Problema 4.2.7 Prove that the following subsets are not subspaces of the given vector space. Here you must say which of the axioms for a vector space fail.

{(a, b, 0)ᵀ ∈ R³ : a, b ∈ R, a + b = 1}

{(a, b, 0)ᵀ ∈ R³ : a, b ∈ R, ab = 0}

{[a b; 0 0] ∈ M₂₂(R) : (a, b) ∈ R², a + b² = 0}

Problema 4.2.9 Let V be a vector space over a field F. If F is infinite, show that V is not the set-theoretic union of a finite number of proper subspaces.

Problema 4.2.10 Give an example of a finite vector space V over a finite field F such that

V = V₁ ∪ V₂ ∪ V₃,

where the Vₖ are proper subspaces.
is said to be a linear combination of the vectors aᵢ ∈ V, 1 ≤ i ≤ n.

198 Exemplo Any matrix [a b; c d] ∈ M₂₂(R) can be written as a linear combination of the matrices

[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1],

for

[a b; c d] = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1].
199 Exemplo Any polynomial of degree at most 2, say a + bx + cx² ∈ R₂[x], can be written as a linear combination of 1, x − 1, and x² − x + 2, for

a + bx + cx² = (a + b − c) · 1 + (b + c)(x − 1) + c(x² − x + 2).
200 Definição The vectors aᵢ ∈ V, 1 ≤ i ≤ n, are linearly dependent or tied if

∃(λ₁, λ₂, . . . , λₙ) ∈ Fⁿ \ {0} such that ∑ⱼ₌₁ⁿ λⱼ aⱼ = 0,

that is, if there is a non-trivial linear combination of them adding to the zero vector.

A family of vectors is linearly independent if and only if the only linear combination of them giving the zero-vector is the trivial linear combination.
202 Exemplo The vectors

(1, 2, 3)ᵀ, (4, 5, 6)ᵀ, (7, 8, 9)ᵀ

are linearly dependent in R³, since

(1, 2, 3)ᵀ − 2(4, 5, 6)ᵀ + (7, 8, 9)ᵀ = (0, 0, 0)ᵀ.
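This dependence can be confirmed by direct computation; here is an illustrative sketch (not from the text) verifying the non-trivial combination of Example 202:

```python
# Illustrative check: the combination 1*v1 - 2*v2 + 1*v3 from Example 202
# really is the zero vector.
v1, v2, v3 = (1, 2, 3), (4, 5, 6), (7, 8, 9)

combo = tuple(1 * a - 2 * b + 1 * c for a, b, c in zip(v1, v2, v3))
assert combo == (0, 0, 0)   # a non-trivial combination giving the zero vector
```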
Solution: Assume that a(u − v) + b(u + v) = 0. Then

(a + b)u + (b − a)v = 0.

Since u, v are linearly independent, the above coefficients must be 0, that is, a + b = 0_F and b − a = 0_F. But this gives 2a = 2b = 0_F, which implies a = b = 0_F, if the characteristic of the field is not 2. This proves the linear independence of u − v and u + v.
204 Theorem Let A ∈ M_{m×n}(F). Then the columns of A are linearly independent if and only if the only solution to the system AX = 0ₘ is the trivial solution.

Proof: If A₁, A₂, . . . , Aₙ denote the columns of A, then

x₁A₁ + x₂A₂ + ⋯ + xₙAₙ = AX,
Homework

Problema 4.3.1 Shew that

{(1, 0, 0)ᵀ, (1, 1, 0)ᵀ, (1, 1, 1)ᵀ}

forms a free family of vectors in R³.

Problema 4.3.2 Prove that

 1 1 1 1
 1 1 1 1
 1 1 1 0
 1 1 1 1

is a linearly independent set of vectors in R⁴ and shew that X = (1, 2, 1, 1)ᵀ can be written as a linear combination of these vectors.

Problema 4.3.3 Let (u, v) ∈ (Rⁿ)². Prove that |u • v| = ‖u‖ ‖v‖ if and only if u and v are linearly dependent.

Problema 4.3.4 Prove that

[1 0; 0 1], [1 0; 0 −1], [0 1; 1 0], [0 1; −1 0]

is a linearly independent family over R. Write [1 1; 1 1] as a linear combination of these matrices.

Problema 4.3.5 Let {v₁, v₂, v₃, v₄} be a linearly independent family of vectors. Prove that the family

{v₁ + v₂, v₂ + v₃, v₃ + v₄, v₄ + v₁}

is not linearly independent.

Problema 4.3.6 Let {v₁, v₂, v₃} be linearly independent vectors in R⁵. Are the vectors

b₁ = 3v₁ + 2v₂ + 4v₃,
b₂ = v₁ + 4v₂ + 2v₃,
b₃ = 9v₁ + 4v₂ + 3v₃,
b₄ = v₁ + 2v₂ + 5v₃,

linearly independent? Prove or disprove!

Problema 4.3.7 Is the family {1, √2} linearly independent over Q?

Problema 4.3.8 Is the family {1, √2} linearly independent over R?

Problema 4.3.9 Consider the vector space

V = {a + b√2 + c√3 : (a, b, c) ∈ Q³}.

1. Shew that {1, √2, √3} are linearly independent over Q.

2. Express

1/(1 + √2) + 1/(√12 − 2)

as a linear combination of {1, √2, √3}.

Problema 4.3.10 Let f, g, h belong to C^∞(R^R) (the space of infinitely continuously differentiable real-valued functions defined on the real line) and be given by

f(x) = eˣ, g(x) = e²ˣ, h(x) = e³ˣ.

Shew that f, g, h are linearly independent over R.

Problema 4.3.11 Let f, g, h belong to C^∞(R^R) be given by

f(x) = cos²x, g(x) = sin²x, h(x) = cos 2x.

Shew that f, g, h are linearly dependent over R.
207 Theorem If {u₁, u₂, . . . , u_k, . . .} ⊆ V spans V, then any superset

{w, u₁, u₂, . . . , u_k, . . .} ⊆ V

also spans V.

Proof: This follows at once from

∑ᵢ₌₁^l λᵢuᵢ = 0_F · w + ∑ᵢ₌₁^l λᵢuᵢ.
The family {i, j, k} spans R³, since given (a, b, c)ᵀ ∈ R³ we may write

(a, b, c)ᵀ = a i + b j + c k.
215 Exemplo If A ∈ M₂₂(R), A ∈ span ⟨[1 0; 0 0], [0 0; 0 1], [0 1; 1 0]⟩, then A has the form

A = a[1 0; 0 0] + b[0 0; 0 1] + c[0 1; 1 0] = [a c; c b],

i.e., this family spans the set of all symmetric 2 × 2 matrices over R.
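A quick computational illustration of Example 215 (a sketch, not from the text; the scalar triples are arbitrary):

```python
# Every element of span{E11, E22, E12+E21} in M_2x2(R) is symmetric.
E11 = [[1, 0], [0, 0]]
E22 = [[0, 0], [0, 1]]
S   = [[0, 1], [1, 0]]   # E12 + E21

def lin(a, b, c):
    """a*E11 + b*E22 + c*S, entrywise."""
    return [[a * E11[i][j] + b * E22[i][j] + c * S[i][j] for j in range(2)]
            for i in range(2)]

for a, b, c in [(2, -1, 3), (0, 5, -4)]:
    m = lin(a, b, c)
    assert m == [[a, c], [c, b]]    # the form [a c; c b] from the text
    assert m[0][1] == m[1][0]       # hence symmetric
```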
216 Theorem Let V be a vector space over a field F and let (v, w) ∈ V², α ∈ F \ {0_F}. Then

span ⟨v, w⟩ = span ⟨v, αw⟩.

217 Theorem Let V be a vector space over a field F and let (v, w) ∈ V², α ∈ F. Then

span ⟨v, w⟩ = span ⟨v, w, αv + w⟩.
Homework

Problema 4.4.1 Let R₃[x] denote the set of polynomials with degree at most 3 and real coefficients. Prove that the set

{1, 1 + x, (1 + x)², (1 + x)³}

spans R₃[x].

Problema 4.4.2 Shew that

(1, 1, 1)ᵀ ∉ span ⟨(1, 0, 1)ᵀ, (0, 1, 1)ᵀ⟩.

Problema 4.4.3 What is

span ⟨[1 0; 0 0], [0 0; 0 1], [0 1; 1 0]⟩ ?

prove that

span ⟨a, b⟩ = span ⟨c, d⟩.
4.5 Bases
218 Definição A family {u₁, u₂, . . . , u_k, . . .} ⊆ V is said to be a basis of V if (i) they are linearly independent, (ii) they span V.

where there is a 1_F on the i-th slot and 0_F's on the other n − 1 positions, is a basis for Fⁿ.
Proof: Since U is a linearly independent family, we need only to prove that it spans V. Take v ∈ V. If v ∈ U then there is nothing to prove, so assume that v ∈ V \ U. Consider the set U′ = U ∪ {v}. This set properly contains U, and so, by assumption, it forms a dependent family. There exist scalars α₀, α₁, . . . , αₙ, not all zero, such that

α₀v + α₁u₁ + ⋯ + αₙuₙ = 0.

Now, α₀ ≠ 0_F, otherwise the uᵢ would be linearly dependent. Hence α₀⁻¹ exists and we have

v = −α₀⁻¹(α₁u₁ + ⋯ + αₙuₙ),

and so the uᵢ span V.
From Theorem 220 it follows that to shew that a vector space has a basis it is enough to shew that it has a maximal linearly independent set of vectors. Such a proof requires something called Zorn's Lemma, and it is beyond our scope. We dodge the whole business by taking as an axiom that every vector space possesses a basis.
221 Theorem (Steinitz Replacement Theorem) Let ⟨V, +, ·, F⟩ be a vector space and let U = {u₁, u₂, . . .} ⊆ V. Let W = {w₁, w₂, . . . , w_k} be an independent family of vectors in span (U). Then there exist k of the uᵢ's, say {u₁, u₂, . . . , u_k}, which may be replaced by the wᵢ's in such a way that

span ⟨w₁, w₂, . . . , w_k, u_{k+1}, . . .⟩ = span (U).

span ⟨w₁, u₂, . . .⟩ = span ⟨w₁, u₁, u₂, . . .⟩.

Assume now that the theorem is true for any set of fewer than k independent vectors. We may thus assume that {u₁, . . .} has more than k − 1 vectors and that

span ⟨w₁, w₂, . . . , w_{k−1}, u_k, . . .⟩ = span (U).

Since w_k ∈ span (U) we have

w_k = λ₁w₁ + λ₂w₂ + ⋯ + λ_{k−1}w_{k−1} + λ_k u_k + λ_{k+1}u_{k+1} + ⋯ + λ_m u_m.

If all the λᵢ = 0_F for i ≥ k, then the {w₁, w₂, . . . , w_k} would be linearly dependent, contrary to assumption. Thus there is a λᵢ ≠ 0_F with i ≥ k, and after reordering, we may assume that λ_k ≠ 0_F. We have therefore

u_k = λ_k⁻¹(w_k − (λ₁w₁ + λ₂w₂ + ⋯ + λ_{k−1}w_{k−1} + λ_{k+1}u_{k+1} + ⋯ + λ_m u_m)).
222 Corollary Let {w₁, w₂, . . . , wₙ} be an independent family of vectors with V = span ⟨w₁, w₂, . . . , wₙ⟩.

Proof:

1. In the Steinitz Replacement Theorem 221 replace the first n of the uᵢ's by the wᵢ's, and ℓ ≤ n follows.

2. If {u₁, u₂, . . . , u_ℓ} is a linearly independent family, then we may interchange the rôle of the wᵢ and uᵢ, obtaining n ≤ ℓ. Conversely, if ℓ = n and the uᵢ are dependent, we could express some uᵢ as a linear combination of the remaining ℓ − 1 vectors, and thus we would have shewn that some ℓ − 1 vectors span V. From (1) in this corollary we would conclude that n ≤ ℓ − 1, contradicting n = ℓ.

3. This follows from the definition of what a basis is and from (2) of this corollary.
223 Definição The dimension of a vector space ⟨V, +, ·, F⟩ is the number of elements of any of its bases, and we denote it by dim V.
Proof: Take any basis {u₁, . . . , u_k, u_{k+1}, . . .} and use the Steinitz Replacement Theorem 221.
225 Corollary If U ⊆ V is a vector subspace of a finite dimensional vector space V then dim U ≤ dim V.

Proof: Since any basis of U can be extended to a basis of V, it follows that the number of elements of the basis of U is at most as large as that for V.

226 Exemplo Find a basis and the dimension of the space generated by the set of symmetric matrices in Mₙₙ(R).

Solution: Let Eᵢⱼ ∈ Mₙₙ(R) be the n × n matrix with a 1 on the ij-th position and 0's everywhere else. For 1 ≤ i < j ≤ n, consider the (n choose 2) = n(n − 1)/2 matrices Aᵢⱼ = Eᵢⱼ + Eⱼᵢ. The Aᵢⱼ have a 1 on the ij-th and ji-th position and 0's everywhere else. They, together with the n matrices Eᵢᵢ, 1 ≤ i ≤ n, constitute a basis for the space of symmetric matrices. The dimension of this space is thus

n(n − 1)/2 + n = n(n + 1)/2.
227 Theorem Let {u₁, . . . , uₙ} be vectors in Rⁿ. Then the u's form a basis if and only if the n × n matrix A formed by taking the u's as the columns of A is invertible.

Proof: Since we have the right number of vectors, it is enough to prove that the u's are linearly independent. But if X = (x₁, x₂, . . . , xₙ)ᵀ, then

x₁u₁ + ⋯ + xₙuₙ = AX.

If A is invertible, then AX = 0ₙ ⟹ X = A⁻¹0ₙ = 0ₙ, meaning that x₁ = x₂ = ⋯ = xₙ = 0, so the u's are linearly independent.

Conversely, suppose the u's are linearly independent. Write A = P⁻¹D_{n,n,r}Q⁻¹, so that

AX = 0ₙ ⟹ P⁻¹D_{n,n,r}Q⁻¹X = 0ₙ ⟹ D_{n,n,r}Q⁻¹X = 0ₙ.

Put Q⁻¹X = (y₁, y₂, . . . , yₙ)ᵀ. Then

D_{n,n,r}Q⁻¹X = 0ₙ ⟹ y₁e₁ + ⋯ + y_r e_r = 0ₙ,

where eⱼ is the n-dimensional column vector with a 1 on the j-th slot and 0's everywhere else. If r < n then y_{r+1}, . . . , yₙ can be taken arbitrarily and so there would not be a unique solution, a contradiction. Hence r = n and A is invertible.
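Theorem 227 can be illustrated numerically with a determinant test; the following sketch (not from the text) reuses the dependent vectors of Example 202:

```python
# Three vectors form a basis of R^3 exactly when the determinant of the
# matrix with those columns is nonzero.
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns (1,0,0), (1,1,0), (1,1,1): a basis (the matrix is invertible).
basis_cols = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
assert det3(basis_cols) != 0

# Columns (1,2,3), (4,5,6), (7,8,9): dependent, so not a basis.
dep_cols = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
assert det3(dep_cols) == 0
```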
Homework
Problema 4.5.1 In problem 4.2.2 we saw that

X = {(a, 2a − 3b, 5b, a + 2b, a)ᵀ : a, b ∈ R}

is a vector subspace of R⁵. Find a basis for X and its dimension.

{v₁ + v₂, v₂ + v₃, v₃ + v₄, v₄ + v₅, v₅ + v₁}
Problema 4.5.3 Find a basis for the solution space of the following system of n + 1 linear equations in 2n unknowns:

x₁ + x₂ + ⋯ + xₙ = 0,
x₂ + x₃ + ⋯ + x_{n+1} = 0,
. . . . . .
x_{n+1} + x_{n+2} + ⋯ + x_{2n} = 0.
Problema 4.5.4 Prove that the set V of skew-symmetric n n matrices is a subspace of Mnn (R) and find its
dimension. Exhibit a basis for V.
Problema 4.5.6 Prove that the dimension of the vector subspace of lower triangular n × n matrices is n(n + 1)/2, and find a basis for this space.
is a vector subspace of M₃₃(R), and find a basis for it and its dimension.
4.6 Coordinates
228 Theorem Let {v₁, v₂, . . . , vₙ} be a basis for a vector space V. Then any v ∈ V has a unique representation

v = a₁v₁ + a₂v₂ + ⋯ + aₙvₙ.

Proof: Let

v = b₁v₁ + b₂v₂ + ⋯ + bₙvₙ

be another representation of v. Then

0 = (a₁ − b₁)v₁ + (a₂ − b₂)v₂ + ⋯ + (aₙ − bₙ)vₙ.

Since {v₁, v₂, . . . , vₙ} forms a basis for V, they are a linearly independent family. Thus we must have

a₁ − b₁ = a₂ − b₂ = ⋯ = aₙ − bₙ = 0_F,

that is

a₁ = b₁; a₂ = b₂; . . . ; aₙ = bₙ,

proving uniqueness.
230 Exemplo The standard ordered basis for R³ is S = {i, j, k}. The vector (1, 2, 3)ᵀ ∈ R³, for example, has coordinates (1, 2, 3)_S. If the order of the basis were changed to the ordered basis S₁ = {i, k, j}, then (1, 2, 3)ᵀ ∈ R³ would have coordinates (1, 3, 2)_{S₁}.

231 Exemplo Consider the vector (1, 2, 3)ᵀ ∈ R³ (given in standard representation). Since

(1, 2, 3)ᵀ = −1 (1, 0, 0)ᵀ − 1 (1, 1, 0)ᵀ + 3 (1, 1, 1)ᵀ,

under the ordered basis B₁ = {(1, 0, 0)ᵀ, (1, 1, 0)ᵀ, (1, 1, 1)ᵀ}, the vector (1, 2, 3)ᵀ has coordinates (−1, −1, 3)_{B₁}. We write

(1, 2, 3)ᵀ = (−1, −1, 3)ᵀ_{B₁}.
Thus

(x, y)ᵀ_{B₂} = [2 1; 1 −1]⁻¹ [1 1; 1 2] (3, 4)ᵀ
   = [1/3 1/3; 1/3 −2/3] [1 1; 1 2] (3, 4)ᵀ
   = [2/3 1; −1/3 −1] (3, 4)ᵀ
   = (6, −5)ᵀ_{B₂}.
In general let us consider bases B₁, B₂ for the same vector space V. We want to convert X_{B₁} to Y_{B₂}. We let A be the matrix formed with the column vectors of B₁ in the given order and B be the matrix formed with the column vectors of B₂ in the given order. Both A and B are invertible matrices since the B's are bases, in view of Theorem 227. Then we must have

AX_{B₁} = BY_{B₂}.

Also,

X_{B₁} = A⁻¹BY_{B₂}.
Find the transition matrix P from the basis

B₁ = {(1, 1, 1)ᵀ, (1, 1, 0)ᵀ, (1, 0, 0)ᵀ}

to the basis

B₂ = {(1, 1, 1)ᵀ, (1, 1, 0)ᵀ, (2, 0, 0)ᵀ}.

Solution: Let

A = [1 1 1; 1 1 0; 1 0 0],   B = [1 1 2; 1 1 0; 1 0 0].
P = B⁻¹A
  = [1 1 2; 1 1 0; 1 0 0]⁻¹ [1 1 1; 1 1 0; 1 0 0]
  = [0 0 1; 0 1 −1; 1/2 −1/2 0] [1 1 1; 1 1 0; 1 0 0]
  = [1 0 0; 0 1 0; 0 0 1/2].

Now,

Y_{B₂} = [1 0 0; 0 1 0; 0 0 1/2] (1, 2, 3)ᵀ_{B₁} = (1, 2, 3/2)ᵀ_{B₂}.

As a check, observe that in the standard basis for R³

(1, 2, 3)ᵀ_{B₁} = 1 (1, 1, 1)ᵀ + 2 (1, 1, 0)ᵀ + 3 (1, 0, 0)ᵀ = (6, 3, 1)ᵀ,

(1, 2, 3/2)ᵀ_{B₂} = 1 (1, 1, 1)ᵀ + 2 (1, 1, 0)ᵀ + (3/2)(2, 0, 0)ᵀ = (6, 3, 1)ᵀ.
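The rule P = B⁻¹A can be re-checked without computing an inverse, by verifying that BP = A. An illustrative sketch (not from the text), using the matrices A and B of the example above:

```python
# Verify a candidate transition matrix P = B^{-1} A by checking B @ P == A.
def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(len(n)))
             for j in range(len(n[0]))] for i in range(len(m))]

A = [[1, 1, 1], [1, 1, 0], [1, 0, 0]]
B = [[1, 1, 2], [1, 1, 0], [1, 0, 0]]
P = [[1, 0, 0], [0, 1, 0], [0, 0, 0.5]]

assert matmul(B, P) == A           # hence P = B^{-1} A
# Converting coordinates (1, 2, 3) in B1 to B2:
x = [[1], [2], [3]]
assert matmul(P, x) == [[1], [2], [1.5]]
```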
Homework

Problema 4.6.1

1. Prove that the following vectors are linearly independent in R⁴:

a₁ = (1, 1, 1, 1)ᵀ, a₂ = (1, 1, −1, −1)ᵀ, a₃ = (1, −1, 1, −1)ᵀ, a₄ = (1, −1, −1, 1)ᵀ.

2. Find the coordinates of (1, 2, 1, 1)ᵀ under the ordered basis (a₁, a₂, a₃, a₄).

3. Find the coordinates of (1, 2, 1, 1)ᵀ under the ordered basis (a₁, a₃, a₂, a₄).

Problema 4.6.2 Consider the matrix

A(a) = [a 1 1 1; 0 1 0 1; 1 0 a 1; 1 1 1 1].

Determine all a for which A(a) is not invertible. Find the inverse of A(a) when A(a) is invertible. Find the transition matrix from the basis

B₁ = {(1, 1, 1, 1)ᵀ, (1, 1, 1, 0)ᵀ, (1, 1, 0, 0)ᵀ, (1, 0, 0, 0)ᵀ}

to the basis B₂ formed by the columns of A(a).
Capítulo 5

Linear Transformations

L : V → W, a ↦ L(a)

is a function which is

Linear: L(a + b) = L(a) + L(b),

Homogeneous: L(λa) = λL(a), for λ ∈ F.

It is clear that the above two conditions can be summarised conveniently into

L(a + λb) = L(a) + λL(b).

L : Mₙₙ(R) → R, A ↦ tr (A).
238 Exemplo For a point (x, y) ∈ R², its reflexion about the y-axis is (−x, y). Prove that

R : R² → R², (x, y) ↦ (−x, y)

is linear.

Solution: We have

R(λ(x₁, y₁) + (x₂, y₂)) = R(λx₁ + x₂, λy₁ + y₂)
  = (−(λx₁ + x₂), λy₁ + y₂)
  = λ(−x₁, y₁) + (−x₂, y₂)
  = λR(x₁, y₁) + R(x₂, y₂),

whence R is linear.
Solution: Since

(5, 3)ᵀ = 4 (1, 1)ᵀ − (−1, 1)ᵀ,

we have

L(5, 3)ᵀ = 4L(1, 1)ᵀ − L(−1, 1)ᵀ = 4 (1, 1, 2, 3)ᵀ − (−2, 0, 2, 3)ᵀ = (6, 4, 6, 9)ᵀ.
240 Theorem Let ⟨V, +, ·, F⟩ and ⟨W, +, ·, F⟩ be vector spaces over the same field F, and let L : V → W be a linear transformation. Then

L(0_V) = 0_W.

∀x ∈ V, L(−x) = −L(x).

Proof: We have

L(0_V) = L(0_V + 0_V) = L(0_V) + L(0_V),

hence

L(0_V) − L(0_V) = L(0_V).

Since

L(0_V) − L(0_V) = 0_W,

we obtain the first result. Now

0_W = L(0_V) = L(x + (−x)) = L(x) + L(−x),

from where the second result follows.
Homework
Problema 5.1.1 Consider L : R³ → R³. Prove that

T : V → W, v ↦ T(v)

Since T(0_V) = 0_W by Theorem 240, we have 0_V ∈ ker (T) and 0_W ∈ Im (T).
242 Theorem Let ⟨V, +, ·, F⟩ and ⟨W, +, ·, F⟩ be vector spaces over the same field F, and

T : V → W, v ↦ T(v)

a linear transformation. Then ker (T) is a subspace of V and Im (T) is a subspace of W.

Proof: Let (v₁, v₂) ∈ (ker (T))² and α ∈ F. Then T(v₁) = T(v₂) = 0_W. We must prove that αv₁ + v₂ ∈ ker (T), that is, that T(αv₁ + v₂) = 0_W. But

T(αv₁ + v₂) = αT(v₁) + T(v₂) = α0_W + 0_W = 0_W,

and so ker (T) is a subspace of V. Now, let (w₁, w₂) ∈ (Im (T))² and α ∈ F. Then ∃(v₁, v₂) ∈ V² such that T(v₁) = w₁ and T(v₂) = w₂. We must prove that αw₁ + w₂ ∈ Im (T), that is, that ∃v such that T(v) = αw₁ + w₂. But

αw₁ + w₂ = αT(v₁) + T(v₂) = T(αv₁ + v₂),

and so we may take v = αv₁ + v₂. This proves that Im (T) is a subspace of W.
243 Theorem Let ⟨V, +, ·, F⟩ and ⟨W, +, ·, F⟩ be vector spaces over the same field F, and

T : V → W, v ↦ T(v)

be a linear transformation. Then T is injective if and only if ker (T) = {0_V}.

Proof: Assume that T is injective. Then there is a unique x ∈ V mapping to 0_W:

T(x) = 0_W.

By Theorem 240, T(0_V) = 0_W, i.e., a linear transformation takes the zero vector of one space to the zero vector of the target space, and so we must have x = 0_V. Conversely, assume that ker (T) = {0_V}, and that T(x) = T(y). We must prove that x = y. But

T(x) = T(y) ⟹ T(x) − T(y) = 0_W
  ⟹ T(x − y) = 0_W
  ⟹ (x − y) ∈ ker (T)
  ⟹ x − y = 0_V
  ⟹ x = y,

as we wanted to shew.
244 Theorem (Dimension Theorem) Let ⟨V, +, ·, F⟩ and ⟨W, +, ·, F⟩ be vector spaces of finite dimension over the same field F, and

T : V → W, v ↦ T(v)

be a linear transformation. Then

dim ker (T) + dim Im (T) = dim V.

Proof: Let {v₁, v₂, . . . , v_k} be a basis for ker (T). By virtue of Theorem 224, we may extend this to a basis A = {v₁, v₂, . . . , v_k, v_{k+1}, v_{k+2}, . . . , vₙ} of V. Here n = dim V. We will now shew that B = {T(v_{k+1}), T(v_{k+2}), . . . , T(vₙ)} is a basis for Im (T). We prove that (i) B spans Im (T), and (ii) B is a linearly independent family.

Let w ∈ Im (T). Then ∃v ∈ V such that T(v) = w. Now since A is a basis for V we can write

v = ∑ᵢ₌₁ⁿ λᵢvᵢ.

Hence

w = T(v) = ∑ᵢ₌₁ⁿ λᵢT(vᵢ) = ∑ᵢ₌ₖ₊₁ⁿ λᵢT(vᵢ),

since T(vᵢ) = 0_W for 1 ≤ i ≤ k. Thus B spans Im (T).
245 Corollary If dim V = dim W < +∞, then T is injective if and only if it is surjective.

Proof: Simply observe that if T is injective then dim ker (T) = 0, and if T is surjective then Im (T) = T(V) = W and dim Im (T) = dim W.
Solution: Put A = [a b; c d] and assume L(A) = 0₂. Then

0₂ = L(A) = [a c; b d] − [a b; c d] = (c − b)[0 1; −1 0].

Im (L) = {[0 k; −k 0] : k ∈ R}.
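The map L(A) = Aᵀ − A above can be exercised directly; an illustrative sketch (not from the text; the sample matrices are arbitrary):

```python
# For L(A) = A^T - A on 2x2 matrices: L(A) = 0 exactly when A is
# symmetric, and every output has the antisymmetric form [[0, k], [-k, 0]].
def L(a):
    return [[a[j][i] - a[i][j] for j in range(2)] for i in range(2)]

sym = [[1, 7], [7, 4]]
assert L(sym) == [[0, 0], [0, 0]]    # symmetric matrices lie in ker L

a = [[1, 2], [5, 3]]
out = L(a)
assert out == [[0, 3], [-3, 0]]      # k = c - b = 5 - 2 = 3
```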
247 Exemplo Consider the linear transformation L : M₂₂(R) → R₃[X] given by

L[a b; c d] = (a + b)X² + (a − b)X³.

Determine ker (L) and Im (L).

Solution: We have

0 = L[a b; c d] = (a + b)X² + (a − b)X³ ⟹ a + b = 0, a − b = 0 ⟹ a = b = 0.

Thus

ker (L) = {[0 0; c d] : (c, d) ∈ R²}.

Im (L) = span ⟨X² + X³, X² − X³⟩.
Homework

Problema 5.2.1 In problem 5.1.1 we saw that L : R³ → R³,

L(x, y, z)ᵀ = (x − y − z, x + y + z, z)ᵀ,

is linear. Determine ker (L) and Im (L).

Problema 5.2.2 Consider the function L : R⁴ → R² given by

L(x, y, z, w)ᵀ = (x + y, x − y)ᵀ.

satisfy

L(1, 1, 0)ᵀ = (1, 1, 0)ᵀ; L(0, 1, 1)ᵀ = (2, 0, 0)ᵀ; L(0, 0, 1)ᵀ = (1, 1, 0)ᵀ.

Problema 5.2.7 Let

L : M₂₂(R) → R, A ↦ tr (A).

Determine ker (L) and Im (L).
formed by the column vectors above is called the matrix representation of the linear map L with respect to the bases {vᵢ}_{i∈[1;m]}, {wᵢ}_{i∈[1;n]}.

Solution:

1. The matrix will be a 3 × 3 matrix. We have

L(1, 0, 0)ᵀ = (1, 1, 0)ᵀ, L(0, 1, 0)ᵀ = (1, 1, 0)ᵀ, and L(0, 0, 1)ᵀ = (1, 1, 1)ᵀ,

whence the desired matrix is

[1 1 1; 1 1 1; 0 0 1].
250 Exemplo Let Rₙ[x] denote the set of polynomials with real coefficients with degree at most n.

1. Prove that

L : R₃[x] → R₁[x], p(x) ↦ p″(x)

is a linear transformation. Here p″(x) denotes the second derivative of p(x) with respect to x.

2. Find the matrix of L using the ordered bases {1, x, x², x³} for R₃[x] and {1, x} for R₁[x].

3. Find the matrix of L using the ordered bases {1, x, x², x³} for R₃[x] and {1, x + 2} for R₁[x].

4. Find a basis for ker (L) and find dim ker (L).

Solution:

whence L is linear.

2. We have

L(1) = (d²/dx²) 1 = 0 = 0(1) + 0(x) = (0, 0)ᵀ,
L(x) = (d²/dx²) x = 0 = 0(1) + 0(x) = (0, 0)ᵀ,
L(x²) = (d²/dx²) x² = 2 = 2(1) + 0(x) = (2, 0)ᵀ,
L(x³) = (d²/dx²) x³ = 6x = 0(1) + 6(x) = (0, 6)ᵀ,

whence the desired matrix is [0 0 2 0; 0 0 0 6].

3. We have

L(1) = (d²/dx²) 1 = 0 = 0(1) + 0(x + 2) = (0, 0)ᵀ,
L(x) = (d²/dx²) x = 0 = 0(1) + 0(x + 2) = (0, 0)ᵀ,
L(x²) = (d²/dx²) x² = 2 = 2(1) + 0(x + 2) = (2, 0)ᵀ,
L(x³) = (d²/dx²) x³ = 6x = −12(1) + 6(x + 2) = (−12, 6)ᵀ,

whence the desired matrix is [0 0 2 −12; 0 0 0 6].

4. If p(x) = a + bx + cx² + dx³, then

0 = L(p(x)) = 2c + 6dx, ∀x ∈ R,

forces c = d = 0. Hence ker (L) = span ⟨1, x⟩ and dim ker (L) = 2.
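The matrix found in part 2 can be applied to coefficient vectors; an illustrative sketch (not from the text; the sample polynomial is arbitrary):

```python
# The matrix of L = d^2/dx^2 : R_3[x] -> R_1[x] in the bases
# {1, x, x^2, x^3} and {1, x}, acting on coefficients (a, b, c, d)
# of a + bx + cx^2 + dx^3.
M = [[0, 0, 2, 0],
     [0, 0, 0, 6]]

def apply(m, v):
    return [sum(r[i] * v[i] for i in range(len(v))) for r in m]

# p(x) = 1 + 2x + 3x^2 + 4x^3, so p''(x) = 6 + 24x:
p = [1, 2, 3, 4]
assert apply(M, p) == [6, 24]
```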
251 Exemplo

1. A linear transformation T : R³ → R³ is such that

T(i) = (2, 1, −1)ᵀ;   T(j) = (3, 0, 1)ᵀ.

It is known that

Im (T) = span ⟨T(i), T(j)⟩

and that

ker (T) = span ⟨(1, 2, −1)ᵀ⟩.

Argue that there must be α and β such that

T(k) = αT(i) + βT(j).

2. Find α and β, and hence, the matrix representing T under the standard ordered basis.

Solution:

1. Since T(k) ∈ Im (T) and Im (T) is generated by T(i) and T(j), there must be (α, β) ∈ R² with

T(k) = αT(i) + βT(j) = α(2, 1, −1)ᵀ + β(3, 0, 1)ᵀ = (2α + 3β, α, −α + β)ᵀ.

2. The matrix of T is

[T(i) T(j) T(k)] = [2 3 2α+3β; 1 0 α; −1 1 −α+β].

Since (1, 2, −1)ᵀ ∈ ker (T) we must have

[2 3 2α+3β; 1 0 α; −1 1 −α+β] (1, 2, −1)ᵀ = (0, 0, 0)ᵀ.

Solving the resulting system of linear equations we obtain α = 1, β = 2. The required matrix is thus

[2 3 8; 1 0 1; −1 1 1].
Homework
Problema 5.3.1 Let T : R⁴ → R³ be a linear transformation such that

T(1, 1, 1, 1)ᵀ = (0, 1, 0)ᵀ, T(1, 1, 1, 0)ᵀ = (1, 0, 1)ᵀ, T(1, 1, 0, 0)ᵀ = (0, 1, 0)ᵀ, T(1, 0, 0, 0)ᵀ = (1, 2, 1)ᵀ.

Find the matrix of T with respect to the canonical bases. Find the dimensions and describe ker (T) and Im (T).
Problema 5.3.2

1. A linear transformation T : R³ → R³ has as image the plane with equation x + y + z = 0 and as kernel the line x = y = z. If

T(1, 1, 2)ᵀ = (a, 0, 1)ᵀ, T(2, 1, 1)ᵀ = (3, b, 5)ᵀ, T(1, 2, 1)ᵀ = (1, 2, c)ᵀ,

find a, b, c.
2. Find the matrix representation of T under the standard basis.
is a linear transformation.
2. Find a basis for ker (T ) and find dim ker (T )
3. Find a basis for Im (T ) and find dim Im (T ).
4. Find the matrix of T under the ordered bases A = {(1, 2)ᵀ, (1, 3)ᵀ} of R² and B = {(1, 1, 0)ᵀ, (1, 0, 1)ᵀ, (0, 1, 1)ᵀ} of R³.
Problema 5.3.4 Find the matrix representing the linear map L under each of the following pairs of ordered bases, where

L : R³ → R², (x, y, z)ᵀ ↦ (x + 2y, 3x − z)ᵀ.
1. The bases for both R3 and R2 are both the standard ordered bases.
2. The ordered basis for R³ is {(1, 0, 0)ᵀ, (1, 1, 0)ᵀ, (1, 1, 1)ᵀ} and R² has the standard ordered basis.

3. The ordered basis for R³ is {(1, 0, 0)ᵀ, (1, 1, 0)ᵀ, (1, 1, 1)ᵀ} and the ordered basis for R² is A = {(1, 0)ᵀ, (1, 1)ᵀ}.
Problema 5.3.5 A linear transformation T : R² → R² satisfies ker (T) = Im (T), and T(1, 1)ᵀ = (2, 3)ᵀ. Find the matrix representing T under the standard ordered basis.
Problema 5.3.6 Find the matrix representation for the linear map

L : M₂₂(R) → R, A ↦ tr (A),
Problema 5.3.7 Let A ∈ M_{n×p}(R), B ∈ M_{p×q}(R), and C ∈ M_{q×r}(R) be such that rank (B) = rank (AB). Shew that

rank (BC) = rank (ABC).
Capítulo 6

Determinants
6.1 Permutations
252 Definição Let S be a finite set with n ≥ 1 elements. A permutation is a bijective function σ : S → S. It is easy to see that there are n! permutations from S onto itself.

Since we are mostly concerned with the action that σ exerts on S rather than with the particular names of the elements of S, we will take S to be the set S = {1, 2, 3, . . . , n}. We indicate a permutation σ by means of the following convenient diagram

σ = ( 1 2 ⋯ n ; σ(1) σ(2) ⋯ σ(n) ),

where the second row lists the images of the first.
253 Definição The notation Sₙ will denote the set of all permutations on {1, 2, 3, . . . , n}. Under this notation, the composition of two permutations (σ, τ) ∈ Sₙ² is

σ ∘ τ = ( 1 2 ⋯ n ; σ(1) σ(2) ⋯ σ(n) ) ∘ ( 1 2 ⋯ n ; τ(1) τ(2) ⋯ τ(n) )
      = ( 1 2 ⋯ n ; (σ ∘ τ)(1) (σ ∘ τ)(2) ⋯ (σ ∘ τ)(n) ).

We usually do away with the ∘ and write σ ∘ τ simply as στ. This product of permutations is thus simply function composition.
Since σ : S → S is a bijection, its inverse σ⁻¹ : S → S exists; if

σ = ( 1 2 ⋯ n ; σ(1) σ(2) ⋯ σ(n) ),

then

σ⁻¹ = ( σ(1) σ(2) ⋯ σ(n) ; 1 2 ⋯ n ).
254 Exemplo The set S₃ has 3! = 6 elements, which are given below.

1. Id = ( 1 2 3 ; 1 2 3 ).
2. τ₁ = ( 1 2 3 ; 1 3 2 ).
3. τ₂ = ( 1 2 3 ; 3 2 1 ).
4. τ₃ = ( 1 2 3 ; 2 1 3 ).
5. σ₁ = ( 1 2 3 ; 2 3 1 ).
6. σ₂ = ( 1 2 3 ; 3 1 2 ).

(We read products from right to left: under the right factor 1 goes to some element, and that element is then mapped by the left factor, etc.) For example,

σ₁τ₁ = ( 1 2 3 ; 2 3 1 )( 1 2 3 ; 1 3 2 ) = ( 1 2 3 ; 2 1 3 ) = τ₃.

Observe in particular that σ₁τ₁ ≠ τ₁σ₁. Finding all the other products we deduce the following multiplication table (where the multiplication operation is really composition of functions).

      | Id   τ₁   τ₂   τ₃   σ₁   σ₂
   ---+------------------------------
   Id | Id   τ₁   τ₂   τ₃   σ₁   σ₂
   τ₁ | τ₁   Id   σ₁   σ₂   τ₂   τ₃
   τ₂ | τ₂   σ₂   Id   σ₁   τ₃   τ₁
   τ₃ | τ₃   σ₁   σ₂   Id   τ₁   τ₂
   σ₂ | σ₂   τ₂   τ₃   τ₁   Id   σ₁
   σ₁ | σ₁   τ₃   τ₁   τ₂   σ₂   Id
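The products in the table of Example 254 can be checked mechanically; an illustrative sketch (not from the text), with permutations encoded as tuples of images:

```python
# Compose S_3 permutations as tuples (images of 1, 2, 3) and confirm
# sigma1*tau1 = tau3 while tau1*sigma1 = tau2: the product is not commutative.
def compose(s, t):
    """(s t)(i) = s(t(i)); permutations as tuples of images of 1..n."""
    return tuple(s[t[i] - 1] for i in range(len(s)))

Id   = (1, 2, 3)
tau1 = (1, 3, 2)
tau2 = (3, 2, 1)
tau3 = (2, 1, 3)
sig1 = (2, 3, 1)
sig2 = (3, 1, 2)

assert compose(sig1, tau1) == tau3
assert compose(tau1, sig1) == tau2
assert compose(sig1, sig2) == Id     # sigma2 is the inverse of sigma1
```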
The permutations in example 254 can be conveniently interpreted as follows. Consider an equilateral triangle with vertices labelled 1, 2 and 3, as in figure 6.1. Each τₐ is a reflexion (flipping) about the line joining the vertex a with the midpoint of the side opposite a. For example, τ₁ fixes 1 and flips 2 and 3. Observe that two successive flips return the vertices to their original position and so (∀a ∈ {1, 2, 3})(τₐ² = Id). Similarly, σ₁ is a rotation of the vertices by an angle of 120°. Three successive rotations restore the vertices to their original position and so σ₁³ = Id.

Observe that τ₁⁻¹ = τ₁.

Observe that σ₁⁻¹ = σ₂.
258 Definição Let l ≥ 1 and let i₁, . . . , i_l ∈ {1, 2, . . . , n} be distinct. We write (i₁ i₂ . . . i_l) for the element σ ∈ Sₙ such that σ(i_r) = i_{r+1}, 1 ≤ r < l, σ(i_l) = i₁ and σ(i) = i for i ∉ {i₁, . . . , i_l}. We say that (i₁ i₂ . . . i_l) is a cycle of length l. The order of a cycle is its length. Observe that if σ has order l then σ^l = Id.

Observe that (i₂ . . . i_l i₁) = (i₁ . . . i_l) etc., and that (1) = (2) = ⋯ = (n) = Id. In fact, we have

(i₁ . . . i_l) = (j₁ . . . j_m)

if and only if (1) l = m and if (2) l > 1: ∃a such that ∀k: i_k = j_{k+a mod l}. Two cycles (i₁, . . . , i_l) and (j₁, . . . , j_m) are disjoint if {i₁, . . . , i_l} ∩ {j₁, . . . , j_m} = ∅. Disjoint cycles commute and if σ = σ₁σ₂ ⋯ σ_t is the product of disjoint cycles of length l₁, l₂, . . . , l_t respectively, then σ has order

lcm (l₁, l₂, . . . , l_t).

is

(285)(3746).

The order of σ is lcm (3, 4) = 12.
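Cycle decomposition and the lcm rule for the order can be automated; an illustrative sketch (not from the text), applied to the permutation of Example 260 below:

```python
# Decompose a permutation into disjoint cycles and compute its order
# as the lcm of the cycle lengths.
from math import gcd

def cycles(perm):
    """perm maps i -> perm[i-1]; returns the disjoint cycles as tuples."""
    seen, out = set(), []
    for start in range(1, len(perm) + 1):
        if start in seen:
            continue
        cyc, i = [], start
        while i not in seen:
            seen.add(i)
            cyc.append(i)
            i = perm[i - 1]
        out.append(tuple(cyc))
    return out

def order(perm):
    o = 1
    for c in cycles(perm):
        o = o * len(c) // gcd(o, len(c))
    return o

# The permutation of Example 260, with decomposition (123)(567):
p = (2, 3, 1, 4, 6, 7, 5, 8, 9)
assert cycles(p) == [(1, 2, 3), (4,), (5, 6, 7), (8,), (9,)]
assert order(p) == 3
```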
260 Exemplo The cycle decomposition σ = (123)(567) in S₉ arises from the permutation

σ = ( 1 2 3 4 5 6 7 8 9 ; 2 3 1 4 6 7 5 8 9 ).
261 Exemplo Find a shuffle of a deck of 13 cards that requires 42 repeats to return the cards to their original order.

Solution: Here is one (of many possible ones). Observe that 7 + 6 = 13 and 7 · 6 = 42. We take the permutation

(1 2 3 4 5 6 7)(8 9 10 11 12 13),

which has order lcm (7, 6) = 42. This corresponds to the following shuffle: for 1 ≤ i ≤ 6 and 8 ≤ i ≤ 12, take the i-th card to the (i + 1)-th place, take the 7th card to the first position and the 13th card to the 8th position. Query: Of all possible shuffles of 13 cards, which one takes the longest to restitute the cards to their original position?
262 Exemplo Let a shuffle of a deck of 10 cards be made as follows: The top card is put at the bottom,
the deck is cut in half, the bottom half is placed on top of the top half, and then the resulting bottom
card is put on top. How many times must this shuffle be repeated to get the cards in the initial order?
Explain.
Cutting this new arrangement in half and putting the lower half on top corresponds to

( 1 2 3 4 5 6 7 8 9 10 ; 7 8 9 10 1 2 3 4 5 6 ).

The order of this permutation is lcm(2, 2, 2, 2, 2) = 2, so in 2 shuffles the cards are restored to their original position.
The above examples illustrate the general case, given in the following theorem.
Proof: Let σ ∈ Sₙ, a₁ ∈ {1, 2, . . . , n}. Put σᵏ(a₁) = a_{k+1}, k ≥ 0. Let a₁, a₂, . . . , a_s be the longest chain with no repeats. Then we have σ(a_s) = a₁. If the {a₁, a₂, . . . , a_s} exhaust {1, 2, . . . , n}, then we have σ = (a₁ a₂ . . . a_s). If not, there exists b₁ ∈ {1, 2, . . . , n} \ {a₁, a₂, . . . , a_s}. Again, we find the longest chain of distinct b₁, b₂, . . . , b_t such that σ(b_k) = b_{k+1}, k = 1, . . . , t − 1, and σ(b_t) = b₁. If the {a₁, a₂, . . . , a_s, b₁, b₂, . . . , b_t} exhaust all of {1, 2, . . . , n}, we have σ = (a₁ a₂ . . . a_s)(b₁ b₂ . . . b_t). If not, we continue the process and find
265 Exemplo The cycle (23468) can be written as a product of transpositions as follows
(23468) = (28)(26)(24)(23).
Notice that this decomposition as the product of transpositions is not unique. Another decomposition is
(23468) = (23)(34)(46)(68).
Let σ ∈ Sₙ and let (i, j) ∈ {1, 2, . . . , n}², i ≠ j. Since σ is a permutation, ∃(a, b) ∈ {1, 2, . . . , n}², a ≠ b, such that σ(j) − σ(i) = b − a. This means that

∏_{1 ≤ i < j ≤ n} (σ(i) − σ(j))/(i − j) = ±1.

If sgn(σ) = 1, then we say that σ is an even permutation, and if sgn(σ) = −1 we say that σ is an odd permutation.
Proof: Let τ be the transposition that exchanges k and l, and assume that k < l:

τ = ( 1 2 . . . k−1 k k+1 . . . l−1 l l+1 . . . n ; 1 2 . . . k−1 l k+1 . . . l−1 k l+1 . . . n ).

The pairs (i, j) with i ∈ {1, 2, . . . , k − 1} ∪ {l, l + 1, . . . , n} and i < j do not suffer an inversion.

The pair (k, j) with k < j suffers an inversion if and only if j ∈ {k + 1, k + 2, . . . , l}, making l − k inversions.

If i ∈ {k + 1, k + 2, . . . , l − 1} and i < j, the pair (i, j) suffers an inversion if and only if j = l, giving l − 1 − k inversions.
Proof: We have

sgn(στ) = ∏_{1 ≤ i < j ≤ n} (στ(i) − στ(j))/(i − j)
   = ∏_{1 ≤ i < j ≤ n} (σ(τ(i)) − σ(τ(j)))/(τ(i) − τ(j)) · ∏_{1 ≤ i < j ≤ n} (τ(i) − τ(j))/(i − j).

The second factor on this last equality is clearly sgn(τ); we must shew that the first factor is sgn(σ). Observe now that as (i, j) runs over the pairs with 1 ≤ i < j ≤ n, the pair (τ(i), τ(j)) runs, up to order, through all pairs (a, b) with 1 ≤ a < b ≤ n, and so

∏_{1 ≤ i < j ≤ n} (σ(τ(i)) − σ(τ(j)))/(τ(i) − τ(j)) = ∏_{1 ≤ a < b ≤ n} (σ(a) − σ(b))/(a − b) = sgn(σ).
Proof: This follows at once from Theorem 270 and Lemma 273.
275 Exemplo The cycle (4678) is an odd cycle; the cycle (1) is an even cycle; the cycle (12345) is an even
cycle.
276 Corollary Every permutation can be decomposed as a product of transpositions. This decomposition
is not necessarily unique, but its parity is unique.
Proof: This follows from Theorem 263, Lemma 266, and Corollary 274.
277 Example (The 15 puzzle) Consider a grid with 16 squares, as shewn in (6.1), where 15 squares are
numbered 1 through 15 and the 16th slot is empty.
1 2 3 4
5 6 7 8
(6.1)
9 10 11 12
13 14 15
In this grid we may successively exchange the empty slot with any of its neighbours, as for example
1 2 3 4
5 6 7 8
. (6.2)
9 10 11 12
13 14 15
118 Captulo 6
We ask whether through a series of valid moves we may arrive at the following position.
1 2 3 4
5 6 7 8
(6.3)
9 10 11 12
13 15 14
Solution: Let us shew that this is impossible. Each time we move a square to the empty
position, we perform a transposition on the set {1, 2, …, 16}. Thus at each move, the permutation
is multiplied by a transposition and hence it changes sign. Observe that the permutation
corresponding to the arrangement in (6.3) is (14 15) (the 14th and 15th positions are transposed) and hence
it is an odd permutation. But we claim that the empty slot can only return to its original position
after an even permutation. To see this paint the grid as a checkerboard:
B R B R
R B R B
(6.4)
B R B R
R B R B
We see that after each move, the empty square changes from black to red, and thus after an
odd number of moves the empty slot is on a red square. Thus the empty slot cannot return to its
original position in an odd number of moves. This completes the proof.
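The parity bookkeeping in the argument above can be sketched in code (not from the text; the blank is treated as square 16):

```python
# Hypothetical illustration: each move is one transposition with the blank, so
# the permutation's parity equals the parity of the number of moves; by the
# checkerboard argument, moves returning the blank home are even in number,
# hence only even permutations are reachable. The target (14 15) is odd.
from itertools import combinations

def parity(perm):
    inv = sum(1 for i, j in combinations(range(len(perm)), 2)
              if perm[i] > perm[j])
    return inv % 2  # 0 = even, 1 = odd

solved = tuple(range(1, 17))  # blank (16) in the last slot
target = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 14, 16)
print(parity(solved), parity(target))  # 0 1: the target is an odd permutation
```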
Homework
Problem 6.2.1 Decompose the permutation
$$\begin{pmatrix}1&2&3&4&5&6&7&8&9\\ 2&3&4&1&5&8&6&7&9\end{pmatrix}$$
as a product of disjoint cycles.
6.3 Determinants
There are many ways of developing the theory of determinants. We will choose a way that will allow
us to deduce the properties of determinants with ease, but has the drawback of being computationally
cumbersome. In the next section we will shew that our way of defining determinants is equivalent to a
more computationally friendly one.
It may be pertinent here to quickly review some properties of permutations. Recall that if σ ∈ Sn is a
cycle of length l, then sgn(σ) = (−1)^{l−1}. For example, in S7,
σ = (1 3 4 7 6)
has length 5, and since 5 − 1 = 4 is even, sgn(σ) = +1. On the other hand,
σ = (1 3 4 7 6 5)
has length 6, and since 6 − 1 = 5 is odd, sgn(σ) = −1.
278 Definition Let A ∈ Mn×n(F), A = [aij], be a square matrix. The determinant of A is defined and
denoted by the sum
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}.$$
280 Example If n = 2, then S2 has 2! = 2 members, Id and τ = (1 2). Observe that sgn(τ) = −1. Thus if
$$A=\begin{bmatrix}a_{11}&a_{12}\\ a_{21}&a_{22}\end{bmatrix}$$
then
$$\det A=\operatorname{sgn}(\mathrm{Id})\,a_{1\mathrm{Id}(1)}a_{2\mathrm{Id}(2)}+\operatorname{sgn}(\tau)\,a_{1\tau(1)}a_{2\tau(2)}=a_{11}a_{22}-a_{12}a_{21}.$$
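Definition 278 is computationally cumbersome (n! terms), but it is easy to transcribe literally. A sketch (not from the text):

```python
# Hypothetical illustration: det A straight from the permutation-sum definition.
from itertools import combinations, permutations
from math import prod

def sgn(p):
    inv = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # -2 = a11*a22 - a12*a21
```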
281 Example From the above formula for 2 × 2 matrices it follows that
$$\det A=\det\begin{bmatrix}1&2\\ 3&4\end{bmatrix}=(1)(4)-(3)(2)=-2,$$
$$\det B=\det\begin{bmatrix}-1&2\\ 3&4\end{bmatrix}=(-1)(4)-(3)(2)=-10,$$
and
$$\det(A+B)=\det\begin{bmatrix}0&4\\ 6&8\end{bmatrix}=(0)(8)-(6)(4)=-24.$$
282 Example If n = 3, then S3 has 3! = 6 members: Id, the transpositions τ1 = (2 3), τ2 = (1 3), τ3 = (1 2), and the cycles σ1 = (1 2 3), σ2 = (1 3 2). Thus if
$$A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}$$
then
$$\det A=\operatorname{sgn}(\mathrm{Id})\,a_{1\mathrm{Id}(1)}a_{2\mathrm{Id}(2)}a_{3\mathrm{Id}(3)}+\operatorname{sgn}(\tau_1)\,a_{1\tau_1(1)}a_{2\tau_1(2)}a_{3\tau_1(3)}+\operatorname{sgn}(\tau_2)\,a_{1\tau_2(1)}a_{2\tau_2(2)}a_{3\tau_2(3)}$$
$$+\operatorname{sgn}(\tau_3)\,a_{1\tau_3(1)}a_{2\tau_3(2)}a_{3\tau_3(3)}+\operatorname{sgn}(\sigma_1)\,a_{1\sigma_1(1)}a_{2\sigma_1(2)}a_{3\sigma_1(3)}+\operatorname{sgn}(\sigma_2)\,a_{1\sigma_2(1)}a_{2\sigma_2(2)}a_{3\sigma_2(3)}$$
$$=a_{11}a_{22}a_{33}-a_{11}a_{23}a_{32}-a_{13}a_{22}a_{31}-a_{12}a_{21}a_{33}+a_{12}a_{23}a_{31}+a_{13}a_{21}a_{32}.$$
283 Theorem (Row-Alternancy of Determinants) Let A ∈ Mn×n(F), A = [aij]. If B ∈ Mn×n(F), B = [bij], is
the matrix obtained by interchanging the s-th row of A with its t-th row, then det B = −det A.
Proof: Let τ be the transposition exchanging s and t. Then (τσ)(a) = σ(a) for a ∈ {1, 2, …, n} \ {s, t}, (τσ)(s) = σ(t), (τσ)(t) = σ(s). Also, sgn(τσ) = sgn(τ)sgn(σ) = −sgn(σ). As σ
ranges through all permutations of Sn, so does τσ, hence
$$\det B=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,b_{1\sigma(1)}\cdots b_{s\sigma(s)}\cdots b_{t\sigma(t)}\cdots b_{n\sigma(n)}
=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{t\sigma(s)}\cdots a_{s\sigma(t)}\cdots a_{n\sigma(n)}$$
$$=-\sum_{\sigma\in S_n}\operatorname{sgn}(\tau\sigma)\,a_{1\tau\sigma(1)}\cdots a_{s\tau\sigma(s)}\cdots a_{t\tau\sigma(t)}\cdots a_{n\tau\sigma(n)}
=-\det A.\qquad\square$$
Let C = [cij] = AT, so that cij = aji. Then
$$\det A^{T}=\det C=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,c_{1\sigma(1)}c_{2\sigma(2)}\cdots c_{n\sigma(n)}
=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{\sigma(1)1}a_{\sigma(2)2}\cdots a_{\sigma(n)n}.$$
But the product aσ(1)1 aσ(2)2 ⋯ aσ(n)n also appears in det A with the same signum sgn(σ),
since the permutation
$$\begin{pmatrix}\sigma(1)&\sigma(2)&\dots&\sigma(n)\\ 1&2&\dots&n\end{pmatrix}$$
is σ⁻¹, and sgn(σ⁻¹) = sgn(σ).
286 Corollary (Column-Alternancy of Determinants) Let A Mnn (F), A = [aij ]. If C Mnn (F), C = [cij ]
is the matrix obtained by interchanging the s-th column of A with its t-th column, then det C = det A.
Proof: This follows upon combining Theorem 283 and Theorem 285.
287 Theorem (Row Homogeneity of Determinants) Let A Mnn (F), A = [aij ] and F. If B Mnn (F), B =
[bij ] is the matrix obtained by multiplying the s-th row of A by , then
det B = det A.
122 Captulo 6
288 Corollary (Column Homogeneity of Determinants) If C Mnn (F), C = (Cij ) is the matrix obtained
by multiplying the s-th column of A by , then
det C = det A.
Proof: This follows upon using Theorem 285 and Theorem 287.
It follows from Theorem 287 and Corollary 288 that if a row (or column) of a matrix consists
of 0F s only, then the determinant of this matrix is 0F .
289 Exemplo
x 1 a 1 1 a
det = x det
x2 1 b x 1 b .
x3 1 c x2 1 c
290 Corollary Let A ∈ Mn×n(F) and λ ∈ F. Then
$$\det(\lambda A)=\lambda^{n}\det A.$$
Proof: Since there are n columns, we are able to pull out one factor of λ from each one. ❑
291 Example Recall that a matrix A is skew-symmetric if A = −AT. Let A ∈ M2001(R) be skew-symmetric. Prove that det A = 0.
Solution: We have
$$\det A=\det A^{T}=\det(-A)=(-1)^{2001}\det A=-\det A,$$
whence 2 det A = 0, that is, det A = 0.
292 Lemma (Row-Linearity and Column-Linearity of Determinants) Let A ∈ Mn×n(F), A = [aij]. For a fixed row s, suppose that asj = bsj + csj for each j ∈ [1; n]. Then det A = det T + det U, where T and U agree with A in every row except the s-th, the s-th row of T being (bs1, …, bsn) and the s-th row of U being (cs1, …, csn). An analogous result holds for columns.
Proof: By the definition of the determinant,
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{(s-1)\sigma(s-1)}\bigl(b_{s\sigma(s)}+c_{s\sigma(s)}\bigr)a_{(s+1)\sigma(s+1)}\cdots a_{n\sigma(n)}=\det T+\det U.\qquad\square$$
293 Lemma If two rows or two columns of A ∈ Mn×n(F), A = [aij], are identical, then det A = 0F.
Proof: Suppose asj = atj for s ≠ t and for all j ∈ [1; n]. In particular, this means that for any
σ ∈ Sn we have asσ(t) = atσ(t) and atσ(s) = asσ(s). Let τ be the transposition exchanging s and t.
Then (στ)(a) = σ(a) for a ∈ {1, 2, …, n} \ {s, t}, (στ)(s) = σ(t), (στ)(t) = σ(s), and sgn(στ) = −sgn(σ). As
σ runs through all even permutations, στ runs through all odd permutations, and vice versa.
Therefore
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}\cdots a_{s\sigma(s)}\cdots a_{t\sigma(t)}\cdots a_{n\sigma(n)}$$
$$=\sum_{\substack{\sigma\in S_n\\ \operatorname{sgn}(\sigma)=1}}\Bigl(a_{1\sigma(1)}\cdots a_{s\sigma(s)}\cdots a_{t\sigma(t)}\cdots a_{n\sigma(n)}-a_{1\sigma(1)}\cdots a_{s\sigma(t)}\cdots a_{t\sigma(s)}\cdots a_{n\sigma(n)}\Bigr),$$
and since asσ(t) = atσ(t) and atσ(s) = asσ(s), the two products in each summand coincide, so
det A = 0F. ❑
294 Corollary If two rows or two columns of A ∈ Mn×n(F), A = [aij], are proportional, then det A = 0F.
Proof: Suppose asj = λatj for s ≠ t and for all j ∈ [1; n]. If B is the matrix obtained by pulling
out the factor λ from the s-th row, then det A = λ det B. But now the s-th and the t-th rows of B
are identical, and so det B = 0F. Arguing on AT yields the analogous result for the columns. ❑
295 Example
$$\det\begin{bmatrix}1&a&b\\ 1&a&c\\ 1&a&d\end{bmatrix}=a\det\begin{bmatrix}1&1&b\\ 1&1&c\\ 1&1&d\end{bmatrix}=0,$$
since in the last determinant the first two columns are identical.
296 Theorem (Multilinearity of Determinants) Let A ∈ Mn×n(F), A = [aij], and λ ∈ F. If X ∈ Mn×n(F), X = (xij), is the matrix obtained by the row transvection Rs + λRt → Rs, then det X = det A. Similarly,
if Y ∈ Mn×n(F), Y = (yij), is the matrix obtained by the column transvection Cs + λCt → Cs, then
det Y = det A.
Proof: For the row transvection it suffices to take bsj = asj, csj = λatj for j ∈ [1; n] in Lemma
292. With the same notation as in the lemma, T = A, and so det X = det A + det U.
But U has its s-th and t-th rows proportional (s ≠ t), and so by Corollary 294, det U = 0F.
Hence det X = det A. To obtain the result for column transvections it suffices now to also apply
Theorem 285. ❑
297 Example Prove that
$$\det\begin{bmatrix}2&9&9\\ 4&6&8\\ 7&4&1\end{bmatrix}$$
is divisible by 13.
Solution: Observe that 299, 468 and 741 are all divisible by 13. Thus
$$\det\begin{bmatrix}2&9&9\\ 4&6&8\\ 7&4&1\end{bmatrix}
\ \stackrel{C_3+10C_2+100C_1\to C_3}{=}\ \det\begin{bmatrix}2&9&299\\ 4&6&468\\ 7&4&741\end{bmatrix}
=13\det\begin{bmatrix}2&9&23\\ 4&6&36\\ 7&4&57\end{bmatrix},$$
and the last determinant is an integer.
298 Theorem The determinant of a triangular matrix (upper or lower) is the product of its diagonal
elements.
Proof: Let A ∈ Mn×n(F), A = [aij], be, say, upper triangular, so that aij = 0F for i > j. If σ ≠ Id,
there is some i with σ(i) < i, and then the factor aiσ(i) = 0F annihilates the product a1σ(1)a2σ(2)⋯anσ(n).
Thus
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}
=\operatorname{sgn}(\mathrm{Id})\,a_{1\mathrm{Id}(1)}a_{2\mathrm{Id}(2)}\cdots a_{n\mathrm{Id}(n)}=a_{11}a_{22}\cdots a_{nn}.$$
The argument for lower triangular matrices is analogous. ❑
In particular, det In = 1F.
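The collapse of the permutation sum for a triangular matrix can be observed numerically. A sketch (not from the text):

```python
# Hypothetical illustration: for a triangular matrix only sigma = Id survives
# in the permutation sum, so det equals the product of the diagonal.
from itertools import combinations, permutations
from math import prod

def det(A):
    def sgn(p):
        inv = sum(1 for i, j in combinations(range(len(p)), 2) if p[i] > p[j])
        return -1 if inv % 2 else 1
    n = len(A)
    return sum(sgn(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

U = [[2, 7, 1],
     [0, 3, 5],
     [0, 0, 4]]
print(det(U), 2 * 3 * 4)  # 24 24
```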
300 Example Find
$$\det\begin{bmatrix}1&2&3\\ 4&5&6\\ 7&8&9\end{bmatrix}.$$
Solution: We have
$$\det\begin{bmatrix}1&2&3\\ 4&5&6\\ 7&8&9\end{bmatrix}
\ \stackrel{\substack{C_2-2C_1\to C_2\\ C_3-3C_1\to C_3}}{=}\ \det\begin{bmatrix}1&0&0\\ 4&-3&-6\\ 7&-6&-12\end{bmatrix}
=(-3)(-6)\det\begin{bmatrix}1&0&0\\ 4&1&1\\ 7&2&2\end{bmatrix}=0,$$
since in this last matrix the second and third columns are identical and so Lemma 293 applies.
Proof: Put D = AB, D = (dij), dij = Σₖ aik bkj. If A(c:k), D(c:k), 1 ≤ k ≤ n, denote the
columns of A and D, respectively, observe that
$$D_{(c:k)}=\sum_{l=1}^{n}b_{lk}A_{(c:l)},\qquad 1\le k\le n.$$
Expanding by column-linearity (Lemma 292) in each column,
$$\det D=\sum_{j_1=1}^{n}\cdots\sum_{j_n=1}^{n}b_{j_1 1}b_{j_2 2}\cdots b_{j_n n}\,\det\bigl(A_{(c:j_1)},A_{(c:j_2)},\dots,A_{(c:j_n)}\bigr).$$
By Lemma 293, if any two of the A(c:jl) are identical, the determinant on the right vanishes. So
in the non-vanishing terms each jl is different, and the map
$$\sigma:\{1,2,\dots,n\}\to\{1,2,\dots,n\},\qquad l\mapsto j_l,$$
is a permutation. By repeated column exchanges (Corollary 286),
$$\det\bigl(A_{(c:\sigma(1))},\dots,A_{(c:\sigma(n))}\bigr)=\operatorname{sgn}(\sigma)\,\det A.$$
We deduce that
$$\det(AB)=\det D=(\det A)\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,b_{\sigma(1)1}b_{\sigma(2)2}\cdots b_{\sigma(n)n}=(\det A)(\det B^{T})=\det A\det B,$$
by Theorem 285, as we wanted to shew. ❑
By applying the preceding theorem multiple times we obtain
302 Corollary If A ∈ Mn×n(F) and k is a positive integer, then det Aᵏ = (det A)ᵏ.
303 Corollary If A ∈ GLn(F), then det A ≠ 0F and det(A⁻¹) = (det A)⁻¹.
Proof: We have AA⁻¹ = In, and so by Theorem 301, (det A)(det A⁻¹) = 1F, from where the
result follows. ❑
Homework
Problem 6.3.1 Let
$$\Delta=\det\begin{bmatrix}bc&ca&ab\\ a&b&c\\ a^{2}&b^{2}&c^{2}\end{bmatrix}.$$
Problema 6.3.3 After the indicated column operations on a 33 matrix A with det A = 540, matrices A1 , A2 , . . . , A5
are successively obtained:
C1 +3C2 C1 C2 C3 3C2 C1 C2 C1 3C2 C1 2C1 C1
A A1 A2 A3 A4 A5
Determine the numerical values of det A1 , det A2 , det A3 , det A4 and det A5 .
is divisible by 1722.
Problem 6.3.5 Let A, B, C be 3 × 3 matrices with det A = 3, det B³ = 8, det C = 2. Compute (i) det(ABC), (ii)
det(5AC), (iii) det(A³B³C⁻¹). Express your answers as fractions.
Problema 6.3.7 Prove or disprove! The set X = {A Mnn (F) : det A = 0F } is a vector subspace of Mnn (F).
Put
$$C_{ij}=\sum_{\substack{\sigma\in S_n\\ \sigma(i)=j}}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{(i-1)\sigma(i-1)}\,a_{(i+1)\sigma(i+1)}\cdots a_{n\sigma(n)}.$$
Then, for a fixed column j, grouping the permutations according to the row i with σ(i) = j,
$$\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}=\sum_{i=1}^{n}a_{ij}\,C_{ij},\tag{6.5}$$
and similarly, for a fixed row i, grouping the permutations according to the value j = σ(i),
$$\det A=\sum_{j=1}^{n}a_{ij}\,C_{ij}.$$
304 Definition Let A ∈ Mn×n(F), A = [aij]. The ij-th minor Aij ∈ M(n−1)×(n−1)(F) is the (n − 1) × (n − 1) matrix
obtained by deleting the i-th row and the j-th column from A.
305 Example If
$$A=\begin{bmatrix}1&2&3\\ 4&5&6\\ 7&8&9\end{bmatrix}$$
then, for instance,
$$A_{11}=\begin{bmatrix}5&6\\ 8&9\end{bmatrix},\qquad A_{23}=\begin{bmatrix}1&2\\ 7&8\end{bmatrix}.$$
Now,
$$C_{nn}=\sum_{\substack{\sigma\in S_n\\ \sigma(n)=n}}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{(n-1)\sigma(n-1)}
=\sum_{\sigma\in S_{n-1}}\operatorname{sgn}(\sigma)\,a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{(n-1)\sigma(n-1)}=\det A_{nn},$$
since the second sum shewn is the determinant of the submatrix obtained by deleting the last
row and last column from A.
To find Cij for general i, j we perform some row and column interchanges on A in order to bring aij
to the nn-th position. We thus bring the i-th row to the n-th row by a series of transpositions, first
swapping the i-th and the (i + 1)-th rows, then swapping the new (i + 1)-th row and the (i + 2)-th
row, and so forth until the original i-th row makes it to the n-th row. We have made thereby
n − i interchanges. To this new matrix we apply analogous interchanges to the j-th column,
thereby making n − j interchanges. We have made a total of 2n − i − j interchanges. Observe
that (−1)^{2n−i−j} = (−1)^{i+j}. Call the analogous quantities in the resulting matrix A′, C′nn, A′nn.
Then
$$C_{ij}=(-1)^{2n-i-j}\,C'_{nn}=(-1)^{i+j}\det A'_{nn}=(-1)^{i+j}\det A_{ij},$$
by virtue of Corollary 284.
Solution: Expanding along the first row,
$$\det A=1\,(-1)^{1+1}\det\begin{bmatrix}5&6\\ 8&9\end{bmatrix}+2\,(-1)^{1+2}\det\begin{bmatrix}4&6\\ 7&9\end{bmatrix}+3\,(-1)^{1+3}\det\begin{bmatrix}4&5\\ 7&8\end{bmatrix}$$
$$=1(45-48)-2(36-42)+3(32-35)=0.$$
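The Laplace expansion along the first row translates directly into a recursion. A sketch (not from the text):

```python
# Hypothetical illustration: recursive Laplace expansion,
# det A = sum_j (-1)^(1+j) a_{1j} det A_{1j}.

def det(A):
    if len(A) == 1:
        return A[0][0]
    total = 0
    for j in range(len(A)):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # delete row 1, col j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # 0, as computed above
```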
Solution:
$$\det\begin{bmatrix}1&1&1\\ a&b&c\\ a^{2}&b^{2}&c^{2}\end{bmatrix}
=\det\begin{bmatrix}1&0&0\\ a&b-a&c-a\\ a^{2}&b^{2}-a^{2}&c^{2}-a^{2}\end{bmatrix}
=\det\begin{bmatrix}b-a&c-a\\ b^{2}-a^{2}&c^{2}-a^{2}\end{bmatrix}$$
$$=(b-a)(c-a)\det\begin{bmatrix}1&1\\ b+a&c+a\end{bmatrix}=(b-a)(c-a)(c-b).$$
$$\det A=\det\begin{bmatrix}1&2&3&4&\dots&2000\\ 2&1&2&3&\dots&1999\\ 3&2&1&2&\dots&1998\\ 4&3&2&1&\dots&1997\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 2000&1999&1998&1997&\dots&1\end{bmatrix}.$$
Subtracting from each row the row below it (Ri − Ri+1 → Ri for 1 ≤ i ≤ 1999) gives
$$\det A=\det\begin{bmatrix}-1&1&1&1&\dots&1&1\\ -1&-1&1&1&\dots&1&1\\ -1&-1&-1&1&\dots&1&1\\ \vdots&\vdots&\vdots&\ddots&&\vdots&\vdots\\ -1&-1&-1&-1&\dots&-1&1\\ 2000&1999&1998&1997&\dots&2&1\end{bmatrix}.$$
310 Definition Let A ∈ Mn×n(F). The classical adjoint or adjugate of A is the n × n matrix adj(A) whose
entries are given by
$$[\operatorname{adj}(A)]_{ij}=(-1)^{i+j}\det A_{ji},$$
where Aji is the ji-th minor of A.
311 Theorem Let A ∈ Mn×n(F). Then A(adj(A)) = (adj(A))A = (det A)In.
Proof: We have
$$[A(\operatorname{adj}(A))]_{ij}=\sum_{k=1}^{n}a_{ik}[\operatorname{adj}(A)]_{kj}=\sum_{k=1}^{n}a_{ik}(-1)^{j+k}\det A_{jk}.$$
Now, this last sum is det A if i = j, by virtue of Theorem 306. If i ≠ j, it is the expansion along the j-th row of the determinant of a matrix whose j-th
row is identical to its i-th row, and this determinant is 0F by virtue of Lemma 293. Thus the
diagonal entries equal det A and the off-diagonal entries are 0F. This proves the theorem. ❑
The next corollary follows immediately.
312 Corollary Let A ∈ Mn×n(F). Then A is invertible if and only if det A ≠ 0F, in which case
$$A^{-1}=\frac{\operatorname{adj}(A)}{\det A}.$$
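Corollary 312 gives an explicit, if expensive, inversion formula. A sketch (not from the text):

```python
# Hypothetical illustration: adj(A) from cofactors, and A^{-1} = adj(A)/det A.
from fractions import Fraction

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] *
               det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def adj(A):
    n = len(A)
    def cof(i, j):
        minor = [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i]
        return (-1) ** (i + j) * det(minor)
    # [adj(A)]_{ij} = (-1)^{i+j} det A_{ji}: note the transpose of the cofactors.
    return [[cof(j, i) for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]
d = det(A)  # 1
inv = [[Fraction(x, d) for x in row] for row in adj(A)]
print([[int(x) for x in row] for row in inv])  # [[3, -1], [-5, 2]]
```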
Homework
Problema 6.4.1 Find
1 2 3
det
4 5 6
7 8 9
by expanding along the second column.
Problem 6.4.2 Prove that
$$\det\begin{bmatrix}a&b&c\\ c&a&b\\ b&c&a\end{bmatrix}=a^{3}+b^{3}+c^{3}-3abc.$$
This type of matrix is called a circulant matrix.
Problem 6.4.5 If
$$\det\begin{bmatrix}1&1&1&1\\ x&a&0&0\\ x&0&b&0\\ x&0&0&c\end{bmatrix}=0,$$
and xabc ≠ 0, prove that
$$\frac{1}{x}=\frac{1}{a}+\frac{1}{b}+\frac{1}{c}.$$
Compute AT A.
Use the above to prove that
det A = (a2 + b2 + c2 + d2 )2 .
Problem 6.4.11 Let n ∈ N, n > 1, be an odd integer. Recall that the binomial coefficients satisfy
$$\binom{n}{0}=\binom{n}{n}=1,$$
and that for 1 ≤ k ≤ n,
$$\binom{n}{k}=\binom{n-1}{k-1}+\binom{n-1}{k}.$$
Prove that
n n n
1 1 2
n1
1
n n n
1 1
1 n2 n1
n n
det n n n = (1 + (1) ) .
n1 1 1 n3 n2
n n n
1 2 3
1 1
Problem 6.4.12 Let A ∈ GLn(F), n > 1. Prove that det(adj(A)) = (det A)^{n−1}.
Problem 6.4.14 Let A ∈ GL2(F) and let k be a positive integer. Prove that
$$\det(\underbrace{\operatorname{adj}\,\operatorname{adj}\cdots\operatorname{adj}}_{k}(A))=\det A.$$
is known as a circulant matrix. Prove that its determinant is (a + b + c + d)(a − b + c − d)((a − c)² + (b − d)²).
• det A ≠ 0F.
• A is invertible.
The contrapositive form of these implications will be used later. Here it is for future reference.
314 Corollary Let A ∈ Mn×n(F). If there is X ≠ 0n×1 such that AX = 0n×1, then det A = 0F.
Homework
Problem 6.5.1 For which a is the matrix
$$\begin{bmatrix}1&1&1\\ 1&a&1\\ 1&1&a\end{bmatrix}$$
singular (non-invertible)?
Chapter 7
Eigenvalues and Eigenvectors
Since similarity is an equivalence relation, it partitions the set of n n matrices into equivalence classes
by Theorem 29.
Suppose that
$$A=\begin{bmatrix}\lambda_{1}&0&\dots&0\\ 0&\lambda_{2}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\lambda_{n}\end{bmatrix}$$
is a diagonal matrix,
so we have a simpler way of computing BK . Our task will now be to establish when a particular square
matrix is diagonalisable.
318 Definition Let V be a finite-dimensional vector space over a field F and let T : V → V be a linear
transformation. A scalar λ ∈ F is called an eigenvalue of T if there is a vector v ≠ 0 (called an eigenvector)
such that T(v) = λv.
320 Theorem Let A ∈ Mn×n(F) be the matrix representation of T : V → V. Then λ ∈ F is an eigenvalue
of T if and only if det(λIn − A) = 0F.
Proof: λ is an eigenvalue of A ⟺ there is v ≠ 0 such that Av = λv
⟺ λv − Av = 0 ⟺ (λIn − A)v = 0 ⟺ det(λIn − A) = 0F, by Corollary 314. ❑
322 Example Let
$$A=\begin{bmatrix}1&1&0&0\\ 1&1&0&0\\ 0&0&1&1\\ 0&0&1&1\end{bmatrix}.$$
Find the eigenvalues of A.
Solution: We have
$$\det(\lambda I_{4}-A)=\det\begin{bmatrix}\lambda-1&-1&0&0\\ -1&\lambda-1&0&0\\ 0&0&\lambda-1&-1\\ 0&0&-1&\lambda-1\end{bmatrix}
=\bigl((\lambda-1)^{2}-1\bigr)^{2}=(\lambda-2)^{2}(\lambda)^{2},$$
so the eigenvalues are 0 and 2.
If λ = 0, then
$$0I_{4}-A=\begin{bmatrix}-1&-1&0&0\\ -1&-1&0&0\\ 0&0&-1&-1\\ 0&0&-1&-1\end{bmatrix},$$
which row-reduces to
$$\begin{bmatrix}1&1&0&0\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix},$$
and if
$$\begin{bmatrix}1&1&0&0\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix}\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},$$
then b = −a and d = −c. Thus the general solution of the system (0I4 − A)X = 0 is
$$\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=a\begin{bmatrix}1\\ -1\\ 0\\ 0\end{bmatrix}+c\begin{bmatrix}0\\ 0\\ 1\\ -1\end{bmatrix}.$$
If λ = 2, then
$$2I_{4}-A=\begin{bmatrix}1&-1&0&0\\ -1&1&0&0\\ 0&0&1&-1\\ 0&0&-1&1\end{bmatrix},$$
which row-reduces to
$$\begin{bmatrix}1&-1&0&0\\ 0&0&1&-1\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix},$$
and if
$$\begin{bmatrix}1&-1&0&0\\ 0&0&1&-1\\ 0&0&0&0\\ 0&0&0&0\end{bmatrix}\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ 0\end{bmatrix},$$
then b = a and d = c. Thus the general solution of the system (2I4 − A)X = 0 is
$$\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=a\begin{bmatrix}1\\ 1\\ 0\\ 0\end{bmatrix}+c\begin{bmatrix}0\\ 0\\ 1\\ 1\end{bmatrix}.$$
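The eigenvector computation of Example 322 is easy to verify numerically. A sketch (not from the text):

```python
# Hypothetical check: for the 4x4 matrix A of Example 322, the stated
# eigenvectors satisfy A v = lambda v with lambda in {0, 2}.
A = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

pairs = [
    (0, [1, -1, 0, 0]), (0, [0, 0, 1, -1]),  # eigenvalue 0
    (2, [1, 1, 0, 0]),  (2, [0, 0, 1, 1]),   # eigenvalue 2
]
print(all(matvec(A, v) == [lam * x for x in v] for lam, v in pairs))  # True
```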
Proof: Put p(λ) = det(λIn − A). Then p(0F) = det(−A) = (−1)ⁿ det A, so the constant term of
the characteristic polynomial is (−1)ⁿ det A. If λ = 0F is an eigenvalue, then
p(0F) = 0F, and hence det A = 0F.
Proof: We have
Homework
Problem 7.2.1 Find the eigenvalues and eigenvectors of
$$A=\begin{bmatrix}0&2&1\\ 2&3&2\\ 1&2&0\end{bmatrix}.$$
Problem 7.2.2 Let A be a 2 × 2 matrix over some field F. Prove that the characteristic polynomial of A is
λ² − (tr(A))λ + det A.
Problem 7.2.5 Let
$$A=\begin{bmatrix}1&1\\ 1&1\end{bmatrix}.$$
Find:
1. The characteristic polynomial of A.
2. The eigenvalues of A.
3. The corresponding eigenvectors.
7.3 Diagonalisability
In this section we find conditions for diagonalisability.
Since λi ≠ 0F, (7.3) is saying that the eigenvectors v1, v2, …, v(k−1) are linearly dependent,
a contradiction. Thus the claim follows for k distinct eigenvalues and the result is proven by
induction. ❑
326 Theorem A matrix A Mnn (F) is diagonalisable if and only if it possesses n linearly independent
eigenvectors.
Proof: Assume first that A is diagonalisable, so there exist P ∈ GLn(F) and a diagonal matrix
$$D=\begin{bmatrix}\lambda_{1}&0&\dots&0\\ 0&\lambda_{2}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\lambda_{n}\end{bmatrix}$$
such that P⁻¹AP = D. Then
$$[AP_{1};AP_{2};\dots;AP_{n}]=AP=PD=[\lambda_{1}P_{1};\lambda_{2}P_{2};\dots;\lambda_{n}P_{n}],$$
where the Pk are the columns of P. Thus APk = λkPk, that is, the columns of P are eigenvectors of A. Since P is invertible, the Pk are linearly independent by
virtue of Theorems 204 and 313.
Conversely, suppose now that v1, v2, …, vn are n linearly independent eigenvectors of A, with Avi = λivi, and let P be the matrix whose i-th column is vi.
Since Avi = λivi we see that AP = PD. Again, P is invertible by Theorems 204 and 313 since
the vk are linearly independent. Left multiplying by P⁻¹ we deduce P⁻¹AP = D, from where A
is diagonalisable. ❑
Homework
Problem 7.3.1 Let A be a 2 × 2 matrix with eigenvalues 1 and 2 and corresponding eigenvectors
$$\begin{bmatrix}1\\ 1\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}0\\ 1\end{bmatrix},$$
respectively. Determine A¹⁰.
Problem 7.3.2 Consider the matrix
$$A=\begin{bmatrix}9&4\\ 20&9\end{bmatrix}.$$
1. Find det A.
2. Find A⁻¹.
3. Find rank(A − I4).
4. Find det(A − I4).
5. Find the characteristic polynomial of A.
6. Find the eigenvalues of A.
7. Find the eigenvectors of A.
8. Find A¹⁰.
Problem 7.3.5 Consider the matrix
where
$$P=\begin{bmatrix}1&1&0&\dots&0&0\\ 1&0&1&\dots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 1&0&0&\dots&1&0\\ 1&0&0&\dots&0&1\\ 1&1&1&\dots&1&1\end{bmatrix}.$$
Write
$$\det B=\det(\lambda I_{n}-A)=\lambda^{n}+b_{1}\lambda^{n-1}+b_{2}\lambda^{n-2}+\dots+b_{n}.$$
Since adj(B) is a matrix obtained by using (n − 1) × (n − 1) determinants from B, we may write
$$\operatorname{adj}(\lambda I_{n}-A)=B_{n-1}\lambda^{n-1}+B_{n-2}\lambda^{n-2}+\dots+B_{1}\lambda+B_{0}$$
for some constant matrices Bk. Hence
$$(\lambda I_{n}-A)\operatorname{adj}(\lambda I_{n}-A)=\det(\lambda I_{n}-A)\,I_{n},$$
from where, comparing coefficients of like powers of λ,
$$\begin{aligned}I_{n}&=B_{n-1}\\ b_{1}I_{n}&=-AB_{n-1}+B_{n-2}\\ b_{2}I_{n}&=-AB_{n-2}+B_{n-3}\\ &\ \,\vdots\\ b_{n-1}I_{n}&=-AB_{1}+B_{0}\\ b_{n}I_{n}&=-AB_{0}.\end{aligned}$$
Multiply now the k-th row by A^{n−k} (the first row appearing is really the 0-th row):
$$\begin{aligned}A^{n}&=A^{n}B_{n-1}\\ b_{1}A^{n-1}&=-A^{n}B_{n-1}+A^{n-1}B_{n-2}\\ &\ \,\vdots\\ b_{n-1}A&=-A^{2}B_{1}+AB_{0}\\ b_{n}I_{n}&=-AB_{0}.\end{aligned}$$
Summing, the right-hand side telescopes to 0n, whence
$$A^{n}+b_{1}A^{n-1}+\dots+b_{n-1}A+b_{n}I_{n}=0_{n}.$$
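The Cayley–Hamilton theorem is easy to check numerically in the 2 × 2 case, where the characteristic polynomial is λ² − tr(A)λ + det A. A sketch (not from the text):

```python
# Hypothetical check: substituting A into its own characteristic polynomial
# yields the zero matrix, A^2 - tr(A) A + (det A) I = 0.
A = [[1, 2], [3, 4]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

tr = A[0][0] + A[1][1]                     # 5
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # -2
A2 = matmul(A, A)
result = [[A2[i][j] - tr * A[i][j] + d * (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
print(result)  # [[0, 0], [0, 0]]
```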
Homework
Problem 7.4.1 A 3 × 3 matrix A has characteristic polynomial λ(λ − 1)(λ + 2). What is the characteristic polynomial
of A²?
Chapter 8
Linear Algebra and Geometry
Given A = (a1, a2) ∈ R² and B = (b1, b2) ∈ R², we define their addition as
$$A+B=\begin{bmatrix}a_{1}\\ a_{2}\end{bmatrix}+\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}=\begin{bmatrix}a_{1}+b_{1}\\ a_{2}+b_{2}\end{bmatrix}.\tag{8.1}$$
Points and Bi-points in R2 149
Throughout this chapter, unless otherwise noted, we will use the convention that a point
A ∈ R² will have its coordinates named after its letter; thus
$$A=\begin{bmatrix}a_{1}\\ a_{2}\end{bmatrix}.$$
330 Definition Consider the points A ∈ R², B ∈ R². By the bi-point starting at A and ending at B, denoted
by [A, B], we mean the directed line segment from A to B. We define
$$[A,A]=O=\begin{bmatrix}0\\ 0\end{bmatrix}.$$
The bi-point [A, B] can be thus interpreted as an arrow starting at A and finishing, with
the arrow tip, at B. We say that A is the tail of the bi-point [A, B] and that B is its head. Some
authors use the terminology fixed vector instead of bi-point.
331 Definition Let A ≠ B be points on the plane and let L be the line passing through A and B. The
direction of the bi-point [A, B] is the direction of the line L, that is, the angle θ ∈ (−π/2; π/2] that the line L
makes with the horizontal. See figure 8.2.
332 Definio Let A, B lie on line L, and let C, D lie on line L . If L||L then we say that [A, B] has the
same direction as [C, D]. We say that the bi-points [A, B] and [C, D] have the same sense if they have
the same direction and if both their heads lie on the same half-plane made by the line joining their
tails. They have opposite sense if they have the same direction and if both their heads lie on alternative
half-planes made by the line joining their tails. See figures 8.3 and 8.4 .
333 Definition Let A ≠ B. The Euclidean length or norm of the bi-point [A, B] is simply the distance between
A and B, and it is denoted by
$$\|[A,B]\|=\sqrt{(a_{1}-b_{1})^{2}+(a_{2}-b_{2})^{2}}.$$
We define
$$\|[A,A]\|=\|O\|=0.$$
A bi-point is said to have unit length if it has norm 1.
A bi-point is completely determined by three things: (i) its norm, (ii) its direction, and (iii) its
sense.
334 Definition (Chasles' Rule) Two bi-points are said to be contiguous if one has as tail the head of the
other. In such a case we define the sum of the contiguous bi-points [A, B] and [B, C] by Chasles' Rule:
$$[A,B]+[B,C]=[A,C].$$
335 Definition Let λ ∈ R and let A ≠ B. We define
$$0[A,B]=O\qquad\text{and}\qquad \lambda[A,A]=O.$$
For λ ≠ 0 we define λ[A, B] as follows.
1. λ[A, B] has the direction of [A, B].
2. λ[A, B] has the sense of [A, B] if λ > 0 and sense opposite [A, B] if λ < 0.
3. λ[A, B] has norm |λ|‖[A, B]‖, which is a contraction of [A, B] if 0 < |λ| < 1 or a dilatation of [A, B] if
|λ| > 1.
8.2 Vectors in R2
336 Definition (Midpoint) Let A, B be points in R². We define the midpoint of the bi-point [A, B] as
$$\frac{A+B}{2}=\begin{bmatrix}\frac{a_{1}+b_{1}}{2}\\[3pt] \frac{a_{2}+b_{2}}{2}\end{bmatrix}.$$
337 Definition (Equipollence) Two bi-points [X, Y] and [A, B] are said to be equipollent, written [X, Y] ∼
[A, B], if the midpoints of the bi-points [X, B] and [Y, A] coincide, that is,
$$[X,Y]\sim[A,B]\iff\frac{X+B}{2}=\frac{Y+A}{2}.$$
See figure 8.7.
Geometrically, equipollence means that the quadrilateral XYBA is a parallelogram. Thus the bi-points
[X, Y] and [A, B] have the same norm, sense, and direction.
Figura 8.7: Equipollent bi-points.
338 Lemma Two bi-points [X, Y] and [A, B] are equipollent if and only if
$$\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix}=\begin{bmatrix}b_{1}-a_{1}\\ b_{2}-a_{2}\end{bmatrix},$$
as desired.
From Lemma 338, equipollent bi-points have the same norm, the same direction, and the
same sense.
Proof: Write [X, Y] ∼ [A, B] if [X, Y] is equipollent to [A, B]. Now [X, Y] ∼ [X, Y], since
$$\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix}=\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix},$$
and so the relation is reflexive. Also
$$[X,Y]\sim[A,B]\iff\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix}=\begin{bmatrix}b_{1}-a_{1}\\ b_{2}-a_{2}\end{bmatrix}
\iff\begin{bmatrix}b_{1}-a_{1}\\ b_{2}-a_{2}\end{bmatrix}=\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix}\iff[A,B]\sim[X,Y],$$
and so the relation is symmetric.
340 Definition (Vectors on the Plane) The equivalence class in which the bi-point [X, Y] falls is called the
vector (or free vector) from X to Y, and is denoted by $\overrightarrow{XY}$. Thus we write
$$[X,Y]\in\overrightarrow{XY}=\begin{bmatrix}y_{1}-x_{1}\\ y_{2}-x_{2}\end{bmatrix}.$$
If we desire to talk about a vector without mentioning a bi-point representative, we write, say, v, thus
denoting vectors with boldface lowercase letters. If it is necessary to mention the coordinates of v we
will write
$$\mathbf{v}=\begin{bmatrix}v_{1}\\ v_{2}\end{bmatrix}.$$
For any point X on the plane, we have $\overrightarrow{XX}=\mathbf{0}$, the zero vector. If [X, Y] ∈ v, then [Y, X] ∈ −v.
341 Definition (Position Vector) For any particular point P = (p1, p2) ∈ R² we may form the vector
$$\overrightarrow{OP}=\begin{bmatrix}p_{1}\\ p_{2}\end{bmatrix}.$$
We call $\overrightarrow{OP}$ the position vector of P, and we use boldface lowercase letters to denote the equality
$\overrightarrow{OP}=\mathbf{p}$.
342 Example The vector into which the bi-point with tail at A = (−1, 2) and head at B = (3, 4) falls is
$$\overrightarrow{AB}=\begin{bmatrix}3-(-1)\\ 4-2\end{bmatrix}=\begin{bmatrix}4\\ 2\end{bmatrix}.$$
343 Example Find $\overrightarrow{XY}$ for
$$X=\begin{bmatrix}3\\ 7\end{bmatrix},\qquad Y=\begin{bmatrix}7\\ 9\end{bmatrix}.$$
Given two vectors u, v we define their sum u + v as follows. Find a bi-point representative [A, B] ∈ u
and a contiguous bi-point representative [B, C] ∈ v. Then by Chasles' Rule
$$\mathbf{u}+\mathbf{v}=\overrightarrow{AB}+\overrightarrow{BC}=\overrightarrow{AC}.$$
Similarly, we define scalar multiplication of a vector by scaling one of its bi-point representatives. We
define the norm of a vector v ∈ R² to be the norm of any of its bi-point representatives.
Componentwise we may see that, given vectors u = (u1, u2), v = (v1, v2) and a scalar λ ∈ R, their
sum and scalar multiplication take the form
$$\mathbf{u}+\mathbf{v}=\begin{bmatrix}u_{1}+v_{1}\\ u_{2}+v_{2}\end{bmatrix},\qquad \lambda\mathbf{u}=\begin{bmatrix}\lambda u_{1}\\ \lambda u_{2}\end{bmatrix}.$$
344 Example Diagonals are drawn in a rectangle ABCD. If $\overrightarrow{AB}=\mathbf{x}$ and $\overrightarrow{AC}=\mathbf{y}$, then $\overrightarrow{BC}=\mathbf{y}-\mathbf{x}$,
$\overrightarrow{CD}=-\mathbf{x}$, $\overrightarrow{DA}=\mathbf{x}-\mathbf{y}$, and $\overrightarrow{BD}=\mathbf{y}-2\mathbf{x}$.
346 Definition If u = (u1, u2), then we define its norm as
$$\|\mathbf{u}\|=\sqrt{u_{1}^{2}+u_{2}^{2}}.$$
The distance between two vectors u and v is
$$d\langle\mathbf{u},\mathbf{v}\rangle=\|\mathbf{u}-\mathbf{v}\|.$$
347 Example Let a ∈ R, a > 0, and let v ≠ 0. Find a vector with norm a and parallel to v.
Solution: Observe that v/‖v‖ has norm 1, as
$$\left\|\frac{\mathbf{v}}{\|\mathbf{v}\|}\right\|=\frac{\|\mathbf{v}\|}{\|\mathbf{v}\|}=1.$$
Hence the vector a·v/‖v‖ has norm a and it is in the direction of v. One may also take −a·v/‖v‖.
348 Example If M is the midpoint of the bi-point [X, Y], then $\overrightarrow{XM}=\overrightarrow{MY}$, from where $\overrightarrow{XM}=\tfrac{1}{2}\overrightarrow{XY}$. Moreover,
if T is any point, by Chasles' Rule
$$\overrightarrow{TX}+\overrightarrow{TY}=\overrightarrow{TM}+\overrightarrow{MX}+\overrightarrow{TM}+\overrightarrow{MY}=2\overrightarrow{TM}-\overrightarrow{XM}+\overrightarrow{MY}=2\overrightarrow{TM}.$$
349 Example Let △ABC be a triangle on the plane. Prove that the line joining the midpoints of two sides
of the triangle is parallel to the third side and measures half its length.
Solution: Let the midpoints of [A, B] and [A, C] be MC and MB, respectively. We shew that
$\overrightarrow{BC}=2\overrightarrow{M_{C}M_{B}}$. We have $2\overrightarrow{AM_{C}}=\overrightarrow{AB}$ and $2\overrightarrow{AM_{B}}=\overrightarrow{AC}$. Thus
$$\overrightarrow{BC}=\overrightarrow{BA}+\overrightarrow{AC}=-\overrightarrow{AB}+\overrightarrow{AC}=-2\overrightarrow{AM_{C}}+2\overrightarrow{AM_{B}}=2(\overrightarrow{M_{C}A}+\overrightarrow{AM_{B}})=2\overrightarrow{M_{C}M_{B}},$$
as we wanted to shew.
350 Example In △ABC, let MC be the midpoint of side AB. Shew that
$$\overrightarrow{CM_{C}}=\frac{1}{2}\bigl(\overrightarrow{CA}+\overrightarrow{CB}\bigr).$$
Solution: Since $\overrightarrow{AM_{C}}=\overrightarrow{M_{C}B}$, we have
$$\overrightarrow{CA}+\overrightarrow{CB}=\overrightarrow{CM_{C}}+\overrightarrow{M_{C}A}+\overrightarrow{CM_{C}}+\overrightarrow{M_{C}B}=2\overrightarrow{CM_{C}}-\overrightarrow{AM_{C}}+\overrightarrow{M_{C}B}=2\overrightarrow{CM_{C}},$$
whence the result.
351 Theorem (Section Formula) Let APB be a straight line and let λ and μ be real numbers such that
$$\frac{\|[A,P]\|}{\|[P,B]\|}=\frac{\lambda}{\mu}.$$
With $\mathbf{a}=\overrightarrow{OA}$, $\mathbf{b}=\overrightarrow{OB}$, and $\mathbf{p}=\overrightarrow{OP}$, then
$$\mathbf{p}=\frac{\lambda\mathbf{b}+\mu\mathbf{a}}{\lambda+\mu}.\tag{8.4}$$
Proof: Since [A, P] and [A, B] are parallel,
$$[A,P]=\frac{\lambda}{\lambda+\mu}[A,B]\implies\overrightarrow{AP}=\frac{\lambda}{\lambda+\mu}\overrightarrow{AB}\implies\mathbf{p}-\mathbf{a}=\frac{\lambda}{\lambda+\mu}(\mathbf{b}-\mathbf{a}).$$
On combining these formulæ,
$$(\lambda+\mu)(\mathbf{p}-\mathbf{a})=\lambda(\mathbf{b}-\mathbf{a})\implies(\lambda+\mu)\mathbf{p}=\lambda\mathbf{b}+\mu\mathbf{a},$$
whence the result. ❑
Figure 8.10: [A], Problem 8.2.6. Figure 8.11: [B], Problem 8.2.6. Figure 8.12: [C], Problem 8.2.6.
Figure 8.13: [D], Problem 8.2.6. Figure 8.14: [E], Problem 8.2.6. Figure 8.15: [F], Problem 8.2.6.
Homework
Problem 8.2.1 Let a be a real number. Find the distance between
$$\begin{bmatrix}1\\ a\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}a\\ 1\end{bmatrix}.$$
Problema 8.2.3 Given a pentagon ABCDE, find AB +
BC + CD + DE + EA.
Problem 8.2.4 For which values of a will the vectors
$$\mathbf{a}=\begin{bmatrix}a+1\\ a^{2}-1\end{bmatrix},\qquad \mathbf{b}=\begin{bmatrix}2a+5\\ a^{2}-4a+3\end{bmatrix}$$
be parallel?
Problem 8.2.9 ABCD is a parallelogram. E is the midpoint of [B, C] and F is the midpoint of [D, C]. Prove that
$\overrightarrow{AC}+\overrightarrow{BD}=2\overrightarrow{BC}$.
Problem 8.2.5 In △ABC let the midpoints of [A, B] and [A, C] be MC and MB, respectively. Put $\overrightarrow{M_{C}B}=\mathbf{x}$,
$\overrightarrow{M_{B}C}=\mathbf{y}$, and $\overrightarrow{CA}=\mathbf{z}$. Express [A] $\overrightarrow{AB}+\overrightarrow{BC}+\overrightarrow{M_{C}M_{B}}$,
[B] $\overrightarrow{AM_{C}}+\overrightarrow{M_{C}M_{B}}+\overrightarrow{M_{B}C}$, [C] $\overrightarrow{AC}+\overrightarrow{M_{C}A}-\overrightarrow{BM_{B}}$ in
terms of x, y, and z.
Problem 8.2.6 A circle is divided into three, four, or six equal parts (figures 8.10 through 8.15). Find the
sum of the vectors. Assume that the divisions start or stop at the centre of the circle, as suggested in the figures.
Problem 8.2.10 Let A, B be two points on the plane. Construct two points I and J such that
$$\overrightarrow{IA}=-3\overrightarrow{IB},\qquad \overrightarrow{JA}=-\frac{1}{3}\overrightarrow{JB},$$
and then demonstrate that for any arbitrary point M on the plane
$$\overrightarrow{MA}+3\overrightarrow{MB}=4\overrightarrow{MI}$$
and
$$3\overrightarrow{MA}+\overrightarrow{MB}=4\overrightarrow{MJ}.$$
Problem 8.2.7 Diagonals are drawn in a square. Find the vectorial sum a + b + c. Assume that the diagonals
either start, stop, or pass through the centre of the square, as suggested by the figures.
Problem 8.2.8 Prove that the midpoints of the sides of a skew quadrilateral form the vertices of a parallelogram.
Problem 8.2.11 You find an ancient treasure map in your great-grandfather's sea-chest. The sketch indicates
that from the gallows you should walk to the oak tree, turn right 90°, and walk a like distance, putting an X at
the point where you stop; then go back to the gallows, walk to the pine tree, turn left 90°, walk the same distance,
and mark point Y. Then you will find the treasure at the midpoint of the segment XY. So you charter a sailing
vessel and go to the remote south-seas island. On arrival, you readily locate the oak and pine trees, but,
unfortunately, the gallows was struck by lightning, burned to dust and dispersed to the winds. No trace of it
remains. What do you do?
8.3 Dot Product in R²
The dot product of the vectors a = (a1, a2) and b = (b1, b2) is defined and denoted by
$$\mathbf{a}\cdot\mathbf{b}=\begin{bmatrix}a_{1}\\ a_{2}\end{bmatrix}\cdot\begin{bmatrix}b_{1}\\ b_{2}\end{bmatrix}=a_{1}b_{1}+a_{2}b_{2}.$$
The following properties of the dot product are easy to deduce from the definition.
DP1 Bilinearity
$$(\mathbf{x}+\mathbf{y})\cdot\mathbf{z}=\mathbf{x}\cdot\mathbf{z}+\mathbf{y}\cdot\mathbf{z},\qquad \mathbf{x}\cdot(\mathbf{y}+\mathbf{z})=\mathbf{x}\cdot\mathbf{y}+\mathbf{x}\cdot\mathbf{z}\tag{8.5}$$
DP2 Scalar Homogeneity
$$(\lambda\mathbf{x})\cdot\mathbf{y}=\mathbf{x}\cdot(\lambda\mathbf{y})=\lambda(\mathbf{x}\cdot\mathbf{y})\tag{8.6}$$
DP3 Commutativity
$$\mathbf{x}\cdot\mathbf{y}=\mathbf{y}\cdot\mathbf{x}\tag{8.7}$$
DP4
$$\mathbf{x}\cdot\mathbf{x}\ge 0\tag{8.8}$$
DP5
$$\mathbf{x}\cdot\mathbf{x}=0\iff\mathbf{x}=\mathbf{0}\tag{8.9}$$
DP6
$$\|\mathbf{x}\|=\sqrt{\mathbf{x}\cdot\mathbf{x}}\tag{8.10}$$
The vectors
$$\mathbf{i}=\begin{bmatrix}1\\ 0\end{bmatrix},\qquad \mathbf{j}=\begin{bmatrix}0\\ 1\end{bmatrix}$$
satisfy i · j = 0 and ‖i‖ = ‖j‖ = 1, and we can write any vector a = (a1, a2) as a sum
$$\mathbf{a}=a_{1}\mathbf{i}+a_{2}\mathbf{j}.$$
\
354 Definition Given vectors a and b, we define the angle between them, denoted by $\widehat{(\mathbf{a},\mathbf{b})}$, as the
angle between any two contiguous bi-point representatives of a and b.
355 Theorem
$$\mathbf{a}\cdot\mathbf{b}=\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})}.$$
Proof: Using Al-Kashi's Law of Cosines on the lengths of the vectors, we have
$$\|\mathbf{b}-\mathbf{a}\|^{2}=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}-2\|\mathbf{a}\|\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})}$$
$$\iff(\mathbf{b}-\mathbf{a})\cdot(\mathbf{b}-\mathbf{a})=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}-2\|\mathbf{a}\|\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})}$$
$$\iff\mathbf{b}\cdot\mathbf{b}-2\,\mathbf{a}\cdot\mathbf{b}+\mathbf{a}\cdot\mathbf{a}=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}-2\|\mathbf{a}\|\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})}$$
$$\iff\|\mathbf{b}\|^{2}-2\,\mathbf{a}\cdot\mathbf{b}+\|\mathbf{a}\|^{2}=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}-2\|\mathbf{a}\|\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})}$$
$$\iff\mathbf{a}\cdot\mathbf{b}=\|\mathbf{a}\|\,\|\mathbf{b}\|\cos\widehat{(\mathbf{a},\mathbf{b})},$$
as we wanted to shew. ❑
Putting $\widehat{(\mathbf{a},\mathbf{b})}=\frac{\pi}{2}$ in Theorem 355 we obtain the following corollary.
356 Corollary Two vectors in R² are perpendicular if and only if their dot product is 0.
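Theorem 355 and Corollary 356 are easy to check in coordinates. A sketch (not from the text):

```python
# Hypothetical illustration: a.b = ||a|| ||b|| cos(theta), and the
# perpendicularity test a.b = 0.
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def norm(a):
    return math.hypot(a[0], a[1])

a, b = (1.0, 0.0), (1.0, 1.0)
theta = math.acos(dot(a, b) / (norm(a) * norm(b)))
print(round(math.degrees(theta)))  # 45
print(dot((1, 2), (2, -1)) == 0)   # True: the two vectors are perpendicular
```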
357 Definition Two vectors are said to be orthogonal if they are perpendicular. If a is orthogonal to b,
we write a ⊥ b.
358 Definition If a ⊥ b and ‖a‖ = ‖b‖ = 1, we say that a and b are orthonormal.
It follows that the vector 0 is simultaneously parallel and perpendicular to any vector!
359 Definition Let a ∈ R² be fixed. Then the orthogonal space to a is defined and denoted by
$$\mathbf{a}^{\perp}=\{\mathbf{x}\in\mathbb{R}^{2}:\mathbf{x}\perp\mathbf{a}\}.$$
Proof: By the Cauchy–Bunyakovsky–Schwarz inequality, a · b ≤ ‖a‖‖b‖. Hence
$$\|\mathbf{a}+\mathbf{b}\|^{2}=(\mathbf{a}+\mathbf{b})\cdot(\mathbf{a}+\mathbf{b})=\mathbf{a}\cdot\mathbf{a}+2\,\mathbf{a}\cdot\mathbf{b}+\mathbf{b}\cdot\mathbf{b}\le\|\mathbf{a}\|^{2}+2\|\mathbf{a}\|\|\mathbf{b}\|+\|\mathbf{b}\|^{2}=(\|\mathbf{a}\|+\|\mathbf{b}\|)^{2},$$
whence the result follows by taking square roots. ❑
362 Corollary (Pythagorean Theorem) If a ⊥ b then
$$\|\mathbf{a}+\mathbf{b}\|^{2}=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}.$$
Proof: Since a · b = 0, we have
$$\|\mathbf{a}+\mathbf{b}\|^{2}=(\mathbf{a}+\mathbf{b})\cdot(\mathbf{a}+\mathbf{b})=\mathbf{a}\cdot\mathbf{a}+2\,\mathbf{a}\cdot\mathbf{b}+\mathbf{b}\cdot\mathbf{b}=\mathbf{a}\cdot\mathbf{a}+0+\mathbf{b}\cdot\mathbf{b}=\|\mathbf{a}\|^{2}+\|\mathbf{b}\|^{2}.\qquad\square$$
363 Definition The projection of t onto v (or the v-component of t) is the vector
$$\operatorname{proj}_{\mathbf{v}}\mathbf{t}=\bigl(\cos\widehat{(\mathbf{t},\mathbf{v})}\bigr)\,\|\mathbf{t}\|\,\frac{\mathbf{v}}{\|\mathbf{v}\|},$$
where $\widehat{(\mathbf{v},\mathbf{t})}\in[0;\pi]$ is the convex angle between v and t read in the positive sense.
364 Corollary Let a ≠ 0. Then
$$\operatorname{proj}_{\mathbf{a}}\mathbf{x}=\bigl(\cos\widehat{(\mathbf{x},\mathbf{a})}\bigr)\,\|\mathbf{x}\|\,\frac{\mathbf{a}}{\|\mathbf{a}\|}=\frac{\mathbf{x}\cdot\mathbf{a}}{\|\mathbf{a}\|^{2}}\,\mathbf{a}.$$
365 Theorem Let a ∈ R² \ {0}. Then any x ∈ R² can be decomposed as
$$\mathbf{x}=\mathbf{u}+\mathbf{v},$$
where u ∈ Ra and v ∈ a⊥.
366 Corollary Let v · w = 0 with v, w non-zero vectors in R². Then any vector a ∈ R² has a unique representation
as a linear combination of v, w:
$$\mathbf{a}=s\mathbf{v}+t\mathbf{w},\qquad(s,t)\in\mathbb{R}^{2}.$$
Proof: By Theorem 365, a = sv + v′,
where v′ is orthogonal to v. But then v′ ∥ w, and hence there exists λ ∈ R with v′ = λw. Taking
t = λ, we achieve the decomposition a = sv + tw. ❑
367 Corollary Let p, q be non-zero, non-parallel vectors in R². Then any vector a ∈ R² has a unique
representation as a linear combination of p, q:
$$\mathbf{a}=l\mathbf{p}+m\mathbf{q},\qquad(l,m)\in\mathbb{R}^{2}.$$
Proof: Consider $\mathbf{z}=\mathbf{q}-\operatorname{proj}_{\mathbf{p}}\mathbf{q}$. Clearly p · z = 0, and so by Corollary 366 there exists a unique
(s, t) ∈ R² such that
$$\mathbf{a}=s\mathbf{p}+t\mathbf{z}=s\mathbf{p}-t\operatorname{proj}_{\mathbf{p}}\mathbf{q}+t\mathbf{q}
=\Bigl(s-t\,\frac{\mathbf{q}\cdot\mathbf{p}}{\|\mathbf{p}\|^{2}}\Bigr)\mathbf{p}+t\mathbf{q},$$
establishing the result upon choosing $l=s-t\,\dfrac{\mathbf{q}\cdot\mathbf{p}}{\|\mathbf{p}\|^{2}}$ and m = t. ❑
368 Example Let p = (1, 1), q = (1, 2). Write p as the sum of two vectors, one parallel to q and the other
perpendicular to q.
Solution: We use Theorem 365. We know that $\operatorname{proj}_{\mathbf{q}}\mathbf{p}$ is parallel to q, and we find
$$\operatorname{proj}_{\mathbf{q}}\mathbf{p}=\frac{\mathbf{p}\cdot\mathbf{q}}{\|\mathbf{q}\|^{2}}\,\mathbf{q}=\frac{3}{5}\,\mathbf{q}=\begin{bmatrix}\tfrac{3}{5}\\[3pt] \tfrac{6}{5}\end{bmatrix}.$$
We also compute
$$\mathbf{p}-\operatorname{proj}_{\mathbf{q}}\mathbf{p}=\begin{bmatrix}1-\tfrac{3}{5}\\[3pt] 1-\tfrac{6}{5}\end{bmatrix}=\begin{bmatrix}\tfrac{2}{5}\\[3pt] -\tfrac{1}{5}\end{bmatrix}.$$
Observe that
$$\begin{bmatrix}\tfrac{3}{5}\\[3pt] \tfrac{6}{5}\end{bmatrix}\cdot\begin{bmatrix}\tfrac{2}{5}\\[3pt] -\tfrac{1}{5}\end{bmatrix}=\frac{6}{25}-\frac{6}{25}=0,$$
so the second vector is indeed perpendicular to q.
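The decomposition of Example 368 can be verified with exact arithmetic. A sketch (not from the text):

```python
# Hypothetical check of Example 368: proj_q p = (p.q/||q||^2) q and the
# orthogonal remainder p - proj_q p.
from fractions import Fraction

p = (Fraction(1), Fraction(1))
q = (Fraction(1), Fraction(2))

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]
c = dot(p, q) / dot(q, q)                # 3/5
proj = (c * q[0], c * q[1])              # (3/5, 6/5), parallel to q
rest = (p[0] - proj[0], p[1] - proj[1])  # (2/5, -1/5)
print(dot(proj, rest))                   # 0: the remainder is perpendicular
```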
369 Example Prove that the altitudes of a triangle △ABC on the plane are concurrent. This point is
called the orthocentre of the triangle.
Solution: Put $\mathbf{a}=\overrightarrow{OA}$, $\mathbf{b}=\overrightarrow{OB}$, $\mathbf{c}=\overrightarrow{OC}$. First observe that for any x, we have, upon
expanding,
$$(\mathbf{x}-\mathbf{a})\cdot(\mathbf{b}-\mathbf{c})+(\mathbf{x}-\mathbf{b})\cdot(\mathbf{c}-\mathbf{a})+(\mathbf{x}-\mathbf{c})\cdot(\mathbf{a}-\mathbf{b})=0.\tag{8.11}$$
Let H be the point of intersection of the altitude from A and the altitude from B. Then
$$0=\overrightarrow{AH}\cdot\overrightarrow{CB}=(\overrightarrow{OH}-\overrightarrow{OA})\cdot(\overrightarrow{OB}-\overrightarrow{OC})=(\overrightarrow{OH}-\mathbf{a})\cdot(\mathbf{b}-\mathbf{c}),\tag{8.12}$$
and
$$0=\overrightarrow{BH}\cdot\overrightarrow{AC}=(\overrightarrow{OH}-\overrightarrow{OB})\cdot(\overrightarrow{OC}-\overrightarrow{OA})=(\overrightarrow{OH}-\mathbf{b})\cdot(\mathbf{c}-\mathbf{a}).\tag{8.13}$$
Putting $\mathbf{x}=\overrightarrow{OH}$ in (8.11) and subtracting from it (8.12) and (8.13), we gather that
$$0=(\overrightarrow{OH}-\mathbf{c})\cdot(\mathbf{a}-\mathbf{b})=\overrightarrow{CH}\cdot\overrightarrow{AB},$$
which shews that H also lies on the altitude from C.
Homework
Problem 8.3.1 Determine the value of a so that
$$\begin{bmatrix}a\\ 1-a\end{bmatrix}$$
be perpendicular to
$$\begin{bmatrix}1\\ 1\end{bmatrix}.$$
Problem 8.3.3 Let
$$\mathbf{p}=\begin{bmatrix}4\\ 5\end{bmatrix},\qquad\mathbf{r}=\begin{bmatrix}1\\ 1\end{bmatrix},\qquad\mathbf{s}=\begin{bmatrix}2\\ 1\end{bmatrix}.$$
Write p as the sum of two vectors, one parallel to r and the other parallel to s.
Problem 8.3.4 Prove that
$$\|\mathbf{a}\|^{2}=(\mathbf{a}\cdot\mathbf{i})^{2}+(\mathbf{a}\cdot\mathbf{j})^{2}.$$
Problem 8.3.6 Let (x, y) ∈ (R²)² with ‖x‖ = (3/2)‖y‖. Shew that 2x + 3y and 2x − 3y are perpendicular.
Problem 8.3.7 Let a, b be fixed vectors in R². Prove that if
$$\forall\,\mathbf{v}\in\mathbb{R}^{2},\qquad \mathbf{v}\cdot\mathbf{a}=\mathbf{v}\cdot\mathbf{b},$$
then a = b.
Problem 8.3.8 Let (a, b) ∈ (R²)². Prove that
$$\|\mathbf{a}+\mathbf{b}\|^{2}+\|\mathbf{a}-\mathbf{b}\|^{2}=2\|\mathbf{a}\|^{2}+2\|\mathbf{b}\|^{2}.$$
Problem 8.3.10 Let x, a be non-zero vectors in R². Prove that
$$\operatorname{proj}_{\mathbf{a}}\bigl(\operatorname{proj}_{\mathbf{x}}\mathbf{a}\bigr)=\lambda\,\mathbf{a},$$
with 0 ≤ λ ≤ 1.
8.4 Lines on the Plane
It is clear that the points A, B, and C are collinear if and only if $\overrightarrow{AB}$ is parallel to $\overrightarrow{AC}$. Thus we have the
following definition.
371 Definition The parametric equation with parameter t ∈ R of the straight line passing through the
point P = (p1, p2) in the direction of the vector v ≠ 0 is
$$\begin{bmatrix}x-p_{1}\\ y-p_{2}\end{bmatrix}=t\,\mathbf{v}.$$
If r = (x, y), then the equation of the line can be written in the form
$$\mathbf{r}-\mathbf{p}=t\,\mathbf{v}.\tag{8.14}$$
372 Theorem Let v ≠ 0 and let n ⊥ v. An alternative form for the equation of the line r − p = t v is

(r − p)·n = 0.

Moreover, the vector (a, b) is perpendicular to the line with Cartesian equation ax + by = c.

Proof: Suppose a ≠ 0. Then we can put y = t and x = −(b/a)t + c/a, and the parametric equation of this line is

(x − c/a, y) = t(−b/a, 1),

and we have

(−b/a, 1)·(a, b) = −b + b = 0.

Similarly, if b ≠ 0 we can put x = t and y = −(a/b)t + c/b, and the parametric equation of this line is

(x, y − c/b) = t(1, −a/b),

and we have

(1, −a/b)·(a, b) = a − a = 0,

proving the theorem in this case.

The vector (a, b)/√(a² + b²) has norm 1 and is orthogonal to the line ax + by = c.
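A quick numeric sketch (example values assumed) that the vector n = (a, b) is indeed perpendicular to the direction vector of the line ax + by = c:

```python
# Hypothetical line 3x + 4y = 12; its direction (as used above, a != 0) is (-b/a, 1).
a, b, c = 3.0, 4.0, 12.0
direction = (-b / a, 1.0)
n = (a, b)
print(abs(direction[0] * n[0] + direction[1] * n[1]) < 1e-12)   # True: n is normal
```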
373 Exemplo The equation of the line passing through A = (2, 3) and in the direction of v = (4, 5) is

(x − 2, y − 3) = t(4, 5).

374 Exemplo Find the equation of the line passing through A = (1, 1) and B = (2, 3).

375 Exemplo Suppose that (m, b) ∈ R2. Write the Cartesian equation of the line y = mx + b in parametric form.

Solution: Here is a way. Put x = t. Then y = mt + b and so the desired parametric form is

(x, y − b) = t(1, m).
Put v = (1, m1) and w = (1, m2). Since the lines are perpendicular we must have v·w = 0, which yields

0 = v·w = 1(1) + m1(m2) ⟹ m1 m2 = −1.
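A small illustration of the slope condition just derived: the directions (1, m1) and (1, m2) of two lines y = m1 x + b1 and y = m2 x + b2 are perpendicular exactly when m1 m2 = −1.

```python
def perpendicular(m1, m2):
    # Dot product of the two direction vectors (1, m1) and (1, m2).
    v, w = (1.0, m1), (1.0, m2)
    return v[0] * w[0] + v[1] * w[1] == 0.0

print(perpendicular(2.0, -0.5))   # True: 2 * (-1/2) = -1
print(perpendicular(2.0, 3.0))    # False
```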
Proof: Let R0 be the point on the line that is nearest to B. Then BR0 = r0 − b is orthogonal to the line, and the distance we seek is

||proj_n^(r0 − b)|| = ||((r0 − b)·n / (n·n)) n|| = |(r0 − b)·n| / ||n||.

Since r0 lies on the line, r0·n = a·n, and so

||proj_n^(r0 − b)|| = |r0·n − b·n| / ||n|| = |a·n − b·n| / ||n|| = |(a − b)·n| / ||n||,

as we wanted to shew.
If the line has Cartesian equation ax + by = c, then at least one of a and b is ≠ 0. Let us suppose a ≠ 0, as the argument when a = 0 and b ≠ 0 is similar. Then ax + by = c is equivalent to

(x − c/a, y − 0)·(a, b) = 0.

We use the result obtained above with a = (c/a, 0), n = (a, b), and B = (b1, b2). Then ||n|| = √(a² + b²) and

|(a − b)·n| = |(c/a − b1, −b2)·(a, b)| = |c − ab1 − bb2| = |ab1 + bb2 − c|,

and so the distance from B = (b1, b2) to the line ax + by = c is |ab1 + bb2 − c| / √(a² + b²).
378 Exemplo Recall that the medians of the triangle ABC are lines joining the vertices of ABC with the midpoints of the side opposite the vertex. Prove that the medians of a triangle are concurrent, that is, that they pass through a common point.

Solution: Let MA, MB, and MC denote the midpoints of the sides opposite A, B, and C, respectively. The equation of the line passing through A and in the direction of AMA is (with r = (x, y))

r = OA + r AMA.

Similarly, the equation of the line passing through B and in the direction of BMB is

r = OB + s BMB.

These two lines must intersect at a point G inside the triangle. We will shew that GC is parallel to CMC, which means that the three points G, C, MC are collinear.

Now, there exist (r0, s0) ∈ R2 such that

OA + r0 AMA = OG = OB + s0 BMB,

that is,

r0 AMA − s0 BMB = OB − OA,

or

r0 (AB + BMA) − s0 (BA + AMB) = AB.

Since MA and MB are the midpoints of [B, C] and [C, A] respectively, we have 2BMA = BC and 2AMB = AC = AB + BC. The relationship becomes

r0 (AB + (1/2)BC) − s0 (−AB + (1/2)AB + (1/2)BC) = AB,

that is,

(r0 + s0/2 − 1) AB = (s0/2 − r0/2) BC.

We must have

r0 + s0/2 − 1 = 0,
s0/2 − r0/2 = 0,

since otherwise the vectors AB and BC would be parallel, and the triangle would be degenerate. Solving, we find s0 = r0 = 2/3. Thus we have OA + (2/3)AMA = OG, or AG = (2/3)AMA, and similarly, BG = (2/3)BMB.

From AG = (2/3)AMA, we deduce AG = 2GMA. Since MA is the midpoint of [B, C], we have GB + GC = 2GMA = AG, which is equivalent to

GA + GB + GC = 0.

As MC is the midpoint of [A, B] we have GA + GB = 2GMC. Thus

0 = GA + GB + GC = 2GMC + GC.

This means that GC = −2GMC, that is, that they are parallel, and so the points G, C and MC all lie on the same line. This achieves the desired result.
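Both conclusions of the example — G = A + (2/3)AMA is the common point, and GA + GB + GC = 0 — can be confirmed numerically for a sample triangle (an illustrative sketch only):

```python
def add(u, v): return (u[0] + v[0], u[1] + v[1])
def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def scale(k, u): return (k * u[0], k * u[1])

A, B, C = (0.0, 0.0), (6.0, 0.0), (2.0, 3.0)    # sample triangle
MA = scale(0.5, add(B, C))                       # midpoint of [B, C]
G = add(A, scale(2.0 / 3.0, sub(MA, A)))         # G = A + (2/3) AM_A
S = add(add(sub(A, G), sub(B, G)), sub(C, G))    # GA + GB + GC
print(G, S)   # G is the centroid (8/3, 1); S is (0, 0) up to rounding
```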
Homework

Problema 8.4.1 Find the angle between the lines 2x − y = 1 and x − 3y = 1.

Problema 8.4.2 Find the equation of the line passing through the point (1, 1) and in a direction perpendicular to the vector (2, 1).

Problema 8.4.3 The triangle ABC has centroid G, and the triangle A′B′C′ satisfies

AA′ + BB′ + CC′ = 0.

Prove that G is also the centroid of A′B′C′.

Problema 8.4.4 Let ABCD be a trapezoid, with bases [A, B] and [C, D]. The lines (AC) and (BD) meet at E and the lines (AD) and (BC) meet at F. Prove that the line (EF) passes through the midpoints of [A, B] and [C, D] by proving the following steps. Let I be the midpoint of [A, B] and let J be the point of intersection of the lines (FI) and (DC). Prove that J is the midpoint of [C, D]. Deduce that F, I, J are collinear. Prove that E, I, J are collinear.

Problema 8.4.5 Let ABCD be a parallelogram. Let E and F be points such that

AE = (1/4)AC and AF = (3/4)AC.

Demonstrate that the lines (BE) and (DF) are parallel. Let I be the midpoint of [A, D] and J be the midpoint of [B, C]. Demonstrate that the lines (AB) and (IJ) are parallel. What type of quadrilateral is IEJF?

Problema 8.4.6 ABCD is a parallelogram; point I is the midpoint of [A, B]. Point E is defined by the relation IE = (1/3)ID. Prove that

AE = (1/3)(AB + AD)

and prove that the points A, C, E are collinear.

Problema 8.4.7 Put OA = a, OB = b, OC = c. Prove that A, B, C are collinear if and only if there exist real numbers α, β, γ, not all zero, such that

α a + β b + γ c = 0, α + β + γ = 0.

Problema 8.4.8 Prove Desargues' Theorem: If the triangles ABC and A′B′C′ (not necessarily in the same plane) are so positioned that (AA′), (BB′), (CC′) all pass through the same point V and if (BC) and (B′C′) meet at L, (CA) and (C′A′) meet at M, and (AB) and (A′B′) meet at N, then L, M, N are collinear.
8.5 Vectors in R3

We now extend the notions studied for R2 to R3. The rectangular coordinate form of a vector in R3 is

a = (a1, a2, a3).

In particular, if

i = (1, 0, 0), j = (0, 1, 0), k = (0, 0, 1),

then we can write any vector a = (a1, a2, a3) as a sum

a = a1 i + a2 j + a3 k.

Given a = (a1, a2, a3) and b = (b1, b2, b3), their dot product is

a·b = a1 b1 + a2 b2 + a3 b3,

and

||a|| = √(a1² + a2² + a3²).

We also have

i·j = j·k = k·i = 0,

and

||i|| = ||j|| = ||k|| = 1.

379 Definio A system of unit vectors i, j, k is right-handed if the shortest-route rotation which brings i to coincide with j is performed in a counter-clockwise manner. It is left-handed if the rotation is done in a clockwise manner.

To study points in space we must first agree on the orientation that we will give our coordinate system. We will use, unless otherwise noted, a right-handed orientation, as in figure 8.22.
The analogues of the Cauchy-Bunyakovsky-Schwarz and the Triangle Inequality also hold
in R3 .
We now define the (standard) cross (wedge) product in R3 as a product satisfying the following properties.

380 Definio Let (x, y, z, λ) ∈ R3 × R3 × R3 × R. The wedge product ×: R3 × R3 → R3 is a closed binary operation satisfying

CP1 Anti-commutativity:

x × y = −(y × x) (8.15)

CP2 Bilinearity:

(x + z) × y = x × y + z × y, x × (z + y) = x × z + x × y (8.16)

CP3 Scalar homogeneity:

(λx) × y = x × (λy) = λ(x × y) (8.17)

CP4

x × x = 0 (8.18)
381 Theorem Let x = (x1, x2, x3) and y = (y1, y2, y3) be vectors in R3. Then

x × y = (x2 y3 − x3 y2) i + (x3 y1 − x1 y3) j + (x1 y2 − x2 y1) k.

Proof: Since i × i = j × j = k × k = 0, we have

(x1 i + x2 j + x3 k) × (y1 i + y2 j + y3 k) = x1 y2 i × j + x1 y3 i × k + x2 y1 j × i + x2 y3 j × k + x3 y1 k × i + x3 y2 k × j
 = x1 y2 k − x1 y3 j − x2 y1 k + x2 y3 i + x3 y1 j − x3 y2 i,

from where the theorem follows.
382 Exemplo Find (1, 0, −3) × (0, 1, 2).

Solution: We have

(i − 3k) × (j + 2k) = i × j + 2 i × k − 3 k × j − 6 k × k
 = k − 2j + 3i − 0
 = 3i − 2j + k.

Hence

(1, 0, −3) × (0, 1, 2) = (3, −2, 1).
383 Theorem The cross product vector x × y is simultaneously perpendicular to x and y.

Proof: We will only check the first assertion; the second verification is analogous.

x·(x × y) = (x1 i + x2 j + x3 k)·((x2 y3 − x3 y2) i + (x3 y1 − x1 y3) j + (x1 y2 − x2 y1) k)
 = x1 x2 y3 − x1 x3 y2 + x2 x3 y1 − x2 x1 y3 + x3 x1 y2 − x3 x2 y1
 = 0,

which completes the proof.
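The component formula of Theorem 381 and the perpendicularity of Theorem 383 can be checked together on sample vectors (an illustrative sketch):

```python
def cross(x, y):
    # Component formula of Theorem 381.
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1, 2, 3), (4, 5, 6)
z = cross(x, y)
print(z, dot(x, z), dot(y, z))   # (-3, 6, -3) 0 0
```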
384 Theorem a × (b × c) = (a·c) b − (a·b) c.

Proof:

a × (b × c) = (a1 i + a2 j + a3 k) × ((b2 c3 − b3 c2) i + (b3 c1 − b1 c3) j + (b1 c2 − b2 c1) k)
 = a1 (b3 c1 − b1 c3) k − a1 (b1 c2 − b2 c1) j − a2 (b2 c3 − b3 c2) k + a2 (b1 c2 − b2 c1) i + a3 (b2 c3 − b3 c2) j − a3 (b3 c1 − b1 c3) i
 = (a1 c1 + a2 c2 + a3 c3)(b1 i + b2 j + b3 k) − (a1 b1 + a2 b2 + a3 b3)(c1 i + c2 j + c3 k)
 = (a·c) b − (a·b) c,

which finishes the proof.
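Theorem 384 (the "BAC-CAB" identity) checked on sample vectors (illustrative only):

```python
def cross(x, y):
    return (x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

a, b, c = (1, 2, 3), (-2, 0, 5), (4, 1, -1)
lhs = cross(a, cross(b, c))                                   # a x (b x c)
rhs = tuple(dot(a, c) * bi - dot(a, b) * ci                   # (a.c) b - (a.b) c
            for bi, ci in zip(b, c))
print(lhs == rhs, lhs)   # True (-58, -13, 28)
```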
386 Theorem Let ∠(x, y) ∈ [0; π] be the convex angle between two vectors x and y. Then

||x × y|| = ||x|| ||y|| sin ∠(x, y).

Proof: We have

||x × y||² = (x2 y3 − x3 y2)² + (x3 y1 − x1 y3)² + (x1 y2 − x2 y1)²
 = ||x||² ||y||² − (x·y)²
 = ||x||² ||y||² − ||x||² ||y||² cos² ∠(x, y)
 = ||x||² ||y||² sin² ∠(x, y),

whence the theorem follows. The Theorem is illustrated in Figure 8.24. Geometrically it means that the area of the parallelogram generated by joining x and y at their heads is ||x × y||.
387 Corollary Two non-zero vectors x, y satisfy x × y = 0 if and only if they are parallel.
389 Exemplo Let x ∈ R3, ||x|| = 1. Find

||x × i||² + ||x × j||² + ||x × k||².

Solution: By the proof of Theorem 386,

||x × i||² = ||x||² ||i||² − (x·i)² = 1 − (x·i)²,
||x × j||² = ||x||² ||j||² − (x·j)² = 1 − (x·j)²,
||x × k||² = ||x||² ||k||² − (x·k)² = 1 − (x·k)²,

and since (x·i)² + (x·j)² + (x·k)² = ||x||² = 1, the desired sum equals 3 − 1 = 2.
Homework

Problema 8.5.1 Consider a tetrahedron ABCS. [A] Find AB + BC + CS. [B] Find AC + CS + SA + AB.

Problema 8.5.2 Find a vector simultaneously perpendicular to (1, 1, 1) and (1, 1, 0) and having norm 3.

Problema 8.5.3 Find the area of the triangle whose vertices are at P = (0, 0, 1), Q = (0, 1, 0), and R = (1, 0, 0).

Problema 8.5.4 Prove or disprove! The cross product is associative.

Problema 8.5.5 Prove that x × x = 0 follows from the anti-commutativity of the cross product.

Problema 8.5.6 Expand the product (a − b) × (a + b).

Problema 8.5.7 The vectors a, b are constant vectors. Solve the equation

a × (x × b) = b × (x × a).

Problema 8.5.8 The vectors a, b, c are constant vectors. Solve the system of equations

2x + y × a = b, 3y + x × a = c.

Problema 8.5.9 Prove that there do not exist three unit vectors in R3 such that the angle between any two of them be > 2π/3.

Problema 8.5.12 Let (a, b) ∈ R3 × R3 be fixed. Solve the equation

a × x = b

for x.
391 Lemma Let v, w in R3 be non-parallel vectors. Then every vector u of the form

u = a v + b w,

((a, b) ∈ R2 arbitrary) is coplanar with both v and w. Conversely, any vector t coplanar with both v and w can be uniquely expressed in the form

t = p v + q w.

Proof: This follows at once from Corollary 367, since the operations occur on a plane, which can be identified with R2.
A plane is determined by three non-collinear points. Suppose that A, B, and C are non-collinear points on the same plane and that R = (x, y, z) is another arbitrary point on this plane. Since A, B, and C are non-collinear, AB and AC, which are coplanar, are non-parallel. Since AR also lies on the plane, we have by Lemma 391 that there exist real numbers p, q with

AR = p AB + q AC.

By Chasles' Rule,

OR = OA + p(OB − OA) + q(OC − OA)

is the equation of a plane containing the three non-collinear points A, B, and C. By letting r = OR, a = OA, etc., we deduce that

r − a = p(b − a) + q(c − a).

392 Definio The parametric equation of a plane containing the point A, and parallel to the vectors u and v is given by

r − a = p u + q v.

Componentwise this reads

x − a1 = p u1 + q v1,
y − a2 = p u2 + q v2,
z − a3 = p u3 + q v3.

The Cartesian equation of a plane is an equation of the form ax + by + cz = d with (a, b, c, d) ∈ R4 and a² + b² + c² ≠ 0.
393 Exemplo Find both the parametric equation and the Cartesian equation of the plane parallel to the vectors (1, 1, 1) and (1, 1, 0) and passing through the point (0, −1, 2).
394 Theorem Let u and v be non-parallel vectors and let r − a = p u + q v be the equation of the plane containing A and parallel to the vectors u and v. If n is simultaneously perpendicular to u and v, then

(r − a)·n = 0.

Moreover, the vector (a, b, c) is normal to the plane with Cartesian equation ax + by + cz = d. For if, say, a ≠ 0, we may solve

x = d/a − (b/a)y − (c/a)z.
395 Exemplo Find once again, by appealing to Theorem 394, the Cartesian equation of the plane parallel to the vectors (1, 1, 1) and (1, 1, 0) and passing through the point (0, −1, 2).

Solution: The vector

(1, 1, 1) × (1, 1, 0) = (−1, 1, 0)

is normal to the plane. The plane has thus equation

(x − 0, y + 1, z − 2)·(−1, 1, 0) = 0 ⟹ −x + y + 1 = 0 ⟹ x − y = 1,

as obtained before.
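The method of Exemplo 395 as a small routine (the data below are assumed example values): the cross product of the two direction vectors gives a normal n, and the plane through A is then n·r = n·a.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

u, v = (1, 1, 1), (1, 1, 0)   # direction vectors of the plane
A = (0, -1, 2)                # a point on the plane
n = cross(u, v)
d = dot(n, A)
print(n, d)   # (-1, 1, 0) -1, i.e. -x + y = -1, the plane x - y = 1
```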
Proof: Let R0 be the point on the plane that is nearest to B. Then BR0 = r0 − b is orthogonal to the plane, and the distance we seek is

||proj_n^(r0 − b)|| = ||((r0 − b)·n / (n·n)) n|| = |(r0 − b)·n| / ||n||.

Since r0 lies on the plane, r0·n = a·n, and so

||proj_n^(r0 − b)|| = |r0·n − b·n| / ||n|| = |a·n − b·n| / ||n|| = |(a − b)·n| / ||n||,

as we wanted to shew.

Given three planes in space, they may (i) be parallel (which allows for some of them to coincide), (ii) two may be parallel and the third intersect each of the other two at a line, (iii) intersect at a line, (iv) intersect at a point.
397 Definio The equation of a line passing through A ∈ R3 in the direction of v ≠ 0 is given by

r − a = t v, t ∈ R.

398 Theorem Put OA = a, OB = b, and OC = c. Points (A, B, C) ∈ (R3)3 are collinear if and only if

a × b + b × c + c × a = 0.

Proof: If the points A, B, C are collinear, then AB is parallel to AC and by Corollary 387, we must have

(c − a) × (b − a) = 0.

Rearranging gives

c × b − c × a − a × b + a × a = 0.

Further rearranging completes the proof.
399 Theorem (Distance Between a Point and a Line) Let L: r = a + t v, v ≠ 0, be a line and let B be a point not on L. Then the distance from B to L is given by

||(a − b) × v|| / ||v||.

Proof: Let R0, with position vector r0, be the point on L nearest to B. Then r0 − b is perpendicular to the line, and hence

||r0 − b|| = ||r0 − b|| ||v|| sin(π/2) / ||v|| = ||(r0 − b) × v|| / ||v||.

Since r0 − a is parallel to v, (r0 − a) × v = 0, and so

(r0 − b) × v = (a − b) × v,

giving

||r0 − b|| = ||(a − b) × v|| / ||v||.

Given two lines in space, one of the following three situations might arise: (i) the lines intersect at a point, (ii) the lines are parallel, (iii) the lines are skew (one over the other, without intersecting).
Homework

Problema 8.6.1 Find the equation of the plane passing through the points (a, 0, a), (a, 1, 0), and (0, 1, 2a) in R3.

Problema 8.6.2 Find the equation of the plane containing the point (1, 1, 1) and perpendicular to the line x = 1 + t, y = 2t, z = 1 − t.

Problema 8.6.5 Find the equation of the line perpendicular to the plane ax + a²y + a³z = 0, a ≠ 0, and passing through the point (0, 0, 1).

Problema 8.6.6 The two planes

x − y − z = 1, x − z = 1,

intersect at a line. Write the equation of this line in the form

(x, y, z) = a + t v, t ∈ R.

Problema 8.6.7 Find the equation of the plane passing through the points (1, 0, 1) and (2, 1, 1) and parallel to the line r = t(1, 1, 1).

Problema 8.6.8 Points a, b, c in R3 are collinear and it is known that a × c = i − 2j and a × b = 2k − 3i. Find b × c.

Problema 8.6.9 Find the equation of the plane which is equidistant from the points (3, 2, 1) and (1, 1, 1).

Problema 8.6.10 (Putnam Exam, 1980) Let S be the solid in three-dimensional space consisting of all points (x, y, z) satisfying the following system of six conditions:

x ≥ 0, y ≥ 0, z ≥ 0,
x + y + z ≤ 11,
2x + 4y + 3z ≤ 36,
2x + 3z ≤ 24.

Determine the number of vertices and the number of edges of S.
8.7 Rn

As a generalisation of R2 and R3 we define Rn as the set of n-tuples

{(x1, x2, . . . , xn) : xi ∈ R}.
400 Theorem (Cauchy-Bunyakovsky-Schwarz Inequality) Given (x, y) ∈ (Rn)² the following inequality holds:

|x·y| ≤ ||x|| ||y||.

Proof: Put a = Σ_{k=1}^{n} x_k², b = Σ_{k=1}^{n} x_k y_k, and c = Σ_{k=1}^{n} y_k². Consider

f(t) = Σ_{k=1}^{n} (t x_k − y_k)² = t² Σ_{k=1}^{n} x_k² − 2t Σ_{k=1}^{n} x_k y_k + Σ_{k=1}^{n} y_k² = at² − 2bt + c.

This is a quadratic polynomial which is non-negative for all real t, so it cannot have two distinct real roots. Its discriminant 4b² − 4ac must therefore be non-positive, from where we gather

4 (Σ_{k=1}^{n} x_k y_k)² ≤ 4 (Σ_{k=1}^{n} x_k²)(Σ_{k=1}^{n} y_k²).

This gives

|x·y|² ≤ ||x||² ||y||²,

from where we deduce the result.
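The CBS inequality checked on a few sample vectors, including a pair of proportional vectors where equality holds (an illustrative sketch):

```python
from math import sqrt

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

samples = [((1, 2, 3), (4, 5, 6)),
           ((1, -1, 0, 2), (3, 3, 3, -1)),
           ((2, 2), (1, 1))]            # proportional: equality case
for x, y in samples:
    lhs = abs(dot(x, y))
    rhs = sqrt(dot(x, x)) * sqrt(dot(y, y))
    print(lhs <= rhs + 1e-12, lhs, round(rhs, 6))
```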
401 Exemplo Assume that ak, bk, ck, k = 1, . . . , n, are positive real numbers. Shew that

(Σ_{k=1}^{n} ak bk ck)⁴ ≤ (Σ_{k=1}^{n} ak⁴)(Σ_{k=1}^{n} bk⁴)(Σ_{k=1}^{n} ck²)².

Solution: Using CBS on Σ_{k=1}^{n} (ak bk)ck once we obtain

Σ_{k=1}^{n} ak bk ck ≤ (Σ_{k=1}^{n} ak² bk²)^(1/2) (Σ_{k=1}^{n} ck²)^(1/2).

Using CBS again, this time on (Σ_{k=1}^{n} ak² bk²)^(1/2), we obtain

Σ_{k=1}^{n} ak bk ck ≤ (Σ_{k=1}^{n} ak² bk²)^(1/2) (Σ_{k=1}^{n} ck²)^(1/2)
 ≤ (Σ_{k=1}^{n} ak⁴)^(1/4) (Σ_{k=1}^{n} bk⁴)^(1/4) (Σ_{k=1}^{n} ck²)^(1/2),

which gives the result upon raising both sides to the fourth power.
402 Theorem (Triangle Inequality) Given (x, y) ∈ (Rn)² the following inequality holds:

||x + y|| ≤ ||x|| + ||y||.

Proof: We have

||a + b||² = (a + b)·(a + b)
 = a·a + 2 a·b + b·b
 ≤ ||a||² + 2||a|| ||b|| + ||b||²
 = (||a|| + ||b||)²,

where the middle step uses the Cauchy-Bunyakovsky-Schwarz Inequality. Taking square roots gives the result.
For p > 1 and x ∈ Rn put

||x||_p = (Σ_{k=1}^{n} |x_k|^p)^(1/p). (8.20)

Clearly

||x||_p ≥ 0, (8.21)

||x||_p = 0 ⟺ x = 0, (8.22)

||λx||_p = |λ| ||x||_p, λ ∈ R. (8.23)

We now prove analogues of the Cauchy-Bunyakovsky-Schwarz and the Triangle Inequality for ||·||_p. For this we need the following lemma.
403 Lemma (Young's Inequality) Let p > 1 and put 1/p + 1/q = 1. Then for (a, b) ∈ ([0; +∞[)² we have

ab ≤ a^p/p + b^q/q.

Proof: Let 0 < k < 1 and consider the function

f: [0; +∞[ → R, x ↦ x^k − k(x − 1).

Then 0 = f′(x) = k x^(k−1) − k ⟺ x = 1. Since f″(x) = k(k − 1)x^(k−2) < 0 for 0 < k < 1, x > 0, x = 1 is a maximum point. Hence f(x) ≤ f(1) for x ≥ 0, that is, x^k ≤ 1 + k(x − 1). Letting k = 1/p and x = a^p/b^q we deduce

a/b^(q/p) ≤ 1 + (1/p)(a^p/b^q − 1).

Multiplying both sides by b^q, and observing that q − q/p = 1 and 1 − 1/p = 1/q, we obtain

ab ≤ a^p/p + b^q/q,

which is the required inequality.
404 Theorem (Hölder Inequality) Given (x, y) ∈ (Rn)² the following inequality holds:

|x·y| ≤ ||x||_p ||y||_q.

Proof: If ||x||_p = 0 or ||y||_q = 0 there is nothing to prove, so assume otherwise. From the Young Inequality we have, for each k,

(|x_k| / ||x||_p)(|y_k| / ||y||_q) ≤ |x_k|^p / (p ||x||_p^p) + |y_k|^q / (q ||y||_q^q).

Adding, we deduce

Σ_{k=1}^{n} |x_k| |y_k| / (||x||_p ||y||_q) ≤ (1/p) Σ_{k=1}^{n} |x_k|^p / ||x||_p^p + (1/q) Σ_{k=1}^{n} |y_k|^q / ||y||_q^q
 = ||x||_p^p / (p ||x||_p^p) + ||y||_q^q / (q ||y||_q^q)
 = 1/p + 1/q
 = 1.

This gives

Σ_{k=1}^{n} |x_k y_k| ≤ ||x||_p ||y||_q.

The result follows by observing that

|Σ_{k=1}^{n} x_k y_k| ≤ Σ_{k=1}^{n} |x_k y_k| ≤ ||x||_p ||y||_q.
As a generalisation of the Triangle Inequality we have

405 Theorem (Minkowski Inequality) Let p ∈ ]1; +∞[. Given (x, y) ∈ (Rn)² the following inequality holds:

||x + y||_p ≤ ||x||_p + ||y||_p.

Proof: We have

||x + y||_p^p = Σ_{k=1}^{n} |x_k + y_k|^p ≤ Σ_{k=1}^{n} |x_k| |x_k + y_k|^(p−1) + Σ_{k=1}^{n} |y_k| |x_k + y_k|^(p−1).

Applying the Hölder Inequality to each sum on the right, and observing that (p − 1)q = p, we obtain

||x + y||_p^p ≤ (||x||_p + ||y||_p) ||x + y||_p^(p/q).

Dividing by ||x + y||_p^(p/q) gives the result, since p − p/q = 1.
Homework

Problema 8.7.1 Prove Lagrange's identity:

(Σ_{1≤j≤n} aj bj)² = (Σ_{1≤j≤n} aj²)(Σ_{1≤j≤n} bj²) − Σ_{1≤k<j≤n} (ak bj − aj bk)².

Problema 8.7.4 Let a ∈ Rn be a fixed vector. Demonstrate that

X = {x ∈ Rn : a·x = 0}

is a subspace of Rn.
Answers and Hints

1.1.2

x ∈ X \ (X \ A) ⟺ x ∈ X ∧ x ∉ (X \ A)
 ⟺ x ∈ X ∧ x ∈ A
 ⟺ x ∈ X ∩ A.
1.1.3

x ∈ X \ (A ∪ B) ⟺ x ∈ X ∧ (x ∉ (A ∪ B))
 ⟺ x ∈ X ∧ (x ∉ A ∧ x ∉ B)
 ⟺ (x ∈ X ∧ x ∉ A) ∧ (x ∈ X ∧ x ∉ B)
 ⟺ x ∈ (X \ A) ∧ x ∈ (X \ B)
 ⟺ x ∈ (X \ A) ∩ (X \ B).
1.1.8 We have

|a| = |a − b + b| ≤ |a − b| + |b|,

giving

|a| − |b| ≤ |a − b|.

Similarly,

|b| = |b − a + a| ≤ |b − a| + |a| = |a − b| + |a|,

gives

|b| − |a| ≤ |a − b|.

The stated inequality follows from this.
1.2.1 a ∼ a since a/a = 1 ∈ Z, and so the relation is reflexive. The relation is not symmetric. For 2 ∼ 1 since 2/1 ∈ Z but 1 ≁ 2 since 1/2 ∉ Z. The relation is transitive. For assume a ∼ b and b ∼ c. Then there exist (m, n) ∈ Z² such that a/b = m, b/c = n. This gives

a/c = (a/b)(b/c) = mn ∈ Z,

and so a ∼ c.
1.2.2 Here is one possible example: put a ∼ b ⟺ (a² + a)/b ∈ Z. Then clearly if a ∈ Z \ {0} we have a ∼ a since (a² + a)/a = a + 1 ∈ Z. On the other hand, the relation is not symmetric, since 5 ∼ 2 as (5² + 5)/2 = 15 ∈ Z but 2 ≁ 5, as (2² + 2)/5 = 6/5 ∉ Z. It is not transitive either, since (5² + 5)/3 ∈ Z ⟹ 5 ∼ 3 and (3² + 3)/12 ∈ Z ⟹ 3 ∼ 12, but (5² + 5)/12 ∉ Z and so 5 ≁ 12.
1.2.4 [B] [x] = x + (1/3)Z. [C] No.
1.3.1 Let ω = −1/2 + i√3/2. Then ω² + ω + 1 = 0 and ω³ = 1. Then

(a + bω + cω²)(u + vω + wω²) = (au + bw + cv) + ω(av + bu + cw) + ω²(aw + bv + cu),

and

(a + bω² + cω)(u + vω² + wω) = (au + bw + cv) + ω(aw + bv + cu) + ω²(av + bu + cw).
1.3.2 We have

x ∗ (y ∗ z) = x ∗ (yaz) = xa(yaz) = xayaz,

where we may drop the parentheses since multiplication is associative. Similarly

(x ∗ y) ∗ z = (xay) ∗ z = (xay)az = xayaz,

whence ∗ is associative.
whence the denominator never vanishes and since sums, multiplications and divisions of rational numbers are rational, (a + b)/(1 + ab) is also rational. We must prove now that −1 < (a + b)/(1 + ab) < 1 for (a, b) ∈ ]−1; 1[². We have

−1 < (a + b)/(1 + ab) < 1 ⟺ −1 − ab < a + b < 1 + ab
 ⟺ −1 − ab − a − b < 0 < 1 + ab − a − b.

Since (a, b) ∈ ]−1; 1[², (a + 1)(b + 1) > 0 and so −(a + 1)(b + 1) < 0, giving the sinistral inequality. Similarly a − 1 < 0 and b − 1 < 0 give (a − 1)(b − 1) > 0, the dextral inequality. Since the steps are reversible, we have established that indeed −1 < (a + b)/(1 + ab) < 1.
Since a ∗ b = (a + b)/(1 + ab) = (b + a)/(1 + ba) = b ∗ a, commutativity follows trivially. Now

a ∗ (b ∗ c) = a ∗ ((b + c)/(1 + bc))
 = (a + (b + c)/(1 + bc)) / (1 + a(b + c)/(1 + bc))
 = ((a + b) + c(1 + ab)) / (1 + ab + (a + b)c)
 = (a + b + c + abc) / (1 + ab + bc + ca).

Since this last expression is symmetric in a, b, c, it also equals (a ∗ b) ∗ c, whence ∗ is associative.

If a ∗ e = a then (a + e)/(1 + ae) = a, which gives a + e = a + ea², or e(a² − 1) = 0. Since a ≠ ±1, we must have e = 0.

If a ∗ b = 0, then (a + b)/(1 + ab) = 0, which means that b = −a.
a ∗ (b ∗ c) = a ∗ (b + c − bc)
 = a + b + c − bc − a(b + c − bc)
 = a + b + c − ab − bc − ca + abc.

(a ∗ b) ∗ c = (a + b − ab) ∗ c
 = a + b − ab + c − (a + b − ab)c
 = a + b + c − ab − bc − ca + abc,

whence ∗ is associative.
Tabela A.1: Addition table for Z11.

+  | 0  1  2  3  4  5  6  7  8  9  10
0  | 0  1  2  3  4  5  6  7  8  9  10
1  | 1  2  3  4  5  6  7  8  9  10 0
2  | 2  3  4  5  6  7  8  9  10 0  1
3  | 3  4  5  6  7  8  9  10 0  1  2
4  | 4  5  6  7  8  9  10 0  1  2  3
5  | 5  6  7  8  9  10 0  1  2  3  4
6  | 6  7  8  9  10 0  1  2  3  4  5
7  | 7  8  9  10 0  1  2  3  4  5  6
8  | 8  9  10 0  1  2  3  4  5  6  7
9  | 9  10 0  1  2  3  4  5  6  7  8
10 | 10 0  1  2  3  4  5  6  7  8  9

Tabela A.2: Multiplication table for Z11.

·  | 0  1  2  3  4  5  6  7  8  9  10
0  | 0  0  0  0  0  0  0  0  0  0  0
1  | 0  1  2  3  4  5  6  7  8  9  10
2  | 0  2  4  6  8  10 1  3  5  7  9
3  | 0  3  6  9  1  4  7  10 2  5  8
4  | 0  4  8  1  5  9  2  6  10 3  7
5  | 0  5  10 4  9  3  8  2  7  1  6
6  | 0  6  1  7  2  8  3  9  4  10 5
7  | 0  7  3  10 6  2  9  5  1  8  4
8  | 0  8  5  2  10 7  4  1  9  6  3
9  | 0  9  7  5  3  1  10 8  6  4  2
10 | 0  10 9  8  7  6  5  4  3  2  1
1.3.5 We have

x ∗ y = (x ∗ y) ∗ (x ∗ y)
 = [y ∗ (x ∗ y)] ∗ x
 = [(x ∗ y) ∗ x] ∗ y
 = [(y ∗ x) ∗ x] ∗ y
 = [(x ∗ x) ∗ y] ∗ y
 = (y ∗ y) ∗ (x ∗ x)
 = y ∗ x,

proving commutativity.
1.5.1 We have

1/(√2 + 2√3 + 3√6) = (√2 + 2√3 − 3√6) / ((√2 + 2√3)² − (3√6)²)
 = (√2 + 2√3 − 3√6) / (2 + 4√6 + 12 − 54)
 = (√2 + 2√3 − 3√6) / (−40 + 4√6)
 = (√2 + 2√3 − 3√6)(−40 − 4√6) / ((−40)² − (4√6)²)
 = (√2 + 2√3 − 3√6)(−40 − 4√6) / 1504
 = (−16√2 − 22√3 + 30√6 + 18) / 376.
1.5.2 Since

(−a)b⁻¹ + ab⁻¹ = (−a + a)b⁻¹ = 0F b⁻¹ = 0F,

we obtain by adding −(ab⁻¹) to both sides that

(−a)b⁻¹ = −(ab⁻¹).

Similarly, from

a(−b⁻¹) + ab⁻¹ = a(−b⁻¹ + b⁻¹) = a 0F = 0F,

we obtain by adding −(ab⁻¹) to both sides that

a(−b⁻¹) = −(ab⁻¹).
1.6.1 We have

h(a) = h(b) ⟹ a³ = b³
 ⟹ a³ − b³ = 0
 ⟹ (a − b)(a² + ab + b²) = 0.

Now,

b² + ab + a² = (b + a/2)² + 3a²/4.

This shews that b² + ab + a² is positive unless both a and b are zero. Hence a − b = 0 in all cases. We have shewn that h(b) = h(a) ⟹ a = b, and the function is thus injective.
1.6.2 We have

f(a) = f(b) ⟹ 6a/(2a − 3) = 6b/(2b − 3)
 ⟹ 6a(2b − 3) = 6b(2a − 3)
 ⟹ −18a = −18b
 ⟹ a = b,
2.1.1 A =
1 1 1
2 4 8
3 9 27

2.1.2 A =
1 2 3
2 4 6
3 6 9
2.1.3 M + N =
a+1  0   2c
a−b  2a  0
2a   0   2

2M =
2a     4a  2c
0      2a  2b
2a+2b  0   2
2.1.4 x = 1 and y = 4.
2.1.5 A =
13 1
15 3

B =
5 0
6 1
2.1.8 The set of border elements is the union of two rows and two columns. Thus we may choose at most four elements from the border, and at least one from the central 3 × 3 matrix. The largest element of this 3 × 3 matrix is 15, so the minimum of any allowable choice does not exceed 15. The choice 25, 15, 18, 23, 20 shews that the largest minimum is indeed 15.
2.2.1
2 2
0 2

2.2.2
AB =
a      b      c
c+a    a+b    b+c
a+b+c  a+b+c  a+b+c

BA =
a+b+c  b+c  c
a+b+c  c+a  a
a+b+c  a+b  b

2.2.3
[1 2 3; 2 3 1; 3 1 2][1 1 1; 2 2 2; 3 3 3] = [14 14 14; 11 11 11; 11 11 11],

whence a + b + c = 36.
2.2.4 An easy computation leads to

N² =
0 0 4 12
0 0 0 4
0 0 0 0
0 0 0 0

N³ =
0 0 0 8
0 0 0 0
0 0 0 0
0 0 0 0

and N⁴ = 0₄. Hence any power from the fourth on is the zero matrix.
2.2.5 AB = 0₄ and

BA =
0 2 0 3
0 2 0 3
0 2 0 3
0 2 0 3
m(a)m(b) =
1  a+b  (a+b)²/2
0  1    a+b
0  0    1
= m(a + b).

For the second part, observe that using the preceding part of the problem,

m(a)m(−a) = m(a − a) = m(0) =
1 0 0
0 1 0
0 0 1
= I₃,
2.2.8 For this problem you need to recall that if |r| < 1, then

a + ar + ar² + ar³ + ··· = a/(1 − r).

This gives

1 + 1/4 + 1/4² + 1/4³ + ··· = 1/(1 − 1/4) = 4/3,

1/2 + 1/2³ + 1/2⁵ + ··· = (1/2)/(1 − 1/4) = 2/3,

and

1 + 1/2 + 1/2² + 1/2³ + ··· = 1/(1 − 1/2) = 2.

This gives

I₃ + A + A² + A³ + ··· =
1 + 1/4 + 1/4² + ···     1/2 + 1/2³ + 1/2⁵ + ···  0
1/2 + 1/2³ + 1/2⁵ + ···  1 + 1/4 + 1/4² + ···     0
0                        0                        1 + 1/2 + 1/2² + ···
=
4/3  2/3  0
2/3  4/3  0
0    0    2
2.2.11 Disprove! Take for example A = [0 0; 1 1] and B = [1 0; 1 0]. Then

A² − B² = [−1 0; 0 1] ≠ [−1 0; −2 1] = (A + B)(A − B).
2.2.12 x = 6.
2.2.14
32 32
32 32

2.2.15 A^2003 =
0              2^1001 3^1002
2^1002 3^1001  0
2.2.17 The assertion is clearly true for n = 1. Assume that it is true for n, that is, assume

Aⁿ = [cos(nθ) −sin(nθ); sin(nθ) cos(nθ)].

Then

Aⁿ⁺¹ = A Aⁿ
 = [cos θ −sin θ; sin θ cos θ][cos(nθ) −sin(nθ); sin(nθ) cos(nθ)]
 = [cos θ cos(nθ) − sin θ sin(nθ)  −cos θ sin(nθ) − sin θ cos(nθ); sin θ cos(nθ) + cos θ sin(nθ)  −sin θ sin(nθ) + cos θ cos(nθ)]
 = [cos((n+1)θ) −sin((n+1)θ); sin((n+1)θ) cos((n+1)θ)],

and the result follows by induction.
2.2.18 Let A = [aij], B = [bij] be checkered n × n matrices. Then A + B = [aij + bij]. If j − i is odd, then aij + bij = 0 + 0 = 0, which shows that A + B is checkered. Furthermore, let AB = [cij] with cij = Σ_{k=1}^{n} aik bkj. If i is even and j odd, then aik = 0 for odd k and bkj = 0 for even k. Thus cij = 0 for i even and j odd. Similarly, cij = 0 for odd i and even j. This proves that AB is checkered.
2.2.19 Put

J =
0 1 1
0 0 1
0 0 0

so that A = I₃ + J, J² has 1 in the (1, 3) position and 0 elsewhere, and J³ = 0₃. Then

Aⁿ = (I₃ + J)ⁿ = I₃ⁿ + n I₃ⁿ⁻¹ J + (n(n−1)/2) I₃ⁿ⁻² J²
=
1 0 0   0 n n   0 0 n(n−1)/2
0 1 0 + 0 0 n + 0 0 0
0 0 1   0 0 0   0 0 0
=
1 n n(n+1)/2
0 1 n
0 0 1

giving the result, since n(n−1)/2 + n = n(n+1)/2.
A2 B = A(AB) = AB = B
A3 B = A(A2 B) = A(AB) = AB = B
..
.
Am B = AB = B.
Hence B = Am B = 0n B = 0n .
2.2.22 Put A = [a b; c d]. Using 2.2.21, deduce by iteration that

A^k = (a + d)^(k−1) A.

2.2.23 [a b; c −a], bc = −a².

2.2.24 ±I2, and [a b; c −a] with a² = 1 − bc.

2.2.25 We complete squares by putting Y = [a b; c d] = X − I. Then

[a² + bc  b(a + d); c(a + d)  bc + d²] = Y² = X² − 2X + I = [0 0; 6 4],

whence the solutions are

X = [1 0; 3 3], X = [1 0; −3 −1].
2.2.27 Put X = [a b; c d]. Then X² + X = [1 1; 1 1] becomes

a² + bc + a = 1
ab + bd + b = 1
ca + dc + c = 1
cb + d² + d = 1
⟺
a² + bc + a = 1
b(a + d + 1) = 1
c(a + d + 1) = 1
(d − a)(a + d + 1) = 0

Since b(a + d + 1) = 1, we cannot have a + d + 1 = 0, and so d = a, with a ≠ −1/2. Then

c = b = 1/(2a + 1)

and

a² + 1/(2a + 1)² + a = 1.
Observe that J^k = 3^(k−1) J for integer k ≥ 1. Using the binomial theorem we have

Aⁿ = (2I₃ − J)ⁿ
 = Σ_{k=0}^{n} C(n, k) (2I₃)^(n−k) (−1)^k J^k
 = 2ⁿ I₃ + (1/3) J Σ_{k=1}^{n} C(n, k) 2^(n−k) (−1)^k 3^k
 = 2ⁿ I₃ + (1/3)((−1)ⁿ − 2ⁿ) J
 = (1/3) ×
(−1)ⁿ + 2ⁿ⁺¹  (−1)ⁿ − 2ⁿ    (−1)ⁿ − 2ⁿ
(−1)ⁿ − 2ⁿ    (−1)ⁿ + 2ⁿ⁺¹  (−1)ⁿ − 2ⁿ
(−1)ⁿ − 2ⁿ    (−1)ⁿ − 2ⁿ    (−1)ⁿ + 2ⁿ⁺¹
2.3.2 There are infinitely many examples. One could take A = [1 1; 1 1] and B = [1 −1; −1 1]. Another set is A = [2 0; 0 0] and B = [0 0; 0 2].
tr (AC) + tr (DB) = n.
a contradiction, since n 1.
2.3.4 Disprove! This is not generally true. Take A = [1 1; 1 2] and B = [3 0; 0 1]. Clearly Aᵀ = A and Bᵀ = B. We have

AB = [3 1; 3 2]

but

(AB)ᵀ = [3 3; 1 2].
2.3.6 We have

tr (A²) = tr ([a b; c d][a b; c d]) = tr ([a² + bc  ab + bd; ca + cd  d² + cb]) = a² + d² + 2bc

and

(tr A)² = (a + d)².

Thus

tr (A²) = (tr A)² ⟺ a² + d² + 2bc = (a + d)² ⟺ bc = ad,

is the condition sought.
2.3.7

tr ((A − I4)²) = tr (A² − 2A + I4) = tr (A²) − 2 tr (A) + 4,
2.3.8 Disprove! Take A = B = In and n > 1. Then tr (AB) = n < n² = tr (A) tr (B).
2.3.9 Disprove! Take

A = [1 0; 0 0], B = [0 1; 0 0], C = [0 0; 1 0].

Then tr (ABC) = 1 but tr (BAC) = 0.
2.3.10 We have
2.3.11 We have

0 = cii = Σ_{k=1}^{n} x²ik ⟹ xik = 0 for all k.
A1 =
1 0 1 0
0 1 0 1
1 1 1 1
1 1 1 1
so take P =
0 0 1 0
0 1 0 0
1 0 0 0
0 0 0 1

A2 =
2 0 2 0
0 1 0 1
1 1 1 1
1 1 1 1
so take D =
2 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

B =
4 2 4 2
0 1 0 1
1 1 1 1
1 1 1 1
so take T =
1 0 0 2
0 1 0 0
0 0 1 0
0 0 0 1
P: R1 ↔ R3
a b c      g h i
d e f  →   d e f
g h i      a b c

P′: C1 ↔ C2
h g i
e d f
b a c

T: C1 − C2 → C1
h−g  g  i
e−d  d  f
b−a  a  c

D: 2R3 → R3
h−g    g   i
e−d    d   f
2b−2a  2a  2c

Thus we take

P =
0 0 1
0 1 0
1 0 0

P′ =
0 1 0
1 0 0
0 0 1

T =
1 0 0
−1 1 0
0 0 1

D =
1 0 0
0 1 0
0 0 2
where the entries appear on the j-th column. Then we see that tr (AEij) = aji and similarly, by considering BEij, we see that tr (BEij) = bji. Therefore ∀i, j, aji = bji, which implies that A = B.

which means that ∀i, j, k, aji ajk = 0. In particular, a²ji = 0, which means that ∀i, j, aji = 0, i.e., A = 0n.
2.5.1 a = 1, b = 2.
(In + A)(In − A + A² − A³) = In − A + A² − A³ + A − A² + A³ − A⁴ = In − A⁴ = In, since A⁴ = 0n.

2.5.8 We argue by contradiction. If exactly one of the matrices is not invertible, the identity

A = AIn = (ABC)(BC)⁻¹ = 0n(BC)⁻¹ = 0n

gives a contradiction. If all three are invertible, then

0n = 0n C⁻¹B⁻¹A⁻¹ = (ABC)C⁻¹B⁻¹A⁻¹ = In

also gives a contradiction.
= AB = BA,
by cancelling A on the left and B on the right. One can also argue that A = A1 , B = B1 , and so AB = (AB)1 =
B1 A1 = BA.
2.5.10 Observe that A = (a b)In + bU, where U is the n n matrix with 1F s everywhere. Prove that
2.7.3 If B is invertible, then rank (AB) = rank (A) = rank (BA). Similarly, if A is invertible rank (AB) = rank (B) = rank (BA). Now, take A = [1 0; 0 0] and B = [0 1; 0 0]. Then AB = B, and so rank (AB) = 1. But BA = [0 0; 0 0], and so rank (BA) = 0.
2.7.5 The maximum rank of this matrix could be 2. Hence, for the rank to be 1, the rows must be proportional, which entails

x²/4 = x/2 ⟹ x² − 2x = 0 ⟹ x ∈ {0, 2}.
2.7.6 Assume first that the non-zero n × n matrix A over a field F has rank 1. By permuting the rows of the matrix we may assume that every other row is a scalar multiple λk of the first row, which is non-zero since the rank is 1. Hence A must be of the form

A =
a1        a2        ···  an
λ1 a1     λ1 a2     ···  λ1 an
···
λn−1 a1   λn−1 a2   ···  λn−1 an
=
1
λ1
···
λn−1
[a1 a2 ··· an] := XY.

Conversely, assume that A can be factored as A = XY, where X ∈ Mn×1(F) and Y ∈ M1×n(F). Since A is non-zero, we must have rank (A) ≥ 1. Similarly, neither X nor Y could be all zeroes, because otherwise A would be zero. This means that rank (X) = 1 = rank (Y). Now, since
Performing R3 − R4 → R3 we have

1  a     1         b
0  b−a   1         a−b
0  0     2a(a−b)   a(a² − b²)
0  0     a² − b²   2(a−b)

Factorising, this is

1  a     1         b
0  b−a   1         a−b
0  0     2a(a−b)   a(a−b)(a+b)
0  0     0         −(1/2)(a−b)(a + 2 + b)(a − 2 + b)

Thus the rank is 4 if (a + 2 + b)(a − b)(a − 2 + b) ≠ 0. The rank is 3 if a + b = 2 and (a, b) ≠ (1, 1) or if a + b = −2 and (a, b) ≠ (−1, −1). The rank is 2 if a = b ≠ ±1. The rank is 1 if a = b = ±1.
has rank at least 2, since the first and third rows are not proportional. This means that it must have rank exactly two, and the last two rows must be proportional. Hence

(x + 2)/3 = 5/1 ⟹ x = 13.

2.7.13 For the counterexample consider A = [1 i; i −1].
Working in Z7, from R2 ↔ R3 we obtain

1 2 3 | 1 0 0
0 2 0 | 4 0 1
0 6 2 | 5 1 0

We deduce that

[1 2 3; 2 3 1; 3 1 2]⁻¹ = [4 2 0; 2 0 4; 0 4 2].
The augmented matrix is

0 0 1 | 1 0 0
0 1 a | 0 1 0
1 a b | 0 0 1

Performing R1 ↔ R3:

1 a b | 0 0 1
0 1 a | 0 1 0
0 0 1 | 1 0 0

Performing R1 − aR2 → R1:

1 0 b−a² | 0 −a 1
0 1 a    | 0  1 0
0 0 1    | 1  0 0

Performing R1 − (b − a²)R3 → R1 and R2 − aR3 → R2:

1 0 0 | a²−b −a 1
0 1 0 | −a    1 0
0 0 1 | 1     0 0

whence

B⁻¹ =
a²−b −a 1
−a    1 0
1     0 0

Now,

BAB⁻¹ = [0 0 1; 0 1 a; 1 a b][−a −b −c; 1 0 0; 0 1 0][a²−b −a 1; −a 1 0; 1 0 0]
 = [0 1 0; 1 a 0; 0 0 −c][a²−b −a 1; −a 1 0; 1 0 0]
 = [−a 1 0; −b 0 1; −c 0 0]
 = Aᵀ,
Perform R2 − R1 → R2 and R3 − R1 → R3:

1 0 0 | 1  0 0
0 1 0 | −1 1 0
0 1 x | −1 0 1

Performing R3 − R2 → R3:

1 0 0 | 1  0 0
0 1 0 | −1 1 0
0 0 x | 0 −1 1

Finally, performing (1/x)R3 → R3:

1 0 0 | 1  0    0
0 1 0 | −1 1    0
0 0 1 | 0 −1/x 1/x

2.8.4 Since MM⁻¹ = I3, multiplying the first row of M times the third column of M⁻¹, and again, the third row of M times the third column of M⁻¹, we gather that

1·0 + 0·a + 1·b = 0, 0·0 + 1·a + 1·b = 1 ⟹ b = 0, a = 1.
2.8.5 It is easy to prove by induction that

Aⁿ =
1          0  0
n          1  0
n(n+1)/2   n  1

Row-reducing,

(Aⁿ)⁻¹ =
1          0  0
−n         1  0
(n−1)n/2  −n  1

2.8.6 Take, for example, A = [0 1; 1 0] = A⁻¹.
Whence

[a 2a 3a; 0 b 2b; 0 0 c]⁻¹ =
1/a  −2/b  1/c
0    1/b   −2/c
0    0     1/c
2.8.9 To compute the inverse matrix we proceed formally as follows. The augmented matrix is

b a 0 | 1 0 0
c 0 a | 0 1 0
0 c b | 0 0 1

and row-reducing yields

[b a 0; c 0 a; 0 c b]⁻¹ =
1/(2b)     1/(2c)    −a/(2bc)
1/(2a)    −b/(2ac)   1/(2c)
−c/(2ab)   1/(2a)    1/(2b)

as long as abc ≠ 0.
Write D = ab + bc + ca + abc. Perform (1/D)R1 → R1. The matrix turns into

1   1        1       | bc/D  ca/D  ab/D
ca  ca+abc   ca      | 0     ca    0
ab  ab       ab+abc  | 0     0     ab

Perform (1/(abc))(R2 − caR1) → R2 and (1/(abc))(R3 − abR1) → R3. We obtain

1 1 1 | bc/D  ca/D               ab/D
0 1 0 | −c/D  (ab + bc + abc)/(bD)  −a/D
0 0 1 | −b/D  −a/D               (ac + bc + abc)/(cD)
2.8.16 Since rank A2 < 5, A2 is not invertible. But then A is not invertible and hence rank (A) < 5.
2.8.17 Each entry can be chosen in p ways, which means that there are p² ways of choosing the two entries of an arbitrary row. The first row cannot be the zero row, hence there are p² − 1 ways of choosing it. The second row cannot be one of the p multiples of the first row, hence there are p² − p ways of choosing it. In total, this gives (p² − 1)(p² − p) invertible 2 × 2 matrices over Zp.
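The count (p² − 1)(p² − p) can be verified by brute force for a small prime, say p = 3, using the fact that a matrix over Zp is invertible iff its determinant is nonzero mod p (an illustrative sketch):

```python
from itertools import product

p = 3
# Enumerate all 2x2 matrices [a b; c d] over Z_p and count nonzero determinants.
count = sum(1 for a, b, c, d in product(range(p), repeat=4)
            if (a * d - b * c) % p != 0)
print(count, (p * p - 1) * (p * p - p))   # 48 48
```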
2.8.18 Assume that both A and B are m × n matrices. Let C = [A B] be the m × (2n) matrix obtained by juxtaposing A and B. rank (C) is the number of linearly independent columns of C, which is composed of the columns of A and B. By column-reducing the first n columns, we find rank (A) linearly independent columns. By column-reducing columns n + 1 to 2n, we find rank (B) linearly independent columns. Every column of C is a linear combination of these rank (A) + rank (B) columns, and so rank (C) ≤ rank (A) + rank (B). Furthermore, by adding the (n + k)-th column (1 ≤ k ≤ n) of C to the k-th column, we see that C is column-equivalent to [A + B B]. But clearly

rank (A + B) ≤ rank ([A + B B]) ,

since [A + B B] is obtained by adding columns to A + B. We deduce

rank (A + B) ≤ rank ([A + B B]) = rank (C) ≤ rank (A) + rank (B) ,

as was to be shewn.
2.8.19 Since the first two columns of AB are not proportional, and since the last column is the sum of the first two, rank (AB) = 2. Now,
(AB)² = [[0, −1, −1], [−1, 0, −1], [1, 1, 2]]² = [[0, −1, −1], [−1, 0, −1], [1, 1, 2]] = AB.
Since BA is a 2 × 2 matrix, rank (BA) ≤ 2. Also,
2 = rank (AB) = rank ((AB)²) = rank (A(BA)B) ≤ rank (BA),
whence rank (BA) = 2.
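The minus signs in the displayed product were lost in typesetting; reconstructing them as above, the matrix is indeed idempotent of rank 2, which this sketch confirms:

```python
import numpy as np

# Reconstructed product AB (the minus signs were lost in the printed display)
AB = np.array([[0, -1, -1],
               [-1, 0, -1],
               [1, 1, 2]])

assert np.array_equal(AB @ AB, AB)      # (AB)^2 = AB
assert np.linalg.matrix_rank(AB) == 2   # rank(AB) = 2
```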
3.1.2 We have
[[1, 2, 3], [2, 3, 1], [3, 1, 2]] (x, y, z)ᵀ = (5, 6, 0)ᵀ,
a system over Z₇. Hence
(x, y, z)ᵀ = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]⁻¹ (5, 6, 0)ᵀ = [[4, 2, 0], [2, 0, 4], [0, 4, 2]] (5, 6, 0)ᵀ = (4, 3, 3)ᵀ.
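As a check of the inverse over Z₇ and of the solution (all arithmetic reduced mod 7):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 3, 1],
              [3, 1, 2]])
Ainv = np.array([[4, 2, 0],
                 [2, 0, 4],
                 [0, 4, 2]])   # claimed inverse of A over Z_7

assert np.array_equal(A @ Ainv % 7, np.eye(3, dtype=int))
x = Ainv @ np.array([5, 6, 0]) % 7
assert list(x) == [4, 3, 3]    # the solution of the system mod 7
```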
0 1 2 3 4 5 6 7 8 9 10 11 12
A B C D E F G H I J K L M
13 14 15 16 17 18 19 20 21 22 23 24 25
N O P Q R S T U V W X Y Z
Answers and Hints 207
we find
M 12 I 8 S 18 T 19 F 5
U = 20 ,
P2 =
S = 18 ,
P3 =
E = 4 ,
P4 =
O = 14 ,
P5 =
A = 0 .
P6 =
N 13 T 19 A 0 F 5 L 11
Thus
20 U 18 S 4 E 14 O 0 A
AP2 = = , AP3 = = , AP4 = = , AP5 = = , AP6 = = .
10 K 24 Y 2 C 5 F 15 P
0 A 12 M 0 A 10 K 22 W
3.1.6 Observe that since 103 is prime, Z₁₀₃ is a field. Adding the first hundred equations,
100x0 + x1 + x2 + ⋯ + x100 = 4950 ⟹ 99x0 = 4950 − 4949 = 1 ⟹ x0 = 77 mod 103.
Now, for 1 ≤ k ≤ 100,
xk = k − 1 − x0 = k − 78 = k + 25.
This gives
x1 = 26, x2 = 27, . . . , x77 = 102, x78 = 0, x79 = 1, x80 = 2, . . . , x100 = 22.
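The key step 99x0 = 1 (mod 103) and the relation xk = k − 1 − x0 (as reconstructed above) are easy to verify with Python's three-argument pow, which computes modular inverses:

```python
p = 103
x0 = pow(99, -1, p)          # solves 99*x0 = 1 (mod 103)
assert x0 == 77
xs = {k: (k - 1 - x0) % p for k in range(1, 101)}   # x_k = k - 1 - x0
assert xs[1] == 26 and xs[77] == 102 and xs[78] == 0 and xs[100] == 22
```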
3.3.1 Observe that the third row is the sum of the first two rows and the fourth row is twice the third. So we have
1 1 1 1 1 1 1 1 1 1 1 1
1 0 1 0 1 1 1 0 1 0 1 1
R3 R1 R2 R3
2 1 2 1 2 0 0 0 0 0 0 0
R4 2R1 2R2 R4
4 2 4 2 4 0 0 0 0 0 0 0
1 0 0 0 1 0 1 0 0 0 1 0
0 1 1 1 0 1
0 0 1 0 0 1
R2 R5 R2
0 0 0 0 0 0
R1 R5 R1
0 0 0 0 0 0
1 0 0 0 1 0
3.3.2 The unique solution is (1, 1, 1, 1, 1)ᵀ.
Performing R1 R2 .
1 2m 1 4m
2m 1 1 2 .
2
1 1 2m 2m
Performing R2 R3 .
1 2m 1 4m
2 .
1 1 2m 2m
2m 1 1 2
If m = 1/2 the matrix becomes
[[1, 1, 1 | 2], [0, 0, 0 | 3/2], [0, 0, 0 | 0]]
and hence the system does not have a solution. If m ≠ 1/2, by performing (1/(1 − 2m))R2 → R2 and (1/(1 − 2m))R3 → R3, the matrix becomes
[[1, 2m, 1 | 4m], [0, 1, 1 | 2m(m − 2)/(1 − 2m)], [0, 1 + 2m, 1 | 2(1 + 2m)]].
3.3.4 By performing the elementary row operations, we obtain the following triangular form:
ax + y 2z = 1,
1 1 a
2b 2c
2bc c
= 1 b 1 ,
2a 2ac 2c
b
c 1 1
2ba 2a 2b
a
(x, y, z)ᵀ = ((b² + c² − a²)/(2bc), (a² + c² − b²)/(2ac), (a² + b² − c²)/(2ab))ᵀ.
x + 5y + 6z = 2, 2x + 2y = 6, 2x + 3y + z = 9,
3.3.9
x = 22 36
y = 23 312
z = 22 37 .
3.3.10 Denote the addition operations applied to the rows by a1 , a2 , a3 , a4 and the subtraction operations to the
columns by b1 , b2 , b3 , b4 . Comparing A and AT we obtain 7 equations in 8 unknowns. By inspecting the diagonal
entries, and the entries of the first row of A and AT , we deduce the following equations
a1 = b1 ,
a2 = b2 ,
a3 = b3 ,
a4 = b4 ,
a1 − b2 = 3,
a1 − b3 = 6,
a1 − b4 = 9.
This is a system of 7 equations in 8 unknowns. We may let a4 = 0 and thus obtain a1 = b1 = 9, a2 = b2 = 6,
a3 = b3 = 3, a4 = b4 = 0.
Performing R5 + R4 R5 we get
1 0 0 1 y 0
0 1 y 1 0 0
.
0
0 1 y 1 0
0
0 0 y3 + 2y 1 y2 + y 1 0
0 0 0 y3 + y2 + 3y 2 0 0
Since y²s − s = (y² + y − 1)s − ys, this last solution can also be written as
(x1, x2, x3, x4, x5)ᵀ = (yt − s, ys − yt, ys − t, s, t)ᵀ, (s, t) ∈ R².
4.1.2 We expand (1F + 1F)(a + b) in two ways, first using axiom 4.7 and then axiom 4.8, obtaining
(1F + 1F)(a + b) = (1F + 1F)a + (1F + 1F)b = a + a + b + b,
and
(1F + 1F)(a + b) = 1F(a + b) + 1F(a + b) = a + b + a + b.
Comparing the two expansions and cancelling a on the left and b on the right,
a + b = b + a,
proving that vector addition is commutative.
4.1.3 We must prove that each of the axioms of a vector space is satisfied. Clearly if (x, y, λ) ∈ R⁺ × R⁺ × R then x ⊕ y = xy > 0 and λ ⊗ x = x^λ > 0, so V is closed under vector addition and scalar multiplication. Commutativity and associativity of vector addition are obvious. The additive identity A satisfies
x ⊕ A = x ⟹ xA = x ⟹ A = 1.
Now
λ ⊗ (x ⊕ y) = (xy)^λ = x^λ y^λ = (x^λ) ⊕ (y^λ) = (λ ⊗ x) ⊕ (λ ⊗ y),
and
(λ + μ) ⊗ x = x^(λ+μ) = x^λ x^μ = (x^λ) ⊕ (x^μ) = (λ ⊗ x) ⊕ (μ ⊗ x),
whence the distributive laws hold.
Finally,
1 ⊗ x = x¹ = x,
and
λ ⊗ (μ ⊗ x) = λ ⊗ (x^μ) = (x^μ)^λ = x^(λμ) = (λμ) ⊗ x,
and the last two axioms also hold.
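The axioms for this exotic vector space on V = R⁺ (with x ⊕ y = xy and λ ⊗ x = x^λ) can be spot-checked numerically; a small sketch (the helper names add and smul are ours):

```python
import math
import random

add = lambda x, y: x * y         # "vector addition" on V = positive reals
smul = lambda lam, x: x ** lam   # "scalar multiplication"

random.seed(1)
for _ in range(100):
    x, y = random.uniform(0.1, 9), random.uniform(0.1, 9)
    lam, mu = random.uniform(-3, 3), random.uniform(-3, 3)
    assert math.isclose(smul(lam, add(x, y)), add(smul(lam, x), smul(lam, y)))
    assert math.isclose(smul(lam + mu, x), add(smul(lam, x), smul(mu, x)))
    assert math.isclose(smul(lam, smul(mu, x)), smul(lam * mu, x))
    assert math.isclose(smul(1, x), x) and math.isclose(add(x, 1), x)
```

Note how the last line checks both 1 ⊗ x = x and the fact that the number 1 acts as the zero vector.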
4.1.4 C is a vector space over R; the proof is trivial. But R is not a vector space over C, since, for example, taking i as a scalar (from C) and 1 as a vector (from R), the scalar multiple i · 1 = i ∉ R, and so there is no closure under scalar multiplication.
Addition is the natural element-wise addition and scalar multiplication is ordinary element-wise scalar multiplica-
tion.
Then
x + y = (a, b, c, d)ᵀ + (a′, b′, c′, d′)ᵀ = (a + a′, b + b′, c + c′, d + d′)ᵀ.
Observe that
(a + a′) − (b + b′) − 3(d + d′) = (a − b − 3d) + (a′ − b′ − 3d′) = 0 + 0 = 0,
meaning that x + y ∈ X, and so X is a vector subspace of R⁴.
4.2.2 Take
u = (a1, 2a1 − 3b1, 5b1, a1 + 2b1, a1)ᵀ, v = (a2, 2a2 − 3b2, 5b2, a2 + 2b2, a2)ᵀ, λ ∈ R.
Put s = a1 + λa2, t = b1 + λb2. Then
u + λv = (s, 2s − 3t, 5t, s + 2t, s)ᵀ ∈ X,
since this last matrix has the basic shape of matrices in X. This shews that X is a vector subspace of R⁵.
4.2.7 We shew that some of the properties in the definition of vector subspace fail to hold in these sets.
Take x = (0, 1, 0)ᵀ, λ = 2. Then x ∈ V but 2x = (0, 2, 0)ᵀ ∉ V as 0 + 2² = 4 ≠ 1. So V is not closed under scalar multiplication.
Take x = (0, 1, 0)ᵀ, y = (1, 0, 0)ᵀ. Then x ∈ W, y ∈ W but x + y = (1, 1, 0)ᵀ ∉ W as 1 · 1 = 1 ≠ 0. Hence W is not closed under vector addition.
Take x = (−1, 1, 0)ᵀ, λ = −1. Then x ∈ Z but λx = (1, −1, 0)ᵀ ∉ Z as 1 + (−1)² = 2 ≠ 0. So Z is not closed under scalar multiplication.
4.2.9 Assume contrariwise that V = U1 ∪ U2 ∪ ⋯ ∪ Uk is the shortest such list. Since the Uj are proper subspaces, k > 1. Choose x ∈ U1, x ∉ U2 ∪ ⋯ ∪ Uk, and choose y ∉ U1. Put L = {y + λx | λ ∈ F}. Claim: L ∩ U1 = ∅. For if u ∈ L ∩ U1 then ∃a0 ∈ F with u = y + a0 x, and so y = u − a0 x ∈ U1, a contradiction. So L and U1 are disjoint.
We now shew that L has at most one vector in common with Uj, 2 ≤ j ≤ k. For, if there were two elements of F, a ≠ b, with y + ax, y + bx ∈ Uj, j ≥ 2, then
(a − b)x = (y + ax) − (y + bx) ∈ Uj,
whence x ∈ Uj, a contradiction.
Conclusion: since F is infinite, L is infinite. But we have shewn that L can have at most one element in common with each of the Uj. This means that there are not enough Uj to go around to cover the whole of L. So V cannot be a finite union of proper subspaces.
It is easy to verify that these subspaces satisfy the conditions of the problem.
4.3.1 If
a(1, 0, 0)ᵀ + b(1, 1, 0)ᵀ + c(1, 1, 1)ᵀ = (0, 0, 0)ᵀ,
then
(a + b + c, b + c, c)ᵀ = (0, 0, 0)ᵀ,
whence c = 0, then b = 0, then a = 0, and the family is linearly independent.
4.3.2 Assume
a(1, 1, 1, 1)ᵀ + b(1, 1, −1, −1)ᵀ + c(1, −1, 1, −1)ᵀ + d(1, 1, 0, 1)ᵀ = (0, 0, 0, 0)ᵀ.
Then
a + b + c + d = 0,
a + b − c + d = 0,
a − b + c = 0,
a − b − c + d = 0.
Subtracting the second equation from the first, we deduce 2c = 0, that is, c = 0. Subtracting the third equation from the fourth, we deduce −2c + d = 0, or d = 0. From the first and third equations, we then deduce a + b = 0 and a − b = 0, which entails a = b = 0. In conclusion, a = b = c = d = 0.
Now, put
x(1, 1, 1, 1)ᵀ + y(1, 1, −1, −1)ᵀ + z(1, −1, 1, −1)ᵀ + w(1, 1, 0, 1)ᵀ = (1, 2, 1, 1)ᵀ.
Then
x + y + z + w = 1,
x + y − z + w = 2,
x − y + z = 1,
x − y − z + w = 1.
Solving as before, we find x = 2, y = 1/2, z = −1/2, w = −1, so that
2(1, 1, 1, 1)ᵀ + (1/2)(1, 1, −1, −1)ᵀ − (1/2)(1, −1, 1, −1)ᵀ − (1, 1, 0, 1)ᵀ = (1, 2, 1, 1)ᵀ.
4.3.5 We have
(v1 + v2) − (v2 + v3) + (v3 + v4) − (v4 + v1) = 0,
a non-trivial linear combination of these vectors equalling the zero-vector.
4.3.7 Yes. Suppose that a + b√2 = 0 is a non-trivial linear combination of 1 and √2 with rational numbers a and b. If one of a, b is different from 0 then so is the other. Hence
a + b√2 = 0 ⟹ √2 = −a/b.
The sinistral side of the equality √2 = −a/b is irrational whereas the dextral side is rational, a contradiction.
4.3.8 No. The representation 2 · 1 + (−√2) · √2 = 0 is a non-trivial linear combination of 1 and √2.
ae^x = 0.
Since the exponential function never vanishes, we deduce that a = 0. Thus a = b = c = 0 and the family is linearly
independent over R.
which implies
cos 2x − cos² x + sin² x = 0.
In order to do this we find the Taylor expansion of p around x = −1. Letting x = −1 in this last equality,
s = p(−1) = a − b + c − d ∈ R.
Now,
p′(x) = b + 2cx + 3dx² = t + 2u(1 + x) + 3v(1 + x)².
Letting x = −1 we find
t = p′(−1) = b − 2c + 3d ∈ R.
Again,
p″(x) = 2c + 6dx = 2u + 6v(1 + x).
Letting x = −1 we find
u = p″(−1)/2 = c − 3d ∈ R.
Finally,
p‴(x) = 6d = 6v,
so we let v = d ∈ R. In other words, we have
4.4.3 It is
a[[1, 0], [0, 0]] + b[[0, 0], [0, 1]] + c[[0, 1], [1, 0]] = [[a, c], [c, b]],
i.e., this family spans the set of all symmetric 2 × 2 matrices over R.
4.5.1 We have
(a, 2a − 3b, 5b, a + 2b, a)ᵀ = a(1, 2, 0, 1, 1)ᵀ + b(0, −3, 5, 2, 0)ᵀ,
and so the family
{(1, 2, 0, 1, 1)ᵀ, (0, −3, 5, 2, 0)ᵀ}
spans the subspace. To shew that this is a linearly independent family, assume that
a(1, 2, 0, 1, 1)ᵀ + b(0, −3, 5, 2, 0)ᵀ = (0, 0, 0, 0, 0)ᵀ.
Then it follows clearly that a = b = 0, and so this is a linearly independent family. Conclusion: the family
{(1, 2, 0, 1, 1)ᵀ, (0, −3, 5, 2, 0)ᵀ}
is a basis for the subspace.
4.5.2 Suppose
0 = a(v1 + v2) + b(v2 + v3) + c(v3 + v4) + d(v4 + v5) + f(v5 + v1)
= (a + f)v1 + (a + b)v2 + (b + c)v3 + (c + d)v4 + (d + f)v5.
Since {v1, v2, . . . , v5} are linearly independent, we have
a + f = 0,
a+b=0
b+c=0
c+d=0
d + f = 0.
Solving, we find a = b = c = d = f = 0, which means that the vectors
{v1 + v2, v2 + v3, v3 + v4, v4 + v5, v5 + v1}
are linearly independent. Since the dimension of V is 5, and we have 5 linearly independent vectors, they must also be a basis for V.
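The system a + f = 0, a + b = 0, b + c = 0, c + d = 0, d + f = 0 forces the trivial solution because its coefficient matrix is invertible; a quick numerical check of that determinant:

```python
import numpy as np

# Coefficients of v1..v5 in a(v1+v2)+b(v2+v3)+c(v3+v4)+d(v4+v5)+f(v5+v1):
# each column corresponds to one of the unknowns a, b, c, d, f.
M = np.array([[1, 0, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
assert round(np.linalg.det(M)) == 2   # nonzero, so a = b = c = d = f = 0 is forced
```

(The same computation shews why the analogous statement fails for an even number of vectors: the corresponding determinant would vanish.)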
4.5.3 The matrix of coefficients is already in echelon form. The dimension of the solution space is n 1 and the
following vectors in R2n form a basis for the solution space
1 1
1
1 0
0
0 1
. . .
.. ..
. .
1
0 0
a1 =
,
a2 =
,...,
an1 =
1 .
1 1
0
1 0
..
.
0 1
0
. .
.. ..
1
0 0
(The second 1 occurs on the n-th position. The 1s migrate from the 2nd and n + 1-th position on a1 to the
n 1-th and 2n-th position on an1 .)
(A + B)ᵀ = −(A + B),
so V is closed under addition. A basis consists of the
1 + 2 + ⋯ + (n − 1) = (n − 1)n/2
matrices Ak, which are 0 everywhere except in the ij-th and ji-th spot, where 1 ≤ i < j ≤ n, aij = 1 = −aji and i + j = k, 3 ≤ k ≤ 2n − 1. (In the case n = 3, they are
[[0, 1, 0], [−1, 0, 0], [0, 0, 0]], [[0, 0, 1], [0, 0, 0], [−1, 0, 0]], [[0, 0, 0], [0, 0, 1], [0, −1, 0]],
for example.) It is clear that these matrices form a basis for V and hence V has dimension (n − 1)n/2.
4.5.5 Take (u, v) ∈ X² and λ ∈ R. Then
u = (a, b, c, d)ᵀ with b + 2c = 0, v = (a′, b′, c′, d′)ᵀ with b′ + 2c′ = 0.
We have
u + λv = (a + λa′, b + λb′, c + λc′, d + λd′)ᵀ,
and
(b + λb′) + 2(c + λc′) = (b + 2c) + λ(b′ + 2c′) = 0 + 0 = 0,
so X is a subspace. Now
(a, b, c, d)ᵀ = (a, −2c, c, d)ᵀ = a(1, 0, 0, 0)ᵀ + c(0, −2, 1, 0)ᵀ + d(0, 0, 0, 1)ᵀ.
It is clear that
(1, 0, 0, 0)ᵀ, (0, −2, 1, 0)ᵀ, (0, 0, 0, 1)ᵀ
are linearly independent and span X. They thus constitute a basis for X.
4.5.6 As a basis we may take the n(n + 1)/2 matrices Eij ∈ Mn(F) for 1 ≤ i ≤ j ≤ n.
4.5.7 dim X = 2; as basis one may take {v1, v2}.
4.5.8 dim X = 3; as basis one may take {v1, v2, v3}.
4.5.9 dim X = 3; as basis one may take {v1, v2, v3}.
and if a + b + c = 0, a + d + g = 0, a′ + b′ + c′ = 0, a′ + d′ + g′ = 0, then
(a + a′) + (b + b′) + (c + c′) = (a + b + c) + (a′ + b′ + c′) = 0 + 0 = 0,
and
(a + a′) + (d + d′) + (g + g′) = (a + d + g) + (a′ + d′ + g′) = 0 + 0 = 0,
proving that V is a subspace.
Now, a + b + c = 0 = a + d + g ⟹ a = −b − c, g = b + c − d. Thus
[[a, b, c], [0, d, f], [0, 0, g]] = [[−b − c, b, c], [0, d, f], [0, 0, b + c − d]]
= b[[−1, 1, 0], [0, 0, 0], [0, 0, 1]] + c[[−1, 0, 1], [0, 0, 0], [0, 0, 1]] + d[[0, 0, 0], [0, 1, 0], [0, 0, −1]] + f[[0, 0, 0], [0, 0, 1], [0, 0, 0]].
It is clear that these four matrices span V and are linearly independent. Hence, dim V = 4.
2. Since the
a k are four linearly independent vectors in R4 and dim R4 = 4, they form a basis for R4 . Now, we
want to solve
x 1 1 1 1 x 1
y 1 1 1 1
y 2
A = =
z 1
1 1 1
z 1
w 1 1 1 1 w 1
and so
x 1 1/4 1/4 1/4 1/4 1 5/4
y 2 1/4 1/4 1/4 1/4 2 1/4
1
=A = = .
z
1 1/4
1/4 1/4 1/4 1 1/4
w 1 1/4 1/4 1/4 1/4 1 1/4
It follows that
1 1 1 1 1
2 5 1 1
1 1
1
1
1
= + .
4 4 4 4
1 1 1 1 1
1 1 1 1 1
3. Since we have
1 1 1 1 1
1
1 1
2 1 1
5 1 1
= + ,
4 4 4 4
1 1 1 1 1
1 1 1 1 1
1 1
0 0
a1 a 1
1 1a 1 a+1
4.6.2 [1] a = 1, [2] (A(a))1 = [3]
1 a
a1 1 0
a1
1 a 1 a 1
1 1 1
0
a1 a1 a1
0
a 1 a 1
a a 1
0
a1 a1 a 1
1 2+a a+1 1
5.1.2
L(H + λH′) = A⁻¹(H + λH′)A⁻¹ = A⁻¹HA⁻¹ + λ(A⁻¹H′A⁻¹) = L(H) + λL(H′),
proving that L is linear.
5.1.3 Let S be convex and let a, b ∈ T(S). We must prove that ∀λ ∈ [0; 1], (1 − λ)a + λb ∈ T(S). But since a, b belong to T(S), ∃x ∈ S, y ∈ S with T(x) = a, T(y) = b. Since S is convex, (1 − λ)x + λy ∈ S. Thus
T((1 − λ)x + λy) ∈ T(S),
which means that
(1 − λ)T(x) + λT(y) ∈ T(S),
that is,
(1 − λ)a + λb ∈ T(S),
as we wished to show.
5.2.1 Assume (x, y, z)ᵀ ∈ ker (L). Then
L(x, y, z)ᵀ = (0, 0, 0)ᵀ,
that is,
x − y − z = 0,
x + y + z = 0,
z = 0.
This implies that x − y = 0 and x + y = 0, and so x = y = z = 0. This means that
ker (L) = {(0, 0, 0)ᵀ},
and L is injective.
By the Dimension Theorem 244, dim Im (L) = dim V dim ker (L) = 3 0 = 3, which means that
Im (L) = R3
and L is surjective.
5.2.2
1. If a is any scalar,
L((x, y, z, w)ᵀ + a(x′, y′, z′, w′)ᵀ) = L(x + ax′, y + ay′, z + az′, w + aw′)ᵀ
= ((x + ax′) + (y + ay′), (x + ax′) − (y + ay′))ᵀ
= (x + y, x − y)ᵀ + a(x′ + y′, x′ − y′)ᵀ
= L(x, y, z, w)ᵀ + aL(x′, y′, z′, w′)ᵀ,
whence L is linear.
2. We have
L(x, y, z, w)ᵀ = (x + y, x − y)ᵀ = (0, 0)ᵀ ⟹ x = −y, x = y ⟹ x = y = 0 ⟹ ker (L) = {(0, 0, z, w)ᵀ : z ∈ R, w ∈ R}.
Then
T(a, b, c)ᵀ = (a − b)T(1, 0, 0)ᵀ + bT(1, 1, 0)ᵀ + cT(0, 0, 1)ᵀ
= (a − b)(1, 0, 1, 0)ᵀ + b(2, 1, 0, 0)ᵀ + c(1, 1, 1, 0)ᵀ
= (a + b + c, b + c, a − b + c, 0)ᵀ,
and so
Im (T) = span{(1, 0, 1, 0)ᵀ, (1, 1, 1, 0)ᵀ}.
Then x = −2y and so
(x, y)ᵀ = y(−2, 1)ᵀ.
This means that dim ker (L) = 1 and ker (L) is the line through the origin and (−2, 1). Observe that L is not injective.
By the Dimension Theorem 244, dim Im (L) = dim V − dim ker (L) = 2 − 1 = 1. Assume that (a, b, c)ᵀ ∈ Im (L). Then ∃(x, y) ∈ R² such that
L(x, y)ᵀ = (x + 2y, x + 2y, 0)ᵀ = (a, b, c)ᵀ.
and so L is injective.
By the Dimension Theorem 244, dim Im (L) = dim V − dim ker (L) = 2 − 0 = 2. Assume that (a, b, c)ᵀ ∈ Im (L). Then ∃(x, y) ∈ R² such that
L(x, y)ᵀ = (x − y, x + y, 0)ᵀ = (a, b, c)ᵀ.
This means that
(a, b, c)ᵀ = (x − y, x + y, 0)ᵀ = x(1, 1, 0)ᵀ + y(−1, 1, 0)ᵀ.
Since
(1, 1, 0)ᵀ, (−1, 1, 0)ᵀ
are linearly independent, they span a subspace of dimension 2 in R³, that is, a plane containing the origin. Observe that L is not surjective.
5.2.6 Assume that
L(x, y, z)ᵀ = (x − y − z, y − 2z)ᵀ = (0, 0)ᵀ.
Then y = 2z and x = y + z = 3z. This means that
ker (L) = {z(3, 2, 1)ᵀ : z ∈ R}.
Hence dim ker (L) = 1, and so L is not injective.
Now, if
L(x, y, z)ᵀ = (x − y − z, y − 2z)ᵀ = (a, b)ᵀ,
then
(a, b)ᵀ = x(1, 0)ᵀ + y(−1, 1)ᵀ + z(−1, −2)ᵀ.
Now,
(−1, −2)ᵀ = −3(1, 0)ᵀ − 2(−1, 1)ᵀ,
and
(1, 0)ᵀ, (−1, 1)ᵀ
are linearly independent. Since dim Im (L) = 2, we have Im (L) = R², and so L is surjective.
and so dim ker (L) = 3. Thus L is not injective. L is surjective, however. For if α ∈ R, then
α = tr [[α, 0], [0, 0]].
L(A + B) = (A + B)T + (A + B)
= AT + BT + A + B
= AT + A + BT + B
= L(A) + L(B),
2. Assume that A = [[a, b], [c, d]] ∈ ker (L). Then
[[0, 0], [0, 0]] = L(A) = [[a, c], [b, d]] + [[a, b], [c, d]] = [[2a, b + c], [b + c, 2d]],
from where
Im (L) = span{[[2, 0], [0, 0]], [[0, 1], [1, 0]], [[0, 0], [0, 2]]}.
I(
x ) T (
x ) ker (T )
(I T )(
x ) ker (T )
x Im (I T ) .
Hence
a 1 1 1 1
b 1 0 1 2
T = dT + (2a c b)T + (d 2a + 2c + b)T + (a + c)T
c 1 1 1 0
d 1 0 0 0
0 1 0 1
= d + (2a c b) + (d 2a + 2c + b) + (a + c)
0 1 0 1
1 1 1 1
ab
= a b .
a + 2d
This gives
1 0 0 0
1 1 0 0
0 1 0 0
T = 1 , T =1 , T =0 , T =0 .
0 0 1 0
1
0
0
2
0 0 0 1
The required matrix is therefore
1 1 0 0
1 1 0 0 .
1 0 0 2
1 1
This matrix has rank 2, and so dim Im (T ) = 2. We can use , as a basis for Im (T ). Thus by the
1 1
1 0
a 2d
0 a b
b 2d
dimension theorem dim ker (T ) = 2. If ker
=T = , Hence the vectors in (T ) have the form
0 ab
c
c
0
a + 2d
d d
2 0
2 0
and hence we may take , as a basis for ker (T ).
0 1
1 0
a + 0 + 1 = 0 ⟹ a = −1,
3 + b − 5 = 0 ⟹ b = 2,
−1 + 2 + c = 0 ⟹ c = −1.
1
1 ker (T ) and so
2. Observe that
1
1 0
1 = 0 .
T
1 0
Thus
1
2 1 3
1 T
T 0 = T = ,
1 2
0 1 1 5
0
1 1 1
T = T T = ,
1 2 1 2
0 1 1 1
0 1 1 1
T 0 = T
1 T
= .
1 0
1 2 1 1
whence
1 1
Im (T ) = span
1 , 1 .
2 3
4. We have
3 1 1 0 11/2
1
11
5 13 =
T = = ,
1 1 0 1 5/2
2 2 2
2
8 1 1 0 13/2
B
and
4
1
1
0
15/2
1 15
= = 7
19
=
T
2 1 0 1 7/2
.
2 2 2
3
11 1 1 0 19/2
B
5.3.4 The matrix will be a 2 × 3 matrix. In each case, we find the action of L on the basis elements of R³ and express the result in the given basis for R².
1. We have
L(1, 0, 0)ᵀ = (1, 3)ᵀ, L(0, 1, 0)ᵀ = (2, 0)ᵀ, L(0, 0, 1)ᵀ = (0, −1)ᵀ.
The required matrix is
[[1, 2, 0], [3, 0, −1]].
2. We have
L(1, 0, 0)ᵀ = (1, 3)ᵀ, L(1, 1, 0)ᵀ = (3, 3)ᵀ, L(1, 1, 1)ᵀ = (3, 2)ᵀ.
The required matrix is
[[1, 3, 3], [3, 3, 2]].
3. We have
L(1, 0, 0)ᵀ = (1, 3)ᵀ = −2(1, 0)ᵀ + 3(1, 1)ᵀ = (−2, 3)ᵀ in the basis A,
L(1, 1, 0)ᵀ = (3, 3)ᵀ = 0(1, 0)ᵀ + 3(1, 1)ᵀ = (0, 3)ᵀ in the basis A,
L(1, 1, 1)ᵀ = (3, 2)ᵀ = 1(1, 0)ᵀ + 2(1, 1)ᵀ = (1, 2)ᵀ in the basis A.
The required matrix is
[[−2, 0, 1], [3, 3, 2]].
5.3.5 Observe that Im (T) = ker (T) = span{(2, 3)ᵀ}, and so
T(2, 3)ᵀ = (0, 0)ᵀ.
Now
T(1, 0)ᵀ = T(3(1, 1)ᵀ − (2, 3)ᵀ) = 3T(1, 1)ᵀ − T(2, 3)ᵀ = (6, 9)ᵀ,
and
T(0, 1)ᵀ = T((2, 3)ᵀ − 2(1, 1)ᵀ) = T(2, 3)ᵀ − 2T(1, 1)ᵀ = (−4, −6)ᵀ.
The required matrix is thus
[[6, −4], [9, −6]].
5.3.7 First observe that ker (B) ⊆ ker (AB), since ∀X ∈ Mq×1(R),
BX = 0 ⟹ (AB)X = A(BX) = 0.
Now
dim ker (B) = q − dim Im (B) = q − rank (B) = q − rank (AB) = q − dim Im (AB) = dim ker (AB).
Thus ker (B) = ker (AB). Similarly, we can demonstrate that ker (ABC) = ker (BC). Thus
rank (ABC) = dim Im (ABC) = dim Im (BC) = rank (BC).
6.3.1 Multiplying the first column of the given matrix by a, its second column by b, and its third column by c, we obtain
abcΔ = det [[abc, abc, abc], [a², b², c²], [a³, b³, c³]].
We may factor out abc from the first row of this last matrix, thereby obtaining
abcΔ = abc det [[1, 1, 1], [a², b², c²], [a³, b³, c³]].
Performing C2 − C1 → C2 and C3 − C1 → C3,
= (a + b + c) det [[1, 0, 0], [2b, b − c − a, 0], [2c, 0, c − a − b]] = (a + b + c)(b − c − a)(c − a − b),
as wanted.
6.3.3 det A1 = det A = 540 by multilinearity. det A2 = det A1 = 540 by alternancy. det A3 = 3 det A2 = 1620
by both multilinearity and homogeneity from one column. det A4 = det A3 = 1620 by multilinearity, and det A5 =
2 det A4 = 3240 by homogeneity from one column.
and
..
a12 a13 . a1n
..
0 a23 . a2n
..
Y = 0
.
0 a3n
. .. .. .. ..
.. . . . .
..
0 0 0 .
Clearly A = X + Y, det X = (a11 )(a22 ) (ann ) 6= 0, and det Y = n 6= 0. This completes the proof.
6.3.7 No.
6.4.1 We have
det A = 2(−1)^(1+2) det [[4, 6], [7, 9]] + 5(−1)^(2+2) det [[1, 3], [7, 9]] + 8(−1)^(3+2) det [[1, 3], [4, 6]]
= −2(36 − 42) + 5(9 − 21) − 8(6 − 12) = 0.
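The cofactors above come from the matrix [[1, 2, 3], [4, 5, 6], [7, 8, 9]]; a quick sketch re-doing the Laplace expansion along the second column numerically:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

def minor(A, i, j):
    """Delete row i and column j."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

# Laplace expansion along the second column (j = 1, 0-indexed)
expansion = sum((-1) ** (i + 1) * A[i, 1] * np.linalg.det(minor(A, i, 1))
                for i in range(3))
assert round(expansion) == 0
assert round(np.linalg.det(A)) == 0
```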
a det [[a, b], [c, a]] − b det [[c, b], [b, a]] + c det [[c, a], [b, c]] = a(a² − bc) − b(ca − b²) + c(c² − ab) = a³ + b³ + c³ − 3abc.
6.4.3 Since the second column has three 0s, it is advantageous to expand along it, and thus we are reduced to calculating
3(−1)^(3+2) det [[1, 1, 1], [2, 0, 1], [1, 0, 1]].
Expanding this last determinant along the second column, the original determinant is thus
3(−1)^(3+2) (1)(−1)^(1+2) det [[2, 1], [1, 1]] = 3(−1)(1)(−1)(1) = 3.
0 = det [[1, 1, 1, 1], [x, a, 0, 0], [x, 0, b, 0], [x, 0, 0, c]]
= det [[a, 0, 0], [0, b, 0], [0, 0, c]] − x det [[1, 1, 1], [0, b, 0], [0, 0, c]] + x det [[1, 1, 1], [a, 0, 0], [0, 0, c]] − x det [[1, 1, 1], [a, 0, 0], [0, b, 0]]
= abc − xbc − xac − xab.
It follows that
abc = x(bc + ab + ca),
whence
1/x = (bc + ab + ca)/abc = 1/a + 1/b + 1/c,
as wanted.
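The conclusion can be checked numerically: with the 4 × 4 matrix as reconstructed above, the determinant vanishes exactly when 1/x = 1/a + 1/b + 1/c.

```python
import numpy as np

a, b, c = 2.0, 3.0, 5.0
x = 1 / (1/a + 1/b + 1/c)             # the claimed solution
M = np.array([[1, 1, 1, 1],
              [x, a, 0, 0],
              [x, 0, b, 0],
              [x, 0, 0, c]])
assert abs(np.linalg.det(M)) < 1e-9   # det vanishes at exactly this x
```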
= 2ab(a − b),
as wanted.
Expanding the resulting two determinants along the second row, we obtain
ad det [[a, b], [c, d]] + b(−c) det [[a, b], [c, d]] = ad(ad − bc) − bc(ad − bc) = (ad − bc)²,
as wanted.
Assume that the result is true for n 1. Expanding the determinant along the first column
1 1 1 1 1
..
..
0 0 . 0 0
1 0 0 . 0 0
1 0 0 0
0 1 0 0 0
det = 1 det 0
1 0 0
0 0 1 0 0 .
.. ..
..
.
.. .. ..
. .
..
. . .
0 0 1 0
0 0 0 1 0
1 1 1 1
..
1 0 . 0
0 1 0
1 det
0
0 0
. .. ..
..
. .
0 0 1 0
= 1(0) (1)(1)n
= (1)n+1 ,
6.4.10 Perform Ck C1 Ck for k [2; n]. Observe that these operations do not affect the value of the determinant.
Then
1 n1 n1 n1 n 1
..
n 2n 0 0 . 0
n 0 3n 0 0
det A = det .
n
0 0 4n 0
. .. .. ..
..
. . .
n 0 0 0 0 0
n 1 n1 n1 n1 n 1
..
2 n 0 0 . 0 0
0 3n 0 0 0
det A = (1)1+n
n det
0
0 4n 0 0
. .. .. ..
..
. . .
0 0 0 1 0
= (n!)(1)n
= (1)n+1 n!,
6.4.11 Recall that
C(n, k) = C(n, n − k),  Σ_{k=0}^{n} C(n, k) = 2ⁿ,
and
Σ_{k=0}^{n} (−1)ᵏ C(n, k) = 0, if n > 0.
Assume that n is odd. Observe that then there are n + 1 (an even number) of columns and that on the same row, C(n, k) is on a column of opposite parity to that of C(n, n − k). By performing C1 − C2 + C3 − C4 + ⋯ + Cn − Cn+1 → C1, the first column becomes all 0s, whence the determinant is 0 if n is odd.
2 2
b + 2bc + c + ab + ac ab ac
C1 +C2 +C3 C1
det
= 2 2 2
ab + a + 2ca + c + bc a + 2ca + c
2
bc
ac + bc + a2 + 2ab + b2 bc a2 + 2ab + b2
(b + c)(a + b + c) ab ac
det
= 2
(a + c)(a + b + c) a + 2ca + c
2
bc
2 2
(a + b)(a + b + c) bc a + 2ab + b
Factoring this is
2(a + b + c) (a + c)(a + b + c) (a + b)(a + b + c)
(a + b + c) det
,
a+c a2 + 2ca + c2 bc
a+b bc a2 + 2ab + b2
which in turn is
2 a+c a+b
2
(a + b + c) det
a + c a2 + 2ca + c2 bc
a+b bc a2 + 2ab + b2
2 a c a b
2
(a + b + c) det
2
a + c 0 a ab ac
a+b a2 ab ac 0
This last matrix we will expand by the second column, obtaining that the original determinant is thus
2
a + c a ab ac 2 a b
(a + b + c)2 + (a2 + ab + ac) det
(a + c) det
2
a+b 0 a+c a ab ac
This simplifies to
= 2abc(a + b + c)3 ,
as claimed.
6.4.16 We have, performing R1 + R2 + R3 + R4 → R1,
det [[a, b, c, d], [d, a, b, c], [c, d, a, b], [b, c, d, a]]
= det [[a + b + c + d, a + b + c + d, a + b + c + d, a + b + c + d], [d, a, b, c], [c, d, a, b], [b, c, d, a]]
= (a + b + c + d) det [[1, 1, 1, 1], [d, a, b, c], [c, d, a, b], [b, c, d, a]].
Performing C4 − C3 + C2 − C1 → C4, this is
= (a + b + c + d) det [[1, 1, 1, 0], [d, a, b, c − b + a − d], [c, d, a, b − a + d − c], [b, c, d, a − d + c − b]]
= (a + b + c + d)(a − b + c − d) det [[1, 1, 1, 0], [d, a, b, 1], [c, d, a, −1], [b, c, d, 1]].
Performing R2 + R3 → R2 and R4 + R3 → R4, and then expanding along the fourth column,
= (a + b + c + d)(a − b + c − d) det [[1, 1, 1, 0], [d + c, a + d, b + a, 0], [c, d, a, −1], [b + c, c + d, a + d, 0]]
= (a + b + c + d)(a − b + c − d) det [[1, 1, 1], [d + c, a + d, b + a], [b + c, c + d, a + d]].
Performing C1 − C3 → C1 and C2 − C3 → C2, and expanding along the first row,
= (a + b + c + d)(a − b + c − d) det [[0, 0, 1], [d + c − b − a, d − b, b + a], [b + c − a − d, c − a, a + d]]
= (a + b + c + d)(a − b + c − d) det [[d + c − b − a, d − b], [b + c − a − d, c − a]]
= (a + b + c + d)(a − b + c − d)((d + c − b − a)(c − a) − (d − b)(b + c − a − d))
= (a + b + c + d)(a − b + c − d)((c − a)(c − a) + (c − a)(d − b) − (d − b)(c − a) − (d − b)(b − d))
= (a + b + c + d)(a − b + c − d)((a − c)² + (b − d)²).
Since
(a − c)² + (b − d)² = (a − c + i(b − d))(a − c − i(b − d)),
the above determinant is then
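The full factorisation of this circulant determinant can be verified symbolically; a sketch with SymPy:

```python
from sympy import symbols, Matrix, expand

a, b, c, d = symbols('a b c d')
M = Matrix([[a, b, c, d],
            [d, a, b, c],
            [c, d, a, b],
            [b, c, d, a]])
factored = (a + b + c + d) * (a - b + c - d) * ((a - c)**2 + (b - d)**2)
assert expand(M.det() - factored) == 0   # the two expressions agree identically
```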
7.2.1 We have
det(λI₂ − A) = det [[λ − 1, −1], [−1, λ − 1]] = (λ − 1)² − 1 = λ(λ − 2),
If
1 1 a 0
=
0 0 b 0
then a = b. Thus
a 1
= a
b 1
1
as the eigenvector corresponding to = 0. Similarly, for = 2,
and we can take
1
1 3
2I2 A =
,
1 3
If
1 3 a 0
=
0 0 b 0
1
as the eigenvector corresponding to = 2.
and we can take
3
7.2.5 We have
det(λI₃ − A) = det [[λ, −2, −1], [−2, λ − 3, −2], [−1, −2, λ]]
= λ det [[λ − 3, −2], [−2, λ]] + 2 det [[−2, −2], [−1, λ]] − det [[−2, λ − 3], [−1, −2]]
= λ(λ² − 3λ − 4) + 2(−2λ − 2) − (λ + 1)
= λ(λ − 4)(λ + 1) − 5(λ + 1)
= (λ² − 4λ − 5)(λ + 1)
= (λ + 1)²(λ − 5).
If λ = 5,
(5I₃ − A)(a, b, c)ᵀ = [[5, −2, −1], [−2, 2, −2], [−1, −2, 5]](a, b, c)ᵀ = (0, 0, 0)ᵀ ⟹ a = c, b = 2c,
whence (a, b, c)ᵀ = c(1, 2, 1)ᵀ. We may take as eigenvector (1, 2, 1)ᵀ.
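With the matrix reconstructed from the expansion above (the off-diagonal signs were lost in typesetting, so A = [[0, 2, 1], [2, 3, 2], [1, 2, 0]] is an assumption), the spectrum can be checked numerically:

```python
import numpy as np

# Matrix reconstructed from the cofactor expansion in the solution
A = np.array([[0, 2, 1],
              [2, 3, 2],
              [1, 2, 0]])
vals = np.linalg.eigvalsh(A)          # A is symmetric; eigenvalues in ascending order
assert np.allclose(vals, [-1, -1, 5])
assert np.allclose(A @ [1, 2, 1], 5 * np.array([1, 2, 1]))   # eigenvector for 5
```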
7.2.6 The characteristic polynomial of A must be λ² − 1, which means that tr (A) = 0 and det A = −1. Hence A must be of the form [[a, c], [b, −a]], with −a² − bc = −1, that is, a² + bc = 1.
7.2.7 We must shew that det(In A) = det(In A)T . Now, recall that the determinant of a square matrix is the
same as the determinant of its transpose. Hence
7.3.1 Put
D = [[1, 0], [0, 2]], P = [[1, 1], [0, 1]].
We find
P⁻¹ = [[1, −1], [0, 1]].
Since A = PDP⁻¹,
A¹⁰ = PD¹⁰P⁻¹ = [[1, 1], [0, 1]] [[1, 0], [0, 1024]] [[1, −1], [0, 1]] = [[1, 1023], [0, 1024]].
7.3.2
1. A has characteristic polynomial det [[λ − 9, 4], [−20, λ + 9]] = (λ − 9)(λ + 9) + 80 = λ² − 1 = (λ − 1)(λ + 1).
2. (λ − 1)(λ + 1) = 0 ⟹ λ ∈ {−1, 1}.
3. For λ = −1 we have
[[9, −4], [20, −9]](a, b)ᵀ = −(a, b)ᵀ ⟹ 10a = 4b ⟹ a = (2/5)b,
so we can take (2, 5)ᵀ as an eigenvector.
For λ = 1 we have
[[9, −4], [20, −9]](a, b)ᵀ = (a, b)ᵀ ⟹ 8a = 4b ⟹ b = 2a,
so we can take (1, 2)ᵀ as an eigenvector.
4. We can do this problem in at least three ways. The quickest is perhaps the following.
Recall that a 2 × 2 matrix has characteristic polynomial λ² − (tr (A))λ + det A. Since A has eigenvalues 1 and −1, A²⁰ has eigenvalues 1²⁰ = 1 and (−1)²⁰ = 1, i.e., the sole eigenvalue of A²⁰ is 1, and so A²⁰ has characteristic polynomial (λ − 1)² = λ² − 2λ + 1. This means that tr (A²⁰) = 2, and so a + d = 2.
Alternatively,
A²⁰ = [[2, 1], [5, 2]] [[1, 0], [0, −1]]²⁰ [[2, 1], [5, 2]]⁻¹ = [[2, 1], [5, 2]] [[1, 0], [0, 1]] [[2, 1], [5, 2]]⁻¹ = [[1, 0], [0, 1]],
and so a + d = 2. One may also use the fact that tr (XY) = tr (YX), and hence
tr (A²⁰) = tr (PD²⁰P⁻¹) = tr (PP⁻¹D²⁰) = tr (D²⁰) = 2.
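Since the eigenvalues are ±1 and A is diagonalisable, in fact A² = I, which makes the conclusion immediate; a numerical check:

```python
import numpy as np

A = np.array([[9, -4],
              [20, -9]])
assert np.array_equal(A @ A, np.eye(2, dtype=int))   # A^2 = I
A20 = np.linalg.matrix_power(A, 20)
assert np.array_equal(A20, np.eye(2, dtype=int))     # hence A^20 = I
assert np.trace(A20) == 2                            # so a + d = 2
```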
7.3.3 Put
D = [[1, 0, 0], [0, 1, 0], [0, 0, 3]], X = [[1, 1, 1], [0, 1, 1], [0, 0, 1]].
Then we know that A = XDX⁻¹, and so we need to find X⁻¹. But this is readily obtained by performing R1 − R2 → R1 and R2 − R3 → R2 in the augmented matrix
[[1, 1, 1 | 1, 0, 0], [0, 1, 1 | 0, 1, 0], [0, 0, 1 | 0, 0, 1]],
getting
X⁻¹ = [[1, −1, 0], [0, 1, −1], [0, 0, 1]].
Thus
A = XDX⁻¹ = [[1, 1, 1], [0, 1, 1], [0, 0, 1]] [[1, 0, 0], [0, 1, 0], [0, 0, 3]] [[1, −1, 0], [0, 1, −1], [0, 0, 1]] = [[1, 0, 2], [0, 1, 2], [0, 0, 3]].
7.3.6 We find
det(λI₂ − A) = det [[λ + 7, 6], [−12, λ − 10]] = λ² − 3λ + 2 = (λ − 1)(λ − 2).
A short calculation shews that the eigenvalue λ = 2 has eigenvector (2, −3)ᵀ and that the eigenvalue λ = 1 has eigenvector (3, −4)ᵀ. Thus we may form
D = [[2, 0], [0, 1]], P = [[2, 3], [−3, −4]], P⁻¹ = [[−4, −3], [3, 2]].
This gives A = PDP⁻¹ ⟹
Aⁿ = PDⁿP⁻¹ = [[2, 3], [−3, −4]] [[2ⁿ, 0], [0, 1]] [[−4, −3], [3, 2]] = [[−8 · 2ⁿ + 9, −6 · 2ⁿ + 6], [12 · 2ⁿ − 12, 9 · 2ⁿ − 8]].
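The closed form for Aⁿ (with A = [[−7, −6], [12, 10]], as reconstructed from the characteristic polynomial) can be checked against direct matrix powers:

```python
import numpy as np

A = np.array([[-7, -6],
              [12, 10]])

def A_power(n):
    """Closed form obtained from the diagonalisation A = P D P^-1."""
    return np.array([[-8 * 2**n + 9, -6 * 2**n + 6],
                     [12 * 2**n - 12, 9 * 2**n - 8]])

for n in range(1, 8):
    assert np.array_equal(np.linalg.matrix_power(A, n), A_power(n))
```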
7.4.1 The eigenvalues of A are 0, 1, and 2. Those of A² are 0, 1, and 4. Hence, the characteristic polynomial of A² is λ(λ − 1)(λ − 4).
8.2.1 2a2 2a + 1
1
8.2.2
p
= 22 =
1 1 1
v = 2
= ()2 + ()2 = 2 4
= = .
8
8.2.3 0
8.2.4 a = 1 or a = 8.
8.2.5 [A] 2(
x +
y ) 21
z , [B]
x +
y 12
z , [C] (
x +
y +
z)
8.2.6 [A]. 0 , [B]. 0 , [C]. 0 , [D]. 0 , [E]. 2
c (= 2 d )
8.2.7 [F]. 0 , [G]. b , [H]. 2 0 , [I]. 0 .
8.2.8 Let the skew quadrilateral be ABCD and let P, Q, R, S be the midpoints of [A, B], [B, C], [C, D], [D, A], respec-
tively. Put
x = OX, where X {A, B, C, D, P, Q, R, S}. Using the Section Formula 8.4 we have
a+b
b +c
c +d
d +a
p = , q = , r = , s = .
2 2 2 2
This gives
p − q = (a − c)/2, s − r = (a − c)/2.
This means that QP = RS, and so PQRS is a parallelogram, since one pair of sides is equal and parallel.
8.2.9 We have 2BC = BE + EC. By Chasles Rule AC = AE + EC, and BD = BE + ED. We deduce that
AC + BD = AE + EC + BE + ED = AD + BC.
But since ABCD is a parallelogram, AD = BC. Hence
AC + BD = AD + BC = 2BC.
8.2.10 We have IA = −3IB ⟹ IA = −3(IA + AB) = −3IA − 3AB. Thus we deduce
IA + 3IA = −3AB ⟹ 4IA = −3AB ⟹ 4AI = 3AB ⟹ AI = (3/4)AB.
Similarly,
JA = −(1/3)JB ⟹ 3JA = −JB ⟹ 3JA = −JA − AB ⟹ 4JA = −AB ⟹ AJ = (1/4)AB.
Thus we take I such that AI = (3/4)AB and J such that AJ = (1/4)AB.
Now
MA + 3MB = (MI + IA) + 3(MI + IB) = 4MI + (IA + 3IB) = 4MI,
and
3MA + MB = 3(MJ + JA) + (MJ + JB) = 4MJ + (3JA + JB) = 4MJ.
8.2.11 Let G, O and P denote vectors from an arbitrary origin to the gallows, oak, and pine, respectively. The conditions of the problem define X and Y, thought of similarly as vectors from the origin, by X = O + R(O − G), Y = P − R(P − G), where R is the 90° rotation to the right, a linear transformation on vectors in the plane; the fact that R is 90° leftward rotation has been used in writing Y. Anyway, then
(X + Y)/2 = (O + P)/2 + R(O − P)/2
is independent of the position of the gallows. This gives a simple algorithm for treasure-finding: take P as the (hitherto) arbitrary origin; then the treasure is at (O + R(O))/2.
8.3.1 a = 1
2
8.3.3
p = (4, 5)ᵀ = 2(−1, 1)ᵀ + 3(2, 1)ᵀ = 2r + 3s.
8.3.4 Since a1 = a · i, a2 = a · j, we may write
a = (a · i)i + (a · j)j.
8.3.5
a (
a + b) =
a + b = 0 = a 0
= a
(
a) = 0
2
= a = 0.
a 6= 0 , we must have
Since
a 6= 0 and thus = 0. But if = 0 then
a + b = 0 = b = 0
= = 0,
since b 6= 0 .
But the norm of a vector is 0 if and only if the vector is the 0 vector. Therefore a − b = 0, i.e., a = b.
8.3.8 We have
∥a − b∥² = (a − b) · (a − b) = a · a − 2a · b + b · b = ∥a∥² − 2a · b + ∥b∥²,
8.3.9 We have
∥u + v∥² − ∥u − v∥² = (u + v) · (u + v) − (u − v) · (u − v) = u · u + 2u · v + v · v − (u · u − 2u · v + v · v) = 4u · v,
giving the result.
8.3.10 By definition
proj_a (proj_x (a)) = ((proj_x (a)) · a / ∥a∥²) a = (((a · x)/∥x∥²)(x · a) / ∥a∥²) a = ((a · x)² / (∥x∥²∥a∥²)) a.
Since 0 ≤ (a · x)²/(∥x∥²∥a∥²) ≤ 1 by the CBS Inequality, the result follows.
a
8.3.11 Clearly, if
a = 0 and 6= 0 then there are no solutions. If both
a = 0 and = 0, then the solution set is the
whole space R2 . So assume that a 6= 0 . By Theorem 365, we may write x =u + x
v with proj
= u || a and v a .
a
Thus there are infinitely many solutions, each of the form
x
a
x =
u +
v = a +
v = a +
v,
||a||2 ||a||2
where
v
a .
8.4.1 Since a = (2, −1)ᵀ is normal to 2x − y = 1 and b = (1, −3)ᵀ is normal to x − 3y = 1, the desired angle can be obtained by finding the angle between the normal vectors:
angle(a, b) = arccos (a · b / (∥a∥∥b∥)) = arccos (5/(√5 · √10)) = arccos (1/√2) = π/4.
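A quick numerical confirmation of the angle computation:

```python
import numpy as np

a = np.array([2, -1])   # normal to 2x - y = 1
b = np.array([1, -3])   # normal to x - 3y = 1
cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
assert np.isclose(np.arccos(cos_angle), np.pi / 4)
```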
8.4.2 2(x 1) + (y + 1) = 0 or 2x + y = 1.
8.4.3 By Chasles Rule AA = AG + GA , BB = BG + GB , and CC = CG + GC . Thus
0 = AA + BB + CC
= AG + GA + BG + GB + CG + GC
= (GA + GB + GC) + (GA + GB + GC )
= GA + GB + GC ,
8.4.4 We have:
The points F, A, D are collinear, and so FA is parallel to FD, meaning that there is k R \ {0} such that
FA = kFD. Since the lines (AB) and (DC) are parallel, we obtain through Thales Theorem that FI = kFJ and
FB = kFC. This gives
FA − FI = k(FD − FJ) ⟹ IA = kJD.
Similarly
FB − FI = k(FC − FJ) ⟹ IB = kJC.
Since I is the midpoint of [A, B], IA + IB = 0 , and thus k(JC + JD) = 0 . Since k 6= 0, we have JC + JD = 0 ,
meaning that J is the midpoint of [C, D]. Therefore the midpoints of [A, B] and [C, D] are aligned with F.
Let J′ be the intersection of the lines (EI) and (DC). Let us prove that J′ = J.
Since the points E, A, C are collinear, there is l ≠ 0 such that EA = lEC. Since the lines (AB) and (DC) are parallel, we obtain via Thales' Theorem that EI = lEJ′ and EB = lED. These equalities give
EA − EI = l(EC − EJ′) ⟹ IA = lJ′C,
EB − EI = l(ED − EJ′) ⟹ IB = lJ′D.
Since I is the midpoint of [A, B], IA + IB = 0, and thus l(J′C + J′D) = 0. Since l ≠ 0, we deduce J′C + J′D = 0, that is, J′ is the midpoint of [C, D], and so J′ = J.
8.4.5 We have:
By Chasles' Rule
AE = (1/4)AC ⟺ AB + BE = (1/4)AC,
and
AF = (3/4)AC ⟺ AD + DF = (3/4)AC.
Adding, and observing that since ABCD is a parallelogram, AB = DC,
AB + BE + AD + DF = AC ⟹ BE + DF = AC − AB − AD = AD + DC − AB − AD = 0 ⟹ BE = −DF.
The last equality shews that the lines (BE) and (DF) are parallel.
Observe that BJ = (1/2)BC = (1/2)AD = AI. Hence
IJ = IA + AB + BJ = AB,
proving that the lines (AB) and (IJ) are parallel.
Observe that
IE = IA + AE = (1/2)DA + (1/4)AC = (1/2)CB + FC = CJ + FC = FC + CJ = FJ,
whence IEJF is a parallelogram.
8.4.6 Since IE = (1/3)ID and [I, D] is a median of triangle ABD, E is the centre of gravity of triangle ABD. Let M be the midpoint of [B, D], and observe that M is the centre of the parallelogram, and so 2AM = AB + AD. Thus
AE = (2/3)AM = (1/3)(2AM) = (1/3)(AB + AD).
To shew that A, C, E are collinear it is enough to notice that AE = (1/3)AC.
8.4.7 Suppose A, B, C are collinear and that ∥[A, B]∥/∥[B, C]∥ = λ/μ. Then by the Section Formula 8.4,
b = (λc + μa)/(λ + μ),
whence μa − (λ + μ)b + λc = 0, and clearly μ − (λ + μ) + λ = 0. Thus we may take α = μ, β = −(λ + μ), and γ = λ.
Conversely, suppose that
αa + βb + γc = 0, α + β + γ = 0
for some real numbers α, β, γ, not all zero. Assume without loss of generality that β ≠ 0; otherwise we simply change the roles of α, β and γ. Then β = −(α + γ) ≠ 0. Hence
αa + γc = (α + γ)b ⟹ b = (αa + γc)/(α + γ),
and thus B divides [A, C] into the ratio γ : α, and therefore, A, B, C are collinear.
8.4.8 Put OX =
x for X {A, A , B, B , C, C , L, M, N, V}. Using problem 8.4.7 we deduce
v +
a + a = 0 , 1 + + = 0,
(A.1)
v +
a + a = 0 , 1 + + = 0,
(A.2)
v +
a + a = 0 ,
1 + + = 0. (A.3)
From A.2, A.3, and the Section Formula 8.4 we find
b
c b c
= = l,
whence ( ) l = b
c . In a similar fashion, we deduce
( )
m =
c
a,
( )
n =
a b.
This gives
( ) l + ( )
m + ( )
n = 0,
( ) + ( ) + ( ) = 0,
and appealing to problem 8.4.7 once again, we deduce that L, M, N are collinear.
8.5.1 [A] AS, [B] AB.
8.5.2 Put
a = (i + j + k) × (i + j) = j − i = (−1, 1, 0)ᵀ.
Then either
3
2
3 3
a a
= = 3 ,
a 2 2
0
or
3
2
3a
= 3
a
2
0
8.5.4 It is not associative, since i × (i × j) = i × k = −j but (i × i) × j = 0 × j = 0.
8.5.5 We have x × x = −x × x by letting y = x in 8.15. Thus 2(x × x) = 0, and hence x × x = 0.
8.5.6 2
ab
8.5.7
a ( x b ) = b (
a ) (
a b )
x ( x ) b = ( b
a
x ( b
a x = b
a
x a ) x ) x = 0.
The answer is thus { x : x R a b }.
8.5.8
(a b )a + 6 b + 2 a
c
x =
2
12 + 2 a
( a c )
a + 6c + 3
ab
y = 2
18 + 3a
8.5.9 Assume contrariwise that a, b, c are three unit vectors in R³ such that the angle between any two of them is > 2π/3. Then a · b < −1/2, b · c < −1/2, and c · a < −1/2. Thus
0 ≤ ∥a + b + c∥² = ∥a∥² + ∥b∥² + ∥c∥² + 2a · b + 2b · c + 2c · a < 1 + 1 + 1 − 1 − 1 − 1 = 0,
a contradiction.
1
x =
a 2 a b
a
in this last case.
8.5.13 Let x, y, x′, y′ be vectors in R³ and let λ ∈ R be a scalar. Then
L((x, y) + λ(x′, y′)) = L(x + λx′, y + λy′)
= (x + λx′) × k + h × (y + λy′)
= x × k + λ(x′ × k) + h × y + λ(h × y′)
= L(x, y) + λL(x′, y′),
and
0 (a) a
11 =0
2a 0 2a
that is,
2ax + 3a2 y az = a2 .
Since the line follows the direction of (1, −2, −1)ᵀ, this vector is normal to the plane, and thus the equation of the desired plane is
(x − 1) − 2(y − 1) − (z − 1) = 0.
8.6.3 Observe that (0, 0, 0) (as 0 = 2(0) = 3(0)) is on the line, and hence on the plane. Thus the vector
(1, 1, 1)ᵀ − (0, 0, 0)ᵀ = (1, 1, 1)ᵀ
lies on the plane. Now, if x = 2y = 3z = t, then x = t, y = t/2, z = t/3. Hence, the vectorial form of the equation of the line is
r = (0, 0, 0)ᵀ + t(1, 1/2, 1/3)ᵀ = t(1, 1/2, 1/3)ᵀ.
This means that (1, 1/2, 1/3)ᵀ also lies on the plane, and thus a normal to the plane is
(1, 1, 1)ᵀ × (1, 1/2, 1/3)ᵀ = (−1/6, 2/3, −1/2)ᵀ.
or
x/a + y/b + z/c = 1/a + 1/b + 1/c.
We may also write this as
bcx + cay + abz = ab + bc + ca.
a
8.6.5 A vector normal to the plane is
a . The line sought has the same direction as this vector, thus the equation
2
a2
of the line is
x 0 a
= + t 2 ,
y 0 a t R.
z 1 a2
8.6.6 We have x − z = y − 1 = 1 ⟹ y = 2. Hence if z = t,
(x, y, z)ᵀ = (t + 1, 2, t)ᵀ = (1, 2, 0)ᵀ + t(1, 0, 1)ᵀ.
8.6.8 We have
c
a = i + 2 j and
a b = 2 k 3 i . By Theorem 398, we have
c = ab c
b
a = 2 k + 3 i + i 2 j = 4 i 2 j 2 k .
8.6.9 4x + 6y = 1
8.6.10 There are 7 vertices (V0 = (0, 0, 0), V1 = (11, 0, 0), V2 = (0, 9, 0), V3 = (0, 0, 8), V4 = (0, 3, 8), V5 = (9, 0, 2),
V6 = (4, 7, 0)) and 11 edges (V0 V1 , V0 V2 , V0 V3 , V1 V5 , V1 V6 , V2 V4 , V3 V4 , V3 V5 , V4 V5 , and V4 V6 ).
8.7.2 Expand $\left(\sum_{i=1}^{n}\vec{a}_{i}\right)^{2}=0$.

8.7.3 Observe that $\sum_{k=1}^{n}1=n$. Then we have
$$n^{2}=\left(\sum_{k=1}^{n}1\right)^{2}=\left(\sum_{k=1}^{n}a_{k}\cdot\frac{1}{a_{k}}\right)^{2}\leq\left(\sum_{k=1}^{n}a_{k}^{2}\right)\left(\sum_{k=1}^{n}\frac{1}{a_{k}^{2}}\right),$$
by the Cauchy–Schwarz inequality.
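This is the Cauchy–Schwarz inequality applied to the vectors $(a_1,\ldots,a_n)$ and $(1/a_1,\ldots,1/a_n)$, and it is easy to illustrate numerically. A sketch with NumPy (assumed available):

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (2, 5, 50):
    a = rng.uniform(0.1, 10.0, size=n)  # nonzero entries
    lhs = n ** 2                        # (sum 1)^2 = (sum a_k * 1/a_k)^2
    rhs = np.sum(a ** 2) * np.sum(1 / a ** 2)
    assert lhs <= rhs + 1e-9            # Cauchy-Schwarz bound
```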
8.7.4 Take $(\vec{u},\vec{v})\in X^{2}$ and $\lambda\in\mathbb{R}$. Then
$$\vec{a}\cdot(\vec{u}+\lambda\vec{v})=\vec{a}\cdot\vec{u}+\lambda\,\vec{a}\cdot\vec{v}=0+\lambda 0=0,$$
so $\vec{u}+\lambda\vec{v}\in X$, proving that $X$ is a subspace.

Since $\vec{a}_{j}\neq\vec{0}\implies\Vert\vec{a}_{j}\Vert^{2}\neq 0$, we must have $\lambda_{j}=0$. Thus the only linear combination giving the zero vector is the trivial linear combination, which proves that the vectors are linearly independent.
[Cul] CULLEN, C., Matrices and Linear Transformations, 2nd ed., New York: Dover Publications, 1990.
[Del] DELODE, C., Géométrie Affine et Euclidienne, Paris: Dunod, 2000.
[Fad] FADDEEV, D., SOMINSKY, I., Recueil d'Exercices d'Algèbre Supérieure, 7th ed., Paris: Ellipses, 1994.
[Hau] HAUSNER, M., A Vector Space Approach to Geometry, New York: Dover, 1998.
[Lan] LANG, S., Introduction to Linear Algebra, 2nd ed., New York: Springer-Verlag, 1986.
[Pro] PROSKURYAKOV, I. V., Problems in Linear Algebra, Moscow: Mir Publishers, 1978.
[Riv] RIVAUD, J., Ejercicios de Álgebra, Madrid: Aguilar, 1968.
[Tru] TRUFFAULT, B., Géométrie élémentaire: Cours et exercices, Paris: Ellipses, 2001.
GNU Free Documentation License
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible
for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms
that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this
License principally for works whose purpose is instruction or reference.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must
enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly
identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying
with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with
each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document,
free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus
accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the
Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the
Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified
Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the
Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions
it was based on. These may be placed in the History section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some
or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Versions license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties; for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you
or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of
the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name
but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License
in all other respects regarding verbatim copying of that document.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this
License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and
disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void,
and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties
remain in full compliance.
10. FUTURE REVISIONS OF THIS LICENSE
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option
of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version
number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.