Linear Algebra Book Answers
LINEAR ALGEBRA
Jim Hefferon
http://joshua.smcvt.edu/linearalgebra
Notation
    R, R+, Rn : the real numbers, the positive reals, the n-tuples of reals
    N, C : the natural numbers {0, 1, 2, ...}, the complex numbers
    (a .. b), [a .. b] : open interval, closed interval
    ⟨...⟩ : a sequence (a list in which order matters)
    hi,j : the entry in row i and column j of the matrix H
    V, W, U : vector spaces
    ~v, ~0, ~0V : a vector, the zero vector, the zero vector of the space V
    Pn, Mn×m : the space of degree-n polynomials, the space of n×m matrices
    [S] : the span of the set S
    ⟨B, D⟩, ~β, ~δ : a basis, basis vectors
    En = ⟨~e1, ..., ~en⟩ : the standard basis for Rn
    V ≅ W : isomorphic spaces
    M ⊕ N : a direct sum of subspaces
    h, g : homomorphisms (linear maps)
    t, s : transformations (linear maps from a space to itself)
    RepB(~v), RepB,D(h) : the representation of a vector, of a map
    Zn×m or Z, In×n or I : the zero matrix, the identity matrix
    |T| : the determinant of the matrix T
    R(h), N(h) : the range space and null space of the map h
    R∞(h), N∞(h) : the generalized range space and null space
character      name
α              alpha        AL-fuh
β              beta         BAY-tuh
γ, Γ           gamma        GAM-muh
δ, Δ           delta        DEL-tuh
ε              epsilon      EP-suh-lon
ζ              zeta         ZAY-tuh
η              eta          AY-tuh
θ, Θ           theta        THAY-tuh
ι              iota         eye-OH-tuh
κ              kappa        KAP-uh
λ, Λ           lambda       LAM-duh
μ              mu           MEW
ν              nu           NEW
ξ, Ξ           xi           KSIGH
ο              omicron      OM-uh-CRON
π, Π           pi           PIE
ρ              rho          ROW
σ, Σ           sigma        SIG-muh
τ              tau          TOW (as in cow)
υ, Υ           upsilon      OOP-suh-LON
φ, Φ           phi          FEE, or FI (as in hi)
χ              chi          KI (as in hi)
ψ, Ψ           psi          SIGH, or PSIGH
ω, Ω           omega        oh-MAY-guh
Capitals shown are the ones that differ from Roman capitals.
Preface
These are answers to the exercises in Linear Algebra by J Hefferon. An answer
labeled here as One.II.3.4 is for the question numbered 4 from the first chapter, second
section, and third subsection. The Topics are numbered separately.
If you have an electronic version of this file then save it in the same directory as
the book. That way, clicking on the question number in the book takes you to its
answer and clicking on the answer number takes you to the question.
I welcome bug reports and comments. Contact information is on the book's home
page http://joshua.smcvt.edu/linearalgebra.
Jim Hefferon
Saint Michael's College, Colchester VT USA
2014-Dec-25
Contents
Chapter One: Linear Systems
  Solving Linear Systems
    One.I.1: Gauss's Method
    One.I.2: Describing the Solution Set
    One.I.3: General = Particular + Homogeneous
  Linear Geometry
    One.II.1: Vectors in Space
    One.II.2: Length and Angle Measures
  Reduced Echelon Form
    One.III.1: Gauss-Jordan Reduction
    One.III.2: The Linear Combination Lemma
  Topic: Computer Algebra Systems
  Topic: Accuracy of Computations
  Topic: Analyzing Networks
  Topic: Fields
  Topic: Crystals
  Topic: Voting Paradoxes
  Topic: Dimensional Analysis
  Geometry of Determinants
    Four.II.1: Determinants as Size Functions
  Laplace's Formula
    Four.III.1: Laplace's Expansion
  Topic: Cramer's Rule
  Topic: Speed of Calculating Determinants
  Topic: Chiò's Method
  Topic: Projective Geometry
Chapter One
One.I.1.17 (a) Gauss's Method, −(1/2)ρ1 + ρ2, gives
    2x +    3y =    13
       −(5/2)y = −15/2
so y = 3 and, substituting back, x = 2.
(b) Gauss's Method here, −3ρ1 + ρ2 then ρ1 + ρ3,
    x      − z = 0
        y + 3z = 1
        y      = 4
followed by −ρ2 + ρ3,
    x      − z = 0
        y + 3z = 1
           −3z = 3
gives x = −1, y = 4, and z = −1.
One.I.1.18 (a) Gauss's Method, −(1/2)ρ1 + ρ2,
    2x + 2y = 5
        −5y = −5/2
gives y = 1/2 and x = 2 as the only solution.
(b) The reduction ρ1 + ρ2
    −x + y = 1
        2y = 3
gives y = 3/2 and x = 1/2 as the only solution.
(c) The reduction −ρ1 + ρ2
    x − 3y + z = 1
        4y + z = 13
shows, because the variable z is not a leading variable in any row, that there are
many solutions.
(d) Row reduction −3ρ1 + ρ2
    −x − y = 1
         0 = −1
shows that there is no solution.
(e) Gauss's Method starts with a row swap, ρ1 ↔ ρ4, and then −2ρ1 + ρ2 and −ρ1 + ρ3,
    x +  y −  z =  10
        −4y + 3z = −20
         −y + 2z =  −5
         4y +  z =  20
and finishes with −(1/4)ρ2 + ρ3 and ρ2 + ρ4.
    x +  y −     z =  10
        −4y +   3z = −20
             (5/4)z =   0
                 4z =   0
It gives z = 0, y = 5, and x = 5.
(f) Here Gauss's Method, −(3/2)ρ1 + ρ3 and −2ρ1 + ρ4, gives
    2x     +      z +      w =     5
        y             −    w =    −1
           −(5/2)z − (5/2)w = −15/2
        y             −    w =    −1
and then −ρ2 + ρ4 gives
    2x     +      z +      w =     5
        y             −    w =    −1
           −(5/2)z − (5/2)w = −15/2
                           0 =     0
so there are many solutions.
One.I.1.20 The reduction −3ρ1 + ρ2
    x − y = 1
        0 = −3 + k
shows that this system has no solutions if k ≠ 3 and infinitely many solutions if
k = 3. It never has a unique solution.
4x + 2y 2z = 10
4y 8z = 4
31 +3
6x 3y + z = 9
8z = 0
gives z = 0, y = 1, and x = 2. Note that no satisfies that requirement.
One.I.1.22 (a) Gauss's Method
    −3ρ1+ρ2    x − 3y =        b1      −ρ2+ρ3    x − 3y =            b1
    −ρ1+ρ3        10y = −3b1 + b2      −ρ2+ρ4       10y =     −3b1 + b2
    −2ρ1+ρ4       10y =  −b1 + b3                     0 = 2b1 − b2 + b3
                  10y = −2b1 + b4                     0 =  b1 − b2 + b4
shows that this system is consistent if and only if both b3 = −2b1 + b2 and
b4 = −b1 + b2.
(b) Reduction
    −2ρ1+ρ2    x1 + 2x2 + 3x3 =        b1
    −ρ1+ρ3           x2 − 3x3 = −2b1 + b2
                   −2x2 + 5x3 =  −b1 + b3
followed by 2ρ2+ρ3
    x1 + 2x2 + 3x3 =               b1
          x2 − 3x3 =       −2b1 + b2
               −x3 = −5b1 + 2b2 + b3
shows that each of b1, b2, and b3 can be any real number; this system always
has a unique solution.
One.I.1.23 This system with more unknowns than equations
x+y+z=0
x+y+z=1
has no solution.
One.I.1.24 Yes. For example, the fact that we can have the same reaction in two
different flasks shows that twice any solution is another, different, solution (if a
physical reaction occurs then there must be at least one nonzero solution).
One.I.1.25 Because f(1) = 2, f(−1) = 6, and f(2) = 3 we get a linear system.
    1a + 1b + c = 2
    1a − 1b + c = 6
    4a + 2b + c = 3
Gauss's Method
    −ρ1+ρ2    a +  b +  c =  2     −ρ2+ρ3    a +  b +  c =  2
    −4ρ1+ρ3      −2b      =  4                  −2b      =  4
                 −2b − 3c = −5                       −3c = −9
shows that the solution is f(x) = 1x² − 2x + 3.
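As a numeric cross-check of that interpolation, here is a sketch assuming NumPy is available (this code is not from the book).

# Fit f(x) = ax^2 + bx + c through (1,2), (-1,6), (2,3).
import numpy as np

A = np.array([[1.0,  1.0, 1.0],    # a + b + c  = f(1)  = 2
              [1.0, -1.0, 1.0],    # a - b + c  = f(-1) = 6
              [4.0,  2.0, 1.0]])   # 4a + 2b + c = f(2) = 3
rhs = np.array([2.0, 6.0, 3.0])
a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)                     # 1.0 -2.0 3.0, so f(x) = x^2 - 2x + 3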
One.I.1.26 (a) Using the operation 0ρ2 changes this system
    x + y = 2    0ρ2    x + y = 2
    x − y = 0    →          0 = 0
so the new system's solution set S1 contains the original solution set S0 = {(1, 1)},
while S1 is a proper superset because it contains at least two points: (1, 1) and (2, 0).
In this example the solution set does not change.
    x + y = 2      0ρ2    x + y = 2
    2x + 2y = 4    →          0 = 0
One.I.1.27 (a) Yes, by inspection the given equation results from ρ1 + ρ2.
(b) No. The pair (1, 1) satisfies the given equation. However, that pair does not
satisfy the first equation in the system.
(c) Yes. To see if the given row is c1ρ1 + c2ρ2, solve the system of equations
relating the coefficients of x, y, z, and the constants:
    2c1 + 6c2 = 6
     c1 − 3c2 = −9
    −c1 +  c2 = 5
    4c1 + 5c2 = −2
and get c1 = −3 and c2 = 2, so the given row is −3ρ1 + 2ρ2.
One.I.1.28 If a ≠ 0 then the solution set of the first equation is {(x, y) | x = (c − by)/a}.
Taking y = 0 gives the solution (c/a, 0), and since the second equation is supposed
to have the same solution set, substituting into it gives that a(c/a) + d·0 = e, so
c = e. Then taking y = 1 in x = (c − by)/a gives that a((c − b)/a) + d·1 = e,
which gives that b = d. Hence they are the same equation.
When a = 0 the equations can be different and still have the same solution
set: e.g., 0x + 3y = 6 and 0x + 6y = 12.
One.I.1.29 We take three cases: that a ≠ 0, that a = 0 and c ≠ 0, and that both
a = 0 and c = 0.
For the first, we assume that a ≠ 0. Then the reduction
    −(c/a)ρ1+ρ2    ax +          by = j
                    (d − (cb/a))y = k − (cj/a)
shows that this system has a unique solution if and only if d − (cb/a) ≠ 0, that is,
if and only if ad − bc ≠ 0.
In the second case, where a = 0 but c ≠ 0, we swap the rows and use the same
kind of reduction to conclude that the system has a unique solution if and only
if b ≠ 0 (we use the case assumption that c ≠ 0 to get a unique x in back
substitution). But where a = 0 and c ≠ 0 the condition b ≠ 0 is equivalent to the
condition ad − bc ≠ 0. That finishes the second case.
Finally, for the third case, if both a and c are 0 then the system
0x + by = j
0x + dy = k
might have no solutions (if the second equation is not a multiple of the first) or it
might have infinitely many solutions (if the second equation is a multiple of the
first then for each y satisfying both equations, any pair (x, y) will do), but it never
has a unique solution. Note that a = 0 and c = 0 gives that ad − bc = 0.
One.I.1.30 Recall that if a pair of lines share two distinct points then they are the
same line. That's because two points determine a line, so these two points determine
each of the two lines, and so they are the same line.
Thus the lines can share one point (giving a unique solution), share no points
(giving no solutions), or share at least two points (which makes them the same
line).
One.I.1.31 For the reduction operation of multiplying ρi by a nonzero real number k,
we have that (s1, . . . , sn) satisfies this system
    a1,1·x1 + a1,2·x2 + ⋯ + a1,n·xn = d1
        ⋮
    k·ai,1·x1 + k·ai,2·x2 + ⋯ + k·ai,n·xn = k·di
        ⋮
    am,1·x1 + am,2·x2 + ⋯ + am,n·xn = dm
if and only if a1,1·s1 + a1,2·s2 + ⋯ + a1,n·sn = d1 and . . . k·ai,1·s1 + k·ai,2·s2 + ⋯ +
k·ai,n·sn = k·di and . . . am,1·s1 + am,2·s2 + ⋯ + am,n·sn = dm, by the definition of
'satisfies'. Because k ≠ 0, that's true if and only if a1,1·s1 + a1,2·s2 + ⋯ + a1,n·sn = d1
and . . . ai,1·s1 + ai,2·s2 + ⋯ + ai,n·sn = di and . . . am,1·s1 + am,2·s2 + ⋯ + am,n·sn =
dm (this is straightforward canceling on both sides of the i-th equation), which
says that (s1, . . . , sn) solves
    a1,1·x1 + a1,2·x2 + ⋯ + a1,n·xn = d1
        ⋮
    ai,1·x1 + ⋯ + ai,n·xn = di
        ⋮
    am,1·x1 + ⋯ + am,n·xn = dm
as required.
For the combination operation kρi + ρj the system becomes this.
    a1,1·x1 + ⋯ + a1,n·xn = d1
        ⋮
    ai,1·x1 + ⋯ + ai,n·xn = di
        ⋮
    (k·ai,1 + aj,1)·x1 + ⋯ + (k·ai,n + aj,n)·xn = k·di + dj
        ⋮
    am,1·x1 + ⋯ + am,n·xn = dm
The row combination case is the nontrivial one. The operation kρi + ρj results
in this j-th row.
    (k·ai,1 + aj,1)·x1 + ⋯ + (k·ai,n + aj,n)·xn = k·di + dj
The i-th row is unchanged because of the i ≠ j restriction. Because the i-th row is
unchanged, the operation −kρi + ρj returns the j-th row to its original state.
(Observe that the i ≠ j condition on the kρi + ρj is needed, or else this could
happen.
    3x + 2y = 7    2ρ1+ρ1    9x + 6y = 21    −2ρ1+ρ1    −9x − 6y = −21
Here the second operation does not undo the first.)
    p + 5n + 10d = 83
        4n +  9d = 70
has more than one solution; in fact, it has infinitely many of them. However, it
has a limited number of solutions in which p, n, and d are non-negative integers.
Running through d = 0, . . . , d = 8 shows that (p, n, d) = (3, 4, 6) is the only
solution using natural numbers.
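A sketch of that run-through in Python; the total-coins equation p + n + d = 13 is inferred from the reduction shown above, and the code itself is not from the book.

# Run through d = 0, ..., 8 looking for natural-number solutions of
# p + n + d = 13 and p + 5n + 10d = 83 (equivalently, 4n + 9d = 70).
for d in range(9):
    if (70 - 9 * d) % 4 == 0:          # n must come out a whole number
        n = (70 - 9 * d) // 4
        p = 13 - n - d
        if n >= 0 and p >= 0:
            print(p, n, d)             # prints only: 3 4 6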
One.I.1.36 Solving the system
(1/3)(a + b + c) + d = 29
(1/3)(b + c + d) + a = 23
(1/3)(c + d + a) + b = 21
(1/3)(d + a + b) + c = 17
we obtain a = 12, b = 9, c = 3, d = 21. Thus the second item, 21, is the correct
answer.
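The system can be checked mechanically; a sketch assuming SymPy is installed (the code is mine, not the book's).

# Verify the solution of the four averaging equations above.
from sympy import Rational, solve, symbols

a, b, c, d = symbols('a b c d')
eqs = [Rational(1, 3) * (a + b + c) + d - 29,
       Rational(1, 3) * (b + c + d) + a - 23,
       Rational(1, 3) * (c + d + a) + b - 21,
       Rational(1, 3) * (d + a + b) + c - 17]
print(solve(eqs, [a, b, c, d]))   # {a: 12, b: 9, c: 3, d: 21}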
One.I.1.37 This is how the answer was given in the cited source. A comparison
of the units and hundreds columns of this addition shows that there must be a
carry from the tens column. The tens column then tells us that A < H, so there
can be no carry from the units or hundreds columns. The five columns then give
the following five equations.
A+E=W
2H = A + 10
H=W+1
H + T = E + 10
A+1=T
The five linear equations in five unknowns, if solved simultaneously, produce the
unique solution: A = 4, T = 5, H = 7, W = 6 and E = 2, so that the original
example in addition was 47474 + 5272 = 52746.
One.I.1.38 This is how the answer was given in the cited source. Eight commissioners voted for B. To see this, we will use the given information to study how
many voters chose each order of A, B, C.
The six orders of preference are ABC, ACB, BAC, BCA, CAB, CBA; assume
they receive a, b, c, d, e, f votes respectively. We know that
a + b + e = 11
d + e + f = 12
a + c + d = 14
from the number preferring A over B, the number preferring C over A, and the
number preferring B over C. Because 20 votes were cast, we also know that
c+d+ f=9
a+b+c=8
b+ e+ f=6
from the preferences for B over A, for A over C, and for C over B.
The solution is a = 6, b = 1, c = 1, d = 7, e = 4, and f = 1. The number of
commissioners voting for B as their first choice is therefore c + d = 1 + 7 = 8.
Comments. The answer to this question would have been the same had we known
only that at least 14 commissioners preferred B over C.
The seemingly paradoxical nature of the commissioners' preferences (A is preferred to B, and B is preferred to C, and C is preferred to A), an example of
non-transitive dominance, is not uncommon when individual choices are pooled.
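A quick mechanical check of those vote counts; a sketch assuming SymPy (not from the book).

# Solve the six preference-count equations and total B's first-place votes.
from sympy import solve, symbols

a, b, c, d, e, f = symbols('a b c d e f')
eqs = [a + b + e - 11, d + e + f - 12, a + c + d - 14,
       c + d + f - 9,  a + b + c - 8,  b + e + f - 6]
sol = solve(eqs, [a, b, c, d, e, f])
print(sol)                 # {a: 6, b: 1, c: 1, d: 7, e: 4, f: 1}
print(sol[c] + sol[d])     # 8 commissioners voted for B first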
One.I.1.39 This is how the answer was given in the cited source. We have not
used 'dependent' yet; it means here that Gauss's Method shows that there is
not a unique solution. If n ≥ 3 the system is dependent and the solution is not
unique. Hence n < 3. But the term 'system' implies n > 1. Hence n = 2. If the
equations are
    ax + (a + d)y = a + 2d
    (a + 3d)x + (a + 4d)y = a + 5d
then x = −1, y = 2.
One.I.2.15 (a) 2 (b) 3 (c) 1
One.I.2.16 (a) 2×3 (b) 3×2 (c) 2×2
One.I.2.17
5
(a) 1
5
20
5
(b)
2
(c) 4
0
(d)
41
52
3 6
0 0
18
0
12
(f) 8
4
One.I.2.18
18
6
(1/3)1 +2
leaves x leading and y free. Making y the parameter, gives x = 6 2y and this
solution set.
!
!
6
2
{
+
y | y R}
0
1
(b) A reduction
1
1
1
1
1
1
1 +2
1
0
1
2
1
2
1 0
1 1
4 1
Method
1
4
2
5
5 17
1 +2
41 +3
0
0
0
1
1
1
1
1
1
1
2 +3
0
0
0
1
0
2 1
2 0
1 1
1
1
0
3
0
1 +2
(1/2)1 +3
(3/2)2 +3
0
0
0
0
1
1
3/2
1
1
0
1
2
1/2
1
2
5/2
1
1
1
5/2
1
1
0
1
0
10
1 2
2 1
1 1
easy
1
0
1
0
1
1
4
1
1 2 1 0
3
0 3 2 1 2
0 3 2 1 2
1 2 1 0
3
0 3 2 1 2
0 0
0 0
0
21 +2
1 +3
2 +3
and ends with x and y leading while z and w are free. Solving for y gives
y = (2 + 2z + w)/3 and substitution shows that x + 2(2 + 2z + w)/3 − z = 3 so
x = (5/3) − (1/3)z − (2/3)w, making this the solution set.
    { (5/3, 2/3, 0, 0)ᵀ + (−1/3, 2/3, 1, 0)ᵀ·z + (−2/3, 1/3, 0, 1)ᵀ·w | z, w ∈ R }
(f) The reduction
1 0
2 1
3 1
1 1
0 1
1 0
2
7
21 +2
31 +3
2 +3
0
0
0
0
0
1
1
0
1
0
1
2
2
1
2
0
1
3
3
1
3
0
6
5
6
1
1
1
1
0
1
3
21 +2
2
0
1
3
1
2
1
1
ends with x and y leading, and with z free. Solving for y gives y = (1 − 2z)/(−3),
and then substitution 2x + (1 − 2z)/(−3) − z = 1 shows that x = ((4/3) + (1/3)z)/2.
Hence the solution set is this.
    { (2/3, −1/3, 0)ᵀ + (1/6, 2/3, 1)ᵀ·z | z ∈ R }
1 0
0 1
1 2
of Gausss Method
1 0 1 0 1
1 0 1
1 +3
2 1 3
0 1 2 1 3
3 1 7
0 2 4 1 6
1 0 1 0 1
22 +3
0 1 2 1 3
0 0 0
1 0
leaves x, y, and w leading. The solution set is here.
1
1
3 2
{ + z | z R}
0 1
0
0
(c) This row reduction
1 1 1 0 0
1 1 1 0 0
0 1 0 1 0
31 +3 0 1 0 1 0
0 1 0 1 0
3 2 3 1 0
0 1 0 1 0
0 1 0 1 0
1 1 1 0 0
1 0 1 0
2 +3 0
2 +4
0 0 0 0 0
0 0 0 0 0
ends with z and w free. We have this solution set.
0
1
1
0 0
1
{ + z + w | z, w R}
0 1
0
0
0
1
(d) Gauss's Method done in this way
!
!
1 2 3 1 1 1
1 2
3
1 1 1
31 +2
3 1 1 1 1 3
0 7 8 2 4 0
ends with c, d, and e free. Solving for b shows that b = (8c + 2d − 4e)/(−7)
and then substitution a + 2·(8c + 2d − 4e)/(−7) + 3c + 1d − 1e = 1 shows that
a = 1 − (5/7)c − (3/7)d − (1/7)e and we have the solution set.
1
5/7
3/7
1/7
0 8/7
2/7
4/7
{ 0 + 1 c + 0 d + 0 e | c, d, e R}
0 0
1
0
0
0
0
1
One.I.2.20
3 2 1 1
1 1 1 2
5 5 1 0
(1/3)1 +2
(5/3)1 +3
2 +3
0
0
2
1
1
5/3 2/3
5/3
5/3 2/3 5/3
2
1
1
0
0
x
1
3/5
{ y = 1 + 2/5 z | z R }
z
0
1
(b) This is the reduction.
1 1 2
1 1 0
3 1 2
0 2 2
0
3
6
3
1 +2
31 +3
22 +3
2 +4
1
0
0
0
1
0
0
0
1
2
4
2
2
2
4
2
1
2
0
0
2
2
0
0
0
3
6
3
0
3
0
0
3/2
1
{ 3/2 + 1 z | z R }
0
1
(c) Gauss's Method
2
1
1
1
1
1
1
0
4
1
(1/2)1 +2
2
0
1
3/2
1
1
3/2 1/2
4
3
(d) Here is
1 1
1 1
3 1
the
2
0
2
1
0
1/3
2 1 1/3
{ + z
w | z, w R}
0 1 0
0
0
1
reduction.
0
1 1 2 0
1
1 +2
22 +3
3
0 2 2 3
0
31 +3
0
0 4 4
0
0
1
2
0
2
2
0
3
6
One.I.2.21 For each problem we get a system of linear equations by looking at the
equations of components.
(a) k = 5
(b) The second components show that i = 2, the third components show that
j = 1.
(c) m = 4, n = 2
One.I.2.22 For each problem we get a system of linear equations by looking at the
equations of components.
(a) Yes; take k = 1/2.
(b) No; the system with equations 5 = 5 j and 4 = 4 j has no solution.
(c) Yes; take r = 2.
(d) No. The second components give k = 0. Then the third components give
j = 1. But the first components don't check.
One.I.2.23 (a) Let c be the number of acres of corn, s be the number of acres of soy,
and a be the number of acres of oats. The conditions give a linear system; applying
−20ρ1+ρ2 to it (the first equation is c + s + a = 1200) leaves a free, with
s = 16 000/30 + (8/30)·a.
(b) There are many answers possible here. For instance we can take a = 0 to get
c = 20 000/30 ≈ 666.66 and s = 16 000/30 ≈ 533.33. Another example is to take
a = 20 000/38 ≈ 526.32, giving c = 0 and s = 7360/38 ≈ 193.68.
(c) Plug your answers from the prior part into 100c + 300s + 80a.
One.I.2.24 This system has one equation. The leading variable is x1, the other
variables are free.
    { (−1, 1, 0, . . . , 0)ᵀ·x2 + ⋯ + (−1, 0, . . . , 0, 1)ᵀ·xn | x2, . . . , xn ∈ R }
One.I.2.25
(a)
Gausss
Method
here
gives
1 2 0 1 a
1 2 0 1
a
21 +2
2 0 1 0 b
0 4 1 2 2a + b
1 +3
1 1 0 2 c
0 1 0 3
a + c
1 2
0
1
a
(1/4)2 +3
1
2
2a + b
0 4
14
a + 2c
5
ac 3
{
+ w | w R}
2a + b 4c 10
0
1
(b) Plug in with a = 3, b = 1, and c = 2.
7
5
5 3
{ + w | w R}
15 10
1
0
One.I.2.26 Leaving the comma out, say by writing a123 , is ambiguous because it
could mean a1,23 or a12,3 .
2 3 4 5
1 1 1 1
3 4 5 6
1 1 1 1
One.I.2.27
(a)
(b)
4 5 6 7
1 1 1 1
5 6 7 8
1 1 1 1
!
!
1 4
2 1
5 10
One.I.2.28
(a) 2 5
(b)
(c)
(d) (1 1 0)
3 1
10 5
3 6
One.I.2.29
1 +2
a+
b+c=2
2b
=4
One.I.2.31
(a) Here is one; the fourth equation is redundant but still OK.
x+y z+ w=0
y z
=0
2z + 2w = 0
z+ w=0
(b) Here is one.
x+yz+w=0
w=0
w=0
w=0
(c) This is one.
x+yz+w=0
x+yz+w=0
x+yz+w=0
x+yz+w=0
One.I.2.32 This is how the answer was given in the cited source. My solution
was to define the numbers of arbuzoids as 3-dimensional vectors, and express all
possible elementary transitions as such vectors,
too:
R: 13
1
1
2
Operations: 1, 2 , and 1
G: 15
B: 17
2
1
1
Now, it is enough to check whether the solution to one of the following systems of
linear
equations
exists:
13
1
1
2
0
0
45
(or 45 or 0 )
15 + x 1 + y 2 + 1 = 0
17
2
1
1
45
0
0
Solving
1 1 2 13
1 1 2 13
1 +2 2 +3
0
3 3
2
1 2 1 15
21 +3
2 1 1
28
0
0
0
0
gives y + 2/3 = z so if the number of transformations z is an integer then y is not.
The other two systems give similar conclusions so there is no solution.
One.I.2.33 This is how the answer was given in the cited source.
(a) Formal solution of the system yields
a3 1
a2 + a
x= 2
y= 2
.
a 1
a 1
If a + 1 6= 0 and a 1 6= 0, then the system has the single solution
a2 + a + 1
a
x=
y=
.
a+1
a+1
16
a4 1
a2 1
y=
a3 + a
.
a2 1
1 1 2 0
1 1 0
3
1 +2
3 1 2 6 31 +3
0 2 2 3
22 +3
2 +4
1
0
0
0
1
0
0
0
1
2
4
2
2
2
4
2
1
2
0
0
2
2
0
0
0
3
6
3
0
3
0
0
3/2
1
{ 3/2 + 1 z | z R }
0
1
Similarly we can reduce the associated homogeneous system
1 1 2 0
1 1 2
1 1 0 0
0 2 2
1
2
3 1 2 0 31 +3 0 4 4
0 2 2 0
0 2 2
1 1 2
0 2 2
22 +3
2 +4
0 0
0
0 0
0
0
0
0
0
0
0
0
0
18
Answers to Exercises
19
is infinite.
2/3
1/3
5/3
1/3
2/3 2/3
{
w | z, w R }
z +
+
0
0 1
1
0
0
A particular solution and the solution set for the associated homogeneous system
are here.
5/3
1/3
2/3
2/3
2/3
1/3
z +
w | z, w R}
0
1
0
0
0
1
(f) This system's solution set is empty. Thus, there is no particular solution. The
solution set of the associated homogeneous system is this.
1
1
2
3
{ z + w | z, w R }
1
0
0
1
One.I.3.16 The answers from the prior subsection show the row operations. Each
answer here just lists the solution set, the particular solution, and the homogeneous
solution.
(a) The solution set is this.
2/3
1/6
{ 1/3 + 2/3 z | z R}
0
1
A particular solution and the solution set for the associated homogeneous system
are here.
2/3
1/6
{ 2/3 z | z R}
1/3
0
1
(b) The solution set is infinite.
1
1
3 2
{ + z | z R}
0 1
0
0
Here are a particular solution and the solution set for the associated homogeneous
system.
1
1
3
2
{ z | z R}
0
1
0
0
5/7
3/7
1/7
1
0 8/7
2/7
4/7
{ 0 + 1 c + 0 d + 0 e | c, d, e R }
0 0
1
0
0
0
0
1
And, this is a particular solution and the solution set for the associated homogeneous system.
5/7
3/7
1/7
1
4/7
0
8/7
2/7
{ 1 c + 0 d + 0 e | c, d, e R }
0
1
0
0
0
1
0
0
0
One.I.3.17 Just plug them in and see if they satisfy all three equations.
(a) No.
(b) Yes.
(c) Yes.
One.I.3.18 Gausss Method on the associated homogeneous system
1 1 0 1 0
1 1 0
1 0
21 +2
2 3 1 0 0
0 5 1 2 0
0 1
1 1 0
0 1
1
1 0
1 1
0
1
0
(1/5)2 +3
1 2 0
0 5
0 0 6/5 7/5 0
5/6
1/6
{
w | w R}
7/6
1
(a) That vector is indeed a particular solution, so the required general solution is
this.
0
5/6
0 1/6
{ +
w | w R}
0 7/6
4
1
(b) That vector is a particular solution so the required general solution is this.
5
5/6
1 1/6
{ +
w | w R}
7 7/6
10
1
(c) That vector is not a solution of the system since it does not satisfy the third
equation. No such general solution exists.
One.I.3.19 The first is nonsingular while the second is singular. Just do Gauss's
Method and see if the echelon form result has non-0 numbers in each entry on the
diagonal.
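That by-hand test can be automated; a minimal sketch assuming NumPy, which uses full rank as an equivalent criterion rather than literally inspecting the echelon diagonal (the function name is mine, not the book's).

# Test singularity by comparing the rank with the size.
import numpy as np

def is_nonsingular(M):
    M = np.asarray(M, dtype=float)
    return M.shape[0] == M.shape[1] and np.linalg.matrix_rank(M) == M.shape[0]

print(is_nonsingular([[1, 2], [3, 4]]))   # True: echelon form has a full diagonal
print(is_nonsingular([[1, 2], [2, 4]]))   # False: the second row reduces to zero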
One.I.3.20
(a) Nonsingular:
1 +2
1
0
2
1
0 0
ends with row 2 without a leading entry.
(c) Neither. A matrix must be square for either word to apply.
(d) Singular.
(e) Nonsingular.
One.I.3.21 In each case we must decide if the vector is a linear combination of the
vectors in the set.
(a) Yes. Solve
!
!
!
1
1
2
c1
+ c2
=
4
5
3
1
5
2 1 1
(1/2)1 +2
0
1 0
0 1
1
shows that
has no solution.
(c) Yes. The reduction
1 2 3 4
0 1 3 2
4 5 0 1
2
3
41 +2
1 1
0 1
2
5
0
0
1
1/2
1
1/2
1
22 +3
0
0
1
1/2
0
1/2
2
2
1
1
c1 1 + c2 0 = 0
0
1
1
3
0
41 +3
32 +3
1
1 2
3
4
3
2
3
0 1
0 3 12 15 4
1 2 3
4 1
2 3
0 1 3
0 0 3 9 5
One.I.3.23 In this case the solution set is all of Rⁿ and we can express it in the
required form.
    { c1·(1, 0, . . . , 0)ᵀ + c2·(0, 1, . . . , 0)ᵀ + ⋯ + cn·(0, 0, . . . , 1)ᵀ | c1, . . . , cn ∈ R }
~
One.I.3.24 Assume ~s, t R and write
them as here.
s1
t1
..
..
~
~s = .
t= .
sn
tn
Also let ai,1 x1 + + ai,n xn = 0 be the i-th equation in the homogeneous system.
(a) The check is easy.
ai,1 (s1 + t1 ) + + ai,n (sn + tn )
= (ai,1 s1 + + ai,n sn ) + (ai,1 t1 + + ai,n tn ) = 0 + 0
(b) This is similar to the prior one.
ai,1 (3s1 ) + + ai,n (3sn ) = 3(ai,1 s1 + + ai,n sn ) = 3 0 = 0
(c) This one is not much harder.
ai,1 (ks1 + mt1 ) + + ai,n (ksn + mtn )
= k(ai,1 s1 + + ai,n sn ) + m(ai,1 t1 + + ai,n tn ) = k 0 + m 0
What is wrong with that argument is that any linear combination involving only
the zero vector yields the zero vector.
One.I.3.25 First the proof.
Gauss's Method will use only rationals (e.g., (m/n)ρi + ρj). Thus we can
express the solution set using only rational numbers as the components of each
vector. Now the particular solution is all rational.
There are infinitely many rational vector solutions if and only if the associated
homogeneous system has infinitely many real vector solutions. That's because
setting any parameters to be rationals will produce an all-rational solution.
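A sketch of that observation, using Python's exact Fraction type so that every intermediate entry is visibly rational (the function name reduce_system is mine, not the book's).

# Gauss's Method with exact rational arithmetic: entries stay in Q throughout.
from fractions import Fraction

def reduce_system(A):
    """Forward elimination on an augmented matrix of rationals."""
    A = [[Fraction(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    r = 0
    for lead in range(cols):
        if r >= rows:
            break
        pivot = next((i for i in range(r, rows) if A[i][lead] != 0), None)
        if pivot is None:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            factor = A[i][lead] / A[r][lead]        # a rational number
            A[i] = [x - factor * y for x, y in zip(A[i], A[r])]
        r += 1
    return A

print(reduce_system([[1, 2, 5], [3, 1, 5]]))        # every entry is a Fraction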
Linear Geometry
One.II.1: Vectors in Space
One.II.1.1
One.II.1.2
(a)
2
1
!
(b)
1
2
4
(c) 0
3
0
(d) 0
0
3
1
2
1 1 0
=
0 5 5
4
1
5
2
0
3
1 0
1 1
1 3
0
3
0
2
0
4
1
0
1 +2
1 +3
32 +3
1 0 0 2 1
0
0 1 3 2
0 3 0 2 1
1 0 0 2 1
0
0 1 3 2
0 0 9 8 1
1
2
2
1
0
{ 1 + 3 ( 19 + 89 m) + 0 m | m R } = { 2/3 + 8/3 m | m R }
4
0
0
0
4
One.II.1.7
26
2+t=
0
t = s + 4w
1 t = 2s + w
gives t = 2, w = 1, and s = 2 so their
intersection is this point.
0
2
3
One.II.1.8 (a) The vector shown
0.5
2
0 + 1 1
0
0
instead it is
0.5
1
2
0 + 1 2 = 2
0
0
0
which has a parameter twice as large.
(b) The vector
2
1/2
1/2
P = { 0 + y 1 + z 0 | y, z R }
0
0
1
0.5
0.5
2
2
(0 + 1 1) + (0 + 0 1)
0
1
0
0
instead it is
2
0.5
0.5
1
0 + 1 1 + 0 1 = 1
0
0
1
1
triangle is as follows, so w
~ = 3 2 from the north west.
@
w
~@
@
R
@
(a)
32 + 12 = 10
(b)
(c)
18
(d) 0
(e)
One.II.2.12 (a) arccos(9/√85) ≈ 0.22 radians (b) ≈ 0.52 radians (c) Not defined.
3
1
{ 1 y + 0 z | y, z R }
0
1
One.II.2.16
    arccos( ((1)(1) + (0)(1)) / (√1 √2) ) ≈ 0.79 radians
(d) Using the formula from the prior item, lim_{n→∞} arccos(1/√n) = π/2 radians.
.
.
.
~ = [ .. + .. ] ..
(~u + ~v) w
wn
un
vn
w1
u1 + v1
..
..
=
.
.
un + vn
wn
vn
vn
un
One.II.2.19 (a) Verifying that (k~x)·~y = k(~x·~y) = ~x·(k~y) for k ∈ R and ~x, ~y ∈ Rⁿ
is easy. Now, for k ∈ R and ~v, ~w ∈ Rⁿ, if ~u = k~v then ~u·~v = (k~v)·~v = k(~v·~v),
which is k times a nonnegative real.
The ~v = k~u half is similar (actually, taking the k in this paragraph to be the
reciprocal of the k above gives that we need only worry about the k = 0 case).
(b) We first consider the ~u·~v > 0 case. From the Triangle Inequality we know
that ~u·~v = |~u||~v| if and only if one vector is a nonnegative scalar multiple of the
other. But that's all we need because the first part of this exercise shows that, in
a context where the dot product of the two vectors is positive, the two statements
'one vector is a scalar multiple of the other' and 'one vector is a nonnegative
scalar multiple of the other' are equivalent.
We finish by considering the ~u·~v < 0 case. Because 0 < |~u·~v| = −(~u·~v) =
(−~u)·~v and |~u||~v| = |−~u||~v|, we have that 0 < (−~u)·~v = |−~u||~v|. Now the
prior paragraph applies to give that one of the two vectors −~u and ~v is a scalar
1
0
!
w
~ =
1
1
One.II.2.21 We prove that a vector has length zero if and only if all its components
are zero.
Let ~u ∈ Rⁿ have components u1, . . . , un. Recall that the square of any real
number is greater than or equal to zero, with equality only when that real is zero.
Thus |~u|² = u1² + ⋯ + un² is a sum of numbers greater than or equal to zero, and
so is itself greater than or equal to zero, with equality if and only if each ui is zero.
Hence |~u| = 0 if and only if all the components of ~u are zero.
One.II.2.22 We can easily check that
    ( (x1 + x2)/2 , (y1 + y2)/2 )
is on the line connecting the two, and is equidistant from both. The generalization
is obvious.
One.II.2.23 Assume that ~v ∈ Rⁿ has components v1, . . . , vn. If ~v ≠ ~0 then we have
this.
    √( ( v1/√(v1² + ⋯ + vn²) )² + ⋯ + ( vn/√(v1² + ⋯ + vn²) )² )
        = √( v1²/(v1² + ⋯ + vn²) + ⋯ + vn²/(v1² + ⋯ + vn²) )
        = 1
If ~v = ~0 then ~v/|~v| is not defined.
One.II.2.24 For the first question, assume that ~v ∈ Rⁿ and r > 0, take the root, and
factor.
    |r~v| = √( (rv1)² + ⋯ + (rvn)² ) = √( r²(v1² + ⋯ + vn²) ) = r·|~v|
For the second question, the result is r times as long, but it points in the opposite
direction in that r~v + (−r)~v = ~0.
One.II.2.25 Assume that ~u, ~v ∈ Rⁿ both have length 1. Apply Cauchy-Schwarz:
|~u·~v| ≤ |~u||~v| = 1.
To see that 'less than' can happen, in R² take
    ~u = (1, 0)ᵀ    ~v = (0, 1)ᵀ
and note that ~u·~v = 0. For 'equal to', note that ~u·~u = 1.
One.II.2.26 Write
    ~u = (u1, . . . , un)ᵀ    ~v = (v1, . . . , vn)ᵀ
One.II.2.38 Let
    ~u = (u1, . . . , un)ᵀ,    ~v = (v1, . . . , vn)ᵀ,    ~w = (w1, . . . , wn)ᵀ
and then
    ~u · (k~v + m~w) = u1(kv1 + mw1) + ⋯ + un(kvn + mwn)
                     = ku1v1 + mu1w1 + ⋯ + kunvn + munwn
                     = (ku1v1 + ⋯ + kunvn) + (mu1w1 + ⋯ + munwn)
                     = k(~u·~v) + m(~u·~w)
as required.
One.II.2.39 For x, y ∈ R⁺, set
    ~u = (√x, √y)ᵀ    ~v = (√y, √x)ᵀ
so that the Cauchy-Schwarz inequality gives ~u·~v = 2√(xy) ≤ |~u||~v| = x + y,
that is,
    √(xy) ≤ (x + y)/2
as desired.
One.II.2.40 (a) The angle between the birthday vectors (7, 12) and (10, 12) is
    arccos( (7·10 + 12·12) / (|(7, 12)ᵀ| |(10, 12)ᵀ|) ) = arccos( 214/(√193 √244) ) ≈ 0.17 rad
(b) Applying the same equation to (9, 19) gives about 0.09 radians.
(c) The angle will measure 0 radians if the other person is born on the same
day. It will also measure 0 if one birthday is a scalar multiple of the other. For
instance, a person born on Mar 6 would be harmonious with a person born on
Feb 4.
(d) We want the birthday (m, d) that maximizes the angle
    arccos( ( (7, 12)ᵀ · (m, d)ᵀ ) / ( |(7, 12)ᵀ| |(m, d)ᵀ| ) )
The result is
    For (7, 12) worst case 0.95958064648 rads, date (12, 1)
    That is 54.9799211457 degrees
A more conceptual approach is to consider the relation of all points (month, day)
to the point (7, 12). The picture below makes clear that the answer is either
Dec 1 or Jan 31, depending on which is further from the birthdate. The dashed
line bisects the angle between the line from the origin to Dec 1, and the line
from the origin to Jan 31. Birthdays above the line are furthest from Dec 1 and
birthdays below the line are furthest from Jan 31.
[Figure: the dates (month, day), days 10 through 30 against months J F M A M J J A S O N D, with the dashed line bisecting the angle between the rays to Dec 1 and Jan 31.]
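A computational version of that search; a sketch in Python, where the 31-days-for-every-month simplification is an assumption of mine, not the book's.

# Find the (month, day) making the largest angle with the birthday (7, 12).
import math

def angle(m, d):
    dot = 7 * m + 12 * d
    return math.acos(dot / (math.hypot(7, 12) * math.hypot(m, d)))

worst = max(((angle(m, d), m, d) for m in range(1, 13)
             for d in range(1, 32)), key=lambda t: t[0])
print(worst)    # about (0.9596, 12, 1): Dec 1, roughly 55 degrees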
One.II.2.41 This is how the answer was given in the cited source. The actual
velocity ~v of the wind is the sum of the ship's velocity and the apparent velocity of
the wind. Without loss of generality we may assume ~a and ~b to be unit vectors,
and may write
    ~v = ~v1 + s~a = ~v2 + t~b
where s and t are undetermined scalars. Take the dot product first by ~a and then
s=
~v = ~v1 +
aj 2
16j6n+1
bj 2
16j6n+1
ak bj aj bk
2
16k<j6n+1
X
X
= (
aj 2 ) + an+1 2 (
bj 2 ) + bn+1 2
16j6n
16j6n
ak bj aj bk
16k<j6n
aj
16j6n
2
aj
bj
bj
bj an+1 +
16k6n
ak bn+1 an+1 bk
bj 2 an+1 2 +
16j6n
16j6n
2
16j6n
ak bj aj bk
16j6n
16k6n
2
16j6n
16k<j6n
2
2
16j6n
ak bn+1 an+1 bk
16k6n
ak bj aj bk
2
16k<j6n
16j6n
ak bn+1 an+1 bk
2
2
2
16j6n
X
ak bn+1 2
16k6n
aj bj
2
16j6n
ak bn+1 an+1 bk +
16k6n
+2
16j6n
an+1 2 bk 2
16k6n
16k6n
aj bj + an+1 bn+1
2
16j6n
1 1 0
0 2 2
!
!
1 1 2
1 0 1
(1/2)2
2 +1
0 1 1
0 1 1
(b) The solution set has one parameter.
!
1 0 1 4
1 0 1
21 +2
2 2 0 1
0 2 2
(c) There is a unique solution.
!
3 2
1
21 +2
6 1 1/2
(1/3)1
(1/5)2
1
0
3
0
2/3
1
2
5
4
7
1
3/2
!
1/3
3/10
(1/2)2
1 0
0 1
1
1
!
4
7/2
(2/3)2 +1
1 0
0 1
2/15
3/10
38
2 1
0
1
2 1 0 1
(1/2)1 +2
5
0 7/2 1 11/2
1 3 1
5
5
0 1
2
0
1
2
2 1
0
1
1
2 1 0
2 3
(7/2)2 +3
1
2
2
5
5
0 1
0 7/2 1 11/2
0 0 8 12
1 1/2 0 1/2
1 1/2 0 1/2
(1/2)1
23 +2
1
2
1
0
5
2
0
0
(1/8)2
3/2
3/2
0
0
1
0
0
1
1 0 0 1/2
(1/2)2 +1
2
0 1 0
0 0 1 3/2
One.III.1.9
2
3
(a)
1
1
1
1
1
2
(2/3)2 +3
3 +1
(1/3)3 +2
1
0
0
0
0
0
3
21 +2
1
31 +3
9
1
1
3
3
1 1
1
(1/3)2
3 1
1
5/3
0 1 1/3
(3/13)3
0 13/3 17/3
0 0
1
17/13
1 0 22/13
1 0 0
6/13
2 +1
1 0 16/13
0 1 0 16/13
0 1 17/13
0 0 1 17/13
0
0
1
3
2
1
1
5
(b)
2
4
1
1
1
2
1
5
1
1
21 +2
41 +3
(1/3)2
0
0
1 1
0 1
0 0
1
3
3
2
1
0
2 0
3 1
3 1
1/3
0
2 +3
2 +1
0
0
1 0
0 1
0 0
1
3
0
1
1
0
2
3
0
1/3
1/3
0
0 5/2
0
1
0 1
(2/5)2
1
0
1
21 +2
1 +3
0
1 3
1
(1/6)2
0 1 1/3
(1/2)3
0 0
1
1 3 0
1 0 0
(1/3)3 +2
32 +1
0 1 0
0 1 0
3 +1
0 0 1
0 0 1
(c) There are more columns than rows so we must get more than just a diagonal
of ones.
1 0 3
1
2
1 0 3
1
2
1 +2
2 +3
3
3
0 4 1 0
0 4 1 0
31 +3
0 4 1 2 4
0 0 0 2 7
1 0
3
1
2
1 0
3
0 3/2
(1/4)2
3 +1
0 1 1/4 0 3/4
0 1 1/4 0 3/4
(1/2)3
0 0
0
1 7/2
0 0
0
1 7/2
(d) As in the prior item, this is not a square matrix.
1 5 1 5
1 5 1 5
1 3
2 3
0 0 5 6 0 1 3 2
0 1 3 2
0 0 5 6
1 5 0 19/5
1 5 1
5
(1/5)3
33 +2
2
0 1 0 8/5
0 1 3
3 +1
0 0 1 6/5
0 0 1 6/5
1 0 0 59/5
52 +1
0 1 0 8/5
0 0 1 6/5
One.III.1.11 (a) Swap first.
1 0 0
1 2
1 +3
2 +3
(1/2)1
(1/2)3 +1
(1/2)2 +1
0 1 0
(1/2)2
(1/2)3 +2
0 0 1
(1/2)3
(b) Here the swap is in the middle.
1 0
0
21 +2
2 3
(1/3)2
32 +1
0 1 1/3
1 +3
0 0
0
One.III.1.12 For the Gauss's Method halves, see the answers to Chapter One's
section I.2 question Exercise 19.
3
6
0
2
2
0
1
2/3 1/3
0 1 2/3
(1/3)2
The solution set is this
2/3
1/6
{ 1/3 + 2/3 z | z R }
0
1
(b) The second half is
1 0 1 0 1
3 +2
0 1 2 0 3
0 0 0 1 0
so the solution is this.
1
1
3 2
{ + z | z R}
0 1
0
0
(c) This Jordan half
1 0 1 1 0
2 +1 0 1 0 1 0
0 0 0 0 0
0 0 0 0 0
gives
0
1
1
0 0
1
{ + z + w | z, w R }
0 1
0
0
0
1
(of course, we could omit the zero vector from the description).
(d) The Jordan half
!
(1/7)2
1 2
0 1
3
8/7
1
2/7
1
4/7
22 +1
1 0
0 1
5/7 3/7
8/7 2/7
1/7
4/7
1
0
!
1
0
1
5/7
3/7
1/7
0 8/7
2/7
4/7
{ 0 + 1 c + 0 d + 0 e | c, d, e R }
0 0
1
0
0
0
0
1
2/3
1/3
Answers to Exercises
One.III.1.13 Routine Gauss's Method gives one:
2
1
1
3
2 1
1
31 +2
(9/2)2 +3
1
2 7
0
0 1 2
(1/2)1 +3
0 9/2 1/2 7/2
0 0 19/2
and any cosmetic change, such as multiplying the bottom row by 2,
2 1 1
3
0 1 2 7
0 0 19 70
gives another.
7
35
One.III.1.14 In the cases listed below, we take a, b ∈ R. Thus, some canonical forms
listed below actually include infinitely many cases. In particular, they include the
cases a = 0 and b = 0.
!
!
!
0 0
1 a
0 1
1 0
(a)
,
,
,
0 0
0 0
0 0
0 1
!
!
!
!
!
!
1 a b
0 1 a
0 0 1
1 0 a
1 a 0
0 0 0
(b)
,
,
,
,
,
,
0 0 0
0 0 0
0 0 0
0 1 b
0 0 1
0 0 0
!
0 1 0
0 0 1
0 0
1 a
0 1
1 0
(c) 0 0, 0 0 , 0 0, 0 1
0 0
0 0
0 0
0 0
1 a b
0 1 a
0 1 0
0 0 1
1 0 a
0 0 0
(d) 0 0 0, 0 0 0 , 0 0 0 , 0 0 1, 0 0 0, 0 1 b,
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
1 a 0
1 0 0
0 0 1, 0 1 0
0 0 0
0 0 1
One.III.1.15 A nonsingular homogeneous linear system has a unique solution. So a
nonsingular matrix must reduce to a (square) matrix that is all 0's except for 1's
down the upper-left to lower-right diagonal, such as these.
    ( 1 0 )      ( 1 0 0 )
    ( 0 1 )      ( 0 1 0 )
                 ( 0 0 1 )
One.III.1.16 It is an equivalence relation. To prove that we must check that the
relation is reflexive, symmetric, and transitive.
Assume that all matrices are 2×2. For reflexive, we note that a matrix has the
same sum of entries as itself. For symmetric, we assume A has the same sum of
One.III.1.17
..
.
i,1 ai,n
.
aj,1 aj,n
..
.
1 +1
0
3
ki +j
0
4
0
3
1 +1
0
4
..
.
ai,1
..
kai,1 + aj,1
..
.
kai,n + aj,n
ai,n
leaves the i-th row unchanged because of the i ≠ j restriction. Because the i-th
row is unchanged, the operation −kρi + ρj returns the j-th row to its original state.
One.III.1.18 To be an equivalence, each relation must be reflexive, symmetric, and
transitive.
(a) This relation is not symmetric because if x has taken 4 classes and y has taken
3 then x is related to y but y is not related to x.
(b) This is reflexive because x's name starts with the same letter as does x's. It is
symmetric because if x's name starts with the same letter as y's then y's starts
with the same letter as does x's. And it is transitive because if x's name starts
with the same letter as does y's and y's name starts with the same letter as does
z's then x's starts with the same letter as does z's. So it is an equivalence.
One.III.1.19 For each we must check the three conditions of reflexivity, symmetry,
and transitivity.
(a) Any matrix clearly has the same product down the diagonal as itself, so the
relation is reflexive. The relation is symmetric because if A has the same product
down its diagonal as does B, that is, if a1,1·a2,2 = b1,1·b2,2, then B has the same
product as does A.
Transitivity is similar: suppose that A's product is r and that it equals B's
product. Suppose also that B's product equals C's. Then all three have a product
of r, and A's equals C's.
There is an equivalence class for each real number, namely the class contains
all 2×2 matrices whose product down the diagonal is that real.
(b) For reflexivity, if the matrix A has a 1 entry then it is related to itself while if
it does not then it is also related to itself. Symmetry also has two cases: suppose
that A and B are related. If A has a 1 entry then so does B, and thus B is related
to A. If A has no 1 then neither does B, and again B is related to A.
For transitivity, suppose that A is related to B and B to C. If A has a 1 entry
then so does B, and because B is related to C, therefore so does C, and hence A
is related to C. Likewise, if A has no 1 then neither does B, and consequently
neither does C, giving the conclusion that A is related to C.
There are exactly two equivalence classes, one containing any 2×2 matrix
that has at least one entry that is a 1, and the other containing all the matrices
that have no 1's.
One.III.1.20 (a) This relation is not reflexive. For instance, any matrix with an
upper-left entry of 1 is not related to itself.
(b) This relation is not transitive. For these three, A is related to B, and B is
related to C, but A is not related to C.
    A = ( 2 0 )    B = ( 4 0 )    C = ( 8 0 )
        ( 0 0 )        ( 0 0 )        ( 0 0 )
0 0
while the second gives
!
!
1 2
1 0
22 +1
0 1
0 1
The two reduced echelon form matrices are not identical, and so the original
matrices are not row equivalent.
1 2
1 0
31 +2
0 1
51 +3
0 1
5
5
0
0
2 +3
0
1
0
2
1
2
5 0
0
0
0
1
0
5
0
0
0
0
2
0
10
0
(1/2)2
0
0
5
0
0
1
0
0 3 3
0 1 1
0 1 1
and the second.
2 2
1 2
0 3
5
1
1
0
(1/2)1
(1/3)2
!
5/2
1/3
1
1
2 +1
1
0
0
1
17/6
1/3
1
0
1
0
1
0
1
1
!
1
1
2 +1
1
0
1
0
!
0
1
2 +1
1
0
0
1
!
3
2
!
1
2
!
ba
ca
a
b
0
1
are the nonsingular matrices. That's because a linear system for which this is
the matrix of coefficients will have a unique solution, and that is the definition of
nonsingular. (Another way to say the same thing is to say that they fall into none
of the above classes.)
One.III.2.12
a
b
where a, b R.
(b) They have this form (for a, b R).
!
2a
2b
1a
1b
(c) They have the form
a b
c d
k
0
1
0
!
0
1
and
1
0
0
0
!
0
1
One.III.2.17 Any two n×n nonsingular matrices have the same reduced echelon form,
namely the matrix with all 0's except for 1's down the diagonal.
    ( 1 0 ... 0 )
    ( 0 1 ... 0 )
    (     ...   )
    ( 0 0 ... 1 )
Two same-sized singular matrices need not be row equivalent. For example,
these two 2×2 singular matrices are not row equivalent.
    ( 1 1 )   and   ( 1 0 )
    ( 0 0 )         ( 0 0 )
One.III.2.18 Since there is one and only one reduced echelon form matrix in each
class, we can just list the possible reduced echelon form matrices.
For that list, see the answer for Exercise 14.
One.III.2.19 (a) If there is a linear relationship where c0 is not zero then we can
subtract c0·~β0 from both sides and divide by −c0 to get ~β0 as a linear combination
of the others. (Remark: if there are no other vectors in the set, if the relationship
is, say, ~0 = 3·~0, then the statement is still true because the zero vector is by
definition the sum of the empty set of vectors.)
Conversely, if ~β0 is a combination of the others, ~β0 = c1·~β1 + ⋯ + cn·~βn, then
subtracting ~β0 from both sides gives a relationship where at least one of the
coefficients is nonzero; namely, the −1 in front of ~β0.
(b) The first row is not a linear combination of the others for the reason given in
the proof: in the equation of components from the column containing the leading
entry of the first row, the only nonzero entry is the leading entry from the first
row, so its coefficient must be zero. Thus, from the prior part of this exercise,
the first row is in no linear relationship with the other rows.
Thus, when considering whether the second row can be in a linear relationship
with the other rows, we can leave the first row out. But now the argument just
applied to the first row will apply to the second row. (That is, we are arguing
here by induction.)
One.III.2.20 We know that 4s + c + 10d = 8.45 and that 3s + c + 7d = 6.30, and we'd
like to know what s + c + d is. Fortunately, s + c + d is a linear combination of
4s + c + 10d and 3s + c + 7d. Calling the unknown price p, we have this reduction.
    ( 4  1   10  |  8.45       )  −(3/4)ρ1+ρ2  ( 4  1    10   |  8.45       )
    ( 3  1    7  |  6.30       )  −(1/4)ρ1+ρ3  ( 0  1/4  −1/2 | −0.037 5    )
    ( 1  1    1  |  p          )               ( 0  3/4  −3/2 | p − 2.112 5 )
Then −3ρ2+ρ3 leaves ( 0 0 0 | p − 2.00 ) in the last row, so the system is consistent
exactly when p = 2.00: one of each costs $2.00.
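One can also check the combination directly: the reduction amounts to taking −2 times the first purchase plus 3 times the second. A sketch in plain Python (not from the book).

# -2*(4,1,10) + 3*(3,1,7) = (1,1,1), so the same combination prices s+c+d.
first, second = (4, 1, 10, 8.45), (3, 1, 7, 6.30)
combo = tuple(-2 * x + 3 * y for x, y in zip(first, second))
print(combo)    # (1, 1, 1, 2.0) up to float rounding: one of each is $2.00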
2 1 3
0 5/3
8
7/3
gives y = 7/5 and x = 11/5. Now any equation not satisfied by (7/5, 11/5)
will do, e.g., 5x + 5y = 3.
(2) Every equation can be derived from an inconsistent system. For instance, here
is how to derive 3x + 2y = 4 from 0 = 5. First,
    0 = 5   (3/5)ρ1→   0 = 3   x·ρ1→   0 = 3x
(validity of the x = 0 case is separate but clear). Similarly, 0 = 2y. Ditto for
0 = 4. But now, adding the first two and subtracting the third gives 3x + 2y − 4 = 0,
that is, 3x + 2y = 4.
One.III.2.22 Define linear systems to be equivalent if their augmented matrices are
row equivalent. The proof that equivalent systems have the same solution set is
easy.
One.III.2.23 (a) The three possible row swaps are easy, as are the three possible
rescalings. One of the six possible row combinations is kρ1 + ρ2:
    (    1         2         3    )
    ( k·1 + 3   k·2 + 0   k·3 + 3 )
    (    1         4         5    )
and again the first and second columns add to the third. The other five combinations are similar.
(b) The obvious conjecture is that row operations do not change linear relationships
among columns.
(c) A case-by-case proof follows the sketch given in the first item.
Other Computer Algebra Systems have similar commands. These Maple commands
> A:=array( [[40,15],
[-50,25]] );
> u:=array([100,50]);
> linsolve(A,u);
A Maple session
> A:=array( [[2,2],
[1,-4]] );
> u:=array([5,0]);
> linsolve(A,u);
(c) This system has infinitely many solutions. In the first subsection, with z as a
parameter, we got x = (43 7z)/4 and y = (13 z)/4. Sage gets the same.
sage: var('x,y,z')
(x, y, z)
sage: system = [x - 3*y + z == 1,
....:
x + y + 2*z == 14]
sage: solve(system, x,y)
[[x == -7/4*z + 43/4, y == -1/4*z + 13/4]]
Similarly, when the array A and vector u are given to Maple and it is asked to
linsolve(A,u), it returns no result at all; that is, it responds with no solutions.
(e) Sage finds
sage: var('x,y,z')
(x, y, z)
sage: system = [
4*y + z == 20,
....:
2*x - 2*y + z == 0,
....:
x +
z == 5,
....:
x +
y - z == 10]
sage: solve(system, x,y,z)
[[x == 5, y == 5, z == 0]]
+ w
w
- w
+ w
==
==
==
==
5,
-1,
0,
9]
+ 3, w == r2]]
== 4,
== 5,
== 17]
== r4]]
(e) This system has infinitely many solutions; in the second subsection we described
the solution set with two parameters.
5/3
1/3
2/3
2/3 2/3
1/3
{
+
z +
w | z, w R }
0 1
0
0
0
1
Sage does the same.
sage: var('x,y,z,w')
(x, y, z, w)
sage: system = [      x + 2*y - z     == 3,
....:                2*x +   y     + w == 4,
....:                  x -   y + z + w == 1]
sage: solve(system, x,y,z,w)
[[x == r6, y == -r5 - 2*r6 + 4, z == -2*r5 - 3*r6 + 5, w == r5]]
,
b c + a d b c + a d
x + 2y =
3
8y = 7.992
gives (x, y) = (1.002, 0.999). So for this system a small change in the constant
produces only a small change in the solution.
3
.0003
0
1.556
1789
1.569
1805
gives the conclusion that x = 10460 and y = 1.009. Of course, this is wildly
different than the correct answer.
4
(a) For the first one, first, (2/3) − (1/3) is .666 666 67 − .333 333 33 = .333 333 34
and so (2/3) + ((2/3) − (1/3)) = .666 666 67 + .333 333 34 = 1.000 000 0.
For the other one, first ((2/3) + (2/3)) = .666 666 67 + .666 666 67 = 1.333 333 3
and so ((2/3) + (2/3)) − (1/3) = 1.333 333 3 − .333 333 33 = .999 999 97.
(b) The first equation is .333 333 33·x + 1.000 000 0·y = 0 while the second is
.666 666 67·x + 2.000 000 0·y = 0.
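The same kind of associativity failure is easy to reproduce in binary floating point; this two-line example uses standard Python floats and is mine, not the book's.

# Floating point addition is not associative.
print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
print(0.1 + (0.2 + 0.3))    # 0.6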
(a) This calculation
(2/3)1 +2
(1/3)1 +3
(1/2)2 +3
0
0
0
0
2
1
(4/3) + 2 (2/3) + 2
(2/3) + 2 (1/3)
2
1
(4/3) + 2 (2/3) + 2
2 + 4
1 +
2 + 4
The solution with two digits retained is z = 2.1, y = 2.6, and x = .43.
0
.13 101 .67 100 .20 101
(1/3)1 +3
0
.67 100 .33 100 .10 101
0
.13 101 .67 100 .20 101
2
2
0
0
.15 10
.31 10
(a) The total resistance is 7 ohms. With a 9 volt potential, the flow will be
9/7 amperes. Incidentally, the voltage drops will then be: 27/7 volts across the
3 ohm resistor, and 18/7 volts across each of the two 2 ohm resistors.
(b) One way to do this network is to note that the 2 ohm resistor on the left has
a voltage drop of 9 volts (and hence the flow through it is 9/2 amperes), and the
remaining portion on the right also has a voltage drop of 9 volts, and so we can
analyze it as in the prior item. We can also use linear systems.
[Diagram: the first network, with branch currents labeled i0, i1, i2, i3.]
[Diagram: the second network, with branch currents labeled i0 through i6.]
     i0 − i1 − i2 = 0
    −i0 + i1 + i2 = 0
     5i1          = 20
              8i2 = 20
     5i1 − 8i2    = 0
The current flowing in each branch is then i2 = 20/8 = 2.5, i1 = 20/5 = 4, and
i0 = 13/2 = 6.5, all in amperes. Thus the parallel portion is acting like a single
resistor of size 20/(13/2) ≈ 3.08 ohms.
(b) A similar analysis gives that i2 = i1 = 20/8 = 2.5 and i0 = 40/8 =
5 amperes. The equivalent resistance is 20/5 = 4 ohms.
(c) Another analysis like the prior ones gives i2 = 20/r2, i1 = 20/r1, and
i0 = 20(r1 + r2)/(r1r2), all in amperes. So the parallel portion is acting like a
single resistor of size 20/i0 = r1r2/(r1 + r2) ohms. (This equation is often stated
as: the equivalent resistance r satisfies 1/r = (1/r1) + (1/r2).)
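A sketch of that often-stated rule in Python (the function name is mine).

# Equivalent resistance of r1 and r2 in parallel.
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

print(parallel(5, 8))    # 40/13, about 3.08 ohms, matching part (a)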
3 Kirchhoff's Current Law, applied to the node where r1, r2, and rg come together,
and also applied to the node where r3, r4, and rg come together, gives these.
    i1 − i2 − ig = 0
    i3 − i4 + ig = 0
Kirchhoff's Voltage Law, applied to the loop in the top right, and to the loop in the
bottom right, gives these.
    i3r3 − igrg − i1r1 = 0
    i4r4 − i2r2 + igrg = 0
Assuming that ig is zero gives that i1 = i2, that i3 = i4, that i1r1 = i3r3, and that
i2r2 = i4r4. Then rearranging the last equality
    r4 = (i2r2 · i3r3) / (i4 · i1r1)
and cancelling the i's gives the desired conclusion.
4
(a) An adaptation is: in any intersection the flow in equals the flow out. It does
seem reasonable in this case, unless cars are stuck at an intersection for a long
time.
(b) We can label the flow in this way.
[Diagram: the streets, labeled Shelburne St, Willow, Jay Ln, and Winooski Ave, with east and west flows marked.]
Because 50 cars leave via Main while 25 cars enter, i1 − 25 = i2. Similarly Pier's
in/out balance means that i2 = i3, and North gives i3 + 25 = i1. We have this
system.
     i1 − i2      =  25
          i2 − i3 =   0
    −i1      + i3 = −25
(c) The row operations ρ1 + ρ2 and ρ2 + ρ3 lead to the conclusion that there
are infinitely many solutions. With i3 as the parameter,
    { (25 + i3, i3, i3)ᵀ | i3 ∈ R }
of course, since the problem is stated in number of cars, we might restrict i3 to
be a natural number.
(d) If we picture an initially-empty circle with the given input/output behavior,
we can superimpose a z3 -many cars circling endlessly to get a new solution.
(e) A suitable restatement might be: the number of cars entering the circle must
equal the number of cars leaving. The reasonableness of this one is not as clear.
Over the five minute time period we could find that a half dozen more cars
entered than left, although the problem statements into/out table does satisfy
this property. In any event, it is of no help in getting a unique solution since for
that we would need to know the number of cars circling endlessly.
5 [Diagram: the traffic network, with external flows 55, 75, 40, 5, 80, 50, 30, and 70, and with branch flows labeled i1 through i7.]
We apply Kirchhoffs principle that the flow into the intersection of Willow
and Shelburne must equal the flow out to get i1 + 25 = i2 + 125. Doing the
intersections from right to left and top to bottom gives these equations.
i1 i2
= 10
i1
+ i3
= 15
i2
+ i4
= 5
i3 i4
+ i6
= 50
i5
i7 = 10
i6 + i7 = 30
The row operation ρ1 + ρ2, followed by ρ2 + ρ3, then ρ3 + ρ4 and ρ4 + ρ5 and
finally ρ5 + ρ6, result in this system.
    i1 − i2                          =  10
        −i2 + i3                     =  25
              i3 + i4 − i5           =  30
                       −i5 + i6      = −20
                            −i6 + i7 =  30
                                   0 =   0
Since the free variables are i4 and i7 we take them as parameters.
    i6 = i7 − 30
    i5 = i6 + 20 = (i7 − 30) + 20 = i7 − 10
    i3 = −i4 + i5 + 30 = −i4 + (i7 − 10) + 30 = −i4 + i7 + 20
    i2 = i3 − 25 = (−i4 + i7 + 20) − 25 = −i4 + i7 − 5
    i1 = i2 + 10 = (−i4 + i7 − 5) + 10 = −i4 + i7 + 5
Obviously i4 and i7 have to be positive, and in fact the first equation shows
that i7 must be at least 30. If we start with i7, then the i2 equation shows that
0 ≤ i4 ≤ i7 − 5.
(b) We cannot take i7 to be zero or else i6 will be negative (this would mean cars
going the wrong way on the one-way street Jay). We can, however, take i7 to be
as small as 30, and then there are many suitable i4's. For instance, the solution
(i1 , i2 , i3 , i4 , i5 , i6 , i7 ) = (35, 25, 50, 0, 20, 0, 30)
results from choosing i4 = 0.
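A sketch that generates solutions from the two parameters, using the back-substitution formulas above (the function name is mine, not the book's).

# Produce a full flow assignment from the free variables i4 and i7.
def flows(i4, i7):
    i6 = i7 - 30
    i5 = i7 - 10
    i3 = -i4 + i7 + 20
    i2 = -i4 + i7 - 5
    i1 = -i4 + i7 + 5
    return (i1, i2, i3, i4, i5, i6, i7)

print(flows(0, 30))    # (35, 25, 50, 0, 20, 0, 30), the solution above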
Chapter Two
(a) 3 + 2x x
(b)
1
0
+1
3
!
(c) 3ex + 2ex
Two.I.1.19 (a) Three elements are: 1 + 2x, 2 − 1x, and x. (Of course, many answers
are possible.)
The verification is just like Example 1.3. We first do conditions 1-5 from
Definition 1.1, having to do with addition. For closure under addition, condition (1), note that where a + bx, c + dx ∈ P1 we have that (a + bx) + (c + dx) =
(a + c) + (b + d)x is a linear polynomial with real coefficients and so is an
element of P1. Condition (2) is verified with: where a + bx, c + dx ∈ P1
then (a + bx) + (c + dx) = (a + c) + (b + d)x, while in the other order they
are (c + dx) + (a + bx) = (c + a) + (d + b)x, and both a + c = c + a and
b + d = d + b as these are real numbers. Condition (3) is similar: suppose
a + bx, c + dx, e + fx ∈ P1; then ((a + bx) + (c + dx)) + (e + fx) = (a + c + e) + (b + d + f)x
while (a + bx) + ((c + dx) + (e + fx)) = (a + c + e) + (b + d + f)x, and the two
    ( a b )
    ( c d )
to verify that the addition of the matrices is commutative. The verification for
condition (3), associativity of matrix addition, is similar to the prior verification:
    ( ( a b )   ( e f ) )   ( i j )     ( (a + e) + i  (b + f) + j )
    ( ( c d ) + ( g h ) ) + ( k l )  =  ( (c + g) + k  (d + h) + l )
while
    ( a b )   ( ( e f )   ( i j ) )     ( a + (e + i)  b + (f + j) )
    ( c d ) + ( ( g h ) + ( k l ) )  =  ( c + (g + k)  d + (h + l) )
    (rs)·( a b ; c d ) = ( rsa rsb ; rsc rsd ) = r·( sa sb ; sc sd ) = r·( s·( a b ; c d ) )
    1·( a b ; c d ) = ( 1a 1b ; 1c 1d ) = ( a b ; c d )
(b) This differs from the prior item in this exercise only in that we are restricting
to the set T of matrices with a zero in the second row and first column. Here are
three elements of T.
    ( 1 2 )    ( −1 −2 )    ( 0 0 )
    ( 0 4 )    (  0 −4 )    ( 0 0 )
Some of the verifications for this item are the same as for the first item in this
exercise, and below we'll just do the ones that are different.
For (1), the sum of 2×2 real matrices with a zero in the 2,1 entry is also a
2×2 real matrix with a zero in the 2,1 entry.
    ( a b )   ( e f )     ( a + e  b + f )
    ( 0 d ) + ( 0 h )  =  (   0    d + h )
d+h
The verification for condition (2) given in the prior item works in this item also.
The same holds for condition (3). For (4), note that the 2×2 matrix of zeroes is
an element of T. Condition (5) holds because for any 2×2 matrix A the additive
inverse is the matrix −1·A and so the additive inverse of a matrix with a zero in
the 2,1 entry is also a matrix with a zero in the 2,1 entry.
Condition (6) holds because a scalar multiple of a 2×2 matrix with a zero in the
2,1 entry is a 2×2 matrix with a zero in the 2,1 entry. Condition (7)'s verification
is the same as in the prior item. So are condition (8)'s, (9)'s, and (10)'s.
Two.I.1.21 (a) Three elements are (1 2 3), (2 1 3), and (0 0 0).
We must check conditions (1)-(10) in Definition 1.1. Conditions (1)-(5)
concern addition. For condition (1) recall that the sum of two three-component
row vectors
    (a b c) + (d e f) = (a + d  b + e  c + f)
is also a three-component row vector (all of the letters a, . . . , f represent real
numbers). Verification of (2) is routine
    (a b c) + (d e f) = (a + d  b + e  c + f)
                      = (d + a  e + b  f + c) = (d e f) + (a b c)
(the second equality holds because the three entries are real numbers and real
number addition commutes). Condition (3)'s verification is similar.
    ((a b c) + (d e f)) + (g h i) = ((a + d) + g  (b + e) + h  (c + f) + i)
        = (a + (d + g)  b + (e + h)  c + (f + i)) = (a b c) + ((d e f) + (g h i))
For (4), observe that the three-component row vector (0 0 0) is the additive
identity: (a b c) + (0 0 0) = (a b c). To verify condition (5), assume we are
given the element (a b c) of the set and note that (−a −b −c) is also in the
set and has the desired property: (a b c) + (−a −b −c) = (0 0 0).
Conditions (6)-(10) involve scalar multiplication. To verify (6), that the
space is closed under the scalar multiplication operation that was given, note
that r(a b c) = (ra rb rc) is a three-component row vector with real entries.
For (7) we compute.
    (r + s)(a b c) = ((r + s)a  (r + s)b  (r + s)c) = (ra + sa  rb + sb  rc + sc)
                   = (ra rb rc) + (sa sb sc) = r(a b c) + s(a b c)
Condition (8) is very similar.
    r((a b c) + (d e f)) = r(a + d  b + e  c + f) = (r(a + d)  r(b + e)  r(c + f))
        = (ra + rd  rb + re  rc + rf) = (ra rb rc) + (rd re rf) = r(a b c) + r(d e f)
So is the computation for condition (9).
    (rs)(a b c) = (rsa rsb rsc) = r(sa sb sc) = r(s(a b c))
(b) Where the two four-tall vectors are members of the set L, their sum
    (a, b, c, d)ᵀ + (e, f, g, h)ᵀ = (a + e, b + f, c + g, d + h)ᵀ
is also a member of L, which is true because it satisfies the criteria for membership
in L: (a + e) + (b + f) − (c + g) + (d + h) = (a + b − c + d) + (e + f − g + h) = 0 + 0.
The verifications for conditions (2), (3), and (5) are similar to the ones in the
first part of this exercise. For condition (4) note that the vector of zeroes is a
member of L because its first component plus its second, minus its third, and
plus its fourth, totals to zero.
Condition (6), closure of scalar multiplication, is similar: where the vector is
an element of L,
    r·(a, b, c, d)ᵀ = (ra, rb, rc, rd)ᵀ
is also an element of L because ra + rb − rc + rd = r(a + b − c + d) = r·0 = 0.
The verification for conditions (7), (8), (9), and (10) are as in the prior item of
this exercise.
Two.I.1.22 In each item the set is called Q. For some items, there are other correct
ways to show that Q is not a vector space.
(a) It is not closed under addition; it fails to meet condition (1).
    (1, 0, 0)ᵀ, (0, 1, 0)ᵀ ∈ Q    but    (1, 1, 0)ᵀ ∉ Q
(b) It is not closed under addition.
1
0
0 , 1 Q
0
0
1
1 6 Q
0
1
0
2
0
!
6 Q
    −1·(1 + 1x + 1x²) ∉ Q
Two.I.1.38 Each element of a vector space has one and only one additive inverse.
For, let V be a vector space and suppose that ~v ∈ V. If ~w1, ~w2 ∈ V are both
additive inverses of ~v then consider ~w1 + ~v + ~w2. On the one hand, we have that it
equals ~w1 + (~v + ~w2) = ~w1 + ~0 = ~w1. On the other hand we have that it equals
(~w1 + ~v) + ~w2 = ~0 + ~w2 = ~w2. Therefore, ~w1 = ~w2.
Two.I.1.39 (a) Every such set has the form {r·~v + s·~w | r, s ∈ R} where either
or both of ~v, ~w may be ~0. With the inherited operations, closure of addition
(r1~v + s1~w) + (r2~v + s2~w) = (r1 + r2)~v + (s1 + s2)~w and scalar multiplication
c(r~v + s~w) = (cr)~v + (cs)~w are easy. The other conditions are also routine.
(b) No such set can be a vector space under the inherited operations because it
does not have a zero element.
Two.I.1.40 Assume that ~v ∈ V is not ~0.
(a) One direction of the 'if and only if' is clear: if r = 0 then r·~v = ~0. For the other
way, let r be a nonzero scalar. If r~v = ~0 then (1/r)·r~v = (1/r)·~0 shows that
~v = ~0, contrary to the assumption.
(b) Where r1, r2 are scalars, r1~v = r2~v holds if and only if (r1 − r2)~v = ~0. By the
prior item, then r1 − r2 = 0.
(c) A nontrivial space has a vector ~v ≠ ~0. Consider the set {k·~v | k ∈ R}. By the
prior item this set is infinite.
(d) The solution set is either trivial, or nontrivial. In the second case, it is infinite.
Two.I.1.41 Yes. A theorem of first semester calculus says that a sum of differentiable
functions is differentiable and that (f + g)′ = f′ + g′, and that a multiple of a
differentiable function is differentiable and that (r·f)′ = r·f′.
Two.I.1.42 The check is routine. Note that 1 is 1 + 0i and the zero elements are
these.
(a) (0 + 0i) + (0 + 0i)x + (0 + 0i)x²
(b) ( 0 + 0i  0 + 0i )
    ( 0 + 0i  0 + 0i )
Two.I.1.43 Notably absent from the definition of a vector space is a distance measure.
Two.I.1.44
Each equality above follows from the associativity of three vectors that is given
as a condition in the definition of a vector space. For instance, the second =
2
1 1
1 1 0
1 1 3
(1/2)1 +2
(1/2)1 +3
0
0
1
3/2
0
1/2
3
gives r1 = 2 and r2 = 1.
(b) Yes; the linear system arising from r1(x²) + r2(2x + x²) + r3(x + x³) = x − x³
    2r2 + r3 = 1
    r1 + r2  = 0
          r3 = −1
gives that −1(x²) + 1(2x + x²) − 1(x + x³) = x − x³.
(c) No; any combination of the two given matrices has a zero in the upper right.
r2
=y
r2
=y
r3 = (1/2)x + (1/2)y + z
r1
+ r3 = z
so that, given any x, y, and z, we can compute that r3 = (1/2)x + (1/2)y + z,
r2 = y, and r1 = (1/2)x (1/2)y.
(c) No. In particular, we cannot get the vector
    (0, 0, 1)ᵀ
as a linear combination since the two given vectors both have a third component
of zero.
(d) Yes. The equation
    r1·(1, 0, 1)ᵀ + r2·(3, 1, 0)ᵀ + r3·(1, 0, 0)ᵀ + r4·(2, 1, 5)ᵀ = (x, y, z)ᵀ
leads to this reduction.
    ( 1 3 1 2 | x )   −ρ1+ρ3  3ρ2+ρ3   ( 1 3  1 2 | x )
    ( 0 1 0 1 | y )      →             ( 0 1  0 1 | y )
    ( 1 0 0 5 | z )                    ( 0 0 −1 6 | −x + 3y + z )
We have infinitely many solutions. We can, for example, set r4 to be zero
and solve for r3, r2, and r1 in terms of x, y, and z by the usual methods of
back-substitution.
Two.I.2.25
    ( 2 3 5 6 | x )  −(1/2)ρ1+ρ2  −(1/3)ρ2+ρ3  ( 2   3    5    6 | x )
    ( 1 0 1 0 | y )  −(1/2)ρ1+ρ3               ( 0 −3/2 −3/2 −3  | −(1/2)x + y )
    ( 1 1 2 2 | z )                            ( 0   0    0    0 | −(1/3)x − (1/3)y + z )
This shows that not every three-tall vector can be so expressed. Only the vectors
satisfying the restriction that −(1/3)x − (1/3)y + z = 0 are in the span. (To see
that any such vector is indeed expressible, take r3 and r4 to be zero and solve
for r1 and r2 in terms of x, y, and z by back-substitution.)
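That membership restriction is easy to test mechanically; a sketch in plain Python (the function name is mine, not the book's).

# A vector (x, y, z) is in the span exactly when -(1/3)x - (1/3)y + z = 0.
def in_span(x, y, z, tol=1e-12):
    return abs(-x / 3 - y / 3 + z) < tol

print(in_span(1, 2, 1))    # True:  -1/3 - 2/3 + 1 = 0
print(in_span(0, 0, 1))    # False: the restriction fails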
Two.I.2.26 (a) {(c b c) | b, c ∈ R} = {b·(0 1 0) + c·(1 0 1) | b, c ∈ R}. The obvious choice for the set that spans is {(0 1 0), (1 0 1)}.
(b) {( d b ; c d ) | b, c, d ∈ R} = {b·( 0 1 ; 0 0 ) + c·( 0 0 ; 1 0 ) + d·( 1 0 ; 0 1 ) | b, c, d ∈ R}.
One set that spans this space consists of those three matrices.
(c) The system
    a + 3b      = 0
    2a     − c − d = 0
gives b = −(c + d)/6 and a = (c + d)/2. So one description is this.
    { c·( 1/2 −1/6 ; 1 0 ) + d·( 1/2 −1/6 ; 0 1 ) | c, d ∈ R }
That shows that a set spanning this subspace consists of those two matrices.
(d) The a = 2b − c gives that the set {(2b − c) + bx + cx³ | b, c ∈ R} equals the
set {b(2 + x) + c(−1 + x³) | b, c ∈ R}. So the subspace is the span of the set
{2 + x, −1 + x³}.
(e) The set {a + bx + cx² | a + 7b + 49c = 0} can be parametrized as
    {b(−7 + x) + c(−49 + x²) | b, c ∈ R}
and so has the spanning set {−7 + x, −49 + x²}.
Two.I.2.27
72
2/3
1/3
2/3
1/3
{y 1 + z 0 | y, z R}
{ 1 , 0 }
0
1
0
1
1/2
1
2 0
(c) { ,
}
1 0
1
0
(d) Parametrize the description as { a1 + a1 x + a3 x2 + a3 x3 | a1 , a3 R} to get
{ 1 + x, x2 + x3 }.
(e) {1, x, x2!
, x3 , x4 } !
!
!
1 0
0 1
0 0
0 0
(f) {
,
,
,
}
0 0
0 0
1 0
0 1
Answers to Exercises
73
One reason that it is not a subspace of M22 is that it does not contain the zero
matrix. (Another reason is that it is not closed under addition, since the sum of
the two is not an element of A. It is also not closed under scalar multiplication.)
(b) This set of two vectors does not span R2 .
!
!
1
3
{
,
}
1
3
No linear combination of these two can give a vector whose second component is
unequal to its first component.
Two.I.2.33 No. The only subspaces of R1 are the space itself and its trivial subspace.
Any subspace S of R that contains a nonzero member ~v must contain the set of all
of its scalar multiples {r ~v | r R }. But this set is all of R.
Two.I.2.34 Item (1) is checked in the text.
Item (2) has five conditions. First, for closure, if c R and ~s S then c ~s S
as c ~s = c ~s + 0 ~0. Second, because the operations in S are inherited from V, for
c, d R and ~s S, the scalar product (c + d) ~s in S equals the product (c + d) ~s
in V, and that equals c ~s + d ~s in V, which equals c ~s + d ~s in S.
The check for the third, fourth, and fifth conditions are similar to the second
conditions check just given.
Two.I.2.35 An exercise in the prior subsection shows that every vector space has
only one zero vector (that is, there is only one vector that is the additive identity
element of the space). But a trivial space has only one element and that element
must be this (unique) zero vector.
Two.I.2.36 As the hint suggests, the basic reason is the Linear Combination Lemma
from the first chapter. For the full proof, we will show mutual containment between
the two sets.
The first containment [[S]] [S] is an instance of the more general, and obvious,
fact that for any subset T of a vector space, [T ] T .
For the other containment, that [[S]] [S], take m vectors from [S], namely
c1,1~s1,1 + + c1,n1~s1,n1 , . . . , c1,m~s1,m + + c1,nm~s1,nm , and note that any
linear combination of those
r1 (c1,1~s1,1 + + c1,n1~s1,n1 ) + + rm (c1,m~s1,m + + c1,nm~s1,nm )
is a linear combination of elements of S
= (r1 c1,1 )~s1,1 + + (r1 c1,n1 )~s1,n1 + + (rm c1,m )~s1,m + + (rm c1,nm )~s1,nm
and so is in [S]. That is, simply recall that a linear combination of linear combinations (of members of S) is a linear combination (again of members of S).
74
Two.I.2.37 (a) It is not a subspace because these are not the inherited operations.
For one thing, in this space,
x
1
0 y = 0
z
0
3
while this does not, of course, hold in R .
(b) We can combine the argument showing closure under addition with the argument showing closure under scalar multiplication into one single argument showing
closure under linear combinations of two vectors. If r1 , r2 , x1 , x2 , y1 , y2 , z1 , z2
are inR then
x1
x2
r1 x1 r1 + 1
r2 x2 r2 + 1
r1 y1 + r2 y2 =
r1 y1
r2 y2
+
z1
z2
r1 z1
r2 z2
r1 x1 r1 + r2 x2 r2 + 1
=
r1 y1 + r2 y2
r1 z1 + r2 z2
(note that the definition of addition in this space is that the first components
combine as (r1 x1 r1 + 1) + (r2 x2 r2 + 1) 1, so the first component of the
last vector does not say + 2). Adding the three components of the last vector
gives r1 (x1 1 + y1 + z1 ) + r2 (x2 1 + y2 + z2 ) + 1 = r1 0 + r2 0 + 1 = 1.
Most of the other checks of the conditions are easy (although the oddness of
the operations keeps them from being routine). Commutativity of addition goes
like this.
x1
x2
x1 + x2 1
x2 + x1 1
x2
x1
y1 + y2 = y1 + y2 = y2 + y1 = y2 + y1
z1
z2
z 1 + z2
z2 + z 1
z2
z1
Associativity
of addition
has
x1
x2
x3
(x1 + x2 1) + x3 1
(y1 + y2 ) + y3 =
(y1 + y2 ) + y3
z1
z2
z3
(z1 + z2 ) + z3
while
x1
x2
x3
x1 + (x2 + x3 1) 1
y1 + (y2 + y3 )
y1 + (y2 + y3 ) =
z1
z2
z3
z1 + (z2 + z3 )
and they are equal. The identity element with respect to this addition operation
works this way
x
1
x+11
x
y + 0 = y + 0 = y
z
0
z+0
z
Answers to Exercises
75
y
0
z
z
zz
0
The conditions on scalar
multiplication
are
also
easy.
For
the
first condition,
x
(r + s)x (r + s) + 1
(r + s) y =
(r + s)y
z
(r + s)z
while
x
x
rx r + 1
sx s + 1
r y + s y =
ry
sy
+
z
z
rz
sz
(rx r + 1) + (sx s + 1) 1
=
ry + sy
rz + sz
and the two
are
equal.
The second
conditioncompares
x2
x1 + x2 1
r(x1 + x2 1) r + 1
x1
r (y1 + y2 ) = r y1 + y2 =
r(y1 + y2 )
z2
z1 + z2
r(z1 + z2 )
z1
with
x1
x2
rx1 r + 1
rx2 r + 1
r y1 + r y2 =
ry1
ry2
+
z1
z2
rz1
rz2
(rx1 r + 1) + (rx2 r + 1) 1
=
ry1 + ry2
rz1 + rz2
and they are equal. For the third
condition,
x
rsx rs + 1
(rs) y =
rsy
z
rsz
while
x
sx s + 1
r(sx s + 1) r + 1
r(s y) = r(
sy
rsy
) =
z
sz
rsz
and the two are equal. Forscalar
1 we
multiplication
by
have this.
x
1x 1 + 1
x
1 y =
1y
= y
z
1z
z
76
x1 1
x2 1
x1 + x2 2
(in P)
y1 + y2 = y1 + y2
z1
z2
z1 + z 2
and then move the result back out by 1 along the x-axis.
x1 + x2 1
y1 + y2 .
z1 + z2
Scalar multiplication is similar.
(c) For the subspace to be closed under the inherited scalar multiplication, where
~v is a member of that subspace,
0
0 ~v = 0
0
must also be a member.
The converse does not hold. Here is a subset of R3 that contains the origin
0
1
{ 0 , 0 }
0
0
(this subset has only two elements) but is not a subspace.
Answers to Exercises
77
Two.I.2.40 (a) The union of the x-axis and the y-axis in R2 is one.
(b) The set of integers, as a subset of R1 , is one.
(c) The subset {~v } of R2 is one, where ~v is any nonzero vector.
Two.I.2.41 Because vector space addition is commutative, a reordering of summands
leaves a linear combination unchanged.
Two.I.2.42 We always consider that span in the context of an enclosing space.
Two.I.2.43 It is both if and only if.
For if, let S be a subset of a vector space V and assume ~v S satisfies
~v = c1~s1 + + cn~sn where c1 , . . . , cn are scalars and ~s1 , . . . ,~sn S. We must
show that [S {~v }] = [S].
Containment one way, [S] [S {~v }] is obvious. For the other direction,
[S {~v }] [S], note that if a vector is in the set on the left then it has the form
d0~v + d1~t1 + + dm~tm where the ds are scalars and the ~t s are in S. Rewrite
that as d0 (c1~s1 + + cn~sn ) + d1~t1 + + dm~tm and note that the result is a
member of the span of S.
The only if is clearly true adding ~v enlarges the span to include at least ~v.
Two.I.2.44 (a) Always.
Assume that A, B are subspaces of V. Note that their intersection is not
empty as both contain the zero vector. If w
~ ,~s A B and r, s are scalars then
r~v + s~
w A because each vector is in A and so a linear combination is in A, and
r~v + s~
w B for the same reason. Thus the intersection is closed. Now Lemma 2.9
applies.
(b) Sometimes (more precisely, only if A B or B A).
To see the answer is not always, take V to be R3 , take A to be the x-axis,
and B to be the y-axis. Note that
!
!
!
!
1
0
1
0
A and
B but
+
6 A B
0
1
0
1
as the sum is in neither A nor B.
The answer is not never because if A B or B A then clearly A B is a
subspace.
To show that A B is a subspace only if one subspace contains the other,
we assume that A 6 B and B 6 A and prove that the union is not a subspace.
The assumption that A is not a subset of B means that there is an a
~ A with
a
~ 6 B. The other assumption gives a ~b B with ~b 6 A. Consider a
~ + ~b. Note
that sum is not an element of A or else (~
a + ~b) a
~ would be in A, which it is
not. Similarly the sum is not an element of B. Hence the sum is not an element
of A B, and so the union is not a subspace.
78
(c) Never. As A is a subspace, it contains the zero vector, and therefore the set
that is As complement does not. Without the zero vector, the complement
cannot be a vector space.
Two.I.2.45 The span of a set does not depend on the enclosing space. A linear
combination of vectors from S gives the same sum whether we regard the operations
as those of W or as those of V, because the operations of W are inherited from V.
Two.I.2.46 It is; apply Lemma 2.9. (You must consider the following. Suppose B is a
subspace of a vector space V and suppose A B V is a subspace. From which
space does A inherit its operations? The answer is that it doesnt matter A will
inherit the same operations in either case.)
Two.I.2.47 (a) Always; if S T then a linear combination of elements of S is also a
linear combination of elements of T .
(b) Sometimes (more precisely, if and only if S T or T S).
3
The answer is not always
shown by this
example
asis
from R
1
0
1
0
S = { 0 , 1 }, T = { 0 , 0 }
0
0
0
1
because of this.
1
1
1 [S T ]
1 6 [S] [T ]
1
1
The answer is not never because if either set contains the other then equality
is clear. We can characterize equality as happening only when either set contains
the other by assuming S 6 T (implying the existence of a vector ~s S with ~s 6 T )
and T 6 S (giving a ~t T with ~t 6 S), noting ~s + ~t [S T ], and showing that
~s + ~t 6 [S] [T ].
(c) Sometimes.
Clearly [S T ] [S] [T ] because any linear combination of vectors from S T
is a combination of vectors from S and also a combination of vectors from T .
2
Containment the other way does
! not
! always hold.!For instance, in R , take
S={
1
0
,
},
0
1
T ={
2
}
0
Answers to Exercises
79
(d) Never, as the span of the complement is a subspace, while the complement of
the span is not (it does not contain the zero vector).
Two.I.2.48 Call the subset S. By Lemma 2.9, we need to check that [S] is closed
under linear combinations. If c1~s1 + + cn~sn , cn+1~sn+1 + + cm~sm [S] then
for any p, r R we have
p (c1~s1 + + cn~sn ) + r (cn+1~sn+1 + + cm~sm )
= pc1~s1 + + pcn~sn + rcn+1~sn+1 + + rcm~sm
which is an element of [S].
Two.I.2.49 For this to happen, one of the conditions giving the sensibleness of the
addition and scalar multiplication operations must be violated. Consider R2 with
these operations.
!
!
!
!
!
x1
x2
0
x
0
+
=
r
=
y1
y2
0
y
0
The set R2 is closed under these operations. But it is not a vector space.
!
!
1
1
1
6=
1
1
Linear Independence
Two.II.1: Definition and Examples
Two.II.1.20 For each of these, when the subset is independent you must prove it, and
when the subset is dependent you must give an example of a dependence.
(a) It is dependent. Considering
1
2
4
0
c1 3 + c2 2 + c3 4 = 0
5
4
14
0
gives this linear system.
c1 + 2c2 + 4c3 = 0
3c1 + 2c2 4c3 = 0
5c1 + 4c2 + 14c3 = 0
80
1 2
3 2
5 4
4
4
14
0
0
31 +2
51 +3
(3/4)2 +3
1 2
0 8
0 0
4
8
0
0
0
yields a free variable, so there are infinitely many solutions. For an example of
a particular dependence we can set c3 to be, say, 1. Then we get c2 = 1 and
c1 = 2.
(b) It is dependent. The linear system that arises here
1 2 3 0
1 2
3
0
71 +2 2 +3
7 7 7 0
0 7 14 0
71 +3
0
7 7 7 0
0 0
0
has infinitely many solutions. We can get a particular solution by taking c3 to
be, say, 1, and back-substituting to get the resulting c2 and c1 .
(c) It is linearly independent. The system
1 4 0
0 1 0
1 2 3 1
0 1 0
0 0 0
1 4 0
0 0 0
has only the solution c1 = 0 and c2 = 0. (We could also have gotten the answer
by inspection the second vector is obviously not a multiple of the first, and
vice versa.)
(d) It is linearly dependent. The linear system
9 2 3
12 0
12 0
9 0 5
0 1 4 1 0
has more unknowns than equations, and so Gausss Method must end with at
least one variable free (there cant be a contradictory equation because the system
is homogeneous, and so has at least the solution of all zeroes). To exhibit a
combination, we can do the reduction
9 2
3
12 0
1 +2
(1/2)2 +3
0 0
0 2 2
0 0 3 1 0
and take, say, c4 = 1. Then we have that c3 = 1/3, c2 = 1/3, and c1 =
31/27.
Two.II.1.21 In the cases of independence, you must prove that it is independent.
Otherwise, you must exhibit a dependence. (Here we give a specific dependence
but others are possible.)
Answers to Exercises
81
3
5
1
0
3
5
1 0
(1/3)1 +2 32 (12/13)2 +3
4
0
0 13
1 6 1 0
31 +3
9
3 5 0
0
0
128/13 0
with only one solution: c1 = 0, c2 = 0, and c3 = 0.
(b) This set is independent. We can see this by inspection, straight from the
definition of linear independence. Obviously neither is a multiple of the other.
(c) This set is linearly independent. The linear system reduces in this way
0
2 3
4 0
2
3
4
(1/2)1 +2 (17/5)2 +3
2
0
1 1 0 0
0 5/2
(7/2)1 +3
7 2 3 0
0
0
51/5 0
to show that there is only the solution c1 = 0, c2 = 0, and c3 = 0.
(d) This set is linearly dependent. The linear system
8 0 2 8 0
3 1 2 2 0
3 2 2 5 0
must, after reduction, end with at least one variable free (there are more variables
than equations, and there is no possibility of a contradictory equation because
the system is homogeneous). We can take the free variables as parameters to
describe the solution set. We can then set the parameter to a nonzero value to
get a nontrivial linear relation.
Two.II.1.22
1 1 0
1
21 +2
2
1
0
0 0 0
0
1
3
0
0
0
82
1 1 1 0
11 0
3 4
1 3
7 0
1 1 1 0
31 +2
14 0
0 7
1 +3
0 4
8 0
1 1 1 0
(4/7)2 +3
14 0
0 7
0 0
0 0
with infinitely many solutions, that is, more than just the trivial solution.
c1
1
{ c2 = 2 c3 | c3 R }
1
c3
So the set is linearly dependent. One dependence comes from setting c3 = 2,
giving c1 = 2 and c2 = 4.
(c) Without having to set up a system we can see that the second element of the
set is a multiple of the first (namely, 0 times the first).
Two.II.1.23 Let Z be the zero function Z(x) = 0, which is the additive identity in the
vector space under discussion.
(a) This set is linearly independent. Consider c1 f(x) + c2 g(x) = Z(x). Plugging
in x = 1 and x = 2 gives a linear system
c1 1 +
c2 1 = 0
c1 2 + c2 (1/2) = 0
with the unique solution c1 = 0, c2 = 0.
(b) This set is linearly independent. Consider c1 f(x) + c2 g(x) = Z(x) and plug
in x = 0 and x = /2 to get
c1 1 + c2 0 = 0
c1 0 + c2 1 = 0
which obviously gives that c1 = 0, c2 = 0.
(c) This set is also linearly independent. Considering c1 f(x) + c2 g(x) = Z(x)
and plugging in x = 1 and x = e
c1 e + c2 0 = 0
c1 e e + c2 1 = 0
gives that c1 = 0 and c2 = 0.
Two.II.1.24 In each case, if the set is independent then you must prove that and if it
is dependent then you must exhibit a dependence.
(a) This set is dependent. The familiar relation sin2 (x) + cos2 (x) = 1 shows that
2 = c1 (4 sin2 (x)) + c2 (cos2 (x)) is satisfied by c1 = 1/2 and c2 = 2.
Answers to Exercises
83
c1 + ( 2/2)c2 + c3 = 0
whose only solution is c1 = 0, c2 = 0, and c3 = 0.
(c) By inspection, this set is independent. Any dependence cos(x) = c x is not
possible since the cosine function is not a multiple of the identity function (we
are applying Corollary 1.18).
(d) By inspection, we spot that there is a dependence. Because (1+x)2 = x2 +2x+1,
we get that c1 (1 + x)2 + c2 (x2 + 2x) = 3 is satisfied by c1 = 3 and c2 = 3.
(e) This set is dependent. The easiest way to see that is to recall the trigonometric
relationship cos2 (x) sin2 (x) = cos(2x). (Remark. A person who doesnt recall
this, and tries some xs, simply never gets a system leading to a unique solution,
and never gets to conclude that the set is independent. Of course, this person
might wonder if they simply never tried the right set of xs, but a few tries will
lead most people to look instead for a dependence.)
(f) This set is dependent, because it contains the zero object in the vector space,
the zero polynomial.
Two.II.1.25 No, that equation is not a linear relationship. In fact this set is independent, as the system arising from taking x to be 0, /6 and /4 shows.
Two.II.1.26 No. Here are two members of the plane where the second is a multiple of
the first.
1
2
0 , 0
0
0
(Another reason that the answer is no is the the zero vector is a member of the
plane and no set containing the zero vector is linearly independent.)
Two.II.1.27 We have already showed this: the Linear Combination Lemma and its
corollary state that in an echelon form matrix, no nonzero row is a linear combination
of the others.
Two.II.1.28 (a) Assume that {~u,~v, w
~ } is linearly independent, so that any relationship d0 ~u + d1~v + d2 w
~ = ~0 leads to the conclusion that d0 = 0, d1 = 0, and
d2 = 0.
Consider the relationship c1 (~u) + c2 (~u + ~v) + c3 (~u + ~v + w
~ ) = ~0. Rewrite it
~
to get (c1 + c2 + c3 )~u + (c2 + c3 )~v + (c3 )~
w = 0. Taking d0 to be c1 + c2 + c3 ,
84
Two.II.1.29 (a) A singleton set {~v } is linearly independent if and only if ~v 6= ~0.
For the if direction, with ~v 6= ~0, we can apply Lemma 1.5 by considering the
relationship c ~v = ~0 and noting that the only solution is the trivial one: c = 0.
For the only if direction, just recall that Example 1.11 shows that {~0 } is linearly
dependent, and so if the set {~v } is linearly independent then ~v 6= ~0.
(Remark. Another answer is to say that this is the special case of Lemma 1.14
where S = .)
(b) A set with two elements is linearly independent if and only if neither member
is a multiple of the other (note that if one is the zero vector then it is a multiple
of the other). This is an equivalent statement: a set is linearly dependent if and
only if one element is a multiple of the other.
The proof is easy. A set {~v1 ,~v2 } is linearly dependent if and only if there is a
relationship c1~v1 + c2~v2 = ~0 with either c1 6= 0 or c2 6= 0 (or both). That holds
if and only if ~v1 = (c2 /c1 )~v2 or ~v2 = (c1 /c2 )~v1 (or both).
Two.II.1.30 This set is linearly dependent set because it contains the zero vector.
Two.II.1.31 Lemma 1.19 gives the if half. The converse (the only if statement)
does not hold. An example is to consider the vector space R2 and these vectors.
!
!
!
1
0
1
~x =
, ~y =
, ~z =
0
1
1
Two.II.1.32
Answers to Exercises
85
1
2
,
} R2
0
0
and these two linear!combinations
! give the
! same result
!
!
0
1
2
1
2
=2
1
=4
2
0
0
0
0
0
Thus, a linearly dependent set might have indistinct sums.
In fact, this stronger statement holds: if a set is linearly dependent then it
must have the property that there are two distinct linear combinations that sum
to the same vector. Briefly, where c1~s1 + + cn~sn = ~0 then multiplying both
sides of the relationship by two gives another relationship. If the first relationship
is nontrivial then the second is also.
Two.II.1.33 In this if and only if statement, the if half is clear if the polynomial is
the zero polynomial then the function that arises from the action of the polynomial
must be the zero function x 7 0. For only if we write p(x) = cn xn + + c0 .
Plugging in zero p(0) = 0 gives that c0 = 0. Taking the derivative and plugging in
zero p0 (0) = 0 gives that c1 = 0. Similarly we get that each ci is zero, and p is the
zero polynomial.
Two.II.1.34 The work in this section suggests that we should define an n-dimensional
non-degenerate linear surface as the span of a linearly independent set of n vectors.
Two.II.1.35 (a) For any!a1,1 , . . . , a2,4
!,
!
!
!
S={
c1
a1,1
a2,1
+ c2
a1,2
a2,2
+ c3
a1,3
a2,3
+ c4
a1,4
a2,4
0
0
86
a1,1
a1,2
a1,3
a1,4
a1,5
0
a
a
a
a
a 0
2,1
2,2
2,3
2,4
2,5
c1
+ c2
+ c3
+ c4
+ c5
=
a3,1
a3,2
a3,3
a3,4
a3,5 0
a4,1
a4,2
a4,3
a4,4
a4,5
0
and note that the resulting linear system
a1,1 c1 + a1,2 c2 + a1,3 c3 + a1,4 c4 + a1,5 c5 = 0
a2,1 c1 + a2,2 c2 + a2,3 c3 + a2,4 c4 + a2,5 c5 = 0
a3,1 c1 + a3,2 c2 + a3,3 c3 + a3,4 c4 + a3,5 c5 = 0
a4,1 c1 + a4,2 c2 + a4,3 c3 + a4,4 c4 + a4,5 c5 = 0
Answers to Exercises
87
has four equations and five unknowns, so Gausss Method must end with at least
one c variable free, so there are infinitely many solutions, and so the above linear
relationship among the four-tall vectors has more solutions than just the trivial
solution.
The smallest linearly independent set is the empty set.
The biggest linearly dependent set is R4 . The smallest is {~0 }.
Two.II.1.39 (a) The intersection of two linearly independent sets S T must be
linearly independent as it is a subset of the linearly independent set S (as well as
the linearly independent set T also, of course).
(b) The complement of a linearly independent set is linearly dependent as it
contains the zero vector.
(c) A simple example in R2 is these two sets.
!
!
1
0
S={
}
T ={
}
0
1
A somewhat subtler example, again in R2 , is these two.
!
!
!
1
1
0
S={
}
T ={
,
}
0
0
1
(d) We must produce an example. One, in R2 , is
!
!
1
2
S={
}
T ={
}
0
0
since the linear dependence of S1 S2 is easy to see.
Two.II.1.40 (a) Lemma 1.5 requires that the vectors ~s1 , . . . ,~sn , ~t1 , . . . , ~tm be distinct.
But we could have that the union S T is linearly independent with some ~si
equal to some ~tj .
(b) One example in R2 is these two.
!
!
!
1
1
0
S={
}
T ={
,
}
0
0
1
(c) An example from R2 is these sets.
!
!
1
0
S={
,
}
0
1
!
!
1
1
T ={
,
}
0
1
(d) The union of two linearly independent sets S T is linearly independent if and
only if their spans of S and T (ST ) have a trivial intersection [S][T (ST )] =
{~0 }. To prove that, assume that S and T are linearly independent subsets of some
vector space.
For the only if direction, assume that the intersection of the spans is trivial
[S] [T (S T )] = {~0 }. Consider the set S (T (S T )) = S T and consider
88
Answers to Exercises
89
must eventually happen because S is finite, and [S] will be reached at worst when
we have used every vector from S.
Two.II.1.42
0
0
gives
ax + by = 0
cx + dy = 0
(c/a)1 +2
ax +
by = 0
((c/a)b + d)y = 0
1
b/a
c/a
0
1 b/a c/a 0
(1/a)1
d1 +2
e
f
0
d
0 (ae bd)/a (af cd)/a 0
g1 +3
g
h
i
0
0 (ah bg)/a (ai cg)/a 0
Then we get a 1 in the second row, second column entry. (Assuming for the
moment that ae bd 6= 0, in order to do the row reduction step.)
1
b/a
c/a
0
(a/(aebd))2
1
(af cd)/(ae bd) 0
0
0 (ah bg)/a
(ai cg)/a
0
Then, under the assumptions, we perform the row operation ((ah bg)/a)2 + 3
to get this.
1 b/a
c/a
0
1
(af cd)/(ae bd)
0
0
0
0
(aei + bgf + cdh hfa idb gec)/(ae bd) 0
90
0
1
b/a
c/a
0
(af cd)/a 0
0
0 (ah bg)/a (ai cg)/a 0
1
b/a
c/a
0
2 3
=
r
or
d
e
e
d
g
h
h
g
(or both) for some scalars r and s. Eliminating r and s in order to restate this
condition only in terms of the given letters a, b, d, e, g, h, we have that it is
not independent it is dependent iff ae bd = ah gb = dh ge.
(d) Dependence or independence is a function of the indices, so there is indeed a
formula (although at first glance a person might think the formula involves cases:
if the first component of the first vector is zero then . . . , this guess turns out
not to be correct).
Answers to Exercises
91
Two.II.1.43 Recall that two vectors from Rn are perpendicular if and only if their
dot product is zero.
~ are perpendicular nonzero vectors in Rn , with n > 1.
(a) Assume that ~v and w
With the linear relationship c~v + d~
w = ~0, apply ~v to both sides to conclude that
c k~vk2 + d 0 = 0. Because ~v 6= ~0 we have that c = 0. A similar application of
w
~ shows that d = 0.
(b) Two vectors in R1 are perpendicular if and only if at least one of them is zero.
We define R0 to be a trivial space, and so both ~v and w
~ are the zero vector.
(c) The right generalization is to look at a set {~v1 , . . . ,~vn } Rk of vectors that
are mutually orthogonal (also called pairwise perpendicular ): if i 6= j then ~vi is
perpendicular to ~vj . Mimicking the proof of the first item above shows that such
a set of nonzero vectors is linearly independent.
Two.II.1.44 (a) This check is routine.
(b) The summation is infinite (has infinitely many summands). The definition of
linear combination involves only finite sums.
(c) No nontrivial finite sum of members of { g, f0 , f1 , . . . } adds to the zero object: assume that
c0 (1/(1 x)) + c1 1 + + cn xn = 0
(any finite sum uses a highest power, here n). Multiply both sides by 1 x to
conclude that each coefficient is zero, because a polynomial describes the zero
function only when it is the zero polynomial.
Two.II.1.45 It is both if and only if.
Let T be a subset of the subspace S of the vector space V. The assertion that
any linear relationship c1~t1 + + cn~tn = ~0 among members of T must be the
trivial relationship c1 = 0, . . . , cn = 0 is a statement that holds in S if and only if
it holds in V, because the subspace S inherits its addition and scalar multiplication
operations from V.
92
Two.III.1.19 By Theorem 1.12, each is a basis if and only if we can express each vector
in the space in a unique way as a linear combination of the given vectors.
(a) Yes this is a basis. The relation
1
3
0
x
c1 2 + c2 2 + c3 0 = y
3
1
1
z
gives
1 3 0 x
1 3 0 x
21 +2 22 +3
2 2 0 y
0 4 0 2x + y
31 +3
3 1 1 z
0 0 1 x 2y + z
which has the unique solution c3 = x 2y + z, c2 = x/2 y/4, and c1 =
x/2 + 3y/4.
(b) This is not a basis. Setting it up as in the prior item
1
3
x
c1 2 + c2 2 = y
3
1
z
gives a linear system whose solution
1 3 x
1 3 x
21 +2 22 +3
2 2 y
0 4 2x + y
31 +3
3 1 z
0 0 x 2y + z
is possible if and only if the three-tall vectors components x, y, and z satisfy
x 2y + z = 0. For instance, we can find the coefficients c1 and c2 that work
when x = 1, y = 1, and z = 1. However, there are no cs that work for x = 1,
y = 1, and z = 2. Thus this is not a basis; it does not span the space.
Answers to Exercises
93
1
0 1 2 x
1 3 21 +2 (1/3)2 +3
2
1
5
y
1 1 0 z
0
to this reduction
1
3
0
0 1 1 x
1 1
1 3 21 +2 (1/3)2 +3
2 1 3 y
0 3
1 1 0 z
0 0
0
5
1/3
y + 2z
x y/3 2z/3
x, y, and z.
0
3
0
y + 2z
x y/3 2z/3
which does not have a solution for each triple x, y, and z. Instead, the span of
the given set includes only those three-tall vectors where x = y/3 + 2z/3.
Two.III.1.20
(a) We solve
!
c1
1
1
1
1
1
2
with
1
1
+ c2
!
1
1
1
2
1
0
1 +2
1
2
1
1
and conclude that c2 = 1/2 and so c1 = 3/2. Thus, the representation is this.
!
!
1
3/2
RepB (
)=
2
1/2
B
(b) The relationship c1 (1)+c2 (1+x)+c3 (1+x+x2 )+c4 (1+x+x2 +x3 ) = x2 +x3
is easily solved by eye to give that c4 = 1, c3 = 0, c2 = 1, and c1 = 0.
0
1
RepD (x2 + x3 ) =
0
1 D
0
0
1
1
(c) RepE4 ( ) =
0
0
1
1 E
4
Two.III.1.21 Solving
3
1
!
=
1
1
1
1
c1 +
!
c2
gives c1 = 2 and c2 = 1.
!
3
RepB1 (
)=
1
2
1
!
B1
94
!
=
1
2
!
c1 +
1
3
!
c2
gives this.
!
3
RepB2 (
)=
1
10
7
!
B2
Two.III.1.22 A natural basis is h1, x, x2 i. There are bases for P2 that do not contain
any polynomials of degree one or degree zero. One is h1 + x + x2 , x + x2 , x2 i. (Every
basis has at least one polynomial of degree two, though.)
Two.III.1.23 The reduction
!
!
1 4 3 1 0
1 4 3 1 0
21 +2
2 8 6 2 0
0 0 0 0 0
gives that the only condition is that x1 = 4x2 3x3 + x4 . The solution set is
4x2 3x3 + x4
x2
{
| x2 , x3 , x4 R}
x3
x4
1
3
4
0
0
1
= {x2 + x3 + x4 | x2 , x3 , x4 R }
0
1
0
1
0
0
and so the obvious candidate for
the
basis
is
this.
4
3
1
1 0 0
h , , i
0 1 0
0
1
0
Weve shown that this spans the space, and showing it is also linearly independent
is routine.
Two.III.1.24 There are many bases.
This!is a natural
!
! one. !
1 0
0 1
0 0
0 0
,
,
,
i
0 0
0 0
1 0
0 1
Two.III.1.25 For each item, many answers are possible.
(a) One way to proceed is to parametrize by expressing the a2 as a combination of
the other two a2 = 2a1 + a0 . Then a2 x2 + a1 x + a0 is (2a1 + a0 )x2 + a1 x + a0
and
h
{(2a1 + a0 )x2 + a1 x + a0 | a1 , a0 R}
= { a1 (2x2 + x) + a0 (x2 + 1) | a1 , a0 R }
Answers to Exercises
95
suggests h2x2 + x, x2 + 1i. This only shows that it spans, but checking that it is
linearly independent is routine.
(b) Parametrize {(a b c) | a + b = 0 } to get { (b b c) | b, c R }, which suggests using the sequence h(1 1 0), (0 0 1)i. Weve shown that it spans, and
checking that it is linearly independent is easy.
(c) Rewriting
!
!
!
a b
1 0
0 1
{
| a, b R} = {a
+b
| a, b R}
0 2b
0 0
0 2
suggests this for the basis.
1
h
0
!
0
0
,
0
0
!
1
i
2
!
c3
Using the upper right entries we see that c1 = 0. The upper left entries give that
c2 = 0, and the lower left entries show that c3 = 0.
Two.III.1.27 We will show that the second is a basis; the first is similar. We will show
this straight from the definition of a basis, because this example appears before
Theorem 1.12.
To see that it is linearly independent, we set up c1 (cos sin ) + c2 (2 cos +
3 sin ) = 0 cos + 0 sin . Taking = 0 and = /2 gives this system
c1 1 + c2 2 = 0 1 +2 c1 + 2c2 = 0
c1 (1) + c2 3 = 0
+ 5c2 = 0
96
(4/3)2 +3
2c1 + 3c2 = a0
3c2 = a0 + a1
0 = (4/3)a0 (4/3)a1 + a2
Answers to Exercises
97
gives that a1 = 12a2 109a3 and that a0 = 35a2 + 420a3 . Rewriting (35a2 +
420a3 )+(12a2 109a3 )x+a2 x2 +a3 x3 as a2 (3512x+x2 )+a3 (420109x+x3 )
suggests this for a basis h35 12x + x2 , 420 109x + x3 i. The above shows that
it spans the space. Checking it is linearly independent is routine. (Comment. A
worthwhile check is to verify that both polynomials in the basis have both seven
and five as roots.)
(c) Here there are three conditions on the cubics, that a0 +7a1 +49a2 +343a3 = 0,
that a0 + 5a1 + 25a2 + 125a3 = 0, and that a0 + 3a1 + 9a2 + 27a3 = 0. Gausss
Method
a0 + 7a1 + 49a2 + 343a3 = 0
a0 + 7a1 + 49a2 + 343a3 = 0
1 +2 22 +3
98
~ 1,
~1 +
~ 2,
~1 +
~ 3 i in this way
can represent the same ~v with respect to h2
~ 1 ) + d2 (
~1 +
~ 2 ) + d3 (
~1 +
~ 3 ).
~v = (1/2)(d1 d2 d3 )(2
Two.III.1.33 Each forms a linearly independent set if we omit ~v. To preserve linear
independence, we must expand the span of each. That is, we must determine the
span of each (leaving ~v out), and then pick a ~v lying outside of that span. Then to
finish, we must check that the result spans the entire given space. Those checks are
routine.
(a) Any vector that is not a multiple of the given one, that is, any vector that is
not on the line y = x will do here. One is ~v = ~e1 .
(b) By inspection, we notice that the vector ~e3 is not in the span of the set of the
two given vectors. The check that the resulting set is a basis for R3 is routine.
(c) For any member of the span { c1 (x) + c2 (1 + x2 ) | c1 , c2 R}, the coefficient
of x2 equals the constant term. So we expand the span if we add a quadratic
without this property, say, ~v = 1 x2 . The check that the result is a basis for P2
is easy.
~ 1 + + ck
~k
Two.III.1.34 To show that each scalar is zero, simply subtract c1
~
~
~
ck+1 k+1 cn n = 0. The obvious generalization is that in any equation
~ and in which each
~ appears only once, each scalar is zero.
involving only the s,
For instance, an equation with a combination of the even-indexed basis vectors (i.e.,
~ 2,
~ 4 , etc.) on the right and the odd-indexed basis vectors on the left also gives
1
}
2
1
2
!
= c2
1
2
implies that c1 = c2 . The idea here is that this subset fails to be a basis because it
fails to span the space; the proof of the theorem establishes that linear combinations
are unique if and only if the subset is linearly independent.
Answers to Exercises
Two.III.1.37
99
0 0
1 0 0
h0 0 0 , 0 1
0 0 0
0 0
basis.
0
0
0 , 0
0
0
!
0
0
,
0
0
0
0
0
!
0
0
,
1
1
0
0
0 , 1
1
0
1
0
0
!
1
i
0
0
0
0 , 0
0
1
0
0
0
1
0
0 , 0
0
0
0
0
1
1i
0
(c) As in the prior two questions, we can form a basis from two kinds of matrices.
First are the matrices with a single one on the diagonal and all other entries
zero (there are n of those matrices). Second are the matrices with two opposed
off-diagonal entries are ones and all other entries are zeros. (That is, all entries
in M are zero except that mi,j and mj,i are one.)
Two.III.1.38 (a) Any four vectors from R3 are linearly related because the vector
equation
x1
x2
x3
x4
0
c1 y1 + c2 y2 + c3 y3 + c4 y4 = 0
z1
z2
z3
z4
0
gives rise to a linear system
x1 c1 + x2 c2 + x3 c3 + x4 c4 = 0
y 1 c1 + y2 c2 + y 3 c3 + y 4 c4 = 0
z 1 c1 + z 2 c2 + z 3 c3 + z 4 c4 = 0
that is homogeneous (and so has a solution) and has four unknowns but only
three equations, and therefore has nontrivial solutions. (Of course, this argument
applies to any subset of R3 with four or more vectors.)
(b) We shall do just the two-vector case. Given x1 , . . . , z2 ,
x1
x2
S = { y1 , y2 }
z1
z2
to decide which vectors
x
y
z
100
x1
x2
x
c1 y1 + c2 y2 = y
z1
z2
z
and row reduce the resulting system.
x1 c1 + x2 c2 = x
y1 c1 + y2 c2 = y
z1 c1 + z2 c2 = z
There are two variables c1 and c2 but three equations, so when Gausss Method
finishes, on the bottom row there will be some relationship of the form 0 =
m1 x + m2 y + m3 z. Hence, vectors in the span of the two-element set S must
satisfy some restriction. Hence the span is not all of R3 .
Two.III.1.39 We have (using these odball operations with care)
1yz
y + 1
z + 1
{
y
| y, z R } = { y + 0 | y, z R }
z
0
z
0
0
= {y 1 + z 0 | y, z R }
0
1
and so a natural candidate for a basis is this.
0
0
h1 , 0i
1
0
To check linear independence we set up
1
0
0
c1 1 + c2 0 = 0
0
0
1
(the vector on the right is the zero object in this space). That yields the linear
system
(c1 + 1) + (c2 + 1) 1 = 1
c1
=0
c2
=0
with only the solution c1 = 0 and c2 = 0. Checking the span is similar.
Answers to Exercises
101
Two.III.2: Dimension
Two.III.2.16 One basis is h1, x, x2 i, and so the dimension is three.
Two.III.2.17 The solution set is
4x2 3x3 + x4
x2
{
| x2 , x3 , x4 R }
x3
x4
so a natural basis is this
4
3
1
1 0 0
h , , i
0 1 0
0
0
1
(checking linear independence is easy). Thus the dimension is three.
Two.III.2.18
wz
0
1
1
y 1
0
0
{
= y + z + w | y, z, w R}
z 0
1
0
w
0
0
1
That gives the space as the span of the three-vector set. To show the three vector
set makes a basis we check that it is linearly independent.
0
0
1
1
0 1
0
0
= c1 + c2 + c3
0 0
1
0
0
0
0
1
The second components give that c1 = 0, and the third and fourth components
give that c2 = 0 and c3 = 0. So one basis is this.
0
1
1
1 0 0
h , , i
0 1 0
0
0
1
The dimension is the number of vectors in a basis: 3.
102
a
0
{ 0
0
0
0 0 0 0
b 0 0 0
0 c 0 0 | a, . . . , e R}
0 0 d 0
0 0 0 e
0
1 0 0 0 0
0
0 0 0 0 0
{ 0 0 0 0 0 a + 0
0
0 0 0 0 0
0
0 0 0 0 0
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0 b + | a, . . . , e R }
0
0
1 0 0 0 0
0 0 0 0 0
0 0
0 0 0 0 0 0 1 0 0 0
0 0
h0 0 0 0 0 , 0 0 0 0 0 , . . . , 0 0
0 0 0 0 0 0 0 0 0 0
0 0
0 0 0 0 0
0 0 0 0 0
0 0
trivial. So this is a
0
0
0
0
0
0
0
0
0
0
0
0
0i
0
1
(c) The restrictions form a two-equations, four-unknowns linear system. Parametrizing that system to express the leading variables in terms of those that are free
gives a0 = a1 , a1 = a1 , a2 = 2a3 , and a3 = a3 .
{a1 + a1 x + 2a3 x2 + a3 x3 | a1 , a3 R}
= {(1 + x) a1 + (2x2 + x3 ) a3 | a1 , a3 R }
That description shows that the space is the span of the two-element set
{ 1 + x, x2 + x3 }. We will be done if we show the set is linearly independent.
This relationship
0 + 0x + 0x2 + 0x3 = (1 + x) c1 + (2x2 + x3 ) c2
gives that c1 = 0 from the constant terms, and c2 = 0 from the cubic terms. One
basis for the space is h1 + x, 2x2 + x3 i. This is a two-dimensional space.
Two.III.2.19 For this space
!
a b
{
| a, b, c, d R}
c d
= {a
1
0
0
0
!
+ + d
0
0
0
1
!
| a, b, c, d R }
Answers to Exercises
103
!
0
0
,
0
0
!
1
0
,
0
1
!
0
0
,
0
0
!
0
i
1
2 0
1 0
!
+d
0
0
0
1
!
| b, c, d R}
1
h
0
!
0
i
1
!
0
i
1
104
Two.III.2.23 First recall that cos 2 = cos2 sin2 , and so deletion of cos 2 from
this set leaves the span unchanged. Whats left, the set {cos2 , sin2 , sin 2 }, is
linearly independent (consider the relationship c1 cos2 +c2 sin2 +c3 sin 2 = Z()
where Z is the zero function, and then take = 0, = /4, and = /2 to conclude
that each c is zero). It is therefore a basis for its span. That shows that the span is
a dimension three vector space.
Two.III.2.24 Here is a basis
h(1 + 0i, 0 + 0i, . . . , 0 + 0i), (0 + 1i, 0 + 0i, . . . , 0 + 0i), (0 + 0i, 1 + 0i, . . . , 0 + 0i), . . .i
and so the dimension is 2 47 = 94.
Two.III.2.25 A
1
h 0
0
basis is
0 0 0
0 0 0
0 0 0
0
0
0 , 0
0
0
1
0
0
0
0
0
0
0
0
0
0
0 , . . . , 0
0
0
0
0
0
0
0
0
0
0
0
0 i
1
(a) One
(b) Two
(c) n
Two.III.2.29 We need only produce an infinite linearly independent set. One is such
sequence is hf1 , f2 , . . .i where fi : R R is
1 if x = i
fi (x) =
0 otherwise
the function that has value 1 only at x = i.
Answers to Exercises
105
Two.III.2.30 A function is a set of ordered pairs (x, f(x)). So there is only one function
with an empty domain, namely the empty set. A vector space with only one element
a trivial vector space and has dimension zero.
Two.III.2.31 Apply Corollary 2.11.
Two.III.2.32 A plane has the form {~p + t1~v1 + t2~v2 | t1 , t2 R}. (The first chapter
also calls this a 2-flat, and contains a discussion of why this is equivalent to the
description often taken in Calculus as the set of points (x, y, z) subject to a condition
of the form ax + by + cz = d). When the plane passes through the origin we can
take the particular vector ~p to be ~0. Thus, in the language we have developed in
this chapter, a plane through the origin is the span of a set of two vectors.
Now for the statement. Asserting that the three are not coplanar is the same
as asserting that no vector lies in the span of the other two no vector is a linear
combination of the other two. Thats simply an assertion that the three-element
set is linearly independent. By Corollary 2.15, thats equivalent to an assertion
that the set is a basis for R3 (more precisely, any sequence made from the sets
elements is a basis).
Two.III.2.33 Let the space V be finite dimensional and let S be a subspace of V.
If S is not finite dimensional then it has a linearly independent set that is
infinite (start with the empty set and iterate adding vectors that are not linearly
dependent on the set; this process can continue for infinitely many steps or else S
would be finite dimensional). But any linearly independent subset of S is a linearly
independent subset of V, contradicting Corollary 2.11
~
Two.III.2.34 It ensures that we exhaust the s.
That is, it justifies the first sentence
of the last paragraph.
Two.III.2.35 Let BU be a basis for U and let BW be a basis for W. Consider the
concatenation of the two basis sequences. If there is a repeated element then the
intersection U W is nontrivial. Otherwise, the set BU BW is linearly dependent
as it is a six member subset of the five-dimensional space R5 . In either case some
member of BW is in the span of BU , and thus U W is more than just the trivial
space {~0 }.
Generalization: if U, W are subspaces of a vector space of dimension n and if
dim(U) + dim(W) > n then they have a nontrivial intersection.
Two.III.2.36 First, note that a set is a basis for some space if and only if it is linearly
independent, because in that case it is a basis for its own span.
(a) The answer to the question in the second paragraph is yes (implying yes
answers for both questions in the first paragraph). If BU is a basis for U then
BU is a linearly independent subset of W. Apply Corollary 2.13 to expand it to
a basis for W. That is the desired BW .
106
Answers to Exercises
107
~v =
..
.
z
Then (~v) = ~v for every permutation , so V is just the span of ~v, which has
dimension 0 or 1 according to whether ~v is ~0 or not.
Now suppose not all the coordinates of ~v are equal; let x and y with x 6= y be
among the coordinates of ~v. Then
we can find permutations
1 and 2 such that
x
y
y
x
a
a
3
1 (~v) = and 2 (~v) =
3
..
..
.
.
an
an
for some a3 , . . . , an R. Therefore,
1
1
1
0
1 (~v) 2 (~v) =
yx
..
.
0
is in V. That is, ~e2 ~e1 V, where ~e1 , ~e2 , . . . , ~en is the standard basis for Rn .
Similarly, ~e3 ~e2 , . . . , ~en ~e1 are all in V. It is easy to see that the vectors ~e2 ~e1 ,
~e3 ~e2 , . . . , ~en ~e1 are linearly independent (that is, form a linearly independent
set), so dim V > n 1.
Finally, we can write
~v = x1~e1 + x2~e2 + + xn~en
= (x1 + x2 + + xn )~e1 + x2 (~e2 ~e1 ) + + xn (~en ~e1 )
This shows that if x1 + x2 + + xn = 0 then ~v is in the span of ~e2 ~e1 , . . . ,
e~n ~e1 (that is, is in the span of the set of those vectors); similarly, each (~v)
will be in this span, so V will equal this span and dim V = n 1. On the other
hand, if x1 + x2 + + xn 6= 0 then the above equation shows that ~e1 V and thus
~e1 , . . . , ~en V, so V = Rn and dim V = n.
108
Two.III.3.16
(e)
1
2
(a)
2
1
3
1
!
(b)
2
1
1
3
(c) 4
3
7
8
(d) (0 0 0)
0 1 1 1
1
1 0
2
1 2 31 +2 2 +3
0 1 1
1
2 1
1 0
3 1
7 1
0 0
0 1
Thus, the vector is not in the row space.
Two.III.3.18
1 +2
c1 + c2 = 1
0=2
1 3
1 1
1 3
1
21 +2 2 +3
2
0
4
0
0
6
2
1
3
1 3 3 0
0 0 6
2
1
Answers to Exercises
109
2c1 + 5c2 = 3
4c2 = 4
and Gausss Method produces c2 = 1 and c1 = 1. That is, there is indeed such
a pair of scalars and so the vector is indeed in the column space of the matrix.
(b) No; we are asking if there are!scalars c1 and
! c2 such
! that
4
8
0
+ c2
=
2
4
1
and one way to proceed is to consider the resulting linear system
4c1 8c2 = 0
2c1 4c2 = 1
that is easily seen to have no solution. Another way to proceed is to note that
any linear combination of the columns on the left has a second component half
as big as its first component, but the vector on the right does not meet that
criterion.
(c) Yes; we can simply observe that the vector is the first column minus the second.
Or, failing that, setting
the columns
upthe relationship
among
1
1
1
2
c1 1 + c2 1 + c3 1 = 0
1
1
1
0
and considering the resulting linear system
c1 c2 + c3 = 2
c1 c2 + c3 = 2
c1 c2 + c3 = 2
1 +2
2 +3
c1 + c2 c3 = 0
2c2 2c3 = 2
2c2 2c3 = 2
1 +3
c1 c2 + c3 = 0
2c2 + 2c3 = 2
0= 0
gives the additional information (beyond that there is at least one solution) that
there are infinitely many solutions. Parametrizing gives c2 = 1 + c3 and c1 = 1,
and so taking c3 to be zero gives a particular solution of c1 = 1, c2 = 1, and
c3 = 0 (which is, of course, the observation made at the start).
Two.III.3.20
A routineGaussian reduction
2 0 3
4
2 0
3
4
0 1 1 1
1
1
(3/2)1 +3 2 +3 3 +4 0 1
3 1 0
0 0 11/2 3
2 (1/2)1 +4
1 0 4 1
0 0
0
0
c1
1 0 4 1
1 1
1 4 31 +3 2 +3 3 +4 0 1
21 +4
0 0 11
6
0 0 0
0
110
(1/2)1 +3
0
0
1
3/2
0
1/2
4/3
shows that the row rank, and hence the rank, is three.
(b) Inspection of the columns shows that the others are multiples of the first
(inspection of the rows shows the same thing). Thus the rank is one.
Alternatively, the reduction
1 1 2
1 1 2
31 +2
3 3 6
0 0 0
21 +3
2 2 4
0 0 0
shows the same thing.
(c) This calculation
1 3 2
5 1 1
6 4 3
51 +2 2 +3
61 +3
0
0
3
14
0
9
0
1 2 0
1 2
0
3 1 1
1
(3/5)2 +3
23 +4 0 5
31 +2
1 1 1 1 +3
0 0 8/5
(4/5)2 +4
21 +4
2 0 4
0 0
0
Discard the zero vector as showing that there was a redundancy among the starting
vectors, to get this basis for the column space.
1
0
0
h2 , 5 , 0 i
0
1
8/5
The matrixs rank is the dimension of its column space, so it is three. (It is also
equal to the dimension of its row space.)
Answers to Exercises
Two.III.3.23
111
1 3
1 3
1 4
2 1
1 +2
(1/6)2 +3
1 +3
21 +4
(5/6)2 +4
1
0
0
0
3
6
0
0
1 2
1
1 2
1
1 2
1
31 +2
2 +3
0 5 4
3 1 1
0 5 4
1 +3
1 3 3
0 5 4
0 0
0
and then transposing back gives this
basis.
1
0
h2 , 5i
1
4
(c) Notice first that the surrounding space is as P3 , not P2 . Then, taking the first
polynomial 1 + 1 x + 0 x2 + 0 x3 to be the same as the row vector (1 1 0 0),
etc., leads to
1 1 0 0
1 1
0 0
1 +2 2 +3
1 0 1 0
0 1 1 0
31 +3
3 2 1 0
0 0
0 0
which yields the basis h1 + x, x x2 i.
(d) Here the same gives
1 0 1
3
1 1
1 +2
2
1
4
1 0 3
1 +3
1 0 5 1 1 9
leading to this basis.
!
1
h
3
0
1
0
0
1
0 0
,
1
1 0
!
2
i
5
22 +3
1
1 3
1
1 +2 42 +3
1
2
0
0
0 12 6
0
One basis for the span is this.
1
0
0
h1 , 3 , 0 i
3
3
6
0
0
0
1 3
2 1
0 0
1
0
0
5
0
3
6
112
0 1 1
2 2 0
1 2 (7/2)1 +3 72 +3
21 +4
72 +4
7 0 0
4 3 2
2 2 0
1
1
(5/7)3 +4 0
0 0 7
0 0
0
One basis for the span of that set is h2 2x, x + x2 , 5x2 i.
Two.III.3.25 Only the zero matrices have rank of zero. The only matrices of rank one
have the form
k1
..
.
km
where is some nonzero row vector, and not all of the ki s are zero. (Remark. We
cant simply say that all of the rows are multiples of the first because the first row
might be the zero row. Another Remark. The above also applies with column
replacing row.)
Two.III.3.26 If a 6= 0 then a choice of d = (c/a)b will make the second row be a
multiple of the first, specifically, c/a times the first. If a = 0 and b = 0 then any
non-0 choice for d will ensure that the second row is nonzero. If a = 0 and b 6= 0
and c = 0 then any choice for d will do, since the matrix will automatically have
rank one (even with the choice of d = 0). Finally, if a = 0 and b 6= 0 and c 6= 0
then no choice for d will suffice because the matrix is sure to have rank two.
Two.III.3.27 The column rank is two. One way to see this is by inspection the
column space consists of two-tall columns and so can have a dimension of at least
two, and we can easily find two columns that together form a linearly independent
set (the fourth and fifth columns, for instance). Another way to see this is to recall
that the column rank equals the row rank, and to perform Gausss Method, which
leaves two nonzero rows.
Two.III.3.28 We apply Theorem 3.13. The number of columns of a matrix of coefficients A of a linear system equals the number n of unknowns. A linear system
with at least one solution has at most one solution if and only if the space of
solutions of the associated homogeneous system has dimension zero (recall: in the
General = Particular + Homogeneous equation ~v = ~p + ~h, provided that such a ~p
Answers to Exercises
113
exists, the solution ~v is unique if and only if the vector ~h is unique, namely ~h = ~0).
But that means, by the theorem, that n = r.
Two.III.3.29 The set of columns must be dependent because the rank of the matrix
is at most five while there are nine columns.
Two.III.3.30 There is little danger of their being equal since the row space is a set of
row vectors while the column space is a set of columns (unless the matrix is 11,
in which case the two spaces must be equal).
Remark. Consider
!
1 3
A=
2 6
and note that the row space is the set of all multiples of (1 3) while the column
space consists of multiples of
!
1
2
so we also cannot argue that the two spaces must be simply transposes of each
other.
Two.III.3.31 First, the vector space is the set of four-tuples of real numbers, under
the natural operations. Although this is not the set of four-wide row vectors, the
difference is slight it is the same as that set. So we will treat the four-tuples
like four-wide vectors.
With that, one way to see that (1, 0, 1, 0) is not in the span of the first set is to
note that this
reduction
1 1 2 3
1 1 2 3
1 +2 2 +3
1 1 2 0
0 2 0 3
31 +3
3 1 6 6
0 0 0 0
and this
one
1 1
1 1
3 1
1 0
2 3
2 0
6 6
1 0
1 +2
2 +3
31 +3 (1/2)2 +4
1 +4
3 4
1
0
0
0
1
2
0
0
2
0
1
0
3
3
3/2
0
yield matrices differing in rank. This means that addition of (1, 0, 1, 0) to the set
of the first three four-tuples increases the rank, and hence the span, of that set.
Therefore (1, 0, 1, 0) is not already in the span.
Two.III.3.32 It is a subspace because it is the column space of the matrix
3 2 4
1 0 1
2 2 5
114
3 1 2
3
1
2
(2/3)1 +2 (7/2)2 +3
2 0 2
0 2/3 2/3
(4/3)1 +3
4 1 5
0
0
0
and transpose back to get this.
3
0
h1 , 2/3i
2
2/3
Two.III.3.33 We can do this as a straightforward calculation.
T
ra1,1 + sb1,1 . . . ra1,n + sb1,n
..
(rA + sB)T =
.
ram,1 + sbm,1 . . . ram,n + sbm,n
..
=
.
ra1,n + sb1,n . . . ram,n + sbm,n
ra1,1 . . . ram,1
sb1,1 . . . sbm,1
..
..
=
+
.
.
ra1,n
T
...
ram,n
sb1,n
...
sbm,n
= rA + sB
Two.III.3.34
1
1
2
2
1
0 0 1
1 2 1
1 +2
2
1
!
1
4
3
0
2
1
0
4
0
!
1
4
1
3
2
4
2
1
4
3
1
4
2
5
2
1
22
1
0
2
0
!
0
2
Answers to Exercises
115
(c) Assume that A and B are matrices with equal row spaces. Construct a matrix
C with the rows of A above the rows of B, and another matrix D with the rows
of B above the rows of A.
!
!
A
B
C=
D=
B
A
Observe that C and D are row-equivalent (via a sequence of row-swaps) and so
Gauss-Jordan reduce to the same reduced echelon form matrix.
Because the row spaces are equal, the rows of B are linear combinations of
the rows of A so Gauss-Jordan reduction on C simply turns the rows of B to
zero rows and thus the nonzero rows of C are just the nonzero rows obtained by
Gauss-Jordan reducing A. The same can be said for the matrix D Gauss-Jordan
reduction on D gives the same non-zero rows as are produced by reduction on B
alone. Therefore, A yields the same nonzero rows as C, which yields the same
nonzero rows as D, which yields the same nonzero rows as B.
Two.III.3.35 It cannot be bigger.
Two.III.3.36 The number of rows in a maximal linearly independent set cannot exceed
the number of rows. A better bound (the bound that is, in general, the best possible)
is the minimum of m and n, because the row rank equals the column rank.
Two.III.3.37 Because the rows of a matrix A are the columns of AT the dimension
of the row space of A equals the dimension of the column space of AT . But the
dimension of the row space of A is the rank of A and the dimension of the column
space of AT is the rank of AT . Thus the two ranks are equal.
Two.III.3.38 False. The first is a set of columns while the second is a set of rows.
This example, however,
!
1 4
1 2 3
A=
,
A T = 2 5
4 5 6
3 6
indicates that as soon as we have a formal meaning for the same, we can apply it
here:
!
!
!
1
2
3
Columnspace(A) = [{
,
,
}]
4
5
6
while
Rowspace(AT ) = [{(1 4), (2 5), (3 6) }]
are the same as each other.
Two.III.3.39 No. Here, Gausss Method does not change the column space.
!
!
1 0
1 0
31 +2
3 1
0 1
116
Answers to Exercises
117
step1
stepk
echelon form
or we can get the same results by performing step1 through stepk separately on
A and B, and then adding. The largest rank that we can end with in the second
case is clearly the sum of the ranks. (The matrices above give examples of both
possibilities, rank(A+B) < rank(A)+rank(B) and rank(A+B) = rank(A)+rank(B),
happening.)
c1 + 1.1c2 = b
0.1c2 = a + b
118
z
y
z
z
z
0
and the intersection of the two spaces is trivial.
Two.III.4.22 It is. Showing that these two are subspaces is routine. To see that the
space is the direct sum of these two, just note that each member of P2 has the
unique decomposition m + nx + px2 = (m + px2 ) + (nx).
Two.III.4.23 To show that they are subspaces is routine. We will argue they are
complements with Lemma 4.15. The intersection E O is trivial because the only
polynomial satisfying both conditions p(x) = p(x) and p(x) = p(x) is the zero
polynomial. To see that the entire space is the sum of the subspaces E + O = Pn ,
note that the polynomials p0 (x) = 1, p2 (x) = x2 , p4 (x) = x4 , etc., are in E and
also note that the polynomials p1 (x) = x, p3 (x) = x3 , etc., are in O. Hence any
member of Pn is a combination of members of E and O.
Two.III.4.24 Each of these is R3 .
(a) These are broken into some separate lines for readability.
Answers to Exercises
119
W1 + W2 + W3 , W1 + W2 + W3 + W4 , W1 + W2 + W3 + W5 ,
W1 + W2 + W3 + W4 + W5 , W1 + W2 + W4 , W1 + W2 + W4 + W5 ,
W1 + W2 + W5 , W1 + W3 + W4 , W1 + W3 + W5 , W1 + W3 + W4 + W5 ,
W1 + W4 , W1 + W4 + W5 , W1 + W5 ,
W2 + W3 + W4 , W2 + W3 + W4 + W5 , W2 + W4 , W2 + W4 + W5 ,
W3 + W4 , W3 + W4 + W5 ,
W4 + W5
(b) W1 W2 W3 , W1 W4 , W1 W5 , W2 W4 , W3 W4
Two.III.4.25 Clearly each is a subspace. The bases Bi = hxi i for the subspaces, when
concatenated, form a basis for the whole space.
Two.III.4.26 It is W2 .
Two.III.4.27 True by Lemma 4.8.
Two.III.4.28 Two distinct direct sum decompositions of R4 are easy to find. Two such
are W1 = [{~e1 , ~e2 }] and W2 = [{~e3 , ~e4 }], and also U1 = [{~e1 }] and U2 = [{~e2 , ~e3 , ~e4 }].
(Many more are possible, for example R4 and its trivial subspace.)
In contrast, any partition of R1 s single-vector basis will give one basis with no
elements and another with a single element. Thus any decomposition involves R1
and its trivial subspace.
Two.III.4.29 Set inclusion one way is easy: { w
~ 1 + + w
~k |w
~ i Wi } is a subset of
[W1 . . . Wk ] because each w
~ 1 + + w
~ k is a sum of vectors from the union.
For the other inclusion, to any linear combination of vectors from the union
apply commutativity of vector addition to put vectors from W1 first, followed by
vectors from W2 , etc. Add the vectors from W1 to get a w
~ 1 W1 , add the vectors
from W2 to get a w
~ 2 W2 , etc. The result has the desired form.
Two.III.4.30 One example is to take the space to be R3 , and to take the subspaces to
be the xy-plane, the xz-plane, and the yz-plane.
Two.III.4.31 Of course, the zero vector is in all of the subspaces, so the intersection contains at least that one vector.. By the definition of direct sum the set
{ W1 , . . . , Wk } is independent and so no nonzero vector of Wi is a multiple of a
member of Wj , when i 6= j. In particular, no nonzero vector from Wi equals a
member of Wj .
Two.III.4.32 It can contain a trivial subspace; this set of subspaces of R3 is independent: { {~0 }, x-axis }. No nonzero vector from the trivial space {~0 } is a multiple of a
vector from the x-axis, simply because the trivial space has no nonzero vectors to
be candidates for such a multiple (and also no nonzero vector from the x-axis is a
multiple of the zero vector from the trivial subspace).
120
Two.III.4.33 Yes. For any subspace of a vector space we can take any basis h
~ 1, . . . ,
~ ki
~ k+1 , . . . ,
~ n i for the whole
for that subspace and extend it to a basis h
~ 1, . . . ,
~ k,
~ k+1 , . . . ,
~ n i.
space. Then the complement of the original subspace has this basis h
Two.III.4.34 (a) It must. We can write any member of W1 + W2 as w
~1+w
~ 2 where
w
~ 1 W1 and w
~ 2 W2 . As S1 spans W1 , the vector w
~ 1 is a combination of
members of S1 . Similarly w
~ 2 is a combination of members of S2 .
(b) An easy way to see that it can be linearly independent is to take each to be
the empty set. On the other hand, in the space R1 , if W1 = R1 and W2 = R1
and S1 = { 1 } and S2 = { 2 }, then their union S1 S2 is not independent.
Two.III.4.35
!
b
| b, c, d R }
d
Answers to Exercises
121
2
3
0 1
1 0
Topic: Fields
1 Going through the five conditions shows that they are all familiar from elementary
mathematics.
2 As with the prior question, going through the five conditions shows that for both
of these structures, the properties are familiar.
3 The integers fail condition (5). For instance, there is no multiplicative inverse for
2 while 2 is an integer, 1/2 is not.
4 We can do these checks by listing all of the possibilities. For instance, to verify the
first half of condition (2) we must check that the structure is closed under addition
and that addition is commutative a + b = b + a, we can check both of these for
all possible pairs a and b because there are only four such pairs. Similarly, for
associativity, there are only eight triples a, b, c, and so the check is not too long.
(There are other ways to do the checks; in particular, you may recognize these
operations as arithmetic modulo 2. But an exhaustive check is not onerous)
0
0
1
2
1
1
2
0
0
1
2
2
2
0
1
0
0
0
0
1
0
1
2
2
0
2
1
As in the prior item, we could verify that they satisfy the conditions by listing all
of the cases.
Topic: Crystals
1 Each fundamental unit is 3.34 1010 cm, so there are about 0.1/(3.34 1010 )
such units. That gives 2.99 108 , so there are something like 300, 000, 000 (three
hundred million) regions.
2
(a) We solve
c1
1.42
0
!
+ c2
0.71
1.23
!
=
5.67
3.14
!
=
So this point is two columns of hexagons over and one hexagon up.
5.67
3.14
!
=
1.42c1
= 5.67
1.42c2 = 3.14
(we get c2 2.21 and c1 3.99), but it doesnt seem to have to do much with
the physical structure that we are studying.
3 In terms of the basis the locations of the corner atoms are (0, 0, 0), (1, 0, 0), . . . ,
(1, 1, 1). The locations of the face atoms are (0.5, 0.5, 1), (1, 0.5, 0.5), (0.5, 1, 0.5),
(0, 0.5, 0.5), (0.5, 0, 0.5), and (0.5, 0.5, 0). The locations of the atoms a quarter of
the way down from the top are (0.75, 0.75, 0.75) and (0.25, 0.25, 0.25). The atoms a
quarter of the way up from the bottom are at (0.75, 0.25, 0.25) and (0.25, 0.75, 0.25).
Converting to ngstroms is easy.
4
3.924 108
0
0
(f) h
0
0
, 3.924 108 ,
i
8
0
0
3.924 10
character
most
middle
least
experience
middle
least
most
policies
least
most
middle
The Democrat is better than the Republican for character and experience. The
Republican wins over the Third for character and policies. And, the Third beats
the Democrat for experience and policies.
Answers to Exercises
125
c1 + c2
= 1
2c2 +
c3 = 0
1 +3
c1
+ c3 = 1
(3/2)c3 = 2
gives c3 = 4/3, c2 = 2/3, and c1 = 1/3. For a positive spin voter in the third
row,
c1 c2 c3 = 1
c1 c2
c3 = 1
1 +2 (1/2)2 +3
c1 + c2
= 1
2c2 +
c3 = 2
1 +3
c1
+ c3 = 1
(3/2)c3 = 1
gives c3 = 2/3, c2 = 4/3, and c1 = 1/3.
3 The mock election corresponds to the table on page 152 in the way shown in the
first table, and after cancellation the result is the second table.
positive spin
D>R>T
5 voters
R>T >D
8 voters
T >D>R
8 voters
negative spin
T >R>D
2 voters
D>T >R
4 voters
R>D>T
2 voters
positive spin
D>R>T
3 voters
R>T >D
4 voters
T >D>R
6 voters
negative spin
T >R>D
-cancelledD>T >R
-cancelledR>D>T
-cancelled-
All three come from the same side of the table (the left), as the result from this
Topic says must happen. Tallying the election can now proceed, using the canceled
numbers
D
+4
R
1
+6
R
1
R
1
126
4
D
1
+4
+8
D
1
R
1
D
c
D
a
ab+c
a + b + c
a+bc
(a) A two-voter election can have a majority cycle in two ways. First, the two
voters could be opposites, resulting after cancellation in the trivial election (with
the majority cycle of all zeroes). Second, the two voters could have the same
spin but come from different rows, as here.
D
1
R
1
D
1
+1
R
1
+0
D
1
R
1
R
2
(b) There are two cases. An even number of voters can split half and half into
opposites, e.g., half the voters are D > R > T and half are T > R > D. Then
cancellation gives the trivial election. If the number of voters is greater than one
and odd (of the form 2k + 1 with k > 0) then using the cycle diagram from the
proof,
D
T
D
a
D
c
a+bc
ab+c
a + b + c
128
v0 2
v0
2 v0
and Buckinghams function here is f2 (1 , 2 , 3 , 4 ) = 3 1 sin(4 ) +
(1/2)1 2 .
2 Consider
(L0 M0 T 1 )p1 (L1 M1 T 2 )p2 (L3 M0 T 0 )p3 (L0 M1 T 0 )p4 = (L0 M0 T 0 )
which gives these relations among the powers.
p2 3p3
=0
p1 + 2p2
=0
1 3 2 +3
p2
+ p4 = 0
p2
+ p4 = 0
p1 + 2p2
=0
3p3 + p4 = 0
This is the solution space (because we wish to express k as a function of the other
quantities, we take p2 as the parameter).
2
1
{
p2 | p2 R }
1/3
1
Answers to Exercises
129
(a) Setting
(L2 M1 T 2 )p1 (L0 M0 T 1 )p2 (L3 M0 T 0 )p3 = (L0 M0 T 0 )
gives this
2p1
+ 3p3 = 0
p1
=0
2p1 p2
=0
which implies that p1 = p2 = p3 = 0. That is, among quantities with these
dimensional formulas, the only dimensionless product is the trivial one.
(b) Setting
(L2 M1 T 2 )p1 (L0 M0 T 1 )p2 (L3 M0 T 0 )p3 (L3 M1 T 0 )p4 = (L0 M0 T 0 )
gives this.
2p1
+ 3p3 3p4 = 0
p1
+ p4 = 0
2p1 p2
=0
(1/2)1 +2 2 3
1 +3
2p1
+
p2 +
3p3
3p4 = 0
3p3
3p4 = 0
(3/2)p3 + (5/2)p4 = 0
Taking p1 as parameter to express the torque gives this description of the solution
set.
1
2
{
p1 | p1 R }
5/3
1
Denoting the torque by , the rotation rate by r, the volume of air by V, and
the density of air by d we have that 1 = r2 V 5/3 d1 , and so the torque is
r2 V 5/3 d times a constant.
4
dimensional
formula
L1 M0 T 1
L1 M0 T 0
L1 M0 T 0
L1 M0 T 2
130
1 +4
p1 + p2 + p3 + p4 = 0
p 2 + p3 p 4 = 0
Taking p3 and p4 as parameters, we can describe the solution set in this way.
0
2
1
1
{ p3 + p4 | p3 , p4 R}
1
0
0
1
That gives {1 = h/d, 2 = dg/v2 } as a complete set.
Chapter Three
(a b) 7
!
a
b
It is one-to-one
if f sends
because
two members of the domain to the same image,
that is, if f (a b) = f (c d) , then the definition of f gives that
!
!
a
c
=
b
d
and since column vectors are equal only if they have equal components, we have
that a = c and that b = d. Thus, if f maps two row vectors from the domain to
the same column vector then the two row vectors are equal: (a b) = (c d).
To show that f is onto we must show that any member of the codomain R2 is
the image under f of some row vector. Thats easy;
!
x
y
is f (x y) .
132
a0 + b0
= a1 + b1
a2 + b2
a0
b0
= a1 + b1
a2
b2
= f(a0 + a1 x + a2 x2 ) + f(b0 + b1 x + b2 x2 )
Answers to Exercises
133
ra0
= ra1
ra2
a0
= r a1
a2
= r f(a0 + a1 x + a2 x2 )
that it preserves scalar multiplication.
Three.I.1.13 !
These are the
! images.
!
5
0
1
(a)
(b)
(c)
2
2
1
To prove that f is one-to-one, assume that it maps two linear polynomials to
the same image f(a1 + b1 x) = f(a2 + b2 x). Then
!
!
a1 b1
a2 b2
=
b1
b2
and so, since column vectors are equal only when their components are equal,
b1 = b2 and a1 = a2 . That shows that the two linear polynomials are equal, and
so f is one-to-one.
To show that f is onto, note that this member of the codomain
!
s
t
is the image of this member of the domain (s + t) + tx.
To check that f preserves structure, we can use item (2) of Lemma 1.11.
f (c1 (a1 + b1 x) + c2 (a2 + b2 x)) = f ((c1 a1 + c2 a2 ) + (c1 b1 + c2 b2 )x)
!
(c1 a1 + c2 a2 ) (c1 b1 + c2 b2 )
=
c1 b1 + c2 b2
!
!
a1 b 1
a2 b2
= c1
+ c2
b1
b2
= c1 f(a1 + b1 x) + c2 f(a2 + b2 x)
Three.I.1.14 To verify it is one-to-one, assume that f1 (c1 x + c2 y + c3 z) = f1 (d1 x +
d2 y+d3 z). Then c1 +c2 x+c3 x2 = d1 +d2 x+d3 x2 by the definition of f1 . Members
of P2 are equal only when they have the same coefficients, so this implies that c1 = d1
134
Answers to Exercises
135
nq
p
~v =
m
q
To finish we verify that the map preserves linear combinations. By Lemma 1.11
this will
showthat the
map
preserves
the operations.
a1
a2
r1 a1 + r2 a2
b
b
r b + r b
1
2
1 1
2 2
h(r1 + r2 ) = h(
)
c1
c2
r1 c1 + r2 c2
d1
d2
r1 d1 + r2 d2
!
r1 c1 + r2 c2 (r1 a1 + r2 a2 ) + (r1 d1 + r2 d2 )
=
r1 b1 + r2 b2
r1 d1 + r2 d2
!
!
c1 a1 + d1
c2 a2 + d2
= r1
+ r2
b1
d1
b2
d2
a1
a2
b
b
1
2
= r1 h( ) + r2 h( )
c1
c2
d1
d2
Three.I.1.17 (a) No; this map is not one-to-one. In particular, the matrix of all
zeroes is mapped to the same image as the matrix of all ones.
136
a 1 + b 1 + c1 + d 1
a 2 + b 2 + c2 + d 2
a +b +c
a +b +c
1
1
1
2
2
2
then
=
a1 + b 1
a2 + b2
a1
a2
gives that a1 = a2 , and that b1 = b2 , and that c1 = c2 , and that d1 = d2 .
It is onto, since this shows
x
!
y
w
zw
)
= f(
z
yz xy
w
that any four-tall vector is the image of a 22 matrix.
Finally, it preserves combinations
!
!
a1 b 1
a2 b 2
f( r1
+ r2
)
c1 d 1
c2 d 2
!
r1 a1 + r2 a2 r1 b1 + r2 b2
= f(
)
r1 c1 + r2 c2 r1 d1 + r2 d2
r1 a1 + + r2 d2
r a + + r c
1 1
2 2
=
r1 a1 + + r2 b2
r1 a1 + r2 a2
a1 + + d1
a2 + + d2
a + + c
a + + c
1
2
1
2
= r1
+ r2
a1 + b 1
a2 + b2
a1
a2
!
!
a1 b1
a2 b2
= r1 f(
) + r2 f(
)
c1 d1
c2 d 2
and so item (2) of Lemma 1.11 shows that it preserves structure.
(c) Yes, it is an isomorphism.
To show that it is one-to-one, we suppose that two members of the domain
have the same image under f.
!
!
a1 b1
a2 b2
f(
) = f(
)
c1 d 1
c2 d 2
Answers to Exercises
137
2a
b
138
a1
a0 + a1 x + a2 x2 7 a0
a2
and
a0 + a1
a0 + a1 x + a2 x2 7 a1
a2
Verification is straightforward (for the second, to show that it is onto, note that
s
t
u
is the image of (s t) + tx + ux2 ).
Three.I.1.21 The space R2 is not a subspace of R3 because it is not a subset of R3 .
The two-tall vectors in R2 are not members of R3 .
The natural isomorphism : R2 R3 (called the injection map) is this.
!
x
x
7 y
y
0
This map is one-to-one because
!
!
x1
x2
f(
) = f(
)
y1
y2
implies
x1
x2
y1 = y2
0
0
which in turn implies that x1 = x2 and y1 = y2 , and therefore the initial two
two-tall vectors are equal.
Because
!
x
x
)
y = f(
y
0
this map is onto the xy-plane.
To show that this map preserves structure, we will use item (2) of Lemma 1.11
and show
!
!
!
c1 x1 + c2 x2
x1
x2
c1 x1 + c2 x2
f(c1
+ c2
) = f(
) = c1 y1 + c2 y2
y1
y2
c1 y1 + c2 y2
0
!
!
x1
x2
x1
x2
= c1 y1 + c2 y2 = c1 f(
) + c2 f(
)
y1
y2
0
0
that it preserves combinations of two vectors.
Answers to Exercises
Three.I.1.22
Here
are two:
r1
r1 r2
r2
. 7
.
.
r16
139
r1
r1
r2
r2
. 7 .
.
.
.
.
r16
...
...
r16
and
..
.
r16
r1
r1 r2 . . .
r2
..
7 ..
.
.
. . . rmn
rmn
Checking that this is an isomorphism is easy.
Rn . (If we take P1 and R0 to be trivial
Three.I.1.24 If n > 1 then Pn1 =
vector spaces, then the relationship extends one dimension lower.) The natural
isomorphism between them is this.
a0
a1
n1
a0 + a1 x + + an1 x
7 .
..
an1
Checking that it is an isomorphism is straightforward.
Three.I.1.25 This is the map, expanded.
f(a0 + a1 x + a2 x2 + a3 x3 + a4 x4 + a5 x5 )
= a0 + a1 (x 1) + a2 (x 1)2 + a3 (x 1)3
+ a4 (x 1)4 + a5 (x 1)5
= a0 + a1 (x 1) + a2 (x2 2x + 1)
+ a3 (x3 3x2 + 3x 1)
+ a4 (x4 4x3 + 6x2 4x + 1)
+ a5 (x5 5x4 + 10x3 10x2 + 5x 1)
= (a0 a1 + a2 a3 + a4 a5 )
+ (a1 2a2 + 3a3 4a4 + 5a5 )x
+ (a2 3a3 + 6a4 10a5 )x2
+ (a3 4a4 + 10a5 )x3
+ (a4 5a5 )x4 + a5 x5
140
This map is a correspondence because it has an inverse, the map p(x) 7 p(x + 1).
To finish checking that it is an isomorphism we apply item (2) of Lemma 1.11
and show that it preserves linear combinations of two polynomials. Briefly, f(c
(a0 + a1 x + + a5 x5 ) + d (b0 + b1 x + + b5 x5 )) equals this
(ca0 ca1 + ca2 ca3 + ca4 ca5 + db0 db1 + db2 db3 + db4 db5 )
+ + (ca5 + db5 )x5
which equals c f(a0 + a1 x + + a5 x5 ) + d f(b0 + b1 x + + b5 x5 ).
Three.I.1.26 No vector space has the empty set underlying it. We can take ~v to be
the zero vector.
Three.I.1.27 Yes; where the two spaces are { a
~ } and { ~b }, the map sending a
~ to ~b is
clearly one-to-one and onto, and also preserves what little structure there is.
Three.I.1.28 A linear combination of n = 0 vectors adds to the zero vector and so
Lemma 1.10 shows that the three statements are equivalent in this case.
Three.I.1.29 Consider the basis h1i for P0 and let f(1) R be k. For any a P0
we have that f(a) = f(a 1) = af(1) = ak and so fs action is multiplication by k.
Note that k 6= 0 or else the map is not one-to-one. (Incidentally, any such map
a 7 ka is an isomorphism, as is easy to check.)
Three.I.1.30 In each item, following item (2) of Lemma 1.11, we show that the map
preserves structure by showing that the it preserves linear combinations of two
members of the domain.
(a) The identity map is clearly one-to-one and onto. For linear combinations the
check is easy.
id(c1 ~v1 + c2 ~v2 ) = c1~v1 + c2~v2 = c1 id(~v1 ) + c2 id(~v2 )
(b) The inverse of a correspondence is also a correspondence (as stated in the
appendix), so we need only check that the inverse preserves linear combinations.
Assume that w
~ 1 = f(~v1 ) (so f1 (~
w1 ) = ~v1 ) and assume that w
~ 2 = f(~v2 ).
f1 (c1 w
~ 1 + c2 w
~ 2 ) = f1 c1 f(~v1 ) + c2 f(~v2 )
= f1 ( f c1~v1 + c2~v2 )
= c1~v1 + c2~v2
= c1 f1 (~
w1 ) + c2 f1 (~
w2 )
(c) The composition of two correspondences is a correspondence (as stated in the
appendix), so we need only check that the composition map preserves linear
Answers to Exercises
141
combinations.
g f c1 ~v1 + c2 ~v2 = g f(c1~v1 + c2~v2 )
= g c1 f(~v1 ) + c2 f(~v2 )
= c1 g f(~v1 )) + c2 g(f(~v2 )
= c1 g f (~v1 ) + c2 g f (~v2 )
Three.I.1.31 One direction is easy: by definition, if f is one-to-one then for any w
~ W
at most one ~v V has f(~v ) = w
~ , and so in particular, at most one member of V is
mapped to ~0W . The proof of Lemma 1.10 does not use the fact that the map is a
correspondence and therefore shows that any structure-preserving map f sends ~0V
to ~0W .
For the other direction, assume that the only member of V that is mapped
to ~0W is ~0V . To show that f is one-to-one assume that f(~v1 ) = f(~v2 ). Then
f(~v1 ) f(~v2 ) = ~0W and so f(~v1 ~v2 ) = ~0W . Consequently ~v1 ~v2 = ~0V , so ~v1 = ~v2 ,
and so f is one-to-one.
Three.I.1.32 We will prove something stronger not only is the existence of a dependence preserved by isomorphism, but each instance of a dependence is preserved,
that is,
~vi = c1~v1 + + ci1~vi1 + ci+1~vi+1 + + ck~vk
f(~vi ) = c1 f(~v1 ) + + ci1 f(~vi1 ) + ci+1 f(~vi+1 ) + + ck f(~vk ).
The = direction of this statement holds by item (3) of Lemma 1.11. The =
direction holds by regrouping
f(~vi ) = c1 f(~v1 ) + + ci1 f(~vi1 ) + ci+1 f(~vi+1 ) + + ck f(~vk )
= f(c1~v1 + + ci1~vi1 + ci+1~vi+1 + + ck~vk )
and applying the fact that f is one-to-one, and so for the two vectors ~vi and
c1~v1 + + ci1~vi1 + ci+1 f~vi+1 + + ck f(~vk to be mapped to the same image
by f, they must be equal.
Three.I.1.33 (a) This map is one-to-one because if ds (~v1 ) = ds (~v2 ) then by definition
of the map, s ~v1 = s ~v2 and so ~v1 = ~v2 , as s is nonzero. This map is onto as any
w
~ R2 is the image of ~v = (1/s) w
~ (again, note that s is nonzero). (Another
way to see that this map is a correspondence is to observe that it has an inverse:
the inverse of ds is d1/s .)
To finish, note that this map preserves linear combinations
ds (c1 ~v1 + c2 ~v2 ) = s(c1~v1 + c2~v2 ) = c1 s~v1 + c2 s~v2 = c1 ds (~v1 ) + c2 ds (~v2 )
and therefore is an isomorphism.
142
!
t
!
(x1 + x2 ) cos (y1 + y2 ) sin
(x1 + x2 ) sin + (y1 + y2 ) cos
!
x1 cos y1 sin
=
+
x1 sin + y1 cos
!
x2 cos y2 sin
x2 sin + y2 cos
`
7
( )
x 1 + k2
kx
Answers to Exercises
143
cos + 2
sin
) (
)
= (
1 + k2
1 + k2
1 + k2 1 + k2
1 k2
2k
=
cos +
sin
1 + k2
1 + k2
and thus the first component of the image vector is this.
2k
1 k2
r cos(2 ) =
x+
y
1 + k2
1 + k2
A similar calculation shows that the second component of the image vector is
this.
1 k2
2k
x
y
r sin(2 ) =
2
1+k
1 + k2
With this algebraic description of the action of f`
!
!
x
(1 k2 /1 + k2 ) x + (2k/1 + k2 ) y
f`
7
y
(2k/1 + k2 ) x (1 k2 /1 + k2 ) y
checking that it preserves structure is routine.
Three.I.1.34 First, the map p(x) 7 p(x + k) doesnt count because it is a version
of p(x) 7 p(x k). Here is a correct answer (many others are also correct):
a0 + a1 x + a2 x2 7 a2 + a0 x + a1 x2 . Verification that this is an isomorphism is
straightforward.
Three.I.1.35 (a) For the only if half, let f : R1 R1 to be an isomorphism. Consider the basis h1i R1 . Designate f(1) by k. Then for any x we have that
f(x) = f(x 1) = x f(1) = xk, and so fs action is multiplication by k. To finish
this half, just note that k 6= 0 or else f would not be one-to-one.
For the if half we only have to check that such a map is an isomorphism
when k 6= 0. To check that it is one-to-one, assume that f(x1 ) = f(x2 ) so that
kx1 = kx2 and divide by the nonzero factor k to conclude that x1 = x2 . To check
that it is onto, note that any y R1 is the image of x = y/k (again, k 6= 0).
Finally, to check that such a map preserves combinations of two members of the
domain, we have this.
f(c1 x1 + c2 x2 ) = k(c1 x1 + c2 x2 ) = c1 kx1 + c2 kx2 = c1 f(x1 ) + c2 f(x2 )
(b) By the prior item, fs action is x 7 (7/3)x. Thus f(2) = 14/3.
(c) For the only if half, assume that f : R2 R2 is an automorphism. Consider
the standard basis E2 for R2 . Let!
!
f(~e1 ) =
a
c
and
f(~e2 ) =
b
.
d
144
x1
x2
) = f(
)
y1
y2
and so
ax1 + by1
cx1 + dy1
ax2 + by2
cx2 + dy2
) = f(
) f(
)=
=
1
3
4
3
4
1
1
2
Three.I.1.36 There are many answers; two are linear independence and subspaces.
First we show that if a set {~v1 , . . . ,~vn } is linearly independent then its image
{f(~v1 ), . . . , f(~vn ) } is also linearly independent. Consider a linear relationship among
members of the image set.
0 = c1 f(~v1 ) + + cn f(v~n ) = f(c1~v1 ) + + f(cn v~n ) = f(c1~v1 + + cn v~n )
Because this map is an isomorphism, it is one-to-one. So f maps only one vector
from the domain to the zero vector in the range, that is, c1~v1 + + cn~vn equals the
zero vector (in the domain, of course). But, if {~v1 , . . . ,~vn } is linearly independent
Answers to Exercises
145
then all of the cs are zero, and so {f(~v1 ), . . . , f(~vn ) } is linearly independent also.
(Remark. There is a small point about this argument that is worth mention. In
a set, repeats collapse, that is, strictly speaking, this is a one-element set: {~v,~v },
because the things listed as in it are the same thing. Observe, however, the use of
the subscript n in the above argument. In moving from the domain set {~v1 , . . . ,~vn }
to the image set {f(~v1 ), . . . , f(~vn ) }, there is no collapsing, because the image set
does not have repeats, because the isomorphism f is one-to-one.)
To show that if f : V W is an isomorphism and if U is a subspace of the
domain V then the set of image vectors f(U) = { w
~ W|w
~ = f(~u) for some ~u U}
is a subspace of W, we need only show that it is closed under linear combinations
of two of its members (it is nonempty because it contains the image of the zero
vector). We have
c1 f(~u1 ) + c2 f(~u2 ) = f(c1 ~u1 ) + f(c2 ~u2 ) = f(c1 ~u1 + c2 ~u2 )
and c1 ~u1 + c2 ~u2 is a member of U because of the closure of a subspace under
combinations. Hence the combination of f(~u1 ) and f(~u2 ) is a member of f(U).
Three.I.1.37
146
cp1 + dq1
= cp2 + dq2
cp3 + dq3
p1
q1
= c p2 + d q2
p3
q3
= RepB (~p) + RepB (~q)
(d) Use any basis B for P2 whose first two members are x + x2 and 1 x, say
B = hx + x2 , 1 x, 1i.
Answers to Exercises
147
148
Three.I.2.10
5
2
!
(b)
0
2
!
(c)
1
1
Answers to Exercises
149
Three.I.2.22 One direction is easy: if the two are isomorphic via f then for any basis
B V, the set D = f(B) is also a basis (this is shown in Lemma 2.4). The check
~ 1 + + cn
~ n) =
that corresponding vectors have the same coordinates: f(c1
~ 1 ) + + cn f(
~ n ) = c1~1 + + cn~n is routine.
c1 f(
For the other half, assume that there are bases such that corresponding vectors
have the same coordinates with respect to those bases. Because f is a correspondence,
to show that it is an isomorphism, we need only show that it preserves structure.
Because RepB (~v ) = RepD (f(~v )), the map f preserves structure if and only if
representations preserve addition: RepB (~v1 + ~v2 ) = RepB (~v1 ) + RepB (~v2 ) and
scalar multiplication: RepB (r ~v ) = r RepB (~v ) The addition calculation is this:
~ 1 + + (cn + dn )
~ n = c1
~ 1 + + cn
~ n + d1
~ 1 + + dn
~ n , and
(c1 + d1 )
the scalar multiplication calculation is similar.
Three.I.2.23 (a) Pulling the definition back from R4 to P3 gives that a0 + a1 x +
a2 x2 + a3 x3 is orthogonal to b0 + b1 x + b2 x2 + b3 x3 if and only if a0 b0 + a1 b1 +
a2 b2 + a3 b3 = 0.
(b) A natural definition is this.
a0
a1
a
2a
1
2
D( ) =
a2
3a3
a3
0
Three.I.2.24 Yes.
First, f is well-defined because every member of V has one and only one
representation as a linear combination of elements of B.
Second we must show that f is one-to-one and onto. It is one-to-one because
every member of W has only one representation as a linear combination of elements
of D, since D is a basis. And f is onto because every member of W has at least one
representation as a linear combination of members of D.
Finally, preservation of structure is routine to check. For instance, here is the
preservation of addition calculation.
(c1
~ 1 + + cn
~ n ) + (d1
~ 1 + + dn
~ n) )
f(
(c1 + d1 )
~ 1 + + (cn + dn )
~n )
= f(
~ 1 ) + + (cn + dn )f(
~ n)
= (c1 + d1 )f(
~ 1 ) + + cn f(
~ n ) + d1 f(
~ 1 ) + + dn f(
~ n)
= c1 f(
1
1
~ 1 + + cn
~ n ) + +f(d
~ 1 + + dn
~ n)
= f(c
Preservation of scalar multiplication is
(The second equality is the definition of f.)
similar.
150
Homomorphisms
Three.II.1: Definition
Three.II.1.18
c1 x1 + c2 x2
x1
x2
h(c1 y1 + c2 y2 ) = h(c1 y1 + c2 y2 )
z1
z2
c1 z 1 + c2 z 2
c1 x1 + c2 x2
=
c1 x1 + c2 x2 + c1 y1 + c2 y2 + c1 z1 + c2 z2
!
!
x1
x2
= c1
+ c2
x1 + y1 + z1
c2 + y 2 + z 2
x1
x2
= c1 h(y1 ) + c2 h(y2 )
z1
z2
x1
x2
c1 x1 + c2 x2
h(c1 y1 + c2 y2 ) = h(c1 y1 + c2 y2 )
z1
z2
c1 z 1 + c2 z 2
!
0
=
0
x1
x2
= c1 h(y1 ) + c2 h(y2 )
z1
z2
Answers to Exercises
151
1
1
0
0
6 h(0) + h(0)
=
0
0
x1
x2
c1 x1 + c2 x2
h(c1 y1 + c2 y2 ) = h(c1 y1 + c2 y2 )
z1
z2
c1 z 1 + c2 z 2
!
2(c1 x1 + c2 x2 ) + (c1 y1 + c2 y2 )
=
3(c1 y1 + c2 y2 ) 4(c1 z1 + c2 z2 )
!
!
2x1 + y1
2x2 + y2
= c1
+ c2
3y1 4z1
3y2 4z2
x1
x2
= c1 h(y1 ) + c2 h(y2 )
z1
z2
Three.II.1.19 For each, we must either check that the map preserves linear combinations or give an example of a linear combination that is not.
(a) Yes. The check that it preserves combinations is routine.
h(r1
a1
c1
b1
d1
!
+ r2
a2
c2
!
b2
r1 a1 + r2 a2
) = h(
d2
r1 c1 + r2 c2
!
r1 b1 + r2 b2
)
r1 d1 + r2 d2
= (r1 a1 + r2 a2 ) + (r1 d1 + r2 d2 )
= r1 (a1 + d1 ) + r2 (a2 + d2 )
!
a1 b 1
a2
= r1 h(
) + r2 h(
c1 d 1
c2
!
b2
)
d2
h(2
1
0
!
0
2
) = h(
1
0
!
0
)=4
2
while
1
2 h(
0
!
0
)=21=2
1
(c) Yes. This is the check that it preserves combinations of two members of
152
a1
c1
b1
d1
!
+ r2
a2
c2
r1 a1 + r2 a2
= h(
r1 c1 + r2 c2
!
b2
)
d2
!
r1 b1 + r2 b2
)
r1 d1 + r2 d2
!
0
)=1+1=2
0
Three.II.1.20 The check that each is a homomorphisms is routine. Here is the check
for the differentiation map.
d
(r (a0 + a1 x + a2 x2 + a3 x3 ) + s (b0 + b1 x + b2 x2 + b3 x3 ))
dx
d
=
((ra0 + sb0 ) + (ra1 + sb1 )x + (ra2 + sb2 )x2 + (ra3 + sb3 )x3 )
dx
= (ra1 + sb1 ) + 2(ra2 + sb2 )x + 3(ra3 + sb3 )x2
= r (a1 + 2a2 x + 3a3 x2 ) + s (b1 + 2b2 x + 3b3 x2 )
d
d
=r
(a0 + a1 x + a2 x2 + a3 x3 ) + s
(b0 + b1 x + b2 x2 + b3 x3 )
dx
dx
(An alternate proof is to simply note that this is a property of differentiation that
is familiar from calculus.)
These two maps are not inverses as this composition does not act as the identity
map on this element of the domain.
d/dx
1 P3 7 0 P2 7 0 P3
Three.II.1.21 Each of these projections is a homomorphism. Projection to the xz-plane
and to the yz-plane are these maps.
x
x
x
0
y 7 0
y 7 y
z
z
z
z
Answers to Exercises
153
Projection to the x-axis, to the y-axis, and to the z-axis are these maps.
x
x
x
0
x
0
7 y
7 0
y 7 0
y
y
z
0
z
0
z
z
And projection to the origin is this map.
x
0
7
y
0
z
0
Verification that each is a homomorphism is straightforward. (The last one, of
course, is the zero transformation on R3 .)
Three.II.1.22 (a) This verifies that the map preserves linear combinations. By
Lemma 1.7 that suffices to show that it is a homomorphism.
h( d1 (a1 x2 + b1 x + c1 ) + d2 (a2 x2 + b2 x + c2 ) )
= h((d1 a1 + d2 a2 )x2 + (d1 b1 + d2 b2 )x + (d1 c1 + d2 c2 ))
!
(d1 a1 + d2 a2 ) + (d1 b1 + d2 b2 )
=
(d1 a1 + d2 a2 ) + (d1 c1 + d2 c2 )
!
!
d1 a1 + d1 b1
d2 a2 + d2 b2
=
+
d1 a1 + d1 c1
d2 a2 + d2 c2
!
!
a1 + b 1
a2 + b2
= d1
+ d2
a1 + c1
a2 + c2
= d1 h(a1 x2 + b1 x + c1 ) + d2 h(a2 x2 + b2 x + c2 )
(b) It preserves linear combinations.
!
!
!
x1
x2
a1 x1 + a2 x2
f( a1
+ a2
) = f(
)
y1
y2
a1 y1 + a2 y2
= (a1 x1 + a2 x2 ) (a1 y1 + a2 y2 )
3(a1 y1 + a2 y2 )
0
0
= a1 x1 y1 + a2 x2 y2
3y1
3y2
!
!
x1
x2
= a1 f(
) + a2 f(
)
y1
y2
154
Three.II.1.23 The first is not onto; for instance, there is no polynomial that is sent
the constant polynomial p(x) = 1. The second is not one-to-one; both of these
members of the domain
!
!
1 0
0 0
and
0 0
0 1
map to the same member of the codomain, 1 R.
Three.II.1.24 Yes; in any space id(c ~v + d w
~ ) = c ~v + d w
~ = c id(~v) + d id(~
w).
Three.II.1.25 (a) This map does not preserve structure since f(1 + 1) = 3, while
f(1) + f(1) = 2.
(b) The check is routine.
!
!
!
x1
x2
r1 x1 + r2 x2
f(r1
+ r2
) = f(
)
y1
y2
r1 y1 + r2 y2
= (r1 x1 + r2 x2 ) + 2(r1 y1 + r2 y2 )
= r1 (x1 + 2y1 ) + r2 (x2 + 2y2 )
!
!
x1
x2
= r1 f(
) + r2 f(
)
y1
y2
Three.II.1.26 Yes. Where h : V W is linear, h(~u ~v) = h(~u + (1) ~v) = h(~u) +
(1) h(~v) = h(~u) h(~v).
~1 +
Three.II.1.27 (a) Let ~v V be represented with respect to the basis as ~v = c1
~
~
~
~
~
+ cn n . Then h(~v) = h(c1 1 + + cn n ) = c1 h(1 ) + + cn h(n ) =
c1 ~0 + + cn ~0 = ~0.
(b) This argument is similar to the prior one. Let ~v V be represented with
~ 1 + + cn
~ n . Then h(c1
~ 1 + + cn
~ n) =
respect to the basis as ~v = c1
~
~
~
~
c1 h(1 ) + + cn h(n ) = c1 1 + + cn n = ~v.
~ 1 ) + + cn h(
~ n ) = c1 r
~ 1 + + cn r
~ n = r(c1
~1 +
(c) As above, only c1 h(
~ n ) = r~v.
+ cn
Three.II.1.28 That it is a homomorphism follows from the familiar rules that the
logarithm of a product is the sum of the logarithms ln(ab) = ln(a) + ln(b) and that
the logarithm of a power is the multiple of the logarithm ln(ar ) = r ln(a). This
map is an isomorphism because it has an inverse, namely, the exponential map, so
it is a correspondence, and therefore it is an isomorphism.
= x/2 and y
= y/3, the image set is
Three.II.1.29 Where x
!
!
x
x
(2
x)2 (3
y) 2
2 + y
2 = 1 }
{
| 4 + 9 = 1} = {
|x
y
y
y
-plane.
the unit circle in the x
Answers to Exercises
155
x1
x2
c1 x1 + c2 x2
h(c1 y1 + c2 y2 ) = h(c1 y1 + c2 y2 )
z1
z2
c1 z1 + c2 z2
= 3(c1 x1 + c2 x2 ) (c1 y1 + c2 y2 ) (c1 z1 + c2 z2 )
= c1 (3x1 y1 z1 ) + c2 (3x2 y2 z2 )
x2
x1
= c1 h(y1 ) + c2 h(y2 )
z1
z2
The natural guess at a generalization is that for any fixed ~k R3 the map ~v 7 ~v ~k
is linear. This statement is true. It follows from properties of the dot product we
have seen earlier: (~v + ~u) ~k = ~v ~k + ~u ~k and (r~v) ~k = r(~v ~k). (The natural guess
at a generalization of this generalization, that the map from Rn to R whose action
consists of taking the dot product of its argument with a fixed vector ~k Rn is
linear, is also true.)
xn
156
cx1 + dy1
..
= h(
)
.
cxn + dyn
..
=
.
am,1 (cx1 + dy1 ) + + am,n (cxn + dyn )
a1,1 x1 + + a1,n xn
a1,1 y1 + + a1,n yn
..
..
=c
+d
.
.
am,1 x1 + + am,n xn
am,1 y1 + + am,n yn
x1
y1
..
..
= c h( . ) + d h( . )
xn
yn
(b) Each power i of the derivative operator is linear because of these rules familiar
from calculus.
di
di
di
di
di
( f(x) + g(x) ) =
f(x) + i g(x)
r f(x) = r i f(x)
i
i
i
dx
dx
dx
dx
dx
Thus the given map is a linear transformation of Pn because any linear combination of linear maps is also a linear map.
Three.II.1.34 (This argument has already appeared, as part of the proof that isomorphism is an equivalence.) Let f : U V and g : V W be linear. The composition
preserves linear combinations
x+y
x+y
Answers to Exercises
157
sends this linearly independent set in the domain to a linearly dependent image.
!
!
!
!
1
1
1
2
{~v1 ,~v2 } = {
,
} 7 {
,
} = {w
~ 1, w
~ 2}
0
1
1
2
(c) Not necessarily. An example is the projection map : R3 R2
!
x
x
y 7
y
z
and this set that does not span the domain but maps to a set that does span the
codomain.
!
!
1
0
1
0
{ 0 , 1 } 7 {
,
}
0
1
0
0
(d) Not necessarily. For instance, the injection map : R2 R3 sends the standard
basis E2 for the domain to a set that does not span the codomain. (Remark.
However, the set of w
~ s does span the range. A proof is easy.)
Three.II.1.36 Recall that the entry in row i and column j of the transpose of M is
the entry mj,i from row j and column i of M. Now, the check is routine. Start
with the transpose of the combination.
T
..
..
.
.
+ s bi,j ]
[r
i,j
..
..
.
.
Combine and take the transpose.
..
..
.
.
= raj,i + sbj,i
=
ra
+
sb
i,j
i,j
..
..
.
.
Then bring out the scalars, and un-transpose.
..
..
.
.
+ s bj,i
=r
j,i
..
..
.
.
..
.
=r
aj,i + s
..
.
while the codomain is Mnm .
..
.
bj,i
..
.
158
Three.II.1.37
Answers to Exercises
159
(c) This image of U nonempty because U is nonempty. For closure under combinations, where ~u1 , . . . , ~un U,
c1 h(~u1 )+ +cn h(~un ) = h(c1 ~u1 )+ +h(cn ~un ) = h(c1 ~u1 + +cn ~un )
which is itself in h(U) as c1 ~u1 + + cn ~un is in U. Thus this set is a subspace.
(d) The natural generalization is that the inverse image of a subspace of is a
subspace.
Suppose that X is a subspace of W. Note that ~0W X so that the set
{~v V | h(~v) X} is not empty. To show that this set is closed under combinations, let ~v1 , . . . ,~vn be elements of V such that h(~v1 ) = ~x1 , . . . , h(~vn ) = ~xn and
note that
h(c1 ~v1 + + cn ~vn ) = c1 h(~v1 ) + + cn h(~vn ) = c1 ~x1 + + cn ~xn
so a linear combination of elements of h1 (X) is also in h1 (X).
Three.II.1.41 No; the set of isomorphisms does not contain the zero map (unless the
space is trivial).
~ 1, . . . ,
~ n i doesnt span the space then the map neednt be unique.
Three.II.1.42 If h
For instance, if we try to define a map from R2 to itself by specifying only that
~e1 maps to itself, then there is more than one homomorphism possible; both the
identity map and the projection map onto the first component fit this condition.
~ 1, . . . ,
~ n i is linearly independent then we risk
If we drop the condition that h
an inconsistent specification (i.e, there could be no such map). An example is if we
consider h~e2 , ~e1 , 2~e1 i, and try to define a map from R2 to itself that sends ~e2 to
itself, and sends both ~e1 and 2~e1 to ~e1 . No homomorphism can satisfy these three
conditions.
Three.II.1.43
y
y
onto the two axes. Now, where f1 (~v) = 1 (F(~v)) and f2 (~v) = 2 (F(~v)) we have
the desired component functions.
!
f1 (~v)
F(~v) =
f2 (~v)
160
Answers to Exercises
161
R(h) = { a + ax + ax2 P3 | a, b R } = { a (1 + x + x2 ) | a R}
and so the rank is one.
(b) The range space
R(h) = {a + d | a, b, c, d R }
is all of R (we can get any real number by taking d to be 0 and taking a to be
the desired number). Thus, the rank is one.
(c) The range space is R(h) = {r + sx2 | r, s R}. The rank is two.
(d) The range space is the trivial subspace of R4 so the rank is zero.
Three.II.2.24
b
d
d
| a + d = 0} = {
c
!
b
| b, c, d R }
d
162
(b) 3
(c) 3
(d) 0
Three.II.2.26 Because
d
(a0 + a1 x + + an xn ) = a1 + 2a2 x + 3a3 x2 + + nan xn1
dx
we have this.
d
N ( ) = { a0 + + an xn | a1 + 2a2 x + + nan xn1 = 0 + 0x + + 0xn1 }
dx
= {a0 + + an xn | a1 = 0, and a2 = 0, . . . , an = 0 }
= {a0 + 0x + 0x2 + + 0xn | a0 R }
In the same way,
N(
dk
) = {a0 + a1 x + + an xn | a0 , . . . , ak1 R }
dxk
for k 6 n.
Three.II.2.27 The shadow of a scalar multiple is the scalar multiple of the shadow.
Three.II.2.28 (a) Setting a0 +(a0 +a1 )x+(a2 +a3 )x3 = 0+0x+0x2 +0x3 gives a0 = 0
and a0 + a1 = 0 and a2 + a3 = 0, so the null space is {a3 x2 + a3 x3 | a3 R }.
(b) Setting a0 + (a0 + a1 )x + (a2 + a3 )x3 = 2 + 0x + 0x2 x3 gives that a0 = 2,
and a1 = 2, and a2 + a3 = 1. Taking a3 as a parameter, and renaming it a3 = a gives this set description { 2 2x + (1 a)x2 + ax3 | a R } =
{ (2 2x x2 ) + a (x2 + x3 ) | a R}.
(c) This set is empty because the range of h includes only those polynomials with
a 0x2 term.
Three.II.2.29 All inverse images are lines with slope 2.
2x + y = 0
2x + y = 3
2x + y = 1
Answers to Exercises
163
164
For the other half, assume that h is one-to-one and so by Theorem 2.20 has a
trivial null space. Then for any ~v1 , . . . ,~vn V, the relation
~0W = c1 h(~v1 ) + + cn h(~vn ) = h(c1 ~v1 + + cn ~vn )
implies the relation c1 ~v1 + + cn ~vn = ~0V . Hence, if a subset of V is independent
then so is its image in W.
Remark. The statement is that a linear map is one-to-one if and only if it
preserves independence for all sets (that is, if a set is independent then its image is
also independent). A map that is not one-to-one may well preserve some independent
sets. One example is this map from R3 to R2 .
!
x
x+y+z
y 7
0
z
Linear independence is preserved for this set
!
1
1
{ 0 } 7 {
}
0
0
and (in a somewhat more tricky example) also for this set
!
1
0
1
}
{ 0 , 1 } 7 {
0
0
0
(recall that in a set, repeated elements do not appear twice). However, there are
sets whose independence is not preserved under this map
!
!
1
0
1
2
{ 0 , 2 } 7 {
,
}
0
0
0
0
and so not all sets have independence preserved.
~ 1, . . . ,
~ n i for
Three.II.2.37 (We use the notation from Theorem 1.9.) Fix a basis h
V and a basis h~
w1 , . . . , w
~ k i for W. If the dimension k of W is less than or equal to
the dimension n of V then the theorem gives a linear map from V to W determined
in this way.
~ 1 7 w
~k
~ 1, . . . ,
7 w
~k
and
~ k+1 7 w
~ n 7 w
~ k, . . . ,
~k
Answers to Exercises
165
2
Three.II.2.38 Yes. For the transformation
given by
! of R !
x
0
h
7
y
x
we have this.
!
0
N (h) = {
| y R} = R(h)
y
Remark. We will see more of this in the fifth chapter.
Three.II.2.39 This is a simple calculation.
h([S]) = { h(c1~s1 + + cn~sn ) | c1 , . . . , cn R and ~s1 , . . . ,~sn S}
..
h(c ~x + d ~y) =
.
am,1 (cx1 + dy1 ) + + am,n (cxn + dyn )
..
..
=
+
.
.
am,1 cx1 + + am,n cxn
= c h(~x) + d h(~y)
The appropriate conclusion is that General = Particular + Homogeneous.
(e) Each power of the derivative is linear because of the rules
dk
dk
dk
dk
dk
(f(x)
+
g(x))
=
f(x)
+
g(x)
and
rf(x)
=
r
f(x)
dxk
dxk
dxk
dxk
dxk
from calculus. Thus the given map is a linear transformation of the space because
any linear combination of linear maps is also a linear map by Lemma 1.17.
The appropriate conclusion is General = Particular + Homogeneous, where the
associated homogeneous differential equation has a constant of 0.
166
n
is an isomorphism from V to R .
To see that is one-to-one, assume that h1 and h2 are members of V such
that (h1 ) = (h2 ). Then
~ 1 ) h2 (
~ 1)
h1 (
.. ..
. = .
~ n)
~ n)
h1 (
h2 (
~ 1 ) = h2 (
~ 1 ), etc. But a homomorphism is determined by
and consequently, h1 (
its action on a basis, so h1 = h2 , and therefore is one-to-one.
To see that is onto, consider
x1
..
.
xn
for x1 , . . . , xn R. This function h from V to R
~ 1 + + cn
~ n 7h c1 x1 + + cn xn
c1
is linear and maps it to the given vector in Rn , so is onto.
Answers to Exercises
167
we have
~ 1 + + cn
~ n)
(r1 h1 + r2 h2 )(c1
~ 1 ) + r2 h2 (
~ 1 )) + + cn (r1 h1 (
~ n ) + r2 h2 (
~ n ))
= c1 (r1 h1 (
~ 1 ) + + cn h 1 (
~ n )) + r2 (c1 h2 (
~ 1 ) + + cn h 2 (
~ n ))
= r1 (c1 h1 (
so (r1 h1 + r2 h2 ) = r1 (h1 ) + r2 (h2 ).
~ 1, . . . ,
~ n i for V. Consider
Three.II.2.44 Let h : V W be linear and fix a basis h
these n maps from V to W
~ 1 ), h2 (~v) = c2 h(
~ 2 ), . . . , hn (~v) = cn h(
~ n)
h1 (~v) = c1 h(
~
~
for any ~v = c1 1 + + cn n . Clearly h is the sum of the hi s. We need only
~ 1 + + dn
~ n we have hi (r~v + s~u) =
check that each hi is linear: where ~u = d1
rci + sdi = rhi (~v) + shi (~u).
Three.II.2.45 Either yes (trivially) or no (nearly trivially).
If we take V is homomorphic to W to mean there is a homomorphism from V
into (but not necessarily onto) W, then every space is homomorphic to every other
space as a zero map always exists.
If we take V is homomorphic to W to mean there is an onto homomorphism
from V to W then the relation is not an equivalence. For instance, there is an onto
homomorphism from R3 to R2 (projection is one) but no homomorphism from R2
onto R3 by Corollary 2.17, so the relation is not reflexive.
Three.II.2.46 That they form the chains is obvious. For the rest, we show here that
R(tj+1 ) = R(tj ) implies that R(tj+2 ) = R(tj+1 ). Induction then applies.
Assume that R(tj+1 ) = R(tj ). Then t : R(tj+1 ) R(tj+2 ) is the same map,
with the same domain, as t : R(tj ) R(tj+1 ). Thus it has the same range:
R(tj+2 ) = R(tj+1 ).
168
Three.III.1.12
12+31+10
5
(a) 0 2 + (1) 1 + 2 0 = 1
12+11+00
3
0
(c) 0
0
Three.III.1.13
(a)
24+12
3 4 (1/2) 2
!
=
10
11
!
(b)
4
1
!
(c) Not defined.
1 1 0
1 2 1
RepB,D (h) =
0 0 0
0 0 1 B,D
and, as
1
RepB (1 3x + 2x2 ) = 3
2
1 1 0
2
1
1 2 1
3
RepD (h(1 3x + 2x2 )) =
3 =
0 0 0
0
2
B
0 0 1 B,D
2 D
Thus, h(1 3x + 2x2 ) = 2 1 3 x + 0 x2 2 x3 = 2 3x 2x3 , as above.
Three.III.1.16 Again, as recalled in the subsection, with respect to Ei , a column vector
represents itself.
Answers to Exercises
169
(a) To represent h with respect to E2 , E3 take the images of the basis vectors from
the domain, and represent them with respect to the basis for the codomain. The
first is this
2
2
RepE3 ( h(~e1 ) ) = RepE3 (2) = 2
0
0
while the second is this.
0
0
RepE3 ( h(~e2 ) ) = RepE3 ( 1 ) = 1
1
1
Adjoin these to make the matrix.
1
1
v1
v2
and so
RepE3 ( h(~v) ) = 2
0
!
0
2v1
v1
= 2v1 + v2
1
v2
1
v2
2
1
2
1/2
!
1
1/2
1
1/2
!
D
170
Three.III.1.18
(a) We must first find the image of each vector from the domains
basis, and then represent that image with respect to the codomains basis.
0
1
0
2
dx
dx
d1
0
0
2
RepB (
) = RepB (
) = RepB (
)=
0
0
0
dx
dx
dx
0
0
0
0
0
d x3
RepB (
)=
3
dx
0
Those representations are then adjoined to
map.
0
0
d
RepB,B ( ) =
0
dx
0
0
2
0
0
0
0
3
0
(b) Proceeding as in the prior item, we represent the images of the domains
basis vectors
0
1
0
0
0
1
2
d
x
d
x
d1
RepD (
) = RepD (
) = RepD (
)=
0
0
0
dx
dx
dx
0
0
0
0
0
d x3
RepD (
)=
1
dx
0
and adjoin to make the matrix.
0
0
d
RepB,D ( ) =
0
dx
0
1
0
0
0
0
1
0
0
0
0
1
0
Three.III.1.19 For each, we must find the image of each of the domains basis vectors,
represent each image with respect to the codomains basis, and then adjoin those
representations to get the matrix.
(a) The basis vectors from the domain have these images
1 7 0
x 7 1
x2 7 2x
...
Answers to Exercises
171
and these images are represented with respect to the codomains basis in this
way.
0
1
0
0
0
2
0
0
0
.
RepB (0) =
Rep
(1)
=
Rep
(2x)
=
B
B
..
..
.
.
.
.
...
0
0
0
n1
RepB (nx
)=
..
.
n
0
The matrix
0
0
d
RepB,B ( ) =
dx
0
0
1
0
..
.
0
0
0
2
...
...
0
0
...
...
0
0
n
0
x 7 x2 /2
x2 7 x3 /3
...
0
0
0
1
2
0 RepB
1/2
(x
/2)
=
RepBn+1 (x) =
n+1
..
..
.
.
...
n+1
0
RepBn+1 (x
/(n + 1)) =
..
.
1/(n + 1)
172
0
0
... 0
0
1
0
... 0
0
0
1/2
.
.
.
0
0
RepBn ,Bn+1 ( ) =
..
.
0
0
. . . 0 1/(n + 1)
(c) The images of the basis vectors of the domain are
1 7 1
x 7 1/2
x2 7 1/3
...
...
RepB,E1 ( ) = 1
1/2
1/n 1/(n + 1)
x 7 3
x2 7 9
...
RepE1 (3) = 3
Z1
)= 1
RepE1 (9) = 9
9
3n
...
(e) The images of the basis vectors from the domain are 1 7 1, and x 7 x + 1 =
1 + x, and x2 7 (x + 1)2 = 1 + 2x + x2 , and x3 7 (x + 1)3 = 1 + 3x + 3x2 + x3 ,
etc. The representations are here.
1
1
1
0
1
2
0
0
1
RepB (1 + x) = RepB (1 + 2x + x2 ) = . . .
RepB (1) =
0
0
0
..
..
..
.
.
.
0
0
0
The resulting matrix
1 1 1 1 ... 1
0 1 2 3 . . . n
1
RepB,B (slide1 ) =
2
0 0 1 3 . . .
.
..
0 0 0
... 1
Answers to Exercises
173
is Pascals triangle (recall that nr is the number of ways to choose r things,
without order and without repetition, from a set of size n).
Three.III.1.20 Where the space is n-dimensional,
1 0... 0
0 1 . . . 0
RepB,B (id) =
..
0 0 . . . 1 B,B
is the nn identity matrix.
Three.III.1.21 Taking this as the natural basis
!
!
!
1
0
0
1
0
0
0
~ 1,
~ 2,
~ 3,
~ 4i = h
B = h
,
,
,
0 0
0 0
1 0
0
the transpose map acts in this way
~ 1 7
~1
~ 2 7
~3
~ 3 7
~2
~ 4 7
~4
!
0
i
1
so that representing the images with respect to the codomains basis and adjoining
those column vectors together gives this.
1 0 0 0
0 0 1 0
RepB,B (trans) =
0 1 0 0
0 0 0 1 B,B
Three.III.1.22 (a) With respect to the basis of the codomain, the images of the
members of the
basis
of the domain
are
represented as
0
0
0
0
1
0
0
0
~ 2 ) = RepB (
~ 3 ) = RepB (
~ 4 ) = RepB (~0) =
RepB (
0
1
0
0
0
0
1
0
and consequently, the matrix representing
the
transformation
is
this.
0 0 0 0
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 0
1 0 0 0
(b)
0 0 0 0
0 0 1 0
0 0 0 0
1 0 0 0
(c)
0 1 0 0
0 0 0 0
174
Three.III.1.23
ds (~v)
~v
This maps effect on the vectors in the standard basis for the domain is
!
!
!
!
1 ds s
0 ds 0
7
7
0
0
1
s
and those images are represented with respect to the codomains basis (again,
the standard basis) by themselves.
!
!
!
!
s
s
0
0
RepE2 (
)=
RepE2 (
)=
0
0
s
s
Thus the representation of the dilation map is this.
!
s 0
RepE2 ,E2 (ds ) =
0 s
(b) The picture of f` : R2 R2 is this.
f
`
7
(the case of a line with undefined slope is separate but easy) and so the matrix
representing reflection is this.
!
1
1 k2
2k
RepE2 ,E2 (f` ) =
1 + k2
2k
(1 k2 )
Three.III.1.24 Call the map t : R2 R2 .
(a) To represent this map with respect to the standard bases, we must find, and
then represent, the images of the vectors ~e1 and ~e2 from the domains basis. The
image of ~e1 is given.
One way to find the image of ~e2 is by eye we can see this.
!
!
!
!
!
!
1
1
0
2
1
3
t
=
7
=
1
0
1
0
0
0
A more systematic way to find the image of ~e2 is to use the given information
to represent the transformation, and then use that representation to determine
Answers to Exercises
175
1
0
As
RepC (~e2 ) =
1
1
!
C
we have that
RepE2 (t(~e2 )) =
2
0
1
0
!
C,E2
1
1
!
=
C
3
0
!
E2
and consequently we know that t(~e2 ) = 3 ~e1 (since, with respect to the standard
basis, this vector is represented by itself). Therefore, this is the representation of
t with respect to E2 , E2 .
!
1 3
RepE2 ,E2 (t) =
0 0
E2 ,E2
(b) To use the matrix developed in the prior item, note that
!
!
0
0
RepE2 (
)=
5
5
E2
and so we have this is the representation, with respect to the codomains basis,
of the image of the given vector.
!
!
!
!
0
1 3
0
15
RepE2 (t(
)) =
=
5
0 0
5
0
E2 ,E2
E2
E2
Because the codomains basis is the standard one, and so vectors in the codomain
are represented by themselves, we have this.
!
!
0
15
t(
)=
5
0
(c) We first find the image of each member of B, and then represent those images
with respect to D. For the first step, we can use the matrix developed earlier.
!
!
!
!
!
!
1
1 3
1
4
1
4
RepE2 (
)=
=
so t(
)=
1
0 0
1
0
1
0
E2 ,E2
E2
E2
Actually, for the second member of B there is no need to apply the matrix because
the problem statement gives its image.
!
!
1
2
t(
)=
1
0
176
1
2
1/2
1
!
B,D
(d) We know the images of the members of the domains basis from the prior item.
!
!
!
!
1
4
1
2
t(
)=
t(
)=
1
0
1
0
We can compute the representation of those images with respect to the codomains
basis.
!
!
!
!
4
2
2
1
RepB (
)=
and RepB (
)=
0
2
0
1
B
2
2
1
1
!
B,B
~ 2 7 h(
~ 2)
~ n 7 h(
~ n)
...
and those images are represented with respect to the codomains basis in this
way.
1
0
0
1
~ 1 ) ) = . Reph(B) ( h(
~ 2) ) = .
Reph(B) ( h(
.
.
.
.
0
0
0
0
~ n) ) = .
Reph(B) ( h(
.
.
...
1
Hence, the matrix is the identity.
0
RepB,h(B) (h) =
0
1
...
..
Answers to Exercises
177
(b) Using the matrix in the prior item, the representation is this.
c1
..
Reph(B) ( h(~v) ) = .
cn h(B)
Three.III.1.26 The product
h1,1
h2,1
hm,1
...
...
..
.
...
h1,i
h2,i
...
...
hm,i
...
h1,n .
h1,i
..
h2,n
h2,i
.
=
1
.
. .
.
h1,n .
hm,i
0
d/dx
(a) The images of the basis vectors for the domain are cos x 7
d/dx
sin x and sin x 7 cos x. Representing those with respect to the codomains
basis (again, B) and adjoining the representations gives this matrix.
!
d
0 1
RepB,B ( ) =
dx
1 0
B,B
d/dx
d/dx
(b) The images of the vectors in the domains basis are ex 7 ex and e2x 7
2e2x . Representing with respect to the codomains basis and adjoining gives this
matrix.
!
d
1 0
RepB,B ( ) =
dx
0 2
B,B
d/dx
d/dx
d/dx
0 1 0
d
0 0 0
RepB,B ( ) =
0 0 1
dx
0 0 0
0
0
1
1 B,B
Three.III.1.28 (a) It is the set of vectors of the codomain represented with respect
to the codomains basis in this way.
!
!
!
1 0
x
x
{
| x, y R} = {
| x, y R}
0 0
y
0
As the codomains basis is E2 , and so each vector is represented by itself, the
range of this transformation is the x-axis.
178
Answers to Exercises
179
Three.III.1.30 We mimic Example 1.1, just replacing the numbers with letters.
~ 1, . . . ,
~ n i and D as h~1 , . . . , ~m i. By definition of representation
Write B as h
of a map with respect to bases, the assumption that
h1,1 . . . h1,n
.
..
RepB,D (h) = ..
.
hm,1
...
hm,n
0
0
0
0
1
1
7 cos
7 sin
0
1
0 7 0
1
cos
sin
0
0
0
are represented with respect to the codomains basis (again, E3 ) by themselves,
so adjoining the representations to make the matrix gives this.
1
0
0
1
cos
0
0
0
sin
0 7 0
1 7 1
0 7 0
0
sin
0
0
1
cos
180
cos 0 sin
1
0
0
sin 0 cos
(c) To a person standing up, with the vertical z-axis, a rotation of the xy-plane
that is clockwise proceeds from the positive y-axis to the positive x-axis. That
is, it rotates opposite to the direction in Example 1.9. The images of the vectors
from the domains basis
1
cos
0
sin
0
0
0 7 sin
1 7 cos
0 7 0
0
0
0
0
1
1
are represented with respect to E3 by themselves, so the matrix is this.
cos sin 0
sin cos 0
0
0
1
cos sin 0 0
sin cos 0 0
(d)
0
0
1 0
0
0
0 1
~ 1, . . . ,
~ k i and then write BV as the
Three.III.1.32 (a) Write the basis BU as h
~ 1, . . . ,
~ k,
~ k+1 , . . . ,
~ n i. If
extension h
c1
..
RepBU (~v) = .
ck
~ 1 + + ck
~ k then
so that ~v = c1
c1
.
..
c
k
RepBV (~v) =
0
..
.
0
~ 1 + + ck
~k + 0
~ k+1 + + 0
~ n.
because ~v = c1
(b) We must first decide what the question means. Compare h : V W with its
restriction to the subspace h U : U W. The range space of the restriction is a
Answers to Exercises
181
subspace of W, so fix a basis Dh(U) for this range space and extend it to a basis
DV for W. We want the relationship between these two.
RepBV ,DV (h)
and
h1,1 . . . h1,k
.
..
RepBU ,Dh(U) (h U ) = ..
.
hp,1 . . . hp,k
then the extension is represented in this way.
h1,1 . . . h1,k
h1,k+1
...
h1,n
.
..
..
.
h
hp,k+1
...
hp,n
p,1 . . . hp,k
0
...
0
hp+1,k+1 . . . hp+1,n
..
..
.
.
0
...
0
hm,k+1 . . . hm,n
~ 1 ), . . . , h(
~ i ) }.
(c) Take Wi to be the span of {h(
(d) Apply the answer from the second item to the third item.
(e) No. For instance x : R2 R2 , projection onto the x axis, is represented by
these two upper-triangular matrices
!
!
1 0
0 1
RepE2 ,E2 (x ) =
and RepC,E2 (x ) =
0 0
0 0
where C = h~e2 , ~e1 i.
3
1
182
=
2
2
1
1
1
(b) The second member of the!basis maps
!
!
1
0
(1/2
=
7
0
1
1/2
B
D
to this member of the codomain.!
!
!
1
1
1
1
1
+
=
2
2
1
1
0
Answers to Exercises
183
(c) Because the map that the matrix represents is the identity map on the basis,
it must be the identity on all members of the domain. We can come to the same
conclusion in another way by considering
!
!
x
y
=
y
x
B
which maps to
(x + y)/2
(x y)/2
which represents this member of R2 .
!
x+y
xy
1
2
2
1
!
D
1
1
!
=
x
y
maps to
0
a
!
representing
and so the linear map represented by the matrix with respect to these bases
a cos + b sin 7 a cos
is projection onto the first component.
Three.III.2.16 Denote the given basis of P2 by B. Application of the linear map is
represented by matrix-vector multiplication. Thus the first vector in E3 maps to
the element of P2 represented with respect to B by
1 3 0
1
1
0 1 0 0 = 0
1 0 1
0
1
and that element is 1 + x. Calculate the other two images of basis vectors in the
same way.
3
1 3 0
0
0
1 3 0
0
0 1 0 1 = 1 = RepB (4 + x2 ) 0 1 0 0 = 0 = RepB (x)
1 0 1
0
0
1 0 1
1
1
So the range of h is the span of three polynomials 1 + x, 4 + x2 , and x. We can
thus decide if 1 + 2x is in the range of the map by looking for scalars c1 , c2 , and c3
such that
c1 (1 + x) + c2 (4 + x2 ) + c3 (x) = 1 + 2x
184
(b) The representation map RepD : W R2 and its inverse are isomorphisms,
and so preserve the dimension of subspaces. The subspace of R2 that is in the
prior item is one-dimensional. Therefore, the image of that subspace under the
inverse of the representation map the null space of G, is also one-dimensional.
(c) The set of representations of members of the range space is this.
!
!
x + 2y
1
{
| x, y R} = {k
| k R}
3x + 6y
3
D
(d) Of course, Theorem 2.4 gives that the rank of the map equals the rank of the
matrix, which is one. Alternatively, the same argument that we used above for
the null space gives here that the dimension of the range space is one.
(e) One plus one equals two.
Three.III.2.18 (a) (i) The dimension of the domain space is the number of columns m =
2. The dimension of the codomain space is the number of rows n = 2.
For the rest, we consider this matrix-vector
equation.
!
!
!
2
1
1
3
x
y
a
b
()
Answers to Exercises
We solve for x and y.
!
2 1 a
(1/2)1 +2
1 3 b
185
(1/2)1
(1/2)2 +1
(2/7)2
1 0
0 1
(3/7)a (1/7)b
(1/7)a + (2/7)b
!
R2
in equation () the system has a solution, by the calculation. So the range space
is all of the codomain R(h) = R2 . The maps rank is the dimension of the
range, 2. The map is onto because the range space is all of the codomain.
(iii) Again by the calculation, to find the nullspace, setting a = b = 0 in
equation () gives that x = y = 0. The null space is the trivial subspace of the
domain.
!
0
N (h) = {
}
0
The nullity is the dimension of that null space, 0. The map is one-to-one because
the null space is trivial.
(b) (i) The dimension of the domain space is the number of matrix columns, m = 3,
and the dimension of the codomain space is the number of rows, n = 3.
The calculation is this.
0
1 3 a
1 2 1 +3 22 +3
3 4 b
2
2 1 2 c
3
a
0 1
0 0
0
2a + b + c
(ii) There are codomain triples
a
b R3
c
for which the system does not have a solution, specifically the system only has a
solution if 2a + b + c = 0.
a
1/2
1/2
R(h) = { b | a = (b + c)/2 } = { 1 b + 0 c | b, c R }
c
0
1
The maps rank is the ranges dimension, 2. The map is not onto because the
range space is not all of the codomain.
186
x
5/2
1 1 a
a + b
1 0
21 +2 22 +3 2 2 +1
2a b
2 1 b
0 1
31 +3
3 1 c
0 0 a 2b + c
(ii) The range is this subspace of the codomain.
2b c
2
1
R(h) = { b | b, c R} = { 1 b + 0 c | b, c R }
c
0
1
The rank is 2. The map is not onto.
(iii) The null space is the trivial subspace of the domain.
!
!
x
0
N (h) = {
=
}
y
0
The nullity is 0. The map is one-to-one.
Three.III.2.19
(a)
2
2
0 1 a
1 0 1
21 +2
1 0 b
0 1 2
21 +3
2 2 c
0 2 4
1 0 1
22 +3
0 1 2
0 0 0
2a + b
2a + c
2a + b
2a 2b + c
(i) The dimensions are m = n = 3. (ii) The range space is the set containing all
of the members of the codomain for which this system has a solution.
b (1/2)c
R(h) = {
b
| b, c R}
c
The rank is 2. Because the rank is less than the dimension n = 3 of the codomain,
the map is not onto.
Answers to Exercises
187
(iii) The null space is the set of members of the domain that map to a = 0,
b = 0, and c = 0.
N (h) = { 2z | z R }
z
The nullity is 1. Because the nullity is not 0 the map is not one-to-one.
(b) Here, (i) the domain and codomain are each of dimension 3. To show (ii)
and (iii), that the map is an isomorphism, we must show it is both onto and
one-to-one. For that we dont need to augment the matrix with a, b, and c; this
calculation
2 1 0
(1/2)1
23 +2
(3/2)1 +2 32 +3
3 1 1
2
(7/2)1 +3
2
7 2 1
(1/2)3
1 0 0
(1/2)2 +1
0 1 0
0 0 1
gives that for each codomain vector there is one and only one associated domain
vector.
~ W there
Three.III.2.20 (a) The defined map h is onto if and only if for every w
is a ~v V such that h(~v) = w
~ . Since for every vector there is exactly one
representation, converting to representations gives that h is onto if and only if
for every representation RepD (~
w) there is a representation RepB (~v) such that
H RepB (~v) = RepD (~
w).
(b) This is just like the prior part.
(c) As described at the start of this subsection, by definition the map h defined
by the matrix H associates this domain vector ~v with this codomain vector w
~.
v1
h1,1 v1 + + h1,n vn
.
..
RepB (~v) = ..
RepD (~
w) = H RepB (~v) =
.
vn
hm,1 v1 + + hm,n vn
Fix w
~ W and consider the linear system defined by the above equation.
h1,1 v1 + + h1,n vn = w1
h2,1 v1 + + h2,n vn = w2
..
.
hn,1 v1 + + hn,n vn = wn
(Again, here the wi are fixed and the vj are unknowns.) Now, H is nonsingular
if and only if for all w1 , . . . , wn this system has a solution and the solution is
188
unique. By the first two parts of this exercise this is true if and only if the map
h is onto and one-to-one. This in turn is true if and only if h is an isomorphism.
Three.III.2.21 No, the range spaces may differ. Example 2.3 shows this.
Three.III.2.22 Recall that the representation map
Rep
V 7B Rn
n
is an isomorphism. Thus, its inverse map Rep1
B : R V is also an isomorphism.
n
The desired transformation of R is then this composition.
Rep1
Rep
B
Rn 7
V
7 D Rn
Because a composition of isomorphisms is also an isomorphism, this map RepD
Rep1
B is an isomorphism.
Three.III.2.23 Yes. Consider
!
1 0
H=
0 1
E2
E2
=
E2
y
x
interchanges first and second components (that is, it is a reflection about the line
y = x). The last
!
!
!
!
x
x
x + 3y
x + 3y
=
7
=
y
y
y
y
E2
E2
stretches vectors parallel to the y axis, by an amount equal to three times their
distance from that axis (this is a skew.)
Answers to Exercises
189
g2
gn
(the matrix has n columns because V is n-dimensional and it has only one row
because R is one-dimensional). Then taking ~x to be the column vector that is
the transpose of this matrix
g1
..
~x = .
gn
190
Matrix Operations
Three.IV.1: Sums and Scalar Products
Three.IV.1.8
7
9
0
1
!
6
6
12
6
6
12
6
18
4
0
2
6
(a)
(b)
(c)
!
1 28
(d)
(e) Not defined.
2
1
Three.IV.1.9 Represent the domain vector ~v V and the maps g, h : V W with
respect to bases B, D in the usual way.
(a) The representation of (g + h) (~v) = g(~v) + h(~v)
(g1,1 v1 + + g1,n vn )~1 + + (gm,1 v1 + + gm,n vn )~m
+ (h1,1 v1 + + h1,n vn )~1 + + (hm,1 v1 + + hm,n vn )~m
regroups
= ((g1,1 + h1,1 )v1 + + (g1,1 + h1,n )vn ) ~1
+ + ((gm,1 + hm,1 )v1 + + (gm,n + hm,n )vn ) ~m
to the entry-by-entry sum of the representation of g(~v) and the representation of
h(~v).
(b) The representation of (r h) (~v) = r h(~v)
r (h1,1 v1 + h1,2 v2 + + h1,n vn )~1
+ + (hm,1 v1 + hm,2 v2 + + hm,n vn )~m
= (rh1,1 v1 + + rh1,n vn ) ~1
+ + (rhm,1 v1 + + rhm,n vn ) ~m
is the entry-by-entry multiple of r and the representation of h.
Answers to Exercises
191
g1,1 . . . g1,n
h1,1 . . . h1,n
.
.
..
..
G = ..
H = ..
.
.
gm,1 . . . gm,n
hm,1 . . . hm,n
then, by definition we have
g1,1 + h1,1
..
G+H=
.
gm,1 + hm,1
...
...
g1,n + h1,n
..
.
gm,n + hm,n
and
h1,1 + g1,1
..
H+G=
.
hm,1 + gm,1
...
...
h1,n + g1,n
..
.
hm,n + gm,n
and the two are equal since their entries are equal gi,j + hi,j = hi,j + gi,j . That is,
each of these is easy to check by using Definition 1.3 alone.
However, each property is also easy to understand in terms of the represented
maps, by applying Theorem 1.4 as well as the definition.
(a) The two maps g + h and h + g are equal because g(~v) + h(~v) = h(~v) + g(~v),
as addition is commutative in any vector space. Because the maps are the same,
they must have the same representative.
(b) As with the prior answer, except that here we apply that vector space addition
is associative.
(c) As before, except that here we note that g(~v) + z(~v) = g(~v) + ~0 = g(~v).
(d) Apply that 0 g(~v) = ~0 = z(~v).
(e) Apply that (r + s) g(~v) = r g(~v) + s g(~v).
(f) Apply the prior two items with r = 1 and s = 1.
(g) Apply that r (g(~v) + h(~v)) = r g(~v) + r h(~v).
(h) Apply that (rs) g(~v) = r (s g(~v)).
Three.IV.1.11 For any V, W with bases B, D, the (appropriately-sized) zero matrix
represents this map.
~ 1 7 0 ~1 + + 0 ~m
~n
7 0 ~1 + + 0 ~m
This is the zero map.
There are no other matrices that represent only one map. For, suppose that H
is not the zero matrix. Then it has a nonzero entry; assume that hi,j 6= 0. With
respect to bases B, D, it represents h1 : V W sending
~ j 7 h1,j~1 + + hi,j~i + + hm,j~m
192
(the notation 2 D means to double all of the members of D). These maps are easily
seen to be unequal.
Three.IV.1.12 Fix bases B and D for V and W, and consider RepB,D : L(V, W) Mmn
associating each linear map with the matrix representing that map h 7 RepB,D (h).
From the prior section we know that (under fixed bases) the matrices correspond
to linear maps, so the representation map is one-to-one and onto. That it preserves
linear operations is Theorem 1.4.
Three.IV.1.13 Fix bases and represent the transformations with 22 matrices. The
space of matrices M22 has dimension four, and hence the above six-element set is
linearly dependent. By the prior exercise that extends to a dependence of maps.
(The misleading part is only that there are six transformations, not five, so that we
have more than we need to give the existence of the dependence.)
Three.IV.1.14 That the trace of a sum is the sum of the traces holds because both
trace(H + G) and trace(H) + trace(G) are the sum of h1,1 + g1,1 with h2,2 + g2,2 ,
etc. For scalar multiplication we have trace(r H) = r trace(H); the proof is easy.
Thus the trace map is a homomorphism from Mnn to R.
Three.IV.1.15 (a) The i, j entry of (G + H)T is gj,i + hj,i . That is also the i, j entry
of GT + HT .
(b) The i, j entry of (r H)T is rhj,i , which is also the i, j entry of r HT .
Three.IV.1.16 (a) For H + HT , the i, j entry is hi,j + hj,i and the j, i entry of is
hj,i + hi,j . The two are equal and thus H + HT is symmetric.
Every symmetric matrix does have that form, since we can write H = (1/2)
(H + HT ).
(b) The set of symmetric matrices is nonempty as it contains the zero matrix.
Clearly a scalar multiple of a symmetric matrix is symmetric. A sum H + G
of two symmetric matrices is symmetric because hi,j + gi,j = hj,i + gj,i (since
hi,j = hj,i and gi,j = gj,i ). Thus the subset is nonempty and closed under the
inherited operations, and so it is a subspace.
Three.IV.1.17 (a) Scalar multiplication leaves the rank of a matrix unchanged except
that multiplication by zero leaves the matrix with rank zero. (This follows from
the first theorem of the book, that multiplying a row by a nonzero scalar doesnt
change the solution set of the associated linear system.)
(b) A sum of rank n matrices can have rank less than n. For instance, for any
matrix H, the sum H + (1) H has rank zero.
Answers to Exercises
193
A sum of rank n matrices can have rank greater than n. Here are rank one
matrices that sum to a rank two matrix.
!
!
!
1 0
0 0
1 0
+
=
0 0
0 1
0 1
1
0
(a)
0
1
Three.IV.2.15
(c)
18
24
0
0
15.5
19
1
10
2
4
!
(b)
2
17
1
1
1
1
!
(c) Not defined.
(a)
!
17
16
(d)
1
2
!
!
2
2 3
6
(b)
=
4
4 1
36
!
!
!
18 17
1
6
1
=
0
24 16
36 34
1
10
Three.IV.2.16
(a) Yes.
(b) Yes.
(c) No.
Three.IV.2.17
(a) 21
(b) 11
1
34
(d) No.
(d) 22
Three.IV.2.18 We have
h1,1 (g1,1 y1 + g1,2 y2 ) + h1,2 (g2,1 y1 + g2,2 y2 ) + h1,3 (g3,1 y1 + g3,2 y2 ) = d1
h2,1 (g1,1 y1 + g1,2 y2 ) + h2,2 (g2,1 y1 + g2,2 y2 ) + h2,3 (g3,1 y1 + g3,2 y2 ) = d2
which, after expanding and regrouping about the ys yields this.
(h1,1 g1,1 + h1,2 g2,1 + h1,3 g3,1 )y1 + (h1,1 g1,2 + h1,2 g2,2 + h1,3 g3,2 )y2 = d1
(h2,1 g1,1 + h2,2 g2,1 + h2,3 g3,1 )y1 + (h2,1 g1,2 + h2,2 g2,2 + h2,3 g3,2 )y2 = d2
We can express the starting system and the system used for the substitutions in
matrix language, as
! x
!
x1
1
h1,1 h1,2 h1,3
d1
x2 = H x2 =
h2,1 h2,2 h2,3
d2
x3
x3
and
g1,1
g2,1
g3,1
!
!
g1,2
x1
y1
y1
=G
= x2
g2,2
y2
y2
g3,2
x3
194
a+b
2a + 2b
3a 3b
0
0
0 7 0x2 + 0x + 1
1
0
1 7 x2 + 2x + 1
1
5/2
1/2
1/2
0
3/2
1/2
1
Similarly, because
1 + x 7
0
1
2
0
!
1 x 7
0 2
1 0
!
2
x 7
1
0
1
0
0
0
1
1
RepC,D (g) =
1/3 1/3
0
0
1
1/2
0
0
2
3
RepB,D (g h) =
4/3
0
(d) The matrix multiplication is routine, just
0
0
1
5/2
3/2
1
1
1/2
3/2 1/2
1/3 1/3
0
2
1
0
0
0
1
3/2
2/3
0
0
0 7
1
0
0
0
0
0
0
0
0
2
1
0
1/2
3 3/2 0
1/2 =
4/3 2/3 0
0
0
0
0
Answers to Exercises
195
Three.IV.2.20 Technically, no. The dot product operation yields a scalar while the
matrix product yields a 11 matrix. However, we usually will ignore the distinction.
Three.IV.2.21 The action of d/dx on B is 1 7 0, x 7 1, x2 7 2x, . . . and so this is
its (n + 1)(n + 1) matrix representation.
0 1 0
0
0 0 2
0
d
.
..
RepB,B ( ) =
dx
0 0 0
n
0 0 0
0
The product of this
0 1
0 0
0 0
0 0
2
0 0 2 0
0
0
0
0 0 0 6
0
2
0
..
.
..
=
.
0 0 0
n(n 1)
0
n
0 0 0
0
0
0
0 0 0
0
p 7dx
196
Three.IV.2.25 Each follows easily from the associated map fact. For instance, p
applications of the transformation h, following q applications, is simply p + q
applications.
Three.IV.2.26 Although we can do these by going through the indices, they are best
understood in terms of the represented maps. That is, fix spaces and bases so that
the matrices represent linear maps f, g, h.
(a) Yes; we have both r (g h) (~v) = r g( h(~v) ) = (r g) h (~v) and g (r h) (~v) =
g( r h(~v) ) = r g(h(~v)) = r (g h) (~v) (the second equality holds because of the
linearity of g).
(b) Both answers are yes. First, f (rg + sh) and r (f g) + s (f h) both send ~v
to r f(g(~v)) + s f(h(~v)); the calculation is as in the prior item (using the linearity
of f for the first one). For the other, (rf + sg) h and r (f h) + s (g h) both
send ~v to r f(h(~v)) + s g(h(~v)).
Three.IV.2.27 We have not seen a map interpretation of the transpose operation, so
we will verify these by considering the entries.
(a) The i, j entry of GHT is the j, i entry of GH, which is the dot product of the
j-th row of G and the i-th column of H. The i, j entry of HT GT is the dot product
of the i-th row of HT and the j-th column of GT , which is the dot product of
the i-th column of H and the j-th row of G. Dot product is commutative and so
these two are equal.
T
T
(b) By the prior item each equals its transpose, e.g., (HHT ) = HT HT = HHT .
Three.IV.2.28 Consider rx , ry : R3 R3 rotating all vectors /2 radians counterclockwise about the x and y axes (counterclockwise in the sense that a person whose
head is at ~e1 or ~e2 and whose feet are at the origin sees, when looking toward the
origin, the rotation as counterclockwise).
Rotating rx first and then ry is different than rotating ry first and then rx . In
particular, rx (~e3 ) = ~e2 so ry rx (~e3 ) = ~e2 , while ry (~e3 ) = ~e1 so rx ry (~e3 ) = ~e1 ,
and hence the maps do not commute.
Three.IV.2.29 It doesnt matter (as long as the spaces have the appropriate dimensions).
For associativity, suppose that F is mr, that G is rn, and that H is nk.
We can take any r dimensional space, any m dimensional space, any n dimensional
space, and any k dimensional space for instance, Rr , Rm , Rn , and Rk will do.
We can take any bases A, B, C, and D, for those spaces. Then, with respect to
Answers to Exercises
197
7 R(f)
7 R(g f)
First, the image of R(f) must have dimension less than or equal to the dimension
of R(f), by the prior sentence. On the other hand, R(f) is a subset of the domain
of g, and thus its image has dimension less than or equal the dimension of the
domain of g. Combining those two, the rank of a composition is less than or
equal to the minimum of the two ranks.
The matrix fact follows immediately.
Three.IV.2.31 The commutes with relation is reflexive and symmetric. However, it
is not transitive: for instance, with
!
!
!
1 2
1 0
5 6
G=
H=
J=
3 4
0 1
7 8
G commutes with H and H commutes with J, but G does not commute with J.
198
Three.IV.2.32
x
0
0
y x
y 7 y 7 0
z
0
0
1 0 0
0
RepE3 ,E3 (x ) = 0 0 0
RepE3 ,E3 (y ) = 0
0 0 0
0
and their product (in either order) is the zero matrix.
(d) Where B = h1, x, x2 , x3 , x4 i,
0 0 2 0 0
0
0 0 0 6 0
0
d3
d2
RepB,B ( 3 ) = 0
RepB,B ( 2 ) = 0 0 0 0 12
dx
dx
0 0 0 0 0
0
0 0 0 0 0
0
and their product (in either order) is the zero matrix.
space of fourth-
0
1
0
0
0
0
0
0
0
0
0
0
0
0
0
6
0
0
0
0
0
24
0
0
1
3
2
4
5
7
T=
6
8
56
88
56
88
!
S2 T 2 =
60
76
68
84
~ 1 7
~ 1, . . . ,
~ n 7
~ n,
Three.IV.2.34 Because the identity map acts on the basis B as
the representation is this.
1 0 0
0
0 1 0
0
0
0 0 1
..
.
0 0 0
1
The second part of the question is obvious from Theorem 2.7.
Three.IV.2.35 Here are four solutions.
T=
1
0
0
1
Answers to Exercises
199
Three.IV.2.36 (a) The vector space M22 has dimension four. The set { T 4 , . . . , T, I }
has five elements and thus is linearly dependent.
(b) Where T is nn, generalizing the argument from the prior item shows that there
2
is such a polynomial of degree n2 or less, since {T n , . . . , T, I } is a n2 + 1-member
subset of the n2 -dimensional space Mnn .
(c) First compute the powers
!
!
!
1/2 3/2
0 1
1/2 3/2
4
2
3
T =
T =
T =
3/2
1/2
1 0
3/2 1/2
(observe that rotating by /6 three times results in a rotation by /2, which is
indeed what T 3 represents). Then set c4 T 4 + c3 T 3 + c2 T 2 + c1 T + c0 I equal to
the zero matrix
!
!
!
0 1
1/2 3/2
1/2 3/2
c4 +
c3 +
c2
3/2 1/2
1 0
3/2
1/2
!
!
!
1 0
0 0
3/2 1/2
c1 +
c0 =
+
3/2
0 1
0 0
1/2
to get this linear system.
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 + c0 = 0
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 + c0 = 0
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 + c0 = 0
(1/2)c1
=0
1 +4 2 +3 ( 3/2)c4 c3 ( 3/2)c2
0=0
0=0
(1/2)c4
+ (1/2)c2 + ( 3/2)c1 +
c =0
c3
3c2
2c1 3c0 = 0
31 +2
0=0
0=0
(1/2) + ( 3/2)c1 +
c =0
0
3
2c1 3c0 = 0
200
t=1 k=1
t=1 k=1
r X
s
X
k=1 t=1
r
X
fi,k
k=1
s
X
gk,t ht,j
t=1
(the first equality comes from using the distributive law to multiply through
the hs, the second equality is the associative law for real numbers, the third
is the commutative law for reals, and the fourth equality follows on using the
distributive law to factor the fs out), which is the i, j entry of F(GH).
(b) The k-th component of h(~v) is
n
X
hk,j vj
j=1
j=1
k=1 j=1
k=1 j=1
n X
r
X
j=1 k=1
n X
r
X
(
gi,k hk,j ) vj
j=1 k=1
(the first equality holds by using the distributive law to multiply the gs through,
the second equality represents the use of associativity of reals, the third follows
by commutativity of reals, and the fourth comes from using the distributive law
to factor the vs out).
Answers to Exercises
201
1 0 0 d e f = a b c
0 0 1
g h i
g h i
(b) This matrix swaps column one
and
two.
!
!
!
a b
0 1
b a
=
c d
1 0
d c
Three.IV.3.27 Multiply by C1,2 (2), then by C1,3 (7), and then by C2,3 (3), paying
attention to the right-to-left order.
1 0 0
1 0 0
1 0 0
1 2 1 0
0 1 0 0 1 0 2 1 0 2 3 1 1
0 3 1
7 0 1
0 0 1
7 11 4 3
1 2
1
0
= 0 1 1 1
0 0
0
0
202
Three.IV.3.28 The product is the identity matrix (recall that cos2 + sin2 = 1).
An explanation is that the given matrix represents, with respect to the standard
bases, a rotation in R2 of radians while the transpose represents a rotation of
radians. The two cancel.
Three.IV.3.29 (a) The adjacency matrix is this (e.g, the first row shows that there
is only one connection including Burlington, the road to Winooski).
0 0 0 0 1
0 0 1 1 1
0 1 0 1 0
0 1 1 0 0
1 1 0 0 0
(b) Because these are two-way roads, any road connecting city i to city j gives a
connection between city j and city i.
(c) The square of the adjacency matrix tells how cities are connected by trips
involving two roads.
Three.IV.3.30 The pay due each person appears in the matrix product of the two
arrays.
Three.IV.3.31
2
3
The Gauss-Jordan
reduction is routine.
2 0
21 +2 2 +3 (1/5)2
1 0
31 +3
(1/2)3
1 2
22 +1
0
0
0
1
0
0
1
1
1
1
1 0 0
1 0 0
1 0 0
T = 2 1 0 0 1 0 0 1 0
0 0 1
3 0 1
0 1 1
1
1
1
1
0
0
1 0
0
1 2 0
0 1/5 0 0 1
0 0 1 0
0
0
1
0 0 1/2
0 0 1
Then just remember how to take the inverse
For instance, Ci,j (k)1 = Ci,j (k).
1 0 0
1 0 0
1 0 0
1
= 2 1 0 0 1 0 0 1 0 0
0 0 1
3 0 1
0 1 1
0
0
1
0 0
1
0
0
1
0
0
1
0 0
2
0
2
1
0
0
1
Answers to Exercises
203
Three.IV.3.32 One way to produce this matrix from the identity is to use the column
operations of first multiplying the second column by three, and then adding the
negative of the resulting second column to the first.
!
!
!
1 0
1 0
1 0
0 1
0 3
3 3
In contrast with row operations, column operations are written from left to right,
so this matrix product expresses doing the above two operations.
!
!
1 0
1 0
0 3
1 1
Remark. Alternatively, we could get the required matrix with row operations.
Starting with the identity, first adding the negative of the first row to the second,
and then multiplying the second row by three will work. Because we write successive
row operations as matrix products from right to left, doing these two row operations
is expressed with: the same matrix product.
Three.IV.3.33 The set of diagonal matrices is nonempty as the zero matrix is diagonal.
Clearly it is closed under scalar multiples and sums. Therefore it is a subspace.
The dimension is n; here is a basis.
1 0 ...
0 0 ...
0 0
0 0
,...,
}
{
..
..
.
.
Three.IV.3.35 For any scalar r and square matrix H we have (rI)H = r(IH) = rH =
r(HI) = (Hr)I = H(rI).
There are no other such matrices; here is an argument for 22 matrices that is
easily extended to nn. If a matrix commutes with all others then it commutes
with this unit matrix.
!
!
!
!
!
!
0 a
a b
0 1
0 1
a b
c d
=
=
=
0 c
c d
0 0
0 0
c d
0 0
From this we first conclude that the upper left entry a must equal its lower right
entry d. We also conclude that the lower left entry c is zero. The argument for the
upper right entry b is similar.
204
1 0 ...
0 0 ...
a1,1 a1,2 . . .
.
..
c1 0 0
+ + cn,m ..
= .
..
.
0 ...
1
an,1 . . .
an,m
has the unique solution c1 = a1,1 , c2 = a1,2 , etc.
Three.IV.3.42 Call that matrix F.
! We have
!
!
2
1
3 2
5 3
2
3
4
F =
F =
F =
1 1
2 1
3 2
In general,
!
fn+1
fn
n
F =
fn
fn1
where fi is the i-th Fibonacci number fi = fi1 + fi2 and f0 = 0, f1 = 1, which
we verify by induction, based on this
! equation.
!
!
fi1 fi2
1 1
fi
fi1
=
fi2 fi3
1 0
fi1 fi2
Answers to Exercises
205
and
~ 1 7g
~2
~ 2 7g ~0
will do.
Three.IV.3.47 (a) Each entry pi,j = gi,1 h1,j + + g1,r hr,1 takes r multiplications
and there are m n entries. Thus there are m n r multiplications.
206
Three.IV.3.48 This is how the answer was given in the cited source. No, it does not.
Let A and B represent, with respect to the standard bases, these transformations
of R3 .
x
x
x
0
a
a
y 7 y
y 7 x
z
0
z
y
Observe that
x
0
x
0
abab
baba
but
y
0
y
0 .
z
0
z
x
Three.IV.3.49 This is how the answer was given in the cited source.
(a) Obvious.
(b) If AT A~x = ~0 then ~y ~y = 0 where ~y = A~x. Hence ~y = ~0 by (a).
The converse is obvious.
(c) By (b), A~x1 ,. . . ,A~xn are linearly independent iff AT A~x1 ,. . . , AT A~vn are
linearly independent.
(d) We have
col rank(A) = col rank(AT A) = dim {AT (A~x) | all ~x }
6 dim {AT~y | all ~y } = col rank(AT ).
T
Thus also col rank(AT ) 6 col rank(AT ) and so col rank(A) = col rank(AT ) =
row rank(A).
Answers to Exercises
207
Three.IV.3.50 This is how the answer was given in the cited source. Let h~z1 , . . . , ~zk i
be a basis for R(A) N (A) (k might be 0). Let ~x1 , . . . , ~xk V be such that
A~xi = ~zi . Note {A~x1 , . . . , A~xk } is linearly independent, and extend to a basis for
R(A): A~x1 , . . . , A~xk , A~xk+1 , . . . , A~xr1 where r1 = dim(R(A)).
Now take ~x V. Write
A~x = a1 (A~x1 ) + + ar1 (A~xr1 )
and so
A2~x = a1 (A2~x1 ) + + ar1 (A2~xr1 ).
But A~x1 , . . . , A~xk N (A), so A2~x1 = ~0, . . . , A2~xk = ~0 and we now know
A2~xk+1 , . . . , A2~xr1
spans R(A2 ).
To see {A2~xk+1 , . . . , A2~xr1 } is linearly independent, write
bk+1 A2~xk+1 + + br1 A2~xr1 = ~0
A[bk+1 A~xk+1 + + br1 A~xr1 ] = ~0
and, since bk+1 A~xk+1 + + br1 A~xr1 N (A) we get a contradiction unless it is
~0 (clearly it is in R(A), but A~x1 , . . . , A~xk is a basis for R(A) N (A)).
Hence dim(R(A2 )) = r1 k = dim(R(A)) dim(R(A) N (A)).
Three.IV.4: Inverses
Three.IV.4.12 Here
is one way
1
0
1
1 2
0
3 1
1 1
0
with
to
0
1
0
proceed.
Follow
1
1 0
1 +3
0 0
0
0 1
0
0
0
(1/3)2 +3
0
3
0
1
1
4/3
0
1
1
0
1/3 1
0
3
1
1
1
1
0
1
1 0
1
0
1
0
(1/3)2
1/3
0
0
0 1 1/3
(3/4)3
0 0
1 1/4 3/4 3/4
1 0 0
1/4 1/4
3/4
(1/3)3 +2
0
1
0
1
0
1
0
1
208
Three.IV.4.13
(b) Yes.
!
1
1
1 1
1
=
Three.IV.4.14 (a)
2 1 1 (1)
3
1 2
1
!
!
1
3 4
3/4 1
(b)
=
0 (3) 4 1
1 0
1/4 0
(c) The prior question shows that no inverse exists.
Three.IV.4.15
0 2 0 1
0
1
(1/2)2
0
1
1/3
1/3
1/3
0
0 1/2
1
0
(1/3)2 +1
1
2
1/3 1/6
0
1/2
=
3201
6
0 2
0 3
1/3
2/3
2
0
1
3
1/2
1
1
0
0
1
(1/2)1
42
(3/2)1 +2
1 1/4
0
1
!
1 0
2 1/2
0 1/4 3/2 1
!
1/2 0
(1/4)2 +1
6 4
1 0
0 1
2
6
1
4
2 1 3 (1/2)
1
3
1/2
2
1
2 0 1
=2
2
0
1
3
4
0
1/2
2
1
1/2
0
1
shows that the left side wont reduce to the identity, so no inverse exists. The
check ad bc = 2 2 (4) (1) = 0 agrees.
Answers to Exercises
209
1 1
0 2
1 1
2 +3
23 +2
33 +1
2 +1
1 1 3
3 1 0 0
1 +3
4 0 1 0 0 2 4
0 0 0 1
0 2 3
1 1
3 1
0 0
(1/2)2
4 0
1 0
0 2
3
0 0 1 1 1 1
4
3
3
1 1 0
2 3/2
2
0 1 0
0 0 1 1
1 1
1 0 0
2 3/2
1
2 3/2
2
0 1 0
0 0 1 1
1 1
1 0 0
0 1 0
1 0 1
1 1 3
0 1 2
0 0 1
1
0
0 1/2
1
1
0
1
0
2
1
2
3
5
4
2
1
0
0
0
1
0
0
1
2
3 2 0 0 1
3 1
4 0 1 0
0 2
0
1
5 1 0 0
2
3 2 0
0 1
(1/2)2 +3
4 0
1 0
0 2
0
0
7 1 1/2 0
1 3/2 1
0
0 1/2
(1/2)1
1 2
0 1/2
0
0
(1/2)2
0
0
1 1/7 1/14
0
(1/7)3
1 3/2 0 1/7
1/14 1/2
23 +2
0
1 0 2/7 5/14
0
3 +1
0
0 1 1/7
1/14
0
2/7 5/14
0
0 1 0
0 0 1
1/7
1/14
0
210
2
2
3 1 0
1 2 3 0 1
4 2 3 0 0
0
1
0
0
(1/2)1 +2
21 +3
0
0
22 +3
2
3
6
2
3
0
3
9/2
9
3
9/2
0
0
1
0 0
1 0
2 1
1
1/2
2
1
1/2
1
0
1
0
As a check, note that the third column of the starting matrix is 3/2 times the
second, and so it is indeed singular and therefore has no inverse.
Three.IV.4.16 We can use Corollary 4.11.
1
1523
5
2
3
1
!
=
5
2
3
1
0
5
!1
=
1/5
0
0
1/5
!1
Answers to Exercises
211
Three.IV.4.20 One way to check that the first is true is with the angle sum formulas
from trigonometry.
!
cos(1 + 2 ) sin(1 + 2 )
sin(1 + 2 ) cos(1 + 2 )
!
cos 1 cos 2 sin 1 sin 2 sin 1 cos 2 cos 1 sin 2
=
sin 1 cos 2 + cos 1 sin 2
cos 1 cos 2 sin 1 sin 2
!
!
cos 1 sin 1
cos 2 sin 2
=
sin 1
cos 1
sin 2
cos 2
Checking the second equation in this way is similar.
Of course, the equations can be not just checked but also understood by recalling
that t is the map that rotates vectors about the origin through an angle of radians.
Three.IV.4.21 There are two cases. For the first case we assume that a is nonzero.
Then
!
!
1
0
1
0
a
b
a
b
(c/a)1 +2
=
0 (bc/a) + d c/a 1
0 (ad bc)/a c/a 1
shows that the matrix is invertible (in this a 6= 0 case) if and only if ad bc 6= 0.
To find the inverse, we finish with the Jordan half of the reduction. !
(1/a)1
(a/adbc)2
(b/a)2 +1
1 b/a
0
1
1 0
0 1
1/a
c/(ad bc)
d/(ad bc)
c/(ad bc)
0
a/(ad bc)
!
b/(ad bc)
a/(ad bc)
The other case is the a = 0 case. We swap to get!c into the 1, 1 position.
c d 0 1
1 2
0 b 1 0
This matrix is nonsingular if and only if both b and c are nonzero (which, under
the case assumption that a = 0, holds if and only if ad bc 6= 0). To find the
inverse we do the Jordan half.
!
!
1 d/c
0
1/c
1 0 d/bc 1/c
(1/c)1
(d/c)2 +1
0
1
1/b 0
0 1
1/b
0
(1/b)2
(Note that this is what is required, since a = 0 gives that ad bc = bc).
Three.IV.4.22 With H a 23 matrix, in looking for a matrix G such that the combination HG acts as the 2 2 identity we need G to be 3 2. Setting up the
equation
! m n
!
1 0 1
1 0
p q =
0 1 0
0 1
r s
212
+r
n
p
q
=1
+s = 0
=0
=1
!
!
1
a b
1 0 1
= 0
c d
0 1 0
0
0
1
0
0
1
gives rise to a linear system with nine equations and four unknowns.
a
=1
b
=0
a
=0
c
=0
d
=1
c
=0
e =0
f=0
e =1
This system is inconsistent (the first equation conflicts with the third, as do the
seventh and ninth) and so there is no left inverse.
Three.IV.4.23 With respect to the standard bases
RepE2 ,E3 () = 0
0
we have
1
0
! 1 0
!
a b c
1 0
Answers to Exercises
213
=1
=0
d
=0
e =1
There are infinitely many solutions in a, . . . , f to this system because two of these
variables are entirely unrestricted
a
1
0
0
b 0
0
0
c 0
1
0
{ = + c + f | c, f R}
d 0
0
0
e 1
0
0
b
f
0
0
1
and so there are infinitely many solutions to the matrix equation.
!
1 0 c
{
| c, f R }
0 1 f
With the bases still fixed at E2 , E2 , for instance taking c = 2 and f = 3 gives a
matrix representing this map.
!
x
f2,3 x + 2z
y 7
y + 3z
z
The check that f2,3 is the identity map on R2 is easy.
Three.IV.4.24 By Lemma 4.2 it cannot have infinitely many left inverses, because a
matrix with both left and right inverses has only one of each (and that one of each
is one of both the left and right inverse matrices are equal).
Three.IV.4.25 (a) True, It must be linear, as the proof from Theorem II.2.20 shows.
(b) False. It may be linear, but it need not be. Consider the projection map
: R3 R2 described at the start of this subsection. Define : R2 R3 in this
way.
!
x
x
7 y
y
1
It is a right inverse of because does this.
!
!
x
x
x
7 y 7
y
y
1
It is not linear because it does not map the zero vector to the zero vector.
214
Answers to Exercises
215
r=1 s=1
s=1 r=1
s=1
(A is singular if k = 0).
Change of Basis
Three.V.1: Changing Representations of Vectors
Three.V.1.7 For the matrix to change bases from D to E2 we need that RepE2 (id(~1 )) =
RepE2 (~1 ) and that RepE2 (id(~2 )) = RepE2 (~2 ). Of course, the representation of a
vector in R2 with respect to the standard basis is easy.
!
!
2
2
~
~
RepE2 (1 ) =
RepE2 (2 ) =
1
4
Concatenating those two together to make the columns of the change of basis matrix
gives this.
!
2 2
RepD,E2 (id) =
1 4
216
For the change of basis matrix in the other direction we can calculate RepD (id(~e1 )) =
RepD (~e1 ) and RepD (id(~e2 )) = RepD (~e2 ) (this job is routine) or we can take the
inverse of the above matrix. Because of the formula for the inverse of a 22 matrix,
this is easy.
!
!
1
4 2
4/10 2/10
RepE2 ,D (id) =
=
10
1 2
1/10 2/10
~ 1 )) = RepD (
~ 1 ) and RepD (id(
~ 2 )) = RepD (
~ 2)
Three.V.1.8 Concatenate RepD (id(
to make the !
change of basis matrix! RepB,D (id). !
!
0 1
2 1/2
1 1
1 1
(a)
(b)
(c)
(d)
1 0
1 1/2
2 4
1 2
~ 1 )) = RepD (
~ 1 ), RepD (id(
~ 2 )) = RepD (
~ 2 ),
Three.V.1.9 The vectors RepD (id(
~
~
and RepD (id(3 )) = RepD (3 ) make the change of basis matrix RepB,D (id).
1 1 1/2
0 0 1
1 1 0
(b) 0 1 1
(c) 1 1 1/2
(a) 1 0 0
0 1 0
0 0
1
0 2
0
E.g., for the first column of the first matrix, 1 = 0 x2 + 1 1 + 0 x.
Three.V.1.10 One way to go is to find RepB (~1 ) and RepB (~2 ), and then concatenate
them into the columns of the desired change of basis matrix. Another way is to
find the inverse
! of the matrices
! that answer Exercise
! 8.
!
0 1
1 1
2 1/2
2 1
(a)
(b)
(c)
(d)
1 0
2 4
1 1/2
1 1
Three.V.1.11 A matrix changes bases if and only if it is nonsingular.
(a) This matrix is nonsingular and so changes bases. Finding to what basis E2 is
changed means finding D such that
!
5 0
RepE2 ,D (id) =
0 4
and by the definition of how a matrix represents a linear map, we have this.
!
!
5
0
RepD (id(~e1 )) = RepD (~e1 ) =
RepD (id(~e2 )) = RepD (~e2 ) =
0
4
Where
x1
D=h
y1
we can either solve the system
!
!
!
1
x1
x2
=5
+0
0
y1
y1
!
x2
,
i
y2
0
1
x1
=0
y1
x2
+4
y1
Answers to Exercises
217
or else just spot the answer (thinking of the proof of Lemma 1.5).
!
!
1/5
0
D=h
,
i
0
1/4
(b) Yes, this matrix is nonsingular and so changes bases. To calculate D, we
proceed as above with
!
!
x1
x2
D=h
,
i
y1
y2
to solve
1
0
x1
=2
y1
x2
+3
y1
!
and
0
1
x1
=1
y1
x2
+1
y1
1
2/3
0
1/3
0
1/3
2/3 1/3
1/3
218
RepB,D (id) = 2
3
1
1
1
1
1
(c) Representing id(x2 ), id(x2 + x), and id(x2 + x + 1) with respect to the ending
basis gives this.
0
0
1/2
0 0 1/2
RepB,D (id) = 0 1 1
1 1
1
Three.V.1.13 This question has many different solutions. One way to proceed is to
make up any basis B for any space, and then compute the appropriate D (necessarily
for the same space, of course). Another, easier, way to proceed is to fix the codomain
as R3 and the codomain basis as E3 . This way (recall that the representation of
any vector with respect to the standard basis is just the vector itself), we have this.
3
1
4
D = E3
B = h2 , 1 , 1i
0
0
4
Three.V.1.14 Checking that B = h2 sin(x) + cos(x), 3 cos(x)i is a basis is routine. Call
the natural basis D. To compute the change of basis matrix RepB,D (id) we must
find RepD (2 sin(x) + cos(x)) and RepD (3 cos(x)), that is, we need x1 , y1 , x2 , y2 such
that these equations hold.
x1 sin(x) + y1 cos(x) = 2 sin(x) + cos(x)
x2 sin(x) + y2 cos(x) = 3 cos(x)
Obviously this is the answer.
!
RepB,D (id) =
2
1
0
3
For the change of basis matrix in the other direction we could look for RepB (sin(x))
and RepB (cos(x)) by solving these.
w1 (2 sin(x) + cos(x)) + z1 (3 cos(x)) = sin(x)
w2 (2 sin(x) + cos(x)) + z2 (3 cos(x)) = cos(x)
An easier method is to find the inverse
!1 of the matrix
! found above. !
1
2 0
3 0
1/2
0
RepD,B (id) =
=
=
6
1 3
1 2
1/6 1/3
Answers to Exercises
219
Three.V.1.15 We start by taking the inverse of the matrix, that is, by deciding what
is the inverse to the map of interest.
!
1
cos(2)
sin(2)
RepD,E2 (id)RepD,E2 (id)1 =
sin(2) cos(2)
cos2 (2) sin2 (2)
!
cos(2)
sin(2)
=
sin(2) cos(2)
This is more tractable than the representation the other way because this matrix is
the concatenation of these two column vectors
!
!
cos(2)
sin(2)
RepE2 (~1 ) =
RepE2 (~2 ) =
sin(2)
cos(2)
and representations with respect to E2 are transparent.
!
!
cos(2)
sin(2)
~1 =
~2 =
sin(2)
cos(2)
This pictures the action of the map that transforms D to E2 (it is, again, the inverse
of the map that is the answer to this question). The line lies at an angle to the
x axis.
~1 =
cos(2)
sin(2)
~e2
7
sin(2)
~2 =
cos(2)
~e1
This map reflects vectors over that line. Since reflections are self-inverse, the answer
to the question is: the original map reflects about the line through the origin with
angle of elevation . (Of course, it does this to any basis.)
Three.V.1.16 The appropriately-sized identity matrix.
Three.V.1.17 Each is true if and only if the matrix is nonsingular.
Three.V.1.18 What remains is to show that left multiplication by a reduction matrix
~ 1, . . . ,
~ n i.
represents a change from another basis to B = h
Application of a row-multiplication matrix Mi (k) translates a representation
~ 1 , . . . , k
~ i, . . . ,
~ n i to one with respect to B, as here.
with respect to the basis h
~ 1 + +ci (k
~ i )+ +cn
~ n 7 c1
~ 1 + +(kci )
~ i + +cn
~ n = ~v
~v = c1
Apply a row-swap matrix Pi,j to translates a representation with respect to the
~ 1, . . . ,
~ j, . . . ,
~ i, . . . ,
~ n i to one with respect to h
~ 1, . . . ,
~ i, . . . ,
~ j, . . . ,
~ n i.
basis h
220
h1,i
..
~ i )) = RepE (
~ i)
. = RepEn (id(
n
hn,i
and, because representations with respect to the standard basis are transparent, we
have this.
h1,i
.. ~
. = i
hn,i
That is, the basis is the one composed of the columns of H.
Three.V.1.20 (a) We can change the starting vector representation to the ending
one through a sequence of row operations. The proof tells us what how the bases
change. We start by swapping the first and second rows of the representation
with respect to B to get a representation
with respect to a new basis B1 .
1
0
RepB1 (1 x + 3x2 x3 ) =
B1 = h1 x, 1 + x, x2 + x3 , x2 x3 i
1
2 B
1
We next add 2 times the third row of the vector representation to the fourth
row.
1
0
RepB3 (1 x + 3x2 x3 ) =
B2 = h1 x, 1 + x, 3x2 x3 , x2 x3 i
1
0 B
2
(The third element of B2 is the third element of B1 minus 2 times the fourth
element of B1 .) Now we can finish by doubling the third row.
1
0
RepD (1 x + 3x2 x3 ) =
D = h1 x, 1 + x, (3x2 x3 )/2, x2 x3 i
2
0 D
Answers to Exercises
221
(b) Here are three different approaches to stating such a result. The first is the
assertion: where V is a vector space with basis B and ~v V is nonzero, for any
nonzero column vector ~z (whose number of components equals the dimension of
V) there is a change of basis matrix M such that M RepB (~v) = ~z. The second
possible statement: for any (n-dimensional) vector space V and any nonzero
vector ~v V, where ~z1 , ~z2 Rn are nonzero, there are bases B, D V such
that RepB (~v) = ~z1 and RepD (~v) = ~z2 . The third is: for any nonzero ~v member
of any vector space (of dimension n) and any nonzero column vector (with n
components) there is a basis such that ~v is represented with respect to that basis
by that column vector.
The first and second statements follow easily from the third. The first follows
because the third statement gives a basis D such that RepD (~v) = ~z and then
RepB,D (id) is the desired M. The second follows from the third because it is
just a doubled application of it.
A way to prove the third is as in the answer to the first part of this question.
Here is a sketch. Represent ~v with respect to any basis B with a column vector
~z1 . This column vector must have a nonzero component because ~v is a nonzero
vector. Use that component in a sequence of row operations to convert ~z1 to ~z.
(We could fill out this sketch as an induction argument on the dimension of V.)
Three.V.1.21 This is the topic of the next subsection.
Three.V.1.22 A change of basis matrix is nonsingular and thus has rank equal to the
number of its columns. Therefore its set of columns is a linearly independent subset
of size n in Rn and it is thus a basis. The answer to the second half is also yes; all
implications in the prior sentence reverse (that is, all of the if . . . then . . . parts of
the prior sentence convert to if and only if parts).
Three.V.1.23 In response to the first half of the question, there are infinitely many
such matrices. One of them represents with respect to E2 the transformation of R2
with this action.
!
!
!
!
1
4
0
0
7
7
0
0
1
1/3
The problem of specifying two distinct input/output pairs is a bit trickier. The
fact that matrices have a linear action precludes some possibilities.
(a) Yes, there is such a matrix. These conditions
! !
!
!
!
!
a b
1
1
a b
2
1
=
=
c d
3
1
c d
1
1
222
= 1
c + 3d = 1
2a b
= 1
2c d = 1
to give this matrix.
2/7
2/7
3/7
3/7
1
3
2
6
!
but 2
1
1
!
6=
1
1
(a)
(b) 0 1
0 0 0
0 0
what
the rank of each is.
0 0
0 0
1 0
R2wrt B R2wrt D
T
idy
idy
t
R2wrt B R2wrt D
T = RepD,D
(id) T RepB,B
(id)
Answers to Exercises
(a) These two
!
1
=1
1
1
0
223
2
1
+1
1
1
!
= (3)
show that
1
1
RepD,D
(id) =
and similarly these two
!
!
0
1
=0
+1
1
0
0
1
1
0
1
1
3
1
2
1
+ (1)
1
0
=1
0
1
+1
RepB,B
(id) =
Then the answer is this.
!
1 3
1
T=
1 1
3
2
4
0
1
1
1
1
1
10
2
18
4
RepB (~v) =
1
3
2
4
3
2
B,D
7
17
=
B
!
D
!
+ 17
1
1
24
10
D
starts with
Doing the calculation with respect to B,
!
!
!
1
10 18
1
RepB (~v) =
=
3
2
4
3
B
B,D
1
2
1
+
3
2
1
1
1
24
10
!
= 1
show that
RepD,D
(id) =
1/3
1/3
1
1
44
10
1
2
!
+1
2
1
224
1
0
!
+2
0
1
1
0
!
= 1
1
2
RepB,B
(id) =
1
0
1
0
!
+0
0
1
T=
=
1/3 1
3 4
2 0
38/3
10/3
As in the prior item, a check provides some confidence that we did this calculation
without mistakes. We can for instance, fix the
! vector
1
2
~v =
3
5
!
D
1
1
1
1
+5
D
we first calculate
With respect to B,
!
1
28/3
RepB (~v) =
2
38/3
8
2
!
8/3
10/3
B,D
1
2
!
=
4
6
Vwrt B Wwrt D
H
idy
idy
h
Vwrt B Wwrt D
Answers to Exercises
These calcuations give Q.
0
!
0 0
0
RepD
)) =
(id(
0
0 1
1
225
0 0
0
RepD
)) =
(id(
1
1 1
1
0
1
!
!
1
1
0 1
1 1
RepD
)) = RepD
)) =
(id(
(id(
1
1
1 1
1 1
1
1
This is the answer.
0 0 0 1
1 1 1
0 0 1 1
Q=
P = 0 1
0
0 1 1
1
0 0
1
1 1 1
1
Three.V.2.15 Gausss Method gives this.
2 1 1
1 1/2 1/2
(1/2)1
(3/2)1 +2 2 +3
1
3/5
3 1 0
0
(1/2)1 +3
(2/5)2
1 3 2
0
0
0
Column operations complete the job of reaching the canonical form for matrix
equivalence.
1 0 0
(3/5)col2 +col3
(1/2)col1 +col2
0 1 0
(1/5)col1 +col3
0 0 0
Then
these are the
two
matrices.
1 0 0
1
0 0
1
0 0
1
0
0
1/2 0 0
P = 0 2/5 0 0
1 0 0 1 0 0
1 0 3/2 1 0
0
0
1
0
0 1
0 1 1
1/2 0 1
0
0 1
1/2
0
0
= 3/5 2/5 0
2
1
1
1 1/2 1/5
1 0 1/5
1 1/2 0
1 0
0
Q = 0 1 3/5 0
1
3/5
1
0 0 1
0 = 0
0
0
1
0 0
1
0
0
1
0 0
1
Three.V.2.16 Any nn matrix is nonsingular if and only if it has rank n, that is,
by Theorem 2.6, if and only if it is matrix equivalent to the nn matrix whose
diagonal is all ones.
!
226
no reason to suspect that we could pick the two B and D so that they are equal.
Three.V.2.19 Yes. Row rank equals column rank, so the rank of the transpose equals
the rank of the matrix. Same-sized matrices with equal ranks are matrix equivalent.
Three.V.2.20 Only a zero matrix has rank zero.
Three.V.2.21 For reflexivity, to show that any matrix is matrix equivalent to itself,
take P and Q to be identity matrices. For symmetry, if H1 = PH2 Q then H2 =
P1 H1 Q1 (inverses exist because P and Q are nonsingular). Finally, for transitivity,
assume that H1 = P2 H2 Q2 and that H2 = P3 H3 Q3 . Then substitution gives
H1 = P2 (P3 H3 Q3 )Q2 = (P2 P3 )H3 (Q3 Q2 ). A product of nonsingular matrices is
nonsingular (weve shown that the product of invertible matrices is invertible; in
fact, weve shown how to calculate the inverse) and so H1 is therefore matrix
equivalent to H3 .
Three.V.2.22 By Theorem 2.6, a zero matrix is alone in its class because it is the
only mn of rank zero. No other matrix is alone in its class; any nonzero scalar
product of a matrix has the same rank as that matrix.
Three.V.2.23 There are two matrix equivalence classes of 11 matrices those of
rank zero and those of rank one. The 33 matrices fall into four matrix equivalence
classes.
Three.V.2.24 For mn matrices there are classes for each possible rank: where k
is the minimum of m and n there are classes for the matrices of rank 0, 1, . . . , k.
Thats k + 1 classes. (Of course, totaling over all sizes of matrices we get infinitely
many classes.)
Three.V.2.25 They are closed under nonzero scalar multiplication since a nonzero
scalar multiple of a matrix has the same rank as does the matrix. They are not
closed under addition, for instance, H + (H) has rank zero.
Three.V.2.26
(a) We have
RepB,E2 (id) =
1
2
1
1
and
RepE2 ,B (id) = RepB,E2 (id)
1
2
1
1
!1
=
1
2
1
1
Answers to Exercises
227
RepB,B (t) =
1
1
1
3
1
1
1
2
1
1
!
=
2
5
0
2
4
5
1
3
1
1
4
5
!
=
1
3
B,B
9
7
!
=
B
= t(~v)
2
11
!
B
1
2
!
11
1
1
!
=
9
7
(b) We have
t
R2wrt E2 R2wrt E2
T
idy
idy
R2wrt B R2wrt B
so, writing Q for the matrix whose columns are the basis vectors, we have that
RepB,B (t) = Q1 T Q.
Three.V.2.27
Vwrt B1 Wwrt D
H
idyQ
idyP
h
Vwrt B2 Wwrt D
Since there is no need to change bases in W (or we can say that the change of
basis matrix P is the identity), we have RepB2 ,D (h) = RepB1 ,D (h) Q where
Q = RepB2 ,B1 (id).
228
Vwrt B Wwrt D1
H
Q
idyP
idy
h
Vwrt B Wwrt D2
We have that RepB,D2 (h) = P RepB,D1 (h) where P = RepD1 ,D2 (id).
Three.V.2.28 (a) Here is the arrow diagram, and a version of that diagram for inverse
functions.
h1
Vwrt B Wwrt D
H1
idy
idyP
Vwrt B Wwrt D
H
idy
idyP
h
h1
Vwrt B Wwrt D
Vwrt B Wwrt D
1
H
Yes, the inverses of the matrices represent the inverses of the maps. That is,
we can move from the lower right to the lower left by moving up, then left,
= PHQ (and P, Q invertible) and H, H
are
then down. In other words, where H
1
1
1
1
invertible then H
=Q H P .
(b) Yes; this is the prior part repeated in different terms.
(c) No, we need another assumption: if H represents h with respect to the same
starting as ending bases B, B, for some B then H2 represents h h. As a specific
example, these two matrices are both rank one and so they are matrix equivalent
!
!
1 0
0 0
0 0
1 0
but the squares are not matrix equivalent the square of the first has rank one
while the square of the second has rank zero.
(d) No. These two are not matrix equivalent but have matrix equivalent squares.
!
!
0 0
0 0
0 0
1 0
Three.V.2.29
Vwrt B1 Vwrt B1
T
scriptsizeidy
idy
t
Vwrt B2 Vwrt B2
Answers to Exercises
229
Projection
Three.VI.1: Orthogonal Projection Into a Line
2
1
Three.VI.1.6
(a)
3
2
2
1
3
0
3
2
3
2
3
2
4
=
13
3
2
!
=
!
3
0
2
=
3
3
0
2
0
!
!
=
3
3
0
0
1
1
1 2
1
1
1/6
4
1
1
(c) 2 =
2 = 1/3
6
1
1
1
1
1/6
2 2
1
1
(b)
12/13
8/13
230
3
2
1 1
3
3
3
3
4
19
1 =
1 = 1
Three.VI.1.7 (a)
19
3
3
3
3
3
1 1
3
3
(b) Writing the line as
!
1
{c
| c R}
3
3
1
!
!
1
1
3
3
1
1
2 1
1 1
1
3
Three.VI.1.8
1
1
1 1
1 1
1
1
Three.VI.1.9
1
2
3
1
(a)
3
1
3
1
1
3
=
10
1
3
!
=
2/5
6/5
1
1
3/4
1 3 1 3/4
= =
1 4 1 3/4
1
1
3/4
!
!
3
1
1
=
2
3
1
!
=
3/2
1/2
Answers to Exercises
0
4
3
1
231
!
3
1
2
=
5
3
1
!
!
3
3
1
1
In general the projection
! is this.
!
(b)
3
x1
1
x2
!
!
3
3
1
1
3
1
!
=
!
6/5
2/5
3x1 + x2
=
10
3
1
!
=
!
(9x1 + 3x2 )/10
(3x1 + x2 )/10
3/10
1/10
Three.VI.1.10 Suppose that ~v1 and ~v2 are nonzero and orthogonal. Consider the
linear relationship c1~v1 + c2~v2 = ~0. Take the dot product of both sides of the
equation with ~v1 to get that
~v1 (c1~v1 + c2~v2 ) = c1 (~v1 ~v1 ) + c2 (~v1 ~v2 )
= c1 (~v1 ~v1 ) + c2 0 = c1 (~v1 ~v1 )
is equal to ~v1 ~0 = ~0. With the assumption that ~v1 is nonzero, this gives that c1 is
zero. Showing that c2 is zero is similar.
Three.VI.1.11 (a) If the vector ~v is in the line then the orthogonal projection is
~v. To verify this by calculation, note that since ~v is in the line we have that
~v = c~v ~s for some scalar c~v .
~v ~s
~s ~s
c~v ~s ~s
~s =
~s = c~v
~s = c~v 1 ~s = ~v
~s ~s
~s ~s
~s ~s
(Remark. If we assume that ~v is nonzero then we can simplify the above by
taking ~s to be ~v.)
(b) Write c~p~s for the projection proj[~s ] (~v). Note that, by the assumption that ~v
is not in the line, both ~v and ~v c~p~s are nonzero. Note also that if c~p is zero
then we are actually considering the one-element set {~v }, and with ~v nonzero,
this set is necessarily linearly independent. Therefore, we are left considering the
case that c~p is nonzero.
Setting up a linear relationship
a1 (~v) + a2 (~v c~p~s) = ~0
leads to the equation (a1 + a2 ) ~v = a2 c~p ~s. Because ~v isnt in the line, the
scalars a1 + a2 and a2 c~p must both be zero. We handled the c~p = 0 case above,
so the remaining case is that a2 = 0, and this gives that a1 = 0 also. Hence the
set is linearly independent.
232
~v ~v
~v ~v
~v ~v
~v ~v
We can dispose of the remaining n = 0 and n = 1 cases. The dimension n = 0
case is the trivial vector space, here there is only one vector and so it cannot be
expressed as the projection of a different vector. In the dimension n = 1 case there
is only one (non-degenerate) line, and every vector is in it, hence every vector is
the projection only of itself.
Three.VI.1.14 The proof is simply a calculation.
~v ~s
|~v ~s |
|~v ~s |
~v ~s
~s k = |
| k~s k =
k~s k =
k
2
~s ~s
~s ~s
k~s k
k~s k
Three.VI.1.15 Because the projection of ~v into the line spanned by ~s is
~v ~s
~s
~s ~s
the distance squared from the point to the line is this (we write a vector dotted
with itself w
~ w
~ as w
~ 2 ).
~v ~s
~v ~s
~v ~s
~v ~s
k~v
~s k2 = ~v ~v ~v (
~s) (
~s ) ~v + (
~s )2
~s ~s
~s ~s
~s ~s
~s ~s
~v ~s
~v ~s
= ~v ~v 2 (
) ~v ~s + (
) ~s ~s
~s ~s
~s ~s
(~v ~v ) (~s ~s ) 2 (~v ~s )2 + (~v ~s )2
=
~s ~s
Answers to Exercises
233
!
!
!
!
y
1
x+y
1
1
(x + y)/2
x
!
!
=
=
7
2
1
1
(x + y)/2
y
1
1
1
1
which is the effect of this matrix.
!
1/2 1/2
1/2 1/2
(b) Rotating the entire plane /4 radians clockwise brings the y = x line to lie on
the x-axis. Now projecting and then rotating back has the desired effect.
Three.VI.1.20 The sequence need not settle
down. With
!
!
1
1
~b =
a
~=
0
1
the projections are these. !
!
!
1/2
1/2
1/4
~v1 =
, ~v2 =
, ~v3 =
, ...
1/2
0
1/4
This sequence doesnt repeat.
234
(a)
1
1
~1 =
2
1
~2 =
2
1
!
2
proj[~1 ] (
)=
1
!
3
2
1
1
!
=
2
1
1/2
1/2
2
1
1
1
1
1
1
1
!
!
1
1
1/ 2
2/2
,
h
i
1/ 2
2/2
(b)
0
1
~1 =
1
3
~2 =
1
3
!
1
proj[~1 ] (
)=
3
!
3
1
0
1
!
=
1
0
1
3
!
!
0
1
1
3
!
!
0
0
1
1
0
1
Answers to Exercises
235
(c)
0
1
~1 =
1
0
~2 =
1
0
!
1
proj[~1 ] (
)=
0
!
0
1
0
1
!
=
1
0
1
0
!
!
1
0
0
1
!
!
0
0
1
1
0
1
0 2
1
1
1
2
1
~2 = 0 proj[~1 ] ( 0 ) = 0
2
2
1
1
1
2 2
2
2
1
2
1
0
= 0
2 = 0
12
1
2
1
2
2
2
236
2
1
5/6
0
8 1
= 3
2
0 = 5/3
12
2
1
2
1
5/6
This is the orthonormal basis.
1/ 2
1/ 6
1/ 3
h1/ 3 , 0 , 2/ 6 i
1/ 3
1/ 2
1/ 6
(b) The first basis vector is what was given.
1
~1 = 1
0
The second is here.
0
1
1 1
0
0
0
0
0
~2 = 1 proj[~1 ] (1) = 1
1
1
0
0
0
1 1
0
0
0
1
1/2
1
= 1
1 = 1/2
2
0
0
0
1
1
0
Answers to Exercises
237
2
1
2
1/2
3 1
3 1/2
2
1
1/2
1
0
1
0
= 3 1
1/2
1
1
1/2
1/2
1
0
0
1 1
1/2 1/2
0
0
0
0
2
1
1/2
0
1 5/2
1
1/2 = 0
= 3
2
1/2
1
0
0
1
Here is the associated orthonormal basis.
1/ 2
1/ 2
0
h1/ 2 , 1/ 2 0i
0
0
1
238
1
1/2
1
1
1 = 1/2
= 0
2
0
1
1
and then normalize.
1/ 2
1/ 6
h1/ 2 , 1/ 6 i
0
2/ 6
1 +2
xy z+w=0
y + 2z w = 0
Answers to Exercises
239
0
1
1/3
1 2 2 1/3
=
=
0
6 1 1/3
1
0
1
and finish by normalizing.
1
2
1
0
3/6
1/ 6
2/ 6 3/6
h ,
i
1/ 6 3/6
0
3/2
Three.VI.2.14 A linearly independent subset of Rn is a basis for its own span. Apply
Theorem 2.7.
Remark. Heres why the phrase linearly independent is in the question.
Dropping the phrase would require us to worry about two things. The first thing to
worry about is that when we do the Gram-Schmidt process on a linearly dependent
set then we get some zero vectors. For instance, with
!
!
1
3
S={
,
}
2
6
we would get this.
~1 =
1
2
!
~2 =
3
6
!
3
proj[~1 ] (
)=
6
0
0
This first thing is not so bad because the zero vector is by definition orthogonal
to every other vector, so we could accept this situation as yielding an orthogonal
set (although it of course cant be normalized), or we just could modify the GramSchmidt procedure to throw out any zero vectors. The second thing to worry about
if we drop the phrase linearly independent from the question is that the set might
be infinite. Of course, any subspace of the finite-dimensional Rn must also be
finite-dimensional so only finitely many of its members are linearly independent,
but nonetheless, a process that examines the vectors in an infinite set one at a
240
time would at least require some more elaboration in this question. A linearly
independent subset of Rn is automatically finite in fact, of size n or less so the
linearly independent phrase obviates these concerns.
Three.VI.2.15 If that set is not linearly independent, then we get a zero vector.
Otherwise (if our set is linearly independent but does not span the space), we
are doing Gram-Schmidt on a set that is a basis for a subspace and so we get an
orthogonal basis for a subspace.
Three.VI.2.16 The process leaves the basis unchanged.
Three.VI.2.17 (a) The argument is as in the i = 3 case of the proof of Theorem 2.7.
The dot product
~i ~v proj[~1 ] (~v ) proj[~vk ] (~v )
can be written as the sum of terms of the form ~i proj[~j ] (~v ) with j 6= i, and
the term ~i (~v proj[~i ] (~v )). The first kind of term equals zero because the
~s are mutually orthogonal. The other term is zero because this projection is
orthogonal (that is, the projection definition makes it zero: ~i (~v proj[~i ] (~v )) =
~i ~v ~i ((~v ~i )/(~i ~i )) ~i equals, after all of the cancellation is done, zero).
(b) The vector ~v is in black and the vector proj[~1 ] (~v ) + proj[~v2 ] (~v ) = 1 ~e1 + 2 ~e2
is in gray.
The vector ~v (proj[~1 ] (~v ) + proj[~v2 ] (~v )) lies on the dotted line connecting the
black vector to the gray one, that is, it is orthogonal to the xy-plane.
(c) We get this diagram by following the hint.
The dashed triangle has a right angle where the gray vector 1 ~e1 + 2 ~e2 meets
the vertical dashed line ~v (1 ~e1 + 2 ~e2 ); this is what first item of this question
proved. The Pythagorean theorem then gives that the hypotenuse the segment
from ~v to any other vector is longer than the vertical dashed line.
More formally, writing proj[~1 ] (~v ) + + proj[~vk ] (~v ) as c1 ~1 + + ck ~k ,
Answers to Exercises
241
+ (c1 ~1 + + ck ~k ) (d1 ~1 + + dk ~k )
and that ~v(c1 ~1 + +ck ~k ) (c1 ~1 + +ck ~k )(d1 ~1 + +dk ~k ) = 0
(because the first item shows the ~v (c1 ~1 + + ck ~k ) is orthogonal to each
~ and so it is orthogonal to this linear combination of the ~s). Now apply the
Pythagorean Theorem (i.e., the Triangle Inequality).
Three.VI.2.18 One way to proceed is to find a third vector so that the three together
make a basis for R3 , e.g.,
1
~
3 = 0
0
(the second vector is not dependent on the third because it has a nonzero second
component, and the first is not dependent on the second and third because of its
nonzero third component), and then apply the Gram-Schmidt process. The first
element of the new basis is this.
1
~1 = 5
1
And this is the second element.
2
1
2 5
2
2
2
1
0
~2 = 2 proj[~1 ] (2) = 2
1
1
0
0
0
5 5
1
1
2
1
14/9
12
= 2
5 = 2/9
27
0
1
4/9
1
5
1
242
1
1
1
14/9
0 2/9
0 5
1
14/9
1
0
0
1
4/9
= 0 5
2/9
1
1
14/9
14/9
1
4/9
0
5 5
2/9 2/9
1
1
4/9
4/9
1
1
14/9
1/18
1
7
= 0
5
2/9 = 1/18
27
12
0
1
4/9
4/18
The result ~3 is orthogonal to both ~1 and ~2 . It is therefore orthogonal to every
vector in the span of the set {~1 , ~2 }, including the two vectors given in the question.
Three.VI.2.19
3
1
!
2
proj[
)=
~ 1](
3
!
2
proj[
)=
~ 2](
3
2
3
1
1
2
3
1
0
1
1
1
1
1
0
1
0
!
!
1
1
5
=
2
1
1
1
0
2
=
1
1
0
!
!
1
1
!
B
Answers to Exercises
and the two projections are easy.
!
!
2
1
!
!
!
3
1
5
2
1
1
!
!
proj[
)=
=
~ 1](
2
3
1
1
1
1
1
1
!
!
1
2
!
!
1
3
1
1
2
!
!
=
proj[
)=
~ 2](
2
1
3
1
1
1
1
243
1
1
we have k~
wk = w
~ w
~ ).
~v ~2
~v ~2
~v ~1
~v ~1
0 6 ~v
~1 +
~2
~v
~1 +
~2
~1 ~1
~2 ~2
~1 ~1
~2 ~2
~v ~1
~v ~2
= ~v ~v 2 ~v
~1 +
~2
~1 ~1
~2 ~2
~v ~2
~v ~1
~v ~2
~v ~1
~1 +
~2
~1 +
~2
+
~1 ~1
~2 ~2
~1 ~1
~2 ~2
~v ~1
~v ~2
= ~v ~v 2
(~v ~1 ) +
(~v ~2 )
~1 ~1
~2 ~2
~v ~1 2
~v ~2 2
+ (
) (~1 ~1 ) + (
) (~2 ~2 )
~1 ~1
~2 ~2
(The two mixed terms in the third part of the third line are zero because ~1 and ~2
are orthogonal.) The result now follows on gathering like terms and on recognizing
244
(b e h) d = 0 (b e h) e = 1 (b e h) f = 0
g
h
i
a
b
c
(c f i) d = 0
(c f i) e = 0
(c f i) f = 1
g
h
i
(the three conditions in the lower left are redundant but nonetheless correct). Those,
in turn, hold if and only if
a d g
a b c
1 0 0
b e h d e f = 0 1 0
c f i
g h i
0 0 1
as required.
This is an example, the inverse of this matrix is its transpose.
1/ 2 1/ 2 0
1/ 2 1/ 2 0
0
0
1
Three.VI.2.23 If the set is empty then the summation on the left side is the linear
combination of the empty set of vectors, which by definition adds to the zero vector.
In the second sentence, there is not such i, so the if . . . then . . . implication is
vacuously true.
Three.VI.2.24 (a) Part of the induction argument proving Theorem 2.7 checks that
~ 1, . . . ,
~ i i. (The i = 3 case in the proof illustrates.) Thus, in
~i is in the span of h
the change of basis matrix RepK,B (id), the i-th column RepB (~i ) has components
i + 1 through k that are zero.
Answers to Exercises
245
(b) One way to see this is to recall the computational procedure that we use to
find the inverse. We write the matrix, write the identity matrix next to it, and
then we do Gauss-Jordan reduction. If the matrix starts out upper triangular
then the Gauss-Jordan reduction involves only the Jordan half and these steps,
when performed on the identity, will result in an upper triangular inverse matrix.
Three.VI.2.25 For the inductive step, we assume that for all j in [1..i], these three
conditions are true of each ~j : (i) each ~j is nonzero, (ii) each ~j is a linear
~ 1, . . . ,
~ j , and (iii) each ~j is orthogonal to all of the
combination of the vectors
~m s prior to it (that is, with m < j). With those inductive hypotheses, consider
~i+1 .
~ i+1 proj[~ ] (i+1 ) proj[~ ] (i+1 ) proj[~ ] (i+1 )
~i+1 =
1
2
i
~ i+1
=
i+1 ~2
i+1 ~i
i+1 ~1
~1
~2
~i
~1 ~1
~2 ~2
~i ~i
By the inductive assumption (ii) we can expand each ~j into a linear combination
~ 1, . . . ,
~j
of
~ i+1 ~1
~1
~1 ~1
~ i+1 ~2
~ 1,
~2
linear combination of
~2 ~2
~ i+1 ~i
~ 1, . . . ,
~i
linear combination of
~i ~i
~ i+1
=
246
are concatenated
B = BM
BN
!
!
1
2
,
i
=h
1
1
1
1
!
+1
2
1
then the answer comes from retaining the M part and dropping the N part.
!
!
3
1
projM,N (
)=
2
1
(b) When the bases
BM
!
1
=h
i
1
!
1
BN h
i
2
1
2
BM
BN
1
= h0i
1
Answers to Exercises
247
Three.VI.3.11 As in Example 3.5, we can simplify the calculation by just finding the
space of vectors perpendicular to all the the vectors in Ms basis.
(a) Parametrizing to get
!
1
M = {c
| c R}
1
gives that
u
M {
v
!
|0=
u
v
!
!
1
u
}={
| 0 = u + v }
1
v
M = {k
| k R}
1
(b) As in the answer to the prior part, we can describe M as a span
!
!
3/2
3/2
M = {c
| c R}
BM = h
i
1
1
and then M is the set of vectors perpendicular to the one vector in this basis.
!
!
u
2/3
M ={
| (3/2) u + 1 v = 0 } = { k
| k R}
v
1
(c) Parametrizing the linear requirement in the description of M gives this basis.
!
!
1
1
M = {c
| c R}
BM = h
i
1
1
Now, M is the set of vectors perpendicular to (the one vector in) BM .
!
!
u
1
M ={
| u + v = 0 } = {k
| k R}
v
1
(By the way, this answer checks with the first item in this question.)
(d) Every vector in the space is perpendicular to the zero vector so M = Rn .
(e) The appropriate description and basis for M are routine.
!
!
0
0
M = {y
| y R}
BM = h
i
1
1
Then
u
M ={
v
!
| 0 u + 1 v = 0 } = {k
1
0
!
| k R}
248
(1/3)1 +2
3u +
v
=0
(1/3)v + w = 0
and parametrizing.
1
M = {k 3 | k R}
1
(g) Here, M is one-dimensional
0
M = {c 1 | c R }
1
BM
0
= h1i
1
P = { y |
}
y =
0 1 2
0
z
z
gives this basis for P .
3
BP = h 2 i
1
1
1
0
3
(c) 1 = (5/14) 0 + (8/14) 1 + (3/14) 2
2
3
2
1
1
5/14
(d) projP (1) = 8/14
2
31/14
Answers to Exercises
(e) The
0
3
249
! 1 0
0
1 0 3
1
1
0 1
0 1 2
2
3 2
1
0
0
1
!
3
2
0
10
1
6
2
= 0
3
!1
6
5
1
0
0
1
2
2
!
3
2
5 6 3
1
=
6 10 2
14
3
2 13
when applied to the vector, yields the expected result.
5 6 3
1
5/14
1
6 10 2 1 = 8/14
14
3
2 13
2
31/14
Three.VI.3.13
1
1
!
| c R}
For the first way, we take the vector spanning the line M to be
!
1
~s =
1
and the Definition 1.1 formula gives this.
!
!
1
1
!
1
3
1
!
!
proj[~s ] (
)=
3
1
1
1
1
1
1
4
=
1
1
!
=
!
1
=h
i
1
and so (as in Example 3.5 and 3.6, we can just find the vectors perpendicular to
all of the members of the basis)
!
!
!
u
1
1
M ={
| 1 u + 1 v = 0 } = {k
| k R}
BM = h
i
v
1
1
and representing the vector with respect to the concatenation gives this.
!
!
!
1
1
1
= 2
1
3
1
1
250
2
2
2
2
=
0
0
0
2
1
1
1
1
1
0 0
1
1
To proceed by the second method we find M ,
0
u
1
M = { v | u + w = 0 } = {j 0 + k 1 | j, k R }
w
1
0
find the representation of the given vector with respect to the concatenation of
the bases BM and BM
0
1
1
0
1 = 1 0 + 1 0 + 1 1
2
1
1
0
Answers to Exercises
251
1/2 0 1/2
= 0
0
0
1/2 0 1/2
1
252
Answers to Exercises
Three.VI.3.24
253
is this.
RepE3 ,E1 (f) = 1
3
By the definition
off
v1
v1
v1
1
N (f) = { v2 | 1v1 + 2v2 + 3v3 = 0 } = { v2 | 2 v2 = 0 }
v3
3
v3
v3
and this second description exactly says this.
1
N (f) = [{ 2 }]
3
n
(b) The generalization is that for any f : R R there is a vector ~h so that
v1
.. f
. 7 h1 v1 + + hn vn
vn
and ~h N (f) . We can prove this by, as in the prior item, representing f with
respect to the standard bases and taking ~h to be the column vector gotten by
transposing the one row of that matrix representation.
(c) Of course,
!
1 2 3
RepE3 ,E2 (f) =
4 5 6
and so the null space is this
set.
! v
!
v1
1
1 2 3
0
N (f){ v2 |
}
v2 =
4 5 6
0
v3
v3
That description makes clearthat
1
4
2 , 5 N (f)
3
6
n
and since N (f) is a subspace of R , the span of the two vectors is a subspace
of the perp of the null space. To
is an equality, take
see
that this containment
1
4
M = [{ 2 }]
N = [{ 5 }]
3
6
in the third item of Exercise 23, as suggested in the hint.
254
v1
h1,1 v1 + h1,2 v2 + + h1,n vn
.. f
..
. 7
.
vn
and the description of the null space gives that on transposing the m rows of H
h1,1
hm,1
h1,2
hm,2
~h1 = . , . . . ~hm = .
.
.
.
.
h1,n
hm,n
we have N (f) = [{ ~h1 , . . . , ~hm }]. ([Strang 93] describes this space as the transpose of the row space of H.)
Three.VI.3.25 (a) First note that if a vector ~v is already in the line then the orthogonal projection gives ~v itself. One way to verify this is to apply the formula for
projection into the line spanned by a vector ~s, namely (~v ~s/~s ~s) ~s. Taking the
line as { k ~v | k R } (the ~v = ~0 case is separate but easy) gives (~v ~v/~v ~v) ~v,
which simplifies to ~v, as required.
Now, that answers the question because after once projecting into the line,
the result proj` (~v) is in that line. The prior paragraph says that projecting into
the same line again will have no effect.
(b) The argument here is similar to the one in the prior item. With V = M N,
the projection of ~v = m
~ +n
~ is projM,N (~v ) = m.
~ Now repeating the projection
will give projM,N (m)
~ = m,
~ as required, because the decomposition of a member
of M into the sum of a member of M and a member of N is m
~ =m
~ + ~0. Thus,
projecting twice into M along N has the same effect as projecting once.
(c) As suggested by the prior items, the condition gives that t leaves vectors in
~ 1, . . . ,
~ r to be basis
the range space unchanged, and hints that we should take
vectors for the range, that is, that we should take the range space of t for M (so
that dim(M) = r). As for the complement, we write N for the null space of t
and we will show that V = M N.
To show this, we can show that their intersection is trivial M N = {~0 } and
that they sum to the entire space M + N = V. For the first, if a vector m
~ is in the
range space then there is a ~v V with t(~v) = m,
~ and the condition on t gives that
t(m)
~ = (t t) (~v) = t(~v) = m,
~ while if that same vector is also in the null space
~
then t(m)
~ = 0 and so the intersection of the range space and null space is trivial.
For the second, to write an arbitrary ~v as the sum of a vector from the range
space and a vector from the null space, the fact that the condition t(~v) = t(t(~v))
can be rewritten as t(~v t(~v)) = ~0 suggests taking ~v = t(~v) + (~v t(~v)).
~ 1, . . . ,
~ n i for V where h
~ 1, . . . ,
~ r i is a
To finish we taking a basis B = h
~ r+1 , . . . ,
~ n i is a basis for the null space N.
basis for the range space M and h
(d) Every projection (as defined in this exercise) is a projection into its range
space and along its null space.
(e) This also follows immediately from the third item.
T
Three.VI.3.26 For any matrix M we have that (M1 ) = (MT )1 , and for any two
matrices M, N we have that MNT = NT MT (provided, of course, that the inverse
and product are defined). Applying these two gives that the matrix equals its
transpose.
T
T
T
A(AT A)1 AT = (AT )( (AT A)1 )(AT )
T
T 1
T
= (AT )( (AT A)
)(AT ) = A(AT AT )1 AT = A(AT A)1 AT
16 1832 16
20
40
24
24 =
3520
8
8
32
32
16 16
40
40
24 24
32 32
40
40
so the slope of the line of best fit is approximately 0.52.
256
10
20
30
40
1
1
.
A=
..
1
1
1852.71
1858.88
..
1985.54
1993.71
292.0
285.0
b = ..
226.32
224.39
(the dates have been rounded to months, e.g., for a September record, the decimal
.71 (8.5/12) was used), Maple responded with an intercept of b = 994.8276974
and a slope of m = 0.3871993827.
280
260
240
220
1850
1900
1950
2000
1
.38
249.0
1
246.2
.54
..
A := ....
b = ..
1 92.71
208.86
1 95.54
207.37
(the dates have been rounded to months, e.g., for a September record, the decimal
.71 (8.5/12) was used), Maple gives an intercept of b = 243.1590327 and a slope
of m = 0.401647703. The slope given in the body of this Topic for the mens mile
is quite close to this.
Answers to Exercises
257
250
240
230
220
210
200
1
1
.
A=
..
1
1
zeroed at 1900)
373.2
21.46
327.5
32.63
..
b = ..
.
255.61
89.54
252.56
96.63
(the dates have been rounded to months, e.g., for a September record, the decimal
.71 (8.5/12) was used), MAPLE gave an intercept of b = 378.7114894 and a slope
of m = 1.445753225.
380
360
340
320
300
280
260
240
220
1900 1920 1940 1960 1980 2000
5 These are the equations of the lines for mens and womens mile (the vertical
intercept term of the equation for the womens mile has been adjusted from the
answer above, to zero it at the year 0, because thats how the mens mile equation
was done).
y = 994.8276974 0.3871993827x
y = 3125.6426 1.445753225x
Obviously the lines cross. A computer program is the easiest way to do the
arithmetic: MuPAD gives x = 2012.949004 and y = 215.4150856 (215 seconds is
3 minutes and 35 seconds). Remark. Of course all of this projection is highly
dubious for one thing, the equation for the women is influenced by the quite slow
early times but it is nonetheless fun.
258
1900
1950
2000
6 Sage gives the line of best fit as toll = 0.05 dist + 5.63.
sage: dist = [2, 7, 8, 16, 27, 47, 67, 82, 102, 120]
sage: toll = [6, 6, 6, 6.5, 2.5, 1, 1, 1, 1, 1]
sage: var('a,b,t')
(a, b, t)
sage: model(t) = a*t+b
sage: data = zip(dist,toll)
sage: fit = find_fit(data, model, solution_dict=True)
sage: model.subs(fit)
t |--> -0.0508568169130319*t + 5.630955848442933
sage: p = plot(model.subs(fit), (t,0,120))+points(data,size=25,color='red')
sage: p.save('bridges.pdf')
But the graph shows that the equation has little predictive value.
6
5
4
3
2
1
20
40
60
80
100
120
Apparently a better model is that (with only one intermediate exception) crossings
in the city cost roughly the same as each other, and crossings upstate cost the same
as each other.
7
(a) A computer algebra system like MAPLE or MuPAD will give an intercept
of b = 4259/1398 3.239628 and a slope of m = 71/2796 0.025393419
Plugging x = 31 into the equation yields a predicted number of O-ring failures
of y = 2.45 (rounded to two places). Plugging in y = 4 and solving gives a
temperature of x = 29.94 F.
Answers to Exercises
259
1 53
1 75
.
A=
.
1 80
1 81
3
2
.
b=
..
0
0
MAPLE gives the intercept b = 187/40 = 4.675 and the slope m = 73/1200
0.060833. Here, plugging x = 31 into the equation predicts y = 2.79 O-ring
failures (rounded to two places). Plugging in y = 4 failures gives a temperature
of x = 11 F.
3
2
1
0
8
40
50
60
70
80
1
0.5
0
0.5
0
1
1
A = 1
1
1
1
2
7
8
0.40893539
0.1426675
b = 0.18184359
0.71600334
0.97954837
1.2833012
(a) To represent H, recall that rotation counterclockwise by radians is represented with respect to the standard basis in this way.
!
cos sin
RepE2 ,E2 (h) =
sin cos
A clockwise angle is the negative of a counterclockwise one.
!
!
cos(/4) sin(/4)
2/2
2/2
Answers to Exercises
261
2/2
0
2/2
(2/ 2)1
(1/ 2)2
1
0
1
1
1
0
2 +1
0
1
1
1
2/ 2
0
1/ 2
1
1
!
0
H=I
1
H=
|
1 0
1 1
!
!
2/2 0
1
0
2
0
{z
!
1
I
1
}
gives the desired factorization of H (here, the partial identity is I, and Q is trivial,
that is, it is also an identity matrix).
(d) Reading the composition from right to left (and ignoring the identity matrices
as trivial) gives that H has the same effect as first performing this skew
x
y
x + y
y
~u
~v
h(~u)
h(~v)
~u
x
y
x
x + y
~v
h(~u)
h(~v)
For instance, the effect of H on the unit vector whose angle with the x-axis is
/6 is this.
262
( 3 + 1)/2
1/2
x
x + y
7
y
y
2( 3 + 1)/2
2/2
x
( 2/2)x
7
y
2y
x
y
x
x + y
2( 3 +
1)/4
2(1 3)/4
Verifying that the resulting vector has unit length and forms an angle of /12
with the x-axis is routine.
2 We will first represent the map with a matrix H, perform the row operations and,
if needed, column operations to reduce it to a partial-identity matrix. We will then
translate that into a factorization H = PBQ. Substituting into the general matrix
!
cos sin
RepE2 ,E2 (r )
sin cos
gives this representation.
!
3/2
1/2
1/2
RepE2 ,E2 (r2/3 )
3/2
Gausss Method is routine.
!
1/2 3/2
31 +2
0
2
21
(1/2)2
0
1
0 1/2
3
Taking inverses to solve for H yields this
!
1/2 3/2
1
=
3/2 1/2
3
1
0
!
3
1
32 +1
way.
!
!
0
1/2 3/2
=I
1
3/2 1/2
factorization.
!
!
0
1/2 0
1
1
0
2
0
!
3
I
1
1
0
0
1
Answers to Exercises
263
1 2 1
31 +2
0 0 3
1 +3
0 0 1
1 2 1
(1/3)2 +3
0 0 3
0 0 0
1 2 1
(1/3)2
0 0 1
0 0 0
2 +1
0
0
2
0
0
1
0
gives the reduced echelon form of the matrix. Now the two column operations of
taking 2 times the first column and adding it to the second, and then of swapping
columns two and three produce this partial identity.
1 0 0
B = 0 1 0
0 0 0
All of that translates into matrix terms as: where
1
0
0
1
0
0
1 0
1 1 0
P = 0 1 0 0 1/3 0 0
1
0 0 1
0 0 1
0
0
1
0 1/3 1
1 0
0
1 0
0 3 1
1
0 0
0
1
and
Q = 0
0
2
1
0
0
0
0 1
1
0
1
0
0
0
1
x1
xp(1)
x2
xp(2)
. 7 .
.
.
.
.
xn
xp(n)
xp(1)
x1
x
x
p(2)
2
.
.
.
..
p .
7
xp(n)
xi
..
..
.
.
xn
xn
will, when followed by the swap of the i-th and n-th components, give the map p.
is achievable as a composition of swaps.
Now, the inductive hypothesis gives that p
6 (a) A line is a subset of Rn of the form {~v = ~u + t w
~ | t R }. The image of a
point on that line is h(~v) = h(~u + t w
~ ) = h(~u) + t h(~
w), and the set of such
vectors, as t ranges over the reals, is a line (albeit, degenerate if h(~
w) = ~0).
(b) This is an obvious extension of the prior argument.
(c) If the point B is between the points A and C then the line from A to C has B
in it. That is, there is a t (0 .. 1) such that ~b = a
~ + t (~c a
~ ) (where B is the
~
endpoint of b, etc.). Now, as in the argument of the first item, linearity shows
that h(~b) = h(~
a) + t h(~c a
~ ).
7 The two are inverse. For instance, for a fixed x R, if f0 (x) = k (with k 6= 0) then
(f1 )0 (x) = 1/k.
f(x)
f1 (f(x))
(a) The sum of the entries of M is the sum of the sums of the three rows.
(b) The constraints on entries of M involving the center entry make this system.
m2,1 + m2,2 + m2,3 = s
m1,2 + m2,2 + m3,2 = s
m1,1 + m2,2 + m3,3 = s
m1,3 + m2,2 + m3,1 = s
Adding those four equations counts each matrix entry once and only once, except
that we count the center entry four times. Thus the left side sums to 3s + 3m2,2
while the right sums to 4s. So 3m2,2 = s.
(c) The second row adds to s so m2,1 + m2,2 + m2,3 = 3m2,2 , giving that
(1/2) (m2,1 + m2,3 ) = m2,2 . The same goes for the column and the diagonals.
(d) By the prior exercise either both m2,1 and m2,3 are equal to m2,2 or else one is
greater while one is smaller. Thus m2,2 is the median of the set {m2,1 , m2,2 , m2,3 }.
The same reasoning applied to the second column shows that Thus m2,2 is the
median of the set { m1,2 , m2,1 , m2,2 , m2,3 , m3,2 }. Extending to the two diagonals
shows it is the median of the set of all entries.
2 For any k
1 1
0 0
1 0
0 1
1 0
0 1
we have this.
0 0 s
1 1 s
1 0 s
1 +3
0 1 s 1 +5
0 1 s
1
2 6
1
0
0
0
1
0
0
0
1
0
1
1
1
1
1
1
1
1
1
0
0 0
1 0
1 0
0 1
0 1
1 1
0 0
1 1
1 0
0 1
0 1
1 0
s
s
0
s
s
s
0
s
2 +3
2 +4
2 +5
1 1
0 1
0 0
0 1
0 0
0 0
0
1
2
1
1
1
0
0
0
1
1
1
s
s
s
s
266
function w = coin(p,v)
q = 1-p;
A=[1,p,0,0,0,0;
0,0,p,0,0,0;
0,q,0,p,0,0;
0,0,q,0,p,0;
0,0,0,q,0,0;
0,0,0,0,q,1];
w = A * v;
endfunction
Answers to Exercises
267
v24 =
0.39600
0.00276
0.00000
0.00447
0.00000
0.59676
p5 (n + 1) = 0.5 p4 (n)
0
p0 (0)
p1 (0) 0
p (0) 0
2
=
p3 (0) 1
p4 (0) 0
p5 (0)
we will prove by induction that when n is odd then p1 (n) = p3 (n) = 0 and when
n is even then p2 (n) = p4 (n) = 0. Note first that this is true in the n = 0 base
case by the initial conditions. For the inductive step, suppose that it is true in
the n = 0, n = 1, . . . , n = k cases and consider the n = k + 1 case. If k + 1 is
odd then the two
p1 (k + 1) = 0.5 p2 (k) = 0.5 0 = 0
p3 (k + 1) = 0.5 p2 (k) + 0.5 p4 (k) = 0.5 0 + 0.5 0 = 0
follow from the inductive hypothesis that p2 (k) = p4 (k) = 0 since k is even. The
case where k + 1 is even is similar.
(c) We can use, say, n = 100. This Octave session
octave:1> B=[1,.5,0,0,0,0;
>
0,0,.5,0,0,0;
>
0,.5,0,.5,0,0;
>
0,0,.5,0,.5,0;
>
0,0,0,.5,0,0;
>
0,0,0,0,.5,1];
octave:2> B100=B**100
B100 =
1.00000
0.80000
0.60000
0.40000
0.20000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
0.00000
268
0.20000
0.40000
0.60000
0.80000
1.00000
octave:3> B100*[0;1;0;0;0;0]
octave:4> B100*[0;1;0;0;0;0]
octave:5> B100*[0;0;0;1;0;0]
octave:6> B100*[0;1;0;0;0;0]
1/6
0
0
0
0
0
1/6 2/6
0
0
0
0
F=[1/6,
0,
0,
0,
0,
>
1/6,
2/6, 0,
0,
0,
>
1/6,
1/6, 3/6, 0,
0,
0;
>
1/6,
0;
>
1/6,
>
1/6,
octave:2> v0=[1;0;0;0;0;0]
octave:3> v1=F*v0
octave:4> v2=F*v1
octave:5> v3=F*v2
octave:6> v4=F*v3
octave:7> v5=F*v4
0;
0;
Answers to Exercises
269
3
0.0046296
0.0324074
0.0879630
0.1712963
0.2824074
0.4212963
4
0.00077160
0.01157407
0.05015432
0.13503086
0.28472222
0.51774691
5
0.00012860
0.00398663
0.02713477
0.10043724
0.27019033
0.59812243
(a) It does seem reasonable that, while the firms present location should strongly
influence where it is next time (for instance, whether it stays), any locations in
the prior stages should have little influence. That is, while a company may move
or stay because of where it is, it is unlikely to move or stay because of where it
was.
(b) This is the Octave session, slightly edited, with the outputs put together in a
table at the end.
octave:1>
>
>
>
>
M =
0.78700
0.00000
0.00000
0.00000
0.02100
octave:2>
octave:3>
octave:4>
octave:5>
octave:6>
M=[.787,0,0,.111,.102;
0,.966,.034,0,0;
0,.063,.937,0,0;
0,0,.074,.612,.314;
.021,.009,.005,.010,.954]
0.00000 0.00000 0.11100 0.10200
0.96600 0.03400 0.00000 0.00000
0.06300 0.93700 0.00000 0.00000
0.00000 0.07400 0.61200 0.31400
0.00900 0.00500 0.01000 0.95400
v0=[.025;.025;.025;.025;.900]
v1=M*v0
v2=M*v1
v3=M*v2
v4=M*v3
0.025000
0.114250
0.025000 0.025000
0.025000 0.025000
0.025000 0.299750
0.900000
0.859725
(c) This is a continuation of the
~p2
~p3
~p4
0.210879
0.300739
0.377920
0.025000 0.025000 0.025000
octave:7> p0=[.0000;.6522;.3478;.0000;.0000]
octave:8> p1=M*p0
octave:9> p2=M*p1
octave:10> p3=M*p2
octave:11> p4=M*p3
270
0.00000
0.65220
0.34780
0.00000
0.00000
(d) This is more
octave:12>
M50 =
0.03992
0.00000
0.00000
0.03384
0.04003
octave:13>
p50 =
0.29024
0.54615
0.54430
0.32766
0.28695
octave:14>
p51 =
0.29406
0.54609
0.54442
0.33091
0.29076
~p1
~p2
0.0036329
0.00000
0.64185 0.6325047
0.36698 0.3842942
0.02574 0.0452966
0.0151277
0.00761
of the same Octave session.
~p3
0.0094301
0.6240656
0.3999315
0.0609094
0.0225751
~p4
0.016485
0.616445
0.414052
0.073960
0.029960
M50=M**50
0.33666 0.20318
0.65162 0.34838
0.64553 0.35447
0.38235 0.22511
0.33316 0.20029
p50=M50*p0
0.02198
0.00000
0.00000
0.01864
0.02204
0.37332
0.00000
0.00000
0.31652
0.37437
p51=M*p50
1 2p
p
p
1 2p
0
p
0
p
0
0
p
0
1 2p
0
p
0 0
sU (n)
sU (n + 1)
0 0
tA (n) tA (n + 1)
0 0 tB (n) = tB (n + 1)
1 0 sA (n) sA (n + 1)
0 1
sB (n)
sB (n + 1)
Answers to Exercises
T =
0.50000
0.25000
0.25000
0.00000
0.00000
octave:2>
octave:3>
octave:4>
octave:5>
octave:6>
octave:7>
0.25000 0.25000
0.50000 0.00000
0.00000 0.50000
0.25000 0.00000
0.00000 0.25000
p0=[1;0;0;0;0]
p1=T*p0
p2=T*p1
p3=T*p2
p4=T*p3
p5=T*p4
271
0.00000
0.00000
0.00000
1.00000
0.00000
0.00000
0.00000
0.00000
0.00000
1.00000
sU
tA
tB
sA
sB
~p0
1
0
0
0
0
~p1
0.50000
0.25000
0.25000
0.00000
0.00000
~p2
0.375000
0.250000
0.250000
0.062500
0.062500
~p3
0.31250
0.21875
0.21875
0.12500
0.12500
~p4
0.26562
0.18750
0.18750
0.17969
0.17969
~p5
0.22656
0.16016
0.16016
0.22656
0.22656
x=(.01:.01:.50)';
y=(.01:.01:.50)';
for i=.01:.01:.50
y(100*i)=learn(i);
endfor
z=[x, y];
gplot z
yields this plot. There is no threshold value no probability above which the
curve rises sharply.
272
0.01
0.99
pT (n)
pC (n)
!
=
pT (n + 1)
pC (n + 1)
3
0.23831
0.76169
4
0.22210
0.77790
5
0.20767
0.79233
6
7
8
9
10
0.19482
0.18339
0.17322
0.16417
0.15611
0.80518
0.81661
0.82678
0.83583
0.84389
(c) This is the sT = 0.2 result.
1
2
3
4
5
n=0
0.20000
0.18800
0.17732
0.16781
0.15936
0.15183
0.80000
0.81200
0.82268
0.83219
0.84064
0.84817
6
7
8
9
10
0.14513
0.13916
0.13385
0.12913
0.12493
0.85487
0.86084
0.86615
0.87087
0.87507
(d) Although the probability vectors start 0.1 apart, they end only 0.032 apart.
So they are alike.
6 These are the p = .55 vectors, and the p = 0.60 vectors.
Answers to Exercises
n=0
0-0 1
1-0 0
0-1 0
2-0 0
1-1 0
0-2 0
3-0 0
2-1 0
1-2 0
0-3 0
4-0 0
3-1 0
2-2 0
1-3 0
0-4 0
4-1 0
3-2 0
2-3 0
1-4 0
4-2 0
3-3 0
2-4 0
4-3 0
3-4 0
n=0
0-0 1
1-0 0
0-1 0
2-0 0
1-1 0
0-2 0
3-0 0
2-1 0
1-2 0
0-3 0
4-0 0
3-1 0
2-2 0
1-3 0
0-4 0
4-1 0
3-2 0
2-3 0
1-4 0
4-2 0
3-3 0
2-4 0
4-3 0
3-4 0
n=1
0
0.55000
0.45000
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=1
0
0.60000
0.40000
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
273
n=2
0
0
0
0.30250
0.49500
0.20250
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=2
0
0
0
0.36000
0.48000
0.16000
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=3
0
0
0
0
0
0
0.16638
0.40837
0.33412
0.09112
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=3
0
0
0
0
0
0
0.21600
0.43200
0.28800
0.06400
0
0
0
0
0
0
0
0
0
0
0
0
0
0
n=4
0
0
0
0
0
0
0
0
0
0
0.09151
0.29948
0.36754
0.20047
0.04101
0
0
0
0
0
0
0
0
0
n=4
0
0
0
0
0
0
0
0
0
0
0.12960
0.34560
0.34560
0.15360
0.02560
0
0
0
0
0
0
0
0
0
n=5
0
0
0
0
0
0
0
0
0
0
0.09151
0
0
0
0.04101
0.16471
0.33691
0.27565
0.09021
0
0
0
0
0
n=5
0
0
0
0
0
0
0
0
0
0
0.12960
0
0
0
0.02560
0.20736
0.34560
0.23040
0.06144
0
0
0
0
0
n=6
0
0
0
0
0
0
0
0
0
0
0.09151
0
0
0
0.04101
0.16471
0
0
0.09021
0.18530
0.30322
0.12404
0
0
n=6
0
0
0
0
0
0
0
0
0
0
0.12960
0
0
0
0.02560
0.20736
0
0
0.06144
0.20736
0.27648
0.09216
0
0
n=7
0
0
0
0
0
0
0
0
0
0
0.09151
0
0
0
0.04101
0.16471
0
0
0.09021
0.18530
0
0.12404
0.16677
0.13645
n=7
0
0
0
0
0
0
0
0
0
0
0.12960
0
0
0
0.02560
0.20736
0
0
0.06144
0.20736
0
0.09216
0.16589
0.11059
274
When the American League has a p = 0.55 probability of winning each game
then their probability of winning the series is 0.60829. When their probability of
winning any one game is p = 0.6 then their probability of winning the series is
0.71021.
(b) From this Octave session and its graph
octave:1>
octave:2>
octave:3>
octave:4>
>
>
octave:5>
octave:6>
v0=[1;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0;0];
x=(.01:.01:.99)';
y=(.01:.01:.99)';
for i=.01:.01:.99
y(100*i)=markov(i,v0);
endfor
z=[x, y];
gplot z
by eye we judge that if p > 0.7 then the team is close to assured of the series.
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
line 1
(a) They must satisfy this condition because the total probability of a state
transition (including back to the same state) is 100%.
(b) See the answer to the third item.
(c) We will do the 22 case; bigger-sized cases are just notational problems. This
product
!
!
!
a1,1 a1,2
b1,1 b1,2
a1,1 b1,1 + a1,2 b2,1 a1,1 b1,2 + a1,2 b2,2
=
a2,1 a2,2
b2,1 b2,2
a2,1 b1,1 + a2,2 b2,1 a2,1 b1,2 + a2,2 b2,2
has these two column sums
(a1,1 b1,1 +a1,2 b2,1 )+(a2,1 b1,1 +a2,2 b2,1 ) = (a1,1 +a2,1 )b1,1 +(a1,2 +a2,2 )b2,1
= 1 b1,1 + 1 b2,1 = 1
and
(a1,1 b1,2 +a1,2 b2,2 )+(a2,1 b1,2 +a2,2 b2,2 ) = (a1,1 +a2,1 )b1,2 +(a1,2 +a2,2 )b2,2
= 1 b1,2 + 1 b2,2 = 1
as required.
(a) Yes.
(b) No, the columns do not have length one.
(c) Yes.
x
x cos(/6) y sin(/6)
0
x ( 3/2) y (1/2) + 0
(a)
7
+
=
y
x sin(/6) + y cos(/6)
1
x (1/2) + y cos( 3/2) + 1
276
2/ 5 and cos = 1/ 5.
!
!
x
x (1/ 5) y (2/ 5)
7
x (2/ 5) + y (1/ 5)
y
!
!
!
x
x (1/ 5) y (2/ 5)
1
x/ 5 + 2y/ 5 + 1
(c)
7
+
=
y
1
x (2/ 5) + y (1/ 5)
2x/ 5 + y/ 5 + 1
(a) Let f be distance-preserving and consider f1 . Any two points in the codomain
can be written as f(P1 ) and f(P2 ). Because f is distance-preserving, the distance
from f(P1 ) to f(P2 ) equals the distance from P1 to P2 . But this is exactly what
is required for f1 to be distance-preserving.
(b) Any plane figure F is congruent to itself via the identity map id : R2 R2 ,
which is obviously distance-preserving. If F1 is congruent to F2 (via some f)
then F2 is congruent to F1 via f1 , which is distance-preserving by the prior
item. Finally, if F1 is congruent to F2 (via some f) and F2 is congruent to F3
(via some g) then F1 is congruent to F3 via g f, which is easily checked to be
distance-preserving.
(a) The Pythagorean Theorem gives that three points are collinear if and only
if (for some ordering of them into P1 , P2 , and P3 ), dist(P1 , P2 ) + dist(P2 , P3 ) =
dist(P1 , P3 ). Of course, where f is distance-preserving, this holds if and only
if dist(f(P1 ), f(P2 )) + dist(f(P2 ), f(P3 )) = dist(f(P1 ), f(P3 )), which, again by
Pythagoras, is true if and only if f(P1 ), f(P2 ), and f(P3 ) are collinear.
The argument for betweeness is similar (above, P2 is between P1 and P3 ).
If the figure F is a triangle then it is the union of three line segments P1 P2 ,
P2 P3 , and P1 P3 . The prior two paragraphs together show that the property of
being a line segment is invariant. So f(F) is the union of three line segments, and
so is a triangle.
A circle C centered at P and of radius r is the set of all points Q such that
dist(P, Q) = r. Applying the distance-preserving map f gives that the image
f(C) is the set of all f(Q) subject to the condition that dist(P, Q) = r. Since
dist(P, Q) = dist(f(P), f(Q)), the set f(C) is also a circle, with center f(P) and
radius r.
(b) Here are two that are easy to verify: (i) the property of being a right triangle,
and (ii) the property of two lines being parallel.
(c) One that was mentioned in the section is the sense of a figure. A triangle
whose vertices read clockwise as P1 , P2 , P3 may, under a distance-preserving map,
be sent to a triangle read P1 , P2 , P3 counterclockwise.
Chapter Four
Chapter Four:
Determinants
Definition
Four.I.1: Exploration
Four.I.1.1
(a) 4
(b) 3
(c) 12
Four.I.1.2
(a) 6
(b) 21
(c) 27
Four.I.1.3 For the first, apply the formula in this section, note that any term with a
d, g, or h is zero, and simplify. Lower-triangular matrices work the same way.
Four.I.1.4 (a) Nonsingular, the determinant is 1.
(b) Nonsingular, the determinant is 1.
(c) Singular, the determinant is 0.
Four.I.1.5 (a) Nonsingular, the determinant is 3.
(b) Singular, the determinant is 0.
(c) Singular, the determinant is 0.
Four.I.1.6 (a) det(B) = det(A) via 21 + 2
(b) det(B) = det(A) via 2 3
(c) det(B) = (1/2) det(A) via (1/2)2
Four.I.1.7 Using the formula for the determinant of a 33 matrix we expand the left
side
1 b c2 + 1 c a2 + 1 a b2 b2 c 1 c2 a 1 a2 b 1
and by distributing we expand the right side.
(bc ba ac + a2 ) (c b) = c2 b b2 c bac + b2 a ac2 + acb + a2 c a2 b
Now we can just check that the two are equal. (Remark. This is the 33 case of
Vandermondes determinant which arises in applications).
278
1 b/a c/a
1
b/a
c/a
(1/a)1
d1 +2
e
f
d
0 (ae bd)/a (af cd)/a
g1 +3
g
h
i
0 (ah bg)/a (ai cg)/a
1
b/a
c/a
(a/(aebd))2
1
(af cd)/(ae bd)
0
0 (ah bg)/a
(ai cg)/a
This step finishes the calculation.
1 b/a
c/a
((ahbg)/a)2 +3
1
(af cd)/(ae bd)
0
0
0
(aei + bgf + cdh hfa idb gec)/(ae bd)
Now assuming that a 6= 0 and ae bd 6= 0, the original matrix is nonsingular if
and only if the 3, 3 entry above is nonzero. That is, under the assumptions, the
original matrix is nonsingular if and only if aei + bgf + cdh hfa idb gec 6= 0,
as required.
We finish by running down what happens if the assumptions that were taken
for convenience in the prior paragraph do not hold. First, if a 6= 0 but ae bd = 0
then we can swap
1
b/a
c/a
1
b/a
c/a
2 3
0
(af cd)/a 0 (ah bg)/a (ai cg)/a
0
0 (ah bg)/a (ai cg)/a
0
0
(af cd)/a
and conclude that the matrix is nonsingular if and only if either ah bg = 0 or
af cd = 0. The condition ah bg = 0 or af cd = 0 is equivalent to the
condition (ah bg)(af cd) = 0. Multiplying out and using the case assumption
that ae bd = 0 to substitute ae for bd gives this.
0 = ahaf ahcd bgaf + bgcd = ahaf ahcd bgaf + aegc
= a(haf hcd bgf + egc)
Since a 6= 0, we have that the matrix is nonsingular if and only if haf hcd bgf +
egc = 0. Therefore, in this a 6= 0 and ae bd = 0 case, the matrix is nonsingular
when haf hcd bgf + egc i(ae bd) = 0.
Answers to Exercises
279
The remaining cases are routine. Do the a = 0 but d 6= 0 case and the a = 0
and d = 0 but g 6= 0 case by first swapping rows and then going on as above. The
a = 0, d = 0, and g = 0 case is easy that matrix is singular since the columns
form a linearly dependent set, and the determinant comes out to be zero.
Four.I.1.10 Figuring the determinant and doing some algebra gives this.
0 = y1 x + x2 y + x1 y2 y2 x x1 y x2 y1
(x2 x1 ) y = (y2 y1 ) x + x2 y1 x1 y2
y2 y1
x2 y1 x1 y2
y=
x+
x2 x1
x2 x1
Note that this is the equation of a line (in particular, in contains the familiar
expression for the slope), and note that (x1 , y1 ) and (x2 , y2 ) satisfy it.
Four.I.1.11 (a) The comparison with the formula given in the preamble to this
section is easy.
(b) While it holds for 22 matrices !
h1,1 h1,2 h1,1
= h1,1 h2,2 + h1,2 h2,1
h2,1 h2,2 h2,1
h2,1 h1,2 h2,2 h1,1
= h1,1 h2,2 h1,2 h2,1
it does not hold for 44 matrices. An example is that this matrix is singular
because the second and third rows
are equal
1 0 0 1
0 1 1 0
0 1 1 0
1 0 0 1
but following the
scheme of the mnemonicdoes not give zero.
1 0 0 1
1 0 0
0 1 1 0
0 1 1
=1+0+0+0
0 1 1 0
0 1 1
(1) 0 0 0
1 0 0 1 1 0 0
Four.I.1.12 The determinant is (x2 y3 x3 y2 )~e1 + (x3 y1 x1 y3 )~e2 + (x1 y2 x2 y1 )~e3 .
To check perpendicularity, we check that the dot product with the first vector is
zero
x2 y3 x3 y2
x1
x2 x3 y1 x1 y3 = x1 x2 y3 x1 x3 y2 +x2 x3 y1 x1 x2 y3 +x1 x3 y2 x2 x3 y1 = 0
x1 y2 x2 y1
x3
and
the
dot product with
the second vector is also zero.
y1
x2 y3 x3 y2
y2 x3 y1 x1 y3 = x2 y1 y3 x3 y1 y2 +x3 y1 y2 x1 y2 y3 +x1 y2 y3 x2 y1 y3 = 0
y3
x1 y2 x2 y1
280
Four.I.1.13
y2
y1 C
F
E
x2
x1
by taking the area of the entire rectangle and subtracting the area of A the upperleft rectangle, B the upper-middle triangle, D the upper-right triangle, C the
lower-left triangle, E the lower-middle triangle, and F the lower-right rectangle
(x1 + x2 )(y1 + y2 ) x2 y1 (1/2)x1 y1 (1/2)x2 y2 (1/2)x2 y2 (1/2)x1 y1 x2 y1 .
Simplification gives the determinant formula.
This determinant is the negative of the one above; the formula distinguishes
whether the second column is counterclockwise from the first.
Four.I.1.15 The computation for 2 2 matrices, using the formula quoted in the
preamble, is easy. It does also hold for 33 matrices; the computation is routine.
Four.I.1.16 No. Recall that!constants come
2 4
1
det(
) = 2 det(
2 6
2
out
! one row at a time. !
2
1 2
) = 2 2 det(
)
6
1 3
This contradicts linearity (here we didnt need S, i.e., we can take S to be the
matrix of zeros).
Four.I.1.17 Bring out the cs one row at a time.
Four.I.1.18 There are no real numbers that make the matrix singular because
the determinant of the matrix cos2 + sin2 is never 0, it equals 1 for all .
Geometrically, with respect to the standard basis, this matrix represents a rotation
Answers to Exercises
281
of the plane through an angle of . Each such map is one-to-one for one thing, it
is invertible.
Four.I.1.19 This is how the answer was given in the cited source. Let P be the
sum of the three positive terms of the determinant and N the sum of the three
negative terms. The maximum value of P is
9 8 7 + 6 5 4 + 3 2 1 = 630.
The minimum value of N consistent with P is
9 6 1 + 8 5 2 + 7 4 3 = 218.
Any change in P would result in lowering that sum by more than 4. Therefore 412
the maximum value for the determinant and one form for the determinant is
9 4 2
3 8 6 .
5 1 7
3 1
1 2 3 1 2
Four.I.2.8
1 0 = 0 0 2 = 0 1
0 0
1 4 0 1 4
1 0 0 1 1 0 0 1 1 0 0
2 1 1 0 0 1 1 2 0 1 1
(b)
=
=
1 0 1 0 0 0 1 1 0 0 1
1 1 1 0 0 1 1 1 0 0 0
2 1 2
1
Four.I.2.9 (a)
=
= 3;
1 1 0 3/2
1 1 0 1 1 0 1 1 0
(b) 3 0 2 = 0 3 2 = 0 3 2 = 0
5 2 2 0 3 2 0 0 0
Four.I.2.10 When is the determinant not zero?
1 0 1 1 1 0
0 1 2 0 0 1
=
1 0 k
0 0 0
0 0 1 1 0 0
2
4 =6
2
1
2
=1
1
1
1
2
k1
1
1
0
1
1
282
h1,2 h1,3
h2,2 h2,3
h3,2 h3,3
h
1,1 h1,2
= (6) h2,1 h2,2
h3,1 h3,2
h1,3
h2,3
h3,3
(c)
h + h
3,1
1,1
h2,1
5h3,1
h1,2 + h3,2
h2,2
5h3,2
h1,3 + h3,3
h2,3
5h3,3
h + h
3,1 h1,2 + h3,2
1,1
=5
h2,1
h2,2
h3,1
h3,2
h
1,1 h1,2 h1,3
= 5 h2,1 h2,2 h2,3
h3,1 h3,2 h3,3
h1,3 + h3,3
h2,3
h3,3
Answers to Exercises
283
1 1 1
1 1
Four.I.2.15
, 1 1 1
1 1
1 1 1
(b) The determinant in the 11 case is 1. In every other case the second row is
the negative of the first, and so matrix is singular and the determinant is zero.
! 2 3 4
2 3
Four.I.2.16 (a) 2 ,
, 3 4 5
3 4
4 5 6
(b) The 11 and 22 cases yield these.
2 3
2 = 2
= 1
3 4
(a) 1 ,
1
3
2
4
is easy to check.
2
|A + B| =
6
4
= 8
8
|A| + |B| = 2 2 = 4
By the way, this also gives an example where scalar multiplication is not preserved
|2 A| 6= 2 |A|.
Four.I.2.18 No, we cannot replace it. Remark 2.2 shows that the four conditions after
the replacement would conflict no function satisfies all four.
Four.I.2.19 A upper-triangular matrix is in echelon form.
A lower-triangular matrix is either singular or nonsingular. If it is singular then
it has a zero on its diagonal and so its determinant (namely, zero) is indeed the
product down its diagonal. If it is nonsingular then it has no zeroes on its diagonal,
and we can reduce it by Gausss Method to echelon form without changing the
diagonal.
Four.I.2.20 (a) The properties in the definition of determinant show that |Mi (k)| = k,
|Pi,j | = 1, and |Ci,j (k)| = 1.
(b) The three cases are easy to check by recalling the action of left multiplication
by each type of matrix.
284
(c) If T S is invertible (T S)M = I then the associative property of matrix multiplication T (SM) = I shows that T is invertible. So if T is not invertible then neither
is T S.
(d) If T is singular then apply the prior answer: |T S| = 0 and |T | |S| = 0 |S| = 0.
If T is not singular then we can write it as a product of elementary matrices
|T S| = |Er E1 S| = |Er | |E1 | |S| = |Er E1 ||S| = |T ||S|.
(e) 1 = |I| = |T T 1 | = |T ||T 1 |
Four.I.2.21 (a) We must show that if
ki +j
T
T
then d(T ) = |T S|/|S| = |T S|/|S| = d(T ). We will be done if we show that combining
rows first and then multiplying to get T S gives the same result as multiplying
first to get T S and then combining (because the determinant |T S| is unaffected
by the combination so well then have |T S| = |T S|, and hence d(T ) = d(T )). That
argument runs: after adding k times row i of T S to row j of T S, the j, p entry is
(kti,1 + tj,1 )s1,p + + (kti,r + tj,r )sr,p , which is the j, p entry of T S.
i j
(b) We need only show that swapping T T and then multiplying to get T S
gives the same result as multiplying T by S and then swapping (because, as the
determinant |T S| changes sign on the row swap, well then have |T S| = |T S|, and
so d(T ) = d(T )). That argument runs just like the prior one.
(c) Not surprisingly by now, we need only show that multiplying a row by a scalar
ki
T T and then computing T S gives the same result as first computing T S
and then multiplying the row by k (as the determinant |T S| is rescaled by k the
multiplication, well have |T S| = k|T S|, so d(T ) = k d(T )). The argument runs
just as above.
(d) Clear.
(e) Because weve shown that d(T ) is a determinant and that determinant functions
(if they exist) are unique, we have that so |T | = d(T ) = |T S|/|S|.
Four.I.2.22 We will first argue that a rank r matrix has a rr submatrix with nonzero
determinant. A rank r matrix has a linearly independent set of r rows. A matrix
made from those rows will have row rank r and thus has column rank r. Conclusion:
from those r rows we can extract a linearly independent set of r columns, and so
the original matrix has a rr submatrix of rank r.
We finish by showing that if r is the largest such integer then the rank of the
matrix is r. We need only show, by the maximality of r, that if a matrix has a
kk submatrix of nonzero determinant then the rank of the matrix is at least k.
Consider such a k k submatrix. Its rows are parts of the rows of the original
matrix, clearly the set of whole rows is linearly independent. Thus the row rank of
the original matrix is at least k, and the row rank of a matrix equals its rank.
Answers to Exercises
285
Four.I.2.23 A matrix with only rational entries reduces with Gausss Method to an
echelon form matrix using only rational arithmetic. Thus the entries on the diagonal
must be rationals, and so the product down the diagonal is rational.
Four.I.2.24 This is how the answer was given in the cited source. The value
(1a4 )3 of the determinant is independent of the values B, C, D. Hence operation (e)
does not change the value of the determinant but merely changes its appearance.
Thus the element of likeness in (a), (b), (c), (d), and (e) is only that the appearance
of the principle entity is changed. The same element appears in (f) changing the
name-label of a rose, (g) writing a decimal integer in the scale of 12, (h) gilding the
lily, (i) whitewashing a politician, and (j) granting an honorary degree.
6 |
=0
(b) This matrix is nonsingular.
2
2 1
3 1 0 = (2)(1)(5) |P1 | + (2)(0)(0) |P2 | + (2)(3)(5) |P3 |
2 0 5
+ (2)(0)(2) |P4 | + (1)(3)(0) |P5 | + (2)(1)(1) |P6 |
= 42
Four.I.3.18
gives this
1
1 4
2 3 = 0
0
5 1
5
2
1
0
0
+ (1)(3)
1
1
1
1
3 = 0
0
4
5
2
0
1
= 1
0
1
3 = 5
5/2
286
6 |
= 5
Four.I.3.19 Following
Example
3.6 gives this.
t
1,1 t1,2 t1,3
t2,1 t2,2 t2,3 = t1,1 t2,2 t3,3 |P1 | + t1,1 t2,3 t3,2 |P2 |
t3,1 t3,2 t3,3
+ t1,2 t2,1 t3,3 |P3 | + t1,2 t2,3 t3,1 |P4 |
+ t1,3 t2,1 t3,2 |P5 | + t1,3 t2,2 t3,1 |P6 |
= t1,1 t2,2 t3,3 (+1) + t1,1 t2,3 t3,2 (1)
+ t1,2 t2,1 t3,3 (1) + t1,2 t2,3 t3,1 (+1)
+ t1,3 t2,1 t3,2 (+1) + t1,3 t2,2 t3,1 (1)
Four.I.3.20 This is all of the permutations where (1) = 1
1 = h1, 2, 3, 4i
2 = h1, 2, 4, 3i
4 = h1, 3, 4, 2i
3 = h1, 3, 2, 4i
5 = h1, 4, 2, 3i
6 = h1, 4, 3, 2i
8 = h2, 1, 4, 3i
10 = h2, 3, 4, 1i
9 = h2, 3, 1, 4i
11 = h2, 4, 1, 3i
12 = h2, 4, 3, 1i
14 = h3, 1, 4, 2i
17 = h3, 4, 1, 2i
15 = h3, 2, 1, 4i
18 = h3, 4, 2, 1i
20 = h4, 1, 3, 2i
23 = h4, 3, 1, 2i
21 = h4, 2, 1, 3i
24 = h4, 3, 2, 1i
Answers to Exercises
287
288
perms 2
of p+1,...,p+q
Answers to Exercises
289
the elements in two of the columns of the derived determinant are proportional, so
the determinant
vanishes. That
is,
2 1 x 4 1 x 3 1 x 2 1 2
4 2 x 3 = 2 x 1 2 = x + 1 2 4 = 0.
6 3 x 10 3 x 7 3 x 4 3 6
Four.I.3.37 This is how the answer was given in the cited source. Let
a b c
d e f
g h i
have magic sum N = S/3. Then
N = (a + e + i) + (d + e + f) + (g + e + c)
(a + d + g) (c + f + i) = 3e
and S = 9e. Hence,
adding
and columns,
rows
a b c a b c a b 3e a b e
D = d e f = d e
f = d e 3e = d e e S.
g h i 3e 3e 3e 3e 3e 9e 1 1 1
Four.I.3.38 This is how the answer was given in the cited source. Denote by Dn
the determinant in question and by ai,j the element in the i-th row and j-th column.
Then from the law of formation of the elements we have
ai,j = ai,j1 + ai1,j ,
a1,j = ai,1 = 1.
Subtract each row of Dn from the row following it, beginning the process with
the last pair of rows. After the n 1 subtractions the above equality shows that
the element ai,j is replaced by the element ai,j1 , and all the elements in the first
column, except a1,1 = 1, become zeroes. Now subtract each column from the one
following it, beginning with the last pair. After this process the element ai,j1
is replaced by ai1,j1 , as shown in the above relation. The result of the two
operations is to replace ai,j by ai1,j1 , and to reduce each element in the first
row and in the first column to zero. Hence Dn = Dn+i and consequently
Dn = Dn1 = Dn2 = = D2 = 1.
290
0 1 0 0
1 0 0 0
0 0 0 1
0 0 1 0
the two row swaps 1 2 and 3 4 will produce the identity matrix.
Four.I.4.15 The pattern is this.
1
i
sgn(i ) +1
2
1
3
1
4
+1
5
+1
6
1
...
...
Answers to Exercises
291
Four.I.4.17 If (i) = j then 1 (j) = i. The result now follows on the observation
that P has a 1 in entry i, j if and only if (i) = j, and P1 has a 1 in entry j, i if
and only if 1 (j) = i,
Four.I.4.18 This does not say that m is the least number of swaps to produce an
identity, nor does it say that m is the most. It instead says that there is a way to
swap to the identity in exactly m steps.
Let j be the first row that is inverted with respect to a prior row and let k be
the first row giving that inversion. We have this interval of rows.
.
.
.
k
r1
.
.
j < k < r1 < < rs
.
rs
j
..
.
Swap.
.
.
.
j
r1
.
.
.
rs
k
..
.
The second matrix has one fewer inversion because there is one fewer inversion in
the interval (s vs. s + 1) and inversions involving rows outside the interval are not
affected.
Proceed in this way, at each step reducing the number of inversions by one with
each row swap. When no inversions remain the result is the identity.
The contrast with Corollary 4.5 is that the statement of this exercise is a there
exists statement: there exists a way to swap to the identity in exactly m steps.
But the corollary is a for all statement: for all ways to swap to the identity, the
parity (evenness or oddness) is the same.
Four.I.4.19 (a) First, g(1 ) is the product of the single factor 21 and so g(1 ) = 1.
Second, g(2 ) is the product of the single factor 1 2 and so g(2 ) = 1.
(b) permutation 1 2 3 4 5 6
g()
2
2 2
2
2
2
292
Geometry of Determinants
Four.II.1: Determinants as Size Functions
Four.II.1.8 For each, find the determinant and take the absolute value.
(a) 7
(b) 0
(c) 58
Four.II.1.9 Solving
3
2
1
4
c1 3 + c2 6 + c3 0 = 1
1
1
5
2
gives the unique solution c3 = 11/57, c2 = 40/57 and c1 = 99/57. Because
c1 > 1, the vector is not in the box.
Four.II.1.10 Move the parallelepiped to start at the origin, so that it becomes the
box formed by
!
!
3
2
h
,
i
0
1
and now the absolute value of this determinant is easily computed as 3.
3 2
=3
0 1
Four.II.1.11
Four.II.1.12
(a) 3
(b) 9
(c) 1/9
31 +2
1 +3
0
0
0
1
0
4
2
gives the determinant as +2. The sign is positive so the transformation preserves
orientation.
Answers to Exercises
293
1
4
,
i
1
0
294
2
0
0
1/2
2x 3y + 3z = 0
3y + z = 0
with this solution set.
{ 1/3 z | z R },
1
A solution of length one is this.
1
1
p
1/3
19/9
1
Thus the area of the triangle
is
the
absolute
value of this determinant.
1 2 3/19
0 3 1/19 = 12/ 19
1 3
3/ 19
Answers to Exercises
295
Four.II.1.25 (a) Because the image of a linearly dependent set is linearly dependent,
if the vectors forming S make a linearly dependent set, so that |S| = 0, then the
vectors forming t(S) make a linearly dependent set, so that |T S| = 0, and in this
case the equation holds.
ki +j
(b) We must check that if T T then d(T ) = |T S|/|S| = |T S|/|S| = d(T ). We
can do this by checking that combining rows first and then multiplying to get T S
gives the same result as multiplying first to get T S and then combining (because
the determinant |T S| is unaffected by the combining rows so well then have that
|T S| = |T S| and hence that d(T ) = d(T )). This check runs: after adding k times
row i of T S to row j of T S, the j, p entry is (kti,1 +tj,1 )s1,p + +(kti,r +tj,r )sr,p ,
which is the j, p entry of T S.
i j
(c) For the second property, we need only check that swapping T T and
then multiplying to get T S gives the same result as multiplying T by S first and
then swapping (because, as the determinant |T S| changes sign on the row swap,
well then have |T S| = |T S|, and so d(T ) = d(T )). This check runs just like the
one for the first property.
ki
For the third property, we need only show that performing T T and then
computing T S gives the same result as first computing T S and then performing
the scalar multiplication (as the determinant |T S| is rescaled by k, well have
|T S| = k|T S| and so d(T ) = k d(T )). Here too, the argument runs just as above.
The fourth property, that if T is I then the result is 1, is obvious.
(d) Determinant functions are unique, so |T S|/|S| = d(T ) = |T |, and so |T S| = |T ||S|.
Four.II.1.26 Any permutation matrix has the property that the transpose of the
matrix is its inverse.
For the implication, we know that |AT | = |A|. Then 1 = |A A1 | = |A AT | =
|A| |AT | = |A|2 .
The converse does not hold; here is an example.
!
3 1
2 1
Four.II.1.27 Where the sides of the box are c times longer, the box has c3 times as
many cubic units of volume.
Four.II.1.28 If H = P1 GP then |H| = |P1 ||G||P| = |P1 ||P||G| = |P1 P||G| = |G|.
Four.II.1.29 !(a) The!new basis
! is the
! old basis rotated by /4.
1
0
0
1
(b) h
,
i, h
,
i
0
1
1
0
(c) In each case the determinant is +1 (we say that these bases have positive
orientation).
296
.
+
+
Here each associated determinant is 1 (we say that such bases have a negative
orientation).
(e) There is one positively oriented basis h(1)i and one negatively oriented basis
h(1)i.
(f) There are 48 bases (6 half-axis choices are possible for the first unit vector, 4
for the second, and 2 for the last). Half are positively oriented like the standard
basis on the left below, and half are negatively oriented like the one on the right
~
e3
~
e1
~
e2
s1,i
s2,i
RepEn (~si ) = .
..
sn,i
and then we represent the map application with matrix-vector multiplication
RepEn ( t(~si ) ) =
..
..
.
.
tn,1 tn,2 . . . tn,n
sn,j
t1,1
t1,2
t1,n
t2,1
t2,2
t2,n
..
..
..
tn,1
tn,2
tn,n
Answers to Exercises
297
As in the derivation of the permutation expansion formula, we apply multilinearity, first splitting along the sum in the first argument
det(s1,1~t1 , . . . , s1,n~t1 + s2,n~t2 + + sn,n~tn )
+ + det(sn,1~tn , . . . , s1,n~t1 + s2,n~t2 + + sn,n~tn )
and then splitting each of those n summands along the sums in the second arguments,
etc. We end with, as in the derivation of the permutation expansion, nn summand
determinants, each of the form det(si1 ,1~ti1 , si2 ,2~ti2 , . . . , sin ,n~tin ). Factor out each
of the si,j s = si1 ,1 si2 ,2 . . . sin ,n det(~ti1 , ~ti2 , . . . , ~tin ).
As in the permutation expansion derivation, whenever two of the indices in i1 ,
. . . , in are equal then the determinant has two equal arguments, and evaluates to
0. So we need only consider the cases where i1 , . . . , in form a permutation of the
numbers 1, . . . , n. We thus have
X
det(t(~s1 ), . . . , t(~sn )) =
s(1),1 . . . s(n),n det(~t(1) , . . . , ~t(n) ).
permutations
Swap the columns in det(~t(1) , . . . , ~t(n) ) to get the matrix T back, which changes
the sign by a factor of sgn , and then factor out the determinant of T .
X
X
=
s(1),1 . . . s(n),n det(~t1 , . . . , ~tn )sgn = det(T )
s(1),1 . . . s(n),n sgn .
As in the proof that the determinant of a matrix equals the determinant of its
transpose, we commute the ss to list them by ascending row number instead of by
ascending column number (and we substitute sgn(1 ) for sgn()).
X
= det(T )
s1,1 (1) . . . sn,1 (n) sgn 1 = det(T ) det(~s1 ,~s2 , . . . ,~sn )
Four.II.1.31
x
y
1
x2
y
2
1
x3
y
3
1
298
Laplaces Formula
Four.III.1: Laplaces Expansion
Four.III.1.11
2+3
(a) (1)
4 1 1
(c) (1)
= 2
0 2
1
0
0
= 2
2
(b) (1)
3+2
1 2
= 5
1 3
Answers to Exercises
299
1 2
1 2
2 2
Four.III.1.12 (a) 3 (+1)
= 13
+ 1 (+1)
+ 0 (1)
1 3
1 0
3 0
3 0
3 1
0 1
(b) 1 (1)
= 13
+ 2 (1)
+ 2 (+1)
1 3
1 0
3 0
3 0
3 0
1 2
(c) 1 (+1)
= 13
+ 0 (+1)
+ 2 (1)
1 2
1 3
1 3
Four.III.1.13 This is adj(T ).
T1,1
T1,2
T1,3
Four.III.1.14
T2,1
T2,2
T2,3
(c)
6
9
6
9
5
8
6
12
6
2
2 3
+
5
8 9
1
1 3
+
4
7 9
1 2
1
+
7 8
4
6
3
0
1
T1,1 T2,1 T3,1
1 2 2
T1,2 T2,2 T3,2 =
1 1 1
0 1 2
= 3 2 8
0 1
1
8
T3,1
4
T3,2 =
7
T3,3
4
+
7
= 6
3
1
1
T1,1
T1,2
T2,1
T2,2
3
6
3
6
2
5
1 4
4
0 2
1
2 4
4
1
1 2
1 2 1
0 1 0
4 1
=
=
2
3
4 1
2 3
300
4 3
4 3
0 3
0
3
8
9
8
9
T1,1 T2,1 T3,1
1 3
1 3 1 3
=
T
T
T
1,2
2,2
3,2
1 9 1 9
1 3
1 4 1 4
1 0
1 8 1 0
1 8
24 12 12
= 12
6
6
8
4
4
0 1 2
0 1/3 2/3
a
c
b
d
expanded on the first row gives a (+1)|d| + b (1)|c| = ad bc (note the two
11 minors).
Four.III.1.18 The determinant of
a b
d e
g h
f
i
is this.
d
e f
a
b
h i
g
d
f
+c
g
i
e
= a(ei fh) b(di fg) + c(dh eg)
h
Answers to Exercises
Four.III.1.19
(a)
T1,1
T1,2
301
T2,1
T2,2
t2,2
=
t2,1
t2,2
t2,1
t1,2
t1,1
t1,2
=
t1,1
!
t2,2
t2,1
t1,2
t1,1
whose
value
0 0
1 0 = 1
0 1
d1 0
0
0 d
0
2
0
0 d3
D=
...
..
0
=3
1
.
dn
d 2 dn
0
0
0
d1 d3 d n
0
adj(D) =
..
d1 dn1
By the way, Theorem 1.9 provides a slicker way to derive this conclusion.
Four.III.1.22 Just note that if S = T T then the cofactor Sj,i equals the cofactor Ti,j
because (1)j+i = (1)i+j and because the minors are the transposes of each other
(and the determinant of a transpose equals the determinant of the matrix).
Four.III.1.23
T = 4
7
2 3
3
6
adj(T ) = 6 12
5 6
8 9
3
6
6
3
adj(adj(T )) = 0
0
0
0
0
0
0
302
Four.III.1.24
(a) An example
M = 0
0
2
4
0
5
6
M1,1
adj(M) = M1,2
M1,3
M2,1
M2,2
M2,3
4 5
2 3 2
0 6 4
0 6
M3,1
1
0 5 1 3
M3,2 =
0
0 6 0 6
M3,3
1 2 1
0 4
0 0 0
0 0
24 12 2
=0
6
5
0
0
4
3
5
3
5
2
m1,1 . . . m1,b . . .
m
2,1 . . . m2,b
..
..
..
..
.
mn,b
when deleted, leave an upper triangular minor, because entry i, j of the minor is
either entry i, j of M (this happens if a > i and b > j; in this case i < j implies
that the entry is zero) or it is entry i, j + 1 of M (this happens if i < a and j > b;
in this case, i < j implies that i < j + 1, which implies that the entry is zero),
or it is entry i + 1, j + 1 of M (this last case happens when i > a and j > b;
obviously here i < j implies that i + 1 < j + 1 and so the entry is zero). Thus the
determinant of the minor is the product down the diagonal. Observe that the
a 1, a entry of M is the a 1, a 1 entry of the minor (it doesnt get deleted
because the relation a > b is strict). But this entry is zero because M is upper
triangular and a 1 < a. Therefore the cofactor is zero, and the adjoint is upper
triangular. (The lower triangular case is similar.)
(b) This is immediate from the prior part, by Theorem 1.9.
Four.III.1.25 We will show that each determinant can be expanded along row i. The
argument for column j is similar.
Each term in the permutation expansion contains one and only one entry
from each row. As in Example 1.1, factor out each row i entry to get |T | =
ti,1 Ti,1 + + ti,n Ti,n , where each Ti,j is a sum of terms not containing any
elements of row i. We will show that Ti,j is the i, j cofactor.
Consider the i, j = n, n case first:
X
tn,n Tn,n = tn,n
t1,(1) t2,(2) . . . tn1,(n1) sgn()
where the sum is over all n-permutations such that (n) = n. To show that
Ti,j is the minor Ti,j , we need only show that if is an n-permutation such that
(n) = n and is an n 1-permutation with (1) = (1), . . . , (n 1) = (n 1)
then sgn() = sgn(). But thats true because and have the same number of
inversions.
Back to the general i, j case. Swap adjacent rows until the i-th is last and swap
adjacent columns until the j-th is last. Observe that the determinant of the i, j-th
minor is not affected by these adjacent swaps because inversions are preserved
(since the minor has the i-th row and j-th column omitted). On the other hand, the
sign of |T | and Ti,j changes n i plus n j times. Thus Ti,j = (1)ni+nj |Ti,j | =
(1)i+j |Ti,j |.
Four.III.1.26 This is obvious for the 11 base case.
For the inductive case, assume that the determinant of a matrix equals the
determinant of its transpose for all 11, . . . , (n 1)(n 1) matrices. Expanding
on row i gives |T | = ti,1 Ti,1 + . . . + ti,n Ti,n and expanding on column i gives
|T T | = t1,i (T T )1,i + + tn,i (T T )n,i Since (1)i+j = (1)j+i the signs are the same
in the two summations. Since the j, i minor of T T is the transpose of the i, j minor
of T , the inductive hypothesis gives |(T T )i,j | = |Ti,j |.
Four.III.1.27 This is how the answer was given in the cited source. Denoting the
above determinant by Dn , it is seen that D2 = 1, D3 = 2. It remains to show
that Dn = Dn1 + Dn2 , n > 4. In Dn subtract the (n 3)-th column from the
(n 1)-th, the (n 4)-th from the (n 2)-th, . . . , the first from the third, obtaining
1 1 0
0
0 0 . . .
1 1 1 0
0 0 . . .
Fn = 0 1
1 1 0 0 . . . .
0 0
1
1 1 0 . . .
.
.
.
.
.
. . . .
By expanding this determinant with reference to the first row, there results the
desired relation.
304
1
1
1
1
4
7
3
=
= 3
1
1
2
(b) x = 2, y = 2
2 z=1
3 Determinants are unchanged by combinations, including column combinations, so
det(Bi ) = det(~
a1 , . . . , x 1 a
~ 1 + + xi a
~ i + + xn a
~ n, . . . , a
~ n ). Use the operation
of taking x1 times the first column and adding it to the i-th column, etc.,
to see this is equal to det(~
a1 , . . . , x i a
~ i, . . . , a
~ n ). In turn, that is equal to xi
det(~
a1 , . . . , a
~ i, . . . , a
~ n ) = xi det(A), as required.
4
a2,1 x1 + a2,2 x2 = b2
a2,1 a2,2
0
x1
x2
!
=
a1,1
a2,1
b1
b2
7 Of course, singular systems have |A| equal to zero, but we can characterize the
infinitely many solutions case is by the fact that all of the |Bi | are zero as well.
8 We can consider the two nonsingular cases together with this system
x1 + 2x2 = 6
x1 + 2x2 = c
where c = 6 of course yields infinitely many solutions, and any other value for c
yields no solutions. The corresponding vector equation
!
!
!
1
2
6
x1
+ x2
=
1
2
c
gives a picture of two overlapping vectors. Both lie on the line y = x. In the c = 6
case the vector on the right side also lies on the line y = x but in any other case it
does not.
306
1
3
6
6
12
2 8
C3 = 1 2
4 2
2
2
C2 =
!
4
4
with determinant det(C2 ) = 64. The determinant of the original matrix is thus
64/(22 21 ) = 8
2 The same construction as was used for the 33 case above shows that in place of
a1,1 we can select any nonzero entry ai,j . Entry cp,q of Chis matrix is the value
of this determinant
a
a1,q+1
1,1
ap+1,1 ap+1,q+1
where p + 1 6= i and q + 1 6= j.
3 Sarruss formula uses 12 multiplications and 5 additions (including the subtractions
in with the additions). Chis formula uses two multiplications and an addition
(which is actually a subtraction) for each of the four 2 2 determinants, and
another two multiplications and an addition for the 22 Chis determinant, as
well as a final division by a1,1 . Thats eleven multiplication/divisions and five
addition/subtractions. So Chi is the winner.
4 Consider an nn matrix.
a1,1
a
2,1
A=
an1,1
an,1
a1,2
a2,2
..
.
an1,2
an,2
a1,n1
a2,n1
an1,n1
an,n1
a1,1
a1,2
a a
a2,2 a1,1
2,1 1,1
a1,1 2
..
.
a1,1 3
..
an1,1 a1,1 an1,2 a1,1
.
a1,1 n
an,1 a1,1
an,2 a1,1
a1,n
a2,n
an1,n
an,n
a1,n1
a2,n1 a1,1
an1,n1 a1,1
an,n1 a1,1
a1,n
a2,n a1,1
an1,n a1,1
an,n a1,1
..
.
a3,1 1 +3
an,1 1 +n
The result is a matrix whose first row is unchanged, whose first column is all zeros
(except for the 1, 1 entry of a1,1 ), and whose remaining entries are these.
a1,2
a1,n1
a1,n
a2,2 a1,1 a2,1 a1,2 a2,n1 an,n a2,n1 a1,n1 a2,n an,n a2,n a1,n
..
4
5
6
x
y = 3x + 6y 3z
z
308
comes from
u
1
0 = u2
u3
u1
u = u2
u3
v1
v = v2
v3
The equation for the point incident on two lines is the same.
4 If p1 , p2 , p3 , and q1 , q2 , q3 are two triples of homogeneous coordinates for p
then the two column vectors are in proportion, that is, lie on the same line through
the origin. Similarly, the two row vectors are in proportion.
p1
q1
k p2 = q2
m (L1 L2 L3 ) = (M1 M2 M3 )
p3
q3
Then multiplying gives the answer (km) (p1 L1 + p2 L2 + p3 L3 ) = q1 M1 + q2 M2 +
q3 M3 = 0.
5 The picture of the solar eclipse unless the image plane is exactly perpendicular
to the line from the sun through the pinhole shows the circle of the sun projecting
to an image that is an ellipse. (Another example is that in many pictures in this
Topic, weve shown the circle that is the spheres equator as an ellipse, that is, a
viewer of the drawing sees a circle as an ellipse.)
The solar eclipse picture also shows the converse. If we picture the projection
as going from left to right through the pinhole then the ellipse I projects through P
to a circle S.
6 A spot on the unit sphere
p1
p 2
p3
is non-equatorial if and only if p3 =
6 0. In that case it corresponds to this point on
the z = 1 plane
p1 /p3
p2 /p3
1
since that is intersection of the line containing the vector and the plane.
7
Answers to Exercises
309
T0
U0
V0
V2
U2
T1
U1
T2
V1
coordinate
1
0
0
1
RepB (~t0 ) = 0 RepB (~t1 ) = 1 RepB (~t2 ) = 0 RepB (~v0 ) = 1
0
0
1
1
(c) First, any U0 on T0 V0
1
1
a+b
RepB (~u0 ) = a 0 + b 1 = b
0
1
b
has homogeneous coordinate vectors
of this
form
u0
1
1
(u0 is a parameter; it depends on where on the T0 V0 line the point U0 is, but
any point on that line has a homogeneous coordinate vector of this form for some
u0 R). Similarly, U2 is on T1 V0
0
1
d
RepB (~u2 ) = c 1 + d 1 = c + d
0
1
d
and so has this homogeneous coordinate
vector.
1
u 2
1
Also similarly, U1 is incident on T
2 V0
0
1
f
RepB (~u1 ) = e 0 + f 1 = f
1
1
e+f
and has this homogeneous coordinatevector.
1
1
u1
310
g + h = iu0
=
hu2 = i
h=i+j
Substituting hu2 for i in the first equation
hu0 u2
hu2
h
shows that V1 has this two-parameter
homogeneous
coordinate vector.
u0 u2
u2
1
(e) Since V2 is the intersection T0 U1 T1 U0
k + l = nu0
1
1
0
u0
=
k 0 + l 1 = m 1 + n 1
l=m+n
0
u1
0
1
lu1 = n
lu0 u1
l
lu1
gives that V2 has this two-parameter
coordinate vector.
homogeneous
u0 u1
1
u1
line
its
homogeneous
vector has the form
(f) Because V1 is on the T1 U
1
coordinate
0
1
q
p 1 + q 1 = p + q
()
0
u1
qu1
but a previous part of this question established that V1 s homogeneous coordinate
vectors have the form
u0 u2
u2
1
and so this a homogeneous coordinate
vector
for V1 .
u0 u1 u2
()
u1 u2
u1
By () and (), there is a relationship among the three parameters: u0 u1 u2 = 1.
Answers to Exercises
311
u0 u1 u2
1
u2 = u2
u1 u2
u1 u2
Now, the T2 U2 line consists of the points whose homogeneous coordinates have
this form.
0
1
s
r 0 + s u2 = su2
1
1
r+s
Taking s = 1 and r = u1 u2 1 shows that the homogeneous coordinate vectors
of V2 have this form.
Chapter Five
2/14
4/14
!
=
0
11/2
0
5
Five.II.1.6 (a) Because the matrix (2) is 11, the matrices P and P1 are also 11 and
so where P = (p) the inverse is P1 = (1/p). Thus P(2)P1 = (p)(2)(1/p) = (2).
(b) Yes: recall that we can bring scalar multiples out of a matrix P(cI)P1 =
cPIP1 = cI. By the way, the zero and identity matrices are the special cases
c = 0 and c = 1.
(c) No, as this example shows.
!
!
!
!
1 2
1 0
1 2
5 4
=
1 1
0 3
1 1
2
1
Five.II.1.7
(a)
t
C3wrt B C3wrt B
T
idy
idy
t
C3wrt D C3wrt D
314
the effect
of the
transformation
0
0
0
1
1
2
t
t
t
2 7 3
1 7 0
0 7 1
3
4
0
2
1
0
and represented
those
outputs
with
respect
to
the
ending
basis
B
2
2
0
0
1
1
RepB (0) = 0
RepB ( 1 ) = 3
RepB ( 3 ) = 7
4
10
2
2
0
3
to get the matrix.
2 0 1
T = RepB,B (t) = 7 0 3
10 2 3
(c) Find the
effect
of
the
transformation
on
of D
theelements
0
1
1
1
1
1
t
t
t
0 7 1
1 7 0
0 7 0
0
1
2
0
0
0
and represented
those
with respecttothe ending
basis D
1
1
1
1
0
1
RepD (0) = 0
RepD (0) = 0
RepD (1) = 1
0
0
2
2
0
0
to get the matrix.
1 1 1
T = RepD,D (t) = 0 0
1
0 2
0
(d) To go down on the right we need RepB,D (id) so we first compute the effect of
the identity map on each element of D, which is no effect, and then represent
the results
with respect
to B.
1
4
0
1
0
1
RepD (2) = 2
RepD (1) = 1
RepD (0) = 0
3
3
0
0
1
1
So this is P.
4 1 1
P= 2
1
0
3
0
1
For the other matrix RepD,B (id) we can either find it directly, as we just have
with P, or we can do the usual calculation
of a matrix
inverse.
1
1
1
P1 = 2 1 2
3 3 2
Answers to Exercises
315
Five.II.1.8 (a) Because we describe t with the members of B, finding the matrix
representation is easy:
0
0
1
RepB (t(1)) = 0
RepB (t(x2 )) = 1
RepB (t(x)) = 0
1
1
3
B
gives this.
RepB,B (t) 1
1
0
3
1
0
1
(b) We will find t(1), t(1 + x), and t(1 + x + x2 , to find how each is represented
with respect to D. We are given that t(1) = 3, and the other two are easy to see:
t(1 + x) = x2 + 2 and t(1 + x + x2 ) = x2 + x + 3. By eye, we get the representation
of each vector
3
2
2
RepD (t(1)) = 0
RepD (t(1+x)) = 1
RepD (t(1+x+x2 )) = 0
0
1
1
D
RepD,D (t) = 0
0
2
1
1
0
1
Vwrt B Vwrt B
T
idyP
idyP
t
Vwrt D Vwrt D
0 0 1
0 1 1
P1 = 0 1 1
P = 1 1 0
1 1 1
1
0 0
Five.II.1.9
(a)
t
C2wrt B C2wrt B
T
idy
idy
t
C2wrt D C2wrt D
316
0
0
1
1
and represent those with respect to D
!
!
1
1/2
RepD (
)=
0
0
!
1
RepD (
)=
1
so we have this.
P = RepB,D (id) =
1/2
0
1/2
1/2
1/2
1/2
For the matrix on the left we can either compute it directly, as in the prior
paragraph, or we can take the inverse.
!
!
1
1/2 1/2
2 2
1
P = RepD,B (id) =
=
(1/4)
0
1/2
0 2
(c) As with the prior item we can either compute it directly from the definition
or compute it using matrix operations.
!
!
!
!
2
2
1
1
2
2
3
3
PT P1 =
=
0 2
2 1
0 2
2 1
Five.II.1.10 One possible choice of the bases is
!
!
!
!
1
1
1
0
B=h
,
i
D = E2 = h
,
i
2
1
0
1
(this B comes from the map description). To find the matrix T = RepB,B (t), solve
the relations
!
!
!
!
!
!
1
1
3
1
1
1
c1
+ c2
=
c1
+ c2
=
2
1
0
2
1
2
to get c1 = 1, c2 = 2, c1 = 1/3 and c2 = 4/3.
RepB,B (t) =
1 1/3
2 4/3
Finding RepD,D (t) involves a bit more computation. We first find t(~e1 ). The
relation
!
!
!
1
1
1
c1
+ c2
=
2
1
0
gives c1 = 1/3 and c2 = 2/3, and so
RepB (~e1 ) =
1/3
2/3
!
B
Answers to Exercises
317
making
RepB (t(~e1 )) =
1 1/3
2 4/3
!
1/3
=
2/3
B,B
1/9
14/9
!
B
and hence t acts on the first basis vector ~e1 in this way.
!
!
!
1
1
5/3
t(~e1 ) = (1/9)
(14/9)
=
2
1
4/3
The computation for t(~e2 ) is similar. The relation
!
!
!
1
1
0
c1
+ c2
=
2
1
1
gives c1 = 1/3 and c2 = 1/3, so
!
1/3
1/3
RepB (~e1 ) =
making
RepB (t(~e1 )) =
1 1/3
2 4/3
!
B,B
!
1/3
=
1/3
B
!
4/9
2/9
and hence t acts on the second basis vector ~e2 in this way.
!
!
!
1
1
2/3
t(~e2 ) = (4/9)
(2/9)
=
2
1
2/3
Therefore
RepD,D (t) =
5/3 2/3
4/3 2/3
1
2
1
1
= RepB,D (id)
1
1
2
1
1
!1
=
1/3
2/3
1/3
1/3
5/3
4/3
2/3
2/3
Five.II.1.11 Gausss Method shows that the first matrix represents maps of rank two
while the second matrix represents maps of rank three.
Five.II.1.12 The only representation of a zero map is a zero matrix, no matter what
the pair of bases RepB,D (z) = Z, and so in particular for any single basis B we have
RepB,B (z) = Z. The case of the identity is slightly different: the only representation
318
of the identity map, with respect to any B, B, is the identity RepB,B (id) = I.
(Remark: of course, we have seen examples where B 6= D and RepB,D (id) 6= I in
fact, we have seen that any nonsingular matrix is a representation of the identity
map with respect to some B, D.)
Five.II.1.13 No. If A = PBP1 then A2 = (PBP1 )(PBP1 ) = PB2 P1 .
Five.II.1.14 Matrix similarity is a special case of matrix equivalence (if matrices
are similar then they are matrix equivalent) and matrix equivalence preserves
nonsingularity.
Five.II.1.15 A matrix is similar to itself; take P to be the identity matrix: P = IPI1 =
IPI.
If T is similar to T then T = PT P1 and so P1 T P = T . Rewrite T =
1 1 1
(P )T (P ) to conclude that T is similar to T .
For transitivity, if T is similar to S and S is similar to U then T = PSP1 and
S = QUQ1 . Then T = PQUQ1 P1 = (PQ)U(PQ)1 , showing that T is similar
to U.
Five.II.1.16 Let fx and fy be the reflection maps (sometimes called flips). For any
bases B and D, the matrices RepB,B (fx ) and RepD,D (fy ) are similar. First note
that
!
!
1 0
1 0
S = RepE2 ,E2 (fx ) =
T = RepE2 ,E2 (fy ) =
0 1
0 1
are similar because the second matrix is the representation of fx with respect to
the basis A = h~e2 , ~e1 i:
!
!
1
0
0
1
=P
1 0
P1
0 1
R2wrt A x VR2wrt A
T
idyP
idyP
f
R2wrt E2 x R2wrt E2
S
Now the conclusion follows from the transitivity part of Exercise 15.
We can also finish without relying on that exercise. Write RepB,B (fx ) =
QT Q1 = QRepE2 ,E2 (fx )Q1 and RepD,D (fy ) = RSR1 = RRepE2 ,E2 (fy )R1 .
By the equation in the first paragraph, the first of these two is RepB,B (fx ) =
QPRepE2 ,E2 (fy )P1 Q1 . Rewriting the second of these two as R1 RepD,D (fy )
R = RepE2 ,E2 (fy ) and substituting gives the desired relationship
RepB,B (fx ) = QPRepE2 ,E2 (fy )P1 Q1
= QPR1 RepD,D (fy ) RP1 Q1 = (QPR1 ) RepD,D (fy ) (QPR1 )1
Answers to Exercises
319
Thus the matrices RepB,B (fx ) and RepD,D (fy ) are similar.
Five.II.1.17 We must show that if two matrices are similar then they have the same
determinant and the same rank. Both determinant and rank are properties of
matrices that are preserved by matrix equivalence. They are therefore preserved by
similarity (which is a special case of matrix equivalence: if two matrices are similar
then they are matrix equivalent).
To prove the statement without quoting the results about matrix equivalence,
note first that rank is a property of the map (it is the dimension of the range space)
and since weve shown that the rank of a map is the rank of a representation, it must
be the same for all representations. As for determinants, |PSP1 | = |P| |S| |P1 | =
|P| |S| |P|1 = |S|.
The converse of the statement does not hold; for instance, there are matrices
with the same determinant that are not similar. To check this, consider a nonzero
matrix with a determinant of zero. It is not similar to the zero matrix, the zero
matrix is similar only to itself, but they have they same determinant. The argument
for rank is much the same.
Five.II.1.18 The matrix equivalence class containing all n n rank zero matrices
contains only a single matrix, the zero matrix. Therefore it has as a subset only
one similarity class.
In contrast, the matrix equivalence class of 11 matrices of rank one consists
of those 11 matrices (k) where k 6= 0. For any basis B, the representation of
multiplication by the scalar k is RepB,B (tk ) = (k), so each such matrix is alone in
its similarity class. So this is a case where a matrix equivalence class splits into
infinitely many similarity classes.
Five.II.1.19 Yes, these are similar
1
0
0
3
3
0
0
1
~ 1,
~ 2 i, the second matrix is
since, where the first matrix is RepB,B (t) for B = h
~
~
RepD,D (t) for D = h2 , 1 i.
Five.II.1.20 The k-th powers are similar because, where each matrix represents the
map t, the k-th powers represent tk , the composition of k-many ts. (For instance,
if T = reptB, B then T 2 = RepB,B (t t).)
Restated more computationally, if T = PSP1 then T 2 = (PSP1 )(PSP1 ) =
2 1
PS P . Induction extends that to all powers.
For the k 6 0 case, suppose that S is invertible and that T = PSP1 . Note that
T is invertible: T 1 = (PSP1 )1 = PS1 P1 , and that same equation shows that
T 1 is similar to S1 . Other negative powers are now given by the first paragraph.
320
2
1
1
0
0
3
1
1
2
1
!
=
5
2
4
1
(this example is not entirely arbitrary because the center matrices on the two left
sides add to the zero matrix). Note that the sums of these similar matrices are not
similar
!
!
!
!
!
!
1 0
1 0
0 0
5/3 2/3
5 4
0 0
+
=
+
6=
0 3
0 3
0 0
4/3 7/3
2
1
0 0
since the zero matrix is similar only to itself.
Five.II.1.24 If N = P(T I)P1 then N = PT P1 P(I)P1 . The diagonal matrix
I commutes with anything, so P(I)P1 = PP1 (I) = I. Thus N = PT P1 I
and consequently N + I = PT P1 . (So not only are they similar, in fact they are
similar via the same P.)
Answers to Exercises
321
Five.II.2: Diagonalizability
Five.II.2.6 Because we chose the basis vectors arbitrarily, many different answers are
possible. However, here is one way to go; to diagonalize
!
4 2
T=
1 1
take it as the representation of a transformation with respect to the standard basis
~ 1,
~ 2 i such that
T = RepE2 ,E2 (t) and look for B = h
!
1 0
RepB,B (t) =
0 2
~ 1 ) = 1 and t(
~ 2 ) = 2 .
that is, such that t(
!
4 2 ~
4
~1
1 = 1
1 1
1
!
2 ~
~2
2 = 2
1
(4 x) b1 +
2 b2 = 0
((x2 5x + 6)/(4 x)) b2 = 0
(so
=2
, and 1 = 2).
1
1 1
1
1
If x = 3 then the first equation is b1 2b2 = 0 and so the associated vectors are
those whose first component is twice their second:
!
! !
!
2
4
2
2
2
~2 =
(so
=3
, and so 2 = 3).
1
1 1
1
1
322
This picture
t
R2wrt E2 R2wrt E2
T
idy
idy
t
R2wrt B R2wrt B
D
4
1
2
1
1
1
2
1
Comment. This equation matches the T = PSP1 definition under this renaming.
!
!1
!
!
2 0
1 2
1 2
4 2
1
T=
P=
P =
S=
0 3
1 1
1 1
1 1
Five.II.2.7
(a) Setting up
!
!
2 1
b1
=x
0 2
b2
b1
b2
(2 x) b1 +
b2 = 0
(2 x) b2 = 0
0 2
0
0
0
Following the other possibility leads to a first equation of 4b1 + b2 = 0 and so
the vectors associated with this solution have a second component that is four
times their first component.
!
!
!
!
2 1
b1
b1
1
~2 =
=2
0 2
4b1
4b1
4
The diagonalization is this.
!
!
1 1
2 1
1
0 4
0 2
0
1
4
!1
=
2 0
0 2
Answers to Exercises
323
gives that x = 5, associated with vectors whose second component is zero and
whose first component is free.
!
1
~1 =
0
The x = 1 possibility gives a first equation of 4b1 + 4b2 = 0 and so the associated
vectors have a second component that is the negative of their first component.
!
1
~1 =
1
We thus have this diagonalization.
!
!
1 1
5 4
1
0 1
0 1
0
1
1
!1
=
p p
d1 0
d1 0
.
..
0
= 0 ...
dn
Five.II.2.9 These two are not similar !
!
0 0
1 0
0 0
0 1
because each is alone in its similarity class.
For the second half, these
!
2 0
3
0 3
0
0
2
5
0
0
1
dp
n
~ 1,
~ 2 i to h
~ 2,
~ 1 i. (Quesare similar via the matrix that changes bases from h
tion. Are two diagonal matrices similar if and only if their diagonal entries are
permutations of each others?)
Five.II.2.10 Contrast these two.
2
0
0
1
2
0
0
0
a1,1
0
1/a1,1
0
a2,2
1/a2,2
0
0
..
..
.
.
an,n
1/an,n
324
3
0
3
1
1
0
1
1
!1
=
3
0
0
1
(b) It is a coincidence, in the sense that if T = PSP1 then T need not equal
P1 SP. Even in the case of a diagonal matrix D, the condition that D = PT P1
does not imply that D equals P1 T P. The matrices from Example 2.2 show this.
!
!
!
!
!1
!
1 2
4 2
6 0
6 0
1 2
6 12
=
=
1 1
1 1
5 1
5 1
1 1
6 11
Five.II.2.13 The columns of the matrix are the vectors associated with the xs. The
exact choice, and the order of the choice was arbitrary. We could, for instance, get
a different matrix by swapping the two columns.
Five.II.2.14 Diagonalizing and then taking
!k
1 1
3 1
=
3 4
4 2
!1
!
1 1
1 1
1
Five.II.2.15 (a)
0 1
0 0
0
!1
!
!
1 1
0 1
1 1
(b)
=
0 1
1 0
0 1
Answers to Exercises
325
Five.II.2.17 If
!
c
P1 =
1
1
P
0
then
1
P
0
c
1
c
1
cp + q
cr + s
a
0
0
b
a
0
!
0
P
b
a
0
0
b
ap
br
so
p
r
q
s
p
r
1
0
aq
bs
p
r
!
q
s
The 1, 1 entries show that a = 1 and the 1, 2 entries then show that pc = 0. Since
c 6= 0 this means that p = 0. The 2, 1 entries show that b = 1 and the 2, 2 entries
then show that rc = 0. Since c 6= 0 this means that r = 0. But if both p and r are
0 then P is not invertible.
Five.II.2.18
a
c
(a) Using the formula for the inverse of a 22 matrix gives this.
!
!
!
1
b
1 2
d b
ad bc
d
2 1
c a
1
=
ad bc
ad + 2bd 2ac bc
cd + 2d2 2c2 cd
ab 2b2 + 2a2 + ab
bc 2bd + 2ac + ad
Now pick scalars a, . . . , d so that adbc 6= 0 and 2d2 2c2 = 0 and 2a2 2b2 = 0.
For example, these will do.
!
!
!
!
1 6 0
1
1 1
1 1
1 2
2
2 0 2
1 1
1 1
2 1
(b) As above,
!
a b
x
c d
y
!
1
y
ad bc
z
1
=
ad bc
d
c
b
a
abx b2 y + a2 y + abz
bcx bdy + acy + adz
326
(x z)2 4(y)(y)
y 6= 0
2y
(as above, if x, y, z R then this discriminant is positive so a symmetric, real,
22 matrix is similar to a real diagonal matrix).
For a check we try x= 1, y = 2, z = 1.
0 0 + 16
0 0 + 16
b=
= 1
d=
= 1
4
4
Note that not all four choices (b, d) = (+1, +1), . . . , (1, 1) satisfy ad bc 6= 0.
d=
(x z)
(a) This
10 x
9
0=
= (10 x)(2 x) (36)
4
2 x
(c) x2 21 = 0; 1 = 21, 2 = 21
(d) x2 = 0; 1 = 0
(e) x2 2x + 1 = 0; 1 = 1
Five.II.3.21 (a) The characteristic equation is (3 x)(1 x) = 0. Its roots, the
eigenvalues, are 1 = 3 and 2 = 1. For the eigenvectors we consider this
equation.
!
!
!
3x
0
b1
0
=
8
1 x
b2
0
For the eigenvector associated with 1 = 3, we consider the resulting linear
system.
0 b1 + 0 b2 = 0
8 b1 + 4 b2 = 0
Answers to Exercises
327
The eigenspace is the set of vectors whose second component is twice the first
component.
!
!
!
!
b2 /2
3 0
b2 /2
b2 /2
{
| b2 C}
=3
b2
8 1
b2
b2
(Here, the parameter is b2 only because that is the variable that is free in the
above system.) Hence, this is an eigenvector associated with the eigenvalue 3.
!
1
2
Finding an eigenvector associated with 2 = 1 is similar. This system
4 b1 + 0 b2 = 0
8 b1 + 0 b2 = 0
leads to the set of vectors whose first component is zero.
!
!
!
!
0
3 0
0
0
{
| b2 C }
= 1
b2
8 1
b2
b2
And so this is an eigenvector associated with 2 .
!
0
1
(b) The characteristic equation is
3 x 2
0=
= x2 3x + 2 = (x 2)(x 1)
1 x
and so the eigenvalues are 1 = 2 and 2 = 1. To find eigenvectors, consider this
system.
(3 x) b1 + 2 b2 = 0
1 b1 x b2 = 0
For 1 = 2 we get
1 b1 + 2 b2 = 0
1 b1 2 b2 = 0
leading to this eigenspace and eigenvector.
!
!
2b2
2
{
| b2 C}
b2
1
For 2 = 1 the system is
2 b1 + 2 b2 = 0
1 b1 1 b2 = 0
leading to this.
b2
{
b2
!
| b2 C }
1
1
328
1
= x2 + 1
2 x
(5/(2i))1 +2
(2 i) b1 1 b2 = 0
0=0
(2 i) =
2 i
2 i 2 i
2 i
to see that it gives a 0 = 0 equation.) These are the resulting eigenspace and
eigenvector.
!
!
(1/(2 i))b2
1/(2 i)
{
| b2 C }
b2
1
For 2 = i the system
(2 + i) b1
1 b2 = 0
5 b1 (2 + i) b2 = 0
(5/(2+i))1 +2
(2 + i) b1 1 b2 = 0
0=0
leads to this.
(1/(2 + i))b2
{
b2
!
| b2 C }
1/(2 + i)
1
b2 +
x b2 +
b3 = 0
b3 = 0
(1 x) b3 = 0
Answers to Exercises
329
b2
{ b2 | b2 C}
0
So these are eigenvectors associated with
1
0
0
Five.II.3.24
1 = 1 and 2 = 0.
1
1
0
=0
=0
0 b3 = 0
leading to this.
0
b2
{ b2 + 0 | b2 , b3 C}
0
b3
0
1
1 , 0
0
1
330
and the eigenvalues are 1 = 4 and (by using the quadratic equation) 2 = 2 + 3
b2
=0
x b2 +
b3 = 0
4 b1 17 b2 + (8 x) b3 = 0
b2
=0
4 b2 + b3 = 0
4 b1 17 b2 + 4 b3 = 0
1 +3
4 b1 +
42 +3
b2
=0
4 b2 + b3 = 0
16 b2 + 4 b3 = 0
4 b1 +
b2
=0
4 b2 + b3 = 0
0=0
(1/16) b3
1
V4 = { (1/4) b3 | b2 C}
4
16
b3
(2 3) b1 +
b2
=0
(2 3) b2 +
b3 = 0
4 b1
17 b2 + (6 3) b3 = 0
(2 3) b1 +
b2
=0
(4/(2 3))1 +3
(2 3) b2 +
b3 = 0
+ (9 4 3) b2 + (6 3) b3 = 0
(the middle coefficient in the third equation equals the number (4/(2 3))
(2 3) b1 +
b2
=0
(2 3) b2 + b3 = 0
0=0
which leads to this eigenspace and eigenvector.
(1/(2 + 3)2 ) b3
(1/(2 + 3)2 )
(1/(2 + 3))
1
Answers to Exercises
331
(2 + 3) b1 +
b2
=0
b3 = 0
(2 + 3) b2 +
4 b1
17 b2 + (6 + 3) b3 = 0
(2 + 3) b1 +
b2
=0
(4/(2+ 3))1 +3
b3 = 0
(2 + 3) b2 +
(9 + 4 3) b2 + (6 + 3) b3 = 0
(2 + 3) b1 +
b2
=0
(2 + 3) b2 + b3 = 0
0=0
which gives this eigenspace and eigenvector.
(1/(2 + 3)2 ) b3
(1/(2 + 3)2 )
(1/(2 + 3))
1
Five.II.3.25 With respect to the natural basis B = h1, x, x2 i the matrix representation
is this.
5 6
2
RepB,B (t) = 0 1 8
1 0 2
Thus the characteristic equation
5x
6
2
0= 0
1 x
8 = (5 x)(1 x)(2 x) 48 2 (1 x)
1
0
2 x
is 0 = x3 +2x2 +15x36 = 1(x+4)(x3)2 . To find the associated eigenvectors,
consider this system.
(5 x) b1 +
b1
6b2 +
2b3 = 0
(1 x) b2
8b3 = 0
+ (2 x) b3 = 0
(1/9)1 +3
(2/9)2 +3
2 b3
V4 = { (8/3) b3 | b3 C }
b3
8/3
1
332
4 b2 8 b3 = 0
4 b2 8 b3 = 0
b1
5 b3 = 0
with this eigenspace and eigenvector.
5 b3
5
V3 = { 2 b3 | b3 C}
2
b3
1
!
!
!
!
0 0
2 3
1 0
2 1
Five.II.3.26 = 1,
and
, = 2,
, = 1,
0 1
1 0
1 0
1 0
Five.II.3.27 Fix the natural basis B = h1, x, x2 , x3 i. The maps action is 1 7 0, x 7 1,
x2 7 2x, and x3 7 3x2 and its representation is easy to compute.
0 1 0 0
0 0 2 0
T = RepB,B (d/dx) =
0 0 0 3
0 0 0 0 B,B
We find the eigenvalues with this computation.
x 1
0
0
0 x 2
0
0 = |T xI| =
= x4
0
0 x 3
0
0
0 x
Thus the map has the single eigenvalue = 0. To find the associated eigenvectors,
wesolve
0 1 0 0
b1
b1
0 0 2 0
b
b
2
2
=
b2 = 0, b3 = 0, b4 = 0
=0
0 0 0 3
b3
b 3
0 0 0 0 B,B b4 B
b4 B
to get this eigenspace.
b1
0
{ | b1 C } = { b1 + 0 x + 0 x2 + 0 x3 | b1 C } = { b1 | b1 C}
0
0 B
Five.II.3.28 The determinant of the triangular matrix T xI is the product down the
diagonal, and so it factors into the product of the terms ti,i x.
Five.II.3.29 Just expand the determinant of T xI.
a x
c
= (a x)(d x) bc = x2 + (a d) x + (ad bc)
b
d x
Answers to Exercises
333
Five.II.3.30 Any two representations of that transformation are similar, and similar
matrices have the same characteristic polynomial.
Five.II.3.31 It is not true. All of the eigenvalues of this matrix are 0.
!
0 1
0 0
Five.II.3.32 (a) Use = 1 and the identity map.
(b) Yes, use the transformation that multiplies all vectors by the scalar .
Five.II.3.33 If t(~v) = ~v then ~v 7 ~0 under the map t id.
Five.II.3.34 The characteristic equation
a x
b
0=
= (a x)(d x) bc
c
d x
simplifies to x2 + (a d) x + (ad bc). Checking that the values x = a + b and
x = a c satisfy the equation (under the a + b = c + d condition) is routine.
Five.II.3.35 Consider an eigenspace V . Any w
~ V is the image w
~ = ~v of some
~v V (namely, ~v = (1/) w
~ ). Thus, on V (which is a nontrivial subspace) the
action of t1 is t1 (~
w) = ~v = (1/) w
~ , and so 1/ is an eigenvalue of t1 .
Five.II.3.36 (a) We have (cT + dI)~v = cT~v + dI~v = c~v + d~v = (c + d) ~v.
(b) Suppose that S = PT P1 is diagonal. Then P(cT + dI)P1 = P(cT )P1 +
P(dI)P1 = cPT P1 + dI = cS + dI is also diagonal.
Five.II.3.37 The scalar is an eigenvalue if and only if the transformation t id is
singular. A transformation is singular if and only if it is not an isomorphism (that
is, a transformation is an isomorphism if and only if it is nonsingular).
Five.II.3.38 (a) Where the eigenvalue is associated with the eigenvector ~x then
Ak~x = A A~x = Ak1 ~x = Ak1~x = = k~x. (The full details require
induction on k.)
(b) The eigenvector associated with might not be an eigenvector associated with
.
Five.II.3.39 No. These are two same-sized, equal rank, matrices with different eigenvalues.
!
!
1 0
1 0
0 1
0 2
Five.II.3.40 The characteristic polynomial has an odd power and so has at least one
real root.
334
0 0
0
0 2 0
0 0 3
Five.II.3.42 We must show that it is one-to-one and onto, and that it respects the
operations of matrix addition and scalar multiplication.
To show that it is one-to-one, suppose that tP (T ) = tP (S), that is, suppose that
PT P1 = PSP1 , and note that multiplying both sides on the left by P1 and on
the right by P gives that T = S. To show that it is onto, consider S Mnn and
observe that S = tP (P1 SP).
The map tP preserves matrix addition since tP (T + S) = P(T + S)P1 =
(PT + PS)P1 = PT P1 + PSP1 = tP (T + S) follows from properties of matrix
multiplication and addition that we have seen. Scalar multiplication is similar: tP (cT ) = P(c T )P1 = c (PT P1 ) = c tP (T ).
Five.II.3.43 This is how the answer was given in the cited source. If the argument
of the characteristic function of A is set equal to c, adding the first (n 1) rows
(columns) to the nth row (column) yields a determinant whose nth row (column)
is zero. Thus c is a characteristic root of A.
Nilpotence
Five.III.1: Self-Composition
Five.III.1.9 For the zero transformation, no matter what the space, the chain of range
spaces is V {~0 } = {~0 } = and the chain of null spaces is {~0 } V = V = . For
the identity transformation the chains are V = V = V = and {~0 } = {~0 } = .
Five.III.1.10
0
cx2
a + bx + cx2 7
and any higher power is the same map. Thus, while R(t0 ) is the space of
quadratic polynomials with no linear term {p + rx2 | p, r C }, and R(t20 ) is
Answers to Exercises
335
the space of purely-quadratic polynomials {rx2 | r C}, this is where the chain
stabilizes R (t0 ) = { rx2 | n C }. As for null spaces, N (t0 ) is the space of
purely-linear quadratic polynomials {qx | q C }, and N (t20 ) is the space of
quadratic polynomials with no x2 term {p + qx | p, q C }, and this is the end
N (t0 ) = N (t20 ).
(b) The second power
!
!
!
a t1
0
0
t1
7
7
b
a
0
is the zero map. Consequently, the chain of range spaces
!
0
R2 {
| p C} {~0 } =
p
and the chain of null spaces
q
{~0 } {
0
!
| q C } R2 =
each has length two. The generalized range space is the trivial subspace and the
generalized null space is the entire space.
(c) Iterates of this map cycle around
t
2
2
2
a + bx + cx2 7
b + cx + ax2 7
c + ax + bx2 7
a + bx + cx2
and the chains of range spaces and null spaces are trivial.
P2 = P2 =
{~0 } = {~0 } =
Thus, obviously, generalized spaces are R (t2 ) = P2 and N (t2 ) = {~0 }.
(d) We have
a
a
a
a
b 7 a 7 a 7 a 7
c
b
a
a
and so the chain of range spaces
p
p
R3 { p | p, r C } { p | p C} =
p
r
and the chain of null spaces
0
0
{~0 } { 0 | r C } { q | q, r C } =
r
r
each has length two. The generalized spaces are the final ones shown above in
each chain.
Five.III.1.11 Each maps x 7 t(t(t(x))).
336
Five.III.1.12 Recall that if W is a subspace of V then we can enlarge any basis BW for
W to make a basis BV for V. From this the first sentence is immediate. The second
sentence is also not hard: W is the span of BW and if W is a proper subspace then
V is not the span of BW , and so BV must have at least one vector more than does
BW .
Five.III.1.13 It is both if and only if. A linear map is nonsingular if and only if it
preserves dimension, that is, if the dimension of its range equals the dimension of its
domain. With a transformation t : V V that means that the map is nonsingular
if and only if it is onto: R(t) = V (and thus R(t2 ) = V, etc).
Five.III.1.14 The null spaces form chains because because if ~v N (tj ) then tj (~v) = ~0
and tj+1 (~v) = t( tj (~v) ) = t(~0) = ~0 and so ~v N (tj+1 ).
Now, the further property for null spaces follows from that fact that it holds
for range spaces, along with the prior exercise. Because the dimension of R(tj )
plus the dimension of N (tj ) equals the dimension n of the starting space V, when
the dimensions of the range spaces stop decreasing, so do the dimensions of the
null spaces. The prior exercise shows that from this point k on, the containments
in the chain are not proper the null spaces are equal.
Five.III.1.15 (Many examples are correct but here is one.) An example is the shift
operator on triples of reals (x, y, z) 7 (0, x, y). The null space is all triples that
start with two zeros. The map stabilizes after three iterations.
Five.III.1.16 The differentiation operator d/dx : P1 P1 has the same range space as
null space. For an example of where they are disjoint except for the zero vector
consider an identity map, or any nonsingular map.
Five.III.2: Strings
Five.III.2.19 Three. It is at least three because `2 ( (1, 1, 1) ) = (0, 0, 1) 6= ~0. It is at
most three because (x, y, z) 7 (0, x, y) 7 (0, 0, x) 7 (0, 0, 0).
Five.III.2.20 (a) The domain has dimension four. The maps action is that any vector
~ 1 +c2
~ 2 +c3
~ 3 +c4
~ 4 goes to c1
~ 2 +c2 ~0+c3
~ 4 +c4 ~0 =
in the space c1
~
~
~ 2 and
c1 3 + c3 4 . The first application of the map sends two basis vectors
~
4 to zero, and therefore the null space has dimension two and the range space
has dimension two. With a second application, all four basis vectors go to zero
and so the null space of the second power has dimension four while the range
space of the second power has dimension zero. Thus the index of nilpotency is
Answers to Exercises
337
0 0 0 0
1 0 0 0
0 0 0 0
0 0 1 0
(b) The dimension of the domain of this map is six. For the first power the
dimension of the null space is four and the dimension of the range space is two.
For the second power the dimension of the null space is five and the dimension
of the range space is one. Then the third iteration results in a null space of
dimension six and a range space of dimension zero. The index of nilpotency is
three, and this is the canonical form.
0 0 0 0 0 0
1 0 0 0 0 0
0 1 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
0 0 0 0 0 0
(c) The dimension of the domain is three, and the index of nilpotency is three.
The first powers null space has dimension one and its range space has dimension
two. The second powers null space has dimension two and its range space has
dimension one. Finally, the third powers null space has dimension three and its
range space has dimension zero. Here
canonical form matrix.
is the
0 0 0
1 0 0
0 1 0
Five.III.2.21 By Lemma 1.4 the nullity has grown as large as possible by the n-th
iteration where n is the dimension of the domain. Thus, for the 22 matrices, we
need only check whether the square is the zero matrix. For the 33 matrices, we
need only check the cube.
(a) Yes, this matrix is nilpotent because its square is the zero matrix.
(b) No, the square is not the zero matrix.
!2
!
3 1
10 6
=
1 3
6 10
(c) Yes, the cube is the zero matrix. In fact, the square is zero.
(d) No, the third power is not the zero matrix.
1 1 4
206 86 304
8
26
3 0 1 = 26
5 2 7
438 180 634
338
0
0
1
0
0
0
0
0
2
0
0
0
calculations
Np
1
0
0
0
0
1
1
0
0
0
0
1
0
0
0
0
0
0
0
0
1
0
0
0
0
1
0
0
0
0
1
1
0
0
1
0
0
0
N (Np )
r
u
{
u v | r, u, v C }
u
v
r
s
{
u v | r, s, u, v C }
u
v
2
zero matrix
C5
gives these requirements of the string basis: three basis vectors map directly to zero,
one more basis vector maps to zero by a second application, and the final basis
vector maps to zero by a third application. Thus, the string basis has this form.
~ 2 7
~ 3 7 ~0
~ 1 7
~ 4 7 ~0
~ 5 7 ~0
0 0 0 0
1 0 0 0
0 1 0 0
0 0 0 0
0 0 0 0
Five.III.2.23
0 0
1 0
0 1
0 0
0 0
0
0
0
0
0 0 0
0 0 0
0 0 0
0 0 0
0 1 0
corresponding to the length three string and the length two string in the basis.
(b) Assume that N is the representation of the underlying map with respect to
the standard basis. Let B be the basis to which we will change. By the similarity
Answers to Exercises
339
diagram
n
C2wrt E2 C2wrt E2
N
idyP
idyP
n
C2wrt B C2wrt B
we have that the canonical form matrix isPNP1 where
1 0 0 0 0
0 1 0 1 0
P1 = RepB,E5 (id) = 1 0 1 0 0
0 0 1 1 1
0 0 0 0 1
and P is the inverse of that.
1 0 0
0
1 1 1 1
1 0 1 1
0 0 0
0
0
1
1
1
1/2
1/2
!
1/2
1/2
N (Np )
!
u
{
| u C}
u
2 zero matrix
C2
shows that any map represented by the matrix must act on the string basis in
this way
~ 1 7
~ 2 7 ~0
because the null space after one application has dimension one and exactly one
~ 2 , maps to zero. Therefore, this representation with respect to
basis vector,
~ 1,
~ 2 i is the canonical form.
h
!
0 0
1 0
(b) The calculation here is
p
1
0
0
2
0 0
u
{ | u, v C }
1 1
v
1 1
v
zero matrix
C3
340
~ 3 7 ~0
because the null space after one application of the map has dimension two
~ 2 and
~ 3 are both sent to zero and one more iteration results in the
u
1 1 1
| u C}
1
{
0
1
0
1
u
1 1 1
1 0 1
u
| u, v C }
2
{
0 0 0
v
1 0 1
u
3
zero matrix
C3
shows that any map represented by this basis must act on a string basis in this
way.
~ 1 7
~ 2 7
~ 3 7 ~0
0 0 0
1 0 0
0 1 0
Five.III.2.25 A couple of examples
!
!
!
0 0 0
a b c
0 0 0
0 0
a b
0 0
=
1 0 0 d e f = a b c
1 0
c d
a b
0 1 0
g h i
d e f
suggest that left multiplication by a block of subdiagonal ones shifts the rows of a
matrix downward.
Distinct blocks
0 0 0 0
a b c d
0 0 0 0
1 0 0 0 e f g h a b c d
0 0 0 0 i j k l 0 0 0 0
0 0 1 0
m n o p
i j k l
act to shift down distinct parts of the matrix.
Right multiplication does an analogous thing to columns. See Exercise 19.
Five.III.2.26 Yes. Generalize the last sentence in Example 2.10. As to the index, that
same last sentence shows that the index of the new matrix is less than or equal to
and reversing the roles of the two matrices gives inequality in the
the index of N,
other direction.
Answers to Exercises
341
Another answer to this question is to show that a matrix is nilpotent if and only
if any associated map is nilpotent, and with the same index. Then, because similar
matrices represent the same map, the conclusion follows. This is Exercise 32 below.
Five.III.2.27 Observe that a canonical form nilpotent matrix has only zero eigenvalues;
e.g., the determinant of this lower-triangular matrix
x 0
0
1 x 0
0
1 x
is (x)3 , the only root of which is zero. But similar matrices have the same
eigenvalues and every nilpotent matrix is similar to one in canonical form.
Another way to see this is to observe that a nilpotent matrix sends all vectors
to zero after some number of iterations, but that conflicts with an action on an
eigenspace ~v 7 ~v unless is zero.
Five.III.2.28 No, by Lemma 1.4 for a map on a two-dimensional space, the nullity has
grown as large as possible by the second iteration.
Five.III.2.29 The index of nilpotency of a transformation can be zero only when the
vector starting the string must be ~0, that is, only when V is a trivial space.
Five.III.2.30 (a) Any member w
~ of the span is a linear combination w
~ = c0
k1
~v + c1 t(~v) + + ck1 t
(~v). But then, by the linearity of the map,
t(~
w) = c0 t(~v) + c1 t2 (~v) + + ck2 tk1 (~v) + ck1 ~0 is also in the span.
(b) The operation in the prior item, when iterated k times, will result in a linear
combination of zeros.
(c) If ~v = ~0 then the set is empty and so is linearly independent by definition.
Otherwise write c1~v + + ck1 tk1 (~v) = ~0 and apply tk1 to both sides. The
right side gives ~0 while the left side gives c1 tk1 (~v); conclude that c1 = 0.
Continue in this way by applying tk2 to both sides, etc.
(d) Of course, t acts on the span by acting on this basis as a single, k-long, t-string.
0 0 0 0 ... 0 0
1 0 0 0 . . . 0 0
0 1 0 0 . . . 0 0
0 0 1 0
0 0
..
.
0 0 0 0
1 0
342
and apply t.
~~ ) + c
~0 = c1,1
~ 1 + c1,0 t(
~ 1 ) + + c1,h1 1 th1 (
~
1
1,h1 0
~ 2 + + ci,hi 1 thi (~i ) + ci,hi~0
+ c2,1
Conclude that the coefficients c1,1 , . . . , c1,hi 1 , c2,1 , . . . , ci,hi 1 are all zero as
is a basis. Substitute back into the first displayed equation to conclude that
BC
the remaining coefficients are zero also.
Five.III.2.32 For any basis B, a transformation n is nilpotent if and only if N =
RepB,B (n) is a nilpotent matrix. This is because only the zero matrix represents
the zero map and so nj is the zero map if and only if Nj is the zero matrix.
Five.III.2.33 It can be of any size greater than or equal to one. To have a transformation that is nilpotent of index four, whose cube has range space of dimension k,
take a vector space, a basis for that space, and a transformation that acts on that
basis in this way.
~1
~ 2 7
~ 3 7
~ 4 7 ~0
~5
~ 6 7
~ 7 7
~ 8 7 ~0
..
.
~ 4k3 7
~ 4k2 7
~ 4k1 7
~ 4k 7 ~0
..
.
possibly other, shorter, strings
So the dimension of the range space of T 3 can be as large as desired. The smallest
that it can be is one there must be at least one string or else the maps index of
nilpotency would not be four.
Five.III.2.34 These two have only zero for eigenvalues
!
!
0 0
0 0
0 0
1 0
but are not similar (they have different canonical representatives, namely, themselves).
Five.III.2.35 It is onto by Lemma 1.4. It need not be the identity: consider this map
t : R2 R2 .
!
!
x
y
t
7
y
x
For that map R (t) = R2 , and t is not the identity.
Five.III.2.36 A simple reordering of the string basis will do. For instance, a map that
is associated with this string basis
~ 1 7
~ 2 7 ~0
Answers to Exercises
343
~ 1,
~ 2 i by this matrix
is represented with respect to B = h
!
0 0
1 0
~ 2,
~ 1 i in this way.
but is represented with respect to B = h
!
0 1
0 0
Five.III.2.37 Let t : V V be the transformation. If rank(t) = nullity(t) then the
equation rank(t) + nullity(t) = dim(V) shows that dim(V) is even.
Five.III.2.38 For the matrices to be nilpotent they must be square. For them to
commute they must be the same size. Thus their product and sum are defined.
Call the matrices A and B. To see that AB is nilpotent, multiply (AB)2 =
ABAB = AABB = A2 B2 , and (AB)3 = A3 B3 , etc., and, as A is nilpotent, that
product is eventually zero.
The sum is similar; use the Binomial Theorem.
Jordan Form
Five.IV.1: Polynomials of Maps and Matrices
Five.IV.1.13 The Cayley-Hamilton Theorem Theorem 1.8 says that the minimal
polynomial must contain the same linear factors as the characteristic polynomial,
although possibly of lower degree but not of zero degree.
(a) The possibilities are m1 (x) = x 3, m2 (x) = (x 3)2 , m3 (x) = (x 3)3 , and
m4 (x) = (x 3)4 . The first is a degree one polynomial, the second is degree two,
the third is degree three, and the fourth is degree four.
(b) The possibilities are m1 (x) = (x + 1)(x 4), m2 (x) = (x + 1)2 (x 4), and
m3 (x) = (x + 1)3 (x 4). The first is a quadratic polynomial, that is, it has degree
two. The second has degree three, and the third has degree four.
(c) We have m1 (x) = (x2)(x5), m2 (x) = (x2)2 (x5), m3 (x) = (x2)(x5)2 ,
and m4 (x) = (x 2)2 (x 5)2 . They are polynomials of degree two, three, three,
and four.
344
3x
0
0
T xI = 1
3x
0
0
0
4x
the characteristic polynomial is easy c(x) = |T xI| = (3 x)2 (4 x) = 1
(x 3)2 (x 4). There are only two possibilities for the minimal polynomial,
m1 (x) = (x 3)(x 4) and m2 (x) = (x 3)2 (x 4). (Note that the characteristic
polynomial has a negative sign but the minimal polynomial does not since it
must have a leading coefficient of one). Because m1 (T ) is not the zero matrix
0 0 0
1 0 0
0 0 0
(T 3I)(T 4I) = 1 0 0 1 1 0 = 1 0 0
0 0 1
0
0 0
0 0 0
the minimal polynomial is m(x) = m2 (x).
(T 3I)2 (T 4I) = (T 3I) (T 3I)(T 4I)
0 0 0
0 0 0
0 0 0
= 1 0 0 1 0 0 = 0 0 0
0 0 1
0 0 0
0 0 0
(b) As in the prior item, the fact that the matrix is triangular makes computation
of the characteristic polynomial
easy.
3 x
0
0
c(x) = |T xI| = 1
3x
0 = (3 x)3 = 1 (x 3)3
0
0
3 x
There are three possibilities for the minimal polynomial m1 (x) = (x3), m2 (x) =
(x 3)2 , and m3 (x) = (x 3)3 . We settle the question by computing m1 (T )
0 0 0
T 3I = 1 0 0
0 0 0
and m2 (T ).
0 0 0
0 0 0
0 0 0
(T 3I)2 = 1 0 0 1 0 0 = 0 0 0
0 0 0
0 0 0
0 0 0
Because m2 (T ) is the zero matrix, m2 (x) is the minimal polynomial.
Answers to Exercises
345
0 0 0
T 3I = 1 0 0
0 1 0
and m2 (T )
0 0 0
0 0 0
0 0 0
(T 3I)2 = 1 0 0 1 0 0 = 0 0 0
0 1 0
0 1 0
1 0 0
and m3 (T ).
0 0 0
0 0 0
0 0 0
0 0 1
4 0 1
0 0 4
(T 2I)(T 6I) = 0 4 2 0 0 2 = 0 0 0
0 0 0
0 0 4
0 0 0
It therefore must be that m(x) = m2 (x) = (x 2)2 (x 6). Here is a verification.
(T 2I)2 (T 6I) = (T 2I) (T 2I)(T 6I)
0 0 1
0 0 4
0 0 0
= 0 4 2 0 0 0 = 0 0 0
0 0 0
0 0 0
0 0 0
(e) The characteristic
polynomial is
2 x
2
1
c(x) = |T xI| = 0
6x
2 = (2 x)2 (6 x) = (x 2)2 (x 6)
0
0
2 x
346
(T 2I)(T 6I) = 0
0
2
4
0
1
4 2
2 0 0
0
0 0
1
0
2 = 0
4
0
0
0
0
0
0
1 x
0
c(x) = |T xI| = 0
3
1
4
0
3x
0
4 1 x
9
4
5
4
0
0
0
2x
1
0
0
0 = (x 3)3 (x + 1)2
1
4 x
Here are the possibilities for the minimal polynomial, listed here by ascending
degree: m1 (x) = (x 3)(x + 1), m1 (x) = (x 3)2 (x + 1), m1 (x) = (x 3)(x + 1)2 ,
m1 (x) = (x 3)3 (x + 1), m1 (x) = (x 3)2 (x + 1)2 , and m1 (x) = (x 3)3 (x + 1)2 .
The first one doesnt pan out
4
0
(T 3I)(T + 1I) = 0
3
1
0
0
= 0
4
4
4
0
4
9
5
0
0
4
4
4
0
0
0
1
1
0
0
0
4
4
0 0
0 0
0 0
0 4
0 4
0
0
0
0
0 0
1 3
1
1
0
0
4
4
4
4
4
9
5
0
0
0
4
4
0 0
0 0
0 0
3 1
1 5
Answers to Exercises
347
0
4 4
0
0
0
0
0
0
0
0 0
= 0 4 4 0
0 0
3 9 4 1 1 4
4
1
5
4
1
1
0 0 0 0 0
0 0 0 0 0
= 0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
0
0
0
4
4
0 0
0 0
0 0
0 4
0 4
0
0
4
4
1
0
1
1
3
3
i)) (x (
i))
x 1 = (1 x) (x ( +
2
2
2
2
0 x
As the roots are distinct, the characteristic polynomial equals the minimal polynomial.
Five.IV.1.16 We know that Pn is a dimension n + 1 space and that the differentiation operator is nilpotent of index n + 1 (for instance, taking n = 3, P3 =
{ c3 x3 + c2 x2 + c1 x + c0 | c3 , . . . , c0 C } and the fourth derivative of a cubic is the
zero polynomial). Represent this operator using the canonical form for nilpotent
transformations.
0 0 0 ...
0
1 0 0
0
0 1 0
..
.
0 0 0
1 0
This is an (n + 1)(n + 1) matrix with an easy characteristic polynomial, c(x) =
xn+1 . (Remark: this matrix is RepB,B (d/dx) where B = hxn , nxn1 , n(n
1)xn2 , . . . , n!i.) To find the minimal polynomial as in Example 1.12 we consider
the powers of T 0I = T . But, of course, the first power of T that is the zero matrix
is the power n + 1. So the minimal polynomial is also xn+1 .
Five.IV.1.17 Call the matrix T and suppose that it is nn. Because T is triangular,
and so T xI is triangular, the characteristic polynomial is c(x) = (x )n . To see
348
0 0 0 ... 0
1 0 0 . . . 0
0 1 0
..
.
0 0 ... 1 0
Recognize it as the canonical form for a transformation that is nilpotent of degree n;
the power (T I)j is zero first when j is n.
Five.IV.1.18 The n = 3 case provides a hint. A natural basis for P3 is B = h1, x, x2 , x3 i.
The action of the transformation is
1 7 1
x 7 x + 1
x2 7 x2 + 2x + 1
x3 7 x3 + 3x2 + 3x + 1
1 1 1 1
0 1 2 3
0 0 1 3
0 0 0 1
Because it is triangular, the fact that the characteristic polynomial is c(x) = (x1)4
is clear. For the minimal polynomial, the candidates are m1 (x) = (x 1),
0 1 1 1
0 0 2 3
T 1I =
0 0 0 3
0 0 0 0
m2 (x) = (x 1)2 ,
0
0
(T 1I)2 =
0
0
0
0
0
0
2
0
0
0
6
6
0
0
0
0
(T 1I)3 =
0
0
0
0
0
0
0
0
0
0
6
0
0
0
m3 (x) = (x 1)3 ,
and m4 (x) = (x 1)4 . Because m1 , m2 , and m3 are not right, m4 must be right,
as is easily verified.
In the case of a general n, the representation is an upper triangular matrix with
ones on the diagonal. Thus the characteristic polynomial is c(x) = (x 1)n+1 . One
way to verify that the minimal polynomial equals the characteristic polynomial is
Answers to Exercises
349
argue something like this: say that an upper triangular matrix is 0-upper triangular
if there are nonzero entries on the diagonal, that it is 1-upper triangular if the
diagonal contains only zeroes and there are nonzero entries just above the diagonal,
etc. As the above example illustrates, an induction argument will show that, where
T has only nonnegative entries, T j is j-upper triangular.
Five.IV.1.19 The map twice is the same as the map once: = , that is, 2 =
and so the minimal polynomial is of degree at most two since m(x) = x2 x will
do. The fact that no linear polynomial will do follows from applying the maps on
the left and right side of c1 + c0 id = z (where z is the zero map) to these two
vectors.
0
1
0
0
1
0
Thus the minimal polynomial is m.
Five.IV.1.20 This is one answer.
1
0
0
0
0
0
0
a
c
b
d
+
ac + cd bc + d2
ac + cd ad + d2
0
ad bc
and just check each entry sum to see that the result is the zero matrix.
Five.IV.1.23 By the Cayley-Hamilton theorem the degree of the minimal polynomial
is less than or equal to the degree of the characteristic polynomial, n. Example 1.6
shows that n can happen.
Five.IV.1.24 Let the linear transformation be t : V V. If t is nilpotent then there is
an n such that tn is the zero map, so t satisfies the polynomial p(x) = xn = (x0)n .
By Lemma 1.10 the minimal polynomial of t divides p, so the minimal polynomial
has only zero for a root. By Cayley-Hamilton, Theorem 1.8, the characteristic
polynomial has only zero for a root. Thus the only eigenvalue of t is zero.
350
t1,1
0
t2,2
0
T =
..
tn,n
the characteristic polynomial is (t1,1 x)(t2,2 x) (tn,n x). Of course, some of
those factors may be repeated, e.g., the matrix might have t1,1 = t2,2 . For instance,
the characteristic polynomial of
3 0 0
D = 0 3 0
0 0 1
is (3 x)2 (1 x) = 1 (x 3)2 (x 1).
To form the minimal polynomial, take the terms x ti,i , throw out repeats, and
multiply them together. For instance, the minimal polynomial of D is (x 3)(x 1).
To check this, note first that Theorem 1.8, the Cayley-Hamilton theorem, requires
that each linear factor in the characteristic polynomial appears at least once in the
minimal polynomial. One way to check the other direction that in the case of a
diagonal matrix, each linear factor need appear at most once is to use a matrix
argument. A diagonal matrix, multiplying from the left, rescales rows by the entry
on the diagonal. But in a product (T t1,1 I) , even without any repeat factors,
every row is zero in at least one of the factors.
For instance, in the product
0 0 0
2 0 0
1 0 0
Answers to Exercises
351
because the first and second rows of the first matrix D 3I are zero, the entire
product will have a first row and second row that are zero. And because the third
row of the middle matrix D 1I is zero, the entire product has a third row of zero.
Five.IV.1.28 This subsection starts with the observation that the powers of a linear
transformation cannot climb forever without a repeat, that is, that for some
power n there is a linear relationship cn tn + + c1 t + c0 id = z where z is
the zero transformation. The definition of projection is that for such a map one
linear relationship is quadratic, t2 t = z. To finish, we need only consider whether
this relationship might not be minimal, that is, are there projections for which the
minimal polynomial is constant or linear?
For the minimal polynomial to be constant, the map would have to satisfy that
c0 id = z, where c0 = 1 since the leading coefficient of a minimal polynomial is
1. This is only satisfied by the zero transformation on a trivial space. This is a
projection, but not an interesting one.
For the minimal polynomial of a transformation to be linear would give c1 t +
c0 id = z where c1 = 1. This equation gives t = c0 id. Coupling it with the
requirement that t2 = t gives t2 = (c0 )2 id = c0 id, which gives that c0 = 0
and t is the zero transformation or that c0 = 1 and t is the identity.
Thus, except in the cases where the projection is a zero map or an identity map,
the minimal polynomial is m(x) = x2 x.
Five.IV.1.29 (a) This is a property of functions in general, not just of linear
functions. Suppose that f and g are one-to-one functions such that f g is
defined. Let f g(x1 ) = f g(x2 ), so that f(g(x1 )) = f(g(x2 )). Because f is
one-to-one this implies that g(x1 ) = g(x2 ). Because g is also one-to-one, this in
turn implies that x1 = x2 . Thus, in summary, f g(x1 ) = f g(x2 ) implies that
x1 = x2 and so f g is one-to-one.
(b) If the linear map h is not one-to-one then there are unequal vectors ~v1 ,
~v2 that map to the same value h(~v1 ) = h(~v2 ). Because h is linear, we have
~0 = h(~v1 ) h(~v2 ) = h(~v1 ~v2 ) and so ~v1 ~v2 is a nonzero vector from the
domain that h maps to the zero vector of the codomain (~v1 ~v2 does not equal
the zero vector of the domain because ~v1 does not equal ~v2 ).
(c) The minimal polynomial m(t) sends every vector in the domain to zero and so
it is not one-to-one (except in a trivial space, which we ignore). By the first item
of this question, since the composition m(t) is not one-to-one, at least one of the
components t i is not one-to-one. By the second item, t i has a nontrivial
null space. Because (t i )(~v) = ~0 holds if and only if t(~v) = i ~v, the prior
sentence gives that i is an eigenvalue (recall that the definition of eigenvalue
requires that the relationship hold for at least one nonzero ~v).
352
Answers to Exercises
353
(b) If T is not invertible then the constant term in its minimal polynomial is zero.
Thus,
T n + + m1 T = (T n1 + + m1 I)T = T (T n1 + + m1 I)
is the zero matrix.
Five.IV.1.33 (a) For the inductive step, assume that Lemma 1.7 is true for polynomials of degree i, . . . , k 1 and consider a polynomial f(x) of degree k. Factor f(x) = k(x 1 )q1 (x z )qz and let k(x 1 )q1 1 (x z )qz be
cn1 xn1 + + c1 x + c0 . Substitute:
k(t 1 )q1 (t z )qz (~v) = (t 1 ) (t 1 )q1 (t z )qz (~v)
= (t 1 ) (cn1 tn1 (~v) + + c0~v)
= f(t)(~v)
(the second equality follows from the inductive hypothesis and the third from
the linearity of t).
(b) One example is to consider the squaring map s : R R given by s(x) = x2 . It
is nonlinear. The action defined by the polynomial f(t) = t2 1 changes s to
f(s) = s2 1, which is this map.
s2 1
x 7 s s(x) 1 = x4 1
Observe that this map differs from the map (s 1) (s + 1); for instance, the
first map takes x = 5 to 624 while the second one takes x = 5 to 675.
Five.IV.1.34 Yes. Expand down the last column to check that xn + mn1 xn1 +
+ m1 x + m0 is plus or minus the determinant of this.
x
0
0
m0
0 1x
0
m1
0
1x
m2
0
..
.
1 x mn1
1/2 1/2
1/4 1/4
2
1
1
4
1
1
2
2
354
Five.IV.2.19 (a) The characteristic polynomial is c(x) = (x 3)2 and the minimal
polynomial is the same.
(b) The characteristic polynomial is c(x) = (x + 1)2 . The minimal polynomial is
m(x) = x + 1.
(c) The characteristic polynomial is c(x) = (x + (1/2))(x 2)2 and the minimal
polynomial is the same.
(d) The characteristic polynomial is c(x) = (x 3)3 The minimal polynomial is
the same.
(e) The characteristic polynomial is c(x) = (x 3)4 . The minimal polynomial is
m(x) = (x 3)2 .
(f) The characteristic polynomial is c(x) = (x + 4)2 (x 4)2 and the minimal
polynomial is the same.
(g) The characteristic polynomial is c(x) = (x 2)2 (x 3)(x 5) and the minimal
polynomial is m(x) = (x 2)(x 3)(x 5).
(h) The characteristic polynomial is c(x) = (x 2)2 (x 3)(x 5) and the minimal
polynomial is the same.
Five.IV.2.20 (a) The transformation t3 is nilpotent (that is, N (t3) is the entire
~ 1 7
~ 2 7
~ 3 7
~ 4 7 ~0
space) and it acts on a string basis via two strings,
~
~
and 5 7 0. Consequently, t 3 can be represented in this canonical form.
0
1
N3 = 0
0
0
0
0
1
0
0
0
0
0
1
0
3
1
J3 = N3 + 3I = 0
0
0
0
0
0
0
0
0
0
0
0
form matrix.
0 0 0 0
3 0 0 0
1 3 0 0
0 1 3 0
0 0 0 3
Answers to Exercises
355
form is this.
1 0 0 0 0
0 2 0 0 0
0 1 2 0 0
0 0 0 2 0
0 0 0 1 2
Five.IV.2.21 For each, because many choices of basis are possible, many other answers
are possible. Of course, the calculation to check if an answer gives that PT P1 is in
Jordan form is the arbiter of whats correct.
(a) Here is the arrow diagram.
t
C3wrt E3 C3wrt E3
T
idyP
idyP
t
C3wrt B C3wrt B
J
The matrix to move from the lower left to the upper left is
1
1
1
P = RepE3 ,B (id)
= RepB,E3 (id) = 1
2
this.
2
0
0
1
0
The matrix P to move from the upper right to the lower right is the inverse of
P1 .
(b) We want this matrix and its inverse.
1 0 3
P1 = 0 1 4
0 2 0
(c) The concatenation of these bases for the generalized null spaces will do for the
basis for the entire space.
1
1
1
0
1
0 0
1 0 1
B1 = h 0 , 1i
B3 = h1 , 0 , 1 i
1 0
0 2 2
0
1
0
2
0
The change of basis matrices are this one and its inverse.
1 1 1
0 1
0
0
1
0 1
1
P = 0 1 1 0
1
1
0
0 2 2
0
1
0
2
0
356
10 4
2y/5
1
1
{
| y C}
25 10
y
0 0
2
C2
2
0 0
(Thus, this transformation is nilpotent: N (t 0) is the entire space). From the
~ 1 7
~ 2 7 ~0. This is the
nullities we know that ts action on a string basis is
canonical form matrix for the action of t 0 on N (t 0) = C2
!
0 0
N0 =
1 0
and this is the Jordan form of the matrix.
J0 = N0 + 0 I =
0
1
0
0
Note that if a matrix is nilpotent then its canonical form equals its Jordan form.
We can find such a string basis using the techniques of the prior section.
!
!
1
10
B=h
,
i
0
25
We took the first basis vector so that it is in the null space of t2 but is not in the
null space of t. The second basis vector is the image of the first under t.
(b) The characteristic polynomial of this matrix is c(x) = (x + 1)2 , so it is a
single-eigenvalue matrix. (That is, the generalized null space of t + 1 is the entire
Answers to Exercises
357
space.) We have
!
2y/3
N (t + 1) = {
| y C}
y
N ((t + 1)2 ) = C2
~
~
~
and so the action of t + 1 on an associated string
! basis is 1 7 2 7 0. Thus,
0 0
N1 =
1 0
the Jordan form of T is
!
1 0
J1 = N1 + 1 I =
1 1
and choosing vectors from the above null spaces gives this string basis (other
choices are possible).
!
!
1
6
B=h
,
i
0
9
(c) The characteristic polynomial c(x) = (1 x)(4 x)2 = 1 (x 1)(x 4)2 has
two roots and they are the eigenvalues 1 = 1 and 2 = 4.
We handle the two eigenvalues separately. For 1 , the calculation of the
powers of T 1I yields
0
N (t 1) = { y | y C }
0
and the null space of (t 1)2 is the same. Thus this set is the generalized null
space N (t 1). The nullities show that the action of the restriction of t 1 to
~ 1 7 ~0.
the generalized null space on a string basis is
A similar calculation
for 2 = 4 gives these null spaces.
0
yz
N (t 4) = { z | z C}
N ((t 4)2 ) = { y | y, z C }
z
z
(The null space of (t 4)3 is the same, as it must be because the power of the
term associated with 2 = 4 in the characteristic polynomial is two, and so the
restriction of t 2 to the generalized null space N (t 2) is nilpotent of index
at most two it takes at most two applications of t 2 for the null space to
settle down.) The pattern of how the nullities rise tells us that the action of t 4
~ 2 7
~ 3 7 ~0.
on an associated string basis for N (t 4) is
Putting the information for the two eigenvalues together gives the Jordan
form of the transformation t.
1 0 0
0 4 0
0 1 4
358
2 0 0
0 4 0
0 1 4
and a suitable basis is this.
1
0
1
_
B = B2 B4 = h1 , 1 , 1 i
1
1
1
(e) The characteristic polynomial of this matrix is c(x) = (2 x)3 = 1 (x 2)3 .
This matrix has only a single eigenvalue, = 2. By finding the powers of T 2I
we have
y
y (1/2)z
N (t 2) = { y | y C}
N ((t 2)2 ) = {
y
| y, z C}
0
z
and
N ((t 2)3 ) = C3
~ 1 7
~ 2 7
~ 3 7 ~0.
and so the action of t 2 on an associated string basis is
The Jordan form is this
1
0
0
2
1
0
2
Answers to Exercises
359
2y + z
N (t 1) = { y | y, z C }
N ((t 1)2 ) = C3
z
~ 1 7
~ 2 7 ~0
shows that the action of the nilpotent map t 1 on a string basis is
~ 3 7 ~0. Therefore the Jordan form is
and
1 0 0
J = 1 1 0
0 0 1
and an appropriate basis (a string
basis
associated
with t 1) is this.
0
2
1
B = h1 , 2 , 0i
0
2
1
(g) The characteristic polynomial is a bit large for by-hand calculation, but just
manageable c(x) = x4 24x3 + 216x2 864x + 1296 = (x 6)4 . This is a
single-eigenvalue
The
map,so the transformation t 6 is nilpotent.
null spaces
z w
x
z w
z w
N (t6) = {
N ((t6)2 ) = {
| z, w C}
| x, z, w C }
z
z
w
w
and
N ((t 6)3 ) = C4
~ 1 7
~ 2 7
and the nullities show that the action of t 6 on a string basis is
~ 3 7 ~0 and
~ 4 7 ~0. The Jordan form is
6 0 0 0
1 6 0 0
0 1 6 0
0 0 0 6
and finding a suitable string
basis
is routine.
0
2
3
1
0 1 3 1
B = h , , , i
0 1 6 1
1
2
3
0
360
~ 2 7 ~0
~ 4 7 ~0
In combination, that makes four possible Jordan forms, the two first actions, the
second and first, the first and second, and the two second actions.
2 0 0 0
2 0 0 0
2 0 0 0
2 0 0 0
1 2 0 0 0 2 0 0 1 2 0 0 0 2 0 0
0
0 1 0 0
0 1 0 0
0 1 0 0
0 1 0
0
0 0 1
0
0 1 1
0
0 1 1
0
0 0 1
~ 1 7 ~0.
Five.IV.2.24 The restriction of t + 2 to N (t + 2) can have only the action
The restriction of t 1 to N (t 1) could have any of these three actions on an
associated string basis.
~ 2 7
~ 3 7
~ 4 7 ~0
~ 2 7
~3
~2
7 ~0
7 ~0
~
~
~
4 7 0
3
7 ~0
~ 4 7 ~0
Taken together there are three possible Jordan forms, the one arising from the first
action by t 1 (along with the only action from t + 2), the one arising from the
second action, and the one arising from the third action.
2 0 0 0
2 0 0 0
2 0 0 0
0 1 0 0 0 1 0 0 0 1 0 0
0 1 1 0 0 1 1 0 0 0 1 0
0 0 1 1
0 0 0 1
0 0 0 1
~ 1 7 ~0.
Five.IV.2.25 The action of t + 1 on a string basis for N (t + 1) must be
Because of the power of x 2 in the minimal polynomial, a string basis for t 2
has length two and so the action of t 2 on N (t 2) must be of this form.
~ 2 7
~ 3 7 ~0
~
~
4 7 0
Therefore there is only one Jordan form that
1 0 0
0 2 0
0 1 2
0 0 0
is possible.
0
0
0
2
Answers to Exercises
361
Five.IV.2.26 There are two possible Jordan forms. The action of t + 1 on a string basis
~ 1 7 ~0. There are two actions for t 2 on a string basis
for N (t + 1) must be
for N (t 2) that are possible with this characteristic polynomial and minimal
polynomial.
~ 2 7
~ 3 7 ~0
~ 2 7
~ 3 7 ~0
~
~
~
~
~
4 7 5 7 0
4 7 0
~ 5 7 ~0
1 0 0 0 0
1 0 0 0 0
0 2 0 0 0
0 2 0 0 0
0 1 2 0 0
0 1 2 0 0
0 0 0 2 0
0 0 0 2 0
0 0 0 1 2
0 0 0 0 2
Five.IV.2.27
have
!1
1
0
1
0
1 1
1 0
!
=
0
0
0
1
362
1
1
!1
0
1
1
0
1 1
1 1
!
=
1 0
0 1
0 0 0 0
1 0 0 0
J=
0 1 0 0
0 0 1 0
Five.IV.2.29 Yes. Each has the characteristic polynomial (x + 1)2 . Calculations of
the powers of T1 + 1 I and T2 + 1 I gives these two.
!
!
y/2
0
N (t1 + 1) = {
| y C}
N (t2 + 1) = {
| y C}
y
y
(Of course, for each the null space of the square is the entire space.) The way that
the nullities rise shows that each is similar to!this Jordan form matrix
1
1
0
1
i
1
1
i
and so we get a description of the null space of t + i by solving this linear system.
ix y = 0
x + iy = 0
i1 +2
ix y = 0
0=0
(To change the relation ix = y so that the leading variable x is expressed in terms
of the free variable y, we can multiply both sides by i.)
Answers to Exercises
363
~ 2 7 ~0
3 0 0
3
1 3 0
0
0
0 4
0
Similarly there are two Jordan
0
0
0
3
0
0
4
0 0
3 0 0
4 0
0 4 0
1 4
0 0 4
3 0 0 0
3 0 0
1 3 0 0 1 3 0
0 0 3 0 0 0 3
0 0 1 3
0 0 0
0
0
0
3
364
9 0 0
3
1 9 0 = 1/6
0 0 4
0
2
0
0
2
(a) By eye, we see that the largest eigenvalue is 4. Sage gives this.
sage: def eigen(M,v,num_loops=10):
....:
for p in range(num_loops):
....:
v_normalized = (1/v.norm())*v
....:
v = M*v
....:
return v
....:
sage: M = matrix(RDF, [[1,5], [0,4]])
sage: v = vector(RDF, [1, 2])
sage: v = eigen(M,v)
sage: (M*v).dot_product(v)/v.dot_product(v)
4.00000147259
(b) A simple calculation shows that the largest eigenvalue is 2. Sage gives this.
sage: M = matrix(RDF, [[3,2], [-1,0]])
sage: v = vector(RDF, [1, 2])
sage: v = eigen(M,v)
sage: (M*v).dot_product(v)/v.dot_product(v)
2.00097741083
366
2
(b) Sage takes a few more iterations on this one. This makes use of the procedure
defined in the prior item.
sage: M = matrix(RDF, [[3,2], [-1,0]])
sage: v = vector(RDF, [1, 2])
sage: v,v_prior,dex = eigen_by_iter(M,v)
sage: (M*v).norm()/v.norm()
2.01585174302
sage: dex
6
eigen_by_iter
is defined above.
Sage does not return (use <Ctrl>-c to interrupt the computation). Adding some
error checking code to the routine
def eigen_by_iter(M, v, toler=0.01):
dex = 0
diff = 10
while abs(diff)>toler:
dex = dex+1
if dex>1000:
print "oops! probably in some loop: \nv=",v,"\nv_next=",v_next
v_next = M*v
if (v.norm()==0):
print "oops! v is zero"
return None
if (v_next.norm()==0):
print "oops! v_next is zero"
return None
v_normalized = (1/v.norm())*v
v_next_normalized = (1/v_next.norm())*v_next
diff = (v_next_normalized-v_normalized).norm()
v_prior = v_normalized
v = v_next_normalized
return v, v_prior, dex
gives this.
oops! probably in some loop:
v= (0.707106781187, -1.48029736617e-16, -0.707106781187)
v_next= (2.12132034356, -4.4408920985e-16, -2.12132034356)
oops! probably in some loop:
v= (-0.707106781187, 1.48029736617e-16, 0.707106781187)
v_next= (-2.12132034356, 4.4408920985e-16, 2.12132034356)
oops! probably in some loop:
v= (0.707106781187, -1.48029736617e-16, -0.707106781187)
v_next= (2.12132034356, -4.4408920985e-16, -2.12132034356)
So it is circling.
5 In theory, this method would produce 2 . In practice, however, rounding errors in
the computation introduce components in the direction of ~v1 , and so the method
will still produce 1 , although it may take somewhat longer than it would have
taken with a more fortunate choice of initial vector.
6 Instead of using ~vk = T~vk1 , use T 1~vk = ~vk1 .
368
0.89
0
0
0.89
.90
.10
!
.01
=
.99
.01
.10
!
.01
.10
!
!
.01
p
=
.10
r
0
0
So inside the park the population grows by about eleven percent while outside the
park the population grows by about fifty five percent.
3 The matrix equation
pn+1
rn+1
!
=
0.95
0.05
0.01
0.99
pn
rn
cn+1
.95
un+1 = .04
mn+1
.01
.06
.90
.04
cn
0
.10 un
.90
mn
3 We have this.
0
1/3
H=
1/3
1/3
0
0
1/2
1/2
1 1/2
0
0
0 1/2
0
0
(c) Page p3 is important, but it passes its importance on to only one page, p1 . So
that page receives a large boost.
Answers to Exercises
371
"
!n
!n #
1+ 5
1 5
1
F(n) =
2
2
5
As observed earlier, (1 + 5)/2 is larger than one while (1 + 5)/2 has absolute
value less than one.
sage: phi = (1+5^(0.5))/2
sage: psi = (1-5^(0.5))/2
sage: phi
1.61803398874989
sage: psi
-0.618033988749895
So the value of the expression is dominated by the first term. Solving 1000 =
So by the seventeenth power, the second term does not contribute enough to change
the roundoff. For the ten thousand and million calculations the situation is even
more extreme.
sage: b = ln(10000*5^(0.5))/ln(phi)
sage: b
20.8121638053112
sage: c = ln(1000000*5^(0.5))/ln(phi)
sage: c
30.3821077388746
372
f(n)
5 2
=
f(n 1) 1 0
f(n 2)
0 1
8
f(n 1)
0 f(n 2)
0
f(n 3)
(a) The solution of the homogeneous recurrence is f(n) = c1 2n + c2 3n . Substituting f(0) = 1 and f(1) = 1 gives this linear system.
c1 + c2 = 1
2c1 + 3c2 = 1
By eye we see that c1 = 2 and c2 = 1.
(b) The solution of the homogeneous recurrence is c1 2n + c2 (2)n . The initial
conditions give this linear system.
c1 + c2 = 0
2c1 2c2 = 1
The solution is c1 = 1/4, c2 = 1/4.
(c) The homogeneous recurrence has the solution f(n) = c1 (1)n + c2 2n + c3 4n .
With the initial conditions we get this linear system.
c1 + c2 + c3 = 1
c1 + 2c2 + 4c3 = 1
c1 + 4c2 + 16c3 = 3
Its solution is c1 = 1/3, c2 = 2/3, c3 = 0.
f(0)
f(1)
f 7
..
f(k 1)
Answers to Exercises
373
..
(a f1 + b f2 ) =
.
af1 (k 1) + bf2 (k 1)
f1 (0)
f2 (0)
..
..
= a
+ b
= a (f1 ) + b (f2 )
.
.
f1 (k 1)
f2 (k 1)
5 We use the hint to prove this.
an1 an2 an3
1
0
0
1
0 =
0
0
1
..
..
.
.
0
0
0
...
...
ank+1
0
..
.
...
ank
0
..
.
0
...
0
0
1
(1)k1 ank 1
0
0
1
..
..
..
.
.
.
0
0
0
...
1
(The matrix is square so the sign in front of is 1even ). Application of the
inductive hypothesis gives the desired result.
= (1)k1 ank 1
(1)k2 (k1 + an1 k2 + an2 k3 + + ank+1 0 )
6 This is a straightforward induction on n.
7 Sage says that we are safe.
sage: T64 = 18446744073709551615
sage: T64_days = T64/(60*60*24)
sage: T64_days
1229782938247303441/5760
sage: T64_years = T64_days/365.25
sage: T64_years
374
5.84542046090626e11
sage: age_of_universe = 13.8e9
sage: T64_years/age_of_universe
42.3581192819294