DeFranza Solution Manual
for
Introduction to
Linear Algebra with Applications
Jim DeFranza
Contents
1 Systems of Linear Equations and Matrices 1
Exercise Set 1.1 Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Exercise Set 1.2 Matrices and Elementary Row Operations . . . . . . . . . . . . . . . . . . . . . . . 7
Exercise Set 1.3 Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Exercise Set 1.4 The Inverse of a Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Exercise Set 1.5 Matrix Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Exercise Set 1.6 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Exercise Set 1.7 Elementary Matrices and LU Factorization . . . . . . . . . . . . . . . . . . . . . . 27
Exercise Set 1.8 Applications of Systems of Linear Equations . . . . . . . . . . . . . . . . . . . . . . 32
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2 Linear Combinations and Linear Independence 42
Exercise Set 2.1 Vectors in R
n
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exercise Set 2.2 Linear Combinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Exercise Set 2.3 Linear Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3 Vector Spaces 60
Exercise Set 3.1 Definition of a Vector Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Exercise Set 3.2 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Exercise Set 3.3 Basis and Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Exercise Set 3.4 Coordinates and Change of Basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Exercise Set 3.5 Application: Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4 Linear Transformations 88
Exercise Set 4.1 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Exercise Set 4.2 The Null Space and Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Exercise Set 4.3 Isomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Exercise Set 4.4 Matrix Transformation of a Linear Transformation . . . . . . . . . . . . . . . . . . 101
Exercise Set 4.5 Similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Exercise Set 4.6 Application: Computer Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5 Eigenvalues and Eigenvectors 118
Exercise Set 5.1 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Exercise Set 5.2 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Exercise Set 5.3 Application: Systems of Linear Differential Equations . . . . . . . . . . . . . . . . 128
Exercise Set 5.4 Application: Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6 Inner Product Spaces 137
Exercise Set 6.1 The Dot Product on R
n
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Exercise Set 6.2 Inner Product Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Exercise Set 6.3 Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Exercise Set 6.4 Orthogonal Complements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Exercise Set 6.5 Application: Least Squares Approximation . . . . . . . . . . . . . . . . . . . . . . . 157
Exercise Set 6.6 Diagonalization of Symmetric Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 161
Exercise Set 6.7 Application: Quadratic Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Exercise Set 6.8 Application: Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . . . 166
Review Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Chapter Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
A Preliminaries 173
Exercise Set A.1 Algebra of Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Exercise Set A.2 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Exercise Set A.3 Techniques of Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Exercise Set A.4 Mathematical Induction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
1.1 Systems of Linear Equations 1
Solutions to All Exercises
1
Systems of Linear Equations and
Matrices
Exercise Set 1.1
In Section 1.1 of the text, Gaussian Elimination is used to solve a linear system. This procedure utilizes
three operations that when applied to a linear system result in a new system that is equivalent to the original.
Equivalent means that the linear systems have the same solutions. The three operations are:
Interchange two equations.
Multiply any equation by a nonzero constant.
Add a multiple of one equation to another.
When used judiciously, these three operations allow us to reduce a linear system to a triangular linear system,
which can be solved by back substitution. A linear system is consistent if there is at least one solution and is
inconsistent if there are no solutions. Every linear system has either a unique solution, infinitely many solutions, or no solutions.
For example, the triangular linear systems

(i)   x_1 - x_2 + x_3 = 2,  x_2 - 2x_3 = 1,  x_3 = 2
(ii)  x_1 - 2x_2 + x_3 = 2,  x_2 + 2x_3 = 3
(iii) 2x_1 + x_3 = 1,  x_2 - x_3 = 2,  0 = 4
have a unique solution, infinitely many solutions, and no solutions, respectively. In the second linear system
the variable x_3 is a free variable, and once it is assigned any real number the values of x_1 and x_2 are determined.
In this way the linear system has infinitely many solutions. If a linear system has the same form as the second
system, but also has the additional equation 0 = 0, then the linear system will still have free variables. The
third system is inconsistent since the last equation, 0 = 4, is impossible. In some cases the conditions on the
right-hand side of a linear system are not specified. Consider, for example, the linear system
x_1 - x_2 = a
-2x_1 + 2x_2 + x_3 = b
2x_3 = c

which is equivalent to

x_1 - x_2 = a
x_3 = b + 2a
0 = c - 2b - 4a.
This linear system is consistent only for values a, b, and c such that c - 2b - 4a = 0.
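This kind of consistency analysis can be mechanized. The sketch below (my own code, not part of the text; all names are assumptions) forward-eliminates an augmented matrix using exact rational arithmetic and reports whether the resulting triangular system is consistent; applied to the example above, it reproduces the condition c - 2b - 4a = 0.

```python
from fractions import Fraction

def eliminate(rows):
    """Forward-eliminate an augmented matrix (list of rows) and report
    whether the resulting triangular system is consistent."""
    rows = [[Fraction(v) for v in r] for r in rows]
    n, m = len(rows), len(rows[0]) - 1
    piv_row = 0
    for col in range(m):
        # find a row with a nonzero entry in this column to use as the pivot
        for r in range(piv_row, n):
            if rows[r][col] != 0:
                rows[piv_row], rows[r] = rows[r], rows[piv_row]
                break
        else:
            continue
        for r in range(piv_row + 1, n):
            factor = rows[r][col] / rows[piv_row][col]
            rows[r] = [x - factor * p for x, p in zip(rows[r], rows[piv_row])]
        piv_row += 1
    # consistent unless some row reads 0 = nonzero
    return all(any(x != 0 for x in row[:-1]) or row[-1] == 0 for row in rows)

def system(a, b, c):
    # x_1 - x_2 = a;  -2x_1 + 2x_2 + x_3 = b;  2x_3 = c
    return [[1, -1, 0, a], [-2, 2, 1, b], [0, 0, 2, c]]

print(eliminate(system(1, 1, 6)))  # c - 2b - 4a = 0  -> True (consistent)
print(eliminate(system(1, 1, 0)))  # c - 2b - 4a = -6 -> False (inconsistent)
```

Any triple (a, b, c) with c = 2b + 4a makes the last row vanish entirely, which is exactly the consistency condition derived above.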
Solutions to Exercises
1. Applying the given operations we obtain the equivalent triangular system:

x_1 - x_2 - 2x_3 = 3
-x_1 + 2x_2 + 3x_3 = 1
2x_1 - 2x_2 - 2x_3 = -2

E_1 + E_2 → E_2:

x_1 - x_2 - 2x_3 = 3
x_2 + x_3 = 4
2x_1 - 2x_2 - 2x_3 = -2

(-2)E_1 + E_3 → E_3:

x_1 - x_2 - 2x_3 = 3
x_2 + x_3 = 4
2x_3 = -8.

Using back substitution, the linear system has the unique solution x_1 = 3, x_2 = 8, x_3 = -4.
2. Applying the given operations we obtain the equivalent triangular system:

2x_1 - 2x_2 - x_3 = -3
x_1 - 3x_2 + x_3 = -2
x_1 - 2x_2 = 2

E_1 ↔ E_2:

x_1 - 3x_2 + x_3 = -2
2x_1 - 2x_2 - x_3 = -3
x_1 - 2x_2 = 2

(-2)E_1 + E_2 → E_2:

x_1 - 3x_2 + x_3 = -2
4x_2 - 3x_3 = 1
x_1 - 2x_2 = 2

(-1)E_1 + E_3 → E_3:

x_1 - 3x_2 + x_3 = -2
4x_2 - 3x_3 = 1
x_2 - x_3 = 4

E_2 ↔ E_3:

x_1 - 3x_2 + x_3 = -2
x_2 - x_3 = 4
4x_2 - 3x_3 = 1

(-4)E_2 + E_3 → E_3:

x_1 - 3x_2 + x_3 = -2
x_2 - x_3 = 4
x_3 = -15.

Using back substitution, the linear system has the unique solution x_1 = -20, x_2 = -11, x_3 = -15.
3. Applying the given operations we obtain the equivalent triangular system. Starting from

x_1 + 3x_4 = 2
x_1 + x_2 + 4x_4 = 3
2x_1 + x_3 + 8x_4 = 3
x_1 + x_2 + x_3 + 6x_4 = 2,

the operation (-1)E_1 + E_2 → E_2 gives

x_1 + 3x_4 = 2
x_2 + x_4 = 1
2x_1 + x_3 + 8x_4 = 3
x_1 + x_2 + x_3 + 6x_4 = 2;

(-2)E_1 + E_3 → E_3 gives

x_1 + 3x_4 = 2
x_2 + x_4 = 1
x_3 + 2x_4 = -1
x_1 + x_2 + x_3 + 6x_4 = 2;

(-1)E_1 + E_4 → E_4 gives

x_1 + 3x_4 = 2
x_2 + x_4 = 1
x_3 + 2x_4 = -1
x_2 + x_3 + 3x_4 = 0;

(-1)E_2 + E_4 → E_4 gives

x_1 + 3x_4 = 2
x_2 + x_4 = 1
x_3 + 2x_4 = -1
x_3 + 2x_4 = -1;

and (-1)E_3 + E_4 → E_4 gives

x_1 + 3x_4 = 2
x_2 + x_4 = 1
x_3 + 2x_4 = -1
0 = 0.

The final triangular linear system has more variables than equations, that is, there is a free variable. As a
result there are infinitely many solutions. Specifically, using back substitution, the solutions are given by
x_1 = 2 - 3x_4, x_2 = 1 - x_4, x_3 = -1 - 2x_4, x_4 ∈ R.
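A one-parameter solution family like the one in Exercise 3 is easy to verify numerically: substituting the expressions back into the four original equations should leave zero residuals for every value of the parameter. A small sketch (not part of the text):

```python
# Check that the one-parameter family from Exercise 3 satisfies all four
# original equations for any value of the free variable x_4 = t.
def residuals(t):
    x1, x2, x3, x4 = 2 - 3*t, 1 - t, -1 - 2*t, t
    return [x1 + 3*x4 - 2,
            x1 + x2 + 4*x4 - 3,
            2*x1 + x3 + 8*x4 - 3,
            x1 + x2 + x3 + 6*x4 - 2]

for t in (0, 1, -2.5):
    assert all(r == 0 for r in residuals(t))  # every equation is satisfied
```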
4. Applying the given operations we obtain the equivalent triangular system:

x_1 + x_3 = 2
x_1 + x_2 + 4x_3 = 1
2x_1 + 2x_3 + x_4 = 1

(-1)E_1 + E_2 → E_2:

x_1 + x_3 = 2
x_2 + 3x_3 = -1
2x_1 + 2x_3 + x_4 = 1

(-2)E_1 + E_3 → E_3:

x_1 + x_3 = 2
x_2 + 3x_3 = -1
x_4 = -3.

Using back substitution, the set of solutions is given by x_1 = 2 - x_3, x_2 = -1 - 3x_3, x_4 = -3, x_3 ∈ R.
5. The second equation gives immediately that x = 0. Substituting the value x = 0 into the first equation,
we have that y = 2/3. Hence, the linear system has the unique solution x = 0, y = 2/3.
6. From the second equation, y = 1, and substituting this value in the first equation gives x = 4.
7. The first equation gives x = 1, and substituting this value in the second equation we have y = 0. Hence,
the linear system has the unique solution x = 1, y = 0.
8. The operation 3E_2 + E_1 → E_1 gives 5x = 1, so x = 1/5. Substitution into equation two then gives y = 1/5.
9. Notice that the first equation is three times the second and hence the equations have the same solutions.
Since each equation has infinitely many solutions, the linear system has infinitely many solutions, with solution
set S = {((2t + 4)/3, t) | t ∈ R}.
10. Since the first equation is 3 times the second, the equations describe the same line and hence there are
infinitely many solutions, given by x = (5/3)y + 1/3, y ∈ R.
11. The operations E_1 ↔ E_3, E_1 + E_2 → E_2, (-3)E_1 + E_3 → E_3, and (8/5)E_2 + E_3 → E_3 reduce the linear system
to the equivalent triangular system

x - 2y + z = -2
5y + 2z = 5
(9/5)z = 0.

The unique solution is x = 0, y = 1, z = 0.
12. Reducing the linear system gives

x + 3y + z = 2
-2x + 2y - 4z = -1
-y + 3z = 1

which reduces to

x + 3y + z = 2
8y - 2z = 3
-y + 3z = 1

which reduces to

x + 3y + z = 2
8y - 2z = 3
z = 1/2.

So the unique solution is x = 0, y = 1/2, z = 1/2.
13. The operations E_1 ↔ E_2, (-2)E_1 + E_2 → E_2, (-3)E_1 + E_3 → E_3, and E_2 + E_3 → E_3 reduce the linear system
to the equivalent triangular system

x + 5z = -1
2y + 12z = 1
0 = 0.

The linear system has infinitely many solutions, with solution set S = {(-1 - 5t, -6t + 1/2, t) | t ∈ R}.
14. Reducing the linear system gives

x + y + 4z = 1
-3x - y + 2z = -2
-2x - 2y - 8z = -2

which reduces to

x + y + 4z = 1
2y + 14z = 1
-2x - 2y - 8z = -2

which reduces to

x + y + 4z = 1
2y + 14z = 1
0 = 0.

There are infinitely many solutions, with solution set S = {(3t + 1/2, -7t + 1/2, t) | t ∈ R}.
15. Adding the two equations yields 6x_1 + 6x_3 = 4, so that x_1 = 2/3 - x_3. Substituting this value
into the first equation gives x_2 = 1/2. The linear system has infinitely many solutions, with solution set
S = {(-t + 2/3, 1/2, t) | t ∈ R}.
16. Reducing the linear system gives

2x_1 + x_2 = 2
-3x_1 - x_2 + 2x_3 = 1

which reduces to

2x_1 + x_2 = 2
x_2 + 4x_3 = 8.

There are infinitely many solutions, with solution set S = {(2t - 3, -4t + 8, t) | t ∈ R}.
17. The operation 2E_1 + E_2 → E_2 gives the equation 3x_2 - 3x_3 + 4x_4 = 9. Hence, the linear system has two
free variables, x_3 and x_4. The two-parameter set of solutions is S = {(3 - (5/3)t, s - (4/3)t + 3, s, t) | s, t ∈ R}.
18. The linear system is in reduced form. The solution set is a two-parameter family given by
S = {((1/2)s - 3t + 5/2, 3t - 2, s, t) | s, t ∈ R}.
19. The operation 2E_1 + E_2 → E_2 gives x = b - 2a. Then y = a + 2x = a + 2(b - 2a) = 2b - 3a, so that
the unique solution is x = -2a + b, y = -3a + 2b.
20. The linear system

2x + 3y = a
x + y = b

reduces to

2x + 3y = a
y = a - 2b,

so the solution is x = -a + 3b, y = a - 2b.
21. The linear system is equivalent to the triangular linear system

-x - z = b
y = a + 3b
z = -2a - 7b + c,

which has the unique solution x = 2a + 6b - c, y = a + 3b, z = -2a - 7b + c.
22. The linear system

-3x + 2y + z = a
x - y - z = b
x - y - 2z = c

is reduced by the operations (-1)E_2 + E_3 → E_3 and E_1 + 3E_2 → E_1 to

-y - 2z = a + 3b
x - y - z = b
-z = -b + c,

so the solution is x = -a - 3b + c, y = -a - 5b + 2c, z = b - c.
23. Since the operation 2E_1 + E_2 → E_2 gives the equation 0 = 2a + 2, the linear system is consistent
only for a = -1.
24. Since

x + 3y = a
-2x - 6y = 3

reduces to

x + 3y = a
0 = 3 + 2a,

the linear system is consistent if a = -3/2.
25. Since the operation 2E_1 + E_2 → E_2 gives the equation 0 = a + b, the linear system is consistent for
b = -a.
26. Since

6x - 3y = a
-2x + y = b

reduces to

6x - 3y = a
0 = (1/3)a + b,

the linear system is consistent if b = -(1/3)a.
27. The linear system is equivalent to the triangular linear system

x - 2y + 4z = a
5y - 9z = -2a + b
0 = c - a - b

and hence is consistent for all a, b, and c such that c - a - b = 0.
28. Since

x - y + 2z = a
2x + 4y - 3z = b
4x + 2y + z = c

reduces to

x - y + 2z = a
6y - 7z = b - 2a
0 = c - 2a - b,

the linear system is consistent if c - 2a - b = 0.
29. The operation 2E_1 + E_2 → E_2 gives the equivalent linear system

x + y = 2
(a - 2)y = 7.

Hence, if a = 2, the linear system is inconsistent.
30. Since

2x - y = 4
ax + 3y = 2

reduces to

2x - y = 4
(3 + (1/2)a)y = 2 - 2a,

the linear system is inconsistent if 3 + (1/2)a = 0, that is, a = -6. Notice that if a = -6, then 2 - 2a = 14 ≠ 0.
31. The operation (-3)E_1 + E_2 → E_2 gives the equivalent linear system

x - y = 2
0 = a - 6.

Hence, the linear system is inconsistent for all a ≠ 6.
32. Since

2x - y = a
6x - 3y = a

reduces to

2x - y = a
0 = -2a,

the linear system is inconsistent for all a ≠ 0.
33. To find the parabola y = ax^2 + bx + c that passes through the specified points, we solve the linear system

c = 0.25
a + b + c = -1.75
a - b + c = 4.25.

The unique solution is a = 1, b = -3, and c = 1/4, so the parabola is y = x^2 - 3x + 1/4 = (x - 3/2)^2 - 2. The
vertex of the parabola is the point (3/2, -2).
34. To find the parabola y = ax^2 + bx + c that passes through the specified points, we solve the linear system

c = 2
9a - 3b + c = -1
0.25a + 0.5b + c = 0.75.

The unique solution is a = -1, b = -2, and c = 2, so the parabola is y = -x^2 - 2x + 2 = -(x + 1)^2 + 3. The
vertex of the parabola is (-1, 3).
35. To find the parabola y = ax^2 + bx + c that passes through the specified points, we solve the linear system

(0.5)^2 a - (0.5)b + c = -3.25
a + b + c = 2
(2.3)^2 a + (2.3)b + c = 2.91.

The unique solution is a = -1, b = 4, and c = -1, so the parabola is y = -x^2 + 4x - 1 = -(x - 2)^2 + 3.
The vertex of the parabola is the point (2, 3).
36. To find the parabola y = ax^2 + bx + c that passes through the specified points, we solve the linear system

c = 2875
a + b + c = 5675
9a + 3b + c = -5525.

The unique solution is a = -2800, b = 5600, and c = 2875, so the parabola is y = -2800x^2 + 5600x + 2875.
The x coordinate of the vertex is given by x = -b/(2a), so x = 1. The y coordinate of the vertex is y = 5675.
37. a. The point of intersection of the three lines can be found by solving the linear system

x + y = 1
6x + 5y = 3
12x - 5y = -39.

This linear system has the unique solution (-2, 3).
b. The figure shows the three lines, which meet in the single point (-2, 3).
38. a. The point of intersection of the four lines can be found by solving the linear system

2x + y = 0
x + y = 1
3x + y = -1
4x + y = -2.

This linear system has the unique solution (-1, 2).
b. The figure shows the four lines, which meet in the single point (-1, 2).
39. a. The linear system

x + y = 2
x - y = 0

has the unique solution x = 1 and y = 1. Notice that the two lines have different slopes.
b. The linear system

x + y = 1
2x + 2y = 2

has infinitely many solutions, given by the one-parameter set S = {(1 - t, t) | t ∈ R}. Notice that the second
equation is twice the first, so the equations represent the same line.
c. The linear system

x + y = 2
3x + 3y = -6

is inconsistent.
40. Using the operations dE_1 → E_1 and bE_2 → E_2, followed by (-1)E_2 + E_1 → E_1, gives (ad - bc)x = dx_1 - bx_2.
Since ad - bc is not zero, we have that x = (dx_1 - bx_2)/(ad - bc). In a similar way, we have that
y = (ax_2 - cx_1)/(ad - bc).
41. a. S = {(3 - 2s - t, 2 + s - 2t, s, t) | s, t ∈ R}  b. S = {(7 - 2s - 5t, s, 2 + s + 2t, t) | s, t ∈ R}
42. a. Let x_4 = s and x_5 = t, so that x_3 = 2 + 2s - 3t, x_2 = 1 + s + t, and x_1 = 2 + 3t. b. Let x_3 = s
and x_5 = t, so that x_4 = 1 + (1/2)s + (3/2)t, x_2 = 2 + (1/2)s + (5/2)t, and x_1 = 2 + 3t.
43. Applying kE_1 → E_1, 9E_2 → E_2, and (-1)E_1 + E_2 → E_2 gives the equivalent linear system

9kx + k^2 y = 9k
(9 - k^2)y = 27 - 9k.

Whether the linear system is consistent or inconsistent can now be determined by examining the second
equation.
a. If k = -3, the second equation becomes 0 = 54, so the linear system is inconsistent. b. If k = 3, then
the second equation becomes 0 = 0, so the linear system has infinitely many solutions. c. If k ≠ ±3, then the
linear system has a unique solution.
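The case analysis in Exercise 43 depends only on the reduced second equation (9 - k^2)y = 27 - 9k, so it can be checked mechanically (a sketch, not from the text):

```python
# Classify the system of Exercise 43 using its reduced second equation
# (9 - k^2) y = 27 - 9k.
def classify(k):
    coeff, rhs = 9 - k * k, 27 - 9 * k
    if coeff != 0:
        return "unique"
    return "infinitely many" if rhs == 0 else "inconsistent"

assert classify(-3) == "inconsistent"     # 0 = 54
assert classify(3) == "infinitely many"   # 0 = 0
assert classify(5) == "unique"
```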
44. The linear system

kx + y + z = 0
x + ky + z = 0
x + y + kz = 0

reduces to

x + ky + z = 0
(1 - k)y + (k - 1)z = 0
(2 + k)(1 - k)z = 0.

a. The linear system has a unique solution if k ≠ 1 and k ≠ -2. b. If k = -2, the solution set is a one-parameter
family. c. If k = 1, the solution set is a two-parameter family.
Exercise Set 1.2
Matrices are used to provide an alternative way to represent a linear system. Reducing a linear system to
triangular form is then equivalent to row reducing the augmented matrix corresponding to the linear system
to a triangular matrix. For example, the augmented matrix for the linear system
x_1 - x_2 - x_3 - 2x_4 = 1
-2x_1 + 2x_2 + x_3 - 2x_4 = 2
-x_1 - 2x_2 + x_3 + 2x_4 = -2

is

[ 1 -1 -1 -2 | 1 ]
[ -2 2 1 -2 | 2 ]
[ -1 -2 1 2 | -2 ].
The coefficient matrix is the 3 x 4 matrix consisting of the coefficients of each variable, that is, the augmented
matrix with the augmented column

[ 1 ]
[ 2 ]
[ -2 ]

deleted. The first four columns of the augmented matrix correspond to the variables x_1, x_2, x_3, and x_4,
respectively, and the augmented column to the constants on the right of each equation. Reducing the linear
system using the three valid operations is equivalent to reducing the augmented matrix to a triangular matrix
using the row operations:
Interchange two rows.
Multiply any row by a nonzero constant.
Add a multiple of one row to another.
In the above example, the augmented matrix can be reduced to either

[ 1 -1 -1 -2 | 1 ]          [ 1 0 0 4 | -8/3 ]
[ 0 -3 0 0 | -1 ]     or    [ 0 1 0 0 | 1/3 ]
[ 0 0 -1 -6 | 4 ]           [ 0 0 1 6 | -4 ].
The left matrix is in row echelon form and the right is in reduced row echelon form. The leading nonzero
entries in each row are the pivots of the matrix. The pivot columns correspond to dependent variables and the
non-pivot columns correspond to free variables. In this example the free variable is x_4, and x_1, x_2, and x_3
depend on x_4. So the linear system has infinitely many solutions, given by x_1 = -8/3 - 4x_4, x_2 = 1/3,
x_3 = -4 - 6x_4, and x_4 an arbitrary real number. For a linear system with the same number of equations
as variables, there will be a unique solution if and only if the coefficient matrix can be row reduced to the
matrix with each diagonal entry 1 and all others 0.
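The row reduction just described can be carried out mechanically. The following sketch (my own code, not the text's algorithm verbatim) computes the reduced row echelon form of an augmented matrix with exact rational arithmetic, and applied to the augmented matrix above it recovers the same pivots and the same free column:

```python
from fractions import Fraction

def rref(rows):
    """Row-reduce an augmented matrix to reduced row echelon form."""
    rows = [[Fraction(v) for v in r] for r in rows]
    piv = 0
    for col in range(len(rows[0])):
        pivot = next((r for r in range(piv, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column: it stays a free column
        rows[piv], rows[pivot] = rows[pivot], rows[piv]
        rows[piv] = [x / rows[piv][col] for x in rows[piv]]  # make the pivot 1
        for r in range(len(rows)):
            if r != piv and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [x - f * p for x, p in zip(rows[r], rows[piv])]
        piv += 1
        if piv == len(rows):
            break
    return rows

# The augmented matrix of the example above
R = rref([[1, -1, -1, -2, 1], [-2, 2, 1, -2, 2], [-1, -2, 1, 2, -2]])
assert R[0] == [1, 0, 0, 4, Fraction(-8, 3)]
assert R[1] == [0, 1, 0, 0, Fraction(1, 3)]
assert R[2] == [0, 0, 1, 6, -4]
```

Using Fraction rather than floating point keeps entries such as -8/3 exact, so the pivots and free columns can be read off without rounding concerns.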
Solutions to Exercises
1. [ 2 3 5 ; 1 1 3 ]    2. [ 2 2 1 ; 3 0 1 ]
3. [ 2 0 1 4 ; 1 4 1 2 ; 4 1 1 1 ]    4. [ 3 1 1 2 ; 0 0 4 0 ; 4 2 3 1 ]
5. [ 2 0 1 4 ; 1 4 1 2 ]    6. [ 4 1 1 1 ; 4 4 2 2 ]
7. [ 2 4 2 2 2 ; 4 2 3 2 2 ; 1 3 3 3 4 ]    8. [ 3 0 3 4 3 ; 4 2 2 4 4 ; 0 4 3 2 3 ]
9. The linear system has the unique solution x = 1, y = 1/2, z = 0.
10. The linear system has the unique solution x = 2, y = 0, z = 2/3.
11. The linear system is consistent with free variable z. There are infinitely many solutions given by
x = 3 - 2z, y = 2 + z, z ∈ R.
12. The linear system is consistent with free variable z. There are infinitely many solutions given by
x = 4 + (1/3)z, y = 4/3 - 3z, z ∈ R.
13. The variable z = 2 and y is a free variable, so the linear system has infinitely many solutions given by
x = 3 + 2y, z = 2, y ∈ R.
14. The variables y and z are free variables, so the linear system has infinitely many solutions given by
x = 1 - 5y - 5z, y ∈ R, z ∈ R.
15. The last row of the matrix represents the impossible equation 0 = 1, so the linear system is inconsistent.
16. The last row of the matrix represents the impossible equation 0 = 1, so the linear system is inconsistent.
17. The linear system is consistent with free variables z and w. The solutions are given by
x = 3 + 2z - 5w, y = 2 + z - 2w, z ∈ R, w ∈ R.
18. The linear system is consistent with free variables y and z. The solutions are given by
x = 1 - 3y + 3z, w = 4, y ∈ R, z ∈ R.
19. The linear system has infinitely many solutions given by x = 1 + 3w, y = 7 + w, z = 1 - 2w, w ∈ R.
20. The linear system has infinitely many solutions given by x = 1 - (2/5)z, y = 1 + 3z, w = 4/5, z ∈ R.
21. The matrix is in reduced row echelon form. 22. The matrix is in reduced row echelon form.
23. Since the matrix contains nonzero entries
above the pivots in rows two and three, the ma-
trix is not in reduced row echelon form.
24. Since the pivot in row two is not a one, the
matrix is not in reduced row echelon form.
25. The matrix is in reduced row echelon form. 26. The matrix is in reduced row echelon form.
27. Since the first nonzero term in row three is
to the left of the first nonzero term in row two,
the matrix is not in reduced row echelon form.
28. Since the matrix contains nonzero entries
above the pivots in rows two and three, the ma-
trix is not in reduced row echelon form.
29. To find the reduced row echelon form of the matrix, we first reduce the matrix to triangular form using

[ 2 3 ; -2 1 ]  R_1 + R_2 → R_2  [ 2 3 ; 0 4 ].

The next step is to make the pivots 1 and eliminate the term above the pivot in row two. This gives

[ 2 3 ; 0 4 ]  (1/4)R_2 → R_2  [ 2 3 ; 0 1 ]  (-3)R_2 + R_1 → R_1  [ 2 0 ; 0 1 ]  (1/2)R_1 → R_1  [ 1 0 ; 0 1 ].
30. The matrix [ 3 2 ; 3 3 ] reduces to [ 1 0 ; 0 1 ].
31. To avoid the introduction of fractions, we interchange rows one and three. The remaining operations are
used to change all pivots to ones and eliminate the nonzero entries above and below them.

[ 3 3 1 ; -3 1 0 ; -1 -1 2 ]
R_1 ↔ R_3:  [ -1 -1 2 ; -3 1 0 ; 3 3 1 ]
(-3)R_1 + R_2 → R_2:  [ -1 -1 2 ; 0 4 -6 ; 3 3 1 ]
3R_1 + R_3 → R_3:  [ -1 -1 2 ; 0 4 -6 ; 0 0 7 ]
(1/7)R_3 → R_3:  [ -1 -1 2 ; 0 4 -6 ; 0 0 1 ]
6R_3 + R_2 → R_2:  [ -1 -1 2 ; 0 4 0 ; 0 0 1 ]
(-2)R_3 + R_1 → R_1:  [ -1 -1 0 ; 0 4 0 ; 0 0 1 ]
(1/4)R_2 → R_2:  [ -1 -1 0 ; 0 1 0 ; 0 0 1 ]
R_2 + R_1 → R_1:  [ -1 0 0 ; 0 1 0 ; 0 0 1 ]
(-1)R_1 → R_1:  [ 1 0 0 ; 0 1 0 ; 0 0 1 ].
32. The matrix [ 0 2 1 ; 1 3 3 ; 1 2 3 ] reduces to [ 1 0 0 ; 0 1 0 ; 0 0 1 ].
33. The matrix in reduced row echelon form is [ 1 0 1 ; 0 1 0 ].
34. The matrix in reduced row echelon form is [ 1 0 3/8 ; 0 1 1/4 ].
35. The matrix in reduced row echelon form is [ 1 0 0 2 ; 0 1 0 1 ; 0 0 1 0 ].
36. The matrix in reduced row echelon form is [ 1 0 0 2 ; 0 1 0 6/5 ; 0 0 1 8/5 ].
37. The augmented matrix for the linear system and its reduced row echelon form are

[ 1 1 | 1 ; 4 3 | 2 ]  and  [ 1 0 | -1 ; 0 1 | 2 ].

The unique solution to the linear system is x = -1, y = 2.
38. The augmented matrix

[ 3 1 | 1 ; 4 -2 | 0 ]

reduces to

[ 1 0 | 1/5 ; 0 1 | 2/5 ].

The unique solution to the linear system is x = 1/5, y = 2/5.
39. The augmented matrix for the linear system and its reduced row echelon form are

[ 3 3 0 | 3 ; 4 1 3 | 3 ; 2 2 0 | 2 ]  and  [ 1 0 0 | 1 ; 0 1 0 | 0 ; 0 0 1 | -1/3 ].

The unique solution for the linear system is x = 1, y = 0, z = -1/3.
40. The augmented matrix

[ 2 0 -4 | 1 ; -4 3 2 | 0 ; 2 0 2 | 2 ]

reduces to

[ 1 0 0 | 5/6 ; 0 1 0 | 1 ; 0 0 1 | 1/6 ].

The unique solution for the linear system is x = 5/6, y = 1, z = 1/6.
41. The augmented matrix for the linear system and its reduced row echelon form are

[ 1 2 1 | 1 ; 2 3 2 | 0 ; 1 1 1 | 2 ]  and  [ 1 0 1 | 0 ; 0 1 0 | 0 ; 0 0 0 | 1 ].

The linear system is inconsistent.
42. The augmented matrix

[ 3 0 2 | 3 ; 2 0 1 | 2 ; 0 0 1 | 2 ]

reduces to

[ 1 0 0 | 0 ; 0 0 1 | 0 ; 0 0 0 | 1 ].

The linear system is inconsistent.
43. The augmented matrix for the linear system and its reduced row echelon form are

[ 3 2 3 | 3 ; 1 2 -1 | 2 ]  and  [ 1 0 2 | 1/2 ; 0 1 -3/2 | 3/4 ].

As a result, the variable x_3 is free and there are infinitely many solutions to the linear system, given by
x_1 = 1/2 - 2x_3, x_2 = 3/4 + (3/2)x_3, x_3 ∈ R.
44. The augmented matrix

[ 0 3 1 | 2 ; 1 0 1 | 2 ]

reduces to

[ 1 0 1 | 2 ; 0 1 1/3 | 2/3 ].

As a result, the variable x_3 is free and there are infinitely many solutions to the linear system, given by
x_1 = 2 - x_3, x_2 = 2/3 - (1/3)x_3, x_3 ∈ R.
45. The augmented matrix for the linear system and its reduced row echelon form are

[ 1 0 3 1 | 2 ; 2 3 3 1 | 2 ; 2 2 2 1 | 2 ]  and  [ 1 0 0 1/2 | 1 ; 0 1 0 1/2 | 1 ; 0 0 1 1/2 | 1 ].

As a result, the variable x_4 is free and there are infinitely many solutions to the linear system, given by
x_1 = 1 - (1/2)x_4, x_2 = 1 - (1/2)x_4, x_3 = 1 - (1/2)x_4, x_4 ∈ R.
46. The augmented matrix

[ 3 1 3 3 | 3 ; 1 1 1 1 | 3 ; 3 3 1 2 | 1 ]

reduces to

[ 1 0 0 3/4 | 4 ; 0 1 0 9/4 | 6 ; 0 0 1 5/2 | 5 ].

As a result, the variable x_4 is free and there are infinitely many solutions to the linear system, given by
x_1 = 4 - (3/4)x_4, x_2 = 6 - (9/4)x_4, x_3 = 5 - (5/2)x_4, x_4 ∈ R.
47. The augmented matrix for the linear system and its reduced row echelon form are

[ 3 3 1 3 | 3 ; 1 1 1 2 | 3 ; 4 2 0 1 | 0 ]  and  [ 1 0 1/3 1/2 | 1 ; 0 1 2/3 3/2 | 2 ; 0 0 0 0 | 0 ].

As a result, the variables x_3 and x_4 are free and there are infinitely many solutions to the linear system,
given by x_1 = 1 - (1/3)x_3 - (1/2)x_4, x_2 = 2 - (2/3)x_3 - (3/2)x_4, x_3 ∈ R, x_4 ∈ R.
48. The augmented matrix

[ 3 2 1 2 | 2 ; 1 1 0 3 | 3 ; 4 3 1 1 | 1 ]

reduces to

[ 1 0 1 8 | 8 ; 0 1 1 11 | 11 ; 0 0 0 0 | 0 ].

As a result, the variables x_3 and x_4 are free and there are infinitely many solutions to the linear system,
given by x_1 = 8 - x_3 - 8x_4, x_2 = 11 - x_3 - 11x_4, x_3 ∈ R, x_4 ∈ R.
49. The augmented matrix for the linear system and its row echelon form are

[ 1 -2 -1 | a ; 2 -3 -2 | b ; -1 1 1 | c ]  and  [ 1 -2 -1 | a ; 0 1 0 | -2a + b ; 0 0 0 | -a + b + c ].

a. The linear system is consistent precisely when the last equation, from the row echelon form, is consistent,
that is, when -a + b + c = 0. b. Similarly, the linear system is inconsistent when -a + b + c ≠ 0. c. For those
values of a, b, and c for which the linear system is consistent, there is a free variable, so there are infinitely
many solutions. d. The linear system is consistent if a = 1, b = 0, c = 1. If the variables are denoted by x, y,
and z, then one solution is obtained by setting z = 1, that is, x = -2, y = -2, z = 1.
50. Reducing the augmented matrix gives

[ a 1 | 1 ; 2 a-1 | 1 ]
→ [ 2 a-1 | 1 ; a 1 | 1 ]
→ [ 1 (a-1)/2 | 1/2 ; a 1 | 1 ]
→ [ 1 (a-1)/2 | 1/2 ; 0 1 - a(a-1)/2 | 1 - (1/2)a ].

a. The linear system is consistent if a ≠ -1. b. If the linear system is consistent and a ≠ 2, then the
solution is unique. If the linear system is consistent and a = 2, there are infinitely many solutions. c. Let
a = 1. The unique solution is x = 1/2, y = 1/2.
51. The augmented matrix for the linear system and its reduced row echelon form are

[ 2 3 -1 | a ; -1 1 1 | b ; 0 5 1 | c ]  and  [ 1 0 -4/5 | (1/2)a - (3/10)c ; 0 1 1/5 | (1/5)c ; 0 0 0 | a + 2b - c ].

a. The linear system is consistent precisely when the last equation, from the reduced row echelon form, is
consistent, that is, when a + 2b - c = 0. b. Similarly, the linear system is inconsistent when a + 2b - c ≠ 0.
c. For those values of a, b, and c for which the linear system is consistent, there is a free variable, so there
are infinitely many solutions. d. The linear system is consistent if a = 0, b = 0, c = 0. If the variables are
denoted by x, y, and z, then one solution is obtained by setting z = 1, that is, x = 4/5, y = -1/5, z = 1.
52. [ 1 0 ; 0 1 ], [ 1 0 ; 0 0 ], [ 1 1 ; 0 0 ], [ 0 1 ; 0 0 ].
Exercise Set 1.3
Addition and scalar multiplication are defined componentwise, allowing algebra to be performed on expressions
involving matrices. Many of the properties enjoyed by the real numbers also hold for matrices. For
example, addition is commutative and associative, and the matrix of all zeros plays the same role as 0 in the real
numbers, since the zero matrix added to any matrix A is A. If each component of a matrix A is negated,
denoted by -A, then A + (-A) is the zero matrix. Matrix multiplication is also defined. The matrix AB
that is the product of A with B is obtained by taking the dot product of each row vector of A with each
column vector of B. The order of multiplication is important, since it is not always the case that AB and BA
are the same matrix. When simplifying expressions with matrices care is then needed, and the multiplication
of matrices can be reversed only when it is assumed or known that the matrices commute. The distributive
property does hold for matrices, so that A(B + C) = AB + AC. In this case, however, it is also necessary
to note that (B + C)A = BA + CA, again since matrix multiplication is not commutative. The transpose
of a matrix A, denoted by A^t, is obtained by interchanging the rows and columns of the matrix. There are
important properties of the transpose operation you should also be familiar with before solving the exercises.
Of particular importance is (AB)^t = B^t A^t. Other properties are (A + B)^t = A^t + B^t, (cA)^t = cA^t, and
(A^t)^t = A. A class of matrices that is introduced in Section 1.3 and considered throughout the text is the
symmetric matrices. A matrix A is symmetric if it is equal to its transpose, that is, A^t = A. For example, in
the case of 2 x 2 matrices,

A = [ a b ; c d ] = A^t = [ a c ; b d ]  if and only if  b = c.

Here we used that two matrices are equal if and only if corresponding components are equal. Some of the
exercises involve showing that some matrix or combination of matrices is symmetric. For example, to show that
the product of two matrices AB is symmetric requires showing that (AB)^t = AB.
Solutions to Exercises
1. Since addition of matrices is defined componentwise, we have that

A + B = [ 2 -3 ; 4 1 ] + [ -1 3 ; -2 5 ] = [ 2-1  -3+3 ; 4-2  1+5 ] = [ 1 0 ; 2 6 ].

Also, since addition of real numbers is commutative, A + B = B + A.
2. 3A - 2B = 3[ 2 -3 ; 4 1 ] - 2[ -1 3 ; -2 5 ] = [ 8 -15 ; 16 -7 ]
3. To evaluate the matrix expression (A + B) + C requires that we first add A + B and then add C to the result.
On the other hand, to evaluate A + (B + C) we first evaluate B + C and then add A. Since addition of real
numbers is associative, the two results are the same, that is, (A + B) + C = [ 2 -1 ; 7 4 ] = A + (B + C).
4. 3(A + B) - 5C = 3([ 2 -3 ; 4 1 ] + [ -1 3 ; -2 5 ]) - 5[ 1 -1 ; 5 -2 ] = 3[ 1 0 ; 2 6 ] - 5[ 1 -1 ; 5 -2 ]
= [ -2 5 ; -19 28 ]
5. Since a scalar times a matrix multiplies each entry of the matrix by the real number, we have that

(A - B) + C = [ 7 -3 9 ; 0 5 6 ; 1 -2 10 ]  and  2A + B = [ 7 -3 9 ; 0 5 6 ; 1 -2 10 ].
6. A + 2B - C = [ 3 -3 3 ; -1 0 2 ; 0 -2 3 ] + 2[ 1 3 3 ; 2 5 2 ; 1 2 4 ] - [ 5 3 9 ; 3 10 6 ; 2 2 11 ]
= [ 0 0 0 ; 0 0 0 ; 0 0 0 ]
7. The products are AB = [ 7 2 ; 0 8 ] and BA = [ 6 2 ; -1 9 ]. Notice that A and B are examples of matrices
that do not commute, that is, the order of multiplication cannot be reversed.
8. 3(AB) = 3[ 7 2 ; 0 8 ] = [ 21 6 ; 0 24 ]  and  A(3B) = [ 3 1 ; -2 4 ][ 6 0 ; 3 6 ] = [ 21 6 ; 0 24 ]
9. AB = [ 9 4 ; 13 7 ]    10. BA = [ 9 7 9 ; 10 2 6 ; 6 9 9 ]
11. AB = [ 5 6 4 ; 3 6 18 ; 5 7 6 ]    12. AB = [ 5 5 1 ; 10 0 9 ; 6 0 2 ]
13. First, adding the matrices B and C gives
A(B +C) =
_
2 3
3 0
_ __
2 0
2 0
_
+
_
2 0
1 1
__
=
_
2 3
3 0
_ _
4 0
3 1
_
=
_
(2)(4) + (3)(3) (2)(0) + (3)(1)
(3)(4) + (0)(3) (3)(0) + (0)(1)
_
=
_
1 3
12 0
_
.
14. (A + B)C = [0 -3; 1 0][-2 0; -1 1] = [3 -3; -2 0]

15. 2A(B - 3C) = [10 18; 24 0]

16. (A + 2B)(3C) = [21 -9; 6 0]

17. To find the transpose of a matrix the rows and columns are reversed. So A^t and B^t are 3×2
matrices and the operation is defined. The result is

    2A^t - B^t = [7 5; 1 3; 3 2].
18. Since B^t is 3×2 and 2A is 2×3, the expression B^t - 2A is not defined.

19. Since A is 2×3 and B^t is 3×2, the product AB^t is defined with AB^t = [7 4; 5 1]

20. BA^t = [7 5; 4 1]

21. (A^t + B^t)C = [1 7; 6 8; 4 12]

22. Since C is 2×2 and A^t + B^t is 3×2, the expression is not defined.

23. (A^t C)B = [0 20 15; 0 0 0; 18 22 15]

24. Since A^t is 3×2 and B^t is 3×2, the expression is not defined.

25. AB = AC = [5 1; 5 1]

26. If A = [0 2; 0 5] and B = [1 1; 0 0], then AB = [0 0; 0 0].
27. The product

    A^2 = AA = [a b; 0 c][a b; 0 c] = [a^2, ab + bc; 0, c^2] = [1 0; 0 1]

if and only if a^2 = 1, c^2 = 1, and ab + bc = b(a + c) = 0. That is, a = ±1, c = ±1, and b(a + c) = 0, so that A
has one of the forms

    [1 0; 0 1], [1 b; 0 -1], [-1 b; 0 1], or [-1 0; 0 -1].
28. Since AM = [2a + c, 2b + d; a + c, b + d] and MA = [2a + b, a + b; 2c + d, c + d], then AM = MA if and only if
2a + c = 2a + b, 2b + d = a + b, a + c = 2c + d, b + d = c + d. That is, b = c and d = a - b, so the matrices have the form

    [a b; b a - b].
29. Let A = [1 1; 0 0] and B = [1 1; -1 -1]. Then AB = [0 0; 0 0], and neither of the matrices is the
zero matrix. Notice that this cannot happen with real numbers. That is, if the product of two real numbers
is zero, then at least one of the numbers must be zero.
30. Let A = [a b; c d] and B = [e f; g h], so that AB - BA = [1 0; 0 1] if and only if

    [bg - cf, (af + bh) - (be + fd); (ce + dg) - (ag + ch), cf - bg] = [1 0; 0 1].

So bg - cf and cf - bg must both be 1, which is not possible.
31. The product

    [1 2; a 0][3 b; -4 1] = [-5, b + 2; 3a, ab]

will equal [-5 6; 12 16] if and only if b + 2 = 6, 3a = 12, and ab = 16. That is, a = b = 4.
32. Let A = [a b; c d] and B = [e f; g h]. Since

    AB - BA = [bg - cf, (af + bh) - (be + fd); (ce + dg) - (ag + ch), cf - bg],

then the sum of the terms on the diagonal is (bg - cf) + (cf - bg) = 0.
33. Several powers of the matrix A are given by

    A^2 = I, A^3 = A, A^4 = I, and A^5 = A,

where I is the 3×3 identity matrix. We can see that if n is even, then A^n is the identity matrix, so in
particular A^20 = I. Notice also that if n is odd, then A^n = A.
34. Since (A + B)(A - B) = A^2 - AB + BA - B^2, then (A + B)(A - B) = A^2 - B^2 when AB = BA.

35. We can first rewrite the expression A^2 B as A^2 B = AAB. Since AB = BA, then A^2 B = AAB = ABA =
BAA = BA^2.

36. a. Since AB = BA and AC = CA, then (BC)A = B(CA) = B(AC) = (BA)C = (AB)C = A(BC) and hence BC and A
commute. b. Let A = [1 0; 0 1], so that A commutes with every 2×2 matrix. Then select any two matrices
that do not commute. For example, let B = [1 0; 1 0] and C = [0 1; 0 1].

37. Multiplication of A times the vector x = (1, 0, ..., 0)^t gives the first column vector of the matrix A. Then
Ax = 0 forces the first column vector of A to be the zero vector. Then let x = (0, 1, ..., 0)^t, and so on, to show
that each column vector of A is the zero vector. Hence, A is the zero matrix.
38. Let A_n = [1 - n, n; -n, 1 + n] and A_m = [1 - m, m; -m, 1 + m]. Then

    A_n A_m = [(1 - n)(1 - m) - nm, (1 - n)m + (1 + m)n; -n(1 - m) - m(1 + n), -mn + (1 + n)(1 + m)]
            = [1 - (m + n), m + n; -(m + n), 1 + (m + n)] = A_{m+n}.
39. Let A = [a b; c d], so that A^t = [a c; b d]. Then

    AA^t = [a b; c d][a c; b d] = [a^2 + b^2, ac + bd; ac + bd, c^2 + d^2] = [0 0; 0 0]

if and only if a^2 + b^2 = 0, c^2 + d^2 = 0, and ac + bd = 0. The only solution to these equations is a = b = c = d = 0,
so the only matrix that satisfies AA^t = 0 is the 2×2 zero matrix.
40. Since A and B are symmetric, then A^t = A and B^t = B. In addition, if AB = BA, then

    (AB)^t = B^t A^t = BA = AB

and hence, AB is symmetric.
41. If A is an m×n matrix, then A^t is an n×m matrix, so that AA^t and A^t A are both defined, with AA^t
an m×m matrix and A^t A an n×n matrix. Since (AA^t)^t = (A^t)^t A^t = AA^t, the matrix AA^t is
symmetric. Similarly, (A^t A)^t = A^t (A^t)^t = A^t A, so that A^t A is also symmetric.
42. Since A and B are idempotent, then A^2 = A and B^2 = B. In addition, if AB = BA, then

    (AB)^2 = ABAB = AABB = A^2 B^2 = AB

and hence, AB is idempotent.

43. Let A = (a_ij) be an n×n matrix. If A^t = -A, then the diagonal entries satisfy a_ii = -a_ii and hence,
a_ii = 0 for each i.
44. Let A = (a_ij) and B = (b_ij) be n×n matrices.
a. The diagonal entries of A + B are a_11 + b_11, a_22 + b_22, ..., a_nn + b_nn, so

    tr(A + B) = (a_11 + b_11) + (a_22 + b_22) + ... + (a_nn + b_nn)
              = (a_11 + a_22 + ... + a_nn) + (b_11 + b_22 + ... + b_nn) = tr(A) + tr(B).

b. The diagonal entries of cA are ca_11, ca_22, ..., ca_nn, so

    tr(cA) = ca_11 + ca_22 + ... + ca_nn = c tr(A).
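The two trace identities of Exercise 44 can be spot-checked numerically. This is a small pure-Python sketch (not part of the original text); the 3×3 matrices below are arbitrary examples, not taken from the exercises.

```python
def trace(M):
    # sum of the diagonal entries
    return sum(M[i][i] for i in range(len(M)))

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, M):
    return [[c * x for x in row] for row in M]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

assert trace(add(A, B)) == trace(A) + trace(B)   # tr(A + B) = tr(A) + tr(B)
assert trace(scale(5, A)) == 5 * trace(A)        # tr(cA) = c tr(A)
```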
Exercise Set 1.4

The inverse of a square matrix plays the same role for matrices as the reciprocal of a nonzero number plays for real numbers.
The n×n identity matrix I, with each diagonal entry a 1 and all other entries 0, satisfies AI = IA = A for
all n×n matrices A. The inverse of an n×n matrix A, when it exists, is unique and is the matrix, denoted
by A^(-1), such that AA^(-1) = A^(-1)A = I. In the case of 2×2 matrices,

    A = [a b; c d] has an inverse if and only if ad - bc ≠ 0, and A^(-1) = (1/(ad - bc))[d -b; -c a].

A procedure for finding the inverse of an n×n matrix involves forming the augmented matrix [A | I] and
then row reducing the n×2n matrix. If in the reduction process A is transformed to the identity matrix,
then the resulting augmented part of the matrix is the inverse. For example, if

    A = [2 2 1; 1 1 2; 2 1 2]  and  B = [0 2 2; 1 1 0; 2 1 1],

then A is invertible and B is not, since

    [2 2 1 | 1 0 0; 1 1 2 | 0 1 0; 2 1 2 | 0 0 1] reduces to [1 0 0 | 4 3 5; 0 1 0 | 2 2 3; 0 0 1 | 3 2 4],

where the augmented block on the right is A^(-1), but

    [0 2 2 | 1 0 0; 1 1 0 | 0 1 0; 2 1 1 | 0 0 1] reduces to [1 0 1 | 0 1 1; 0 1 1 | 0 2 1; 0 0 0 | 1 4 2].

The inverse of the product of two invertible matrices A and B can be found from the inverses of the individual
matrices A^(-1) and B^(-1). But as in the case of the transpose operation, the order of multiplication is reversed,
that is, (AB)^(-1) = B^(-1) A^(-1).
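The [A | I] procedure described above can be sketched in Python with exact rational arithmetic (not part of the original text); the function name inverse is illustrative, and the matrices used to exercise it below are arbitrary examples.

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by row reducing the augmented matrix [A | I]."""
    n = len(A)
    # build [A | I] with exact Fraction entries
    M = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None            # A cannot be reduced to I: not invertible
        M[col], M[piv] = M[piv], M[col]          # row interchange
        M[col] = [x / M[col][col] for x in M[col]]   # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:      # eliminate the other entries
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # augmented part is A^(-1)

# for a 2x2 matrix this agrees with the ad - bc formula
assert inverse([[1, 2], [3, 4]]) == [[-2, 1], [Fraction(3, 2), Fraction(-1, 2)]]
# a matrix with ad - bc = 0 has no inverse
assert inverse([[1, 2], [2, 4]]) is None
```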
Solutions to Exercises
1. Since (1)(1) - (2)(3) = -5 is nonzero, the inverse exists and

    A^(-1) = -(1/5)[1 -2; -3 1].

2. Since (3)(2) - (-1)(1) = 7 is nonzero, the inverse exists and

    A^(-1) = (1/7)[2 -1; 1 3].
3. Since (2)(4) - (2)(4) = 0, the matrix is not invertible.

4. Since (1)(2) - (1)(2) = 0, the matrix is not invertible.
5. To determine whether or not the matrix is invertible, we row reduce the augmented matrix
_
_
0 1 1 1 0 0
3 1 1 0 1 0
1 2 1 0 0 1
_
_
R
1
R
3
_
_
1 2 1 0 0 1
3 1 1 0 1 0
0 1 1 1 0 0
_
_
(3)R
1
+R
2
R
2
_
_
1 2 1 0 0 1
0 5 4 0 1 3
0 1 1 1 0 0
_
_
R
2
R
3
_
_
1 2 1 0 0 1
0 1 1 1 0 0
0 5 4 0 1 3
_
_
(2)R
2
+R
1
R
1
_
_
1 0 1 2 0 1
0 1 1 1 0 0
0 5 4 0 1 3
_
_
(5)R
2
+R
3
R
3
_
_
1 0 1 2 0 1
0 1 1 1 0 0
0 0 1 5 1 3
_
_
(1)R
3
R
3
_
_
1 0 1 2 0 1
0 1 1 1 0 0
0 0 1 5 1 3
_
_
(1)R
3
+R
1
R
1
_
_
1 0 0 3 1 2
0 1 1 1 0 0
0 0 1 5 1 3
_
_
(1)R
3
+R
2
R
2
_
_
1 0 0 3 1 2
0 1 0 4 1 3
0 0 1 5 1 3
_
_
. Since the original matrix has been reduced to the identity matrix, the inverse
exists and A
1
=
_
_
3 1 2
4 1 3
5 1 3
_
_
.
6. Since
_
_
0 2 1 1 0 0
1 0 0 0 1 0
2 1 1 0 0 1
_
_
reduces to
_
_
1 0 0 0 1 0
0 1 0 1 2 1
0 0 1 1 4 2
_
_
the matrix is invertible and A
1
=
_
_
0 1 0
1 2 1
1 4 2
_
_
.
7. Since the matrix A is row equivalent to the
matrix
_
_
1 1 0
0 0 1
0 0 0
_
_
, the matrix A can not be
reduced to the identity and hence is not invertible.
8. Since the matrix A is not row equivalent to
the identity matrix, then A is not invertible.
9. A
1
=
_
_
1/3 1 2 1/2
0 1 2 1
0 0 1 1/2
0 0 0 1/2
_
_
10. A
1
=
_
_
1 3 3 0
0 1 1 1/2
0 0 1/2 1/2
0 0 0 1/2
_
_
11. A
1
=
1
3
_
_
3 0 0 0
6 3 0 0
1 2 1 0
1 1 1 1
_
_
12.
A
1
=
_
_
1 0 0 0
2 1 0 0
1/2 1/2 1/2 0
1 1 0 1/2
_
_
13. The matrix A is not invertible. 14. The matrix A is not invertible.
15. A
1
=
_
_
0 0 1 0
1 1 2 1
1 2 1 1
0 1 1 1
_
_
16. The matrix A is not invertible.
17. Performing the operations, we have that AB+A =
_
3 8
10 10
_
= A(B+I) and AB+B =
_
2 9
6 3
_
=
(A +I)B.
18. Since the distributive property holds for matrix multiplication and addition, we have that (A+I)(A+I) =
A
2
+A +A +I = A
2
+ 2A +I.
19. Let A =
_
1 2
2 1
_
. a. Since A
2
=
_
3 4
4 3
_
and 2A =
_
2 4
4 2
_
, then A
2
2A+ 5I = 0. b.
Since (1)(1) (2)(2) = 5, the inverse exists and A
1
=
1
5
_
1 2
2 1
_
=
1
5
(2I A).
c. If A
2
2A+5I = 0, then A
2
2A = 5I, so that A
_
1
5
(2I A)
_
=
2
5
A
1
5
A
2
=
1
5
(A
2
2A) =
1
5
(5I) = I.
Hence A
1
=
1
5
(2I A).
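Parts (a)-(c) of Exercise 19 can be verified numerically. Below is a pure-Python sketch (not part of the original text); the matrix A = [1 -2; 2 1] is the sign-restored reading consistent with det(A) = 5, and exact Fraction arithmetic is used.

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[Fraction(1), Fraction(-2)], [Fraction(2), Fraction(1)]]
I = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

# a. A^2 - 2A + 5I is the zero matrix
A2 = matmul(A, A)
Z = [[A2[i][j] - 2 * A[i][j] + 5 * I[i][j] for j in range(2)] for i in range(2)]
assert Z == [[0, 0], [0, 0]]

# b./c. the inverse is A^(-1) = (1/5)(2I - A)
Ainv = [[(2 * I[i][j] - A[i][j]) / 5 for j in range(2)] for i in range(2)]
assert matmul(A, Ainv) == I and matmul(Ainv, A) == I
```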
20. Applying the operations (3)R
1
+R
2
R
2
and (1)R
1
+R
3
R
3
gives
_
_
1 0
3 2 0
1 2 1
_
_
reduces to
_
_
1 0
0 2 3 0
1 2 1
_
_
. So if =
2
3
, then the matrix can not be reduced to the identity
and hence, will not be invertible.
21. The matrix is row equivalent to
_
_
1 0
0 3 1
0 1 2 1
_
_
. If = 2, then the second and third rows are
identical, so the matrix can not be row reduced to the identity and hence, is not invertible.
22. Since the matrix is row equivalent to
_
_
1 2 1
0 4 1
0 4 2 0
_
_
, if = 2, then the matrix can not be row
reduced to the identity matrix and hence, is not invertible.
23. a. If = 1, then the matrix A is invertible.
b. When = 1 the inverse matrix is A
1
=
_
_
1
1
1
1
1
1
1
1
1
1
0 0 1
_
_
.
24. If = 0, =
2 or =
_
=
_
1 0
0 1
_
,
so that A
1
= A
t
and hence, A is an orthogonal matrix.
37. a. Using the associative property of matrix multiplication, we have that
(ABC)(C
1
B
1
A
1
) = (AB)CC
1
(B
1
A
1
) = ABB
1
A
1
= AA
1
= I.
b. The proof is by induction on the number of matrices k.
Base Case: When k = 2, since (A
1
A
2
)
1
= A
1
2
A
1
1
, the statement holds.
Inductive Hypothesis: Suppose that (A
1
A
2
A
k
)
1
= A
1
k
A
1
k1
A
1
1
. Then for k + 1 matrices, we
have that (A
1
A
2
A
k
A
k+1
)
1
= ([A
1
A
2
A
k
]A
k+1
)
1
. Since [A
1
A
2
A
k
] and A
k+1
can be considered as
two matrices, by the base case, we have that ([A
1
A
2
A
k
]A
k+1
)
1
= A
1
k+1
[A
1
A
2
A
k
]
1
. Finally, by the
inductive hypothesis
([A
1
A
2
A
k
]A
k+1
)
1
= A
1
k+1
[A
1
A
2
A
k
]
1
= A
1
k+1
A
1
k
A
1
k1
A
1
1
.
38. Since a
kk
= 0 for each k, then
_
_
a
11
0 . . . . . . 0
0 a
22
0 . . . 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 . . . 0 a
nn
_
_
_
_
1
a11
0 . . . . . . 0
0
1
a22
0 . . . 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 . . . 0
1
ann
_
_
=
_
_
1 0 . . . . . . 0
0 1 0 . . . 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 . . . 0 1
_
_
and hence, A is invertible.
39. If A is invertible, then the augmented matrix [A|I] can be row reduced to [I|A
1
]. If A is upper triangular,
then only terms on or above the main diagonal can be aected by the reduction process and hence, the inverse
is upper triangular. Similarly, the inverse for an invertible lower triangle matrix is also lower triangular.
40. Since A is invertible, then A is row equivalent to the identity matrix. If B is row equivalent to A, then
B is also row equivalent to the identity matrix and hence, B is invertible.
41. a. Expanding the matrix equation
_
a b
c d
_ _
x
1
x
2
x
3
x
4
_
=
_
1 0
0 1
_
, gives
_
ax
1
+bx
3
ax
2
+bx
4
cx
1
+ dx
3
cx
2
+dx
4
_
=
_
1 0
0 1
_
. b. From part (a), we have the two linear systems
_
ax
1
+bx
3
= 1
cx
1
+dx
3
= 0
and
_
ax
2
+bx
4
= 0
cx
2
+dx
4
= 1
.
In the rst linear system, multiplying the rst equation by d and the second by b and then adding the results
gives the equation (ad bc)x
1
= d. Since the assumption is that ad bc = 0, then d = 0. Similarly, from
the second linear system we conclude that b = 0. c. From part (b), both b = 0 and d = 0. Notice that
if in addition either a = 0 or c = 0, then the matrix is not invertible. Also from part (b), we have that
ax
1
= 1, ax
2
= 0, cx
1
= 0, and cx
2
= 1. If a and c are not zero, then these equations are inconsistent and the
matrix is not invertible.
Exercise Set 1.5

A linear system can be written as a matrix equation Ax = b, where A is the coefficient matrix of the linear
system, x is the vector of variables, and b is the vector of constants on the right-hand side of each equation.
For example, the matrix equation corresponding to the linear system

    x1 -  x2 -  x3 =  2
    x1 + 2x2 + 2x3 = -3
    x1 + 2x2 +  x3 =  1

is

    [1 -1 -1; 1 2 2; 1 2 1][x1; x2; x3] = [2; -3; 1].

If the coefficient matrix A, as in the previous example, has an inverse, then the linear system always has a
unique solution. That is, both sides of the equation Ax = b can be multiplied on the left by A^(-1) to obtain
the solution x = A^(-1) b. In the above example, since the coefficient matrix is invertible, the linear system has
a unique solution. That is,

    x = [x1; x2; x3] = A^(-1)[2; -3; 1] = (1/3)[2 1 0; -1 -2 3; 0 3 -3][2; -3; 1] = (1/3)[1; 7; -12] = [1/3; 7/3; -4].

Every homogeneous linear system Ax = 0 has at least one solution, namely the trivial solution, where each
component of the vector x is 0. If in addition the linear system is an n×n (square) linear system and A
is invertible, then the only solution is the trivial one. That is, the unique solution is x = A^(-1) 0 = 0. The
equivalent statement is that if Ax = 0 has two distinct solutions, then the matrix A is not invertible. One
additional fact established in the section, and useful in solving the exercises, is that when the linear
system Ax = b has two distinct solutions, it has infinitely many solutions. That is, every linear
system has either a unique solution, infinitely many solutions, or is inconsistent.
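As a check on the worked example above, the following Python sketch (not part of the original text) solves the same system by Gaussian elimination with back substitution, using exact Fraction arithmetic; the function name solve is illustrative and invertibility of A is assumed.

```python
from fractions import Fraction

def solve(A, b):
    """Solve Ax = b for invertible A by Gaussian elimination + back substitution."""
    n = len(A)
    # augmented matrix [A | b] with exact entries
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):                      # forward elimination
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# the system from the example: unique solution (1/3, 7/3, -4)
A = [[1, -1, -1], [1, 2, 2], [1, 2, 1]]
b = [2, -3, 1]
assert solve(A, b) == [Fraction(1, 3), Fraction(7, 3), Fraction(-4)]
```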
Solutions to Exercises
1. Let A =
_
2 3
1 2
_
, x =
_
x
y
_
, and b =
_
1
4
_
.
2. A =
_
4 1
2 5
_
, x =
_
x
y
_
, and b =
_
3
2
_
3. Let A =
_
_
2 3 1
1 1 2
3 2 2
_
_
, x =
_
_
x
y
z
_
_
, and
b =
_
_
1
1
3
_
_
.
4. A =
_
_
0 3 2
1 0 4
1 0 3
_
_
, x =
_
_
x
y
z
_
_
, and b =
_
_
2
3
4
_
_
5. Let A =
_
_
4 3 2 3
3 3 1 0
2 3 4 4
_
_
,
x =
_
_
x
1
x
2
x
3
x
4
_
_
, and b =
_
_
1
4
3
_
_
.
6. A =
_
_
0 3 1 2
0 4 2 4
1 3 2 0
_
_
,
x =
_
_
x
1
x
2
x
3
x
4
_
_
, and b =
_
_
4
0
3
_
_
7.
_
2x 5y = 3
2x + y = 2
8.
_
2x 4y = 1
3y = 1
9.
_
_
2y = 3
2x y z = 1
3x y + 2z = 1
10.
_
_
4x 5y + 5z = 3
4x y +z = 2
4x + 3y + 5z = 1
11.
_
2x
1
+ 5x
2
5x
3
+ 3x
4
= 2
3x
1
+x
2
2x
3
4x
4
= 0
12.
_
_
2x
2
+ 4x
3
2x
4
= 4
2x
1
+x
3
+x
4
= 3
x
1
+x
3
2x
4
= 1
13. The solution is x = A
1
b =
_
_
1
4
3
_
_
. 14. The solution is x = A
1
b =
_
_
6
8
2
_
_
.
15. The solution is x = A
1
b =
_
_
9
3
8
7
_
_
. 16. The solution is x = A
1
b =
_
_
1
2
1
1
_
_
17. The coecient matrix of the linear system is A =
_
1 4
3 2
_
, so that
A
1
=
1
(1)(2) (4)(3)
_
2 4
3 1
_
=
1
10
_
2 4
3 1
_
.
Hence, the linear system has the unique solution x = A
1
_
2
3
_
=
1
10
_
16
9
_
.
18. Since the inverse of the coecient matrix
_
2 4
2 3
_
is A
1
=
1
2
_
3 4
2 2
_
, the solution is x =
A
1
_
4
3
_
=
_
12
7
_
.
19. If the coecient matrix is denoted by A, then the unique solution is
A
1
_
_
1
1
1
_
_
=
_
_
7 3 1
3 1 0
8 3 1
_
_
_
_
1
1
1
_
_
=
_
_
11
4
12
_
_
.
20. If the coecient matrix is denoted by A, then the unique solution is
A
1
_
_
0
1
2
_
_
=
_
_
2 5 1
2 4 1
1 2 0
_
_
_
_
0
1
2
_
_
=
_
_
7
6
2
_
_
.
21. If the coecient matrix is denoted by A, then the unique solution is
A
1
_
_
1
1
0
0
_
_
=
_
_
1 1 0 0
2 2 1 2
0
1
3
2
3
1
0
1
3
1
3
0
_
_
_
_
1
1
0
0
_
_
=
1
3
_
_
0
0
1
1
_
_
.
22. If the coecient matrix is denoted by A, then the unique solution is
A
1
_
_
3
2
3
1
_
_
=
1
2
_
_
1 2 3 1
0 2 2 0
1 4 3 0
1 2 1 1
_
_
_
_
3
2
3
1
_
_
=
_
9
2
5
7
5
2
_
_
23. a. x =
1
5
_
3 1
2 1
_ _
2
1
_
=
1
5
_
7
3
_
b. x =
1
5
_
3 1
2 1
_ _
3
2
_
=
1
5
_
7
8
_
24. a. x = A
1
_
_
2
1
1
_
_
=
_
_
7 3 1
3 1 0
8 3 1
_
_
_
_
2
1
1
_
_
=
_
_
18
7
20
_
_
b. x = A
1
_
_
1
1
0
_
_
=
_
_
7 3 1
3 1 0
8 3 1
_
_
_
_
1
1
0
_
_
=
_
_
10
4
11
_
_
25. The reduced row echelon form of the matrix A is

    [1 4; 3 12; 2 8] reduces to [1 4; 0 0; 0 0],

hence the linear system has infinitely many solutions, with solution set S = { (-4t, t) : t in R }. A particular
nontrivial solution is x = -4 and y = 1.
26. Since the matrix
_
_
1 2 4
2 4 8
3 6 12
_
_
reduces to
_
_
1 2 4
0 0 0
0 0 0
_
_
,
the solution set is S =
_
_
_
_
_
2s 4t
s
t
_
_
s, t R
_
_
_
with a particular nontrivial solution of x = 2, y = 1, and
z = 1.
27. A =
_
_
1 2 1
1 2 1
1 2 1
_
_
28. A =
_
_
1 1 1
1 1 1
1 1 1
_
_
29. Since Au = Av with u ≠ v, then A(u - v) = 0 and hence, the homogeneous linear system Ax = 0 has a
nontrivial solution. Therefore, A is not invertible.

30. Since u is a solution to Ax = b and v is a solution to Ax = 0, then Au = b and Av = 0. Hence,
A(u + v) = Au + Av = b + 0 = b.
31. a. Let A =
_
_
2 1
1 1
3 2
_
_
, x =
_
x
y
_
, and b =
_
_
1
2
1
_
_
. The reduced row echelon form of the augmented
matrix is
_
_
2 1 1
1 1 2
1 2 1
_
_
_
_
1 0 1
0 1 1
0 0 0
_
_
,
so that the solution to the linear system is x =
_
1
1
_
. b. C =
1
3
_
1 1 0
1 2 0
_
c. The solution to the linear system is also given by Cb =
1
3
_
1 1 0
1 2 0
_
_
_
1
2
1
_
_
=
_
1
1
_
.
32. a. Since the augmented matrix
_
_
2 1 3
1 1 2
3 2 5
_
_
reduces to
_
_
2 1 3
0 1 1
0 0 0
_
_
the unique solution is x =
_
1
1
_
. b. C =
_
1 1 0
1 2 0
_
c. Cb =
_
1 1 0
1 2 0
_
_
_
3
2
5
_
_
=
_
1
1
_
Exercise Set 1.6

The determinant of a square matrix is a number that provides information about the matrix. If the matrix
is the coefficient matrix of a linear system, then the determinant gives information about the solutions of the
linear system. For a 2×2 matrix A = [a b; c d], det(A) = ad - bc. Another class of matrices where finding
the determinant requires a simple computation are the triangular matrices. In this case the determinant is
the product of the entries on the diagonal. So if A = (a_ij) is an n×n triangular matrix, then det(A) = a_11 a_22 ... a_nn.
The standard row operations on a matrix can be used to reduce a square matrix to an upper triangular matrix,
and the effect of a row operation on the determinant can be used to find the determinant of the matrix from
the triangular form.

If two rows are interchanged, then the new determinant is the negative of the original determinant.
If a row is multiplied by a scalar c, then the new determinant is c times the original determinant.
If a multiple of one row is added to another, then the new determinant is unchanged.

Some immediate consequences of these properties are: if a matrix has a row of zeros, or two equal rows, or one
row a multiple of another, then the determinant is 0. The same properties hold if row is replaced with column.
If A is an n×n matrix, since in the matrix cA each row of A is multiplied by c, then det(cA) = c^n det(A).
Two other useful properties are det(A^t) = det(A) and, if A is invertible, det(A^(-1)) = 1/det(A).
The most important observation made in Section 1.6 is that a square matrix is invertible if and only if its
determinant is not zero. Then

    A is invertible <=> det(A) ≠ 0 <=> Ax = b has a unique solution
                    <=> Ax = 0 has only the trivial solution
                    <=> A is row equivalent to I.

One useful observation that follows is that if the determinant of the coefficient matrix is 0, then the linear
system is inconsistent or has infinitely many solutions.
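The row-reduction method for determinants described above can be sketched in Python (not part of the original text), tracking the sign flips from row interchanges and using exact Fraction arithmetic; the function name det is illustrative.

```python
from fractions import Fraction

def det(A):
    """Determinant via reduction to upper triangular form.
    A row interchange flips the sign; adding a multiple of one row to
    another leaves the determinant unchanged."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)        # no pivot in this column: det = 0
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            sign = -sign
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    p = Fraction(sign)
    for i in range(n):                # product of the diagonal entries
        p *= M[i][i]
    return p

# agrees with ad - bc for a 2x2 matrix
assert det([[3, 7], [2, 5]]) == 3 * 5 - 7 * 2
# triangular: product of the diagonal entries
assert det([[2, 9, 4], [0, 3, 1], [0, 0, 4]]) == 24
# two equal rows: determinant 0, so the matrix is not invertible
assert det([[1, 2], [1, 2]]) == 0
```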
Solutions to Exercises
1. Since the matrix is triangular, the determinant
is the product of the diagonal entries. Hence the
determinant is 24.
2. Since the matrix has two identical rows, the
determinant is 0.
3. Since the matrix is triangular, the determinant
is the product of the diagonal entries. Hence the
determinant is 10.
4. Since the second row is twice the rst, the
determinant 0.
5. Since the determinant is 2, the matrix is in-
vertible.
6. Since the determinant is -17, the matrix is
invertible.
7. Since the matrix is triangular the determinant
is 6 and hence, the matrix is invertible.
8. Since there are two identical rows the determi-
nant is 0, and hence the matrix is not invertible.
9. a. Expanding along row one
det(A) = 2
1 4
1 2
(0)
3 4
4 2
+ (1)
3 1
4 1
= 5.
b. Expanding along row two
det(A) = 3
0 1
1 2
+ (1)
2 1
4 2
+ (4)
2 0
4 1
= 5.
c. Expanding along column two
det(A) = (0)
3 4
4 2
+ (1)
2 1
4 2
+ (1)
2 1
3 4
= 5.
d. det
_
_
_
_
4 1 2
3 1 4
2 0 1
_
_
_
_
= 5 e. Let B denote the matrix in part (d) and B
). f. Let B
) = det(B
1 1 2
2 0 1
1 0 1
+ 3
1 1 2
3 0 1
0 0 1
1 1 2
3 2 1
0 1 1
+ 3
1 1 1
3 2 0
0 1 0
= 15.
b. Expanding along row three the signs alternate in the pattern +, , +, , and the determinant is again
15. c. Expanding along column two the signs alternate in the pattern , +, , + and the determinant is
again 15. d Since row two contains two zeros this is the preferred expansion. e. Since the determinant is
nonzero, the matrix is invertible.
11. Determinant: 13; Invertible 12. Determinant: 19; Invertible
13. Determinant: 16; Invertible 14. Determinant: 5; Invertible
15. Determinant: 0; Not invertible 16. Determinant: 0; Not invertible
17. Determinant: 30; Invertible 18. Determinant: -76; Invertible
19. Determinant: 90; Invertible 20. Determinant: 8; Invertible
21. Determinant: 0; Not invertible 22. Determinant: 0; Not invertible
23. Determinant: 32; Invertible 24. Determinant: 59; Invertible
25. Determinant: 0; Not invertible 26. Determinant: 5; Invertible
27. Since multiplying a matrix by a scalar multiplies each row by the scalar, we have that
det(3A) = 3^3 det(A) = 270.

28. det(2A^(-1)) = 2^3 det(A^(-1)) = 8/det(A) = 4/5

29. det((2A)^(-1)) = 1/det(2A) = 1/(2^3 det(A)) = 1/80

30. Since the matrix is the transpose of the original matrix but with two columns interchanged,
the determinant is -10.
31. Expanding along row 3,

    |x^2 x 2; 2 1 1; 0 0 -5| = (-5)|x^2 x; 2 1| = (-5)(x^2 - 2x) = -5x^2 + 10x.

Then the determinant of the matrix is 0 when -5x^2 + 10x = -5x(x - 2) = 0, that is, x = 0 or x = 2.
32. Since
_
_
1 1 1 1 1
0 1 1 1 1
1 0 1 1 1
1 1 0 1 1
1 1 1 0 1
_
_
reduces to
_
1 1 1 1 1
0 1 1 1 1
0 0 1 1 1
0 0 0 1 1
0 0 0 0 1
_
_
using only the operation of adding a multiple of one row to another, the determinants of the two matrices are
equal. Since the reduced matrix is triangular and the product of the diagonal entries is 1, then the determinant
of the original matrix is also 1.
33. Since the determinant is a
1
b
2
b
1
a
2
xb
2
+xa
2
+yb
1
ya
1
, then the determinant will be zero precisely
when y =
b2a2
b1a1
x +
b1a2a1b2
b1a1
. This equation describes a straight line in the plane.
34. a. A =
_
1 1
2 2
_
, B =
_
1 1
2 2
_
, C =
_
1 1
2 2
_
b. det(A) = 0, det(B) = 0, det(C) = 4 c.
Only the matrix C has an inverse. d. Since
_
1 1 3
2 2 1
_
reduces to
_
1 1 3
0 0 5
_
, the linear system is
inconsistent. e. Since
_
1 1 3
2 2 6
_
reduces to
_
1 1 3
0 0 0
_
, there are innitely many solutions given by
x = 3 y, y R. f. Since
_
1 1 3
2 2 1
_
reduces to
_
1 1 3
0 4 5
_
, the linear system has the unique
solution x =
7
4
, y =
5
4
.
35. a. A =
_
_
1 1 2
1 2 3
2 2 2
_
_
b. det(A) = 2 c. Since the determinant of the coecient matrix is not zero
it is invertible and hence, the linear system has a unique solution. d. The unique solution is x =
_
_
3
8
4
_
_
.
36. a. A =
_
_
1 3 2
2 5 1
2 6 4
_
_
b. det(A) = 0 c. The system will have innitely many solutions or is
inconsistent. d. Since
_
_
1 3 2 1
2 5 1 2
2 6 4 2
_
_
reduces to
_
_
1 3 2 1
0 1 5 4
0 0 0 0
_
_
, the linear system has innitely
many solutions given by x = 11 13z, y = 4 + 5z, z R.
37. a. A =
_
_
1 0 1
2 0 2
1 3 3
_
_
b. Expanding along column three, then
det(A) =
1 1
2 2
_
_
1 0 1 0
0 1
4
3
0
0 0 0 1
_
_
the linear system is inconsistent.
38. a. Since
x
2
x y 1
0 0 3 1
1 1 1 1
16 4 2 1
= 3x
2
27x 12y + 36,
the equation of the parabola is
3x
2
27x 12y + 36.
b.
x
y
220
220
20
20
39. a. Since
y
2
x y 1
4 2 2 1
4 3 2 1
9 4 3 1
= 29y
2
+ 20x 25y + 106,
the equation of the parabola is
29y
2
+ 20x 25y + 106 = 0.
b.
x
y
25
25
5
5
40. a. Since
x
2
+y
2
x y 1
18 3 3 1
5 1 2 1
9 3 0 1
= 24x
2
24y
2
6x60y+234,
the equation of the circle is
24x
2
24y
2
6x 60y + 234 = 0.
b.
x
y
25
25
5
5
41. a. Since
x
2
y
2
x y 1
0 16 0 4 1
0 16 0 4 1
1 4 1 2 1
4 9 2 3 1
= 136x
2
16y
2
328x + 256,
the equation of the hyperbola is
136x
2
16y
2
328x + 256 = 0.
b.
x
y
25
25
5
5
42. a. Since
x
2
y
2
x y 1
9 4 3 2 1
1 9 1 3 1
1 1 1 1 1
16 4 4 2 1
= 84x
2
294y
2
+84x+630y+924,
the equation of the ellipse is
84x
2
294y
2
+ 84x + 630y + 924 = 0.
b.
x
y
25
25
5
5
43. a. Since
x
2
xy y
2
x y 1
1 0 0 1 0 1
0 0 1 0 1 1
1 0 0 1 0 1
4 4 4 2 2 1
9 3 1 3 1 1
= 12+12x
2
36xy+42y
2
30y,
the equation of the ellipse is
12 + 12x
2
36xy + 42y
2
30y = 0.
b.
x
y
25
25
5
5
44.
x =
4 3
4 2
2 3
2 2
= 2,
y =
2 4
2 4
2 3
2 2
= 0
45.
x =
7 5
6 3
5 5
2 3
=
9
5
,
y =
5 7
2 6
5 5
2 3
=
16
5
46.
x =
4 5
3 1
2 5
4 1
=
11
8
,
y =
2 4
4 3
2 5
4 1
=
5
9
47.
x =
3 4
10 5
9 4
7 5
=
25
73
,
y =
9 3
7 10
9 4
7 5
=
111
73
48.
x =
12 7
5 11
10 7
12 11
=
97
26
,
y =
10 12
12 5
10 7
12 11
=
47
23
49.
x =
4 3
3 4
1 3
8 4
=
25
28
,
y =
1 4
8 3
1 3
8 4
=
29
28
50. x =
8 1 4
3 4 1
8 0 1
2 1 4
0 4 1
4 0 1
=
91
68
, y =
2 8 4
0 3 1
4 8 1
2 1 4
0 4 1
4 0 1
=
3
34
, z =
2 1 8
0 4 3
4 0 8
2 1 4
0 4 1
4 0 1
=
45
17
51. x =
2 3 2
2 3 8
2 2 7
2 3 2
1 3 8
3 2 7
=
160
103
, y =
2 2 2
1 2 8
3 2 7
2 3 2
1 3 8
3 2 7
=
10
103
, z =
2 3 2
1 3 2
3 2 2
2 3 2
1 3 8
3 2 7
=
42
103
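Cramer's rule for a 2×2 system, as used in Exercises 44-51, can be sketched in Python (not part of the original text). The system 5x + 5y = 7, 2x + 3y = 6 and its solution (-9/5, 16/5) are the sign-restored reading of Exercise 45; the function name cramer2 is illustrative.

```python
from fractions import Fraction

def cramer2(a, b, c, d, e, f):
    """Solve ax + by = e, cx + dy = f by Cramer's rule.
    Assumes the coefficient determinant ad - bc is nonzero."""
    D = a * d - b * c
    x = Fraction(e * d - b * f, D)   # replace the x-column by (e, f)
    y = Fraction(a * f - e * c, D)   # replace the y-column by (e, f)
    return x, y

# Exercise 45 as reconstructed: 5x + 5y = 7, 2x + 3y = 6
x, y = cramer2(5, 5, 2, 3, 7, 6)
assert (x, y) == (Fraction(-9, 5), Fraction(16, 5))
```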
52. Suppose A^t = -A. Then det(A) = det(A^t) = det(-A) = (-1)^n det(A). If n is odd, then det(A) =
-det(A) and hence det(A) = 0. Therefore, A is not invertible.
53. Expansion of the determinant of A across row one equals the expansion down column one of A^t, so
det(A) = det(A^t).
54. If A = (a_ij) is upper triangular, then det(A) = a_11 a_22 ... a_nn. But A^t is lower triangular with the same
diagonal entries, so det(A^t) = a_11 a_22 ... a_nn = det(A).
Exercise Set 1.7

A factorization of a matrix, like the factoring of a quadratic polynomial, refers to writing a matrix as the product
of other matrices. Just as the linear factors of a quadratic are useful and provide information about
the original quadratic polynomial, the lower triangular and upper triangular factors in an LU factorization
are easier to work with and can be used to provide information about the matrix. An elementary matrix is
obtained by applying one row operation to the identity matrix. For example,

    [1 0 0; 0 1 0; 0 0 1] --(-1)R1 + R3 -> R3--> [1 0 0; 0 1 0; -1 0 1],

and the result is an elementary matrix.
If a matrix A is multiplied by an elementary matrix E, the result is the same as applying to the matrix A the
corresponding row operation that defined E. For example, using the elementary matrix above,

    EA = [1 0 0; 0 1 0; -1 0 1][1 3 1; 2 1 0; 1 2 1] = [1 3 1; 2 1 0; 0 -1 0].

Also, since each elementary row operation can be reversed, elementary matrices are invertible. To find an LU
factorization of A:
Row reduce the matrix A to an upper triangular matrix U.
Use the corresponding elementary matrices to write U in the form U = E_k ... E_1 A.
If row interchanges are not required, then each of the elementary matrices is lower triangular, so that
A = E_1^(-1) ... E_k^(-1) U is an LU factorization of A. If row interchanges are required, then a permutation
matrix is also required.

When A = LU is an LU factorization of A, and A is invertible, then A^(-1) = (LU)^(-1) = U^(-1) L^(-1). If the
determinant of A is required, then since L and U are triangular, their determinants are simply the products of
their diagonal entries and det(A) = det(LU) = det(L) det(U). An LU factorization can also be used to solve
a linear system. To solve the linear system

     x1 -  x2 + 2x3 = 2
    2x1 + 2x2 +  x3 = 0
    -x1 +  x2       = 1

the first step is to find an LU factorization of the coefficient matrix of the linear system. That is,

    A = LU = [1 0 0; 2 1 0; -1 0 1][1 -1 2; 0 4 -3; 0 0 2].

Next solve the linear system Ly = b = [2; 0; 1] using forward substitution, so that y1 = 2, y2 = -2y1 = -4,
y3 = 1 + y1 = 3. As the final step solve Ux = y = [2; -4; 3] using back substitution, so that x3 = 3/2,
x2 = (1/4)(-4 + 3x3) = 1/8, x1 = 2 + x2 - 2x3 = -7/8.
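The forward- and back-substitution steps above can be sketched in Python (not part of the original text). The L, U, and b below are those of the worked example with signs restored; L is assumed to be unit lower triangular, and the function names are illustrative.

```python
from fractions import Fraction

def forward_sub(L, b):
    # solve Ly = b for unit lower triangular L (diagonal entries are 1)
    y = []
    for i, row in enumerate(L):
        y.append(Fraction(b[i]) - sum(row[j] * y[j] for j in range(i)))
    return y

def back_sub(U, y):
    # solve Ux = y for upper triangular U with nonzero diagonal
    n = len(U)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [2, 1, 0], [-1, 0, 1]]
U = [[1, -1, 2], [0, 4, -3], [0, 0, 2]]
b = [2, 0, 1]

y = forward_sub(L, b)
assert y == [2, -4, 3]
x = back_sub(U, y)
assert x == [Fraction(-7, 8), Fraction(1, 8), Fraction(3, 2)]
```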
Solutions to Exercises
1. a. E =
_
_
1 0 0
2 1 0
0 0 1
_
_
b. EA =
_
_
1 2 1
5 5 4
1 1 4
_
_
2. a. E =
_
_
0 1 0
1 0 0
0 0 1
_
_
b. EA =
_
_
3 1 2
1 2 1
1 1 4
_
_
3. a. E =
_
_
1 0 0
0 1 0
0 3 1
_
_
b. EA =
_
_
1 2 1
3 1 2
8 2 10
_
_
4. a. E =
_
_
1 0 0
0 1 0
1 0 1
_
_
b. EA =
_
_
1 2 1
3 1 2
0 1 5
_
_
5. a. The required row operations are 2R
1
+R
2
R
2
,
1
10
R
2
R
2
, and 3R
2
+R
1
R
1
. The corresponding
elementary matrices that transform A to the identity are given in
I = E
3
E
2
E
1
A =
_
1 3
0 1
_ _
1 0
0
1
10
_ _
1 0
2 1
_
A.
b. Since elementary matrices are invertible, we have that
A = E
1
1
E
1
2
E
1
3
=
_
1 0
2 1
_ _
1 0
0 10
_ _
1 3
0 1
_
.
6. a. The required row operations are R
1
+R
2
R
2
,
1
10
R
2
R
2
, 5R
2
+R
1
R
1
, and
1
2
R
1
R
1
. The
corresponding elementary matrices that transform A to the identity are given in
I = E
4
E
3
E
2
E
1
A =
_
1
2
0
0 1
_ _
1 5
0 1
_ _
1 0
0
1
10
_ _
1 0
1 1
_
A.
b. Since elementary matrices are invertible, we have that
A = E
1
1
E
1
2
E
1
3
E
1
4
=
_
1 0
1 1
_ _
1 0
0 10
_ _
1 5
0 1
_ _
2 0
0 1
_
.
7. a. The identity matrix can be written as I = E
5
E
4
E
3
E
2
E
1
A, where the elementary matrices are
E
1
=
_
_
1 0 0
2 1 0
0 0 1
_
_
, E
2
=
_
_
1 0 0
0 1 0
1 0 1
_
_
, E
3
=
_
_
1 2 0
0 1 0
0 0 1
_
_
, E
4
=
_
_
1 0 11
0 1 0
0 0 1
_
_
, and
E
5
=
_
_
1 0 0
0 1 5
0 0 1
_
_
. b. A = E
1
1
E
1
2
E
1
3
E
1
4
E
1
5
8. a. Row operations to reduce the matrix to the identity matrix are
3R
1
+R
2
R
2
2R
1
+R
3
R
3
R
2
R
3
4R
2
+R
3
R
3
R
1
R
1
R
2
R
2
R
3
R
3
R
2
+R
1
R
1
R
3
+R
2
R
2
with corresponding elementary matrices
E
1
=
_
_
1 0 0
3 1 0
0 0 1
_
_
, E
2
=
_
_
1 0 0
0 1 0
2 0 1
_
_
, E
3
=
_
_
1 0 0
0 0 1
0 1 0
_
_
, E
4
=
_
_
1 0 0
0 1 0
0 4 1
_
_
,
E
5
=
_
_
1 0 0
0 1 0
0 0 1
_
_
, E
6
=
_
_
1 0 0
0 1 0
0 0 1
_
_
, E
7
=
_
_
1 0 0
0 1 0
0 0 1
_
_
, E
8
=
_
_
1 1 0
0 1 0
0 0 1
_
_
, and
E
9
=
_
_
1 0 0
0 1 1
0 0 1
_
_
. b. A = E
1
1
E
1
2
E
1
3
E
1
4
E
1
5
E
1
6
E
1
7
E
1
8
E
1
9
9. a. The identity matrix can be written as I = E
6
E
1
A, where the elementary matrices are
E
1
=
_
_
0 1 0
1 0 0
0 0 1
_
_
, E
2
=
_
_
1 2 0
0 1 0
0 0 1
_
_
, E
3
=
_
_
1 0 0
0 1 0
0 1 1
_
_
, E
4
=
_
_
1 0 0
0 1 1
0 0 1
_
_
,
E
5
=
_
_
1 0 1
0 1 0
0 0 1
_
_
, and E
6
=
_
_
1 0 0
0 1 0
0 0 1
_
_
. b. A = E
1
1
E
1
2
E
1
6
10. a. There are only two row interchanges needed, R
1
R
4
and R
2
R
3
. So
I = E
2
E
1
A =
_
_
0 0 0 1
0 1 0 0
0 0 1 0
1 0 0 0
_
_
_
_
1 0 0 0
0 0 1 0
0 1 0 0
0 0 0 1
_
_
A.
b. A = E
1
1
E
1
2
.
11. The matrix A can be row reduced to an upper triangular matrix U = [1 2; 0 1] by means of the one
operation (-3)R1 + R2 -> R2. The corresponding elementary matrix is E = [1 0; -3 1], so that EA = U. Then
the LU factorization of A is A = LU = E^(-1)U = [1 0; 3 1][1 2; 0 1].
12. L =
_
1 0
1/6 1
_
, U =
_
3 9
0 1/2
_
13. The matrix A can be row reduced to an upper triangular matrix U = [1 2 1; 0 1 3; 0 0 1] by means of the operations (-2)R_1 + R_2 -> R_2 and (-3)R_1 + R_3 -> R_3. The corresponding elementary matrices are E_1 = [1 0 0; -2 1 0; 0 0 1] and E_2 = [1 0 0; 0 1 0; -3 0 1], so that E_2 E_1 A = U. Then the LU factorization of A is A = LU = E_1^{-1} E_2^{-1} U = [1 0 0; 2 1 0; 3 0 1][1 2 1; 0 1 3; 0 0 1].
14. L = [1 0 0; 1 1 0; 2 0 1], U = [1 1 1; 0 1 3; 0 0 1]
15. A = LU = [1 0 0; 1 1 0; 1 1/2 1][1 1/2 3; 0 1 4; 0 0 3]
16. A = LU = [1 0 0 0; 2 1 0 0; 1 0 1 0; 3 0 0 1][1 2 1 3; 0 1 1 1; 0 0 1 5; 0 0 0 1]
17. The first step is to determine an LU factorization for the coefficient matrix of the linear system, A = [2 -1; 4 -1]. We have that A = LU = [1 0; 2 1][2 -1; 0 1]. Next we solve Ly = [1; 5] to obtain y_1 = 1 and y_2 = 3. The last step is to solve Ux = y, which has the unique solution x_1 = 2 and x_2 = 3.
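The two-stage solve used in Exercises 17-20 (forward substitution on Ly = b, then back substitution on Ux = y) can be sketched in code. This is an illustration, not part of the text; the helper names are ours, the factor values are those reconstructed for Exercise 17, and exact rational arithmetic is used to avoid rounding:

```python
from fractions import Fraction

def forward_sub(L, b):
    # Solve L y = b for a lower-triangular L, top row first.
    n = len(b)
    y = [Fraction(0)] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (Fraction(b[i]) - s) / Fraction(L[i][i])
    return y

def back_sub(U, y):
    # Solve U x = y for an upper-triangular U, bottom row first.
    n = len(y)
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / Fraction(U[i][i])
    return x

L = [[1, 0], [2, 1]]
U = [[2, -1], [0, 1]]
b = [1, 5]
y = forward_sub(L, b)
x = back_sub(U, y)
print([int(v) for v in y])  # [1, 3]
print([int(v) for v in x])  # [2, 3]
```

The same two routines solve every LU exercise in this section; only the factors and right-hand side change.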
18. An LU factorization of the matrix A is given by A = [1 0; 2 1][3 2; 0 1]. Then the solution to Ly = [2; 7/2] is y_1 = 2, y_2 = -1/2, so the solution to Ux = y is x_1 = 1, x_2 = -1/2.
19. An LU factorization of the coefficient matrix A is A = LU = [1 0 0; 1 1 0; 2 0 1][1 4 3; 0 1 -2; 0 0 1]. To solve Ly = [0; 3; 1], we reduce
[1 0 0 | 0; 1 1 0 | 3; 2 0 1 | 1] -> [1 0 0 | 0; 0 1 0 | 3; 2 0 1 | 1] -> [1 0 0 | 0; 0 1 0 | 3; 0 0 1 | 1],
and hence the solution is y_1 = 0, y_2 = 3, and y_3 = 1. Finally, the solution to Ux = y, which is the solution to the linear system, is x_1 = -23, x_2 = 5, and x_3 = 1.
20. An LU factorization of the matrix A is given by A = [1 0 0; -2 1 0; 2 0 1][1 2 -1; 0 1 4; 0 0 1]. Then the solution to Ly = [1; 8; 4] is y_1 = 1, y_2 = 10, y_3 = 2, so the solution to Ux = y is x_1 = -1, x_2 = 2, x_3 = 2.
21. LU factorization of the coefficient matrix:
A = LU = [1 0 0 0; 1 1 0 0; 2 0 1 0; 1 1 0 1][1 -2 3 -1; 0 1 2 -2; 0 0 1 -1; 0 0 0 1]
Solution to Ly = [5; 6; 14; 8]: y_1 = 5, y_2 = 1, y_3 = 4, y_4 = 2
Solution to Ux = y: x_1 = -25, x_2 = -7, x_3 = 6, x_4 = 2
22. An LU factorization of the matrix A is given by A = [1 0 0 0; 0 1 0 0; -1 0 1 0; 2 2 0 1][1 -2 2 -1; 0 1 -1 1; 0 0 1 3; 0 0 0 2]. Then the solution to Ly = [5; 2; 1; 1] is y_1 = 5, y_2 = 2, y_3 = 6, y_4 = -13, so the solution to Ux = y is x_1 = 31/2, x_2 = 34, x_3 = 51/2, x_4 = -13/2.
23. Row reducing the matrix A to an upper triangular matrix requires interchanging rows. This interchange is reflected in the matrix P in the factorization
A = PLU = [0 0 1; 0 1 0; 1 0 0][1 0 0; 2 5 0; 0 1 1/5][1 3 2; 0 1 4/5; 0 0 1].
24. Row reducing the matrix A to an upper triangular matrix requires interchanging rows. This interchange is reflected in the matrix P in the factorization
A = PLU = [0 0 1; 1 0 0; 0 1 0][1 0 0; 1/2 1 0; 0 0 1][2 1 1; 0 1/2 7/2; 0 0 1].
25. Using the LU factorization A = LU = [1 0; 3 1][1 -4; 0 1], we have that
A^{-1} = U^{-1}L^{-1} = [1 4; 0 1][1 0; -3 1] = [-11 4; -3 1].
26. A^{-1} = (LU)^{-1} = ([1 0; 2 1][1 7; 0 6])^{-1} = U^{-1}L^{-1} = [1 -7/6; 0 1/6][1 0; -2 1] = [10/3 -7/6; -1/3 1/6]
27. Using the LU factorization A = LU = [1 0 0; 1 1 0; 1 1 1][2 1 -1; 0 1 -1; 0 0 3], we have that
A^{-1} = U^{-1}L^{-1} = [1/2 -1/2 0; 0 1 1/3; 0 0 1/3][1 0 0; -1 1 0; 0 -1 1] = [1 -1/2 0; -1 2/3 1/3; 0 -1/3 1/3].
28. A^{-1} = (LU)^{-1} = ([1 0 0; -1 1 0; 1 -1 1][3 2 1; 0 1 2; 0 0 1])^{-1} = U^{-1}L^{-1} = [1/3 -2/3 1; 0 1 -2; 0 0 1][1 0 0; 1 1 0; 0 1 1] = [-1/3 1/3 1; 1 -1 -2; 0 1 1]
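The pattern of Exercises 25-28 (recovering A^{-1} from the factors) can be sketched in code. Rather than forming U^{-1} and L^{-1} explicitly as the text does, the sketch below gets the same matrix by solving LUx = e_j for each identity column, which is equivalent to computing U^{-1}L^{-1}; the helper names are ours, and the factors are those reconstructed for Exercise 25:

```python
from fractions import Fraction

def tri_solve(T, b, lower):
    # Solve T x = b for a triangular T (forward if lower, backward if upper).
    n = len(b)
    x = [Fraction(0)] * n
    order = range(n) if lower else reversed(range(n))
    for i in order:
        js = range(i) if lower else range(i + 1, n)
        s = sum(T[i][j] * x[j] for j in js)
        x[i] = (Fraction(b[i]) - s) / Fraction(T[i][i])
    return x

def inv_from_lu(L, U):
    # Column j of A^{-1} solves A x = e_j: first L y = e_j, then U x = y.
    n = len(L)
    cols = []
    for j in range(n):
        e = [Fraction(int(i == j)) for i in range(n)]
        y = tri_solve(L, e, lower=True)
        cols.append(tri_solve(U, y, lower=False))
    # Assemble the solution columns into a row-major matrix.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

L = [[1, 0], [3, 1]]
U = [[1, -4], [0, 1]]
print([[int(v) for v in row] for row in inv_from_lu(L, U)])  # [[-11, 4], [-3, 1]]
```

Solving column by column reuses the triangular factors and never forms an explicit inverse of L or U.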
29. Suppose
[a 0; b c][d e; 0 f] = [0 1; 1 0].
This gives the system of equations ad = 0, ae = 1, bd = 1, be + cf = 0. The second equation forces a to be nonzero, so the first equation gives d = 0. But this is incompatible with the third equation, bd = 1.
30. Since A is row equivalent to B there are elementary matrices such that B = E_m ... E_1 A, and since B is row equivalent to C there are elementary matrices such that C = D_n ... D_1 B. Then C = D_n ... D_1 B = D_n ... D_1 E_m ... E_1 A and hence, A is row equivalent to C.
31. If A is invertible, there are elementary matrices E_1, . . . , E_k such that I = E_k ... E_1 A. Similarly, there are elementary matrices D_1, . . . , D_m such that I = D_m ... D_1 B. Then A = E_1^{-1} ... E_k^{-1} D_m ... D_1 B, so A is row equivalent to B.
32. a. Since L is invertible, the diagonal entries are all nonzero. b. The determinant of A is the product of the diagonal entries of L and U, that is, det(A) = l_{11} ... l_{nn} u_{11} ... u_{nn}. c. Since L is lower triangular and invertible, it is row equivalent to the identity matrix and can be reduced to I using only replacement operations.
Exercise Set 1.8
1. We need to find positive whole numbers x_1, x_2, x_3, and x_4 such that x_1 Al_3 + x_2 CuO -> x_3 Al_2O_3 + x_4 Cu is balanced. That is, we need to solve the linear system
{3x_1 = 2x_3, x_2 = 3x_3, x_2 = x_4},
which has infinitely many solutions given by x_1 = (2/9)x_2, x_3 = (1/3)x_2, x_4 = x_2, x_2 in R. A particular solution that balances the equation is given by x_1 = 2, x_2 = 9, x_3 = 3, x_4 = 9.
2. To balance the equation x_1 I_2 + x_2 Na_2S_2O_3 -> x_3 NaI + x_4 Na_2S_4O_6, we solve the linear system
{2x_1 = x_3, 2x_2 = x_3 + 2x_4, 2x_2 = 4x_4, 3x_2 = 6x_4},
so that x_1 = x_4, x_2 = 2x_4, x_3 = 2x_4, x_4 in R. For a particular solution that balances the equation, let x_4 = 1, so x_1 = 1, x_2 = 2, and x_3 = 2.
3. We need to find positive whole numbers x_1, x_2, x_3, x_4, and x_5 such that
x_1 NaHCO_3 + x_2 C_6H_8O_7 -> x_3 Na_3C_6H_5O_7 + x_4 H_2O + x_5 CO_2.
The augmented matrix for the resulting homogeneous linear system and the reduced row echelon form are
[1 0 -3 0 0 | 0; 1 8 -5 -2 0 | 0; 1 6 -6 0 -1 | 0; 3 7 -7 -1 -2 | 0] -> [1 0 0 0 -1 | 0; 0 1 0 0 -1/3 | 0; 0 0 1 0 -1/3 | 0; 0 0 0 1 -1 | 0].
Hence the solution set for the linear system is given by x_1 = x_5, x_2 = (1/3)x_5, x_3 = (1/3)x_5, x_4 = x_5, x_5 in R. A particular solution that balances the equation is x_1 = x_4 = x_5 = 3, x_2 = x_3 = 1.
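The balancing calculation in Exercise 3 can be sketched in code: row reduce the element-balance matrix and read the whole-number coefficients off the free column. This is an illustration, not part of the text; the `rref` helper is ours, and the matrix signs are the reconstructed ones above:

```python
from fractions import Fraction

def rref(M):
    # Reduced row echelon form over the rationals (Gauss-Jordan).
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# Element-balance rows (Na, H, C, O) for
# NaHCO3 + C6H8O7 -> Na3C6H5O7 + H2O + CO2, product columns negated.
A = [[1, 0, -3,  0,  0],
     [1, 8, -5, -2,  0],
     [1, 6, -6,  0, -1],
     [3, 7, -7, -1, -2]]
R = rref(A)
x5 = 3  # smallest free-variable value giving whole numbers
coeffs = [-row[4] * x5 for row in R[:4]] + [x5]
print([int(v) for v in coeffs])  # [3, 1, 1, 3, 3]
```

The same routine handles the larger system of Exercise 4; only the element-balance matrix changes.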
4. To balance the equation
x_1 MnS + x_2 As_2Cr_10O_35 + x_3 H_2SO_4 -> x_4 HMnO_4 + x_5 AsH_3 + x_6 CrS_3O_12 + x_7 H_2O,
we solve the linear system
{x_1 = x_4, x_1 + x_3 = 3x_6, 2x_2 = x_5, 10x_2 = x_6, 35x_2 + 4x_3 = 4x_4 + 12x_6 + x_7, 2x_3 = x_4 + 3x_5 + 2x_7}.
The augmented matrix for the equivalent homogeneous linear system
[1 0 0 -1 0 0 0 | 0; 1 0 1 0 0 -3 0 | 0; 0 2 0 0 -1 0 0 | 0; 0 10 0 0 0 -1 0 | 0; 0 35 4 -4 0 -12 -1 | 0; 0 0 2 -1 -3 0 -2 | 0]
reduces to
[1 0 0 0 0 0 -16/327 | 0; 0 1 0 0 0 0 -13/327 | 0; 0 0 1 0 0 0 -374/327 | 0; 0 0 0 1 0 0 -16/327 | 0; 0 0 0 0 1 0 -26/327 | 0; 0 0 0 0 0 1 -130/327 | 0].
A particular solution that balances the equation is x_7 = 327, x_1 = 16, x_2 = 13, x_3 = 374, x_4 = 16, x_5 = 26, and x_6 = 130.
5. Let x_1, x_2, . . . , x_7 be defined as in the figure (a street network with external flows 300, 800, 700, 500, and 300). The total flows in and out of the entire network and in and out of each intersection are given in the table.

Flow In                   Flow Out
700 + 300 + 500 + 300     x_1 + 800 + x_4 + x_7
x_2 + x_3                 x_1 + 800
x_5 + 700                 x_3 + x_4
x_6 + 300                 x_5 + x_7
500 + 300                 x_2 + x_6

Equating the total flows in and out gives a linear system with solution x_1 = 1000 - x_4 - x_7, x_2 = 800 - x_6, x_3 = 1000 - x_4 + x_6 - x_7, x_5 = 300 + x_6 - x_7, with x_4, x_6, and x_7 free variables. Since the network consists of one-way streets, the individual flows are nonnegative. As a sample solution let x_4 = 200, x_6 = 300, and x_7 = 100; then x_1 = 700, x_2 = 500, x_3 = 1000, x_5 = 500.
6. Let x_1, x_2, . . . , x_8 be defined as in the figure (a network with external flows 100, 300, 400, 400, 500, 500, 500, 300, and 200). Balancing all in and out flows generates the linear system
{x_1 + 500 + x_6 + 200 + 500 = 100 + 400 + x_8 + 500 + 300, x_1 + 500 = x_2 + x_5, x_5 + 300 = 100 + 400, x_7 + 500 = x_4 + 300, x_6 + 400 = x_7 + x_8, x_3 + 200 = 400 + 500, x_2 + x_4 = x_3 + 300}.
The set of solutions is given by x_1 = 500 - x_7, x_2 = 800 - x_7, x_3 = 700, x_4 = 200 + x_7, x_5 = 200, x_6 = -400 + x_7 + x_8, x_7, x_8 in R. In order to have all positive flows, for example, let x_7 = 300 and x_8 = 200, so x_1 = 200, x_2 = 500, x_3 = 700, x_4 = 500, x_5 = 200, x_6 = 100.
7. Equating total flows in and out gives the linear system
{x_1 + x_4 = 150, x_1 - x_2 - x_5 = 100, x_2 + x_3 = 100, -x_3 + x_4 + x_5 = -50}
with solution x_1 = 150 - x_4, x_2 = 50 - x_4 - x_5, and x_3 = 50 + x_4 + x_5. Letting, for example, x_4 = x_5 = 20 gives the particular solution x_1 = 130, x_2 = 10, x_3 = 90.
8. The set of solutions is given by x_1 = 200 + x_8, x_2 = 100 + x_8, x_3 = 100 + x_8, x_4 = x_8, x_5 = x_8 - 150, x_6 = x_8 - 150, x_7 = 100 + x_8, x_8 in R. If x_8 >= 150, then all the flows will remain positive.
9. If x_1, x_2, x_3, and x_4 denote the number of grams required from each of the four food groups, then the specifications yield the linear system
{20x_1 + 30x_2 + 40x_3 + 10x_4 = 250, 40x_1 + 20x_2 + 35x_3 + 20x_4 = 300, 50x_1 + 40x_2 + 10x_3 + 30x_4 = 400, 5x_1 + 5x_2 + 10x_3 + 5x_4 = 70}.
The solution is x_1 = 1.4, x_2 = 3.2, x_3 = 1.6, x_4 = 6.2.
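The 4 x 4 system of Exercise 9 can be solved with a short Gaussian-elimination routine. The sketch below is illustrative (the `solve` helper is ours, not the text's) and uses exact rational arithmetic so the decimal answers come out exactly:

```python
from fractions import Fraction

def solve(A, b):
    # Gaussian elimination with back substitution, exact rationals.
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            M[i] = [a - f * p for a, p in zip(M[i], M[c])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Diet system of Exercise 9: rows are nutrients, columns food groups.
A = [[20, 30, 40, 10],
     [40, 20, 35, 20],
     [50, 40, 10, 30],
     [ 5,  5, 10,  5]]
b = [250, 300, 400, 70]
print([float(v) for v in solve(A, b)])  # [1.4, 3.2, 1.6, 6.2]
```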
10. If x_1, x_2, and x_3 denote the number of grams required from each of the three food groups, then the specifications yield the linear system
{200x_1 + 400x_2 + 300x_3 = 2400, 300x_1 + 500x_2 + 400x_3 = 3500, 40x_1 + 50x_2 + 20x_3 = 200, 5x_1 + 3x_2 + 2x_3 = 25},
which is inconsistent, and hence it is not possible to prepare the required diet.
11. a. A = [0.02 0.04 0.05; 0.03 0.02 0.04; 0.03 0.3 0.1] b. The internal demand vector is A[300; 150; 200] = [22; 20; 74]. The total external demand for the three sectors is 300 - 22 = 278, 150 - 20 = 130, and 200 - 74 = 126, respectively. c. (I - A)^{-1} is approximately [1.02 0.06 0.06; 0.03 1.04 0.05; 0.05 0.35 1.13] d. The levels of production that balance the economy are given by
X = (I - A)^{-1}D = [1.02 0.06 0.06; 0.03 1.04 0.05; 0.05 0.35 1.13][350; 400; 600] = [418.2; 454.9; 832.3], approximately.
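Parts (a) and (b) of Exercise 11 reduce to one matrix-vector product. The sketch below (illustrative, not part of the text) computes the internal demand Ax and the external surplus x - Ax with exact rationals so the integers come out exactly:

```python
from fractions import Fraction as F

# Sector consumption matrix of Exercise 11a.
A = [[F('0.02'), F('0.04'), F('0.05')],
     [F('0.03'), F('0.02'), F('0.04')],
     [F('0.03'), F('0.30'), F('0.10')]]
x = [300, 150, 200]

# Internal demand is the product Ax; external demand is what remains.
internal = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
external = [xi - d for xi, d in zip(x, internal)]
print([int(v) for v in internal])  # [22, 20, 74]
print([int(v) for v in external])  # [278, 130, 126]
```

Part (d) is the reverse direction: given an external demand D, solve (I - A)X = D for the production levels X.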
12. The level of production for each sector of the economy is given by X = (I - A)^{-1}D and hence X is approximately [56.4; 17.5; 23.8; 26.1; 57.1; 41.5; 53.2; 30.6; 40.0; 55.0].
13. a. The health care data are shown in the scatter plot. (Figure: scatter plot of costs, in billions of dollars, on a vertical scale from 0 to 1400 for the years 1965 through 2000.)
b. If the parabola is y = ax^2 + bx + c, then assuming the points (1970, 80), (1980, 250), and (1990, 690) are on the parabola gives the linear system
{3880900a + 1970b + c = 80, 3920400a + 1980b + c = 250, 3960100a + 1990b + c = 690}.
c. The solution to the linear system given in part (b) is a = 27/20, b = -10631/2, c = 5232400, so that the parabola that approximates the data is y = (27/20)x^2 - (10631/2)x + 5232400.
d. (Figure: the data of part (a) plotted together with the fitted parabola on the same axes.)
e. The model gives an estimate, in billions of dollars, for health care costs in 2010 of (27/20)(2010)^2 - (10631/2)(2010) + 5232400 = 2380.
14. Using the data for 1985, 1990, and 2000, a parabola y = ax^2 + bx + c will fit the three points provided there is a solution to the linear system
{a(1985)^2 + b(1985) + c = 1, a(1990)^2 + b(1990) + c = 11, a(2000)^2 + b(2000) + c = 741}.
The unique solution is a = 71/15, b = -18813, c = 56080223/3, so the parabola is y = (71/15)x^2 - 18813x + 56080223/3. The estimated number of subscribers predicted by the model in 2010 is (71/15)(2010)^2 - 18813(2010) + 56080223/3, approximately 2418 million. Notice that if the years 2000, 2001, and 2002 are used to generate the parabola, we obtain y = -7x^2 + 28221x - 28441259, which is concave down. This reflects the fact that the rate of growth during this period is slowing down. Using this model the expected use in 2010 is 2251 million.
15. a. A = [0.9 0.08; 0.1 0.92] b. A[1500000; 600000] = [1398000; 702000] c. A^2[1500000; 600000] = [1314360; 785640] d. A^n[1500000; 600000]
16. a. A = [0.8 0.8; 0.2 0.2] b. Since A[800; 200] = [800; 200], after the first week there are 800 healthy mice and 200 infected mice. c. Since A^2[800; 200] = [800; 200], after the second week there are 800 healthy mice and 200 infected mice. d. Since A^6[800; 200] = [800; 200], after six weeks the number of healthy and infected mice still has not changed. In Section 5.4, we will see that [800; 200] is the steady state solution to this problem.
17. The transition matrix is A = [0.9 0.2 0.1; 0.1 0.5 0.3; 0 0.3 0.6], so the numbers in each category of the population after one month are given by A[20000; 20000; 10000] = [23000; 15000; 12000], after two months by A^2[20000; 20000; 10000] = [24900; 13400; 11700], and after one year by A^12[20000; 20000; 10000], which is approximately [30530; 11120; 8350].
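The repeated products in Exercise 17 are easy to iterate in code. The sketch below (illustrative; the `step` helper is ours) applies the transition matrix month by month with exact rationals, rounding only for the one-year figure:

```python
from fractions import Fraction as F

# Transition matrix of Exercise 17; each column sums to 1.
A = [[F('0.9'), F('0.2'), F('0.1')],
     [F('0.1'), F('0.5'), F('0.3')],
     [F('0'),   F('0.3'), F('0.6')]]

def step(x):
    # One month's transition: x -> Ax.
    return [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]

x = [20000, 20000, 10000]
history = [x]
for _ in range(12):
    x = step(x)
    history.append(x)
print([int(v) for v in history[1]])  # [23000, 15000, 12000]
print([int(v) for v in history[2]])  # [24900, 13400, 11700]
print([round(float(v)) for v in history[12]])
```

Because the columns of A sum to 1, the total population (50000 here) is preserved at every step, which is a useful sanity check on the iteration.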
18. Let c denote the number of consumers. The number of months n required for the new company to acquire 20% of the market is the value such that
[0.98 0.05; 0.02 0.95]^n [c; 0] = [0.8c; 0.2c], approximately.
If n = 17, then the matrix product is approximately [0.797c; 0.203c], so it takes approximately 17 months.
19. a. I_1 + I_3 = I_2 b. {4I_1 + 3I_2 = 8, 3I_2 + 5I_3 = 10} c. {I_1 - I_2 + I_3 = 0, 4I_1 + 3I_2 = 8, 3I_2 + 5I_3 = 10}
The solution to the linear system is I_1 = 0.72, I_2 = 1.7, I_3 = 0.98, approximately.
20. a. {I_1 + I_5 = I_2, I_4 + I_5 = I_3, I_4 + I_6 = I_3, I_1 + I_6 = I_2} b. {4I_1 + 6I_2 = 14, 6I_2 + 4I_3 + 2I_5 + 3I_6 = 18, 4I_3 + 6I_4 = 16}
c. The augmented matrix for the linear system is
[1 -1 0 0 1 0 | 0; 0 0 -1 1 1 0 | 0; 0 0 -1 1 0 1 | 0; 1 -1 0 0 0 1 | 0; 4 6 0 0 0 0 | 14; 0 6 4 0 2 3 | 18; 0 0 4 6 0 0 | 16],
so an approximate solution is I_1 = 1.2, I_2 = 1.5, I_3 = 1.8, I_4 = 1.5, I_5 = 0.3, and I_6 = 0.3.
21. Denote the average temperatures of the four points by a, b, c, and d, clockwise starting with the upper left point. The resulting linear system is
{4a - b - d = 50, -a + 4b - c = 55, -b + 4c - d = 45, -a - c + 4d = 40}.
For example, at the first point a = (20 + 30 + b + d)/4. The solution is approximately a = 24.4, b = 25.6, c = 23.1, d = 21.9.
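The mean-value equations of Exercise 21 can also be solved by simple iteration: repeatedly replace each temperature by the average of its neighbors (the constants 50, 55, 45, 40 collect the boundary contributions). This Jacobi-style sketch is illustrative, not the text's method:

```python
# Jacobi iteration for the four mean-value equations of Exercise 21.
a = b = c = d = 0.0
for _ in range(200):
    # Simultaneous update: each point becomes the average of its two
    # neighboring points and its two boundary values.
    a, b, c, d = ((50 + b + d) / 4, (55 + a + c) / 4,
                  (45 + b + d) / 4, (40 + a + c) / 4)
print(round(a, 3), round(b, 3), round(c, 3), round(d, 3))
# 24.375 25.625 23.125 21.875
```

The iteration converges because each equation's diagonal coefficient (4) dominates its row; the exact values 24.375, 25.625, 23.125, 21.875 round to the 24.4, 25.6, 23.1, 21.9 quoted above.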
22. The augmented matrix for the resulting linear system is
[4 -1 0 0 0 0 0 0 0 -1 | 50; -1 4 -1 0 0 0 0 0 -1 0 | 30; 0 -1 4 -1 0 0 0 -1 0 0 | 30; 0 0 -1 4 -1 0 -1 0 0 0 | 30; 0 0 0 -1 4 -1 0 0 0 0 | 55; 0 0 0 0 -1 4 -1 0 0 0 | 45; 0 0 0 -1 0 -1 4 -1 0 0 | 20; 0 0 -1 0 0 0 -1 4 -1 0 | 20; 0 -1 0 0 0 0 0 -1 4 -1 | 20; -1 0 0 0 0 0 0 0 -1 4 | 40],
which reduces to the identity matrix augmented with the approximate solution
[24.4; 25.9; 26.4; 26.5; 26.3; 23.6; 23.3; 23.1; 22.7; 21.8].
Review Exercises Chapter 1
1. a. A = [1 1 2 1; 1 0 1 2; 2 2 0 1; 1 1 2 3] b. det(A) = 8 c. Since the determinant of the coefficient matrix is not 0, the matrix is invertible and the linear system is consistent and has a unique solution. d. Since the linear system Ax = b has a unique solution for every b, the only solution to the homogeneous system is the trivial solution. e. From part (b), since the determinant is not zero the inverse exists and
A^{-1} = (1/8)[3 8 2 7; 5 8 6 9; 5 0 2 1; 4 0 0 4].
f. The solution can be found by using the inverse matrix and is given by
x = A^{-1}[3; 1; 2; 5] = (1/4)[11; 17; 7; 4].
2. a. Since the fourth row is twice the first row, the determinant of the coefficient matrix is 0. b. Since the determinant is 0, the matrix is not invertible and hence, the linear system does not have a unique solution. c. Since the augmented matrix
[1 1 2 1 | a; 1 3 1 1 | b; 3 5 5 1 | c; 2 2 4 2 | d] reduces to [1 1 2 1 | a; 0 2 3 2 | -a + b; 0 0 2 0 | -2a + b + c; 0 0 0 0 | -2a + d],
the linear system is consistent if -2a + d = 0. d. The linear system is inconsistent for all a and d such that -2a + d is not 0. e. When consistent there is a free variable and hence, there are infinitely many solutions. f. The corresponding augmented matrix further reduces to
[1 0 0 2 | 21/2; 0 1 0 1 | 9/2; 0 0 1 0 | 2; 0 0 0 0 | 0],
so the set of solutions is given by x_1 = 21/2 - 2x_4, x_2 = 9/2 - x_4, x_3 = 2, x_4 in R.
3. Let A = [a b; 0 c]. The matrix A is idempotent provided A^2 = A, that is,
[a^2, ab + bc; 0, c^2] = [a b; 0 c].
The two matrices are equal if and only if the system of equations a^2 = a, ab + bc = b, c^2 = c has a solution. From the first and third equations, we have that a = 0 or a = 1 and c = 0 or c = 1. From the second equation, we see that b(a + c - 1) = 0, so b = 0 or a + c = 1. Given these constraints, the possible solutions are a = 0, c = 0, b = 0; a = 0, c = 1, b in R; a = 1, c = 0, b in R; and a = 1, b = 0, c = 1.
4. Let A = [a b; c d]. If A is to commute with every 2 x 2 matrix, then it must commute with [1 0; 0 0] and [0 1; 0 0]. So it must be the case that
[a b; c d][1 0; 0 0] = [1 0; 0 0][a b; c d], that is, [a 0; c 0] = [a b; 0 0].
Hence, b = c = 0 and the matrix must have the form [a 0; 0 d]. In addition,
[a 0; 0 d][0 1; 0 0] = [0 1; 0 0][a 0; 0 d], that is, [0 a; 0 0] = [0 d; 0 0],
so that a = d. Then A will commute with every 2 x 2 matrix if and only if it has the form
A = [a 0; 0 a], a in R.
5. a. If A = [a_1 b_1; c_1 d_1] and B = [a_2 b_2; c_2 d_2], then
AB - BA = [a_1a_2 + b_1c_2, a_1b_2 + b_1d_2; a_2c_1 + c_2d_1, b_2c_1 + d_1d_2] - [a_1a_2 + b_2c_1, a_2b_1 + b_2d_1; a_1c_2 + c_1d_2, b_1c_2 + d_1d_2] = [b_1c_2 - b_2c_1, a_1b_2 + b_1d_2 - a_2b_1 - b_2d_1; a_2c_1 + c_2d_1 - a_1c_2 - c_1d_2, b_2c_1 - b_1c_2],
so that the sum of the diagonal entries is b_1c_2 - b_2c_1 + b_2c_1 - b_1c_2 = 0.
b. If M is a 2 x 2 matrix and the sum of the diagonal entries is 0, then M has the form M = [a b; c -a], and
[a b; c -a][a b; c -a] = [a^2 + bc, 0; 0, a^2 + bc] = (a^2 + bc)I.
c. Let M = AB - BA. By parts (a) and (b), M^2 = kI for some k. Then
(AB - BA)^2 C = M^2 C = (kI)C = C(kI) = CM^2 = C(AB - BA)^2.
6. To balance the traffic flow pattern, the total flow into the network must equal the total flow out, and the flow into each intersection must equal the flow out of that intersection. Using the labels shown in the figure (a network with external flows 100, 300, 400, 300, 200, 500, 500, 400, and 600 and unknowns x_1, . . . , x_8), the resulting linear system is
{x_1 = 300, x_2 - x_3 = 100, x_3 - x_4 = -200, x_1 - x_5 + x_6 = 300, x_2 + x_6 - x_7 = 200, x_7 + x_8 = 900, x_4 + x_5 + x_8 = 1200}.
The augmented matrix for the linear system
[1 0 0 0 0 0 0 0 | 300; 0 1 -1 0 0 0 0 0 | 100; 0 0 1 -1 0 0 0 0 | -200; 1 0 0 0 -1 1 0 0 | 300; 0 1 0 0 0 1 -1 0 | 200; 0 0 0 0 0 0 1 1 | 900; 0 0 0 1 1 0 0 1 | 1200]
reduces to
[1 0 0 0 0 0 0 0 | 300; 0 1 0 0 0 0 0 0 | 0; 0 0 1 0 0 0 0 0 | -100; 0 0 0 1 0 0 0 0 | 100; 0 0 0 0 1 0 0 1 | 1100; 0 0 0 0 0 1 0 1 | 1100; 0 0 0 0 0 0 1 1 | 900].
Since x_3 is negative, the flow cannot be balanced.
7. a. Since the matrix A is triangular, the determinant is the product of the diagonal entries and hence det(A) = 1. Since the determinant is not 0, A is invertible.
b. Six ones can be added, making 21 the maximum number of entries that can be one while the matrix remains invertible.
8. If A is invertible, then det(A) is not 0. Since det(A^t) = det(A), A^t is also invertible. Now since A^{-1}A = I, then (A^{-1}A)^t = I^t = I and hence, A^t(A^{-1})^t = I. Therefore, (A^{-1})^t = (A^t)^{-1}.
9. a. To show that B = A + A^t is symmetric we need to show that B^t = B. Using the properties of the transpose operation, we have that B^t = (A + A^t)^t = A^t + (A^t)^t = A^t + A = B. Similarly, C = A - A^t is skew-symmetric since C^t = (A - A^t)^t = A^t - (A^t)^t = A^t - A = -C.
b. A = (1/2)(A + A^t) + (1/2)(A - A^t)
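The decomposition in part (b) of Exercise 9 is easy to check numerically. The sketch below is illustrative; the 3 x 3 matrix is made up for the demonstration, and halves are kept exact with rationals:

```python
from fractions import Fraction as F

# Split A into its symmetric part B = (A + A^t)/2 and
# skew-symmetric part C = (A - A^t)/2; then A = B + C.
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]  # arbitrary example matrix
n = 3
T = [[A[j][i] for j in range(n)] for i in range(n)]  # transpose A^t
B = [[F(A[i][j] + T[i][j], 2) for j in range(n)] for i in range(n)]
C = [[F(A[i][j] - T[i][j], 2) for j in range(n)] for i in range(n)]

assert all(B[i][j] == B[j][i] for i in range(n) for j in range(n))   # symmetric
assert all(C[i][j] == -C[j][i] for i in range(n) for j in range(n))  # skew
assert all(B[i][j] + C[i][j] == A[i][j] for i in range(n) for j in range(n))
print("decomposition verified")
```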
10. Let s and t be scalars such that s + t = 1. If u and v are solutions to Ax = b, then Au = b and Av = b. So
A(su + tv) = A(su) + A(tv) = sA(u) + tA(v) = sb + tb = (s + t)b = b.
Chapter Test Chapter 1
1. T
2. F. All linear systems have either one solution, infinitely many solutions, or no solutions.
3. F. Let A = [1 1; 1 1] and B = [1 1; 1 1].
4. T
5. F. The linear system Ax = 0 has a nontrivial solution if and only if the matrix A is not invertible.
6. T
7. F. (ABC)^{-1} = C^{-1}B^{-1}A^{-1}
8. T
9. T
10. T
11. T
12. T
13. T
14. T
15. F. The determinant is unchanged.
16. T
17. F. Let A = [1 0; 0 1] and B = [1 0; 0 1].
18. T
19. F. The linear system may be inconsistent, but it also may have infinitely many solutions.
20. T
21. F. The inverse is (1/5)[1 1; 3 2].
22. T
23. T
24. T
25. T
26. T
27. F. The determinant of the coefficient matrix is 4.
28. T
29. F. The only solution is x = 5/4, y = 1/4.
30. T
31. T
32. T
33. F. The determinant is given by the cofactor expansion with minors [5 8; 4 6], [2 8; 2 6], and [2 5; 2 4].

The resulting linear system is {c_1 - c_2 = 3, 2c_1 - 2c_2 + c_3 = 1, c_2 + c_3 = -2}. The augmented matrix corresponding to the linear system and the reduced row echelon form are
[1 -1 0 | 3; 2 -2 1 | 1; 0 1 1 | -2] -> [1 0 0 | 6; 0 1 0 | 3; 0 0 1 | -5],
so that the linear system has the unique solution c_1 = 6, c_2 = 3, and c_3 = -5. As a result the vector can be written as
[3; 1; -2] = 6[1; 2; 0] + 3[-1; -2; 1] - 5[0; 1; 1].
Notice that in this example the coefficient matrix of the resulting linear system is always the same regardless of the vector v. As a consequence, every vector v in R^3 can be written in terms of the three given vectors. If the linear system that is generated turns out to be inconsistent for some vector v, then the vector cannot be written as a combination of the others. The linear system can also have infinitely many solutions for a given v, which says the vector can be written in terms of the other vectors in many ways.
2.1 Vectors in R^n
Solutions to Exercises
1. Adding corresponding components of the vectors u and v, we have that u + v = [-1; 2; 3] = v + u.
2. Adding the vectors inside the parentheses first gives
(u + v) + w = [-1; 2; 3] + [2; 1; -1] = [1; 3; 2] = u + (v + w).
3. u - 2v + 3w = [11; 7; 0]
4. u + (1/2)v - 2w = [6; 2; 1]
5. 3(u + v) - w = [1; 7; 8]
6. 2u - 3(u - 2w) = [20; 10; 3]
7. A scalar multiple of a vector is obtained by multiplying each component of the vector by the scalar, and addition is componentwise, so the vector expression simplifies to [17; 14; 9; 6].
8. 3u - 2v = [3; 10; 11; 2]
9. To show that (x_1 + x_2)u = x_1u + x_2u, we will expand the left hand side to obtain the right hand side. That is,
(x_1 + x_2)u = (x_1 + x_2)[1; 2; 3; 0] = [x_1 + x_2; 2x_1 + 2x_2; 3x_1 + 3x_2; 0] = [x_1; 2x_1; 3x_1; 0] + [x_2; 2x_2; 3x_2; 0] = x_1u + x_2u.
10. x_1(u + v) = x_1[4; 0; 2; 1] = [4x_1; 0; 2x_1; x_1] = [x_1 + 3x_1; 2x_1 - 2x_1; 3x_1 - x_1; 0 + x_1] = x_1[1; 2; 3; 0] + x_1[3; -2; -1; 1] = x_1u + x_1v
11. Simply multiply each of the vectors e_1, e_2, and e_3 by the corresponding component of the vector. That is, v = 2e_1 + 4e_2 + e_3.
12. v = e_1 + 3e_2 + 2e_3
13. v = 3e_2 - 2e_3
14. v = e_1 + (1/2)e_3
15. Let w = [a; b; c]. Then u + 3v - 2w = 0 if and only if
[1; 4; 2] + [6; -6; 0] - [2a; 2b; 2c] = [0; 0; 0].
Simplifying and equating components gives the equations 7 - 2a = 0, -2 - 2b = 0, 2 - 2c = 0 and hence,
44 Chapter 2 Linear Combinations and Linear Independence
w = [7/2; -1; 1].
16. Let w = [a; b; c]. Then u + 3v - 2w = 0 if and only if
[-2; 0; 1] + [-6; -9; -12] - [2a; 2b; 2c] = [0; 0; 0].
Simplifying and equating components gives the equations -8 - 2a = 0, -9 - 2b = 0, -11 - 2c = 0 and hence,
w = [-4; -9/2; -11/2].
17. The linear system is {c_1 + 3c_2 = 2, -2c_1 - 2c_2 = 1}, with solution c_1 = -7/4, c_2 = 5/4. Thus, the vector [2; 1] can be written as the combination -(7/4)[1; -2] + (5/4)[3; -2].
18. The linear system is {2c_1 - c_2 = 0, 5c_1 - 2c_2 = 5}, with solution c_1 = 5, c_2 = 10. Thus, the vector [0; 5] can be written as the combination 5[2; 5] + 10[-1; -2].
19. The linear system is {c_1 - c_2 = 3, 2c_1 - 2c_2 = 1}, which is inconsistent. Hence, the vector [3; 1] cannot be written as a combination of [1; 2] and [-1; -2].
20. The linear system is {c_1 + 2c_2 = 1, -3c_1 - 6c_2 = 1}, which is inconsistent. Hence, the vector [1; 1] cannot be written as a combination of [1; -3] and [2; -6].
21. The linear system is {4c_1 - 5c_3 = 3, -4c_1 + 3c_2 + c_3 = 3, 3c_1 + c_2 + 5c_3 = 4}. The augmented matrix for the linear system and the reduced row echelon form are
[4 0 -5 | 3; -4 3 1 | 3; 3 1 5 | 4] -> [1 0 0 | 87/121; 0 1 0 | 238/121; 0 0 1 | -3/121],
so that the unique solution to the linear system is c_1 = 87/121, c_2 = 238/121, c_3 = -3/121. The vector [3; 3; 4] is a combination of the three vectors.
22. The linear system is {c_2 + c_3 = 1, c_1 + c_2 + c_3 = 0, c_1 - c_3 = -1}, which has the unique solution c_1 = -1, c_2 = 1, c_3 = 0, and hence the vector [1; 0; -1] can be written as a combination of the other vectors.
23. The linear system is {c_1 - c_2 + c_3 = 1, c_2 - c_3 = 0, c_1 + c_2 - c_3 = 2}, which is inconsistent and hence, the vector [1; 0; 2] cannot be written as a combination of the other vectors.
24. The linear system is {c_1 + 2c_3 = 6, 2c_1 + 2c_2 + c_3 = 7, 4c_1 + 4c_2 + 2c_3 = 3}, which is inconsistent and hence, the vector [6; 7; 3] cannot be written as a combination of the other vectors.
25. All vectors in R^2. Moreover, c_1 = (1/3)a - (2/3)b, c_2 = (1/3)a + (1/3)b.
26. All vectors in R^2. Moreover, c_1 = (1/2)a + (1/2)b, c_2 = -(1/2)a + (1/2)b.
27. Row reduction of the matrix [1 2 | a; 1 2 | b] gives [1 2 | a; 0 0 | -a + b], which is consistent when b = a. So the vector equation can be solved for all vectors of the form [a; a] such that a in R.
28. The augmented matrix [3 6 | a; 1 2 | b] reduces to [1 2 | (1/3)a; 0 0 | -(1/3)a + b], which is consistent when b = (1/3)a. So the vector equation can be solved for all vectors of the form [a; (1/3)a] such that a in R.
29. All vectors in R^3. Moreover, c_1 = (1/3)a - (2/3)b + (2/3)c, c_2 = (1/3)a + (2/3)b + (1/3)c, c_3 = (1/3)a + (1/3)b - (1/3)c.
30. All vectors of the form [a; b; a] such that a, b in R.
31. All vectors of the form [a; b; 2a - 3b] such that a, b in R.
32. All vectors of the form [a; b; 2a - 5b] such that a, b in R.
33. Let u = [u_1; u_2; . . . ; u_n], v = [v_1; v_2; . . . ; v_n], and w = [w_1; w_2; . . . ; w_n]. Since addition of real numbers is associative, we have that
(u + v) + w = [u_1 + v_1; u_2 + v_2; . . . ; u_n + v_n] + [w_1; w_2; . . . ; w_n] = [(u_1 + v_1) + w_1; (u_2 + v_2) + w_2; . . . ; (u_n + v_n) + w_n] = [u_1 + (v_1 + w_1); u_2 + (v_2 + w_2); . . . ; u_n + (v_n + w_n)] = u + (v + w).
34. Let u = [u_1; u_2; . . . ; u_n]. Then u = [u_1; u_2; . . . ; u_n] + [0; 0; . . . ; 0] = [0; 0; . . . ; 0] + [u_1; u_2; . . . ; u_n] = [u_1; u_2; . . . ; u_n]. Hence, the zero vector is the additive identity.
35. Let u = [u_1; u_2; . . . ; u_n]. Then u + (-u) = [u_1; u_2; . . . ; u_n] + [-u_1; -u_2; . . . ; -u_n] = [0; 0; . . . ; 0] = (-u) + u. Hence, the vector -u is the additive inverse of u.
36. Let u = [u_1; u_2; . . . ; u_n], v = [v_1; v_2; . . . ; v_n], and c a scalar. Then
c(u + v) = c[u_1 + v_1; u_2 + v_2; . . . ; u_n + v_n] = [cu_1 + cv_1; cu_2 + cv_2; . . . ; cu_n + cv_n] = [cu_1; cu_2; . . . ; cu_n] + [cv_1; cv_2; . . . ; cv_n] = cu + cv.
37. Let u = [u_1; u_2; . . . ; u_n] and c and d scalars. Then
(c + d)u = [(c + d)u_1; (c + d)u_2; . . . ; (c + d)u_n] = [cu_1 + du_1; cu_2 + du_2; . . . ; cu_n + du_n] = [cu_1; cu_2; . . . ; cu_n] + [du_1; du_2; . . . ; du_n] = cu + du.
38. Let u = [u_1; u_2; . . . ; u_n] and c and d scalars. Then c(du) = c[du_1; du_2; . . . ; du_n] = [cdu_1; cdu_2; . . . ; cdu_n] = (cd)u.
39. Let u = [u_1; u_2; . . . ; u_n]. Then (-1)u = [(-1)u_1; (-1)u_2; . . . ; (-1)u_n] = -u.
40. Suppose u + z_1 = u and u + z_2 = u. Then u + z_1 = u + z_2 and hence, z_1 = z_2.
Exercise Set 2.2
The vectors in R^2 and R^3 called the coordinate vectors are the unit vectors that define the standard axes. These vectors are
e_1 = [1; 0], e_2 = [0; 1] in R^2 and e_1 = [1; 0; 0], e_2 = [0; 1; 0], e_3 = [0; 0; 1] in R^3,
with the obvious extension to R^n. Every vector in the Euclidean spaces is a combination of the coordinate vectors. For example,
[v_1; v_2; v_3] = v_1[1; 0; 0] + v_2[0; 1; 0] + v_3[0; 0; 1].
The coordinate vectors are special vectors, but not in the sense of generating all the vectors in the space. Many sets of vectors can generate all other vectors. The vector v is a linear combination of v_1, v_2, . . . , v_n if there are scalars c_1, c_2, . . . , c_n such that
v = c_1v_1 + c_2v_2 + ... + c_nv_n.
For specific vectors we have already seen that an equation of this form generates a linear system. If the linear system has at least one solution, then v is a linear combination of the others. Notice that the set of all linear combinations of the vectors e_1 and e_2 in R^3 is the xy plane and hence, not all of R^3. Similarly, the set of all linear combinations of one nonzero vector in R^2 is a line. In the exercises, when asked to determine whether or not a vector is a linear combination of other vectors, first set up the equation above and then solve the resulting linear system. For example, to determine all vectors in R^3 that are a linear combination of [1; 1; -1], [0; 1; 1], and [2; 5; 1], let v = [a; b; c] be an arbitrary vector. Then form the augmented matrix
[1 0 2 | a; 1 1 5 | b; -1 1 1 | c]
that row reduces to
[1 0 2 | a; 0 1 3 | -a + b; 0 0 0 | 2a - b + c].
Hence, the components of any vector v that is a linear combination of the three vectors must satisfy 2a - b + c = 0. This is not all the vectors in R^3. In this case notice that
[2; 5; 1] = 2[1; 1; -1] + 3[0; 1; 1].
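The closing combination above can be checked directly in code. This sketch is illustrative; the vector signs are the reconstructed ones from the example ((1, 1, -1), (0, 1, 1), and (2, 5, 1)):

```python
# Verify that v3 = 2*v1 + 3*v2, and that the components of any
# combination of these vectors satisfy 2a - b + c = 0.
v1, v2, v3 = (1, 1, -1), (0, 1, 1), (2, 5, 1)
combo = tuple(2 * x + 3 * y for x, y in zip(v1, v2))
print(combo == v3)  # True
a, b, c = combo
print(2 * a - b + c)  # 0
```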
Solutions to Exercises
1. To determine whether or not a vector is a linear combination of other vectors, we always set up a vector equation of the form v = c_1v_1 + ... + c_nv_n and then determine whether the resulting linear system can be solved. In matrix form the linear system is
[1 2 | 4; -1 3 | 11], which row reduces to [1 0 | -2; 0 1 | 3],
and so has the unique solution c_1 = -2 and c_2 = 3. Hence, the vector v is a linear combination of v_1 and v_2.
2. Since the resulting linear system [1 3 | 13; 2 0 | 2] row reduces to [1 0 | 1; 0 1 | 4], which has the unique solution c_1 = 1, c_2 = 4, the vector v is a linear combination of v_1 and v_2.
3. Since the resulting linear system [2 3 | 1; 4 6 | 1] row reduces to [1 3/2 | 0; 0 0 | 1], it is inconsistent, so v is not a linear combination of v_1 and v_2.
4. Since the resulting linear system [1 1/2 | 3; 2 1 | 2] row reduces to [1 1/2 | 0; 0 0 | 1], it is inconsistent, so v is not a linear combination of v_1 and v_2.
5. Since the resulting linear system [2 -1 | 3; 3 4 | 10; 4 2 | 10] row reduces to [1 0 | 2; 0 1 | 1; 0 0 | 0], it is consistent with the unique solution c_1 = 2 and c_2 = 1, so v is a linear combination of v_1 and v_2.
6. Since the resulting linear system [3 2 | -2; 4 7 | 6; -1 3 | 8] row reduces to [1 0 | -2; 0 1 | 2; 0 0 | 0], it is consistent, so v is a linear combination of v_1 and v_2.
7. Since the resulting linear system [2 3 2 | 2; 2 0 0 | 8; 0 3 1 | -2] row reduces to [1 0 0 | 4; 0 1 0 | 2/3; 0 0 1 | -4], which is consistent and has a unique solution, v can be written in only one way as a linear combination of v_1, v_2, and v_3.
8. Since [1 2 3 | 5; 1 -1 1 | 4; 0 -1 3 | 7] row reduces to [1 0 0 | 1; 0 1 0 | -1; 0 0 1 | 2], which is consistent and has a unique solution, v can be written in only one way as a linear combination of v_1, v_2, and v_3.
9. Since [1 1 0 | 1; 2 1 1 | 1; 1 3 2 | 5] row reduces to [1 0 0 | 0; 0 1 1 | 0; 0 0 0 | 1], the linear system is inconsistent, so v cannot be written as a linear combination of v_1, v_2, and v_3.
10. Since [3 1 1 | 3; 2 4 10 | 5; 1 1 3 | 5] row reduces to [1 0 1 | 0; 0 1 2 | 0; 0 0 0 | 1], the linear system is inconsistent, so v cannot be written as a linear combination of v_1, v_2, and v_3.
11. Since [2 1 1 | 3; 3 6 1 | 17; 4 1 2 | 17; 1 2 3 | 7] row reduces to [1 0 0 | 3; 0 1 0 | 1; 0 0 1 | 2; 0 0 0 | 0], the linear system has a unique solution, so v can be written as a linear combination of v_1, v_2, and v_3.
12. Since [2 1 3 | 6; 3 1 1 | 3; 4 2 3 | 3; 5 3 1 | 7] reduces to [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 1], the linear system is inconsistent, so v cannot be written as a linear combination of v_1, v_2, and v_3.
13. Since [3 0 1 | 3; 1 1 -2 | 0] reduces to [1 0 1/3 | 1; 0 1 -7/3 | -1], there are infinitely many ways in which scalars can be selected so that v is a linear combination of v_1, v_2, and v_3. Specifically, any set of scalars is given by c_1 = 1 - (1/3)c_3, c_2 = -1 + (7/3)c_3, c_3 in R.
14. Since [1 2 -3 | 1; -1 1 0 | 1] reduces to [1 0 -1 | -1/3; 0 1 -1 | 2/3], there are infinitely many ways in which scalars can be selected so that v is a linear combination of v_1, v_2, and v_3. Specifically, any set of scalars is given by c_1 = -1/3 + c_3, c_2 = 2/3 + c_3, c_3 in R.
15. Since [0 2 2 2 0; 1 1 3 1 1; 1 2 1 2 3] reduces to [1 0 0 6 3; 0 1 0 1 2; 0 0 1 2 2], there are infinitely many ways in which scalars can be selected so v is a linear combination of v_1, v_2, v_3, and v_4. Specifically, any set of scalars is given by c_1 = 3 + 6c_4, c_2 = 2 − c_4, c_3 = 2 + 2c_4, c_4 ∈ R.
16. Since [1 0 0 3 3; 1 1 1 1 3; 2 1 2 2 1] reduces to [1 0 0 3 3; 0 1 0 12 5; 0 0 1 10 5], there are infinitely many ways in which scalars can be selected so v is a linear combination of v_1, v_2, v_3, and v_4. Specifically, any set of scalars is given by c_1 = 3 − 3c_4, c_2 = 5 + 12c_4, c_3 = 5 − 10c_4, c_4 ∈ R.
17. The matrix equation
c_1 M_1 + c_2 M_2 + c_3 M_3 = c_1 [1 2; 1 1] + c_2 [2 3; 1 4] + c_3 [1 3; 2 1] = [2 4; 4 0]
leads to the augmented matrix [1 2 1 2; 2 3 3 4; 1 1 2 4; 1 4 1 0], which reduces to [1 0 0 1; 0 1 0 1; 0 0 1 3; 0 0 0 0], and hence, the linear system has the unique solution c_1 = 1, c_2 = 1, and c_3 = 3. Consequently, the matrix M is a linear combination of the three matrices.
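A matrix equation such as c_1 M_1 + c_2 M_2 + c_3 M_3 = M is an ordinary linear system because each of the four entries gives one equation. The sketch below forms such a combination entrywise; the scalars are illustrative, not the solution of Exercise 17.

```python
# a matrix linear combination computed entry by entry; each entry of the
# result is one equation of the corresponding linear system
M1 = [[1, 2], [1, 1]]
M2 = [[2, 3], [1, 4]]
M3 = [[1, 3], [2, 1]]
c = (1, -1, 3)  # illustrative scalars, not the book's solution

combo = [[sum(ci * Mi[r][col] for ci, Mi in zip(c, (M1, M2, M3)))
          for col in range(2)] for r in range(2)]
print(combo)
```

Stacking the four entries of each M_i as a column reproduces exactly the 4-row augmented matrices used in Exercises 17-20.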
18. Since the augmented matrix [2 1 1 2; 2 1 2 3; 1 2 3 1; 1 1 1 2] reduces to [1 0 0 2; 0 1 0 1; 0 0 1 1; 0 0 0 0], the linear system has a unique solution. Consequently, the matrix M is a linear combination of the three matrices.
19. The matrix M is not a linear combination of the three matrices, since [2 3 3 2; 2 1 1 1; 1 2 2 1; 3 2 2 2] reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], so that the linear system is inconsistent.
20. Since the augmented matrix [1 0 0 2; 0 1 0 1; 0 0 0 3; 1 0 1 4] reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], the linear system is inconsistent. This is also immediately evident from row three of the original matrix. Consequently, the matrix M can not be written as a linear combination of M_1, M_2, and M_3.
21. Ax = 2[1; 2] − [3; 1]
22. Ax = [1; 2; 3] − [2; 3; 2] + 3[1; 4; 1]
23. The matrix product is AB = [1 2; 3 4][3 2; 2 5] = [7 12; 17 26], and the column vectors are also given by (AB)_1 = 3[1; 3] + 2[2; 4], and (AB)_2 = 2[1; 3] + 5[2; 4].
24. The matrix product is AB = [4 5 1; 13 3 5; 16 6 3], and the column vectors are also given by (AB)_1 = 3[2; 1; 4] − 2[0; 1; 3] + 2[1; 4; 1], (AB)_2 = 2[2; 1; 4] + [0; 1; 3] − [1; 4; 1], and (AB)_3 = [2; 1; 4] + 0[0; 1; 3] + [1; 4; 1].
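The column identity used in Exercises 23 and 24, that column j of AB is the linear combination of the columns of A weighted by column j of B, can be verified directly with the data of Exercise 23:

```python
A = [[1, 2], [3, 4]]
B = [[3, 2], [2, 5]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

AB = matmul(A, B)

# column 1 of AB equals B[0][0]*A_1 + B[1][0]*A_2 (A_k = k-th column of A)
col = lambda M, j: [row[j] for row in M]
combo1 = [3 * a + 2 * b for a, b in zip(col(A, 0), col(A, 1))]
print(AB, combo1)
```

The same check with weights 2 and 5 reproduces the second column, matching the expansion in Exercise 23.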
25. The linear combination c_1(1 + x) + c_2(x^2) = 2x^2 − 3x − 1 if and only if c_1 + c_1 x + c_2 x^2 = 2x^2 − 3x − 1. These two polynomials will agree for all x if and only if the coefficients of like terms agree. That is, if and only if c_1 = −1, c_1 = −3, and c_2 = 2, which is not possible. Therefore, the polynomial p(x) can not be written as a linear combination of 1 + x and x^2.
26. Since c_1(1 + x) + c_2(x^2) = x^2 + 3x + 3 has the solution c_1 = 3, c_2 = 1, then p(x) can be written as a linear combination of 1 + x and x^2.
27. Consider the equation c_1(1 + x) + c_2(−x) + c_3(x^2 + 1) + c_4(2x^3 − x + 1) = x^3 − 2x + 1, which is equivalent to
(c_1 + c_3 + c_4) + (c_1 − c_2 − c_4)x + c_3 x^2 + 2c_4 x^3 = x^3 − 2x + 1.
After equating the coefficients of like terms, we have the linear system c_1 + c_3 + c_4 = 1, c_1 − c_2 − c_4 = −2, c_3 = 0, 2c_4 = 1, which has the unique solution c_1 = 1/2, c_2 = 2, c_3 = 0, and c_4 = 1/2. Hence, x^3 − 2x + 1 = (1/2)(1 + x) + 2(−x) + 0(x^2 + 1) + (1/2)(2x^3 − x + 1).
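The coefficient-matching step in Exercise 27 can be replayed by representing each polynomial as a list of coefficients and forming the combination exactly:

```python
from fractions import Fraction as F

# polynomials as coefficient lists [a0, a1, a2, a3]
def add(p, q): return [a + b for a, b in zip(p, q)]
def scale(c, p): return [c * a for a in p]

basis = [[1, 1, 0, 0],    # 1 + x
         [0, -1, 0, 0],   # -x
         [1, 0, 1, 0],    # x^2 + 1
         [1, -1, 0, 2]]   # 2x^3 - x + 1
target = [1, -2, 0, 1]    # x^3 - 2x + 1
coeffs = [F(1, 2), F(2), F(0), F(1, 2)]  # the solution found above

total = [F(0)] * 4
for c, p in zip(coeffs, basis):
    total = add(total, scale(c, p))
print(total == target)
```

Identifying a polynomial with its coefficient vector is exactly what turns these problems into linear systems.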
28. Equating the coefficients of like terms in the equation c_1(1 + x) + c_2(−x) + c_3(x^2 + 1) + c_4(2x^3 − x + 1) = x^3 gives the linear system c_1 + c_3 + c_4 = 0, c_1 − c_2 − c_4 = 0, c_3 = 0, 2c_4 = 1, which has the unique solution c_1 = −1/2, c_2 = −1, c_3 = 0, c_4 = 1/2. Hence x^3 = −(1/2)(1 + x) + x + (1/2)(2x^3 − x + 1).
29. Since [1 3 1 a; 2 7 2 b; 1 2 0 c] reduces to [1 3 1 a; 0 1 1 −2a + b; 0 0 0 3a − b + c], all vectors [a; b; c] such that 3a − b + c = 0 can be written as a linear combination of the three given vectors.
30. Since [1 0 0 a; 0 1 0 b; 0 1 0 c; 0 0 1 d] reduces to [1 0 0 a; 0 1 0 b; 0 0 1 d; 0 0 0 c − b], all matrices [a b; c d] such that c = b can be written as a linear combination of the three given matrices.
31. v = v_1 + v_2 + v_3 + v_4 = v_1 + v_2 + v_3 + (v_1 − 2v_2 + 3v_3) = 2v_1 − v_2 + 4v_3
32. v = v_1 + v_2 + v_3 + v_4 = v_1 + (2v_1 − 4v_3) + v_3 + v_4 = 3v_1 − 3v_3 + v_4
33. Since c_1 ≠ 0, then
v_1 = −(c_2/c_1)v_2 − ··· − (c_n/c_1)v_n.
34. v = c_1 v_1 + ··· + c_n v_n = c_1 v_1 + ··· + c_n v_n + 0w_1 + ··· + 0w_m
35. In order to show that S_1 = S_2, we will show that each is a subset of the other. Let v ∈ S_1, so that there are scalars c_1, . . . , c_k such that v = c_1 v_1 + ··· + c_k v_k. Since c ≠ 0, then v = c_1 v_1 + ··· + (c_k/c)(cv_k), so v ∈ S_2. If v ∈ S_2, then v = c_1 v_1 + ··· + (cc_k)v_k, so v ∈ S_1. Therefore S_1 = S_2.
36. Let v ∈ S_1. Then v = c_1 v_1 + ··· + c_k v_k = c_1 v_1 + ··· + c_k v_k + 0(v_1 + v_2), so v ∈ S_2 and hence, S_1 ⊆ S_2. Now let v ∈ S_2, so
v = c_1 v_1 + ··· + c_k v_k + c_{k+1}(v_1 + v_2) = (c_1 + c_{k+1})v_1 + (c_2 + c_{k+1})v_2 + ··· + c_k v_k,
and hence, v ∈ S_1. Therefore, S_2 ⊆ S_1. Since both containments hold, S_1 = S_2.
37. If A_3 = cA_1, then det(A) = 0. Since the linear system is assumed to be consistent, it must have infinitely many solutions.
38. If A_3 = A_1 + A_2, then det(A) = 0. Since the linear system is assumed to be consistent, it must have infinitely many solutions.
39. If f(x) = e^x and g(x) = e^{x/2}, then f′(x) = f″(x) = e^x, g′(x) = (1/2)e^{x/2}, and g″(x) = (1/4)e^{x/2}. Then 2f″ − 3f′ + f = 2e^x − 3e^x + e^x = 0 and 2g″ − 3g′ + g = (1/2)e^{x/2} − (3/2)e^{x/2} + e^{x/2} = 0, and hence f(x) and g(x) are solutions to the differential equation. In a similar manner, for arbitrary constants c_1 and c_2, the function c_1 f(x) + c_2 g(x) is also a solution to the differential equation.
Exercise Set 2.3
In Section 2.3, the fundamental concept of linear independence is introduced. In R^2 and R^3, two nonzero vectors are linearly independent if and only if they are not scalar multiples of each other, so they do not lie on the same line. To determine whether or not a set of vectors S = {v_1, v_2, . . . , v_k} is linearly independent, set up the vector equation
c_1 v_1 + c_2 v_2 + ··· + c_k v_k = 0.
If the only solution to the resulting system of equations is c_1 = c_2 = ··· = c_k = 0, then the vectors are linearly independent. If there are one or more nontrivial solutions, then the vectors are linearly dependent. For example, the coordinate vectors in Euclidean space are linearly independent. An alternative method for determining linear independence is to form a matrix A whose columns are the vectors to test. The matrix must be square, so, for example, if the vectors are in R^4, then there must be four vectors.
If det(A) ≠ 0, then the vectors are linearly independent.
If det(A) = 0, then the vectors are linearly dependent.
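A sketch of the determinant test in pure Python (cofactor expansion; the two matrices below are illustrative, with vectors written as columns):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# columns of each matrix are the vectors under test (illustrative data)
independent = [[1, 2, 3], [-1, 1, 5], [3, 2, -1]]
dependent = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]  # second row = 2 * first row
print(det3(independent), det3(dependent))
```

A nonzero determinant certifies independence; a zero determinant certifies dependence, exactly as stated above.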
For example, to determine whether or not the vectors [1; 1; 3], [2; 1; 2], and [3; 5; −1] are linearly independent, start with the equation
c_1[1; 1; 3] + c_2[2; 1; 2] + c_3[3; 5; −1] = [0; 0; 0].
This yields the linear system
c_1 + 2c_2 + 3c_3 = 0
c_1 + c_2 + 5c_3 = 0
3c_1 + 2c_2 − c_3 = 0
with augmented matrix [1 2 3 0; 1 1 5 0; 3 2 −1 0], which reduces to [1 2 3 0; 0 1 1 0; 0 0 5 0].
So the only solution is c_1 = c_2 = c_3 = 0, and the vectors are linearly independent. Now notice that the coefficient matrix
A = [1 2 3; 1 1 5; 3 2 −1] reduces further to [1 0 0; 0 1 0; 0 0 1],
so that A is row equivalent to the identity matrix. This implies the inverse A^{-1} exists and that det(A) ≠ 0. So we could have computed det(A) to conclude the vectors are linearly independent. In addition, since A^{-1}
exists, the linear system A[c_1; c_2; c_3] = b has a unique solution for every vector b in R^3. So every vector in R^3 can be written uniquely as a linear combination of the three vectors. The uniqueness is a key result of the linear independence. Other results that aid in making a determination of linear independence or dependence of a set S are:
If the zero vector is in S, then S is linearly dependent.
If S consists of m vectors in R
n
and m > n, then S is linearly dependent.
At least one vector in S is a linear combination of other vectors in S if and only if S is linearly dependent.
Any subset of a set of linearly independent vectors is linearly independent.
If S is a linearly dependent set and is contained in another set T, then T is also linearly dependent.
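The second and third facts can be seen together in the smallest possible example: any three vectors in R^2 are dependent, and here one of them is visibly a combination of the others.

```python
# three vectors in R^2 with an explicit dependence:
# 1*e1 + 1*e2 - 1*(e1 + e2) = 0 is a nontrivial combination equal to zero
v1, v2 = (1, 0), (0, 1)
v3 = (v1[0] + v2[0], v1[1] + v2[1])
combo = tuple(a + b - c for a, b, c in zip(v1, v2, v3))
print(combo)
```

Since nonzero scalars (1, 1, -1) produce the zero vector, the set {v1, v2, v3} is linearly dependent.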
Solutions to Exercises
1. Since [1 2; 1 3; 2 4] … [1 2; 1 4; 2 8] reduces to [1 2; 0 6; 0 0], the only solution is the trivial solution and hence, the vectors are linearly independent.
6. Since [4 2; 2 1; 6 3] reduces to [4 2; 0 0; 0 0], the vectors are linearly dependent. Also, v_2 = (1/2)v_1.
7. Since [4 5 3; 4 3 5; 1 3 5] … [3 1 1; 3 2 3; 1 2 1] … Since [3 1 3; 1 0 1; 1 2 0; 2 1 1] reduces to [3 1 3; 0 1/3 0; 0 0 1; 0 0 0], the linear system c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 has only the trivial solution and hence, the vectors are linearly independent.
10. Since [2 3 1; 4 4 12; 1 0 2; 1 4 6] reduces to [2 3 1; 0 1 1; 0 0 0; 0 0 0], the linear system c_1 v_1 + c_2 v_2 + c_3 v_3 = 0 has infinitely many solutions and hence, the vectors are linearly dependent.
11. From the linear system c_1[3 3; 2 1] + c_2[0 1; 0 0] + c_3[1 1; 1 2] = [0 0; 0 0], we have that [3 0 1; 3 1 1; 2 0 1; 1 0 2] reduces to [3 0 1; 0 1 2; 0 0 5/3; 0 0 0]. Since the homogeneous linear system has only the trivial solution, the matrices are linearly independent.
12. Since [1 1 2; 2 4 2; 1 0 1; 1 1 0] reduces to [1 1 2; 0 1 1; 0 0 0; 0 0 0], the matrices are linearly dependent.
13. Since [1 0 1 1; 2 1 1 1; 2 2 2 1; 2 2 2 2] reduces to [1 0 1 1; 0 1 1 3; 0 0 6 7; 0 0 0 11/3], the homogeneous linear system c_1 M_1 + c_2 M_2 + c_3 M_3 + c_4 M_4 = [0 0; 0 0] has only the trivial solution and hence, the matrices are linearly independent.
14. Since [0 2 2 2; 1 1 0 2; 1 1 1 2; 1 1 2 1] reduces to [1 1 0 2; 0 2 2 2; 0 0 1 2; 0 0 0 1], the homogeneous linear system c_1 M_1 + c_2 M_2 + c_3 M_3 + c_4 M_4 = [0 0; 0 0] has only the trivial solution and hence, the matrices are linearly independent.
15. Since v_2 = (1/2)v_1, the vectors are linearly dependent.
16. Any set of three or more vectors in R^2 is linearly dependent.
17. Any set of vectors containing the zero vector is linearly dependent.
18. Since v_3 = v_1 + v_2, the vectors are linearly dependent.
19. a. Since A_2 = 2A_1, the column vectors of A are linearly dependent. b. Since A_3 = A_1 + A_2, the column vectors of A are linearly dependent.
20. a. Any set of four or more vectors in R^3 is linearly dependent. b. Since A_3 = A_1 + A_2, the column vectors of A are linearly dependent.
21. Form the matrix with the three given vectors as columns, that is, let A = [1 1 2; 2 0 a; 1 1 4]. Since det(A) = 2a + 12, the vectors are linearly independent if and only if 2a + 12 ≠ 0, that is, a ≠ −6.
22. Since the matrix [1 1 1; 2 0 4; 0 1 a; 1 0 2] reduces to [1 1 1; 0 2 6; 0 0 a − 3; 0 0 0], if a ≠ 3, then the matrices are linearly independent.
23. a. Since [1 1 1; 1 2 1; 1 3 2] … [1 1 0 0; 0 1 1 0; 1 1 1 0; 0 0 1 0] reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0], the matrices are linearly independent.
b. Since [1 1 0 3; 0 1 1 5; 1 1 1 4; 0 0 1 3] reduces to [1 0 0 1; 0 1 0 2; 0 0 1 3; 0 0 0 0], the matrix M = M_1 + 2M_2 + 3M_3. c. Since
[1 1 0 0; 0 1 1 3; 1 1 1 3; 0 0 1 1] reduces to [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1], the linear system is inconsistent and hence, M can not be written as a linear combination of M_1, M_2, and M_3.
25. Since det[1 2 0; 1 0 3; 2 1 2] = 13, the matrix A is invertible, so Ax = b has a unique solution for every vector b.
26. Since [3 2 4; 1 1 4; 0 2 4] … the linear system c_1 + c_2 + c_3 = 0, 2c_1 + (1/2)c_2 + 4c_3 = 0, (5/2)c_1 + (2/5)c_2 + (25/4)c_3 = 0 has only the trivial solution, the functions are linearly independent.
33. In the equation c_1 x + c_2 x^2 + c_3 e^x = 0, if x = 0, then c_3 = 0. Now let x = 1 and x = −1, which gives the linear system c_1 + c_2 = 0, −c_1 + c_2 = 0. This system has solution c_1 = 0 and c_2 = 0. Hence the functions are linearly independent.
34. Consider the equation c_1 x + c_2 e^x + c_3 sin x = 0 for all x. Let x = π, x = 0, and x = π/2 to obtain the linear system πc_1 + e^π c_2 = 0, c_2 = 0, (π/2)c_1 + e^{π/2}c_2 + c_3 = 0. Since the only solution is the trivial solution, the functions are linearly independent.
35. Suppose u and v are linearly dependent. Then there are scalars a and b, not both zero, such that au + bv = 0. If a ≠ 0, then u = −(b/a)v. Conversely, suppose there is a scalar c such that u = cv. Then u − cv = 0 and hence, the vectors are linearly dependent.
36. Consider the equation c_1 w_1 + c_2 w_2 + c_3 w_3 = 0. Since w_1 = v_1 + v_2 + v_3, w_2 = v_2 + v_3, and w_3 = v_3, then
c_1(v_1 + v_2 + v_3) + c_2(v_2 + v_3) + c_3 v_3 = 0, that is, c_1 v_1 + (c_1 + c_2)v_2 + (c_1 + c_2 + c_3)v_3 = 0.
Since S is linearly independent, then c_1 = 0, c_1 + c_2 = 0, c_1 + c_2 + c_3 = 0 and hence, the only solution is the trivial solution. Therefore, the set T is linearly independent.
37. Setting a linear combination of w_1, w_2, w_3 equal to 0, we have
0 = c_1 w_1 + c_2 w_2 + c_3 w_3 = c_1 v_1 + (c_1 + c_2 + c_3)v_2 + (c_2 − c_3)v_3.
Since the vectors v_1, v_2, v_3 are linearly independent, then c_1 = 0, c_1 + c_2 + c_3 = 0, and c_2 − c_3 = 0. The only solution to this linear system is the trivial solution c_1 = c_2 = c_3 = 0, and hence, the vectors w_1, w_2, w_3 are linearly independent.
38. Consider the equation c_1 w_1 + c_2 w_2 + c_3 w_3 = 0. Since w_1 = v_2, w_2 = v_1 + v_3, and w_3 = v_1 + v_2 + v_3, then
c_1(v_2) + c_2(v_1 + v_3) + c_3(v_1 + v_2 + v_3) = 0, that is, (c_2 + c_3)v_1 + (c_1 + c_3)v_2 + (c_2 + c_3)v_3 = 0.
Since S is linearly independent, then c_2 + c_3 = 0, c_1 + c_3 = 0, c_2 + c_3 = 0, which implies c_1 = c_2 = −c_3. Therefore, the set T is linearly dependent.
39. Consider c_1 v_1 + c_2 v_2 + c_3 v_3 = 0, which is true if and only if c_3 v_3 = −c_1 v_1 − c_2 v_2. If c_3 ≠ 0, then v_3 would be a linear combination of v_1 and v_2, contradicting the hypothesis that it is not the case. Therefore, c_3 = 0. Now since v_1 and v_2 are linearly independent, c_1 = c_2 = 0.
40. a. v_1 = v_3 − v_2, v_1 = 2v_3 − 2v_2 − v_1, v_1 = 3v_3 − 3v_2 − 2v_1 b. Consider the equation
v_1 = c_1 v_1 + c_2 v_2 + c_3 v_3, that is, (1 − c_1)v_1 − c_2 v_2 − c_3(v_1 + v_2) = 0, that is, (1 − c_1 − c_3)v_1 + (−c_2 − c_3)v_2 = 0.
Then all solutions are given by c_1 = 1 − c_3, c_2 = −c_3, c_3 ∈ R.
41. Since A_1, A_2, . . . , A_n are linearly independent, if
Ax = x_1 A_1 + ··· + x_n A_n = 0,
then x_1 = x_2 = ··· = x_n = 0. Hence, the only solution to Ax = 0 is x = 0.
42. Consider
0 = c_1 Av_1 + c_2 Av_2 + ··· + c_k Av_k = A(c_1 v_1) + A(c_2 v_2) + ··· + A(c_k v_k) = A(c_1 v_1 + c_2 v_2 + ··· + c_k v_k).
Since A is invertible, the only solution to the last equation is the trivial solution, so c_1 v_1 + c_2 v_2 + ··· + c_k v_k = 0. Since the vectors v_1, v_2, . . . , v_k are linearly independent, then c_1 = c_2 = ··· = c_k = 0 and hence Av_1, Av_2, . . . , Av_k are linearly independent.
To show that A invertible is necessary, let A = [1 1; 1 1]. Since det(A) = 0, then A is not invertible. Let v_1 = [1; 0] and v_2 = [0; 1], which are linearly independent. Then Av_1 = [1; 1] and Av_2 = [1; 1], which are linearly dependent.
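The counterexample at the end of Exercise 42 can be run directly: the singular matrix A sends the independent vectors e_1 and e_2 to the same vector.

```python
A = [[1, 1], [1, 1]]  # singular: det(A) = 0

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

Av1 = matvec(A, (1, 0))
Av2 = matvec(A, (0, 1))
print(Av1, Av2, Av1 == Av2)  # two equal vectors are linearly dependent
```

With an invertible A this cannot happen, which is exactly the content of the first half of the exercise.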
Review Exercises Chapter 2
1. Since |a b; c d| = ad − bc ≠ 0, the column vectors are linearly independent. If ad − bc = 0, then the column vectors are linearly dependent.
2. Consider the equation
c_1 v_1 + c_2 v_2 + c_3(v_1 + v_2 + v_3) = 0, that is, (c_1 + c_3)v_1 + (c_2 + c_3)v_2 + c_3 v_3 = 0.
Since S is linearly independent, then c_1 + c_3 = 0, c_2 + c_3 = 0, c_3 = 0, which has the unique solution c_1 = c_2 = c_3 = 0. Therefore, T is linearly independent.
3. The determinant |a^2 0 1; 0 a 0; 1 2 1| = a^3 − a = a(a^2 − 1), which is nonzero if and only if a ≠ ±1 and a ≠ 0. So the vectors are linearly independent if and only if a ≠ ±1 and a ≠ 0.
4. a. Every vector in S can be written in the form
[2s − t; s; t; s] = s[2; 1; 0; 1] + t[−1; 0; 1; 0]
and hence, is a linear combination of the two vectors on the right hand side of the last equation. Since the vectors are not scalar multiples of each other they are linearly independent.
5. a. Since the vectors are not scalar multiples of each other, S is linearly independent.
b. Since [1 1 a; 0 1 b; 2 1 c] reduces to [1 1 a; 0 1 b; 0 0 −2a + b + c], the linear system is inconsistent for any values of a, b and c such that −2a + b + c ≠ 0. If a = 1, b = 1, c = 3, then the system is inconsistent and v = [1; 1; 3] is not a linear combination of the vectors.
c. All vectors [a; b; c] such that −2a + b + c = 0. d. Since [1 1 1 a; 0 1 0 b; 2 1 0 c] reduces to [1 0 0 −(1/2)b + (1/2)c; 0 1 0 b; 0 0 1 a − (1/2)b − (1/2)c], the system has a unique solution for any values of a, b and c. That is, all vectors in R^3 can be written as a linear combination of the three given vectors.
6. a. The vectors are linearly dependent since four or more vectors in R^3 are linearly dependent.
b. Since the matrix [1 2 0; 1 1 2; 1 1 1] with v_4 as augmented column reduces to [1 0 0 2/5; 0 1 0 −6/5; 0 0 1 9/5], then v_4 = (2/5)v_1 − (6/5)v_2 + (9/5)v_3.
d. They are the same.
7. a. Let A = [1 1 2 1; 1 0 1 2; 2 2 0 1; 1 1 2 3], x = [x; y; z; w], and b = [3; 1; 2; 5]. b. det(A) = 8 c. Since the determinant of A is nonzero, the column vectors of A are linearly independent. d. Since the determinant of the coefficient matrix is nonzero, the matrix A is invertible, so Ax = b has a unique solution for every vector b. e. x = 11/4, y = 17/4, z = 7/4, w = 1
8. a. Since [1 1 0; 0 1 1; 1 2 1; 1 1 0] reduces to [1 0 0; 0 1 0; 0 0 1; 0 0 0], the matrix equation c_1 M_1 + c_2 M_2 + c_3 M_3 = [0 0; 0 0] has only the trivial solution. Therefore, the matrices M_1, M_2, and M_3 are linearly independent. b. The augmented matrix for the linear system [1 1; 2 1] = c_1 M_1 + c_2 M_2 + c_3 M_3,
[1 1 0 1; 0 1 1 1; 1 2 1 2; 1 1 0 1] reduces to [1 0 0 1; 0 1 0 2; 0 0 1 3; 0 0 0 0],
so the unique solution is c_1 = 1, c_2 = 2, and c_3 = 3. c. The equation [1 1; 1 2] = c_1 M_1 + c_2 M_2 + c_3 M_3 holds if and only if the linear system c_1 + c_2 = 1, c_2 + c_3 = 1, c_1 + 2c_2 + c_3 = 1, c_1 + c_2 = 2 has a solution. However, equations one and four are inconsistent and hence, the linear system is inconsistent. d. Since the matrix [1 1 0 a; 0 1 1 b; 1 2 1 c; 1 1 0 d] reduces to [1 0 0 a; 0 1 1 b; 0 0 2 a − 3b + c; 0 0 0 d − a], the set of matrices that can be written as a linear combination of M_1, M_2, and M_3 all have the form [a b; c a].
9. a. x_1[1; 2; 1] + x_2[3; 1; 1] + x_3[2; 3; 1] = [b_1; b_2; b_3] b. Since det(A) = 19, the linear system Ax = b has a unique solution for every b, equal to x = A^{-1}b. c. Since the determinant of A is nonzero, the column vectors of A are linearly independent. d. Since the determinant of A is nonzero, we can conclude that the linear system Ax = b has a unique solution from the fact that A^{-1} exists or from the fact that the column vectors of A are linearly independent.
10. a. If v = [v_1; v_2; . . . ; v_n], then v · v = v_1^2 + ··· + v_n^2 ≥ 0. b. If v ≠ 0, then v_i ≠ 0 for some i = 1, . . . , n, so v · v > 0. c.
u · (v + w) = [u_1; . . . ; u_n] · [v_1 + w_1; . . . ; v_n + w_n] = (u_1 v_1 + u_1 w_1) + ··· + (u_n v_n + u_n w_n)
and
u · v + u · w = (u_1 v_1 + ··· + u_n v_n) + (u_1 w_1 + ··· + u_n w_n) = (u_1 v_1 + u_1 w_1) + ··· + (u_n v_n + u_n w_n).
d. Consider the equation
c_1 v_1 + c_2 v_2 + ··· + c_n v_n = 0, so v_i · (c_1 v_1 + c_2 v_2 + ··· + c_n v_n) = v_i · 0 = 0.
Using part (c), we have
c_1 v_i · v_1 + ··· + c_i v_i · v_i + ··· + c_n v_i · v_n = 0.
Since v_i · v_j = 0 for i ≠ j, then c_i v_i · v_i = 0. But v_i · v_i ≠ 0, so c_i = 0. Since this holds for each i = 1, 2, . . . , n, the vectors are linearly independent.
Chapter Test Chapter 2
1. T
2. F. For example, [1 0; 0 1] can not be written as a linear combination of the three matrices.
3. T
4. T
5. F. Since [1 2 4; 0 1 3; 1 0 1] reduces to [1 2 4; 0 1 3; 0 0 1].
6. F. Since [1 4 2; 0 3 1; 1 1 0] reduces to [1 4 2; 0 1 0; 0 0 1].
7. F. Since [2 4 1; 1 3 0; 0 1 1] reduces to [2 4 1; 0 1 1/2; 0 0 1/2].
8. T
9. F. Since p(x) is not a scalar multiple of q_1(x), and any linear combination of q_1(x) and q_2(x) with nonzero scalars will contain an x^2.
10. F. Since there are four vectors in R^3.
11. T
12. T
13. F. The set of all linear combinations of matrices in T is not all 2 × 2 matrices, but the set of all linear combinations of matrices from S is all 2 × 2 matrices.
14. T
15. F. Since |s 1 0; 0 s 1; 0 1 s| = s(s^2 − 1), the vectors are linearly independent if and only if s ≠ 0 and s ≠ ±1.
16. T
17. T
18. F. Since the column vectors are linearly independent, det(A) ≠ 0.
19. T
20. T
21. F. If the vector v_3 is a linear combination of v_1 and v_2, then the vectors will be linearly dependent.
22. F. At least one is a linear combination of the others.
23. F. The determinant of the matrix will be zero since the column vectors are linearly dependent.
24. F. The third vector is a combination of the other two and hence, the three together are linearly dependent.
25. T
26. F. An n × n matrix is invertible if and only if the column vectors are linearly independent.
27. T
28. F. For example, the column vectors of any 3 × 4 matrix are linearly dependent.
29. T
30. F. The vector can be a linear combination of the linearly independent vectors v_1, v_2 and v_3.
31. T
32. F. The set of coordinate vectors {e_1, e_2, e_3} is linearly independent, but the set {e_1, e_2, e_3, e_1 + e_2 + e_3} is linearly dependent.
33. T
3 Vector Spaces
Exercise Set 3.1
A vector space V is a set with an addition and scalar multiplication defined on the vectors in the set that satisfy the ten axioms. Examples of vector spaces are the Euclidean spaces R^n with the standard componentwise operations, the set of m × n matrices M_{m×n} with the standard componentwise operations, and the set P_n of polynomials of degree less than or equal to n (including the 0 polynomial) with the standard operations on like terms. To show a set V with an addition and scalar multiplication defined is a vector space requires showing all ten properties hold. To show V is not a vector space it is sufficient to show one of the properties does not hold. The operations defined on a set, even a familiar set, are free to our choosing. For example, on the set M_{n×n}, we can define addition as matrix multiplication, that is, A ⊕ B = AB. Then M_{n×n} is not a vector space, since AB may not equal BA, so that A ⊕ B is not B ⊕ A for all matrices in M_{n×n}.
As another example, let V = R^3 and define addition by
[x_1; y_1; z_1] ⊕ [x_2; y_2; z_2] = [x_1 + x_2 + 1; y_1 + y_2 + 2; z_1 + z_2 − 1].
The additive identity (Axiom (4)) is the unique vector, let's call it v_I, such that v ⊕ v_I = v for all vectors v ∈ R^3. To determine the additive identity for an arbitrary vector v we solve the equation v ⊕ v_I = v, that is,
v ⊕ v_I = [x_1; y_1; z_1] ⊕ [x_I; y_I; z_I] = [x_1 + x_I + 1; y_1 + y_I + 2; z_1 + z_I − 1] = [x_1; y_1; z_1], so v_I = [−1; −2; 1].
So the additive identity in this case is not the zero vector, which is the additive identity for the vector space R^3 with the standard operations. Now to find the additive inverse of a vector requires using the additive identity that we just found, v_I. So w is the additive inverse of v provided v ⊕ w = v_I, that is,
[x_1; y_1; z_1] ⊕ [x_2; y_2; z_2] = [x_1 + x_2 + 1; y_1 + y_2 + 2; z_1 + z_2 − 1] = [−1; −2; 1], so w = [−x_1 − 2; −y_1 − 4; −z_1 + 2].
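Both computations can be checked mechanically by coding the nonstandard addition. The sketch below verifies that v_I = (−1, −2, 1) acts as the identity and that the formula for w really returns v_I:

```python
def oplus(u, v):
    # the nonstandard addition on R^3 from the example above
    return (u[0] + v[0] + 1, u[1] + v[1] + 2, u[2] + v[2] - 1)

vI = (-1, -2, 1)                            # candidate additive identity
v = (5, -3, 2)                              # an arbitrary sample vector
w = (-v[0] - 2, -v[1] - 4, -v[2] + 2)       # candidate additive inverse of v
print(oplus(v, vI) == v, oplus(v, w) == vI)
```

Any sample vector works here because the verification is the same algebra as the derivation above, just with numbers.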
Depending on how a set V and addition and scalar multiplication are defined, many of the vector space properties may follow from knowing a vector space that contains V. For example, let V = {[x; y; z] : x − y + z = 0} and define addition and scalar multiplication as the standard operations on R^3. It isn't necessary to verify, for example, that if u and v are in V, then u ⊕ v = v ⊕ u, since the vectors are also in R^3 where the property already holds. This applies to most, but not all, of the vector space properties. Notice that the vectors in V describe a plane that passes through the origin and hence, V is not all of R^3. So to show V is a vector space we would need to show the sum of two vectors from V is another vector in V. In this example, if u = [x_1; y_1; z_1] and v = [x_2; y_2; z_2], then x_1 − y_1 + z_1 = 0 and x_2 − y_2 + z_2 = 0. Then
u ⊕ v = [x_1; y_1; z_1] + [x_2; y_2; z_2] = [x_1 + x_2; y_1 + y_2; z_1 + z_2],
and since (x_1 + x_2) − (y_1 + y_2) + (z_1 + z_2) = (x_1 − y_1 + z_1) + (x_2 − y_2 + z_2) = 0 + 0 = 0, the sum is also in V. Similarly, cu is in V for all scalars c. These are the only properties that need to be verified to show that V is a vector space.
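The closure argument is easy to test on sample vectors from the plane x − y + z = 0:

```python
def in_V(v):
    # membership in the plane V = {(x, y, z) : x - y + z = 0}
    return v[0] - v[1] + v[2] == 0

u, v = (1, 3, 2), (4, 5, 1)                 # two sample vectors in V
s = tuple(a + b for a, b in zip(u, v))      # their sum
cu = tuple(3 * a for a in u)                # a scalar multiple
print(in_V(u), in_V(v), in_V(s), in_V(cu))
```

Closure under addition and scalar multiplication is exactly what the algebraic argument above proves for all vectors, not just these samples.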
Solutions to Exercises
1. In order to show that a set V with an addition and scalar multiplication defined is a vector space, all ten properties in Definition 1 must be satisfied. To show that V is not a vector space it is sufficient to show any one of the properties does not hold. Since
[x_1; y_1; z_1] ⊕ [x_2; y_2; z_2] = [x_1 − x_2; y_1 − y_2; z_1 − z_2] and [x_2; y_2; z_2] ⊕ [x_1; y_1; z_1] = [x_2 − x_1; y_2 − y_1; z_2 − z_1]
do not agree for all pairs of vectors, the operation is not commutative, so V is not a vector space.
2. Notice that since scalar multiplication is defined in the standard way, the necessary properties hold for this operation. Since the addition of two vectors is another vector, V is closed under addition. Consider
c ⊙ ([x_1; x_2; x_3] ⊕ [y_1; y_2; y_3]) = c ⊙ [x_1 + y_1 − 1; x_2 + y_2 − 1; x_3 + y_3 − 1] = [cx_1 + cy_1 − c; cx_2 + cy_2 − c; cx_3 + cy_3 − c]
but
c ⊙ [x_1; x_2; x_3] ⊕ c ⊙ [y_1; y_2; y_3] = [cx_1; cx_2; cx_3] ⊕ [cy_1; cy_2; cy_3] = [cx_1 + cy_1 − 1; cx_2 + cy_2 − 1; cx_3 + cy_3 − 1],
so the expressions c ⊙ (u ⊕ v) and c ⊙ u ⊕ c ⊙ v do not agree for all pairs of vectors and scalars c. Therefore, V is not a vector space.
3. The operation is not associative, so V is not a vector space. That is,
([x_1; y_1; z_1] ⊕ [x_2; y_2; z_2]) ⊕ [x_3; y_3; z_3] = [4x_1 + 4x_2 + 2x_3; 4y_1 + 4y_2 + 2y_3; 4z_1 + 4z_2 + 2z_3]
and
[x_1; y_1; z_1] ⊕ ([x_2; y_2; z_2] ⊕ [x_3; y_3; z_3]) = [2x_1 + 4x_2 + 4x_3; 2y_1 + 4y_2 + 4y_3; 2z_1 + 4z_2 + 4z_3],
which do not agree for all vectors.
4. Since
c ⊙ ([x_1; x_2; x_3] ⊕ [y_1; y_2; y_3]) = c ⊙ [x_1 + y_1; x_2 + y_2; x_3 + y_3] = [x_1 + y_1 + c; x_2 + y_2; x_3 + y_3]
but
c ⊙ [x_1; x_2; x_3] ⊕ c ⊙ [y_1; y_2; y_3] = [x_1 + y_1 + 2c; x_2 + y_2; x_3 + y_3],
the expressions c ⊙ (u ⊕ v) and c ⊙ u ⊕ c ⊙ v do not always agree.
5.
1. [x_1; y_1] + [x_2; y_2] = [x_1 + x_2; y_1 + y_2] is in R^2.
2. [x_1; y_1] + [x_2; y_2] = [x_1 + x_2; y_1 + y_2] = [x_2 + x_1; y_2 + y_1] = [x_2; y_2] + [x_1; y_1]
3. ([x_1; y_1] + [x_2; y_2]) + [x_3; y_3] = [x_1 + x_2 + x_3; y_1 + y_2 + y_3] and [x_1; y_1] + ([x_2; y_2] + [x_3; y_3]) = [x_1 + x_2 + x_3; y_1 + y_2 + y_3]
4. Let 0 = [0; 0] and u = [x; y]. Then 0 + u = u + 0 = u.
5. Let u = [x; y] and −u = [−x; −y]. Then u + (−u) = −u + u = 0.
6. c[x; y] = [cx; cy] is a vector in R^2.
7. c([x_1; y_1] + [x_2; y_2]) = c[x_1 + x_2; y_1 + y_2] = [c(x_1 + x_2); c(y_1 + y_2)] = [cx_1 + cx_2; cy_1 + cy_2] = c[x_1; y_1] + c[x_2; y_2].
8. (c + d)[x; y] = [(c + d)x; (c + d)y] = [cx + dx; cy + dy] = c[x; y] + d[x; y].
9. c(d[x; y]) = c[dx; dy] = [(cd)x; (cd)y] = (cd)[x; y].
10. 1[x; y] = [x; y].
6.
1. [a b; c d] + [e f; g h] = [a+e b+f; c+g d+h] is in M_{2×2}.
2. [a b; c d] + [e f; g h] = [a+e b+f; c+g d+h] = [e+a f+b; g+c h+d] = [e f; g h] + [a b; c d]
3. Since the associative property of addition holds for real numbers, for any three 2 × 2 matrices M_1 + (M_2 + M_3) = (M_1 + M_2) + M_3.
4. Let 0 = [0 0; 0 0] and u = [a b; c d]. Then 0 + u = u + 0 = u.
5. Let u = [a b; c d] and −u = [−a −b; −c −d]. Then u + (−u) = −u + u = 0.
6. k[a b; c d] = [ka kb; kc kd] is a matrix in M_{2×2}.
7. k([a b; c d] + [e f; g h]) = k[a+e b+f; c+g d+h] = [ka+ke kb+kf; kc+kg kd+kh] = k[a b; c d] + k[e f; g h].
8. Since real numbers have the distributive property, (k + l)[a b; c d] = k[a b; c d] + l[a b; c d].
9. Since the real numbers have the associative property, k(l[a b; c d]) = (kl)[a b; c d].
10. 1[a b; c d] = [a b; c d].
7. Since (c + d)[x; y] = [x + c + d; y] does not equal
c[x; y] + d[x; y] = [x + c; y] + [x + d; y] = [2x + c + d; 2y]
for all vectors [x; y], then V is not a vector space.
8. a. Since [a; b; 1] + [c; d; 1] = [a + c; b + d; 2], the sum of two vectors in V is not another vector in V, so V is not a vector space. b. If the third component always remains 1, then to show V is a vector space is equivalent to showing R^2 is a vector space with the standard componentwise operations.
9. Since the operation is not commutative,
then V is not a vector space.
10. The set V with the standard operations is a
vector space.
11. The zero vector is given by 0 = [0; 0]. Since this vector is not in V, then V is not a vector space.
12. The set V with the standard operations is a
vector space.
13. a. Since V is not closed under vector addition, then V is not a vector space. That is, if two matrices
from V are added, then the row two, column two entry of the sum has the value 2 and hence, the sum is
not in V. b. Each of the ten vector space axioms is satisfied with vector addition and scalar multiplication defined in this way.
14. Suppose A and B are skew symmetric. Since (A + B)^t = A^t + B^t = −A − B = −(A + B) and (cA)^t = cA^t = −(cA), the set of skew symmetric matrices is closed under addition and scalar multiplication. The other vector space properties also hold, and V is a vector space.
15. The set of upper triangular matrices with
the standard componentwise operations is a vec-
tor space.
16. Suppose A and B are symmetric. Since (A + B)^t = A^t + B^t = A + B and (cA)^t = cA^t = cA, the set of symmetric matrices is closed under addition and scalar multiplication. The other vector space properties also hold and V is a vector space.
17. The set of invertible matrices is not a vector space. Let A = I and B = −I. Then A + B is not invertible, and hence not in V.
18. Suppose A and B are idempotent matrices. Since (AB)^2 = ABAB = A^2 B^2 = AB if and only if A and B commute, the set of idempotent matrices is not a vector space.
19. If A and C are in V and k is a scalar, then (A + C)B = AB + CB = 0, and (kA)B = k(AB) = 0, so
V is closed under addition and scalar multiplication. All the other required properties also hold since V is a
subset of the vector space of all matrices with the same operations. Hence, V is a vector space.
20. The set V is closed under addition and scalar multiplication. Since V is a subset of the vector space of
all 2 2 matrices with the standard operations, the other vector space properties also hold for V. So V is a
vector space.
21. a. The additive identity is 0 = [1 0; 0 1]. Since A ⊕ A^{-1} = AA^{-1} = I, the additive inverse of A is A^{-1}. b. If c = 0, then cA is not in V. Notice also that addition is not commutative, since AB is not always equal to BA.
22. a. Since [t; 1 + t] ⊕ [s; 1 + s] = [t + s; 1 + (t + s)], and this equals [t; 1 + t] if and only if s = 0, the additive identity is [0; 1]. b. Since the other nine vector space properties also hold, V is a vector space. c. Since 0 ⊙ [t; 1 + t] = [0t; 1 + 0t] = [0; 1], and [0; 1] is the additive identity, then 0 ⊙ v = 0.
23. a. The additive identity is 0 = [1; 2; 3]. Let u = [1 + a; 2 − a; 3 + 2a]. Then the additive inverse is −u = [1 − a; 2 + a; 3 − 2a]. b. Each of the ten vector space axioms is satisfied. c. 0 ⊙ [1 + t; 2 − t; 3 + 2t] = [1 + 0t; 2 − 0t; 3 + 2(0)t] = [1; 2; 3]
24. Since S is a subset of R^3 with the same standard operations, only vector space axioms (1) and (6) need to be verified; the others are inherited from the vector space R^3. If w1 and w2 are in S, let w1 = au + bv and w2 = cu + dv. Then w1 + w2 = (a + c)u + (b + d)v and k(au + bv) = (ka)u + (kb)v are also in S.
25. Each of the ten vector space axioms is satisfied.
26. The set S is a plane through the origin in R^3, so the sum of vectors in S remains in S and a scalar times a vector in S remains in S. Since the other vector space properties are inherited from the vector space R^3, S is a vector space.
27. Each of the ten vector space axioms is satisfied.
28. a. Since cos(0) = 1 and sin(0) = 0, the additive identity is (1, 0)^T. The additive inverse of (cos t1, sin t1)^T is (cos(-t1), sin(-t1))^T = (cos t1, -sin t1)^T. b. The ten required properties hold, making V a vector space. c. The additive identity in this case is (0, 0)^T. Since cos t and sin t are not both 0 for any value of t, (0, 0)^T is not in V, so V is not a vector space.
29. Since (f +g)(0) = f(0) +g(0) = 1 +1 = 2, then V is not closed under addition and hence is not a vector
space.
30. Since c (.) (d (.) f)(x) = c (.) (f(x + d)) = f(x + c + d) and ((cd) (.) f)(x) = f(x + cd) do not agree for all scalars, V is not a vector space.
31. a. The zero vector is given by f(x + 0) = x^3, and the additive inverse of f(x + t) is f(x - t). b. Each of the ten vector space axioms is satisfied.
Exercise Set 3.2 Subspaces
A subset W of a vector space V is a subspace of the vector space if vectors in W, using the same addition
and scalar multiplication of V, satisfy the ten vector space properties. That is, W is a vector space. Many of
the vector space properties are inherited by W from V. For example, if u and v are vectors in W, then they are also vectors in V, so that u + v = v + u. On the other hand, the additive identity may not be a vector in W, which is a requirement for being a vector space. To show that a subset is a subspace it is sufficient to verify that

if u and v are in W and c is a scalar, then u + cv is another vector in W.
For example, let

W = { (s - 2t, t, s + t)^T : s, t in R },

which is a subset of R^3. Notice that if s = t = 0, then the additive identity (0, 0, 0)^T for the vector space R^3 is also in W. Let u = (s - 2t, t, s + t)^T and v = (a - 2b, b, a + b)^T denote two arbitrary vectors in W. Notice that we have to use different parameters for the two vectors since the vectors may be different. Next we form the linear combination

u + cv = (s - 2t, t, s + t)^T + c(a - 2b, b, a + b)^T

and simplify the sum to one vector. So

u + cv = ((s - 2t) + c(a - 2b), t + cb, (s + t) + c(a + b))^T,

but this is not sufficient to show the vector is in W, since the vector must be written in terms of just two parameters in the locations described in the definition of W. Continuing the simplification, we have that

u + cv = ((s + ca) - 2(t + cb), t + cb, (s + ca) + (t + cb))^T,

and now the vector u + cv is in the required form with the two parameters s + ca and t + cb. Hence, W is a subspace of R^3. An arbitrary vector in W can also be written as

(s - 2t, t, s + t)^T = s(1, 0, 1)^T + t(-2, 1, 1)^T,

and in this case (1, 0, 1)^T and (-2, 1, 1)^T are linearly independent, so W is a plane in R^3. The set W consists of all linear combinations of the two vectors (1, 0, 1)^T and (-2, 1, 1)^T, called the span of the vectors and written

W = span{ (1, 0, 1)^T, (-2, 1, 1)^T }.
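The closure computation above can be checked symbolically. The following is a minimal sketch with sympy; the helper name w_vec is ours, not the text's:

```python
import sympy as sp

s, t, a, b, c = sp.symbols("s t a b c")

# A generic element of W = {(s - 2t, t, s + t)^T : s, t real}.
def w_vec(p, q):
    return sp.Matrix([p - 2*q, q, p + q])

u, v = w_vec(s, t), w_vec(a, b)

# u + c*v must again have the W-form, with parameters s + c*a and t + c*b.
diff = (u + c*v) - w_vec(s + c*a, t + c*b)
assert diff.expand() == sp.zeros(3, 1)

# W is also the span of (1, 0, 1)^T and (-2, 1, 1)^T.
v1, v2 = sp.Matrix([1, 0, 1]), sp.Matrix([-2, 1, 1])
assert (w_vec(s, t) - (s*v1 + t*v2)).expand() == sp.zeros(3, 1)
print("closure and span check passed")
```

Because both checks are identities in the symbols s, t, a, b, c, they verify the argument for all parameter values at once, not just for sample numbers.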
The span of a set of vectors is always a subspace. Important facts to keep in mind are:
There are linearly independent vectors that span a vector space. The coordinate vectors of R^3 are a simple example.
Two linearly independent vectors can not span R^3, since they describe a plane, and one vector can not span R^2, since all its linear combinations describe a line.
Two linearly dependent vectors can not span R^2. Let S = span{ (2, 1)^T, (-4, -2)^T }. If v is in S, then there are scalars c1 and c2 such that

v = c1(2, 1)^T + c2(-4, -2)^T = c1(2, 1)^T + c2(-2)(2, 1)^T = (c1 - 2c2)(2, 1)^T,

and hence, every vector in the span of S is a linear combination of only one vector.
A linearly dependent set of vectors can span a vector space. For example, let S = { (1, 0)^T, (0, 1)^T, (2, 3)^T }. Since the coordinate vectors are in S, span(S) = R^2, but the vectors are linearly dependent since (2, 3)^T = 2(1, 0)^T + 3(0, 1)^T.
In general, to determine whether or not a vector v = (v1, v2, ..., vn)^T is in span{u1, ..., uk}, start with the vector equation

c1 u1 + c2 u2 + ... + ck uk = v,

and then solve the resulting linear system. These ideas apply to all vector spaces, not just the Euclidean spaces. For example, if S = { A in M_{2x2} : A is invertible }, then S is not a subspace of the vector space of all 2 x 2 matrices. For example, the matrices [1 0; 0 1] and [1 0; 0 -1] are both invertible, so both are in S, but

[1 0; 0 1] + [1 0; 0 -1] = [2 0; 0 0],

which is not invertible. To determine whether or not [2 1; 1 1] is in the span of the two matrices [1 2; 0 1] and [-1 0; 1 1], start with the equation

c1 [1 2; 0 1] + c2 [-1 0; 1 1] = [2 1; 1 1], that is, [c1 - c2, 2c1; c2, c1 + c2] = [2 1; 1 1].

The resulting linear system c1 - c2 = 2, 2c1 = 1, c2 = 1, c1 + c2 = 1 is inconsistent and hence, the matrix is not in the span of the other two matrices.
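The same span-membership test works numerically: flatten each matrix into a vector and compare the rank of the coefficient matrix with that of the augmented matrix. A numpy sketch, using the matrices of the example above (signs as reconstructed here):

```python
import numpy as np

# Is M a linear combination of M1 and M2?
M1 = np.array([[1, 2], [0, 1]])
M2 = np.array([[-1, 0], [1, 1]])
M  = np.array([[2, 1], [1, 1]])

A = np.column_stack([M1.ravel(), M2.ravel()])   # 4x2 coefficient matrix
aug = np.column_stack([A, M.ravel()])           # 4x3 augmented matrix

# Consistent system  <=>  augmenting with M does not raise the rank.
in_span = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
print(in_span)  # False
```

Here the rank jumps from 2 to 3 when M is adjoined, which is exactly the inconsistency found by hand above.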
Solutions to Exercises
1. Let (0, y1)^T and (0, y2)^T be two vectors in S and c a scalar. Then (0, y1)^T + c(0, y2)^T = (0, y1 + cy2)^T is in S, so S is a subspace of R^2.
2. The set S is not a subspace of R^2. If u = (1, 2)^T and v = (-3, -1)^T, then u + v = (-2, 1)^T, which is not in S.
3. The set S is not a subspace of R^2. If u = (2, -1)^T and v = (-1, 3)^T, then u + v = (1, 2)^T, which is not in S.
4. The set S is not a subspace of R^2. If u = (1, 0)^T and v = (1/2, 0)^T, then u + v = (3/2, 0)^T, which is not in S since (3/2)^2 + 0^2 = 9/4 > 1.
5. The set S is not a subspace of R^2. If u = (0, 1)^T and c = 0, then cu = (0, 0)^T, which is not in S.
6. The set S is a subspace since (x, 3x)^T + c(y, 3y)^T = (x + cy, 3(x + cy))^T is in S.
7. Since (x1, x2, x3)^T + c(y1, y2, y3)^T = (x1 + cy1, x2 + cy2, x3 + cy3)^T and (x1 + cy1) + (x3 + cy3) = (x1 + x3) + c(y1 + y3) = 2(1 + c) = 2 if and only if c = 0, S is not a subspace of R^3.
8. Suppose x1 x2 x3 = 0 and y1 y2 y3 = 0, where x1 = 0, y3 = 0, and all other components are nonzero. Then (x1 + y1)(x2 + y2)(x3 + y3) is not 0, so S is not a subspace.
9. Since for all real numbers s, t, x, y, c, we have that

(s - 2t, s, t + s)^T + c(x - 2y, x, y + x)^T = ((s + cx) - 2(t + cy), s + cx, (t + cy) + (s + cx))^T

is in S, then S is a subspace.
10. For any two vectors in S, we have (x1, 2, x3)^T + (y1, 2, y3)^T = (x1 + y1, 4, x3 + y3)^T, which is not in S and hence, S is not a subspace.
11. If A and B are symmetric matrices and c is a scalar, then (A + cB)^T = A^T + cB^T = A + cB, so S is a subspace.
12. If A and B are idempotent matrices, then (A + B)^2 = A^2 + AB + BA + B^2 will equal A^2 + B^2 = A + B if and only if AB = -BA, so S is not a subspace.
13. Since the sum of invertible matrices may not
be invertible, S is not a subspace.
14. If A and B are skew-symmetric matrices and c is a scalar, then (A + cB)^T = A^T + cB^T = -A + c(-B) = -(A + cB), so S is a subspace.
15. If A and B are upper triangular matrices,
then A + B and cB are also upper triangular, so
S is a subspace.
16. If A and B are diagonal matrices and c is
a scalar, then A + cB is a diagonal matrix and
hence, S is a subspace.
17. The set S is a subspace.
18. If A = [a b; c d] and B = [e f; g h] are such that a + d = 0 and e + h = 0, then A + B = [a + e, b + f; c + g, d + h] with

(a + e) + (d + h) = (a + d) + (e + h) = 0

and hence, S is a subspace.
19. The set S is not a subspace since x^3 - x^3 = 0, which is not a polynomial of degree 3.
20. The set S is not a subspace since (x^2 + x) - x^2 = x, which is not a polynomial of even degree.
21. If p(x) and q(x) are polynomials with p(0) = 0 and q(0) = 0, then (p + q)(0) = p(0) + q(0) = 0 and (cq)(0) = cq(0) = 0, so S is a subspace.
22. Yes, since ax^2 + c(bx^2) = (a + cb)x^2 is in S.
23. The set S is not a subspace, since for example (2x^2 + 1) - (x^2 + 1) = x^2, which is not in S.
24. The set S is a subspace (assuming the zero polynomial is in the set).
25. The vector v is in the span of S = {v1, v2, v3} provided there are scalars c1, c2, and c3 such that v = c1 v1 + c2 v2 + c3 v3. Row reduce the augmented matrix [v1 v2 v3 | v]. We have that

[1 1 -1 | -1; 1 1 2 | 1; 0 1 0 | 1] reduces to [1 1 -1 | -1; 0 1 0 | 1; 0 0 3 | 2],

and since the linear system has a (unique) solution, the vector v is in the span.
26. Since

[1 1 -1 | -2; 1 1 2 | 7; 0 1 0 | 3] reduces to [1 1 -1 | -2; 0 1 0 | 3; 0 0 3 | 9],

the linear system has a (unique) solution and hence, the vector v is in the span.
27. Since [1 0 1 | 2; 1 -1 -1 | -1; 0 2 4 | 6; 1 1 3 | 5] reduces to [1 0 1 | 2; 0 1 2 | 3; 0 0 0 | 0; 0 0 0 | 0], the linear system is consistent and the matrix M is in the span.
28. Since [1 0 1 | 1; 1 -1 -1 | 1; 0 2 4 | 2; 1 1 3 | 3] reduces to [1 0 1 | 1; 0 1 2 | 0; 0 0 0 | 2; 0 0 0 | 0], the linear system is inconsistent and hence, the matrix M is not in the span.
29. Since c1(1 + x) + c2(x^2 - 2) + c3(3x) = 2x^2 - 6x - 11 if and only if (c1 - 2c2) + (c1 + 3c3)x + c2 x^2 = 2x^2 - 6x - 11, we have that c1 = -7, c2 = 2, c3 = 1/3 and hence, the polynomial is in the span.
30. Since c1(1 + x) + c2(x^2 - 2) + c3(3x) = 3x^2 - x - 4 if and only if (c1 - 2c2) + (c1 + 3c3)x + c2 x^2 = 3x^2 - x - 4, we have that c1 = 2, c2 = 3, c3 = -1 and hence, the polynomial is in the span.
31. Since [2 1 | a; 1 -3 | b; -2 -1 | c] reduces to [1 -3 | b; 0 7 | a - 2b; 0 0 | a + c], then span(S) = { (a, b, c)^T : a + c = 0 }.
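Constraints of this kind describe the left null space of the matrix whose columns are the vectors of S: a row vector y annihilates every vector in span(S) exactly when y^T A = 0. A sympy sketch, assuming the vectors (2, 1, -2)^T and (1, -3, -1)^T as reconstructed above:

```python
import sympy as sp

# Columns of A are the spanning vectors of S.
A = sp.Matrix([[2, 1],
               [1, -3],
               [-2, -1]])

# Left null space of A = null space of A^T: each basis vector y gives
# a constraint y . (a, b, c) = 0 that cuts out span(S).
constraints = A.T.nullspace()
assert len(constraints) == 1
print(list(constraints[0]))  # the constraint vector (1, 0, 1), i.e. a + c = 0
```

A two-dimensional span in R^3 leaves a one-dimensional left null space, so exactly one constraint appears, matching the single equation a + c = 0 found by row reduction.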
32. Since [1 2 1 | a; 1 3 2 | b; 2 1 -1 | c] reduces to [1 2 1 | a; 0 1 1 | -a + b; 0 0 0 | -5a + 3b + c], then span(S) = { (a, b, c)^T : -5a + 3b + c = 0 }.
33. The equation c1 [1 2; 1 0] + c2 [1 -1; 0 1] = [a b; c d] leads to the linear system c1 + c2 = a, 2c1 - c2 = b, c1 = c, c2 = d, which gives span(S) = { [a, b; (a + b)/3, (2a - b)/3] : a, b in R }.
34. span(S) = { [a b; c d] : b + d = 0 }.
35. span(S) = { ax^2 + bx + c : a - c = 0 }.
36. Since [4 2 -2 | a; 0 1 1 | b; 1 0 1 | c] reduces to [4 2 -2 | a; 0 1 1 | b; 0 0 2 | c - (1/4)a + (1/2)b], there is a pivot in every row, so the system is consistent for every choice of a, b, and c; the span is all polynomials of degree two or less.
37. a. span(S) = { (a, b, (b - 2a)/3)^T : a, b in R }. b. The set S is linearly independent.
38. a. span(S) = { (a, b, 3a - b)^T : a, b in R }. b. Since 2v1 - v2 = v3, the set S is linearly dependent.
39. a. span(S) = R^3 b. The set S is linearly independent.
40. a. Since [1 1 0 2 | a; 2 0 1 1 | b; 1 -3 1 1 | c] reduces to [1 1 0 2 | a; 0 -2 1 -3 | -2a + b; 0 0 -1 5 | 3a - 2b + c], every vector in R^3 is a linear combination of the vectors in S and hence, span(S) = R^3. b. The set S is linearly dependent since there are four vectors in R^3.
41. a. span(S) = R^3 b. The set S is linearly dependent. c. The set T is also linearly dependent and span(T) = R^3. d. The set H is linearly independent and we still have span(H) = R^3.
42. a. span(S) = { [a b; c 0] : a, b, c in R } b. The set S is linearly independent. c. The set T is also linearly independent and span(T) = M_{2x2}.
43. a. span(S) = P2 b. The set S is linearly dependent. c. 2x^2 + 3x + 5 = 2(1) - (x - 3) + 2(x^2 + 2x) d. The set T is linearly independent and span(T) = P3.
44. a. Since

(2s1 - t1, s1, t1, s1)^T + c(2s2 - t2, s2, t2, s2)^T = (2(s1 + cs2) - (t1 + ct2), s1 + cs2, t1 + ct2, s1 + cs2)^T

is in S, then S is a subspace. b. Since (2s - t, s, t, s)^T = s(2, 1, 0, 1)^T + t(-1, 0, 1, 0)^T, then S = span{ (2, 1, 0, 1)^T, (-1, 0, 1, 0)^T }.
c. Since the vectors (2, 1, 0, 1)^T and (-1, 0, 1, 0)^T are not multiples of each other, they are linearly independent.
d. S is not equal to R^4.
45. a., b. Since (s, s - 5t, 2s + 3t)^T = s(1, 1, 2)^T + t(0, -5, 3)^T, then S = span{ (1, 1, 2)^T, (0, -5, 3)^T }. Therefore, S is a subspace. c. The vectors found in part (b) are linearly independent. d. Since the span of two linearly independent vectors in R^3 is a plane, S is not equal to R^3.
46. a. The subspace S consists of all matrices [a b; c d] such that a - 2b + c + d = 0. b. From part (a), not all matrices can be written as a linear combination of the matrices in S and hence, the span of S is not equal to M_{2x2}. c. The matrices that generate the set S are linearly independent.
47. Since A(x + cy) = Ax + cAy = (1, 2)^T + c(1, 2)^T = (1, 2)^T if and only if c = 0, then S is not a subspace.
48. If u and v are in S, then A(u +cv) = Au +cAv = 0 +0 = 0 and hence S is a subspace.
49. Let B1, B2 be in S. Since A commutes with B1 and B2, we have that

A(B1 + cB2) = AB1 + cAB2 = B1 A + c(B2 A) = (B1 + cB2)A

and hence, B1 + cB2 is in S and S is a subspace.
50. Let w1 = u1 + v1 and w2 = u2 + v2 be two elements of S + T, and let c be a scalar. Then

w1 + cw2 = u1 + v1 + c(u2 + v2) = (u1 + cu2) + (v1 + cv2).

Since S and T are subspaces, u1 + cu2 is in S and v1 + cv2 is in T. Therefore, w1 + cw2 is in S + T and hence, S + T is a subspace.
51. Let w be in S + T, so that w = u + v, where u is in S and v is in T. Then there are scalars c1, ..., cm and d1, ..., dn such that w = c1 u1 + ... + cm um + d1 v1 + ... + dn vn. Therefore, w is in span{u1, ..., um, v1, ..., vn} and we have shown that S + T is contained in span{u1, ..., um, v1, ..., vn}.
Now let w be in span{u1, ..., um, v1, ..., vn}, so there are scalars c1, ..., c_{m+n} such that w = c1 u1 + ... + cm um + c_{m+1} v1 + ... + c_{m+n} vn, which is in S + T. Therefore, span{u1, ..., um, v1, ..., vn} is contained in S + T.
52. a. Since

[a -a; b c] + k[d -d; e f] = [a + kd, -a - kd; b + e, c + f] = [a + kd, -(a + kd); b + e, c + f]

is in S, then S is a subspace. Similarly, T is a subspace. b. The sets S and T are given by

S = span{ [1 -1; 0 0], [0 0; 1 0], [0 0; 0 1] }, T = span{ [1 0; -1 0], [0 1; 0 0], [0 0; 0 1] },

so

S + T = span{ [1 -1; 0 0], [0 0; 1 0], [0 0; 0 1], [1 0; -1 0], [0 1; 0 0] }.

But [1 -1; 0 0] = [1 0; -1 0] + [0 0; 1 0] - [0 1; 0 0], so

S + T = span{ [0 0; 1 0], [0 0; 0 1], [1 0; -1 0], [0 1; 0 0] } = M_{2x2}.
Exercise Set 3.3 Basis and Dimension
In Section 3.3 of the text, the connection between a spanning set of a vector space and linear independence is completed. The minimal spanning sets, minimal in the sense of the number of vectors in the set, are those that are linearly independent. A basis for a vector space V is a set B such that B is linearly independent and span(B) = V. For example,
B = {e1, e2, ..., en} is a basis for R^n
B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] } is a basis for M_{2x2}
B = {1, x, x^2, ..., x^n} is a basis for P_n.
Every vector space has infinitely many bases. For example, if c is not 0, then B = {c e1, e2, ..., en} is another basis for R^n. But all bases for a vector space have the same number of vectors, called the dimension of the vector space, and denoted by dim(V). As a consequence of the bases noted above:
dim(R^n) = n
dim(M_{2x2}) = 4 and in general dim(M_{mxn}) = mn
dim(P_n) = n + 1.
If S = {v1, v2, ..., vm} is a subset of a vector space V and dim(V) = n, recognizing the following possibilities will be useful in the exercise set:
If the number of vectors in S exceeds the dimension of V, that is, m > n, then S is linearly dependent and hence, can not be a basis.
If m > n, then span(S) can equal V, but in this case some of the vectors are linear combinations of others and the set S can be trimmed down to a basis for V.
If m <= n, then the set S can be either linearly independent or linearly dependent.
If m < n, then S can not be a basis for V, since in this case span(S) is not equal to V.
If m < n and the vectors in S are linearly independent, then S can be expanded to a basis for V.
If m = n, then S will be a basis for V if either S is linearly independent or span(S) = V. So in this case it is sufficient to verify only one of the conditions.
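These m-versus-n cases all reduce to one rank computation: with the vectors of S as the columns of an n x m matrix, S is linearly independent when the rank equals m and spans R^n when the rank equals n. A hedged numpy sketch (the function classify is ours):

```python
import numpy as np

def classify(vectors, n):
    """Report (independent, spans R^n, basis) for a list of vectors in R^n."""
    A = np.column_stack(vectors)          # n x m matrix, vectors as columns
    m = A.shape[1]
    r = np.linalg.matrix_rank(A)
    independent = bool(r == m)            # pivot in every column
    spans = bool(r == n)                  # pivot in every row
    return independent, spans, independent and spans

# Three independent vectors in R^3 form a basis:
print(classify([np.array([1, 1, 2]), np.array([3, 1, 2]), np.array([0, 1, 0])], 3))
# (True, True, True)

# Two vectors can never span R^3 (m < n), though they may be independent:
print(classify([np.array([1, 1, 2]), np.array([3, 1, 2])], 3))
# (True, False, False)
```

The m = n case in the list above is visible here too: when m = n, the two rank conditions coincide, so checking either one suffices.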
The two vectors v1 = (1, 1, 2)^T and v2 = (3, 1, 2)^T are linearly independent but can not be a basis for R^3, since all bases for R^3 must have three vectors. To expand to a basis, start with the matrix

[1 3 | 1 0 0; 1 1 | 0 1 0; 2 2 | 0 0 1], which reduces to the echelon form [1 3 1 0 0; 0 -2 -1 1 0; 0 0 0 -2 1].

The pivots in the echelon form matrix are located in columns one, two and four, so the corresponding column vectors in the original matrix form the basis. So

B = { (1, 1, 2)^T, (3, 1, 2)^T, (0, 1, 0)^T }

is a basis for R^3.
To trim a set of vectors that spans the space down to a basis, the procedure is the same. For example, the set

S = { (0, 1, 1)^T, (2, 2, 1)^T, (0, 2, 2)^T, (3, 1, 1)^T }

is not a basis for R^3 since there are four vectors and hence, S is linearly dependent. Since

[0 2 0 3; 1 2 2 1; 1 1 2 1] reduces to [1 2 2 1; 0 2 0 3; 0 0 0 3],

the span of the four vectors is R^3. The pivots in the reduced matrix are in columns one, two and four, so a basis for R^3 is

{ (0, 1, 1)^T, (2, 2, 1)^T, (3, 1, 1)^T }.
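The trimming rule above — keep the original vectors sitting in the pivot columns — is mechanical with sympy's rref; a sketch using the four vectors of this example:

```python
import sympy as sp

# A spanning set for R^3 with one redundant vector.
S = [sp.Matrix([0, 1, 1]), sp.Matrix([2, 2, 1]),
     sp.Matrix([0, 2, 2]), sp.Matrix([3, 1, 1])]

A = sp.Matrix.hstack(*S)       # vectors as columns
_, pivots = A.rref()           # indices of the pivot columns
basis = [S[j] for j in pivots]

print(pivots)                  # (0, 1, 3): the third vector is 2*(first), so it is dropped
print([list(v) for v in basis])
```

The same code extends an independent set to a basis: append the standard basis vectors as extra columns before calling rref, exactly as in the augmented-identity computation above.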
Solutions to Exercises
1. Since dim(R^3) = 3, every basis for R^3 has three vectors. Therefore, since S has only two vectors, it is not a basis for R^3.
2. Since dim(R^2) = 2, every basis for R^2 has two vectors. Therefore, since S has three vectors, it is linearly dependent and hence it is not a basis for R^2.
3. Since the third vector can be written as the sum of the first two, the set S is linearly dependent and hence, is not a basis for R^3.
4. Since dim(P3) = 4, every basis has four polynomials and hence S is not a basis.
5. Since the third polynomial is a linear combination of the first two, the set S is linearly dependent and hence is not a basis for P3.
6. Since the matrix

[2 -3; 1 2] = 2[1 0; 0 1] - 3[0 1; 0 0] + [0 0; 1 0],

the set S is linearly dependent and hence, is not a basis.
7. The two vectors in S are not scalar multiples of each other and hence, the set S is linearly independent. Since every linearly independent set of two vectors in R^2 is a basis, the set S is a basis.
8. Since det([1 1; 3 1]) = -2, which is nonzero, the set S is linearly independent and hence, is a basis for R^2.
9. Since [1 0 0; 1 2 -2; 1 3 2] reduces to [1 0 0; 0 2 -2; 0 0 5], S is a linearly independent set of three vectors in R^3 and hence, S is a basis.
10. Since [1 2 1; 1 -1 1; 0 3 2] reduces to [1 2 1; 0 -3 0; 0 0 2], S is a linearly independent set of three vectors in R^3 and hence, S is a basis.
11. Since [1 1 0 1; 0 1 1 0; 1 1 1 0; 0 0 2 1] reduces to [1 1 0 1; 0 1 1 0; 0 0 1 -1; 0 0 0 3], the set S is a linearly independent set of four matrices in M_{2x2}. Since dim(M_{2x2}) = 4, then S is a basis.
12. Since [1 2 0; 0 1 1; 1 0 -1] reduces to [1 2 0; 0 1 1; 0 0 1], S is a linearly independent set of three polynomials in P2 and hence, is a basis.
13. Since [1 1 -1; 2 0 1; 1 1 1] reduces to [1 1 -1; 0 -2 3; 0 0 2], the set S is a linearly independent set of three vectors in R^3, so is a basis.
14. Since [2 5 3; 2 -1 -1; 1 2 1] reduces to [2 5 3; 0 -6 -4; 0 0 1], the set S is a linearly independent set of three vectors in R^3, so is a basis.
15. Since the matrix [1 2 2 1; 1 1 4 2; 1 3 2 0; 1 1 5 3], whose columns are the vectors of S, row reduces to a matrix with a row of zeros, the homogeneous linear system has infinitely many solutions, so the set S is linearly dependent and is therefore not a basis for R^4.
16. Since the matrix with the vectors of S as columns row reduces to a triangular matrix with four nonzero pivots, the set S is a linearly independent set of four vectors in R^4, so is a basis.
17. Notice that suitable linear combinations of the polynomials in S produce x and x^2, and hence also the constant polynomial 1, so the span of S is P2. Since dim(P2) = 3, the set S is a basis.
18. Since [1 1 2 0; 0 1 1 0; 0 0 1 0; 0 0 1 2] reduces to [1 1 2 0; 0 1 1 0; 0 0 1 0; 0 0 0 2], the set S is a linearly independent set of four matrices in M_{2x2}, so is a basis.
19. Every vector in S can be written as (s + 2t, s + t, t)^T = s(1, 1, 0)^T + t(2, 1, 1)^T. Since the vectors (1, 1, 0)^T and (2, 1, 1)^T are linearly independent, a basis for S is B = { (1, 1, 0)^T, (2, 1, 1)^T } and dim(S) = 2.
20. Since every matrix in S can be written in the form

[a, a + d; a + d, d] = a[1 1; 1 0] + d[0 1; 1 1],

and the two matrices on the right hand side are linearly independent, a basis for S is B = { [1 1; 1 0], [0 1; 1 1] }. Consequently, dim(S) = 2.
21. Every 2 x 2 symmetric matrix has the form [a b; b d] = a[1 0; 0 0] + b[0 1; 1 0] + d[0 0; 0 1]. Since the three matrices on the right are linearly independent, a basis for S is B = { [1 0; 0 0], [0 1; 1 0], [0 0; 0 1] } and dim(S) = 3.
22. Every 2 x 2 skew-symmetric matrix has the form [0 b; -b 0] = b[0 1; -1 0] and hence, a basis for S is B = { [0 1; -1 0] } with dim(S) = 1.
23. Since every polynomial p(x) in S satisfies p(0) = 0, we have that p(x) = ax + bx^2. Therefore, a basis for S is B = { x, x^2 } and dim(S) = 2.
24. If p(x) = ax^3 + bx^2 + cx + d and p(0) = 0, then d = 0, so p(x) has the form p(x) = ax^3 + bx^2 + cx. If in addition, p(1) = 0, then a + b + c = 0, so c = -a - b and hence

p(x) = ax^3 + bx^2 + (-a - b)x = a(x^3 - x) + b(x^2 - x).

Since x^3 - x and x^2 - x are linearly independent, a basis for S is { x^3 - x, x^2 - x }, so dim(S) = 2.
25. Since det([2 2 1; 2 0 2; 1 2 1]) = -4, which is nonzero, the set S is already a basis for R^3, since it is a linearly independent set of three vectors in R^3.
26. Since (2, 0, 5)^T = (-2, 1, 3)^T + (4, -1, 2)^T, the vectors in S are linearly dependent. Since the first two vectors are not scalar multiples of each other, a basis for span(S) is { (-2, 1, 3)^T, (4, -1, 2)^T }.
27. The vectors can not be a basis since a set of four vectors in R^3 is linearly dependent. To trim the set down to a basis for the span, row reduce the matrix with the vectors in S as column vectors:

[2 0 -1 2; 3 2 1 -3; 0 2 0 1] reduces to [2 0 -1 2; 0 2 5/2 -6; 0 0 -5/2 7].

A basis for the span consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for the span of S is B = { (2, 3, 0)^T, (0, 2, 2)^T, (-1, 1, 0)^T }. Observe that span(S) = R^3.
28. The vectors can not be a basis since a set of four vectors in R^3 is linearly dependent. To trim the set down to a basis for the span, row reduce the matrix with the vectors in S as column vectors:

[2 1 -3 1; 0 0 3 2; 2 3 2 2] reduces to [2 1 -3 1; 0 2 5 1; 0 0 3 2].

A basis for the span consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for the span of S is B = { (2, 0, 2)^T, (1, 0, 3)^T, (-3, 3, 2)^T }. Observe that span(S) = R^3.
29. The vectors can not be a basis since a set of four vectors in R^3 is linearly dependent. To trim the set down to a basis for the span, row reduce the matrix with the vectors in S as column vectors:

[2 0 2 4; 3 2 1 0; 0 -2 2 4] reduces to [2 0 2 4; 0 2 -2 -6; 0 0 0 -2].

A basis for the span consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for the span of S is B = { (2, 3, 0)^T, (0, 2, -2)^T, (4, 0, 4)^T }. Observe that span(S) = R^3.
30. The vectors can not be a basis since a set of four vectors in R^3 is linearly dependent. To trim the set down to a basis for the span, row reduce the matrix with the vectors in S as column vectors:

[2 1 0 2; 2 -1 2 3; 0 0 2 1] reduces to [2 1 0 2; 0 -2 2 1; 0 0 2 1].

A basis for the span consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for the span of S is B = { (2, 2, 0)^T, (1, -1, 0)^T, (0, 2, 2)^T }. Observe that span(S) = R^3.
31. Form the 3 x 5 matrix with first two columns the vectors in S, augmented with the identity matrix. Reducing this matrix, we have that

[2 1 1 0 0; 1 0 0 1 0; 3 2 0 0 1] reduces to [2 1 1 0 0; 0 1 1 -2 0; 0 0 -2 1 1].

A basis for R^3 consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for R^3 containing S is B = { (2, 1, 3)^T, (1, 0, 2)^T, (1, 0, 0)^T }.
32. Form the 3 x 5 matrix with first two columns the vectors in S, augmented with the identity matrix. Reducing this matrix, we have that

[1 1 1 0 0; 1 -1 0 1 0; 3 -1 0 0 1] reduces to [1 1 1 0 0; 0 -2 -1 1 0; 0 0 -1 -2 1].

A basis for R^3 consists of the column vectors in the original matrix corresponding to the pivot columns of the row echelon matrix. So a basis for R^3 containing S is B = { (1, 1, 3)^T, (1, -1, -1)^T, (1, 0, 0)^T }.
33. A basis for R^4 containing S is B = { (1, 1, 2, 4)^T, (3, 1, 1, 2)^T, (1, 0, 0, 0)^T, (0, 0, 1, 0)^T }.
34. A basis for R^4 containing S is B = { (1, 1, 1, 1)^T, (1, 3, 1, 2)^T, (1, 2, 1, 3)^T, (1, 0, 0, 0)^T }.
35. A basis for R^3 containing S is B = { (1, 1, 3)^T, (1, 1, 1)^T, (1, 0, 0)^T }.
36. A basis for R^3 containing S is B = { (2, 2, 1)^T, (1, 1, 3)^T, (1, 0, 0)^T }.
37. Let e_ii denote the n x n matrix with a 1 in the (i, i) entry and 0 in all other entries. Then B = { e_ii : 1 <= i <= n } is a basis for the subspace of all n x n diagonal matrices.
38. Consider the equation c1(cv1) + c2(cv2) + ... + cn(cvn) = 0. Then c(c1 v1 + c2 v2 + ... + cn vn) = 0 and since c is not 0, we have that c1 v1 + c2 v2 + ... + cn vn = 0. Now since S is a basis, it is linearly independent, so c1 = c2 = ... = cn = 0 and hence S' = {cv1, ..., cvn} is also linearly independent; being n vectors in an n-dimensional space, S' is a basis.
39. It is sufficient to show that the set S' is linearly independent.
40. To solve the homogeneous equation Ax = 0, consider the matrix

[3 -3 1 3; 1 0 1 1; 2 0 2 1] that reduces to [1 0 1 0; 0 1 2/3 0; 0 0 0 1].

So a vector is a solution provided it has the form x = (-z, -(2/3)z, z, 0)^T = z(-1, -2/3, 1, 0)^T. Hence, a basis for S is { (-1, -2/3, 1, 0)^T }.
41. Since H is a subspace of V, then H is contained in V. Let S = {v1, v2, ..., vn} be a basis for H, so that S is a linearly independent set of vectors in V. Since dim(V) = n, then S is also a basis for V. Now let v be a vector in V. Since S is a basis for V, then there exist scalars c1, c2, ..., cn such that c1 v1 + c2 v2 + ... + cn vn = v. Since S is a basis for H, then v is a linear combination of vectors in H and is therefore also in H. Hence, V is contained in H and we have that H = V.
42. Since S = { ax^3 + bx^2 + cx : a, b, c in R }, then dim(S) = 3. A polynomial q(x) = ax^3 + bx^2 + cx + d is in T if and only if a + b + c + d = 0, that is, d = -a - b - c. Hence, a polynomial in T has the form q(x) = a(x^3 - 1) + b(x^2 - 1) + c(x - 1), so that dim(T) = 3. Now the intersection of S and T is { ax^3 + bx^2 + cx : a + b + c = 0 }. Hence, a polynomial q(x) is in the intersection if and only if q(x) = a(x^3 - x) + b(x^2 - x), and the intersection has dimension 2.
43. Every vector in W can be written as a linear combination of the form

(2s + t + 3r, 3s - t + 2r, s + t + 2r)^T = s(2, 3, 1)^T + t(1, -1, 1)^T + r(3, 2, 2)^T.

But (3, 2, 2)^T = (2, 3, 1)^T + (1, -1, 1)^T and hence,

span{ (2, 3, 1)^T, (1, -1, 1)^T } = span{ (2, 3, 1)^T, (1, -1, 1)^T, (3, 2, 2)^T }.

Since B = { (2, 3, 1)^T, (1, -1, 1)^T } is linearly independent, B is a basis for W, so that dim(W) = 2.
44. Since S = { s(1, 0, 0, 0)^T + t(0, 1, 0, 0)^T : s, t in R }, then dim(S) = 2. Since T = { s(0, 1, 0, 0)^T + t(0, 0, 1, 0)^T : s, t in R }, we also have that dim(T) = 2. For the intersection, since it equals { s(0, 1, 0, 0)^T : s in R }, its dimension is 1.
Exercise Set 3.4 Coordinates and Change of Basis
If B = {v1, ..., vn} is an ordered basis for a vector space, then for each vector v there are scalars c1, c2, ..., cn such that v = c1 v1 + c2 v2 + ... + cn vn. The unique scalars are called the coordinates of the vector relative to the ordered basis B, written as

[v]_B = (c1, c2, ..., cn)^T.
If B is one of the standard bases of R^n, then the coordinates of a vector are just the components of the vector. For example, since every vector in R^3 can be written as

v = (x, y, z)^T = x(1, 0, 0)^T + y(0, 1, 0)^T + z(0, 0, 1)^T,

the coordinates relative to the standard basis are [v]_B = (x, y, z)^T. To find the coordinates relative to an ordered basis, solve the usual vector equation

c1 v1 + c2 v2 + ... + cn vn = v

for the scalars c1, c2, ..., cn. The order in which the vectors are given makes a difference when defining coordinates. For example, if B = { (1, 0)^T, (0, 1)^T } and B' = { (0, 1)^T, (1, 0)^T }, then

[v]_B = [(x, y)^T]_B = (x, y)^T and [v]_{B'} = [(x, y)^T]_{B'} = (y, x)^T.
Given the coordinates relative to one basis B = {v1, ..., vn}, a transition matrix can be used to find the coordinates of the same vector relative to a second basis B' = {v1', ..., vn'}. To determine a transition matrix:
Find the coordinates of each vector in B relative to the basis B'.
Form the transition matrix whose column vectors are the coordinates found in the first step. That is,

[I]_B^{B'} = [ [v1]_{B'} [v2]_{B'} ... [vn]_{B'} ].

The coordinates of v relative to B' are then [v]_{B'} = [I]_B^{B'} [v]_B.
The transition matrix that changes coordinates from B' to B is given by

[I]_{B'}^B = ([I]_B^{B'})^{-1}.
Let B = { (1, 1)^T, (-1, 1)^T } and B' = { (1, 2)^T, (2, 1)^T } be two bases for R^2. The steps for finding the transition matrix from B to B' are:
Since

[1 2 | 1 -1; 2 1 | 1 1] reduces to [1 2 | 1 -1; 0 -3 | -1 3],

the coordinates of the two vectors in B relative to B' are

[(1, 1)^T]_{B'} = (1/3, 1/3)^T and [(-1, 1)^T]_{B'} = (1, -1)^T.

[I]_B^{B'} = [1/3 1; 1/3 -1]

As an example,

[(-3, 2)^T]_{B'} = [1/3 1; 1/3 -1] [(-3, 2)^T]_B = [1/3 1; 1/3 -1] (-1/2, 5/2)^T = (7/3, -8/3)^T.
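The whole computation can be reproduced numerically: with the basis vectors as matrix columns, solving B'T = B produces the transition matrix in one step. A numpy sketch of the worked example above:

```python
import numpy as np

# Bases for R^2, basis vectors as columns.
B  = np.array([[1.0, -1.0], [1.0, 1.0]])   # B  = {(1,1), (-1,1)}
Bp = np.array([[1.0,  2.0], [2.0, 1.0]])   # B' = {(1,2), (2,1)}

# Transition matrix from B to B': each column of B in B'-coordinates.
T = np.linalg.solve(Bp, B)     # solves Bp @ T = B; columns (1/3, 1/3) and (1, -1)

# Check on v = (-3, 2): B-coordinates, then convert to B'-coordinates.
v = np.array([-3.0, 2.0])
v_B  = np.linalg.solve(B, v)   # (-1/2, 5/2)
v_Bp = T @ v_B                 # (7/3, -8/3)

# Converting through T must agree with solving for B'-coordinates directly.
assert np.allclose(v_Bp, np.linalg.solve(Bp, v))
print(v_B, v_Bp)
```

Solving the linear system Bp @ T = B, rather than forming an explicit inverse, is the standard numerically stable way to build the transition matrix.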
Solutions to Exercises
1. The coordinates of (8, 0)^T relative to the basis B are the scalars c1 and c2 such that c1(3, 1)^T + c2(-2, 2)^T = (8, 0)^T. The vector equation yields the linear system 3c1 - 2c2 = 8, c1 + 2c2 = 0, which has the unique solution c1 = 2 and c2 = -1. Hence, [v]_B = (2, -1)^T.
2. Since [2 1 | 2; 4 1 | 1] reduces to [1 0 | -1/2; 0 1 | 3], then [v]_B = (-1/2, 3)^T.
3. To find the coordinates we form and row reduce the matrix

[1 3 1 | 2; 1 1 0 | 1; 2 1 2 | 9], which reduces to [1 0 0 | 2; 0 1 0 | -1; 0 0 1 | 3], so that [v]_B = (2, -1, 3)^T.
4. Since [2 1 0 | 0; 2 0 0 | 1; 1 2 1 | 1/2] reduces to [1 0 0 | 1/2; 0 1 0 | -1; 0 0 1 | 2], then [v]_B = (1/2, -1, 2)^T.
5. Since c1 + c2(x - 1) + c3 x^2 = 3 + 2x - 2x^2 if and only if c1 - c2 = 3, c2 = 2, and c3 = -2, we have that [v]_B = (5, 2, -2)^T.
6. The equation c1(-x^2 + 2x + 2) + c2(2x + 3) + c3(x^2 + x + 1) = 8 + 6x + 3x^2 gives the linear system, in matrix form,

[2 3 1 | 8; 2 2 1 | 6; -1 0 1 | 3], which reduces to [1 0 0 | -1/3; 0 1 0 | 2; 0 0 1 | 8/3], so [v]_B = (-1/3, 2, 8/3)^T.
7. Since c1 [1 1; 0 0] + c2 [0 -1; 1 0] + c3 [1 0; 0 1] + c4 [1 0; -1 0] = [-1 3; 2 2] if and only if

c1 + c3 + c4 = -1, c1 - c2 = 3, c2 - c4 = 2, c3 = 2,

and the linear system has the solution c1 = 1, c2 = -2, c3 = 2, and c4 = -4, we have that [v]_B = (1, -2, 2, -4)^T.
8. Since c1 [1 1; 0 1] + c2 [0 1; 0 2] + c3 [1 1; 1 0] + c4 [1 1; 0 3] = [2 2; 1 3] leads to the linear system with augmented matrix

[1 0 1 1 | 2; 1 1 1 1 | 2; 0 0 1 0 | 1; 1 2 0 3 | 3], which reduces to [1 0 0 0 | 2; 0 1 0 0 | 2; 0 0 1 0 | 1; 0 0 0 1 | 1],

then [v]_B = (2, 2, 1, 1)^T.
9. [v]_{B1} = (1/4, 1/8)^T; [v]_{B2} = (1/2, 1/2)^T
10. [v]_{B1} = (1, 1, 1)^T; [v]_{B2} = (2, 2, 0)^T
11. [v]_{B1} = (1, 2, 1)^T; [v]_{B2} = (1, 1, 0)^T
12. [v]_{B1} = (1, 1, 1, 1)^T; [v]_{B2} = (1/3, 1, 7/3, 1/3)^T
13. The column vectors for the transition matrix from a basis B
1
to a basis B
2
are the coordinate vectors
for the vectors in B
1
relative to B
2
. Hence, [I]
B2
B1
=
_
_
1
1
_
B2
_
1
1
_
B2
_
=
_
1 1
1 1
_
. Then this matrix
transforms coordinates relative to B
1
to coordinates relative to B
2
, that is, [v]
B2
= [I]
B2
B1
[v]
B1
=
_
1
5
_
.
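When both bases are given in standard coordinates, the transition matrix [I]_{B1}^{B2} can be computed in one step as B2^{-1} B1, where the columns of B1 and B2 are the basis vectors. A hedged sketch with hypothetical bases of R^2 (the bases of Exercise 13 are ambiguous in this printing, so these are illustrative choices):

```python
import numpy as np

# Hypothetical bases of R^2; columns are basis vectors.
B1 = np.array([[1.0, 2.0],
               [0.0, 1.0]])
B2 = np.array([[1.0, 0.0],
               [1.0, 1.0]])

# [I]_{B1}^{B2} = B2^{-1} B1 converts B1-coordinates to B2-coordinates.
T = np.linalg.solve(B2, B1)

# Check: both coordinate vectors must name the same standard vector.
c1 = np.array([3.0, -1.0])      # coordinates relative to B1
c2 = T @ c1                     # coordinates relative to B2
assert np.allclose(B1 @ c1, B2 @ c2)
print(T)
```

Using `np.linalg.solve(B2, B1)` instead of explicitly inverting B2 is the standard numerically safer choice.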
14. [I]_{B1}^{B2} = [1/3 2/3; 7/6 1/6]; [v]_{B2} = [I]_{B1}^{B2} [v]_{B1} = [I]_{B1}^{B2} (1, 1) = (1/3, 4/3)

15. [I]_{B1}^{B2} = [3 2 1; 1 2/3 0; 0 1/3 0]; [v]_{B2} = [I]_{B1}^{B2} [v]_{B1} = [I]_{B1}^{B2} (1, 0, 2) = (1, 1, 0)

16. [I]_{B1}^{B2} = [2 4/3 4/3; 1 1/3 2/3; 0 1/3 1/3]; [v]_{B2} = [I]_{B1}^{B2} [v]_{B1} = [I]_{B1}^{B2} (2, 1, 1) = (4, 3, 0)

17. Notice that the only difference between the bases B1 and B2 is the order in which the polynomials 1, x, and x^2 are given. As a result the column vectors of the transition matrix are the coordinate vectors, only permuted. That is, [I]_{B1}^{B2} = [0 0 1; 1 0 0; 0 1 0]. Then [v]_{B2} = [I]_{B1}^{B2} [v]_{B1} = (5, 2, 3).

18. Since [I]_{B1}^{B2} = [ [x^2 - 1]_{B2}  [2x^2 + x + 1]_{B2}  [x + 1]_{B2} ] = [1/4 5/8 3/8; 1 1/2 1/2; 3/4 11/8 3/8], then [v]_{B2} = [I]_{B1}^{B2} [v]_{B1} = [I]_{B1}^{B2} (1, 1, 2) = (13/8, 1/2, 11/8).
19. Since the equation c1(1, 1, 1) + c2(1, 0, 1) + c3(1, 1, 0) = (a, b, c) gives the augmented matrix [1 1 1 | a; 1 0 1 | b; 1 1 0 | c], which reduces to [1 0 0 | a - b + c; 0 1 0 | a + b; 0 0 1 | a + 2b - c], we have that [(a, b, c)]_B = (a - b + c, a + b, a + 2b - c).

20. Since [1 0 0 1 | a; 0 1 1 0 | b; 1 1 1 0 | c; 0 1 0 1 | d] reduces to [1 0 0 0 | 2a + b - c - 2d; 0 1 0 0 | a - b + c + d; 0 0 1 0 | a - c - d; 0 0 0 1 | a + b - c - 2d], then [(a, b, c, d)]_B = (2a + b - c - 2d, a - b + c + d, a - c - d, a + b - c - 2d).

21. a. [I]_{B1}^{B2} = [0 1 0; 1 0 0; 0 0 1] b. [v]_{B2} = [I]_{B1}^{B2} (1, 2, 3) = (2, 1, 3)

22. a. [I]_{B1}^{B2} = [1 1; 1 0] b. [I]_{B2}^{B1} = [0 1; 1 -1] c. Since [1 1; 1 0][0 1; 1 -1] = [1 0; 0 1], then ([I]_{B1}^{B2})^{-1} = [I]_{B2}^{B1}.
23. a. [I]_S^B = [1 1; 0 2] b. [(1, 2)]_B = (3, 4); [(1, 4)]_B = (5, 8); [(4, 2)]_B = (6, 4); [(4, 4)]_B = (8, 8) c.-d. (graphs of the given vectors in the two coordinate systems)

24. a. [v]_B = [cos θ -sin θ; sin θ cos θ][x; y] = (x cos θ - y sin θ, x sin θ + y cos θ)

3.5 Application: Differential Equations 81

b. (graph) c. [(0, 0)]_B = (0, 0), [(0, 1)]_B = (1, 0), [(1, 0)]_B = (0, 1), [(1, 1)]_B = (1, 1) (graph)

25. a. Since u1 = v1 + 2v2, u2 = v1 + 2v2 - v3, and u3 = v2 + v3, the coordinates of u1, u2, and u3 relative to B2 are [u1]_{B2} = (1, 2, 0), [u2]_{B2} = (1, 2, -1), [u3]_{B2} = (0, 1, 1), so [I]_{B1}^{B2} = [1 1 0; 2 2 1; 0 -1 1].
b. [2u1 - 3u2 + u3]_{B2} = [I]_{B1}^{B2} (2, -3, 1) = (1, 3, 4)
Exercise Set 3.5
1. a. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Substituting these into the differential equation gives the auxiliary equation r^2 - 5r + 6 = 0. Factoring, we have that (r - 3)(r - 2) = 0 and hence, two distinct solutions are y1 = e^{2x} and y2 = e^{3x}.
b. Since W[y1, y2](x) = det[e^{2x} e^{3x}; 2e^{2x} 3e^{3x}] = e^{5x} > 0 for all x, the two solutions are linearly independent.
c. The general solution is the linear combination y(x) = C1 e^{2x} + C2 e^{3x}, where C1 and C2 are arbitrary constants.
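A quick numerical check of part (c): plug y(x) = C1 e^{2x} + C2 e^{3x} and its derivatives into y'' - 5y' + 6y and confirm the result vanishes. A minimal sketch (the constants C1, C2 and the sample points are arbitrary choices):

```python
from math import exp

C1, C2 = 1.5, -0.75  # arbitrary constants

def y(x):   return C1 * exp(2 * x) + C2 * exp(3 * x)
def yp(x):  return 2 * C1 * exp(2 * x) + 3 * C2 * exp(3 * x)   # y'
def ypp(x): return 4 * C1 * exp(2 * x) + 9 * C2 * exp(3 * x)   # y''

# y'' - 5y' + 6y should be identically zero for every choice of C1, C2.
for x in [-1.0, 0.0, 0.5, 2.0]:
    residual = ypp(x) - 5 * yp(x) + 6 * y(x)
    assert abs(residual) < 1e-9
print("ok")
```

Because linear combinations of solutions are again solutions, the check passes for any C1 and C2.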
2. a. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Then the auxiliary equation is r^2 + 3r + 2 = (r + 1)(r + 2) = 0 and hence, two distinct solutions are y1 = e^{-x} and y2 = e^{-2x}.
b. Since W[y1, y2](x) = det[e^{-x} e^{-2x}; -e^{-x} -2e^{-2x}] = -e^{-3x} ≠ 0 for all x, the two solutions are linearly independent.
c. The general solution is the linear combination y(x) = C1 e^{-x} + C2 e^{-2x}, where C1 and C2 are arbitrary constants.

3. a. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Substituting these into the differential equation gives the auxiliary equation r^2 + 4r + 4 = 0. Factoring, we have that (r + 2)^2 = 0. Since the auxiliary equation has only one root, of multiplicity 2, two distinct solutions are y1 = e^{-2x} and y2 = xe^{-2x}.
b. Since W[y1, y2](x) = det[e^{-2x} xe^{-2x}; -2e^{-2x} e^{-2x} - 2xe^{-2x}] = e^{-4x} > 0 for all x, the two solutions are linearly independent.
c. The general solution is the linear combination y(x) = C1 e^{-2x} + C2 xe^{-2x}, where C1 and C2 are arbitrary constants.

4. a. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Then the auxiliary equation is r^2 - 4r - 5 = (r + 1)(r - 5) = 0 and hence, two distinct solutions are y1 = e^{-x} and y2 = e^{5x}.
b. Since W[y1, y2](x) = det[e^{-x} e^{5x}; -e^{-x} 5e^{5x}] = 6e^{4x} > 0 for all x, the two solutions are linearly independent.
c. The general solution is the linear combination y(x) = C1 e^{-x} + C2 e^{5x}, where C1 and C2 are arbitrary constants.
5. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Substituting these into the differential equation gives the auxiliary equation r^2 - 2r + 1 = 0. Factoring, we have that (r - 1)^2 = 0. Since the auxiliary equation has only one root, of multiplicity 2, two distinct and linearly independent solutions are y1 = e^x and y2 = xe^x. The general solution is given by y(x) = C1 e^x + C2 xe^x. The initial conditions now allow us to find the specific values of C1 and C2 that give the solution to the initial value problem. Specifically, since y(0) = 1, we have that 1 = y(0) = C1 e^0 + C2 (0)e^0, so C1 = 1. Further, since y'(x) = e^x + C2(e^x + xe^x) and y'(0) = 3, we have that 3 = 1 + C2, so C2 = 2. Then the solution to the initial value problem is y(x) = e^x + 2xe^x.
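The initial conditions in Exercise 5 amount to a 2 × 2 linear system for C1 and C2: y(0) = C1 = 1 and y'(0) = C1 + C2 = 3. A sketch of the same computation with NumPy:

```python
import numpy as np

# y(x) = C1 e^x + C2 x e^x, so the initial conditions give
#   y(0)  = C1       = 1
#   y'(0) = C1 + C2  = 3
A = np.array([[1.0, 0.0],
              [1.0, 1.0]])
b = np.array([1.0, 3.0])

C1, C2 = np.linalg.solve(A, b)
print(C1, C2)  # 1.0 2.0
```

For higher-order equations the same idea applies, with one row per initial condition.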
6. Let y = e^{rx}, so that y' = re^{rx} and y'' = r^2 e^{rx}. Then the auxiliary equation is r^2 - 3r + 2 = (r - 1)(r - 2) = 0 and hence, two distinct and linearly independent solutions are y1 = e^x and y2 = e^{2x}. The general solution is y(x) = C1 e^x + C2 e^{2x}. Using the initial conditions, we have that 0 = y(1) = C1 e + C2 e^2 and 1 = y'(1) = C1 e + C2 (2e^2). Solving the two equations gives C1 = -e^{-1} and C2 = e^{-2}. Then the solution to the initial value problem is y(x) = -e^{-1} e^x + e^{-2} e^{2x}.

7. a. The auxiliary equation for y'' - 4y' + 3y = 0 is r^2 - 4r + 3 = (r - 3)(r - 1) = 0, so the complementary solution is y_c(x) = C1 e^{3x} + C2 e^x.
b. Since y_p'(x) = 2ax + b and y_p''(x) = 2a, we have that
2a - 4(2ax + b) + 3(ax^2 + bx + c) = 3x^2 + x + 2, that is, 3ax^2 + (3b - 8a)x + (2a + 3c - 4b) = 3x^2 + x + 2.
Equating coefficients of like terms, we have that a = 1, b = 3, and c = 4.
c. If f(x) = y_c(x) + y_p(x), then
f'(x) = 3C1 e^{3x} + C2 e^x + 2x + 3 and f''(x) = 9C1 e^{3x} + C2 e^x + 2.
We then have that f''(x) - 4f'(x) + 3f(x) = 3x^2 + x + 2, so y_c(x) + y_p(x) is a solution to the differential equation.
8. a. The auxiliary equation for y'' + 4y' + 3y = 0 is r^2 + 4r + 3 = (r + 3)(r + 1) = 0, so the complementary solution is y_c(x) = C1 e^{-x} + C2 e^{-3x}.
b. Since y_p'(x) = -2A sin 2x + 2B cos 2x and y_p''(x) = -4A cos 2x - 4B sin 2x, after substitution in the differential equation we have that (-A + 8B) cos 2x + (-B - 8A) sin 2x = 3 sin 2x and hence A = -24/65 and B = -3/65.
c. The general solution is y(x) = y_c(x) + y_p(x) = C1 e^{-x} + C2 e^{-3x} - (24/65) cos 2x - (3/65) sin 2x.

9. Since the damping coefficient is c = 0 and there is no external force acting on the system, so that f(x) = 0, the differential equation describing the problem has the form my'' + 4y = 0, and the roots of the auxiliary equation are the complex values r = ±8i. Hence, the general solution is
y(x) = e^0 (C1 cos(8x) + C2 sin(8x)) = C1 cos(8x) + C2 sin(8x).
Applying the initial conditions we obtain C1 = 0.25 and C2 = 0. The equation of motion of the spring is y(x) = (1/4) cos(8x).

10. Since the mass is m = w/g = 8/32 = 1/4, the spring constant is k = 4, the damping coefficient is c = 2, and there is no external force, the differential equation that models the motion is (1/4)y'' + 2y' + 4y = 0. The characteristic equation is r^2 + 8r + 16 = (r + 4)^2 = 0, so the general solution is y(x) = C1 e^{-4x} + C2 xe^{-4x}. The first initial condition y(0) = 1 gives C1 = 1. Since y'(x) = -4C1 e^{-4x} + C2[e^{-4x} - 4xe^{-4x}], the second initial condition gives 2 = y'(0) = -4C1 + C2[1 + 0] and hence, C2 = 6. Therefore, the solution is given by y(x) = e^{-4x} + 6xe^{-4x}.
Review Exercises Chapter 3
1. Row reducing the matrix A whose column vectors are the given vectors gives
[1 0 0 2; 2 1 0 3; 0 1 1 4; 2 3 4 k] → [1 0 0 2; 0 1 0 7; 0 0 1 11; 0 0 0 k - 69].
Hence, det(A) = k - 69. Since the four vectors are linearly independent if and only if the determinant of A is nonzero, the vectors are a basis for R^4 if and only if k ≠ 69.

2. Since there are three vectors in R^3, the vectors will form a basis if and only if they are linearly independent. Since the matrix A whose column vectors are the three given vectors is upper triangular, det(A) = acf. So the vectors are a basis if and only if acf ≠ 0, that is, a ≠ 0, c ≠ 0, and f ≠ 0.

3. a. Since the sum of two 2 × 2 matrices and a scalar times a 2 × 2 matrix are 2 × 2 matrices, S is closed under vector addition and scalar multiplication. Hence, S is a subspace of M_{2×2}.
b. Yes; let a = 3, b = 2, c = 0.
c. Since every matrix A in S can be written in the form A = a[1 1; 0 1] + b[1 0; 1 0] + c[0 0; 1 1], and the matrices [1 1; 0 1], [1 0; 1 0], and [0 0; 1 1] are linearly independent, a basis for S is B = {[1 1; 0 1], [1 0; 1 0], [0 0; 1 1]}. d. The matrix [0 1; 2 1] is not in S.

4. a. Let p(x) = a + bx + cx^2 with a + b + c = 0 and let q(x) = d + ex + fx^2 with d + e + f = 0. Then p(x) + kq(x) = (a + kd) + (b + ke)x + (c + kf)x^2 and
(a + kd) + (b + ke) + (c + kf) = (a + b + c) + k(d + e + f) = 0 + 0 = 0,
so p(x) + kq(x) is in S for all p(x) and q(x) in S and all scalars k. Therefore S is a subspace.
b. If p(x) = a + bx + cx^2 is in S, then a + b + c = 0, so p(x) = a + bx + (-a - b)x^2 = a(1 - x^2) + b(x - x^2). Therefore, a basis for S is {1 - x^2, x - x^2} and hence, dim(S) = 2.
5. a. Consider the equation
c1 v1 + c2(v1 + v2) + c3(v1 + v2 + v3) = (c1 + c2 + c3)v1 + (c2 + c3)v2 + c3 v3 = 0.
Since S is linearly independent, c1 + c2 + c3 = 0, c2 + c3 = 0, and c3 = 0. The only solution to this system is the trivial solution, so the set T is linearly independent. Since the set T consists of three linearly independent vectors in the three-dimensional vector space V, the set T is a basis.
b. Consider the equation
c1(v2 + v3) + c2(3v1 + 2v2 + v3) + c3(v1 - v2 + 2v3) = (3c2 + c3)v1 + (c1 + 2c2 - c3)v2 + (c1 + c2 + 2c3)v3 = 0.
Since S is linearly independent, the set W is linearly independent if and only if the linear system
3c2 + c3 = 0, c1 + 2c2 - c3 = 0, c1 + c2 + 2c3 = 0
has only the trivial solution. Since [0 3 1; 1 2 -1; 1 1 2] reduces to [1 2 -1; 0 3 1; 0 0 0], the linear system has infinitely many solutions and the set W is linearly dependent. Therefore, W is not a basis.
6. a. Since dim(R^4) = 4, every basis for R^4 has four vectors. Therefore, S is not a basis.
b. v3 = 2v1 + v2 c. Since v1 and v2 are not scalar multiples of each other they are linearly independent, so span(S) = span{v1, v2} and hence the dimension of the span of S is 2. d. Since
[1 2 1 0 0 0; 3 1 0 1 0 0; 1 1 0 0 1 0; 1 1 0 0 0 1] reduces to [1 2 1 0 0 0; 0 5 3 1 0 0; 0 0 2 1 5 0; 0 0 0 0 1 1]
and the pivots are in columns 1, 2, 3, and 5, the basis consists of the corresponding column vectors of the original matrix. So the basis is {(1, 3, 1, 1), (2, 1, 1, 1), (1, 0, 0, 0), (0, 0, 1, 0)}.
e. Since [1 1 1 1 | 1 2 1 0; 2 0 0 2 | 3 1 0 0; 1 1 1 1 | 1 1 0 1; 1 0 1 1 | 1 1 0 0] reduces to [1 0 0 0 | 0 1/2 1/2 1/2; 0 1 0 0 | 2 2 0 1; 0 0 1 0 | 1/2 1/2 1 1; 0 0 0 1 | 3/2 0 1/2 1/2], then
[v1]_T = (0, 2, 1/2, 3/2), [v2]_T = (1/2, 2, 1/2, 0), [v3]_T = (1/2, 0, 1, 1/2), [v4]_T = (1/2, 1, 1, 1/2),
so
[I]_B^T = [0 1/2 1/2 1/2; 2 2 0 1; 1/2 1/2 1 1; 3/2 0 1/2 1/2].
g. [I]_T^B = ([I]_B^T)^{-1} = (1/4)[1 0 1 3; 5 0 3 1; 5 1 11 9; 8 4 8 8]
h. [v]_T = [I]_B^T (1, 3, 2, 5) = (2, 13, 8, 2) i. [v]_B = (6, 5, 6, 4)
7. Since c1 ≠ 0, the vector v1 can be written as
v1 = (-c2/c1)v2 + (-c3/c1)v3 + ... + (-cn/c1)vn.
Since v1 is a linear combination of the other vectors it does not contribute to the span of the set. Hence, V = span{v2, v3, ..., vn}.

8. a. The standard basis for M_{2×2} is {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}.
b. Since [a b; c a] + k[d e; f d] = [a + kd b + ke; c + kf a + kd] and the terms on the diagonal are equal, S is a subspace. Similarly, since [x y; y z] + k[p q; q r] = [x + kp y + kq; y + kq z + kr] and the terms off the diagonal are equal, T is a subspace. c. Since each matrix in S can be written in the form
[a b; c a] = a[1 0; 0 1] + b[0 1; 0 0] + c[0 0; 1 0],
a basis for S is {[1 0; 0 1], [0 1; 0 0], [0 0; 1 0]} and dim(S) = 3. Similarly, since
[x y; y z] = x[1 0; 0 0] + y[0 1; 1 0] + z[0 0; 0 1],
a basis for T is {[0 1; 1 0], [1 0; 0 0], [0 0; 0 1]} and dim(T) = 3. d. Since S ∩ T = {[a b; b a] | a, b ∈ R}, a basis for S ∩ T is {[1 0; 0 1], [0 1; 1 0]} and dim(S ∩ T) = 2.
9. a. The set B = {u, v} is a basis for R^2 since it is linearly independent. To see this consider the equation au + bv = 0. Now take the dot product of both sides first with u and then with v. That is,
u · (au + bv) = 0 ⟹ a(u · u) + b(u · v) = 0 ⟹ a(u1^2 + u2^2) + b(u · v) = 0.
Since u1^2 + u2^2 = 1 and u · v = 0, we have that a = 0. Similarly, the equation v · (au + bv) = 0 gives that b = 0. Hence, B is a set of two linearly independent vectors in R^2 and therefore is a basis.
b. If [w]_B = (α, β), then
αu + βv = (x, y), that is, (αu1 + βv1, αu2 + βv2) = (x, y).
Solving the linear system for α and β by Cramer's Rule, we have that
α = det[x v1; y v2] / det[u1 v1; u2 v2] = (xv2 - yv1)/(u1v2 - v1u2) and β = det[u1 x; u2 y] / det[u1 v1; u2 v2] = (yu1 - xu2)/(u1v2 - v1u2).
Notice that the determinant u1v2 - v1u2 is nonzero since u and v are linearly independent.

10. The equation c1 + c2(x + c) + c3(x + c)^2 = a0 + a1x + a2x^2 holds if and only if
c1 + cc2 + c^2 c3 = a0, c2 + 2cc3 = a1, c3 = a2,
so c1 = a0 - ca1 + c^2 a2, c2 = a1 - 2ca2, and c3 = a2.
So [a0 + a1x + a2x^2]_B = (a0 - ca1 + c^2 a2, a1 - 2ca2, a2).
Chapter Test Chapter 3
1. F. Since (c + d) ⊙ x = x + (c + d) and (c ⊙ x) ⊕ (d ⊙ x) = 2x + (c + d), which do not always agree, V is not a vector space. Also x ⊕ y ≠ y ⊕ x.
2. T 3. F. Only lines that pass through the origin are subspaces.
4. F. Since dim(M_{2×2}) = 4, every basis contains four matrices.
5. T 6. F. For example, the vector (1, 1) is in S but (-1)(1, 1) = (-1, -1) is not in S.
7. F. For example, the matrices [1 0; 0 0] and [0 0; 0 1] are in S but the sum [1 0; 0 1] has determinant 1, so it is not in S.
8. F. It is not possible to write x^3 as a linear combination of the given polynomials.
9. T 10. T 11. T 12. T 13. T 14. T 15. T
16. F. If a set spans a vector space, then adding more vectors can change whether the set is linearly independent or dependent but does not change the span.
17. F. A set with fewer vectors than the dimension can be linearly independent but cannot span the vector space. If the number of vectors exceeds the dimension, then the set is linearly dependent.
18. F. The intersection of two subspaces is always a subspace, but the union may not be a subspace.
19. T 20. T 21. T 22. T 23. T 24. T
25. T 26. F. [(1, 1)]_{B1} = (1, 1/2) 27. T
28. T 29. T 30. F. [x^3 + 2x^2 - x]_{B1} = (0, -1, 2, 1)
31. T 32. F. [x^3 + 2x^2 - x]_{B2} = (1, 2, 0, -1)
33. T
34. F. [(1 + x)^2 - 3(x^2 + x - 1) + x^3]_{B2} = [4 - x - 2x^2 + x^3]_{B2} = (1, -2, 4, -1)
35. T
4
Linear Transformations
Exercise Set 4.1
A linear transformation is a special kind of function (or mapping) defined from one vector space to another. To verify that T : V → W is a linear transformation from V to W, we must show that T satisfies the two properties
T(u + v) = T(u) + T(v) and T(cu) = cT(u),
or equivalently just the one property
T(u + cv) = T(u) + cT(v).
The addition and scalar multiplication in T(u + cv) are the operations defined on V, and in T(u) + cT(v) the operations defined on W. For example, T : R^2 → R^2 defined by
T((x, y)) = (x + 2y, x - y)
is a linear transformation. To see this, compute the combination u + cv of two vectors and then apply T. Notice that the definition of T requires the input of only one vector, so to apply T first simplify the expression. Then we need to consider
T((x1, y1) + c(x2, y2)) = T((x1 + cx2, y1 + cy2)).
Next apply the definition of the mapping, resulting in a vector with two components. To find the first component, add the first component of the input vector to twice the second component; for the second component of the result, subtract the components of the input vector. So
T((x1, y1) + c(x2, y2)) = T((x1 + cx2, y1 + cy2)) = ((x1 + cx2) + 2(y1 + cy2), (x1 + cx2) - (y1 + cy2)).
The next step is to rewrite the output vector in the correct form. This gives
((x1 + cx2) + 2(y1 + cy2), (x1 + cx2) - (y1 + cy2)) = ((x1 + 2y1) + c(x2 + 2y2), (x1 - y1) + c(x2 - y2)) = (x1 + 2y1, x1 - y1) + c(x2 + 2y2, x2 - y2) = T((x1, y1)) + cT((x2, y2)),
and hence T is a linear transformation. On the other hand, a mapping defined by
T((x, y)) = (x + 1, y)
is not a linear transformation since, for example,
T((x1, y1) + (x2, y2)) = (x1 + x2 + 1, y1 + y2), while T((x1, y1)) + T((x2, y2)) = (x1 + 1, y1) + (x2 + 1, y2) = (x1 + x2 + 2, y1 + y2),
and these do not agree for all pairs of vectors. Other useful observations made in Section 4.1 are:
For every linear transformation T(0) = 0.
4.1 Linear Transformations 89
If A is an m × n matrix, then T(v) = Av is a linear transformation from R^n to R^m.
T(c1 v1 + c2 v2 + ... + cn vn) = c1 T(v1) + c2 T(v2) + ... + cn T(vn)
The third property can be used to find the image of a vector when the action of a linear transformation is known only on a specific set of vectors, for example on the vectors of a basis. For example, suppose that T : R^3 → R^3 is a linear transformation and
T((1, 1, 1)) = (1, 2, 0), T((1, 0, 1)) = (1, 1, 1), and T((0, 1, 1)) = (2, 3, 1).
Then the image of an arbitrary input vector can be found since {(1, 1, 1), (1, 0, 1), (0, 1, 1)} is a basis for R^3.
For example, let's find the image of the vector (1, 2, 0). The first step is to write the input vector in terms of the basis vectors, so
(1, 2, 0) = (1, 1, 1) + 2(1, 0, 1) - (0, 1, 1).
Then use the linearity properties of T to obtain
T((1, 2, 0)) = T((1, 1, 1) + 2(1, 0, 1) - (0, 1, 1)) = T((1, 1, 1)) + 2T((1, 0, 1)) - T((0, 1, 1)) = (1, 2, 0) + 2(1, 1, 1) - (2, 3, 1) = (1, 3, 3).
Solutions to Exercises
1. Let u = (u1, u2) and v = (v1, v2) be vectors in R^2 and c a scalar. Since
T(u + cv) = T((u1 + cv1, u2 + cv2)) = (u2 + cv2, u1 + cv1) = (u2, u1) + c(v2, v1) = T(u) + cT(v),
T is a linear transformation.
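Linearity of the coordinate-swap map in Exercise 1 can also be spot-checked numerically: sample random inputs and verify the identity T(u + cv) = T(u) + cT(v). A failing sample would prove the map non-linear; passing samples only support the algebraic proof above. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def T(w):
    # The map of Exercise 1: swap the two coordinates.
    return np.array([w[1], w[0]])

for _ in range(100):
    u, v = rng.normal(size=2), rng.normal(size=2)
    c = rng.normal()
    assert np.allclose(T(u + c * v), T(u) + c * T(v))
print("linearity holds on all sampled inputs")
```

The same harness, with T replaced by the map of Exercise 2, quickly turns up counterexamples.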
2. Let u = (u1, u2) and v = (v1, v2) be vectors in R^2 and c a scalar. Then
T(u + cv) = T((u1 + cv1, u2 + cv2)) = ((u1 + cv1) + (u2 + cv2), (u1 + cv1) - (u2 + cv2) + 2), while
T(u) + cT(v) = ((u1 + cv1) + (u2 + cv2), (u1 + cv1) - (u2 + cv2) + 2 + 2c).
For example, if u = (1, 0), v = (0, 1), and c = 1, then T(u + v) = (2, 2) and T(u) + T(v) = (2, 4). Hence, T is not a linear transformation.
3. Let u = (u1, u2) and v = (v1, v2) be vectors in R^2. Since
T(u + v) = T((u1 + v1, u2 + v2)) = (u1 + v1, u2^2 + 2u2v2 + v2^2) while T(u) + T(v) = (u1 + v1, u2^2 + v2^2),
which do not agree for all vectors, T is not a linear transformation.

4. Since
T(u + cv) = T((u1, u2) + c(v1, v2)) = T((u1 + cv1, u2 + cv2)) = (2(u1 + cv1) - (u2 + cv2), (u1 + cv1) + 3(u2 + cv2))
and
T(u) + cT(v) = (2u1 - u2, u1 + 3u2) + c(2v1 - v2, v1 + 3v2) = (2(u1 + cv1) - (u2 + cv2), (u1 + cv1) + 3(u2 + cv2)),
T is a linear transformation.
5. Since T((u1, u2) + c(v1, v2)) = (u1 + cv1, 0) = T((u1, u2)) + cT((v1, v2)) for all pairs of vectors and scalars c, T is a linear transformation.

6. Since
T(u + cv) = T((u1 + cv1, u2 + cv2)) = (((u1 + cv1) + (u2 + cv2))/2, ((u1 + cv1) + (u2 + cv2))/2) = T(u) + cT(v),
T is a linear transformation.
7. Since T(x + y) = T(x) + T(y) if and only if at least one of x or y is zero, T is not a linear transformation.
8. Since T describes a straight line passing through the origin, T defines a linear transformation.
9. Since T(c(x, y)) = c^2(x^2 + y^2) while cT((x, y)) = c(x^2 + y^2), and these agree if and only if c = 1 or (x, y) = (0, 0), T is not a linear transformation.
10. Since T is the identity mapping on the first two coordinates, T is a linear transformation.
11. Since T(0) ≠ 0, T is not a linear transformation.
12. Since cos 0 = 1, then T(0) ≠ 0 and hence, T is not a linear transformation.
13. Since
T(p(x) + q(x)) = 2(p'(x) + q'(x)) - 3(p''(x) + q''(x)) = [2p'(x) - 3p''(x)] + [2q'(x) - 3q''(x)] = T(p(x)) + T(q(x))
[1 1; 0 0] + 3[0 0; 2 0] = [0 1; 6 1]

25. Since {(1, 1), (1, 0)} is a basis for R^2 and T is a linear operator, it is possible to find T(v) for every vector in R^2. In particular, T((3, 7)) = T(7(1, 1) - 4(1, 0)) = (22, 11).
26. a. T(e1) = (1, 2, 1), T(e2) = (2, 1, 3), T(e3) = (3, 3, 2)
b. T(3e1 - 4e2 + 6e3) = 3T(e1) - 4T(e2) + 6T(e3) = 3(1, 2, 1) - 4(2, 1, 3) + 6(3, 3, 2) = (13, 20, 3)
27. a. Since the polynomial 2x^2 - 3x + 2 cannot be written as a linear combination of x^2, 3x, and x^2 + 3x, the value of T(2x^2 - 3x + 2) cannot be determined from the given information. That is, the equation c1 x^2 + c2(3x) + c3(x^2 + 3x) = 2x^2 - 3x + 2 is equivalent to (c1 + c3)x^2 + (3c2 + 3c3)x = 2x^2 - 3x + 2, which is not possible. b. T(3x^2 - 4x) = T(3x^2 + (-4/3)(3x)) = 3T(x^2) - (4/3)T(3x) = (4/3)x^2 + 6x - 13/3.

28. a. Since (2, -5, 0) = 7(1, 0, 0) - 5(1, 1, 0), then T((2, -5, 0)) = 7(1, 2, 3) - 5(2, 2, -1) = (-3, 4, 26).
b. Since [1 1 1; 0 1 3; 0 0 0] and [1 2 1; 0 7 2], all vectors that are mapped to the zero vector have the form ((3/7)z, (2/7)z, z), z ∈ R.
33. a. Since [1 1 2 | 0; 2 3 1 | 0; 1 2 2 | 0] reduces to [1 1 2 | 0; 0 5 5 | 0; 0 0 1 | 0], the zero vector is the only vector in R^3 such that T((x, y, z)) = 0.
b. Since [1 1 2 | 7; 2 3 1 | 6; 1 2 2 | 9] reduces to [1 0 0 | 1; 0 1 0 | 2; 0 0 1 | 2], then T((1, 2, 2)) = (7, 6, 9).

34. a. Since T(ax^2 + bx + c) = (2ax + b) - c, then T(p(x)) = 0 if and only if 2a = 0 and b - c = 0. Hence, T(p(x)) = 0 if and only if p(x) = bx + b = b(x + 1) for any real number b. b. Let p(x) = 3x^2 - 3x; then T(p(x)) = 6x - 3. As a second choice let q(x) = 3x^2 - 5x - 2, so q(0) = -2 and T(q(x)) = q'(x) - q(0) = 6x - 5 + 2 = 6x - 3. c. The mapping T is a linear operator.
35. Since T(cv + w) = (cT1(v) + T1(w), cT2(v) + T2(w)) = c(T1(v), T2(v)) + (T1(w), T2(w)) = cT(v) + T(w), T is a linear transformation.
4.2 The Null Space and Range 93
36. Since tr(A + B) = tr(A) + tr(B) and tr(cA) = ctr(A), then T(A + cB) = T(A) + cT(B) and hence, T
is a linear transformation.
37. Since T(kA + C) = (kA + C)B - B(kA + C) = kAB - kBA + CB - BC = kT(A) + T(C), then T is a linear operator.
38. Since T(x + y) = m(x + y) + b and T(x) + T(y) = m(x + y) + 2b, then T(x + y) = T(x) + T(y) if and only if b = 0. If b = 0, then we also have that T(cx) = cmx = cT(x). Hence, T is a linear operator if and only if b = 0.
39. a. Using the properties of the Riemann integral, we have that
T(cf + g) = ∫_0^1 (cf(x) + g(x)) dx = c ∫_0^1 f(x) dx + ∫_0^1 g(x) dx = cT(f) + T(g),
so T is a linear transformation. b. T(2x^2 - x + 3) = 19/6
40. Since T is a linear operator, T(u) = w, and T(v) = 0, then T(u + v) = T(u) + T(v) = w + 0 = w.
41. Since {v, w} is linearly independent, v ≠ 0 and w ≠ 0. Hence, if either T(v) = 0 or T(w) = 0, then the conclusion holds. Now assume that T(v) and T(w) are nonzero and linearly dependent. Then there exist scalars a and b, not both 0, such that aT(v) + bT(w) = 0. Since v and w are linearly independent, av + bw ≠ 0. Since T is linear, T(av + bw) = aT(v) + bT(w) = 0, and we have shown that T(u) = 0 has a nontrivial solution.
42. Since {v1, ..., vn} is linearly dependent there are scalars c1, c2, ..., cn, not all zero, such that c1 v1 + c2 v2 + ... + cn vn = 0. Since T is a linear operator,
T(c1 v1 + c2 v2 + ... + cn vn) = c1 T(v1) + c2 T(v2) + ... + cn T(vn) = T(0) = 0.
Therefore, {T(v1), ..., T(vn)} is linearly dependent.
43. Let T(v) = 0 for all v in R^3.
44. Let v be a vector in V. Since {v1, ..., vn} is a basis there are scalars c1, c2, ..., cn such that v = c1 v1 + ... + cn vn. Since T1 and T2 are linear operators,
T1(c1 v1 + ... + cn vn) = c1 T1(v1) + ... + cn T1(vn) and T2(c1 v1 + ... + cn vn) = c1 T2(v1) + ... + cn T2(vn).
Since T1(vi) = T2(vi) for each i = 1, 2, ..., n, then T1(v) = T2(v).
45. We use the definitions (S + T)(u) = S(u) + T(u) and (cT)(u) = c(T(u)) for addition and scalar multiplication of linear transformations. Then L(U, V) with these operations satisfies all ten of the vector space axioms. For example, L(U, V) is closed under addition since
(S + T)(cu + v) = S(cu + v) + T(cu + v) = cS(u) + S(v) + cT(u) + T(v) = cS(u) + cT(u) + S(v) + T(v) = c(S + T)(u) + (S + T)(v).
Exercise Set 4.2
If T : V → W is a linear transformation, then the null space is the subspace of all vectors in V that are mapped to the zero vector in W, and the range of T is the subspace of W consisting of all images of vectors from V. Any transformation defined by a matrix product is a linear transformation. For example, T : R^3 → R^3 defined by
T((x1, x2, x3)) = A(x1, x2, x3) = [1 3 0; 2 0 3; -2 0 -3](x1, x2, x3)
is a linear transformation. The null space of T, denoted by N(T), is the null space of the matrix, N(A) = {x ∈ R^3 | Ax = 0}. Since
T((x1, x2, x3)) = x1(1, 2, -2) + x2(3, 0, 0) + x3(0, 3, -3),
the range of T, denoted by R(T), is the column space of A, col(A). Since [1 3 0; 2 0 3; -2 0 -3] reduces to [1 3 0; 0 -6 3; 0 0 0], the homogeneous equation Ax = 0 has infinitely many solutions given by x1 = -(3/2)x3, x2 = (1/2)x3, with x3 a free variable. So the null space is {t(-3/2, 1/2, 1) | t ∈ R}, which is a line that passes through the origin in three-space. Also, since the pivots in the reduced matrix are in columns one and two, a basis for the range is {(1, 2, -2), (3, 0, 0)} and hence, the range is a plane in three-space. Notice that in this example, 3 = dim(R^3) = dim(R(T)) + dim(N(T)). This is a fundamental theorem: if T : V → W is a linear transformation defined on finite dimensional vector spaces, then
dim(V) = dim(R(T)) + dim(N(T)).
If the mapping is given as a matrix product T(v) = Av, where A is an m × n matrix, then this result is written as
n = rank(A) + nullity(A).
A number of useful statements are added to the list of equivalences concerning n × n linear systems:
A is invertible ⇔ Ax = b has a unique solution for every b ⇔ Ax = 0 has only the trivial solution ⇔ A is row equivalent to I ⇔ det(A) ≠ 0 ⇔ the column vectors of A are linearly independent ⇔ the column vectors of A span R^n ⇔ the column vectors of A are a basis for R^n ⇔ rank(A) = n ⇔ R(A) = col(A) = R^n ⇔ N(A) = {0} ⇔ row(A) = R^n ⇔ the number of pivot columns in the row echelon form of A is n.
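The identity n = rank(A) + nullity(A) can be checked directly for the matrix of the example above (its signs are reconstructed in this printing; the second and third rows are negatives of each other, so the rank is 2). A sketch with NumPy:

```python
import numpy as np

# Matrix of the example, with signs as reconstructed above.
A = np.array([[ 1.0, 3.0,  0.0],
              [ 2.0, 0.0,  3.0],
              [-2.0, 0.0, -3.0]])

n = A.shape[1]                      # number of columns
rank = np.linalg.matrix_rank(A)
nullity = n - rank                  # rank-nullity theorem

print(rank, nullity)  # 2 1

# The reconstructed null-space direction (-3/2, 1/2, 1) is killed by A.
x = np.array([-1.5, 0.5, 1.0])
assert np.allclose(A @ x, 0)
```

`matrix_rank` counts singular values above a tolerance, which is the numerically robust analogue of counting pivots.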
Solutions to Exercises
1. Since T(v) = (0, 0), v is in N(T). 2. Since T(v) = (0, 0), v is in N(T).
3. Since T(v) = (5, 10), v is not in N(T). 4. Since T(v) = (0, 0), v is in N(T).
5. Since p'(x) = 2x - 3 and p''(x) = 2, then T(p(x)) = 2x, so p(x) is not in N(T).
6. Since p'(x) = 5 and p''(x) = 0,
9. Since [1 0 2 | 1; 0 1 1 | 1; 0 0 0 | 0], there are infinitely many vectors that are mapped to (1, 3, 0). For example, T((1, 2, 1)) = (1, 3, 0) and hence, (1, 3, 0) is in R(T).
10. Since [1 0 2 | 2; 2 1 3 | 3; 1 1 3 | 4] reduces to [1 0 2 | 0; 0 1 1 | 0; 0 0 0 | 1], the linear system is inconsistent, so the vector (2, 3, 4) is not in R(T).
11. Since [1 0 2 | 1; 2 1 3 | 1; 1 1 3 | 2] reduces to [1 0 2 | 0; 0 1 1 | 0; 0 0 0 | 1], the linear system is inconsistent, so the vector (1, 1, 2) is not in R(T).
12. Since [1 0 2 | 2; 2 1 3 | 5; 1 1 3 | 1] reduces to [1 0 2 | 2; 0 1 1 | 1; 0 0 0 | 0], there are infinitely many vectors that are mapped to (2, 5, 1) and hence, the vector (2, 5, 1) is in R(T).
13. The matrix A is in R(T). 14. The matrix A is not in R(T).
15. The matrix A is not in R(T). 16. The matrix A is in R(T).
17. A vector v = (x, y) is in the null space if and only if 3x + y = 0 and y = 0. That is, N(T) = {(0, 0)}. Hence, the null space has dimension 0, so it does not have a basis.
18. A vector is in the null space if and only if x + y = 0, that is, x = -y. Therefore, N(T) = {(a, -a) | a ∈ R} and hence, a basis is {(1, -1)}.
19. Since (x + 2z, 2x + y + 3z, x - y + 3z) = (0, 0, 0) if and only if x = -2z and y = z, every vector in the null space has the form (-2z, z, z). Hence, a basis for the null space is {(-2, 1, 1)}.
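A numerical null-space basis can be extracted from the singular value decomposition: the right-singular vectors whose singular values are (numerically) zero span N(A). A sketch using the matrix of Exercise 19 as reconstructed above:

```python
import numpy as np

# T(x, y, z) = (x + 2z, 2x + y + 3z, x - y + 3z), as reconstructed above.
A = np.array([[1.0,  0.0, 2.0],
              [2.0,  1.0, 3.0],
              [1.0, -1.0, 3.0]])

_, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10]          # rows of V^T paired with zero singular values

# Here N(A) is one-dimensional, spanned by (-2, 1, 1) up to scale.
assert null_basis.shape[0] == 1
assert np.allclose(A @ null_basis[0], 0)
print(null_basis[0])
```

The SVD route returns an orthonormal basis, which is often preferable to the hand-computed basis for later numerical work.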
20. Since [2 2 2; 3 5 1; 0 2 1] reduces to [1 0 1/2; 0 1 -1/2; 0 0 0], then N(T) = {t(-1/2, 1/2, 1) | t ∈ R} and a basis for the null space is {(-1/2, 1/2, 1)}.
21. Since N(T) = {(2s + t, s, t) | s, t ∈ R}, a basis for the null space is {(2, 1, 0), (1, 0, 1)}.
22. A basis for the null space is {(5/6, 1, 0)}.
23. Since T(p(x)) = 0 if and only if p(0) = 0, a polynomial is in the null space if and only if it has the form ax^2 + bx. A basis for the null space is {x, x^2}.
24. If p(x) = ax^2 + bx + c, then p'(x) = 2ax + b and p''(x) = 2a.
Since A reduces to [1 0 1 0 1; 0 1 2 0 1; 0 0 0 1 2] and the pivots are in columns one, two, and four, a basis for the column space of A, and hence for R(T), is {(1, 3, 1), (2, 1, 1), (1, 0, 1)}.
27. Since the range of T is the xy-plane in R^3, a basis for the range is {(1, 0, 0), (0, 1, 0)}.
28. Since (x - y + 3z, x + y + z, x + 3y - 5z) = x(1, 1, 1) + y(-1, 1, 3) + z(3, 1, -5) and the three vectors are linearly independent, a basis for R(T) is {(1, 1, 1), (-1, 1, 3), (3, 1, -5)}.
29. Since R(T) = P_2, a basis for the range is {1, x, x^2}.
30. Since R(T) = {p(x) | p(x) = ax^2 + bx + a = a(x^2 + 1) + bx}, a basis for R(T) is {x, x^2 + 1}.
31. a. The vector w is in the range of T if the linear system c1(2, 1, -1) + c2(0, 1, 1) + c3(2, 2, 0) = (6, 5, 0) has a solution. But [2 0 2 | 6; 1 1 2 | 5; -1 1 0 | 0] reduces to [2 0 2 | 6; 0 1 1 | 2; 0 0 0 | 1], so the linear system is inconsistent. Hence, (6, 5, 0) is not in R(T).
b. Since det[2 0 2; 1 1 2; -1 1 0] = 0, the column vectors are linearly dependent. To trim the vectors to a basis for the range, we row reduce: [2 0 2; 1 1 2; -1 1 0] reduces to [2 0 2; 0 1 1; 0 0 0]. Since the pivots are in columns one and two, a basis for the range is {(2, 1, -1), (0, 1, 1)}. c. Since dim(N(T)) + dim(R(T)) = dim(R^3) = 3 and dim(R(T)) = 2, then dim(N(T)) = 1.
32. a. The vector (2, 1, 2) is in R(T). b. {(1, 2, 1), (0, 5, 0), (1, 1, 2)} c. Since dim(N(T)) + dim(R(T)) = 3 and dim(R(T)) = 3, then dim(N(T)) = 0.
33. a. The polynomial 2x^2 - 4x + 6 is not in R(T). b. Since the null space of T is the set of all constant functions, dim(N(T)) = 1 and hence, dim(R(T)) = 2. A basis for the range is {T(x), T(x^2)} = {2x + 1, x^2 + x}.
34. a. The polynomial x^2 - x - 2 is not in R(T). b. Since the null space of T is the set of all polynomials of the form ax^2, dim(N(T)) = 1 and hence, dim(R(T)) = 2. A basis for the range is {T(1), T(x)} = {x^2, x - 1}.
35. Any linear transformation that maps three-space onto the entire xy-plane will work. For example, T((x, y, z)) = (x, y).
36. Define T : R^2 → R^2 by T((x, y)) = (y, 0). Then N(T) = {(x, 0) | x ∈ R} = R(T).
37. a. The range R(T) is the subspace of P_n consisting of all polynomials of degree n - 1 or less. b. dim(R(T)) = n c. Since dim(R(T)) + dim(N(T)) = dim(P_n) = n + 1, then dim(N(T)) = 1.
38. A polynomial is in the null space provided it has degree k - 1 or less. Hence dim(N(T)) = k.
39. a. dim(R(T)) = 2 b. dim(N(T)) = 1
40. Since dim(V ) = dim(N(T)) + dim(R(T)) = 2 dim(N(T)), then the dimension of V is an even number.
41. If $B=\begin{bmatrix}a&b\\c&d\end{bmatrix}$, then $T(B)=AB-BA=\begin{bmatrix}0&2b\\-2c&0\end{bmatrix}$, so that
$N(T)=\left\{\begin{bmatrix}a&0\\0&d\end{bmatrix}\;\middle|\;a,d\in\mathbb{R}\right\}=\left\{a\begin{bmatrix}1&0\\0&0\end{bmatrix}+d\begin{bmatrix}0&0\\0&1\end{bmatrix}\;\middle|\;a,d\in\mathbb{R}\right\}$. Hence a basis for $N(T)$ is $\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix},\begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}$.
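The commutator computation in Exercise 41 is easy to experiment with. In the sketch below the choice $A=\operatorname{diag}(1,-1)$ is an assumption (the exercise's $A$ is not reproduced in this excerpt), picked because it yields exactly $T(B)=\begin{bmatrix}0&2b\\-2c&0\end{bmatrix}$, so every diagonal matrix lies in the null space.

```python
import numpy as np

# Assumed A = diag(1, -1); with this choice T(B) = AB - BA = [[0, 2b], [-2c, 0]].
A = np.diag([1.0, -1.0])

def T(B):
    return A @ B - B @ A

B = np.array([[5.0, 3.0], [4.0, 7.0]])    # arbitrary test matrix (b = 3, c = 4)
print(T(B))                               # [[0, 6], [-8, 0]]: diagonal is killed

D = np.diag([2.0, -9.0])                  # any diagonal matrix is in N(T)
print(np.allclose(T(D), 0))               # True
```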
42. If $B$ is an $n\times n$ matrix, then $T(B^t)=(B^t)^t=B$ and hence, $R(T)=M_{n\times n}$.
43. a. Notice that $(A+A^t)^t=A^t+A=A+A^t$, so that the range of $T$ is a subset of the symmetric matrices. Also if $B$ is any symmetric matrix, then $T\!\left(\tfrac12B\right)=\tfrac12B+\tfrac12B^t=B$. Therefore, $R(T)$ is the set of all symmetric matrices. b. Since a matrix $A$ is in $N(T)$ if and only if $T(A)=A+A^t=0$, which is if and only if $A=-A^t$, then the null space of $T$ is the set of skew-symmetric matrices.
44. a. Notice that $(A-A^t)^t=A^t-A=-(A-A^t)$, so that the range of $T$ is a subset of the skew-symmetric matrices. Also if $B$ is any skew-symmetric matrix, then $T\!\left(\tfrac12B\right)=\tfrac12B-\tfrac12B^t=B$. Therefore, $R(T)$ is the set of all skew-symmetric matrices. b. Since a matrix $A$ is in $N(T)$ if and only if $T(A)=A-A^t=0$, which is if and only if $A=A^t$, then the null space of $T$ is the set of symmetric matrices.
45. If the matrix $A$ is invertible and $B$ is any $n\times n$ matrix, then $T(A^{-1}B)=A(A^{-1}B)=B$, so $R(T)=M_{n\times n}$.
46. a. A basis for the range of T consists of the column vectors of A corresponding to the pivot columns of
the echelon form of A. Any zero rows of A correspond to diagonal entries that are 0, so the echelon form of
A will have pivot columns corresponding to each nonzero diagonal term. Hence, the range of T is spanned
by the nonzero column vectors of A and the number of nonzero vectors equals the number of pivot columns
of A. b. Since $\dim(N(T)) = n - \dim(R(T))$, the dimension of the null space of $T$ equals the number of
zeros on the diagonal.
98 Chapter 4 Linear Transformations
Exercise Set 4.3
An isomorphism between vector spaces establishes a one-to-one correspondence between the vector spaces.
If $T:V\to W$ is a one-to-one and onto linear transformation, then $T$ is called an isomorphism. A mapping is one-to-one if and only if $N(T)=\{\mathbf{0}\}$ and is onto if and only if $R(T)=W$. If $\{v_1,\dots,v_n\}$ is a basis for $V$ and $T:V\to W$ is a linear transformation, then $R(T)=\operatorname{span}\{T(v_1),\dots,T(v_n)\}$. If in addition, $T$ is one-to-one, then $\{T(v_1),\dots,T(v_n)\}$ is a basis for $R(T)$. The main results of Section 4.3 are:
If $V$ is a vector space with $\dim(V)=n$, then $V$ is isomorphic to $\mathbb{R}^n$.
If $V$ and $W$ are vector spaces of dimension $n$, then $V$ and $W$ are isomorphic.
For example, there is a correspondence between the very different vector spaces $P_3$ and $M_{2\times 2}$. To define the isomorphism, start with the standard basis $S=\{1,x,x^2,x^3\}$ for $P_3$. Since every polynomial $a+bx+cx^2+dx^3=a(1)+b(x)+c(x^2)+d(x^3)$, use the coordinate map
$a+bx+cx^2+dx^3\;\xrightarrow{\;L_1\;}\;[a+bx+cx^2+dx^3]_S=\begin{bmatrix}a\\b\\c\\d\end{bmatrix}$
followed by
$\begin{bmatrix}a\\b\\c\\d\end{bmatrix}\;\xrightarrow{\;L_2\;}\;\begin{bmatrix}a&b\\c&d\end{bmatrix}$,
so that the composition $L_2(L_1(a+bx+cx^2+dx^3))=\begin{bmatrix}a&b\\c&d\end{bmatrix}$ defines an isomorphism between $P_3$ and $M_{2\times 2}$.
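The two coordinate maps compose mechanically; a minimal sketch (storing a polynomial $a+bx+cx^2+dx^3$ as its coefficient list, an encoding chosen here for illustration):

```python
def L1(p):
    """Coordinate map: polynomial a+bx+cx^2+dx^3 -> its coordinate vector in R^4."""
    a, b, c, d = p
    return [a, b, c, d]

def L2(v):
    """R^4 -> 2x2 matrix [[a, b], [c, d]]."""
    a, b, c, d = v
    return [[a, b], [c, d]]

# 1 + 2x + 3x^2 + 4x^3 corresponds to the matrix [[1, 2], [3, 4]].
print(L2(L1([1, 2, 3, 4])))   # [[1, 2], [3, 4]]
```

Both maps are invertible and linear, so the composition is an isomorphism, matching the dimension count $\dim P_3 = \dim M_{2\times 2} = 4$.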
Solutions to Exercises
1. Since $N(T)=\left\{\begin{bmatrix}0\\0\end{bmatrix}\right\}$, then $T$ is one-to-one. 2. Since $N(T)=\left\{\begin{bmatrix}a\\a\end{bmatrix}\;\middle|\;a\in\mathbb{R}\right\}$, then $T$ is not one-to-one.
3. Since $N(T)=\left\{\begin{bmatrix}0\\0\\0\end{bmatrix}\right\}$, then $T$ is one-to-one.
4. Since $\begin{bmatrix}2&2&2\\2&1&1\\2&4&1\end{bmatrix}$ reduces to $\begin{bmatrix}2&2&2\\0&3&3\\0&0&3\end{bmatrix}$, which has three pivots, then $N(T)=\left\{\begin{bmatrix}0\\0\\0\end{bmatrix}\right\}$, so $T$ is one-to-one.
5. Let $p(x)=ax^2+bx+c$ and row reduce the system determined by $p$. A vector $\begin{bmatrix}a\\b\end{bmatrix}$ is in the range of $T$ if and only if $a=2b$ and hence, $T$ is not onto.
4.3 Isomorphisms 99
9. Since $\begin{bmatrix}1&1&2\\0&1&1\\0&0&2\end{bmatrix}$ is row equivalent to the identity matrix, then the linear operator $T$ is onto $\mathbb{R}^3$.
10. Since $\begin{bmatrix}2&3&1&a\\1&1&3&b\\1&4&2&c\end{bmatrix}$ reduces to $\begin{bmatrix}2&3&1&a\\0&5&5&a+2b\\0&0&0&a-b+c\end{bmatrix}$, then a vector is in the range of $T$ if and only if $a-b+c=0$ and hence, $T$ is not onto.
11. Since $T(e_1)=\begin{bmatrix}1\\3\end{bmatrix}$ and $T(e_2)=\begin{bmatrix}2\\0\end{bmatrix}$ are two linearly independent vectors in $\mathbb{R}^2$, they form a basis.
12. Since $T(e_2)=\begin{bmatrix}0\\0\end{bmatrix}$, the set is not a basis. 13. Since $T(e_1)=\begin{bmatrix}3\\-3\end{bmatrix}$ and $T(e_2)=\begin{bmatrix}1\\1\end{bmatrix}$ are two linearly independent vectors in $\mathbb{R}^2$, they form a basis.
14. Since $T(e_2)=2T(e_1)$, the set is not a basis. 15. Since $T(e_1)=\begin{bmatrix}1\\0\\0\end{bmatrix}$, $T(e_2)=\begin{bmatrix}1\\1\\0\end{bmatrix}$, and $T(e_3)=\begin{bmatrix}2\\1\\5\end{bmatrix}$ are three linearly independent vectors in $\mathbb{R}^3$, they form a basis.
16. Since
2 3 1
2 6 3
4 9 2
4 2 1
2 0 1
2 1 3/2
1 1 2
1 2 1
0 1 5
3 1
1 3
3 1
3 1
0 1 1
2 0 2
1 1 3
1 3 0
1 2 3
0 1 3
_
a
b
c
d
_
_
.
33. Define an isomorphism $T:\mathbb{R}^4\to P_3$ by
$T\!\left(\begin{bmatrix}a\\b\\c\\d\end{bmatrix}\right)=ax^3+bx^2+cx+d$.
34. Define an isomorphism $T:M_{2\times 2}\to P_3$ by $T\!\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right)=ax^3+bx^2+cx+d$.
35. Since the vector space is given by $V=\left\{\begin{bmatrix}x\\y\\x+2y\end{bmatrix}\;\middle|\;x,y\in\mathbb{R}\right\}$, define an isomorphism $T:V\to\mathbb{R}^2$ by
$T\!\left(\begin{bmatrix}x\\y\\x+2y\end{bmatrix}\right)=\begin{bmatrix}x\\y\end{bmatrix}$.
36. Define an isomorphism $T:P_2\to V$ by $T(ax^2+bx+c)=\begin{bmatrix}a&b\\c&a\end{bmatrix}$.
4.4 Matrix Transformation of a Linear Transformation 101
37. Let $v$ be a nonzero vector in $\mathbb{R}^3$. Then a line $L$ through the origin in the direction of the vector $v$ is given by all scalar multiples of the vector $v$. That is, $L=\{tv\mid t\in\mathbb{R}\}$. Now, let $T:\mathbb{R}^3\to\mathbb{R}^3$ be an isomorphism. Since $T$ is linear, then $T(tv)=tT(v)$. Also, by Theorem 8, $T(v)$ is nonzero. Hence, the set $L$ is mapped to the line through the origin in the direction of $T(v)$.
Given a basis $B=\{v_1,\dots,v_n\}$ for $V$ and a basis $B'$ for $W$, two results are essential in solving the exercises:
The matrix representation of $T$ relative to $B$ and $B'$ is defined by
$[T]_B^{B'}=\big[\,[T(v_1)]_{B'}\;\;[T(v_2)]_{B'}\;\dots\;[T(v_n)]_{B'}\,\big].$
Coordinates of $T(v)$ can be found using the formula
$[T(v)]_{B'}=[T]_B^{B'}[v]_B.$
To outline the steps required in finding and using a matrix representation of a linear transformation, define $T:\mathbb{R}^3\to\mathbb{R}^3$ by $T\!\left(\begin{bmatrix}x\\y\\z\end{bmatrix}\right)=\begin{bmatrix}x\\-y\\z\end{bmatrix}$ and let
$B=\left\{\begin{bmatrix}1\\0\\0\end{bmatrix},\begin{bmatrix}0\\1\\0\end{bmatrix},\begin{bmatrix}0\\0\\1\end{bmatrix}\right\}$ and $B'=\left\{\begin{bmatrix}1\\1\\1\end{bmatrix},\begin{bmatrix}1\\0\\1\end{bmatrix},\begin{bmatrix}2\\1\\0\end{bmatrix}\right\}$
be two bases for $\mathbb{R}^3$.
Apply $T$ to each basis vector in $B$:
$T\!\left(\begin{bmatrix}1\\0\\0\end{bmatrix}\right)=\begin{bmatrix}1\\0\\0\end{bmatrix},\quad T\!\left(\begin{bmatrix}0\\1\\0\end{bmatrix}\right)=\begin{bmatrix}0\\-1\\0\end{bmatrix},\quad T\!\left(\begin{bmatrix}0\\0\\1\end{bmatrix}\right)=\begin{bmatrix}0\\0\\1\end{bmatrix}$
Find the coordinates of each of the vectors found in the first step relative to $B'$. Since
$\left[\begin{array}{ccc|ccc}1&1&2&1&0&0\\1&0&1&0&1&0\\1&1&0&0&0&1\end{array}\right]\longrightarrow\left[\begin{array}{ccc|ccc}1&0&0&-1/2&1&1/2\\0&1&0&1/2&-1&1/2\\0&0&1&1/2&0&-1/2\end{array}\right]$,
then
$\left[T\!\left(\begin{bmatrix}1\\0\\0\end{bmatrix}\right)\right]_{B'}=\begin{bmatrix}-1/2\\1/2\\1/2\end{bmatrix},\quad\left[T\!\left(\begin{bmatrix}0\\1\\0\end{bmatrix}\right)\right]_{B'}=\begin{bmatrix}-1\\1\\0\end{bmatrix},\quad\left[T\!\left(\begin{bmatrix}0\\0\\1\end{bmatrix}\right)\right]_{B'}=\begin{bmatrix}1/2\\1/2\\-1/2\end{bmatrix}.$
The column vectors of the matrix representation relative to $B$ and $B'$ are these coordinate vectors, so
$[T]_B^{B'}=\begin{bmatrix}-1/2&-1&1/2\\1/2&1&1/2\\1/2&0&-1/2\end{bmatrix}$
102 Chapter 4 Linear Transformations
The coordinates of any vector $T(v)$ can be found using the matrix product
$[T(v)]_{B'}=[T]_B^{B'}[v]_B.$
As an example, let $v=\begin{bmatrix}1\\2\\4\end{bmatrix}$; then after applying the operator $T$ the coordinates relative to $B'$ are given by
$\left[T\!\left(\begin{bmatrix}1\\2\\4\end{bmatrix}\right)\right]_{B'}=\begin{bmatrix}-1/2&-1&1/2\\1/2&1&1/2\\1/2&0&-1/2\end{bmatrix}\left[\begin{bmatrix}1\\2\\4\end{bmatrix}\right]_B.$
Since $B$ is the standard basis the coordinates of a vector are just the components, so
$\left[T\!\left(\begin{bmatrix}1\\2\\4\end{bmatrix}\right)\right]_{B'}=\begin{bmatrix}-1/2&-1&1/2\\1/2&1&1/2\\1/2&0&-1/2\end{bmatrix}\begin{bmatrix}1\\2\\4\end{bmatrix}=\begin{bmatrix}-1/2\\9/2\\-3/2\end{bmatrix}.$
This vector is not $T(v)$, but its coordinates relative to the basis $B'$. Then
$T\!\left(\begin{bmatrix}1\\2\\4\end{bmatrix}\right)=-\frac12\begin{bmatrix}1\\1\\1\end{bmatrix}+\frac92\begin{bmatrix}1\\0\\1\end{bmatrix}-\frac32\begin{bmatrix}2\\1\\0\end{bmatrix}=\begin{bmatrix}1\\-2\\4\end{bmatrix}.$
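The whole pipeline above can be sketched numerically. The map $T(x,y,z)=(x,-y,z)$ and the signs follow the reconstruction above (NumPy assumed); the columns of $P$ are the $B'$ basis vectors, so $[T]_B^{B'}=P^{-1}[T]_{\text{std}}$.

```python
import numpy as np

# T(x, y, z) = (x, -y, z) relative to the standard basis B.
T_std = np.diag([1.0, -1.0, 1.0])
P = np.array([[1.0, 1.0, 2.0],     # columns: the B' basis vectors
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

T_B_Bp = np.linalg.inv(P) @ T_std  # [T]_B^{B'}: columns are [T(e_i)]_{B'}
print(T_B_Bp)

v = np.array([1.0, 2.0, 4.0])
coords = T_B_Bp @ v                # [T(v)]_{B'}
print(coords)                      # [-1/2, 9/2, -3/2]
print(P @ coords)                  # back to standard coordinates: [1, -2, 4]
```

Multiplying the $B'$-coordinates by $P$ reassembles $T(v)$ from the basis vectors, exactly the final linear combination above.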
Other useful formulas that involve combinations of linear transformations and the matrix representation are:
$[S+T]_B^{B'}=[S]_B^{B'}+[T]_B^{B'}\qquad [kT]_B^{B'}=k[T]_B^{B'}\qquad [S\circ T]_B=[S]_B[T]_B$
$[T^n]_B=([T]_B)^n\qquad [T^{-1}]_B=([T]_B)^{-1}$
Solutions to Exercises
1. a. Let $B=\{e_1,e_2\}$ be the standard basis. To find the matrix representation for $A$ relative to $B$, the column vectors are the coordinates of $T(e_1)$ and $T(e_2)$ relative to $B$. Recall the coordinates of a vector relative to the standard basis are just the components of the vector. Hence, $[T]_B=\big[\,[T(e_1)]_B\;[T(e_2)]_B\,\big]=\begin{bmatrix}5&1\\1&1\end{bmatrix}$.
b. The direct computation is $T\!\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=\begin{bmatrix}9\\1\end{bmatrix}$ and using part (a), the result is
$T\!\left(\begin{bmatrix}2\\-1\end{bmatrix}\right)=\begin{bmatrix}5&1\\1&1\end{bmatrix}\begin{bmatrix}2\\-1\end{bmatrix}=\begin{bmatrix}9\\1\end{bmatrix}.$
2. a. $[T]_B=\begin{bmatrix}1&0\\0&1\end{bmatrix}$ b. The direct computation is $T\!\left(\begin{bmatrix}1\\3\end{bmatrix}\right)=\begin{bmatrix}1\\3\end{bmatrix}$ and using part (a), the result is
$T\!\left(\begin{bmatrix}1\\3\end{bmatrix}\right)=\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}1\\3\end{bmatrix}=\begin{bmatrix}1\\3\end{bmatrix}.$
3. a. Let $B=\{e_1,e_2,e_3\}$ be the standard basis. Then $[T]_B=\big[\,[T(e_1)]_B\;[T(e_2)]_B\;[T(e_3)]_B\,\big]=\begin{bmatrix}1&1&-2\\0&3&-1\\1&0&-1\end{bmatrix}$. b. The direct computation is $T\!\left(\begin{bmatrix}1\\2\\3\end{bmatrix}\right)=\begin{bmatrix}-3\\3\\-2\end{bmatrix}$, and using part (a) the result is
$T\!\left(\begin{bmatrix}1\\2\\3\end{bmatrix}\right)=\begin{bmatrix}1&1&-2\\0&3&-1\\1&0&-1\end{bmatrix}\begin{bmatrix}1\\2\\3\end{bmatrix}=\begin{bmatrix}-3\\3\\-2\end{bmatrix}.$
4.4 Matrix Transformation of a Linear Transformation 103
4. a. [T]
B
=
_
_
1 0 0
0 1 0
0 0 1
_
_
b. The direct computation is T
_
_
2
5
1
_
_
=
_
_
2
5
1
_
_
, and using part (a) the
result is T
_
_
2
5
1
_
_
=
_
_
1 0 0
0 1 0
0 0 1
_
_
_
_
2
5
1
_
_
=
_
_
2
5
1
_
_
.
5. a. The column vectors of the matrix representation relative to B and B
B
=
_ _
T
__
1
1
___
B
_
T
__
2
0
___
B
_
. Since B
is the standard basis, the coordinates are the components of the vectors T
__
1
1
__
and T
__
2
0
__
, so
[T]
B
B
=
_
3 2
3 6
_
.
b. The direct computation is T
_
1
2
_
=
_
3
3
_
and using part (a)
T
_
1
2
_
=
_
3 2
3 6
_ _
1
2
_
B
=
_
3 2
3 6
_ _
2
3/2
_
=
_
3
3
_
.
6. a. [T]
B
B
=
_
_
3 2 1
2 1 2
2 0 2
_
_
b.
_
_
1
1
2
_
_
= T
_
_
1
1
1
_
_
= [T]
B
B
_
_
1
1
1
_
_
B
= [T]
B
B
_
_
3/2
3
5/2
_
_
7. a. The matrix representation is given by
[T]
B
B
=
_ _
T
__
1
2
___
B
_
T
__
1
1
___
B
_
=
_ _
T
__
2
3
___
B
_
T
__
2
2
___
B
_
.
We can nd the coordinates of both vectors by considering
_
3 0 2 2
2 2 3 2
_
_
1 0
2
3
2
3
0 1
13
6
5
3
_
, so [T]
B
B
=
_
2
3
2
3
13
6
5
3
_
.
b. The direct computation is T
_
1
3
_
=
_
2
4
_
. Using part (a) we can now nd the coordinates of the
image of a vector using the formula $[T(v)]_{B'}=[T]_B^{B'}[v]_B$, and then use these coordinates to find $T(v)$. That
is,
_
T
_
1
3
__
B
= [T]
B
B
_
1
3
_
B
= [T]
B
B
_
2
1
_
=
_
2
3
8
3
_
, so T
_
1
3
_
=
2
3
_
3
2
_
+
8
3
_
0
2
_
=
_
2
4
_
.
8. a. [T]
B
B
=
_
_
1 1 1
3 1 1
3 1 2
_
_
b. The direct computation gives T
_
_
2
1
3
_
_
=
_
_
1
4
4
_
_
. Using the matrix in
part (a) gives
_
_
T
_
_
2
1
3
_
_
_
_
B
= [T]
B
B
_
_
T
_
_
2
1
3
_
_
_
_
B
= [T]
B
B
_
_
1
1
1
_
_
=
_
_
1
3
4
_
_
, so that
T
_
_
2
1
3
_
_
=
_
_
0
0
1
_
_
3
_
_
1
0
1
_
_
4
_
_
1
1
0
_
_
=
_
_
1
4
4
_
_
.
104 Chapter 4 Linear Transformations
9. a. Since B
B
=
_
_
1 1 1
0 1 2
0 0 1
_
_
. b. The direct computation is
T(x
2
3x + 3) = x
2
3x + 3. To nd the coordinates of the image, we have from part (a) that
_
T(x
2
3x + 3)
= [T]
B
B
[x
2
3x + 3]
B
= [T]
B
B
_
_
1
1
1
_
_
=
_
_
3
3
1
_
_
, so T(x
2
3x + 3) = 3 3x +x
2
.
10. a. [T]
B
B
=
_
_
1 1 2
1 0 1
3 1 3
_
_
b. The direct computation gives T(1 x) =
d
dx
(1 x) + (1 x) = x.
Using the matrix in part (a) gives [T(1 x)]
B
= [T]
B
B
[1 x]
B
= [T]
B
B
_
_
1
1
1
_
_
=
_
_
0
0
1
_
_
, so
T(1 x) = 0(1 +x) + 0(1 +x +x
2
) x = x.
11. First notice that if A =
_
a b
c a
_
, then T(A) =
_
0 2b
2c 0
_
.
a. [T]
B
=
_
_
0 0 0
0 2 0
0 0 2
_
_
b. The direct computation is T
__
2 1
3 2
__
=
_
0 2
6 0
_
. Using part (a)
_
T
__
2 1
3 2
___
B
= [T]
B
_
_
2
1
3
_
_
=
_
_
0
2
6
_
_
so
T
__
2 1
3 2
__
= 0
_
1 0
0 1
_
2
_
0 1
0 0
_
+ 6
_
0 0
1 0
_
=
_
0 2
6 0
_
.
12. First notice that T
__
a b
c d
__
=
_
3a b + 2c
2b +c 3d
_
. a. [T]
B
=
_
_
3 0 0 0
0 1 2 0
0 2 1 0
0 0 0 3
_
_
b. The direct computation gives T
__
1 3
1 2
__
=
_
3 1
5 6
_
. Using the matrix in part (a) gives
__
1 3
1 2
__
B
= [T]
B
_
1 3
1 2
_
B
= [T]
B
_
_
1
3
1
2
_
_
=
_
_
3
1
5
6
_
_
, so T
__
1 3
1 2
__
=
_
3 1
5 6
_
.
13. a. [T]
B
=
_
1 2
1 1
_
b. [T]
B
=
1
9
_
1 22
11 1
_
c. [T]
B
B
=
1
9
_
5 2
1 5
_
d. [T]
B
B
=
1
3
_
5 2
1 5
_
e. [T]
B
C
=
1
9
_
2 5
5 1
_
f. [T]
B
C
=
1
9
_
22 1
1 11
_
14. a. [T]
B
B
=
_
_
1 4
1 3
1 2
_
_
b. [T]
B
B
=
1
2
_
_
1 3
0 2
1 1
_
_
c. [T]
B
C
=
_
_
4 1
3 1
2 1
_
_
d. [T]
B
C
=
1
2
_
_
3 1
2 0
1 1
_
_
e.[T]
C
B
=
_
_
1 2
1 3
1 4
_
_
4.4 Matrix Transformation of a Linear Transformation 105
15. a. [T]
B
B
=
_
_
0 0
1 0
0 1/2
_
_
b. [T]
B
C
=
_
_
0 0
0 1
1/2 0
_
_
c. [T]
C
C
=
_
_
0 1
0 0
1/2 0
_
_
d. [S]
B
B
=
_
0 1 0
0 0 2
_
e. [S]
B
B
[T]
B
B
=
_
1 0
0 1
_
, [T]
B
B
[S]
B
B
=
_
_
0 0 0
0 1 0
0 0 1
_
_
f. The function S T is the identity map, that is,
(S T)(ax +b) = ax +b so S reverses the action of T.
16. a. [T]
B
B
=
1
4
_
_
2 0 0 2
0 2 2 2
3 1 1 2
3 1 1 0
_
_
b. [T]
B
B
=
_
_
2 0 2 2
0 0 2 2
1 0 1 1
0 0 2 2
_
_
c. [T]
B
B
=
1
4
_
_
4 0 0 0
2 0 6 2
1 0 3 7
3 0 5 1
_
_
d. [I]
B
B
=
1
4
_
_
2 0 0 2
0 2 2 0
1 1 1 1
1 1 1 1
_
_
, [I]
B
B
=
_
_
1 0 1 1
0 1 1 1
0 1 1 1
1 0 1 1
_
_
17. [T]
B
=
_
1 0
0 1
_
. The transformation T
reects a vector across the x-axis.
18. The transformation rotates a vector by ra-
dians in the counterclockwise direction.
19. [T]
B
= cI 20. Since T(A) = A A
t
=
_
0 b c
c b 0
_
,
then [T]
B
=
_
_
0 0 0 0
0 1 1 0
0 1 1 0
0 0 0 0
_
_
21. [T]
B
B
= [1 0 0 1] 22. a. [3S]
B
= 3[S]
B
= 3
_
1 0
1 1
_
b.
_
6
3
_
23. a. [2T +S]
B
= 2[T]
B
+ [S]
B
=
_
5 2
1 7
_
b.
_
4
23
_
24. a. [T S]
B
= [T]
B
[S]
B
=
_
3 1
2 3
_
b.
_
3
5
_
25. a. [S T]
B
= [S]
B
[T]
B
=
_
2 1
1 4
_
b.
_
1
10
_
26. a. [2T]
B
= 2[T]
B
=
_
_
2 2 2
0 4 4
2 2 2
_
_
b.
_
_
10
16
10
_
_
27. a. [3T + 2S]
B
=
_
_
3 3 1
2 6 6
3 3 1
_
_
b.
_
_
3
26
9
_
_
28. a. [T S]
B
= [T]
B
[S]
B
=
_
_
2 0 2
2 0 2
2 0 2
_
_
b.
_
_
8
4
8
_
_
29. a. [S T]
B
=
_
_
4 4 4
1 1 1
1 1 1
_
_
b.
_
_
20
5
5
_
_
30. [T]
B
=
_
4 0
0 6
_
, [T
k
]
B
= ([T]
B
)
k
=
_
4
k
0
0 (6)
k
_
106 Chapter 4 Linear Transformations
31. Since [T]
B
=
_
_
0 0 0 6 0
0 0 0 0 24
0 0 0 0 0
0 0 0 0 0
0 0 0 0 0
_
_
and [T(p(x)]
B
=
_
_
12
48
0
0
0
_
_
, then T(p(x)) = p
(x) = 12 48x.
32. Since B is the standard basis, then [T(1)]
B
= [1]
B
=
_
_
1
0
0
_
_
, [T(x)]
B
= [2x]
B
=
_
_
0
2
0
_
_
, and [T(x
2
)]
B
=
[3x
2
]
B
=
_
_
0
0
3
_
_
, so [T]
B
=
_
_
1 0 0
0 2 0
0 0 3
_
_
.
33. [S]
B
B
=
_
_
0 0 0
1 0 0
0 1 0
0 0 1
_
_
, [D]
B
B
=
_
_
0 1 0 0
0 0 2 0
0 0 0 3
_
_
, [D]
B
B
[S]
B
B
=
_
_
1 0 0
0 2 0
0 0 3
_
_
= [T]
B
34. The linear operator that reects a vector through the line perpendicular to
_
1
1
_
, that is reects across
the line y = x, is given by T
_
x
y
_
=
_
y
x
_
, so
[T]
B
=
_ _
1
1
_
B
_
1
0
_
B
_
=
_
1 1
0 1
_
.
35. If A =
_
a b
c d
_
, then the matrix representation for T is [T]
S
=
_
_
0 c b 0
b a d 0 b
c 0 d a c
0 c b 0
_
_
.
36. Since T(v) = v is the identity map, then
[T]
B
B
= [ [T(v
1
)]
B
[T(v
2
)]
B
[T(v
3
)]
B
] = [ [v
1
]
B
[v
2
]
B
[v
3
]
B
] =
_
_
0 1 0
1 0 0
0 0 1
_
_
.
If [v]
B
=
_
_
a
b
c
_
_
, then [v]
B
=
_
_
b
a
c
_
_
. The matrix [T]
B
B
can be obtained from the identity matrix by
interchanging the rst and second columns.
37.
[T]
B
= [ [T(v
1
)]
B
[T(v
2
)]
B
. . . [T(v
n
) ]
B
= [ [v
1
]
B
[v
1
+v
2
]
B
. . . [v
n1
+v
n
]
B
] =
_
_
1 1 0 0 . . . . . . 0 0
0 1 1 0 . . . . . . 0 0
0 0 1 1 . . . . . . 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0 . . . 1 0
0 0 0 0 0 . . . 1 1
0 0 0 0 0 . . . 0 1
_
_
Exercise Set 4.5
If $T:V\to V$ is a linear operator, the matrix representation of $T$ relative to a basis $B$, denoted $[T]_B$,
4.5 Similarity 107
depends on the basis. However, the action of the operator does not change, so it does not depend on the matrix representation. Suppose $B_1=\{v_1,\dots,v_n\}$ and $B_2=\{v_1',\dots,v_n'\}$ are two bases for $V$ and the coordinates of $T(v)$ are $[T(v)]_{B_1}=\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}$ and $[T(v)]_{B_2}=\begin{bmatrix}d_1\\d_2\\\vdots\\d_n\end{bmatrix}$. Then
$T(v)=c_1v_1+c_2v_2+\cdots+c_nv_n=d_1v_1'+d_2v_2'+\cdots+d_nv_n'.$
The matrix representations are related by the formula
$[T]_{B_2}=[I]_{B_1}^{B_2}[T]_{B_1}[I]_{B_2}^{B_1}.$
Recall that transition matrices are invertible with $[I]_{B_1}^{B_2}=([I]_{B_2}^{B_1})^{-1}$. Two $n\times n$ matrices $A$ and $B$ are called similar provided there exists an invertible matrix $P$ such that $B=P^{-1}AP$.
Solutions to Exercises
1. The coordinates of the image of the vector $v=\begin{bmatrix}4\\-1\end{bmatrix}$ relative to the two bases are
$[T(v)]_{B_1}=[T]_{B_1}[v]_{B_1}=\begin{bmatrix}-1&-2\\1&-3\end{bmatrix}\begin{bmatrix}4\\-1\end{bmatrix}=\begin{bmatrix}-2\\7\end{bmatrix}$ and $[T(v)]_{B_2}=[T]_{B_2}[v]_{B_2}=\begin{bmatrix}-2&1\\-1&-2\end{bmatrix}\begin{bmatrix}-1\\5\end{bmatrix}=\begin{bmatrix}7\\-9\end{bmatrix}.$
Then using the coordinates relative to the respective bases the vector $T(v)$ is
$7\begin{bmatrix}1\\1\end{bmatrix}+(-9)\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}-2\\7\end{bmatrix}=-2\begin{bmatrix}1\\0\end{bmatrix}+7\begin{bmatrix}0\\1\end{bmatrix},$
so the action of the operator is the same regardless of the particular basis used.
2. The coordinates of the image of the vector v =
_
5
2
_
relative to the two bases are
[T(v)]
B1
= [T]
B1
[v]
B1
=
_
0 1
2 1
_ _
5
2
_
=
_
2
8
_
and [T(v)]
B2
= [T]
B2
[v]
B2
=
_
2 0
4 1
_ _
3
8
_
=
_
6
4
_
.
Then using the coordinates relative to the respective bases the vector T(v) is
6
_
1
2
_
4
_
1
1
_
=
_
2
8
_
= 2
_
1
0
_
+ 8
_
0
1
_
,
so the action of the operator is the same regardless of the particular basis used.
3. a. Since B
1
is the standard basis, T(e
1
) =
_
1
1
_
, and T(e
2
) =
_
1
1
_
, then [T]
B1
=
_
1 1
1 1
_
. Relative
to the basis B
2
, T
__
1
1
__
=
_
2
2
_
= 2
_
1
1
_
+0
_
1
1
_
, and T
__
1
1
__
=
_
0
0
_
, so [T]
B2
=
_
2 0
0 0
_
.
b. The coordinates of the image of the vector v =
_
3
2
_
relative to the two bases are
[T]
B1
[v]
B1
=
_
1 1
1 1
_ _
3
2
_
=
_
1
1
_
and [T]
B2
[v]
B2
=
_
2 0
0 0
_ _
1/2
5/2
_
=
_
1
0
_
.
Then using the coordinates relative to the respective bases the vector T(v) is
1
_
1
1
_
+ (0)
_
1
1
_
=
_
1
1
_
=
_
1
0
_
+
_
0
1
_
,
108 Chapter 4 Linear Transformations
so the action of the operator is the same regardless of the particular basis used.
4. a. Since B
1
is the standard basis, then [T]
B1
=
_
1 0
0 1
_
. Relative to the basis B
2
, we have [T]
B2
=
_
5/3 4/3
4/3 5/3
_
. b. The coordinates of the image of the vector v =
_
2
2
_
relative to the two bases are
[T]
B1
[v]
B1
=
_
1 0
0 1
_ _
2
2
_
=
_
2
2
_
and [T]
B2
[v]
B2
=
_
5/3 4/3
4/3 5/3
_ _
2/3
2/3
_
=
_
2
2
_
.
Then using the coordinates relative to the respective bases the vector T(v) is
2
_
1
0
_
2
_
0
1
_
=
_
2
2
_
= 2
_
2
1
_
2
_
1
2
_
,
so the action of the operator is the same regardless of the particular basis used.
5. a. [T]
B1
=
_
_
1 0 0
0 0 0
0 0 1
_
_
, [T]
B2
=
_
_
1 1 0
0 0 0
0 1 1
_
_
b. [T]
B1
[v]
B1
=
_
_
1 0 0
0 0 0
0 0 1
_
_
_
_
1
2
1
_
_
=
_
_
1
0
1
_
_
, [T]
B2
[v]
B2
=
_
_
1 1 0
0 0 0
0 1 1
_
_
_
_
3
2
4
_
_
=
_
_
1
0
2
_
_
. Since
B
1
is the standard basis and 1
_
_
1
0
1
_
_
+ (0)
_
_
1
1
0
_
_
+ (2)
_
_
0
0
1
_
_
=
_
_
1
0
1
_
_
, the action of the operator is
the same regardless of the particular basis used.
6. a. [T]
B1
=
_
_
1 1 0
1 1 1
0 1 1
_
_
, [T]
B2
=
_
_
2 1 2
3 2 4
2 1 3
_
_
b. Since B
1
is the standard basis,
T(v) = [T]
B1
[v]
B1
=
_
_
1 1 0
1 1 1
0 1 1
_
_
_
_
2
1
1
_
_
=
_
_
1
2
0
_
_
. Relative to the basis B
2
, we have
[T(v]
B2
= [T]
B2
[v]
B2
=
_
_
2 1 2
3 2 4
2 1 3
_
_
_
_
1
2
1
_
_
=
_
_
2
3
3
_
_
. Since 2
_
_
1
1
0
_
_
3
_
_
0
0
1
_
_
+ 3
_
_
1
0
1
_
_
=
_
_
1
2
0
_
_
, the action of the operator is the same regardless of the particular basis used.
7. Since $B_1$ is the standard basis, then the transition matrix relative to $B_2$ and $B_1$ is
$P=[I]_{B_2}^{B_1}=\left[\begin{bmatrix}3\\1\end{bmatrix}_{B_1}\begin{bmatrix}1\\1\end{bmatrix}_{B_1}\right]=\begin{bmatrix}3&1\\1&1\end{bmatrix}.$ By Theorem 15,
$[T]_{B_2}=P^{-1}[T]_{B_1}P=\frac12\begin{bmatrix}1&-1\\-1&3\end{bmatrix}\begin{bmatrix}1&-1\\-3&2\end{bmatrix}\begin{bmatrix}3&1\\1&1\end{bmatrix}=\begin{bmatrix}9/2&1/2\\-23/2&-3/2\end{bmatrix}.$
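The change-of-basis product $P^{-1}[T]_{B_1}P$ is a one-liner to check numerically. The sketch below uses the numbers from Exercise 7 as reconstructed above, so the specific matrices are an assumption tied to that reconstruction:

```python
import numpy as np

# Exercise 7 data as reconstructed: B1 standard, B2 = {(3,1), (1,1)}.
T_B1 = np.array([[1.0, -1.0], [-3.0, 2.0]])
P = np.array([[3.0, 1.0], [1.0, 1.0]])     # [I]_{B2}^{B1}: B2 vectors as columns

T_B2 = np.linalg.inv(P) @ T_B1 @ P
print(T_B2)    # [[ 4.5,  0.5], [-11.5, -1.5]], i.e. [[9/2, 1/2], [-23/2, -3/2]]
```

Since $[T]_{B_2}$ is similar to $[T]_{B_1}$, quantities such as the trace ($3$) and determinant ($-1$) are preserved, which is a quick sanity check on the arithmetic.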
8. Since B
1
is the standard basis, then the transition matrix relative to B
2
and B
1
is
P = [I]
B1
B2
=
_
__
1
2
__
B1
__
1
0
__
B1
_
=
_
1 1
2 0
_
. By Theorem 15,
[T]
B2
= P
1
[T]
B1
P =
1
2
_
0 1
2 1
_ _
0 2
2 3
_ _
1 1
2 0
_
=
_
4 1
0 1
_
.
4.5 Similarity 109
9. The transition matrix is P = [I]
B1
B2
=
_
1/3 1
1/3 1
_
. By Theorem 15
[T]
B2
= P
1
[T]
B1
P =
_
3/2 3/2
1/2 1/2
_ _
1 0
0 1
_ _
1/3 1
1/3 1
_
=
_
0 3
1/3 0
_
.
10. The transition matrix is P = [I]
B1
B2
=
_
3/2 1/2
1/2 1/2
_
. By Theorem 15
[T]
B2
= P
1
[T]
B1
P =
1
2
_
1/2 1/2
1/2 3/2
_ _
1 0
0 1
_ _
3/2 1/2
1/2 1/2
_
=
_
2 1
3 2
_
.
11. Since P = [I]
B1
B2
=
_
2 1
3 2
_
and [T]
B1
=
_
2 0
0 0
_
, by Theorem 15,
[T]
B2
= P
1
[T]
B1
P =
_
2 1
3 2
_ _
2 0
0 3
_ _
2 1
3 2
_
=
_
1 2
6 6
_
.
12. Since P = [I]
B1
B2
=
_
3 1
5 2
_
and [T]
B1
=
_
1 1
1 2
_
, by Theorem 15,
[T]
B2
= P
1
[T]
B1
P =
_
2 1
5 3
_ _
1 1
1 2
_ _
3 1
5 2
_
=
_
17 7
49 20
_
13. Since P = [I]
B1
B2
=
_
1 1
2 1
_
, and [T]
B1
=
_
1 1
2 1
_
by Theorem 15
[T]
B2
= P
1
[T]
B1
P =
_
1 1
2 1
_ _
1 1
2 1
_ _
1 1
2 1
_
=
_
1 1
2 1
_
.
14. Since P = [I]
B1
B2
=
_
2 1/2
2 1
_
, and [T]
B1
=
_
1 1
2 1
_
, by Theorem 15
[T]
B2
= P
1
[T]
B1
P =
_
1 1/2
2 2
_ _
2 5/2
0 2
_ _
2 1/2
2 1
_
=
_
1 1/2
6 1
_
.
15. Since T(1) = 0, T(x) = 1, and T(x
2
) = 2x, then [T]
B1
=
_
_
0 1 0
0 0 2
0 0 0
_
_
and
[T]
B2
=
_
_
0 2 0
0 0 1
0 0 0
_
_
. Now if P = [I]
B1
B2
=
_
_
1 0 2
0 2 0
0 0 1
_
_
, then by Theorem 15, [T]
B2
= P
1
[T]
B1
P.
16. Since T(1) = 0, T(x) = x, T(x
2
) = 2x
2
+ 2, and T(1 +x
2
) = 2(x
2
+ 1), then [T]
B1
=
_
_
0 0 2
0 1 0
0 0 2
_
_
and
[T]
B2
=
_
_
0 0 0
0 1 0
0 0 2
_
_
. Now if P = [I]
B1
B2
=
_
_
1 0 1
0 1 0
0 0 1
_
_
, then by Theorem 15, [T]
B2
= P
1
[T]
B1
P.
17. Since $A$ and $B$ are similar, there is an invertible matrix $P$ such that $B=P^{-1}AP$. Also since $B$ and $C$ are similar, there is an invertible matrix $Q$ such that $C=Q^{-1}BQ$. Therefore, $C=Q^{-1}P^{-1}APQ=(PQ)^{-1}A(PQ)$, so that $A$ and $C$ are also similar.
110 Chapter 4 Linear Transformations
18. If $A$ is similar to $B$, then there is an invertible matrix $P$ such that $B=P^{-1}AP$. Then
$\det(B)=\det(P^{-1}AP)=\det(P^{-1})\det(A)\det(P)=\det(A).$
19. For any square matrices $A$ and $B$, the trace function satisfies the property $\operatorname{tr}(AB)=\operatorname{tr}(BA)$. Now, since $A$ and $B$ are similar matrices there exists an invertible matrix $P$ such that $B=P^{-1}AP$. Hence
$\operatorname{tr}(B)=\operatorname{tr}(P^{-1}AP)=\operatorname{tr}(APP^{-1})=\operatorname{tr}(A).$
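The invariance of trace (Exercise 19) and determinant (Exercise 18) under similarity is easy to confirm on random matrices (NumPy assumed; a random $P$ is invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))          # generically invertible
B = np.linalg.inv(P) @ A @ P             # B is similar to A

print(np.isclose(np.trace(B), np.trace(A)))             # True
print(np.isclose(np.linalg.det(B), np.linalg.det(A)))   # True
```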
20. If $A$ is similar to $B$, then there is an invertible matrix $P$ such that $B=P^{-1}AP$. Then
$B^t=(P^{-1}AP)^t=P^tA^t(P^{-1})^t=P^tA^t(P^t)^{-1}$
and hence, $A^t$ and $B^t$ are similar.
21. Since $A$ and $B$ are similar matrices there exists an invertible matrix $P$ such that $B=P^{-1}AP$. Hence
$B^n=(P^{-1}AP)^n=P^{-1}A^nP.$
Thus, $A^n$ and $B^n$ are similar.
22. If $A$ is similar to $B$, then there is an invertible matrix $P$ such that $B=P^{-1}AP$. Then
$\det(B-\lambda I)=\det(P^{-1}AP-\lambda I)=\det(P^{-1}(AP-\lambda P))=\det(P^{-1}(A-\lambda I)P)=\det(A-\lambda I).$
Exercise Set 4.6
1. a. Since the triangle is reflected across the $x$-axis, the matrix representation relative to the standard basis for $T$ is $\begin{bmatrix}1&0\\0&-1\end{bmatrix}$. b. Since the triangle is reflected across the $y$-axis, the matrix representation relative to the standard basis is $\begin{bmatrix}-1&0\\0&1\end{bmatrix}$. c. Since the triangle is vertically stretched by a factor of 3, the matrix representation relative to the standard basis is $\begin{bmatrix}1&0\\0&3\end{bmatrix}$.
2. a. Since the square is reflected through the origin, the matrix representation relative to the standard basis for $T$ is $\begin{bmatrix}-1&0\\0&-1\end{bmatrix}$. b. Since the square is sheared horizontally by a factor of 2, the matrix representation relative to the standard basis for $T$ is $\begin{bmatrix}1&2\\0&1\end{bmatrix}$. c. Since the square is sheared vertically by a factor of 3, the matrix representation relative to the standard basis for $T$ is $\begin{bmatrix}1&0\\3&1\end{bmatrix}$.
3. a. The matrix representation relative to the standard basis $S$ is the product of the matrix representations for the three separate operators. That is,
$[T]_S=\begin{bmatrix}1&0\\0&-1\end{bmatrix}\begin{bmatrix}1&0\\0&\tfrac12\end{bmatrix}\begin{bmatrix}3&0\\0&1\end{bmatrix}=\begin{bmatrix}3&0\\0&-1/2\end{bmatrix}.$
4.6 Application: Computer Graphics 111
b. (figure: the original and transformed triangles plotted in the $xy$-plane)
c. The matrix that will reverse the action of the operator $T$ is the inverse of $[T]_S$. That is,
$[T]_S^{-1}=\begin{bmatrix}1/3&0\\0&-2\end{bmatrix}.$
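Composing the three transformations is just a matrix product, which a short check illustrates (signs follow the reconstruction above; the operations are assumed to be a horizontal stretch by 3, a vertical compression by 1/2, then a reflection across the $x$-axis):

```python
import numpy as np

stretch  = np.array([[3.0, 0.0], [0.0, 1.0]])    # horizontal stretch by 3
compress = np.array([[1.0, 0.0], [0.0, 0.5]])    # vertical compression by 1/2
reflect  = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection across the x-axis

T = reflect @ compress @ stretch                 # rightmost matrix acts first
print(T)                   # [[3, 0], [0, -1/2]]
print(np.linalg.inv(T))    # [[1/3, 0], [0, -2]] reverses the action
```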
4. a. The matrix representation relative to the standard basis S is the product of the matrix representations
for the three separate operators. That is,
[T]
S
=
_
1 2
0 1
_ _
1 0
0 1
_
=
_
1 2
0 1
_
.
b.
x
y
25
25
5
5
c. The matrix that will reverse the action of the oper-
ator T is the inverse of [T]
S
. That is,
[T]
1
S
=
_
1 2
0 1
_
.
5. a.
[T]
S
=
_
2/2
2/2
2/2
2/2
_
b.
x
y
25
25
5
5
c.
[T]
1
S
=
_
2/2
2/2
2/2
2/2
_
6. a.
[T]
S
=
_
0 1
1 0
_ _
0 1
1 0
_
=
_
1 0
0 1
_
b.
x
y
25
25
5
5
c.
[T]
1
S
=
_
1 0
0 1
_
d. The transformation is a re-
ection through the y-axis.
7. a. [T]
S
=
_
_
3/2 1/2 0
1/2
3/2 0
0 0 1
_
_
_
_
1 0 1
0 1 1
0 0 1
_
_
=
_
_
3/2 1/2
3/2 1/2
1/2
3/2
3/2 + 1/2
0 0 1
_
_
112 Chapter 4 Linear Transformations
b.
x
y
25
25
5
5
c.
_
_
3/2 1/2 1
1/2
3/2 1
0 0 1
_
_
8. a. [T]
S
=
_
_
1 0 0
0 1 0
0 0 1
_
_
_
_
1 0 4
0 1 2
0 0 1
_
_
=
_
_
1 0 4
0 1 2
0 0 1
_
_
b.
x
y
25
25
5
5
c.
_
_
1 0 4
0 1 2
0 0 1
_
_
9. a.
__
0
0
__
B
=
_
0
0
_
,
__
2
2
__
B
=
_
2
0
_
,
__
0
2
__
B
=
_
1
1
_
b. Since the operator T is given by
T
__
x
y
__
=
_
y
x
_
, then [T]
S
B
=
_ _
T
__
1
1
___
S
_
T
__
1
1
___
S
_
=
_
1 1
1 1
_
.
c.
_
1 1
1 1
_ _
0
0
_
=
_
0
0
_
,
_
1 1
1 1
_ _
2
0
_
=
_
2
2
_
,
_
1 1
1 1
_ _
1
1
_
=
_
2
0
_
.
The original triangle is reected across the line y = x, as shown in the gure.
x
y
25
25
5
5
d.
_
0 1
1 0
_ _
0
0
_
=
_
0
0
_
,
_
0 1
1 0
_ _
2
2
_
=
_
2
2
_
,
_
0 1
1 0
_ _
0
2
_
=
_
2
0
_
10. a.
_
0
0
_
B
=
_
0
0
_
,
_
1
0
_
B
=
_
1
0
_
,
_
1
1
_
B
=
_
0
1
_
B
,
_
2
1
_
B
=
_
1
1
_
Review Chapter 4 113
b. The gure shows the parallelogram determined by
the original points. The linear transformation that re-
ects a vector across the horizontal axis is dened by
T
__
x
y
__
=
_
x
y
_
. Since T
_
1
0
_
=
_
1
0
_
and
T
_
1
1
_
=
_
1
1
_
, then
[T]
B
=
_ _
1
0
_
B
_
1
1
_
B
_
=
_
1 2
0 1
_
.
x
y
21 1
c. [T]
B
_
0
0
_
=
_
0
0
_
, [T]
B
_
1
0
_
=
_
1
0
_
,
[T]
B
_
1
1
_
=
_
3
1
_
, [T]
B
_
0
1
_
=
_
2
1
_
The reection is shown in the gure.
x
y
21
1
d. The transformation relative to the standard basis is given by a horizontal shear by a factor of one followed
by a reection across the x-axis. Hence the transformation is given by the matrix
_
1 0
0 1
_ _
1 1
0 1
_
=
_
1 1
0 1
_
.
Review Exercises Chapter 4
1. a. The vectors are not scalar multiples, so S is a basis
b. Since S is a basis, for any vector
_
x
y
_
there are scalars c
1
and c
2
such that
_
x
y
_
= c
1
_
1
1
_
+ c
2
_
3
1
_
. The resulting linear system
_
c
1
+ 3c
2
= x
c
1
c
2
= y
has the unique solution c
1
=
1
4
x +
3
4
y and c
2
=
1
4
x
1
4
y. Then
T
__
x
y
__
= c
1
T
__
1
1
__
+c
2
T
__
3
1
__
=
_
_
x
x +y
x y
2y
_
_
.
c. N(T) = {0} d. Since N(T) = {0}, T is one-to-one.
e. Since the range consists of all vectors of the form
_
_
x
x +y
x y
2y
_
_
= x
_
_
1
1
1
2
_
_
+y
_
_
0
1
1
2
_
_
and the vectors
_
_
1
1
1
2
_
_
and
_
_
0
1
1
2
_
_
are linearly independent, then a basis for the range is
_
_
_
_
1
1
1
2
_
_
,
_
_
0
1
1
2
_
_
_
_
.
f. Since dim(R(T)) = 2 and dim(R
4
) = 4, then T is not onto. Also
_
_
a
b
c
d
_
_
is in R(T) if and only if c+b2a = 0.
114 Chapter 4 Linear Transformations
g.
_
_
_
_
1
0
1
1
_
_
,
_
_
1
1
0
1
_
_
,
_
_
1
0
0
0
_
_
,
_
_
0
1
0
0
_
_
_
_
h. [T]
C
B
=
_
_
1 2
5 4
7 5
2 4
_
_
i. The matrix found in part (h) can now be used to nd the coordinates of Av relative to the basis C. That is
_
A
_
x
y
__
C
=
_
_
1 2
5 4
7 5
2 4
_
_
__
x
y
__
B
=
_
_
1 2
5 4
7 5
2 4
_
_
_
1
3
x +
1
3
y
2
3
x
1
3
y
_
=
_
_
x y
x + 3y
x + 4y
2x 2y
_
_
.
Then
A
_
x
y
_
= (x y)
_
_
1
0
1
1
_
_
+ (x + 3y)
_
_
1
1
0
1
_
_
+ (x + 4y)
_
_
1
0
0
0
_
_
+ (2x 2y)
_
_
0
1
0
0
_
_
=
_
_
x
x +y
x y
2y
_
_
,
which agrees with the denition for T found in part (b).
2. a. The composition H T(p(x)) = H(T(p(x)) = H(xp(x) +p(x)) = p(x) +xp
(x) +p
(x) +p
(x) +p
(x) +p(0)) = p
(x) +p
(x) +xp
(x) +p
(x) = 2p
(x) +xp
(x) +p
(x),
then S (H T)(p(x)) = 2p
(0) +p
B
=
_
_
0 1 0 0
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
_
_
, [T]
B
B
=
_
_
1 0 0 0
1 1 0 0
0 1 1 0
0 0 1 1
0 0 0 1
_
_
, [H]
B
B
=
_
_
1 1 0 0 0
0 0 2 0 0
0 0 0 3 0
0 0 0 0 4
0 0 0 0 0
_
_
.
c. Since T(p(x)) = T(a + bx + cx
2
+ dx
3
) = 0 if and only if a + (a + b)x + (b + c)x
2
+ (c + d)x
3
+ dx
4
= 0,
then a polynomial is in the null space of T if and only if it is the zero polynomial. Therefore, the mapping T
is one-to-one. d. R(T) = span{T(1), T(x), T(x
2
), T(x
3
)} = span{x + 1, x
2
+x, x
3
+x
2
, x
4
+x
3
}.
3. a. A reection through the x-axis is given by the operator S
_
x
y
_
=
_
x
y
_
and a reection through the
y-axis by T
_
x
y
_
=
_
x
y
_
. The operator S is a linear operator since
S
__
x
y
_
+c
_
u
v
__
= S
__
x +u
y +cv
__
=
_
x +u
(y +cv)
_
=
_
x
y
_
+c
_
u
v
_
= S
__
x
y
__
+cS
__
u
v
__
.
Similarly, T is also a linear operator.
b. [S]
B
=
_
1 0
0 1
_
and [T]
B
=
_
1 0
0 1
_
. c. Since [T S]
B
=
_
1 0
0 1
_
= [S T]
B
, the linear
operators S T and T S reect a vector through the origin.
4. a. Let B =
_
1 3
1 1
_
. Since B(A + kC) = BA + kBC for all matrices A and C and scalars k,
then the mapping T is a linear transformation. Let A =
_
a b
c d
_
. Since T(A) =
_
1 3
1 1
_ _
a b
c d
_
=
_
a + 3c b + 3d
a +c b +d
_
, then T(A) =
_
0 0
0 0
_
if and only if a = b = c = d = 0 and hence T is one-to-one.
Since the matrix
_
1 3
1 1
_
is invertible, the mapping T is also onto.
Review Chapter 4 115
b. As in part (a), the mapping T is a linear transformation. Since the matrix
_
1 0
1 0
_
is not invertible,
the mapping T is neither one-to-one nor onto. In particular, N(T) =
__
a b
c d
_
a = b = 0
_
and R(T) =
__
a b
a b
_
a, b R
_
. The linear transformation S : R(T) R
2
dened by S
__
a b
a b
__
=
_
a
b
_
, is
one-to-one and onto and hence R(T) and R
2
are isomorphic.
5. a. Since T(v
1
) = v
2
= (0)v
1
+ (1)v
2
and T(v
2
) = v
1
= (1)v
1
+ (0)v
2
, then [T]
B
=
_
0 1
1 0
_
b. Simply switch the column vectors of the matrix found in part (a). That is,
[T]
B
B
= [ [T(v
1
)]
B
[T(v
2
)]
B
] = [ [v
2
]
B
[v
1
]
B
] =
_
1 0
0 1
_
.
6. a. [T]
B
=
_
0 1
1 0
_
, [S]
B
=
_
1 0
0 1
_
b. T
_
2
1
_
=
_
1
2
_
, S
_
2
3
_
=
_
2
3
_
c. [H]
B
=
_
1 0
0 1
_ _
0 1
1 0
_
=
_
1 1
1 0
_
d. H
_
2
1
_
=
_
1
2
_
e. N(T) = {0}, N(S) = {0} f. T
_
x
y
_
=
_
x
y
_
x = y = 0, S
_
x
y
_
=
_
x
y
_
y = 0
7. a. The normal vector for the plane is the cross product of the linearly independent vectors
_
_
1
0
0
_
_
and
_
_
0
1
1
_
_
, that is, n =
_
_
0
1
1
_
_
. Then using the formula given for the reection of a vec-
tor across a plane with normal n and the fact that B is the standard basis, we have that [T]
B
=
_
_
T
_
_
_
_
1
0
0
_
_
_
_
T
_
_
_
_
0
1
0
_
_
_
_
T
_
_
_
_
0
0
1
_
_
_
_
_
_
=
_
_
1 0 0
0 0 1
0 1 0
_
_
.
b. T
_
_
1
2
1
_
_
=
_
_
T
_
_
1
2
1
_
_
_
_
B
=
_
_
1 0 0
0 0 1
0 1 0
_
_
_
_
1
2
1
_
_
=
_
_
1
1
2
_
_
c. N(T) =
_
_
_
_
_
0
0
0
_
_
_
_
_
d. R(T) = R
3
e. [T
n
]
B
=
_
_
1 0 0
0 0 1
0 1 0
_
_
n
=
_
_
I, if n is even
_
_
1 0 0
0 0 1
0 1 0
_
_
, if n is odd
8. a. Since T(p(x) +cq(x)) =
_
1
0
(p(x) +cq(x))dx =
_
1
0
p(x)dx +c
_
1
0
q(x)dx = T(p(x)) +cT(q(x)), then T is
a linear transformation. b.
T(x
2
3x + 2) =
_
1
0
(x
2
3x + 2)dx =
_
x
3
3
3x
2
2
+ 2x
_
1
0
=
1
6
c. Since
_
1
0
(ax
2
+bx+c)dx =
a
3
+
b
2
+c, then N(T) =
_
ax
2
+bx +c |
a
3
+
b
2
+c = 0
_
. d. Since c =
a
3
b
2
,
then N(T) consists of all polynomials of the form ax
2
+ bx +
_
a
3
b
2
_
so a basis for the null space is
_
x
2
1
3
, x
1
2
_
. e. If r R, then
_
1
0
rdx = r and hence T is onto. f. [T]
B
B
=
_
1
1
2
1
3
g. Since
[T(x
2
3x + 2)]
B
=
_
1
1
2
1
3
[x
2
3x + 2]
B
=
_
1
1
2
1
3
_
_
2
3
1
_
_
=
1
6
and B is the standard basis, then
$T(x^2-3x+2)=\frac16$. h. To find $T(xe^x)$, use the product rule for differentiation to obtain $T(xe^x)=e^x+xe^x$.
Since $S(xe^x)=\int_0^x te^t\,dt$, the integration requires integration by parts. This gives $S(xe^x)=xe^x-e^x+1$. Then
(S T)(f) = S(f
(x)) =
_
x
0
f
u
2
4
+v
2
= 1
_
, which is an ellipse in
R
2
.
Chapter Test Chapter 4
1. F.
T (u +v) = T (u) +T(v)
since the second component of
the sum will contain a plus 4.
2. F. Since
$T(x+y)=2x+2y-1$
but
$T(x)+T(y)=2x+2y-2.$
3. T
4. T 5. T 6. F. Since
$T(u)=\frac13T(2u-v)+\frac13T(u+v)=\frac13\begin{bmatrix}1\\1\end{bmatrix}+\frac13\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}1/3\\2/3\end{bmatrix}.$
7. F. Since
N(T) =
__
2t
t
_
t R
_
8. F. If T is one-to-one, then the
set is linearly independent.
9. T
10. F. For example, T(1) = 0 =
T(2).
11. T 12. F. Since, for every k,
T
__
k
k
__
=
_
0
0
_
.
13. T 14. T 15. T
16. T 17. T 18. T
Chapter Test Chapter 4 117
19. T 20. T 21. F. The transformation is a
constant mapping.
22. F. Since
$\dim(N(T))+\dim(R(T))=\dim(\mathbb{R}^4)=4$, then
$\dim(R(T))=2.$
23. T 24. F. Since
[T]
B
=
_
_
2 1 1
1 0 0
1 1 0
_
_
,
then
([T]
B
)
1
=
_
_
0 1 0
0 1 1
1 1 1
_
_
.
25. T 26. F. If T is a linear transfor-
mation, then T(0) = 0.
27. F. It projects each vector
onto the xz-plane.
28. T 29. F. Dene T : R
n
R such
that T(e
i
) = i for i = 1, . . . , n
so that {T(e
1
), . . . , T(e
n
)} =
{1, 2, . . . , n}. This set is not a ba-
sis for R, but T is onto.
30. T
31. F. Let $T:\mathbb{R}^2\to\mathbb{R}^2$ be $T(v)=v$. If $B$ is the standard basis and $B'=\{e_2,e_1\}$, then $[T]_B^{B'}=\begin{bmatrix}0&1\\1&0\end{bmatrix}.$
32. F. Since N(T) consists of
only the zero vector, the null
space has dimension 0.
33. T
34. F. Any idempotent matrix is
in the null space.
35. F. Since T(p(x)) has degree
at most 2.
36. T.
37. T 38. T 39. F. Let A =
_
_
1 0
0 1
0 0
_
_
, so
T(v) = A
_
x
y
_
=
_
_
x
y
0
_
_
is one-
to-one.
40. T.
5
Eigenvalues and Eigenvectors
Exercise Set 5.1
An eigenvalue of the n × n matrix A is a number λ such that there is a nonzero vector v with Av = λv. So if λ and v are an eigenvalue-eigenvector pair, then the action of A on v is a scaling of the vector. Notice that if v is an eigenvector corresponding to the eigenvalue λ, then
A(cv) = cAv = c(λv) = λ(cv),
so A will have infinitely many eigenvectors corresponding to the eigenvalue λ. Also recall that an eigenvalue can be 0 (or a complex number), but eigenvectors are only nonzero vectors. An eigenspace is the set of all eigenvectors corresponding to an eigenvalue λ along with the zero vector, and is denoted by V_λ = {v ∈ Rⁿ | Av = λv}. Adding the zero vector makes V_λ a subspace. For example, if
A = [1 1 −2; 1 1 −2; 1 0 −1],
then expanding det(A − λI) along the third row gives
det(A − λI) = det[1 −2; 1−λ −2] + (−1 − λ) det[1−λ 1; 1 1−λ] = λ² − λ³.
Then det(A − λI) = 0 if and only if λ² − λ³ = λ²(1 − λ) = 0, so the eigenvalues are λ₁ = 0 and λ₂ = 1.
To find the eigenvectors corresponding to λ₁ = 0 solve Av = 0. Since [1 1 −2; 1 1 −2; 1 0 −1] reduces to [1 1 −2; 0 −1 1; 0 0 0],
the eigenvectors are of the form (t, t, t)ᵀ, for any t ≠ 0. Similarly, the eigenvectors of λ₂ = 1 have the form (2t, 2t, t)ᵀ, t ≠ 0.
The eigenspaces are V₀ = {t(1, 1, 1)ᵀ | t ∈ R} and V₁ = {t(2, 2, 1)ᵀ | t ∈ R}.
Notice that there are only two linearly independent eigenvectors of A: the algebraic multiplicity of λ₁ = 0 is 2, the algebraic multiplicity of λ₂ = 1 is 1, and the geometric multiplicities are both 1. For a 3 × 3 matrix other possibilities are:
The matrix can have three distinct real eigenvalues and three linearly independent eigenvectors.
The matrix can have two distinct real eigenvalues such that one has algebraic multiplicity 2 and the other algebraic multiplicity 1. But the matrix can still have three linearly independent eigenvectors, two from the eigenvalue of multiplicity 2 and one from the other.
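The eigenvalue/eigenvector computations above can be confirmed with numpy. In this sketch the matrix entries, signs included, are an assumption recovered from the surrounding row reductions:

```python
import numpy as np

# Example matrix (signs are an assumption recovered from the worked example):
# eigenvalues 0 (algebraic multiplicity 2) and 1, one eigenvector each.
A = np.array([[1.0, 1.0, -2.0],
              [1.0, 1.0, -2.0],
              [1.0, 0.0, -1.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Every returned pair should satisfy Av = lambda * v (loose tolerance
# because the eigenvalue 0 is defective, which limits numerical accuracy).
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v, atol=1e-6)

# Characteristic polynomial lambda^2 (1 - lambda): roots 0, 0, 1.
assert np.allclose(np.sort(eigvals.real), [0.0, 0.0, 1.0], atol=1e-6)
```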
Solutions to Exercises
1. Since the matrix equation [3 0; 1 3](0, 1)ᵀ = λ(0, 1)ᵀ is satisfied if and only if λ = 3, the eigenvalue corresponding to the eigenvector (0, 1)ᵀ is λ = 3.
2. Since the matrix equation [1 1; 0 2](1, 1)ᵀ = λ(1, 1)ᵀ is satisfied if and only if λ = 2, the eigenvalue corresponding to the eigenvector (1, 1)ᵀ is λ = 2.
3. The corresponding eigenvalue is λ = 0. 4. The corresponding eigenvalue is λ = 2.
5. The corresponding eigenvalue is λ = 1. 6. The corresponding eigenvalue is λ = 1.
7. a. The characteristic equation is det(A − λI) = 0, that is,
det(A − λI) = det[−2−λ 2; 3 −3−λ] = (−2 − λ)(−3 − λ) − 6 = λ² + 5λ = 0.
b. Since the eigenvalues are the solutions to the characteristic equation λ² + 5λ = λ(λ + 5) = 0, the eigenvalues are λ₁ = 0 and λ₂ = −5. c. The corresponding eigenvectors are found by solving, respectively, [−2 2; 3 −3]v = 0 and [−2 2; 3 −3]v = −5v. Hence the eigenvectors are v₁ = (1, 1)ᵀ and v₂ = (2, −3)ᵀ, respectively.
d. Av₁ = [−2 2; 3 −3](1, 1)ᵀ = (0, 0)ᵀ = 0·(1, 1)ᵀ, and Av₂ = [−2 2; 3 −3](2, −3)ᵀ = (−10, 15)ᵀ = (−5)(2, −3)ᵀ
8. a. λ² + 4λ + 3 = 0 b. λ₁ = −1, λ₂ = −3 c. v₁ = (1, 1)ᵀ, v₂ = (1, −1)ᵀ d. [−2 1; 1 −2](1, 1)ᵀ = (−1, −1)ᵀ = (−1)(1, 1)ᵀ, [−2 1; 1 −2](1, −1)ᵀ = (−3, 3)ᵀ = (−3)(1, −1)ᵀ
9. a. (λ + 1)² = 0 b. λ₁ = −1 c. v₁ = (1, 0)ᵀ d. [−1 2; 0 −1](1, 0)ᵀ = (−1, 0)ᵀ = (−1)(1, 0)ᵀ
10. a. λ² + 3λ + 2 = 0 b. λ₁ = −1, λ₂ = −2 c. v₁ = (2, 1)ᵀ, v₂ = (1, 1)ᵀ d. [0 −2; 1 −3](2, 1)ᵀ = (−1)(2, 1)ᵀ, [0 −2; 1 −3](1, 1)ᵀ = (−2)(1, 1)ᵀ
11. a. The characteristic equation det(A − λI) = 0 is
det[−1−λ 0 1; 0 1−λ 0; 0 2 −1−λ] = 0.
Expanding down column one, we have that
0 = (−1 − λ) det[1−λ 0; 2 −1−λ] = (1 + λ)²(1 − λ).
b. λ₁ = −1, λ₂ = 1 c. v₁ = (1, 0, 0)ᵀ, v₂ = (1, 2, 2)ᵀ d. [−1 0 1; 0 1 0; 0 2 −1](1, 0, 0)ᵀ = (−1, 0, 0)ᵀ = (−1)(1, 0, 0)ᵀ and [−1 0 1; 0 1 0; 0 2 −1](1, 2, 2)ᵀ = (1, 2, 2)ᵀ = (1)(1, 2, 2)ᵀ
12. a. λ(λ + 1)(λ − 1) = 0 b. λ₁ = 0, λ₂ = −1, λ₃ = 1 c. v₁ = (1, 0, 0)ᵀ, v₂ = (2, 1, 0)ᵀ, v₃ = (2, 1, 2)ᵀ
13. a. (λ − 2)(λ − 1)² = 0 b. λ₁ = 2, λ₂ = 1 c. v₁ = (1, 0, 0)ᵀ, v₂ = (3, 1, 1)ᵀ d. [2 −1 −2; 0 2 −1; 0 1 0](1, 0, 0)ᵀ = (2, 0, 0)ᵀ = (2)(1, 0, 0)ᵀ and [2 −1 −2; 0 2 −1; 0 1 0](3, 1, 1)ᵀ = (3, 1, 1)ᵀ = (1)(3, 1, 1)ᵀ
14. a. (λ − 1)³ = 0 b. λ = 1 c. v₁ = (0, 1, 1)ᵀ, v₂ = (1, 0, 0)ᵀ
15. a. (λ + 1)(λ − 2)(λ + 2)(λ − 4) = 0 b. λ₁ = −1, λ₂ = 2, λ₃ = −2, λ₄ = 4
c. v₁ = (1, 0, 0, 0)ᵀ, v₂ = (0, 1, 0, 0)ᵀ, v₃ = (0, 0, 1, 0)ᵀ, v₄ = (0, 0, 0, 1)ᵀ
d. For the eigenvalue λ = −1, the verification is
[−1 0 0 0; 0 2 0 0; 0 0 −2 0; 0 0 0 4](1, 0, 0, 0)ᵀ = (−1, 0, 0, 0)ᵀ = (−1)(1, 0, 0, 0)ᵀ. The other cases are similar.
16. a. (λ + 1)(λ − 1)(λ − 2)(λ − 3) = 0 b. λ₁ = −1, λ₂ = 1, λ₃ = 2, λ₄ = 3 c. v₁ = (1, 1, 0, 2)ᵀ, v₂ = (1, 1, 0, 0)ᵀ, v₃ = (7, 2, 1, 0)ᵀ, v₄ = (1, 0, 0, 0)ᵀ
17. Let A = [x y; z w] and assume the characteristic equation is given by λ² + bλ + c = 0. The characteristic equation det(A − λI) = 0 is given by (x − λ)(w − λ) − zy = 0 and simplifies to λ² − (x + w)λ + (xw − yz) = 0. Hence, b = −(x + w) = −tr(A) and c = xw − yz = det(A).
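For 2 × 2 matrices this identity gives the characteristic polynomial without any symbolic algebra. A sketch, using the Exercise 7 matrix (entries as reconstructed there, an assumption) as a test case:

```python
import numpy as np

def char_poly_2x2(A):
    """Coefficients (1, b, c) of lambda^2 + b*lambda + c,
    using Exercise 17: b = -tr(A), c = det(A)."""
    b = -float(np.trace(A))
    c = float(np.linalg.det(A))
    return 1.0, b, c

# Matrix assumed from Exercise 7: characteristic polynomial lambda^2 + 5*lambda.
A = np.array([[-2.0, 2.0], [3.0, -3.0]])
_, b, c = char_poly_2x2(A)
assert np.isclose(b, 5.0) and np.isclose(c, 0.0)

# Its roots coincide with the eigenvalues numpy finds (0 and -5).
roots = np.roots([1.0, b, c])
assert np.allclose(sorted(roots.real), sorted(np.linalg.eigvals(A).real))
```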
18. Since A is invertible, λ ≠ 0. (See Exercise 19.) Now since Av = λv, then A⁻¹Av = A⁻¹(λv) = λA⁻¹v and hence A⁻¹v = (1/λ)v.
19. Suppose A is not invertible. Then the homogeneous equation Ax = 0 has a nontrivial solution x₀. Since Ax₀ = 0 = 0x₀, then x₀ is an eigenvector of A corresponding to the eigenvalue λ = 0. Conversely, suppose that λ = 0 is an eigenvalue of A. Then there exists a nonzero vector x₀ such that Ax₀ = 0, so A is not invertible.
20. If λ is an eigenvalue for T with geometric multiplicity n, then corresponding to λ there are n linearly independent eigenvectors v₁, . . . , vₙ, and hence they are a basis for V. If v is a vector in V there are scalars c₁, . . . , cₙ such that v = c₁v₁ + ⋯ + cₙvₙ. If v is also a nonzero vector, then
T(v) = T(c₁v₁ + ⋯ + cₙvₙ) = c₁T(v₁) + ⋯ + cₙT(vₙ) = c₁λv₁ + ⋯ + cₙλvₙ = λv
and hence, v is an eigenvector.
21. Let A be an idempotent matrix, that is, A satisfies A² = A. Also, let λ be an eigenvalue of A with corresponding eigenvector v, so that Av = λv. We also have that Av = A²v = A(Av) = A(λv) = λAv = λ²v. The two equations Av = λv and Av = λ²v give λ²v = λv, so that (λ² − λ)v = 0. But v is an eigenvector, so that v ≠ 0. Hence, λ(λ − 1) = 0, so that the only eigenvalues are either λ = 0 or λ = 1.
22. Since (A − λI)ᵀ = Aᵀ − λI, then det(Aᵀ − λI) = det((A − λI)ᵀ) = det(A − λI), so Aᵀ and A have the same characteristic equations and hence, the same eigenvalues. For the second part of the question, let A = [1 1; 0 0] and B = [1 0; 1 0]. Then the characteristic equation for both A and B is λ(1 − λ) = 0. But the eigenvectors of A are (1, −1)ᵀ, (1, 0)ᵀ and the eigenvectors of B are (1, 1)ᵀ, (0, 1)ᵀ.
23. Let A be such that Aⁿ = 0 for some n, and let λ be an eigenvalue of A with corresponding eigenvector v, so that Av = λv. Then A²v = λAv = λ²v. Continuing in this way we see that Aⁿv = λⁿv. Since Aⁿ = 0, then λⁿv = 0. Since v ≠ 0, then λⁿ = 0 and hence, λ = 0.
24. a. T(e) = [1 0; 0 −1][0 1; 0 0] − [0 1; 0 0][1 0; 0 −1] = [0 2; 0 0] = 2e
b. T(f) = [1 0; 0 −1][0 0; 1 0] − [0 0; 1 0][1 0; 0 −1] = [0 0; −2 0] = −2f
25. Since A is invertible, the inverse A⁻¹ exists. Notice that for a matrix C, we can use the multiplicative property of the determinant to show that
det(A⁻¹CA) = det(A⁻¹) det(C) det(A) = det(A⁻¹) det(A) det(C) = det(A⁻¹A) det(C) = det(I) det(C) = det(C).
Then det(AB − λI) = det(A⁻¹(AB − λI)A) = det(BA − λI), so that AB and BA have the same characteristic equation and hence, the same eigenvalues.
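A numerical spot-check of this fact (random 3 × 3 matrices are generically invertible, so the hypothesis of Exercise 25 holds for the samples below):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# np.poly builds the characteristic polynomial from the eigenvalues,
# which sidesteps ordering issues when comparing the two spectra.
pAB = np.poly(A @ B)
pBA = np.poly(B @ A)
assert np.allclose(pAB, pBA)
```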
26. First notice that tr(AB − BA) = 0 and the trace of a matrix is the sum of the eigenvalues. Since the only eigenvalue of the identity matrix I is λ = 1, then AB − BA cannot equal I.
27. Since the matrix is triangular, the characteristic equation is given by
det(A − λI) = (a₁₁ − λ)(a₂₂ − λ) ⋯ (aₙₙ − λ) = 0,
so that the eigenvalues are the diagonal entries.
28. Suppose λ is an eigenvalue of A, so there is a nonzero vector v such that Av = λv. So the result holds for the base case n = 1. For the inductive hypothesis assume that λⁿ is an eigenvalue of Aⁿ with Aⁿv = λⁿv. By the inductive hypothesis, we have
Aⁿ⁺¹v = A(Aⁿv) = A(λⁿv) = λⁿAv = λⁿ(λv) = λⁿ⁺¹v.
If v is an eigenvector of A, then v is an eigenvector of Aⁿ for all n.
29. Let λ be an eigenvalue of C = B⁻¹AB with corresponding eigenvector v. Since Cv = λv, then B⁻¹ABv = λv. Multiplying both sides of the previous equation on the left by B gives A(Bv) = λ(Bv). Therefore, Bv is an eigenvector of A corresponding to λ.
30. Let v be a vector in S. Then there are scalars c₁, . . . , cₘ such that v = c₁v₁ + ⋯ + cₘvₘ. Then
Av = c₁Av₁ + ⋯ + cₘAvₘ = c₁λ₁v₁ + ⋯ + cₘλₘvₘ,
which is in S.
31. The operator that reflects a vector across the x-axis is T((x, y)ᵀ) = (x, −y)ᵀ. Then T((x, y)ᵀ) = λ(x, y)ᵀ if and only if x(λ − 1) = 0 and y(λ + 1) = 0. If λ = 1, then y = 0, and if λ = −1, then x = 0. Hence, the eigenvalues are λ = 1 and λ = −1 with corresponding eigenvectors (1, 0)ᵀ and (0, 1)ᵀ, respectively.
32. We have that
T((x, y)ᵀ) = λ(x, y)ᵀ ⟺ (−y, −x)ᵀ = λ(x, y)ᵀ ⟺ {λx + y = 0, x + λy = 0} ⟺ λ = ±1.
Two linearly independent eigenvectors are (1, −1)ᵀ corresponding to λ₁ = 1, and (1, 1)ᵀ corresponding to λ₂ = −1.
33. If θ ≠ 0 and θ ≠ π, then T can only be described as a rotation. Hence T((x, y)ᵀ) cannot be expressed by scalar multiplication, as this only performs a contraction or a dilation. When θ = 0, then T is the identity map T((x, y)ᵀ) = (x, y)ᵀ. In this case every nonzero vector in R² is an eigenvector with corresponding eigenvalue equal to 1. If θ = π, then T((x, y)ᵀ) = [−1 0; 0 −1](x, y)ᵀ = (−x, −y)ᵀ. In this case every nonzero vector in R² is an eigenvector with eigenvalue equal to −1.
34. a. Since T(e^{kx}) = k²e^{kx} − 2ke^{kx} − 3e^{kx} = (k² − 2k − 3)e^{kx}, the function e^{kx} is an eigenfunction. b. The eigenvalue corresponding to e^{kx} is k² − 2k − 3. c. f₁(x) = e^{3x}, f₂(x) = e^{−x}
35. a. Notice that, for example, [T(x − 1)]_B = [x² − x]_B, and since
−(1/2)(x − 1) − (1/2)(x + 1) + x² = x² − x,
then [x² − x]_B = (−1/2, −1/2, 1)ᵀ. After computing [T(x + 1)]_B and [T(x²)]_B we see that the matrix representation of the operator is given by
[T]_B = [ [T(x−1)]_B [T(x+1)]_B [T(x²)]_B ] = [ [x²−x]_B [x²+x]_B [x²]_B ] = [−1/2 1/2 0; −1/2 1/2 0; 1 1 1].
b. Similar to part (a), [T]_{B′} = [1 1 0; −1 −1 0; 1 0 1].
c. Since x³ − x² is the characteristic polynomial for both the matrices in part (a) and in (b), the eigenvalues for the two matrices are the same.
5.2 Diagonalization 123
Exercise Set 5.2
Diagonalization of a matrix is another type of factorization. A matrix A is diagonalizable if there is an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹. An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In this case there is also a process for finding the matrices P and D:
The column vectors of P are the eigenvectors of A.
The diagonal entries of D are the corresponding eigenvalues, placed on the diagonal of D in the same order as the eigenvectors in P. If an eigenvalue has multiplicity greater than 1, then it is repeated that many times on the diagonal.
Three Examples
A = [0 1 2; 0 1 2; 1 1 1]. Eigenvalues: 3, 0, −1. Eigenvectors: (1, 1, 1)ᵀ, (1, −2, 1)ᵀ, (1, 1, −1)ᵀ. Diagonalizable: P = [1 1 1; 1 −2 1; 1 1 −1], D = [3 0 0; 0 0 0; 0 0 −1].
A = [−1 2 0; −1 2 0; −1 1 1]. Eigenvalues: 0, and 1 with multiplicity 2. Eigenvectors: (2, 1, 1)ᵀ, (1, 1, 0)ᵀ, (0, 0, 1)ᵀ. Diagonalizable: P = [2 1 0; 1 1 0; 1 0 1], D = [0 0 0; 0 1 0; 0 0 1].
A = [−1 0 0; 2 0 0; −1 1 0]. Eigenvalues: 0 with multiplicity 2, and −1. Eigenvectors: (0, 0, 1)ᵀ, (1, −2, 3)ᵀ. Not diagonalizable.
Other useful results given in Section 5.2 are:
If an n × n matrix A has n distinct eigenvalues, then A is diagonalizable.
Similar matrices have the same eigenvalues.
If T : V → V is a linear operator and B₁ and B₂ are bases for V, then [T]_{B₁} and [T]_{B₂} have the same eigenvalues.
An n × n matrix A is diagonalizable if and only if the algebraic and geometric multiplicities add up correctly. That is, suppose A has eigenvalues λ₁, . . . , λ_k with algebraic multiplicities d₁, . . . , d_k. The matrix A is diagonalizable if and only if
d₁ + d₂ + ⋯ + d_k = dim(V_{λ₁}) + dim(V_{λ₂}) + ⋯ + dim(V_{λ_k}) = n.
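These facts are easy to exercise numerically. A minimal sketch (using a made-up triangular matrix, not one of the examples above) builds P and D from numpy's eigendecomposition and checks the factorization A = PDP⁻¹:

```python
import numpy as np

# Our own demo matrix (an assumption, not from the text): triangular,
# so the eigenvalues 2, 3, 1 can be read off the diagonal.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)            # eigenvalues in matching order

# Three distinct eigenvalues, so A is diagonalizable: A = P D P^{-1}.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
assert np.allclose(sorted(eigvals.real), [1.0, 2.0, 3.0])
```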
Solutions to Exercises
1. P⁻¹AP = [1 0; 0 3]
2. P⁻¹AP = [4 0; 0 2]
3. P⁻¹AP = [0 0 0; 0 2 0; 0 0 1]
4. P⁻¹AP = [3 0 0; 0 2 0; 0 0 2]
5. The eigenvalues for the matrix A are 2 and 1. The eigenvectors are not necessary in this case, since A is a 2 × 2 matrix with two distinct eigenvalues. Hence, A is diagonalizable.
6. The eigenvalues for the matrix A are 2 + √6 and 2 − √6. … −√2, √2, and 0. Hence A is diagonalizable.
17. The matrix A has eigenvalues 1, 2, and 0 with multiplicity 2. In addition there are four linearly independent eigenvectors
(0, 1, 1, 0)ᵀ, (0, 1, 2, 3)ᵀ, (0, 1, 0, 1)ᵀ, and (1, 0, 1, 0)ᵀ
corresponding to the distinct eigenvalues. Hence, A is diagonalizable.
18. The matrix A has two eigenvalues, 2 and 0 with multiplicity 3. Since there are only two corresponding linearly independent eigenvectors (1, 1, 1, 1)ᵀ and (1, −1, 1, −1)ᵀ, the matrix is not diagonalizable.
19. To diagonalize the matrix we first find the eigenvalues. Since the characteristic equation is … the eigenvalues are √5 and −√5, with corresponding eigenvectors (√5 + 2, 1)ᵀ and (−√5 + 2, 1)ᵀ, respectively. If P = [√5+2 −√5+2; 1 1], then A = P[√5 0; 0 −√5]P⁻¹.
21. The eigenvalues of A are 1, −1, and 0 with corresponding linearly independent eigenvectors (0, 1, 1)ᵀ, (2, 1, 3)ᵀ, and (0, 1, 2)ᵀ, respectively. If P = [0 2 0; 1 1 1; 1 3 2], then P⁻¹AP = [1 0 0; 0 −1 0; 0 0 0].
22. The eigenvalues of A are 0, 1, and 2 with corresponding linearly independent eigenvectors (1, 0, 0)ᵀ, (4, 2, 1)ᵀ, and (1, 2, 0)ᵀ, respectively. If P = [1 4 1; 0 2 2; 0 1 0], then A = P[0 0 0; 0 1 0; 0 0 2]P⁻¹.
23. The eigenvalues of the matrix A are λ₁ = −1 and λ₂ = 1 of multiplicity two. But there are three linearly independent eigenvectors: (2, 1, 0)ᵀ corresponding to λ₁, and (0, 1, 0)ᵀ, (0, 0, 1)ᵀ corresponding to λ₂. If
P = [2 0 0; 1 1 0; 0 0 1], then P⁻¹AP = [−1 0 0; 0 1 0; 0 0 1].
24. The eigenvalues of A are 1 with multiplicity 2, and 0, with corresponding linearly independent eigenvectors (0, 0, 1)ᵀ, (1, 0, 0)ᵀ, and (0, 1, 1)ᵀ, respectively. If P = [0 1 0; 0 0 1; 1 0 1], then A = P[1 0 0; 0 1 0; 0 0 0]P⁻¹.
25. The matrix A has three eigenvalues: λ₁ = 1 of multiplicity 2, λ₂ = 0, and λ₃ = 2. There are four linearly independent eigenvectors corresponding to the eigenvalues, given as the column vectors in
P = [1 0 1 1; 0 1 0 0; 0 0 1 1; 1 0 0 0]. Then P⁻¹AP = [1 0 0 0; 0 1 0 0; 0 0 0 0; 0 0 0 2].
26. The eigenvalues of A are 3, 0 with multiplicity 2, and 1, with corresponding linearly independent eigenvectors (1, 0, 1, 1)ᵀ, (1, 0, 0, 1)ᵀ, (1, 0, 1, 0)ᵀ and (1, 1, 0, 0)ᵀ, respectively. If P = [1 1 1 1; 0 0 0 1; 1 0 1 0; 1 1 0 0], then
A = P[3 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 1]P⁻¹.
27. The proof is by induction on the power k. The base case is k = 1, which holds since D = P⁻¹AP gives A = PDP⁻¹. For the inductive hypothesis assume that Aᵏ = PDᵏP⁻¹ for a natural number k. We need to show that the result holds for the next positive integer k + 1. Since Aᵏ⁺¹ = AᵏA, by the inductive hypothesis, we have that
Aᵏ⁺¹ = AᵏA = (PDᵏP⁻¹)A = (PDᵏP⁻¹)(PDP⁻¹) = (PDᵏ)(P⁻¹P)(DP⁻¹) = PDᵏ⁺¹P⁻¹.
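Exercise 27's identity makes matrix powers cheap once A is diagonalized. The sketch below applies it to the matrix assumed in Exercise 28 (A = [2 1; 2 1], eigenvalues 0 and 3):

```python
import numpy as np

def power_by_diagonalization(A, k):
    """A^k computed as P D^k P^{-1}; assumes A has a full set of eigenvectors."""
    eigvals, P = np.linalg.eig(A)
    return (P @ np.diag(eigvals ** k) @ np.linalg.inv(P)).real

# Matrix assumed from Exercise 28: eigenvalues 0 and 3, so D^6 = diag(0, 729).
A = np.array([[2.0, 1.0], [2.0, 1.0]])
A6 = power_by_diagonalization(A, 6)
assert np.allclose(A6, [[486.0, 243.0], [486.0, 243.0]])
assert np.allclose(A6, np.linalg.matrix_power(A, 6))
```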
28. The eigenvalues of A are 0 and 3 with corresponding eigenvectors (1, −2)ᵀ and (1, 1)ᵀ, respectively. So
A = [1 1; −2 1][0 0; 0 3][1/3 −1/3; 2/3 1/3] and hence, A⁶ = PD⁶P⁻¹ = [486 243; 486 243].
29. The eigenvalues of the matrix A are 0 and 1 of multiplicity 2, with corresponding linearly independent eigenvectors the column vectors of P = [1 0 1; 1 −2 2; 1 1 0]. If D = [0 0 0; 0 1 0; 0 0 1] is the diagonal matrix with diagonal entries the eigenvalues of A, then A = PDP⁻¹. Notice that the eigenvalues on the diagonal have the same order as the corresponding eigenvectors in P, with the eigenvalue 1 repeated two times since the algebraic multiplicity is 2. Since Dᵏ = diag(0ᵏ, 1ᵏ, 1ᵏ) = D, then for any k ≥ 1, Aᵏ = PDᵏP⁻¹ = PDP⁻¹ = [3 −1 −2; 2 0 −2; 2 −1 −1].
30. Since A = PDP⁻¹ where D is a diagonal matrix, then
Aᵀ = (PDP⁻¹)ᵀ = (P⁻¹)ᵀDᵀPᵀ = (Pᵀ)⁻¹DPᵀ
and hence, Aᵀ is diagonalizable.
31. Since A is diagonalizable there is an invertible P and diagonal D such that A = PDP⁻¹, so that D = P⁻¹AP. Since B is similar to A there is an invertible Q such that B = Q⁻¹AQ, so that A = QBQ⁻¹. Then
D = P⁻¹QBQ⁻¹P = (Q⁻¹P)⁻¹B(Q⁻¹P)
and hence, B is diagonalizable.
32. Suppose A⁻¹ exists and A is diagonalizable with A = PDP⁻¹. Then
A⁻¹ = (PDP⁻¹)⁻¹ = PD⁻¹P⁻¹.
Since D⁻¹ is a diagonal matrix, then A⁻¹ is diagonalizable. For the second part of the question, let A = [0 1; 0 1], so that A is not invertible. But A is diagonalizable since the eigenvalues are 0 and 1 with corresponding linearly independent eigenvectors (1, 0)ᵀ and (1, 1)ᵀ, respectively, and hence, A = [1 1; 0 1][0 0; 0 1][1 −1; 0 1].
33. If A is diagonalizable with an eigenvalue λ of multiplicity n, then A = P(λI)P⁻¹ = (λI)PP⁻¹ = λI. Conversely, if A = λI, then A is a diagonal matrix.
34. Suppose A is a nonzero n × n matrix and there is a k > 0 such that Aᵏ = 0. Suppose A is diagonalizable with A = PDP⁻¹. Then 0 = Aᵏ = (PDP⁻¹)ᵏ = PDᵏP⁻¹ and hence, Dᵏ = 0. But this means D = 0, which implies A = 0, a contradiction.
35. a. [T]_{B₁} = [0 1 0; 0 0 2; 0 0 0] b. [T]_{B₂} = [1 1 2; −1 −1 0; 0 0 0] c. The only eigenvalue of A and B is λ = 0 of multiplicity 3. d. Neither matrix has three linearly independent eigenvectors. For example, for [T]_{B₂}, the eigenvector corresponding to λ = 0 is (1, −1, 0)ᵀ. Thus, the operator T is not diagonalizable.
36. Let B = {sin x, cos x}. Since the matrix
[T]_B = [ [T(sin x)]_B [T(cos x)]_B ] = [ [cos x]_B [−sin x]_B ] = [0 −1; 1 0]
has as eigenvalues the complex numbers i and −i with corresponding eigenvectors (1, −i)ᵀ and (1, i)ᵀ, then T is diagonalizable over the complex numbers but not over the real numbers.
37. To show that T is not diagonalizable it suffices to show that [T]_B is not diagonalizable for any basis B of R³. Let B be the standard basis for R³, so that
[T]_B = [ T(e₁) T(e₂) T(e₃) ] = [2 2 −2; 1 2 −1; 1 1 0].
The eigenvalues of [T]_B are λ₁ = 1 with multiplicity 2, and λ₂ = 2. The corresponding eigenvectors are (0, 1, 1)ᵀ and (1, 1, 1)ᵀ, respectively. Since there are only two linearly independent eigenvectors, [T]_B, and hence T, is not diagonalizable.
38. Let B be the standard basis for R³. Since the matrix
[T]_B = [ T(e₁) T(e₂) T(e₃) ] = [4 2 4; 4 2 4; 0 0 4]
has three distinct eigenvalues 0, 4, and 6, then T is diagonalizable.
39. Since A and B are matrix representations for the same linear operator, they are similar. Let A = Q⁻¹BQ. The matrix A is diagonalizable if and only if D = P⁻¹AP for some invertible matrix P and diagonal matrix D. Then
D = P⁻¹(Q⁻¹BQ)P = (QP)⁻¹B(QP),
so that B is diagonalizable. The proof of the converse is identical.
Exercise Set 5.3
1. The strategy is to uncouple the system of differential equations. Writing the system in matrix form, we have that
y′ = Ay = [1 1; 0 2]y.
The next step is to diagonalize the matrix A. Since A is triangular, the eigenvalues of A are the diagonal entries 1 and 2, with corresponding eigenvectors (1, 0)ᵀ and (1, 1)ᵀ, respectively. So A = PDP⁻¹, where P = [1 1; 0 1], P⁻¹ = [1 −1; 0 1], and D = [1 0; 0 2]. The related uncoupled system is w′ = P⁻¹APw = [1 0; 0 2]w. The general solution to the uncoupled system is w(t) = [e^t 0; 0 e^{2t}]w(0).
Finally, the general solution to the original system is given by y(t) = P[e^t 0; 0 e^{2t}]P⁻¹y(0). That is,
y₁(t) = (y₁(0) − y₂(0))e^t + y₂(0)e^{2t}, y₂(t) = y₂(0)e^{2t}.
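The uncoupling procedure translates directly to code. A sketch for y′ = Ay with A = [1 1; 0 2] and an arbitrary initial condition (the value of y(0) here is our own assumption):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 2.0]])
y0 = np.array([3.0, 5.0])   # arbitrary initial condition (an assumption)

eigvals, P = np.linalg.eig(A)
Pinv = np.linalg.inv(P)

def y(t):
    # y(t) = P exp(Dt) P^{-1} y(0): solve the uncoupled system, map back.
    return P @ np.diag(np.exp(eigvals * t)) @ Pinv @ y0

# Compare against the closed form for this system:
# y1 = (y1(0) - y2(0)) e^t + y2(0) e^{2t},  y2 = y2(0) e^{2t}
t = 0.7
expected = np.array([(y0[0] - y0[1]) * np.exp(t) + y0[1] * np.exp(2 * t),
                     y0[1] * np.exp(2 * t)])
assert np.allclose(y(t), expected)
```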
2. Let A = [1 2; 1 0]. Then the eigenvalues of A are −1 and 2 with corresponding eigenvectors (1, −1)ᵀ and (2, 1)ᵀ, respectively. So A = PDP⁻¹ = [1 2; −1 1][−1 0; 0 2][1/3 −2/3; 1/3 1/3] and hence, w′(t) = P⁻¹APw = [−1 0; 0 2]w. Then the general solution to the uncoupled system is w(t) = [e^{−t} 0; 0 e^{2t}]w(0) and hence y(t) = P[e^{−t} 0; 0 e^{2t}]P⁻¹y(0), that is,
y₁(t) = (1/3)(y₁(0) − 2y₂(0))e^{−t} + (2/3)(y₁(0) + y₂(0))e^{2t},
y₂(t) = −(1/3)(y₁(0) − 2y₂(0))e^{−t} + (1/3)(y₁(0) + y₂(0))e^{2t}.
3. Using the same approach as in Exercise (1), we let A = [1 −3; −3 1]. The eigenvalues of A are 4 and −2 with corresponding eigenvectors (1, −1)ᵀ and (1, 1)ᵀ, respectively, so that
A = [1 1; −1 1][4 0; 0 −2]·(1/2)[1 −1; 1 1]. So the general solution is given by y(t) = P[e^{4t} 0; 0 e^{−2t}]P⁻¹y(0), that is,
y₁(t) = (1/2)(y₁(0) − y₂(0))e^{4t} + (1/2)(y₁(0) + y₂(0))e^{−2t},
y₂(t) = −(1/2)(y₁(0) − y₂(0))e^{4t} + (1/2)(y₁(0) + y₂(0))e^{−2t}.
4. Let A = [1 −1; −1 1]. Then the eigenvalues of A are 0 and 2 with corresponding eigenvectors (1, 1)ᵀ and (1, −1)ᵀ, respectively. So A = PDP⁻¹ = [1 1; 1 −1][0 0; 0 2][1/2 1/2; 1/2 −1/2] and hence, w′(t) = P⁻¹APw = [0 0; 0 2]w. Then the general solution to the uncoupled system is w(t) = [1 0; 0 e^{2t}]w(0) and hence y(t) = P[1 0; 0 e^{2t}]P⁻¹y(0), that is,
y₁(t) = (1/2)(y₁(0) + y₂(0)) + (1/2)(y₁(0) − y₂(0))e^{2t},
y₂(t) = (1/2)(y₁(0) + y₂(0)) − (1/2)(y₁(0) − y₂(0))e^{2t}.
5.3 Application: Systems of Linear Differential Equations 129
5. Let A = [−4 −3 −3; 2 3 2; 4 2 3]. The eigenvalues of A are −1, 1 and 2, so that
A = PDP⁻¹ = [1 0 −1; 0 −1 2; −1 1 0][−1 0 0; 0 1 0; 0 0 2][2 1 1; 2 1 2; 1 1 1], where the column vectors of P are the eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues. Then
y(t) = P[e^{−t} 0 0; 0 e^{t} 0; 0 0 e^{2t}]P⁻¹y(0), that is
y₁(t) = (2y₁(0) + y₂(0) + y₃(0))e^{−t} − (y₁(0) + y₂(0) + y₃(0))e^{2t}
y₂(t) = −(2y₁(0) + y₂(0) + 2y₃(0))e^{t} + 2(y₁(0) + y₂(0) + y₃(0))e^{2t}
y₃(t) = −(2y₁(0) + y₂(0) + y₃(0))e^{−t} + (2y₁(0) + y₂(0) + 2y₃(0))e^{t}.
6. Let A = [3 4 4; 7 11 13; 5 8 10]. The eigenvalues of A are 2, −1 and 1, so that
A = PDP⁻¹ = [0 2 1; 1 1 2; 1 2 1][2 0 0; 0 −1 0; 0 0 1][3 4 5; 1 1 1; 1 2 2], where the column vectors of P are the eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues. Then
y(t) = P[e^{2t} 0 0; 0 e^{−t} 0; 0 0 e^{t}]P⁻¹y(0), that is
y₁(t) = (2y₁(0) − 2y₂(0) − 2y₃(0))e^{t} − (y₁(0) − 2y₂(0) − 2y₃(0))e^{2t}
y₂(t) = (3y₁(0) − 4y₂(0) − 5y₃(0))e^{−t} + (−y₁(0) + y₂(0) + y₃(0))e^{t} + (−2y₁(0) + 4y₂(0) + 4y₃(0))e^{2t}
y₃(t) = (−3y₁(0) + 4y₂(0) + 5y₃(0))e^{−t} + (2y₁(0) − 2y₂(0) − 2y₃(0))e^{t} + (y₁(0) − 2y₂(0) − 2y₃(0))e^{2t}.
7. First find the general solution and then apply the initial conditions. The eigenvalues of A = [1 0; 2 −1] are −1 and 1, with corresponding eigenvectors (0, 1)ᵀ and (1, 1)ᵀ, respectively. So the general solution is given by y(t) = P[e^{−t} 0; 0 e^{t}]P⁻¹y(0), where the column vectors of P are the eigenvectors of A. Then y₁(t) = e^t y₁(0), y₂(t) = e^{−t}(y₂(0) − y₁(0)) + e^t y₁(0). Since y₁(0) = 1 and y₂(0) = 1, the solution to the initial value problem is y₁(t) = e^t, y₂(t) = e^t.
8. Let A = [5 −12 20; 4 −9 16; 2 −4 7]. The eigenvalues of A are 1, 3 and −1, so that
A = PDP⁻¹ = [1 2 2; 2 2 1; 1 1 0][1 0 0; 0 3 0; 0 0 −1][−1 2 −2; 1 −2 3; 0 1 −2], where the column vectors of P are the eigenvectors of A and the diagonal entries of D are the corresponding eigenvalues. Then the general solution is y(t) = P[e^{t} 0 0; 0 e^{3t} 0; 0 0 e^{−t}]P⁻¹y(0), that is
y₁(t) = (−y₁(0) + 2y₂(0) − 2y₃(0))e^{t} + (2y₁(0) − 4y₂(0) + 6y₃(0))e^{3t} + (2y₂(0) − 4y₃(0))e^{−t}
y₂(t) = (−2y₁(0) + 4y₂(0) − 4y₃(0))e^{t} + (2y₁(0) − 4y₂(0) + 6y₃(0))e^{3t} + (y₂(0) − 2y₃(0))e^{−t}
y₃(t) = (−y₁(0) + 2y₂(0) − 2y₃(0))e^{t} + (y₁(0) − 2y₂(0) + 3y₃(0))e^{3t}.
The solution to the initial value problem is
y₁(t) = −4e^t + 8e^{3t} − 2e^{−t}, y₂(t) = −8e^t + 8e^{3t} − e^{−t}, y₃(t) = −4e^t + 4e^{3t}.
9. a. Once we construct a system of differential equations that describes the problem, the solution is found using the same techniques. Let y₁(t) and y₂(t) denote the amount of salt in each tank after t minutes, so that y₁′(t) and y₂′(t) are the rates of change for the amount of salt in each tank. The system satisfies the balance law that the rate of change of salt in the tank is equal to the rate in minus the rate out. This gives the initial value problem
y₁′(t) = −(1/60)y₁ + (1/120)y₂, y₂′(t) = (1/60)y₁ − (1/120)y₂, y₁(0) = 12, y₂(0) = 0.
b. The solution to the system is y₁(t) = 4 + 8e^{−t/40}, y₂(t) = 8 − 8e^{−t/40}.
c. Since the exponentials in both y₁(t) and y₂(t) go to 0 as t goes to infinity, then lim_{t→∞} y₁(t) = 4 and lim_{t→∞} y₂(t) = 8. This makes sense since it says that eventually the 12 pounds of salt will be evenly distributed in a ratio of one to two between the two tanks.
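Part (b)'s closed form can be checked against the balance-law system numerically (the rate constants 1/60 and 1/120 are taken from the system in part (a)):

```python
import numpy as np

# Closed-form solution from part (b) of the tank-mixing problem.
def y1(t): return 4.0 + 8.0 * np.exp(-t / 40.0)
def y2(t): return 8.0 - 8.0 * np.exp(-t / 40.0)

# Verify y1' = -y1/60 + y2/120 and y2' = y1/60 - y2/120 at several times,
# using a central finite difference for the derivatives.
t = np.linspace(0.0, 200.0, 5)
h = 1e-6
d_y1 = (y1(t + h) - y1(t - h)) / (2 * h)
d_y2 = (y2(t + h) - y2(t - h)) / (2 * h)
assert np.allclose(d_y1, -y1(t) / 60 + y2(t) / 120, atol=1e-6)
assert np.allclose(d_y2,  y1(t) / 60 - y2(t) / 120, atol=1e-6)

# Initial conditions and long-run limits from part (c).
assert np.isclose(y1(0), 12.0) and np.isclose(y2(0), 0.0)
assert np.isclose(y1(1e6), 4.0) and np.isclose(y2(1e6), 8.0)
```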
10. a. Let x(t) and y(t) denote the temperature of the first and second floor, respectively. Then the initial value problem that models the system is given by
x′(t) = −(2/10)(x(t) − 0) − (5/10)(x(t) − y(t)) = −(7/10)x(t) + (5/10)y(t)
y′(t) = −(1/10)(y(t) − 0) − (5/10)(y(t) − x(t)) = (5/10)x(t) − (6/10)y(t)
with initial conditions x(0) = 70, y(0) = 60.
b. The solution to the initial value problem is
x(t) ≈ 61.37e^{−0.15t} + 8.63e^{−1.15t}, y(t) ≈ 67.81e^{−0.15t} − 7.81e^{−1.15t}.
c. The first floor will reach 32° …

… ≈ (0.39, 0.6125)ᵀ, the expected number of commuters using the mass transit system after two years is approximately 39%. Similarly, since T⁵(0.35, 0.65)ᵀ ≈ (0.4, 0.60)ᵀ, the expected number of commuters using the mass transit system after five years is approximately 40%.
5.4 Application: Markov Chains 131
c. The eigenvector of T corresponding to the eigenvalue λ = 1 is (0.57, 0.85)ᵀ. The steady state vector is the probability eigenvector (1/(0.57 + 0.85))(0.57, 0.85)ᵀ = (0.4, 0.6)ᵀ.
3. The transition matrix is given by the data in the table, so T = [0.5 0.4 0.1; 0.4 0.4 0.2; 0.1 0.2 0.7]. Since initially there are only plants with pink flowers, the initial probability state vector is (0, 1, 0)ᵀ. After three generations the probabilities of each variety are given by T³(0, 1, 0)ᵀ ≈ (0.36, 0.35, 0.29)ᵀ, and after 10 generations T¹⁰(0, 1, 0)ᵀ ≈ (0.33, 0.33, 0.33)ᵀ.
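The generation-by-generation probabilities above are matrix powers applied to the initial state vector. A quick numerical check (tolerances chosen to match the two-decimal rounding in the text):

```python
import numpy as np

# Transition matrix and initial state from the flower-color chain.
T = np.array([[0.5, 0.4, 0.1],
              [0.4, 0.4, 0.2],
              [0.1, 0.2, 0.7]])
x0 = np.array([0.0, 1.0, 0.0])   # all plants start with pink flowers

x3 = np.linalg.matrix_power(T, 3) @ x0
assert np.allclose(x3, [0.36, 0.35, 0.29], atol=0.005)

# T is doubly stochastic, so the powers approach the uniform steady state.
x50 = np.linalg.matrix_power(T, 50) @ x0
assert np.allclose(x50, [1/3, 1/3, 1/3], atol=1e-6)
```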
4. The transition matrix for the Markov chain is
T = [0.6 0.3 0.4; 0.1 0.4 0.3; 0.3 0.3 0.3].
a. Since T³(1, 0, 0)ᵀ = (0.48, 0.22, 0.3)ᵀ, the probability of the taxi being in location S after three fares is 0.3.
b. Since T⁵(0.3, 0.35, 0.35)ᵀ = (0.47, 0.23, 0.3)ᵀ, the probability of the taxi being in location A after five fares is 0.47, location B is 0.23, and location S is 0.3. c. The eigenvector of T corresponding to the eigenvalue λ = 1 is (0.82, 0.40, 0.52)ᵀ. The steady state vector is the probability eigenvector
(1/(0.82 + 0.40 + 0.52))(0.82, 0.40, 0.52)ᵀ = (0.47, 0.23, 0.3)ᵀ.
5. The transition matrix for the disease is T = [0.5 0 0; 0.5 0.75 0; 0 0.25 1]. The eigenvector of T corresponding to the eigenvalue λ = 1 is (0, 0, 1)ᵀ. Hence, the steady state probability vector is (0, 0, 1)ᵀ and the disease will not be eradicated.
6. Since the transition matrix is T = [0.45 0.2; 0.55 0.8] and the initial state vector is (0.7, 0.3)ᵀ, the fractions of smokers and non-smokers after 5 years are given by T⁵(0.7, 0.3)ᵀ ≈ (0.27, 0.73)ᵀ and after 10 years by T¹⁰(0.7, 0.3)ᵀ ≈ (0.27, 0.73)ᵀ. The eigenvector of T corresponding to the eigenvalue λ = 1 is (0.38, 1.04)ᵀ.
The steady state vector is the probability eigenvector (1/(0.38 + 1.04))(0.38, 1.04)ᵀ ≈ (0.27, 0.73)ᵀ, so in the long run approximately 27% will be smoking.
7. a. T = [0.33 0.25 0.17 0.25; 0.25 0.33 0.25 0.17; 0.17 0.25 0.33 0.25; 0.25 0.17 0.25 0.33] b. Tⁿ(1, 0, 0, 0)ᵀ = (0.25 + 0.5(0.16)ⁿ, 0.25, 0.25 − 0.5(0.16)ⁿ, 0.25)ᵀ c. (0.25, 0.25, 0.25, 0.25)ᵀ
8. a. Since det(T − λI) = det[−λ 1; 1 −λ] = λ² − 1, the eigenvalues of T are λ₁ = 1 and λ₂ = −1. b. Notice that T is not a regular stochastic matrix since for all n ≥ 1, Tⁿ has zero entries. If n is odd, then Tⁿ = T, and if n is even, then Tⁿ = I. So if v = (v₁, v₂)ᵀ, then Tv = (v₂, v₁)ᵀ, T²v = (v₁, v₂)ᵀ, T³v = (v₂, v₁)ᵀ, and so on. Hence Tv = v if and only if v = (1/2, 1/2)ᵀ. c. The population is split in two groups which do not intermingle.
9. The eigenvalues of T are λ₁ = 1 − p − q and λ₂ = 1, with corresponding eigenvectors (1, −1)ᵀ and (q/p, 1)ᵀ. Then the steady state probability vector is
(1/(1 + q/p))(q/p, 1)ᵀ = (q/(p+q), p/(p+q))ᵀ.
10. Let T = [a b; b c] be a symmetric matrix. If T is also stochastic, then a + b = 1 and b + c = 1, so a = c. So let T = [a b; b a].
a. Since
det(T − λI) = (a − λ)² − b² = λ² − 2aλ + (a² − b²) = (λ − (a + b))(λ − (a − b)),
the eigenvalues of T are λ₁ = a + b and λ₂ = a − b. b. Notice that λ₁ = a + b = 1 with corresponding eigenvector (1, 1)ᵀ. Hence, the steady state probability vector is (1/2, 1/2)ᵀ.
Review Exercises Chapter 5
1. a. Let A = [a b; b a]. To show that (1, 1)ᵀ is an eigenvector of A we need a constant λ such that A(1, 1)ᵀ = λ(1, 1)ᵀ. But [a b; b a](1, 1)ᵀ = (a + b, a + b)ᵀ = (a + b)(1, 1)ᵀ. b. Since the characteristic equation for A is
det(A − λI) = (a − λ)² − b² = λ² − 2aλ + (a² − b²) = (λ − (a + b))(λ − (a − b)) = 0,
the eigenvalues of A are λ₁ = a + b, λ₂ = a − b. c. v₁ = (1, 1)ᵀ, v₂ = (1, −1)ᵀ d. First notice that A is diagonalizable since it has two linearly independent eigenvectors. The matrix D = P⁻¹AP, where P is the matrix with column vectors the eigenvectors and D is the diagonal matrix with the corresponding eigenvalues on the diagonal. That is, P = [1 1; 1 −1] and D = [a+b 0; 0 a−b].
2. a. The eigenvalues of A are λ₁ = 1, λ₂ = 0 and λ₃ = 2. b. Since the 3 × 3 matrix A has three distinct eigenvalues, it is diagonalizable. c. The eigenvectors corresponding to λ₁ = 1, λ₂ = 0 and λ₃ = 2 are (2, 0, 1)ᵀ, (1, 0, 0)ᵀ, and (0, 1, 0)ᵀ, respectively. d. The eigenvectors can be shown directly to be linearly independent, but from part (b) they must be linearly independent. e. Since the eigenvectors form a basis for R³, the matrix A is diagonalizable. f. The matrix A = PDP⁻¹, where P = [2 1 0; 0 0 1; 1 0 0] and D = [1 0 0; 0 0 0; 0 0 2].
3. a. The eigenvalues are λ₁ = 0 and λ₂ = 1. b. No conclusion can be made from part (a) about whether or not A is diagonalizable, since the matrix does not have four distinct eigenvalues. c. Each eigenvalue has multiplicity 2. The eigenvectors corresponding to λ₁ = 0 are (0, 0, 0, 1)ᵀ and (1, 0, 1, 0)ᵀ, while the eigenvector corresponding to λ₂ = 1 is (0, 1, 0, 0)ᵀ. d. The eigenvectors found in part (c) are linearly independent. e. Since A is a 4 × 4 matrix with only three linearly independent eigenvectors, A is not diagonalizable.
4. a. The characteristic polynomial is det(A − λI) = (λ − 1)²(λ − 2). b. The eigenvalues are the solutions to (λ − 1)²(λ − 2) = 0, that is, λ₁ = 1 with multiplicity 2, and λ₂ = 2. c. The eigenvectors of A are (1, 3, 0)ᵀ and (0, 1, 1)ᵀ corresponding to λ₁ = 1, and (0, 2, 1)ᵀ corresponding to λ₂ = 2. Hence, dim(V₁) = 2 and dim(V₂) = 1. d. Since there are three linearly independent eigenvectors, A is diagonalizable. Equivalently, since dim(V₁) + dim(V₂) = 3, the matrix A is diagonalizable.
e. P = [1 0 0; 3 1 2; 0 1 1], D = [1 0 0; 0 1 0; 0 0 2] f. The columns of P and D can be permuted. For example,
P₁ = [1 0 0; 3 2 1; 0 1 1], D₁ = [1 0 0; 0 2 0; 0 0 1] and P₂ = [0 0 1; 2 1 3; 1 1 0], D₂ = [2 0 0; 0 1 0; 0 0 1].
5. a. Expanding the determinant of AI across row two gives
det(A I) =
1 0
0 1
k 3
= (0)
1 0
3
+ ()
0
k
(1)
1
k 3
=
3
3 +k.
Hence, the characteristic equation is
3
3 +k = 0.
134 Chapter 5 Eigenvalues and Eigenvectors
b. The graphs of y(λ) = λ^3 − 3λ + k for different values of k (k = -4, -3, -2.5, 0, 2.5, 3, 4) are shown in the figure.
c. The matrix will have three distinct real eigenvalues when the graph of y(λ) = λ^3 − 3λ + k crosses the x-axis three times. That is, for
−2 < k < 2.
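The threshold in part (c) can be confirmed by counting real roots numerically (an illustration, not from the text):

```python
import numpy as np

# y(lam) = lam^3 - 3*lam + k has three distinct real roots exactly
# when -2 < k < 2; outside that range two roots become complex.
def num_real_roots(k, tol=1e-8):
    roots = np.roots([1.0, 0.0, -3.0, k])
    return sum(1 for r in roots if abs(r.imag) < tol)

assert num_real_roots(0.0) == 3      # k inside (-2, 2)
assert num_real_roots(2.5) == 1      # k outside: one real root
assert num_real_roots(-4.0) == 1
```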
6. Since B = P^(-1)AP, we have that A = PBP^(-1). Suppose Bv = λv, so v is an eigenvector of B corresponding to the eigenvalue λ. Then
A(Pv) = PBP^(-1)Pv = PBv = P(λv) = λPv
and hence, Pv is an eigenvector of A corresponding to the eigenvalue λ.
7. a. Let v = (1, 1, . . . , 1)^T. Then each component of the vector Av has the same value, equal to the common row sum λ. That is, Av = (λ, . . . , λ)^T = λ(1, 1, . . . , 1)^T, so λ is an eigenvalue of A corresponding to the eigenvector v. b. Since A and A^t have the same eigenvalues, the same result holds if the sum of each column of A is equal to λ.
8. a. Since T is a linear operator, T(0) = 0, so {0} is invariant. And for every v in V, T(v) is in V, so V is invariant. b. Since dim(W) = 1, there is a nonzero vector w0 such that W = {aw0 | a ∈ R}. Then T(w0) = w1, and since W is invariant, w1 is in W. So there is some λ such that w1 = λw0. Hence, w0 is an eigenvector of T. c. By part (a), R^2 and {0} are invariant subspaces of the linear operator T. Since the matrix representation for T is given relative to the standard basis,
T(v) = λv ⟺ [0 1; −1 0](v1, v2)^T = λ(v1, v2)^T ⟺ v1 = v2 = 0.
So T has no eigenvectors, and by part (b), the only invariant subspaces of T are R^2 and {0}.
9. a. Suppose w is in S(V_λ0), so that w = S(v) for some eigenvector v of T corresponding to λ0. Then
T(w) = T(S(v)) = S(T(v)) = S(λ0 v) = λ0 S(v) = λ0 w.
Hence, S(V_λ0) ⊆ V_λ0.
b. Let v be an eigenvector of T corresponding to the eigenvalue λ0. Since T has n distinct eigenvalues, dim(V_λ0) = 1 with V_λ0 = span{v}. Now by part (a), T(S(v)) = λ0(S(v)), so that S(v) is also an eigenvector of T and in span{v}. Consequently, there exists a scalar μ0 such that S(v) = μ0 v, so that v is also an eigenvector of S.
c. Let B = {v1, v2, . . . , vn} be a basis for V consisting of eigenvectors of T and S. Thus there exist scalars λ1, λ2, . . . , λn and μ1, μ2, . . . , μn such that T(vi) = λi vi and S(vi) = μi vi, for 1 ≤ i ≤ n. Now let v be a vector in V. Since B is a basis for V, there are scalars c1, c2, . . . , cn such that v = c1v1 + c2v2 + · · · + cnvn. Applying the operator ST to both sides of this equation we obtain
ST(v) = ST(c1v1 + c2v2 + · · · + cnvn) = S(c1λ1v1 + c2λ2v2 + · · · + cnλnvn)
= c1λ1μ1v1 + c2λ2μ2v2 + · · · + cnλnμnvn = c1μ1λ1v1 + c2μ2λ2v2 + · · · + cnμnλnvn
= T(c1μ1v1 + c2μ2v2 + · · · + cnμnvn) = TS(c1v1 + c2v2 + · · · + cnvn) = TS(v).
Since this holds for all v in V, then ST = TS.
d. The linearly independent vectors (1, 0, 1)^T, (−1, 0, 1)^T, and (0, 1, 0)^T are eigenvectors for both matrices A and B. Let P = [1 −1 0; 0 0 1; 1 1 0]. Then
P^(-1)AP = [1/2 0 1/2; −1/2 0 1/2; 0 1 0][3 0 1; 0 2 0; 1 0 3][1 −1 0; 0 0 1; 1 1 0] = [4 0 0; 0 2 0; 0 0 2]
and
P^(-1)BP = [1/2 0 1/2; −1/2 0 1/2; 0 1 0][1 0 −2; 0 1 0; −2 0 1][1 −1 0; 0 0 1; 1 1 0] = [−1 0 0; 0 3 0; 0 0 1].
10. a. If D = diag(λ1, λ2, . . . , λn), then
e^D = lim_{m→∞} (I + D + (1/2!)D^2 + (1/3!)D^3 + · · · + (1/m!)D^m)
= lim_{m→∞} diag( Σ_{k=0}^{m} λ1^k/k!, Σ_{k=0}^{m} λ2^k/k!, . . . , Σ_{k=0}^{m} λn^k/k! )
= diag( e^λ1, e^λ2, . . . , e^λn ).
b. Suppose that A is diagonalizable and A = PDP^(-1), so A^k = PD^k P^(-1). Then
e^A = lim_{m→∞} (I + PDP^(-1) + (1/2!)PD^2P^(-1) + · · · + (1/m!)PD^m P^(-1))
= lim_{m→∞} ( P(I + D + (1/2!)D^2 + · · · + (1/m!)D^m)P^(-1) ) = P e^D P^(-1).
c. The eigenvalues of A are λ1 = 3 and λ2 = 5 with corresponding eigenvectors (1, 3)^T and (1, 1)^T, respectively. So
A = [1 1; 3 1][3 0; 0 5][−1/2 1/2; 3/2 −1/2].
By part (b), e^A = P e^D P^(-1), and by part (a), e^D = diag(e^3, e^5), so
e^A = [1 1; 3 1][e^3 0; 0 e^5][−1/2 1/2; 3/2 −1/2] = (1/2)[−e^3 + 3e^5, e^3 − e^5; −3e^3 + 3e^5, 3e^3 − e^5].
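The closed form e^A = P e^D P^(-1) can be checked against the defining series (a numerical sketch, not part of the text):

```python
import numpy as np

# A = P diag(3, 5) P^-1, as in part (c); the truncated series
# I + A + A^2/2! + ... should converge to P diag(e^3, e^5) P^-1.
P = np.array([[1.0, 1.0],
              [3.0, 1.0]])
D = np.diag([3.0, 5.0])
Pinv = np.linalg.inv(P)               # equals [[-1/2, 1/2], [3/2, -1/2]]
A = P @ D @ Pinv

closed_form = P @ np.diag(np.exp([3.0, 5.0])) @ Pinv

series, term = np.eye(2), np.eye(2)
for m in range(1, 30):                # partial sums of sum_m A^m / m!
    term = term @ A / m
    series = series + term

assert np.allclose(series, closed_form)
```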
Chapter Test Chapter 5
1. F. P^(-1)AP = [1 2; 0 2]
2. F. If two matrices are similar, then they have the same eigenvalues. The eigenvalues of A are 1 and −1, and the only eigenvalue of D is 1, so the matrices are not similar.
3. F. The matrix has only two linearly independent eigenvectors, (2, 0, 1)^T and (0, 0, 1)^T.
4. T 5. T 6. T
7. T 8. T 9. T
10. T 11. F. Since det(A − λI) = (λ − (a + b))(λ − (a − b)), the eigenvalues are a + b and a − b.
12. F. det(A − λI) = λ^2 − 2λ + (1 − k), so that, for example, if k = 1, then A has eigenvalues 0 and 2.
13. F. Let A = [2 1; 0 2]. 14. T 15. F. The matrix has one eigenvalue λ = 1, of multiplicity 2, but does not have two linearly independent eigenvectors.
16. T 17. T 18. T
19. T 20. F. 21. T
22. T 23. F. The matrix is similar to a diagonal matrix, but it is unique only up to permutation of the diagonal entries.
24. F. The matrix can still have n linearly independent eigenvectors. An example is the n × n identity matrix.
25. T 26. T 27. T
28. T 29. T 30. F. The matrix [1 1; 0 1] is invertible but not diagonalizable.
31. T 32. T 33. T
34. F. The zero vector has to be added.
35. T 36. T
37. T 38. T 39. T
40. T
6 Inner Product Spaces
Exercise Set 6.1
In Section 6.1, the geometry in the plane and three space is extended to the Euclidean spaces R^n. The dot product of two vectors is of central importance. The dot product of u and v in R^n is the sum of the products of corresponding components, so
u · v = (u1, u2, . . . , un)^T · (v1, v2, . . . , vn)^T = u1v1 + u2v2 + · · · + unvn.
In R^3, u · u = u1^2 + u2^2 + u3^2, so that
√(u · u) = √(u1^2 + u2^2 + u3^2),
which is the length of the vector u, called the norm of u and denoted by ||u||. The length of a vector in R^n is then defined as ||u|| = √(u1^2 + u2^2 + · · · + un^2), and the distance between two vectors u and v is ||u − v||. For example, for u = (1, 2, 3)^T,
||u|| = √(1 + 4 + 9) = √14,
so a vector of length 1 and in the direction of u is (1/√14)(1, 2, 3)^T.
A very useful observation, used in this exercise set and later ones, is that when a set of vectors {v1, v2, . . . , vn} is pairwise orthogonal, which means vi · vj = 0 whenever i ≠ j, then using the properties of the dot product,
vi · (c1v1 + c2v2 + · · · + ci vi + · · · + cnvn) = c1(vi · v1) + c2(vi · v2) + · · · + ci(vi · vi) + · · · + cn(vi · vn)
= 0 + · · · + 0 + ci(vi · vi) + 0 + · · · + 0
= ci(vi · vi) = ci ||vi||^2.
Solutions to Exercises
1. u · v = (0)(1) + (−1)(1) + (3)(2) = 5
2. (u · v)/(v · v) = (0 − 1 + 6)/(1 + 1 + 4) = 5/6
3. u · (v + 2w) = u · (3, −1, −4)^T = 0 + 1 − 12 = −11
4. ((u · w)/(w · w))w = ((0 + 1 − 9)/(1 + 1 + 9))(1, −1, −3)^T = −(8/11)(1, −1, −3)^T
5. ||u|| = √(1^2 + 5^2) = √26
6. ||u − v|| = √((1, −4)^T · (1, −4)^T) = √17
7. Divide each component of the vector by the norm of the vector, so that (1/√26)(1, 5)^T is a unit vector in the direction of u.
8. Since cos θ = (u · v)/(||u|| ||v||) = 7/(√26 √5) ≠ 0, the vectors are not orthogonal.
9. (10/5)(2, 1)^T
10. The vector w is orthogonal to u and v if and only if w1 + 5w2 = 0 and 2w1 + w2 = 0, that is, w1 = 0 = w2.
11. ||u|| = √((−3)^2 + (−2)^2 + 3^2) = √22
12. √((2, 1, 6)^T · (2, 1, 6)^T) = √41
13. (1/√22)(3, 2, 3)^T
14. Since cos θ = 4/(√11 √2) ≠ 0, the vectors are not orthogonal.
15. (3/11)(1, −1, −3)^T
16. A vector w is orthogonal to both vectors if and only if
3w1 − 2w2 + 3w3 = 0 and w1 − w2 − 3w3 = 0,
that is, w1 = −9w3, w2 = −12w3. So all vectors in span{(−9, −12, 1)^T} are orthogonal to the two vectors.
17. Since two vectors in R^2 are orthogonal if and only if their dot product is zero, solving
(c, 3)^T · (1, 2)^T = 0 gives c + 6 = 0, that is, c = −6.
18. (1, c, −2)^T · (0, 2, 1)^T = 0 + 2c − 2 = 0 ⟺ c = 1
19. The pairs of vectors with dot product equaling 0 are v1, v2; v1, v4; v1, v5; v2, v3; v3, v4; and v3, v5.
20. The vectors v2 and v5 are in the same direction.
21. Since v3 = −v1, the vectors v1 and v3 are in opposite directions.
22. ||v4|| = √(1/3 + 1/3 + 1/3) = 1
23. w = (2, 0)^T 24. w = (2, 0)^T 25. w = (3/2)(3, 1)^T 26. w = (5, 0, 0)^T 27. w = (1/6)(5, 2, 1)^T 28. w = (3/13)(0, 2, 3)^T
(Each of Exercises 23–28 is accompanied in the text by a sketch of u, v, and w.)
29. Let u be a vector in span{u1, u2, . . . , un}. Then there exist scalars c1, c2, . . . , cn such that
u = c1u1 + c2u2 + · · · + cnun.
Using the distributive property of the dot product gives
v · u = v · (c1u1 + c2u2 + · · · + cnun) = c1(v · u1) + c2(v · u2) + · · · + cn(v · un) = c1(0) + c2(0) + · · · + cn(0) = 0.
30. If u and w are in S and c is a scalar, then
(u +cw) v = u v +c(w v) = 0 + 0 = 0
and hence, S is a subspace.
31. Consider the equation
c1v1 + c2v2 + · · · + cnvn = 0.
We need to show that the only solution to this equation is the trivial solution c1 = c2 = · · · = cn = 0. Since v1 · (c1v1 + c2v2 + · · · + cnvn) = v1 · 0 = 0, we have that c1(v1 · v1) + c2(v1 · v2) + · · · + cn(v1 · vn) = 0. Since S is an orthogonal set of vectors, vi · vj = 0 whenever i ≠ j, so this last equation reduces to c1||v1||^2 = 0. Now since the vectors are nonzero their lengths are positive, so ||v1|| ≠ 0 and hence, c1 = 0. In a similar way we have that c2 = c3 = · · · = cn = 0. Hence, S is linearly independent.
32. Since AA^(-1) = I, then Σ_{k=1}^{n} a_ik (A^(-1))_kj = 0 for i ≠ j.
33. For any vector w, the square of the norm and the dot product are related by the equation ||w||^2 = w · w. Then applying this to the vectors u + v and u − v gives
||u + v||^2 + ||u − v||^2 = (u + v) · (u + v) + (u − v) · (u − v)
= u · u + 2u · v + v · v + u · u − 2u · v + v · v
= 2||u||^2 + 2||v||^2.
34. a. The normal vector to the plane, n = (1, 2, 1)^T, is orthogonal to every vector in the plane.
b. A = [1 2 1; 0 0 0; 0 0 0]
35. If the column vectors of A form an orthogonal set, then the row vectors of A^t are orthogonal to the column vectors of A. Consequently, (A^tA)_ij = 0 if i ≠ j. If i = j, then (A^tA)_ii = ||Ai||^2. Thus,
A^tA = diag(||A1||^2, ||A2||^2, . . . , ||An||^2).
36. First notice that u · (Av) = u^t(Av). So
u · (Av) = u^t(Av) = (u^tA)v = (A^tu)^t v = (A^tu) · v.
37. Suppose that (Au) · v = u · (Av) for all u and v in R^n. By Exercise 36, u · (Av) = (A^tu) · v. Thus,
(A^tu) · v = (Au) · v
for all u and v in R^n. Let u = e_i and v = e_j, so (A^t)_ij = A_ij. Hence A^t = A, so A is symmetric.
For the converse, suppose that A = A^t. Then by Exercise 36,
u · (Av) = (A^tu) · v = (Au) · v.
Exercise Set 6.2
Inner products are generalizations of the dot product on the Euclidean spaces. An inner product on the vector space V is a mapping from V × V (that is, the input is a pair of vectors) to R (so the output is a number). An inner product must satisfy all the same properties as the dot product discussed in Section 6.1. So to determine if a mapping on V × V defines an inner product we must verify that:
⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0
⟨u, v⟩ = ⟨v, u⟩
⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
⟨cu, v⟩ = c⟨u, v⟩
In addition, the definitions of length, distance, and angle between vectors are given in the same way, with the dot product replaced with the inner product. So
||v|| = √⟨v, v⟩ and cos θ = ⟨u, v⟩/(||u|| ||v||).
Also, two vectors in an inner product space are orthogonal if and only if ⟨u, v⟩ = 0, and a set S = {v1, v2, . . . , vk} is orthogonal if the vectors are pairwise orthogonal. That is, ⟨vi, vj⟩ = 0 whenever i ≠ j. If in addition, for each i = 1, . . . , k, ||vi|| = 1, then S is orthonormal. Useful in the exercise set is the fact that every orthogonal set of nonzero vectors in an inner product space is linearly independent. If B = {v1, v2, . . . , vn} is an orthogonal basis for an inner product space, then for every vector v the scalars needed to write the vector in terms of the basis vectors are given explicitly by the inner product, so that
v = (⟨v, v1⟩/⟨v1, v1⟩)v1 + (⟨v, v2⟩/⟨v2, v2⟩)v2 + · · · + (⟨v, vn⟩/⟨vn, vn⟩)vn.
If B is orthonormal, then the expansion is
v = ⟨v, v1⟩v1 + ⟨v, v2⟩v2 + · · · + ⟨v, vn⟩vn.
Solutions to Exercises
1. Let u = (u1, u2)^T. To examine the definition given for ⟨u, v⟩, we will first let u = v. Then
⟨u, u⟩ = u1^2 − 2u1u2 − 2u2u1 + 3u2^2 = (u1 − 3u2)(u1 − u2).
Since ⟨u, u⟩ = 0 if and only if u1 = 3u2 or u1 = u2, V is not an inner product space. For example, if u1 = 3 and u2 = 1, then ⟨u, u⟩ = 0 but u is not the zero vector.
2. Since ⟨u, u⟩ = u1^2 + 2u1u2 = u1(u1 + 2u2) = 0 if and only if u1 = 0 or u1 = −2u2, V is not an inner product space.
3. In this case ⟨u, u⟩ = 0 if and only if u is the zero vector, and ⟨u, v⟩ = ⟨v, u⟩ for all pairs of vectors. To check whether the third requirement holds, we have that
⟨u + v, w⟩ = (u1 + v1)^2 w1^2 + (u2 + v2)^2 w2^2
and
⟨u, w⟩ + ⟨v, w⟩ = u1^2 w1^2 + u2^2 w2^2 + v1^2 w1^2 + v2^2 w2^2.
For example, if u = (1, 1)^T and v = (2, 2)^T, then ⟨u + v, w⟩ = 9w1^2 + 9w2^2 and ⟨u, w⟩ + ⟨v, w⟩ = 5w1^2 + 5w2^2. Now let w = (1, 1)^T, so that ⟨u + v, w⟩ = 18 ≠ 10 = ⟨u, w⟩ + ⟨v, w⟩, so V is not an inner product space.
4. The four requirements for an inner product are satisfied by the given function, so V is an inner product space.
5. The four requirements for an inner product are satisfied by the dot product, so V is an inner product space.
6. The vector space V is an inner product space with the given definition for ⟨A, B⟩. Since
(A^tA)_ii = Σ_{k=1}^{m} (A^t)_ik a_ki = Σ_{k=1}^{m} a_ki a_ki = Σ_{k=1}^{m} a_ki^2,
⟨A, A⟩ ≥ 0 and ⟨A, A⟩ = 0 if and only if A = 0, so the first required property holds. For the second property, recall that for any square matrix A, tr(A^t) = tr(A), so
⟨A, B⟩ = tr(B^tA) = tr((B^tA)^t) = tr(A^tB) = ⟨B, A⟩.
Using the fact that tr(A + B) = tr(A) + tr(B), the third property follows in a similar manner. Also, if c ∈ R, then
⟨cA, B⟩ = tr(B^t(cA)) = c tr(B^tA) = c⟨A, B⟩.
7. The vector space V is an inner product space with the given definition for ⟨A, B⟩. For the first requirement, ⟨A, A⟩ = Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij^2, which is nonnegative and 0 if and only if A is the zero matrix. Since real numbers commute, ⟨A, B⟩ = ⟨B, A⟩. Also, since
⟨A + B, C⟩ = Σ_{i=1}^{m} Σ_{j=1}^{n} (a_ij c_ij + b_ij c_ij),
we have ⟨A + B, C⟩ = ⟨A, C⟩ + ⟨B, C⟩. The fourth property follows in a similar manner.
8. Since
⟨p, p⟩ = Σ_{i=0}^{n} p_i^2 ≥ 0 and ⟨p, p⟩ = 0 ⟺ p = 0,
⟨p, q⟩ = Σ_{i=0}^{n} p_i q_i = Σ_{i=0}^{n} q_i p_i = ⟨q, p⟩,
⟨p + q, f⟩ = Σ_{i=0}^{n} (p_i + q_i) f_i = Σ_{i=0}^{n} p_i f_i + Σ_{i=0}^{n} q_i f_i = ⟨p, f⟩ + ⟨q, f⟩, and
⟨cp, q⟩ = Σ_{i=0}^{n} c p_i q_i = c Σ_{i=0}^{n} p_i q_i = c⟨p, q⟩,
V is an inner product space.
9. Using the properties of the integral and the fact that the exponential function is always nonnegative, V is an inner product space.
10. Since ⟨f, f⟩ = ∫_{−1}^{1} f^2(x) x dx, which can be negative, V is not an inner product space.
11. The set is orthogonal provided the functions are pairwise orthogonal. That is, provided ⟨1, sin x⟩ = 0, ⟨1, cos x⟩ = 0, and ⟨cos x, sin x⟩ = 0. We have
⟨1, sin x⟩ = ∫_{−π}^{π} sin x dx = 0.
Similarly, ⟨1, cos x⟩ = 0. To show ⟨cos x, sin x⟩ = 0 requires the technique of substitution, or notice that cosine is an even function and sine is an odd function, so the product is an odd function and, since the integral is over a symmetric interval, the integral is 0.
12. Since ∫_{−1}^{1} x dx = 0, ∫_{−1}^{1} (1/2)(5x^3 − 3x) dx = 0, and ∫_{−1}^{1} x · (1/2)(5x^3 − 3x) dx = (1/2)∫_{−1}^{1} (5x^4 − 3x^2) dx = 0, the set {1, x, (1/2)(5x^3 − 3x)} is orthogonal.
13. The inner products are
⟨1, 2x − 1⟩ = ∫_0^1 (2x − 1) dx = 0,
⟨1, x^2 − x + 1/6⟩ = ∫_0^1 (x^2 − x + 1/6) dx = 0,
and
⟨2x − 1, x^2 − x + 1/6⟩ = ∫_0^1 (2x^3 − 3x^2 + (4/3)x − 1/6) dx = 0.
14. Since sin ax and cos ax sin bx are odd functions, the integrals over the symmetric interval [−π, π] are 0.
15. a. The distance between the two functions is
||f − g|| = √( ∫_0^1 (3 − 3x + x^2)^2 dx ) = √370/10.
b. The cosine of the angle between the functions is given by cos θ = ⟨f, g⟩/(||f|| ||g||) = (5/168)√105.
16. a. The distance between the two functions is
||f − g|| = ||cos x − sin x|| = √( ∫_{−π}^{π} (1 − sin 2x) dx ) = √(2π).
b. The cosine of the angle between the functions is given by cos θ = ⟨f, g⟩/(||f|| ||g||) = 0, so f and g are orthogonal.
17. a. ||x − e^x|| = √((1/2)e^2 − 13/6) b. cos θ = √6/√(e^2 − 1)
18. a. ||f − g|| = √(e^2 − e^(−2) − 4) b. cos θ = 4/(e^2 − e^(−2))
19. a. ||2x^2 − 4|| = 2√5 b. cos θ = 2/3
20. a. ||p − q|| = ||3 − x|| = √10 b. cos θ = 1
21. a. To find the distance between the matrices, we have that
A − B = [1 2; 2 −1] − [2 1; 1 3] = [−1 1; 1 −4],
so
||A − B|| = √( ⟨[−1 1; 1 −4], [−1 1; 1 −4]⟩ ) = √( tr([−1 1; 1 −4]^t [−1 1; 1 −4]) ) = √( tr([2 −5; −5 17]) ) = √19.
b. The cosine of the angle between the matrices is given by cos θ = ⟨A, B⟩/(||A|| ||B||) = 3/(5√6).
22. a. ||A − B|| = √( tr([10 4; 4 2]) ) = 2√3 b. cos θ = 4/√99
23. a. ||A − B|| = √( tr([8 0 8; 0 3 4; 8 4 14]) ) = √25 = 5 b. cos θ = 26/(√38 √39)
24. a. ||A − B|| = √( tr([8 6 0; 6 9 3; 0 3 6]) ) = √23 b. cos θ = 19/(√33 √28)
25. Since (x, y)^T is orthogonal to (2, 3)^T if and only if (x, y)^T · (2, 3)^T = 0, which is true if and only if 2x + 3y = 0, the set of vectors that are orthogonal to (2, 3)^T is {(x, y)^T | 2x + 3y = 0}. Notice that the set describes a line.
26. Since (x, y)^T · (1, −b)^T = 0 ⟺ x − by = 0, the set of all vectors that are orthogonal to (1, −b)^T is the line x − by = 0. Observe that S = span{(b, 1)^T}.
27. Since (x, y, z)^T · (2, −3, 1)^T = 0 if and only if 2x − 3y + z = 0, the set of all vectors orthogonal to n = (2, −3, 1)^T is {(x, y, z)^T | 2x − 3y + z = 0}. This set describes the plane with normal vector n.
28. The set of all vectors orthogonal to (1, 1, 0)^T is {(x, y, z)^T | x + y = 0}.
29. a. ⟨x^2, x^3⟩ = ∫_0^1 x^5 dx = 1/6 b. ⟨e^x, e^(−x)⟩ = ∫_0^1 dx = 1 c. ||1|| = √(∫_0^1 dx) = 1, ||x|| = √(∫_0^1 x^2 dx) = √3/3 d. cos θ = √3/2 e. ||1 − x|| = √3/3
30. a. Let A = [1 0; 0 1], so ⟨u, v⟩ = [u1 u2][1 0; 0 1](v1, v2)^T = u1v1 + u2v2, which is the standard inner product on R^2.
b. Let A = [2 −1; −1 2], so ⟨u, v⟩ = 2u1v1 − u1v2 − u2v1 + 2u2v2, which defines an inner product.
c. Let A = [3 2; 2 0], so ⟨u, v⟩ = 3u1v1 + 2u1v2 + 2u2v1. In this case ⟨u, u⟩ = u1(3u1 + 4u2) = 0 if and only if u1 = 0 or u1 = −(4/3)u2, and hence the function does not define an inner product.
31. If f is an even function and g is an odd function, then fg is an odd function. Since the inner product is defined as the integral over a symmetric interval [−a, a], then
⟨f, g⟩ = ∫_{−a}^{a} f(x)g(x) dx = 0,
so f and g are orthogonal.
32. Since cos nx is an even function and sin nx is an odd function, by Exercise 31 the set is orthogonal.
33. Using the scalar property of inner products twice, we have that
⟨c1u1, c2u2⟩ = c1⟨u1, c2u2⟩ = c1c2⟨u1, u2⟩.
Since u1 and u2 are orthogonal, ⟨u1, u2⟩ = 0 and hence, ⟨c1u1, c2u2⟩ = 0. Therefore, c1u1 and c2u2 are orthogonal.
34. Since ⟨·, ·⟩1 and ⟨·, ·⟩2 are inner products, for the sum ⟨u, v⟩ = ⟨u, v⟩1 + ⟨u, v⟩2:
⟨u, u⟩ = ⟨u, u⟩1 + ⟨u, u⟩2 ≥ 0, and ⟨u, u⟩ = 0 if and only if u = 0
⟨u, v⟩ = ⟨u, v⟩1 + ⟨u, v⟩2 = ⟨v, u⟩1 + ⟨v, u⟩2 = ⟨v, u⟩
⟨cu, v⟩ = ⟨cu, v⟩1 + ⟨cu, v⟩2 = c⟨u, v⟩1 + c⟨u, v⟩2 = c⟨u, v⟩
⟨u + v, w⟩ = ⟨u + v, w⟩1 + ⟨u + v, w⟩2 = ⟨u, w⟩1 + ⟨v, w⟩1 + ⟨u, w⟩2 + ⟨v, w⟩2 = ⟨u, w⟩ + ⟨v, w⟩
and hence the function defines another inner product.
Exercise Set 6.3
Every finite dimensional inner product space has an orthogonal basis. Given any basis B, the Gram-Schmidt process is a method for constructing an orthogonal basis from B. The construction process involves projecting one vector onto another. The orthogonal projection of u onto v is
proj_v u = ((u · v)/(v · v)) v = (⟨u, v⟩/⟨v, v⟩) v.
Notice that the vectors proj_v u and u − proj_v u are orthogonal.
Let B = {v1, v2, v3} be the basis for R^3 given by
v1 = (1, 2, 1)^T, v2 = (1, 0, −1)^T, v3 = (1, 1, −1)^T.
Notice that B is not an orthogonal basis, since v1 and v3 are not orthogonal, even though v1 and v2 are orthogonal. Also, v2 and v3 are not orthogonal.
Gram-Schmidt Process to Convert B to the Orthogonal Basis {w1, w2, w3}.
w1 = v1 = (1, 2, 1)^T
w2 = v2 − proj_{w1} v2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (1, 0, −1)^T − (0/6)(1, 2, 1)^T = (1, 0, −1)^T
w3 = v3 − proj_{w1} v3 − proj_{w2} v3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2
= (1, 1, −1)^T − (2/6)(1, 2, 1)^T − (2/2)(1, 0, −1)^T = (−1/3, 1/3, −1/3)^T
An orthonormal basis is found by dividing each of the vectors in an orthogonal basis by its norm.
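The worked example above can be reproduced numerically. The generic `gram_schmidt` helper below is a sketch (not code from the text):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: subtract from each vector its projections
    onto the orthogonal vectors already produced."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for b in basis:
            w = w - (np.dot(v, b) / np.dot(b, b)) * b
        basis.append(w)
    return basis

v1 = np.array([1, 2, 1])
v2 = np.array([1, 0, -1])
v3 = np.array([1, 1, -1])
w1, w2, w3 = gram_schmidt([v1, v2, v3])

assert np.allclose(w1, [1, 2, 1])
assert np.allclose(w2, [1, 0, -1])           # v2 was already orthogonal to w1
assert np.allclose(w3, [-1/3, 1/3, -1/3])
assert np.isclose(np.dot(w1, w3), 0) and np.isclose(np.dot(w2, w3), 0)
```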
Solutions to Exercises
1. a. proj_v u = (3/2, 3/2)^T b. Since u − proj_v u = (1/2, −1/2)^T, we have that
v · (u − proj_v u) = (1, 1)^T · (1/2, −1/2)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
2. a. proj_v u = (7/5, 14/5)^T b. Since u − proj_v u = (8/5, −4/5)^T, we have that
v · (u − proj_v u) = (1, 2)^T · (8/5, −4/5)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
3. a. proj_v u = (3/5, 6/5)^T b. Since u − proj_v u = (8/5, −4/5)^T, we have that
v · (u − proj_v u) = (1, 2)^T · (8/5, −4/5)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
4. a. proj_v u = (0, 0)^T b. Since u − proj_v u = u = (1, −1)^T, we have that
v · (u − proj_v u) = (2, 2)^T · (1, −1)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
5. a. proj_v u = (4/3, 4/3, 4/3)^T b. Since u − proj_v u = (−1/3, 5/3, −4/3)^T, we have that
v · (u − proj_v u) = (1, 1, 1)^T · (−1/3, 5/3, −4/3)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
6. a. proj_v u = (1/7)(3, −2, 1)^T b. Since u − proj_v u = (1/7)(4, 2, −8)^T, we have that
v · (u − proj_v u) = (3, −2, 1)^T · (1/7)(4, 2, −8)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
7. a. proj_v u = (0, 0, 1)^T b. Since u − proj_v u = (1, 1, 0)^T, we have that
v · (u − proj_v u) = (0, 0, 1)^T · (1, 1, 0)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
8. a. proj_v u = (3/2)(1, 0, 1)^T b. Since u − proj_v u = (1/2)(−3, 4, 3)^T, we have that
v · (u − proj_v u) = (1, 0, 1)^T · (1/2)(−3, 4, 3)^T = 0,
so the dot product is 0 and hence, the two vectors are orthogonal.
9. a. proj_q p = (5/4)x − 5/12 b. Since
p − proj_q p = x^2 − (9/4)x + 17/12,
we have that
⟨q, p − proj_q p⟩ = ∫_0^1 (3x − 1)(x^2 − (9/4)x + 17/12) dx = 0,
so the inner product is 0 and hence, the two vectors are orthogonal.
10. a. proj_q p = 0 b. Since
p − proj_q p = x^2 − x + 1,
we have that
⟨q, p − proj_q p⟩ = ∫_0^1 (2x − 1)(x^2 − x + 1) dx = 0,
so the inner product is 0 and hence, the two vectors are orthogonal.
11. a. proj_q p = −(7/4)x^2 + 7/4 b. Since
p − proj_q p = (15/4)x^2 − 3/4,
we have that
⟨q, p − proj_q p⟩ = ∫_0^1 (x^2 − 1)((15/4)x^2 − 3/4) dx = 0,
so the inner product is 0, and hence, the two vectors are orthogonal.
12. a. proj_q p = (5/2)x b. Since
p − proj_q p = −(3/2)x + 1,
we have that
⟨q, p − proj_q p⟩ = ∫_0^1 x(−(3/2)x + 1) dx = 0,
so the inner product is 0, and hence, the two vectors are orthogonal.
13. Let B = {v1, v2} and denote the orthogonal basis by {w1, w2}. Then
w1 = v1 = (1, 1)^T, w2 = v2 − proj_{w1} v2 = (1, −1)^T.
To obtain an orthonormal basis, divide each vector in the orthogonal basis by its norm. Since each has length √2, an orthonormal basis is
{ (1/√2)(1, 1)^T, (1/√2)(1, −1)^T }.
14. Using the Gram-Schmidt process an orthogonal basis is { (2, 1)^T, (−3/5, 6/5)^T } and an orthonormal basis is { (2√5/5, √5/5)^T, (−√5/5, 2√5/5)^T }.
15. Let B = {v1, v2, v3} and denote the orthogonal basis by {w1, w2, w3}. Then
w1 = v1 = (1, 0, 1)^T, w2 = v2 − proj_{w1} v2 = (1, 2, −1)^T, w3 = v3 − proj_{w1} v3 − proj_{w2} v3 = (1, −1, −1)^T.
To obtain an orthonormal basis, divide each vector in the orthogonal basis by its norm. This gives the orthonormal basis
{ (1/√2)(1, 0, 1)^T, (1/√6)(1, 2, −1)^T, (1/√3)(1, −1, −1)^T }.
16. Using the Gram-Schmidt process an orthogonal basis is { (1, 0, 1)^T, (−1/2, 1, 1/2)^T, (1/3, 1/3, −1/3)^T } and an orthonormal basis is { (√2/2, 0, √2/2)^T, (−√6/6, √6/3, √6/6)^T, (√3/3, √3/3, −√3/3)^T }.
17. { √3(x − 1), 3x − 1, 6√5(x^2 − x + 1/6) }
18. { √30(x^2 − x), √8((5/2)x^2 − (3/2)x), 10x^2 − 12x + 3 }
19. An orthonormal basis for span(W) is { (1/√3)(1, 1, 1)^T, (1/√6)(−2, 1, 1)^T }.
20. An orthonormal basis for span(W) is { (√2/2)(0, 1, 1)^T, (√3/3)(1, −1, 1)^T }.
21. An orthonormal basis for span(W) is { (1/√6)(1, 2, 0, 1)^T, (1/√6)(−2, 1, 1, 0)^T, (1/√6)(1, 0, 2, −1)^T }.
22. An orthonormal basis for span(W) is { (√5/5)(1, 2, 0, 0)^T, (√15/15)(2, 1, 1, 1)^T, (√30/30)(2, 1, 0, 5)^T }.
23. An orthonormal basis for span(W) is { √3 x, 3x − 2 }.
24. An orthogonal basis for span(W) is { 1, 12x − 6, (5/2)x^3 − (9/4)x + 1/2 } and an orthonormal basis is
{ 1, (1/√12)(12x − 6), (4√7/3)((5/2)x^3 − (9/4)x + 1/2) }.
25. An orthonormal basis for span(W) is { (1/√3)(1, 0, 1, 1)^T, (1/√3)(0, 1, −1, 1)^T }.
26. An orthonormal basis for span(W) is { (1/√5)(2, 0, 1)^T, (1/√30)(1, 5, −2)^T }.
27. Let v be a vector in V and B = {u1, u2, . . . , un} an orthonormal basis for V. Then there exist scalars c1, c2, . . . , cn such that v = c1u1 + c2u2 + · · · + cnun. Then
||v||^2 = v · v = c1^2 (u1 · u1) + c2^2 (u2 · u2) + · · · + cn^2 (un · un).
Since B is orthonormal, each vector in B has norm one (1 = ||ui||^2 = ui · ui) and they are pairwise orthogonal, so ui · uj = 0 for i ≠ j. Hence,
||v||^2 = c1^2 + c2^2 + · · · + cn^2 = |v · u1|^2 + · · · + |v · un|^2.
28. To show the three statements are equivalent we will show that (a)⇒(b), (b)⇒(c), and (c)⇒(a).
(a)⇒(b): Suppose that A^(-1) = A^t. Since AA^t = I, the row vectors are orthonormal. Since they are orthogonal they are linearly independent and hence a basis for R^n.
(b)⇒(c): Suppose the row vectors of A are orthonormal. Then A^tA = I and hence the column vectors of A are orthonormal.
(c)⇒(a): Suppose the column vectors are orthonormal. Then AA^t = I and hence, A is invertible with A^(-1) = A^t.
29. Let A = (a_ij) be the n × n matrix with entries a_ij, so that A^t = (a_ji). Suppose that the columns of A form an orthonormal set, so that
Σ_{k=1}^{n} a_ki a_kj = 0 if i ≠ j, and 1 if i = j.
Observe that the quantity on the left hand side of this equation is the (i, j) entry of A^tA, so that (A^tA)_ij = 0 if i ≠ j and (A^tA)_ii = 1, and hence, A^tA = I. Conversely, if A^tA = I, then A has orthonormal columns.
30. We have x · (Ay) = x^tAy = (x^tA)y = (A^tx)^t y = (A^tx) · y.
31. Recall that ||Ax|| = √(Ax · Ax). Since the column vectors of A are orthonormal, by Exercise 29 we have that A^tA = I. By Exercise 30, we have that
Ax · Ax = (A^tAx) · x = x · x.
Since the left hand side is ||Ax||^2, we have that ||Ax||^2 = x · x = ||x||^2 and hence, ||Ax|| = ||x||.
32. Using the result given in Exercise 30, we have that
(Ax) · (Ay) = (A^t(Ax)) · y = ((A^tA)x) · y = (Ix) · y = x · y.
33. By Exercise 32, Ax · Ay = x · y. Then Ax · Ay = 0 if and only if x · y = 0.
34. A vector (x, y, z, w)^T is orthogonal to both (1, 0, −1, 1)^T and (2, 3, −1, 2)^T if and only if
x − z + w = 0 and 2x + 3y − z + 2w = 0.
Since [1 0 −1 1; 2 3 −1 2] reduces to [1 0 −1 1; 0 1 1/3 0], all vectors that are orthogonal to both vectors have the form
(s − t, −(1/3)s, s, t)^T, s, t ∈ R.
So a basis for all these vectors is { (1, −1/3, 1, 0)^T, (−1, 0, 0, 1)^T }.
35. Let
W = {v | v · ui = 0, for all i = 1, 2, . . . , m}.
Let c be a real number and x and y vectors in W, so that x · ui = 0 and y · ui = 0, for each i = 1, . . . , m. Then
(x + cy) · ui = x · ui + c(y · ui) = 0 + c(0) = 0,
for all i = 1, 2, . . . , m. So x + cy is in W and hence, W is a subspace.
36. Suppose A is symmetric, so A^t = A, and u^tAu > 0 for all nonzero vectors u in R^n.
By the definition of ⟨·, ·⟩, we have ⟨u, u⟩ > 0 for all nonzero vectors, and if u = 0, then ⟨u, u⟩ = 0.
Since A is symmetric, ⟨u, v⟩ = u^tAv = u^tA^tv = (Au)^tv = v^tAu = ⟨v, u⟩.
⟨u + v, w⟩ = (u + v)^tAw = (u^t + v^t)Aw = u^tAw + v^tAw = ⟨u, w⟩ + ⟨v, w⟩
⟨cu, v⟩ = (cu)^tAv = c u^tAv = c⟨u, v⟩
37. Since for every nonzero vector (x, y)^T we have that
v^tAv = [x y][3 1; 1 3](x, y)^T = 3x^2 + 2xy + 3y^2 ≥ (x + y)^2 ≥ 0,
A is positive semi-definite.
38. Let u = e_i. Then e_i^t A e_i = a_ii > 0 since A is positive definite. Since this holds for each i = 1, . . . , n, the diagonal entries are all positive.
39. Since, when defined, (BC)^t = C^tB^t, we have that
x^tA^tAx = (Ax)^tAx = (Ax) · (Ax) = ||Ax||^2 ≥ 0,
so A^tA is positive semi-definite.
40. We will show that the contrapositive statement holds. Suppose A is not invertible. Then there is a nonzero vector x such that Ax = 0 and hence x^tAx = 0. Therefore, A is not positive definite.
41. Let x be an eigenvector of A corresponding to the eigenvalue λ, so that Ax = λx. Now multiply both sides by x^t to obtain
x^tAx = x^t(λx) = λ(x^tx) = λ(x · x) = λ||x||^2.
Since A is positive definite and x is not the zero vector, x^tAx > 0, so λ > 0.
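Exercise 41's conclusion can be observed on a concrete matrix (chosen for the illustration; this is a numerical sketch, not part of the text):

```python
import numpy as np

# [[3, 1], [1, 3]] is symmetric with x^T A x = 3x^2 + 2xy + 3y^2 > 0
# for nonzero (x, y), so all of its eigenvalues must be positive.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigvals = np.linalg.eigvalsh(A)       # eigenvalues of a symmetric matrix
assert np.all(eigvals > 0)            # they are 2 and 4

rng = np.random.default_rng(3)
for _ in range(100):
    x = rng.standard_normal(2)
    assert x @ A @ x > 0 or np.allclose(x, 0)
```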
42. a. Since (2, 1)^T · (−2, 4)^T = 0, the vectors are orthogonal. b. det([2 −2; 1 4][2 1; −2 4]) = 100
c. The area of the rectangle spanned by the two vectors, as shown in the figure, is
||v1|| ||v2|| = √5 √20 = 10 = √(det(A^tA)).
d. |det([2 1; −2 4])| = |8 + 2| = 10 e. Suppose v1 = (a, b)^T and v2 = (c, d)^T are orthogonal, so ac + bd = 0, that is, ac = −bd. Let A = [a b; c d], so |det(A)| = |ad − bc|. The area of the rectangle spanned by the two vectors is
|det(A)| = |ad − bc| = √((ad − bc)^2) = √(a^2 d^2 − 2abcd + b^2 c^2)
= √(a^2 d^2 + 2a^2 c^2 + b^2 c^2)   (since ac = −bd)
= √(a^2 d^2 + a^2 c^2 + a^2 c^2 + b^2 c^2) = √(a^2 d^2 + a^2 c^2 + b^2 d^2 + b^2 c^2)
= √(a^2 (c^2 + d^2) + b^2 (c^2 + d^2)) = √(a^2 + b^2) √(c^2 + d^2) = ||v1|| ||v2||.
f. The area of a parallelogram is the height times the base and is equal to the area of the rectangle shown in the figure of the exercise. So the area is ||v1|| ||v2 − p||, where p is the projection of v2 onto v1. Since p = kv1 for some scalar k, if v1 = (a, b)^T and v2 = (c, d)^T, then the area of the parallelogram is
|det([a b; c − ka d − kb])|.
But this last determinant is the same as the determinant of A, since the matrix is obtained from A by the row operation −kR1 + R2 → R2. Therefore, the area of the parallelogram is |det(A)|.
g. The volume of a box in R^3 spanned by the mutually orthogonal vectors v1, v2, and v3 is ||v1|| ||v2|| ||v3||. Let A be the matrix with row vectors v1, v2, and v3, respectively. Then
AA^t = diag(||v1||^2, ||v2||^2, ||v3||^2)
and hence, det(AA^t) = ||v1||^2 ||v2||^2 ||v3||^2 = det(A) det(A^t) = (det(A))^2. Therefore, the volume of the box is
√((det(A))^2) = |det(A)|.
Exercise Set 6.4
The orthogonality of two vectors is extended to a vector being orthogonal to a subspace of an inner product space. A vector v is orthogonal to a subspace W if v is orthogonal to every vector in W. For example, the normal vector of a plane through the origin in R^3 is orthogonal to every vector in the plane. The orthogonal complement of a subspace W is the subspace W^⊥ of all vectors that are orthogonal to W. For example, if
W = { y(1, 2, 0)^T + z(1, 0, 1)^T | y, z ∈ R } = span{ (1, 2, 0)^T, (1, 0, 1)^T },
then since the two vectors (1, 2, 0)^T and (1, 0, 1)^T are linearly independent, W is a plane through the origin in R^3. A vector (a, b, c)^T is in W^⊥ if and only if
(a, b, c)^T · (1, 2, 0)^T = 0 and (a, b, c)^T · (1, 0, 1)^T = 0 ⟺ a = −c, b = (1/2)c, c ∈ R,
so
W^⊥ = { t(−1, 1/2, 1)^T | t ∈ R }.
So the orthogonal complement is the line in the direction of the (normal) vector (−1, 1/2, 1)^T. In the previous example, we used the criterion that v is in the orthogonal complement of a subspace W if and only if v is orthogonal to each vector in a basis for W. Other useful properties of the orthogonal complement:
W^⊥ is a subspace
W ∩ W^⊥ = {0}
(W^⊥)^⊥ = W
dim(V) = dim(W) + dim(W^⊥)
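The plane example above can be redone numerically: W^⊥ is the null space of the matrix whose rows span W, recoverable here from the SVD (a sketch; the SVD route is one choice among several):

```python
import numpy as np

M = np.array([[1.0, 2.0, 0.0],        # rows span W
              [1.0, 0.0, 1.0]])

_, s, Vt = np.linalg.svd(M)
null_mask = np.ones(3, dtype=bool)
null_mask[:len(s)] = s < 1e-10        # rows of Vt past rank(M) span W^perp
basis = Vt[null_mask]

assert basis.shape == (1, 3)          # the complement is a line
assert np.allclose(M @ basis.T, 0)    # orthogonal to both spanning vectors
# The basis vector is a multiple of (-1, 1/2, 1), the direction found above.
assert np.allclose(np.cross(basis[0], [-1.0, 0.5, 1.0]), 0)
```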
Solutions to Exercises
1. W^⊥ = { (x, y)^T | (x, y)^T · (1, −2)^T = 0 } = { (x, y)^T | x − 2y = 0 } = { (x, y)^T | y = (1/2)x } = span{ (1, 1/2)^T }
So the orthogonal complement is described by a line.
2. W^⊥ = { (x, y)^T | (x, y)^T · (1, 0)^T = 0 } = { (x, y)^T | x = 0 } = span{ (0, 1)^T }
So the orthogonal complement is the y-axis.
3. W^⊥ = { (x, y, z)^T | (x, y, z)^T · (2, 1, −1)^T = 0 } = { (x, y, z)^T | 2x + y − z = 0 } = span{ (1, −2, 0)^T, (0, 1, 1)^T }
So the orthogonal complement is described by a plane.
4. W^⊥ = { (x, y, z)^T | (x, y, z)^T · (1, 0, 2)^T = 0 } = { (x, y, z)^T | x + 2z = 0 } = span{ (−2, 0, 1)^T, (0, 1, 0)^T }
So the orthogonal complement is a plane.
5. The orthogonal complement is the set of all vectors that are orthogonal to both (2, 1, −1)^T and (1, 2, 0)^T. That is, the set of all vectors (x, y, z)^T satisfying
(x, y, z)^T · (2, 1, −1)^T = 0 and (x, y, z)^T · (1, 2, 0)^T = 0 ⟺ 2x + y − z = 0 and x + 2y = 0.
Since the solution to the linear system is x = (2/3)z, y = −(1/3)z, z ∈ R, then
W^⊥ = span{ (2/3, −1/3, 1)^T }. Thus, the orthogonal complement is a line in three space.
6. A vector (x, y, z) is in W⊥ if and only if 3x + y − z = 0 and y + z = 0, that is, x = (2/3)z, y = −z. So W⊥ = span{ (2/3, −1, 1) }, which is a line in R³.
7. The orthogonal complement is the set of all vectors that are orthogonal to the two given vectors. This leads to the system of equations

6.4 Orthogonal Complements 153

3x + y + z − w = 0
2y + z + 2w = 0,

with solution x = −(1/6)z + (2/3)w, y = −(1/2)z − w, z, w ∈ R. Hence a vector is in W⊥ if and only if it has the form (−(1/6)z + (2/3)w, −(1/2)z − w, z, w) for all real numbers z and w, that is,

W⊥ = span{ (−1/6, −1/2, 1, 0), (2/3, −1, 0, 1) }.
8. A vector (x, y, z, w) is in W⊥ if and only if x + y + w = 0, x + z + w = 0, and y + z + w = 0, that is, x = y = z = −(1/2)w. So W⊥ = span{ (−1/2, −1/2, −1/2, 1) }, which is a line.
9. W⊥ = span{ (1/3, 1/3, 1) }
10. W⊥ = span{ (1, 1, 1) }
11. W⊥ = span{ (1/2, 3/2, 1, 0), (1/2, 1/2, 0, 1) }
12. W⊥ = span{ (1/2, 3/2, 1, 0), (1/2, 1/2, 0, 1) }
13. A polynomial p(x) = ax² + bx + c is in W⊥ if and only if ⟨p(x), x − 1⟩ = 0 and ⟨p(x), x²⟩ = 0. Now

0 = ⟨p(x), x − 1⟩ = ∫₀¹ (ax³ + (b − a)x² + (c − b)x − c) dx, which gives a + 2b + 6c = 0,

and

0 = ⟨p(x), x²⟩ = ∫₀¹ (ax⁴ + bx³ + cx²) dx = a/5 + b/4 + c/3 = 0.

Since the system

a + 2b + 6c = 0
a/5 + b/4 + c/3 = 0

has the solution a = (50/9)c, b = −(52/9)c, c ∈ R, a basis for the orthogonal complement is { (50/9)x² − (52/9)x + 1 }.
14. A basis for W⊥ is { 5x² − (16/3)x + 1 }.
15. The set W consists of all vectors w = (w₁, w₂, w₃, w₄) such that w₄ = w₁ − w₂ − w₃, that is,

W = { (s, t, u, s − t − u) | s, t, u ∈ R } = { s(1, 0, 0, 1) + t(0, 1, 0, −1) + u(0, 0, 1, −1) | s, t, u ∈ R }.

The vector v = (x, y, z, w) is in W⊥ if and only if v is orthogonal to each of the three spanning vectors, so W⊥ = span{ (−1, 1, 1, 1) }.
16. The two vectors that span W are linearly independent but are not orthogonal. Using the Gram-Schmidt process an orthogonal basis is { (1, 0, 1), (−3/2, 1, 3/2) }. Then, with v = (−1, −2, 2),

proj_W v = (⟨v, (1, 0, 1)⟩/⟨(1, 0, 1), (1, 0, 1)⟩)(1, 0, 1) + (⟨v, (−3/2, 1, 3/2)⟩/⟨(−3/2, 1, 3/2), (−3/2, 1, 3/2)⟩)(−3/2, 1, 3/2)
= (1/2)(1, 0, 1) + (5/11)(−3/2, 1, 3/2) = (1/11)(−2, 5, 13).
17. The two vectors that span W are linearly independent and orthogonal, so an orthogonal basis for W is B = { (2, 0, 0), (0, 1, 0) }. Then, with v = (1, 2, 3),

proj_W v = (⟨v, (2, 0, 0)⟩/⟨(2, 0, 0), (2, 0, 0)⟩)(2, 0, 0) + (⟨v, (0, 1, 0)⟩/⟨(0, 1, 0), (0, 1, 0)⟩)(0, 1, 0)
= (2/4)(2, 0, 0) + (2/1)(0, 1, 0) = (1, 2, 0).
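The projection computations in these exercises all use the same formula: for a pairwise-orthogonal basis {b₁, ..., bₖ} of W, proj_W v = Σ (⟨v, bᵢ⟩/⟨bᵢ, bᵢ⟩) bᵢ. A minimal sketch of that formula in NumPy, checked against the numbers of Exercise 17 (the helper name `proj` is ours, not the book's):

```python
import numpy as np

def proj(v, basis):
    """Project v onto span(basis); the basis vectors must be pairwise orthogonal."""
    v = np.asarray(v, dtype=float)
    return sum((v @ b) / (b @ b) * b for b in map(np.asarray, basis))

# Exercise 17: orthogonal basis {(2,0,0), (0,1,0)}, v = (1,2,3).
p = proj([1, 2, 3], [np.array([2.0, 0, 0]), np.array([0.0, 1, 0])])
print(p)            # [1. 2. 0.]
```

The residual v − proj_W v is orthogonal to every basis vector, which is a quick way to validate a hand computation.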
18. The two vectors that span W are linearly independent but not orthogonal. Using the Gram-Schmidt process an orthogonal basis for W is B = { (3, 1, 1), (2, −14, 8) }. Then, with v = (5, 3, 1),

proj_W v = (⟨v, (3, 1, 1)⟩/⟨(3, 1, 1), (3, 1, 1)⟩)(3, 1, 1) + (⟨v, (2, −14, 8)⟩/⟨(2, −14, 8), (2, −14, 8)⟩)(2, −14, 8)
= (19/11)(3, 1, 1) − (1/11)(2, −14, 8) = (5, 3, 1).

Observe that (5, 3, 1) is contained in the subspace W.
19. The spanning vectors for W are linearly independent but are not orthogonal. Using the Gram-Schmidt process an orthogonal basis for W consists of the two vectors (1, 2, 1) and (1/6)(13, −4, −5). But we can also use the orthogonal basis consisting of the two vectors (1, 2, 1) and (13, −4, −5). Then, with v = (−1, 3, −5), both ⟨v, (1, 2, 1)⟩ = 0 and ⟨v, (13, −4, −5)⟩ = 0, so

proj_W v = (0, 0, 0).
20. The three vectors that span W are linearly independent but not orthogonal. Using the Gram-Schmidt process an orthogonal basis for W is B = { (1, 2, 1, 1), (2, 0, 2, 0), (3, 1, 3, 4) }. Then

proj_W v = (1/34)(5, 10, 9, 10).
21. The spanning vectors for W are linearly independent but are not orthogonal. Using the Gram-Schmidt process an orthogonal basis for W is B = { (3, 0, 1, 2), (5, 21, 3, 6) }. Then

proj_W v = (4/73)(5, 21, 3, 6).
22. a. W⊥ = span{ (2, −1) } b. proj_W v = (3/5)(1, 2) c. u = v − proj_W v = (1/5)(2, −1) d. (1/5)(3, 6) · (2, −1) = 0 e. The figure shows W, W⊥, v, and proj_W v.
23. a. W⊥ = span{ (1, 3) } b. proj_W v = (1/10)(3, −1) c. u = v − proj_W v = (1/10)(3, 9) d. (1/10)(3, 9) · (3, −1) = 0 e. The figure shows W, W⊥, v, and proj_W v.
24. a. W⊥ = span{ (1, −1, 0), (0, 0, 1) } b. proj_W v = (1, 1, 0) c. u = v − proj_W v = (0, 0, 1) d. The vector u is one of the spanning vectors of W⊥. e. The figure shows W, W⊥, v, and proj_W v.
25. Notice that the vectors (1, 1, 1) and (1, −2, 4) are not orthogonal. Using the Gram-Schmidt process an orthogonal basis for W is { (1, 1, 1), (0, −3, 3) }. Then, with v = (2, 1, −1): a. W⊥ = span{ (2, −1, −1) } b. proj_W v = (1/3)(2, 5, −1) c. u = v − proj_W v = (1/3)(4, −2, −2) d. Since u is a scalar multiple of (2, −1, −1), u is in W⊥. e. The figure shows W, W⊥, v, and proj_W v.
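Several of the solutions above first replace the spanning vectors with an orthogonal basis via the Gram-Schmidt process. A minimal sketch of classical Gram-Schmidt, checked on the two vectors of Exercise 25 as reconstructed here (the signs of the input vectors are an assumption recovered from the surrounding computations):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt; returns pairwise-orthogonal vectors (not normalized)."""
    ortho = []
    for v in map(np.asarray, vectors):
        w = v.astype(float)
        for u in ortho:
            # subtract the component of w along each previous direction
            w = w - (w @ u) / (u @ u) * u
        ortho.append(w)
    return ortho

# Exercise 25: W = span{(1,1,1), (1,-2,4)}.
u1, u2 = gram_schmidt([[1, 1, 1], [1, -2, 4]])
print(u1, u2)       # [1. 1. 1.] and [ 0. -3.  3.]
```

For larger or nearly dependent sets, a QR factorization is the numerically stable way to get the same basis.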
26. If v is in V⊥, then in particular ⟨v, v⟩ = 0, so v = 0 and hence V⊥ = {0}. Since every vector of V is orthogonal to the zero vector, {0}⊥ = V.
27. Let w ∈ W₂⊥, so ⟨w, u⟩ = 0 for all u ∈ W₂. Since W₁ ⊆ W₂, then ⟨w, u⟩ = 0 for all u ∈ W₁. Hence w ∈ W₁⊥, so W₂⊥ ⊆ W₁⊥.
6.5 Application: Least Squares Approximation 157
28. a. Let f, g ∈ W and c a scalar. Then

(f + cg)(−x) = f(−x) + cg(−x) = f(x) + cg(x) = (f + cg)(x),

so f + cg is in W and hence W is a subspace. b. Suppose g(−x) = −g(x). If f is in W, then notice that f(−x)g(−x) = f(x)(−g(x)) = −f(x)g(x). Let h(x) = f(x)g(x), so h is an odd function. Then

⟨f, g⟩ = ∫₋₁¹ h(x) dx = ∫₋₁⁰ h(x) dx + ∫₀¹ h(x) dx = −∫₀¹ h(x) dx + ∫₀¹ h(x) dx = 0.

c. Suppose f ∈ W ∩ W⊥. Then ⟨f, f⟩ = 0, so f = 0. d. If g(x) = (1/2)(f(x) + f(−x)), then g(−x) = (1/2)(f(−x) + f(x)) = g(x). Similarly, if h(x) = (1/2)(f(x) − f(−x)), then h(−x) = −h(x).
29. a. Let A = [d e; f g] and let B = [a b; b c] be a matrix in W. Then

⟨A, B⟩ = tr(BᵗA) = tr([a b; b c][d e; f g]) = tr([ad + bf, ae + bg; bd + cf, be + cg]) = ad + bf + be + cg.

So A ∈ W⊥ if and only if ad + bf + be + cg = 0 for all real numbers a, b, and c. This implies A = [0 e; −e 0]. That is, A is skew-symmetric.
b. [a b; c d] = [a, (b + c)/2; (b + c)/2, d] + [0, (b − c)/2; −(b − c)/2, 0]
30. Let T: R² → R² be defined by T(v) = proj_W v = (⟨v, (2, 1)⟩/⟨(2, 1), (2, 1)⟩)(2, 1). Let B denote the standard basis.
a. Since T(e₁) = (4/5, 2/5) and T(e₂) = (2/5, 1/5), then P = [T]_B = (1/5)[4 2; 2 1].
b. proj_W (1, 1) = (1/5)[4 2; 2 1](1, 1) = (6/5, 3/5).
c. P² = (1/5)[4 2; 2 1] · (1/5)[4 2; 2 1] = (1/25)[20 10; 10 5] = (1/5)[4 2; 2 1] = P.
31. Let w₀ be in W. Since W⊥ consists of the vectors orthogonal to every vector of W, w₀ is orthogonal to every vector in W⊥, so w₀ ∈ (W⊥)⊥. That is, W ⊆ (W⊥)⊥.
Now let w₀ ∈ (W⊥)⊥. Since V = W ⊕ W⊥, then w₀ = w + v, where w ∈ W and v ∈ W⊥. So ⟨v, w₀⟩ = 0, since w₀ ∈ (W⊥)⊥ and v ∈ W⊥. Then

0 = ⟨v, w₀⟩ = ⟨v, w + v⟩ = ⟨v, w⟩ + ⟨v, v⟩ = ⟨v, v⟩.

Therefore, since V is an inner product space, v = 0, so w₀ = w ∈ W. Hence (W⊥)⊥ ⊆ W. Since both containments hold, we have W = (W⊥)⊥.
Exercise Set 6.5
1. a. To find the least squares solution it is equivalent to solve the normal equation AᵗA(x, y) = Aᵗ(4, 1, 5). That is,

[1 1 2; 3 3 3][1 3; 1 3; 2 3](x, y) = [1 1 2; 3 3 3](4, 1, 5), that is, [6 12; 12 27](x, y) = (15, 30).

Hence, the least squares solution is x̂ = (5/2, 0). b. Since the orthogonal projection of b onto W is Ax̂, we have that w₁ = Ax̂ = (5/2, 5/2, 5), and w₂ = b − w₁ = (3/2, −3/2, 0).
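The normal-equation computation in part (a) can be reproduced directly in NumPy, and compared against `lstsq`, which solves the same least squares problem by a more stable method (the matrix and right-hand side are the ones from Exercise 1):

```python
import numpy as np

# Exercise 1: solve the normal equation A^T A x = A^T b.
A = np.array([[1.0, 3.0], [1.0, 3.0], [2.0, 3.0]])
b = np.array([4.0, 1.0, 5.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)     # [2.5 0. ]
w1 = A @ x_normal   # projection of b onto col(A)
w2 = b - w1         # residual, orthogonal to col(A)
```

The check `A.T @ w2 ≈ 0` confirms that the residual is orthogonal to the column space, which is exactly the normal-equation condition.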
2. a. To find the least squares solution it is equivalent to solve the normal equation AᵗA(x, y) = Aᵗ(2, 0, −1). That is,

[2 1 1; 2 2 1][2 2; 1 2; 1 1](x, y) = [2 1 1; 2 2 1](2, 0, −1), that is, [6 7; 7 9](x, y) = (3, 3).

Hence, the least squares solution is x̂ = (6/5, −3/5). b. Since the orthogonal projection of b onto W is Ax̂, we have that w₁ = Ax̂ = (6/5, 0, 3/5), and w₂ = b − w₁ = (4/5, 0, −8/5).
3. Let

A = [1965 1; 1970 1; 1975 1; 1980 1; 1985 1; 1990 1; 1995 1; 2000 1; 2004 1] and b = (927, 1187, 1449, 1710, 2004, 2185, 2513, 2713, 2803).

a. The figure shows the scatter plot of the data, which approximates a linear increasing trend. Also shown in the figure is the best fit line found in part (b).
b. The least squares solution is given by the solution to the normal equation AᵗA(m, b) = Aᵗb, which is equivalent to

[35459516 17864; 17864 9](m, b) = (34790257, 17491).

The solution to the system gives the line that best fits the data, y = (653089/13148)x − 317689173/3287.
4. Let

A = [1955 1; 1960 1; 1965 1; 1970 1; 1975 1; 1980 1; 1985 1; 1990 1; 1995 1; 2000 1; 2005 1] and b = (157, 141, 119, 104, 93, 87, 78, 70, 66, 62, 57).

a. The figure shows the scatter plot of the data, which approximates a linear decreasing trend. Also shown in the figure is the best fit line found in part (b).
b. The least squares solution is given by the solution to the normal equation AᵗA(m, b) = Aᵗb, which is equivalent to

[43127150 21780; 21780 11](m, b) = (2042030, 1034).

The solution to the system gives the line that best fits the data, y = −(529/275)x + 19514/5.
5. a. The figure shows the scatter plot of the data, which approximates a linear increasing trend. Also shown in the figure is the best fit line found in part (b).
b. The line that best fits the data is y = 0.07162857143x − 137.2780952.
6. To use a least squares approach to finding the best fit parabola y = a + bx + cx² requires using a 3 × 3 matrix, where the columns correspond to 1, x, and x². We will also shift the data so that 1980 corresponds to 0. So let

A = [1 0 0; 1 2 4; 1 5 25; 1 7 49; 1 10 100; 1 12 144; 1 15 225; 1 17 289; 1 20 400; 1 22 484; 1 25 625] and b = (0.1, 0.7, 2.4, 4.5, 10, 16.1, 29.8, 40.9, 57.9, 67.9, 82.7).

a. The figure shows the scatter plot of the data. Also shown in the figure is the best fit parabola found in part (b).
b. The least squares solution is given by the solution to the normal equation AᵗA(a, b, c) = Aᵗb, which is equivalent to

[11 135 2345; 135 2345 45765; 2345 45765 952805](a, b, c) = (313, 6200, 129838).

The solution to the system gives the parabola that best fits the data, y = (1235697/8494750)x² − (1384589/8494750)x − 8584/15445.
7. a. The Fourier polynomials are

p₂(x) = 2 sin x − sin 2x
p₃(x) = 2 sin x − sin 2x + (2/3) sin 3x
p₄(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x
p₅(x) = 2 sin x − sin 2x + (2/3) sin 3x − (1/2) sin 4x + (2/5) sin 5x.

b. The graph of the function f(x) = x on the interval −π ≤ x ≤ π and the Fourier approximations are shown in the figure.
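The sine coefficients listed in part (a) come from bₙ = (1/π)∫₋π^π x sin(nx) dx = 2(−1)ⁿ⁺¹/n, and they can be checked by numerical integration. A minimal sketch (the helper `trapezoid` is ours, used to stay independent of the NumPy version):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule for samples y on the grid x."""
    return float(np.dot((y[:-1] + y[1:]) / 2.0, np.diff(x)))

# Fourier coefficients of f(x) = x on [-pi, pi]: b_n = 2(-1)^(n+1)/n.
x = np.linspace(-np.pi, np.pi, 20001)
coeffs = [trapezoid(x * np.sin(n * x), x) / np.pi for n in range(1, 6)]
print(coeffs)       # approximately [2, -1, 2/3, -1/2, 2/5]
```

With 20001 sample points the trapezoid rule reproduces the exact coefficients to well within 1e-4.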
8. a. The Fourier polynomials are

p₂(x) = (1/2) sin 2x
p₃(x) = (1/2) sin 2x
p₄(x) = (1/2) sin 2x − (1/2) sin 4x
p₅(x) = (1/2) sin 2x − (1/2) sin 4x.

b. The graph of the function and the Fourier approximations are shown in the figure.
9. a. The Fourier polynomials are

p₂(x) = π²/3 − 4 cos x + cos 2x
p₃(x) = π²/3 − 4 cos x + cos 2x − (4/9) cos 3x
p₄(x) = π²/3 − 4 cos x + cos 2x − (4/9) cos 3x + (1/4) cos 4x
p₅(x) = π²/3 − 4 cos x + cos 2x − (4/9) cos 3x + (1/4) cos 4x − (4/25) cos 5x.

b. The graph of the function f(x) = x² on the interval −π ≤ x ≤ π and the Fourier approximations are shown in the figure.
10. Suppose A = QR is a QR-factorization of A. To solve the normal equation AᵗAx = Aᵗb, we have that

(QR)ᵗ(QR)x = (QR)ᵗb, so Rᵗ(QᵗQ)Rx = RᵗQᵗb.

6.6 Diagonalization of Symmetric Matrices 161

Since the column vectors of Q are orthonormal, QᵗQ = I, so

RᵗRx = RᵗQᵗb.

Since Rᵗ is invertible, Rx = Qᵗb, and since R is upper triangular, the system can be solved using back substitution.
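The QR approach in Exercise 10 can be sketched in a few lines; here it is applied to the matrix of Exercise 1, so the answer x̂ = (5/2, 0) can be compared (`np.linalg.solve` stands in for explicit back substitution on the small triangular system):

```python
import numpy as np

# Least squares via QR: A = QR with Q^T Q = I, so the normal equation reduces to Rx = Q^T b.
A = np.array([[1.0, 3.0], [1.0, 3.0], [2.0, 3.0]])   # matrix from Exercise 1
b = np.array([4.0, 1.0, 5.0])

Q, R = np.linalg.qr(A)            # reduced QR: Q is 3x2, R is 2x2 upper triangular
x = np.linalg.solve(R, Q.T @ b)   # solve the triangular system Rx = Q^T b
print(x)                          # [2.5 0. ]
```

Avoiding the explicit product AᵗA is the point: forming the normal equations squares the condition number, while QR works with A directly.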
Exercise Set 6.6
Diagonalization of matrices was considered earlier and several criteria were given for determining when a matrix can be factored in this specific form. An n × n matrix is diagonalizable if and only if it has n linearly independent eigenvectors. Also, if A has n distinct eigenvalues, then A is diagonalizable. If A is an n × n real symmetric matrix, then the eigenvalues are all real numbers and the eigenvectors corresponding to distinct eigenvalues are orthogonal. We also have that every real symmetric matrix has an orthogonal diagonalization. That is, there is an orthogonal matrix P, so that P⁻¹ = Pᵗ, and a diagonal matrix D such that A = PDP⁻¹ = PDPᵗ. The column vectors of an orthogonal matrix form an orthonormal basis for Rⁿ. If the symmetric matrix A has an eigenvalue of geometric multiplicity greater than 1, then the corresponding linearly independent eigenvectors that generate the eigenspace may not be orthogonal. So a process for finding an orthogonal diagonalization of a real n × n symmetric matrix A is:

- Find the eigenvalues and corresponding eigenvectors of A.
- Since A is diagonalizable there are n linearly independent eigenvectors. If necessary use the Gram-Schmidt process to find an orthonormal set of eigenvectors.
- Form the orthogonal matrix P with the column vectors determined in the previous step.
- Form the diagonal matrix D with diagonal entries the eigenvalues of A. The eigenvalues are placed on the diagonal in the same order as the eigenvectors are used to define P. If an eigenvalue has algebraic multiplicity m, then the corresponding eigenvalue and eigenvector are repeated in D and P m times, respectively.
- The matrix A has the factorization A = PDP⁻¹ = PDPᵗ.
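The steps above can be sketched with `numpy.linalg.eigh`, which is designed for symmetric matrices and already returns orthonormal eigenvectors as the columns of P. The sample matrix here is our own illustration, not one of the book's exercises:

```python
import numpy as np

# A sample real symmetric matrix (assumption: chosen only for illustration).
A = np.array([[4.0, 3.0], [3.0, 4.0]])

evals, P = np.linalg.eigh(A)              # eigenvalues ascending, columns of P orthonormal
D = np.diag(evals)

print(evals)                              # [1. 7.]
print(np.allclose(P @ D @ P.T, A))        # True: A = P D P^T
print(np.allclose(P.T @ P, np.eye(2)))    # True: P is orthogonal
```

Because `eigh` exploits symmetry, it never needs a separate Gram-Schmidt pass: repeated eigenvalues already come with an orthonormal set of eigenvectors.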
Solutions to Exercises
1. The eigenvalues are the solutions to the char-
acteristic equation det(A I) = 0, that is,
1
= 3 and
2
= 1
2.
1
= 2,
2
= 4
3.
1
= 1,
2
= 3,
3
= 3 4.
1
= 1,
2
=
10,
3
=
10.
5. Since the eigenvalues are λ₁ = 3 with eigenvector v₁ = (1, 2) and λ₂ = −2 with eigenvector v₂ = (−2, 1), then v₁ · v₂ = 0, so the eigenvectors are orthogonal.
6. Since the eigenvalues are λ₁ = 1 with eigenvector v₁ = (1, −1) and λ₂ = 5 with eigenvector v₂ = (1, 1), then v₁ · v₂ = 0, so the eigenvectors are orthogonal.
7. Since the eigenvalues are λ₁ = 1 with eigenvector v₁ = (1, 0, −1), λ₂ = 3 with eigenvector v₂ = (1, −2, 1), and λ₃ = −3 with eigenvector v₃ = (1, 1, 1), then v₁ · v₂ = v₁ · v₃ = v₂ · v₃ = 0, so the eigenvectors are pairwise orthogonal.
8. Since the eigenvalues are λ₁ = 1 with eigenvectors v₁ = (1, 0, −1) and v₂ = (0, 1, 0), and λ₂ = 3 with eigenvector v₃ = (1, 0, 1), then v₁ · v₃ = v₂ · v₃ = 0, so the eigenvectors corresponding to distinct eigenvalues are orthogonal.
9. The eigenvalues of A are λ₁ = 3 and λ₂ = 1 with multiplicity 2. Moreover, there are three linearly independent eigenvectors and the eigenspaces are V₃ = span{ (1, 0, 1) } and V₁ = span{ (1, 0, −1), (0, 1, 0) }. Consequently, dim(V₃) + dim(V₁) = 1 + 2 = 3.
10. The eigenspaces are V₁ = span{ (0, 1, 0) }, V₀ = span{ (1, 0, −1) }, and V₂ = span{ (1, 0, 1) }.
11. The eigenspaces are V₃ = span{ (3, −1, −1, 1) }, V₋₃ = span{ (0, 2, −1, 1) }, and V₁ = span{ (1, 1, 2, 0), (0, 0, 1, 1) }, so dim(V₃) + dim(V₋₃) + dim(V₁) = 1 + 1 + 2 = 4.
12. The eigenspaces are V₁ = span{ (1, 0, 0, 0) }, V₋₁ = span{ (0, 0, 1, 0), (0, 0, 0, 1) }, and V₂ = span{ (0, 1, 0, 0) }.
13. Since AᵗA = [√3/2, 1/2; −1/2, √3/2][√3/2, −1/2; 1/2, √3/2] = [1 0; 0 1], the inverse of A is Aᵗ, so the matrix A is orthogonal.
14. Since AᵗA = [1, √5/5; √5/5, 1], Aᵗ is not the inverse of A and hence A is not orthogonal.
15. Since AᵗA = [√2/2, √2/2, 0; −√2/2, √2/2, 0; 0, 0, 1][√2/2, −√2/2, 0; √2/2, √2/2, 0; 0, 0, 1] = [1 0 0; 0 1 0; 0 0 1], the inverse of A is Aᵗ, so the matrix A is orthogonal.
16. Since AᵗA = [1, 0, 1/3; 0, 8/9, 4/9; 1/3, 4/9, 11/9], the inverse of A is not Aᵗ, so the matrix A is not orthogonal.
17. The eigenvalues of the matrix A are 1 and 7 with corresponding orthogonal eigenvectors (1, −1) and (1, 1), respectively. An orthonormal pair of eigenvectors is (1/√2)(1, −1) and (1/√2)(1, 1). Let P = [1/√2, 1/√2; −1/√2, 1/√2], so that P⁻¹AP = D = [1 0; 0 7].
18. The eigenvalues of the matrix A are 3 and 7 with corresponding orthogonal eigenvectors (1, −1) and (1, 1), respectively. An orthonormal pair of eigenvectors is (1/√2)(1, −1) and (1/√2)(1, 1). Let P = [1/√2, 1/√2; −1/√2, 1/√2], so that P⁻¹AP = D = [3 0; 0 7].
19. If P = [1/√2, 1/√2; −1/√2, 1/√2], then P⁻¹AP = D = [4 0; 0 2].
20. If P = (1/√5)[2, 1; −1, 2], then P⁻¹AP = D = [2 0; 0 3].
21. The eigenvalues of A are 2, 2, and −1 with corresponding eigenvectors (1, −2, 1), (1, 0, −1), and (1, 1, 1), respectively. Since the eigenvectors are pairwise orthogonal, let P be the matrix with column vectors unit eigenvectors. That is, let

P = [1/√3, 1/√2, 1/√6; 1/√3, 0, −2/√6; 1/√3, −1/√2, 1/√6], so that P⁻¹AP = D = [−1 0 0; 0 2 0; 0 0 2].

22. If P = (1/√2)[1, −1, 0; 0, 0, √2; 1, 1, 0], then P⁻¹AP = D = [0 0 0; 0 2 0; 0 0 1].
23. Since A and B are orthogonal matrices, AAᵗ = BBᵗ = I, so that

(AB)(AB)ᵗ = AB(BᵗAᵗ) = A(BBᵗ)Aᵗ = AIAᵗ = AAᵗ = I.

Since the inverse of AB is (AB)ᵗ, AB is an orthogonal matrix. Similarly, (BA)(BA)ᵗ = I.
24. Suppose A is orthogonal, so A⁻¹ = Aᵗ. Then

det(A) = det(Aᵗ) = det(A⁻¹) = 1/det(A),

so (det(A))² = 1 and hence det(A) = ±1.
25. We need to show that the inverse of Aᵗ is (Aᵗ)ᵗ = A. Since A is orthogonal, AAᵗ = I, that is, A is the inverse of Aᵗ and hence Aᵗ is also orthogonal.
26. Suppose A is orthogonal, so A⁻¹ = Aᵗ. Then

(A⁻¹)⁻¹ = (Aᵗ)⁻¹ = (A⁻¹)ᵗ

and hence A⁻¹ is orthogonal.
27. a. Since cos²θ + sin²θ = 1, then

AᵗA = [cos θ, sin θ; −sin θ, cos θ][cos θ, −sin θ; sin θ, cos θ] = [1 0; 0 1],

so the matrix A is orthogonal.
b. Let A = [a b; c d]. Since A is orthogonal, then

[a b; c d][a c; b d] = [a² + b², ac + bd; ac + bd, c² + d²] = [1 0; 0 1], so a² + b² = 1, ac + bd = 0, c² + d² = 1.

Let v₁ = (a, b) and let θ be the angle that v₁ makes with the horizontal axis. Since a² + b² = 1, v₁ is a unit vector, so a = cos θ and b = sin θ. Now let v₂ = (c, d). Since ac + bd = 0, v₁ and v₂ are orthogonal. There are two cases.
Case 1. c = cos(θ + π/2) = −sin θ and d = sin(θ + π/2) = cos θ, so that A = [cos θ, sin θ; −sin θ, cos θ].
Case 2. c = cos(θ − π/2) = sin θ and d = sin(θ − π/2) = −cos θ, so that A = [cos θ, sin θ; sin θ, −cos θ].
c. If det(A) = 1, then by part (b), T(v) = Av with A = [cos θ, sin θ; −sin θ, cos θ]. Therefore

T(v) = [cos θ, sin θ; −sin θ, cos θ][x; y] = [cos(−θ), −sin(−θ); sin(−θ), cos(−θ)][x; y],

which is a rotation of the vector by −θ radians. If det(A) = −1, then by part (b), T(v) = A′v with A′ = [cos θ, sin θ; sin θ, −cos θ]. Observe that

A′ = [cos θ, −sin θ; sin θ, cos θ][1, 0; 0, −1].

Hence, in this case, T is a reflection through the x-axis followed by a rotation through the angle θ.
28. Suppose A and B are orthogonally similar, so B = PᵗAP, where P is an orthogonal matrix. Since P is orthogonal, P⁻¹ = Pᵗ.
a. First suppose A is symmetric, so Aᵗ = A. Then Bᵗ = (PᵗAP)ᵗ = PᵗAᵗP = PᵗAP = B and hence B is symmetric. Conversely, suppose B is symmetric, so B = Bᵗ. Since B = PᵗAP = P⁻¹AP, then A = PBP⁻¹ = PBPᵗ. Then Aᵗ = (PBPᵗ)ᵗ = PBᵗPᵗ = PBPᵗ = A and hence A is symmetric.
b. First suppose A is orthogonal, so A⁻¹ = Aᵗ. Then B⁻¹ = (PᵗAP)⁻¹ = P⁻¹A⁻¹(Pᵗ)⁻¹ = PᵗAᵗP = (PᵗAP)ᵗ = Bᵗ and hence B is orthogonal. Conversely, suppose B is orthogonal, so B⁻¹ = Bᵗ. Since B = PᵗAP = P⁻¹AP, then A = PBP⁻¹ = PBPᵗ. Then A⁻¹ = (PBPᵗ)⁻¹ = PB⁻¹Pᵗ = PBᵗPᵗ = (PBPᵗ)ᵗ = Aᵗ and hence A is orthogonal.
29. Suppose D = PᵗAP, where P is an orthogonal matrix, that is, P⁻¹ = Pᵗ. Then

Dᵗ = (PᵗAP)ᵗ = PᵗAᵗP.

Since D is a diagonal matrix, Dᵗ = D, so we also have D = PᵗAᵗP and hence PᵗAP = PᵗAᵗP. Then P(PᵗAP)Pᵗ = P(PᵗAᵗP)Pᵗ. Since PPᵗ = I, we have that A = Aᵗ, and hence the matrix A is symmetric.
30. Suppose A⁻¹ exists and D = PᵗAP, where D is a diagonal matrix and P is orthogonal. Since D = PᵗAP = P⁻¹AP, then A = PDP⁻¹. Then A⁻¹ = (PDP⁻¹)⁻¹ = PD⁻¹P⁻¹, so D⁻¹ = P⁻¹A⁻¹P = PᵗA⁻¹P and hence A⁻¹ is orthogonally diagonalizable.
31. a. If v = (v₁, v₂, ..., vₙ), then vᵗv = v₁² + ... + vₙ².
b. Consider the equation Av = λv. Now take the transpose of both sides to obtain vᵗAᵗ = λvᵗ. Since A is skew-symmetric this is equivalent to

vᵗ(−A) = λvᵗ.

Now, right multiplication of both sides by v gives −vᵗ(Av) = λvᵗv or, equivalently, −vᵗ(λv) = λvᵗv. Hence

2λvᵗv = 0, so that by part (a), 2λ(v₁² + ... + vₙ²) = 0, that is, λ = 0 or v = 0.

Since v is an eigenvector, v ≠ 0, and hence λ = 0. Therefore, the only eigenvalue of A is λ = 0.

6.7 Application: Quadratic Forms 165
Exercise Set 6.7
1. Let x = (x, y), A = [27 9; 9 3], and b = (1, −3). Then the quadratic equation is equivalent to xᵗAx + bᵗx = 0. The next step is to diagonalize the matrix A. The eigenvalues of A are 30 and 0 with corresponding eigenvectors (3, 1) and (1, −3), respectively. The eigenvectors are orthogonal; normalizing them gives the unit vectors v₁ = (3√10/10, √10/10) and v₂ = (√10/10, −3√10/10). The matrix with column vectors v₁ and v₂ is orthogonal, but its determinant is −1. By interchanging the column vectors the resulting matrix is orthogonal and is a rotation. Let

P = [√10/10, 3√10/10; −3√10/10, √10/10] and D = [0 0; 0 30],

so that the equation is transformed to

(x′)ᵗDx′ + bᵗPx′ = 0, that is, 30(y′)² + √10 x′ = 0.
2. Let x = (x, y), A = [2 4; 4 8], and b = (−2, 1). Then the quadratic equation is equivalent to xᵗAx + bᵗx = 0. The eigenvalues of A are 10 and 0 with corresponding eigenvectors (1, 2) and (−2, 1), respectively. Notice that the eigenvectors are orthogonal. Let v₁ = (√5/5)(1, 2) and v₂ = (√5/5)(−2, 1). The matrix with column vectors v₁ and v₂ is orthogonal with determinant equal to 1. Let

P = (√5/5)[1, −2; 2, 1] and D = [10 0; 0 0],

so that the equation is transformed to

(x′)ᵗDx′ + bᵗPx′ = 0, that is, 10(x′)² + √5 y′ = 0.
3. Let x = (x, y), A = [12, −4; −4, 12], and b = (0, 0). The matrix form of the quadratic equation is xᵗAx − 8 = 0. The eigenvalues of A are 16 and 8 with orthogonal eigenvectors (1, −1) and (1, 1). Orthonormal eigenvectors are (√2/2)(1, −1) and (√2/2)(1, 1), so that P = [√2/2, √2/2; −√2/2, √2/2] is an orthogonal matrix whose action on a vector is a rotation. If D = [16 0; 0 8], then the quadratic equation is transformed to (x′)ᵗDx′ − 8 = 0, that is, 2(x′)² + (y′)² = 1.
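The diagonalization step for a quadratic form can be done with `numpy.linalg.eigh`. Here it is applied to a form like the one in Exercise 3, with A = [12, −4; −4, 12] (the sign of the off-diagonal entries is an assumption recovered from the transformed equation):

```python
import numpy as np

# Quadratic form 12x^2 - 8xy + 12y^2 = x^T A x with symmetric A.
A = np.array([[12.0, -4.0], [-4.0, 12.0]])

evals, P = np.linalg.eigh(A)    # eigenvalues ascending: [8., 16.]
print(evals)

# In rotated coordinates x = P x' the form becomes 8(u)^2 + 16(v)^2,
# so 12x^2 - 8xy + 12y^2 = 8 is the ellipse 2(x')^2 + (y')^2 = 1
# after relabeling the principal axes.
```

Since P is orthogonal, the change of variables is a rotation (or rotation plus reflection), so the conic's shape is unchanged; only its orientation is.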
4. Let x = (x, y), A = [11, −3; −3, 19], b = (2, 4), and f = −12. Then the quadratic equation is equivalent to xᵗAx + bᵗx + f = 0. The eigenvalues of A are 20 and 10 with corresponding eigenvectors (1, −3) and (3, 1), respectively. Notice that the eigenvectors are orthogonal but not orthonormal. Orthonormal eigenvectors are v₁ = (1/√10)(1, −3) and v₂ = (1/√10)(3, 1). The matrix with column vectors v₁ and v₂ is orthogonal with determinant equal to 1. Let

P = (1/√10)[1, 3; −3, 1] and D = [20 0; 0 10],

so that the equation is transformed to

(x′)ᵗDx′ + bᵗPx′ + f = 0, that is, 20(x′)² + 10(y′)² − √10 x′ + √10 y′ − 12 = 0.
5. The transformed quadratic equation is (x′)²/2 − (y′)²/4 = 1.
6. The transformed quadratic equation is (y′)²/2 − (x′)²/2 = 1.
7. a. [x y][4 0; 0 16][x; y] − 16 = 0 b. The action of the matrix

P = [cos(π/4), −sin(π/4); sin(π/4), cos(π/4)] = [√2/2, −√2/2; √2/2, √2/2]

on a vector is a counterclockwise rotation of 45°. Then P[4 0; 0 16]Pᵗ = [10, −6; −6, 10], so the quadratic equation that describes the original conic rotated 45° is

[x y][10, −6; −6, 10][x; y] − 16 = 0, that is, 10x² − 12xy + 10y² − 16 = 0.
8. a. [x y][1, 0; 0, −1][x; y] − 1 = 0 b. The action of the matrix

P = [cos(π/6), sin(π/6); −sin(π/6), cos(π/6)] = [√3/2, 1/2; −1/2, √3/2]

on a vector is a clockwise rotation of 30°. Then P[1, 0; 0, −1]Pᵗ = [1/2, −√3/2; −√3/2, −1/2], so the quadratic equation that describes the original conic rotated 30° is

[x y][1/2, −√3/2; −√3/2, −1/2][x; y] − 1 = 0, that is, (1/2)x² − √3 xy − (1/2)y² − 1 = 0.
9. a. 7x² + 6√3 xy + 13y² − 16 = 0 b. (3/4)x² + (√3/2)xy + (1/4)y² + (1/2)x − (√3/2)y = 0 c. (3/4)(x − 2)² + (√3/2)(x − 2)(y − 1) + (1/4)(y − 1)² + (1/2)(x − 2) + (√3/2)(y − 1) = 0
Exercise Set 6.8
1. The singular values of the matrix are σ₁ = √λ₁ and σ₂ = √λ₂, where λ₁ and λ₂ are the eigenvalues of AᵗA. Then

AᵗA = [2 1; 2 1][2 2; 1 1] = [5 5; 5 5],

so σ₁ = √10 and σ₂ = 0.
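The definition used here (singular values as square roots of the eigenvalues of AᵗA) can be checked against `numpy.linalg.svd`, using the matrix of Exercise 1:

```python
import numpy as np

# Exercise 1: singular values are square roots of the eigenvalues of A^T A.
A = np.array([[2.0, 2.0], [1.0, 1.0]])

evals = np.linalg.eigvalsh(A.T @ A)            # ascending: [0., 10.]
sv_from_evals = np.sqrt(np.maximum(evals[::-1], 0.0))   # clamp tiny negatives
sv = np.linalg.svd(A, compute_uv=False)        # descending: [sqrt(10), 0]

print(sv)
```

The clamping guards against eigenvalues like −1e-16 that floating-point arithmetic can produce for a singular AᵗA.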
2. The singular values of the matrix are σ₁ = √λ₁ and σ₂ = √λ₂, where λ₁ and λ₂ are the eigenvalues of AᵗA. Then

AᵗA = [1, −1; 2, 2][1, 2; −1, 2] = [2 0; 0 8],

so σ₁ = 2√2 and σ₂ = √2.
6.8 Application: Singular Value Decomposition 167
3. Since

AᵗA = [1, 2, 2; 0, 1, 1; −2, 1, 1][1, 0, −2; 2, 1, 1; 2, 1, 1] = [9 4 2; 4 2 2; 2 2 6]

has eigenvalues 0, 5 and 12, the singular values of A are σ₁ = 2√3, σ₂ = √5, and σ₃ = 0.
4. Since

AᵗA = [1, 1, 0; 0, 0, 1; 1, 1, 0][1, 0, 1; 1, 0, 1; 0, 1, 0] = [2 0 2; 0 1 0; 2 0 2]

has eigenvalues 4, 1, and 0, the singular values of A are σ₁ = 2, σ₂ = 1, and σ₃ = 0.
5. Step 1: The eigenvalues of AᵗA are 64 and 4 with corresponding eigenvectors (1, 1) and (1, −1), which are orthogonal but are not orthonormal. An orthonormal pair of eigenvectors is v₁ = (1/√2, 1/√2) and v₂ = (1/√2, −1/√2). Let

V = [1/√2, 1/√2; 1/√2, −1/√2].

Step 2: The singular values of A are σ₁ = √64 = 8 and σ₂ = √4 = 2.
Step 3: The matrix U is defined by

U = [(1/σ₁)Av₁, (1/σ₂)Av₂] = [1/√2, 1/√2; 1/√2, −1/√2].

Step 4: The SVD of A is

A = [1/√2, 1/√2; 1/√2, −1/√2][8 0; 0 2][1/√2, 1/√2; 1/√2, −1/√2].
6. Step 1: The eigenvalues of AᵗA are 20 and 5 with corresponding eigenvectors (1, 0) and (0, 1), which are orthonormal. Let

V = [1 0; 0 1].

Step 2: The singular values of A are σ₁ = 2√5 and σ₂ = √5, so Σ = [2√5, 0; 0, √5].
Step 3: The matrix U is defined by

U = [(1/σ₁)Av₁, (1/σ₂)Av₂] = (1/5)[√5, 2√5; 2√5, −√5].

Step 4: The SVD of A is

A = (1/5)[√5, 2√5; 2√5, −√5][2√5, 0; 0, √5][1 0; 0 1].
7. The SVD of A is

A = [0 1; 1 0][2 0 0; 0 1 0][0, 1/√2, 1/√2; 1, 0, 0; 0, −1/√2, 1/√2].

8. The SVD of A is

A = [1 0; 0 1][√6, 0, 0; 0, √2, 0][√6/3, √6/6, √6/6; 0, √2/2, −√2/2; −√3/3, √3/3, √3/3].
9. a. If x = (x₁, x₂), then the solution to the linear system Ax = (2, 2) is x₁ = 2, x₂ = 0. b. The solution to the linear system Ax = (2, 2.000000001) is x₁ = 1, x₂ = 1. c. The condition number of the matrix A is σ₁/σ₂ ≈ 6324555, which is relatively large. Notice that a small change in the vector b in the linear system Ax = b results in a significant difference in the solutions.
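The condition number σ₁/σₙ used in parts (c) can be computed from the singular values or with `numpy.linalg.cond`. The nearly singular matrix below is our own illustration (the book's matrix is not shown in this excerpt):

```python
import numpy as np

# A nearly singular matrix (assumption: chosen only to illustrate a large condition number).
A = np.array([[1.0, 1.0],
              [1.0, 1.0000001]])

s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]                        # sigma_max / sigma_min
print(cond)                                # on the order of 1e7
print(np.isclose(np.linalg.cond(A, 2), cond))
```

A rule of thumb: solving Ax = b can lose roughly log10(cond) digits of accuracy, which is why a perturbation in b of size 1e-9 can visibly change the solution here.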
10. a. If x = (x₁, x₂, x₃), then the solution to the linear system Ax = (1, 3, 4) is x₁ = 5/4, x₂ = 3/2, x₃ = 1. b. The solution to the linear system Bx = (1, 3, 4) is x₁ ≈ 1.28, x₂ ≈ 1.6, x₃ ≈ 0.9. c. The singular values of the matrix A are approximately σ₁ ≈ 3.5, σ₂ ≈ 2.3, and σ₃ ≈ 0.96. The condition number of the matrix A is σ₁/σ₃ ≈ 3.7, which is relatively small. Notice that a small change in the entries of the matrix A results in only a small change in the solution.
Review Exercises Chapter 6
1. a. Since the set B contains three vectors in R³, it is sufficient to show that B is linearly independent. Since

[1 1 2; 0 0 1; 1 0 0] reduces to [1 0 0; 0 1 0; 0 0 1],

the only solution to the homogeneous linear system

[1 1 2; 0 0 1; 1 0 0](c₁, c₂, c₃) = (0, 0, 0)

is the trivial solution, so B is linearly independent and hence is a basis for R³. b. Notice that the vectors in B are not pairwise orthogonal. Using the Gram-Schmidt process an orthonormal basis is

{ (0, 1, 0), (√2/2, 0, √2/2), (√2/2, 0, −√2/2) }.

c. Again the spanning vectors for W are not orthogonal, which is required to find proj_W v. The Gram-Schmidt process yields the orthogonal vectors (1, 0, 1) and (−1/2, 0, 1/2), which also span W. Then, with v = (2, 1, 1),

proj_W v = (⟨v, (1, 0, 1)⟩/⟨(1, 0, 1), (1, 0, 1)⟩)(1, 0, 1) + (⟨v, (−1/2, 0, 1/2)⟩/⟨(−1/2, 0, 1/2), (−1/2, 0, 1/2)⟩)(−1/2, 0, 1/2) = (2, 0, 1).
2. a. The spanning vectors for W are not linearly independent, so they can be trimmed to a basis for the span. Since

[1 3 3 0; 2 0 2 0; 2 0 1 1; 2 0 1 1] reduces to [1 0 0 1; 0 1 0 2/3; 0 0 1 −1; 0 0 0 0]

with pivots in columns one, two, and three, a basis for W is

{ (1, 2, 2, 2), (3, 0, 0, 0), (3, 2, 1, 1) }.

b. W⊥ = { (0, 0, −t, t) | t ∈ R } = span{ (0, 0, −1, 1) }. c. An orthonormal basis for W is

{ (√13/13)(1, 2, 2, 2), (√6/6)(0, 2, −1, −1), (√39/39)(6, −1, −1, −1) }.

d. An orthonormal basis for W⊥ is { (√2/2)(0, 0, −1, 1) }. e. 4 = dim(R⁴) = 3 + 1 = dim(W) + dim(W⊥) f. proj_W v = (2, 0, 2, 2)
3. a. If (x, y, z) ∈ W, then (x, y, z) · (a, b, c) = ax + by + cz = 0, so (a, b, c) is orthogonal to every vector of W and W⊥ = span{ (a, b, c) }. So W⊥ is the line in the direction of (a, b, c), which is perpendicular (the normal vector) to the plane ax + by + cz = 0.
c. proj_{W⊥} v = (⟨(x₁, x₂, x₃), (a, b, c)⟩/⟨(a, b, c), (a, b, c)⟩)(a, b, c) = ((ax₁ + bx₂ + cx₃)/(a² + b² + c²))(a, b, c)
d. ||proj_{W⊥} v|| = |ax₁ + bx₂ + cx₃|/√(a² + b² + c²). Note that this norm is the distance from the point (x₁, x₂, x₃) to the plane.
4. a. Let p(x) = x and q(x) = x² − x + 1. Then

⟨p, q⟩ = ∫₋₁¹ (x³ − x² + x) dx = −2/3.

b. ||p − q|| = √⟨p − q, p − q⟩ = √(∫₋₁¹ (x⁴ − 4x³ + 6x² − 4x + 1) dx) = (4√10)/5 c. Since ⟨p, q⟩ ≠ 0, the polynomials are not orthogonal. d. cos θ = ⟨p, q⟩/(||p|| ||q||) = −√165/33 e. proj_q p = (⟨p, q⟩/⟨q, q⟩)q = −(5/33)x² + (5/33)x − 5/33 f. W⊥ = { ax² + c | a, c ∈ R }
5. a. Since ⟨1, cos x⟩ = ∫₋π^π cos x dx = 0, ⟨1, sin x⟩ = ∫₋π^π sin x dx = 0, and ⟨cos x, sin x⟩ = 0, the functions 1, cos x, and sin x are pairwise orthogonal. b. Since ||1||² = 2π and ||cos x||² = ||sin x||² = π, an orthonormal basis for W is { 1/√(2π), (1/√π) cos x, (1/√π) sin x }.
c. proj_W x² = ⟨1/√(2π), x²⟩(1/√(2π)) + ⟨(1/√π) cos x, x²⟩(1/√π) cos x + ⟨(1/√π) sin x, x²⟩(1/√π) sin x = π²/3 − 4 cos x
d. ||proj_W x²||² = 2π⁵/9 + 16π
6. a. Since v = c₁v₁ + ⋯ + cₙvₙ and B is orthonormal, ⟨v, vᵢ⟩ = cᵢ, so

(c₁, c₂, ..., cₙ) = (⟨v, v₁⟩, ⟨v, v₂⟩, ..., ⟨v, vₙ⟩).

b. proj_{vᵢ} v = (⟨v, vᵢ⟩/⟨vᵢ, vᵢ⟩)vᵢ = ⟨v, vᵢ⟩vᵢ = cᵢvᵢ c. The coordinates are given by c₁ = ⟨v, v₁⟩ = 1, c₂ = ⟨v, v₂⟩ = 2/√6, and c₃ = ⟨v, v₃⟩ = (1/√6)(1/2 + 2/3).
7. Let B = {v₁, v₂, ..., vₙ} be an orthonormal basis with [v]_B = (c₁, c₂, ..., cₙ), so there are scalars c₁, ..., cₙ such that v = c₁v₁ + c₂v₂ + ⋯ + cₙvₙ. Using the properties of an inner product and the fact that the vectors are orthonormal,

||v|| = √⟨v, v⟩ = √⟨c₁v₁ + ⋯ + cₙvₙ, c₁v₁ + ⋯ + cₙvₙ⟩ = √(⟨c₁v₁, c₁v₁⟩ + ⋯ + ⟨cₙvₙ, cₙvₙ⟩)
= √(c₁²⟨v₁, v₁⟩ + ⋯ + cₙ²⟨vₙ, vₙ⟩) = √(c₁² + ⋯ + cₙ²).

If the basis is only orthogonal, then ||v|| = √(c₁²⟨v₁, v₁⟩ + ⋯ + cₙ²⟨vₙ, vₙ⟩).
8. Let v = c₁v₁ + ⋯ + cₘvₘ. Consider

0 ≤ || v − Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ ||² = ⟨ v − Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ, v − Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ ⟩
= ⟨v, v⟩ − 2⟨ v, Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ ⟩ + ⟨ Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ, Σ_{i=1}^{m} ⟨v, vᵢ⟩vᵢ ⟩
= ||v||² − 2 Σ_{i=1}^{m} ⟨v, vᵢ⟩² + Σ_{i=1}^{m} ⟨v, vᵢ⟩² = ||v||² − Σ_{i=1}^{m} ⟨v, vᵢ⟩²,

so

Σ_{i=1}^{m} ⟨v, vᵢ⟩² ≤ ||v||².
9. a. Since

A = [1 0 1; 1 1 2; 1 0 −1; 1 1 2] reduces to [1 0 0; 0 1 0; 0 0 1; 0 0 0],

the column vectors v₁ = (1, 1, 1, 1), v₂ = (0, 1, 0, 1), and v₃ = (1, 2, −1, 2) are linearly independent, so B = {v₁, v₂, v₃} is a basis for col(A).
b. An orthogonal basis is B₁ = { (1, 1, 1, 1), (−1/2, 1/2, −1/2, 1/2), (1, 0, −1, 0) }.
c. An orthonormal basis is B₂ = { (1/2, 1/2, 1/2, 1/2), (−1/2, 1/2, −1/2, 1/2), (√2/2, 0, −√2/2, 0) }.
d. Q = [1/2, −1/2, √2/2; 1/2, 1/2, 0; 1/2, −1/2, −√2/2; 1/2, 1/2, 0], R = [2 1 2; 0 1 2; 0 0 √2]
e. The matrix has the QR-factorization A = QR.
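The factorization in parts (d)-(e) can be checked with `numpy.linalg.qr` (the matrix is A as reconstructed here; NumPy's Q and R may differ from a hand computation by the signs of corresponding columns of Q and rows of R):

```python
import numpy as np

# Review Exercise 9: A = QR with orthonormal columns in Q.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 0.0, -1.0],
              [1.0, 1.0, 2.0]])
Q, R = np.linalg.qr(A)

print(np.allclose(Q @ R, A))               # True
print(np.allclose(Q.T @ Q, np.eye(3)))     # True
print(np.abs(np.diag(R)))                  # [2., 1., sqrt(2)] up to sign
```

The absolute values of the diagonal of R are uniquely determined, which makes them a convenient fingerprint for comparing against the hand-computed R.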
10. Suppose

λ₁(c₁v₁) + λ₂(c₂v₂) + ⋯ + λₙ(cₙvₙ) = (λ₁c₁)v₁ + (λ₂c₂)v₂ + ⋯ + (λₙcₙ)vₙ = 0.

Since B is a basis, λ₁c₁ = λ₂c₂ = ⋯ = λₙcₙ = 0. Since cᵢ ≠ 0 for i = 1, ..., n, then λᵢ = 0 for i = 1, ..., n, so B₁ is linearly independent and hence a basis. Since

⟨cᵢvᵢ, cⱼvⱼ⟩ = cᵢcⱼ⟨vᵢ, vⱼ⟩ = 0 for i ≠ j and ⟨cᵢvᵢ, cᵢvᵢ⟩ = cᵢ²⟨vᵢ, vᵢ⟩ ≠ 0,

B₁ is an orthogonal basis. The basis B₁ is orthonormal if and only if

1 = ||cᵢvᵢ|| = |cᵢ| ||vᵢ||, that is, |cᵢ| = 1/||vᵢ||

for all i.
Chapter Test Chapter 6
1. T 2. T 3. F. W ∩ W⊥ = {0}
4. F. Every set of pairwise orthogonal vectors is also linearly independent.
5. F. ||v₁|| = √(2² + 1² + (−4)² + 3²) = √30
6. T 7. T
8. F. ⟨v₁, v₂⟩ = 4 − 1 + 8 − 3 = 8 ≠ 0
9. F. cos θ = ⟨v₁, v₂⟩/(||v₁|| ||v₂||) = 4/(3√15)
10. F. proj_{v₁} v₂ = (⟨v₁, v₂⟩/⟨v₁, v₁⟩)v₁ = (4/15)(2, 1, −4, 3)
11. T 12. T
172 Chapter 6 Linear Transformations
13. T 14. T
15. F. If v1 = (1, 0, −1) and v2 = (1, 1, 1), then W⊥ = span{(1, −2, 1)}.
16. T 17. T 18. T
19. F. ⟨1, x² − 1⟩ = ∫_{−1}^{1} (x² − 1) dx = −4/3
20. T
21. F. ‖1/2‖ = √(∫_{−1}^{1} (1/4) dx) = √2/2
22. T
23. F. A basis for W⊥ contains a single vector, so dim(W⊥) = 1.
35. T
36. F. If dim(W) = dim(W⊥), then the sum cannot be 5.
37. T 38. T
39. F. If u = v, then proj_v u = proj_u v, but the vectors are linearly dependent.
40. T
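Several of the computations above can be double-checked exactly. The vector v1 = (2, 1, −4, 3) and the value ⟨v1, v2⟩ = −8 follow the reconstruction of items 5, 8, and 10, so treat them as assumptions:

```python
# Exact checks for Chapter Test items 5, 8, 10 and 19. The vector v1 and
# the inner product value -8 are reconstructed above (assumptions).
from fractions import Fraction

v1 = (2, 1, -4, 3)
norm_sq = sum(x * x for x in v1)
assert norm_sq == 30                          # item 5: ||v1|| = sqrt(30)

inner = -4 + 1 - 8 + 3                        # item 8
assert inner == -8

coeff = Fraction(inner, norm_sq)              # item 10: projection coefficient
assert coeff == Fraction(-4, 15)

F = lambda x: Fraction(x) ** 3 / 3 - x        # antiderivative of x^2 - 1
assert F(1) - F(-1) == Fraction(-4, 3)        # item 19: <1, x^2 - 1>
```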
A Preliminaries
Exercise Set A.1
1. A ∩ B = {−2, 2, 9}
2. A ∪ B = {−4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
3. A × B = {(a, b) | a ∈ A, b ∈ B}. There are 9 · 9 = 81 ordered pairs in A × B.
4. (A ∪ B)ᶜ = {. . . , −6, −5, 11, 12, 13, . . .}
5. A\B = {−4, 0, 1, 3, 5, 7} 6. B\A = {−3, −1, 4, 6, 8, 10}
7. A ∩ B = [0, 3] 8. (A ∪ B)ᶜ = (−∞, −11] ∪ (8, ∞)
9. A\B = (−11, 0) 10. C\A = (3, ∞)
11. A\C = (−11, −9) 12. (A ∪ B)ᶜ ∩ C = (8, ∞)
13. (A ∪ B)\C = (−11, −9) 14. B\(A ∩ C) = (3, 8]
15.–20. The solutions are the graphs of the indicated regions in the xy-plane (figures omitted).
21. (A ∩ B) ∩ C = {5} = A ∩ (B ∩ C)
22. (A ∪ B) ∪ C = A ∪ (B ∪ C) = {1, 2, 3, 5, 7, 9, 10, 11, 14, 20, 30, 37}
23. A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) = {1, 2, 5, 7}
24. A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) = {1, 2, 3, 5, 7, 9, 11, 14}
25. A\(B ∪ C) = (A\B) ∩ (A\C) = {3, 9, 11}
26. A\(B ∩ C) = (A\B) ∪ (A\C) = {1, 2, 3, 7, 9, 11}
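The difference laws in 25 and 26 are easy to sanity-check with Python's built-in set operations. The sets below are samples, since the exercise's A, B, and C are not reproduced here:

```python
# Checking the set-difference laws of exercises 25-26 on sample sets
# (the actual sets A, B, C of the exercise are assumptions/samples).
A = {1, 2, 3, 5, 7, 9, 11}
B = {2, 5, 7, 14}
C = {1, 2, 5, 20, 30}

assert A - (B | C) == (A - B) & (A - C)   # exercise 25 pattern
assert A - (B & C) == (A - B) | (A - C)   # exercise 26 pattern
```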
27. Let x ∈ (Aᶜ)ᶜ. Then x is in the complement of Aᶜ, that is, x ∈ A, so (Aᶜ)ᶜ ⊆ A. If x ∈ A, then x is not
in Aᶜ, that is, x ∈ (Aᶜ)ᶜ, so A ⊆ (Aᶜ)ᶜ. Therefore, A = (Aᶜ)ᶜ.
28. Since an element of the universal set is either in the set A or in the complement Aᶜ, the universal
set is A ∪ Aᶜ.
29. Let x ∈ A ∩ B. Then x ∈ A and x ∈ B, so x ∈ B and x ∈ A. Hence x ∈ B ∩ A. Similarly, we can show
that if x ∈ B ∩ A, then x ∈ A ∩ B.
30. An element x ∈ A ∪ B if and only if
x ∈ A or x ∈ B ⟺ x ∈ B or x ∈ A ⟺ x ∈ B ∪ A,
and hence, A ∪ B = B ∪ A.
31. Let x ∈ (A ∩ B) ∩ C. Then (x ∈ A and x ∈ B) and x ∈ C. So x ∈ A and (x ∈ B and x ∈ C), and hence,
(A ∩ B) ∩ C ⊆ A ∩ (B ∩ C). Similarly, we can show that A ∩ (B ∩ C) ⊆ (A ∩ B) ∩ C.
32. An element x ∈ (A ∪ B) ∪ C if and only if
(x ∈ A or x ∈ B) or x ∈ C ⟺ x ∈ A or (x ∈ B or x ∈ C) ⟺ x ∈ A ∪ (B ∪ C),
and hence, (A ∪ B) ∪ C = A ∪ (B ∪ C).
33. Let x ∈ A ∪ (B ∩ C). Then x ∈ A or x ∈ B ∩ C, so x ∈ A or (x ∈ B and x ∈ C). Hence, (x ∈ A or
x ∈ B) and (x ∈ A or x ∈ C). Therefore, x ∈ (A ∪ B) ∩ (A ∪ C), so we have that A ∪ (B ∩ C) ⊆
(A ∪ B) ∩ (A ∪ C). Similarly, we can show that (A ∪ B) ∩ (A ∪ C) ⊆ A ∪ (B ∩ C).
34. Suppose x ∈ A\(B ∩ C). Then (x ∈ A) and x ∉ B ∩ C, so (x ∈ A) and (x ∉ B or x ∉ C), and
hence, x ∈ (A\B) or x ∈ (A\C). Therefore, A\(B ∩ C) ⊆ (A\B) ∪ (A\C). Similarly, we can show that
(A\B) ∪ (A\C) ⊆ A\(B ∩ C).
35. Let x ∈ A\B. Then (x ∈ A) and x ∉ B, so (x ∈ A) and x ∈ Bᶜ. Hence, A\B ⊆ A ∩ Bᶜ. Similarly, if
x ∈ A ∩ Bᶜ, then (x ∈ A) and (x ∉ B), so x ∈ A\B. Hence A ∩ Bᶜ ⊆ A\B.
36. We have that (A ∪ B) ∩ Aᶜ = (A ∩ Aᶜ) ∪ (B ∩ Aᶜ) = ∅ ∪ (B ∩ Aᶜ) = B\A.
37. Let x ∈ (A ∪ B)\(A ∩ B). Then (x ∈ A or x ∈ B) and x ∉ A ∩ B, that is,
(x ∈ A or x ∈ B) and (x ∉ A or x ∉ B). Since an element cannot be both in a set and not in a set, we have
that (x ∈ A and x ∉ B) or (x ∈ B and x ∉ A), so (A ∪ B)\(A ∩ B) ⊆ (A\B) ∪ (B\A). Similarly, we can show
that (A\B) ∪ (B\A) ⊆ (A ∪ B)\(A ∩ B).
38. We have that A\(A\B) = A\(A ∩ Bᶜ) = (A\A) ∪ (A\Bᶜ) = ∅ ∪ (A ∩ B) = A ∩ B.
39. Let (x, y) ∈ A × (B ∩ C). Then x ∈ A and (y ∈ B and y ∈ C). So (x, y) ∈ A × B and (x, y) ∈ A × C, and
hence, (x, y) ∈ (A × B) ∩ (A × C). Therefore, A × (B ∩ C) ⊆ (A × B) ∩ (A × C). Similarly,
(A × B) ∩ (A × C) ⊆ A × (B ∩ C).
40. We have that
(A\B) ∪ (B\A) = (A ∩ Bᶜ) ∪ (B ∩ Aᶜ) = [(A ∩ Bᶜ) ∪ B] ∩ [(A ∩ Bᶜ) ∪ Aᶜ]
= [(A ∪ B) ∩ (Bᶜ ∪ B)] ∩ [(A ∪ Aᶜ) ∩ (Bᶜ ∪ Aᶜ)]
= (A ∪ B) ∩ (Bᶜ ∪ Aᶜ) = [(A ∪ B) ∩ Bᶜ] ∪ [(A ∪ B) ∩ Aᶜ]
= [(A ∩ Bᶜ) ∪ (B ∩ Bᶜ)] ∪ [(A ∩ Aᶜ) ∪ (B ∩ Aᶜ)]
= (A\B) ∪ (B\A).
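The symmetric-difference identity behind exercises 37 and 40 can be verified by brute force over all pairs of subsets of a small universe; a sketch:

```python
# Brute-force check of (A ∪ B) \ (A ∩ B) = (A \ B) ∪ (B \ A) over all
# pairs of subsets of a 4-element universe.
from itertools import combinations

U = {1, 2, 3, 4}
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        assert (A | B) - (A & B) == (A - B) | (B - A)
```

Since the identity involves only two sets, checking every pair of subsets of a 4-element universe exercises all Boolean combinations that can occur.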
Exercise Set A.2
1. Since for each first coordinate there is a unique
second coordinate, then f is a function.
2. Since f(1) = 2 = f(4), then f is not one-to-
one.
3. Since there is no x such that f(x) = 14, the function is not onto. The range of f is the set
{2, 1, 3, 9, 11}.
4. f(A) = {2, 3}
5. The inverse image is the set of all numbers that are mapped to 2 by the function f, that is,
f⁻¹({2}) = {1, 4}.
6. f⁻¹(f({1})) = f⁻¹({2}) = {1, 4}
7. Since f(1) = 2 = f(4), then f is not one-to-one and hence, does not have an inverse.
8. Since Y contains more elements than X, then it is not possible.
9. To define a function that is one-to-one we need to use all the elements of X and ensure that if a ≠ b,
then f(a) ≠ f(b). For example, {(1, 2), (2, 1), (3, 3), (4, 5), (5, 9), (6, 11)}.
10. If g : Y → X is defined by {(2, 1), (1, 2), (3, 3), (5, 4), (9, 5), (11, 6), (14, 6)}, then g is onto.
11. Since f(A ∪ B) = f((−3, 7)) = [0, 49) and f(A) ∪ f(B) = [0, 25] ∪ [0, 49) = [0, 49), the two sets
are equal.
12. f⁻¹(C ∪ D) = f⁻¹([1, ∞)) = (−∞, −1] ∪ [1, ∞) = f⁻¹(C) ∪ f⁻¹(D)
13. Since f(A ∩ B) = f({0}) = {0} and f(A) ∩ f(B) = [0, 4] ∩ [0, 4] = [0, 4], then f(A ∩ B) ⊆ f(A) ∩ f(B),
but the two sets are not equal.
14. We have that g(A ∩ B) = [4, 25) = g(A) ∩ g(B). The function g is one-to-one but f is not
one-to-one.
15. To find the inverse let y = ax + b and solve for x in terms of y. That is, x = (y − b)/a. The inverse function is
commonly written using the same independent variable that is used for f, so f⁻¹(x) = (x − b)/a.
16. Suppose that x1 < x2. Then x1⁵ < x2⁵ and 2x1 < 2x2, so x1⁵ + 2x1 < x2⁵ + 2x2 and hence, f is one-to-one.
Therefore, the function f has an inverse.
17. Several iterations give
f⁽¹⁾(x) = f(x) = −x + c, f⁽²⁾(x) = f(f(x)) = f(−x + c) = −(−x + c) + c = x,
f⁽³⁾(x) = f(f⁽²⁾(x)) = f(x) = −x + c, f⁽⁴⁾(x) = f(f⁽³⁾(x)) = f(−x + c) = x,
and so on. If n is odd, then f⁽ⁿ⁾(x) = −x + c and if n is even, then f⁽ⁿ⁾(x) = x.
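The period-2 behavior of f(x) = −x + c (the formula as reconstructed above) is easy to confirm for any sample value of c:

```python
# Iterating f(x) = -x + c: odd iterates give -x + c, even iterates give x.
c = 7  # sample constant (the exercise leaves c arbitrary)

def f(x):
    return -x + c

x = 3
y = x
for n in range(1, 9):
    y = f(y)
    assert y == (-x + c if n % 2 == 1 else x)
```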
18. The graphs of y = f(x) and y = (f ∘ f)(x) (figures omitted).
19. a. To show that f is one-to-one, we have that
e^(2x1 − 1) = e^(2x2 − 1) ⟺ 2x1 − 1 = 2x2 − 1 ⟺ x1 = x2.
b. Since the exponential function is always positive, f is not onto R. c. Define g : R → (0, ∞) by
g(x) = e^(2x − 1).
d. Let y = e^(2x − 1). Then ln y = ln e^(2x − 1), so that ln y = 2x − 1. Solving for x gives x = (1 + ln y)/2. Then
g⁻¹(x) = (1 + ln x)/2.
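Part (d)'s inverse can be verified numerically in both directions:

```python
# Checking that g^{-1}(x) = (1 + ln x)/2 inverts g(x) = e^{2x - 1}.
import math

def g(x):
    return math.exp(2 * x - 1)

def g_inv(x):
    return (1 + math.log(x)) / 2

for x in [-2.0, 0.0, 0.5, 3.0]:
    assert abs(g_inv(g(x)) - x) < 1e-9       # g_inv ∘ g = id on R
for y in [0.1, 1.0, 10.0]:
    assert abs(g(g_inv(y)) - y) < 1e-9       # g ∘ g_inv = id on (0, ∞)
```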
20. Since f(−1) = e = f(1), then f is not one-to-one.
21. a. To show that f is one-to-one, we have that 2n1 = 2n2 if and only if n1 = n2.
b. Since every image is an even number, the range of f is a proper subset of N, and hence, the function f is
not onto. c. Since every natural number is mapped to an even natural number, we have that f⁻¹(E) = N
and f⁻¹(O) = ∅.
22. f(E) = O, f(O) = E
23. a. Let p and q be odd numbers, so there are integers m and n such that p = 2m + 1 and q = 2n + 1.
Then f((p, q)) = f((2m + 1, 2n + 1)) = 2(2m + 1) + 2n + 1 = 2(2m + n) + 3, which is an odd number. Hence,
f(A) = {2k + 1 | k ∈ Z}. b. f(B) = {2k + 1 | k ∈ Z} c. Since f((m, n)) = 2m + n = 0 ⟺ n = −2m,
then f⁻¹({0}) = {(m, n) | n = −2m}. d. f⁻¹(E) = {(m, n) | n is even} e. f⁻¹(O) = {(m, n) | n is odd} f.
Since f((1, −2)) = 0 = f((0, 0)), then f is not one-to-one.
g. If z ∈ Z, let m = 0 and n = z, so that f(m, n) = z.
24. a. To show f is one-to-one, we have
f((x1, y1)) = f((x2, y2)) ⟺ (2x1, 2x1 + 3y1) = (2x2, 2x2 + 3y2)
⟺ 2x1 = 2x2 and 2x1 + 3y1 = 2x2 + 3y2 ⟺ x1 = x2 and y1 = y2 ⟺ (x1, y1) = (x2, y2).
b. Suppose (a, b) ∈ R². Let x = a/2. Next solve
2(a/2) + 3y = b ⟺ y = (b − a)/3.
Then f((a/2, (b − a)/3)) = (a, b) and hence, the function f is onto. c. f(A) = {(2x, 5x + 3) | x ∈ R}.
25. Let y ∈ f(A ∪ B). Then there is some x ∈ A ∪ B such that f(x) = y. Since x ∈ A ∪ B with y = f(x),
then (x ∈ A with y = f(x)) or (x ∈ B with y = f(x)), so that y ∈ f(A) or y ∈ f(B). Hence, f(A ∪ B) ⊆
f(A) ∪ f(B). Now let y ∈ f(A) ∪ f(B), that is, y ∈ f(A) or y ∈ f(B). So there exists x1 ∈ A with f(x1) = y
or x2 ∈ B with f(x2) = y. Either way, there is some x ∈ A ∪ B such that f(x) = y. Therefore,
f(A) ∪ f(B) ⊆ f(A ∪ B).
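The contrast between exercise 25 (images always preserve unions) and exercise 13 (images need not preserve intersections) shows up already for a small non-injective function, here given as a dict:

```python
# f(A ∪ B) = f(A) ∪ f(B) always holds, while f(A ∩ B) ⊆ f(A) ∩ f(B)
# can be strict for a non-injective f (a small dict-based sample function).
f = {-2: 4, -1: 1, 0: 0, 1: 1, 2: 4}

def image(S):
    return {f[x] for x in S}

A, B = {-2, -1, 0}, {0, 1, 2}

assert image(A | B) == image(A) | image(B)   # union is preserved
assert image(A & B) <= image(A) & image(B)   # only containment in general...
assert image(A & B) != image(A) & image(B)   # ...and it is strict here
```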
26. Let x ∈ f⁻¹(C ∪ D). Then f(x) ∈ C ∪ D, so f(x) ∈ C or f(x) ∈ D. Hence, x ∈ f⁻¹(C) or x ∈ f⁻¹(D), and
we have shown f⁻¹(C ∪ D) ⊆ f⁻¹(C) ∪ f⁻¹(D). Similarly, we can show that f⁻¹(C) ∪ f⁻¹(D) ⊆ f⁻¹(C ∪ D).
27. Let y ∈ f(f⁻¹(C)). So there is some x ∈ f⁻¹(C) such that f(x) = y, and hence, y ∈ C. Therefore,
f(f⁻¹(C)) ⊆ C.
28. First notice that f ∘ g is a mapping from A to C. Let c ∈ C. Since f is a surjection there is some b ∈ B
such that f(b) = c. Since g is a surjection there is some a ∈ A such that g(a) = b. Then (f ∘ g)(a) = f(g(a)) =
f(b) = c and hence, f ∘ g is a surjection.
29. Let c ∈ C. Since f is a surjection, there is some b ∈ B such that f(b) = c. Since g is a surjection, there is
some a ∈ A such that g(a) = b. Then (f ∘ g)(a) = f(g(a)) = f(b) = c, so that f ∘ g is a surjection. Next we
need to show that f ∘ g is one-to-one. Suppose (f ∘ g)(a1) = (f ∘ g)(a2), that is, f(g(a1)) = f(g(a2)). Since f
is one-to-one, then g(a1) = g(a2). Now since g is one-to-one, then a1 = a2 and hence, f ∘ g is one-to-one.
30. Suppose g(a1) = g(a2). Since f is a function, then f(g(a1)) = f(g(a2)), or equivalently (f ∘ g)(a1) =
(f ∘ g)(a2). Since f ∘ g is an injection, then a1 = a2 and hence, g is an injection.
31. Let y ∈ f(A)\f(B). Then y ∈ f(A) and y ∉ f(B). So there is some x ∈ A, but which is not in B, with
y = f(x). Therefore x ∈ A\B with y = f(x), so f(A)\f(B) ⊆ f(A\B).
32. Let x ∈ f⁻¹(C\D). Then f(x) ∈ C and f(x) ∉ D, and hence x ∈ f⁻¹(C) and x ∉ f⁻¹(D). Therefore,
f⁻¹(C\D) ⊆ f⁻¹(C)\f⁻¹(D). Similarly, f⁻¹(C)\f⁻¹(D) ⊆ f⁻¹(C\D).
Exercise Set A.3
1. If the side is x, then by the Pythagorean Theorem the hypotenuse is given by h² = x² + x² = 2x²,
so h = √2 x.
2. Since a = c/√2, then the area is A = (1/2)bh = (1/2)a² = c²/4.
3. If the side is x, then the height is h = (√3/2)x, so the area is A = (1/2)x · (√3/2)x = (√3/4)x².
4. Let s = p/q and t = u/v. Then s/t = (p/q)/(u/v) = pv/(qu) and hence, s/t is a rational number.
5. If a divides b, there is some k such that ak = b, and if b divides c, there is some ℓ such that bℓ = c.
Then c = bℓ = (ak)ℓ = (kℓ)a, so a divides c.
6. Let m = 2k and n = 2ℓ. Then m + n = 2k + 2ℓ = 2(k + ℓ) and hence, m + n is even.
7. If n is odd, there is some k such that n = 2k + 1. Then n² = (2k + 1)² = 2(2k² + 2k) + 1, so n² is odd.
8. Since n² + n + 3 = n(n + 1) + 3 and the product of two consecutive natural numbers is an even number,
then n² + n + 3 is odd.
9. If b = a + 1, then (a + b)² = (2a + 1)² = 2(2a² + 2a) + 1, so (a + b)² is odd.
10. If m = 2k + 1 and n = 2ℓ + 1, then mn = (2k + 1)(2ℓ + 1) = 2(2kℓ + k + ℓ) + 1 and hence mn is odd.
11. Let m = 2 and n = 3. Then m² + n² = 13, which is not divisible by 4.
12. f(x) ≤ g(x) ⟺ x² − 2x + 1 ≤ x + 1 ⟺ x(x − 3) ≤ 0 ⟺ 0 ≤ x ≤ 3.
13. In a direct argument we assume that n² is odd. This implies n² = 2k + 1 for some integer k, but taking
square roots does not lead to the conclusion. So we use a contrapositive argument. Suppose n is even, so
there is some k such that n = 2k. Then n² = 4k², so n² is even.
14. Suppose that n is odd with n = 2k + 1. Then
(2k + 1)³ = 8k³ + 12k² + 6k + 1 = 2(4k³ + 6k² + 3k) + 1
and hence n³ is odd. Since we have proven the contrapositive statement holds, we have that the
original statement also holds.
15. To use a contrapositive argument suppose p = q. Since p and q are positive, then
√(pq) = √(p²) = p = (p + q)/2.
16. Suppose c = 2k + 1. By the Quadratic Formula, the solutions of n² + n − c = 0 are
n = (−1 ± √(4c + 1))/2 = (−1 ± √(8k + 5))/2,
which is not an integer.
17. Using the contrapositive argument we suppose x > 0. If ε = x/2 > 0, then x > ε.
18. To prove the contrapositive statement suppose that y is rational with y = p/q. Since x is also rational,
let x = u/v, so x + y = u/v + p/q = (pv + qu)/(qv) and hence x + y is rational.
19. Contradiction: Suppose ∛2 = p/q such that p and q have no common factors. Then 2q³ = p³,
so p³ is even and hence p is even. This gives that q is also even, which contradicts the assumption
that p and q have no common factors.
20. First notice that
n/(n + 1) > n/(n + 2) ⟺ n(n + 2) > n(n + 1) ⟺ n² + 2n > n² + n ⟺ n > 0.
If n = 1, then since 1/2 > 1/3, the result holds for all n ≥ 1.
21. If 7xy ≤ 3x² + 2y², then 3x² − 7xy + 2y² = (3x − y)(x − 2y) ≥ 0. There are two cases: either both factors
are greater than or equal to 0 or both are less than or equal to 0. The first case is not possible since the
assumption is that x < 2y. Therefore, 3x ≤ y.
22. Define f : R → R by f(x) = x². Let A = [−1, 1] and B = [0, 1]. Then f(A) = [0, 1] = f(B), but A ≠ B.
23. Define f : R → R by f(x) = x². Let C = [−4, 4] and D = [0, 4]. Then f⁻¹(C) = [−2, 2] = f⁻¹(D),
but C ≠ D.
24. Let y ∈ f(A), so there is some x ∈ A such that y = f(x). Since A ⊆ B, then x ∈ B and hence, y ∈ f(B).
Therefore, f(A) ⊆ f(B).
25. If x ∈ f⁻¹(C), then f(x) ∈ C. Since C ⊆ D, then f(x) ∈ D. Hence, x ∈ f⁻¹(D).
26. In Theorem 3 of Section A.2, we showed that f(A ∩ B) ⊆ f(A) ∩ f(B). Now let y ∈ f(A) ∩ f(B),
so y ∈ f(A) and y ∈ f(B). Then there are x1 ∈ A and x2 ∈ B such that f(x1) = y = f(x2). Since f is
one-to-one, then x1 = x2, so x1 ∈ A ∩ B and hence y = f(x1) ∈ f(A ∩ B).
27. To show that f(A\B) ⊆ f(A)\f(B), let y ∈ f(A\B). So there is some x such that y = f(x) with x ∈ A
and x ∉ B. Since f is one-to-one, x is the only preimage of y, so y ∉ f(B). Hence y ∈ f(A)\f(B), and
f(A\B) ⊆ f(A)\f(B). Now suppose y ∈ f(A)\f(B); this direction does not require that f be one-to-one.
There is some x ∈ A such that y = f(x), and since y ∉ f(B), then x ∉ B. Therefore x ∈ A\B with
y = f(x), so f(A)\f(B) ⊆ f(A\B).
28. In Theorem 3 of Section A.2, we showed that A ⊆ f⁻¹(f(A)). Now let x ∈ f⁻¹(f(A)), so y = f(x) ∈ f(A).
But there is some x1 ∈ A such that y = f(x1). Then f(x) = f(x1), and since f is one-to-one, we have x = x1.
Therefore, x ∈ A, so that f⁻¹(f(A)) ⊆ A.
29. By Theorem 3 of Section A.2, f(f⁻¹(C)) ⊆ C. Let y ∈ C. Since f is onto, there is some x such that
y = f(x). So x ∈ f⁻¹(C), and hence y = f(x) ∈ f(f⁻¹(C)). Therefore, C ⊆ f(f⁻¹(C)).
Exercise Set A.4
1. For the base case n = 1, we have that the left hand side of the summation is 1² = 1 and the right hand
side is 1(2)(3)/6 = 1, so the base case holds. For the inductive hypothesis assume the summation formula holds
for the natural number n. Next consider
1² + 2² + 3² + ··· + n² + (n + 1)² = n(n + 1)(2n + 1)/6 + (n + 1)² = ((n + 1)/6)(2n² + 7n + 6)
= ((n + 1)/6)(2n + 3)(n + 2) = (n + 1)(n + 2)(2n + 3)/6.
Hence, the summation formula holds for all natural numbers n.
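Induction proves the identity for every n; a quick loop confirms it on an initial range:

```python
# Checking 1^2 + 2^2 + ... + n^2 = n(n + 1)(2n + 1)/6 for n = 1, ..., 100.
for n in range(1, 101):
    lhs = sum(k * k for k in range(1, n + 1))
    assert 6 * lhs == n * (n + 1) * (2 * n + 1)  # avoids any division
```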
2.
Base case: n = 1: 1³ = 1²(2)²/4.
Inductive hypothesis: Assume 1³ + 2³ + ··· + n³ = n²(n + 1)²/4.
Consider
1³ + 2³ + ··· + n³ + (n + 1)³ = n²(n + 1)²/4 + (n + 1)³ = ((n + 1)²/4)(n² + 4n + 4) = (n + 1)²(n + 2)²/4.
A.4 Mathematical Induction 179
3. For the base case n = 1, we have that the left hand side of the summation is 1 and the right hand side is
1(3 · 1 − 1)/2 = 1, so the base case holds. For the inductive hypothesis assume the summation formula holds
for the natural number n. Next consider
1 + 4 + 7 + ··· + (3n − 2) + (3(n + 1) − 2) = n(3n − 1)/2 + (3n + 1) = (3n² + 5n + 2)/2 = (n + 1)(3n + 2)/2.
Hence, the summation formula holds for all natural numbers n.
4.
Base case: n = 1: 3 = 4(1)² − 1.
Inductive hypothesis: Assume 3 + 11 + 19 + ··· + (8n − 5) = 4n² − n.
Consider
3 + 11 + 19 + ··· + (8n − 5) + (8(n + 1) − 5) = 4n² − n + 8n + 3 = 4n² + 7n + 3
= (4n² + 8n + 4) − n − 1 = 4(n + 1)² − (n + 1).
5. For the base case n = 1, we have that the left hand side of the summation is 2 and the right hand side is
1(4)/2 = 2, so the base case holds. For the inductive hypothesis assume the summation formula holds for the
natural number n. Next consider
2 + 5 + 8 + ··· + (3n − 1) + (3(n + 1) − 1) = (1/2)(3n² + 7n + 4) = (n + 1)(3n + 4)/2 = (n + 1)(3(n + 1) + 1)/2.
Hence, the summation formula holds for all natural numbers n.
6.
Base case: n = 1: 3 = 1(3).
Inductive hypothesis: Assume 3 + 7 + 11 + ··· + (4n − 1) = n(2n + 1).
Consider
3 + 7 + 11 + ··· + (4n − 1) + (4(n + 1) − 1) = n(2n + 1) + 4n + 3 = 2n² + 5n + 3
= (2n + 3)(n + 1) = (n + 1)(2(n + 1) + 1).
7. For the base case n = 1, we have that the left hand side of the summation is 3 and the right hand side is
3(2)/2 = 3, so the base case holds. For the inductive hypothesis assume the summation formula holds for the
natural number n. Next consider
3 + 6 + 9 + ··· + 3n + 3(n + 1) = (1/2)(3n² + 9n + 6) = (3/2)(n² + 3n + 2) = 3(n + 1)(n + 2)/2.
Hence, the summation formula holds for all natural numbers n.
8.
Base case: n = 1: 2 = 1(2)(3)/3.
Inductive hypothesis: Assume 1 · 2 + 2 · 3 + ··· + n(n + 1) = n(n + 1)(n + 2)/3.
Consider
1 · 2 + 2 · 3 + ··· + n(n + 1) + (n + 1)(n + 2) = n(n + 1)(n + 2)/3 + (n + 1)(n + 2)
= (n + 1)(n + 2)(n + 3)/3.
9. For the base case n = 1, we have that the left hand side of the summation is 2¹ = 2 and the right hand
side is 2² − 2 = 2, so the base case holds. For the inductive hypothesis assume the summation formula holds
for the natural number n. Next consider
Σ_{k=1}^{n+1} 2ᵏ = Σ_{k=1}^{n} 2ᵏ + 2ⁿ⁺¹ = 2ⁿ⁺¹ − 2 + 2ⁿ⁺¹ = 2ⁿ⁺² − 2.
Hence, the summation formula holds for all natural numbers n.
10.
Base case: n = 1: 1(1!) = 1 and 2! − 1 = 1.
Inductive hypothesis: Assume Σ_{k=1}^{n} k · k! = (n + 1)! − 1.
Consider
Σ_{k=1}^{n+1} k · k! = Σ_{k=1}^{n} k · k! + (n + 1)(n + 1)! = (n + 1)! − 1 + (n + 1)(n + 1)!
= (n + 1)!((n + 1) + 1) − 1 = (n + 2)! − 1.
11. The entries in the table show values of the sum for selected values of n.
n   2 + 4 + ··· + 2n
1   2 = 1(2)
2   6 = 2(3)
3   12 = 3(4)
4   20 = 4(5)
5   30 = 5(6)
The pattern displayed by the data suggests the sum is 2 + 4 + 6 + ··· + 2n = n(n + 1).
For the base case n = 1, we have that the left hand side of the summation is 2 and the right hand side is
1(2) = 2, so the base case holds. For the inductive hypothesis assume the summation formula holds for the
natural number n. Next consider
2 + 4 + 6 + ··· + 2n + 2(n + 1) = n(n + 1) + 2(n + 1) = (n + 1)(n + 2).
Hence, the summation formula holds for all natural numbers n.
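The table above can be regenerated, together with a check of the conjectured closed form:

```python
# Reproducing the table of exercise 11 and checking 2 + 4 + ... + 2n = n(n + 1).
for n in range(1, 6):
    s = sum(2 * k for k in range(1, n + 1))
    assert s == n * (n + 1)
```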
12. Notice that consecutive terms in the sum always differ by 4. Then
1 + 5 + 9 + ··· + (4n − 3) = 1 + (1 + 4) + (1 + 2 · 4) + ··· + (1 + (n − 1) · 4)
= n + 4(1 + 2 + 3 + ··· + (n − 1))
= n + 4((n − 1)n/2) = n + 2(n − 1)n = 2n² − n.
13. The base case n = 5 holds since 32 = 2⁵ > 25 = 5². The inductive hypothesis is that 2ⁿ > n² holds for the
natural number n. Consider 2ⁿ⁺¹ = 2(2ⁿ), so that by the inductive hypothesis 2ⁿ⁺¹ = 2(2ⁿ) > 2n². But since
2n² − (n + 1)² = n² − 2n − 1 = (n − 1)² − 2 > 0 for all n ≥ 5, we have 2ⁿ⁺¹ > (n + 1)².
14.
Base case: n = 3: 3² > 2(3) + 1.
Inductive hypothesis: Assume n² > 2n + 1.
Using the inductive hypothesis, (n + 1)² = n² + 2n + 1 > (2n + 1) + (2n + 1) = 4n + 2 > 2n + 3 = 2(n + 1) + 1.
15. The base case n = 1 holds since 1² + 1 = 2, which is divisible by 2. The inductive hypothesis is that
n² + n is divisible by 2. Consider (n + 1)² + (n + 1) = n² + n + 2n + 2. By the inductive hypothesis, n² + n
is divisible by 2, so since both terms on the right are divisible by 2, then (n + 1)² + (n + 1) is divisible
by 2. Alternatively, observe that n² + n = n(n + 1), which is the product of consecutive integers and is
therefore even.
16.
Base case: n = 1: Since (x − y) = 1 · (x − y), then x − y is divisible by x − y.
Inductive hypothesis: Assume xⁿ − yⁿ is divisible by x − y.
Consider
xⁿ⁺¹ − yⁿ⁺¹ = x · xⁿ − y · yⁿ = x · xⁿ − y · xⁿ + y · xⁿ − y · yⁿ = xⁿ(x − y) + y(xⁿ − yⁿ).
Since x − y divides both terms on the right, then x − y divides xⁿ⁺¹ − yⁿ⁺¹.
17. For the base case n = 1, we have that the left hand side of the summation is 1 and the right hand side
is (r − 1)/(r − 1) = 1, so the base case holds. For the inductive hypothesis assume the summation formula
holds for the natural number n. Next consider
1 + r + r² + ··· + rⁿ⁻¹ + rⁿ = (rⁿ − 1)/(r − 1) + rⁿ = (rⁿ − 1 + rⁿ(r − 1))/(r − 1) = (rⁿ⁺¹ − 1)/(r − 1).
Hence, the summation formula holds for all natural numbers n.
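The geometric-sum formula can be checked exactly (no floating point) with rational arithmetic for a few sample ratios r ≠ 1:

```python
# Exact check of 1 + r + ... + r^{n-1} = (r^n - 1)/(r - 1) for r != 1.
from fractions import Fraction

for r in [Fraction(2), Fraction(1, 3), Fraction(-5, 7)]:
    for n in range(1, 20):
        lhs = sum(r ** k for k in range(n))
        assert lhs == (r ** n - 1) / (r - 1)
```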
18. a., b.
n   f_n   f_1 + f_2 + ··· + f_n
1   1     1
2   1     2
3   2     4
4   3     7
5   5     12
6   8     20
7   13    33
The pattern suggests that f_1 + f_2 + ··· + f_n = f_{n+2} − 1.
c.
Base case: n = 1: f_1 = f_3 − 1.
Inductive hypothesis: Assume f_1 + f_2 + ··· + f_n = f_{n+2} − 1.
Consider
f_1 + f_2 + ··· + f_n + f_{n+1} = f_{n+2} + f_{n+1} − 1 = f_{n+3} − 1.
The last equality is true since f_{n+2} + f_{n+1} = f_{n+3} by the definition of the Fibonacci numbers.
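Parts (a)–(c) can be checked together by generating the Fibonacci numbers and testing the identity on a range of n:

```python
# Checking f_1 + ... + f_n = f_{n+2} - 1 for the Fibonacci numbers.
fib = [0, 1, 1]              # fib[k] = f_k, with f_1 = f_2 = 1
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])

for n in range(1, 25):
    assert sum(fib[1:n + 1]) == fib[n + 2] - 1
```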
19. By Theorem 1 of Section A.1, A ∩ (B1 ∪ B2) = (A ∩ B1) ∪ (A ∩ B2), so the base case n = 2 holds.
Assume the formula holds for the natural number n. Consider
A ∩ (B1 ∪ B2 ∪ ··· ∪ Bn ∪ B_{n+1}) = A ∩ ((B1 ∪ B2 ∪ ··· ∪ Bn) ∪ B_{n+1})
= [A ∩ (B1 ∪ B2 ∪ ··· ∪ Bn)] ∪ (A ∩ B_{n+1})
= (A ∩ B1) ∪ (A ∩ B2) ∪ ··· ∪ (A ∩ Bn) ∪ (A ∩ B_{n+1}).
20.
Base case: n = 1: Since the grid is 2 × 2, if one square is removed the remaining three squares can be
covered with the L-shaped piece.
Inductive hypothesis: Assume a 2ⁿ × 2ⁿ grid with one square removed can be covered in the prescribed
fashion.
Consider a 2ⁿ⁺¹ × 2ⁿ⁺¹ grid. The grid consists of four grids of size 2ⁿ × 2ⁿ. If one square is removed
from the entire grid, then it must be removed from one of the four grids, and by the inductive hypothesis
that grid with one square removed can be covered. Now place one L-shaped piece at the center of the
large grid so that it covers one corner square of each of the three remaining 2ⁿ × 2ⁿ grids. Each of these
three grids then has one square removed, so by the inductive hypothesis each can also be covered.
21. C(n, r) = n!/(r!(n − r)!) = n!/((n − r)!(n − (n − r))!) = C(n, n − r)
22. C(n, r − 1) + C(n, r) = n!/((r − 1)!(n − r + 1)!) + n!/(r!(n − r)!)
= (n!/((r − 1)!(n − r)!))(1/(n − r + 1) + 1/r) = (n + 1)!/(r!(n − r + 1)!) = C(n + 1, r)
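Pascal's rule from 22 can be confirmed over a range of n and r with `math.comb`:

```python
# Checking Pascal's rule C(n, r-1) + C(n, r) = C(n+1, r) with math.comb.
from math import comb

for n in range(1, 20):
    for r in range(1, n + 1):
        assert comb(n, r - 1) + comb(n, r) == comb(n + 1, r)
```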
23. By the Binomial Theorem,
2ⁿ = (1 + 1)ⁿ = Σ_{k=0}^{n} C(n, k).
24. By the Binomial Theorem,
0 = (1 − 1)ⁿ = (1 + (−1))ⁿ = Σ_{k=0}^{n} (−1)ᵏ C(n, k).
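The two binomial-theorem identities of 23 and 24 are likewise easy to confirm numerically:

```python
# The binomial sums of exercises 23 and 24: sum C(n, k) = 2^n and the
# alternating sum of C(n, k) is 0.
from math import comb

for n in range(1, 15):
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
    assert sum((-1) ** k * comb(n, k) for k in range(n + 1)) == 0
```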