Math2014 ch3
Linear Algebra
Example 1
Row Reduction and Echelon Forms
• A rectangular matrix is in echelon form (or row
echelon form) if it has the following three
properties:
1. All nonzero rows are above any rows of all
zeros.
2. Each leading entry of a row is in a column to
the right of the leading entry of the row
above it.
3. All entries in a column below a leading entry
are zeros.
• Any nonzero matrix may be row reduced (i.e.,
transformed by elementary row operations) into more
than one matrix in echelon form, using different
sequences of row operations. However, the reduced
echelon form one obtains from a matrix is unique.
Theorem 1: Uniqueness of the Reduced Echelon Form
Each matrix is row equivalent to one and only one
reduced echelon matrix.
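The uniqueness claim is easy to check numerically. The sketch below (a SymPy illustration, not part of the slides; the sample matrix is my own) reduces the same matrix along two different sequences of row operations and confirms both paths end at the same reduced echelon form.

```python
from sympy import Matrix

# A sample matrix, reduced along two different paths.
A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

# Path 1: reduce A directly.
R1, pivots1 = A.rref()

# Path 2: apply some elementary row operations first, then reduce.
B = A.elementary_row_op("n<->m", row1=0, row2=2)  # interchange rows 1 and 3
B = B.elementary_row_op("n->kn", row=1, k=3)      # scale row 2 by 3
R2, pivots2 = B.rref()

print(R1 == R2)  # True: both paths give the same reduced echelon form
```

Row operations never change the row space, which is why every path must land on the same reduced echelon matrix.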
• Example: Row reduce the matrix A below to echelon form, and locate the pivot columns of A.

        [  0  -3  -6   4   9 ]
    A = [ -1  -2  -1   3   1 ]
        [ -2  -3   0   3  -1 ]
        [  1   4   5  -9  -7 ]
• Choose 2 in the second row as the next pivot.

      Pivot
        ↓
    [ 1   4   5   -9   -7 ]
    [ 0   2   4   -6   -6 ]
    [ 0   5  10  -15  -15 ]
    [ 0  -3  -6    4    9 ]
        ↑
      Next pivot column

• Add −5/2 times row 2 to row 3, and add 3/2 times row 2 to row 4.
    [ 1   4   5  -9  -7 ]
    [ 0   2   4  -6  -6 ]
    [ 0   0   0   0   0 ]
    [ 0   0   0  -5   0 ]
• There is no way a leading entry can be created in
column 3. But, if we interchange rows 3 and 4, we can
produce a leading entry in column 4.
      Pivot
        ↓
    [ 1   4   5  -9  -7 ]
    [ 0   2   4  -6  -6 ]
    [ 0   0   0  -5   0 ]
    [ 0   0   0   0   0 ]
      ↑   ↑       ↑
      Pivot columns
• The matrix is in echelon form and thus reveals that
columns 1, 2, and 4 of A are pivot columns.
    [ 1   4   5  -9  -7 ]      ← pivot positions: the entries 1, 2, and −5
    [ 0   2   4  -6  -6 ]
    [ 0   0   0  -5   0 ]
    [ 0   0   0   0   0 ]
      ↑   ↑       ↑
      Pivot columns

• The pivots in the example are 1, 2, and −5.
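The pivot columns can be confirmed mechanically. A minimal SymPy check (assuming the reconstructed signs of A used in the worked reduction):

```python
from sympy import Matrix

# Matrix A of the example, with the signs used in the worked reduction.
A = Matrix([[ 0, -3, -6,  4,  9],
            [-1, -2, -1,  3,  1],
            [-2, -3,  0,  3, -1],
            [ 1,  4,  5, -9, -7]])

_, pivots = A.rref()
pivot_columns = [p + 1 for p in pivots]  # 1-based, to match the slides
print(pivot_columns)  # [1, 2, 4]
```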
• In-Class Exercise: Apply elementary row operations to transform the following matrix first into echelon form and then into reduced echelon form.

    [ 0   3  -6   6  4  -5 ]
    [ 3  -7   8  -5  8   9 ]
    [ 3  -9  12  -9  6  15 ]
Solution:
STEP 1: Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at the top.

    [ 0   3  -6   6  4  -5 ]
    [ 3  -7   8  -5  8   9 ]
    [ 3  -9  12  -9  6  15 ]
      ↑
      Pivot column

• STEP 2: Select a nonzero entry in the pivot column as a pivot. If necessary, interchange rows to move this entry into the pivot position.

Interchange rows 1 and 3. (Rows 1 and 2 could also have been interchanged instead.)

    [ 3  -9  12  -9  6  15 ]   ← pivot
    [ 3  -7   8  -5  8   9 ]
    [ 0   3  -6   6  4  -5 ]
STEP 3: Use row replacement operations to create zeros in all positions below the pivot.

We could have divided the top row by the pivot, 3, but with two 3s in column 1, it is just as easy to add −1 times row 1 to row 2.

    [ 3  -9  12  -9  6  15 ]   ← pivot
    [ 0   2  -4   4  2  -6 ]
    [ 0   3  -6   6  4  -5 ]
STEP 4: Cover the row containing the pivot position, and cover
all rows, if any, above it. Apply steps 1–3 to the submatrix that
remains. Repeat the process until there are no more nonzero
rows to modify.
• This produces the following matrix (add −3/2 times row 2 to row 3).

    [ 3  -9  12  -9  6  15 ]
    [ 0   2  -4   4  2  -6 ]
    [ 0   0   0   0  1   4 ]

• When we cover the row containing the second pivot position for step 4, we are left with a new submatrix that has only one row.

    [ 3  -9  12  -9  6  15 ]
    [ 0   2  -4   4  2  -6 ]
    [ 0   0   0   0  1   4 ]
    [ 3  -9  12  -9  0   -9 ]   Row 1 + (−6)·row 3
    [ 0   2  -4   4  0  -14 ]   Row 2 + (−2)·row 3
    [ 0   0   0   0  1    4 ]

• The next pivot is in row 2. Scale this row, dividing by the pivot.

    [ 3  -9  12  -9  0  -9 ]
    [ 0   1  -2   2  0  -7 ]   Row scaled by 1/2
    [ 0   0   0   0  1   4 ]
    [ 1   0  -2   3  0  -24 ]   Row 1 + 9·row 2, then scaled by 1/3
    [ 0   1  -2   2  0   -7 ]
    [ 0   0   0   0  1    4 ]

• This is the reduced echelon form of the original matrix.
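As a check on the hand computation, SymPy's `rref` reproduces the same reduced echelon form (signs as reconstructed above):

```python
from sympy import Matrix

M = Matrix([[0,  3, -6,  6, 4, -5],
            [3, -7,  8, -5, 8,  9],
            [3, -9, 12, -9, 6, 15]])

R, pivots = M.rref()
print(R)  # rows: [1, 0, -2, 3, 0, -24], [0, 1, -2, 2, 0, -7], [0, 0, 0, 0, 1, 4]
```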
    [ 1   0  -5   1 ]
    [ 0   1   1   4 ]
    [ 0   0   0   0 ]

• There are 3 variables because the augmented matrix has four columns. The associated system of equations is

    x1      − 5x3 = 1
         x2 +  x3 = 4     ----(1)
                0 = 0
• For instance, in system (1), add 5 times equation 2 to equation 1 and obtain the following equivalent system.

    x1 + 5x2      = 21
         x2 + x3  = 4

• We could treat x2 as a parameter and solve for x1 and x3 in terms of x2, and we would have an accurate description of the solution set.
• When a system is inconsistent, the solution set is
empty, even when the system has free variables. In
this case, the solution set has no parametric
representation.
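The parametric description of a consistent system can be spot-checked numerically. The sketch below samples several values of the free variable x3 in system (1) and verifies each resulting vector solves the system.

```python
import numpy as np

# Coefficient matrix and right-hand side of system (1): x1 - 5*x3 = 1, x2 + x3 = 4.
A = np.array([[1.0, 0.0, -5.0],
              [0.0, 1.0,  1.0]])
b = np.array([1.0, 4.0])

# General solution: x1 = 1 + 5*t, x2 = 4 - t, x3 = t, with x3 = t free.
for t in (-2.0, 0.0, 3.5):
    x = np.array([1 + 5 * t, 4 - t, t])
    assert np.allclose(A @ x, b)  # every choice of the parameter solves the system

print("all sampled parameter values give solutions")
```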
Example 3 Determine whether the system is consistent
a) b)
Example 4
a)
b)
Vectors in ℝ²
• A matrix with only one column is called a column vector,
or simply a vector.
• An example of a vector with two entries is

    w = [ w1 ]
        [ w2 ],

where w1 and w2 are any real numbers.
• The ℝ stands for the real numbers that appear as entries in the vector, and the exponent 2 indicates that each vector contains 2 entries.
• Two vectors in ℝ² are equal if and only if their corresponding entries are equal.
• Given two vectors u and v in ℝ², their sum u + v is the vector obtained by adding corresponding entries of u and v.
• Given a vector u and a real number c, the scalar multiple of u by c is the vector cu obtained by multiplying each entry in u by c.
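These entrywise operations are exactly what NumPy arrays do; a tiny illustration (the sample vectors are my own):

```python
import numpy as np

u = np.array([1, -2])
v = np.array([2, -5])

print(u + v)          # entrywise sum: [ 3 -7]
print(4 * u)          # scalar multiple: [ 4 -8]
print(4 * u - 3 * v)  # a linear combination: [-2  7]
```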
• Example 1: Given u = [1, −2]ᵀ and v = [2, −5]ᵀ, find
GEOMETRIC DESCRIPTIONS OF ℝ²
• Consider a rectangular coordinate system in the plane. Because each point in the plane is determined by an ordered pair of numbers, we can identify a geometric point (a, b) with the column vector

    [ a ]
    [ b ].
VECTORS IN ℝ³ AND ℝⁿ
• Vectors in ℝ³ are 3 × 1 column matrices with three entries.
• They are represented geometrically by points in a three-dimensional coordinate space, with arrows from the origin.
• If n is a positive integer, ℝⁿ (read "r-n") denotes the collection of all lists (or ordered n-tuples) of n real numbers, usually written as n × 1 column matrices, such as

        [ u1 ]
    u = [ u2 ]
        [ ⋮  ]
        [ un ].
ALGEBRAIC PROPERTIES OF ℝⁿ
• The vector whose entries are all zero is called the zero vector and is denoted by 0.
• For all u, v, w in ℝⁿ and all scalars c and d:
  (i) u + v = v + u
  (ii) (u + v) + w = u + (v + w)
  (iii) u + 0 = 0 + u = u
  (iv) u + (−u) = −u + u = 0
  (v) c(u + v) = cu + cv
  (vi) (c + d)u = cu + du
  (vii) c(du) = (cd)u
  (viii) 1u = u
Example 2
Question:
• Example 3: Let

    a1 = [  1 ]     a2 = [ 2 ]     b = [  7 ]
         [ -2 ],         [ 5 ],        [  4 ]
         [ -5 ]          [ 6 ]         [ -3 ].
• Now, observe that the original vectors a1, a2, and b are the columns of the augmented matrix that we row reduced:

    [  1   2   7 ]
    [ -2   5   4 ]
    [ -5   6  -3 ]
      a1  a2   b

• Write this matrix in a way that identifies its columns:

    [ a1  a2  b ]
• A vector equation

    x1·a1 + x2·a2 + ... + xn·an = b

has the same solution set as the linear system whose augmented matrix is

    [ a1  a2  ⋯  an  b ].   ----------(*)
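So asking whether b is a linear combination of a1 and a2 (Example 3) is the same as solving the system with augmented matrix [a1 a2 b]. A NumPy sketch (signs as reconstructed above):

```python
import numpy as np

a1 = np.array([1.0, -2.0, -5.0])
a2 = np.array([2.0,  5.0,  6.0])
b  = np.array([7.0,  4.0, -3.0])

# Solve x1*a1 + x2*a2 = b as the linear system with coefficient matrix [a1 a2].
A = np.column_stack([a1, a2])
x, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x)                                      # weights [3. 2.]
print(np.allclose(x[0] * a1 + x[1] * a2, b))  # True: b = 3*a1 + 2*a2
```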
• Let v be a nonzero vector in ℝ³. Then Span {v} is the set of all scalar multiples of v, which is the set of points on the line in ℝ³ through v and 0. See the figure below.
LINEAR INDEPENDENCE
• Example 4: Let v1 = [1, 2, 3]ᵀ, v2 = [4, 5, 6]ᵀ, and v3 = [2, 1, 0]ᵀ. Determine whether the set {v1, v2, v3} is linearly independent.
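One way to settle Example 4 is to examine the rank (or determinant) of the matrix [v1 v2 v3]; a NumPy sketch:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
v3 = np.array([2.0, 1.0, 0.0])

A = np.column_stack([v1, v2, v3])
print(np.linalg.det(A))          # ~0: the columns are linearly dependent
print(np.linalg.matrix_rank(A))  # 2 < 3, confirming dependence
```

A zero determinant means the homogeneous equation x1·v1 + x2·v2 + x3·v3 = 0 has a nontrivial solution, which is exactly linear dependence.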
SETS OF TWO OR MORE VECTORS
• Proof:
Conversely, suppose S is linearly dependent.
If v1 is zero, then it is a (trivial) linear combination of the other vectors in S.
• Otherwise, v1 ≠ 0, and there exist weights c1, …, cp, not all zero, such that

    c1v1 + c2v2 + ... + cpvp = 0.

• Let j be the largest subscript for which cj ≠ 0. If j = 1, then c1v1 = 0, which is impossible because v1 ≠ 0.
• So j > 1, and

    c1v1 + ... + cjvj + 0vj+1 + ... + 0vp = 0
    cjvj = −c1v1 − ... − cj−1vj−1
    vj = (−c1/cj)v1 + ... + (−cj−1/cj)vj−1.
• Theorem* does not say that every vector in a linearly dependent set is a linear combination of the preceding vectors.
• A vector in a linearly dependent set may fail to be a linear combination of the other vectors.
• Example 5: Let u = [3, 1, 0]ᵀ and v = [1, 6, 0]ᵀ. Describe the set spanned by u and v.
• So w is in Span {u, v}. See the figures given below.
SETS OF TWO OR MORE VECTORS
• Hence Ax 0 has a nontrivial solution, and the
columns of A are linearly dependent.
• See the figure below for a matrix version of this
theorem.
• If A is an m × n matrix (that is, a matrix with m rows and n columns), then the scalar entry in the ith row and jth column of A is denoted by aij and is called the (i, j)-entry of A. See the figure below.
• Each column of A is a list of m real numbers, which identifies a vector in ℝᵐ.
• An m × n matrix whose entries are all zero is a zero matrix and is written as 0.
• Two matrices are equal if they have the same size (i.e., the same number of rows and the same number of columns) and if their corresponding columns are equal, which amounts to saying that their corresponding entries are equal.
• Theorem 2: Let A be an m × n matrix, and let B and C have sizes for which the indicated sums and products are defined.
  a. A(BC) = (AB)C              (associative law of multiplication)
  b. A(B + C) = AB + AC         (left distributive law)
  c. (B + C)A = BA + CA         (right distributive law)
  d. r(AB) = (rA)B = A(rB)      for any scalar r
  e. Im·A = A = A·In            (identity for matrix multiplication)
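These laws are easy to sanity-check on random matrices of compatible sizes; a NumPy sketch (the shapes and seed are my own):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))
D = rng.standard_normal((3, 4))  # same size as B, for the distributive laws
E = rng.standard_normal((4, 2))
r = 2.5

assert np.allclose(A @ (B @ C), (A @ B) @ C)    # a. associativity
assert np.allclose(A @ (B + D), A @ B + A @ D)  # b. left distributive law
assert np.allclose((B + D) @ E, B @ E + D @ E)  # c. right distributive law
assert np.allclose(r * (A @ B), (r * A) @ B)    # d. scalars move freely
assert np.allclose(np.eye(2) @ A, A) and np.allclose(A @ np.eye(3), A)  # e.
print("all Theorem 2 identities hold on this example")
```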
• If A is an n × n matrix and k is a positive integer, then Aᵏ denotes the product of k copies of A:

    Aᵏ = A ⋯ A   (k factors).
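NumPy exposes this directly as `matrix_power`; a quick illustration with a matrix of my own choosing:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])

A3 = np.linalg.matrix_power(A, 3)  # A @ A @ A
print(A3)  # [[1 3], [0 1]]
```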
Theorem 3: Let A and B denote matrices whose sizes are appropriate for the following sums and products.
  a. (Aᵀ)ᵀ = A
  b. (A + B)ᵀ = Aᵀ + Bᵀ
  c. For any scalar r, (rA)ᵀ = rAᵀ
  d. (AB)ᵀ = BᵀAᵀ
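The transpose laws, including the order reversal in (d), can be checked the same way on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((2, 3))  # same size as A, for the sum law
r = -1.5

assert np.allclose(A.T.T, A)              # a
assert np.allclose((A + C).T, A.T + C.T)  # b
assert np.allclose((r * A).T, r * A.T)    # c
assert np.allclose((A @ B).T, B.T @ A.T)  # d: note the reversed order
print("all Theorem 3 identities hold on this example")
```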
Example 5 Calculate |A|
a)
b)
Example 7 Calculate |T|
Example 8 Calculate |A| by row operations
Example 9: Calculate the inverse of A by using Gauss-Jordan elimination to transform [A | I] into [I | A⁻¹].
Discuss: when is A not invertible?
a)
b)
c)
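The [A | I] → [I | A⁻¹] procedure can be sketched directly. Below is a minimal implementation with partial pivoting; the helper name and the test matrix are my own, not from the slides.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row reduce [A | I] to [I | A^-1]; raise if A is singular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: pick the largest entry in this column.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise np.linalg.LinAlgError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]  # interchange rows
        M[col] /= M[col, col]              # scale to get a leading 1
        for r in range(n):                 # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                        # right half is now A^-1

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(gauss_jordan_inverse(A))  # [[ 3. -1.], [-5.  2.]]
```

If A is not invertible, some column fails to produce a pivot and the left half can never be reduced to I, which is the singular case the function raises on.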
Example 10
Example 11
The inner product (dot product) and the cross product
Definition: THE DOT PRODUCT
• If a = [a1, a2, …, an] and b = [b1, b2, …, bn], then the dot product of a and b is the number a · b (or ⟨a, b⟩) given by:

    a · b = a1b1 + a2b2 + … + anbn
Example 1
PROPERTIES OF THE DOT PRODUCT
• If a, b, and c are vectors and r is a scalar, then:
  1. a · a = ||a||²
  2. a · b = b · a
  3. a · (b + c) = a · b + a · c
  4. (ra) · b = r(a · b) = a · (rb)
  5. 0 · a = 0
GEOMETRIC INTERPRETATION
• The dot product a · b can be given a geometric interpretation in terms of the angle θ between a and b.
• If θ is the angle between the vectors a and b, then

    a · b = ||a|| ||b|| cos θ
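Rearranged as cos θ = a · b / (||a|| ||b||), the formula gives the angle between two vectors; a small NumPy example (the vectors are my own):

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)
print(np.degrees(theta))  # ~45 degrees
```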
Proof
• If we apply the Law of Cosines to triangle OAB here, we get:

    |AB|² = |OA|² + |OB|² − 2|OA||OB| cos θ

• where

    |OA| = ||a||,   |OB| = ||b||,   |AB| = ||a − b||.

• Substituting and expanding ||a − b||² = ||a||² − 2 a · b + ||b||² gives a · b = ||a|| ||b|| cos θ.
Example 3
• If the vectors a and b have lengths 4 and 6, and the angle between them is π/3, find a · b.
Example 4
ORTHOGONAL VECTORS
• Two nonzero vectors a and b are called perpendicular or orthogonal if the angle between them is θ = π/2; equivalently, a · b = 0.
Example 5
• Show that [2, 2, −1] is perpendicular to [5, −4, 2].
THE CROSS PRODUCT
• The cross product a × b of two vectors a and b, unlike the dot product, is a vector.
• For this reason, it is also called the vector product.
Example 6: If a = (1, 3, 4) and b = (2, 7, −5), calculate a × b.
• Solution:

            | i   j   k  |
    a × b = | 1   3   4  |
            | 2   7  −5  |

          = | 3   4 | i − | 1   4 | j + | 1   3 | k
            | 7  −5 |     | 2  −5 |     | 2   7 |

          = (−15 − 28)i − (−5 − 8)j + (7 − 6)k
          = −43i + 13j + k
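NumPy's `cross` reproduces the result of Example 6:

```python
import numpy as np

a = np.array([1, 3, 4])
b = np.array([2, 7, -5])

print(np.cross(a, b))  # [-43  13   1]
```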
Example 7: Show that a × a = 0 for any vector a in V3.
• If a = ⟨a1, a2, a3⟩, then

            | i    j    k  |
    a × a = | a1   a2   a3 |
            | a1   a2   a3 |

          = (a2a3 − a3a2)i − (a1a3 − a3a1)j + (a1a2 − a2a1)k
          = 0i − 0j + 0k = 0
Proof
• In order to show that a × b is orthogonal to a, we compute their dot product as follows:

    (a × b) · a
      = | a2  a3 | a1 − | a1  a3 | a2 + | a1  a2 | a3
        | b2  b3 |      | b1  b3 |      | b1  b2 |
      = a1(a2b3 − a3b2) − a2(a1b3 − a3b1) + a3(a1b2 − a2b1)
      = a1a2b3 − a1a3b2 − a1a2b3 + a2a3b1 + a1a3b2 − a2a3b1
      = 0
• We know the direction of the vector a x b.
Proof
• From the definitions of the cross product and length of a vector, we have:

    ||a × b||² = (a2b3 − a3b2)² + (a3b1 − a1b3)² + (a1b2 − a2b1)²
               = ||a||²||b||² − (a · b)²
               = ||a||²||b||² − ||a||²||b||² cos²θ
               = ||a||²||b||² (1 − cos²θ)
               = ||a||²||b||² sin²θ

• Taking square roots (sin θ ≥ 0 for 0 ≤ θ ≤ π) gives ||a × b|| = ||a|| ||b|| sin θ.
• Two nonzero vectors a and b are parallel if and only if

    a × b = 0
Example 9
• Find a vector perpendicular to the plane that passes through the points P(1, 4, 6), Q(−2, 5, −1), and R(1, −1, 1).
• Solution:

    PQ = (−2 − 1)i + (5 − 4)j + (−1 − 6)k = −3i + j − 7k
    PR = (1 − 1)i + (−1 − 4)j + (1 − 6)k = −5j − 5k

              | i    j    k  |
    PQ × PR = | −3   1   −7 |
              |  0  −5   −5 |

            = (−5 − 35)i − (15 − 0)j + (15 − 0)k
            = −40i − 15j + 15k
Example 10
• Find the area of the triangle with vertices P(1, 4, 6), Q(−2, 5, −1), and R(1, −1, 1).
• In Example 9, we computed that

    PQ × PR = ⟨−40, −15, 15⟩

• The area of the parallelogram determined by PQ and PR is ||PQ × PR|| = √(1600 + 225 + 225) = √2050 = 5√82, so the area of triangle PQR is half of this, (5/2)√82.
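The same computation in NumPy (using the points from Example 9):

```python
import numpy as np

P = np.array([1.0, 4.0, 6.0])
Q = np.array([-2.0, 5.0, -1.0])
R = np.array([1.0, -1.0, 1.0])

n = np.cross(Q - P, R - P)      # = [-40. -15.  15.]
area = 0.5 * np.linalg.norm(n)  # half the area of the parallelogram
print(area)                     # ~22.64, i.e. (5/2)*sqrt(82)
```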
Properties of the cross product:
1. a × b = −b × a
2. (ca) × b = c(a × b) = a × (cb)
3. a × (b + c) = a × b + a × c
4. (a + b) × c = a × c + b × c
5. a · (b × c) = (a × b) · c
6. a × (b × c) = (a · c)b − (a · b)c

These properties can be proved by writing the vectors in terms of their components and using the definition of a cross product.
Example: Proof of property 5: a · (b × c) = (a × b) · c
• Let a = ⟨a1, a2, a3⟩, b = ⟨b1, b2, b3⟩, and c = ⟨c1, c2, c3⟩. Writing out both sides in components, each equals the 3 × 3 determinant with rows a, b, c, so they are equal.
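Property 5 (and its determinant form) can be spot-checked on random vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))

lhs = a @ np.cross(b, c)                   # a . (b x c)
rhs = np.cross(a, b) @ c                   # (a x b) . c
det = np.linalg.det(np.vstack([a, b, c]))  # the determinant with rows a, b, c

print(np.isclose(lhs, rhs), np.isclose(lhs, det))  # True True
```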
Example 1
• Example 4: Find the eigenvalues of the matrices and the corresponding eigenvectors.
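The defining property A v = λ v can be verified numerically. A small example with a matrix of my own (the slide's matrices were not preserved in the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):  # each column of eigvecs is an eigenvector
    assert np.allclose(A @ v, lam * v)  # A v = lambda v

print(sorted(eigvals.real))  # eigenvalues 1 and 3
```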
DIAGONALIZATION
• Example 5: Let

    A = [  7   2 ]
        [ -4   1 ].

Find a formula for Aᵏ, given that A = PDP⁻¹, where

    P = [  1   1 ]        D = [ 5   0 ]
        [ -1  -2 ]  and       [ 0   3 ].

• Solution: The standard formula for the inverse of a 2 × 2 matrix yields

    P⁻¹ = [  2   1 ]
          [ -1  -1 ].
• Then

    A² = (PDP⁻¹)(PDP⁻¹) = PD²P⁻¹ = [  1   1 ] [ 5²   0  ] [  2   1 ]
                                   [ -1  -2 ] [ 0    3² ] [ -1  -1 ].

• Again,

    A³ = (PDP⁻¹)A² = (PDP⁻¹)(PD²P⁻¹) = PD(P⁻¹P)D²P⁻¹ = PDD²P⁻¹ = PD³P⁻¹,

since P⁻¹P = I.
• In general, for k ≥ 1,

    Aᵏ = PDᵏP⁻¹ = [  1   1 ] [ 5ᵏ   0  ] [  2   1 ]
                  [ -1  -2 ] [ 0    3ᵏ ] [ -1  -1 ]

                = [ 2·5ᵏ − 3ᵏ          5ᵏ − 3ᵏ  ]
                  [ 2·3ᵏ − 2·5ᵏ     2·3ᵏ − 5ᵏ  ].

• A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is, if A = PDP⁻¹ for some invertible matrix P and some diagonal matrix D.
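The closed form for Aᵏ can be verified against direct powering:

```python
import numpy as np

A = np.array([[ 7.0, 2.0],
              [-4.0, 1.0]])
P = np.array([[ 1.0,  1.0],
              [-1.0, -2.0]])
D = np.diag([5.0, 3.0])
P_inv = np.linalg.inv(P)

assert np.allclose(A, P @ D @ P_inv)  # A = P D P^-1

k = 4
direct = np.linalg.matrix_power(A, k)
closed = np.array([[2 * 5**k - 3**k,     5**k - 3**k],
                   [2 * 3**k - 2 * 5**k, 2 * 3**k - 5**k]])
print(np.allclose(direct, closed))                                  # True
print(np.allclose(direct, P @ np.diag([5.0**k, 3.0**k]) @ P_inv))   # True
```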
• Example 6: Diagonalize the following matrix, if possible.

    A = [  1   3   3 ]
        [ -3  -5  -3 ]
        [  3   3   1 ]

That is, find an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹.
You can check that {v1, v2, v3} is a linearly independent set.
• Step 3. Construct P from the vectors in step 2.

    P = [ v1  v2  v3 ] = [  1  -1  -1 ]
                         [ -1   1   0 ]
                         [  1   0   1 ]

• Step 4. Construct D from the corresponding eigenvalues.

    D = [ 1   0   0 ]
        [ 0  -2   0 ]
        [ 0   0  -2 ]
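A quick NumPy check that P and D from Example 6 really diagonalize A:

```python
import numpy as np

A = np.array([[ 1.0,  3.0,  3.0],
              [-3.0, -5.0, -3.0],
              [ 3.0,  3.0,  1.0]])
P = np.array([[ 1.0, -1.0, -1.0],
              [-1.0,  1.0,  0.0],
              [ 1.0,  0.0,  1.0]])
D = np.diag([1.0, -2.0, -2.0])

print(abs(np.linalg.det(P)))                     # 1.0: P is invertible
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^-1
```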