
2 Matrices and Linear Equations
Edgar G. Goodaire
Matrices
 What is a Matrix?

2.1.1 Definition
A matrix is a rectangular array of numbers enclosed in square
brackets. If there are m rows and n columns in the array, the
matrix is called m × n (read "m by n") and said to have size m × n.
• [x1; x2; …; xn] (a single column) is called a column matrix.
• [x1 x2 … xn] (a single row) is called a row matrix.


2
2.1.2 Definition

The transpose of an m × n matrix A is the n × m matrix whose


rows are the columns of A in the same order. The transpose of
A is denoted AT.
2.1.3 Examples
• [1 −3; 0 2]ᵀ = [1 0; −3 2]
• [1 0 2]ᵀ = [1; 0; 2]
• [1 2 3 0; 2 4 6 3; 0 1 2 5]ᵀ = [1 2 0; 2 4 1; 3 6 2; 0 3 5]
• (a fourth transpose example appears on the original slide)
2.1.3 Example
Let A = [1 2; 3 4; 5 6]. Then a31 = 5, a12 = 2, and a22 = 4.
2.1.4 Example
Let A be the 2 × 3 matrix with aij = i − j.
The (1, 1) entry of A is a11 = 1 − 1 = 0, the (1, 2) entry is a12 = 1 − 2 = −1,
and so on. We obtain A = [0 −1 −2; 1 0 −1].
2.1.5 Example
If A = [ aij ] and B = AT, then bij = aji.
4
2.1.7 Matrices A and B are equal if and only if they have the same number
of rows, the same number of columns, and corresponding entries are equal.

2.1.8 Example
Let A = [−1 x; 2y −3] and B = [a −4; 4 a − b].
If A = B, then −1 = a, x = −4, 2y = 4, and a − b = −3.
Thus a = −1, b = a + 3 = 2, x = −4, and y = 2.

5
 Addition of Matrices
2.1.9 Example
Let A = [1 2 3; −4 5 6], B = [−2 3 0; −2 1 −1], and C = [1 0; 0 −1].
Then A + B = [−1 5 3; −6 6 5], while A + C and B + C are not defined.
Only matrices of the same size can be added.
2.1.11 Example
The 2 × 3 zero matrix is 0 = [0 0 0; 0 0 0]. For any 2 × 3 matrix A = [a b c; d e f], we have
A + 0 = [a b c; d e f] + [0 0 0; 0 0 0] = [a b c; d e f] = A, and similarly 0 + A = A.
6
 Scalar Multiplication of
Matrices

2.1.13 Definition
The negative of a matrix A is the matrix ( – 1)A, which we denote
– A and call “ minus A”: – A = ( – 1)A.

2.1.14 Example
If A = [−1 3; 2 4], then −A = (−1)A = [1 −3; −2 −4].

7
Theorem 2.1.15 (Properties of Matrix
Addition and Scalar Multiplication).

Let A, B, and C be matrices of the same size and let c be a scalar.


1. (Closure under addition) A+B is a matrix.
2. (Commutativity of addition) A+B = B+A.
3. (Addition is associative) (A+B)+C = A+(B+C).
4. (Zero) There is a zero matrix 0 with the property that A+ 0 =
0+ A = A.
5. (Negatives) For every matrix A, there is a matrix −A, called the
negative of A, with the property that A + (−A) = 0, the zero
matrix.

8
Theorem 2.1.15 (Properties of Matrix
Addition and Scalar Multiplication).

6. (Closure under scalar multiplication) cA is a matrix.


7. (Associativity of scalar multiplication) c(dA) = (cd)A.
8. (One) 1A = A.
9. (Distributivity) c(A+B) = cA+cB and (c+d)A = cA+dA.
10. If cA = 0, then either c = 0 or A = 0.

9
 Multiplication of
Matrices
2.1.18 The product of a row and a column is a dot product: aᵀb = [a · b].
2.1.20 Example
As noted at the start of this section, the system
2x1 + x2 − x3 = 2
3x1 − 2x2 + 6x3 = −1
x1 − 3x3 = 1
can be written Ax = b, with
A = [2 1 −1; 3 −2 6; 1 0 −3], x = [x1; x2; x3], and b = [2; −1; 1].
10
Matrix times column

If the rows of A are a1ᵀ, a2ᵀ, …, amᵀ, then
Ab = [a1ᵀb; a2ᵀb; …; amᵀb].

A matrix A times a column is a linear combination of A's columns.

How about a row times the matrix A?

A row times the matrix A is a linear combination of A's rows. Why?

11
2.1.21 Aei = column i of A.

2.1.22 Problem
Suppose A and B are m × n matrices such that Aei = Bei for each
standard basis vector ei ∈ Rⁿ. Prove that A = B.
Solution :
By 2.1.21, the given condition says that each column of A
equals the corresponding column of B, so A = B.

12
Matrix times matrix

If B = [b1 b2 … bp] has columns b1, b2, …, bp, then
Am×n Bn×p = A[b1 b2 … bp] = [Ab1 Ab2 … Abp].

If the rows of A are a1ᵀ, a2ᵀ, …, amᵀ, then
AB = [a1ᵀ; a2ᵀ; …; amᵀ]B = [a1ᵀB; a2ᵀB; …; amᵀB].

13
2.1.23 Problem
Let A = [2 4 −1; 3 5 1; 0 4 −2] and B = [2 4 0 1 −1; 3 −2 0 2 4; 1 2 6 2 1]. Then
AB = [15 −2 −6 8 13; 22 4 6 15 18; 10 −12 −12 4 14].
The first column of AB is the product of A and the first column of B:
A[2; 3; 1] = [15; 22; 10].
The second column of AB is the product of A and the second column of B:
A[4; −2; 2] = [−2; 4; −12], and so on.
14
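As a quick illustration (a NumPy sketch added here, not part of the original slides), one can check column by column that AB is built from the products of A with the columns of B, using the matrices of 2.1.23:

import numpy as np

# Each column of AB equals A times the matching column of B.
A = np.array([[2, 4, -1],
              [3, 5, 1],
              [0, 4, -2]])
B = np.array([[2, 4, 0, 1, -1],
              [3, -2, 0, 2, 4],
              [1, 2, 6, 2, 1]])
AB = A @ B
for j in range(B.shape[1]):
    assert np.array_equal(AB[:, j], A @ B[:, j])
print(AB)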
2.1.24 The (i, j) entry of AB is the product of row i of A and
column j of B.

2.1.26 Example
Suppose A = [1 −2; 3 −4] and B = [−1 0 1; 3 4 2].
Then AB = [−7 −8 −3; −15 −16 −5], while BA is not defined: B is 2 × 3
but A is 2 × 2, and the inner sizes (3 and 2) are different.

2.1.27 Matrix multiplication is associative: (AB)C = A(BC)


whenever all products are defined.
15
2.1.28 Matrix multiplication is not commutative. In general,
AB ≠ BA.
In general, the n × n identity matrix is the matrix whose
columns are the standard basis vectors e1, e2, …, en in order. It
is denoted In or just I when there is no chance of confusion.
2.1.29 Example
I2 = [1 0; 0 1], I3 = [1 0 0; 0 1 0; 0 0 1], I4 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].

16
Theorem 2.1.30
(Properties of Matrix Multiplication).

Let A, B, and C be matrices.


1. Matrix multiplication is associative: (AB)C = A(BC) whenever
all products are defined.
2. Matrix multiplication is not commutative, in general: there exist
matrices A and B for which AB ≠ BA.
3. Matrix multiplication distributes over addition: (A+B)C =
AC+BC and A(B+C) = AB+AC whenever all products are
defined.
4. A(cB) = (cA)B = c(AB) for any scalar c.
5. For any positive integer n, there is an n × n identity matrix
denoted In with the property that AIn = A and InA = A whenever
these products are defined. 17
2.1.32 Definition
A linear combination of matrices A1, A2, …, Ak (all of the same
size) is a matrix of the form c1A1 + c2A2 + … + ckAk, where c1, c2, …, ck are scalars.
2.1.33 The product Ax of a matrix A and a vector x is a linear
combination of the columns of A, the coefficients being
the components of x:
[a1 a2 … an][x1; x2; …; xn] = x1 a1 + x2 a2 + … + xn an.
Conversely, any linear combination of vectors is Ax for
some matrix A and some vector x.
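A small NumPy check of 2.1.33 (an added sketch, not from the text): Ax equals x1 times column one plus x2 times column two, and so on.

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
x = np.array([2, -1, 4])
combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))  # x1*a1 + x2*a2 + x3*a3
assert np.array_equal(A @ x, combo)
print(A @ x)   # [12 27 42]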
2.1.34 Example
Let A = [1 2 3; 4 5 6; 7 8 9] and x = [2; −1; 4]. Then
Ax = [12; 27; 42] = 2[1; 4; 7] − 1[2; 5; 8] + 4[3; 6; 9].
More generally, A[x1; x2; x3] = x1[1; 4; 7] + x2[2; 5; 8] + x3[3; 6; 9].
2.1.35 Example
−5[2; 4; −6] − 2[−3; 5; −8] = [−4; −30; 46] = [2 −3; 4 5; −6 −8][−5; −2].
19
2.1.36 Problem Let A be an m × n matrix and I = Im the
m × m identity matrix. Explain why IA = A.

Solution : Let the columns of A be a1, a2, …, an.


The first column of IA is Ia1, which is a linear combination of
the columns of I with coefficients the components of a1.
The columns a11 of
 I are the standard basis vectors e1, e2, …, em,
 
a1  a21  , then the first column of IA is
so if 
a 
 m1 
 a11 
 
a11e1  a21e 2   am1e m  a21  a1.

a 
 m1 
Similarly, one can show that column k of IA is ak, so IA = A. 20
2-2 The Inverse and
Transpose of a Matrix
2.2.1 Definition
A matrix A is invertible or has an inverse if there is another
matrix B such that AB = I and BA = I. The matrix B is called
the inverse of A and we write B = A⁻¹.
2.2.4 If A is invertible, so is A⁻¹, and (A⁻¹)⁻¹ = A.
2.2.6 Remark (Inverses are unique.) We have called B the
inverse of a matrix A if AB = I and BA = I.
Proof :
If A has two inverses, B and C, then AB = BA = I and also AC
= CA = I. Now evaluate BAC = (BA)C = IC = C, and BAC =
B(AC) = BI = B. So B = C.
21
2.2.10 Example
If A = [a b; c d] and ad − bc ≠ 0, then A is invertible and
A⁻¹ = 1/(ad − bc) · [d −b; −c a].

Solution:
[a b; c d] · 1/(ad − bc) · [d −b; −c a] = 1/(ad − bc) · [ad − bc 0; 0 ad − bc] = [1 0; 0 1].
2.2.12 A product of invertible matrices is invertible. Its
inverse is the product of the individual inverses, order
reversed:
(AB)–1 = B–1A–1. 22
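The 2 × 2 formula of 2.2.10 is easy to code; the following is a minimal NumPy sketch added as an illustration:

import numpy as np

def inverse_2x2(A):
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0, so the matrix is not invertible")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
print(inverse_2x2(A))        # [[-3.  2.] [ 2. -1.]]
print(inverse_2x2(A) @ A)    # the 2 x 2 identity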
2.2.14 Problem
If A is an invertible n × n matrix and B is an n × t
matrix, show that the equation AX = B has a solution and find
it.
Solution :
Multiplying AX = B on the left by A–1 gives A–1AX = A–1B.
Since A–1A = I, this is IX = A–1B, so X = A–1B.

2.2.16 Problem
Solve the system of equations x + 2y = 5, 2x + 3y = 7.
Solution: The system is Ax = b, where A = [1 2; 2 3] is the
matrix of coefficients, x = [x; y] is unknown, and b = [5; 7].
x = A⁻¹b = [−3 2; 2 −1][5; 7] = [−1; 3]. The solution is x = −1, y = 3.
23
2.2.17 Problem

Suppose A, B, and X are invertible matrices and AX–1A–1 = B–1.


What is X?
Solution: AX⁻¹A⁻¹ = B⁻¹ ⇒ AX⁻¹A⁻¹A = B⁻¹A.
Since A⁻¹A = I, we obtain AX⁻¹ = B⁻¹A ⇒ X⁻¹ = A⁻¹B⁻¹A.
Thus X = (X⁻¹)⁻¹ = (A⁻¹B⁻¹A)⁻¹ = A⁻¹(B⁻¹)⁻¹(A⁻¹)⁻¹ = A⁻¹BA.
Theorem 2.2.18
If A is an invertible matrix, then
1. so is Aᵏ for any integer k, and (Aᵏ)⁻¹ = (A⁻¹)ᵏ;
2. so is cA for any nonzero scalar c, and (cA)⁻¹ = (1/c)A⁻¹.

24
 More on Transposes
Theorem 2.2.19
Let A and B be matrices and let c be a scalar.
1. (A+B)T = AT+BT (assuming A and B have the same size);
2. (cA)T = cAT;
3. (AT)T = A;
4. If A is invertible, so is AT and (AT)-1 = (A–1)T.
5. (AB)T = BTAT (assuming AB is defined): the transpose of a
product is the product of the transposes, order reversed.

25
2.2.20 Example
Suppose A = [1 2 3; −4 0 −1] and B = [0 1; 1 −3; 4 2]. Then
AB = [1 2 3; −4 0 −1][0 1; 1 −3; 4 2] = [14 1; −4 −6].
Now Aᵀ = [1 −4; 2 0; 3 −1] and Bᵀ = [0 1 4; 1 −3 2], so
BᵀAᵀ = [0 1 4; 1 −3 2][1 −4; 2 0; 3 −1] = [14 −4; 1 −6] = (AB)ᵀ.
26
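A one-line NumPy check (added) of property 5, using the matrices of 2.2.20:

import numpy as np

A = np.array([[1, 2, 3],
              [-4, 0, -1]])
B = np.array([[0, 1],
              [1, -3],
              [4, 2]])
assert np.array_equal((A @ B).T, B.T @ A.T)   # the transpose of AB is B^T A^T
print((A @ B).T)                              # [[14 -4] [ 1 -6]]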
2-3 Systems of Linear
Equations
2.3.1 The Elementary Row Operations.
1. Interchange two rows.
2. Multiply a row by any scalar except 0.
3. Replace a row by that row minus a multiple of another.
2.3.3 To Solve a System of Linear Equations.
1. Write the system in the form Ax = b, where A is the
matrix of coefficients, x is the vector of unknowns, and
b is the vector whose components are the constants to
the right of the equals signs.
2. Add a vertical line and b to the right of A to form the
augmented matrix [ A│b ].
27
3. Reduce the augmented matrix to an upper triangular
matrix by Gaussian elimination.
4. Write the equations that correspond to the triangular
matrix and solve by back substitution.
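The four steps above translate directly into code. The following NumPy sketch is an added illustration (it assumes a square system with nonzero pivots, so no row interchanges are needed): form [A | b], eliminate, then back substitute.

import numpy as np

def solve_by_elimination(A, b):
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])  # augmented matrix [A | b]
    n = len(b)
    for i in range(n):                      # Gaussian elimination to upper triangular form
        for k in range(i + 1, n):
            M[k] -= (M[k, i] / M[i, i]) * M[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # back substitution
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[1, 2], [2, 3]])
b = np.array([5, 7])
print(solve_by_elimination(A, b))   # [-1.  3.], the solution of Problem 2.2.16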
2.3.4 Problem Solve the system x + 2y = 1, 4x + 8y = 3.
Solution: This is Ax = b with A = [1 2; 4 8], x = [x; y], and b = [1; 3].
[A | b] = [1 2 | 1; 4 8 | 3] → (R2 ← R2 − 4R1) → [1 2 | 1; 0 0 | −1].
So the equations are
x + 2y = 1
0 = −1.
The last equation is not true, so there is no solution.

28
2.3.6 A system of linear equation may have
1. A unique solution,
2. no solution, or
3. infinitely many solutions.
2.3.7 Definition
A row echelon matrix is an upper triangular matrix with the
following properties:
1. All rows consisting entirely of 0s are at the bottom.
2. The leading nonzero entries of the nonzero rows step from
left to right as you read down the matrix;
3. All the entries in a column below a leading nonzero entry
are 0.
A row echelon form of a matrix A is a row echelon matrix U to
which A can be moved via elementary row operations.
29
2.3.9 Example
• The matrix U = [1 1 3 4 5; 0 0 1 3 1; 0 0 0 0 1] is a row echelon matrix.
The pivots (circled on the slide) are in columns one, three, and five, so the
pivot columns are columns one, three, and five.
• Let A = [3 −2 1 −4; 6 −4 8 −7; 9 −6 9 −12]. Then
A → (R2 ← R2 − 2R1, R3 ← R3 − 3R1) → [3 −2 1 −4; 0 0 6 1; 0 0 6 0] → (R3 ← R3 − R2) → [3 −2 1 −4; 0 0 6 1; 0 0 0 −1] = U.
So the pivots of A are 3, 6, and −1, and the pivot columns of A are one, three, and four.
2.3.12 Example Solve the system
2x − y + z = −7
x + y + z = −2
3x + y + z = 0.

Solution:
[A | b] = [2 −1 1 | −7; 1 1 1 | −2; 3 1 1 | 0] → (R1 ↔ R2) → [1 1 1 | −2; 2 −1 1 | −7; 3 1 1 | 0]
→ (R2 ← R2 − 2R1, R3 ← R3 − 3R1) → [1 1 1 | −2; 0 −3 −1 | −3; 0 −2 −2 | 6]
→ (R2 ↔ R3) → [1 1 1 | −2; 0 −2 −2 | 6; 0 −3 −1 | −3] → (R2 ← (−1/2)R2) → [1 1 1 | −2; 0 1 1 | −3; 0 −3 −1 | −3]
→ (R3 ← R3 + 3R2) → [1 1 1 | −2; 0 1 1 | −3; 0 0 2 | −12] → (R3 ← (1/2)R3) → [1 1 1 | −2; 0 1 1 | −3; 0 0 1 | −6].
Back substitution gives z = −6, y = 3, x = 1.
31
2.3.13 Free variables are those that correspond to columns
which are not pivot columns.

2.3.8 Definition
A pivot in a row echelon matrix U is a leading nonzero entry in
a nonzero row.
A column containing a pivot is called a pivot column of U.
If U is a row echelon form of a matrix A, then a pivot column of A is a
column that corresponds to a pivot column of U.
If U was obtained without any multiplication of rows by
nonzero scalars, then a pivot of A is defined to be any pivot of
U.
32
2.3.14 Example Solve
x1 − x2 − x3 = 2
2x1 − x2 − 3x3 = 6
x1 − 2x3 = 4.

Solution:
[A | b] = [1 −1 −1 | 2; 2 −1 −3 | 6; 1 0 −2 | 4] → (R2 ← R2 − 2R1, R3 ← R3 − R1) → [1 −1 −1 | 2; 0 1 −1 | 2; 0 1 −1 | 2]
→ (R3 ← R3 − R2) → [1 −1 −1 | 2; 0 1 −1 | 2; 0 0 0 | 0].
So x1 − x2 − x3 = 2 and x2 − x3 = 2. Setting x3 = t, the solution is x1 = 2t + 4, x2 = t + 2, x3 = t,
or in vector form x = [x1; x2; x3] = [2t + 4; t + 2; t] = [4; 2; 0] + t[2; 1; 1].

Our solution is a line, and there are infinitely many solutions.
33
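A short added NumPy check: every point on the line x = [4; 2; 0] + t[2; 1; 1] really does satisfy the system of 2.3.14.

import numpy as np

A = np.array([[1, -1, -1],
              [2, -1, -3],
              [1, 0, -2]])
b = np.array([2, 6, 4])
for t in (0.0, 1.0, -3.5):
    x = np.array([4.0, 2.0, 0.0]) + t * np.array([2.0, 1.0, 1.0])
    assert np.allclose(A @ x, b)          # each sample point solves Ax = b
print("points on the line satisfy Ax = b")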
2.3.15 Problem Express the vector [2; 6; 4] as a linear combination of the
columns of A = [1 −1 −1; 2 −1 −3; 1 0 −2].

Solution: We seek a, b, and c so that
[2; 6; 4] = a[1; 2; 1] + b[−1; −1; 0] + c[−1; −3; −2].
By 2.1.33, this asks for a solution of Ax = b with x = [a; b; c] and b = [2; 6; 4];
by 2.3.14 there are many solutions, such as [a; b; c] = [4; 2; 0].
So, for instance, [2; 6; 4] = 4[1; 2; 1] + 2[−1; −1; 0] + 0[−1; −3; −2],
the coefficients being the components of the vector [4; 2; 0].
2.3.16 Problem
Is [1; −6] a linear combination of [1; 4], [2; 0], and [3; −1]?

Solution: The question asks whether there exist scalars a, b, and c so that
[1; −6] = a[1; 4] + b[2; 0] + c[3; −1] = [1 2 3; 4 0 −1][a; b; c].
So we ask whether there is a solution to Ax = b, with x = [a; b; c] and b = [1; −6].
[A | b] = [1 2 3 | 1; 4 0 −1 | −6] → [1 2 3 | 1; 0 −8 −13 | −10].
This is row echelon form. There is one column that is not a pivot
column: column three. Thus the third variable, c, is free. There
are infinitely many solutions, and the vector [1; −6] is a linear
combination of the other three.
35
2-4 Homogeneous Systems
and Linear Independence
Key Terms:
1. Homogeneous
2. Non-homogeneous
3. Particular solution
Theorem 2.4.1
Suppose xp is a particular solution to the system Ax = b. Then
any solution to Ax = b is the sum x = xp + xh of xp and a
solution xh to the homogeneous system Ax = 0.
Proof :
Suppose that x is a solution to Ax = b. We are given that xp is
another, so Axp = b and Ax = b. It follows that A(x – xp) = 0, so x
– xp = xh is a solution to Ax = 0. So x = xp + xh. 36
2.4.2 Example
The system
x1 + 2x2 + 3x3 + 4x4 = 1
2x1 + 4x2 + 8x3 + 10x4 = 6
3x1 + 7x2 + 11x3 + 14x4 = 7
is Ax = b with A = [1 2 3 4; 2 4 8 10; 3 7 11 14], x = [x1; x2; x3; x4], b = [1; 6; 7].
Then [1 2 3 4 | 1; 2 4 8 10 | 6; 3 7 11 14 | 7] → [1 2 3 4 | 1; 0 0 2 2 | 4; 0 1 2 2 | 4] → [1 2 3 4 | 1; 0 1 2 2 | 4; 0 0 1 1 | 2].
x4 = t is free, x3 = 2 − t, x2 = 4 − 2x3 − 2x4 = 0, and x1 = 1 − 2x2 − 3x3 − 4x4 = −5 − t.
The solution to Ax = b is
x = [x1; x2; x3; x4] = [−5 − t; 0; 2 − t; t] = [−5; 0; 2; 0] + t[−1; 0; −1; 1]; here, xp = [−5; 0; 2; 0] and xh = t[−1; 0; −1; 1].
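A NumPy sketch (added) tying the example back to Theorem 2.4.1: xp solves Ax = b, xh solves Ax = 0, and xp + t·xh solves Ax = b for any t.

import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 4, 8, 10],
              [3, 7, 11, 14]])
b = np.array([1, 6, 7])
xp = np.array([-5, 0, 2, 0])    # a particular solution
xh = np.array([-1, 0, -1, 1])   # a solution of the homogeneous system
assert np.array_equal(A @ xp, b)
assert np.array_equal(A @ xh, np.array([0, 0, 0]))
for t in (2, -7):
    assert np.array_equal(A @ (xp + t * xh), b)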
 More on Linear
Independence
2.4.3 Example
The standard basis vectors of Rⁿ are linearly independent.
Proof: We suppose that some linear combination of these is
the zero vector, say
c1e1 + c2e2 + … + cnen = 0.
This is Ac = 0 where c = [c1; c2; …; cn] and A = [e1 e2 … en] is the n × n
identity matrix. Thus Ac = Ic = 0 certainly means c = 0, so the
vectors are linearly independent.
38
2.4.4 Problem
Decide whether the vectors x1 = [2; 3; 1; 4], x2 = [−1; 1; 2; 3], and x3 = [4; 0; −1; 1]
are linearly independent or linearly dependent.
Solution:
Suppose c1x1 + c2x2 + c3x3 = 0. This is the homogeneous system
Ac = 0 with A = [2 −1 4; 3 1 0; 1 2 −1; 4 3 1] and c = [c1; c2; c3]. Then
[2 −1 4; 3 1 0; 1 2 −1; 4 3 1] → [1 2 −1; 0 −5 6; 0 −5 3; 0 −5 5] → … → [1 2 −1; 0 1 −1; 0 0 1; 0 0 0].
This gives c3 = 0, then c2 − c3 = 0, so c2 = 0, and c1 + 2c2 − c3 = 0, so
c1 = 0. Thus c = 0, so the vectors are linearly independent.
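As a quick machine check (added, and different from the hand reduction above), the 4 × 3 matrix with these columns has rank 3, which is another way of saying that the only solution of Ac = 0 is c = 0:

import numpy as np

A = np.column_stack([[2, 3, 1, 4],
                     [-1, 1, 2, 3],
                     [4, 0, -1, 1]])
print(np.linalg.matrix_rank(A))   # 3, so x1, x2, x3 are linearly independent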
39
2-5 The LU Factorization
of a Matrix
2.5.1 Definition
An elementary matrix is a square matrix obtained from the identity
matrix by a single elementary row operation.
2.5.2 Examples
• E1 = [0 1 0; 1 0 0; 0 0 1] is an elementary matrix, obtained from the
3 × 3 identity matrix I by interchanging rows one and two.
• The matrix E3 = [1 0 0; 0 3 0; 0 0 1] is elementary; it is obtained
from I by multiplying row two by 3.
2.5.3 If A is a matrix and E is an elementary matrix with EA
defined, then EA is that matrix obtained by applying to A
the elementary operation that defined E.
2.5.4 If E is an elementary matrix, E has an inverse that is an
elementary matrix, namely, the elementary matrix that
reverses (or undoes) what E does.
2.5.5 Example
E1 = [0 1 0; 1 0 0; 0 0 1] is the elementary matrix that interchanges rows
one and two of a matrix. Interchanging rows one and two twice leaves a
matrix unchanged, so we expect E1E1 = I. Alternatively, we can compute
E1E1 = [0 1 0; 1 0 0; 0 0 1][0 1 0; 1 0 0; 0 0 1] = [1 0 0; 0 1 0; 0 0 1] = I.
41
2.5.9 Definition
The main diagonal of an m × n matrix A is the list of elements a11,
a22, …. A matrix is diagonal if its only nonzero entries lie on
the main diagonal, upper triangular if all entries below (and to
the left of) the main diagonal are 0, and lower triangular if all
entries above (and to the right of) the main diagonal are 0.

Figure 2.1 Schematic representations of a diagonal, an upper triangular, and a lower triangular matrix.
42
2.5.11 Example
Let D, L, and U be the matrices
D = [2 0; 0 −4; 0 0; 0 0], L = [−6 0 0; 1 0 0; 3 2 1; 0 3 1], U = [−1 2 1 6; 0 5 −2 4; 0 0 0 8].
Then D is a diagonal 4 × 2 matrix, L is a lower triangular 4 × 3
matrix, and U is an upper triangular 3 × 4 matrix.
2.5.12 Definition
An LU factorization of a matrix A is the representation of A as
the product A = LU of a (necessarily square) lower triangular
matrix L and an upper triangular matrix U, which is the same
size as A.
2.5.13 Example
[1 2; 4 6] = [1 0; 4 1][1 2; 0 −2] is an LU factorization of A = [1 2; 4 6].
43
2.5.14 Example
Find an LU factorization of A = [1 2 3; 4 −6 −6; −7 −7 9].

Solution: We replace row two by row two minus 4 times row one.
The elementary matrix required is
E1 = [1 0 0; −4 1 0; 0 0 1], thus E1A = [1 2 3; 0 −14 −18; −7 −7 9].
Then we replace row three by row three plus 7 times row one:
E2 = [1 0 0; 0 1 0; 7 0 1], thus E2(E1A) = [1 2 3; 0 −14 −18; 0 7 30].
Next we replace row three by row three minus −1/2 times row two:
E3 = [1 0 0; 0 1 0; 0 1/2 1], thus E3(E2E1A) = [1 2 3; 0 −14 −18; 0 0 21] = U.
44
Since (E3E2E1)A = U, A = (E3E2E1)⁻¹U = E1⁻¹E2⁻¹E3⁻¹U, so
L = E1⁻¹E2⁻¹E3⁻¹ = [1 0 0; 4 1 0; 0 0 1][1 0 0; 0 1 0; −7 0 1][1 0 0; 0 1 0; 0 −1/2 1] = [1 0 0; 4 1 0; −7 −1/2 1].
Thus A = LU:
[1 2 3; 4 −6 −6; −7 −7 9] = [1 0 0; 4 1 0; −7 −1/2 1][1 2 3; 0 −14 −18; 0 0 21].
Theorem 2.5.15
If a matrix A can be reduced to row echelon form without row
interchanges, then A has an LU factorization.
45
 L without Effort
2.5.16 To find an LU factorization of a matrix A, try to reduce A
to an upper triangular matrix U from top down using
only the third elementary row operation.
2.5.18 Remark Suppose A = [1 2 3; 1 1 1; 2 2 3] and we apply the
following sequence of third elementary row operations:
A → (R3 ← R3 − 2R2) → [1 2 3; 1 1 1; 0 0 1] → (R2 ← R2 − R1) → [1 2 3; 0 −1 −2; 0 0 1] = U.
And we find L = [1 0 0; 1 1 0; 0 2 1], but A ≠ LU.
2.5.19 When trying to find an LU factorization, use only the
third elementary row operation. 46
 Why LU?
2.5.20 Example
Suppose we want to solve Ax = b for x = [x1; x2; x3], where b = [0; 1; 0]
and A = [1 2 3; 4 −6 −6; −7 −7 9] is the matrix of Example 2.5.14.
We found A = LU, with
L = [1 0 0; 4 1 0; −7 −1/2 1] and U = [1 2 3; 0 −14 −18; 0 0 21].
To solve Ax = b, which is L(Ux) = b, we first solve Ly = b for
y = [y1; y2; y3]. This is the system
y1 = 0
4y1 + y2 = 1
−7y1 − (1/2)y2 + y3 = 0.
We have y1 = 0, y2 = 1 − 4y1 = 1, and y3 = 7y1 + (1/2)y2 = 1/2.
Thus y = [0; 1; 1/2]. Now we solve Ux = y:
x1 + 2x2 + 3x3 = 0
−14x2 − 18x3 = 1
21x3 = 1/2.
So x3 = 1/42; −14x2 = 1 + 18x3 = 10/7, so x2 = −5/49; and x1 = −2x2 − 3x3 = 10/49 − 1/14 = 13/98.
The solution is x = [13/98; −5/49; 1/42].

2.5.21 Given A = LU, an LU factorization of A, solve Ax = b
by first solving Ly = b for y and then Ux = y for x.
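A brief sketch of this two-step process (added; it uses scipy.linalg.solve_triangular instead of hand substitution) for the L, U, and b of Example 2.5.20:

import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1.0, 0.0, 0.0],
              [4.0, 1.0, 0.0],
              [-7.0, -0.5, 1.0]])
U = np.array([[1.0, 2.0, 3.0],
              [0.0, -14.0, -18.0],
              [0.0, 0.0, 21.0]])
b = np.array([0.0, 1.0, 0.0])

y = solve_triangular(L, b, lower=True)    # forward substitution: Ly = b
x = solve_triangular(U, y, lower=False)   # back substitution: Ux = y
print(x)                                  # 13/98, -5/49, 1/42 as decimals
print(np.allclose(L @ U @ x, b))          # True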
48
 The PLU Factorization;
Row Interchanges

2.5.23 Example
[0 1; 1 0], [0 1 0; 0 0 1; 1 0 0], and [0 0 1 0; 0 1 0 0; 0 0 0 1; 1 0 0 0] are all permutation matrices.

2.5.24 If P is a permutation matrix, then P–1 = PT.
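A quick added NumPy check of 2.5.24 with one 3 × 3 permutation matrix:

import numpy as np

P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
print(np.array_equal(P.T @ P, np.eye(3, dtype=int)))   # True: P^T acts as P^{-1}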

49
2.5.25 Problem
Find an LU factorization of A = [0 1 1 0; 0 −1 −1 4; 2 3 2 1; 6 4 2 −8] if possible;
otherwise find a PLU factorization.

Solution:
Rearranging the rows of A in the order 3, 4, 1, 2 leads to the matrix
P'A = [2 3 2 1; 6 4 2 −8; 0 1 1 0; 0 −1 −1 4].
P'A → (R2 ← R2 − 3R1) → [2 3 2 1; 0 −5 −4 −11; 0 1 1 0; 0 −1 −1 4]
→ (R3 ← R3 + (1/5)R2, R4 ← R4 − (1/5)R2) → [2 3 2 1; 0 −5 −4 −11; 0 0 1/5 −11/5; 0 0 −1/5 31/5]
→ (R4 ← R4 + R3) → [2 3 2 1; 0 −5 −4 −11; 0 0 1/5 −11/5; 0 0 0 4] = U'
with L = [1 0 0 0; 3 1 0 0; 0 −1/5 1 0; 0 1/5 −1 1]. Thus P'A = LU' and hence A = PLU',
with P = (P')⁻¹ = (P')ᵀ = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0].
51
2-6 LDU Factorizations
2.6.1 Definition
An LDU factorization of a matrix A is a representation of A =
LDU as the product of a (necessarily square) lower triangular
matrix L with 1s on the diagonal, a (square) diagonal matrix D,
and an upper triangular matrix U (the same size as A) with 1s on
the diagonal.

52
2.6.3 Problem
Find an LDU factorization of A = [−1 2 0 −2; 2 4 6 8; −3 2 4 1].

Solution:
A = [−1 2 0 −2; 2 4 6 8; −3 2 4 1] → (R2 ← R2 + 2R1, R3 ← R3 − 3R1) → [−1 2 0 −2; 0 8 6 4; 0 −4 4 7]
→ (R3 ← R3 + (1/2)R2) → [−1 2 0 −2; 0 8 6 4; 0 0 7 9] = U'. We have A = LU' with L = [1 0 0; −2 1 0; 3 −1/2 1].

The final LDU factorization of A is obtained by factoring the
diagonal entries of U' from its rows:
[−1 2 0 −2; 2 4 6 8; −3 2 4 1] = [1 0 0; −2 1 0; 3 −1/2 1][−1 0 0; 0 8 0; 0 0 7][1 −2 0 2; 0 1 3/4 1/2; 0 0 1 9/7].
53
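The step "factor the diagonal entries of U′ from its rows" is mechanical; here is a small added NumPy sketch of it, assuming the diagonal entries are nonzero:

import numpy as np

U_prime = np.array([[-1.0, 2.0, 0.0, -2.0],
                    [0.0, 8.0, 6.0, 4.0],
                    [0.0, 0.0, 7.0, 9.0]])
d = np.diag(U_prime)            # diagonal entries -1, 8, 7
D = np.diag(d)                  # the diagonal matrix D
U = U_prime / d[:, None]        # divide each row by its diagonal entry: 1s on the diagonal
print(np.allclose(D @ U, U_prime))   # True, so U' = DU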
 Uniqueness Questions
Theorem 2.6.4
If an invertible matrix A can be factored A = LDU and also A
= L1D1U1 with L and L1 lower triangular matrices with 1s on
the diagonal, U and U1 upper triangular matrices with 1s on
the diagonal, and D and D1 diagonal matrices, then L = L1, U
= U1, and D = D1.

54
 Symmetric Matrices
2.6.5 Definition
A matrix A is symmetric if it equals its transpose: AT = A.

2.6.6 Remark Our argument that L = UT in the LDU


factorization of a symmetric matrix A assumed that A was
invertible. In fact, this restriction is not necessary. In the
exercise, we ask the reader to show that if a symmetric matrix
can be factored A = LDU, then it can also be factored A =
UTDU. (See Exercise 10.)

55
2.6.7 Problem
Find an LDU factorization of the symmetric matrix A = [1 2 −1; 2 3 0; −1 0 5].
Solution:
A → [1 2 −1; 0 −1 2; 0 2 4] → [1 2 −1; 0 −1 2; 0 0 8] = U'.
Factor U' = DU = [1 0 0; 0 −1 0; 0 0 8][1 2 −1; 0 1 −2; 0 0 1],
obtaining D = [1 0 0; 0 −1 0; 0 0 8] and U = [1 2 −1; 0 1 −2; 0 0 1].
We conclude that L = Uᵀ = [1 0 0; 2 1 0; −1 −2 1], and we can confirm
that A = LDU.
56
 The PLDU Factorization
2.6.8 Example
Let A = [2 4 2; 1 2 3; 3 2 1]. As in Section 2.5, P' = [1 0 0; 0 0 1; 0 1 0].
Then P'A = A' = [2 4 2; 3 2 1; 1 2 3] and A' = LU' = [1 0 0; 3/2 1 0; 1/2 0 1][2 4 2; 0 −4 −2; 0 0 2].
Factoring U' = DU gives A' = LDU = [1 0 0; 3/2 1 0; 1/2 0 1][2 0 0; 0 −4 0; 0 0 2][1 2 1; 0 1 1/2; 0 0 1].
The inverse of the elementary matrix P' is (P')⁻¹ = P', so we
have A = PLDU with P = P'.
2.6.9 Problem
Find an LDU factorization of A = [0 1 1 0; 0 −1 −1 4; 2 3 2 1; 6 4 2 −8] if possible.
Otherwise find a PLDU factorization.
Solution: In Problem 2.5.25 of Section 2.5, we noted A does
not have an LU factorization, but A = PLU' with
P = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0], L = [1 0 0 0; 3 1 0 0; 0 −1/5 1 0; 0 1/5 −1 1],
and U' = [2 3 2 1; 0 −5 −4 −11; 0 0 1/5 −11/5; 0 0 0 4].
Since the diagonal entries of U' are not 0, we can factor U' = DU
with
D = [2 0 0 0; 0 −5 0 0; 0 0 1/5 0; 0 0 0 4] and U = [1 3/2 1 1/2; 0 1 4/5 11/5; 0 0 1 −11; 0 0 0 1],
hence obtaining a factorization A = PLDU.
2-7 Finding the Inverse of
a Matrix
2.7.1 Problem
Determine whether A = [3 2; 2 1] and B = [−1 2; 2 −3] are inverses.
Solution: We compute AB = [3 2; 2 1][−1 2; 2 −3] = [1 0; 0 1] = I.
Since the matrices are square, we conclude that A and B are inverses.
2.7.2 Problem
Let A = [1 2 3; 4 5 6] and B = [−1 1; 0 −1; 2/3 1/3]. Then
AB = [1 2 3; 4 5 6][−1 1; 0 −1; 2/3 1/3] = [1 0; 0 1]. Is A invertible?
Solution: No, A is not invertible because it is not square.
Alternatively, you may check that BA ≠ I.
59
 A Method for Finding
the Inverse
Let A be an n × n matrix. If A is invertible, there is a matrix B with
AB = I = [e1 e2 … en].
Let the columns of B be x1, x2, …, xn. So
B = [x1 x2 … xn] and AB = [Ax1 Ax2 … Axn].
We want AB = I, so we wish to find vectors x1, x2, …, xn such that
Ax1 = e1, Ax2 = e2, …, Axn = en.
60
2.7.4 Definition
A matrix is in reduced row echelon form if it is in row echelon
form, each pivot (that is, each leading nonzero entry in a nonzero
row) is a 1, and each such pivot is the only nonzero entry in its
column.
2.7.7 Example
[1 2 −5; 3 1 5; −2 3 4] → [1 2 −5; 0 −5 20; 0 7 −6] → [1 2 −5; 0 1 −4; 0 7 −6]
→ [1 0 3; 0 1 −4; 0 0 22] → [1 0 3; 0 1 −4; 0 0 1] → [1 0 0; 0 1 0; 0 0 1].
61
2.7.8 To find the inverse of A, apply Gaussian elimination to
 A I  attempting to move A to reduced row echelon form.
If this is I B  for some B, then B = A–1. If you cannot
move A to I, then A is not invertible.
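A compact sketch of 2.7.8 (added; it assumes A is invertible and that no row interchanges are needed, so it is an illustration rather than a robust routine), using the matrix that Example 2.7.10 below inverts by hand:

import numpy as np

def inverse_by_row_reduction(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | I]
    for i in range(n):
        M[i] /= M[i, i]                           # make the pivot a 1
        for k in range(n):
            if k != i:
                M[k] -= M[k, i] * M[i]            # clear the rest of the pivot column
    return M[:, n:]                               # the right half is now A^{-1}

A = np.array([[2, 7, 0],
              [1, 4, -1],
              [1, 3, 0]])
B = inverse_by_row_reduction(A)
print(B)                                  # [[-3. 0. 7.] [ 1. 0. -2.] [ 1. -1. -1.]]
print(np.allclose(A @ B, np.eye(3)))      # True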
2.7.9 Example
Suppose A = [1 2 0; 3 −1 2; −2 3 −2].
[A | I] = [1 2 0 | 1 0 0; 3 −1 2 | 0 1 0; −2 3 −2 | 0 0 1] → [1 2 0 | 1 0 0; 0 −7 2 | −3 1 0; 0 7 −2 | 2 0 1]
→ [1 2 0 | 1 0 0; 0 −7 2 | −3 1 0; 0 0 0 | −1 1 1].
There is no point in continuing: A cannot be moved to I, so the given matrix A is not invertible.
2.7.10 Example Suppose A = [2 7 0; 1 4 −1; 1 3 0]. Find A⁻¹.

Solution:
[A | I] = [2 7 0 | 1 0 0; 1 4 −1 | 0 1 0; 1 3 0 | 0 0 1] → (R1 ↔ R2) → [1 4 −1 | 0 1 0; 2 7 0 | 1 0 0; 1 3 0 | 0 0 1]
→ [1 4 −1 | 0 1 0; 0 −1 2 | 1 −2 0; 0 −1 1 | 0 −1 1] → [1 4 −1 | 0 1 0; 0 1 −2 | −1 2 0; 0 −1 1 | 0 −1 1]
→ [1 0 7 | 4 −7 0; 0 1 −2 | −1 2 0; 0 0 −1 | −1 1 1] → [1 0 7 | 4 −7 0; 0 1 −2 | −1 2 0; 0 0 1 | 1 −1 −1]
→ [1 0 0 | −3 0 7; 0 1 0 | 1 0 −2; 0 0 1 | 1 −1 −1] = [I | B].
One can check that AB = I, so B = A⁻¹.
63
2.7.12 Problem Express the vector b = [−2; 4; 5] as a linear
combination of the columns of the matrix A in 2.7.11.

Solution:
We just saw that Ax = b with x = [41; −12; −11]. Since Ax is a linear
combination of the columns of A with coefficients the
components of x (please recall, and never forget, 2.1.33), we have
b = 41[2; 1; 1] − 12[7; 4; 3] − 11[0; −1; 0].
If A has an inverse, then Ax = b has the unique solution x = A⁻¹b.
Theorem 2.7.14
A matrix is invertible if and only if it can be written as the
product of elementary matrices. 64
2.7.15 Problem
Express A = [2 3; 1 1] as the product of elementary matrices.

Solution: We move A to the identity matrix by a sequence of
elementary row operations:
A = [2 3; 1 1] → [1 1; 2 3] = E1A → [1 1; 0 1] = E2E1A → [1 0; 0 1] = E3E2E1A = I,
with E1 = [0 1; 1 0], E2 = [1 0; −2 1], and E3 = [1 −1; 0 1].
So E3E2E1A = I and A = (E3E2E1)⁻¹ = E1⁻¹E2⁻¹E3⁻¹:
[2 3; 1 1] = [0 1; 1 0][1 0; 2 1][1 1; 0 1].
65
2.7.16 Problem
Express A = [−1 1 3; −2 2 7; 4 −3 −12] as the product of elementary matrices.

Solution: Row reduction to I is
A = [−1 1 3; −2 2 7; 4 −3 −12] → [1 −1 −3; −2 2 7; 4 −3 −12] = E1A → [1 −1 −3; 0 0 1; 4 −3 −12] = E2E1A
→ [1 −1 −3; 0 0 1; 0 1 0] = E3E2E1A → [1 −1 −3; 0 1 0; 0 0 1] = E4E3E2E1A → [1 0 −3; 0 1 0; 0 0 1] = E5E4E3E2E1A
→ [1 0 0; 0 1 0; 0 0 1] = E6E5E4E3E2E1A,
where E1 = [−1 0 0; 0 1 0; 0 0 1], E2 = [1 0 0; 2 1 0; 0 0 1], E3 = [1 0 0; 0 1 0; −4 0 1],
E4 = [1 0 0; 0 0 1; 0 1 0], E5 = [1 1 0; 0 1 0; 0 0 1], E6 = [1 0 3; 0 1 0; 0 0 1].
Since (E6E5E4E3E2E1)A = I, A = (E6E5E4E3E2E1)⁻¹ = E1⁻¹E2⁻¹E3⁻¹E4⁻¹E5⁻¹E6⁻¹, which is
[−1 1 3; −2 2 7; 4 −3 −12] = [−1 0 0; 0 1 0; 0 0 1][1 0 0; −2 1 0; 0 0 1][1 0 0; 0 1 0; 4 0 1][1 0 0; 0 0 1; 0 1 0][1 −1 0; 0 1 0; 0 0 1][1 0 −3; 0 1 0; 0 0 1].
Corollary 2.7.17
If A and B are square n × n matrices and AB = I (the n × n identity
matrix), then BA = I too, so A and B are invertible.

Theorem 2.7.18
A square matrix is invertible if and only if its columns are linearly
independent.

68
