CH06
10th ed
September 6, 2015
Chapter 6.1: Direct Methods for Solving Linear Systems
Operations
E_1: a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1,
E_2: a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2,
    \vdots
E_n: a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n

is a linear system with given constants a_{ij}, for each i, j = 1, 2, ..., n, and b_i, for each i = 1, 2, ..., n, and we need to determine the unknowns x_1, ..., x_n.
Definition (6.1)
An n × m (n by m) matrix is a rectangular array of elements with n rows and m columns in which not only is the value of an element important, but also its position in the array.
Operation Counts
Both the amount of time required to complete calculations and the
subsequent round-off error depend on the number of floating-point
arithmetic operations needed to solve a routine problem.
Multiplications/divisions

The total number of multiplications and divisions in Algorithm 6.1 is

\frac{2n^3 + 3n^2 - 5n}{6} + \frac{n^2 + n}{2} = \frac{n^3}{3} + n^2 - \frac{n}{3}.

Additions/subtractions

The total number of additions and subtractions in Algorithm 6.1 is

\frac{n^3 - n}{3} + \frac{n^2 - n}{2} = \frac{n^3}{3} + \frac{n^2}{2} - \frac{5n}{6}.
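Both totals can be checked by instrumenting the elimination loop. The sketch below (a pure-Python counting routine; the function name is ours, not the book's) counts the operations that forward elimination plus back substitution would perform on an n × n system, without doing any arithmetic on actual matrix entries:

```python
def gaussian_elimination_op_counts(n):
    """Count the operations Algorithm 6.1 performs on an n x n system:
    forward elimination followed by back substitution."""
    mult_div = 0
    add_sub = 0
    # Forward elimination: for each pivot column k, reduce the rows below it.
    for k in range(n - 1):
        for i in range(k + 1, n):
            mult_div += 1                    # multiplier m_ik = a_ik / a_kk
            mult_div += (n - 1 - k) + 1      # scale row-k entries and b_k
            add_sub += (n - 1 - k) + 1       # subtract from row i and b_i
    # Back substitution.
    mult_div += 1                            # x_n = b_n / a_nn
    for i in range(n - 2, -1, -1):
        mult_div += (n - 1 - i) + 1          # products a_ij * x_j, plus one division
        add_sub += (n - 1 - i)               # sum the products, subtract from b_i
    return mult_div, add_sub

# n = 3 gives 17 multiplications/divisions and 11 additions/subtractions,
# in agreement with n^3/3 + n^2 - n/3 and n^3/3 + n^2/2 - 5n/6.
```

For any n the two counters reproduce the closed forms above exactly.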
Numerical Analysis 10E

Chapter 6.2: Pivoting Strategies
Partial Pivoting

The simplest strategy is to select an element in the same column that is below the diagonal and has the largest absolute value; specifically, we determine the smallest p ≥ k such that

|a_{pk}^{(k)}| = \max_{k \le i \le n} |a_{ik}^{(k)}|

and perform the row interchange (E_k) ↔ (E_p).
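In code the search is a single pass down column k. A minimal sketch (0-indexed, dense list-of-lists storage assumed; the function name is ours):

```python
def partial_pivot_row(a, k):
    """Smallest p >= k with |a[p][k]| = max over i >= k of |a[i][k]|."""
    p = k
    for i in range(k + 1, len(a)):
        if abs(a[i][k]) > abs(a[p][k]):  # strict > keeps the smallest p on ties
            p = i
    return p

# A 2 x 2 system with a tiny leading entry: row 1 (0-indexed) has the
# larger magnitude in column 0 and becomes the pivot row.
a = [[0.003, 59.14],
     [5.291, -6.13]]
p = partial_pivot_row(a, 0)
if p != 0:
    a[0], a[p] = a[p], a[0]  # the interchange (E_1) <-> (E_p)
```

Using a strict comparison in the loop is what makes the routine return the *smallest* qualifying p when several entries tie for the maximum.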
COMPLETE PIVOTING
Pivoting can incorporate interchange of both rows and columns.
Complete (or maximal) pivoting at the kth step searches all the entries a_{ij}, for i = k, k + 1, ..., n and j = k, k + 1, ..., n, to find the entry with the largest magnitude. Both row and column interchanges are performed to bring this entry to the pivot position. The total additional time required to incorporate complete pivoting into Gaussian elimination is

\sum_{k=2}^{n} (k^2 - 1) = \frac{n(n-1)(2n+5)}{6}.
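The corresponding search scans the whole trailing submatrix, which is where the extra (k² − 1) comparisons per step come from. A sketch, again assuming 0-indexed dense storage:

```python
def complete_pivot(a, k):
    """Indices (p, q) of a largest-magnitude entry among a[i][j] with
    i, j >= k; a row and a column interchange then bring it to the
    (k, k) pivot position."""
    p, q = k, k
    for i in range(k, len(a)):
        for j in range(k, len(a)):
            if abs(a[i][j]) > abs(a[p][q]):
                p, q = i, j
    return p, q
```

Interchanging columns q and k reorders the unknowns, so the column permutation must be recorded and undone after back substitution.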
Definition (6.2)
Two matrices A and B are equal if they have the same number of rows and columns, say n × m, and if a_{ij} = b_{ij}, for each i = 1, 2, ..., n and j = 1, 2, ..., m.
Definition (6.3)
If A and B are both n × m matrices, then the sum of A and B, denoted A + B, is the n × m matrix whose entries are a_{ij} + b_{ij}, for each i = 1, 2, ..., n and j = 1, 2, ..., m.
Definition (6.4)
If A is an n × m matrix and λ is a real number, then the scalar multiplication of λ and A, denoted λA, is the n × m matrix whose entries are λa_{ij}, for each i = 1, 2, ..., n and j = 1, 2, ..., m.
Theorem (6.5)
Let A, B, and C be n × m matrices and λ and μ be real numbers. The following properties of addition and scalar multiplication hold:
(i) A + B = B + A, (ii) (A + B) + C = A + (B + C),
(iii) A + O = O + A = A, (iv) A + (−A) = (−A) + A = O,
(v) λ(A + B) = λA + λB, (vi) (λ + μ)A = λA + μA,
(vii) λ(μA) = (λμ)A, (viii) 1A = A.
All these properties follow from similar results concerning the real numbers.
Definition (6.6)
Let A be an n × m matrix and b an m-dimensional column vector. The matrix-vector product of A and b, denoted Ab, is an n-dimensional column vector given by

Ab = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{m} a_{1i}b_i \\ \sum_{i=1}^{m} a_{2i}b_i \\ \vdots \\ \sum_{i=1}^{m} a_{ni}b_i \end{bmatrix}.
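Definition 6.6 translates directly into one sum per output row; a minimal sketch:

```python
def matvec(a, b):
    """(Ab)_i = sum over k of a_ik * b_k, for an n x m matrix a and m-vector b."""
    return [sum(row[k] * b[k] for k in range(len(b))) for row in a]

matvec([[1, 2], [3, 4], [5, 6]], [10, 1])  # a 3 x 2 matrix times a 2-vector -> [12, 34, 56]
```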
Definition (6.7)
Let A be an n × m matrix and B an m × p matrix. The matrix product of A and B, denoted AB, is an n × p matrix C whose entries c_{ij} are

c_{ij} = \sum_{k=1}^{m} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{im}b_{mj},

for each i = 1, 2, ..., n and j = 1, 2, ..., p.
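The triple loop implicit in Definition 6.7, sketched directly:

```python
def matmul(a, b):
    """C = AB: c_ij = sum over k of a_ik * b_kj, with A n x m and B m x p."""
    m, p = len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(len(a))]

matmul([[1, 2], [3, 4]], [[0, 1], [1, 0]])  # -> [[2, 1], [4, 3]]
```

Note the inner dimension m must match between the two factors; the result inherits its row count from A and its column count from B.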
Theorem (6.8)
Let A be an n × m matrix, B be an m × k matrix, C be a k × p matrix, D be an m × k matrix, and λ be a real number. The following properties hold:
(i) A(BC) = (AB)C; (ii) A(B + D) = AB + AD;
(iii) I_m B = B and B I_k = B; (iv) λ(AB) = (λA)B = A(λB).
Definition (6.9)
(i) A square matrix has the same number of rows as columns.
(ii) A diagonal matrix D = [d_{ij}] is a square matrix with d_{ij} = 0 whenever i ≠ j.
(iii) The identity matrix of order n, I_n = [δ_{ij}], is a diagonal matrix whose diagonal entries are all 1s. When the size of I_n is clear, this matrix is generally written simply as I.
Definition (6.10)
An upper-triangular n × n matrix U = [u_{ij}] has, for each j = 1, 2, ..., n, the entries

u_{ij} = 0, for each i = j + 1, j + 2, ..., n;

and a lower-triangular matrix L = [l_{ij}] has, for each j = 1, 2, ..., n, the entries

l_{ij} = 0, for each i = 1, 2, ..., j − 1.
Definition (6.11)
An n × n matrix A is said to be nonsingular (or invertible) if an n × n matrix A^{-1} exists with AA^{-1} = A^{-1}A = I. The matrix A^{-1} is called the inverse of A. A matrix without an inverse is called singular (or noninvertible).
Theorem (6.12)
For any nonsingular n × n matrix A:
(i) A^{-1} is unique.
(ii) A^{-1} is nonsingular and (A^{-1})^{-1} = A.
(iii) If B is also a nonsingular n × n matrix, then (AB)^{-1} = B^{-1}A^{-1}.
Definition (6.13)
The transpose of an n × m matrix A = [a_{ij}] is the m × n matrix A^t = [a_{ji}], where for each i, the ith column of A^t is the same as the ith row of A. A square matrix A is called symmetric if A = A^t.
Theorem (6.14)
The following operations involving the transpose of a matrix hold whenever the operation is possible:
(i) (A^t)^t = A; (ii) (A + B)^t = A^t + B^t;
(iii) (AB)^t = B^t A^t; (iv) if A^{-1} exists, then (A^{-1})^t = (A^t)^{-1}.
Definition (6.15)
Suppose that A is a square matrix.
(i) If A = [a] is a 1 × 1 matrix, then det A = a.
(ii) If A is an n × n matrix, with n > 1, the minor M_{ij} is the determinant of the (n − 1) × (n − 1) submatrix of A obtained by deleting the ith row and jth column of the matrix A.
(iii) The cofactor A_{ij} associated with M_{ij} is defined by A_{ij} = (−1)^{i+j} M_{ij}.
(iv) The determinant of the n × n matrix A, when n > 1, is given either by

det A = \sum_{j=1}^{n} a_{ij}A_{ij} = \sum_{j=1}^{n} (−1)^{i+j} a_{ij}M_{ij}, for any i = 1, 2, ..., n,

or by

det A = \sum_{i=1}^{n} a_{ij}A_{ij} = \sum_{i=1}^{n} (−1)^{i+j} a_{ij}M_{ij}, for any j = 1, 2, ..., n.
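Part (iv) is a recursive definition and can be coded exactly as stated. The cost grows factorially with n, so this is for illustration only, not a practical algorithm:

```python
def det(a):
    """Cofactor expansion of Definition 6.15 along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]  # delete row 1 and column j+1
        total += (-1) ** j * a[0][j] * det(minor)          # (-1)^(i+j) with i fixed at the first row
    return total

det([[1, 2], [3, 4]])  # -> -2
```

With i fixed at the first row the sign pattern (−1)^{i+j} reduces to (−1)^j in 0-indexed code.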
Theorem (6.16)
Suppose A is an n × n matrix:
(i) If any row or column of A has only zero entries, then det A = 0.
(ii) If A has two rows or two columns the same, then det A = 0.
(iii) If Ã is obtained from A by the operation (E_i) ↔ (E_j), with i ≠ j, then det Ã = −det A.
(iv) If Ã is obtained from A by the operation (λE_i) → (E_i), then det Ã = λ det A.
(v) If Ã is obtained from A by the operation (E_i + λE_j) → (E_i), with i ≠ j, then det Ã = det A.
(vi) If B is also an n × n matrix, then det AB = det A · det B.
(vii) det A^t = det A.
(viii) When A^{-1} exists, det A^{-1} = (det A)^{-1}.
(ix) If A is an upper triangular, lower triangular, or diagonal matrix, then det A = \prod_{i=1}^{n} a_{ii}.
Chapter 6.4: The Determinant of a Matrix
Theorem (6.17)
The following statements are equivalent for any n × n matrix A:
(i) The equation Ax = 0 has the unique solution x = 0.
(ii) The system Ax = b has a unique solution for any n-dimensional column vector b.
(iii) The matrix A is nonsingular; that is, A^{-1} exists.
(iv) det A ≠ 0.
(v) Gaussian elimination with row interchanges can be performed on the system Ax = b for any n-dimensional column vector b.
Corollary (6.18)
Suppose that A and B are both n × n matrices with either AB = I or BA = I. Then B = A^{-1} (and A = B^{-1}).
Chapter 6.5: Matrix Factorization
Theorem (6.19)
If Gaussian elimination can be performed on the linear system
Ax = b without row interchanges, then the matrix A can be factored
into the product of a lower-triangular matrix L and an upper-triangular
matrix U; that is, A = LU, where m_{ji} = a_{ji}^{(i)}/a_{ii}^{(i)},

U = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} \\ 0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} \\ \vdots & & \ddots & a_{n-1,n}^{(n-1)} \\ 0 & \cdots & 0 & a_{nn}^{(n)} \end{bmatrix}, \quad and \quad L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ m_{21} & 1 & & \vdots \\ \vdots & & \ddots & 0 \\ m_{n1} & \cdots & m_{n,n-1} & 1 \end{bmatrix}.
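A direct transcription of the factorization, assuming no row interchanges are needed (every pivot a_{kk}^{(k)} nonzero), might look like:

```python
def lu_factor(a):
    """A = LU by Gaussian elimination without row interchanges:
    U is the reduced upper-triangular matrix, and L holds the
    multipliers m_ji below a unit diagonal."""
    n = len(a)
    u = [row[:] for row in a]                        # copy of A, reduced in place to U
    l = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = u[i][k] / u[k][k]                    # m_ik = a_ik^(k) / a_kk^(k)
            l[i][k] = m
            for j in range(k, n):
                u[i][j] -= m * u[k][j]
    return l, u

l, u = lu_factor([[2.0, 1.0], [4.0, 5.0]])  # l = [[1, 0], [2, 1]], u = [[2, 1], [0, 3]]
```

Once L and U are known, each system Ax = b is solved by one forward substitution (Lz = b) and one back substitution (Ux = z).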
Permutation matrix
An n × n permutation matrix P = [p_{ij}] is a matrix obtained by
rearranging the rows of In , the identity matrix. This gives a
matrix with precisely one nonzero entry in each row and in
each column, and each nonzero entry is a 1.
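A sketch of building P from a row ordering (0-indexed; the function name is ours):

```python
def permutation_matrix(order):
    """P whose ith row is row order[i] of the identity, so that row i
    of the product PA equals row order[i] of A."""
    n = len(order)
    return [[1 if j == order[i] else 0 for j in range(n)] for i in range(n)]

permutation_matrix([2, 0, 1])  # -> [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
```

With row interchanges recorded this way, Gaussian elimination with partial pivoting produces a factorization of the form PA = LU.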
Definition (6.20)
The n × n matrix A is said to be diagonally dominant when

|a_{ii}| ≥ \sum_{j=1, j≠i}^{n} |a_{ij}| holds for each i = 1, 2, ..., n,

and strictly diagonally dominant when this inequality is strict for each i.
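The condition is a row-by-row check; a sketch of the strict version, which is the hypothesis of Theorem 6.21:

```python
def is_strictly_diagonally_dominant(a):
    """True when |a_ii| > sum of |a_ij| over j != i, for every row i."""
    return all(
        abs(row[i]) > sum(abs(x) for j, x in enumerate(row) if j != i)
        for i, row in enumerate(a)
    )

is_strictly_diagonally_dominant([[7, 2, 0], [3, 5, -1], [0, 5, -6]])  # -> True
```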
Theorem (6.21)
A strictly diagonally dominant matrix A is nonsingular.
Moreover, in this case, Gaussian elimination can be performed
on any linear system of the form Ax = b to obtain its unique
solution without row or column interchanges, and the
computations will be stable with respect to the growth of
round-off errors.
Definition (6.22)
A matrix A is positive definite if it is symmetric and if x^t Ax > 0 for every n-dimensional vector x ≠ 0.
Theorem (6.23)
If A is an n × n positive definite matrix, then
(i) A has an inverse; (ii) a_{ii} > 0, for each i = 1, 2, ..., n;
(iii) \max_{1 \le k,j \le n} |a_{kj}| ≤ \max_{1 \le i \le n} |a_{ii}|; (iv) (a_{ij})^2 < a_{ii}a_{jj}, for each i ≠ j.
Definition (6.24)
A leading principal submatrix of a matrix A is a matrix of the form

A_k = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k1} & a_{k2} & \cdots & a_{kk} \end{bmatrix},

for some 1 ≤ k ≤ n.
Chapter 6.6: Special Types of Matrices
Theorem (6.25)
A symmetric matrix A is positive definite if and only if each of its
leading principal submatrices has a positive determinant.
Theorem (6.26)
The symmetric matrix A is positive definite if and only if
Gaussian elimination without row interchanges can be
performed on the linear system Ax = b with all pivot elements
positive. Moreover, in this case, the computations are stable
with respect to the growth of round-off errors.
Corollary (6.27)
The matrix A is positive definite if and only if A can be factored
in the form LDL^t, where L is lower triangular with 1s on its
diagonal and D is a diagonal matrix with positive diagonal
entries.
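One way to compute the factorization of Corollary 6.27 column by column; a sketch assuming the pivots d_j stay nonzero (as they do for a positive definite A, where each d_j is positive):

```python
def ldlt(a):
    """A = L D L^t: L unit lower triangular (returned as a full matrix),
    D diagonal (returned as a list of its diagonal entries)."""
    n = len(a)
    l = [[float(i == j) for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        # a_jj = sum over k < j of d_k * l_jk^2, plus d_j.
        d[j] = a[j][j] - sum(d[k] * l[j][k] ** 2 for k in range(j))
        for i in range(j + 1, n):
            # a_ij = sum over k < j of d_k * l_ik * l_jk, plus l_ij * d_j.
            l[i][j] = (a[i][j] - sum(d[k] * l[i][k] * l[j][k]
                                     for k in range(j))) / d[j]
    return l, d

l, d = ldlt([[4.0, 2.0], [2.0, 3.0]])  # d = [4.0, 2.0], l[1][0] = 0.5
```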
Corollary (6.28)
The matrix A is positive definite if and only if A can be factored
in the form LL^t, where L is lower triangular with nonzero
diagonal entries.
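The LL^t form is the Cholesky factorization. A minimal sketch that doubles as a positive definiteness test, since the factorization breaks down (a nonpositive pivot appears) exactly when A is not positive definite:

```python
import math

def cholesky(a):
    """A = L L^t for a symmetric positive definite A; raises ValueError
    when a nonpositive pivot shows A is not positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                pivot = a[i][i] - s          # a_ii minus sum of l_ik^2, k < i
                if pivot <= 0:
                    raise ValueError("matrix is not positive definite")
                l[i][i] = math.sqrt(pivot)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

l = cholesky([[4.0, 2.0], [2.0, 3.0]])  # l = [[2, 0], [1, sqrt(2)]]
```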
Corollary (6.29)
Let A be a symmetric n ⇥ n matrix for which Gaussian
elimination can be applied without row interchanges. Then A
can be factored into LDL^t, where L is lower triangular with 1s on its diagonal and D is the diagonal matrix with a_{11}^{(1)}, ..., a_{nn}^{(n)} on
its diagonal.
Definition (6.30)
An n × n matrix is called a band matrix if integers p and q, with 1 < p, q < n, exist with the property that a_{ij} = 0 whenever p ≤ j − i or q ≤ i − j. The band width of a band matrix is defined as w = p + q − 1.
Theorem (6.31)
Suppose that A = [a_{ij}] is tridiagonal with a_{i,i−1}a_{i,i+1} ≠ 0, for each i = 2, 3, ..., n − 1. If |a_{11}| > |a_{12}|, |a_{ii}| ≥ |a_{i,i−1}| + |a_{i,i+1}|, for each i = 2, 3, ..., n − 1, and |a_{nn}| > |a_{n,n−1}|, then A is nonsingular and the values of l_{ii} described in the Crout Factorization Algorithm are nonzero for each i = 1, 2, ..., n.
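Under these hypotheses, the Crout factorization of a tridiagonal system reduces to one forward sweep and one backward sweep. A sketch (0-indexed, with the three bands passed as lists, assuming the l_{ii} are nonzero as the theorem guarantees; the function name is ours):

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    """Solve a tridiagonal system Ax = rhs via Crout factorization.
    sub = [a_21, ..., a_n,n-1], diag = [a_11, ..., a_nn],
    sup = [a_12, ..., a_n-1,n]."""
    n = len(diag)
    l = [0.0] * n          # the l_ii of the Crout factorization
    u = [0.0] * (n - 1)    # the u_i,i+1 (unit diagonal on U)
    z = [0.0] * n          # solution of Lz = rhs
    l[0] = diag[0]
    z[0] = rhs[0] / l[0]
    if n > 1:
        u[0] = sup[0] / l[0]
    for i in range(1, n):
        l[i] = diag[i] - sub[i - 1] * u[i - 1]
        if i < n - 1:
            u[i] = sup[i] / l[i]
        z[i] = (rhs[i] - sub[i - 1] * z[i - 1]) / l[i]
    x = [0.0] * n          # back substitution: Ux = z
    x[n - 1] = z[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = z[i] - u[i] * x[i + 1]
    return x
```

The whole solve costs O(n) operations, compared with O(n³) for a dense system of the same size.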