Lec05 Systems

The document discusses several methods for solving linear systems of equations: Gaussian elimination transforms the system into triangular form using row operations. The LU factorization represents the matrix as the product of a lower and an upper triangular matrix. Gauss-Jordan elimination further transforms the system into diagonal form. Both Gaussian elimination and Gauss-Jordan elimination can be used to solve linear systems, while the LU factorization allows solving multiple systems with the same matrix efficiently.

SOLVING LINEAR SYSTEMS OF EQUATIONS

See Chapter 3 of text


Background on linear systems
Gaussian elimination and the Gauss-Jordan algorithms
The LU factorization
Gaussian Elimination with pivoting

Related reading: GvL 3.{1,3,5}; Heath 2; TB 7, 20-21

Background: Linear systems


The Problem: A is an n × n matrix, and b a vector of R^n.
Find x such that:

    Ax = b

x is the unknown vector, b the right-hand side, and A is the coefficient matrix.


Example:

    2x1 + 4x2 + 4x3 = 6              [ 2  4  4 ] [x1]   [ 6 ]
     x1 + 5x2 + 6x3 = 4      or      [ 1  5  6 ] [x2] = [ 4 ]
     x1 + 3x2 +  x3 = 8              [ 1  3  1 ] [x3]   [ 8 ]

- Solution of the above system?



Standard mathematical solution by Cramer's rule:

    xi = det(Ai) / det(A)

where Ai = the matrix obtained by replacing the i-th column of A by b.
Note: This formula is useless in practice beyond n = 3 or n = 4.
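For illustration only (and only for very small n), a MATLAB sketch of Cramer's rule applied to the example above; the variable names are ours:

% Cramer's rule: x(i) = det(Ai)/det(A), with Ai = A with column i replaced by b
A = [2 4 4; 1 5 6; 1 3 1] ;   b = [6; 4; 8] ;
n = size(A,1) ;   x = zeros(n,1) ;
for i=1:n
   Ai = A ;   Ai(:,i) = b ;      % replace i-th column by b
   x(i) = det(Ai) / det(A) ;
end
% sanity check against MATLAB's built-in solver:  norm(x - A\b)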
Three situations:
1. The matrix A is nonsingular. There is a unique solution given by x = A^{-1} b.
2. The matrix A is singular and b ∈ Ran(A). There are infinitely many solutions.
3. The matrix A is singular and b ∉ Ran(A). There are no solutions.


Example: (1) Let A = [ 2 0 ; 0 4 ], b = [ 1 ; 8 ]. A is nonsingular → a unique solution x = [ 0.5 ; 2 ].

Example: (2) Case where A is singular & b ∈ Ran(A):

    A = [ 2 0 ; 0 0 ],   b = [ 1 ; 0 ]

→ infinitely many solutions: x(α) = [ 0.5 ; α ], α arbitrary.

Example: (3) Let A be the same as above, but b = [ 1 ; 1 ].

→ No solutions, since the 2nd equation cannot be satisfied.



Triangular linear systems


Example:

    [ 2  4  4 ] [x1]   [ 2 ]
    [ 0  5 -2 ] [x2] = [ 1 ]
    [ 0  0  2 ] [x3]   [ 4 ]

One equation can be trivially solved: the last one.

    2 x3 = 4   ⟹   x3 = 2

x3 is known → we can now solve the 2nd equation:

    5 x2 - 2 x3 = 1   ⟹   5 x2 - 4 = 1   ⟹   x2 = 1

Finally x1 can be determined similarly:

    2 x1 + 4 x2 + 4 x3 = 2   ⟹   ...   ⟹   x1 = -5



ALGORITHM : 1   Back-Substitution algorithm

For i = n : -1 : 1 do:
   t := bi
   For j = i + 1 : n do
      t := t - aij * xj
   End
   xi := t / aii
End

Note: the inner loop computes t = bi - (ai,i+1:n , xi+1:n), an inner product.

We must require that each aii ≠ 0.
Operation count?
Round-off error (use the previous results for inner products (·, ·))?
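A minimal MATLAB sketch of the row-oriented algorithm above (the function name backsolv_row is ours; U is assumed upper triangular with nonzero diagonal entries):

function x = backsolv_row (U, b)
% solves U x = b by row-oriented back-substitution (Algorithm 1)
n = size(U,1) ;
x = zeros(n,1) ;
for i=n:-1:1
   % inner product of U(i,i+1:n) with the already-computed unknowns
   t = b(i) - U(i,i+1:n)*x(i+1:n) ;
   x(i) = t / U(i,i) ;
end

For the triangular example above (signs as reconstructed there), backsolv_row([2 4 4; 0 5 -2; 0 0 2], [2; 1; 4]) returns [-5; 1; 2].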


Backward error analysis for the triangular solve


The computed solution x̃ of the triangular system U x = b obtained with the previous algorithm satisfies:

    (U + E) x̃ = b    with    |E| ≤ n u |U| + O(u^2)

Backward error analysis: the computed x̃ solves a slightly perturbed system.
The backward error is not large in general. It is said that the triangular solve is backward stable.


Column version of back-substitution:

Back-Substitution algorithm, column version

For j = n : -1 : 1 do:
   xj := bj / ajj
   For i = 1 : j - 1 do
      bi := bi - xj * aij
   End
End

- Justify the above algorithm [show that it does indeed compute the solution].

See text for analogous algorithms for lower triangular systems.
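A corresponding MATLAB sketch of the column version (the name backsolv_col is ours; b is used as workspace and overwritten with the solution):

function x = backsolv_col (U, b)
% solves U x = b by column-oriented back-substitution:
% once x(j) is known, its contribution is subtracted from b(1:j-1)
n = size(U,1) ;
for j=n:-1:1
   b(j) = b(j) / U(j,j) ;
   b(1:j-1) = b(1:j-1) - b(j)*U(1:j-1,j) ;
end
x = b ;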

Linear Systems of Equations: Gaussian Elimination


Back to arbitrary linear systems.

Principle of the method: Since triangular systems are


easy to solve, we will transform a linear system into
one that is triangular. Main operation: combine rows so
that zeros appear in the required locations to make the
system triangular.
Notation: use a tableau:

    2x1 + 4x2 + 4x3 =  2                  2  4  4 |  2
     x1 + 3x2 +  x3 =  1     tableau:     1  3  1 |  1
     x1 + 5x2 + 6x3 = -6                  1  5  6 | -6

Main operation used: scaling and adding rows.

Example: Replace row2 by row2 - (1/2)*row1:

    2  4  4 |  2                2  4  4 |  2
    1  3  1 |  1    becomes     0  1 -1 |  0
    1  5  6 | -6                1  5  6 | -6

This is equivalent to:

    [  1    0  0 ] [ 2  4  4 |  2 ]   [ 2  4  4 |  2 ]
    [ -1/2  1  0 ] [ 1  3  1 |  1 ] = [ 0  1 -1 |  0 ]
    [  0    0  1 ] [ 1  5  6 | -6 ]   [ 1  5  6 | -6 ]

The left-hand matrix is of the form

    M = I - v e1^T    with    v = [ 0 ; 1/2 ; 0 ]

Linear Systems of Equations: Gaussian Elimination

Go back to the original system. Step 1 must transform:

    2  4  4 |  2              x  x  x | x
    1  3  1 |  1     into:    0  x  x | x
    1  5  6 | -6              0  x  x | x

row2 := row2 - (1/2) row1:

    2  4  4 |  2
    0  1 -1 |  0
    1  5  6 | -6

row3 := row3 - (1/2) row1:

    2  4  4 |  2
    0  1 -1 |  0
    0  3  4 | -7

Equivalent to:

    [  1    0  0 ] [ 2  4  4 |  2 ]   [ 2  4  4 |  2 ]
    [ -1/2  1  0 ] [ 1  3  1 |  1 ] = [ 0  1 -1 |  0 ]
    [ -1/2  0  1 ] [ 1  5  6 | -6 ]   [ 0  3  4 | -7 ]

    [A, b] → [M1 A, M1 b];    M1 = I - v(1) e1^T;    v(1) = [ 0 ; 1/2 ; 1/2 ]

New system A1 x = b1. Step 2 must now transform:

    2  4  4 |  2              x  x  x | x
    0  1 -1 |  0     into:    0  x  x | x
    0  3  4 | -7              0  0  x | x

row3 := row3 - 3 row2:

    2  4  4 |  2
    0  1 -1 |  0
    0  0  7 | -7

Equivalent to:

    [ 1  0  0 ] [ 2  4  4 |  2 ]   [ 2  4  4 |  2 ]
    [ 0  1  0 ] [ 0  1 -1 |  0 ] = [ 0  1 -1 |  0 ]
    [ 0 -3  1 ] [ 0  3  4 | -7 ]   [ 0  0  7 | -7 ]

The second transformation is as follows:

    [A1, b1] → [M2 A1, M2 b1];    M2 = I - v(2) e2^T;    v(2) = [ 0 ; 0 ; 3 ]

Triangular system → Solve.

[Figure: schematic of the matrix Ak at step k of Gaussian elimination, with row k and the pivot akk highlighted.]

Gaussian Elimination

ALGORITHM : 2   Gaussian Elimination

1. For k = 1 : n - 1 Do:
2.    For i = k + 1 : n Do:
3.       piv := aik / akk
4.       For j := k + 1 : n + 1 Do:
5.          aij := aij - piv * akj
6.       End
7.    End
8. End
Operation count:

    T = Σ_{k=1}^{n-1} Σ_{i=k+1}^{n} [ 1 + Σ_{j=k+1}^{n+1} 2 ]
      = Σ_{k=1}^{n-1} Σ_{i=k+1}^{n} ( 2(n-k) + 3 ) = ...

- Complete the above calculation. Order of the cost?
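A minimal MATLAB sketch of Algorithm 2 acting on the augmented tableau [A, b] (no pivoting, so all pivots akk are assumed nonzero; the data are the running example with the signs as reconstructed above):

% forward elimination: reduce [A, b] to upper triangular form
A = [2 4 4; 1 3 1; 1 5 6] ;   b = [2; 1; -6] ;
A = [A, b] ;   n = size(A,1) ;
for k=1:n-1
   for i=k+1:n
      piv = A(i,k) / A(k,k) ;
      A(i,k+1:n+1) = A(i,k+1:n+1) - piv*A(k,k+1:n+1) ;
      A(i,k) = 0 ;     % this entry is eliminated (the j-loop above starts at k+1)
   end
end
% A(:,1:n) is now upper triangular; finish with back-substitution on A(:,n+1)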



The LU factorization

Now ignore the right-hand side in the transformations.

Observation: Gaussian elimination is equivalent to n - 1 successive Gaussian transformations, i.e., multiplications with matrices of the form Mk = I - v^(k) ek^T, where the first k components of v^(k) equal zero.

Set A0 ≡ A:

    A0 → M1 A0 = A1 → M2 A1 = A2 → M3 A2 = A3 → ... → M_{n-1} A_{n-2} = A_{n-1} ≡ U

The last Ak ≡ U is an upper triangular matrix.

At each step we have A_{k+1} = M_{k+1} Ak, i.e., Ak = M_{k+1}^{-1} A_{k+1}. Therefore:

    A0 = M1^{-1} A1
       = M1^{-1} M2^{-1} A2
       = M1^{-1} M2^{-1} M3^{-1} A3
       = ...
       = M1^{-1} M2^{-1} M3^{-1} ... M_{n-1}^{-1} A_{n-1}

Set

    L = M1^{-1} M2^{-1} M3^{-1} ... M_{n-1}^{-1}

Note: L is lower triangular and A_{n-1} is upper triangular →

    LU decomposition:  A = L U

How to get L?

    L = M1^{-1} M2^{-1} M3^{-1} ... M_{n-1}^{-1}

Consider only the first 2 matrices in this product.

Note that Mk^{-1} = (I - v^(k) ek^T)^{-1} = I + v^(k) ek^T. So:

    M1^{-1} M2^{-1} = (I + v^(1) e1^T)(I + v^(2) e2^T) = I + v^(1) e1^T + v^(2) e2^T

(the cross term vanishes because e1^T v^(2) = 0). Generally,

    M1^{-1} M2^{-1} ... Mk^{-1} = I + v^(1) e1^T + v^(2) e2^T + ... + v^(k) ek^T

The L factor is a lower triangular matrix with ones on the diagonal. Column k of L contains the multipliers lik used in the k-th step of Gaussian elimination.

A matrix A has an LU decomposition if

    det(A(1:k, 1:k)) ≠ 0    for    k = 1, ..., n - 1.

In this case, the determinant of A satisfies:

    det A = det(U) = Π_{i=1}^{n} uii

If, in addition, A is nonsingular, then the LU factorization is unique.

- Show how to obtain L directly from the multipliers.

- Practical use: Show how to use the LU factorization to solve linear systems with the same matrix A and different right-hand sides b.

- LU factorization of the matrix A = [ 2 4 4 ; 1 5 6 ; 1 3 1 ]?

- Determinant of A?

- True or false: Computing the LU factorization of a matrix A involves more arithmetic operations than solving a linear system Ax = b by Gaussian elimination.
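A minimal sketch (ours, not the text's code) of how L and U are formed from the multipliers and then reused for several right-hand sides; the matrix is the one from the exercise above:

A = [2 4 4; 1 5 6; 1 3 1] ;
n = size(A,1) ;   L = eye(n) ;   U = A ;
for k=1:n-1
   for i=k+1:n
      L(i,k) = U(i,k) / U(k,k) ;                % multiplier l_ik goes into column k of L
      U(i,k:n) = U(i,k:n) - L(i,k)*U(k,k:n) ;   % same row operation as in Gaussian elimination
   end
end
% checks:  norm(L*U - A) should be ~ 0,  and det(A) = prod(diag(U))
% reuse the factors for different right-hand sides (two triangular solves each):
b1 = [6; 4; 8] ;   b2 = [1; 0; 0] ;
x1 = U \ (L \ b1) ;
x2 = U \ (L \ b2) ;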


Gauss-Jordan Elimination
Principle of the method: We will now transform the
system into one that is even easier to solve than triangular
systems, namely a diagonal system. The method is very
similar to Gaussian Elimination. It is just a bit more
expensive.
Back to original system. Step 1 must transform:
    2  4  4 |  2              x  x  x | x
    1  3  1 |  1     into:    0  x  x | x
    1  5  6 | -6              0  x  x | x

row2 := row2 - 0.5 row1:

    2  4  4 |  2
    0  1 -1 |  0
    1  5  6 | -6

row3 := row3 - 0.5 row1:

    2  4  4 |  2
    0  1 -1 |  0
    0  3  4 | -7

Step 2: transform

    2  4  4 |  2              x  0  x | x
    0  1 -1 |  0     into:    0  x  x | x
    0  3  4 | -7              0  0  x | x

row1 := row1 - 4 row2:

    2  0  8 |  2
    0  1 -1 |  0
    0  3  4 | -7

row3 := row3 - 3 row2:

    2  0  8 |  2
    0  1 -1 |  0
    0  0  7 | -7

There is now a third step. To transform:

    2  0  8 |  2              x  0  0 | x
    0  1 -1 |  0     into:    0  x  0 | x
    0  0  7 | -7              0  0  x | x

row1 := row1 - (8/7) row3:

    2  0  0 | 10
    0  1 -1 |  0
    0  0  7 | -7

row2 := row2 + (1/7) row3:

    2  0  0 | 10
    0  1  0 | -1
    0  0  7 | -7

Solution: x3 = -1;  x2 = -1;  x1 = 5


ALGORITHM : 3   Gauss-Jordan elimination

1. For k = 1 : n Do:
2.    For i = 1 : n and if i ≠ k Do:
3.       piv := aik / akk
4.       For j := k + 1 : n + 1 Do:
5.          aij := aij - piv * akj
6.       End
7.    End
8. End
Operation count:

    T = Σ_{k=1}^{n} Σ_{i=1}^{n-1} [ 1 + Σ_{j=k+1}^{n+1} 2 ]
      = Σ_{k=1}^{n} Σ_{i=1}^{n-1} ( 2(n-k) + 3 ) = ...

(the inner sum over i has n - 1 terms, since i ≠ k).

- Complete the above calculation. Order of the cost?

- How does it compare with Gaussian Elimination?
-


function x = gaussj (A, b)
%---------------------------------------------------
% function x = gaussj (A, b)
% solves A x = b by Gauss-Jordan elimination
%---------------------------------------------------
n = size(A,1) ;
A = [A,b];
for k=1:n
   for i=1:n
      if (i ~= k)
         piv = A(i,k) / A(k,k) ;
         A(i,k+1:n+1) = A(i,k+1:n+1) - piv*A(k,k+1:n+1);
      end
   end
end
x = A(:,n+1) ./ diag(A) ;
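A quick check of gaussj on the running example (signs as reconstructed above):

A = [2 4 4; 1 3 1; 1 5 6] ;   b = [2; 1; -6] ;
x = gaussj(A, b)     % expected: x = [5; -1; -1], as in the Gauss-Jordan example above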


Gaussian Elimination: Partial Pivoting

Consider again Gaussian Elimination for the linear system

    2x1 + 2x2 + 4x3 = -2                2  2  4 | -2
     x1 +  x2 +  x3 = -1      or:       1  1  1 | -1
     x1 + 4x2 + 6x3 =  5                1  4  6 |  5

row2 := row2 - (1/2) row1:

    2  2  4 | -2
    0  0 -1 |  0
    1  4  6 |  5

row3 := row3 - (1/2) row1:

    2  2  4 | -2
    0  0 -1 |  0
    0  3  4 |  6

The pivot a22 is zero. Solution: permute rows 2 and 3:

    2  2  4 | -2
    0  3  4 |  6
    0  0 -1 |  0

Gaussian Elimination with Partial Pivoting

[Figure: at step k, the entries aik, i = k, ..., n, of column k are scanned; the row holding the largest |aik| is permuted with row k, so that this largest entry becomes the pivot akk.]

General situation: always permute row k with a row l such that

    |alk| = max_{i=k,...,n} |aik|

More stable algorithm.

function x = gaussp (A, b)
%---------------------------------------------------
% function x = gaussp (A, b)
% solves A x = b by Gaussian elimination with
% partial pivoting
%---------------------------------------------------
n = size(A,1) ;
A = [A,b];
for k=1:n-1
   [t, ip] = max(abs(A(k:n,k)));
   ip = ip+k-1 ;
   %% swap rows k and ip of the augmented matrix
   temp = A(k,k:n+1) ;
   A(k,k:n+1) = A(ip,k:n+1);
   A(ip,k:n+1) = temp;
   %%
   for i=k+1:n
      piv = A(i,k) / A(k,k) ;
      A(i,k+1:n+1) = A(i,k+1:n+1) - piv*A(k,k+1:n+1);
   end
end
x = backsolv(A,A(:,n+1));   % back-substitution routine (see text)

Pivoting and permutation matrices

A permutation matrix is a matrix obtained from the identity matrix by permuting its rows.

For example, for the permutation π = {3, 1, 4, 2} we obtain

    P = [ 0 0 1 0
          1 0 0 0
          0 0 0 1
          0 1 0 0 ]

Important observation: the matrix P A is obtained from A by permuting its rows with the permutation π:

    (P A)_{i,:} = A_{π(i),:}
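A small MATLAB sketch of this observation, using the permutation above and the matrix A from the next exercise:

p  = [3 1 4 2] ;                 % the permutation pi
Id = eye(4) ;
P  = Id(p,:) ;                   % rows of the identity, reordered by pi
A  = [1 2 3 4; 5 6 7 8; 9 0 -1 2; -3 4 -5 6] ;
norm(P*A - A(p,:))               % = 0 : (P*A)(i,:) equals A(p(i),:)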

- What is the matrix P A when

    P = [ 0 0 1 0             A = [  1  2  3  4
          1 0 0 0                    5  6  7  8
          0 0 0 1                    9  0 -1  2
          0 1 0 0 ]                 -3  4 -5  6 ]   ?

Any permutation matrix is the product of interchange permutations, which only swap two rows of I.

Notation: Eij = identity matrix with rows i and j swapped.

Example: To obtain π = {3, 1, 4, 2} from {1, 2, 3, 4} we need to swap (2) ↔ (3), then (3) ↔ (4), and finally (1) ↔ (2). Hence:

    P = [ 0 0 1 0
          1 0 0 0
          0 0 0 1
          0 1 0 0 ]  =  E_{1,2} E_{3,4} E_{2,3}

- In the previous example, where

    >> A = [ 1 2 3 4; 5 6 7 8; 9 0 -1 2 ; -3 4 -5 6]

Matlab gives det(A) = -896. What is det(P A)?

At each step of G.E. with partial pivoting:

    M_{k+1} E_{k+1} Ak = A_{k+1}

where E_{k+1} encodes a swap of row k+1 with a row l > k+1.

Notes: (1) E_i^{-1} = E_i, and (2) M_j^{-1} E_{k+1} = E_{k+1} M̃_j^{-1} for k ≥ j, where M̃_j has a permuted Gauss vector:

    M_j^{-1} E_{k+1} = (I + v^(j) e_j^T) E_{k+1} = E_{k+1} (I + E_{k+1} v^(j) e_j^T) ≡ E_{k+1} M̃_j^{-1}

Here we have used the fact that, above row k+1, the permutation matrix E_{k+1} looks just like an identity matrix.

Result: moving the interchanges to the left (using the two notes above, which permute the Gauss vectors of the earlier transformations) gives

    A0 = E1 M1^{-1} A1
       = E1 M1^{-1} E2 M2^{-1} A2 = E1 E2 M̃1^{-1} M2^{-1} A2
       = E1 E2 M̃1^{-1} M2^{-1} E3 M3^{-1} A3 = E1 E2 E3 M̃1^{-1} M̃2^{-1} M3^{-1} A3
       = ...
       = E1 ... E_{n-1} M̃1^{-1} M̃2^{-1} M̃3^{-1} ... M̃_{n-1}^{-1} A_{n-1}

(each M̃_j^{-1} denotes M_j^{-1} with a suitably permuted Gauss vector). In the end:

    P A = L U    with    P = E_{n-1} ... E1
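MATLAB's built-in lu computes this factorization; a minimal check on the zero-pivot example from earlier (coefficients as shown there):

A = [2 2 4; 1 1 1; 1 4 6] ;
[L, U, P] = lu(A) ;              % partial pivoting: P*A = L*U
norm(P*A - L*U)                  % ~ machine precision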


Error Analysis

If no zero pivots are encountered during Gaussian elimination (no pivoting) then the computed factors L̃ and Ũ satisfy

    L̃ Ũ = A + H    with    |H| ≤ 3 (n - 1) u ( |A| + |L̃| |Ũ| ) + O(u^2)

The solution x̃ computed via L̃ y = b and Ũ x = y satisfies

    (A + E) x̃ = b    with    |E| ≤ n u ( 3 |A| + 5 |L̃| |Ũ| ) + O(u^2)

Backward error estimate.

Since |L̃| and |Ũ| are not known in advance, they can be large.

What if partial pivoting is used?

Permutations introduce no errors. Equivalent to the standard LU factorization applied to the matrix P A.

|L̃| is small since |lij| ≤ 1. Therefore, only Ũ is uncertain.

In practice partial pivoting is stable, i.e., it is highly unlikely to have a very large Ũ.
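An illustrative experiment (ours, not from the notes): with a tiny pivot, the unpivoted factors blow up and L*U no longer reproduces A, while partial pivoting keeps the backward error at the machine-precision level:

A = [1e-20 1; 1 1] ;
l21 = A(2,1)/A(1,1) ;                            % multiplier ~ 1e20 without pivoting
L0 = [1 0; l21 1] ;
U0 = [A(1,1) A(1,2); 0 A(2,2) - l21*A(1,2)] ;    % the "+1" in position (2,2) is lost to rounding
norm(A - L0*U0)                                  % ~ 1 : large backward error
[L, U, P] = lu(A) ;                              % partial pivoting: multipliers bounded by 1
norm(P*A - L*U)                                  % ~ machine precision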

