
Solving Large Dense Linear Systems

Numerical Algorithms

Samir Moustafa

Faculty of Computer Science


University of Vienna

March 24, 2022


Motivation

Assume that we need to simulate the AND operator using Ax = b, with

A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}

Find x such that

H(Ax) = b,

where H is the step function H(a) := 1_{a > 0.5}, applied elementwise.

Then one solution for x is x = \begin{pmatrix} 0.33̄ & 0.33̄ \end{pmatrix}^T, since

H\left( \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0.33̄ \\ 0.33̄ \end{pmatrix} \right) = H\left( \begin{pmatrix} 0 \\ 0.33̄ \\ 0.33̄ \\ 0.66̄ \end{pmatrix} \right) = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} = b
Revision

ϵ is the "machine epsilon" means the difference between 1 and


the next representable number.
Assume that
x >>> y
such that
y
0< < ϵ/2
x
Then
y
x + y = x(1 + )=x
x
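A minimal Python/NumPy sketch of this effect (values chosen for illustration only):

```python
import numpy as np

eps = np.finfo(np.float64).eps   # machine epsilon for double precision, about 2.22e-16
x = 1.0
y = eps / 4                      # so that 0 < y/x < eps/2

# y is too small relative to x to change the stored result:
print(x + y == x)                # True: x + y rounds back to x
```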



Problem Setting

We focus on the standard algorithm for numerically solving large dense linear systems of equations

Ax = b

with given dense A ∈ R^{n×n}, b ∈ R^n and unknown x ∈ R^n.



Gaussian Elimination and LU Factorization

What you should know already:

A = LU
Ax = b can be written as LUx = b
Ly = b can be solved by forward substitution
Ux = y can be solved by back substitution



Gaussian Elimination and LU Factorization

y = intermediate solution: Ly = b
Gaussian elimination and LU factorization express the same solution process
The LU formulation makes it clearer that the factorization phase need not be repeated when solving additional systems having different right-hand-side vectors but the same matrix A
L and U factors can be reused!
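As an illustration, the factors can be reused with SciPy's LU routines (a sketch, using the example matrix from the later slides):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1., 2., 2.],
              [4., 4., 2.],
              [4., 6., 4.]])

lu, piv = lu_factor(A)          # factorization phase: done once

b1 = np.array([3., 6., 10.])
b2 = np.array([1., 0., 0.])

x1 = lu_solve((lu, piv), b1)    # only forward/back substitution per right-hand side
x2 = lu_solve((lu, piv), b2)
```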



Gaussian Elimination and LU Factorization

LU Factorization Algorithm:
1: for k = 1 to n − 1 do
2: if akk = 0 then
3: STOP
4: end if
5: for i = k + 1 to n do
6: mik = aik /akk
7: end for
8: for j = k + 1 to n do
9: for i = k + 1 to n do
10: aij = aij − mik akj
11: end for
12: end for
13: end for
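A minimal NumPy sketch of this algorithm (illustrative only; works on a copy of A and returns the factors explicitly):

```python
import numpy as np

def lu_factorize(A):
    """LU factorization without pivoting; returns (L, U), stops on a zero pivot."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        if A[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot encountered")
        L[k+1:, k] = A[k+1:, k] / A[k, k]                  # multipliers m_ik = a_ik / a_kk
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], A[k, k+1:])  # a_ij = a_ij - m_ik * a_kj
        A[k+1:, k] = 0.0
    return L, np.triu(A)
```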



Partial Pivoting

Obvious problem:
If the leading diagonal entry of the remaining unreduced portion of the matrix is zero at any stage, computing the multipliers m_ik requires division by that zero diagonal entry.

Solution:
If the entry is 0 at stage k → interchange row k with some subsequent row whose entry in column k is nonzero
Interchange the rows in both the matrix and the right-hand side
This does not alter the solution

Interchanging rows in this manner is called (partial) pivoting



Partial Pivoting - Example

Note: The need for pivoting has nothing to do with singularity!

1. A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} is nonsingular, but a row interchange is needed for computing the LU factorization.

2. A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} is singular, but has the LU factorization

   A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix} = LU



Pivoting

Unclear: What if there is no nonzero entry on or below the diagonal in column k?
Nothing to do at this stage
This step leaves 0 on the diagonal, so the upper triangular matrix U is singular
LU factorization can still be completed, but the back-substitution process will fail
The original matrix must have been singular anyway



Pivoting with ϵ

In computations, not only an exact 0 causes problems: small pivots do, too. Example:

Consider A = \begin{pmatrix} ϵ & 1 \\ 1 & 1 \end{pmatrix} where ϵ is positive, but smaller than machine epsilon. Without row interchanges:

M = \begin{pmatrix} 1 & 0 \\ -1/ϵ & 1 \end{pmatrix}

L = \begin{pmatrix} 1 & 0 \\ 1/ϵ & 1 \end{pmatrix}, and U = \begin{pmatrix} ϵ & 1 \\ 0 & 1 - 1/ϵ \end{pmatrix} = \begin{pmatrix} ϵ & 1 \\ 0 & -1/ϵ \end{pmatrix} in floating-point arithmetic.

But then LU = \begin{pmatrix} 1 & 0 \\ 1/ϵ & 1 \end{pmatrix} \begin{pmatrix} ϵ & 1 \\ 0 & -1/ϵ \end{pmatrix} = \begin{pmatrix} ϵ & 1 \\ 1 & 0 \end{pmatrix} ≠ A
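A small NumPy sketch of this effect (ϵ chosen below machine epsilon, purely for illustration):

```python
import numpy as np

eps = 1e-20                                  # positive, but below machine epsilon (~2.2e-16)
A = np.array([[eps, 1.0],
              [1.0, 1.0]])

# Factors produced without pivoting, as computed in floating point:
L = np.array([[1.0,       0.0],
              [1.0 / eps, 1.0]])
U = np.array([[eps,  1.0],
              [0.0,  1.0 - 1.0 / eps]])      # 1 - 1/eps rounds to -1/eps

print(L @ U)                                 # [[1e-20, 1], [1, 0]] -- the (2,2) entry of A is lost
```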



Pivoting with ϵ

     
With row interchanges²:

P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad M = \begin{pmatrix} 1 & 0 \\ -ϵ & 1 \end{pmatrix}, \quad L = \begin{pmatrix} 1 & 0 \\ ϵ & 1 \end{pmatrix}

and U = \begin{pmatrix} 1 & 1 \\ 0 & 1 - ϵ \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} in floating-point arithmetic.

Therefore, LU = \begin{pmatrix} 1 & 0 \\ ϵ & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ ϵ & 1 \end{pmatrix}, which is the correct result in floating-point arithmetic (after the row interchange!).

² The inverse of a permutation matrix P is its transpose: P^{−1} = P^T.
Pivoting

Need to limit the magnitudes of the multipliers

Need to avoid that previous rounding errors are amplified when the remaining portion of the matrix and the right-hand side are multiplied by each elementary elimination matrix
The multipliers will never exceed 1 in magnitude if, for each column, we choose the entry of largest magnitude on or below the diagonal as pivot
This is called partial pivoting
It is essential in practice for stability



Pivoting

Larger pivots produce smaller multipliers → smaller errors
If the largest entry on or below the diagonal in each column is used, the multipliers are bounded in magnitude by 1



LU Factorization with Partial Pivoting
1: for k = 1 to n − 1 do
2: Find index p such that
3: |apk | ≥ |ajk | for k ≤ j ≤ n
4: if p ̸= k then
5: interchange rows k and p
6: end if
7: if akk = 0 then
8: continue with next k
9: end if
10: for i = k + 1 to n do
11: mik = aik /akk
12: end for
13: for j = k + 1 to n do
14: for i = k + 1 to n do
15: aij = aij − mik akj
16: end for
17: end for
18: end for
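A NumPy sketch of this algorithm (illustrative only; returns P, L, U with P A = LU):

```python
import numpy as np

def lu_partial_pivoting(A):
    """LU factorization with partial pivoting (sketch): returns P, L, U with P A = L U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.zeros((n, n))
    perm = np.arange(n)                        # tracks the row permutation P
    for k in range(n - 1):
        # pivot: entry of largest magnitude on or below the diagonal in column k
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p], :] = A[[p, k], :]
            L[[k, p], :k] = L[[p, k], :k]
            perm[[k, p]] = perm[[p, k]]
        if A[k, k] == 0.0:
            continue                           # whole column is zero: nothing to eliminate
        L[k+1:, k] = A[k+1:, k] / A[k, k]      # multipliers, bounded by 1 in magnitude
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], A[k, k+1:])
        A[k+1:, k] = 0.0
    L += np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[perm]
    return P, L, U
```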



Pivoting

Row interchanges complicate LU factorization


Each elementary elimination matrix Mk is preceded by a
permutation matrix Pk that interchanges rows
Brings the entry of the largest magnitude on or below the
diagonal in column k into the diagonal pivot position



Pivoting

Still yields MA = U, but now

M = M_{n−1} P_{n−1} M_{n−2} P_{n−2} ⋯ M_1 P_1

Note: L := M^{−1} is now not necessarily lower triangular



Pivoting

Note that the permutation matrix P = P_{n−1} ⋯ P_1
Permutes the rows of A into the order determined by partial pivoting
Determines a row ordering for the system in which no interchanges would be required for numerical stability
Such an ordering cannot be determined in advance!
Thus,

P A = LU, where L is really lower triangular

Solve the linear system (sketched below) by
Ly = P b via forward-substitution
U x = y via back-substitution
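A minimal sketch of the two substitution steps (assumes P, L, U from a factorization P A = LU; illustrative code):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for lower-triangular L."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve U x = y for upper-triangular U."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Given P A = L U, solve A x = b by:
#   y = forward_substitution(L, P @ b)
#   x = back_substitution(U, y)
```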



Example

    
Ax = \begin{pmatrix} 1 & 2 & 2 \\ 4 & 4 & 2 \\ 4 & 6 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 3 \\ 6 \\ 10 \end{pmatrix} = b

P_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}

P_1 Ax = \begin{pmatrix} 4 & 4 & 2 \\ 1 & 2 & 2 \\ 4 & 6 & 4 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 3 \\ 10 \end{pmatrix} = P_1 b

M_1 = \begin{pmatrix} 1 & 0 & 0 \\ -0.25 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}



Example

    
M_1 P_1 Ax = \begin{pmatrix} 4 & 4 & 2 \\ 0 & 1 & 1.5 \\ 0 & 2 & 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 1.5 \\ 4 \end{pmatrix} = M_1 P_1 b

P_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}

P_2 M_1 P_1 Ax = \begin{pmatrix} 4 & 4 & 2 \\ 0 & 2 & 2 \\ 0 & 1 & 1.5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 4 \\ 1.5 \end{pmatrix} = P_2 M_1 P_1 b



Example
 
M_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -0.5 & 1 \end{pmatrix}

M_2 P_2 M_1 P_1 Ax = \begin{pmatrix} 4 & 4 & 2 \\ 0 & 2 & 2 \\ 0 & 0 & 0.5 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 6 \\ 4 \\ -0.5 \end{pmatrix} = M_2 P_2 M_1 P_1 b

x = \begin{pmatrix} -1 & 3 & -1 \end{pmatrix}^T

L = M^{−1} = (M_2 P_2 M_1 P_1)^{−1} = P_1^T L_1 P_2^T L_2 ³

= \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0.25 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0.5 & 1 \end{pmatrix} = \begin{pmatrix} 0.25 & 0.5 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix}

³ The inverse of an elimination matrix M_k is obtained by negating the sign of its below-diagonal multiplier entries.
Example

    
A = \begin{pmatrix} 1 & 2 & 2 \\ 4 & 4 & 2 \\ 4 & 6 & 4 \end{pmatrix} = \begin{pmatrix} 0.25 & 0.5 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 0 \end{pmatrix} \begin{pmatrix} 4 & 4 & 2 \\ 0 & 2 & 2 \\ 0 & 0 & 0.5 \end{pmatrix} = LU

P = P_2 P_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}

L = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0.25 & 0.5 & 1 \end{pmatrix}

P A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 & 2 \\ 4 & 4 & 2 \\ 4 & 6 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0.25 & 0.5 & 1 \end{pmatrix} \begin{pmatrix} 4 & 4 & 2 \\ 0 & 2 & 2 \\ 0 & 0 & 0.5 \end{pmatrix} = LU
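A quick NumPy verification of this example (sketch):

```python
import numpy as np

A = np.array([[1., 2., 2.],
              [4., 4., 2.],
              [4., 6., 4.]])
b = np.array([3., 6., 10.])

P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
L = np.array([[1.,   0.,  0.],
              [1.,   1.,  0.],
              [0.25, 0.5, 1.]])
U = np.array([[4., 4., 2. ],
              [0., 2., 2. ],
              [0., 0., 0.5]])

print(np.allclose(P @ A, L @ U))   # True: P A = L U
print(np.linalg.solve(A, b))       # [-1.  3. -1.]
```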



Complete Pivoting

It is called “partial” because only the current column is searched


for a suitable pivot
More than adequate in practice
More exhaustive strategy: Complete pivoting
Search entire remaining unreduced submatrix for largest entry
Requires interchanging rows and columns
Leads to a factorization P AQ = LU
L: unit lower triangular
U : upper triangular
P and Q: permutation matrices that reorder the rows and
columns, respectively, of A



Complete Pivoting

Resulting solution steps:


1 Solve Ly = P b via forward-substitution
2 Solve U z = y via back-substitution
3 Permute components of solution: x = Qz
Numerical stability of complete pivoting is theoretically superior
But it is much more expensive



Pivoting

Pivot selection depends on the magnitudes of individual matrix entries
Consequently, it depends on the scaling of the matrix
Diagonal scaling may result in a different sequence of pivots
E.g., any nonzero entry in a given column can be made the entry of largest magnitude by heavy weighting
Badly skewed scaling can result in an ill-conditioned system → inaccurate solution



Pivoting

A well-formulated problem should have


Appropriately commensurate units for measuring the unknown
variables (column scaling)
Weighting of the individual equations that properly reflects their
relative importance (row scaling)
Should also account for the relative accuracy of the input data



Benefits of Pivoting
Recall the residual of a computed solution x̂:

r = b − Ax̂

We can bound the relative residual of a computed solution as follows:

∥r∥ / (∥A∥ · ∥x̂∥) ≤ ∥E∥ / ∥A∥,    where E is the backward error in A

How large is ∥E∥ likely to be in practice?

For LU factorization by Gaussian elimination, ∥E∥ can be bounded as [Wilkinson 1961]

∥E∥ / ∥A∥ ≤ ρ n ϵ_machine,

where the growth factor ρ is basically the ratio of the largest entry of U to the largest entry of A in magnitude.
Benefits of Pivoting
∥E∥ / ∥A∥ ≤ ρ n ϵ_machine

Without pivoting, ρ can be arbitrarily large
⇒ Gaussian elimination without pivoting is unstable!

With partial pivoting, ρ can theoretically still be as large as 2^(n−1)
(since in the worst case the size of the entries can double at each stage of the elimination)
BUT: Such behaviour is extremely rare in practice!
In practice: with partial pivoting, there is little or no growth in the size of the entries, so that

∥E∥ / ∥A∥ ≲ n ϵ_machine
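A rough numerical check of this bound (a sketch; the random test matrix and the use of Frobenius norms are illustrative choices, not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n))    # random dense system, assumed well-behaved
x_true = rng.standard_normal(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)      # LAPACK LU with partial pivoting

r = b - A @ x_hat
rel_residual = np.linalg.norm(r) / (np.linalg.norm(A) * np.linalg.norm(x_hat))
print(rel_residual)                # typically far below the bound n * eps_machine
print(n * np.finfo(float).eps)
```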



Benefits of Pivoting

This relation means: Solving a linear system by Gaussian elimination with partial pivoting, followed by forward and back substitution, almost always yields a very small relative residual
Regardless of how ill-conditioned the system may be
Remember that a small relative residual does not necessarily indicate that a computed solution is accurate unless the system is well-conditioned!
Complete pivoting yields an even smaller growth factor (in theory and in practice)
BUT: The additional margin of stability is usually not worth the extra expense

