Direct Methods

Solution of linear system of equations

• Circuit analysis (mesh and node equations)
• Numerical solution of differential equations (Finite Difference Method)
• Numerical solution of integral equations (Finite Element Method, Method of Moments)
\[
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1\\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2\\
&\;\;\vdots\\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}
\qquad\Longleftrightarrow\qquad
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}
\]
Consistency (Solvability)
• The linear system of equations Ax = b has a solution (is said to be consistent) if and only if
  Rank{A} = Rank{A|b}
• A system is inconsistent when Rank{A} < Rank{A|b}

Rank{A} is the maximum number of linearly independent columns or rows of A. The rank can be found by using ERO (elementary row operations) or ECO (elementary column operations):
• ERO → number of rows with at least one nonzero entry
• ECO → number of columns with at least one nonzero entry
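The rank test can also be carried out numerically. The following is a minimal NumPy sketch (not part of the original slides); `is_consistent` is an illustrative helper name:

```python
import numpy as np

def is_consistent(A, b):
    """Return True when Rank{A} == Rank{A|b}, i.e. Ax = b has a solution."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack((A, b)))

# The inconsistent example from a later slide: Rank{A} = 1 but Rank{A|b} = 2.
print(is_consistent([[1, 2], [2, 4]], [4, 5]))   # False
print(is_consistent([[1, 2], [2, 4]], [4, 8]))   # True (rank deficient but consistent)
```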
Elementary row operations
• The following operations, applied to the augmented matrix [A|b], yield an equivalent linear system:
  • Interchanges: the order of two rows can be changed
  • Scaling: a row can be multiplied by a nonzero constant
  • Replacement: a row can be replaced by the sum of that row and a nonzero multiple of any other row
An inconsistent example
1 2  x1  4
 2 4  x   5 
  2   
ERO:Multiply the first row with
-2 and add to the second row
1 2 
0 0  Rank{A}=1 Then this
 
system of
equations is
1 2 4  not solvable
0 0  3 Rank{A|b}=2
 

4
Uniqueness of solutions
• The system has a unique solution if and only if
  Rank{A} = Rank{A|b} = n
  where n is the order of the system.
• Such systems are called full-rank systems.
Full-rank systems
• If Rank{A} = n:
  Det{A} ≠ 0, so A is nonsingular and therefore invertible → unique solution.
\[
\begin{bmatrix} 1 & 2\\ 1 & -1 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 4\\ 2 \end{bmatrix}
\]
Rank deficient matrices
• If Rank{A} = m < n:
  Det{A} = 0, so A is singular and not invertible → infinitely many solutions (n − m free variables); the system is under-determined.
\[
\begin{bmatrix} 1 & 2\\ 2 & 4 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 4\\ 8 \end{bmatrix}
\]
Rank{A} = Rank{A|b} = 1, so the system is consistent and solvable.
Ill-conditioned system of equations
• A small deviation in the entries of the A matrix causes a large deviation in the solution.
\[
\begin{bmatrix} 1 & 2\\ 0.48 & 0.99 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 3\\ 1.47 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1\\ 1 \end{bmatrix}
\]
\[
\begin{bmatrix} 1 & 2\\ 0.49 & 0.99 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 3\\ 1.47 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1\\ x_2 \end{bmatrix}
=
\begin{bmatrix} 3\\ 0 \end{bmatrix}
\]
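As an illustration (not from the original slides), a short NumPy sketch that reproduces the two systems above and shows how strongly the solution reacts to a 0.01 change in a single coefficient:

```python
import numpy as np

A1 = np.array([[1.00, 2.00], [0.48, 0.99]])
A2 = np.array([[1.00, 2.00], [0.49, 0.99]])   # one entry changed by 0.01
b = np.array([3.00, 1.47])

print(np.linalg.solve(A1, b))   # approximately [1. 1.]
print(np.linalg.solve(A2, b))   # approximately [3. 0.]
print(np.linalg.cond(A1))       # a large condition number signals ill-conditioning
```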
Ill-conditioned continued.....
• A linear system of equations is said to be "ill-conditioned" if the coefficient matrix is close to singular.
Types of linear systems of equations to be studied in this course

• Coefficient matrix A is square and real
• The RHS vector b is nonzero and real
• Consistent system (solvable)
• Full-rank system (unique solution)
• Well-conditioned system
Solution Techniques
• Direct solution methods
  • Find a solution in a finite number of operations by transforming the system into an equivalent system that is 'easier' to solve.
  • Diagonal, upper triangular, or lower triangular systems are easier to solve.
  • The number of operations is a function of the system size n.
• Iterative solution methods
  • Compute successive approximations of the solution vector for a given A and b, starting from an initial point x_0.
  • The total number of operations is uncertain, and the iteration may not converge.
Direct solution Methods
• Gaussian elimination
  • Using EROs, matrix A is transformed into an upper triangular matrix (all elements below the diagonal are zero).
  • Back substitution is used to solve the upper triangular system.
\[
\begin{bmatrix}
a_{11} & \cdots & a_{1i} & \cdots & a_{1n}\\
\vdots & & \vdots & & \vdots\\
a_{i1} & \cdots & a_{ii} & \cdots & a_{in}\\
\vdots & & \vdots & & \vdots\\
a_{n1} & \cdots & a_{ni} & \cdots & a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ \vdots\\ x_i\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ \vdots\\ b_i\\ \vdots\\ b_n \end{bmatrix}
\;\xrightarrow{\text{ERO}}\;
\begin{bmatrix}
a_{11} & \cdots & a_{1i} & \cdots & a_{1n}\\
& \ddots & \vdots & & \vdots\\
0 & \cdots & \tilde a_{ii} & \cdots & \tilde a_{in}\\
& & & \ddots & \vdots\\
0 & \cdots & 0 & \cdots & \tilde a_{nn}
\end{bmatrix}
\begin{bmatrix} x_1\\ \vdots\\ x_i\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1\\ \vdots\\ \tilde b_i\\ \vdots\\ \tilde b_n \end{bmatrix}
\]
The resulting upper triangular system is then solved by back substitution.
First step of elimination
Pivotal element: a_{11}^{(1)}
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
a_{21}^{(1)} & a_{22}^{(1)} & a_{23}^{(1)} & \cdots & a_{2n}^{(1)}\\
a_{31}^{(1)} & a_{32}^{(1)} & a_{33}^{(1)} & \cdots & a_{3n}^{(1)}\\
\vdots & \vdots & \vdots & & \vdots\\
a_{n1}^{(1)} & a_{n2}^{(1)} & a_{n3}^{(1)} & \cdots & a_{nn}^{(1)}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1^{(1)}\\ b_2^{(1)}\\ b_3^{(1)}\\ \vdots\\ b_n^{(1)} \end{bmatrix}
\]
Multipliers: $m_{2,1}=a_{21}^{(1)}/a_{11}^{(1)},\; m_{3,1}=a_{31}^{(1)}/a_{11}^{(1)},\;\ldots,\; m_{n,1}=a_{n1}^{(1)}/a_{11}^{(1)}$
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & a_{32}^{(2)} & a_{33}^{(2)} & \cdots & a_{3n}^{(2)}\\
\vdots & \vdots & \vdots & & \vdots\\
0 & a_{n2}^{(2)} & a_{n3}^{(2)} & \cdots & a_{nn}^{(2)}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1^{(1)}\\ b_2^{(2)}\\ b_3^{(2)}\\ \vdots\\ b_n^{(2)} \end{bmatrix}
\]
Second step of elimination
Pivotal element: a_{22}^{(2)}
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & a_{32}^{(2)} & a_{33}^{(2)} & \cdots & a_{3n}^{(2)}\\
\vdots & \vdots & \vdots & & \vdots\\
0 & a_{n2}^{(2)} & a_{n3}^{(2)} & \cdots & a_{nn}^{(2)}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1^{(1)}\\ b_2^{(2)}\\ b_3^{(2)}\\ \vdots\\ b_n^{(2)} \end{bmatrix}
\]
Multipliers: $m_{3,2}=a_{32}^{(2)}/a_{22}^{(2)},\;\ldots,\; m_{n,2}=a_{n2}^{(2)}/a_{22}^{(2)}$
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3n}^{(3)}\\
\vdots & \vdots & \vdots & & \vdots\\
0 & 0 & a_{n3}^{(3)} & \cdots & a_{nn}^{(3)}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1^{(1)}\\ b_2^{(2)}\\ b_3^{(3)}\\ \vdots\\ b_n^{(3)} \end{bmatrix}
\]
Gaussian elimination algorithm
Define the number of steps as p (pivotal row):
For p = 1 to n-1
    For r = p+1 to n
        m_{r,p} = a_rp^(p) / a_pp^(p)
        a_rp^(p+1) = 0
        b_r^(p+1) = b_r^(p) - m_{r,p} * b_p^(p)
        For c = p+1 to n
            a_rc^(p+1) = a_rc^(p) - m_{r,p} * a_pc^(p)
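A direct transcription of this elimination loop into Python (a sketch, not code from the slides; it assumes no pivoting is needed, i.e. no zero pivots are encountered):

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Reduce Ax = b to an upper triangular system U x = c (no pivoting)."""
    U = np.array(A, dtype=float)
    c = np.array(b, dtype=float)
    n = len(c)
    for p in range(n - 1):            # pivotal row
        for r in range(p + 1, n):     # rows below the pivot
            m = U[r, p] / U[p, p]     # multiplier m_{r,p}
            U[r, p] = 0.0
            U[r, p + 1:] -= m * U[p, p + 1:]
            c[r] -= m * c[p]
    return U, c
```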
Back substitution algorithm
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3n}^{(3)}\\
\vdots & & & \ddots & \vdots\\
0 & 0 & 0 & a_{n-1,n-1}^{(n-1)} & a_{n-1,n}^{(n-1)}\\
0 & 0 & 0 & 0 & a_{nn}^{(n)}
\end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3\\ \vdots\\ x_{n-1}\\ x_n \end{bmatrix}
=
\begin{bmatrix} b_1^{(1)}\\ b_2^{(2)}\\ b_3^{(3)}\\ \vdots\\ b_{n-1}^{(n-1)}\\ b_n^{(n)} \end{bmatrix}
\]
\[
x_n = \frac{b_n^{(n)}}{a_{nn}^{(n)}},\qquad
x_{n-1} = \frac{1}{a_{n-1,n-1}^{(n-1)}}\left(b_{n-1}^{(n-1)} - a_{n-1,n}^{(n-1)}\,x_n\right)
\]
\[
x_i = \frac{1}{a_{ii}^{(i)}}\left(b_i^{(i)} - \sum_{k=i+1}^{n} a_{ik}^{(i)}\,x_k\right),
\qquad i = n-1, n-2, \ldots, 1
\]
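A matching back-substitution sketch in Python (again assuming a nonsingular upper triangular U; the function names are illustrative):

```python
import numpy as np

def back_substitution(U, c):
    """Solve the upper triangular system U x = c by back substitution."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (c_i - sum_{k>i} U_ik * x_k) / U_ii
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Usage with the elimination sketch above:
# U, c = gaussian_eliminate(A, b); x = back_substitution(U, c)
```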
Operation count
• Number of arithmetic operations required by the algorithm to complete its task.
• Generally only multiplications and divisions are counted.
• Elimination process: n³/3 + n²/2 − 5n/6 (the n³/3 term dominates; since the whole elimination must be repeated for each new right-hand side, plain Gaussian elimination is not efficient for different RHS vectors).
• Back substitution: n²/2 + n/2
• Total: n³/3 + n² − n/3
LU Decomposition
A = LU
Ax = b  →  LUx = b
Define Ux = y
Ly = b: solve for y by forward substitution
• EROs must be performed on b as well as on A.
• The information about the EROs is stored in L.
• Indeed, y is obtained by applying the EROs to the b vector.
Ux = y: solve for x by backward substitution
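For completeness, a forward-substitution sketch in Python (an illustration, not code from the slides); together with the back substitution above it solves Ax = b once L and U are available:

```python
import numpy as np

def forward_substitution(L, b):
    """Solve the lower triangular system L y = b by forward substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

# Solving Ax = b once A = LU is known (function names are assumptions):
# y = forward_substitution(L, b); x = back_substitution(U, y)
```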
LU Decomposition by Gaussian elimination
There are infinitely many ways to decompose A. The most popular one:
U = the Gaussian-eliminated (upper triangular) matrix
L = the multipliers used for the elimination
\[
A =
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0\\
m_{2,1} & 1 & 0 & \cdots & 0 & 0\\
m_{3,1} & m_{3,2} & 1 & \cdots & 0 & 0\\
\vdots & \vdots & & \ddots & & \vdots\\
m_{n-1,1} & m_{n-1,2} & m_{n-1,3} & \cdots & 1 & 0\\
m_{n,1} & m_{n,2} & m_{n,3} & \cdots & m_{n,n-1} & 1
\end{bmatrix}
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3n}^{(3)}\\
\vdots & & & \ddots & \vdots\\
0 & 0 & 0 & a_{n-1,n-1}^{(n-1)} & a_{n-1,n}^{(n-1)}\\
0 & 0 & 0 & 0 & a_{nn}^{(n)}
\end{bmatrix}
\]
• Compact storage: the diagonal entries of the L matrix are all 1's, so they do not need to be stored; L and U can be stored together in a single matrix.
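A compact Python sketch of this Doolittle-style decomposition (no pivoting; assumes all pivots are nonzero; `lu_decompose` is an illustrative name):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition by Gaussian elimination, no pivoting.

    Returns L (unit lower triangular) and U (upper triangular) with A = L @ U.
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for p in range(n - 1):
        for r in range(p + 1, n):
            m = U[r, p] / U[p, p]
            L[r, p] = m                       # store the multiplier in L
            U[r, p] = 0.0
            U[r, p + 1:] -= m * U[p, p + 1:]  # eliminate below the pivot
    return L, U
```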
Operation count
• A = LU decomposition: n³/3 − n/3 (done only once)
• Ly = b forward substitution: n²/2 − n/2
• Ux = y backward substitution: n²/2 + n/2
• Total: n³/3 + n² − n/3
• For different RHS vectors the system can therefore be solved efficiently: the expensive decomposition is reused and only the substitutions are repeated.
Pivoting
• Computers use finite-precision arithmetic.
• A small error is introduced in each arithmetic operation, and the error propagates.
• When the pivotal element is very small, the multipliers will be large.
• Adding numbers of widely differing magnitude can lead to loss of significance.
• To reduce error, row interchanges are made to maximise the magnitude of the pivotal element.
Example: Without Pivoting
1.133 5.281   x1  6.414
4-digit arithmetic 24.14  1.210  x    22.93
  2   

24.14 1.133 5.281   x1   6.414 


m21 
1.133
 21.31 0.000  113.7   x    113.8
  2   

 x1  0.9956
 x    1.001  Loss of significance
 2  
22
Example: With Pivoting
24.14  1.210  x1   22.93
1.133 5.281   x   6.414
  2   

1.133 24.14  1.210  x1  22.93


m21 
24.14
 0.04693 0.000 5.338   x   5.338
  2   

 x1  1.000
 x   1.000
 2  
23
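The 4-digit decimal arithmetic of the slides is hard to reproduce directly, but an analogous (hypothetical) float32 demonstration shows the same effect: a tiny pivot produces a huge multiplier and destroys the accuracy of x1 unless the rows are interchanged first:

```python
import numpy as np

A = np.array([[1e-8, 1.0], [1.0, 1.0]], dtype=np.float32)
b = np.array([1.0, 2.0], dtype=np.float32)

# Elimination without pivoting: m21 = 1 / 1e-8 is enormous.
m21 = A[1, 0] / A[0, 0]
u22 = A[1, 1] - m21 * A[0, 1]
c2 = b[1] - m21 * b[0]
x2 = c2 / u22
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
print(x1, x2)   # x1 is badly wrong; the true solution is close to [1, 1]

# Reference solution in double precision.
print(np.linalg.solve(A.astype(np.float64), b.astype(np.float64)))
```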
Pivoting procedures
At step i of the elimination the first i−1 rows form the eliminated part; row i is the pivotal row and column i is the pivotal column:
\[
\begin{bmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1i}^{(1)} & \cdots & a_{1j}^{(1)} & \cdots & a_{1n}^{(1)}\\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2i}^{(2)} & \cdots & a_{2j}^{(2)} & \cdots & a_{2n}^{(2)}\\
0 & 0 & a_{33}^{(3)} & \cdots & a_{3i}^{(3)} & \cdots & a_{3j}^{(3)} & \cdots & a_{3n}^{(3)}\\
\vdots & & & \ddots & \vdots & & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & a_{ii}^{(i)} & \cdots & a_{ij}^{(i)} & \cdots & a_{in}^{(i)}\\
\vdots & & & & \vdots & & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & a_{ji}^{(i)} & \cdots & a_{jj}^{(i)} & \cdots & a_{jn}^{(i)}\\
\vdots & & & & \vdots & & \vdots & & \vdots\\
0 & 0 & 0 & \cdots & a_{ni}^{(i)} & \cdots & a_{nj}^{(i)} & \cdots & a_{nn}^{(i)}
\end{bmatrix}
\]
Row pivoting
• The most commonly used partial pivoting procedure:
  • Search the pivotal column (from the pivotal row downwards)
  • Find the largest element in magnitude
  • Switch that row with the pivotal row
Row pivoting
In the partially eliminated matrix shown on the previous slide, the entry of largest magnitude in the pivotal column (rows i through n) is located, and its row is interchanged with the pivotal row i.
Column pivoting
Here the pivotal row (columns i through n) is searched: the entry of largest magnitude is located, and its column is interchanged with the pivotal column i.
Complete pivoting
In complete pivoting the entire uneliminated submatrix (rows and columns i through n) is searched for the entry of largest magnitude, and both its row and its column are interchanged with the pivotal row and column.
Row Pivoting in LU Decomposition
• When two rows of A are interchanged, the corresponding rows of b should also be interchanged.
• Use a pivot vector. The initial pivot vector contains the integers 1 to n.
• When two rows (i and j) of A are interchanged, apply the same interchange to the pivot vector:
\[
p = \begin{bmatrix} 1\\ 2\\ 3\\ \vdots\\ i\\ \vdots\\ j\\ \vdots\\ n \end{bmatrix}
\;\longrightarrow\;
\begin{bmatrix} 1\\ 2\\ 3\\ \vdots\\ j\\ \vdots\\ i\\ \vdots\\ n \end{bmatrix}
\]
Modifying the b vector
• When the LU decomposition of A is done, the pivot vector gives the order of the rows after the interchanges.
• Before applying forward substitution to solve Ly = b, reorder the entries of the b vector according to the pivot vector:
\[
p = \begin{bmatrix} 1\\ 3\\ 2\\ 4\\ 8\\ 6\\ 7\\ 5\\ 9 \end{bmatrix},\qquad
b = \begin{bmatrix} 7.3\\ 8.6\\ 1.2\\ 4.8\\ 9.6\\ 5.2\\ -2.7\\ 3.5\\ -6.9 \end{bmatrix}
\;\longrightarrow\;
b' = \begin{bmatrix} 7.3\\ 1.2\\ 8.6\\ 4.8\\ 3.5\\ 5.2\\ -2.7\\ 9.6\\ -6.9 \end{bmatrix}
\]
LU decomposition algorithm with row pivoting
For k = 1 to n-1                          (column to be eliminated)
    p = k
    For r = k+1 to n                      (column search for the maximum entry)
        if |a_rk| > |a_pk| then p = r
    if p > k then                         (interchange the rows)
        For c = 1 to n
            t = a_kc ; a_kc = a_pc ; a_pc = t
    For r = k+1 to n
        m_{r,k} = a_rk^(k) / a_kk^(k)
        a_rk^(k+1) = m_{r,k}              (updating the L matrix)
        For c = k+1 to n
            a_rc^(k+1) = a_rc^(k) - m_{r,k} * a_kc^(k)   (updating the U matrix)
Example
0 3 2  12  1 
A   4  2 1  b   5 p  2
 1 4  2  3  3

Column search: Maximum magnitude second row


Interchange 1st and 2nd rows

 4  2 1   2
A   0 3 2  p  1 
 1 4  2 3

32
Example continued...
 4  2 1   2
A   0 3 2  p  1 
 1 4  2 3
Eliminate a21 and a31 by using a11 as pivotal element
A=LU in compact form (in a single matrix)

 4 2 1   2
A   0 3 2  p  1 
 0.25 3.5  1.75 3
Multipliers (L matrix)
33
Example continued...
 4 2 1   2
A   0 3 2  p  1 
 0.25 3.5  1.75 3

Column search: Maximum magnitude at the third row


Interchange 2nd and 3rd rows

 4 2 1   2
A   0.25 3.5  1.75 p  3
 0 3 2  1 

34
Example continued...
 4 2 1   2
A   0.25 3.5  1.75 p  3
 0 3 2  1 

Eliminate a32 by using a22 as pivotal element

 4 2 1   2
A   0.25 3.5  1.75 p  3
 0 3 / 3.5 3.5  1 

Multipliers (L matrix)
35
Example continued...
 1 0 0   4  2 1   2
A   0.25 1 0  0 3.5  1.75 p  3
 0 3 / 3.5 1  0 0 3.5  1 

 2  12   5
p  3 b   5  b   3 
1   3   12 

A’x=b’ LUx=b’
Ux=y
Ly=b’
36
Example continued...
Ly = b' (forward substitution):
\[
\begin{bmatrix} 1 & 0 & 0\\ 0.25 & 1 & 0\\ 0 & 3/3.5 & 1 \end{bmatrix}
\begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}
=
\begin{bmatrix} 5\\ 3\\ 12 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}
=
\begin{bmatrix} 5\\ 1.75\\ 10.5 \end{bmatrix}
\]
Ux = y (backward substitution):
\[
\begin{bmatrix} 4 & 2 & -1\\ 0 & 3.5 & -1.75\\ 0 & 0 & 3.5 \end{bmatrix}
\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}
=
\begin{bmatrix} 5\\ 1.75\\ 10.5 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}
=
\begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix}
\]
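The worked example can be checked against the earlier sketches (the helpers `lu_row_pivot`, `forward_substitution` and `back_substitution` are the illustrative functions defined above, and the numeric values are taken from the reconstructed example):

```python
import numpy as np

A = np.array([[0.0, 3.0, 2.0], [4.0, 2.0, -1.0], [1.0, 4.0, -2.0]])
b = np.array([12.0, 5.0, 3.0])

LU, piv = lu_row_pivot(A)
L = np.tril(LU, -1) + np.eye(3)   # unit lower triangular part
U = np.triu(LU)                   # upper triangular part
y = forward_substitution(L, b[piv])   # reorder b with the pivot vector first
x = back_substitution(U, y)
print(x)                          # approximately [1. 2. 3.]
```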
Gauss-Jordan elimination
• The elements above the diagonal are made zero at the same time that zeros are created below the diagonal.
\[
\left[\begin{array}{cccc|c}
a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)}\\
a_{21}^{(1)} & a_{22}^{(1)} & \cdots & a_{2n}^{(1)} & b_2^{(1)}\\
\vdots & \vdots & & \vdots & \vdots\\
a_{n1}^{(1)} & a_{n2}^{(1)} & \cdots & a_{nn}^{(1)} & b_n^{(1)}
\end{array}\right]
\rightarrow
\left[\begin{array}{cccc|c}
a_{11}^{(1)} & a_{12}^{(1)} & \cdots & a_{1n}^{(1)} & b_1^{(1)}\\
0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)}\\
\vdots & \vdots & & \vdots & \vdots\\
0 & a_{n2}^{(2)} & \cdots & a_{nn}^{(2)} & b_n^{(2)}
\end{array}\right]
\rightarrow
\left[\begin{array}{cccc|c}
a_{11}^{(1)} & 0 & \cdots & a_{1n}^{(2)} & b_1^{(2)}\\
0 & a_{22}^{(2)} & \cdots & a_{2n}^{(2)} & b_2^{(2)}\\
\vdots & \vdots & & \vdots & \vdots\\
0 & 0 & \cdots & a_{nn}^{(3)} & b_n^{(3)}
\end{array}\right]
\rightarrow \cdots \rightarrow
\left[\begin{array}{cccc|c}
a_{11}^{(1)} & 0 & \cdots & 0 & b_1^{(n-1)}\\
0 & a_{22}^{(2)} & \cdots & 0 & b_2^{(n-1)}\\
\vdots & \vdots & & \vdots & \vdots\\
0 & 0 & \cdots & a_{nn}^{(n)} & b_n^{(n)}
\end{array}\right]
\]
Gauss-Jordan Elimination
• Requires almost 50% more arithmetic operations than Gaussian elimination.
• Gauss-Jordan (GJ) elimination is preferred when the inverse of a matrix is required.
• Apply GJ elimination to the augmented matrix [A | I] to convert A into an identity matrix; the same operations turn the identity into the inverse:
  [A | I]  →  [I | A⁻¹]
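A Python sketch of matrix inversion by Gauss-Jordan elimination on [A | I] (with partial pivoting added; `gauss_jordan_inverse` is an illustrative name, not code from the slides):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix A by Gauss-Jordan elimination on [A | I]."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack((A, np.eye(n)))            # augmented matrix [A | I]
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))  # partial pivoting
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                      # scale the pivot row to get a 1
        for r in range(n):
            if r != k:
                M[r] -= M[r, k] * M[k]       # zero out above and below the pivot
    return M[:, n:]                          # right half is now A^{-1}
```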
Different forms of LU factorization
• Doolittle form (obtained by Gaussian elimination): L is unit lower triangular.
\[
\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix} 1 & 0 & 0\\ l_{21} & 1 & 0\\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13}\\ 0 & u_{22} & u_{23}\\ 0 & 0 & u_{33} \end{bmatrix}
\]
• Crout form: U is unit upper triangular.
\[
\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}
=
\begin{bmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13}\\ 0 & 1 & u_{23}\\ 0 & 0 & 1 \end{bmatrix}
\]
• Cholesky (LDU) form: both L and U are unit triangular, with a diagonal matrix D between them.
\[
A =
\begin{bmatrix} 1 & 0 & 0\\ l_{21} & 1 & 0\\ l_{31} & l_{32} & 1 \end{bmatrix}
\begin{bmatrix} d_{11} & 0 & 0\\ 0 & d_{22} & 0\\ 0 & 0 & d_{33} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13}\\ 0 & 1 & u_{23}\\ 0 & 0 & 1 \end{bmatrix}
\]
Crout form
• The first column of L is computed first: l_{i1} = a_{i1}
• Then the first row of U: u_{1j} = a_{1j} / l_{11}
• The columns of L and the rows of U are computed alternately:
\[
l_{ij} = a_{ij} - \sum_{k=1}^{j-1} l_{ik}\,u_{kj},\qquad j \le i,\quad i = 1, 2, \ldots, n
\]
\[
u_{ij} = \frac{a_{ij} - \sum_{k=1}^{i-1} l_{ik}\,u_{kj}}{l_{ii}},\qquad i < j,\quad j = 2, 3, \ldots, n
\]
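A direct transcription of the Crout formulas into Python (a sketch assuming the pivots l_ii never become zero; `crout_decompose` is an illustrative name):

```python
import numpy as np

def crout_decompose(A):
    """Crout LU factorization: L lower triangular, U unit upper triangular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # column j of L (rows j..n-1)
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # row j of U (columns j+1..n-1)
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U
```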
Crout reduction sequence
\[
\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24}\\ a_{31} & a_{32} & a_{33} & a_{34}\\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix}
=
\begin{bmatrix} l_{11} & 0 & 0 & 0\\ l_{21} & l_{22} & 0 & 0\\ l_{31} & l_{32} & l_{33} & 0\\ l_{41} & l_{42} & l_{43} & l_{44} \end{bmatrix}
\begin{bmatrix} 1 & u_{12} & u_{13} & u_{14}\\ 0 & 1 & u_{23} & u_{24}\\ 0 & 0 & 1 & u_{34}\\ 0 & 0 & 0 & 1 \end{bmatrix}
\]
Reduction sequence: column 1 of L, row 1 of U, column 2 of L, row 2 of U, and so on (the odd-numbered steps 1, 3, 5, 7 compute the columns of L; the even-numbered steps 2, 4, 6 compute the rows of U).
• An entry of the A matrix is used only once, to compute the corresponding entry of the L or U matrix.
• So the columns of L and the rows of U can be stored in the A matrix (in place).
Cholesky form
• A = LDU (the diagonal entries of L and U are all 1).
• If A is symmetric: L = Uᵀ, so A = Uᵀ D U = Uᵀ D^{1/2} D^{1/2} U.
• With U' = D^{1/2} U:  A = U'ᵀ U'.
• This factorization is also called the square-root factorization. Only U' needs to be stored.
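A minimal Python sketch of the square-root (Cholesky) factorization, assuming A is symmetric positive definite (`cholesky_factor` is an illustrative name):

```python
import numpy as np

def cholesky_factor(A):
    """Return upper triangular U' with A = U'.T @ U' for symmetric positive definite A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    U = np.zeros((n, n))
    for i in range(n):
        # diagonal entry: square root of what remains after earlier rows
        U[i, i] = np.sqrt(A[i, i] - U[:i, i] @ U[:i, i])
        for j in range(i + 1, n):
            U[i, j] = (A[i, j] - U[:i, i] @ U[:i, j]) / U[i, i]
    return U

# Check for an SPD matrix A: np.allclose(cholesky_factor(A).T @ cholesky_factor(A), A)
```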
Solution of Complex Linear System of Equations

Cz = w
C = A + jB,  z = x + jy,  w = u + jv
(A + jB)(x + jy) = u + jv
(Ax − By) + j(Bx + Ay) = u + jv
\[
\begin{bmatrix} A & -B\\ B & A \end{bmatrix}
\begin{bmatrix} x\\ y \end{bmatrix}
=
\begin{bmatrix} u\\ v \end{bmatrix}
\]
This is a real linear system of equations (of twice the original size).
Large and Sparse Systems
• When the linear system is large and sparse (has many zero entries), direct methods become inefficient due to fill-in terms.
• Fill-in terms are entries that were zero in A but become nonzero during the elimination.
\[
\begin{bmatrix}
a_{11} & 0 & a_{13} & a_{14} & 0\\
0 & a_{22} & 0 & a_{24} & 0\\
a_{31} & 0 & a_{33} & 0 & a_{35}\\
a_{41} & a_{42} & 0 & a_{44} & 0\\
0 & 0 & a_{53} & 0 & a_{55}
\end{bmatrix}
\;\xrightarrow{\text{Elimination}}\;
\begin{bmatrix}
a_{11} & 0 & a_{13} & a_{14} & 0\\
0 & a_{22} & 0 & a_{24} & 0\\
0 & 0 & a_{33} & a_{34} & a_{35}\\
0 & 0 & 0 & a_{44} & a_{45}\\
0 & 0 & 0 & 0 & a_{55}
\end{bmatrix}
\]
The entries a_{34} and a_{45} of the eliminated matrix are fill-in terms.
Sparse Matrices
• The node equation matrix is a sparse matrix.
• Sparse matrices are stored very efficiently by storing only the nonzero entries.
• When the system is very large (n = 10,000), the fill-in terms considerably increase the storage requirements.
• In such cases, iterative solution methods should be preferred over direct solution methods.
