ACM 2022-23 Unit 3 Simultaneous Linear Equation
Computation
Unit 3
Simultaneous Linear Equations
A set of Linear Algebraic Equations
Linear Algebraic Equations
• Laws of conservation
– Mass, momentum and energy
– Multi-component systems
– Discretized differential equations
• Equilibrium and compatibility equations
• Kirchhoff's laws
Solution methods
• Graphical
• Cramer's rule
• Elimination of unknowns
• Gauss elimination
• Gauss-Jordan
• LU decomposition
• Gauss-Seidel
Graphical method
• Limited to systems of three or fewer equations
Graphical Method
(a) no solution (parallel lines)
(b) infinite solutions (coincident lines)
(c) ill-conditioned system (the intersection point is difficult to detect)
Cramer’s Rule
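The rule can be sketched for a 2×2 system. The determinant formulas are standard; the sample coefficients below are illustrative and not taken from the slides.

```python
# Cramer's rule for a 2x2 system:
#   a11*x1 + a12*x2 = b1
#   a21*x1 + a22*x2 = b2
# Each unknown is a ratio of determinants: x_i = det(A_i) / det(A),
# where A_i is A with column i replaced by the right-hand side b.

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    det = a11 * a22 - a12 * a21          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("singular system: Cramer's rule does not apply")
    x1 = (b1 * a22 - a12 * b2) / det     # column 1 replaced by b
    x2 = (a11 * b2 - b1 * a21) / det     # column 2 replaced by b
    return x1, x2

# Illustrative system: 3*x1 + 2*x2 = 18, -x1 + 2*x2 = 2
x1, x2 = cramer_2x2(3.0, 2.0, -1.0, 2.0, 18.0, 2.0)
print(x1, x2)   # 4.0 3.0
```

Because the cost of evaluating determinants grows rapidly with n, Cramer's rule is practical only for very small systems.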
Gauss Elimination
Example
Multiply Eq (2) by (-0.19/7.00333) and subtract from Eq (3)
• Backward substitution gives x2 = -2.5 and x1 = 3.0
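The full procedure can be sketched in code. The coefficient matrix below is an assumption: it is the standard three-equation textbook system whose multiplier (-0.19/7.00333) and results (x2 = -2.5, x1 = 3.0) match the worked example above.

```python
def gauss_eliminate(A, b):
    """Naive Gauss elimination (no pivoting) followed by backward substitution."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # backward substitution: solve from the last equation upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Assumed example system (coefficients inferred from the multipliers shown)
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_eliminate(A, b)
print(x)   # ≈ [3.0, -2.5, 7.0]
```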
Operation counting
• Forward elimination: ~2n³/3 flops for large n
• Backward substitution: n(n+1)/2 multiplications/divisions and n(n-1)/2 additions/subtractions, i.e. ~n² flops
Pitfall-I
• Pivot element is zero: the elimination factor cannot be formed, so the rows must be reordered
Pitfall-II
• Pivot element is nearly zero
• A tolerance should be specified in the program to detect this condition
• Example: factor = 1/0.0003, i.e. division by a very small number
• x1 = (2.0001 - 3×0.7)/0.0003 = -333
• x1 = (2.0001 - 3×0.67)/0.0003 = -33
• x1 = (2.0001 - 3×0.667)/0.0003 = -3.33
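The effect can be reproduced numerically. The system below is an assumption, reconstructed from the factor 1/0.0003 and the back-substitution formula above; its exact solution is x1 = 1/3, x2 = 2/3.

```python
# Back substitution with a near-zero pivot (0.0003): the subtraction
# 2.0001 - 3*x2 loses leading digits, and dividing by 0.0003 amplifies
# whatever rounding error remains in x2.
# Assumed system:  0.0003*x1 + 3*x2 = 2.0001 ;  x1 + x2 = 1

def back_sub_x1(x2):
    return (2.0001 - 3.0 * x2) / 0.0003

for x2 in (0.7, 0.67, 0.667):      # x2 = 2/3 kept to 1..3 significant figures
    print(f"x2 = {x2:<6} ->  x1 = {back_sub_x1(x2):.4f}")
# Note: in full double precision the last case gives -3.0; the slide's -3.33
# presumably reflects rounding every intermediate result to three
# significant figures, as in the textbook example.

# With partial pivoting (rows swapped so the pivot is 1.0 instead of 0.0003),
# x1 is computed from x1 + x2 = 1 and the error is no longer amplified:
for x2 in (0.7, 0.67, 0.667):
    print(f"x2 = {x2:<6} ->  x1 = {1.0 - x2:.4f}")
```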
Pitfall-III
• Round-off error
– Important when dealing with 100 or more equations, due to error propagation
• Remedy
– Increase the number of significant figures (higher precision)
– This increases the computational effort
Remedy
• Pivoting
– Switching the largest available element into the pivot position
• Complete pivoting
– Both columns and rows are searched for the largest element, and both are switched.
• Partial pivoting
– Only the column below the pivot element is searched for the largest element, and the corresponding row is switched.
• Complete pivoting is rarely used because switching columns changes the order of the x's, adding complexity to the program
• In programs, rows are usually not physically switched
• Instead, a record of the pivot row numbers is kept
• This index defines the order of forward elimination and backward substitution without moving the rows
• Moreover, scaling is required to identify whether pivoting is necessary
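The bookkeeping described above can be sketched with an index vector. This is a minimal sketch of scaled partial pivoting, not code from the slides; the function and variable names are my own.

```python
def forward_elim_tracked(A, b):
    """Forward elimination with scaled partial pivoting, tracking the pivot
    rows in an index vector instead of physically swapping rows."""
    n = len(A)
    order = list(range(n))                           # logical row order
    scale = [max(abs(v) for v in row) for row in A]  # largest |a_ij| in each row
    for k in range(n - 1):
        # choose the pivot by the largest *scaled* element in column k,
        # using the original coefficient values (scaling only guides the choice)
        p = max(range(k, n), key=lambda i: abs(A[order[i]][k]) / scale[order[i]])
        order[k], order[p] = order[p], order[k]      # swap indices, not rows
        for i in range(k + 1, n):
            f = A[order[i]][k] / A[order[k]][k]
            for j in range(k, n):
                A[order[i]][j] -= f * A[order[k]][j]
            b[order[i]] -= f * b[order[k]]
    return order

# Near-zero-pivot system: scaled pivoting promotes row 2 to pivot row
A = [[0.0003, 3.0],
     [1.0,    1.0]]
b = [2.0001, 1.0]
order = forward_elim_tracked(A, b)
print(order)   # [1, 0]: row 2 is eliminated first, row 1 second
```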
Example
• Without scaling & pivoting: elimination gives 49999·x2 = 49998, so x2 = 0.99998 ≈ 1.0, and back substitution gives x1 = 1.00002 → 0.0 (round-off error)
• With scaling & pivoting: elimination gives 0.99998·x2 = 0.99996, so x2 = 0.99998 ≈ 1.0 and x1 = 1.00002 → 1.0
• Without scaling but with pivoting: elimination gives 99998·x2 = 99996, so x2 = 0.99998 ≈ 1.0 and x1 = 1.00002 → 1.0
• Scaling should be done only to identify the need for pivoting; pivoting and Gauss elimination should then be performed with the original coefficient values, to avoid the round-off error introduced by scaling
Pitfall-IV
• Singular systems
– The slopes are the same
– Graphically, the lines are either parallel or superimposed
– No solution or infinitely many solutions
– A computer algorithm should recognize this case
– The determinant is zero:
| a11 a12 |
| a21 a22 | = a11·a22 - a21·a12 = 0
Pitfalls
• Ill-conditioned equations
– The slopes of the equations are nearly the same
Other indicators
• If the product of the computed inverse and the original matrix is not close to the identity matrix => ill-conditioned
• If the inverse of the inverted matrix is not close to the original matrix => ill-conditioned
• Both checks are computationally inefficient
Another indicator
• Ill-conditioned equations:
– A small change in the coefficients results in a large change in the solution
• Condition number:
Cond(A) = ‖A‖ · ‖A⁻¹‖ ≥ 1
‖Δx‖ / ‖x‖ ≤ Cond(A) · ‖ΔA‖ / ‖A‖
A matrix that is not invertible has condition number equal to infinity.
Matrix norm
• The norm of a matrix is a non-negative real number that measures how large its elements are. It expresses the "size" of a matrix in terms of the magnitude of its entries, not its number of rows or columns.
• Frobenius (Euclidean) norm: ‖A‖_F = (Σ_i Σ_j a_ij²)^(1/2) (root sum of squares)
• Infinity norm: ‖A‖_∞ = max_i Σ_j |a_ij| (maximum absolute row sum)
• 1-norm: ‖A‖_1 = max_j Σ_i |a_ij| (maximum absolute column sum)
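The three norms, and the condition number built from one of them, can be sketched directly from these definitions. The 2×2 matrix below is an illustrative assumption, not from the slides.

```python
import math

def frobenius_norm(A):
    """Root sum of squares of all elements."""
    return math.sqrt(sum(v * v for row in A for v in row))

def inf_norm(A):
    """Maximum absolute row sum."""
    return max(sum(abs(v) for v in row) for row in A)

def one_norm(A):
    """Maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

A = [[1.0, 2.0],
     [3.0, 4.0]]
print(frobenius_norm(A))   # sqrt(1+4+9+16) = sqrt(30) ≈ 5.477
print(inf_norm(A))         # max(3, 7) = 7.0
print(one_norm(A))         # max(4, 6) = 6.0

# Condition number Cond(A) = ||A|| * ||A^-1||, here with the infinity norm
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]
print(inf_norm(A) * inf_norm(A_inv))   # 7 * 3 = 21.0
```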
Gauss-Jordan
• Divide each pivot row so the pivot element equals 1
• Eliminate the pivot column from all the other (n-1) equations, not only those below it (as in forward elimination)
• Results in the identity matrix instead of an upper triangular matrix
• No backward substitution is needed
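The steps above can be sketched as follows; the 3×3 test system is an illustrative assumption, not from the slides.

```python
def gauss_jordan(A, b):
    """Gauss-Jordan elimination: normalize each pivot to 1 and eliminate the
    pivot column in *all* other rows, leaving the identity matrix; the
    right-hand side then holds the solution (no backward substitution)."""
    n = len(A)
    A = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        piv = A[k][k]
        for j in range(k, n + 1):
            A[k][j] /= piv                          # make the pivot element 1
        for i in range(n):
            if i != k:
                f = A[i][k]
                for j in range(k, n + 1):
                    A[i][j] -= f * A[k][j]          # eliminate above and below
    return [row[n] for row in A]

# Illustrative system (an assumption)
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = gauss_jordan(A, b)
print(x)   # ≈ [3.0, -2.5, 7.0]
```

The extra eliminations above the pivot are what raise the cost to roughly n³ flops, compared with ~2n³/3 for Gauss elimination.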
• Multiplications: n·n + (n-1)·n + (n-2)·n + … = (n(n+1)/2)·n = n³/2 + n²/2
– i.e. Σ of [number of terms in each equation, (n-i+1)] × [number of equations, n]
• Additions: n(n-1) + (n-1)(n-1) + (n-2)(n-1) + … = (n(n+1)/2)(n-1) = n³/2 - n/2
– i.e. Σ of [number of terms in each equation, (n-i+1)] × [number of equations, (n-1)]
• Number of flops ≈ n³
• Sparse matrix (arising from PDE discretization)
Tri-diagonal Matrix
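Tri-diagonal systems can be solved in O(n) operations with the Thomas algorithm, a specialized Gauss elimination that touches only the three diagonals. A minimal sketch follows (the array layout and names are my own, not from the slides).

```python
def thomas(a, b, c, d):
    """Thomas algorithm for a tri-diagonal system.
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    bp, cp, dp = b[:], c[:], d[:]          # work on copies
    for i in range(1, n):                  # forward sweep
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

# Illustrative 1-D Laplacian-style system (an assumption):
#   2*x1 -   x2        = 1
#  -  x1 + 2*x2 -   x3 = 0
#       -   x2 + 2*x3  = 1
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
print(x)   # [1.0, 1.0, 1.0]
```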
Iterative procedure
Jacobi Method
• Jacobi: every x_i in a sweep is computed from the previous iteration's values, so parallelization is possible
• Gauss-Seidel: each updated x_i is used immediately within the same sweep, so parallelization is difficult
• Convergence is guaranteed if the system is diagonally dominant
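The two sweeps differ in a single line. A minimal sketch of both, using an illustrative diagonally dominant system (an assumption, not from the slides):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: every x_i uses only the *previous* iterate x,
    so all n updates are independent (parallelizable)."""
    n = len(A)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each new x_i is used immediately,
    which usually converges faster but serializes the loop."""
    n = len(A)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Illustrative diagonally dominant system (an assumption)
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = [0.0, 0.0, 0.0]
for _ in range(30):
    x = gauss_seidel_step(A, b, x)
print(x)   # converges toward [3.0, -2.5, 7.0]
```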
Reducing flops
• Pre-divide the coefficients of each row by its diagonal element once, before iterating, so the division is not repeated in every sweep
Relaxation
x_i^new = λ·x_i* + (1 - λ)·x_i^old
where x_i* is the value obtained by the Jacobi or Gauss-Seidel method, and x_i^new is the value after applying the relaxation factor λ
• λ between 0 and 1: under-relaxation (damping)
• λ between 1 and 2: over-relaxation (the solution is assumed to be moving in the right direction, so extra weight is placed on the new value)
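Applied to Gauss-Seidel, the relaxation step can be sketched as follows (λ is written lam; the 3×3 system is an illustrative assumption, not from the slides).

```python
def sor_sweep(A, b, x, lam):
    """One Gauss-Seidel sweep with relaxation factor lam:
    x_i = lam * (Gauss-Seidel value) + (1 - lam) * (old value)."""
    n = len(A)
    x = x[:]
    for i in range(n):
        x_gs = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        x[i] = lam * x_gs + (1.0 - lam) * x[i]   # blend new and old values
    return x

# Illustrative diagonally dominant system (an assumption)
A = [[3.0, -0.1, -0.2],
     [0.1,  7.0, -0.3],
     [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = sor_sweep(A, b, x, lam=1.2)   # lam > 1: over-relaxation
print(x)   # converges toward [3.0, -2.5, 7.0]
```

With lam = 1.0 this reduces to plain Gauss-Seidel; values below 1 damp oscillating iterations.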