Chapter III
3. Solutions of Systems of Linear Algebraic Equations: Matrix Inversion; Special Matrices and Gauss-Seidel Iteration; Gaussian Elimination; LU-Decomposition
Introduction
• Linear algebraic equations occur in almost all branches of engineering. Examples of linear systems include structures, elastic solids, heat flow, electromagnetic fields and electric circuits. If the system is discrete, such as a truss or an electric circuit, then its analysis leads directly to linear algebraic equations.
Methods of Solution
• There are two classes of methods for solving systems of linear algebraic equations: direct and iterative methods.
Direct methods
– They transform the original equations into equivalent equations that can be solved more easily.
– The transformation is carried out by applying certain operations.
– The solution does not contain any truncation error, but round-off error is introduced by the floating-point operations.
Iterative methods
• Start with a guess of the solution x.
• Repeatedly refine the solution until a certain convergence criterion is reached.
• Less efficient than direct methods due to the large number of operations or iterations required.
• The procedures are self-correcting.
• The solution contains truncation error.
• A serious drawback of iterative methods is that they do not always converge to the solution.
• The initial guess affects the convergence of the solution.
• More useful for solving a set of ill-conditioned equations.
Direct methods
Gauss-Elimination method
Partial pivoting:
• Step 1: The numerically largest coefficient of x1 is selected from all the equations as pivot, and the corresponding equation becomes the first equation.
• Step 2: The numerically largest coefficient of x2 is selected from all the remaining equations as pivot, and the corresponding equation becomes the second equation. This process is repeated until an equation in a single variable is obtained.
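The pivoting-and-elimination steps above can be sketched as follows. This is a minimal illustration in Python (the deck's own code is MATLAB); the function name is ours, not from the slides:

```python
def gauss_elimination(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies, leave inputs intact
    b = b[:]
    for k in range(n - 1):
        # pick the row with the numerically largest pivot in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # eliminate x_k from the rows below the pivot row
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

For instance, `gauss_elimination([[1, 10, -1], [2, 3, 20], [10, 1, -2]], [3, 7, 4])` solves the system of Example 21 below.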
Example 21: Solve the system of equations

    x1 + 10x2 −  x3 = 3
    2x1 + 3x2 + 20x3 = 7
    10x1 +  x2 − 2x3 = 4

using Gauss elimination with partial pivoting.
Solution: We have the augmented matrix

    |  1  10  −1 |  3 |
    |  2   3  20 |  7 |
    | 10   1  −2 |  4 |
Exercise: solve the system of equations
LU Decomposition Methods
• LU Decomposition: It is possible to show that any square matrix A can be expressed as a product of a lower triangular matrix L and an upper triangular matrix U:
A = LU
For instance, for a 3 × 3 matrix:

    | a11 a12 a13 |   | l11  0   0  | | u11 u12 u13 |
    | a21 a22 a23 | = | l21 l22  0  | |  0  u22 u23 |
    | a31 a32 a33 |   | l31 l32 l33 | |  0   0  u33 |
The process of computing L and U for a given A is known as LU-Decomposition or LU-Factorization. LU-Decomposition is not unique unless certain constraints are placed on L or U. These constraints distinguish one type of decomposition from another. Three commonly used LU-Decomposition methods are:
• Cholesky's decomposition: the constraint is U = L^T (so A = L L^T).
• Crout's decomposition: the constraints are u_ii = 1 (i.e. the matrix U is unit upper triangular).
• Doolittle's factorization: the constraints are l_ii = 1 (i.e. the matrix L is unit lower triangular).
Basic steps of LU-decomposition methods
Step 1: Decompose the coefficient matrix A, then rewrite the equations as Ax = b → LUx = b.
Step 2: Introduce an intermediate vector y = Ux; the above equation becomes Ly = b.
Step 3: Solve Ly = b for y by forward substitution.
Step 4: Then Ux = y yields x by the backward substitution process.
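The four steps above can be sketched in code, here using Doolittle's factorization (unit lower-triangular L). A Python illustration with our own function names, not the deck's MATLAB:

```python
def lu_decompose(A):
    """Step 1: return (L, U) with A = LU, L unit lower triangular (Doolittle)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Steps 2-4: solve Ly = b forward, then Ux = y backward."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):             # step 3: forward substitution
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1): # step 4: backward substitution
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Note this sketch omits pivoting, so it assumes the factorization exists without row exchanges.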
Cholesky's decomposition method
Consider a system of linear equations Ax = b with a symmetric, positive-definite coefficient matrix A.
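Cholesky's decomposition (A = L L^T) can be sketched as below; a Python illustration under the assumption that A is symmetric positive definite, with a function name of our choosing:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T (A symmetric positive definite)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L
```

Once L is known, the system is solved exactly as in the four LU steps, with U = L^T.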
Example: Solve the following equations by Cholesky's decomposition method.
Crout’s Method
Consider the system
Equating the corresponding elements, we obtain
Example 23: Solve the following set of equations by Crout's method:
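Crout's factorization (constraint u_ii = 1, unit upper-triangular U) can be sketched as follows; again a Python illustration rather than the deck's MATLAB, with our own function name:

```python
def crout(A):
    """Return (L, U) with A = LU and U unit upper triangular (Crout)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):       # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):   # row j of U, scaled by the new diagonal of L
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U
```

The resulting L and U plug into the same forward/backward substitution steps as before.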
Matrix inversion of a 3 × 3 matrix
In order to find the inverse of A, we first need to use the matrix of cofactors, C, to create the adjoint of matrix A. The adjoint of A, denoted adj(A), is the transpose of the matrix of cofactors:
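The cofactor/adjoint route above, A⁻¹ = adj(A) / det(A), can be sketched for the 3 × 3 case; a Python illustration with a hypothetical function name:

```python
def inverse_3x3(A):
    """Invert a 3x3 matrix via the matrix of cofactors (adjoint method)."""
    # cofactor C[i][j] = (-1)^(i+j) * minor(i, j)
    def minor(i, j):
        rows = [r for k, r in enumerate(A) if k != i]
        m = [[v for k, v in enumerate(r) if k != j] for r in rows]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    C = [[(-1) ** (i + j) * minor(i, j) for j in range(3)] for i in range(3)]
    # determinant by cofactor expansion along row 0
    det = sum(A[0][j] * C[0][j] for j in range(3))
    # adj(A) is the transpose of C, so A^{-1}[i][j] = C[j][i] / det
    return [[C[j][i] / det for j in range(3)] for i in range(3)]
```

For dense systems larger than 3 × 3 this is far more expensive than elimination, which is why the direct methods above are preferred in practice.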
Iterative Methods
Introduction
Iterative methods are those in which the solution is obtained by successive approximation. Thus in an indirect (iterative) method, the amount of computation depends on the degree of accuracy required.
When the equations are written with each unknown expressed in terms of the others, they are solvable by the method of successive approximation.
Two iterative methods: i) Gauss-Jacobi iteration method; ii) Gauss-Seidel iteration method.
Gauss-Jacobi Iteration Method
The first iterative technique is called the Jacobi method, named after Carl Gustav Jacob Jacobi (1804-1851).
Two assumptions are made on the Jacobi method:
1) The system given by Ax = b has a unique solution.
Second assumption: the diagonal entries of the coefficient matrix must be nonzero (ideally dominant); if necessary, the equations are reordered. For example, the system on the left is rearranged into the system on the right:

    3x1 + 7x2 + 13x3 = 76        12x1 + 3x2 + 0x3 = 1
     x1 + 5x2 +  3x3 = 28   →     x1 + 5x2 + 3x3 = 28
    12x1 + 3x2 + 0x3 = 1          3x1 + 7x2 + 13x3 = 76
To begin the Jacobi method, solve the first equation for x1, the second equation for x2, and the third equation for x3.
With the initial guess (0, 0, 0), the first iteration gives

    x1^(1) = (85 − 6(0) − (0)) / 27 = 3.14815
The results are tabulated
A = input('Enter a co-efficient matrix A: ');
B = input('Enter source vector B: ');
X = input('Enter initial guess vector: ');
n = input('Enter no of iterations: ');
e = input('Enter your tolerance: ');
N = length(B);
P = X + 1;                        % force entry into the loop
j = 1;
while max(abs(X - P)) > e && j <= n
    P = X;                        % save the previous iterate
    for i = 1:N
        X(i) = (B(i) - A(i,[1:i-1, i+1:N]) * P([1:i-1, i+1:N])) / A(i,i);
    end
    fprintf('**Iteration number %d**\n', j)
    X
    j = j + 1;
end
Enter a co-efficient matrix A: [10 3 1 ;3 10 2;1 2 10]
Enter source vector B: [19;29;35]
Enter initial guess vector: [0;0;0]
Enter no of iterations: 20
Enter your tolerance: 0.0001
Gauss-Seidel Method
Jorge Eduardo Celis
An iterative method.
Basic Procedure:
- Algebraically solve each linear equation for xi.
- Assume an initial guess solution array.
- Solve for each xi and repeat.
- Use the absolute relative approximate error after each iteration to check whether the error is within a pre-specified tolerance.
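The basic procedure above can be sketched as follows. This is a Python illustration (the deck's own code is MATLAB); the function name and signature are ours. Unlike Jacobi, each update immediately reuses the values already computed in the current sweep:

```python
def gauss_seidel(A, b, x0, tol=1e-4, max_iter=100):
    """Gauss-Seidel iteration; stop when the largest absolute relative
    approximate error falls below tol (as a fraction, not percent)."""
    n = len(b)
    x = x0[:]
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # uses the newest x values
            if x[i] != 0.0:
                max_err = max(max_err, abs((x[i] - old) / x[i]))
        if max_err < tol:
            break
    return x
```

For example, on the diagonally dominant system from the Jacobi sample run, `gauss_seidel([[10, 3, 1], [3, 10, 2], [1, 2, 10]], [19, 29, 35], [0.0, 0.0, 0.0])` converges quickly.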
Algorithm
A set of n equations and n unknowns:

    a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
    a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
    ...
    an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn

If the diagonal elements are non-zero, rewrite each equation solving for the corresponding unknown, e.g. solve the first equation for x1, the second equation for x2, and so on.
Algorithm
Rewriting each equation:

    x1 = (c1 − a12 x2 − a13 x3 − ... − a1n xn) / a11                                   (from equation 1)
    x2 = (c2 − a21 x1 − a23 x3 − ... − a2n xn) / a22                                   (from equation 2)
    ...
    x(n−1) = (c(n−1) − a(n−1,1) x1 − a(n−1,2) x2 − ... − a(n−1,n−2) x(n−2) − a(n−1,n) xn) / a(n−1,n−1)   (from equation n−1)
    xn = (cn − an1 x1 − an2 x2 − ... − a(n,n−1) x(n−1)) / ann                           (from equation n)
Algorithm
General form of each equation:

    x1 = (c1 − Σ_{j=1, j≠1}^{n} a1j xj) / a11
    x2 = (c2 − Σ_{j=1, j≠2}^{n} a2j xj) / a22
    ...
    x(n−1) = (c(n−1) − Σ_{j=1, j≠n−1}^{n} a(n−1,j) xj) / a(n−1,n−1)
    xn = (cn − Σ_{j=1, j≠n}^{n} anj xj) / ann
Algorithm
General form for any row i:

    xi = (ci − Σ_{j=1, j≠i}^{n} aij xj) / aii,    i = 1, 2, ..., n.
Solve for the unknowns: assume an initial guess for [X] = [x1, x2, ..., x(n−1), xn]^T and use the rewritten equations to solve for each value of xi. Important: remember to use the most recent value of each xi, which means applying the values already calculated in the current iteration to the calculations remaining in that iteration.
Calculate the Absolute Relative Approximate Error

    |εa|_i = | (xi_new − xi_old) / xi_new | × 100
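The error measure above, as a small helper (a Python sketch; the function name is ours):

```python
def abs_rel_approx_error(x_new, x_old):
    """Absolute relative approximate error, in percent."""
    return abs((x_new - x_old) / x_new) * 100.0
```

For instance, `abs_rel_approx_error(3.6720, 1.0)` reproduces the 72.76% figure from Example 1 below.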
Gauss-Seidel Method: Example 1
The upward velocity of a rocket is given at three different times.

Table 1. Velocity vs. time data

    Time, t (s)   Velocity, v (m/s)
    5             106.8
    8             177.2
    12            279.2
Using a matrix template of the form

    | t1^2  t1  1 | | a1 |   | v1 |
    | t2^2  t2  1 | | a2 | = | v2 |
    | t3^2  t3  1 | | a3 |   | v3 |

the system of equations becomes

    |  25   5  1 | | a1 |   | 106.8 |
    |  64   8  1 | | a2 | = | 177.2 |
    | 144  12  1 | | a3 |   | 279.2 |

Initial guess: assume [a1, a2, a3]^T = [1, 2, 5]^T.
Rewriting each equation:

    a1 = (106.8 − 5 a2 − a3) / 25
    a2 = (177.2 − 64 a1 − a3) / 8
    a3 = (279.2 − 144 a1 − 12 a2) / 1
Applying the initial guess [a1, a2, a3]^T = [1, 2, 5]^T and solving for each ai:

    a1 = (106.8 − 5(2) − (5)) / 25 = 3.6720
    a2 = (177.2 − 64(3.6720) − (5)) / 8 = −7.8510
    a3 = (279.2 − 144(3.6720) − 12(−7.8510)) / 1 = −155.36

When solving for a2, how many of the initial guess values were used?
Finding the absolute relative approximate error:

    |εa|_1 = | (3.6720 − 1.0000) / 3.6720 | × 100 = 72.76%
    |εa|_3 = | (−155.36 − 5.0000) / (−155.36) | × 100 = 103.22%

At the end of the first iteration:

    [a1, a2, a3]^T = [3.6720, −7.8510, −155.36]^T
Iteration #2
Using the values of ai from iteration #1, [3.6720, −7.8510, −155.36]^T, the new values are found:

    a1 = (106.8 − 5(−7.8510) − (−155.36)) / 25 = 12.056
    a2 = (177.2 − 64(12.056) − (−155.36)) / 8 = −54.882
50
Finding the absolute relative approximate error
12.056 3.6720 At the end of the second iteration
a 1 x100 69.543%
12.056 a 12.056
1
a 54.882
2
54.882 7.8510 a3 798.54
a 2 x100 85.695%
54.882
The maximum absolute rela-
798.34 155.36 tive approximate error is
a 3 x100 80.540% 85.695%
798.34
51
51
Repeating more iterations, the following values are obtained:

    Iteration   a1       |εa|_1 %   a2        |εa|_2 %   a3        |εa|_3 %
1 3.6720 72.767 −7.8510 125.47 −155.36 103.22
2 12.056 69.543 −54.882 85.695 −798.34 80.540
3 47.182 74.447 −255.51 78.521 −3448.9 76.852
4 193.33 75.595 −1093.4 76.632 −14440 76.116
5 800.53 75.850 −4577.2 76.112 −60072 75.963
6 3322.6 75.906 −19049 75.972 −249580 75.931
Notice – The relative errors are not decreasing at any significant rate
What went wrong?
Even though done correctly, the answer is not converging to the correct answer. This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.
Is there a fix?
One class of systems of equations always converges: those with a diagonally dominant coefficient matrix.
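The sufficient condition above is easy to test in code: a matrix is strictly diagonally dominant when, in every row, the magnitude of the diagonal entry exceeds the sum of the magnitudes of the other entries. A Python sketch (the function name is ours):

```python
def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| (j != i) for every row i."""
    n = len(A)
    return all(
        abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
        for i in range(n)
    )
```

Checking this before iterating is a cheap way to predict whether Gauss-Seidel is guaranteed to converge; failing the test does not prove divergence, since the condition is sufficient but not necessary.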
Gauss-Seidel Method: Example 2

    | 12  3  −5 | | x1 |   |  1 |
    |  1  5   3 | | x2 | = | 28 |
    |  3  7  13 | | x3 |   | 76 |

With an initial guess of [x1, x2, x3]^T = [1, 0, 1]^T, rewriting each equation:

    x1 = (1 − 3 x2 + 5 x3) / 12    →   x1 = (1 − 3(0) + 5(1)) / 12 = 0.50000
    x2 = (28 − x1 − 3 x3) / 5      →   x2 = (28 − 0.50000 − 3(1)) / 5 = 4.9000
    x3 = (76 − 3 x1 − 7 x2) / 13   →   x3 = (76 − 3(0.50000) − 7(4.9000)) / 13 = 3.0923
The absolute relative approximate error:

    |εa|_1 = | (0.50000 − 1.0000) / 0.50000 | × 100 = 100.00%
    |εa|_2 = | (4.9000 − 0) / 4.9000 | × 100 = 100.00%
    |εa|_3 = | (3.0923 − 1.0000) / 3.0923 | × 100 = 67.662%

The maximum absolute relative error after the first iteration is 100%.
After iteration #1: [x1, x2, x3]^T = [0.5000, 4.9000, 3.0923]^T. Iteration #2 gives:

    x1 = (1 − 3(4.9000) + 5(3.0923)) / 12 = 0.14679
    x2 = (28 − 0.14679 − 3(3.0923)) / 5 = 3.7153
    x3 = (76 − 3(0.14679) − 7(3.7153)) / 13 = 3.8118
Iteration #2 absolute relative approximate error:

    |εa|_1 = | (0.14679 − 0.50000) / 0.14679 | × 100 = 240.61%
    |εa|_2 = | (3.7153 − 4.9000) / 3.7153 | × 100 = 31.889%
    |εa|_3 = | (3.8118 − 3.0923) / 3.8118 | × 100 = 18.874%

The maximum absolute relative error after the second iteration is 240.61%. This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?
Repeating more iterations, the solution obtained, [x1, x2, x3]^T = [0.99919, 3.0001, 4.0001]^T, is close to the exact solution [x1, x2, x3]^T = [1, 3, 4]^T.
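A quick sanity check (Python, for illustration) that [1, 3, 4] is indeed the exact solution of the Example 2 system, with the coefficient matrix as reconstructed above:

```python
A = [[12, 3, -5], [1, 5, 3], [3, 7, 13]]
b = [1, 28, 76]
x = [1, 3, 4]
# residuals Ax - b; each entry is 0 for the exact solution
residuals = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
print(residuals)  # [0, 0, 0]
```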
Gauss-Seidel Method: Example 3
Given the same system of equations in a different order,

    x1 + 5 x2 + 3 x3 = 28
    12 x1 + 3 x2 − 5 x3 = 1
    3 x1 + 7 x2 + 13 x3 = 76

with an initial guess of [x1, x2, x3]^T = [1, 0, 1]^T, the rewritten equations are:

    x1 = (76 − 7 x2 − 13 x3) / 3
    x2 = (28 − x1 − 3 x3) / 5
    x3 = (12 x1 + 3 x2 − 1) / 5
Conducting six iterations, the following values are obtained:

    Iteration   a1            |εa|_1 %   a2           |εa|_2 %   a3            |εa|_3 %
    1           21.000        95.238     0.80000      100.00     50.680        98.027
    2           −196.15       110.71     14.421       94.453     −462.30       110.96
    3           −1995.0       109.83     −116.02      112.43     4718.1        109.80
    4           −20149        109.90     1204.6       109.63     −47636        109.90
    5           2.0364×10^5   109.89     −12140       109.92     4.8144×10^5   109.89
    6           −2.0579×10^6  109.89     1.2272×10^5  109.89     −4.8653×10^6  109.89
Yihun T.
Gauss-Seidel Method
The Gauss-Seidel method can still be used: the coefficient matrix

    A = |  3  7  13 |
        |  1  5   3 |
        | 12  3  −5 |

is not diagonally dominant, but this is the same set of equations used in Example 2,

    A = | 12  3  −5 |
        |  1  5   3 |
        |  3  7  13 |

which did converge. Rearranging the equations restores diagonal dominance.