03 Solution Methods For Linear Equations
66
Direct Methods
67
The Tri-Diagonal Matrix Algorithm (TDMA)
𝜙1 = 𝐶1
−𝛽2 𝜙1 + 𝐷2 𝜙2 − 𝛼2 𝜙3 = 𝐶2
−𝛽3 𝜙2 + 𝐷3 𝜙3 − 𝛼3 𝜙4 = 𝐶3
−𝛽4 𝜙3 + 𝐷4 𝜙4 − 𝛼4 𝜙5 = 𝐶4
⋅ ⋅ ⋅ = ⋅
−𝛽𝑛 𝜙𝑛−1 + 𝐷𝑛 𝜙𝑛 − 𝛼𝑛 𝜙𝑛+1 = 𝐶𝑛
𝜙𝑛+1 = 𝐶𝑛+1
In the above set of equations 𝜙1 and 𝜙𝑛+1 are known boundary values.
The general form of any single equation is
−𝛽𝑗 𝜙𝑗−1 + 𝐷𝑗 𝜙𝑗 − 𝛼𝑗 𝜙𝑗+1 = 𝐶𝑗
Each of these equations can be rearranged to express 𝜙𝑗 in terms of its neighbours:
𝜙2 = (𝛼2/𝐷2) 𝜙3 + (𝛽2/𝐷2) 𝜙1 + 𝐶2/𝐷2
𝜙3 = (𝛼3/𝐷3) 𝜙4 + (𝛽3/𝐷3) 𝜙2 + 𝐶3/𝐷3
𝜙4 = (𝛼4/𝐷4) 𝜙5 + (𝛽4/𝐷4) 𝜙3 + 𝐶4/𝐷4
𝜙𝑛 = (𝛼𝑛/𝐷𝑛) 𝜙𝑛+1 + (𝛽𝑛/𝐷𝑛) 𝜙𝑛−1 + 𝐶𝑛/𝐷𝑛
68
The TDMA
Start the forward elimination process
𝜙2 = (𝛼2/𝐷2) 𝜙3 + (𝛽2/𝐷2) 𝜙1 + 𝐶2/𝐷2
Substituting this expression for 𝜙2 into the equation for 𝜙3,
𝜙3 = (𝛼3/𝐷3) 𝜙4 + (𝛽3/𝐷3) 𝜙2 + 𝐶3/𝐷3
gives
𝜙3 = [𝛼3 / (𝐷3 − 𝛽3 𝛼2/𝐷2)] 𝜙4 + [𝛽3 ((𝛽2/𝐷2) 𝜙1 + 𝐶2/𝐷2) + 𝐶3] / (𝐷3 − 𝛽3 𝛼2/𝐷2)
Writing 𝐴2 = 𝛼2/𝐷2 and 𝐶2′ = (𝛽2/𝐷2) 𝜙1 + 𝐶2/𝐷2, this becomes
𝜙3 = [𝛼3 / (𝐷3 − 𝛽3 𝐴2)] 𝜙4 + (𝛽3 𝐶2′ + 𝐶3) / (𝐷3 − 𝛽3 𝐴2)
and defining 𝐴3 = 𝛼3 / (𝐷3 − 𝛽3 𝐴2) and 𝐶3′ = (𝛽3 𝐶2′ + 𝐶3) / (𝐷3 − 𝛽3 𝐴2) reduces it to
𝜙3 = 𝐴3 𝜙4 + 𝐶3′
Repeating the substitution for each successive equation, 𝜙4 = (𝛼4/𝐷4) 𝜙5 + (𝛽4/𝐷4) 𝜙3 + 𝐶4/𝐷4 becomes 𝜙4 = 𝐴4 𝜙5 + 𝐶4′, and so on up to
𝜙𝑛 = 𝐴𝑛 𝜙𝑛+1 + 𝐶𝑛′
69
The TDMA
Perform the backward substitution process using the general form
𝜙𝑗 = 𝐴𝑗 𝜙𝑗+1 + 𝐶𝑗′   where   𝐴𝑗 = 𝛼𝑗 / (𝐷𝑗 − 𝛽𝑗 𝐴𝑗−1)   and   𝐶𝑗′ = (𝛽𝑗 𝐶𝑗−1′ + 𝐶𝑗) / (𝐷𝑗 − 𝛽𝑗 𝐴𝑗−1)
At the boundary points:
𝑗 = 1:      𝐴1 = 0 and 𝐶1′ = 𝜙1
𝑗 = 𝑛 + 1:  𝐴𝑛+1 = 0 and 𝐶𝑛+1′ = 𝜙𝑛+1
Implementation
First arrange the system of equations in the standard tri-diagonal form.
The values of 𝐴𝑗 and 𝐶𝑗′ are calculated starting at 𝑗 = 2 and going up to 𝑗 = 𝑛.
Since the value of 𝜙 is known at boundary location (𝑛 + 1), the values for 𝜙𝑗 can be obtained in reverse order (𝜙𝑛, 𝜙𝑛−1, 𝜙𝑛−2, …, 𝜙2), as in the sketch below.
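The two passes above map directly onto code. Below is a minimal Python sketch of the TDMA; the function name, argument order, and the convention that boundary contributions are already folded into the 𝐶 values (as in Example 4.3 below, so 𝛽 is zero at the first node and 𝛼 is zero at the last) are illustrative assumptions, not from the slides.

```python
def tdma(beta, D, alpha, C):
    """Solve -beta[j]*phi[j-1] + D[j]*phi[j] - alpha[j]*phi[j+1] = C[j].

    Inputs are lists of length n for the n unknown nodes; boundary
    contributions are assumed folded into C, so beta[0] = 0 and
    alpha[-1] = 0.
    """
    n = len(D)
    A = [0.0] * n    # forward-elimination coefficients A_j
    Cp = [0.0] * n   # forward-elimination coefficients C'_j
    A_prev, Cp_prev = 0.0, 0.0   # boundary values: A = 0, C' = 0
    for j in range(n):
        denom = D[j] - beta[j] * A_prev
        A[j] = alpha[j] / denom
        Cp[j] = (beta[j] * Cp_prev + C[j]) / denom
        A_prev, Cp_prev = A[j], Cp[j]
    # Back substitution: phi_j = A_j * phi_{j+1} + C'_j, in reverse order
    phi = [0.0] * n
    phi[-1] = Cp[-1]   # alpha at the last node is zero, so A[-1] = 0
    for j in range(n - 2, -1, -1):
        phi[j] = A[j] * phi[j + 1] + Cp[j]
    return phi
```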
70
The TDMA - Example
Example 4.3
𝐴1 = 5 / (20 − 0) = 0.25         𝐶1′ = (0 + 1100) / (20 − 0) = 55
𝐴2 = 5 / (15 − 5 × 0.25) = 0.3636     𝐶2′ = (5 × 55 + 100) / (15 − 5 × 0.25) = 27.2727
𝐴3 = 5 / (15 − 5 × 0.3636) = 0.3793    𝐶3′ = (5 × 27.2727 + 100) / (15 − 5 × 0.3636) = 17.9308
𝐴4 = 5 / (15 − 5 × 0.3793) = 0.3816    𝐶4′ = (5 × 17.9308 + 100) / (15 − 5 × 0.3793) = 14.4735
𝐴5 = 0                    𝐶5′ = (5 × 14.4735 + 100) / (10 − 5 × 0.3816) = 21.3009
73
The TDMA - Example
Node 𝜷𝒋 𝑫𝒋 𝜶𝒋 𝑪𝒋 𝑨𝒋 𝑪′𝒋 𝝓𝒋
1 0 20 5 1100 0.25 55 64.23
2 5 15 5 100 0.3636 27.2727 36.91
3 5 15 5 100 0.3793 17.9308 26.50
4 5 15 5 100 0.3816 14.4735 22.60
5 5 10 0 100 0.00 21.3009 21.30
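As a check, feeding the table's coefficient columns to the tdma sketch above reproduces the 𝜙𝑗 column (the variable names are illustrative):

```python
beta  = [0, 5, 5, 5, 5]
D     = [20, 15, 15, 15, 10]
alpha = [5, 5, 5, 5, 0]
C     = [1100, 100, 100, 100, 100]

phi = tdma(beta, D, alpha, C)
print([round(p, 2) for p in phi])   # ~ [64.23, 36.91, 26.50, 22.60, 21.30]
```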
Point-Iterative Methods
Update using 𝜙𝑃 = (𝑎𝐸 𝜙𝐸 + 𝑎𝑊 𝜙𝑊 + 𝑏) / 𝑎𝑃
Sweep repeatedly through the grid points until the convergence criterion is met.
In each sweep, points already visited have new values; points not yet visited have old values.
75
Jacobi Method
In the Jacobi scheme all values are updated simultaneously at the end of a sweep.
Consider the following example
2𝑥1 + 𝑥2 + 𝑥3 = 7,  so  𝑥1 = (7 − 𝑥2 − 𝑥3) / 2
−𝑥1 + 3𝑥2 − 𝑥3 = 2,  so  𝑥2 = (2 + 𝑥1 + 𝑥3) / 3
𝑥1 − 𝑥2 + 2𝑥3 = 5,  so  𝑥3 = (5 − 𝑥1 + 𝑥2) / 2
Initial guess: 𝑥1^(0) = 𝑥2^(0) = 𝑥3^(0) = 0
First iteration: 𝑥1^(1) = 3.500,  𝑥2^(1) = 0.667,  𝑥3^(1) = 2.500
Iteration number    0        1        2        3        4        5      …      17
𝑥1                  0   3.5000   1.9167   1.6250   1.2292   1.1563   …   1.0000
𝑥2                  0   0.6667   2.6667   1.6667   2.1667   1.9167   …   2.0000
𝑥3                  0   2.5000   1.0833   2.8750   2.5208   2.9688   …   3.0000
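A minimal Python sketch reproducing the Jacobi sweep for this system; the simultaneous tuple assignment guarantees that every right-hand side uses only values from the previous iteration:

```python
x1 = x2 = x3 = 0.0
for k in range(1, 18):
    # All right-hand sides use values from the previous sweep.
    x1, x2, x3 = ((7 - x2 - x3) / 2,
                  (2 + x1 + x3) / 3,
                  (5 - x1 + x2) / 2)
    print(k, round(x1, 4), round(x2, 4), round(x3, 4))
# Converges to x1 = 1, x2 = 2, x3 = 3, matching the table.
```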
76
Gauss-Seidel Scheme
The Gauss-Seidel scheme is similar to the Jacobi method but proceeds by making direct use of the most recently computed values.
Consider the same example
𝑥1 = (7 − 𝑥2 − 𝑥3) / 2    𝑥2 = (2 + 𝑥1 + 𝑥3) / 3    𝑥3 = (5 − 𝑥1 + 𝑥2) / 2
Initial guess: 𝑥1^(0) = 𝑥2^(0) = 𝑥3^(0) = 0
𝑥1^(1) = (7 − 𝑥2^(0) − 𝑥3^(0)) / 2 = (7 − 0 − 0) / 2 = 3.5000
𝑥2^(1) = (2 + 𝑥1^(1) + 𝑥3^(0)) / 3 = (2 + 3.5 + 0) / 3 = 1.8333
𝑥3^(1) = (5 − 𝑥1^(1) + 𝑥2^(1)) / 2 = (5 − 3.5 + 1.8333) / 2 = 1.6667
Iteration number    0        1        2        3        4        5      …      13
𝑥1                  0   3.5000   1.7500   1.3333   1.1181   1.0475   …   1.0000
𝑥2                  0   1.8333   1.8056   1.9537   1.9761   1.9922   …   2.0000
𝑥3                  0   1.6667   2.5278   2.8102   2.9290   2.9724   …   3.0000
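The corresponding Gauss-Seidel sketch differs only in that each assignment takes effect immediately, so later equations in the same sweep see the new values:

```python
x1 = x2 = x3 = 0.0
for k in range(1, 14):
    x1 = (7 - x2 - x3) / 2   # uses old x2, x3
    x2 = (2 + x1 + x3) / 3   # uses the new x1
    x3 = (5 - x1 + x2) / 2   # uses the new x1 and x2
    print(k, round(x1, 4), round(x2, 4), round(x3, 4))
# Reaches the same solution in fewer sweeps (13 vs. 17 for Jacobi).
```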
77
Iterative Methods
To illustrate the method, we shall consider two very simple examples using the Gauss-Seidel method.
Equations:
𝑇1 = 0.4𝑇2 + 0.2 𝑇2 = 𝑇1 + 1
Iteration 0 1 2 3 4 5 ∞
T1 0 0.2 0.68 0.872 0.949 0.980 1.000
T2 0 1.2 1.68 1.872 1.949 1.980 2.000
𝑇1 = 𝑇2 − 1 𝑇2 = 2.5𝑇1 − 0.5
Iteration 0 1 2 3 4 5 ∞
T1 0
T2 0
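Tables like these are quick to fill in with a short script. Below is a sketch for the first pair of equations, updating Gauss-Seidel style as in the table; running the second pair the same way shows its iterates growing without bound, which is the point of the contrast:

```python
T1 = T2 = 0.0
for k in range(1, 6):
    T1 = 0.4 * T2 + 0.2   # uses the current T2
    T2 = T1 + 1.0         # uses the T1 just computed
    print(k, round(T1, 3), round(T2, 3))
# Approaches the exact solution T1 = 1, T2 = 2, as in the table.
```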
78
Scarborough Criterion
Iterative methods are not guaranteed to converge to a solution unless the Scarborough criterion is satisfied.
The Scarborough criterion states that convergence of an iterative scheme is guaranteed if
Σ |𝑎nb| / |𝑎′P|  ≤ 1 at all nodes,  and  < 1 at at least one node
where 𝑎′P is the net coefficient of the central node and the sum runs over all the neighbour coefficients 𝑎nb.
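A hedged sketch of a row-wise Scarborough check for a plain coefficient matrix; the function name is illustrative, and using |𝑎𝑖𝑖| for the net central coefficient is an assumption that holds for systems like the examples in these slides:

```python
def scarborough_satisfied(A, tol=1e-12):
    """Check the Scarborough criterion row by row for matrix A."""
    ratios = []
    for i, row in enumerate(A):
        off_diag = sum(abs(a) for j, a in enumerate(row) if j != i)
        ratios.append(off_diag / abs(row[i]))
    # <= 1 everywhere, and strictly < 1 in at least one row
    return all(r <= 1 + tol for r in ratios) and any(r < 1 - tol for r in ratios)

# The 3x3 system from the Jacobi example satisfies the criterion:
A = [[2, 1, 1], [-1, 3, -1], [1, -1, 2]]
print(scarborough_satisfied(A))   # True
```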
79
Jacobi Method
General Procedure
Write the system 𝑨 ⋅ 𝜙 = 𝑩 in component form:
Σ_{𝑗=1}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗 = 𝑏𝑖    (i = 1, 2, …, n)
Rearranging so that the coefficients of the diagonal of matrix A are isolated:
𝑎𝑖𝑖 𝜙𝑖 = 𝑏𝑖 − Σ_{𝑗=1, 𝑗≠𝑖}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗    (i = 1, 2, …, n)
This gives the iteration equation for the Jacobi method:
𝜙𝑖^(𝑘) = Σ_{𝑗=1, 𝑗≠𝑖}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖    (i = 1, 2, …, n)
In matrix form this equation can be written as
𝜙^(𝑘) = 𝐓 ⋅ 𝜙^(𝑘−1) + 𝐜
where 𝐓 is the iteration matrix and 𝐜 is the constant vector. The coefficients 𝑇𝑖𝑗 of the iteration matrix are
𝑇𝑖𝑗 = −𝑎𝑖𝑗/𝑎𝑖𝑖  if 𝑖 ≠ 𝑗,   𝑇𝑖𝑗 = 0  if 𝑖 = 𝑗
and the elements of the constant vector are 𝑐𝑖 = 𝑏𝑖/𝑎𝑖𝑖.
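A short numpy sketch of the matrix form, building 𝐓 and 𝐜 from the definitions above and iterating on the earlier 3×3 example (the iteration count is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[2., 1., 1.], [-1., 3., -1.], [1., -1., 2.]])
b = np.array([7., 2., 5.])

d = np.diag(A)              # the diagonal entries a_ii
T = -A / d[:, None]         # T_ij = -a_ij / a_ii for i != j ...
np.fill_diagonal(T, 0.0)    # ... and T_ii = 0
c = b / d                   # c_i = b_i / a_ii

phi = np.zeros(3)
for _ in range(50):
    phi = T @ phi + c       # phi^(k) = T . phi^(k-1) + c
print(np.round(phi, 4))     # -> [1. 2. 3.]
```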
80
Gauss-Seidel Scheme
Iteration equation for the Gauss–Seidel method
𝜙𝑖^(𝑘) = Σ_{𝑗=1}^{𝑖−1} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘) + Σ_{𝑗=𝑖+1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖    (i = 1, 2, …, n)
81
• MATLAB codes for Examples 4.1 to 4.3 demonstrating the use of the:
  TDMA method
  Jacobi method
  Gauss-Seidel method
82
Relaxation Methods
[Figure: convergence histories; normalised residuals for all solution variables fall below 10⁻³.]
83
Relaxation Methods
• The convergence rate of the Jacobi and Gauss-Seidel methods depends on the properties of the iteration matrix.
• It has been found that the convergence rate can be improved by the introduction of a so-called relaxation parameter α.
• Consider the iteration equation for the Jacobi method:
𝜙𝑖^(𝑘) = Σ_{𝑗=1, 𝑗≠𝑖}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖    (i = 1, 2, …, n)
It can also be written as
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + Σ_{𝑗=1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖    (i = 1, 2, …, n)
where the sum now runs over all j, including j = i. Modify the convergence rate of the iteration sequence by multiplying the second and third terms on the right-hand side by the relaxation parameter α:
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + α [ Σ_{𝑗=1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖 ]    (i = 1, 2, …, n)
84
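A sketch of the relaxed Jacobi update; note that the bracketed term of the equation above can be computed in one line as (b − A·𝜙)/𝑎𝑖𝑖 row by row (function and variable names are illustrative):

```python
import numpy as np

def relaxed_jacobi(A, b, alpha, iters):
    """Jacobi with relaxation: phi <- phi + alpha * (b - A@phi) / diag(A)."""
    d = np.diag(A)
    phi = np.zeros_like(b)
    for _ in range(iters):
        # The bracketed term of the slide's equation is (b - A@phi) / a_ii
        phi = phi + alpha * (b - A @ phi) / d
    return phi

A = np.array([[2., 1., 1.], [-1., 3., -1.], [1., -1., 2.]])
b = np.array([7., 2., 5.])
print(relaxed_jacobi(A, b, 1.0, 50))   # alpha = 1 recovers plain Jacobi
```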
Relaxation Methods
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + α [ Σ_{𝑗=1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖 ]    (i = 1, 2, …, n)
The original system is 𝑨 ⋅ 𝜙 = 𝑩, i.e. Σ_{𝑗=1}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗 = 𝑏𝑖 (i = 1, 2, …, n).
For a converged iteration sequence,
Σ_{𝑗=1}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗^(𝑘→∞) = 𝑏𝑖    (i = 1, 2, …, n)
85
Relaxation Methods
Dividing both sides by the coefficient 𝑎𝑖𝑖 and some rearrangement yields
𝑏𝑖/𝑎𝑖𝑖 + Σ_{𝑗=1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘→∞) = 0    (i = 1, 2, …, n)
After k iterations the intermediate solution vector 𝜙^(𝑘) is not equal to the correct solution, so
Σ_{𝑗=1}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗^(𝑘) ≠ 𝑏𝑖    (i = 1, 2, …, n)
We define the residual 𝑟𝑖^(𝑘) of the ith equation after k iterations as the difference between the left- and right-hand sides:
𝑟𝑖^(𝑘) = 𝑏𝑖 − Σ_{𝑗=1}^{𝑛} 𝑎𝑖𝑗 𝜙𝑗^(𝑘)
If the iteration process is convergent, the intermediate solution vector 𝜙^(𝑘) should get progressively closer to the final solution vector 𝜙^(𝑘→∞) as the iteration count k increases.
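In code, the residual gives a natural stopping test; below is a sketch of a normalised-residual check of the kind mentioned on slide 83 (scaling by the initial residual norm is an assumed, though common, choice of normalisation):

```python
import numpy as np

def converged(A, b, phi, r0_norm, tol=1e-3):
    """Stop when the residual norm, scaled by the initial one, drops below tol."""
    r = b - A @ phi   # r_i^(k) = b_i - sum_j a_ij * phi_j^(k)
    return np.linalg.norm(r) / r0_norm < tol
```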
86
Relaxation Methods
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + α [ Σ_{𝑗=1}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖 ]    (i = 1, 2, …, n)
Note that the expression in the square brackets is just the residual 𝑟𝑖^(𝑘−1) after k − 1 iterations divided by the coefficient 𝑎𝑖𝑖:
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + α 𝑟𝑖^(𝑘−1) / 𝑎𝑖𝑖    (i = 1, 2, …, n)
This confirms that the introduction of the relaxation parameter α does not affect the converged solution, since 𝑟𝑖^(𝑘−1) → 0 as the iteration converges.
The coefficients 𝑇𝑖𝑗 of the iteration matrix become
𝑇𝑖𝑗 = −α 𝑎𝑖𝑗/𝑎𝑖𝑖  if 𝑖 ≠ 𝑗,   𝑇𝑖𝑗 = 1 − α  if 𝑖 = 𝑗
and the elements of the constant vector become 𝑐𝑖 = α 𝑏𝑖/𝑎𝑖𝑖.
87
Relaxation Methods
Relaxation concept on the Gauss–Seidel method
In this case the iteration equation after k iterations can be rewritten as
𝜙𝑖^(𝑘) = 𝜙𝑖^(𝑘−1) + α [ Σ_{𝑗=1}^{𝑖−1} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘) + Σ_{𝑗=𝑖}^{𝑛} (−𝑎𝑖𝑗/𝑎𝑖𝑖) 𝜙𝑗^(𝑘−1) + 𝑏𝑖/𝑎𝑖𝑖 ]    (i = 1, 2, …, n)
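A sketch of the relaxed Gauss-Seidel sweep implementing the equation above; with α = 1 it reduces to the plain Gauss-Seidel method (the convergence test on the change in 𝜙 is an illustrative choice):

```python
import numpy as np

def sor(A, b, alpha, tol=1e-3, max_iters=1000):
    """Relaxed Gauss-Seidel: each phi_i is corrected by alpha times its update."""
    n = len(b)
    phi = np.zeros(n)
    for k in range(1, max_iters + 1):
        phi_old = phi.copy()
        for i in range(n):
            # Gauss-Seidel value: new phi_j for j < i, old phi_j for j > i
            s = A[i, :i] @ phi[:i] + A[i, i + 1:] @ phi[i + 1:]
            gs = (b[i] - s) / A[i, i]
            phi[i] = phi[i] + alpha * (gs - phi[i])
        if np.linalg.norm(phi - phi_old) < tol:
            return phi, k
    return phi, max_iters
```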
88
Relaxation Methods
• Modify the MATLAB codes for the Jacobi method and the Gauss-Seidel method to include the relaxation parameter α.
• Solve the example on slide 76 using the same initial guess with α = 0.75, 1.0 and 1.25, and determine the number of iterations required for a converged solution in each case (a driver sketch follows below).
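The companion codes are in MATLAB; as a hedged Python stand-in, the sor sketch above can be driven as follows to compare iteration counts (the counts themselves are left for the exercise):

```python
import numpy as np

A = np.array([[2., 1., 1.], [-1., 3., -1.], [1., -1., 2.]])
b = np.array([7., 2., 5.])

for alpha in (0.75, 1.0, 1.25):
    phi, iters = sor(A, b, alpha)
    print(f"alpha = {alpha}: {iters} iterations, phi = {np.round(phi, 4)}")
```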
89