
Numerical Methods for Engineers
EE0903301
Lecture 15: Iterative Methods (Part 1)
Dr. Yanal Faouri
Iterative Methods

• Iterative or approximate methods provide an alternative to the elimination methods. Such approaches are similar to the techniques developed earlier to obtain the roots of a single equation.
• Those approaches consisted of guessing a value and then using a systematic method to obtain a refined estimate of the root.
• Two such methods are:
• Gauss-Seidel Method
• Jacobi Iteration
Linear Systems: Gauss-Seidel Method

• The Gauss-Seidel method is the most commonly used iterative method for solving linear algebraic equations. Assume that we are given a set of n equations:

[A]{x} = {b}

• Suppose that for brevity we limit ourselves to a 3 × 3 set of equations:

a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2
a31x1 + a32x2 + a33x3 = b3

• If the diagonal elements are all nonzero, the first equation can be solved for x1, the second for x2, and the third for x3 to yield (where j and j − 1 are the present and previous iterations):

x1^j = (b1 − a12 x2^(j−1) − a13 x3^(j−1)) / a11
x2^j = (b2 − a21 x1^j − a23 x3^(j−1)) / a22
x3^j = (b3 − a31 x1^j − a32 x2^j) / a33
Linear Systems: Gauss-Seidel Method

• To start the solution process, initial guesses must be made for the x's. A simple approach is to assume that they are all zero. These zeros can be substituted into the x1 equation, which can be used to calculate a new value x1 = b1/a11.
• Then substitute this new value of x1, along with the previous guess of zero for x3, into the x2 equation to compute a new value for x2. The process is repeated for the x3 equation to calculate a new estimate for x3.
• Then return to the first equation and repeat the entire procedure until the solution converges closely enough to the true values.
• Convergence can be checked using the criterion that, for all i,

εa,i = |(xi^j − xi^(j−1)) / xi^j| × 100% < εs
Example: Gauss-Seidel Method

• Problem Statement.
• Use the Gauss-Seidel method to obtain the solution for
3 x1 − 0.1 x2 − 0.2 x3 = 7.85
0.1 x1 + 7x2 − 0.3 x3 = −19.3
0.3 x1 − 0.2 x2 + 10 x3 = 71.4
• Note that the solution is x1 = 3, x2 = −2.5, and x3 = 7.
• Solution. First, solve each of the equations for its unknown on the diagonal:

x1 = (7.85 + 0.1 x2 + 0.2 x3) / 3
x2 = (−19.3 − 0.1 x1 + 0.3 x3) / 7
x3 = (71.4 − 0.3 x1 + 0.2 x2) / 10
Example: Gauss-Seidel Method

• By assuming that x2 and x3 are zero, the first iteration gives:

x1 = (7.85 + 0.1(0) + 0.2(0)) / 3 = 2.616667
x2 = (−19.3 − 0.1(2.616667) + 0.3(0)) / 7 = −2.794524
x3 = (71.4 − 0.3(2.616667) + 0.2(−2.794524)) / 10 = 7.005610

• For the second iteration, the same process is repeated to compute:

x1 = (7.85 + 0.1(−2.794524) + 0.2(7.005610)) / 3 = 2.990557
x2 = (−19.3 − 0.1(2.990557) + 0.3(7.005610)) / 7 = −2.499625
x3 = (71.4 − 0.3(2.990557) + 0.2(−2.499625)) / 10 = 7.000291
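• These hand calculations can be checked with a short MATLAB script (a sketch for verification; not part of the original lecture):

% Sketch: reproduce the first two Gauss-Seidel iterations of this example
x2 = 0; x3 = 0;                                  % initial guesses of zero
for iter = 1:2
    x1 = (7.85 + 0.1*x2 + 0.2*x3)/3;             % uses latest x2, x3
    x2 = (-19.3 - 0.1*x1 + 0.3*x3)/7;            % uses the new x1 immediately
    x3 = (71.4 - 0.3*x1 + 0.2*x2)/10;            % uses the new x1 and x2
    fprintf('iter %d: x1 = %.6f, x2 = %.6f, x3 = %.6f\n', iter, x1, x2, x3)
end
% prints 2.616667, -2.794524, 7.005610 and then 2.990557, -2.499625, 7.000291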
Example: Gauss-Seidel Method

• The method is, therefore, converging on the true solution.


• Additional iterations could be applied to improve the answers. However, in an actual problem, we would not know the true answer a priori. Consequently, the approximate error equation provides a means to estimate the error. For example, for x1:

εa,1 = |(2.990557 − 2.616667) / 2.990557| × 100% = 12.5%

• For x2 and x3, the error estimates are εa,2 = 11.8% and εa,3 = 0.076%.

• Note that, as was the case when determining the roots of a single equation, criteria such as the approximate error equation usually provide a conservative appraisal of convergence. Thus, when they are met, they ensure that the result is known to at least the tolerance specified by εs.
The Jacobi Iteration

• As each new x value is computed for the Gauss-Seidel method, it is immediately used in the next equation to determine another x value. Thus, if the solution is converging, the best available estimates will be employed.
• An alternative approach, called Jacobi iteration, utilizes a somewhat different tactic. Rather than using the latest available x's, this technique uses the equations for x1^j, x2^j, and x3^j to compute a set of new x's on the basis of a set of old x's.
• Thus, as new values are generated, they are not immediately used but rather are retained for the next iteration.
• The difference between the Gauss-Seidel method and Jacobi iteration is depicted in the next slide, and a code sketch follows this list. Although there are certain cases where the Jacobi method is useful, Gauss-Seidel's utilization of the best available estimates usually makes it the method of preference.
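• A minimal MATLAB sketch of Jacobi iteration (a hypothetical counterpart to the GaussSeidel M-file shown later; the tolerance and iteration limit are assumptions):

% Sketch: Jacobi iteration -- every update uses only the OLD x values
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];      % example system from this lecture
b = [7.85; -19.3; 71.4];
n = length(b); x = zeros(n,1);
for iter = 1:50
    xold = x;                                    % freeze the previous iterate
    for i = 1:n
        others = [1:i-1, i+1:n];                 % every index except i
        x(i) = (b(i) - A(i,others)*xold(others)) / A(i,i);   % old x's only
    end
    if max(abs(x - xold)) < 1e-6, break, end     % assumed stopping test
end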
Difference between the Gauss-Seidel Method and Jacobi Iteration

[Figure: (a) the Gauss-Seidel and (b) the Jacobi iterative methods for solving simultaneous linear algebraic equations.]
Convergence and Diagonal Dominance

• Note that the Gauss-Seidel method is similar in spirit to the technique of simple fixed-point iteration that was
used to solve for the roots of a single equation. Recall that simple fixed-point iteration was sometimes
nonconvergent. That is, as the iterations progressed, the answer moved farther and farther from the correct
result.
• Although the Gauss-Seidel method can also diverge, because it is designed for linear systems, its ability to converge is much more predictable than for fixed-point iteration of nonlinear equations. It can be shown that Gauss-Seidel will converge if the following condition holds for every row i:

|aii| > Σ (j≠i) |aij|
• That is, the absolute value of the diagonal coefficient in each of the equations must be larger than the sum of
the absolute values of the other coefficients in the equation. Such systems are said to be diagonally dominant.
• This criterion is sufficient but not necessary for convergence. That is, although the method may sometimes work
if the above condition is not met, convergence is guaranteed if the condition is satisfied.
• Fortunately, many engineering and scientific problems of practical importance fulfill this requirement. Therefore,
Gauss-Seidel represents a feasible approach to solve many problems in engineering and science.
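• A quick MATLAB sketch for testing diagonal dominance (not from the original slides):

% Sketch: check strict diagonal dominance, a sufficient condition for convergence
A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];     % example system from this lecture
offdiag = sum(abs(A),2) - abs(diag(A));         % row sums of |aij| for j ~= i
isDominant = all(abs(diag(A)) > offdiag)        % returns 1 (true) for this system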
MATLAB M-file: GaussSeidel

• To understand the Gauss-Seidel algorithm re-express the equations (x1j, x2j, x3j) in a form that is compatible with
MATLAB’s ability to perform matrix operations as:

• Notice that the solution can be expressed concisely in matrix form as {x} = {d} − [C]{x}, where the elements of [C] are cij = aij/aii for j ≠ i and cii = 0, and the elements of {d} are di = bi/aii.
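• As a sketch (not from the original slides), [C] and {d} can be built for the example system as follows; the M-file on the next slide applies {d} − [C]{x} row by row, so the newest x's are used:

A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];     % example system from this lecture
b = [7.85; -19.3; 71.4];
C = A - diag(diag(A));                           % zero the diagonal
C = diag(1./diag(A))*C;                          % divide each row by a(i,i)
d = b./diag(A);                                  % d(i) = b(i)/a(i,i)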
MATLAB M-file: GaussSeidel
function x = GaussSeidel(A,b,es,maxit)
% GaussSeidel: Gauss Seidel method
%   x = GaussSeidel(A,b): Gauss Seidel without relaxation
% input:
%   A = coefficient matrix, b = right hand side vector
%   es = stop criterion (default = 0.00001%)
%   maxit = max iterations (default = 50)
% output:
%   x = solution vector
if nargin<2,error('at least 2 input arguments required'),end
if nargin<3||isempty(es),es = 0.00001;end
if nargin<4||isempty(maxit),maxit = 50;end
[m,n] = size(A);
if m ~= n, error('Matrix A must be square'); end
C = A;
for i = 1:n
  C(i,i) = 0;                      % zero the diagonal of C
  x(i) = 0;                        % initial guesses of zero
end
x = x';
for i = 1:n
  C(i,1:n) = C(i,1:n)/A(i,i);      % divide each row by its diagonal element
end
for i = 1:n
  d(i) = b(i)/A(i,i);              % d(i) = b(i)/a(i,i)
end
iter = 0;
while (1)
  xold = x;
  for i = 1:n
    x(i) = d(i) - C(i,:)*x;        % newest x values are used immediately
    if x(i) ~= 0
      ea(i) = abs((x(i) - xold(i))/x(i)) * 100;
    end
  end
  iter = iter+1;
  if max(ea)<=es || iter >= maxit, break, end
end
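• A usage sketch, assuming the function above is saved as GaussSeidel.m:

A = [3 -0.1 -0.2; 0.1 7 -0.3; 0.3 -0.2 10];
b = [7.85; -19.3; 71.4];
x = GaussSeidel(A, b)        % converges toward x1 = 3, x2 = -2.5, x3 = 7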
Relaxation

• Relaxation represents a slight modification of the Gauss-Seidel method that is designed to enhance convergence. After each new value of x is computed, that value is modified by a weighted average of the results of the previous and the present iterations:

xi^new = λ xi^new + (1 − λ) xi^old

• where λ is a weighting factor that is assigned a value between 0 and 2.

• If λ = 1, (1 − λ) is equal to 0 and the result is unmodified. However, if λ is set at a value between 0 and 1, the
result is a weighted average of the present and the previous results.

• This type of modification is called underrelaxation. It is typically employed to make a nonconvergent system converge or to hasten convergence by damping out oscillations. A code sketch of the relaxed update appears below.
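• In code, relaxation amounts to one extra line in the inner loop of the GaussSeidel M-file shown earlier; a sketch, where lambda would be passed in as an additional input argument:

for i = 1:n
    xnew = d(i) - C(i,:)*x;                      % unrelaxed Gauss-Seidel value
    x(i) = lambda*xnew + (1 - lambda)*x(i);      % weighted average with old x(i)
    if x(i) ~= 0
        ea(i) = abs((x(i) - xold(i))/x(i)) * 100;
    end
end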
Relaxation

• For values of λ from 1 to 2, extra weight is placed on the present value. In this instance, there is an implicit assumption that the new value is moving in the correct direction toward the true solution but at too slow a rate.
• Thus, the added weight of λ is intended to improve the estimate by pushing it closer to the
truth. Hence, this type of modification, which is called overrelaxation, is designed to
accelerate the convergence of an already convergent system. The approach is also called
successive overrelaxation, or SOR.
• The choice of a proper value for λ is highly problem-specific and is often determined
empirically.
• For a single solution of a set of equations, relaxation is often unnecessary. However, if the system
under study is to be solved repeatedly, the efficiency introduced by a wise choice of λ can be
extremely important. Good examples are the very large systems of linear algebraic
equations that can occur when solving partial differential equations in a variety of
engineering and scientific problem contexts.
Example: Gauss-Seidel Method with Relaxation

• Problem Statement.
• Solve the following system with Gauss-Seidel using overrelaxation (λ = 1.2) and a stopping criterion of εs = 10%:
−3x1 + 12x2 = 9
10x1 − 2x2 = 8
• Solution. First rearrange the equations so that they are diagonally dominant and solve the first equation for x1 and the second for x2:

x1 = (8 + 2 x2)/10 = 0.8 + 0.2 x2
x2 = (9 + 3 x1)/12 = 0.75 + 0.25 x1
• First iteration: Using initial guesses of x1 = x2 = 0, we can solve for x1 ➔ x1 = 0.8 + 0.2(0) = 0.8
• Before solving for x2, we first apply relaxation to our result for x1 ➔ x1,r = 1.2(0.8) − 0.2(0) = 0.96
• We use the subscript r to indicate that this is the “relaxed” value. This result is then used to compute x2:
x2 = 0.75 + 0.25(0.96) = 0.99
• We then apply relaxation to this result to give ➔ x2,r = 1.2(0.99) − 0.2(0) = 1.188
Example: Gauss-Seidel Method with Relaxation

• At this point, we could compute estimated errors. However, since we started with assumed values of zero, the
errors for both variables will be 100%.
• Second iteration: Using the same procedure as for the first iteration, the second iteration yields

x1 = 0.8 + 0.2(1.188) = 1.0376 ➔ x1,r = 1.2(1.0376) − 0.2(0.96) = 1.05312
x2 = 0.75 + 0.25(1.05312) = 1.01328 ➔ x2,r = 1.2(1.01328) − 0.2(1.188) = 0.978336

• Because we now have nonzero values from the first iteration, we can compute approximate error estimates as each new value is computed:

εa,1 = |(1.05312 − 0.96)/1.05312| × 100% = 8.84%
εa,2 = |(0.978336 − 1.188)/0.978336| × 100% = 21.4%

• At this point, although the error estimate for the first unknown has fallen below the 10% stopping criterion, the second has not. Hence, we must implement another iteration.
Example: Gauss-Seidel Method with Relaxation

• Third iteration: Repeating the procedure yields

x1 = 0.8 + 0.2(0.978336) = 0.995667 ➔ x1,r = 1.2(0.995667) − 0.2(1.05312) = 0.984177 ➔ εa,1 = 7.0%
x2 = 0.75 + 0.25(0.984177) = 0.996044 ➔ x2,r = 1.2(0.996044) − 0.2(0.978336) = 0.999586 ➔ εa,2 = 2.1%

• At this point, we can terminate the computation because both error estimates have fallen below the 10% stopping criterion. The results at this juncture, x1 = 0.984177 and x2 = 0.999586, are converging on the exact solution of x1 = x2 = 1.
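• The relaxed iterations can be reproduced with a short MATLAB script (a sketch for verification; not part of the original lecture):

% Sketch: Gauss-Seidel with overrelaxation (lambda = 1.2) for this example
lambda = 1.2; x1 = 0; x2 = 0;                   % initial guesses of zero
for iter = 1:3
    x1new = 0.8 + 0.2*x2;                       % first rearranged equation
    x1 = lambda*x1new + (1 - lambda)*x1;        % relax x1
    x2new = 0.75 + 0.25*x1;                     % uses the relaxed x1
    x2 = lambda*x2new + (1 - lambda)*x2;        % relax x2
    fprintf('iter %d: x1 = %.6f, x2 = %.6f\n', iter, x1, x2)
end
% prints 0.960000/1.188000, then 1.053120/0.978336, then 0.984177/0.999586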
