
Lab 1

Module: Scientific Computing    Class: 3rd year    AY: 2023/2024

Lab 1: Numerical solution of systems of linear equations

Objective:
The aim of this tutorial is to implement the iterative Jacobi and Gauss-Seidel methods
for solving a system of linear equations (SLE). You’ll need the numpy and matplotlib
modules. Load these modules by typing the following commands:
import numpy as np
import matplotlib.pyplot as plt

1 Course reminders
1.1 Iterative methods for solving a SLE
Iterative methods consist in having a recurrent sequence of vectors, which converges to-
wards the solution of the SLE. This solution is given once a stopping condition imposed
on the algorithm has been satisfied.

1.1.1 Principle
Consider the SLE (S): AX = b with:

$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix},
\quad
X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
\quad \text{and} \quad
b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_n \end{pmatrix}.$$
It is assumed that (S) is a Cramer system, i.e. A is invertible, so that (S) has a unique solution. Iterative methods are based on a decomposition of A of the form A = M − N, where M is an invertible matrix. A recurrent sequence of approximate solutions

$X^{(k)}$, $k \ge 0$, is then generated as follows:

$$\begin{cases}
X^{(0)} \in \mathbb{R}^n \ \text{given}, \\
X^{(k+1)} = M^{-1} N X^{(k)} + M^{-1} b, \quad k \ge 0,
\end{cases}$$

with $X^{(0)}$ an initial vector. This sequence converges, under certain conditions, to the exact
solution X of system (S). Examples: Jacobi, Gauss-Seidel, ...

1.1.2 Jacobi and Gauss-Seidel methods


The iterative Jacobi and Gauss-Seidel methods for solving (S): AX = b consist first in
decomposing A in the form

A = D − E − F

with:
D: diagonal matrix whose diagonal coefficients are those of the matrix A;
E: lower triangular matrix with zero diagonal coefficients;
F: upper triangular matrix with zero diagonal coefficients.
More precisely, given a matrix $A = (a_{i,j}) \in M_n(\mathbb{R})$,

$$D = \begin{pmatrix}
a_{1,1} & 0      & \cdots & 0 \\
0       & \ddots & \ddots & \vdots \\
\vdots  & \ddots & \ddots & 0 \\
0       & \cdots & 0      & a_{n,n}
\end{pmatrix},
\quad
E = \begin{pmatrix}
0        & \cdots & \cdots     & 0 \\
-a_{2,1} & \ddots &            & \vdots \\
\vdots   & \ddots & \ddots     & \vdots \\
-a_{n,1} & \cdots & -a_{n,n-1} & 0
\end{pmatrix},
\quad
F = \begin{pmatrix}
0      & -a_{1,2} & \cdots & -a_{1,n} \\
\vdots & \ddots   & \ddots & \vdots \\
\vdots &          & \ddots & -a_{n-1,n} \\
0      & \cdots   & \cdots & 0
\end{pmatrix}.$$

Jacobi method: M = D and N = E + F.


Gauss-Seidel method: M = D − E and N = F.
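
As a small illustration in NumPy (assuming numpy has been imported as np as in the objective; the helper name splitting is purely illustrative), the three matrices can be extracted as follows:

def splitting(A):
    # Diagonal part of A
    D = np.diag(np.diag(A))
    # Strictly lower part of A, with a minus sign (zero diagonal)
    E = -np.tril(A, k=-1)
    # Strictly upper part of A, with a minus sign (zero diagonal)
    F = -np.triu(A, k=1)
    return D, E, F   # A == D - E - F

# Jacobi:       M = D,     N = E + F
# Gauss-Seidel: M = D - E, N = F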

1.1.3 Convergence
If the matrix A is strictly diagonally dominant, then the Jacobi and Gauss-Seidel methods
converge for any choice of initial vector $X^{(0)}$.

1.1.4 Stopping criterion


Let $A = (a_{i,j})_{1 \le i,j \le n} \in M_n(\mathbb{R})$ with $a_{ii} \ne 0$ for all $i = 1, \dots, n$ and let $\varepsilon$ be a given
tolerance. A stopping criterion for the Jacobi and Gauss-Seidel iterative methods is

$$\|A X^{(k)} - b\| \le \varepsilon.$$
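
In NumPy this criterion can be evaluated with np.linalg.norm (a one-line illustration; A, X and b are assumed to be NumPy arrays):

def stop_criterion(A, X, b, epsilon):
    # True once ||A X^(k) - b|| <= epsilon (Euclidean norm by default)
    return np.linalg.norm(A @ X - b) <= epsilon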

2 Numerical applications
Let $n \ge 4$ and $a_i \in \mathbb{R}$ for $i = 1, 2, 3$. We are interested in the numerical solution of the
problem $(S_n)$: AX = b with

$$A = A(a_1, a_2, a_3) = \begin{pmatrix}
a_1    & a_2    & 0      & \cdots & \cdots & 0 \\
a_3    & a_1    & a_2    & \ddots &        & \vdots \\
0      & a_3    & a_1    & a_2    & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
\vdots &        & \ddots & \ddots & \ddots & a_2 \\
0      & \cdots & \cdots & 0      & a_3    & a_1
\end{pmatrix},
\quad
X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
\quad \text{and} \quad
b = \begin{pmatrix} 2 \\ 2 \\ \vdots \\ 2 \\ 2 \end{pmatrix}.$$

1. (a) Write a function tridiag(a1,a2,a3,n) that returns the tridiagonal matrix
A = A(a1, a2, a3) of size n and the right-hand side b (a possible sketch is given after this question).
(b) Test the function tridiag(a1,a2,a3,n) for a1 = 4, a2 = a3 = 1 and n = 10.
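
A possible sketch of tridiag using np.diag (one implementation among others; returning the pair (A, b) is an assumption, consistent with the call Ab = tridiag(4,1,1,n) in question 7):

def tridiag(a1, a2, a3, n):
    # Tridiagonal matrix: a1 on the diagonal, a2 above it, a3 below it
    A = (a1 * np.eye(n)
         + a2 * np.diag(np.ones(n - 1), k=1)
         + a3 * np.diag(np.ones(n - 1), k=-1))
    # Right-hand side b = (2, ..., 2)^t
    b = 2 * np.ones((n, 1))
    return A, b

# Question 1(b)
A, b = tridiag(4, 1, 1, 10)
print(A)
print(b)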
2. (a) Write a function matrice_diag_dominante(A) taking as input a square matrix A of
order n, which checks whether this matrix is strictly diagonally dominant or not (see the
sketch below).
(b) Test the function matrice_diag_dominante(A) on the matrices B1 = A(4, 1, 1)
and B2 = A(1, 1, 1) for n = 10.
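
A possible sketch of matrice_diag_dominante (row-wise strict diagonal dominance):

def matrice_diag_dominante(A):
    # |a_ii| > sum_{j != i} |a_ij| must hold for every row i
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag))

# Question 2(b)
B1, b = tridiag(4, 1, 1, 10)
B2, _ = tridiag(1, 1, 1, 10)
print(matrice_diag_dominante(B1))   # expected: True
print(matrice_diag_dominante(B2))   # expected: False (first row: 1 > 1 fails)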
3. (a) Write a function jacobi(B, b1, X0, epsilon) taking as input a square matrix
B of order n, a right-hand side b1, an initial guess $X^{(0)}$ and a tolerance
epsilon, which returns an approximate solution of the SLE BX = b1 computed by Jacobi's
method, together with the number of iterations performed.
The tolerance epsilon is used in the stopping criterion $\|B X^{(k)} - b1\| \le$ epsilon.
Use the function matrice_diag_dominante(B) to test whether B is strictly
diagonally dominant; otherwise, return a message stating that B does not have a strictly
dominant diagonal. (A possible sketch is given after this question.)
(b) Use the function tridiag(a1,a2,a3,n) to test the jacobi function with the fol-
lowing parameters:

n = 10, B = B1, b1 = b, $X^{(0)} = (1, 1, \dots, 1)^t \in M_{10,1}(\mathbb{R})$ and epsilon = 10^{-6}.

(c) Using the function np.linalg.solve(B1, b), solve $(S_{10})$ and compute the error of the
Jacobi approximation in the Euclidean norm.
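
A possible sketch of jacobi in matrix form (the return convention (solution, number of iterations) and the message returned in the non-dominant case are assumptions):

def jacobi(B, b1, X0, epsilon):
    if not matrice_diag_dominante(B):
        return "B does not have a strictly dominant diagonal"
    B = np.asarray(B, dtype=float)
    b1 = np.asarray(b1, dtype=float).reshape(-1, 1)
    D_inv = np.diag(1.0 / np.diag(B))   # M^{-1} with M = D
    N = np.diag(np.diag(B)) - B         # N = E + F
    X = np.asarray(X0, dtype=float).reshape(-1, 1)
    k = 0
    while np.linalg.norm(B @ X - b1) > epsilon:
        # X^(k+1) = D^{-1} (E + F) X^(k) + D^{-1} b
        X = D_inv @ (N @ X + b1)
        k += 1
    return X, k

# Question 3(b)-(c)
B1, b = tridiag(4, 1, 1, 10)
X0 = np.ones((10, 1))
X_jac, it_jac = jacobi(B1, b, X0, 1e-6)
X_exact = np.linalg.solve(B1, b)
print(it_jac, np.linalg.norm(X_jac - X_exact))   # iterations and Euclidean-norm error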

4. (a) Write a function gauss_seidel(B, b1, X0, epsilon) which returns an ap-
proximate solution of the SLE BX = b1 computed by the Gauss-Seidel method and the
number of iterations performed to reach convergence, after testing whether B is
strictly diagonally dominant. (A possible sketch is given after this question.)
(b) Test the gauss_seidel function on the same example considered in question 3(b). Then
compute the error in the Euclidean norm.
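
A possible sketch of gauss_seidel, written with the splitting M = D − E, N = F (same return convention as the jacobi sketch above):

def gauss_seidel(B, b1, X0, epsilon):
    if not matrice_diag_dominante(B):
        return "B does not have a strictly dominant diagonal"
    B = np.asarray(B, dtype=float)
    b1 = np.asarray(b1, dtype=float).reshape(-1, 1)
    M = np.tril(B)        # M = D - E (lower triangular part of B, diagonal included)
    N = M - B             # N = F (minus the strictly upper part of B)
    X = np.asarray(X0, dtype=float).reshape(-1, 1)
    k = 0
    while np.linalg.norm(B @ X - b1) > epsilon:
        # X^(k+1) = (D - E)^{-1} (F X^(k) + b), via a triangular solve
        X = np.linalg.solve(M, N @ X + b1)
        k += 1
    return X, k

# Question 4(b): same data as question 3(b)
X_gs, it_gs = gauss_seidel(B1, b, X0, 1e-6)
print(it_gs, np.linalg.norm(X_gs - np.linalg.solve(B1, b)))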
5. Compare the results obtained by the two methods.
6. Plot on the same graph the number of iterations required by the two iterative methods,
Jacobi and Gauss-Seidel, as a function of the tolerance epsilon, with epsilon ∈
{10^{-3}, 10^{-6}, 10^{-9}, 10^{-12}} and n = 10. Interpret the results obtained. (A plotting sketch is given below.)
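
A possible plotting sketch (it reuses tridiag, jacobi and gauss_seidel from the sketches above, the matplotlib import from the objective, and the initial vector of question 3(b)):

epsilons = [1e-3, 1e-6, 1e-9, 1e-12]
B1, b = tridiag(4, 1, 1, 10)
X0 = np.ones((10, 1))
it_jacobi = [jacobi(B1, b, X0, eps)[1] for eps in epsilons]
it_gs = [gauss_seidel(B1, b, X0, eps)[1] for eps in epsilons]

plt.semilogx(epsilons, it_jacobi, "o-", label="Jacobi")
plt.semilogx(epsilons, it_gs, "s-", label="Gauss-Seidel")
plt.xlabel("epsilon")
plt.ylabel("number of iterations")
plt.legend()
plt.show()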
7. Let Niter(epsilon) be the function defined as follows:

def Niter(epsilon):
    # Sizes n = 5, 10, 15, 20, 25, 30
    N = np.arange(5, 31, 5)
    Niter_J = []
    Niter_GS = []
    for n in N:
        Ab = tridiag(4, 1, 1, n)
        X0 = np.zeros((n, 1))
        # Number of iterations of each method for the tolerance epsilon
        J = jacobi(Ab[0], Ab[1], X0, epsilon)
        GS = gauss_seidel(Ab[0], Ab[1], X0, epsilon)
        Niter_J.append(J[1])
        Niter_GS.append(GS[1])
    return Niter_J, Niter_GS
Test this function for epsilon = 10^{-6}. Explain the results obtained.
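
A possible test (it assumes the jacobi and gauss_seidel sketches above, which return the iteration count as second output):

Niter_J, Niter_GS = Niter(1e-6)
print("n           :", list(np.arange(5, 31, 5)))
print("Jacobi      :", Niter_J)
print("Gauss-Seidel:", Niter_GS)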
