


NUMERICAL MATHEMATICS, A Journal of Chinese Universities (English Series)
Numer. Math. J. Chinese Univ. (English Ser.), issue 2, vol. 16: 164-170, 2007

Generalized Jacobi and Gauss-Seidel Methods for Solving Linear System of Equations

Davod Khojasteh Salkuyeh∗


(Department of Mathematics, Mohaghegh Ardabili University, P.O. Box 56199-11367, Ardabil, Iran
E-mail: [email protected])
Received September 1, 2006; Accepted (in revised version) September 19, 2006

Abstract
The Jacobi and Gauss-Seidel algorithms are among the stationary iterative methods
for solving linear systems of equations. They are now mostly used as preconditioners
for the popular iterative solvers. In this paper a generalization of these methods is
proposed and their convergence properties are studied. Some numerical experiments
are given to show the efficiency of the new methods.

Keywords: Jacobi; Gauss-Seidel; generalized; convergence.


Mathematics subject classification: 65F10, 65F50

1. Introduction

Consider the linear system of equations

Ax = b, (1.1)

where the matrix A ∈ R^{n×n} and x, b ∈ R^n. Let A be a nonsingular matrix with nonzero
diagonal entries and
A = D − E − F,
where D is the diagonal of A, −E its strict lower part, and −F its strict upper part. Then
the Jacobi and the Gauss-Seidel methods for solving Eq. (1.1) are defined as

x_{k+1} = D^{-1}(E + F) x_k + D^{-1} b,

x_{k+1} = (D − E)^{-1} F x_k + (D − E)^{-1} b,

respectively. There are many iterative methods, such as the GMRES [7] and Bi-CGSTAB [9]
algorithms, for solving Eq. (1.1) that are more efficient than the Jacobi and Gauss-Seidel
methods. However, when combined with the more efficient methods, for example as
preconditioners, the Jacobi and Gauss-Seidel methods can be quite successful; see, for
example, [4, 6]. It has been proved that if A is strictly diagonally dominant (SDD) or
irreducibly diagonally dominant, then the associated Jacobi and Gauss-Seidel iterations
converge for any initial guess x_0 [6]. If A is a symmetric positive definite (SPD) matrix, then the Gauss-Seidel

∗Corresponding author.

http://www.global-sci.org/nm                © Global-Science Press

method also converges for any x_0 [1]. In this paper we generalize these two methods and
study their convergence properties.
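For reference, the two classical iterations can be sketched in a few lines. The following is a minimal dense-matrix sketch in Python/NumPy (the paper's own experiments in Section 3 use MATLAB; the function names here are ours, for illustration):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxiter=10_000):
    """Jacobi iteration x_{k+1} = D^{-1}(E + F) x_k + D^{-1} b."""
    Dinv = np.diag(1.0 / np.diag(A))
    EF = np.diag(np.diag(A)) - A          # E + F = D - A
    x = x0.astype(float)
    for k in range(maxiter):
        x_new = Dinv @ (EF @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter

def gauss_seidel(A, b, x0, tol=1e-10, maxiter=10_000):
    """Gauss-Seidel iteration x_{k+1} = (D - E)^{-1}(F x_k + b)."""
    DmE = np.tril(A)                      # D - E: lower triangle incl. diagonal
    F = DmE - A                           # F = -(strict upper part of A)
    x = x0.astype(float)
    for k in range(maxiter):
        x_new = np.linalg.solve(DmE, F @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter
```

On any SDD matrix, both iterations converge from an arbitrary starting vector, as stated above.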
This paper is organized as follows. In Section 2, we introduce the new algorithms and
verify their properties. Section 3 is devoted to the numerical experiments. Section 4
gives some concluding remarks.

2. Generalized Jacobi and Gauss-Seidel methods

Let A = (a_{ij}) be an n × n matrix and T_m = (t_{ij}) a banded matrix of bandwidth 2m + 1
defined by

t_{ij} = a_{ij},  |i − j| ≤ m;   t_{ij} = 0, otherwise.

We consider the decomposition A = T_m − E_m − F_m, where −E_m and −F_m are the strict lower
and strict upper parts of the matrix A − T_m, respectively. In other words, the matrices T_m, E_m and
F_m are defined as

T_m = \begin{pmatrix}
a_{1,1} & \cdots & a_{1,m+1} & & \\
\vdots & \ddots & & \ddots & \\
a_{m+1,1} & & \ddots & & a_{n-m,n} \\
 & \ddots & & \ddots & \vdots \\
 & & a_{n,n-m} & \cdots & a_{n,n}
\end{pmatrix},

E_m = \begin{pmatrix}
 & & & \\
-a_{m+2,1} & & & \\
\vdots & \ddots & & \\
-a_{n,1} & \cdots & -a_{n,n-m-1} &
\end{pmatrix},

F_m = \begin{pmatrix}
 & -a_{1,m+2} & \cdots & -a_{1,n} \\
 & & \ddots & \vdots \\
 & & & -a_{n-m-1,n} \\
 & & &
\end{pmatrix}.

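The splitting A = T_m − E_m − F_m is straightforward to form explicitly. A dense NumPy sketch (the helper name `banded_split` is ours, for illustration):

```python
import numpy as np

def banded_split(A, m):
    """Split A = T_m - E_m - F_m:
    T_m keeps the entries with |i - j| <= m (bandwidth 2m+1);
    -E_m and -F_m are the strict lower / upper parts of A - T_m."""
    n = A.shape[0]
    i, j = np.indices((n, n))
    Tm = np.where(np.abs(i - j) <= m, A, 0.0)
    R = A - Tm                 # entries outside the band
    Em = -np.tril(R, -1)       # strict lower part of A - T_m, negated
    Fm = -np.triu(R, 1)        # strict upper part of A - T_m, negated
    return Tm, Em, Fm
```

By construction Tm − Em − Fm reproduces A exactly, Em is strictly lower triangular, and Fm is strictly upper triangular.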
Then we define the generalized Jacobi (GJ) and generalized Gauss-Seidel (GGS) iterative
methods as follows

x_{k+1} = T_m^{-1}(E_m + F_m) x_k + T_m^{-1} b,   (2.1)

x_{k+1} = (T_m − E_m)^{-1} F_m x_k + (T_m − E_m)^{-1} b,   (2.2)

respectively. Let B_GJ^{(m)} = T_m^{-1}(E_m + F_m) and B_GGS^{(m)} = (T_m − E_m)^{-1} F_m be the iteration matrices
of the GJ and GGS methods, respectively. Note that if m = 0, then (2.1) and (2.2) reduce to
the Jacobi and Gauss-Seidel methods. We now study the convergence of the new methods.
To do so, we introduce the following definition.
Definition 2.1. An n × n matrix A = (a_{ij}) is said to be strictly diagonally dominant (SDD) if

|a_{ii}| > \sum_{j=1, j \neq i}^{n} |a_{ij}|,   i = 1, \cdots, n.

Theorem 2.1. Let A be an SDD matrix. Then for any natural number m ≤ n the GJ and GGS
methods are convergent for any initial guess x_0.

Proof. Let M = (M_{ij}) and N = (N_{ij}) be n × n matrices with M being SDD. Then (see [2],
Lemma 1)

ρ(M^{-1} N) ≤ ρ := \max_i ρ_i,   (2.3)

where

ρ_i = \frac{\sum_{j=1}^{n} |N_{ij}|}{|M_{ii}| − \sum_{j=1, j \neq i}^{n} |M_{ij}|}.

Now, let M = T_m and N = E_m + F_m in the GJ method, and M = T_m − E_m and N = F_m in the
GGS method. Obviously, in both cases the matrix M is SDD. Hence M and N satisfy
relation (2.3). Keeping in mind that A is an SDD matrix, it can easily be verified
that ρ_i < 1. Therefore ρ(M^{-1} N) ≤ ρ < 1, and this completes the proof.
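The bound (2.3) is cheap to evaluate directly from M and N. A small NumPy sketch (the helper name `spectral_bound` is ours, not from [2]):

```python
import numpy as np

def spectral_bound(M, N):
    """Upper bound (2.3): rho(M^{-1} N) <= max_i rho_i for SDD M, where
    rho_i = sum_j |N_ij| / (|M_ii| - sum_{j != i} |M_ij|)."""
    num = np.abs(N).sum(axis=1)
    diag = np.abs(np.diag(M))
    den = diag - (np.abs(M).sum(axis=1) - diag)   # diagonal minus off-diagonal row sum
    return np.max(num / den)
```

For the Jacobi splitting M = D, N = E + F of an SDD matrix, this reduces to the classical row-sum bound max_i Σ_{j≠i} |a_{ij}| / |a_{ii}| < 1.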
The definition of the matrices M and N in Theorem 2.1 depends on the parameter m. Hence,
for later use, we denote ρ by ρ^{(m)}. By a little computation one can see that

ρ^{(1)} ≥ ρ^{(2)} ≥ \cdots ≥ ρ^{(n)} = 0.   (2.4)

From this relation we cannot deduce that

ρ(B_GJ^{(m+1)}) ≤ ρ(B_GJ^{(m)}),

or

ρ(B_GGS^{(m+1)}) ≤ ρ(B_GGS^{(m)}).

But Eq. (2.4) shows that we can choose a natural number m ≤ n such that ρ(B_GJ^{(m)}) and
ρ(B_GGS^{(m)}) are sufficiently small. To illustrate this, consider the matrix
 
A = \begin{pmatrix}
4 & 1 & 1 & 1 \\
1 & 3 & -1 & 0 \\
1 & 1 & -4 & 1 \\
-1 & -1 & -1 & 4
\end{pmatrix}.
Obviously, A is an SDD matrix. Here we have ρ(B_J) = 0.3644 < ρ(B_GJ^{(1)}) = 0.4048, where
B_J is the iteration matrix of the Jacobi method. On the other hand, we have

ρ(B_J) = 0.3644 > ρ(B_GJ^{(2)}) = 0.2655.

For the GGS method the result is very favorable, since

ρ(B_G) = 0.2603 > ρ(B_GGS^{(1)}) = 0.1111 > ρ(B_GGS^{(2)}) = 0.0968,

where B_G is the iteration matrix of the Gauss-Seidel method.
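These spectral radii can be checked numerically. The following NumPy sketch forms the splitting and the two iteration matrices for m = 0, 1, 2 (m = 0 recovers the classical Jacobi and Gauss-Seidel methods); the printed values can be compared against those quoted above:

```python
import numpy as np

A = np.array([[ 4.,  1.,  1.,  1.],
              [ 1.,  3., -1.,  0.],
              [ 1.,  1., -4.,  1.],
              [-1., -1., -1.,  4.]])

def gj_ggs_radii(A, m):
    """Spectral radii of the GJ and GGS iteration matrices for bandwidth parameter m."""
    n = A.shape[0]
    i, j = np.indices((n, n))
    Tm = np.where(np.abs(i - j) <= m, A, 0.0)   # banded part, |i - j| <= m
    R = A - Tm
    Em, Fm = -np.tril(R, -1), -np.triu(R, 1)    # A = Tm - Em - Fm
    B_gj = np.linalg.solve(Tm, Em + Fm)
    B_ggs = np.linalg.solve(Tm - Em, Fm)
    rho = lambda B: np.max(np.abs(np.linalg.eigvals(B)))
    return rho(B_gj), rho(B_ggs)

for m in (0, 1, 2):
    r_gj, r_ggs = gj_ggs_radii(A, m)
    print(f"m={m}: rho_GJ={r_gj:.4f}, rho_GGS={r_ggs:.4f}")
```

For m = n − 1 the band covers the whole matrix, so both iteration matrices vanish and the spectral radii are zero, consistent with ρ^{(n)} = 0 in (2.4).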


Now we study the new methods for another class of matrices. Let A = (a_{ij}) and
B = (b_{ij}) be two n × n matrices. Then, by definition, A ≤ B (A < B) if

a_{ij} ≤ b_{ij} (a_{ij} < b_{ij}) for 1 ≤ i, j ≤ n.

For n × n real matrices A, M, and N, A = M − N is said to be a regular splitting of the
matrix A if M is nonsingular with M^{-1} ≥ O and N ≥ O, where O is the n × n zero matrix. A
matrix A = (a_{ij}) is said to be an M-matrix (MP-matrix) if a_{ii} > 0 for i = 1, \cdots, n, a_{ij} ≤ 0
for i ≠ j, A is nonsingular, and A^{-1} ≥ O (A^{-1} > O).
Theorem 2.2. (Saad [6]) Let A = M − N be a regular splitting of the matrix A. Then
ρ(M −1 N ) < 1 if and only if A is nonsingular and A−1 ≥ O.
Theorem 2.3. (Saad [6]) Let A = (ai j ), B = (bi j ) be two matrices such that A ≤ B and
bi j ≤ 0 for all i 6= j. Then if A is an M-matrix, so is the matrix B.
Theorem 2.4. Let A be an M-matrix. Then for a given natural number m ≤ n, both the GJ
and GGS methods are convergent for any initial guess x_0.
Proof. Let M_m = T_m and N_m = E_m + F_m in the GJ method, and M_m = T_m − E_m and
N_m = F_m in the GGS method. Obviously, in both cases we have A ≤ M_m. Hence by
Theorem 2.3, we conclude that the matrix M_m is an M-matrix. On the other hand, we have
N_m ≥ O. Therefore, A = M_m − N_m is a regular splitting of the matrix A. Keeping in mind
that A^{-1} ≥ O, Theorem 2.2 yields ρ(B_GJ^{(m)}) < 1 and ρ(B_GGS^{(m)}) < 1.

Theorem 2.5. (Varga [8]) Let A = M_1 − N_1 = M_2 − N_2 be two regular splittings of A, where
A^{-1} > O. If N_2 ≥ N_1 ≥ O and neither N_1 nor N_2 − N_1 is the null matrix, then

0 < ρ(M_1^{-1} N_1) < ρ(M_2^{-1} N_2) < 1.

For MP-matrices, the next theorem shows that a larger m results in a smaller spectral radius
of the iteration matrix of the GJ and GGS iterations.

Theorem 2.6. Let A be an MP-matrix, let p and q be two natural numbers such that 0 ≤ p <
q ≤ n, and for a given natural number m ≤ n let M_m and N_m be the matrices introduced in the
proof of Theorem 2.4 for the GJ and GGS methods. Moreover, let neither N_p nor N_p − N_q be the
null matrix. Then

ρ(B_GJ^{(q)}) < ρ(B_GJ^{(p)}),   ρ(B_GGS^{(q)}) < ρ(B_GGS^{(p)}).

Table 3.1: Numerical results for g(x, y) = exp(xy). Timings are in seconds.

nx Jacobi GJ Gauss-Seidel GGS


20 1169(0.11) 526(0.08) 613(0.06) 60(0.02)
30 2511(0.69) 1088(0.31) 1318(0.25) 63(0.02)
40 4335(1.45) 1825(1.02) 2227(0.77) 65(0.05)

Table 3.2: Numerical results for g(x, y) = x + y. Timings are in seconds.

nx Jacobi GJ Gauss-Seidel GGS


20 1184(0.11) 533(0.06) 621(0.06) 60(0.02)
30 2544(0.56) 1102(0.33) 1136(0.27) 63(0.03)
40 4392(1.47) 1849(1.02) 2307(0.88) 65(0.05)

Table 3.3: Numerical results for g(x, y) = 0. Timings are in seconds.

nx Jacobi GJ Gauss-Seidel GGS


20 1236(0.14) 556(0.08) 649(0.08) 60(0.02)
30 2658(0.53) 1150(0.34) 1395(0.28) 63(0.03)
40 4591(1.56) 1931(1.08) 2412(0.95) 65(0.05)

Table 3.4: Numerical results for g(x, y) = −exp(4xy). Timings are in seconds.

nx Jacobi GJ Gauss-Seidel GGS


80 † 7869(18.5) † 68(0.16)
90 † 9718(29.32) † 68(0.20)
100 † † † 68(0.27)

Proof. By Theorem 2.4, A = M_p − N_p = M_q − N_q are two regular splittings of
the matrix A. It can easily be seen that O ≤ N_q ≤ N_p. Since A is an MP-matrix, A^{-1} > O.
Therefore all the hypotheses of Theorem 2.5 are satisfied, and the desired result
follows.

3. Numerical examples

All the numerical experiments presented in this section were computed in double pre-
cision with some MATLAB codes on a personal computer Pentium 4 - 256 MHz. For the
numerical experiments we consider the equation

−∆u + g(x, y)u = f (x, y), (x, y) ∈ Ω = (0, 1) × (0, 1). (3.1)

Discretizing Eq. (3.1) on an n_x × n_y grid of interior nodes, using second-order centered
differences for the Laplacian, gives a linear system of order n = n_x × n_y in the unknowns
u_{ij} = u(ih, jh) (1 ≤ i ≤ n_x, 1 ≤ j ≤ n_y):

−u_{i−1,j} − u_{i,j−1} + (4 + h^2 g(ih, jh)) u_{ij} − u_{i+1,j} − u_{i,j+1} = h^2 f(ih, jh).

The boundary conditions are taken so that the exact solution of the system is x =
[1, \cdots, 1]^T. Let n_x = n_y. We consider the linear systems arising from this kind of dis-
cretization for the three functions g(x, y) = exp(xy), g(x, y) = x + y, and g(x, y) = 0.
It can easily be verified that the coefficient matrices of these systems are M-matrices (see
[3, 5] for more details). For each function we give the numerical results of the methods
described in Section 2 for n_x = 20, 30, 40. The stopping criterion ‖x_{k+1} − x_k‖_2 < 10^{-7}
was used, and the initial guess was taken to be the zero vector. For the GJ and GGS methods
we let m = 1; hence T_m is a tridiagonal matrix. In the implementation of the GJ and GGS
methods we used the LU factorizations of T_m and T_m − E_m, respectively. Numerical results
are given in Tables 3.1, 3.2, and 3.3. Each table reports the number of iterations of each
method and, in parentheses, the CPU time to convergence. We also give the numerical
results for the function g(x, y) = −exp(4xy) in Table 3.4 for n_x = 80, 90, 100.
In this table a (†) indicates that the method had not converged after 10000 iterations. As the
numerical results show, the GJ and GGS methods are more effective than the Jacobi and
Gauss-Seidel methods, respectively.
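The experiment above can be reproduced in outline. The following NumPy sketch discretizes (3.1) on a small interior grid and runs GGS with m = 1; a dense triangular-plus-band solve stands in for the LU factorization of T_m − E_m that the paper uses, and the grid size and function names are illustrative:

```python
import numpy as np

def poisson_system(nx, g):
    """Centered-difference discretization of -Laplace(u) + g(x,y) u = f
    on an nx-by-nx interior grid, with f chosen so the exact solution
    is the all-ones vector."""
    h = 1.0 / (nx + 1)
    n = nx * nx
    A = np.zeros((n, n))
    for i in range(nx):
        for j in range(nx):
            k = i * nx + j
            A[k, k] = 4.0 + h * h * g((i + 1) * h, (j + 1) * h)
            if i > 0:      A[k, k - nx] = -1.0
            if i < nx - 1: A[k, k + nx] = -1.0
            if j > 0:      A[k, k - 1] = -1.0
            if j < nx - 1: A[k, k + 1] = -1.0
    b = A @ np.ones(n)
    return A, b

def ggs(A, b, m=1, tol=1e-7, maxiter=10_000):
    """GGS iteration (2.2): x_{k+1} = (T_m - E_m)^{-1}(F_m x_k + b)."""
    n = A.shape[0]
    i, j = np.indices((n, n))
    Tm = np.where(np.abs(i - j) <= m, A, 0.0)
    Fm = -np.triu(A - Tm, 1)
    M = Tm + np.tril(A - Tm, -1)          # M = T_m - E_m
    x = np.zeros(n)
    for k in range(maxiter):
        x_new = np.linalg.solve(M, Fm @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxiter
```

With n_x = 20 and the same stopping criterion this setup roughly mirrors the GGS column of Table 3.1; here the coefficient matrix is pentadiagonal, so F_m contains a single nonzero diagonal, as noted in Section 4.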

4. Conclusion

In this paper, we have proposed a generalization of the Jacobi and Gauss-Seidel meth-
ods, called GJ and GGS respectively, and studied their convergence properties. In the de-
composition of the coefficient matrix a banded matrix T_m of bandwidth 2m + 1 is cho-
sen. The matrix T_m is chosen such that the computation of w = T_m^{-1} y (in the GJ method) and
w = (T_m − E_m)^{-1} y (in the GGS method) can be done easily for any vector y. To this end one
may use the LU factorizations of T_m and T_m − E_m. In practice m is chosen very small, e.g.,
m = 1, 2. For m = 1, T_m is a tridiagonal matrix and its LU factorization is easily
obtained. The new methods are suitable for sparse matrices such as those arising from
discretizations of PDEs. These matrices are often pentadiagonal; in this case, for m = 1,
T_m is tridiagonal and each of the matrices E_m and F_m contains only one nonzero diagonal,
so only a few additional computations are needed compared with the Jacobi and Gauss-Seidel
methods (as in this paper). Numerical results show that the new methods are more effective
than the conventional Jacobi and Gauss-Seidel methods.

Acknowledgment. The author would like to thank the anonymous referee for his/her
comments, which substantially improved the quality of this paper.

References

[1] Datta B N. Numerical Linear Algebra and Applications. Brooks/Cole Publishing Company,
1995.
[2] Jin X Q, Wei Y M, Tam H S. Preconditioning technique for symmetric M -matrices. CALCOLO,
2005, 42: 105-113.
[3] Kerayechian A, Salkuyeh D K. On the existence, uniqueness and approximation of a class of
elliptic problems. Int. J. Appl. Math., 2002, 11: 49-60.

[4] Li W. A note on the preconditioned Gauss-Seidel (GS) method for linear systems. J. Comput.
Appl. Math., 2005, 182: 81-90.
[5] Meyer C D. Matrix Analysis and Applied Linear Algebra. SIAM, 2000.
[6] Saad Y. Iterative Methods for Sparse Linear Systems. PWS Press, New York, 1995.
[7] Saad Y, Schultz M H. GMRES: a generalized minimal residual algorithm for solving nonsym-
metric linear systems. SIAM J. Sci. Stat. Comput., 1986, 7: 856-869.
[8] Varga R S. Matrix Iterative Analysis. Prentice Hall, Englewood Cliffs, NJ, 1962.
[9] van der Vorst H A. Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the
solution of nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 1992, 13: 631-644.
