Generalized Jacobi and Gauss-Seidel Methods for Solving Linear System of Equations

D. Khojasteh Salkuyeh
Abstract
The Jacobi and Gauss-Seidel algorithms are among the stationary iterative methods for solving linear systems of equations. They are now mostly used as preconditioners for the popular iterative solvers. In this paper a generalization of these methods is proposed and their convergence properties are studied. Some numerical experiments are given to show the efficiency of the new methods.
1. Introduction
Consider the linear system of equations

Ax = b,    (1.1)

where the matrix A ∈ R^{n×n} and x, b ∈ R^n. Let A be a nonsingular matrix with nonzero diagonal entries and
A = D − E − F,
where D is the diagonal of A, −E its strict lower part, and −F its strict upper part. Then
the Jacobi and the Gauss-Seidel methods for solving Eq. (1.1) are defined as

x_{k+1} = D^{-1}(E + F)x_k + D^{-1}b

and

x_{k+1} = (D - E)^{-1}F x_k + (D - E)^{-1}b,

respectively. There are many iterative methods, such as the GMRES [7] and Bi-CGSTAB [9] algorithms, for solving Eq. (1.1) which are more efficient than the Jacobi and Gauss-Seidel methods. However, when the Jacobi and Gauss-Seidel methods are combined with these more efficient solvers, for example as preconditioners, they can be quite successful; see, for example, [4, 6]. It has
been proved that if A is strictly diagonally dominant (SDD) or irreducibly diagonally dominant, then the associated Jacobi and Gauss-Seidel iterations converge for any initial guess x_0 [6]. If A is a symmetric positive definite (SPD) matrix, then the Gauss-Seidel method also converges for any x_0 [1]. In this paper we generalize these two methods and
study their convergence properties.
This paper is organized as follows. In Section 2, we introduce the new algorithms and establish their convergence properties. Section 3 is devoted to numerical experiments. Some concluding remarks are given in Section 4.
2. The generalized Jacobi and Gauss-Seidel methods

We consider the decomposition A = T_m - E_m - F_m, where T_m is the banded matrix of bandwidth 2m + 1 formed by the diagonal of A together with its first m sub- and superdiagonals, and -E_m and -F_m are the strict lower and upper parts of the matrix A - T_m, respectively. In other words, the matrices T_m, E_m and F_m are defined entrywise as

(T_m)_{ij} = a_{ij} if |i - j| ≤ m, and (T_m)_{ij} = 0 otherwise,
(E_m)_{ij} = -a_{ij} if i - j > m, and (E_m)_{ij} = 0 otherwise,
(F_m)_{ij} = -a_{ij} if j - i > m, and (F_m)_{ij} = 0 otherwise.

A small computational sketch of this splitting is given below.
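For concreteness, the following sketch shows one way to form T_m, E_m and F_m from a sparse matrix A. It is an illustration only, not the author's code (the paper's experiments used MATLAB), and the helper name `split` is ours:

```python
import scipy.sparse as sp

def split(A, m):
    """Form the splitting A = T_m - E_m - F_m of a sparse matrix A."""
    E = -sp.tril(A, -(m + 1))   # -E_m: entries of A strictly below the band
    F = -sp.triu(A, m + 1)      # -F_m: entries of A strictly above the band
    T = A + E + F               # T_m: banded part of A, bandwidth 2m + 1
    return T, E, F
```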
Then we define the generalized Jacobi (GJ) and generalized Gauss-Seidel (GGS) iterative methods as

x_{k+1} = T_m^{-1}(E_m + F_m)x_k + T_m^{-1}b,    (2.1)

and

x_{k+1} = (T_m - E_m)^{-1}F_m x_k + (T_m - E_m)^{-1}b,    (2.2)

respectively. Let B_GJ^{(m)} = T_m^{-1}(E_m + F_m) and B_GGS^{(m)} = (T_m - E_m)^{-1}F_m be the iteration matrices of the GJ and GGS methods, respectively. Note that if m = 0 then (2.1) and (2.2) result in
the Jacobi and Gauss-Seidel methods. We now study the convergence of the new methods.
To do so, we introduce the following definition.
Definition 2.1. An n × n matrix A = (a_{ij}) is said to be strictly diagonally dominant (SDD) if

|a_{ii}| > \sum_{j=1, j≠i}^{n} |a_{ij}|,    i = 1, · · · , n.
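As a quick illustration of the definition, the following sketch (function name ours, not from the paper) tests a matrix for strict diagonal dominance, using the fact that |a_{ii}| > \sum_{j≠i} |a_{ij}| is equivalent to 2|a_{ii}| > \sum_j |a_{ij}|:

```python
import numpy as np

def is_sdd(A):
    """Check whether A is strictly diagonally dominant (row-wise)."""
    B = np.abs(np.asarray(A, dtype=float))
    # |a_ii| > sum_{j != i} |a_ij|  <=>  2|a_ii| > sum_j |a_ij|
    return bool(np.all(2.0 * B.diagonal() > B.sum(axis=1)))
```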
Theorem 2.1. Let A be an SDD matrix. Then for any natural number m ≤ n the GJ and GGS
methods are convergent for any initial guess x 0 .
Proof. Let M = (M_{ij}) and N = (N_{ij}) be n × n matrices with M being SDD. Then (see [2], Lemma 1)

ρ(M^{-1}N) ≤ ρ = max_i ρ_i,    (2.3)

where

ρ_i = ( \sum_{j=1}^{n} |N_{ij}| ) / ( |M_{ii}| - \sum_{j=1, j≠i}^{n} |M_{ij}| ).
In general, however, it cannot be concluded that

ρ(B_GJ^{(m+1)}) ≤ ρ(B_GJ^{(m)})

or

ρ(B_GGS^{(m+1)}) ≤ ρ(B_GGS^{(m)}).

But Eq. (2.4) shows that we can choose a natural number m ≤ n such that ρ(B_GJ^{(m)}) and ρ(B_GGS^{(m)}) are sufficiently small. To illustrate this, consider the matrix
A = [  4   1   1   1
       1   3  -1   0
       1   1  -4   1
      -1  -1  -1   4 ].
Obviously, A is an SDD matrix. Here we have ρ(B_J) = 0.3644 < ρ(B_GJ^{(1)}) = 0.4048, where B_J is the iteration matrix of the Jacobi method. On the other hand we have

ρ(B_J) = 0.3644 > ρ(B_GJ^{(2)}) = 0.2655.
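The following sketch reproduces this computation, reusing the `split` helper from the sketch in the beginning of this section; the quoted radii are the paper's values, and the code itself is our illustration:

```python
import numpy as np
import scipy.sparse as sp

A = sp.csr_matrix(np.array([[ 4.,  1.,  1.,  1.],
                            [ 1.,  3., -1.,  0.],
                            [ 1.,  1., -4.,  1.],
                            [-1., -1., -1.,  4.]]))

for m in (0, 1, 2):        # m = 0 gives the classical Jacobi iteration matrix B_J
    T, E, F = split(A, m)
    B = np.linalg.solve(T.toarray(), (E + F).toarray())  # B_GJ^(m) = T_m^{-1}(E_m + F_m)
    print(m, max(abs(np.linalg.eigvals(B))))             # spectral radius of B_GJ^(m)
```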
For MP-matrices, the next theorem shows that a larger m results in a smaller spectral radius of the iteration matrices of the GJ and GGS iterations.
Theorem 2.6. Let A be an MP-matrix, let p and q be two natural numbers such that 0 ≤ p < q ≤ n, and for a given natural number m ≤ n let M_m and N_m be the matrices introduced in the proof of Theorem 2.4 for the GJ and GGS methods. Moreover, let neither N_p nor N_p - N_q be the null matrix. Then

ρ(B_GJ^{(q)}) < ρ(B_GJ^{(p)}),    ρ(B_GGS^{(q)}) < ρ(B_GGS^{(p)}).
3. Numerical examples
All the numerical experiments presented in this section were computed in double precision with some MATLAB codes on a Pentium 4 (256 MHz) personal computer. For the
numerical experiments we consider the equation
−∆u + g(x, y)u = f (x, y), (x, y) ∈ Ω = (0, 1) × (0, 1). (3.1)
Discretizing Eq. (3.1) on an n_x × n_y grid, using second-order centered differences for the Laplacian, gives a linear system of equations of order n = n_x × n_y in the unknowns u_{ij} = u(ih, jh) (1 ≤ i ≤ n_x, 1 ≤ j ≤ n_y):

-u_{i-1,j} - u_{i,j-1} + (4 + h^2 g(ih, jh))u_{ij} - u_{i+1,j} - u_{i,j+1} = h^2 f(ih, jh).
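A minimal assembly sketch of this discretization is given below. It assumes h = 1/(n_x + 1) and homogeneous Dirichlet boundary data, and the names `assemble`, `g` and `f` are illustrative; the paper's own codes were written in MATLAB and are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def assemble(nx, g, f):
    """Sketch: five-point discretization of -Laplace(u) + g u = f on (0,1)^2."""
    h = 1.0 / (nx + 1)                       # assumed grid spacing
    pts = h * np.arange(1, nx + 1)
    X, Y = np.meshgrid(pts, pts, indexing="ij")
    S = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))
    I = sp.identity(nx)
    # 2-D five-point Laplacian plus the h^2 g(ih, jh) term on the diagonal
    A = sp.kron(I, S) + sp.kron(S, I) + sp.diags(h**2 * g(X, Y).ravel())
    b = h**2 * f(X, Y).ravel()               # boundary contributions omitted (assumed zero)
    return A.tocsr(), b
```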
The boundary conditions are taken so that the exact solution of the system is x = [1, · · · , 1]^T. Let n_x = n_y. We consider the linear systems arising from this kind of discretization for the three functions g(x, y) = exp(xy), g(x, y) = x + y and g(x, y) = 0. It can be easily verified that the coefficient matrices of these systems are M-matrices (see [3, 5] for more details). For each function we give the numerical results of the methods described in Section 2 for n_x = 20, 30, 40. The stopping criterion ||x_{k+1} - x_k||_2 < 10^{-7} was used, and the initial guess was taken to be the zero vector. For the GJ and GGS methods we let m = 1; hence T_m is a tridiagonal matrix. In the implementation of the GJ and GGS methods we used the LU factorization of T_m and T_m - E_m, respectively; a sketch of this implementation is given at the end of this section. Numerical results are given in Tables 3.1, 3.2, and 3.3. In each table the number of iterations of the method and the CPU time in seconds (in parentheses) for convergence are given. We also give the numerical results related to the function g(x, y) = - exp(4xy) in Table 3.4 for n_x = 80, 90, 100. In this table a dagger (†) indicates that the method did not converge within 10000 iterations. As the numerical results show, the GJ and GGS methods are more effective than the Jacobi and Gauss-Seidel methods, respectively.
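The sketch below shows the iteration loop described above for the GJ and GGS methods, reusing the `split` helper from Section 2. The function name and keyword arguments are ours, chosen to match the settings of this section (m = 1, zero initial guess, tolerance 10^{-7}, at most 10000 iterations); the LU factorization of the splitting matrix is computed once and reused in every iteration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def stationary_solve(A, b, m=1, method="GGS", tol=1e-7, maxit=10000):
    """Sketch of the GJ/GGS iterations; split() is the helper from Section 2."""
    T, E, F = split(A, m)
    if method == "GJ":
        M, N = T, E + F            # x_{k+1} = T_m^{-1}((E_m + F_m) x_k + b)
    else:
        M, N = T - E, F            # x_{k+1} = (T_m - E_m)^{-1}(F_m x_k + b)
    lu = splu(sp.csc_matrix(M))    # factor once, reuse in every iteration
    x = np.zeros(A.shape[0])
    for k in range(1, maxit + 1):
        x_new = lu.solve(N @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k        # converged in k iterations
        x = x_new
    return x, maxit                # no convergence within maxit (the dagger case)

# Example use with the assembly sketch above:
# A, b = assemble(20, lambda x, y: np.exp(x * y), lambda x, y: np.exp(x * y))
# x, iters = stationary_solve(A, b, m=1, method="GGS")
```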
4. Conclusion
In this paper, we have proposed a generalization of the Jacobi and Gauss-Seidel methods, called GJ and GGS respectively, and studied their convergence properties. In the decomposition of the coefficient matrix a banded matrix T_m of bandwidth 2m + 1 is chosen. The matrix T_m is chosen such that the computation of w = T_m^{-1} y (in the GJ method) and w = (T_m - E_m)^{-1} y (in the GGS method) can be done easily for any vector y. To do so one may use the LU factorization of T_m and T_m - E_m. In practice m is chosen very small, e.g., m = 1, 2. For m = 1, T_m is a tridiagonal matrix and its LU factorization can be easily obtained. The new methods are suitable for sparse matrices such as those arising from the discretization of PDEs. These kinds of matrices are usually pentadiagonal. In this case, for m = 1 (as used in this paper), T_m is tridiagonal, each of the matrices E_m and F_m contains only one nonzero diagonal, and only a few additional computations are needed in comparison with the Jacobi and Gauss-Seidel methods. Numerical results show that the new methods are more effective than the conventional Jacobi and Gauss-Seidel methods.
Acknowledgment

The author would like to thank the anonymous referee for his/her comments, which substantially improved the quality of this paper.
References
[1] Datta B N. Numerical Linear Algebra and Applications. Brooks/Cole Publishing Company,
1995.
[2] Jin X Q, Wei Y M, Tam H S. Preconditioning technique for symmetric M -matrices. CALCOLO,
2005, 42: 105-113.
[3] Kerayechian A, Salkuyeh D K. On the existence, uniqueness and approximation of a class of
elliptic problems. Int. J. Appl. Math., 2002, 11: 49-60.
[4] Li W. A note on the preconditioned Gauss-Seidel (GS) method for linear systems. J. Comput. Appl. Math., 2005, 182: 81-90.
[5] Meyer C D. Matrix Analysis and Applied Linear Algebra. SIAM, 2000.
[6] Saad Y. Iterative Methods for Sparse Linear Systems. PWS Press, New York, 1995.
[7] Saad Y, Schultz M H. GMRES: a generalized minimal residual algorithm for solving nonsym-
metric linear systems. SIAM J. Sci. Stat. Comput., 1986, 7: 856-869.
[8] Varga R S. Matrix Iterative Analysis. Prentice Hall, Englewood Cliffs, NJ, 1962.
[9] van der Vorst H A. Bi-CGSTAB: a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems. SIAM J. Sci. Stat. Comput., 1992, 13: 631-644.