Multigrid Tutorial
By
William L. Briggs
Presented by
Van Emden Henson
Center for Applied Scientific Computing
Lawrence Livermore National Laboratory
This work was performed, in part, under the auspices of the United States Department of Energy by University
of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48.
Outline
Model Problems
Basic Iterative Schemes
Convergence; experiments
Convergence; analysis
Development of Multigrid
Coarse-grid correction
Nested Iteration
Restriction and Interpolation
Standard cycles: MV, FMG
Performance
Implementation
storage and computation costs
Performance (cont.)
Convergence
Higher dimensions
Experiments
Some Theory
The spectral picture
The subspace picture
The whole picture!
Complications
Anisotropic operators and grids
Discontinuous or anisotropic
coefficients
Nonlinear Problems: FAS
Suggested Reading
Brandt, "Multi-level Adaptive Solutions to Boundary Value
Problems," Math. Comp., 31, 1977, pp. 333-390.
Brandt, "1984 Guide to Multigrid Development, with
Applications to Computational Fluid Dynamics."
Briggs, "A Multigrid Tutorial," SIAM, 1987.
Briggs, Henson, and McCormick, "A Multigrid Tutorial," 2nd
Edition, SIAM, 2000.
Hackbusch, "Multi-Grid Methods and Applications," 1985.
Hackbusch and Trottenberg, "Multigrid Methods,"
Springer-Verlag, 1982.
Stüben and Trottenberg, "Multigrid Methods," 1987.
Wesseling, "An Introduction to Multigrid Methods," Wiley,
1992.
Multilevel methods have been
developed for...
Elliptic PDEs
Purely algebraic problems, with no physical grid; for
example, network and geodetic survey problems.
Image reconstruction and tomography
Optimization (e.g., the travelling salesman and long
transportation problems)
Statistical mechanics, Ising spin models.
Quantum chromodynamics.
Quadrature and generalized FFTs.
Integral equations.
Model Problems
One-dimensional boundary value problem:
-u''(x) + σu(x) = f(x),   0 < x < 1,   σ > 0
u(0) = u(1) = 0
Grid: x_i = ih, i = 0, 1, ..., N, where h = 1/N.
Let v_i ≈ u(x_i) and f_i = f(x_i), for i = 0, 1, ..., N.
[Figure: the grid x_0, x_1, x_2, ..., x_i, ..., x_N, running from x = 0 to x = 1.]
We use Taylor series to derive
an approximation to u''(x)
We approximate the second derivative using Taylor series:
u(x_{i+1}) = u(x_i) + h u'(x_i) + (h²/2!) u''(x_i) + (h³/3!) u'''(x_i) + O(h⁴)
u(x_{i-1}) = u(x_i) - h u'(x_i) + (h²/2!) u''(x_i) - (h³/3!) u'''(x_i) + O(h⁴)
Summing and solving,
u''(x_i) = [ u(x_{i-1}) - 2u(x_i) + u(x_{i+1}) ] / h² + O(h²).
We approximate the equation
with a finite difference scheme
We approximate the BVP
-u''(x) + σu(x) = f(x),   0 < x < 1,   σ > 0,   u(0) = u(1) = 0
with the finite difference scheme:
( -v_{i-1} + 2v_i - v_{i+1} ) / h² + σv_i = f_i,   i = 1, ..., N-1,
v_0 = v_N = 0.
The discrete model problem
Letting v = ( v_1, v_2, ..., v_{N-1} )^T and f = ( f_1, f_2, ..., f_{N-1} )^T,
we obtain the matrix equation Av = f, where A is
(N-1) x (N-1), symmetric, positive definite, and

A = (1/h²) [ 2+σh²   -1                          ]
           [  -1    2+σh²   -1                   ]
           [          ·       ·       ·          ]
           [                 -1    2+σh²   -1    ]
           [                        -1    2+σh²  ]
Solution Methods
Direct
Gaussian elimination
Factorization
Iterative
Jacobi
Gauss-Seidel
Conjugate Gradient, etc.
Note: This simple model problem can be solved
very efficiently in several ways. Pretend it can't,
and that it is very hard, because it shares many
characteristics with some very hard problems.
A two-dimensional boundary value
problem
Consider the problem:
-u_xx - u_yy + σu = f( x, y ),   0 < x < 1,   0 < y < 1,   σ > 0
u = 0 on the boundary ( x = 0, x = 1, y = 0, y = 1 ).
The grid is given by:
( x_i, y_j ) = ( i h_x, j h_y ),   h_x = 1/M,   h_y = 1/N,
for 0 ≤ i ≤ M, 0 ≤ j ≤ N.
Discretizing the 2D problem
Let v_{i,j} ≈ u( x_i, y_j ) and f_{i,j} = f( x_i, y_j ). Again, using
2nd-order finite differences to approximate u_xx and u_yy,
we obtain the approximate equation for the unknown
u( x_i, y_j ), for i = 1, ..., M-1 and j = 1, ..., N-1:
( -v_{i-1,j} + 2v_{i,j} - v_{i+1,j} ) / h_x² + ( -v_{i,j-1} + 2v_{i,j} - v_{i,j+1} ) / h_y² + σv_{i,j} = f_{i,j},
with v_{i,j} = 0 for i = 0, i = M, j = 0, or j = N.
Ordering the unknowns (and also the vector f)
lexicographically by y-lines:
v = ( v_{1,1}, ..., v_{M-1,1}, v_{1,2}, ..., v_{M-1,2}, ..., v_{1,N-1}, ..., v_{M-1,N-1} )^T
Yields the linear system
We obtain a block-tridiagonal system Av = f:

[ A_1   -I_y                   ] [ v_1     ]   [ f_1     ]
[ -I_y   A_2   -I_y            ] [ v_2     ]   [ f_2     ]
[         ·      ·      ·      ] [  ...    ] = [  ...    ]
[               -I_y   A_{N-1} ] [ v_{N-1} ]   [ f_{N-1} ]

where I_y is a diagonal matrix with 1/h_y² on the diagonal, and
A_i = tridiag( -1/h_x²,   2/h_x² + 2/h_y² + σ,   -1/h_x² ).
Iterative Methods for Linear
Systems
Consider Au = f, where A is N x N, and let v be an
approximation to u.
Two important measures:
The Error: e = u - v, with norms
||e||_∞ = max_i | e_i |   and   ||e||_2 = ( Σ_{i=1}^N e_i² )^{1/2}
The Residual: r = f - Av, with ||r||_∞ and ||r||_2.
Residual correction
Since e = u - v and r = f - Av, we can write Au = f
as A( v + e ) = f,
which means that Ae = f - Av, which is the
Residual Equation:
Ae = r
Residual Correction:
u = v + e
Relaxation Schemes
Consider the 1D model problem
-u_{i-1} + 2u_i - u_{i+1} = h² f_i,   1 ≤ i ≤ N-1,   u_0 = u_N = 0.
Jacobi Method (simultaneous displacement): Solve
the i-th equation for v_i, holding all other variables
fixed:
v_i^{new} = (1/2)( v_{i-1}^{old} + v_{i+1}^{old} + h² f_i ),   1 ≤ i ≤ N-1.
In matrix form, the relaxation is
Let A = D - L - U, where D is the diagonal of A, and -L
and -U are the strictly lower and upper triangular parts of A.
Then Au = f becomes
( D - L - U ) u = f,   i.e.,   Du = ( L + U ) u + f,
so that u = D^{-1}( L + U ) u + D^{-1} f.
Let R_J = D^{-1}( L + U ); then the iteration is:
v^{new} = R_J v^{old} + D^{-1} f.
The iteration matrix and the
error
From the derivation, u = D^{-1}( L + U ) u + D^{-1} f,
i.e., u = R_J u + D^{-1} f.
The iteration is v^{new} = R_J v^{old} + D^{-1} f.
Subtracting,
u - v^{new} = ( R_J u + D^{-1} f ) - ( R_J v^{old} + D^{-1} f ),
or u - v^{new} = R_J ( u - v^{old} ),
hence e^{new} = R_J e^{old}.
Weighted Jacobi Relaxation
Consider the iteration:
v_i^{new} = ( 1 - ω ) v_i^{old} + (ω/2)( v_{i-1}^{old} + v_{i+1}^{old} + h² f_i ).
Letting A = D - L - U, the matrix form is:
v^{new} = [ ( 1 - ω ) I + ω D^{-1}( L + U ) ] v^{old} + ω (h²/2) f
        = R_ω v^{old} + ω (h²/2) f.
Note that R_ω = ( 1 - ω ) I + ω R_J.
It is easy to see that if e = u^{exact} - u^{approx},
then e^{new} = R_ω e^{old}.
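As a concrete illustration, here is a minimal NumPy sketch of the weighted Jacobi sweep above for the 1D model problem with σ = 0 (so that D = (2/h²)I); the function name and array layout are ours, not part of the tutorial.

```python
import numpy as np

def weighted_jacobi(v, f, h, omega=2.0/3.0, sweeps=1):
    """Weighted Jacobi for -u'' = f with boundary values v[0] = v[-1] = 0.

    Implements v_i <- (1-w) v_i + (w/2)(v_{i-1} + v_{i+1} + h^2 f_i),
    using only values from the previous sweep (simultaneous displacement).
    """
    for _ in range(sweeps):
        old = v.copy()
        v[1:-1] = ((1.0 - omega) * old[1:-1]
                   + 0.5 * omega * (old[:-2] + old[2:] + h**2 * f[1:-1]))
    return v
```

Setting omega = 1 recovers the unweighted Jacobi sweep of the previous slides.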
Gauss-Seidel Relaxation (1D)
Solve equation i for u_i and update immediately.
Equivalently: set each component of r to zero.
Component form: for i = 1, 2, ..., N-1, set
v_i ← (1/2)( v_{i-1} + v_{i+1} + h² f_i ).
Matrix form: letting A = D - L - U, write
( D - L ) u = U u + f,
so u = ( D - L )^{-1} U u + ( D - L )^{-1} f.
Let R_G = ( D - L )^{-1} U.
Then iterate v^{new} = R_G v^{old} + ( D - L )^{-1} f.
Error propagation: e^{new} = R_G e^{old}.
Red-Black Gauss-Seidel
Update the even (red) points:
v_{2i} ← (1/2)( v_{2i-1} + v_{2i+1} + h² f_{2i} ).
Update the odd (black) points:
v_{2i+1} ← (1/2)( v_{2i} + v_{2i+2} + h² f_{2i+1} ).
[Figure: the grid from x_0 to x_N with alternating red and black points.]
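In the same setting, a sketch of the red-black sweep (again σ = 0, and the helper name is ours):

```python
import numpy as np

def red_black_gauss_seidel(v, f, h, sweeps=1):
    """Red-black Gauss-Seidel for -u'' = f with v[0] = v[-1] = 0.

    Red (even-index) points depend only on black (odd-index) neighbors
    and vice versa, so each half-sweep can be updated all at once.
    """
    n = len(v) - 1  # v holds N+1 values, boundaries included
    for _ in range(sweeps):
        for start in (2, 1):  # even interior points first, then odd
            i = np.arange(start, n, 2)
            v[i] = 0.5 * (v[i - 1] + v[i + 1] + h**2 * f[i])
    return v
```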
Numerical Experiments
Solve Au = 0, i.e., -u_{i-1} + 2u_i - u_{i+1} = 0.
Use Fourier modes as initial iterates, with N = 64:
( v_k )_i = sin( ikπ/N ),   0 ≤ i ≤ N,   1 ≤ k ≤ N-1
[Figure: the modes k = 1, k = 3, and k = 6, plotted component by component.]
Error reduction stalls
Weighted Jacobi (ω = 2/3) on the 1D problem.
Initial guess:
v_j = (1/3) [ sin( jπ/N ) + sin( 6jπ/N ) + sin( 32jπ/N ) ]
[Figure: the error norm ||e||_∞ plotted against iteration number:
rapid decrease at first, then stalling.]
Convergence rates differ for
different error components
Error, ||e||_∞, in weighted Jacobi on Au = 0 for
100 iterations, using initial guesses v_1, v_3, and v_6.
[Figure: error histories for k = 1, k = 3, and k = 6.]
Analysis of stationary iterations
Let v^{new} = R v^{old} + g. The exact solution is
unchanged by the iteration, i.e., u = R u + g.
Subtracting, we see that e^{new} = R e^{old}.
Letting e^{(0)} be the initial error and e^{(i)} be the error
after the i-th iteration, we see that after n
iterations we have
e^{(n)} = R^n e^{(0)}.
A quick review of eigenvectors
and eigenvalues
The number λ is an eigenvalue of a matrix B, and w
its associated eigenvector, if Bw = λw.
The eigenvalues and eigenvectors are characteristics
of a given matrix.
Eigenvectors belonging to distinct eigenvalues are linearly
independent; if an N x N matrix has a complete set of N
eigenvectors, they form a basis, i.e., for any v, there
exist unique c_k such that:
v = Σ_{k=1}^{N} c_k w_k
Fundamental theorem of
iteration
R is convergent (that is, R^n → 0 as n → ∞) if and
only if ρ( R ) < 1, where
ρ( R ) = max{ | λ_1 |, | λ_2 |, ..., | λ_N | };
therefore, for any initial vector v^{(0)}, we see that
e^{(n)} → 0 as n → ∞ if and only if ρ( R ) < 1.
ρ( R ) < 1 assures the convergence of the iteration
given by R, and ρ( R ) is called the convergence
factor for the iteration.
Convergence Rate
How many iterations are needed to reduce the
initial error by a factor of 10^{-d}?
We need || e^{(M)} || / || e^{(0)} || ≈ ( ρ( R ) )^M ≤ 10^{-d}.
So, we have M ≥ d / log10( 1/ρ( R ) ).
The convergence rate is given by:
rate = log10( 1/ρ( R ) ) = -log10( ρ( R ) )   [ digits per iteration ]
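A tiny sketch of this bound; the convergence factors below are illustrative values of our choosing, not numbers from the tutorial.

```python
import math

def iterations_needed(rho, digits):
    """Smallest M with rho**M <= 10**(-digits), from M >= d / log10(1/rho)."""
    return math.ceil(digits / math.log10(1.0 / rho))

print(iterations_needed(0.99, 6))  # slowly converging iteration: 1375
print(iterations_needed(0.10, 6))  # multigrid-like factor: 6
```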
Convergence analysis for
weighted Jacobi on 1D model
R_ω = ( 1 - ω ) I + ω D^{-1}( L + U ) = I - ω D^{-1} A
For the model problem D = (2/h²) I, so
R_ω = I - (ω/2) tridiag( -1, 2, -1 ) = I - (ω h²/2) A.
For the 1D model problem, the eigenvectors of the weighted
Jacobi iteration matrix and the eigenvectors of the matrix A are
the same! The eigenvalues are related as well:
λ( R_ω ) = 1 - (ω h²/2) λ( A ).
Good exercise: Find the
eigenvalues & eigenvectors of A
Show that the eigenvectors of A are Fourier
modes!
λ_k( A ) = (4/h²) sin²( kπ/2N ),   w_{k,j} = sin( jkπ/N ),   1 ≤ k ≤ N-1
[Figure: several of the Fourier modes w_k plotted on the grid.]
Eigenvectors of R_ω and A are the
same; the eigenvalues are related
λ_k( R_ω ) = 1 - 2ω sin²( kπ/2N )
Expand the initial error in terms of the eigenvectors:
e^{(0)} = Σ_{k=1}^{N-1} c_k w_k
After M iterations,
R_ω^M e^{(0)} = Σ_{k=1}^{N-1} c_k R_ω^M w_k = Σ_{k=1}^{N-1} c_k λ_k^M w_k
The k-th mode of the error is reduced by λ_k at each
iteration.
Relaxation suppresses
eigenmodes unevenly
[Figure: eigenvalues λ_k( R_ω ) versus wavenumber k for
ω = 1/3, 1/2, 2/3, and 1.]
Look carefully at
λ_k( R_ω ) = 1 - 2ω sin²( kπ/2N )
Note that if 0 < ω ≤ 1, then | λ_k( R_ω ) | < 1 for k = 1, 2, ..., N-1.
For k = 1,
λ_1 = 1 - 2ω sin²( π/2N ) = 1 - 2ω sin²( πh/2 ) = 1 - O( h² ).
Low frequencies are undamped
[Figure: eigenvalues λ_k( R_ω ) versus wavenumber k for
ω = 1/3, 1/2, 2/3, and 1.]
Notice that no value of ω will damp out the long
(i.e., low-frequency) waves.
Choosing ω so that λ_{N/2}( R_ω ) = -λ_N( R_ω ) gives ω = 2/3.
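A quick numerical check of λ_k( R_ω ) = 1 - 2ω sin²( kπ/2N ), added here as a sketch (N and the list of ω values are illustrative):

```python
import numpy as np

N = 64
k = np.arange(1, N)  # wavenumbers 1, ..., N-1
for omega in (1/3, 1/2, 2/3, 1.0):
    lam = 1.0 - 2.0 * omega * np.sin(k * np.pi / (2 * N))**2
    oscillatory = np.abs(lam[N // 2 - 1:]).max()  # upper half of the spectrum
    print(f"omega = {omega:.3f}: max |lambda_k|, k >= N/2: {oscillatory:.3f}")
# omega = 2/3 yields 1/3, the smoothing factor discussed next.
```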
The Smoothing factor
The smoothing factor is the largest absolute value
among the eigenvalues in the upper half of the
spectrum of the iteration matrix:
smoothing factor = max | λ_k( R ) |,   N/2 ≤ k ≤ N-1.
For R_ω with ω = 2/3, the smoothing factor is 1/3, since
λ_{N/2} = 1/3 and λ_{N-1} ≈ -1/3.
Convergence of Jacobi on Au=0
Jacobi method on Au = 0 with N = 64. Number of
iterations required to reduce ||e||_∞ below 0.01.
Initial guesses: v_{kj} = sin( jkπ/N ).
[Figure: iterations versus wavenumber k, for unweighted
Jacobi (left) and weighted Jacobi (right).]
Weighted Jacobi Relaxation
Smooths the Error
Initial error:
v_j = (1/2) sin( 2jπ/N ) + (1/2) sin( 16jπ/N ) + (1/2) sin( 32jπ/N )
[Figure: the initial error, and the error after 35 iteration sweeps;
the oscillatory components are gone and a smooth error remains.]
Many relaxation schemes have the smoothing property, where
oscillatory modes of the error are eliminated effectively, but
smooth modes are damped very slowly.
Other relaxation schemes may
be analyzed similarly
Gauss-Seidel relaxation applied to the 3-point
difference matrix A (1D model problem):
R_G = ( D - L )^{-1} U
Good exercise: Show that
λ_k( R_G ) = cos²( kπ/N ),   w_{k,j} = [ cos( kπ/N ) ]^j sin( jkπ/N )
[Figure: eigenvalues λ_k( R_G ) versus wavenumber k.]
Convergence of Gauss-Seidel on
Au=0
Eigenvectors of R_G are not the same as those of A.
Gauss-Seidel mixes the modes of A.
Gauss-Seidel on Au = 0 with N = 64. Number of
iterations required to reduce ||e||_∞ below 0.01.
Initial guesses: the modes of A, v_{kj} = sin( jkπ/N ).
[Figure: iterations versus wavenumber k.]
First observation toward
multigrid
Many relaxation schemes have the smoothing
property, where oscillatory modes of the error
are eliminated effectively, but smooth modes
are damped very slowly.
This might seem like a limitation, but by using
coarse grids we can use the smoothing property to
good advantage.
Why use coarse grids??
[Figure: a fine grid x_0, ..., x_N and a coarse grid with spacing 2h.]
Reason #1 for using coarse
grids: Nested Iteration
Coarse grids can be used to compute an improved
initial guess for the fine-grid relaxation. This is
advantageous because:
Relaxation on the coarse-grid is much cheaper (1/2 as
many points in 1D, 1/4 in 2D, 1/8 in 3D)
Relaxation on the coarse grid has a marginally better
convergence rate: for example, 1 - O( 4h² ) instead of 1 - O( h² ).
Idea! Nested Iteration
Relax on Au = f on Ω^{4h} to obtain the initial guess v^{2h}.
Relax on Au = f on Ω^{2h} to obtain the initial guess v^h.
Relax on Au = f on Ω^h to obtain the final solution???
But, what is Au = f on Ω^{2h} and Ω^{4h}?
What if the error still has smooth components
when we get to the fine grid Ω^h?
Reason #2 for using a coarse
grid: smooth error is (relatively)
more oscillatory there!
A smooth function: [Figure]
can be represented by linear
interpolation from a coarser grid: [Figure]
On the coarse grid, the smooth error appears
relatively more oscillatory: in the example it is
the 4-mode, out of a possible 16, on the fine
grid, 1/4 of the way up the spectrum. On the
coarse grid, it is the 4-mode out of a possible 8,
hence 1/2 of the way up the spectrum.
Relaxation will be more effective on this mode if
done on the coarser grid!!
For k = 1, 2, ..., N/2, the k-th mode
is preserved on the coarse grid
w^h_{k,2j} = sin( 2jkπ/N ) = sin( jkπ/(N/2) ) = w^{2h}_{k,j}
Also, note that w^h_{N/2} → 0 on the coarse grid.
[Figure: the k = 4 mode on an N = 12 grid, and the same mode
on the N = 6 coarse grid.]
For k > N/2, w^h_k is invisible on
the coarse grid: aliasing!!
For k > N/2, the k-th mode on
the fine grid is aliased and
appears as the (N-k)-th mode
on the coarse grid:
w^h_{k,2j} = sin( 2jkπ/N ) = -sin( 2jπ(N-k)/N ) = -w^{2h}_{N-k,j}
[Figure: the k = 9 mode on an N = 12 grid appears as the
k = 3 mode on the coarse grid.]
Second observation toward
multigrid:
Recall the residual correction idea: Let v be an
approximation to the solution of Au = f, with the
residual r = f - Av. Then the error e = u - v satisfies
Ae = r.
After relaxing on Au = f on the fine grid, the error
will be smooth. On the coarse grid, however, this
error appears more oscillatory, and relaxation will
be more effective.
Therefore we go to a coarse grid and relax on the
residual equation Ae = r, with an initial guess of e = 0.
Idea! Coarse-grid correction
Relax on Au = f on Ω^h to obtain an approximation v^h.
Compute r = f - A v^h.
Relax on Ae = r on Ω^{2h} to obtain an approximation
to the error, e^{2h}.
Correct the approximation: v^h ← v^h + e^{2h}.
Clearly, we need methods for the mappings
Ω^h → Ω^{2h} and Ω^{2h} → Ω^h.
1D Interpolation (Prolongation)
Mapping from the coarse grid to the fine grid:
I_{2h}^h : Ω^{2h} → Ω^h
Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I_{2h}^h v^{2h} = v^h, where
v^h_{2i} = v^{2h}_i
v^h_{2i+1} = (1/2)( v^{2h}_i + v^{2h}_{i+1} ),   for 0 ≤ i ≤ N/2 - 1.
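A minimal sketch of this operator in NumPy (our own helper; the arrays store boundary values too):

```python
import numpy as np

def interpolate(v2h):
    """Linear interpolation I_{2h}^h: coarse vector (N/2+1 values,
    boundaries included) to fine vector (N+1 values)."""
    nc = len(v2h) - 1
    vh = np.zeros(2 * nc + 1)
    vh[::2] = v2h                          # coarse-grid values copy over
    vh[1::2] = 0.5 * (v2h[:-1] + v2h[1:])  # in-between points: average
    return vh
```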
1D Interpolation (Prolongation)
Values at points on the coarse grid map unchanged
to the fine grid
Values at fine-grid points NOT on the coarse grid
are the averages of their coarse-grid neighbors
The prolongation operator (1D)
We may regard I_{2h}^h as a linear operator from R^{N/2-1}
to R^{N-1}; e.g., for N = 8:

I_{2h}^h = (1/2) [ 1  0  0 ]
                 [ 2  0  0 ]
                 [ 1  1  0 ]
                 [ 0  2  0 ]
                 [ 0  1  1 ]
                 [ 0  0  2 ]
                 [ 0  0  1 ]

so that I_{2h}^h ( v^{2h}_1, v^{2h}_2, v^{2h}_3 )^T = ( v^h_1, ..., v^h_7 )^T.
I_{2h}^h has full rank, and thus null space {0}.
How well does v^{2h} approximate u?
Imagine that a coarse-grid approximation v^{2h} has
been found. How well does it approximate the
exact solution u? (Think of u as the error!)
If u is smooth, a coarse-grid interpolant of v^{2h}
may do very well.
How well does v^{2h} approximate u?
Imagine that a coarse-grid approximation v^{2h} has
been found. How well does it approximate the
exact solution u? (Think of u as the error!)
If u is oscillatory, a coarse-grid interpolant of v^{2h}
may not work well.
Moral of this story:
If u is smooth, a coarse-grid interpolant of v^{2h}
may do very well.
If u is oscillatory, a coarse-grid interpolant of v^{2h}
may not work well.
Therefore, nested iteration is most effective
when the error is smooth!
1D Restriction by injection
Mapping from the fine grid to the coarse grid:
I_h^{2h} : Ω^h → Ω^{2h}
Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I_h^{2h} v^h = v^{2h},
where v^{2h}_i = v^h_{2i}.
1D Restriction by full-weighting
Let v^h, v^{2h} be defined on Ω^h, Ω^{2h}. Then I_h^{2h} v^h = v^{2h},
where
v^{2h}_i = (1/4)( v^h_{2i-1} + 2v^h_{2i} + v^h_{2i+1} ).
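The full-weighting companion to the interpolation sketch above (again, our own helper):

```python
import numpy as np

def restrict_fw(vh):
    """Full weighting I_h^{2h}: fine vector (N+1 values) to coarse
    vector (N/2+1 values); interior rule
    v2h_i = ( vh_{2i-1} + 2 vh_{2i} + vh_{2i+1} ) / 4."""
    v2h = np.zeros((len(vh) - 1) // 2 + 1)
    v2h[1:-1] = 0.25 * (vh[1:-2:2] + 2.0 * vh[2:-1:2] + vh[3::2])
    return v2h
```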
The restriction operator R (1D)
We may regard I_h^{2h} as a linear operator from R^{N-1}
to R^{N/2-1}; e.g., for N = 8:

I_h^{2h} = (1/4) [ 1  2  1  0  0  0  0 ]
                 [ 0  0  1  2  1  0  0 ]
                 [ 0  0  0  0  1  2  1 ]

so that I_h^{2h} ( v^h_1, ..., v^h_7 )^T = ( v^{2h}_1, v^{2h}_2, v^{2h}_3 )^T.
I_h^{2h} has rank N/2 - 1, and thus dim( N( I_h^{2h} ) ) = N/2.
Prolongation and restriction are
often nicely related
For the 1D examples, linear interpolation and full
weighting are related by:
I_{2h}^h = 2 ( I_h^{2h} )^T
A commonly used, and highly useful, requirement is
that
I_{2h}^h = c ( I_h^{2h} )^T,   for some c in R.
2D Prolongation
v^h_{2i,2j} = v^{2h}_{i,j}
v^h_{2i+1,2j} = (1/2)( v^{2h}_{i,j} + v^{2h}_{i+1,j} )
v^h_{2i,2j+1} = (1/2)( v^{2h}_{i,j} + v^{2h}_{i,j+1} )
v^h_{2i+1,2j+1} = (1/4)( v^{2h}_{i,j} + v^{2h}_{i+1,j} + v^{2h}_{i,j+1} + v^{2h}_{i+1,j+1} )
We denote the operator by its "give to" stencil, ] · [ :

    ] 1/4  1/2  1/4 [
    ] 1/2   1   1/2 [
    ] 1/4  1/2  1/4 [

Centered over a c-point, it shows what fraction of
the c-point's value is contributed to neighboring
f-points.
2D Restriction (full-weighting)
We denote the operator by the stencil [ · ] :

    [ 1/16  1/8  1/16 ]
    [ 1/8   1/4  1/8  ]
    [ 1/16  1/8  1/16 ]

Centered over a c-point, it shows what fraction of each
neighboring f-point's value is contributed to the
value at the c-point.
Now, let's put all these ideas
together
Nested Iteration (effective on smooth error
modes)
Relaxation (effective on oscillatory error modes)
Residual equation (i.e., residual correction)
Prolongation and Restriction
Coarse Grid Correction Scheme
1) Relax ν₁ times on A^h u^h = f^h on Ω^h with
arbitrary initial guess v^h.
2) Compute r^h = f^h - A^h v^h.
3) Restrict r^{2h} = I_h^{2h} r^h.
4) Solve A^{2h} e^{2h} = r^{2h} on Ω^{2h}.
5) Correct the fine-grid solution: v^h ← v^h + I_{2h}^h e^{2h}.
6) Relax ν₂ times on A^h u^h = f^h on Ω^h with initial
guess v^h.
In short: v^h ← CG( v^h, f^h, ν₁, ν₂ ).
Coarse-grid Correction
Relax on A^h u^h = f^h
Compute r^h = f^h - A^h u^h
Restrict r^{2h} = I_h^{2h} r^h
Solve A^{2h} e^{2h} = r^{2h}, i.e., e^{2h} = ( A^{2h} )^{-1} r^{2h}
Interpolate e^h ← I_{2h}^h e^{2h}
Correct u^h ← u^h + e^h
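Putting the pieces together, here is a sketch of one coarse-grid correction cycle for the 1D model problem (σ = 0). It reuses the weighted_jacobi, restrict_fw, and interpolate helpers sketched earlier; a small direct solve stands in for "solve A^{2h} e^{2h} = r^{2h}".

```python
import numpy as np

def residual(v, f, h):
    """r = f - Av for A = (1/h^2) tridiag(-1, 2, -1)."""
    r = np.zeros_like(v)
    r[1:-1] = f[1:-1] - (-v[:-2] + 2.0 * v[1:-1] - v[2:]) / h**2
    return r

def two_grid(v, f, h, nu1=2, nu2=1):
    """One cycle: relax, restrict residual, solve, correct, relax."""
    v = weighted_jacobi(v, f, h, sweeps=nu1)      # pre-smooth
    r2h = restrict_fw(residual(v, f, h))          # coarse-grid residual
    n = len(r2h) - 2                              # coarse interior points
    A2h = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
           - np.diag(np.ones(n - 1), -1)) / (2.0 * h)**2
    e2h = np.zeros_like(r2h)
    e2h[1:-1] = np.linalg.solve(A2h, r2h[1:-1])   # exact coarse solve
    v += interpolate(e2h)                         # correct
    return weighted_jacobi(v, f, h, sweeps=nu2)   # post-smooth
```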
What is A^{2h}?
For this scheme to work, we must have A^{2h}, a
coarse-grid operator. For the moment, we will
simply assume that A^{2h} is the coarse-grid
version of the fine-grid operator A^h.
We will return to the question of constructing A^{2h} later.
How do we solve the coarse-
grid residual equation? Recursion!
Down the cycle: relax, then transfer the residual.
  u^h ← G( A^h, f^h );   f^{2h} = I_h^{2h} ( f^h - A^h u^h )
  u^{2h} ← G( A^{2h}, f^{2h} );   f^{4h} = I_{2h}^{4h} ( f^{2h} - A^{2h} u^{2h} )
  u^{4h} ← G( A^{4h}, f^{4h} );   f^{8h} = I_{4h}^{8h} ( f^{4h} - A^{4h} u^{4h} )
  ...
On the coarsest grid Ω^H, solve exactly: e^H = ( A^H )^{-1} f^H.
Back up the cycle: interpolate the error and correct.
  u^{8h} ← u^{8h} + e^{8h}
  e^{4h} = I_{8h}^{4h} u^{8h};   u^{4h} ← u^{4h} + e^{4h}
  e^{2h} = I_{4h}^{2h} u^{4h};   u^{2h} ← u^{2h} + e^{2h}
  e^h = I_{2h}^h u^{2h};   u^h ← u^h + e^h
V-cycle (recursive form)
v^h ← MV^h( v^h, f^h ):
1) Relax ν₁ times on A^h u^h = f^h, with arbitrary initial guess v^h.
2) If Ω^h is the coarsest grid, go to 4).
Else:
   f^{2h} ← I_h^{2h} ( f^h - A^h v^h )
   v^{2h} ← 0
   v^{2h} ← MV^{2h}( v^{2h}, f^{2h} )
3) Correct v^h ← v^h + I_{2h}^h v^{2h}.
4) Relax ν₂ times on A^h u^h = f^h, with initial guess v^h.
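The same recursion as a NumPy sketch, built on the helpers from the earlier sketches (weighted_jacobi, restrict_fw, interpolate, residual); on the coarsest grid the relaxation sweeps stand in for an exact solve:

```python
import numpy as np

def v_cycle(v, f, h, nu1=2, nu2=1):
    """Recursive V-cycle MV(v, f) for the 1D model problem (sigma = 0)."""
    v = weighted_jacobi(v, f, h, sweeps=nu1)     # pre-smooth
    if len(v) > 3:                               # not yet the coarsest grid
        f2h = restrict_fw(residual(v, f, h))     # coarse right-hand side
        e2h = v_cycle(np.zeros_like(f2h), f2h, 2.0 * h, nu1, nu2)
        v += interpolate(e2h)                    # coarse-grid correction
    return weighted_jacobi(v, f, h, sweeps=nu2)  # post-smooth
```

Repeated calls, e.g. v = v_cycle(v, f, 1.0/64) on an N = 64 grid, should reduce the residual by a roughly constant factor per cycle, independent of h.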
Storage Costs: v^h and f^h must
be stored on each level
In 1D, each coarse grid has about half the
number of points of the finer grid.
In 2D, each coarse grid has about one-fourth
the number of points of the finer grid.
In d dimensions, each coarse grid has about
2^{-d} times the number of points of the finer grid.
Total storage cost:
2 N^d ( 1 + 2^{-d} + 2^{-2d} + 2^{-3d} + ... + 2^{-Md} ) < 2 N^d / ( 1 - 2^{-d} ),
less than 2, 4/3, and 8/7 times the cost of storage on the fine
grid for 1, 2, and 3D problems, respectively.
Computation Costs
Let 1 Work Unit (WU) be the cost of one
relaxation sweep on the fine grid.
Ignore the cost of restriction and interpolation
(typically about 20% of the total cost).
Consider a V-cycle with 1 pre-coarse-grid-correction
relaxation sweep and 1 post-coarse-grid-correction
relaxation sweep on each level.
Cost of V-cycle (in WU):
2 ( 1 + 2^{-d} + 2^{-2d} + 2^{-3d} + ... + 2^{-Md} ) < 2 / ( 1 - 2^{-d} ).
Cost is about 4, 8/3, and 16/7 WU per V-cycle in 1, 2,
and 3 dimensions.
Convergence Analysis
First, a heuristic argument:
The convergence factor for the oscillatory modes of the
error (i.e., the smoothing factor) is small and
independent of the grid spacing h:
smoothing factor = max | λ_k( R ) |,   N/2 ≤ k ≤ N.
The multigrid cycling schemes focus the relaxation on
the oscillatory components on each level.
[Figure: the spectrum from smooth (k = 1) to oscillatory (k = N);
relaxation on the fine grid handles k between N/2 and N, and
relaxation on the first coarse grid handles the next band down.]
The overall convergence factor for multigrid
methods is small and independent of h!
Convergence analysis, a little
more precisely...
Continuous problem: Au = f, u = u(x).
Discrete problem: A^h u^h = f^h, with computed approximation v^h.
Global error:
E_i = u( x_i ) - u^h_i, with ||E|| ≤ K h^p
(p = 2 for the model problem).
Algebraic error:
e^h_i = u^h_i - v^h_i.
Suppose a tolerance ε is specified such that v^h must
satisfy ||u - v^h|| < ε.
Then this is guaranteed if ||E|| + ||e^h|| < ε, since
u - v^h = ( u - u^h ) + ( u^h - v^h ) = E + e^h.
We can satisfy the requirement
by imposing two conditions
1) ||E|| ≤ K h^p ≤ ε/2. We use this requirement to determine a
grid spacing: h ≤ ( ε/2K )^{1/p}.
2) ||e^h|| ≤ ε/2, which determines the number of
iterations required.
If we iterate until ||e^h|| ≤ K h^p on Ω^h, then
we have "converged to the level of truncation."
Converging to the level of
truncation
Use an MV scheme with convergence factor γ < 1
independent of h (fixed ν₁ and ν₂).
Assume a d-dimensional problem on an N x N x ... x N
grid with h = 1/N.
The V-cycles must reduce the error from
||e|| ~ O(1) to ||e|| ~ O( h^p ) = O( N^{-p} ).
We can determine ν, the number of V-cycles
required to accomplish this.
Work needed to converge to the
level of truncation
Since V-cycles at convergence rate are
required, we see that
implying that .
Since one V-cycle costs O(1) WU and one WU is
O(N
d
), we see that the cost of converging to the
level of truncation using the MV method is
which is comparable to fast direct methods (FFT
based).
) (
N O
p
) ( N O g o l
N N O ) (
d
g o l
71 of 119
A numerical example
Consider the two-dimensional model problem (with
σ = 0), given by
-u_xx - u_yy = 2 [ ( 1 - 6x² ) y² ( 1 - y² ) + ( 1 - 6y² ) x² ( 1 - x² ) ]
inside the unit square, with u = 0 on the boundary.
The solution to this problem is
u( x, y ) = ( x² - x⁴ )( y⁴ - y² ).
We examine the effectiveness of MV cycling to
solve this problem on N x N grids [(N-1) x (N-1)
interior] for N = 16, 32, 64, 128.
Numerical Results,
MV cycling
Shown are the results
of 16 V-cycles. We
display, at the end of
each cycle, the residual
norm, the error norm,
and the ratio of these
norms to their values at
the end of the previous
cycle.
N=16, 32, 64, 128
Look again at nested iteration
Idea: It's cheaper to solve a problem (i.e., it takes
fewer iterations) if the initial guess is good.
How to get a good initial guess:
Solve the problem on the coarse grid first.
Interpolate the coarse solution to the fine grid.
Use the interpolated coarse solution as the initial
guess on the fine grid.
Now, let's use the V-cycle as the solver on each grid
level! This defines the Full Multigrid (FMG) cycle.
The Full Multigrid (FMG) cycle
Initialize f^h, f^{2h}, f^{4h}, ..., f^H.
Solve on the coarsest grid: v^H = ( A^H )^{-1} f^H.
...
Interpolate the initial guess: v^{2h} ← I_{4h}^{2h} v^{4h}.
Perform a V-cycle: v^{2h} ← MV^{2h}( v^{2h}, f^{2h} ).
Interpolate the initial guess: v^h ← I_{2h}^h v^{2h}.
Perform a V-cycle: v^h ← MV^h( v^h, f^h ).
In short: v^h ← FMG^h( f^h ).
Full Multigrid (FMG)
[Figure: FMG cycling schematic across grid levels; legend:
restriction, interpolation, high-order interpolation.]
FMG-cycle (recursive form)
v^h ← FMG^h( f^h ):
1) Initialize f^h, f^{2h}, f^{4h}, ..., f^H.
2) If Ω^h is the coarsest grid, then solve exactly.
Else:
   v^{2h} ← FMG^{2h}( f^{2h} )
3) Set the initial guess v^h ← I_{2h}^h v^{2h}.
4) v^h ← MV^h( v^h, f^h ), ν₀ times.
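A sketch of this recursion in the same NumPy setting; building the coarse right-hand sides by full weighting of f is one common choice, made here for illustration.

```python
import numpy as np

def fmg(f, h, nu0=1):
    """Full multigrid for the 1D model problem, using v_cycle above."""
    if len(f) <= 3:                      # coarsest grid: one interior point
        v = np.zeros_like(f)
        v[1:-1] = 0.5 * h**2 * f[1:-1]   # exact solve of (2/h^2) v = f
        return v
    f2h = restrict_fw(f)                 # coarse right-hand side
    v = interpolate(fmg(f2h, 2.0 * h))   # coarse solution as initial guess
    for _ in range(nu0):                 # nu0 V-cycles on this level
        v = v_cycle(v, f, h)
    return v
```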
Cost of an FMG cycle
One V-cycle is performed per level, at a cost of
2 / ( 1 - 2^{-d} ) WU per grid (where the WU is for the
size grid involved).
The size of the WU for coarse grid j is 2^{-jd} times
the size of the WU on the fine grid (grid 0).
Hence, the cost of the FMG(1,1) cycle is less than
( 1 + 2^{-d} + 2^{-2d} + ... ) 2 / ( 1 - 2^{-d} ) = 2 / ( 1 - 2^{-d} )².
d = 1: 8 WU;   d = 2: about 7/2 WU;   d = 3: about 5/2 WU.
How to tell if truncation error is
reached with Full Multigrid (FMG)
If truncation error is reached, ||e^h|| = O( h² ) for each grid
level h. The norms of the errors at the solution
points in the cycle should form a Cauchy sequence, with
||e^h|| ≈ (1/4) ||e^{2h}||.
Cost to achieve convergence to
truncation by the FMV method
Consider using the FMV method, which solves the
problem on Ω^{2h} to the level of truncation before
going to Ω^h, i.e.,
||e^{2h}|| = ||u^{2h} - v^{2h}|| ≤ K ( 2h )^p.
We ask that ||e^h|| ≤ K h^p, which implies that
the algebraic error must be reduced by a factor of
2^{-p} on Ω^h. Hence ν V-cycles are needed, where
γ^ν ≤ 2^{-p};
thus ν = O(1), and the computational cost of the
FMV method is O( N^d ).
A numerical example
Consider again the two-dimensional model problem
(with σ = 0), given by
-u_xx - u_yy = 2 [ ( 1 - 6x² ) y² ( 1 - y² ) + ( 1 - 6y² ) x² ( 1 - x² ) ]
inside the unit square, with u = 0 on the boundary.
We examine the effectiveness of FMG cycling to
solve this problem on N x N grids [(N-1) x (N-1)
interior] for N = 16, 32, 64, 128.
FMG results
Shown are results for three FMG cycles, and a
comparison to MV cycle results.
What is A^{2h}?
Recall the coarse-grid correction scheme:
1) Relax on A^h u^h = f^h on Ω^h to get v^h.
2) Compute and restrict r^{2h} = I_h^{2h} ( f^h - A^h v^h ).
3) Solve A^{2h} e^{2h} = r^{2h} on Ω^{2h}.
4) Correct the fine-grid solution: v^h ← v^h + I_{2h}^h e^{2h}.
Assume that e^h ∈ Range( I_{2h}^h ), so that e^h = I_{2h}^h u^{2h}
for some u^{2h} ∈ Ω^{2h}. Then the residual equation
A^h e^h = r^h can be written
A^h I_{2h}^h u^{2h} = r^h.
How does A^h act upon Range( I_{2h}^h )?
How does A^h act on Range( I_{2h}^h )?
[Figure: applying A^h to an interpolated coarse-grid vector;
the result vanishes at the odd (fine-only) points: r_{2i+1} = 0.]
Therefore, the odd rows of A^h I_{2h}^h are zero (in 1D
only), and r_{2i+1} = 0. We therefore keep the even rows
of A^h I_{2h}^h for the residual equation on Ω^{2h}. These
rows can be picked out by applying restriction:
I_h^{2h} A^h I_{2h}^h u^{2h} = I_h^{2h} r^h
Building A^{2h}
The residual equation on the coarse grid is:
I_h^{2h} A^h I_{2h}^h u^{2h} = I_h^{2h} r^h
Therefore, we identify the coarse-grid operator as
A^{2h} = I_h^{2h} A^h I_{2h}^h.
Next, we determine how to compute the entries of
the coarse-grid matrix.
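A small numerical check of this definition in the NumPy setting of the earlier sketches: build A^h, linear interpolation P, and full weighting R = (1/2) P^T for N = 8, and confirm that R A^h P is exactly the 2h version of A^h.

```python
import numpy as np

def model_matrix(n, h):
    """A = (1/h^2) tridiag(-1, 2, -1) on n interior points."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

N, h = 8, 1.0 / 8.0
Ah = model_matrix(N - 1, h)
# interpolation: column i carries (1/2, 1, 1/2) to fine rows 2i, 2i+1, 2i+2
P = np.zeros((N - 1, N // 2 - 1))
for i in range(N // 2 - 1):
    P[2 * i:2 * i + 3, i] = [0.5, 1.0, 0.5]
R = 0.5 * P.T                # full weighting, c = 1/2
A2h = R @ Ah @ P             # Galerkin coarse-grid operator
print(np.allclose(A2h, model_matrix(N // 2 - 1, 2 * h)))  # True
```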
Computing the i-th row of A^{2h}
Compute A^{2h} e^{2h}_i = I_h^{2h} A^h I_{2h}^h e^{2h}_i, where
e^{2h}_i = ( 0, ..., 0, 1, 0, ..., 0 )^T, with the 1 in position i.
Step by step, showing the values near position i:
  e^{2h}_i:                       ( ..., 0, 0, 1, 0, 0, ... )
  I_{2h}^h e^{2h}_i:              ( ..., 0, 1/2, 1, 1/2, 0, ... )
  A^h I_{2h}^h e^{2h}_i:          ( ..., -1/2h², 0, 1/h², 0, -1/2h², ... )
  I_h^{2h} A^h I_{2h}^h e^{2h}_i: ( ..., -1/4h², 1/2h², -1/4h², ... )
The i-th row of A^{2h} is (1/(2h)²) [ -1  2  -1 ],
which is the 2h version of A^h: the i-th row of A^{2h}
looks a lot like a row of A^h!
Therefore, IF relaxation on Ω^h leaves only error in
the range of interpolation, then solving
A^{2h} e^{2h} = r^{2h} determines the error exactly!
In general, this will not be the case, but this
argument certainly leads to a plausible representation
for A^{2h}.
The variational properties
The definition of A^{2h} that resulted from the
foregoing line of reasoning is useful for both
theoretical and practical reasons. Together with
the commonly used relationship between
restriction and prolongation, we have the following
variational properties:
A^{2h} = I_h^{2h} A^h I_{2h}^h   (Galerkin condition)
I_{2h}^h = c ( I_h^{2h} )^T,   for some c in R.
Properties of the Grid Transfer
Operators: Restriction
Full weighting: I_h^{2h} : Ω^h → Ω^{2h}, or
I_h^{2h} : R^{N-1} → R^{N/2-1}.
For N = 8,
I_h^{2h} = (1/4) [ 1  2  1  0  0  0  0 ]
                 [ 0  0  1  2  1  0  0 ]
                 [ 0  0  0  0  1  2  1 ]
I_h^{2h} has rank N/2 - 1, and a null space N( I_h^{2h} )
with dimension N/2.
Spectral properties of I_h^{2h}
How does I_h^{2h} act on the eigenvectors of A^h?
Consider w^h_{k,j} = sin( jkπ/N ), 1 ≤ k ≤ N-1, 0 ≤ j ≤ N.
Good exercise: show that
I_h^{2h} w^h_k = cos²( kπ/2N ) w^{2h}_k = c_k w^{2h}_k
for 1 ≤ k ≤ N/2.
Spectral properties of I_h^{2h} (cont.)
i.e., I_h^{2h} [ k-th mode on Ω^h ] = c_k [ k-th mode on Ω^{2h} ].
[Figure: I_h^{2h} maps the k = 2 mode on an N = 8 grid to the
k = 2 mode on an N = 4 grid.]
Spectral properties of I_h^{2h} (cont.)
Let k' = N - k for 1 ≤ k < N/2, so that N/2 < k' < N.
Then (another good exercise!)
I_h^{2h} w^h_{k'} = -sin²( kπ/2N ) w^{2h}_k = -s_k w^{2h}_k.
Spectral properties of I_h^{2h} (cont.)
i.e., I_h^{2h} [ (N-k)-th mode on Ω^h ] = -s_k [ k-th mode on Ω^{2h} ].
[Figure: I_h^{2h} maps the k = 6 mode on an N = 8 grid to (minus)
the k = 2 mode on an N = 4 grid.]
Spectral properties of I_h^{2h} (cont.)
Summarizing:
I_h^{2h} w^h_k = c_k w^{2h}_k
I_h^{2h} w^h_{k'} = -s_k w^{2h}_k,   k' = N - k,   1 ≤ k < N/2
I_h^{2h} w^h_{N/2} = 0
Complementary modes: W_k = span{ w^h_k, w^h_{k'} }, and
I_h^{2h} : W_k → span{ w^{2h}_k }.
Null space of I_h^{2h}
Observe that N( I_h^{2h} ) = span{ A^h e^h_i }, where i is odd
and e^h_i is the i-th unit vector.
Let η^h_i = A^h e^h_i; its stencil, centered at the odd point i,
is (1/h²) ( 0  ...  0  -1  2  -1  0  ...  0 ).
While the η^h_i look oscillatory, they contain all of
the Fourier modes of A^h, i.e.,
η_i = Σ_{k=1}^{N-1} a_k w_k,   with a_k ≠ 0.
All the Fourier modes of A^h are needed to
represent the null space of restriction!
Properties of the Grid Transfer
Operators: Interpolation
Interpolation: I_{2h}^h : Ω^{2h} → Ω^h, or
I_{2h}^h : R^{N/2-1} → R^{N-1}.
For N = 8,
I_{2h}^h = (1/2) [ 1  0  0 ]
                 [ 2  0  0 ]
                 [ 1  1  0 ]
                 [ 0  2  0 ]
                 [ 0  1  1 ]
                 [ 0  0  2 ]
                 [ 0  0  1 ]
I_{2h}^h has full rank and null space {0}.
Spectral properties of I_{2h}^h
How does I_{2h}^h act on the eigenvectors of A^{2h}?
Consider w^{2h}_{k,j} = sin( jkπ/(N/2) ), 1 ≤ k ≤ N/2 - 1,
0 ≤ j ≤ N/2.
Good exercise: show that the modes of A^{2h} are
NOT preserved by I_{2h}^h, but that the space W_k is
preserved:
I_{2h}^h w^{2h}_k = cos²( kπ/2N ) w^h_k - sin²( kπ/2N ) w^h_{k'}
            = c_k w^h_k - s_k w^h_{k'}
Spectral properties of I_{2h}^h (cont.)
I_{2h}^h w^{2h}_k = c_k w^h_k - s_k w^h_{k'} : interpolation of smooth
modes on Ω^{2h} excites oscillatory modes on Ω^h.
Note that if k << N, then
I_{2h}^h w^{2h}_k = ( 1 - O( k²/N² ) ) w^h_k - O( k²/N² ) w^h_{k'};
I_{2h}^h is second-order interpolation.
The Range of I_{2h}^h
The range of I_{2h}^h is the span of the columns of I_{2h}^h.
Let ξ^h_i be the i-th column of I_{2h}^h (a "tent" function).
ξ^h_i = Σ_{k=1}^{N-1} b_k w_k,   with b_k ≠ 0.
All the Fourier modes of A^h are needed to
represent Range( I_{2h}^h ).
Use all the facts to analyze the
coarse-grid correction scheme
1) Relax ν times on Ω^h: v^h ← R^ν v^h.
2) Compute and restrict the residual:
f^{2h} = I_h^{2h} ( f^h - A^h v^h ).
3) Solve the residual equation: v^{2h} = ( A^{2h} )^{-1} f^{2h}.
4) Correct the fine-grid solution: v^h ← v^h + I_{2h}^h v^{2h}.
The entire process appears as
v^h ← R^ν v^h + I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} ( f^h - A^h R^ν v^h ).
The exact solution satisfies
u^h = R^ν u^h + I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} ( f^h - A^h R^ν u^h ).
Error propagation of the coarse-
grid correction scheme
Subtracting the previous two expressions, we get
e^h ← [ I - I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} A^h ] R^ν e^h,
i.e., e^h ← CG e^h.
How does CG act on the modes of A^h? Assume e^h
consists of the modes w^h_k and w^h_{k'}, for 1 ≤ k ≤ N/2
and k' = N - k.
We know how R, A^h, ( A^{2h} )^{-1}, I_h^{2h}, and I_{2h}^h
act on w^h_k and w^h_{k'}.
Error propagation of CG
For now, assume no relaxation (ν = 0). Then
W_k = span{ w^h_k, w^h_{k'} } is invariant under CG:
CG w^h_k = s_k w^h_k + s_k w^h_{k'}
CG w^h_{k'} = c_k w^h_k + c_k w^h_{k'}
where
c_k = cos²( kπ/2N ),   s_k = sin²( kπ/2N ).
CG error propagation for k << N
Consider the case k << N (extremely smooth and
extremely oscillatory modes):
CG w_k = O( k²/N² ) w_k + O( k²/N² ) w_{k'}
CG w_{k'} = ( 1 - O( k²/N² ) ) w_k + ( 1 - O( k²/N² ) ) w_{k'}
Hence, CG eliminates the smooth modes but does
not damp the oscillatory modes of the error!
Now consider CG with relaxation
Next, include ν relaxation sweeps. Assume that
the relaxation R preserves the modes of A^h
(although this is often unnecessary). Let λ_k
denote the eigenvalue of R associated with w_k.
For k << N/2,
CG R^ν w_k = λ_k^ν s_k w_k + λ_k^ν s_k w_{k'}        (small, since s_k is small!)
CG R^ν w_{k'} = λ_{k'}^ν c_k w_k + λ_{k'}^ν c_k w_{k'}   (small, since λ_{k'}^ν is small!)
The crucial observation:
Between relaxation and the coarse-
grid correction, both smooth and
oscillatory components of the error
are effectively damped.
This is essentially the spectral
picture of how multigrid works. We
examine now another viewpoint, the
algebraic picture of multigrid.
Recall the variational properties
All the analysis that follows assumes that the
variational properties hold:
A^{2h} = I_h^{2h} A^h I_{2h}^h   (Galerkin condition)
I_{2h}^h = c ( I_h^{2h} )^T,   for some c in R.
Algebraic interpretation of
coarse-grid correction
Consider the subspaces that make up Ω^h and Ω^{2h}.
For the rest of this talk, R( ) refers to the range of a
linear operator and N( ) to its null space.
[Figure: Ω^h contains the subspaces R( I_{2h}^h ) and N( I_h^{2h} );
Ω^{2h} = R( I_h^{2h} ); I_{2h}^h and I_h^{2h} map between the grids.]
From the fundamental theorem of linear algebra:
N( I_h^{2h} ) ⊥ R( ( I_h^{2h} )^T ),   or   N( I_h^{2h} ) ⊥ R( I_{2h}^h ).
Subspace decomposition of Ω^h
Since A^h has full rank, we can say, equivalently,
R( I_{2h}^h ) ⊥_{A^h} N( I_h^{2h} A^h )
(where x ⊥_{A^h} y means that ( x, A^h y ) = 0).
Therefore, any e^h can be written as
e^h = s^h + t^h,
where s^h ∈ R( I_{2h}^h ) and t^h ∈ N( I_h^{2h} A^h ).
Hence,
Ω^h = R( I_{2h}^h ) ⊕ N( I_h^{2h} A^h ).
Characteristics of the subspaces
Since s^h = I_{2h}^h q^{2h} for some q^{2h} ∈ Ω^{2h}, we
associate s^h with the smooth components of e^h.
But s^h generally has all Fourier modes in it (recall
the basis vectors ξ^h_i of R( I_{2h}^h )).
Similarly, we associate t^h with the oscillatory
components of e^h, although t^h generally has all
Fourier modes in it as well. Recall that N( I_h^{2h} )
is spanned by η_i = A^h e^h_i for odd i; therefore
N( I_h^{2h} A^h ) is spanned by the unit vectors
e^h_i = ( 0, ..., 0, 1, 0, ..., 0 )^T
for odd i, which look oscillatory.
Algebraic analysis of coarse-grid
correction
Recall that (without relaxation)
CG = I - I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} A^h.
First note that if s^h ∈ R( I_{2h}^h ), then CG s^h = 0.
This follows since s^h = I_{2h}^h q^{2h} for some q^{2h}, and
therefore, by the Galerkin property,
CG s^h = [ I - I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} A^h ] I_{2h}^h q^{2h}
       = I_{2h}^h q^{2h} - I_{2h}^h ( A^{2h} )^{-1} ( I_h^{2h} A^h I_{2h}^h ) q^{2h}
       = I_{2h}^h q^{2h} - I_{2h}^h q^{2h} = 0.
It follows that N( CG ) = R( I_{2h}^h ), that is, the null
space of coarse-grid correction is the range of
interpolation.
More algebraic analysis of
coarse-grid correction
Next, note that if t^h ∈ N( I_h^{2h} A^h ), then
CG t^h = [ I - I_{2h}^h ( A^{2h} )^{-1} I_h^{2h} A^h ] t^h
       = t^h - I_{2h}^h ( A^{2h} )^{-1} ( I_h^{2h} A^h t^h ) = t^h - 0.
Therefore CG t^h = t^h:
CG is the identity on N( I_h^{2h} A^h ).
How does the algebraic picture
fit with the spectral view?
We may view Ω^h in two ways:
Ω^h = L ⊕ H,
where L is the span of the low-frequency modes ( 1 ≤ k < N/2 )
and H is the span of the high-frequency modes ( N/2 ≤ k < N );
or as
Ω^h = R( I_{2h}^h ) ⊕ N( I_h^{2h} A^h ).
Actually, each view is just part
of the picture
The operations we have examined work on
different spaces!
While N( I_h^{2h} A^h ) is mostly oscillatory, it isn't H,
and while R( I_{2h}^h ) is mostly smooth, it isn't L.
Relaxation eliminates error from H.
Coarse-grid correction eliminates error from R( I_{2h}^h ).
How it actually works (cartoon)
[Figure: the error e^h drawn against the L and H axes,
decomposed as s^h ∈ R( I_{2h}^h ) plus t^h ∈ N( I_h^{2h} A^h ).]
Relaxation eliminates H, but
increases the error in R( I_{2h}^h )
[Figure: after relaxation, the error e^h lies nearly along
R( I_{2h}^h ): its H-component is gone, but its component
in R( I_{2h}^h ) has grown.]
CG eliminates R( I_{2h}^h ), but
increases the error in H
[Figure: after coarse-grid correction, the error e^h lies in
N( I_h^{2h} A^h ): its R( I_{2h}^h )-component is gone, but its
H-component has grown.]
Difficulties:
anisotropic operators and grids
Consider the operator:
-ε ∂²u/∂x² - ∂²u/∂y² = f( x, y )
If ε << 1, then the GS smoothing factors in the x- and
y-directions differ dramatically. [Figure: smoothing
factor in each direction.]
Note that GS relaxation does not damp oscillatory
components in the x-direction.
The same phenomenon occurs for grids with much
larger spacing in one direction than the other.
[Figure: a grid with much larger spacing in x than in y.]
Difficulties: discontinuous or
anisotropic coefficients
Consider the operator:
-∇ · ( D( x, y ) ∇u )
where
D( x, y ) = [ d_{11}( x, y )   d_{12}( x, y ) ]
            [ d_{21}( x, y )   d_{22}( x, y ) ]
Again, the GS smoothing factors in the x- and y-directions
can be highly variable, and very often GS relaxation does
not damp oscillatory components in one or both directions.
Solutions: line relaxation (where whole grid lines of values
are found simultaneously), and/or semi-coarsening
(coarsening only in the strongly coupled direction).
For nonlinear problems, use
FAS (Full Approximation Scheme)
Suppose we wish to solve A( u ) = f, where the
operator A( ) is nonlinear. Then the linear
residual equation Ae = r does not apply.
Instead, we write the nonlinear residual equation:
A( u + e ) - A( u ) = r.
This is transferred to the coarse grid as:
A^{2h}( I_h^{2h} u^h + e^{2h} ) = A^{2h}( I_h^{2h} u^h ) + I_h^{2h}( f^h - A^h( u^h ) ).
We solve for w^{2h} = I_h^{2h} u^h + e^{2h} and transfer the
error (only!) to the fine grid:
u^h ← u^h + I_{2h}^h ( w^{2h} - I_h^{2h} u^h ).
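A skeleton of one FAS two-grid cycle; every problem-specific piece is passed in as a callable, and all names here are ours. This is only a sketch of the scheme above, not a library interface.

```python
def fas_two_grid(vh, fh, Ah, A2h, relax, restrict, interp, coarse_solve):
    """One FAS cycle for a nonlinear problem A^h(u^h) = f^h.

    Ah and A2h apply the fine/coarse nonlinear operators; relax(A, v, f)
    smooths; restrict and interp transfer vectors between grids;
    coarse_solve(A, f, v0) solves A^{2h}(w^{2h}) = f^{2h}. The full
    approximation goes down; only the error comes back up.
    """
    vh = relax(Ah, vh, fh)                  # pre-smooth
    v2h = restrict(vh)                      # full approximation on 2h
    f2h = A2h(v2h) + restrict(fh - Ah(vh))  # FAS right-hand side
    w2h = coarse_solve(A2h, f2h, v2h)       # solve the coarse problem
    vh = vh + interp(w2h - v2h)             # correct with the error only
    return relax(Ah, vh, fh)                # post-smooth
```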
Multigrid: increasingly, the right
tool!
Multigrid has been proven on a wide variety of
problems, especially elliptic PDEs, but has also found
application among parabolic & hyperbolic PDEs,
integral equations, evolution problems, geodesic
problems, etc.
With the right setup, multigrid is frequently an
optimal (i.e., O(N)) solver.
Multigrid is of great interest because it is one of the
very few scalable algorithms, and can be parallelized
readily and efficiently!