Selected Topics in Finite Element Methods
Zhiming Chen
Haijun Wu
Institute of Computational Mathematics, Chinese Academy
of Sciences, Beijing, 100080, China.
E-mail address: [email protected]
CHAPTER 0
Thus, $\int_0^1 (f + u'' - au)v\,dx = 0$ for all $v \in V \cap C^1([0,1])$ such that $v(1) = 0$. Let $w = f + u'' - au \in C^0([0,1])$. If $w \not\equiv 0$, then $w(x)$ is of one sign in some interval $[b,c] \subset [0,1]$ with $b < c$. Choose $v(x) = (x-b)^2(x-c)^2$ in $[b,c]$ and $v \equiv 0$ outside $[b,c]$. But then $\int_0^1 wv\,dx \neq 0$, which is a contradiction. Thus $-u'' + au = f$. Now apply (0.3) with $v(x) = x$ to find $u'(1) = 0$. So $u$ solves (0.1). □
The points {xi } are called nodes. Let hi = xi − xi−1 be the length of the i-th
subinterval [xi−1 , xi ]. Define h = max1≤i≤n hi .
2.2. Finite element spaces. We shall approximate the solution u(x) by us-
ing the continuous piecewise linear functions over Mh . Introduce the linear space
of functions
$$V_h = \big\{ v \in C^0([0,1]) : v(0) = 0,\ v|_{[x_{i-1},x_i]} \text{ is a linear polynomial},\ i = 1,\dots,n \big\}. \quad (0.4)$$
It is clear that Vh ⊂ V.
2.3. The finite element method. The finite element discretization of (0.2)
reads as:
Find $u_h \in V_h$ such that $\displaystyle A(u_h, v_h) = \int_0^1 f(x)v_h(x)\,dx \quad \forall v_h \in V_h$. \quad (0.5)
[Figure: the nodal basis functions $\phi_i(x)$ and $\phi_n(x)$.]
The nodal basis functions of $V_h$ are
$$\phi_i(x) = \begin{cases}\dfrac{x - x_{i-1}}{h_i}, & x_{i-1}\le x\le x_i,\\[1ex]\dfrac{x_{i+1} - x}{h_{i+1}}, & x_i < x\le x_{i+1},\\[1ex]0, & x < x_{i-1} \text{ or } x > x_{i+1},\end{cases} \qquad 1\le i\le n-1,$$
$$\phi_n(x) = \begin{cases}\dfrac{x - x_{n-1}}{h_n}, & x_{n-1}\le x\le 1,\\[1ex]0, & x < x_{n-1}.\end{cases}$$
Every $v_h \in V_h$ can be written as $v_h = v_1\phi_1 + \cdots + v_n\phi_n$ with $v_i = v_h(x_i)$, $i = 1,2,\dots,n$. In particular,
$$u_h = u_1\phi_1 + u_2\phi_2 + \cdots + u_n\phi_n, \qquad u_1,\dots,u_n\in\mathbb R,$$
where $u_i = u_h(x_i)$.
Taking $v_h = \phi_i$, $i = 1,\dots,n$, in (0.5), we obtain an algebraic linear system in the unknowns $u_1, u_2, \dots, u_n$:
$$A(\phi_1,\phi_i)u_1 + A(\phi_2,\phi_i)u_2 + \cdots + A(\phi_n,\phi_i)u_n = \int_0^1 f(x)\phi_i\,dx, \quad i = 1,\dots,n. \quad (0.6)$$
Denote by
$$k_{ij} = A(\phi_j,\phi_i) = \int_0^1 \big(\phi_j'\phi_i' + a\phi_j\phi_i\big)\,dx, \qquad f_i = \int_0^1 f(x)\phi_i\,dx,$$
and
$$K = \big(k_{ij}\big)_{n\times n}, \quad F = \big(f_i\big)_{n\times 1}, \quad U = \big(u_i\big)_{n\times 1}.$$
Then (0.6) is equivalent to the matrix equation
$$KU = F. \quad (0.7)$$
Therefore
$$A(\phi_i,\phi_i) = \int_0^1 \phi_i'\phi_i'\,dx + a\int_0^1 \phi_i\phi_i\,dx = \begin{cases}\dfrac{1}{h_i} + \dfrac{1}{h_{i+1}} + \dfrac{a}{3}(h_i + h_{i+1}), & i = 1,\dots,n-1,\\[1ex]\dfrac{1}{h_n} + \dfrac{a}{3}h_n, & i = n,\end{cases}$$
$$A(\phi_i,\phi_{i-1}) = A(\phi_{i-1},\phi_i) = -\frac{1}{h_i} + \frac{a}{6}h_i, \quad i = 2,\dots,n.$$
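The assembly of (0.7) is easy to program directly from these entries. The following Python sketch is only an illustration (the choice of a uniform mesh, a constant coefficient a, and the load f = 1 are assumptions made for the demo, not part of the text); it builds the tridiagonal matrix K and solves KU = F.

import numpy as np

def assemble_1d(x, a):
    # K and F for -u'' + a u = f with u(0)=0, u'(1)=0 and f = 1 (demo assumption),
    # piecewise linear elements on the nodes x[0]=0 < x[1] < ... < x[n]=1
    n = len(x) - 1
    h = np.diff(x)                               # h[i-1] = x_i - x_{i-1}
    K = np.zeros((n, n))
    F = np.zeros(n)
    for i in range(1, n):                        # unknowns u_1, ..., u_{n-1}
        K[i-1, i-1] = 1.0/h[i-1] + 1.0/h[i] + a*(h[i-1] + h[i])/3.0
        F[i-1] = (h[i-1] + h[i])/2.0             # integral of phi_i for f = 1
    K[n-1, n-1] = 1.0/h[n-1] + a*h[n-1]/3.0      # last unknown u_n
    F[n-1] = h[n-1]/2.0
    for i in range(1, n):                        # coupling of u_i and u_{i+1}
        K[i-1, i] = K[i, i-1] = -1.0/h[i] + a*h[i]/6.0
    return K, F

x = np.linspace(0.0, 1.0, 11)                    # uniform mesh with n = 10
K, F = assemble_1d(x, a=1.0)
U = np.linalg.solve(K, F)                        # nodal values u_h(x_1), ..., u_h(x_n)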
Theorem 2.1. Let $u_I \in V_h$ denote the nodal interpolant of $u$. Then
$$\|u - u_I\|_{L^2(\tau_i)} \le \frac{1}{\pi}h_i\|u'\|_{L^2(\tau_i)}, \quad (0.9)$$
$$\|u - u_I\|_{L^2(\tau_i)} \le \frac{1}{\pi^2}h_i^2\|u''\|_{L^2(\tau_i)}, \quad (0.10)$$
$$\|u' - u_I'\|_{L^2(\tau_i)} \le \frac{1}{\pi}h_i\|u''\|_{L^2(\tau_i)}. \quad (0.11)$$
Proof. We only prove (0.9) and leave the others as an exercise. We first transform (0.9) to the reference interval $[0,1]$. Let $\hat x = (x - x_{i-1})/h_i$ and let $\hat e(\hat x) = u(x) - u_I(x)$. Note that $\hat e(0) = \hat e(1) = 0$ and $k = u_I'$ is a constant. The inequality (0.9) is equivalent to
$$\|\hat e\|^2_{L^2([0,1])} = \frac{1}{h_i}\|u - u_I\|^2_{L^2(\tau_i)} \le \frac{h_i}{\pi^2}\|u'\|^2_{L^2(\tau_i)} = \frac{1}{\pi^2}\|\hat e' + kh_i\|^2_{L^2([0,1])},$$
that is (since $\int_0^1\hat e'\,d\hat x = \hat e(1) - \hat e(0) = 0$, the cross term vanishes),
$$\|\hat e\|^2_{L^2([0,1])} \le \frac{1}{\pi^2}\|\hat e'\|^2_{L^2([0,1])} + \frac{1}{\pi^2}\|kh_i\|^2_{L^2([0,1])}. \quad (0.12)$$
Introduce the space $W = \big\{ w \in L^2([0,1]) : w' \in L^2([0,1]) \text{ and } w(0) = w(1) = 0 \big\}$. Let
$$\lambda_1 = \inf_{w\in W,\,w\neq 0} R[w] = \inf_{w\in W,\,w\neq 0} \frac{\|w'\|^2_{L^2([0,1])}}{\|w\|^2_{L^2([0,1])}}.$$
By variational calculus it is easy to see that $R[w]$ is the Rayleigh quotient of the following eigenvalue problem:
$$-w'' = \lambda w, \quad w\in W.$$
Therefore $\lambda_1 = \pi^2$ is the smallest eigenvalue of the above problem, and hence (0.12) holds. This completes the proof of (0.9). □
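As a quick aside, the value $\lambda_1 = \pi^2$ is easy to confirm numerically. A minimal check (finite differences are used here only as an independent verification; the grid size is an arbitrary choice):

import numpy as np

n = 200
h = 1.0 / n
# second-difference matrix for -w'' = lambda*w with w(0) = w(1) = 0
A = (np.diag(2*np.ones(n-1)) - np.diag(np.ones(n-2), 1) - np.diag(np.ones(n-2), -1)) / h**2
lam1 = np.linalg.eigvalsh(A)[0]
print(lam1, np.pi**2)        # approx 9.8694 versus pi^2 = 9.8696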
Therefore
$$|||u - u_h|||^2 = A(u - u_h, u - u_h) = A(u - u_h, u - u_I) \le |||u - u_h|||\,|||u - u_I|||.$$
It follows from Theorem 2.1 that
$$|||u - u_h||| \le |||u - u_I||| = \Big[\sum_{i=1}^n \big(\|u' - u_I'\|^2_{L^2(\tau_i)} + a\|u - u_I\|^2_{L^2(\tau_i)}\big)\Big]^{1/2} \le \Big[\sum_{i=1}^n \Big(\Big(\frac{h}{\pi}\Big)^2 + a\Big(\frac{h}{\pi}\Big)^4\Big)\|u''\|^2_{L^2(\tau_i)}\Big]^{1/2} = \frac{h}{\pi}\Big[\Big(1 + a\Big(\frac{h}{\pi}\Big)^2\Big)\int_0^1 (u'')^2\,dx\Big]^{1/2}.$$
We have proved the following error estimate.
We have proved the following error estimate.
Theorem 2.2.
$$|||u - u_h||| \le \frac{h}{\pi}\Big(1 + a\Big(\frac{h}{\pi}\Big)^2\Big)^{1/2}\|u''\|_{L^2([0,1])}.$$
Since the above estimate depends on the unknown solution u, it is called the
a priori error estimate.
2.8. A posteriori error estimates. We will derive error estimates indepen-
dent of the unknown solution u.
Let $e = u - u_h$. Then
$$\begin{aligned}
A(e,e) &= A(u - u_h, e - e_I)\\
&= \int_0^1 f(e - e_I)\,dx - \int_0^1 u_h'(e - e_I)'\,dx - \int_0^1 au_h(e - e_I)\,dx\\
&= \int_0^1 (f - au_h)(e - e_I)\,dx - \sum_{i=1}^n\int_{x_{i-1}}^{x_i} u_h'(e - e_I)'\,dx\\
&\le \sum_{i=1}^n \|f - au_h\|_{L^2(\tau_i)}\|e - e_I\|_{L^2(\tau_i)}\\
&\le \sum_{i=1}^n \frac{h_i}{\pi}\|f - au_h\|_{L^2(\tau_i)}\|e'\|_{L^2(\tau_i)}.
\end{aligned}$$
Here the second sum vanishes because $u_h'$ is constant on each $\tau_i$ and $e - e_I$ vanishes at the nodes, and we have used Theorem 2.1 to derive the last inequality.
Define the local error estimator on the element $\tau_i = [x_{i-1},x_i]$ as
$$\eta_i = \frac{1}{\pi}h_i\|f - au_h\|_{L^2(\tau_i)}. \quad (0.14)$$
Then
$$|||e|||^2 \le \Big(\sum_{i=1}^n \eta_i^2\Big)^{1/2}\|e'\| \le \Big(\sum_{i=1}^n \eta_i^2\Big)^{1/2}|||e|||.$$
That is, we have the following a posteriori error estimate.
Now a question is if the above upper bound overestimates the true error. To
answer this question we introduce the following theorem that gives a lower bound
of the true error.
Theorem 2.4 (Lower bound). Define $|||\phi|||_{\tau_i} = \big(\int_{x_{i-1}}^{x_i}((\phi')^2 + a\phi^2)\,dx\big)^{1/2}$. Let $\overline{(f - au_h)}_i = \frac{1}{h_i}\int_{x_{i-1}}^{x_i}(f - au_h)\,dx$ and $\mathrm{osc}_i = \frac{1}{\pi}h_i\|f - au_h - \overline{(f - au_h)}_i\|_{L^2(\tau_i)}$. Then
$$\eta_i - \Big(1 + \frac{\sqrt{30}}{5}\Big)\mathrm{osc}_i \le \frac{1}{\pi}\Big(12 + \frac{6ah_i^2}{5}\Big)^{1/2}|||u - u_h|||_{\tau_i}. \quad (0.16)$$
Proof. Suppose $\psi$ is differentiable over each $\tau_i$ and continuous on $[0,1]$. It is clear that
$$A(e,\psi) = \int_0^1(f - au_h)\psi\,dx - \sum_{i=1}^n\int_{x_{i-1}}^{x_i}u_h'\psi'\,dx. \quad (0.17)$$
We have
$$h_i^2\big\|\overline{(f - au_h)}_i\big\|^2_{L^2(\tau_i)} \le |||e|||_{\tau_i}|||\psi|||_{\tau_i} + \mathrm{osc}_i\,\pi h_i^{-1}\|\psi\|_{L^2(\tau_i)} = \Big(\Big(12 + \frac{6ah_i^2}{5}\Big)^{1/2}|||e|||_{\tau_i} + \frac{\pi\sqrt{30}}{5}\mathrm{osc}_i\Big)h_i\big\|\overline{(f - au_h)}_i\big\|_{L^2(\tau_i)},$$
which implies
$$h_i\big\|\overline{(f - au_h)}_i\big\|_{L^2(\tau_i)} \le \Big(12 + \frac{6ah_i^2}{5}\Big)^{1/2}|||e|||_{\tau_i} + \frac{\pi\sqrt{30}}{5}\mathrm{osc}_i.$$
Now the proof is completed by using $\eta_i \le \frac{1}{\pi}h_i\big\|\overline{(f - au_h)}_i\big\|_{L^2(\tau_i)} + \mathrm{osc}_i$. □
We remark that the term osci is of high order compared to ηi if f and a are
smooth enough on τi .
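The estimator (0.14) is computable from $u_h$ and the data alone. A minimal sketch follows (the representation of f and a as a Python callable and a constant, the array U of nodal values of $u_h$ with U[0] = 0, and the simple trapezoidal quadrature are all assumptions chosen for illustration):

import numpy as np

def local_estimators(x, U, f, a, nq=20):
    # eta_i = (1/pi) * h_i * ||f - a*u_h||_{L2(tau_i)} for the P1 solution
    eta = np.zeros(len(x) - 1)
    for i in range(len(x) - 1):
        h = x[i+1] - x[i]
        s = np.linspace(x[i], x[i+1], nq)            # quadrature points on tau_i
        uh = U[i] + (U[i+1] - U[i]) * (s - x[i]) / h # linear interpolant on tau_i
        r = f(s) - a * uh                            # element residual f - a*u_h
        eta[i] = h / np.pi * np.sqrt(np.trapz(r**2, s))
    return eta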
Example 2.5. We solve the following problem by the linear finite element
method.
− u00 + 10000u = 1, 0 < x < 1,
u(0) = u(1) = 0.
The true solution (see Fig. 2) is
$$u = \frac{1}{10000}\Big(1 - \frac{e^{100x} + e^{100(1-x)}}{1 + e^{100}}\Big).$$
If we use the uniform mesh obtained by dividing the interval [0, 1] into 1051 subin-
tervals of equal length, then the error |||u − uh ||| = 2.7438 × 10−5 . On the other
hand, if we use a non-uniform mesh as shown in Fig. 2 which also contains 1051
subintervals, then the error |||u − uh ||| = 1.9939 × 10−6 is smaller than that obtained
by using the uniform mesh.
Figure 2. Example 2.5. The finite element solution and the mesh.
3. Exercises
Exercise 0.1. Prove (0.10) and (0.11).
Exercise 0.2. Let $u\in V$. Show that the interpolant $u_I\in V_h$ is the best approximation of $u$ in the norm $\big\|\frac{d}{dx}\cdot\big\|_{L^2([0,1])}$, that is,
$$\|(u - u_I)'\|_{L^2([0,1])} = \inf_{v_h\in V_h}\|(u - v_h)'\|_{L^2([0,1])}.$$
Exercise 0.3. Use Example 2.5 to verify numerically the a posteriori error
estimates in Theorem 2.3 and 2.4.
CHAPTER 1
We write
$$\partial_{x_i}f = \frac{\partial f}{\partial x_i} = g_i, \quad i = 1,\dots,d, \qquad \nabla f = \Big(\frac{\partial f}{\partial x_1},\dots,\frac{\partial f}{\partial x_d}\Big)^T.$$
Similarly, for a multi-index α = (α1 , α2 , · · · , αd ) ∈ Nd with length |α| =
α1 + α2 + · · · + αd , ∂ α f ∈ L1loc (Ω) is defined by
$$\int_\Omega \partial^\alpha f\,\phi\,dx = (-1)^{|\alpha|}\int_\Omega f\,\partial^\alpha\phi\,dx \quad \forall\,\phi\in C_0^\infty(\Omega),$$
where $\partial^\alpha = \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2}\cdots\partial_{x_d}^{\alpha_d}$.
Example 1.2. Let $d = 1$, $\Omega = (-1,1)$, and $f(x) = 1 - |x|$. The weak derivative of $f$ is
$$g(x) = \begin{cases}1 & \text{if } x \le 0,\\ -1 & \text{if } x > 0.\end{cases}$$
The weak derivative of $g$ does not exist.
Definition 1.3 (Sobolev space). For a non-negative integer k and a real
p > 1, we define
W k,p (Ω) = {u ∈ Lp (Ω) : ∂ α u ∈ Lp (Ω) for all |α| 6 k}.
The space is a Banach space with the norm
$$\|u\|_{W^{k,p}(\Omega)} = \begin{cases}\Big(\sum_{|\alpha|\le k}\|\partial^\alpha u\|^p_{L^p(\Omega)}\Big)^{1/p}, & 1\le p < +\infty,\\[1ex]\max_{|\alpha|\le k}\|\partial^\alpha u\|_{L^\infty(\Omega)}, & p = +\infty.\end{cases}$$
The closure of C0∞ (Ω) in W k,p (Ω) is denoted by W0k,p (Ω). It is also a Banach
space. When p = 2, we denote
H k (Ω) = W k,2 (Ω), H0k (Ω) = W0k,2 (Ω).
The space $H^k(\Omega)$ is a Hilbert space when equipped with the inner product
$$(u,v)_{k,\Omega} = \sum_{|\alpha|\le k}\int_\Omega \partial^\alpha u\,\partial^\alpha v\,dx.$$
Example 1.4.
(1) Let $\Omega = (0,1)$ and consider the function $u = x^\alpha$. One easily verifies that $u\in L^2(\Omega)$ if $\alpha > -\frac12$, $u\in H^1(\Omega)$ if $\alpha > \frac12$, and $u\in H^k(\Omega)$ if $\alpha > k - \frac12$.
(2) Let $\Omega = \{x\in\mathbb R^2 : |x| < 1/2\}$ and consider the function $f(x) = \log\big|\log|x|\big|$. Then $f\in W^{1,p}(\Omega)$ for $p\le 2$ but $f\notin L^\infty(\Omega)$. This example shows that functions in $H^1(\Omega)$ are not necessarily continuous or bounded.
Lemma 1.1.
(i) If $u\in L^1_{loc}(\mathbb R^d)$, then for every $\epsilon > 0$, $u_\epsilon\in C^\infty(\mathbb R^d)$ and $\partial^\alpha(\rho_\epsilon * u) = (\partial^\alpha\rho_\epsilon) * u$ for each multi-index $\alpha$;
(ii) If $u\in C(\mathbb R^d)$, then $u_\epsilon$ converges uniformly to $u$ on compact subsets of $\mathbb R^d$;
(iii) If $u\in L^p(\mathbb R^d)$, $1\le p < \infty$, then $u_\epsilon\in L^p(\mathbb R^d)$, $\|u_\epsilon\|_{L^p(\mathbb R^d)} \le \|u\|_{L^p(\mathbb R^d)}$, and $\lim_{\epsilon\to 0}\|u_\epsilon - u\|_{L^p(\mathbb R^d)} = 0$.
[Figure: a boundary point $x$, the cube $Q(x,r)$, and the domain $\Omega$ with boundary $\partial\Omega$.]
Proof. We only give the proof of the first inequality. Assume it is false.
Then there exists a sequence {un } ⊂ W01,p (Ω) such that
1
∥un ∥Lp (Ω) = 1, ∥∇un ∥Lp (Ω) 6 .
n
By the compactness imbedding theorem, there exists a subsequence (still denoted by $u_n$) and a function $u\in L^p(\Omega)$ such that $u_n\to u$ in $L^p(\Omega)$. By
the completeness of Lp (Ω) we know that ∇un → 0 in Lp (Ω)d . Thus, by the
definition of weak derivative, ∇u = 0, which implies, by Lemma 1.2, that
u = 0. This contradicts the fact that ∥u∥Lp (Ω) = 1.
Next we study the trace of functions in W k,p for which we first introduce
the Sobolev spaces of non-integer order k. There are several definitions of
fractional Sobolev spaces which unfortunately are not equivalent. Here we
shall use the following one.
Likewise, when $p = \infty$, $W^{s,\infty}(\Omega)$ is the set of all functions $u\in W^{k,\infty}(\Omega)$ such that
$$\max_{|\alpha| = k}\ \operatorname*{ess\,sup}_{x,y\in\Omega,\,x\neq y}\frac{|\partial^\alpha u(x) - \partial^\alpha u(y)|}{|x - y|^\sigma} < \infty.$$
When $p < \infty$, $W^{s,p}(\Omega)$ is a Banach space with the norm
$$\|u\|_{W^{s,p}(\Omega)} = \Big(\|u\|^p_{W^{k,p}(\Omega)} + \sum_{|\alpha| = k}\int_\Omega\int_\Omega\frac{|\partial^\alpha u(x) - \partial^\alpha u(y)|^p}{|x - y|^{d + \sigma p}}\,dx\,dy\Big)^{1/p},$$
where we have used the integration by parts formula in Theorem 1.13 in the
first term on the left hand side. There are no boundary terms since φ = 0 on
∂Ω. By the density argument we deduce that (1.8) is valid for any φ ∈ H01 (Ω),
and the resulting equation makes sense if u ∈ H01 (Ω). We choose the space
More generally, we can consider the boundary value problem (1.1) for $f\in H^{-1}(\Omega)$, the dual space of $H_0^1(\Omega)$. For example, $f$ is defined by
$$\langle f,\varphi\rangle = \int_\Omega\Big(f_0\varphi + \sum_{i=1}^d f_i\frac{\partial\varphi}{\partial x_i}\Big)dx \quad \forall\,\varphi\in H_0^1(\Omega),$$
Proof. Denote by (·, ·) the inner product on V. From the Riesz repre-
sentation theorem, there exist two bounded linear operators J : U → V and
K : V ′ → V such that
(Ju, v) = a(u, v) ∀u ∈ U, v ∈ V,
(Kf, v) = ⟨f, v⟩ ∀v ∈ V, f ∈ V ′ .
Then the problem (1.12) is equivalent to: Find u ∈ U such that
Ju = Kf (1.13)
Since
sup |a(u, v)| = sup |(Ju, v)| = ∥Ju∥V , (1.14)
v∈V,∥v∥V =1 v∈V,∥v∥V =1
If (i) and (ii) hold, then from (1.15) J is injective and R(J), the range of
J, is closed. It follows from (ii) that for any v ∈ V, v ̸= 0,
sup |a(u, v)| = sup |(Ju, v)| > 0
u∈U u∈U
1.3. Exercises
Exercise 1.1. If Ω is an open subset in Rd and K is a compact subset
of Ω, show that there exists a function φ ∈ C0∞ (Rd ) such that supp(φ) ⊂ Ω
and φ = 1 in K.
$$a(u_h,\phi_i) = \langle f,\phi_i\rangle, \quad i = 1,\dots,N,$$
$$\sum_{j=1}^N a(\phi_j,\phi_i)z_j = \langle f,\phi_i\rangle, \quad i = 1,\dots,N,$$
or, in matrix form,
$$Az = b,$$
and so z T Az > 0, for any z ̸= 0. The matrix A is called the stiffness matrix .
Theorem 2.1 (Céa Lemma). Suppose the bilinear form a(·, ·) satisfies
(1.9) and (1.10), i.e., a is bounded and V-elliptic. Suppose u and uh are
the solutions of the variational problem (2.1) and its Galerkin approximation
(2.2), respectively. Then
$$\|u - u_h\|_V \le \frac{\beta}{\alpha}\inf_{v_h\in V_h}\|u - v_h\|_V. \quad (2.4)$$
Proof. Since Vh ⊂ V, by the definition of u and uh ,
a(u, vh ) = ⟨f, vh ⟩ ∀ vh ∈ Vh ,
a(uh , vh ) = ⟨f, vh ⟩ ∀ vh ∈ Vh .
By subtraction we obtain the Galerkin orthogonality
a(u − uh , vh ) = 0 ∀ vh ∈ Vh , (2.5)
which implies that a(u − uh , vh − uh ) = 0. Thus
α∥u − uh ∥2V 6 a(u − uh , u − uh ) = a(u − uh , u − vh )
6 β∥u − uh ∥V ∥u − vh ∥V .
After dividing by ∥u − uh ∥V , the assertion is established.
The nodal basis $\{\lambda_1(x),\dots,\lambda_{d+1}(x)\}$ of the linear element satisfies $\lambda_i(A_j) = \delta_{ij}$, which determines it uniquely since any two linear functions are equal if they coincide at the vertices $A_i$, $i = 1,\dots,d+1$. Note that the barycenter of $K$ has barycentric coordinates $\big(\frac{1}{d+1},\dots,\frac{1}{d+1}\big)$.
[Figure: a triangle with vertices $A_1$, $A_2$, $A_3$.]
Therefore
$$P|_{A_2A_3} = P(0,\lambda_2) = 0. \quad (2.10)$$
On the other hand,
$$\frac{\partial P}{\partial\lambda_1}(A_j) = \frac{\partial}{\partial\lambda_2}\frac{\partial P}{\partial\lambda_1}(A_j) = 0, \quad j = 2,3.$$
Since $\nabla\lambda_1$ is parallel to the unit outer normal to $A_2A_3$, $\frac{\partial P}{\partial\lambda_1}(M_1) = 0$. Noticing that $\frac{\partial P}{\partial\lambda_1}\big|_{A_2A_3}$ is a fourth order polynomial, we have $\frac{\partial P}{\partial\lambda_1}\big|_{A_2A_3} \equiv 0$, that is,
$$\frac{\partial P}{\partial\lambda_1}(0,\lambda_2) = 0. \quad (2.11)$$
Combining (2.10) and (2.11), we have $P = \lambda_1^2P_1$. Similarly, $P = \lambda_1^2\lambda_2^2\lambda_3^2Q$. Since the degree of $P$ is at most five while $\lambda_1^2\lambda_2^2\lambda_3^2$ has degree six, $Q\equiv 0$, and hence $P\equiv 0$.
Definition 2.6. Given a finite element (K, P, N ), let the set {ψi : 1 6
i 6 n} ⊂ P be the nodal basis of P. If v is a function for which all Ni ∈ N ,
i = 1, · · · , n, are defined, then we define the local interpolant by
$$I_Kv := \sum_{i=1}^n N_i(v)\psi_i.$$
Proof. We only prove the case k = 1. For k > 1, the assertion follows
from a consideration of the derivatives of order k − 1.
Let $v\in C(\bar\Omega)$. For $i = 1,2$, define
$$w_i(x) = \frac{\partial v}{\partial x_i} \quad\text{for } x\in\Omega,$$
where on the edges we can take either of the two limiting values. Let $\varphi\in C_0^\infty(\Omega)$. Then
$$\int_\Omega\varphi w_i\,dx = \sum_{K\in\mathcal M_h}\int_K\varphi\frac{\partial v}{\partial x_i}\,dx = \sum_{K\in\mathcal M_h}\Big(-\int_K\frac{\partial\varphi}{\partial x_i}v\,dx + \int_{\partial K}\varphi v\,n_i\,ds\Big) = -\int_\Omega\frac{\partial\varphi}{\partial x_i}v\,dx,$$
[Figure: two adjacent elements $K_1$ and $K_2$ sharing an edge, with a ball $B$ centered at a point $x$ on the common edge.]
$$\int_{K_i}\nabla v\cdot\boldsymbol\varphi\,dx = -\int_{K_i}v\,\nabla\cdot\boldsymbol\varphi\,dx + \int_{\partial K_i}v_i(\boldsymbol\varphi\cdot n_{K_i})\,ds, \quad i = 1,2.$$
Thus
$$\int_e(v_1 - v_2)\phi\,ds = 0 \quad \forall\,\phi\in C_0^\infty(B).$$
Vh = {v : v|K ∈ P1 , ∀K ∈ Mh , v is continuous
at the vertices of the elements}.
In forming the sum, we need only take account of those triangles which
overlap the support of both ϕi and ϕj . Note that Aij = 0 if the xi and xj
are not adjacent. The stiffness matrix A = (Aij ) is sparse.
In practice, for every element $K\in\mathcal M_h$, we compute the additive contribution from (2.12) to the stiffness matrix. On each element $K$, a nodal basis function reduces to one of the barycentric coordinate functions $\lambda_p$, $p = 1,2,\dots,d+1$. Thus we need only evaluate the following $(d+1)\times(d+1)$ matrix
$$A_K:\quad (A_K)_{pq} = \int_K a(x)\nabla\lambda_q\cdot\nabla\lambda_p\,dx. \quad (2.13)$$
Here $A_K$ is called the element stiffness matrix. Denote by $K_p$ the global index of the $p$-th vertex of the element $K$. Then $\phi_{K_p}|_K = \lambda_p$ and the global stiffness matrix may be assembled through the element stiffness matrices as
$$A_{ij} = \sum_{\substack{K,p,q\\ K_p = i,\ K_q = j}}(A_K)_{pq}. \quad (2.14)$$
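The element-by-element assembly (2.13)-(2.14) can be sketched in a few lines. The code below is an illustration only (a constant coefficient a on each element and dense storage are assumptions made to keep the sketch short); the gradients of the barycentric coordinates are computed from the vertex coordinates.

import numpy as np

def element_stiffness(p, a=1.0):
    # element stiffness matrix (2.13) for a triangle with vertices p (3x2 array)
    B = np.array([p[1] - p[0], p[2] - p[0]]).T     # columns = edge vectors
    area = 0.5 * abs(np.linalg.det(B))
    Binv = np.linalg.inv(B)                        # rows = grad(lambda_2), grad(lambda_3)
    grads = np.vstack([-Binv[0] - Binv[1], Binv[0], Binv[1]])
    return a * area * grads @ grads.T              # (A_K)_{pq} = a*|K|*grad(l_q).grad(l_p)

def assemble(nodes, elements, a=1.0):
    # global stiffness matrix (2.14): add A_K into the rows/columns of the
    # global vertex indices; elements is an (M,3) integer array of triangles
    N = len(nodes)
    A = np.zeros((N, N))
    for K in elements:
        AK = element_stiffness(nodes[K], a)
        for p in range(3):
            for q in range(3):
                A[K[p], K[q]] += AK[p, q]
    return A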
[Figure: the patch of triangles $K^{I},\dots,K^{VI}$ sharing the nodes $x_i$ and $x_j$, with the local vertex numbering on each triangle.]
P̂ = P1 , and N̂ = {N̂i , i = 1, · · · , d + 1}, where N̂i (p) = p(Âi ) for any p ∈ P̂,
where {Âi } is the set of vertices of K̂.
is exact for polynomials of degree $\le 2$. Here $\hat a_{12}$, $\hat a_{23}$, and $\hat a_{13}$ are the mid-edge points of $\hat K$.
The quadrature formula
$$\int_{\hat K}\hat\varphi(\hat x)\,d\hat x \sim \frac{|\hat K|}{60}\Big(3\sum_{i=1}^3\hat\varphi(\hat a_i) + 8\sum_{1\le i<j\le 3}\hat\varphi(\hat a_{ij}) + 27\hat\varphi(\hat a_{123})\Big) \quad (2.18)$$
Figure 5. The reference element $\hat K$ for the quadrature formulas (2.16), (2.17), and (2.18).
Table 1 shows the sample points $(\xi_i,\eta_i)$ and weights for the Gaussian quadrature formula which is exact for polynomials of degree $\le 5$:
$$\int_{\hat K}\hat\varphi(\hat x)\,d\hat x \sim \sum_{i=1}^7 w_i\hat\varphi(\xi_i,\eta_i). \quad (2.19)$$
2.4. Exercises
Exercise 2.1. Construct the nodal basis functions for the Crouzeix-Raviart element using barycentric coordinates.
i   ξ_i               η_i               w_i
1   1/3               1/3               9/80
2   (6+√15)/21        (6+√15)/21        (155+√15)/2400
3   (9−2√15)/21       (6+√15)/21        (155+√15)/2400
4   (6+√15)/21        (9−2√15)/21       (155+√15)/2400
5   (6−√15)/21        (6−√15)/21        (155−√15)/2400
6   (9+2√15)/21       (6−√15)/21        (155−√15)/2400
7   (6−√15)/21        (9+2√15)/21       (155−√15)/2400
Table 1. The sample points $(\xi_i,\eta_i)$ and weights for the seven-point Gaussian quadrature rule over the reference element $\hat K$.
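The rule (2.19) is easy to tabulate and test. A small sketch follows (the reference triangle is taken to be the one with vertices (0,0), (1,0), (0,1), consistent with Figure 5; the exactness check uses the closed-form integrals of monomials over that triangle):

import numpy as np
from math import factorial, sqrt

s = sqrt(15.0)
points = np.array([
    [1/3, 1/3],
    [(6+s)/21, (6+s)/21], [(9-2*s)/21, (6+s)/21], [(6+s)/21, (9-2*s)/21],
    [(6-s)/21, (6-s)/21], [(9+2*s)/21, (6-s)/21], [(6-s)/21, (9+2*s)/21]])
weights = np.array([9/80] + [(155+s)/2400]*3 + [(155-s)/2400]*3)

def quad(f):
    # approximate the integral of f over the reference triangle K_hat
    return sum(w * f(x, y) for (x, y), w in zip(points, weights))

# the rule is exact for polynomials of degree <= 5:
# int_{K_hat} x^a y^b dx dy = a! b! / (a+b+2)!
for a in range(6):
    for b in range(6 - a):
        exact = factorial(a) * factorial(b) / factorial(a + b + 2)
        assert abs(quad(lambda x, y: x**a * y**b) - exact) < 1e-12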
Exercise 2.2. Show that the finite element space based on the Argyris
element is a subspace of C 1 and thus is indeed H 2 -conforming.
Exercise 2.3. Show the quadrature scheme (2.17) is exact for polyno-
mials of degree 6 2.
Exercise 2.4. Let $K$ be a triangle in $\mathbb R^2$. Compute the element mass matrix
$$M_K = \Big(\int_K\lambda_i\lambda_j\,dx\Big)_{i,j=1}^3.$$
$$\inf_{p\in P_k(\Omega)}\|v + p\|_{H^{k+1}(\Omega)} \le C(\Omega)|v|_{H^{k+1}(\Omega)} \quad \forall v\in H^{k+1}(\Omega). \quad (3.1)$$
$$\|v\|_{H^{k+1}(\Omega)} \le C(\Omega)\Big(|v|_{H^{k+1}(\Omega)} + \sum_{i=1}^N|f_i(v)|\Big) \quad \forall v\in H^{k+1}(\Omega). \quad (3.2)$$
(3.1) is a direct consequence of (3.2) because for any $v\in H^{k+1}(\Omega)$ there exists a $p\in P_k(\Omega)$ such that $f_i(p) = -f_i(v)$, $1\le i\le N$.
$$\|v_n\|_{H^{k+1}(\Omega)} = 1, \qquad |v_n|_{H^{k+1}(\Omega)} + \sum_{i=1}^N|f_i(v_n)| \le \frac{1}{n}. \quad (3.3)$$
i=1
Since, by (3.3), |vn |H k+1 (Ω) → 0, and since the space H k+1 (Ω) is complete,
we conclude from (3.4) , that the sequence {vn } converges in H k+1 (Ω). The
limit v of this sequence satisfies
$$|v|_{H^{k+1}(\Omega)} + \sum_{i=1}^N|f_i(v)| = 0.$$
Thus, it follows from Lemma 1.2 that v ∈ Pk (Ω), and hence v = 0. But this
contradicts the equality ∥v∥H k+1 (Ω) = 1.
The error analysis of the finite element method depends on the scaling
argument which makes use of the relation of Sobolev norms under the affine
transform.
F : Ω̂ → Ω, F x̂ = B x̂ + b
This completes the proof of the first inequality. The other inequality is proved
in a similar fashion.
[Figure 1: the points $\hat y$, $\hat z$ in $\hat\Omega$ and their images $F(\hat y)$, $F(\hat z)$ in $\Omega$.]
Given ξ ∈ Rd so that |ξ| = ρ̂, there exist ŷ, ẑ ∈ Ω̂ such that ŷ − ẑ = ξ (see
Figure 1). Bξ = F (ŷ) − F (ẑ) with F (ŷ), F (ẑ) ∈ Ω. We deduce |Bξ| 6 h.
This proves the first inequality in (3.5). The second inequality can be proved
similarly. The last two inequalities are consequences of the identity |det B| =
|Ω| /|Ω̂|.
Theorem 3.2. Suppose m − d/2 > l. Let (K̂, P̂, N̂ ) be a finite element
satisfying
(i) Pm−1 ⊂ P̂ ⊂ H m (K̂);
(ii) N̂ ⊂ C l (K̂)′ .
Then for $0\le i\le m$ and $\hat v\in H^m(\hat K)$ we have
$$|\hat v - \hat I\hat v|_{H^i(\hat K)} \le C(m,d,\hat K)|\hat v|_{H^m(\hat K)}.$$
$$\|\hat I\hat u\|_{H^i(\hat K)} = \Big\|\sum_{j=1}^n\hat N_j(\hat u)\hat\phi_j\Big\|_{H^i(\hat K)} \le \sum_{j=1}^n|\hat N_j(\hat u)|\,\|\hat\phi_j\|_{H^i(\hat K)} \le \sum_{j=1}^n\|\hat N_j\|_{C^l(\hat K)'}\|\hat\phi_j\|_{H^m(\hat K)}\|\hat u\|_{C^l(\hat K)} \le C\|\hat u\|_{C^l(\hat K)} \le C\|\hat u\|_{H^m(\hat K)}.$$
Here we have used the Sobolev Imbedding Theorem 1.8 in the last inequality.
Next by Theorem 3.1
Definition 3.3. Let $(\hat K,\hat P,\hat N)$ be a finite element and $x = F(\hat x) = B\hat x + b$ be an affine map. Let $v = \hat v\circ F^{-1}$. The finite element $(K,P,N)$ is affine-interpolation equivalent to $(\hat K,\hat P,\hat N)$ if
(i) $K = F(\hat K)$;
(ii) $P = \{\hat p\circ F^{-1} : \hat p\in\hat P\}$;
(iii) $\widehat{Iv} = \hat I\hat v$.
Here $\hat I\hat v$ and $Iv$ are the $(\hat K,\hat P,\hat N)$-interpolant and the $(K,P,N)$-interpolant, respectively.
Theorem 3.5. Let (K̂, P̂, N̂ ) satisfy the conditions of Theorem 3.2 and
let (K, P, N ) be affine-interpolation equivalent to (K̂, P̂, N̂ ). Then for 0 6
i 6 m and v ∈ H m (K) we have
Now we consider the inverse estimates which are useful in the error anal-
ysis of finite element methods. We first introduce the quasi-uniform meshes.
We assume the bilinear form a : H01 (Ω) × H01 (Ω) → R is bounded and H01 (Ω)-
elliptic:
|a(u, v)| 6 β∥u∥H 1 (Ω) ∥v∥H 1 (Ω) , a(u, u) > α∥u∥2H 1 (Ω) , ∀u, v ∈ H01 (Ω).
Then we know from Lax-Milgram Lemma that (3.8) and (3.9) have a unique
solution u, uh , respectively.
Theorem 3.9. If the solution $u\in H_0^1(\Omega)$ has the regularity $u\in H^2(\Omega)$, then there exists a constant $C$ independent of $h$ such that
$$\|u - u_h\|_{H^1(\Omega)} \le Ch\|u\|_{H^2(\Omega)}.$$
If the solution of the problem (3.8) does not belong to $H^2(\Omega)$, we still have convergence of the finite element method.
For u ∈ H01 (Ω) and any ϵ > 0, there exists a function vϵ ∈ C0∞ (Ω) such that
∥u − vϵ ∥H 1 (Ω) 6 ϵ.
Thus
By letting h → 0 we get
Bibliographic notes. The results in this chapter are taken from Ciarlet
[23] to which we refer for further developments in the finite element a priori
error analysis. The Deny-Lions Theorem is from [26]. The Bramble-Hilbert
Lemma is proved in [13].
3.4. Exercises
Exercise 3.1. Let m > 0 and let 1 6 p 6 ∞. Show that, under the
conditions of Lemma 3.2, there exists a constant C = C(m, p, d) such that
|v̂|W m,p (Ω̂) 6 C∥B∥m | det B|−1/p |v|W m,p (Ω) ,
|v|W m,p (Ω) 6 C∥B −1 ∥m | det B|1/p |v̂|W m,p (Ω̂) .
[Figure: the sector $S_\omega$ of opening angle $\omega$, with boundary edges $\Gamma_1$ ($\theta = 0$) and $\Gamma_2$.]
which implies
µ′′ (θ) + α2 µ(θ) = 0.
Therefore $\mu(\theta) = A\sin\alpha\theta + B\cos\alpha\theta$. The boundary condition $\mu(0) = \mu(\omega) = 0$ yields $\alpha = k\pi/\omega$ and $\mu(\theta) = A\sin(\frac{k\pi}{\omega}\theta)$, $k = 1,2,3,\dots$. Therefore, the boundary value problem $\triangle u = 0$ in $S_\omega$, $u = 0$ on $\Gamma_1\cup\Gamma_2$, has a solution
$$u = r^\alpha\sin(\alpha\theta), \qquad \alpha = \frac{\pi}{\omega}.$$
Lemma 4.1. u ̸∈ H 2 (Sω ∩ BR ) for any R > 0 if π < ω < 2π.
for the linear finite element approximation of the L-shaped problem over
uniform triangulations:
∥u − uh ∥H 1 (Ω) 6 Ch2/3 . (4.1)
The implementation details of this example are given in Section 10.2.
[Figure: the L-shaped domain (left) and the $H^1$ error versus the number of uniform refinements $j$ in log-log coordinates, with slope $-2/3$ (right).]
for any $\hat\psi\in L^1(\hat S_j)$. For any $\psi\in L^1(\Omega)$, denote by $\hat\psi_j = \psi\circ F_j$. Let $\{x_j\}_{j=1}^J$ be the set of interior nodes. The Clément interpolation operators $\Pi_h$ and $\Pi_h^0$ are then defined by
$$\Pi_h: L^1(\Omega)\to V_h, \quad \Pi_h\psi = \sum_{j=1}^{\bar J}(\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\phi_j,$$
$$\Pi_h^0: L^1(\Omega)\to V_h^0, \quad \Pi_h^0\psi = \sum_{j=1}^J(\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\phi_j.$$
$$\le C\|\nabla\psi\|_{L^2(\tilde K)}.$$
On the other hand, by the scaled trace inequality in Exercise 3.3, for $e\subset\partial K$ for some $K\in\mathcal M_h$,
$$\|\psi - \Pi_h^0\psi\|_{L^2(e)} \le C\big(h_e^{-1/2}\|\psi - \Pi_h^0\psi\|_{L^2(K)} + h_e^{1/2}\|\nabla(\psi - \Pi_h^0\psi)\|_{L^2(K)}\big) \le Ch_e^{1/2}\|\nabla\psi\|_{L^2(\tilde K)} \le Ch_e^{1/2}\|\nabla\psi\|_{L^2(\tilde e)}.$$
2°) We have, from Theorem 3.1 and the inverse inequality, for any $\hat\psi\in H^1(\hat S_j)$,
$$\|\hat\psi - \hat R_j\hat\psi\|_{L^2(\hat S_j)} \le \inf_{\hat p\in P_1(\hat S_j)}\|\hat\psi - \hat p\|_{L^2(\hat S_j)} \le C\|\nabla\hat\psi\|_{L^2(\hat S_j)}, \quad (4.8)$$
$$\|\nabla\hat R_j\hat\psi\|_{L^2(\hat S_j)} = \|\nabla\hat R_j(\hat\psi - \hat\psi_{\hat S_j})\|_{L^2(\hat S_j)} \le C\big\|\hat R_j\hat\psi - \hat\psi_{\hat S_j}\big\|_{L^2(\hat S_j)} \le C\big\|\hat\psi - \hat\psi_{\hat S_j}\big\|_{L^2(\hat S_j)} \le C\|\nabla\hat\psi\|_{L^2(\hat S_j)}. \quad (4.9)$$
Denote by $h_j$ the diameter of $S_j$. Since $\sum_{j=1}^{\bar J}\phi_j = 1$, we have
$$\begin{aligned}
\|\psi - \Pi_h\psi\|_{L^2(K)} &= \Big\|\sum_{x_j\in K}\big(\psi - (\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\big)\phi_j\Big\|_{L^2(K)}\\
&\le C\sum_{x_j\in K}\big\|\psi - (\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\big\|_{L^2(S_j)}\\
&\le C\sum_{x_j\in K}h_j^{d/2}\big\|\hat\psi_j - (\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\big\|_{L^2(\hat S_j)}\\
&\le C\sum_{x_j\in K}h_j^{d/2}\Big(\big\|\hat\psi_j - \hat R_j\hat\psi_j\big\|_{L^2(\hat S_j)} + \big\|\hat R_j\hat\psi_j - (\hat R_j\hat\psi_j)(F_j^{-1}(x_j))\big\|_{L^2(\hat S_j)}\Big)\\
&\le C\sum_{x_j\in K}h_j^{d/2}\Big(\big\|\hat\psi_j - \hat R_j\hat\psi_j\big\|_{L^2(\hat S_j)} + \big\|\nabla\hat R_j\hat\psi_j\big\|_{L^2(\hat S_j)}\Big)\\
&\le C\sum_{x_j\in K}h_j^{d/2}\big\|\nabla\hat\psi_j\big\|_{L^2(\hat S_j)} \le Ch_K\|\nabla\psi\|_{L^2(\tilde K)}.
\end{aligned}$$
For any domain G ⊂ Ω let ∥| · |∥G = ∥a1/2 ∇ · ∥L2 (G) . Note that ∥| · |∥Ω is
the energy norm in H01 (Ω).
Theorem 4.4 (Local lower bound ). There exists a constant C2 > 0 which
depends only on the minimum angle of the mesh Mh and the maximum value
of a(x) such that for any K ∈ Mh
$$\eta_K^2 \le C_2\||u - u_h|\|^2_{K^*} + C_2\sum_{K\subset K^*}h_K^2\|f - f_K\|^2_{L^2(K)},$$
where $f_K = \frac{1}{|K|}\int_Kf\,dx$ and $K^*$ is the union of all elements sharing at least one common side with $K$.
and thus
$$h_K^{-1}\|\varphi\|_{L^2(K)},\ \|\nabla\varphi\|_{L^2(K)} \le C|\alpha_K|\,h_K^{-1}|K|^{1/2} \le C\|h_Kf_K\|_{L^2(K)}.$$
Therefore,
$$\|h_Kf\|^2_{L^2(K)} \le C\big(\||u - u_h|\|^2_K + \|h_K(f - f_K)\|^2_{L^2(K)}\big).$$
This completes the proof upon using the estimate for $\|h_Kf\|_{L^2(K)}$.
The lower bound in Theorem 4.4 implies that, up to the high order quantity $\big(\sum_{K\subset K^*}h_K^2\|f - f_K\|^2_{L^2(K)}\big)^{1/2}$, the local energy error $\||u - u_h|\|_{K^*}$ is bounded from below by the error indicator $\eta_K$.
Here m > 1 is a fixed number. For example, in the case of one time bisection,
m = 2. We remark that some unmarked simplices may be refined in the step
of removing hanging nodes.
Let
$$\tilde\eta_K^2 := \tilde h_K^2\|f\|^2_{L^2(K)} + \sum_{e\subset\partial K}\tilde h_K\|J_e\|^2_{L^2(e)}, \quad\text{where } \tilde h_K := |K|^{1/d}. \quad (4.16)$$
$$c_2\eta_K \le \tilde\eta_K \le c_1\eta_K. \quad (4.17)$$
The modified error indicator $\tilde\eta_K$ enjoys the following reduction property.
Lemma 4.3. Let $\hat{\mathcal M}_H\subset\mathcal M_H$ be the set of elements marked for refinement and let $\mathcal M_h$ be a refinement of $\mathcal M_H$ satisfying the assumption (4.15). Then there exists a constant $C_3$ depending only on the minimum angle of the meshes and the maximum value of $a(x)$ such that, for any $\delta > 0$,
$$\tilde\eta^2_{\mathcal M_h} \le (1+\delta)\Big(\tilde\eta^2_{\mathcal M_H} - \Big(1 - \frac{1}{\sqrt[d]{m}}\Big)\tilde\eta^2_{\hat{\mathcal M}_H}\Big) + \Big(1 + \frac{1}{\delta}\Big)C_3\||u_h - u_H|\|^2_\Omega.$$
have
$$\begin{aligned}
I &\le (1+\delta)\sum_{K\subset K'\in\mathcal M_H\setminus\hat{\mathcal M}_H}\Big(\tilde h_K^2\|f\|^2_{L^2(K)} + \sum_{e\subset\partial K\cap\Omega}\tilde h_K\big\|[[a\nabla u_H]]\cdot\nu_e\big\|^2_{L^2(e)}\Big)\\
&\quad + (1+\delta)\sum_{K\subset K'\in\hat{\mathcal M}_H}\Big(\tilde h_K^2\|f\|^2_{L^2(K)} + \sum_{e\subset\partial K\cap\Omega}\tilde h_K\big\|[[a\nabla u_H]]\cdot\nu_e\big\|^2_{L^2(e)}\Big)\\
&\le (1+\delta)\sum_{K'\in\mathcal M_H\setminus\hat{\mathcal M}_H}\Big(\tilde H_{K'}^2\|f\|^2_{L^2(K')} + \sum_{e'\subset\partial K'\cap\Omega}\tilde H_{K'}\big\|[[a\nabla u_H]]\cdot\nu_{e'}\big\|^2_{L^2(e')}\Big)\\
&\quad + \frac{1+\delta}{\sqrt[d]{m}}\sum_{K'\in\hat{\mathcal M}_H}\Big(\tilde H_{K'}^2\|f\|^2_{L^2(K')} + \sum_{e'\subset\partial K'\cap\Omega}\tilde H_{K'}\big\|[[a\nabla u_H]]\cdot\nu_{e'}\big\|^2_{L^2(e')}\Big)\\
&= (1+\delta)\tilde\eta^2_{\mathcal M_H\setminus\hat{\mathcal M}_H} + \frac{1+\delta}{\sqrt[d]{m}}\tilde\eta^2_{\hat{\mathcal M}_H} = (1+\delta)\Big(\tilde\eta^2_{\mathcal M_H} - \Big(1 - \frac{1}{\sqrt[d]{m}}\Big)\tilde\eta^2_{\hat{\mathcal M}_H}\Big).
\end{aligned}$$
Next we estimate $II$. For any $e\in\mathcal B_h$, denote by $K_1$ and $K_2$ the two elements having the common side $e$. We have
$$\begin{aligned}
II &\le C\Big(1+\frac{1}{\delta}\Big)\sum_{e\in\mathcal B_h}h_e\big\|[[a\nabla(u_h - u_H)]]\cdot\nu_e\big\|^2_{L^2(e)}\\
&= C\Big(1+\frac{1}{\delta}\Big)\sum_{e\in\mathcal B_h}h_e\big\|a\nabla(u_h - u_H)|_{K_1}\cdot\nu_1 + a\nabla(u_h - u_H)|_{K_2}\cdot\nu_2\big\|^2_{L^2(e)}\\
&\le C\Big(1+\frac{1}{\delta}\Big)\sum_{e\in\mathcal B_h}h_e\big(\|a\nabla(u_h - u_H)|_{K_1}\|^2_{L^2(e)} + \|a\nabla(u_h - u_H)|_{K_2}\|^2_{L^2(e)}\big)\\
&\le C\Big(1+\frac{1}{\delta}\Big)\sum_{e\in\mathcal B_h}\|a\nabla(u_h - u_H)\|^2_{L^2(K_1\cup K_2)} \le \Big(1+\frac{1}{\delta}\Big)C_3\||u_h - u_H|\|^2_\Omega.
\end{aligned}$$
Theorem 4.5. Let θ ∈ (0, 1], and let {Mk , uk }k>0 be the sequence of
meshes and discrete solutions produced by the adaptive finite element algo-
rithm based on the Dörfler marking strategy and the assumption (4.15). Sup-
pose the family of meshes {Mk } is shape regular. Then there exist constants
γ > 0, C0 > 0, and 0 < α < 1, depending solely on the shape-regularity of
{Mk }, m, and the marking parameter θ, such that
$$\big(\||u - u_k|\|^2_\Omega + \gamma\eta^2_{\mathcal M_k}\big)^{1/2} \le C_0\alpha^k. \quad (4.18)$$
Proof. We first show that there exist constants $\gamma_0 > 0$ and $0 < \alpha < 1$ such that
$$\||u - u_{k+1}|\|^2_\Omega + \gamma_0\tilde\eta^2_{\mathcal M_{k+1}} \le \alpha^2\big(\||u - u_k|\|^2_\Omega + \gamma_0\tilde\eta^2_{\mathcal M_k}\big). \quad (4.19)$$
For convenience, we use the notation
$$e_k := \||u - u_k|\|_\Omega, \qquad \tilde\eta_k := \tilde\eta_{\mathcal M_k}, \qquad \lambda := 1 - \frac{1}{\sqrt[d]{m}}.$$
From Lemma 4.2, Lemma 4.3, and the Dörfler strategy, we know that
$$\tilde\eta^2_{k+1} \le (1+\delta)\big(1 - \lambda\theta^2\big)\tilde\eta_k^2 + \Big(1+\frac{1}{\delta}\Big)C_3\big(e_k^2 - e_{k+1}^2\big). \quad (4.20)$$
Next by Theorem 4.3 and (4.17) we have
$$e_k^2 \le \tilde C_1\tilde\eta_k^2, \quad\text{where } \tilde C_1 = C_1/c_2. \quad (4.21)$$
k
( 1)
Let β = 1+ C3 . Then, it follows from (4.20) and (4.21) that, for 0 < ζ < 1,
δ
1 2 1 ( )
e2k+1 + η̃k+1 6e2k + (1 + δ) 1 − λθ2 η̃k2
β β
( ( ))
6ζ e2k + (1 − ζ)C e1 + 1 (1 + δ) 1 − λθ2 η̃ 2
k
β
( 1 ( −1 ( )) )
=ζ e2k + βζ (1 − ζ)C e1 + ζ −1 (1 + δ) 1 − λθ2 η̃ 2 .
k
β
We choose $\delta > 0$ such that $(1+\delta)(1-\lambda\theta^2) < 1$ and choose $\zeta$ such that $\beta\zeta^{-1}(1-\zeta)\tilde C_1 + \zeta^{-1}(1+\delta)(1-\lambda\theta^2) = 1$, which amounts to taking
$$\zeta = \frac{(1+\delta)(1-\lambda\theta^2) + \beta\tilde C_1}{1 + \beta\tilde C_1} < 1.$$
This implies that (4.19) holds with
$$\gamma_0 = \frac{1}{\beta} = \frac{\delta}{(1+\delta)C_3} \quad\text{and}\quad \alpha^2 = \zeta = \frac{\delta(1+\delta)(1-\lambda\theta^2) + (1+\delta)\tilde C_1C_3}{\delta + (1+\delta)\tilde C_1C_3}.$$
To conclude the proof, we note that by (4.17) $\tilde\eta_k \ge c_2\eta_k$ and thus (4.18) is valid with
$$\gamma = \gamma_0c_2^2 \quad\text{and}\quad C_0 = \big(\||u - u_0|\|^2_\Omega + \gamma_0\tilde\eta_0^2\big)^{1/2}.$$
This completes the proof of the theorem.
Example 4.6. Consider the L-shaped domain problem in Example 4.1 using the adaptive algorithm based on the maximum strategy. Figure 3 plots the
mesh after 10 adaptive iterations (left) and plots the H 1 errors ∥u − uk ∥H 1 (Ω)
versus Nk in log-log coordinates (right), where uk is the finite element ap-
proximation over Mk , the mesh after k iterations, and Nk is the total number
of degrees of freedom in $\mathcal M_k$. It shows that
$$\|u - u_k\|_{H^1(\Omega)} \approx O\big(N_k^{-1/2}\big) \quad (4.22)$$
is valid asymptotically as $k\to\infty$. We notice that the convergence rate
is quasi-optimal. The implementation details of this example are given in
Section 10.3.
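The algorithm behind Example 4.6 follows the usual solve-estimate-mark-refine loop. The following sketch only shows the structure; the functions solve, estimate, and refine are placeholders for the steps described in this chapter (they are not routines from the text), and the maximum strategy marks every element whose indicator exceeds a fraction theta of the largest one.

def adaptive_fem(mesh, tol, theta=0.5, max_iter=50):
    # generic loop: solve -> estimate -> mark (maximum strategy) -> refine
    for k in range(max_iter):
        uh = solve(mesh)                          # discrete solution on the current mesh
        eta = estimate(mesh, uh)                  # dict: element -> error indicator eta_K
        total = sum(e**2 for e in eta.values())**0.5
        if total <= tol:
            return uh, mesh
        eta_max = max(eta.values())
        marked = {K for K, e in eta.items() if e >= theta * eta_max}
        mesh = refine(mesh, marked)               # e.g. newest vertex bisection
    return uh, mesh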
[Figure 3: the mesh after 10 adaptive iterations (left) and the $H^1$ error versus the number of degrees of freedom (DoFs) in log-log coordinates, with slope $-1/2$ (right).]
4.5. Exercises
Exercise 4.1. Find the general solution of the form $u = r^\alpha\mu(\theta)$ to the Laplace equation $-\triangle u = 0$ in the sector $S_\omega$ which satisfies the boundary conditions
(i) $\dfrac{\partial u}{\partial\nu} = 0$ on $\Gamma_1\cup\Gamma_2$;
(ii) $u = 0$ on $\Gamma_1$, $\dfrac{\partial u}{\partial\nu} = 0$ on $\Gamma_2$.
Exercise 4.2. Show that there exists a constant C depending only on
the minimum angle of Mh such that (4.8) and (4.9) hold.
Exercise 4.3. Let $\Omega$ be a bounded polyhedral domain in $\mathbb R^d$ ($d = 2,3$). Prove the following error estimate for the Clément interpolation operator:
$$\|\varphi - \Pi_h\varphi\|_{H^k(K)} \le Ch_K^{2-k}|\varphi|_{H^2(\tilde K)} \quad \forall\varphi\in H^2(\Omega),\ k = 0,1.$$
Exercise 4.4. Let Ω ⊂ R2 be a bounded polygon. For f ∈ L2 (Ω) and g ∈
C(∂Ω), let u ∈ H 1 (Ω) be the weak solution of −∆u = f in Ω, u = g on ∂Ω.
Let uh ∈ Vh be the conforming linear finite element approximation such that
uh = Ih g on ∂Ω. Derive an a posteriori error estimate for ∥∇(u − uh )∥L2 (Ω) .
Exercise 4.5. Let Ω = (0, 1). Derive a posteriori error estimate for the
conforming linear finite element approximation to the two-point boundary
value problem −u′′ = f in Ω, u(0) = α, u′ (1) = β.
CHAPTER 5
where φ ∈ L2 (Ω) and ψ ∈ H01 (Ω). Then by using the Aubin-Nitsche trick
(cf. Section 3.3) we have
∥w − Pk w∥L2 (Ω) 6 Chk ∥w∥A ∀w ∈ H01 (Ω),
where hk = maxK∈Mk hK and ∥·∥A = a(·, ·)1/2 . From v − Pk−1 v = (I −
Pk−1 )(I − Pk−1 )v, we then have the following approximation property
∥(I − Pk−1 )v∥L2 (Ω) 6 Chk ∥(I − Pk−1 )v∥A ∀ v ∈ Vk . (5.4)
$$(\tilde v_k)_i = v_{k,i}, \qquad (\tilde{\tilde v}_k)_i = (v_k,\phi_k^i), \quad i = 1,\dots,n_k. \quad (5.6)$$
Let $\tilde A_k = \big[a(\phi_k^j,\phi_k^i)\big]_{i,j=1}^{n_k}$ be the stiffness matrix. We have the following matrix representation of (5.5):
$$\tilde A_k\tilde u_k = \tilde{\tilde f}_k. \quad (5.7)$$
We want to consider the following linear iterative method for (5.7): Given $\tilde u^{(0)}\in\mathbb R^{n_k}$,
$$\tilde u^{(n+1)} = \tilde u^{(n)} + \tilde R_k\big(\tilde{\tilde f}_k - \tilde A_k\tilde u^{(n)}\big), \quad n = 0,1,2,\dots. \quad (5.8)$$
$\tilde R_k$ is called the iterator of $\tilde A_k$. Note that (5.8) converges if the spectral radius $\rho(I - \tilde R_k\tilde A_k) < 1$. If we define a linear operator $R_k: V_k\to V_k$ as
$$R_kg = \sum_{i,j=1}^{n_k}(\tilde R_k)_{ij}(g,\phi_k^j)\phi_k^i, \quad (5.9)$$
then $\widetilde{R_kg} = \tilde R_k\tilde{\tilde g}$, so that the algorithm (5.8) for the matrix equation (5.7) is equivalent to the following linear iterative algorithm for the operator equation (5.5): Given $u^{(0)}\in V_k$,
$$u^{(n+1)} = u^{(n)} + R_k\big(f_k - A_ku^{(n)}\big), \quad n = 0,1,2,\dots.$$
Here we have used the fact that $\widetilde{A_ku^{(n)}} = \tilde A_k\tilde u^{(n)}$. It is clear that the error propagation operator is $I - R_kA_k$.
Noting that $\tilde A_k$ is symmetric and positive definite, we write $\tilde A_k = \tilde D - \tilde L - \tilde L^T$ with $\tilde D$ and $-\tilde L$ being the diagonal and the lower triangular part of $\tilde A_k$, respectively. We recall the following choices of $\tilde R_k$ that result in various different iterative methods:
$$\tilde R_k = \begin{cases}\dfrac{\omega}{\rho(\tilde A_k)}I & \text{Richardson};\\[1ex]\omega\tilde D^{-1} & \text{damped Jacobi};\\[1ex](\tilde D - \tilde L)^{-1} & \text{Gauss-Seidel};\\[1ex](\tilde D - \tilde L)^{-T}\tilde D(\tilde D - \tilde L)^{-1} & \text{symmetrized Gauss-Seidel}.\end{cases} \quad (5.10)$$
Lemma 5.2. The damped Jacobi iterative method for solving (5.7) is equivalent to the following iterative scheme in the space $V_k$:
$$u_k^{(n+1)} = u_k^{(n)} + R_k\big(f_k - A_ku_k^{(n)}\big), \qquad R_k = \omega\sum_{i=1}^{n_k}P_k^iA_k^{-1},$$
$$R_kg = \omega\sum_{i=1}^{n_k}\frac{(g,\phi_k^i)}{a(\phi_k^i,\phi_k^i)}\phi_k^i = \omega\sum_{i=1}^{n_k}\frac{a(A_k^{-1}g,\phi_k^i)}{a(\phi_k^i,\phi_k^i)}\phi_k^i = \omega\sum_{i=1}^{n_k}P_k^iA_k^{-1}g \quad \forall g\in V_k,$$
Lemma 5.3. The standard Gauss-Seidel iterative method for solving (5.7) is equivalent to the following iterative scheme in the space $V_k$:
$$u_k^{(n+1)} = u_k^{(n)} + R_k\big(f_k - A_ku_k^{(n)}\big), \qquad R_k = (I - E_k)A_k^{-1}.$$
Lemma 5.4. The symmetrized Gauss-Seidel iterative method for solving (5.7) is equivalent to the following iterative scheme in the space $V_k$:
$$u_k^{(n+1)} = u_k^{(n)} + R_k\big(f_k - A_ku_k^{(n)}\big), \qquad R_k = (I - E_k^*E_k)A_k^{-1}.$$
The proofs of Lemma 5.3 and Lemma 5.4 are left as Exercise 5.1.
It is well-known that the classical iterative methods listed in (5.10) are
inefficient for solving (5.7) when nk is large. But they have an important
“smoothing property” that we discuss now. For example, Richardson itera-
tion for (5.7) reads as
$$\tilde u^{(n+1)} = \tilde u^{(n)} + \frac{\omega}{\rho(\tilde A_k)}\big(\tilde{\tilde f}_k - \tilde A_k\tilde u^{(n)}\big), \quad n = 0,1,2,\dots.$$
Let $\tilde A_k\tilde\phi_i = \mu_i\tilde\phi_i$ with $\mu_1\le\mu_2\le\cdots\le\mu_{n_k}$, $(\tilde\phi_i,\tilde\phi_j) = \delta_{ij}$, and $\tilde u_k - \tilde u^{(0)} = \sum_{i=1}^{n_k}\alpha_i\tilde\phi_i$. Then
$$\tilde u_k - \tilde u^{(n)} = \sum_i\alpha_i\big(1 - \omega\mu_i/\mu_{n_k}\big)^n\tilde\phi_i.$$
For a fixed ω ∈ (0, 2), it is clear that (1 − ωµi /µnk )n converges to zero very
fast as n → ∞ if µi is close to µnk . This means that the high frequency
modes in the error get damped very quickly.
Let us illustrate the smoothing property of the Gauss-Seidel method by
a simple numerical example. Consider the Poisson equation −∆u = 1 with
homogeneous Dirichlet condition on the unit square which is discretized by
the uniform triangulation. Figure 1 shows that high frequency errors are well
annihilated by Gauss-Seidel iterations.
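The smoothing effect can be reproduced in a few lines. The following is a minimal sketch (a 1D Poisson model problem is used here instead of the 2D example in the text, purely to keep the code short; the initial guess mixes a low and a high frequency mode so that the error contains both):

import numpy as np

n = 63
h = 1.0 / (n + 1)
A = (np.diag(2*np.ones(n)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / h**2
x = np.linspace(h, 1 - h, n)
u = np.sin(2*np.pi*x) + np.sin(40*np.pi*x)   # initial guess; exact solution of A u = 0 is 0
D_L = np.tril(A)                             # D - L, the Gauss-Seidel iterator
for sweep in range(5):
    u = u + np.linalg.solve(D_L, -A @ u)     # one Gauss-Seidel sweep
    print(sweep + 1, np.max(np.abs(u)))      # the high frequency part of the error decays fast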
For the above model problem, Brandt applied the “local mode analysis”
to show that: the damped Jacobi method achieves its optimal smoothing
property when ω = 4/5; the Gauss-Seidel method is a better smoother than
the damped Jacobi method; the Gauss-Seidel method with red-black ordering
is a better smoother than the one with lexicographic ordering. We also
note that the red-black Gauss-Seidel and Jacobi method have better parallel
features.
yj = yj−1 + Rk (g − Ak yj−1 ).
Define Bk g = y2m+1 .
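In matrix terms the V-cycle can be sketched recursively as follows. This is a minimal illustration, not the operator-form algorithm above verbatim: the grid hierarchy is assumed to be given as matrices A[k] together with prolongation matrices P[k] from level k-1 to level k, the restriction is taken as the transpose of P[k], and damped Jacobi is used as the smoother.

import numpy as np

def v_cycle(k, A, P, b, x, m=2, omega=0.6):
    # one V-cycle for A[k] x = b with m pre- and post-smoothing Jacobi sweeps
    if k == 0:
        return np.linalg.solve(A[0], b)          # exact solve on the coarsest level
    D = np.diag(A[k])
    for _ in range(m):                           # pre-smoothing
        x = x + omega * (b - A[k] @ x) / D
    r = P[k].T @ (b - A[k] @ x)                  # restrict the residual
    e = v_cycle(k - 1, A, P, r, np.zeros_like(r), m, omega)
    x = x + P[k] @ e                             # coarse-grid correction
    for _ in range(m):                           # post-smoothing
        x = x + omega * (b - A[k] @ x) / D
    return x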
$$\begin{aligned}
a((I - B_kA_k)v, v) &= a(K_k^m(I - P_{k-1})K_k^mv, v) + a(K_k^m(I - B_{k-1}A_{k-1})P_{k-1}K_k^mv, v)\\
&= a((I - P_{k-1})K_k^mv, K_k^mv) + a((I - B_{k-1}A_{k-1})P_{k-1}K_k^mv, P_{k-1}K_k^mv)\\
&\ge a((I - P_{k-1})K_k^mv, K_k^mv) = a((I - P_{k-1})K_k^mv, (I - P_{k-1})K_k^mv) \ge 0.
\end{aligned}$$
$$a((I - B_kA_k)v, v) \le a((I - P_{k-1})K_k^mv, K_k^mv) + \delta a(P_{k-1}K_k^mv, P_{k-1}K_k^mv) = (1-\delta)a((I - P_{k-1})K_k^mv, K_k^mv) + \delta a(K_k^mv, K_k^mv).$$
Thus
$$a((I - P_{k-1})K_k^mv, K_k^mv) \le \alpha\,a((I - K_k)K_k^mv, K_k^mv).$$
Since $R_k: V_k\to V_k$ is symmetric and semi-definite, by Lemma 5.6 we know that $K_k$ is symmetric with respect to $a(\cdot,\cdot)$ and $0\le a(K_kv, v)\le a(v,v)$. Thus the eigenvalues of $K_k$ belong to $[0,1]$. Hence
$$a((I - K_k)K_k^{2m}v, v) \le \frac{1}{2m}\sum_{i=0}^{2m-1}a((I - K_k)K_k^iv, v) = \frac{1}{2m}a((I - K_k^{2m})v, v).$$
This yields
$$a((I - B_kA_k)v, v) \le (1-\delta)\frac{\alpha}{2m}a(v,v) + \Big(\delta - (1-\delta)\frac{\alpha}{2m}\Big)a(K_k^mv, K_k^mv) = \frac{\alpha}{\alpha + 2m}a(v,v).$$
This completes the proof of the theorem.
This completes the proof of the theorem.
$$R_k^a = \sum_{i=1}^KP_k^iA_k^{-1}.$$
Then
$$(\Theta^{-1}v, v) \le (v,\Theta^{-1}v)^{1/2}\Big(\sum_{i=1}^Ka(v_k^i,v_k^i)\Big)^{1/2},$$
and thus
$$(\Theta^{-1}v, v) \le \sum_{i=1}^Ka(v_k^i,v_k^i) \quad \forall v = \sum_{i=1}^Kv_k^i.$$
To show the equality in (5.17) we only need to take $v_k^i = P_k^iA_k^{-1}\Theta^{-1}v$. This proves the assertion for $R_k^a$.
2°) Since $R_k^m = (I - E_k^*E_k)A_k^{-1}$, we have
$$a\big((I - R_k^mA_k)v, v\big) = a(E_kv, E_kv) \ge 0.$$
Note that (5.18) holds for any invertible operator on $V_k$. By letting $\Theta = R_k^m$ in (5.18) we have
$$\big((R_k^m)^{-1}v, v\big) \le \big(R_k^a(R_k^m)^{-1}v, (R_k^m)^{-1}v\big)^{1/2}\Big(\sum_{i=1}^Ka(v_k^i,v_k^i)\Big)^{1/2}.$$
It follows from (ii) that
$$\big((R_k^m)^{-1}v, v\big) \le \gamma^{1/2}\big(R_k^a(R_k^m)^{-1}v, (R_k^m)^{-1}v\big)^{1/2}a(v,v)^{1/2}. \quad (5.19)$$
Now we show
$$\big(R_k^av, v\big) \le \beta^2\big(R_k^mv, v\big) \quad \forall v\in V_k. \quad (5.20)$$
Denote by $y = A_k^{-1}v$. Then
$$\big(R_k^mv, v\big) = \big((I - E_k^*E_k)A_k^{-1}v, v\big) = a\big((I - E_k^*E_k)y, y\big) = a(y,y) - a(E_ky, E_ky).$$
Now
$$\begin{aligned}
\big(R_k^av, v\big) &= \sum_{i=1}^K\sum_{j=1}^i a(P_k^iy, P_k^jE_k^{j-1}y)\\
&\le \beta\Big(\sum_{i=1}^Ka(P_k^iy, P_k^iy)\Big)^{1/2}\Big(\sum_{j=1}^Ka(P_k^jE_k^{j-1}y, P_k^jE_k^{j-1}y)\Big)^{1/2}\\
&= \beta\big(R_k^av, v\big)^{1/2}\big(R_k^mv, v\big)^{1/2}.
\end{aligned}$$
by the scaling argument. Thus by the inverse estimate and (5.4) we get
$$\sum_{i=1}^{n_k}a(v_k^i,v_k^i) \le Ch_k^{-2}\sum_{i=1}^{n_k}\|v_k^i\|^2_{L^2(\Omega)} \le Ch_k^{-2}\|v\|^2_{L^2(\Omega)} \le Ca(v,v),$$
where
$$\|I - B_kA_k\|_A = \sup_{0\neq v\in V_k}\frac{a((I - B_kA_k)v, v)}{\|v\|_A^2}.$$
Example 5.4. Consider the Poisson equation −∆u = 1 with homoge-
neous Dirichlet condition on unit square discretized with uniform triangula-
tions. We solve the problem by the V-cycle algorithm (5.12) with zero initial
value, Gauss-Seidel smoother (m = 2), and stopping rule
$$\big\|\tilde{\tilde f}_k - \tilde A_k\tilde u_k^{(n)}\big\|_\infty\,\big/\,\big\|\tilde{\tilde f}_k - \tilde A_k\tilde u_k^{(0)}\big\|_\infty < 10^{-6}.$$
The initial mesh consists of 4 triangles. Table 1 shows the number of multi-
grid iterations after 1–10 uniform refinements by the “newest vertex bisec-
tion” algorithm. The final mesh consists of 4194304 triangles and 2095105
interior nodes. For an implementation of the V-cycle algorithm we refer to
Section 10.4.
For k > 2, let ûk = ûk−1 , and iterate ûk ← ûk + Bk (fk − Ak ûk ) for l
times.
Theorem 5.5. Assume that Theorem 5.1 holds and that $\delta^l < 1/p$. Then
$$\|u_k - \hat u_k\|_A \le \frac{c_1c_3}{c_2}\frac{p\delta^l}{1 - p\delta^l}h_k, \quad k\ge 1.$$
Proof. By Theorem 5.1 we have
∥uk − ûk ∥A 6 δ l ∥uk − ûk−1 ∥A 6 δ l (∥uk − uk−1 ∥A + ∥uk−1 − ûk−1 ∥A ).
Noting that ∥u1 − û1 ∥A = 0, we conclude that
$$\begin{aligned}
\|u_k - \hat u_k\|_A &\le \sum_{n=1}^{k-1}(\delta^l)^n\|u_{k-n+1} - u_{k-n}\|_A \le \sum_{n=1}^{k-1}(\delta^l)^n\|u - u_{k-n}\|_A\\
&\le c_1\sum_{n=1}^{k-1}(\delta^l)^nh_{k-n} \le c_1c_3\sum_{n=1}^{k-1}(\delta^l)^n\tilde h_{k-n}\\
&\le c_1c_3\tilde h_k\sum_{n=1}^{k-1}(p\delta^l)^n \le \frac{c_1c_3}{c_2}\frac{p\delta^l}{1 - p\delta^l}h_k.
\end{aligned}$$
Proof. Let $W_k$ denote the work in the $k$-th level V-cycle iteration. Together, the smoothing and correction steps yield
Wk 6 Cmnk + Wk−1 .
Hence
Wk 6 Cm(n1 + n2 + · · · + nk ) 6 Cnk .
Let Ŵk denote the work involved in obtaining ûk in the FMG. Then
Ŵk 6 Ŵk−1 + lWk 6 Ŵk−1 + Cnk .
Thus we have
Ŵk 6 C(n1 + · · · + nk ) 6 Cnk .
This completes the proof.
This theorem shows that the FMG has an optimal computational com-
plexity O(nk ) to compute the solution within truncation error. In contrast,
the computational complexity of the k th level V-cycle iteration is not op-
timal, because its number of operations required to compute the solution
within truncation error is O(nk log h1k ) = O(nk log nk ).
where $\phi_k^z$ is the nodal basis function at the node $z$ in $V_k$. For convenience we denote $\tilde{\mathcal N}_k = \{x_k^j : j = 1,\dots,\tilde n_k\}$. The local Gauss-Seidel iterative operator is given by
$$R_k = \big(I - (I - P_k^{\tilde n_k})\cdots(I - P_k^1)\big)A_k^{-1}.$$
5.7. Exercises
Exercise 5.1. Prove Lemma 5.3 and Lemma 5.4.
Exercise 5.2. Prove Lemma 5.6.
Exercise 5.3. Let $R_k$ be symmetric with respect to $(\cdot,\cdot)$ and let $K_k = I - R_kA_k$. Show that $R_k$ is semi-definite and satisfies
$$a(K_kv, v) \ge 0 \quad \forall v\in V_k$$
if and only if
$$\|K_k\|_A \le 1 \quad\text{and}\quad \|I - K_k\|_A \le 1.$$
CHAPTER 6
B′ : M → X ′ : ⟨B ′ λ, v⟩ = b(v, λ) ∀v ∈ X.
Then (6.1) is equivalent to
Au + B ′ λ = f in X ′ ,
(6.2)
Bu = g in M ′ .
Define
V = ker(B) = {v ∈ X : b(v, µ) = 0 ∀µ ∈ M } . (6.3)
Lemma 6.1. The following assertions are equivalent:
(i) There exists a constant $\beta > 0$ such that
$$\inf_{\mu\in M}\sup_{v\in X}\frac{b(v,\mu)}{\|v\|_X\|\mu\|_M} \ge \beta; \quad (6.4)$$
(ii) The operator $B: V^\perp\to M'$ is an isomorphism, and
$$\|Bv\|_{M'} \ge \beta\|v\|_X \quad \forall v\in V^\perp; \quad (6.5)$$
(iii) The operator $B': M\to V^0\subset X'$ is an isomorphism, and
$$\|B'\mu\|_{X'} \ge \beta\|\mu\|_M \quad \forall\mu\in M. \quad (6.6)$$
Here $V^0$ is the polar set
$$V^0 = \{l\in X' : \langle l,v\rangle = 0\ \forall v\in V\}.$$
Proof. By Riesz Representation Theorem, there exist canonical isomet-
ric isomorphisms
πX : X ′ → X, πM : M ′ → M
such that
(πX l, v) = ⟨l, v⟩ ∀v ∈ X, ∀l ∈ X ′ ,
(πM g, µ) = ⟨g, µ⟩ ∀µ ∈ M, ∀ g ∈ M ′ .
It is easy to check that $V^0$ and $V^\perp$ are isomorphic under the mapping $\pi_X$. In fact, for any $l\in V^0$, $(\pi_Xl, v) = \langle l,v\rangle = 0$ for any $v\in V$. This implies $\pi_Xl\in V^\perp$. The converse also holds.
We prove now the equivalence of (i) and (iii). It is clear that (6.4) is
equivalent to (6.6). So we only need to show that B ′ : M → V 0 ⊂ X ′ is an
isomorphism. By (6.6) we know that B ′ : M → R(B ′ ) is an isomorphism.
We now show R(B ′ ) = V 0 . First we have R(B ′ ) is closed and R(B ′ ) ⊂ V 0 .
In fact, for any v ∈ V and µ ∈ M , we know ⟨B ′ µ, v⟩ = ⟨Bv, µ⟩ = 0. That is
R(B ′ ) ⊂ V 0 . By isometry πX we know that πX R(B ′ ) is a closed subspace of
πX V 0 = V ⊥ . If v ∈ πX R(B ′ )⊥ ,
(πX B ′ µ, v) = 0 ∀µ ∈ M ⇔ ⟨Bv, µ⟩ = 0 ∀µ ∈ M ⇔ v ∈ V.
Theorem 6.2. Assume that there exist positive constants $\alpha_h$ and $\beta_h$ such that
(i) The bilinear form $a$ is $V_h$-elliptic, i.e.,
$$a(v_h, v_h) \ge \alpha_h\|v_h\|^2_X \quad \forall v_h\in V_h; \quad (6.8)$$
(ii) The bilinear form $b$ satisfies the inf-sup condition:
$$\inf_{\mu_h\in M_h}\sup_{v_h\in X_h}\frac{b(v_h,\mu_h)}{\|v_h\|_X\|\mu_h\|_M} \ge \beta_h. \quad (6.9)$$
Then the discrete problem (6.7) has a unique solution $(u_h,\lambda_h)\in X_h\times M_h$ which satisfies
$$\|u - u_h\|_X + \|\lambda - \lambda_h\|_M \le C\Big(\inf_{v_h\in X_h}\|u - v_h\|_X + \inf_{\mu_h\in M_h}\|\lambda - \mu_h\|_M\Big).$$
Therefore
∥yh ∥X 6 C ∥λ − µh ∥M + C ∥u − wh ∥X .
It follows from the triangle inequality and (6.11) that
∥u − uh ∥X = ∥u − vh − rh − yh ∥X 6 C ∥λ − µh ∥M + C ∥u − vh ∥X .
We now estimate λ−λh . Since b(vh , λ−λh ) = a(uh −u, vh ) for all vh ∈ Xh ,
we have for any µh ∈ Mh ,
∥µh − λh ∥M 6 C ∥u − uh ∥X + C ∥λ − µh ∥M 6 C ∥u − vh ∥X + C ∥λ − µh ∥M ,
b(v − πh v, µh ) = 0 ∀µh ∈ Mh .
and
$$\|\sigma - \pi_K\sigma\|_{H(\mathrm{div},K)} \le C\frac{h_K^2}{\rho_K}\big(|\sigma|_{H^1(K)} + |\mathrm{div}\,\sigma|_{H^1(K)}\big). \quad (6.16)$$
Proof. We notice that (6.15) uniquely defines the interpolation operator $\pi_K$ and $N_i(\sigma) = N_i(\pi_K\sigma)$. By (6.14) we know that
$$\hat N_i(\widehat{\pi_K\sigma}) = |(B_K^{-1})^T\hat n_i|\,N_i(\pi_K\sigma) = |(B_K^{-1})^T\hat n_i|\,N_i(\sigma) = \hat N_i(\hat\sigma).$$
Thus we have
$$\hat\pi_{\hat K}\hat\sigma = \widehat{\pi_K\sigma}.$$
This implies
$$\|\sigma - \pi_K\sigma\|_{L^2(K)} = \frac{|K|^{1/2}}{|\hat K|^{1/2}}\|B_K(\hat\sigma - \hat\pi_{\hat K}\hat\sigma)\|_{L^2(\hat K)} \le \frac{|K|^{1/2}}{|\hat K|^{1/2}}\|B_K\|\,\|\hat\sigma - \hat\pi_{\hat K}\hat\sigma\|_{L^2(\hat K)}.$$
$$\le C|\hat\sigma|_{H^1(\hat K)}.$$
Thus, by Lemma 3.2 and Lemma 3.3,
$$\|\sigma - \pi_K\sigma\|_{L^2(K)} \le C\frac{|K|^{1/2}}{|\hat K|^{1/2}}\|B_K\|\,|\hat\sigma|_{H^1(\hat K)} \le C\|B_K\|^2\|B_K^{-1}\|\,|\sigma|_{H^1(K)} \le C\frac{h_K^2}{\rho_K}|\sigma|_{H^1(K)}.$$
On the other hand, from (6.15),
$$\int_K\mathrm{div}\,\sigma\,dx = \int_K\mathrm{div}\,\pi_K\sigma\,dx.$$
Thus
$$\|\mathrm{div}(\sigma - \pi_K\sigma)\|_{L^2(K)} \le \inf_{c\in P_0}\|\mathrm{div}\,\sigma - c\|_{L^2(K)} \le Ch_K|\mathrm{div}\,\sigma|_{H^1(K)}.$$
This completes the proof.
We define the finite element spaces
Xh : = {τ ∈ H(div; Ω) : τ |K ∈ P(K) ∀K ∈ Mh },
Mh : = {v ∈ L2 (Ω) : v|K ∈ P0 (K) ∀K ∈ Mh }.
By Lemma 6.3 and 6.4(iii) we know Xh is well-defined. The discrete problem
to approximate (6.12) is: Find (σ h , uh ) ∈ Xh × Mh such that
(σ h , τ h ) + (div τ h , uh ) = 0 ∀τ h ∈ Xh ,
(6.17)
(div σ h , vh ) = −(f, vh ) ∀vh ∈ Mh .
Lemma 6.6. For any p ∈ L2 (Ω), there exists a function τ ∈ H 1 (Ω)2 such
that div τ = p and ∥τ ∥H 1 (Ω) 6 C∥p∥L2 (Ω) .
Proof. We extend p to be zero outside the domain Ω and denote the
extension by p̃. Let BR be a circle of radius R that includes Ω̄. Let w be the
solution of the problem
−∆w = p̃ in BR , w = 0 on ∂BR .
By the regularity theorem for elliptic equations in Theorem 1.20 we know
that w ∈ H 2 (BR ) and ∥w∥H 2 (Ω) 6 ∥w∥H 2 (BR ) 6 C∥p̃∥L2 (BR ) = C∥p∥L2 (Ω) .
This shows the lemma by setting τ = −∇w.
$$V_h = \{\tau_h\in X_h : (\mathrm{div}\,\tau_h, v_h) = 0\ \forall v_h\in M_h\} = \{\tau_h\in X_h : \mathrm{div}\,\tau_h = 0 \text{ on each } K\in\mathcal M_h\}.$$
$$\inf_{v_h\in M_h}\sup_{\tau_h\in X_h}\frac{(\mathrm{div}\,\tau_h, v_h)}{\|\tau_h\|_{H(\mathrm{div};\Omega)}\|v_h\|_{L^2(\Omega)}} \ge \beta > 0. \quad (6.18)$$
Moreover, we have
This shows (6.18) and thus completes the proof by using the abstract result Theorem 6.2 and Lemma 6.5.
a shape regular mesh. We will approximate the velocity by the “mini” finite
element which we now introduce. On each element K we approximate the
velocity by a polynomial of the form
The degrees of freedom are the simplest ones, namely the values of the ve-
locity at the vertices and the center of K, the values of the pressure at the
vertices of K. The discrete problem is: Find a pair (uh , ph ) ∈ Xh × Mh such
that
Theorem 6.6. Let the solution $(u, p)$ of the Stokes problem satisfy $u\in\big(H^2(\Omega)\cap H_0^1(\Omega)\big)^2$, $p\in H^1(\Omega)\cap L_0^2(\Omega)$. Then
$$\|u - u_h\|_{H^1(\Omega)} + \|p - p_h\|_{L^2(\Omega)} \le Ch\big(|u|_{H^2(\Omega)} + |p|_{H^1(\Omega)}\big).$$
6.4. Exercises
Exercise 6.1. Formulate the mixed formulation of the Neumann problem
$$-\triangle u = f \text{ in } \Omega, \qquad \frac{\partial u}{\partial n} = g \text{ on } \partial\Omega,$$
and prove the unique existence of the solution to the corresponding saddle point problem.
Exercise 6.2. For the Stokes problem, let
Xh = {v ∈ C(Ω̄)2 : v|K ∈ P1 (K)2 ∀K ∈ Mh , v|∂Ω = 0},
Mh = {q ∈ L20 (Ω) : q|K ∈ P0 (K) ∀K ∈ Mh }.
Does the inf-sup condition
$$\inf_{q\in M_h}\sup_{v\in X_h}\frac{(q,\mathrm{div}\,v)}{\|q\|_{L^2(\Omega)}\|v\|_{H^1(\Omega)}} \ge \beta > 0$$
hold?
Exercise 6.3. Construct the local nodal basis functions for the lowest
order Raviart-Thomas finite element.
CHAPTER 7
In this chapter we consider finite element methods for solving the initial
boundary value problem of the following parabolic equation:
$$\begin{cases}\dfrac{\partial u}{\partial t} - \displaystyle\sum_{i,j=1}^d\frac{\partial}{\partial x_i}\Big(a_{ij}(x)\frac{\partial u}{\partial x_j}\Big) + c(x)u = f & \text{in } \Omega\times(0,T),\\ u = 0 & \text{on } \Gamma\times(0,T),\\ u(\cdot,0) = u_0(\cdot) & \text{in } \Omega,\end{cases} \quad (7.1)$$
where $\Omega$ is a bounded domain in $\mathbb R^d$ with boundary $\Gamma$, $T > 0$, $u = u(x,t)$, $a_{ij}, c$ are bounded functions on $\Omega$ with $a_{ij} = a_{ji}$, and there exists a constant $\alpha_0 > 0$ such that
$$\sum_{i,j=1}^d a_{ij}(x)\xi_i\xi_j \ge \alpha_0|\xi|^2, \quad c(x)\ge 0 \quad\text{for a.e. } x\in\Omega \text{ and all } \xi\in\mathbb R^d. \quad (7.2)$$
Fix any point $s\in(0,T)$ for which $u_\epsilon(s)\to u(s)$ in $L^2(\Omega)$. Hence
$$\limsup_{\epsilon,\delta\to 0}\ \sup_{0\le t\le T}\|u_\epsilon(t) - u_\delta(t)\|_{L^2(\Omega)} \le \lim_{\epsilon,\delta\to 0}\int_0^T\big(\|u_\epsilon'(\tau) - u_\delta'(\tau)\|^2_{H^{-1}(\Omega)} + \|u_\epsilon(\tau) - u_\delta(\tau)\|^2_{H^1(\Omega)}\big)\,d\tau = 0.$$
To define the weak solution of the problem (7.1) we introduce the bilinear form $a(u,v): H^1(\Omega)\times H^1(\Omega)\to\mathbb R$,
$$a(u,v) = \int_\Omega\Big(\sum_{i,j=1}^d a_{ij}(x)\frac{\partial u}{\partial x_j}\frac{\partial v}{\partial x_i} + c(x)uv\Big)\,dx.$$
It is clear by the assumption (7.2) that $a$ is bounded and $V$-elliptic, that is, there exist constants $\alpha, \beta > 0$ such that
|a(u, v)| 6 β ∥u∥H 1 (Ω) ∥v∥H 1 (Ω) ∀ u, v ∈ H01 (Ω), (7.7)
and
a(v, v) > α ∥v∥2H 1 (Ω) ∀ v ∈ H01 (Ω). (7.8)
We have the following definition of weak solutions for parabolic problems.
By Theorem 7.4 we see u ∈ C([0, T ]; L2 (Ω)), and thus the equality in (ii)
makes sense.
Theorem 7.6. There exists a unique weak solution to the problem (7.1).
Moreover, the following stability estimate holds
∥ ∂t u ∥L2 (0,T ;H −1 (Ω)) + ∥ u ∥L2 (0,T ;H 1 (Ω)) 6 C∥ u0 ∥L2 (Ω) + C∥ f ∥L2 (0,T ;H −1 (Ω)) .
But by (7.10)
$$\|U_\tau - \bar U_\tau\|^2_{L^2(0,T;L^2(\Omega))} \le \tau\sum_{n=1}^N\|U^n - U^{n-1}\|^2_{L^2(\Omega)} \le C\tau,$$
which by taking $\tau\to 0$ implies $u = \bar u$ a.e. in $\Omega\times(0,T)$.
Now by (7.9) we have
(∂t Uτ , v) + a(Ūτ , v) = ⟨f¯τ , v⟩ ∀v ∈ H01 (Ω) a.e. in (0, T ),
where f¯τ = f¯n for t ∈ (tn−1 , tn ). We then have, for any ϕ ∈ C0∞ (0, T ),
$$\int_0^T\big[(\partial_tU_\tau, v) + a(\bar U_\tau, v)\big]\phi\,dt = \int_0^T\langle\bar f_\tau, v\rangle\phi\,dt \quad \forall v\in H_0^1(\Omega).$$
Letting $\tau\to 0$ in the above equality, we obtain
$$\int_0^T\big[(\partial_tu, v) + a(u, v)\big]\phi\,dt = \int_0^T\langle f, v\rangle\phi\,dt \quad \forall v\in H_0^1(\Omega),\ \phi\in C_0^\infty(0,T),$$
which implies
⟨∂t u, v⟩ + a(u, v) = ⟨f, v⟩ ∀v ∈ H01 (Ω) a.e. in (0, T ).
This proves the existence of weak solution. The stability estimate of the
theorem follows from (7.11)-(7.12) by letting τ → 0. The uniqueness is a
direct consequence of the stability estimate.
In terms of the nodal basis $\{\phi_j\}_{j=1}^J$ for $V_h^0$, our semidiscrete problem may be stated as: Find the coefficients $z_j(t)$ in $u_h(x,t) = \sum_{j=1}^Jz_j(t)\phi_j(x)$ such that
$$\sum_{j=1}^Jz_j'(t)(\phi_j,\phi_i) + \sum_{j=1}^Jz_j(t)a(\phi_j,\phi_i) = (f,\phi_i), \quad i = 1,\dots,J,$$
and, with $z_j^0$ the components of $u_{h0}$, $z_j(0) = z_j^0$ for $j = 1,\dots,J$. In matrix notation this may be expressed as
$$Mz'(t) + Az(t) = b(t) \text{ for } t > 0, \qquad z(0) = z^0, \quad (7.14)$$
where M = (mij ) is the mass matrix with elements mij = (ϕj , ϕi ), A = (aij )
the stiffness matrix with aij = a(ϕj , ϕi ), b = (bi ) the vector with entries
bi = (f, ϕi ), z(t) the vector of unknowns zj (t), and z0 = (zj0 ). Since M is
positive definite and invertible, the system of ordinary differential equations
(7.14) has a unique solution for t > 0.
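Once M, A, and b(t) are assembled, the system (7.14) can be integrated by any stiff ODE solver; the simplest choice is the backward Euler scheme (M + tau*A) z^n = M z^{n-1} + tau*b(t_n). A minimal sketch follows (M, A, the callable b, z0, the final time T, and the step size tau are assumed to be supplied, and T is assumed to be a multiple of tau):

import numpy as np

def backward_euler(M, A, b, z0, T, tau):
    # integrate M z'(t) + A z(t) = b(t), z(0) = z0, by backward Euler
    z = z0.copy()
    B = M + tau * A                          # the left-hand matrix is fixed
    nsteps = int(round(T / tau))             # assumes T is a multiple of tau
    for n in range(1, nsteps + 1):
        z = np.linalg.solve(B, M @ z + tau * b(n * tau))
    return z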
Next we estimate the error between uh and u. To do so we first prove a
stability result for the semidiscrete problem. Throughout this chapter, C will
denote a positive generic constant independent of h, t, and can have different
values in different places.
Theorem 7.7. Let $r(t)\in L^2(\Omega)$, and let $\theta_h(t) = \theta_h(\cdot,t)\in V_h^0$ satisfy
$$(\theta_{h,t}, v_h) + a(\theta_h, v_h) = (r, v_h) \quad \forall v_h\in V_h^0. \quad (7.15)$$
Then
$$\|\theta_h(t)\|_{L^2(\Omega)} \le \|\theta_h(0)\|_{L^2(\Omega)} + \int_0^t\|r(s)\|_{L^2(\Omega)}\,ds, \quad (7.16)$$
$$\|\theta_h(t)\|_{H^1(\Omega)} \le C\|\theta_h(0)\|_{H^1(\Omega)} + C\Big(\int_0^t\|r(s)\|^2_{L^2(\Omega)}\,ds\Big)^{1/2}. \quad (7.17)$$
$$\frac{1}{2}\frac{d}{dt}\|\theta_h(t)\|^2_{L^2(\Omega)} + a(\theta_h(t),\theta_h(t)) = (r(t),\theta_h(t)).$$
From (7.8) and the Cauchy inequality,
$$\frac{1}{2}\frac{d}{dt}\|\theta_h(t)\|^2_{L^2(\Omega)} \le \|r(t)\|_{L^2(\Omega)}\|\theta_h(t)\|_{L^2(\Omega)}.$$
Since $\frac{d}{dt}\|\theta_h(t)\|_{L^2(\Omega)}$ might not be differentiable when $\theta_h = 0$, we add $\varepsilon^2$ to obtain
$$\frac{1}{2}\frac{d}{dt}\|\theta_h(t)\|^2_{L^2(\Omega)} = \frac{1}{2}\frac{d}{dt}\big(\|\theta_h(t)\|^2_{L^2(\Omega)} + \varepsilon^2\big) = \big(\|\theta_h(t)\|^2_{L^2(\Omega)} + \varepsilon^2\big)^{1/2}\frac{d}{dt}\big(\|\theta_h(t)\|^2_{L^2(\Omega)} + \varepsilon^2\big)^{1/2} \le \|r(t)\|_{L^2(\Omega)}\|\theta_h(t)\|_{L^2(\Omega)},$$
and hence
$$\frac{d}{dt}\big(\|\theta_h(t)\|^2_{L^2(\Omega)} + \varepsilon^2\big)^{1/2} \le \|r(t)\|_{L^2(\Omega)}.$$
After integration and letting $\varepsilon\to 0$ we conclude that (7.16) holds.
In order to prove (7.17), we use again (7.15), now with $v_h = \theta_{h,t}$, to obtain
$$\|\theta_{h,t}\|^2_{L^2(\Omega)} + \frac{1}{2}\frac{d}{dt}a(\theta_h,\theta_h) = (r(t),\theta_{h,t}) \le \frac{1}{4}\|r(t)\|^2_{L^2(\Omega)} + \|\theta_{h,t}\|^2_{L^2(\Omega)}.$$
Therefore
$$\frac{d}{dt}a(\theta_h(t),\theta_h(t)) \le \frac{1}{2}\|r(t)\|^2_{L^2(\Omega)},$$
and hence by integration
$$a(\theta_h(t),\theta_h(t)) \le a(\theta_h(0),\theta_h(0)) + \frac{1}{2}\int_0^t\|r(s)\|^2_{L^2(\Omega)}\,ds.$$
Now (7.17) follows from (7.7) and (7.8).
Theorem 7.8. Let $u$ and $u_h$ be the solutions of (7.1) and (7.13), respectively. Suppose that
$$\|u_{h0} - u_0\|_{L^2(\Omega)} + h\|u_{h0} - u_0\|_{H^1(\Omega)} \le Ch^2|u_0|_{H^2(\Omega)}. \quad (7.21)$$
Then for $t > 0$,
$$\|u_h(t) - u(t)\|_{L^2(\Omega)} \le Ch^2\Big(|u_0|_{H^2(\Omega)} + \int_0^t|u_t(s)|_{H^2(\Omega)}\,ds\Big), \quad (7.22)$$
$$\|u_h(t) - u(t)\|_{H^1(\Omega)} \le Ch\Big(|u_0|_{H^2(\Omega)} + |u(t)|_{H^2(\Omega)} + \Big(\int_0^t\|u_t(s)\|^2_{H^1(\Omega)}\,ds\Big)^{1/2}\Big). \quad (7.23)$$
and
θh (0) = uh0 − Rh u0 = uh0 − u0 + u0 − Rh u0 .
∥ρt (s)∥L2 (Ω) = ∥ut (s) − Rh ut (s)∥L2 (Ω) 6 Ch ∥ut (s)∥H 1 (Ω) . (7.30)
Then
$$\|\theta^n\|_{L^2(\Omega)} \le \|\theta^0\|_{L^2(\Omega)} + \tau\sum_{j=1}^n\|r^j\|_{L^2(\Omega)}, \quad (7.34)$$
$$\|\theta^n\|_{H^1(\Omega)} \le C\|\theta^0\|_{H^1(\Omega)} + C\Big(\tau\sum_{j=1}^n\|r^j\|^2_{L^2(\Omega)}\Big)^{1/2}. \quad (7.35)$$
Proof. Choosing $v_h = \theta^n$, we have $(\bar\partial\theta^n,\theta^n) \le (r^n,\theta^n)$, or
$$\|\theta^n\|^2_{L^2(\Omega)} \le \big(\|\theta^{n-1}\|_{L^2(\Omega)} + \tau\|r^n\|_{L^2(\Omega)}\big)\|\theta^n\|_{L^2(\Omega)},$$
so that
$$\|\theta^n\|_{L^2(\Omega)} \le \|\theta^{n-1}\|_{L^2(\Omega)} + \tau\|r^n\|_{L^2(\Omega)},$$
which implies (7.34). Next, choosing $v_h = \bar\partial\theta^n$, we have
$$(\bar\partial\theta^n,\bar\partial\theta^n) + a(\theta^n,\bar\partial\theta^n) = (r^n,\bar\partial\theta^n) \le \frac{1}{4}\|r^n\|^2_{L^2(\Omega)} + \|\bar\partial\theta^n\|^2_{L^2(\Omega)},$$
or
$$a(\theta^n,\theta^n) \le a(\theta^n,\theta^{n-1}) + \frac{\tau}{4}\|r^n\|^2_{L^2(\Omega)} \le \frac{1}{2}a(\theta^n,\theta^n) + \frac{1}{2}a(\theta^{n-1},\theta^{n-1}) + \frac{\tau}{4}\|r^n\|^2_{L^2(\Omega)},$$
so that
$$a(\theta^n,\theta^n) \le a(\theta^{n-1},\theta^{n-1}) + \frac{\tau}{2}\|r^n\|^2_{L^2(\Omega)},$$
which together with (7.7) and (7.8) implies that (7.35) holds.
where
$$\omega^n = R_h\bar\partial u(t_n) - u_t(t_n) = (R_h - I)\bar\partial u(t_n) + \big(\bar\partial u(t_n) - u_t(t_n)\big) = \omega_1^n + \omega_2^n. \quad (7.40)$$
We write
$$\tau\omega_1^j = (R_h - I)\int_{t_{j-1}}^{t_j}u_t\,ds = \int_{t_{j-1}}^{t_j}(R_h - I)u_t(s)\,ds,$$
$$\tau\omega_2^j = u(t_j) - u(t_{j-1}) - \tau u_t(t_j) = -\int_{t_{j-1}}^{t_j}(s - t_{j-1})u_{tt}(s)\,ds,$$
and obtain
$$\tau\|\omega_1^j\|_{L^2(\Omega)} \le Ch^2\int_{t_{j-1}}^{t_j}|u_t(s)|_{H^2(\Omega)}\,ds, \qquad \tau\|\omega_1^j\|_{L^2(\Omega)} \le Ch\int_{t_{j-1}}^{t_j}\|u_t(s)\|_{H^1(\Omega)}\,ds,$$
$$\tau\|\omega_2^j\|_{L^2(\Omega)} \le C\tau\int_{t_{j-1}}^{t_j}\|u_{tt}(s)\|_{L^2(\Omega)}\,ds.$$
Combining these estimates and applying Theorem 7.9 with $r^n = -\omega^n$ completes the proof of the theorem.
with $U^0 = u_{h0}$. Here the equation for $U^n$ may be written in the matrix form as
$$\Big(M + \frac{\tau}{2}A\Big)z^n = \Big(M - \frac{\tau}{2}A\Big)z^{n-1} + \tau b(t_{n-\frac12}),$$
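In code, each Crank-Nicolson step is one linear solve with the fixed matrix M + (tau/2)A, mirroring the backward Euler sketch given earlier. A minimal illustration of the time loop (M, A, the callable b, z0, tau, and the number of steps are assumed to be given; for time-independent data the left-hand matrix could be pre-factorized):

import numpy as np

def crank_nicolson(M, A, b, z0, tau, nsteps):
    # march (M + tau/2 A) z^n = (M - tau/2 A) z^{n-1} + tau * b(t_{n-1/2})
    z = z0.copy()
    L = M + 0.5 * tau * A
    R = M - 0.5 * tau * A
    for n in range(1, nsteps + 1):
        t_mid = (n - 0.5) * tau
        z = np.linalg.solve(L, R @ z + tau * b(t_mid))
    return z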
$$\|\theta^n\|_{L^2(\Omega)} \le \|\theta^0\|_{L^2(\Omega)} + \tau\sum_{j=1}^n\|r^j\|_{L^2(\Omega)}, \quad (7.43)$$
$$\|\theta^n\|_{H^1(\Omega)} \le C\|\theta^0\|_{H^1(\Omega)} + C\Big(\tau\sum_{j=1}^n\|r^j\|^2_{L^2(\Omega)}\Big)^{1/2}. \quad (7.44)$$
Proof. (7.43) and (7.44) can be proved by choosing $v_h = (\theta^n + \theta^{n-1})/2$ and $v_h = \bar\partial\theta^n$ in (7.42), respectively; we omit the details.
Let $U^n$ and $u$ be the solutions of (7.41) and (7.1), respectively. Then we have for $n\ge 0$,
$$\|U^n - u(t_n)\|_{L^2(\Omega)} \le Ch^2\Big(|u_0|_{H^2(\Omega)} + \int_0^{t_n}|u_t(s)|_{H^2(\Omega)}\,ds\Big) + C\tau^2\int_0^{t_n}\big(\|u_{ttt}(s)\|_{L^2(\Omega)} + \|u_{tt}(s)\|_{H^2(\Omega)}\big)\,ds, \quad (7.45)$$
$$\|U^n - u(t_n)\|_{H^1(\Omega)} \le Ch\Big(|u_0|_{H^2(\Omega)} + |u(t_n)|_{H^2(\Omega)} + \Big(\int_0^{t_n}\|u_t(s)\|^2_{H^1(\Omega)}\,ds\Big)^{1/2}\Big) + C\tau^2\Big(\int_0^{t_n}\big(\|u_{ttt}(s)\|^2_{L^2(\Omega)} + \|u_{tt}(s)\|^2_{H^2(\Omega)}\big)\,ds\Big)^{1/2}. \quad (7.46)$$
Since θ0 and ω1j are estimated as before, to apply Theorem 7.11, it remains
to bound the terms in ω2j and ω3j . This can be done by using Taylor formula
(7.36). We omit the details.
where f ∈ L2 (0, T ; L2 (Ω)) and u0 ∈ L2 (Ω), and the coefficient a(x) is as-
sumed to be piecewise constant and positive. The weak formulation of (7.47)
reads as follows: Find u ∈ L2 (0, T ; H01 (Ω)) ∩ H 1 (0, T ; H −1 (Ω)) such that
u(·, 0) = u0 (·), and for a.e. t ∈ (0, T ) the following relation holds
Here $\bar\partial_nU_h^n = (U_h^n - U_h^{n-1})/\tau_n$ is the backward difference quotient, and
$$\bar f^n = \frac{1}{\tau_n}\int_{t_{n-1}}^{t_n}f(x,t)\,dt.$$
using the convention that the unit normal vector $\nu_e$ to $e$ points from $K_2$ to $K_1$. We observe that integration by parts implies
$$(a\nabla U_h^n,\nabla\varphi) = -\sum_{e\in\mathcal B_n}\int_eJ_e^n\varphi\,ds \quad \forall\varphi\in H_0^1(\Omega). \quad (7.50)$$
Introduce the energy norm |||φ|||Ω = (a∇φ, ∇φ)1/2 . We have the following
upper bound estimate.
$$\le \|u_0 - U_h^0\|^2_{L^2(\Omega)} + \sum_{n=1}^m\tau_n\eta^n_{\mathrm{time}} + C\sum_{n=1}^m\tau_n\eta^n_{\mathrm{space}} + 2\Big(\sum_{n=1}^m\int_{t_{n-1}}^{t_n}\|f - \bar f^n\|_{L^2(\Omega)}\,dt\Big)^2, \quad (7.51)$$
where the time error indicator $\eta^n_{\mathrm{time}}$ and space error indicator $\eta^n_{\mathrm{space}}$ are given by
$$\eta^n_{\mathrm{time}} = \frac{1}{3}\||U_h^n - U_h^{n-1}|\|^2_\Omega, \qquad \eta^n_{\mathrm{space}} = \sum_{e\in\mathcal B^n}\eta_e^n.$$
Proof. From (7.49) we know that, for any $\varphi\in H_0^1(\Omega)$ and $v\in V_0^n$,
Then from (7.48) and (7.52) we have, for a.e. $t\in(t_{n-1},t_n]$ and for any $\varphi\in H_0^1(\Omega)$, $v\in V_0^n$,
$$\Big\langle\frac{\partial(u - U_h)}{\partial t},\varphi\Big\rangle + (a\nabla(u - U_h^n),\nabla\varphi) = \langle R^n,\varphi - v\rangle - (a\nabla U_h^n,\nabla(\varphi - v)) + \langle f - \bar f^n,\varphi\rangle.$$
Now we resort to the Clément interpolation operator $r_n: H_0^1(\Omega)\to V_0^n$ defined in Subsection 4.2.1, which satisfies the following local approximation properties by Theorem 4.2: for any $\varphi\in H_0^1(\Omega)$,
$$\|\varphi - r_n\varphi\|_{L^2(K)} + h_K\|\nabla(\varphi - r_n\varphi)\|_{L^2(K)} \le C^*h_K\|\nabla\varphi\|_{L^2(\tilde K)}, \quad (7.53)$$
$$\|\varphi - r_n\varphi\|_{L^2(e)} \le C^*h_e^{1/2}\|\nabla\varphi\|_{L^2(\tilde e)}, \quad (7.54)$$
where $\tilde A$ is the union of all elements in $\mathcal M^n$ surrounding the set $A = K\in\mathcal M^n$ or $A = e\in\mathcal B^n$. The constant $C^*$ depends only on the minimum angle of the mesh $\mathcal M^n$. Based on this interpolation operator, by taking $\varphi = u - U_h\in H_0^1(\Omega)$, $v = r_n(u - U_h)\in V_0^n$, and using (7.50) and the identity
$$(a\nabla(u - U_h^n),\nabla(u - U_h)) = \frac{1}{2}\||u - U_h^n|\|^2_\Omega + \frac{1}{2}\||u - U_h|\|^2_\Omega - \frac{1}{2}\||U_h - U_h^n|\|^2_\Omega,$$
we deduce that
$$\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|u - U_h\|^2_{L^2(\Omega)} &+ \frac{1}{2}\||u - U_h^n|\|^2_\Omega + \frac{1}{2}\||u - U_h|\|^2_\Omega\\
&= \frac{1}{2}\||U_h - U_h^n|\|^2_\Omega + \langle R^n,(u - U_h) - r_n(u - U_h)\rangle\\
&\quad + \sum_{e\in\mathcal B_n}\int_eJ_e^n\big[(u - U_h) - r_n(u - U_h)\big]\,ds + \langle f - \bar f^n, u - U_h\rangle. \quad (7.55)
\end{aligned}$$
In our a posteriori error estimate at the $n$-th time step, the time discretization error is controlled by $\||U_h^n - U_h^{n-1}|\|_\Omega$ and $\int_{t_{n-1}}^{t_n}\|f - \bar f^n\|_{L^2(\Omega)}\,dt$,
which can only be reduced by reducing the time-step sizes τn . On the other
hand, the time-step size τn essentially controls the semi-discretization error:
the error between the exact solution u and the solution U n of the following
problem
$$\Big\langle\frac{U^n - U^{n-1}}{\tau_n},\varphi\Big\rangle + (a\nabla U^n,\nabla\varphi) = \langle\bar f^n,\varphi\rangle \quad \forall\varphi\in H_0^1(\Omega). \quad (7.56)$$
Thus |||Uhn − Uhn−1 |||Ω is not a good error indicator for time discretization
unless the space discretization error is sufficiently resolved. In the adaptive
method for time-dependent problems, we must do space mesh and time-step
size adaptation simultaneously. Ignoring either one of them may not provide
correct error control of approximation to the problem.
Our objective next is to prove the following estimate for the local error
which ensures over-refinement will not occur for the refinement strategy based
on our space error indicator. First we note that for given Uhn−1 ∈ V0n−1 , let
$U_*^n\in H_0^1(\Omega)$ be the solution of the following continuous problem
$$\Big\langle\frac{U_*^n - U_h^{n-1}}{\tau_n},\varphi\Big\rangle + (a\nabla U_*^n,\nabla\varphi) = \langle\bar f^n,\varphi\rangle \quad \forall\varphi\in H_0^1(\Omega). \quad (7.57)$$
Then the space error indicator $\eta^n_{\mathrm{space}}$ controls only the error between $U_h^n$ and $U_*^n$, not that between $U_h^n$ and $U^n$ (or the exact solution $u$).
For any $K\in\mathcal M^n$ and $\varphi\in L^2(\Omega)$, we define $P_K\varphi = \frac{1}{|K|}\int_K\varphi\,dx$, the average of $\varphi$ over $K$. For any $n = 1,2,\dots$, we also need the notation
Theorem 7.14. Let U∗n ∈ H01 (Ω) be the solution of the auxiliary problem
(7.57). Then there exist constants C2 , C3 > 0 depending only on the minimum
angle of $\mathcal M^n$ and the coefficient $a(x)$ such that for any $e\in\mathcal B^n$ the following estimate holds:
$$\eta_e^n \le C_2\sum_{K\in\Omega_e}\Big(\frac{1}{\tau_n}\hat C_n\|U_*^n - U_h^n\|^2_{L^2(K)} + \||U_*^n - U_h^n|\|^2_K\Big) + C_3\sum_{K\in\Omega_e}h_K^2\|R^n - P_KR^n\|^2_{L^2(K)}. \quad (7.59)$$
Proof. The proof extends the idea used to prove the local lower bound for elliptic equations in Theorem 4.4. For any $K\in\mathcal M^n$, let $\psi_K = (d+1)^{d+1}\lambda_1\cdots\lambda_{d+1}$ be the bubble function, where $\lambda_1,\dots,\lambda_{d+1}$ are the barycentric coordinate functions. By the standard scaling argument, we have the following inf-sup relation, which holds with some constant $\beta$ depending only on the minimum angle of $K\in\mathcal M^n$:
$$\inf_{v_h\in P_1(K)}\sup_{\varphi_h\in P_1(K)}\frac{\int_Kv_h\varphi_h\psi_K\,dx}{\|\varphi_h\|_{L^2(K)}\|v_h\|_{L^2(K)}} \ge \beta > 0.$$
Thus there exists a function $\varphi^n\in P_1(K)$ with $\|\varphi^n\|_{L^2(K)} = 1$ such that
$$\begin{aligned}
\beta\|P_KR^n\|_{L^2(K)} &\le \int_K(P_KR^n)\psi_K\varphi^n\,dx\\
&= \int_K(P_KR^n - R^n)\psi_K\varphi^n\,dx + \int_K\Big(\bar f^n - \frac{U_h^n - U_h^{n-1}}{\tau_n}\Big)\psi_K\varphi^n\,dx\\
&= \int_K(P_KR^n - R^n)\psi_K\varphi^n\,dx + \int_K\frac{U_*^n - U_h^n}{\tau_n}\psi_K\varphi^n\,dx + (a\nabla U_*^n,\nabla(\psi_K\varphi^n))_K,
\end{aligned}$$
where we have used (7.57) in the last identity. Since $U_h^n\in P_1(K)$ and $\psi_K = 0$ on $\partial K$, a simple integration by parts implies that $(a\nabla U_h^n,\nabla(\psi_K\varphi^n))_K = 0$. Thus we have
$$\|P_KR^n\|_{L^2(K)} \le C\|R^n - P_KR^n\|_{L^2(K)} + C\tau_n^{-1}\|U_*^n - U_h^n\|_{L^2(K)} + C\||U_*^n - U_h^n|\|_K\,\||\psi_K\varphi^n|\|_K.$$
Therefore, we have
$$h_K^2\|R^n\|^2_{L^2(K)} \le Ch_K^2\|R^n - P_KR^n\|^2_{L^2(K)} + C\Big(\hat C_n\frac{1}{\tau_n}\|U_*^n - U_h^n\|^2_{L^2(K)} + \||U_*^n - U_h^n|\|^2_K\Big).$$
For any $e\in\mathcal B_n$, let $\psi_e = d^d\lambda_1\cdots\lambda_d$ be the bubble function, where $\lambda_1,\dots,\lambda_d$ are the barycentric coordinate functions associated with the nodes of $e$. Denote by $\psi^n = J_e^n\psi_e\in H_0^1(\Omega)$. Then, since $J_e^n$ is constant on $e\in\mathcal B^n$, we get, after integration by parts, that
$$\begin{aligned}
\|J_e^n\|^2_{L^2(e)} &\le C\int_eJ_e^n\psi^n\,dx = -C\sum_{K\in\Omega_e}\int_Ka(x)\nabla U_h^n\cdot\nabla\psi^n\,dx\\
&= C\sum_{K\in\Omega_e}\int_Ka(x)\nabla(U_*^n - U_h^n)\cdot\nabla\psi^n\,dx - C\sum_{K\in\Omega_e}\int_K\Big(R^n - \frac{U_*^n - U_h^n}{\tau_n}\Big)\psi^n\,dx,
\end{aligned}$$
where we have used the definition of $U_*^n$ in (7.57). Moreover, it is easy to see that
$$\|\nabla\psi^n\|_{L^2(K)} \le Ch_e^{-1/2}\|J_e^n\|_{L^2(e)}, \qquad \|\psi^n\|_{L^2(K)} \le Ch_e^{1/2}\|J_e^n\|_{L^2(e)} \quad \forall K\in\Omega_e.$$
Thus
$$h_e\|J_e^n\|^2_{L^2(e)} \le C\sum_{K\in\Omega_e}h_K^2\|R^n\|^2_{L^2(K)} + C\hat C_n\sum_{K\in\Omega_e}\Big(\||U_*^n - U_h^n|\|^2_K + \frac{1}{\tau_n}\|U_*^n - U_h^n\|^2_{L^2(K)}\Big).$$
A natural way to achieve (7.60) is to adjust the time-step size $\tau_n$ such that the following relations are satisfied:
$$\eta^n_{\mathrm{time}} \le \frac{\mathrm{TOL}_{\mathrm{time}}}{2T}, \qquad \frac{1}{\tau_n}\int_{t_{n-1}}^{t_n}\|f - \bar f^n\|_{L^2(\Omega)}\,dt \le \frac{1}{2T}\sqrt{\mathrm{TOL}_{\mathrm{time}}}. \quad (7.61)$$
Let TOLspace be the tolerance allowed for the part of a posteriori error
estimate in (7.51) related to the spatially semidiscrete approximation. Then
the usual stopping criterion for the mesh adaptation is to satisfy the following
relation at each time step $n$:
$$\eta^n_{\mathrm{space}} \le \frac{\mathrm{TOL}_{\mathrm{space}}}{T}. \quad (7.62)$$
This stopping rule is appropriate for mesh refinements but not for mesh
coarsening. We will use the coarsening error indicator based on the following
theorem.
Define the weighted norm on $H^1(\Omega)$ with parameter $\tau_n > 0$:
$$\|\varphi\|_{\tau_n,\Omega} = \Big(\frac{1}{\tau_n}\|\varphi\|^2_{L^2(\Omega)} + \||\varphi|\|^2_\Omega\Big)^{1/2} \quad \forall\varphi\in H^1(\Omega). \quad (7.63)$$
Theorem 7.15. Given $U_h^{n-1}\in V^{n-1}$ and $\tau_n > 0$, let $\mathcal M_H^n$ be a coarsening of the mesh $\mathcal M^n$, and let $U_H^n\in V_0^{n,H}$, $U_h^n\in V_0^n$ be the solutions of the discrete problem (7.49) over the meshes $\mathcal M_H^n$ and $\mathcal M^n$, respectively. Then the following error estimate is valid:
$$\|U_*^n - U_H^n\|^2_{\tau_n,\Omega} \le \|U_*^n - U_h^n\|^2_{\tau_n,\Omega} + \|U_h^n - I_H^nU_h^n\|^2_{\tau_n,\Omega},$$
where $I_H^n: C(\bar\Omega)\to V_0^{n,H}$ is the standard linear finite element interpolation operator.
Hence
$$\|U_*^n - U_H^n\|^2_{\tau_n,\Omega} = \|U_*^n - U_h^n\|^2_{\tau_n,\Omega} + \|U_H^n - U_h^n\|^2_{\tau_n,\Omega}. \quad (7.66)$$
Next, by subtracting (7.64) from (7.65) and taking $v = U_H^n - I_H^nU_h^n\in V_0^{n,H}$, we obtain the following Galerkin orthogonality relation:
$$\Big\langle\frac{U_H^n - U_h^n}{\tau_n}, U_H^n - I_H^nU_h^n\Big\rangle + (a\nabla(U_H^n - U_h^n),\nabla(U_H^n - I_H^nU_h^n)) = 0,$$
which implies
$$\|U_H^n - U_h^n\|^2_{\tau_n,\Omega} = \|U_h^n - I_H^nU_h^n\|^2_{\tau_n,\Omega} - \|U_H^n - I_H^nU_h^n\|^2_{\tau_n,\Omega} \le \|U_h^n - I_H^nU_h^n\|^2_{\tau_n,\Omega}.$$
then
τn := δ2 τn
end if
Theorem 7.16. For n ≥ 1, assume that Algorithm 7.1 terminates and generates the final mesh M_H^n, the time-step size τ_n, and the corresponding discrete solution U_H^n. Here the mesh M_H^n is coarsened from the mesh M^n produced by the first three steps. Then for any integer 1 ≤ m ≤ N, there exists a constant C, depending only on the minimum angles of M^n, n = 1, 2, · · · , m, and the coefficient a(x), such that the following estimate holds:
\[
\frac12\|u(t_m)-U_H^m\|_{L^2(\Omega)}^2
 + \sum_{n=1}^m\int_{t_{n-1}}^{t_n}|||u-U_H^n|||_\Omega^2\,dt
\le \|u_0-U_h^0\|_{L^2(\Omega)}^2 + \frac{t_m}{T}\,TOL_{time}
 + C\,\frac{t_m}{T}\,TOL_{space} + C\,\hat C_H^m\,\frac{t_m}{T}\,TOL_{coarse}, \tag{7.68}
\]
where \(\hat C_H^m = \max\{h_K^2/\tau_n : K\in M_H^n,\ n=1,2,\cdots,m\}\).
Proof. Let Uhn be the solution of the discrete problem (7.49) over the
mesh Mn and with the time-step size τn . Then upon the termination of
Algorithm 7.1 we have that
\[
\eta_{time}^n \le \frac{TOL_{time}}{2T},\qquad
\frac{1}{\tau_n}\int_{t_{n-1}}^{t_n}\|f-\bar f^n\|_{L^2(\Omega)}\,dt \le \frac{\sqrt{TOL_{time}}}{2T},\qquad
\eta_{space}^n \le \frac{TOL_{space}}{T}.
\]
From (7.49) we know that, for any φ ∈ H01 (Ω),
\[
\begin{aligned}
\Big\langle\frac{U_H^n-U_h^{n-1}}{\tau_n},\,\varphi\Big\rangle + (a\nabla U_H^n,\nabla\varphi)
&= \Big\langle\frac{U_H^n-U_h^n}{\tau_n},\,\varphi\Big\rangle + \big(a\nabla(U_H^n-U_h^n),\nabla\varphi\big)\\
&\quad - \langle R^n,\varphi\rangle + (a\nabla U_h^n,\nabla\varphi) + \langle\bar f^n,\varphi\rangle. \tag{7.69}
\end{aligned}
\]
Since MnH is a coarsening of Mn , by the Galerkin orthogonal relation as in
Theorem 3.1, we have
\[
\Big\langle\frac{U_H^n-U_h^n}{\tau_n},\,v_H\Big\rangle + \big(a\nabla(U_H^n-U_h^n),\nabla v_H\big) = 0\qquad \forall v_H\in V_0^{n,H}.
\]
On the other hand, since Uhn is the discrete solution over mesh Mn , we have
−⟨Rn , v⟩ + (a∇Uhn , ∇v) = 0 ∀v ∈ V0n .
Thus from (7.48) and (7.69) we deduce that, for a.e. t ∈ (tn−1 , tn ] and for
any φ ∈ H01 (Ω), vH ∈ V0n,H , v ∈ V0n ,
\[
\begin{aligned}
\Big\langle\frac{\partial(u-U_H)}{\partial t},\,\varphi\Big\rangle + \big(a\nabla(u-U_H^n),\nabla\varphi\big)
&= \langle R^n,\varphi-v\rangle - \big(a\nabla U_h^n,\nabla(\varphi-v)\big) + \langle f-\bar f^n,\varphi\rangle\\
&\quad - \Big\langle\frac{U_H^n-U_h^n}{\tau_n},\,\varphi-v_H\Big\rangle - \big(a\nabla(U_H^n-U_h^n),\nabla(\varphi-v_H)\big),
\end{aligned}
\]
where for any t ∈ (t_{n-1}, t_n], U_H(t) = l(t)U_H^n + (1 − l(t))U_h^{n-1} with l(t) = (t − t_{n-1})/τ_n. Taking v_H = Π_H^n φ ∈ V_0^{n,H}, the Clément interpolant of φ ∈ H_0^1(Ω) in V_0^{n,H}, and using the estimate (7.53) for the Clément interpolation operator, we get
\[
\begin{aligned}
\Big\langle\frac{U_H^n-U_h^n}{\tau_n},\,\varphi-\Pi_H^n\varphi\Big\rangle
 + \big(a\nabla(U_H^n-U_h^n),\nabla(\varphi-\Pi_H^n\varphi)\big)
&\le C\Big(\sum_{K\in M_H^n}h_K^2\tau_n^{-2}\|U_H^n-U_h^n\|_{L^2(K)}^2
 + |||U_H^n-U_h^n|||_\Omega^2\Big)^{1/2}|||\varphi|||_\Omega\\
&\le C\,(\hat C_H^m)^{1/2}\,\|U_H^n-U_h^n\|_{\tau_n,\Omega}\,|||\varphi|||_\Omega.
\end{aligned}
\]
Again, since M_H^n is a coarsening of M^n, from the proof of Theorem 3.1 and Step 4 in Algorithm 7.1 we know that
\[
\|U_H^n-U_h^n\|_{\tau_n,\Omega} \le \|I_H^nU_h^n-U_h^n\|_{\tau_n,\Omega}
 \le (\eta_{coarse}^n)^{1/2} \le \sqrt{\frac{TOL_{coarse}}{T}},
\]
which yields
\[
\Big\langle\frac{U_H^n-U_h^n}{\tau_n},\,\varphi-\Pi_H^n\varphi\Big\rangle
 + \big(a\nabla(U_H^n-U_h^n),\nabla(\varphi-\Pi_H^n\varphi)\big)
 \le C\,\sqrt{\frac{\hat C_H^m\,TOL_{coarse}}{T}}\;|||\varphi|||_\Omega.
\]
The rest of the proof is similar to that of Theorem 2.1 and we omit the details.
in Schmidt and Siebert [48]. Section 7.5 is taken from [20], to which we refer for the discussion on the termination of the adaptive Algorithm 7.1 in a finite number of steps.
7.6. Exercises
Exercise 7.1. Prove Theorem 7.3.
Exercise 7.2. Under the assumptions of Theorem 7.7, prove that, for
t > 0,
\[
\|\theta_h(t)\|_{L^2(\Omega)} \le e^{-\alpha t}\|\theta_h(0)\|_{L^2(\Omega)}
 + \int_0^t e^{-\alpha(t-s)}\|r(s)\|_{L^2(\Omega)}\,ds,
\]
where α is the constant in (7.8).
Exercise 7.3. Let u and uh be the solutions of (7.1) and (7.13), respec-
tively. Suppose that ∥uh0 − u0 ∥L2 (Ω) 6 Ch2 |u0 |H 2 (Ω) . Then for t > 0,
\[
\int_0^t\|u_h(s)-R_hu(s)\|_{H^1(\Omega)}^2\,ds
 \le Ch^4\Big(|u_0|_{H^2(\Omega)}^2 + \int_0^t|u_t(s)|_{H^2(\Omega)}^2\,ds\Big).
\]
Exercise 7.4. Complete the proofs of Theorem 7.11 and Theorem 7.12.
CHAPTER 8

Finite Element Methods for Maxwell Equations
\[
\nabla\times H = J + \partial_t D,\qquad \operatorname{div} D = \rho,\qquad
\nabla\times E = -\partial_t B,\qquad \operatorname{div} B = 0,
\]
\[
D = \varepsilon E,\qquad B = \mu H.
\]
In this chapter we consider adaptive edge element methods for solving the
time-harmonic Maxwell equations. We will first introduce the function space
H(curl; Ω) and its conforming finite element discretization, the lowest or-
der Nédélec edge element method. Then we will derive the a priori and a
posteriori error estimate for the edge element method.
For any ϵ > 0, let ρϵ (x) = ϵ−d ρ(x/ϵ) ∈ C0∞ (R3 ) be the mollifier function
where ρ(x) is defined in (1.4). Recall that ρϵ = 0 for |x| > ϵ. Hence, for ϵ > 0
sufficiently small, ρϵ ∗ vθ is in C0∞ (Ω)3 and from Lemma 1.1,
\[
\lim_{\epsilon\to 0}\lim_{\theta\to 1}(\rho_\epsilon * v_\theta) = v \quad\text{in } H(\mathrm{curl};\Omega).
\]
In the general case, Ω can be covered by a finite family of open sets
\[
\Omega \subset \bigcup_{1\le i\le q} O_i
\]
such that each Ω_i = Ω ∩ O_i is Lipschitz, bounded and strictly star-shaped. Let {χ_i}_{1≤i≤q} be a partition of unity subordinate to the family {O_i}_{1≤i≤q}, that is,
\[
\chi_i\in C_0^\infty(O_i),\quad 0\le\chi_i\le 1,\quad\text{and}\quad \sum_{i=1}^q\chi_i = 1\ \text{in }\Omega.
\]
Then v = \(\sum_{i=1}^q \chi_i v\) in Ω.
Clearly χi v ∈ H(curl; Ω) with support in Ωi .
Therefore, we can finish the proof by using the result for the strictly star-
shaped domain in the first part of the proof.
Theorem 8.1. Let D(Ω̄) be the set of all functions ϕ|Ω with ϕ ∈ C0∞ (R3 ).
Then D(Ω̄)3 is dense in H(curl; Ω).
Proof. Let l belong to H(curl; Ω)′, the dual space of H(curl; Ω). As H(curl; Ω) is a Hilbert space, by the Riesz representation theorem we can associate with l a function u in H(curl; Ω) such that
⟨l, v⟩ = (u, v) + (w, ∇ × v) ∀v ∈ H(curl; Ω),
where
w = ∇ × u.
Now assume that l vanishes on D(Ω̄)3 and let ũ, w̃ be respectively the exten-
sion of u, w by zero outside Ω. Then we have
\[
\int_{\mathbb R^3}\{\tilde u\cdot v + \tilde w\cdot\nabla\times v\}\,dx = 0\qquad \forall v\in C_0^\infty(\mathbb R^3)^3.
\]
This implies that
ũ = −∇ × w̃.
Therefore w̃ ∈ H(curl; R3 ), since ũ ∈ L2 (R3 )3 . Now by Lemma 8.1 we have
w ∈ H0 (curl; Ω). As C0∞ (Ω)3 is dense in H0 (curl; Ω), let wϵ be a sequence of
functions in C_0^∞(Ω)^3 that tend to w in H(curl; Ω) as ϵ → 0; then
\[
\langle l,v\rangle = \lim_{\epsilon\to 0}\{(-\nabla\times w_\epsilon,v) + (w_\epsilon,\nabla\times v)\} = 0\qquad \forall v\in H(\mathrm{curl};\Omega).
\]
Our next goal is to show that a vector field whose divergence vanishes must be a curl field. We assume ∂Ω has p + 1 connected components Γ_i, 0 ≤ i ≤ p, with Γ_0 the exterior boundary. We denote by Ω_i the domain enclosed by Γ_i, 1 ≤ i ≤ p (see Figure 1).

[Figure 1: the domain Ω with exterior boundary Γ_0 and interior boundaries Γ_1, · · · , Γ_p enclosing the domains Ω_1, · · · , Ω_p.]
v = ∇ × w. (8.2)
Moreover, w may be chosen such that div w = 0 and the following estimate
holds
Define
\[
\tilde v = \begin{cases} v & \text{in }\Omega,\\ \nabla\theta_i & \text{in }\Omega_i,\ 1\le i\le p,\\ 0 & \text{in }\mathbb R^3\setminus\bar O.\end{cases}
\]
Then ṽ ∈ L²(R³)³ and div ṽ = 0. Let v̂ = (v̂_1, v̂_2, v̂_3)^T be the Fourier transform of ṽ:
\[
\hat v(\xi) = \int_{\mathbb R^3} e^{-2\pi i(x,\xi)}\tilde v(x)\,dx,\qquad (x,\xi) = \sum_{i=1}^3 x_i\xi_i.
\]
If div w = 0 we need
From (8.5) we know that v̂_j(0) = 0. Since v̂_j(ξ) is holomorphic, we know that, in the neighborhood of the origin,
\[
\hat v_j(\xi) = \sum_{k=1}^3\frac{\partial\hat v_j}{\partial\xi_k}(0)\,\xi_k + O(|\xi|^2).
\]
Thus ŵ is bounded in the neighborhood of the origin. Now ωŵ has compact support, so its inverse Fourier transform is holomorphic and its restriction to Ω belongs to L²(Ω)³. On the other hand, (1 − ω)ŵ is zero in the neighborhood of the origin. Hence (1 − ω)ŵ ∈ L²(R³)³ and its inverse Fourier transform is in L²(R³)³. This proves that the inverse Fourier transform of ŵ is in L²(Ω)³.
Clearly w can be chosen up to an arbitrary constant. Thus (8.3) follows from (8.7), the Parseval identity, and the Poincaré inequality.
The following Helmholtz decomposition theorem is now a direct conse-
quence of Theorems 8.3 and 8.4.
Theorem 8.5. Any vector field v ∈ L2 (Ω)3 has the following orthogonal
decomposition
v = ∇q + ∇ × w,
where q ∈ H 1 (Ω)/R is the unique solution of the following problem
(∇q, ∇φ) = (v, ∇φ) ∀φ ∈ H 1 (Ω),
and w ∈ H 1 (Ω)3 satisfies div w = 0 in Ω, ∇ × w · n = 0 on Γ.
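The two components are indeed orthogonal in L²(Ω)³; this can be checked directly (a short computation added here for clarity, using only div(∇ × w) = 0 and the boundary condition ∇ × w · n = 0 on Γ stated above):
\[
(\nabla q,\,\nabla\times w)
 = \int_\Gamma q\,(\nabla\times w)\cdot n\,ds
 - \int_\Omega q\,\operatorname{div}(\nabla\times w)\,dx = 0.
\]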
We conclude this section by proving the embedding theorem for function
spaces XN (Ω) and XT (Ω) which will be used in our subsequent analysis
XN (Ω) = {v ∈ L2 (Ω)3 : ∇ × v ∈ L2 (Ω)3 , divv ∈ L2 (Ω), v × n = 0 on Γ},
XT (Ω) = {v ∈ L2 (Ω)3 : ∇ × v ∈ L2 (Ω)3 , divv ∈ L2 (Ω), v · n = 0 on Γ}.
Theorem 8.6. If Ω is a C 1 or convex domain, XN (Ω), XT (Ω) are con-
tinuously embedded into H 1 (Ω)3 .
Proof. Without loss of generality, we may assume that Ω is also simply connected and has a connected boundary. Otherwise, Ω is the union of a finite number of domains Ω_k having the above properties; we can introduce a partition of unity χ_k subordinate to the Ω_k and apply the result to each χ_k v.
1◦) Let v ∈ X_T(Ω). By Theorem 8.4, there exists a vector potential w ∈ H¹(Ω)³ for ∇ × v such that
\[
\nabla\times w = \nabla\times v,\qquad \operatorname{div} w = 0\quad\text{in }\Omega.
\]
Thus
\[
|v|_{H^1(\Omega)} \le C\big(\|\operatorname{div} v\|_{L^2(\Omega)} + \|\nabla\times v\|_{L^2(\Omega)}\big).
\]
∇ × w = ∇ × ṽ, div w = 0 in O.
Definition 8.7. The lowest order Nédélec finite element is a triple (K, P,
N ) with the following properties
(i) K ⊂ R3 is a tetrahedron;
(ii) P = {u = aK + bK × x ∀ aK , bK ∈ R3 };
(iii) N = {M_e : M_e(u) = \(\int_e\) (u · t) dl for every edge e of K, u ∈ P}. M_e(u) is called the moment of u on the edge e.
Note that if u ∈ P_1(K)³ then ∇ × u is a constant vector, say ∇ × u = 2b_K, which implies ∇ × (u − b_K × x) = 0 in K. Hence u = ∇φ + b_K × x for some φ ∈ P_2(K). When b_K = 0, u = ∇φ should still be able to approximate functions in L²(K)³; the minimal requirement is φ ∈ P_1(K), that is, ∇φ = a_K for some constant vector a_K ∈ R³. This motivates the shape functions in P.
Lemma 8.3. The nodal basis of the lowest order Nédélec element is {λi ∇λj
−λj ∇λi , 1 6 i < j 6 4}. Here λj , j = 1, 2, 3, 4, are barycentric coordinate
functions of the element K.
Therefore
\[
\int_{e_{14}} u_{14}\cdot t_{14}\,dl = \int_{e_{14}}\nabla\lambda_4\cdot t_{14}\,dl
 = \int_{e_{14}}\frac{\partial\lambda_4}{\partial t_{14}}\,dl = 1.
\]
This completes the proof.
Let K be a tetrahedron with vertices Ai , 1 6 i 6 4, and let FK : K̂ → K
be the affine transform from the reference element K̂ to K
x = FK (x̂) = BK x̂ + bK , x̂ ∈ K̂, BK is invertible,
so that FK (Âi ) = Ai , 1 6 i 6 4. Notice that the normal and tangential
vectors n, n̂ and t, t̂ to the faces satisfy
\[
n\circ F_K = \frac{B_K^{-T}\hat n}{|B_K^{-T}\hat n|},\qquad
t\circ F_K = \frac{B_K\hat t}{|B_K\hat t|}.
\]
For any scalar function φ defined on K, we associate
\[
\hat\varphi = \varphi\circ F_K,\quad\text{that is,}\quad \hat\varphi(\hat x) = \varphi(B_K\hat x + b_K).
\]
For any vector valued function u defined on K, we associate
\[
\hat u = B_K^T\,u\circ F_K,\quad\text{that is,}\quad \hat u(\hat x) = B_K^T u(B_K\hat x + b_K). \tag{8.10}
\]
Denote u = (u_1, u_2, u_3) and û = (û_1, û_2, û_3). We introduce
\[
C = \Big(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\Big)_{i,j=1}^3
\quad\text{and}\quad
\hat C = \Big(\frac{\partial\hat u_i}{\partial\hat x_j} - \frac{\partial\hat u_j}{\partial\hat x_i}\Big)_{i,j=1}^3.
\]
Then we have
\[
C\circ F_K = (B_K^{-1})^T\,\hat C\,B_K^{-1}. \tag{8.11}
\]
In fact, writing B_K = (b_{ki}),
\[
\frac{\partial\hat u_i}{\partial\hat x_j}
 = \frac{\partial}{\partial\hat x_j}\Big(\sum_k b_{ki}(u_k\circ F_K)\Big)
 = \sum_{k,l} b_{ki}\,b_{lj}\,\frac{\partial u_k}{\partial x_l}
\]
and
\[
\frac{\partial\hat u_i}{\partial\hat x_j} - \frac{\partial\hat u_j}{\partial\hat x_i}
 = \sum_{k,l} b_{ki}b_{lj}\frac{\partial u_k}{\partial x_l} - \sum_{k,l} b_{kj}b_{li}\frac{\partial u_k}{\partial x_l}
 = \sum_{k,l} b_{ki}\Big(\frac{\partial u_k}{\partial x_l} - \frac{\partial u_l}{\partial x_k}\Big)b_{lj}.
\]
This yields
\[
\hat C_{ij} = \sum_{k,l} b_{ki}\,C_{kl}\,b_{lj}\quad\text{and hence}\quad \hat C = B_K^T(C\circ F_K)B_K,
\]
F ⊂ {x = (x1 , x2 , x3 ) ∈ R3 : x3 = 0}.
u = aK + bK × x = (b2 x3 − b3 x2 , b3 x1 − b1 x3 , b1 x2 − b2 x1 ) + aK .
\[
\begin{aligned}
&= \int_F \nabla\times(\bar\varphi v)\cdot n\,ds
 = \int_F \bar\varphi\,\nabla\times v\cdot n\,ds + \int_F \nabla\bar\varphi\times v\cdot n\,ds\\
&= \int_K \nabla\times v\cdot\nabla\bar{\bar\varphi}\,dx + \int_F \nabla\bar\varphi\times v\cdot n\,ds.
\end{aligned}
\]
This implies
\[
|M_e(v)| \le C\big(\|\nabla\times v\|_{L^p(K)} + \|v\times n\|_{L^p(F)}\big)\,\|\varphi\|_{W^{1-1/p',p'}(e)}.
\]
∇ × (α(x)∇ × E) − k 2 β(x)E = f in Ω,
E×n=0 on ∂Ω.
The problem (8.14) is not necessarily coercive and thus the existence and uniqueness of its solution are not guaranteed. Here we will not elaborate on this issue and
simply assume (8.14) has a unique solution E ∈ H0 (curl; Ω) for any given
f ∈ L2 (Ω)3 .
Let X0h = Xh ∩ H0 (curl; Ω). Then the finite element approximation to
(8.14) is to find Eh ∈ X0h such that
The discrete problem (8.15) may not have a unique solution. It can be
proved that for sufficiently small mesh size h, the problem (8.15) indeed
has a unique solution under fairly general conditions on the domain and the
coefficients α, β. Here we only consider a special case when the domain is a
convex polyhedron and α, β are constants.
∇ × γh v = τh ∇ × v = τh ∇ × wh = ∇ × wh .
which implies
We claim that
\[
\|q_i\|_{L^\infty(F)} \le C h_F^{-1}, \tag{8.24}
\]
which implies that ‖q_i‖_{L²(F)} ≤ C. Without loss of generality, we will prove that (8.24) holds for i = 1. We first find α = (α_1, α_2, α_3)^T such that
\[
q_1 = \alpha_1 w_1\times n + \alpha_2 w_2\times n + \alpha_3 w_3\times n,\qquad
\int_F (w_i\times n)\cdot q_1\,ds = \delta_{i1},\quad i=1,2,3.
\]
Therefore,
\[
\begin{aligned}
\|\Pi_h v\|_{L^2(K)}^2
&\le C h_K\sum_{\substack{e\in E_h\\ e\subset\partial K}}\Big|\int_{F_e}(v\times n)\,q_e^{F_e}\,d\sigma\Big|^2\\
&\le C h_K\sum_{\substack{e\in E_h\\ e\subset\partial K}}\big(h_e^{-1}\|v\|_{L^2(K_e)}^2 + h_e|v|_{H^1(K_e)}^2\big)\\
&\le C\big(\|v\|_{L^2(\tilde K)}^2 + h_K^2|v|_{H^1(\tilde K)}^2\big).
\end{aligned}
\]
This proves the first estimate in the theorem.
Since Πh is a projection, we know that Πh cK = cK for any constant cK .
Thus
∥v − Πh v∥L2 (K) = inf ∥(v + cK ) − Πh (v + cK )∥L2 (K)
cK
( )
6 C inf ∥v + cK ∥L2 (K̃) + hK |v + cK |H 1 (K̃)
cK
6 ChK |v|H 1 (K̃) ,
where we have used the scaling argument and Theorem 3.1 in the last in-
equality. This proves the third inequality. The last inequality can be proved
similarly. The proof of the second inequality is left as an exercise.
For any v ∈ H0 (curl; Ω), by Lemma 8.6, there exists a ψ ∈ H01 (Ω) and a
vs ∈ H 1 (Ω)3 ∩ H0 (curl; Ω) such that v = ∇ψ + vs in Ω, and
∥ψ∥H 1 (Ω) + ∥vs ∥H 1 (Ω) 6 C ∥v∥H(curl;Ω) . (8.30)
Bibliographic notes. The results in Section 8.1 are taken from Girault
and Raviart [34]. Further results on vector potentials on nonsmooth domains
can be found in Amrouche et al [2]. The full characterization of the trace for
functions in H(curl; Ω) can be found in Buffa et al [17]. The Nédélec edge elements are introduced in Nédélec [43, 44]. Lemma 8.5 is taken from [2].
Further properties of edge elements can be found in Hiptmair [36] and Monk
[42]. The error analysis in Section 8.3 follows the development in [42] which
we refer to for further results. The interpolation operator in Theorem 8.11
is from Beck et al [8] although the proof here is slightly different. In [8] the
a posteriori error estimate is derived for smooth or convex domains. Lemma
8.6, which is also known as regular decomposition theorem, is from Birman
and Solomyak [10]. Theorem 8.12 is from Chen et al [21] in which the adap-
tive multilevel edge element method for time-harmonic Maxwell equations
based on a posteriori error estimate is also considered.
8.5. Exercises
Exercise 8.1. Prove D(Ω̄)3 is dense in H(div; Ω).
Exercise 8.2. The lowest order divergence conforming finite element is
a triple (K, D, N ) with the following properties:
(i) K ⊂ R3 is a tetrahedron;
(ii) D = {u = aK + bK x ∀ aK ∈ R3 , bK ∈ R};
(iii) N = {M_F : M_F(u) = \(\int_F\) (u · n) ds for every face F of K, u ∈ D}.
For any vector field u defined on K, let
û ◦ FK (x̂) = BK u(BK x̂ + bK ).
Prove that
(i) u ∈ D(K) ⇔ û ∈ D̂(K̂);
(ii) div u = 0 ⇔ \(\widehat{\operatorname{div}}\,\hat u\) = 0 ∀ u ∈ D(K);
(iii) MF (u) = 0 ⇔ MF̂ (û) = 0 ∀ u ∈ D(K);
(iv) If u ∈ D(K) and M_F(u) = 0, then u · n = 0 on F;
(v) If u ∈ D(K) and MF (u) = 0 for any face F , then u = 0 in K.
Exercise 8.3. For any function u defined on K such that M_F(u) is defined on each face F of K, let τ_K u be the unique polynomial in D(K) that has the same moments as u on K: M_F(τ_K u − u) = 0 for every face F. Prove that for
any p > 2, τK is continuous on the space
{u ∈ Lp (K)3 : div u ∈ L2 (K)}.
Exercise 8.4. Let D(K) be the finite element space in Exercise 8.2 and
Wh = {uh ∈ H(div; Ω) : uh |K ∈ D(K) ∀K ∈ Mh }.
Let τh be the global interpolation operator
τh u|K = τK u on K, ∀K ∈ Mh .
In this chapter we consider finite element methods for solving the follow-
ing elliptic equation with oscillating coefficients
−∇ · (a(x/ε)∇u) = f in Ω,
(9.1)
u=0 on ∂Ω,
where Ω ⊂ R2 is a bounded Lipschitz domain and f ∈ L2 (Ω). We assume
a(x/ε) = (aij (x/ε)) is a symmetric matrix and aij (y) are W 1,p (p > 2) peri-
odic functions in y with respect to a unit cube Y . We assume aε = a(x/ε) is
elliptic, that is, there exists a constant γ > 0 such that
\[
a_{ij}(y)\xi_i\xi_j \ge \gamma|\xi|^2\qquad \forall\xi\in\mathbb R^2.
\]
Here and throughout this chapter, the Einstein convention for repeated indices is assumed.
The problem (9.1) is a model multiscale problem which arises in the modeling of composite materials and of flow transport in heterogeneous porous media. The main difficulty in solving it by the standard finite element method is that, when ε is small, the underlying finite element mesh size h must be much smaller than ε, which makes the computational cost prohibitive. The multiscale finite element method allows one to solve the problem with a mesh size h greater than ε.
Integrating the above equation in y over the cell Y and using the periodicity,
we obtain the following homogenized equation
−∇ · (a∗ ∇u0 ) = f in Ω,
(9.4)
u0 = 0 on ∂Ω,
where a∗ = (a∗ij ) is the homogenized coefficient
\[
a_{ij}^* = \frac{1}{|Y|}\int_Y a_{ik}(y)\Big(\delta_{kj} - \frac{\partial\chi^j}{\partial y_k}\Big)\,dy. \tag{9.5}
\]
In summary we have the following asymptotic expansion
u(x) = u0 (x) + εu1 (x, x/ε) + o(ε),
where u0 satisfies (9.4) and u1 (x, y) is given by (9.3).
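As a simple illustration of the formula (9.5), not contained in the text, consider the standard one-dimensional analogue: the cell problem forces a(y)(1 − χ′(y)) to be constant on Y, and the periodicity of χ identifies this constant with the harmonic mean of a, so
\[
a^* \;=\; \frac{1}{|Y|}\int_Y a(y)\bigl(1-\chi'(y)\bigr)\,dy
 \;=\; \Bigl(\frac{1}{|Y|}\int_Y\frac{dy}{a(y)}\Bigr)^{-1}.
\]
In particular, the homogenized coefficient is the harmonic mean of a, not its arithmetic average.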
The above argument is heuristic. Our purpose now is to show the con-
vergence of the asymptotic expansion. Let θε denote the boundary corrector
which is the solution of
−∇ · (aε ∇θε ) = 0 in Ω,
(9.6)
θε = u1 (x, x/ε) on ∂Ω.
The variational form of the problem (9.1) is to find u(x) ∈ H01 (Ω) such
that
a(u, v) := (a(x/ε)∇u, ∇v) = (f, v) ∀v ∈ H01 (Ω). (9.7)
Similarly, the variational form of the problem (9.4) is to find u0 (x) ∈ H01 (Ω)
such that
(a∗ ∇u0 , ∇v) = (f, v) ∀v ∈ H01 (Ω). (9.8)
It can be shown that a∗ satisfies
a∗ij ξi ξj > γ|ξ|2 ∀ξ ∈ R2 .
Thus by Lax-Milgram lemma, (9.8) has a unique solution.
such that
\[
G_i^k(y) = \frac{\partial}{\partial y_j}\big(\alpha_{ij}^k(y)\big),\qquad \int_Y \alpha_{ij}^k(y)\,dy = 0.
\]
With this notation, we can rewrite
\[
G_i^k(x/\varepsilon)\frac{\partial u_0}{\partial x_k}
 = \varepsilon\frac{\partial}{\partial x_j}\Big(\alpha_{ij}^k(x/\varepsilon)\frac{\partial u_0}{\partial x_k}\Big)
 - \varepsilon\,\alpha_{ij}^k(x/\varepsilon)\frac{\partial^2 u_0}{\partial x_j\partial x_k}.
\]
For any φ ∈ H01 (Ω), from (9.6)–(9.8) and (9.3), we have
\[
\begin{aligned}
\big(a(x/\varepsilon)\nabla(u-u_0-\varepsilon u_1+\varepsilon\theta_\varepsilon),\nabla\varphi\big)
&= (a^*\nabla u_0,\nabla\varphi)
 - \Big(a(x/\varepsilon)\nabla\Big(u_0-\varepsilon\chi^k\frac{\partial u_0}{\partial x_k}\Big),\nabla\varphi\Big)\\
&= \varepsilon\int_\Omega a_{ij}(x/\varepsilon)\chi^k\frac{\partial^2 u_0}{\partial x_j\partial x_k}\frac{\partial\varphi}{\partial x_i}\,dx
 - \varepsilon\int_\Omega \alpha_{ij}^k(x/\varepsilon)\frac{\partial^2 u_0}{\partial x_j\partial x_k}\frac{\partial\varphi}{\partial x_i}\,dx.
\end{aligned}
\]
Notice that here we have used the fact that \(\frac{\partial}{\partial x_j}\big(\alpha_{ij}^k(x/\varepsilon)\frac{\partial u_0}{\partial x_k}\big)\) is divergence free. Taking φ = u − u_0 − εu_1 + εθ_ε then yields the result.
It is obvious that
−∇ · (a(x/ε)∇Ih u) = 0 in K,
(9.11)
Ih u = Πh u on ∂K.
Lemma 9.2. Let u ∈ H 2 (Ω)∩H01 (Ω) be the solution of (9.1). There exists
a constant C independent of h, ε such that
∥u − Ih u∥L2 (Ω) + h∥u − Ih u∥H 1 (Ω) 6 Ch2 (|u|H 2 (Ω) + ∥f ∥L2 (Ω) ).
Proof. By Theorem 3.6, we have
∥u − Πh u∥L2 (Ω) + h∥u − Πh u∥H 1 (Ω) 6 Ch2 |u|H 2 (Ω) . (9.12)
Since Πh u − Ih u = 0 on ∂K, by the scaling argument and the Poincaré-
Friedrichs inequality we get
∥Πh u − Ih u∥0,K 6 Ch∥Πh u − Ih u∥1,K . (9.13)
Choosing v_h = I_h w yields
\[
(u_h-u,\varphi) \le C(h/\varepsilon)\|f\|_{L^2(\Omega)}\|w-I_hw\|_{H^1(\Omega)}
 \le C(h/\varepsilon)\|f\|_{L^2(\Omega)}\,(h/\varepsilon)\|\varphi\|_{L^2(\Omega)}.
\]
Hence
\[
\|u_h-u\|_{L^2(\Omega)} = \sup_{0\ne\varphi\in L^2(\Omega)}\frac{(u_h-u,\varphi)}{\|\varphi\|_{L^2(\Omega)}}
 \le C(h/\varepsilon)^2\|f\|_{L^2(\Omega)}.
\]
9.2.2. Error estimate when h > ε. Now we consider the error esti-
mate when h > ε which is the main attraction of the multiscale finite element
method.
Theorem 9.3. Let u and uh be the solutions of (9.1) and (9.10), re-
spectively. Then there exists a constant C, independent of h and ε, such
that
∥u − uh ∥H 1 (Ω) 6 C(h + ε)∥f ∥L2 (Ω) + C((ε/h)1/2 + ε1/2 )∥u0 ∥W 1,∞ (Ω) .
It follows that
−∇ · (aε ∇uI ) = 0 in K,
uI = Πh u0 on ∂K.
Let uI0 be the solution of the homogenized problem
−∇ · (a∗ ∇uI0 ) = 0 in K,
(9.16)
uI0 = Πh u0 on ∂K,
and
\[
u_1^I = -\chi^j\frac{\partial u_0^I}{\partial x_j}\quad\text{in }K. \tag{9.17}
\]
Let θIε be the boundary corrector
−∇ · (a(x/ε)∇θIε ) = 0 in K,
(9.18)
θIε = uI1 (x, x/ε) on ∂K.
Clearly
uI0 = Πh u0 in K, (9.19)
that is, uI0 is linear in K which implies |uI0 |2,K = 0. Thus by following the
proof of Theorem 9.1,
It is clear that
∥ε(u1 − uI1 )∥L2 (Ω) = ε∥χj ∂(u0 − Πh u0 )/∂xj ∥L2 (Ω) 6 Chε|u0 |H 2 (Ω) .
Thus, we have
∥ε(u1 − uI1 )∥1,Ω 6 C(h + ε)|u0 |H 2 (Ω) 6 C(h + ε)∥f ∥L2 (Ω) . (9.22)
Hence
∥∇θε ∥L2 (Ω) 6 C∥∇(ξχj ∂u0 /∂xj )∥L2 (Ω)
6 C∥∇ξχj ∂u0 /∂xj ∥L2 (Ω) + C∥ξ∇χj ∂u0 /∂xj ∥L2 (Ω)
+ C∥ξχj ∇(∂u0 /∂xj )∥L2 (Ω)
√
6 C∥∇u0 ∥L∞ (Ω) |∂Ω|ε/ε + C|u0 |H 2 (Ω) (9.23)
On the other hand, from the maximum principle, we have
∥θε ∥L∞ (Ω) 6 ∥χj ∂u0 /∂xj ∥L∞ (∂Ω) 6 C∥u0 ∥W 1,∞ (Ω) .
Thus, we obtain
√
∥εθε ∥H 1 (Ω) 6 C ε∥u0 ∥W 1,∞ (Ω) + Cε|u0 |H 2 (Ω) . (9.24)
Finally, we estimate ∥εθIε ∥H 1 (Ω) . From the maximum principle, we have
∥θIε ∥L∞ (K) 6 ∥χj ∂(Πh u0 )/∂xj ∥L∞ (∂K) 6 C∥Πh u0 ∥W 1,∞ (K)
6 C∥u0 ∥W 1,∞ (K) .
Hence
∥εθIε ∥L2 (Ω) 6 Cε∥u0 ∥W 1,∞ (Ω) .
Similar to (9.23), we have
\[
\|\varepsilon\nabla\theta_\varepsilon^I\|_{L^2(K)}
 \le C\|\nabla u_0\|_{L^\infty(K)}\sqrt{|\partial K|\varepsilon} + C\varepsilon|\Pi_hu_0|_{H^2(K)}
 \le C\sqrt{h\varepsilon}\,\|u_0\|_{W^{1,\infty}(K)},
\]
which implies
\[
\|\varepsilon\nabla\theta_\varepsilon^I\|_{L^2(\Omega)} \le C(\varepsilon/h)^{1/2}\|u_0\|_{W^{1,\infty}(\Omega)}.
\]
Hence
\[
\|\varepsilon\theta_\varepsilon^I\|_{1,\Omega} \le C\big((\varepsilon/h)^{1/2}+\varepsilon\big)\|u_0\|_{W^{1,\infty}(\Omega)}. \tag{9.25}
\]
Combining (9.21)–(9.22), (9.24)–(9.25) and using the Céa lemma, we obtain
\[
\|u-u_h\|_{1,\Omega} \le C(h+\varepsilon)\|f\|_{L^2(\Omega)}
 + C\big((\varepsilon/h)^{1/2}+\varepsilon^{1/2}\big)\|u_0\|_{W^{1,\infty}(\Omega)}.
\]
This completes the proof.
We remark that the error estimate in Theorem 9.3 is uniform when ε → 0
which suggests that one can take the mesh size h larger than ε in using the
multiscale finite element methods. The term (ε/h)1/2 in the error estimate is
due to the mismatch of the multiscale finite element basis functions with the
solution u of the original problem inside the domain. One way to improve
this error is the over-sampling finite element method that we introduce in
the next section.
\[
\varphi_i^K = c_{ij}^K\,\varphi_j^S|_K\quad\text{in }K.
\]
The existence of the constants c_{ij}^K is guaranteed because \(\{\varphi_j^S\}_{j=1}^3\) also forms a basis of P_1(K).
Let O^{MS}(K) = span\(\{\bar\psi_i^K\}_{i=1}^3\) and let Π_K : O^{MS}(K) → P_1(K) be the projection
\[
\Pi_K\psi = c_i\varphi_i^K\quad\text{if }\psi = c_i\bar\psi_i^K\in O^{MS}(K).
\]
Let X̄_H be the finite element space
\[
\bar X_H = \{\psi_H : \psi_H|_K\in O^{MS}(K)\ \forall K\in M_H\}
\]
and define Π_H : X̄_H → \(\prod_{K\in M_H}P_1(K)\) through the relation
\[
\Pi_H\psi_H|_K = \Pi_K\psi_H\quad\text{for any }K\in M_H,\ \psi_H\in\bar X_H.
\]
The over-sampling multiscale finite element space is then defined as
\[
X_H = \big\{\psi_H\in\bar X_H : \Pi_H\psi_H\in W_H\subset H^1(\Omega)\big\}.
\]
In general, XH ̸⊂ H 1 (Ω) and the requirement ΠH ψH ∈ WH is to impose
certain continuity of the functions ψH ∈ XH across the inter-element bound-
aries. Here we have an example of a nonconforming finite element method.
\[
X_H^0 = \{\psi_H\in X_H : \Pi_H\psi_H = 0\ \text{on }\partial\Omega\},
\]
such that
\[
\sum_{K\in M_H}\int_K a_\varepsilon\nabla u_H\cdot\nabla\psi_H\,dx = \int_\Omega f\psi_H\,dx\qquad \forall\psi_H\in X_H^0.
\]
We introduce the bilinear form \(a_H(\cdot,\cdot) : \prod_{K\in M_H}H^1(K)\times\prod_{K\in M_H}H^1(K)\to\mathbb R\),
\[
a_H(\varphi,\psi) = \sum_{K\in M_H}\int_K a_\varepsilon\nabla\varphi\cdot\nabla\psi\,dx\qquad
\forall\varphi,\psi\in\prod_{K\in M_H}H^1(K),
\]
\[
\|u_\varepsilon-u_H\|_{h,\Omega}
 \le C\inf_{\psi_H\in X_H^0}\|u_\varepsilon-\psi_H\|_{h,\Omega}
 + C\sup_{0\ne\psi_H\in X_H^0}\frac{\int_\Omega f\psi_H\,dx - a_H(u_\varepsilon,\psi_H)}{\|\psi_H\|_{h,\Omega}}.
\]
Proof. Define \(\langle R,\psi_H\rangle = \int_\Omega f\psi_H\,dx - a_H(u_\varepsilon,\psi_H)\); then we have
where G_i^k satisfies
\[
\int_Y G_i^k(y)\,dy = 0\quad\text{and}\quad \frac{\partial G_i^k}{\partial y_i} = 0.
\]
Multiplying (9.28) by ∇χ_H^0 and integrating over K we see
\[
\int_K a_{ij}^*\frac{\partial\chi_H^0}{\partial x_j}\frac{\partial\chi_H^0}{\partial x_i}\,dx
 = \int_K a_{ij}\frac{\partial\chi_H}{\partial x_j}\frac{\partial\chi_H^0}{\partial x_i}\,dx
 + \int_K G_i^k\Big(\frac{x}{\varepsilon}\Big)\frac{\partial\chi_H^0}{\partial x_k}\frac{\partial\chi_H^0}{\partial x_i}\,dx
 - \varepsilon\int_K a_{ij}\frac{\partial\theta_\varepsilon^S}{\partial x_j}\frac{\partial\chi_H^0}{\partial x_i}\,dx.
\]
From the interior estimate due to Avellaneda and Lin [4, Lemma 16],
\[
\|\nabla\theta_\varepsilon^S\|_{L^\infty(K)} \le Ch_K^{-1}\|\theta_\varepsilon^S\|_{L^\infty(S)}.
\]
Therefore, by the maximum principle and the finite element inverse estimate,
\[
\varepsilon\Big|\int_K a_{ij}\frac{\partial\theta_\varepsilon^S}{\partial x_j}\frac{\partial\chi_H^0}{\partial x_i}\,dx\Big|
 \le C\varepsilon h_K\|\nabla\theta_\varepsilon^S\|_{L^\infty(K)}\|\nabla\chi_H^0\|_{L^2(K)}
 \le C\frac{\varepsilon}{h_K}\|\nabla\chi_H^0\|_{L^2(K)}^2.
\]
By Lemma 9.4 we have
\[
\Big|\int_K G_i^k\Big(\frac{x}{\varepsilon}\Big)\frac{\partial\chi_H^0}{\partial x_k}\frac{\partial\chi_H^0}{\partial x_i}\,dx\Big|
 \le C\varepsilon h_K\|\nabla\chi_H^0\|_{L^\infty(K)}^2
 \le C\frac{\varepsilon}{h_K}\|\nabla\chi_H^0\|_{L^2(K)}^2.
\]
Thus
\[
\alpha^*\|\nabla\chi_H^0\|_{L^2(K)}^2
 \le C\|\nabla\chi_H\|_{L^2(K)}\|\nabla\chi_H^0\|_{L^2(K)}
 + C\frac{\varepsilon}{h_K}\|\nabla\chi_H^0\|_{L^2(K)}^2.
\]
This completes the proof.
We take
\[
\psi_H = \sum_{x_j\ \text{interior node}} u_0(x_j)\,\bar\psi_j(x) \in X_H^0;
\]
then
\[
\Pi_H\psi_H|_K = I_Hu_0\qquad \forall K\in M_H,
\]
where I_H : C(Ω̄) → W_H is the standard Lagrange interpolation operator onto the linear finite element space. By (9.27) we know that
\[
\psi_H = I_Hu_0 - \varepsilon\chi^j\frac{\partial(I_Hu_0)}{\partial x_j} - \varepsilon\theta_\varepsilon^S,
\]
where θ_ε^S ∈ H^1(S) is the boundary corrector given by
\[
-\nabla\cdot(a_\varepsilon\nabla\theta_\varepsilon^S) = 0\ \text{in }S,\qquad
\theta_\varepsilon^S\big|_{\partial S} = -\chi^j\frac{\partial(I_Hu_0)}{\partial x_j}.
\]
By the interior estimate in Avellaneda and Lin [4, Lemma 16],
\[
\|\nabla\theta_\varepsilon^S\|_{L^\infty(K)} \le Ch_K^{-1}\|\theta_\varepsilon^S\|_{L^\infty(S)}
 \le Ch_K^{-1}|I_Hu_0|_{W^{1,\infty}(S)} \le Ch_K^{-1}|u_0|_{W^{1,\infty}(K)}.
\]
Therefore
\[
\Big\|\nabla\Big(\psi_H - I_Hu_0 + \varepsilon\chi^j\frac{\partial(I_Hu_0)}{\partial x_j}\Big)\Big\|_{L^2(K)}
 \le C\varepsilon h_K^{-1}|u_0|_{W^{1,\infty}(K)}|K|^{1/2}
 \le C\varepsilon|u_0|_{W^{1,\infty}(K)}. \tag{9.30}
\]
Since
\[
\|\nabla(u_0 - I_Hu_0)\|_{L^2(K)} \le Ch_K|u_0|_{H^2(K)},\qquad
\Big\|\varepsilon\nabla\Big(\chi^j\frac{\partial(I_Hu_0-u_0)}{\partial x_j}\Big)\Big\|_{L^2(K)} \le C(h+\varepsilon)|u_0|_{H^2(K)},
\]
we finally obtain
\[
\|\nabla(u_\varepsilon-\psi_H)\|_{h,\Omega} \le C(h+\varepsilon)|u_0|_{H^2(\Omega)}
 + C\Big(\frac{\varepsilon}{h}+\sqrt{\varepsilon}\Big)\|u_0\|_{W^{1,\infty}(\Omega)}.
\]
It remains to estimate the non-conforming error. Since ΠH ψH ∈ WH ⊂
H 1 (Ω) we know that
\[
\int_\Omega f\psi_H\,dx - a_H(u_\varepsilon,\psi_H)
 = \int_\Omega f(\psi_H-\Pi_H\psi_H)\,dx
 - \sum_{K\in M_H}\int_K a_\varepsilon\nabla u_\varepsilon\cdot\nabla(\psi_H-\Pi_H\psi_H)\,dx.
\]
But
\[
|II| \le \sum_{K\in M_H}\bigg(\Big|\int_K a^*\nabla u_0\cdot\nabla(\psi_H-\Pi_H\psi_H)\,dx\Big|
 + \Big|\int_K G_i^k\frac{\partial u_0}{\partial x_k}\frac{\partial(\psi_H-\Pi_H\psi_H)}{\partial x_i}\,dx\Big|
 + \varepsilon|u_0|_{H^2(K)}\|\nabla(\psi_H-\Pi_H\psi_H)\|_{L^2(K)}\bigg).
\]
That is,
\[
|II_1| \le \Big(C\varepsilon|u_0|_{H^2(\Omega)} + C\frac{\varepsilon}{h}|u_0|_{W^{1,\infty}(\Omega)}\Big)\|\psi_H\|_{h,\Omega}.
\]
Similarly, we know that
\[
|II_2| \le \Big(C\varepsilon|u_0|_{H^2(\Omega)} + C\frac{\varepsilon}{h}|u_0|_{W^{1,\infty}(\Omega)}\Big)\|\psi_H\|_{h,\Omega}.
\]
It is obvious that
\[
|II_3| \le C\varepsilon|u_0|_{H^2(\Omega)}\|\psi_H\|_{h,\Omega}.
\]
This shows that the non-conforming error in Lemma 9.3 satisfies
\[
\sup_{0\ne\psi_H\in X_H^0}\frac{\int_\Omega f\psi_H\,dx - a_H(u_\varepsilon,\psi_H)}{\|\psi_H\|_{h,\Omega}}
 \le C\varepsilon|u_0|_{H^2(\Omega)}
 + C\Big(\frac{\varepsilon}{h}+\sqrt{\varepsilon}\Big)\big(\|f\|_{L^2(\Omega)}+\|u_0\|_{W^{1,\infty}(\Omega)}\big).
\]
This completes the proof.
Bibliographic notes. Homogenization theory for elliptic equations with
highly oscillatory coefficients is a topic of intensive studies. We refer to the
monographs Bensoussan et al [9] and Jikov et al [39] for further results. The-
orem 9.1 is taken from [39]. The multiscale finite element method is intro-
duced in Hou and Wu [37] and Hou et al [38]. The over-sampling multiscale
finite element is introduced in Efendiev et al [29]. Further development of
multiscale finite elements can be found in Chen and Hou [19] for the mixed
multiscale finite element method and in Chen and Yue [22] for the multiscale
finite element method dealing with well singularities.
9.4. Exercises
Exercise 9.1. Show that the homogenized coefficient a∗ satisfies
a∗ij ξi ξj > γ|ξ|2 ∀ξ ∈ R2 .
Exercise 9.2. Prove Lemma 9.1.
CHAPTER 10
Implementations
f=1;
4. Specify the boundary condition:
b=’circleb1’;
5. Solve the PDE and plot the solution:
u=assempde(b,p,e,t,c,a,f);
pdesurf(p,t,u);
6. Compute the maximum error:
exact=(1-p(1,:).^2-p(2,:).^2)’/4;
error=max(abs(u-exact));
fprintf(’Error: %e. Number of nodes: %d\n’,...
error,size(p,2));
pdesurf(p,t,u-exact); %Plot the error.
7. If the error is not sufficiently small, refine the mesh:
[p,e,t]=refinemesh(g,p,e,t);
You can then solve the problem on the new mesh, plot the solution, and
recompute the error by repeating Steps 5 and 6.
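Steps 5–7 can also be combined into a single short script. The following is a sketch; it assumes the geometry g, the boundary condition b and the coefficients c, a, f have been set up as in the preceding steps, and it uses the exact solution of the unit-disk problem from Step 6.

```matlab
% Solve, compare with the exact solution, and refine, repeated five times.
[p,e,t] = initmesh(g);
for j = 1:5
    u = assempde(b,p,e,t,c,a,f);
    exact = (1 - p(1,:).^2 - p(2,:).^2)'/4;
    fprintf('Level %d: %d nodes, max error %e\n', j, size(p,2), max(abs(u-exact)));
    if j < 5
        [p,e,t] = refinemesh(g,p,e,t);
    end
end
```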
In the element matrix (for example, denoted by t), the first three rows
contain indices to the corner points, given in counter-clockwise order, and
the fourth row contains the subdomain number.
t = [p1; p2; p3 % index to column in p
sd]; % subdomain number
We remark that the (global) indices of the nodal points are given by the column numbers of the point matrix p, i.e., the coordinates of the i-th point are p(:,i). The edge matrix e contains only the element sides on the boundary of the (sub)domain(s). In the j-th element (the triangle defined by the j-th column of t), the first three rows of t give the global indices of its three vertices. It is clear that the first three rows of the element matrix t define a map from the local indices of the nodal points to their global indices. This relationship is important in assembling the global stiffness
matrix from the element stiffness matrices in the finite element discretization
(see Section 2.3).
For example, we consider the unit square described by the decomposed
geometry matrix
g = [2 2 2 2
0 1 1 0
1 1 0 0
1 1 0 0
1 0 0 1
0 0 0 0
1 1 1 1];
Here the decomposed geometry matrix g is obtained as follows. We first
draw the geometry (the unit square) in the GUI, then export it by selecting
“Export the Decomposed Geometry, Boundary Cond’s” from the “Boundary”
menu. For details on the decomposed geometry matrix we refer to the help
on the function "decsg.m”. Figure 1 shows a standard triangulation of the
unit square obtained by running
[p,e,t] = poimesh(g,2);
Here the output mesh data
p = [0 0.5 1 0 0.5 1 0 0.5 1
0 0 0 0.5 0.5 0.5 1 1 1];
e = [1 2 3 6 9 8 7 4      % start point of each boundary edge
     2 3 6 9 8 7 4 1      % end point of each boundary edge
     1 0.5 1 0.5 1 0.5 1 0.5
     0.5 0 0.5 0 0.5 0 0.5 0
     3 3 2 2 1 1 4 4      % boundary segment number
     1 1 1 1 1 1 1 1      % left subdomain
     0 0 0 0 0 0 0 0];    % right subdomain (0 = exterior)
t = [2 4 4 5 1 1 2 5
6 5 8 9 5 2 3 6
5 8 7 8 4 5 6 9
1 1 1 1 1 1 1 1];
Figure 1 also shows the global and local indices to the points, the indices to
the elements, and the indices to the edges, respectively.
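To visualize the triangulation together with its boundary edges one may call the PDE Toolbox plotting routine:

```matlab
pdemesh(p,e,t);   % plot the mesh described by the point, edge and element matrices
```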
Example 10.1. Assemble the stiffness matrix for the Poisson equation
(10.7) on a given mesh p, e, t. The following function assembles the stiff-
ness matrix from the element stiffness matrices which is analogous to the
function “pdeasmc.m”.
%
% A is the stiffness matrix, F is the right-hand side vector.
% UN=A\F returns the solution on the non-Dirichlet points.
% The solution to the full PDE problem can be obtained by the
% MATLAB command U=B*UN+ud.
% We have A*U=F.
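Since the body of the routine is not reproduced here, the following is a minimal sketch of such an assembly loop for the pure Poisson case (c = 1, a = 0, constant right-hand side f); it is not the authors' code and, unlike pdeasmc.m, it ignores boundary conditions.

```matlab
function [A,F] = assemble_poisson(p,t,f)
% Assemble the P1 stiffness matrix A and load vector F for -Laplace u = f
% on the mesh (p,t) in PDE Toolbox format; f is assumed constant.
np = size(p,2); nt = size(t,2);
A = sparse(np,np); F = zeros(np,1);
for K = 1:nt
    nodes = t(1:3,K);                       % local-to-global index map
    x = p(1,nodes); y = p(2,nodes);
    area = 0.5*abs((x(2)-x(1))*(y(3)-y(1)) - (x(3)-x(1))*(y(2)-y(1)));
    b = [y(2)-y(3); y(3)-y(1); y(1)-y(2)]/(2*area);   % gradients of the
    c = [x(3)-x(2); x(1)-x(3); x(2)-x(1)]/(2*area);   % three hat functions
    A(nodes,nodes) = A(nodes,nodes) + area*(b*b' + c*c');  % element stiffness
    F(nodes) = F(nodes) + f*area/3;         % one-point quadrature for the load
end
```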
10.1.3. A quick reference. Here is a brief table that tells you where to find help information on constructing geometries, writing boundary conditions, generating and refining meshes, and so on.
% Geometry
g = [2 2 2 2 2 2
0 1 1 -1 -1 0
1 1 -1 -1 0 0
0 0 1 1 -1 -1
0 1 1 -1 -1 0
1 1 1 1 1 1
0 0 0 0 0 0];
% Boundary conditions
r=’(x.^2+y.^2).^(1/3).*sin(2/3*(atan2(y,x)+2*pi*(y<0)))’;
b=[1 1 1 1 1 length(r) ’0’ ’0’ ’1’ r]’;
b=repmat(b,1,6);
% PDE coefficients
c=1;
a=0;
f=0;
% Initial mesh
[p,e,t]=initmesh(g);
error=[];
J=6;
for j=1:J
u=assempde(b,p,e,t,c,a,f);
err=pdeerrH1(p,t,u,ue,uex,uey);
error=[error err];
if j<J,
[p,e,t]=refinemesh(g,p,e,t);
end
end
err=quadgauss(p,t,f,u);
errx=quadgauss(p,t,[’(’ uex ’-repmat(u,7,1)).^2’],ux);
erry=quadgauss(p,t,[’(’ uey ’-repmat(u,7,1)).^2’],uy);
error=sqrt(err+errx+erry);
f=eval(f);
qt=2*ar.*(w(1)*f(1,:)+w(2)*sum(f(2:4,:))+w(3)*sum(f(5:7,:)));
q=sum(qt);
% Geometry
g = [2 2 2 2 2 2
0 1 1 -1 -1 0
1 1 -1 -1 0 0
0 0 1 1 -1 -1
0 1 1 -1 -1 0
1 1 1 1 1 1
0 0 0 0 0 0];
% Boundary conditions
b=[1 1 1 1 1 length(ue) ’0’ ’0’ ’1’ ue]’;
b=repmat(b,1,6);
% PDE coefficients
c=1;
a=0;
f=0;
158 10. IMPLEMENTATIONS
% Initial mesh
[p,e,t]=initmesh(g);
hold off;
that is,
\[
\tilde A_{k-1} = (I_{k-1}^k)^t\,\tilde A_k\,I_{k-1}^k. \tag{10.10}
\]
Algorithm 10.1. (Matrix version of the V-cycle iterator). Let \(\tilde B_1 = \tilde A_1^{-1}\). Assume that \(\tilde B_{k-1}\in\mathbb R^{n_{k-1}\times n_{k-1}}\) is defined; then \(\tilde B_k\in\mathbb R^{n_k\times n_k}\) is defined as follows. Let \(\tilde g\in\mathbb R^{n_k}\).
(1) Pre-smoothing: for \(\tilde y^0 = 0\) and j = 1, · · · , m,
\[
\tilde y^j = \tilde y^{j-1} + \tilde R_k(\tilde g - \tilde A_k\tilde y^{j-1}).
\]
(2) Coarse grid correction: \(\tilde e = \tilde B_{k-1}(I_{k-1}^k)^t(\tilde g - \tilde A_k\tilde y^m)\), \(\ \tilde y^{m+1} = \tilde y^m + I_{k-1}^k\tilde e\).
\[
\tilde u_k^{(n+1)} = \tilde u_k^{(n)} + \tilde B_k\big(\tilde f_k - \tilde A_k\tilde u_k^{(n)}\big),\qquad n = 0,1,2,\cdots, \tag{10.11}
\]
p=p0;
e=e0;
t=t0;
fprintf(’k = %g. Number of triangles = %g\n’,1,size(t,2));
[A,F,Bc,ud]=assempde(b,p,e,t,c,a,f);
u=Bc*(A\F)+ud;
I={};
for k=2:nr+1
% Mesh and prolongation matrix
[p,e,t,C]=refinemesh_mg(g,p,e,t);
[A,F,Bf,ud]=assempde(b,p,e,t,c,a,f);
fprintf(’k = %g. Number of triangles = %g\n’,k,size(t,2));
I=[I {Bf’*C*Bc}]; % Eliminate the Dirichlet boundary nodes
Bc=Bf;
u=Bf’*C*u; % Initial value
function Br=mgp_vcycle(A,r,I,m,k)
% Multigrid V-cycle preconditioner: returns Br, an approximation to A\r
% (function header restored; the signature is inferred from the calls below).
if(k==1),
Br=A\r;
else
Ik=I{k-1}; % Prolongation matrix
[np,np1]=size(Ik);
ns=find(sum(Ik)>1);
ns=[ns, np1+1:np]; % Nodes to be smoothed.
y=zeros(np,1);
y=mgs_gs(A,r,y,ns,m); % Pre-smoothing
r1=r-A(:,ns)*y(ns);
r1=Ik’*r1;
B=Ik’*A*Ik;
Br1=mgp_vcycle(B,r1,I,m,k-1);
y=y+Ik*Br1;
y=mgs_gs(A,r,y,ns(end:-1:1),m); % Post-smoothing
Br=y;
end
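Besides driving the stationary iteration (10.11), the routine can also be used as a preconditioner for MATLAB's pcg. The following is a usage sketch: A, F and the cell array I of prolongation matrices are assumed to be those built by the setup code above, m is the number of smoothing steps and k the number of levels.

```matlab
% Preconditioned conjugate gradients with the V-cycle as preconditioner;
% the function handle returns an approximation to A\r.
[u,flag,relres,iter] = pcg(A, F, 1e-8, 100, @(r) mgp_vcycle(A,r,I,m,k));
```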
function [u,steps]=mg_vcycle(A,F,I,u0,m,k,tol)
% Multigrid V-cycle iteration
if k==1,
u=A\F;
steps=1;
else
u=u0;
r=F-A*u;
error0=max(abs(r));
error=error0;
steps=0;
fprintf(’Number of Multigrid iterations: ’);
while error>tol*error0,
Br=mgp_vcycle(A,r,I,m,k); % Multigrid precondtioner
u=u+Br;
r=F-A*u;
error=max(abs(r));
steps=steps+1;
for j=1:floor(log10(steps-0.5))+1,
fprintf(’\b’);
end
fprintf(’%g’,steps);
end
fprintf(’\n’);
end
function x=mgs_gs(A,r,x0,ns,m)
% Gauss-Seidel smoothing of A*x=r on the index set ns, starting from x0 and
% performing m sweeps (header restored; signature inferred from the calls above).
A1=A(ns,ns);
ip=ones(size(r));
ip(ns)=0;
ip=ip==1;
A2=A(ns,ip);
y1=x0(ns);
y2=x0;
y2(ns)=[];
r1=r(ns)-A2*y2;
L=tril(A1);
U=triu(A1,1);
for k=1:m
y1=L\(r1-U*y1);
end
x=x0;
x(ns)=y1;
Figure 2. Four similarity classes of triangles generated by “newest
vertex bisection”.
np=size(p,2);
nt=size(t,2);
if nargin==4,
it=(1:nt)’; % All triangles
end
itt1=ones(1,nt);
itt1(it)=zeros(size(it));
it1=find(itt1); % Triangles not yet to be refined
it=find(itt1==0); % Triangles whose side opposite to
% the newest vertex is to be bisected
% Prolongation matrix.
if nargout == 4,
nie = length(ie);
Icr = [sparse(1:nie,e(1,ie),1/2,nie,np)+...
sparse(1:nie,e(2,ie),1/2,nie,np)];
end
ip=(np+1):(np+length(ie));
np1=np+length(ie);
% Create new edges
e1=[e(:,ie1) ...
[e(1,ie);ip;e(3,ie);(e(3,ie)+e(4,ie))/2;e(5:7,ie)] ...
[ip;e(2,ie);(e(3,ie)+e(4,ie))/2;e(4,ie);e(5:7,ie)]];
% Fill in the new points
A=sparse(e(1,ie),e(2,ie),ip+1,np,np)...
+sparse(e(2,ie),e(1,ie),ip+1,np,np)+A;
% Prolongation matrix.
if nargout == 4,
ni=length(i);
Icr = [Icr;sparse(1:ni,i1,1/2,ni,np)+...
sparse(1:ni,i2,1/2,ni,np)];
Icr = [speye(size(Icr,2));Icr];
end
10.4. MULTIGRID V-CYCLE ALGORITHM 167
ip=(np1+1):(np1+length(i));
% Fill in the new points
A=sparse(i1,i2,ip+1,np,np)+sparse(i2,i1,ip+1,np,np)+A;
li=length(i); iti=it(i);
t1(:,(nt1+1):(nt1+li))=[mp3(i);t(2,iti);mp1(i);t(4,iti)];
nt1=nt1+li;
t1(:,(nt1+1):(nt1+li))=[t(3,iti);t(1,iti);mp3(i);t(4,iti)];
nt1=nt1+length(i);
t1(:,(nt1+1):(nt1+li))=[t(3,iti);mp3(i);mp1(i);t(4,iti)];
nt1=nt1+li;
i=find(bm==0); % Side 3 is refined
li=length(i); iti=it(i);
t1(:,(nt1+1):(nt1+li))=[t(3,iti);t(1,iti);mp3(i);t(4,iti)];
nt1=nt1+li;
t1(:,(nt1+1):(nt1+li))=[t(2,iti);t(3,iti);mp3(i);t(4,iti)];
/* Input Arguments */
/* Output Arguments */
if (nrhs != 3) {
mexErrMsgTxt("Three input arguments required.");
} else if (nlhs > 1) {
mexErrMsgTxt("Too many output arguments.");
}
ni = mxGetN(vi_IN)*mxGetM(vi_IN);
nj = mxGetN(vj_IN)*mxGetM(vj_IN);
if (ni != nj)
mexErrMsgTxt("The lengths of vi and vi must be equal");
pr = mxGetPr(A_IN);
pi = mxGetPi(A_IN);
ir = mxGetIr(A_IN);
jc = mxGetJc(A_IN);
vi = mxGetPr(vi_IN);
vj = mxGetPr(vj_IN);
if (!mxIsComplex(A_IN)){
/* Create a matrix for the return argument */
b_OUT = mxCreateDoubleMatrix(1, ni, mxREAL);
10.5. Exercises
Exercise 10.1. Solve the following problem by the linear finite element
method.
−∆u = x, −∞ < x < ∞, 0 < y < 1,
u(x, 0) = u(x, 1) = 0, −∞ < x < ∞,
u(x, y) is periodic in the x direction with period 1.
Exercise 10.2. Solve the L-shaped domain problem in Example 4.1 by using the adaptive finite element algorithm based on the Dörfler marking strategy and verify the quasi-optimality of the algorithm.
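For reference, the Dörfler (bulk) marking step itself can be sketched as follows. This is a minimal sketch, not tied to the book's code: eta is assumed to be the vector of squared element error indicators and theta ∈ (0,1) the marking parameter.

```matlab
function marked = dorfler_mark(eta, theta)
% Select a minimal set of elements whose squared indicators sum to at least
% theta times the total estimated error; eta(K) is the squared indicator of K.
[s, idx] = sort(eta, 'descend');
cum = cumsum(s);
nmark = find(cum >= theta*sum(eta), 1, 'first');
marked = idx(1:nmark);
```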
Exercise 10.3. Solve the Poisson equation on the unit disk with homogeneous Dirichlet boundary condition by using the full multigrid Algorithm 10.2 with the Gauss-Seidel smoother and verify Theorem 5.5 numerically.
Bibliography
Index
partition of unity, 12
Petrov-Galerkin method, 15
Poincaré inequality, 6
Rayleigh-Ritz method, 14
reference finite element, 22
Ritz projection, 87
Ritz-Galerkin method, 15
semidiscrete problem, 85
Sobolev imbedding, 5
Sobolev spaces, 2
space
C0∞ (Ω), 1
H k (Ω), 2
H0k (Ω), 2
H s (Ω), 7
H0s (Ω), 7
L1loc (Ω), 1
W k,p (Ω), 2
W0k,p (Ω), 2
W s,p (Ω), 6
W0s,p (Ω), 7
D(Ω̄), 5
stiffness matrix, 14