MATH1070 4 Numerical Integration PDF
Numerical Integration
Review of Interpolation
Theorem
Let $y_j = f(x_j)$, with $f(x)$ smooth, and let $p_n$ interpolate $f(x)$ at $x_0 < x_1 < \ldots < x_n$. For any $x \in (x_0, x_n)$ there is a $\xi$ such that
\[
f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\underbrace{(x - x_0)(x - x_1)\cdots(x - x_n)}_{\psi(x)}
\]
Example: $n = 1$, linear:
\[
f(x) - p_1(x) = \frac{f''(\xi)}{2}(x - x_0)(x - x_1), \qquad
p_1(x) = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0) + y_0.
\]
5. Numerical Integration Math 1070
> 5. Numerical Integration > 5.1 The Trapezoidal Rule
Mixed rule
\[
\int_a^b f(x)\,dx = \sum_{j=0}^{N-1} \underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\substack{\text{interpolate } f(x) \text{ on } (x_j,\,x_{j+1}) \\ \text{and integrate exactly}}}
\approx \sum_{j=0}^{N-1} \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)
\]
Trapezoidal Rule
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \int_{x_j}^{x_{j+1}} p_1(x)\,dx
= \int_{x_j}^{x_{j+1}} \left[\frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}(x - x_j) + f(x_j)\right] dx
= \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)
\]
On a single interval $[a, b]$:
\[
T_1(f) = (b - a)\,\frac{f(a) + f(b)}{2} \qquad (6.1)
\]
Seek:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1} f(x_{j+1})
\]
Example
Approximate the integral
\[
I = \int_0^1 \frac{dx}{1+x}
\]
The true value is $I = \ln(2) \doteq 0.693147$. Using (6.1), we obtain
\[
T_1 = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4} = 0.75
\]
The composite rule $T_n(f)$ uses the evenly spaced nodes $x_j = a + jh$, $j = 0, 1, \ldots, n$, with $h = \frac{b-a}{n}$.
Example
We give calculations of $T_n(f)$ for three integrals:
\[
I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427
\]
\[
I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803
\]
\[
I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844
\]
[Table: $T_n(f)$ for $I^{(1)}$, $I^{(2)}$, $I^{(3)}$; the entries are correct up to the limits of rounding error on the computer (16 decimal digits).]
> 5. Numerical Integration > 5.1.1 Simpson’s rule
Example
\[
I = \int_0^1 \frac{dx}{1+x}
\]
Then $h = \frac{b-a}{2} = \frac{1}{2}$ and
\[
S_2(f) = \frac{1}{3}\cdot\frac{1}{2}\left[1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right] = \frac{25}{36} \doteq 0.69444 \qquad (6.7)
\]
and the error is
\[
I - S_2 = \ln(2) - S_2 \doteq -0.00130
\]
while the error for the trapezoidal rule (the number of function evaluations is the same for both $S_2$ and $T_2$) was
\[
I - T_2 \doteq -0.0152.
\]
The error in $S_2$ is smaller than that in (6.3) for $T_2$ by a factor of about 12, a significant increase in accuracy.
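The comparison between $S_2$ and $T_2$ can be reproduced with a short composite-Simpson sketch of ours (the function name `simpson` is an assumption, not notation from the notes):

```python
# Composite Simpson's rule; reproduces S_2 = 25/36 for ∫_0^1 dx/(1+x)
# and the error comparison with the trapezoidal value T_2 = 17/24.
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule S_n(f); n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        s += (4 if j % 2 else 2) * f(a + j * h)   # weights 4, 2, 4, ...
    return h * s / 3.0

f = lambda x: 1.0 / (1.0 + x)
S2 = simpson(f, 0.0, 1.0, 2)       # 25/36
errS = math.log(2) - S2            # about -0.00130
errT = math.log(2) - 17/24         # T_2 error, about -0.0152
```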
Simpson's rule:
\[
S_n(f) = \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big) \qquad (6.8)
\]
Example
Evaluate the integrals
\[
I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427
\]
\[
I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803
\]
\[
I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844
\]
[Table: $S_n(f)$ for $I^{(1)}$, $I^{(2)}$, $I^{(3)}$.]
Theorem
Let $f \in C^2[a,b]$, $n \in \mathbb{N}$. The error in integrating
\[
I(f) = \int_a^b f(x)\,dx
\]
is given by
\[
E_n^T \equiv I(f) - T_n(f) = \frac{-h^2(b-a)}{12}\, f''(c_n) \qquad (6.9)
\]
where $c_n$ is some unknown point in $[a, b]$, and $h = \frac{b-a}{n}$.
Theorem
Suppose that $f \in C^2[a,b]$, and $h = \max_j (x_{j+1} - x_j)$. Then
\[
\left| \int_a^b f(x)\,dx - \sum_j \frac{f(x_{j+1}) + f(x_j)}{2}(x_{j+1} - x_j) \right|
\le \frac{b-a}{12}\, h^2 \max_{a \le x \le b} |f''(x)|.
\]
Local error:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - \frac{f(x_j) + f(x_{j+1})}{2}(x_{j+1} - x_j)
= \int_{x_j}^{x_{j+1}} (x - x_j)(x - x_{j+1})\,\frac{f''(\xi)}{2!}\,dx
\]
Proof. Taking absolute values,
\[
\left| \int_{x_j}^{x_{j+1}} (x - x_j)(x - x_{j+1})\,\frac{f''(\xi)}{2!}\,dx \right|
\le \frac{1}{2} \int_{x_j}^{x_{j+1}} |f''(\xi)|\,|x - x_j|\,|x - x_{j+1}|\,dx
\le \frac{1}{2} \max_{a \le x \le b} |f''(x)| \int_{x_j}^{x_{j+1}} (x - x_j)(x_{j+1} - x)\,dx
= \frac{1}{2} \max_{a \le x \le b} |f''(x)|\, \frac{(x_{j+1} - x_j)^3}{6}.
\]
Hence
\[
|\text{local error}| \le \frac{1}{12} \max_{a \le x \le b} |f''(x)| \cdot h_j^3, \qquad h_j = x_{j+1} - x_j.
\]
Proof (continued). Finally,
\[
|\text{global error}| = \left| \int_a^b f(x)\,dx - \sum_j \frac{f(x_j) + f(x_{j+1})}{2}(x_{j+1} - x_j) \right|
= \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - \frac{f(x_j) + f(x_{j+1})}{2}(x_{j+1} - x_j) \right) \right|
\le \sum_j |\text{local error}|
\]
\[
\le \sum_{j=0}^{n-1} \frac{1}{12} \max_{a \le x \le b} |f''(x)|\,\underbrace{(x_{j+1} - x_j)^3}_{\le\, h^2 (x_{j+1} - x_j)}
\le \frac{h^2}{12} \max_{a \le x \le b} |f''(x)| \underbrace{\sum_{j=0}^{n-1} (x_{j+1} - x_j)}_{b-a}.
\]
Example
Recall the example
\[
I(f) = \int_0^1 \frac{dx}{1+x} = \ln 2
\]
Here $f(x) = \frac{1}{1+x}$, $[a,b] = [0,1]$, and $f''(x) = \frac{2}{(1+x)^3}$. Then by (6.9)
\[
E_n^T(f) = -\frac{h^2}{12}\, f''(c_n), \qquad 0 \le c_n \le 1, \quad h = \frac{1}{n}.
\]
This cannot be computed exactly since $c_n$ is unknown. But
\[
\max_{0 \le x \le 1} |f''(x)| = \max_{0 \le x \le 1} \frac{2}{(1+x)^3} = 2
\]
and therefore
\[
|E_n^T(f)| \le \frac{h^2}{12}\,(2) = \frac{h^2}{6}
\]
For $n = 1$ and $n = 2$ we have
\[
\underbrace{|E_1^T(f)|}_{-0.0569} \le \frac{1}{6} \doteq 0.167, \qquad
\underbrace{|E_2^T(f)|}_{-0.0152} \le \frac{(1/2)^2}{6} \doteq 0.0417,
\]
where the values under the braces are the actual errors.
Recall the derivation of the trapezoidal rule $T_n(f)$ and use the local error (6.10):
\[
E_n^T(f) = \int_a^b f(x)\,dx - T_n(f) = \int_{x_0}^{x_n} f(x)\,dx - T_n(f)
\]
\[
= \left[\int_{x_0}^{x_1} f(x)\,dx - h\,\frac{f(x_0) + f(x_1)}{2}\right]
+ \left[\int_{x_1}^{x_2} f(x)\,dx - h\,\frac{f(x_1) + f(x_2)}{2}\right]
+ \cdots + \left[\int_{x_{n-1}}^{x_n} f(x)\,dx - h\,\frac{f(x_{n-1}) + f(x_n)}{2}\right]
\]
\[
= -\frac{h^3}{12} f''(\gamma_1) - \frac{h^3}{12} f''(\gamma_2) - \cdots - \frac{h^3}{12} f''(\gamma_n)
\]
with $\gamma_1 \in [x_0, x_1]$, $\gamma_2 \in [x_1, x_2]$, \ldots, $\gamma_n \in [x_{n-1}, x_n]$, and
\[
E_n^T(f) = -\frac{h^2}{12} \underbrace{\big[ h f''(\gamma_1) + \cdots + h f''(\gamma_n) \big]}_{=(b-a)\,f''(c_n)}, \qquad c_n \in [a, b].
\]
The bracketed sum is also a Riemann sum for $\int_a^b f''(x)\,dx = f'(b) - f'(a)$, which yields the asymptotic estimate
\[
E_n^T(f) \approx \frac{-h^2}{12}\big(f'(b) - f'(a)\big) =: \widetilde{E}_n^T(f). \qquad (6.12)
\]
Example
Again consider $I = \int_0^1 \frac{dx}{1+x}$.
Then $f'(x) = -\frac{1}{(1+x)^2}$, and the asymptotic estimate (6.12) yields
\[
\widetilde{E}_n^T = \frac{-h^2}{12}\left[\frac{-1}{(1+1)^2} - \frac{-1}{(1+0)^2}\right] = -\frac{h^2}{16}, \qquad h = \frac{1}{n}.
\]
The estimate $\widetilde{E}_n^T(f) = \frac{-h^2}{12}(f'(b) - f'(a))$ has several practical advantages over the earlier formula (6.9), $E_n^T(f) = \frac{-h^2(b-a)}{12} f''(c_n)$:
1. It confirms that when $n$ is doubled (or $h$ is halved), the error decreases by a factor of about 4, provided that $f'(b) - f'(a) \ne 0$. This agrees with the results for $I^{(1)}$ and $I^{(2)}$.
2. (6.12) implies that the convergence of $T_n(f)$ will be more rapid when $f'(b) - f'(a) = 0$. This is a partial explanation of the very rapid convergence observed with $I^{(3)}$.
3. (6.12) leads to a more accurate numerical integration formula by taking $\widetilde{E}_n^T(f)$ into account:
\[
I(f) - T_n(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big)
\]
\[
I(f) \approx T_n(f) - \frac{h^2}{12}\big(f'(b) - f'(a)\big) =: CT_n(f), \qquad (6.13)
\]
the corrected trapezoidal rule.
Example
Recall the integral $I^{(1)}$, $I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243$.

n    I - Tn(f)   Ẽn(f)       CTn(f)           I - CTn(f)   Ratio
2    1.545E-2    1.533E-2    0.746698561877   1.26E-4
4    3.840E-3    3.832E-3    0.746816175313   7.96E-6      15.8
8    9.585E-4    9.580E-4    0.746823634224   4.99E-7      16.0
16   2.395E-4    2.395E-4    0.746824101633   3.12E-8      16.0
32   5.988E-5    5.988E-5    0.746824130863   1.95E-9      16.0
64   1.497E-5    1.497E-5    0.746824132690   2.22E-10     16.0
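The corrected trapezoidal rule is easy to implement once $f'$ is known; a sketch of ours for $I = \int_0^1 e^{-x^2}dx$, where $f'(x) = -2x\,e^{-x^2}$ (helper names are ours):

```python
# Corrected trapezoidal rule CT_n(f) = T_n(f) - (h^2/12)(f'(b) - f'(a)),
# applied to I = ∫_0^1 e^{-x^2} dx.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + j*h) for j in range(1, n)))

def corrected_trapezoid(f, fp, a, b, n):
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h*h/12.0 * (fp(b) - fp(a))

f  = lambda x: math.exp(-x*x)
fp = lambda x: -2.0*x*math.exp(-x*x)     # f'(x)
I  = 0.746824132812427
ct8 = corrected_trapezoid(f, fp, 0.0, 1.0, 8)   # cf. table row n = 8
```

The correction costs only two extra derivative evaluations yet improves $T_8$ by roughly three orders of magnitude here.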
Theorem
Assume $f \in C^4[a,b]$, $n \in \mathbb{N}$. The error in using Simpson's rule is
\[
E_n^S(f) = I(f) - S_n(f) = -\frac{h^4(b-a)}{180}\, f^{(4)}(c_n) \qquad (6.14)
\]
with $c_n \in [a,b]$ an unknown point, and $h = \frac{b-a}{n}$. Moreover, this error can be estimated with the asymptotic error formula
\[
\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.15)
\]
Note that (6.14) says that Simpson's rule is exact for all $f(x)$ that are polynomials of degree $\le 3$, whereas the quadratic interpolation on which Simpson's rule is based is exact only for $f(x)$ a polynomial of degree $\le 2$. The degree of precision being 3 leads to the power $h^4$ in the error, rather than the power $h^3$, which would have been produced on the basis of the error in quadratic interpolation. It is the higher power $h^4$, together with the simple form of the method, that historically caused Simpson's rule to become the most popular numerical integration rule.
Example
Recall (6.7), where $S_2(f)$ was applied to $I = \int_0^1 \frac{dx}{1+x}$:
\[
S_2(f) = \frac{1}{3}\cdot\frac{1}{2}\left[1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right] = \frac{25}{36} \doteq 0.69444
\]
\[
f(x) = \frac{1}{1+x}, \qquad f'''(x) = \frac{-6}{(1+x)^4}, \qquad f^{(4)}(x) = \frac{24}{(1+x)^5}
\]
The exact error is given by
\[
E_n^S(f) = -\frac{h^4}{180}\, f^{(4)}(c_n), \qquad h = \frac{1}{n}
\]
for some $0 \le c_n \le 1$. We can bound it by
\[
|E_n^S(f)| \le \frac{h^4}{180}\,(24) = \frac{2h^4}{15}
\]
The asymptotic error is given by
\[
\widetilde{E}_n^S(f) = -\frac{h^4}{180}\left[\frac{-6}{(1+1)^4} - \frac{-6}{(1+0)^4}\right] = -\frac{h^4}{32}
\]
For $n = 2$, $\widetilde{E}_2^S \doteq -0.00195$; the actual error is $-0.00130$.
> 5. Numerical Integration > 5.2.2 Error formulae for Simpson’s rule
Thus, the error $E_n^S(f)$ should decrease by the same factor of about 16 when $h$ is halved, provided that $f'''(a) \ne f'''(b)$. This is the error behavior observed with the integrals $I^{(1)}$ and $I^{(2)}$.
When $f'''(a) = f'''(b)$, the error will decrease more rapidly, which is a partial explanation of the rapid convergence for $I^{(3)}$.
The asymptotic relation
\[
E_n(f) \approx \widetilde{E}_n(f), \qquad (6.16)
\]
such as for $E_n^T(f)$ and $E_n^S(f)$, says that the size of the error in (6.16) will vary with the integrand $f$, which is illustrated with the two cases $I^{(1)}$ and $I^{(2)}$.
From (6.14) and (6.15) we infer that Simpson's rule will not perform as well if $f(x)$ is not four times continuously differentiable on $[a, b]$.
Example
Use Simpson's rule to approximate
\[
I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.
\]
n    Error      Ratio
2    2.860E-2
4    1.014E-2   2.82
8    3.587E-3   2.83
16   1.268E-3   2.83
32   4.485E-4   2.83

Table: Simpson's rule for $\sqrt{x}$
The "Ratio" column shows that the convergence is much slower than for smooth integrands.
As was done for the trapezoidal rule, a corrected Simpson's rule can be defined:
\[
CS_n(f) = S_n(f) - \frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.17)
\]
This will usually be a more accurate approximation than $S_n(f)$.
Suppose the error of a rule $I_n$ behaves like
\[
I - I_n \approx \frac{c}{n^p} \qquad (6.18)
\]
Replacing $n$ by $2n$,
\[
I - I_{2n} \approx \frac{c}{2^p n^p} \qquad (6.19)
\]
and comparing to (6.18),
\[
2^p (I - I_{2n}) \approx \frac{c}{n^p} \approx I - I_n
\]
and solving for $I$ gives Richardson's extrapolation formula
\[
I \approx R_{2n} = \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) \qquad (6.20)
\]
To estimate the error in $I_{2n}$, compare it with the more accurate value $R_{2n}$:
\[
I - I_{2n} \approx R_{2n} - I_{2n} = \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) - I_{2n}
\]
\[
I - I_{2n} \approx \frac{1}{2^p - 1}\big(I_{2n} - I_n\big) \qquad (6.21)
\]
This is Richardson's error estimate.
Example
Using the trapezoidal rule to approximate
\[
I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243
\]
we have
\[
T_2 \doteq 0.7313702518, \qquad T_4 \doteq 0.7429840978
\]
Using (6.20), $I \approx \frac{1}{2^p - 1}(2^p I_{2n} - I_n)$, with $p = 2$ and $n = 2$, we obtain
\[
I \approx R_4 = \frac{1}{3}(4 I_4 - I_2) = \frac{1}{3}(4 T_4 - T_2) \doteq 0.7468553797
\]
The error in $R_4$ is $-0.0000312$; and from a previous table, $R_4$ is more accurate than $T_{32}$. To estimate the error in $T_4$, use (6.21) to get
\[
I - T_4 \approx \frac{1}{3}(T_4 - T_2) \doteq 0.00387
\]
The actual error in $T_4$ is $0.00384$; thus (6.21) is a very accurate error estimate.
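The numbers in this example can be reproduced with a small Richardson-extrapolation sketch (our own helper names; $p = 2$ for the trapezoidal rule):

```python
# Richardson extrapolation (6.20) and error estimate (6.21) for the
# trapezoidal rule (p = 2), applied to ∫_0^1 e^{-x^2} dx.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5*(f(a) + f(b)) + sum(f(a + j*h) for j in range(1, n)))

def richardson(I_n, I_2n, p=2):
    """Return (R_2n, error estimate for I_2n)."""
    R = (2**p * I_2n - I_n) / (2**p - 1)      # (6.20)
    est = (I_2n - I_n) / (2**p - 1)           # (6.21)
    return R, est

f = lambda x: math.exp(-x*x)
T2 = trapezoid(f, 0, 1, 2)        # 0.7313702518
T4 = trapezoid(f, 0, 1, 4)        # 0.7429840978
R4, est = richardson(T2, T4)      # 0.7468553797, 0.00387
```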
> 5. Numerical Integration > 5.2.4 Periodic Integrands
Definition
A function $f(x)$ is periodic with period $\tau$ if
\[
f(x) = f(x + \tau), \qquad \forall x \in \mathbb{R} \qquad (6.22)
\]
and this relation should not hold with any smaller value of $\tau$.
For example,
\[
f(x) = e^{\cos(\pi x)}
\]
is periodic with period $\tau = 2$.
Consider integrating
\[
I = \int_a^b f(x)\,dx
\]
with the trapezoidal or Simpson's rule, and assume that $b - a$ is an integer multiple of the period $\tau$.
Assume $f(x) \in C^\infty[a,b]$ (has derivatives of any order).
Then the periodicity of $f(x)$ implies that all derivatives of $f(x)$ satisfy $f^{(k)}(a) = f^{(k)}(b)$, so the asymptotic error terms (such as those in (6.12) and (6.15)) vanish.
Example
The ellipse with boundary
\[
\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1
\]
has area $\pi a b$. For the case in which the area is $\pi$ (and thus $ab = 1$), we study the variation of the perimeter of the ellipse as $a$ and $b$ vary.
We consider only the case with $1 \le b < \infty$. Since the perimeters for the two ellipses
\[
\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1 \qquad \text{and} \qquad \left(\frac{x}{b}\right)^2 + \left(\frac{y}{a}\right)^2 = 1
\]
are equal, we can always consider the case in which the $y$-axis of the ellipse is larger than or equal to its $x$-axis; and this also shows
\[
P\!\left(\frac{1}{b}\right) = P(b), \qquad b > 0 \qquad (6.26)
\]
[Figure: the graph of the integrand $f(\theta)$ for $b = 2, 5, 8$.]
[Figure: the graph of $z = P(b)$.]
Review
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \underbrace{\int_{x_j}^{x_{j+1}} p_n(x)\,dx}_{\equiv\, I_j}
\]
where $p_n(x)$ interpolates $f$ at points $x_j^{(0)}, x_j^{(1)}, \ldots, x_j^{(n)}$ on $[x_j, x_{j+1}]$.
Local error:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = \int_{x_j}^{x_{j+1}} \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\psi(x)\,dx
\]
(the integrand is the error in interpolation), where $\psi(x) = (x - x_j^{(0)})(x - x_j^{(1)}) \cdots (x - x_j^{(n)})$.
[Figure: $\psi(x)$ on $x_j \le x \le x_{j+1}$.]
Conclusion: exact on $\mathcal{P}_n$, and
1. $|\text{local error}| \le C \max |f^{(n+1)}(x)|\, h^{n+2}$
2. $|\text{global error}| \le C \max |f^{(n+1)}(x)|\, h^{n+1} (b-a)$
Observation:
If $\xi$ is a point in $(x_j, x_{j+1})$, then $f^{(n+1)}(\xi) = f^{(n+1)}(x_{j+\frac12}) + O(h)$, so the local error splits as
\[
\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} f^{(n+1)}(\xi)\,\psi(x)\,dx
= \underbrace{\frac{1}{(n+1)!}\, f^{(n+1)}(x_{j+\frac12}) \int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{Dominant Term } O(h^{n+2})}
+ \underbrace{\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} f^{(n+2)}(\eta(x))\,(\xi - x_{j+\frac12})\,\psi(x)\,dx}_{\text{Higher Order Terms } O(h^{n+3})}
\]
The second term is bounded by $\frac{C}{(n+1)!} \max |f^{(n+2)}|\, h^{n+3}$ (take the maximum out and integrate).
If the midpoint is included as a node,
\[
\psi(x) = (x - x_j)(x - x_{j+\frac12})(x - x_{j+1}) \quad \Longrightarrow \quad \int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0
\]
[Figure: $\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1})$ on $[x_j, x_{j+1}]$; the positive and negative areas cancel.]
[Figure: $\psi(x)$ for the nodes $x_j, x_{j+1/3}, x_{j+2/3}, x_{j+1}$.]
\[
\int \psi(x)\,dx = 0 \quad \Longrightarrow \quad \text{local error} = O(h^7)
\]
Simpson's Rule
Is exact on $\mathcal{P}_2$ (and actually on $\mathcal{P}_3$). Let $x_{j+1/2}$ denote the midpoint of $[x_j, x_{j+1}]$.
Seek:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1/2} f(x_{j+1/2}) + w_{j+1} f(x_{j+1})
\]
Exact on $1, x, x^2$:
\[
1: \quad \int_{x_j}^{x_{j+1}} 1\,dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1/2} \cdot 1 + w_{j+1} \cdot 1,
\]
\[
x: \quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2}{2} - \frac{x_j^2}{2} = w_j x_j + w_{j+1/2}\, x_{j+1/2} + w_{j+1}\, x_{j+1},
\]
\[
x^2: \quad \int_{x_j}^{x_{j+1}} x^2\,dx = \frac{x_{j+1}^3}{3} - \frac{x_j^3}{3} = w_j x_j^2 + w_{j+1/2}\, x_{j+1/2}^2 + w_{j+1}\, x_{j+1}^2.
\]
This $3 \times 3$ linear system has the solution
\[
w_j = \frac{1}{6}(x_{j+1} - x_j); \qquad
w_{j+1/2} = \frac{4}{6}(x_{j+1} - x_j); \qquad
w_{j+1} = \frac{1}{6}(x_{j+1} - x_j).
\]
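The $3 \times 3$ moment system can be solved numerically to recover the Simpson weights; here is a sketch of ours using a naive Gaussian-elimination helper (names and the choice $[x_j, x_{j+1}] = [0, 1]$ are our assumptions):

```python
# Solve the 3x3 moment system for the Simpson weights on [0, 1]:
# expected solution w = [1/6, 4/6, 1/6].

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

xj, xm, x1 = 0.0, 0.5, 1.0                 # nodes xj, xj+1/2, xj+1
A = [[1, 1, 1],
     [xj, xm, x1],
     [xj*xj, xm*xm, x1*x1]]
b = [1.0, 0.5, 1.0/3.0]                    # ∫1, ∫x, ∫x² over [0,1]
w = solve3(A, b)
```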
> 5. Numerical Integration > Review and more
Theorem
Let $h = \max (x_{j+1} - x_j)$, and let $I(f)$ denote the Simpson's rule approximation. Then
\[
\left| \int_a^b f(x)\,dx - I(f) \right| \le \frac{b-a}{2880}\, h^4 \max_{a \le x \le b} \left| f^{(4)}(x) \right|.
\]
Example
Let $f(x) = e^{-x^2}$ for $x \in [0, 1]$.
n ρn (f ) n ρn (f )
1 5.30E-2 6 7.82E-6
2 1.79E-2 7 4.62E-7
3 6.63E-4 8 9.64E-8
4 4.63E-4 9 8.05E-9
5 1.62E-5 10 9.16E-10
Table: Minimax errors $\rho_n(f)$ for $e^{-x^2}$, $0 \le x \le 1$
The minimax errors ρn (f ) converge to zero rapidly, although not at a
uniform rate.
Case $n = 1$
The integration formula has the form
\[
\int_{-1}^1 f(x)\,dx \approx w_1 f(x_1) \qquad (6.29)
\]
Making it exact for $f = 1$ and $f = x$ gives $w_1 = 2$ and $x_1 = 0$:
\[
\int_{-1}^1 f(x)\,dx \approx 2 f(0) \qquad (6.30)
\]
This is the midpoint formula, and it is exact for all linear polynomials. To see that (6.30) is not exact for quadratics, let $f(x) = x^2$. Then the error in (6.30) is
\[
\int_{-1}^1 x^2\,dx - 2\,(0)^2 = \frac{2}{3} \ne 0,
\]
hence (6.30) has degree of precision 1.
5. Numerical Integration Math 1070
> 5. Numerical Integration > 5.3 Gaussian Numerical Integration
Case $n = 2$
Force exactness for $f(x) = 1, x, x^2, x^3$, obtaining 4 equations:
\[
2 = w_1 + w_2
\]
\[
0 = w_1 x_1 + w_2 x_2
\]
\[
\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2
\]
\[
0 = w_1 x_1^3 + w_2 x_2^3
\]
The solution is $w_1 = w_2 = 1$, $-x_1 = x_2 = \frac{\sqrt{3}}{3}$.
Example
Approximate
\[
I = \int_{-1}^1 e^x\,dx = e - e^{-1} \doteq 2.3504024
\]
Using
\[
\int_{-1}^1 f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \equiv I_2(f)
\]
we get
\[
I_2 = e^{-\frac{\sqrt{3}}{3}} + e^{\frac{\sqrt{3}}{3}} \doteq 2.3426961
\]
\[
I - I_2 \doteq 0.00771
\]
The error is quite small, considering we are using only 2 node points.
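This two-point Gauss rule is a one-liner to verify (a sketch of ours, not code from the notes):

```python
# Two-point Gauss-Legendre rule on [-1, 1] applied to e^x.
import math

q = math.sqrt(3.0) / 3.0               # nodes ±√3/3, both weights equal 1
def gauss2(g):
    return g(-q) + g(q)

I  = math.e - 1.0 / math.e             # true value ≈ 2.3504024
I2 = gauss2(math.exp)                  # ≈ 2.3426961
err = I - I2                           # ≈ 0.00771
```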
Case $n > 2$
We seek the formula (6.28)
\[
I_n(f) = \sum_{j=1}^n w_j f(x_j)
\]
which has $2n$ unspecified parameters $x_1, \ldots, x_n, w_1, \ldots, w_n$, by forcing the integration formula to be exact for the $2n$ monomials
\[
f(x) = 1, x, x^2, \ldots, x^{2n-1}
\]
In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n - 1$.
This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
\[
2 = w_1 + w_2 + \ldots + w_n
\]
\[
0 = w_1 x_1 + w_2 x_2 + \ldots + w_n x_n
\]
\[
\frac{2}{3} = w_1 x_1^2 + w_2 x_2^2 + \ldots + w_n x_n^2
\]
\[
\vdots \qquad (6.34)
\]
\[
\frac{2}{2n-1} = w_1 x_1^{2n-2} + w_2 x_2^{2n-2} + \ldots + w_n x_n^{2n-2}
\]
\[
0 = w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \ldots + w_n x_n^{2n-1}
\]
Solving this system is a formidable problem. The nodes $\{x_i\}$ and weights $\{w_i\}$ have been calculated and collected in tables for the most commonly used values of $n$.
n xi wi
2 ± 0.5773502692 1.0
3 ± 0.7745966692 0.5555555556
0.0 0.8888888889
4 ± 0.8611363116 0.3478548451
± 0.3399810436 0.6521451549
5 ± 0.9061798459 0.2369268851
± 0.5384693101 0.4786286705
0.0 0.5688888889
6 ± 0.9324695142 0.1713244924
± 0.6612093865 0.3607651730
± 0.2386191861 0.4679139346
7 ± 0.9491079123 0.1294849662
± 0.7415311856 0.2797053915
± 0.4058451514 0.3818300505
0.0 0.4179591837
8 ± 0.9602898565 0.1012285363
± 0.7966664774 0.2223810345
± 0.5255324099 0.3137066459
± 0.1834346425 0.3626837834
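The tabulated abscissas can be cross-checked as roots of the Legendre polynomials (a fact stated later in these notes). A sketch of ours using the standard three-term recurrence, with a few nodes copied from the table:

```python
# Verify that tabulated Gauss nodes are roots of the Legendre polynomial
# P_n, computed via (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x).

def legendre(n, x):
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

# Largest positive node for each n, copied from the table above.
nodes = {2: 0.5773502692, 3: 0.7745966692, 4: 0.8611363116, 5: 0.9061798459}
residuals = {n: abs(legendre(n, x)) for n, x in nodes.items()}
```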
The change of variable $x = \frac{b + a + t(b - a)}{2}$ gives
\[
\int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^1 \tilde f(t)\,dt, \qquad
\tilde f(t) = f\!\left(\frac{b + a + t(b - a)}{2}\right)
\]
Now apply $I_n$ to this new integral.
Example
Apply Gaussian numerical integration to the three integrals
\[
I^{(1)} = \int_0^1 e^{-x^2}\,dx, \qquad
I^{(2)} = \int_0^4 \frac{dx}{1+x^2}, \qquad
I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)},
\]
which were used as examples for the trapezoidal and Simpson's rules. All are reformulated as integrals over $[-1, 1]$. [Table: errors $I - I_n$ for $I^{(1)}$, $I^{(2)}$, $I^{(3)}$.] The errors decrease rapidly with $n$.
This is even true with many integrals in which the integrand does not
have a continuous derivative.
Example
Use Gaussian integration on
\[
I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}
\]
The results are

n    I - In     Ratio
2    -7.22E-3
4    -1.16E-3   6.2
8    -1.69E-4   6.9
16   -2.30E-5   7.4
32   -3.00E-6   7.6
64   -3.84E-7   7.8

where $n$ is the number of node points. The ratio column is defined as
\[
\frac{I - I_{n/2}}{I - I_n}
\]
and it shows that the error behaves like
\[
I - I_n \approx \frac{c}{n^3} \qquad (6.38)
\]
for some $c$. The error using Simpson's rule has an empirical rate of convergence proportional to only $\frac{1}{n^{1.5}}$, a much slower rate than (6.38).
A result that relates the minimax error to the Gaussian numerical integration error:
Theorem
Let $f \in C[a,b]$, $n \ge 1$. Then, if we apply Gaussian numerical integration to $I = \int_a^b f(x)\,dx$, the error in $I_n$ satisfies
\[
|I(f) - I_n(f)| \le 2(b-a)\,\rho_{2n-1}(f) \qquad (6.39)
\]
where $\rho_{2n-1}(f)$ is the minimax error of degree $2n-1$ for $f(x)$ on $[a, b]$.
Example
Using the table
n ρn (f ) n ρn (f )
1 5.30E-2 6 7.82E-6
2 1.79E-2 7 4.62E-7
3 6.63E-4 8 9.64E-8
4 4.63E-4 9 8.05E-9
5 1.62E-5 10 9.16E-10
apply (6.39) to
\[
I = \int_0^1 e^{-x^2}\,dx
\]
In practice, the error can also be estimated by comparing successive approximations: $I - I_n \approx I_{2n} - I_n$.
Weighted Gaussian quadrature: consider integrals of the form $I = \int_0^1 \frac{f(x)}{\sqrt{x}}\,dx$.
Case $n = 1$
Requiring $\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1)$ to be exact for $f = 1$ and $f = x$ gives $w_1 = 2$ and $x_1 = \frac{1}{3}$:
\[
\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\!\left(\frac{1}{3}\right) \qquad (6.42)
\]
Case $n = 2$
The integration formula has the form
\[
\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) \qquad (6.43)
\]
We force equality for $f(x) = 1, x, x^2, x^3$. This leads to the equations
\[
w_1 + w_2 = \int_0^1 \frac{1}{\sqrt{x}}\,dx = 2
\]
\[
w_1 x_1 + w_2 x_2 = \int_0^1 \frac{x}{\sqrt{x}}\,dx = \frac{2}{3}
\]
\[
w_1 x_1^2 + w_2 x_2^2 = \int_0^1 \frac{x^2}{\sqrt{x}}\,dx = \frac{2}{5}
\]
\[
w_1 x_1^3 + w_2 x_2^3 = \int_0^1 \frac{x^3}{\sqrt{x}}\,dx = \frac{2}{7}
\]
This has the solution
\[
x_1 = \frac{3}{7} - \frac{2}{35}\sqrt{30} \doteq 0.11559, \qquad
x_2 = \frac{3}{7} + \frac{2}{35}\sqrt{30} \doteq 0.74156
\]
\[
w_1 = 1 + \frac{1}{18}\sqrt{30} \doteq 1.30429, \qquad
w_2 = 1 - \frac{1}{18}\sqrt{30} \doteq 0.69571
\]
The resulting formula (6.43) has degree of precision 3.
> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature
Case $n > 2$
We seek formula (6.41), which has $2n$ unspecified parameters, $x_1, \ldots, x_n, w_1, \ldots, w_n$, by forcing the integration formula to be exact for the $2n$ monomials
\[
f(x) = 1, x, x^2, \ldots, x^{2n-1}.
\]
In turn, this forces $I_n(f) = I(f)$ for all polynomials $f$ of degree $\le 2n - 1$.
This leads to the following system of $2n$ nonlinear equations in $2n$ unknowns:
\[
w_1 + w_2 + \ldots + w_n = 2
\]
\[
w_1 x_1 + w_2 x_2 + \ldots + w_n x_n = \frac{2}{3}
\]
\[
w_1 x_1^2 + w_2 x_2^2 + \ldots + w_n x_n^2 = \frac{2}{5} \qquad (6.44)
\]
\[
\vdots
\]
\[
w_1 x_1^{2n-1} + w_2 x_2^{2n-1} + \ldots + w_n x_n^{2n-1} = \frac{2}{4n-1}
\]
Example
We evaluate
\[
I = \int_0^1 \frac{\cos(\pi x)}{\sqrt{x}}\,dx \doteq 0.74796566683146
\]
using (6.42),
\[
\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx 2 f\!\left(\frac{1}{3}\right) = 2\cos\frac{\pi}{3} = 1.0
\]
and (6.43),
\[
\int_0^1 \frac{f(x)}{\sqrt{x}}\,dx \approx w_1 f(x_1) + w_2 f(x_2) \doteq 0.740519
\]
$I_2$ is a reasonable estimate of $I$, with $I - I_2 \doteq 0.00745$.
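The two-point weighted rule (6.43) can be checked directly with the nodes and weights derived above (a sketch of ours; variable names are assumptions):

```python
# Two-point weighted Gauss rule for ∫_0^1 f(x)/√x dx, applied to f = cos(πx).
import math

s30 = math.sqrt(30.0)
x1, x2 = 3/7 - 2*s30/35, 3/7 + 2*s30/35     # ≈ 0.11559, 0.74156
w1, w2 = 1 + s30/18, 1 - s30/18             # ≈ 1.30429, 0.69571

f = lambda x: math.cos(math.pi * x)
I2 = w1*f(x1) + w2*f(x2)                    # ≈ 0.740519
I  = 0.74796566683146
```

The first two moment equations, $w_1 + w_2 = 2$ and $w_1 x_1 + w_2 x_2 = 2/3$, hold exactly and make a cheap sanity check.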
These hypotheses are the same as were assumed for the generalized least
squares approximation theory following Section 4.7 of Chapter 4. This is not
accidental since both Gaussian quadrature and least squares approximation
theory are dependent on the subject of orthogonal polynomials. The node
points {xj } solving the system (6.44) are the zeros of the degree n orthogonal
polynomial on $[a, b]$ with respect to the weight function $w(x) = \frac{1}{\sqrt{x}}$.
For the generalization (6.45), the nodes {xi } are the zeros of the degree n
orthogonal polynomial on [a, b] with respect to the weight function w(x).
> 5. Numerical Integration > Supplement
Gauss's idea:
The optimal abscissas of the $\kappa$-point Gaussian quadrature formula are precisely the roots of the orthogonal polynomial for the same interval and weighting function.
\[
\int_a^b f(x)\,dx = \underbrace{\sum_j \int_{x_j}^{x_{j+1}} f(x)\,dx}_{\text{composite formula}}
= \sum_j \frac{x_{j+1} - x_j}{2} \int_{-1}^{1} f\!\left(\frac{x_{j+1} - x_j}{2}\,t + \frac{x_{j+1} + x_j}{2}\right) dt
\]
Each subinterval contribution has the form
\[
\int_{-1}^1 g(t)\,dt \approx \underbrace{\sum_{\ell=1}^{\kappa} w_\ell\, g(q_\ell)}_{\kappa\text{-point Gauss rule for max accuracy}}
\]
\[
\int_{-1}^{+1} g(t)\,dt \approx w_1 g(q_1) + w_2 g(q_2) + w_3 g(q_3)
\]
\[
\int_{-1}^1 1\,dt = 2 = w_1 + w_2 + w_3
\]
\[
\int_{-1}^1 t\,dt = 0 = w_1 q_1 + w_2 q_2 + w_3 q_3
\]
\[
\int_{-1}^1 t^2\,dt = \frac{2}{3} = w_1 q_1^2 + w_2 q_2^2 + w_3 q_3^2
\]
\[
\int_{-1}^1 t^3\,dt = 0 = w_1 q_1^3 + w_2 q_2^3 + w_3 q_3^3
\]
\[
\int_{-1}^1 t^4\,dt = \frac{2}{5} = w_1 q_1^4 + w_2 q_2^4 + w_3 q_3^4
\]
\[
\int_{-1}^1 t^5\,dt = 0 = w_1 q_1^5 + w_2 q_2^5 + w_3 q_3^5
\]
hence
\[
q_1 = -\sqrt{\frac{3}{5}}, \qquad q_2 = 0, \qquad q_3 = \sqrt{\frac{3}{5}}
\]
\[
w_1 = \frac{5}{9}, \qquad w_3 = \frac{5}{9}, \qquad w_2 = \frac{8}{9}
\]
A. H. Stroud and D. Secrest, "Gaussian Quadrature Formulas", Prentice-Hall, Englewood Cliffs, NJ, 1966.
Adaptive Quadrature
Problem
Given $\int_a^b f(x)\,dx$ and a preassigned tolerance $\varepsilon$, compute
\[
I(f) \approx \int_a^b f(x)\,dx
\]
with assured accuracy
\[
\left| \int_a^b f(x)\,dx - I(f) \right| < \varepsilon
\]
Strategy: LOCALIZE!
Localization Theorem
Let $I(f) = \sum_j I_j(f)$, where $I_j(f) \approx \int_{x_j}^{x_{j+1}} f(x)\,dx$.
If
\[
\left| \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right| < \frac{\varepsilon\,(x_{j+1} - x_j)}{b - a} \quad (= \text{local tolerance}),
\]
then
\[
\left| \int_a^b f(x)\,dx - I(f) \right| < \varepsilon \quad (= \text{tolerance})
\]
Proof:
\[
\left| \int_a^b f(x)\,dx - I(f) \right|
= \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right) \right|
\le \sum_j \left| \int_{x_j}^{x_{j+1}} f(x)\,dx - I_j(f) \right|
\]
\[
< \sum_j \frac{\varepsilon\,(x_{j+1} - x_j)}{b - a}
= \frac{\varepsilon}{b - a} \sum_j (x_{j+1} - x_j)
= \frac{\varepsilon}{b - a}(b - a) = \varepsilon.
\]
Need:
Step 1: compute $I_j$ (one trapezoid on $[x_j, x_{j+1}]$) and $\bar I_j$ (two trapezoids on the halved subinterval):
\[
I_j = \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)
\]
Error estimate:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = -\frac{h_j^3}{12}\, f''(\xi_j) \equiv e_j \qquad \text{(1st use of trapezoid rule)}
\]
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - \bar I_j = -\frac{(h_j/2)^3}{12}\, f''(\eta_1) - \frac{(h_j/2)^3}{12}\, f''(\eta_2)
= \frac{1}{4} \underbrace{\left( -\frac{h_j^3}{12}\, f''(\xi_j) \right)}_{e_j} + O(h_j^4) \qquad \text{(2nd use of TR)}
\]
Subtracting the two error relations gives $\bar I_j - I_j = \frac{3}{4} e_j + O(h^4)$, so the error in the refined value $\bar I_j$ is
\[
\frac{e_j}{4} = \frac{\bar I_j - I_j}{3} + \underbrace{\text{Higher Order Terms}}_{O(h^4)}
\]
Algorithm
Input: $a$, $b$, $f(x)$; upper error tolerance $\varepsilon_{\max}$; initial mesh width $h$.
Initialize: Integral $= 0.0$; $x_L = a$; $\varepsilon_{\min} = \frac{1}{2^{k+3}}\,\varepsilon_{\max}$.
(*) Set $x_R = x_L + h$.
If $x_R > b$: set $x_R \leftarrow b$, do the integral one more time, and stop.
Compute on $[x_L, x_R]$: $I$, $\bar I$, and
\[
EST = \frac{\bar I - I}{2^{k+1} - 1} \qquad \text{(if the rule is exact on } \mathcal{P}_k\text{)}
\]
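The localization strategy above can be sketched as a small adaptive integrator. This is our illustration with the trapezoidal rule ($k = 1$, so $EST = (\bar I - I)/3$), using a stack instead of the mesh-marching loop of the algorithm; the structure, not the exact bookkeeping, is the point.

```python
# Adaptive trapezoidal quadrature: accept a subinterval when the error
# estimate EST = (Ī - I)/3 meets the local tolerance ε·(xR-xL)/(b-a).
import math

def adaptive_trapezoid(f, a, b, eps):
    total, stack = 0.0, [(a, b)]
    while stack:
        xL, xR = stack.pop()
        h = xR - xL
        m = 0.5 * (xL + xR)
        I1 = 0.5 * h * (f(xL) + f(xR))                # one trapezoid
        I2 = 0.25 * h * (f(xL) + 2.0*f(m) + f(xR))    # two trapezoids
        est = (I2 - I1) / 3.0                         # error estimate for I2
        if abs(est) <= eps * h / (b - a):             # local tolerance
            total += I2 + est                         # Richardson-corrected value
        else:
            stack.extend([(xL, m), (m, xR)])          # refine this subinterval
    return total

val = adaptive_trapezoid(lambda x: 1.0/(1.0 + x), 0.0, 1.0, 1e-8)
gap = abs(val - math.log(2))
```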
Trapezium rule
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)
= \int_{x_j}^{x_{j+1}} \big( f(x) - p_1(x) \big)\,dx
= \int_{x_j}^{x_{j+1}} \underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)}\, \frac{f''(\xi)}{2}\,dx
\]
\[
= \frac{f''(\bar x)}{2} \underbrace{\int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{integrate exactly}} + O(h) \int_{x_j}^{x_{j+1}} \psi(x)\,dx
\]
[Figure: $\psi(x) = (x - q_1)(x - q_2) \cdots (x - q_7)$ on $x_j \le x \le x_{j+1}$.]
$p_k(x)$ interpolates $f(x)$ $\Longrightarrow$ $f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,\psi(x)$
[Figure: $\psi(x)$ and $|\psi(x)|$ on $x_j \le x \le x_{j+1}$.]
Remark
If $m = 1$, pick $q_1, \ldots, q_{k+1}$ so that $\int_{-1}^1 \psi(x)\,dx = 0$; then the error converges as $O(h^{k+3})$.
$m = 2$
Step 1
Let $r_1$ be some fixed point in $[-1, 1]$, distinct from the interpolation points:
\[
-1 < q_1 < q_2 < \ldots < r_1 < \ldots < q_k < q_{k+1}
\]
Define
\[
p_{k+1}(x) = p_k(x) + \frac{g(r_1) - p_k(r_1)}{\psi(r_1)}\,\psi(x) \qquad (6.47)
\]
where $p_k$ interpolates $g(x)$ at $q_1, \ldots, q_{k+1}$.
Claim: $p_{k+1}$ interpolates $g(x)$ at the $k+2$ points $q_1, \ldots, q_{k+1}, r_1$.
Suppose now that (6.46) is exact on $\mathcal{P}_{k+1}$; then from (6.47)
\[
\underbrace{\int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_{k+1}(x)\,dx}_{\text{error in } (k+2)\text{-point quadrature rule, } E_{k+2}}
= \underbrace{\int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_k(x)\,dx}_{\text{error in } (k+1)\text{-point quadrature rule} \,\equiv\, E_{k+1}}
- \frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \int_{-1}^1 \psi(x)\,dx
\]
Step 1
So
\[
E_{k+2} = E_{k+1} - \frac{g(r_1) - p_k(r_1)}{\psi(r_1)} \int_{-1}^1 \psi(x)\,dx
\]
Conclusion 1
If $\int_{-1}^1 \psi(x)\,dx = 0$, then the error in the $(k+1)$-point rule is exactly the same as if we had used $k+2$ points.
Step 2
Let $r_1, r_2$ be fixed points in $[-1, 1]$. Now interpolate at the $k+3$ points $q_1, \ldots, q_{k+1}, r_1, r_2$:
\[
p_{k+2}(x) = p_k(x)
+ \frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)}\,\psi(x)(x - r_1)
+ \frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)}\,\psi(x)(x - r_2) \qquad (6.48)
\]
Consider the error in a rule with $k + 1 + 2$ points:
\[
\text{error in } (k+3)\text{-point rule} = \int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_{k+2}(x)\,dx
\]
\[
= \int_{-1}^1 g(x)\,dx - \int_{-1}^1 p_k(x)\,dx
- \frac{g(r_2) - p_k(r_2)}{(r_2 - r_1)\,\psi(r_2)} \int_{-1}^1 \psi(x)(x - r_1)\,dx
- \frac{g(r_1) - p_k(r_1)}{(r_1 - r_2)\,\psi(r_1)} \int_{-1}^1 \psi(x)(x - r_2)\,dx.
\]
Step 2
So
\[
E_{k+3} = E_{k+1} + \text{Const} \int_{-1}^1 \psi(x)(x - r_1)\,dx + \text{Const} \int_{-1}^1 \psi(x)(x - r_2)\,dx
\]
Conclusion 2
If $\int_{-1}^1 \psi(x)\,dx = 0$ and $\int_{-1}^1 x\,\psi(x)\,dx = 0$, then the $(k+1)$-point rule has the same error as a $(k+3)$-point rule.
Continuing in the same way,
\[
E_{k+1+m} = E_{k+1} + C_0 \int_{-1}^1 \psi(x)\,dx + C_1 \int_{-1}^1 \psi(x)\,x\,dx + \ldots + C_{m-1} \int_{-1}^1 \psi(x)\,x^{m-1}\,dx \qquad (6.49)\text{--}(6.50)
\]
So
Conclusion 3
If $\int_{-1}^1 \psi(x)\,x^j\,dx = 0$ for $j = 0, \ldots, m-1$, then the error is as good as using $m$ extra points.
Overview
Interpolating Quadrature
Interpolate $f(x)$ at $q_0, q_1, q_2, \ldots, q_k$ $\Rightarrow$ $p_k(x)$
\[
f(x) - p_k(x) = \frac{f^{(k+1)}(\xi)}{(k+1)!}\,(x - q_0)(x - q_1) \cdots (x - q_k)
\]
\[
\int_{-1}^1 f(x)\,dx - \int_{-1}^1 p_k(x)\,dx = \frac{1}{(k+1)!} \int_{-1}^1 f^{(k+1)}(\xi)\,\psi(x)\,dx
\]
Gauss rules
pick $q_\ell$ to maximize exactness
what is the accuracy?
what are the $q_\ell$'s?
Overview
Definition
$p(x)$ is the $(\mu+1)$st orthogonal polynomial on $[-1, 1]$ (weight $w(x) \equiv 1$) if $p(x) \in \mathcal{P}_{\mu+1}$ and $\int_{-1}^1 p(x)\,x^\ell\,dx = 0$, $\ell = 0, \ldots, \mu$, i.e.,
\[
\int_{-1}^1 p(x)\,q(x)\,dx = 0 \quad \forall q \in \mathcal{P}_\mu.
\]
Overview
Pick $q_0, q_1, \ldots, q_k$ so that
\[
\int_{-1}^1 \psi(x) \cdot 1\,dx = 0, \quad
\int_{-1}^1 \psi(x) \cdot x\,dx = 0, \quad \ldots, \quad
\int_{-1}^1 \psi(x) \cdot x^{m-1}\,dx = 0
\]
\[
\Longleftrightarrow \quad \int_{-1}^1 \underbrace{\psi(x)}_{\deg\, k+1}\, \underbrace{q(x)}_{\deg\, m-1}\,dx = 0, \quad \forall q \in \mathcal{P}_{m-1}
\]
So, the Gauss quadrature points are the roots of the orthogonal polynomial.
Overview
Adaptivity
Split the subinterval in half and write $\bar I = I_1 + I_2$ for the refined value. The trapezium rule's local error is $O(h^3)$, so each half carries about one eighth of the coarse error $\bar e$:
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - I = \bar e \approx 4e \qquad (\bar e \approx 8 e_1,\ \bar e \approx 8 e_2)
\]
\[
\int_{x_j}^{x_{j+1}} f(x)\,dx - \bar I = e = e_1 + e_2
\]
\[
\bar I - I = 3e \ (+\text{ Higher Order Terms}) \quad \Rightarrow \quad e = \frac{\bar I - I}{3} \ (+\text{ Higher Order Terms})
\]
Overview
Final Observation
\[
\left.
\begin{aligned}
\mathrm{True} - I &\approx 4e \\
\mathrm{True} - \bar I &\approx e
\end{aligned}
\right\} \quad \text{2 equations, 2 unknowns: } e, \mathrm{True}
\]
So we can solve for $e$ and $\mathrm{True}$. Solving for $\mathrm{True}$:
\[
\mathrm{True} \approx \bar I + e \approx \bar I + \frac{\bar I - I}{3} \approx \frac{4}{3}\bar I - \frac{1}{3} I \ (+\text{ Higher Order Terms})
\]
\[
f'(x) \approx \frac{f(x+h) - f(x)}{h} \equiv D_h f(x) \qquad (6.51)
\]
for small values of $h$. $D_h f(x)$ is called a numerical derivative of $f(x)$ with stepsize $h$.
Example
Use $D_h f$ to approximate the derivative of $f(x) = \cos(x)$ at $x = \frac{\pi}{6}$.

h         Dh(f)     Error    Ratio
0.1       -0.54243  0.04243
0.05      -0.52144  0.02144  1.98
0.025     -0.51077  0.01077  1.99
0.0125    -0.50540  0.00540  1.99
0.00625   -0.50270  0.00270  2.00
0.003125  -0.50135  0.00135  2.00
By Taylor's theorem,
\[
f(x+h) = f(x) + h f'(x) + \frac{h^2}{2} f''(c)
\]
for some $c$ between $x$ and $x+h$. Substituting on the right side of (6.51), we obtain
\[
D_h f(x) = \frac{1}{h}\left[ f(x) + h f'(x) + \frac{h^2}{2} f''(c) - f(x) \right]
= f'(x) + \frac{h}{2} f''(c)
\]
\[
f'(x) - D_h f(x) = -\frac{h}{2} f''(c) \qquad (6.52)
\]
The error is proportional to $h$, agreeing with the results in the table above.
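The table above is easy to regenerate; a sketch of ours for $f(x) = \cos(x)$ at $x = \pi/6$, where $f'(x) = -\sin(\pi/6) = -1/2$:

```python
# Forward-difference derivative D_h f and its O(h) error behavior.
import math

def D_forward(f, x, h):
    return (f(x + h) - f(x)) / h

x = math.pi / 6
errs = [(-0.5) - D_forward(math.cos, x, h) for h in (0.1, 0.05, 0.025)]
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]   # ≈ 2 each
```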
> 5. Numerical Integration > 5.4 Numerical Differentiation
$n = 2$, $t = x_1$, $x_0 = x_1 - h$, $x_2 = x_1 + h$.
Then
\[
P_2(x) = \frac{(x - x_1)(x - x_2)}{2h^2}\, f(x_0) + \frac{(x - x_0)(x - x_2)}{-h^2}\, f(x_1) + \frac{(x - x_0)(x - x_1)}{2h^2}\, f(x_2)
\]
\[
P_2'(x) = \frac{2x - x_1 - x_2}{2h^2}\, f(x_0) + \frac{2x - x_0 - x_2}{-h^2}\, f(x_1) + \frac{2x - x_0 - x_1}{2h^2}\, f(x_2)
\]
\[
P_2'(x_1) = \frac{x_1 - x_2}{2h^2}\, f(x_0) + \frac{2x_1 - x_0 - x_2}{-h^2}\, f(x_1) + \frac{x_1 - x_0}{2h^2}\, f(x_2)
= \frac{f(x_2) - f(x_0)}{2h} \qquad (6.57)
\]
> 5. Numerical Integration > 5.4.1 Differentiation Using Interpolation
\[
f'(x) \approx \frac{f(x+h) - f(x)}{h} \equiv D_h f(x).
\]
Theorem
Assume $f \in C^{n+2}[a,b]$.
Let $x_0, x_1, \ldots, x_n$ be $n+1$ distinct interpolation nodes in $[a,b]$, and let $t$ be an arbitrary given point in $[a,b]$.
Then
\[
f'(t) - P_n'(t) = \frac{\Psi_n(t)}{(n+2)!}\, f^{(n+2)}(c_1) + \frac{\Psi_n'(t)}{(n+1)!}\, f^{(n+1)}(c_2) \qquad (6.59)
\]
with
\[
\Psi_n(t) = (t - x_0)(t - x_1) \cdots (t - x_n)
\]
The numbers $c_1$ and $c_2$ are unknown points located between the maximum and minimum of $x_0, x_1, \ldots, x_n$ and $t$.
To illustrate this result, an error formula can be derived for the central difference formula (6.58).
Since $t = x_1$ in deriving (6.58), we have $\Psi_2(x_1) = 0$, so the first term on the RHS of (6.59) is zero. Also $n = 2$ and $\Psi_2'(x_1) = (x_1 - x_0)(x_1 - x_2) = -h^2$, giving
\[
f'(x_1) - \frac{f(x_1 + h) - f(x_1 - h)}{2h} = -\frac{h^2}{6}\, f'''(c_2) \qquad (6.60)
\]
with $x_1 - h \le c_2 \le x_1 + h$. This says that for small values of $h$, the central difference formula (6.58) should be more accurate than the earlier approximation (6.51), the forward difference formula, because the error term of (6.58) decreases more rapidly with $h$.
Example
The earlier example $f(x) = \cos(x)$ is repeated using the central difference formula (6.58) (recall $x_1 = \frac{\pi}{6}$).

h        Dh(f)        Error         Ratio
0.1      -0.49916708  -0.0008329
0.05     -0.49979169  -0.0002083    4.00
0.025    -0.49994792  -0.00005208   4.00
0.0125   -0.49998698  -0.00001302   4.00
0.00625  -0.49999674  -0.000003255  4.00

The results confirm the rate of convergence given in (6.60), and they illustrate that the central difference formula (6.58) will usually be superior to the earlier approximation, the forward difference formula (6.51).
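A companion sketch for the central difference, confirming the error ratio of about 4 from the table (our helper names):

```python
# Central-difference derivative with O(h^2) error, f(x) = cos(x) at π/6.
import math

def D_central(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = math.pi / 6
errs = [(-0.5) - D_central(math.cos, x, h) for h in (0.1, 0.05, 0.025)]
ratios = [errs[i] / errs[i + 1] for i in range(len(errs) - 1)]   # ≈ 4 each
```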
Including more terms would give higher powers of $h$; and for small values of $h$, these additional terms should be much smaller than the terms included in (6.62).
> 5. Numerical Integration > 5.4.2 The Method of Undetermined Coefficients
Substituting these approximations into the formula for $D_h^{(2)} f(t)$ and collecting together common powers of $h$ gives us
\[
D_h^{(2)} f(t) \approx \underbrace{(A + B + C)}_{=0}\, f(t) + \underbrace{h(A - C)}_{=0}\, f'(t) + \underbrace{\frac{h^2}{2}(A + C)}_{=1}\, f''(t)
+ \frac{h^3}{6}(A - C)\, f'''(t) + \frac{h^4}{24}(A + C)\, f^{(4)}(t) \qquad (6.63)
\]
To have
\[
D_h^{(2)} f(t) \approx f''(t)
\]
for arbitrary functions $f(x)$, it is necessary to require
\[
A + B + C = 0 \quad \text{(coefficient of } f(t)\text{)}
\]
\[
h(A - C) = 0 \quad \text{(coefficient of } f'(t)\text{)}
\]
\[
\frac{h^2}{2}(A + C) = 1 \quad \text{(coefficient of } f''(t)\text{)}
\]
This system has the solution
\[
A = C = \frac{1}{h^2}, \qquad B = -\frac{2}{h^2}.
\]
This determines
\[
D_h^{(2)} f(t) = \frac{f(t+h) - 2f(t) + f(t-h)}{h^2} \qquad (6.64)
\]
To determine an error formula for $D_h^{(2)} f(t)$, substitute $A = C = \frac{1}{h^2}$, $B = -\frac{2}{h^2}$ into (6.63) to obtain
\[
D_h^{(2)} f(t) \approx f''(t) + \frac{h^2}{12}\, f^{(4)}(t).
\]
The approximation here arises from not including the corresponding higher powers of $h$ in the Taylor polynomials (6.62). Thus,
\[
f''(t) - \frac{f(t+h) - 2f(t) + f(t-h)}{h^2} \approx \frac{-h^2}{12}\, f^{(4)}(t) \qquad (6.65)
\]
This is an accurate estimate of the error for small values of $h$. Of course, in a practical situation we would not know $f^{(4)}(t)$. But the error formula shows that the error decreases by a factor of about 4 when $h$ is halved. This can be used to justify Richardson extrapolation to obtain an even more accurate estimate of the error and of $f''(t)$.
Example
Let $f(x) = \cos(x)$, $t = \frac{1}{6}\pi$, and use (6.64) to calculate $f''(t) = -\cos(\frac{1}{6}\pi)$.
The results shown (see the ratio column) are consistent with the error formula (6.65),
\[
f''(t) - \frac{f(t+h) - 2f(t) + f(t-h)}{h^2} \approx \frac{-h^2}{12}\, f^{(4)}(t).
\]
The extra degree of freedom could have been used to obtain a more accurate approximation to $f''(t)$, by forcing the error term to be proportional to a higher power of $h$.
Many of the formulae derived by the method of undetermined coefficients can
also be derived by differentiating and evaluating a suitably chosen
interpolation polynomial. But often, it is easier to visualize the desired
formula as a combination of certain function values and to then derive the
proper combination, as was done above for (6.64).
> 5. Numerical Integration > 5.4.3 Effects of error in function evaluation
The formulae derived above are useful for differentiating functions that
are known analytically and for setting up numerical methods for solving
differential equations. Nonetheless, they are very sensitive to errors in
the function values, especially if these errors are not sufficiently small
compared with the stepsize h used in the differentiation formula.
To explore this, we analyze the effect of such errors in the formula $D_h^{(2)} f(t)$ approximating $f''(t)$.
Rewrite (6.64), $D_h^{(2)} f(t) = \frac{f(t+h) - 2f(t) + f(t-h)}{h^2}$, using noisy function values $\hat f_i$ with
\[
f(x_i) - \hat f_i = \epsilon_i, \qquad i = 0, 1, 2
\]
\[
\widehat{D}_h^{(2)} f(x_1) = \frac{\hat f_2 - 2\hat f_1 + \hat f_0}{h^2}
\]
\[
f''(x_1) - \widehat{D}_h^{(2)} f(x_1) \approx -\frac{h^2}{12}\, f^{(4)}(x_1) + \frac{\epsilon_2 - 2\epsilon_1 + \epsilon_0}{h^2} \qquad (6.66)
\]
Example
Calculate $\widehat{D}_h^{(2)} f(x_1)$ for $f(x) = \cos(x)$ at $x_1 = \frac{1}{6}\pi$. To show the effect of rounding errors, the values $\hat f_i$ are obtained by rounding $f(x_i)$ to six significant digits; the errors satisfy $|\epsilon_i| \le 5 \times 10^{-7} \equiv \delta$.
In this example, the bound (6.67), i.e.,
\[
\left| f''(x_1) - \widehat{D}_h^{(2)} f(x_1) \right| \le \frac{h^2}{12} \left| f^{(4)}(x_1) \right| + \frac{4\delta}{h^2},
\]
becomes
\[
\left| f''(x_1) - \widehat{D}_h^{(2)} f(x_1) \right| \le \frac{h^2}{12} \cos\frac{1}{6}\pi + \frac{4}{h^2}\left(5 \times 10^{-7}\right)
\doteq 0.0722\,h^2 + \frac{2 \times 10^{-6}}{h^2} \equiv E(h)
\]
For $h = 0.125$, the bound $E(h) \doteq 0.00126$, which is not too far off from the actual error given in the table.
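The truncation/noise trade-off of (6.66) can be simulated directly. This sketch of ours rounds the function values to six significant digits (as in the example; `round_sig` and `D2_noisy` are our names) and compares the observed error with the bound $E(h)$:

```python
# Second-difference D_h^(2) with function values rounded to 6 significant
# digits, for f(x) = cos(x) at x1 = π/6; compare with E(h) from (6.67).
import math

def round_sig(v, s=6):
    """Round v to s significant digits."""
    if v == 0.0:
        return 0.0
    d = s - 1 - int(math.floor(math.log10(abs(v))))
    return round(v, d)

def D2_noisy(x, h):
    f0, f1, f2 = (round_sig(math.cos(t)) for t in (x - h, x, x + h))
    return (f2 - 2.0*f1 + f0) / (h * h)

x1 = math.pi / 6
true = -math.cos(x1)                   # exact f''(x1)
h = 0.125
actual = abs(true - D2_noisy(x1, h))
E = 0.0722 * h * h + 2e-6 / (h * h)    # bound E(h), ≈ 0.00126 at h = 0.125
```

Decreasing $h$ below the minimizer of $E(h)$ makes the noise term $2 \times 10^{-6}/h^2$ dominate, so the computed second derivative gets worse, not better.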