MATH1070 4 Numerical Integration PDF

The document discusses numerical integration using the trapezoidal rule. Specifically, it: 1) Derives the trapezoidal rule formula for approximating the integral of a function over an interval by dividing it into subintervals and taking the average of the function values at the endpoints of each subinterval, weighted by the subinterval widths. 2) Explains that breaking the interval into more subintervals improves the approximation and the doubling of subintervals ensures previously computed values are reused, making it more efficient than recalculating from scratch. 3) Provides an example calculation comparing approximations using 1 and 2 subintervals to the true value of an integral.

5. Numerical Integration

Review of Interpolation

Find $p_n(x)$ with $p_n(x_j) = y_j$, $j = 0, 1, 2, \dots, n$. Solution:

$$p_n(x) = y_0 \ell_0(x) + y_1 \ell_1(x) + \dots + y_n \ell_n(x), \qquad \ell_k(x) = \prod_{j=0,\, j \neq k}^{n} \frac{x - x_j}{x_k - x_j}.$$

Theorem
Let $y_j = f(x_j)$ with $f(x)$ smooth, and let $p_n$ interpolate $f(x)$ at $x_0 < x_1 < \dots < x_n$. For any $x \in (x_0, x_n)$ there is a $\xi$ such that

$$f(x) - p_n(x) = \underbrace{(x - x_0)(x - x_1) \cdots (x - x_n)}_{\psi(x)}\, \frac{f^{(n+1)}(\xi)}{(n+1)!}$$

Example: $n = 1$, linear:

$$f(x) - p_1(x) = \frac{f''(\xi)}{2}(x - x_0)(x - x_1), \qquad p_1(x) = \frac{y_1 - y_0}{x_1 - x_0}(x - x_0) + y_0.$$
5. Numerical Integration Math 1070
5.1 The Trapezoidal Rule

Mixed rule

Find the area under the curve $y = f(x)$:

$$\int_a^b f(x)\,dx = \sum_{j=0}^{N-1} \underbrace{\int_{x_j}^{x_{j+1}} f(x)\,dx}_{\substack{\text{interpolate } f(x) \text{ on } (x_j,\, x_{j+1}) \\ \text{and integrate exactly}}} \approx \sum_{j=0}^{N-1} \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)$$
Trapezoidal Rule

$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \int_{x_j}^{x_{j+1}} p_1(x)\,dx = \int_{x_j}^{x_{j+1}} \left[ \frac{f(x_{j+1}) - f(x_j)}{x_{j+1} - x_j}(x - x_j) + f(x_j) \right] dx = \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j)$$

$$T_1(f) = (b - a)\,\frac{f(a) + f(b)}{2} \qquad (6.1)$$
Another derivation of the Trapezoidal Rule:

Seek
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1} f(x_{j+1})$$
exact on polynomials of degree 1 (i.e., on $1$ and $x$):
$$f(x) \equiv 1: \quad \int_{x_j}^{x_{j+1}} dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1} \cdot 1$$
$$f(x) = x: \quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2}{2} - \frac{x_j^2}{2} = w_j x_j + w_{j+1} x_{j+1}$$
Example
Approximate the integral
$$I = \int_0^1 \frac{dx}{1+x}.$$
The true value is $I = \ln(2) \doteq 0.693147$. Using (6.1), we obtain
$$T_1 = \frac{1}{2}\left(1 + \frac{1}{2}\right) = \frac{3}{4} = 0.75$$
and the error is
$$I - T_1(f) \doteq -0.0569 \qquad (6.2)$$
To improve on the approximation (6.1) when $f(x)$ is not nearly linear on $[a, b]$, break the interval $[a, b]$ into smaller subintervals and apply (6.1) on each subinterval. If the subintervals are small enough, then $f(x)$ will be nearly linear on each one.

Example
Evaluate the preceding example by using $T_1(f)$ on two subintervals of equal length.

For two subintervals,
$$I = \int_0^{1/2} \frac{dx}{1+x} + \int_{1/2}^1 \frac{dx}{1+x} \approx \frac{1}{2} \cdot \frac{1 + \frac{2}{3}}{2} + \frac{1}{2} \cdot \frac{\frac{2}{3} + \frac{1}{2}}{2}$$
$$T_2 = \frac{17}{24} \doteq 0.70833$$
and the error
$$I - T_2 \doteq -0.0152 \qquad (6.3)$$
is about $\frac{1}{4}$ of that for $T_1$ in (6.2).
We derive the general formula for calculations using $n$ subintervals of equal length $h = \frac{b-a}{n}$. The endpoints of each subinterval are then
$$x_j = a + jh, \quad j = 0, 1, \dots, n.$$
Breaking the integral into $n$ subintegrals,
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \dots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$
$$\approx h\,\frac{f(x_0) + f(x_1)}{2} + h\,\frac{f(x_1) + f(x_2)}{2} + \dots + h\,\frac{f(x_{n-1}) + f(x_n)}{2}.$$
The trapezoidal numerical integration rule:
$$T_n(f) = h\left[\frac{1}{2}f(x_0) + f(x_1) + f(x_2) + \dots + f(x_{n-1}) + \frac{1}{2}f(x_n)\right] \qquad (6.4)$$
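Rule (6.4) translates directly into code. A minimal sketch in Python (not part of the original slides), checked against the running example $\int_0^1 \frac{dx}{1+x}$:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_n(f) from (6.4)."""
    h = (b - a) / n
    # endpoints get weight 1/2, interior nodes weight 1
    s = 0.5 * (f(a) + f(b))
    for j in range(1, n):
        s += f(a + j * h)
    return h * s

# Reproduce the running example: I = ∫_0^1 dx/(1+x) = ln 2
f = lambda x: 1.0 / (1.0 + x)
print(trapezoid(f, 0.0, 1.0, 1))   # 0.75        (= T_1)
print(trapezoid(f, 0.0, 1.0, 2))   # 0.70833...  (= T_2 = 17/24)
```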


With a sequence of increasing values of $n$, $T_n(f)$ will usually be an increasingly accurate approximation of $I(f)$. But which sequence of $n$ should be used?

If $n$ is doubled repeatedly, then the function values used in each $T_{2n}(f)$ will include all earlier function values used in the preceding $T_n(f)$. Thus, doubling $n$ ensures that all previously computed information is used in the new calculation, making the trapezoidal rule less expensive than it would be otherwise.
$$T_2(f) = h\left[\frac{f(x_0)}{2} + f(x_1) + \frac{f(x_2)}{2}\right]$$
with $h = \frac{b-a}{2}$, $x_0 = a$, $x_1 = \frac{a+b}{2}$, $x_2 = b$. Also
$$T_4(f) = h\left[\frac{f(x_0)}{2} + f(x_1) + f(x_2) + f(x_3) + \frac{f(x_4)}{2}\right]$$
with $h = \frac{b-a}{4}$, $x_0 = a$, $x_1 = \frac{3a+b}{4}$, $x_2 = \frac{a+b}{2}$, $x_3 = \frac{a+3b}{4}$, $x_4 = b$. Only $f(x_1)$ and $f(x_3)$ need to be evaluated.
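The doubling strategy can be sketched as a recurrence, $T_{2n} = \frac{1}{2}T_n + h_{2n}\sum(\text{new midpoints})$. A minimal Python sketch (not part of the original slides):

```python
def refine_trapezoid(f, a, b, t_n, n):
    """Given T_n(f) on n subintervals, return T_{2n}(f), reusing all
    previously computed values: only the n new midpoints are evaluated."""
    h2 = (b - a) / (2 * n)                          # new step size
    new = sum(f(a + (2 * j + 1) * h2) for j in range(n))
    return 0.5 * t_n + h2 * new

f = lambda x: 1.0 / (1.0 + x)
t1 = 0.5 * (f(0.0) + f(1.0))                 # T_1 = 0.75
t2 = refine_trapezoid(f, 0.0, 1.0, t1, 1)
print(t2)                                    # 0.70833... = 17/24
```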
Example
We give calculations of $T_n(f)$ for three integrals:
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844$$

       I^(1)               I^(2)               I^(3)
n      Error      Ratio    Error      Ratio    Error      Ratio
2      1.55E-2             -1.33E-1            -5.61E-1
4      3.84E-3    4.02     -3.59E-3   37.0     -3.76E-2   14.9
8      9.59E-4    4.01     5.64E-4    -6.37    -1.93E-4   195.0
16     2.40E-4    4.00     1.44E-4    3.92     -5.19E-9   37,600.0
32     5.99E-5    4.00     3.60E-5    4.00     *
64     1.50E-5    4.00     9.01E-6    4.00     *
128    3.74E-6    4.00     2.25E-6    4.00     *

The error for $I^{(1)}, I^{(2)}$ decreases by a factor of 4 when $n$ doubles; for $I^{(3)}$, the answers for $n = 32, 64, 128$ were correct up to the limits of rounding error on the computer (16 decimal digits).
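The factor-of-4 decay can be reproduced numerically. A small Python sketch (not from the original slides) for $I^{(2)}$:

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule (6.4)
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

# Error ratios for I^(2) = ∫_0^4 dx/(1+x²) = arctan(4)
f = lambda x: 1.0 / (1.0 + x*x)
exact = math.atan(4.0)
errs = [exact - trapezoid(f, 0.0, 4.0, n) for n in (16, 32, 64)]
ratios = [errs[i] / errs[i+1] for i in range(2)]
print(ratios)   # each close to 4, as in the table
```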
5.1.1 Simpson's rule

To improve on $T_1(f)$ in (6.1), use quadratic interpolation to approximate $f(x)$ on $[a, b]$. Let $P_2(x)$ be the quadratic polynomial that interpolates $f(x)$ at $a$, $c = \frac{a+b}{2}$, and $b$:
$$I(f) \approx \int_a^b P_2(x)\,dx \qquad (6.5)$$
$$= \int_a^b \left[ f(a)\,\frac{(x-c)(x-b)}{(a-c)(a-b)} + f(c)\,\frac{(x-a)(x-b)}{(c-a)(c-b)} + f(b)\,\frac{(x-a)(x-c)}{(b-a)(b-c)} \right] dx$$
This can be evaluated directly, but it is easier with a change of variables. Let $h = \frac{b-a}{2}$ and $u = x - a$. Then
$$\int_a^b \frac{(x-c)(x-b)}{(a-c)(a-b)}\,dx = \frac{1}{2h^2} \int_a^{a+2h} (x-c)(x-b)\,dx = \frac{1}{2h^2} \int_0^{2h} (u-h)(u-2h)\,du = \frac{1}{2h^2}\left[\frac{u^3}{3} - \frac{3}{2}u^2 h + 2h^2 u\right]_0^{2h} = \frac{h}{3}$$
and
$$S_2(f) = \frac{h}{3}\left[f(a) + 4f\!\left(\frac{a+b}{2}\right) + f(b)\right] \qquad (6.6)$$
Example
$$I = \int_0^1 \frac{dx}{1+x}$$
Then $h = \frac{b-a}{2} = \frac{1}{2}$ and
$$S_2(f) = \frac{1/2}{3}\left[1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right] = \frac{25}{36} \doteq 0.69444 \qquad (6.7)$$
and the error is
$$I - S_2 = \ln(2) - S_2 \doteq -0.00130,$$
while the error for the trapezoidal rule (the number of function evaluations is the same for both $S_2$ and $T_2$) was
$$I - T_2 \doteq -0.0152.$$
The error in $S_2$ is smaller than that in (6.3) for $T_2$ by a factor of 12, a significant increase in accuracy.
Figure: An illustration of Simpson's rule (6.6), $y = f(x)$, $y = P_2(x)$
The rule $S_2(f)$ will be an accurate approximation to $I(f)$ if $f(x)$ is nearly quadratic on $[a, b]$. For the other cases, proceed in the same manner as for the trapezoidal rule.

Let $n$ be an even integer, $h = \frac{b-a}{n}$, and define the evaluation points for $f(x)$ by
$$x_j = a + jh, \quad j = 0, 1, \dots, n.$$
We follow the idea from the trapezoidal rule, but break $[a, b] = [x_0, x_n]$ into larger intervals, each containing three interpolation node points:
$$I(f) = \int_a^b f(x)\,dx = \int_{x_0}^{x_n} f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \dots + \int_{x_{n-2}}^{x_n} f(x)\,dx$$
$$\approx \frac{h}{3}\big[f(x_0) + 4f(x_1) + f(x_2)\big] + \frac{h}{3}\big[f(x_2) + 4f(x_3) + f(x_4)\big] + \dots + \frac{h}{3}\big[f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big]$$
Simpson's rule:
$$S_n(f) = \frac{h}{3}\big(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \dots + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)\big) \qquad (6.8)$$

It has been among the most popular numerical integration methods for more than two centuries.
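The weight pattern 1, 4, 2, 4, ..., 2, 4, 1 in (6.8) can be sketched as follows (Python, not part of the original slides), checked against $S_2 = 25/36$ from (6.7):

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule S_n(f) from (6.8); n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for j in range(1, n):
        # odd interior nodes get weight 4, even interior nodes weight 2
        s += (4 if j % 2 else 2) * f(a + j * h)
    return h * s / 3.0

f = lambda x: 1.0 / (1.0 + x)
print(simpson(f, 0.0, 1.0, 2))   # 0.69444... = 25/36, as in (6.7)
```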


Example
Evaluate the integrals
$$I^{(1)} = \int_0^1 e^{-x^2}\,dx \doteq 0.746824132812427$$
$$I^{(2)} = \int_0^4 \frac{dx}{1+x^2} = \tan^{-1}(4) \doteq 1.32581766366803$$
$$I^{(3)} = \int_0^{2\pi} \frac{dx}{2+\cos(x)} = \frac{2\pi}{\sqrt{3}} \doteq 3.62759872846844$$

       I^(1)               I^(2)               I^(3)
n      Error      Ratio    Error      Ratio    Error      Ratio
2      -3.56E-4            8.66E-2             -1.26
4      -3.12E-5   11.4     3.95E-2    2.2      1.37E-1    -9.2
8      -1.99E-6   15.7     1.95E-3    20.3     1.23E-2    11.2
16     -1.25E-7   15.9     4.02E-6    485.0    6.43E-5    191.0
32     -7.79E-9   16.0     2.33E-8    172.0    1.71E-9    37,600.0
64     -4.87E-10  16.0     1.46E-9    16.0     *
128    -3.04E-11  16.0     9.15E-11   16.0     *

For $I^{(1)}, I^{(2)}$, the ratio by which the error decreases approaches 16. For $I^{(3)}$, the errors converge to zero much more rapidly.
5.2 Error formulas

Theorem
Let $f \in C^2[a, b]$, $n \in \mathbb{N}$. The error in integrating
$$I(f) = \int_a^b f(x)\,dx$$
using the trapezoidal rule $T_n(f) = h\left[\frac{1}{2}f(x_0) + f(x_1) + f(x_2) + \dots + f(x_{n-1}) + \frac{1}{2}f(x_n)\right]$ is given by
$$E_n^T \equiv I(f) - T_n(f) = \frac{-h^2(b-a)}{12}\, f''(c_n) \qquad (6.9)$$
where $c_n$ is some unknown point in $[a, b]$, and $h = \frac{b-a}{n}$.
Theorem
Suppose that $f \in C^2[a, b]$ and $h = \max_j (x_{j+1} - x_j)$. Then
$$\left| \int_a^b f(x)\,dx - \sum_j \frac{f(x_{j+1}) + f(x_j)}{2}\,(x_{j+1} - x_j) \right| \le \frac{b-a}{12}\, h^2 \max_{a \le x \le b} |f''(x)|.$$

Proof: Let $I_j$ be the $j$th subinterval and $p_1$ the linear interpolant on $I_j$ at $x_j, x_{j+1}$:
$$f(x) - p_1(x) = \underbrace{(x - x_j)(x - x_{j+1})}_{\psi(x)}\, \frac{f''(\xi)}{2}.$$
Local error:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j) =$$
proof

$$= \int_{x_j}^{x_{j+1}} (x - x_j)(x - x_{j+1})\,\frac{f''(\xi)}{2!}\,dx \le \frac{1}{2} \int_{x_j}^{x_{j+1}} |f''(\xi)|\,|x - x_j|\,|x - x_{j+1}|\,dx$$
$$\le \frac{1}{2} \max_{a \le x \le b} |f''(x)| \int_{x_j}^{x_{j+1}} (x - x_j)(x_{j+1} - x)\,dx = \frac{1}{2} \max_{a \le x \le b} |f''(x)| \cdot \frac{(x_{j+1} - x_j)^3}{6}.$$
Hence
$$|\text{local error}| \le \frac{1}{12} \max_{a \le x \le b} |f''(x)| \cdot h_j^3, \qquad h_j = x_{j+1} - x_j.$$
proof
Finally,
$$|\text{global error}| = \left| \int_a^b f(x)\,dx - \sum_j \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j) \right|$$
$$= \left| \sum_j \left( \int_{x_j}^{x_{j+1}} f(x)\,dx - \frac{f(x_j) + f(x_{j+1})}{2}\,(x_{j+1} - x_j) \right) \right| \le \sum_j |\text{local error}|$$
$$\le \sum_j \frac{1}{12} \max_{a \le x \le b} |f''(x)|\,(x_{j+1} - x_j)^3 = \frac{1}{12} \max_{a \le x \le b} |f''(x)| \sum_j \underbrace{(x_{j+1} - x_j)^3}_{\le h^2 (x_{j+1} - x_j)}$$
$$\le \frac{h^2}{12} \max_{a \le x \le b} |f''(x)| \underbrace{\sum_j (x_{j+1} - x_j)}_{b - a}. \quad \square$$
Example
Recall the example
$$I(f) = \int_0^1 \frac{dx}{1+x} = \ln 2.$$
Here $f(x) = \frac{1}{1+x}$, $[a, b] = [0, 1]$, and $f''(x) = \frac{2}{(1+x)^3}$. Then by (6.9)
$$E_n^T(f) = -\frac{h^2}{12}\, f''(c_n), \quad 0 \le c_n \le 1, \quad h = \frac{1}{n}.$$
This cannot be computed exactly since $c_n$ is unknown. But
$$\max_{0 \le x \le 1} |f''(x)| = \max_{0 \le x \le 1} \frac{2}{(1+x)^3} = 2$$
and therefore
$$|E_n^T(f)| \le \frac{h^2}{12}\,(2) = \frac{h^2}{6}.$$
For $n = 1$ and $n = 2$ we have
$$|\underbrace{E_1^T(f)}_{-0.0569}| \le \frac{1}{6} \doteq 0.167, \qquad |\underbrace{E_2^T(f)}_{-0.0152}| \le \frac{(1/2)^2}{6} \doteq 0.0417.$$
A possible weakness in the trapezoidal rule can be inferred from the assumptions of the error theorem. If $f(x)$ does not have two continuous derivatives on $[a, b]$, does $T_n(f)$ converge more slowly?

YES, for some functions, especially if the first derivative is not continuous.
5.2.1 Asymptotic estimate of T_n(f)

The error formula (6.9),
$$E_n^T(f) \equiv I(f) - T_n(f) = -\frac{h^2(b-a)}{12}\, f''(c_n),$$
can only be used to bound the error, because $f''(c_n)$ is unknown. This can be improved by a more careful consideration of the error formula. A central element of the proof of (6.9) lies in the local error
$$\int_\alpha^{\alpha+h} f(x)\,dx - h\,\frac{f(\alpha) + f(\alpha+h)}{2} = -\frac{h^3}{12}\, f''(c) \qquad (6.10)$$
for some $c \in [\alpha, \alpha+h]$.
Recall the derivation of the trapezoidal rule $T_n(f)$ and use the local error (6.10):
$$E_n^T(f) = \int_a^b f(x)\,dx - T_n(f) = \int_{x_0}^{x_n} f(x)\,dx - T_n(f)$$
$$= \left[ \int_{x_0}^{x_1} f(x)\,dx - h\,\frac{f(x_0) + f(x_1)}{2} \right] + \left[ \int_{x_1}^{x_2} f(x)\,dx - h\,\frac{f(x_1) + f(x_2)}{2} \right] + \dots + \left[ \int_{x_{n-1}}^{x_n} f(x)\,dx - h\,\frac{f(x_{n-1}) + f(x_n)}{2} \right]$$
$$= -\frac{h^3}{12}\, f''(\gamma_1) - \frac{h^3}{12}\, f''(\gamma_2) - \dots - \frac{h^3}{12}\, f''(\gamma_n)$$
with $\gamma_1 \in [x_0, x_1],\ \gamma_2 \in [x_1, x_2],\ \dots,\ \gamma_n \in [x_{n-1}, x_n]$, and
$$E_n^T(f) = -\frac{h^2}{12} \underbrace{\big[ h f''(\gamma_1) + \dots + h f''(\gamma_n) \big]}_{=(b-a)\, f''(c_n)}, \qquad c_n \in [a, b].$$
To estimate the trapezoidal error, observe that $h f''(\gamma_1) + \dots + h f''(\gamma_n)$ is a Riemann sum for the integral
$$\int_a^b f''(x)\,dx = f'(b) - f'(a) \qquad (6.11)$$
The Riemann sum is based on the partition $[x_0, x_1], [x_1, x_2], \dots, [x_{n-1}, x_n]$ of $[a, b]$. As $n \to \infty$, this sum approaches the integral (6.11). With (6.11), we find an asymptotic estimate (which improves as $n$ increases):
$$E_n^T(f) \approx \frac{-h^2}{12}\big(f'(b) - f'(a)\big) =: \widetilde{E}_n^T(f). \qquad (6.12)$$
As long as $f'(x)$ is computable, $\widetilde{E}_n^T(f)$ will be very easy to compute.
Example
Again consider $I = \int_0^1 \frac{dx}{1+x}$. Then $f'(x) = -\frac{1}{(1+x)^2}$, and the asymptotic estimate (6.12) yields
$$\widetilde{E}_n^T = \frac{-h^2}{12} \left[ \frac{-1}{(1+1)^2} - \frac{-1}{(1+0)^2} \right] = \frac{-h^2}{16}, \qquad h = \frac{1}{n}$$
and for $n = 1$ and $n = 2$
$$\widetilde{E}_1^T = -\frac{1}{16} = -0.0625, \qquad \widetilde{E}_2^T \doteq -0.0156$$
$$I - T_1 \doteq -0.0569, \qquad I - T_2 \doteq -0.0152$$
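The comparison in the example above can be sketched numerically (Python, not part of the original slides):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule (6.4)
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

# Asymptotic estimate (6.12) vs. true error for I = ∫_0^1 dx/(1+x), n = 2
f  = lambda x: 1.0 / (1.0 + x)
fp = lambda x: -1.0 / (1.0 + x)**2            # f'(x)
a, b, n = 0.0, 1.0, 2
h = (b - a) / n
est = -(h*h / 12.0) * (fp(b) - fp(a))         # Ẽ_2^T(f) = -h²/16
err_true = math.log(2.0) - trapezoid(f, a, b, n)
print(est, err_true)   # ≈ -0.0156 vs ≈ -0.0152
```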


The estimate $\widetilde{E}_n^T(f) = \frac{-h^2}{12}(f'(b) - f'(a))$ has several practical advantages over the earlier formula (6.9), $E_n^T(f) = \frac{-h^2(b-a)}{12} f''(c_n)$:
1. It confirms that when $n$ is doubled (or $h$ is halved), the error decreases by a factor of about 4, provided that $f'(b) - f'(a) \neq 0$. This agrees with the results for $I^{(1)}$ and $I^{(2)}$.
2. (6.12) implies that the convergence of $T_n(f)$ will be more rapid when $f'(b) - f'(a) = 0$. This is a partial explanation of the very rapid convergence observed with $I^{(3)}$.
3. (6.12) leads to a more accurate numerical integration formula by taking $\widetilde{E}_n^T(f)$ into account:
$$I(f) - T_n(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big)$$
$$I(f) \approx T_n(f) - \frac{h^2}{12}\big(f'(b) - f'(a)\big) := CT_n(f), \qquad (6.13)$$
the corrected trapezoidal rule.
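The corrected rule (6.13) needs only one extra piece of information, the derivative at the endpoints. A minimal Python sketch (not part of the original slides), checked against $I^{(1)} = \int_0^1 e^{-x^2}\,dx$:

```python
import math

def corrected_trapezoid(f, fp, a, b, n):
    """Corrected trapezoidal rule CT_n(f) from (6.13); fp is f'."""
    h = (b - a) / n
    t = h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))
    return t - (h*h / 12.0) * (fp(b) - fp(a))

f  = lambda x: math.exp(-x*x)
fp = lambda x: -2.0 * x * math.exp(-x*x)      # f'(x)
print(corrected_trapezoid(f, fp, 0.0, 1.0, 4))   # ≈ 0.746816, error ≈ 8E-6
```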


Example
Recall the integral $I^{(1)}$:
$$I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243$$

n     I - T_n(f)   Ẽ_n(f)      CT_n(f)           I - CT_n(f)   Ratio
2     1.545E-2     1.533E-2    0.746698561877    1.26E-4
4     3.840E-3     3.832E-3    0.746816175313    7.96E-6       15.8
8     9.585E-4     9.580E-4    0.746823634224    4.99E-7       16.0
16    2.395E-4     2.395E-4    0.746824101633    3.12E-8       16.0
32    5.988E-5     5.988E-5    0.746824130863    1.95E-9       16.0
64    1.497E-5     1.497E-5    0.746824132690    2.22E-10      16.0

Table: Example of CT_n(f) and Ẽ_n(f)

Note that the estimate
$$\widetilde{E}_n^T(f) = \frac{h^2}{6}\, e^{-1}, \qquad h = \frac{1}{n},$$
is a very accurate estimator of the true error. Also, the error in $CT_n(f)$ converges to zero at a more rapid rate than does the error for $T_n(f)$: when $n$ is doubled, the error in $CT_n(f)$ decreases by a factor of about 16.
5.2.2 Error formulae for Simpson's rule

Theorem
Assume $f \in C^4[a, b]$, $n \in \mathbb{N}$ even. The error in using Simpson's rule is
$$E_n^S(f) = I(f) - S_n(f) = -\frac{h^4(b-a)}{180}\, f^{(4)}(c_n) \qquad (6.14)$$
with $c_n \in [a, b]$ an unknown point, and $h = \frac{b-a}{n}$. Moreover, this error can be estimated with the asymptotic error formula
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.15)$$

Note that (6.14) says that Simpson's rule is exact for all $f(x)$ that are polynomials of degree $\le 3$, whereas the quadratic interpolation on which Simpson's rule is based is exact only for $f(x)$ a polynomial of degree $\le 2$. The degree of precision being 3 leads to the power $h^4$ in the error, rather than the power $h^3$ that would have been produced on the basis of the error in quadratic interpolation. It is

- the higher power $h^4$, and
- the simple form of the method

that historically have caused Simpson's rule to become the most popular numerical integration rule.
Example
Recall (6.7), where $S_2(f)$ was applied to $I = \int_0^1 \frac{dx}{1+x}$:
$$S_2(f) = \frac{1/2}{3}\left[1 + 4\left(\frac{2}{3}\right) + \frac{1}{2}\right] = \frac{25}{36} \doteq 0.69444$$
$$f(x) = \frac{1}{1+x}, \qquad f'''(x) = \frac{-6}{(1+x)^4}, \qquad f^{(4)}(x) = \frac{24}{(1+x)^5}$$
The exact error is given by
$$E_n^S(f) = -\frac{h^4}{180}\, f^{(4)}(c_n), \qquad h = \frac{1}{n}$$
for some $0 \le c_n \le 1$. We can bound it by
$$|E_n^S(f)| \le \frac{h^4}{180} \cdot 24 = \frac{2h^4}{15}.$$
The asymptotic error is given by
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180} \left[ \frac{-6}{(1+1)^4} - \frac{-6}{(1+0)^4} \right] = -\frac{h^4}{32}.$$
For $n = 2$, $\widetilde{E}_2^S \doteq -0.00195$; the actual error is $-0.00130$.
The behavior of $I(f) - S_n(f)$ can be derived from (6.15):
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
i.e., when $n$ is doubled, $h$ is halved, and $h^4$ decreases by a factor of 16. Thus, the error $E_n^S(f)$ should decrease by the same factor, provided that $f'''(a) \neq f'''(b)$. This is the behavior observed with integrals $I^{(1)}$ and $I^{(2)}$.

When $f'''(a) = f'''(b)$, the error will decrease more rapidly, which is a partial explanation of the rapid convergence for $I^{(3)}$.
The theory of asymptotic error formulae,
$$E_n(f) \approx \widetilde{E}_n(f), \qquad (6.16)$$
such as those for $E_n^T(f)$ and $E_n^S(f)$, says that the accuracy of (6.16) will vary with the integrand $f$, as illustrated by the two cases $I^{(1)}$ and $I^{(2)}$.

From (6.14) and (6.15) we infer that Simpson's rule will not perform as well if $f(x)$ is not four times continuously differentiable on $[a, b]$.
Example
Use Simpson's rule to approximate
$$I = \int_0^1 \sqrt{x}\,dx = \frac{2}{3}.$$

n     Error      Ratio
2     2.860E-2
4     1.014E-2   2.82
8     3.587E-3   2.83
16    1.268E-3   2.83
32    4.485E-4   2.83

Table: Simpson's rule for $\sqrt{x}$

The column "Ratio" shows that the convergence is much slower, since $\sqrt{x}$ is not four times continuously differentiable on $[0, 1]$.
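The degraded ratio of about 2.83 (rather than 16) is easy to reproduce. A Python sketch (not part of the original slides):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule (6.8); n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if j % 2 else 2) * f(a + j*h) for j in range(1, n))
    return h * s / 3.0

# Error ratios for ∫_0^1 √x dx = 2/3: far from the factor 16 of smooth f
errs = [2.0/3.0 - simpson(math.sqrt, 0.0, 1.0, n) for n in (2, 4, 8, 16)]
ratios = [errs[i] / errs[i+1] for i in range(3)]
print(ratios)   # each ≈ 2.83
```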


As was done for the trapezoidal rule, a corrected Simpson's rule can be defined:
$$CS_n(f) = S_n(f) - \frac{h^4}{180}\big(f'''(b) - f'''(a)\big) \qquad (6.17)$$
This will usually be a more accurate approximation than $S_n(f)$.
5.2.3 Richardson extrapolation

The error estimates for the trapezoidal rule (6.12),
$$E_n^T(f) \approx -\frac{h^2}{12}\big(f'(b) - f'(a)\big),$$
and Simpson's rule (6.15),
$$\widetilde{E}_n^S(f) = -\frac{h^4}{180}\big(f'''(b) - f'''(a)\big),$$
are both of the form
$$I - I_n \approx \frac{c}{n^p} \qquad (6.18)$$
where $I_n$ denotes the numerical integral and $h = \frac{b-a}{n}$. The constants $c$ and $p$ vary with the method and the function. With most integrands $f(x)$, $p = 2$ for the trapezoidal rule and $p = 4$ for Simpson's rule.

There are other numerical methods that satisfy (6.18), with other values of $p$ and $c$. We use (6.18) to obtain a computable estimate of the error $I - I_n$, without needing to know $c$ explicitly.
Replacing $n$ by $2n$,
$$I - I_{2n} \approx \frac{c}{2^p n^p} \qquad (6.19)$$
and comparing to (6.18),
$$2^p (I - I_{2n}) \approx \frac{c}{n^p} \approx I - I_n.$$
Solving for $I$ gives Richardson's extrapolation formula:
$$(2^p - 1) I \approx 2^p I_{2n} - I_n$$
$$I \approx \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) \equiv R_{2n} \qquad (6.20)$$
$R_{2n}$ is an improved estimate of $I$, based on using $I_n$, $I_{2n}$, $p$, and the assumption (6.18). How much more accurate it is than $I_{2n}$ depends on the validity of (6.18) and (6.19).
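Formula (6.20) can be sketched in a few lines of Python (not part of the original slides), using the trapezoidal values $T_2, T_4$ for $\int_0^1 e^{-x^2}dx$ quoted in the example further below:

```python
def richardson(i_n, i_2n, p):
    """Richardson extrapolation R_2n from (6.20)."""
    return (2**p * i_2n - i_n) / (2**p - 1)

# Trapezoidal values for ∫_0^1 e^{-x²} dx, so p = 2
T2, T4 = 0.7313702518, 0.7429840978
print(richardson(T2, T4, 2))   # ≈ 0.7468553797 = R_4
```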


To estimate the error in $I_{2n}$, compare it with the more accurate value $R_{2n}$:
$$I - I_{2n} \approx R_{2n} - I_{2n} = \frac{1}{2^p - 1}\big(2^p I_{2n} - I_n\big) - I_{2n}$$
$$I - I_{2n} \approx \frac{1}{2^p - 1}\big(I_{2n} - I_n\big) \qquad (6.21)$$
This is Richardson's error estimate.
Example
Using the trapezoidal rule to approximate
$$I = \int_0^1 e^{-x^2}\,dx \doteq 0.74682413281243,$$
we have
$$T_2 \doteq 0.7313702518, \qquad T_4 \doteq 0.7429840978.$$
Using (6.20), $I \approx \frac{1}{2^p - 1}(2^p I_{2n} - I_n)$ with $p = 2$ and $n = 2$, we obtain
$$I \approx R_4 = \frac{1}{3}(4 I_4 - I_2) = \frac{1}{3}(4 T_4 - T_2) \doteq 0.7468553797.$$
The error in $R_4$ is $-0.0000312$; from the earlier table, $R_4$ is more accurate than $T_{32}$. To estimate the error in $T_4$, use (6.21) to get
$$I - T_4 \approx \frac{1}{3}(T_4 - T_2) \doteq 0.00387.$$
The actual error in $T_4$ is $0.00384$; thus (6.21) is a very accurate error estimate.
5.2.4 Periodic Integrands

Definition
A function $f(x)$ is periodic with period $\tau$ if
$$f(x) = f(x + \tau), \quad \forall x \in \mathbb{R} \qquad (6.22)$$
and this relation does not hold for any smaller value of $\tau$.

For example, $f(x) = e^{\cos(\pi x)}$ is periodic with period $\tau = 2$.

If $f(x)$ is periodic and differentiable, then its derivatives are also periodic with period $\tau$.
Consider integrating
$$I = \int_a^b f(x)\,dx$$
with the trapezoidal or Simpson's rule, and assume that $b - a$ is an integer multiple of the period $\tau$. Assume $f(x) \in C^\infty[a, b]$ (it has derivatives of any order). Then for all derivatives of $f(x)$, the periodicity of $f(x)$ implies that
$$f^{(k)}(a) = f^{(k)}(b), \quad k \ge 0 \qquad (6.23)$$
If we now look at the asymptotic error formulae for the trapezoidal and Simpson's rules, they become zero because of (6.23). Thus, the error estimates $\widetilde{E}_n^T(f)$ and $\widetilde{E}_n^S(f)$ should converge to zero more rapidly when $f(x)$ is a periodic function, provided $b - a$ is an integer multiple of the period of $f$.
The asymptotic error formulae $\widetilde{E}_n^T(f)$ and $\widetilde{E}_n^S(f)$ can be extended to higher-order terms in $h$, using the Euler-MacLaurin expansion; the higher-order terms are multiples of $f^{(k)}(b) - f^{(k)}(a)$ for all odd integers $k \ge 1$. Using this, we can prove that the errors $E_n^T(f)$ and $E_n^S(f)$ converge to zero even more rapidly than was implied by the earlier comments when $f(x)$ is periodic.

Note that the trapezoidal rule is the preferred integration rule when we are dealing with smooth periodic integrands. The earlier results for the integral $I^{(3)}$ illustrate this.
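This rapid convergence is easy to observe for $I^{(3)}$ itself. A Python sketch (not part of the original slides):

```python
import math

def trapezoid(f, a, b, n):
    # composite trapezoidal rule (6.4)
    h = (b - a) / n
    return h * (0.5*f(a) + sum(f(a + j*h) for j in range(1, n)) + 0.5*f(b))

# Smooth periodic integrand I^(3) = ∫_0^{2π} dx/(2 + cos x) = 2π/√3:
# the trapezoidal error collapses far faster than the usual O(h²)
f = lambda x: 1.0 / (2.0 + math.cos(x))
exact = 2.0 * math.pi / math.sqrt(3.0)
for n in (4, 8, 16):
    print(n, exact - trapezoid(f, 0.0, 2.0*math.pi, n))
```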


Example
The ellipse with boundary
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1$$
has area $\pi a b$. For the case in which the area is $\pi$ (and thus $ab = 1$), we study the variation of the perimeter of the ellipse as $a$ and $b$ vary.

The ellipse has the parametric representation
$$(x, y) = (a\cos\theta,\, b\sin\theta), \quad 0 \le \theta \le 2\pi \qquad (6.24)$$
By using the standard formula for the perimeter, and using the symmetry of the ellipse about the x-axis, the perimeter is given by
$$P = 2\int_0^\pi \sqrt{\left(\frac{dx}{d\theta}\right)^2 + \left(\frac{dy}{d\theta}\right)^2}\,d\theta = 2\int_0^\pi \sqrt{a^2\sin^2\theta + b^2\cos^2\theta}\,d\theta$$
Since $ab = 1$, we write this as
$$P(b) = 2\int_0^\pi \sqrt{\frac{1}{b^2}\sin^2\theta + b^2\cos^2\theta}\,d\theta = \frac{2}{b}\int_0^\pi \sqrt{(b^4 - 1)\cos^2\theta + 1}\,d\theta \qquad (6.25)$$
We consider only the case with $1 \le b < \infty$. Since the perimeters of the two ellipses
$$\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 = 1 \quad \text{and} \quad \left(\frac{x}{b}\right)^2 + \left(\frac{y}{a}\right)^2 = 1$$
are equal, we can always consider the case in which the y-axis of the ellipse is larger than or equal to its x-axis; this also shows
$$P\!\left(\frac{1}{b}\right) = P(b), \quad b > 0 \qquad (6.26)$$
The integrand of $P(b)$,
$$f(\theta) = \frac{2}{b}\big[(b^4 - 1)\cos^2\theta + 1\big]^{1/2},$$
is periodic with period $\pi$. As discussed above, the trapezoidal rule is the natural choice for numerical integration of (6.25). Nonetheless, there is a variation in the behaviour of $f(\theta)$ as $b$ varies, and this will affect the accuracy of the numerical integration.

Figure: The graph of the integrand $f(\theta)$ for $b = 2, 5, 8$
n     b = 2       b = 5        b = 8
8     8.575517    19.918814    31.690628
16    8.578405    20.044483    31.953632
32    8.578422    20.063957    32.008934
64    8.578422    20.065672    32.018564
128   8.578422    20.065716    32.019660
256   8.578422    20.065717    32.019709

Table: Trapezoidal Rule Approximation of (6.25)

Note that as $b$ increases, the trapezoidal rule converges more slowly. This is due to the integrand $f(\theta)$ changing more rapidly as $b$ increases. For large $b$, $f(\theta)$ changes very rapidly in the vicinity of $\theta = \frac{\pi}{2}$; this causes the trapezoidal rule to be less accurate than when $b$ is smaller, near 1. To obtain a certain accuracy in the perimeter $P(b)$, we must increase $n$ as $b$ increases.
Figure: The graph of the perimeter function $P(b)$ for the ellipse

The graph of $P(b)$ reveals that $P(b) \approx 4b$ for large $b$. Returning to (6.25), we have for large $b$
$$P(b) \approx \frac{2}{b}\int_0^\pi \big(b^4\cos^2\theta\big)^{1/2}\,d\theta = \frac{2}{b}\,b^2 \int_0^\pi |\cos\theta|\,d\theta = 4b$$
We need to estimate the error in the above approximation to know when we can use it to replace $P(b)$; but it provides a way to avoid the integration of (6.25) for the most badly behaved cases.
Review and more

Review

$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx \underbrace{\int_{x_j}^{x_{j+1}} p_n(x)\,dx}_{\equiv I_j}$$
where $p_n(x)$ interpolates at the points $x_j^{(0)}, x_j^{(1)}, \dots, x_j^{(n)}$ on $[x_j, x_{j+1}]$.

Local error:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx - I_j = \int_{x_j}^{x_{j+1}} \frac{f^{(n+1)}(\xi)}{(n+1)!}\,\psi(x)\,dx$$
(the integrand is the error in interpolation), where $\psi(x) = (x - x_j^{(0)})(x - x_j^{(1)}) \cdots (x - x_j^{(n)})$.
Figure: The graph of $\psi(x)$ on $x_j \le x \le x_{j+1}$

Conclusion: exact on $\mathcal{P}_n$
1. $|\text{local error}| \le C \max |f^{(n+1)}(x)|\, h^{n+2}$
2. $|\text{global error}| \le C \max |f^{(n+1)}(x)|\, h^{n+1}\,(b - a)$
Observation:
If $\xi$ is a point in $(x_j, x_{j+1})$, then
$$g(\xi) = g(x_{j+1/2}) + O(h)$$
(if $g'$ is continuous), i.e.,
$$g(\xi) = g(x_{j+1/2}) + \underbrace{\underbrace{(\xi - x_{j+1/2})}_{\le h}\, g'(\eta)}_{O(h)}$$
Local error:
$$\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} \underbrace{f^{(n+1)}(\xi)}_{= f^{(n+1)}(x_{j+1/2}) + O(h)} \psi(x)\,dx$$
$$= \underbrace{\frac{1}{(n+1)!}\, f^{(n+1)}(x_{j+1/2}) \int_{x_j}^{x_{j+1}} \psi(x)\,dx}_{\text{dominant term, } O(h^{n+2})} + \underbrace{\frac{1}{(n+1)!} \int_{x_j}^{x_{j+1}} f^{(n+2)}(\eta(x))\,(\xi - x_{j+1/2})\,\psi(x)\,dx}_{\text{higher-order terms, } O(h^{n+3})}$$
Taking the maximum of $f^{(n+2)}$ out and integrating, the higher-order term is bounded by $\frac{C}{(n+1)!} \max |f^{(n+2)}|\, h^{n+3}$.
The dominant term: case $N = 1$, Trapezoidal Rule

Figure: The graph of $\psi(x) = (x - x_j)(x - x_{j+1})$ on $[x_j, x_{j+1}]$
The dominant term: case $N = 2$, Simpson's Rule

$$\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1}) \;\Rightarrow\; \int_{x_j}^{x_{j+1}} \psi(x)\,dx = 0$$

Figure: The graph of $\psi(x) = (x - x_j)(x - x_{j+1/2})(x - x_{j+1})$ on $[x_j, x_{j+1}]$
The dominant term: case $N = 3$, Simpson's 3/8 Rule

Local error $= O(h^5)$

Figure: The graph of $\psi(x) = (x - x_j)(x - x_{j+1/3})(x - x_{j+2/3})(x - x_{j+1})$ on $[x_j, x_{j+1}]$
The dominant term: case $N = 4$

$$\int \psi(x)\,dx = 0 \;\Rightarrow\; \text{local error} = O(h^7)$$

Figure: The graph of $\psi(x)$ on $[x_j, x_{j+1}]$
Simpson's Rule
Is exact on $\mathcal{P}_2$ (and, in fact, on $\mathcal{P}_3$), with $x_{j+1/2} = (x_j + x_{j+1})/2$.

Seek:
$$\int_{x_j}^{x_{j+1}} f(x)\,dx \approx w_j f(x_j) + w_{j+1/2}\, f(x_{j+1/2}) + w_{j+1} f(x_{j+1})$$
Exact on $1, x, x^2$:
$$1: \quad \int_{x_j}^{x_{j+1}} 1\,dx = x_{j+1} - x_j = w_j \cdot 1 + w_{j+1/2} \cdot 1 + w_{j+1} \cdot 1,$$
$$x: \quad \int_{x_j}^{x_{j+1}} x\,dx = \frac{x_{j+1}^2}{2} - \frac{x_j^2}{2} = w_j x_j + w_{j+1/2}\, x_{j+1/2} + w_{j+1} x_{j+1},$$
$$x^2: \quad \int_{x_j}^{x_{j+1}} x^2\,dx = \frac{x_{j+1}^3}{3} - \frac{x_j^3}{3} = w_j x_j^2 + w_{j+1/2}\, x_{j+1/2}^2 + w_{j+1} x_{j+1}^2.$$
This $3 \times 3$ linear system has the solution
$$w_j = \frac{1}{6}(x_{j+1} - x_j), \qquad w_{j+1/2} = \frac{4}{6}(x_{j+1} - x_j), \qquad w_{j+1} = \frac{1}{6}(x_{j+1} - x_j).$$
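These weights can be verified against the moment equations with exact rational arithmetic. A Python sketch (not part of the original slides), using the unit interval for concreteness:

```python
from fractions import Fraction as F

# Verify the Simpson weights (1/6, 4/6, 1/6)·(x_{j+1} - x_j) solve the
# 3x3 moment system above, here on the subinterval [0, 1]
xj, xj1 = F(0), F(1)
xm = (xj + xj1) / 2                 # x_{j+1/2}
hlen = xj1 - xj
w = [hlen / 6, 4 * hlen / 6, hlen / 6]
nodes = [xj, xm, xj1]

for k in range(3):                  # moments of 1, x, x^2
    exact = (xj1**(k + 1) - xj**(k + 1)) / (k + 1)
    quad = sum(wi * xi**k for wi, xi in zip(w, nodes))
    assert quad == exact
print("weights reproduce the moments of 1, x, x^2 exactly")
```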
Theorem
Let $h = \max(x_{j+1} - x_j)$ and let $I(f)$ be the Simpson's rule approximation. Then
$$\left| \int_a^b f(x)\,dx - I(f) \right| \le \frac{b-a}{2880}\, h^4 \max_{a \le x \le b} \big| f^{(4)}(x) \big|.$$

Trapezoid rule versus Simpson's rule

$$\frac{\text{Cost in TR}}{\text{Cost in SR}} = \frac{2 \text{ function evaluations/interval} \times \text{no. intervals}}{3 \text{ function evaluations/interval} \times \text{no. intervals}} = \frac{2}{3};$$
(reducible to $\frac{1}{2}$ if storing the previous values)
$$\frac{\text{Accuracy in TR}}{\text{Accuracy in SR}} = \frac{h^2\,\frac{b-a}{12}}{h^4\,\frac{b-a}{2880}} = \frac{240}{h^2}.$$
E.g., for $h = \frac{1}{100}$: $\frac{240}{h^2} = 2.4 \times 10^6$, i.e., SR is more accurate than TR by a factor of $2.4 \times 10^6$.
What if there is round-off error?

Suppose we use the method with
$$f(x_j)_{\text{computed}} = f(x_j)_{\text{true}} \pm \varepsilon_j, \qquad \varepsilon_j = O(\text{machine precision}).$$
Then
$$\int_a^b f(x)\,dx \approx \sum_j \frac{f(x_{j+1})_{\text{computed}} + f(x_j)_{\text{computed}}}{2}\,(x_{j+1} - x_j)$$
$$= \sum_j \frac{f(x_{j+1}) \pm \varepsilon_{j+1} + f(x_j) \pm \varepsilon_j}{2}\,(x_{j+1} - x_j)$$
$$= \underbrace{\sum_j \frac{f(x_{j+1}) + f(x_j)}{2}\,(x_{j+1} - x_j)}_{\text{value in exact arithmetic}} + \underbrace{\sum_j \frac{\pm\varepsilon_{j+1} \pm \varepsilon_j}{2}\,(x_{j+1} - x_j)}_{\text{contribution of round-off error} \,\le\, \varepsilon (b-a)}$$
5.3 Gaussian Numerical Integration

The numerical methods studied in the first two sections were based on integrating
1. linear (trapezoidal rule) and
2. quadratic (Simpson's rule)
interpolating polynomials, and the resulting formulae were applied on subdivisions of ever smaller subintervals.

We now consider a numerical method based on exact integration of polynomials of increasing degree; no subdivision of the integration interval $[a, b]$ is used. Recall Section 4.4 of Chapter 4 on approximation of functions.
Let $f(x) \in C[a, b]$.

Then $\rho_n(f)$ denotes the smallest error bound that can be attained in approximating $f(x)$ with a polynomial $p(x)$ of degree $\le n$ on the given interval $a \le x \le b$. The polynomial $m_n(x)$ that yields this approximation is called the minimax approximation of degree $n$ for $f(x)$:
$$\max_{a \le x \le b} |f(x) - m_n(x)| = \rho_n(f) \qquad (6.27)$$
and $\rho_n(f)$ is called the minimax error.
Example
Let $f(x) = e^{-x^2}$ for $x \in [0, 1]$.

n   ρ_n(f)     n    ρ_n(f)
1   5.30E-2    6    7.82E-6
2   1.79E-2    7    4.62E-7
3   6.63E-4    8    9.64E-8
4   4.63E-4    9    8.05E-9
5   1.62E-5    10   9.16E-10

Table: Minimax errors for $e^{-x^2}$, $0 \le x \le 1$

The minimax errors $\rho_n(f)$ converge to zero rapidly, although not at a uniform rate.
> 5. Numerical Integration > 5.3 Gaussian Numerical Integration

If we have a numerical integration formula that integrates low- to moderate-degree polynomials exactly, then the hope is that the same formula will integrate other functions f(x) almost exactly, provided f(x) is well approximable by such polynomials.


To illustrate the derivation of such integration formulae, we restrict ourselves to the integral

    I(f) = ∫_{−1}^{1} f(x)dx.

The integration formula is to have the general form (Gaussian numerical integration method)

    I_n(f) = Σ_{j=1}^{n} w_j f(x_j)    (6.28)

and we require that
  the nodes {x_1, ..., x_n} and
  the weights {w_1, ..., w_n}
be so chosen that I_n(f) = I(f) for all polynomials f(x) of as large degree as possible.

Case n = 1
The integration formula has the form

    ∫_{−1}^{1} f(x)dx ≈ w_1 f(x_1)    (6.29)

Using f(x) ≡ 1 and forcing equality in (6.29) gives 2 = w_1.
Using f(x) = x gives 0 = w_1 x_1, which implies x_1 = 0. Hence (6.29) becomes

    ∫_{−1}^{1} f(x)dx ≈ 2f(0) ≡ I_1(f)    (6.30)

This is the midpoint formula, and it is exact for all linear polynomials.
To see that (6.30) is not exact for quadratics, let f(x) = x². Then the error in (6.30) is

    ∫_{−1}^{1} x² dx − 2·(0)² = 2/3 ≠ 0,

hence (6.30) has degree of precision 1.

Case n = 2
The integration formula is

    ∫_{−1}^{1} f(x)dx ≈ w_1 f(x_1) + w_2 f(x_2)    (6.31)

and it has four unspecified quantities: x_1, x_2, w_1, w_2. To determine these, we require (6.31) to be exact for the four monomials

    f(x) = 1, x, x², x³,

obtaining 4 equations:

    2   = w_1 + w_2
    0   = w_1 x_1 + w_2 x_2
    2/3 = w_1 x_1² + w_2 x_2²
    0   = w_1 x_1³ + w_2 x_2³


Case n = 2
This is a nonlinear system with a solution

    w_1 = w_2 = 1,    x_1 = −√3/3,    x_2 = √3/3    (6.32)

and another one obtained by reversing the signs of x_1 and x_2.
This yields the integration formula

    ∫_{−1}^{1} f(x)dx ≈ f(−√3/3) + f(√3/3) ≡ I_2(f)    (6.33)

which has degree of precision 3 (exact on all polynomials of degree ≤ 3 and not exact for f(x) = x⁴).


Example
Approximate

    I = ∫_{−1}^{1} e^x dx = e − e^{−1} ≈ 2.3504024

Using

    ∫_{−1}^{1} f(x)dx ≈ f(−√3/3) + f(√3/3) ≡ I_2(f)

we get

    I_2 = e^{−√3/3} + e^{√3/3} ≈ 2.3426961,    I − I_2 ≈ 0.0077

The error is quite small, considering we are using only 2 node points.
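The two-point rule (6.33) is short enough to check directly. A minimal sketch in Python (the function name `gauss2` is illustrative, not from the text):

```python
import math

def gauss2(f):
    """Two-point Gauss-Legendre rule on [-1, 1]: nodes +-sqrt(3)/3, weights 1."""
    x = math.sqrt(3) / 3
    return f(-x) + f(x)

I_true = math.e - 1 / math.e   # exact value of the integral of e^x over [-1, 1]
I2 = gauss2(math.exp)          # two-point Gauss approximation, about 2.3426961
err = I_true - I2              # about 0.0077, as in the example
```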


Case n > 2
We seek the formula (6.28)

    I_n(f) = Σ_{j=1}^{n} w_j f(x_j)

which has 2n unspecified parameters x_1, ..., x_n, w_1, ..., w_n, by forcing the integration formula to be exact for the 2n monomials

    f(x) = 1, x, x², ..., x^{2n−1}

In turn, this forces I_n(f) = I(f) for all polynomials f of degree ≤ 2n − 1.
This leads to the following system of 2n nonlinear equations in 2n unknowns:

    2        = w_1 + w_2 + ... + w_n
    0        = w_1 x_1 + w_2 x_2 + ... + w_n x_n
    2/3      = w_1 x_1² + w_2 x_2² + ... + w_n x_n²
    ...                                                        (6.34)
    2/(2n−1) = w_1 x_1^{2n−2} + w_2 x_2^{2n−2} + ... + w_n x_n^{2n−2}
    0        = w_1 x_1^{2n−1} + w_2 x_2^{2n−1} + ... + w_n x_n^{2n−1}

The resulting formula I_n(f) has degree of precision 2n − 1.



Solving this system is a formidable problem. The nodes {xi } and weights
{wi } have been calculated and collected in tables for most commonly used
values of n.
n xi wi
2 ± 0.5773502692 1.0
3 ± 0.7745966692 0.5555555556
0.0 0.8888888889
4 ± 0.8611363116 0.3478548451
± 0.3399810436 0.6521451549
5 ± 0.9061798459 0.2369268851
± 0.5384693101 0.4786286705
0.0 0.5688888889
6 ± 0.9324695142 0.1713244924
± 0.6612093865 0.3607651730
± 0.2386191861 0.4679139346
7 ± 0.9491079123 0.1294849662
± 0.7415311856 0.2797053915
± 0.4058451514 0.3818300505
0.0 0.4179591837
8 ± 0.9602898565 0.1012285363
± 0.7966664774 0.2223810345
± 0.5255324099 0.3137066459
± 0.1834346425 0.3626837834

Table: Nodes and weights for Gaussian quadrature formulae


There is also another approach to the development of the numerical


integration formula (6.28), using the theory of orthogonal
polynomials.
From that theory, it can be shown that the nodes {x1 , . . . , xn } are the
zeros of the Legendre polynomials of degree n on the interval [−1, 1].
Recall that these polynomials were introduced in Section 4.7. For
example,
    P_2(x) = (1/2)(3x² − 1)

and its roots are the nodes given in (6.32): x_1 = −√3/3, x_2 = √3/3.
Since the Legendre polynomials are well known, the nodes {xj } can be
found without any recourse to the nonlinear system (6.34).
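In practice the nodes and weights need not be tabulated or solved for by hand; NumPy exposes them through `numpy.polynomial.legendre.leggauss` (a sketch, assuming NumPy is available):

```python
import numpy as np

# n-point Gauss-Legendre nodes and weights on [-1, 1]; the nodes are the
# roots of the degree-n Legendre polynomial.
nodes, weights = np.polynomial.legendre.leggauss(2)
# For n = 2 the nodes are +-sqrt(3)/3 and both weights equal 1, matching (6.32).
```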


The sequence of formulae (6.28) is called the Gaussian numerical integration method. From its definition, I_n(f) uses n nodes, and it is exact for all polynomials of degree ≤ 2n − 1. I_n(f) is limited to ∫_{−1}^{1} f(x)dx, an integral over [−1, 1]. But this limitation is easily removed.
Given an integral

    I(f) = ∫_a^b f(x)dx    (6.35)

introduce the linear change of variable

    x = (b + a + t(b − a))/2,    −1 ≤ t ≤ 1    (6.36)

transforming the integral to

    I(f) = (b − a)/2 ∫_{−1}^{1} f̃(t)dt    (6.37)

with

    f̃(t) = f((b + a + t(b − a))/2)

Now apply I_n to this new integral.
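The change of variable (6.36) can be folded into a small routine; a sketch, assuming NumPy's `leggauss` supplies the nodes and weights (the name `gauss` is illustrative):

```python
import numpy as np

def gauss(f, a, b, n):
    """n-point Gauss-Legendre approximation of the integral of f over [a, b],
    via the change of variable x = (b + a + t*(b - a)) / 2 from (6.36)."""
    t, w = np.polynomial.legendre.leggauss(n)
    x = (b + a + t * (b - a)) / 2
    return (b - a) / 2 * np.sum(w * f(x))

# The integral of exp(-x^2) over [0, 1], used as an example in this section
approx = gauss(lambda x: np.exp(-x**2), 0.0, 1.0, 5)
```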

Example
Apply Gaussian numerical integration to the three integrals

    I^(1) = ∫_0^1 e^{−x²}dx,    I^(2) = ∫_0^4 dx/(1 + x²),    I^(3) = ∫_0^{2π} dx/(2 + cos(x)),

which were used as examples for the trapezoidal and Simpson's rules.

All are reformulated as integrals over [−1, 1]. The error results are

n Error in I (1) Error in I (2) Error in I (3)


2 2.29E-4 -2.33E-2 8.23E-1
3 9.55E-6 -3.49E-2 -4.30E-1
4 -3.35E-7 1.90E-3 1.77E-1
5 6.05E-9 1.70E-3 -8.12E-2
6 -7.77E-11 2.74E-4 3.55E-2
7 7.89E-13 -6.45E-5 -1.58E-2
10 * 1.27E-6 1.37E-3
15 * 7.40E-10 -2.33E-5
20 * * 3.96E-7

If these results are compared to those of the trapezoidal and Simpson's rules, then
  Gaussian integration of I^(1) and I^(2) is much more efficient than the trapezoidal rule.
  But the integration of the periodic integrand I^(3) is not as efficient as with the trapezoidal rule.
These results are also true for most other integrals.

Except for periodic integrands, Gaussian numerical integration is


usually much more accurate than trapezoidal and Simpson rules.

This is even true with many integrals in which the integrand does not
have a continuous derivative.


Example
Use Gaussian integration on

    I = ∫_0^1 √x dx = 2/3

The results are

    n     I − I_n    Ratio
    2    -7.22E-3
    4    -1.16E-3    6.2
    8    -1.69E-4    6.9
    16   -2.30E-5    7.4
    32   -3.00E-6    7.6
    64   -3.84E-7    7.8

where n is the number of node points. The ratio column is defined as

    (I − I_{n/2}) / (I − I_n)

and it shows that the error behaves like

    I − I_n ≈ c/n³    (6.38)

for some c. The error using Simpson's rule has an empirical rate of convergence proportional to only 1/n^{1.5}, a much slower rate than (6.38).

A result that relates the minimax error to the Gaussian numerical integration error.
Theorem
Let f ∈ C[a, b], n ≥ 1. Then, if we apply Gaussian numerical integration to I = ∫_a^b f(x)dx, the error in I_n satisfies

    |I(f) − I_n(f)| ≤ 2(b − a)ρ_{2n−1}(f)    (6.39)

where ρ_{2n−1}(f) is the minimax error of degree 2n − 1 for f(x) on [a, b].


Example
Using the table
n ρn (f ) n ρn (f )
1 5.30E-2 6 7.82E-6
2 1.79E-2 7 4.62E-7
3 6.63E-4 8 9.64E-8
4 4.63E-4 9 8.05E-9
5 1.62E-5 10 9.16E-10
apply (6.39) to

    I = ∫_0^1 e^{−x²}dx

For n = 3, the above bound implies

    |I − I_3| ≤ 2ρ_5(e^{−x²}) ≈ 3.24 × 10^{−5}.

The actual error is 9.95E−6.


Gaussian numerical integration is not as simple to use as are the


trapezoidal and Simpson rules, partly because the Gaussian nodes and
weights do not have simple formulae and also because the error is
harder to predict.
Nonetheless, the increase in the speed of convergence is so rapid and
dramatic in most instances that the method should always be
considered seriously when one is doing many integrations.
Estimating the error is quite difficult, and most people satisfy themselves by looking at two or more successive values. If n is doubled, then comparing two successive values, I_n and I_{2n}, is almost always adequate for estimating the error in I_n:

    I − I_n ≈ I_{2n} − I_n

This is somewhat inefficient, but the speed of convergence in In is so


rapid that this will still not diminish its advantage over most other
methods.
> 5. Numerical Integration > 5.3.1 Weighted Gaussian Quadrature

A common problem is the evaluation of integrals of the form


Z b
I(f ) = w(x)f (x)dx (6.40)
a

with f (x) a “well-behaved” function and w(x) a possibly (and often)


ill-behaved function. Gaussian quadrature has been generalized to
handle such integrals for many functions w(x). Examples include
    ∫_{−1}^{1} f(x)/√(1 − x²) dx,    ∫_0^1 √x f(x)dx,    ∫_0^1 f(x) ln(1/x)dx.

The function w(x) is called a weight function.


We begin by imitating the development given earlier in this section, and we do so for the special case of

    I(f) = ∫_0^1 f(x)/√x dx

in which w(x) = 1/√x.

As before, we seek numerical integration formulae of the form

    I_n(f) = Σ_{j=1}^{n} w_j f(x_j)    (6.41)

and we require that the nodes {x_1, ..., x_n} and the weights {w_1, ..., w_n} be so chosen that I_n(f) = I(f) for polynomials f(x) of degree as large as possible.


Case n = 1
The integration formula has the form

    ∫_0^1 f(x)/√x dx ≈ w_1 f(x_1)

We force equality for f(x) = 1 and f(x) = x. This leads to the equations

    w_1     = ∫_0^1 1/√x dx = 2
    w_1 x_1 = ∫_0^1 x/√x dx = 2/3

Solving for w_1 and x_1, we obtain the formula

    ∫_0^1 f(x)/√x dx ≈ 2f(1/3)    (6.42)

and it has degree of precision 1.

Case n = 2
The integration formula has the form

    ∫_0^1 f(x)/√x dx ≈ w_1 f(x_1) + w_2 f(x_2)    (6.43)

We force equality for f(x) = 1, x, x², x³. This leads to the equations

    w_1 + w_2           = ∫_0^1 1/√x dx  = 2
    w_1 x_1 + w_2 x_2   = ∫_0^1 x/√x dx  = 2/3
    w_1 x_1² + w_2 x_2² = ∫_0^1 x²/√x dx = 2/5
    w_1 x_1³ + w_2 x_2³ = ∫_0^1 x³/√x dx = 2/7

This has the solution

    x_1 = 3/7 − (2/35)√30 ≈ 0.11559,    x_2 = 3/7 + (2/35)√30 ≈ 0.74156
    w_1 = 1 + (1/18)√30 ≈ 1.30429,      w_2 = 1 − (1/18)√30 ≈ 0.69571

The resulting formula (6.43) has degree of precision 3.

Case n > 2
We seek formula (6.41), which has 2n unspecified parameters, x1 , . . . , xn ,
w1 , . . . , wn , by forcing the integration formula to be exact for the 2n
monomials
f (x) = 1, x, x2 , . . . , x2n−1 .
In turn, this forces In (f ) = I(f ) for all polynomials f of degree ≤ 2n − 1.
This leads to the following system of 2n nonlinear equations in 2n unknowns:
    w_1 + w_2 + ... + w_n                                   = 2
    w_1 x_1 + w_2 x_2 + ... + w_n x_n                       = 2/3
    w_1 x_1² + w_2 x_2² + ... + w_n x_n²                    = 2/5    (6.44)
    ...
    w_1 x_1^{2n−1} + w_2 x_2^{2n−1} + ... + w_n x_n^{2n−1}  = 2/(4n−1)

The resulting formula In (f ) has degree of precision 2n − 1.


As before, this system is very difficult to solve directly, but there are
alternative methods of deriving {xi } and {wi }. It is based on looking at the
polynomials that are orthogonal with respect to the weight function
w(x) = √1x on the interval [0, 1].

Example
We evaluate

    I = ∫_0^1 cos(πx)/√x dx ≈ 0.74796566683146

using (6.42)

    ∫_0^1 f(x)/√x dx ≈ 2f(1/3) ≡ I_1 = 1.0

and (6.43)

    ∫_0^1 f(x)/√x dx ≈ w_1 f(x_1) + w_2 f(x_2) ≡ I_2 ≈ 0.740519

I_2 is a reasonable estimate of I, with I − I_2 ≈ 0.00745.
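The weighted two-point rule above is easy to verify numerically; a sketch using the closed-form nodes and weights from the n = 2 case (the function name `weighted_gauss2` is illustrative):

```python
import math

# Nodes and weights for integral_0^1 f(x)/sqrt(x) dx, from the n = 2 derivation
x1 = 3/7 - 2 * math.sqrt(30) / 35
x2 = 3/7 + 2 * math.sqrt(30) / 35
w1 = 1 + math.sqrt(30) / 18
w2 = 1 - math.sqrt(30) / 18

def weighted_gauss2(f):
    """Two-point weighted Gauss rule for weight w(x) = 1/sqrt(x) on [0, 1]."""
    return w1 * f(x1) + w2 * f(x2)

I2 = weighted_gauss2(lambda x: math.cos(math.pi * x))   # about 0.740519
```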


A general theory can be developed for the weighted Gaussian quadrature


Z b Xn
I(f ) = w(x)f (x)dx ≈ wj f (xj ) = In (f ) (6.45)
a j=1

It requires the following assumptions for the weight function w(x):
1 w(x) > 0 for a < x < b;
2 For all integers n ≥ 0,

    ∫_a^b w(x)|x|^n dx < ∞

These hypotheses are the same as were assumed for the generalized least
squares approximation theory following Section 4.7 of Chapter 4. This is not
accidental since both Gaussian quadrature and least squares approximation
theory are dependent on the subject of orthogonal polynomials. The node
points {xj } solving the system (6.44) are the zeros of the degree n orthogonal
polynomial on [a, b] with respect to the weight function w(x) = √1x .
For the generalization (6.45), the nodes {xi } are the zeros of the degree n
orthogonal polynomial on [a, b] with respect to the weight function w(x).
> 5. Numerical Integration > Supplement

Gauss's idea:
The optimal abscissas of the κ-point Gaussian quadrature formulas are precisely the roots of the orthogonal polynomial for the same interval and weighting function.

    ∫_a^b f(x)dx = Σ_j ∫_{x_j}^{x_{j+1}} f(x)dx    (composite formula)
                 = Σ_j (x_{j+1} − x_j)/2 · ∫_{−1}^{1} f( (x_{j+1} − x_j)/2 · t + (x_{j+1} + x_j)/2 ) dt

Each piece has the form

    ∫_{−1}^{1} g(t)dt ≈ Σ_{ℓ=1}^{κ} w_ℓ g(q_ℓ)    (κ-point Gauss rule for maximal accuracy)

with w_1, ..., w_κ the weights and q_1, ..., q_κ the quadrature points in (−1, 1).

The rule is exact on polynomials p(x) ∈ P_{2κ−1}, i.e., on 1, t, t², ..., t^{2κ−1}.

Example: 3-point Gauss, exact on P_5 ⇔ exact on 1, t, t², t³, t⁴, t⁵

    ∫_{−1}^{1} g(t)dt ≈ w_1 g(q_1) + w_2 g(q_2) + w_3 g(q_3)

    ∫_{−1}^{1} 1 dt  = 2   = w_1 + w_2 + w_3
    ∫_{−1}^{1} t dt  = 0   = w_1 q_1 + w_2 q_2 + w_3 q_3
    ∫_{−1}^{1} t² dt = 2/3 = w_1 q_1² + w_2 q_2² + w_3 q_3²
    ∫_{−1}^{1} t³ dt = 0   = w_1 q_1³ + w_2 q_2³ + w_3 q_3³
    ∫_{−1}^{1} t⁴ dt = 2/5 = w_1 q_1⁴ + w_2 q_2⁴ + w_3 q_3⁴
    ∫_{−1}^{1} t⁵ dt = 0   = w_1 q_1⁵ + w_2 q_2⁵ + w_3 q_3⁵

Guess: q_1 = −q_3, q_2 = 0 (q_1 ≤ q_2 ≤ q_3), w_1 = w_3.


Example: 3-point Gauss, exact on P_5 ⇔ exact on 1, t, t², t³, t⁴, t⁵

With this guess:

    2w_1 + w_2 = 2
    2w_1 q_1²  = 2/3
    2w_1 q_1⁴  = 2/5

hence

    q_1 = −√(3/5),    q_3 = √(3/5)
    w_1 = 5/9,    w_3 = 5/9,    w_2 = 8/9
A. H. Stroud and D. Secrest: ”Gaussian Quadrature Formulas”.
Englewood Cliffs, NJ: Prentice-Hall, 1966.
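The derived three-point rule can be checked against the exactness requirements directly (a minimal sketch; the name `gauss3` is illustrative):

```python
import math

q = math.sqrt(3 / 5)

def gauss3(g):
    """Three-point Gauss rule on [-1, 1] with the nodes/weights derived above."""
    return 5/9 * g(-q) + 8/9 * g(0.0) + 5/9 * g(q)

# Exact on all of P5: e.g. the integral of t^4 over [-1, 1] is 2/5
val = gauss3(lambda t: t**4)
```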


1 The idea of Gauss

Gauss-Lobatto

    ∫_{−1}^{1} g(t)dt ≈ w_1 g(−1) + w_2 g(q_2) + · · · + w_{k−1} g(q_{k−1}) + w_k g(1)

Only k − 2 interior nodes are free to be located, compared with the k-point Gauss formula; the rule is exact on P_{2k−3} (degree of precision decreased by 2 relative to the Gauss quadrature formula).


Adaptive Quadrature
Problem
Given ∫_a^b f(x)dx and ε a preassigned tolerance, compute

    I(f) ≈ ∫_a^b f(x)dx

(a) to assured accuracy

    |∫_a^b f(x)dx − I(f)| < ε

(b) at minimal / near-minimal cost (number of function evaluations).

Strategy: LOCALIZE!

Localization Theorem
Let I(f) = Σ_j I_j(f), where I_j(f) ≈ ∫_{x_j}^{x_{j+1}} f(x)dx.

If    |∫_{x_j}^{x_{j+1}} f(x)dx − I_j(f)| < ε(x_{j+1} − x_j)/(b − a)    (= local tolerance),

then    |∫_a^b f(x)dx − I(f)| < ε    (= tolerance).

Proof:

    |∫_a^b f(x)dx − I(f)| = |Σ_j ∫_{x_j}^{x_{j+1}} f(x)dx − Σ_j I_j(f)|
        = |Σ_j ( ∫_{x_j}^{x_{j+1}} f(x)dx − I_j(f) )|
        ≤ Σ_j |∫_{x_j}^{x_{j+1}} f(x)dx − I_j(f)|
        < Σ_j ε(x_{j+1} − x_j)/(b − a) = ε/(b − a) · Σ_j (x_{j+1} − x_j) = ε/(b − a) · (b − a) = ε.

Need:
  an estimator for the local error, and
  a strategy:
    when to cut h to ensure accuracy?
    when to increase h to ensure minimal cost?
One approach: halving and doubling!
Recall: the trapezoidal rule

    I_j ≈ (x_{j+1} − x_j) · (f(x_j) + f(x_{j+1}))/2.

A priori estimate:

    ∫_{x_j}^{x_{j+1}} f(x)dx − I_j = −(x_{j+1} − x_j)³/12 · f''(s_j)

for some s_j in (x_j, x_{j+1}).


Step 1: compute I_j

    I_j = (f(x_{j+1}) + f(x_j))/2 · (x_{j+1} − x_j)

Step 2: cut the interval in half and reuse the trapezoidal rule

    Ī_j = (f(x_j) + f(x_{j+1/2}))/2 · (x_{j+1/2} − x_j) + (f(x_{j+1/2}) + f(x_{j+1}))/2 · (x_{j+1} − x_{j+1/2})

Error estimate:

    ∫_{x_j}^{x_{j+1}} f(x)dx − I_j = −h_j³/12 · f''(ξ_j) = e_j                           (1st use of trapezoid rule)
    ∫_{x_j}^{x_{j+1}} f(x)dx − Ī_j = −(h_j/2)³/12 · f''(η_1) − (h_j/2)³/12 · f''(η_2)    (2nd use of TR)
                                   = (1/4) · (−h_j³/12) · f''(ξ_j) + O(h_j⁴) ≡ ē_j

    e_j = 4ē_j + Higher Order Terms

Subtracting,

    Ī_j − I_j = 3ē_j + O(h⁴)    ⟹    ē_j = (Ī_j − I_j)/3 + Higher Order Terms
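The halving estimate ē_j ≈ (Ī_j − I_j)/3 is easy to check on one subinterval; a sketch (for f(x) = x² the second derivative is constant, so the estimate equals the true error of Ī up to round-off; the function name is illustrative):

```python
def trap_with_estimate(f, a, b):
    """One trapezoid value I on [a, b], its halved-step refinement I_bar,
    and the local error estimate e_bar = (I_bar - I)/3 for I_bar."""
    I = (b - a) * (f(a) + f(b)) / 2
    m = (a + b) / 2
    I_bar = ((m - a) * (f(a) + f(m)) + (b - m) * (f(m) + f(b))) / 2
    return I_bar, (I_bar - I) / 3

I_bar, e_bar = trap_with_estimate(lambda x: x * x, 0.0, 1.0)
# true integral is 1/3, so the true error of I_bar is 1/3 - I_bar
```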

4-point Gauss: exact on P_7
  local error: O(h⁹)
  global error: O(h⁸)
A priori estimate:

    ∫_{x_j}^{x_{j+1}} f(x)dx − I_j = C(x_{j+1} − x_j)⁹ f^(8)(ξ_j) = C h_j⁹ f^(8)(ξ_j)
    ∫_{x_j}^{x_{j+1}} f(x)dx − Ī_j = C(h_j/2)⁹ f^(8)(ξ_j') + C(h_j/2)⁹ f^(8)(ξ_j'')
                                   = (C/2⁸) h_j⁹ f^(8)(ξ_j) + O(h¹⁰) ≡ ē_j

    ⇒ Ī_j − I_j = 255 ē_j + O(h¹⁰)    (since e_j = 2⁸ ē_j + O(h¹⁰))

    ⇒ ē_j = (Ī_j − I_j)/255 + Higher Order Terms

Algorithm

Input: a, b, f(x)
  upper error tolerance: ε_max
  initial mesh width: h
Initialize: Integral = 0.0
  xL = a
  ε_min = ε_max / 2^{k+3}
* xR = xL + h
  If xR > b, set xR ← b (do the integral one more time and stop)
  Compute on [xL, xR]: I, Ī and EST, where

    EST = (Ī − I) / (2^{k+1} − 1)    (if the rule is exact on P_k)

'error is just right':
  If ε_min · h/(b − a) < EST < ε_max · h/(b − a):
    Integral ← Integral + I
    xL ← xR
    go to *
'error is too small':
  If EST ≤ ε_min · h/(b − a):
    Integral ← Integral + I
    xL ← xR
    h ← 2h
    go to *
'error is too big':
  If EST ≥ ε_max · h/(b − a):
    h ← h/2.0
    go to *
STOP
END
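A compact sketch of this halving/doubling loop for the trapezoid rule (so k = 1, EST = (Ī − I)/3 and ε_min = ε_max/16; the accepted value, the function name, and the initial mesh width are illustrative choices, not fixed by the text):

```python
import math

def adaptive_trap(f, a, b, eps_max):
    """Adaptive trapezoid rule following the halving/doubling strategy above."""
    eps_min = eps_max / 16              # eps_max / 2^(k+3) with k = 1
    total, xL, h = 0.0, a, (b - a) / 4  # illustrative initial mesh width
    while xL < b:
        xR = min(xL + h, b)
        I = (xR - xL) * (f(xL) + f(xR)) / 2
        m = (xL + xR) / 2
        I_bar = ((m - xL) * (f(xL) + f(m)) + (xR - m) * (f(m) + f(xR))) / 2
        est = abs(I_bar - I) / 3        # EST = (I_bar - I)/(2^(k+1) - 1)
        tol = (xR - xL) / (b - a)
        if est >= eps_max * tol:
            h /= 2                      # 'error is too big': halve and retry
            continue
        total += I_bar                  # accept this subinterval
        xL = xR
        if est <= eps_min * tol:
            h *= 2                      # 'error is too small': double the step
    return total

approx = adaptive_trap(math.sin, 0.0, math.pi, 1e-6)
```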

Trapezium rule

    ∫_{x_j}^{x_{j+1}} f(x)dx − (f(x_{j+1}) + f(x_j))/2 · (x_{j+1} − x_j)
      = ∫_{x_j}^{x_{j+1}} (f(x) − p_1(x))dx
      = ∫_{x_j}^{x_{j+1}} (x − x_j)(x − x_{j+1}) · f''(ξ)/2 dx,    ψ(x) := (x − x_j)(x − x_{j+1})
      = f''/2 · ∫_{x_j}^{x_{j+1}} ψ(x)dx + O(h) · ∫_{x_j}^{x_{j+1}} ψ(x)dx

where f'' is evaluated at a fixed point of the interval and the integral of ψ(x) can be computed exactly.

The mysteries of ψ(x)

[Figure: plot of ψ(x) for x_j ≤ x ≤ x_{j+1}, with roots at the quadrature points q_1, ..., q_7, oscillating between positive and negative values.]

    ψ(x) = (x − q_1)(x − q_2) · · · (x − q_7)


Error in k + 1 point quadrature

p_k(x) interpolates f(x)  ⟹  f(x) − p_k(x) = f^(k+1)(ξ)/(k+1)! · ψ(x)

    (x_j ≤) q_1 < q_2 < ... < q_{k+1} (≤ x_{j+1})

    ∫_{x_j}^{x_{j+1}} f(x)dx − ∫_{x_j}^{x_{j+1}} p_k(x)dx = ∫_{x_j}^{x_{j+1}} ψ(x)/(k+1)! · f^(k+1)(ξ)dx
          (true)                 (approx)

1. A simple error bound

Ignoring the oscillation of ψ(x):

    |error| ≤ max|f^(k+1)|/(k+1)! · ∫_{x_j}^{x_{j+1}} |ψ(x)|dx ≤ max|f^(k+1)|/(k+1)! · |x_{j+1} − x_j|^{k+2}

since ∫_{x_j}^{x_{j+1}} |x − q_1| · · · |x − q_{k+1}| dx ≤ h^{k+1} ∫_{x_j}^{x_{j+1}} dx.

[Figure: plot of ψ(x) and |ψ(x)| for x_j ≤ x ≤ x_{j+1}.]


2. Analysis without cancellation

x_j < ξ < x < x_{j+1}

Lemma
Let ξ, x ∈ (x_j, x_{j+1}). Then

    f^(k+1)(ξ) = f^(k+1)(x) + (ξ − x) f^(k+2)(η)    (MVT)

for some η between ξ and x, and |ξ − x| ≤ x_{j+1} − x_j ≤ h.

2. Analysis without cancellation

    |error| = |true − approx|
      = 1/(k+1)! · | ∫_{x_j}^{x_{j+1}} ψ(x) [ f^(k+1)(x) + (ξ − x) f^(k+2)(η) ] dx |
                                              (fixed)      (O(h))
      ≤ 1/(k+1)! · | f^(k+1)(x) ∫_{x_j}^{x_{j+1}} ψ(x)dx |        (= 0 if ∫_{x_j}^{x_{j+1}} ψ(x)dx = 0)
        + 1/(k+1)! · | ∫_{x_j}^{x_{j+1}} f^(k+2)(η)(ξ − x)ψ(x)dx |
      ≤ max|f^(k+2)|/(k+1)! · ∫_{x_j}^{x_{j+1}} |ξ − x| · |ψ(x)| dx        (|ξ − x| ≤ h, |ψ(x)| ≤ h^{k+1})
      ≤ h^{k+3} max|f^(k+2)| / (k+1)!

This extra power of h is the same gain that cancellation produces in the error for Simpson's rule.

ψ(x) interpolates zero at k + 1 points (deg ψ(x) = k + 1)

Lemma
If p(q_ℓ) = 0 for ℓ = 1, ..., k + 1 and p ∈ P_{k+1}, then p(x) = Constant · ψ(x).

Questions:

1) How do we pick the points q_1, ..., q_{k+1} so that

    ∫_{−1}^{1} g(x)dx ≈ w_1 g(q_1) + ... + w_{k+1} g(q_{k+1})    (6.46)

integrates P_{k+m} exactly?

2) What does this imply about the error?

Remark
If m = 1, pick q_1, ..., q_{k+1} so that ∫_{−1}^{1} ψ(x)dx = 0, and then the error converges like O(h^{k+3}).

(The case m = 2 is treated in Step 2 below.)

Step 1
Let r_1 be some fixed point on [−1, 1]:

    −1 < q_1 < q_2 < ... < r_1 < ... < q_k < q_{k+1}

    p_{k+1}(x) = p_k(x) + (g(r_1) − p_k(r_1))/ψ(r_1) · ψ(x)    (6.47)

p_k interpolates g(x) at q_1, ..., q_{k+1}.
Claim: p_{k+1} interpolates g(x) at the k + 2 points q_1, ..., q_{k+1}, r_1.
Suppose now that (6.46) is exact on P_{k+1}; then from (6.47), the error in the k + 2 point quadrature rule, E_{k+2}, is

    ∫_{−1}^{1} g(x)dx − ∫_{−1}^{1} p_{k+1}(x)dx
      = ∫_{−1}^{1} g(x)dx − ∫_{−1}^{1} p_k(x)dx − (g(r_1) − p_k(r_1))/ψ(r_1) · ∫_{−1}^{1} ψ(x)dx,

where the first difference is the error in the k + 1 point quadrature rule, ≡ E_{k+1}.

Step 1

So

    E_{k+2} = E_{k+1} − (g(r_1) − p_k(r_1))/ψ(r_1) · ∫_{−1}^{1} ψ(x)dx

Conclusion 1
If ∫_{−1}^{1} ψ(x)dx = 0, then the error in the k + 1 point rule is exactly the same as if we had used k + 2 points.

Step 2
Let r_1, r_2 be fixed points in [−1, 1], and interpolate at the k + 3 points q_1, ..., q_{k+1}, r_1, r_2:

    p_{k+2}(x) = p_k(x) + (g(r_2) − p_k(r_2))/((r_2 − r_1)ψ(r_2)) · ψ(x)(x − r_1)    (6.48)
                        + (g(r_1) − p_k(r_1))/((r_1 − r_2)ψ(r_1)) · ψ(x)(x − r_2)

Consider the error in a rule with k + 1 + 2 points:

    error in k + 3 point rule = ∫_{−1}^{1} g(x)dx − ∫_{−1}^{1} p_{k+2}(x)dx
      = ∫_{−1}^{1} g(x)dx − ∫_{−1}^{1} p_k(x)dx
        − (g(r_2) − p_k(r_2))/((r_2 − r_1)ψ(r_2)) · ∫_{−1}^{1} ψ(x)(x − r_1)dx
        − (g(r_1) − p_k(r_1))/((r_1 − r_2)ψ(r_1)) · ∫_{−1}^{1} ψ(x)(x − r_2)dx.

Step 2

So  E_{k+3} = E_{k+1} + Const · ∫_{−1}^{1} ψ(x)(x − r_1)dx + Const · ∫_{−1}^{1} ψ(x)(x − r_2)dx

Conclusion 2
If ∫_{−1}^{1} ψ(x)dx = 0 and ∫_{−1}^{1} xψ(x)dx = 0, then the k + 1 point rule has the same error as the k + 3 point rule.

...

    E_{k+1+m} = E_{k+1} + C_0 ∫_{−1}^{1} ψ(x)dx + C_1 ∫_{−1}^{1} ψ(x)x dx + ...
                        + C_{m−1} ∫_{−1}^{1} ψ(x)x^{m−1} dx    (6.49)

So
Conclusion 3
If ∫_{−1}^{1} ψ(x)x^j dx = 0 for j = 0, ..., m − 1, then the error is as good as using m extra points.

Overview

Interpolating Quadrature
Interpolate f(x) at q_0, q_1, q_2, ..., q_k ⇒ p_k(x)

    f(x) − p_k(x) = (x − q_0)(x − q_1) ... (x − q_k) · f^(k+1)(ξ)/(k+1)!

    ∫_{−1}^{1} f(x)dx − ∫_{−1}^{1} p_k(x)dx = 1/(k+1)! · ∫_{−1}^{1} f^(k+1)(ξ)ψ(x)dx

Gauss rules
  pick the q_ℓ to maximize exactness
  what is the accuracy?
  what are the q_ℓ's?

Overview

Interpolate at the k + 1 + m points q_0, ..., q_k, r_1, ..., r_m versus at the k + 1 points q_0, ..., q_k. Comparing the two errors gives

    E_{k+m} = E_k + c_0 ∫_{−1}^{1} ψ(x) · 1 dx + c_1 ∫_{−1}^{1} ψ(x) · x dx + ... + c_m ∫_{−1}^{1} ψ(x) · x^{m−1} dx

Definition
p(x) is the (µ + 1)st orthogonal polynomial on [−1, 1] (weight w(x) ≡ 1) if p(x) ∈ P_{µ+1} and ∫_{−1}^{1} p(x)x^ℓ dx = 0 for ℓ = 0, ..., µ, i.e., ∫_{−1}^{1} p(x)q(x)dx = 0 for all q ∈ P_µ.

Overview

Pick q_0, q_1, ..., q_k so that

    ∫_{−1}^{1} ψ(x) · 1 dx = 0
    ∫_{−1}^{1} ψ(x) · x dx = 0
    ...
    ∫_{−1}^{1} ψ(x) · x^{m−1} dx = 0

⇔ ∫_{−1}^{1} ψ(x)q(x)dx = 0 for all q ∈ P_{m−1}    (deg q ≤ m − 1, deg ψ = k + 1).

So, maximum accuracy is attained if ψ(x) is the orthogonal polynomial of degree k + 1:

    ∫_{−1}^{1} ψ(x)q(x)dx = 0  ∀q ∈ P_k  ⇒  m − 1 = k,  m = k + 1

So, the Gauss quadrature points are the roots of the orthogonal polynomial.

Overview

Adaptivity

    Ī = I_1 + I_2    (the two half-interval trapezoid values)

The trapezium rule's local error is O(h³), so halving the step divides each local error by 8:

    ∫_{x_j}^{x_{j+1}} f(x)dx − I = e
    ∫_{x_j}^{x_{j+1}} f(x)dx − Ī = ē = e_1 + e_2,    e ≈ 8e_1, e ≈ 8e_2, so e ≈ 4ē

    Ī − I = 3ē (+ Higher Order Terms)  ⇒  ē = (Ī − I)/3 (+ Higher Order Terms)

Overview

Final Observation

    True − I ≈ 4ē
    True − Ī ≈ ē      }  2 equations, 2 unknowns: ē, True

So we can solve for ē and True. Solving for True:

    True ≈ Ī + ē ≈ Ī + (Ī − I)/3 = (4/3)Ī − (1/3)I (+ Higher Order Terms)


> 5. Numerical Integration > 5.4 Numerical Differentiation

To numerically calculate the derivative of f(x), begin by recalling the definition of the derivative

    f'(x) = lim_{h→0} (f(x + h) − f(x))/h

This justifies using

    f'(x) ≈ (f(x + h) − f(x))/h ≡ D_h f(x)    (6.51)

for small values of h. D_h f(x) is called a numerical derivative of f(x) with stepsize h.
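A one-line sketch of the forward difference (6.51); the errors roughly halve when h is halved, matching the O(h) behaviour derived below (the function name is illustrative):

```python
import math

def forward_diff(f, x, h):
    """Forward difference D_h f(x) = (f(x + h) - f(x)) / h, error O(h)."""
    return (f(x + h) - f(x)) / h

d = forward_diff(math.cos, math.pi / 6, 0.1)   # about -0.54243; exact f' is -0.5
```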


Example
Use Dh f to approximate the derivative of f (x) = cos(x) at x = π6 .

h Dh f Error Ratio
0.1 -0.54243 0.04243
0.05 -0.52144 0.02144 1.98
0.025 -0.51077 0.01077 1.99
0.0125 -0.50540 0.00540 1.99
0.00625 -0.50270 0.00270 2.00
0.003125 -0.50135 0.00135 2.00

Looking at the error column, we see the error is nearly proportional to


h; when h is halved, the error is almost halved.


To explain the behaviour in this example, Taylor's theorem can be used to find an error formula. Expanding f(x + h) about x, we get

    f(x + h) = f(x) + h f'(x) + h²/2 f''(c)

for some c between x and x + h. Substituting on the right side of (6.51), we obtain

    D_h f(x) = (1/h)[ f(x) + h f'(x) + h²/2 f''(c) − f(x) ] = f'(x) + h/2 f''(c)

    f'(x) − D_h f(x) = −h/2 f''(c)    (6.52)

The error is proportional to h, agreeing with the results in the Table above.

For that example,

    f'(π/6) − D_h f(π/6) = (h/2) cos(c)    (6.53)

where c is between π/6 and π/6 + h.
Let's check that if c is replaced by π/6, then the RHS of (6.53) agrees with the error column in the Table.
As seen in the example, we use the formula (6.51) with a positive stepsize h > 0. The formula (6.51) is commonly known as the forward difference formula for the first derivative.
We can formally replace h by −h in (6.51) to obtain the formula

    f'(x) ≈ (f(x) − f(x − h))/h,    h > 0    (6.54)

This is the backward difference formula for the first derivative.
A derivation similar to that leading to (6.52) shows that

    f'(x) − (f(x) − f(x − h))/h = (h/2) f''(c)    (6.55)

for some c between x and x − h.
Thus, we expect the accuracy of the backward difference formula to be almost the same as that of the forward difference formula.
> 5. Numerical Integration > 5.4.1 Differentiation Using Interpolation

Let Pn (x) denote the degree n polynomial that interpolates f (x) at


n + 1 node points x0 , . . . , xn .

To calculate f 0 (x) at some point x = t, use

f 0 (t) ≈ Pn0 (t) (6.56)

Many different formulae can be obtained by


1 varying n and by
2 varying the placement of the nodes x0 , . . . , xn relative to the
point t of interest.


As an especially useful example of (6.56), take

    n = 2,  t = x_1,  x_0 = x_1 − h,  x_2 = x_1 + h.

Then

    P_2(x) = (x − x_1)(x − x_2)/(2h²) f(x_0) + (x − x_0)(x − x_2)/(−h²) f(x_1)
             + (x − x_0)(x − x_1)/(2h²) f(x_2)

    P_2'(x) = (2x − x_1 − x_2)/(2h²) f(x_0) + (2x − x_0 − x_2)/(−h²) f(x_1)
              + (2x − x_0 − x_1)/(2h²) f(x_2)

    P_2'(x_1) = (x_1 − x_2)/(2h²) f(x_0) + (2x_1 − x_0 − x_2)/(−h²) f(x_1) + (x_1 − x_0)/(2h²) f(x_2)
              = (f(x_2) − f(x_0))/(2h)    (6.57)

The central difference formula

Replacing x_0 and x_2 by x_1 − h and x_1 + h, from (6.56) and (6.57) we obtain the central difference formula

    f'(x_1) ≈ (f(x_1 + h) − f(x_1 − h))/(2h) ≡ D_h f(x_1),    (6.58)

another approximation to the derivative of f(x).

It will be shown below that this is a more accurate approximation to f'(x) than the forward difference formula of (6.51), i.e.,

    f'(x) ≈ (f(x + h) − f(x))/h ≡ D_h f(x).

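A sketch of the central difference, showing its O(h²) behaviour on the running f(x) = cos(x) example (the function name is illustrative):

```python
import math

def central_diff(f, x, h):
    """Central difference (f(x + h) - f(x - h)) / (2h), error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

d = central_diff(math.cos, math.pi / 6, 0.1)   # about -0.4991671; exact is -0.5
```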

Theorem
Assume f ∈ C n+2 [a, b].
Let x0 , x1 , . . . , xn be n + 1 distinct interpolation nodes in [a, b], and
let t be an arbitrary given point in [a, b].
Then

    f'(t) − P_n'(t) = Ψ_n(t) f^(n+2)(c_1)/(n+2)! + Ψ_n'(t) f^(n+1)(c_2)/(n+1)!    (6.59)

with

    Ψ_n(t) = (t − x_0)(t − x_1) · · · (t − x_n)
The numbers c1 and c2 are unknown points located between the
maximum and minimum of x0 , x1 , . . . , xn and t.


To illustrate this result, an error formula can be derived for the central
difference formula (6.58).
Since t = x1 in deriving (6.58), we find that the first term on the RHS
of (6.59) is zero. Also n = 2 and

    Ψ_2(x) = (x − x_0)(x − x_1)(x − x_2)
    Ψ_2'(x) = (x − x_1)(x − x_2) + (x − x_0)(x − x_2) + (x − x_0)(x − x_1)
    Ψ_2'(x_1) = (x_1 − x_0)(x_1 − x_2) = −h²

Using this in (6.59), we get

    f'(x_1) − (f(x_1 + h) − f(x_1 − h))/(2h) = −h²/6 f'''(c_2)    (6.60)
with x1 − h ≤ c2 ≤ x1 + h. This says that for small values of h, the
central difference formula (6.58) should be more accurate that the
earlier approximation (6.51), the forward difference formula, because
the error term of (6.58) decreases more rapidly with h.

Example
The earlier example f(x) = cos(x) is repeated using the central difference formula (6.58) (recall x_1 = π/6).

h Dh f Error Ratio
0.1 -0.49916708 -0.0008329
0.05 -0.49979169 -0.0002083 4.00
0.025 -0.49994792 -0.00005208 4.00
0.0125 -0.49998698 -0.00001302 4.00
0.00625 -0.49999674 -0.000003255 4.00

The results confirm the rate of convergence given in (6.60), and they illustrate that the central difference formula (6.58) will usually be superior to the earlier approximation, the forward difference formula (6.51).



> 5. Numerical Integration > 5.4.2 The Method of Undetermined Coefficients

The method of undetermined coefficients is a procedure used in


deriving formulae for numerical differentiation, interpolation and
integration. We will explain the method by using it to derive an
approximation for f 00 (x).
To approximate f''(t) at some point x = t, write

    f''(t) ≈ D_h^(2) f(t) ≡ A f(t + h) + B f(t) + C f(t − h)    (6.61)

with A, B and C unspecified constants. Replace f(t − h) and f(t + h) by the Taylor polynomial approximations

    f(t − h) ≈ f(t) − h f'(t) + h²/2 f''(t) − h³/6 f'''(t) + h⁴/24 f^(4)(t)
    f(t + h) ≈ f(t) + h f'(t) + h²/2 f''(t) + h³/6 f'''(t) + h⁴/24 f^(4)(t)    (6.62)

Including more terms would give higher powers of h; and for small values of h, these additional terms should be much smaller than the terms included in (6.62).
Substituting these approximations into the formula for D_h^(2) f(t) and
collecting together common powers of h gives us

    D_h^(2) f(t) ≈ (A + B + C) f(t) + h(A − C) f'(t) + (h²/2)(A + C) f''(t)
                 + (h³/6)(A − C) f'''(t) + (h⁴/24)(A + C) f^(4)(t)     (6.63)
To have D_h^(2) f(t) ≈ f''(t)
for arbitrary functions f(x), it is necessary to require

    A + B + C = 0         (coefficient of f(t))
    h(A − C) = 0          (coefficient of f'(t))
    (h²/2)(A + C) = 1     (coefficient of f''(t))

This system has the solution

    A = C = 1/h²,    B = −2/h².
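These values can be checked directly. The small Python sketch below (an illustration, with h and t chosen arbitrarily) verifies the three coefficient conditions, and also the exactness of the resulting formula on a cubic, for which f^(4) ≡ 0:

```python
# Verify A = C = 1/h^2, B = -2/h^2 against the three coefficient conditions.
h = 0.3  # arbitrary sample stepsize for the check
A = C = 1 / h**2
B = -2 / h**2

cond_f   = A + B + C              # coefficient of f(t):   must be 0
cond_fp  = h * (A - C)            # coefficient of f'(t):  must be 0
cond_fpp = (h**2 / 2) * (A + C)   # coefficient of f''(t): must be 1

# For a cubic, f^(4) = 0, so the resulting formula is exact:
# here f(x) = x^3, whose second derivative is f''(t) = 6t.
def f(x):
    return x**3

t = 0.7
approx = (f(t + h) - 2 * f(t) + f(t - h)) / h**2
```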

This determines

    D_h^(2) f(t) = [f(t + h) − 2f(t) + f(t − h)] / h²     (6.64)
To determine an error formula for D_h^(2) f(t), substitute
A = C = 1/h², B = −2/h² into (6.63) to obtain

    D_h^(2) f(t) ≈ f''(t) + (h²/12) f^(4)(t).

The approximation here arises from omitting the higher powers of h
in the Taylor polynomials (6.62). Thus,

    f''(t) − [f(t + h) − 2f(t) + f(t − h)]/h² ≈ −(h²/12) f^(4)(t)     (6.65)
This is an accurate estimate of the error for small values of h. Of course,
in a practical situation we would not know f^(4)(t). But the error
formula shows that the error decreases by a factor of about 4
when h is halved. This can be used to justify Richardson's extrapolation
to obtain an even more accurate estimate of the error and of f''(t).

Example
Let f(x) = cos(x), t = π/6, and use (6.64) to calculate
f''(t) = −cos(π/6).

h        D_h^(2)(f)     Error       Ratio
0.5      -0.84813289    -1.789E-2
0.25     -0.86152424    -4.501E-3   3.97
0.125    -0.86489835    -1.127E-3   3.99
0.0625   -0.86574353    -2.819E-4   4.00
0.03125  -0.86595493    -7.048E-5   4.00

The results shown (see the ratio column) are consistent with the error
formula (6.65)
    f''(t) − [f(t + h) − 2f(t) + f(t − h)]/h² ≈ −(h²/12) f^(4)(t).
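The example can be reproduced in a few lines of Python (a sketch, not from the notes), checking that the ratio of successive errors approaches 4 as (6.65) predicts:

```python
import math

# Second-difference approximation (6.64) to f''(t), for f(x) = cos(x)
# at t = pi/6.  The exact value is f''(t) = -cos(pi/6).
def second_diff(f, t, h):
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = math.pi / 6
exact = -math.cos(t)

errors = [exact - second_diff(math.cos, t, h)
          for h in [0.5, 0.25, 0.125, 0.0625, 0.03125]]

# (6.65) predicts the error shrinks by a factor of about 4 when h halves.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```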


In the derivation of (6.65), the form (6.61)

    f''(t) ≈ D_h^(2) f(t) ≡ A f(t + h) + B f(t) + C f(t − h)
was assumed for the approximate derivative. We could equally well have
chosen to evaluate f (x) at points other than those used there, for example,

    f''(t) ≈ A f(t + 2h) + B f(t + h) + C f(t)

Or, we could have chosen more evaluation points, as in

    f''(t) ≈ A f(t + 3h) + B f(t + 2h) + C f(t + h) + D f(t)

The extra degree of freedom could have been used to obtain a more accurate
approximation to f 00 (t), by forcing the error term to be proportional to a
higher power of h.
Many of the formulae derived by the method of undetermined coefficients can
also be derived by differentiating and evaluating a suitably chosen
interpolation polynomial. But often, it is easier to visualize the desired
formula as a combination of certain function values and to then derive the
proper combination, as was done above for (6.64).
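As an illustration (this worked example is not in the notes), applying the same procedure to the one-sided form f''(t) ≈ A f(t + 2h) + B f(t + h) + C f(t) again gives A = C = 1/h², B = −2/h², but now the f''' coefficient does not cancel, so the error is only O(h) rather than O(h²). A short Python sketch compares the two forms for f(x) = cos(x):

```python
import math

def centered(f, t, h):   # (6.64): error is O(h^2)
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

def one_sided(f, t, h):  # one-sided second difference: error is O(h)
    return (f(t + 2 * h) - 2 * f(t + h) + f(t)) / h**2

t = math.pi / 6
exact = -math.cos(t)
hs = [0.1, 0.05, 0.025, 0.0125]

cen_err = [exact - centered(math.cos, t, h) for h in hs]
one_err = [exact - one_sided(math.cos, t, h) for h in hs]

# Halving h divides an O(h^2) error by ~4, but an O(h) error only by ~2.
cen_ratios = [cen_err[i] / cen_err[i + 1] for i in range(3)]
one_ratios = [one_err[i] / one_err[i + 1] for i in range(3)]
```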
> 5. Numerical Integration > 5.4.3 Effects of error in function evaluation

The formulae derived above are useful for differentiating functions that
are known analytically and for setting up numerical methods for solving
differential equations. Nonetheless, they are very sensitive to errors in
the function values, especially if these errors are not sufficiently small
compared with the stepsize h used in the differentiation formula.
To explore this, we analyze the effect of such errors in the formula
D_h^(2) f(t) approximating f''(t).
Rewrite (6.64), D_h^(2) f(t) = [f(t + h) − 2f(t) + f(t − h)]/h², as

    D_h^(2) f(x_1) = [f(x_2) − 2f(x_1) + f(x_0)] / h² ≈ f''(x_1)

where x_2 = x_1 + h, x_0 = x_1 − h. Let the actual function values used
in the computation be denoted by f̂_0, f̂_1, and f̂_2, with

    f(x_i) − f̂_i = ε_i,    i = 0, 1, 2

the errors in the function values.



Thus, the actual quantity calculated is

    D̂_h^(2) f(x_1) = (f̂_2 − 2f̂_1 + f̂_0) / h²

For the error in this quantity, replace f̂_j by f(x_j) − ε_j, j = 0, 1, 2, to
obtain

    f''(x_1) − D̂_h^(2) f(x_1)
      = f''(x_1) − {[f(x_2) − ε_2] − 2[f(x_1) − ε_1] + [f(x_0) − ε_0]} / h²
      = [f''(x_1) − (f(x_2) − 2f(x_1) + f(x_0))/h²] + (ε_2 − 2ε_1 + ε_0)/h²
      ≈ −(h²/12) f^(4)(x_1) + (ε_2 − 2ε_1 + ε_0)/h²     (6.66)

where the last step uses (6.65).


The errors ε_0, ε_1, ε_2 are generally random in some interval [−δ, δ].

If the values f̂_0, f̂_1, f̂_2 are experimental data, then δ is a bound on
the experimental error.
Also, if these function values f̂_i are obtained by computing
f(x) on a computer, then the errors ε_j are a combination of
rounding or chopping errors and δ is a bound on these errors.
In either case, (6.66)

    f''(x_1) − D̂_h^(2) f(x_1) ≈ −(h²/12) f^(4)(x_1) + (ε_2 − 2ε_1 + ε_0)/h²

yields the approximate inequality

    |f''(x_1) − D̂_h^(2) f(x_1)| ≤ (h²/12) |f^(4)(x_1)| + 4δ/h²     (6.67)

This error bound suggests that as h → 0, the error will eventually
increase, because of the final term 4δ/h².

Example
Calculate D̂_h^(2) f(x_1) for f(x) = cos(x) at x_1 = π/6. To show the effect of
rounding errors, the values f̂_i are obtained by rounding f(x_i) to six
significant digits; and the errors satisfy

    |ε_i| ≤ 5.0 × 10⁻⁷ = δ,    i = 0, 1, 2

Other than these rounding errors, the formula D̂_h^(2) f(x_1) is calculated
exactly. The results are
h            D̂_h^(2)(f)   Error
0.5          -0.848128    -0.017897
0.25         -0.861504    -0.004521
0.125        -0.864832    -0.001193
0.0625       -0.865536    -0.000489
0.03125      -0.865280    -0.000745
0.015625     -0.860160    -0.005865
0.0078125    -0.851968    -0.014057
0.00390625   -0.786432    -0.079593
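This experiment is easy to reproduce. The sketch below (rounding via Python string formatting, which may differ from the table's values in the last digit) shows the error first shrinking and then growing again as h decreases:

```python
import math

# Round function values to six significant digits before forming the
# second difference (6.64), mimicking the rounded data in the table.
def round6(x):
    return float(f"{x:.6g}")

def noisy_second_diff(t, h):
    f2 = round6(math.cos(t + h))
    f1 = round6(math.cos(t))
    f0 = round6(math.cos(t - h))
    return (f2 - 2 * f1 + f0) / h**2

t = math.pi / 6
exact = -math.cos(t)

# Error at a large, a moderate, and a very small stepsize.
errors = {h: exact - noisy_second_diff(t, h)
          for h in [0.5, 0.0625, 0.00390625]}
```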

In this example, the bound (6.67), i.e.,

    |f''(x_1) − D̂_h^(2) f(x_1)| ≤ (h²/12) |f^(4)(x_1)| + 4δ/h²,

becomes

    |f''(x_1) − D̂_h^(2) f(x_1)| ≤ (h²/12) cos(π/6) + (4/h²)(5 × 10⁻⁷)
                                 ≐ 0.0722 h² + (2 × 10⁻⁶)/h² ≡ E(h)

For h = 0.125, the bound E(h) ≐ 0.00126, which is not too far off
from the actual error given in the table.

The bound E(h) indicates that there is a smallest value of h, call it
h*, below which the error will begin to increase.
To find it, set E'(h) = 0, with its root being h*.
This leads to h* ≐ 0.0726, which is consistent with the behaviour of
the errors in the table: the error is smallest near h = 0.0625 and
grows as h decreases further.
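The optimal stepsize can be computed directly from E(h); a minimal sketch:

```python
# Minimize the error bound E(h) = c1*h^2 + c2/h^2 from the example above.
# Setting E'(h) = 2*c1*h - 2*c2/h**3 = 0 gives h* = (c2/c1)**0.25.
c1 = 0.0722   # coefficient of h^2: (1/12) * cos(pi/6)
c2 = 2e-6     # coefficient of 1/h^2: 4 * delta, with delta = 5e-7

h_star = (c2 / c1) ** 0.25
E_star = c1 * h_star**2 + c2 / h_star**2  # smallest achievable bound
```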

One must be very cautious in using numerical differentiation,
because of the sensitivity to errors in the function values.

This is especially true if the function values are obtained empirically
with relatively large experimental errors, as is common in practice. In
this latter case, one should probably use a carefully prepared package
program for numerical differentiation. Such programs take into
account the error in the data, attempting to find numerical derivatives
that are as accurate as can be justified by the data.

In the absence of such a program, one should consider producing a
cubic spline function that approximates the data, and then use its
derivative as a numerical derivative for the data. The cubic spline
function could be based on interpolation; or, better for data with
relatively large errors, construct a cubic spline that is a least squares
approximation to the data. The concept of least squares
approximation is introduced in Section 7.1.
