
Notes for Lecture 17 Mon, 10/30/2023

Working with functions: differentiation + integration

Numerical differentiation

We know from Calculus that f'(x) = lim_{h -> 0} [f(x + h) - f(x)] / h.

To numerically approximate f'(x), we could use f'(x) ≈ (1/h) [f(x + h) - f(x)] for small h.

In this section, we analyze this and other ways of numerically differentiating a function.
Application. These approximations are crucial for developing tools to numerically solve (partial) differential
equations by discretizing them.

Review. We can express Taylor's theorem (Theorem 52) in the following manner:

    f(x + h) = f(x) + f'(x) h + (1/2) f''(x) h^2 + ... + (1/n!) f^(n)(x) h^n + (1/(n+1)!) f^(n+1)(ξ) h^(n+1)

where the last term is the error of the Taylor polynomial.

This form is particularly convenient for the kind of error analysis that we are doing here.
Important notation. When the exact form of the error is not so important, we simply write O(h^(n+1)) and say
that the error is of order n + 1.

Definition 110. We write e(h) = O(h^n) if there is a constant C such that |e(h)| ≤ C h^n for all
small enough h.
For our purposes, e(h) is usually an error term and this notation allows us to talk about that error without being
more precise than necessary.
If e(h) is the error, then we often say that an approximation is of order n if e(h) = O(h^n).
Caution. This notion of order is different from the order of convergence that we discussed in the context of
fixed-point iteration and Newton's method.

Example 111. Determine the order of the approximation f'(x) ≈ (1/h) [f(x + h) - f(x)].
Comment. This approximation of the derivative is called a (first) forward difference for f'(x).
Likewise, f'(x) ≈ (1/h) [f(x) - f(x - h)] is a (first) backward difference for f'(x).

Solution. By Taylor's theorem, f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(x) + O(h^4). It follows that

    (1/h) [f(x + h) - f(x)] = f'(x) + (h/2) f''(x) + O(h^2) = f'(x) + O(h).

Hence, the error is of order 1.
Comment. The presence of the term (h/2) f''(x) tells us that the order is exactly 1 unless f''(x) = 0 (that is, the
error is not generally O(h^a) for any a > 1).
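As a quick numerical sanity check (not part of the derivation above; the test function exp and the point x = 0 are our own choices), the following Python sketch estimates the order empirically:

```python
from math import exp

def forward_difference(f, x, h):
    """First forward difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# f(x) = exp(x) at x = 0, where f'(0) = 1 exactly
errors = [abs(forward_difference(exp, 0, 10**-n) - 1) for n in range(1, 5)]

# Since the error behaves like C*h, shrinking h by a factor of 10
# should shrink the error by a factor of about 10.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```

Each ratio should come out close to 1/10, confirming order 1.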

Armin Straub 68
[email protected]
Example 112. Determine the order of the approximation f'(x) ≈ (1/(2h)) [f(x + h) - f(x - h)].
Comment. This approximation of the derivative is called a (first) central difference for f'(x).
Solution. By Taylor's theorem (Theorem 52),

    f(x + h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(x) + (h^4/24) f^(4)(x) + O(h^5),   (2)
    f(x - h) = f(x) - h f'(x) + (h^2/2) f''(x) - (h^3/6) f'''(x) + (h^4/24) f^(4)(x) + O(h^5).

(Note that the second formula just has h replaced with -h.) Subtracting the second from the first, we obtain

    (1/(2h)) [f(x + h) - f(x - h)] = f'(x) + (h^2/6) f'''(x) + O(h^3) = f'(x) + O(h^2).

Hence, the error is of order 2.
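The order-2 behavior can also be seen numerically; in this sketch (test function and point again our own choices), shrinking h by 10 should shrink the error by about 100:

```python
from math import exp

def central_difference(f, x, h):
    """First central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = exp(x) at x = 0, where f'(0) = 1 exactly
errors = [abs(central_difference(exp, 0, 10**-n) - 1) for n in range(1, 4)]

# Since the error behaves like C*h^2, shrinking h by a factor of 10
# should shrink the error by a factor of about 100.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```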

Example 113. Use both forward and central differences to approximate f'(x) for f(x) = x^2.
Solution. We get (1/h) [f(x + h) - f(x)] = 2x + h and (1/(2h)) [f(x + h) - f(x - h)] = 2x.
Comment. In the forward difference case, the error is of order 1 (also note that (h/2) f''(x) = h). In the central
difference case, we find that we get f'(x) without error. In hindsight, with the error formulas in mind, this is
not a surprise and reflects the fact that f'''(x) = 0.

Example 114. Use both forward and central differences to approximate f'(2) for f(x) = 1/x.
Solution. (Since f'(x) = -1/x^2, the exact value is f'(2) = -1/4.) In each case, we use h = 1/10 and h = 1/20.

• h = 1/10: (1/h) [f(x + h) - f(x)] = -5/21 ≈ -0.2381, error 0.0119
  h = 1/20: (1/h) [f(x + h) - f(x)] = -10/41 ≈ -0.2439, error 0.0061 (reduced by about 1/2)

• h = 1/10: (1/(2h)) [f(x + h) - f(x - h)] = -100/399 ≈ -0.25063, error -0.00063
  h = 1/20: (1/(2h)) [f(x + h) - f(x - h)] = -400/1599 ≈ -0.25016, error -0.00016 (reduced by about 1/4)

Important comment. The forward difference has an error of order 1. In other words, for small h, it should
behave like Ch. In particular, if we replace h by h/2, then the error should shrink by a factor of about 1/2 (as
we saw above). On the other hand, the central difference has an error of order 2 and so should behave like Ch^2.
In particular, if we replace h by h/2, then the error should shrink by a factor of about 1/2^2 = 1/4 (and, again,
this is what we saw above).
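The computation in Example 114 can be reproduced with a few lines of Python (a sketch; the helper names are ours):

```python
def f(x):
    return 1 / x

def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

def central_difference(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

exact = -0.25  # f'(2) = -1/4

# absolute errors for h = 1/10 and h = 1/20
fwd_errors = [abs(forward_difference(f, 2, h) - exact) for h in (0.1, 0.05)]
ctr_errors = [abs(central_difference(f, 2, h) - exact) for h in (0.1, 0.05)]

# halving h roughly halves the forward-difference error (order 1)
# and roughly quarters the central-difference error (order 2)
```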

Example 115. Find a central difference for f''(x) and determine the order of the error.
Solution. Adding the two expansions in (2) to kill the f'(x) terms, and subtracting 2f(x), we find that

    (1/h^2) [f(x + h) - 2f(x) + f(x - h)] = f''(x) + (h^2/12) f^(4)(x) + O(h^3) = f''(x) + O(h^2).

The error is of order 2.
Alternatively. If we iterate the approximation f'(x) ≈ (1/(2h)) [f(x + h) - f(x - h)] (in the second step, we apply
it with x replaced by x ± h), we obtain

    f''(x) ≈ (1/(2h)) [f'(x + h) - f'(x - h)] ≈ (1/(4h^2)) [f(x + 2h) - 2f(x) + f(x - 2h)],

which is the same as what we found above, just with h replaced by 2h.
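This second-derivative formula is easy to try out; in the sketch below (the cubic test function is our own choice), f^(4)(x) = 0, so the leading error term vanishes and the approximation is exact up to roundoff:

```python
def second_difference(f, x, h):
    """Central difference approximation of f''(x), error O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# For f(x) = x^3 we have f^(4)(x) = 0, so (up to roundoff) the
# approximation reproduces f''(1) = 6 exactly.
approx = second_difference(lambda x: x**3, 1, 0.01)
```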

Example 116. Obtain approximations for f'(x) and f''(x) using the values f(x), f(x + h),
f(x + 2h) as follows: determine the polynomial interpolation corresponding to these values and
then use its derivatives to approximate those of f. In each case, determine the order of the
approximation and the leading term of the error.
Solution. We first compute the polynomial p(t) that interpolates the three points (x, f(x)), (x + h, f(x + h)),
(x + 2h, f(x + 2h)) using Newton's divided differences:

    f[ ]                 f[ , ]                            f[ , , ]

    x        f(x)
                         [f(x + h) - f(x)] / h =: c1
    x + h    f(x + h)                                      [f(x + 2h) - 2f(x + h) + f(x)] / (2h^2) =: c2
                         [f(x + 2h) - f(x + h)] / h
    x + 2h   f(x + 2h)

Hence, reading the coefficients from the top edge of the triangle, the interpolating polynomial is

    p(t) = f(x) + c1 (t - x) + c2 (t - x)(t - x - h).

• (approximating f'(x)) Since p'(t) = c1 + c2 (2t - 2x - h), we have

    p'(x) = c1 - h c2 = [f(x + h) - f(x)] / h - [f(x + 2h) - 2f(x + h) + f(x)] / (2h)
          = [-f(x + 2h) + 4f(x + h) - 3f(x)] / (2h).

This is our approximation for f'(x). To determine the order and the error, we combine

    f(x + h) = f(x) + f'(x) h + (1/2) f''(x) h^2 + (f'''(x)/6) h^3 + O(h^4),
    f(x + 2h) = f(x) + 2 f'(x) h + 2 f''(x) h^2 + (4 f'''(x)/3) h^3 + O(h^4)

(note that the latter is just the former with h replaced by 2h) to find

    -f(x + 2h) + 4f(x + h) - 3f(x) = 2 f'(x) h - (2 f'''(x)/3) h^3 + O(h^4).

Hence, dividing by 2h, we conclude that

    [-f(x + 2h) + 4f(x + h) - 3f(x)] / (2h) = f'(x) - (f'''(x)/3) h^2 + O(h^3).

Consequently, the approximation is of order 2.
• (approximating f''(x)) Since p''(t) = 2 c2, we have p''(x) = 2 c2 = [f(x + 2h) - 2f(x + h) + f(x)] / h^2.
This is our approximation for f''(x). To determine the order and the error, we proceed as before to find

    f(x + 2h) - 2f(x + h) + f(x) = f''(x) h^2 + f'''(x) h^3 + O(h^4).

Hence, dividing by h^2, we conclude that

    [f(x + 2h) - 2f(x + h) + f(x)] / h^2 = f''(x) + f'''(x) h + O(h^2).

Consequently, the approximation is of order 1.
Comment. Alternatively, can you derive these approximations by combining f(x) with the Taylor expansions of
f(x + h) and f(x + 2h)? As a third way of producing such approximations, we will soon see that the present order
2 approximation of f'(x) can be obtained by applying Richardson extrapolation to f'(x) ≈ (1/h) [f(x + h) - f(x)].
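Both formulas from this example can be checked numerically. The sketch below (the test function exp and the point x = 0 are our own choices) confirms that the f' formula is order 2 while the f'' formula is only order 1:

```python
from math import exp

def d1_threepoint(f, x, h):
    # order-2 approximation of f'(x) from f(x), f(x+h), f(x+2h)
    return (-f(x + 2*h) + 4*f(x + h) - 3*f(x)) / (2*h)

def d2_threepoint(f, x, h):
    # order-1 approximation of f''(x) from f(x), f(x+h), f(x+2h)
    return (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2

# f(x) = exp(x) at x = 0: f'(0) = f''(0) = 1
e1 = [abs(d1_threepoint(exp, 0, h) - 1) for h in (0.1, 0.01)]
e2 = [abs(d2_threepoint(exp, 0, h) - 1) for h in (0.1, 0.01)]

# shrinking h by 10 shrinks e1 by ~100 (order 2) but e2 only by ~10 (order 1)
```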

Example 117. (homework) Obtain approximations for f'(x) and f''(x) using the values f(x - 2h),
f(x), f(x + 3h) as follows: determine the polynomial interpolation corresponding to these
values and then use its derivatives to approximate those of f. In each case, determine the order
of the approximation and the leading term of the error.
Solution. We first compute the polynomial p(t) that interpolates the three points (x - 2h, f(x - 2h)), (x, f(x)),
(x + 3h, f(x + 3h)) using Newton's divided differences:

    f[ ]                  f[ , ]                             f[ , , ]

    x - 2h   f(x - 2h)
                          [f(x) - f(x - 2h)] / (2h) =: c1
    x        f(x)                                            [2f(x + 3h) - 5f(x) + 3f(x - 2h)] / (30h^2) =: c2
                          [f(x + 3h) - f(x)] / (3h)
    x + 3h   f(x + 3h)

Hence, reading the coefficients from the top edge of the triangle, the interpolating polynomial is

    p(t) = f(x - 2h) + c1 (t - x + 2h) + c2 (t - x + 2h)(t - x).

• (approximating f'(x)) Since p'(t) = c1 + c2 (2t - 2x + 2h), we have

    p'(x) = c1 + 2h c2 = [f(x) - f(x - 2h)] / (2h) + [2f(x + 3h) - 5f(x) + 3f(x - 2h)] / (15h)
          = [4f(x + 3h) + 5f(x) - 9f(x - 2h)] / (30h).

This is our approximation for f'(x). To determine the order and the error, we combine

    f(x + h) = f(x) + f'(x) h + (1/2) f''(x) h^2 + (f'''(x)/6) h^3 + O(h^4),
    f(x - 2h) = f(x) - 2 f'(x) h + 2 f''(x) h^2 - (4 f'''(x)/3) h^3 + O(h^4),
    f(x + 3h) = f(x) + 3 f'(x) h + (9/2) f''(x) h^2 + (9 f'''(x)/2) h^3 + O(h^4)

(the last two are the first with h replaced by -2h and 3h, respectively) to find

    4f(x + 3h) + 5f(x) - 9f(x - 2h) = 30 f'(x) h + 30 f'''(x) h^3 + O(h^4).

Hence, dividing by 30h, we conclude that

    [4f(x + 3h) + 5f(x) - 9f(x - 2h)] / (30h) = f'(x) + f'''(x) h^2 + O(h^3).

Consequently, the approximation is of order 2.
• (approximating f''(x)) Since p''(t) = 2 c2, we have p''(x) = 2 c2 = [2f(x + 3h) - 5f(x) + 3f(x - 2h)] / (15h^2).
This is our approximation for f''(x). To determine the order and the error, we proceed as before to find

    2f(x + 3h) - 5f(x) + 3f(x - 2h) = 15 f''(x) h^2 + 5 f'''(x) h^3 + O(h^4).

Hence, dividing by 15h^2, we conclude that

    [2f(x + 3h) - 5f(x) + 3f(x - 2h)] / (15h^2) = f''(x) + (f'''(x)/3) h + O(h^2).

Consequently, the approximation is of order 1.
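As with Example 116, the claimed orders can be verified numerically (the test function exp and the point x = 0 are our own choices):

```python
from math import exp

def d1_unsym(f, x, h):
    # order-2 approximation of f'(x) from f(x-2h), f(x), f(x+3h)
    return (4*f(x + 3*h) + 5*f(x) - 9*f(x - 2*h)) / (30*h)

def d2_unsym(f, x, h):
    # order-1 approximation of f''(x) from f(x-2h), f(x), f(x+3h)
    return (2*f(x + 3*h) - 5*f(x) + 3*f(x - 2*h)) / (15*h**2)

# f(x) = exp(x) at x = 0: f'(0) = f''(0) = 1
e1 = [abs(d1_unsym(exp, 0, h) - 1) for h in (0.1, 0.01)]
e2 = [abs(d2_unsym(exp, 0, h) - 1) for h in (0.1, 0.01)]

# shrinking h by 10 shrinks e1 by ~100 (order 2) but e2 only by ~10 (order 1)
```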

Example 118. (Python) Let us see how the forward and central differences compare in practice.
>>> def forward_difference(f, x, h):
...     return (f(x+h)-f(x))/h
>>> def central_difference(f, x, h):
...     return (f(x+h)-f(x-h))/(2*h)

We apply these to f(x) = 2^x at x = 1. In that case, the exact derivative is f'(1) = 2 ln(2) ≈ 1.386.
>>> def f(x):
...     return 2**x
>>> [forward_difference(f, 1, 10**-n) for n in range(5)]

[2.0, 1.4354692507258626, 1.3911100113437769, 1.3867749251610384, 1.3863424075299946]

>>> [central_difference(f, 1, 10**-n) for n in range(5)]

[1.5, 1.3874047099948572, 1.3863054619682957, 1.3862944721280135, 1.3862943622289237]


It is probably easier to see what happens to the error if we subtract the true value from these
approximations:
>>> from math import log
>>> [forward_difference(f, 1, 10**-n) - 2*log(2) for n in range(12)]

[0.6137056388801094, 0.04917488960597205, 0.004815650223886303, 0.00048056404114782403,
4.80464101040301e-05, 4.804564444071957e-06, 4.80467807983942e-07, 4.703673917028084e-08,
7.068710283775204e-09, 2.7352223619381277e-07, 1.161700655893938e-06, 1.8925269049896443e-05]

>>> [central_difference(f, 1, 10**-n) - 2*log(2) for n in range(12)]

[0.11370563888010943, 0.001110348874966638, 1.1100848405165564e-05, 1.1100812291608975e-07,
1.1090330875873633e-09, 1.879385536085465e-11, 7.43052286367174e-11, -7.028508886008922e-10,
7.068710283775204e-09, 1.6249993373129712e-07, 1.161700655893938e-06, 7.823038803644877e-06]
For the forward difference, we can see how the error initially decreases roughly by a factor of 1/10 per step, as
expected. Likewise, for the central difference, the error initially decreases roughly by a factor of 1/10^2 (order 2).
However, in both cases, the errors end up increasing after a while, well before getting close to
machine precision. We discuss this in the next section.
Note how, for the forward difference, our best approximation has error 7.07 × 10^-9 while, for the central difference,
our best approximation has error 1.88 × 10^-11. While the latter is an improvement, either is worryingly large.
