Error Analysis (SR.02)
ROUNDOFF ERRORS
• Roundoff errors:
➢ arise because digital computers cannot represent some quantities exactly.
➢ are directly related to the manner in which numbers are stored in a computer.
➢ can lead to erroneous results when a calculation is unstable; a problem whose result is highly sensitive to such errors is said to be ill-conditioned.
• In general, a floating-point number is stored as ±s × b^e
where
s = the significand (or mantissa), b = the base of the number system, and e = the exponent.
• The number is normalized by moving the decimal place over so that only one
significant digit is to the left of the decimal point.
• This is done so computer memory is not wasted on storing useless non significant zeros.
For example, a value like 0.005678 could be represented in a wasteful manner as 0.005678 × 10^0. However, normalization would yield 5.678 × 10^−3, which eliminates the useless zeroes.
COMPUTER NUMBER REPRESENTATION
~ FLOATING-POINT REPRESENTATION ~
• In binary, the significand is normalized to the form 1 + f, where f = the mantissa (i.e., the fractional part of the significand). For example, if we normalize the binary number 1101.1, the result is 1.1011 × 2^3, or (1 + 0.1011) × 2^3.
• Thus, although the original number has five significant bits, we only have
to store the four fractional bits: 0.1011
• Now, just as in Example 4.2, this means that the numbers will have a limited
range and precision.
• By default, MATLAB has adopted the IEEE double-precision format in which
eight bytes (64 bits) are used to represent floating-point numbers.
Range:
• The smallest positive normalized number can be represented in binary as 1.0000 . . . 0000 × 2^−1022.
• This value can be translated into a base-10 value of 2^−1022 = 2.2251 × 10^−308.
• Numbers that are smaller than this value create an underflow and, in
MATLAB, are set to zero.
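These range limits can be checked directly. A minimal sketch in Python, which uses the same IEEE 64-bit double format as MATLAB (here `sys.float_info` plays the role of MATLAB's realmax and realmin):

```python
import sys

# IEEE double-precision limits; Python uses the same 64-bit format as MATLAB.
print(sys.float_info.max)   # largest finite double, about 1.7977e308
print(sys.float_info.min)   # smallest positive normalized double, 2.2251e-308

# Exceeding the largest representable value overflows to infinity ...
print(sys.float_info.max * 2)        # inf
# ... while going far enough below the smallest one underflows to zero.
print(sys.float_info.min / 2**53)    # 0.0
```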
➢ The epsilon of the machine (short: eps) is the spacing between 1 and the next larger floating-point number; it characterizes the smallest relative difference that a floating-point arithmetic program like MATLAB can distinguish between two numbers x and y. For IEEE double precision, eps = 2^−52 ≈ 2.2204 × 10^−16.
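The effect of eps can be demonstrated in a few lines. A Python sketch (Python's `sys.float_info.epsilon` has the same value as MATLAB's eps):

```python
import sys

eps = sys.float_info.epsilon   # 2**-52, the same value MATLAB's eps reports
print(eps)                     # 2.220446049250313e-16

# 1 + eps/2 is not representable and rounds back down to exactly 1 ...
print(1.0 + eps / 2 == 1.0)    # True
# ... while 1 + eps is the next floating-point number above 1.
print(1.0 + eps > 1.0)         # True
```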
ARITHMETIC MANIPULATIONS OF
COMPUTER NUMBERS – ROUND OFF ERROR
• When two floating-point numbers are added, the numbers are first expressed
so that they have the same exponents.
• For example, if we want to add 1.557 + 0.04341, the computer would express
the numbers as 0.1557 × 10^1 + 0.004341 × 10^1.
• Then the mantissas are added to give 0.160041 × 10^1.
• Now, because this hypothetical computer only carries a 4-digit mantissa, the
excess digits are chopped off and the result is 0.1600 × 10^1.
• Notice how the last two digits of the second number (41) that were shifted to
the right have essentially been lost from the computation.
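The hypothetical 4-digit machine above can be mimicked with a small helper. A sketch in Python (the `chop` function and its digit count are illustrative inventions, not part of the text):

```python
import math

def chop(x, digits=4):
    """Truncate x to `digits` significant decimal digits, mimicking the
    hypothetical 4-digit-mantissa machine in the text (illustrative only)."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log10(abs(x))) + 1   # position of the leading digit
    scale = 10.0 ** (digits - e)
    return math.trunc(x * scale) / scale

# 1.557 + 0.04341: the exact sum 1.60041 is chopped to 4 digits,
# so the trailing "41" contributed by the smaller number is lost.
print(chop(1.557 + 0.04341))   # 1.6
```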
ARITHMETIC MANIPULATIONS OF
COMPUTER NUMBERS
• Subtraction is performed identically to addition except that the sign of the
subtrahend is reversed.
• For example, suppose that we are subtracting 26.86 from 36.41. That is,
0.3641 × 10^2 − 0.2686 × 10^2 = 0.0955 × 10^2
• For this case the result must be normalized because the leading zero is unnecessary.
• So we must shift the decimal one place to the right to give 0.9550 × 10^1 = 9.550.
• Notice that the zero added to the end of the mantissa is not significant but is
merely appended to fill the empty space created by the shift.
• Even more dramatic results are obtained when the two numbers being subtracted are very close: most of the leading digits cancel and few significant digits remain, a loss known as subtractive cancellation.
• The format long command lets us see the 15 significant-digit representation used
by MATLAB.
• For example, if 0.0001 is added to itself 10,000 times, you would expect the sum to equal 1. However, although 0.0001 is a nice round number in base-10, it cannot be expressed exactly in base-2.
• Thus, the sum comes out slightly different from 1.
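The same experiment can be reproduced in Python, which uses the identical IEEE double arithmetic:

```python
# Summing 0.0001 ten thousand times: the exact answer is 1, but 0.0001
# has no finite binary representation, so roundoff accumulates.
s = 0.0
for _ in range(10000):
    s += 0.0001

print(s)          # slightly off from 1
print(s == 1.0)   # False
```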
ARITHMETIC MANIPULATIONS OF COMPUTER
NUMBERS ~ LARGE COMPUTATIONS ~
• We should note that MATLAB has features that are designed to minimize such
errors. For example, suppose that you form a vector as in
>> format long
>> s = [0:0.0001:1];
• For this case, rather than being equal to 0.99999999999991, the last entry will be
exactly one as verified by
>> s(10001)
ans =
1
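NumPy's linspace behaves analogously to the MATLAB vector above: the endpoint is assigned directly rather than accumulated step by step, so no roundoff creeps into the last entry. A sketch:

```python
import numpy as np

# Like the MATLAB colon expression above, np.linspace assigns the endpoint
# directly instead of accumulating steps, so the final entry is exactly 1.
s = np.linspace(0.0, 1.0, 10001)   # 10001 points, spacing 0.0001
print(s[10000])         # 1.0
print(s[10000] == 1.0)  # True
```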
ARITHMETIC MANIPULATIONS OF COMPUTER
NUMBERS
~ ADDING A LARGE AND A SMALL NUMBER ~
• Suppose we add a small number, 0.0010, to a large number, 4000, using a
hypothetical computer with a 4-digit mantissa and a 1-digit exponent.
• After modifying the smaller number so that its exponent matches the larger,
0.4000 × 10^4 + 0.0000001 × 10^4 = 0.4000001 × 10^4
which is chopped to 0.4000 × 10^4. Thus, the small number is lost entirely in the addition.
Smearing.
• Smearing occurs whenever the individual terms in a summation are larger
than the summation itself.
• One case where this occurs is in a series of mixed signs.
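A classic illustration of smearing is evaluating e^x for a large negative argument with the naive Taylor series: the individual terms are enormous compared with the tiny result, so their roundoff swamps the answer. A hedged sketch in Python (x = −20 and 100 terms are arbitrary illustrative choices):

```python
import math

def exp_series(x, n_terms=100):
    """Naive Taylor series for e**x: sum of x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = -20.0
direct = exp_series(x)                 # alternating series: huge terms cancel
via_reciprocal = 1.0 / exp_series(-x)  # all-positive series: well behaved
exact = math.exp(x)

print(direct)          # ruined by smearing
print(via_reciprocal)  # agrees with math.exp(-20) to many digits
```

Reformulating to avoid the mixed-sign sum (computing e^20 and taking the reciprocal) is exactly the kind of rearrangement recommended later for avoiding subtractive cancellation.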
Inner products.
• As should be clear from the last sections, some infinite series are particularly prone
to roundoff error. A far more ubiquitous manipulation is the calculation of inner
products, as in
Σ(i = 1 to n) xi yi = x1 y1 + x2 y2 + · · · + xn yn
• Truncation errors are those that result from using an approximation in place of an
exact mathematical procedure.
• For example, in a previous chapter we approximated the derivative of the velocity of a
bungee jumper by a finite-difference equation of the form [Eq. (1.11)]
dυ/dt ≅ Δυ/Δt = (υ(ti+1) − υ(ti)) / (ti+1 − ti)
• To gain insight into the properties of such errors, we now turn to a mathematical
formulation that is used widely in numerical methods to express functions in an
approximate fashion—the Taylor series.
THE TAYLOR SERIES
• Taylor’s theorem and its associated formula, the Taylor series, is of great value
in the study of numerical methods.
• In essence, the Taylor theorem states that any smooth function can be
approximated as a polynomial.
• The Taylor series then provides a means to express this idea mathematically
in a form that can be used to generate practical results.
• A good problem context for this exercise is to predict a function value at one
point in terms of the function value and its derivatives at another point.
• We’ll call your horizontal location xi and your vertical distance with respect to
the base of the hill f (xi).
• You are given the task of predicting the height at a position xi+1, which is a
distance h away from you.
• The best guess would be the same height as where you’re standing now! You
could express this prediction mathematically as
f(xi+1) ≅ f(xi)
• So now you are allowed to get off the platform and stand on the hill surface
with one leg positioned in front of you and the other behind.
• You immediately sense that the front foot is lower than the back foot.
• With this additional information, you’re clearly in a better position to predict
the height at f (xi+1).
• In essence, you use the slope estimate to project a straight line out to xi+1.
• You can express this prediction mathematically by
f(xi+1) ≅ f(xi) + f′(xi) h
• So now you are allowed to stand on the hill surface and take two measurements.
• First, you measure the slope behind you at a distance of Δx:
Let’s call this slope f ʹb (xi).
• Then you measure the slope in front of you at a distance of Δx:
Let’s call this slope f ʹf (xi).
• You immediately recognize that the slope behind is milder than the one in front.
Clearly the drop in height is “accelerating” downward in front of you.
• As you might expect, you’re now going to add a second-order term to your
equation and make it into a parabola:
f(xi+1) ≅ f(xi) + f′(xi) h + [f″(xi)/2!] h^2
• To make use of this formula, you need an estimate of the second derivative.
• You can use the last two slopes you determined to estimate it as
f″(xi) ≅ (f′f(xi) − f′b(xi)) / Δx
• Recognize that all the values subscripted i represent values that you have
estimated. That is, they are numbers.
• Consequently, the only unknowns are the values at the prediction position xi+1.
Thus, it is a quadratic equation of the form
f(xi+1) ≅ a2 h^2 + a1 h + a0
where the coefficients a2, a1, and a0 are known numbers.
• Thus, we can see that the second-order Taylor series approximates the
function with a second-order polynomial.
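The improvement from zero- to second-order prediction can be checked numerically. A sketch in Python, using f(x) = e^x as a stand-in smooth function (convenient because every derivative is also e^x):

```python
import math

# Zero-, first-, and second-order Taylor predictions of f(x) = e**x,
# stepping from xi = 0 to xi+1 = 0.5 (h = 0.5). All derivatives at 0 equal 1.
f = math.exp
xi, h = 0.0, 0.5
exact = f(xi + h)

order0 = f(xi)                       # f(xi)
order1 = order0 + f(xi) * h          # + f'(xi) h
order2 = order1 + f(xi) / 2 * h**2   # + f''(xi)/2! h^2

for name, approx in (("0th", order0), ("1st", order1), ("2nd", order2)):
    print(name, approx, abs(exact - approx))
# Each added term shrinks the prediction error.
```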
• Carrying the expansion to an infinite number of terms gives the complete Taylor series [Eq. (4.13)]:
f(xi+1) = f(xi) + f′(xi) h + [f″(xi)/2!] h^2 + [f‴(xi)/3!] h^3 + · · · + [f(n)(xi)/n!] h^n + Rn
• Note that because Eq. (4.13) is an infinite series, an equal sign replaces the
approximate sign that was used in Eqs. (4.9) through (4.11).
• A remainder term is also included to account for all terms from n + 1 to infinity:
Rn = [f(n+1)(ξ) / (n + 1)!] h^(n+1)     (4.14)
• where the subscript n connotes that this is the remainder for the nth-order
approximation and ξ is a value of x that lies somewhere between xi and xi+1.
• We can now see why the Taylor theorem states that any smooth function can be
approximated as a polynomial and that the Taylor series provides a means to express
this idea mathematically.
• The assessment of how many terms are required to get “close enough” is
based on the remainder term of the expansion (Eq. 4.14).
• This relationship has two major drawbacks.
• First, ξ is not known exactly but merely lies somewhere between xi and xi+1.
• Second, to evaluate Eq. (4.14), we need to determine the (n + 1)th derivative of f(x).
• Eq. (4.14) is useful for gaining insight into truncation errors.
• This is because we do have control over the term h in the equation.
• Consequently, Eq. (4.14) is often expressed as
Rn = O(h^(n+1))
• where the nomenclature O(h^(n+1)) means that the truncation error is of the order
of h^(n+1).
• For example,
• if the error is O(h), halving the step size will halve the error.
• on the other hand, if the error is O(h2), halving the step size will quarter the error.
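This scaling is easy to verify numerically. The sketch below compares a forward difference, whose error is O(h), with a centered difference, whose error is O(h^2), for f(x) = sin x (an arbitrary smooth test function):

```python
import math

f, dfdx = math.sin, math.cos   # test function with known exact derivative
x = 1.0

def forward(h):   # O(h) approximation of f'(x)
    return (f(x + h) - f(x)) / h

def centered(h):  # O(h^2) approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for h in (0.1, 0.05):
    print(h, abs(forward(h) - dfdx(x)), abs(centered(h) - dfdx(x)))
# Halving h roughly halves the forward-difference error (O(h)),
# but quarters the centered-difference error (O(h^2)).
```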
THE REMAINDER FOR
THE TAYLOR SERIES EXPANSION
• Suppose that we truncated the Taylor series expansion [Eq. (4.13)] after the
zero-order term to yield
f(xi+1) ≅ f(xi)
with the remainder [Eq. (4.17)]
R0 = f′(ξ) h
• For this case, the value of ξ conforms to the x value of the first
derivative that makes Eq. (4.17) exact.
• Similar higher-order versions can be developed from Eq. (4.14).
USING THE TAYLOR SERIES TO
ESTIMATE TRUNCATION ERRORS
• Although the Taylor series will be extremely useful in estimating truncation
errors, it may not be clear to you how the expansion can actually be applied
to numerical methods.
• In fact, we have already done so in our example of the bungee jumper.
• Recall that the objective of both Examples 1.1 and 1.2 was to predict velocity
as a function of time.
• That is, we were interested in determining υ(t).
• As specified by Eq. (4.13), υ(t) can be expanded in a Taylor series:
υ(ti+1) = υ(ti) + υ′(ti)(ti+1 − ti) + [υ″(ti)/2!](ti+1 − ti)^2 + · · · + Rn
• Now let us truncate the series after the first derivative term:
υ(ti+1) = υ(ti) + υ′(ti)(ti+1 − ti) + R1     (4.18)
• Equation (4.18) can be solved for
υ′(ti) = (υ(ti+1) − υ(ti)) / (ti+1 − ti) − R1 / (ti+1 − ti)     (4.19)
• The first part of Eq. (4.19) is exactly the same relationship that was used to
approximate the derivative in Example 1.2 [Eq. (1.11)].
• However, because of the Taylor series approach, we have now obtained an estimate of the
truncation error associated with this approximation of the derivative. Using Eqs. (4.14) and
(4.19) yields
R1 / (ti+1 − ti) = [υ″(ξ)/2](ti+1 − ti) = O(ti+1 − ti)
• Thus, the estimate of the derivative [Eq. (1.11) or the first part of Eq. (4.19)] has a
truncation error of order ti+1 − ti.
• In other words, the error of our derivative approximation should be proportional to the
step size. Consequently, if we halve the step size, we would expect to halve the error of the
derivative.
NUMERICAL DIFFERENTIATION
• Generalizing Eq. (4.19) to any function f gives the first forward difference
f′(xi) = (f(xi+1) − f(xi)) / h + O(h)
• where h is called the step size—that is, the length of the interval over which the
approximation is made, xi+1 − xi. It is termed a “forward” difference because it
utilizes data at i and i + 1 to estimate the derivative (Fig. 4.10a).
• This forward difference is but one of many that can be developed from the Taylor series
to approximate derivatives numerically.
• For example, backward and centered difference approximations of the first derivative
can be developed in a fashion similar to the derivation of Eq. (4.19).
• The former utilizes values at xi−1 and xi (Fig. 4.10b), whereas the latter uses values that are
equally spaced around the point at which the derivative is estimated (Fig. 4.10c).
• More accurate approximations of the first derivative can be developed by including
higher-order terms of the Taylor series.
• Finally, all the foregoing versions can also be developed for second, third, and higher
derivatives.
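A quick numerical comparison of the three first-derivative formulas, sketched in Python with f(x) = e^x (chosen because the exact derivative is known):

```python
import math

f = math.exp
x, h = 1.0, 0.1
exact = math.exp(1.0)       # d/dx e**x = e**x

fwd = (f(x + h) - f(x)) / h            # forward: uses points i, i+1
bwd = (f(x) - f(x - h)) / h            # backward: uses points i-1, i
ctr = (f(x + h) - f(x - h)) / (2 * h)  # centered: uses points i-1, i+1

for name, approx in (("forward", fwd), ("backward", bwd), ("centered", ctr)):
    print(name, approx, abs(approx - exact))
# The centered difference is markedly more accurate than either one-sided form.
```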
BACKWARD DIFFERENCE APPROXIMATION
OF THE FIRST DERIVATIVE
• The Taylor series can be expanded backward to calculate a previous value on the
basis of a present value, as in
f(xi−1) = f(xi) − f′(xi) h + [f″(xi)/2!] h^2 − · · ·
• Truncating this equation after the first derivative and rearranging yields the first
backward difference
f′(xi) ≅ (f(xi) − f(xi−1)) / h
CENTERED DIFFERENCE APPROXIMATION
OF THE FIRST DERIVATIVE
• Subtracting the backward Taylor series expansion from the forward expansion
gives a result which can be solved for
f′(xi) = (f(xi+1) − f(xi−1)) / (2h) − O(h^2)
• Equation (4.24), the forward expansion for f(xi+1), can be multiplied by 2 and
subtracted from Eq. (4.26), the expansion for f(xi+2), to give
f(xi+2) − 2 f(xi+1) = −f(xi) + f″(xi) h^2 + · · ·
FINITE-DIFFERENCE APPROXIMATIONS OF
HIGHER DERIVATIVES
• which can be solved for the second forward finite difference
f″(xi) = (f(xi+2) − 2 f(xi+1) + f(xi)) / h^2 + O(h)
• The centered version, f″(xi) = (f(xi+1) − 2 f(xi) + f(xi−1)) / h^2 + O(h^2), is
obtained by adding the forward and backward expansions.
• As was the case with the first-derivative approximations, the centered case is more
accurate.
• Notice also that the centered version can be alternatively expressed as
f″(xi) ≅ [ (f(xi+1) − f(xi)) / h − (f(xi) − f(xi−1)) / h ] / h
• Thus, just as the second derivative is a derivative of a derivative, the second finite
difference approximation is a difference of two first finite differences [recall Eq.
(4.12)].
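The equivalence of the two forms of the centered second difference can be checked numerically; the sketch below evaluates both for f(x) = sin x (an illustrative test function with known second derivative):

```python
import math

f = math.sin
x, h = 0.8, 1e-3

# Centered second finite difference ...
d2_direct = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
# ... rewritten as a difference of two first finite differences.
d2_nested = ((f(x + h) - f(x)) / h - (f(x) - f(x - h)) / h) / h

print(d2_direct, d2_nested, -math.sin(x))  # both approximate f''(x) = -sin(x)
```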
TOTAL NUMERICAL ERROR
• The total numerical error is the summation of the truncation and roundoff errors.
• In general, the only way to minimize roundoff errors is to increase the number of
significant figures of the computer.
• Further, we have noted that roundoff error may increase due to subtractive
cancellation or due to an increase in the number of computations in an analysis.
• In contrast, Example 4.4 demonstrated that the truncation error can be reduced by
decreasing the step size.
• Because a decrease in step size can lead to subtractive cancellation or to an increase
in the number of computations, we face a trade-off: as the truncation error is decreased,
the roundoff error is increased.
• When using MATLAB, such situations are relatively uncommon because of its 15- to
16- digit precision.
• Nevertheless, they sometimes do occur and suggest a sort of “numerical uncertainty
principle” that places an absolute limit on the accuracy that may be obtained using
certain computerized numerical methods.
ERROR ANALYSIS OF NUMERICAL
DIFFERENTIATION
• As described in Sec. 4.3.4, a centered difference approximation of the first derivative
can be written as [Eq. (4.25)]. Including its truncation error gives [Eq. (4.28)]
f′(xi) = (f(xi+1) − f(xi−1)) / (2h) − [f‴(ξ)/6] h^2
• In practice the computer works with rounded values, f(xi+1) = f̃(xi+1) + ei+1 and
f(xi−1) = f̃(xi−1) + ei−1, where the f̃’s are the rounded function values and the e’s are the
associated roundoff errors. Substituting these values into Eq. (4.28) gives
f′(xi) = (f̃(xi+1) − f̃(xi−1)) / (2h) + (ei+1 − ei−1) / (2h) − [f‴(ξ)/6] h^2
• We can see that the total error of the finite-difference approximation consists of a
roundoff error that decreases with step size and a truncation error that increases with
step size.
• Assuming that the absolute value of each component of the roundoff error has an
upper bound of ε, the maximum possible value of the difference ei+1 − ei−1 will be 2ε.
• Further, assume that the third derivative has a maximum absolute value of M. An
upper bound on the absolute value of the total error can therefore be represented as [Eq. (4.29)]
|Error| ≤ ε/h + (M h^2) / 6
• An optimal step size can be determined by differentiating Eq. (4.29) with respect to h,
setting the result equal to zero, and solving for
h_opt = (3ε/M)^(1/3)
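The trade-off and the resulting optimum can be observed by sweeping the step size. A sketch in Python for the centered difference applied to f(x) = e^x at x = 1 (an illustrative choice):

```python
import math

# Total error of the centered first difference for f(x) = e**x at x = 1:
# truncation error falls with h until roundoff error (~eps/h) takes over.
f = math.exp
x = 1.0
exact = math.exp(1.0)          # d/dx e**x = e**x

def centered_error(h):
    return abs((f(x + h) - f(x - h)) / (2 * h) - exact)

for h in (1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11, 1e-13):
    print(h, centered_error(h))

# With eps ~ 2.2e-16 and M = max|f'''| ~ e near x = 1, the optimum
# h_opt = (3*eps/M)**(1/3) is on the order of 1e-5 to 1e-6.
```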
• For most practical cases, we do not know the exact error associated with numerical
methods.
• The exception, of course, is when we know the exact solution, which makes our numerical
approximations unnecessary.
• Therefore, for most engineering and scientific applications we must settle for some
estimate of the error in our calculations.
• There are no systematic and general approaches to evaluating numerical errors for all
problems.
• In many cases error estimates are based on the experience and judgment of the engineer
or scientist.
• Although error analysis is to a certain extent an art, there are several practical
programming guidelines we can suggest.
• First and foremost, avoid subtracting two nearly equal numbers.
• Loss of significance almost always occurs when this is done.
• Sometimes you can rearrange or reformulate the problem to avoid subtractive
cancellation.
• If this is not possible, you may want to use extended-precision arithmetic.
• Furthermore, when adding and subtracting numbers, it is best to sort the numbers and
work with the smallest numbers first; that way the small values are not swamped by the
large ones, which reduces loss of significance.
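A small Python sketch of the sort-before-summing guideline (the specific magnitudes are illustrative):

```python
# One large number plus several tiny ones. Added largest-first, each tiny
# addend falls below the spacing of representable doubles near 1e16 (which
# is 2) and is lost; accumulated smallest-first, the tiny values survive.
big = 1e16
smalls = [1.0] * 10

largest_first = big
for v in smalls:
    largest_first += v          # 1e16 + 1 rounds back to 1e16 every time

smallest_first = sum(smalls) + big  # tiny values add up first, then the big one

print(largest_first)   # the ten 1.0's were lost entirely
print(smallest_first)  # retains their contribution of 10
```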
• Beyond these computational hints, one can attempt to predict total numerical errors
using theoretical formulations.
• The Taylor series is our primary tool for analysis of such errors.
• Prediction of total numerical error is very complicated for even moderately sized
problems and tends to be pessimistic. Therefore, it is usually attempted for only
small-scale tasks.
• The tendency is to push forward with the numerical computations and try to estimate the accuracy of
your results.
• This can sometimes be done by seeing if the results satisfy some condition or equation as a check.
• Or it may be possible to substitute the results back into the original equation to check that it is
actually satisfied.
• Finally you should be prepared to perform numerical experiments to increase your awareness of
computational errors and possible ill-conditioned problems. Such experiments may involve
repeating the computations with a different step size or method and comparing the results. We may
employ sensitivity analysis to see how our solution changes when we change model parameters or
input values.
• We may want to try different numerical algorithms that have different theoretical foundations, are
based on different computational strategies, or have different convergence properties and stability
characteristics.
• When the results of numerical computations are extremely critical and may involve
loss of human life or have severe economic ramifications, it is appropriate to take
special precautions.
• This may involve the use of two or more independent groups to solve the same
problem so that their results can be compared.
• The roles of errors will be a topic of concern and analysis in all sections of this book.
• We will leave these investigations to specific sections.