Error Analysis (SR.02)

Roundoff errors arise in digital computers due to limitations in exactly representing all quantities. Errors can accumulate over large computations and lead to unstable or erroneous results. There are two types of errors: those due to limits on the magnitude and precision of number representation, and those due to numerical operations being sensitive to small changes in values.

ROUNDOFF ERRORS

• Roundoff errors:
➢ arise because digital computers cannot represent some quantities exactly.
➢ are directly related to the manner in which numbers are stored in a computer.
➢ can lead to erroneous results when a calculation is unstable or the problem is ill-conditioned.

• There are two major facets of roundoff errors involved in numerical calculations:
1. Digital computers have magnitude and precision limits on their ability to represent numbers.
2. Certain numerical manipulations are highly sensitive to roundoff errors.
COMPUTER NUMBER REPRESENTATION
~ INTEGER REPRESENTATION ~

• The most straightforward approach, called the signed magnitude method, employs the first bit of a word to indicate the sign, with a 0 for positive and a 1 for negative. The remaining bits are used to store the number.
• For example, the integer value of 173 is represented in binary as 10101101.
• Therefore, the binary equivalent of −173 would be stored on a 16-bit computer as depicted in Fig. 4.3.
• If such a scheme is employed, there clearly is a limited range of integers that can be represented.
• For a 16-bit word size, if one bit is used for the sign, the 15 remaining bits can represent binary integers from 0 to 111111111111111.
• The upper limit can be converted to a decimal integer, as in

$(1 \times 2^{14}) + (1 \times 2^{13}) + \cdots + (1 \times 2^{1}) + (1 \times 2^{0}) = 32{,}767$
• Because zero is defined as 0000000000000000, it would be redundant to use the number 1000000000000000 to define a “minus zero.”
• Therefore, that bit pattern is conventionally employed to represent an additional negative number, −32,768, and the range becomes −32,768 to 32,767.
• Although it provides a nice way to illustrate our point, the signed magnitude method is not actually used to represent integers on conventional computers. A preferred approach, called the 2s complement technique, directly incorporates the sign into the number’s magnitude rather than providing a separate sign bit (see the MATLAB check below).
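These integer limits are easy to confirm in MATLAB. The following is a sketch (not from the original slides); intmax and intmin are built-in functions that return the largest and smallest values of a given integer type:

>> intmax('int16')   % largest 16-bit signed integer: 32767
>> intmin('int16')   % smallest 16-bit signed integer: -32768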
COMPUTER NUMBER REPRESENTATION
~ FLOATING-POINT REPRESENTATION ~

• Fractional quantities are represented in computers using floating-point format.
• In this approach, which is very much like scientific notation, the number is expressed as

$\pm s \times b^{e}$

where s = the significand (or mantissa), b = the base of the number system, and e = the exponent.
• The number is normalized by moving the decimal place over so that only one significant digit is to the left of the decimal point.
• This is done so computer memory is not wasted on storing useless nonsignificant zeros. For example, a value like 0.005678 could be represented in a wasteful manner as 0.005678 × 10^0. However, normalization would yield 5.678 × 10^−3, which eliminates the useless zeros.
• Now, let us examine how floating-point quantities are actually represented in a real computer using base-2 or binary numbers.
• First, let’s look at normalization. Since binary numbers consist exclusively of 0s and 1s, a bonus occurs when they are normalized: the bit to the left of the binary point will always be 1! This means that this leading bit does not have to be stored.
• Hence, nonzero binary floating-point numbers can be expressed as

$\pm (1 + f) \times 2^{e}$

where f = the mantissa (i.e., the fractional part of the significand). For example, if we normalized the binary number 1101.1, the result would be 1.1011 × 2^3 or (1 + 0.1011) × 2^3.
• Because of normalization, one extra bit of precision is gained: in IEEE double precision only 52 fraction bits are stored, yet 53 significant bits are effectively represented.

• Thus, although the original number has five significant bits, we only have to store the four fractional bits: 0.1011.
• Now, just as in Example 4.2, this means that the numbers will have a limited range and precision.
• By default, MATLAB has adopted the IEEE double-precision format, in which eight bytes (64 bits) are used to represent floating-point numbers.
Range:
• The largest positive number can be represented in binary as

$\pm (2 - 2^{-52}) \times 2^{1023}$

• Since the significand is approximately 2 (it is actually 2 − 2^−52), the largest value is therefore approximately $2^{1024} = 1.7977 \times 10^{308}$.
• The smallest positive normalized number can be represented as

$\pm 1 \times 2^{-1022}$

• This value can be translated into a base-10 value of $2^{-1022} = 2.2251 \times 10^{-308}$.
• MATLAB has a number of built-in functions related to its internal number representation.
• For example, the realmax function displays the largest positive real number:
>> format long
>> realmax
ans =
    1.797693134862316e+308

• Numbers occurring in computations that exceed this value create an overflow. In MATLAB they are set to infinity, inf.
• The realmin function displays the smallest positive real number:
>> realmin
ans =
    2.225073858507201e-308

• Numbers that are smaller than this value create an underflow and, in
MATLAB, are set to zero.
• Finally, the eps function displays the machine epsilon:
>> eps
ans =
    2.220446049250313e-16

➢ The machine epsilon (eps) is the smallest spacing that MATLAB’s floating-point arithmetic can resolve between two adjacent numbers; it equals the distance from 1.0 to the next larger representable double (demonstrated below).
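A quick sketch (my example) of what eps means in practice: adding eps to 1 produces the next representable double, while adding eps/2 is lost to rounding.

>> (1 + eps) > 1     % true (1): eps reaches the next double above 1
>> (1 + eps/2) > 1   % false (0): the addition rounds back to exactly 1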
ARITHMETIC MANIPULATIONS OF COMPUTER NUMBERS ~ ROUNDOFF ERROR ~
• When two floating-point numbers are added, the numbers are first expressed so that they have the same exponents.
• For example, if we want to add 1.557 + 0.04341, the computer would express the numbers as 0.1557 × 10^1 + 0.004341 × 10^1.
• Then the mantissas are added to give 0.160041 × 10^1.
• Now, because this hypothetical computer only carries a 4-digit mantissa, the excess digits are chopped off and the result is 0.1600 × 10^1.
• Notice how the last two digits of the second number (41) that were shifted to the right have essentially been lost from the computation.
• Subtraction is performed identically to addition except that the sign of the subtrahend is reversed.
• For example, suppose that we are subtracting 26.86 from 36.41. That is,

$0.3641 \times 10^{2} - 0.2686 \times 10^{2} = 0.0955 \times 10^{2}$

• For this case the result must be normalized because the leading zero is unnecessary.
• So we must shift the decimal one place to the right to give 0.9550 × 10^1 = 9.550.
• Notice that the zero added to the end of the mantissa is not significant but is merely appended to fill the empty space created by the shift.
• Even more dramatic results would be obtained when the numbers are very close, as in

$0.7642 \times 10^{3} - 0.7641 \times 10^{3} = 0.0001 \times 10^{3}$

• which would be converted to 0.1000 × 10^0 = 0.1000.
• Thus, for this case, three nonsignificant zeros are appended.
• The subtraction of two nearly equal numbers is called subtractive cancellation.
• It is the classic example of how the manner in which computers handle mathematics can lead to numerical problems (see the sketch below).
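The effect is easy to reproduce in MATLAB. The following sketch (my example, not from the original slides) computes the small root of x² + bx + c two ways; the textbook quadratic formula suffers cancellation, while the algebraically equivalent rationalized form does not:

% Small root of x^2 + b*x + c when b^2 >> 4c
b = 1e8; c = 1;
x_bad  = (-b + sqrt(b^2 - 4*c)) / 2      % -b + sqrt(~b^2): subtractive cancellation
x_good = -2*c / (b + sqrt(b^2 - 4*c))    % rationalized form avoids the subtraction
% x_good is accurate (about -1e-8); x_bad loses most of its significant digits.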
ARITHMETIC MANIPULATIONS OF COMPUTER
NUMBERS ~ LARGE COMPUTATIONS ~

• Even though an individual roundoff error could be small, the cumulative effect over the course of a large computation can be significant.
• A very simple case involves summing a round base-10 number that is not round in base-2. Suppose that the following M-file is constructed (see the sketch below):
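The M-file itself did not survive in these notes; a minimal sketch consistent with the surrounding text (the function name sumdemo is my assumption) is:

function s = sumdemo()
% Add 0.0001 ten thousand times; the exact result would be 1,
% but 0.0001 has no exact binary representation
s = 0;
for i = 1:10000
    s = s + 0.0001;
end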
• When this function is executed (here via the sumdemo sketch above), the result is
>> format long
>> sumdemo
ans =
   0.99999999999991
• The format long command lets us see the 15 significant-digit representation used by MATLAB.
• You would expect the sum to be equal to 1. However, although 0.0001 is a nice round number in base-10, it cannot be expressed exactly in base-2.
• Thus, the sum comes out to be slightly different than 1.
• We should note that MATLAB has features that are designed to minimize such
errors. For example, suppose that you form a vector as in
>> format long
>> s = [0:0.0001:1];

• For this case, rather than being equal to 0.99999999999991, the last entry will be
exactly one as verified by
>> s(10001)
ans =
1
ARITHMETIC MANIPULATIONS OF COMPUTER
NUMBERS
~ ADDING A LARGE AND A SMALL NUMBER ~
• Suppose we add a small number, 0.0010, to a large number, 4000, using a hypothetical computer with a 4-digit mantissa and a 1-digit exponent.
• After modifying the smaller number so that its exponent matches the larger,

$0.4000 \times 10^{4} + 0.0000001 \times 10^{4} = 0.4000001 \times 10^{4}$

• which is chopped to 0.4000 × 10^4. Thus, the small number is lost entirely.
• This type of error can occur in the computation of an infinite series, where the initial terms are large relative to the later ones. One way to mitigate this type of error is to sum the series in reverse order, so each new term is comparable in magnitude to the accumulated sum (see the sketch below).
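A sketch of why summation order matters (my example; single precision is used so the effect is visible at this scale): summing 1/n² forward loses the small late terms, while summing in reverse accumulates them before the total grows large.

% Partial sum of 1/n^2 for n = 1..1e6 in single precision
n = single(1:1e6);
terms = 1 ./ n.^2;
fwd = single(0); for k = 1:numel(terms),    fwd = fwd + terms(k); end
rev = single(0); for k = numel(terms):-1:1, rev = rev + terms(k); end
fprintf('forward: %.7f   reverse: %.7f   limit pi^2/6: %.7f\n', fwd, rev, pi^2/6)
% The forward sum stalls once 1/n^2 drops below the single-precision
% resolution of the running total; the reverse sum does not.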
Smearing.
• Smearing occurs whenever the individual terms in a summation are larger than the summation itself.
• One case where this occurs is in a series of mixed signs (see the sketch below).
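A sketch of smearing (my example): evaluating e^(−20) from its Maclaurin series. The alternating terms grow to about 4 × 10^7 while the true answer is about 2 × 10^−9, so roundoff in the large terms smears the tiny result.

% Maclaurin series for exp(-x) at x = 20: sum of (-x)^k / k!
x = 20; s = 0; term = 1;              % term holds (-x)^k / k!, starting at k = 0
for k = 0:99
    s = s + term;
    term = term * (-x) / (k + 1);     % recurrence for the next term
end
fprintf('series: %.6e   exp(-20): %.6e\n', s, exp(-x))
% A cure is to sum the positive series for exp(20) and take its reciprocal.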
Inner products.
• As should be clear from the last sections, some infinite series are particularly prone to roundoff error. A far more ubiquitous manipulation is the calculation of inner products, as in

$\sum_{i=1}^{n} x_i y_i = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$

• This operation is very common, particularly in the solution of simultaneous linear algebraic equations. Such summations are prone to roundoff error.
• Consequently, it is often desirable to compute such summations in double precision, as is done automatically in MATLAB (a brief comparison follows).
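A brief sketch (my example) of the precision difference: the same inner product accumulated in double vs. single precision.

% Inner product accumulated in double vs. single precision
x = rand(1e6, 1); y = rand(1e6, 1);
d = x' * y;                               % double-precision accumulation
s = sum(single(x) .* single(y));          % single-precision data and accumulation
fprintf('double: %.10e\nsingle: %.10e\n', d, s)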
TRUNCATION ERRORS

• Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.
• For example, in the previous chapter we approximated the derivative of the velocity of a bungee jumper by a finite-difference equation of the form [Eq. (1.11)]

$\dfrac{dv}{dt} \cong \dfrac{\Delta v}{\Delta t} = \dfrac{v(t_{i+1}) - v(t_i)}{t_{i+1} - t_i}$

• To gain insight into the properties of such errors, we now turn to a mathematical formulation that is used widely in numerical methods to express functions in an approximate fashion—the Taylor series.
THE TAYLOR SERIES

• Taylor’s theorem and its associated formula, the Taylor series, are of great value in the study of numerical methods.
• In essence, the Taylor theorem states that any smooth function can be
approximated as a polynomial.
• The Taylor series then provides a means to express this idea mathematically
in a form that can be used to generate practical results.
• A good problem context for this exercise is to predict a function value at one
point in terms of the function value and its derivatives at another point.
• Suppose you are standing on a platform on the side of a hill and want to predict the height of the terrain a short distance away. We’ll call your horizontal location xi and your vertical distance with respect to the base of the hill f(xi).
• You are given the task of predicting the height at a position xi+1, which is a distance h away from you.
• The best guess would be the same height as where you’re standing now! You could express this prediction mathematically as

$f(x_{i+1}) \cong f(x_i)$   (4.9)
• This relationship, which is called the zero-order approximation, indicates that the value of f at the new point is the same as the value at the old point.
• If xi and xi+1 are close to each other, it is likely that the new value is similar to the old value.
• Equation (4.9) provides a perfect estimate if the function being approximated is, in fact, a constant.
• So now you are allowed to get off the platform and stand on the hill surface with one leg positioned in front of you and the other behind.
• You immediately sense that the front foot is lower than the back foot.
• With this additional information, you’re clearly in a better position to predict the height at f(xi+1).
• In essence, you use the slope estimate to project a straight line out to xi+1.
• You can express this prediction mathematically by

$f(x_{i+1}) \cong f(x_i) + f'(x_i)\,h$   (4.10)
• This is called a first-order approximation because the additional first-order term consists of a slope f′(xi) multiplied by h, the distance between xi and xi+1.
• Thus, the expression is now in the form of a straight line that is capable of predicting an increase or decrease of the function between xi and xi+1.
• Although Eq. (4.10) can predict a change, it is only exact for a straight-line, or linear, trend.
• So now you are allowed to stand on the hill surface and take two measurements.
• First, you measure the slope behind you over a distance of Δx. Let’s call this slope f′b(xi).
• Then you measure the slope in front of you over a distance of Δx. Let’s call this slope f′f(xi).
• You immediately recognize that the slope behind is milder than the one in front. Clearly the drop in height is “accelerating” downward in front of you.
• As you might expect, you’re now going to add a second-order term to your equation and make it into a parabola.
• The Taylor series provides the correct way to do this, as in

$f(x_{i+1}) \cong f(x_i) + f'(x_i)\,h + \dfrac{f''(x_i)}{2!}\,h^2$   (4.11)

• To make use of this formula, you need an estimate of the second derivative.
• You can use the last two slopes you determined to estimate it as

$f''(x_i) \cong \dfrac{f'_f(x_i) - f'_b(x_i)}{\Delta x}$   (4.12)

• Thus, the second derivative is merely a derivative of a derivative; in this case, the rate of change of the slope.
• Recognize that all the values subscripted i represent values that you have estimated. That is, they are numbers.
• Consequently, the only unknowns are the values at the prediction position xi+1. Thus, it is a quadratic equation of the form

$f(x_{i+1}) \cong a_2 h^2 + a_1 h + a_0$

• Thus, we can see that the second-order Taylor series approximates the function with a second-order polynomial.
• Clearly, we could keep adding more derivatives to capture more of the function’s curvature.
• Thus, we arrive at the complete Taylor series expansion

$f(x_{i+1}) = f(x_i) + f'(x_i)\,h + \dfrac{f''(x_i)}{2!}\,h^2 + \dfrac{f^{(3)}(x_i)}{3!}\,h^3 + \cdots + \dfrac{f^{(n)}(x_i)}{n!}\,h^n + R_n$   (4.13)

• Note that because Eq. (4.13) is an infinite series, an equal sign replaces the approximate sign that was used in Eqs. (4.9) through (4.11). A numerical sketch of the successive orders follows below.
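To see the successive orders improve, here is a sketch (my example; the course’s own Example 4.3 may use a different function) of the zero-, first-, and second-order predictions of f(x) = eˣ about xi = 0 with h = 1:

% Taylor predictions of f(x) = exp(x) about xi = 0, evaluated at xi + h
xi = 0; h = 1; true_val = exp(xi + h);
p0 = exp(xi);                     % zero order:   f(xi)
p1 = p0 + exp(xi)*h;              % first order:  + f'(xi) h
p2 = p1 + exp(xi)/2*h^2;          % second order: + f''(xi)/2! h^2
fprintf('order 0: %.4f  order 1: %.4f  order 2: %.4f  true: %.4f\n', p0, p1, p2, true_val)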
• A remainder term is also included to account for all terms from n + 1 to infinity:

$R_n = \dfrac{f^{(n+1)}(\xi)}{(n+1)!}\,h^{n+1}$   (4.14)

• where the subscript n connotes that this is the remainder for the nth-order approximation and ξ is a value of x that lies somewhere between xi and xi+1.
• We can now see why the Taylor theorem states that any smooth function can be approximated as a polynomial and that the Taylor series provides a means to express this idea mathematically.
• The assessment of how many terms are required to get “close enough” is
based on the remainder term of the expansion (Eq. 4.14).
• This relationship has two major drawbacks.
• First, ξ is not known exactly but merely lies somewhere between xi and xi+1.
• Second, to evaluate Eq. (4.14), we need to determine the (n + 1)th derivative of f(x).
• Equation (4.14) is useful for gaining insight into truncation errors.
• This is because we do have control over the term h in the equation.
• Consequently, Eq. (4.14) is often expressed as

$R_n = O(h^{n+1})$

• where the nomenclature O(h^{n+1}) means that the truncation error is of the order of h^{n+1}.
• For example:
• if the error is O(h), halving the step size will halve the error;
• on the other hand, if the error is O(h^2), halving the step size will quarter the error.
THE REMAINDER FOR
THE TAYLOR SERIES EXPANSION
• Suppose that we truncated the Taylor series expansion [Eq. (4.13)] after the zero-order term to yield

$f(x_{i+1}) \cong f(x_i)$

• A visual depiction of this zero-order prediction is shown in Fig. 4.8. The remainder, or error, of this prediction consists of the infinite series of terms that were truncated:

$R_0 = f'(x_i)\,h + \dfrac{f''(x_i)}{2!}\,h^2 + \dfrac{f^{(3)}(x_i)}{3!}\,h^3 + \cdots$
• It is obviously inconvenient to deal with the remainder in this infinite series format. One simplification might be to truncate the remainder itself, as in

$R_0 \cong f'(x_i)\,h$   (4.15)

• Although lower-order derivatives usually account for a greater share of the remainder than the higher-order terms, this result is still inexact because of the neglected second- and higher-order terms.
• This “inexactness” is implied by the approximate equality symbol (≅) employed in Eq. (4.15).
• An alternative simplification that transforms the approximation into an
equivalence is based on a graphical insight.
• As in Fig. 4.9, the derivative mean-value theorem states that if a function f (x)
and its first derivative are continuous over an interval from xi to xi+1, then
there exists at least one point on the function that has a slope, designated by
fʹ(ξ), that is parallel to the line joining f (xi) and f (xi+1).
• The parameter ξ marks the x value where this slope occurs (Fig. 4.9).
• A physical illustration of this theorem is that, if you travel between two points
with an average velocity, there will be at least one moment during the course
of the trip when you will be moving at that average velocity.
• By invoking this theorem, it is simple to realize that, as illustrated in Fig. 4.9, the slope f′(ξ) is equal to the rise R0 divided by the run h, or

$f'(\xi) = \dfrac{R_0}{h}$

• which can be rearranged to give

$R_0 = f'(\xi)\,h$   (4.16)
• Thus, we have derived the zero-order version of Eq. (4.14). The higher-order versions are merely a logical extension of the reasoning used to derive Eq. (4.16). The first-order version is

$R_1 = \dfrac{f''(\xi)}{2!}\,h^2$   (4.17)

• For this case, the value of ξ conforms to the x value corresponding to the second derivative that makes Eq. (4.17) exact.
• Similar higher-order versions can be developed from Eq. (4.14).
USING THE TAYLOR SERIES TO
ESTIMATE TRUNCATION ERRORS
• Although the Taylor series will be extremely useful in estimating truncation
errors, it may not be clear to you how the expansion can actually be applied
to numerical methods.
• In fact, we have already done so in our example of the bungee jumper.
• Recall that the objective of both Examples 1.1 and 1.2 was to predict velocity
as a function of time.
• That is, we were interested in determining υ(t).
• As specified by Eq. (4.13), υ(t) can be expanded in a Taylor series:

$v(t_{i+1}) = v(t_i) + v'(t_i)(t_{i+1} - t_i) + \dfrac{v''(t_i)}{2!}(t_{i+1} - t_i)^2 + \cdots + R_n$

• Now let us truncate the series after the first derivative term:

$v(t_{i+1}) = v(t_i) + v'(t_i)(t_{i+1} - t_i) + R_1$   (4.18)
• Equation (4.18) can be solved for

$v'(t_i) = \dfrac{v(t_{i+1}) - v(t_i)}{t_{i+1} - t_i} - \dfrac{R_1}{t_{i+1} - t_i}$   (4.19)

• The first part of Eq. (4.19) is exactly the same relationship that was used to approximate the derivative in Example 1.2 [Eq. (1.11)]; the second part is the truncation error.
• However, because of the Taylor series approach, we have now obtained an estimate of the truncation error associated with this approximation of the derivative. Using Eqs. (4.14) and (4.19) yields

$\dfrac{R_1}{t_{i+1} - t_i} = \dfrac{v''(\xi)}{2!}(t_{i+1} - t_i) = O(t_{i+1} - t_i)$

• Thus, the estimate of the derivative [Eq. (1.11) or the first part of Eq. (4.19)] has a truncation error of order ti+1 − ti.
• In other words, the error of our derivative approximation should be proportional to the step size. Consequently, if we halve the step size, we would expect to halve the error of the derivative.
NUMERICAL DIFFERENTIATION

• Equation (4.19) is given a formal label in numerical methods—it is called a finite difference. It can be represented generally as

$f'(x_i) = \dfrac{f(x_{i+1}) - f(x_i)}{h} + O(h)$

• where h is called the step size—that is, the length of the interval over which the approximation is made, xi+1 − xi. It is termed a “forward” difference because it utilizes data at i and i + 1 to estimate the derivative (Fig. 4.10a).
• This forward difference is but one of many that can be developed from the Taylor series
to approximate derivatives numerically.
• For example, backward and centered difference approximations of the first derivative
can be developed in a fashion similar to the derivation of Eq. (4.19).
• The former utilizes values at xi−1 and xi (Fig. 4.10b), whereas the latter uses values that are
equally spaced around the point at which the derivative is estimated (Fig. 4.10c).
• More accurate approximations of the first derivative can be developed by including
higher-order terms of the Taylor series.
• Finally, all the foregoing versions can also be developed for second, third, and higher
derivatives.
BACKWARD DIFFERENCE APPROXIMATION
OF THE FIRST DERIVATIVE
• The Taylor series can be expanded backward to calculate a previous value on the basis of a present value, as in

$f(x_{i-1}) = f(x_i) - f'(x_i)\,h + \dfrac{f''(x_i)}{2!}\,h^2 - \cdots$   (4.22)

• Truncating this equation after the first derivative and rearranging yields

$f'(x_i) \cong \dfrac{f(x_i) - f(x_{i-1})}{h}$   (4.23)

• where the error is O(h).
CENTERED DIFFERENCE APPROXIMATION
OF THE FIRST DERIVATIVE.
• A third way to approximate the first derivative is to subtract Eq. (4.22) from the forward Taylor series expansion:

$f(x_{i+1}) = f(x_i) + f'(x_i)\,h + \dfrac{f''(x_i)}{2!}\,h^2 + \cdots$   (4.24)

• to yield

$f(x_{i+1}) = f(x_{i-1}) + 2 f'(x_i)\,h + 2\,\dfrac{f^{(3)}(x_i)}{3!}\,h^3 + \cdots$

• which can be solved for

$f'(x_i) = \dfrac{f(x_{i+1}) - f(x_{i-1})}{2h} - \dfrac{f^{(3)}(x_i)}{6}\,h^2 - \cdots = \dfrac{f(x_{i+1}) - f(x_{i-1})}{2h} + O(h^2)$   (4.25)
• Equation (4.25) is a centered finite difference representation of the first derivative.
• Notice that the truncation error is of the order of h^2 in contrast to the forward and backward approximations that were of the order of h.
• Consequently, the Taylor series analysis yields the practical information that the centered difference is a more accurate representation of the derivative (Fig. 4.10c).
• For example, if we halve the step size using a forward or backward difference, we would approximately halve the truncation error, whereas for the central difference, the error would be quartered. The sketch below illustrates this behavior numerically.
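A sketch (my example) confirming these rates for f(x) = sin(x) at x = 1, where the true derivative is cos(1):

% Forward (O(h)) vs. centered (O(h^2)) differences for f(x) = sin(x) at x = 1
f = @(x) sin(x); x = 1; true_d = cos(1);
for h = [0.1 0.05 0.025]
    fwd = (f(x + h) - f(x)) / h;
    ctr = (f(x + h) - f(x - h)) / (2*h);
    fprintf('h = %.3f   fwd error = %.2e   ctr error = %.2e\n', ...
            h, abs(fwd - true_d), abs(ctr - true_d))
end
% Halving h roughly halves the forward error and quarters the centered error.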
FINITE-DIFFERENCE APPROXIMATIONS OF
HIGHER DERIVATIVES.
• Besides first derivatives, the Taylor series expansion can be used to derive numerical estimates of higher derivatives.
• To do this, we write a forward Taylor series expansion for f(xi+2) in terms of f(xi):

$f(x_{i+2}) = f(x_i) + f'(x_i)(2h) + \dfrac{f''(x_i)}{2!}(2h)^2 + \cdots$   (4.26)

• Equation (4.24) can be multiplied by 2 and subtracted from Eq. (4.26) to give

$f(x_{i+2}) - 2 f(x_{i+1}) = -f(x_i) + f''(x_i)\,h^2 + \cdots$
• which can be solved for

$f''(x_i) \cong \dfrac{f(x_{i+2}) - 2 f(x_{i+1}) + f(x_i)}{h^2}$

• This relationship is called the second forward finite difference.
• Similar manipulations can be employed to derive a backward version:

$f''(x_i) \cong \dfrac{f(x_i) - 2 f(x_{i-1}) + f(x_{i-2})}{h^2}$
• A centered difference approximation for the second derivative can be derived by adding Eqs. (4.22) and (4.24) and rearranging the result to give

$f''(x_i) \cong \dfrac{f(x_{i+1}) - 2 f(x_i) + f(x_{i-1})}{h^2}$

• As was the case with the first-derivative approximations, the centered case is more accurate.
• Notice also that the centered version can be alternatively expressed as

$f''(x_i) \cong \dfrac{\dfrac{f(x_{i+1}) - f(x_i)}{h} - \dfrac{f(x_i) - f(x_{i-1})}{h}}{h}$

• Thus, just as the second derivative is a derivative of a derivative, the second finite difference approximation is a difference of two first finite differences [recall Eq. (4.12)]. A quick numerical check follows.
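A quick numerical check (my example) of the centered second difference for f(x) = sin(x) at x = 1, where f″(1) = −sin(1):

% Centered second finite difference for f(x) = sin(x) at x = 1
f = @(x) sin(x); x = 1; h = 0.05;
d2 = (f(x + h) - 2*f(x) + f(x - h)) / h^2;
fprintf('estimate: %.6f   true: %.6f\n', d2, -sin(1))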
TOTAL NUMERICAL ERROR
• The total numerical error is the summation of the truncation and roundoff errors.
• In general, the only way to minimize roundoff errors is to increase the number of significant figures carried by the computer.
• Further, we have noted that roundoff error may increase due to subtractive cancellation or due to an increase in the number of computations in an analysis.
• In contrast, Example 4.4 demonstrated that the truncation error can be reduced by decreasing the step size.
• Because a decrease in step size can lead to subtractive cancellation or to an increase in computations, the truncation errors are decreased as the roundoff errors are increased.
• Therefore, we are faced with the following dilemma: the strategy for decreasing one component of the total error leads to an increase in the other component.
• In a computation, we could conceivably decrease the step size to minimize truncation errors only to discover that in doing so, the roundoff error begins to dominate the solution and the total error grows! Thus, our remedy becomes our problem (Fig. 4.11).
• One challenge that we face is to determine an appropriate step size for a particular computation. We would like to choose a large step size to decrease the amount of calculations and roundoff errors without incurring the penalty of a large truncation error.
• If the total error is as shown in Fig. 4.11, the challenge is to identify the point of diminishing returns where roundoff error begins to negate the benefits of step-size reduction.
• When using MATLAB, such situations are relatively uncommon because of its 15- to 16-digit precision.
• Nevertheless, they sometimes do occur and suggest a sort of “numerical uncertainty
principle” that places an absolute limit on the accuracy that may be obtained using
certain computerized numerical methods.
ERROR ANALYSIS OF NUMERICAL
DIFFERENTIATION
• As described in Sec. 4.3.4, a centered difference approximation of the first derivative can be written as [Eq. (4.25)]

$f'(x_i) = \dfrac{f(x_{i+1}) - f(x_{i-1})}{2h} - \dfrac{f^{(3)}(\xi)}{6}\,h^2$   (4.28)

• Thus, if the two function values in the numerator of the finite-difference approximation have no roundoff error, the only error is due to truncation.
• However, because we are using digital computers, the function values do include roundoff error, as in

$f(x_{i-1}) = \tilde{f}(x_{i-1}) + e_{i-1}, \qquad f(x_{i+1}) = \tilde{f}(x_{i+1}) + e_{i+1}$

• where the f̃’s are the rounded function values and the e’s are the associated roundoff errors. Substituting these values into Eq. (4.28) gives

$f'(x_i) = \dfrac{\tilde{f}(x_{i+1}) - \tilde{f}(x_{i-1})}{2h} + \dfrac{e_{i+1} - e_{i-1}}{2h} - \dfrac{f^{(3)}(\xi)}{6}\,h^2$
• We can see that the total error of the finite-difference approximation consists of a roundoff error that decreases with step size and a truncation error that increases with step size.
• Assuming that the absolute value of each component of the roundoff error has an upper bound of ε, the maximum possible value of the difference ei+1 − ei−1 will be 2ε.
• Further, assume that the third derivative has a maximum absolute value of M. An upper bound on the absolute value of the total error can therefore be represented as

$\text{Total error} = \left| f'(x_i) - \dfrac{\tilde{f}(x_{i+1}) - \tilde{f}(x_{i-1})}{2h} \right| \le \dfrac{\varepsilon}{h} + \dfrac{h^2 M}{6}$   (4.29)
• An optimal step size can be determined by differentiating Eq. (4.29), setting the result equal to zero, and solving for h (illustrated below):

$h_{\mathrm{opt}} = \sqrt[3]{\dfrac{3\varepsilon}{M}}$
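A sketch (my example) exhibiting the tradeoff of Fig. 4.11 for the centered difference applied to f(x) = sin(x) at x = 1; with ε taken as machine epsilon and M ≈ 1, the bound above predicts an optimum near h ≈ 10⁻⁵:

% Total error of the centered difference vs. step size
f = @(x) sin(x); x = 1; true_d = cos(1);
hs = 10.^(-(1:12));                        % h = 1e-1 down to 1e-12
err = zeros(size(hs));
for k = 1:numel(hs)
    h = hs(k);
    err(k) = abs((f(x + h) - f(x - h))/(2*h) - true_d);
end
h_opt = (3*eps/1)^(1/3);                   % optimal h from the bound, with M ~ 1
fprintf('predicted h_opt = %.1e\n', h_opt)
disp([hs' err'])                           % error falls as h^2, then rises as roundoff dominates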
• For most practical cases, we do not know the exact error associated with numerical
methods.
• The exception, of course, is when we know the exact solution, which makes our numerical
approximations unnecessary.
• Therefore, for most engineering and scientific applications we must settle for some
estimate of the error in our calculations.
• There are no systematic and general approaches to evaluating numerical errors for all
problems.
• In many cases error estimates are based on the experience and judgment of the engineer
or scientist.
• Although error analysis is to a certain extent an art, there are several practical programming guidelines we can suggest.
• First and foremost, avoid subtracting two nearly equal numbers.
• Loss of significance almost always occurs when this is done.
• Sometimes you can rearrange or reformulate the problem to avoid subtractive cancellation.
• If this is not possible, you may want to use extended-precision arithmetic.
• Furthermore, when adding and subtracting numbers, it is best to sort the numbers and work with the smallest numbers first. This mitigates loss of significance.
• Beyond these computational hints, one can attempt to predict total numerical errors
using theoretical formulations.
• The Taylor series is our primary tool for analysis of such errors.
• Prediction of total numerical error is very complicated for even moderately sized
problems and tends to be pessimistic. Therefore, it is usually attempted for only
small-scale tasks.
• The tendency is to push forward with the numerical computations and try to estimate the accuracy of
your results.
• This can sometimes be done by seeing if the results satisfy some condition or equation as a check.
• Or it may be possible to substitute the results back into the original equation to check that it is
actually satisfied.
• Finally you should be prepared to perform numerical experiments to increase your awareness of
computational errors and possible ill-conditioned problems. Such experiments may involve
repeating the computations with a different step size or method and comparing the results. We may
employ sensitivity analysis to see how our solution changes when we change model parameters or
input values.
• We may want to try different numerical algorithms that have different theoretical foundations, are
based on different computational strategies, or have different convergence properties and stability
characteristics.
• When the results of numerical computations are extremely critical and may involve
loss of human life or have severe economic ramifications, it is appropriate to take
special precautions.
• This may involve the use of two or more independent groups to solve the same
problem so that their results can be compared.
• The roles of errors will be a topic of concern and analysis in all sections of this book.
• We will leave these investigations to specific sections.
