Mat 202
K. Issa, Ph.D.
[email protected], [email protected]
Department of Statistics and Mathematical Sciences,
Kwara State University, Malete, Ilorin, Nigeria.
Course Outlines
Solution of algebraic and transcendental equations. Curve fitting, error analysis, interpolation and approximation. Zeros of nonlinear equations in one variable. Systems of linear equations. Numerical differentiation and integration. Initial value problems in ODEs.
Recommended Textbooks
• Numerical methods for Engineers and Scientists 2nd Edition by Joe D. Hoffman
Contents
1 CURVE FITTING
  1.1 Introduction
  1.2 Direct fit polynomials
2 ERROR ANALYSIS
  2.1 Introduction
  2.2 The effects of errors on the basic operations of arithmetic
      2.2.1 Addition
      2.2.2 Subtraction
      2.2.3 Multiplication
      2.2.4 Division
      2.2.5 Error in function evaluation
4 FINITE DIFFERENCE
  4.1 Forward difference
  4.2 Backward difference
  4.3 Central differences
  4.4 Polynomial Interpolation
      4.4.1 Newton-Gregory Interpolation Formula
      4.4.2 Lagrange Interpolation Formula
      4.4.3 Everett's Formula
6 SYSTEM OF LINEAR EQUATIONS
  6.1 Solution by Gauss-Jacobi iteration
  6.2 Solution by Gauss-Seidel iteration
7 NUMERICAL INTEGRATION
  7.1 TRAPEZOIDAL RULE
      7.1.1 Error Estimate in Trapezoidal Rule
  7.2 SIMPSON'S 1/3 RULE
      7.2.1 Error Estimate in Simpson's 1/3 Rule
1 CURVE FITTING
1.1 Introduction
Suppose a set of data is given as points xi and corresponding values f(xi), for i = 0, 1, · · · , n. The process of obtaining a function that represents the data by a curve or line is often referred to as curve fitting. The three most commonly used approximating functions are
1. polynomial functions
2. trigonometric functions
3. exponential functions
There are two general approaches to curve fitting:
a. Exact fit
b. Approximate fit
Pn (x) = a0 + a1 x + a2 x2 + · · · + an xn (1)
An exact fit passes exactly through all the discrete data points. This type of fit is useful for small sets of smooth data.
An approximate fit yields a polynomial that passes through the set of data in the best manner possible without being required to pass exactly through any of the data points. Approximate fits are useful for large sets of smooth data.
A set of discrete data may be equally spaced or unequally spaced in the independent variable x. In the general case where the data are unequally spaced, several procedures can be used to fit approximating polynomials; such procedures include
a. direct fit polynomials
b. Lagrange polynomials
When the data are equally spaced, procedures based on differences are used; these include
i) the Newton-Gregory forward difference polynomial
ii) the Newton-Gregory backward difference polynomial
iii) the Stirling centered difference polynomial
The property of polynomials that makes them suitable as approximating functions is stated by the Weierstrass approximation theorem:
If f(x) is a continuous function in the closed interval a ≤ x ≤ b, then for every ε > 0 there exists a polynomial Pn(x), where the value of n depends on the value of ε, such that |f(x) − Pn(x)| < ε for all x ∈ [a, b].
Here, we shall consider a completely general procedure for fitting a polynomial to a set of equally
spaced or unequally spaced data. Given (n+1) sets of data [x0 , f (x0 )] , [x1 , f (x1 )] , · · · [xn , f (xn )],
which will be written as (x0 , f (x0 )) , (x1 , f (x1 )) , · · · (xn , f (xn )), determine the unique nth-
degree polynomial Pn (x) that passes exactly through the n + 1 points:
Pn (x) = a0 + a1 x + a2 x2 + · · · + an xn (3)
For simplicity of notation, let f (xi ) = fi . Substituting each data point into Equation (3) yields
n + 1 equations:
f0 = a0 + a1 x0 + a2 x0² + · · · + an x0ⁿ
f1 = a0 + a1 x1 + a2 x1² + · · · + an x1ⁿ
⋮
fn = a0 + a1 xn + a2 xn² + · · · + an xnⁿ
These are n + 1 linear equations in the n + 1 coefficients a0 to an, and they can be solved by Gauss elimination. The resulting polynomial is the unique nth-degree polynomial that passes exactly through the n + 1 data points. The direct fit polynomial procedure works for both equally spaced and unequally spaced data.
The MATLAB script below computes a polynomial approximation, plots it against the data and reports the maximum error.
clear all
close all
a=input('the initial value=');
b=input('the end point=');
l=input('number of points=');      % number of equally spaced nodes
x=linspace(a,b,l);                 % x = [x0 x1 ... x_{l-1}], spacing h = (b-a)/(l-1)
d=input('degree of approximation=');
approx=input('the function to be approximated (in terms of x)=');
p=polyfit(x,approx,d);             % least-squares fit of degree d
y1=polyval(p,x);                   % fitted values at the nodes
max_err=max(abs(approx-y1))        % maximum error at the nodes
figure
plot(x,y1,'o')                     % fitted polynomial values
hold on
plot(x,approx,'r')                 % given function values
hold off
Example 1: Consider y = f (x) = ex , find the 2nd, 3rd and 4th degree polynomial approx-
imation to the function over the interval 1.1 ≤ x ≤ 1.5, hence compute the interpolant y at
x = 1.22
Solution: y(1.22) = f(1.22) = e^{1.22} = 3.387188
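The computation in Example 1 can be reproduced with polyfit/polyval directly. The sketch below is illustrative only; the choice of five equally spaced nodes is an assumption (any number of nodes greater than the degree would do).
% Degree 2, 3 and 4 approximations to f(x) = exp(x) on [1.1, 1.5]
x = linspace(1.1, 1.5, 5);          % equally spaced nodes (assumed)
f = exp(x);
for d = 2:4
    p  = polyfit(x, f, d);          % coefficients of the degree-d fit
    yi = polyval(p, 1.22);          % interpolant at x = 1.22
    fprintf('degree %d: y(1.22) = %.6f, error = %.2e\n', d, yi, abs(yi - exp(1.22)))
end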
Exercise: Find the degree-5 polynomial approximation to each of the following functions:
1. f(x) = 1/x, 1 ≤ x ≤ 3
2. f (x) = exp(x), −1≤x≤1
3. f (x) = sin(x), 0 ≤ x ≤ 4π
2 ERROR ANALYSIS
2.1 Introduction
Numerical analysis provides approximate methods for obtaining the true (desired) solution of a mathematical problem. It is therefore important to be able to estimate the error involved in such approximations; consequently, the study of error is of central concern in numerical analysis. Errors are also introduced by the computational process itself, since computers perform mathematical operations with only a finite number of digits. If the number yn is an approximation to the exact result y, then the difference y − yn is called the error.
Hence: exact value = approximate value + error.
In numerical computations, we come across the following types of errors:
(a) Absolute and relative errors: If XE is the exact or true value of a quantity and XA is its approximate value, then |XE − XA| is called the absolute error Ea. Therefore the absolute error is
Ea = |XE − XA| (4)
the relative error is
Er = (XE − XA)/XE (5)
and the percentage error is
Ep = 100 Er = 100 (XE − XA)/XE (6)
Significant digits: The concept of a significant figure, or digit, has been developed to formally define the reliability of a numerical value. The significant digits of a number are those that can be used with confidence. If XE is the exact or true value and XA is an approximation to XE, then XA is said to approximate XE to t significant digits if t is the largest non-negative integer for which
|(XE − XA)/XE| < 5 × 10⁻ᵗ (7)
Example 2: XA = 2.71828 approximates XE = exp(1) to 6 significant digits, since
|(XE − XA)/XE| = |(exp(1) − 2.71828)/exp(1)| = 0.67 × 10⁻⁶ < 5 × 10⁻⁶
Example 3:
Let the exact or true value be 20/3 and the approximate value 6.666.
The absolute error is 0.000666 · · · = 2/3000.
The relative error is (2/3000)/(20/3) = 1/10000 = 10⁻⁴, therefore the number of significant digits is 4.
(b) Inherent errors: Inherent errors are the errors that pre-exist in the problem statement itself before its solution is obtained. Inherent errors exist because the data being used are themselves approximate or because of the limitations of the calculations carried out on digital computers. Inherent errors cannot be completely eliminated, but they can be minimised by selecting better data or by employing high-precision computer computations.
(c) Round-off error: Most numbers have infinite decimal representations and therefore have to be rounded in calculations. The error introduced by the omission of significant figures, due to the finite word length of the computer, is called round-off error.
For example, (i) 13.6895 ≈ 13.690 (3D or 5 s.f.) and Error = 13.6895 − 13.690 = −0.0005 = −5 × 10⁻⁴
(ii) 11.6955 ≈ 11.696 (3D) and Error = 11.6955 − 11.696 = −0.0005 = −5 × 10⁻⁴
Note: When a number is rounded to k decimal places, the round-off error satisfies
|round-off error| ≤ ½ × 10⁻ᵏ (8)
Example 1: Suppose a number has been rounded off to 2.365 (3 D.P.); then the round-off error ≤ ½ × 10⁻³ = 0.0005.
Example 2: The number π is approximated to 4 decimal places. (i) Determine the relative error due to chopping and express it as a percentage. (ii) Determine the relative error due to rounding and express it as a percentage.
Solution: (i) Chopping to 4 decimal places gives 3.1415, so the relative error is
Er = (3.1415 − π)/π ≈ −2.949 × 10⁻⁵, i.e. about 0.002949% in magnitude.
(ii) Rounding to 4 decimal places gives 3.1416, so the relative error is
Er = (3.1416 − π)/π ≈ 2.338 × 10⁻⁶, i.e. about 0.0002338%.
Exercise: Use a Taylor series expansion to predict f(2) for f(x) = ln(x) with base point x = 1. Hence, determine the relative error and the percentage relative error of the approximation.
(d) Truncation errors: Truncation errors are those errors that result from using an approximation in place of an exact mathematical procedure, for example terminating an infinite series after a finite number of terms. This is known as formula truncation error or simply truncation error.
Let a function f(x) be infinitely differentiable in an interval which includes the point x = a. Then the Taylor series expansion of f(x) about x = a is given by
f(x) = Σ_{k=0}^{∞} f⁽ᵏ⁾(a)(x − a)ᵏ / k! (9)
where f⁽ᵏ⁾(a) denotes the kth derivative of f(x) evaluated at x = a. If the series is truncated after n terms, then this is equivalent to approximating f(x) with a polynomial of degree n − 1:
f(x) ≈ Σ_{k=0}^{n−1} f⁽ᵏ⁾(a)(x − a)ᵏ / k! (10)
The truncation error En(x) is equal to the sum of the neglected higher-order terms and is often called the tail of the series. The tail is given by
En(x) = Σ_{k=n}^{∞} f⁽ᵏ⁾(a)(x − a)ᵏ / k! (11)
Example 1: Given f(x) = sin x, (a) expand f(x) about x = 0 using the Taylor series, (b) truncate the series to n = 6 terms, (c) find the relative error at x = π/4 due to the truncation in (b).
Solution: (a) The Taylor series expansion of sin x is given by
f(x) = sin x = Σ_{k=0}^{∞} f⁽ᵏ⁾(a)(x − a)ᵏ / k! = x − x³/3! + x⁵/5! − x⁷/7! + · · · (12)
(b) Truncation of the Taylor series to n = 6 terms.
f6(x) = x − x³/3! + x⁵/5! (13)
(c) Applying equation (5), the relative error at x = π/4 due to the truncation in (b) becomes
Er6 = [f6(π/4) − sin(π/4)] / sin(π/4) = [π/4 − (π/4)³/3! + (π/4)⁵/5! − sin(π/4)] / sin(π/4) ≈ 5.129 × 10⁻⁵ (14)
(e) Human error: These errors occur through misuse of the method being applied; they are errors created by the person performing the calculation. A common example is the transposition of digits, such as writing 89.3285 as 89.3825.
2.2 The effects of errors on the basic operations of arithmetic
If x1, x2 are approximations to the values X1, X2 respectively, and e1 and e2 are the corresponding errors (of any type) in these approximations, then X1 = x1 + e1 ⇒ e1 = X1 − x1 and X2 = x2 + e2 ⇒ e2 = X2 − x2.
2.2.1 Addition
Let X = X1 + X2 (exact) and x = x1 + x2 (approximate); then
|e| ≤ |e1| + |e2| ≤ ½ × 10⁻ᵏ¹ + ½ × 10⁻ᵏ² (16)
where x1 and x2 are rounded to k1 and k2 decimal places respectively.
Example 1: The numbers X1 and X2 when rounded off to 3D are 3.724 and 4.037 respectively. Evaluate an approximation to X1 + X2 and discuss the error involved.
Solution: Let X1 ≈ x1 = 3.724 (3D) ⇒ |e1| = |X1 − x1| ≤ ½ × 10⁻³ and X2 ≈ x2 = 4.037 (3D) ⇒ |e2| = |X2 − x2| ≤ ½ × 10⁻³. Again, let X = X1 + X2 ≈ x1 + x2 = x = 3.724 + 4.037 = 7.761.
Then |e| = |X − x| ≤ |e1| + |e2| ≤ ½ × 10⁻³ + ½ × 10⁻³ = 0.0005 + 0.0005 = 0.001 ⇒ |X − x| ≤ 0.001, therefore
7.761 − 0.001 ≤ X ≤ 7.761 + 0.001, i.e. 7.760 ≤ X ≤ 7.762, so X = 7.76 (2D).
2.2.2 Subtraction
Example 1: The numbers A and B are correctly rounded to 3D as 3.724 and 2.251 respectively. Evaluate an approximation to B − A and discuss the error in the approximation.
Solution: Let A ≈ a = 3.724 (3D) ⇒ |e1| = |A − a| ≤ ½ × 10⁻³ and B ≈ b = 2.251 (3D) ⇒ |e2| = |B − b| ≤ ½ × 10⁻³. Again, let C = B − A ≈ b − a = c = 2.251 − 3.724 = −1.473.
Thus |e| = |C − c| ≤ |e1| + |e2| ≤ ½ × 10⁻³ + ½ × 10⁻³ = 0.001 ⇒ |C − c| ≤ 0.001, therefore
−0.001 ≤ C − c ≤ 0.001 ⇒ c − 0.001 ≤ C ≤ c + 0.001, i.e. −1.474 ≤ C ≤ −1.472, so C = −1.47 (2D).
2.2.3 Multiplication
2.2.4 Division
With the usual notation, i.e. X = X1/X2 and x = x1/x2, we have
|e| = |X − x| = |X1/X2 − x1/x2| (23)
Now,
X1/X2 = (x1 + e1)/(x2 + e2) = (x1 + e1)(1/x2)(1 + e2/x2)⁻¹ = (x1 + e1)(1/x2)[1 − e2/x2 + (e2/x2)² − · · ·]
      = (1/x2)[x1 + e1 − x1 e2/x2 − e1 e2/x2 + · · ·]
⇒ X1/X2 − x1/x2 ≈ e1/x2 − x1 e2/x2²
⇒ |e| ≈ |e1/x2 − x1 e2/x2²| ≤ |e1/x2| + |x1 e2/x2²| (24)
For the relative error,
e/X = (X − x)/X ≈ (X2/X1)(e1/x2 − x1 e2/x2²) ≈ (x2/x1)(e1/x2 − x1 e2/x2²) = e1/x1 − e2/x2,
so that |e/X| ≤ |e1/x1| + |e2/x2| (25)
Thus (as for multiplication) the approximate relative error in a division is less than or equal to the sum of the relative errors.
Example: The numbers P and Q when rounded to 4 S.F. are 37.26 and 0.05371 respectively. Evaluate P/Q as accurately as possible.
2.2.5 Error in function evaluation
Example 1: The number X when rounded correctly to 2D is 7.36. Obtain an approximation as accurate as possible to √X.
Solution: Let X ≈ x = 7.36 (2D), f(x) = √x ⇒ f′(x) = 1/(2√x).
|ef| ≈ |e||f′(x)| ≤ ½ × 10⁻² × 1/(2√7.36) = 0.005/(2√7.36) ≈ 0.0009
Thus f(X) = f(x) ± ef = √7.36 ± 0.0009
⇒ 2.7129 − 0.0009 ≤ f(X) ≤ 2.7129 + 0.0009 ⇒ 2.712 ≤ f(X) ≤ 2.7138, ∴ f(X) = 2.71 (2D).
Example 2: The number X when rounded correctly to 3D is 0.359. Obtain an approximation as accurate as possible to cos X.
Solution: Let X ≈ x = 0.359 (3D), f(x) = cos x ⇒ f′(x) = −sin x.
|ef| ≈ |e||f′(x)| ≤ ½ × 10⁻³ |−sin(0.359)| ⇒ |ef| ≤ 0.0002
Thus f(X) = f(x) ± ef = cos(0.359) ± 0.0002
⇒ 0.9363 − 0.0002 ≤ f(X) ≤ 0.9363 + 0.0002 ⇒ 0.9361 ≤ f(X) ≤ 0.9365, ∴ f(X) = 0.94 (2D).
Exercise:
1. The numbers X and Y when rounded correctly to 3D are 0.359 and 0.752 respectively. Obtain approximations as accurate as possible to sin X, e^X and sin Y.
3 ZEROS OF NONLINEAR EQUATIONS
Methods for finding the roots of an equation f(x) = 0 fall into two classes:
1. Direct Methods: Direct methods give the exact values of the roots in a finite number of steps (we assume here that there are no round-off errors). Direct methods determine all the roots at the same time.
2. Indirect or Iterative Methods: Indirect or iterative methods are based on the concept of successive approximation. The general procedure is to start with one or more initial approximations to the root and obtain a sequence of iterates (xk) which in the limit converges to the true root. Indirect or iterative methods determine one or two roots at a time. These methods are further divided into two categories: bracketing and open methods. The bracketing methods require limits between which the root lies, whereas the open methods require an initial estimate of the solution. The Bisection and False Position methods are two well-known examples of bracketing methods. Among the open methods, the Newton-Raphson method and the method of successive approximation are the most commonly used. The most popular method for solving a nonlinear equation is the Newton-Raphson method, which has a high rate of convergence to a solution. In this chapter, we present the following indirect or iterative methods with illustrative examples:
a. Bisection method
b. Method of False Position (regula falsi)
c. Newton-Raphson method
d. Method of successive approximation (fixed-point iteration)
3.1 Bisection Method
Suppose that f is defined and continuous on an interval [a, b]. If the values of f(x) at x = a and x = b are of opposite sign, then it is clear from figure 1 that f must have at least one root between a and b. This result, known as the intermediate value theorem, provides a simple and effective way of locating the approximate positions of the roots of f.
Consider a bracketing interval [a, b] for which f(a)f(b) < 0. In the bisection method the value of f at the mid-point, c = (a + b)/2, is calculated. There are three possibilities:
1. if f(c) = 0 (very unlikely), then c is a root of f;
2. if f(a)f(c) < 0, then f has a root between a and c, and the process can be repeated on the new interval [a, c];
3. finally, if f(a)f(c) > 0 it follows that f(b)f(c) < 0, since f(a) and f(b) have opposite signs; therefore f has a root between c and b and the process can be repeated with [c, b].
Suppose that we wish to calculate a root to within ±½ × 10⁻ᵏ starting from the interval [a, b]. After n − 1 steps the bracketing interval has length (b − a)/2ⁿ⁻¹, where (b − a) denotes the length of the original interval with which we started. Hence we require
(b − a)/2ⁿ⁻¹ < ½ × 10⁻ᵏ ⇒ 2ⁿ⁻² > 10ᵏ(b − a) (28)
Taking logarithms of both sides, we obtain
n ≥ 2 + log₁₀[10ᵏ(b − a)] / log₁₀ 2 (29)
% MATLAB code for Bisection method
function [x e] = mybisect(f,a,b,n)
% function [x e] = mybisect(f,a,b,n)
% Does n iterations of the bisection method for a function f
% Inputs: f -- a function handle (e.g. an anonymous function)
% a,b -- left and right edges of the interval
% n -- the number of bisections to do.
% Outputs: x -- the estimated solution of f(x) = 0
% e -- an upper bound on the error
format long
c = f(a); d = f(b);
if c*d > 0.0
error(’Function has same sign at both endpoints.’)
end
disp(’ x y’)
for i = 1:n
x = (a + b)/2;
y = f(x);
disp([ x y])
if y == 0.0 % solved the equation exactly
e = 0;
break % jumps out of the for loop
end
if c*y < 0
b=x;
else
a=x;
end
end
e = (b-a)/2;
Example 1: Use the Bisection method to find a root of the equation x3 − 4x − 8.95 = 0
accurate to three decimal places, given that the root lies between 2 and 3.
Solution: f(x) = x³ − 4x − 8.95, f(2) = 2³ − 4(2) − 8.95 = −8.95 < 0, f(3) = 3³ − 4(3) − 8.95 = 6.05 > 0.
c1 = (2 + 3)/2 = 2.5 ⇒ f(2.5) = −3.325 and f(2.5)f(3) < 0, therefore the new interval is [2.5, 3].
Again, c2 = (2.5 + 3)/2 = 2.75 ⇒ f(2.75) = 0.8469 and f(2.5)f(2.75) < 0, therefore the new interval is [2.5, 2.75].
c3 = (2.5 + 2.75)/2 = 2.625 ⇒ f(2.625) = −1.362109 and f(2.625)f(2.75) < 0, therefore the new interval is [2.625, 2.75].
c4 = (2.625 + 2.75)/2 = 2.6875 ⇒ f(2.6875) = −0.289111 and f(2.6875)f(2.75) < 0, therefore the new interval is [2.6875, 2.75]. Again,
c5 = (2.6875 + 2.75)/2 = 2.71875 ⇒ f(2.71875) = 0.270917 and f(2.6875)f(2.71875) < 0, therefore the new interval is [2.6875, 2.71875], and so on.
Hence the root is 2.70, accurate to two decimal places, and the number of iterations is 8.
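Assuming the mybisect function listed above is saved on the MATLAB path, the computation in Example 1 can be reproduced as follows (the choice of 20 iterations is arbitrary):
f = @(x) x.^3 - 4*x - 8.95;          % function from Example 1
[root, err] = mybisect(f, 2, 3, 20)  % bracket [2, 3], 20 bisections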
Exercise: Find one root of ex –3x = 0 in the interval [1.5, 1.6] correct to two decimal places
using the method of Bisection, (Answer x = 1.51, number of Iteration is 6)
3.2 Method of False Position (Regula Falsi Method)
The regula falsi method is an attempt to produce an iterative scheme with more rapid convergence than bisection while still being guaranteed to converge.
Here A = (λ, f(λ)) and B0 = (x0, f(x0)) are the endpoints of the bracketing chord. The equation of the line AB0 is given by
y − f(λ) = m(x − λ) (30)
where m = [f(x0) − f(λ)]/(x0 − λ) is the gradient of the line AB0. Thus
y − f(λ) = [f(x0) − f(λ)]/(x0 − λ) · (x − λ) (31)
At the point P1(x1, 0), y = 0, so that
−f(λ) = [f(x0) − f(λ)]/(x0 − λ) · (x1 − λ) ⇒ x1 = [λ f(x0) − x0 f(λ)] / [f(x0) − f(λ)] (32)
Similarly,
x2 = [λ f(x1) − x1 f(λ)] / [f(x1) − f(λ)] (33)
and in general we have
x_{n+1} = [λ f(xn) − xn f(λ)] / [f(xn) − f(λ)], n ≥ 0, or equivalently x_{n+1} = [a f(b) − b f(a)] / [f(b) − f(a)] (34)
Equation (34) is the regula falsi method.
The procedure (or algorithm) for finding a solution with the method of False Position is given below (a MATLAB sketch follows the closing remark):
Algorithm for the method of False Position
1. Define the first interval (a, b) such that a solution exists between its endpoints, i.e. check that f(a)f(b) < 0.
2. Compute the first estimate of the numerical solution x1 using Eq. (34).
3. Find out whether the actual solution is between a and x1 or between x1 and b. This is accomplished by checking the sign of the product f(a)f(x1):
If f(a)f(x1) < 0, the solution is between a and x1.
If f(a)f(x1) > 0, the solution is between x1 and b.
4. Select the subinterval that contains the solution (a to x1, or x1 to b) as the new interval (a, b) and go back to step 2. Steps 2 through 4 are repeated until a specified tolerance or error bound is attained.
The regula falsi method always converges to an answer, provided a root is initially bracketed in the interval (a, b).
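The sketch below implements the algorithm above in MATLAB. The function name myfalsepos, the tolerance tol and the iteration cap nmax are illustrative assumptions, not part of the notes.
function [x, n] = myfalsepos(f, a, b, tol, nmax)
% False Position (regula falsi) iteration based on Eq. (34)
% f : function handle with f(a)*f(b) < 0;  tol : stop when |f(x)| < tol
if f(a)*f(b) > 0
    error('Function has the same sign at both endpoints.')
end
for n = 1:nmax
    x = (a*f(b) - b*f(a)) / (f(b) - f(a));   % Eq. (34)
    if abs(f(x)) < tol, return, end
    if f(a)*f(x) < 0
        b = x;                               % root lies in [a, x]
    else
        a = x;                               % root lies in [x, b]
    end
end
end
For instance, [x, n] = myfalsepos(@(x) 2*x.^3 - 7*x + 2, 0, 1, 1e-6, 50) reproduces the root 0.2929 of Example 1 below.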
Example 1: Determine the root of the equation 2x³ − 7x + 2 = 0, using the regula falsi method, given that the root lies in [0, 1].
Solution: using equation (34),
x_{n+1} = [λ f(xn) − xn f(λ)] / [f(xn) − f(λ)] (35)
Let f(x) = 2x³ − 7x + 2; f(0) = 2, f(1) = −3 ⇒ f(0)f(1) < 0, hence a solution of the equation lies in [0, 1].
Set λ = 0 and x0 = 1. Since f(λ) = 2 and f(xn) = 2xn³ − 7xn + 2, the scheme simplifies to
x_{n+1} = −2xn / [f(xn) − 2] = −2xn / (2xn³ − 7xn) = −2 / (2xn² − 7)
Then the iteration gives
n = 0: x1 = −2/(2x0² − 7) = −2/(2(1)² − 7) = 0.400
n = 1: x2 = −2/(2x1² − 7) = −2/(2(0.4)² − 7) = 0.2994
n = 2: x3 = −2/(2x2² − 7) = −2/(2(0.2994)² − 7) = 0.2932
Similarly, x4 = 0.2929 and x5 = 0.2929; hence the process converges at 0.2929 (4D), and therefore the root is 0.2929.
Example 2: Determine the root of the equation (x + 1)² e^(x² − 2) − 1 = 0, using the regula falsi method, given that the root lies in [0, 1].
Solution: Let f(x) = (x + 1)² e^(x² − 2) − 1; f(0) = −0.864665, f(1) = 0.471517 ⇒ f(0)f(1) = −0.407705 < 0, hence a solution of the equation lies in [0, 1].
Set λ = 0 and x0 = 1. Then
x1 = [0(0.471517) − 1(−0.864665)] / [0.471517 + 0.864665] = 0.647116; f(1)f(x1) < 0, so the new interval is [0.647116, 1].
Again, x2 = [0.647116(0.471517) − 1(−0.441884)] / [0.471517 + 0.441884] = 0.817834; f(1)f(x2) < 0, so the new interval is [0.817834, 1], and so on.
Hence, the root is 0.86687 accurate to 5 decimal places, and the number of iterations is 9.
Exercise: Show that each of the following equations has exactly one root and that in each case the root lies in the interval [1/2, 1]:
(1) a. x − cos x = 0
b. x² + log(x) = 0
c. x eˣ − 1 = 0
d. eˣ − 3x² = 0
(2) Find a real root of cos x − 3x + 5 = 0, correct to four decimal places, using the method of False Position.
3.3 Newton-Raphson Method
The Newton-Raphson (N-R) method, often called Newton's iteration scheme, converges faster than the bracketing methods. One drawback of this method is that it uses the derivative f′(x) of the function as well as the function f(x) itself; hence, the Newton-Raphson method is usable only in problems where f′(x) can be readily computed. Here, again, we assume that f(x) is continuous and differentiable and that the equation is known to have a solution near a given point.
The Newton-Raphson formula is given by
x_{n+1} = xn − f(xn)/f′(xn) (36)
Example: Use the Newton-Raphson method to find the real root near 2 of the equation x⁴ − 11x + 8 = 0, accurate to five decimal places.
Solution: Let f(x) = x⁴ − 11x + 8 ⇒ f′(x) = 4x³ − 11, and take x0 = 2.
f(x0) = f(2) = 2⁴ − 11(2) + 8 = 2, f′(x0) = f′(2) = 4(2³) − 11 = 21
n = 0: x1 = x0 − f(x0)/f′(x0) = 2 − 2/21 = 1.90476
n = 1: x2 = x1 − f(x1)/f′(x1) = 1.90476 − [1.90476⁴ − 11(1.90476) + 8]/[4(1.90476³) − 11] = 1.89209
n = 2: x3 = x2 − f(x2)/f′(x2) = 1.89209 − [1.89209⁴ − 11(1.89209) + 8]/[4(1.89209³) − 11] = 1.89188
n = 3: x4 = x3 − f(x3)/f′(x3) = 1.89188 − [1.89188⁴ − 11(1.89188) + 8]/[4(1.89188³) − 11] = 1.89188
Hence the root of the equation is 1.89188.
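A short MATLAB sketch of the Newton-Raphson iteration (36) applied to this example; the fixed count of 10 steps is an arbitrary choice (a tolerance test could be used instead).
f  = @(x) x.^4 - 11*x + 8;      % function from the example
df = @(x) 4*x.^3 - 11;          % its derivative
x = 2;                          % starting value near the root
for n = 1:10
    x = x - f(x)/df(x);         % Newton-Raphson step, Eq. (36)
end
x                               % approximately 1.89188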
Exercise:
1. Using the Newton-Raphson method, find a root of the function f(x) = eˣ − 3x² to an accuracy of 5 digits. The root is known to lie between 0.5 and 1.0; take the starting value x0 = 1.0. (Answer: x = 0.91001)
3.4 Method of Successive Approximation (Fixed-Point Iteration)
Suppose that the equation f(x) = 0 can be rearranged as x = g(x) such that f(α) = 0 (that is, α is a root of f(x)); then α = g(α), i.e. α is a fixed point of g.
An obvious iteration to try for the calculation of a fixed point is
x_{n+1} = g(xn), n = 0, 1, 2, . . . (37)
The iteration (37) converges if |g′(x)| < 1 in a neighbourhood of the root; the initial approximation x0 is chosen arbitrarily. There are many ways of rearranging f(x) = 0.
Example: Given the equation x² − 2x − 8 = 0, there are three possible rearrangements:
x = √(2x + 8),  x = (2x + 8)/x,  x = (x² − 8)/2
The corresponding iteration forms are
x_{n+1} = √(2xn + 8),  x_{n+1} = (2xn + 8)/xn,  x_{n+1} = (xn² − 8)/2
With the initial guess x0 = 5, the first form converges (to 4) at the 6th iteration, the second converges (to 4) at the 11th iteration, while the third does not converge.
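A minimal MATLAB sketch of the fixed-point iteration (37) for the first rearrangement; the stopping tolerance 1e-4 is an assumption, and the iteration count at convergence depends on the tolerance chosen.
g = @(x) sqrt(2*x + 8);     % first rearrangement of x^2 - 2x - 8 = 0
x = 5;                      % initial guess used in the example
for n = 1:50
    xnew = g(x);            % fixed-point step, Eq. (37)
    if abs(xnew - x) < 1e-4, break, end
    x = xnew;
end
x                           % converges to the root x = 4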
4 FINITE DIFFERENCE
The forward, backward and central differences are used to obtain interpolation formulas from a table of values at abscissae {xi} which are equally spaced.
4.1 Forward difference
For evenly spaced x-values, xr = x0 + rh, r = 0(1)n, we define
Δf(x) = f(x + h) − f(x), or Δf(xr) = f(xr + h) − f(xr), i.e. Δfr = f_{r+1} − fr
The symbol Δ is called the first forward difference operator.
The second-order difference is given by
Δ²fr = Δf_{r+1} − Δfr = f_{r+2} − 2f_{r+1} + fr
and higher-order differences are defined similarly. The differences are conveniently arranged in a forward difference table:
x     fr    Δfr             Δ²fr    Δ³fr
x0    f0
            Δf0 = f1 − f0
x1    f1                    Δ²f0
            Δf1 = f2 − f1           Δ³f0
x2    f2                    Δ²f1
            Δf2
x3    f3
⋮     ⋮
Table 2: Forward difference table for Example 1
x        fr     Δfr    Δ²fr   Δ³fr
x0 = 1   4
                9
x1 = 2   13            12
                21             6
x2 = 3   34            18
                39             6
x3 = 4   73            24
                63
x4 = 5   136
Exercise: Construct the forward difference table for the function f(x) = (x − 1)(x + 5) / [(x + 1)(x + 2)] with x0 = 0, h = 0.1 and r = 0(1)8.
The averaging (mean) operator µ (sometimes written σ) is defined by
µfr = ½ (f_{r+½} + f_{r−½}) (41)
Table 3: Backward difference table
x     fr    ∇fr    ∇²fr   ∇³fr
x0    f0
            ∇f1
x1    f1           ∇²f2
            ∇f2           ∇³f3
x2    f2           ∇²f3
            ∇f3
x3    f3
⋮     ⋮
In terms of the shift operator E,
µfr = ½ (f_{r+½} + f_{r−½}) = ½ (E^{1/2} + E^{−1/2}) fr ⇒ µ = ½ (E^{1/2} + E^{−1/2}) (45)
4
Example 1: Evaluate x3
E
solution: Let h be the step length
42
x3 = 42 E −1 x3 (E − 1)2 E −1 x3
E
= (E 2 − 2E + 1)E −1 x3 = Ex3 − 2x3 + E −1 x3
Example 2: Show that (Δ²/E) eˣ · (E eˣ)/(Δ² eˣ) = eˣ, the interval of differencing being h.
Solution:
(Δ²/E) eˣ = Δ² E⁻¹ eˣ = Δ² e^{x−h} = e^{−h} (Δ² eˣ) = e^{−h} · eˣ · (e^{h} − 1)²
Therefore
(Δ²/E) eˣ · (E eˣ)/(Δ² eˣ) = e^{−h} eˣ (e^{h} − 1)² · e^{x+h} / [eˣ (e^{h} − 1)²] = eˣ
4.4 Polynomial Interpolation
Given [x0, f(x0)], [x1, f(x1)], . . . , [xn, f(xn)], the process of finding the value of f(x) for some other value of x (lying between x0, x1, . . . , xn) is called interpolation.
Weierstrass theorem: If f(x) is continuous in [a, b], then it can be represented in [a, b] to any degree of accuracy by a polynomial of suitable degree; i.e. there exists a polynomial p(x) such that |f(x) − p(x)| < ε for all x ∈ [a, b], where ε > 0 is a preassigned number. This theorem is the justification for the replacement of a function by a polynomial.
4.4.1 Newton-Gregory Interpolation Formula
The Newton-Gregory (N-G) interpolation formula is used when the function values are given at equally spaced points. Let
u = (x − x0)/h
and let k = 0(1)n be an integer; then Eᵘ fk = (1 + Δ)ᵘ fk (since E = 1 + Δ). Expanding (1 + Δ)ᵘ by the binomial series and applying it to fk with k = 0, we obtain
Pn(x) = f0 + u Δf0 + [u(u − 1)/2!] Δ²f0 + [u(u − 1)(u − 2)/3!] Δ³f0 + · · · + [u(u − 1) · · · (u − n + 1)/n!] Δⁿf0 (47)
This is the nth-degree N-G interpolation formula, or N-G forward difference interpolation formula.
The associated truncation error is
E(x) = [u(u − 1)(u − 2) · · · (u − n)/(n + 1)!] h^{n+1} f^{(n+1)}(ξ), ξ ∈ (x0, xn)
Example 1:
2Na + 2H2O −→ 2NaOH + H2 (50)
Estimate the quantity y of sodium hydroxide (NaOH) formed at time x = 30 min from the following table, choosing x0 = 29.
time (min), x:    19    29    39    49    59
quantity (g), y:  41   103   168   218   235
Solution: Since the third differences are constant (with h = 10), a polynomial of degree 3 will fit the given data exactly.
x    y     Δfr   Δ²fr   Δ³fr
19   41
           62
29   103          3
           65           −18
39   168         −15
           50           −18
49   218         −33
           17
59   235
In this case xu = 30, x0 = 29, h = 10 and u = (xu − x0)/h = 0.1. Applying equation (47) along the diagonal starting at f0 = 103 (so Δf0 = 65, Δ²f0 = −15, Δ³f0 = −18), we have
y(30) ≈ 103 + 0.1(65) + [0.1(0.1 − 1)/2!](−15) + [0.1(0.1 − 1)(0.1 − 2)/3!](−18) = 103 + 6.5 + 0.675 − 0.513 ≈ 109.66 g
Example 2: Obtain the polynomial that fits the data in Table 5, using the Newton-Gregory forward difference formula.
Solution
Table 5: Forward difference table for Example 2
x     f(x)    Δfr     Δ²fr   Δ³fr
0      0.25
              −1.75
0.5   −1.50           1.50
              −0.25          0
1.0   −1.75           1.50
               1.25          0
1.5   −0.50           1.50
               2.75          0
2.0    2.25           1.50
               4.25          0
2.5    6.50           1.50
               5.75
3.0   12.25
Since the second differences are constant, i.e. Δⁿf(x) = 0 ∀ n ≥ 3, the required degree is 2. Hence, we fit a quadratic to the given data.
Now, u = (x − x0)/h = (x − 0)/0.5 = 2x, and the Newton forward difference formula of degree 2 gives
f(x) ≈ f0 + u Δf0 + [u(u − 1)/2!] Δ²f0 = 0.25 + 2x(−1.75) + [2x(2x − 1)/2](1.5) = 3x² − 5x + 0.25 (51)
4.4.2 Lagrange Interpolation Formula
Suppose the values of a given function f(x) are f(x0), f(x1), . . . , f(xn) at (possibly unevenly spaced) points xk, k = 0, 1, . . . , n. We can use Lagrange's formula to interpolate the given data as
f(x) ≈ Pn(x) = Σ_{i=0}^{n} Li(x) yi, where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj) (52)
function [l,L] = lagranp(x,y)   % header added so the fragment runs as a file; the name is illustrative
% x, y : vectors of data abscissae and ordinates
N = length(x)-1;                % the degree of the polynomial
l = 0;
for m = 1:N + 1
    P = 1;
    for k = 1:N + 1
        if k ~= m, P = conv(P,[1 -x(k)])/(x(m)-x(k)); end
    end
    L(m,:) = P;                 % Lagrange coefficient polynomial L_m(x)
    l = l + y(m)*P;             % Lagrange polynomial, Eq. (52)
end
Example 1 Find the Lagrange interpolating polynomial corresponding to the data points
(1, 7), (4, 13) and (7, 23), hence interpolate for x = 5.5.
Solution
x: 1  4  7
y: 7 13 23
P2(x) = L0(x) y0 + L1(x) y1 + L2(x) y2 (53)
L0(x) = (x − x1)(x − x2) / [(x0 − x1)(x0 − x2)] = (x − 4)(x − 7) / [(1 − 4)(1 − 7)] = (1/18)(x² − 11x + 28) (54)
L1(x) = (x − x0)(x − x2) / [(x1 − x0)(x1 − x2)] = (x − 1)(x − 7) / [(4 − 1)(4 − 7)] = −(1/9)(x² − 8x + 7) (55)
L2(x) = (x − x0)(x − x1) / [(x2 − x0)(x2 − x1)] = (x − 1)(x − 4) / [(7 − 1)(7 − 4)] = (1/18)(x² − 5x + 4) (56)
Substituting equations (54)-(56) in (53), we obtain
P2(x) = (1/9)(2x² + 8x + 53); when x = 5.5, P2(5.5) = 35/2 = 17.5
Example 2: From the given data, evaluate f(5) using Lagrange interpolation
x 1 4 7 9
f (x) 2 13 123 504
Since we have 4 data points, we can fit a polynomial of maximum degree 3:
Pn(x) = Σ_{i=0}^{n} Li(x) yi, where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)
n = 3 ⇒ P3(x) = Σ_{i=0}^{3} Li(x) yi = L0(x) y0 + L1(x) y1 + L2(x) y2 + L3(x) y3 (57)
L0(x) = (x − 4)(x − 7)(x − 9) / [(1 − 4)(1 − 7)(1 − 9)] = −(1/144)(x³ − 20x² + 127x − 252) (58)
L1(x) = (x − 1)(x − 7)(x − 9) / [(4 − 1)(4 − 7)(4 − 9)] = (1/45)(x³ − 17x² + 79x − 63) (59)
L2(x) = (x − 1)(x − 4)(x − 9) / [(7 − 1)(7 − 4)(7 − 9)] = −(1/36)(x³ − 14x² + 49x − 36) (60)
L3(x) = (x − 1)(x − 4)(x − 7) / [(9 − 1)(9 − 4)(9 − 7)] = (1/80)(x³ − 12x² + 39x − 28) (61)
Substituting equations (58)-(61) in (57) and simplifying, we obtain
P3(x) ≈ 3.1583x³ − 32.4000x² + 99.3417x − 68.1000, so when x = 5, P3(5) = 13.4
Remark: The major drawback of this method is that it is complicated and requires considerable computation if interpolation is required at a very large number of points (although MATLAB makes the computation much easier).
Exercise: Using Lagrange’s interpolation formula, find the value of y corresponding to
x = 10 from the following data.
x 5 6 9 11
f (x) 380 -2 196 508
4.4.3 Everett's Formula
Everett's formula uses the interpolation nodes x_{−m}, . . . , x0, x1, . . . , x_{m+1}. Generally we would have x0 < x < x1, and we define
u = (x − x0)/h,  v = 1 − u
The number of given points is 2m + 2 and the degree of the polynomial is 2m + 1. Everett's formula is given by
Pn(x) = v f0 + [v(v² − 1²)/3!] δ²f0 + [v(v² − 1²)(v² − 2²)/5!] δ⁴f0 + · · · + [v(v² − 1²)(v² − 2²) · · · (v² − m²)/(2m + 1)!] δ^{2m} f0
      + u f1 + [u(u² − 1²)/3!] δ²f1 + [u(u² − 1²)(u² − 2²)/5!] δ⁴f1 + · · · + [u(u² − 1²)(u² − 2²) · · · (u² − m²)/(2m + 1)!] δ^{2m} f1 (62)
Example 1: From the data given below, determine the value of f(x) when x = 5.5 using Everett's formula.
x 3 4 5 6 7 8
f (x) 205 240 259 262 250 224
solution: Since we are given 6 points, then 2m + 2 = 6 ⇒ m = 2
r    xr   fr    δfr   δ²fr   δ³fr   δ⁴fr   δ⁵fr
−2   3    205
                35
−1   4    240         −16
                19             0
 0   5    259         −16             1
                 3             1             −1
 1   6    262         −15             0
               −12             1
 2   7    250         −14
               −26
 3   8    224
The degree of the polynomial is n = 2m + 1 = 5, u = (x − x0)/h = (5.5 − 5)/1 = 0.5 and v = 1 − u = 0.5. Using equation (62), we have
f(5.5) ≈ 0.5(259) + [0.5(0.25 − 1)/3!](−16) + [0.5(0.25 − 1)(0.25 − 4)/5!](1) + 0.5(262) + [0.5(0.25 − 1)/3!](−15) + [0.5(0.25 − 1)(0.25 − 4)/5!](0)
       = 129.5 + 1.0 + 0.0117 + 131 + 0.9375 + 0 ≈ 262.45
5 NUMERICAL DIFFERENTIATION
Numerical differentiation is used to estimate the derivatives of a function whose values are given only for a set of arguments (tabular points). Differentiating the Newton-Gregory forward difference formula with respect to x gives
f′(x) = (1/h)[Δf0 + ((2u − 1)/2!) Δ²f0 + ((3u² − 6u + 2)/3!) Δ³f0 + · · ·] (65)
Example 1: Obtain the first and second derivatives at x = 1.0 of the function tabulated in Table 7.
Solution
Table 7: Numerical differentiation for Example 1
x      y        Δy      Δ²y     Δ³y
1.0    5.4680
                0.1985
1.1    5.6665           0.0614
                0.2599          0.0074
1.2    5.9264           0.0688
                0.3287          0.0074
1.3    6.2551           0.0763
                0.4050          0.0074
1.4    6.6601           0.0837
                0.4887
1.5    7.1488
Here x0 = 1.0, h = 0.1 and y0 = 5.4680. Applying Eqs. (67) and (68) (i.e. Eq. (65) and its derivative, with u = 0), we obtain
dy/dx |_{x=1.0} = (1/h)[Δf0 − ½ Δ²f0 + ⅓ Δ³f0 − ¼ Δ⁴f0 + · · ·]
               = (1/0.1)[0.1985 − ½(0.0614) + ⅓(0.0074)] ≈ 1.7027
d²y/dx² |_{x=1.0} = (1/h²)[Δ²f0 − Δ³f0 + (11/12) Δ⁴f0 − · · ·] = (1/0.1²)(0.0614 − 0.0074) = 5.4000
Example 2: Obtain the first and second derivatives of the function tabulated below at the
points x = 1.1 and x = 1.2.
Solution
Table 8: Numerical differentiation for Example 2
x      y       Δy      Δ²y     Δ³y
1.0    0
               0.128
1.2    0.128           0.288
               0.416           0.05
1.4    0.544           0.338
               0.754           0.05
1.6    1.298           0.388
               1.142           0.05
1.8    2.440           0.438
               1.580
2.0    4.020
Since x = 1.1 is not one of the tabulated points, we take as initial point the nearest tabular value, i.e. x0 = 1.0, and compute dy/dx and d²y/dx² using Eqs. (67) and (68) with u = (x − x0)/h = (1.1 − 1.0)/0.2 = 0.5. Therefore
dy/dx |_{x=1.1} = y′(1.1) = (1/h)[Δf0 + ((2u − 1)/2!) Δ²f0 + ((3u² − 6u + 2)/3!) Δ³f0 + · · ·]
               = (1/0.2)[0.128 + ((2(0.5) − 1)/2)(0.288) + ((3(0.5)² − 6(0.5) + 2)/6)(0.05)] = 0.62958
and
d²y/dx² |_{x=1.1} = y″(1.1) = (1/h²)[Δ²f0 + ((6u − 6)/3!) Δ³f0 + ((12u² − 36u + 22)/4!) Δ⁴f0 + · · ·]
                 = (1/0.2²)[0.288 + ((6(0.5) − 6)/6)(0.05)] = 6.575
Now, for x = 1.2 we take x0 = 1.2, so u = 0, and we obtain
dy/dx |_{x=1.2} = y′(1.2) = 1.31833 and d²y/dx² |_{x=1.2} = y″(1.2) = 7.2
Exercise
1.
2NaOH + 2HCl −→ 2NaCl + 2H2O
The chemical reaction given above shows the quantity f(x) of sodium hydroxide (in grams) with respect to the time taken (in hours). Obtain the polynomial that fits the data below, using the backward difference formula of appropriate degree at t = 2. Hence find the quantity of sodium chloride (NaCl) formed when t = 3/4 hr.
t (hr):    0    1/2    1    3/2    2    5/2
f(x) (g): 184   204   226   250   276   304
2. Find the first and second derivatives of the function tabulated below at the points x = 1.1, x = 1.2 and x = 1.9.
For a point near the end of a table, the backward difference formula is used:
f′(x) = (1/h)[∇f0 + ((2u + 1)/2!) ∇²f0 + ((3u² + 6u + 2)/3!) ∇³f0 + · · ·] (70)
time t(secs) 1 2 3 4 5 6
distance s(m) 0.0201 0.0844 0.3444 1.0100 2.3660 4.7719
Solution
t    s        ∇s      ∇²s     ∇³s     ∇⁴s    ∇⁵s
1    0.0201
              0.0643
2    0.0844           0.1957
              0.2600          0.2099
3    0.3444           0.4056          0.075
              0.6656          0.2848          0
4    1.0100           0.6904          0.075
              1.3560          0.3595
5    2.3660           1.0499
              2.4059
6    4.7719
As in the case of interpolation, the derivatives are more accurate near the centre of the range of fit; consequently, central difference formulas give better accuracy than forward difference formulas. Now,
f′(x0) = lim_{h→0} [f(x0 + ½h) − f(x0 − ½h)]/h = lim_{h→0} δf0/h (76)
This implies
f′(x0) ≈ (1/h) δf0 for h sufficiently small (77)
Similarly,
f″(x0) = lim_{h→0} [f′(x0 + ½h) − f′(x0 − ½h)]/h = lim_{h→0} [(1/h)δf_{½} − (1/h)δf_{−½}]/h = lim_{h→0} δ(f_{½} − f_{−½})/h² = lim_{h→0} δ²f0/h² (78)
Thus, for sufficiently small values of h,
f″(x0) ≈ (1/h²) δ²f0 with error O(h²) (79)
It can be shown by induction that
f⁽ⁿ⁾(x0) = (1/hⁿ) δⁿf0 + O(h²) (80)
Eq. (80) is the central difference formula for the nth derivative of f(x) at x = x0. For n = 2, we obtain
f″(x0) ≈ (1/h²)(f1 − 2f0 + f−1) (81)
Example: Using the central difference approach, estimate f″(0.2) for the function f(x) = exp(x) with h = 0.02; hence compute the error.
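The example is not worked out in these notes; a minimal MATLAB check of Eq. (81) is sketched below.
% Central difference estimate of f''(0.2) for f(x) = exp(x) with h = 0.02
f = @exp;  x0 = 0.2;  h = 0.02;
d2f = (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h^2   % Eq. (81)
err = abs(d2f - exp(x0))                        % exact value of f''(0.2) is exp(0.2)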
6 SYSTEM OF LINEAR EQUATIONS
Consider a system of n linear algebraic equations in n unknowns, written in matrix form as
Ax = b (82)
The system has a unique solution provided the rows (or columns) of the coefficient matrix are linearly independent, in the sense that no row (or column) is a linear combination of the other rows (or columns). If the coefficient matrix is singular, the equations may have an infinite number of solutions, or no solution at all, depending on the constant vector. Linear algebraic equations occur in almost all branches of engineering; their most important application is in the analysis of linear systems (any system whose response is proportional to the input is deemed to be linear). There are two fundamentally different approaches to solving systems of linear algebraic equations:
1. Direct elimination methods
2. Iterative methods
Examples of direct elimination methods include Gauss elimination, the matrix inverse
method and LU decomposition. Iterative methods obtain the solution by assuming a trial so-
lution (called initial guess). The assumed solution is substituted into the system of equations
in a repetitive manner until the scheme converges to the approximate roots. Examples of itera-
tive methods are Gauss-Jacobi iteration, Gauss-Seidel iteration and successive-over-relaxation
(SOR).
6.1 Solution by Gauss-Jacobi iteration
To solve equation (82) by this method, the matrix A must be strictly diagonally dominant. The following iterative scheme is derived from (82):
x1^{m+1} = (1/a11)[b1 − a12 x2^{m} − a13 x3^{m} − · · · − a1n xn^{m}]
x2^{m+1} = (1/a22)[b2 − a21 x1^{m} − a23 x3^{m} − · · · − a2n xn^{m}]
x3^{m+1} = (1/a33)[b3 − a31 x1^{m} − a32 x2^{m} − · · · − a3n xn^{m}] (83)
⋮
xn^{m+1} = (1/ann)[bn − an1 x1^{m} − an2 x2^{m} − · · · − a_{n,n−1} x_{n−1}^{m}]
The general case for n linear equations is
xi^{m+1} = (1/aii)[bi − Σ_{j=1, j≠i}^{n} aij xj^{m}], i = 1, 2, . . . , n (84)
If the initial approximation xi^{0} is not given, we may set xi^{0} = 0 for all i. The iteration continues until the stopping criterion |xi^{m+1} − xi^{m}| < ε, for a prescribed tolerance ε, is achieved. For a 3 × 3 matrix we assume that a11, a22 and a33 are the dominant coefficients of their respective equations, so that
|a11| ≥ |a12| + |a13|,  |a22| ≥ |a21| + |a23|,  |a33| ≥ |a31| + |a32| (85)
Example 1: Solve the following equations by Jacobi's method with initial approximation x^{0} = y^{0} = z^{0} = 0:
15x + 3y − 2z = 85
2x + 10y + z = 51
x − 2y + 8z = 5
Solution: The system is diagonally dominant (e.g. |15| ≥ |3| + |2| for the first equation), so we solve each equation for its diagonal unknown:
15x + 3y − 2z = 85 ⇒ x = (1/15)(85 − 3y + 2z)
2x + 10y + z = 51 ⇒ y = (1/10)(51 − 2x − z)
x − 2y + 8z = 5 ⇒ z = (1/8)(5 − x + 2y)
Using equation (83), we have
x^{m+1} = (1/15)(85 − 3y^{m} + 2z^{m})
y^{m+1} = (1/10)(51 − 2x^{m} − z^{m})
z^{m+1} = (1/8)(5 − x^{m} + 2y^{m})
For the first iteration (m = 0), we have
x^{1} = (1/15)(85 − 3y^{0} + 2z^{0}) = 17/3 ≈ 5.667
y^{1} = (1/10)(51 − 2x^{0} − z^{0}) = 51/10 = 5.100
z^{1} = (1/8)(5 − x^{0} + 2y^{0}) = 5/8 = 0.625
Second iteration (m = 1):
x^{2} = (1/15)(85 − 3y^{1} + 2z^{1}) = 4.730
y^{2} = (1/10)(51 − 2x^{1} − z^{1}) = 3.904
z^{2} = (1/8)(5 − x^{1} + 2y^{1}) = 1.192
Third iteration (m = 2):
x^{3} = (1/15)(85 − 3y^{2} + 2z^{2}) = 5.045
y^{3} = (1/10)(51 − 2x^{2} − z^{2}) = 4.035
z^{3} = (1/8)(5 − x^{2} + 2y^{2}) = 1.010
Continuing in the same way:
Fourth iteration:  x^{4} = 4.994, y^{4} = 3.990, z^{4} = 1.003
Fifth iteration:   x^{5} = 5.002, y^{5} = 4.001, z^{5} = 0.998
Sixth iteration:   x^{6} = 5, y^{6} = 4, z^{6} = 1
Seventh iteration: x^{7} = 5, y^{7} = 4, z^{7} = 1
Since two successive iterations agree, the solution is x = 5, y = 4, z = 1.
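A minimal MATLAB sketch of the Gauss-Jacobi scheme (84) for Example 1; the tolerance and iteration cap are illustrative choices.
% Gauss-Jacobi iteration for Example 1, starting from x = y = z = 0
A = [15 3 -2; 2 10 1; 1 -2 8];  b = [85; 51; 5];
x = zeros(3,1);  tol = 1e-4;
for m = 1:50
    xnew = (b - (A - diag(diag(A)))*x) ./ diag(A);   % Eq. (84), all components updated together
    if max(abs(xnew - x)) < tol, break, end
    x = xnew;
end
x            % converges to x = 5, y = 4, z = 1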
Example 2: Use the Jacobi iterative scheme to obtain the solution of the following system of equations correct to three decimal places, with initial guess (1, 1, 1):
x + 2y + z = 0
3x + y − z = 0
x − y + 4z = 3
Solution: Using condition (85), |1| < |2| + |1|, so the system as written is not diagonally dominant. Interchanging the first two equations gives the diagonally dominant arrangement
3x + y − z = 0
x + 2y + z = 0
x − y + 4z = 3
to which the Jacobi scheme can now be applied.
Exercises
1. Use the Jacobi iterative scheme to solve
5x + 2y − z = 6
x + 6y − 3z = 4
2x + y + 4z = 7
2. Use the Jacobi iterative scheme to obtain the solution of the following system of equations correct to two decimal places, starting from a suitable initial approximation:
5x1 − 2x2 + x3 = 4
x1 + 4x2 − 2x3 = 3
x1 + 2x2 + 4x3 = 17
3. Repeat for the system
4x − 2y + 7z = 6
2x + 3y − 4z = 9
x + y + 5z = 12
6.2 Solution by Gauss-Seidel iteration
Consider the system of Eq. (82). For this system of equations, we define the Gauss-Seidel method as:
x1^{m+1} = (1/a11)[b1 − a12 x2^{m} − a13 x3^{m} − · · · − a1n xn^{m}]
x2^{m+1} = (1/a22)[b2 − a21 x1^{m+1} − a23 x3^{m} − · · · − a2n xn^{m}]
x3^{m+1} = (1/a33)[b3 − a31 x1^{m+1} − a32 x2^{m+1} − a34 x4^{m} − · · · − a3n xn^{m}] (87)
⋮
xn^{m+1} = (1/ann)[bn − an1 x1^{m+1} − an2 x2^{m+1} − · · · − a_{n,n−1} x_{n−1}^{m+1}]
The general case for n linear equations is
xi^{m+1} = (1/aii)[bi − Σ_{j=1}^{i−1} aij xj^{m+1} − Σ_{j=i+1}^{n} aij xj^{m}], i = 1, 2, . . . , n (88)
You may notice that in the first equation of system (87) we substitute the initial approximation (x2^{0}, x3^{0}, · · · , xn^{0}) on the right-hand side. In the second equation we substitute (x1^{1}, x3^{0}, · · · , xn^{0}) on the right-hand side. In the third equation we substitute (x1^{1}, x2^{1}, x4^{0}, · · · , xn^{0}) on the right-hand side. We continue in this manner until all the components have been improved. At the end of this first iteration we have an improved vector (x1^{1}, x2^{1}, x3^{1}, · · · , xn^{1}). The entire process is then repeated; this method is also called the method of successive displacements.
Example 1: Solve the following equations by the Gauss-Seidel method:
8x + 2y − 2z = 8
x − 8y + 3z = −4
2x + y + 9z = 12
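The worked solution is not reproduced here; the sketch below applies the Gauss-Seidel scheme (88) to this system in MATLAB (tolerance and iteration cap are illustrative).
% Gauss-Seidel iteration for Example 1
A = [8 2 -2; 1 -8 3; 2 1 9];  b = [8; -4; 12];
x = zeros(3,1);  tol = 1e-4;
for m = 1:50
    xold = x;
    for i = 1:3                        % sweep through the equations, Eq. (88)
        j = [1:i-1, i+1:3];
        x(i) = (b(i) - A(i,j)*x(j)) / A(i,i);
    end
    if max(abs(x - xold)) < tol, break, end
end
x                                      % converges to x = y = z = 1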
7 NUMERICAL INTEGRATION
The general form of the problem of numerical integration may be stated as follows:
Given a set of data points (xi, yi), i = 0, 1, · · · , n of a function y = f(x), where f(x) is not explicitly known, we are required to evaluate the definite integral
I = ∫_a^b f(x) dx (90)
We derive a general formula for numerical integration using Newton's forward difference formula. We assume the interval (a, b) is divided into n equal subintervals such that
h = (b − a)/n,  a = x0 < x1 < · · · < xn = b (91)
with xn = x0 + nh, where h is the step size, n is the number of subintervals, and a and b are the limits of integration with b > a.
Hence, the integral in Eq. (90) can be written as
I = ∫_{x0}^{xn} f(x) dx (92)
Putting x = x0 + ph (so that dx = h dp) and replacing f(x) by its Newton forward difference expansion, we get
I = h ∫_0^n [f0 + p Δf0 + (p(p − 1)/2!) Δ²f0 + · · ·] dp (94)
Hence, after simplification, we get
I = ∫_{x0}^{xn} f(x) dx = nh [f0 + (n/2) Δf0 + (n(2n − 3)/12) Δ²f0 + (n(n − 2)²/24) Δ³f0 + · · ·] (95)
7.1 TRAPEZOIDAL RULE
Substituting n = 1 in Eq. (95) (u ≡ n, compare Eq. (47)) and treating the curve y = f(x) through the points (x0, y0) and (x1, y1) as a straight line (a polynomial of first degree, so that differences of order higher than the first vanish), then summing over the n subintervals, we obtain
I = Σ_{r=1}^{n} Ir = ∫_{x0}^{xn} f(x) dx = (h/2)[f0 + 2(f1 + f2 + · · · + f_{n−1}) + fn] (96)
7.1.1 Error Estimate in Trapezoidal Rule
Let y = f(x) be a continuous function with continuous derivatives in the interval [x0, xn]. Expanding y in a Taylor series around x = x0, we get
∫_{x0}^{x1} f(x) dx = ∫_{x0}^{x1} [f0 + (x − x0) f0′ + ((x − x0)²/2!) f0″ + ((x − x0)³/3!) f0‴ + · · ·] dx
                    = h f0 + (h²/2) f0′ + (h³/6) f0″ + (h⁴/24) f0‴ + · · · (97)
Again, applying the trapezoidal rule to the first subinterval,
∫_{x0}^{x1} f(x) dx ≈ (h/2)(f0 + f1) = (h/2)[f0 + f(x0 + h)] = (h/2)[f0 + f0 + h f0′ + (h²/2) f0″ + (h³/6) f0‴ + · · ·]
                    = h f0 + (h²/2) f0′ + (h³/4) f0″ + (h⁴/12) f0‴ + · · · (98)
Hence, the error e1 in (x0, x1) is obtained from Eqs. (97) and (98) as
e1 = −(1/12) h³ f0″ + · · · (99)
Similarly, we have
e2 = −(1/12) h³ f1″ + · · · , e3 = −(1/12) h³ f2″ + · · · , and so on (100)
3. ∫_0^{1.2} e^{x²} dx
4. ∫_0^{1.2} x² e^{x²} dx
5. ∫_0^{1/2} 1/(1 + x²) dx
6. ∫_0^{1} sin(x²) dx
Solution 1 (for ∫_0^{1.2} eˣ dx with n = 6, h = 0.2):
I = (h/2)[f0 + f6 + 2(f1 + f2 + f3 + f4 + f5)] = (0.2/2)[1 + 3.320 + 2(1.221 + 1.492 + 1.822 + 2.226 + 2.718)] ≈ 2.32785
The exact value is ∫_0^{1.2} eˣ dx = 2.32012.
Solution 2: with a = 0, b = 1.2, n = 6 and h = (b − a)/n = 0.2,
I = (h/2)[f0 + f6 + 2(f1 + f2 + f3 + f4 + f5)] = 1.49532
No exact value is quoted.
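A minimal MATLAB sketch of the composite trapezoidal rule (96) for the first worked integral; trapz is the built-in equivalent.
% Composite trapezoidal rule for the integral of exp(x) on [0, 1.2] with n = 6
f = @(x) exp(x);  a = 0;  b = 1.2;  n = 6;
x = linspace(a, b, n+1);  h = (b - a)/n;
I = h/2 * (f(x(1)) + f(x(end)) + 2*sum(f(x(2:end-1))))   % about 2.32785
Icheck = trapz(x, f(x))                                  % built-in check, same value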
7.2 SIMPSON'S 1/3 RULE
Substituting n = 2 in Eq. (95) (u ≡ n, compare Eq. (47)) and taking the curve through the points (x0, y0), (x1, y1) and (x2, y2) as a polynomial of second degree (a parabola), so that differences of order higher than two vanish, then summing over successive pairs of subintervals, we obtain
I = ∫_{x0}^{xn} f(x) dx = (h/3)[f0 + 4(f1 + f3 + f5 + · · · + f_{2n−1}) + 2(f2 + f4 + f6 + · · · + f_{2n−2}) + f_{2n}] (103)
Equation (103) is known as Simpson's 1/3 rule. Simpson's 1/3 rule requires that the whole range (the given interval) be divided into an even number of equal subintervals.
7.2.1 Error Estimate in Simpson's 1/3 Rule
Expanding the exact integral over (x0, x2) in a Taylor series about x0 gives
∫_{x0}^{x2} f(x) dx = 2h f0 + 2h² f0′ + (4/3) h³ f0″ + (2/3) h⁴ f0‴ + (4/15) h⁵ f0⁗ + · · · (104)
Applying Simpson's rule to the same pair of subintervals,
(h/3)[f0 + 4f1 + f2] = (h/3)[f0 + 4f(x0 + h) + f(x0 + 2h)]
= (h/3)[f0 + 4(f0 + h f0′ + (h²/2!) f0″ + (h³/3!) f0‴ + · · ·) + (f0 + 2h f0′ + ((2h)²/2!) f0″ + ((2h)³/3!) f0‴ + · · ·)]
= 2h f0 + 2h² f0′ + (4/3) h³ f0″ + (2/3) h⁴ f0‴ + (5/18) h⁵ f0⁗ + · · · (105)
Hence, from Eqs. (104) and (105), the error in the subinterval (x0, x2) is given by
e1 = ∫_{x0}^{x2} f(x) dx − (h/3)[f0 + 4f1 + f2] = [(4/15) − (5/18)] h⁵ f0⁗ + · · · ≈ −(1/90) h⁵ f0⁗ (106)
Similarly,
e2 = −(1/90) h⁵ f2⁗, e3 = −(1/90) h⁵ f4⁗, and so on (107)
Example 2: Solve the examples worked under the trapezoidal rule using Simpson's 1/3 rule.
(The answer to the first one is 2.3201374482 and to the second example 1.453296.)
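A minimal MATLAB sketch of the composite Simpson's 1/3 rule (103), applied to the first integral; n must be even.
% Composite Simpson's 1/3 rule for the integral of exp(x) on [0, 1.2] with n = 6
f = @(x) exp(x);  a = 0;  b = 1.2;  n = 6;
x = linspace(a, b, n+1);  h = (b - a)/n;
I = h/3 * (f(x(1)) + f(x(end)) + 4*sum(f(x(2:2:end-1))) + 2*sum(f(x(3:2:end-2))))
% about 2.3201374, in agreement with the answer quoted above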
8 NUMERICAL SOLUTION OF FIRST ORDER ORDINARY DIFFERENTIAL EQUATIONS
Suppose we require the solution of
dy/dx = f(x, y), subject to the initial condition y(x0) = y0 (108)
Among the methods for doing this are the Taylor series method, Picard's method of successive approximation and Euler's method. We begin with the Taylor series method, which advances the solution through
y_{n+1} = yn + (h/1!) yn′ + (h²/2!) yn″ + (h³/3!) yn‴ + · · · (109)
Example 1: Solve dy/dx − 5y = 0, given y(0) = 1. Estimate y(0.1) and y(0.2), and compare your results with the exact solution y = e^{5x}.
Solution:
x0 = 0, h = 0.1, y0 = 1. From y′ = 5y we get y0′ = 5y0 = 5, y″ = 5y′ ⇒ y0″ = 5y0′ = 25, y‴ = 5y″ ⇒ y0‴ = 5y0″ = 125, y⁗ = 5y‴ ⇒ y0⁗ = 5y0‴ = 625.
Applying Eq. (109), we have
y1 = y0 + (h/1!) y0′ + (h²/2!) y0″ + (h³/3!) y0‴ + (h⁴/4!) y0⁗ + · · ·
   = 1 + 0.1(5) + (0.1²/2)(25) + (0.1³/6)(125) + (0.1⁴/24)(625) + · · · = 1.64844
Then y1 = 1.64844, y1′ = 5y1 = 8.2422, y1″ = 5y1′ = 41.211, y1‴ = 5y1″ = 206.055, y1⁗ = 5y1‴ = 1030.28, and
y2 = y(0.2) = y1 + (h/1!) y1′ + (h²/2!) y1″ + (h³/3!) y1‴ + (h⁴/4!) y1⁗ + · · ·
   = 1.64844 + 0.1(8.2422) + (0.1²/2)(41.211) + (0.1³/6)(206.055) + (0.1⁴/24)(1030.28) = 2.71735
Example 2: Obtain the solution of the IVP y′ + y = e^{3x}, y(0) = 1, using the Taylor series method. Compare your results with the exact solution y = ¼ e^{3x} + ¾ e^{−x} at x = 0.2 and x = 0.4.
Solution:
x0 = 0, h = 0.2, y0 = 1. From y′ = e^{3x} − y: y0′ = e^{0} − 1 = 0, y″ = 3e^{3x} − y′ ⇒ y0″ = 3 − 0 = 3, y‴ = 9e^{3x} − y″ ⇒ y0‴ = 9 − 3 = 6, y⁗ = 27e^{3x} − y‴ ⇒ y0⁗ = 27 − 6 = 21.
Applying Eq. (109), we have y1 = y(0.2) = 1.0694.
At x1 = 0.2, y1 = 1.0694: y1′ = e^{3x1} − y1 = 0.752719, y1″ = 3e^{3x1} − y1′ = 4.71364, y1‴ = 9e^{3x1} − y1″ = 11.68545, y1⁗ = 27e^{3x1} − y1‴ = 37.5117.
Applying Eq. (109) again, we obtain y2 = y(0.4) = 1.3323.
Exercise
Using the Taylor series method, obtain the solutions of the following IVPs correct to 4 decimal places:
(a) dy/dx = 2xy, y(1) = 1. The exact solution is y = e^{x² − 1}.
(b) y′ = y² + 1, y(0) = 0. The exact solution is y = tan x.
Picard's method of successive approximation generates the iterates
y_{n}(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt, n = 1, 2, . . . (111)
Example: Solve dy/dx − 5y = 0, given y(0) = 1, and evaluate the fourth iterate y4 at x = 0.1, 0.2 and 0.3.
Solution: y′ = 5y ⇒ f(x, y) = 5y, and from y(0) = 1 we have x0 = 0, y0 = 1. Applying Eq. (111) repeatedly, starting with n = 1, we obtain
y1 = y0 + ∫_0^x f(x, y0) dx = 1 + ∫_0^x 5y0 dx = 1 + ∫_0^x 5 dx = 1 + 5x
y2 = y0 + ∫_0^x f(x, y1) dx = 1 + ∫_0^x 5(1 + 5x) dx = 1 + 5x + (25/2)x²
y3 = y0 + ∫_0^x f(x, y2) dx = 1 + ∫_0^x 5[1 + 5x + (25/2)x²] dx = 1 + 5x + (25/2)x² + (125/6)x³
y4 = y0 + ∫_0^x f(x, y3) dx = 1 + ∫_0^x 5[1 + 5x + (25/2)x² + (125/6)x³] dx = 1 + 5x + (25/2)x² + (125/6)x³ + (625/24)x⁴
These are simply the partial sums of the series for the exact solution e^{5x}.
Exercise: Obtain the solution of the IVP y′ + y = e^{3x}, y(0) = 1, using Picard's method. Compare your results with the exact solution y = ¼ e^{3x} + ¾ e^{−x} at x = 0.1, x = 0.3 and x = 0.4.
Euler's method advances the solution one step at a time through
y_{n+1} = yn + h f(xn, yn) (112)
Example 1: Solve dy/dx − 5y = 0, given y(0) = 1. Estimate y(0.1) and y(0.2), and compare your results with the exact solution y = e^{5x}.
Solution: x0 = 0, y0 = 1, f(x, y) = 5y ⇒ f(xn, yn) = 5yn, h = 0.1, so the scheme becomes
y_{n+1} = yn + 5h yn = 1.5 yn (113)