
MAT 202: Introduction to Numerical Analysis

K. Issa, Ph.D.
[email protected], [email protected]
Department of Statistics and Mathematical Sciences,
Kwara State University, Malete, Ilorin, Nigeria.

Course Outlines

Solution of algebraic and transcendental equations. Curve fitting, error analysis, interpolation
and approximation. Zeros of non-linear equations in one variable. Systems of linear equations.
Numerical differentiation and integration. Initial value problems in ODEs.
Recommended Textbooks

• Mathematical Methods for Physics and Engineering, Third Edition, by K. F. Riley, M. P. Hobson and S. J. Bence.

• Advanced Engineering Mathematics by Alan Jeffrey

• Advanced Engineering Mathematics by Kreyszig E.

• Numerical Methods, 2010 Edition, by Rao V. Dukkipati

• Numerical Methods for Engineers and Scientists, 2nd Edition, by Joe D. Hoffman

Contents
1 CURVE FITTING 4
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Direct fit polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 ERROR ANALYSIS 7
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 The effects of errors on the basic operation of Arithmetic . . . . . . . . . . . . . 10
2.2.1 Addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.2 Subtraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.3 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.4 Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.5 Error in function evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 13

3 SOLUTION OF NON LINEAR EQUATIONS 14


3.1 Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Method of False Position (Regula Falsi Method) . . . . . . . . . . . . . . . . 18
3.3 Newton-Raphson Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.4 Successive Approximation Method (Fixed point iteration) . . . . . . . . . . . . . 22

4 FINITE DIFFERENCE 22
4.1 Forward difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.2 Backward difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.3 Central differences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.4.1 Newton-Gregory Interpolation Formula . . . . . . . . . . . . . . . . . . . 26
4.4.2 Lagrange Interpolation Formula . . . . . . . . . . . . . . . . . . . . . . . 28
4.4.3 Everett’s Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

5 NUMERICAL DIFFERENTIATION AND INTEGRATION 31


5.1 Numerical differentiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.1.1 Derivatives based on forward difference formula . . . . . . . . . . . . . . 32
5.1.2 Derivatives based on backward difference formula . . . . . . . . . . . . . 35
5.1.3 Derivatives based on central difference formula . . . . . . . . . . . . . . . 36

6 SYSTEM OF LINEAR EQUATIONS 37
6.1 Solution by Gauss-Jacobi iteration . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.2 Solution by Gauss-Seidel iteration . . . . . . . . . . . . . . . . . . . . . . . . . . 42

7 NUMERICAL INTEGRATION 44
7.1 TRAPEZOIDAL RULE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.1.1 Error Estimate in Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . 46
7.2 SIMPSON'S 1/3 RULE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.2.1 Error Estimate in Simpson's 1/3 Rule . . . . . . . . . . . . . . . . . . . . 48

8 NUMERICAL SOLUTION OF FIRST ORDER ORDINARY DIFFERENTIAL EQUATION 49
8.1 Taylor’s series method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8.2 Picard’s method of successive approximation . . . . . . . . . . . . . . . . . . . . 50
8.3 Euler’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

1 CURVE FITTING
1.1 Introduction

Suppose a set of data is given as points xi and corresponding values f(xi), for i = 0, 1, · · · , n.
The process of obtaining a function to represent the data by a curve or line is often
referred to as curve fitting. The three most commonly used approximating functions are

1. polynomial

2. Trigonometric

3. Exponential

Polynomials can be fit to a set of discrete data in two ways, namely

a. Exact fit

b. Approximate fit

The general nth degree polynomial is given by

Pn (x) = a0 + a1 x + a2 x2 + · · · + an xn (1)

An exact fit passes exactly through all the discrete data points. This type of fit is useful for
small sets of smooth data.
An approximate fit yields a polynomial that passes through the set of data in the best manner
possible without being required to pass exactly through any of the data points. Approximate
fits are useful for large sets of smooth data.
A set of discrete data may be equally spaced or unequally spaced in the independent variable
x. In the general case, where the data are unequally spaced, several procedures can be used to
fit approximating polynomials. Such procedures include

a. direct fit polynomials

b. Lagrange polynomials

c. Divided difference polynomials

When the data are equally spaced, procedures based on differences are used, these include

i) Newton forward-difference polynomials

ii) Newton backward-difference polynomials

iii) Stirling centered difference polynomial

iv) Bessel centered difference polynomial

The property of polynomials that makes them suitable as approximating functions is stated by
the Weierstrass approximation theorem:
If f(x) is a continuous function on the closed interval a ≤ x ≤ b, then for every ε > 0 there exists a
polynomial Pn(x), where the value of n depends on the value of ε, such that for all x ∈ [a, b]

|Pn(x) − f(x)| < ε (2)

1.2 Direct fit polynomials

Here, we shall consider a completely general procedure for fitting a polynomial to a set of equally
spaced or unequally spaced data. Given the n + 1 data points (x0, f(x0)), (x1, f(x1)), · · · , (xn, f(xn)),
determine the unique nth-degree polynomial Pn(x) that passes exactly through the n + 1 points:

Pn (x) = a0 + a1 x + a2 x2 + · · · + an xn (3)

For simplicity of notation, let f(xi) = fi. Substituting each data point into Equation (3) yields
n + 1 equations:

f0 = a0 + a1x0 + a2x0^2 + · · · + anx0^n
f1 = a0 + a1x1 + a2x1^2 + · · · + anx1^n
.. .. ..
fn = a0 + a1xn + a2xn^2 + · · · + anxn^n

These are n + 1 linear equations in the n + 1 coefficients a0 to an, which can be solved by
Gaussian elimination. The resulting polynomial is the unique nth-degree polynomial that passes
exactly through the n + 1 data points. The direct fit polynomial procedure works for both equally
spaced and unequally spaced data.

The following MATLAB code computes the polynomial approximation, plots it, and reports the maximum error:
clear all
close all
a=input('the initial value=');
b=input('the end point=');
l=input('number of points=');   % the error is computed on these l points
x=linspace(a,b,l);              % x = [x0 x1 ... x_{l-1}], spacing h=(b-a)/(l-1)
d=input('degree of approximation=');
approx=input('the function to be approximated (in terms of x)=');
p=polyfit(x,approx,d);          % least-squares fit of degree d
y1=polyval(p,x);                % evaluate the fitted polynomial at the nodes
max_err=max(abs(approx-y1))
figure
plot(x,y1,'o')
hold on
plot(x,approx,'r')
hold off

Example 1: Consider y = f(x) = e^x. Find the 2nd, 3rd and 4th degree polynomial approximations
to the function over the interval 1.1 ≤ x ≤ 1.5; hence compute the interpolant y at x = 1.22.
Solution (for the 4th degree polynomial): y(1.22) = f(1.22) = e^1.22 = 3.387188.
For degree 4 we use five equally spaced points:

xi     1.1        1.2        1.3        1.4        1.5
f(xi)  3.004166   3.320117   3.669297   4.055200   4.481689

From equation (3), we obtain

x = 1.1 ⇒ 3.004166 = a0 + (11/10)a1 + (121/100)a2 + (1331/1000)a3 + (14641/10000)a4   (i)
x = 1.2 ⇒ 3.320117 = a0 + (6/5)a1 + (36/25)a2 + (216/125)a3 + (1296/625)a4   (ii)
x = 1.3 ⇒ 3.669297 = a0 + (13/10)a1 + (169/100)a2 + (2197/1000)a3 + (28561/10000)a4   (iii)
x = 1.4 ⇒ 4.055200 = a0 + (7/5)a1 + (49/25)a2 + (343/125)a3 + (2401/625)a4   (iv)
x = 1.5 ⇒ 4.481689 = a0 + (3/2)a1 + (9/4)a2 + (27/8)a3 + (81/16)a4   (v)

Solving equations (i)-(v) simultaneously, we obtain a0 = 1.0902326, a1 = 0.66170379, a2 =
0.996498, a3 = −0.183260, a4 = 0.153142.
Therefore P4(x) = 1.0902326 + 0.66170379x + 0.996498x^2 − 0.183260x^3 + 0.153142x^4, and
P4(1.22) = 3.38718731.
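As a cross-check, the fit can be reproduced numerically. A sketch using NumPy (a degree-4 polyfit through five points is an exact interpolating fit, equivalent to solving the system (i)-(v)):

```python
import numpy as np

# Five equally spaced samples of f(x) = e^x on [1.1, 1.5], as in Example 1
x = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
f = np.exp(x)

# A degree-4 fit through 5 points interpolates the data exactly
coeffs = np.polyfit(x, f, 4)       # highest-degree coefficient first
p_122 = np.polyval(coeffs, 1.22)   # interpolant at x = 1.22

print(p_122)   # close to e^1.22 = 3.387188
```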
Exercise 1: Find the 2nd and 5th degree polynomial approximation to the function

1. f(x) = 1/x, 1 ≤ x ≤ 3

2. f(x) = exp(x), −1 ≤ x ≤ 1

3. f (x) = sin(x), 0 ≤ x ≤ 4π

2 ERROR ANALYSIS
2.1 Introduction

Numerical analysis provides approximation methods for the true desired solution of mathematical
problems. It is therefore important to be able to estimate the error involved in such
approximations; consequently, the study of error is of central concern in numerical analysis.
Errors are also introduced by the computational process itself: computers perform mathematical
operations with only a finite number of digits. If the number yn is an approximation to the
exact result y, then the difference y − yn is called the error.
Hence, exact value = approximate value + error.
In numerical computations, we come across the following types of errors:

(a) Absolute and relative errors

(b) Inherent errors

(c) Round-off errors

(d) Truncation errors

(e) Human error

(a) Absolute and relative errors: If XE is the exact or true value of a quantity and XA is
its approximate value, then |XE − XA| is called the absolute error Ea. Therefore the absolute
error is

Ea = |XE − XA| (4)

and the relative error is defined by

Er = (XE − XA)/XE (5)

provided XE ≠ 0. The percentage relative error is

Ep = 100Er = 100(XE − XA)/XE (6)

Significant digits: The concept of a significant figure, or digit, has been developed to
formally define the reliability of a numerical value. The significant digits of a number are
those that can be used with confidence. If XE is the exact or true value and XA is an
approximation to XE, then XA is said to approximate XE to t significant digits if t is the
largest non-negative integer for which

|(XE − XA)/XE| < 5 × 10^−t (7)

Example 1: If X = 86.36 (exact value) and x = 86 (approximate value), then the error is
e = X − x = 0.36, the absolute error is Ea = |0.36| = 0.36, the relative error is
e/X = 0.36/86.36 = 0.004169, and the percentage relative error is Ep = 100 × 0.004169 = 0.4169%.
Example 2: If XE = exp(1) is approximated by XA = 2.71828, to how many significant
digits does XA approximate XE?

|(XE − XA)/XE| = |(exp(1) − 2.71828)/exp(1)| = 0.67 × 10^−6 < 5 × 10^−6

so XA approximates XE to t = 6 significant digits.
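The criterion in equation (7) can be applied mechanically. A small sketch (the helper name significant_digits is ours, not part of the text):

```python
import math

def significant_digits(exact, approx):
    """Largest non-negative t with |(exact - approx)/exact| < 5*10^(-t), per Eq. (7)."""
    rel = abs((exact - approx) / exact)
    if rel == 0:
        return float("inf")   # exact agreement
    t = 0
    while rel < 5 * 10.0 ** -(t + 1):
        t += 1
    return t

print(significant_digits(math.e, 2.71828))   # 6, as in Example 2
print(significant_digits(86.36, 86))         # 3, for Example 1
```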

Example 3:
Let the exact or true value be 20/3 and the approximate value 6.666.
The absolute error is 0.000666 · · · = 2/3000.
The relative error is (2/3000)/(20/3) = 1/10000, therefore the number of significant digits is 4.

(b) Inherent errors: Inherent errors are errors that pre-exist in the problem statement
itself, before a solution is attempted. Inherent errors exist because the data are themselves
approximate or because of the limitations of calculation on digital computers. Inherent
errors cannot be completely eliminated, but they can be minimised if we select better data or
by employing high-precision computer computations.

(c) Round-off error: Most numbers have infinite decimal representations and therefore have
to be rounded in calculations. The error introduced by the omission of significant figures, due
to the finite precision of the computer, is called round-off error.
For example, (i) 13.6895 ≈ 13.690 (3D or 5 s.f.) and Error = 13.6895 − 13.690 = −0.0005 = −5 × 10^−4
(ii) 11.6955 ≈ 11.696 and Error = 11.6955 − 11.696 = −0.0005 = −5 × 10^−4
Note: When a number is rounded to k decimal places, the round-off error satisfies

|round-off error| ≤ (1/2)10^−k (8)

Example 1: Suppose 2.365 has been rounded off to 3 D.P.; then the round-off error is at most
(1/2)10^−3 = 0.0005.

8
Example 2: The number π is approximated to 4 decimal places. (i) Determine the
relative error due to chopping and express it as a per cent. (ii) Determine the relative
error due to rounding and express it as a per cent.
Solution: (i) Chopping to 4 decimal places gives 3.1415, so the relative error has magnitude

|Er| = |(3.1415 − π)/π| = 2.949 × 10^−5 or 0.002949%

(ii) Rounding to 4 decimal places gives 3.1416, so the relative error has magnitude

|Er| = |(3.1416 − π)/π| = 2.338 × 10^−6 or 0.0002338%
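A quick numerical check of both relative errors:

```python
import math

er_chop = abs((3.1415 - math.pi) / math.pi)    # chopping pi to 4 decimals
er_round = abs((3.1416 - math.pi) / math.pi)   # rounding pi to 4 decimals

print(f"{er_chop:.3e}")    # about 2.949e-05, i.e. 0.002949%
print(f"{er_round:.3e}")   # about 2.338e-06, i.e. 0.0002338%
```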
Exercise: Use Taylor series expansions to predict f(2) for f(x) = ln(x) with a base
point at x = 1; hence determine the relative error and percentage relative error of the
approximation.

(d) Truncation errors: Truncation errors are defined as those errors that result from using
an approximation in place of an exact mathematical procedure. Truncation error results
from terminating a series after a finite number of terms, known as formula truncation error or
simply truncation error.
Let a function f(x) be infinitely differentiable in an interval which includes the point
x = a. Then the Taylor series expansion of f(x) about x = a is given by

f(x) = Σ_{k=0}^{∞} f^(k)(a)(x − a)^k / k! (9)

where f^(k)(a) denotes the kth derivative of f(x) evaluated at x = a. If the series is
truncated after n terms, it is equivalent to approximating f(x) with a polynomial of
degree n − 1:

fn(x) = Σ_{k=0}^{n−1} f^(k)(a)(x − a)^k / k! (10)

The truncation error En(x) is equal to the sum of the neglected higher-order terms
and is often called the tail of the series. The tail is given by

En(x) = f(x) − fn(x) = f^(n)(ξ)(x − a)^n / n! (11)

for some ξ between a and x.

Example 1: Given f(x) = sin x, (i) expand f(x) about x = 0 using the Taylor series; (ii)
truncate the series to n = 6 terms; (iii) find the relative error at x = π/4 due to the truncation
in (ii).
Solution: (i) The Taylor series expansion of sin x is given by

f(x) = sin x = Σ_{k=0}^{∞} f^(k)(0)x^k / k! = x − x^3/3! + x^5/5! − x^7/7! + · · · (12)

(ii) Truncating the Taylor series to n = 6 terms gives

f6(x) = x − x^3/3! + x^5/5! (13)

(iii) Applying equation (5), the relative error at x = π/4 due to the truncation in (ii)
becomes

Er6 = (f6(π/4) − sin(π/4)) / sin(π/4) = (π/4 − (π/4)^3/3! + (π/4)^5/5! − sin(π/4)) / sin(π/4) = 5.129 × 10^−5 (14)
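A numerical check of equation (14), as a sketch:

```python
import math

x = math.pi / 4
f6 = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)   # six-term truncation of sin x
er6 = (f6 - math.sin(x)) / math.sin(x)                          # relative truncation error

print(f"{er6:.3e}")   # about 5.129e-05
```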

(e) Human error: These occur through misuse of the method applied; they are errors created by the
person performing the calculation. A common error is the transposition of digits in a number,
such as writing 89.3285 as 89.3825.

2.2 The effects of errors on the basic operations of arithmetic


2.2.1 Addition

If x1, x2 are approximations to the values X1, X2 respectively, and e1 and e2 are the
corresponding errors (of any type) in these approximations, then X1 = x1 + e1 ⇒ e1 = X1 − x1 and
X2 = x2 + e2 ⇒ e2 = X2 − x2. Let X = X1 + X2 (exact) and x = x1 + x2 (approximate); then

|e| = |X − x| = |(X1 + X2) − (x1 + x2)| = |(X1 − x1) + (X2 − x2)| = |e1 + e2| ⇒ |e| ≤ |e1| + |e2| (15)

Thus, the absolute error in the sum of two numbers is less than or equal to the sum of the
absolute errors in the numbers.
If the error e1 is due to rounding-off X1 to k1 decimal places and the error e2 is due to rounding-off
X2 to k2 decimal places, then |e1| ≤ (1/2)10^−k1 and |e2| ≤ (1/2)10^−k2, therefore

|e| ≤ |e1| + |e2| ≤ (1/2)10^−k1 + (1/2)10^−k2 (16)

Thus, the error e = e1 + e2 in the approximation x = x1 + x2 to the true value X = X1 + X2
lies within ±(1/2)(10^−k1 + 10^−k2).


Example 1: The numbers X1 and X2 when rounded-off to 3D are 3.724 and 4.037 respectively.
Evaluate an approximation to X1 + X2 and discuss the error involved.
Solution: Let X1 ≈ x1 = 3.724 (3D) ⇒ |e1| = |X1 − x1| ≤ (1/2)10^−3, and X2 ≈ x2 =
4.037 (3D) ⇒ |e2| = |X2 − x2| ≤ (1/2)10^−3. Let X = X1 + X2 ≈ x1 + x2 = x = 3.724 + 4.037 =
7.761.
Then |e| = |X − x| ≤ |e1| + |e2| ≤ (1/2)10^−3 + (1/2)10^−3 = 0.0005 + 0.0005 = 0.001, so
|X − x| ≤ 0.001, therefore

−0.001 ≤ X − x ≤ 0.001 ⇒ x − 0.001 ≤ X ≤ x + 0.001

7.761 − 0.001 ≤ X ≤ 7.761 + 0.001 (17)

7.760 ≤ X ≤ 7.762, ∴ X = 7.76 (2D or 3 S.F.)


Example 2: The numbers X and Y when rounded-off to 4 S.F. are 23.86 and 0.01762 respectively.
Evaluate an approximation to X + Y and discuss the error involved.
Solution: Let X ≈ x = 23.86 (2D) ⇒ |e1| = |X − x| ≤ (1/2)10^−2, and Y ≈ y = 0.01762 (5D) ⇒ |e2| =
|Y − y| ≤ (1/2)10^−5. Let Z = X + Y ≈ x + y = z = 23.86 + 0.01762 = 23.87762 ≈ 23.878.
Thus |e| = |Z − z| ≤ |e1| + |e2| ≤ (1/2)10^−2 + (1/2)10^−5 = 0.005005 ≈ 0.005, so |Z − z| ≤ 0.005,
therefore

−0.005 ≤ Z − z ≤ 0.005 ⇒ z − 0.005 ≤ Z ≤ z + 0.005

23.878 − 0.005 ≤ Z ≤ 23.878 + 0.005 (18)

23.873 ≤ Z ≤ 23.883, ∴ Z = 23.9 (1D or 3 S.F.)


Exercise: The numbers A and B when rounded-off to 3D are 6.324 and 2.058 respectively.
Evaluate an approximation to A + B and discuss the error involved.
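The bound in equation (16) is easy to apply mechanically; a sketch for Example 1 above (the helper name sum_error_bound is ours):

```python
def sum_error_bound(k1, k2):
    """Worst-case absolute error of x1 + x2 when x1, x2 are rounded to k1, k2 decimals, Eq. (16)."""
    return 0.5 * 10.0**-k1 + 0.5 * 10.0**-k2

x = 3.724 + 4.037            # Example 1: both addends rounded to 3D
e = sum_error_bound(3, 3)    # 0.001

print(x - e, x + e)          # the true sum lies in [7.760, 7.762]
```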

2.2.2 Subtraction

Using the representation in 2.2.1, let X = X1 − X2 and x = x1 − x2; then

|e| = |X − x| = |(X1 − X2) − (x1 − x2)| = |(X1 − x1) − (X2 − x2)|
≤ |X1 − x1| + |X2 − x2| = |e1| + |e2| ⇒ |e| ≤ |e1| + |e2| (19)

Just as in addition (as discussed at (15)), the absolute error in the difference of two numbers is less
than or equal to the sum of the absolute errors in the numbers.
If |e1| ≤ (1/2)10^−k1 and |e2| ≤ (1/2)10^−k2, then |e| ≤ (1/2)(10^−k1 + 10^−k2).


Example 1: The numbers A and B are correctly rounded to 3D as 3.724 and 2.251 respectively.
Evaluate an approximation to B − A and discuss the error in the approximation.
Solution: Let A ≈ a = 3.724 (3D) ⇒ |e1| = |A − a| ≤ (1/2)10^−3, and B ≈ b = 2.251 (3D) ⇒ |e2| =
|B − b| ≤ (1/2)10^−3. Let C = B − A ≈ b − a = c = 2.251 − 3.724 = −1.473.
Thus |e| = |C − c| ≤ |e1| + |e2| ≤ (1/2)10^−3 + (1/2)10^−3 = 0.001, so |C − c| ≤ 0.001, therefore

−0.001 ≤ C − c ≤ 0.001 ⇒ c − 0.001 ≤ C ≤ c + 0.001

−1.474 ≤ C ≤ −1.472, ∴ C = −1.47 (2D or 3 S.F.)


Note: If n numbers, each rounded to kD, are added or subtracted, then the round-off error satisfies |e| ≤ (n/2)10^−k.
Exercise: The numbers A and B are correctly rounded to 2 S.F. as (i) 8.2 and 0.00056 respectively,
(ii) 6.3 and 0.00087 respectively, (iii) 42.5 and 0.00683 respectively. Evaluate
approximations to A − B and A + B and discuss the error in each approximation.

2.2.3 Multiplication

With the usual notation, let X = X1X2 ≈ x1x2 = x. This implies

|e| = |X − x| = |X1X2 − x1x2| = |(x1 + e1)(x2 + e2) − x1x2| = |x1e2 + x2e1 + e1e2|
|e| ≈ |x1e2 + x2e1|, since e1e2 is negligible (20)

Thus |e| ≤ |x1||e2| + |x2||e1|. If |e1| ≤ (1/2)10^−k1 and |e2| ≤ (1/2)10^−k2, then

|e| ≤ (1/2)(|x1|10^−k2 + |x2|10^−k1) = (1/2)(|x1| + |x2|)10^−k, (if k = k1 = k2) (21)

Also, for the relative error, we have

e/X = (X − x)/X = (e2x1 + e1x2 + e1e2)/(X1X2) ≈ (e2x1 + e1x2)/(x1x2) ⇒ |e/X| ≤ |e2/x2| + |e1/x1| (22)

Thus, the relative error in X (the product) is less than or equal to the sum of the relative errors in
the numbers.
Example 1: The numbers X1 and X2 when rounded to 3D are 4.701 and 0.832 respectively.
Evaluate X1X2 as accurately as possible.
Solution: Let X1 ≈ x1 = 4.701, X2 ≈ x2 = 0.832, and X1X2 ≈ x1x2 = 3.911232 ≈ 3.911.
|e| ≤ |x1||e2| + |x2||e1|; since x1 and x2 have the same number of decimal places (i.e. k1 = k2 = 3),
|e| ≤ (1/2)(|x1| + |x2|)10^−k = (1/2)(4.701 + 0.832)10^−3 ≈ 0.003, ∴ X = x ± e = 3.911 ± 0.003
⇒ 3.911 − 0.003 ≤ X ≤ 3.911 + 0.003 ⇒ 3.908 ≤ X ≤ 3.914, ∴ X = 3.91 (2D)
Exercise: The numbers Y1 and Y2 when rounded to 3D are 6.803 and 0.543 respectively.
Evaluate Y1Y2 as accurately as possible.
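The first-order product bound of equation (20) for Example 1, as a sketch:

```python
x1, x2 = 4.701, 0.832              # both rounded to 3 decimal places
e1 = e2 = 0.5e-3                   # worst-case rounding errors

x = x1 * x2                        # approximate product, 3.911232
e = abs(x1) * e2 + abs(x2) * e1    # first-order error bound, Eq. (20)

print(round(x, 3), round(e, 4))    # 3.911 and about 0.0028
```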

2.2.4 Division
With the usual notation, i.e. X = X1/X2, we have

|e| = |X − x| = |X1/X2 − x1/x2| (23)

Now,

X1/X2 = (x1 + e1)/(x2 + e2) = (x1 + e1)(1/x2)(1 + e2/x2)^−1 = (x1 + e1)(1/x2)(1 − e2/x2 + (e2/x2)^2 − · · ·)
= (1/x2)(x1 + e1 − x1e2/x2 − e1e2/x2 + · · ·) ⇒ X1/X2 − x1/x2 ≈ e1/x2 − x1e2/x2^2, so

|e| ≈ |e1/x2 − x1e2/x2^2| ≤ |e1/x2| + |x1e2/x2^2| (24)

For the relative error,

e/X = (X − x)/X ≈ (x2/x1)(e1/x2 − x1e2/x2^2) = e1/x1 − e2/x2 ⇒ |e/X| ≤ |e1/x1| + |e2/x2| (25)

Thus (as for multiplication), the approximate relative error in division is less than or equal to the
sum of the relative errors.
Example: The numbers P and Q when rounded to 4 S.F. are 37.26 and 0.05371 respectively.
Evaluate P/Q as accurately as possible.
Solution: Let P ≈ p = 37.26, Q ≈ q = 0.05371, and R = P/Q ≈ r = p/q = 37.26/0.05371 = 693.725.
|e/R| ≤ |e1/p| + |e2/q| ≤ ((1/2)10^−2)/37.26 + ((1/2)10^−5)/0.05371 ≈ 0.00023, therefore
|e| ≈ 0.00023|r| = 0.00023 × 693.725 ≈ 0.2.
R = r ± e = 693.725 ± 0.2 ⇒ 693.725 − 0.2 ≤ R ≤ 693.725 + 0.2
⇒ 693.5 ≤ R ≤ 693.9, ∴ R = 694 (3 S.F.)
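The division example can be checked with the relative-error bound of equation (25); a sketch:

```python
p, q = 37.26, 0.05371        # rounded to 4 S.F. (2 and 5 decimal places respectively)
e1, e2 = 0.5e-2, 0.5e-5      # worst-case rounding errors

r = p / q                    # approximate quotient
rel = e1 / p + e2 / q        # relative error bound, Eq. (25)
e = rel * abs(r)             # corresponding absolute error bound

print(round(r, 3), round(e, 2))   # about 693.725 and 0.16
```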

2.2.5 Error in function evaluation

Let e be the error in the approximation of X by x, so that X = x + e.
Then, if ef denotes the error when a function f is evaluated at x instead of X, we have

f(X) = f(x) + ef ⇒ ef = f(X) − f(x) = f(x + e) − f(x)
ef = f(x) + ef'(x) + (1/2)e^2 f''(x) + · · · − f(x) = ef'(x) + (1/2)e^2 f''(x) + · · · (26)

Thus, if e is small and the second and higher derivatives of f evaluated at x are not excessively
large, we have

ef ≈ ef'(x) ⇒ |ef| ≈ |e||f'(x)| (27)

Example 1: The number X correctly rounded to 2D is 7.36. Obtain an approximation
as accurate as possible to √X.
Solution: Let X ≈ x = 7.36 (2D); f(x) = √x ⇒ f'(x) = 1/(2√x).
|ef| ≈ |e||f'(x)| ≤ (1/2)10^−2 · 1/(2√7.36) = 0.005/(2√7.36) ≈ 0.0009
Thus f(X) = f(x) ± ef = √7.36 ± 0.0009
⇒ 2.7129 − 0.0009 ≤ f(X) ≤ 2.7129 + 0.0009 ⇒ 2.7120 ≤ f(X) ≤ 2.7138, ∴ f(X) = 2.71 (2D).
Example 2: The number X when rounded correctly to 3D is 0.359. Obtain an approximation
as accurate as possible to cos X.
Solution: Let X ≈ x = 0.359 (3D); f(x) = cos x ⇒ f'(x) = −sin x.
|ef| ≈ |e||f'(x)| ≤ (1/2)10^−3 |−sin(0.359)| ⇒ |ef| ≤ 0.0002
Thus f(X) = f(x) ± ef = cos(0.359) ± 0.0002
⇒ 0.9362 − 0.0002 ≤ f(X) ≤ 0.9362 + 0.0002 ⇒ 0.9360 ≤ f(X) ≤ 0.9364, ∴ f(X) = 0.94 (2D).
Exercise:

1. The numbers X and Y when rounded correctly to 3D are 0.359 and 0.752 respectively.
Obtain approximations as accurate as possible to sin X, e^X and sin Y.

2. Evaluate (i) A + B, (ii) A − D, (iii) A + B − C, (iv) AC, (v) AB/C, given that A = 11.029,
B = 2.3452, C = 13.734, D = 0.00875 are correctly rounded approximations.

3 SOLUTION OF NON LINEAR EQUATIONS


The roots of equations may be real or complex. In general, an equation may have any number
of (real) roots, or no roots at all. For example, sin x − x = 0 has a single root, namely x = 0,
whereas tan x − x = 0 has an infinite number of roots (x = 0, ±4.493, ±7.725, · · ·). There are two
types of methods available to find the roots of algebraic and transcendental equations of the
form f(x) = 0.

1. Direct Methods: Direct methods give the exact value of the roots in a finite number of
steps. We assume here that there are no round off errors. Direct methods determine all
the roots at the same time.

2. Indirect or Iterative Methods: Indirect or iterative methods are based on the concept
of successive approximations. The general procedure is to start with one or more initial
approximation to the root and obtain a sequence of iterates (xk ) which in the limit con-
verges to the actual or true solution to the root. Indirect or iterative methods determine
one or two roots at a time. The indirect or iterative methods are further divided into
two categories: bracketing and open methods. The bracketing methods require limits
between which the root lies, whereas the open methods require an initial estimate
of the solution. Bisection and False Position methods are two well-known examples of the
bracketing methods. Among the open methods, the Newton-Raphson and the method of
successive approximation are most commonly used. The most popular method for solving
a non-linear equation is the Newton-Raphson method and this method has a high rate of
convergence to a solution. In this chapter, we present the following indirect or iterative
methods with illustrative examples:

a. Bisection Method

b. Method of False Position (Regula Falsi Method)

c. Newton-Raphson Method (Newton’s method)

d. Successive Approximation Method (Fixed point iteration).

3.1 Bisection Method

Figure 1: Bisection graph

Suppose that f is defined and continuous on an interval [a, b]. If the values of f(x) at x = a
and x = b are of opposite sign, then it is clear from Figure 1 that f must have at least one root
between a and b. This result, known as the intermediate value theorem, provides a simple
and effective way of finding the approximate location of the roots of f.
Consider a bracketing interval [a, b] for which f(a)f(b) < 0. In the bisection method the value
of f at the mid-point, c = (a + b)/2, is calculated. There are three possibilities:
1. if f (c) = 0 (very unlikely), then c is a root of f

2. if f (a)f (c) < 0, then f has a root between a and c. The process can then be repeated on
the new interval [a, c] .

3. Finally, if f(a)f(c) > 0, it follows that f(b)f(c) < 0 since f(a) and f(b) have opposite
signs; therefore f has a root between c and b and the process can be repeated with [c, b].

Suppose that we wish to calculate a root within ±(1/2)10^−k starting from the initial interval
[a, b]. After n − 1 steps the bracketing interval has length (b − a)/2^(n−1), where b − a denotes
the length of the original interval with which we started. Hence we require

(b − a)/2^(n−1) < (1/2)10^−k ⇒ 2^(n−2) > 10^k(b − a) (28)

Taking the logarithm of both sides, we obtain

n ≥ 2 + log10(10^k(b − a))/log10 2 (29)
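Equation (29) can be evaluated directly; for instance, for the interval [2, 3] and k = 3 (a sketch, with the helper name bisection_steps being ours):

```python
import math

def bisection_steps(a, b, k):
    """Iterations guaranteeing a root within (1/2)*10^-k on [a, b], per Eq. (29)."""
    return math.ceil(2 + math.log10(10**k * (b - a)) / math.log10(2))

print(bisection_steps(2, 3, 3))   # 12
```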
% MATLAB code for Bisection method
function [x e] = mybisect(f,a,b,n)
% function [x e] = mybisect(f,a,b,n)
% Does n iterations of the bisection method for a function f
% Inputs: f -- an inline function
% a,b -- left and right edges of the interval
% n -- the number of bisections to do.
% Outputs: x -- the estimated solution of f(x) = 0
% e -- an upper bound on the error
format long
c = f(a); d = f(b);
if c*d > 0.0
error(’Function has same sign at both endpoints.’)
end
disp(’ x y’)
for i = 1:n
x = (a + b)/2;
y = f(x);
disp([ x y])
if y == 0.0 % solved the equation exactly
e = 0;
break % jumps out of the for loop
end
if c*y < 0
b=x;
else

a=x;
end
end
e = (b-a)/2;

Example 1: Use the bisection method to find a root of the equation x^3 − 4x − 8.95 = 0
accurate to three decimal places, given that the root lies between 2 and 3.
Solution: f(x) = x^3 − 4x − 8.95; f(2) = 2^3 − 4(2) − 8.95 = −8.95 < 0, f(3) = 3^3 − 4(3) − 8.95 =
6.05 > 0.
c1 = (2 + 3)/2 = 2.5 ⇒ f(2.5) = −3.325 and f(2.5)f(3) < 0; therefore the new interval is [2.5, 3].
Again, c2 = (2.5 + 3)/2 = 2.75 ⇒ f(2.75) = 0.8469 and f(2.5)f(2.75) < 0; therefore the new
interval is [2.5, 2.75].
c3 = (2.5 + 2.75)/2 = 2.625 ⇒ f(2.625) = −1.362109 and f(2.625)f(2.75) < 0; therefore the new
interval is [2.625, 2.75].
c4 = (2.625 + 2.75)/2 = 2.6875 ⇒ f(2.6875) = −0.289111 and f(2.6875)f(2.75) < 0; therefore the
new interval is [2.6875, 2.75].
Again, c5 = (2.6875 + 2.75)/2 = 2.71875 ⇒ f(2.71875) = 0.270917 and
f(2.6875)f(2.71875) < 0; therefore the new interval is [2.6875, 2.71875], and so on.
Continuing in this way, the root is 2.704 accurate to three decimal places (2.70 to two decimal
places, reached after 8 iterations).

% MATLAB solution to example 1


f=@(x) x^3-4*x-8.95; %exp(x)-3*x; %x-tan(x);
[root error]=mybisect(f,2,3,17)

Exercise: Find one root of e^x − 3x = 0 in the interval [1.5, 1.6] correct to two decimal places
using the bisection method. (Answer: x = 1.51; number of iterations is 6)
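For readers without MATLAB, the bisection loop above can be sketched in Python:

```python
def bisect(f, a, b, n):
    """n bisection iterations on [a, b]; assumes f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("Function has same sign at both endpoints.")
    for _ in range(n):
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c    # a root lies in [a, c]
        else:
            a = c    # a root lies in [c, b]
    return (a + b) / 2

root = bisect(lambda x: x**3 - 4*x - 8.95, 2, 3, 17)
print(round(root, 3))   # 2.704, as in Example 1
```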

3.2 Method of False Position (Regula Falsi Method)

Figure 2: Regula falsi graph

The regula falsi method is an attempt to produce an iterative method (scheme) with more
rapid convergence than bisection which is still guaranteed to converge.
The equation of the line AB0 is given by

y − f(λ) = m(x − λ) (30)

where m = (f(x0) − f(λ))/(x0 − λ) is the gradient of the line AB0. Thus,

y − f(λ) = ((f(x0) − f(λ))/(x0 − λ))(x − λ) (31)

At the point P1(x1, 0), y = 0, so that

−f(λ) = ((f(x0) − f(λ))/(x0 − λ))(x1 − λ) ⇒ x1 = (λf(x0) − x0f(λ))/(f(x0) − f(λ)) (32)

Similarly,

x2 = (λf(x1) − x1f(λ))/(f(x1) − f(λ)) (33)

and in general, we have

xn+1 = (λf(xn) − xnf(λ))/(f(xn) − f(λ)), n ≥ 0, or xn+1 = (af(b) − bf(a))/(f(b) − f(a)) (34)

Equation (34) is the regula falsi method.
The procedure (or algorithm) for finding a solution with the method of False Position is given
below:
Algorithm for the method of False Position

1. Define the first interval (a, b) such that a solution exists between its endpoints; check f(a)f(b) < 0.

2. Compute the first estimate x1 of the numerical solution using Eq. (34).

3. Find out whether the actual solution is between a and x1 or between x1 and b. This is
accomplished by checking the sign of the product f(a)f(x1).
If f(a)f(x1) < 0, the solution is between a and x1.
If f(a)f(x1) > 0, the solution is between x1 and b.

4. Select the subinterval that contains the solution (a to x1, or x1 to b) as the new interval
(a, b) and go back to step 2. Steps 2 through 4 are repeated until a specified tolerance or
error bound is attained.

The regula falsi method always converges to an answer, provided a root is initially bracketed
in the interval (a, b).
Example 1: Determine the root of the equation 2x^3 − 7x + 2 = 0 using the regula falsi method,
given that the root lies in [0, 1].
Solution: Using equation (34),

xn+1 = (λf(xn) − xnf(λ))/(f(xn) − f(λ)) (35)

Let f(x) = 2x^3 − 7x + 2; f(0) = 2, f(1) = −3 ⇒ f(0)f(1) < 0, hence a solution of the
equation lies in [0, 1].
Set λ = 0 and x0 = 1; then the iteration scheme gives

xn+1 = −xnf(0)/(f(xn) − f(0)) = −2xn/(2xn^3 − 7xn + 2 − 2) = −2/(2xn^2 − 7)

When n = 0: x1 = −2/(2x0^2 − 7) = −2/(2(1)^2 − 7) = 0.400
When n = 1: x2 = −2/(2x1^2 − 7) = −2/(2(0.4)^2 − 7) = 0.2994
When n = 2: x3 = −2/(2x2^2 − 7) = −2/(2(0.2994)^2 − 7) = 0.2932
Similarly, we get x4 = 0.2929, x5 = 0.2929; hence the process converges at 0.2929 (4D), therefore
the root is 0.2929.
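The fixed-endpoint iteration of Example 1 can be sketched in Python (λ = 0 held fixed, as in the worked solution):

```python
def regula_falsi(f, lam, x, n):
    """n iterations of Eq. (34) with the endpoint lam held fixed."""
    for _ in range(n):
        x = (lam * f(x) - x * f(lam)) / (f(x) - f(lam))
    return x

f = lambda x: 2 * x**3 - 7 * x + 2
root = regula_falsi(f, 0.0, 1.0, 10)
print(round(root, 4))   # 0.2929, as in Example 1
```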
Example 2: Determine the root of the equation (x + 1)^2 e^(x^2−2) − 1 = 0 using the regula falsi
method, given that the root lies in [0, 1].
Solution: Let f(x) = (x + 1)^2 e^(x^2−2) − 1; f(0) = −0.864665, f(1) = 0.471517 ⇒ f(0)f(1) =
−0.407705 < 0, hence a solution of the equation lies in [0, 1].
Set λ = 0 and x0 = 1:
x1 = (0(0.471517) − 1(−0.864665))/(0.471517 + 0.864665) = 0.647116; f(1)f(x1) < 0, so the new interval is [0.647116, 1].
Again, x2 = (0.647116(0.471517) − 1(−0.441884))/(0.471517 + 0.441884) = 0.817834; f(1)f(x2) < 0, so the new interval is
[0.817834, 1], and so on.
Hence, the root is 0.86687 accurate to 5 decimal places; the number of iterations is 9.

MATLAB code for the regula falsi method

function x = fasli(f,a,b,n)
% function x = fasli(f,a,b,n)
% Does n iterations of the method of false position for a function f
% Inputs: f -- an inline function
% a,b -- left and right edges of the interval
% n -- the number of iterations to do.
% Output: x -- the estimated solution of f(x) = 0
format long
c = f(a); d = f(b);
if c*d > 0.0
error('Function has same sign at both endpoints.')
end
disp(' x y')
for i = 1:n
x = (a*f(b)-b*f(a))/(f(b)-f(a));
y = f(x);

disp([ x y])
if y == 0.0 % solved the equation exactly
e = 0;
break % jumps out of the for loop
end
if c*y < 0
b=x;
else
a=x;
end

end

Exercise: Show that each of the following equations has exactly one root and that in each case
the root lies in the interval [1/2, 1]:

(1) a. x − cos x = 0

b. x^2 + log(x) = 0

c. xe^x − 1 = 0

d. e^x − 3x^2 = 0

(2) Find a real root of cos x − 3x + 5 = 0, correct to four decimal places, using the method of
False Position.

3.3 Newton-Raphson Method

The Newton-Raphson (N-R) method, often called Newton's iteration scheme, converges faster
than the bracketing methods. One drawback of this method is that it uses the derivative f'(x) of
the function as well as the function f(x) itself; hence, the Newton-Raphson method is usable only
in problems where f'(x) can be readily computed. Here, again, we assume that f(x) is continuous
and differentiable and that the equation is known to have a solution near a given point.
The Newton-Raphson formula is given by

xn+1 = xn − f(xn)/f'(xn) (36)

Example: Use the Newton-Raphson method to find the real root near 2 of the equation x⁴ − 11x + 8 = 0, accurate to five decimal places.
solution: Let f(x) = x⁴ − 11x + 8 ⇒ f′(x) = 4x³ − 11, x0 = 2
f(x0) = f(2) = 2⁴ − 11(2) + 8 = 2, f′(x0) = f′(2) = 4(2³) − 11 = 21
therefore, n = 0 : x1 = x0 − f(x0)/f′(x0) = 2 − 2/21 = 1.90476
n = 1 : x2 = x1 − f(x1)/f′(x1) = 1.90476 − (1.90476⁴ − 11(1.90476) + 8)/(4(1.90476)³ − 11) = 1.89209
n = 2 : x3 = x2 − f(x2)/f′(x2) = 1.89209 − (1.89209⁴ − 11(1.89209) + 8)/(4(1.89209)³ − 11) = 1.89188
n = 3 : x4 = x3 − f(x3)/f′(x3) = 1.89188 − (1.89188⁴ − 11(1.89188) + 8)/(4(1.89188)³ − 11) = 1.89188
Hence the root of the equation is 1.89188.
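The scheme (36) is short enough to automate. The following is an illustrative Python sketch (the notes' own codes are in MATLAB; the routine name and tolerance here are my own choices), applied to the same f(x) = x⁴ − 11x + 8 with x0 = 2:

```python
def newton_raphson(f, fprime, x0, tol=1e-6, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n), Eq. (36), until two
    successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example above: f(x) = x^4 - 11x + 8, f'(x) = 4x^3 - 11, x0 = 2
root = newton_raphson(lambda x: x**4 - 11*x + 8,
                      lambda x: 4*x**3 - 11, 2.0)
print(round(root, 5))  # 1.89188
```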
Exercise:

1. Using the Newton-Raphson method, find a root of the function f(x) = eˣ − 3x² to an accuracy of 5 digits. The root is known to lie between 0.5 and 1.0. Take the starting value x0 = 1.0. (Answer: x = 0.91001)

2. Evaluate √29 to five decimal places by the Newton-Raphson iterative method. (Hint: let x = √29, then x² − 29 = 0.)

3.4 Successive Approximation Method (Fixed point iteration)

Suppose that the equation f(x) = 0 can be rearranged as x = g(x). If f(α) = 0 (that is, α is a root of f(x)), then α = g(α).
An obvious iteration to try for the calculation of a fixed point is

xn+1 = g(xn), n ≥ 0 (37)

Eq. (37) converges if |g′(x)| < 1 in a neighbourhood of the root. The value of x0 is chosen arbitrarily. There are many ways of rearranging f(x) = 0.
Example: Given the equation x² − 2x − 8 = 0.
There are 3 possible rearrangements of this equation: x = √(2x + 8), x = (2x + 8)/x, x = (x² − 8)/2. The corresponding iteration forms are
xn+1 = √(2xn + 8), xn+1 = (2xn + 8)/xn, xn+1 = (xn² − 8)/2.
With initial guess x0 = 5, the first converges (to 4) at the 6th iteration, the second converges (to 4) at the 11th iteration, while the third does not converge.
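The behaviour of the rearrangements can be checked numerically. A hedged Python sketch (tolerance and iteration cap are my own choices, and iteration counts depend on them):

```python
def fixed_point(g, x0, tol=1e-4, max_iter=100):
    """Iterate x_{n+1} = g(x_n), Eq. (37). Returns (root, iterations),
    or (None, max_iter) if the iteration fails to settle."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return None, max_iter

root, n = fixed_point(lambda x: (2*x + 8) ** 0.5, 5.0)   # x = sqrt(2x + 8)
print(round(root, 4))                                    # 4.0
bad, _ = fixed_point(lambda x: (x*x - 8) / 2, 5.0)       # x = (x^2 - 8)/2
print(bad)                                               # None: diverges
```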

4 FINITE DIFFERENCE
The forward, backward and central differences are used to obtain interpolation formulae from a table of abscissae {xi} which are equally spaced.

4.1 Forward difference

Figure 3: finite difference

for evenly spaced x-values
xr = x0 + rh, r = 0(1)n
we define ∆f(x) = f(x + h) − f(x), or ∆f(xr) = f(xr + h) − f(xr), i.e.
∆fr = fr+1 − fr
The symbol ∆ is called the first forward difference operator.
The second order difference is given by

∆²fr = ∆(∆fr) = ∆(fr+1 − fr) = ∆fr+1 − ∆fr = fr+2 − fr+1 − (fr+1 − fr) = fr+2 − 2fr+1 + fr (38)

Higher order differences are defined recursively by ∆^m fr = ∆^(m−1)(∆fr)
Successive differences are listed in table 1

Table 1: Forward difference table

x    fr    ∆fr              ∆²fr    ∆³fr
x0   f0
           ∆f0 = f1 − f0
x1   f1                     ∆²f0
           ∆f1 = f2 − f1            ∆³f0
x2   f2                     ∆²f1
           ∆f2
x3   f3
..   ..

Example 1: Construct a forward difference table for y = f (x) = x3 + 2x + 1 for x0 = 1,


h = 1 and r = 0, · · · , 4.
solution

Table 2: Forward difference table for Example 1

x         fr    ∆fr    ∆²fr    ∆³fr

x0 = 1 4
9
x1 = 2 13 12
21 6
x2 = 3 34 18
39 6
x3 = 4 73 24
63
x4 = 5 136
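Table 2's columns can be generated programmatically. A small Python sketch (helper name is mine) that builds the forward-difference columns for f(x) = x³ + 2x + 1:

```python
def forward_differences(values):
    """Return [f, Δf, Δ²f, ...] where each column holds the
    differences of the previous one: Δf[i] = f[i+1] - f[i]."""
    cols = [list(values)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i+1] - prev[i] for i in range(len(prev) - 1)])
    return cols

f = [x**3 + 2*x + 1 for x in range(1, 6)]   # x0 = 1, h = 1, r = 0..4
cols = forward_differences(f)
print(cols[0])  # [4, 13, 34, 73, 136]
print(cols[1])  # [9, 21, 39, 63]
print(cols[2])  # [12, 18, 24]
print(cols[3])  # [6, 6]
```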

Exercise: Construct a forward difference table for the function f(x) = (x − 1)(x + 5)/((x + 1)(x + 2)) with x0 = 0, h = 0.1 and r = 0(1)8.

4.2 Backward difference

we define ∇fr = fr − fr−1

The symbol ∇ is called the first backward difference operator.
The second order difference is given by

∇²fr = ∇(∇fr) = ∇(fr − fr−1) = ∇fr − ∇fr−1 = fr − 2fr−1 + fr−2 (39)

∴ ∇^m fr = ∇^(m−1)(∇fr)

4.3 Central differences

The central difference operator is denoted by the symbol δ and is defined by

δfr = f_{r+1/2} − f_{r−1/2}, or δf_{r+1/2} = fr+1 − fr (40)

The averaging operator, µ, is defined by

µfr = (1/2)[f_{r+1/2} + f_{r−1/2}] (41)

Some relationships between the operators are listed below.
Table 3: Backward difference table

x    fr    ∇fr    ∇²fr    ∇³fr
x0   f0
           ∇f1
x1   f1           ∇²f2
           ∇f2            ∇³f3
x2   f2           ∇²f3
           ∇f3
x3   f3
..   ..

To relate these operators, we use the shift operator, defined by

E fr = fr+1, E² fr = fr+2, . . . , Eⁿ fr = fr+n
E⁻¹ fr = fr−1, E⁻² fr = fr−2, . . . , E⁻ⁿ fr = fr−n (42)

∆fr = fr+1 − fr = E fr − fr = (E − 1)fr ⇒ ∆ = E − 1 or E = 1 + ∆
∇fr = fr − fr−1 = fr − E⁻¹fr = (1 − E⁻¹)fr ⇒ ∇ = 1 − E⁻¹ or E = 1/(1 − ∇) (43)

δf_{r+1/2} = fr+1 − fr = (E^(1/2) − E^(−1/2)) f_{r+1/2} ⇒ δ = E^(1/2) − E^(−1/2) (44)

µfr = (1/2)[f_{r+1/2} + f_{r−1/2}] = (1/2)(E^(1/2) + E^(−1/2)) fr ⇒ µ = (1/2)(E^(1/2) + E^(−1/2)) (45)
Example 1: Evaluate (∆²/E) x³
solution: Let h be the step length.

(∆²/E) x³ = ∆² E⁻¹ x³ = (E − 1)² E⁻¹ x³ = (E² − 2E + 1)E⁻¹ x³ = E x³ − 2x³ + E⁻¹ x³

since f(x + h) = E f(x) ⇒ Eⁿ xʳ = (x + nh)ʳ

(∆²/E) x³ = (x + h)³ − 2x³ + (x − h)³ = 6xh²; if h = 1,
∴ (∆²/E) x³ = 6x
Example 2: Prove that eˣ = (∆²/E) eˣ · (E eˣ)/(∆² eˣ), the interval of differencing being h.
solution: We know that E f(x) = f(x + h) ⇒ E eˣ = e^(x+h) and ∆eˣ = e^(x+h) − eˣ = eˣ(e^h − 1)
⇒ ∆²eˣ = eˣ · (e^h − 1)²
(∆²/E) eˣ = ∆² E⁻¹ eˣ = ∆² e^(x−h) = e^(−h)(∆² eˣ) = e^(−h) · eˣ · (e^h − 1)²
Therefore, the right hand side becomes
(∆²/E) eˣ · (E eˣ)/(∆² eˣ) = e^(−h) · eˣ · (e^h − 1)² · e^(x+h)/(eˣ · (e^h − 1)²) = eˣ

4.4 Polynomial Interpolation

Given [x0, f(x0)], [x1, f(x1)], . . . , [xn, f(xn)], the process of finding the value of f(x) for some other value of x (lying between x0, x1, . . . , xn) is called interpolation.
Weierstrass theorem: If f(x) is continuous in [a, b], then it can be represented in [a, b] to any degree of accuracy by a polynomial of suitable degree, i.e. there exists a polynomial p(x) such that |f(x) − p(x)| < ε, ∀x ∈ (a, b), where ε > 0 is a preassigned number. This theorem is the justification for the replacement of a function by a polynomial.

4.4.1 Newton-Gregory Interpolation Formula

The Newton-Gregory (N-G) interpolation formula is used when the function values are given
at equally spaced points:
Let u = (x − x0)/h, and let n and k = 0(1)n be integers; then E^u fk = (1 + ∆)^u fk (since E = 1 + ∆).
Using the binomial series expansion of (1 + ∆)^u and applying the resulting operator to fk, we obtain

E^u fk = f_{u+k} = fk + u∆fk + u(u − 1)/2! ∆²fk + . . . + u(u − 1)(u − 2) · · · (u − n + 1)/n! ∆ⁿfk + . . . (46)

Let k = 0, so that we get the approximation

fu = f0 + u∆f0 + u(u − 1)/2! ∆²f0 + . . . + u(u − 1)(u − 2) · · · (u − n + 1)/n! ∆ⁿf0 + . . . (47)
2! n!

This is the nth degree N-G interpolation formula or N-G forward difference interpolation formula.
The associated error is given by

u(u − 1)(u − 2) · · · (u − n)/(n + 1)! · h^(n+1) f^(n+1)(ξ), ξ ∈ (x0, xn) (48)

Also, since E^u fk = (1 − ∇)^(−u) fk, a similar expansion gives the backward difference form

fu = f0 + u∇f0 + u(u + 1)/2! ∇²f0 + . . . + u(u + 1)(u + 2) · · · (u + n − 1)/n! ∇ⁿf0 + . . . (49)

Example 1:
2Na + 2H2O −→ 2NaOH + H2 (50)

Estimate the quantity (y) of sodium hydroxide (NaOH) formed at time x = 30 min from the following table by choosing x0 = 29.

time(min):x 19 29 39 49 59
quantity(g):y 41 103 168 218 235

solution: Since the third differences are constant, a polynomial of degree 3 will fit the given data exactly (here h = 10).

Table 4: Forward difference table for Example 1

x    y     ∆y    ∆²y    ∆³y
19   41
           62
29   103          3
           65           −18
39   168         −15
           50           −18
49   218         −33
           17
59   235

In this case xu = 30, x0 = 29, h = 10, u = (xu − x0)/h = 0.10; applying equation (47), we have

y(xu) ≅ y0 + u∆y0 + u(u − 1)/2! ∆²y0 + u(u − 1)(u − 2)/3! ∆³y0
= 103 + 0.1(65) + (0.1(−0.9)/2)(−15) + (0.1(−0.9)(−1.9)/6)(−18) = 109.662 ≅ 110
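The estimate above can be reproduced with a short Python sketch of the Newton-Gregory forward formula (47) (function name is mine; x0 = 29 and h = 10 as in the example):

```python
def newton_forward(x0, h, ys, x):
    """Newton-Gregory forward interpolation, Eq. (47), through the
    equally spaced values ys starting at x0."""
    # leading forward differences f0, Δf0, Δ²f0, ...
    diffs, col = [ys[0]], list(ys)
    while len(col) > 1:
        col = [col[i+1] - col[i] for i in range(len(col) - 1)]
        diffs.append(col[0])
    u = (x - x0) / h
    total, coeff = 0.0, 1.0
    for k, d in enumerate(diffs):
        total += coeff * d              # coeff = u(u-1)...(u-k+1)/k!
        coeff *= (u - k) / (k + 1)
    return total

y = [103, 168, 218, 235]                # values from x0 = 29 onward
print(round(newton_forward(29, 10, y, 30), 3))  # 109.662
```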
2 6
Example 2: From difference table of function values given below, determine the degree of the
polynomial that will fit the data and obtain the polynomial, using Newton forward difference
formula of appropriate degree.

x 0 0.5 1.0 1.5 2.0 2.5 3.0


f (x) 0.25 -1.50 -1.75 -0.50 2.25 6.5 12.25

Solution

Table 5: Forward difference table for Example 2

x     f(x)    ∆f      ∆²f    ∆³f
0     0.25
              −1.75
0.5   −1.50           1.50
              −0.25          0
1.0   −1.75           1.50
              1.25           0
1.5   −0.50           1.50
              2.75           0
2.0   2.25            1.50
              4.25           0
2.5   6.50            1.50
              5.75
3.0   12.25

Since the second differences are constant, i.e. ∆ⁿf(x) = 0 ∀n ≥ 3, the required degree is 2. Hence, we fit a quadratic to the given data.
Now, u = (x − x0)/h = (x − 0)/0.5 = 2x, and the Newton forward difference formula of degree 2 gives

f(x) ≅ f0 + u∆f0 + u(u − 1)/2! ∆²f0 = 0.25 + 2x(−1.75) + (2x(2x − 1)/2)(1.5) = 3x² − 5x + 0.25 (51)
2! 2

4.4.2 Lagrange Interpolation Formula

Suppose the function values of a given function f(x) are f(x0), f(x1), . . . , f(xn) at unevenly spaced points xk, k = 0, 1, . . . , n. We can use Lagrange's formula to interpolate the given data as

f(x) ≅ Pn(x) = Σ_{i=0}^{n} Li(x) yi, where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj) (52)

This is called the Lagrange interpolation formula of degree n.

%MATLAB code for Lagrange interpolation formula


function [l,L] = lagrange(x,y)
%Input : x = [x0 x1 ... xN], y = [y0 y1 ... yN]
%Output: l = Lagrange polynomial coefficients of degree N
% L = Lagrange coefficient polynomial

N = length(x)-1; %the degree of polynomial
l = 0;
for m = 1:N + 1
P = 1;
for k = 1:N + 1
if k ~= m, P = conv(P,[1 -x(k)])/(x(m)-x(k)); end
end
L(m,:) = P; %Lagrange coefficient polynomial
l = l + y(m)*P; %Lagrange polynomial (3.1.3)
end

Example 1 Find the Lagrange interpolating polynomial corresponding to the data points
(1, 7), (4, 13) and (7, 23), hence interpolate for x = 5.5.
Solution

x 1 4 7
y 7 13 23

Since we have 3 values (data points), we can fit a polynomial of degree 2:

Pn(x) = Σ_{i=0}^{n} Li(x) yi, where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)
n = 2 ⇒ P2(x) = Σ_{i=0}^{2} Li(x) yi = L0(x)y0 + L1(x)y1 + L2(x)y2 (53)

L0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)] = (x − 4)(x − 7)/[(1 − 4)(1 − 7)] = (1/18)(x² − 11x + 28) (54)
L1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)] = (x − 1)(x − 7)/[(4 − 1)(4 − 7)] = −(1/9)(x² − 8x + 7) (55)
L2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)] = (x − 1)(x − 4)/[(7 − 1)(7 − 4)] = (1/18)(x² − 5x + 4) (56)

substituting equations (54)-(56) in (53), we obtain
P2(x) = (1/9)(2x² + 8x + 53); when x = 5.5, P2(5.5) = 35/2 = 17.5
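As a quick cross-check of P2(5.5), Eq. (52) can be evaluated directly; a Python sketch (independent of the MATLAB routine above):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial, Eq. (52),
    at x by summing y_i * L_i(x)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += yi * Li
    return total

print(lagrange_eval([1, 4, 7], [7, 13, 23], 5.5))  # 17.5
```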
Example 2: From the given data, evaluate f(5) using Lagrange interpolation.

x      1    4    7     9
f (x)  2   13   122   504

Since we have 4 values (data points), we can fit a polynomial of maximum degree 3:

Pn(x) = Σ_{i=0}^{n} Li(x) yi, where Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)
n = 3 ⇒ P3(x) = Σ_{i=0}^{3} Li(x) yi = L0(x)y0 + L1(x)y1 + L2(x)y2 + L3(x)y3 (57)

L0(x) = (x − 4)(x − 7)(x − 9)/[(1 − 4)(1 − 7)(1 − 9)] = −(1/144)(x³ − 20x² + 127x − 252) (58)
L1(x) = (x − 1)(x − 7)(x − 9)/[(4 − 1)(4 − 7)(4 − 9)] = (1/45)(x³ − 17x² + 79x − 63) (59)
L2(x) = (x − 1)(x − 4)(x − 9)/[(7 − 1)(7 − 4)(7 − 9)] = −(1/36)(x³ − 14x² + 49x − 36) (60)
L3(x) = (x − 1)(x − 4)(x − 7)/[(9 − 1)(9 − 4)(9 − 7)] = (1/80)(x³ − 12x² + 39x − 28) (61)

substituting equations (58)-(61) in (57), we obtain
P3(x) = 3.1861x³ − 32.7889x² + 100.7028x − 69.1000; when x = 5, P3(5) ≈ 12.96

%MATLAB code for Example 2


x=[1 4 7 9];
y=[2 13 122 504];
l = lagrange(x,y)

Remark: The major drawback of this method is that it is complicated and requires considerable computation if interpolation is required at a very large number of points (though MATLAB codes make it easy).
Exercise: Using Lagrange’s interpolation formula, find the value of y corresponding to
x = 10 from the following data.
x 5 6 9 11
f (x) 380 -2 196 508

4.4.3 Everett’s Formula

The interpolation nodes x_{−m}, . . . , x0, x1, . . . , x_{m+1} shall be used. Generally, we would have x0 < x < x1 and define
u = (x − x0)/h, v = 1 − u.
The number of given points is 2m + 2 and the degree of the polynomial is 2m + 1. Everett's formula is given by

Pn(x) = v f0 + v(v² − 1²)/3! δ²f0 + v(v² − 1²)(v² − 2²)/5! δ⁴f0 + · · · + v(v² − 1²)(v² − 2²) · · · (v² − m²)/(2m + 1)! δ^(2m) f0
+ u f1 + u(u² − 1²)/3! δ²f1 + u(u² − 1²)(u² − 2²)/5! δ⁴f1 + · · · + u(u² − 1²)(u² − 2²) · · · (u² − m²)/(2m + 1)! δ^(2m) f1 (62)

Example 1: From the data given below, determine the value of f (x) when x = 5.5 by using
the Everett’s formula
x 3 4 5 6 7 8
f (x) 205 240 259 262 250 224
solution: Since we are given 6 points, then 2m + 2 = 6 ⇒ m = 2

Table 6: Central difference table for Example 1

r    xr   fr    δfr    δ²fr    δ³fr    δ⁴fr    δ⁵fr

-2 3 205
35
-1 4 240 -16
19 0
0 5 259 -16 1
3 1 -1
1 6 262 -15 0
-12 1
2 7 250 -14
-26
3 8 224

The degree of the polynomial is n = 2m + 1 = 5; u = (x − x0)/h = (5.5 − 5)/1 = 0.5, v = 1 − u = 0.5. Using equation (62), we have

P5(x) = v f0 + v(v² − 1)/3! δ²f0 + v(v² − 1)(v² − 4)/5! δ⁴f0
+ u f1 + u(u² − 1)/3! δ²f1 + u(u² − 1)(u² − 4)/5! δ⁴f1 (63)

P5(5.5) = 0.5(259) + (0.5(0.5² − 1)/6)(−16) + (0.5(0.5² − 1)(0.5² − 4)/120)(1)
+ 0.5(262) + (0.5(0.5² − 1)/6)(−15) + (0.5(0.5² − 1)(0.5² − 4)/120)(0) = 262.449

5 NUMERICAL DIFFERENTIATION AND INTEGRATION
5.1 Numerical differentiation

Numerical differentiation is a method to compute the derivatives of a function at some values of the independent variable x when the function f(x) is not known explicitly, but is known only for a given set of arguments.

5.1.1 Derivatives based on forward difference formula

From equation (47), that is

fu = f0 + u∆f0 + u(u − 1)/2! ∆²f0 + . . . + u(u − 1)(u − 2) · · · (u − n + 1)/n! ∆ⁿf0 + . . . (64)

differentiating Eq. (47) w.r.t. x, we have

f′(x) = (1/h)[∆f0 + (2u − 1)/2! ∆²f0 + (3u² − 6u + 2)/3! ∆³f0 + · · ·] (65)

f″(x) = (1/h²)[∆²f0 + (6u − 6)/3! ∆³f0 + (12u² − 36u + 22)/4! ∆⁴f0 + · · ·] (66)

NOTE: u = (x − x0)/h ⇒ du/dx = 1/h, so f′(x) = df/dx = (df/du)(du/dx) = (1/h)(df/du), and
f″(x) = d/dx(df/dx) = d/dx[(1/h)(df/du)] = (1/h)(d/du)[(1/h)(df/du)] = (1/h²)(d²f/du²)

When x = x0, u = 0, and Eqs. (65) and (66) become

f′(x0) = (1/h)[∆f0 − (1/2)∆²f0 + (1/3)∆³f0 − (1/4)∆⁴f0 + · · ·] (67)

f″(x0) = (1/h²)[∆²f0 − ∆³f0 + (11/12)∆⁴f0 + · · ·] (68)
Example 1: From the table below, find the values of dy/dx and d²y/dx² at x = 1.0.
x 1.0 1.1 1.2 1.3 1.4 1.5
y 5.4680 5.6665 5.9264 6.2551 6.6601 7.1488

Solution

Table 7: Numerical differentiation for Example 1

x     y        ∆y       ∆²y      ∆³y
1.0   5.4680
               0.1985
1.1   5.6665            0.0614
               0.2599            0.0074
1.2   5.9264            0.0688
               0.3287            0.0074
1.3   6.2551            0.0763
               0.4050            0.0074
1.4   6.6601            0.0837
               0.4887
1.5   7.1488

Here x0 = 1.0, h = 0.1 and y0 = 5.4680; applying Eqs. (67) and (68), we obtain

dy/dx |x=1.0 = (1/h)[∆f0 − (1/2)∆²f0 + (1/3)∆³f0 − · · ·]
= (1/0.1)[0.1985 − (1/2)(0.0614) + (1/3)(0.0074)] = 1.7027

d²y/dx² |x=1.0 = (1/h²)[∆²f0 − ∆³f0 + (11/12)∆⁴f0 + · · ·] = (1/0.1²)(0.0614 − 0.0074) = 5.4000
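Equations (67) and (68) can be evaluated directly from the tabulated y-values; a Python sketch (truncating after the third difference, as in the worked example; names are mine):

```python
def derivatives_at_x0(ys, h):
    """First and second derivatives at x0 from forward differences,
    Eqs. (67) and (68) truncated after the third difference."""
    d1 = [ys[i+1] - ys[i] for i in range(len(ys) - 1)]   # Δf
    d2 = [d1[i+1] - d1[i] for i in range(len(d1) - 1)]   # Δ²f
    d3 = [d2[i+1] - d2[i] for i in range(len(d2) - 1)]   # Δ³f
    fp = (d1[0] - d2[0]/2 + d3[0]/3) / h
    fpp = (d2[0] - d3[0]) / h**2
    return fp, fpp

y = [5.4680, 5.6665, 5.9264, 6.2551, 6.6601, 7.1488]
fp, fpp = derivatives_at_x0(y, 0.1)
print(round(fp, 4), round(fpp, 4))  # 1.7027 5.4
```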
Example 2: Obtain the first and second derivatives of the function tabulated below at the
points x = 1.1 and x = 1.2.

x 1.0 1.2 1.4 1.6 1.8 2.0


y 0 0.128 0.544 1.298 2.440 4.020

Solution

Table 8: Numerical differentiation for Example 2

x     y       ∆y      ∆²y     ∆³y
1.0   0
              0.128
1.2   0.128           0.288
              0.416           0.05
1.4   0.544           0.338
              0.754           0.05
1.6   1.298           0.388
              1.142           0.05
1.8   2.440           0.438
              1.580
2.0   4.020

Since x = 1.1 is not on the forward difference table, the point nearest to it on the table will be the initial point, i.e. x0 = 1.0, and we compute dy/dx and d²y/dx² using Eqs. (65) and (66), where u = (x − x0)/h = (1.1 − 1.0)/0.2 = 0.5; therefore

dy/dx |x=1.1 = y′(1.1) = (1/h)[∆f0 + (2u − 1)/2! ∆²f0 + (3u² − 6u + 2)/3! ∆³f0 + · · ·]
= (1/0.2)[0.128 + ((2(0.5) − 1)/2)(0.288) + ((3(0.5)² − 6(0.5) + 2)/6)(0.05)] = 0.62958

and

d²y/dx² |x=1.1 = y″(1.1) = (1/h²)[∆²f0 + (6u − 6)/3! ∆³f0 + (12u² − 36u + 22)/4! ∆⁴f0 + · · ·]
= (1/0.2²)[0.288 + ((6(0.5) − 6)/6)(0.05)] = 6.575

Now, x = 1.2 = x0 ⇒ u = 0, and using Eqs. (67) and (68) we obtain
dy/dx |x=1.2 = y′(1.2) = 1.31833 and d²y/dx² |x=1.2 = y″(1.2) = 7.2

Exercise

1.
2NaOH + 2HCl −→ 2NaCl + H2O

The chemical reaction given above shows the quantity f(x) of sodium chloride (in grams) formed with respect to the time taken (in hours). Obtain the polynomial that will fit the data below, using the backward difference formula of appropriate degree at t = 2. Hence find the quantity of sodium chloride (NaCl) formed when t = 3/4 hr.

t (hr)     0    1/2    1    3/2    2    5/2
f (x) (g)  184  204   226   250   276   304

2. Find the first and second derivatives of the functions tabulated below at the point x =
1.1, x = 1.2 and x = 1.9

x 1.0 1.2 1.4 1.6 1.8 2.0


y 0.0 0.10 0.50 1.25 2.40 3.90

3. Estimate f 0 (1.7) and f 0 (2.2) from the following table

x 1.3 1.5 1.7 1.9 2.1 2.3 2.5


f (x) 3.699 4.482 5.474 6.686 8.166 9.974 12.182

5.1.2 Derivatives based on backward difference formula

From Eq. (49), that is

fu = f0 + u∇f0 + u(u + 1)/2! ∇²f0 + . . . + u(u + 1)(u + 2) · · · (u + n − 1)/n! ∇ⁿf0 + . . . (69)

differentiating Eq. (49) w.r.t. x, we obtain

f′(x) = (1/h)[∇f0 + (2u + 1)/2! ∇²f0 + (3u² + 6u + 2)/3! ∇³f0 + · · ·] (70)

f″(x) = (1/h²)[∇²f0 + (u + 1)∇³f0 + (6u² + 18u + 11)/12 ∇⁴f0 + · · ·] (71)

Equations (70) and (71) give the approximate derivatives of f(x) at an arbitrary point
u = (x − x0)/h ⇒ x = x0 + uh. When x = x0, u = 0, and (70) and (71) become

f′(x0) = (1/h)[∇f0 + (1/2)∇²f0 + (1/3)∇³f0 + (1/4)∇⁴f0 + · · ·] (72)

f″(x0) = (1/h²)[∇²f0 + ∇³f0 + (11/12)∇⁴f0 + · · ·] (73)
Example: A slider in a machine moves along a fixed straight rod. Its distance s (m) along the rod is given in the following table for various values of the time t (seconds). Find the velocity and acceleration of the slider at time t = 6 secs.

time t(secs) 1 2 3 4 5 6
distance s(m) 0.0201 0.0844 0.3444 1.0100 2.3660 4.7719

Solution
Table 9: Backward difference table for the example above

t   s        ∇s       ∇²s      ∇³s      ∇⁴s     ∇⁵s
1   0.0201
             0.0643
2   0.0844            0.1957
             0.2600            0.2099
3   0.3444            0.4056            0.075
             0.6656            0.2848            0
4   1.0100            0.6904            0.075
             1.3560            0.3595
5   2.3660            1.0499
             2.4059
6   4.7719

h = 1; applying equations (72) and (73),

ds/dt |t=6 = (1/h)[∇f0 + (1/2)∇²f0 + (1/3)∇³f0 + (1/4)∇⁴f0 + · · ·]
= 2.4059 + (1/2)(1.0499) + (1/3)(0.3595) + (1/4)(0.075) = 3.069 m/s (74)

d²s/dt² |t=6 = (1/h²)[∇²f0 + ∇³f0 + (11/12)∇⁴f0 + · · ·] = 1.0499 + 0.3595 + (11/12)(0.075) = 1.4782 m/s² (75)
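Equations (72) and (73) translate directly into code; a Python sketch using the tabulated distances (f0 taken at the last point t = 6, differences read up the table, truncated after the fourth difference; names are mine):

```python
def backward_derivs(ys, h):
    """First and second derivatives at the last tabulated point,
    Eqs. (72) and (73) truncated after the fourth difference."""
    cols = [list(ys)]
    for _ in range(4):
        prev = cols[-1]
        cols.append([prev[i+1] - prev[i] for i in range(len(prev) - 1)])
    # the backward differences at the last point are the last entries
    n1, n2, n3, n4 = (cols[k][-1] for k in (1, 2, 3, 4))
    v = (n1 + n2/2 + n3/3 + n4/4) / h
    a = (n2 + n3 + 11*n4/12) / h**2
    return v, a

s = [0.0201, 0.0844, 0.3444, 1.0100, 2.3660, 4.7719]
v, a = backward_derivs(s, 1.0)
print(round(v, 3), round(a, 4))  # 3.069 1.4779
```

The tiny difference from the hand value for the acceleration comes from the table rounding ∇⁴s to 0.075 (the unrounded difference is 0.0747).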

5.1.3 Derivatives based on central difference formula

As in the case of interpolation, the derivatives are more accurate near the centre of the range of
fit. Consequently, the central difference formula gives better accuracy than forward difference
formula. Now,
f′(x0) = lim_{h→0} [f(x0 + (1/2)h) − f(x0 − (1/2)h)]/h = lim_{h→0} δf0/h (76)

this implies

f′(x0) ≅ (1/h) δf0 for h sufficiently small (77)

 
f″(x0) = lim_{h→0} [f′(x0 + (1/2)h) − f′(x0 − (1/2)h)]/h = lim_{h→0} [(1/h)δf_{1/2} − (1/h)δf_{−1/2}]/h = lim_{h→0} δ(f_{1/2} − f_{−1/2})/h² = lim_{h→0} δ²f0/h² (78)

Thus for sufficiently small values of h

f″(x0) ≅ (1/h²) δ²f0 with error O(h²) (79)

it can be shown by induction that

f⁽ⁿ⁾(x0) = (1/hⁿ) δⁿf0 + O(h²) (80)

Eq. (80) is the central difference formula for the n-th derivative of f(x) at x = x0.
For n = 2, we obtain

f″(x0) ≅ (1/h²)(f1 − 2f0 + f−1) (81)
Example: Using the central difference approach, estimate f″(0.2) for the function f(x) = exp(x) with h = 0.02; hence compute the error.

Solution: Let x0 = 0.2, x−1 = x0 − h = 0.2 − 0.02 = 0.18, x1 = x0 + h = 0.22; f0 = f(0.2) = 1.2214, f−1 = f(0.18) = 1.1972, f1 = f(0.22) = 1.2461. Using Eq. (81) (with the function values carried to full precision), we have

f″(x0) ≅ (1/h²)(f1 − 2f0 + f−1) = (1.2461 − 2(1.2214) + 1.1972)/0.02² = 1.22144

the error is exact − approximate = exp(0.2) − 1.22144 = −3.72 × 10⁻⁵.
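Equation (81) is a one-liner in code; a Python sketch for f(x) = exp(x) at x0 = 0.2 with h = 0.02:

```python
from math import exp

def second_derivative_central(f, x0, h):
    """f''(x0) ≈ (f(x0+h) - 2 f(x0) + f(x0-h)) / h², Eq. (81)."""
    return (f(x0 + h) - 2*f(x0) + f(x0 - h)) / h**2

approx = second_derivative_central(exp, 0.2, 0.02)
error = exp(0.2) - approx
print(round(approx, 5))   # 1.22144
print(abs(error) < 1e-4)  # True: the error is O(h^2)
```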

6 SYSTEM OF LINEAR EQUATIONS


A system of algebraic equations has the form
a11 x1 + a12 x2 + a13 x3 + · · · + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + · · · + a2n xn = b2
.. .. ..
. . . (82)
.. .. ..
. . .
an1 x1 + an2 x2 + an3 x3 + · · · + ann xn = bn
where the coefficients amn and the constants bm are known and xm represents the unknowns.
In matrix notation, the equations are written as

⎡ a11  a12  · · ·  a1n ⎤ ⎡ x1 ⎤   ⎡ b1 ⎤
⎢ a21  a22  · · ·  a2n ⎥ ⎢ x2 ⎥ = ⎢ b2 ⎥  ⇒ AX = b
⎢  ..   ..  · · ·   .. ⎥ ⎢ .. ⎥   ⎢ .. ⎥
⎣ an1  an2  · · ·  ann ⎦ ⎣ xn ⎦   ⎣ bn ⎦
A system of linear equations in n unknowns has a unique solution provided that the coefficient matrix is non-singular, i.e., |A| ≠ 0. The rows and columns of a non-singular matrix are linearly independent in the sense that no row (or column) is a linear combination of the other rows (or columns). If the coefficient matrix is singular, the equations may have an infinite number of solutions, or no solutions at all, depending on the constant vector. Linear algebraic equations occur in almost all branches of engineering. Their most important application in engineering is in the analysis of linear systems (any system whose response is proportional to the input is deemed to be linear). There are two fundamentally different approaches for solving systems of linear algebraic equations. These are

1. Direct elimination methods

2. Iterative method

Examples of direct elimination methods include Gauss elimination, the matrix inverse
method and LU decomposition. Iterative methods obtain the solution by assuming a trial so-
lution (called initial guess). The assumed solution is substituted into the system of equations
in a repetitive manner until the scheme converges to the approximate roots. Examples of itera-
tive methods are Gauss-Jacobi iteration, Gauss-Seidel iteration and successive-over-relaxation
(SOR).

6.1 Solution by Gauss-Jacobi iteration

To solve equation (82) by this method, the matrix A must be strictly diagonally dominant.
The following iterative schemes are derived from (82):

x1^{m+1} = (1/a11)[b1 − a12 x2^{m} − a13 x3^{m} − · · · − a1n xn^{m}]
x2^{m+1} = (1/a22)[b2 − a21 x1^{m} − a23 x3^{m} − · · · − a2n xn^{m}]
x3^{m+1} = (1/a33)[b3 − a31 x1^{m} − a32 x2^{m} − · · · − a3n xn^{m}] (83)
..
xn^{m+1} = (1/ann)[bn − an1 x1^{m} − an2 x2^{m} − · · · − a_{n(n−1)} x_{n−1}^{m}]

The general case for n linear equations is

xi^{m+1} = (1/aii)[bi − Σ_{j=1, j≠i}^{n} aij xj^{m}], i = 1, 2, . . . , n (84)

If xi^{0}, the initial approximation, is not given, we may set xi^{0} = 0 ∀i. This iteration continues until the stopping criterion |xi^{m+1} − xi^{m}| < ε, for a prescribed tolerance ε, is achieved. For a

3 × 3 matrix, we assume that the coefficients a11 , a22 and a33 are the largest coefficients in the
respective equations, so that
|a11 | ≥ |a12 | + |a13 |

|a22 | ≥ |a21 | + |a23 | (85)

|a33 | ≥ |a31 | + |a32 |


Jacobi's iteration method is guaranteed to converge if the conditions given in Eq. (85) are satisfied.

Example 1: Solve the following equations by Jacobi's method with initial approximation x{0} = y{0} = z{0} = 0.

15x + 3y − 2z = 85

2x + 10y + z = 51

x − 2y + 8z = 5
|15| ≥ |3| + |2|

|10| ≥ |2| + |1|

|8| ≥ |1| + |2|

15x + 3y − 2z = 85 ⇒ x = (1/15)(85 − 3y + 2z)
2x + 10y + z = 51 ⇒ y = (1/10)(51 − 2x − z)
x − 2y + 8z = 5 ⇒ z = (1/8)(5 − x + 2y)
using equation (83), we have
x{m+1} = (1/15)(85 − 3y{m} + 2z{m})
y{m+1} = (1/10)(51 − 2x{m} − z{m})
z{m+1} = (1/8)(5 − x{m} + 2y{m})
for the first iteration (m = 0), we have
x{1} = (1/15)(85 − 3y{0} + 2z{0}) = 17/3
y{1} = (1/10)(51 − 2x{0} − z{0}) = 51/10
z{1} = (1/8)(5 − x{0} + 2y{0}) = 5/8
second iteration (m = 1):
x{2} = (1/15)(85 − 3y{1} + 2z{1}) = 4.73
y{2} = (1/10)(51 − 2x{1} − z{1}) = 3.904
z{2} = (1/8)(5 − x{1} + 2y{1}) = 1.192
third (3rd) iteration (m = 2):
x{3} = (1/15)(85 − 3y{2} + 2z{2}) = 5.045
y{3} = (1/10)(51 − 2x{2} − z{2}) = 4.035
z{3} = (1/8)(5 − x{2} + 2y{2}) = 1.010
4th iteration (m = 3):
x{4} = (1/15)(85 − 3y{3} + 2z{3}) = 4.994
y{4} = (1/10)(51 − 2x{3} − z{3}) = 3.990
z{4} = (1/8)(5 − x{3} + 2y{3}) = 1.003
5th iteration (m = 4):
x{5} = (1/15)(85 − 3y{4} + 2z{4}) = 5.002
y{5} = (1/10)(51 − 2x{4} − z{4}) = 4.001
z{5} = (1/8)(5 − x{4} + 2y{4}) = 0.998
6th iteration (m = 5):
x{6} = (1/15)(85 − 3y{5} + 2z{5}) = 5
y{6} = (1/10)(51 − 2x{5} − z{5}) = 4
z{6} = (1/8)(5 − x{5} + 2y{5}) = 1
7th iteration (m = 6):
x{7} = (1/15)(85 − 3y{6} + 2z{6}) = 5
y{7} = (1/10)(51 − 2x{6} − z{6}) = 4
z{7} = (1/8)(5 − x{6} + 2y{6}) = 1
Hence the solution is x = 5, y = 4, z = 1.
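The hand iteration above can be automated. A Python sketch of scheme (84) (tolerance and iteration cap are my own choices), applied to the same system:

```python
def jacobi(A, b, x0, tol=1e-6, max_iter=100):
    """Gauss-Jacobi iteration, Eq. (84): every new component is
    computed from the previous iterate only."""
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

A = [[15, 3, -2], [2, 10, 1], [1, -2, 8]]
b = [85, 51, 5]
sol = jacobi(A, b, [0, 0, 0])
print([round(v, 3) for v in sol])  # [5.0, 4.0, 1.0]
```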

Example 2: Use the Jacobi’s iterative scheme to obtain the solutions of the system of equations
correct to three decimal places with initial guess (1,1,1)

x + 2y + z = 0

3x + y − z = 0

x − y + 4z = 3

Using equation (85):
|1| < |2| + |1| (the condition fails for the first equation)
|3| ≥ |1| + |−1|
|4| ≥ |1| + |−1|


since equation (85) is not satisfied, we now rearrange the equations in such a way that all the diagonal terms are dominant, that is

3x + y − z = 0

x + 2y + z = 0

x − y + 4z = 3

again verify equation (85)


|3| ≥ |1| + |−1|

|2| ≥ |1| + |1|

|4| ≥ |1| + |−1|


for the m-th iteration, we have

x{m+1} = (1/3)(z{m} − y{m})
y{m+1} = (1/2)(−x{m} − z{m}) (86)
z{m+1} = (1/4)(3 − x{m} + y{m})
From Eq. (86), we obtain
m x{m} y {m} z {m}
0 1 1 1
1 0 -1 0.75
2 0.5833 -0.3750 0.500
3 0.29167 -0.47917 0.51042
4 0.32986 -0.40104 0.57862
5 0.32595 -0.45334 0.56728
6 0.34021 -0.44662 0.55329
7 0.3333 -0.44675 0.55498
8 0.33391 -0.44414 0.55498
9 0.33304 -0.44445 0.5555
Exercise

1. Solve the system of linear equations by Gauss-Jacobi iteration

5x + 2y − z = 6

x + 6y − 3z = 4

2x + y + 4z = 7

2. Use Jacobi iterative scheme to obtain the solution of the system of equations correct to
two decimal places, with initial approximation

5x1 − 2x2 + x3 = 4

x1 + 4x2 − 2x3 = 3

x1 + 2x2 + 4x3 = 17

3. Solve the system of linear equations by Gauss-Jacobi iteration

4x − 2y + 7z = 6

2x + 3y − 4z = 9

x + y + 5z = 12

6.2 Solution by Gauss-Seidel iteration

Consider the system of Eq. (82). For this system of equations, we define the Gauss-Seidel method as:

x1^{m+1} = (1/a11)[b1 − a12 x2^{m} − a13 x3^{m} − · · · − a1n xn^{m}]
x2^{m+1} = (1/a22)[b2 − a21 x1^{m+1} − a23 x3^{m} − · · · − a2n xn^{m}]
x3^{m+1} = (1/a33)[b3 − a31 x1^{m+1} − a32 x2^{m+1} − · · · − a3n xn^{m}] (87)
..
xn^{m+1} = (1/ann)[bn − an1 x1^{m+1} − an2 x2^{m+1} − · · · − a_{n(n−1)} x_{n−1}^{m+1}]

The general case for n linear equations is

xi^{m+1} = (1/aii)[bi − Σ_{j=1}^{i−1} aij xj^{m+1} − Σ_{j=i+1}^{n} aij xj^{m}], i = 1, 2, . . . , n (88)

You may notice here that in the first equation of system (87), we substitute the initial ap-
{0} {0} {0}
proximation (x2 , x3 , · · · , xn ) on the right hand side. In the second equation we sub-
{1} {0} {0}
stitute (x1 , x3 , · · · , xn ) on the right hand side. In the third equation, we substitute
{1} {1} {0} {0}
(x1 , x2 , x4 , · · · , xn ) on the right hand side. We continue in this manner until all the
components have been improved. At the end of this first iteration, we will have an improved
{1} {1} {1} {1}
vector (x1 , x2 , x3 , · · · , xn ). The entire process is then repeated and this method is also
called method of successive displacements.

Example 1: Solve the following equations by the Gauss-Seidel method

8x + 2y − 2z = 8

x − 8y + 3z = −4

2x + y + 9z = 12

Using the condition (85):

|8| ≥ |2| + |−2|
|−8| ≥ |1| + |3|
|9| ≥ |2| + |1|


Since the conditions of convergence are satisfied, we can apply the Gauss-Seidel method. We rewrite the given equations as follows:

8x + 2y − 2z = 8 ⇒ x = (1/4)(4 − y + z)
x − 8y + 3z = −4 ⇒ y = (1/8)(4 + x + 3z)
2x + y + 9z = 12 ⇒ z = (1/9)(12 − 2x − y)

x{m+1} = (1/4)(4 − y{m} + z{m})
y{m+1} = (1/8)(4 + x{m+1} + 3z{m}) (89)
z{m+1} = (1/9)(12 − 2x{m+1} − y{m+1})
first (1st) iteration (m = 0), taking x{0} = y{0} = z{0} = 0:
x{1} = (1/4)(4 − y{0} + z{0}) = 1
y{1} = (1/8)(4 + x{1} + 3z{0}) = 0.625
z{1} = (1/9)(12 − 2x{1} − y{1}) = 1.042
second (2nd) iteration (m = 1):
x{2} = (1/4)(4 − y{1} + z{1}) = 1.104
y{2} = (1/8)(4 + x{2} + 3z{1}) = 1.029
z{2} = (1/9)(12 − 2x{2} − y{2}) = 0.974
third (3rd) iteration (m = 2):
x{3} = (1/4)(4 − y{2} + z{2}) = 0.986
y{3} = (1/8)(4 + x{3} + 3z{2}) = 0.989
z{3} = (1/9)(12 − 2x{3} − y{3}) = 1.004
fourth (4th) iteration (m = 3):
x{4} = (1/4)(4 − y{3} + z{3}) = 1.004
y{4} = (1/8)(4 + x{4} + 3z{3}) = 1.002
z{4} = (1/9)(12 − 2x{4} − y{4}) = 0.999
5th iteration (m = 4):
x{5} = (1/4)(4 − y{4} + z{4}) = 0.999
y{5} = (1/8)(4 + x{5} + 3z{4}) = 1.000
z{5} = (1/9)(12 − 2x{5} − y{5}) = 1.000
6th iteration (m = 5):
x{6} = (1/4)(4 − y{5} + z{5}) = 1.000
y{6} = (1/8)(4 + x{6} + 3z{5}) = 1.000
z{6} = (1/9)(12 − 2x{6} − y{6}) = 1.000
Hence the solution is x = y = z = 1.
%MATLAB code for Example 1
x(1)=0; y(1)=0; z(1)=0; % initial approximation
n=input(’the number iteration to be perform=’)
for m=1:n % n=number of iteration
x(m+1)=1/4*(4-y(m)+z(m));
y(m+1)=1/8*(4+x(m+1)+3*z(m));
z(m+1)=1/9*(12-2*x(m+1)-y(m+1));
end
[x’ y’ z’]

7 NUMERICAL INTEGRATION
The general form of the problem of numerical integration may be stated as follows: given a set of data points (xi, yi), i = 0, 1, · · · , n of a function y = f(x), where f(x) is not explicitly known, we are required to evaluate the definite integral
Z b
I= f (x)dx (90)
a

Here, we replace y = f(x) by an interpolating polynomial φ(x) in order to obtain an approximate value of the definite integral of Eq.(90). In what follows, we derive a general formula for numerical integration by using Newton's forward difference formula. Here, we assume the
interval (a, b) is divided into n-equal subintervals such that

h = (b − a)/n, a = x0 < x1 < · · · < xn = b (91)

with xn = x0 + nh, where h is the interval (step) size, n is the number of subintervals, and a and b are the limits of integration with b > a.
Hence, the integral in Eq.(90) can be written as
Z xn
I= f (x)dx (92)
x0

Using Newton's forward interpolation formula, we have

I = ∫_{x0}^{xn} [f0 + p∆f0 + p(p − 1)/2! ∆²f0 + · · ·] dx (93)

where x = x0 + ph, so that dx = h dp and

I = h ∫_{0}^{n} [f0 + p∆f0 + p(p − 1)/2! ∆²f0 + · · ·] dp (94)
Hence, after simplification, we get

I = ∫_{x0}^{xn} f(x)dx = nh[f0 + (n/2)∆f0 + n(2n − 3)/12 ∆²f0 + n(n − 2)²/24 ∆³f0 + · · ·] (95)

The formula given by Eq.(95) is known as Newton-Cotes closed quadrature formula.


From the general formula (Eq.(95)), we can derive or deduce different integration formulae by
substituting n = 1, 2, · · ·

7.1 TRAPEZOIDAL RULE

Substituting n = 1 in Eq.(95) (with u ≡ n in Eq. (47)) and considering the curve y = f(x) through the points (x0, y0) and (x1, y1) as a straight line (a polynomial of first degree, so that differences of order higher than the first become zero), then summing over the n subintervals, we obtain

I = Σ_{r=1}^{n} Ir = ∫_{x0}^{xn} f(x)dx = (h/2)[f0 + 2(f1 + f2 + · · · + fn−1) + fn] (96)

Equation (96) is known as the trapezoidal rule.


Summarising, the trapezoidal rule signifies that the curve y = f(x) is replaced by n straight lines joining the points (xi, yi), i = 0, 1, · · · , n.

7.1.1 Error Estimate in Trapezoidal Rule

Let y = f(x) be a continuous function with continuous derivatives in the interval [x0, xn].
Expanding y in a Taylor's series around x = x0, we get

∫_{x0}^{x1} f(x)dx = ∫_{x0}^{x1} [f0 + (x − x0)f0′ + (x − x0)²/2! f0″ + (x − x0)³/3! f0‴ + · · ·] dx
= hf0 + (h²/2)f0′ + (h³/6)f0″ + (h⁴/24)f0‴ + · · · (97)

Again, the trapezoidal approximation over (x0, x1) gives

∫_{x0}^{x1} f(x)dx ≈ (h/2)(f0 + f1) = (h/2)[f0 + f(x0 + h)] = (h/2)[f0 + f0 + hf0′ + (h²/2)f0″ + · · ·]
= hf0 + (h²/2)f0′ + (h³/4)f0″ + (h⁴/12)f0‴ + · · · (98)

Hence, the error e1 in (x0, x1) is obtained from Eqs. (97) and (98) as

e1 = −(1/12)h³f0″ + · · · (99)

Similarly, we have

e2 = −(1/12)h³f1″ + · · · , e3 = −(1/12)h³f2″ + · · · , and so on (100)

In general, we can write

en = −(1/12)h³f″_{n−1} + · · · (101)

Hence, the total error E in the interval (x0, xn) can be written as

E = Σ_{i=1}^{n} ei = −(1/12)h³[f0″ + f1″ + f2″ + · · · + f″_{n−1}] (102)

MATLAB code for the trapezoidal rule

function I = trpzds(f,a,b,N)
%integral of f(x) over [a,b] by the trapezoidal rule with N segments
if abs(b - a) < eps || N <= 0, I = 0; return; end
h = (b - a)/N; x = a + [0:N]*h;
fx = f(x); % evaluate the integrand at the nodes
I = h*((fx(1) + fx(N + 1))/2 + sum(fx(2:N))); %Eq.(96)

Example: Evaluate the integrals

1. ∫₀^1.2 eˣ dx

2. ∫₀^1.2 x²eˣ dx

3. ∫₀^1.2 e^(x²) dx

4. ∫₀^1.2 x²e^(x²) dx

5. ∫₀^1.2 1/(1 + x²) dx

6. ∫₀^1 sin(x²) dx

taking six intervals, by using the trapezoidal rule, up to five significant figures.


solution 1
a = 0, b = 1.2, n = 6, h = (b − a)/n = 0.2

xi     0    0.2    0.4    0.6    0.8    1.0    1.2
f(xi)  1   1.221  1.492  1.822  2.226  2.718  3.320

From Eq. (96) we have

I = (h/2)[f0 + f6 + 2(f1 + f2 + f3 + f4 + f5)] = (0.2/2)[1 + 3.320 + 2(1.221 + 1.492 + 1.822 + 2.226 + 2.718)] ≈ 2.32785

The exact value is ∫₀^1.2 eˣ dx = e^1.2 − 1 = 2.32012.
Solution 2:
a = 0, b = 1.2, n = 6, h = (b - a)/n = 0.2

    x_i    :  0     0.2       0.4       0.6       0.8       1.0       1.2
    f(x_i) :  0     0.04886   0.23869   0.65596   1.42435   2.71828   4.78097

From Eq. (96) we have

    I = \frac{h}{2}[f_0 + f_6 + 2(f_1 + f_2 + f_3 + f_4 + f_5)] = 1.49532

The exact value is \int_0^{1.2} x^2 e^x dx = \left[(x^2 - 2x + 2)e^x\right]_0^{1.2} = 1.45292.

7.2 SIMPSON'S 1/3 RULE

Substituting n = 2 in Eq. (95) (the same as Eq. (47) with u ≡ n) and taking the curve through
the points (x_0, y_0), (x_1, y_1) and (x_2, y_2) as a polynomial of second degree (a parabola), so that the
differences of order higher than two vanish, we obtain

    I = \int_{x_0}^{x_{2n}} f(x)\,dx = \frac{h}{3}\left[f_0 + 4(f_1 + f_3 + f_5 + \cdots + f_{2n-1}) + 2(f_2 + f_4 + f_6 + \cdots + f_{2n-2}) + f_{2n}\right]    (103)

Equation (103) is known as Simpson's 1/3 rule. Simpson's 1/3 rule requires that the whole range (the
given interval) be divided into an even number of equal subintervals.

7.2.1 Error Estimate in Simpson’s 1/3 Rule

Expanding y = f(x) around x = x_0 by Taylor's series, we obtain

    \int_{x_0}^{x_2} f(x)\,dx = \int_{x_0}^{x_0 + 2h} \left[f_0 + (x - x_0)f_0' + \frac{(x - x_0)^2}{2!}f_0'' + \frac{(x - x_0)^3}{3!}f_0''' + \cdots\right] dx
                              = 2h f_0 + 2h^2 f_0' + \frac{4}{3} h^3 f_0'' + \frac{2}{3} h^4 f_0''' + \frac{4}{15} h^5 f_0^{iv} + \cdots    (104)

Applying Simpson's rule (103) to the single pair of subintervals (x_0, x_2), we have

    \frac{h}{3}[f_0 + 4f_1 + f_2] = \frac{h}{3}[f_0 + 4f(x_0 + h) + f(x_0 + 2h)]
    = \frac{h}{3}\left[f_0 + 4\left(f_0 + h f_0' + \frac{h^2}{2!} f_0'' + \frac{h^3}{3!} f_0''' + \cdots\right) + \left(f_0 + 2h f_0' + \frac{(2h)^2}{2!} f_0'' + \frac{(2h)^3}{3!} f_0''' + \cdots\right)\right]
    = 2h f_0 + 2h^2 f_0' + \frac{4}{3} h^3 f_0'' + \frac{2}{3} h^4 f_0''' + \frac{5}{18} h^5 f_0^{iv} + \cdots    (105)

Hence, from Eqs. (104) and (105), the error in the subinterval (x_0, x_2) is given by

    e_1 = \int_{x_0}^{x_2} f(x)\,dx - \frac{h}{3}[f_0 + 4f_1 + f_2] = \left(\frac{4}{15} - \frac{5}{18}\right) h^5 f_0^{iv} + \cdots \approx -\frac{1}{90} h^5 f_0^{iv}    (106)

Likewise, the errors in the subsequent pairs of subintervals are given by

    e_2 = -\frac{1}{90} h^5 f_2^{iv}, \quad e_3 = -\frac{1}{90} h^5 f_4^{iv}, \quad and so on    (107)
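Summing the per-pair errors (107) makes the composite Simpson error O(h^4), so halving h should shrink the error by roughly 2^4 = 16. A quick numerical check (a Python sketch, not from the notes; names are mine) on \int_0^{1.2} e^x dx:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule, Eq. (103); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd-index nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return h * s / 3.0

exact = math.exp(1.2) - 1.0
err6 = abs(simpson(math.exp, 0.0, 1.2, 6) - exact)      # error with h = 0.2
err12 = abs(simpson(math.exp, 0.0, 1.2, 12) - exact)    # error with h = 0.1
ratio = err6 / err12                                    # should be close to 2^4 = 16
```

The ratio comes out close to 16, confirming the fourth-order accuracy, in contrast to the trapezoidal rule's factor of 4.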

MATLAB code for Simpson's 1/3 rule

function I = smpsns(f,a,b,N)
%integral of f(x) over [a,b] by Simpson's rule with N segments
if nargin < 4, N = 100; end
if abs(b - a) < 1e-12 || N <= 0, I = 0; return; end
if mod(N,2) ~= 0, N = N + 1; end      %make N even
h = (b - a)/N; x = a + (0:N)*h;       %the boundary nodes for N segments
fx = f(x);                            %values of f at all nodes
fx(fx == inf) = realmax; fx(fx == -inf) = -realmax;   %guard against infinities
kodd = 2:2:N; keven = 3:2:N - 1;      %the interior odd/even node indices
I = h/3*(fx(1) + fx(N + 1) + 4*sum(fx(kodd)) + 2*sum(fx(keven)));   %Eq.(103)
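A Python sketch mirroring the MATLAB function above (names are mine; the guards follow the same logic) reproduces the answers quoted in Example 2 below:

```python
import math

def smpsns(f, a, b, n=100):
    """Integral of f over [a, b] by Simpson's 1/3 rule with n segments, Eq. (103)."""
    if n <= 0 or a == b:
        return 0.0
    if n % 2:            # Simpson's rule needs an even number of segments
        n += 1
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))    # f_1, f_3, ..., f_{n-1}
    even = sum(f(a + i * h) for i in range(2, n, 2))   # f_2, f_4, ..., f_{n-2}
    return h / 3.0 * (f(a) + f(b) + 4.0 * odd + 2.0 * even)

S1 = smpsns(math.exp, 0.0, 1.2, 6)                       # integral of e^x
S2 = smpsns(lambda x: x * x * math.exp(x), 0.0, 1.2, 6)  # integral of x^2 e^x
```

Here S1 ≈ 2.3201374 and S2 ≈ 1.453296, already much closer to the exact values than the trapezoidal results.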

Example 2: Solve the examples given under the trapezoidal rule using Simpson's 1/3 rule.
The answer to the first one is 2.3201374482, and 1.453296 for the second.

8 NUMERICAL SOLUTION OF FIRST ORDER ORDINARY DIFFERENTIAL EQUATIONS

Suppose we require the solution of

    y'(x) = \frac{dy}{dx} = f(x, y), \quad subject to the initial condition y(x_0) = y_0    (108)

This type of problem is called an initial value problem (IVP).


For the purpose of our discussion, we shall consider the following methods

1. Taylor series method

2. Picard’s approximation method

3. Euler’s method

8.1 Taylor’s series method

Taylor's series method is given by

    y_{n+1} = y_n + \frac{h}{1!} y_n' + \frac{h^2}{2!} y_n'' + \frac{h^3}{3!} y_n''' + \cdots    (109)
Example 1: Solve \frac{dy}{dx} - 5y = 0, given y(0) = 1. Estimate y(0.1) and y(0.2), and compare
your results with the exact solution y = e^{5x}.

Solution:
x_0 = 0, h = 0.1, y_0 = 1. From y' = 5y we get y_0' = 5y_0 = 5 × 1 = 5; from y'' = 5y' we get
y_0'' = 5y_0' = 5 × 5 = 25; from y''' = 5y'' we get y_0''' = 5y_0'' = 5 × 25 = 125; and from
y^{iv} = 5y''' we get y_0^{iv} = 5y_0''' = 5 × 125 = 625.
Applying Eq. (109), we have

    y_1 = y_0 + \frac{h}{1!} y_0' + \frac{h^2}{2!} y_0'' + \frac{h^3}{3!} y_0''' + \frac{h^4}{4!} y_0^{iv} + \cdots
        = 1 + 0.1(5) + \frac{0.1^2}{2}(25) + \frac{0.1^3}{6}(125) + \frac{0.1^4}{24}(625) + \cdots = 1.64844

Then y_1 = 1.64844, y_1' = 5y_1 = 8.2422, y_1'' = 5y_1' = 41.211, y_1''' = 5y_1'' = 206.055,
y_1^{iv} = 5y_1''' = 1030.28, so that

    y_2 = y(0.2) = y_1 + \frac{h}{1!} y_1' + \frac{h^2}{2!} y_1'' + \frac{h^3}{3!} y_1''' + \frac{h^4}{4!} y_1^{iv} + \cdots
        = 1.64844 + 0.1(8.2422) + \frac{0.1^2}{2}(41.211) + \frac{0.1^3}{6}(206.055) + \frac{0.1^4}{24}(1030.28) = 2.71735
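The hand computation above is easy to automate: for y' = 5y every derivative obeys y^{(k)} = 5^k y, so the truncated series (109) can be built in a loop. A Python sketch (function names are mine, not from the notes), truncating after the fourth derivative as in the example:

```python
def taylor_step(y, h, derivs):
    """One step of Taylor's series method, Eq. (109), given [y', y'', ...] at x_n."""
    total, fact = y, 1.0
    for k, d in enumerate(derivs, start=1):
        fact *= k                   # running factorial k!
        total += d * h**k / fact
    return total

def step_5y(y, h):
    """Taylor step for y' = 5y, where y^(k) = 5^k * y."""
    return taylor_step(y, h, [5**k * y for k in range(1, 5)])

y1 = step_5y(1.0, 0.1)   # estimate of y(0.1)
y2 = step_5y(y1, 0.1)    # estimate of y(0.2)
```

This gives y1 ≈ 1.64844 and y2 ≈ 2.71735, matching the worked values.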
Example 2: Obtain the solution of the IVP y' + y = e^{3x}, y(0) = 1, using Taylor's series
method. Compare your results with the exact solution y = \frac{1}{4}e^{3x} + \frac{3}{4}e^{-x} at x = 0.2 and
x = 0.4.

Solution:
x_0 = 0, h = 0.2, y_0 = 1. From y' = e^{3x} - y we get y_0' = e^{3x_0} - y_0 = 1 - 1 = 0; y'' = 3e^{3x} - y' gives
y_0'' = 3e^{3x_0} - y_0' = 3 - 0 = 3; y''' = 9e^{3x} - y'' gives y_0''' = 9e^{3x_0} - y_0'' = 9 - 3 = 6; and y^{iv} = 27e^{3x} - y''' gives
y_0^{iv} = 27e^{3x_0} - y_0''' = 27 - 6 = 21.
Applying Eq. (109), we have y_1 = y(0.2) = 1.0694.
Then x_1 = 0.2, y_1 = 1.0694, y_1' = e^{3x_1} - y_1 = 0.752719, y_1'' = 3e^{3x_1} - y_1' = 4.71364, y_1''' = 9e^{3x_1} - y_1'' =
11.68543, y_1^{iv} = 27e^{3x_1} - y_1''' = 37.51178.
Applying Eq. (109), we obtain y_2 = y(0.4) = 1.3323.

    x     Approximate   Exact    Error
    0.2   1.0694        1.0696   0.0002
    0.4   1.3323        1.3328   0.0005
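Differentiating y' = e^{3x} - y repeatedly gives the recurrence y^{(k+1)} = 3^k e^{3x} - y^{(k)}, which makes the step convenient to code. A Python sketch of a fourth-order Taylor step for this particular equation (my own function name, not from the notes):

```python
import math

def taylor4(x, y, h):
    """One 4th-order Taylor step, Eq. (109), for y' = e^{3x} - y,
    using the recurrence y^(k+1) = 3^k e^{3x} - y^(k)."""
    d, total = y, y
    for k in range(4):
        d = 3**k * math.exp(3 * x) - d                 # d now holds y^(k+1)
        total += d * h**(k + 1) / math.factorial(k + 1)
    return total

y1 = taylor4(0.0, 1.0, 0.2)   # estimate of y(0.2)
y2 = taylor4(0.2, y1, 0.2)    # estimate of y(0.4)
```

The computed values agree with the table above to four decimal places.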

Exercise
Using Taylor's series method, obtain the solutions of the following IVPs correct to 4 decimal
places.

(a) \frac{dy}{dx} = 2xy, y(1) = 1. The exact solution is y = e^{x^2 - 1}.

(b) y' = y^2 + 1, y(0) = 0. The exact solution is y = \tan x.

8.2 Picard’s method of successive approximation

To solve the IVP

    y'(x) = \frac{dy}{dx} = f(x, y), \quad y(x_0) = y_0    (110)

Picard's method is given by

    y_n = y_0 + \int_{x_0}^{x} f(x, y_{n-1})\,dx    (111)

Example: Solve \frac{dy}{dx} - 5y = 0, given y(0) = 1; find y_4(0.1), y_4(0.2), y_4(0.3).

y' = 5y ⇒ f(x, y) = 5y, and from y(0) = 1 we have x_0 = 0, y_0 = 1.
Substituting n = 1, 2, 3, 4 in Eq. (111), we obtain

    y_1 = y_0 + \int_0^x f(x, y_0)\,dx = 1 + \int_0^x 5\,dx = 1 + 5x

    y_2 = y_0 + \int_0^x f(x, y_1)\,dx = 1 + \int_0^x 5(1 + 5x)\,dx = 1 + 5x + \frac{25}{2}x^2

    y_3 = y_0 + \int_0^x f(x, y_2)\,dx = 1 + \int_0^x 5\left(1 + 5x + \frac{25}{2}x^2\right) dx = 1 + 5x + \frac{25}{2}x^2 + \frac{125}{6}x^3

    y_4 = y_0 + \int_0^x f(x, y_3)\,dx = 1 + \int_0^x 5\left(1 + 5x + \frac{25}{2}x^2 + \frac{125}{6}x^3\right) dx
        = 1 + 5x + \frac{25}{2}x^2 + \frac{125}{6}x^3 + \frac{625}{24}x^4

Therefore, y_4(0.1) = 1.64844, y_4(0.2) = 2.70833, y_4(0.3) = 4.39844. Note that the iterates
are exactly the partial sums of the series for the exact solution e^{5x}.
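Because each Picard iterate here is a polynomial, the iteration (111) can be carried out exactly on coefficient lists: for f(x, y) = 5y, each pass multiplies the coefficients by 5 and integrates term by term. A Python sketch (the function names are mine, not from the notes):

```python
def picard_5y(y0, iterations):
    """Picard iterates for y' = 5y, y(0) = y0, as power-series coefficient lists."""
    coeffs = [y0]
    for _ in range(iterations):
        # y_n = y0 + integral of 5*y_{n-1}: each term c*x^k -> 5c/(k+1) * x^(k+1)
        coeffs = [y0] + [5.0 * c / (k + 1) for k, c in enumerate(coeffs)]
    return coeffs

def evaluate(coeffs, x):
    """Evaluate a polynomial from its coefficient list [c0, c1, ...]."""
    return sum(c * x**k for k, c in enumerate(coeffs))

y4 = picard_5y(1.0, 4)                               # [1, 5, 25/2, 125/6, 625/24]
values = [evaluate(y4, x) for x in (0.1, 0.2, 0.3)]  # y_4 at the requested points
```

The iterates are the partial sums of e^{5x}, so `values` comes out ≈ [1.64844, 2.70833, 4.39844].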

Exercise: Obtain the solution of the IVP y' + y = e^{3x}, y(0) = 1, using Picard's method.
Compare your results with the exact solution y = \frac{1}{4}e^{3x} + \frac{3}{4}e^{-x} at x = 0.1, x = 0.3 and
x = 0.4.

8.3 Euler’s method

To solve problem (108) by Euler’s method, we employ the formula

    y_{n+1} = y_n + h f(x_n, y_n)    (112)

Example 1: Solve \frac{dy}{dx} - 5y = 0, given y(0) = 1. Estimate y(0.1) and y(0.2), and compare
your results with the exact solution y = e^{5x}.
Solution: x_0 = 0, y_0 = 1, f(x, y) = 5y ⇒ f(x_n, y_n) = 5y_n, h = 0.1

    y_{n+1} = y_n + h y_n' = y_n + h f(x_n, y_n)

    y_1 = y(0.1) = y_0 + h f(x_0, y_0) = y_0 + 5h y_0 = 1 + 5(0.1)(1) = 1.5

    y_2 = y(0.2) = y_1 + h f(x_1, y_1) = y_1 + 5h y_1 = 1.5 + 5(0.1)(1.5) = 2.25

    x     Approximate   Exact     Error
    0.1   1.5           1.64872   0.14872
    0.2   2.25          2.71828   0.46828
Example 2: Use Euler's method with h = 0.05 to estimate y(0.2) when y' = -\frac{y^2}{1 + x}, y(0) = 1.
Solution: In this case, we have y_n' = f(x_n, y_n) = -\frac{y_n^2}{1 + x_n}, so

    y_{n+1} = y_n - \frac{h y_n^2}{1 + x_n}    (113)

When n = 0, y_1 = 0.95; when n = 1, y_2 = 0.90702; when n = 2, y_3 = 0.86963; and when
n = 3, y_4 = y(0.2) = 0.83675.
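Both Euler examples can be checked with a direct implementation of Eq. (112) (a Python sketch; the function name is mine, not from the notes):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method, Eq. (112): y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
    return y

ex1 = euler(lambda x, y: 5 * y, 0.0, 1.0, 0.1, 2)              # Example 1: y(0.2)
ex2 = euler(lambda x, y: -y * y / (1 + x), 0.0, 1.0, 0.05, 4)  # Example 2: y(0.2)
```

This reproduces ex1 = 2.25 and ex2 ≈ 0.83675, matching the hand computations; the large error in Example 1 reflects Euler's first-order accuracy.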

