
PHY 411: COMPUTATIONAL PHYSICS

CHAPTER ONE
ERRORS

An error may be defined as the difference between an exact and a computed value.

Suppose the exact value of the solution of a computational problem is 1.023, but when a
computer or calculator is used the solution is obtained as 1.022823; the error in this
calculation is then 1.023 - 1.022823 = 0.000177.

Types of Error

Round-off error:- This is the error caused by rounding off a quantity because only a limited
number of digits can be retained.

Truncation error:- Truncation means cutting off the remaining digits, i.e. no rounding off. For
instance, 1.8123459 may be truncated to 1.812345 because of a preset allowable number of digits.

Absolute Error:- The absolute value of an error is called the absolute error; that is,

Absolute error = |exact value - computed value|

Relative error:- Relative error is the ratio of the absolute error to the absolute value of the exact
value; that is,

Relative error = |error| / |exact value|

Percentage error:-This is equivalent to Relative error x 100

Inherent error:- In a numerical method calculation, we may have some basic mathematical
assumptions for simplifying a problem. Due to these assumptions, some errors are possible at the
beginning of the process itself. This error is known as inherent error.

Accumulated error:- Consider the following procedure:

Y(n+1) = 100 Yn    (n = 0, 1, 2, .....)

Therefore,

Y1 = 100 Y0

Y2 = 100 Y1

Y3 = 100 Y2    etc.

Let the exact value of Y0 = 9.98


Suppose we start with Y0 = 10

Here, there is an inherent error of 0.02

Therefore,

Y1 = 100 Y0 = 100 x 10 = 1000

Y2 = 100 Y1 = 100 x 1000 = 100,000

Y3 = 100 Y2 = 100 x 100,000 = 10,000,000

The table below shows the exact and computed values,

Variable Exact Value Computed value Error


Y0 9.98 10 0.02
Y1 998 1000 2
Y2 99800 100,000 200
Y3 9980000 10,000,000 20,000

Notice above how the error quantities accumulate. A small error of 0.02 at Y0 leads to an error
of 20,000 in Y3. So, in a sequence of computations, an error in one value may affect the
computation of the next value, and the error gets accumulated. This is called accumulated error.

Relative Accumulated Error

This is the ratio of the accumulated error to the exact value of that iteration. In the above
example, the relative accumulated error is shown below.

Variable   Exact Value   Computed value   Accumulated Error   Relative Accumulated Error
Y0         9.98          10               0.02                0.02/9.98 = 0.002004
Y1         998           1000             2                   2/998 = 0.002004
Y2         99800         100,000          200                 200/99800 = 0.002004
Y3         9980000       10,000,000       20,000              20,000/9980000 = 0.002004

Notice that the relative accumulated error is the same for all the values.
CHAPTER TWO
ROOT FINDING IN ONE DIMENSION

This involves searching for solutions to equations of the form f(x) = 0.

The various methods include:

1. Bisection Method
This is the simplest method of finding a root of an equation. Here we need two initial
guesses xa and xb which bracket the root.

Let fa = f(xa) and fb = f(xb) such that fa fb ≤ 0 (see fig 1)

Figure 1: Graphical representation of the bisection method showing two initial guesses (xa and
xb bracketing the root).
Clearly, if fa fb = 0 then one or both of xa and xb must be a root of f(x) = 0

The basic algorithm for the bisection method relies on repeated application of:

Let xc = (xa + xb)/2

If fc = f(xc) = 0 then x = xc is an exact solution,


Else if fa fc < 0 then the root lies in the interval (xa, xc)
Else the root lies in the interval (xc, xb)
By replacing the interval (xa, xb) with either (xa, xc) or (xc, xb) (whichever brackets the
root), the error in our estimate of the solution to f(x) = 0 is, on average, halved.
We repeat this interval halving until either the exact root has been found or the interval is
smaller than some specified tolerance.
Hence, root bisection is a simple but slowly convergent method for finding a solution
of f(x) = 0, assuming the function f is continuous. It is based on the intermediate value
theorem, which states that if a continuous function f has opposite signs at some x = a and
x = b (> a), that is, either f(a) < 0, f(b) > 0 or f(a) > 0, f(b) < 0, then f must be 0
somewhere on [a, b].
We thus obtain a solution by repeated bisection of the interval and in each iteration, we
pick that half which also satisfies the sign condition.

Example:
Given that f(x) = x - 2.44, solve, using the method of root bisection, the equation
f(x) = 0.

Solution:
Given that f(x) = x - 2.44 = 0
Therefore,
x - 2.44 = 0
The direct method gives x = 2.44
But by root bisection:
Let the trial value of x = -1

x               F(x) = x - 2.44
Trial value -1  -3.44
0               -2.44
1               -1.44
2               -0.44
3               +0.56

It is clear from the table that the solution lies between x = 2 and x = 3.

Now choosing x = 2.5, we obtain F(x) = +0.06; we thus discard x = 3 since the root must lie
between 2 and 2.5. Bisecting 2 and 2.5, we have x = 2.25 with F(x) = -0.19.
Obviously now, the answer must lie between 2.25 and 2.5.
The bisection thus continues until we obtain F(x) very close to zero, with the two values
of x having opposite signs.

x        F(x) = x - 2.44
2.25     -0.19
2.375    -0.065
2.4375   -0.0025
2.50     +0.06

When the above is implemented in a computer program, it may be instructed to stop at,


say, |F(x)| ≤ 10^-6, since the computer may not get exactly to zero.

Full Algorithm

1. Define F(x)
2. Read x1, x2, values of x such that F(x1)F(x2) < 0
3. Read convergence term, s = 10^-6, say.
4. Calculate F(y), y = (x1 + x2) / 2
5. If abs(x2 - x1) ≤ s, then y is a root. Go to 9
6. If abs(x2 - x1) > s, go to 7
7. If F(x1)F(y) ≤ 0, (x1, y) contains a root; set x2 = y and return to step 4
8. If not, (y, x2) contains a root; set x1 = y and return to step 4
9. Write the root y
Flowchart for the Root Bisection Method

Start → Define F(x) → Input interval limits x1, x2 and tolerance s → y = (x1 + x2)/2 →
test |x2 - x1| against s (if ≤ s, output the root y and stop) → test F(x1)F(y) against 0
(if ≤ 0 set x2 = y, otherwise set x1 = y) → return to the bisection step.
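As an illustration of the algorithm and flowchart, here is a minimal sketch in free-form Fortran (the fixed-form FORTRAN 77 used elsewhere in these notes would serve equally well). It is hard-coded with F(x) = x - 2.44 and the bracket (2, 3) of the worked example; the program name and tolerance are illustrative assumptions.

program bisect
  implicit none
  real :: x1, x2, y, s
  x1 = 2.0
  x2 = 3.0
  s  = 1.0e-6                        ! convergence tolerance
  y  = (x1 + x2) / 2.0
  do while (abs(x2 - x1) > s)
     if (f(x1) * f(y) <= 0.0) then   ! root bracketed in (x1, y)
        x2 = y
     else                            ! root bracketed in (y, x2)
        x1 = y
     end if
     y = (x1 + x2) / 2.0
  end do
  print *, 'Root = ', y
contains
  real function f(x)
    real, intent(in) :: x
    f = x - 2.44
  end function f
end program bisect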
2. The Regula Falsi (False Position) Method

This method is similar to the bisection method in the sense that it requires two initial
guesses to bracket the root. However, instead of simply dividing the region in two, a
linear interpolation is used to obtain a new point which is (hopefully, but not necessarily)
closer to the root than the equivalent estimate for the bisection method. A graphical
interpretation of this method is shown in figure 2.

Figure 2: Root finding by the linear interpolation (regula falsi) method. The two initial guesses xa and xb
must bracket the root.
The basic algorithm for the method is:
Let xc = xb - fb (xb - xa)/(fb - fa) = (xa fb - xb fa)/(fb - fa)

If fc = f(xc) = 0 then x = xc is an exact solution.


Else if fa fc < 0 then the root lies in the interval (xa, xc)
Else the root lies in the interval (xc, xb)
Because the solution remains bracketed at each step, convergence is guaranteed, as was the
case for the bisection method. The method is first order and is exact for linear f.
Note also that the method should not be used near a solution.

Example
Find all real solutions of the equation x^4 = 2 by the method of false position.
Solution
Let xa = 1 and xb = 2
Now rewriting the equation in the form: x^4 - 2 = 0
Then fa = 1 - 2 = -1
fb = 16 - 2 = 14
Therefore,

xc = (xa fb - xb fa)/(fb - fa) = ((1)(14) - (2)(-1))/(14 - (-1)) = 16/15 = 1.07

fc = (1.07)^4 - 2 = -0.689
Now, from the algorithm, fc ≠ 0, hence xc ≠ x, the exact solution.
Again, fa fc = (-1)(-0.689) = 0.689 > 0
Therefore, the root lies in the interval (xc, xb)
That is, (1.07, 2); by symmetry the second real root lies in (-2, -1.07), since x^4 = 2 has the two real solutions ±2^(1/4).
3. The Newton-Raphson Method
This is another iteration method for solving equations of the form F(x) = 0, where f is
assumed to have a continuous derivative f′. The method is commonly used because of its
simplicity and great speed. The idea is that we approximate the graph of f by suitable
tangents. Using an approximate value x0 obtained from the graph of f, we let x1 be the
point of intersection of the x-axis and the tangent to the curve of f at x0.
Figure 4: Illustration of the tangents to the curve in Newton-Raphson method

Then,

tan β = f′(x0) = f(x0)/(x0 - x1),

Hence,

x1 = x0 - f(x0)/f′(x0)

In the second step, we compute:

x2 = x1 - f(x1)/f′(x1)

And generally,

xk+1 = xk - f(xk)/f′(xk)
Algorithm for the Newton-Raphson method
1. Define f(x)
2. Define f′(x)
3. Read x0, s (tolerance)
4. k = 0
5. xk+1 = xk - f(xk)/f′(xk)

6. Print k+1, xk+1, f(xk+1)


7. If |xk+1 - xk| < s then go to 10
8. k = k + 1
9. Go to step 5
10. Print 'The root is -----', xk+1
11. End

Consider the equation: x^3 + 2x^2 + 2.2x + 0.4 = 0

Here, f(x) = x^3 + 2x^2 + 2.2x + 0.4

f′(x) = 3x^2 + 4x + 2.2

Let the initial guess, x0 = -0.5

Let us now write a FORTRAN program for solving the equation, using the Newton-
Raphson method.
Raphson’s method.
C     NEWTON RAPHSON METHOD
      DIMENSION X(0:100)
C     STATEMENT FUNCTIONS FOR F(X) AND ITS DERIVATIVE
      F(T)  = T**3 + 2.*T*T + 2.2*T + 0.4
      F1(T) = 3.*T*T + 4.*T + 2.2
      WRITE (*,*) 'TYPE THE INITIAL GUESS'
      READ (*,5) X(0)
    5 FORMAT (F10.4)
      WRITE (*,*) 'TYPE THE TOLERANCE VALUE'
      READ (*,6) S
    6 FORMAT (F10.6)
      WRITE (*,*) 'ITERATION      X           F(X)'
      K = 0
   50 X(K+1) = X(K) - F(X(K)) / F1(X(K))
      WRITE (*,10) K+1, X(K+1), F(X(K+1))
   10 FORMAT (1X, I6, 5X, F10.4, 5X, F10.4)
      IF (ABS(X(K+1) - X(K)) .LE. S) GOTO 100
      K = K + 1
      GOTO 50
  100 WRITE (*,15) X(K+1)
   15 FORMAT (1X, 'THE FINAL ROOT IS ', F10.4)
      STOP
      END
Assignment

If S = 0.00005, manually find the root of the above example after 5 iterations.

CHOOSING THE INITIAL GUESS, X0

In the Newton-Raphson method, we have to start with an initial guess, x0. How do we
choose x0?

If f(a) and f(b) are of opposite signs, then there is at least one value of x between a and b
such that f(x) = 0. We can compute f(0), f(1), f(2), ----------------- . If there is a
number k such that f(k) and f(k+1) are of opposite signs, then there is a root between k and
k+1, so we can choose the initial guess x0 = k or x0 = k+1.

Example:

Consider the equation: x^3 - 7x^2 + x + 1 = 0

F(0) = 1 (= +ve)

F(1) = -4 (= -ve)

Therefore, there is a root between 0 and 1, hence our initial guess x0 may be taken as 0 or 1.

Example:

Evaluate a real root of x^3 + 12.1x^2 + 13.1x + 22.2 = 0, using the Newton-Raphson
method, correct to three decimal places.

Solution:

F(x) = x^3 + 12.1x^2 + 13.1x + 22.2

F(0) = 22.2 (positive)

Now, since all the coefficients are positive, we note that f(1), f(2), ------- are all positive. So
the equation has no positive root.

We thus search on the negative side:


F(-1) = 20.2 (positive)

F(-2) = +ve = f(-3) ------ f(-11). But f(-12) is negative, so we can choose x0 = -11.

Iteration 1

F(x) = x^3 + 12.1x^2 + 13.1x + 22.2

F′(x) = 3x^2 + 24.2x + 13.1

Now, with x0 = -11

F(x0) = F(-11) = 11.2

F′(x0) = F′(-11) = 3(-11)^2 + 24.2(-11) + 13.1 = 109.9

Therefore,

x1 = x0 - F(x0)/F′(x0) = -11 - 11.2/109.9 = -11.1019

Iteration 2

x1 = -11.1019

F(x1) = F(-11.1019) = -0.2169

F′(x1) = F′(-11.1019) = 114.1906

Therefore,

x2 = x1 - F(x1)/F′(x1) = -11.1019 - (-0.2169)/114.1906 = -11.100001

Iteration 3

x2 = -11.100001

F(x2) = F(-11.100001) = -0.0001131

F′(x2) = F′(-11.100001) = 114.1101

Therefore,

x3 = x2 - F(x2)/F′(x2) = -11.100001 - (-0.0001131)/114.1101 = -11.1000000

Now, correct to three decimal places, x2 = x3, and so the real root is x = -11.100.

Example 2

Set up a Newton-Raphson iteration for computing the square root x of a given positive
number c and apply it to c = 2.

Solution

We have x = √c, hence

F(x) = x^2 - c = 0
F′(x) = 2x

The Newton-Raphson formula becomes:

xk+1 = xk - F(xk)/F′(xk) = xk - (xk^2 - c)/(2xk)

= (2xk^2 - xk^2 + c)/(2xk) = (xk^2 + c)/(2xk)

= (1/2)(xk + c/xk)

Therefore,

For c = 2, choosing x0 = 1, we obtain:

x1 = 1.500000, x2 = 1.416667, x3 = 1.414216, x4 = 1.414214, .........

Now, x4 is exact to 6 decimal places.
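The iteration is short enough to program directly. Below is a minimal free-form Fortran sketch (program and variable names are illustrative assumptions), applied to c = 2 with x0 = 1; it reproduces the values listed above.

program newton_sqrt
  implicit none
  real(8) :: c, x, xold
  integer :: k
  c = 2.0d0
  x = 1.0d0                        ! initial guess x0
  do k = 1, 10
     xold = x
     x = 0.5d0 * (x + c / x)       ! x(k+1) = (x(k) + c/x(k)) / 2
     print '(a,i2,a,f10.6)', ' x', k, ' = ', x
     if (abs(x - xold) < 1.0d-6) exit
  end do
end program newton_sqrt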


Now, what happens if f′(x) = 0?

Recall, if f(x) = 0 and f′(x) = 0 simultaneously, we have repeated roots (multiple roots).
The sign in this case will not change; the method hence breaks down. The method also fails
for a complex solution (e.g. x^2 + 1 = 0).

4. The Secant Method


We obtain the Secant method from the Newton-Raphson method by replacing the
derivative f′(x) with the difference quotient:

f′(xk) ≈ (f(xk) - f(xk-1))/(xk - xk-1)

Then instead of using xk+1 = xk - f(xk)/f′(xk) (as in Newton-Raphson's), we have


xk+1 = xk - f(xk) (xk - xk-1)/(f(xk) - f(xk-1))

Geometrically, we intersect the x-axis at xk+1 with the secant of f(x) passing through Pk-1 and Pk
in the figure below.


We thus need two starting values x0 and x1.
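A minimal free-form Fortran sketch of the secant iteration is given below; it is illustrative only, applied to an assumed test case f(x) = x^2 - 2 with starting values x0 = 1 and x1 = 2 (these choices are not from the notes).

program secant
  implicit none
  real(8) :: x0, x1, x2, f0, f1
  integer :: k
  x0 = 1.0d0
  x1 = 2.0d0
  do k = 1, 20
     f0 = f(x0)
     f1 = f(x1)
     x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   ! secant update through (x0,f0), (x1,f1)
     if (abs(x2 - x1) < 1.0d-8) exit
     x0 = x1                                ! shift the two most recent points
     x1 = x2
  end do
  print *, 'Root = ', x2
contains
  real(8) function f(x)
    real(8), intent(in) :: x
    f = x*x - 2.0d0
  end function f
end program secant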


CHAPTER THREE

COMPUTATION OF SOME STANDARD FUNCTIONS


Consider the sine x series:

sin x = x - x^3/3! + x^5/5! - x^7/7! + ----------

For a given value of x, sin x can be evaluated by summing up the terms on the right hand
side. Similarly, cos x, e^x, etc. can also be found from the following series:

cos x = 1 - x^2/2! + x^4/4! - x^6/6! + ----------

e^x = 1 + x + x^2/2! + x^3/3! + --------

Example:

Evaluate sin (0.25) correct to five decimal places.

Solution:

Given that x = 0.25

x^3/3! = (0.25)^3/6 = 0.015625/6 = 0.0026042

x^5/5! = (0.25)^5/120 = 0.0009766/120 = 0.0000081

x^7/7! = (0.25)^7/5040 = 0.0000610/5040 = 0.0000000

(correct to 6 D)
Therefore,

sin x = x - x^3/3! + x^5/5! = 0.25 - 0.0026042 + 0.0000081


= 0.2474039 = 0.24740 (correct to 5D), x in radians
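The partial sums are easy to accumulate term by term, since each term of the series is obtained from the previous one by multiplying by -x^2/((2k)(2k+1)). A minimal free-form Fortran sketch (program and variable names are illustrative assumptions):

program sin_series
  implicit none
  real(8) :: x, term, s
  integer :: k
  x    = 0.25d0
  term = x                                   ! first term of the series
  s    = x
  do k = 1, 10
     term = -term * x*x / dble(2*k*(2*k+1))  ! next term of the alternating series
     s    = s + term
     if (abs(term) < 1.0d-8) exit
  end do
  print '(a,f10.6)', ' sin(0.25) ~ ', s      ! ~ 0.247404
end program sin_series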


1. Taylor’s Series Expansion
Let f(x) be a function. We want to write f(x) as a power series about a point x0. That is,
we want to write f(x) as:
f(x) = c0 + c1(x - x0) + c2(x - x0)^2 + ……… ----------------------- (1)
Where c0, c1, c2, ----------, are constants.
We are interested in finding the constants c0, c1, ----------, given f(x) and x0.
Therefore, from equation (1),
c0 = f(x0)
If we differentiate equation (1), we obtain:
f′(x) = c1 + 2c2(x – x0) + 3c3(x – x0)^2 + − − − − − − − (2)

Therefore,

f′(x0) = c1    or    c1 = f′(x0)

Differentiating equation (2), we obtain:

f′′(x) = 2c2 + 3(2)(c3)(x - x0) + ……

Hence,

f′′(x0) = 2c2    or    c2 = f′′(x0)/2!

Proceeding like this, we shall get:

c3 = f′′′(x0)/3!,    c4 = f⁗(x0)/4!

In general,

cn = f^(n)(x0)/n!

Equation (1) thus becomes:

f(x) = f(x0) + (f′(x0)/1!)(x - x0) + (f′′(x0)/2!)(x - x0)^2 + ……… --------- (3)
This is called the Taylor’s series expansion about x0.

Taylor’s series can be expressed in various forms. Putting x = x0 + h in equation (3), we get
another form of Taylor’s series:

f(x0 + h) = f(x0) + (f′(x0)/1!) h + (f′′(x0)/2!) h^2 + ……… ---------- (4)

Some authors use x in the place of h in equation (4), so we get yet another form of Taylor’s
series:

f(x0 + x) = f(x0) + (f′(x0)/1!) x + (f′′(x0)/2!) x^2 + ……… --------- (5)

2. The Maclaurin’s Series

The Taylor’s series of equation (5) about x0 = 0 is called the Maclaurin’s series of f(x), that is,

f(x) = f(0) + (f′(0)/1!) x + (f′′(0)/2!) x^2 + ……… ----------------- (6)

3. Binomial Series

Consider,

f(x) = (1 + x)^n

f′(x) = n(1 + x)^(n-1)

f′′(x) = n(n - 1)(1 + x)^(n-2)

f′′′(x) = n(n - 1)(n - 2)(1 + x)^(n-3)

Now applying the above to the Maclaurin’s series, we obtain, noting that:

f(0) = 1

f′(0) = n

f′′(0) = n(n - 1)

f′′′(0) = n(n - 1)(n - 2)
So we obtain:

(1 + x)^n = 1 + nx + (n(n - 1)/2!) x^2 + (n(n - 1)(n - 2)/3!) x^3 + ……

This is the Binomial series.

Example

Derive the Maclaurin’s series for e^(-x) and hence evaluate e^(-0.2) correct to two decimal places.

Solution:

f(x) = e^(-x),    f(0) = 1

f′(x) = -e^(-x),    f′(0) = -1

f′′(x) = e^(-x),    f′′(0) = 1

f′′′(x) = -e^(-x),    f′′′(0) = -1

Now, by Maclaurin’s series,

f(x) = f(0) + (f′(0)/1!) x + (f′′(0)/2!) x^2 + ………

That is,

e^(-x) = 1 - x/1! + x^2/2! - x^3/3! + ……… ..

Therefore,

e^(-0.2) = 1 - 0.2 + (0.2)^2/2 - (0.2)^3/6 + (0.2)^4/24 - ⋯ ..

= 0.8187 = 0.82 (to 2D).


CHAPTER FOUR

INTERPOLATION

Suppose F(x) is a function whose values at certain points x0, x1, ……, xn are known. The values are
f(x0), f(x1), ……, f(xn). Consider a point x different from x0, x1, ……, xn. F(x) is not known.

We can find an approximate value of F(x) from the known values. This method of finding F(x)
from these known values is called interpolation. We say that we interpolate F(x) from f(x0), f(x1),
……, f(xn).

Linear Interpolation

Let x0, x1 be two points and f0, f1 be the function values at these two points respectively. Let x be
a point between x0 and x1. We are interested in interpolating F(x) from the values F(x0) and F(x1).

F0        F(x) = ?        F1

x0        x               x1

Now consider the Taylor’s series:

f(x1) = f(x0) + (f′(x0)/1!)(x1 - x0) + ……….

Considering only the first two terms, we have:

f1 = f0 + f′(x0)(x1 - x0)

Therefore,

f′(x0) = (f1 - f0)/(x1 - x0)

The Taylor’s series at x gives

f(x) = f(x0) + (f′(x0)/1!)(x - x0) + ………

Also considering only the first two terms, we have:

f(x) = f(x0) + f′(x0)(x - x0) = f0 + ((f1 - f0)/(x1 - x0))(x - x0)

Now let (x - x0)/(x1 - x0) be denoted by p

We thus get: f(x) = f0 + p(f1 - f0) = f0 + pf1 - pf0 = f0 - pf0 + pf1

Therefore,

f(x) = (1 - p) f0 + p f1

This is called the linear interpolation formula. Since x is a point between x0 and x1, p is a non-
negative fractional value, i.e. 0 ≤ p ≤ 1.

Example

Consider the following table:

x      f(x)
7      15
19     35

Find the value of F(10).

Solution:

x0 = 7,    x1 = 19

f0 = 15,    f1 = 35

F0 = 15      F(x) = ?      F1 = 35

x0 = 7       x = 10        x1 = 19

p = (x - x0)/(x1 - x0) = (10 - 7)/(19 - 7) = 3/12

= 0.25

Therefore,

1 - p = 1 - 0.25 = 0.75

Hence, F(x) = (1 - p) f0 + p f1

= (0.75 x 15) + (0.25 x 35) = 11.25 + 8.75

F(10) = 20

Lagrange Interpolation

Linear Lagrange interpolation is interpolation by the straight line through (x0, f0), (x1, f1).

Thus, by that idea, the linear Lagrange polynomial P1 is the sum P1 = L0 f0 + L1 f1, with L0 the
linear polynomial that is 1 at x0 and 0 at x1.

Similarly, L1 is 0 at x0 and 1 at x1.

Therefore,

L0(x) = (x - x1)/(x0 - x1),    L1(x) = (x - x0)/(x1 - x0)

This gives the linear Lagrange polynomial:

P1(x) = L0 f0 + L1 f1 = ((x - x1)/(x0 - x1)) f0 + ((x - x0)/(x1 - x0)) f1

Example 1

Compute ln 5.3 from ln 5.0 = 1.6094, ln 5.7 = 1.7405 by linear Lagrange interpolation and
determine the error from ln 5.3 = 1.6677.

Solution:

x0 = 5.0

x1 = 5.7

f0 = ln 5.0

f1 = ln 5.7

Therefore,

L0(5.3) = (5.3 - 5.7)/(5.0 - 5.7) = -0.4/-0.7 = 0.57

L1(5.3) = (5.3 - 5.0)/(5.7 - 5.0) = 0.3/0.7 = 0.43

Hence,

ln 5.3 ≈ L0(5.3) f0 + L1(5.3) f1

= 0.57 x 1.6094 + 0.43 x 1.7405 = 1.6658

The error is 1.6677 – 1.6658 = 0.0019

The Quadratic Lagrange Interpolation


This is interpolation of given (x0, f0), (x1, f1), (x2, f2) by a second degree polynomial P2(x), which
by Lagrange’s idea is:

P2(x) = L0(x) f0 + L1(x) f1 + L2(x) f2

With,

L0(x0) = 1,    L1(x1) = 1,    L2(x2) = 1    and

L0(x1) = L0(x2) = 0, e.t.c., we therefore claim that:

L0(x) = (x - x1)(x - x2) / ((x0 - x1)(x0 - x2))

L1(x) = (x - x0)(x - x2) / ((x1 - x0)(x1 - x2))

L2(x) = (x - x0)(x - x1) / ((x2 - x0)(x2 - x1))

The above relations are valid since the numerator makes Lk(xj) = 0 for j ≠ k, and the
denominator makes Lk(xk) = 1 because it equals the numerator at x = xk.

Example 2

Compute ln 5.3 by using the quadratic Lagrange interpolation, using the data of example 1
and ln 7.2 = 1.9741. Compute the error and compare the accuracy with the linear Lagrange
case.

Solution:

L0(5.3) = ((5.3 - 5.7)(5.3 - 7.2)) / ((5.0 - 5.7)(5.0 - 7.2))

= ((-0.4)(-1.9)) / ((-0.7)(-2.2)) = 0.76/1.54 = 0.4935

L1(5.3) = ((5.3 - 5.0)(5.3 - 7.2)) / ((5.7 - 5.0)(5.7 - 7.2))

= ((0.3)(-1.9)) / ((0.7)(-1.5)) = -0.57/-1.05 = 0.5429

L2(5.3) = ((5.3 - 5.0)(5.3 - 5.7)) / ((7.2 - 5.0)(7.2 - 5.7))

= ((0.3)(-0.4)) / ((2.2)(1.5)) = -0.12/3.3 = -0.03636

Therefore,

ln 5.3 ≈ L0(5.3) f0 + L1(5.3) f1 + L2(5.3) f2

= 0.4935 x 1.6094 + 0.5429 x 1.7405 + (-0.03636) x 1.9741

= 0.7942 + 0.9449 – 0.0718 = 1.6673 (4D)

The error = 1.6677 - 1.6673 = 0.0004

Generally, the Lagrange interpolation polynomial may be written as:

f(x) ≅ pn(x) = Σ (k = 0 to n) Lk(x) fk

Where,

Lk(xj) = 1 if j = k and 0 otherwise.
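The sum-and-product structure of this general formula is shown in the minimal free-form Fortran sketch below (program and variable names are illustrative assumptions), evaluated on the three data points of Example 2.

program lagrange
  implicit none
  integer, parameter :: n = 3
  real(8) :: xk(n), fk(n), x, p, l
  integer :: k, j
  xk = (/ 5.0d0, 5.7d0, 7.2d0 /)
  fk = (/ 1.6094d0, 1.7405d0, 1.9741d0 /)
  x  = 5.3d0
  p  = 0.0d0
  do k = 1, n
     l = 1.0d0
     do j = 1, n
        if (j /= k) l = l * (x - xk(j)) / (xk(k) - xk(j))   ! build L_k(x)
     end do
     p = p + l * fk(k)                                       ! accumulate L_k(x) * f_k
  end do
  print '(a,f8.4)', ' p(5.3) = ', p       ! ~ 1.6673
end program lagrange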
CHAPTER FIVE

INTRODUCTION TO FINITE DIFFERENCES

Consider a function: f(x) = x^3 - 5x^2 + 6

The table below illustrates various finite difference parameters.

x     f(x)     Δf     Δ²f     Δ³f
0      6       -4     -4       6
1      2       -8      2       6
2     -6       -6      8       6
3    -12        2     14       6
4    -10       16     20
5      6       36
6     42

Notice that a constant (6) occurs in the Δ³f column (the 3rd forward difference).

It can be shown that:

Δ³f ≅ h³ f′′′(x)

Therefore, with h = 1 here,

f′′′(x) = 6

f′′(x) = 6x + 2a

f′(x) = 3x^2 + 2ax + b

Hence, f(x) = x^3 + ax^2 + bx + c
We now determine the constants:

At x = 0, f(x) = 6 = c

Therefore, f(x) = x^3 + ax^2 + bx + 6

At x = 1, f(x) = 2 (from the table)

Therefore, 2 = 1 + a + b + 6, --------------------------- (1)

At x = 2, f(x) = -6

Hence,

-6 = 8 + a(4) + 2b + 6 --------------------------------- (2)

We then solve equations (1) and (2) simultaneously for the other constants, which gives
a = -5 and b = 0, so that f(x) = x^3 - 5x^2 + 6, as expected.

The first forward difference is generally taken as an approximation for the first derivative, i.e.
Δf ≅ h f′(x) (but it will be exact if f is linear). Also, Δ²f ≅ h² f′′(x) (but exact if f is quadratic).

Similarly, Δ³f ≅ h³ f′′′(x) (but exact if f is cubic).

Note also that:

f(x) will be linear if the constant terms occur in the Δf column, quadratic if they occur in the Δ²f
column and cubic if in the Δ³f column.

Now,

Δf ≅ h f′(x0), with f′(x0) = (f1 - f0)/(x1 - x0)

Let x1 - x0 = h = interval,

Then,

f′(x0) = (f1 - f0)/h

Now, from the Taylor’s series expansion, let us on this occasion consider the expansion about a
point xi:

fi+1 = fi + h fi^(1) + (h^2/2!) fi^(2) + (h^3/3!) fi^(3) + ……… ------------------------------ (1)

In this and subsequently, we denote the nth derivative evaluated at xi by fi^(n).

Hence,

fi-1 = fi - h fi^(1) + (h^2/2!) fi^(2) - (h^3/3!) fi^(3) + ……… ------------------------------ (2)

From equations one and two, three different expressions that approximate fi^(1) can be derived.
The first is from equation (1), considering the first two terms:

fi+1 - fi = h fi^(1) + (h^2/2!) fi^(2) + ...

Therefore, fi^(1) = (fi+1 - fi)/h - (h/2!) fi^(2) - ...

Hence, fi^(1) ≅ Δfi/h = (fi+1 - fi)/h ------------------------------------------ (3)

The quantity Δfi/h is known as the forward difference and it is clearly a poor approximation,
since it is in error by approximately (h/2) fi^(2).

The second of the expressions is from equation (2), considering the first two terms:

fi - fi-1 = h fi^(1) - (h^2/2!) fi^(2) + ...

Therefore, fi^(1) = (fi - fi-1)/h + (h/2!) fi^(2) + ...

Hence, fi^(1) ≅ ∇fi/h = (fi - fi-1)/h --------------------------------------------- (4)

The quantity ∇fi/h is called the backward difference. The sign of the error is reversed,
compared to that of the forward difference.
The third expression is obtained by subtracting equation (2) from equation (1); we then have:

fi+1 - fi-1 = 2h fi^(1) + 2(h^3/3!) fi^(3) + ...

Therefore,

(fi+1 - fi-1)/(2h) = fi^(1) + (h^2/3!) fi^(3) + ...

So,

fi^(1) ≅ (fi+1 - fi-1)/(2h) ------------------------------------------------- (5)

The quantity (fi+1 - fi-1)/(2h) is known as the central difference approximation to fi^(1) and can be seen
from equation (5) to be in error by approximately (h^2/6) fi^(3). Note that this is a better approximation
compared to either the forward or backward difference.

By a similar procedure (adding equations (1) and (2)), a central difference approximation to fi^(2) can be obtained:

fi^(2) ≅ (fi+1 - 2fi + fi-1)/h^2 -------------------------------------------- (6)

The error in this approximation, also known as the second difference of f, is about (h^2/12) fi^(4).

It is obvious that the second difference approximation is far better than the first difference.
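As a quick numerical check of these error estimates, the minimal free-form Fortran sketch below compares the forward and central difference approximations to a derivative; it assumes the test function sin x at x = 1 with h = 0.1, which is an illustrative choice, not from the notes.

program diff_check
  implicit none
  real(8) :: h, x, fwd, cen, exact
  x = 1.0d0
  h = 0.1d0
  fwd   = (sin(x + h) - sin(x)) / h                 ! forward difference, error ~ (h/2) f''
  cen   = (sin(x + h) - sin(x - h)) / (2.0d0 * h)   ! central difference, error ~ (h^2/6) f'''
  exact = cos(x)
  print '(a,f10.6,a,es10.2)', ' forward = ', fwd, '  error = ', fwd - exact
  print '(a,f10.6,a,es10.2)', ' central = ', cen, '  error = ', cen - exact
end program diff_check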
Example:

The following is copied from the tabulation of a second degree polynomial f(x) at values of x
from 1 to 12 inclusive:

2, 2, ?, 8, 14, 22, 32, 46, ?, 74, 92, 112

The entries marked ? were illegible and, in addition, one error was made in transcription.
Complete and correct the table.

Solution:

Because the polynomial is of second degree, the second differences Δ²f (which are proportional
to f′′) should be constant, and clearly this constant should be 2. Hence every ? in the Δ²f column
is 2. The 7th value in the Δf column should then be 12 and not 14, and since all the values in the
Δ²f column are the constant 2, the first two ? in the Δf column are 2 and 4 respectively and the
last two are 14 and 16 respectively. Working this backward to the f(x) column, the first ? = 4,
while the 8th value in this column should be 44 and not 46. Finally, the last ? in the f(x)
column = 58.

S/N     f(x)     Δf     Δ²f
1        2        0      2
2        2        2      2
3        4        4      2
4        8        6      2
5       14        8      2
6       22       10      2
7       32       12      2
8       44       14      2
9       58       16      2
10      74       18      2
11      92       20
12     112

The entries should therefore read:
2, 2, 4, 8, 14, 22, 32, 44, 58, 74, 92, 112
CHAPTER SIX

SIMULTANEOUS LINEAR EQUATIONS

Consider a set of N simultaneous linear equations in N variables (unknowns), xi, i = 1, 2, ....., N.

The equations take the general form:

a11 x1 + a12 x2 + …… + a1N xN = b1

a21 x1 + a22 x2 + …… + a2N xN = b2

------- ------ ---------- ----- ---- ------ ………………… (1)

aN1 x1 + aN2 x2 + …… + aNN xN = bN

Where the aij are constants and form the elements of a square matrix A. The bi are given and form a
column vector b. If A is non-singular, equation (1) can be solved for the xi using the inverse of
A according to the formula x = A⁻¹ b.

Systems of linear Equations

Consider the following system of linear equations:

a11 x1 + a12 x2 + …… + a1n xn = b1,

a21 x1 + a22 x2 + …… + a2n xn = b2,

− − − − − − − − − − − − −

− − − − − − − − − − − − −

an1 x1 + an2 x2 + …… + ann xn = bn,

A solution to this system of equations is a set of values x1, x2, ....., xn which satisfies the above
equations.
Consider the matrix:

a11   a12   ……   a1n
a21   a22   ……   a2n
....  ....  ……   ....
an1   an2   ……   ann

This is called the coefficient matrix.

The vector (b1, b2, − − −, bn) is called the right hand side vector.

In some special cases, the solution can be obtained directly.

Case 1

A square matrix is called a diagonal matrix if the diagonal entries alone are non-zero. Suppose
the coefficient matrix is a diagonal matrix, i.e. the coefficient matrix is of the form:

a11    0     0   ..... 0
 0    a22    0   ..... 0
 −     −     −     −   −
 −     −     −     −   −
 0     0     0   ..... ann

The equations will be of the form:

a11 x1 = b1,

a22 x2 = b2,

− − − − −

− − − − −

ann xn = bn,

In this case, the solution can be directly written as:

x1 = b1/a11,    x2 = b2/a22,    .........,    xn = bn/ann
Case 2

A matrix is said to be lower triangular if all its upper diagonal entries are zeros.

Suppose the coefficient matrix is a lower triangular matrix, i.e. it is of the following form:

a11    0     0   ..... 0
a21   a22    0   ..... 0
 −     −     −     −   −
 −     −     −     −   −
an1   an2   an3  ..... ann

The equations will be of the following form:

a11 x1 = b1,

a21 x1 + a22 x2 = b2,

a31 x1 + a32 x2 + a33 x3 = b3,

−−−−−−−−−−−−−−−−−−−

−−−−−−−−−−−−−−−−−−−

an1 x1 + an2 x2 + an3 x3 + …… + ann xn = bn,

From the first equation,

x1 = b1/a11

Substituting into the second equation, we have:

x2 = (1/a22)(b2 - a21 x1)

Also, doing the same for x3:

x3 = (1/a33)(b3 - a31 x1 - a32 x2)

Similarly, we can find x4, x5, ......., xn. This is called the forward substitution method.

Case 3

Suppose the coefficient matrix is upper triangular. Then the equations will be of the following form:

a11 x1 + a12 x2 + .... + a1n xn = b1,

0 + a22 x2 + ⋯ . + a2n xn = b2,

−−−−−−−−−−−−−−−−

−−−−−−−−−−−−−−−−−

ann xn = bn,

Starting from the last equation,

xn = bn/ann

The (n - 1)th equation can now be used to evaluate x(n-1) thus:

x(n-1) = (1/a(n-1,n-1)) (b(n-1) - a(n-1,n) xn)

In general, after evaluating xn, x(n-1), ..... x(i+1), we evaluate xi as:

xi = (1/aii)(bi - a(i,i+1) x(i+1) - …… - a(i,n) xn)

We can thus evaluate all the x values. This is called the backward substitution method.

Elementary Row Operations

Consider a matrix

       a11   a12   ....  a1n
       a21   a22   ....  a2n
A =     −     −     −     −
        −     −     −     −
       an1   an2   ....  ann

Operation 1

Multiplying each element of a row by a constant:


If the ith row Ri is multiplied by a constant k, we write: Ri ← kRi (read as Ri becomes kRi).

Operation 2

Multiplying one row by a constant and subtracting it from another row, i.e. Ri can be replaced by
Ri - kRj. We write Ri ← Ri - kRj.

Operation 3

Two rows can be exchanged: If Ri and Rj are exchanged, we write Ri ↔ Rj.

When operation 1 is performed, the determinant is multiplied by k.

If operation 2 is performed on a matrix, its determinant value is not affected.

When operation 3 is performed, the sign of its determinant value reverses.

Now consider a matrix in which all the lower diagonal entries of the first column are zero:

       a11   a12   a13  ..  a1n
A =     0    a22   a23  ..  a2n
        0    a32   a33  ..  a3n
       .......................
        0    an2   an3  ..  ann

Then |A| = a11 x |A1|, where A1 is the (n - 1) x (n - 1) matrix obtained by deleting the first row
and the first column of A.

Pivotal Condensation

Consider a matrix,

       a11   a12   ....  a1n
A =    a21   a22   ....  a2n
       ....  ....  .....
       an1   an2   ....  ann

Also consider a row operation R2 ← R2 - (a21/a11) R1, performed on the matrix A; then the

a21 entry will become zero. Similarly, do the operation: Ri ← Ri - (ai1/a11) R1, for i = 2, 3, 4, ... n


Then the lower diagonal entries of the first column will become zero. Note that these operations
do not affect the determinant value of A.

In the above operation, aij would have now become:

aij - (ai1/a11) a1j, that is: aij ← aij - (ai1/a11) a1j

Therefore, according to the new notation:

       a11   a12   a13  ..  a1n
A =     0    a22   a23  ..  a2n
        0    a32   a33  ..  a3n
       .......................
        0    an2   an3  ..  ann

|A| = a11 x |A1|, where A1 is the reduced (n - 1) x (n - 1) matrix formed by rows 2, ..., n and
columns 2, ..., n.

Now, we can once again repeat the above procedure on the reduced matrix to get the determinant:

|A| = a11 a22 x |A2|

‘A’ was an n x n matrix. In the first step, we condensed it into an (n - 1) x (n - 1) matrix. Now, it
has further been condensed into an (n - 2) x (n - 2) matrix. Repeating the above procedure, we can
condense the matrix into a 1 x 1 matrix. So, determinant = a11 . a22 . a33 . ..... . ann (the product of the
diagonal entries of the fully reduced matrix).

Algorithm Development

Let A be the given matrix.

1. Do the row operation Ri ← Ri - (ai1/a11) R1

(for i = 2, 3, 4 .... n)

This makes all the lower diagonal entries of the first column zero.

2. Do the row operation Ri ← Ri - (ai2/a22) R2 (for i = 3, 4 .... n)


This also makes all the lower diagonal entries of the second column zero.

3. Do the row operation Ri ← Ri - (ai3/a33) R3 (for i = 4, 5 .... n)

This makes all the lower diagonal entries of the third column zero.

In general, in order to make the lower diagonal entries of the kth column zero,

4. Do the row operation Ri ← Ri - (aik/akk) Rk (for i = k + 1, k + 2 .... n)

Doing the above operation for k = 1, 2, 3, ...., n - 1 makes all the lower diagonal entries of
the matrix zero. Hence, determinant = a11 . a22 . ,...,. ann.

Notice that the following segment will do the required row operation:

For j = 1 to n

a(i,j) = a(i,j) - u * a(k,j)    (where u = a(i,k)/a(k,k))

This operation has to be repeated for i = k + 1 to n in order to make the lower diagonal entries
of the kth column zero.

The complete algorithm is shown below:

1. Read n
2. For i = 1 to n
3. For j = 1 to n
4. Read a(i,j)
5. Next j
6. Next i
7. For k = 1 to n - 1
8. For i = k + 1 to n
9. u = a(i,k)/a(k,k)

10. For j = 1 to n
11. a(i,j) = a(i,j) - u * a(k,j)
12. Next j
13. Next i
14. Next k
15. det = 1
16. For i = 1 to n
17. det = det * a(i,i)
18. Next i
19. Print det
20. End.

Practice Questions
1. Write a FORTRAN program to implement the pivotal condensation method, to find the
determinant of any matrix of order n.
2. Find the determinant of :

1.2 -2.1 3.2 4.3


-1.4 -2.6 3.0 4.1
-2.2 1.7 4.0 1.2
1.1 3.6 5.0 4.6
using the pivotal condensation method.
Gauss Elimination Method

Consider the equations:

a11 x1 + a12 x2 + ⋯ + a1n xn = b1,

a21 x1 + a22 x2 + ⋯ + a2n xn = b2,

--- ------ ------ ------ ------- ------ ------

an1 x1 + an2 x2 + ⋯ + ann xn = bn,

This can be put in matrix form and solved using the row operations which were used for the pivotal
condensation method.

The algorithm consists of three major steps thus:

(i) Read the matrix


(ii) Reduce it to upper triangular form
(iii) Use backward substitution to get the solution.

Algorithm:

Read Matrix A.

1. Read n
2. For i = 1 to n
3. For j = 1 to n + 1
4. Read a(i,j)
5. Next j
6. Next i
Reduce to upper Triangular
7. For k = 1 to n - 1
8. For i = k + 1 to n
9. u = a(i,k)/a(k,k)
10. For j = 1 to n + 1
11. a(i,j) = a(i,j) - u * a(k,j)
12. Next j
13. Next i
14. Next k
Backward Substitution
15. x(n) = a(n, n+1)/a(n,n)

16. For i = n - 1 to 1 step -1
17. s = a(i, n+1)

18. For j = i + 1 to n
19. s = s - a(i,j) * x(j)
20. Next j
21. x(i) = s/a(i,i)

22. Next i
Print Answer
23. For i = 1 to n
24. Print x(i)
25. Next i
26. End

Example:

Solve the following system of equations by the Gauss elimination method:

x1 + x2 + (1/2) x3 + x4 = 3.5

-x1 + 2x2 + x4 = -2

-3x1 + x2 + 2x3 + x4 = -3

-x1 + 2x4 = 0

Solution: The augmented matrix is:


 1   1   0.5   1    3.5
-1   2   0     1   -2
-3   1   2     1   -3
-1   0   0     2    0

In order to make zero the lower diagonal entries of the first column, do the following operations:

R2 ← R2 + R1

R3 ← R3 + 3R1

R4 ← R4 + R1

These will yield:

1   1   0.5   1    3.5
0   3   0.5   2    1.5
0   4   3.5   4    7.5
0   1   0.5   3    3.5

Now do the operations:

R3 ← R3 - (4/3) R2

R4 ← R4 - (1/3) R2

These will yield:

1   1   0.5     1       3.5
0   3   0.5     2       1.5
0   0   2.833   1.333   5.5
0   0   0.333   2.333   3

Now, doing:

R4 ← R4 - (0.333/2.833) * R3

will result in:


1   1   0.5     1       3.5
0   3   0.5     2       1.5
0   0   2.833   1.333   5.5
0   0   0       2.176   2.353

Now the equations become:

x1 + x2 + 0.5x3 + x4 = 3.5 ------------------ (1)

3x2 + 0.5x3 + 2x4 = 1.5 -------------- (2)

2.833x3 + 1.333x4 = 5.5 ------------- (3)

2.176x4 = 2.353 ------- (4)

From equation (4),

x4 = 1.0811

From equation (3),

2.833x3 + 1.333(1.0811) = 5.5

→ x3 = 1.4324

Also, from equation (2),

3x2 + (1/2)(1.4324) + 2(1.0811) = 1.5

→ x2 = -0.4595

Finally, equation (1) gives, after substituting x2, x3, x4 values:

x1 = 2.1622
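A minimal free-form Fortran sketch of the read–reduce–substitute procedure above (the array layout and program name are illustrative assumptions), loaded with the augmented matrix of this example:

program gauss
  implicit none
  integer, parameter :: n = 4
  real(8) :: a(n, n+1), u, x(n)
  integer :: i, j, k
  ! augmented matrix [A | b] of the example
  a(1,:) = (/  1.0d0, 1.0d0, 0.5d0, 1.0d0,  3.5d0 /)
  a(2,:) = (/ -1.0d0, 2.0d0, 0.0d0, 1.0d0, -2.0d0 /)
  a(3,:) = (/ -3.0d0, 1.0d0, 2.0d0, 1.0d0, -3.0d0 /)
  a(4,:) = (/ -1.0d0, 0.0d0, 0.0d0, 2.0d0,  0.0d0 /)
  ! reduce to upper triangular form
  do k = 1, n-1
     do i = k+1, n
        u = a(i,k) / a(k,k)
        do j = k, n+1
           a(i,j) = a(i,j) - u * a(k,j)
        end do
     end do
  end do
  ! backward substitution
  do i = n, 1, -1
     x(i) = a(i, n+1)
     do j = i+1, n
        x(i) = x(i) - a(i,j) * x(j)
     end do
     x(i) = x(i) / a(i,i)
  end do
  print '(a,4f10.4)', ' x = ', x     ! ~ 2.1622, -0.4595, 1.4324, 1.0811
end program gauss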
CHAPTER SEVEN

DIFFERENTIAL EQUATIONS

The following are some differential equations:

dy/dx = (x + y)

d²y/dx² = x + y

d²y/dx² + (1 - x) dy/dx + y = (x - 1)

If dⁿy/dxⁿ is the highest order derivative in a differential equation, the equation is said to be an
nth order differential equation.

A solution to the differential equation is the function y which satisfies the differential equation.

Example:

Consider the differential equation: d²y/dx² = 6x + 4

This is a second order differential equation. The function:

y = x³ + 2x² - 1

satisfies the differential equation; hence, y = x³ + 2x² - 1 is a solution to the differential


equation.

Numerical Solutions
Consider the equation: d²y/dx² = 6x + 4

A solution is y = x³ + 2x² - 1; however, instead of writing the solution as a function of x,


we can find the numerical values of y for various pivotal values of x. The solution from x =
0 to x = 1 can be expressed as follows:

x     0       0.2       0.4       0.6       0.8      1.0

y    -1     -0.912    -0.616    -0.064     0.792     2.0

The values are obtained from the function y = x³ + 2x² - 1. This table of numerical values of y is
said to be a numerical solution to the differential equation.

The Initial Value Problem

Consider the differential equation: dy/dx = f(x, y); y(x0) = y0

This is a first order differential equation. Here, the y value at x = x0 is given. We must assume a
small increment h, i.e.

x1 = x0 + h

x2 = x1 + h

− −− −

xn = x(n-1) + h

y1 = ?    y2 = ?    y3 = ?    ...    yn = ?

Let us denote the y values at x1, x2, ... .... as y1, y2, .... respectively. y0 is given and so we must find

y1, y2, ..... This problem is called an initial value problem.

Euler’s Method

Consider the initial value problem: dy/dx = f(x, y); y(x0) = y0

y is a function of x, so we shall write that function as y(x)

Using the Taylor’s series expansion:

y(x0 + h) = y(x0) + (h/1!) y′(x0) + (h²/2!) y′′(x0) + ⋯ ...
Here, y(x0 + h) denotes the y value at x0 + h

y′(x0) denotes the value of dy/dx at x0, e.t.c.

Given;

y(x0) = y0
y(x0 + h) = y(x1) = y1 (say)

y′(x0) = dy/dx at (x0, y0)

But, dy/dx = f(x, y)

→ y′(x0) = f(x0, y0)

Now, let

f(x0, y0) = f0

so that, y′(x0) = f0

Therefore, Taylor’s series expansion up to the first order term, gives:

y1 = y0 + h f0

Similarly, we can derive:

y2 = y1 + h f1

y3 = y2 + h f2

In general,
y(i+1) = yi + h fi, where fi = f(xi, yi)

This is called the Euler’s formula to solve an initial value problem.

Algorithm for Euler’s method

1. Define f(x, y)
2. Read x0, y0, n, h
3. For i = 0 to n - 1
4. x(i+1) = xi + h
5. y(i+1) = yi + h f(xi, yi)
6. Print x(i+1), y(i+1)
7. Next i
8. End
Assignment: Implement the above in any programming language (FORTRAN or BASIC)


Example:

Solve the initial value problem: dy/dx = x² + y²; y(1) = 0.8; x = 1(0.5)3

Solution:

Given: f(x, y) = x² + y², x0 = 1, y0 = 0.8, h = 0.5, x = 1 to 3

y0 = 0.8     y1 = ?      y2 = ?     y3 = ?       y4 = ?
x0 = 1       x1 = 1.5    x2 = 2     x3 = 2.5     x4 = 3

y1 = y0 + h f0

But f0 = f(x0, y0) = f(1, 0.8) = 1.64

Therefore, y1 = 0.8 + (0.5)(1.64) = 1.62

y2 = y1 + h f1

But f1 = f(x1, y1) = f(1.5, 1.62) = 4.8744

Therefore, y2 = 1.62 + (0.5)(4.8744) = 4.0572

y3 = y2 + h f2

But f2 = f(x2, y2) = f(2, 4.0572) = 20.460871

Therefore, y3 = 4.0572 + (0.5)(20.460871) = 14.287635

y4 = y3 + h f3

But f3 = f(x3, y3) = f(2.5, 14.287635) = 210.38651

Therefore, y4 = 14.287635 + (0.5)(210.38651) = 119.48088

So the numerical solution obtained by Euler’s method is:

y0 = 0.8   y1 = 1.62   y2 = 4.0572   y3 = 14.287635   y4 = 119.48088
x0 = 1     x1 = 1.5    x2 = 2        x3 = 2.5         x4 = 3
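The steps above can be checked with a short program. Below is a minimal free-form Fortran sketch (illustrative names only, not the assignment program), hard-coded with f(x, y) = x² + y² and the starting values of this example:

program euler
  implicit none
  integer, parameter :: n = 4
  real(8) :: x(0:n), y(0:n), h
  integer :: i
  x(0) = 1.0d0
  y(0) = 0.8d0
  h    = 0.5d0
  do i = 0, n-1
     x(i+1) = x(i) + h
     y(i+1) = y(i) + h * f(x(i), y(i))     ! Euler step y(i+1) = y(i) + h f(i)
     print '(a,f4.1,a,f12.6)', ' x = ', x(i+1), '   y = ', y(i+1)
  end do
contains
  real(8) function f(s, t)
    real(8), intent(in) :: s, t
    f = s*s + t*t                          ! this example's f(x, y) = x^2 + y^2
  end function f
end program euler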
Assignment: Using Euler’s method, solve: 5 =3 ; (0) = 1

For the interval 0 ≤ x ≤ 0.3, with h = 0.1

Backward Euler’s Method

The formula for the backward Euler’s method is given by: y(i+1) = yi + h f(i+1)

Where, f(i+1) = f(x(i+1), y(i+1))

For example, consider the initial value problem:

dy/dx = 2x³y; y(0) = 1; x = 0(0.2)0.4

Solution:

f(x, y) = 2x³y

x0 = 0, y0 = 1, h = 0.2

The backward Euler’s method formula is: y(i+1) = yi + h f(i+1)

→ y(i+1) = yi + h(2 x(i+1)³ y(i+1))

Therefore,

y(i+1) - 2h x(i+1)³ y(i+1) = yi

Hence,

y(i+1) (1 - 2h x(i+1)³) = yi

OR

y(i+1) = yi / (1 - 2h x(i+1)³)

y0 = 1      y1 = ?      y2 = ?
x0 = 0      x1 = 0.2    x2 = 0.4

Now, put i = 0 in the formula:


y1 = y0 / (1 - 2h x1³) = 1 / (1 - 2(0.2)(0.2)³) = 1.0032102

Put i = 1 in the formula:

y2 = y1 / (1 - 2h x2³) = 1.0032102 / (1 - 2(0.2)(0.4)³) = 1.0295671

Therefore, the numerical solution to the problem is:

y0 = 1      y1 = 1.0032102      y2 = 1.0295671
x0 = 0      x1 = 0.2            x2 = 0.4

Euler-Richardson’s Method

The formula is written as

y(i+1) = yi + (h/3)(fi + 2 fm)

Where fi = f(xi, yi) and fm = f(xm, ym)

Also, xm = xi + h/2;    ym = yi + (h/2) fi

Algorithm

1. Define f(x, y)
2. Read x0, y0, h, n
3. For i = 0 to n - 1
4. xm = xi + h/2

5. ym = yi + (h/2) f(xi, yi)
6. x(i+1) = xi + h
7. y(i+1) = yi + (h/3)[f(xi, yi) + 2 f(xm, ym)]
8. Print x(i+1), y(i+1)
9. Next i
10. End

Let us now develop a FORTRAN program for the function: f(x, y) = (1/2)(1 + x) y²

i.e. dy/dx = (1/2)(1 + x) y²; y(0) = 1; x = 0(0.1)0.6

Hence, x0 = 0, y0 = 1, h = 0.1, n = 6

Note: Let xm = XM and ym = YM


C     PROGRAM FOR EULER RICHARDSON
      DIMENSION X(0:20), Y(0:20)
C     STATEMENT FUNCTION FOR F(X,Y)
      F(T,U) = 0.5*(1. + T)*U*U
      WRITE (*,*) 'ENTER X0, Y0, H, N VALUES'
      READ (*,5) X(0), Y(0), H, N
    5 FORMAT (3F15.5, I5)
      WRITE (*,*) X(0), Y(0)
      DO 25 I = 0, N-1
         XM = X(I) + H/2.0
         YM = Y(I) + H/2.0 * F(X(I), Y(I))
         X(I+1) = X(I) + H
         FI = F(X(I), Y(I))
         FM = F(XM, YM)
         Y(I+1) = Y(I) + H/3. * (FI + 2.0*FM)
         WRITE (*,15) X(I+1), Y(I+1)
   15    FORMAT (1X, 2F15.5)
   25 CONTINUE
      STOP
      END

Taylor’s Series Method


Given that y is a function of x, it is written as y(x)

By Taylor’s series expansion:

y(x + h) = y(x) + (y′(x)/1!) h + (y′′(x)/2!) h² + ………

y(i+1) = yi + (h/1!) yi′ + (h²/2!) yi′′ + ………
Where, yi′ = dy/dx evaluated at (xi, yi)

yi′′ = d²y/dx² evaluated at (xi, yi)

→ y(x(i+1)) = yi + (h/1!) yi′ + (h²/2!) yi′′ + ………
Let the given initial value problem to be solved be:

dy/dx = f(x, y); y(x0) = y0

Now consider the problem:

dy/dx = 4x³ + 1

y(0) = 1.5

x = 0(0.2)0.8

Here,
y′ = 4x³ + 1

yi′ = 4xi³ + 1

y′′ = 12x²;    yi′′ = 12xi²

y′′′ = 24x;    yi′′′ = 24xi

y^(4) = 24;    yi^(4) = 24

y^(5) = 0;    yi^(5) = 0

Therefore, Taylor’s expansion becomes:

y(i+1) = yi + h(4xi³ + 1) + (h²/2)(12xi²) + (h³/6)(24xi) + (h⁴/24)(24)
Given that h = 0.2,

y(i+1) = yi + 0.8xi³ + 0.2 + 0.24xi² + 0.032xi + 0.0016

Hence, by putting i = 0, 1, 2, 3 respectively, we can evaluate y1, y2, y3, y4.

The Runge – Kutta Methods
Consider the initial value problem:

dy/dx = f(x, y); y(x0) = y0

Since y is a function of x, it can be written as y(x)

Then by the mean value theorem,

y(x + h) = y(x) + h y′(x + θh)

Where, 0 < θ < 1

In our usual notation, this can be written as:

y(i+1) = yi + h f(xi + θh, y(xi + θh))

Now, choosing θ = 1/2, we obtain:

y(i+1) = yi + h f(xi + h/2, y(xi + h/2))

And estimating y(xi + h/2) by Euler’s method with spacing h/2, this formula may be expressed as:

k1 = h f(xi, yi)


k2 = h f(xi + h/2, yi + k1/2)
Therefore,
y(i+1) = yi + k2

This is called the second order Runge – Kutta formula.

The third order formula is:

k1 = h f(xi, yi)


k2 = h f(xi + h/2, yi + k1/2)

k3 = h f(xi + h, yi + 2k2 - k1)

Therefore,

y(i+1) = yi + (1/6)(k1 + 4k2 + k3)
The fourth order Runge – Kutta formula is given as:

k1 = h f(xi, yi)


k2 = h f(xi + h/2, yi + k1/2)

k3 = h f(xi + h/2, yi + k2/2)

k4 = h f(xi + h, yi + k3)

Therefore,

y(i+1) = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
6
Example

Solve the initial value problem below using the Runge – Kutta second order method.

dy/dx = (1 + x²)y; y(0) = 1; x = 0(0.2)0.6

Solution:

f(x, y) = (1 + x²)y; x0 = 0, y0 = 1, h = 0.2

y0 = 1      y1 = ?      y2 = ?      y3 = ?
x0 = 0      x1 = 0.2    x2 = 0.4    x3 = 0.6

To find y1:

k1 = h f(x0, y0) = 0.2(1 + x0²)y0

= 0.2(1)(1) = 0.2


k2 = h f(x0 + h/2, y0 + k1/2) = h f(0.1, 1.1) = 0.2(1 + 0.01)(1.1)

= 0.2222

Therefore,
y1 = y0 + k2 = 1 + 0.2222

= 1.2222
To find y2:

k1 = h f(x1, y1) = 0.2(1 + x1²)y1 = 0.2(1 + 0.04)(1.2222)

= 0.2542222


k2 = h f(x1 + h/2, y1 + k1/2) = h f(0.3, 1.349333) = 0.2(1 + 0.09)(1.34933)

= 0.2941546

Then, y2 = y1 + k2 = 1.2222 + 0.2941546 = 1.5163768

To find y3:

k1 = h f(x2, y2) = 0.2(1 + x2²)y2 = 0.2(1 + 0.16)(1.5163768)

= 0.3517994


k2 = h f(x2 + h/2, y2 + k1/2) = h f(0.5, 1.6922785)

= 0.4230691

Then, y3 = y2 + k2 = 1.5163768 + 0.4230691

= 1.9394459
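A minimal free-form Fortran sketch of the second order Runge – Kutta iteration for this example (program and variable names are illustrative assumptions); its output agrees with the hand calculation above to about four decimal places.

program rk2
  implicit none
  real(8) :: x, y, h, k1, k2
  integer :: i
  x = 0.0d0
  y = 1.0d0
  h = 0.2d0
  do i = 1, 3
     k1 = h * f(x, y)
     k2 = h * f(x + h/2.0d0, y + k1/2.0d0)
     y  = y + k2                       ! second order Runge-Kutta update
     x  = x + h
     print '(a,f4.1,a,f10.6)', ' x = ', x, '   y = ', y
  end do
contains
  real(8) function f(s, t)
    real(8), intent(in) :: s, t
    f = (1.0d0 + s*s) * t              ! this example's f(x, y) = (1 + x^2) y
  end function f
end program rk2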

Assignment

Solve the problem given below, using the Runge – Kutta fourth order method:

dy/dx = (1 + x²)y; y(0) = 1; x = 0(0.2)0.6
CHAPTER EIGHT

NUMERICAL INTEGRATION
Methods:

1. Trapezoidal Formula

I = ∫(a to b) f(x) dx ≈ h[(y0 + yn)/2 + y1 + y2 + ⋯ .... + y(n-1)]

where yi = f(xi), (i = 0, 1, 2, ....., n)

h = (b - a)/n

2. Simpson’s Formula (Parabola formula)

I = ∫(a to b) f(x) dx ≈ (h/3)[y0 + y2m + 2(y2 + y4 + ⋯ ....) + 4(y1 + y3 + ⋯ .... + y(2m-1))]

where n = 2m and h = (b - a)/n = (b - a)/2m

3. Newton’s Formula (3/8 rule)

I = ∫(a to b) f(x) dx ≈ (3h/8)[y0 + yn + 2(y3 + y6 + ⋯ .... + y(n-3)) + 3(y1 + y2 + y4 + y5 +
+ ⋯ .... + y(n-1))]

where n = 3m and h = (b - a)/n = (b - a)/3m
Example 1

Evaluate the integral I = ∫(0 to 1) e^(-x²) dx, employing the trapezoidal rule, for n = 10.

Solution:

Form a table of the integrand function y = e^(-x²):

i      xi      xi²      yi
0      0       0.00     1.0000
1      0.1     0.01     0.9900
2      0.2     0.04     0.9608
3      0.3     0.09     0.9139
4      0.4     0.16     0.8521
5      0.5     0.25     0.7788
6      0.6     0.36     0.6977
7      0.7     0.49     0.6126
8      0.8     0.64     0.5273
9      0.9     0.81     0.4449
10     1.0     1.00     0.3679
Applying the formula:

I = ∫(0 to 1) f(x) dx ≈ h[(y0 + y10)/2 + y1 + y2 + ⋯ .... + y9]

We note that: (y0 + y10)/2 + Σ(i = 1 to 9) yi = 7.4620

Therefore,

I = ∫(0 to 1) e^(-x²) dx ≈ h * 7.4620 = 0.1 * 7.4620

≈ 0.746
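A minimal free-form Fortran sketch of the trapezoidal formula applied to this integral (program and variable names are illustrative assumptions):

program trap
  implicit none
  integer, parameter :: n = 10
  real(8) :: h, x, s
  integer :: i
  h = 1.0d0 / n
  s = (g(0.0d0) + g(1.0d0)) / 2.0d0        ! (y0 + yn)/2
  do i = 1, n-1
     x = i * h
     s = s + g(x)                          ! interior ordinates y1 ... y(n-1)
  end do
  print '(a,f8.4)', ' I ~ ', h * s         ! ~ 0.7462
contains
  real(8) function g(x)
    real(8), intent(in) :: x
    g = exp(-x*x)                          ! integrand of Example 1
  end function g
end program trap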
Example 2

Compute the integral I = ∫(0 to 1) e^(x²) dx by the Simpson formula, for n = 10.

Solution:

Form a table of the function y = e^(x²):

i      xi      xi²      yi
0      0       0.00     1.0000
1      0.1     0.01     1.0101
2      0.2     0.04     1.0408
3      0.3     0.09     1.0942
4      0.4     0.16     1.1735
5      0.5     0.25     1.2840
6      0.6     0.36     1.4333
7      0.7     0.49     1.6323
8      0.8     0.64     1.8965
9      0.9     0.81     2.2479
10     1.0     1.00     2.7183

y0 + y10 = 3.7183;    y2 + y4 + y6 + y8 = 5.5441;    y1 + y3 + y5 + y7 + y9 = 7.2685

Applying the Simpson’s formula:

I ≈ (1/30)[(3.7183) + 2(5.5441) + 4(7.2685)]

= 1.46268 ≈ 1.4627
Example 3

Compute the integral below, using the Newton’s (3/8) formula for h = 0.1:

I = ∫(0 to 0.6) dx/(1 + x)

Solution:

If h = 0.1, then n = (b - a)/h = 0.6/0.1 = 6

Now form a table of the function y = 1/(1 + x):

i      xi      1 + xi      yi
0      0       1.00        1.0000
1      0.1     1.10        0.9091
2      0.2     1.20        0.8333
3      0.3     1.30        0.7692
4      0.4     1.40        0.7143
5      0.5     1.50        0.6667
6      0.6     1.60        0.6250

y0 + y6 = 1.6250;    y3 = 0.7692;    y1 + y2 + y4 + y5 = 3.1234

Applying the formula, we obtain:

I = ∫(0 to 0.6) dx/(1 + x) ≈ (3h/8)[(y0 + y6) + 2(y3) + 3(y1 + y2 + y4 + y5)]

≈ (3/8) * 0.1 * (1.6250 + 1.5384 + 9.3702) ≈ 0.47001
Bibliography

1. Erwin Kreyszig (1993). Advanced Engineering Mathematics, Seventh Edition.
New York: John Wiley & Sons Inc.
2. Riley, K.F., Hobson, M.P. and Bence, S.J. (1999). Mathematical Methods for Physics
and Engineering. Cambridge: Cambridge University Press, Low-Price Edition.
3. Xavier, C. (1985). FORTRAN 77 and Numerical Methods.
