Numerical Study of Some Iterative Methods for Solving Nonlinear Equation
Abstract: In this paper we present a numerical study of some iterative methods for solving nonlinear equations. Many iterative methods for solving algebraic and transcendental equations are given by different formulae. The bisection method, the secant method and Newton's iterative method are applied and their results compared. The software MATLAB 2009a was used to find the root of a function on the interval [0, 1]. The numerical rate of convergence was found in each calculation. It was observed that the bisection method converges at the 47th iteration, while the Newton and secant methods converge to the exact root 0.36042170296032, to within the error level, at the 4th and 5th iterations respectively. It was also observed that Newton's method required fewer iterations than the secant method. However, when we compare performance, we must compare both cost and speed of convergence [6]. It was then concluded that of the three methods considered, the secant method is the most effective scheme; numerical experiments show that the secant method is more efficient than the others.
Keywords: roots, rate of convergence, iterative methods, algorithm, transcendental equations, numerical experiments, efficiency index.
I. Introduction
Solving nonlinear equations is one of the most important and challenging problems in science and
engineering applications. The root finding problem is one of the most relevant computational
problems. It arises in a wide variety of practical applications in Physics, Chemistry, Biosciences,
Engineering, etc. As a matter of fact, the determination of any unknown appearing implicitly in
scientific or engineering formulas, gives rise to root finding problem [1]. Relevant situations in
Physics where such problems are needed to be solved include finding the equilibrium position of an
object, potential surface of a field and quantized energy level of confined structure [7]. The common
root-finding methods include: Bisection, Newton-Raphson, False position, Secant methods etc.
Different methods converge to the root at different rates. That is, some methods are faster in
converging to the root than others. The rate of convergence could be linear, quadratic or otherwise.
The higher the order, the faster the method converges [3]. This study aims at comparing the rates of convergence of the Bisection, Newton-Raphson and Secant root-finding methods.
Obviously, Newton-Raphson method may converge faster than any other method but when we
compare performance, it is needful to consider both cost and speed of convergence. An algorithm
that converges quickly but takes a few seconds per iteration may take more time overall than an
algorithm that converges more slowly, but takes only a few milliseconds per iteration [8]. For the
purpose of this general analysis, we may assume that the cost of an iteration is dominated by the
evaluation of the function - this is likely the case in practice. So, the number of function evaluations
per iteration is likely a good measure of cost [8]. The secant method requires only one function evaluation per iteration, since the value of the function at the previous point can be stored from the previous iteration [6]. Newton's method, on the other hand,
requires one function evaluation and one derivative evaluation per iteration. It is often difficult to estimate the cost of evaluating the derivative in general (when it is possible at all) [1, 4-5]. It seems safe to assume that in most cases evaluating the derivative is at least as costly as evaluating the function [8]. Thus, we can estimate that a Newton iteration costs about two function evaluations per
iteration. This disparity in cost means that we can run two iterations of the secant method in the
same time it will take to run one iteration of Newton method. In comparing the rate of convergence
of the Bisection, Newton and Secant methods, [8] used the C++ programming language to calculate the cube
roots of numbers from 1 to 25, using the three methods. They observed that the rate of convergence
is in the following order: Bisection method < Newton method < Secant method. They concluded that
Newton method is 7.678622465 times better than the Bisection method while Secant method is
1.389482397 times better than the Newton method. Newton's method is one of the most predominant methods in numerical analysis for solving nonlinear equations [1]. Some historical notes on this method can be found in [13, 14].
Intermediate Value Theorem: If f is continuous on the closed interval [a, b] and K is any number between f(a) and f(b), then there exists a number c in (a, b) such that f(c) = K. In particular, if f(a) and f(b) have opposite signs, then there exists a number c in (a, b) such that f(c) = 0.
www.ijesi.org
Bisection-Method: As the title suggests, the method is based on repeated bisections of an interval
containing the root. The basic idea is very simple. Suppose f(x) = 0 is known to have a real root x = α in an interval [a, b].
• Then bisect the interval [a, b], and let c = (a + b)/2 be the middle point of [a, b]. If c is the root, then we are done. Otherwise, one of the intervals [a, c] or [c, b] will contain the root.
• Find the one that contains the root and bisect that interval again.
• Continue the process of bisections until the root is trapped in an interval as small as warranted by
the desired accuracy.
To implement the above idea, we must know at each iteration which of the two intervals contains the root of f(x) = 0.
Choose a0, b0 — two numbers such that f(a0)f(b0) < 0.
• Compute ck = (ak + bk)/2.
• Test, using one of the criteria stated in the next section, if ck is the desired root. If so, stop.
• If ck is not the desired root, test if f(ck)f(ak) < 0. If so, set ak+1 = ak and bk+1 = ck; otherwise set ak+1 = ck and bk+1 = bk.
End.
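The bisection steps above can be sketched in a few lines of Python (the paper's own computations were done in MATLAB; the test function x^2 − 2 below is only an illustrative stand-in, since the paper's function is not reproduced here):

```python
def bisection(f, a, b, tol=1e-14, max_iter=200):
    """Bisection: repeatedly halve a bracket [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0              # midpoint of the current bracket
        if f(c) == 0.0 or (b - a) / 2.0 < tol:
            return c                   # root found or bracket small enough
        if f(a) * f(c) < 0:            # root lies in [a, c]
            b = c
        else:                          # root lies in [c, b]
            a = c
    return (a + b) / 2.0

root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)   # approximates sqrt(2)
```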
Let us now find out the minimum number of iterations N needed with the bisection method to achieve a certain desired accuracy ε. The interval length after N iterations is (b0 − a0)/2^N. Requiring
(b0 − a0)/2^N ≤ ε
and solving for N gives
N ≥ [ln(b0 − a0) − ln ε] / ln 2.
Theorem: The number of iterations N needed in the bisection method to obtain an accuracy of ε is given by
N ≥ [log(b0 − a0) − log ε] / log 2.
Remarks: (i) Since the number of iterations N needed to achieve a certain accuracy depends upon
the initial length of the interval containing the root, it is desirable to choose the initial interval [a0,
b0] as small as possible.
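As a quick numerical check of this formula, the following one-liner (a sketch in Python rather than the paper's MATLAB) reproduces, for the interval [0, 1] and tolerance 10^-14, the iteration count observed later in Table 1:

```python
import math

def bisection_iterations(a0, b0, eps):
    """Smallest integer N with (b0 - a0) / 2**N <= eps."""
    return math.ceil((math.log(b0 - a0) - math.log(eps)) / math.log(2))

print(bisection_iterations(0.0, 1.0, 1e-14))  # 47 iterations for [0, 1]
```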
The Newton-Raphson method finds the slope (tangent line) of the function at the current point and
uses the zero of the tangent line as the next reference point. The process is repeated until the root is
found [10-12]. The method is probably the most popular technique for solving nonlinear equations because of its quadratic convergence rate. However, it can fail if bad initial guesses are used [9-11]. It was suggested that Newton's method should sometimes be started with Picard iteration to improve the initial guess [9]. The Newton-Raphson method is much more efficient than the bisection method, but it requires the calculation of the derivative of the function at each reference point, which is not always easy: the derivative may not exist at all, or it may not be expressible in terms of elementary functions [6, 7]. Furthermore, the tangent line often shoots wildly and might occasionally be trapped in a loop [6]. Once xk is available, the next approximation xk+1 is determined by treating f(x) as a linear function at xk.
Another way of looking at the same fact: we can rewrite f(x) = 0 as
b(x)f(x) = 0, x = x − b(x)f(x) = g(x).
Here the function b(x) is chosen in such a way as to get the fastest possible convergence. We have
g′(x) = 1 − b′(x)f(x) − b(x)f′(x).
Let r be the root, so that f(r) = 0 and r = g(r). Then
g′(r) = 1 − b(r)f′(r),
and we want |g′(r)| to be as small as possible. Choosing
1 − b(x)f′(x) = 0, i.e. b(x) = 1/f′(x),
gives the Newton iteration
xk+1 = xk − f(xk)/f′(xk), (1.2)
where x0 is an initial approximation sufficiently near the root. The convergence order of Newton's method is quadratic for simple roots [4]. By quadratic convergence we mean that the number of correct digits approximately doubles at each iteration.
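Iteration (1.2) translates directly into code; a minimal Python sketch (the derivative is supplied explicitly, and x^2 − 2 is again only an assumed test function):

```python
def newton(f, fprime, x0, tol=1e-14, max_iter=50):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)        # one function + one derivative call
        x = x - step
        if abs(step) < tol:            # stop when the update is negligible
            return x
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)  # ~sqrt(2)
```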
It was remarked in [6] that if none of the above criteria is satisfied within a predetermined number of iterations, say N, then the method has failed after the prescribed number of iterations. In this case one could try the method again with a different x0. Meanwhile, a judicious choice of x0 can sometimes be obtained by drawing the graph of f(x), if possible. However, there does not seem to exist a clear-cut guideline on how to choose a right starting point x0 that guarantees the convergence of the Newton-Raphson method to the desired root. Newton's method needs just one starting value, namely x0.
Writing ek = xk − α, a Taylor expansion of f about xk gives
ek+1 = [f′′(ξk) / (2f′(xk))] ek^2, (1.3)
where ξk lies between xk and α. Taking the limit as ek → 0 (k → ∞), we obtain
lim |ek+1| / |ek|^2 = |f′′(α)| / (2|f′(α)|).
Thus, in the case of a simple root α, Newton's method has second order (quadratic) convergence, with asymptotic error constant |f′′(α)| / (2|f′(α)|). The error at each step is therefore proportional to the square of the previous error.
Secant Method: A major disadvantage of the Newton Method is the requirement of finding the
value of the derivative of f(x) at each approximation. There are some functions for which this job is
either extremely difficult (if not impossible) or time consuming. A way out is to approximate the
derivative by knowing the values of the function at that and the previous approximation according
to [6], or the derivative can be approximated by a finite divided difference. The secant method may be regarded as an approximation to Newton's method where, instead of f′(xk), the quotient
[f(xk) − f(xk−1)] / (xk − xk−1)
is used; i.e. instead of the tangent to the graph of f(x) at xk, we use the secant joining the points (xk−1, f(xk−1)) and (xk, f(xk)). Equating the two expressions for the slope of the secant gives
xk+1 = xk − f(xk)(xk − xk−1) / [f(xk) − f(xk−1)]. (1.4)
This is the secant method. The advantage of the method is that f′(x) need not be evaluated, so the number of function evaluations is half that of Newton's method. However, its convergence rate is slightly lower than Newton's, and it requires two initial guesses x0 and x1.
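Formula (1.4) can be sketched as follows; note how storing f(xk) means each iteration costs only one new function evaluation (x^2 − 2 is an assumed test function, not the paper's):

```python
def secant(f, x0, x1, tol=1e-14, max_iter=50):
    """Secant method: replace f'(x_k) by a divided difference."""
    f0, f1 = f(x0), f(x1)              # f1 is reused on the next iteration
    for _ in range(max_iter):
        if f1 == f0:                   # flat secant: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                # shift the window; no re-evaluation
        x1, f1 = x2, f(x2)             # the single new function call
    return x1

root = secant(lambda x: x * x - 2.0, 0.0, 2.0)  # ~sqrt(2)
```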
Convergence analysis: Let {xk} be the sequence of approximations and assume that α is a simple root of f(x) = 0. Substituting xk = α + ek, xk−1 = α + ek−1 and xk+1 = α + ek+1 in equation (1.4), we obtain
ek+1 = ek − f(α + ek)(ek − ek−1) / [f(α + ek) − f(α + ek−1)]. (1.5)
Expanding f(α + ek) and f(α + ek−1) in Taylor series about the point α and noting that f(α) = 0, we get
ek+1 ≈ [f′′(α) / (2f′(α))] ek ek−1 = C ek ek−1. (1.6)
Seeking a relation of the form |ek+1| = A|ek|^p, so that also |ek| = A|ek−1|^p, substitution into (1.6) gives
A|ek|^p = C|ek| (|ek|/A)^(1/p). (1.8)
Comparing the powers of |ek| on both sides, we get p = 1 + 1/p, which gives p = (1 ± √5)/2. Neglecting the negative sign, we find that the rate of convergence of the secant method is p = (1 + √5)/2 ≈ 1.618. Hence the rate of convergence is superlinear.
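The order equation p^2 = p + 1 and its positive root can be verified numerically (a trivial check, included only to make the constant concrete):

```python
import math

p = (1 + math.sqrt(5)) / 2            # positive root of p**2 = p + 1
assert abs(p ** 2 - (p + 1)) < 1e-12  # p is the golden ratio, ~1.618
```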
Numerical roots for the bisection method: The bisection method was applied to a single-variable function f(x) on [0, 1] using the software MATLAB. The results are presented in Tables 1 to 3.
Table 1. Bisection iterations: columns a, f(a), b, f(b), c and the absolute error.
Table 1 shows the iteration data obtained for the bisection method. It was observed that the function converges to 0.36042170296032 at the 47th iteration with an error level of 0.00000000000001.
Numerical roots for the secant method: The secant method was applied to the same single-variable function f(x) on [0, 0.5] using the MATLAB software.
Table 2. Secant iterations: the approximation and the absolute error at each step.
Table 2 shows that the function converges to the root 0.36042170296032 at the 5th iteration with error level 0.00000000000001.
Numerical roots for Newton's method: Newton's method was applied to the same function f(x) on [0, 1] using the software MATLAB.
S/No.   xk                  |f(xk)|
1       0.00000000000000    1.000000000000
2       0.33333333333333    0.06841772828994
3       0.36017071357763    6.279850705706025e-004
4       0.36042168047602    5.625155319322062e-008
5       0.36042170296032    6.661338147750939e-016
6       0.36042170296032    4.440892098500626e-016
7       0.36042170296032    2.220446049250313e-016
Table 3 shows that the function converges to 0.36042170296032 at the 4th iteration with error level 0.00000000000001.
From the above tables we see that the number of iterations for the bisection method is very large in comparison to the other two methods. Note that the secant method converged to almost the same root as Newton's method, but in 5 iterations as opposed to 4 for Newton. However, recall that Newton's method is more costly per iteration: Newton achieved its accuracy with a total of 8 calls to the function and its derivative, while the secant method did it with only 5 calls, all to the function itself. The secant method was therefore more efficient in terms of the total number of function calls. Since a Newton step costs about two evaluations, we can do two iterations of the secant method for the same cost as one iteration of Newton's method. Next we compare Newton's method and the secant method based on execution time.
Execution time: The execution time of a given task is defined as the time spent by the system executing that task, including the time spent executing run-time or system services on its behalf.
Thus, from the above discussion, we see that the Newton-Raphson method takes more time than the secant method [15-16], since the secant method requires only one function evaluation per iteration while Newton's method requires the evaluation of both the function and its derivative at every iteration.
Efficiency Index: The efficiency index of an iterative method is defined as p^(1/n), where p is the order of the method and n is the total number of function calls at each step of the iteration. In this way, the efficiency index of the secant method is 1.62 (p ≈ 1.618, n = 1), which is better than the 1.41 (p = 2, n = 2) of Newton's method and the 1 (p = 1, n = 1) of the bisection method. Thus we conclude that the secant method has better overall performance than the other methods: in terms of efficiency, bisection < Newton < secant.
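The efficiency indices quoted above follow directly from the definition p^(1/n); a small Python sketch recomputing them:

```python
def efficiency_index(p, n):
    """Efficiency index p**(1/n): order p, n function calls per iteration."""
    return p ** (1.0 / n)

secant_ei    = efficiency_index((1 + 5 ** 0.5) / 2, 1)  # ~1.618
newton_ei    = efficiency_index(2, 2)                   # sqrt(2), ~1.414
bisection_ei = efficiency_index(1, 1)                   # 1.0
```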
V. CONCLUSION: Newton's method needs two function evaluations at each step, including at the start. If a difficult problem requires many iterations to converge, the number of function evaluations with Newton's method may be far larger than with linear iteration methods, because Newton's method always uses two evaluations per iteration whereas the others take only one. Based on our results and discussion, we conclude that the secant method is formally the most effective of the methods considered in this study, requiring only a single function evaluation per iteration. Analysis of efficiency from the numerical computation shows that the bisection method converges slowly but surely. Thus these methods have great practical utility.
REFERENCES:
[1]. S. Amat, S. Busquier, J.M. Gutierrez, Geometric constructions of iterative functions to solve
nonlinear equations, J. Comput. Appl. Math. 157, (2003) 197-205.
[2]. K. E. Atkinson, An introduction to numerical analysis, 2nd ed., John Wiley & Sons, New
York, 1987.
[3]. F. Costabile, M.I. Gualtieri, S.S. Capizzano, An iterative method for the solutions of
nonlinear equations, Calcolo 30,(1999)17-34.
[4]. J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Second edition, Springer-Verlag, 1993.
[5]. Ehiwario, J.C., Aghamie, S.O., Department of Mathematics, College of Education, Agbor, Delta State.
[6]. Biswa Nath Datta (2012), Lecture Notes on Numerical Solution of root Finding Problems.
www.math.niu.edu/dattab.
[7]. Iwetan, C.N, Fuwape, I.A,Olajide, M.S and Adenodi, R.A(2012), Comparative Study of the
Bisection and Newton Methods in solving for Zero and Extremes of a Single-Variable Function.
J.of NAMP vol.21 pp 173-176.
[8]. Srivastava, R.B and Srivastava, S (2011), Comparison of Numerical Rate of Convergence of
Bisection, Newton and Secant Methods. Journal of Chemical, Biological and Physical Sciences.
Vol 2(1) pp 472-479.
[9]. http://www.efunda.com/math/num_rootfinding-cfm February, 2014
[10]. Wikipedia, the free Encyclopedia
[11]. Autar, K.K and Egwu, E (2008), http://www.numericalmethods.eng.usf.edu Retrieved on 20th
February, 2014.
[12]. McDonough, J.M (2001), Lectures in Computational Numerical Analysis. Dept. of Mechanical
Engineering, University of Kentucky.
[13]. Allen, M.B and Isaacson E.L (1998), Numerical Analysis for Applied Science. John Wiley and
sons.Pp188-195
[14]. T.Yamamoto, Historical development in convergence analysis for Newton's and Newton-like
methods, J. Comput. Appl.Math,124,(2000),1-23.
[15]. Won Young Yang, Wenwu Cao, Tae-Sang Chung, John Morris, Applied Numerical Methods Using MATLAB, John Wiley & Sons, Inc., 2005.
[16]. Applied Numerical Methods with MATLAB, for Engineers and Scientist, Third Edition, Steven
C. Chapra.