Root Finding

1.1 A Case Study
The root-finding problem is one of the most important computational problems. It arises in
a wide variety of practical applications in physics, chemistry, biosciences, engineering,
etc. As a matter of fact, determination of any unknown appearing implicitly in scientific
or engineering formulas gives rise to a root-finding problem. We consider one such simple
application here.
One of the classical laws of planetary motion due to Kepler says that a planet revolves around
the sun in an elliptic orbit as shown in the figure below :
Suppose one needs to find the position (x, y) of the planet at time t. This can be determined by the following formulas:

x = a(cos E − e)
y = a √(1 − e^2) sin E

where a is the semi-major axis of the ellipse, e is its eccentricity, and E is the eccentric anomaly.

[Figure: the elliptic orbit of a planet around the Sun, showing the semi-major axis a, the eccentric anomaly E, and the perifocus, the point of the orbit closest to the Sun.]
To determine the position (x, y), one must know how to compute E, which can be computed from Kepler's equation of motion:

M = E − e sin(E),  0 < e < 1,

where M is the mean anomaly. This equation, thus, relates the eccentric anomaly E to the mean anomaly M. Thus, to find E we can solve the nonlinear equation:

f(E) = M − E + e sin(E) = 0.
The solution of such an equation is the subject of this chapter.
A solution of this equation with numerical values of M and e using several different methods
described in this Chapter will be considered later.
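As a preview of the iterative methods developed in this chapter, here is a minimal Python sketch of solving Kepler's equation by fixed-point iteration; the values M = 1 and e = 0.2, the function name, and the tolerances are illustrative choices, not taken from the text:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=200):
    """Solve Kepler's equation M = E - e*sin(E) for E by the fixed-point
    iteration E <- M + e*sin(E).  For 0 < e < 1 the iteration function
    has derivative e*cos(E), bounded in magnitude by e < 1, so it converges."""
    E = M  # initial guess
    for _ in range(max_iter):
        E_new = M + e * math.sin(E)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E

# Illustrative data: M = 1 radian, e = 0.2
E = solve_kepler(1.0, 0.2)
print(E)                        # the eccentric anomaly
print(E - 0.2 * math.sin(E))    # substituting back reproduces M = 1.0
```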
1.2
Introduction
As the title suggests, the Root-Finding Problem is the problem of finding a root of the
equation f (x) = 0, where f (x) is a function of a single variable x. Specifically, the problem is
stated as follows:
The Root-Finding Problem
Given a function f(x), find a number x = α such that f(α) = 0.
Definition 1.1. The number x = α such that f(α) = 0 is called a root of the equation f(x) = 0, or a zero of the function f(x).
As seen from our case study above, the root-finding problem is a classical problem and dates back
to the 18th century. The function f (x) can be algebraic or trigonometric or a combination
of both.
1.2.1 Iterative Methods
Except for some very special functions, it is not possible to find an analytical expression for
the root, from where the solution can be exactly determined. This is true for even commonly
arising polynomial functions. A polynomial Pn(x) of degree n has the form:

Pn(x) = a0 + a1 x + a2 x^2 + · · · + an x^n  (an ≠ 0).
The Fundamental Theorem of Algebra states that a polynomial Pn(x) of degree n (n ≥ 1) has at least one zero.
A student learns very early in school how to solve a quadratic equation P2(x) = ax^2 + bx + c = 0 using the analytical formula:

x = (−b ± √(b^2 − 4ac)) / (2a).

(See later for a numerically effective version of this formula.)
Unfortunately, such analytical formulas do not exist for polynomials of degree 5 or greater. This
fact was proved by Abel and Galois in the 19th century.
Thus, most computational methods for the root-finding problem have to be iterative in nature.
The idea behind an iterative method is the following:
Starting with an initial approximation x0 , construct a sequence of iterates {xk } using an iteration formula with a hope that this sequence converges to a root of f (x) = 0.
Two important aspects of an iterative method are: convergence and stopping criterion.
We will discuss the convergence issue of each method whenever we discuss such a method in
this book.
Determining a universally accepted stopping criterion is complicated for many reasons. Here is a rough guideline, which can be used to terminate an iteration in a computer program: stop when the relative change in successive iterates ck is small enough, that is, when

|ck−1 − ck| / |ck| ≤ ε

for a prescribed tolerance ε.
1.3 The Bisection Method
As the title suggests, the method is based on repeated bisections of an interval containing the
root. The basic idea is very simple.
Basic Idea:
Suppose f (x) = 0 is known to have a real root x = in an interval [a, b].
Then bisect the interval [a, b], and let c = (a + b)/2 be the middle point of [a, b]. If c is the root, then we are done. Otherwise, one of the intervals [a, c] or [c, b] will contain the root.
Find the one that contains the root and bisect that interval again.
Continue the process of bisections until the root is trapped in an interval as small as
warranted by the desired accuracy.
To implement the above idea, we must know in each iteration which of the two intervals contains the root of f(x) = 0.
The Intermediate Value Theorem of calculus can help us identify the interval in each
iteration. For a proof of this theorem, see any calculus book (e.g. Stewart []).
Intermediate Value Theorem (IVT)
Suppose
(i) f (x) is continuous on a closed interval [a, b],
and
(ii) M is any number between f (a) and f (b).
Then
there is at least one number c in [a, b] such that f (c) = M .
Consequently, if
(i) f (x) is continuous on [a, b],
and
(ii) f (a) and f (b) are of opposite signs.
Then there is a root x = c of f(x) = 0 in [a, b].
The Bisection Algorithm: for k = 0, 1, 2, . . . do:

Compute ck = (ak + bk)/2.

Test, using one of the criteria stated in the next section, if ck is the desired root. If so, stop.

If ck is not the desired root, test if f(ck)f(ak) < 0. If so, set bk+1 = ck and ak+1 = ak. Otherwise, set ak+1 = ck, bk+1 = bk.

End.
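The steps above can be sketched in Python as follows (the function name and tolerances are illustrative):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection method: repeatedly halve [a, b], keeping the half on
    which f changes sign.  Requires f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = a + (b - a) / 2              # midpoint, written to avoid overflow
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(c) * f(a) < 0:              # root lies in [a, c]
            b = c
        else:                            # root lies in [c, b]
            a = c
    return a + (b - a) / 2

# Example 1.3: f(x) = x^3 - 6x^2 + 11x - 6 on [2.5, 4]; the root is x = 3
root = bisection(lambda x: x**3 - 6*x**2 + 11*x - 6, 2.5, 4)
print(root)   # approximately 3.0
```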
Example 1.3
Find the positive root of f(x) = x^3 − 6x^2 + 11x − 6 = 0 using the bisection method.
Solution:
Finding the interval [a, b] bracketing the root:
Since the bisection method finds a root in a given interval [a, b], we must try to find that interval
first. This can be done using IVT.
Choose a0 = 2.5 and b0 = 4.
We now show that both hypotheses of IVT are satisfied for f (x) in [2.5, 4].
(i) f(x) = x^3 − 6x^2 + 11x − 6 is continuous on [2.5, 4].

(ii) f(2.5)f(4) < 0. (Note that f(2.5) = −0.375 and f(4) = 6.)

Thus, by the IVT, there is a root of f(x) = 0 in [2.5, 4].

Input Data:

(i) f(x) = x^3 − 6x^2 + 11x − 6

(ii) a0 = 2.5, b0 = 4
Solution.
Iteration 1. (k = 0): c0 = (a0 + b0)/2 = (2.5 + 4)/2 = 6.5/2 = 3.25.

Iteration 2. (k = 1): c1 = (a1 + b1)/2 = (2.5 + 3.25)/2 = 2.8750.

Iteration 3. (k = 2): c2 = (a2 + b2)/2 = (2.875 + 3.250)/2 = 3.0625.

Iteration 4. (k = 3): c3 = (a3 + b3)/2 = (2.875 + 3.0625)/2 = 2.9688.
1. From the statement of the bisection algorithm, it is clear that the algorithm always converges.
2. The example above shows that the convergence, however, can be very slow.
3. Computing ck: It might happen that at a certain iteration k, computation of ck = (ak + bk)/2 will give overflow. It is better to compute ck as:

ck = ak + (bk − ak)/2.
Stopping Criteria
Since this is an iterative method, we must determine some stopping criteria that will allow the
iteration to stop. Here are some commonly used stopping criteria.
Let ε be the tolerance; that is, we would like to obtain the root with an error of at most ε. Then the iteration can be terminated when, for example: 1. |f(ck)| ≤ ε, or 2. |ck−1 − ck| / |ck| ≤ ε.
Note: Criterion 1 can be misleading since it is possible to have |f (ck )| very small, even if ck is
not close to the root. (Do an example to convince yourself that it is true).
After N iterations of the bisection method, the length of the interval containing the root is

(b0 − a0) / 2^N.

Thus, to obtain an accuracy of ε, it is sufficient that

(b0 − a0) / 2^N ≤ ε,

or

2^N ≥ (b0 − a0) / ε,

or

N ln 2 ≥ ln(b0 − a0) − ln ε  (taking the natural log on both sides),

or

N ≥ (ln(b0 − a0) − ln ε) / ln 2.

Theorem 1.4. The number of iterations N needed in the bisection method to obtain an accuracy of ε is given by

N ≥ (ln(b0 − a0) − ln ε) / ln 2.   (1.1)
Remark: Note that the number N depends only on the length of the initial interval [a0, b0] bracketing the root (and on the tolerance ε).
Example 1.5
Find the minimum number of iterations needed by the bisection algorithm to approximate the
root x = 3 of x^3 − 6x^2 + 11x − 6 = 0 with error tolerance ε = 10^−3.

Input Data: a0 = 2.5, b0 = 4, ε = 10^−3.

Solution. By formula (1.1), N ≥ (ln(4 − 2.5) − ln 10^−3) / ln 2 ≈ 10.55.
Thus, a minimum of 11 iterations will be needed to obtain the desired accuracy using the
bisection method.
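Formula (1.1) can be checked with a short computation:

```python
import math

def bisection_iterations(a0, b0, eps):
    """Minimum number of bisection steps so that the bracketing interval
    has length at most eps (formula (1.1))."""
    return math.ceil((math.log(b0 - a0) - math.log(eps)) / math.log(2))

print(bisection_iterations(2.5, 4, 1e-3))   # 11
```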
Remarks: (i) Since the number of iterations N needed to achieve a certain accuracy depends
upon the initial length of the interval containing the root, it is desirable to choose the initial
interval [a0 , b0 ] as small as possible.
1.3.1 A Stopping Criterion for the Bisection Method
Besides the stopping criteria mentioned in the Introduction, a suitable stopping criterion, specific for the bisection method, will be:
Stop the bisection iteration if for any k:
(b − a) / 2^k ≤ ε.

(The length of the interval after k iterations is less than or equal to the tolerance ε.)
1.4
Fixed-Point Iteration
Suppose that the equation f (x) = 0 is written in the form x = g(x); that is,
f (x) = x g(x) = 0.
Then any fixed point α of g(x) is a root of f(x) = 0, because

f(α) = α − g(α) = α − α = 0.

Thus, a root of f(x) = 0 can be found by finding a fixed point of x = g(x), which corresponds to f(x) = 0.
Finding a root of f (x) = 0 by finding a fixed point of x = g(x) immediately suggests an iterative
procedure of the following type.
Start with an initial guess x0 of the root and form a sequence {xk } defined by
xk+1 = g(xk ),
k = 0, 1, 2, . . .
The simplest way to write f (x) = 0 in the form x = g(x) is to add x on both sides, that is,
x = f (x) + x = g(x).
But this very often does not work. To convince yourself, consider Example 1.3 again. Here,
f(x) = x^3 − 6x^2 + 11x − 6 = 0.

Define

g(x) = x + f(x) = x^3 − 6x^2 + 12x − 6.
We know that there is a root of f (x) in [2.5, 4]; namely x = 3.
[Figures: graphical illustration of fixed-point iteration, showing how the successive iterates x0, x1, x2, . . . behave near the fixed point.]
Theorem 1.7 (Fixed-Point Iteration Theorem). Let g(x) be continuous on [a, b], and suppose that

(i) g(x) lies in [a, b] for every x in [a, b];

(ii) g′(x) exists on (a, b), with the property that there exists a positive constant r, 0 < r < 1, such that

|g′(x)| ≤ r, for all x in (a, b).
Then
(i) there is a unique fixed point x = α of g(x) in [a, b];

(ii) for any x0 in [a, b], the sequence {xk} defined by

xk+1 = g(xk), k = 0, 1, . . .

converges to the fixed point x = α; that is, to the root of f(x) = 0.
The proof of the fixed-point theorem will require two important theorems from calculus: the Intermediate Value Theorem (IVT) and the Mean Value Theorem (MVT). The IVT has been stated earlier. We will now state the MVT.
The Mean Value Theorem (MVT)
Let f (x) be a function such that
(i) it is continuous on [a, b],

(ii) it is differentiable on (a, b).

Then there exists a number c in (a, b) such that

f′(c) = (f(b) − f(a)) / (b − a).
Proof of Existence
First, consider the case where either g(a) = a or g(b) = b. Then a (or b) is itself a fixed point of g(x), and we are done. Otherwise, by hypothesis (i), g(a) > a and g(b) < b. Define h(x) = g(x) − x. Then h(x) is continuous on [a, b], and h(a) = g(a) − a > 0, h(b) = g(b) − b < 0.
Since both hypotheses of the IVT are satisfied, by this theorem, there exists a number c in [a, b]
such that
h(c) = 0; this means g(c) = c.
That is, x = c is a fixed point of g(x). This proves the Existence part of the Theorem.
Proof of Convergence:
Let α be the root of f(x) = 0, that is, the fixed point of g(x). Then the absolute error at step k + 1 is given by

ek+1 = |α − xk+1|.   (1.2)

To prove that the sequence converges, we need to show that lim k→∞ ek+1 = 0.
To show this, we apply the Mean Value Theorem to g(x) in [xk, α]. Since g(x) satisfies both the hypotheses of the MVT in [xk, α], we get

(g(α) − g(xk)) / (α − xk) = g′(ck)   (1.3)

for some ck between xk and α. Since g(α) = α and g(xk) = xk+1, this gives

|α − xk+1| / |α − xk| = |g′(ck)|,   (1.4)

that is,

ek+1 / ek = |g′(ck)| ≤ r.   (1.5)

Applying (1.5) repeatedly, we obtain ek+1 ≤ r ek ≤ r^2 ek−1 ≤ · · · ≤ r^(k+1) e0. Since 0 < r < 1,

lim k→∞ r^(k+1) e0 = 0.

This proves that the sequence {xk} converges to x = α and that x = α is the only fixed point of g(x).
In practice, it is not easy to verify Assumption 1. The following corollary of the fixed-point
theorem is more useful in practice because the conditions here are more easily verifiable. We
leave the proof of the corollary as an [Exercise].
Corollary 1.8. Let the iteration function g(x) be such that

(i) it is continuously differentiable in some open interval containing the fixed point x = α,

(ii) |g′(α)| < 1.

Then there exists a number δ > 0 such that the iteration xk+1 = g(xk) converges whenever x0 is chosen with |x0 − α| ≤ δ.
Example 1.9
Find a positive zero of f(x) = x^3 − 6x^2 + 11x − 6 in [2.5, 4] using fixed-point iteration.

Input Data: f(x) = x^3 − 6x^2 + 11x − 6; interval [2.5, 4].
Solution.
Step 1. Writing f (x) = 0 in the form x = g(x) so that both hypotheses of Theorem 1.7 are
satisfied.
We have seen in the last section that the obvious form g(x) = x + f(x) = x^3 − 6x^2 + 12x − 6 does not work.
Let's now write f(x) = 0 in the form:

x = (1/11)(−x^3 + 6x^2 + 6) = g(x).

Then (i) for all x in [2.5, 4], g(x) lies in [2.5, 4] (note that g(2.5) = 2.5341 and g(4) = 3.4545);

(ii) g′(x) = (1/11)(−3x^2 + 12x) steadily decreases on [2.5, 4] and remains less than 1 for all x in (2.5, 4). (Note that g′(2.5) = 1.0227 and g′(4) = 0.)
Thus, both hypotheses of Theorem 1.6 are satisfied. So, according to this theorem, for any x0
in [2.5, 4], the sequence {xk } should converge to the fixed point.
We now do a few iterations to verify this assertion. Starting with x0 = 3.5 and using the iteration

xk+1 = g(xk) = (1/11)(−xk^3 + 6xk^2 + 6),  k = 0, 1, 2, . . . , 11,

we obtain:
x0 = 3.5
x1 = g(x0 ) = 3.3295
x2 = g(x1 ) = 3.2367
x3 = g(x2 ) = 3.1772
x4 = g(x3 ) = 3.1359
x5 = g(x4 ) = 3.1059
x6 = g(x5 ) = 3.0835
x7 = g(x6 ) = 3.0664
x8 = g(x7 ) = 3.0531
x9 = g(x8 ) = 3.0427
x10 = g(x9 ) = 3.0344
x11 = g(x10 ) = 3.0278
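The iterates above can be reproduced with a short script implementing xk+1 = g(xk):

```python
def fixed_point(g, x0, n):
    """Run n steps of the fixed-point iteration x_{k+1} = g(x_k)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# Example 1.9: g(x) = (1/11)(-x^3 + 6x^2 + 6), starting at x0 = 3.5
g = lambda x: (-x**3 + 6*x**2 + 6) / 11
x11 = fixed_point(g, 3.5, 11)
print(round(x11, 4))   # 3.0278, slowly approaching the root x = 3
```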
Example 1.10

Find a root of f(x) = x − cos x = 0 in [0, π/2] using fixed-point iteration.

Solution.

Write f(x) = 0 in the form x = g(x) so that xk+1 = g(xk) converges for any choice of x0 in [0, π/2]:

From f(x) = x − cos x = 0, we obtain x = cos x. Thus, if we take g(x) = cos x, then

(i) for all x in [0, π/2], g(x) lies in [0, π/2] (in particular, note that g(0) = cos(0) = 1 and g(π/2) = cos(π/2) = 0);

(ii) g′(x) = −sin x. Thus, |g′(x)| < 1 in (0, π/2).

Thus, both properties of g(x) of Theorem 1.7 are satisfied. According to the Fixed-Point Iteration Theorem (Theorem 1.7), the iterations must converge for any choice of x0 in [0, π/2]. This is verified by the following computations.
x0 = 0
x1 = cos x0 = 1
x2 = cos x1 = 0.5403
x3 = cos x2 = 0.8576
. . .
x17 = 0.73956
x18 = cos x17 = 0.73955
1.5 The Newton Method

The Newton method, described below, shows that there is a special choice of g(x) that will allow us to readily use the results of Corollary 1.8.
Assume that f″(x) exists and is continuous on [a, b], and that α is a simple root of f(x) = 0; that is, f(α) = 0 and f′(α) ≠ 0.

Choose

g(x) = x − f(x) / f′(x).

Then

g′(x) = 1 − ((f′(x))^2 − f(x)f″(x)) / (f′(x))^2 = f(x)f″(x) / (f′(x))^2.

Thus, g′(α) = f(α)f″(α) / (f′(α))^2 = 0, since f(α) = 0 and f′(α) ≠ 0.
Since g (x) is continuous, this means that there exists a small neighborhood around the root
x = such that for all points x in that neighborhood, |g (x)| < 1.
Thus, if g(x) is chosen as above and the starting approximation x0 is chosen sufficiently close
to the root x = , then the fixed-point iteration is guaranteed to converge.
This leads us to the following well-known classical method, known as the Newton method.

Note: In many textbooks and in the literature, Newton's Method is referred to as the Newton-Raphson Method. See the article about this in SIAM Review (1996).
Algorithm 1.11 (The Newton Method).

Inputs: f(x) - The given function
x0 - The initial approximation
ε - The error tolerance
N - The maximum number of iterations

For k = 0, 1, 2, . . ., compute

xk+1 = xk − f(xk) / f′(xk).

If |f(xk+1)| < ε, or |xk+1 − xk| / |xk| < ε (stopping criteria), or k > N, stop.

End
Example 1.13
Find a positive real root of cos x − x^3 = 0.

Obtaining an initial approximation: Since cos x ≤ 1 for all x, and x^3 > 1 for x > 1, the positive zero must lie between 0 and 1. So, let's take x0 = 0.5.
Input Data: f(x) = cos x − x^3, x0 = 0.5.

Formula to be Used: xk+1 = xk − f(xk) / f′(xk).

Solution.

Compute f′(x) = −sin x − 3x^2.

Iteration 1. k = 0: x1 = x0 − f(x0)/f′(x0) = 0.5 − (cos(0.5) − (0.5)^3) / (−sin(0.5) − 3(0.5)^2) = 1.1121.

Iteration 2. k = 1: x2 = x1 − f(x1)/f′(x1) = 0.9097.

Iteration 3. k = 2: x3 = x2 − f(x2)/f′(x2) = 0.8672.

Iteration 4. k = 3: x4 = x3 − f(x3)/f′(x3) = 0.8654.

Iteration 5. k = 4: x5 = x4 − f(x4)/f′(x4) = 0.8654.
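The iterations of Example 1.13 can be sketched in Python as follows (the tolerance and iteration cap are illustrative):

```python
import math

def newton(f, fprime, x0, tol=1e-6, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 1.13: f(x) = cos(x) - x^3 with x0 = 0.5
f = lambda x: math.cos(x) - x**3
fp = lambda x: -math.sin(x) - 3*x**2
root = newton(f, fp, 0.5)
print(round(root, 4))   # 0.8654
```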
Notes:
1. If none of the above criteria has been satisfied within a predetermined number of iterations, say N, then the method has failed after the prescribed number of iterations. In that case, one could try the method again with a different x0.
2. A judicious choice of x0 can sometimes be obtained by drawing the graph of f(x), if possible. However, there does not seem to exist a clear-cut guideline for how to choose a right starting point x0 that guarantees the convergence of the Newton method to a desired root.
3. Starting point issues.
(a) If x0 is far from the solution, Newtons method may diverge [Exercise].
(b) For some functions, some starting points may enter an infinite cycle, in the sense that the sequence of iterates will oscillate without converging to a root [Exercise].
1.5.1 Computing Square Roots and Reciprocals by the Newton Method

The Newton method can be used to compute the square root √A of a number A, where A > 0. Apply the iteration to f(x) = x^2 − A, whose positive root is x = √A. Since f′(x) = 2x, the iteration becomes

xk+1 = xk − (xk^2 − A) / (2xk) = (xk^2 + A) / (2xk).
Example 1.14

Approximate the positive square root of 2 using Newton's method with x0 = 1.5. (Do three iterations.)

Input Data: A = 2, x0 = 1.5

Formula to be Used: xk+1 = (xk^2 + A) / (2xk), k = 0, 1, and 2.

Solution.

Iteration 1. x1 = (x0^2 + A) / (2x0) = ((1.5)^2 + 2) / 3 = 1.4167.

Iteration 2. x2 = (x1^2 + A) / (2x1) = 1.4142.

Iteration 3. x3 = (x2^2 + A) / (2x2) = 1.4142.
More generally, applying the Newton iteration to f(x) = x^n − A gives a method for computing the nth root of A:

xk+1 = xk − (xk^n − A) / (n xk^(n−1)) = ((n − 1) xk^n + A) / (n xk^(n−1)).
The Newton method can also be used to compute the reciprocal 1/A of a nonzero number A without performing any division. Apply the iteration to

f(x) = 1/x − A,

whose root is x = 1/A. Since f′(x) = −1/x^2, we obtain

xk+1 = xk − f(xk)/f′(xk) = xk + xk^2 (1/xk − A) = xk (2 − A xk).

Note that this iteration for 1/A requires only multiplications and subtractions.
Example 1.15

Approximate 1/3 using the Newton iteration for the reciprocal. (Do three iterations.)

Input Data: A = 3, x0 = 0.25

Formula to be Used: xk+1 = xk (2 − A xk), k = 0, 1, 2

Solution.

Iteration 1. x1 = x0 (2 − A x0) = 0.25(2 − 3 × 0.25) = 0.3125

Iteration 2. x2 = x1 (2 − A x1) = 0.3125(2 − 3 × 0.3125) = 0.3320

Iteration 3. x3 = x2 (2 − A x2) = 0.3320(2 − 3 × 0.3320) = 0.3333
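The three iterations above can be checked with a short, division-free script (a sketch; the function name is illustrative):

```python
def reciprocal(A, x0, n):
    """Division-free Newton iteration x_{k+1} = x_k (2 - A x_k) for 1/A."""
    x = x0
    for _ in range(n):
        x = x * (2 - A * x)
    return x

print(round(reciprocal(3, 0.25, 3), 4))   # 0.3333
```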
Note that 0.3333 is the 4-digit approximation of 1/3.

1.5.2 Geometric Interpretation of the Newton Method
The first approximation x1 can be viewed as the x-intercept of the tangent line to the
graph of f (x) at (x0 , f (x0 )).
The second approximation x2 is the x-intercept of the tangent line to the graph of f (x)
at (x1 , f (x1 )) and so on.
[Figure: the tangent line to f(x) at (x0, f(x0)) has x-intercept x1, and the tangent line at (x1, f(x1)) has x-intercept x2.]

1.5.3 The Secant Method
A major disadvantage of the Newton Method is the requirement of finding the value of the derivative of f (x) at each approximation. There are functions for which this job is either extremely
difficult (if not impossible) or time consuming. A way out is to approximate the derivative by
knowing the values of the function at the current and the previous approximations. Knowing f(xk) and f(xk−1), we can approximate f′(xk) as:

f′(xk) ≈ (f(xk) − f(xk−1)) / (xk − xk−1).

Substituting this approximation into the Newton iteration gives the secant iteration:

xk+1 = xk − f(xk)(xk − xk−1) / (f(xk) − f(xk−1)).
Note: The secant method needs two approximations x0 and x1 to start with, whereas the
Newton method just needs one, namely, x0 .
Example 1.17

Approximate the positive square root of 2 (f(x) = x^2 − 2), choosing x0 = 1.5 and x1 = 1. (Do four iterations.)

Solution.

Iteration 1. (k = 1). Compute x2 from x0 and x1:

x2 = x1 − f(x1)(x1 − x0) / (f(x1) − f(x0)) = 1 − (−1)(−0.5) / (−1 − 0.25) = 1 + 0.5/1.25 = 1.4.
Iteration 2. (k = 2). Compute x3 from x1 and x2:

x3 = x2 − f(x2)(x2 − x1) / (f(x2) − f(x1)) = 1.4 + 0.0167 = 1.4167.

Iteration 3. (k = 3). Compute x4 from x2 and x3:

x4 = x3 − f(x3)(x3 − x2) / (f(x3) − f(x2)) = 1.4142.

Iteration 4. (k = 4). Compute x5 from x3 and x4:

x5 = x4 − f(x4)(x4 − x3) / (f(x4) − f(x3)) = 1.4142.
Note: By comparing the results of this example with those obtained by the Newton method in Example 1.14, we see that it took 4 iterations of the secant method to obtain 4-digit accuracy of √2 = 1.4142, whereas the Newton method attained this accuracy after just 2 iterations.
In general, the Newton method converges faster than the secant method. The exact rate of
convergence of these two and the other methods will be discussed in the next section.
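A minimal sketch of the secant method, applied to Example 1.17 (the function name and tolerances are illustrative):

```python
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant method: replace f'(x_k) in Newton's iteration by a
    difference quotient through the last two iterates."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example 1.17: f(x) = x^2 - 2 with x0 = 1.5, x1 = 1
root = secant(lambda x: x**2 - 2, 1.5, 1)
print(round(root, 4))   # 1.4142
```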
1.5.4 Geometric Interpretation of the Secant Method

x2 is the x-intercept of the secant line passing through (x0, f(x0)) and (x1, f(x1)).

x3 is the x-intercept of the secant line passing through (x1, f(x1)) and (x2, f(x2)).

And so on.

Note: That is why the method is called the secant method.
[Figure: the secant line through (x0, f(x0)) and (x1, f(x1)) crosses the x-axis at x2; the secant line through (x1, f(x1)) and (x2, f(x2)) crosses it at x3, near the root x = α.]

1.6 The Method of False Position

In the Bisection method, every interval under consideration is guaranteed to contain the desired root x = α. That is why the Bisection method is often called a bracket method: every interval brackets the root. However, the Newton method and the secant method are not bracket methods in that sense, because there is no guarantee that two successive approximations will bracket the root. Here is an example.
Example 1.18 (The Secant Method does not bracket the root.)

Compute a zero of f(x) = tan(πx) − 6 = 0, choosing x0 = 0 and x1 = 0.48, using the secant method. (Note that α = 0.44743154 is a root of f(x) = 0 lying in the interval [0, 0.48].)

Iteration 1. (k = 1). Compute x2:

x2 = x1 − f(x1)(x1 − x0) / (f(x1) − f(x0)) = 0.181194.

Clearly, the iterations are moving away from the root, despite the fact that the initial interval [0, 0.48] brackets it.
The Method of False Position generates the new approximation by the same formula as the secant method, but keeps the root bracketed: starting with two approximations xi−1 and xi such that f(xi−1)f(xi) < 0, compute

xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1)),

and then retain, together with xi+1, whichever of the two previous approximations preserves the sign change.

Stopping criterion: if |(xi+1 − xi) / xi| < ε or i ≥ N, stop.
Example 1.20

Consider again the previous example (Example 1.18). Find the positive root of tan(πx) − 6 = 0 in [0, 0.48], using the Method of False Position.

Input Data: f(x) = tan(πx) − 6; initial approximations x0 = 0, x1 = 0.48.

Formula to be Used: xi+1 = xi − f(xi)(xi − xi−1) / (f(xi) − f(xi−1))

Solution.

Iteration 1. (Initial approximations for Iteration 1: x0 = 0, x1 = 0.48.)

Step 1. x2 = x1 − f(x1)(x1 − x0) / (f(x1) − f(x0)) = 0.181192.

Step 2. Since f(x2) < 0 and f(0.48) > 0, the root remains bracketed by [0.181192, 0.48]; relabel x0 = 0.181192 and keep x1 = 0.48 for the next iteration.
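A sketch of the method of false position applied to this example; the bracket-updating rule follows the description above, and the tolerances are illustrative:

```python
import math

def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Method of false position: secant-type update, but the new point
    always replaces the endpoint with the same sign of f, so the
    interval [a, b] keeps bracketing the root."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_old = c
        c = b - fb * (b - a) / (fb - fa)   # secant formula on the bracket
        if abs(c - c_old) < tol:
            return c
        fc = f(c)
        if fa * fc < 0:                    # root in [a, c]
            b, fb = c, fc
        else:                              # root in [c, b]
            a, fa = c, fc
    return c

# Example 1.20: f(x) = tan(pi*x) - 6 on [0, 0.48]; root near 0.44743
root = false_position(lambda x: math.tan(math.pi * x) - 6, 0, 0.48)
print(round(root, 5))   # 0.44743
```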
1.7 Rate of Convergence

We have seen that the Bisection method always converges, and that each of the other methods (the Newton method, the secant method, and the Method of False Position) converges under certain conditions. The question now arises:

When a method converges, how fast does it converge?

In other words, what is the rate of convergence? To this end, we first define:
Definition 1.21 (Rate of Convergence of an Iterative Method). Suppose that the sequence {xk} converges to α. Then the sequence {xk} is said to converge to α with order of convergence μ if there exists a positive constant p such that

lim k→∞ ek+1 / ek^μ = lim k→∞ |xk+1 − α| / |xk − α|^μ = p.
For the fixed-point iteration, we have seen in (1.5) that

ek+1 / ek = |g′(ck)|,

so

lim k→∞ ek+1 / ek = lim k→∞ |g′(ck)| = |g′(α)|.

Thus μ = 1: the fixed-point iteration scheme has a linear rate of convergence, when it converges.
Recall Taylor's theorem: if f has n + 1 continuous derivatives near p, then

f(x) = f(p) + f′(p)(x − p) + (f″(p)/2!)(x − p)^2 + · · · + (f^(n)(p)/n!)(x − p)^n + Rn(x),

where

Rn(x) = (f^(n+1)(c) / (n + 1)!)(x − p)^(n+1)

for some c between x and p.
Let's choose a small interval around the root x = α. Then, for any xk in this interval, we have, by Taylor's theorem of order 1, the following expansion of the iteration function g(x):

g(xk) = g(α) + (xk − α) g′(α) + ((xk − α)^2 / 2!) g″(ξk),

where ξk lies between xk and α. For the Newton iteration function we have shown that g′(α) = 0, so

g(xk) = g(α) + ((xk − α)^2 / 2!) g″(ξk).

Since g(xk) = xk+1 and g(α) = α, from the last equation we have

xk+1 − α = ((xk − α)^2 / 2!) g″(ξk),

or

ek+1 = (ek^2 / 2!) |g″(ξk)|,

that is,

ek+1 / ek^2 = (1/2) |g″(ξk)|.

Thus,

lim k→∞ ek+1 / ek^2 = |g″(α)| / 2,

so the Newton method converges quadratically (μ = 2) at a simple root.
1.8 The Newton Method for Multiple Roots

Recall that the most important underlying assumption in the proof of quadratic convergence of the Newton method was that f′(x) is not zero at the approximations x = xk, and in particular f′(α) ≠ 0. This means that α is a simple root of f(x) = 0.
The question, therefore, is: How can the Newton method be modified for a multiple root so that
the quadratic convergence can be retained?
Note that if f(x) has a root α of multiplicity m, then

f(α) = f′(α) = f″(α) = · · · = f^(m−1)(α) = 0,

where f^(m−1)(α) denotes the (m − 1)th derivative of f(x) at x = α. In this case f(x) can be written as

f(x) = (x − α)^m h(x),

where h(α) ≠ 0.
In case f(x) has a root of multiplicity m, it was suggested by Ralston and Rabinowitz (1978) that the following modified iteration will ensure quadratic convergence of the Newton method, under certain conditions [Exercise].

The Newton Iteration for Multiple Roots of Multiplicity m: Method I

xi+1 = xi − m f(xi) / f′(xi)
Since, in practice, the multiplicity m is not known a priori, another useful modification is to apply the Newton iteration to the function

u(x) = f(x) / f′(x)

in place of f(x). Thus, in this case, the modified Newton iteration becomes xi+1 = xi − u(xi)/u′(xi). Since

u′(x) = ((f′(x))^2 − f(x)f″(x)) / (f′(x))^2,

we have (Method II):

xi+1 = xi − f(xi)f′(xi) / ((f′(xi))^2 − f(xi)f″(xi)).
Example 1.22
Approximate the double root x = 0 of e^x − x − 1 = 0, choosing x0 = 0.5. (Do two iterations.)

Figure 1.6: The Graph of f(x) = e^x − x − 1.

Method I

Input Data: (i) f(x) = e^x − x − 1, (ii) x0 = 0.5, m = 2.

Formula to be Used: xi+1 = xi − 2f(xi) / f′(xi).

Solution.

Iteration 1. (i = 0). Compute x1:

x1 = 0.5 − 2(e^0.5 − 0.5 − 1) / (e^0.5 − 1) = 0.0415 = 4.15 × 10^−2.

Iteration 2. (i = 1). Compute x2:

x2 = 0.0415 − 2(e^0.0415 − 0.0415 − 1) / (e^0.0415 − 1) = 0.00028703 = 2.8703 × 10^−4.
Method II

Input Data: (i) f(x) = e^x − x − 1, (ii) initial approximation x0 = 0.5.

Formula to be Used: xi+1 = xi − f(xi)f′(xi) / ((f′(xi))^2 − f(xi)f″(xi)).

Solution.

Iteration 1. x1 = x0 − f(x0)f′(x0) / ((f′(x0))^2 − f(x0)f″(x0)) = 0.5 − (e^0.5 − 0.5 − 1)(e^0.5 − 1) / ((e^0.5 − 1)^2 − (e^0.5 − 0.5 − 1)e^0.5) = 0.0493.

Iteration 2. x2 = x1 − f(x1)f′(x1) / ((f′(x1))^2 − f(x1)f″(x1)) = x1 − (e^0.0493 − 0.0493 − 1)(e^0.0493 − 1) / ((e^0.0493 − 1)^2 − (e^0.0493 − 0.0493 − 1)e^0.0493) = 4.1180 × 10^−4.
1.9 Horner's Method for Polynomials

A major requirement of all the methods that we have discussed so far is the evaluation of f(x) at successive approximations xk. Furthermore, the Newton method requires evaluation of the derivative of f(x) at each iteration. If f(x) happens to be a polynomial Pn(x) (which is the case in many practical applications), then there is an extremely simple classical technique, known as Horner's Method, to compute Pn(x) at x = z. The value Pn(z) can be obtained recursively in n steps from the coefficients of the polynomial Pn(x).
(Note that the Newton method requires evaluation of Pn(x) and its derivative at successive iterations.)
1.9.1 Horner's Method

Write

Pn(x) = (x − z)Qn−1(x) + b0,

where Qn−1(x) = bn x^(n−1) + bn−1 x^(n−2) + · · · + b2 x + b1. Comparing the coefficients of like powers of x on both sides, we get

bn = an
bn−1 = an−1 + bn z
. . .

and so on. In general, bk − bk+1 z = ak, that is,

bk = ak + bk+1 z,  k = n − 1, n − 2, . . . , 1, 0.

Thus, knowing the coefficients an, an−1, . . . , a1, a0 of Pn(x), the coefficients bn, bn−1, . . . , b1 of Qn−1(x), and b0 = Pn(z), can be computed recursively starting with bn = an, as shown above.
Algorithm 1.23 (Horner's Scheme).

Inputs: (i) a0, a1, . . . , an - The coefficients of Pn(x). (ii) z - The number at which Pn(x) is to be evaluated.
Output: b0 = Pn (z).
Step 1. Set bn = an .
Step 2. For k = n − 1, n − 2, . . . , 1, 0 do:
Compute bk = ak + bk+1 z.
Step 3. Set Pn(z) = b0.
End
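Horner's scheme can be sketched in Python as follows; the block also accumulates the c-sequence for Pn′(z), whose derivation is given below:

```python
def horner(a, z):
    """Evaluate P(x) = a[0] + a[1] x + ... + a[n] x^n at x = z, and
    return (P(z), P'(z)): the b-sequence gives P(z) = b0 and the
    c-sequence gives P'(z) = c1."""
    n = len(a) - 1
    b = a[n]                 # b_n = a_n
    c = a[n]                 # c_n = b_n
    for k in range(n - 1, 0, -1):
        b = a[k] + b * z     # b_k = a_k + b_{k+1} z
        c = b + c * z        # c_k = b_k + c_{k+1} z
    b = a[0] + b * z         # b_0 = P(z)
    return b, c

# Example 1.24: P3(x) = x^3 - 7x^2 + 6x + 5 at z = 2
val, deriv = horner([5, 6, -7, 1], 2)
print(val, deriv)   # -3 -10
```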
It is interesting to note that, as a by-product of the above, we also obtain Pn′(z), as the following computations show.

Computing Pn′(z)

Pn(x) = (x − z)Qn−1(x) + b0.

Thus, Pn′(x) = Qn−1(x) + (x − z)Q′n−1(x). So,

Pn′(z) = Qn−1(z).

Write Qn−1(x) = (x − z)Rn−2(x) + c1. Substituting x = z, we get Qn−1(z) = c1. To compute c1 from the coefficients of Qn−1(x), we proceed in the same way as before. Let

Rn−2(x) = cn x^(n−2) + cn−1 x^(n−3) + · · · + c3 x + c2.

Then from the above we have

bn x^(n−1) + bn−1 x^(n−2) + · · · + b2 x + b1 = (x − z)(cn x^(n−2) + cn−1 x^(n−3) + · · · + c3 x + c2) + c1.

Comparing coefficients: cn = bn and, in general, ck = bk + z ck+1 for k = n − 1, . . . , 2, 1; in particular,

c1 = b1 + z c2.

Since the b's have already been computed (by Algorithm 1.23), we can obtain the c's from the b's. Then we have the following scheme to compute Pn′(z):

Computing Pn′(z)

Inputs: (i) b1, . . . , bn - The coefficients of the polynomial Qn−1(x) obtained from the previous algorithm. (ii) z - The number at which Pn′(x) has to be evaluated.
Example 1.24

Given P3(x) = x^3 − 7x^2 + 6x + 5, compute P3(2) and P3′(2) using Horner's scheme.

Input Data: a0 = 5, a1 = 6, a2 = −7, a3 = 1; z = 2; n = 3.

Formula to be used: bk = ak + bk+1 z; ck = bk + ck+1 z.

Solution.

Compute P3(2). Generate the sequence {bk} from {ak}:

b3 = a3 = 1
b2 = a2 + b3 z = −7 + 2 = −5
b1 = a1 + b2 z = 6 − 10 = −4
b0 = a0 + b1 z = 5 − 8 = −3.

So, P3(2) = b0 = −3.

Compute P3′(2). Generate the sequence {ck} from {bk}:

c3 = b3 = 1
c2 = b2 + c3 z = −5 + 2 = −3
c1 = b1 + c2 z = −4 − 6 = −10.

So, P3′(2) = −10.
1.9.2 The Newton Method for Polynomials Using Horner's Scheme

We now describe the Newton method for a polynomial using Horner's scheme. Recall that the Newton iteration for finding a root of f(x) = 0 is:

xk+1 = xk − f(xk) / f′(xk).

For f(x) = Pn(x), Horner's scheme with z = xk gives Pn(xk) = b0 and Pn′(xk) = c1, so the iteration becomes xk+1 = xk − b0/c1.
Example 1.25

Find a root of P3(x) = x^3 − 7x^2 + 6x + 5 = 0 using the Newton method with Horner's scheme, taking x0 = 2.

Solution.

Iteration 1. (k = 0). Compute x1 from x0:
x1 = x0 − b0/c1 = 2 − (−3)/(−10) = 2 − 0.3 = 1.7.
Note that b0 = P3(x0) and c1 = P3′(x0) have already been computed in Example 1.24.
Iteration 2. (k = 1). Compute x2 from x1:

Step 1. z = x1 = 1.7; b3 = 1, c3 = b3 = 1.

Step 2.

b2 = a2 + z b3 = −7 + 1.7 = −5.3
b1 = a1 + z b2 = 6 − 9.01 = −3.01
b0 = a0 + z b1 = 5 − 5.117 = −0.1170
c2 = b2 + z c3 = −5.3 + 1.7 = −3.6
c1 = b1 + z c2 = −3.01 − 6.12 = −9.130

Step 3. x2 = x1 − b0/c1 = 1.7 − (−0.1170)/(−9.130) = 1.7 − 0.0128 = 1.6872.
1.10 The Müller Method: Finding a Pair of Complex Roots

The Müller method is an extension of the secant method. It is conveniently used to approximate a complex root of a real equation using real starting approximations, or just to approximate a real root. Note that none of the methods discussed so far (the Newton, secant, and fixed-point iteration methods, etc.) can produce an approximation of a complex root starting from real approximations. The Müller method is capable of doing so.
Given two initial approximations x0 and x1, the successive approximations are computed as follows:

Approximating x2 from x0 and x1: The approximation x2 is computed as the x-intercept of the secant line passing through (x0, f(x0)) and (x1, f(x1)).

Approximating x3 from x0, x1, and x2: The approximation x3 is computed as the root, closest to x2, of the parabola P(x) passing through (x0, f(x0)), (x1, f(x1)), and (x2, f(x2)).

[Figure: a parabola through (x0, f(x0)), (x1, f(x1)), and (x2, f(x2)); its root nearest x2 is taken as x3.]

Based on the above idea, we now give the details of how to obtain x3 starting from x0, x1, and x2.
Let the equation of the parabola P be

P(x) = a(x − x2)^2 + b(x − x2) + c.

That is, P(x2) = c; a and b are computed as follows. Since P(x) passes through (x0, f(x0)), (x1, f(x1)), and (x2, f(x2)), we have

P(x0) = f(x0) = a(x0 − x2)^2 + b(x0 − x2) + c
P(x1) = f(x1) = a(x1 − x2)^2 + b(x1 − x2) + c
P(x2) = f(x2) = c.

Knowing c = f(x2), we can now obtain a and b by solving the first two equations:

a(x0 − x2)^2 + b(x0 − x2) = f(x0) − c
a(x1 − x2)^2 + b(x1 − x2) = f(x1) − c,

or, in matrix form,

[ (x0 − x2)^2  (x0 − x2) ] [ a ]   [ f(x0) − c ]
[ (x1 − x2)^2  (x1 − x2) ] [ b ] = [ f(x1) − c ].
The approximation x3 is now obtained as the root of P(x) = 0 closest to x2. Writing the quadratic formula for ax^2 + bx + c = 0 in the rationalized form

x = −2c / (b ± √(b^2 − 4ac))

and applying it to P(x) = 0 gives

x3 − x2 = −2c / (b ± √(b^2 − 4ac)),  or  x3 = x2 − 2c / (b ± √(b^2 − 4ac)).

The sign in the denominator is chosen so that the denominator is largest in magnitude, guaranteeing that x3 is the root closest to x2.
Note: One can also give explicit expressions for a and b. However, for computational efficiency and accuracy, it is easier to solve the 2 × 2 linear system above (displayed in Step 2.2 of Algorithm 1.27 below) for a and b, once c = f(x2) is computed.
Algorithm 1.27 (one iteration):

Step 1. Compute f(x0), f(x1), and f(x2).

Step 2.
2.1 Set c = f(x2).
2.2 Solve the 2 × 2 linear system for a and b.
2.3 Compute x3 = x2 − 2c / (b ± √(b^2 − 4ac)), choosing the sign that makes the denominator largest in magnitude.

Step 3. Test if |x3 − x2| < ε or if the number of iterations exceeds N. If so, stop. Otherwise go to Step 4.

Step 4. Relabel x3 as x2, x2 as x1, and x1 as x0 (x2 ← x3, x1 ← x2, x0 ← x1), and return to Step 1.
Example 1.28

(Approximating a Pair of Complex Roots by Müller's Method) Approximate a pair of complex roots of x^3 − 2x^2 − 5 = 0, starting with the initial approximations x0 = −1, x1 = 0, and x2 = 1. (Note that the roots of f(x) = 0 are 2.6906 and −0.3453 ± 1.3187i.)

Iteration 1.

Step 1. f(x0) = −8, f(x1) = −5, f(x2) = −6.

Step 2. Set c = f(x2) = −6; solving the 2 × 2 system gives a = −2, b = −3. Then

x3 = x2 − 2c / (b ± √(b^2 − 4ac)) = 1 + 12 / (−3 − √(9 − 48)) = 0.25 + 1.5612i.

Step 4. Relabel: x2 ← x3 = 0.25 + 1.5612i, x1 ← x2 = 1, x0 ← x1 = 0.
Iteration 2.

Step 1. Compute the new functional values: f(x0) = −5, f(x1) = −6, and f(x2) = −2.0625 − 5.0741i.

Step 2. Compute the next approximation x3:

2.1 Set c = f(x2).

2.2 Obtain a and b by solving the 2 × 2 linear system

[ (x0 − x2)^2  (x0 − x2) ] [ a ]   [ f(x0) − c ]
[ (x1 − x2)^2  (x1 − x2) ] [ b ] = [ f(x1) − c ].

2.3 x3 = x2 − 2c / (b ± √(b^2 − 4ac)) = x2 − (0.8388 + 0.3702i) = −0.5888 + 1.1910i.

Step 4. Relabel: x2 ← x3 = −0.5888 + 1.1910i, x1 ← x2 = 0.25 + 1.5612i, x0 ← x1 = 1.
Iteration 3.

Step 1. Compute the functional values: f(x0) = −6, f(x1) = −2.0627 − 5.073i, f(x2) = −0.5549 + 2.3542i.

Step 2. Compute the new approximation x3:

2.1 Set c = f(x2).

2.2 Obtain a and b by solving the 2 × 2 system

[ (x0 − x2)^2  (x0 − x2) ] [ a ]   [ f(x0) − c ]
[ (x1 − x2)^2  (x1 − x2) ] [ b ] = [ f(x1) − c ].

2.3 x3 = x2 − 2c / (b ± √(b^2 − 4ac)) = −0.3664 + 1.3508i.

Iteration 4. x3 = −0.3451 + 1.3180i (the detailed computations are similar and not shown).

Iteration 5. x3 = −0.3453 + 1.3187i.

Iteration 6. x3 = −0.3453 + 1.3187i.

Test for Convergence: Since |x3 − x2| / |x2| < 5.1393 × 10^−4, we stop here and accept the latest value of x3 as an approximation to the root.
How close is this root to the actual root? Multiplying together the factors corresponding to the computed roots gives (x − 2.6906)(x + 0.3453 − 1.3187i)(x + 0.3453 + 1.3187i) ≈ x^3 − 1.996x^2 − 5, close to the original polynomial x^3 − 2x^2 − 5.
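A sketch of the Müller iteration in Python; complex arithmetic (via cmath) allows real starting values to lead to a complex root. Depending on how ties in the denominator sign are broken, the iteration may converge to either member of the conjugate pair:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=50):
    """Muller's method: fit a parabola through the last three points
    and take its root closest to x2."""
    for _ in range(max_iter):
        c = f(x2)
        h0, h1 = x0 - x2, x1 - x2
        # Solve a*h^2 + b*h = f(x) - c for the two older points (Cramer's rule)
        det = h0 * h0 * h1 - h1 * h1 * h0
        a = ((f(x0) - c) * h1 - (f(x1) - c) * h0) / det
        b = ((f(x1) - c) * h0 * h0 - (f(x0) - c) * h1 * h1) / det
        disc = cmath.sqrt(b * b - 4 * a * c)
        # Pick the denominator of larger magnitude
        denom = b + disc if abs(b + disc) >= abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Example 1.28: x^3 - 2x^2 - 5 = 0 with real starting values -1, 0, 1
root = muller(lambda x: x**3 - 2*x**2 - 5, -1, 0, 1)
print(root)   # a complex root, approximately -0.3453 +/- 1.3187i
```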
Example 1.29

(Approximating a Real Root by Müller's Method) Approximate a real root of f(x) = x^3 − 7x^2 + 6x + 5 using x0 = 0, x1 = 1, and x2 = 2 as initial approximations. (All three roots are real: 5.8210, 1.6872, and −0.5090. We will try to approximate the root 1.6872.)

Input Data: f(x) = x^3 − 7x^2 + 6x + 5; initial approximations x0 = 0, x1 = 1, x2 = 2.
Solution.

Iteration 1.

Step 1. f(x0) = 5, f(x1) = 5, f(x2) = −3.

Step 2. Set c = f(x2) = −3. The right-hand side of the 2 × 2 system is (f(x0) − c, f(x1) − c) = (8, 8); solving gives a = −4, b = −12. Then

x3 = x2 − 2c / (b − √(b^2 − 4ac)) = 2 − (−6)/(−12 − √96) = 1.7247.

Step 4. Relabel: x2 ← x3 = 1.7247, x1 ← x2 = 2, x0 ← x1 = 1.
Iteration 2.

Step 1. f(x0) = 5, f(x1) = −3, f(x2) = f(1.7247) = −0.3435.

Step 2. Set c = f(x2); solving

[ 0.5253  −0.7247 ] [ a ]   [ 5 − c ]
[ 0.0758   0.2753 ] [ b ] = [ −3 − c ]

gives a = −2.2752, b = −9.0227. Then

2.3 x3 = x2 − 2c / (b − √(b^2 − 4ac)) = 1.7247 − 0.0385 = 1.6862.
Method      | Work Required                          | Convergence                                                                       | Remarks
------------|----------------------------------------|-----------------------------------------------------------------------------------|--------
Bisection   | Two function evaluations per iteration | Always converges (but slow)                                                       | (i) The initial interval containing the root must be found first. (ii) A bracketing method: every iteration brackets the root.
Fixed-Point |                                        | Rate of convergence is linear                                                     | Finding the right iteration function x = g(x) is a challenging task.
Newton      |                                        | Quadratic convergence at a simple root. Convergence is linear for a multiple root. | (i) Initial approximation x0 must be chosen carefully; if it is far from the root, the method may diverge. (ii) For some functions, the derivative is difficult to compute, if not impossible.
Secant      |                                        | Superlinear                                                                       | Needs two initial approximations.
Müller      |                                        | Convergence is almost quadratic near the root                                     |
Concluding Remarks

If the derivative of f(x) is computable, then the Newton method is an excellent root-finding method. The initial approximation x0 can be obtained by drawing a graph of f(x), or by running a few iterations of the bisection method and taking the approximate value so obtained as the initial approximation for the Newton-Raphson iteration.

If the derivative is not computable, use the secant method instead.
1.11 Sensitivity of the Roots of a Polynomial

The roots of a polynomial can be very sensitive to small perturbations of the coefficients of the polynomial. For example, take

p(x) = x^2 − 2x + 1.

The roots are x1 = 1, x2 = 1. Now perturb only the coefficient −2 to −1.9999, leaving the other two coefficients unchanged. The roots of the perturbed polynomial are approximately 1 ± 0.01i. Thus, both roots become complex.

One might think that this happened because the roots of p(x) are multiple roots. It is true that multiple roots are prone to perturbations, but the roots of a polynomial with well-separated roots can be very sensitive too!
The well-known example of this is the celebrated Wilkinson polynomial:
p(x) = (x 1)(x 2)...(x 20).
The roots of this polynomial are 1, 2, . . ., 20 and thus very well-separated. But, a small
perturbation of the coefficient x19 which is 210 + 223 will change the roots completely. Some
of them will even become complex. This is illustrated in the following graph. Note that some
roots are more sensitive than the others.
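The Wilkinson experiment is easy to reproduce. The sketch below uses Python/NumPy rather than the chapter's MATLAB, with the perturbation size 2^-23 taken from Wilkinson's classical experiment:

```python
import numpy as np

# Coefficients of the Wilkinson polynomial (x - 1)(x - 2) ... (x - 20)
p = np.poly(np.arange(1, 21))

# The coefficient of x^19 is p[1] = -210; perturb it by 2^-23
p_pert = p.copy()
p_pert[1] -= 2.0**-23

r = np.roots(p_pert)
# Several of the computed roots acquire sizable imaginary parts
print(max(abs(r.imag)))
```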
1.12 Deflation Technique
Deflation is a technique to compute the remaining roots of f(x) = 0, once an approximate real root or a pair of approximate complex roots has been computed. If x = α is an approximate root of P(x) = 0, where P(x) is a polynomial of degree n, then we can write
P(x) = (x - α) Qn-1(x)
where Qn-1(x) is a polynomial of degree n - 1.
Similarly, if x = α ± iβ is an approximate pair of complex conjugate roots, then
P(x) = (x - α - iβ)(x - α + iβ) Qn-2(x)
where Qn-2(x) is a polynomial of degree n - 2.
Moreover, the zeros of Qn-1(x) in the first case and those of Qn-2(x) in the second case are also zeros of P(x).
The coefficients of Qn-1(x) or Qn-2(x) can be generated by using synthetic division as in Horner's method, and then any of the root-finding procedures described before can be applied to
compute a root or a pair of roots of Qn-1(x) = 0 or Qn-2(x) = 0. The procedure can be continued
until all the roots are found.
Example 1.30
Find the roots of f(x) = x^4 + 2x^3 + 3x^2 + 2x + 2 = 0.
It is easy to see that x = ±i is a pair of complex conjugate roots of f(x) = 0. Then we can write
f(x) = (x - i)(x + i)(b3 x^2 + b2 x + b1)
or
x^4 + 2x^3 + 3x^2 + 2x + 2 = (x^2 + 1)(b3 x^2 + b2 x + b1).
Equating the coefficients of x^4, x^3, and x^2 on both sides, we obtain
b3 = 1,    b2 = 2,    b1 + b3 = 3,
giving b3 = 1, b2 = 2, b1 = 2.
So, x^4 + 2x^3 + 3x^2 + 2x + 2 = (x^2 + 1)(x^2 + 2x + 2).
Thus the roots of the polynomial equation x^4 + 2x^3 + 3x^2 + 2x + 2 = 0 are given by the roots of (i) x^2 + 1 = 0 and (ii) x^2 + 2x + 2 = 0.
The zeros of the deflated polynomial Q2(x) = x^2 + 2x + 2 can now be found by using any of the methods described earlier.
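The deflation in this example can be carried out by polynomial division; here is an illustrative Python/NumPy sketch (np.polydiv performs the synthetic division):

```python
import numpy as np

# Coefficients of f(x) = x^4 + 2x^3 + 3x^2 + 2x + 2
p = [1, 2, 3, 2, 2]

# Deflate out the known pair of roots x = +/- i, i.e. divide by x^2 + 1
q, rem = np.polydiv(p, [1, 0, 1])
# q is the deflated polynomial x^2 + 2x + 2; rem is (numerically) zero

# The remaining zeros of f are the roots of the deflated quadratic
print(np.roots(q))   # -1 + 1j and -1 - 1j
```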
1.13 Root Finding Using MATLAB Functions
The MATLAB built-in functions fzero, poly, polyval, and roots can be used in the context of zero-finding of a function.
In a MATLAB setting, type help followed by these function names to learn the details of the uses of these functions. Here are some basic uses of these functions.
The function fzero can also be used with options to display output such as the values of x and f(x) at each iteration (possibly with some specified tolerance). For example, the call
z = fzero(fun, x0, optimset('Display', 'iter'))
will display the value of x and the corresponding value of f(x) at each iteration performed (see Example 1.31 below). Other option values are also possible.
Example 1.31
Find the zero of f(x) = e^x - 1 in [-1, 1] and display the values of x and f(x) at each iteration.
Solution.
x               f(x)            Procedure
-1              -0.632121       initial
-0.729908       -0.518047       interpolation
-0.404072       -0.48811        interpolation
-0.151887       -0.140914       interpolation
-0.0292917      -0.0288669      interpolation
-0.000564678    -0.000546857    interpolation
-8.30977e-06    -8.30974e-06    interpolation
-2.34595e-09    -2.34595e-09    interpolation
2.15787e-17     0               interpolation
[Figure from an omitted exercise: a solid with radius r, height h, and length l = sqrt(h^2 + r^2) = sqrt(r^2 + 9).]
(i) Find the coefficients of the polynomial P4(x) whose roots are 1, 1, 2, and 3.
(ii) Now compute the roots numerically and check the accuracy of each computed root.
(iii) Perturb the coefficient of x^3 of P4(x) by 10^-4 and recompute the roots.
Solution (i).
v = [1, 1, 2, 3]
c = poly(v)
The vector c contains the coefficients of the 4th-degree polynomial P4(x) with the roots 1, 1, 2, and 3:
c = [1, -7, 17, -17, 6]
P4(x) = x^4 - 7x^3 + 17x^2 - 17x + 6
Solution (ii).
z = roots(c) gives the computed zeros of the polynomial P4(x):
z = [3.0000, 2.0000, 1.0000, 1.0000]
Solution (iii).
The perturbed polynomial P4(x) is given by
P4(x) = x^4 - (7 + 10^-4) x^3 + 17x^2 - 17x + 6
The vector cc containing the coefficients of the perturbed P4(x) is given by:
cc = [ ]
zz = roots(cc)
The vector zz contains the zeros of the perturbed polynomial P4(x).
vv = polyval(cc, zz)
The vector vv contains the values of the perturbed P4(x) evaluated at the computed roots zz:
vv = [ ]
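The whole experiment can be reproduced as follows; this is an illustrative Python/NumPy sketch of the MATLAB session (poly, roots, and polyval have direct NumPy counterparts):

```python
import numpy as np

v = [1, 1, 2, 3]
c = np.poly(v)        # coefficients 1, -7, 17, -17, 6 of P4(x)
z = np.roots(c)       # the computed roots: 3, 2, 1, 1

# Perturb the coefficient of x^3 by 10^-4: x^4 - (7 + 10^-4)x^3 + ...
cc = c.copy()
cc[1] -= 1e-4
zz = np.roots(cc)     # roots of the perturbed polynomial

# Residuals of the perturbed polynomial at its computed roots
vv = np.polyval(cc, zz)
print(max(abs(vv)))   # tiny: each computed root nearly annihilates the polynomial
```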
Function eig.
eig(A) is used to compute the eigenvalues of a matrix A. It can be used to find all the roots
of a polynomial, as the following discussions show.
It is easy to verify that the roots of the polynomial
Pn(x) = a0 + a1 x + a2 x^2 + ... + a_{n-1} x^{n-1} + x^n
are the eigenvalues of the companion matrix

         | 0  0  ...  0  -a0     |
         | 1  0  ...  0  -a1     |
    Cn = | 0  1  ...  0  -a2     |
         | :  :       :   :      |
         | 0  0  ...  1  -a_{n-1}|
Thus, eig(Cn) will compute all the roots of Pn(x). In fact, the MATLAB built-in function roots(c) is based on this idea.
Example 1.34
Find the roots of P3(x) = x^3 - 3x + 1.
Solution. The companion matrix is

         | 0  0  -1 |
    C3 = | 1  0   3 |
         | 0  1   0 |

eig(C3) gives the roots of P3(x): -1.8794, 1.5321, 0.3473.
Verify: We now show that these roots are the same as those obtained by the function roots:
c = [1, 0, -3, 1]
roots(c) =
-1.8794
1.5321
0.3473
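The same computation in Python/NumPy, as an illustrative sketch of the companion-matrix idea:

```python
import numpy as np

# P3(x) = x^3 - 3x + 1, written as x^3 + a2*x^2 + a1*x + a0
a0, a1, a2 = 1.0, -3.0, 0.0

# Companion matrix whose eigenvalues are the roots of P3
C = np.array([[0.0, 0.0, -a0],
              [1.0, 0.0, -a1],
              [0.0, 1.0, -a2]])

roots = np.sort(np.linalg.eigvals(C).real)
print(roots)    # approximately -1.8794, 0.3473, 1.5321
```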
Remarks: It is not recommended that the eigenvalues of an arbitrary matrix A be computed by transforming A first into the companion form C and then finding the zeros of the associated polynomial, because the transforming matrix can be highly ill-conditioned (please see Chapter 8).
EXERCISES
(viii) f(x) = e^x - 1 - x - x^2/2 in [-1, 1]
3.
4. (Computational) Construct a simple example to show that a small value of |f(x_k)| at an iteration k does not necessarily mean that x_k is near the root x = α of f(x).
5. (Analytical)
(a) Prove that if g(x) is continuously differentiable in some open interval containing the fixed point α, and if |g'(α)| < 1, then there exists a number δ > 0 such that the iteration
x_{k+1} = g(x_k)
converges whenever x0 is chosen such that |x0 - α| ≤ δ.
(b) Using the above result, find the iteration function g(x) and an interval [a, b] for
f(x) = x - tan x
such that the iteration always converges. Find the solution with an accuracy of 10^-4.
(c) Find a zero of the function f(x) = e^-x - sin x in [0.5, 0.7], by choosing g(x) = x + f(x) and using the result in (a).
6. (Computational) For each of the following functions, find an interval and an iteration
function x = g(x) so that the fixed-point iteration:
xk+1 = g(xk )
will converge for any choice of x0 in the interval. Then apply two iterations to approximate
a zero in that interval choosing a suitable initial approximation x0 .
(a) f(x) = x^2 - 3
(c) f(x) = x - e^-x
(f) f(x) = -x^2/4 + cos x
(h) f(x) = x^3 - 7x + 6
(i) f(x) = 1 - tan(x/4)
8. (Computational) Study the convergence of the fixed-point iterations for the function f(x) = x^3 - 7x + 6 at the root x = 2 with all possible iteration functions g(x). Represent these fixed-point iteration schemes graphically.
9. (Computational)
(a) Sketch a graph of f(x) = x + ln x and show that f(x) has one and only one zero α in 0 < x < ∞, but that the related fixed-point iteration x_{i+1} = g(x_i), x0 given, x0 ≠ α, with g(x) = -ln x, does not converge to α even if x0 is arbitrarily close to α.
(b) How can one rewrite the iteration equation so as to obtain convergence? Show for this iteration the uniqueness of the fixed point α. Give its rate of convergence.
10. (Computational) The iteration x_{n+1} = 2 - (1 + c)x_n + c x_n^3 will converge to α = 1 for some values of c (provided x0 is chosen sufficiently close to α). Find the values of c for which this is true. For what value of c will the convergence be quadratic?
11. (Computational) Apply Newton's method, the secant method, and the method of false position to determine a zero with an accuracy of 10^-4 for each of the functions in Exercise 6.
12. (Computational) Write down Newton's iterations for the following functions:
(a) x^3 - cos(x^2 + 1) = 0
(b) 2 sin x - x^2 - x + 1 = 0
13. (Analytical and Computational) Develop Newton's method to compute the following:
(a) ln a (the natural log of a), a > 0
(b) arccos a and arcsin a
(c) e^a
Do three iterations for each problem by choosing a suitable function and a suitable x0. Compare your results with those obtained by the MATLAB built-in function fzero.
14. (Computational) Explain what happens when Newton's method is applied to find a root of x^3 - 3x + 6 = 0, starting with x0 = 1.
15. (Computational) Construct your own example to demonstrate the fact that Newton's method may diverge if x0 is not properly chosen.
16. (Computational) Show that the Newton sequence of iterations for the function f(x) = x^3 - 2x + 2 with x0 = 0 will oscillate between 0 and 1. Give a graphical representation of this fact.
Now find the real root of f(x) using Newton's method by choosing a suitable x0.
17. (Computational) Study the convergence behavior of Newton's method in approximating the zero x = 2 of the following two polynomials:
f(x) = x^3 - 3x - 2
g(x) = x^2 - 4x + 4
starting with the same initial approximation x0 = 1.9.
18. (Computational) Write down the Newton sequence of iterations to find a minimum or a maximum of a function f(x).
Apply it to find the minimum value of f(x) = x^2 + sin x.
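The Newton iteration for a minimum solves f'(x) = 0 via x_{k+1} = x_k - f'(x_k)/f''(x_k); here is an illustrative Python sketch for f(x) = x^2 + sin x (the starting guess is arbitrary):

```python
import math

fp  = lambda x: 2*x + math.cos(x)   # f'(x)  for f(x) = x^2 + sin x
fpp = lambda x: 2 - math.sin(x)     # f''(x)

x = 0.0                             # arbitrary starting guess
for _ in range(20):
    x -= fp(x) / fpp(x)             # Newton step on f'(x) = 0

print(x)    # about -0.4502, where f'(x) = 0 and f''(x) > 0 (a minimum)
```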
25. (Computational) The polynomial P8(x) = x^8 + 2x^7 + x^6 + 2x^5 + 5x^3 + 7x^2 + 5x + 7 has a pair of complex-conjugate zeros x = ±i. Approximate them using Muller's method (do three iterations). Choose x0 = 0, x1 = 0.1, x2 = 0.5.
26. (Computational) The polynomial P6(x) = 3x^6 - 7x^3 - 8x^2 + 12x + 3 = 0 has a double root at x = 1. Approximate this root first by using the standard Newton method and then by the two modified Newton methods, starting with x0 = 0.5.
Use Horner's method to find the polynomial value and its derivative at each iteration. Present your results in tabular form. (Do three iterations.)
27. (Applied) Consider the Kepler equation of motion
M = E - e sin E
relating the mean anomaly M to the eccentric anomaly E of an elliptic orbit with eccentricity e. To find E, we need to solve the nonlinear equation:
f(E) = M + e sin E - E = 0
(a) Show that the fixed-point iteration
E = g(E) = M + e sin E
converges.
(b) Given e = 0.0167 (Earth's eccentricity) and M = 1 (in radians), compute E using
i. the bisection method,
ii. the fixed-point iteration E = g(E) as above,
iii. Newton's method.
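A sketch of the fixed-point iteration of parts (a) and (b) (illustrative Python; it converges because |g'(E)| = e|cos E| ≤ e < 1):

```python
import math

e, M = 0.0167, 1.0    # Earth's eccentricity; mean anomaly in radians

# Fixed-point iteration E_{k+1} = M + e*sin(E_k); M is a natural starting guess
E = M
for _ in range(50):
    E = M + e * math.sin(E)

print(E)    # about 1.01418
```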
28. (Applied) Use Newton's method to compute the specific volume V of carbon dioxide at a temperature of T = 300 K from the Van der Waals equation of state
(P + a/V^2)(V - b) = RT,
given P = 1 atm, R = 0.082054 L atm/(mol K), a = 3.592 L^2 atm/mol^2, b = 0.04267 L/mol.
Obtain the initial approximation V0 from the ideal gas law: PV = RT.
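A sketch of the resulting Newton iteration (illustrative Python; the L-atm-mol unit system is assumed for the constants quoted above):

```python
P, R, T = 1.0, 0.082054, 300.0     # pressure, gas constant, temperature
a, b = 3.592, 0.04267              # Van der Waals constants for CO2

f  = lambda V: (P + a / V**2) * (V - b) - R * T
fp = lambda V: P - a / V**2 + 2 * a * b / V**3   # f'(V)

V = R * T / P        # ideal-gas initial guess, about 24.62
for _ in range(20):
    V -= f(V) / fp(V)

print(V)             # about 24.51 L/mol
```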
29. (Applied) Repeat the last problem for oxygen for which a = 1.360 and b = 0.03183.
30. (Applied) Windmill Electric Power
It is becoming increasingly common to use wind turbines to generate electric power. The energy output of a windmill depends upon the blade diameter D and the velocity V of the wind. A good estimate of the energy output EO is given by the following formula:
EO = 0.01328 D^2 V^3
Use Newton's method to determine what the diameter of the windmill blade should be if one wishes to generate 500 watts of electric power when the wind speed is 10 mph.
31. (Applied) A spherical tank of radius r = ... m contains a liquid of volume V = 0.5 m^3. What is the depth, h, of the liquid?
(Hint: Use V = (π h^2 / 3)(3r - h) to solve.)
33. (Applied) The displacement x(t) of a vibrating body is given by
x(t) = (1/2) sin 10t + A cos((√12/2) t) + B sin((√12/2) t)
Determine the time required for the body to come to rest (x(t) = 0) using the initial conditions x(0) = 1, x'(0) = 5.
(Hint: First, find A and B using the initial conditions. Then solve the above equation for t, setting x(t) = 0.)
34. (Analytical and Computational) (Halley's Method) The following iteration for finding a root of f(x) = 0:
x_{i+1} = x_i - 2 f(x_i) f'(x_i) / (2[f'(x_i)]^2 - f(x_i) f''(x_i))
is known as Halley's iteration.
(ii) Using Halley's iteration, compute a positive zero of f(x) = x^3 - 3x + 1 starting with x0 = 0.5, with an error tolerance of ε = 10^-4.
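An illustrative Python sketch of Halley's iteration for this problem (the stopping test uses the stated tolerance):

```python
f   = lambda x: x**3 - 3*x + 1
fp  = lambda x: 3*x**2 - 3
fpp = lambda x: 6*x

x = 0.5
for _ in range(20):
    step = 2 * f(x) * fp(x) / (2 * fp(x)**2 - f(x) * fpp(x))
    x -= step
    if abs(step) < 1e-4:     # the tolerance from the exercise
        break

print(x)    # about 0.34730, a positive zero of f
```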
MATLAB EXERCISES
M1.1 Write MATLAB functions in the following formats to implement the methods: Bisection, Fixed-Point Iteration, Newton, Secant, and Muller.
function [xval, funval, iter, error] = bisect(fnc, intv, tol, maxitr)
function [xval, funval, iter, error] = newton(fnc, deriv, x0, tol, maxitr)
function [xval, funval, iter, error] = fixedpoint(fnc, x0, tol, maxitr)
function [xval, funval, iter, error] = secant(fnc, x0, x1, tol, maxitr)
function [xval, funval, iter, error] = muller(fnc, x0, x1, x2, tol, maxitr)
where
intv = the initial interval [a, b] containing a root
xval = the computed approximation to the root
iter = the number of iterations performed
funval = the value of the function at xval
error = the error estimate at the final iteration
fnc = the function f(x) whose zero is sought
deriv = the derivative of f(x)
x0, x1, x2 = the initial approximations
tol = the error tolerance
maxitr = the maximum number of iterations allowed
Presentation of Results: Present your results both in tabular and graphic forms. The
tables and graphs should display the values of the function and the approximate root at
each iteration. Results from all the methods need to be presented in the same table and
same graph.
Table: Comparison of Different Root-Finding Methods
Method | Iteration | x-Value | Function Value
Data: Implement the methods using the functions given in Exercises 6(a), 6(b), 6(d),
6(e), 6(h), and 6(i).
M1.2 Apply now the built-in MATLAB function fzero to the function of Exercise 2 and present
your results along with those of other methods in the same table and same graph using
the format of M1.1.
M1.3 Write a MATLAB program, using the built-in MATLAB functions polyval and polyder, to find a zero of a polynomial, based on Newton's method with Horner's scheme.
Implement the program using
P(x) = x^5 - x^4 - 5x^3 - x^2 + 4x + 3
Present results of each iteration both in tabular and graphic forms.
M1.4 Choosing the values of a, b, R, and P in appropriate units, find V at the temperatures T = 100 K, T = 250 K, and T = 450 K by solving the Van der Waals equation of state using each of the methods developed in M1.1. Present the results in tabular form.
M1.5 Apply the MATLAB program newton to compute (a) the fifth root of 1000, (b) 1/12000, (c) e^5, and (d) ln(10), choosing a suitable initial approximation in each case. For each problem, present the results of the iterations in tabular form.
M1.6 Consider the following equation:
A = (R/i) [1 - (1 + i)^(-n)]
where A = the amount of money borrowed; i = the interest rate per time period; R = the amount of each payment; n = the number of payments (all payments are of equal value).
If the amount of the loan is $20,000, the payment amount per month is $450.00, and the loan is paid off in 5 years, what is the monthly interest rate? (Solve the problem using the newton program.)
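A sketch of solving this equation numerically (illustrative Python; bisection is used here as a simple, safe stand-in for the newton program):

```python
A, R, n = 20000.0, 450.0, 60    # loan amount, monthly payment, 5 years of payments

f = lambda i: (R / i) * (1.0 - (1.0 + i)**(-n)) - A

lo, hi = 1e-6, 0.05             # bracket: f(lo) > 0 (since R*n > A), f(hi) < 0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

i = (lo + hi) / 2.0
print(i)    # about 0.0104, i.e. roughly 1.04% per month
```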
M1.7 The following formula gives the upward velocity v of a rocket:
v = u ln(m / (m - qt)) - gt
where u = the velocity of the fuel expelled relative to the rocket; m = the initial mass of the rocket (at t = 0); q = the fuel consumption rate; g = the acceleration due to gravity; t = time.
Use any bracketing method to find the time taken by the rocket to attain a velocity of 500 m/s, given
u = 950 m/s, m = 2 × 10^5 kg, q = 3,000 kg/s.
Choose 10 ≤ t ≤ 25.
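An illustrative Python sketch for this problem. Note that g is not listed in the data, so g = 9.8 m/s^2 is assumed here, and the search bracket is widened beyond [10, 25] because, with that assumed g, the target velocity is not reached inside [10, 25]:

```python
import math

u, m, q, g, vtarget = 950.0, 2e5, 3000.0, 9.8, 500.0   # g is an assumption

f = lambda t: u * math.log(m / (m - q * t)) - g * t - vtarget

lo, hi = 10.0, 60.0          # widened bracket (m - q*t stays positive up to t = 66.7)
for _ in range(60):          # bisection: a bracketing method, as the exercise asks
    mid = (lo + hi) / 2.0
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

t = (lo + hi) / 2.0
print(t)    # about 40.8 s under these assumptions
```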
M1.8 Suppose the concentration c of a chemical in a mixed reactor is given by:
c = cin (1 - e^(-0.05t)) + c0 e^(-0.05t)
where cin = the inflow concentration and c0 = the original concentration.
Find the time t required for c to be 90% of cin, if c0 = 5 and cin = 15.
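This one can be checked against a closed-form solution; the algebra below simply rearranges the given formula (illustrative Python):

```python
import math

cin, c0 = 15.0, 5.0
f = lambda t: cin * (1 - math.exp(-0.05 * t)) + c0 * math.exp(-0.05 * t) - 0.9 * cin

# 0.9*cin = cin - (cin - c0)*exp(-0.05 t)  =>  t = -ln(0.1*cin/(cin - c0)) / 0.05
t = -math.log(0.1 * cin / (cin - c0)) / 0.05
print(t, f(t))    # about 37.94, with a residual of essentially zero
```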