Numerical Methods: Solving Nonlinear Equations

This document provides an overview of numerical methods for solving nonlinear equations. It introduces iterative methods for finding roots of functions and discusses techniques like the bisection method. The bisection method generates a sequence of approximations that converge to the root by repeatedly bisecting intervals and isolating sign changes of the function. The document also covers estimating initial approximations, defining rates of convergence, and outlines several other root-finding methods that will be described in more detail later.


Lecture 2

Numerical methods
Solving nonlinear equations
Lecture 2

CONTENTS

1. Introduction
2. Root separation and estimation of initial approximation
3. Bisection method
4. Rate of convergence
5. Regula falsi (false position) method
6. Secant method
7. Newton’s (Newton-Raphson) method
8. Steffensen’s method
9. Fixed-point iteration
10. Aitken Extrapolation
11. A few notes
12. Literature
Introduction

Solving a nonlinear equation

    f(x) = 0    (1)

means to find points x* ∈ ℝ such that
f(x*) = 0.
We call such points roots of the function f(x).

In general, we do not know an explicit formula
for the roots of f(x) (often no such formula exists).

Iterative methods:
we generate a sequence of approximations
x_0, x_1, x_2, …
from one or several initial approximations (guesses)
of the root x*,
which converges to the root x*.
Introduction

For some methods it is enough
to prescribe an interval [a, b]
which contains the sought root;
others require
the initial guess to be
reasonably close to the true root.

Usually we start with a robust
but slowly converging method
and then,
when we are close enough to the root,
we switch to a more sophisticated
and faster-converging method.
Introduction

For simplicity, we will consider only the problem of finding
a simple root x* of the function f(x),
i.e. we suppose that f′(x*) ≠ 0.

We will also suppose that
the function f(x) is continuous and
has as many continuous derivatives
as we need.
Root separation and estimation of initial approximation

In order to find solutions of

    f(x) = 0

we have to estimate
the number of roots
and
we have to determine intervals
each containing a unique root.

Theorem: If the function f is continuous on the interval [a, b] and

    f(a) · f(b) < 0,

then there is at least one root of f(x) = 0 in the interval [a, b].
Root separation and estimation of initial approximation

[Figure: graph of f(x) crossing the x-axis at the root x*]
Root separation and estimation of initial approximation

We can find initial approximations of the roots of

    f(x) = 0

from the graph of the function f(x).

Another possibility is to assemble a table of points [x_i, f(x_i)] for
some division

    a = x_0 < x_1 < … < x_{i−1} < x_i < … < x_n = b

of a chosen interval [a, b], and look for sign changes of f between consecutive points.
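A minimal sketch of this tabulation in Python (an illustrative language choice; the function name, grid size, and interval are assumptions, not part of the lecture):

import math

def separate_roots(f, a, b, n=100):
    """Scan n subintervals of [a, b]; return those where f changes sign."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    brackets = []
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0) == 0.0:
            brackets.append((x0, x0))      # grid point hits a root exactly
        elif f(x0) * f(x1) < 0:
            brackets.append((x0, x1))      # sign change: at least one root inside
    return brackets

# Rough root separation for the example below: f(x) = 4 sin x - x^3 - 1
print(separate_roots(lambda x: 4 * math.sin(x) - x**3 - 1, -3.0, 3.0))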
Root separation and estimation of initial approximation

Example: Obtain a rough guess of the roots of the equation f(x) = 0, where

    f(x) = 4 sin x − x³ − 1.
Root separation and estimation of initial approximation

Example: Obtain a rough guess of the roots of the equation

    e^x + x² − 3 = 0.

Solution: Rearrange the equation as

    e^x = 3 − x²

and look for the intersections of the graphs y = e^x and y = 3 − x².
Bisection method

It is based on the principle of sign changes.

Suppose that the function values f(a_0) and f(b_0)
at the endpoints of the interval (a_0, b_0) are of opposite signs,
i.e. f(a_0) · f(b_0) < 0.
We construct a sequence of intervals

    (a_0, b_0) ⊃ (a_1, b_1) ⊃ (a_2, b_2) ⊃ (a_3, b_3) ⊃ …,

each containing the root.
The intervals (a_{k+1}, b_{k+1}), k = 0, 1, …, are determined
recursively as follows:
Bisection method

Find the midpoint of the interval (a_k, b_k) and designate it x_{k+1} = (a_k + b_k)/2.

If f(x_{k+1}) = 0, then x* = x_{k+1} and we stop.
If f(x_{k+1}) ≠ 0, then

    (a_{k+1}, b_{k+1}) = (a_k, x_{k+1})   if f(a_k) f(x_{k+1}) < 0,
    (a_{k+1}, b_{k+1}) = (x_{k+1}, b_k)   if f(a_k) f(x_{k+1}) > 0.

From the construction of (a_{k+1}, b_{k+1}) it follows that f(a_{k+1}) f(b_{k+1}) < 0,
so each interval (a_k, b_k) contains a root.
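A minimal sketch of the whole bisection loop in Python (the tolerance, iteration cap, and test interval are illustrative assumptions):

import math

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve (a, b) while keeping a sign change of f."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = 0.5 * (a + b)                  # midpoint x_{k+1}
        fx = f(x)
        if fx == 0.0 or 0.5 * (b - a) < tol:
            return x                       # exact root or interval small enough
        if fa * fx < 0:
            b = x                          # root lies in (a, x)
        else:
            a, fa = x, fx                  # root lies in (x, b)
    return 0.5 * (a + b)

# Root of 4 sin x - x^3 - 1 = 0 in (0, 1)
print(bisect(lambda x: 4 * math.sin(x) - x**3 - 1, 0.0, 1.0))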
Bisection method

After k steps the root lies in the interval I_k := (a_k, b_k) with length

    |I_k| = b_k − a_k = (b_{k−1} − a_{k−1})/2 = … = 2^{−k} (b_0 − a_0).

The midpoint x_{k+1} of the interval (a_k, b_k) is an approximation of x* with error

    |x_{k+1} − x*| ≤ (b_k − a_k)/2 = 2^{−k−1} (b_0 − a_0).    (2)

For k → ∞, obviously |I_k| → 0 and x_k → x*.

Example: How many iterations of the bisection method do we have to perform
in order to refine the root by one decimal digit?
Answer: each step halves the interval, so gaining a factor of 10 in accuracy
requires 2^k ≥ 10, i.e. k ≥ log₂ 10 ≈ 3.32, about 4 iterations per decimal digit.
Bisection method

The bisection method converges slowly,
but its convergence is always guaranteed.

The rate of convergence (2) does not depend on the function f(x),
because we used only the signs of the function values.

If we use those values more efficiently
(and possibly also values of the derivative f′(x)),
we can achieve faster convergence.

Such “refined” methods usually converge
only if we start from a good initial approximation.
Most often such an initial guess is obtained by the bisection method.
Rate of convergence

Let x_0, x_1, x_2, … be a sequence which converges to x*, and let
e_k = |x_k − x*|. If there exist a number p and a constant C ≠ 0 such that

    lim_{k→∞} e_{k+1} / e_k^p = C,    (3)

then p is called the order of convergence and
C the error constant.
We say that the convergence is
    linear, if p = 1 and C < 1,
    superlinear, if p > 1,
    quadratic, if p = 2.
We say that the method converges with order p
if all convergent sequences obtained by this method
have order of convergence greater than or equal to p, and
at least one of them has order of convergence exactly equal to p.
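In practice the order p can be estimated empirically from consecutive errors: if e_{k+1} ≈ C e_k^p, then p ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k−1}). A small sketch, assuming (for the test) that the exact root x* is known:

import math

def estimate_order(xs, x_star):
    """Estimate p from triples of consecutive errors e_k = |x_k - x*|."""
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[k + 1] / e[k]) / math.log(e[k] / e[k - 1])
            for k in range(1, len(e) - 1)
            if e[k - 1] > 0 and e[k] > 0 and e[k + 1] > 0 and e[k] != e[k - 1]]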
Rate of convergence

Example: What is the order of convergence of the bisection method?

(Recall: lim_{k→∞} e_{k+1} / e_k^p = C, where e_k = |x_k − x*|.)
Rate of convergence

Example: What is the order of convergence of the bisection method?

The midpoint x_{k+1} of the interval (a_k, b_k) approximates x* with error

    |x_{k+1} − x*| ≤ ½ (b_k − a_k) = 2^{−k−1} (b_0 − a_0).

Taking e_k = 2^{−k} (b_0 − a_0) as the worst-case error,

    e_{k+1} / e_k^p = 2^{−k−1} (b_0 − a_0) / [2^{−k} (b_0 − a_0)]^p
                    = ½ · (2^k / (b_0 − a_0))^{p−1},

and this has a finite nonzero limit as k → ∞ only for

    p = 1,   C = ½.

The bisection method therefore converges linearly.
Regula falsi (false position) method

It is very similar to the bisection method.
However, the next iteration point is not the midpoint of the interval
but the intersection of the x-axis with the secant line
through [a_k, f(a_k)] and [b_k, f(b_k)].
Regula falsi (false position) method

The root of the secant is estimated by

    x_{k+1} = b_k − f(b_k) (b_k − a_k) / (f(b_k) − f(a_k)).

If f(x_{k+1}) = 0, then x* = x_{k+1} and we stop.
If f(x_{k+1}) ≠ 0, then

    (a_{k+1}, b_{k+1}) = (a_k, x_{k+1})   if f(a_k) f(x_{k+1}) < 0,
    (a_{k+1}, b_{k+1}) = (x_{k+1}, b_k)   if f(a_k) f(x_{k+1}) > 0.

From the construction of (a_{k+1}, b_{k+1}) it follows that f(a_{k+1}) f(b_{k+1}) < 0,
so each interval (a_k, b_k) contains a root.
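A minimal sketch of the regula falsi loop (names and the residual-based stopping test are illustrative choices):

def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """False position: like bisection, but split at the secant root."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # secant intersection with the x-axis
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx                  # keep (a, x)
        else:
            a, fa = x, fx                  # keep (x, b)
    return x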
Regula falsi (false position) method

After k steps the root lies in the interval I_k := (a_k, b_k).
Unlike the bisection method, the length of the interval I_k may in some
cases fail to converge to zero (one endpoint can stay fixed).

Nevertheless, the regula falsi method always converges.

The rate of convergence is
(similarly to the bisection method)
linear.
Secant method

It is similar to the regula falsi method.

We start from an interval [a, b] containing the root.
Denote x_0 = a and x_1 = b.

Let the secant go through the points [x_0, f(x_0)] and [x_1, f(x_1)];
its intersection with the x-axis
we denote x_2.

Unlike the regula falsi method,
we do not select a subinterval containing the root;
instead we construct the secant through the points [x_1, f(x_1)] and [x_2, f(x_2)],
and its root we denote x_3.

Then we construct the secant through [x_2, f(x_2)] and [x_3, f(x_3)], and so on.
[Figure: successive secants and their intersections x_2, x_3, … with the x-axis]
Secant method

The k-th approximation of the root is obtained by

    x_{k+1} = x_k − f(x_k) (x_k − x_{k−1}) / (f(x_k) − f(x_{k−1})),

where x_0 = a, x_1 = b.

The computation is finished if a stopping criterion holds:

    |x_{k+1} − x_k| ≤ ε,   or   |x_{k+1} − x_k| ≤ ε |x_k|,   or   |f(x_{k+1})| ≤ ε,

or if we find the root exactly.

Caution! These conditions do not guarantee that |x_{k+1} − x*| ≤ ε.

Example: How can we check the condition |x_{k+1} − x*| ≤ ε?
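A minimal sketch of the secant iteration (names and tolerance are illustrative; note that, unlike regula falsi, no bracketing interval is maintained):

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant: x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                       # horizontal secant: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol:            # stopping criterion |x_{k+1} - x_k| <= eps
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1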


Secant method

The secant method can be divergent!

[Figure: an example where the secant iterates move away from the root]
Secant method

The secant method converges faster than regula falsi,
but it can also diverge.

It converges
if the initial points x_0 and x_1 are close enough to the root x*.

It is possible to show that the convergence rate is

    p = (1 + √5)/2 ≈ 1.618,

i.e. the secant method is superlinear.
Newton’s (Newton-Raphson) method

We will work with the tangent
to the graph of the function f.

Therefore we suppose that f is differentiable.

We choose the initial approximation x_0 of the root.
We draw a tangent line
to the graph of the function f
through the point [x_0, f(x_0)].
Its intersection with the x-axis will be x_1.
Then we draw a tangent line through [x_1, f(x_1)];
its intersection with the x-axis will be x_2,
and so on.
Newton’s (Newton-Raphson) method

[Figure: tangent lines at successive iterates intersecting the x-axis]
Newton’s (Newton-Raphson) method

Suppose that we know x_k
and we want to find a better approximation x_{k+1}.

We construct the tangent line to the curve y = f(x) through [x_k, f(x_k)].
Using the equation of the tangent line

    y = f(x_k) + f′(x_k)(x − x_k)

with y := 0, we obtain the intersection with the x-axis:

    x_{k+1} = x_k − f(x_k) / f′(x_k).
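A minimal sketch of Newton's iteration in Python (tolerances and the example function are illustrative assumptions):

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        if dfx == 0.0:                 # horizontal tangent: the method breaks down
            raise ZeroDivisionError("f'(x_k) = 0")
        x_new = x - fx / dfx
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: sqrt(2) as the root of f(x) = x^2 - 2
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))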
Newton’s (Newton-Raphson) method - convergence

Let e_k = |x_k − x*| be the error in the k-th step.

Let us construct the Taylor expansion of f(x*) at x_k:

    0 = f(x*) = f(x_k) + (x* − x_k) f′(x_k) + ½ (x* − x_k)² f″(ξ),

where ξ is some point of the interval with endpoints x_k and x*.

After some algebra (divide by f′(x_k) and rearrange) we obtain

    −½ (x* − x_k)² f″(ξ)/f′(x_k) = f(x_k)/f′(x_k) + (x* − x_k)
    −½ (x* − x_k)² f″(ξ)/f′(x_k) = x* − [x_k − f(x_k)/f′(x_k)] = x* − x_{k+1}

and therefore

    e_{k+1} = ½ e_k² |f″(ξ)/f′(x_k)|.
Newton’s (Newton-Raphson) method - convergence

    e_{k+1} = ½ e_k² |f″(ξ)/f′(x_k)|    (4)

After applying the limit (both ξ and x_k tend to x*):

    lim_{k→∞} e_{k+1}/e_k² = ½ |f″(x*)/f′(x*)|.

Recall the definition of the rate of convergence:
if lim_{k→∞} e_{k+1}/e_k^p = C with C ≠ 0,
then p is the order of convergence and C the error constant.

Newton’s method converges quadratically (p = 2).
Newton’s (Newton-Raphson) method - convergence

Newton’s method can also diverge.

[Figure: an example where the Newton iterates move away from the root]
Newton’s (Newton-Raphson) method - convergence

Question: Under which conditions is Newton’s method convergent?

Suppose that in some vicinity I of the root it holds that

    ½ |f″(y)/f′(x)| ≤ m   for all x ∈ I, y ∈ I.

If x_k ∈ I, then from (4) it follows that

    e_{k+1} ≤ m e_k²,   or   m e_{k+1} ≤ (m e_k)².

Repeating this idea we get

    m e_{k+1} ≤ (m e_k)² ≤ (m e_{k−1})⁴ ≤ (m e_{k−2})⁸ ≤ … ≤ (m e_0)^(2^(k+1)).

If m e_0 < 1, then surely e_{k+1} → 0 and therefore x_{k+1} → x*.

Newton’s method always converges
if the initial approximation is sufficiently close to the root.
Combined method

A good initial approximation x_0 can be obtained by the bisection method.

The combination of bisection and Newton’s method leads to
a combined method,
which is always convergent;
see e.g. the procedure rtsafe from Numerical Recipes.

Newton’s method is applied only in the vicinity of the root;
otherwise a bisection step is used.
This assures fast convergence.
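A minimal sketch of such a safeguarded iteration (an illustrative analogue of rtsafe, not its actual code): keep a bracketing interval, take the Newton step when it stays inside the bracket, and bisect otherwise.

def safe_newton(f, fprime, a, b, tol=1e-12, max_iter=100):
    """Newton step when it stays inside the bracket (a, b); bisection otherwise."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("the root must be bracketed: f(a) f(b) < 0")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        if dfx != 0.0 and a < x - fx / dfx < b:
            x_new = x - fx / dfx           # Newton step stays in the bracket
        else:
            x_new = 0.5 * (a + b)          # fall back to bisection
        f_new = f(x_new)
        if fa * f_new < 0:
            b = x_new                      # root in (a, x_new)
        else:
            a, fa = x_new, f_new           # root in (x_new, b)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x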
Steffensen’s method

Steffensen’s method is a modified Newton’s method

    x_{k+1} = x_k − f(x_k) / f′(x_k),

where the derivative f′ is approximated by

    f′(x_k) ≈ (f(x_k + h_k) − f(x_k)) / h_k,

and h_k is a number which tends to zero as k grows.
We choose h_k = f(x_k).

Unlike the secant method, we need one more function evaluation per step.
However, it is possible to show that
the rate of convergence is the same
as in Newton’s method, i.e., quadratic.
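A minimal sketch of Steffensen's iteration with the choice h_k = f(x_k) (names and tolerance are illustrative):

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Newton-like step with f'(x_k) replaced by a forward difference, h_k = f(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        h = fx                              # h_k = f(x_k), tends to 0 near the root
        slope = (f(x + h) - fx) / h         # difference approximation of f'(x_k)
        if slope == 0.0:
            break                           # approximate derivative vanished
        x_new = x - fx / slope
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x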
Functional analysis

Metric space

A metric space is an ordered pair (X, d), where X is a set and
d is a metric on X, such that for any x, y, z ∈ X the following
holds:
1. d(x, y) ≥ 0
2. d(x, y) = 0 ⇔ x = y
3. d(x, y) = d(y, x)
4. d(x, z) ≤ d(x, y) + d(y, z)

Convergence: a sequence converges to a limit if, for every
distance ε > 0, from some index on
all subsequent elements are closer to
the limit than ε.
Functional analysis

Cauchy sequence (a term from functional analysis): a sequence whose
elements become arbitrarily close to each other as the sequence progresses.

Definition: A metric space is complete if
each Cauchy sequence has a limit in the space.

Definition: The element x ∈ X is called a fixed point
of a mapping F : X → X
if F(x) = x.
Functional analysis

Contraction mapping: the images of two elements are closer together than the originals,

    ∀ x, y ∈ X:  d(F(x), F(y)) ≤ α d(x, y),   α ∈ [0, 1).
Functional analysis

Banach fixed-point theorem:

Let (X, d) be a non-empty complete metric space with a contraction
mapping g : X → X. Then g admits a unique fixed point x* in X.
Furthermore, x* can be found as follows:
start with an arbitrary element x_0 in X and define
a sequence {x_n} by x_n = g(x_{n−1}); then x_n → x*.
Fixed-point iteration

What is it good for? Suppose we want to solve f(x) = 0.

Let us rewrite f(x) = 0 as

    f(x)/h(x) + x = x,   assuming h(x) ≠ 0,

and denote g(x) := f(x)/h(x) + x.

We get a fixed-point problem for g(x):
the solution of g(x_p) = x_p
is a root of f(x_p) = 0.

The function g is called the iterative function.

We choose the initial approximation x_0, and
the next iterations will be x_{k+1} = g(x_k).
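A minimal sketch of the fixed-point iteration itself (the iterative function, starting point, and tolerance are illustrative):

import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: x = cos x, a fixed-point form of f(x) = cos x - x = 0
print(fixed_point(math.cos, 1.0))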
Fixed-point iteration

This approach does not always lead to the fixed point of g.


Fixed-point iteration

We said that
the fixed-point iteration method converges
if the iterative function is
a contraction mapping.

In the case of a function of one variable,
being a contraction closely relates to
the rate of increase of the function,
i.e. to the size of its derivative.
Fixed-point iteration

Theorem:
Let the function g map an interval [a, b] into itself
and let g be differentiable on this interval.
If there exists a number α ∈ [0, 1) such that

    |g′(x)| ≤ α   ∀ x ∈ [a, b],

then there exists a fixed point x* of the function g in [a, b], and the
sequence of iterations

    x_{k+1} = g(x_k)

converges to the fixed point for any initial approximation x_0 ∈ [a, b].
Furthermore, it holds that

    |x_k − x*| ≤ α/(1 − α) · |x_k − x_{k−1}|.

It is then possible to show that the convergence is linear.


Fixed-point iteration

There are many ways to express x from f(x) = 0.

One possibility is to divide the equation f(x) = 0 by its derivative f′,
then multiply the equation by −1,
and finally add x to both sides of the equation.
We get

    x = x − f(x)/f′(x).

Newton’s method is thus a special case
of the fixed-point iteration method.
Aitken Extrapolation

Recall the definition of the rate of convergence:
let e_k = |x_k − x*|; if lim_{k→∞} e_{k+1}/e_k^p = C ≠ 0,
then p is the order of convergence and C the error constant.

Suppose the linear convergence of an iterative method

    x_{k+1} = g(x_k),

i.e. it holds that

    lim_{k→∞} |x_k − x*| / |x_{k−1} − x*| = C,   with C < 1.
Aitken Extrapolation

We can speed up the convergence of fixed-point iteration as follows.

Suppose that k ≫ 1. Then it approximately holds that

    x_k − x* ≈ C (x_{k−1} − x*),
    x_{k+1} − x* ≈ C (x_k − x*),

from which we can express the fixed point x*:

    (x_k − x*)/(x_{k+1} − x*) ≈ (x_{k−1} − x*)/(x_k − x*)
    (x_k − x*)² ≈ (x_{k+1} − x*)(x_{k−1} − x*)
    x_k² − 2 x_k x* + x*² ≈ x_{k+1} x_{k−1} − x* (x_{k+1} + x_{k−1}) + x*²
    x_k² − x_{k+1} x_{k−1} ≈ −x* (x_{k+1} − 2 x_k + x_{k−1})

    x* ≈ (x_{k+1} x_{k−1} − x_k²)/(x_{k+1} − 2 x_k + x_{k−1})
       = x_{k−1} − (x_k − x_{k−1})²/(x_{k+1} − 2 x_k + x_{k−1}),

where x_k = g(x_{k−1}), x_{k+1} = g(x_k) = g(g(x_{k−1})).

This way we can define a new iterative formula

    x_{k+1} = x_k − (g(x_k) − x_k)² / (g(g(x_k)) − 2 g(x_k) + x_k).

We have obtained the Aitken-Steffensen iterative method
for finding the fixed point x* of the function g(x).
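A minimal sketch of the Aitken-Steffensen iteration (names and tolerance are illustrative):

import math

def aitken_steffensen(g, x0, tol=1e-12, max_iter=100):
    """x_{k+1} = x_k - (g(x_k) - x_k)^2 / (g(g(x_k)) - 2 g(x_k) + x_k)."""
    x = x0
    for _ in range(max_iter):
        g1 = g(x)                        # g(x_k)
        g2 = g(g1)                       # g(g(x_k))
        denom = g2 - 2.0 * g1 + x
        if denom == 0.0:                 # extrapolation undefined; accept plain step
            return g1
        x_new = x - (g1 - x) ** 2 / denom
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: fixed point of g(x) = cos x, accelerated
print(aitken_steffensen(math.cos, 1.0))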
Aitken Extrapolation

If the initial approximation x_0
is close enough to the fixed point x* and
if g′(x*) ≠ 1,
then the Aitken-Steffensen method converges quadratically.

If g′(x*) = 1, the convergence of this method is slow.
A few notes

Note (about the multiplicity of roots)

We say that the root x* of the equation f(x) = 0 has multiplicity q
if the function

    g(x) = f(x) / (x − x*)^q

is defined at the point x* and has a finite nonzero value there, i.e. if

    0 < |g(x*)| < ∞.

If the function f(x) has continuous derivatives up to order q
in the vicinity of the root x*, then

    f^(j)(x*) = 0,   j = 0, 1, …, q − 1.

Some of the previously mentioned methods can be applied
to finding multiple roots, but the convergence is slower.

If we expect multiple roots,
it is advisable to use the fact that
the function u(x) = f(x)/f′(x) has a simple root at x*; see the sketch below.
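A minimal sketch of Newton's method applied to u(x) = f(x)/f′(x), which additionally requires f″; all names and the test function are illustrative:

def newton_multiple(f, fp, fpp, x0, tol=1e-10, max_iter=100):
    """Newton on u = f/f'; since u' = 1 - f f''/f'^2, the step is u/u'."""
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fp(x)
        if fx == 0.0 or fpx == 0.0:        # landed on the root (or a stationary point)
            return x
        u = fx / fpx
        up = 1.0 - fx * fpp(x) / fpx ** 2
        if up == 0.0:
            break                          # step undefined
        x_new = x - u / up
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x - 1)^2 has a double root at x = 1
print(newton_multiple(lambda x: (x - 1) ** 2,
                      lambda x: 2 * (x - 1),
                      lambda x: 2.0,
                      3.0))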
A few notes

Note (on achievable accuracy)

Let x_k be an approximation of a simple root of the equation f(x) = 0.
Using the mean value theorem we get

    f(x_k) = f(x_k) − f(x*) = f′(ξ)(x_k − x*),

where ξ is some point between x_k and x*.
Suppose that we work with approximate values

    f̃(x_k) = f(x_k) + δ_k,   where |δ_k| ≤ δ.

Then the best accuracy we can achieve is f̃(x_k) = 0.
In that case |f(x_k)| ≤ δ, so

    |x_k − x*| = |f(x_k)|/|f′(ξ)| ≤ δ/|f′(ξ)| ≈ δ/|f′(x*)| =: δ_{x*},

since f′ is nearly constant in the vicinity of the root.

It is impossible to compute x* with an error smaller than δ_{x*}.
A few notes

Note (on achievable accuracy)

If the slope f′(x*) at the root x* is small,
then the achievable accuracy δ_{x*} is large:
an ill-conditioned problem.
A few notes

Note (on achievable accuracy)

A similar consideration for a root of multiplicity q
implies the achievable accuracy

    δ_{x*} = ( δ · q! / |f^(q)(x*)| )^(1/q).

The exponent 1/q causes the computation of a multiple root
to be, in general, an ill-conditioned task.
Literature

• Press, W. H., Flannery, B. P., Teukolsky, S. A., Vetterling, W. T.:
  Numerical Recipes in Fortran: The Art of Scientific Computing.
  Cambridge University Press, 1990.

• Hämmerlin, G., Hoffmann, K. H.:
  Numerical Mathematics.
  Springer-Verlag, Berlin, 1991.

• Quarteroni, A., Sacco, R., Saleri, F.:
  Numerical Mathematics.
  Springer, Berlin, 2000.
