Numerical Methods: Solving Nonlinear Equations
Lecture 2
CONTENTS
1. Introduction
2. Root separation and estimation of initial approximation
3. Bisection method
4. Rate of convergence
5. Regula falsi (false position) method
6. Secant method
7. Newton’s (Newton-Raphson) method
8. Steffensen’s method
9. Fixed-point iteration
10. Aitken Extrapolation
11. A few notes
12. Literature
Introduction
Iterative methods: starting from one or several initial approximations (guesses) of the root x*, we generate a sequence of approximations x_0, x_1, x_2, … that converges to the root x*.
Root separation and estimation of initial approximation
If f is continuous on [a, b] and f(a) · f(b) < 0, then the interval (a, b) contains a root x*.
Example: Separate the roots of the equation e^x + x^2 - 3 = 0.
Solution: Rearrange the equation as e^x = 3 - x^2 and compare the graphs of y = e^x and y = 3 - x^2; the x-coordinates of their intersections are the roots.
Bisection method
At each step take the midpoint x_{k+1} = (a_k + b_k)/2 and set

(a_{k+1}, b_{k+1}) = (a_k, x_{k+1})   if f(a_k) f(x_{k+1}) < 0,
(a_{k+1}, b_{k+1}) = (x_{k+1}, b_k)   if f(a_k) f(x_{k+1}) > 0.

From the construction of (a_{k+1}, b_{k+1}) it follows that f(a_{k+1}) f(b_{k+1}) < 0, so each interval (a_k, b_k) contains a root.
The length of the k-th interval is

I_k = b_k - a_k = (b_{k-1} - a_{k-1}) / 2 = … = 2^{-k} (b_0 - a_0).
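The halving rule above can be sketched in Python (a minimal illustration; the function name `bisect`, the tolerance, and the iteration cap are choices of this sketch, not part of the lecture):

```python
import math

def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: assumes f is continuous on [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x = (a + b) / 2          # x_{k+1} = (a_k + b_k) / 2
        fx = f(x)
        if fx == 0 or (b - a) / 2 < tol:
            return x
        if fa * fx < 0:          # root in (a_k, x_{k+1})
            b, fb = x, fx
        else:                    # root in (x_{k+1}, b_k)
            a, fa = x, fx
    return (a + b) / 2

# The example e^x + x^2 - 3 = 0 has a root in (0, 1):
root = bisect(lambda x: math.exp(x) + x * x - 3, 0.0, 1.0)
```

Each step halves the bracketing interval, so after k steps the error is at most 2^{-k} (b_0 - a_0).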
Rate of convergence
Let x_0, x_1, x_2, … be a sequence that converges to x* and let e_k = x_k - x*. If there exist a number p and a constant C ≠ 0 such that

lim_{k→∞} |e_{k+1}| / |e_k|^p = C,   (3)

then p is called the order of convergence and C the error constant. We say that convergence is
linear, if p = 1 and C < 1,
superlinear, if p > 1,
quadratic, if p = 2.
We say that the method converges with order p if every convergent sequence obtained by the method has order of convergence greater than or equal to p, and at least one of them has order of convergence exactly equal to p.
For the bisection method, take the worst-case error e_k = x_k - x* = 2^{-k} (b_0 - a_0). Then

lim_{k→∞} |x_{k+1} - x*| / |x_k - x*|^p = lim_{k→∞} 2^{-k-1} (b_0 - a_0) / [2^{-k} (b_0 - a_0)]^p = lim_{k→∞} (1/2) [2^k / (b_0 - a_0)]^{p-1},

which is finite and nonzero only for

p = 1,  C = 1/2,

so bisection converges linearly.
Regula falsi (false position) method
The next approximation is the intersection of the chord through (a_k, f(a_k)) and (b_k, f(b_k)) with the x-axis:

x_{k+1} = b_k - f(b_k) (b_k - a_k) / (f(b_k) - f(a_k)),

(a_{k+1}, b_{k+1}) = (a_k, x_{k+1})   if f(a_k) f(x_{k+1}) < 0,
(a_{k+1}, b_{k+1}) = (x_{k+1}, b_k)   if f(a_k) f(x_{k+1}) > 0.

From the construction of (a_{k+1}, b_{k+1}) it follows that f(a_{k+1}) f(b_{k+1}) < 0, so each interval (a_k, b_k) contains a root.
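A minimal Python sketch of the false-position update (function name and tolerances are illustrative choices of this sketch):

```python
import math

def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """False position: keeps a sign-changing bracket like bisection,
    but takes the chord intersection instead of the midpoint."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # chord through (a, fa), (b, fb)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

root = regula_falsi(lambda x: math.exp(x) + x * x - 3, 0.0, 1.0)
```

Unlike bisection, the interval length need not shrink to zero (one endpoint can stay fixed when f is convex or concave), so this sketch stops on |f(x)| rather than on b - a.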
Secant method
Denote x_0 = a and x_1 = b, drop the sign condition, and always keep the two most recent points:

x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})).

The method converges if the initial points x_0 and x_1 are close enough to the root x*.
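A minimal Python sketch of the secant iteration (names and tolerances are illustrative):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant iteration: the chord formula applied to the two most
    recent points, with no sign condition on f."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # flat chord: cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: math.exp(x) + x * x - 3, 0.0, 1.0)
```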
Newton’s (Newton-Raphson) method
The tangent to f at x_k is

y = f(x_k) + f'(x_k)(x - x_k);

setting y = 0 gives its intersection with the x-axis:

x_{k+1} = x_k - f(x_k) / f'(x_k).
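The iteration above, sketched in Python (function names and stopping rule are choices of this sketch):

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x_k) = 0: tangent is horizontal")
        step = f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# e^x + x^2 - 3 = 0, with derivative f'(x) = e^x + 2x:
root = newton(lambda x: math.exp(x) + x * x - 3,
              lambda x: math.exp(x) + 2 * x,
              1.0)
```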
Newton’s (Newton-Raphson) method - convergence
Expand f by Taylor’s theorem about x_k at the root x*: for some ξ between x_k and x*,

0 = f(x*) = f(x_k) + f'(x_k)(x* - x_k) + (1/2) f''(ξ) (x* - x_k)^2.

Dividing by f'(x_k) and rearranging,

-(1/2) (x* - x_k)^2 f''(ξ) / f'(x_k) = f(x_k)/f'(x_k) + (x* - x_k)
                                     = x* - [x_k - f(x_k)/f'(x_k)] = x* - x_{k+1},

so with e_k = x_k - x*,

e_{k+1} = (1/2) (f''(ξ) / f'(x_k)) e_k^2.   (4)
After passing to the limit (x_k → x*, hence ξ → x*), (4) gives

lim_{k→∞} |e_{k+1}| / |e_k|^2 = |f''(x*)| / (2 |f'(x*)|),

so Newton’s method converges quadratically (p = 2), provided f'(x*) ≠ 0.
Recall the definition of the rate of convergence: let x_0, x_1, x_2, … be a sequence which converges to x* and e_k = x_k - x*. If there exist a number p and a constant C ≠ 0 such that

lim_{k→∞} |e_{k+1}| / |e_k|^p = C,

then p is called the order of convergence and C the error constant.
Choose m such that

(1/2) |f''(y)| / |f'(x)| ≤ m   for all x ∈ I, y ∈ I.

If x_k ∈ I, then from (4) it follows that

|e_{k+1}| ≤ m |e_k|^2,  or  m |e_{k+1}| ≤ (m |e_k|)^2.

Repeating this idea we get

m |e_{k+1}| ≤ (m |e_k|)^2 ≤ (m |e_{k-1}|)^4 ≤ (m |e_{k-2}|)^8 ≤ … ≤ (m |e_0|)^{2^{k+1}}.

If m |e_0| < 1, then certainly e_{k+1} → 0 and therefore x_{k+1} → x*.
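The quadratic error relation (4) can also be observed numerically. In this sketch (the setup and the starting points are illustrative, not from the slides), the ratios e_{k+1} / e_k^2 for the example e^x + x^2 - 3 = 0 settle near |f''(x*)| / (2 |f'(x*)|):

```python
import math

f = lambda x: math.exp(x) + x * x - 3
df = lambda x: math.exp(x) + 2 * x

# High-accuracy reference root via many Newton steps.
xs = 2.0
for _ in range(60):
    xs -= f(xs) / df(xs)

# Errors of a fresh Newton run started at x_0 = 1.
x, errors = 1.0, []
for _ in range(5):
    errors.append(abs(x - xs))
    x -= f(x) / df(x)

# Ratios e_{k+1} / e_k^2 should approach |f''(x*)| / (2 |f'(x*)|);
# skip terms already at roundoff level.
ratios = [errors[k + 1] / errors[k] ** 2
          for k in range(len(errors) - 1) if errors[k] > 1e-8]
expected = (math.exp(xs) + 2) / (2 * (math.exp(xs) + 2 * xs))
```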
Steffensen’s method
Start from Newton’s iteration

x_{k+1} = x_k - f(x_k) / f'(x_k),

where the derivative f' is approximated by the forward difference

f'(x_k) ≈ (f(x_k + h_k) - f(x_k)) / h_k,

and h_k is a number that tends to zero as k grows. We choose h_k = f(x_k).
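A minimal sketch of the resulting iteration (helper name and tolerances are illustrative). Substituting h_k = f(x_k) into the difference quotient gives x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k)):

```python
import math

def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Steffensen iteration: Newton's method with f'(x_k) replaced by
    the forward difference with step h_k = f(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        h = fx                       # h_k = f(x_k), tends to 0 near the root
        denom = f(x + h) - fx        # ≈ f'(x_k) * h_k
        if denom == 0:
            break
        step = fx * h / denom        # = f(x_k) / (difference quotient)
        x -= step
        if abs(step) < tol:
            break
    return x

root = steffensen(lambda x: math.exp(x) + x * x - 3, 1.0)
```

No derivative is needed, at the price of one extra function evaluation per step.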
Functional analysis
A mapping F of a metric space (M, d) into itself is a contraction if there exists a number α ∈ [0, 1) such that

d(F(x), F(y)) ≤ α d(x, y)   for all x, y ∈ M.
A fixed point x_p of the iterative function g, i.e. a point with x_p = g(x_p), is a root of f(x_p) = 0. We say that the fixed-point iteration method converges if the iterative function is a contraction mapping.
Theorem: Let the function g map the interval [a, b] into itself and be differentiable on this interval. If there exists a number α ∈ [0, 1) such that

|g'(x)| ≤ α   for all x ∈ [a, b],

then there exists a fixed point x* of g in [a, b], and the sequence of iterations

x_{k+1} = g(x_k)

converges to the fixed point for any initial approximation x_0 ∈ [a, b]. Moreover,

|x_k - x*| ≤ (α / (1 - α)) |x_k - x_{k-1}|.
For example, Newton’s method is a fixed-point iteration with the iterative function g(x) = x - f(x)/f'(x), i.e. the equation f(x) = 0 is rewritten as

x = x - f(x) / f'(x).
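A minimal fixed-point iteration sketch, applied to Newton’s iterative function for the example e^x + x^2 - 3 = 0 (function names and stopping rule are choices of this sketch):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = x - f(x)/f'(x) for f(x) = e^x + x^2 - 3:
g = lambda x: x - (math.exp(x) + x * x - 3) / (math.exp(x) + 2 * x)
root = fixed_point(g, 1.0)
```

Near a simple root this g satisfies g'(x*) = 0 (since g' = f f''/f'^2), so the contraction condition |g'(x)| ≤ α < 1 holds on a neighbourhood of x*, which is why Newton’s method converges so quickly there.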
Aitken Extrapolation
Recall: if lim_{k→∞} |e_{k+1}| / |e_k|^p = C, then p is the order of convergence and C the error constant. For a linearly convergent sequence (p = 1),

lim_{k→∞} |x_k - x*| / |x_{k-1} - x*| ≤ C < 1,

so for large k, x_k - x* ≈ C (x_{k-1} - x*). Eliminating the unknowns C and x* from three consecutive iterates yields the Aitken Δ² (extrapolation) formula

x̂_k = x_k - (x_{k+1} - x_k)^2 / (x_{k+2} - 2 x_{k+1} + x_k),

and the sequence x̂_k converges to x* faster than x_k.
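The Δ² formula can be sketched as follows (a minimal illustration; the linearly convergent example x_{k+1} = cos(x_k) is an assumption of this sketch, not from the slides):

```python
import math

def aitken(seq):
    """Apply Aitken's delta-squared formula to consecutive triples of seq."""
    acc = []
    for k in range(len(seq) - 2):
        x0, x1, x2 = seq[k], seq[k + 1], seq[k + 2]
        denom = x2 - 2 * x1 + x0
        if denom == 0:           # already (numerically) converged
            acc.append(x2)
        else:
            acc.append(x0 - (x1 - x0) ** 2 / denom)
    return acc

# Linearly convergent fixed-point iteration x_{k+1} = cos(x_k):
seq = [1.0]
for _ in range(11):
    seq.append(math.cos(seq[-1]))
accelerated = aitken(seq)
```

The accelerated terms land far closer to the fixed point x* ≈ 0.739085 than the raw iterates of the same index.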
A few notes
If x* is a root of multiplicity q and the values of f can be computed only with an error of size ε, then the accuracy attainable by any method is limited to roughly

|x - x*| ≈ (ε · q! / |f^(q)(x*)|)^{1/q}.
Literature