1.1: The Bisection Method (MA385/530 Numerical Analysis, September 2019)

The document introduces numerical methods for solving nonlinear equations, as linear and simple nonlinear equations can be solved with formulas but most require numerical approaches. It focuses on the bisection method, which iteratively narrows the interval containing the solution by bisecting each interval and testing where the function changes sign, guaranteeing convergence. Examples demonstrate applying the bisection method to find solutions to within a given accuracy.


(1/58)

Solving nonlinear equations


§1.1: The bisection method
MA385/530 – Numerical Analysis
September 2019

[Figure: graph of f(x) = x² − 2, with the first bisection points
x[0] = a, x[2] = 1, x[3] = 1.5 and x[1] = b marked on the x-axis.]
§1 Solving nonlinear equations Intro (2/58)

Linear equations are of the form:

find x such that ax + b = 0

and are easy to solve. Some nonlinear problems are also easy to
solve, e.g.,

find x such that ax² + bx + c = 0.

Similarly, there are formulae for all cubic and quartic polynomial
equations. But most equations do not have simple formulae for
their solutions, so numerical methods are needed.
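By way of contrast, the quadratic formula above can be coded directly. A minimal Python sketch (the function name and the handling of the no-real-root case are our own choices):

```python
import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# x^2 - 3x + 2 = 0 has roots 1 and 2
print(solve_quadratic(1, -3, 2))  # prints [1.0, 2.0]
```

No such closed-form evaluation is available for a general f, which is where the iterative methods below come in.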
§1 Solving nonlinear equations Intro (3/58)

References
Chap. 1 of Süli and Mayers (Introduction to Numerical
Analysis). We’ll follow this pretty closely in lectures, though
we will do the sections in reverse order!
Stewart (Afternotes ...), Lectures 1–5. A well-presented
introduction, with lots of diagrams to build intuition.
Chapter 4 of Moler’s “Numerical Computing with MATLAB”.
Gives a brief introduction to the methods we study, and
description of MATLAB functions for solving these problems.
The proof of the convergence of Newton’s Method is based on
the presentation in Thm 3.2 of Epperson.
§1 Solving nonlinear equations Intro (4/58)

Our generic problem is:

Let f be a continuous function on the interval [a, b].


Find τ ∈ [a, b] such that f (τ ) = 0.

Here f is some specified function, and τ is the solution to


f (x ) = 0.
This leads to two natural questions:

(1) How do we know there is a solution?


(2) How do we find it?
§1 Solving nonlinear equations Intro (5/58)

The following gives sufficient conditions for the existence of a


solution:
Theorem 1.1
Let f be a real-valued function that is defined and continuous on a
bounded closed interval [a, b] ⊂ R. Suppose that f (a)f (b) ≤ 0.
Then there exists τ ∈ [a, b] such that f (τ ) = 0.
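The sign condition of Theorem 1.1 is easy to check in code. A minimal sketch in Python, using the function f(x) = x² − 2 from Example 1.2:

```python
def f(x):
    return x * x - 2  # f(x) = x^2 - 2, continuous on [0, 2]

a, b = 0.0, 2.0
# Theorem 1.1: f(a)f(b) <= 0 guarantees some tau in [a, b] with f(tau) = 0
print(f(a) * f(b) <= 0)  # f(0) = -2 and f(2) = 2, so the product is -4; prints True
```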
§1 Solving nonlinear equations Intro (6/58)

So now we know there is a solution τ to f(x) = 0, but how do we
actually solve it? Usually we don't! Instead we construct a
sequence of estimates {x0, x1, x2, x3, . . .} that converges to the
true solution. So now we have to answer these questions:

(1) How can we construct the sequence x0 , x1 , . . . ?


(2) How do we show that limk→∞ xk = τ ?
§1 Solving nonlinear equations Intro (7/58)

There are some subtleties here, particularly with part (2). What we
would like to say is that at each step the error is getting smaller.
That is

|τ − xk | < |τ − xk−1 | for k = 1, 2, 3, . . . .

But we can't. Usually all we can say is that the bound on the
error is getting smaller. That is: let εk be a bound on the error
at step k,
|τ − xk | < εk ,
then εk+1 < µεk for some number µ ∈ (0, 1). It is easiest to
explain this in terms of an example, so we’ll study the simplest
method: Bisection.
Bisection (8/58)

The most elementary algorithm is the “Bisection Method” (also


known as “Interval Bisection”). Suppose that we know that f
changes sign on the interval [a, b] = [x0 , x1 ] and, thus, f (x ) = 0
has a solution, τ , in [a, b]. Proceed as follows

1. Set x2 to be the midpoint of the interval [x0 , x1 ].


2. Choose the one of the sub-intervals [x0, x2] and [x2, x1] on which f
changes sign;
3. Repeat Steps 1–2 on that sub-interval, until f is sufficiently
small at the end points of the interval.
Bisection (9/58)

This may be expressed more precisely using some pseudocode.


The Bisection Algorithm
Set eps to be the stopping criterion.
If |f (a)| ≤ eps, return a. Exit.
If |f (b)| ≤ eps, return b. Exit.
Set xL = a and xR = b.
Set k = 1
while( |f(xk)| > eps )
    xk+1 = (xL + xR)/2;
    if ( f(xL) f(xk+1) < 0 )
        xR = xk+1;
    else
        xL = xk+1;
    end if;
    k = k + 1;
end while;
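The pseudocode translates directly into a short program. A sketch in Python (our own variable names; the sign test compares f(xL)f(xk+1) with zero to pick the sub-interval where f changes sign):

```python
def bisection(f, a, b, eps):
    """Bisection method following the pseudocode above.

    Returns an approximation x with |f(x)| <= eps, assuming f(a)f(b) <= 0.
    """
    if abs(f(a)) <= eps:
        return a
    if abs(f(b)) <= eps:
        return b
    xL, xR = a, b
    xk = b  # x1 = b in the notation above
    while abs(f(xk)) > eps:
        xk = (xL + xR) / 2  # x_{k+1}: midpoint of the current interval
        if f(xL) * f(xk) < 0:  # f changes sign on [xL, x_{k+1}]
            xR = xk
        else:
            xL = xk
    return xk

# Example 1.2: solve x^2 - 2 = 0 on [0, 2]
root = bisection(lambda x: x * x - 2, 0.0, 2.0, 1e-12)
print(root)
```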
Bisection (10/58)
Example 1.2

Find an estimate for √2 that is correct to 6 decimal places.
Solution: Use bisection to solve f(x) := x² − 2 = 0 on the
interval [0, 2].

[Figure: graph of f(x) = x² − 2 with the first bisection points
x[0] = a, x[2] = 1, x[3] = 1.5 and x[1] = b marked on the x-axis.]
Bisection (11/58)

Find an estimate for √2 that is correct to 6 decimal places.
Solution: Use bisection to solve f(x) := x² − 2 = 0 on the
interval [0, 2].

 k    xk          |xk − τ|    |xk − xk−1|

 0    0.000000    1.41
 1    2.000000    5.86e-01
 2    1.000000    4.14e-01    1.00
 3    1.500000    8.58e-02    5.00e-01
 4    1.250000    1.64e-01    2.50e-01
 5    1.375000    3.92e-02    1.25e-01
 6    1.437500    2.33e-02    6.25e-02
 7    1.406250    7.96e-03    3.12e-02
 8    1.421875    7.66e-03    1.56e-02
 9    1.414062    1.51e-04    7.81e-03
10    1.417969    3.76e-03    3.91e-03
...   ...         ...         ...
22    1.414214    5.72e-07    9.54e-07
The bisection method works (12/58)

The main advantages of the Bisection method are

It will always work.


After k steps we know that
Theorem 1.3

|τ − xk| ≤ (1/2)^(k−1) |b − a|,    for k = 2, 3, 4, . . .
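We can check the bound of Theorem 1.3 numerically for Example 1.2. A sketch in Python (re-running the bisection on [0, 2] by hand and testing the inequality at each midpoint):

```python
# Verify |tau - x_k| <= (1/2)**(k-1) * (b - a) for f(x) = x^2 - 2 on [0, 2]
tau = 2 ** 0.5
a, b = 0.0, 2.0
f = lambda x: x * x - 2
xL, xR = a, b
for k in range(2, 23):
    xk = (xL + xR) / 2  # for k >= 2, x_k is the midpoint of the current interval
    assert abs(tau - xk) <= 0.5 ** (k - 1) * (b - a)  # Theorem 1.3
    if f(xL) * f(xk) < 0:  # keep the sub-interval where f changes sign
        xR = xk
    else:
        xL = xk
print("bound holds for k = 2, ..., 22")
```

The final midpoint x22 agrees with the last row of the table above: its error is about 5.7e-07, comfortably below (1/2)^21 · 2 ≈ 9.5e-07.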
Improving upon bisection (13/58)

A disadvantage of bisection is that it is not particularly efficient.


So our next goal will be to derive better methods, particularly the
Secant Method and Newton's Method. We also have to come
up with some way of expressing what we mean by “better”; and
we’ll have to use Taylor’s theorem in our analyses.
Exercises (14/58)
Exercise 1.1
Does Theorem 1.1 mean that, if there is a solution to
f(x) = 0 in [a, b], then f(a)f(b) ≤ 0? That is, is f(a)f(b) ≤ 0 a
necessary condition for there being a solution to f(x) = 0? Give an
example that supports your answer.

Exercise 1.2
Suppose we want to find τ ∈ [a, b] such that f (τ ) = 0 for some
given f , a and b. Write down an estimate for the number of
iterations K required by the bisection method to ensure that, for a
given ε, we know |xk − τ | ≤ ε for all k ≥ K . In particular, how
does this estimate depend on f , a and b?
Exercises (15/58)
Exercise 1.3
How many (decimal) digits of accuracy are gained at each step of
the bisection method? (If you prefer, how many steps are needed
to gain a single (decimal) digit of accuracy?)

Exercise 1.4
Let f(x) = e^x − 2x − 2. Show that there is a solution to the
problem: find τ ∈ [0, 2] such that f (τ ) = 0.
Taking x0 = 0 and x1 = 2, use 6 steps of the bisection method to
estimate τ . You may use a computer program to do this, but
please note that in your solution.
Give an upper bound for the error |τ − x6 |.
Exercises (16/58)
Exercise 1.5

We wish to estimate τ = ∛4 numerically by solving f(x) = 0 in
[a, b] for some suitably chosen f , a and b.
(i) Suggest suitable choices of f , a, and b for this problem.
(ii) Show that f has a zero in [a, b].

(iii) Use 6 steps of the bisection method to estimate ∛4. You
may use a computer program to do this, but please note that
in your solution.
(iv) Use Theorem 1.3 to give an upper bound for the error
|τ − x6 |.
