
MATH3230B Numerical Analysis

Tutorial 2

1 Recall:
1. Bisection method:
The bisection algorithm is based on the intermediate value theorem:

Theorem 1 (Intermediate value theorem). Let f(x) be continuous on [a, b]. Then for any real number g that lies between f(a) and f(b), there exists ζ ∈ [a, b] such that f(ζ) = g.

We then have the following consequence:


Theorem 2. If f is continuous on [a, b] and f(a)f(b) < 0, then there exists at least one solution x∗ ∈ (a, b) such that f(x∗) = 0.

For the algorithm, please read Page 14 of the Lecture notes.


2. Convergence of Bisection Algorithm:
Theorem 3. Let f(x) be a continuous function on [a, b] such that f(a)f(b) < 0. Then the bisection algorithm always converges to a solution x∗ of the equation f(x) = 0, and the following error estimate holds for the k-th approximate value x_k:

    |x_k − x∗| ≤ (1/2)(b_k − a_k) = 2^{−(k+1)}(b − a).
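
For concreteness, here is a minimal Python sketch of the bisection algorithm with a stopping rule based on the error estimate above; the function name bisect, the tolerance argument, and the iteration cap are illustrative choices, not part of the lecture notes.

    def bisect(f, a, b, tol=1e-6, max_iter=100):
        """Bisection method: assumes f continuous on [a, b] with f(a)*f(b) < 0."""
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for k in range(max_iter):
            x = (a + b) / 2            # midpoint x_k of [a_k, b_k]
            if f(a) * f(x) <= 0:       # sign change in [a, x]
                b = x
            else:                      # sign change in [x, b]
                a = x
            if (b - a) / 2 < tol:      # error estimate: |x - x*| <= (b - a)/2
                break
        return (a + b) / 2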
3. Newton’s method:
Newton’s method is an iterative method: it begins with an initial guess x_0 and produces successive approximate values

    x_1, x_2, x_3, . . . , x_k, . . .

The Newton iteration reads:

    x_{k+1} = x_k − f(x_k)/f'(x_k),   k = 0, 1, 2, . . .

Assuming that f (x) is continuously differentiable up to second order near x = x∗ , we have the following
results:

Theorem 4 (Local convergence of Newton’s method). There exists δ > 0 such that, whenever the initial guess x_0 satisfies |x_0 − x∗| ≤ δ, all iterates x_k generated by Newton’s method stay in the region |x − x∗| ≤ δ and converge to x∗.
Theorem 5 (Quadratic convergence of Newton’s method). Newton’s method is a locally convergent iterative method, and it converges quadratically when the initial guess x_0 is taken within the convergence region.
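
Below is a minimal Python sketch of the Newton iteration defined above; the name newton, the caller-supplied derivative df, and the stopping tolerance are illustrative assumptions rather than anything prescribed by the notes.

    def newton(f, df, x0, tol=1e-10, max_iter=50):
        """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            dfx = df(x)
            if dfx == 0:
                raise ZeroDivisionError("f'(x_k) = 0: the iteration breaks down")
            x_new = x - f(x) / dfx
            if abs(x_new - x) < tol:   # stop when successive iterates agree
                return x_new
            x = x_new
        return x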

2 Exercises:
1. * Please read the Lecture Notes for the bisection method and answer the questions.
Consider the following nonlinear equation:

    f(x) = 0,   x ∈ I,                    (1)

where I is an interval and f : I → R is a nonlinear function.


(a) Please state the assumptions on f such that the bisection algorithm works.
(b) Assume the assumptions you mentioned in (a) are satisfied. Prove the convergence of the bisection
method and derive its error estimate.
(c) Let us consider the equation

    f(x) := x^3 + 5x^2 − 3 = 0                    (2)

and use I = [0, 1] as the initial interval.
i. Find the minimum number of iterations required to approximate the solution with an absolute error of less than 10^{−6}.
ii. Let {x_n}_{n=0}^∞ be the sequence generated by the bisection method. Please calculate the values of x_0, x_1 and x_2.

Solution. (a) 1. f is continuous on I; 2. For I = [a, b], f (a)f (b) ≤ 0.


(b) Let [a_n, b_n] denote the interval in the n-th iteration. Since a_{n+1} = (1/2)(a_n + b_n) or b_{n+1} = (1/2)(a_n + b_n), it is clear that

    b_{n+1} − a_{n+1} = (1/2)(b_n − a_n).

Hence,

    b_n − a_n = (1/2)^n (b_0 − a_0),   ∀n ≥ 1.                    (3)

Note that {a_n} and {b_n} are both monotone and bounded sequences, so both are convergent. By equation (3), we have

    lim b_n − lim a_n = 0   ⇒   lim b_n = lim a_n =: x∗.

Since f(a_n)f(b_n) ≤ 0 for all n, letting n → ∞ gives f(x∗)f(x∗) ≤ 0, i.e. f(x∗)^2 ≤ 0. Hence f(x∗) = 0.
Convergence and error:
Since x_n = (1/2)(a_n + b_n) and x∗ ∈ [a_n, b_n], we get |x_n − x∗| ≤ (1/2)(b_n − a_n) = 2^{−(n+1)}(b_0 − a_0).
(c) i. We need |x_n − x∗| ≤ 2^{−(n+1)}(1 − 0) < 10^{−6}. Since 2^{−14} < 5^{−6}, multiplying both sides by 2^{−6} gives 2^{−20} < 5^{−6} · 2^{−6} = 10^{−6}, whereas 2^{−19} ≈ 1.9 × 10^{−6} > 10^{−6}. Hence the smallest admissible exponent is n + 1 = 20, i.e. the minimum number of iterations is n = 19.
ii. Using (2) on [0, 1]:

    x_0 = (1/2)(a_0 + b_0) = 1/2,
    f(x_0) < 0  ⇒  a_1 = x_0 = 1/2,  b_1 = b_0 = 1,
    x_1 = (1/2)(a_1 + b_1) = 3/4,
    f(x_1) > 0  ⇒  a_2 = a_1 = 1/2,  b_2 = x_1 = 3/4,
    x_2 = (1/2)(a_2 + b_2) = 5/8.
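
As a quick sanity check (not part of the tutorial solution), the iteration count from part i. and the values x_0, x_1, x_2 can be reproduced with a few lines of Python:

    import math

    f = lambda x: x**3 + 5*x**2 - 3

    # Part i: smallest n with 2^{-(n+1)}*(1 - 0) < 1e-6, i.e. n + 1 > 6/log10(2) ~ 19.93.
    print(math.ceil(6 / math.log10(2)) - 1)    # 19

    # Part ii: the first three midpoints of the bisection method on [0, 1].
    a, b = 0.0, 1.0
    for n in range(3):
        x = (a + b) / 2
        print(n, x)                            # 0.5, 0.75, 0.625
        if f(a) * f(x) <= 0:
            b = x
        else:
            a = x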

2. (a) Apply Newton’s method to the equation 1/x − a = 0 to derive the following reciprocal algorithm:

    x_{n+1} = 2x_n − a x_n^2.

(b) Use part (a) to compute the approximate value of 1/1.68 with initial guess x_0 = 1/2 at the 3rd iteration.

Solution.

(a) Consider the following nonlinear equation:

    f(x) := 1/x − a,

with

    f'(x) = −1/x^2.

Newton’s method gives:

    x_{n+1} = x_n − f(x_n)/f'(x_n) = x_n − (1/x_n − a)/(−1/x_n^2) = x_n + x_n − a x_n^2 = 2x_n − a x_n^2.

(b) Let a = 1.68 in the above expression, so

    x_0 = 0.5
    x_1 = 2x_0 − a x_0^2 = 0.58
    x_2 = 2x_1 − a x_1^2 = 0.594848
    x_3 = 2x_2 − a x_2^2 = 0.595238
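
These iterates can be reproduced with a short Python check (illustrative, not part of the solution):

    a = 1.68
    x = 0.5                        # initial guess x0 = 1/2
    for n in range(3):
        x = 2*x - a*x**2           # reciprocal iteration from part (a)
        print(n + 1, x)            # 0.58, 0.594848, 0.595238...
    print(1 / a)                   # exact value: 0.5952380952...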


3. (a) Apply Newton’s method to find the m-th root a^{1/m}, where a > 0 and m is a positive integer.

(b) Use part (a) to compute the approximate value of 2^{1/3} with initial guess x_0 = 1 at the 3rd iteration.

Solution.

(a) Consider the following nonlinear equation:

    f(x) := x^m − a,

with

    f'(x) = m x^{m−1}.

Newton’s method gives:

    x_{n+1} = x_n − f(x_n)/f'(x_n) = x_n − (x_n^m − a)/(m x_n^{m−1}) = (1 − 1/m) x_n + a/(m x_n^{m−1}) = ((m − 1) x_n^m + a)/(m x_n^{m−1}).

(b) Let a = 2 and m = 3 in the above expression, so

x0 = 1
x1 = 1.333333
x2 = 1.263889
x3 = 1.259933
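
A short Python check of these iterates (again, only an illustrative verification):

    a, m = 2, 3
    x = 1.0                                            # initial guess x0 = 1
    for n in range(3):
        x = ((m - 1) * x**m + a) / (m * x**(m - 1))    # iteration from part (a)
        print(n + 1, x)            # 1.333333..., 1.263888..., 1.259933...
    print(2 ** (1 / 3))            # exact cube root of 2: 1.259921...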

4. Consider a real nonlinear function f , and the nonlinear equation:

f (x) = 0.

(a) * Prove the local convergence of Newton’s method applied to f to find the solution x∗.
(b) * Summarize the assumptions that you made in your proof in (a).
(c) The choice of the initial guess x_0 is very important for the convergence of Newton’s method. Try Newton’s method on the following special function f. Let the function f : [0, 10] → R be given by

    f(x) = { x^2,    if x ∈ [0, 2),
           { 6 − x,  if x ∈ [2, 3),                    (4)
           { 3,      if x ∈ [3, 10].

i. We first choose an initial guess 0 < x_0 < 2. Does Newton’s method work? Please prove your claim.
ii. Now we choose an initial guess 2 < x_0 < 3. Does Newton’s method work? Please explain briefly.
(d) * Derive an iterative method to solve the nonlinear equation f(x) = 0, with f given in (4), based on the Taylor expansion up to the second-order approximation.

Solution. (a) Define ϕ(x) = x − f(x)/f'(x); then Newton’s method can be written as the fixed-point iteration:

    x_{k+1} = ϕ(x_k),   k = 0, 1, 2, . . .

Since f(x∗) = 0 and f'(x∗) ≠ 0, direct substitution gives

    x∗ = ϕ(x∗).

Let e_k = x_k − x∗ be the error at the k-th iteration; then

    e_{k+1} = ϕ(x_k) − ϕ(x∗).

By the mean-value theorem, we have the following error equation for e_k:

    e_{k+1} = ϕ'(ζ_k)(x_k − x∗) = ϕ'(ζ_k) e_k,

where ζ_k lies between x_k and x∗. Since

    ϕ'(x) = 1 − [f'(x)f'(x) − f''(x)f(x)] / f'(x)^2 = f(x)f''(x) / f'(x)^2,

and f(x∗) = 0, we have

    ϕ'(x∗) = f(x∗)f''(x∗) / f'(x∗)^2 = 0.

Since ϕ' is continuous near x∗, we can therefore find a constant δ > 0 such that

    |ϕ'(x)| ≤ 1/2   for all x with |x − x∗| ≤ δ.

If the initial guess satisfies |x_0 − x∗| ≤ δ, then by induction it is easy to check that for k = 0, 1, 2, . . . we have

    |ϕ'(ζ_k)| ≤ 1/2 < 1,   |e_{k+1}| ≤ (1/2)|e_k| ≤ δ.

This yields

    |e_{k+1}| ≤ (1/2^{k+1}) |e_0| = (1/2^{k+1}) |x_0 − x∗|,

so e_k → 0 as k → ∞, namely x_k → x∗ as k → ∞. This proves the local convergence of Newton’s method.
(b) i. f(x∗) = 0;
ii. f is continuously differentiable up to second order near x∗;
iii. f'(x∗) ≠ 0.

(c) i. As long as x_n ∈ (0, 2) we have f(x_n) = x_n^2 and f'(x_n) = 2x_n, so

    x_{n+1} = x_n − x_n^2/(2x_n) = x_n/2,

which implies that

    x_n = (1/2^n) x_0 → 0.

Since f(0) = 0, the iterates converge to the solution x∗ = 0, so Newton’s method works.
ii. As x_0 ∈ (2, 3), we have f(x_0) = 6 − x_0 and f'(x_0) = −1, so x_1 = x_0 + (6 − x_0) = 6. The iteration stops because f'(x_1) = 0.
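
Both behaviours can be observed numerically; the following Python sketch assumes the piecewise derivative f'(x) = 2x on [0, 2), −1 on [2, 3), and 0 on [3, 10], evaluated away from the break points:

    def f(x):
        if 0 <= x < 2:
            return x**2
        elif 2 <= x < 3:
            return 6 - x
        else:                      # x in [3, 10]
            return 3.0

    def df(x):                     # piecewise derivative, away from the break points
        if 0 <= x < 2:
            return 2*x
        elif 2 <= x < 3:
            return -1.0
        else:
            return 0.0

    # (i) x0 in (0, 2): each Newton step halves the iterate, x_n = x_0 / 2^n -> 0.
    x = 1.5
    for _ in range(5):
        x = x - f(x) / df(x)
    print(x)                       # 0.046875 = 1.5 / 2**5

    # (ii) x0 in (2, 3): x1 = 6, where f'(6) = 0, so the next step would divide by zero.
    x = 2.5
    x = x - f(x) / df(x)
    print(x, df(x))                # 6.0 0.0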
(d) Given an initial guess x_0, the Taylor expansion yields

    F(x) = f(x_0) + f'(x_0)(x − x_0) + (1/2) f''(x_0)(x − x_0)^2 ≈ f(x).

So instead of solving f(x) = 0, we can solve F(x) = 0. Solving this quadratic equation, we have

    x = x_1 = x_0 + [−f'(x_0) ± √((f'(x_0))^2 − 2 f(x_0) f''(x_0))] / f''(x_0).

Obviously we need to assume that the initial x_0 is close enough to the solution so that 2 f(x_0) f''(x_0) ≤ (f'(x_0))^2 is guaranteed. As for the choice of the sign, we should take the root that lies closest to the current iterate, i.e. the one for which x_{n+1} − x_n is smaller in magnitude. There are two reasons. The first is that the other root may be far away from the current x_n or from the exact solution. The second is that we want to prevent the scheme from bouncing back and forth during the iterations. To minimize the increment size we choose the sign in front of the square root to agree with sign(f'(x_n)). Therefore we have

    x_{n+1} = x_n − [f'(x_n) − sign(f'(x_n)) √((f'(x_n))^2 − 2 f(x_n) f''(x_n))] / f''(x_n).
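
A minimal Python sketch of this second-order iteration is given below, assuming f'' is available and the discriminant stays non-negative (the closeness condition above); the function names and the test function from Exercise 1 are illustrative choices:

    import math

    def second_order_step(f, df, d2f, x):
        """One step of the quadratic-Taylor iteration derived above."""
        fp, fpp = df(x), d2f(x)
        disc = fp**2 - 2 * f(x) * fpp
        if disc < 0:
            raise ValueError("negative discriminant: x is not close enough to the root")
        s = 1.0 if fp >= 0 else -1.0           # sign(f'(x))
        return x - (fp - s * math.sqrt(disc)) / fpp

    # Illustration with f(x) = x^3 + 5x^2 - 3 from Exercise 1, starting at x0 = 1.
    f   = lambda x: x**3 + 5*x**2 - 3
    df  = lambda x: 3*x**2 + 10*x
    d2f = lambda x: 6*x + 10

    x = 1.0
    for _ in range(4):
        x = second_order_step(f, df, d2f, x)
    print(x)                                   # approaches the root near 0.7240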

Assume that f(x) satisfies the following conditions:
i. f(x) = 0 has a solution x∗;
ii. f is continuously differentiable up to third order;
iii. f'(x∗) ≠ 0 and f''(x∗) ≠ 0.
If we set

    φ(x) = x + [−f'(x) + sign(f'(x)) √((f'(x))^2 − 2 f(x) f''(x))] / f''(x),

then the function φ is continuously differentiable near x∗ (sign(f'(x)) is constant there because f' is continuous and f'(x∗) ≠ 0). Write φ(x) = x + N(x)/f''(x) with N(x) = −f'(x) + sign(f'(x)) √(g(x)) and g(x) = (f'(x))^2 − 2 f(x) f''(x). Since f(x∗) = 0, we have √(g(x∗)) = |f'(x∗)|, so N(x∗) = 0; moreover g'(x) = −2 f(x) f'''(x) vanishes at x∗, so N'(x∗) = −f''(x∗). Therefore

    φ'(x∗) = 1 + N'(x∗)/f''(x∗) = 1 − 1 = 0.

Hence, there exists a neighborhood B_δ(x∗) of x∗ such that |φ'(x)| ≤ 1/2 for all x ∈ B_δ(x∗). Thus, the sequence {x_n} converges locally to x∗.
