
Scholarly Research Exchange

SRX Mathematics • Volume 2010 • Article ID 674539 • doi:10.3814/2010/674539

Research Article
Convergence and Error Estimate of the Steffensen Method

Xufeng Shang,1 Xingping Shao,2 and Peng Wu3


1 Department of Mathematics, China Jiliang University, Hangzhou 310018, China
2 Department of Mathematics, Zhejiang University, Hangzhou 310027, China
3 China Academy of Railway Sciences, Beijing 100081, China

Correspondence should be addressed to Xufeng Shang, [email protected]

Received 15 July 2009; Revised 30 August 2009; Accepted 31 August 2009


Copyright © 2010 Xufeng Shang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present the Steffensen method in Rⁿ. The convergence theorem is established by using the technique of majorizing functions, and an error estimate is given. The method avoids computing derivatives yet has the same convergence order 2 as Newton's method. Finally, illustrative examples are included to demonstrate the validity and applicability of the technique.

1. Introduction

Let X and Y be real or complex Banach spaces, and let F : D ⊂ X → Y be a nonlinear Fréchet differentiable operator in some convex domain D. The well-known iteration for solving the equation

    F(x) = 0                                                          (1)

is Newton's method, defined as follows:

    x_{k+1} = x_k − F'(x_k)^{-1} F(x_k),  k = 0, 1, 2, . . . .        (2)

In recent years, some authors [1–5] have discussed Newton's method and some of its modifications and have given convergence theorems and error estimates. In order to avoid computing the derivative of f(x) and to avoid f'(x) = 0, many authors investigate the Steffensen method:

    x_{k+1} = x_k − f²(x_k) / ( f(x_k + f(x_k)) − f(x_k) ),           (3)

which is given for solving algebraic equations (only in one dimension) [6]. For example, H. T. Kung and Traub presented a class of multipoint iterative functions without derivatives. Chen [7] studied a particular class of these methods which contains the Steffensen method as a special case, and X. Wu and H. Wu [8] studied a class of quadratically convergent iteration formulae without derivatives. Zheng et al. [9] gave a second-order parametric Steffensen-like method, which is derivative free and uses only two evaluations of the function per step; they also suggested a variant of the Steffensen-like method which is still derivative free and uses four evaluations of the function to achieve cubic convergence. Ren et al. [10] derived a one-parameter class of fourth-order methods for solving nonlinear equations. But all the methods above are given only for solving algebraic equations.

Newton's method can be extended to high dimensions directly; nevertheless, the Steffensen method cannot, so the Steffensen method in high dimensions is seldom researched. In [6], several forms of the Steffensen method in high dimensions are mentioned. Recently, Amat et al. [11] generalized one form of the Steffensen method and gave its convergence theorem and error estimate. Hilout [12] studied the convergence of Steffensen-type algorithms; a class of Steffensen-type algorithms for solving generalized equations on Banach spaces is proposed there. Alarcon et al. [13] discussed a modified Steffensen-type iterative scheme for the numerical solution of a system of nonlinear equations.

In this paper, we discuss another form of the Steffensen method in Rⁿ. The form is defined as follows:

    x_{k+1} = x_k − J(x_k, H_k)^{-1} F(x_k),  k = 0, 1, 2, . . . ,    (4)

where J(x_k, H_k) = (F(x_k + H_k e¹) − F(x_k), . . . , F(x_k + H_k eⁿ) − F(x_k)) H_k^{-1}, H_k = diag(f₁(x_k), f₂(x_k), . . . , f_n(x_k)), and F(x) = (f₁(x), f₂(x), . . . , f_n(x))ᵀ, where f_i : Rⁿ → R and x = (x₁, x₂, . . . , x_n) is a vector.

Compared with Newton's iteration, the advantage of this form is that it avoids evaluating the derivative while keeping the same convergence order. In this paper, we also establish the convergence theorem under a Kantorovich-type condition and give the error estimate.
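To make the iteration (4) concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the function name `steffensen`, the tolerances, and the small-divisor safeguard are our choices). Since H_k is diagonal, column i of J(x_k, H_k) reduces to the divided difference (F(x_k + f_i(x_k) e_i) − F(x_k)) / f_i(x_k):

```python
import numpy as np

def steffensen(F, x0, tol=1e-12, max_iter=50):
    """Iteration (4): x_{k+1} = x_k - J(x_k, H_k)^{-1} F(x_k), where column i
    of J(x_k, H_k) is the divided difference (F(x + f_i(x) e_i) - F(x)) / f_i(x)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:      # stop once the residual is tiny
            break
        d = Fx.copy()
        d[np.abs(d) < 1e-14] = 1e-14      # numerical safeguard (our addition): avoid dividing by ~0
        J = np.empty((n, n))
        for i in range(n):
            step = np.zeros(n)
            step[i] = d[i]                # H_k e_i = f_i(x_k) e_i
            J[:, i] = (F(x + step) - Fx) / d[i]
        x = x - np.linalg.solve(J, Fx)
    return x

# The 2D trigonometric system of the paper's Example 1, started from (1/2, 1/2):
F = lambda x: np.array([x[0] - 0.7 * np.sin(x[0]) - 0.2 * np.cos(x[1]),
                        x[1] - 0.7 * np.cos(x[0]) + 0.2 * np.sin(x[1])])
root = steffensen(F, [0.5, 0.5])
```

Note that, exactly as the paper emphasizes, only evaluations of F appear; no derivative code is needed.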

This paper is organized as follows. In Section 2, the majorizing sequence and its properties are presented. In Section 3, we establish a Kantorovich-type theorem for this kind of method by using the majorizing function; moreover, an error estimate is given. In Section 4, the proof of the main theorem is presented. Finally, a numerical comparison with Newton's method is presented.

2. Majorizing Sequence and Its Properties

Let K, β, and η be nonnegative numbers, and let h be a majorant function defined by

    h(t) = (1/2) K t² − t/β + η/β.                                    (5)

Applying the iteration (2) to h, we get the real sequence {t_k}:

    t_{k+1} = t_k − h(t_k)/h'(t_k),  k = 0, 1, . . . ,  t₀ = 0.       (6)

First, we have the following lemmas.

Lemma 1. When α = Kβη ≤ 1/2, the real function h has two positive roots

    t* = ((1 − √(1 − 2α))/α) η,  t** = ((1 + √(1 − 2α))/α) η.         (7)

Lemma 2. Under the assumptions of the previous lemma, the real sequence {t_n} is monotonically increasing and tends to the root t* of h.

Lemma 3. Suppose that the sequence {t_k} is produced by the iteration (6). If α = Kβη ≤ 1/2, then

    t* − t_k ≤ (θ^{2^k} / (1 − θ^{2^k})) (t** − t*),                  (8)

where

    θ = (1 − √(1 − 2α)) / (1 + √(1 − 2α)).                            (9)

Proof. Let a_k = t* − t_k and b_k = t** − t_k. Then

    h(t_k) = (K/2) a_k b_k,  h'(t_k) = −(K/2)(a_k + b_k).             (10)

Thus, we have

    a_{k+1} = a_k − a_k b_k/(a_k + b_k) = a_k²/(a_k + b_k),
    b_{k+1} = b_k − a_k b_k/(a_k + b_k) = b_k²/(a_k + b_k).           (11)

By (11), we get

    a_k/b_k = (a_{k−1}/b_{k−1})² = · · · = (a₀/b₀)^{2^k} = (t*/t**)^{2^k} = θ^{2^k}.   (12)

Since b_k = t** − t* + a_k, it is easy to solve for a_k from (12). The lemma follows.

3. Main Result

Let F : D ⊂ Rⁿ → Rⁿ be a nonlinear Fréchet differentiable operator in some convex domain D satisfying

    ||F'(x) − F'(y)|| ≤ M ||x − y||,  ∀x, y ∈ D.                      (13)

For the iteration form (4), we have the following lemma and theorem.

Lemma 4. Let F : Rⁿ → Rⁿ, and let F satisfy the iteration (4); then one has

    ||J(x_k, H_k) − F'(x_k)|| ≤ (M/2) ||F(x_k)||,  ∀x_k ∈ D₀.         (14)

Proof. Let F(x) = (f₁(x), . . . , f_n(x))ᵀ and H = diag(f₁(x), . . . , f_n(x)). Then

    J(x_k, H_k)
      = (F(x_k + H_k e¹) − F(x_k), . . . , F(x_k + H_k eⁿ) − F(x_k)) H_k^{-1}
      = (∫₀¹ F'(x_k + t H_k e¹) dt H_k e¹, . . . , ∫₀¹ F'(x_k + t H_k eⁿ) dt H_k eⁿ) H_k^{-1}
      = (∫₀¹ F'(x_k + t H_k e¹) e¹ dt, . . . , ∫₀¹ F'(x_k + t H_k eⁿ) eⁿ dt) H_k H_k^{-1}
      = (∫₀¹ F'(x_k + t H_k e¹) e¹ dt, . . . , ∫₀¹ F'(x_k + t H_k eⁿ) eⁿ dt),              (15)

so

    ||J(x_k, H_k) − F'(x_k)||
      = || (∫₀¹ F'(x_k + t H_k e¹) e¹ dt, . . . , ∫₀¹ F'(x_k + t H_k eⁿ) eⁿ dt)
           − (∫₀¹ F'(x_k) e¹ dt, . . . , ∫₀¹ F'(x_k) eⁿ dt) ||
      ≤ || (∫₀¹ t M ||H_k e¹|| dt, . . . , ∫₀¹ t M ||H_k eⁿ|| dt) ||
      ≤ (M/2) ||F(x_k)||.                                             (16)

This ends the proof.
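As a quick numerical sanity check on Lemma 4 (our own addition, not part of the paper), take the quadratic map F(x) = (x₁² − x₂, x₂² − x₁). Here F'(x) − F'(y) = diag(2(x₁ − y₁), 2(x₂ − y₂)), so the Lipschitz constant in (13) is M = 2 in the 2-norm, and the bound (14) can be tested at random points:

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 - x[1], x[1]**2 - x[0]])

def Fprime(x):
    return np.array([[2.0 * x[0], -1.0],
                     [-1.0, 2.0 * x[1]]])

def J_div_diff(F, x):
    """J(x, H) from (4): column i is (F(x + f_i(x) e_i) - F(x)) / f_i(x)."""
    Fx = F(x)
    n = x.size
    J = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = Fx[i]
        J[:, i] = (F(x + e) - Fx) / Fx[i]
    return J

M = 2.0                                   # Lipschitz constant of F' in the 2-norm for this F
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-2.0, 2.0, size=2)
    Fx = F(x)
    if np.min(np.abs(Fx)) < 1e-8:         # skip points where a component of F vanishes
        continue
    lhs = np.linalg.norm(J_div_diff(F, x) - Fprime(x), 2)
    rhs = 0.5 * M * np.linalg.norm(Fx)
    assert lhs <= rhs + 1e-9              # the bound (14) of Lemma 4
```

For this quadratic F the divided differences are exact to second order, so J − F' is the diagonal matrix diag(f₁(x), f₂(x)) and the bound holds with room to spare.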

For an initial value x₀ ∈ D, suppose that F'(x₀)^{-1} exists and F satisfies

    ||F'(x₀)^{-1}|| ≤ β,  ||F'(x₀)^{-1} F(x₀)|| ≤ η,                  (17)

where β and η are positive constants. Then we have the following theorem.

Theorem 1. If F satisfies the conditions (13), (14), and (17), and α = Kβη ≤ 1/2 with K ≥ M + M/β, then the sequence {x_k} produced by the iteration (4) with the initial value x₀ is well defined and converges to the unique root x* ∈ S(x₀, t*) of F. Moreover,

    ||x_k − x*|| ≤ t* − t_k ≤ (θ^{2^k} / (1 − θ^{2^k})) (t** − t*),  k = 0, 1, . . . ,   (18)

where S(x₀, r) = {x : ||x − x₀|| ≤ r} and θ = (1 − √(1 − 2α)) / (1 + √(1 − 2α)).

4. Proof of the Main Theorem

Let F satisfy the conditions (13), (14), and (17). To prove Theorem 1, we first prove some useful lemmas.

Lemma 5. If the sequence {x_k} is produced by the iteration (4), then

    F(x_{k+1}) = (F'(x_k) − J(x_k, H_k))(x_{k+1} − x_k)
               + ∫₀¹ (F'(x_k + t(x_{k+1} − x_k)) − F'(x_k)) dt (x_{k+1} − x_k).   (19)

Proof. By (4) and Taylor expansion, the lemma can be verified directly.

Lemma 6. If α ≤ 1/2 and K ≥ M + M/β, then

(a) x_k ∈ S(x₀, t*),
(b) ||x_{k+1} − x_k|| ≤ t_{k+1} − t_k,
(c) ||F(x_k)|| ≤ h(t_k),
(d) J(x_k, H_k)^{-1} exists, and ||J(x_k, H_k)^{-1}|| ≤ −1/h'(t_k),

where the sequence {t_k} is produced by the iteration (6).

Proof. The conclusions (a), (b), and (c) are obviously true for k = 0. Now assume that they remain valid for k ≤ n. First, by (b), we see that

    ||x_{n+1} − x₀|| ≤ Σ_{j=0}^{n} ||x_{j+1} − x_j|| ≤ Σ_{j=0}^{n} (t_{j+1} − t_j) = t_{n+1} ≤ t*.   (20)

Thus, x_{n+1} ∈ S(x₀, t*).

By Lemma 5 and the inductive assumptions, we have

    ||F(x_{n+1})||
      = || −J(x_n, H_n)(x_{n+1} − x_n) + F'(x_n)(x_{n+1} − x_n)
           + ∫₀¹ (F'(x_n + t(x_{n+1} − x_n)) − F'(x_n)) dt (x_{n+1} − x_n) ||
      ≤ (M/2) ||F(x_n)|| ||x_{n+1} − x_n|| + (M/2) ||x_{n+1} − x_n||²
      ≤ (M/2) h(t_n)(t_{n+1} − t_n) + (M/2)(t_{n+1} − t_n)²
      = −(M/2) h'(t_n)(t_{n+1} − t_n)² + (M/2)(t_{n+1} − t_n)²
      ≤ −(M/2) h'(t₀)(t_{n+1} − t_n)² + (M/2)(t_{n+1} − t_n)²
      = (M/(2β) + M/2)(t_{n+1} − t_n)² ≤ (K/2)(t_{n+1} − t_n)² = h(t_{n+1}).   (21)

So (c) is valid for k = n + 1.

Since ||F'(x₀)^{-1}|| ≤ β and ||x_k − x₀|| < 1/(Kβ), by Lemma 4 and the iteration (6), we have

    ||J(x_k, H_k) − F'(x₀)|| ≤ ||J(x_k, H_k) − F'(x_k)|| + ||F'(x_k) − F'(x₀)||
      ≤ (M/2)||F(x_k)|| + M||x_k − x₀||
      ≤ (M/2)h(t_k) + M||x_k − x₀||
      ≤ (M/2)h(t_{k−1}) + M t_k
      = −(M/2)h'(t_{k−1})(t_k − t_{k−1}) + M t_k
      ≤ −(M/2)h'(t₀) t_k + M t_k
      = (M/(2β) + M) t_k ≤ K t_k = h'(t_k) + 1/β < 1/β.               (22)

Hence, by the Banach lemma, J(x_k, H_k)^{-1} exists and

    ||J(x_k, H_k)^{-1}|| ≤ ||F'(x₀)^{-1}|| / (1 − ||F'(x₀)^{-1}|| ||J(x_k, H_k) − F'(x₀)||) ≤ −1/h'(t_k).   (23)
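As an aside (our own addition, not in the paper), the majorizing sequence (6) and the a priori bound (8) that drive this induction can be checked numerically; the parameter values K = 2, β = 1, η = 0.2 are illustrative choices giving α = Kβη = 0.4 ≤ 1/2:

```python
import math

def majorizing_sequence(K, beta, eta, steps=8):
    """Newton iteration (6) applied to the majorant h(t) = (K/2)t^2 - t/beta + eta/beta."""
    h  = lambda t: 0.5 * K * t * t - t / beta + eta / beta
    dh = lambda t: K * t - 1.0 / beta
    t, seq = 0.0, [0.0]
    for _ in range(steps):
        t = t - h(t) / dh(t)
        seq.append(t)
    return seq

K, beta, eta = 2.0, 1.0, 0.2                 # illustrative values: alpha = 0.4 <= 1/2
alpha = K * beta * eta
r = math.sqrt(1.0 - 2.0 * alpha)
t_star  = (1.0 - r) / alpha * eta            # smaller root of h, Lemma 1
t_sstar = (1.0 + r) / alpha * eta            # larger root of h
theta   = (1.0 - r) / (1.0 + r)              # the ratio (9)

seq = majorizing_sequence(K, beta, eta)
for k, t in enumerate(seq):
    # Lemma 2: t_k increases toward t*; Lemma 3: the bound (8) holds (with equality here)
    q = theta ** (2 ** k)
    assert t <= t_star + 1e-12
    assert t_star - t <= q / (1.0 - q) * (t_sstar - t_star) + 1e-9
```

Since a_k/b_k = θ^{2^k} exactly by (12), the bound (8) is attained with equality for this scalar model, which the assertions confirm up to rounding.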

Table 1: The numerical comparison between Newton's method and the Steffensen method.

    Step    Steffensen method ||F(x*)||    Newton's method ||F(x*)||
    1       3.3324e−03                     3.3324e−03
    5       6.6813e−05                     6.6737e−05
    10      6.8111e−07                     6.7586e−07

Table 2: The numerical comparison of ||F(x^(n))|| between Newton's method and the Steffensen method.

    n       Steffensen method    Newton's method
    1       1.4449e−009          1.9878e−009
    2       1.9665e−017          4.5194e−017

Therefore,

    ||x_{n+2} − x_{n+1}|| ≤ −h(t_{n+1})/h'(t_{n+1}) = t_{n+2} − t_{n+1}.   (24)

So (b) is also valid for k = n + 1.

Proof of Theorem 1. By (b) in Lemma 6, {x_n} is a Cauchy sequence. Let x* be the limit of x_n; as n → ∞, we have

    ||x* − x_n|| ≤ t* − t_n.                                          (25)

It yields F(x*) = 0. This completes the proof of Theorem 1.

5. Numerical Examples

We have established a theorem of convergence for the iteration (4). The best property of this method is that it does not use any derivative but has the same good convergence properties as Newton's method. The following examples give enough explanation. The computations associated with the examples were performed using Maple 9.

Example 1. We consider the following equation in 2D:

    f₁ = x₁ − 0.7 sin(x₁) − 0.2 cos(x₂) = 0,
    f₂ = x₂ − 0.7 cos(x₁) + 0.2 sin(x₂) = 0.                          (26)

The initial vector is (1/2, 1/2)ᵀ. We solve the above equations by Newton's method and the Steffensen method, respectively. The numerical comparison of the methods is displayed in Table 1. From Table 1, we can see that the Steffensen method has the same convergence rate as Newton's method.

Example 2. We consider the following nonlinear boundary value problem of second order [14]:

    d²x(t)/dt² = e^{x(t)},  x(0) = 0 = x(1).                          (27)

To solve this problem by finite differences, we start by drawing the usual grid with t_j = jh, where h = 1/n and n is an integer. Note that x₀ and x_n are given by the boundary conditions; then x₀ = 0 = x_n. We first approximate the second derivative at the interior points:

    x'' ≈ (x_{j−1} − 2x_j + x_{j+1}) / h²,  j = 1, 2, 3, . . . , n − 1.   (28)

By substituting this expression into the differential equation, we obtain the following system of nonlinear equations:

    x_{j−1} − 2x_j + x_{j+1} − h² e^{x_j} = 0,  j = 1, 2, 3, . . . , n − 1.   (29)

We therefore have an operator F : R^{n−1} → R^{n−1} such that F(x) = Hx − h² g(x), where

        ⎡ −2   1   0  ···   0 ⎤
        ⎢  1  −2   1  ···   0 ⎥
    H = ⎢  0   1  −2  ···   0 ⎥ ,   x = (x₁, x₂, . . . , x_{n−1})ᵀ,   g(x) = (e^{x₁}, e^{x₂}, . . . , e^{x_{n−1}})ᵀ.   (30)
        ⎢  ⋮   ⋮   ⋮   ⋱    ⋮ ⎥
        ⎣  0   0   0  ···  −2 ⎦

Now we apply the Steffensen method and Newton's method to approximate the solution of F(x) = 0. We choose n = 12; then (29) gives eleven equations. For the initial iterate chosen as x_j(0) = t_j(t_j − 1), j = 1, 2, . . . , 11, after two iterates the Steffensen method is better than Newton's method (see Table 2).

Example 3. We consider the following nonlinear boundary value problem of the Burgers-Huxley equation:

    u_t + u u_x − u_{xx} = u(1 − u)(u − 0.4),
    u(0, t) = 0.2 − 0.2 tanh(0.12t),
    u(1, t) = 0.2 − 0.2 tanh(0.2 + 0.12t),    (x ∈ (0, 1), t ∈ (0, 1])   (31)
    u(x, 0) = 0.2 − 0.2 tanh(0.2x).

This equation's exact solution is

    u(x, t) = 0.2(1 − tanh(0.2x + 0.12t)).                            (32)

Table 3: The error between the approximate solution Uⁿ and the exact solution U*.

    Method        Step    Error
    Steffensen    7       ||Uⁿ − U*|| ≤ 1.231e−7
    Newton        10      ||Uⁿ − U*|| ≤ 1.231e−7

Replacing the derivatives by finite differences, we get the following system of nonlinear equations:

    (I + 0.01 A_n(U_n)) U_n = U_{n−1} + 0.01 F(U_n),
    u_{0,n} = 0.2 − 0.2 tanh(0.0012n),
    u_{100,n} = 0.2 − 0.2 tanh(0.2 + 0.0012n),                        (33)
    u_{i,0} = 0.2 − 0.2 tanh(0.002i),  1 ≤ i ≤ 99,

where I is the identity matrix, U_n = (u_{1,n}, u_{2,n}, . . . , u_{98,n}, u_{99,n})ᵀ, tanh(x) = (eˣ − e⁻ˣ)/(eˣ + e⁻ˣ), and A_n(U_n) = tridiag[−(10000 + 50(u_{i,n} + |u_{i,n}|)), 2(10000 + 50|u_{i,n}|), −(10000 − 50(u_{i,n} − |u_{i,n}|))]. Also we have

    F(U_n) = ( f(u_{1,n}) + (10000 + 50(u_{1,n} + |u_{1,n}|)) u_{0,n},
               f(u_{2,n}), . . . , f(u_{98,n}),
               f(u_{99,n}) + (10000 − 50(u_{99,n} − |u_{99,n}|)) u_{100,n} ),   (34)

where f(u_{i,n}) = u_{i,n}(1 − u_{i,n})(u_{i,n} − 0.4), i = 1, 2, . . . , 99.

We solve system (33) by Newton's method and the Steffensen method, respectively, with initial value U₀ = (u_{1,0}, u_{2,0}, . . . , u_{99,0}). We choose the same iterative stopping condition:

    ||U_n^{k+1} − U_n^{k}|| < 1.0e−10.                                (35)

The data in Table 3 demonstrate the Steffensen method's advantage.

6. Conclusion

The need for derivatives may prevent the application of iterative methods, especially when the derivatives are not easy to find. In this paper, we show that the Steffensen method uses only evaluations of the function but maintains quadratic convergence. The new iterative method seems to work well in our numerical results, since we have obtained the optimal order of convergence without any stability problem. With different choices of H_k in the iteration (4), we will obtain different derivative-free iterations. Discussing these iterations is future work.

References

[1] D. Han and J. Zhu, "Convergence and error estimate of a deformed Newton method," Applied Mathematics and Computation, vol. 173, no. 2, pp. 1115–1123, 2006.
[2] J. Zhu and D. Han, "Convergence and error estimate of 'Newton-like' methods," Journal of Zhejiang University, vol. 32, no. 6, pp. 623–626, 2005.
[3] D. Han and X. Wang, "Convergence on a deformed Newton method," Applied Mathematics and Computation, vol. 94, no. 1, pp. 65–72, 1998.
[4] J. Chen and Z. Shen, "Convergence analysis of the secant type methods," Applied Mathematics and Computation, vol. 188, no. 1, pp. 514–524, 2007.
[5] Y. Zhao and Q. Wu, "Newton-Kantorovich theorem for a family of modified Halley's methods under Hölder continuity conditions in Banach space," Applied Mathematics and Computation, vol. 202, pp. 243–251, 2008.
[6] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Beijing Science Press, Beijing, China, 1983.
[7] D. Chen, "On the convergence of a class of generalized Steffensen's iterative procedures and error analysis," International Journal of Computer Mathematics, vol. 31, pp. 195–203, 1989.
[8] X. Wu and H. Wu, "On a class of quadratic convergence iteration formulae without derivatives," Applied Mathematics and Computation, vol. 107, pp. 77–80, 2000.
[9] Q. Zheng, J. Wang, P. Zhao, and L. Zhang, "A Steffensen-like method and its higher-order variants," Applied Mathematics and Computation, vol. 214, no. 1, pp. 10–16, 2009.
[10] H. M. Ren, Q. B. Wu, and W. H. Bi, "A class of two-step Steffensen type methods with fourth-order convergence," Applied Mathematics and Computation, vol. 209, pp. 206–210, 2009.
[11] S. Amat, S. Busquier, and V. Candela, "A class of quasi-Newton generalized Steffensen methods on Banach spaces," Journal of Computational and Applied Mathematics, vol. 149, no. 2, pp. 397–406, 2002.
[12] S. Hilout, "Convergence analysis of a family of Steffensen-type methods for generalized equations," Journal of Mathematical Analysis and Applications, vol. 339, no. 2, pp. 753–761, 2008.
[13] V. Alarcon, S. Amat, S. Busquier, and D. J. Lopez, "A Steffensen's type method in Banach spaces with applications on boundary-value problems," Journal of Computational and Applied Mathematics, vol. 216, no. 1, pp. 243–250, 2008.
[14] J. A. Ezquerro, M. A. Hernandez, and M. A. Salanova, "A discretization scheme for some conservative problems," Journal of Computational and Applied Mathematics, vol. 115, no. 1-2, pp. 181–192, 2000.
