
Malaysian Journal of Mathematical Sciences 5(2): 241-255 (2011)

Jacobian-Free Diagonal Newton’s Method for Solving Nonlinear Systems with Singular Jacobian

1Mohammed Waziri Yusuf, 1,2Leong Wah June and 1,2Malik Abu Hassan

1Faculty of Science, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
2Institute for Mathematical Research, Universiti Putra Malaysia, 43400 UPM Serdang, Malaysia
E-mail: [email protected]

ABSTRACT
The basic requirement of Newton’s method for solving systems of nonlinear equations is that the Jacobian must be non-singular. This condition restricts, to some extent, the application of Newton’s method. In this paper we present a modification of Newton’s method for systems of nonlinear equations whose Jacobian is singular. This is made possible by approximating the Jacobian inverse by a diagonal matrix by means of variational techniques. The aim of our approach is to bypass the points at which the Jacobian is singular. The local convergence of the proposed method is proven under suitable assumptions. Numerical experiments are carried out which show that the proposed method is very encouraging.

Keywords: Nonlinear equations, diagonal updating, Jacobian, singular.

INTRODUCTION
Let us consider the problem of finding a solution of the nonlinear system

F(x) = 0,   (1)

where F = (f_1, f_2, …, f_n): R^n → R^n is assumed to satisfy the following assumptions:

A1. F is continuously differentiable in an open neighbourhood E ⊂ R^n.


A2. There exists a solution x∗ in E such that F(x∗) = 0 and F′(x∗) is non-singular.
A3. The Jacobian F′(x) is Lipschitz continuous at x∗.

The best-known method for finding a solution of (1) is Newton’s method, in which the Newtonian iterates are given by

x_{k+1} = x_k − (F′(x_k))^{−1} F(x_k),   (2)

where k = 0, 1, 2, …. If F′(x∗) is non-singular at a solution of (1), convergence is guaranteed and the rate is quadratic from any initial point x_0 in a neighbourhood of x∗ (Dennis and Wolkowicz (1993); Shen and Ypma (2005)), i.e.

‖x_{k+1} − x∗‖ ≤ h ‖x_k − x∗‖²   (3)

for some h. When the Jacobian is singular, the convergence rate may be unsatisfactory: it slows down when approaching a singular root, and convergence to x∗ may even be lost (Leong and Hassan (2009); Hassan et al. (2009); Dennis and Schnabel (1983); Waziri et al. (2010)). This condition (a non-singular Jacobian F′(x∗)) restricts to a certain extent the application of Newton’s method for solving nonlinear equations (Kelley (1995)). Some approaches have been developed to overcome this shortcoming. The simplest is the fixed Newton method, which generates an iterative sequence {x_k} from a given initial guess x_0 (Ortega and Rheinboldt (1970); Waziri et al. (2010); Farid et al. (2011)):

x_{k+1} = x_k − (F′(x_0))^{−1} F(x_k),   k = 0, 1, 2, ….   (4)

This method avoids computing and storing the Jacobian at every iteration, but it is significantly slower. From the computational complexity point of view, fixed Newton is cheaper than Newton’s method (Ortega and Rheinboldt (1970)); however, it still requires computing and storing the Jacobian at the initial guess x_0.
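The fixed Newton iteration (4) can be sketched as follows. This is our own illustration, not the authors' code: the Jacobian is approximated here by forward differences at x_0, and the test system F(x) = (x_1² − 4, x_2² − 9) is an example of our choosing.

```python
import numpy as np

def fixed_newton(F, x0, tol=1e-4, max_iter=100, h=1e-7):
    """Fixed Newton iteration (4): reuse the Jacobian formed once at x0."""
    n = x0.size
    # Forward-difference Jacobian at the initial guess only.
    F0 = F(x0)
    J0 = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J0[:, j] = (F(x0 + e) - F0) / h
    x = x0.copy()
    for k in range(max_iter):
        x = x - np.linalg.solve(J0, F(x))   # same J0 at every iteration
        if np.linalg.norm(F(x)) <= tol:
            break
    return x, k + 1

# Example system (ours): root at (2, 3).
F = lambda x: np.array([x[0]**2 - 4.0, x[1]**2 - 9.0])
x, iters = fixed_newton(F, np.array([3.0, 4.0]))
```

As the text notes, the iteration converges only linearly (here the error contracts by a roughly constant factor per step), but each step costs one function evaluation and one back-substitution.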

The idea of diagonal updating has been used in unconstrained optimization (Modarres et al. (2011); Grapsay and Malihoutsakit (2007); Leong et al. (2010)). Modarres et al. (2011) approximate the Hessian by a diagonal matrix, while Hassan et al. (2009) focus on diagonal updating of the Hessian inverse. In this paper we propose a modification of Newton’s method for nonlinear systems with singular Jacobian at the solution x∗, obtained by approximating the Jacobian inverse by a diagonal matrix. The aim is to bypass the point at which the Jacobian is singular, since we do not need to compute the Jacobian at all. The method proposed in this work is computationally cheaper than Newton’s method and the fixed Newton method in terms of CPU time, because our strategy is derivative-free and only the diagonal elements are approximated.

JACOBIAN-FREE DIAGONAL NEWTON’S METHOD FOR SOLVING NONLINEAR SYSTEMS WITH SINGULAR JACOBIAN (JFSJ)

In this section, we derive the Jacobian-free diagonal Newton’s method for solving nonlinear systems with singular Jacobian. We start from the mean value theorem to obtain the secant equation

F′(x_k) Δx_k = ΔF_k,   (5)

where F′(x_k) is the Jacobian matrix, Δx_k = x_{k+1} − x_k and ΔF_k = F(x_{k+1}) − F(x_k). On the other hand, (5) can be rearranged to give

Δx_k = (F′(x_k))^{−1} ΔF_k.   (6)
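The secant relation (5) can be checked numerically: for two nearby points, the Jacobian applied to Δx_k matches ΔF_k up to second order in ‖Δx_k‖. A small sketch, using an example system and analytic Jacobian of our own choosing:

```python
import numpy as np

# Example system (ours): F(x) = (exp(x1) + x2 - 1, exp(x2) + x1 - 1).
F = lambda x: np.array([np.exp(x[0]) + x[1] - 1.0,
                        np.exp(x[1]) + x[0] - 1.0])

def jacobian(x):
    # Analytic Jacobian of the example system above.
    return np.array([[np.exp(x[0]), 1.0],
                     [1.0, np.exp(x[1])]])

xk = np.array([0.10, 0.20])
xk1 = xk + np.array([1e-4, -2e-4])       # a nearby iterate
dx, dF = xk1 - xk, F(xk1) - F(xk)

# Secant equation (5): F'(x_k) dx ≈ dF, with O(||dx||^2) residual.
residual = np.linalg.norm(jacobian(xk) @ dx - dF)
```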

Let G_k be a diagonal approximation of the Jacobian inverse, i.e.

(F′(x_k))^{−1} ≈ G_k.   (7)

We propose to update G_k by adding a diagonal correction matrix U_k at each iteration, i.e.

G_{k+1} = G_k + U_k.   (8)


To make the updating matrix G_{k+1} carry accurate information about the Jacobian inverse, we require G_{k+1} to satisfy the secant equation (5), i.e.

Δx_k = (G_k + U_k) ΔF_k.   (9)

Following the weak secant condition (Farid et al. (2010); Leong and Hassan (2011); Natasa and Zorna (2001)), we multiply (9) on the left by ΔF_k^T to obtain

ΔF_k^T Δx_k = ΔF_k^T (G_k + U_k) ΔF_k.   (10)

In order to ensure a good condition number and numerical stability of the approximation, we control the size of the correction U_k through the following problem:

min (1/2) ‖U_k‖_F²
s.t. ΔF_k^T (G_k + U_k) ΔF_k = ΔF_k^T Δx_k,   (11)

where ‖·‖_F is the Frobenius norm. Denoting U_k = diag(τ_1, τ_2, …, τ_n) and ΔF_k = (ΔF_k^(1), ΔF_k^(2), …, ΔF_k^(n))^T, we can rewrite (11) as follows:

min (1/2)(τ_1² + τ_2² + … + τ_n²)
s.t. Σ_{i=1}^n (ΔF_k^(i))² τ_i = ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k.   (12)

The solution of (12) can be found by considering its Lagrangian

L(τ_i, β) = (1/2)(τ_1² + τ_2² + … + τ_n²) + β (Σ_{i=1}^n (ΔF_k^(i))² τ_i − ΔF_k^T Δx_k + ΔF_k^T G_k ΔF_k),   (13)

where β is the corresponding Lagrange multiplier.

Differentiating (13) with respect to each τ_i and equating to zero, we get

∂L/∂τ_i = τ_i + β (ΔF_k^(i))² = 0,   i = 1, 2, …, n.   (14)

It follows from (14) that

τ_i = −β (ΔF_k^(i))²,   i = 1, 2, …, n.   (15)

Multiplying (15) by (ΔF_k^(i))² and summing over i, we have

Σ_{i=1}^n (ΔF_k^(i))² τ_i = −β Σ_{i=1}^n (ΔF_k^(i))⁴.   (16)

To invoke the constraint, we differentiate (13) with respect to β, which yields

Σ_{i=1}^n (ΔF_k^(i))² τ_i = ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k.   (17)

Using (16) and (17), it follows that

β = − (ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) / Σ_{i=1}^n (ΔF_k^(i))⁴.   (18)

Substituting (18) into (15), followed by some algebraic manipulation, we obtain

τ_i = ((ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) / Σ_{i=1}^n (ΔF_k^(i))⁴) (ΔF_k^(i))²,   i = 1, 2, …, n.   (19)

Letting E_k = diag((ΔF_k^(1))², (ΔF_k^(2))², …, (ΔF_k^(n))²), so that Σ_{i=1}^n (ΔF_k^(i))⁴ = Tr(E_k²), where Tr is the trace operator, yields

U_k = ((ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) / Tr(E_k²)) E_k.   (20)
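The correction (20) can be verified numerically: after adding U_k, the weak secant condition (10) holds exactly. A sketch with arbitrary example data of our own (the vectors dx, dF below are random stand-ins for Δx_k and ΔF_k):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
dx = rng.normal(size=n)                # example Δx_k (arbitrary data)
dF = rng.normal(size=n)                # example ΔF_k (arbitrary data)
Gk = np.eye(n)                         # current diagonal approximation G_k

# E_k = diag((ΔF^(i))^2), so Tr(E_k^2) = Σ (ΔF^(i))^4.
Ek = np.diag(dF**2)
Uk = ((dF @ dx - dF @ Gk @ dF) / np.trace(Ek @ Ek)) * Ek   # formula (20)
Gk1 = Gk + Uk                                              # formula (21)

# Weak secant condition (10): ΔF^T G_{k+1} ΔF = ΔF^T Δx.
lhs, rhs = dF @ Gk1 @ dF, dF @ dx
```

The equality is exact (up to rounding) because ΔF_k^T E_k ΔF_k = Σ (ΔF_k^(i))⁴ = Tr(E_k²), so the scalar factor in (20) cancels.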

Finally, we present the proposed updating scheme:

G_{k+1} = G_k + ((ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) / Tr(E_k²)) E_k.   (21)


To safeguard against very small values of ‖ΔF_k‖ (and hence of Tr(E_k²)), we apply the update only when ‖ΔF_k‖ ≥ 10^{−4}; otherwise we set G_{k+1} = G_k. Based on the above explanation, we have the following algorithm.

Algorithm JFSJ:

Step 1: Given x_0 and G_0 = I_n, set k = 0.
Step 2: Compute F(x_k).
Step 3: Compute x_{k+1} = x_k − G_k F(x_k).
Step 4: If ‖Δx_k‖ + ‖F(x_k)‖_2 ≤ 10^{−4}, stop. Otherwise set k := k + 1 and go to Step 5.
Step 5: If ‖ΔF_k‖_2 ≥ 10^{−4}, compute G_{k+1} using formula (21); else set G_{k+1} = G_k. Go to Step 2.
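Algorithm JFSJ can be sketched in a few lines. This is our own illustration, not the authors' Matlab implementation; the update (21) and the ‖ΔF_k‖ safeguard follow the paper, and the test system is Problem 5 from the numerical section, whose Jacobian is singular at the root (0, 0).

```python
import numpy as np

def jfsj(F, x0, tol=1e-4, max_iter=500):
    """Jacobian-free diagonal Newton iteration (Algorithm JFSJ)."""
    x = x0.astype(float)
    g = np.ones_like(x)                  # diagonal of G_0 = I_n
    for k in range(max_iter):
        Fx = F(x)
        dx = -g * Fx                     # Step 3: x_{k+1} = x_k - G_k F(x_k)
        x_new = x + dx
        if np.linalg.norm(dx) + np.linalg.norm(F(x_new)) <= tol:
            return x_new, k + 1          # Step 4: stopping criterion
        dF = F(x_new) - Fx
        if np.linalg.norm(dF) >= 1e-4:   # Step 5: safeguard small ||dF||
            Ek = dF**2                           # diagonal of E_k
            num = dF @ dx - dF @ (g * dF)        # ΔF'Δx - ΔF'G_kΔF
            g = g + (num / np.sum(Ek**2)) * Ek   # update (21)
        x = x_new
    return x, max_iter

# Problem 5: F(x) = (e^{x1} + x2 - 1, e^{x2} + x1 - 1), root (0, 0).
F = lambda x: np.array([np.exp(x[0]) + x[1] - 1.0,
                        np.exp(x[1]) + x[0] - 1.0])
x, iters = jfsj(F, np.array([-0.5, -0.5]))
```

Only the length-n diagonal g is stored and no Jacobian is ever formed, which is the O(n) storage advantage claimed for the method.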

CONVERGENCE ANALYSIS
To analyze the convergence of the proposed method, we make the following standard assumptions on F.

Assumption 1
(i) F is differentiable in an open convex set E in R^n.
(ii) There exists x∗ ∈ E such that F(x∗) = 0 and F′(x) is continuous for all x.
(iii) F′(x) satisfies a Lipschitz condition of order one, i.e. there exists a positive constant µ such that

‖F′(x) − F′(y)‖ ≤ µ ‖x − y‖   (22)

for all x, y ∈ R^n.
(iv) There exist constants c_1 ≤ c_2 such that c_1 ‖ω‖² ≤ ω^T F′(x) ω ≤ c_2 ‖ω‖² for all x ∈ E and ω ∈ R^n.

To prove the convergence of the JFSJ method, we need the following result.


Theorem 3.1.
Let Assumption 1 hold. There exist K_B > 0, δ > 0 and δ_1 > 0 such that if x_0 ∈ B(δ) and the matrix-valued function B(x) satisfies ‖I − B(x) F′(x∗)‖ = ρ(x) < δ_1 for all x ∈ B(δ), then the iteration

x_{k+1} = x_k − B(x_k) F(x_k),   (23)

converges linearly to x∗.

For the proof of Theorem 3.1, see Kelley (1995).

Based on Theorem 3.1 together with the derivation in the previous section, we have the following result.

Theorem 3.2
Assume that Assumption 1 holds. There exist constants β > 0, δ > 0, α > 0 and γ > 0 such that if x_0 ∈ E and G_0 satisfies ‖I − G_0 F′(x∗)‖_F < δ, where ‖·‖_F denotes the Frobenius norm, then the iteration

x_{k+1} = x_k − G_k F(x_k),

where G_k is defined by (21), converges linearly to x∗.

Proof.
We need to show that the updating matrix G_k satisfies ‖I − G_k F′(x∗)‖_F < δ_k for some constant δ_k > 0 and all k. Since G_{k+1} = G_k + U_k, it follows that

‖G_{k+1}‖_F ≤ ‖G_k‖_F + ‖U_k‖_F.   (24)


For k = 0, with G_0 = I_n, we have

‖G_1‖_F ≤ ‖G_0‖_F + ‖U_0‖_F.   (25)

Since G_0 = I_n, we have ‖G_0‖_F = √n. From (20), when k = 0 we have

|U_0^(i)| = (|ΔF_0^T Δx_0 − ΔF_0^T G_0 ΔF_0| / Tr(E_0²)) (ΔF_0^(i))²

  ≤ (|ΔF_0^T Δx_0 − ΔF_0^T G_0 ΔF_0| / Tr(E_0²)) (ΔF_0^(max))²

  = (|ΔF_0^T Δx_0 − ΔF_0^T G_0 ΔF_0| / ((ΔF_0^(max))² Σ_{i=1}^n (ΔF_0^(i))⁴)) (ΔF_0^(max))⁴.   (26)

Since (ΔF_0^(max))⁴ / Σ_{i=1}^n (ΔF_0^(i))⁴ ≤ 1, (26) turns into

|U_0^(i)| ≤ |ΔF_0^T F′(x) ΔF_0 − ΔF_0^T G_0 ΔF_0| / (ΔF_0^(max))².   (27)

From condition (iv) we have c_1 ≤ c_2, but c_1 and c_2 can be negative, so we choose the largest in magnitude, c = max{|c_1|, |c_2|}; then (27) becomes

|U_0^(i)| ≤ |c − n| (ΔF_0^T ΔF_0) / (ΔF_0^(max))².   (28)

Since (ΔF_0^(i))² ≤ (ΔF_0^(max))² for i = 1, …, n, we have ΔF_0^T ΔF_0 ≤ n (ΔF_0^(max))², and it follows that

|U_0^(i)| ≤ n |c − n|.   (29)


Hence we obtain

‖U_0‖_F ≤ n^{3/2} |c − n|.   (30)

Setting α = n^{3/2} |c − n|, we have

‖U_0‖_F ≤ α.   (31)

Substituting (31) into (25) and letting β = √n + α, it follows that

‖G_1‖_F ≤ β.   (32)

At k = 0 it is assumed that ‖I − G_0 F′(x∗)‖_F < δ; then we have

‖I − G_1 F′(x∗)‖_F = ‖I − (G_0 + U_0) F′(x∗)‖_F
  ≤ ‖I − G_0 F′(x∗)‖_F + ‖U_0 F′(x∗)‖_F
  ≤ ‖I − G_0 F′(x∗)‖_F + ‖U_0‖_F ‖F′(x∗)‖_F,   (33)

hence ‖I − G_1 F′(x∗)‖_F < δ + αφ = δ_1, where φ = ‖F′(x∗)‖_F (this holds even when ‖F′(x∗)‖_F = 0). Thus, by induction, ‖I − G_k F′(x∗)‖_F < δ_k for all k. Therefore, by Theorem 3.1, the sequence generated by Algorithm JFSJ converges linearly to x∗.

NUMERICAL RESULTS
To demonstrate the performance of the proposed method (JFSJ) for solving nonlinear systems with singular Jacobian, we applied it to some popular test problems. The method was implemented using variable precision arithmetic in Matlab 7.0, and all calculations were carried out in double precision. The stopping criterion used is

‖Δx_k‖ + ‖F(x_k)‖ ≤ 10^{−4}.   (34)

The test problems are described as follows:

Problem 1. (Jose et al. (2009)). f: R² → R² is defined by

f(x) = ( (x_1 − 1)² (x_1 − x_2),  (x_1 − 2)⁵ cos(2x_1 / x_2) ).

x_0 = (0, 3), (0.5, 2) are chosen and x∗ = (1, 2).

Problem 2. (Ishihara (2001)). f: R³ → R³ is defined by

f(x) = ( 4x_1 − 2x_2 + x_1² − 3,  −x_1 + 4x_2 − x_3 + x_2² − 3,  −2x_2 + 4x_3 + x_3² − 3 ).

x_0 = (−1.5, 0, −1.5), (4, 0, 4), (−1, 5, −1), (4, 4, 4), (−10, 0, −10) are chosen and x∗ = (1, 1, 1).

Problem 3. f: R² → R² is defined by

f(x) = ( 2/(1 + x_1²) + sin(x_2 − 1) − 1,  sin(x_1 − 1) + 2/(1 + x_2²) − 1 ).

x_0 = (0.5, 0.5), (2, 2), (0.1, 0.1) are chosen and x∗ = (1, 1).

Problem 4. f: R² → R² is defined by

f(x) = ( 1 + tan(2 − 2cos x_1) − exp(sin x_1),  1 + tan(2 − 2cos x_2) − exp(sin x_2) ).

x_0 = (3, 0), (0, 0.5), (−0.5, −0.5) are chosen and x∗ = (0, 0).


Problem 5. f: R² → R² is defined by

f(x) = ( e^{x_1} + x_2 − 1,  e^{x_2} + x_1 − 1 ).

x_0 = (−0.5, −0.5) is chosen and x∗ = (0, 0).

Problem 6. f: R³ → R³ is defined by

f(x) = ( cos x_1 − 9 + 3x_1 + 8 exp(x_2),  cos x_2 − 9 + 3x_2 + 8 exp(x_1),  cos x_3 − x_3 − 1 ).

x_0 = (−1, −1, −1), (3, 3, 3), (0.5, 0.5, 0.5), (−3, −3, −3) are chosen and x∗ = (0, 0, 0).

Problem 7. (Ishihara (2001)). f: R² → R² is defined by

f(x) = ( 4x_1 − 2x_2 + x_1² − 3,  −2x_1 + 4x_2 + x_1² − 3 ).

x_0 = (3, 3), (0, −1.5), (−2, 3), (0, 2) are chosen and x∗ = (1, 1).

Problem 8. f: R² → R² is defined by

f(x) = ( 3x_1² − x_2²,  cos(x_1) − 1/(1 + x_2²) ).

x_0 = (0.5, 1) is chosen and x∗ = (0, 0).


Problem 9. (Shen and Ypma (2005)). f: R² → R² is defined by

f(x) = ( x_1² − x_2²,  3x_1² − 3x_2² ).

x_0 = (0.5, 0.4), (−0.5, −0.4), (−0.3, −0.5) and (0.4, 0.5) are chosen and x∗ = (0, 0).
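It is straightforward to confirm numerically that these test problems have singular Jacobians at their roots. A sketch for Problems 5 and 9 (the analytic Jacobians below are our own, derived from the definitions above):

```python
import numpy as np

# Problem 5: f = (e^{x1} + x2 - 1, e^{x2} + x1 - 1), root (0, 0).
J5 = lambda x: np.array([[np.exp(x[0]), 1.0],
                         [1.0, np.exp(x[1])]])

# Problem 9: f = (x1^2 - x2^2, 3x1^2 - 3x2^2), root (0, 0).
J9 = lambda x: np.array([[2*x[0], -2*x[1]],
                         [6*x[0], -6*x[1]]])

root = np.array([0.0, 0.0])
det5 = np.linalg.det(J5(root))   # e^0 * e^0 - 1*1 = 0
det9 = np.linalg.det(J9(root))   # the zero matrix at the root
```

Problem 9 is singular not only at the root: its second row is always three times the first, so the Jacobian is rank-deficient everywhere, which is exactly the situation Newton's method cannot handle.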

TABLE 1: Numerical results of the JFSJ method on Problems 1-9

Problem   x_0                Number of iterations   CPU time (s)
1         (0, 3)             9                      0.0002
          (0.5, 2)           7                      0.0001
2         (-1.5, 0, -1.5)    17                     0.0156
          (4, 0, 4)          38                     0.0311
          (-1, 0.5, -1)      15                     0.0009
          (-10, 0, -10)      51                     0.0312
          (4, 4, 4)          10                     0.0311
3         (0.5, 0.5)         18                     0.0281
          (1.5, 1.5)         18                     0.0252
          (2, 2)             27                     0.0310
          (0.1, 0.1)         8                      0.0013
4         (3, 0)             12                     0.0006
          (0, 0.5)           8                      0.0004
          (-0.5, -0.5)       6                      0.0004
5         (-0.5, -0.5)       5                      0.0003
6         (-1, -1, -1)       10                     0.0156
          (3, 3, 3)          15                     0.0321
          (-3, -3, -3)       19                     0.0388
          (0.5, 0.5, 0.5)    8                      0.0018
7         (-2, 3)            14                     0.0156
          (3, 3)             11                     0.0107
          (0, -1.5)          18                     0.0168
          (0, 2)             10                     0.0003
8         (0.5, 1)           39                     0.0312
9         (0.5, 0.4)         11                     0.0010
          (-0.5, -0.4)       21                     0.0156
          (-0.3, -0.5)       9                      0.0006
          (0.4, 0.5)         12                     0.0012


The numerical results of the proposed method (JFSJ) are reported in Table 1, which includes the number of iterations and the CPU time in seconds. From Table 1 it appears that the JFSJ method solved all the tested problems from their respective initial guesses. Moreover, the results are very encouraging: the JFSJ method requires little CPU time to converge to the solution, whereas Newton’s method is likely to fail to converge when the Jacobian is singular.

Another advantage of our method over Newton’s method and the fixed Newton method is the storage requirement; this becomes more noticeable as the dimension increases.

CONCLUSION
In this paper, we have presented a modification of Newton’s method for solving systems of nonlinear equations with singular Jacobian. Our scheme is based on approximating the Jacobian inverse by a non-singular diagonal matrix without computing the Jacobian at all. The aim has been to bypass the point at which the Jacobian is singular. Among its desirable features is a very low memory requirement for building the approximation of the Jacobian inverse: the storage for the update matrix grows as O(n), as opposed to Newton’s and fixed Newton methods, whose storage grows as O(n²). This is more noticeable as the dimension of the system increases.

Finally, we conclude that, to the best of our knowledge, there are not many alternatives when the Jacobian matrix of a nonlinear system is singular. As such, these results confirm that our method (JFSJ) is a good alternative to Newton’s method and the fixed Newton method, especially when the Jacobian is singular at some point x_k.

REFERENCES
Decker, D.W. and Kelley, C.T. 1985. Broyden's method for a class of problems having singular Jacobian at the root. SIAM J. Numer. Anal. 23: 566-574.


Dennis Jr, J.E. and Schnabel, R.B. 1983. Numerical Methods for
Unconstrained Optimization and Nonlinear Equations. Englewood
Cliffs, NJ: Prentice-Hall.

Dennis, J.E. and Wolkowicz, H. 1993. Sizing and least change secant
methods. SIAM J. Numer. Anal. 30: 1291–1313.

Farid, M., Leong, W.J. and Hassan, M.A. 2010. A new two-step gradient-
type method for large scale unconstrained optimization. Computers
and Mathematics with Applications. 59: 3301-3307.

Farid, M., Leong, W.J. and Hassan, M.A. 2011. An improved multi-step
gradient-type method for large scale optimization. Computers and
Mathematics with Applications. 61: 3312-3318.

Grapsay, T.N. and Malihoutsakit, E.N. 2007. Newton's method without


direct function evaluation, In Proceedings of 8th Hellenic European
Conference on Computer Mathematics and its Applications
(HERCMA 2007), Athens, Hellas.

Griewank, A. and Osborne, M.R. 1983. Analysis of Newton's method at


irregular singularities. SIAM J. Num. Anal. 20: 747-773.

Hassan, M.A., Leong, W.J. and Farid, M. 2009. A new gradient method via quasi-Cauchy relation which guarantees descent. J. Comput. Appl. Math. 230: 300-305.

Ishihara, K. 2001. Iterative methods for solving Nonlinear equations with


singular Jacobian matrix at the solution. In the Proceedings of the
International Conference on Recent Advances in Computational
Mathematics (ICRACM 2001), Japan.

José, L.H., Eulalia, M. and Juan, R.M. 2009. Modified Newton's method for systems of nonlinear equations with singular Jacobian. J. Comput. Appl. Math. 224: 77-83.

Kelley, C.T. 1995. Iterative Methods for Linear and Nonlinear Equations.
Philadelphia, PA: SIAM.


Leong, W.J and Hassan, M.A. 2009. A restarting approach for the
symmetric rank one update for unconstrained optimization.
Computational Optimization and Applications. 43: 327-334.

Leong, W.J and Hassan, M.A. 2011. A new gradient method via least
change secant update. International Journal of Computer
Mathematics. 88: 816-828.

Leong, W.J., Hassan, M.A. and Farid, M. 2010. A monotone gradient


method via weak secant equation for unconstrained optimization.
Taiwanese J. Math. 14(2): 413-423.

Modarres, F., Hassan, M.A. and Leong, W.J. 2011. Improved Hessian
approximation with modified secant equations for symmetric rank-
one method. J. Comput. Appl. Math. 235: 2423-2431.

Natasa, K. and Zorna, L. 2001. Newton-like method with modification of the right-hand-side vector. Math. Comp. 71: 237-250.

Ortega, J.M. and Rheinboldt, W.C. 1970. Iterative Solution of Nonlinear


equations in several variables. New York: Academic Press.

Shen, Y.Q and Ypma, T.J. 2005. Newton's method for singular nonlinear
equations using approximate left and right nullspaces of the
Jacobian. App. Num. Math. 54: 256-265.

Waziri, M.Y., Leong, W.J., Hassan, M.A. and Monsi, M. 2010. A new
Newton’s method with diagonal jacobian approximation for systems
of nonlinear equations. J. Mathematics and Statistics. 6: 246-252.

Waziri, M.Y., Leong, W.J., Hassan, M.A. and Monsi, M. 2010 Jacobian
computation-free Newton method for systems of Non-Linear
equations. Journal of numerical Mathematics and stochastic. 2(1):
54-63.
