Waziri PDF
ABSTRACT
The basic requirement of Newton's method for solving systems of nonlinear equations
is that the Jacobian must be non-singular. This condition restricts to some extent the
application of Newton's method. In this paper we present a modification of Newton's
method for systems of nonlinear equations whose Jacobian is singular. This is
made possible by approximating the Jacobian inverse by a diagonal matrix by
means of a variational technique. The aim of our approach is to bypass the
point at which the Jacobian is singular. The local convergence of the proposed
method is proven under suitable assumptions. Numerical experiments are
carried out which show that the proposed method is very encouraging.
INTRODUCTION
Let us consider the problem of finding a solution of the system of nonlinear
equations
F(x) = 0,     (1)
where F : ℜ^n → ℜ^n. The best-known method for finding a solution to (1) is Newton's
method, in which the iterates are given by
x_{k+1} = x_k − (F′(x_k))^{−1} F(x_k),     (2)
which, for a non-singular Jacobian at the root, converges quadratically:
‖x_{k+1} − x∗‖ ≤ h ‖x_k − x∗‖^2     (3)
for some h. When the Jacobian is singular the convergence rate may be
unsatisfactory: it slows down when approaching a singular root, and
convergence to x∗ may even be lost (Leong and Hassan (2009); Hassan et
al. (2009); Dennis and Schnabel (1983); Waziri et al. (2010)). This
condition (non-singular Jacobian, F′(x∗) ≠ 0) restricts to a certain extent the
application of Newton's method for solving nonlinear equations (Kelley
(1995)). Some approaches have been developed to overcome this
shortcoming. The simplest is incorporated in the fixed Newton method,
which generates an iterative sequence {x_k} from a given initial
guess x_0 (Ortega and Rheinboldt (1970); Waziri et al. (2010); Farid et al.
(2011)).
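For illustration, the classical Newton iteration (2) can be sketched as follows. This is a minimal sketch of ours, assuming NumPy; the test system and all names are illustrative, not from the paper, and a linear solve replaces the explicit inverse:

```python
import numpy as np

def newton(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k).
    Requires the Jacobian J(x_k) to be non-singular at every iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        # Solve J(x) s = -F(x) rather than forming the inverse explicitly.
        s = np.linalg.solve(J(x), -Fx)
        x = x + s
    return x

# Illustrative system (not from the paper): circle/line intersection,
# with a regular root at (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])
```

When J(x) becomes singular or nearly singular along the way, the linear solve in this sketch fails or becomes unstable, which is precisely the situation the method proposed below is designed to avoid.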
(F′(x_k))^{−1} ≈ G_k,     (7)
where G_k is a diagonal matrix, and the approximation is improved at each iteration by a diagonal update U_k:
G_{k+1} = G_k + U_k.     (8)
Now, imposing the weak secant condition (Farid et al. (2010); Leong and
Hassan (2011); Natasa and Zorna (2001)), we obtain U_k as the solution of
min (1/2) ‖U_k‖_F^2
s.t. ΔF_k^T (G_k + U_k) ΔF_k = ΔF_k^T Δx_k,     (11)
where Δx_k = x_{k+1} − x_k and ΔF_k = F(x_{k+1}) − F(x_k).
Since U_k is diagonal, say U_k = diag(τ_1, τ_2, …, τ_n), this is equivalent to
min (1/2)(τ_1^2 + τ_2^2 + … + τ_n^2)
s.t. Σ_{i=1}^n ΔF_k^{(i)2} τ_i = ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k.     (12)
The solution of (12) can be found by considering its Lagrangian
L(τ_i, λ) = (1/2)(τ_1^2 + τ_2^2 + … + τ_n^2) + λ (Σ_{i=1}^n ΔF_k^{(i)2} τ_i − ΔF_k^T Δx_k + ΔF_k^T G_k ΔF_k).     (13)
Setting the partial derivative of (13) with respect to each τ_i to zero gives
τ_i = −λ ΔF_k^{(i)2},  i = 1, …, n.     (15)
Multiplying (15) by ΔF_k^{(i)2} and summing over n, we have
Σ_{i=1}^n ΔF_k^{(i)2} τ_i = −λ Σ_{i=1}^n ΔF_k^{(i)4}.     (16)
On the other hand, the constraint in (12) gives
Σ_{i=1}^n ΔF_k^{(i)2} τ_i = ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k.     (17)
Combining (16) and (17) and substituting back into (15) yields
τ_i = (ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) ΔF_k^{(i)2} / Σ_{j=1}^n ΔF_k^{(j)4}.
Letting E_k = diag(ΔF_k^{(1)2}, ΔF_k^{(2)2}, …, ΔF_k^{(n)2}) and Σ_{i=1}^n ΔF_k^{(i)4} = Tr(E_k^2),
where Tr is the trace operator, yields the updating formula
G_{k+1} = G_k + ((ΔF_k^T Δx_k − ΔF_k^T G_k ΔF_k) / Tr(E_k^2)) E_k.     (20)
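As a quick numerical sanity check of this solution (an illustration of ours, assuming NumPy and random data; all names are ours), one can verify that the minimiser τ satisfies the weak secant constraint in (12):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
dF = rng.standard_normal(n)              # stands in for ΔF_k
dx = rng.standard_normal(n)              # stands in for Δx_k
G = np.diag(rng.uniform(0.5, 1.5, n))    # current diagonal approximation G_k

E = np.diag(dF**2)                       # E_k = diag(ΔF_k^(1)^2, ..., ΔF_k^(n)^2)
rho = dF @ dx - dF @ G @ dF              # right-hand side of the constraint in (12)
tau = dF**2 * rho / np.trace(E @ E)      # minimiser: tau_i = ΔF^(i)^2 * rho / Tr(E^2)

# Left-hand side of the constraint: sum_i ΔF_k^(i)^2 tau_i
lhs = np.sum(dF**2 * tau)
```

By construction lhs equals rho, since Σ_i ΔF^{(i)2} · ΔF^{(i)2} / Tr(E^2) = 1, confirming the derivation above.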
We safeguard against possibly very small ‖ΔF_k‖ and Tr(E_k^2): the update (20) is
used only when ‖ΔF_k‖ ≥ 10^{−4}; if not, we set G_{k+1} = G_k. Based on the above
explanation, we have the following algorithm.
Algorithm JFSJ:
Step 1: Given x_0 and G_0 = I, set k = 0.
Step 2: Compute F(x_k). If ‖F(x_k)‖ is sufficiently small, stop.
Step 3: Set x_{k+1} = x_k − G_k F(x_k).
Step 4: If ‖ΔF_k‖ ≥ 10^{−4}, compute G_{k+1} by (20); otherwise set G_{k+1} = G_k.
Step 5: Set k := k + 1 and go to Step 2.
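The scheme can be sketched in Python as follows. This is our own illustrative translation, assuming NumPy; the function names and tolerances are ours, and the test problem is Problem 5 from the numerical section:

```python
import numpy as np

def jfsj(F, x0, tol=1e-8, max_iter=200):
    """Sketch of the JFSJ iteration: x_{k+1} = x_k - G_k F(x_k), where G_k is
    a diagonal approximation of the Jacobian inverse, updated by (20)."""
    x = np.asarray(x0, dtype=float)
    G = np.eye(x.size)                    # G_0 = I
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= tol:     # stopping test on ||F(x_k)||
            break
        x_new = x - G @ Fx                # x_{k+1} = x_k - G_k F(x_k)
        F_new = F(x_new)
        dF, dx = F_new - Fx, x_new - x    # ΔF_k and Δx_k
        E = np.diag(dF**2)                # E_k = diag(ΔF_k^(i)^2)
        tr_E2 = np.trace(E @ E)           # Tr(E_k^2) = sum_i ΔF_k^(i)^4
        if np.linalg.norm(dF) >= 1e-4 and tr_E2 > 0:   # safeguard
            G = G + ((dF @ dx - dF @ G @ dF) / tr_E2) * E
        x, Fx = x_new, F_new
    return x

# Problem 5 of the paper: f(x) = (e^{x1} + x2 - 1, e^{x2} + x1 - 1), x* = (0, 0).
F5 = lambda x: np.array([np.exp(x[0]) + x[1] - 1.0, np.exp(x[1]) + x[0] - 1.0])
root = jfsj(F5, [-0.5, -0.5])
```

Note that no Jacobian is ever formed or inverted; each iteration costs one function evaluation plus O(n) work on the diagonal update.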
CONVERGENCE ANALYSIS
To analyze the convergence of the proposed method, we will make
the following standard assumptions on F .
Assumption 1
(i) F is differentiable in an open convex set E in ℜ^n.
(ii) There exists x∗ ∈ E such that F(x∗) = 0, and F′(x) is continuous for all x.
(iii) F′(x) satisfies a Lipschitz condition of order one, i.e. there exists a
positive constant µ such that
‖F′(x) − F′(y)‖ ≤ µ ‖x − y‖     (22)
for all x, y ∈ ℜ^n.
(iv) There exist constants c_1 ≤ c_2 such that c_1 ‖ω‖^2 ≤ ω^T F′(x) ω ≤ c_2 ‖ω‖^2 for
all x ∈ E and ω ∈ ℜ^n.
To prove the convergence of the JFSJ method, we need the following result.
Theorem 3.1.
Let Assumption 1 hold. There are K_B > 0, δ > 0 and δ_1 > 0 such that if
x_0 ∈ B(δ) and the matrix-valued function B(x) satisfies
‖I − B(x) F′(x∗)‖ = ρ(x) < δ_1 for all x ∈ B(δ), then the iteration
x_{k+1} = x_k − B(x_k) F(x_k)     (23)
converges linearly to x∗.
Based on Theorem 3.1 together with our explanation in the previous
section, we have the following result.
Theorem 3.2
Suppose Assumption 1 holds. There are constants β > 0, δ > 0, α > 0
and γ > 0 such that if x_0 ∈ E and G_0 satisfies ‖I − G_0 F′(x∗)‖_F < δ,
where ‖·‖_F denotes the Frobenius norm, then for all x ∈ E the iteration
x_{k+1} = x_k − G_k F(x_k)
converges linearly to x∗.
Proof.
We need to show that the update G_k satisfies ‖I − G_k F′(x∗)‖_F < δ_k
for some constant δ_k > 0 and all k. Since G_{k+1} = G_k + U_k, it follows that
‖G_{k+1}‖_F ≤ ‖G_k‖_F + ‖U_k‖_F.     (24)
In particular,
‖G_1‖_F ≤ ‖G_0‖_F + ‖U_0‖_F.     (25)
Since G_0 = I_n, we have ‖G_0‖_F = √n. From (20) when k = 0 we have
|U_0^{(i)}| = |ΔF_0^T Δx_0 − ΔF_0^T G_0 ΔF_0| ΔF_0^{(i)2} / Σ_{i=1}^n ΔF_0^{(i)4}.     (26)
Since ΔF_0^{(max)4} / Σ_{i=1}^n ΔF_0^{(i)4} ≤ 1, (26) turns into
|U_0^{(i)}| ≤ |ΔF_0^T Δx_0 − ΔF_0^T G_0 ΔF_0| / ΔF_0^{(max)2}.     (27)
From condition (iv) we have c_1 ≤ c_2, but c_1 and c_2 can be negative, hence
we may not have |c_1| ≤ |c_2|. Therefore, we choose the larger in magnitude,
c = max{|c_1|, |c_2|}; then (27) becomes
|U_0^{(i)}| ≤ |c − √n| (ΔF_0^T ΔF_0) / ΔF_0^{(max)2}.     (28)
Since ΔF_0^{(i)2} ≤ ΔF_0^{(max)2} for i = 1, …, n, we have ΔF_0^T ΔF_0 ≤ n ΔF_0^{(max)2},
and it follows that
|U_0^{(i)}| ≤ n |c − √n|.     (29)
Hence we obtain
‖U_0‖_F ≤ n^{3/2} |c − √n|.     (30)
Setting α = n^{3/2} |c − √n|, then
‖U_0‖_F ≤ α.     (31)
Substituting (31) into (25) gives
‖G_1‖_F ≤ √n + α = β.     (32)
Now,
‖I − G_1 F′(x∗)‖_F = ‖I − (G_0 + U_0) F′(x∗)‖_F
≤ ‖I − G_0 F′(x∗)‖_F + ‖U_0 F′(x∗)‖_F
≤ ‖I − G_0 F′(x∗)‖_F + ‖U_0‖_F ‖F′(x∗)‖_F,     (33)
hence ‖I − G_1 F′(x∗)‖_F < δ + αϕ = δ_1, where ϕ = ‖F′(x∗)‖_F (even
when ‖F′(x∗)‖_F = 0). Thus, by induction, ‖I − G_k F′(x∗)‖_F < δ_k for all k.
Therefore, from Theorem 3.1, the sequence generated by Algorithm JFSJ
converges linearly to x∗.
NUMERICAL RESULTS
In order to demonstrate the performance of our proposed
method (JFSJ) for solving nonlinear systems with singular Jacobian, it has
been applied to some popular test problems. We implemented the method
using variable precision arithmetic in Matlab 7.0. All the calculations were
carried out in double precision. The stopping criterion is based on the norm
of F(x_k).
Problem 1. f : R^2 → R^2 is defined by
f(x) = ( (x1 − 1)^2 (x1 − x2), (x1 − 2)^5 cos(2x1/x2) )
Problem 2. f : R^3 → R^3 is defined by
f(x) = ( 4x1 − 2x2 + x1^2 − 3, −x1 + 4x2 − x3 + x2^2 − 3, −2x2 + 4x3 + x3^2 − 3 )
x0 = (−1.5, 0, −1.5), (4, 0, 4), (−1, 0.5, −1), (4, 4, 4), (−10, 0, −10) are chosen and
x∗ = (1, 1, 1).
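As a small check (our own, assuming NumPy), the system of Problem 2 indeed vanishes at x∗ = (1, 1, 1):

```python
import numpy as np

def f2(x):
    # Problem 2: the 3x3 system given above
    x1, x2, x3 = x
    return np.array([
        4*x1 - 2*x2 + x1**2 - 3,
        -x1 + 4*x2 - x3 + x2**2 - 3,
        -2*x2 + 4*x3 + x3**2 - 3,
    ])

residual = f2(np.array([1.0, 1.0, 1.0]))
```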
Problem 3. f : R^2 → R^2 is defined by
f(x) = ( 2/(1 + x1^2) + sin(x2 − 1) − 1, sin(x1 − 1) + 2/(1 + x2^2) − 1 )
x0 = (0.5, 0.5), (2, 2), (0.1, 0.1) are chosen and x∗ = (1, 1).
Problem 4. f : R 2 → R 2 is defined by
x0 = (3, 0), (0, 0.5), ( −0.5, −0.5) are chosen and x∗ = (0,0).
Problem 5. f : R^2 → R^2 is defined by
f(x) = ( e^{x1} + x2 − 1, e^{x2} + x1 − 1 )
x0 = (−0.5, −0.5) is chosen and x∗ = (0, 0).
Problem 6. f : R^3 → R^3 is defined by
x0 = (−1, −1, −1), (3, 3, 3), (0.5, 0.5, 0.5), (−3, −3, −3) are chosen and x∗ = (0, 0, 0).
Problem 7. f : R^2 → R^2 is defined by
f(x) = ( 4x1 − 2x2 + x1^2 − 3, −2x1 + 4x2 + x2^2 − 3 )
x0 = (3, 3), (0, −1.5), (−2, 3), (0, 2) are chosen and x∗ = (1, 1).
Problem 8. f : R^2 → R^2 is defined by
f(x) = ( 3x1^2 − x2^2, cos(x1) − 1/(1 + x2^2) )
x0 = (0.5, 1) is chosen.
Problem 9. f : R^2 → R^2 is defined by
f(x) = ( x1^2 − x2^2, 3x1^2 − 3x2^2 )
x0 = (0.5, 0.4), (−0.5, −0.4), (−0.3, −0.5) and (0.4, 0.5) are chosen and
x∗ = (0, 0).
Problem   x0                Number of iterations   CPU time (s)
1         (0, 3)            9                      0.0002
          (0.5, 2)          7                      0.0001
2         (-1.5, 0, -1.5)   17                     0.0156
          (4, 0, 4)         38                     0.0311
          (-1, 0.5, -1)     15                     0.0009
          (-10, 0, -10)     51                     0.0312
          (4, 4, 4)         10                     0.0311
3         (0.5, 0.5)        18                     0.0281
          (1.5, 1.5)        18                     0.0252
          (2, 2)            27                     0.0310
          (0.1, 0.1)        8                      0.0013
4         (3, 0)            12                     0.0006
          (0, 0.5)          8                      0.0004
          (-0.5, -0.5)      6                      0.0004
5         (-0.5, -0.5)      5                      0.0003
6         (-1, -1, -1)      10                     0.0156
          (3, 3, 3)         15                     0.0321
          (-3, -3, -3)      19                     0.0388
          (0.5, 0.5, 0.5)   8                      0.0018
7         (-2, 3)           14                     0.0156
          (3, 3)            11                     0.0107
          (0, -1.5)         18                     0.0168
          (0, 2)            10                     0.0003
8         (0.5, 1)          39                     0.0312
9         (0.5, 0.4)        11                     0.0010
          (-0.5, -0.4)      21                     0.0156
          (-0.3, -0.5)      9                      0.0006
          (0.4, 0.5)        12                     0.0012
CONCLUSION
In this paper, we have presented a modification of Newton's method
for solving systems of nonlinear equations with singular Jacobian. Our
scheme is based on approximating the Jacobian inverse by a non-singular
diagonal matrix without computing the Jacobian at all. The aim has
been to bypass the point at which the Jacobian is singular. Among its
desirable features is that it requires very little memory to build
the approximation of the Jacobian inverse. In fact, the storage for the update
matrix grows as O(n), as opposed to Newton's and fixed Newton
methods, whose storage grows as O(n^2). This is more noticeable as the
dimension of the system increases.
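The O(n) storage claim can be made concrete: since G_k and U_k are diagonal, the whole approximation fits in a length-n vector. The following is an illustrative sketch of ours, assuming NumPy, comparing the vector form of update (20) against the explicit matrix form:

```python
import numpy as np

def diag_update(g, dF, dx):
    """One JFSJ-style update of the diagonal inverse approximation,
    stored as a length-n vector g (O(n) memory, no n x n matrix)."""
    tr_E2 = np.sum(dF**4)                 # Tr(E_k^2)
    if np.linalg.norm(dF) < 1e-4 or tr_E2 == 0.0:
        return g                          # safeguard: keep G_{k+1} = G_k
    rho = dF @ dx - np.sum(g * dF**2)     # ΔF^T Δx - ΔF^T G_k ΔF for diagonal G_k
    return g + (rho / tr_E2) * dF**2      # add the diagonal of U_k

# The vector form agrees with the explicit matrix form of update (20):
rng = np.random.default_rng(1)
g, dF, dx = np.ones(4), rng.standard_normal(4), rng.standard_normal(4)
G, E = np.diag(g), np.diag(dF**2)
G_next = G + ((dF @ dx - dF @ G @ dF) / np.trace(E @ E)) * E
```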
Finally, we conclude that, to the best of our knowledge, there are not
many alternatives when the Jacobian matrix of a nonlinear system is
singular. As such, our results confirm that the JFSJ method is a good
alternative to Newton's method and the fixed Newton method, especially
when the Jacobian is singular at some point x_k.
REFERENCES
Decker, D.W. and Kelley, C.T. 1985. Broyden's method for a class of
problems having singular Jacobian at the root. SIAM J. Numer. Anal.
23: 566-574.
Dennis Jr, J.E. and Schnabel, R.B. 1983. Numerical Methods for
Unconstrained Optimization and Nonlinear Equations. Englewood
Cliffs, NJ: Prentice-Hall.
Dennis, J.E. and Wolkowicz, H. 1993. Sizing and least change secant
methods. SIAM J. Numer. Anal. 30: 1291–1313.
Farid, M., Leong, W.J. and Hassan, M.A. 2010. A new two-step gradient-
type method for large scale unconstrained optimization. Computers
and Mathematics with Applications. 59: 3301-3307.
Farid, M., Leong, W.J. and Hassan, M.A. 2011. An improved multi-step
gradient-type method for large scale optimization. Computers and
Mathematics with Applications. 61: 3312-3318.
Hassan, M.A., Leong, W.J. and Farid, M. 2009. A new gradient method via
quasi-Cauchy relation which guarantees descent. J. Comput. Appl.
Math. 230: 300-305.
Hueso, J.L., Martínez, E. and Torregrosa, J.R. 2009. Modified Newton's
method for systems of nonlinear equations with singular Jacobian. J.
Comput. Appl. Math. 224: 77-83.
Kelley, C.T. 1995. Iterative Methods for Linear and Nonlinear Equations.
Philadelphia, PA: SIAM.
Leong, W.J and Hassan, M.A. 2009. A restarting approach for the
symmetric rank one update for unconstrained optimization.
Computational Optimization and Applications. 43: 327-334.
Leong, W.J and Hassan, M.A. 2011. A new gradient method via least
change secant update. International Journal of Computer
Mathematics. 88: 816-828.
Modarres, F., Hassan, M.A. and Leong, W.J. 2011. Improved Hessian
approximation with modified secant equations for symmetric rank-
one method. J. Comput. Appl. Math. 235: 2423-2431.
Shen, Y.Q. and Ypma, T.J. 2005. Newton's method for singular nonlinear
equations using approximate left and right nullspaces of the
Jacobian. Appl. Numer. Math. 54: 256-265.
Waziri, M.Y., Leong, W.J., Hassan, M.A. and Monsi, M. 2010. A new
Newton's method with diagonal Jacobian approximation for systems
of nonlinear equations. J. Mathematics and Statistics. 6: 246-252.
Waziri, M.Y., Leong, W.J., Hassan, M.A. and Monsi, M. 2010. Jacobian
computation-free Newton's method for systems of nonlinear
equations. Journal of Numerical Mathematics and Stochastics. 2(1):
54-63.