A Line Search Algorithm for Unconstrained Optimization
Received February 6th, 2010; revised March 30th, 2010; accepted March 31st, 2010.
ABSTRACT
It is well known that line search methods play a very important role in optimization. In this paper, a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, the method possesses global convergence for nonconvex functions and R-linear convergence for convex functions. Moreover, the search direction satisfies the sufficient descent property and lies in a trust region without carrying out any line search rule. Numerical results show that the new method is effective.
Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k$ is defined by (4).

Step 5: Let
$$d_{k+1}^{new} = d_{k+1}^{1} + \min\left\{0,\ -\frac{g_{k+1}^{T} d_{k+1}^{1}}{\|g_{k+1}\|^{2}}\right\} g_{k+1} - g_{k+1},$$
where
$$d_{k+1}^{1} = \frac{\|y_k^{*}\|\,\|g_{k+1}\|}{\|s_k\|\,\|d_{k+1}\|}\, d_{k+1}, \quad s_k = x_{k+1} - x_k, \quad \|y_k^{*}\| = \max\{\|s_k\|, \|y_k\|\}, \quad y_k = g_{k+1} - g_k .$$

Step 6: Let $d_{k+1} := d_{k+1}^{new}$, $k := k+1$, and go to Step 2.

Remark. In Step 5 of Algorithm 1 we have
$$\frac{\|y_k^{*}\|}{\|s_k\|} = \frac{\max\{\|s_k\|, \|y_k\|\}}{\|s_k\|} \geq 1,$$
which can increase the convergence speed of the algorithm from the computational point of view.
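As an illustration of Steps 4-6, the following is a minimal sketch, assuming (3)-(4) are the standard PRP direction $d_{k+1} = -g_{k+1} + \beta_k^{PRP} d_k$ with $\beta_k^{PRP} = g_{k+1}^{T} y_k / \|g_k\|^{2}$; the function names and the use of NumPy are our own choices and are not part of the paper.

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """PRP direction, assumed to be (3)-(4): d_{k+1} = -g_{k+1} + beta_k^{PRP} d_k."""
    y = g_new - g_old                                   # y_k = g_{k+1} - g_k
    beta = g_new.dot(y) / g_old.dot(g_old)              # beta_k^{PRP}
    return -g_new + beta * d_old

def new_direction(g_new, g_old, d_old, x_new, x_old):
    """Step 5 of Algorithm 1, as reconstructed above: build d_{k+1}^{new}."""
    d = prp_direction(g_new, g_old, d_old)              # d_{k+1} from (3)-(4)
    s = x_new - x_old                                   # s_k = x_{k+1} - x_k
    y = g_new - g_old
    y_star = max(np.linalg.norm(s), np.linalg.norm(y))  # ||y_k^*||
    # d_{k+1}^1 = (||y_k^*|| ||g_{k+1}||) / (||s_k|| ||d_{k+1}||) d_{k+1}
    d1 = y_star * np.linalg.norm(g_new) / (np.linalg.norm(s) * np.linalg.norm(d)) * d
    gg = g_new.dot(g_new)
    # d_{k+1}^{new} = d_{k+1}^1 + min{0, -g_{k+1}^T d_{k+1}^1 / ||g_{k+1}||^2} g_{k+1} - g_{k+1}
    d_new = d1 + min(0.0, -g_new.dot(d1) / gg) * g_new - g_new
    # Sufficient descent (5): g_{k+1}^T d_{k+1}^{new} <= -c ||g_{k+1}||^2 (checked here with c = 0.99)
    assert g_new.dot(d_new) <= -0.99 * gg
    return d_new
```

The assertion anticipates the sufficient descent property established in Lemma 3.1 below.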
Here we give the normal PRP conjugate gradient algorithm and one modified PRP conjugate gradient algorithm [14] as follows.

2) Algorithm 2 (PRP Algorithm)

Step 0: Choose an initial point $x_0 \in R^n$ and constants $0 < \epsilon < 1$, $0 < \delta_1 < \frac{1}{2}$, $\delta_1 < \delta_2 < 1$. Set $d_0 = -g_0 = -\nabla f(x_0)$, $k := 0$.

Step 1: If $\|g_k\| \leq \epsilon$, then stop; otherwise go to Step 2.

Step 2: Compute the steplength $\alpha_k$ by one line search technique, and let $x_{k+1} = x_k + \alpha_k d_k$.

Step 3: If $\|g_{k+1}\| \leq \epsilon$, then stop; otherwise go to Step 4.

Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k$ is defined by (4).

Step 5: Let $k := k+1$ and go to Step 2.

3) Algorithm 3 (PRP+ Algorithm, see [14])

Step 0: Choose an initial point $x_0 \in R^n$ and constants $0 < \epsilon < 1$, $0 < \delta_1 < \frac{1}{2}$, $\delta_1 < \delta_2 < 1$. Set $d_0 = -g_0 = -\nabla f(x_0)$, $k := 0$.

Step 1: If $\|g_k\| \leq \epsilon$, then stop; otherwise go to Step 2.

Step 2: Compute the steplength $\alpha_k$ by one line search technique, and let $x_{k+1} = x_k + \alpha_k d_k$.

Step 3: If $\|g_{k+1}\| \leq \epsilon$, then stop; otherwise go to Step 4.

Step 4: Calculate the search direction $d_{k+1}$ by (3), where $\beta_k = \max\{0, \beta_k^{PRP}\}$.

Step 5: Let $k := k+1$ and go to Step 2.
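The only difference between Algorithms 2 and 3 lies in the choice of $\beta_k$. A minimal sketch of this difference, again assuming (4) is the standard PRP formula (the names below are illustrative only):

```python
import numpy as np

def beta_prp(g_new, g_old):
    """beta_k^{PRP} = g_{k+1}^T y_k / ||g_k||^2, assumed to be formula (4)."""
    y = g_new - g_old                        # y_k = g_{k+1} - g_k
    return float(g_new.dot(y) / g_old.dot(g_old))

def beta_prp_plus(g_new, g_old):
    """PRP+ truncation of [14]: beta_k = max{0, beta_k^{PRP}} (Algorithm 3, Step 4)."""
    return max(0.0, beta_prp(g_new, g_old))

# When g_{k+1}^T g_k exceeds ||g_{k+1}||^2, beta^{PRP} becomes negative; PRP+
# truncates it to zero, which restarts the method along -g_{k+1}.
g_old = np.array([1.0, 0.0])
g_new = np.array([0.4, 0.1])
print(beta_prp(g_new, g_old), beta_prp_plus(g_new, g_old))   # -0.23, 0.0
```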
We will concentrate on the convergence results of Algorithm 1 in the following section.
3. Convergence Analysis

In order to establish the global convergence of Algorithm 1, the following assumptions are needed; they are often used to prove the convergence of the line search method (see [15,26]).

Assumption A (i) $f$ is bounded below on the level set $\Omega = \{x \in R^n : f(x) \leq f(x_0)\}$.

Assumption A (ii) In some neighborhood $\Omega_0$ of $\Omega$, $f$ is differentiable and its gradient is Lipschitz continuous, namely, there exists a constant $L > 0$ such that
$$\|g(x) - g(y)\| \leq L \|x - y\|, \quad \text{for all } x, y \in \Omega_0 .$$

In the following, let $g_k \neq 0$ for all $k$, for otherwise a stationary point has been found.

Lemma 3.1 Consider Algorithm 1 and let Assumption A (ii) hold. Then (5) and (9) hold.

Proof. If $k = 0$, (5) and (9) hold obviously. For $k \geq 1$, by Assumption A (ii) and Step 5 of Algorithm 1, we have
$$\|d_{k+1}^{new}\| \leq \|d_{k+1}^{1}\| + \|d_{k+1}^{1}\| + \|g_{k+1}\| \leq (2\max\{1, L\} + 1)\|g_{k+1}\| .$$
Now we consider the vector product $g_{k+1}^{T} d_{k+1}^{1}$ in the following two cases.

Case 1. If $g_{k+1}^{T} d_{k+1}^{1} \leq 0$, then we get
$$g_{k+1}^{T} d_{k+1}^{new} = g_{k+1}^{T} d_{k+1}^{1} + \min\left\{0, -\frac{g_{k+1}^{T} d_{k+1}^{1}}{\|g_{k+1}\|^{2}}\right\}\|g_{k+1}\|^{2} - \|g_{k+1}\|^{2} = g_{k+1}^{T} d_{k+1}^{1} - \|g_{k+1}\|^{2} \leq -\|g_{k+1}\|^{2} .$$

Case 2. If $g_{k+1}^{T} d_{k+1}^{1} > 0$, then we obtain
$$g_{k+1}^{T} d_{k+1}^{new} = g_{k+1}^{T} d_{k+1}^{1} + \min\left\{0, -\frac{g_{k+1}^{T} d_{k+1}^{1}}{\|g_{k+1}\|^{2}}\right\}\|g_{k+1}\|^{2} - \|g_{k+1}\|^{2} = g_{k+1}^{T} d_{k+1}^{1} - \frac{g_{k+1}^{T} d_{k+1}^{1}}{\|g_{k+1}\|^{2}}\|g_{k+1}\|^{2} - \|g_{k+1}\|^{2} = -\|g_{k+1}\|^{2} .$$

Let $c \in (0,1)$ and $c_1 = 2\max\{1, L\} + 1$; using Step 6 of Algorithm 1, (5) and (9) hold, respectively. The proof is completed.

The above lemma shows that the search direction $d_k$ satisfies the sufficient descent condition (5) and the condition (9) without any line search rule.

Based on Lemma 3.1 and Assumptions A (i) and (ii), let us give the global convergence theorem of Algorithm 1.

Theorem 3.1 Let $\{\alpha_k, d_k, x_{k+1}, g_{k+1}\}$ be generated by Algorithm 1 with the exact minimization rule, the Goldstein line search rule, the SWP line search rule, or the WWP line search rule, and let Assumptions A (i) and (ii) hold. Then
$$\lim_{k \to \infty} \|g_k\| = 0. \qquad (12)$$
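Theorem 3.1 can be illustrated on a small example. The following is a minimal sketch, not from the paper, that runs the reconstructed Steps 1-6 with the exact minimization rule on a strongly convex quadratic, reusing the `new_direction` function sketched above after Algorithm 1; the test problem and all names are our own assumptions.

```python
import numpy as np

# Quadratic test problem f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite.
rng = np.random.default_rng(1)
n = 20
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)                 # SPD Hessian
b = rng.normal(size=n)
grad = lambda x: A @ x - b

x = rng.normal(size=n)
g = grad(x)
d = -g                                      # d_0 = -g_0
for k in range(500):
    if np.linalg.norm(g) <= 1e-6:           # Steps 1/3: stopping test
        break
    alpha = -g.dot(d) / d.dot(A @ d)        # exact minimization rule along d_k
    x_new = x + alpha * d
    g_new = grad(x_new)
    d = new_direction(g_new, g, d, x_new, x)  # Steps 4-6 (sketch given earlier)
    x, g = x_new, g_new

# The gradient norm is driven toward zero, in line with (12).
print(k, np.linalg.norm(g))
```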
it is clear that the given method has the most wins (has the highest probability of being the optimal solver).

In summary, the presented numerical results reveal that the new method, compared with the normal PRP method and the modified PRP method [14], has potential advantages.
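The phrase "highest probability of being the optimal solver" is the standard reading of a Dolan-Moré performance profile [29] at ratio $\tau = 1$. The following is a minimal sketch of how such a profile can be computed from solver timings; the data and names are hypothetical and not from the paper.

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profile [29].

    times: array of shape (n_problems, n_solvers); times[p, s] is the cost
           (e.g., CPU time or iterations) of solver s on problem p, with
           np.inf marking a failure.
    taus:  sequence of performance-ratio thresholds (tau >= 1).

    Returns rho of shape (len(taus), n_solvers): rho[i, s] is the fraction of
    problems solver s solves within a factor taus[i] of the best solver.
    rho at tau = 1 is the fraction of wins.
    """
    times = np.asarray(times, dtype=float)
    best = times.min(axis=1, keepdims=True)      # best cost per problem
    ratios = times / best                        # performance ratios r_{p,s}
    n_problems = times.shape[0]
    return np.array([(ratios <= tau).sum(axis=0) / n_problems for tau in taus])

# Hypothetical timings for three solvers on four test problems.
example_times = np.array([
    [1.0, 1.2, 2.0],
    [0.5, 0.6, 0.55],
    [3.0, np.inf, 2.8],   # the second solver fails on this problem
    [2.0, 2.0, 2.5],
])
print(performance_profile(example_times, taus=[1.0, 2.0, 5.0]))
```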
5. Conclusions

This paper gives a new line search method for unconstrained optimization. The global convergence and the R-linear convergence are established under weaker assumptions on the search direction $d_k$. In particular, the direction $d_k$ satisfies the sufficient descent condition (5) and the condition (9) without carrying out any line search technique, whereas some papers [14,27,30] obtain these two conditions only by assumption. The comparison of the numerical results shows that the new search direction of the new algorithm is a good search direction at every iteration.
REFERENCES

[1] G. Yuan and X. Lu, "A New Line Search Method with Trust Region for Unconstrained Optimization," Communications on Applied Nonlinear Analysis, Vol. 15, No. 1, 2008, pp. 35-49.
[2] G. Yuan, X. Lu, and Z. Wei, "New Two-Point Stepsize Gradient Methods for Solving Unconstrained Optimization Problems," Natural Science Journal of Xiangtan University, Vol. 29, No. 1, 2007, pp. 13-15.
[3] G. Yuan and Z. Wei, "New Line Search Methods for Unconstrained Optimization," Journal of the Korean Statistical Society, Vol. 38, No. 1, 2009, pp. 29-39.
[4] Y. Yuan and W. Sun, "Theory and Methods of Optimization," Science Press of China, Beijing, 1999.
[5] D. C. Luenberger, "Linear and Nonlinear Programming," 2nd Edition, Addison-Wesley, Reading, MA, 1989.
[6] J. Nocedal and S. J. Wright, "Numerical Optimization," Springer, Berlin, Heidelberg, New York, 1999.
[7] Z. Wei, G. Li, and L. Qi, "New Quasi-Newton Methods for Unconstrained Optimization Problems," Applied Mathematics and Computation, Vol. 175, No. 1, 2006, pp. 1156-1188.
[8] Z. Wei, G. Yu, G. Yuan, and Z. Lian, "The Superlinear Convergence of a Modified BFGS-type Method for Unconstrained Optimization," Computational Optimization and Applications, Vol. 29, No. 3, 2004, pp. 315-332.
[9] G. Yuan and Z. Wei, "The Superlinear Convergence Analysis of a Nonmonotone BFGS Algorithm on Convex Objective Functions," Acta Mathematica Sinica, English Series, Vol. 24, No. 1, 2008, pp. 35-42.
[10] G. Yuan and Z. Wei, "Convergence Analysis of a Modified BFGS Method on Convex Minimizations," Computational Optimization and Applications, 2008.
[11] Y. Dai and Y. Yuan, "A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property," SIAM Journal on Optimization, Vol. 10, No. 1, 2000, pp. 177-182.
[12] Z. Wei, G. Li, and L. Qi, "New Nonlinear Conjugate Gradient Formulas for Large-Scale Unconstrained Optimization Problems," Applied Mathematics and Computation, Vol. 179, No. 2, 2006, pp. 407-430.
[13] G. Yuan and X. Lu, "A Modified PRP Conjugate Gradient Method," Annals of Operations Research, Vol. 166, No. 1, 2009, pp. 73-90.
[14] J. C. Gilbert and J. Nocedal, "Global Convergence Properties of Conjugate Gradient Methods for Optimization," SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-42.
[15] Y. Dai and Y. Yuan, "Nonlinear Conjugate Gradient Methods," Shanghai Science and Technology Press, 2000.
[16] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, Vol. 16, 1969, pp. 35-43.
[17] M. J. D. Powell, "Nonconvex Minimization Calculations and the Conjugate Gradient Method," Lecture Notes in Mathematics, Vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122-141.
[18] L. Grippo and S. Lucidi, "A Globally Convergent Version of the Polak-Ribière Gradient Method," Mathematical Programming, Vol. 78, No. 3, 1997, pp. 375-391.
[19] W. W. Hager and H. Zhang, "A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search," SIAM Journal on Optimization, Vol. 16, No. 1, 2005, pp. 170-192.
[20] Z. Wei, S. Yao, and L. Liu, "The Convergence Properties of Some New Conjugate Gradient Methods," Applied Mathematics and Computation, Vol. 183, No. 2, 2006, pp. 1341-1350.
[21] G. H. Yu, "Nonlinear Self-Scaling Conjugate Gradient Methods for Large-Scale Optimization Problems," Doctoral Thesis, Sun Yat-Sen University, 2007.
[22] G. Yuan, "Modified Nonlinear Conjugate Gradient Methods with Sufficient Descent Property for Large-Scale Optimization Problems," Optimization Letters, Vol. 3, No. 1, 2009, pp. 11-21.
[23] G. Yuan, "A Conjugate Gradient Method for Unconstrained Optimization Problems," International Journal of Mathematics and Mathematical Sciences, Vol. 2009, 2009, pp. 1-14.
[24] G. Yuan, X. Lu, and Z. Wei, "A Conjugate Gradient Method with Descent Direction for Unconstrained Optimization," Journal of Computational and Applied Mathematics, Vol. 233, No. 2, 2009, pp. 519-530.
[25] L. Zhang, W. Zhou, and D. Li, "A Descent Modified Polak-Ribière-Polyak Conjugate Gradient Method and its Global Convergence," IMA Journal of Numerical Analysis, Vol. 26, No. 4, 2006, pp. 629-649.
[26] Y. Liu and C. Storey, "Efficient Generalized Conjugate Gradient Algorithms, Part 1: Theory," Journal of Optimization Theory and Applications, Vol. 69, No. 1, 1992, pp. 17-41.
[27] Z. J. Shi, "Convergence of Line Search Methods for Unconstrained Optimization," Applied Mathematics and Computation, Vol. 157, No. 2, 2004, pp. 393-405.
[28] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, "Testing Unconstrained Optimization Software," ACM Transactions on Mathematical Software, Vol. 7, No. 1, 1981, pp. 17-41.
[29] E. D. Dolan and J. J. Moré, "Benchmarking Optimization Software with Performance Profiles," Mathematical Programming, Vol. 91, No. 2, 2002, pp. 201-213.
[30] Y. Dai and Q. Ni, "Testing Different Conjugate Gradient Methods for Large-Scale Unconstrained Optimization," Journal of Computational Mathematics, Vol. 21, No. 3, 2003, pp. 311-320.