


J. Software Engineering & Applications, 2010, 3: 503-509
doi:10.4236/jsea.2010.35057 Published Online May 2010 (http://www.SciRP.org/journal/jsea)

A Line Search Algorithm for Unconstrained Optimization*

Gonglin Yuan¹, Sha Lu², Zengxin Wei¹

¹College of Mathematics and Information Science, Guangxi University, Nanning, China; ²School of Mathematical Science, Guangxi Teachers Education University, Nanning, China.
Email: [email protected]

Received February 6th, 2010; revised March 30th, 2010; accepted March 31st, 2010.

*This work is supported by China NSF grants 10761001, the Scientific Research Foundation of Guangxi University (Grant No. X081082), and Guangxi SF grants 0991028.

ABSTRACT
It is well known that line search methods play a very important role in optimization. In this paper a new line search method is proposed for solving unconstrained optimization problems. Under weak conditions, this method possesses global convergence and R-linear convergence for nonconvex and convex functions, respectively. Moreover, the given search direction has the sufficient descent property and belongs to a trust region without carrying out any line search rule. Numerical results show that the new method is effective.

Keywords: Line Search, Unconstrained Optimization, Global Convergence, R-Linear Convergence

1. Introduction

Consider the unconstrained optimization problem

    min_{x ∈ R^n} f(x),    (1)

where f : R^n → R is continuously differentiable. A line search algorithm for (1) often generates a sequence of iterates {x_k} by letting

    x_{k+1} = x_k + α_k d_k,  k = 0, 1, 2, ...,    (2)

where x_k is the current iterate, d_k is a search direction, and α_k > 0 is a steplength. Different choices of d_k and α_k determine different line search methods [1-3]. Each iteration of such a method is divided into two stages: 1) choose a descent search direction d_k; 2) choose a step size α_k along the search direction d_k. Throughout this paper, we denote f(x_k) by f_k, ∇f(x_k) by g_k, and ∇f(x_{k+1}) by g_{k+1}, respectively; ||·|| denotes the Euclidean norm of vectors.

One simple line search method is the steepest descent method, obtained by taking d_k = -g_k as the search direction at every iteration; it has wide applications in solving large-scale minimization problems [4]. However, the steepest descent method often exhibits a zigzag phenomenon on practical problems, which makes the algorithm converge to an optimal solution very slowly, or even fail to converge [5,6]. Thus the steepest descent method is not the fastest one among the line search methods.

If d_k = -H_k g_k is the search direction at each iteration of the algorithm, where H_k is an n × n matrix approximating [∇²f(x_k)]^{-1}, then the corresponding line search method is called a Newton-like method [4-6], such as the Newton method, quasi-Newton methods, variable metric methods, etc. Many papers [7-10] have developed such methods for optimization problems. However, one drawback of Newton-like methods is that the matrix H_k must be stored and computed at each iteration, which adds to the cost of storage and computation. Accordingly, these methods are not suitable for solving large-scale optimization problems in many cases.
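As a concrete illustration of iteration (2) and the two direction choices just discussed, the following Python sketch (our own illustration, not code from the paper) runs a generic line search loop in which the direction rule can be the steepest descent direction -g_k or a Newton-like direction -H_k g_k; the backtracking step rule and its constants are placeholder assumptions.

import numpy as np

def line_search_method(f, grad, x0, direction, alpha0=1.0, eps=1e-6, max_iter=1000):
    """Generic scheme (2): x_{k+1} = x_k + alpha_k d_k with a simple backtracking step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        d = direction(x, g)                       # stage 1: choose a descent direction
        alpha = alpha0                            # stage 2: choose a step size along d
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5                          # simple Armijo-style backtracking (placeholder)
        x = x + alpha * d
    return x

steepest = lambda x, g: -g                        # d_k = -g_k
# A Newton-like choice d_k = -H_k g_k, e.g. with the exact inverse Hessian of a
# quadratic test function f(x) = 0.5 x^T A x - b^T x for a known matrix A:
# newton_like = lambda x, g: -np.linalg.solve(A, g)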


The conjugate gradient method is a powerful line search method for solving large-scale optimization problems because of its simplicity and its very low memory requirement. The search direction of the conjugate gradient method often takes the form

    d_k = -g_k + β_k d_{k-1}  if k ≥ 1,    d_k = -g_k  if k = 0,    (3)

where β_k ∈ R is a scalar which determines the different conjugate gradient methods [11-13]. The convergence behavior of the different conjugate gradient methods under various line search conditions [14] has been widely studied by many authors for many years (see [4,15]). At present, one of the most efficient formulas for β_k from the computational point of view is the following PRP formula:

    β_k^{PRP} = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2.    (4)

If x_{k+1} → x_k, it is easy to see that β_k^{PRP} → 0, which implies that the direction d_k of the PRP method automatically reduces to the steepest descent direction, as under a restart condition, when the next iterate is close to the current one. This property is very important for the efficiency of the PRP conjugate gradient method (see [4,15], etc.). Concerning the convergence of the PRP conjugate gradient method, Polak and Ribière [16] proved that the PRP method with the exact line search is globally convergent when the objective function is convex, while Powell [17] gave a counterexample showing that there exist nonconvex functions on which the PRP method does not converge globally even if the exact line search is used.
dition sults and one conclusion are presented in Section 4 and in
g kT d k  c || g k ||2 , for all k  0 and some constant Section 5, respectively.
c0 (5) 2. The Algorithms
is very important to insure the global convergence of the Besides the inexact line search techniques WWP and
algorithm by nonlinear conjugate gradient method, and it SWP, there exist other line search rules which are often
may be crucial for conjugate gradient methods [14]. It has used to analyze the convergence of the line search
been showed that the PRP method with the following method:
strong Wolfe-Powell (SWP) line search rules which is to 1) The exact minimization rule. The step size  k is
find the step size  k satisfying
chosen such that
f ( xk   k d k )  f k   1 k g kT d k , (6) f ( xk   k d k )  min f ( xk   d k ) . (10)
 0
and
2) Goldstein rule. The step size  k is chosen to satisfy
| g ( xk   k d k )T d k |  2 | g kT d k | (7)
(6) and
did not ensure the condition (5) at each iteration, where
1 f ( xk   k d k )  f k   2 k g kT d k . (11)
0   1  ,  1   2  1 . Then Grippo and Lucidi [18]
2 Now we give our algorithm as follows.
presented a new line search rule which can ensure the 1) Algorithm 1 (New Algorithm)
sufficient descent condition and established the conver- Step 0: Choose an initial point x0  R n , and constants
gence of the PRP method with their line search technique.
1
Powell [17] suggested that  k should not be less than 0    1 , 0  1  ,  1   2  1 . Set d 0   g 0
2
zero. Considering this idea, Gilbert and Nocedal [14]
 f ( x0 ) , k : 0.
proved that the modified PRP method  k  max{0,  kPRP }
Step 1: If || g k ||  , then stop; Otherwise go to step 2.
is globally convergent under the sufficient descent as-
sumption condition and the following weak Wolfe-Powell Step 2: Compute steplength  k by one line search
(WWP) line search technique: find the steplength  k technique, and let xk 1  xk   k d k .
such that (6) and Step 3: If || g k 1 ||  , then stop; Otherwise go to step 4.
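As an illustration of how the rules above are used in practice, here is a minimal Python sketch (our own, not code from the paper) that tests the Goldstein conditions (6) and (11) and the weak Wolfe-Powell conditions (6) and (8) for a trial step, together with a simple bracketing search; the function names and the bisection strategy are assumptions.

import numpy as np

def goldstein_ok(f, g, x, d, alpha, delta1=0.1, delta2=0.9):
    """Goldstein test: condition (6) together with the lower bound (11)."""
    fx, gxd = f(x), float(g(x) @ d)
    fnew = f(x + alpha * d)
    return fnew <= fx + delta1 * alpha * gxd and fnew >= fx + delta2 * alpha * gxd

def wwp_ok(f, g, x, d, alpha, delta1=0.1, delta2=0.9):
    """Weak Wolfe-Powell test: condition (6) and the curvature condition (8)."""
    fx, gxd = f(x), float(g(x) @ d)
    cond6 = f(x + alpha * d) <= fx + delta1 * alpha * gxd
    cond8 = float(g(x + alpha * d) @ d) >= delta2 * gxd
    return cond6, cond8

def wwp_line_search(f, g, x, d, alpha0=1.0, delta1=0.1, delta2=0.9, max_trials=40):
    """Bisection-style search for a step satisfying (6) and (8); illustrative only."""
    lo, hi, alpha = 0.0, np.inf, alpha0
    for _ in range(max_trials):
        cond6, cond8 = wwp_ok(f, g, x, d, alpha, delta1, delta2)
        if cond6 and cond8:
            return alpha
        if not cond6:                  # step too long: shrink the bracket from above
            hi = alpha
        else:                          # (6) holds but (8) fails: step too short
            lo = alpha
        alpha = 0.5 * (lo + hi) if np.isfinite(hi) else 2.0 * alpha
    return alpha  # accept the last trial after 40 attempts, mirroring the cap used in Section 4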


Now we give our algorithm as follows.

1) Algorithm 1 (New Algorithm)
Step 0: Choose an initial point x_0 ∈ R^n and constants 0 < ε < 1, 0 < δ_1 < 1/2, δ_1 < δ_2 < 1. Set d_0 = -g_0 = -∇f(x_0), k := 0.
Step 1: If ||g_k|| ≤ ε, then stop; otherwise go to Step 2.
Step 2: Compute the steplength α_k by one line search technique, and let x_{k+1} = x_k + α_k d_k.
Step 3: If ||g_{k+1}|| ≤ ε, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction d_{k+1} by (3), where β_k is defined by (4).
Step 5: Let

    d_{k+1}^{new} = d_{k+1}^1 + min{0, -g_{k+1}^T d_{k+1}^1 / ||g_{k+1}||^2} g_{k+1} - g_{k+1},

where

    d_{k+1}^1 = (||y_k^*|| ||g_{k+1}||) / (||s_k|| ||d_{k+1}||) d_{k+1},

s_k = x_{k+1} - x_k, ||y_k^*|| = max{||s_k||, ||y_k||}, and y_k = g_{k+1} - g_k.
Step 6: Let d_{k+1} := d_{k+1}^{new}, k := k + 1, and go to Step 2.

Remark. In Step 5 of Algorithm 1 we have

    ||y_k^*|| / ||s_k|| = max{||s_k||, ||y_k||} / ||s_k|| ≥ 1,

which can increase the convergence speed of the algorithm from the computational point of view.

Here we give the normal PRP conjugate gradient algorithm and one modified PRP conjugate gradient algorithm [14] as follows.

2) Algorithm 2 (PRP Algorithm)
Step 0: Choose an initial point x_0 ∈ R^n and constants 0 < ε < 1, 0 < δ_1 < 1/2, δ_1 < δ_2 < 1. Set d_0 = -g_0 = -∇f(x_0), k := 0.
Step 1: If ||g_k|| ≤ ε, then stop; otherwise go to Step 2.
Step 2: Compute the steplength α_k by one line search technique, and let x_{k+1} = x_k + α_k d_k.
Step 3: If ||g_{k+1}|| ≤ ε, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction d_{k+1} by (3), where β_k is defined by (4).
Step 5: Let k := k + 1 and go to Step 2.

3) Algorithm 3 (PRP+ Algorithm, see [14])
Step 0: Choose an initial point x_0 ∈ R^n and constants 0 < ε < 1, 0 < δ_1 < 1/2, δ_1 < δ_2 < 1. Set d_0 = -g_0 = -∇f(x_0), k := 0.
Step 1: If ||g_k|| ≤ ε, then stop; otherwise go to Step 2.
Step 2: Compute the steplength α_k by one line search technique, and let x_{k+1} = x_k + α_k d_k.
Step 3: If ||g_{k+1}|| ≤ ε, then stop; otherwise go to Step 4.
Step 4: Calculate the search direction d_{k+1} by (3), where β_k = max{0, β_k^{PRP}}.
Step 5: Let k := k + 1 and go to Step 2.

We will concentrate on the convergence results of Algorithm 1 in the following section.

3. Convergence Analysis

The following assumptions are often needed to analyze the convergence of line search methods (see [15,26]).

Assumption A (i) f is bounded below on the level set Ω = {x ∈ R^n : f(x) ≤ f(x_0)}.
Assumption A (ii) In some neighborhood Ω_0 of Ω, f is differentiable and its gradient is Lipschitz continuous, namely, there exists a constant L > 0 such that

    ||g(x) - g(y)|| ≤ L ||x - y||,  for all x, y ∈ Ω_0.

In the following, let g_k ≠ 0 for all k, for otherwise a stationary point has been found.

Lemma 3.1 Consider Algorithm 1 and let Assumption A (ii) hold. Then (5) and (9) hold.

Proof. If k = 0, (5) and (9) hold obviously. For k ≥ 1, by Assumption A (ii) and Step 5 of Algorithm 1, we have

    ||d_{k+1}^{new}|| ≤ ||d_{k+1}^1|| + ||d_{k+1}^1|| + ||g_{k+1}|| ≤ (2 max{1, L} + 1) ||g_{k+1}||,

since ||d_{k+1}^1|| = (||y_k^*|| / ||s_k||) ||g_{k+1}|| ≤ max{1, L} ||g_{k+1}|| by the Lipschitz continuity of g, and the min-term in Step 5 contributes at most |g_{k+1}^T d_{k+1}^1| / ||g_{k+1}|| ≤ ||d_{k+1}^1||.

Now we consider the product g_{k+1}^T d_{k+1}^1 in the following two cases.

Case 1. If g_{k+1}^T d_{k+1}^1 ≤ 0, the min-term in Step 5 vanishes and we get

    g_{k+1}^T d_{k+1}^{new} = g_{k+1}^T d_{k+1}^1 - ||g_{k+1}||^2 ≤ -||g_{k+1}||^2.

Case 2. If g_{k+1}^T d_{k+1}^1 > 0, then we obtain

    g_{k+1}^T d_{k+1}^{new} = g_{k+1}^T d_{k+1}^1 - (g_{k+1}^T d_{k+1}^1 / ||g_{k+1}||^2) ||g_{k+1}||^2 - ||g_{k+1}||^2 = -||g_{k+1}||^2.

Taking any c ∈ (0,1) and c_1 = 2 max{1, L} + 1 and using Step 6 of Algorithm 1, we see that (5) and (9) hold, respectively. The proof is completed.

The above lemma shows that the search direction d_k satisfies the sufficient descent condition (5) and the condition (9) without using any line search rule.
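To make the construction in Step 5 concrete, the following Python sketch implements one pass of the direction update and the outer loop of Algorithm 1, assuming the statement of Steps 0-6 above; it is our own illustration (helper names such as new_direction are not from the paper), and the line search routine is left abstract.

import numpy as np

def new_direction(g_new, g_old, d_old, s):
    """Steps 4-5 of Algorithm 1: PRP direction (3)-(4), then the modified direction d^{new}."""
    y = g_new - g_old
    beta_prp = float(g_new @ y) / float(g_old @ g_old)            # (4)
    d_prp = -g_new + beta_prp * d_old                             # (3)
    y_star = max(np.linalg.norm(s), np.linalg.norm(y))            # ||y_k^*|| = max{||s_k||, ||y_k||}
    d1 = (y_star * np.linalg.norm(g_new)
          / (np.linalg.norm(s) * np.linalg.norm(d_prp))) * d_prp  # d_{k+1}^1
    t = min(0.0, -float(g_new @ d1) / float(g_new @ g_new))       # scalar factor in Step 5
    return d1 + t * g_new - g_new                                 # d_{k+1}^{new}

def algorithm1(f, grad, x0, line_search, eps=1e-6, max_iter=1000):
    """Outer loop of Algorithm 1; `line_search` returns a step size (e.g. a WWP search)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                                        # Step 0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:                              # Steps 1 and 3
            break
        alpha = line_search(f, grad, x, d)                        # Step 2
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) <= eps:
            x, g = x_new, g_new
            break
        d = new_direction(g_new, g, d, x_new - x)                 # Steps 4-6
        x, g = x_new, g_new
    return x

Plugging in the WWP search sketched in Section 2 for line_search gives one concrete instantiation; one can check numerically that the computed direction satisfies g_{k+1}^T d ≤ -||g_{k+1}||^2 and ||d|| ≤ (2 max{1, L} + 1) ||g_{k+1}||, matching Lemma 3.1.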


Based on Lemma 3.1 and Assumptions A (i) and (ii), we can now give the global convergence theorem of Algorithm 1.

Theorem 3.1 Let {α_k, d_k, x_{k+1}, g_{k+1}} be generated by Algorithm 1 with the exact minimization rule, the Goldstein line search rule, the SWP line search rule, or the WWP line search rule, and let Assumptions A (i) and (ii) hold. Then

    lim_{k→∞} ||g_k|| = 0    (12)

holds.

Proof. We prove the result for the exact minimization rule, the Goldstein line search rule, the SWP line search rule, and the WWP line search rule, respectively.

1) For the exact minimization rule. Let the step size α_k be the solution of (10). By the mean value theorem, g_k^T d_k < 0, and Assumption A (ii), for any

    α_k' ∈ [ |g_k^T d_k| / (5L ||d_k||^2), 2|g_k^T d_k| / (5L ||d_k||^2) ]

we have

    f(x_k + α_k d_k) - f(x_k) ≤ f(x_k + α_k' d_k) - f(x_k)
        = ∫_0^1 g(x_k + t α_k' d_k)^T (α_k' d_k) dt
        = α_k' g_k^T d_k + ∫_0^1 [g(x_k + t α_k' d_k) - g_k]^T (α_k' d_k) dt
        ≤ α_k' g_k^T d_k + (1/2) L α_k'^2 ||d_k||^2
        ≤ (|g_k^T d_k| / (5L ||d_k||^2)) g_k^T d_k + (1/2) L (4 (g_k^T d_k)^2 / (25 L^2 ||d_k||^4)) ||d_k||^2    (13)
        = -3 (g_k^T d_k)^2 / (25 L ||d_k||^2),

which, together with Assumption A (i), gives

    Σ_{k≥0} (g_k^T d_k)^2 / ||d_k||^2 < +∞.    (14)

This implies that

    lim_{k→∞} (g_k^T d_k)^2 / ||d_k||^2 = 0    (15)

holds. By Lemma 3.1, we get (12).

2) For the Goldstein rule. Let the step size α_k be the solution of (6) and (11). By (11) and the mean value theorem, we have

    α_k g(x_k + θ_k α_k d_k)^T d_k = f_{k+1} - f_k ≥ δ_2 α_k g_k^T d_k,

where θ_k ∈ (0,1); thus

    g(x_k + θ_k α_k d_k)^T d_k ≥ δ_2 g_k^T d_k.

Using Assumption A (ii) again, we get

    -(1 - δ_2) g_k^T d_k ≤ [g(x_k + θ_k α_k d_k) - g_k]^T d_k ≤ θ_k α_k L ||d_k||^2 ≤ α_k L ||d_k||^2,

so that α_k ≥ (1 - δ_2) |g_k^T d_k| / (L ||d_k||^2), which, combined with (6) and Assumption A (i), gives (14) and (15), respectively. By Lemma 3.1, (12) holds.

3) For the strong Wolfe-Powell rule. Let the step size α_k be the solution of (6) and (7). By (7), we have

    δ_2 g_k^T d_k ≤ g(x_k + α_k d_k)^T d_k ≤ -δ_2 g_k^T d_k.

Similar to the proof of the above case, we can obtain (12) immediately.

4) For the weak Wolfe-Powell rule. Let the step size α_k be the solution of (6) and (8). Similar to the proof of case 3), we can also get (12).

This concludes the proof of the theorem.

By Lemma 3.1, there exists a constant τ_0 > 0 such that

    -g_k^T d_k / (||g_k|| ||d_k||) ≥ τ_0,  for all k.    (16)

By the proof process of Theorem 3.1, we can deduce that there exists a positive number τ_1 satisfying

    f_k - f_{k+1} ≥ τ_1 (g_k^T d_k / ||d_k||)^2,  for all k.    (17)

Similar to the proof of Theorem 4.1 in [27], it is not difficult to prove the linear convergence rate of Algorithm 1. We state the theorem as follows but omit the proof.

Theorem 3.2 (see [27]) Suppose that (16) and (17) hold and that the function f is twice continuously differentiable and uniformly convex on R^n. Let {α_k, d_k, x_{k+1}, g_{k+1}} be generated by Algorithm 1 with the exact minimization rule, the Goldstein line search rule, the SWP line search rule, or the WWP line search rule. Then {x_k} converges to x* at least linearly, where x* is the unique minimal point of f(x).

4. Numerical Results

In this section, we report some numerical experiments with Algorithm 1, Algorithm 2, and Algorithm 3. We test these algorithms on some problems [28] in MATLAB with given initial points. The parameters common to these methods were set identically: δ_1 = 0.1, δ_2 = 0.9, ε = 10^{-6}. In this experiment, the following Himmelblau stop rule is used: if |f(x_k)| > e_1, let stop1 = |f(x_k) - f(x_{k+1})| / |f(x_k)|; otherwise, let stop1 = |f(x_k) - f(x_{k+1})|, where e_1 = 10^{-5}. If ||g_k|| < ε or stop1 < e_2 was satisfied, the program was stopped, where e_2 = 10^{-5}.


We also stop the program if the number of iterations exceeds one thousand. Since the line search cannot always ensure the descent condition d_k^T g_k < 0, an uphill search direction may occur in the numerical experiments; in this case, the line search rule may fail. In order to avoid this, the stepsize α_k is accepted if the number of trial steps in the line search exceeds forty.

The detailed numerical results are listed on the web site http://210.36.18.9:8018/publication.asp?id=34402
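As an illustration, here is a minimal Python sketch of the stopping test just described (our own helper, not the authors' code); the values of e1, e2, eps, and the iteration cap follow the settings given above.

def should_stop(f_k, f_k1, gnorm_k, k, e1=1e-5, e2=1e-5, eps=1e-6, max_iter=1000):
    """Himmelblau-style stop rule plus the gradient-norm and iteration-count tests."""
    if abs(f_k) > e1:
        stop1 = abs(f_k - f_k1) / abs(f_k)   # relative decrease of f
    else:
        stop1 = abs(f_k - f_k1)              # absolute decrease of f
    return gnorm_k < eps or stop1 < e2 or k >= max_iter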
Dolan and Moré [29] gave a new tool to analyze the efficiency of algorithms. They introduced the notion of a performance profile as a means to evaluate and compare the performance of a set of solvers S on a test set P. Assuming that there are n_s solvers and n_p problems, for each problem p and solver s they define

    t_{p,s} = computing time (the number of function evaluations, or another cost measure) required to solve problem p by solver s.

Requiring a baseline for comparisons, they compare the performance of solver s on problem p with the best performance by any solver on this problem; that is, they use the performance ratio

    r_{p,s} = t_{p,s} / min{t_{p,s} : s ∈ S}.

Suppose that a parameter r_M ≥ r_{p,s} for all p, s is chosen, and that r_{p,s} = r_M if and only if solver s does not solve problem p.

The performance of solver s on any given problem may be of interest, but we would like an overall assessment of the performance of the solver, so they define

    ρ_s(t) = (1/n_p) size{p ∈ P : r_{p,s} ≤ t}.

Thus ρ_s(t) is the probability for solver s ∈ S that its performance ratio r_{p,s} is within a factor t ∈ R of the best possible ratio. The function ρ_s is the (cumulative) distribution function for the performance ratio. The performance profile ρ_s : R → [0, 1] of a solver is a nondecreasing, piecewise constant function, continuous from the right at each breakpoint. The value of ρ_s(1) is the probability that the solver wins over the rest of the solvers.

According to the above rules, we know that a solver whose performance profile plot lies on the top right wins over the rest of the solvers.
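The following Python sketch (an illustration under the definitions above, not code from the paper) computes the performance ratios r_{p,s} and the profile values ρ_s(t) from a table of per-problem costs t_{p,s}; the cost values in the example are hypothetical.

import numpy as np

def performance_profiles(T, ts):
    """T is an (n_p, n_s) array of costs t_{p,s}, with np.inf where a solver fails.
    Returns one row of rho_s(t) values per t in `ts`."""
    best = T.min(axis=1, keepdims=True)                 # best cost on each problem
    R = T / best                                        # performance ratios r_{p,s}
    return [[float(np.mean(R[:, s] <= t)) for s in range(T.shape[1])] for t in ts]

# Hypothetical costs for three solvers (e.g. NA, PRP, PRP+) on four problems:
T = np.array([[100.0, 120.0, 110.0],
              [ 80.0, np.inf,  95.0],
              [ 60.0,  75.0,  60.0],
              [200.0, 180.0, 220.0]])
print(performance_profiles(T, ts=[1.0, 1.5, 2.0]))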
Figure 1. Performance profiles (NT) of the methods with the Goldstein rule
Figure 2. Performance profiles (NT) of the methods with the strong Wolfe-Powell rule
Figure 3. Performance profiles (NT) of the methods with the weak Wolfe-Powell rule

In Figures 1-3, NA denotes Algorithm 1, PRP denotes Algorithm 2, and PRP+ denotes Algorithm 3. Figures 1-3 show the performance of these methods measured by NT = NF + m·NG, where NF and NG denote the number of function evaluations and gradient evaluations, respectively, and m is an integer. According to the results on automatic differentiation [30], the value of m can be set to m = 5; that is to say, one gradient evaluation is equivalent to m function evaluations if automatic differentiation is used.
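For instance, the NT cost fed into the performance-profile helper sketched above could be assembled from per-solver evaluation counts as follows (the counts are made-up numbers, not data from the paper).

import numpy as np

# Hypothetical evaluation counts for three solvers (NA, PRP, PRP+) on one problem.
nf = np.array([120, 150, 140])   # NF: function evaluations (made-up)
ng = np.array([ 40,  55,  50])   # NG: gradient evaluations (made-up)
nt = nf + 5 * ng                 # NT = NF + m*NG with m = 5
print(nt)                        # cost t_{p,s} used in the performance profiles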


From these three figures it is clear that the given method has the most wins, i.e., the highest probability of being the optimal solver.

In summary, the presented numerical results reveal that the new method, compared with the normal PRP method and the modified PRP method [14], has potential advantages.

5. Conclusions

This paper gives a new line search method for unconstrained optimization. The global and R-linear convergence are established under weaker assumptions on the search direction d_k. In particular, the direction d_k satisfies the sufficient descent condition (5) and the condition (9) without carrying out any line search technique, whereas some papers [14,27,30] obtain these two conditions only by assumption. The comparison of the numerical results shows that the new search direction of the new algorithm is a good search direction at every iteration.

REFERENCES

[1] G. Yuan and X. Lu, “A New Line Search Method with Trust Region for Unconstrained Optimization,” Communications on Applied Nonlinear Analysis, Vol. 15, No. 1, 2008, pp. 35-49.
[2] G. Yuan, X. Lu, and Z. Wei, “New Two-Point Stepsize Gradient Methods for Solving Unconstrained Optimization Problems,” Natural Science Journal of Xiangtan University, Vol. 29, No. 1, 2007, pp. 13-15.
[3] G. Yuan and Z. Wei, “New Line Search Methods for Unconstrained Optimization,” Journal of the Korean Statistical Society, Vol. 38, No. 1, 2009, pp. 29-39.
[4] Y. Yuan and W. Sun, “Theory and Methods of Optimization,” Science Press of China, Beijing, 1999.
[5] D. G. Luenberger, “Linear and Nonlinear Programming,” 2nd Edition, Addison-Wesley, Reading, MA, 1989.
[6] J. Nocedal and S. J. Wright, “Numerical Optimization,” Springer, Berlin, Heidelberg, New York, 1999.
[7] Z. Wei, G. Li, and L. Qi, “New Quasi-Newton Methods for Unconstrained Optimization Problems,” Applied Mathematics and Computation, Vol. 175, No. 1, 2006, pp. 1156-1188.
[8] Z. Wei, G. Yu, G. Yuan, and Z. Lian, “The Superlinear Convergence of a Modified BFGS-type Method for Unconstrained Optimization,” Computational Optimization and Applications, Vol. 29, No. 3, 2004, pp. 315-332.
[9] G. Yuan and Z. Wei, “The Superlinear Convergence Analysis of a Nonmonotone BFGS Algorithm on Convex Objective Functions,” Acta Mathematica Sinica, English Series, Vol. 24, No. 1, 2008, pp. 35-42.
[10] G. Yuan and Z. Wei, “Convergence Analysis of a Modified BFGS Method on Convex Minimizations,” Computational Optimization and Applications, 2008.
[11] Y. Dai and Y. Yuan, “A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property,” SIAM Journal on Optimization, Vol. 10, No. 1, 2000, pp. 177-182.
[12] Z. Wei, G. Li, and L. Qi, “New Nonlinear Conjugate Gradient Formulas for Large-Scale Unconstrained Optimization Problems,” Applied Mathematics and Computation, Vol. 179, No. 2, 2006, pp. 407-430.
[13] G. Yuan and X. Lu, “A Modified PRP Conjugate Gradient Method,” Annals of Operations Research, Vol. 166, No. 1, 2009, pp. 73-90.
[14] J. C. Gilbert and J. Nocedal, “Global Convergence Properties of Conjugate Gradient Methods for Optimization,” SIAM Journal on Optimization, Vol. 2, No. 1, 1992, pp. 21-42.
[15] Y. Dai and Y. Yuan, “Nonlinear Conjugate Gradient Methods,” Shanghai Science and Technology Press, 2000.
[16] E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue Française d'Informatique et de Recherche Opérationnelle, 3e Année, Vol. 16, 1969, pp. 35-43.
[17] M. J. D. Powell, “Nonconvex Minimization Calculations and the Conjugate Gradient Method,” Lecture Notes in Mathematics, Vol. 1066, Springer-Verlag, Berlin, 1984, pp. 122-141.
[18] L. Grippo and S. Lucidi, “A Globally Convergent Version of the Polak-Ribière Gradient Method,” Mathematical Programming, Vol. 78, No. 3, 1997, pp. 375-391.
[19] W. W. Hager and H. Zhang, “A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search,” SIAM Journal on Optimization, Vol. 16, No. 1, 2005, pp. 170-192.
[20] Z. Wei, S. Yao, and L. Liu, “The Convergence Properties of Some New Conjugate Gradient Methods,” Applied Mathematics and Computation, Vol. 183, No. 2, 2006, pp. 1341-1350.
[21] G. H. Yu, “Nonlinear Self-Scaling Conjugate Gradient Methods for Large-Scale Optimization Problems,” Doctoral Thesis, Sun Yat-Sen University, 2007.
[22] G. Yuan, “Modified Nonlinear Conjugate Gradient Methods with Sufficient Descent Property for Large-Scale Optimization Problems,” Optimization Letters, Vol. 3, No. 1, 2009, pp. 11-21.
[23] G. Yuan, “A Conjugate Gradient Method for Unconstrained Optimization Problems,” International Journal of Mathematics and Mathematical Sciences, Vol. 2009, 2009, pp. 1-14.
[24] G. Yuan, X. Lu, and Z. Wei, “A Conjugate Gradient Method with Descent Direction for Unconstrained Optimization,” Journal of Computational and Applied Mathematics, Vol. 233, No. 2, 2009, pp. 519-530.
[25] L. Zhang, W. Zhou, and D. Li, “A Descent Modified Polak-Ribière-Polyak Conjugate Gradient Method and its Global Convergence,” IMA Journal of Numerical Analysis, Vol. 26, No. 4, 2006, pp. 629-649.
[26] Y. Liu and C. Storey, “Efficient Generalized Conjugate Gradient Algorithms, Part 1: Theory,” Journal of Optimization Theory and Applications, Vol. 69, No. 1, 1992, pp. 17-41.


[27] Z. J. Shi, “Convergence of Line Search Methods for Unconstrained Optimization,” Applied Mathematics and Computation, Vol. 157, No. 2, 2004, pp. 393-405.
[28] J. J. Moré, B. S. Garbow, and K. E. Hillstrom, “Testing Unconstrained Optimization Software,” ACM Transactions on Mathematical Software, Vol. 7, No. 1, 1981, pp. 17-41.
[29] E. D. Dolan and J. J. Moré, “Benchmarking Optimization Software with Performance Profiles,” Mathematical Programming, Vol. 91, No. 2, 2002, pp. 201-213.
[30] Y. Dai and Q. Ni, “Testing Different Conjugate Gradient Methods for Large-Scale Unconstrained Optimization,” Journal of Computational Mathematics, Vol. 21, No. 3, 2003, pp. 311-320.
