Ioannis G. Tsoulos, Solving constrained optimization problems using a novel genetic algorithm, Applied Mathematics and Computation 208 (2009) 273–283. doi:10.1016/j.amc.2008.12.002
Keywords: Constrained optimization; Evolutionary algorithms; Genetic algorithms; Genetic operations; Stopping rules

Abstract. A novel genetic algorithm is described in this paper for the problem of constrained optimization. The algorithm incorporates modified genetic operators that preserve the feasibility of the trial solutions encoded in the chromosomes, the stochastic application of a local search procedure and a stopping rule which is based on asymptotic considerations. The algorithm is tested on a series of well-known test problems and a comparison is made against the algorithms C-SOMGA and DONLP2.

© 2008 Elsevier Inc. All rights reserved.
1. Introduction
In this article a novel genetic algorithm method for the constrained optimization problem is presented. The proposed
method utilizes:
1. Modified versions of the genetic operators of crossover and mutation. These new versions preserve the feasibility of the
trial solutions of the constrained problem that are encoded in the chromosomes.
2. Application of a local search procedure to chromosomes that are selected randomly from the genetic population.
3. A stopping rule based on asymptotic considerations.
The rest of this article is organized as follows: in Section 2 the steps of the proposed method are described in detail, in Section 3 the test functions as well as the experimental results are listed and finally in Section 4 some conclusions are derived and some thoughts for future research are given.
2. Method description
The steps of the proposed method are listed in Algorithm 1. The algorithm initializes a set of K chromosomes inside the bounds [a, b] of the objective function and iteratively applies the modified genetic operators to the chromosomes
until some stopping criteria are met. In the following the critical steps of the proposed algorithm are explained in
detail.
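The overall loop just described can be sketched as follows. This is a minimal illustration rather than the paper's Algorithm 1: the elitist selection scheme and the helper callables (sample_feasible, crossover, mutate) are assumptions introduced for the sketch.

```python
import random

def genetic_algorithm(fitness, sample_feasible, crossover, mutate,
                      K=200, iter_max=200):
    """Sketch of the main loop: initialize K chromosomes inside the
    bounds, then repeatedly apply selection, crossover and mutation
    until a stopping criterion is met (here a fixed generation budget)."""
    population = [sample_feasible() for _ in range(K)]
    best = min(population, key=fitness)
    for _ in range(iter_max):
        population.sort(key=fitness)           # elitist selection (assumed)
        parents = population[:K // 2]
        children = []
        while len(children) < K - len(parents):
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b)))
        population = parents + children
        best = min(population + [best], key=fitness)
    return best
```

On a trivial one-dimensional problem such as minimizing (x - 3)^2, this loop quickly drives the best chromosome toward x = 3.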
The proposed algorithm uses a penalization strategy in order to transform the constrained optimization problem into an unconstrained one. When a trial solution encoded in a chromosome is infeasible, its fitness value is penalized according to the constraint violations. The steps for the evaluation of the fitness of any given chromosome x are the following:
3. Calculate the violation of the inequality constraints $g_i(x)$, $i = 1, \ldots, m$, of Eq. (1) as
$$v_3(x) = \sum_{i=1}^{m} G^2(g_i(x)), \qquad (4)$$
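A penalized fitness evaluation of this kind can be sketched as below. The violation measure G and the way the penalty is combined with the objective are assumptions for the sketch (a quadratic penalty with weight λ, written lam), since only Eq. (4) survives in this excerpt.

```python
def violation_inequalities(x, inequality_constraints):
    """Sketch of Eq. (4): sum of squared violations of the inequality
    constraints g_i(x) >= 0.  The choice G(y) = max(0, -y) is an
    assumption; the paper only names the function G."""
    return sum(max(0.0, -g(x)) ** 2 for g in inequality_constraints)

def penalized_fitness(x, objective, inequality_constraints, lam=1e3):
    """Penalized fitness: objective plus lam times the total violation.
    This additive combination is a common penalty form and is assumed
    here, not quoted from the paper."""
    return objective(x) + lam * violation_inequalities(x, inequality_constraints)
```

For a feasible chromosome the violation term vanishes and the fitness reduces to the objective value itself.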
In every generation, the genetic operations of crossover and mutation take place. The crossover process is presented in Algorithm 2 and the mutation procedure is outlined in Algorithm 3. These are the typical genetic operations used in most real-coded genetic algorithms, but the feasibility of the chromosomes is preserved. Also, these operations reject chromosomes that are created outside the bound constraints. The auxiliary procedures perturb(x) and perturbFeasible(x) are listed in Algorithms 4 and 5, respectively.
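A bound-respecting recombination step of the kind described above can be sketched as follows. The BLX-style arithmetic recombination and the retry limit (MAXTRIES appears in Table 1) are assumptions for illustration; the paper's actual Algorithms 2–5 are not reproduced here.

```python
import random

def crossover_in_bounds(parent1, parent2, lo, hi, max_tries=10):
    """Recombine two in-bounds parents; offspring falling outside
    [lo, hi] are rejected and the crossover is retried, mirroring the
    paper's rejection of chromosomes created outside the bound
    constraints.  The extrapolating weight a in (-0.5, 1.5) is an
    assumption of this sketch."""
    for _ in range(max_tries):
        a = random.uniform(-0.5, 1.5)
        child = [a * p + (1 - a) * q for p, q in zip(parent1, parent2)]
        if all(l <= c <= h for c, l, h in zip(child, lo, hi)):
            return child
    return list(parent1)  # fall back to a parent, which is in bounds
```

Because the fallback returns a copy of a parent, the routine always yields a chromosome inside the bound constraints.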
In every generation, a local search procedure is applied to some randomly selected chromosomes. The purpose of this application is to improve the fitness of the chromosomes and to speed up the convergence of the algorithm to the global minimum. The steps of this procedure are the following:
1. For $i = 1, \ldots, K$ do
   (a) Let $r$ be a random number in $(0, 1)$.
   (b) If $r \le p_l$ then
       (i) Apply a local search procedure to the chromosome $x_i$.
   (c) endif
2. End for
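The steps above can be sketched as a short routine. The local_search callable stands in for Powell's COBYLA [35], and keeping a refined point only when it improves the fitness is an assumption of this sketch.

```python
import random

def apply_local_search(population, fitness, local_search, p_l=0.01):
    """Steps 1(a)-(c) above: for each chromosome draw r in (0, 1) and,
    if r <= p_l, refine the chromosome with a local search procedure.
    The paper uses Powell's COBYLA [35]; here local_search is any
    callable returning a (hopefully improved) point."""
    for i, x in enumerate(population):
        if random.random() <= p_l:
            candidate = local_search(x)
            if fitness(candidate) < fitness(x):   # keep only improvements
                population[i] = candidate
    return population
```

With the default p_l = 0.01 from Table 1, only about 1% of the chromosomes are refined per generation, keeping the extra function evaluations modest.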
The local search procedure used is the COBYLA algorithm due to Powell [35].
In this paper a stopping rule based on asymptotic considerations is used. The algorithm records in every generation the variance of the best discovered value. If there is no improvement for a number of generations, we can assume that the algorithm has managed to discover the global minimum and hence it should terminate. This stopping rule was proposed recently in the paper of Tsoulos [36] and proceeds as follows:
1. At every generation iter, the variance $\sigma(\mathrm{iter})$ of the best discovered value is calculated.
2. The genetic algorithm terminates when
$$\sigma(\mathrm{iter}) \le \frac{\sigma(\mathrm{last})}{2} \quad \text{OR} \quad \mathrm{iter} > \mathrm{ITERMAX}, \qquad (7)$$
where last is the generation where the current best value was discovered for the first time.
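The rule of Eq. (7) reduces to a one-line check; the function below is a direct transcription, with the default ITERMAX taken from Table 1.

```python
def should_terminate(sigma_iter, sigma_last, iteration, iter_max=200):
    """Stopping rule of Eq. (7): terminate when the variance of the
    best discovered value at the current generation has dropped to half
    of its value at generation `last` (where the current best value was
    first found), or when the generation budget is exhausted."""
    return sigma_iter <= sigma_last / 2 or iteration > iter_max
```

The halving criterion means the run continues only while the best value is still improving noticeably relative to the generation where it was first found.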
3. Experiments
The proposed method was tested on a series of well-known test problems from the relevant literature. In the following,
these test problems are described followed by the experimental results of the proposed method.
Levy
This problem is described in [37] and it is given by
$$\min_x f(x) = x_1 x_2,$$
Salkin
This problem is described in [38] and it is given by
Himmelblau
This problem is described in [39] and it is given by
$$\min_x f(x) = 4.3x_1 + 31.8x_2 + 63.3x_3 + 15.8x_4 + 68.5x_5 + 4.7x_6,$$
Hess
This problem is described in [40] and it is given by
$$\max_x f(x) = 25(x_1 - 2)^2 + (x_2 - 2)^2 + (x_3 - 1)^2 + (x_4 - 4)^2 + (x_5 - 1)^2 + (x_6 - 4)^2,$$
with $0 \le x_1 \le 5$, $0 \le x_2 \le 1$, $1 \le x_3 \le 5$, $0 \le x_4 \le 6$, $0 \le x_5 \le 5$, $0 \le x_6 \le 10$, subject to
$$g_1(x) = x_1 + x_2 - 2 \ge 0,$$
$$g_2(x) = -x_1 + x_2 + 6 \ge 0,$$
$$g_3(x) = x_1 - x_2 + 2 \ge 0,$$
$$g_4(x) = -x_1 + 3x_2 + 2 \ge 0,$$
$$g_5(x) = (x_3 - 3)^2 + x_4 - 4 \ge 0,$$
$$g_6(x) = (x_5 - 3)^2 + x_6 - 4 \ge 0.$$
The value of the global maximum is fmax = 310.
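As a sanity check on the Hess problem, the objective and constraints can be coded directly. The inequality signs and the candidate maximizer used below are reconstructions consistent with the reported fmax = 310, not values quoted from the paper.

```python
def hess_objective(x):
    """Objective of the Hess problem (to be maximized)."""
    x1, x2, x3, x4, x5, x6 = x
    return (25 * (x1 - 2) ** 2 + (x2 - 2) ** 2 + (x3 - 1) ** 2
            + (x4 - 4) ** 2 + (x5 - 1) ** 2 + (x6 - 4) ** 2)

def hess_feasible(x):
    """Check the six inequality constraints g_i(x) >= 0 of the Hess
    problem (signs reconstructed from the garbled source)."""
    x1, x2, x3, x4, x5, x6 = x
    g = [x1 + x2 - 2,
         -x1 + x2 + 6,
         x1 - x2 + 2,
         -x1 + 3 * x2 + 2,
         (x3 - 3) ** 2 + x4 - 4,
         (x5 - 3) ** 2 + x6 - 4]
    return all(gi >= 0 for gi in g)
```

Evaluating hess_objective at the (assumed) point (5, 1, 5, 0, 5, 10) gives 310, and the point satisfies all six constraints, matching the reported maximum.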
SCHITTKOWSKI
The problem is described in [41] and it is given by
$$\min_x f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2,$$
CHOOTINAN1
This problem is described in [42] and it is given by
$$\min_x f(x) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i,$$
with $0 \le x_i \le 1$ for $i = 1, \ldots, 9, 13$ and $0 \le x_i \le 100$ for $i = 10, 11, 12$, with the following constraints:
CHOOTINAN2
The source of this problem is [42] and it is given by
$$\max_x f(x) = \frac{\sin^3(2\pi x_1)\sin(2\pi x_2)}{x_1^3(x_1 + x_2)},$$
with $x \in [0, 10]^2$. The constraints are given by
$$g_1(x) = -x_1^2 + x_2 - 1 \ge 0,$$
$$g_2(x) = -1 + x_1 - (x_2 - 4)^2 \ge 0.$$
The value of the global maximum is fmax = 0.095.
CHOOTINAN3
This problem is described in [42] and it is given by
$$h_1(x) = x_2 - x_1^2 = 0.$$
The value of the global minimum is fmin = 0.75.
CHOOTINAN4
The source of this problem is [42] and it is given by
$$\max_x f(x) = (\sqrt{n})^n \prod_{i=1}^{n} x_i,$$
with $x \in [0, 1]^n$. The constraints are given by
$$h_1(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0.$$
In this paper we used the values n = 2, 10, 20, 30, 40, with names CHOOTINAN4(2), CHOOTINAN4(10), CHOOTINAN4(20), CHOOTINAN4(30) and CHOOTINAN4(40).
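The CHOOTINAN4 family is easy to check numerically. The maximizer $x_i = 1/\sqrt{n}$, at which the constraint holds and f(x) = 1, is a standard property of this benchmark and is an assumption here (it is not stated in the excerpt).

```python
import math

def chootinan4_objective(x):
    """f(x) = (sqrt(n))^n * prod(x_i), to be maximized."""
    n = len(x)
    prod = 1.0
    for xi in x:
        prod *= xi
    return math.sqrt(n) ** n * prod

def chootinan4_constraint(x):
    """h1(x) = sum(x_i^2) - 1, which must equal zero."""
    return sum(xi * xi for xi in x) - 1.0
```

At x_i = 1/sqrt(n) the product equals (1/sqrt(n))^n, which the factor (sqrt(n))^n cancels exactly, so the objective evaluates to 1 while the equality constraint is satisfied.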
CHOOTINAN5
This problem is described in [42] and it is given by
$$\max_x f(x) = \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i\,x_i^2}}.$$
In this paper we used the values n = 10, 20, 30, 40, with names CHOOTINAN5(10), CHOOTINAN5(20), CHOOTINAN5(30) and CHOOTINAN5(40).
LIN1
This problem is described in [43] and it is given by
$$g_1(x) = x_1 + x_2^2 \ge 0,$$
$$g_2(x) = x_1^2 + x_2 \ge 0.$$
The value of the global minimum is fmin = 0.25000.
LIN2
This problem is described in [43] and it is given by
$$\min_x f(x) = x_1 x_2,$$
with $0 \le x_1 \le 3$, $0 \le x_2 \le 4$, subject to
G15
This problem is described in [44] and it is given by
LIN3
This problem is described in [43] and it is given by
BEAM
The welded beam design problem is described in [45]. The quantities appearing in its formulation are
$$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2},$$
$$\tau' = \frac{P}{\sqrt{2}\,x_1 x_2},$$
$$\tau'' = \frac{QR}{J},$$
$$Q = P\left(L + \frac{x_2}{2}\right),$$
$$R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},$$
$$J = 2\left\{\sqrt{2}\,x_1 x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\},$$
$$\sigma(x) = \frac{6PL}{x_4 x_3^2},$$
$$\delta(x) = \frac{4PL^3}{E x_3^3 x_4},$$
$$P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),$$
$$P = 600, \quad L = 14, \quad E = 30 \times 10^6, \quad G = 12 \times 10^6.$$
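The auxiliary quantities above translate directly into code. The constants follow the values listed in the text (P = 600), and the design vector used for illustration is an arbitrary assumption, not a solution from the paper.

```python
import math

# Constants of the welded beam problem, as listed in the text
P, L, E, G = 600.0, 14.0, 30e6, 12e6

def beam_quantities(x):
    """Evaluate the auxiliary quantities tau, sigma, delta and P_c of
    the welded beam problem for a design vector x = (x1, x2, x3, x4),
    transcribed from the formulas above."""
    x1, x2, x3, x4 = x
    tau_p = P / (math.sqrt(2) * x1 * x2)
    Q = P * (L + x2 / 2)
    R = math.sqrt(x2 ** 2 / 4 + ((x1 + x3) / 2) ** 2)
    J = 2 * (math.sqrt(2) * x1 * x2 * (x2 ** 2 / 12 + ((x1 + x3) / 2) ** 2))
    tau_pp = Q * R / J
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * x2 / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (x4 * x3 ** 2)
    delta = 4 * P * L ** 3 / (E * x3 ** 3 * x4)
    p_c = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36) / L ** 2
           * (1 - x3 / (2 * L) * math.sqrt(E / (4 * G))))
    return tau, sigma, delta, p_c
```

For the illustrative (assumed) design x = (0.5, 2.0, 10.0, 0.5), the bending stress evaluates to sigma = 6·600·14/(0.5·10²) = 1008.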
The proposed method was tested on the series of test problems described previously using the parameters given in Table 1. The method was compared against C-SOMGA, described in [26], and the method DONLP2, available from the Internet, which is based on [46,47]. In Table 2 the results from the application of the proposed method to the test problems are listed. In Table 3 the reported results from the application of C-SOMGA to the test problems are given, and in Table 4 the experimental results from the application of DONLP2 are reported. The proposed method as well as DONLP2 was applied 100 times on every test problem, using a different seed for the random number generator each time. The columns in the tables have the following meaning:
Table 1
Parameters of the algorithm.

Parameter   Value
K           200
ITERMAX     200
p_s         0.1
p_m         0.05
p_l         0.01
λ           10³
MAXTRIES    10
Table 2
Experimental results with the proposed method.
Table 3
Experimental results with the method C-SOMGA.
Table 4
Experimental results with the DONLP2 method.
1. The column PROBLEM denotes the name of the test problem as given in the previous section.
2. The column BEST denotes the best value located by the proposed algorithm.
3. The column MEAN denotes the mean of the optimal objective function values, located by the proposed algorithm.
4. The column STD denotes the standard deviation of the optimal objective function values.
5. The column FEVALS denotes the average number of function evaluations spent by the proposed algorithm.
As we can see, in most cases where comparative results are available, the proposed method outperforms C-SOMGA in mean and standard deviation values. Also, the proposed method has managed to discover the global minimum for almost every objective function. On the other hand, the method DONLP2 requires fewer function evaluations than the proposed method for the five problems LEVY, CHOOTINAN3, CHOOTINAN4(2), LIN1 and LIN3, but the proposed method is better for the rest of the problems in mean and standard deviation values.

Table 5
Experiments with the parameter λ.
The choice of the parameter λ is crucial for the algorithm, because small values can lead the search outside the feasible region and produce infeasible solutions, while greater penalties can make the search for optimum solutions difficult or even impossible. In Table 5 the experiments with various values of the parameter λ for some of the test problems are listed. The numbers in the cells represent the average number of function evaluations over 100 runs, using a different seed for the random generator each time. It is obvious from the experimental results that the choice of the parameter λ has a small effect on the number of function evaluations required in order to discover the global minimum.
4. Conclusions
A novel genetic algorithm for the location of the global minimum of constrained optimization problems was presented in this paper. The method suggested (a) modified genetic operators that preserve the feasibility of the chromosomes, (b) applications of a local search procedure to randomly selected chromosomes and (c) a stopping rule based on asymptotic considerations. The method was tested on a series of well-known test functions and proved to be successful in the majority of test problems. Moreover, the proposed method can be implemented very easily and the utilization of the proposed stopping rule significantly improves the speed of the method. Future research will include the use of more efficient initialization techniques as well as the use of other local search procedures.
References
[1] O.A. Sauer, D.M. Shepard, T.R. Mackie, Application of constrained optimization to radiotherapy planning, Medical Physics 26 (1999) 2359–2366.
[2] E.G. Birgin, I. Chambouleyron, J.M. Martínez, Estimation of the optical constants and the thickness of thin films using unconstrained optimization,
Journal of Computational Physics 151 (1999) 862–880.
[3] C. Yang, J.C. Meza, L.W. Wang, A constrained optimization algorithm for total energy minimization in electronic structure calculations, Journal of
Computational Physics 217 (2006) 709–721.
[4] K.D. Lee, S. Eyi, Transonic airfoil design by constrained optimization, Journal of Aircraft 30 (1993) 805–806.
[5] F.P. Seelos, R.E. Arvidson, Bounded variable least squares – application of a constrained optimization algorithm to the analysis of TES Emissivity
Spectra, in: 34th Annual Lunar and Planetary Science Conference, March 17–21, 2003, League City, Texas (abstract no. 1817).
[6] M.J. Field, Constrained optimization of ab initio and semiempirical Hartree–Fock wave functions using direct minimization or simulated annealing,
Journal of Physical Chemistry 95 (1991) 5104–5108.
[7] G.A. Williams, J.M. Dugan, R.B. Altman, Constrained global optimization for estimating molecular structure from atomic distances, Journal of
Computational Biology 8 (2001) 523–547.
[8] J. Baker, D. Bergeron, Constrained optimization in Cartesian coordinates, Journal of Computational Chemistry 14 (1993) 1339–1346.
[9] J.E.A. Bertram, A. Ruina, Multiple walking speed–frequency relations are predicted by constrained optimization, Journal of Theoretical Biology 209
(2001) 445–453.
[10] J.E.A. Bertram, Constrained optimization in human walking: cost minimization and gait plasticity, Journal of Experimental Biology 208 (2005) 979–
991.
[11] I. Iakovidis, R.M. Gulrajani, Regularization of the inverse epicardial solution using linearly constrained optimization, in: Engineering in Medicine and
Biology Society, Proceedings of the Annual International Conference of the IEEE Publication Date: 31 October–3 November, 1991. vol. 13, pp. 698–699.
[12] D.P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.
[13] D.P. Bertsekas, A.E. Ozdaglar, Pseudonormality and a Lagrange multiplier theory for constrained optimization, Journal of Optimization Theory and
Applications 114 (2004) 287–343.
[14] P.E. Gill, W. Murray, The computation of Lagrange-multiplier estimates for constrained minimization, Mathematical Programming 17 (1979) 32–60.
[15] M.J.D. Powell, Y. Yuan, A trust region algorithm for equality constrained optimization, Mathematical Programming 49 (1991) 189–211.
[16] R.H. Byrd, R.B. Schnabel, G.A. Shultz, A trust region algorithm for nonlinearly constrained optimization, SIAM Journal on Numerical Analysis 24 (1987)
1152–1170.
[17] D.M. Gay, A trust-region approach to linearly constrained optimization, Lecture Notes in Mathematics, vol. 1066, 1984, pp. 72–105.
[18] M.Cs. Markót, J. Fernández, L.G. Casado, T. Csendes, New interval methods for constrained global optimization, Mathematical Programming 106 (2005)
287–318.
[19] K. Ichida, Constrained optimization using interval analysis, Computers and Industrial Engineering 31 (1996) 933–937.
[20] W.E. Lillo, S. Hui, S.H. Zak, Neural networks for constrained optimization problems, International Journal of Circuit Theory and Applications 21 (1993)
385–399.
[21] W.E. Lillo, M.H. Loh, S. Hui, S.H. Zak, On solving constrained optimization problems with neural networks: a penalty method approach, IEEE
Transactions on Neural Networks 4 (1993) 931–940.
[22] S. Zhang, X. Zhu, L.H. Zou, Second-order neural nets for constrained optimization, IEEE Transactions on Neural Networks 3 (1992) 1021–1024.
[23] S. Lucidi, M. Sciandrone, P. Tseng, Objective-derivative-free methods for constrained optimization, Mathematical Programming 92 (1999) 37–59.
[24] G. Liuzzi, S. Lucidi, A derivative-free algorithm for nonlinear programming, TR 17/05, Department of Computer and Systems Science, Antonio Ruberti,
University of Rome, La Sapienza, 2005.
[25] M.B. Subrahmanyam, An extension of the simplex method to constrained nonlinear optimization, Journal of Optimization Theory and Applications 62
(1989) 311–319.
[26] K. Deep, Dipti, A self-organizing migrating genetic algorithm for constrained optimization, Applied Mathematics and Computation 198 (2008) 237–
250.
[27] A. Homaifar, Constrained optimization via genetic algorithms, Simulation 62 (1994) 242–253.
[28] S. Venkatraman, G.G. Yen, A generic framework for constrained optimization using genetic algorithms, IEEE Transactions on Evolutionary Computation
9 (2005) 424–435.
[29] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1996) 1–32.
[30] V.S. Summanwar, V.K. Jayaraman, B.D. Kulkarni, H.S. Kusumakar, K. Gupta, J. Rajesh, Solution of constrained optimization problems by multi-objective
genetic algorithm, Computers and Chemical Engineering 26 (2002) 1481–1492.
[31] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2007) 1407–1422.
[32] X.H. Hu, R.C. Eberhart, Solving constrained nonlinear optimization problems with particle swarm optimization, in: N. Callaos (Ed.), Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, 2002, pp. 203–206.
[33] H. Sarimveis, A. Nikolakopoulos, A line up evolutionary algorithm for solving nonlinear constrained optimization problems, Computers and Operations
Research 32 (2005) 1499–1514.
[34] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195
(2006) 4303–4322.
[35] M.J.D. Powell, A direct search optimization method that models the objective and constraint functions by linear interpolation, Technical Report DAMTP/NA5, Cambridge, England.
[36] I.G. Tsoulos, Modifications of real code genetic algorithm for global optimization, Applied Mathematics and Computation 203 (2008) 598–607.
[37] A.V. Levy, A. Montalvo, The tunneling algorithm for global optimization of functions, SIAM Journal on Scientific and Statistical Computing 6 (1985) 15–29.
[38] H.M. Salkin, Integer Programming, Addison-Wesley Publishing Company, Amsterdam, 1975.
[39] D.M. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, New York, 1972.
[40] R. Hess, A heuristic search for estimating a global solution of non convex programming problems, Operations Research 21 (1973) 1267–1280.
[41] K. Schittkowski, More examples for mathematical programming codes, Lecture Notes in Economics and Mathematical Systems, vol. 282, 1987.
[42] P. Chootinan, A. Chen, Constraint handling in genetic algorithms using a gradient-based repair method, Computers and Operations Research 33 (2006) 2263–2281.
[43] Y.C. Lin, K.S. Hwang, F.S. Wang, Hybrid differential evolution with multiplier updating method for nonlinear constrained optimization problems, in: Proceedings of the 2002 IEEE World Congress on Computational Intelligence, IEEE Service Center, Honolulu, Hawaii, 2002, pp. 872–877.
[44] J.J. Liang, T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, C.A.C. Coello, K. Deb, Problem definitions and evaluation criteria for the CEC2006
special session on constrained real-parameter optimization. <http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-06/CEC06.htm>.
[45] C.A.C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers and Industrial Engineering 41 (2000) 113–
127.
[46] P. Spellucci, An SQP method for general nonlinear programs using only equality constrained subproblems, Mathematical Programming 82 (1998) 413–448.
[47] P. Spellucci, A new technique for inconsistent QP problems in the SQP method, Mathematical Methods of Operations Research 47 (1998) 355–400.