
Applied Mathematics and Computation 208 (2009) 273–283

Solving constrained optimization problems using a novel genetic algorithm
Ioannis G. Tsoulos
Department of Communications, Informatics and Management, Technological Educational Institute of Epirus, Greece

Keywords: Constrained optimization; Evolutionary algorithms; Genetic algorithms; Genetic operations; Stopping rules

Abstract: A novel genetic algorithm is described in this paper for the problem of constrained optimization. The algorithm incorporates modified genetic operators that preserve the feasibility of the trial solutions encoded in the chromosomes, the stochastic application of a local search procedure and a stopping rule based on asymptotic considerations. The algorithm is tested on a series of well-known test problems and a comparison is made against the algorithms C-SOMGA and DONLP2.

© 2008 Elsevier Inc. All rights reserved.

1. Introduction

In general, the constrained optimization problem can be defined as

$\min_x f(x)$

subject to $g_i(x) \le 0,\ i = 1, \ldots, m$, $\qquad(1)$

$h_j(x) = 0,\ j = 1, \ldots, p$,

where $x_i \in [a_i, b_i],\ i = 1, \ldots, n$. The above problem is applicable in many scientific and practical fields such as physics [1–3],
aircraft design [4], astronomy [5], chemistry [6–8], biology [9–11], etc. Many methods have been proposed to tackle this problem, such as Lagrange multiplier methods [12–14], trust-region methods [15–17], interval methods [18,19], methods that utilize artificial neural networks [20–22] and derivative-free optimization methods [23–25]. Recently, population-based stochastic search techniques such as genetic algorithms [26–30], particle swarm optimization [31,32] and differential evolution [33,34] have been used to solve constrained optimization problems. The main advantages of population-based methods are:

- They do not require the objective function to be differentiable or continuous.
- They do not require the evaluation of gradients.
- They can escape from local minima.

In this article, a novel genetic algorithm for the constrained optimization problem is presented. The proposed
method utilizes:

1. Modified versions of the genetic operators of crossover and mutation. These new versions preserve the feasibility of the
trial solutions of the constrained problem that are encoded in the chromosomes.

E-mail address: [email protected]

doi:10.1016/j.amc.2008.12.002

2. Application of a local search procedure to chromosomes that are selected randomly from the genetic population.
3. A stopping rule based on asymptotic considerations.

The rest of this article is organized as follows: in Section 2 the steps of the proposed method are described in detail, in Section 3 the test functions as well as the experimental results are listed, and finally in Section 4 some conclusions are derived and some thoughts for future research are given.

2. Method description

The steps of the proposed method are listed in Algorithm 1. The algorithm initializes a set of $K$ chromosomes inside the bounds $[a, b]$ of the objective function and iteratively applies the modified genetic operators to the chromosomes until some stopping criteria are met. In the following, the critical steps of the proposed algorithm are explained in detail.

Algorithm 1. The main steps of the proposed algorithm


1. Set the parameters of the algorithm:
   (a) The number of chromosomes $K$.
   (b) The maximum number of generations ITERMAX.
   (c) The selection rate $p_s$.
   (d) The mutation rate $p_m$.
   (e) The local search rate $p_l$.
2. Set iter = 0.
3. Initialize the $K$ chromosomes and store them in the set $X = \{x_1, x_2, \ldots, x_K\}$. Every chromosome $x_i$ is initialized randomly inside the feasible region $[a, b]$ of Eq. (1).
4. Evaluate the fitness of every chromosome. This evaluation is performed using a penalty technique, as described in Section 2.1.
5. Apply the modified genetic operations of crossover and mutation to the population. These operations are described in Section 2.2.
6. Select randomly some of the chromosomes from the population and apply the local search procedure to them, as described in Section 2.3.
7. Set iter = iter + 1.
8. If the termination criteria hold, terminate; otherwise go to step 4. The details of the termination criteria are given in Section 2.4.

2.1. Fitness evaluation

The proposed algorithm uses a penalization strategy to transform the constrained optimization problem into an unconstrained one. When the trial solution encoded in a chromosome is infeasible, its fitness value is penalized according to the constraint violations. The steps for evaluating the fitness of any given chromosome $x$ are the following:

1. Calculate the function value at the point $x$:

   $v_1(x) = f(x). \qquad(2)$

2. Calculate the violation of the equality constraints $h_i(x),\ i = 1, \ldots, p$ of Eq. (1) as

   $v_2(x) = \sum_{i=1}^{p} h_i^2(x). \qquad(3)$

3. Calculate the violation of the inequality constraints $g_i(x),\ i = 1, \ldots, m$ of Eq. (1) as

   $v_3(x) = \sum_{i=1}^{m} G^2(g_i(x)), \qquad(4)$

   where the function $G(x)$ is defined as

   $G(x) = \begin{cases} 0, & x \le 0, \\ x, & x > 0. \end{cases} \qquad(5)$

4. Compute the fitness value $v(x)$ using the formula

   $v(x) = v_1(x) + k\,v_2(x) + k\,v_3(x), \qquad(6)$

   where $k > 0$.
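For illustration, a minimal Python sketch of Eqs. (2)–(6) is given below; the function and parameter names are illustrative and not taken from the paper, and the constraints are supplied as plain callables.

```python
def penalized_fitness(x, f, eq_constraints, ineq_constraints, k=1000.0):
    """Penalty fitness of Eqs. (2)-(6): v(x) = f(x) + k*v2(x) + k*v3(x),
    where v2 sums squared equality violations and v3 sums squared
    positive parts of the inequality constraints g_i(x) <= 0."""
    v1 = f(x)                                                 # Eq. (2)
    v2 = sum(h(x) ** 2 for h in eq_constraints)               # Eq. (3)
    v3 = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)   # Eqs. (4)-(5)
    return v1 + k * (v2 + v3)                                 # Eq. (6)

# Example: CHOOTINAN3 from Section 3.1, min x1^2 + (x2-1)^2 with x2 - x1^2 = 0
value = penalized_fitness(
    [0.7, 0.5],
    f=lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2,
    eq_constraints=[lambda x: x[1] - x[0] ** 2],
    ineq_constraints=[],
)
print(value)
```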

2.2. Genetic operations

In every generation, the genetic operations of crossover and mutation take place. The crossover process is presented in Algorithm 2 and the mutation procedure is outlined in Algorithm 3. These are the typical genetic operations used in most real-coded genetic algorithms, but they preserve the feasibility of the chromosomes. In addition, these operations reject chromosomes that are created outside the bound constraints. The auxiliary procedures perturb(x) and perturbFeasible(x) are listed in Algorithms 4 and 5, respectively.

Algorithm 2. The proposed crossover procedure


1. Sort the chromosomes $x_1, x_2, \ldots, x_K$ with respect to their fitness values $f_1, f_2, \ldots, f_K$, so that the best chromosome is placed at the beginning of the population and the worst at the end.
2. Set $C = \emptyset$, where $C$ will hold the children produced by the crossover procedure.
3. For $i = 1 \ldots \frac{(1-p_s)}{2} K$ do
   (a) Select, using tournament selection, two chromosomes $p_1, p_2$ from $X$.
   (b) For $j = 1 \ldots n$ do
       i. Create a random number $a$ in the range $[-0.5, 1.5]$.
       ii. Set $c_{1j} = a \cdot p_{1j} + (1-a)\,p_{2j}$.
       iii. Set $c_{2j} = a \cdot p_{2j} + (1-a)\,p_{1j}$.
   (c) End for
   (d) $C = C \cup \{c_1\}$
   (e) $C = C \cup \{c_2\}$
4. End for
5. For $i = 1 \ldots (1-p_s) \cdot K$ do
   (a) Take the next element $c_i$ from the set of children $C$.
   (b) If ($x_{K-i+1}$ is feasible and $c_i$ is feasible) or ($x_{K-i+1}$ is not feasible and $c_i$ is not feasible), then set $x_{K-i+1} = c_i$.
6. End for
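A Python sketch of Algorithm 2 follows; the tournament size and the feasibility predicate `is_feasible` are assumptions of this sketch (rejection of out-of-bounds children can be folded into the predicate).

```python
import random

def tournament(pop, fit, size=4):
    """Tournament selection: the best of `size` random picks
    (lower penalized fitness is better)."""
    i = min(random.sample(range(len(pop)), size), key=lambda j: fit[j])
    return pop[i]

def crossover(pop, fit, is_feasible, ps=0.1):
    """One generation of the crossover of Algorithm 2 (sketch)."""
    K = len(pop)
    order = sorted(range(K), key=lambda i: fit[i])      # best first
    pop = [list(pop[i]) for i in order]
    fit = [fit[i] for i in order]
    children = []
    for _ in range(int((1.0 - ps) * K / 2)):
        p1, p2 = tournament(pop, fit), tournament(pop, fit)
        c1, c2 = [], []
        for x, y in zip(p1, p2):
            a = random.uniform(-0.5, 1.5)               # per-component weight
            c1.append(a * x + (1.0 - a) * y)
            c2.append(a * y + (1.0 - a) * x)
        children += [c1, c2]
    # children replace the worst chromosomes, but only when child and
    # target share the same feasibility status (step 5b)
    for i, c in enumerate(children):
        target = K - 1 - i
        if is_feasible(c) == is_feasible(pop[target]):
            pop[target] = c
    return pop
```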

Algorithm 3. The proposed mutation procedure


1. For $i = 2 \ldots K$ do (the best chromosome $x_1$ is left intact):
   (a) If $x_i$ is feasible, then perturbFeasible($x_i$)
   (b) else perturb($x_i$)
   (c) End if
2. End for

Algorithm 4. The procedure perturb(x)


1. For $j = 1, \ldots, n$ do
   (a) Let $r$ be a random number in $(0,1)$.
   (b) If $r \le p_m$, then set $x_j = a_j + \xi\,(b_j - a_j)$, where $\xi$ is a random number in $(0,1)$.
   (c) End if
2. End for

Algorithm 5. The procedure perturbFeasible(x)


1. For $j = 1 \ldots n$ do
   (a) Let $r$ be a random number in $(0,1)$.
   (b) If $r \le p_m$, then
       i. Set tries = 0.
       ii. Set $x_{\mathrm{old}} = x_j$.
       iii. Set $x_j = a_j + \xi\,(b_j - a_j)$, where $\xi$ is a random number in $(0,1)$.
       iv. If $x$ is not feasible, then set $x_j = x_{\mathrm{old}}$.
       v. Set tries = tries + 1.
       vi. If tries $\le$ MAXTRIES, go to step (b)ii.
   (c) End if
2. End for
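The two perturbation procedures and the mutation loop of Algorithm 3 can be sketched in Python as follows; keeping the first feasibility-preserving perturbation is one reasonable reading of step (b)vi, and the feasibility predicate is again assumed to be supplied by the caller.

```python
import random

def perturb(x, a, b, pm=0.05):
    """Algorithm 4: mutate an infeasible chromosome by resetting each gene,
    with probability pm, to a uniform random point in [a_j, b_j]."""
    for j in range(len(x)):
        if random.random() <= pm:
            x[j] = a[j] + random.random() * (b[j] - a[j])

def perturb_feasible(x, a, b, is_feasible, pm=0.05, max_tries=10):
    """Algorithm 5: mutate a feasible chromosome, undoing and retrying any
    gene change (up to max_tries times) that would break feasibility."""
    for j in range(len(x)):
        if random.random() <= pm:
            for _ in range(max_tries):
                x_old = x[j]
                x[j] = a[j] + random.random() * (b[j] - a[j])
                if is_feasible(x):
                    break                # keep the feasible perturbation
                x[j] = x_old             # undo and retry

def mutate(pop, a, b, is_feasible, pm=0.05):
    """Algorithm 3: the best chromosome pop[0] is left untouched."""
    for x in pop[1:]:
        if is_feasible(x):
            perturb_feasible(x, a, b, is_feasible, pm)
        else:
            perturb(x, a, b, pm)
```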

2.3. Application of local search

In every generation, a local search procedure is applied to some randomly selected chromosomes. The purpose of this application is to improve the fitness of the chromosomes and to speed up the convergence of the algorithm to the global minimum. The steps of this procedure are the following:

1. For $i = 1, \ldots, K$ do
   (a) Let $r$ be a random number in $(0,1)$.
   (b) If $r \le p_l$, then apply a local search procedure to the chromosome $x_i$.
   (c) End if
2. End for

The local search procedure used is the COBYLA algorithm of Powell [35].
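As an illustration (not the author's implementation), this step can be reproduced with the COBYLA routine shipped in SciPy. The sketch below refines a trial point for the LIN3 problem of Section 3.1, passing the bound constraints as extra inequalities since COBYLA handles only inequality constraints.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  # LIN3 objective from Section 3.1
    return 0.01 * x[0] ** 2 + x[1] ** 2

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 25.0},            # g1(x) >= 0
    {"type": "ineq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 25.0},  # g2(x) >= 0
    {"type": "ineq", "fun": lambda x: x[0] - 2.0},                    # 2 <= x1 <= 50
    {"type": "ineq", "fun": lambda x: 50.0 - x[0]},
    {"type": "ineq", "fun": lambda x: x[1]},                          # 0 <= x2 <= 50
    {"type": "ineq", "fun": lambda x: 50.0 - x[1]},
]

x0 = np.array([10.0, 10.0])        # a chromosome picked for local search
res = minimize(f, x0, method="COBYLA", constraints=constraints)
print(res.x, res.fun)              # expected to approach f_min = 5.0
```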

2.4. Termination criteria

In this paper a stopping rule based on asymptotic considerations is used. The algorithm records, at every generation, the variance of the best discovered value. If there is no improvement for a number of generations, we can assume that the algorithm has managed to discover the global minimum and hence should terminate. This stopping rule was proposed recently by Tsoulos [36] and reads as follows:

1. At every generation iter, the variance $\sigma(\mathrm{iter})$ of the best discovered value is calculated.
2. The genetic algorithm terminates when

   $\sigma(\mathrm{iter}) \le \frac{\sigma(\mathrm{last})}{2} \quad \text{OR} \quad \mathrm{iter} > \mathrm{ITERMAX}, \qquad(7)$

where last is the generation at which the current best value was discovered for the first time.
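The following Python sketch is one way to realize this rule; the paper does not spell out the variance estimator, so taking the population variance of the per-generation record of best values is an assumption here.

```python
import statistics

class StoppingRule:
    """Eq. (7): stop when the variance of the best value has halved since the
    generation where the current best was first discovered (sketch)."""

    def __init__(self, itermax=200):
        self.history = []            # best value recorded at each generation
        self.best = float("inf")
        self.sigma_last = None       # sigma(last) of Eq. (7)
        self.itermax = itermax

    def should_stop(self, best_value):
        self.history.append(best_value)
        sigma = statistics.pvariance(self.history)
        if best_value < self.best:   # new best found: remember sigma at `last`
            self.best = best_value
            self.sigma_last = sigma
        iter_count = len(self.history)
        return ((self.sigma_last is not None and self.sigma_last > 0.0
                 and sigma <= self.sigma_last / 2.0)
                or iter_count > self.itermax)
```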

3. Experiments

The proposed method was tested on a series of well-known test problems from the relevant literature. In the following, these test problems are described, followed by the experimental results of the proposed method.

3.1. Test functions

Levy
This problem is described in [37] and it is given by

$\min_x f(x) = -x_1 - x_2,$

with $x \in [0,1]^2$, subject to

$g_1(x) = \left[(x_1-1)^2 + (x_2-1)^2\right]\left(\frac{1}{2a^2} + \frac{1}{2b^2}\right) + (x_1-1)(x_2-1)\left(\frac{1}{a^2} - \frac{1}{b^2}\right) - 1 \ge 0,$

with $a = 2$, $b = 0.25$. The value of the global minimum is $f_{\min} = -1.8729$.

Salkin
This problem is described in [38] and it is given by

$\max_x f(x) = 3x_1 + x_2 + 2x_3 + x_4 - x_5,$

with $1 \le x_1 \le 4$, $80 \le x_2 \le 88$, $30 \le x_3 \le 35$, $145 \le x_4 \le 150$, $0 \le x_5 \le 2$, subject to

$g_1(x) = 25x_1 - 40x_2 + 16x_3 + 21x_4 + x_5 \le 300,$
$g_2(x) = -x_1 + 20x_2 - 50x_3 + x_4 - x_5 \le 200,$
$g_3(x) = 60x_1 + x_2 - x_3 + 2x_4 + x_5 \le 600,$
$g_4(x) = -7x_1 + 4x_2 + 15x_3 - x_4 + 65x_5 \le 700.$

The global maximum is $f_{\max} = 320$.
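To make the encoding concrete, the following sketch (names illustrative) writes the Salkin problem as plain Python callables in the $g(x) \le 0$ form of Eq. (1), handling the maximization by minimizing $-f(x)$.

```python
# Sketch: the Salkin problem as callables in the g(x) <= 0 form of Eq. (1);
# maximization is handled by minimizing -f(x).
BOUNDS = [(1, 4), (80, 88), (30, 35), (145, 150), (0, 2)]

def f(x):   # minimize -f to maximize f
    return -(3 * x[0] + x[1] + 2 * x[2] + x[3] - x[4])

INEQ = [    # each entry must be <= 0 at a feasible point
    lambda x: 25 * x[0] - 40 * x[1] + 16 * x[2] + 21 * x[3] + x[4] - 300,
    lambda x: -x[0] + 20 * x[1] - 50 * x[2] + x[3] - x[4] - 200,
    lambda x: 60 * x[0] + x[1] - x[2] + 2 * x[3] + x[4] - 600,
    lambda x: -7 * x[0] + 4 * x[1] + 15 * x[2] - x[3] + 65 * x[4] - 700,
]

x_star = [4, 88, 35, 150, 0]     # the known maximizer, f_max = 320
print(-f(x_star), all(g(x_star) <= 0 for g in INEQ))   # prints: 320 True
```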

Himmelblau
This problem is described in [39] and it is given by

$\min_x f(x) = 4.3x_1 + 31.8x_2 + 63.3x_3 + 15.8x_4 + 68.5x_5 + 4.7x_6,$

with $0 \le x_1 \le 0.31$, $0 \le x_2 \le 0.046$, $0 \le x_3 \le 0.068$, $0 \le x_4 \le 0.042$, $0 \le x_5 \le 0.028$, $0 \le x_6 \le 0.0134$, subject to

$g_1(x) = 17.1x_1 + 38.2x_2 + 204.2x_3 + 212.3x_4 + 623.4x_5 + 1495.5x_6 - 169x_1x_3 - 3580x_3x_5 - 3810x_4x_5 - 18500x_4x_6 - 24300x_5x_6 - 4.97 \ge 0,$
$g_2(x) = -1.88 + 17.9x_1 + 36.8x_2 + 113.9x_3 + 169.7x_4 + 337.8x_5 + 1385.2x_6 - 139x_1x_3 - 2450x_4x_5 - 600x_4x_6 - 17200x_5x_6 \ge 0,$
$g_3(x) = 429.08 - 273x_2 - 70x_4 - 819x_5 + 26000x_4x_5 \ge 0,$
$g_4(x) = 159.9x_1 - 311x_2 + 587x_4 + 391x_5 + 2198x_6 - 14000x_1x_6 + 78.02 \ge 0.$

The value of the global minimum is $f_{\min} = 0.0156$.

Hess
This problem is described in [40] and it is given by

$\max_x f(x) = 25(x_1-2)^2 + (x_2-2)^2 + (x_3-1)^2 + (x_4-4)^2 + (x_5-1)^2 + (x_6-4)^2,$

with $0 \le x_1 \le 5$, $0 \le x_2 \le 1$, $1 \le x_3 \le 5$, $0 \le x_4 \le 6$, $0 \le x_5 \le 5$, $0 \le x_6 \le 10$, subject to

$g_1(x) = x_1 + x_2 - 2 \ge 0,$
$g_2(x) = -x_1 + x_2 + 6 \ge 0,$
$g_3(x) = x_1 - x_2 + 2 \ge 0,$
$g_4(x) = -x_1 + 3x_2 + 2 \ge 0,$
$g_5(x) = (x_3-3)^2 + x_4 - 4 \ge 0,$
$g_6(x) = (x_5-3)^2 + x_6 - 4 \ge 0.$

The value of the global maximum is $f_{\max} = 310$.

SHITTKOWSKI
This problem is described in [41] and it is given by

$\min_x f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2,$

with $x \in [0,6]^2$, subject to

$g_1(x) = 4.84 - (x_1 - 0.05)^2 - (x_2 - 2.5)^2 \ge 0,$
$g_2(x) = x_1^2 + (x_2 - 2.5)^2 - 4.84 \ge 0.$

The value of the global minimum is $f_{\min} = 13.59085$.

CHOOTINAN1
This problem is described in [42] and it is given by

$\min_x f(x) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i,$

with $0 \le x_i \le 1$ for $i = 1, \ldots, 9, 13$ and $0 \le x_i \le 100$ for $i = 10, 11, 12$, with the following constraints:

$g_1(x) = 10 - (2x_1 + 2x_2 + x_{10} + x_{11}) \ge 0,$
$g_2(x) = 10 - (2x_1 + 2x_3 + x_{10} + x_{12}) \ge 0,$
$g_3(x) = 10 - (2x_2 + 2x_3 + x_{11} + x_{12}) \ge 0,$
$g_4(x) = 8x_1 - x_{10} \ge 0,$
$g_5(x) = 8x_2 - x_{11} \ge 0,$
$g_6(x) = 8x_3 - x_{12} \ge 0,$
$g_7(x) = 2x_4 + x_5 - x_{10} \ge 0,$
$g_8(x) = 2x_6 + x_7 - x_{11} \ge 0,$
$g_9(x) = 2x_8 + x_9 - x_{12} \ge 0.$

The value of the global minimum is $f_{\min} = -15.0$.

CHOOTINAN2
The source of this problem is [42] and it is given by

$\max_x f(x) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3(x_1 + x_2)},$

with $x \in [0,10]^2$. The constraints are given by

$g_1(x) = -x_1^2 + x_2 - 1 \ge 0,$
$g_2(x) = -1 + x_1 - (x_2 - 4)^2 \ge 0.$

The value of the global maximum is $f_{\max} = 0.095$.

CHOOTINAN3
This problem is described in [42] and it is given by

$\min_x f(x) = x_1^2 + (x_2 - 1)^2,$

with $x \in [-1,1]^2$. The constraint is given by

$h_1(x) = x_2 - x_1^2 = 0.$

The value of the global minimum is $f_{\min} = 0.75$.

CHOOTINAN4
The source of this problem is [42] and it is given by

$\max_x f(x) = (\sqrt{n})^n \prod_{i=1}^{n} x_i,$

with $x \in [0,1]^n$. The constraint is given by

$h_1(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0.$

In this paper we used the values $n = 2, 10, 20, 30, 40$, with the names CHOOTINAN4(2), CHOOTINAN4(10), CHOOTINAN4(20), CHOOTINAN4(30) and CHOOTINAN4(40).

CHOOTINAN5
This problem is described in [42] and it is given by

$\max_x f(x) = \left|\frac{\sum_{i=1}^{n}\cos^4(x_i) - 2\prod_{i=1}^{n}\cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i\,x_i^2}}\right|,$

with $x \in [0,10]^n$, subject to

$g_1(x) = \prod_{i=1}^{n} x_i - 0.75 \ge 0,$
$g_2(x) = 7.5n - \sum_{i=1}^{n} x_i \ge 0.$

In this paper we used the values $n = 10, 20, 30, 40$, with the names CHOOTINAN5(10), CHOOTINAN5(20), CHOOTINAN5(30) and CHOOTINAN5(40).

LIN1
This problem is described in [43] and it is given by

$\min_x f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2,$

with $-0.5 \le x_1 \le 0.5$, $x_2 \le 10$. The constraints are given by

$g_1(x) = x_1 - x_2^2 \ge 0,$
$g_2(x) = -x_1^2 + x_2 \ge 0.$

The value of the global minimum is $f_{\min} = 0.25000$.

LIN2
This problem is described in [43] and it is given by

$\min_x f(x) = -x_1 - x_2,$

with $0 \le x_1 \le 3$, $0 \le x_2 \le 4$, subject to

$g_1(x) = 2x_1^4 - 8x_1^3 + 8x_1^2 - x_2 + 2 \ge 0,$
$g_2(x) = 4x_1^4 - 32x_1^3 + 88x_1^2 - 96x_1 - x_2 + 36 \ge 0.$

The value of the global minimum is $f_{\min} = -5.5081$.

G15
This problem is described in [44] and it is given by

$\min_x f(x) = 1000 - x_1^2 - 2x_2^2 - x_3^2 - x_1x_2 - x_1x_3,$

with $x \in [0,10]^3$, subject to the following constraints:

$h_1(x) = x_1^2 + x_2^2 + x_3^2 - 25 = 0,$
$h_2(x) = 8x_1 + 14x_2 + 7x_3 - 56 = 0.$

The value of the global minimum is $f_{\min} = 961.7150$.

LIN3
This problem is described in [43] and it is given by

$\min_x f(x) = 0.01x_1^2 + x_2^2,$

with $2 \le x_1 \le 50$, $0 \le x_2 \le 50$, subject to

$g_1(x) = x_1x_2 - 25 \ge 0,$
$g_2(x) = x_1^2 + x_2^2 - 25 \ge 0.$

The value of the global minimum is $f_{\min} = 5.0000$.

BEAM
The welded beam design problem is described in [45] and it is given by

$\min_x f(x) = 1.10471x_1^2x_2 + 0.04811x_3x_4(14 + x_2),$

with $0.1 \le x_1, x_4 \le 2$ and $0.1 \le x_2, x_3 \le 10$, subject to

$g_1(x) = \tau(x) - 13600 \le 0,$
$g_2(x) = \sigma(x) - 30000 \le 0,$
$g_3(x) = x_1 - x_4 \le 0,$
$g_4(x) = 0.10471x_1^2 + 0.04811x_3x_4(14 + x_2) - 5 \le 0,$
$g_5(x) = 0.125 - x_1 \le 0,$
$g_6(x) = \delta(x) - 0.25 \le 0,$
$g_7(x) = P - P_c(x) \le 0,$

where

$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2},$
$\tau' = \frac{P}{\sqrt{2}\,x_1x_2}, \qquad \tau'' = \frac{QR}{J},$
$Q = P\left(L + \frac{x_2}{2}\right),$
$R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},$
$J = 2\left\{\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\},$
$\sigma(x) = \frac{6PL}{x_4x_3^2}, \qquad \delta(x) = \frac{4PL^3}{Ex_3^3x_4},$
$P_c(x) = \frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),$
$P = 6000, \quad L = 14, \quad E = 30\times10^6, \quad G = 12\times10^6.$

The value of the global minimum is $f_{\min} = 1.724852$.

3.2. Experimental results

The proposed method was tested on the series of test problems described previously, using the parameters given in Table 1. The method was compared against C-SOMGA, described in [26], and the method DONLP2, available on the Internet, which is based on [46,47]. In Table 2 the results from the application of the proposed method to the test problems are listed. In Table 3 the reported results from the application of the method C-SOMGA to the test problems are given, and in Table 4 the experimental results from the application of the method DONLP2 are reported. The proposed method as well as DONLP2 were applied 100 times to every test problem, using a different seed for the random number generator each time. The columns in the tables have the following meaning:

Table 1
Parameters of the algorithm.

Parameter   Value
K           200
ITERMAX     200
p_s         0.1
p_m         0.05
p_l         0.01
k           10^3
MAXTRIES    10

Table 2
Experimental results with the proposed method.

Problem          FEVALS   BEST        MEAN        STD
LEVY             4572     -1.8730     -1.8730     4.82 × 10^-6
SALKIN           7244     320.0000    320.0000    0.0
HIMMELBLAU       23539    0.01561     0.01563     5.71 × 10^-5
HESS             16139    310.0000    309.9999    4.5 × 10^-4
SHITTKOWSKI      17483    13.5907     13.5937     6.61 × 10^-3
CHOOTINAN1       21833    -15.0000    -14.9999    7.59 × 10^-5
CHOOTINAN2       3147     0.09582     0.09582     9.81 × 10^-10
CHOOTINAN3       4651     0.7500      0.75003     1.11 × 10^-6
CHOOTINAN4(2)    4940     1.0000      1.0001      6.71 × 10^-5
CHOOTINAN4(10)   15240    1.0125      1.0124      3.47 × 10^-5
CHOOTINAN4(20)   31118    1.0319      1.0318      1.8 × 10^-4
CHOOTINAN4(30)   45238    1.0482      1.0477      3.66 × 10^-4
CHOOTINAN4(40)   91778    1.0645      1.0636      6.9 × 10^-4
CHOOTINAN5(10)   183025   0.7473      0.7118      3.30 × 10^-2
CHOOTINAN5(20)   483870   0.79466     0.7555      2.71 × 10^-2
CHOOTINAN5(30)   658320   0.8078      0.7845      1.73 × 10^-2
CHOOTINAN5(40)   753240   0.7692      0.7219      3.8 × 10^-2
LIN1             5633     0.2500      0.2500      6.78 × 10^-7
LIN2             2431     -5.5080     -5.5080     2.17 × 10^-6
G15              3593     961.71515   961.71516   1.88 × 10^-5
LIN3             15516    5.0000      5.0010      2.78 × 10^-3
BEAM             5608     1.725934    1.725937    3.30 × 10^-5

Table 3
Experimental results with the method C-SOMGA.

PROBLEM          MEAN        STD
LEVY             -1.86839    0.00398
SALKIN           319.99270   0.00229
HIMMELBLAU       0.01531     0.00026
HESS             309.15      3.08913
SHITTKOWSKI      13.59610    0.00315
CHOOTINAN1       -14.99225   0.00304
CHOOTINAN2       0.08816     0.01657
CHOOTINAN3       0.82519     0.09761
CHOOTINAN4(2)    0.88794     0.21215
CHOOTINAN5(20)   0.77542     0.02739

Table 4
Experimental results with the DONLP2 method.

PROBLEM          FEVALS   MEAN       STD
LEVY             84       -1.8730    1.2 × 10^-7
SALKIN           281      319.7423   0.7758
HIMMELBLAU       187      1.3880     2.1793
HESS             343      262.6723   55.4475
SHITTKOWSKI      437      18.6812    35.6322
CHOOTINAN1       184      -11.3781   0.4190
CHOOTINAN2       165      0.03687    0.0243
CHOOTINAN3       122      0.75       7.2 × 10^-8
CHOOTINAN4(2)    97       1.00       9.6 × 10^-8
CHOOTINAN4(10)   2008     0.99       0.0995
CHOOTINAN4(20)   5388     0.96       0.1960
CHOOTINAN4(30)   11577    0.97       0.1706
CHOOTINAN4(40)   18484    0.86       0.3470
CHOOTINAN5(10)   3743     0.2249     0.0687
CHOOTINAN5(20)   22863    0.2297     0.0551
CHOOTINAN5(30)   63267    0.2254     0.0396
CHOOTINAN5(40)   43400    0.2209     0.0370
LIN1             138      0.25       0
LIN2             78       -4.5414    0.9183
G15              139      875.1608   275.2253
LIN3             248      5.00       1.7 × 10^-6
BEAM             28851    2.1177     0.7417

1. The column PROBLEM denotes the name of the test problem as given in the previous section.
2. The column BEST denotes the best value located by the proposed algorithm.
3. The column MEAN denotes the mean of the optimal objective function values, located by the proposed algorithm.
4. The column STD denotes the standard deviation of the optimal objective function values.
5. The column FEVALS denotes the average number of function evaluations spent by the proposed algorithm.

As can be seen, in most cases where comparative results are available, the proposed method outperforms C-SOMGA in mean and standard deviation values. Also, the proposed method has managed to discover the global minimum for almost every objective function. On the other hand, the method DONLP2 outperforms the proposed method, in terms of function evaluations, on the five problems LEVY, CHOOTINAN3, CHOOTINAN4(2), LIN1 and LIN3, but the proposed method is better on the remaining problems in mean and standard deviation values.

Table 5
Experiments with the parameter k.

PROBLEM       k = 10   k = 10^2   k = 10^3   k = 10^4
LEVY          4618     4667       4572       4714
SALKIN        7185     7198       7244       7402
HIMMELBLAU    25048    25054      23539      25285
HESS          17201    16983      16139      17113
SHITTKOWSKI   19107    19465      17483      17279
LIN1          5986     5815       5633       8204
LIN2          2396     2314       2431       2350
G15           3860     3755       3593       3860
LIN3          16233    16107      15516      15845
BEAM          5856     5782       5608       5963


3.3. Experiments with the parameter k

The choice of the parameter k is crucial for the algorithm: small values can lead the search outside the feasible region and produce infeasible solutions, while larger penalties can make the search for optimum solutions difficult or even impossible. Table 5 lists experiments with various values of the parameter k for some of the test problems. The numbers in the cells represent the average number of function evaluations over 100 runs, using a different seed for the random generator each time. It is obvious from the experimental results that the choice of the parameter k has only a small effect on the number of function evaluations required to discover the global minimum.

4. Conclusions

A novel genetic algorithm for locating the global minimum of constrained optimization problems was presented in this paper. The suggested method uses (a) modified genetic operators that preserve the feasibility of the chromosomes, (b) application of a local search procedure to randomly selected chromosomes and (c) a stopping rule based on asymptotic considerations. The method was tested on a series of well-known test functions and proved successful on the majority of test problems. Moreover, the proposed method can be implemented very easily, and the utilization of the proposed stopping rule significantly improves the speed of the method. Future research will include the use of more efficient initialization techniques as well as the use of other local search procedures.

References

[1] O.A. Sauer, D.M. Shepard, T.R. Mackie, Application of constrained optimization to radiotherapy planning, Medical Physics 26 (1999) 2359–2366.
[2] E.G. Birgin, I. Chambouleyron, J.M. Martínez, Estimation of the optical constants and the thickness of thin films using unconstrained optimization,
Journal of Computational Physics 151 (1999) 862–880.
[3] C. Yang, J.C. Meza, L.W. Wang, A constrained optimization algorithm for total energy minimization in electronic structure calculations, Journal of
Computational Physics 217 (2006) 709–721.
[4] K.D. Lee, S. Eyi, Transonic airfoil design by constrained optimization, Journal of Aircraft 30 (1993) 805–806.
[5] F.P. Seelos, R.E. Arvidson, Bounded variable least squares – application of a constrained optimization algorithm to the analysis of TES Emissivity
Spectra, in: 34th Annual Lunar and Planetary Science Conference, March 17–21, 2003, League City, Texas (abstract no. 1817).
[6] M.J. Field, Constrained optimization of ab initio and semiempirical Hartree–Fock wave functions using direct minimization or simulated annealing,
Journal of Physical Chemistry 95 (1991) 5104–5108.
[7] G.A. Williams, J.M. Dugan, R.B. Altman, Constrained global optimization for estimating molecular structure from atomic distances, Journal of
Computational Biology 8 (2001) 523–547.
[8] J. Baker, D. Bergeron, Constrained optimization in Cartesian coordinates, Journal of Computational Chemistry 14 (1993) 1339–1346.
[9] J.E.A. Bertram, A. Ruina, Multiple walking speed–frequency relations are predicted by constrained optimization, Journal of Theoretical Biology 209
(2001) 445–453.
[10] J.E.A. Bertram, Constrained optimization in human walking: cost minimization and gait plasticity, Journal of Experimental Biology 208 (2005) 979–
991.
[11] I. Iakovidis, R.M. Gulrajani, Regularization of the inverse epicardial solution using linearly constrained optimization, in: Engineering in Medicine and
Biology Society, Proceedings of the Annual International Conference of the IEEE Publication Date: 31 October–3 November, 1991. vol. 13, pp. 698–699.
[12] D.P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.
[13] D.P. Bertsekas, A.E. Ozdaglar, Pseudonormality and a Lagrange multiplier theory for constrained optimization, Journal of Optimization Theory and
Applications 114 (2004) 287–343.
[14] P.E. Gill, W. Murray, The computation of Lagrange-multiplier estimates for constrained minimization, Mathematical Programming 17 (1979) 32–60.
[15] M.J.D. Powell, Y. Yuan, A trust region algorithm for equality constrained optimization, Mathematical Programming 49 (2005) 189–211.
[16] R.H. Byrd, R.B. Schnabel, G.A. Shultz, A trust region algorithm for nonlinearly constrained optimization, SIAM Journal on Numerical Analysis 24 (1987)
1152–1170.
[17] D.M. Gay, A trust-region approach to linearly constrained optimization, Lecture Notes in Mathematics, vol. 1066, 1984, pp. 72–105.
[18] M.Cs. Markót, J. Fernández, L.G. Casado, T. Csendes, New interval methods for constrained global optimization, Mathematical Programming 106 (2005)
287–318.
[19] K. Ichida, Constrained optimization using interval analysis, Computers and Industrial Engineering 31 (1996) 933–937.
[20] W.E. Lillo, S. Hui, S.H. Zak, Neural networks for constrained optimization problems, International Journal of Circuit Theory and Applications 21 (1993)
385–399.
[21] W.E. Lillo, M.H. Loh, S. Hui, S.H. Zak, On solving constrained optimization problems with neural networks: a penalty method approach, IEEE
Transactions on Neural Networks 4 (1993) 931–940.
[22] S. Zhang, X. Zhu, L.H. Zou, Second-order neural nets for constrained optimization, IEEE Transactions on Neural Networks 3 (1992) 1021–1024.
[23] S. Lucidi, M. Sciandrone, P. Tseng, Objective-derivative-free methods for constrained optimization, Mathematical Programming 92 (1999) 37–59.
[24] G. Liuzzi, S. Lucidi, A derivative-free algorithm for nonlinear programming, TR 17/05, Department of Computer and Systems Science, Antonio Ruberti,
University of Rome, La Sapienza, 2005.
[25] M.B. Subrahmanyam, An extension of the simplex method to constrained nonlinear optimization, Journal of Optimization Theory and Applications 62
(1989) 311–319.
[26] K. Deep, Dipti, A self-organizing migrating genetic algorithm for constrained optimization, Applied Mathematics and Computation 198 (2008) 237–
250.
[27] A. Homaifar, Constrained optimization via genetic algorithms, Simulation 62 (1994) 242–253.
[28] S. Venkatraman, G.G. Yen, A generic framework for constrained optimization using genetic algorithms, IEEE Transactions on Evolutionary Computation
9 (2005) 424–435.
[29] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1996) 1–32.
[30] V.S. Summanwar, V.K. Jayaraman, B.D. Kulkarni, H.S. Kusumakar, K. Gupta, J. Rajesh, Solution of constrained optimization problems by multi-objective
genetic algorithm, Computers and Chemical Engineering 26 (2002) 1481–1492.

[31] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation 186 (2007) 1407–1422.
[32] X.H. Hu, R.C. Eberhart, Solving constrained nonlinear optimization problems with particle swarm optimization, in: N. Callaos (Ed.), Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, 2002, pp. 203–206.
[33] H. Sarimveis, A. Nikolakopoulos, A line up evolutionary algorithm for solving nonlinear constrained optimization problems, Computers and Operations
Research 32 (2005) 1499–1514.
[34] R.L. Becerra, C.A.C. Coello, Cultured differential evolution for constrained optimization, Computer Methods in Applied Mechanics and Engineering 195
(2006) 4303–4322.
[35] M.J.D. Powell, A direct search optimization method that models the objective and constraint functions by linear interpolation, Technical Report DAMTP/NA5, Cambridge, England.
[36] I.G. Tsoulos, Modifications of real code genetic algorithm for global optimization, Applied Mathematics and Computation 203 (2008) 598–607.
[37] A.V. Levy, A. Montalvo, The tunneling algorithm for global optimization of functions, SIAM Journal on Scientific and Statistical Computing 6 (1985) 15–29.
[38] H.M. Salkin, Integer Programming, Addison-Wesley, Amsterdam, 1975.
[39] D.M. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, New York, 1972.
[40] R. Hess, A heuristic search for estimating a global solution of non convex programming problems, Operations Research 21 (1973) 1267–1280.
[41] K. Schittkowski, More examples for mathematical programming codes, Lecture Notes in Economics and Mathematical Systems, vol. 282, 1987.
[42] P. Chootinan, A. Chen, Constraint handling in genetic algorithms using a gradient-based repair method, Computers and Operations Research 33 (2006) 2263–2281.
[43] Y.C. Lin, K.S. Hwang, F.S. Wang, Hybrid differential evolution with multiplier updating method for nonlinear constrained optimization problems, in:
Proceedings of the 2002 IEEE World Congress on Computation Intelligence, IEEE Service Center, Honolulu Hawaii, 2002, pp. 872–877.
[44] J.J. Liang, T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, C.A.C. Coello, K. Deb, Problem definitions and evaluation criteria for the CEC2006
special session on constrained real-parameter optimization. <http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC-06/CEC06.htm>.
[45] C.A.C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers and Industrial Engineering 41 (2000) 113–
127.
[46] P. Spelluci, An SQP method for general nonlinear programs using only equality constrained subproblems, Mathematical Programming 82 (1998) 413–
448.
[47] P. Spelluci, A new technique for inconsistent problems in the SQP method, Mathematical Methods of Operational Research 47 (1998) 355–400.
